diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziwlu" "b/data_all_eng_slimpj/shuffled/split2/finalzziwlu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziwlu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nOver the last few decades ~\\cite{Cormack1964Representation, Hounsfield1974Computerized}, X-ray Computed Tomography (CT) has demonstrated its prominent practical value and wide range of applications including clinical diagnosis, safety inspection and industrial detection ~\\cite{Wang2008An}. Especially in the past year, due to the global spread of the Corona Virus Disease 2019 (COVID-19), the term CT has become well-known to the public as an essential auxiliary technology. However, the radiation dose brought by CT has a nonnegligible side effect on the human body. Since it has a latent risk of inducing cancers, radiation dose reduction is becoming more and more crucial under the principle of ALARA (as low as reasonably achievable) ~\\cite{Krishnamoorthi2011Effectiveness, Slovis2002CT, Mccollough2009Strategies, Mccollough2006CT}.\n\nGenerally speaking, there are two approaches to reduce radiation dose. The approach of tube current (or voltage) reduction ~\\cite{Poletti2007Low-DoseVersus, Tack2003DoseReduction} lowers the x-ray exposure in each view but suffers from the increased noise in projections. Although the approach of projection number reduction ~\\cite{Bian2010Evaluation, Bian2012Optimization} (also known as sparse-view CT) can avoid the former problem and realize the additional benefit of accelerated scan and calculation, it leads to severe image quality degradation of increased streaking artifacts brought by its missing projections. In this paper, we focus on effectively repairing and reconstructing sparse-view CT so as to acquire high-quality CT images.\n\nSparse-view CT reconstruction has always been a classic inverse problem which has attracted wide attention \\cite{Jin2016Deep}. In the past few decades, iterative reconstruction methods have become the dominant approach to solve inverse problems \\cite{Andersen1984Simultaneous, Wu2017Iterative, Hu2017An, Zhang2019JSR}. With the advent of compressed sensing \\cite{Candes2006Robust} and its related regularizers, the quality of reconstructed images has been improved to a certain extent. One of the most typical regularizers is the total variation (TV) method, algorithms based on which include TV-POCS \\cite{Sidky2008Image}, TGV method \\cite{Niu2014Sparse}, SART \\cite{Andersen1984Simultaneous} and SART-TV \\cite{Sidky2009Accurate} etc. In addition, dictionary learning is also commonly used as a regularizer. For example, \\cite{Xu2012Low} constructs a global dictionary and an iterative adaptive dictionary to solve the problem of low-dose CT reconstruction.\n\nIn recent years, with the improvement of computing power, there comes a rapid growth in deep learning \\cite{LeCun2015DeepLearning}. Subsequently, neural networks have been widely applied in image analysis tasks, such as image classification \\cite{Russakovsky2015ImageNet}, image segmentation \\cite{Long2015Fully, Ronneberger2015U, Soltaninejad2020Three}, especially inverse problems in image reconstruction, such as artifacts reduction \\cite{Dong2015Compression, Guo2016Building}, denoising \\cite{Xie2012Image} and inpainting \\cite{Kulkarni2016ReconNet}. 
Since GAN (Generative Adversarial Networks) was designed elaborately by Goodfellow in 2014 \\cite{goodfellow2014generative}, it has been adopted in many image processing tasks due to its prominent performance in realistically predicting image details. Therefore, GANs are also naturally applied to improving the quality of low-dose CT images \\cite{Bai2018Limited, Xie2019Artifact, Zhao2018Sparse}. In addition, Ye et al. explored the relationship between deep learning and classical signal processing methods in \\cite{Ye2017Deep}, explained the reason why deep learning can be employed in imaging inverse problems, and provided a theoretical basis for the application of deep learning in low-dose CT reconstruction.\n\nSome researchers adopt deep learning-based architectures to complement and restore the limit-view Radon data \\cite{Dong2019A, fu2020a, Bai2018Limited, Anirudh2018Lose, Dai2019Limited, ghani2018deep}. Dong et al. \\cite{Dong2019A} used U-Net \\cite{Ronneberger2015U} to predict the missing Radon data, then reconstruct it to the image through FBP \\cite{katsevich2002theoretically}. Jian Fu et al. \\cite{fu2020a} built a network that involves the tight coupling of the deep learning neural network and DPC-CT (Differential phase-contrast CT) reconstruction algorithm in the domain of DPC projection sinograms. The estimated result is a complete phase-contrast projection sinogram. Rushil Anirudh et al. established CTNet \\cite{Anirudh2018Lose}, a system of 1D and 2D convolutional neural networks, which operates on the limited-view sinogram to predict the full-view sinogram, and then fed it to the standard analytical and iterative reconstruction algorithms to obtain the final result.\n\nOther researchers carried out post-processing on reconstructed images with deep learning models, so as to remove the artifacts and noises for upgrading the quality of these images\\cite{Han2018Framing, Zhang2018A, zhang2016image, Xie2019Artifact, Wang2020Deep, kuanar2019low, guan2020fully}. In 2016, a deep convolutional neural network \\cite{zhang2016image} was proposed to learn an end-to-end mapping between the FBP and artifact-free images. In 2018, Yoseob Han and Jong Chul Ye designed a dual frame and tight frame U-Net \\cite{Han2018Framing} which satisfies the frame condition and performs better for recovery of high frequency edges in sparse-view CT. In 2019, Xie et al. \\cite{Xie2019Artifact} built an end-to-end cGAN model with joint function used for removing artifacts from limited-angle CT reconstruction images. In 2020, Wang et al. \\cite{Wang2020Deep} developed a limited-angle TCT image reconstruction algorithm based on U-Net, which could suppress the artifacts and preserve the structures. Experiments have shown that U-Net-like structures are efficacious for image artifacts removal and texture restoration \\cite{Ye2017Deep,Dong2019A,Han2018Framing,Wang2020Deep,guan2020fully} .\n\nSince neural networks are capable of predicting unknown data in the Radon and image domains, a natural idea is to combine these two domains \\cite{Lee2018High,liang2018comparison,Zhao2018Sparse,Zhu2020Low,hammernik2017a,Zhang2020Artifact} to acquire better restoration results. Specifically, it first complements the Radon data, and then remove the residual artifacts and noises on images converted from the full-view Radon data. In 2018, Zhao et al. proposed SVGAN \\cite{Zhao2018Sparse}, an artifacts reduction method for low-dose and sparse-view CT via a single model trained by GAN. In 2019, Liang et al. 
\\cite{liang2018comparison} proposed a comprehensive network combining projection and image domains. The projection estimation network is based on Res-CNN structure, and the image domain network takes the advantage of U-Net. In 2020, Zhu et al. designed ADAPTIVE-NET \\cite{Zhu2020Low} to conduct joint denoising on the acquired sinogram and the reconstructed CT image, while reconstructing CT image in an end-to-end manner. In the past three years, experiments have proved that this sort of two-stage algorithm is quite conducive to image quality improvement.\n\nAll the current mainstream methods mentioned above make us notice that they solely process on each single CT image, while neglecting the solid fact the scanned object is always highly continuous. Consequently, there is abundant spatial information lies in these obtained consecutive CT images, which is largely left to be exploited. This enlightens us to propose a novel cascade model called LS-AAE (Lightweight Spatial Adversarial Autoencoder) that mainly focus on availably utilizing the spatial information between greatly correlated images. It has been proved in our experiments that this sort of structure design manages to efficaciously remove streaking artifacts in sparse-view CT images, and outruns other prevailing methods with its remarkable performance.\n\nIt is the social trend now to make healthcare mobile and portable. In lots of deep learning-based methods, however, scholars improve accuracy at the expense of sacrificing computing resources. Such computational complexity usually exceeds the capabilities of many mobile and embedded applications. This paper adopts the inverted residual with linear bottleneck \\cite{Sandler2018MobileNetV2} in the module design to propose a mobile structure that reduce model parameters to one-eighth of its original without sacrificing its performance.\n\nAlthough enhancing the sparsity of sparse-view CT can bring benefits of accelerated scanning and related calculations, it will cause additional imaging damage. Balancing image quality and X-ray dose level has become a well-known trade-off problem. Thus, in order to explore the limit of sparsity in sparse-view CT reconstruction, we conduct sparse sampling at intervals of 4\u00b0, 8\u00b0 and most importantly, 16\u00b0. Even under such sampling sparsity, our model can still exhibit its remarkable robustness and the state-of-the-art performance.\n\nWe introduce our proposed method exhaustively in Section II, the experimental results and corresponding discussion are described in section III, and conclusion is stated in section IV.\n\n\\section{Methods}\n\\subsection{Preliminaries}\n\\subsubsection{Utilize Spatial Information for Artifact Removal}\n\n\n\nAs is known to all, consecutive CT images usually contain high spatial coherency since the scanned object is usually spatially continuous. On account of that, we can imagine these CT images as adjacent frames in a video which contains much more information than a still image. This high correlation within the sequence of images can improve the performance of artifact removal from two aspects. Firstly, the extension of search regions from two-dimensional image neighborhoods to three-dimensional spatial neighborhoods provide extra information which can be used to denoise the reference image. 
Secondly, using spatial neighbors helps to reduce streaking artifacts as the residual error in each image is correlated.\n\nAlso, we cannot help but notice that the task of artifact removal between consecutive images is similar to video denoising. Therefore, after investigating lots of research work on video denoising \\cite{pascanu2013on, caballero2017real, maggioni2012video, arias2018video, vogels2018denoising, ehret2019model, claus2019videnn, davy2018non, tassano2019dvdnet, chen2016deep}, we find out that current state-of-the-art methods lay lots of emphasis on motion estimation due to the strong redundancy along motion trajectories. To conclude, in order to more effectively remove streaking artifacts from sparse-view CT images, we need to design a structure that can not only look into the three-dimensional spatial neighborhood, but also capture the motion between consecutive images.\n\n\\subsubsection{Enhance Mobility of Neural Networks}\nIn recent years, lots of research has been invested into tuning deep neural networks to achieve an optimal balance between efficiency and performance. Among them, depthwise separable convolutions \\cite{howard2017mobilenets} exhibits its extraordinary capability and has gradually become an essential building block for numerous lightweight neural networks \\cite{howard2017mobilenets, chollet2017xception, zhang2018shufflenet}. It aims to decompose the standard convolutional layer into two separate layers, namely the depthwise convolutional layer and the pointwise convolutional layer. The former layer is designed to perform lightweight filtering through employing a single convolutional filter per input channel, the latter one conducts $1\\times1$ convolution to construct new features by computing linear combinations of input channels.\n\nFor the standard convolutional layer with input tensor size $(c_{in}, h, w)$, kernel size $(c_{out}, c_{in}, k, k)$ and output tensor size $(c_{out}, h, w)$, its computational cost equals to $c_{in} \\cdot h \\cdot w\\cdot (k^2 \\cdot c_{out})$. However, in depthwise separable convolutions, the depthwise convolutional layer has a computational cost of $c_{in} \\cdot h \\cdot w \\cdot k^2$ since it merely operates on a single input channel, and the pointwise convolutional layer has a computational cost of $c_{in} \\cdot h \\cdot w \\cdot c_{out}$. Therefore, we only need a computational cost of $h \\cdot w \\cdot c_{in} \\cdot (k^2 + c_{out}) $ for depthwise separable convolutions, which is almost the one-ninth ($k$ equals to 3 in our case) of the standard convolution. Most importantly, depthwise separable convolutions manage to lower the computational complexity to a large extent without sacrificing its accuracy, which would make it perfect to be inserted into our module design.\n\n\\subsection{Overall Structure}\n\\subsubsection{Structure Overview}\n\n\\begin{figure*}[!htb]\n\\setlength{\\abovecaptionskip}{-5pt}\n\\centerline\n{\\includegraphics[width=\\linewidth]{fig1.eps}}\n\\caption{Structure overview. 
The sparse-view Radon data $\\bold{X}$ is first sent to the neural network $F$ for completion, then the restored full-view Radon data $\\bold{X}^\\prime$ is converted to the image $\\bold{Y}$, which is feed into the neural network $G$ for artifacts removal and we can finally obtain the ideal high-quality image $\\bold{Y}^\\prime$.}\n\\label{fig1}\n\\end{figure*}\n\nWe can learn from the universal approximation theorem \\cite{hornik1989multilayer} that multilayer feedforward networks are capable of approximating various continuous functions. This inspires us to think that neural networks can be used to learn complex mappings that are difficult to solve through mathematical analysis. Thus, in this paper, we utilize a deep learning-based structure that combines the Radon domain and the image domain (Figure \\ref{fig1}) to solve the task of sparse-view CT reconstruction and inpainting.\n\nFirstly, we want to make full use of the prior information in the Radon domain by converting the sparse-view Radon data $\\bold{X}$ to the full-view Radon data $\\bold{X}^{\\prime}$ so as to complement the missing data in some scanning angles. This process can be represented by the mapping: $\\bold{X} \\xrightarrow{f} \\bold{X}^{\\prime}$ according to the universal approximation theorem, where function $f$ can be approximated through our proposed neural network $F$. After we obtain the full-view Radon data $\\bold{X}^{\\prime}$, we transform it to the image $\\bold{Y}$ through FBP. Although the first stage manages to alleviate the severe imaging damage from the original sparse-view CT image, there are still lots of streaking artifacts existing in Y that need to be removed to acquire the high-quality restored result $\\bold{Y}^{\\prime}$. We represent the restoration process into the mapping: $\\bold{Y} \\xrightarrow{g} \\bold{Y}^{\\prime}$, where function $g$ can be approximated through our proposed neural network $G$. Through the above two-stage structure that combines the Radon domain with the image domain, we can finally get the ideal restored results.\n\n\\subsubsection{Stage One: Data Completion in the Radon Domain}\n\\begin{figure}[!tb]\n\\vspace{-7pt}\n\\setlength{\\abovecaptionskip}{-5pt}\n\\centerline{\\includegraphics[scale=0.7]{fig2_3.eps}}\n\\caption{The diagram of our proposed L-AAE, which is composed of a L-AE and a discriminator that help restore the image texture.}\n\\label{fig2}\n\\vspace{-5pt}\n\\end{figure}\n\nWe first adopt linear interpolation to convert the original sparse-view Radon data to full-view Radon data so as to satisfy the structural characteristics of our proposed neural network, which requires the input and output images to have the same resolution. Then we build a lightweight adversarial autoencoder (L-AAE) in Figure \\ref{fig2} to restore the Radon data, the structure of its autoencoder (L-AE) can be seen from Figure \\ref{fig3} and Table \\ref{table1}, which is composed of the encoder and the decoder that are highly symmetrical.\n\n\\begin{figure}[!tb]\n\\setlength{\\abovecaptionskip}{-10pt}\n\\centerline{\\includegraphics[width=\\linewidth]{fig3_2.eps}}\n\\caption{The detailed structure of the L-AE. 
Input images are first feed into the encoder for feature extraction and then sent into the decoder for texture restoration, where skip connections are added to merge low-level features.}\n\\label{fig3}\n\\end{figure}\n\n\\begin{table}[!htb]\n \\centering\n \\caption{Parametric structure of the L-AE}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{m{5.5em}<{\\centering} m{5.5em}<{\\centering}cc m{4em}<{\\centering} m{4em}<{\\centering} } \\hline \\hline\n \\multicolumn{1}{m{5.5em}<{\\centering}}{Layer} & \\multicolumn{1}{m{5.5em}<{\\centering} }{$IC$} & \\multicolumn{1}{m{1.6em}<{\\centering} }{$OC$} & \\multicolumn{1}{m{2em} <{\\centering}}{Stride} & Input Size & Output Size\\\\\\hline\n Conv1 & 1 & 32 & 2 & 192$\\times$512 & 96$\\times$256 \\\\\n Block1 & 32 & 16 & 1 & 96$\\times$256 & 96$\\times$256 \\\\\n Block2\\_1 & 16 & 32 & 2 & 96$\\times$256 & 48$\\times$128 \\\\\n Block2\\_2 & 32 & 32 & 1 & 48$\\times$128 & 48$\\times$128 \\\\\n Block3\\_1 & 32 & 64 & 2 & 48$\\times$128 & 24$\\times$64 \\\\\n Block3\\_2 & 64 & 64 & 1 & 24$\\times$64 & 24$\\times$64 \\\\\n Block3\\_3 & 64 & 64 & 1 & 24$\\times$64 & 24$\\times$64 \\\\\n Block4\\_1 & 64 & 128 & 2 & 24$\\times$64 & 12$\\times$32 \\\\\n Block4\\_2 & 128 & 128 & 1 & 12$\\times$32 & 12$\\times$32 \\\\\n Block4\\_3 & 128 & 128 & 1 & 12$\\times$32 & 12$\\times$32 \\\\\n Block4\\_4 & 128 & 128 & 1 & 12$\\times$32 & 12$\\times$32 \\\\\n Trans Conv1 & 128 & 64 & 2 & 12$\\times$32 & 24$\\times$64 \\\\\n Block5\\_1 & \\multicolumn{1}{m{6em}}{64+64(Concat)} & 64 & 2 & 24$\\times$64 & 24$\\times$64 \\\\\n Block5\\_2 & 64 & 64 & 1 & 24$\\times$64 & 24$\\times$64 \\\\\n Block5\\_3 & 64 & 64 & 1 & 24$\\times$64 & 24$\\times$64 \\\\\n Trans Conv2 & 64 & 32 & 2 & 24$\\times$64 & 48$\\times$128 \\\\\n Block6\\_1 & \\multicolumn{1}{m{6em}}{32+32(Concat)} & 32 & 1 & 48$\\times$128 & 48$\\times$128 \\\\\n Block6\\_2 & 32 & 32 & 1 & 48$\\times$128 & 48$\\times$128 \\\\\n Trans Conv3 & 32 & 16 & 2 & 48$\\times$128 & 96$\\times$256 \\\\\n Block7 & \\multicolumn{1}{m{6em}}{16+16(Concat)} & 32 & 1 & 96$\\times$256 & 96$\\times$256 \\\\\n Block8 & \\multicolumn{1}{m{6em}}{32+32(Concat)} & 32 & 1 & 96$\\times$256 & 96$\\times$256 \\\\\n Trans Conv4 & 32 & 16 & 2 & 96$\\times$256 & 192$\\times$512 \\\\\n Conv9 & 16 & 1 & 1 & 192$\\times$512 & 192$\\times$512 \\\\ \\hline \\hline\n \\end{tabular}%\n \\label{table1}%\n \\vspace{-10pt}\n\\end{table}%\n\nWe perform four down sampling in the encoder to obtain high-level semantic features of the input image, which is initially downsampled through conv1 layer with a stride of 2, and the subsequent downsampling is separately accomplished by the first building block of each unit. Each downsampling will halve the height and width of the activation map and double the number of channels.\n\nAs for the decoder, we correspondingly conduct four upsampling to restore the texture of the input image. Deconvolution is adopted here for upsampling with its kernel size and stride both equal to 2, so that each upsampling will double the height and width of the activation map and halve the number of channels. In addition, we add skip connections \\cite{he2016identity} between the encoder and decoder feature maps of the same resolution. Since the final feature map of the encoder has a relatively low resolution due to multiple downsampling, it will have an undesirable effect on the restoration of the image texture in the decoder. 
While the skip connection incorporates low-level features from the encoder which have a high resolution and contain abundant detailed information that will help to accurately restore the image texture. This sort of multi-scale, U-Net-like architectures have been proved to be effective in processing medical images.\n\nThe detailed structure of our building block can be seen from Figure \\ref{fig4}, it adopts the inverted residual with linear bottleneck referring to \\cite{Sandler2018MobileNetV2}, each block is composed of three convolutional layers. The first layer expands (characterized by the expansion factor $exp$) a low-dimensional compressed representation to high dimension with a kernel size of $1 \\times 1$. The intermediate expansion layer adopts lightweight depthwise convolutions mentioned above so as to significantly decreases the number of operations (ops) and memory needed while sustaining the same performance. The last layer projects the feature back to a low-dimensional representation with a linear convolution like the first layer. All these layers are followed by batch normalization \\cite{ioffe2015batch} and ReLU \\cite{glorot2011deep} as the non-linear activation except for the last layer that only followed by a batch normalization layer.\n\n\\begin{figure*}[!htb]\n\\setlength{\\abovecaptionskip}{3pt}\n\\centerline{\\includegraphics[width=14cm]{fig4.eps}}\n\\caption{The detailed diagram of the building blocks in L-AAE, which adopt the inverted residual with linear bottleneck.}\n\\label{fig4}\n\\end{figure*}\n\nIn Figure \\ref{fig4}, $IC$ and $OC$ stand for the input and output channel of building blocks respectively. All convolutional layers in all building blocks have a stride of 1 except for Block2\\_1, Block3\\_1 and Block4\\_1 that have a stride of 2 to conduct downsampling.Expansion factor $exp$ is 1 for Block1, Block7 and Block8 to avoid large ops and memory cost, we set up $exp$ to be 3 for Block5\\_1 and Block6\\_1, and every block expect these mentioned above have an $exp$ of 6. Besides, shortcut connections are implemented in blocks that have the same resolution between its input and output feature maps to enhance information flow and also improve the ability of a gradient to propagate across multiplier layers. We adopt $1\\times1$ convolution in shortcuts when the number of channel in the input and output feature maps is different.\n\nThe discriminator in our L-AAE aims to strengthen model's ability to restore the detailed texture of images, its structure is almost the same as the encoder above, except that its Block4\\_3 and Block4\\_4 have an $OC$ of 64 and 1 respectively. The output of Block4\\_4 is flattened and sent to sigmoid function for probability prediction, which we average to get the final output that represents the input image's probability to be a real image. This novel lightweight AAE enables us to acquire the well restored Radon data that are complete in every scanning angle, and the computational cost is about 8 times smaller than that of standard convolutions without sacrificing its accuracy.\n\n\n\n\\subsubsection{Stage Two: LS-AAE -- Image Inpainting through spatial information}\nAfter stage one, we transform the acquired full-view Radon data to images and find out that, we successfully enrich the information in the Radon domain and alleviate streaking artifacts from the original sparse-view CT imaging. Now in stage two, we will mainly focus on removing artifacts, restoring image to an ideal level. 
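Since the inpainting blocks of the second stage are built from the same inverted-residual units, we pause to give a minimal PyTorch-style sketch of this building block (the class and argument names are illustrative and training details are omitted; the strides, expansion factors and the shortcut rule follow the description above):

\\begin{verbatim}
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    # 1x1 expansion -> 3x3 depthwise -> 1x1 linear projection,
    # each followed by BN; no ReLU after the last (linear) layer.
    def __init__(self, in_ch, out_ch, stride=1, exp=6):
        super().__init__()
        hidden = in_ch * exp
        self.use_shortcut = (stride == 1)  # same input/output resolution
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),       # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),   # linear bottleneck
            nn.BatchNorm2d(out_ch))
        # 1x1 convolution on the shortcut when channel counts differ
        self.match = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                      if self.use_shortcut and in_ch != out_ch
                      else nn.Identity())

    def forward(self, x):
        y = self.block(x)
        return y + self.match(x) if self.use_shortcut else y
\\end{verbatim}

The depthwise layer (with the number of groups equal to the number of expanded channels) is where the cost saving of depthwise separable convolutions discussed in the preliminaries is realized.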
As mentioned above, we need a neural network that not only look into the three-dimensional spatial neighborhood, but also capture the motion between consecutive images, so as to efficaciously utilize the abundant spatial information between consecutive images to remove artifacts from the input image.\n\nGenerally speaking, motion estimation always brings an additional degree of complexity that is adverse to model's implementation in reality. It means that we need a structure that can manage to deploy motion estimation without much resource cost, we refer to \\cite{tassano2020fastdvdnet} and its general structure appears to be a cascaded two-step architecture that inherently embed the motion of objects. Inspired by this, we propose a model named Lightweight Spatial Adversarial Autoencoder (LS-AAE) which can be seen from Figure \\ref{fig5}. It slightly modifies the L-AE from Figure \\ref{fig3} as its inpainting block, details are shown in Table \\ref{table1}. The replacement from 2D convolution to 3D convolution enables our model to look into the three-dimensional spatial neighborhood for extra information.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{From 2D convolution to 3D convolution}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{m{4em}<{\\centering}m{3.5em}<{\\centering}m{1.4em}<{\\centering}m{1.4em}<{\\centering}m{2.5em}<{\\centering}m{2.5em}<{\\centering}m{2.8em}<{\\centering}}\\hline\\hline\n & Layer & \\multicolumn{1}{m{1.4em}<{\\centering}}{$IC$} & \\multicolumn{1}{m{1.4em}<{\\centering}}{$OC$} & Kernel Size & Stride & Padding \\\\ \\hline\n \\multicolumn{1}{c}{2D Convolution} & Conv1 & 1 & 16 & (3,3) & (2,2) & (1,1) \\\\ \\hline\n \\multicolumn{1}{c}{\\multirow{2}[4]{*}{3D Convolution}} & Conv1\\_1 & 1 & 16 & (3,3,3) & (1,2,2) & (1,0,0) \\\\\n & Conv1\\_2 & 16 & 32 & (3,3,3) & (2,1,1) & (0,1,1) \\\\ \\hline\\hline\n \\end{tabular}%\n \\label{table2}%\n \n\\end{table}%\n\nAs shown in Figure \\ref{fig5}, five consecutive images $\\left\\{\\bold{I}_{i-2}, \\bold{I}_{i-1}, \\bold{I}_i, \\bold{I}_{i+1}, \\bold{I}_{i+2}\\right\\}$ are sent into the LS-AAE to restore the middle one. We firstly treat these inputs as triplets of consecutive images $\\left\\{\\bold{I}_{i-2}, \\bold{I}_{i-1}, \\bold{I}_i\\right\\}$, $\\left\\{\\bold{I}_{i-1}, \\bold{I}_i, \\bold{I}_{i+1}\\right\\}$ and $\\left\\{\\bold{I}_i, \\bold{I}_{i+1}, \\bold{I}_{i+2}\\right\\}$, then enter them into the Inpainting Blocks 1. Subsequently, we obtain the outputs of these blocks and combine them into triplet $\\left\\{\\bold{I}^{\\prime}_{i-1}, \\bold{I}^{\\prime}_i, \\bold{I}^{\\prime}_{i+1}\\right\\}$ which will be sent into Inpainting Block 2 to acquire the ultimate estimation $\\bold{I}^{\\prime\\prime}_i$ corresponding to the central image $\\bold{I}_i$. The LS-AAE digs deep into the three-dimensional space and implicitly handles motion without any explicit motion compensation stage on account of the traits of its architecture. Besides, the three Inpainting Blocks in step one share the same weights so as to avoid memory cost. 
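A compact sketch of this two-step fusion is given below (tensor shapes and the \\texttt{make\\_block} constructor, which stands for the modified L-AE inpainting block described above, are assumptions of the illustration; only the autoencoder path is shown, and the discriminator used alongside it is described next):

\\begin{verbatim}
import torch
import torch.nn as nn

class TwoStepCascade(nn.Module):
    # Step 1: a weight-shared inpainting block applied to the three
    # overlapping triplets; Step 2: a second block fuses the outputs.
    def __init__(self, make_block):
        super().__init__()
        self.block1 = make_block()   # shared across the three triplets
        self.block2 = make_block()

    def forward(self, frames):
        # frames: five consecutive images [I_{i-2}, ..., I_{i+2}],
        # each of shape (batch, 1, H, W); each triplet is stacked on a
        # depth axis so that the 3D convolutions see all three slices.
        triplets = [torch.stack(frames[j:j + 3], dim=2) for j in range(3)]
        mid = [self.block1(t) for t in triplets]       # I'_{i-1}, I'_i, I'_{i+1}
        return self.block2(torch.stack(mid, dim=2))    # final estimate I''_i
\\end{verbatim}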
We also add a discriminator in stage two to better restore the image texture, the predicted image $\\bold{I}^{\\prime\\prime}_i$ and its corresponding ground truth (the full-view CT imaging) $\\bold{I}^{GT}_i$ are both send into this discriminator, its structure is exactly the same as it is in stage one..\n\n\\begin{figure*}[!htb]\n\\setlength{\\abovecaptionskip}{-10pt}\n\\centerline{\\includegraphics[width=\\linewidth,scale=0.8]{fig5_2.eps}}\n\\caption{The diagram of our proposed LS-AAE. It combines an autoencoder that fully utilizes the spatial correlation between consecutive CT images and a discriminator that help refine image details.}\n\\label{fig5}\n\\end{figure*}\n\n\\subsection{Network Training}\nStage one and stage two are trained separately. For the autoencoders in these two models, we employ the multi-loss function below, which is consists of three parts $l_{MSE}$, $l_{Adv}$ and $l_{Reg}$ with their respective hyperparameters $\\alpha_1$, $\\alpha_2$ and $\\alpha_3$.\n\n\\begin{equation}\n\\label{eq1}\n{{l}_{AE}}={{\\alpha}_{1}}{{l}_{MSE}}+{{\\alpha}_{2}}{{l}_{Adv}}+{{\\alpha}_{3}}{{l}_{Reg}}\n\\end{equation}\n\n$l_{MSE}$ calculates the mean square error between the restored image and its corresponding ground truth, it is widely used in various image inpainting tasks since it provides an intuitive evaluation for the model's prediction. The expression of $l_{MSE}$ can be seen from Equation (\\ref{eq2}).\n\n\\begin{equation}\\label{eq2}\n{{l}_{MSE}}=\\frac{1}{W\\times H}\\sum\\limits_{x=1}^{W}{\\sum\\limits_{y=1}^{H}{{{(\\bold{I}_{x,y}^{GT}-{{G}_{AE}}{{({{\\bold{I}}^{Input}})}_{x,y}})}^{2}}}}\n\\end{equation}\n\n\\noindent Where function $G_{AE}$ stands for the autoencoder, $\\bold{I}^{Input}$ and $\\bold{I}^{GT}$ are the input image and its corresponding ground truth, $W$ and $H$ are the width and height of the input image respectively.\n\n$l_{Adv}$ refers to the adversarial loss. The autoencoder manages to fool the discriminator by making its prediction as close to the ground truth as possible, so as to achieve the ideal image restoration outcome. Its expression can be seen from Equation (\\ref{eq3}).\n\n\\begin{equation}\\label{eq3}\n{{l}_{adv}}=1-D({{G}_{AE}}({\\bold{I}^{Input}}))\n\\end{equation}\n\n\\noindent Where function $D$ and $G_{AE}$ stands for the discriminator and the autoencoder respectively, $\\bold{I}^{Input}$ is the model's input image.\n\n$l_{Reg}$ is the regularization term of our multi-loss function. Since noises will have a side effect on our restoration result, we add a regularization term to maintain the smoothness of the image and also prevent overfitting. TV Loss is widely used in image analysis tasks, it reduces the variation between adjacent pixels to a certain extent. Its expression can be seen from Equation (\\ref{eq4}).\n\n\\begin{equation}\\label{eq4}\n{{l}_{Reg}}=\\frac{1}{W\\times H}\\sum\\limits_{x=1}^{W}{\\sum\\limits_{y=1}^{H}{\\left\\| \\left. \\nabla {{G}_{AE}}{{({\\bold{I}^{Input}})}_{x,y}} \\right\\| \\right.}}\n\\end{equation}\n\n\\noindent Where function $G_{AE}$ represents the autoencoder, $I^{Input}$ is the model's input image, $W$ and $H$ are the width and height of the input image respectively. $\\nabla$ calculates the gradient, $\\left\\| \\cdot \\right\\|$ obtains the norm.\n\nTo optimize the discriminator of these two stages, their loss function should enable them to better distinguish between real and fake inputs. 
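(Summarizing the generator side before giving the discriminator objective: the composite autoencoder loss of Equations (\\ref{eq1})--(\\ref{eq4}) can be sketched in PyTorch as below; the variable names, the batch-mean reductions and the small constant inside the square root are assumptions of this illustration, and the numerical defaults are the weights quoted later in this subsection.)

\\begin{verbatim}
import torch

def ae_loss(output, target, discriminator,
            a1=1.0, a2=1e-3, a3=2e-8):   # alpha_1, alpha_2, alpha_3
    # Eq. (2): mean squared error against the ground truth
    l_mse = torch.mean((output - target) ** 2)
    # Eq. (3): adversarial term, 1 - D(G_AE(input))
    l_adv = torch.mean(1.0 - discriminator(output))
    # Eq. (4): mean norm of the discrete image gradient (TV regularizer)
    dx = output[..., :, 1:] - output[..., :, :-1]
    dy = output[..., 1:, :] - output[..., :-1, :]
    l_reg = torch.sqrt(dx[..., :-1, :] ** 2
                       + dy[..., :, :-1] ** 2 + 1e-12).mean()
    return a1 * l_mse + a2 * l_adv + a3 * l_reg
\\end{verbatim}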
The loss function $l_{Dis}$ is shown in Equation (\\ref{eq5}).\n\n\\begin{equation}\\label{eq5}\n{{l}_{D\\text{is}}}=1-D\\left( {\\bold{I}^{GT}} \\right)+D\\left( {{G}_{AE}}\\left( {\\bold{I}^{Input}} \\right) \\right)\n\\end{equation}\n\n\\noindent Where function $D$ and $G$ stands for the discriminator and the autoencoder respectively, $\\bold{I}^{Input}$ and $\\bold{I}^{GT}$ are the input image and its corresponding ground truth. The discriminator outputs a scalar between 0 to 1 which represents the probability that the input image is real. Therefore, minimizing $1-D(\\bold{I}^{GT})$\/maximizing $D(\\bold{I}^{GT})$ enables the discriminator to recognize real images, while minimizing $D(G_{AE}(\\bold{I}^{Input}))$ enables the discriminator to distinguish fake images that generated from the autoencoder from all input images.\n\nDuring the training process, we adopt the Adam algorithm \\cite{kingma2015adam} for optimization. the learning rate is set to 1e-4 initially. For the multi-loss function, $\\alpha_1$, $\\alpha_2$ and $\\alpha_3$ are set to 1, 1e-3, and 2e-8 respectively. We implement our whole structure using PyTorch \\cite{paszke2017automatic} on two GeForce RTX 2080 Ti.\n\n\\section{Experiments}\nWe adopt the LIDC-IDRI \\cite{armato2011the} as our dataset, which includes 1018 cases and approximately 240,000 DCM files of corresponding CT images. Cases 1 to 100 are divided into test set, cases 101 to 200 are divided into validation set, and the rest are divided into train set. Such a large amount of data allows us to train our models from scratch without overfitting. We utilize NumPy to read from these DCM files and conduct sparse sampling at intervals of 4, 8 and 16 (the corresponding full-view Radon data has 180 projections). Subsequently, we first analyze our overall structure through a series of ablation studies, and then compare our experimental results with other current methods to prove its superiority and robustness.\n\n\\subsection{Ablation Study}\nWith all these innovations we make in our overall structure design, it would be appropriate for us to conduct corresponding ablation studies to prove their necessity. In this part, all the experimental results are acquired from sparse-view CT data with an interval of 4 if there is no specific mention.\n\n\\subsubsection{The L-AE's Trade-off between Mobility and Performance}\nAs is known to all, U-Net has extraordinary performance in numerous medical image processing tasks, \\cite{Han2018Framing} implemented it for sparse-view CT imaging restoration and obtained outstanding restoration results. To testify that our proposed autoencoder can achieve a good balance between performance and mobility, we replace it with U-Net in the first stage and compare the restoration results and model parameters of this stage with ours, as shown in Table \\ref{table3}. The images mentioned in Table \\ref{table3} are reconstructed from the Radon data restored through stage one.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{U-Net VS. 
L-AE}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{m{3em}<{\\centering}ccccm{4.945em}<{\\centering}}\\hline\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{1}{m{3em}<{\\centering}}{Radon PSNR} & \\multicolumn{1}{m{3em}<{\\centering}}{Radon SSIM} & \\multicolumn{1}{m{3em}<{\\centering}}{Image PSNR} & \\multicolumn{1}{m{3em}<{\\centering}}{Image SSIM} & Parameters \\\\\\hline\n U-Net & 57.582 & 0.998 & 29.598 & 0.874 & 10.401M \\\\\n L-AE & 57.66 & 0.998 & 29.609 & 0.874 & 1.675M \\\\\\hline\\hline\n \\end{tabular}%\n \\label{table3}%\n \n\\end{table}%\n\nAs we can see from Table \\ref{table3}, whether in the Radon domain or in the image domain, L-AE has competitive performance compared with U-Net. Moreover, it significantly reduces model parameters, making it suitable for situations where computational resources are extremely limited. This exhibits our model's ability in efficiently restoring CT images, thus adapting to the social trend of deploying portable medical devices.\n\n\\subsubsection{The Discriminator}\nWe establish discriminators in both two stages, hoping to further improve our model's performance in restoring sparse-view CT data through the adversarial learning between the autoencoders and the discriminators. In order to verify this point of view, we send the test set into stage one where there is merely an autoencoder and compare its restoration results with ours, which can be seen from Table \\ref{table4}. The images mentioned in Table \\ref{table4} are reconstructed from the Radon data restored through stage one.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{The Role of the Discriminator}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{m{5.2em}<{\\centering}cccc}\\hline\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{1}{m{3em}<{\\centering}}{Radon PSNR} & \\multicolumn{1}{m{3em}<{\\centering}}{Radon SSIM} & \\multicolumn{1}{m{3em}<{\\centering}}{Image PSNR} & \\multicolumn{1}{m{3em}<{\\centering}}{Image SSIM} \\\\\\hline\n L-AE Only & 48.904 & 0.985 & 28.448 & 0.871 \\\\\n \\textbf{L-AAE} & \\textbf{57.660} & \\textbf{0.998} & \\textbf{29.609} & \\textbf{0.874} \\\\ \\hline\\hline\n \\end{tabular}%\n \\label{table4}%\n \n\\end{table}%\n\nFrom the above table, we can realize the significance of our proposed discriminator, it indeed assists our model to achieve a better level of restoration under the evaluation of PSNR and SSIM. Its precise structure (refers to Sec II) also ensures a high degree of mobility, which enables our overall structure to be portable and accurate at the same time.\n\n\n\n\\subsubsection{The Two-step Architecture -- LS-AAE}\nAs we state in Sec II, this sort of cascaded two-step structure inherently embeds the motion of objects which can largely help to remove image artifacts due to the strong redundancy between these consecutive images. Consequently, we design an experiment with reference to \\cite{tassano2020fastdvdnet} to prove this view. In the second stage, instead of sending five consecutive images into this two-step LS-AAE, we directly input them into a single Inpainting Block (SIB) that is slightly modified in the three-dimensional convolution part to handle five images, that means we adopt a stride of 2 in the Conv1\\_1 layer (refers to Table \\ref{table1}). 
The experimental results can be seen in Table \\ref{table5} below.\n\n\\begin{table}[htbp]\n  \\centering\n  \\caption{Restoration Results of SIB and LS-AAE}\n  \\renewcommand{\\arraystretch}{1.3}\n  \\begin{tabular}{p{4.5em}<{\\centering}cc}\\hline\\hline\n  \\multicolumn{1}{c}{} & \\multicolumn{1}{p{6em}<{\\centering}}{Image PSNR} & \\multicolumn{1}{p{6em}<{\\centering}}{Image SSIM} \\\\\\hline\n  SIB & 38.972 & 0.941 \\\\\n  \\textbf{LS-AAE} & \\textbf{40.305} & \\textbf{0.948} \\\\\\hline\\hline\n  \\end{tabular}%\n  \\label{table5}%\n  \n\\end{table}%\n\nSince the SIB no longer owns the built-in cascade structure to implicitly conduct motion estimation, it suffers from an obvious drop in PSNR and SSIM. Therefore, we can conclude that the LS-AAE effectively improves the model's capability of restoring CT images with its cascaded two-step architecture that inherently captures the motion between consecutive images.\n\n\\subsubsection{The 3D convolution in LS-AAE}\nWe mention in Sec. II that the extension of search regions from two-dimensional image neighborhoods to three-dimensional spatial neighborhoods provides extra information for image restoration. Also, extracting spatial features is conducive to removing streaking artifacts, as the residual error in each image is correlated. In order to realize this extension of search regions, three-dimensional convolution is employed in every Inpainting Block of the LS-AAE. To verify the importance of these three-dimensional convolutions, we conduct an experiment in which the 3D convolutions are replaced with 2D convolutions, where the number of input images is regarded as the number of input channels (refer to Table \\ref{table2}). The inpainting results of these two models are shown in Table \\ref{table6}.\n\n\\begin{table}[htbp]\n\n  \\centering\n  \\caption{Restoration Results of 2D and 3D LS-AAE}\n  \\renewcommand{\\arraystretch}{1.3}\n  \\begin{tabular}{p{6em}<{\\centering}cc}\\hline\\hline\n  \\multicolumn{1}{c}{} & \\multicolumn{1}{p{6em}<{\\centering}}{Image PSNR} & \\multicolumn{1}{p{6em}<{\\centering}}{Image SSIM} \\\\\\hline\n  2D LS-AAE & 39.472 & 0.944 \\\\\n  \\textbf{3D LS-AAE} & \\textbf{40.305} & \\textbf{0.948} \\\\\\hline\\hline\n  \\end{tabular}%\n  \\label{table6}%\n  \n\\end{table}%\n\nWe can see that the inpainting outcome suffers from a drop of about 0.9 dB in PSNR, showing that three-dimensional convolutions assist the model in restoring CT images to a certain extent without significantly increasing the computational cost.\n\n\n\n\\subsubsection{The Image Interval of LS-AAE's Input}\nIn all the experiments above, we set the image interval between the input consecutive CT images of the LS-AAE to the default value of 1. However, we cannot help but wonder whether increasing this interval can help the model obtain more spatial information, thereby enhancing its ability to remove image artifacts. 
In the following experiment, we set this image interval $T$ to 1, 2, 3, 4 and 5 respectively, their corresponding results are shown in Table \\ref{table7}.\n\n\\begin{table}[htbp]\n\n \\centering\n \\caption{The Image Interval's Effect on Restoration Results}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{p{4em}<{\\centering}cc}\n \\hline\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{1}{p{6em}}{Image PSNR} & \\multicolumn{1}{p{6em}}{Image SSIM} \\\\ \\hline\n $\\bm{T=1}$ & \\textbf{40.305} & \\textbf{0.948} \\\\\n $T=2$ & 39.961 & 0.948 \\\\\n $T=3$ & 40.032 & 0.948 \\\\\n $T=4$ & 40.147 & 0.948 \\\\\n $T=5$ & 40.195 & 0.950 \\\\\\hline\\hline\n \\end{tabular}%\n \\label{table7}%\n \n\\end{table}%\n\nIt can be learnt from Table \\ref{table7} that this hyperparameter T does not have much impact on the final restoration result. Spatial correlation seems to be well utilized when the image interval is set to 1, which would be a decent default choice.\n\n\\subsubsection{The Radon Domain VS. the Image Domain}\nIn this paper, we adopt a two-stage structure that combines the Radon domain and the image domain to obtain high-quality sparse-view CT images. Since each stage of the overall structure conducts restoration in their separate domains and both remarkably upgrade the restoration results, this leads us to think, what role do these two domains play? Subsequently, we feed our test set into these three structures: L-AAE in stage one that concentrates on the Radon domain, LS-AAE in stage two that focus on the image domain and of course, our overall structure that contains these two stages. The quantitative inpainting results of the above three structures can be referred from Table \\ref{table8}, the intuitive outcome can also be seen in Figure \\ref{fig6}.\n\n\\begin{figure}[!htb]\n\\centerline{\\includegraphics[width=\\linewidth,scale=0.8]{fig6.eps}}\n\\caption{The intuitive restoration results obtained by different domains.}\n\\label{fig6}\n\\vspace{5pt}\n\\end{figure}\n\n\\begin{table}[htbp]\n\\vspace{5pt}\n \\centering\n \\caption{Restoration Results obtained by different domains}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{p{9em}<{\\centering}cc} \\hline\\hline\n \\multicolumn{1}{c<{\\centering}}{} & \\multicolumn{1}{p{6em}<{\\centering}}{Image PSNR} & \\multicolumn{1}{p{6em}<{\\centering}}{Image SSIM} \\\\ \\hline\n The Radon Domain & 30.310 & 0.905 \\\\\n The Image Domain & 34.135 & 0.888 \\\\\n \\textbf{Dual Domains} & \\textbf{40.305} & \\textbf{0.948} \\\\\\hline\\hline\n \\end{tabular}%\n \\label{table8}%\n \n\\end{table}%\n\nIt can be seen from above that, restoration in each domain has its pros and cons. For the Radon domain, it demonstrates its superiority in enhancing the structural similarity of images so as to perform well under the evaluation of SSIM. While as for the image domain, it exhibits great ability in alleviating distortion, thus has a relatively good performance under the evaluation of PSNR. Naturally, we acquire extraordinary restoration results when combining these two domains to merge their respective advantages. Besides, we solely utilize the spatial correlation in the Image domain due to our discovery that, the spatial information between continuous Radon slices has little impact on the final inpainting outcome. 
We suppose this is because the texture in Radon slices does not have much similarity with CT images, thus cannot be restored in this way.\n\n\\subsection{Methods Comparison}\nAfter verifying the rationality of our overall structural design, we want to testify its robustness through applying it to sparse-view CT data with a higher level of sparsity, which means, conducting sparse sampling at intervals of 4, 8 and even 16 (the corresponding full-view Radon data has 180 projections). In addition, we compare our method with other current ones to prove its prominent capability of restoring sparse-view CT images and removing streaking artifacts. The experimental results are shown in Table \\ref{table9}, and the intuitive outcome can be seen from Figure \\ref{fig7}.\n\n\\begin{figure}[!htb]\n\\centerline{\\includegraphics[width=\\linewidth,scale=0.8]{fig7.eps}}\n\\caption{The intuitive restoration results of various methods at different sampling intervals.}\n\\label{fig7}\n\\end{figure}\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Methods Comparison}\n \\renewcommand{\\arraystretch}{1.3}\n \\begin{tabular}{p{4.1em}<{\\centering}p{2.7em}<{\\centering}p{2.7em}<{\\centering}p{2.7em}<{\\centering}p{2.7em}<{\\centering}p{2.7em}<{\\centering}p{2.7em}<{\\centering}} \\hline\\hline\n \\multicolumn{1}{c}{\\multirow{2}[4]{*}{}} & \\multicolumn{2}{c}{Interval=4} & \\multicolumn{2}{c}{Interval=8} & \\multicolumn{2}{c}{Interval=16} \\\\\n\\cline{2-7} \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{PSNR} & \\multicolumn{1}{c}{SSIM} & \\multicolumn{1}{c}{PSNR} & \\multicolumn{1}{c}{SSIM} & \\multicolumn{1}{c}{PSNR} & \\multicolumn{1}{c}{SSIM} \\\\\n \\hline\n FBP & 12.080 & 0.498 & 12.065 & 0.485 & 12.032 & 0.471 \\\\\n SART-TV & 19.179 & 0.665 & 19.061 & 0.632 & 18.777 & 0.602\\\\\n U-Net & 34.018 & 0.885 & 31.944 & 0.843 & 28.767 & 0.798 \\\\\n \\textbf{Ours} & \\textbf{40.305} & \\textbf{0.948} & \\textbf{37.633} & \\textbf{0.937} & \\textbf{34.052} & \\textbf{0.910} \\\\ \\hline\\hline\n \\end{tabular}%\n \\label{table9}%\n \n\\end{table}%\n\n\nAs we can see, our method exhibits extraordinary capability of restoring sparse-view CT imaging, effectively removes streaking artifacts and outruns other methods by a large margin. Also, it can be applied to extreme sparsity while still obtaining prominent inpainting outcome. Particularly, our method still exceeds others when the sampling rate is one-fourth of them, thereby demonstrating its remarkable robustness and superiority.\n\n\\section{Conclusion}\nIn this paper, we propose a lightweight structure that efficaciously restores sparse-view CT with its two-stage architecture combining the Radon domain and the image domain. Most importantly, we groundbreakingly exploit the abundant spatial information existing between consecutive CT images, so as to achieve a remarkable restoration outcome even if our method encounters extreme sparsity.\n\nIn the first stage, a mobile model named L-AAE is proposed to complement the original sparse-view CT in the Radon domain, it adopts the inverted residual with linear bottleneck in order to significantly reduce computational resource requirements while maintaining outstanding performance. In the second stage, after reconstructing the restored full-view Radon data into images through FBP, we establish a lightweight model called LS-AAE. It is designed to implicitly conduct motion estimation and dig into the three-dimensional spatial neighborhood with a relatively low memory cost. 
Therefore, it manages to concentrates on fully utilizing the strong spatial correlation between continuous CT images, so as to productively remove streaking artifacts and finally acquire high-quality restoration results.\n\nEventually, for the sparse-view CT with a sampling interval of 4, we achieve a PSNR of 40.305 and a SSIM of 0.948, realizing a remarkable restoration result that can effectively eliminate image artifacts. In addition, our method also performs well when it comes to extreme sparsity (the sampling interval is 8 or even 16), exhibiting its prominent robustness.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{section:introduction}\n\n\n\\IEEEPARstart{I}{n} this paper and in a sequel \\cite{OIS} we consider the problem of interpolating all values of a discrete, periodic signal $f\\colon \\mathbb{Z}_N \\longrightarrow \\mathbb{C}$, $N \\ge 2$, when $d2}\n If $\\mathbb{B}^\\mathcal{J}$ has an orthogonal interpolating basis then $|\\mathcal{J}| \\le N\/2$.\n \\end{proposition}\n \\begin{IEEEproof}\n Suppose $\\mathbb{B}^\\mathcal{J}$ has an orthogonal interpolating basis indexed by $\\mathcal{I}$. Then $|\\mathcal{I}|=|\\mathcal{J}|$. Let $\\mathcal{I}'=[0\\ : N-1]\\setminus \\mathcal{I}$. By Proposition \\ref{proposition:perturbation} we can write \n \\[\n \\mathbb{B}^{\\mathcal{J}}= {\\rm span}\\{\\delta_i+v_i\\colon i \\in\\mathcal{I}\\} ,\n \\]\nwhere the $v_i$ are orthogonal vectors in $\\mathbb{C}^{\\mathcal{I}'}$, or some possibly $0$. But none of the $v_i$ can be zero, for ${{\\mathcal{F}}} \\delta_k = \\omega^{-k}$ which never vanishes. There are $|\\mathcal{I}|$ of the $v$'s, and if $|\\mathcal{J}|=|\\mathcal{I}| > N\/2$ then $|\\mathcal{I}'| k$) is \n\\[\n\\sum_{l=0}^{p^k-1} \\binom{\\widetilde{\\chi}_k(l)}{2},\n\\]\nand hence the number of differences that have a factor of exactly $p^k$ is given by \n\\[\n\\sum_{l=0}^{p^k-1} \\binom{\\widetilde{\\chi}_k(l)}{2} - \\sum_{l=0}^{p^{k+1}-1} \\binom{\\widetilde{\\chi}_{k+1}(l)}{2}.\n\\]\nThe largest power of $p$ that divides $A$ is then $p$ raised to\n\\begin{equation} \\label{eq:largest-power-of-p}\n{\\sum_k k\\left( \\sum_{l=0}^{p^k-1} \\binom{\\widetilde{\\chi}_k(l)}{2} - \\sum_{l=0}^{p^{k+1}-1} \\binom{\\widetilde{\\chi}_{k+1}(l)}{2}\\right)} .\n\\end{equation} \nThe expression \\eqref{eq:largest-power-of-p} depends only on the values of $\\widetilde{\\chi}_k$, but the hypothesis is that $\\widetilde{\\chi}_k = \\widetilde{\\chi}_k^*$ for $0 \\le k \\le N$, and therefore the products $A=\\prod(m_j - m_i)$ and $B = \\prod(j-i)$ have the same powers of $p$ as factors. Hence $\\mu = A\/B$ is coprime to $p$ and from Lemma \\ref{lem:univ-suff} we conclude that $\\mathcal{I}$ is a universal sampling set.\n\\end{IEEEproof}\n\n\\begin{remark} \nThe argument above also gives an insight, if not a proof, as to why $\\mu=A\/B$ in \\eqref{eq:magic-frac} is an integer. Suppose $\\mathcal{M}((\\mathcal{I}\/p^k)^\\sim)= \\{r_1, r_2, r_3, \\ldots, r_d\\}$. 
The power of $p^k$ in $A = \\prod (m_i - m_j)$ is given by \n\\[\n\\sum_{i=1}^d {r_i \\choose 2} = \\frac{1}{2}\\left(\\sum_{i=1}^d r_i^2 - \\sum_{i=1}^d r_i\\right).\n\\]\nNow, $\\sum_{i=1}^d r_i$ is the cardinality of $\\mathcal{I}$ so\n\\[\n\\sum_{i=1}^d r_i = d.\n\\]\nHence for a set $\\mathcal{I}$ which has the minimum power of $p^k$ in $A$ it must be that $\\mathcal{M}((\\mathcal{I}\/p^k)^\\sim) = \\{r_1, r_2,\\ldots r_d\\}$ is a solution to \n\\begin{align*}\n&\\text{minimize }r_1^2 + r_2^2 + \\cdots +r_d^2 \\\\\n&\\text{subject to }r_1 + r_2 + \\cdots r_d = d.\n\\end{align*}\nOn the reals the optimal solution satisfies $r_1 = r_2 = \\cdots = r_d$. This suggests that the set $\\mathcal{I}$ with the smallest power of $p^k$ in $A$ must have roughly an equal number of elements in each congruence class. $\\mathcal{I}^* = \\{0,1,2,\\ldots, d-1\\}$ is one such set. Thus the power of $p^k$ is smaller in $B = \\prod(i-j)$ than in $A = \\prod (m_i - m_j)$ for each $p$ and $k$, and, if the reasoning is to trusted, $\\mu = A\/B$ is an integer.\n\n\\end{remark}\n\n\nTo finish the proof of Theorem \\ref{theorem:universal} we will derive the following bounds on $\\widetilde{\\chi}_k$. \n\n\\begin{lemma} \\label{lemma:chi-upper-and-lower}\nIf $\\mathcal{I}\\subseteq [0:p^M-1]$ is a universal sampling set of size $d$ then\n\\begin{equation} \\label{eq:chi-upper-and-lower}\n\\left\\lfloor \\frac{d}{p^k}\\right\\rfloor \\le \\widetilde{\\chi}_k(s) \\le \\left\\lceil \\frac{d}{p^k}\\right\\rceil, \\quad s\\in[0:p^k-1]\\,, 0 \\le k \\le M.\n\\end{equation}\n\\end{lemma}\n\nIt follows immediately from \\eqref{eq:chi-upper-and-lower} that if $\\mathcal{I}$ is a universal sampling set then\n\\[\n|\\widetilde{\\chi}_k(a) - \\widetilde{\\chi}_k(b)| \\le 1, \\quad a, b \\in [0:p^k-1].\n\\]\nThis is condition (ii), and with this result the proof of Theorem \\ref{theorem:universal} will be complete. Incidentally, for the case $\\mathcal{I}=\\mathcal{I}^*$, \\eqref{eq:chi-upper-and-lower} is a simple consequence of \\eqref{eq:chi^*-explicit} and \\eqref{eq:chi-I*}.\n\n\n\n\nThe argument for Lemma \\ref{lemma:chi-upper-and-lower} is through constructing submatrices of the Fourier matrix of known rank to obtain upper and lower bounds for $\\widetilde{\\chi}_k$. The first step is to build a particular model submatrix, and this requires some bookkeeping. \n\nLet $\\mathcal{I} \\subseteq [0:p^M-1]$, at this point not assumed to be a universal sampling set. Fix $k\\le M$ and $s\\in[0:p^k-1]$, and recall that we let\n\\[\n\\mathcal{I}_{ks} = \\{ i \\in \\mathcal{I} \\colon i \\equiv s \\text{ mod $p^k$}\\}.\n\\]\nThe set $\\mathcal{I}_{ks}$ has $\\widetilde{\\chi}_k(s)$ elements. List them, in numerical order, as $i_0, i_1,i_2,\\dots, i_c$, where we put $c=\\chi_k(s)-1$ to simplify notation. 
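For a small concrete illustration of these sets and of the bounds \\eqref{eq:chi-upper-and-lower}, take $p=2$, $M=3$ (so $N=8$) and $\\mathcal{I}=\\{0,1,2,5\\}$, $d=4$. For $k=1$ the two classes are $\\{0,2\\}$ and $\\{1,5\\}$, so $\\widetilde{\\chi}_1(0)=\\widetilde{\\chi}_1(1)=2$, while for $k=2$ we get $\\widetilde{\\chi}_2(0)=1$, $\\widetilde{\\chi}_2(1)=2$, $\\widetilde{\\chi}_2(2)=1$, $\\widetilde{\\chi}_2(3)=0$. Since $\\lfloor d\/p^2\\rfloor=\\lceil d\/p^2\\rceil=1$, the bound \\eqref{eq:chi-upper-and-lower} fails at $k=2$, so $\\{0,1,2,5\\}$ is not a universal sampling set, whereas $\\mathcal{I}^*=\\{0,1,2,3\\}$, with exactly one element in each class mod $4$, satisfies the bounds for every $k$.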
Let $r$ be a positive integer and define the column vector of length $c$ by \n\\[\n \\mathfrak{z}^r = \\begin{bmatrix}\n\\zeta_{N}^{i_0r} & \\zeta_{N}^{i_1r} & \\zeta_{N}^{i_2r} & \\cdots & \\zeta_N^{i_cr}\n\\end{bmatrix}^\\textsf{T}.\n\\]\n\nNow let $\\mathfrak{Z}^r$ be the $c \\times p^k$ matrix obtained by repeating $p^k$ copies of the column $\\mathfrak{z}^r$:\n\\[\n\\mathfrak{Z}^r =\\underbrace{\\begin{bmatrix} \\mathfrak{z}^r & \\mathfrak{z}^r & \\mathfrak{z}^r & \\cdots \\mathfrak{z}^r \\end{bmatrix}}_\\text{$p^k$ times},\n\\]\nand let $\\mathfrak{D}^s$ be the $p^k \\times p^k$ diagonal matrix\n\\[\n\\mathfrak{D}^s =\n\\begin{bmatrix}\n1 & 0 & 0 & \\dots & 0\\\\\n0 & \\zeta_{p^k}^{s} & 0 \\dots & 0\\\\\n0 & 0 & \\zeta_{p^k}^{2s} & \\dots & 0\\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\cdots & \\zeta_{p^k}^{(p^k-1)s}\n\\end{bmatrix} .\n\\]\n\nFinally, let $k'=M-k$, and set \n\\begin{equation} \\label{eq:JJ_k'r}\n\\mathcal{J}_{k'r} = \\{0\\cdot p^{k'}+r, 1\\cdot p^{k'} +r, 2\\cdot p^{k'}+r, \\dots, (p^k-1)p^{k'}+r\\} .\n\\end{equation}\nFrom the Fourier matrix ${{\\mathcal{F}}}$ we choose $c$ rows indexed by $\\mathcal{I}_{ks}$ and $p^k$ columns indexed by $\\mathcal{J}_{k'r}$. The result of these choices, we claim, results in\n\\begin{equation} \\label{eq:F_rs-1}\nE_{\\mathcal{I}_{ks}}^\\textsf{T} {{\\mathcal{F}}} E_{\\mathcal{J}_{k'r}} = \\mathfrak{Z}^r\\mathfrak{D}^s.\n\\end{equation}\n\n\nAfter the preparations, the derivation of \\eqref{eq:F_rs-1} is straightforward. The $(a,b)$-entry of $E_{\\mathcal{I}_{ks}}^\\textsf{T} {{\\mathcal{F}}} E_{\\mathcal{J}_{k'r}}$ is\n\\[\n\\begin{aligned}\n\\zeta_{N}^{i_a(bp^{k'}+r)}&=\\exp\\left(-\\frac{(2\\pi i)i_a(bp^{M-k}+r)}{p^M}\\right)\\\\\n&= \\exp\\left(-\\frac{(2\\pi i ) i_ab}{p^k}\\right)\\exp\\left(-\\frac{(2\\pi i )i_ar}{p^M}\\right) .\n\\end{aligned}\n\\]\nBut now recall that, by definition, when $i_a\\in \\mathcal{I}_{ks}$ is divided by $p^k$ it leaves a remainder of $s$, and thus\n\\[\n\\begin{aligned}\n&\\exp\\left(-\\frac{(2\\pi i ) i_ab}{p^k}\\right)\\exp\\left(-\\frac{(2\\pi i )i_ar}{p^M}\\right)\\\\\n &= \\exp\\left(-\\frac{(2\\pi i ) sb}{p^k}\\right)\\exp\\left(-\\frac{(2\\pi i )i_ar}{p^M}\\right) \\\\\n&=\\zeta_{p^k}^{sb}\\,\\zeta_N^{i_a r}.\n\\end{aligned}\n\\]\n\n\nThis construction is the basis for the proof of Lemma \\ref{lemma:chi-upper-and-lower}, but applied in block form.\n\n\\begin{IEEEproof}[Proof of Lemma \\ref{lemma:chi-upper-and-lower}] To deduce the upper bound $\\widetilde{\\chi}(s) \\le \\lceil d\/p^k\\rceil$ we begin by letting\n\\[\n\\mathcal{J}= \\mathcal{J}_{k'0}\\cup \\mathcal{J}_{k'1} \\cup \\mathcal{J}_{k'2} \\cup \\cdots \\cup \\mathcal{J}_{k'd'}, \\quad d'=\\left\\lceil \\frac{d}{p^k}\\right\\rceil -1,\n\\]\nwhere $\\mathcal{J}_{k'r}$ is defined as in \\eqref{eq:JJ_k'r}. Note that $\\mathcal{J}$ is a union of $\\lceil d\/p^k\\rceil$ disjoint sets. 
Each $\\mathcal{J}_{k'r}'$, $0\\le r \\le d'=\\lceil d\/p^k\\rceil-1$ indexes the choice of $p^k$ columns from ${{\\mathcal{F}}}$ and applying \\eqref{eq:F_rs-1} we have\n\\[\n\\begin{aligned}\n&E_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_\\mathcal{J}= \\\\\n&\\begin{bmatrix}\nE_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_{\\mathcal{J}_{k'0}} & E_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_{\\mathcal{J}_{k'1}} & \\cdots & E_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_{\\mathcal{J}_{k'd'}}\n\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\n\\mathfrak{Z}^0\\mathfrak{D}^s & \\mathfrak{Z}^1\\mathfrak{D}^s &\\cdots & \\mathfrak{Z}^{d'}\\mathfrak{D}^s\n\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\n\\mathfrak{Z}^0 & \\mathfrak{Z}^1 & \\cdots & \\mathfrak{Z}^{d'}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathfrak{D}^s & 0 & \\cdots & 0 \\\\\n0 & \\mathfrak{D}^s & \\cdots &0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & \\mathfrak{D}^s \n\\end{bmatrix} .\n\\end{aligned}\n\\]\nThe diagonal matrix in this product is invertible, hence\n\\begin{equation} \\label{eq:rank-bound-1}\n\\begin{aligned}\n&\\text{Rank of $E_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_\\mathcal{J}$} = \\text{Rank of $\\begin{bmatrix}\n\\mathfrak{Z}^0 & \\mathfrak{Z}^1 & \\mathfrak{Z}^2 & \\cdots & \\mathfrak{Z}^{d'}\n\\end{bmatrix}$} \\\\\n&\\hspace{.5in} \\le \\text{Number of distinct columns} = \\left\\lceil\\frac{d}{p^k}\\right\\rceil.\n\\end{aligned}\n\\end{equation}\nNow, the number of columns of $E_{\\mathcal{I}_{ks}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is equal to \n\\begin{align*}\n|\\mathcal{J}| &= |\\mathcal{J}_{k'0} \\cup \\mathcal{J}_{k'1} \\cup \\mathcal{J}_{k'2} \\cup \\ldots \\cup \\mathcal{J}_{k'd'}| \\\\\n& = \\sum_{r=0}^{ \\lceil d\/p^k \\rceil -1} |\\mathcal{J}_{k'r}| = p^k\\lceil d\/p^k \\rceil \\geq d,\n\\end{align*}\nso there are at least $d$ columns. Hence if $\\mathcal{I}$ is a universal sampling set of size $d$ then $E_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ must be of full row rank. In particular, since $\\mathcal{I}_{ks} \\subseteq \\mathcal{I}$, it must be that \n$E_{\\mathcal{I}_{ks}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is also of full row rank, for each $s$. Next, the number of rows in $E_{\\mathcal{I}_{ks}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is equal to $|\\mathcal{I}_{ks}| = \\widetilde{\\chi}_k(s)$ by definition. From \\eqref{eq:rank-bound-1} we know that the rank of $E_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is at most $\\lceil d \/p^k \\rceil$, and so we have\n\\[\n\\text{(Number of rows) $\\widetilde{\\chi}_k(s)$} \\leq \\left\\lceil \\frac{d}{p^k} \\right\\rceil.\n\\]\n\nThe proof of the lower bound $\\widetilde{\\chi}_k(s) \\ge \\lfloor d\/p^k \\rfloor$ is very similar. This time we construct a set $\\mathcal{J}$ with $|\\mathcal{J}| \\leq d$, and observe that if $\\mathcal{I}$ is a universal sampling set of size $d$, then $E_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is of full column rank. 
\n\nLet\n\\[\n\\mathcal{J} = \\mathcal{J}_{k'0} \\cup \\mathcal{J}_{k'1} \\cup \\mathcal{J}_{k'2} \\cup \\ldots \\cup \\mathcal{J}_{k' d''}, \\quad d''=\\lfloor d\/p^k \\rfloor -1.\n\\] \nThen just as above,\n\\[\n\\begin{aligned}\n\\text{Rank of $E_{\\mathcal{I}_{ks}}^\\textsf{T}{{\\mathcal{F}}} E_\\mathcal{J}$} &= \\text{Rank of $\\begin{bmatrix}\n\\mathfrak{Z}^0 & \\mathfrak{Z}^1 & \\mathfrak{Z}^2 & \\cdots & \\mathfrak{Z}^{d''}\n\\end{bmatrix}$} \\\\\n&\\le \\text{Number of distinct columns} = \\left\\lfloor\\frac{d}{p^k}\\right\\rfloor.\n\\end{aligned}\n\\]\n\nThe number of rows of $E_{\\mathcal{I}_{ks}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}$ is $|\\mathcal{I}_{ks}| = \\widetilde{\\chi}_k(s)$, and so we must have\n\\begin{equation}\n\\text{Rank of }E_{\\mathcal{I}_{ks}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\leq \\min \\{\\lfloor d\/p^k \\rfloor ,\\widetilde{\\chi}_k(s)\\}. \\label{eq:univ-nec-31}\n\\end{equation} \nFurthermore, \n\\[\nE_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\nonumber\\\\\n= \\begin{bmatrix}E_{\\mathcal{I}_{k0}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\\\ E_{\\mathcal{I}_{k1}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\\\\n\\vdots\\\\\nE_{\\mathcal{I}_{k(p^k-1)}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}}\\\\\n \\end{bmatrix} , \\nonumber\n\n \\]\nwhence\n\\begin{align}\n&\\text{Row rank of }E_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\nonumber \\\\\n&\\quad \\leq \\text{Rank of }E_{\\mathcal{I}_{k0}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} + \\text{Rank of }E_{\\mathcal{I}_{k1}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\nonumber\\\\\n& \\hspace{.5in}+ \\ldots + \\text{Rank of }E_{\\mathcal{I}_{k(p^k-1)}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} \\nonumber\\\\\n&\\quad \\leq \\sum_{s=0}^{p^k-1} \\min \\{\\lfloor d\/p^k \\rfloor ,\\widetilde{\\chi}_k(s)\\} . \\label{eq:univ-nec-32}\n\\end{align}\n\nNow, the number of columns indexed by $\\mathcal{J}$ is $p^k \\lfloor d\/p^k \\rfloor \\leq d$. Hence if $\\mathcal{I}$ is a universal sampling set of size $d$, we need $E_{\\mathcal{I}}^\\textsf{T}\\mathcal{F} E_{\\mathcal{J}} $ to be of full column rank. From \\eqref{eq:univ-nec-32}, this means we must have \n\\[\n\\text{(Number of columns) } p^k \\lfloor d\/p^k \\rfloor \\leq \\sum_{s=0}^{p^k-1} \\min \\{\\lfloor d\/p^k \\rfloor ,\\widetilde{\\chi}_k(s)\\}. \n\\] \nThis inequality will not be satisfied unless $\\lfloor d\/p^k \\rfloor \\leq \\widetilde{\\chi}_k(s)$ for all $s$. This completes the proof. \n\\end{IEEEproof}\n\n\\begin{remark}\nFor many values of $d$, it is enough to prove one side of the inequality \\eqref{eq:chi-upper-and-lower}. If we know that $\\widetilde{\\chi}_k(s) \\leq \\lceil d \/p^k \\rceil$, then from $\\sum_s\\widetilde{\\chi}_k(s) =d$ and a recurrence relation \\eqref{eq:chi-recurrence}, below, it is possible to prove that $\\lfloor d \/p^k \\rfloor \\leq \\widetilde{\\chi}_k(s)$. Such cases include\n\\begin{enumerate}\n\\item $N=p^M$, $d = c_0 p^k + c_1 p^{k-1}$ for $c_0, c_1 \\in \\{0,1,2,\\ldots,p-1\\}$.\n\\item $N=2^M$, $d = c_0 2^k + c_1 2^{k-1} + c_22^{k-2}$ for $c_0, c_1, c_2 \\in \\{0,1\\}$\n\\item $N=2^M$, $d = 2^k + 2^{k-1} + 2^{k-2} + \\ldots + 2^{k-r+1}$ for some $r$,\n\\end{enumerate}\n\\end{remark}\n\n\n\n\n\\subsection{Digit Reversal and Universal Sampling Sets} \\label{subsection:digit-reversal}\n\nThere is another interesting characterization of universal sampling sets in terms of \\emph{digit reversal}. 
Expanding in base $p$, any integer $a \\in [0:p^m-1]$, $m \\ge 1$, can be written uniquely as\n\\[\na=\\alpha_0+\\alpha_1p+\\alpha_2p^2+\\cdots \\alpha_{m-1}p^{m-1},\n\\] \nwhere the $\\alpha$'s are in $[0:p-1]$. We define a permutation $\\pi_m\\colon [0:p^m-1] \\longrightarrow [0:p^m-1]$ by\n\\[\n\\begin{aligned}\n&\\pi_m(\\alpha_0+\\alpha_1p+\\alpha_2p^2+\\cdots \\alpha_{m-1}p^{m-1}) = \\\\\n&\\hspace{.5in} \\alpha_{m-1}+\\alpha_{m-2}p+\\alpha_{m-3}p^2+\\cdots +\\alpha_0p^{m-1}.\n\\end{aligned}\n\\]\nThe $\\alpha$'s are the digits in the base $p$ expansion of $a \\in [0:p^m-1]$ and applying $\\pi_m$ to $a$ produces the number in $[0:p^m-1]$ with the digits reversed. For example (an example we will use again in Section \\ref{section:maximal}), take $[0:7]$. Then $\\pi_3([0:7])=\\{0,4,2,6,1,5,3,7\\}$ in that order. Such digit reversing permutations were used in \\cite{delvaux:rank-defficient} to find rank-one submatrices of the Fourier matrix. \n\n\n\n\n\n\n\nThe issue for universal sampling sets is how the numbers $\\pi_M(\\mathcal{I})$ are dispersed within the interval $[0:p^M-1]$, where, as before, $N=p^M$. To make this precise, take $k\\ge 1$ and partition $[0:p^M-1]$ into $p^k$ equal parts:\n\\[\n[0:p^M-1] = \\bigcup_{a=0}^{p^k-1}[ap^{k'}:(a+1)p^{k'}-1], \\quad k'=M-k.\n\\] \nFor any $\\mathcal{J} \\subseteq [0:p^M-1]$ and $a \\in [0:p^k-1]$, let\n\\[\n\\phi_k(a\\,;\\mathcal{J}) = |\\mathcal{J} \\cap [ap^{k'},(a+1)p^{k'}-1]|.\n\\]\nWe say that $\\mathcal{J}$ is \\emph{uniformly dispersed} in $[0,p^M-1]$ if\n\\begin{equation} \\label{eq;uniformly-dispersed}\n|\\phi_k(a\\,;\\mathcal{J}) - \\phi_k(b\\,;\\mathcal{J})| \\le 1\n\\end{equation}\nfor all $a, b\\in[0:p^k-1]$, and $1\\le k \\le M$. Thus $\\mathcal{J}$ is uniformly dispersed if roughly equal numbers\nof its elements are in each of the intervals $[ap^{k'}: (a+1)p^{k'}-1]$ for all $1 \\le k \\le M$, $k'=M-k$. \n\nWe will show \n\\begin{equation} \\label{eq:phi-chi}\n\\phi_k(\\pi_k(a)\\,; \\pi_M(\\mathcal{I})) = \\widetilde{\\chi}_k(a), \\quad a \\in[0:p^k-1].\n\\end{equation}\nThus, to the three equivalent conditions in Theorem \\ref{theorem:universal} we can add a fourth:\n\\begin{quote}\n\\begin{enumerate}\n\\item[(iv)] $\\pi_M(\\mathcal{I})$ is uniformly dispersed. \n\\end{enumerate}\n\\end{quote}\n\n\nThe derivation of \\eqref{eq:phi-chi} uses the following lemma.\n\\begin{lemma}\n\\label{lemma:univ-alt-2}\nIf $j \\in [0:p^M-1]$ is given by $j = b + ap^{k'}$, $0\\leq b \\leq\np^{k'}-1 $, then $\\pi_M(j) = \\pi_k(a) + p^k\\pi_{k'}(b)$. \n\\end{lemma}\n\nThe proof is straightforward, and the argument for \\eqref{eq:phi-chi} then goes very quickly. As defined, for any index set $\\mathcal{J}$, $\\phi_k(a\\,; \\mathcal{J})$ is the number of elements in $\\mathcal{J}$ that lie in $[ap^{k'} : (a+1)p^{k'}-1]$, and these are precisely the $j \\in \\mathcal{J}$ of the form $ap^{k'}+b$ with $0\\le b \\le p^{k'}-1$. 
Thus for $i \\in [0:p^k-1]$,\n\\[\n\\begin{aligned}\n&{\\phi}_k(\\pi_k(i)\\,;{\\pi_M(\\mathcal{I})}) = \\\\\n&\\hspace{.25in}\\text{the number of }j \\in\n\\pi_M(\\mathcal{I}) \\\\\n&\\hspace{.35in}\\text{ of the form }\\pi_k(i) p^{k'} + b, \\ 0\\leq b \\leq\np^{k'}-1 \\nonumber \\\\\n&= \\text{ number of }j \\in\n\\mathcal{I} \\text{ of the form } \\\\\n&\\hspace{.35in}p^{k}\\pi_{k'}(b) + i, \\ 0\\leq b \\leq\np^{k'}-1 \\text{ (from Lemma \\ref{lemma:univ-alt-2})} \\\\\n& = \\text{ number of }j \\in\n\\mathcal{I} \\text{ that leave a remainder of } \\\\\n&\\hspace{.35in} i \\text{ on dividing by }p^k\n\\nonumber \\\\\n& = \\widetilde{\\chi}_k(i). \\nonumber\n\\end{aligned}\n\\]\n \n\n\n\\section{Structure and Enumeration of Universal Sampling Sets} \\label{section:maximal}\n\n In this section we analyze in detail the structure of universal sampling sets. Specifically we show that when $N=p^M$ is a prime power such a set $\\mathcal{I}$ is the disjoint union of smaller, \\emph{elementary universal sets} that depend on the base $p$ expansion of $|\\mathcal{I}|$. The method is algorithmic, allowing us to construct universal sets of a given size, and to find a formula that counts the number of universal sets as a function of $p^M$ and $|\\mathcal{I}|$. In particular the formula answers the question: How likely is it that a randomly chosen index set is universal? Not very likely, but there are several subtle aspects to the answer. For example, we exhibit plots of the counting function showing some striking phenomena depending on the prime $p$. Our approach is via \\emph{maximal universal sampling sets} which, in turn, enter naturally in studying the relationship between universal sampling sets and uncertainty principles. We take up the latter topic in the next section.\n\n\n\n \n\n\\subsection{A Recurrence Relation and Tree for $\\widetilde{\\chi}$}\n\nWhen $N=p^M$ the condition that an index set be a universal sampling set depends on the values of $\\widetilde{\\chi}_k$ for different $k$. To study this we use a recurrence relation in $k$ for $\\widetilde{\\chi}_k(a)$. The formula holds even when $N$ is not a prime power. \n\n\\begin{lemma} \\label{lemma:recursion}\nLet $\\mathcal{I} \\subseteq [0:N-1]$. Then\n \\begin{equation} \\label{eq:chi-recurrence}\n\\widetilde{\\chi}_{k-1}(a) = \\sum_{j=0}^{p-1}\\widetilde{\\chi}_k(a + jp^{k-1}), \\quad \n \\end{equation}\n{for all $a\\in[0:p^{k-1}-1]$.}\n \\end{lemma}\n\\begin{IEEEproof}\nAn integer $x \\in \\mathcal{I}$ that leaves a remainder of $a$ when divided by $p^{k-1}$ is of the form $x=\\alpha p^{k-1} + a$. Let $\\alpha = \\beta p + \\gamma$ for $\\gamma \\in [0:p-1]$. Then $x = \\beta p^k + \\gamma p^{k-1} + a$, that is, $x$ leaves a remainder of either $0\\cdot p^{k-1}+a, 1\\cdot p^{k-1}+a, 2\\cdot p^{k-1}+ a,\\dots$ or $(p-1)\\cdot p^{k-1}+a$ on dividing by $p^k$. The result follows. \n\\end{IEEEproof}\n\nWhen $N=p^M$ the recurrence formula and the relation it expresses between conjugacy classes has an appealing interpretation in terms of a $p$-ary tree. Several arguments in this section will be based on this configuration. \n\nLet $\\mathcal{I}\\subseteq [0: p^M-1]$. We construct a tree with $M+1$ levels and $p^k$ nodes in level $k$, $0 \\le k \\le M$. The nodes in level $k$ are identified by a pair $(k,a)$, with $a \\in [0:p^{k-1}]$. Call the\nnodes at the level $M$ the leaves. At the node $(k,a)$ we imagine placing the congruence class $\\mathcal{I}_{ka} =\\{i\\in \\mathcal{I} \\colon i \\equiv a \\mod p^k\\}$. 
The root is $\\mathcal{I}_{00}=\\mathcal{I}$ and the nodes at the leaves host the sets $\\mathcal{I}_{Ma}$, $a \\in [0:p^M-1]$, each of which is either a singleton or empty. We assign a weight of $\\widetilde{\\chi}_k(a) = |\\mathcal{I}_{ka}|$ to the node $(k,a)$. Further, at each level we arrange the nodes according to the digit reversing permutation, i.e., nodes at level $k$ are arranged as $\\pi_k([0:p^k-1])$, where $\\pi_k$ is the digit reversing permutation from Section \\ref{subsection:digit-reversal}. (This is similar to the starting step of the FFT algorithm, where the indices are sorted according to the reversed digits.) Figure \\ref{fig:tree1} shows the case $N=2^3$, a binary tree with four levels, $k=0, 1, 2, 3$. In the third level of the tree the nodes are ordered $0, 4, 2, 6, 1, 5, 3, 7$, which is $\\pi_3([0:7])$. Then:\n\n\\begin{enumerate}\n\\item The set $\\mathcal{I}_{ka}$ at level $k$ is the disjoint union of the sets at its children nodes at level $k+1$.\n\\item The value of $\\widetilde{\\chi}_k(a)$ at the node $(k,a)$ is \n the sum of the values of $\\widetilde{\\chi}_{k+1}$ at its children\n nodes at level $k+1$. In other words, the weight of a parent is the sum of the weights of its children; this is the recurrence relation. Consequently, the value of $\\widetilde{\\chi}_k$ at any\n node is the sum of the values of $\\widetilde{\\chi}_M$ at the\n leaves at level $M$ descended from the node. \n \\end{enumerate}\n For example, in Figure \\ref{fig:tree1} we have\n\\[\n\\begin{aligned}\n \\widetilde{\\chi}_0(0) &= \\sum_{a=0}^7\\widetilde{\\chi}_3(a), \\\\\n \\widetilde{\\chi}_1(0) &=\\sum_{a=0}^3\\widetilde{\\chi}_3(2a),\\\\\n \\widetilde{\\chi}_1(1) &= \\sum_{a=0}^3\\widetilde{\\chi}_3(2a+1), \n \\end{aligned}\n \\]\n and so on. \n\nIn fact, a more general conclusion is the following: Fix a level $k$. Then the value of $\\widetilde{\\chi}_r$ at any\n node $(r,a)$, for $r \\leq k$ is the sum of the values of $\\widetilde{\\chi}_k$ at the level-$k$ nodes descending from the tree node $(r,a)$. 
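As a quick illustration (ours, not part of the original development), the short Python sketch below computes the congruence-class counts $\\widetilde{\\chi}_k(a)$ for an arbitrary example index set, checks the recurrence \\eqref{eq:chi-recurrence} at every level, and reproduces the digit-reversed leaf ordering $\\pi_3([0:7])=\\{0,4,2,6,1,5,3,7\\}$ used in Figure \\ref{fig:tree1}.
\\begin{verbatim}
def chi(I, p, k):
    # chi[a] = number of elements of I leaving remainder a on division by p**k
    counts = [0] * (p ** k)
    for i in I:
        counts[i % (p ** k)] += 1
    return counts

def digit_reverse(a, p, m):
    # reverse the m base-p digits of a (the permutation pi_m)
    digits = []
    for _ in range(m):
        digits.append(a % p)
        a //= p
    out = 0
    for d in digits:
        out = out * p + d
    return out

p, M = 2, 3
I = [0, 1, 2, 5]                    # an arbitrary example index set in [0:7]
for k in range(1, M + 1):
    prev, cur = chi(I, p, k - 1), chi(I, p, k)
    # recurrence: chi_{k-1}(a) = sum_j chi_k(a + j * p**(k-1))
    assert all(prev[a] == sum(cur[a + j * p ** (k - 1)] for j in range(p))
               for a in range(p ** (k - 1)))

print([digit_reverse(a, p, M) for a in range(p ** M)])  # [0, 4, 2, 6, 1, 5, 3, 7]
\\end{verbatim}
Here \\texttt{chi(I, p, k)[a]} is the weight $\\widetilde{\\chi}_k(a)$ placed at the node $(k,a)$ of the tree, and the assertion is exactly the statement that the weight of a parent is the sum of the weights of its children.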
\n\n\nWhen the root is $[0:p^M-1]$, the extreme case, the leaves are all singletons and the nodes at level $k$ are each of weight $p^{M-k}$.\n\n\\begin{figure*}\n\\centering\n\\begin{tikzpicture}[scale=1.0]\n\\node (0) at (0,8) [circle, minimum size = 1.0cm, fill=gray!10, draw] {$\\mathcal{I}_{00}$};\n\\node (1) at (-4,6) [circle, minimum size = 1.0cm, fill=gray!10,\ndraw] {$\\mathcal{I}_{10}$};\n\\node (2) at (4,6) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{11}$};\n\\node (3) at (-6,4) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{20}$};\n\\node (4) at (-2,4) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{22}$};\n\\node (5) at (2,4) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{21}$};\n\\node (6) at (6,4) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{23}$};\n\\node (7) at (-7,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{30}$};\n\\node (8) at (-5,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{34}$};\n\\node (9) at (-3,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{32}$};\n\\node (10) at (-1,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{36}$};\n\\node (11) at (1,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{31}$};\n\\node (12) at (3,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{35}$};\n\\node (13) at (5,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{33}$};\n\\node (14) at (7,2) [circle, minimum size = 1cm, fill=gray!10, draw]\n{$\\mathcal{I}_{37}$};\n\\draw (node cs:name=0) -- (node cs:name =1);\n\\draw (node cs:name=0) -- (node cs:name =2);\n\\draw (node cs:name=1) -- (node cs:name =3);\n\\draw (node cs:name=1) -- (node cs:name =4);\n\\draw (node cs:name=2) -- (node cs:name =5);\n\\draw (node cs:name=2) -- (node cs:name =6);\n\\draw (node cs:name=3) -- (node cs:name =7);\n\\draw (node cs:name=3) -- (node cs:name =8);\n\\draw (node cs:name=4) -- (node cs:name =9);\n\\draw (node cs:name=4) -- (node cs:name =10);\n\\draw (node cs:name=5) -- (node cs:name =11);\n\\draw (node cs:name=5) -- (node cs:name =12);\n\\draw (node cs:name=6) -- (node cs:name =13);\n\\draw (node cs:name=6) -- (node cs:name =14);\n\\end{tikzpicture}\n\\caption{A tree representing the relations between the congruence classes, and the recurrence relation satisfied by\n $\\widetilde{\\chi}_k(a)$. The value of\n $\\widetilde{\\chi}_k(a)$ at any node is the sum of\n the values of $\\widetilde{\\chi}_k(a)$ at its children nodes in level $k+1$.}\n \\label{fig:tree1}\n\\end{figure*}\n\n\n\\subsection{Elementary and Maximal Sets} \\label{subsection:elementary-maximal}\n\nTo study the structure of universal sampling sets we need a series of definitions. 
When $N$ is a prime power the building blocks are the elementary sets:\n\\begin{defn} \\label{definition:elementary}\nA set $\\mathcal{E} \\subseteq [0:p^M-1]$ is a $k$-\\emph{elementary set} if\n\\[\n\\widetilde{\\chi}_k(a) = 1, \\quad \\text{~for all $a\\in [0:p^k-1]$}.\n\\]\n\\end{defn}\nNote that $|\\mathcal{E}|=p^k$.\n\nAs a first application of the formula \\eqref{eq:chi-recurrence} we can add the adjective ``universal'' to the description of elementary sets.\n\\begin{lemma} \\label{lemma:elementary-universal}\nA $k$-elementary set $\\mathcal{E}$ is a universal sampling set.\n\\end{lemma}\n\n\\begin{IEEEproof}\nFrom $\\widetilde{\\chi}_k(a) = 1$ and \\eqref{eq:chi-recurrence} it follows that $\\mathcal{E}$ has an equal number of elements in each\ncongruence class modulo $p^s$, $s \\leq k$. More precisely,\n\\begin{equation} \\label{eq:chi-constant}\n\\widetilde{\\chi}_s(a) = p^{k-s},\n\\end{equation}\nfor all $s \\le k$. Also from \\eqref{eq:chi-recurrence}, for $s > k$ all the congruence classes are of size $0$ or $1$, i.e.\n\\begin{equation} \\label{eq:chi-in-{0,1}}\n\\widetilde{\\chi}_s(a) \\in\\{0,1\\}.\n\\end{equation}\nTherefore\n\\[\n|\\widetilde{\\chi}_s(a)-\\widetilde{\\chi}_s(b)| \\le1,\n\\]\nfor all $a,b \\in [0: p^k-1]$ and all $s$, and we conclude that $\\mathcal{E}$ is a universal sampling set. \n\\end{IEEEproof}\n\n\nNext, a fruitful approach to understanding the structure of universal sampling sets is to ask how well an arbitrary index set is approximated from within by universal sets. \n\n \n\n\\begin{defn} \\label{definition:maximal}\nLet $\\mathcal{I} \\subseteq [0:N-1]$. A \\emph{maximal universal sampling set} for $\\mathcal{I}$ is a universal sampling set of largest cardinality that is contained in $\\mathcal{I}$. \n\\end{defn}\n\nNote that the definition does not require $N$ to be a prime power, though this will most often be the case. There is an allied notion of a minimal universal set. We define this in Subsection \\ref{subsection:maximal-minimal} below, and show how they are related to maximal sets. Maximal and minimal sets enter naturally and together in connection with uncertainty principles, discussed in Section \\ref{section:uncertainty}.\n\n\nFinding a maximal universal sampling set for a given $\\mathcal{I}$ is a finitary process, so existence is not an issue. However, maximal universal sampling sets need not be unique. For example, take $N=3^2$ and $\\mathcal{I}=\\{0, 1, 2, 3, 6 \\}$. The set $\\mathcal{I}$ is not itself a universal sampling set, and both $\\{0, 1, 2, 3\\}$ and $\\{0, 1,2,6 \\}$ are maximal universal sampling sets contained in $\\mathcal{I}$. \n\nDespite the lack of uniqueness it will be convenient to have a notation, and we let $\\Omega(\\mathcal{I})$ denote a generic maximal universal sampling set in $\\mathcal{I}$. The cardinality $|\\Omega(\\mathcal{I})|$ is well-defined; by definition $|\\mathcal{J}| \\le |\\Omega(\\mathcal{I})|$ for any universal sampling set $\\mathcal{J} \\subseteq \\mathcal{I}$. \n\n\nElementary sets and maximal sets are related through an important construction of an elementary set. \n\n\\begin{defn}\nLet $\\mathcal{I}\\subseteq[0:p^M-1]$ and let $\\bar{k}$ be the largest integer such that no congruence class in $\\mathcal{I}\/p^{\\bar{k}}$ is empty. (It might be that $\\bar{k}=0$.) Let $\\mathcal{I}_{\\bar{k}}^\\dagger$ denote an elementary set obtained by choosing one element from each congruence class in $\\mathcal{I}\/p^{\\bar{k}}$. 
\n\\end{defn}\n\nBy Lemma \\ref{lemma:elementary-universal}, $\\mathcal{I}_{\\bar{k}}^\\dagger$ is a universal sampling set, and is of order $p^{\\bar{k}}$. \nWe now have\n\n\\begin{theorem}\n\\label{thm:maximal-bounds}\nLet $\\mathcal{\\mathcal{I}} \\subseteq [0:p^M-1]$, and $\\mathcal{I}_{\\bar{k}}^\\dagger$ as above. \nThen\n\\begin{enumerate}\n\\item[(i)] $p^{\\bar{k}}\\leq|\\Omega(\\mathcal{I}) | < p^{\\bar{k}+1}$.\n\\item[(ii)] There exists a maximal universal sampling set contained in $\\mathcal{I}$ and containing $\\mathcal{I}_{\\bar{k}}^\\dagger$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{IEEEproof} \nThe lower bound in (i) follows from the definition of a maximal set and the comments above,\n\\[\n p^{\\bar{k}} = |\\mathcal{I}_{\\bar{k}}^\\dagger| \\le |\\Omega(\\mathcal{I})|.\n \\]\n To prove the upper bound, suppose $\\mathcal{J} \\subseteq \\mathcal{I}$ has $|\\mathcal{J}| \\geq p^{\\bar{k}+1}$. By the\ndefinition of $\\bar{k}$ at least one congruence class in $\\mathcal{J}\/p^{\\bar{k}+1}$ is empty, so $\\widetilde{\\chi}_{\\bar{k}+1}(a\\,;\\mathcal{J}) = 0$ for some $a\\in[0:p^{\\bar{k}+1}-1]$. From the cardinality equation \\eqref{eq:cardinality-chi},\n\\[\n\\sum_{\\ell=0}^{p^{\\bar{k}+1}-1}\\widetilde{\\chi}_{\\bar{k}+1}(\\ell \\,; \\mathcal{J}) = |\\mathcal{J}| \\geq p^{\\bar{k}+1}.\\]\nThis implies that at least one congruence class in $\\mathcal{J}\/p^{\\bar{k}+1}$ has at least two elements, or $\\widetilde{\\chi}_{\\bar{k}+1}(b\\,;\\mathcal{J}) \\geq 2$ for some $b$. We then have\n\\[\n|\\widetilde{\\chi}_{\\bar{k}+1}(b\\,;\\mathcal{J}) - \\widetilde{\\chi}_{\\bar{k}+1}(a\\,;\\mathcal{J})| = 2 > 1,\n\\]\nand $\\mathcal{J}$ cannot be a universal sampling set. \n\nFor part {(ii)}, we first show that any maximal universal sampling set $\\Omega(\\mathcal{I})$ set must contain at least one element from each congruence class in $\\mathcal{I}\/p^{\\bar{k}}$. By way of contradiction, suppose that $\\widetilde{\\chi}_{\\bar{k}}(a\\,;\\Omega(\\mathcal{I}))=0$ for some $a$. Since $\\Omega(\\mathcal{I})$ is universal we must then have $\\widetilde{\\chi}_{\\bar{k}}(b\\,;\\Omega(\\mathcal{I}))\\le 1$ for all $b$. By \\eqref{eq:cardinality-chi},\n\\[\n|\\Omega(\\mathcal{I})|= \\sum_{b=0}^{p^{\\bar{k}-1}} \\widetilde{\\chi}_{\\bar{k}}(b\\,;\\Omega(\\mathcal{I})) r}\n\\widetilde{\\chi}_s(a\\,;\\mathcal{E}_{t})=0 \\quad \\text{for all $t>r$},\n\\end{equation}\n i.e., none of the $\\mathcal{E}_{t}$ for $t>r$ will have an element from the congruence class of $a$ modulo $p^s$. This follows (just as described for the tree) from $\\mathcal{E}_{t} \\cap \\mathcal{L}_r=\\emptyset$, and also\n\\begin{align}\n\\mathcal{L}_r &= \\{x \\in [0:N-1] \\colon x \\equiv e \\text{ mod $p^{k_r+1}$ for some $e \\in\n\\mathcal{E}_{r}$}\\} \\nonumber \\\\\n&\\supseteq \\{x \\in [0:N-1]: x \\equiv e \\text{ mod $p^s$ for some }e \\in\n\\mathcal{E}_{r} \\} \\nonumber \\label{eq:univ-structure-1}.\n\\end{align}\n\nFrom \\eqref{eq:chi-in-{0,1}-again} and \\eqref{eq:chi=0-t>r} we conclude that\n\\begin{equation} \\label{eq:sum-chi-in-{0,1}}\n\\sum_{r} \\widetilde{\\chi}_s(a\\,;\\mathcal{E}_r) \\in\\{0,1\\}\n\\end{equation}\nfor all $a$, where the sum is over all $r$ with $k_r l_2 > l_3$. 
Given this, equation \\eqref{eq:|Omega(I)|} appears as \n\\begin{equation} \\label{eq:Omega(I)-2}\n|\\Omega(\\mathcal{I})| = \\alpha_1p^{l_1} + \\alpha_2p^{l_2} + \\alpha_3p^{l_3} + \\ldots .\n\\end{equation}\nIn fact, effectively, Theorem \\ref{theorem:maximal-precise} constructs a\nbase $p$ expansion of $|\\Omega(\\mathcal{I})|$ because each power of $p$ appears at most $p-1$ times.\n\n\\begin{corollary} \\label{corollary:base-p-expansion}\nLet $\\mathcal{I} \\subseteq [0:p^M-1]$. The formula \\eqref{eq:Omega(I)-2} is of the form,\n\\[\n|\\Omega(\\mathcal{I})| = \\sum_r p^{k_r} = \\sum_s \\alpha_s p^{l_s},\n\\]\nwith $l_1 > l_2 > l_3 > \\ldots$ and $\\alpha_s \\in [0:p-1]$ for all $s$.\n\\end{corollary}\n\n\\begin{IEEEproof} Begin with $\\Omega(\\mathcal{I}) = \\bigcup_r\\mathcal{I}_r^\\dagger$.\nSince the $\\mathcal{I}_r^\\dagger$ are disjoint, we have\n\\begin{equation}\n\\label{eq:uncen-phi-k-0}\n\\begin{aligned}\n\\sum_{r=1}^{\\alpha_1} \\widetilde{\\chi}_{l_1+1}(a\\,;\\mathcal{I}_r^\\dagger) &= \\widetilde{\\chi}_{l_1+1}(a\\,;\\bigcup_{r=1}^{\\alpha_1}\\mathcal{I}_r^\\dagger) \\nonumber\\\\\n& \\leq \\widetilde{\\chi}_{l_1+1}(a\\,; \\Omega(\\mathcal{I})), \\quad a\\in[0:p^{l_1+1}-1].\n\\end{aligned}\n\\end{equation}\nSumming this over all $a \\in [0:p^{l_1 +1}-1]$ we have \n\\begin{equation}\n\\label{eq:uncen-phi-k}\n\\begin{aligned}\n\\alpha_1p^{l_1} &= \\sum_{r=1}^{\\alpha_1} |\\mathcal{I}_r^\\dagger| = \\sum_{r=1}^{\\alpha_1} \\sum_a\n\\widetilde{\\chi}_{l_1+1}(a\\,;\\mathcal{I}_r^\\dagger)\\\\\n& \\leq \\sum_i\\widetilde{\\chi}_{l_1+1}(a\\,; \\Omega(\\mathcal{I})) = |\\Omega(\\mathcal{I})| < p^{l_1+1}, \n\\end{aligned}\n\\end{equation}\nso $\\alpha_1 < p$. For the last inequality in\n\\eqref{eq:uncen-phi-k} we have used the upper bound from part (ii) in Theorem\n\\ref{thm:maximal-bounds}. We have also used that $|\\mathcal{I}_r| = p^{k_r}$.\nThe proof for other $\\alpha_s$ is similar. For example, to prove that\n$\\alpha_20$ satisfy\n\\begin{equation}\n\\label{eq:ran-uncen-1}\nN\\log(1\/\\lambda) \\geq (1+ \\delta) d \\log d, \n\\end{equation}\nthen $|\\Omega(\\mathcal{R}_s)| \\geq d$ with probability at least $1- d^{-\\delta}$.\n\\end{theorem}\n\nThis means that if we can choose a large $d$ satisfying \\eqref{eq:ran-uncen-1}, which is possible, for example, if $N$ is large and $\\lambda$ is small, then $|\\Omega(\\mathcal{R}_s)| \\geq d$ with high probability. Thus while it is unlikely that a randomly chosen index set will be universal, it is quite likely that such an index set will contain a large universal set as a subset.\n\n\n\nWe will apply Theorem \\ref{theorem:rand-samp} to the case when $\\mathcal{R}_s$ is the zero set of $f\\colon \\mathbb{Z}_N \\longrightarrow \\mathbb{C}$. Then $\\lambda = |\\text{supp}(f)|\/N$, i.e., $\\lambda$ is the fraction of nonzero entries in $f$.\n\n\\begin{IEEEproof} The proof uses the bound in part (ii) of Theorem \\ref{thm:maximal-bounds}. Let $k$ be the largest integer such that no congruence classes in $\\mathcal{R}_s\/p^k$ are empty. Note that $k$ is random since $\\mathcal{R}_s$ is random. Then $|\\Omega(\\mathcal{R}_s)| \\le d-1$ implies \n\\[\np^k \\le |\\Omega(\\mathcal{R}_s)| \\le d-1,\n\\]\nby Theorem \\ref{thm:maximal-bounds}. 
Therefore\n\\begin{align}\n&\\text{Prob}\\left(|\\Omega(\\mathcal{R}_s)| \\leq d-1 \\right) \\leq \\text{Prob}(p^k \\leq d-1) \\nonumber \\\\\n&\\quad = \\text{Prob}(k \\leq \\lfloor \\log_p(d-1) \\rfloor) \\nonumber \\\\\n&\\quad = \\text{Prob}(\\text{at least one congruence class in } \\nonumber\\\\\n& \\hspace{.5in} \\mathcal{R}_s\/p^{\\lfloor \\log_p(d-1) \\rfloor +1} \\text{ is empty}).\n\\end{align}\nWe will compute the last probability.\n\n\nLet $b = {\\lfloor \\log_p(d-1) \\rfloor +1}$, and let $\\mathcal{N}_{ba}$ be the set of elements in $[0:N-1]$ that leave a remainder of $a\\in[0:p^b-1]$ when divided by $p^b$. Since $N=p^M$ all of the $\\mathcal{N}_{ba}$ have size $t = N\/p^b = p^M \/ p^{\\lfloor \\log_p(d-1) \\rfloor +1}$. \n\nFix a particular residue $a$. The probability that $\\mathcal{N}_{ba}\\cap \\mathcal{R}_s $ is empty (in words, the probability that a particular congruence class goes missing in $\\mathcal{R}_s$) is $\\binom{N-t}{s}\/\\binom{N}{s}$. This is because the number of ways of picking $\\mathcal{R}_s$ is $\\binom{N}{s}$ while the number of ways of picking $\\mathcal{R}_s $ so that $\\mathcal{N}_{ba} \\cap \\mathcal{R}_s = \\emptyset$ is the number of ways of picking $s$ elements from \n\\[ |[0:N-1] \\setminus \\mathcal{N}_{ba}| = N- t\n\\]\n elements. Then\n\\begin{align}\n&\\text{Prob}\\left(\\mathcal{N}_{ba}\\cap \\mathcal{R}_s = \\emptyset\\right) \\nonumber \\\\ \n&= \\binom{N-t}{s}\\Big{\/} \\binom{N}{s} \\nonumber \\\\\n&= \\frac{(N-(t-1)\n -s)(N-(t-2)-s)\\ldots(N-s)}{(N-t+1)(N-t+2)\\ldots N} \\nonumber \\\\\n&\\quad = \\left(1- \\frac{s}{N-t+1} \\right) \\left(1- \\frac{s}{N-t+2}\n\\right)\\ldots \\left(1- \\frac{s}{N} \\right) \\nonumber \\\\\n&\\quad \\leq \\left(1- \\frac{s}{N} \\right) \\left(1- \\frac{s}{N}\n\\right)\\ldots \\left(1- \\frac{s}{N} \\right) =\n\\left(1-\\frac{s}{N}\\right)^t. \\label{eq:rnd-1} \n\\end{align}\nFrom this,\n\\begin{align}\n &\\text{Prob}(\\text{at least one congruence class in\n } \\nonumber \\\\\n & \\hspace{.7in} \\mathcal{R}_s\/p^{\\lfloor \\log_p(d-1) \\rfloor +1} \\text{ is empty})\n \\nonumber \\\\\n &\\quad = \\text{Prob}\\left(\\bigcup_i \\left(\\mathcal{N}_{ba}\\cap \\mathcal{R}_s = \\emptyset \\right)\\right)\n \\nonumber \\\\\n&\\quad \\leq \\sum_i \\text{Prob}\\left(\\mathcal{N}_{ba}\\cap \\mathcal{R}_s = \\emptyset\\right) \\nonumber \\\\\n&\\quad \\leq \\frac{N}{t}\\left(1-\\frac{s}{N}\\right)^t = N\\lambda^t\/t.\\label{eq:end-2}\n\\end{align}\nHence we have from \\eqref{eq:end-2},\n\\begin{equation}\n\\label{eq:ran-bnd-1}\n\\text{Prob}\\left(|\\Omega(\\mathcal{R}_s)| \n\\leq d-1 \\right) \\leq N\\lambda^t\/t. \n\\end{equation}\nNow, $ t = N \/ p^{\\lfloor \\log_p(d-1) \\rfloor +1} \\geq N\/d$, since $\\lfloor x \\rfloor \\leq x$. Using this in \\eqref{eq:ran-bnd-1},\n\\begin{align}\n&\\text{Prob}\\left(|\\Omega(\\mathcal{R}_s)| \\leq d-1 \\right) \\nonumber\\\\ \n& \\leq N\\lambda^t\/t \\leq d\\lambda^{N\/d} \\nonumber \\\\\n& =\\exp\\left(\\log d - \\frac{N\\log(1\/\\lambda)}{d} \\right) \\nonumber \\\\\n& = \\exp\\left(\\log d\\left(1- \\frac{N\\log(1\/\\lambda)}{d\\log d} \\right)\\right) \\nonumber \\\\\n& \\leq \\exp\\left(-\\delta \\log d\\right) \\text{ (from the hypothesis of the theorem) } \\nonumber \\\\\n& = d^{-\\delta}.\n\\end{align}\nWe conclude that $\\text{Prob}\\left(|\\Omega(\\mathcal{R}_s)| \\geq d \\right) \\geq 1- d^{-\\delta}$.\n\\end{IEEEproof}\n\n We can now state a probabilistic uncertainty principle. 
Afterward we will comment on how this compares to the result of Candes, Romberg and Tao \\cite{candes:robust}. \n \n \\begin{theorem}\n\\label{thm:rnd-uncen} \nLet $N=p^M$. Let $\\mathcal{G}_{N,r}$ be the set of all signals $g\\colon \\mathbb{Z}_N \\longrightarrow \\mathbb{C}$ with support of size $r$. Let $g \\in \\mathcal{G}_{N,r}$ be a signal whose support is drawn at random from the set of all index sets of size $r$. Let the values of $g$ on the support set be drawn according to some arbitrary distribution. For $\\delta>0$ let \n\\[\na_{N, \\delta} = \\frac{N}{(1+\\delta)\\log N}\\left(1+ \\log (1+ \\delta) + \\log \\log N\\right).\n\\]\nThen \n\\begin{equation}\n\\label{eq:rnd-uncen}\n|\\text{supp}(g)| + |\\text{supp}(\\mathcal{F}g)| \\geq 1 + a_{N,\\delta} \n\\end{equation}\nwith probability at least $1- (a_{N,\\delta}-r)^{-\\delta}$.\n\\end{theorem}\n\nIf $r$ is small compared to $a_{N,\\delta}$, Theorem \\ref{thm:rnd-uncen} states\nthat almost all signals $g$ in $\\mathcal{G}_{N,r}$ satisfy the \nuncertainty principle above; roughly speaking\n\\[\n|\\text{supp}(g)| + |\\text{supp}(\\mathcal{F}g)| \\geq N(1+ \\log \\log\nN)\/\\log N\n\\]\nfor most $g$.\n\\begin{IEEEproof}\nPicking the support of $g$ at random among sets of size $r$ is equivalent to picking the zero set of $g$ at random among all index sets of size $N-r$. The proof now makes use of Theorem \\ref{theorem:rand-samp} to get a lower bound on $|\\Omega(\\mathcal{Z}(g))|$. For this we need to choose $d, \\delta$ so that\n\\begin{equation}\n\\label{eq:ran-uncen-repeat}\nN \\log (1\/\\lambda) = N \\log N\/r > (1+\\delta)d \\log d.\n\\end{equation}\nFix any $\\delta>0$ and let $d = N \\log (N\/r)\/(1+\\delta)\\log N$. We check that $d, \\delta$ satisfy $\\eqref{eq:ran-uncen-repeat}$:\n\\begin{align*}\n(1+\\delta)d\\log d &= \\frac{N \\log (N\/r)}{\\log N} \\log \\left(\\frac{N \\log (N\/r)}{(1+\\delta)\\log N} \\right)\\\\\n&<\\frac{N \\log (N\/r)}{\\log N} \\log N = N \\log N\/r, \n\\end{align*}\nThen from Theorem \\ref{theorem:rand-samp}, \n\\[\n|\\Omega(\\mathcal{Z}(g))| \\geq N\\log (N\/r) \/ (1+\\delta)\\log N \n\\] \nwith probability $1-d^{-\\delta}$. From the uncertainty principle Theorem \\ref{theorem:uncertainty}, we now have\n\\[\n\\begin{aligned}\n|\\text{supp}(\\mathcal{F}g)| &\\geq 1 + |\\Omega(\\mathcal{Z}(g))|\\\\\n& \\geq 1 + N\\log (N\/r) \/ (1+\\delta)\\log N \n\\end{aligned}\n\\]\nwith probability $1-d^{-\\delta}$.\n\nThe final step in the proof uses a lower bound on $d = N \\log (N\/r)\/(1+\\delta)\\log N$. We have set apart this technical result as Lemma \\ref{lem: rnd-ucen-cvx}, below. This gives\n\\[\n|\\text{supp}(\\mathcal{F}g)| \\geq 1 + a_{N,\\delta} - r \\quad\n\\]\nwith probability $1-d^{-\\delta}$.\nSince $1-d^{-\\delta} \\geq 1- (a_{N,\\delta} - r)^{-\\delta}$, we can say\n\\[\n|\\text{supp}(\\mathcal{F}g)| \\geq 1 + a_{N,\\delta} - r \n\\]\nwith probability $1-(a_{N,\\delta} - r)^{-\\delta}$.\nThe result follows since $ r = |\\text{supp}(g)|$.\n\\end{IEEEproof}\n\n\\begin{lemma}\n\\label{lem: rnd-ucen-cvx}\nLet \n\\[\nd = \\frac{N\\log(N\/r)}{(1+\\delta)\\log N}\n\\]\nand \n\\[\na_{N, \\delta} = \\frac{N}{(1+\\delta)\\log N}\\left(1+ \\log (1+ \\delta) + \\log \\log N\\right),\n\\]\nas in Theorem \\ref{thm:rnd-uncen}. Then $d \\geq a_{N, \\delta} - r$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nThe convex function $\\log (N\/r)$ is bounded below by its tangent at any point $r_0>0$. 
Thus\n\\[\n\\log (N\/r) \\geq \\log (N\/r_0) + \\left(-\\frac{1}{r_0}(r - r_0)\\right).\n\\]\nFor \n\\[\nr_0 = \\frac{N}{(1+\\delta)\\log N}\\,,\n\\]\n this reads\n\\[\n\\begin{aligned}\n\\log (N\/r) &\\geq \\log \\left((1+\\delta)\\log N\\right)\\\\\n& \\hspace{.25in} + \\left(-\\frac{(1+\\delta)\\log N}{N}\\left(r - \\frac{N}{(1+\\delta)\\log N}\\right)\\right).\n\\end{aligned}\n\\]\nMultiplying by $N\/(1+\\delta)\\log N$, we have\n\\begin{align*}\nd &= \\frac{N\\log(N\/r)}{(1+\\delta)\\log N} \\\\\n&\\geq \\frac{N\\log \\left((1+\\delta)\\log N\\right)}{(1+\\delta)\\log N} - \\left(r - \\frac{N}{(1+\\delta)\\log N}\\right) \\\\\n&= \\frac{N}{(1+\\delta)\\log N}\\left(\\log(1+\\delta) + 1 + \\log \\log N \\right) - r \\\\\n&= a_{N, \\delta} - r.\n\\end{align*}\n\\end{IEEEproof}\n\n\\begin{remark}\nThe robust uncertainty principle of Candes, Romberg and Tao in \\cite{candes:robust} is as follows: for $M >0$ there exists a constant $C_M$ such that\n\\[\n|\\text{supp}(g)| + |\\text{supp}(\\mathcal{F}g)| \\geq C_M N(\\log N)^{-1\/2},\n\\]\nwith probability $1- O(N^{-M})$.\nThis inequality is stronger than that of Theorem \\ref{thm:rnd-uncen} by about $(\\log N)^{-1\/2}$. Also, Theorem \\ref{thm:rnd-uncen} holds for $N=p^M$, whereas the inequality above holds for all $N$.\n\nIn our proof of Theorem \\ref{theorem:rand-samp} we have only used the bound $|\\Omega(\\mathcal{Z}(g))| \\geq p^k$ from Theorem \\ref{thm:maximal-bounds}. By using the exact formula for $|\\Omega(\\mathcal{Z}(g))|$ in Theorem \\ref{theorem:maximal-precise} (or by a better lower bound) it might be possible to tighten the uncertainty principle of Theorem \\ref{thm:rnd-uncen} and remove the factor $(\\log N)^{-1\/2}$. \n\\end{remark}\n\n\\subsection{Sumsets and the Cauchy-Davenport Theorem}\n\nOur final application is a generalization of the Cauchy-Davenport theorem \\cite{davenport:residues}, from additive number theory, on the size of sumsets. Again the inspiration comes from Tao's approach, \\cite{tao:uncertainty}, to the original Cauchy-Davenport theorem via Chebotarev's theorem. \n\n\\begin{theorem}\n\\label{thm:sumset-univ}\nLet $\\mathcal{X}, \\mathcal{Y} \\subseteq [0:N-1]$. If either $\\mathcal{X}$ or $\\mathcal{Y}$ is a universal sampling set, then \n\\begin{equation}\n\\label{eq:sumset-univ}\n|\\mathcal{X}+\\mathcal{Y}| \\geq |\\mathcal{X}| + |\\mathcal{Y}| -1,\n\\end{equation}\nwhen $|\\mathcal{X}| + |\\mathcal{Y}| -1 \\leq N$. \n\nHere $\\mathcal{X}+\\mathcal{Y}$ is the sumset defined as\n\\[\n\\mathcal{X} + \\mathcal{Y} = \\{x + y: x \\in \\mathcal{X}, y \\in \\mathcal{Y} \\},\n\\]\nwhere the addition is modulo $N$. \n\\end{theorem}\n\nWe are not assuming that $N$ is a prime power, while the classical theorem has $N=p$ and there are no assumptions on $\\mathcal{X}$ or $\\mathcal{Y}$. \nThat form of the result follows from Theorem \\ref{thm:sumset-univ}, since all index sets in $[0:N-1]$ are universal when $N$ is prime. \n\nAs a corollary we get a statement on the size of $|\\mathcal{X}+\\mathcal{Y}|$ without making an assumption on $\\mathcal{X}$ or $\\mathcal{Y}$.\n\\begin{corollary}\n\\label{cor:sumset-1}\nLet $\\mathcal{X}, \\mathcal{Y} \\subseteq [0:N-1]$ be index sets. 
Then, \n\\begin{equation}\n\\label{eq:sumset-gen}\n|\\mathcal{X}+\\mathcal{Y}| \\geq \\max\\{ |\\Omega (\\mathcal{X})| + |\\mathcal{Y}| -1, |\\mathcal{X}| + |\\Omega (\\mathcal{Y})| -1 \\}.\n\\end{equation}\n\\end{corollary}\n\\begin{IEEEproof} Since $\\Omega(\\mathcal{X}) \\subseteq \\mathcal{X}$, it follows that $\\Omega(\\mathcal{X}) + \\mathcal{Y} \\subseteq \\mathcal{X} + \\mathcal{Y}$. Now,\n\\begin{align*}\n|\\mathcal{X}+ \\mathcal{Y}| \\geq |\\Omega(\\mathcal{X}) + \\mathcal{Y}| \\geq |\\Omega(\\mathcal{X})| + |\\mathcal{Y}| -1 \\text{ from Theorem \\ref{thm:sumset-univ}}.\n\\end{align*}\nThe inequality $|\\mathcal{X} + \\mathcal{Y}| \\geq |\\mathcal{X}| + |\\Omega(\\mathcal{Y})| -1 $ follows similarly.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Theorem \\ref{thm:sumset-univ}]\nFirst note that \\eqref{eq:sumset-univ} follows trivially when either $X$ or $Y$ is a singleton. (More precisely, if, say, $\\mathcal{X}$ is a singleton, then $\\mathcal{X}+\\mathcal{Y}$ is just a translate of $\\mathcal{Y}$, and so \\eqref{eq:sumset-univ} holds with equality). For the rest of the proof, we assume that $|\\mathcal{X}|, |\\mathcal{Y}| \\geq 2$. Let $|\\mathcal{X}|=r$, $|\\mathcal{Y}| =s$.\n\n\nAssume without loss of generality that $\\mathcal{X}$ is universal. Let\n\\[\nf_1 \\in \\mathbb{B}^{\\mathcal{X}} \\text{ be such that }f_1(\\left[1:r\\right]) = (\\underbrace{0, 0, \\ldots, 0}_{r-1 \\text{ times }}, 1). \n\\]\nSuch an $f_1$ exists because the set $[1:r]$, as an index set of $r$ consecutive integers, is a universal sampling set, so is in particular a sampling set for $\\mathbb{B}^\\mathcal{X}$. Similarly let\n\\[\nf_2 \\in \\mathbb{B}^\\mathcal{Y} \\text{ be such that }f_2(\\left[r:r+s-1\\right]) = (\\underbrace{0, 0, \\ldots, 0}_{s-1 \\text{ times }}, 1),\n\\]\n again possible because $[r:r+s-1]$ is a set of $s$ consecutive integers, and hence a sampling set for $\\mathbb{B}^\\mathcal{Y}$. Note that $f_1f_2 \\in \\mathbb{B}^{\\mathcal{X}+\\mathcal{Y}}$ and so $|\\mathcal{X}+\\mathcal{Y}| \\geq {\\rm supp}(\\mathcal{F}(f_1f_2))$. Note also that the zero set $\\mathcal{Z}(f_1f_2)$ of $f_1f_2$ contains $[1:r+s-2]$, and hence, since the latter is a universal sampling set, $ |\\Omega \\left(\\mathcal{Z}(f_1f_2)\\right)| \\geq r+s-2 = |\\mathcal{X}|+|\\mathcal{Y}|-2$.\n \n Now we apply the uncertainty principle of Theorem \\ref{theorem:uncertainty} to $f_1f_2$. We have, so long as $f_1f_2 \\neq 0$,\n\\begin{align} \\label{eq:sumset-0}\n|\\mathcal{X}+\\mathcal{Y}| &\\geq {\\rm supp}\\left(\\mathcal{F}(f_1f_2)\\right) \\nonumber \\\\\n&\\geq 1 + |\\Omega\\left(\\mathcal{Z}(f_1f_2)\\right)| \\nonumber \\\\\n& \\geq 1 + |\\mathcal{X}|+|\\mathcal{Y}|-2 = |\\mathcal{X}| + |\\mathcal{Y}| -1, \n\\end{align}\nSo we have proved that $|\\mathcal{X}+\\mathcal{Y}| \\geq |\\mathcal{X}| + |\\mathcal{Y}| -1 $ if we know that $ f_1 f_2 \\neq 0$. \n\nFor this, again from Theorem \\ref{theorem:uncertainty} we have \n\\[\n|\\mathcal{Z}(f_1)| \\leq |\\Phi({\\rm supp}(\\mathcal{F}f_1))| - 1 \\leq |\\Phi(\\mathcal{X})| - 1,\n\\]\nsince $f_1 \\in \\mathbb{B}^\\mathcal{X}$. But $\\mathcal{X}$ is universal, so $\\Phi(\\mathcal{X})=\\mathcal{X}$ and\n\\begin{equation}\n\\label{eq:sumset-1}\n|\\mathcal{Z}(f_1)| \\leq |\\mathcal{X}| -1. \n\\end{equation}\nBy definition of $f_1$, the set $[1:r-1]=[1:|\\mathcal{X}|-1]$ is already in $\\mathcal{Z}(f_1)$. Together with \\eqref{eq:sumset-1}, this implies that $f_1$ cannot have any more zeros. In particular, $f_1(r+s -1) \\neq 0$. 
Since $f_2(r+s -1) = 1$, $f_1f_2$ cannot be identically zero and \\eqref{eq:sumset-0} applies. \\end{IEEEproof}\n\nAn important generalization of the Cauchy-Davenport theorem to any finite abelian group, not necessarily of prime order, is due to Kneser, \\cite{kneser:sumset}. \n\\begin{theorem}[Kneser]\n Let $G$ be a finite abelian group. Let $A, B \\subseteq G$ be non empty subsets of $G$. Let $H$ be the set of periods, defined by $H = \\{h \\in G : h + (A+B) = A+B \\}$. (Thus $A+B$ is periodic if $H \\ne \\{0\\}$.) \n Then \n \\[\n |A+B| \\geq |A| + |B| - |H|.\n \\]\nHence unless $A + B$ is periodic, $|A+B| \\geq |A| + |B| - 1$. \n\\end{theorem} \n\nThough the form is similar, this result neither implies nor is implied by Theorem \\ref{thm:sumset-univ}. We give two examples. Let $N=8$, $\\mathcal{X}= \\{0,1\\}$, $\\mathcal{Y} = \\{0,4\\}$. Then $\\mathcal{X}$ is universal and $\\mathcal{X} + \\mathcal{Y} = \\{0, 1, 4, 5\\}$ is periodic with period $4$. So Theorem \\ref{thm:sumset-univ} applies, but Kneser's theorem does not. Next let $N=16$, $\\mathcal{X} = \\{0,2\\}, \\mathcal{Y} = \\{0,2,4\\}$. Then $\\mathcal{X}+\\mathcal{Y} = \\{0,2,4,6,8,10\\}$, which is not periodic, and neither $\\mathcal{X}$ nor $\\mathcal{Y}$ is universal. So Kneser's theorem applies, but Theorem \\ref{thm:sumset-univ} does not. We hope to understand this more thoroughly.\n\n\n \n\\appendices{\\section{Condition Number Associated with the Universal Sampling Set $\\mathcal{I}^*$} \\label{appendix:condition-number}}\n\nAn index set of consecutive integers is the simplest universal sampling set, but there is a catch in using it.\nLet $\\mathcal{I}$ be a universal sampling set of size $d$, $f\\in \\mathbb{C}^N$, and $f_\\mathcal{I}$ the $d$-vector obtained from $f$ by sampling at locations in $\\mathcal{I}$. If $f$ is in some bandlimited space $\\mathbb{B}^\\mathcal{J}$, $|\\mathcal{J}|=d$, then the interpolation formula \\eqref{eq:interpolation-formula-2} reads\n\\[\nf = \\mathcal{F} E_\\mathcal{J} (E_\\mathcal{I}^T \\mathcal{F} E_\\mathcal{J})^{-1} f_\\mathcal{I}.\n\\]\nThe practical difficulty is the computation of the inverse of $E_\\mathcal{I}^T \\mathcal{F} E_\\mathcal{J}$. Suppose we use $\\mathcal{I} = \\mathcal{I}^* =[0:d-1]$ as a universal sampling set. We give a lower bound on the condition number of $E_\\mathcal{I}^T \\mathcal{F} E_\\mathcal{J}$ that can be quite large for some $\\mathcal{J}$, even though the matrix $E_\\mathcal{I}^T \\mathcal{F} E_\\mathcal{J}$ is invertible for all $\\mathcal{J}$. \n\nFor $\\mathcal{I} = [0:d-1]$, note that\n\\begin{align*}\n|\\det\\left(E_{\\mathcal{I}}^T \\mathcal{F} E_{\\mathcal{J}} \\right)| &= |\\det(\\zeta_N^{ij})_{i \\in \\mathcal{I}, j \\in \\mathcal{J}} |\\\\\n&= \\prod_{j_1, j_2 \\in \\mathcal{J}}|\\zeta_N^{j_1} - \\zeta_N^{j_2}| \\\\\n&= \\prod_{j_1, j_2 \\in \\mathcal{J}}\\left|2\\sin\\frac{2\\pi(j_1-j_2)}{N}\\right|.\n\\end{align*}\n\nIf $\\{\\sigma_i\\}$ are the singular values of $A = E_{\\mathcal{I}}^T \\mathcal{F} E_{\\mathcal{J}}$, then \n\\begin{equation}\n\\label{eq:minsgn}\n\\det(A) = \\sigma_1\\sigma_2\\sigma_3\\ldots \\sigma_d \\geq \\sigma_{\\min}^d.\n\\end{equation}\nAlso if $a_{rk} = \\exp(-2\\pi i rj_k\/N)$ are the entries of $A$, then \n\\begin{equation}\n\\label{eq:maxsgn}\nd^2 = \\sum_{r,k = 0}^{d-1}|a_{rk}|^2 = \\text{tr}(A^*A) = \\sum_{r=0}^{d-1} \\sigma_r^2 \\leq d\\sigma_{\\text{max}}^2,\n\\end{equation}\nand so $\\sigma_{\\text{max}}^2 \\geq d$. 
\n\nFrom \\eqref{eq:minsgn} and \\eqref{eq:maxsgn}, the condition number satisfies\n\\[\n\\frac{\\sigma_{\\max}}{\\sigma_{\\min}} \\geq \\sqrt{d}\\left(\\frac{1}{\\prod_{j_1, j_2 \\in \\mathcal{J}}|2\\sin\\frac{2\\pi(j_1-j_2)}{N}|}\\right)^{1\/2d}.\n\\]\nA possible scenario may be when $d$ is very small and $N$ is very large. In this case, the condition number can be very large if the frequency slots $\\mathcal{J}$ are clustered.\n\n\n\\section{Counting Bracelets} \\label{appendix:bracelets}\n\nSeveral of our results, Theorem \\ref{theorem:universal} for example, depend only on the bracelet of an index set rather than on the index set itself. Thus it is useful to know how many bracelets there are and how to enumerate them. Counting bracelets -- actually, multicolored bracelets -- is a standard application in combinatorics of the orbit stabilizer theorem, and the problem is treated in many places. Our situation is slightly different because we want a count that specifies the number of black beads in a black-and-white bracelet, corresponding to the size of the index set that determines the locations of the black beads. Nevertheless, the orbit stabilizer theorem can still be applied, and we have the following results.\n\n\\begin{theorem} Let $\\phi$ denote Euler's totient function.\nWhen $N$ is odd, the number of black-and-white bracelets of length $N$ with exactly $d$ black beads is\n\\[\n\\begin{array}{ll}\n\\frac{1}{2} { { {(N-1)}\/{2} } \\choose { d\/2 } } + \\frac{1}{2N} \\sum_{k|N , k|d} \\frac{\\phi(k)}{N} { {N\/k} \\choose {d\/k}} \n\t\t& \\quad \\textrm{for even } d, \\\\ \\\\\n\\frac{1}{2} { { {(N-1)}\/{2} } \\choose { ({d-1})\/{2} } } + \\frac{1}{2N} \\sum_{k|N , k|d} \\frac{\\phi(k)}{N} { {N\/k} \\choose {d\/k}} \n\t\t& \\quad \\textrm{for odd } d.\n\t\t\\end{array} \n\\]\n\nWhen $N$ is even, the number of black-and-white bracelets of length $N$ with exactly $d$ black beads is\n\\[\n\\begin{array}{ll}\n\\frac{1}{2} {{ N\/2 } \\choose { d\/2 } } + \\frac{1}{2N} \\sum_{k|N , k|d} \\frac{\\phi(k)}{N} { {N\/k} \\choose {d\/k}} \n\t\t& \\quad \\textrm{for even } d, \\\\ \\\\\n\\frac{1}{2} {{(N\/2) - 1} \\choose {{(d-1)}\/{2} }} + \\frac{1}{2N} \\sum_{k|N , k|d} \\frac{\\phi(k)}{N} { {N\/k} \\choose {d\/k}} \n\t\t& \\quad \\textrm{for odd } d.\n\t\t\\end{array} \n\\]\t\n\\end{theorem}\n\nWe omit the proof; see \\cite{Will:thesis}. An efficient algorithm for enumerating bracelets has been devised only recently by Sawada \\cite{sawada:bracelets}. An algorithm for determining when two index sets are in the same necklace is due to J.P. Duval \\cite{duval:necklace}. It can also be used for bracelets. See \\cite{Will:thesis} for examples of both of these.\n\n\n\\section{Additional References} \\label{section:additional-references}\n\nThough our work has concerned discrete-time signals exclusively, there is also a notion of universal sampling sets for continuous-time signals. We will not give the definition; it is interesting and not clear what the relations between the two may be. Here we cite only a few sources, starting with the paper of Landau \\cite{landau:density} that featured the renowned necessary density condition on sampling sets. 
More recently, many interesting results have been obtained by Olevskii and Ulanovskii \\cite{olevskii-ulanovskii:universal}, \\cite{olevskii-ulanovskii:universal-2} on universal sampling and stable reconstruction, by Matei and Meyer \\cite{matei-meyer:variant-compressed}, who work with lattices and make contact with compressed sensing, and by Bass and Gr\\\"ochenig \\cite{Bass04randomsampling}, who consider random sampling. Of course, anyone writing on so fundamental a topic as sampling and interpolation will encounter an enormous literature, and most probably miss an equal or greater amount. We apologize to the authors of works we have missed.\n\n\\section*{Acknowledgments}\n\nThere are many people to thank for their interest, insight, and encouragement over quite some time, in particular S. Boyd, M. Chudnovsky, A. El Gamal, J.T. Gill, S. Gunturk, B. Hassibi, J. Sawada, J. Smith, and M. Tygert. We also thank the reviewers for their thorough and thoughtful comments. \n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\\section*{Preprocessing the network by weighting the edges}\n\n\n\n\\section{Introduction}\n\nIn many sciences---for example Sociology~\\citep{scott2017social}, Biology~\\citep{chung2015bridging}, and Computer Science~\\citep{hendrickson2000graph}---units under study often belong to communities, and thus may behave similarly.\nHence, understanding the community structure of units is critical in these sciences, and much work has been devoted to the development of methods to detect these communities~\\citep{fortunato2010community}. \n\nCommunity detection methods often use the framework of mathematical networks: units are nodes (vertices) in the graph, and edges are drawn between two nodes if the corresponding units interact with each other.\nUnder this framework, the problem of community detection becomes a graph partitioning problem where units within each block of the partition are ``optimally'' connected under some objective.\nCommonly, communities are identified through edge prevalence; edges are more frequently observed between nodes in the same community than between nodes in different communities.\n\nOptimal detection of communities is often $\\mathcal{NP}$-hard because the number of possible partitions grows exponentially as the number of units $n$ grows.\nMany (mostly heuristic) methods have been developed for the detection of communities under large $n$ settings.\nTwo such methods include the Louvain algorithm~\\citep{blondel2008fast} and the CNM~\\citep*{clauset2004finding} algorithm.\nMoreover, the ability of these algorithms to accurately identify communities may be improved through incorporating additional graph structure, which is typically integrated into these algorithms through the use of edge weights~\\citep{de2013enhancing,sun2014weighting, khadivi2011network}.\n\n\n\nOf note, previous work has shown that measures quantifying the cyclic structure of the network may be useful in detecting communities~\\citep{newman2003structure, radicchi2004defining, kim2005cyclic, vragovic2006network, zhang2008clustering, shakeri2017network}.\nThe intuition is that, if edges are more prevalent within a community than across communities, cycles (especially small cycles) may be more prevalent within communities as well.
\nHence, edge weights that incorporate information about the density of cycles within a graph may be useful in detecting communities within that graph.\nHowever, some of these methods may be computationally prohibitive to implement as network sizes grow very large.\n\n\n\nOn the other hand, random walks have been proposed as a computationally efficient tool for uncovering the structure of networks. \nWhen the density of edges is higher within communities than across communities, random walks will spend a majority of their time traveling within communities~\\citep{hughes1995random, lai2010enhanced}. \nHence, network structure may be uncovered by following the paths of these random walks.\nAn important variant is the non-backtracking random walk (NBRW), in which the random walk is prohibited from returning back to a node in exactly two steps~\\citep{alon2007non, fitzner2013non}.\nWhile work on NBRW currently focuses on its fast convergence rate, we exploit another useful property of NBRW---its ability to identify cyclic structure. \n\nIn this paper, we consider the process in which a NBRW is performed until it forms a cycle, at which point the walk terminates and restarts---we call this process the renewal non-backtracking random walk (RNBRW).\nWe are particularly interested in the \\textit{retracing probability} of an edge---the probability that the edge completes the cycle in one iteration of RNBRW. \nIntuitively, edges with higher retracing probabilities are more critical to the formation of a cycle.\nHence, the cyclic structure of a network may be incorporated into community detection algorithms through weighting edges by their retracing probability.\nAlthough analytically obtaining exact values for retracing probabilities is difficult, repeated iterations of RNBRW can yield accurate estimates of these probabilities. \nAdditionally, these iterations can be performed in parallel, allowing these probabilities to be estimated extremely fast, even for networks containing millions of nodes. \n\nWe show through simulation that weighting edges by their estimated retracing probabilities through RNBRW improves substantially the ability for Louvain and CNM methods to detect communities.\nThe improvement is especially noticeable in the case of sparse graphs.\nAdditionally, we show that using Louvain and CNM with these weights may offer comparable performance to that of state-of-the-art community detection methods with considerably less computation.\n\n\n\n\\section{Notation and preliminaries}\\label{sec:prelim}\nLet $G=(V,E)$ denote a graph with node set $V$ and edge set $E$. \nWe do not currently make any restrictions on whether $G$ is directed or undirected. \nLet $n = |V|$ and $m = |E|$. \nIf $G$ is undirected, we will denote the edge between vertices $i$ and $j$ by $ij$, and if $G$ is directed, we denote the edge traveling from $i$ to $j$ by $\\vec{ij}$.\nFor convenience, we may refer to edges in an undirected graph $G$ using the latter notation; \nin this case, $\\vec{ij}$ and $\\vec{ji}$ refer to the same edge $ij$.\nEdges $\\vec{ij} \\in E$ may be weighted; let $w_{\\vec{ij}}$ denote the value of this weight.\nFor ease of notation, we assume an unweighted graph has default edge weights $w_{\\vec{ij}} = 1$.\nAn \\textit{adjacency matrix} $A$ for a graph $G$ is an $n \\times n$ matrix \nwhere element $A_{ij} =1$ if edge $\\vec{ij} \\in E$ and $A_{ij} = 0$ otherwise. \n\nThe (possibly weighted) \\textit{degree} of node $i$, denoted $d_i$, is the sum of the edge weights that begin from node $i$. 
\n\\begin{equation}\n    \\label{eq:definedi}\n    d_i \\equiv \\sum_{j = 1}^n w_{\\vec{ij}}\n\\end{equation}\nWe assume edge weights are scaled so that\n\\begin{equation}\n    \\sum_{i=1}^n d_i = 2m.\n\\end{equation}\nNote that, in the unweighted case, $d_i$ is simply the number of edges beginning from node $i$. \n\nWe deviate from conventional literature by defining a walk by the edges it traverses; a \\textit{walk} $(\\overrightarrow{v_0v_1}, \\overrightarrow{v_1v_2}, \\ldots, \\overrightarrow{v_{k-1}v_k})$ of length $k$ is a vector of edges $\\overrightarrow{v_{\\ell-1}v_{\\ell}} \\in E$, $\\ell = 1,2,\\ldots, k$, connecting (possibly non-distinct) nodes $v_0,v_1, v_2, \\ldots,v_k \\in V$.\nA \\textit{random walk} $(e_1, e_2, \\ldots, e_k)$ is a walk such that:\n\\begin{align}\n    &P\\left(e_{\\ell+1} = \\overrightarrow{v_{\\ell}v_{\\ell+1}} | e_{\\ell} = \\overrightarrow{v_{\\ell-1}v_{\\ell}}, \\ldots, e_1 = \\overrightarrow{v_{0}v_{1}}\\right)\\nonumber \\\\\n    =\\;& \n    P\\left(e_{\\ell+1} = \\overrightarrow{v_{\\ell}v_{\\ell+1}} | e_{\\ell} = \\overrightarrow{v_{\\ell-1}v_{\\ell}}\\right) = \\frac{w_{\\overrightarrow{v_{\\ell}v_{\\ell+1}}}}{d_{v_\\ell}}\n\\end{align} \nA walk $c$ is a \\textit{simple cycle} \nif $v_{0} = v_{k}$ and $v_0,v_1, v_2, \\ldots,v_{k-1}$ are distinct.\n\n\nThe nodes $V$ are partitioned into $q$ communities, numbered 1 through $q$.\nLet $g_v \\in \\{1, \\ldots, q\\}$ denote the community to which $v$ belongs and $\\mathbf g = (g_1, g_2, \\ldots, g_n)$.\nIn problems of community detection, we wish to uncover the true label $g_v$ for each node $v \\in V$.\n\n\\subsection{Community detection through maximizing modularity}\nOne popular approach for the problem of community detection is to choose communities that correspond to the optimal solution of an integer programming problem. \nA common objective function for community detection is \nthe graph modularity~\\citep{newman2003structure}: \n\\begin{equation}\\label{eq_Modularity}\n    M(\\mathbf g; A) = \\frac{1}{2m}\\sum_{i,j}\\left(A_{ij}-\\frac{d_i d_j}{2m} \\right) \\delta_{g_i g_j}.\n\\end{equation}\nHere, $\\delta_{g_i g_j}$ is the Kronecker delta, equal to 1 if\n$g_i = g_j$---that is, if $i$ and $j$ belong to the same community.\nIntuitively, for a given community labeling $\\mathbf g$, the modularity compares the fraction of edge weight falling within communities to the fraction expected under a random model that preserves the degrees $d_i$.\nBetter choices of communities $\\mathbf g$ correspond to larger values of $M(\\mathbf g; A)$.
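To make the objective concrete, the following small sketch (ours; it assumes the \\texttt{networkx} and \\texttt{numpy} Python packages, and simply transcribes \\eqref{eq_Modularity} rather than giving an optimized routine) evaluates $M(\\mathbf g; A)$ on a toy graph consisting of two 4-cliques joined by a single edge. \\texttt{networkx} also provides a built-in \\texttt{modularity} function in its \\texttt{algorithms.community} module.
\\begin{verbatim}
import itertools
import networkx as nx

def modularity(G, labels):
    # M(g; A) = (1/2m) * sum_{i,j} (A_ij - d_i d_j / 2m) * delta(g_i, g_j)
    A = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
    two_m = A.sum()                    # equals 2m for an undirected graph
    deg = A.sum(axis=1)
    total = 0.0
    for i, j in itertools.product(range(len(A)), repeat=2):
        if labels[i] == labels[j]:
            total += A[i, j] - deg[i] * deg[j] / two_m
    return total / two_m

# two 4-cliques joined by one edge
G = nx.Graph()
G.add_edges_from(itertools.combinations(range(4), 2))
G.add_edges_from(itertools.combinations(range(4, 8), 2))
G.add_edge(3, 4)

print(modularity(G, [0, 0, 0, 0, 1, 1, 1, 1]))   # about 0.423
print(modularity(G, [0] * 8))                    # 0.0
\\end{verbatim}
The planted two-community labeling scores about $0.423$, while the trivial single-community labeling scores $0$, in line with the discussion above.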
\nChoosing communities $\\mathbf g$ that maximize the modularity---or almost any other graph partitioning objective---is an $\\mathcal{NP}$-hard problem; hence, this approach often focuses on finding heuristic or approximately optimal solutions~\\citep{brandes2008modularity}.\nWe now detail two such approaches, the CNM method~\\citep{clauset2004finding} and the Louvain method~\\citep{blondel2008fast}.\n\n\n\n\n\n\\subsubsection{CNM method}\nThe CNM method is a greedy algorithm in which communities are iteratively merged so that each merger maximizes the change in modularity.\nLet $\\Delta Q_{g_ig_j}$ denote the change in modularity obtained when communities $g_i$ and $g_j$ are merged together.\nFor CNM, each node initially begins in its own singleton community.\nIn this case, the change of modularity obtained from merging nodes $i$ and $j$ into a single community is\n\\begin{equation}\\label{eq_CNMstep}\n \\Delta Q_{ij} = \\begin{cases} \n 1\/{m}-2d_id_j\/(2m)^2 & \\text{if }A_{ij}\\neq 0 \\\\\n 0 & \\text{otherwise}\n\\end{cases}\n\\end{equation}\nOnce the initializations are complete, the algorithm repeatedly selects the merger that maximizes modularity, and then updates the $\\Delta Q$ values, until no further mergers are possible.\nCNM then selects communities according to the largest found modularity.\nThe computational complexity of the CNM algorithm is\n$O(n^2\\log n)$.\n\\subsubsection{Louvain method}\nThe Louvain method is divided into two phases. \nThe first phase initializes each node into its own community \nand considers increasing modularity locally by changing a node's community label to that of a neighboring community. \nFor example, the modularity change obtained by moving singular node $i$ from its community to a neighboring community $g$ is calculated by:\n\\begin{equation}\\label{eq_LouvStep}\n\\begin{split}\n\\Delta Q_{-i,g}={\\bigg [}{\\frac {\\Sigma _{in}+d_{i,in}}{2m}}-{\\bigg (}{\\frac {\\Sigma _{tot}+d_{i}}{2m}}{\\bigg )}^{2}{\\bigg ]}\n-{\\bigg [}{\\frac {\\Sigma _{in}}{2m}}-{\\bigg (}{\\frac {\\Sigma _{tot}}{2m}}{\\bigg )}^{2}-{\\bigg (}{\\frac {d_{i}}{2m}}{\\bigg )}^{2}{\\bigg ]}\n\\end{split}\n\\end{equation}\nwhere $\\Sigma_{in}$ is sum of internal edge weights within $g$, $ \\Sigma _{tot}$ is the sum of all edge weights incident to nodes in $g$, $d_{i,in} $ is the sum of the weights of edges between node $i$ and nodes in $g$, \nand $m$ is the sum of the edge weights of the entire network. \nFor each node $i$, this change in modularity is computed for each community neighboring $i$, and $i$ is then assigned the community label that corresponds with the largest change. \nIf no positive change in modularity is possible, then $i$ does not change its community label. \n\nIn the next phase, the algorithm uses the final network from the first phase to generate a new network. 
\nEach community from phase one is condensed into a single node, and edges between two (possibly non-distinct) communities are condensed into a single edge with weight equal to the sum of the corresponding between-community edge weights.\nIn particular, self-loops in the new network may be formed if the corresponding community in the previous network has more than one node.\nPhase one is then applied to this new network.\n\nThese two phases are repeated iteratively until no local improvement in the modularity is possible.\nThe Louvain algorithm examines fewer mergers (only adjacent ones) compared to CNM, and hence, the time complexity is reduced---it appears to run in linearithmic time for sparse graphs \\citep{lancichinetti2009community}.\n\n\\subsection{Using edge weights to improve community detection}\n The quality of community detection may be improved by adding a pre-processing step to weight edges of the graph. \n We briefly go over some proposed weighting schemes in the literature.\n\n\\subsubsection{Weights based on random walks}\nInspired by message propagation, \\citet{de2013enhancing} introduces a method called weighted edge random walk-K path (WERW-Kpath) that runs a $k$-hop random walk and weights edges on the walk based on their ability to pass a message. \nWERW-Kpath initially assigns to each edge $e\\in E$ a weight $w_e = 1$.\nAt each iteration, WERW-Kpath chooses a node at random and runs a biased random walk of at most $k$ steps; the probability of traveling from node $i$ to $j$ is the fraction of times that $j$ has been visited from node $i$ over the total number of times $i$ has been visited. \nFinal edge weights are obtained after many iterations of WERW-Kpath.\n\\citet{lai2010enhanced} suggests a similar approach for using random walks to explore local structures of communities, and ultimately obtains an edge weight for edge $ij$ as the cosine distance between $i$ and $j$.\n\n\\subsubsection{Preprocessing the graph by analyzing cyclic topologies}\\label{sec:LitRevWeighting}\nThe idea of using cyclic structure to identify communities relies on the fact that, if edges are more prevalent within communities than across communities, then flows will also tend to stay within communities. \nFor example, \\citet{klymko2014using} focused on using triangles (cycles of length 3) to improve community detection in directed networks by weighting the edges based on the \\textit{3-cycle cut ratio}.\n\\citet{radicchi2004defining} incorporate the importance of triangles using the edge clustering coefficient.\nThis method is similar to \\citet{newman2004finding} in which at each iteration the edge with the smallest clustering coefficient is removed.\nThe complexity of the algorithm in~\\citet{radicchi2004defining} is $O(m^4\/n^2)$---roughly $O(n^2)$ for sparse graphs. \nThis algorithm may perform poorly in graphs with few short cycles.\n\n\\citet{castellano2004self} modified the edge clustering coefficient for weighted networks, in which the number of cycles is multiplied by the edge weight. 
\n\\citet{zhang2008clustering} extended this to bipartite graphs with \ncycles of even length.\n\\citet{vragovic2006network} introduced the node loop centrality measure such that communities are built around nodes with high centrality in graph cycles---this algorithm has time complexity of $O(nm)$.\nCommunities with rich cyclic structures---many short loops---are known to have higher quality; flows tend to stay longer and information disseminates faster in their nodes.\n\n\\citet{shakeri2017network} suggest weighting edges using a method called \\textit{loop modulus} (LM). \nLM finds a probability distribution over all cycles such that two random cycles from this distribution have \"minimum edge overlap.\"\nThe weight of an edge is the probability that this edge is overlapped.\n\n\\subsubsection{Other weighting methods based on local and global network structure}\\label{sec:k_path}\nThe use of global measures for determining edge weights requires the knowledge of entire networks. \nWhile these edge weights may be helpful in community detection, their use requires substantial computation, and sometimes the weights lack the necessary specificity to the community structures.\nLocal measures are often much faster to compute and may capture local network subtleties that global methods cannot, but they cannot provide a thorough picture of the global network structure. \nWe now summarize the use of global and local measures in the literature.\n\n\\citet{newman2004finding} propose the use of edge-betweenness centrality (EBC) as a way to evaluate the community structure of networks. \nEBC was introduced by \\citet{freeman1977set} and, for each edge $e$, measures the fraction of geodesic paths between all pairs of nodes that pass through $e$. \n\\citet{khadivi2011network} proposed to use the common neighbor ratio in addition to EBC to balance the local and global measurements and weight the graph edges. \nThis allows for smaller weights of edges formed between clusters and also facilitates greedy algorithms in identifying the hierarchical structure of the clusters. \nThe runtime complexity of this weighting strategy is dominated by computing the edge-betweenness centrality, which is in $\\mathcal{O}(nm)$ \\citep{brandes2001faster}. \nThis runtime is prohibitive for large networks.\nFurthermore, tuning the hyperparameters requires the design of efficient heuristics to improve the performance of the weighting algorithm.\n\\citet{zhang2015novel} focused on the problem of local and global weighting of the edges to improve the separation of communities. They proposed to measure the similarity of each pair of nodes based on their similarity to other nodes through an iterative function called SimRank.\n\n\\section{Renewal non-backtracking random walk (RNBRW)} \\label{sec_3}\n\nWe now introduce our method for weighting edges called the renewal non-backtracking random walk (RNBRW).\nThis method is effective in capturing network cyclic structure while maintaining the computational advantages given by random walk methods. \nWe begin with some definitions.\n\nRecall the definition of a random walk in Section~\\ref{sec:prelim}.\nA non-backtracking random walk (NBRW) is a random walk that cannot return to a node $i$ in exactly two steps. 
\nThat is, for all nodes $j$,\n\\begin{eqnarray}\n P(e_{\\ell+1} = \\vec{ji} | e_{\\ell} = \\vec{ij}) = 0.\n\\end{eqnarray}\nNote that, while NBRW is not Markovian on the nodes of the graph, we can make NBRW Markovian on the edges by replacing each undirected edge $uv$ in $E$ with two directed edges $\\vec{uv}$ and $\\vec{vu}$. \n\nFor an unbiased NBRW, the transition probability is \\citep{kempton2016non}\n\\begin{equation}\\label{eq_NBRW_transition}\n\\Pr(\\vec{jk},\\vec{ij})=\n \\begin{cases} \n1\/(d_j-1),& \\text{ if } \\vec{ij}, \\vec{jk} \\in E \\text{ and } k \\neq i, \\\\\n0, & \\text{otherwise},\n\\end{cases}\n\\end{equation}\nwhere $d_j$ is the out-degree of node $j$.\nA primary benefit of NBRW is that it explores the graph more efficiently than random walks; using NBRW, the proportion of time spent on each edge converges faster to the stationary distribution.\n\nIn previous works, such as \\citet{de2013enhancing}, the random walk has been used to quantify the idea of information passing and to identify nodes or edges that are more capable of passing information.\nBoth random walk and NBRW behave similarly in the long run and their stationary distribution depends only on the degrees of nodes \\citep{kempton2015high}. \nTo reduce this dependability on node-degree, we keep the walk length small by stopping the NBRW.\nHowever, instead of stopping NBRW after a given number of iterations, we instead make a stopping rule that may help in uncovering cyclic structure in the network, thereby obtaining the renewal non-backtracking random walk. \n\n\\begin{definition}[Renewal non-backtracking random walk (RNBRW)]\nA renewal non-backtracking random walk (RNBRW) is a NBRW that terminates and restarts once the walk completes a cycle.\nEach NBRW begins at a directed edge selected completely at random.\nEach new run of a NBRW is an iteration of RNBRW.\n\\end{definition}\n\\begin{definition}[Retraced edge]\n\tThe retraced edge for a run of RNBRW is the edge that forms a cycle. \n\tMore precisely, supposing that $j$ is the first node revisited by a NBRW, and supposing that the NBRW travels to $j$ from $i$, then $\\vec{ij}$ is the retraced edge. \n\\end{definition}\n\\begin{definition}[Retracing probability]\n\tThe retracing probability $\\pi_e$ of an edge $e$ is the probability that $e$ is retraced in one iteration of RNBRW given that the NBRW terminates by forming a cycle. \n\tThe retracing probability $\\pi_{ij}$ of an undirected edge $ij$ is obtained by summing the retracing probabilities for both $\\vec{ij}$ and $\\vec{ji}$.\n\\end{definition}\n\nFigure~\\ref{fig:RetracedNBRW} gives an example of one RNBRW iteration on a graph.\nHeuristically, edges with large retracing probabilities are important to the formation of cycles.\nAs discussed in Section~\\ref{sec:LitRevWeighting}, the presence of cycles is a helpful indicator in discovering communities.\nThus, it seems reasonable that quantifying the cyclic structure by the retracing probabilities may lead to improved performance of algorithms designed for detecting communities.\nFigure~\\ref{fig:HouseNBRW} gives undirected retracing probabilities for a small graph.\n\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[clip, angle = 90, width=.4\\columnwidth]{RetracedNBRW.pdf}%\n\t\\caption{One iteration of RNBRW on a graph. Gray lines denote edges not traversed by the NBRW. The walk begins with the edge in black. The walk completes after forming the cycle in blue. 
The dotted edge represents the retraced edge.}\n\t\\label{fig:RetracedNBRW}\n\\end{figure}\n\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[clip,width=.5\\columnwidth]{House3.pdf}%\n\t\t\\caption{Undirected retracing probabilities for a house graph with 5 nodes and 6 edges. \n\t\n\t\n\t\tEdges with more overlapping loops have higher retracing probabilities. \n\t\tFor this graph, edge $dc$ appears to be the most important edge for the formation of cycles. \n\t\t}\\label{fig:HouseNBRW}\n\t\\end{figure}\n\nThe retracing probabilities $\\pi_e$ for each edge $e$ may be difficult to compute analytically, even for small graphs. \nInstead, as we show in Section~\\ref{subsec:asymptoticprops} these probabilities can be estimated precisely by running many iterations of RNBRW.\nPrecisely, the estimated retracing probability for edge $e$, denoted $\\hat \\pi_e$, is the fraction of RNBRW runs for which $e$ is the retracing edge.\nNote that $\\sum_{e} \\pi_e = \\sum_{e} \\hat \\pi_e = 1$.\n\n\n\nA particularly useful property of RNBRW is that iterations are mutually independent.\nIn particular, unlike other random walk methods (\\textit{e.g.} \\citet{de2013enhancing}), each run is not affected by weights obtained from previous runs.\nHence, multiple RNBRW instances can be run in parallel, and thus, retracing probabilities can be estimated quickly, even for large graphs.\nFor example, in \\ref{sec:results}, we apply RNBRW on graphs containing one-million nodes.\n\nFinally, note that a RNBRW iteration may terminate in one of two ways: either the run forms a cycle or the NBRW reaches a node with degree 1.\nIn the latter case, the iteration is discarded---no retracing edge is recorded---and a new iteration of RNBRW is started.\nFortunately, the probability that this case occurs may become vanishingly small as the number of nodes $n$ gets large.\nFor example, for Erd\\H{o}s-R\\'enyi graphs, the probability that a run terminates by visiting a node with degree 1 decreases exponentially with $n$~\\citep{tishby2017distribution}.\n\n\\subsection{RNBRW for community detection}\n\nRNBRW can be implemented to improve community detection algorithms as follows.\nFirst, the estimated undirected retracing probability $\\hat \\pi_e$ is obtained for each edge $e$ through performing many iterations of RNBRW.\nThen, each edge $e$ in the graph given the weight $w_e = 2m\\pi_e$, where $m$ is the number of edges.\nNext, the weighted degree $d_i$ in~\\eqref{eq:definedi} is computed for each node $i$.\nNote that multiplying $\\hat \\pi_e$ by $2m$ ensures that the sum of degrees $\\sum_{i=1}^n d_i$ is the same when moving from unweighted edges to weighted edges. \nThese weighted degrees are then plugged into the modularity function~\\eqref{eq_Modularity}.\nFinally, community detection algorithms such as CNM or Louvain are then performed using the weighted modularity function.\n\nFigure~\\ref{fig:weightLFR} demonstrates the usefulness of RNBRW weighting in community detection.\nThis figure shows an LFR benchmark graph~\\citep{lancichinetti2008benchmark} with four communities before and after weighting RNBRW.\nWeights of edges formed within communities are much larger on average than weights of edges across communities.\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[]{\t\t\n\t\t\\includegraphics[clip,width=.35\\columnwidth]{LFR_unweighted.pdf}%\n\t}~~~~~~~~~\n\t\\subfloat[]{%\n\t\t\\includegraphics[clip,width=.35\\columnwidth]{LFR_weighted.pdf}%\n\t}\\\\\n\n\n\t\\caption{(a) An LFR benchmark graph. 
(b) The same graph weighted by RNBRW. Observe that within-community edges have larger weights than across-community edges.} \n\n\t\\label{fig:weightLFR}\n\\end{figure}\n\nFigure~\\ref{fig:iteration} gives a simple illustration of how many iterations of RNBRW are necessary in order to obtain satisfactory detection of communities after weighting.\nAs the figure indicates, \nthere is a large improvement in the detection of communities when the number of walkers is on the order of the number of edges in the graph. \nRunning additional walkers further improves the precision of the estimated retracing probabilities;\nhowever, the corresponding improvement in community detection is fairly small.\nA theoretical justification for this number of iterations\ncan be found in Section~\\ref{subsec:asymptoticprops}.\n\n\t\\begin{figure}\n\t\\centering\n\t\\includegraphics[clip,width=.5\\columnwidth]{NumberWalker.pdf}\n\t\\caption{Performance of community detection as the number of walkers increases, for both CNM and Louvain with RNBRW weighting, on LFR networks with $n = 10,000$ and average degrees $9$ and $27$.}\n\t\\label{fig:iteration}\n\\end{figure}\n\n\\subsection{Algorithm for RNBRW}\n\nFor completeness, we now describe in detail the RNBRW algorithm. \nEach run of the algorithm performs a NBRW and returns the retracing edge (if any) as one sample. \nThe number of times each edge has been retraced by a NBRW is approximately proportional to its retracing probability.\nThe algorithm only requires knowledge of the graph, and each run is independent of the others. \nTherefore, we can collect samples in parallel, leading to fast convergence. \nEach run of the algorithm consists of the following steps:\n \\begin{itemize}\n\t\\item[1] Choose a random edge $\\overrightarrow{v_0v_1}$ in $E$.\n\t\\item[2] Form the walk $w = \\left(\\overrightarrow {v_0v_1}\\right)$. \n\t\\item[3] (For $k=\\{1,\\cdots\\}$) The walker continues her walk from ${v_k}$ to a neighboring node $v_{k+1}\\neq v_{k-1}$.\n\t\\item[4] If $v_{k+1}$ has degree 1, return immediately to Step~1. \n\tIf $v_{k+1}$ is already in $w$, return $\\overrightarrow{v_kv_{k+1}}$ as the retracing edge and return to Step~1. \n\tOtherwise add $v_{k+1}$ to $w$ and go to Step~3, setting $k = k+1$.\n\\end{itemize}\nIn Algorithm~1, we present the pseudo-code for RNBRW.\nBy employing a swarm of walkers and recording their retracing edges, we are able to get an accurate estimate of the retracing probability for each edge.\nNote that these walkers can be initialized independently from each other and retraced edges can be collated at the very end of the process. \nTherefore, one can execute this process as an array of jobs on a cluster of computers efficiently. 
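As a complement to Algorithm~1 below, the procedure is also easy to prototype. The following Python sketch is ours and is not the implementation used for the simulations in this paper; it assumes the \texttt{networkx} package, and the function names \texttt{rnbrw\_run} and \texttt{rnbrw\_weights} are our own. It performs single RNBRW runs following Steps 1--4 above and then turns the sampled retracing edges into the weights $w_e = 2m\hat{\pi}_e$ described in the previous subsection.
\begin{verbatim}
import random
from collections import Counter

import networkx as nx

def rnbrw_run(G):
    """One RNBRW iteration: NBRW from a random directed edge; return the
    retraced (undirected) edge, or None if the walk hits a dead end."""
    u, v = random.choice(list(G.edges()))
    if random.random() < 0.5:                 # pick a random direction
        u, v = v, u
    visited = {u, v}
    while True:
        nexts = [w for w in G[v] if w != u]   # non-backtracking step
        if not nexts:                         # degree-1 node: discard run
            return None
        w = random.choice(nexts)
        if w in visited:                      # a cycle has been completed
            return (v, w)
        visited.add(w)
        u, v = v, w

def rnbrw_weights(G, n_runs):
    """Estimate hat(pi)_e from n_runs successful iterations and return
    the edge weights w_e = 2m * hat(pi)_e.  Assumes G contains cycles."""
    counts, successes = Counter(), 0
    while successes < n_runs:
        e = rnbrw_run(G)
        if e is None:
            continue
        counts[frozenset(e)] += 1
        successes += 1
    m = G.number_of_edges()
    return {e: 2.0 * m * counts[frozenset(e)] / n_runs for e in G.edges()}

# Usage sketch: weight a graph and hand the weights to (weighted) Louvain.
# G = nx.ring_of_cliques(8, 5)
# nx.set_edge_attributes(G, rnbrw_weights(G, G.number_of_edges()), "weight")
# parts = nx.algorithms.community.louvain_communities(G, weight="weight")
\end{verbatim}
The runs are mutually independent, so in practice they would be distributed over many workers and the counters merged at the end, exactly as suggested above; the sequential loop here is only for clarity.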
\n\n\\begin{figure}\n\t\\begin{minipage}{\\linewidth}\\label{algNBRW}\n\t\t\\begin{algorithm}[H]\n\t\t\t\\caption{Algorithm for RNBRW from a random directed edge $\\vec{uv}$.}\n\t\t\t\\begin{algorithmic}[1]\n\t\t\t\t\\State \\textit{walk} $\\leftarrow$ empty set\n\t\t\t\t\\State \\textbf{choose} edge $\\vec{uv}$ at random\n\t\t\t\t\\State \\textbf{add} $u$ and $v$ to \\textit{walk}\n\t\t\t\t\\While{True}\n\t\t\t\t\\State \\textit{nexts}$\\leftarrow$neighbors of $v$\n\t\t\t\t\\State \\textbf{remove} $u$ from \\textit{nexts}\n\t\t\t\t\\State \\textbf{if} nexts is empty \\textbf{break}\n\t\t\t\t\\State \\textit{next} $\\leftarrow$\\textbf{choose} a node from \\textit{nexts} randomly\n\t\t\t\t\\If {\\textit{next} $\\in$ \\textit{walk}}\n\t\t\t\t\\State \\textbf{return} ($v$,\\textit{next}) as the retracing edge\n\t\t\t\t\\EndIf\n\t\t\t\t\\State \\textbf{add} \\textit{next} to \\textit{walk}\n\t\t\t\t\\State $u\\leftarrow v$\n\t\t\t\t\\State $v \\leftarrow $\\textit{next}\n\t\t\t\t\\EndWhile\n\t\t\t\\end{algorithmic}\n\t\t\\end{algorithm}\n\t\\end{minipage}\n\\end{figure}\n\n\\section{Asymptotic properties of RNBRW \\label{subsec:asymptoticprops}}\n\nWe now give a proof that the RNBRW estimates for the retracing probabilities converge almost surely to the true retracing probabilities.\nThe proof also gives insight into how many iterations are necessary in order to obtain a good estimate of these probabilities.\nWe begin with a few definitions.\n\nSuppose there are $\\rho$ iterations of RNBRW that conclude by returning a retracing edge.\nFor a given directed edge $\\vec e$, define $Y_{\\vec e,k}$ as a random variable indicating that the $k$th iteration returns edge ${\\vec e}$ as the retracing edge.\nThat is,\n\\begin{equation}\n\tY_{\\vec e,k}=\n\t\t\\begin{cases}\n\t\t1,& \\text{if the $k$th RNBRW returns $\\vec e$ as the retracing edge}, \\\\\n\t\t0, & \\text{otherwise}.\n\t\t\\end{cases}\n\\end{equation}\nNote that $Y_{\\vec e,k}$ is a \\textit{Bernoulli}$(\\pi_{\\vec e})$ random variable and that\n\\begin{equation}\n \\overline Y_{\\vec e} = \\frac{1}{\\rho}\\sum_{k = 1}^\\rho Y_{\\vec e,k} = \\hat \\pi_{\\vec e}.\n\\end{equation}\n\nWe first make a statement about how close any arbitrary estimated retracing probability comes to the true probability.\n\\begin{lemma}\\label{lemma:justhoeff}\n \\begin{equation}\n \\label{eq:justhoeff}\n P(|\\overline{Y_{\\vec e}}-\\pi_{\\vec e}| \\geq \\epsilon) \\leq 2 \\exp(-2\\rho \\epsilon^2).\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\n This result follows from direct application of Hoeffding's inequality~\\citep{hoeffding1963probability}.\n\\end{proof}\n\nWe now obtain a result on how estimates of undirected retracing probabilities uniformly converge to the true probabilities.\nSuppose that, in the original NBRW setup, undirected edge $e$ is replaced with directed edges $\\vec e_1$ and $\\vec e_2$. 
\nHence, $\\pi_{e} = \\pi_{\\vec e_1} + \\pi_{\\vec e_2}$, and\nthis undirected probability is estimated by $\\overline Y_e = \\overline Y_{\\vec e_1} + \\overline Y_{\\vec e_2} = \\hat \\pi_{\\vec e_1} + \\hat \\pi_{\\vec e_2}$.\n\\begin{lemma}\\label{lemma:maxsumhoeff}\n \\begin{equation}\n \\label{eq:maxsumhoeff}\n P(\\max_{e} |\\overline{Y_{e}}-\\pi_{e}| \\geq \\epsilon) \\leq 4 m \\exp(-\\rho \\epsilon^2\/2).\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\nTo see this, note that\n\\begin{equation}\n \\label{eq:halfthehoeff}\n\tP(|\\overline{Y_{e}}-\\pi_{e} |\\geq \\epsilon)=\n\t\tP(|\\overline{Y}_{\\vec e_1}-\\pi_{\\vec e_1} + \\overline{Y}_{\\vec e_2}-\\pi_{\\vec e_2} |\\geq \\epsilon) \\leq\n\t\tP\\left(\\bigcup_{\\ell = 1}^2\\left\\{|\\overline{Y}_{\\vec e_\\ell}-\\pi_{\\vec e_\\ell} |\\geq \\epsilon\/2\\right\\}\\right).\n\\end{equation}\nThus, by~\\eqref{eq:halfthehoeff} and Lemma~\\ref{lemma:justhoeff}, we have\n \\begin{eqnarray}\n\t\tP(\\max_{e}|\\overline{Y_{e}}-\\pi_{e} |\\geq \\epsilon) &=& \n\t\t\tP\\left(\\bigcup_{e}\\left\\{\\left|\\overline{Y_{ e}}-\\pi_{e} \\right|\\geq \\epsilon\\right\\}\\right) \\nonumber \\\\ &\\leq&\n\t\n\t\t\tP\\left(\\bigcup_{e}\\bigcup_{\\ell = 1}^2\\left\\{\\left|\\overline{Y}_{ \\vec e_{\\ell}}-\\pi_{\\vec e_\\ell} \\right|\\geq \\epsilon\/2\\right\\}\\right)\t \\nonumber \\\\\n\t\t&\\leq& \\sum_{{e}}\\sum_{{\\ell = 1}}^2 P\\left(\\left|\\overline{Y}_{\\vec e_\\ell}-\\pi_{\\vec e_\\ell} \\right|\\geq \\epsilon\/2\\right) \\nonumber \\\\ &\\leq&\n\t\t 4m \\exp(-\\rho \\epsilon^2\/2).\n \\end{eqnarray}\n\\end{proof}\n\\noindent\nThat is, we have shown that the estimated retracing probabilities converge exponentially fast to the true retracing probabilities as number of iterations of RNBRW (that terminate by forming a cycle) increases. \n\nLemma~\\ref{lemma:maxsumhoeff} also gives insight as far how many iterations of RNBRW are necessary to obtain good estimates of the retracing probability. \nWe now give a result that demonstrates how accurate these estimates are if the number of RNBRW runs is approximately equal to the number of edges in the graph.\n\\begin{corollary}\n Given $\\rho = m$ iterations of RNBRW, the estimated retracing probabilities are guaranteed to be within $\\sqrt{\\frac{4\\log(2m)}{m}}$ of the true probabilities\n with probability no smaller than $(m-1)\/m$.\n That is,\n \\begin{equation}\n \tP\\left(\\max_{e}|\\overline{Y_{e}}-\\pi_{e} |\\geq \\sqrt{\\frac{4\\log(2m)}{m}}\\right) \\leq \\frac{1}{m}.\n \\end{equation}\n\\end{corollary}\n\\begin{proof}\n To see this result, set $\\rho = m$ and $\\epsilon = \\sqrt{\\frac{4\\log(2m)}{m}}$ in~\\eqref{eq:maxsumhoeff}.\n\\end{proof}\n\nFinally, we show that the estimated retracing probabilities converge almost surely and uniformly to the true retracing probabilities.\n\\begin{theorem}[Strong and uniform consistency]\n As the number of iterations of RNBRW that terminate in a cycle $\\rho \\to \\infty$, then $\\hat \\pi_e \\overset{a.s.}{\\to} \\pi_e$ for each undirected edge $e$. 
Moreover, this convergence is uniform across all edges $e$.\n\\end{theorem}\n\t\\begin{proof}\n\tUsing~\\eqref{eq:maxsumhoeff}, we observe that\n\t\\begin{eqnarray}\n\t\t\\sum_{\\rho = 1}^\\infty P(\\max_{e} |\\overline{Y_{e}}-\\pi_{e}| \\geq \\epsilon) \\leq\n\t\t\t\\sum_{\\rho = 1}^\\infty 4 m \\exp(-\\rho \\epsilon^2\/2) < \\infty.\n\t\\end{eqnarray}\n\tThe theorem then follows directly from the Borel-Cantelli Lemma \\citep{rosenthal2006first}.\n\t\\end{proof}\n\n\\section{Simulation results}\\label{sec:results}\nWe investigate the performance of two community detection methods, Louvain and CNM, with and without the proposed weighting methods.\nIn addition to RNBRW, we consider these preprocessing weighting methods: WERW-Kpath, SimRank, Loop modulus, and the weighting method considered in~\\citet{khadivi2011network}.\nAdditionally, we compare the performance from preprocessing with RNBRW to other scalable community detection algorithms such as\nInfomap \\citep{rosvall2007maps}, Label propagation \\citep{raghavan2007near}, Edge betweenness \\citep{girvan2002community}, Spinglass \\citep{reichardt2006statistical}, and Walktrap \\citep{pons2005computing}. \nTo perform our comparisons, we apply the community detection algorithms on LFR benchmarks \\citep{lancichinetti2008benchmark} and measure their similarity to the ground truth data using normalized mutual information (NMI) \\citep{danon2005comparing}. \n\n\\subsection{Comparison of weighting methods}\n\nFigure~\\ref{fig:InfoMixModulusNBRW} compares the performance in detecting communities for various weighting methods coupled with the Louvain algorithm as the mixing parameter $\\mu$ increases.\nFigure~\\ref{fig:Khadivi_L} compares these methods for sparse graphs (average degree is $\\approx \\log(n)$) given a fixed mixing parameter but as the number of nodes increases.\nIn all cases, preprocessing the graph with RNBRW substantially improves the detection of communities when compared to the unweighted algorithm.\nAdditionally, the improvement provided by RNBRW is substantially larger than that from the WERW-Kpath, SimRank, and Khadivi \\textit{et~al.} methods.\nThe performance of RNBRW seems comparable to that of loop modulus. \nHowever, a good comparison between RNBRW and loop modulus is difficult as current methods for computing loop modulus require prohibitive computational cost for graphs containing more than a couple thousand nodes. \nHence, loop modulus is omitted from \nFigure~\\ref{fig:Khadivi_L}.\nAppendix \\ref{app_CNM} gives these results for the CNM algorithm.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=.60\\textwidth]{InfoMix_L.pdf}\n\t\\caption{\n\tPerformance analysis of community detection when coupling weighting methods with the Louvain algorithm. \n\tWe use LFR benchmark networks with $n = 500$, average degree $7$, and community sizes ranging from $30$ to $70$.\n\t\tThe mixing rate $\\mu$ adjusts the ratio of within-community links over all links.}\\label{fig:InfoMixModulusNBRW}\n\\end{figure}\n\n\\begin{comment}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=.65\\textwidth]{Info_degree_retracing.pdf}%\n\t\\caption{Performance analysis of weighting methods on LFR benchmark networks with $n=500$ and varying average degree. 
We compare the performance of weighting RNBRW with the raw algorithm (no weighting) and weighting WERW-Kpath with the raw algorithm (no weighting).}\\label{fig:InfoMixW}\n\\end{figure}\n\\end{comment}\n\n\n\n\t\\begin{figure}\n\t\n\t\t\\centering\n\t\n\n\t\t\t\\includegraphics[clip, width=.60\\columnwidth]{sparse_khadivi_L.pdf}%\n\t\t\\caption{Performance improvement of Louvain algorithm for different weighting algorithms for sparse LFR benchmark graphs ($\\mu = 0.4$, $\\hat{d} = \\log(n)$). Loop modulus is omitted in this example due to expensive computational costs.}\\label{fig:Khadivi_L}\n\t\\end{figure}\n\t\n\n\n\n\n\n\n\n\\subsection{Performance of RNBRW to other scalable algorithms}\n\nWe now compare the performance of Louvain and CNM equipped with RNBRW to five popular scalable algorithms: Infomap, Label propagation, Edge betweenness, Spinglass, and Walktrap.\nWe first begin our comparison on a small LFR benchmark graph with identical parameters to that of Figure~\\ref{fig:InfoMixModulusNBRW}.\nFigure~\\ref{fig:InfoMixLFR} suggests that---on an albeit small graph of $n=500$ nodes---Louvain with RNBRW is as good or better at discovering communities as the aforementioned algorithms across all mixing parameters considered.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[clip,width=.60\\columnwidth]{InfoMixLFR500_single.pdf}%\n\t\\end{center}\n\t\\caption{Comparing the performance of Louvain equipped with RNBRW to Infomap, Label propagation, Edge betweenness, Spinglass, and Walktrap algorithms. \n\tThe LFR benchmark networks have $500$ nodes with average degree $7$, and community sizes ranging from $30$ to $70$.\n\n\t\t}\\label{fig:InfoMixLFR}\n\\end{figure}\n\nWe now focus our analysis on larger graphs with varying degrees of sparseness.\nSparse graphs tend to be a particularly challenging case for community detection algorithms.\nWhile \\citet{zhao2012consistency} showed consistent detection of communities if the average degree grows at least logarithmically with network size, in practice, some algorithms may still struggle when average degree increases sub-linearly with the number of nodes.\nAnother problem that arises with modularity maximization methods is their resolution limit in detecting small communities.\nThis is a critical shortcoming since small size communities are common in large social networks and their size are not necessarily growing with graph size. \n\\citet{fortunato2007resolution} identify that communities with number of internal edges less than $\\sqrt{2|E|}$ are most likely misdetected. \nThis leads to a resolution limit that holds back heuristics for modularity maximization.\n\nWe test these community detection algorithms and see how they withstand the challenges of low average degree and existence of small communities for LFR benchmarks.\nWe consider average degrees that grow logarithmically with the number of nodes.\nThe community sizes vary between an upper and a lower bound that falls below the resolution limit and the average degree is changing for different size graphs. \n\nFor these larger benchmarks\\footnote{We use Beocat for running simulations on the large networks. 
Beocat is a computer cluster located in Kansas State University \\url{https:\/\/support.beocat.ksu.edu\/BeocatDocs\/index.php\/Main_Page}.} we use a constant mixing rate $\\mu =0.3$ and consider graphs with $10,000$, $100,000$ and $1,000,000$ nodes.\nEdge betweenness, Spinglass, and the previously considered weighting algorithms are not included due to prohibitive computational cost with larger network sizes.\n\nWe first analyze how the use of RNBRW can improve the performance of Louvain and CNM for sparse graphs.\nFigure~\\ref{fig:100k} plots the mutual information for graphs of $n = 10,000$ and $n = 100,000$ nodes for CNM and Louvain with and without RNBRW. \nWe demonstrate that, even when the average degree is on the order of $\\log(n)$, these algorithms equipped with RNBRW achieve a mutual information very close to 1. \nIn particular, for $n = 10,000$ and average degree $\\log(n)$, equipping CNM with RNBRW increases the mutual information from $0.263$ to $0.974$.\n\n\n\n\n\\begin{figure}\n\n\t\\centering\n\t\\includegraphics[clip,width=1\\columnwidth]{Info_degree_retracing10k_single.pdf}%\n\t\\caption{Plots of NMI for sparse graphs. \n\tWe use LFR networks with $n = 10,000$ (left) and $n = 100,000$ nodes (right) with mixing parameter $\\mu = 0.3$. \n\tAverage degree varies from $\\log(n)$ to $3\\log(n)$.\n\tWeighting by RNBRW leads to substantial performance increases for both the CNM and Louvain algorithms. }\\label{fig:100k}\n\\end{figure}\n\n\t\\begin{table}\n\t\t\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ |l|c|c|c|c|c|c|c| c| }\n\t\\hline\n\tnetwork size & average degree & Infomap & LP & WT &CNM & Louvain & RNBRW+CNM& RNBRW+Louvain\\\\\n\t\\hline\n\t\\multirow{3}{*}{$10,000$} & $\\log n$ & $0.912$ & $\\textbf{0.992}$ & $0.763$ & $0.263$ & $0.740$ & $0.974$ & $0.970$\\\\\n\t& $2\\log n$ & $\\textbf{1}$ & $0.995$ & $\\textbf{1}$ &$0.276$ & $0.757$ & $\\textbf{1}$ & $\\textbf{1}$\\\\\n\t& $3\\log n$ & $\\textbf{1}$ & $\\textbf{1}$ & $\\textbf{1}$ &$0.232$ & $0.882$ & $\\textbf{1}$ & $\\textbf{1}$\\\\\n\t\\hline\t\n\t\\multirow{3}{*}{$100,000$}& $\\log n$ & $0.73$ & $\\textbf{0.988}$ & $0.644$ & $0.154$ & $0.524$ & $0.975$ & $0.960$\\\\\t\n\t& $2\\log n$ & $\\textbf{1}$ & $\\textbf{1}$ & $\\textbf{1}$ & $0.159$ & $0.649$ & $\\textbf{1}$ & $\\textbf{1}$\\\\\t\n\t& $3\\log n$ & $\\textbf{1}$ & $\\textbf{1}$ & $\\textbf{1}$ &$0.118$ & $0.713$ & $\\textbf{1}$ & $\\textbf{1}$\\\\\t\n\t\\hline\t\t\t \n\t$1,000,000$ & $\\log n$ & $0.994$ & $\\textbf{0.998}$ & $-$ & $0.050$ & $0.192$ & $0.989$ & $0.969$\\\\\t\n\t\\hline\n\\end{tabular}}\n\\caption{Performance measured by NMI for scalable community detection algorithms for sparse LFR networks with $\\mu =0.3$ and average degree $\\approx \\log n$. \nWe compare CNM and Louvain with and without RNBRW weighting with Infomap, Label propagation (LP), and Walktrap (WT). }\n\\label{tab_results}\n\\end{table}\n\nWe now compare CNM and Louvain equipped with RNBRW with other efficient community detection algorithms.\nSummaries of these simulations are be found in Table~\\ref{tab_results}. \nWe first note that, again, equipping CNM and Louvain with RNBRW improves community detection dramatically; for example, in the 1,000,000 node example, equipping these community detection algorithms with RNBRW leads to an increase NMI in CNM from 0.050 to 0.989 and in Louvain from 0.192 to 0.969. 
\nSimilar increases occur across all other network sizes and average degrees.\nMoreover, the detection of communities for CNM and Louvain with RNBRW is comparable to other scalable community detection algorithms.\n\nAlthough not included in the table, the runtime of RNBRW---especially coupled with Louvain---is much lower than that of the other considered methods. \nFor example, because RNBRW is embarrassingly parallelizable, because Louvain has a small asymptotic runtime, and because RNBRW weighting and the Louvain algorithm are performed sequentially, the total runtime of Louvain weighted by RNBRW is smaller than that of\nInfomap $\\mathcal{O}(n^2)$, Label propagation $\\mathcal{O}(n^2)$, Spinglass $\\mathcal{O}(n^{3.2})$, and \nWalktrap $\\mathcal{O}(n^2\\log n)$. \nTherefore, adding a RNBRW preprocessing to Louvain seems to be a very reasonable method for obtaining efficient community detection with performance comparable to that of the best scalable community detection algorithms.\n\n\\section{Conclusion}\n\nCommunities can be identified by the ``richness'' of cycles; more short cycles occur within a community than across communities. \nHence, existing community detection algorithms may be enhanced by incorporating information about the participation of edges in the cyclic structure of the graph. \nWe develop renewal non-backtracking random walks (RNBRW) as a way of quantifying the cyclic structure of a graph.\nRNBRW quantifies edge importance as the likelihood of an edge completing a cycle in a non-backtracking random walk, providing a scalable alternative for analysing real-world networks.\nWe describe how RNBRW weights can be coupled with other popular scalable community detection algorithms, such as CNM and Louvain, to improve their performance.\n\nWe show that weighting the graph with RNBRW can substantially improve the detection of ground-truth communities.\nThis improvement is most notable when the network is sparse. \nFurthermore, the RNBRW preprocessing step can overcome the problem of detecting small communities known as the resolution limit in modularity maximization methods.\nWe show that performance appears to be equal or superior to that of other weighting methods, and of other scalable (non-weighted) community detection algorithms.\nIn the case of large, sparse networks, RNBRW may be quite effective; RNBRW can improve the efficacy of available community detection algorithms without sacrificing their computational efficiency.\n\n\\bibliographystyle{chicago}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLarge-scale magnetic fields with fascinating \nquasi-regular spatio-temporal behavior \nare ubiquitous in solar and stellar settings. \nUnderstanding the mechanisms for the generation of such fields and \ntheir spatio-temporal variations is still a major challenge for \ndynamo theory. \nThe solar magnetic field has three particularly\nimportant features: quasi-regular oscillations, reversal of polarity\nand equatorward migration. \nDirect numerical simulations of solar-like convective dynamos\nhave been able to generate large-scale magnetic fields \n\\citep{bro+bro+bru+mie+nel+tom07,brown10}\nwhich in some cases show oscillatory behavior,\nbut the fields exhibit either rather weak equatorward migration at high\nlatitudes~\\citep{charb} or \nanti-solar (i.e.\\ poleward) migration~\\citep{gil83,kapyla_etal2010}. 
\n\nA useful tool for studying these dynamical phenomena is mean-field (MF)\nelectrodynamics \\citep[e.g.][]{Kra+Rad80,bra+sub05}, where the effects\nof turbulence are characterized by turbulent magnetic diffusion\nand an $\\alpha$ effect.\nAccording to MF theory, equatorward migration\nis expected if there is negative radial shear accompanied by a\npositive (negative) $\\alpha$ effect in the northern (southern) \nhemisphere \\citep{Kra+Rad80}.\nDirect numerical simulations (DNS) of helical turbulence with shear \nhave confirmed the presence of migratory dynamo waves \n\\citep{bra+big+sub02,kap+bra09}. It is, however, \nunclear whether this is really what is going on in the Sun,\nsince there the layer with negative radial shear is rather thin and\nonly concentrated near the surface (see, e.g., \\cite{bra05} and references therein).\nThe other alternative is that meridional circulation might change the\ndirection of migration \\citep{cho+sch+dik95}, but evidence for\nthis has not yet been seen in DNS.\n\nIn this Letter we present a completely different mechanism for\npolarity reversal and equatorward migration of dynamo activity.\nIn the context of MF models this mechanism is connected with the \nantisymmetry of the profiles\nof $\\alpha$ across the equator \\citep{Rud+Hol04,bra+can+cha09}.\nWe demonstrate the operation\nof this mechanism in DNS of the equations of magnetohydrodynamics (MHD).\nOur model consists of a spherical wedge-shaped\nshell in which the turbulence in the fluid is maintained by a random\nhelical forcing.\nMotivated by the Sun, we choose our forcing to have opposite\nsigns of helicity in the two hemispheres\n(negative in the north and positive in the south).\nWe emphasize that, even though our model\ndoes not explicitly include convection, stratification and \nrotation,\nthe helical forcing used here does partially\nmodel these features implicitly.\n\nOur model shows large-scale magnetic fields\nin excess of the equipartition value. More importantly,\nwe find oscillations of the magnetic field which show opposite\nsigns in different hemispheres with periodic reversals of polarity.\nFurthermore, the magnetic\nfield develops at higher latitudes and migrates equatorward where the two\ndifferent polarities of magnetic field annihilate and the cycle repeats itself\nas shown in the top panel of Fig.~\\ref{fig:butterfly}. \nTo our knowledge, such\ndynamical features of the large-scale magnetic field have not been observed\nearlier in DNS of MHD turbulence.\nBelow we introduce our model, discuss its oscillatory solutions,\nand briefly compare our DNS results with those obtained\nfrom corresponding MF models.\n\n\\begin{figure}[h]\n\\includegraphics[width=\\linewidth]{fig\/butterfly_three_eqopen.ps}\n\\caption{Space-time diagrams of the azimuthal component\nof the large-scale magnetic field for\nDNS over both the hemispheres (top panel),\nDNS over only the northern hemisphere with an antisymmetric\ncondition at the equator (middle panel), and\nMF simulation over only the northern hemisphere with \nan antisymmetry condition at the equator (bottom panel).}\n\\label{fig:butterfly}\n\\end{figure}\n\\begin{figure}[h]\n\\includegraphics[width=\\linewidth]{fig\/constr_large_domain.ps}\n\\caption{Orthographic projection of the toroidal magnetic field $B_\\phi$\nat $r=0.85$ for Run~{\\tt S5}.\nThe projection is tilted by $15$ degrees towards the viewer. 
\n} \n\\label{fig:contour}\n\\end{figure}\n\\section{The model}\nIn our simulations, we solve the equations for compressible MHD\nin terms of the velocity $\\u$, the logarithmic density $\\ln\\rho$,\nand the magnetic vector potential $\\A$,\n\\begin{eqnarray}\n\\label{mhd1}\n\\Dt\\u &=& -c_{\\rm s}^2\\grad\\ln\\rho + \\frac{1}{\\rho}\\j\\times\\B \n + \\bm{F}_{\\rm visc} + \\f, \\\\\n\\Dt\\ln\\rho &=& -\\grad\\cdot\\u, \\\\\n\\delt\\A &= & \\u\\times\\B + \\eta\\lap\\A ,\n\\end{eqnarray}\nwhere $\\bm{F}_{\\rm visc}=(\\mu\/\\rho)(\\lap\\u + \\frac{1}{3}\\grad\\dive\\u)$\nis the viscous force, $\\mu$ is the dynamic viscosity, \n$\\B = \\curl \\A$ is the magnetic field,\n$\\j = \\curl \\B\/\\mu_0$ is the current density,\n$\\mu_0$ is the vacuum permeability,\n$c_{\\rm s}$ is the (constant) speed of sound in the medium,\n$\\eta$ is the magnetic diffusivity,\nand $D_t \\equiv \\delt + \\u\\cdot\\grad$ is the advective derivative.\nOur computational domain is a spherical wedge with \nradius $r \\in [r_1,r_2]$ symmetric about the equator with \ncolatitude $\\theta \\in [\\Theta,\\pi-\\Theta]$\nand azimuth $\\phi \\in [0,\\Phi]$. \nThe radial, meridional and azimuthal extents of our domain are respectively,\n$\\lr \\equiv r_2-r_1$, $\\lt \\equiv r_2(\\pi-2\\Theta)$ and\n$\\lp \\equiv r_2 \\Phi$. All lengths are measured\nin the units of $r_2$.\nOur main results do not depend on our choices of $\\Theta$ and $\\Phi$.\n\nIn Equation~\\ref{mhd1} $\\fxt$ is an external white-in-time random helical forcing \nconstructed using\nthe Chandrasekhar-Kendall functions \\citep{cha+ken57} as described below.\nIn spherical coordinates a helical vector function can be expressed\nin terms of a scalar potential function:\n\\begin{equation}\n\\psi\\big(\\beta(t),\\ell(t),m(t)\\big)=\n \\mbox{Re}\\, z_\\ell(\\beta r)Y^m_\\ell(\\theta,\\phi)\\exp[{\\rm i}\\xi_m(t)],\n\\label{solhelmoholtz}\n\\end{equation}\nwith \n$z_\\ell(\\beta r)=a_\\ell j_\\ell(\\beta r) + b_\\ell n_\\ell(\\beta r) $.\nHere $j_\\ell$ and $n_\\ell$ are spherical Bessel functions of the first and\nsecond kind respectively, $a_\\ell$ and $b_\\ell$ are constants determined by the \nboundary conditions and\n$\\xi_m$ is a random angle uniformly distributed between $0$ and $2\\pi$.\nThe helical forcing $\\f$ is then given by the equation $\\curl \\f = \\beta \\f $,\nwhere $\\f = {\\bm T} + {\\bm S}$,\n${\\bm T} = \\curl({\\bm e}\\psi)$,\nand ${\\bm S} = \\beta^{-1}\\curl{\\bm T}$,\nwhere ${\\bm e}(t)$ is a unit vector chosen randomly on the unit sphere. \nAs to the choice of boundary conditions, we demand that $\\f$ is zero at the \ntwo radial boundaries $r=r_1$ and $r=r_2$ which yields the following\ntranscendental equation relating $a_\\ell$, $b_\\ell$ and $\\beta$:\n\\begin{equation}\na_\\ell j_\\ell(\\beta r_1) + b_\\ell n_\\ell(\\beta r_1)\n= a_\\ell j_\\ell(\\beta r_2) + b_\\ell n_\\ell(\\beta r_2) = 0.\n\\label{alphaeqn}\n\\end{equation}\nWe first construct a table of values of $m$, $\\ell$ and $\\beta$ in the\nfollowing way. \nAs we use periodic boundary conditions along the azimuthal direction,\n$m_{\\rm min}= 2\\pi\/\\Phi$. We choose $m = p m_{\\rm min}$, and for a fixed\n$m$ we choose $\\ell$ to be odd, $\\ell = 2(m+q)+1$, because we want the forcing\nto go to zero at the equator. \nHere $p$ and $q$ are integers which range between $3$ to $5$.\nFor a fixed $\\ell$ and $m$ we solve Eq.~(\\ref{alphaeqn}) by Newton-Raphson\nmethod and list the solutions which have $3$ to $5$ zeros in the domain. 
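A nontrivial pair $(a_\ell, b_\ell)$ in Eq.~(\ref{alphaeqn}) exists only when the determinant $j_\ell(\beta r_1)\, n_\ell(\beta r_2) - j_\ell(\beta r_2)\, n_\ell(\beta r_1)$ vanishes, so the admissible values of $\beta$ are the roots of this determinant. The short Python sketch below is ours and only illustrates the idea; it assumes \texttt{scipy} is available and brackets the roots by a coarse scan followed by Brent's method rather than the Newton-Raphson iteration used for the actual runs, and the shell radii in the example call are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn, spherical_yn

def ck_determinant(beta, ell, r1, r2):
    """Vanishes when Eq. (alphaeqn) admits a nontrivial (a_ell, b_ell)."""
    return (spherical_jn(ell, beta * r1) * spherical_yn(ell, beta * r2)
            - spherical_jn(ell, beta * r2) * spherical_yn(ell, beta * r1))

def ck_betas(ell, r1, r2, beta_max=60.0, n_scan=2000):
    """Scan (0, beta_max] for sign changes and refine each bracketed root."""
    grid = np.linspace(1e-3, beta_max, n_scan)
    vals = [ck_determinant(b, ell, r1, r2) for b in grid]
    roots = []
    for b0, b1, f0, f1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if f0 * f1 < 0:
            roots.append(brentq(ck_determinant, b0, b1, args=(ell, r1, r2)))
    return roots

# Illustrative call: radial wavenumbers for ell = 7 in a shell r1 = 0.7, r2 = 1.
print(ck_betas(7, 0.7, 1.0)[:5])
\end{verbatim}
Among the roots found in this way, one would then keep only those values of $\beta$ for which $z_\ell(\beta r)$ has the desired number of zeros inside the shell, as described above.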
\nTo randomize the resulting forcing we randomly choose a triplet \nof $m$, $\\ell$ and $\\beta$\nfrom the table. We also randomize the unit vector ${\\bm e}$ on the unit sphere.\nTwo different signs of helicity are imposed by choosing negative \n(positive) $\\beta$\nin the northern (southern) hemisphere. The choice of parameters\nimplies that we have a scale separation between $3$ to $5$ in our simulations.\nOur results are fairly robust under the change of different parameters\nof forcing. We need scale separation of 3 or more to excite a \nlarge-scale dynamo \\citep{HBD04},\nwhich invariably shows oscillations and equatorward migration. \nOur simulations are performed using the \n\\textsc{Pencil Code}\\footnote{{\\tt http:\/\/pencil-code.googlecode.com}};\nsee \\cite{mit+tav+bra+mos08} for details\nregarding the implementation of spherical polar coordinates.\n\\begin{deluxetable*}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}\nRun & Grid & $\\lt$ & $\\lp$ & $\\Bbar_{\\rm rms}\/B_{\\rm eq}$ & $\\Rey$ & $\\Rm$ &\n $\\kf\/\\kone$ & $\\nu\\times10^{5}$ & $\\eta\\times10^{5}$ & $\\etat\\times10^{5}$&\n$\\omc\\times10^{3}$ &$T\\times10^{-3}$ & $t_{\\rm max}$ \\\\\n\\hline\n{\\tt S1} &$32\\times 64\\times 32$ &$\\pi\/5$ & $\\pi\/10$ & $0.88$ & $5$ & $12$ &\n$3$ & $5$ & $2$ & $5.3$ & $2.5$ & $0.18$ & $\\sim10T$ \\\\\n{\\tt S2} &$64\\times 128\\times 64$ &$\\pi\/5$ &$\\pi\/10$ & $0.79$ & $8$ & $21$ & \n$4$ & $3$ & $1.2$ & $6.2$ & $2.5$ & $0.16$ & $\\sim5T$\\\\\n{\\tt S3} &$32\\times 64\\times 64$ &$\\pi\/5$ & $\\pi\/5$ & $1.16$ & $2$ & $4$ &\n$7$ & $5$ & $2$ & $3.6$ & $2$ & $0.27$ & $\\sim10T$\\\\\n{\\tt S4} &$64\\times 128\\times 128$ & $\\pi\/5$ & $\\pi\/5$ & $1.04$ & $4$ & $10$ & \n$7$ & $2$ & $1$ & $4.9$ & $2.6$ & $0.2$ & $\\sim5T$\\\\\n{\\tt S5} &$64\\times 128\\times 128$ & $9\\pi\/10$ & $\\pi\/2$ & $2.03$ & $7$ & $13$ & \n$7$ & $2$ & $1$ & $4.4$ & $-$ & $-$ & $\\sim T$. \\\\\n\\caption{Summary of our parameters including \ngrid size, the meridional and azimuthal extents of our domain, \nrms value of the azimuthally averaged field,\nReynolds number $\\Rey$ and \nmagnetic Reynolds number $\\Rm$.\nThe forcing amplitude, $f_{\\rm amp}=0.2$, is chosen such that the Mach number is \nof the order of 0.1, making the flow essentially incompressible. $t_{\\rm max}$\nthe duration of each run. The run {\\tt S5}\nhas not been ran long enough to accurately measure $\\omc$.} \n\\label{paratable}\n\\end{deluxetable*}\n\nWe use periodic boundary conditions along the azimuthal direction\nand set the normal component of the magnetic field to zero on all\nother boundaries (perfect-conductor boundary condition). \nThis is implemented by setting the two tangential components of $\\A$\nto zero.\nAs an estimate of the characteristic Fourier mode \nof the forcing we define $\\kf=w_{\\rm rms}\/\\urms$,\n(column 8 of Table~\\ref{paratable})\nwhere $w_{\\rm rms}$ and $\\urms$ are the rms values of small-scale\nvorticity and velocity, respectively.\nWe introduce the fluid and magnetic Reynolds numbers \nas $\\Rey=\\urms\/\\nu\\kf$ and $\\Rm=\\urms\/\\eta\\kf$, respectively. 
\nHere, $\\nu = \\mu\/\\rho_0$ is the mean kinematic\nviscosity, where $\\rho_0$ is the\ninitial and the mean density in the volume.\nA representative list of parameters is given in Table~\\ref{paratable}.\n\\section{Results}\nWe start our simulations with a random seed magnetic field\nof no particular parity about the equator.\nAfter a transient time, of about one turbulent diffusion time, \nwe find\nthat a large-scale magnetic field is generated with energy of the order \nof or exceeding the equipartition strength in all runs.\nThe magnetic field encompasses the whole azimuthal extent of \nthe domain. \nA contour plot of the toroidal component of the magnetic field \non a surface with constant radius from Run~{\\tt S5} is shown in \nFig.~\\ref{fig:contour}.\nWe define the large-scale magnetic field via averages over the azimuthal \nand radial directions, i.e.,\n$\\Bbar = \\bra{{\\bm B}}_{r\\phi}$,\nsuch that the resultant magnetic field is solely a function \nof latitude and time.\nWe normalize the magnetic field with the equipartition field strength,\n$B_{\\rm eq}=\\bra{\\mu_0\\rho {\\bm u}^2}^{1\/2}$,\nwhere ${\\bm u} = \\u-\\Ubar$ is the small-scale velocity. \nThe field first develops at higher \nlatitudes and then with time migrates equatorward.\nIn each hemisphere the field shows oscillations\nand reversals of polarity. These features can be seen in the space-time diagram\nshown in the top panel of Fig.~\\ref{fig:butterfly} where we plot\n$\\overline{B}_\\phi$ as a function of latitude and time.\nThe principal frequency \nof oscillations, $\\omc$ (column 12 of Table~\\ref{paratable} and the\ninset of Fig.~\\ref{fig:osc+helm})\nis obtained by Fourier transforming the \ntime series of ${\\overline B}_{\\phi}(\\theta,t)$ in time at a \ngiven $\\theta$ ($\\theta=\\pi\/20$ say)\nand determining the frequency corresponding to the dominant mode.\nNormalized energy in the large-scale magnetic field also shows \noscillations as a function of time, but with frequency\n$ 2 \\omc$. \nA characteristic dynamical time scale\nis the turbulent diffusion time corresponding to the length scale $\\lt$ defined by\n$T\\equiv (k_{\\theta}^2 \\etat)^{-1}$ where $k_{\\theta} = 2\\pi\/\\lt$ and for $\\etat$\nwe take the expression from the first-order-smoothing approximation,\n$\\etat=\\urms\/3\\kf$. In all our runs we find the product $\\omc T$ to be \nof order unity (see the bottom panel of Fig.~\\ref{fig:osc+helm}).\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{fig\/freq_vs_Rm_inset.ps}\n\\caption{\nThe frequency of the oscillations multiplied\nby the turbulent diffusion time $T=(\\eta_tk^2_{\\theta})^{-1}$ \nas a function of $\\Rm$ \nfor $\\lt = \\pi\/5 $.\nData from DNS (closed circles) and MF models (open circles) are shown.\nFor the MF runs $\\etat=1$, \nfor the DNS runs $\\etat$ is given in \nTable~\\ref{paratable}. \nThe inset shows $\\omc \\times10^{3}$ (column 12 Table~\\ref{paratable})\nas a function of $\\Rm$ for the DNS data. \n}\\label{fig:osc+helm}\n\\end{figure}\n\nWe note that the radial and azimuthal components of the large-scale\nmagnetic field are almost antisymmetric about the equator.\nThis allows a further simplification of our model by simulating only one\nhalf of the domain (e.g., the northern hemisphere), while keeping exactly the\nsame forcing function (e.g., a forcing that is random, and negatively helical \nin the northern hemisphere going smoothly to zero at the equator), but choosing\nthe boundary condition $B_r=B_\\phi=0$ at the equator. 
Such simulations \nproduce exactly the same oscillations (as can be seen by comparing the top and\nthe middle panels of Fig.~\\ref{fig:butterfly}) as those obtained in the\nDNS with both hemispheres. \nThis implies that these oscillations can be studied with \nhalf the number of grid points and appropriately chosen boundary \nconditions at the equator. \n\nGiven the large values of the magnetic Reynolds number in solar\/stellar \nsettings, an important question is how the frequency $\\omc$ scales with \n$\\Rm$?\nThis question cannot be answered from DNS because the \nmagnetic Reynolds numbers reached are far from \nthe asymptotic limit of large $\\Rm$~(see Column 7 \nof Table~\\ref{paratable})\nA way forward is to use analogous MF models.\nThe appropriate setting would be that of an $\\alpha^2$ dynamo\nwith dynamical $\\alpha$ quenching~\\citep{BB02}, which incorporates \nconservation of magnetic helicity, given by the equations\n\\begin{eqnarray}\n\\partial_t \\Bbar &=& \\curl (\\alpha \\Bbar) + (\\eta+\\etat)\\lap \\Bbar, \\\\\n\\partial_t \\alpha &=& -2\\eta\\kf^2 \\left( \n \\frac{\\alpha \\bra{\\Bbar^2} -\\etat \\bra{\\Bbar\\cdot \\Jbar}}{\\Beq^2} +\n \\frac{\\alpha-\\alpha_{\\rm K}}{\\Rm\/3}\n \\right) ,\n\\end{eqnarray}\nwhere $\\Rm\/3=\\etat\/\\eta$ and $\\Beq$ is the equipartition field strength.\nWe use\n$\\kf\/\\kone=6$ in our MF simulations. \nIn view of the discussion above, we solve the MF equations in only the northern\nhemisphere with appropriate boundary condition at the equator. \nIn the MF approach the helical nature of turbulence is modelled by the $\\alpha$\ncoefficient ($\\alpha_{\\rm K}$).\nWe choose $\\alpha_{\\rm K} = g(\\theta)\\alpha_0$ and $\\etat = 1$. \nThe profile function $g$ takes positive (negative) values in the northern \n(southern) hemisphere, going smoothly to zero at the equator.\nThis reflects the fact that, according to MF theory, the kinetic $\\alpha$\neffect usually has the opposite sign to the mean kinetic helicity.\nWe have used three different functional forms for $g$, namely\n$g = \\theta-\\pi\/2$, $g = \\sin(\\theta-\\pi\/2) $ and\n$ g = \\tanh(\\theta-\\pi\/2)$, without any significant change in our results.\nWe need $\\alpha_0 \\geq 16$ to excite a dynamo but once excited the oscillatory\nand migratory properties of the dynamo do not depend on $\\alpha_0$. \nWe use perfect-conductor boundary conditions along the radial direction\nand our magnetic Reynolds number (changed by varying $\\eta$)\nranges between $ 10 \\lesssim \\Rm \\lesssim 10^{8}$. \nWe have also used domains with\nlarger latitudinal extents than those used in our DNS.\n\nHere we briefly mention a few important outcomes of\nour MF results relevant to our discussion above:\n(a) Our DNS results -- namely oscillatory behavior as well as\nmigration towards the equator -- are qualitatively reproduced by the MF\nsolutions in the range of parameters reported here. \nAn example of the space-time\ndiagram produced by our MF runs is shown in the bottom\npanel of Fig.~\\ref{fig:butterfly}.\n(b) The frequency of oscillations remains almost constant as a \nfunction of $\\Rm$, see Fig.~\\ref{fig:osc+helm}. \nThe dependence of the oscillation period on $\\Rm$\nseen in DNS may be related to $\\Rm$-dependence of\nthe turbulent magnetic diffusivity. 
\nA similar behavior has been observed earlier in Cartesian DNS\n\\citep{kap+bra09}.\n(c) To show the robustness of our results with respect to the\nsize of the domain, we also studied domain sizes extended\nin the meridional direction from $\\lt = \\pi\/5$ to $(178\/180)\\pi$ \nwhich corresponds respectively to $\\Theta= 72 $ degrees and $1$ degree.\nWe find that the oscillations and the migratory behavior do not change.\n(d) For the MF model considered here the mean value of \nthe large-scale magnetic field decreases as $\\Rm^{-1}$, i.e., the field is catastrophically \nquenched for large values of $\\Rm$. \nIn the DNS, however, such quenching could be alleviated by magnetic \nhelicity fluxes across the equator \\citep{bra+can+cha09,mitra_etal2010}.\n\nTo test the robustness of our simulations with respect \nto the choice of boundary condition in the radial direction \nwe repeated our simulations with the vertical field boundary \ncondition, which makes the two tangential components of the magnetic \nfield vanish at the radially outward boundary. \nThese simulations also show oscillations and\nequatorward migration of magnetic activity, but in this case the\noscillations are less regular and the frequency is marginally higher. \n\\section{Conclusions}\nWe have found large-scale fields, oscillations on\ndynamical time scales and polarity reversals with equatorward\nmigration of magnetic activity \nin direct numerical simulations of helically\nforced MHD equations in spherical wedge domains. \nDespite its simplicity, it is quite\nstriking how our model can reproduce these important features of\nthe solar dynamo. As far as we are aware, these features \nhave not been observed earlier in DNS.\nWe have elucidated our DNS results by considering\nanalogous $\\alpha^2$ MF models which support\nour conclusions. \nWe have further used these MF models\nto explore magnetic Reynolds numbers that are \nat present inaccessible to DNS. \nThis has enabled us to show that the frequency \nof the oscillations is almost\nindependent of $\\Rm$\nfor large $\\Rm$.\nSuch MF models have been known to have oscillatory solutions\nif $\\alpha$ changes sign in the computational domain \n\\citep[see, e.g.][]{bar+shu87,ste+ger03,Rud+Hol04,bra+can+cha09},\nbut their migratory property had not been studied before.\nAntisymmetry of $\\alpha$ with depth also produces oscillatory \nsolutions \\citep{bar+shu87,ste+ger03}, but not the equatorward\nmigration. \n\nThe helical forcing used in our DNS, with its different\nsigns of helicity in different hemispheres, implicitly \nmodels only the helical aspect of the effects of rotation \nand stratification present in the Sun. \nPhysically, a more complete picture should emerge from DNS of convective\nturbulent dynamo as done, for example, by \n\\cite{gil83,bro+bro+bru+mie+nel+tom07,brown10,kapyla_etal2010,charb}.\nSuch simulations also generate differential rotation and lead\nto another dynamo mode of operation -- the $\\alpha\\Omega$ dynamo\n-- which also produces oscillatory behavior.\nHowever, in order to get equatorward\nmigration radial shear must be negative. 
\nHelioseismology has shown that negative radial shear exists only\nnear the surface of the convection zone.\nThis feature has so far not been reproduced by global DNS.\nWhether or not an $\\alpha\\Omega$ dynamo is the dominant mechanism operating\nin the Sun remains unclear.\nIt is therefore important to keep in mind \nthat there exists alternative mechanisms \nfor producing oscillatory behavior with \nequatorward migration, such as the one discussed here.\n\\acknowledgments\nThis work was supported in part by\nthe European Research Council under the AstroDyn Research Project No.\\ 227952\nand the Swedish Research Council Grant No.\\ 621-2007-4064.\nDM is supported by the Leverhulme Trust.\nPJK is supported by the Academy of Finland grant No.\\ 121431.\nComputational resources were granted by\nUKMHD, QMUL HPC facilities purchased under the SRIF initiative,\nthe National Supercomputer Centre in Link\\\"oping in Sweden, and \nCSC--IT Center for Science in Espoo, Finland.\n\n\\newcommand{\\ypre}[3]{ #1, {Phys.\\ Rev.\\ E,} {#2}, #3}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe classification of compact, complex surfaces $S$ of general type with $\\chi(\\mathcal{O}_S)=1$, i.e. $p_g(S)=q(S)$, is currently an active area of research. For all these surfaces we have $p_g \\leq 4$, and the cases $p_g=q=4$ and $p_g=q=3$ are nowadays completely described. Regarding the case $p_g=q=2$, a complete classification has been recently obtained when $K_S^2=4$: in fact, these are surfaces on the Severi line $K_S^2=4 \\chi (\\mathcal{O}_S)$. By contrast, the classification in the case $p_g=q=2$, $K_S^2 \\geq 5$ is still missing, albeit some interesting examples were recently discovered. We refer the reader to the paper \\cite{1} and the references contained therein for a historical account on the subject and more details. \n\nThe purpose of this note is to show how monodromy representations of braid groups can be concretely applied to the fine classification of surfaces with $p_g=q=2$ and maximal Albanese dimension, allowing one to rediscover old examples and to find new ones.\n\nThe idea is to consider degree $n$, generic covers of $\\SS$, the symmetric square of a smooth curve of genus $2$, simply branched over the diagonal $\\delta$. In fact, if such a cover exists, then it is a smooth surface $S$ with\n\\begin{equation*}\n\\chi(\\mathcal{O}_S)=1, \\quad K_S^2=10-n.\n\\end{equation*} \n\nOn the other hand, by the Grauert-Remmert extension theorem and the GAGA principle, isomorphism classes of degree $n$, connected covers \n\\begin{equation*}\nf \\colon S \\to \\SS, \n\\end{equation*}\nbranched at most over $\\delta$, correspond to group homomorphisms \n\\begin{equation*}\n\\varphi \\colon \\pi_1(\\SS - \\delta) \\to \\mathfrak{S}_n\n\\end{equation*}\nwith transitive image, up to conjugacy in $\\mathfrak{S}_n$. The group $\\pi_1(\\SS - \\delta)$ is isomorphic to $\\mathsf{B}_2(C_2)$, the braid group on two strings on $C_2$; furthermore, our condition that the branching is simple can be translated by requiring that $\\varphi(\\sigma)$ is a transposition, where $\\sigma$ denotes the homotopy class in $\\SS - \\delta$ of a topological loop in $\\SS$ that ``winds once around $\\delta$\". \n\nA group homomorphism $\\mathsf{B}_2(C_2) \\to \\mathfrak{S}_n$ satisfying the requirements above will be called a \\emph{generic monodromy representation} of $\\mathsf{B}_2(C_2)$. 
By using the Computer Algebra System \\verb|GAP4| we computed the number of generic monodromy representations for $2 \\leq n \\leq 9$. In particular, such a number is zero for $n \\in \\{5, \\, 7, \\, 9 \\}$, so there exist no generic covers in these cases.\n\nAs an application of the general theory, we end the paper with a detailed discussion of the cases $n=2$ and $n=3$.\n\n\\section{Braid groups on closed surfaces}\n\nIn this section we collect some preliminary results on surface braid groups that are needed in the rest of the paper.\n\n\\begin{definition-conf}\nLet $X$ be a topological space. The $k$th \\emph{ordered configuration space} of $X$ is defined as\n\\begin{equation} \\label{eq:n-conf}\n\\mathrm{Conf}_k (X):=\\{(x_1,\\ldots, x_k) \\in X^k \\, | \\, x_i \\neq x_j \\;\\; \\textrm{for all} \\;\\; i \\neq j\\}, \n\\end{equation}\nnamely $\\mathrm{Conf}_k (X) = X^k - \\Delta$, where $\\Delta$ is the big diagonal. \n \nThe quotient of $\\mathrm{Conf}_k (X)$ by the natural free action of the symmetric group $\\mathfrak{S}_k$ is called the $k$th \\emph{unordered configuration space} of $X$, and it is denoted by $\\mathrm{UConf}_k (X)$. \n\nThen $\\mathrm{UConf}_k (X) = \\mathrm{Sym}^k(X) - \\delta$, where $\\delta$ denotes the image of $\\Delta$ in the symmetric product. \n\\end{definition-conf}\n\n\\begin{remark}\nIf $X$ is a smooth, compact, $n$-dimensional manifold, then both the configuration spaces $\\mathrm{Conf}_k (X)$ and $\\mathrm{UConf}_k (X)$ are smooth, open, $kn$-dimensional manifolds. \n\\end{remark}\n\n\\begin{definition-braid}\nLet $\\Sigma_g$ be a closed topological surface of genus $g$, and let $\\mathscr{P} = \\{p_1, \\ldots, p_k\\} \\subset \\Sigma_g$ be a set of $k$ distinct points. A \\emph{geometric braid} on $\\Sigma_g$ based at $\\mathscr{P}$ (also called a \\emph{braid on} $k$ \\emph{strings}) is a $k$-tuple $(\\alpha_1, \\ldots, \\alpha_k)$ of paths $\\alpha_i \\colon [0, \\, 1] \\to \\Sigma_g$ such that \n\\begin{itemize}\n\\item $\\alpha_i(0) = p_i, \\quad i=1, \\ldots, k$;\n\\item $\\alpha_i(1) \\in \\mathscr{P}, \\quad i=1, \\ldots, k$;\n\\item the points $\\alpha_1(t), \\ldots, \\alpha_k(t) \\in \\Sigma_g$ are pairwise distinct for all $t \\in [0, \\, 1]$.\n\\end{itemize} \nA geometric braid such that $\\alpha_i(0)=\\alpha_i(1)$ for all $i \\in \\{1, \\ldots, k\\}$ is called a \\emph{pure geometric braid}.\n\\end{definition-braid}\n\\begin{figure}[h]\n\\centering\n\\begin{tikzpicture}\n\\braid[number of strands=3, style strands={1}{red}, style strands={2}{blue}, style strands={3}{olive}, line width=1.5pt]\n(braid) a_1 a_2 a_1^{-1};\n\\end{tikzpicture}\n\\medskip\n\\caption{A non-pure braid on $3$ strings} \\label{fig:braids}\n\\end{figure}\nThe \\emph{braid group} on $k$ strings on $\\Sigma_g$ is the group $\\mathsf{B}_{k}(\\Sigma_g)$ whose elements are the homotopy classes of braids based at $\\mathscr{P}$, with the group operation induced by the usual concatenation of paths. The \\emph{pure braid group} is the subgroup $\\mathsf{P}_{k}(\\Sigma_g)$ of $\\mathsf{B}_{k}(\\Sigma_g)$ given by the homotopy classes of pure braids. 
It can be shown that $\\mathsf{B}_{k}(\\Sigma_g)$ and $\\mathsf{P}_{k}(\\Sigma_g)$ do not depend on the choice of the set $\\mathscr{P}$, and that there is a short exact sequence of groups\n\\begin{equation} \\label{eq:pure-nonpure}\n1 \\to \\mathsf{P}_{k}(\\Sigma_g)\\to \\mathsf{B}_{k}(\\Sigma_g)\\to \\mathfrak{S}_k \\to 1.\n\\end{equation}\nMoreover, there are isomorphisms \n\\begin{equation} \\label{eq:iso-braids}\n\\mathsf{P}_{k}(\\Sigma_g) \\simeq \\pi_1(\\mathrm{Conf}_k (\\Sigma_g)), \\quad \\mathsf{B}_{k}(\\Sigma_g) \\simeq \\pi_1(\\mathrm{UConf}_k (\\Sigma_g)),\n\\end{equation}\nso that we can interpret \\eqref{eq:pure-nonpure} as the short exact sequence of fundamental groups induced by the $\\mathfrak{S}_k$-covering \n\\begin{equation}\n\\mathrm{Conf}_k (\\Sigma_g) \\to \\mathrm{UConf}_k (\\Sigma_g). \n\\end{equation}\nBraid groups are an important and flexible tool used in several areas of science, such as \n\\begin{itemize}\n\\item[-] Knot Theory (Alexander's theorem)\n\\item[-] Mathematical Physics (Yang-Baxter's equation)\n\\item[-] Mechanical Engineering (robot motion planning)\n\\item[-] Algebraic Geometry (monodromy invariants).\n\\end{itemize}\nWe will focus on the last topic, explaining\nhow monodromy representations of surface braid groups in the symmetric group can be used to produce interesting examples of projective surfaces defined over the field of complex numbers. \\\\\n\nWe are primarily interested in the case $g=k=2$. In that case, a simple presentation for the braid group is provided by the following result.\n\\begin{bellingeri} \nThe braid group $\\mathsf{B}_2(\\Sigma_2)$ can be generated by five elements $a_1, \\, a_2, \\, b_1, \\, b_2, \\, \\sigma,$\nsubject to the eleven relations below$:$\n\\begin{equation} \\label{eq:relations}\n\\begin{split}\n(R2) \\quad & \\sigma^{-1} a_1 \\sigma^{-1} a_1= a_1 \\sigma^{-1} a_1 \\sigma^{-1} \\\\ & \\sigma^{-1} a_2 \\sigma^{-1} a_2= a_2 \\sigma^{-1} a_2 \\sigma^{-1} \\\\ &\n\\sigma^{-1} b_1 \\sigma^{-1} b_1 = b_1 \\sigma^{-1} b_1 \\sigma^{-1} \\\\ & \\sigma^{-1} b_2 \\sigma^{-1} b_2 = b_2 \\sigma^{-1} b_2 \\sigma^{-1}\\\\ \n& \\\\\n(R3) \\quad & \\sigma^{-1} a_1 \\sigma a_2 = a_2 \\sigma^{-1} a_1 \\sigma \\\\ & \\sigma^{-1} b_1 \\sigma b_2 = b_2 \\sigma^{-1} b_1 \\sigma \\\\\n& \\sigma^{-1} a_1 \\sigma b_2 = b_2 \\sigma^{-1} a_1 \\sigma \\\\\n& \\sigma^{-1} b_1 \\sigma a_2 = a_2 \\sigma^{-1} b_1 \\sigma \\\\\n & \\\\\n(R4) \\quad & \\sigma^{-1} a_1 \\sigma^{-1} b_1 = b_1 \\sigma^{-1} a_1 \\sigma \\\\\n & \\sigma^{-1} a_2 \\sigma^{-1} b_2 = b_2 \\sigma^{-1} a_2 \\sigma \\\\\n & \\\\\n (TR) \\quad & [a_1, \\, b_1^{-1}] [a_2, \\, b_2^{-1}]= \\sigma^2. \n\\end{split}\n\\end{equation}\n\\end{bellingeri}\nHere the $a_i$ and the $b_i$ are pure braids coming from the representation of the topological surface $\\Sigma_2$ as a polygon with $8$ sides and the standard identification of the edges, whereas $\\sigma$ is a non-pure braid exchanging the two points $p_1$, $p_2 \\in \\Sigma_2$. These braids are depicted in Figure \\ref{fig:2}; note that, in the case of both $a_i$ and $b_j$, the only non-trivial string is the first one, which goes through the wall $\\alpha_i$, respectively the wall $\\beta_j$. 
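\n\nStarting from this explicit finite presentation, the counts of generic monodromy representations quoted in this note can in principle be reproduced by a direct enumeration. The short script below is only a minimal illustrative sketch of such a check (it is not the $\\mathsf{GAP4}$ code used for the actual computations): it encodes the eleven relations exactly as listed above, adopts the convention $[x, \\, y] = xyx^{-1}y^{-1}$ for the commutators appearing in $(TR)$, and counts the tuples of permutations in $\\mathfrak{S}_n$ that satisfy all the relations, generate a transitive subgroup and send $\\sigma$ to a transposition. For $n=2$ and $n=3$ the search space is tiny, so the check runs instantly and should reproduce the values $16$ and $240$ discussed in the examples below.\n\\begin{verbatim}
# Brute-force count of generic monodromy representations B_2(Sigma_2) -> S_n.
# Permutations are encoded as tuples: p[i] is the image of i.
from itertools import permutations, product

def compose(p, q):
    # composition "p after q": (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def word(*factors):
    # evaluate a word in the generators, multiplying from left to right
    w = factors[0]
    for f in factors[1:]:
        w = compose(w, f)
    return w

def is_transposition(p):
    return sum(1 for i, pi in enumerate(p) if pi != i) == 2

def is_transitive(gens, n):
    orbit, stack = {0}, [0]
    while stack:
        x = stack.pop()
        for g in gens:
            if g[x] not in orbit:
                orbit.add(g[x])
                stack.append(g[x])
    return len(orbit) == n

def count_generic_representations(n):
    Sn = list(permutations(range(n)))
    count = 0
    for s in (p for p in Sn if is_transposition(p)):   # image of sigma
        si = inverse(s)
        for a1, a2, b1, b2 in product(Sn, repeat=4):
            relations = [
                # (R2)
                word(si, a1, si, a1) == word(a1, si, a1, si),
                word(si, a2, si, a2) == word(a2, si, a2, si),
                word(si, b1, si, b1) == word(b1, si, b1, si),
                word(si, b2, si, b2) == word(b2, si, b2, si),
                # (R3)
                word(si, a1, s, a2) == word(a2, si, a1, s),
                word(si, b1, s, b2) == word(b2, si, b1, s),
                word(si, a1, s, b2) == word(b2, si, a1, s),
                word(si, b1, s, a2) == word(a2, si, b1, s),
                # (R4)
                word(si, a1, si, b1) == word(b1, si, a1, s),
                word(si, a2, si, b2) == word(b2, si, a2, s),
                # (TR), with the convention [x, y] = x y x^{-1} y^{-1}
                word(a1, inverse(b1), inverse(a1), b1,
                     a2, inverse(b2), inverse(a2), b2) == compose(s, s),
            ]
            if all(relations) and is_transitive((a1, a2, b1, b2, s), n):
                count += 1
    return count

if __name__ == "__main__":
    for n in (2, 3):
        print(n, count_generic_representations(n))
\\end{verbatim}\nA search of this naive kind becomes expensive quickly, since the number of candidate tuples grows like $(n!)^4$; for the larger degrees the counts reported below were obtained with $\\mathsf{GAP4}$.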
\n\\begin{figure}[H]\n \\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]\n\\clip(-6.17,-1.27) rectangle (15.43,5.43);\n\\fill[line width=0.4pt,color=rvwvcq,fill=rvwvcq,fill opacity=0.08] (-2.5,4.61) -- (-4.26,4.63) -- (-5.518650070512054,3.3996342007354072) -- (-5.538650070512054,1.6396342007354078) -- (-4.3082842712474605,0.38098413022335364) -- (-2.548284271247461,0.3609841302233536) -- (-1.289634200735407,1.5913499294879463) -- (-1.269634200735407,3.351349929487946) -- cycle;\n\\fill[line width=0.8pt,color=rvwvcq,fill=rvwvcq,fill opacity=0.07] (3.21,4.55) -- (1.45,4.57) -- (0.1913499294879455,3.3396342007354085) -- (0.17134992948794458,1.579634200735409) -- (1.4017157287525365,0.32098413022335403) -- (3.161715728752536,0.30098413022335313) -- (4.420365799264591,1.531349929487945) -- (4.440365799264592,3.2913499294879447) -- cycle;\n\\fill[line width=0.4pt,color=rvwvcq,fill=rvwvcq,fill opacity=0.08] (8.97,4.53) -- (7.21,4.55) -- (5.951349929487945,3.3196342007354067) -- (5.931349929487945,1.559634200735406) -- (7.161715728752538,0.3009841302233509) -- (8.921715728752538,0.2809841302233509) -- (10.180365799264594,1.511349929487944) -- (10.200365799264593,3.2713499294879446) -- cycle;\n\\draw [line width=0.4pt,color=rvwvcq] (-5.538650070512054,1.6396342007354078)-- (-4.3082842712474605,0.38098413022335364);\n\\draw [line width=0.4pt,color=rvwvcq] (-4.3082842712474605,0.38098413022335364)-- (-2.548284271247461,0.3609841302233536);\n\\draw [line width=0.4pt,color=rvwvcq] (-2.548284271247461,0.3609841302233536)-- (-1.289634200735407,1.5913499294879463);\n\\draw [line width=0.4pt,color=rvwvcq] (-1.289634200735407,1.5913499294879463)-- (-1.269634200735407,3.351349929487946);\n\\draw [line width=0.8pt,color=rvwvcq] (3.21,4.55)-- (1.45,4.57);\n\\draw [line width=0.8pt,color=rvwvcq] (1.45,4.57)-- (0.1913499294879455,3.3396342007354085);\n\\draw [line width=0.8pt,color=rvwvcq] (4.420365799264591,1.531349929487945)-- (4.440365799264592,3.2913499294879447);\n\\draw [line width=0.8pt,color=rvwvcq] (4.440365799264592,3.2913499294879447)-- (3.21,4.55);\n\\draw [line width=0.4pt,color=rvwvcq] (8.97,4.53)-- (7.21,4.55);\n\\draw [line width=0.4pt,color=rvwvcq] (7.21,4.55)-- (5.951349929487945,3.3196342007354067);\n\\draw [line width=0.4pt,color=rvwvcq] (5.951349929487945,3.3196342007354067)-- (5.931349929487945,1.559634200735406);\n\\draw [line width=0.4pt,color=rvwvcq] (5.931349929487945,1.559634200735406)-- (7.161715728752538,0.3009841302233509);\n\\draw [line width=0.4pt,color=rvwvcq] (7.161715728752538,0.3009841302233509)-- (8.921715728752538,0.2809841302233509);\n\\draw [line width=0.4pt,color=rvwvcq] (8.921715728752538,0.2809841302233509)-- (10.180365799264594,1.511349929487944);\n\\draw [line width=0.4pt,color=rvwvcq] (10.180365799264594,1.511349929487944)-- (10.200365799264593,3.2713499294879446);\n\\draw [line width=0.4pt,color=rvwvcq] (10.200365799264593,3.2713499294879446)-- (8.97,4.53);\n\\draw [line width=0.8pt,color=ffqqqq] (-3.98,2.49)-- (-1.8820135571336331,3.9778069722401543);\n\\draw [->,line width=0.8pt,color=ffqqqq] (-4.912072950290508,3.992580374435121) -- (-3.98,2.49);\n\\draw [line width=0.8pt,color=ffqqqq] (1.73,2.43)-- (0.7983789804885912,0.9381906887922181);\n\\draw [->,line width=0.8pt,color=ffqqqq] (3.787196012305263,0.9124086770670292) -- (1.73,2.43);\n\\draw [->,line width=1pt,color=qqqqcc] (-5.5386500705120545,1.6396342007354092) -- (-5.518650070512055,3.399634200735406);\n\\draw [->,line width=1pt,color=qqqqcc] 
(-5.518650070512055,3.399634200735406) -- (-4.26,4.63);\n\\draw [->,line width=1pt,color=qqqqcc] (-1.269634200735407,3.3513499294879456) -- (-2.5,4.61);\n\\draw [->,line width=1pt,color=qqqqcc] (-2.5,4.61) -- (-4.26,4.63);\n\\draw [->,line width=1pt,color=qqqqcc] (0.1913499294879455,3.3396342007354085) -- (0.17,1.51);\n\\draw [->,line width=1pt,color=qqqqcc] (0.17139736171360578,1.6297503150282006) -- (1.41,0.33);\n\\draw [->,line width=1pt,color=qqqqcc] (3.17,0.33) -- (1.41,0.33);\n\\draw [->,line width=1pt,color=qqqqcc] (4.421030852899942,1.5898746493988644) -- (3.17,0.33);\n\\draw [shift={(8.11,2.4)},line width=0.8pt,color=ffqqqq] plot[domain=-0.016127633843636246:3.125465019746157,variable=\\t]({1*0.6200806399171007*cos(\\t r)+0*0.6200806399171007*sin(\\t r)},{0*0.6200806399171007*cos(\\t r)+1*0.6200806399171007*sin(\\t r)});\n\\draw [->,line width=2pt,color=ffqqqq] (8.639103545440543,2.723341055546999) -- (8.73,2.39);\n\\draw [shift={(8.11,2.4)},line width=0.8pt,color=ffqqqq] plot[domain=3.125465019746157:6.26705767333595,variable=\\t]({1*0.6200806399170989*cos(\\t r)+0*0.6200806399170989*sin(\\t r)},{0*0.6200806399170989*cos(\\t r)+1*0.6200806399170989*sin(\\t r)});\n\\draw [->,line width=2pt,color=ffqqqq] (7.563712410516943,2.106623331573913) -- (7.49,2.41);\n\\draw [->,line width=2pt,color=ffqqqq] (7.563712410516943,2.106623331573913) -- (7.49,2.41);\n\\draw (-3.63,-0.35) node[anchor=north west] {$a_i$};\n\\draw (2.37,-0.35) node[anchor=north west] {$b_j$};\n\\draw (7.69,-0.35) node[anchor=north west] {$\\sigma$};\n\\begin{scriptsize}\n\\draw [fill=black] (-3.98,2.49) circle (1.5pt); \\draw[color=black]\n(-3.89,2.08) node {$p_1$}; \\draw [fill=black] (-2.74,2.47)\ncircle(1.5pt); \\draw[color=black] (-2.65,2.04) node {$p_2$}; \\draw\n[fill=black] (1.73,2.43) circle (1.5pt); \\draw[color=black]\n(1.83,2.02) node {$p_1$}; \\draw [fill=black] (2.97,2.41) circle\n(1.5pt); \\draw[color=black] (3.05,1.98) node {$p_2$}; \\draw\n[fill=black] (7.49,2.41) circle (1.5pt); \\draw[color=black] (7.14,2.41) node {$p_1$}; \\draw [fill=black]\n(8.73,2.39) circle (1.5pt); \\draw[color=black] (9.10,2.39) node {$p_2$}; \n\\draw[color=qqqqcc] (-5.80,2.46) node\n{$\\beta_i$}; \n\\draw[color=qqqqcc] (-5.15,4.1) node {$\\alpha_i$};\n\\draw[color=qqqqcc] (-1.6,4) node {$\\alpha_i$};\n\\draw[color=qqqqcc] (-3.20,4.88) node {$\\beta_i$};\n\\draw[color=qqqqcc] (-0.11,2.46) node {$\\alpha_j$};\n\\draw[color=qqqqcc] (0.57,0.82) node {$\\beta_j$};\n\\draw[color=qqqqcc] (2.50,0) node {$\\alpha_j$}; \\draw[color=qqqqcc]\n(4.07,0.82) node {$\\beta_j$};\n\\end{scriptsize}\n\\end{tikzpicture}\n\\medskip\n\\caption{Generators of $\\mathsf{B}_2(\\Sigma_2)$}\\label{fig:2}\n\\end{figure}\nRegarding the generator $\\sigma$, in terms of the isomorphism \n\\begin{equation}\n\\mathsf{B}_{2}(\\Sigma_2) \\simeq \\pi_1(\\mathrm{UConf}_2 (\\Sigma_2)) = \\pi_1(\\mathrm{Sym}^2(\\Sigma_2)-\\delta), \n\\end{equation}\nit corresponds to the homotopy class in $\\mathrm{UConf}_2(\\Sigma_2)$ of a topological loop in $\\mathrm{Sym^2}(\\Sigma_2)$ that ``winds once around the diagonal $\\delta$\".\n\n\\section{Finite coverings and monodromy representations}\n\nLet us recall now the classification of branched coverings \n$f \\colon X \\to Y$ of complex, projective varieties via the classification of monodromy representations of the fundamental group $\\pi_1(Y-B)$, where $B \\subset Y$ is the branch locus of $f$. 
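Away from the branch locus this is classical covering space theory: setting $U = Y - B$, isomorphism classes of connected, unramified coverings of degree $n$ of $U$ correspond to conjugacy classes of index $n$ subgroups of $\\pi_1(U)$ or, equivalently, to group homomorphisms $\\pi_1(U) \\to \\mathfrak{S}_n$ with transitive image, up to conjugacy in $\\mathfrak{S}_n$; the homomorphism records the monodromy action of $\\pi_1(U)$ on a general fibre. The point is then to extend such topological coverings of $U$ to finite coverings of the projective variety $Y$, branched over $B$. 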
The main technical tools needed are the \\emph{Grauert-Remmert extension theorem} and the GAGA \\emph{principle}, which we recall below. \n\n\\begin{GR}\nLet $Y$ be a normal analytic space over $\\mathbb{C}$ and $Z \\subset Y$ a closed analytic subspace such that $U=Y - Z$ is dense in $Y$. Then any finite, analytic, unramified covering $f^{\\circ} \\colon V \\to U$ can be extended to a normal, analytic, finite covering $f \\colon X \\to Y$, branched at most over $Z$. Furthermore, such an extension is unique up to analytic isomorphisms. \n\\end{GR}\n\n\\begin{GAGA}\nLet $X$, $Y$ be projective varieties over $\\mathbb{C}$, and $X^{\\rm an}$, $Y^{\\rm an}$ the underlying complex analytic spaces. Then\n\\begin{itemize}\n\\item[$\\boldsymbol{(i)}$] every analytic map $X^{\\rm an} \\to Y^{\\rm an}$ is algebraic$;$\n\\item[$\\boldsymbol{(ii)}$] every coherent analytic sheaf on $X^{\\rm an}$ is algebraic, and its algebraic cohomology coincides with its analytic one.\n\\end{itemize} \n\\end{GAGA}\nFrom this, we deduce the following important consequences.\n\\begin{proj-ext} \nLet $Y$ be a smooth, projective variety over $\\mathbb{C}$ and $Z \\subset Y$ be a smooth, irreducible divisor. Set $U=Y - Z$. Then any finite, unramified analytic covering $f^{\\circ} \\colon V \\to U$ can be extended in a unique way to a finite covering $f \\colon X \\to Y$, branched at most over $Z$. \\vskip 3mm\n\nMoreover, there exists on $X$ a unique structure of smooth projective variety that makes $f$ an algebraic finite covering. \n\\end{proj-ext}\n\n\n\n\\begin{monodromy} \nLet $Y$ be a smooth projective variety over $\\mathbb{C}$ and $Z \\subset Y$ be a smooth, irreducible divisor. Then isomorphism classes of connected coverings of degree $n$\n\\begin{equation*}\nf \\colon X \\to Y,\n\\end{equation*}\nbranched at most over $Z$, are in bijection with group homomorphisms with transitive image\n\\begin{equation} \n\\varphi \\colon \\pi_1(Y - Z) \\to \\mathfrak{S}_n,\n\\end{equation}\nup to conjugacy in $\\mathfrak{S}_n$. Furthermore, $f$ is a Galois covering if and only if the subgroup $\\mathrm{im}\\, \\varphi$ of $\\mathfrak{S}_n$ has order $n$, and in this case $\\mathrm{im}\\, \\varphi$ is isomorphic to the Galois group of $f$.\n\\end{monodromy}\nThe group homomorphism \n\\begin{equation} \n\\varphi \\colon \\pi_1(Y - Z) \\to \\mathfrak{S}_n\n\\end{equation}\nis called the \\emph{monodromy representation} of the covering $f \\colon X \\to Y$, and its image $\\mathrm{im}\\, \\varphi \\subseteq \\mathfrak{S}_n$ is called the \\emph{monodromy group} of $f$. The last corollary implies that, if $f$ is a Galois covering, then the monodromy group of $f$ is isomorphic to its Galois group (coinciding with the group $D(X\/Y)$ of deck transformations of the covering).\n\n\\section{Generic coverings of $\\mathrm{UConf}_2(\\Sigma_2)$}\n\nWe now apply the previous theory in the special case\n\\begin{equation}\nY = \\mathrm{Sym}^2(\\Sigma_2), \\quad Z = \\delta,\n\\end{equation}\nso that $Y - Z = \\mathrm{UConf}_2(\\Sigma_2)$. We consider $\\Sigma_2$ as a \\emph{compact Riemann surface}, namely we fix a complex structure on it. Then the Abel-Jacobi map\n\\begin{equation}\n\\pi \\colon \\mathrm{Sym}^2(\\Sigma_2) \\to J(\\Sigma_2)\n\\end{equation}\nis a birational morphism onto the Abelian surface $J(\\Sigma_2)$; more precisely, it is the blow-down of the unique rational curve $E \\subset \\mathrm{Sym}^2(\\Sigma_2)$, namely the $(-1)$-curve given by the graph of the hyperelliptic involution on $\\Sigma_2$. 
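In particular, since $\\pi$ is the blow-up of the Abelian surface $J(\\Sigma_2)$ at one point, the basic invariants of $Y = \\mathrm{Sym}^2(\\Sigma_2)$ follow from the standard blow-up formulas (recall that $K_{J(\\Sigma_2)}$ is trivial):\n\\begin{equation*}\nK_Y = \\pi^* K_{J(\\Sigma_2)} + E = E, \\quad K_Y^2 = -1, \\quad q(Y)=2, \\quad p_g(Y)=1, \\quad \\chi(\\mathcal{O}_Y) = 0, \\quad e(Y) = 1,\n\\end{equation*}\nwhere $e$ denotes the topological Euler number; we record these values here because they enter the computation of the invariants of the coverings considered below. 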
\n\nWe have $\\delta E=6$, because the curve $E$ intersects the diagonal $\\delta$ transversally at the six points corresponding to the six Weierstrass points of $\\Sigma_2$. Writing $\\Theta$ for the numerical class of a Theta divisor in $J(\\Sigma_2)$, it follows that the image $D:=\\pi_*\\delta \\subset J(\\Sigma_2)$ is an irreducible curve with an ordinary sextuple point and no other singularities, whose numerical class is $4 \\Theta$. \n\n\\begin{definition-gen-cov}\nLet $f \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2)$ be a connected covering of degree $n$\nbranched over the diagonal $\\delta$, with ramification divisor $R \\subset S$. Then $f$ is called \\emph{generic} if \n\\begin{equation*}\nf^* \\delta= 2R + R_0,\n\\end{equation*}\nwhere the restriction $\\left.f\\right|_{R} \\colon R \\to \\delta$ is an isomorphism and $R_0$ is an effective divisor over which $f$ is not ramified. \n\\end{definition-gen-cov}\nGeneric coverings are never Galois, unless $n=2$, in which case $R_0$ is empty and $f^* \\delta = 2R$. \nSince $\\delta$ is smooth, the genericity condition in the previous definition is equivalent to requiring that the fibre of $f$ over every point of $\\delta$ has cardinality $n-1$. \n\nSetting $\\alpha:=\\pi \\circ f$, the case where the curve $Z=f^*(E)$ is irreducible is illustrated in Figure \\ref{fig:3}.\n\\begin{figure}[H]\n\\centering\n\\begin{tikzpicture}[xscale=-1,yscale=-0.25,inner sep=0.7mm,place\/.style={circle,draw=black!100,fill=black!100,thick}] \n\\draw (-0.7,-0.5) rectangle (0.6,5);\n\n\\draw[red,rotate=92,x=6.28ex,y=1ex] (0.9,-0.85) cos (1,0) sin (1.25,1) cos (1.5,0) sin (1.75,-1) cos (2,0) sin (2.25,1) cos (2.5,0) sin (2.75,-1) cos (3,0) sin (3.25,1) cos (3.5,0) sin (3.6,-0.85);\n\\draw[rotate=92] (0.5,-0.025) .. controls (1.75,0.035) and (2.75,0.035) .. (4,-0.025);\n\n\\draw (-6.7,-0.5) rectangle (-5.4,5);\n\n\\draw[red,rotate=92,x=6.28ex,y=1ex,xshift=6,yshift=170] (0.9,-0.85) cos (1,0) sin (1.25,1) cos (1.5,0) sin (1.75,-1) cos (2,0) sin (2.25,1) cos (2.5,0) sin (2.75,-1) cos (3,0) sin (3.25,1) cos (3.5,0) sin (3.6,-0.85);\n\\draw[rotate=92,xshift=6,yshift=170] (0.5,-0.025) .. controls (1.75,0.035) and (2.75,0.035) .. 
(4,-0.025);\n\n\\draw (-6.7,15.5) rectangle (-5.4,21);\n\n\\draw[red,xscale=-1,yscale=-4] (6.05,-4.55) node(A0) [place,scale=0.2]{} to [in=5,out=55,looseness=8mm,loop] () to [in=65,out=115,looseness=8mm,loop] () to [in=125,out=175,looseness=8mm,loop] () to [in=185,out=235,looseness=8mm,loop] () to [in=245,out=295,looseness=8mm,loop] () to [in=305,out=355,looseness=8mm,loop] ();\n\n\\draw[xscale=-1,yscale=-4] (0.2,-0.1) node(B0) []{\\textcolor{red}{\\footnotesize{$R$}}};\n\\draw[xscale=-1,yscale=-4] (0.25,-1.05) node(0B) []{\\footnotesize{$Z$}};\n\n\\draw[xscale=-1,yscale=-4] (6.25,-0.1) node(B1) []{\\textcolor{red}{\\footnotesize{$\\delta$}}};\n\\draw[xscale=-1,yscale=-4] (6.25,-1.05) node(1B) []{\\footnotesize{$E$}};\n\n\\draw[xscale=-1,yscale=-4] (6.4,-5) node(B2) []{\\textcolor{red}{\\footnotesize{$D$}}};\n\n\\draw[xscale=-1,yscale=-4] (-0.9,-0.125) node(C0) []{$S$};\n\\draw[xscale=-1,yscale=-4] (7,-0.125) node(C1) []{$\\quad \\quad \\quad \\; \\mathrm{Sym}^2(\\Sigma_2)$};\n\\draw[xscale=-1,yscale=-4] (7,-5.00) node(C2) []{$\\quad \\quad J(\\Sigma_2)$};\n\n\\draw[->] (-0.9,2.25) -- (-5.2,2.25) node[midway,above] {$f$};\n\n\\draw[->] (-6.05,5.8) -- (-6.05,14.7) node[midway,right] {$\\pi$};\n\n\\draw[->] (-0.9,4.8) -- (-5.2,15.5) node[midway,below=3pt] {$\\alpha$}; \n\\end{tikzpicture}\n\\medskip\n\\caption{A generic covering of $\\mathrm{Sym}^2(\\Sigma_2)$ branched over $\\delta$ }\\label{fig:3}\n\\end{figure}\n\nFrom now on, by \\emph{surface} we mean a smooth, complex, projective variety $S$ with $\\dim_{\\mathbb{C}}(S)=2$. For such a surface\n \\begin{itemize}\n\\item $K_S=\\wedge^2 \\Omega_S^1$ denotes the \\emph{canonical line bundle} \n\\item $p_g(S)=h^0(S, \\, K_S)$ is the \\emph{geometric genus}\n\\item $q(S)=h^1(S, \\, K_S)$ is the \\emph{irregularity}\n\\item $\\chi(\\mathcal{O}_S)=1-q(S)+p_g(S)$ is the \\emph{holomorphic Euler-Poincar\\'e characteristic}.\n\\end{itemize}\nThe zero locus of a meromorphic section of $K_S$ defines a class in $H_2(S, \\, \\mathbb{Z})$, whose Poincar\\'e \ndual $[K_S] \\in H^2(S, \\, \\mathbb{Z})$ is called the \\emph{canonical class} of $S$. Its self-intersection is the integer number defined by the cup product\n\\begin{equation}\nK_S^2=[K_S] \\cup [K_S] \\in H^4(S, \\, \\mathbb{Z}) \\simeq \\mathbb{Z}.\n\\end{equation}\nGiven a surface $S$, we can define some important meromorphic maps on it: the \\emph{pluricanonical maps} and the \\emph{Albanese map}. \n\\begin{definition-pluri}\nSet $N(r):=\\dim H^0(S, \\, K_S^{\\otimes r})-1$ and let $\\{ \\sigma_0, \\ldots, \\sigma_{N(r)} \\}$ be a basis for $H^0(S, \\, K_S^{\\otimes r})$. Then the $r$-\\emph{th} \\emph{pluricanonical map} of $S$ is the rational map\n\\begin{equation}\n\\psi_r \\colon S \\to \\mathbb{P}^{N(r)}(\\mathbb{C}), \\quad x \\mapsto [\\sigma_0(x): \\ldots : \\sigma_{N(r)}(x)].\n\\end{equation}\nWe say that $S$ is \\emph{of general type} if the image of $\\psi_r$ is a surface for $r$ large enough (i.e., if $\\psi_r$ is generically finite onto its image for $r$ large enough).\n\\end{definition-pluri}\n\\begin{definition-alb}\nThe \\emph{Albanese map} of $S$ is the rational morphism\n\\begin{equation}\na_S \\colon S \\to \\mathrm{Alb}(S):=H^0(S, \\, \\Omega^1_S)^*\/H_1(S, \\, \\mathbb{Z}), \n\\end{equation}\ndefined by the integration of global, holomorphic $1$-forms on $S$ (this is a generalization of the Abel-Jacobi map $C \\to J(C)$, sending a smooth complex curve into its Jacobian). 
Note that $\\mathrm{Alb}(S)$ is a complex torus (actually, an Abelian variety) of dimension $q(S)$.\n\nWe say that $S$ is \\emph{of maximal Albanese dimension} if the image of its Albanese map is a surface \n(i.e., if $a_S$ is generically finite onto its image); note that this condition implies $q(S) \\geq 2$.\n\\end{definition-alb}\nSurfaces of general type satisfy $\\chi(\\mathcal{O}_S) \\geq 1$, and those attaining the minimal value $\\chi(\\mathcal{O}_S)=1$ are usually difficult to construct. This explains the relevance of the following result.\n\\begin{theoremP1}\nLet $f \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2)$ be a generic covering of degree $n$ whose branch locus is the diagonal $\\delta$. Then $S$ is a surface of maximal Albanese dimension with\n\\begin{equation*}\n\\chi(\\mathcal{O}_S)=1, \\quad K_S^2= 10-n. \n\\end{equation*}\nMoreover, if $ 2 \\leq n \\leq 9$ then $S$ is of general type.\n\\end{theoremP1}\nOur aim now is to use the theory developed above in order to construct generic coverings $f \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2)$.\n\n\\begin{definition-gen-mod} \nA \\emph{generic monodromy representation} of the braid group $\\mathsf{B}_2(\\Sigma_2)$ is a group homomorphism \n\\begin{equation}\n\\varphi \\colon \\mathsf{B}_2(\\Sigma_2) \\to \\mathfrak{S}_n\n\\end{equation}\nwith transitive image and such that $\\varphi(\\sigma)$ is a \\emph{transposition}.\n\\end{definition-gen-mod}\nBy the previous discussion, since $\\mathsf{B}_2(\\Sigma_2) \\simeq \\pi_1(\\mathrm{Sym}^2(\\Sigma_2) - \\delta)$, generic coverings and generic monodromy representations are related by the following\n\\begin{theoremP2} \nIsomorphism classes of generic coverings of degree $n$ \n\\begin{equation}\nf \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2),\n\\end{equation}\nwith branch locus $\\delta$, are in bijective correspondence with generic monodromy representations\n\\begin{equation}\n\\varphi \\colon \\mathsf{B}_2(\\Sigma_2) \\to \\mathfrak{S}_n,\n\\end{equation} \nup to conjugacy in $\\mathfrak{S}_n$. \nFor $2 \\leq n \\leq 9$, the number of such representations is given in the table below$:$ \n\\begin{table}[H]\n\\begin{center}\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n$ n$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ \\\\\n \\hline\n$\\mathrm{Number \\; of}$ $\\varphi$ & $16$ & $3 \\cdot 80$ & $6 \\cdot 480$ & $0$ & $15 \\cdot 2880$ & $0$ & $28 \\cdot 172800$ & $0$\\\\\n\\end{tabular}\n\\end{center}\n\\end{table} In particular, for $n \\in \\{5, \\, 7, \\, 9\\}$ there exist no generic coverings.\n\\end{theoremP2}\n\n\\bigskip \\bigskip\n\nAs an application of the general theory, let us finish this note by discussing in detail the cases $n=2$ and $n=3$.\n\\begin{example-2}\nIn this case we are looking for generic monodromy representations\n\\begin{equation}\n\\varphi \\colon \\mathsf{B}_2(\\Sigma_2) \\to \\mathfrak{S}_2 = \\{(1), \\, (1 \\, 2)\\}.\n\\end{equation} \nSince $\\mathsf{B}_2(\\Sigma_2)$ is generated by five elements $a_1$, $a_2$, $b_1$, $b_2$, $\\sigma$, and necessarily $\\varphi(\\sigma)=(1 \\, 2)$, we see that there are $2^4=16$ possibilities for $\\varphi$; all of them satisfy the relations \\eqref{eq:relations}, because $\\mathfrak{S}_2$ is abelian and $\\varphi(\\sigma)^2=(1)$. \n\nThe group $\\mathfrak{S}_2$ is abelian, so there is no conjugacy relation to consider and we get sixteen isomorphism classes of double coverings $f \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2)$, branched over $\\delta$ and with \n\\begin{equation}\n\\chi(\\mathcal{O}_S)=1, \\quad K_S^2=8.\n\\end{equation}\nThese coverings correspond to the sixteen square roots of $\\delta$ in the Picard group of $\\mathrm{Sym}^2(\\Sigma_2)$. 
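Indeed, these invariants can be checked directly. Any square root of $\\delta$ has numerical class $L \\equiv \\pi^*(2\\Theta) - 3E$, since $\\delta \\equiv \\pi^*(4\\Theta) - 6E$ by the description of $D = \\pi_* \\delta$ given above, so that $L^2 = -1$ and $L \\cdot K_Y = L \\cdot E = 3$ (recall that $K_Y = E$ and $\\chi(\\mathcal{O}_Y)=0$ for $Y = \\mathrm{Sym}^2(\\Sigma_2)$). The standard formulas for a smooth double cover branched along a curve in $|2L|$ then give\n\\begin{equation*}\n\\chi(\\mathcal{O}_S) = 2\\chi(\\mathcal{O}_Y) + \\tfrac{1}{2}\\, L \\cdot (L + K_Y) = 0 + \\tfrac{1}{2}(-1+3) = 1, \\qquad K_S^2 = 2\\,(K_Y + L)^2 = 2\\,(-1+6-1) = 8,\n\\end{equation*}\nin agreement with the formula $K_S^2 = 10-n$ for $n=2$. 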
One covering coincides with the natural projection $f \\colon \\Sigma_2 \\times \\Sigma_2 \\to \\mathrm{Sym}^2(\\Sigma_2)$; in fact \n\\begin{equation*}\np_g(\\Sigma_2 \\times \\Sigma_2) = q(\\Sigma_2 \\times \\Sigma_2) =4, \\quad K_{\\Sigma_2 \\times \\Sigma_2}^2=8.\n\\end{equation*}\nThe remaining fifteen coverings are surfaces of general type with\n\\begin{equation}\np_g(S)=q(S)=2, \\quad K_S^2=8.\n\\end{equation}\n\\end{example-2}\n\\begin{example-3}\nIn this case we are looking for generic monodromy representations\n\\begin{equation*}\n\\varphi \\colon \\mathsf{B}_2(\\Sigma_2) \\to \\mathfrak{S}_3, \n\\end{equation*} \nup to conjugacy in $\\mathfrak{S}_3$. By hand, or by using a Computer Algebra System like $\\mathsf{GAP4}$ (see \\cite{5}), we find that the total number of monodromy representations is $240$. For every such representation we have $\\mathrm{im}\\, \\varphi = \\mathfrak{S}_3$. Moreover, each orbit of the conjugacy action of $\\mathfrak{S}_3$ on the set of monodromy representations consists of six elements, and consequently the orbit set has cardinality $240\/6 = 40$. \nBy our last theorem, this implies that there are $40$ isomorphism classes of generic coverings $f \\colon S \\to \\mathrm{Sym}^2(\\Sigma_2)$ of degree $3$ branched over $\\delta$. For all of them, $S$ is a surface of general type with\n\\begin{equation*}\n p_g(S)=q(S)=2, \\quad K_S^2=7 \n\\end{equation*}\nand its Albanese map $\\alpha \\colon S \\to J(\\Sigma_2)$ is a generically finite covering of degree $3$.\n\nThese surfaces were previously studied by R. Pignatelli and the Author in \\cite{6}, by using completely different methods. In fact, it is shown there that these surfaces all lie in the same deformation class, and that their moduli space is a connected, quasi-finite cover of degree $40$ of $\\mathcal{M}_2$, the coarse moduli space of curves of genus $2$.\n\\end{example-3}\n\n\\section*{Acknowledgments}\nThe Author wishes to thank the organizers of Group 32 - The 32nd \\emph{International Colloquium on Group Theoretical Methods in Physics}, held at the Czech Technical University (Prague) on July 9--13, 2018, for the invitation and the hospitality.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}