\subsubsection*{Acknowledgements} \label{sec:acknowledgements}\nWe would like to thank Tatiana Likhomanenko, Tiffany Cai, Eric Mintun, Janice Lan, Mike Rabbat, Sergey Edunov, Yuandong Tian, and Lyndon Duong for their productive scrutiny and insightful feedback.\n\section{Appendix} \label{sec:appendix}\n\n\subsection{Models, training, datasets, and software} \label{appendix:models_training_etc}\n\nOur experiments were performed on ResNet18 \citep{he_resnet_2016} trained on Tiny Imagenet \citep{tiny_imagenet}, and ResNet20 (\citet{he_resnet_2016}; code modified from \citet{github_pytorch_resnet20_2020}) and a VGG16-like network \citep{simonyan_vgg_15}, both trained on CIFAR10 \citep{krizhevsky_cifar10_2009}. All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001.\n\nThe maxpool layer after the first batchnorm layer (see \citet{he_resnet_2016}) was removed because of the smaller size of Tiny Imagenet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 was trained for 90 epochs with a minibatch size of 4096 samples and a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. Tiny Imagenet \citep{tiny_imagenet} consists of 500 training images and 50 validation images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each training run. \n\nThe VGG16-like network is identical to the batch norm VGG16 in \citet{simonyan_vgg_15}, except the final two fully-connected layers of 4096 units each were replaced with a single 512-unit layer. 
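For concreteness, the ResNet18 annealing schedule just described (initial learning rate 0.1, multiplied by 0.1 at epochs 35, 50, 65, and 80) amounts to the following rule; this is a minimal sketch, and the function name is illustrative rather than taken from our training code.

```python
def resnet18_lr(epoch, base_lr=0.1, milestones=(35, 50, 65, 80), gamma=0.1):
    """Learning rate at a given epoch: base_lr multiplied by gamma once
    for every milestone that has already been reached."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)
```

Epochs 0--34 thus use a learning rate of 0.1, epochs 35--49 use 0.01, and the final epochs (80--89) use $10^{-5}$.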
ResNet20 and VGG16 were trained for 200 epochs using a minibatch size of 256 samples. ResNet20 was trained with a learning rate of 0.1 and VGG16 with a learning rate of 0.01, both annealed by $10^{-1}$ at epochs 100 and 150. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny Imagenet.\n\nAll experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses.\n\nExperiments were conducted using PyTorch \citep{paszke_pytorch_2019}, analyzed using the SciPy ecosystem \citep{virtanen_scipy_2019}, and visualized using Seaborn \citep{waskom_seaborn_2017}. \n\n\subsection{Effect of selectivity regularizer on training time} \label{appendix:train_time}\nWe quantified the number of training epochs required to reach 95\% of maximum test accuracy ($t^{95}$). The $t^{95}$ without selectivity regularization ($t^{95}_{\alpha=0}$) for ResNet20 is 45$\pm$15 epochs (median $\pm$ IQR). $\alpha \in [-2, 0.7]$ had overlapping IQRs with $\alpha = 0$. For ResNet18, $t^{95}_{\alpha=0}=35\pm1$, while $t^{95}$ for $\alpha \in [-2, 1]$ was as high as 51$\pm$1.5. Beyond these ranges, $t^{95}$ exceeded 1.5$\times t^{95}_{\alpha=0}$ and\/or was highly variable.\n\n\subsection{CCA} \label{appendix:cca}\n\subsubsection{An intuition}\nWe used Canonical Correlation Analysis (CCA) to examine the effects of class selectivity regularization on hidden layer representations. CCA is a statistical method that takes two sets of multidimensional variates and finds the linear combinations of these variates that have maximum correlation with each other \citep{hotelling_relations_1936}. 
Critically, CCA is invariant to rotation and other invertible affine transformations. CCA has been productively applied to analyze and compare representations in (and between) biological and artificial neural networks \citep{sussillo_neural_2015, smith_fmri_cca_2015, raghu_svcca_2017, morcos_cca_2018, gallego_manifolds_2018}. \n\nWe use projection-weighted CCA (PWCCA), a variant of CCA introduced in \citet{morcos_cca_2018} that has been shown to be more robust to noise than traditional CCA and other CCA variants (though for brevity we just use the term ``CCA'' in the main text). PWCCA generates a scalar value, $\rho$, that can be thought of as the distance or dissimilarity between the two sets of multidimensional variates, $L_1$ and $L_2$. For example, if $L_2 = L_1$, then $\rho_{L_1,L_2} = 0$. Now let $R$ be a rotation matrix. Because CCA is invariant to rotation and other invertible affine transformations, if $L_2 = RL_1$ (i.e. if $L_2$ is a rotation of $L_1$), then $\rho_{L_1,L_2} = 0$. In contrast, traditional similarity metrics such as Pearson's correlation and cosine similarity would obtain different values if $L_2 = L_1$ compared to $L_2 = RL_1$. We use the PWCCA implementation available at \href{https:\/\/github.com\/google\/svcca\/}{https:\/\/github.com\/google\/svcca\/}, as provided in \citet{morcos_cca_2018}.\n\n\subsubsection{Our application}\n\label{appendix:cca_our_analysis}\nAs an example for the analyses in our experiments, $L_1$ is the activation matrix for a layer in a network that was not regularized against class selectivity (i.e. $\alpha = 0$), and $L_{2}$ is the activation matrix for the same layer in a network that was structured and initialized identically, but subject to regularization against class selectivity (i.e. $\alpha < 0$). If regularizing against class selectivity causes the network's representations to be rotated (or to undergo some other invertible affine transformation), then $\rho_{L_1,L_2} = 0$. 
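This rotation invariance can be checked numerically. The sketch below uses plain, unweighted CCA (whitening each representation and taking the singular values of the cross-product) rather than the PWCCA implementation we actually use, so it should be read as an illustration of the invariance property, not as our analysis code.

```python
import numpy as np

def cca_distance(L1, L2):
    """1 minus the mean canonical correlation between two activation
    matrices of shape (neurons, samples)."""
    def whiten(L):
        L = L - L.mean(axis=1, keepdims=True)            # center each neuron
        _, _, Vt = np.linalg.svd(L, full_matrices=False)
        return Vt                                        # orthonormal basis for the row space
    # Singular values of the whitened cross-product are the canonical correlations.
    corrs = np.linalg.svd(whiten(L1) @ whiten(L2).T, compute_uv=False)
    return 1.0 - corrs.mean()

rng = np.random.default_rng(0)
L1 = rng.standard_normal((10, 500))                 # 10 "neurons", 500 samples
R, _ = np.linalg.qr(rng.standard_normal((10, 10)))  # a random orthogonal transformation
rho_rotated = cca_distance(L1, R @ L1)              # rotation: distance stays ~0
rho_unrelated = cca_distance(L1, rng.standard_normal((10, 500)))  # unrelated data: distance well above 0
```

A row-wise Pearson or cosine comparison of `L1` and `R @ L1` would not exhibit this invariance.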
In practice $\\rho_{L_1,L_2} > 0$ due to differences in random seeds and\/or other stochastic factors in the training process, so we can determine a threshold value $\\epsilon$ and say $\\rho_{L_1,L_2} \\leq \\epsilon$. If regularizing against class selectivity instead causes a non-affine transformation to the network's representations, then $\\rho_{L_1,L_2} > \\epsilon$.\n\nIn our experiments we empirically establish a distribution of $\\epsilon$ values by computing the PWCCA distances between $\\rho_{L_{2a}L_{2b}}$, where $L_{2a}$ and $L_{2b}$ are two networks from the set of 20 replicates for a given hyperparameter combination that differ only in their initial random seed values (and thus have the same $\\alpha$). This gives ${20 \\choose 2} = 190$ values of $\\epsilon$. We then compute the PWCCA distance between each $\\{ L_1,L_2 \\}$ replicate pair, yielding a distribution of $20 \\times 20 = 400$ values of $\\rho_{L_1,L_2}$, which we compare to the distribution of $\\epsilon$.\n\n\\subsubsection{Formally}For the case of our analyses, let us start with a dataset $X$, which consists of $M$ data samples $\\{x_1,...x_M\\}$. Using the notation from \\citet{raghu_svcca_2017}, the scalar output (activation) of a single neuron $i$ on layer $\\iota$ in response to each data sample collectively form the vector\n\n\\[z_{i}^{\\iota} = (z(x_{i}^{\\iota}(x_1),...,x_{i}^{\\iota}(x_M))\\]\n\nWe then collect the activation vector $z_{i}^{l}$ of every neuron in layer $\\iota$ into a matrix $L=\\{z_{1}^{\\iota},...,z_{M}^{\\iota}\\}$ of size $N \\times M$, $N$ is the number of neurons in layer $\\iota$, and $M$ is the number of data samples. 
Given two such activation matrices $L_1$, of size $N_a \times M$, and $L_2$, of size $N_b \times M$, CCA finds the vectors $w$ (in $\mathbb{R}^{N_a}$) and $s$ (in $\mathbb{R}^{N_b}$) such that the correlation between the projections $w^T L_1$ and $s^T L_2$ is maximized; equivalently, the distance\n\n\[\rho = 1 -\frac{\langle w^T L_1, s^T L_2 \rangle}{\| w^T L_1 \| \cdotp \| s^T L_2 \|} \]\nis minimized.\n\n\newpage\n\n\subsection{Regularizing to decrease class selectivity in ResNet18 and ResNet20} \label{appendix:decreasing_selectivity}\n\n\begin{figure*}[h!]\n \centering\n \begin{subfloatrow*}\n \n \sidesubfloat[]{\n \label{fig:apx_si_layerwise_neg_resnet18}\n \includegraphics[width=0.48\textwidth]{figures\/supplement\/si_layerwise_neg_resnet18.pdf}\n }\n \hspace{-8mm}\n \sidesubfloat[]{\n \label{fig:apx_si_mean_neg_neg_resnet18}\n \includegraphics[width=0.47\textwidth]{figures\/supplement\/si_mean_neg_resnet18.pdf}\n }\n \end{subfloatrow*}\n \vspace{0mm}\n \n \n\n \centering\n \begin{subfloatrow*}\n \n \sidesubfloat[]{\n \label{fig:apx_si_layerwise_neg}\n \includegraphics[width=0.55\textwidth]{figures\/supplement\/si_layerwise_neg.pdf}\n }\n \hspace{-16mm}\n \sidesubfloat[]{\n \label{fig:apx_si_mean_neg}\n \includegraphics[width=0.49\textwidth]{figures\/supplement\/si_mean_neg.pdf}\n }\n \end{subfloatrow*}\n \vspace{-8mm}\n \n \caption{\textbf{Manipulating class selectivity by regularizing against it in the loss function.} (\textbf{a}) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales ($\alpha$; denoted by intensity of blue) for ResNet18. (\textbf{b}) Similar to (\textbf{a}), but mean is computed across all units in a network instead of per layer. (\textbf{c}) and (\textbf{d}) are identical to (\textbf{a}) and (\textbf{b}), respectively, but for ResNet20. 
Error bars denote bootstrapped 95\\% confidence intervals.\\newline}\n \\label{fig:apx_si_neg}\n \n\\end{figure*}\n\n\\subsection{Decreasing class selectivity without decreasing test accuracy in ResNet20}\n\\label{appendix:si_vs_accuracy_neg_resnet20}\n\n\\begin{figure*}[!htbp]\n \\centering\n \\begin{subfloatrow*}\n \n \\hspace{-4mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_neg_full}\n \\includegraphics[width=0.33\\textwidth]{figures\/acc_test_full_neg.pdf}\n }\n \\hspace{-9mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_neg_violin}\n \\includegraphics[width=0.33\\textwidth]{figures\/acc_test_zoom_neg_violin.pdf}\n }\n \\hspace{-3mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuracy_neg}\n \\includegraphics[width=0.33\\textwidth]{figures\/si_vs_accuracy_neg.pdf}\n }\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Effects of reducing class selectivity on test accuracy in ResNet20 trained on CIFAR10.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$, x-axis and intensity of blue). (\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $\\alpha$ values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\\alpha$. Error bars denote 95\\% confidence intervals. 
*$p < 0.05$, **$p < 5\times10^{-6}$ difference from $\alpha = 0$, t-test, Bonferroni-corrected.}\n \label{fig:apx_acc_neg_resnet20}\n\end{figure*}\n\n\newpage\n\n\subsection{CCA Results for ResNet20}\n\label{appendix:cca_resnet20}\n\begin{figure*}[!htbp]\n \centering\n \n \n \n \n \n \n \hspace{-14mm}\n \sidesubfloat[]{\n \label{fig:apx_cca_distance_ratios_resnet20}\n \includegraphics[width=0.49\textwidth]{figures\/cca_ratios.pdf}\n }\n \n \vspace{-3mm}\n \caption{\textbf{Using CCA to check whether class selectivity is rotated off-axis in ResNet20 trained on CIFAR10.} Similar to Figure \ref{fig:cca}, we plot the average CCA distance ratio (y-axis) as a function of $\alpha$ (x-axis, intensity of blue). The distance ratio is significantly greater than the baseline for all values of $\alpha$ ($p < 5\times10^{-6}$, paired t-test). Error bars = 95\% confidence intervals. }\n \label{fig:apx_cca}\n \n\end{figure*}\n\n\subsection{Calculating an upper bound for off-axis selectivity} \label{appendix:maximum_selectivity}\nAs an additional control to ensure that our regularizer did not simply shift class selectivity to off-axis directions in activation space, we calculated an upper bound on the amount of class selectivity that could be recovered by finding the linear projection of unit activations that maximizes class selectivity. To do so, we first collected the validation set activation vector $z_{i}^{\iota}$ of every neuron in layer $\iota$ into a matrix $A_{val}=\{z_{1}^{\iota},\ldots,z_{N}^{\iota}\}$ of size $M \times N$, where $M$ is the number of data samples in the validation set and $N$ is the number of neurons in layer $\iota$. We then found the projection matrix $W\in\mathbb{R}^{N\times N}$ that minimizes the loss\n\[loss = 1 - SI(A_{val}W)\]\n\nsuch that\n\[||W^TW-I||^2 = 0\]\n\ni.e. $W$ is orthonormal, where $SI$ is the selectivity index from Equation \ref{eqn:si}. 
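A minimal sketch of this upper-bound search is given below. For simplicity it replaces the constrained gradient-based optimization with a random search over orthonormal matrices $W$ (the identity is included among the candidates, so the returned bound is never below the axis-aligned selectivity of the shifted activations), and it assumes the selectivity index of Equation \ref{eqn:si} takes the form $(\mu_{max} - \mu_{-max})/(\mu_{max} + \mu_{-max} + \epsilon)$; all names are illustrative.

```python
import numpy as np

def selectivity_index(acts, labels, eps=1e-7):
    """Mean class selectivity over units. acts: (samples, units), all >= 0.
    Per unit, SI = (mu_max - mu_-max) / (mu_max + mu_-max), where mu_max is the
    largest class-conditional mean activation and mu_-max is the mean of the
    remaining class-conditional means (assumed form of Equation eqn:si)."""
    class_means = np.stack([acts[labels == c].mean(axis=0)
                            for c in np.unique(labels)])
    mu_max = class_means.max(axis=0)
    mu_rest = (class_means.sum(axis=0) - mu_max) / (class_means.shape[0] - 1)
    return ((mu_max - mu_rest) / (mu_max + mu_rest + eps)).mean()

def selectivity_upper_bound(A, labels, n_tries=50, seed=0):
    """Best mean SI over random orthonormal projections of the activations,
    shifting each projected column to be >= 0 before computing SI."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    candidates = [np.eye(N)] + [np.linalg.qr(rng.standard_normal((N, N)))[0]
                                for _ in range(n_tries)]
    best = -np.inf
    for W in candidates:
        P = A @ W
        P = P - P.min(axis=0, keepdims=True)  # shift columns to be nonnegative
        best = max(best, selectivity_index(P, labels))
    return best
```

Because the identity is among the candidates, the sketch's bound can only improve on the axis-aligned selectivity, mirroring the role of the optimized $W$ in our control.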
We constrained $W$ to be orthonormal using \citet{lezcano2019geotorch}'s toolbox. Because $SI$ requires inputs $\geq 0$, we shifted the columns of $A_{val}W$ by subtracting the columnwise minimum value before computing $SI$. The optimization was performed using Adam \citep{ba_kingma_adam} with a learning rate of 0.001 for 3500 steps or until the magnitude of the change in loss was less than $10^{-6}$ for 10 steps. $W$ was then used to project the activation matrix for the test set $A_{test}$, and the selectivity index was calculated for each axis of the new activation space (i.e. each column of $A_{test}W$) after shifting the columns of $A_{test}W$ to be $\geq0$. A separate $W$ was obtained for each layer of each model and for each replicate and value of $\alpha$.\n\n\n\newpage\n\n\n\begin{figure*}[!htbp]\n \centering\n \begin{subfloatrow*}\n \hspace{30mm}\n \sidesubfloat[]{\n \label{fig:apx_maximum_selectivity_resnet18_layerwise}\n \includegraphics[width=0.43\textwidth]{figures\/supplement\/optimal_selectivity_layerwise_tiny_imagenet.pdf}\n }\n \end{subfloatrow*}\n \begin{subfloatrow*}\n \hspace{1mm}\n\n \sidesubfloat[]{\n \label{fig:apx_maximum_selectivity_resnet20}\n \includegraphics[width=0.43\textwidth]{figures\/supplement\/optimal_selectivity_cifar10.pdf}\n } \n \hspace{4mm}\n \sidesubfloat[]{\n \label{fig:apx_maximum_selectivity_resnet20_layerwise}\n \includegraphics[width=0.43\textwidth]{figures\/supplement\/optimal_selectivity_layerwise_cifar10.pdf}\n }\n \end{subfloatrow*} \n \n \caption{\textbf{An upper bound for off-axis class selectivity.} \textbf{(a)} Upper bound on class selectivity (y-axis) as a function of layer (x-axis) for different regularization scales ($\alpha$; denoted by intensity of blue) for ResNet18 trained on Tiny ImageNet. \textbf{(b)} Mean class selectivity (y-axis) as a function of regularization scale ($\alpha$; x-axis) for ResNet20 trained on CIFAR10. 
Diamond-shaped data points denote the upper bound on class selectivity for a linear projection of activations as described in Appendix \ref{appendix:maximum_selectivity}, while circular points denote the amount of axis-aligned class selectivity for the corresponding values of $\alpha$. \textbf{(c)} Identical to \textbf{(a)}, but for ResNet20 trained on CIFAR10. Error bars = 95\% confidence intervals.}\n \label{fig:apx_maximum_selectivity}\n \n\end{figure*}\n\n\newpage\n\n\vspace{4mm}\n\subsection{Regularizing to increase class selectivity in ResNet18 and ResNet20} \label{appendix:increasing_selectivity}\n\n\begin{figure*}[!h]\n \centering\n \begin{subfloatrow*}\n \n \sidesubfloat[]{\n \label{fig:apx_si_layerwise_pos_resnet18}\n \includegraphics[width=0.48\textwidth]{figures\/supplement\/si_layerwise_pos_resnet18.pdf}\n }\n \hspace{-8mm}\n \sidesubfloat[]{\n \label{fig:apx_si_mean_pos_resnet18}\n \includegraphics[width=0.46\textwidth]{figures\/supplement\/si_mean_pos_resnet18.pdf}\n }\n \end{subfloatrow*}\n \n\n \centering\n \begin{subfloatrow*}\n \n \sidesubfloat[]{\n \label{fig:apx_si_layerwise_pos}\n \includegraphics[width=0.52\textwidth]{figures\/supplement\/si_layerwise_pos.pdf}\n }\n \hspace{-16mm}\n \sidesubfloat[]{\n \label{fig:apx_si_mean_pos}\n \includegraphics[width=0.5\textwidth]{figures\/supplement\/si_mean_pos.pdf}\n }\n \end{subfloatrow*}\n \n \caption{\textbf{Regularizing to increase class selectivity.} (\textbf{a}) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales ($\alpha$; denoted by intensity of red) for ResNet18. (\textbf{b}) Similar to (\textbf{a}), but mean is computed across all units in a network instead of per layer. (\textbf{c}) and (\textbf{d}) are identical to (\textbf{a}) and (\textbf{b}), respectively, but for ResNet20. 
Note that the inconsistent effect of larger $\\alpha$ values in (\\textbf{c}) and (\\textbf{d}) is addressed in Appendix \\ref{appendix:relu}. Error bars denote bootstrapped 95\\% confidence intervals.}\n \\label{fig:apx_si_pos}\n \\vspace*{-0.9em}\n\\end{figure*}\n\n\\newpage\n\n\\subsection{Increased class selectivity impairs test accuracy in ResNet20}\n\\label{appendix:si_vs_accuracy_pos_resnet20}\n\\begin{figure*}[!htbp]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{-2mm}\n \n \\sidesubfloat[]{\n \\label{fig:apx_acc_pos_full_resnet20}\n \\includegraphics[width=0.32\\textwidth]{figures\/acc_test_full_pos.pdf}\n }\n \\hspace{-7mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_pos_violin_resnet20}\n \\includegraphics[width=0.33\\textwidth]{figures\/acc_test_zoom_pos_violin.pdf}\n }\n \\hspace{-4mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuracy_pos_resnet20}\n \\includegraphics[width=0.33\\textwidth]{figures\/si_vs_accuracy_pos.pdf}\n }\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Effects of increasing class selectivity on test accuracy on ResNet20 trained on CIFAR10.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$; x-axis, intensity of red). (\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $\\alpha$ values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\\alpha$. Error bars denote 95\\% confidence intervals. 
*$p < 2\\times10^{-4}$, **$p < 5\\times10^{-7}$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected.}\n \\label{fig:acc_pos}\n \\vspace*{-0.9em}\n\\end{figure*}\n\n\\medskip\n\n\\subsection{Single unit necromancy}\n\\label{appendix:relu}\n\\paragraph{Lethal ReLUs} \nThe inconsistent relationship between $\\alpha$ and class selectivity for larger values of $\\alpha$ led us to question whether the performance deficits were due to an alternative factor, such as the optimization process, rather than class selectivity per se. Interestingly, we observed that ResNet20 regularized to increase selectivity contained significantly higher proportions of dead units, and the number of dead units is roughly proportional to alpha (see Figure \\ref{fig:apx_dead_units_layerwise}; also note that this was not a problem in ResNet18). Removing the dead units makes the relationship between regularization and selectivity in ResNet20 more consistent at large regularization scales (see Appendix \\ref{fig:apx_dead_units}). \n\nThe presence of dead units is not unexpected, as units with the ReLU activation function are known to suffer from the \"dying ReLU problem\"\\citep{lu_dying_relu_2019}: If, during training, a weight update causes a unit to cease activating in response to all training samples, the unit will be unaffected by subsequent weight updates because the ReLU gradient at $x\\leq0$ is zero, and thus the unit's activation will forever remain zero. The dead units could explain the decrease in performance from regularizing to increase selectivity as simply a decrease in model capacity. \n\n\\paragraph{Fruitless resuscitation} \nOne solution to the dying ReLU problem is to use a leaky-ReLU activation function \\citep{maas_leakyrelu_2013}, which has a non-zero slope, $b$ (and thus non-zero gradient) for $x \\leq 0$. Accordingly, we re-ran the previous experiment using units with a leaky-ReLU activation in an attempt to control for the potential confound of dead units. 
Note that because the class selectivity index assumes activations $\\geq 0,$ we shifted activations by subtracting the minimum activation when computing selectivity for leaky-ReLUs. If the performance deficits from regularizing for selectivity are simply due to dead units, then using leaky-ReLUs should rescue performance. Alternatively, if dead units are not the cause of the performance deficits, then leaky-ReLUs should not have an effect.\n\nWe first confirmed that using leaky-ReLUs solves the dead unit problem. Indeed, the proportion of dead units is reduced to 0 in all networks across all tested values of $b$. Despite complete recovery of the dead units, however, using leaky-ReLUs does not rescue class selectivity-induced performance deficits (Figure \\ref{fig:apx_leaky_relu}). While the largest negative slope value improved test accuracy for larger values of $\\alpha$, the improvement was minor, and increasing $\\alpha$ still had catastrophic effects. These results confirm that dead units cannot explain the rapid and catastrophic effects of increased class selectivity on performance.\n\n\\begin{figure*}[!htp]\n \\centering\n \\begin{subfloatrow*}\n \n \\hspace{-2mm}\n \\sidesubfloat[]{\n \\label{fig:apx_dead_units_layerwise}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/deadunits_layerwise_pos.pdf}\n }\n \\hspace{-14mm}\n \\sidesubfloat[]{\n \\label{fig:apx_mean_si_pos_deadunits}\n \\includegraphics[width=0.38\\textwidth]{figures\/supplement\/si_mean_pos_deadremoved.pdf}\n }\n \\hspace{-5mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuracy_pos_nodead}\n \\includegraphics[width=0.27\\textwidth]{figures\/supplement\/si_vs_accuracy_pos_no_dead.pdf}\n }\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Removing dead units partially stabilizes the effects of large positive regularization scales in ResNet20.} (\\textbf{a}) Proportion of dead units (y-axis) as a function of layer (x-axis) for different regularization scales ($\\alpha$, intensity of 
red). (\\textbf{b}) Mean class selectivity index (y-axis) as a function of regularization scale ($\\alpha$; x-axis and intensity of red) after removing dead units. Removing dead units from the class selectivity calculation establishes a more consistent relationship between $\\alpha$ and the mean class selectivity index (compare to Figure \\ref{fig:apx_si_mean_pos}). (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\\alpha$ after removing dead units from the class selectivity calculation. Error bars denote 95\\% confidence intervals. *$p < 2\\times10^{-4}$, **$p < 5\\times10^{-7}$ difference from $\\alpha = 0$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected. All results shown are for ResNet20. \\newline}\n \\label{fig:apx_dead_units}\n \\vspace*{-4mm}\n\\end{figure*}\n\n\\begin{figure*}[!htb]\n \\centering\n \\begin{subfloatrow*}\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_acc_leaky_slopes}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/acc_test_leakyrelu_pos.pdf}\n }\n \\hspace{0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_mean_si_pos_leaky}\n \\includegraphics[width=0.33\\textwidth]{figures\/supplement\/si_vs_accuracy_pos_leaky.pdf}\n }\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Reviving dead units does not rescue the performance deficits caused by increasing selectivity in ResNet20.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$; x-axis) for different leaky-ReLU negative slopes (intensity of red). Leaky-ReLUs completely solve the dead unit problem but do not fully rescue test accuracy for networks with $\\alpha >0 $. (\\textbf{b}) Mean class selectivity index (y-axis) as a function of regularization scale ($\\alpha$; x-axis and intensity of red) for leaky-ReLU negative slope = 0.5. *$p < 0.001$, **$p < 2\\times10^{-4}$, ***$p < 5\\times10^{-10}$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected. 
Error bars denote bootstrapped 95\% confidence intervals.}\n \label{fig:apx_leaky_relu}\n \vspace*{-4mm}\n\end{figure*}\n\n\newpage\n\n\subsection{Ruling out a degenerate solution for increasing selectivity} \label{appendix:si_control}\n\nOne degenerate solution that generates a very high selectivity index is for a unit to be silent for the vast majority of inputs and have low activations for the small set of remaining inputs. We refer to this scenario as ``activation minimization''. We verified that activation minimization does not fully account for our results by examining the proportion of samples that elicit non-zero activations in the units in our models. If our regularizer is indeed using activation minimization to generate high selectivity in individual units, then regularizing to increase class selectivity should cause most units to have non-zero activations for only a very small proportion of samples. In ResNet18 trained on Tiny ImageNet, we found that sparse units, defined as units that do not respond to at least half of the data samples, constitute more than 10\% of the total population only at extreme positive regularization scales ($\alpha \geq 10$; Figures \ref{fig:apx_sparsity_resnet18}, \ref{fig:apx_spars_vs_acc_resnet18}), well after large performance deficits ($>$4\%) emerge (Figure \ref{fig:acc_pos_resnet18}). In ResNet20 trained on CIFAR10, networks regularized to have higher class selectivity (i.e. positive $\alpha$) did indeed have more sparse units (Figures \ref{fig:apx_sparsity_resnet20}, \ref{fig:apx_spars_vs_acc_resnet20}). However, this effect does not explain away our findings: by $\alpha = 0.7$, the majority of units respond to over 80\% of samples (i.e. they are not sparse), but test accuracy has already decreased by 5\% (Figure \ref{fig:acc_pos}). 
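The sparsity measure used here can be sketched as follows (a minimal illustration with invented data; in our terms, a unit is sparse if it yields a non-zero activation for fewer than half of the samples).

```python
import numpy as np

def proportion_sparse_units(acts, threshold=0.5):
    """acts: (samples, units) matrix of non-negative activations. A unit is
    sparse if it responds (activation > 0) to fewer than `threshold` of the
    samples; returns the fraction of sparse units."""
    responsive_frac = (acts > 0).mean(axis=0)
    return (responsive_frac < threshold).mean()

# Invented example: 3 units over 10 samples; only the last unit is sparse.
acts = np.zeros((10, 3))
acts[:, 0] = 1.0   # responds to all samples
acts[:5, 1] = 1.0  # responds to exactly half of the samples: not sparse
acts[:2, 2] = 1.0  # responds to 2/10 of the samples: sparse
```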
These results indicate that activation minimization does not explain class selectivity-related changes in test accuracy.\n\n\newpage\n\n\begin{figure*}[!h]\n\vspace{-3mm}\n \centering\n \begin{subfloatrow*}\n \n \n \sidesubfloat[]{\n \label{fig:apx_sparsity_resnet18}\n \includegraphics[width=0.4\textwidth]{figures\/supplement\/proportion_nonzero_activations_resnet18.png}\n }\n \n \n \n \n \n \n \n \n \hspace{4mm}\n \sidesubfloat[]{\n \label{fig:apx_sparsity_resnet20}\n \includegraphics[width=0.4\textwidth]{figures\/supplement\/proportion_nonzero_activations_resnet20.png}\n }\n \n \n \n \n \n \end{subfloatrow*} \n \n \caption{\textbf{Activation minimization does not explain selectivity-induced performance changes.} (\textbf{a}) Proportion of samples eliciting a non-zero activation (y-axis) vs. regularization scale ($\alpha$; x-axis) in ResNet18 trained on Tiny ImageNet. Data points denote individual units. Boxes denote IQR, whiskers extend 2$\times$IQR. Note that the boxes are very compressed because the distribution is confined almost entirely to y=1.0 for all values of x. (\textbf{b}) Identical to (\textbf{a}), but for ResNet20 trained on CIFAR10.}\n \label{fig:sparsity}\n\end{figure*}\n\n\subsection{Results for VGG16} \label{appendix:vgg}\n\nModulating class selectivity in VGG16 yielded results qualitatively similar to those we observed in ResNet20. The regularizer reliably decreases class selectivity for negative values of $\alpha$ (Figure \ref{fig:apx_si_neg_vgg}), and class selectivity can be drastically reduced with little impact on test accuracy (Figure \ref{fig:apx_acc_neg_vgg}). 
Although test accuracy decreases significantly at $\alpha=-0.1$ ($p=0.004$, Bonferroni-corrected t-test), the effect is small: it is possible to reduce mean class selectivity by a factor of 5 with only a 0.5\% decrease in test accuracy, and by a factor of 10 (to 0.03) with only a $\sim$1\% drop in test accuracy.\n\nRegularizing to increase class selectivity also has similar effects in VGG16 and ResNet20. Increasing $\alpha$ causes class selectivity to increase, and the effect becomes less consistent at large values of $\alpha$ (Figure \ref{fig:apx_si_pos_vgg}). Although the class selectivity-induced collapse in test accuracy does not emerge quite as rapidly in VGG16 as it does in ResNet20, the decrease in test accuracy is still significant at the smallest tested value of $\alpha$ ($\alpha = 0.1$, $p=0.02$, Bonferroni-corrected t-test), and the effects on test accuracy of regularizing to promote vs. discourage class selectivity become significantly different at $\alpha = 0.3$ ($p = 10^{-4}$, Wilcoxon rank-sum test; Figure \ref{fig:apx_acc_si_comparison_vgg}). Our observations that class selectivity is neither necessary nor sufficient for performance in both VGG16 and ResNet20 indicate that this is likely a general property of CNNs.\n\nIt is worth noting that VGG16 exhibits greater class selectivity than ResNet20. In the absence of regularization (i.e. $\alpha=0$), mean class selectivity in ResNet20 is 0.22, while in VGG16 it is 0.35, a 1.6$\times$ increase. 
This could explain why positive values of $\\alpha$ seem to have a stronger effect on class selectivity in VGG16 relative to ResNet20 (compare Figure \\ref{fig:apx_si_pos} and Figure \\ref{fig:apx_si_pos_vgg}; also see Figure \\ref{fig:si_vgg_resnet_zoom_violin}).\n\n\\newpage\n\n\\begin{figure*}[h]\n \\vspace{-6mm}\n \\centering\n \\begin{subfloatrow*}\n \\hspace{0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_layerwise_neg_vgg}\n \\includegraphics[width=0.56\\textwidth]{figures\/supplement\/si_layerwise_neg_vgg.pdf}\n }\n \\hspace{-14mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_mean_neg_vgg}\n \\includegraphics[width=0.47\\textwidth]{figures\/supplement\/si_mean_neg_vgg.pdf}\n }\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Regularizing to decrease class selectivity in VGG16.} (\\textbf{a}) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales ($\\alpha$; denoted by intensity of blue) for VGG16. (\\textbf{b}) Similar to (\\textbf{a}), but mean is computed across all units in a network instead of per layer. 
Error bars denote bootstrapped 95\\% confidence intervals.}\n \\label{fig:apx_si_neg_vgg}\n \\vspace*{-12mm}\n\\end{figure*}\n\n\\medskip\n\n\\begin{figure*}[!htb]\n \\centering\n \\begin{subfloatrow*}\n \n \\hspace{0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_neg_full_vgg}\n \\includegraphics[width=0.30\\textwidth]{figures\/supplement\/acc_test_full_neg_vgg.pdf}\n }\n \\hspace{-6mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_neg_violin_vgg}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/acc_test_zoom_neg_violin_vgg.pdf}\n }\n \\hspace{-2mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuracy_neg_vgg}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/si_vs_accuracy_neg_vgg.pdf}\n }\n \\vspace{-2mm}\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Effects of reducing class selectivity on test accuracy in VGG16.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$, x-axis and intensity of blue). (\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $\\alpha$ values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\\alpha$. Error bars denote 95\\% confidence intervals. *$p < 0.005$, **$p < 5\\times10^{-6}$, ***$p < 5\\times10^{-60}$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected. 
All results shown are for VGG16.}\n \\label{fig:apx_acc_neg_vgg}\n \\vspace*{-2mm}\n \n\\end{figure*}\n\n\\begin{figure*}[!htb]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_layerwise_pos_vgg}\n \\includegraphics[width=0.52\\textwidth]{figures\/supplement\/si_layerwise_pos_vgg.pdf}\n }\n \\hspace{-10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_mean_pos_vgg}\n \\includegraphics[width=0.48\\textwidth]{figures\/supplement\/si_mean_pos_vgg.pdf}\n }\n \\vspace{-5mm}\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Regularizing to increase class selectivity in VGG16.} (\\textbf{a}) Mean class selectivity index (y-axis) as a function of layer (x-axis) for different regularization scales ($\\alpha$; denoted by intensity of red) for VGG16. (\\textbf{b}) Similar to (\\textbf{a}), but mean is computed across all units in a network instead of per layer. Error bars denote bootstrapped 95\\% confidence intervals.}\n \\label{fig:apx_si_pos_vgg}\n \\vspace*{-0.9em}\n\\end{figure*}\n\n\\clearpage\n\n\\begin{figure*}[!htb]\n \\centering\n \\begin{subfloatrow*}\n \n \\hspace{-0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_pos_full_vgg}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/acc_test_full_pos_vgg.pdf}\n }\n \\hspace{-6mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_pos_violin_vgg}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/acc_test_zoom_pos_violin_vgg.pdf}\n }\n \\hspace{-2mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuracy_pos_vgg}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/si_vs_accuracy_pos_vgg.pdf}\n }\n \\vspace{-2mm}\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Effects of increasing class selectivity on test accuracy in VGG16.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$, x-axis and intensity of red). (\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $\\alpha$ values. 
The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\\alpha$. Error bars denote 95\\% confidence intervals. *$p < 0.05$, **$p < 5\\times10^{-4}$, ***$p < 9\\times10^{-6}$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected. All results shown are for VGG16.}\n \\label{fig:apx_acc_pos_vgg}\n \\vspace*{-2mm}\n\\end{figure*}\n\n\\begin{figure*}[!htbp]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{0mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_full_regscales_vgg}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/acc_test_neg_pos_vgg.pdf}\n }\n \\hspace{-8mm}\n \\sidesubfloat[]{\n \\label{fig:apx_acc_neg_pos_zoom_vgg}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/acc_test_neg_pos_zoom_violin_vgg.pdf}\n }\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Regularizing to promote vs. penalize class selectivity in VGG16.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale magnitude ($|\\alpha|$; x-axis) when promoting ($\\alpha > 0$, red) or penalizing ($\\alpha < 0$, blue) class selectivity in VGG16. Error bars denote bootstrapped 95\\% confidence intervals.(\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $|\\alpha|$ values. *$p < 0.05$, **$p < 10^{-3}$, ***$p < 10^{-6}$ difference between $\\alpha < 0$ and $\\alpha > 0$, Wilcoxon rank-sum test, Bonferroni-corrected.}\n \\label{fig:apx_acc_si_comparison_vgg}\n \\vspace*{-0.9em}\n\\end{figure*}\n\n\\subsection{Different selectivity metrics} \\label{appendix:other_metrics}\nIn order to confirm that the effect of the regularizer is not unique to our chosen class selectivity metric, we also examined the effect of our regularizer on the \"precision\" metric for class selectivity \\citep{zhou_object_2015, zhou_revisiting_2018, gale_selectivity_2019}. 
The precision metric is calculated by finding the $N$ images that most strongly activate a given unit, then finding the image class $C_i$ that constitutes the largest proportion of those $N$ images. Precision is defined as this proportion. For example, if $N = 200$ and the \"cats\" class, with 74 samples, constitutes the largest proportion of those 200 activations for a unit, then the precision of the unit is $\\frac{74}{200} = 0.37$. Note that for a given number of classes $C$, precision is bounded by $[\\frac{1}{C}, 1]$; in our experiments the lower bound on precision is thus 0.1 for CIFAR10 ($C = 10$) and 0.005 for Tiny ImageNet ($C = 200$).\n\n\\citet{zhou_object_2015} used $N=60$, while \\citet{gale_selectivity_2019} used $N=100$. We chose $N$ to equal the number of samples per class in the test set, and thus the largest possible sample size: $N=1000$ for CIFAR10 and $N=50$ for Tiny ImageNet. \n\nThe class selectivity regularizer has similar effects on precision as it does on the class selectivity index. Regularizing against class selectivity has a consistent effect on precision (Figure \\ref{fig:apx_precision}), while regularizing to promote class selectivity has a consistent effect in ResNet18 trained on Tiny ImageNet and for smaller values of $\\alpha$ in ResNet20 trained on CIFAR10. However, the relationship between precision and the class selectivity index becomes less consistent for larger positive values of $\\alpha$ in ResNet20 trained on CIFAR10. One explanation for this is that activation sparsity is a valid solution for maximizing the class selectivity index but not precision. For example, a unit that responded only to ten samples from the class \"cat\" and not at all to the remaining samples would have a class selectivity index of 1, but a precision value of 0.11. 
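The precision computation just described can be sketched as follows (a minimal NumPy illustration with hypothetical variable names, not the code used for our experiments):

```python
import numpy as np

def precision(activations, labels, n):
    """Precision of one unit: the fraction of the n most strongly
    activating images that belong to the modal class among those n.

    activations: (num_samples,) mean activation of the unit per image
    labels:      (num_samples,) integer class label per image
    n:           number of top images (samples per class in the test set)
    """
    top_n = np.argsort(activations)[-n:]   # indices of the n largest activations
    counts = np.bincount(labels[top_n])    # class counts among the top n
    return counts.max() / n                # proportion of the modal class

# Toy check: 4 of the top-5 activations come from class 0
acts = np.array([9.0, 8.0, 7.0, 6.0, 1.0, 5.0, 2.0, 2.0, 2.0, 2.0])
labs = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(precision(acts, labs, 5))  # 0.8
```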
This explanation seems likely given the increase in sparsity observed for very large positive values of $\\alpha$ (see Appendix \\ref{appendix:si_control}).\n\nWhile there are additional class selectivity metrics that we could have used to further assess the effect of our regularizer, many of them are based on relating the activity of a neuron to the accuracy of the network's output(s) (e.g. top class selectivity, \\citealp{gale_selectivity_2019}; and class correlation, \\citealp{li_yosinski_convergent, zhou_revisiting_2018}), confounding classification accuracy and class selectivity. Accordingly, these metrics are unfit for use in experiments that examine the relationship between class selectivity and classification accuracy, which is exactly what we do here.\n\n\\begin{figure*}[!htbp]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{-5mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_layerwise_neg_resnet18}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/precision_50_layerwise_neg_resnet18.pdf}\n }\n \\hspace{-8mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_mean_neg_resnet18}\n \\includegraphics[width=0.48\\textwidth]{figures\/supplement\/precision_50_mean_neg_resnet18.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \\hspace{-5mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_layerwise_pos_resnet18}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/precision_50_layerwise_pos_resnet18.pdf}\n }\n \\hspace{-8mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_mean_pos_resnet18}\n \\includegraphics[width=0.48\\textwidth]{figures\/supplement\/precision_50_mean_pos_resnet18.pdf}\n }\n \\end{subfloatrow*} \n \\begin{subfloatrow*}\n \\hspace{-5mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_layerwise_neg}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/precision_1000_layerwise_neg.pdf}\n }\n \\hspace{-8mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_mean_neg}\n 
\\includegraphics[width=0.48\\textwidth]{figures\/supplement\/precision_1000_mean_neg.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \\hspace{-5mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_layerwise_pos}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/precision_1000_layerwise_pos.pdf}\n }\n \\hspace{-8mm}\n \\sidesubfloat[]{\n \\label{fig:apx_precision_mean_pos}\n \\includegraphics[width=0.48\\textwidth]{figures\/supplement\/precision_1000_mean_pos.pdf}\n }\n \\end{subfloatrow*} \n \n \\caption{\\textbf{Class selectivity regularization has similar effects when measured using a different class selectivity metric.} (\\textbf{a}) Mean precision (y-axis) as a function of layer (x-axis) for different regularization scales ($\\alpha$; denoted by intensity of blue) when regularizing against class selectivity in ResNet18. Precision is an alternative class selectivity metric (see Appendix \\ref{appendix:other_metrics}). (\\textbf{b}) Similar to (\\textbf{a}), but mean is computed across all units in a network instead of per layer. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but when regularizing to promote class selectivity. (\\textbf{e-h}) are identical to (\\textbf{a-d}), respectively, but for ResNet20. Error bars denote bootstrapped 95\\% confidence intervals.}\n \\label{fig:apx_precision}\n \\vspace*{-0.9em}\n\\end{figure*}\n\n \n\n\\clearpage\n\n\\subsection{Restricting class selectivity regularization to the first three or final three layers} \\label{appendix:first_last_three_layers}\n\nTo investigate the layer-specificity of the effects of class selectivity regularization, we also examined the effects of restricting class selectivity regularization to the first three or last three layers of the networks. 
Interestingly, we found that much of the effect of regularizing for or against selectivity on test accuracy was replicated even when the regularization was restricted to the first or final three layers. For example, reducing class selectivity in the first three layers either improves test accuracy (in ResNet18 trained on Tiny ImageNet) or has little-to-no effect on test accuracy (in ResNet20 trained on CIFAR10; Figures \\ref{fig:apx_si_neg_first3} and \\ref{fig:apx_acc_neg_first3}). Likewise, regularizing to increase class selectivity in the first three layers has an immediate negative impact on test accuracy in both models (Figures \\ref{fig:apx_si_pos_first3} and \\ref{fig:apx_acc_pos_first3}). Regularizing against class selectivity in the final three layers (Figures \\ref{fig:apx_si_neg_last3} and \\ref{fig:apx_acc_neg_last3}) causes a modest increase in test accuracy over a narrow range of $\\alpha$ in ResNet18 trained on Tiny ImageNet: a gain of less than half a percentage point at most (at $\\alpha=-0.2$) that disappears by $\\alpha=-0.4$ (Figure \\ref{fig:apx_alpha_vs_acc_violin_neg_last3_resnet18}). In ResNet20, regularizing against class selectivity in the final three layers actually causes a decrease in test accuracy (Figure \\ref{fig:apx_acc_neg_last3}). Given that the output layer of a CNN trained for image classification is by definition class-selective, we thought that regularizing to increase class selectivity in the final three layers could improve performance, but surprisingly it causes an immediate drop in test accuracy in both models (Figures \\ref{fig:apx_si_pos_last3} and \\ref{fig:apx_acc_pos_last3}). 
The observation that regularizing to decrease class selectivity provides greater benefits (ResNet18) or less impairment (ResNet20) in the first three layers compared to the final three layers suggests that class selectivity is less necessary, or more detrimental, in early layers than in late layers.\n\n\\newpage\n\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_layerwise_first3_resnet18}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_neg_resnet18_first3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_first3_resnet18}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_mean_neg_resnet18_first3.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_layerwise_first3_resnet20}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_neg_resnet20_first3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_first3_resnet20}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_mean_neg_resnet20_first3.pdf}\n }\n \\end{subfloatrow*} \n\n \\caption{\\textbf{Regularizing to decrease class selectivity in the first three network layers.} (\\textbf{a}) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of $\\alpha$ (intensity of blue) when class selectivity regularization is restricted to the first three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Mean class selectivity in the first three layers (y-axis) as a function of $\\alpha$ (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_si_neg_first3}\n\\end{figure*} \n\\vspace{2mm}\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{20mm}\n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_neg_first3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_neg_resnet18_first3.pdf}\n }\n \n \n \n \n \n \n \\hspace{10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_neg_first3_resnet18}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_neg_resnet18_first3.pdf}\n } \n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \\hspace{20mm}\n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_neg_first3_resnet20}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_neg_resnet20_first3.pdf}\n }\n \n \n \n \n \n \n \\hspace{10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_neg_first3_resnet20}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_neg_resnet20_first3.pdf}\n } \n \\end{subfloatrow*} \n\n \\caption{\\textbf{Effects on test accuracy when regularizing to decrease class selectivity in the first three network layers.} (\\textbf{a}) Test accuracy (y-axis) as a function of $\\alpha$ (x-axis) when class selectivity regularization is restricted to the first three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Test accuracy (y-axis) as a function of mean class selectivity in the first three layers (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_acc_neg_first3}\n\\end{figure*} \n\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_layerwise_first3_resnet18}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_pos_resnet18_first3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_first3_resnet18}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_mean_pos_resnet18_first3.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_layerwise_first3_resnet20}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_pos_resnet20_first3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_first3_resnet20}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_mean_pos_resnet20_first3.pdf}\n }\n \\end{subfloatrow*} \n\n \\caption{\\textbf{Regularizing to increase class selectivity in the first three network layers.} (\\textbf{a}) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of $\\alpha$ (intensity of red) when class selectivity regularization is restricted to the first three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Mean class selectivity in the first three layers (y-axis) as a function of $\\alpha$ (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_si_pos_first3}\n\\end{figure*} \n\\vspace{2mm}\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{20mm}\n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_pos_first3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_pos_resnet18_first3.pdf}\n }\n \n \n \n \n \n \n \\hspace{10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_pos_first3_resnet18}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_pos_resnet18_first3.pdf}\n } \n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \\hspace{20mm}\n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_pos_first3_resnet20}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_pos_resnet20_first3.pdf}\n }\n \n \n \n \n \n \n \\hspace{10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_pos_first3_resnet20}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_pos_resnet20_first3.pdf}\n } \n \\end{subfloatrow*} \n\n \\caption{\\textbf{Effects on test accuracy when regularizing to increase class selectivity in the first three network layers.} (\\textbf{a}) Test accuracy (y-axis) as a function of $\\alpha$ (x-axis) when class selectivity regularization is restricted to the first three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Test accuracy (y-axis) as a function of mean class selectivity in the first three layers (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_acc_pos_first3}\n\\end{figure*}\n\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_layerwise_last3_resnet18}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_neg_resnet18_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_last3_resnet18}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_mean_neg_resnet18_last3.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_layerwise_last3_resnet20}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_neg_resnet20_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_neg_last3_resnet20}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_mean_neg_resnet20_last3.pdf}\n }\n \\end{subfloatrow*} \n\n \\caption{\\textbf{Regularizing to decrease class selectivity in the last three network layers.} (\\textbf{a}) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of $\\alpha$ (intensity of blue) when class selectivity regularization is restricted to the last three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Mean class selectivity in the last three layers (y-axis) as a function of $\\alpha$ (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_si_neg_last3}\n\\end{figure*} \n\\vspace{2mm}\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_neg_last3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_neg_resnet18_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_violin_neg_last3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_zoom_neg_violin_resnet18_last3.pdf}\n }\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_neg_last3_resnet18}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_neg_resnet18_last3.pdf}\n } \n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \\hspace{20mm}\n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_neg_last3_resnet20}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_neg_resnet20_last3.pdf}\n }\n \n \n \n \n \n \n \\hspace{10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_neg_last3_resnet20}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_neg_resnet20_last3.pdf}\n } \n \\end{subfloatrow*} \n\n \\caption{\\textbf{Effects on test accuracy when regularizing to decrease class selectivity in the last three network layers.} (\\textbf{a}) Test accuracy (y-axis) as a function of $\\alpha$ (x-axis) when class selectivity regularization is restricted to the last three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Similar to (\\textbf{a}), but for a subset of $\\alpha$ values. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity in the last three layers (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{d}) and (\\textbf{e}) are identical to (\\textbf{a}) and (\\textbf{c}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_acc_neg_last3}\n\\end{figure*} \n\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_layerwise_last3_resnet18}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_pos_resnet18_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_last3_resnet18}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_mean_pos_resnet18_last3.pdf}\n }\n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_layerwise_last3_resnet20}\n \\includegraphics[width=0.44\\textwidth]{figures\/supplement\/first_last_three\/si_layerwise_pos_resnet20_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_si_pos_last3_resnet20}\n \\includegraphics[width=0.42\\textwidth]{figures\/supplement\/first_last_three\/si_mean_pos_resnet20_last3.pdf}\n }\n \\end{subfloatrow*} \n\n \\caption{\\textbf{Regularizing to increase class selectivity in the last three network layers.} (\\textbf{a}) Mean class selectivity (y-axis) as a function of layer (x-axis) for different values of $\\alpha$ (intensity of red) when class selectivity regularization is restricted to the last three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Mean class selectivity in the last three layers (y-axis) as a function of $\\alpha$ (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{c}) and (\\textbf{d}) are identical to (\\textbf{a}) and (\\textbf{b}), respectively, but for ResNet20 trained on CIFAR10. 
Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_si_pos_last3}\n\\end{figure*} \n\\vspace{2mm}\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_pos_last3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_pos_resnet18_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_violin_pos_last3_resnet18}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_zoom_pos_violin_resnet18_last3.pdf}\n }\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_pos_last3_resnet18}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_pos_resnet18_last3.pdf}\n } \n \\end{subfloatrow*}\n \\begin{subfloatrow*}\n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_pos_last3_resnet20}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_full_pos_resnet20_last3.pdf}\n }\n \n \n \\sidesubfloat[]{\n \\label{fig:apx_alpha_vs_acc_violin_pos_last3_resnet20}\n \\includegraphics[width=0.3\\textwidth]{figures\/supplement\/first_last_three\/acc_test_zoom_pos_violin_resnet20_last3.pdf}\n }\n \n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_acc_pos_last3_resnet20}\n \\includegraphics[width=0.31\\textwidth]{figures\/supplement\/first_last_three\/si_vs_accuracy_pos_resnet20_last3.pdf}\n } \n \\end{subfloatrow*} \n\n \\caption{\\textbf{Effects on test accuracy when regularizing to increase class selectivity in the last three network layers.} (\\textbf{a}) Test accuracy (y-axis) as a function of $\\alpha$ (x-axis) when class selectivity regularization is restricted to the last three network layers in ResNet18 trained on Tiny ImageNet. (\\textbf{b}) Similar to (\\textbf{a}), but for a subset of $\\alpha$ values. 
(\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity in the last three layers (x-axis) in ResNet18 trained on Tiny ImageNet. (\\textbf{d-f}) are identical to (\\textbf{a-c}), respectively, but for ResNet20 trained on CIFAR10. Error bars = 95\\% confidence intervals.}\n \\vspace*{-3mm}\n \\label{fig:apx_acc_pos_last3}\n\\end{figure*}\n\n\\clearpage\n\n\\subsection{Additional results} \\label{appendix:misc}\n\n\n\\begin{figure*}[!h]\n \\centering\n \\begin{subfloatrow*}\n \\hspace{-2mm}\n \\sidesubfloat[]{\n \\label{fig:apx_pos_neg_compare_resnet}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/acc_test_neg_pos_zoom_violin.pdf}\n }\n \n \\hspace{-10mm}\n \\sidesubfloat[]{\n \\label{fig:apx_si_vs_accuray_scatter_resnet}\n \\includegraphics[width=0.48\\textwidth]{figures\/supplement\/si_vs_accuracy_full_scatter_vanilla_no_dead.pdf}\n }\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Increasing class selectivity has rapid and deleterious effects on test accuracy compared to reducing class selectivity in ResNet20 trained on CIFAR10.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale magnitude ($|\\alpha|$) for negative (blue) vs positive (red) values of $\\alpha$. Solid line in distributions denotes mean, dashed line denotes central two quartiles. **$p < 6\\times10^{-6}$ difference between $\\alpha < 0$ and $\\alpha > 0$, Wilcoxon rank-sum test, Bonferroni-corrected. 
(\\textbf{b}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis).}\n \\vspace*{-1.5em}\n \\label{fig:apx_pos_neg_comparison}\n\\end{figure*} \n\n\\medskip\n\n\\begin{figure*}[!h]\n \\centering\n \n \\begin{subfloatrow*}\n \\sidesubfloat[]{\n \\label{fig:acc_full_regscales_resnet18}\n \\includegraphics[width=0.46\\textwidth]{figures\/supplement\/acc_test_neg_pos_resnet18.pdf}\n }\n \\sidesubfloat[]{\n \\label{fig:acc_full_regscales_resnet20}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/acc_test_neg_pos.pdf}\n } \n \n \\end{subfloatrow*}\n \\caption{\\textbf{Directly comparing promoting vs penalizing class selectivity.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$; x-axis) when promoting ($\\alpha > 0$, red) or penalizing ($\\alpha < 0$, blue) class selectivity in ResNet18 trained on Tiny ImageNet. **$p < 2\\times10^{-5}$ difference between penalizing vs. promoting selectivity, Wilcoxon rank-sum test, Bonferroni-corrected. (\\textbf{b}) Same as (\\textbf{a}), but for ResNet20 trained on CIFAR10. *$p < 0.05$, **$p < 6\\times10^{-6}$ difference, Wilcoxon rank-sum test, Bonferroni-corrected. Error bars denote bootstrapped 95\\% confidence intervals.}\n \\label{fig:apx_acc_vs_alpha_resnet_full}\n \\vspace*{-1.7em}\n\\end{figure*}\n\n\\medskip\n\n\\begin{figure*}[!htb]\n \\centering\n \\begin{subfloatrow*}\n \n \n \\sidesubfloat[]{\n \\label{fig:acc_full_regscales_vgg_resnet}\n \\includegraphics[width=0.47\\textwidth]{figures\/supplement\/acc_full_regscales_vgg_resnet.pdf}\n }\n \\hspace{-2mm}\n \\sidesubfloat[]{\n \\label{fig:si_vgg_resnet_zoom_violin}\n \\includegraphics[width=0.5\\textwidth]{figures\/supplement\/si_vgg_resnet_zoom_violin.pdf}\n }\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Differences between ResNet20 and VGG16.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$, x-axis) for ResNet20 (cyan) and VGG16 (orange). 
Error bars denote bootstrapped 95\\% confidence intervals. (\\textbf{b}) Class selectivity (y-axis) as a function of regularization scale ($\\alpha$, x-axis) for ResNet20 (cyan) and VGG16 (orange).}\n \\label{fig:apx_resnet_vs_vgg}\n \\vspace*{-1em}\n\\end{figure*}\n\\section{Approach}\n\nNetworks naturally seem to learn solutions that result in class-selective individual units \\citep{zhou_object_2015, olah_feature_visualization_2017, morcos_single_directions_2018, zhou_revisiting_2018, meyes_ablation_2019, na_nlp_concepts_single_units_2019, zhou_network_dissection_2019, rafegas_indexing_selectivity_2019, amjad_understanding_2018, bau_network_2017, karpathy_visualizing_rnns_2016, radford_sentiment_neuron_2017, olah_building_blocks_2018, olah_zoom_2020}. We examined whether learning class-selective representations in individual units is actually necessary for networks to function properly. Motivated by the limitations of single unit ablation techniques and the indirectness of using batch norm or dropout to modulate class selectivity (e.g. \\citealp{morcos_single_directions_2018, zhou_revisiting_2018, lillian_ablation_2018, meyes_ablation_2019}), we developed an alternative approach for examining the necessity of class selectivity for network performance. By adding a term to the loss function that serves as a regularizer to suppress (or increase) class selectivity, we demonstrate that it is possible to directly modulate the amount of class selectivity in all units in aggregate. We then used this approach as the basis for a series of experiments in which we modulated levels of class selectivity across individual units and measured the resulting effects on the network. 
Critically, the selectivity regularizer sidesteps the limitations of single unit ablation-based approaches, allowing us to answer otherwise-inaccessible questions such as whether single units actually need to learn class selectivity, and whether increased levels of class selectivity are beneficial.\n\nUnless otherwise noted: all experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95\\% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses because by definition the output layer must be class selective in a classification task.\n\n\\subsection{Models and datasets}\n\nOur experiments were performed on ResNet18 \\citep{he_resnet_2016} trained on Tiny ImageNet \\citep{tiny_imagenet}, and ResNet20 \\citep{he_resnet_2016} and a VGG16-like network \\citep{simonyan_vgg_15}, both trained on CIFAR10 \\citep{krizhevsky_cifar10_2009}. Additional details about hyperparameters, data, training, and software are in Appendix \\ref{appendix:models_training_etc}. We focus on Tiny ImageNet in the main text, but results were qualitatively similar across models and datasets except where noted.\n\n\\subsection{Defining class selectivity}\nThere is a breadth of approaches for quantifying class selectivity in individual units \\citep{zipser_selectivity_1998, zhou_object_2015, li_yosinski_convergent, zhou_revisiting_2018, gale_selectivity_2019}. We chose the neuroscience-inspired approach of \\citet{morcos_single_directions_2018} because it is similar to many widely-used metrics, easy to compute, and most importantly, differentiable (the utility of this is addressed in the next section). 
We also confirmed the efficacy of our regularizer on a different, non-differentiable selectivity metric (see Appendix \\ref{appendix:other_metrics}). For a single convolutional feature map (which we refer to as a \"unit\"), we computed the mean activation across the elements of the feature map in response to a single sample, after the non-linearity. Then the class-conditional mean activation (i.e. the mean activation for each class) was calculated across all samples in the test set, and the class selectivity index ($SI$) was calculated as follows:\n\n\\vspace{-8mm}\n\\begin{equation}\n\\label{eqn:si}\n SI = \\frac{\\mu_{max} - \\mu_{-max}}{\\mu_{max} + \\mu_{-max} + \\epsilon}\n\\end{equation}\n\nwhere $\\mu_{max}$ is the largest class-conditional mean activation, $\\mu_{-max}$ is the mean response to the remaining (i.e. non-$\\mu_{max}$) classes, and $\\epsilon$ is a small value to prevent division by zero in the case of a dead unit (we used $10^{-7}$). The selectivity index ranges from 0 to 1: a unit with identical average activity for all classes has a selectivity of 0, and a unit that responds to only a single class has a selectivity of 1.\n\nAs \\citet{morcos_single_directions_2018} note, this selectivity index is not a perfect measure of information content in single units. For example, a unit with some information about many classes would have a low selectivity index. But it achieves the goal of identifying class-selective units in a manner similar to prior studies \\citep{zhou_revisiting_2018}, while also being differentiable with respect to the model parameters.\n\n\\subsection{A single knob to control class selectivity} \\label{sec:approach_regularization}\n\nBecause the class selectivity index is differentiable, we can insert it into the loss function, allowing us to directly regularize for or against class selectivity. 
Our loss function, which we seek to minimize, thus takes the following form:\n\n\vspace{-8mm}\n\begin{equation}\n loss = -\sum_{c}^{C} {y_{c} \cdotp \log(\hat{y_{c}})} - \alpha\mu_{SI}\n\end{equation}\n\vspace{-3mm}\n\nThe left-hand term in the loss function is the traditional cross-entropy between the softmax of the output units and the true class labels, where $c$ is the class index, $C$ is the number of classes, $y_c$ is the true class label, and $\hat{y_c}$ is the predicted class probability. We refer to the right-hand component of the loss function, $-\alpha\mu_{SI}$, as the class selectivity regularizer (or regularizer, for brevity). The regularizer consists of two terms: the selectivity term,\n\n\vspace{-8mm}\n\begin{equation}\n \mu_{SI} = \frac{1}{L} \sum_{l}^{L} \frac{1}{U} {\sum_{u}^{U} {SI_u}}\n\end{equation}\n\vspace{-3mm}\n\nwhere $l$ is a convolutional layer, $L$ is the number of layers, $u$ is a unit (i.e. feature map), $U$ is the number of units in a given layer, and $SI_u$ is the class selectivity index of unit $u$. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The remaining term in the regularizer is $\alpha$, the regularizer scale. The sign of $\alpha$ determines whether class selectivity is promoted or discouraged. Negative values of $\alpha$ discourage class selectivity in individual units, while positive values promote it. The magnitude of $\alpha$ controls the contribution of the selectivity term to the overall loss. 
$\alpha$ thus serves as a single knob with which we can modulate class selectivity across all units in the network in aggregate. During training, the class selectivity index was computed for each minibatch. For the results presented here, the class selectivity index was computed across the entire test set. We also experimented with restricting regularization to the first or final three layers, which yielded qualitatively similar results (Appendix \ref{appendix:first_last_three_layers}).\n\n\begin{figure*}[!t]\n\vspace{-12mm}\n \centering\n \begin{subfloatrow*}\n \n \n \sidesubfloat[]{\n \label{fig:acc_test_full_neg_resnet18}\n \includegraphics[width=0.27\textwidth]{figures\/acc_test_full_neg_resnet18.pdf}\n }\n \hspace{2mm}\n \sidesubfloat[]{\n \label{fig:acc_neg_violin_resnet18}\n \includegraphics[width=0.28\textwidth]{figures\/acc_test_zoom_neg_violin_resnet18.pdf}\n }\n \hspace{4mm}\n \sidesubfloat[]{\n \label{fig:si_vs_accuracy_neg_resnet18}\n \includegraphics[width=0.3\textwidth]{figures\/si_vs_accuracy_neg_resnet18.pdf}\n }\n \end{subfloatrow*}\n \vspace{-3mm}\n \caption{\textbf{Effects of reducing class selectivity on test accuracy in ResNet18 trained on Tiny ImageNet.} (\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\alpha$, x-axis and intensity of blue). (\textbf{b}) Identical to (\textbf{a}), but for a subset of $\alpha$ values. The center of each violin plot contains a boxplot, in which the darker central lines denote the central two quartiles. (\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for different values of $\alpha$. Error bars denote 95\% confidence intervals. *$p < 0.01$, **$p < 5\times10^{-10}$ difference from $\alpha = 0$, t-test, Bonferroni-corrected. 
See Appendix \\ref{appendix:decreasing_selectivity} and \\ref{appendix:vgg} for ResNet20 and VGG results, respectively.}\n \\label{fig:acc_neg_resnet18}\n\\end{figure*}\n\\section{Discussion} \\label{sec:discussion}\n\nWe examined the causal role of class selectivity in CNN performance by adding a term to the loss function that allows us to directly manipulate class selectivity across all neurons in the network. We found that class selectivity is not strictly necessary for networks to function, and that reducing it can even improve test accuracy. In ResNet18 trained on Tiny Imagenet, reducing class selectivity by 1.6$\\times$ improved test accuracy by over 2\\%. In ResNet20 trained on CIFAR10, we could reduce the mean class selectivity of units in a network by factor of $\\sim$2.5 with no impact on test accuracy, and by a factor of $\\sim$20\u2014nearly to a mean of 0\u2014with only a 2\\% change in test accuracy. We confirmed that our regularizer seems to suppress class selectivity, and not simply cause the network to rotate it off of unit-aligned axes. We also found that regularizing a network to increase class selectivity in individual units has negative effects on performance. These results resolve questions about class selectivity that remained inaccessible to previous approaches: class selectivity in individual units is neither necessary nor sufficient for\u2014and can sometimes even constrain\u2014CNN performance. \n\n\nOne caveat to our results is that they are limited to CNNs trained to perform image classification. It's possible that our findings are due to idiosyncracies of benchmark datasets, and wouldn't generalize to more naturalistic datasets and tasks. 
Given that class selectivity is ubiquitous across DNNs trained on different tasks and datasets, future work should examine how broadly our results generalize, and the viability of class selectivity regularization as a general-purpose tool to improve DNN performance.\n\nOur results make a broader point about the potential pitfalls of focusing on the properties of single units when trying to understand DNNs, emphasizing instead the importance of analyses that focus on \textit{distributed} representations. While we consider it essential to find tractable, intuitive approaches for understanding complex systems, it's critical to empirically verify that these approaches actually reflect functionally relevant properties of the system being examined.\n\n\n\n\section*{Broader Impact}\n\n\nOur ability to understand deep learning systems lags considerably behind our ability to obtain practical outcomes with them. This has not stopped their widespread deployment across socially-significant domains, such as self-driving automobiles \citep{karpathy_tesla_pytorch_2019} and law enforcement \citep{crime_prediction_2017, crime_scene_analysis_2017}. But it's critical that these systems are understandable and accountable given their impact on lives and livelihoods. Public acceptance of machine learning technologies also hinges on trust and understanding \citep{public_trust_ai}. If the public becomes mistrustful of these technologies and objects to their use, their potential utility would go unrealized, and the fruits of our labor as researchers would die on the vine. So it's essential that we not only develop approaches for understanding and explaining deep learning systems, but also ensure that the approaches we develop are based on properties that are actually meaningful to their function. 
Our results indicate that class selectivity may not be a robust property for describing the function of DNNs, and more generally suggest caution when focusing on the properties of single units as representative of the mechanisms by which DNNs function.\n\section{Introduction} \label{sec:introduction}\n\nOur ability to understand deep learning systems lags considerably behind our ability to obtain practical outcomes with them. A breadth of approaches have been developed in attempts to better understand deep learning systems and render them more comprehensible to humans \citep{yosinski_understanding_2015, bau_network_2017, olah_building_blocks_2018, hooker_benchmark_2019}. Many of these approaches examine the properties of single neurons and treat them as representative of the networks in which they're embedded \citep{erhan_visualizing_2009, zeiler_deconvolution_2014, karpathy_visualizing_rnns_2016, amjad_understanding_2018, lillian_ablation_2018, dhamdhere_conductance_2019, olah_zoom_2020}. \n\n\n\nThe selectivity of individual units (i.e. the variability in a neuron's responses across data classes or dimensions) is one property that has been of particular interest to researchers trying to better understand deep neural networks (DNNs) \citep{zhou_object_2015, olah_feature_visualization_2017, morcos_single_directions_2018, zhou_revisiting_2018, meyes_ablation_2019, na_nlp_concepts_single_units_2019, zhou_network_dissection_2019, rafegas_indexing_selectivity_2019, bau_understanding_2020}. This focus on individual neurons makes intuitive sense, as the tractable, semantic nature of selectivity is extremely alluring; some measure of selectivity in individual units is often provided as an explanation of \"what\" a network is \"doing\". 
One notable study highlighted a neuron selective for sentiment in an LSTM network trained on a word prediction task \citep{radford_sentiment_neuron_2017}. Another attributed visualizable, semantic features to the activity of individual neurons across GoogLeNet trained on ImageNet \citep{olah_feature_visualization_2017}. Both of these examples influenced many subsequent studies, demonstrating the widespread, intuitive appeal of \"selectivity\" \citep{amjad_understanding_2018, meyes_ablation_2019, morcos_single_directions_2018, zhou_object_2015, zhou_revisiting_2018, bau_network_2017, karpathy_visualizing_rnns_2016, na_nlp_concepts_single_units_2019, radford_sentiment_neuron_2017, rafegas_indexing_selectivity_2019, olah_feature_visualization_2017, olah_building_blocks_2018, olah_zoom_2020}.\n\nFinding intuitive ways of representing the workings of DNNs is essential for making them understandable and accountable, but we must ensure that our approaches are based on meaningful properties of the system. Recent studies have begun to address this issue by investigating the relationships between selectivity and measures of network function such as generalization and robustness to perturbation \citep{morcos_single_directions_2018, zhou_revisiting_2018, dalvi_grain_of_sand_2019}. Selectivity has also been used as the basis for targeted modulation of neural network function through individual units \citep{bau_control_translation_2019, bau_gan_dissection_2019}.\n\nHowever, there is also growing evidence from experiments in both deep learning \citep{fong_net2vec_2018, morcos_single_directions_2018, gale_selectivity_2019, donnelly_interpretability_2019} and neuroscience \citep{leavitt_correlated_2017, zylberberg_untuned_2018, insanally_untuned_2019} that single unit selectivity may not be as important as once thought. 
Previous studies examining the functional role of selectivity in DNNs have often measured how selectivity mediates the effects of ablating single units, or used correlational approaches that modulate selectivity only indirectly (e.g. batch norm) \citep{morcos_single_directions_2018, zhou_revisiting_2018, lillian_ablation_2018, meyes_ablation_2019, kanda_deleting_2020}. But single unit ablation in trained networks has two critical limitations: it cannot address whether the \textit{presence} of selectivity is beneficial, nor whether networks \textit{need to learn} selectivity to function properly. It can only address the effect of removing a neuron from a network whose training process assumed the presence of that neuron. And even then, the observed effect might be misleading. For example, a property that is critical to network function may be replicated across multiple neurons. This redundancy means that ablating any one of these neurons would show little effect, and could thus lead to the erroneous conclusion that the examined property has little impact on network function.\n\nWe were motivated by these issues to pursue a series of experiments investigating the causal importance of class selectivity in artificial neural networks. To do so, we introduced a term to the loss function that allows us to directly regularize for or against class selectivity, giving us a single knob to control class selectivity in the network. The selectivity regularizer sidesteps the limitations of single unit ablation and other indirect techniques, allowing us to conduct a series of experiments evaluating the causal impact of class selectivity on DNN performance. Our findings are as follows:\n\n\vspace{-3mm}\n\newcommand{\vspace{-1mm}}{\vspace{-1mm}}\n\begin{itemize}\n \n \item Performance can be improved by reducing class selectivity, suggesting that naturally-learned levels of class selectivity can be detrimental. 
Reducing class selectivity in ResNet18 trained on Tiny ImageNet could improve test accuracy by over 2\%. \n \vspace{-1mm}\n \item Even when class selectivity isn't detrimental to network function, it remains largely unnecessary. We were able to reduce the mean class selectivity of units in ResNet20 trained on CIFAR10 by a factor of $\sim$2.5 with no impact on test accuracy, and by a factor of $\sim$20\u2014nearly to a mean of 0\u2014with only a 2\% change in test accuracy.\n \vspace{-1mm}\n \item Our regularizer does not simply cause networks to preserve class selectivity by rotating it off of unit-aligned axes (i.e. by distributing selectivity linearly across units), but rather seems to suppress selectivity more generally, even when optimizing for high-selectivity basis sets. This demonstrates the viability of low-selectivity representations distributed \textit{across} units.\n \vspace{-1mm}\n \item We show that regularizing to increase class selectivity, even by small amounts, has significant negative effects on performance. Trained networks seem to be perched precariously at a performance cliff with regard to class selectivity. These results indicate that the levels of class selectivity learned by individual units in the absence of explicit regularization are at the limit of what the network can tolerate before performance is impaired.\n \n\end{itemize}\n\vspace{-2mm}\n\nOur findings collectively demonstrate that class selectivity in individual units is neither necessary nor sufficient for convolutional neural networks (CNNs) to perform image classification tasks, and in some cases can actually be detrimental. This alludes to the possibility of class selectivity regularization as a technique for improving CNN performance. 
More generally, our results encourage caution when focusing on the properties of single units as representative of the mechanisms by which CNNs function, and emphasize the importance of analyses that examine properties across neurons (i.e. distributed representations). Most importantly, our results are a reminder to verify that the properties we \textit{do} focus on are actually relevant to CNN function.\n\n
\section{Related work} \label{sec:related_work}\n\n\subsection{Selectivity in deep learning}\nExamining some form of selectivity in individual units constitutes the bedrock of many approaches to understanding DNNs. Sometimes the goal is simply to visualize selectivity, which has been pursued using a breadth of methods. These include identifying the input sample(s) (e.g. 
images) or sample subregions that maximally activate a given neuron \citep{zhou_object_2015, rafegas_indexing_selectivity_2019}, and numerous optimization-based techniques for generating samples that maximize unit activations \citep{erhan_visualizing_2009, zeiler_deconvolution_2014, simonyan_deep_2014, yosinski_understanding_2015, nguyen_preferred_synthesis_2016, olah_feature_visualization_2017, olah_building_blocks_2018}. While the different methods for quantifying single unit selectivity are often conceptually quite similar (measuring how variable a neuron's responses are across different classes of data samples), they have been applied across a broad range of contexts \citep{amjad_understanding_2018, meyes_ablation_2019, morcos_single_directions_2018, zhou_object_2015, zhou_revisiting_2018, bau_network_2017, karpathy_visualizing_rnns_2016, na_nlp_concepts_single_units_2019, radford_sentiment_neuron_2017, rafegas_indexing_selectivity_2019}. For example, \citet{bau_network_2017} quantified single unit selectivity for \"concepts\" (as annotated by humans) in networks trained for object and scene recognition. \citet{olah_building_blocks_2018, olah_zoom_2020} have pursued a research program examining single unit selectivity as a building block for understanding DNNs. And single units in models trained to solve natural language processing tasks have been found to exhibit selectivity for syntactical and semantic features \citep{karpathy_visualizing_rnns_2016, na_nlp_concepts_single_units_2019}, of which the \"sentiment-selective neuron\" reported by \citet{radford_sentiment_neuron_2017} is a particularly well-known example.\n\nThe relationship between individual unit selectivity and various measures of DNN performance has been examined in prior studies, but the conclusions have not been concordant. 
\citet{morcos_single_directions_2018}, using single unit ablation and other techniques, found that a network's test set generalization is negatively correlated (or uncorrelated) with the class selectivity of its units, a finding replicated by \citet{kanda_deleting_2020}. In contrast, though \citet{amjad_understanding_2018} confirmed these results for single unit ablation, they also performed cumulative ablation analyses which indicated that selectivity is beneficial, suggesting that redundancy across units may make single unit ablation studies difficult to interpret.\n\nIn a follow-up study, \citet{zhou_revisiting_2018} found that ablating class-selective units impairs classification accuracy for specific classes (though interestingly, not always the same class the unit was selective for), but a compensatory increase in accuracy for other classes can often leave overall accuracy unaffected. \citet{ukita_ablation_2018} found that orientation selectivity in individual units is correlated with generalization performance in convolutional neural networks (CNNs), and that ablating highly orientation-selective units impairs classification accuracy more than ablating units with low orientation-selectivity. But while orientation selectivity and class selectivity can both be considered types of feature selectivity, orientation selectivity is far less abstract and focuses on specific properties of the image (e.g., oriented edges) rather than semantically meaningful concepts and classes. Nevertheless, this study still demonstrates the importance of some types of selectivity.\n\nResults are also variable for models trained on NLP tasks. \citet{dalvi_grain_of_sand_2019} found that ablating units selective for linguistic features causes greater performance deficits than ablating less-selective units, while \citet{donnelly_interpretability_2019} found that ablating the \"sentiment neuron\" of \citet{radford_sentiment_neuron_2017} has equivocal effects on performance. 
These findings seem challenging to reconcile.\n\nAll of these studies examining class selectivity in single units are hamstrung by their reliance on single unit ablation, which could account for their conflicting results. As discussed earlier, single unit ablation can only address whether class selectivity affects performance in trained networks, and not whether individual units \textit{need to learn} class selectivity for optimal network function. And even then, the conclusions obtained from single neuron ablation analyses can be misleading due to redundancy across units \citep{amjad_understanding_2018, meyes_ablation_2019}.\n\n\subsection{Selectivity in neuroscience}\nMeasuring the responses of single neurons to a relevant set of stimuli has been the canonical first-order approach for understanding the nervous system \citep{sherrington_integrative_1906, adrian_impulses_1926, granit_receptors_1955, hubel_receptive_1959, barlow_single_1972, kandel_principles_2000}; its application has yielded multiple Nobel Prizes \citep{hubel_receptive_1959, hubel_receptive_1962, hubel_exploration_1982, wiesel_postnatal_1982, okeefe_spatialmap_1971, grid_cells_moser_2004}. But recent experimental findings have raised doubts about the necessity of selectivity for high-fidelity representations in neuronal populations \citep{leavitt_correlated_2017, insanally_untuned_2019, zylberberg_untuned_2018}, and neuroscience research seems to be moving beyond characterizing neural systems at the level of single neurons, towards population-level phenomena \citep{shenoy_cortical_2013, raposo_category-free_2014, fusi_why_2016, morcos_history-dependent_2016, pruszynski_review_2019, heeger_pnas_2019, saxena_population_doctrine_2019}.\n\nSingle unit selectivity-based approaches are ubiquitous in attempts to understand artificial and biological neural systems, but growing evidence has led to questions about the importance of focusing on selectivity and its role in DNN function. 
These factors, combined with the limitations of prior approaches, lead to the question: is class selectivity necessary and\/or sufficient for DNN function?\n\\section{Results} \\label{sec:results}\n\n\\subsection{Test accuracy is improved or unaffected by reducing class selectivity} \\label{sec:results_regularize_against}\n\nPrior research has yielded equivocal results regarding the importance of class selectivity in individual units. We sidestepped the limitations of previous approaches by regularizing against selectivity directly in the loss function through the addition of the selectivity term (see Approach \\ref{sec:approach_regularization}), giving us a knob with which to causally manipulate class selectivity. We first verified that the regularizer works as intended (Figure \\ref{fig:apx_si_neg}). Indeed, class selectivity across units in a network decreases as $\\alpha$ becomes more negative. We also confirmed that our class selectivity regularizer has a similar effect when measured using a different class selectivity metric (see Appendix \\ref{appendix:other_metrics}), indicating that our results are not unique to the metric used in our regularizer. 
The regularizer thus allows us to examine the causal impact of class selectivity on test accuracy.\n\n\begin{figure*}[!t]\n \vspace{-12mm}\n \centering\n \begin{subfloatrow*}\n \n \sidesubfloat[]{\n \label{fig:cca_example}\n \includegraphics[width=0.33\textwidth]{figures\/cca_example_resnet18.pdf}\n }\n \hspace{-5mm}\n \sidesubfloat[]{\n \label{fig:cca_distance_ratios}\n \includegraphics[width=0.32\textwidth]{figures\/cca_ratios_resnet18.pdf}\n }\n \sidesubfloat[]{\n \label{fig:selectivity_upper_bound}\n \includegraphics[width=0.31\textwidth]{figures\/supplement\/optimal_selectivity_tiny_imagenet.pdf}\n }\n \end{subfloatrow*}\n \vspace{-2mm}\n \caption{\textbf{Checking for off-axis selectivity.} (\textbf{a}) Mean CCA distance ($\rho$, y-axis) as a function of layer (x-axis) between pairs of replicate ResNet18 networks (see Section \ref{sec:results_cca} or Appendix \ref{appendix:cca_our_analysis}) trained with $\alpha = -2$ (i.e. $\rho(\alpha_{-2},\alpha_{-2})$; light purple), and between pairs of networks trained with $\alpha = -2$ and $\alpha = 0$ (i.e. $\rho(\alpha_{-2},\alpha_{0})$; dark purple). (\textbf{b}) For each layer, we compute the ratio of $\rho(\alpha_{-2},\alpha_{0}) : \rho(\alpha_{-2},\alpha_{-2})$, which we refer to as the CCA distance ratio. We then plot the mean CCA distance ratio across layers (y-axis) as a function of $\alpha$ (x-axis, intensity of blue). Example from panel \textbf{a} ($\alpha = -2$) circled in purple. $p < 1.3\times10^{-5}$, paired t-test, for all $\alpha$ except -0.1. (\textbf{c}) Mean class selectivity (y-axis) as a function of regularization scale ($\alpha$; x-axis) for ResNet18 trained on Tiny ImageNet. 
Diamond-shaped data points denote the upper bound on class selectivity for a linear projection of activations (see Section \ref{sec:results_cca} or Appendix \ref{appendix:maximum_selectivity}), while circular points denote the amount of axis-aligned class selectivity for the corresponding values of $\alpha$. Error bars or shaded region = 95\% confidence intervals. ResNet20 results are in Appendix \ref{appendix:cca_resnet20} (CCA) and \ref{appendix:maximum_selectivity} (selectivity upper bound).}\n \label{fig:cca}\n \n\end{figure*}\n\nRegularizing against class selectivity could yield three possible outcomes: if the previously-reported anti-correlation between selectivity and generalization is causal, then test accuracy should increase. But if class selectivity is necessary for high-fidelity class representations, then we should observe a decrease in test accuracy. Finally, if class selectivity is an emergent phenomenon and\/or irrelevant to network performance, test accuracy should remain unchanged.\n\nSurprisingly, we observed that reducing selectivity significantly improves test accuracy in ResNet18 trained on Tiny ImageNet for all examined values of $\alpha \in [-2.5, -0.1]$ (Figure \ref{fig:acc_neg_resnet18}; $p<0.01$, Bonferroni-corrected t-test). Test accuracy increases with the magnitude of $\alpha$, reaching a maximum at $\alpha=-1.0$ (test accuracy at $\alpha_{-1.0}=53.60\pm0.13$, $\alpha_{0}$ (i.e. no regularization) $=51.57\pm0.18$), at which point there is a 1.6$\times$ reduction in class selectivity (mean class selectivity at $\alpha_{-1.0}=0.22\pm0.0009$, $\alpha_{0}=0.35\pm0.0007$). 
Test accuracy then begins to decline; at $\alpha_{-3.0}$ test accuracy is statistically indistinguishable from $\alpha_{0}$, despite a 3$\times$ decrease in class selectivity (mean class selectivity at $\alpha_{-3.0}=0.12\pm0.0007$, $\alpha_{0}=0.35\pm0.0007$). Further reducing class selectivity beyond $\alpha=-3.5$ (mean class selectivity $=0.10\pm0.0007$) has increasingly detrimental effects on test accuracy. These results show that the amount of class selectivity naturally learned by a network (i.e. the amount learned in the absence of explicit regularization) can actually constrain the network's performance.\n\nResNet20 trained on CIFAR10 also learned superfluous class selectivity. Although reducing class selectivity does not improve performance, it causes minimal detriment, except at extreme regularization scales ($\alpha \leq -30$; Figure \ref{fig:apx_acc_neg_resnet20}). Increasing the magnitude of $\alpha$ decreases mean class selectivity across the network, with little impact on test accuracy until mean class selectivity reaches $0.003\pm0.0002$ at $\alpha_{-30}$ (Figure \ref{fig:apx_si_mean_neg}). Reducing class selectivity only begins to have a statistically significant effect on performance at $\alpha_{-1.0}$ (Figure \ref{fig:apx_acc_neg_full}), at which point mean class selectivity across the network has decreased from $0.22\pm0.002$ at $\alpha_{0}$ (i.e. no regularization) to $0.07\pm0.0013$ at $\alpha_{-1.0}$\u2014a factor of more than 3 (Figure \ref{fig:apx_si_vs_accuracy_neg}; $p = 0.03$, Bonferroni-corrected t-test). This implies that ResNet20 learns more than three times the amount of class selectivity required for maximum test accuracy.\n\nWe observed qualitatively similar results for VGG16 (see Appendix \ref{appendix:vgg}). 
Although the difference is significant at $\\alpha=-0.1$ ($p=0.004$, Bonferroni-corrected t-test), it is possible to reduce mean class selectivity by a factor of 5 with only a 0.5\\% decrease in test accuracy, and by a factor of 10 with only a $\\sim$1\\% drop in test accuracy. These differences may be due to VGG16's naturally higher levels of class selectivity (see Appendix \\ref{appendix:misc} for comparisons between VGG16 and ResNet20). Together, these results demonstrate that class selectivity in individual units is largely unnecessary for optimal performance in CNNs trained on image classification tasks.\n\n\\begin{figure*}[!t]\n \\vspace{-12mm}\n \\centering\n \\begin{subfloatrow*}\n \n \n \\sidesubfloat[]{\n \\label{fig:acc_pos_full_resnet18}\n \\includegraphics[width=0.25\\textwidth]{figures\/acc_test_full_pos_resnet18.pdf}\n }\n \\hspace{2mm}\n \\sidesubfloat[]{\n \\label{fig:acc_pos_violin_resnet18}\n \\includegraphics[width=0.28\\textwidth]{figures\/acc_test_zoom_pos_violin_resnet18.pdf}\n }\n \\hspace{5mm}\n \\sidesubfloat[]{\n \\label{fig:si_vs_accuracy_pos_resnet18}\n \\includegraphics[width=0.28\\textwidth]{figures\/si_vs_accuracy_pos_resnet18.pdf}\n }\n \\end{subfloatrow*}\n \n \\caption{\\textbf{Effects of increasing class selectivity on test accuracy in ResNet18 trained on Tiny ImageNet.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale ($\\alpha$; x-axis, intensity of red). (\\textbf{b}) Identical to (\\textbf{a}), but for a subset of $\\alpha$ values. Each violin plot contains a boxplot in which the darker central lines denote the central two quartiles. (\\textbf{c}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) across $\\alpha$ values. Error bars denote 95\\% confidence intervals. *$p < 6\\times10^{-5}$, **$p < 8\\times10^{-12}$ difference from $\\alpha = 0$, t-test, Bonferroni-corrected. 
See Appendix \\ref{appendix:si_vs_accuracy_pos_resnet20} and \\ref{appendix:vgg} for ResNet20 and VGG results, respectively.}\n \\label{fig:acc_pos_resnet18}\n \n \\vspace{-3mm}\n\\end{figure*}\n\n\\subsection{Does selectivity shift to a different basis set?} \\label{sec:results_cca}\n\nWe were able to reduce mean class selectivity in all examined networks by a factor of at least three with minimal negative impact on test accuracy ($\\sim$1\\%, at worst, for VGG16). However, one trivial solution for reducing class selectivity is for the network to \"hide\" it from the regularizer by rotating it off of unit-aligned axes or performing some other linear transformation. In this scenario the selectivity in individual units would be reduced, but remain accessible through linear combinations of activity across units. In order to test this possibility, we used CCA (see Appendix \\ref{appendix:cca}), which is invariant to rotation and other invertible affine transformations, to compare the representations in regularized (i.e. low-selectivity) networks to the representations in unregularized networks.\n\nWe first established a meaningful baseline for comparison by computing the CCA distances between each pair of 20 replicate networks for a given value of $\\alpha$ (we refer to this set of distances as $\\rho(\\alpha_{r},\\alpha_{r})$). If regularizing against class selectivity causes the network to move selectivity off-axis, the CCA distances between regularized and unregularized networks \u2014which we term $\\rho(\\alpha_{r},\\alpha_{0})$\u2014should be similar to $\\rho(\\alpha_{r},\\alpha_{r})$. 
Alternatively, if class selectivity is suppressed via some non-affine transformation of the representation, $\\rho(\\alpha_{r},\\alpha_{0})$ should exceed $\\rho(\\alpha_{r},\\alpha_{r})$.\n\nOur analyses confirm the latter hypothesis: we find that $\\rho(\\alpha_{r},\\alpha_{0})$ significantly exceeds $\\rho(\\alpha_{r},\\alpha_{r})$ for all values of $\\alpha$ except $\\alpha=-0.1$ in ResNet18 trained on Tiny ImageNet (Figure \\ref{fig:cca}; $p < 1.3\\times10^{-5}$, paired t-test). The effect is even more striking in ResNet20 trained on CIFAR10; all tested values of $\\alpha$ are significant (Figure \\ref{fig:apx_cca}; $p < 5\\times10^{-6}$, paired t-test). Furthermore, the size of the effect is proportional to $\\alpha$ in both models; larger $\\alpha$ values yield representations that are more dissimilar to unregularized representations. These results support the conclusion that our regularizer does not merely rotate class selectivity off of unit-aligned axes, but genuinely suppresses it.\n\nAs an additional control to ensure that our regularizer did not simply shift class selectivity to off-axis directions in activation space, we calculated an upper bound on the amount of class selectivity that could be recovered by finding the linear projection of unit activations that maximizes class selectivity (see Appendix \\ref{appendix:maximum_selectivity} for methodological details). For both ResNet18 trained on Tiny ImageNet (Figure \\ref{fig:selectivity_upper_bound}) and ResNet20 trained on CIFAR10 (Figure \\ref{fig:apx_maximum_selectivity_resnet20}), the amount of class selectivity in the optimized projection decreases as a function of increasing $|\\alpha|$, indicating that regularizing against class selectivity does not simply rotate the selectivity off-axis. 
Interestingly, the upper bound on class selectivity is very similar across regularization scales in the final two convolutional layers in both models (Figure \\ref{fig:apx_maximum_selectivity_resnet18_layerwise}; \\ref{fig:apx_maximum_selectivity_resnet20_layerwise}), indicating that immediate proximity to the output layer may mitigate the effect of class selectivity regularization. While we also found that the amount of class selectivity in the optimized projection is consistently higher than the observed axis-aligned class selectivity, we consider this to be an expected result, as the optimized projection represents an upper bound on the amount of class selectivity that could be recovered from the models' representations. However, the decreasing upper bound as a function of increasing $|\\alpha|$ indicates that our class selectivity regularizer decreases selectivity across all basis sets, and not just along unit-aligned axes.\n\n\\subsection{Increased class selectivity considered harmful} \\label{sec:results_regularize_for}\n\nWe have demonstrated that class selectivity can be significantly reduced with minimal impact on test accuracy. However, we only examined the effects of \\textit{reducing} selectivity. What are the effects of \\textit{increasing} selectivity? We examined this question by regularizing \\textit{for} class selectivity, instead of against it. This is achieved quite easily, as it requires only a change in the sign of $\\alpha$. 
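Concretely, this regularizer can be folded directly into the loss. The sketch below is a minimal numpy illustration (not the exact implementation used in our experiments; we assume a selectivity index of the form $\\frac{\\mu_{\\max} - \\mu_{-\\max}}{\\mu_{\\max} + \\mu_{-\\max}}$ computed from class-conditional mean activations, and the function names are our own):

```python
import numpy as np

def selectivity_index(class_means, eps=1e-7):
    """Per-unit class selectivity from class-conditional mean activations.

    class_means: array of shape (n_units, n_classes) holding each unit's
    mean activation for each class. The index compares mu_max, the largest
    class-conditional mean, against mu_rest, the mean over the remaining
    classes; eps guards against division by zero for silent units.
    """
    mu_max = class_means.max(axis=1)
    mu_rest = (class_means.sum(axis=1) - mu_max) / (class_means.shape[1] - 1)
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)

def regularized_loss(task_loss, class_means, alpha):
    """Subtract alpha times mean selectivity: alpha < 0 penalizes class
    selectivity, and flipping the sign (alpha > 0) rewards it."""
    return task_loss - alpha * selectivity_index(class_means).mean()
```

With this form, switching from regularizing against selectivity to regularizing for it is indeed just a sign change in $\\alpha$.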
We first confirmed that changing the sign of the scale term in the loss function causes the intended effect of increasing class selectivity in individual units (see Appendix \\ref{appendix:increasing_selectivity}).\n\nDespite class selectivity not being strictly necessary for high performance, its ubiquity across biological and artificial neural networks leads us to suspect it may still be sufficient. We thus expect that increasing it would either improve test accuracy or yield no effect. For the same reason, we would consider it unexpected if increasing selectivity impairs test accuracy.\n\n\\begin{figure*}[!t]\n \\vspace{-12mm}\n \\centering\n \\begin{subfloatrow*}\n \\hspace{4mm}\n \\sidesubfloat[]{\n \\label{fig:pos_neg_compare_resnet18}\n \\includegraphics[width=0.36\\textwidth]{figures\/acc_test_neg_pos_zoom_violin_resnet18.pdf}\n }\n \n \\hspace{8mm}\n \\sidesubfloat[]{\n \\label{fig:si_vs_accuray_scatter_resnet18}\n \\includegraphics[width=0.37\\textwidth]{figures\/si_vs_accuracy_full_scatter_resnet18.pdf}\n }\n \\end{subfloatrow*}\n\n \\caption{\\textbf{Increasing class selectivity has deleterious effects on test accuracy compared to reducing class selectivity.} (\\textbf{a}) Test accuracy (y-axis) as a function of regularization scale magnitude ($|\\alpha|$) for negative (blue) vs positive (red) values of $\\alpha$. Solid line in distributions denotes mean, dashed line denotes central two quartiles. **$p < 6\\times10^{-6}$ difference between $\\alpha < 0$ and $\\alpha > 0$, Wilcoxon rank-sum test, Bonferroni-corrected. (\\textbf{b}) Test accuracy (y-axis) as a function of mean class selectivity (x-axis). All results shown are for ResNet18.}\n \\vspace*{-0.9em}\n \\label{fig:pos_neg_comparison_resnet18}\n\\end{figure*} \n\nSurprisingly, we observe the latter outcome: increasing class selectivity negatively impacts network performance in ResNet18 trained on Tiny ImageNet (Figure \\ref{fig:acc_pos_full_resnet18}). 
Scaling the regularization has an immediate effect: test accuracy declines significantly even at the smallest tested value of $\\alpha$ ($p\\leq6\\times10^{-5}$ for all $\\alpha$, Bonferroni-corrected t-test) and falls catastrophically to $\\sim$25\\% by $\\alpha = 5.0$. The effect proceeds even more dramatically in ResNet20 trained on CIFAR10 (Figure \\ref{fig:apx_acc_pos_full_resnet20}). Note that we observed a correlation between the strength of regularization and the presence of dead units in ResNet20, but further analyses ruled this out as an explanation for the decline in test accuracy (see Appendix \\ref{appendix:relu}). One way to generate a very high selectivity index is for a unit to be silent for the vast majority of inputs and to have low activations for the remaining inputs. If this were the case, we would expect that regularizing to increase selectivity would cause units to be silent for the majority of inputs. However, we found that the majority of units were active for $\\geq$80\\% of inputs even at $\\alpha=0.7$, after significant performance deficits have emerged in both ResNet18 and ResNet20 (Appendix \\ref{appendix:si_control}). These findings rule out sparsity as a potential explanation for our results. The results are qualitatively similar for VGG16 (see Appendix \\ref{appendix:vgg}), indicating that increasing class selectivity beyond the levels that are learned naturally (i.e. without regularization, $\\alpha = 0$) impairs network performance.\n\n\\textbf{Recapitulation} \\ \\ We directly compare the effects of increasing vs. decreasing class selectivity in Figure \\ref{fig:pos_neg_comparison_resnet18} (and Appendix \\ref{appendix:misc}). 
The effects diverge immediately at $|\\alpha| = 0.1$, and suppressing class selectivity yields a 6\\% increase in test accuracy relative to increasing class selectivity by $|\\alpha|=2.0$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter*{\\centering Abstract}\n\\addcontentsline{toc}{chapter}{Abstract}\n\nA vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks.\n\nThe freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but often such problems are NP-hard to solve. A popular workaround to this has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) \\emph{relaxed} optimization problems. However this approach may be lossy and nevertheless presents significant challenges for large scale optimization.\n\nOn the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques -- popular heuristics include projected gradient descent and alternating minimization. However, these are often poorly understood in terms of their convergence and other properties.\n\nThis monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. 
We hope that an insight into the inner workings of these methods will allow the reader to appreciate the unique marriage of task structure and generative models that allows these heuristic techniques to (provably) succeed. The monograph will lead the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.\n\\chapter{Alternating Minimization}\n\\label{chap:altmin}\n\nIn this section we will introduce a widely used non-convex optimization primitive, namely the alternating minimization principle. The technique is extremely general and its popular use actually predates the recent advances in non-convex optimization by several decades. Indeed, the popular Lloyd's algorithm \\citep{Lloyd1982} for k-means clustering and the EM algorithm \\citep{DempsterLR1977} for latent variable models are problem-specific variants of the general alternating minimization principle. The technique continues to inspire new algorithms for several important non-convex optimization problems such as matrix completion, robust learning, phase retrieval and dictionary learning.\n\nGiven the popularity and breadth of use of this method, our task of presenting an introductory treatment will be even more challenging here. To keep the discussion focused on core principles and tools, we will refrain from presenting the alternating minimization principle in all its varieties. Instead, we will focus on showing, in a largely problem-independent manner, what challenges face alternating minimization when applied to real-life problems, and how they can be overcome. Subsequent sections will then show how this principle can be applied to various machine learning and signal processing tasks. 
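As a familiar instance of this principle, Lloyd's algorithm alternately minimizes the k-means objective over its two groups of variables: with centroids fixed, the optimal assignment sends each point to its nearest centroid; with assignments fixed, the optimal centroid of each cluster is its mean. A minimal numpy sketch (purely illustrative):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=50, seed=0):
    """Alternating minimization of the quantization objective
    sum_i ||x_i - c_{z_i}||^2 over assignments z and centroids c."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Fix centroids, minimize over assignments: nearest centroid wins.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        z = dists.argmin(axis=1)
        # Fix assignments, minimize over centroids: cluster means.
        for j in range(k):
            if np.any(z == j):
                centroids[j] = X[z == j].mean(axis=0)
    return z, centroids
```

Each of the two inner steps solves its marginal problem exactly, so the objective is monotonically non-increasing, a property that will recur throughout this chapter.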
In particular, \\S~\\ref{chap:em} will be devoted to the EM algorithm which embodies the alternating minimization principle and is extremely popular for latent variable estimation problems in statistics and machine learning.\n\nThe discussion will be divided into four parts. In the first part, we will look at some useful structural properties of functions that frequently arise in alternating minimization settings. In the second part, we will present a general implementation of the alternating minimization principle and discuss some challenges faced by this algorithm in offering convergent behavior in real-life problems. In the third part, as a warm-up exercise, we will show how these challenges can be overcome when the optimization problem being solved is convex. Finally in the fourth part, we will discuss the more interesting problem of convergence of alternating minimization for non-convex problems.\n\n\\section{Marginal Convexity and Other Properties}\nAlternating Minimization is most often utilized in settings where the optimization problem concerns two or more (groups of) variables. For example, recall the matrix completion problem in recommendation systems from \\S~\\ref{chap:intro} which involved two variables $U$ and $V$ denoting respectively, the latent factors for the users and the items. 
In several such cases, the optimization problem, more specifically the objective function, is not \\emph{jointly convex} in all the variables.\n\n\\clearpage\n\n\\begin{definition}[Joint Convexity]\n\\label{defn:joint-cvx-fn}\nA continuously differentiable function in two variables $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ is considered jointly convex if for every $(\\vx^1,\\vy^1), (\\vx^2,\\vy^2) \\in \\bR^p \\times \\bR^q$ we have\n\\[\nf(\\vx^2,\\vy^2) \\geq f(\\vx^1,\\vy^1) + \\ip{\\nabla f(\\vx^1,\\vy^1)}{(\\vx^2,\\vy^2) - (\\vx^1,\\vy^1)},\n\\]\nwhere $\\nabla f(\\vx^1,\\vy^1)$ is the gradient of $f$ at the point $(\\vx^1,\\vy^1)$.\n\\end{definition}\n\nThe definition of joint convexity is no different from the usual definition of convexity (Definition~\\ref{defn:cvx-fn}). Indeed the two coincide if we assume $f$ to be a function of a single variable $\\vz = (\\vx,\\vy) \\in \\bR^{p+q}$ instead of two variables. However, not all multivariate functions that arise in applications are jointly convex. This motivates the notion of \\emph{marginal convexity}.\n\n\\begin{definition}[Marginal Convexity]\n\\label{defn:marg-cvx-fn}\nA continuously differentiable function of two variables $f: \\bR^p \\times \\bR^q\\rightarrow \\bR$ is considered marginally convex in its first variable if for every value of $\\vy \\in \\bR^q$, the function $f(\\cdot,\\vy): \\bR^p \\rightarrow \\bR$ is convex, i.e., for every $\\vx^1,\\vx^2 \\in \\bR^p$, we have\n\\[\nf(\\vx^2,\\vy) \\geq f(\\vx^1,\\vy) + \\ip{\\nabla_\\vx f(\\vx^1,\\vy)}{\\vx^2 - \\vx^1},\n\\]\nwhere $\\nabla_\\vx f(\\vx^1,\\vy)$ is the partial gradient of $f$ with respect to its first variable at the point $(\\vx^1,\\vy)$. A similar condition is imposed for $f$ to be considered marginally convex in its second variable.\n\\end{definition}\nAlthough the definition above has been given for a function of two variables, it clearly extends to functions with an arbitrary number of variables. 
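A quick numerical check makes the gap between these two definitions concrete. The bilinear function $f(x,y) = x\\cdot y$ is linear, and hence convex, in each variable separately, yet its Hessian is indefinite, so it is not jointly convex (a toy sketch):

```python
import numpy as np

f = lambda x, y: x * y

# Joint Hessian of f(x, y) = x*y; its eigenvalues are -1 and +1,
# so f is neither jointly convex nor jointly concave.
H = np.array([[0.0, 1.0], [1.0, 0.0]])
eigs = np.linalg.eigvalsh(H)

# Marginal convexity in x at fixed y: the midpoint inequality holds
# (with equality, since f is linear in x).
y0, x1, x2 = 3.0, -2.0, 5.0
marginal_ok = f((x1 + x2) / 2, y0) <= (f(x1, y0) + f(x2, y0)) / 2

# Joint convexity fails along the direction (1, -1):
# g(t) = f(t, -t) = -t**2 is concave, so the midpoint inequality breaks.
z1, z2 = (1.0, -1.0), (-1.0, 1.0)
mid = ((z1[0] + z2[0]) / 2, (z1[1] + z2[1]) / 2)
joint_ok = f(*mid) <= (f(*z1) + f(*z2)) / 2
```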
It is interesting to note that whereas the objective function in the matrix completion problem mentioned earlier is not jointly convex in its variables, it is indeed marginally convex in both its variables\\elink{exer:altmin-marg-conv}.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{marg.pdf}\n\\caption[Marginal Convexity]{A marginally convex function is not necessarily (jointly) convex. The function $f(x,y) = x\\cdot y$ is marginally linear, hence marginally convex, in both its variables, but clearly not a (jointly) convex function.}%\n\\label{fig:marg}\n\\end{figure}\n\nIt is also useful to note that even though a function that is marginally convex in all its variables need not be a jointly convex function (see Figure~\\ref{fig:marg}), the converse is true\\elink{exer:altmin-joint-marg}. We will find the following notions of \\emph{marginal strong convexity and smoothness} to be especially useful in our subsequent discussions.\n\n\\begin{definition}[Marginally Strongly Convex\/Smooth Function]\n\\label{defn:marg-strong-cvx-smooth-fn}\nA continuously differentiable function $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ is considered (uniformly) $\\alpha$-marginally strongly convex (MSC) and (uniformly) $\\beta$-marginally strongly smooth (MSS) in its first variable if for every value of $\\vy \\in \\bR^q$, the function $f(\\cdot,\\vy): \\bR^p \\rightarrow \\bR$ is $\\alpha$-strongly convex and $\\beta$-strongly smooth, i.e., for every $\\vx^1,\\vx^2 \\in \\bR^p$, we have\n\\[\n\\frac{\\alpha}{2}\\norm{\\vx^2 - \\vx^1}_2^2 \\leq f(\\vx^2,\\vy) - f(\\vx^1,\\vy) - \\ip{\\vg}{\\vx^2 - \\vx^1} \\leq \\frac{\\beta}{2}\\norm{\\vx^2 - \\vx^1}_2^2,\n\\]\nwhere $\\vg = \\nabla_\\vx f(\\vx^1,\\vy)$ is the partial gradient of $f$ with respect to its first variable at the point $(\\vx^1,\\vy)$. 
A similar condition is imposed for $f$ to be considered (uniformly) MSC\/MSS in its second variable.\n\\end{definition}\n\nThe above notion is a ``uniform'' one since the parameters $\\alpha, \\beta$ do not depend on the $\\vy$ coordinate. It is instructive to relate MSC\/MSS to the RSC\/RSS properties from \\S~\\ref{chap:tools}. MSC\/MSS extend the idea of functions that are not ``globally'' convex (strongly or otherwise) but do exhibit such properties under ``qualifications''. MSC\/MSS use a different qualification than RSC\/RSS did. Note that a function that is MSC with respect to \\emph{all} its variables need not be a convex function\\elink{exer:altmin-msc-sc}.\n\n\\begin{algorithm}[t]\n\t\\caption{Generalized Alternating Minimization (gAM)}\n\t\\label{algo:gam}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Objective function $f: \\cX \\times \\cY \\rightarrow \\bR$\n\t\t\t\\ENSURE A point $(\\hat\\vx,\\hat\\vy) \\in \\cX \\times \\cY$ with near-optimal objective value\n\t\t\t\\STATE $(\\vx^1,\\vy^1) \\leftarrow \\text{\\textsf{INITIALIZE}}()$\n\t\t\t\\FOR{$t = 1, 2, \\ldots, T$}\n\t\t\t\t\\STATE $\\vx^{t+1} \\leftarrow \\arg\\min_{\\vx \\in \\cX} f(\\vx,\\vy^t)$\n\t\t\t\t\\STATE $\\vy^{t+1} \\leftarrow \\arg\\min_{\\vy \\in \\cY} f(\\vx^{t+1},\\vy)$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$(\\vx^T,\\vy^T)$}\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Generalized Alternating Minimization}\nThe alternating minimization algorithm (gAM) is outlined in Algorithm~\\ref{algo:gam} for an optimization problem on two variables constrained to the sets $\\cX$ and $\\cY$ respectively. The procedure can be easily extended to functions with more variables, or to more complicated constraint sets\\elink{exer:altmin-gen-am} of the form $\\cZ \\subset \\cX \\times \\cY$. 
After an initialization step, gAM alternately fixes one of the variables and optimizes over the other.\n\nThis approach of solving several intermediate \\emph{marginal} optimization problems instead of a single big problem is the key to the practical success of gAM. Alternating minimization is mostly used when these marginal problems are easy to solve. Later, we will see that there exist simple, often closed form solutions to these marginal problems for applications such as matrix completion, robust learning etc. \n\nThere also exist ``descent'' versions of gAM which do not completely perform marginal optimizations but take gradient steps along the variables instead. These are described below\n\\begin{align*}\n\\vx^{t+1} &\\leftarrow \\vx^t - \\eta_{t,1}\\cdot\\nabla_\\vx f(\\vx^t,\\vy^t)\\\\\n\\vy^{t+1} &\\leftarrow \\vy^t - \\eta_{t,2}\\cdot\\nabla_\\vy f(\\vx^{t+1},\\vy^t)\n\\end{align*}\nThese descent versions are often easier to execute but may also converge more slowly. If the problem is nicely structured, then progress made on the intermediate problems offers fast convergence to the optimum. However, from the point of view of convergence, gAM faces several challenges. To discuss those, we first introduce some more concepts.\n\n\\begin{definition}[Marginally Optimum Coordinate]\n\\label{defn:marg-opt}\nLet $f$ be a function of two variables constrained to be in the sets $\\cX,\\cY$ respectively. For any point $\\vy \\in \\cY$, we say that $\\tilde\\vx$ is a marginally optimal coordinate with respect to $\\vy$, and use the shorthand $\\tilde\\vx \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy)$, if $f(\\tilde\\vx,\\vy) \\leq f(\\vx,\\vy)$ for all $\\vx \\in \\cX$. 
Similarly for any $\\vx \\in \\cX$, we say $\\tilde\\vy \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$ if $\\tilde\\vy$ is a marginally optimal coordinate with respect to $\\vx$.\n\\end{definition}\n\n\\begin{definition}[Bistable Point]\n\\label{defn:bistable}\nGiven a function $f$ over two variables constrained within the sets $\\cX,\\cY$ respectively, a point $(\\vx,\\vy) \\in \\cX\\times\\cY$ is considered a \\emph{bistable} point if $\\vy \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$ and $\\vx \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy)$ i.e., both coordinates are marginally optimal with respect to each other.\n\\end{definition}\n\nIt is easy to see\\elink{exer:altmin-opt-bs} that the optimum of the optimization problem must be a bistable point. The reader can also verify that the gAM procedure must stop after it has reached a bistable point. However, two questions arise out of this. First, how fast does gAM approach a bistable point and second, even if it reaches a bistable point, is that point guaranteed to be (globally) optimal?\n\nThe first question will be explored in detail later. It is interesting to note that the gAM procedure has no parameters, such as step length. This can be interpreted as a benefit as well as a drawback. While it relieves the end-user from spending time tweaking parameters, it also means that the user has less control over the progress of the algorithm. Consequently, the convergence of the gAM procedure is totally dependent on structural properties of the optimization problem. In practice, it is common to switch between gAM updates as given in Algorithm~\\ref{algo:gam} and descent versions thereof discussed earlier. The descent versions do give a step length as a tunable parameter to the user.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{bistable.pdf}\n\\caption[Bistable Points and Convergence of gAM]{The first plot depicts the marginally optimal coordinate curves for the two variables whose intersection produces bistable points. 
gAM is adept at converging to bistable points for well-behaved functions. Note that gAM progresses only along a single variable at a time. Thus, in the 2-D example in the second plot, the progress lines are only vertical or horizontal. A function may have multiple bistable points, each with its own region of attraction, depicted as shaded circles.}\n\\label{fig:bistable}\n\\end{figure}\n\nThe second question requires a closer look at the interaction between the objective function and the gAM process. Figure~\\ref{fig:bistable} illustrates this with toy bi-variate functions over $\\bR^2$. In the first figure, the bold solid curve plots the function $g: \\cX \\rightarrow \\cX \\times \\cY$ with $g(\\vx) = (\\vx,\\text{\\textsf{mOPT}}_{\\!f}(\\vx))$ (in this toy case, the marginally optimal coordinates are taken to be unique for simplicity, i.e., $\\abs{\\text{\\textsf{mOPT}}_{\\!f}(\\vx)} = 1$ for all $\\vx \\in \\cX$). The bold dashed curve similarly plots $h: \\cY \\rightarrow \\cX \\times \\cY$ with $h(\\vy) = (\\text{\\textsf{mOPT}}_{\\!f}(\\vy),\\vy)$. These plots are quite handy in demonstrating the convergence properties of the gAM algorithm.\n\nIt is easy to see that bistable points lie precisely at the intersection of the bold solid and the bold dashed curves. The second illustration shows how the gAM process may behave when instantiated with this toy function -- clearly gAM exhibits rapid convergence to the bistable point. However, the third illustration shows that functions may have multiple bistable points. The figure shows that this may happen even if the marginally optimal coordinates are unique i.e., for every $\\vx$ there is a unique $\\tilde\\vy$ such that $\\tilde\\vy = \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$ and vice versa.\n\nIn case a function taking bounded values possesses multiple bistable points, the bistable point to which gAM eventually converges depends on where the procedure was initialized. 
This is exemplified in the third illustration where each bistable region has its own ``region of attraction''. If initialized inside a particular region, gAM converges to the bistable point corresponding to that region. This means that in order to converge to the globally optimal point, gAM must be initialized inside the region of attraction of the global optimum.\n\nThe above discussion shows that it may be crucial to properly initialize the gAM procedure to ensure convergence to the global optimum. Indeed, when discussing gAM-style algorithms for learning latent variable models, matrix completion and phase retrieval in later sections, we will pay special attention to initialize the procedure ``close'' to the optimum. The only exception will be that of robust regression in \\S~\\ref{chap:rreg} where it seems that the problem structure ensures a unique bistable point and so, a careful initialization is not required.\n\n\\section{A Convergence Guarantee for gAM for Convex Problems}\nAs usual, things do become nice when the optimization problems are convex. For instance, for differentiable convex functions, all bistable points are global minima\\elink{exer:altmin-conv-gam} and thus, converging to any one of them is sufficient. In fact, approaches similar to gAM, commonly known as \\emph{Coordinate Minimization} (CM), are extremely popular for large scale convex optimization problems. The CM procedure simply treats a single $p$-dimensional variable $\\vx \\in \\bR^p$ as $p$ one-dimensional variables $\\bc{\\vx_1,\\ldots,\\vx_p}$ and executes gAM-like steps with them, resulting in the intermediate problems being uni-dimensional. However, it is worth noting that gAM can struggle with non-differentiable objectives. \n\nIn the following, we will analyze the convergence of the gAM algorithm for the case when the objective function is jointly convex in both its variables. We will then see what happens to Algorithm~\\ref{algo:gam} if the function $f$ is non-convex. 
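Before stating a guarantee, it helps to watch Algorithm~\\ref{algo:gam} run on a toy jointly convex problem of our own choosing: $f(x,y) = (x-1)^2 + (y+2)^2 + xy$, whose Hessian is constant and positive definite, and whose marginal minimizations have closed forms (an illustrative sketch):

```python
import numpy as np

def f(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + x * y

def gam(T=60):
    x, y = 0.0, 0.0                    # initialize at (0, 0)
    for _ in range(T):
        x = 1.0 - y / 2.0              # argmin_x: solve 2(x - 1) + y = 0
        y = -2.0 - x / 2.0             # argmin_y: solve 2(y + 2) + x = 0
    return x, y

x_hat, y_hat = gam()
# At the unique bistable point both partial gradients vanish at once.
grad = np.array([2.0 * (x_hat - 1.0) + y_hat, 2.0 * (y_hat + 2.0) + x_hat])
```

The iterates contract to the unique bistable point $x = 8\/3$, $y = -10\/3$, where the full gradient vanishes; since this toy objective is strongly convex, the convergence here is in fact geometric.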
To keep the discussion focused, we will consider an unconstrained optimization problem.\n\n\\begin{theorem}\n\\label{thm:gam-smooth-cvx-proof}\nLet $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ be jointly convex, continuously differentiable, satisfy $\\beta$-MSS in both its variables, and $f^\\ast = \\min_{\\vx,\\vy}f(\\vx,\\vy) > -\\infty$. Let the region $S_0 = \\bc{\\vx,\\vy: f(\\vx,\\vy) \\leq f(\\vzero,\\vzero)} \\subset \\bR^{p+q}$ be bounded, i.e., satisfy $S_0 \\subseteq \\cB_2((\\vzero,\\vzero),R)$ for some $R > 0$. Let Algorithm~\\ref{algo:gam} be executed with the initialization $(\\vx^1,\\vy^1) = (\\vzero,\\vzero)$. Then after at most $T = \\bigO{\\frac{1}{\\epsilon}}$ steps, we have $f(\\vx^T,\\vy^T) \\leq f^\\ast + \\epsilon$.\n\\end{theorem}\n\\begin{proof}\nThe first property of the gAM algorithm that we need to appreciate is \\emph{monotonicity}. It is easy to see that due to the marginal minimizations carried out, we have at all time steps $t$,\n\\[\nf(\\vx^{t+1},\\vy^{t+1}) \\leq f(\\vx^{t+1},\\vy^t) \\leq f(\\vx^t,\\vy^t)\n\\]\nThe region $S_0$ is the \\emph{sublevel set} of $f$ at the initialization point. Due to the monotonicity property, we have $f(\\vx^t,\\vy^t) \\leq f(\\vx^1,\\vy^1)$ for all $t$ i.e., $(\\vx^t,\\vy^t) \\in S_0$ for all $t$. Thus, gAM remains restricted to the bounded region $S_0$ and does not diverge. We notice that this point underlies the importance of proper initialization: gAM benefits from being initialized at a point at which the sublevel set of $f$ is bounded.\n\nWe will use $\\Phi_t = \\frac{1}{f(\\vx^t,\\vy^t)-f^\\ast}$ as the potential function. This is a slightly unusual choice of potential function but its utility will be clear from the proof. Note that $\\Phi_t > 0$ for all $t$ and that convergence is equivalent to showing $\\Phi_t \\rightarrow \\infty$. We will, as before, use smoothness to analyze the per-iteration progress made by gAM and use convexity for global convergence analysis. 
For any time step $t \\geq 2$, consider the hypothetical update we could have made had we done a gradient step instead of the marginal minimization step gAM does in step 3.\n\\[\n\\tilde\\vx^{t+1} = \\vx^t - \\frac{1}{\\beta}\\nabla_\\vx f(\\vx^t,\\vy^t)\n\\]\n\\textbf{(Apply Marginal Strong Smoothness)} We get\n\\begin{align*}\nf(\\tilde\\vx^{t+1},\\vy^t) &\\leq f(\\vx^t,\\vy^t) + \\ip{\\nabla_\\vx f(\\vx^t,\\vy^t)}{\\tilde\\vx^{t+1} - \\vx^t} + \\frac{\\beta}{2}\\norm{\\tilde\\vx^{t+1} - \\vx^t}_2^2\\\\\n&= f(\\vx^t,\\vy^t) - \\frac{1}{2\\beta}\\norm{\\nabla_\\vx f(\\vx^t,\\vy^t)}_2^2\n\\end{align*}\n\\textbf{(Apply Monotonicity of gAM)} Since $\\vx^{t+1} \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy^t)$, we must have $f(\\vx^{t+1},\\vy^t) \\leq f(\\tilde\\vx^{t+1},\\vy^t)$, which gives us\n\\[\nf(\\vx^{t+1},\\vy^{t+1}) \\leq f(\\vx^{t+1},\\vy^t) \\leq f(\\vx^t,\\vy^t) - \\frac{1}{2\\beta}\\norm{\\nabla_\\vx f(\\vx^t,\\vy^t)}_2^2\n\\]\nNow since $t \\geq 2$, we must have had $\\vy^t \\in \\argmin_\\vy f(\\vx^t,\\vy)$. Since $f$ is differentiable, we must have (\\cite[see][Proposition 1.2]{Bubeck2015}) $\\nabla_\\vy f(\\vx^t,\\vy^t) = \\vzero$. Applying Pythagoras' theorem now gives us $\\norm{\\nabla f(\\vx^t,\\vy^t)}_2^2 = \\norm{\\nabla_\\vx f(\\vx^t,\\vy^t)}_2^2$.\\\\\n\n\\noindent\\textbf{(Apply Convexity)} Since $f$ is jointly convex, we can state\n\\[\nf(\\vx^t,\\vy^t) - f^\\ast \\leq \\ip{\\nabla f(\\vx^t,\\vy^t)}{(\\vx^t,\\vy^t)-(\\vx^\\ast,\\vy^\\ast)} \\leq 2R\\norm{\\nabla f(\\vx^t,\\vy^t)}_2,\n\\]\nwhere we have used the Cauchy-Schwarz inequality and the fact that $(\\vx^t,\\vy^t),(\\vx^\\ast,\\vy^\\ast) \\in S_0$. 
Putting these together gives us\n\\[\nf(\\vx^{t+1},\\vy^{t+1}) \\leq f(\\vx^t,\\vy^t) - \\frac{1}{4\\beta R^2}\\br{f(\\vx^t,\\vy^t) - f^\\ast}^2,\n\\]\nor in other words,\n\\[\n\\frac{1}{\\Phi_{t+1}} \\leq \\frac{1}{\\Phi_t} - \\frac{1}{4\\beta R^2}\\frac{1}{\\Phi_t^2} \\leq \\frac{1}{\\Phi_t} - \\frac{1}{4\\beta R^2}\\frac{1}{\\Phi_t\\Phi_{t+1}},\n\\]\nwhere the second step follows from monotonicity. Rearranging gives us\n\\[\n\\Phi_{t+1} - \\Phi_t \\geq \\frac{1}{4\\beta R^2},\n\\]\nwhich upon telescoping, and using $\\Phi_2 \\geq 0$ gives us\n\\[\n\\Phi_T \\geq \\frac{T}{4\\beta R^2},\n\\]\nwhich proves the result. Note that the result holds even if $f$ is jointly convex and satisfies the MSS property only \\emph{locally} in the region $S_0$.\n\\end{proof}\n\n\\section{A Convergence Guarantee for gAM under MSC\/MSS}\nWe will now investigate what happens when the gAM algorithm is executed with a non-convex function. Note that the previous result crucially uses convexity and will not extend here. Moreover, for non-convex functions, there is no assurance that all bistable points are global minima. Instead we will have to fend for fast convergence to a bistable point, as well as show that the function is globally minimized there.\n\nDoing the above will require additional structure on the objective function. In the following, we will denote $f^\\ast = \\min_{\\vx,\\vy}f(\\vx,\\vy)$ to be the optimum value of the objective function. We will fix $(\\vx^\\ast,\\vy^\\ast)$ to be any point such that $f(\\vx^\\ast,\\vy^\\ast) = f^\\ast$ (there may be several). 
We will also let $\\cZ^\\ast \\subset \\bR^p \\times \\bR^q$ denote the set of all bistable points for $f$.\n\nFirst of all, notice that if a continuously differentiable function $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ is marginally convex (strongly or otherwise) in both its variables, then its bistable points are exactly its stationary points.\n\n\\begin{lemma}\n\\label{lemma:bistable-stat}\nA point $(\\vx,\\vy)$ is bistable with respect to a continuously differentiable function $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ that is marginally convex in both its variables iff $\\nabla f(\\vx,\\vy) = \\vzero$.\n\\end{lemma}\n\\begin{proof}\nIt is easy to see that partial derivatives must vanish at a bistable point since the function is differentiable (\\cite[see][Proposition 1.2]{Bubeck2015}) and thus we get $\\nabla f(\\vx,\\vy) = [\\nabla_\\vx f(\\vx,\\vy),\\nabla_\\vy f(\\vx,\\vy)] = \\vzero$. Arguing the other way round, if the gradient, and by extension the partial derivatives, vanish at $(\\vx,\\vy)$, then by marginal convexity, for any $\\vx'$\n\\[\nf(\\vx',\\vy) - f(\\vx,\\vy) \\geq \\ip{\\nabla_\\vx f(\\vx,\\vy)}{\\vx'-\\vx} = 0.\n\\]\nSimilarly, $f(\\vx,\\vy') \\geq f(\\vx,\\vy)$ for any $\\vy'$. Thus $(\\vx,\\vy)$ is bistable.\n\\end{proof}\n\nThe above tells us that $\\cZ^\\ast$ is also the set of all stationary points of $f$. However, not all points in $\\cZ^\\ast$ may be global minima. Addressing this problem requires careful initialization and problem-specific analysis, which we will carry out for problems such as matrix completion in later sections. For now, we introduce a generic \\emph{robust} bistability property that will be very useful in the analysis.
Similar properties are frequently used in the analysis of gAM-style algorithms.\n\n\\begin{definition}[Robust Bistability Property]\n\\label{defn:rob-bistable}\nA function $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ satisfies the $C$-robust bistability property if for some $C > 0$, for every $(\\vx,\\vy) \\in \\bR^p \\times \\bR^q$, $\\tilde\\vy \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$ and $\\tilde\\vx \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy)$, we have\n\\[\nf(\\vx,\\vy^\\ast) + f(\\vx^\\ast,\\vy) - 2f^\\ast \\leq C\\cdot\\br{2f(\\vx,\\vy) - f(\\vx,\\tilde\\vy) - f(\\tilde\\vx,\\vy)}.\n\\]\n\\end{definition}\n\nThe right hand expression captures how much one can reduce the function value \\emph{locally} by performing marginal optimizations. The property suggests\\elink{exer:altmin-robust-bistable} that if not much local improvement can be made (i.e., if $f(\\vx,\\tilde\\vy) \\approx f(\\vx,\\vy) \\approx f(\\tilde\\vx,\\vy)$) then we are close to the optimum. This has a simple corollary that all bistable points achieve the (globally) optimal function value. We now present a convergence analysis for gAM.\n\n\\begin{theorem}\n\\label{thm:gam-msc-mss-proof}\nLet $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ be a continuously differentiable (but possibly non-convex) function that, within the region $S_0 = \\bc{\\vx,\\vy: f(\\vx,\\vy) \\leq f(\\vzero,\\vzero)} \\subset \\bR^{p+q}$, satisfies the properties of $\\alpha$-MSC, $\\beta$-MSS in both its variables, and $C$-robust bistability. Let Algorithm~\\ref{algo:gam} be executed with the initialization $(\\vx^1,\\vy^1) = (\\vzero,\\vzero)$. Then after at most $T = \\bigO{\\log\\frac{1}{\\epsilon}}$ steps, we have $f(\\vx^T,\\vy^T) \\leq f^\\ast + \\epsilon$.\n\\end{theorem}\nNote that the MSC\/MSS and robust bistability properties need only hold within the sublevel set $S_0$. This again underlines the importance of proper initialization. 
Also note that gAM offers rapid convergence despite the non-convexity of the objective. In order to prove the result, the following consequence of $C$-robust bistability will be useful.\n\\begin{lemma}\n\\label{lemma:local-conv-gam}\nLet $f$ satisfy the properties mentioned in Theorem~\\ref{thm:gam-msc-mss-proof}. Then for any $(\\vx,\\vy) \\in \\bR^p \\times \\bR^q$, $\\tilde\\vy \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$ and $\\tilde\\vx \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy)$,\n\\[\n\\norm{\\vx-\\vx^\\ast}_2^2 + \\norm{\\vy-\\vy^\\ast}_2^2 \\leq \\frac{C\\beta}{\\alpha}\\br{\\norm{\\vx-\\tilde\\vx}_2^2 + \\norm{\\vy-\\tilde\\vy}_2^2}\n\\]\n\\end{lemma}\n\\begin{proof}\nApplying MSC\/MSS repeatedly gives us\n\\begin{align*}\nf(\\vx,\\vy^\\ast) + f(\\vx^\\ast,\\vy) &\\geq 2f^\\ast + \\frac{\\alpha}{2}\\br{\\norm{\\vx-\\vx^\\ast}_2^2 + \\norm{\\vy-\\vy^\\ast}_2^2}\\\\\n2f(\\vx,\\vy) &\\leq f(\\vx,\\tilde\\vy) + f(\\tilde\\vx,\\vy) + \\frac{\\beta}{2}\\br{\\norm{\\vx-\\tilde\\vx}_2^2+\\norm{\\vy-\\tilde\\vy}_2^2}\n\\end{align*}\nApplying robust bistability then proves the result.\n\\end{proof}\nIt is noteworthy that Lemma~\\ref{lemma:local-conv-gam} relates local convergence to global convergence and assures us that reaching an almost bistable point is akin to converging to the optimum. Such a result can be crucial, especially for non-convex problems. Indeed, similar properties are used in other proofs concerning coordinate minimization as well, for example, the \\emph{local error bound} used in \\citet{LuoT1993}.\n\\begin{proof}[Proof (of Theorem~\\ref{thm:gam-msc-mss-proof}).]\nWe will use $\\Phi_t = f(\\vx^t,\\vy^t) - f^\\ast$ as the potential function.
Since the intermediate steps in gAM are marginal optimizations and not gradient steps, we will actually find it useful to apply marginal strong convexity at a local level, and apply marginal strong smoothness at a global level instead.\\\\\n\n\\noindent\\textbf{(Apply Marginal Strong Smoothness)} As $\\nabla_\\vx f(\\vx^\\ast,\\vy^\\ast) = \\vzero$, applying MSS gives us\n\\[\nf(\\vx^{t+1},\\vy^\\ast) - f(\\vx^\\ast,\\vy^\\ast) \\leq \\frac{\\beta}{2}\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2.\n\\]\nFurther, the gAM updates ensure $\\vy^{t+1} \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx^{t+1})$, which gives\n\\[\n\\Phi_{t+1} = f(\\vx^{t+1},\\vy^{t+1}) - f^\\ast \\leq f(\\vx^{t+1},\\vy^\\ast) - f^\\ast \\leq \\frac{\\beta}{2}\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2.\n\\]\n\n\\noindent\\textbf{(Apply Marginal Strong Convexity)} Since $\\nabla_\\vx f(\\vx^{t+1},\\vy^t) = \\vzero$,\n\\begin{align*}\nf(\\vx^t,\\vy^t) &\\geq f(\\vx^{t+1},\\vy^t) + \\frac{\\alpha}{2}\\norm{\\vx^{t+1}-\\vx^t}_2^2\\\\\n&\\geq f(\\vx^{t+1},\\vy^{t+1}) + \\frac{\\alpha}{2}\\norm{\\vx^{t+1}-\\vx^t}_2^2,\n\\end{align*}\nwhich gives us\n\\[\n\\Phi_t - \\Phi_{t+1} \\geq \\frac{\\alpha}{2}\\norm{\\vx^{t+1}-\\vx^t}_2^2.\n\\]\nThis shows that appreciable progress is made in a single step. Now, with $(\\vx^t,\\vy^t)$ for any $t \\geq 2$, due to the nature of the gAM updates, we know that $\\vy^t \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx^t)$ and $\\vx^{t+1} \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy^t)$.
Applying Lemma~\\ref{lemma:local-conv-gam} then gives us the following inequality\n\\[\n\\norm{\\vx^t-\\vx^\\ast}_2^2 \\leq \\norm{\\vx^t-\\vx^\\ast}_2^2 + \\norm{\\vy^t-\\vy^\\ast}_2^2 \\leq \\frac{C\\beta}{\\alpha}\\norm{\\vx^t-\\vx^{t+1}}_2^2\n\\]\nPutting these together and using $(a+b)^2 \\leq 2(a^2+b^2)$ gives us\n\\begin{align*}\n\\Phi_{t+1} &\\leq \\frac{\\beta}{2}\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2 \\leq \\beta\\br{\\norm{\\vx^{t+1}-\\vx^t}_2^2 + \\norm{\\vx^t-\\vx^\\ast}_2^2}\\\\\n\t\t\t\t\t &\\leq \\beta(1 + C\\kappa)\\norm{\\vx^{t+1}-\\vx^t}_2^2 \\leq 2\\kappa(1 + C\\kappa)\\br{\\Phi_t - \\Phi_{t+1}},\n\\end{align*}\nwhere $\\kappa = \\frac{\\beta}{\\alpha}$ is the effective condition number of the problem. Rearranging gives us\n\\[\n\\Phi_{t+1} \\leq \\eta_0\\cdot\\Phi_t,\n\\]\nwhere $\\eta_0 = \\frac{2\\kappa(1 + C\\kappa)}{1+2\\kappa(1 + C\\kappa)} < 1$ which proves the result.\n\\end{proof}\n\nNotice that the condition number $\\kappa$ makes an appearance in the convergence rate of the algorithm but this time, with a fresh definition in terms of the MSC\/MSS parameters. As before, small values of $\\kappa$ and $C$ ensure fast convergence, whereas large values of $\\kappa,C$ promote $\\eta_0 \\rightarrow 1$ which slows the procedure down.\n\nBefore we conclude, we remind the reader that in later sections, we will see more precise analyses of gAM-style approaches, and the structural assumptions will be more problem specific. 
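To complement the guarantees above, the alternating updates of gAM can be traced numerically. The following is a minimal sketch on a toy jointly convex quadratic (the function $f(x,y) = x^2 + y^2 + xy - 3x$, the iteration count, and all variable names are our own illustrative choices, not from the text); its marginal minimizers are available in closed form, so steps 3 and 4 of gAM become one-line updates, and the iterates converge to the minimizer $(2,-1)$ with a monotonically decreasing objective, as the proofs predict.

```python
# Toy jointly convex quadratic (illustrative choice, not from the text):
# f(x, y) = x^2 + y^2 + x*y - 3x, minimized at (x*, y*) = (2, -1), f* = -3.
def f(x, y):
    return x ** 2 + y ** 2 + x * y - 3.0 * x

x, y = 0.0, 0.0                      # initialization (0, 0)
vals = [f(x, y)]
for _ in range(50):
    x = (3.0 - y) / 2.0              # x_{t+1} = argmin_x f(x, y_t): set df/dx = 0
    y = -x / 2.0                     # y_{t+1} = argmin_y f(x_{t+1}, y): set df/dy = 0
    vals.append(f(x, y))
```

Each marginal step can only lower the objective, so the recorded values are non-increasing, mirroring the monotonicity property used repeatedly in the proofs.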
However, we hope the preceding discussion has provided some insight into the inner workings of alternating minimization techniques.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:altmin-marg-conv}\nRecall the low-rank matrix completion problem in recommendation systems from \\S~\\ref{chap:intro}\n\\[\n\\hat A_\\text{lv} = \\underset{\\substack{U \\in \\bR^{m \\times r}\\\\V \\in \\bR^{n \\times r}}}{\\min}\\ \\sum_{(i,j) \\in \\Omega}\\br{U_i^\\top V_j - A_{ij}}^2.\n\\]\nShow that the objective in this optimization problem is not jointly convex in $U$ and $V$. Then show that the objective is nevertheless marginally convex in both the variables.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-joint-marg}\nShow that a function that is jointly convex is necessarily marginally convex as well. Similarly show that a (jointly) strongly convex and smooth function is marginally so as well.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-msc-sc}\nMarginal strong convexity does not imply convexity. Show this by giving an example of a function $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ that is marginally strongly convex in \\emph{both} its variables, but non-convex.\\\\\n\\textit{Hint}: use the fact that the function $f(x) = x^2$ is $2$-strongly convex.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-gen-am}\nDesign a variant of the gAM procedure that can handle a general constraint set $\\cZ \\subset \\cX \\times \\cY$. Attempt to analyze the convergence of your algorithm.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-opt-bs}\nShow that $(\\vx^\\ast,\\vy^\\ast) \\in \\arg\\min_{\\vx\\in\\cX,\\vy\\in\\cY}f(\\vx,\\vy)$ must be a bistable point for any function even if $f$ is non-convex.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-conv-gam}\nLet $f: \\bR^p \\times \\bR^q \\rightarrow \\bR$ be a differentiable, \\emph{jointly} convex function.
Show that any bistable point of $f$ is a global minimum for $f$.\\\\\n\\textit{Hint}: first show that directional derivatives vanish at bistable points.\n\\end{exer}\n\\begin{exer}\n\\label{exer:altmin-robust-bistable}\nFor a robustly bistable function $f$, any \\emph{almost} bistable point is \\emph{almost} optimal as well. Show this by proving, for any $(\\vx,\\vy)$, $\\tilde\\vy \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vx)$, $\\tilde\\vx \\in \\text{\\textsf{mOPT}}_{\\!f}(\\vy)$ such that $\\max\\bc{f(\\vx,\\tilde\\vy),f(\\tilde\\vx,\\vy)} \\leq f(\\vx,\\vy) + \\epsilon$, that $f(\\vx,\\vy) \\leq f^\\ast + \\bigO{\\epsilon}$. Conclude that if $f$ satisfies robust bistability, then any bistable point $(\\vx,\\vy) \\in \\cZ^\\ast$ is optimal.\n\\end{exer}\n\\begin{exer}\nShow that marginal strong convexity is additive i.e., if $f, g: \\bR^p \\times \\bR^q \\rightarrow \\bR$ are two functions such that $f$ is respectively $\\alpha_1$ and $\\alpha_2$-MSC in its two variables and $g$ is $\\bar\\alpha_1$ and $\\bar\\alpha_2$-MSC in its variables, then the function $f+g$ is $(\\alpha_1+\\bar\\alpha_1)$ and $(\\alpha_2+\\bar\\alpha_2)$-MSC in its variables.\n\\end{exer}\n\\begin{exer}\nThe alternating minimization procedure may oscillate if the optimization problem is not well-behaved. Suppose for an especially nasty problem, the gAM procedure enters into the following loop\n\\[\n(\\vx^t,\\vy^t) \\rightarrow (\\vx^{t+1},\\vy^t) \\rightarrow (\\vx^{t+1},\\vy^{t+1}) \\rightarrow (\\vx^t,\\vy^{t+1}) \\rightarrow (\\vx^t, \\vy^t)\n\\]\nShow that all four points in the loop are bistable and share the same function value. Can you draw a hypothetical set of marginally optimal coordinate curves which may cause this to happen (see Figure~\\ref{fig:bistable})?\n\\end{exer}\n\n\\section{Bibliographic Notes}\n\\label{sec:altmin-bib}\nThe descent version for CM is aptly named \\emph{Coordinate Descent} (CD) and only takes descent steps along the coordinates \\citep{SahaT2013}. 
There exist versions of CM and CD for constrained optimization problems as well \\citep{LuoT1993,Nesterov2012}. A variant of CM\/CD splits variables into \\emph{blocks} of multi-dimensional variables. The resulting algorithm is appropriately named \\emph{Block Coordinate Descent} and minimizes over one block of variables at each time step.\n\nThe coordinate\/block being processed at each step is chosen carefully to ensure rapid convergence. Several ``rules'' exist for this, for example, the Gauss-Southwell rule (that chooses the coordinate\/block along which the objective gradient is the largest), the cyclic rule that simply performs a round-robin selection of coordinates\/blocks, and random choice that chooses a random coordinate\/block at each time step, independent of previous such choices.\n\nFor several problem areas such as support vector machines, CM\/CD methods are at the heart of some of the fastest solvers available due to their speed and ease of implementation \\citep{FanCHWL2008,LuoT1992,Shalev-ShwartzZ2013}.\n\n\\chapter{The EM Algorithm}\n\\label{chap:em}\n\n\\newcommand{\\vth}{\\vtheta}\n\nIn this chapter we will take a look at the \\emph{Expectation Maximization} (EM) principle. The principle forms the basis for widely used learning algorithms such as those used for learning Gaussian mixture models, the Baum-Welch algorithm for learning hidden Markov models (HMM), and mixed regression. The EM algorithm is also a close cousin of Lloyd's algorithm for clustering with the k-means objective.\n\nAlthough the EM algorithm, at a surface level, follows the alternating minimization principle which we studied in \\S~\\ref{chap:altmin}, given its wide applicability in learning latent variable models in probabilistic learning settings, we feel it is instructive to invest in a deeper understanding of the EM method.
To make the reading experience self-contained, we will first devote some time developing intuitions and notation in probabilistic learning methods.\\\\\n\n\\noindent\\textbf{Notation} A parametric distribution over a domain $\\cX$ with parameter $\\vtheta$ is denoted by $f(\\cdot\\cond\\vtheta)$ or $f_\\vtheta$. The notation is abused to let $f(\\vx\\cond\\vtheta)$ denote the probability mass or density function (as the case may be) of the distribution at the point $\\vx \\in \\cX$. The notation is also abused to let $X \\sim f(\\cdot\\cond\\vtheta)$ or $X \\sim f_\\vtheta$ denote a sample drawn from this distribution.\n\n\\section{A Primer in Probabilistic Machine Learning}\n\\label{sec:em-pml}\nSuppose we observe i.i.d. samples $\\vx_1,\\vx_2,\\ldots,\\vx_n$ of a random variable $X \\in \\cX$ drawn from an unknown distribution $f^\\ast$. Suppose also, that it is known that the distribution generating these data samples belongs to a \\emph{parametric family} of distributions $\\cF = \\bc{f_\\vtheta: \\vtheta \\in \\Theta}$ such that $f^\\ast = f_{\\vth^\\ast}$ for some unknown parameter $\\vth^\\ast \\in \\Theta$.\n\nHow may we recover (an accurate estimate of) the true parameter $\\vth^\\ast$, using only the samples $\\vx_1,\\ldots,\\vx_n$? There are several ways to do so, popular among them being the \\emph{maximum likelihood} estimate. Since the samples were generated independently, one can, for any parameter $\\vtheta^0 \\in \\Theta$, write their joint density function as follows\n\\[\nf(\\vx_1,\\vx_2,\\ldots,\\vx_n\\cond\\vtheta^0) = \\prod_{i=1}^nf(\\vx_i\\cond\\vtheta^0)\n\\]\nThe above quantity is also known as the \\emph{likelihood} of the data parametrized on $\\vtheta^0$ as it captures the probability that the observed data was generated by the parameter $\\vtheta^0$. 
In fact, we can go ahead and define the likelihood function for any parameter $\\vtheta \\in \\Theta$ as follows\n\\[\n\\cL(\\vtheta;\\vx_1,\\vx_2,\\ldots,\\vx_n) := f(\\vx_1,\\vx_2,\\ldots,\\vx_n\\cond\\vtheta)\n\\]\nThe maximum likelihood estimate (MLE) of $\\vth^\\ast$ is simply the parameter that maximizes the above likelihood function, i.e., the parameter which seems to be the ``most likely'' to have generated the data.\n\\[\n\\hat\\vtheta_{\\text{MLE}} := \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ \\cL(\\vtheta;\\vx_1,\\vx_2,\\ldots,\\vx_n)\n\\]\nIt is interesting to study the convergence of $\\hat\\vtheta_{\\text{MLE}} \\rightarrow \\vth^\\ast$ as $n \\rightarrow \\infty$, but we will not do so in this monograph. We note that there do exist other estimation techniques apart from MLE, such as the \\emph{Maximum a Posteriori} (MAP) estimate that incorporates a \\emph{prior} distribution over $\\vtheta$, but we will not discuss those here either.\\\\\n\n\\noindent\\textbf{Least Squares Regression} As a warmup exercise, let us take the example of linear regression and reformulate it in a probabilistic setting to better understand the above framework. Let $y \\in \\bR$ and $\\bt,\\vx \\in \\bR^p$ and consider the following parametric distribution over the set of reals, parametrized by $\\bt$ and $\\vx$\n\\[\nf(y\\cond\\vx,\\bt) = \\frac{1}{\\sqrt{2\\pi}}\\exp\\br{-\\frac{(y - \\vx^\\top\\bt)^2}{2}}.\n\\]\nNote that this distribution exactly encodes the responses in a linear regression model with unit variance Gaussian noise. More specifically, if $y \\sim f(\\cdot\\cond\\vx,\\bt)$, then\n\\[\ny \\sim \\cN(\\vx^\\top\\bt,1)\n\\]\nThe above observation allows us to cast linear regression as a parameter estimation problem.
Consider the parametric distribution family\n\\[\n\\cF = \\bc{f_\\bt = f(\\cdot\\cond\\cdot,\\bt) : \\norm{\\bt}_2 \\leq 1}.\n\\]\nSuppose now that we have $n$ covariate samples $\\vx_1,\\vx_2,\\ldots,\\vx_n$ and there is a true parameter $\\bto$ such that the distribution $f_{\\bto} \\in \\cF$ (i.e., $\\norm{\\bto}_2 \\leq 1$) is used to generate the responses, i.e., $y_i \\sim f(\\cdot\\cond\\vx_i,\\bto)$. It is easy to see that the likelihood function in this setting is\\footnote{The reader would notice that we are modeling only the process that generates the responses \\emph{given} the covariates. However, this is just for the sake of simplicity. It is possible to model the process that generates the covariates $\\vx_i$ as well using, for example, a mixture of Gaussians (that we will study in this very section). A model that accounts for the generation of both $\\vx_i$ and $y_i$ is called a \\emph{generative} model.}\n\\[\n\\cL(\\bt;\\bc{(\\vx_i,y_i)}_{i=1}^n) = \\prod_{i=1}^nf(y_i\\cond\\vx_i,\\bt) = \\frac{1}{(2\\pi)^{n\/2}}\\prod_{i=1}^n\\exp\\br{-\\frac{(y_i - \\vx_i^\\top\\bt)^2}{2}}\n\\]\nSince the logarithm function is a strictly increasing function, maximizing the \\emph{log-likelihood} will also yield the MLE, i.e.,\n\\[\n\\bth_{\\text{MLE}} = \\underset{\\norm{\\bt}_2 \\leq 1}{\\arg\\max}\\ \\log\\cL(\\bt;\\bc{(\\vx_i,y_i)}_{i=1}^n) = \\underset{\\norm{\\bt}_2 \\leq 1}{\\arg\\min}\\ \\sum_{i=1}^n(y_i - \\vx_i^\\top\\bt)^2\n\\]\nThus, the MLE for linear regression under Gaussian noise is nothing but the common least squares estimate! The theory of maximum likelihood estimates and their consistency properties is well studied and, under suitable conditions, we indeed have $\\bth_{\\text{MLE}} \\rightarrow \\bto$.
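The equivalence between the Gaussian-noise MLE and least squares is easy to verify numerically. The sketch below is our own toy illustration (the data, dimensions, and all variable names are assumptions, not from the text; the norm constraint on $\bt$ is assumed inactive at the optimum and hence ignored): it fits the least-squares estimate and checks that it maximizes the Gaussian log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))                    # covariates x_1, ..., x_n
theta_true = np.array([0.5, -0.3, 0.2])        # ||theta||_2 <= 1
y = X @ theta_true + rng.normal(size=n)        # unit-variance Gaussian noise

def log_likelihood(theta):
    # log prod_i f(y_i | x_i, theta) for the unit-variance Gaussian model
    resid = y - X @ theta
    return -0.5 * n * np.log(2.0 * np.pi) - 0.5 * np.sum(resid ** 2)

# Least-squares estimate; by the derivation above this is also the MLE
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Perturbing `theta_hat` in any direction can only decrease `log_likelihood`, which is exactly the $\arg\max$/$\arg\min$ equivalence in the display above.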
ML estimators are members of a more general class of estimators known as \\emph{M-estimators} \\citep{HuberR2009}.\n\n\\section{Problem Formulation}\nIn practical applications, the probabilistic models one encounters are often more challenging than the one for linear regression due to the presence of \\emph{latent variables}, which necessitates the use of more careful routines like the EM algorithm to even calculate the MLE.\n\nConsider a statistical model that generates two random variables $Y \\in \\cY$ and $Z \\in \\cZ$, instead of one, using a distribution from a parametric family $\\cF = \\bc{f_\\vtheta = f(\\cdot, \\cdot \\cond \\vtheta): \\vtheta \\in \\Theta}$, i.e., $(Y,Z) \\sim f_{\\vth^\\ast}$ for some $\\vth^\\ast \\in \\Theta$. However, we only get to see realizations of the $Y$ components and not the $Z$ components. More specifically, although the (unknown) parameter $\\vth^\\ast$ generates pairs of samples $(\\vy_1,\\vz_1), (\\vy_2,\\vz_2), \\ldots, (\\vy_n,\\vz_n)$, only $\\vy_1,\\vy_2,\\ldots,\\vy_n$ are revealed to us. The missing $Z$ components are often called \\emph{latent variables} since they are hidden from us.\n\nThe above situation arises in several practical settings in data modeling, clustering, and analysis. We encourage the reader to momentarily skip to \\S~\\ref{sec:em-app} to look at a few nice examples before returning to proceed with this discussion. A first attempt at obtaining the MLE in such a scenario would be to maximize the \\emph{marginal likelihood} function instead. Assume for the sake of simplicity that the support set of the random variable $Z$, i.e., $\\cZ$, is discrete.
Then the marginal likelihood function is defined as the following\n\\[\n\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n) = \\prod_{i=1}^nf(\\vy_i\\cond\\vtheta) = \\prod_{i=1}^n\\sum_{\\vz_i\\in\\cZ}f(\\vy_i,\\vz_i\\cond\\vtheta).\n\\]\nIn most practical situations, using the marginal likelihood function $\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n)$ to perform ML estimation becomes intractable since the expression on the right hand side, when expanded as a sum, contains $\\abs{\\cZ}^n$ terms, which makes it difficult to even write down the expression fully, let alone optimize using it as an objective function!\n\nFor comparison, the log-likelihood expression for the linear regression problem with $n$ data points (the least squares expression) was a summation of $n$ terms. Indeed, the problem of maximizing the marginal likelihood function $\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n)$ is often NP-hard and, as a consequence, direct optimization techniques for finding the MLE fail for even small scale problems.\n\n\\begin{algorithm}[t]\n\t\\caption{AltMax for Latent Variable Models (AM-LVM)}\n\t\\label{algo:em-hard-altopt}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Data points $\\vy_1,\\ldots,\\vy_n$\n\t\t\t\\ENSURE An approximate MLE $\\hat\\vtheta \\in \\Theta$\n\t\t\t\\STATE $\\vtheta^1 \\leftarrow \\text{\\textsf{INITIALIZE}}()$\n\t\t\t\\FOR{$t = 1, 2, \\ldots$}\n\t\t\t\t\\FOR{$i = 1, 2, \\ldots, n$}\n\t\t\t\t\t\\STATE $\\hat\\vz_i^t \\leftarrow \\underset{\\vz \\in \\cZ}{\\arg\\max}\\ f(\\vz\\cond\\vy_i,\\vtheta^t)$ \\hfill {(Estimate latent variables)}%\n\t\t\t\t\\ENDFOR\n\t\t\t\t\\STATE $\\vtheta^{t+1} \\leftarrow \\underset{\\vtheta\\in\\Theta}{\\arg\\max}\\ \\log\\cL(\\vtheta;\\bc{(\\vy_i,\\hat\\vz_i^t)}_{i=1}^n)$ \\hfill {(Update parameter)}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\btt$}\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{An Alternating Maximization Approach}\nNotice that the key reason for the intractability of the MLE 
problem in the previous discussion was the missing information about the latent variables. Had the latent variables $\\vz_1,\\ldots,\\vz_n$ been magically provided to us, it would have been simple to find the MLE solution as follows\n\\[\n\\hat\\vtheta_{\\text{MLE}} = \\underset{\\vtheta\\in\\Theta}{\\arg\\max}\\ \\log\\cL(\\vtheta;\\bc{(\\vy_i,\\vz_i)}_{i=1}^n)\n\\]\nHowever, notice that it is also true that, had the identity of the true parameter $\\vtheta^\\ast$ been provided to us (again magically), it would have been simple to estimate the latent variables using a maximum posterior probability estimate for $\\vz_i$ as follows\n\\[\n\\hat\\vz_i = \\underset{\\vz \\in \\cZ}{\\arg\\max}\\ f(\\vz\\cond\\vy_i,\\vtheta^\\ast)\n\\]\n\nGiven the above, it is tempting to apply a gAM-style algorithm to solve the MLE problem in the presence of latent variables. Algorithm~\\ref{algo:em-hard-altopt} outlines such an adaptation of gAM to the latent variable learning problem. Note that steps 4 and 6 in the algorithm can be very efficiently carried out for several problem cases. In fact, it can be shown\\elink{exer:em-em-k-means} that for the Gaussian mixture modeling problem, AM-LVM reduces to the popular Lloyd's algorithm for k-means clustering.\n\nHowever, the AM-LVM algorithm has certain drawbacks, especially when the space of latent variables $\\cZ$ is large. At every time step $t$, AM-LVM makes a ``hard assignment'', assigning the data point $\\vy_i$ to just one value of the latent variable $\\hat\\vz_i^t \\in \\cZ$. This can amount to throwing away a lot of information, especially when there may be other values $\\vz' \\in \\cZ$ present such that $f(\\vz'\\cond\\vy_i,\\vtheta^t)$ is also large but nevertheless $f(\\vz'\\cond\\vy_i,\\vtheta^t) < f(\\vz^t_i\\cond\\vy_i,\\vtheta^t)$, so that AM-LVM neglects $\\vz'$.
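The hard assignments of AM-LVM are easy to see in code. Below is a minimal sketch for a two-component, unit-variance Gaussian mixture with equal mixing weights, where the parameter is the pair of means (the toy data, initialization, and variable names are our own illustrative choices, not from the text): the latent variable estimation step becomes a nearest-mean assignment, and the parameter update step re-fits each mean on its assigned points, which is exactly Lloyd's k-means update.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: equal-weight mixture of N(-2, 1) and N(3, 1)
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

mu = np.array([-0.5, 0.5])           # crude initialization of the two means
for _ in range(20):
    # Estimate latent variables: argmax_z f(z | y_i, mu) is the nearest mean
    z = np.argmin((y[:, None] - mu[None, :]) ** 2, axis=1)
    # Update parameter: the MLE of each mean is the average of its points
    for k in (0, 1):
        if np.any(z == k):
            mu[k] = y[z == k].mean()
```

Each point contributes to exactly one mean here; all competing values of the latent variable are discarded, which is precisely the information loss described above.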
The EM algorithm tries to remedy this.\n\n\\section{The EM Algorithm}\n\\label{sec:em-em}\nGiven the drawbacks of the hard assignment approach adopted by the AM-LVM algorithm, the EM algorithm presents an alternative approach that can be seen as making ``soft'' assignments.\n\nAt a very high level, the EM algorithm can be seen as doing the following. At time step $t$, instead of assigning the point $\\vy_i$ to a single value of the latent variable $\\hat\\vz^t_i = \\underset{\\vz \\in \\cZ}{\\arg\\max}\\ f(\\vz\\cond\\vy_i,\\vtheta^t)$, the EM algorithm chooses to make a partial assignment of $\\vy_i$ to \\emph{all} possible values of the latent variable in the set $\\cZ$. The EM algorithm ``assigns'' $\\vy_i$ to a value $\\vz \\in \\cZ$ with affinity\/weight equal to $f(\\vz\\cond\\vy_i,\\vtheta^t)$.\n\nThus, $\\vy_i$ gets partially assigned to all possible values of the latent variable, to some with more weight, to others with less weight. This avoids any loss of information. Note that these weights are always positive and sum up to unity since $\\sum_{\\vz\\in\\cZ}f(\\vz\\cond\\vy_i,\\vtheta^t) = 1$. Also note that EM still assigns $\\hat\\vz^t_i = \\underset{\\vz \\in \\cZ}{\\arg\\max}\\ f(\\vz\\cond\\vy_i,\\vtheta^t)$ the highest weight. In contrast, AM-LVM can now be seen as putting all the weight on $\\hat\\vz_i^t$ alone and zero weight on any other latent variable value.\n\nWe now present a more formal derivation of the EM algorithm. Instead of maximizing the likelihood in a single step, the EM algorithm tries to efficiently \\emph{encourage} an increase in the likelihood over several steps. Define the \\emph{point-wise likelihood} function as\n\\[\n\\cL(\\vtheta;\\vy) = \\sum_{\\vz\\in\\cZ}f(\\vy,\\vz\\cond\\vtheta).\n\\]\nNote that we can write the marginal likelihood function as $\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n) = \\prod_{i=1}^n\\cL(\\vtheta;\\vy_i)$.
Our goal is to maximize $\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n)$ but doing so directly is too expensive. So the next best thing is to do so indirectly. Suppose we had a proxy function that lower bounded the likelihood function but was also easy to optimize. Then maximizing the proxy function would also lead to an increase in the likelihood if the proxy were really good.\n\nThis is the key to the EM algorithm: it introduces a proxy function called the \\emph{$Q$-function} that lower bounds the marginal likelihood function and casts the parameter estimation problem as a bi-variate problem, the two variables being the parameter $\\vtheta$ and the $Q$-function.\n\nGiven an initialization, $\\vtheta^0 \\in \\Theta$, EM constructs a $Q$-function out of it, uses that as a proxy to obtain a better parameter $\\vtheta^1$, uses the newly obtained parameter to construct a better $Q$-function, uses the better $Q$-function to obtain a still better parameter $\\vtheta^2$, and so on. Thus, it essentially performs alternating optimization steps, with better estimations of the $\\vtheta$ parameter leading to better constructions of the $Q$-function and vice versa.\n\nTo formalize this notion, we will abuse notation to let $f(\\vz\\cond\\vy,\\vtheta^0)$ denote the conditional probability function for the random variable $Z$ given the variable $Y$ and the parameter $\\vtheta^0$. Then we have\n\\[\n\\log\\cL(\\vtheta;\\vy) = \\log\\sum_{\\vz\\in\\cZ}f(\\vy,\\vz\\cond\\vtheta) = \\log\\sum_{\\vz\\in\\cZ}f(\\vz\\cond\\vy,\\vtheta^0)\\frac{f(\\vy,\\vz\\cond\\vtheta)}{f(\\vz\\cond\\vy,\\vtheta^0)}\n\\]\nThe summation in the last expression can be seen to be simply an expectation with respect to the random variable $Z$ being sampled from the conditional distribution $f(\\cdot\\cond\\vy,\\vtheta^0)$. 
Using this, we get\n\\begin{align*}\n\\log\\cL(\\vtheta;\\vy) &= \\log \\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}\\bs{\\frac{f(\\vy,\\vz\\cond\\vtheta)}{f(\\vz\\cond\\vy,\\vtheta^0)}}\\\\\n\t\t\t\t\t\t\t\t\t &\\geq \\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}\\bs{\\log\\frac{f(\\vy,\\vz\\cond\\vtheta)}{f(\\vz\\cond\\vy,\\vtheta^0)}}\\\\\n\t\t\t\t\t\t\t\t\t &= \\underbrace{\\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}\\bs{\\log f(\\vy,\\vz\\cond\\vtheta)}}_{Q_{\\vy}(\\vtheta\\cond\\vtheta^0)} - \\underbrace{\\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}\\bs{\\log f(\\vz\\cond\\vy,\\vtheta^0)}}_{R_{\\vy}(\\vtheta^0)}.\n\\end{align*}\nThe inequality follows from Jensen's inequality as the logarithm function is concave. The function $Q_{\\vy}(\\vtheta\\cond\\vtheta^0)$ is called the \\emph{point-wise $Q$-function}. Now, the point-wise $Q$-function can be interpreted as a \\emph{weighted} point-wise likelihood function\n\\[\nQ_{\\vy}(\\vtheta\\cond\\vtheta^0) = \\sum_{\\vz \\in \\cZ}w_\\vz\\cdot\\log f(\\vy,\\vz\\cond\\vtheta),\n\\]\nwith weights $w_\\vz = f(\\vz\\cond\\vy,\\vtheta^0)$. Notice that this exactly corresponds to assigning $\\vy$ to every $\\vz \\in \\cZ$ with weight $w_\\vz$ instead of assigning it to just one value $\\hat\\vz = \\arg\\max_{\\vz\\in\\cZ}\\ f(\\vz\\cond\\vy,\\vtheta^0)$ (with weight $1$) as AM-LVM does. We will soon see that due to the way the $Q$-function is used, the EM algorithm can be seen as performing AM-LVM with a soft-assignment step instead of a hard-assignment step.\n\nUsing the point-wise $Q$-function, we define the $Q$-function.\n\\[\nQ(\\vtheta\\cond\\vtheta^0) = \\frac1n\\sum_{i=1}^nQ_{\\vy_i}(\\vtheta\\cond\\vtheta^0)\n\\]\n\nThe $Q$-function has all the properties we desired from our proxy: parameters maximizing the function $Q(\\vtheta\\cond\\vtheta^0)$ do indeed improve the likelihood $\\cL(\\vtheta;\\vy_1,\\vy_2,\\ldots,\\vy_n)$.
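The lower bound derived above is straightforward to check numerically. The sketch below uses a toy two-component model with a uniform prior on $z$ and unit-variance Gaussian observations (the model, the probed values, and all names are our own assumptions, not from the text): it computes $\log\cL(\vtheta;y)$, $Q_\vy(\vtheta\cond\vtheta^0)$ and $R_\vy(\vtheta^0)$ explicitly and verifies both the Jensen inequality and the fact that it is tight at $\vtheta = \vtheta^0$.

```python
import numpy as np

def joint(y, z, theta):
    # f(y, z | theta): z uniform on {0, 1}, y | z ~ N(theta[z], 1)
    return 0.5 * np.exp(-0.5 * (y - theta[z]) ** 2) / np.sqrt(2.0 * np.pi)

def bound_gap(y, theta, theta0):
    # gap = log L(theta; y) - (Q_y(theta | theta0) - R_y(theta0)), >= 0 by Jensen
    w = np.array([joint(y, z, theta0) for z in (0, 1)])
    w = w / w.sum()                                   # w_z = f(z | y, theta0)
    Q = sum(w[z] * np.log(joint(y, z, theta)) for z in (0, 1))
    R = sum(w[z] * np.log(w[z]) for z in (0, 1))
    logL = np.log(sum(joint(y, z, theta) for z in (0, 1)))
    return logL - (Q - R)
```

The gap is nonnegative for every $\vtheta$ and vanishes at $\vtheta = \vtheta^0$ (where the ratio inside the expectation is constant), which is what makes maximizing the $Q$-function built at $\vtheta^0$ a sound surrogate step.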
More importantly, for several applications, it is possible to both construct and optimize the $Q$-function very efficiently.\n\nAlgorithm~\\ref{algo:em} gives an overall skeleton of the EM algorithm. Implementing EM requires two routines: one to construct the $Q$-function corresponding to the current iterate (the \\emph{Expectation step} or E-step), and the other to maximize the $Q$-function (the \\emph{Maximization step} or M-step) to get the next iterate. We will next give precise constructions of these steps.\n\nEM works well in many situations where the marginal likelihood function is inaccessible. This is because the $Q$-function only requires access to the conditional probability function $f(\\cdot\\cond\\vy,\\vtheta^0)$ and the joint probability function $f(\\cdot,\\cdot\\cond\\vtheta)$, both of which are readily accessible in several applications. We will see examples of such applications shortly.\n\n\\section{Implementing the E\/M steps}\n\\label{sec:em-implement}\nWe will now look at various implementations of the E and the M steps in the EM algorithm. Some would be easier to implement in practice whereas others would be easier to analyze.\n\n\\begin{algorithm}[t]\n\t\\caption{Expectation Maximization (EM)}\n\t\\label{algo:em}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Implementations of the E-step $E(\\cdot)$, and the M-step $M(\\cdot)$\n\t\t\t\\ENSURE A good parameter $\\hat\\vtheta \\in \\Theta$\n\t\t\t\\STATE $\\vtheta^1 \\leftarrow \\text{\\textsf{INITIALIZE}}()$\n\t\t\t\\FOR{$t = 1, 2, \\ldots$}\n\t\t\t\t\\STATE $Q_t(\\cdot\\cond\\vtheta^t) \\leftarrow E(\\vtheta^t)$ \\hfill {(E-step)}%\n\t\t\t\t\\STATE $\\vtheta^{t+1} \\leftarrow M(\\vtheta^t,Q_t)$ \\hfill {(M-step)}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\btt$}\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\noindent\\textbf{E-step Constructions} Given the definition of the point-wise $Q$-function, one can use it to construct the $Q$-function in several ways.
The first construction is of largely theoretical interest but reveals a lot of the inner workings of the EM algorithm by simplifying the proofs. This \\emph{population} construction requires access to the marginal distribution on the Y variable i.e., the probability function $f(\\vy\\cond\\vth^\\ast)$. Recall that $\\vth^\\ast$ is the true parameter generating these samples. Given this, the population $E$-step constructs the $Q$-function as\n\\begin{align*}\nQ^\\text{pop}_t(\\vtheta\\cond\\vtheta^t) &= \\bE_{\\vy \\sim f_{\\vth^\\ast}}Q_{\\vy}(\\vtheta\\cond\\vtheta^t)\\\\\n\t\t\t\t\t\t\t\t\t &= \\bE_{\\vy \\sim f_{\\vth^\\ast}}\\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^t)}\\bs{\\log f(\\vy,\\vz\\cond\\vtheta)}\\\\\n\t\t\t\t\t\t\t\t\t &= \\sum_{\\vy \\in \\cY}\\sum_{\\vz \\in \\cZ}f(\\vy\\cond\\vth^\\ast)\\cdot f(\\vz\\cond\\vy,\\vtheta^t)\\cdot\\log f(\\vy,\\vz\\cond\\vtheta).\n\\end{align*}\nClearly this construction is infeasible in practice. A much more realistic \\emph{sample} construction works with just the observed samples $\\vy_1,\\vy_2,\\ldots,\\vy_n$ (note that these were indeed drawn from the distribution $f(\\vy\\cond\\vth^\\ast)$). The sample $E$-step constructs the $Q$-function as\n\\begin{align*}\nQ^\\text{sam}_t(\\vtheta\\cond\\vtheta^t) &= \\frac{1}{n}\\sum_{i=1}^nQ_{\\vy_i}(\\vtheta\\cond\\vtheta^t)\\\\\n\t\t\t\t\t\t\t\t\t &= \\frac{1}{n}\\sum_{i=1}^n\\bE_{\\vz \\sim f(\\cdot\\cond\\vy_i,\\vtheta^t)}\\bs{\\log f(\\vy_i,\\vz\\cond\\vtheta)}\\\\\n\t\t\t\t\t\t\t\t\t &= \\frac{1}{n}\\sum_{i=1}^n\\sum_{\\vz \\in \\cZ}f(\\vz\\cond\\vy_i,\\vtheta^t)\\cdot\\log f(\\vy_i,\\vz\\cond\\vtheta).\n\\end{align*}\nNote that this expression has $n\\cdot\\abs{\\cZ}$ terms instead of the $\\abs{\\cZ}^n$ terms which the marginal likelihood expression had. 
This drastic reduction is a key factor behind the scalability of the EM algorithm.\\\\\n\n\\noindent\\textbf{M-step Constructions}\nRecall that in \\S~\\ref{chap:altmin} we considered alternating approaches that fully optimize with respect to a variable, as well as those that merely perform a descent step, improving the function value along that variable but not quite optimizing it.\n\nSimilar variants can be developed for EM as well. Given a $Q$-function, the simplest strategy is to optimize it completely with the M-step simply returning the maximizer of the $Q$-function. This is the \\emph{fully corrective} version of EM. It is useful to remind ourselves here that whereas in previous sections we looked at minimization problems, the problem here is that of likelihood \\emph{maximization}.\n\\[\nM^\\text{fc}(\\vtheta^t,Q_t) = \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ Q_t(\\vtheta\\cond\\vtheta^t)\n\\]\nSince this can be expensive in large scale optimization settings, a \\emph{gradient ascent} version exists that makes the M-step faster by performing just a gradient step with respect to the $Q$-function i.e.,\n\\[\nM^\\text{grad}(\\vtheta^t,Q_t) = \\vtheta^t + \\alpha_t\\cdot\\nabla Q_t(\\vtheta^t\\cond\\vtheta^t).\n\\]\n\n\\noindent\\textbf{Stochastic EM Construction}\nA highly scalable version of the algorithm is the \\emph{stochastic update} version that uses the point-wise $Q$-function of a single, randomly chosen sample $Y_t \\sim \\text{\\textsf{Unif}}[n]$ to execute a gradient update at each time step $t$. It can be shown\\elink{exer:em-sgd} that in expectation, this executes the sample E-step and the gradient M-step.\n\\[\nM^\\text{sto}(\\vtheta^t) = \\vtheta^t + \\alpha_t\\cdot\\nabla Q_{Y_t}(\\vtheta^t\\cond\\vtheta^t)\n\\]\n\n\\section{Motivating Applications}\n\\label{sec:em-app}\nWe will now look at a few applications of the EM algorithm and see how the E and M steps are executed efficiently. 
We will revisit these applications while discussing convergence proofs for the EM algorithm.\n\n\\subsection{Gaussian Mixture Models}\nMixture models are ubiquitous in applications such as clustering, topic modeling, and segmentation tasks. Let $\\cN(\\cdot;\\vmu,\\Sigma)$ denote the multivariate normal distribution with mean $\\vmu \\in \\bR^p$ and covariance $\\Sigma \\in \\bR^{p \\times p}$. A mixture model can be constructed by combining two such normal distributions to obtain a density function of the form\n\\[\nf(\\cdot\\cond\\bc{\\phi^\\ast_i,\\vmu^{\\ast,i},\\Sigma^\\ast_i}_{i \\in \\bc{0,1}}) = \\phi^\\ast_0\\cdot\\cN_0 + \\phi^\\ast_1\\cdot\\cN_1,\n\\]\nwhere $\\cN_i = \\cN(\\cdot;\\vmu^{\\ast,i},\\Sigma^\\ast_i), i = 0,1$ are the \\emph{mixture components} and $\\phi^\\ast_i \\in (0,1), i = 0,1$ are the \\emph{mixture coefficients}. We insist that $\\phi^\\ast_0+\\phi^\\ast_1 = 1$ to ensure that $f$ is indeed a probability distribution. Note that we consider a mixture with just two components for simplicity. Mixture models with a larger number of components can be similarly constructed.\n\nA sample $(\\vy,z) \\in \\bR^p \\times \\bc{0,1}$ can be drawn from this distribution by first tossing a Bernoulli coin with bias $\\phi^\\ast_1$ to choose a component $z \\in \\bc{0,1}$ and then drawing a sample $\\vy \\sim \\cN_{z}$ from that component.\n\nHowever, despite drawing the samples $(\\vy_1,z_1),(\\vy_2,z_2),\\ldots,(\\vy_n,z_n)$, what is presented to us is $\\vy_1,\\vy_2,\\ldots,\\vy_n$ i.e., the identities $z_i$ of the components that actually resulted in these draws are hidden from us. For instance, in topic modeling tasks, the underlying topics being discussed in documents are hidden from us and we only get to see surface realizations of words in the documents that have topic-specific distributions. 
The goal here is to recover the mixture components as well as coefficients in an efficient manner from such partially observed draws.\n\nFor the sake of simplicity, we will look at a balanced, isotropic mixture i.e., where we are given that $\\phi^\\ast_0 = \\phi^\\ast_1 = 0.5$ and $\\Sigma^\\ast_0 = \\Sigma^\\ast_1 = I_p$. This will simplify our updates and analysis as the only unknown parameters in the model are $\\vmu^{\\ast,0}$ and $\\vmu^{\\ast,1}$. Let $\\vM = (\\vmu^0,\\vmu^1) \\in \\bR^{p\\times 2}$ denote an ensemble describing such a parametric mixture. Our job is to recover $\\vM^\\ast = (\\vmu^{\\ast,0},\\vmu^{\\ast,1})$.\\\\\n\n\\noindent\\textbf{E-step Construction} For any $\\vM = (\\vmu^0,\\vmu^1)$, we have the mixture\n\\[\nf(\\cdot\\cond\\vM) = 0.5\\cdot\\cN(\\cdot;\\vmu^0,I_p) + 0.5\\cdot\\cN(\\cdot;\\vmu^1,I_p),\n\\]\ni.e., $\\cN_i = \\cN(\\cdot;\\vmu^i,I_p)$. In this case, the $Q$-function actually has a closed form expression. For any $\\vy \\in \\bR^p$, $z \\in \\bc{0,1}$, and $\\vM$, we have\n\\begin{align*}\nf(\\vy,z\\cond\\vM) &= 0.5\\cdot\\cN_z(\\vy) \\propto \\exp\\br{-\\frac{\\norm{\\vy - \\vmu^z}_2^2}{2}}\\\\\nf(z\\cond\\vy,\\vM) &= \\frac{f(\\vy,z\\cond\\vM)}{f(\\vy\\cond\\vM)} = \\frac{f(\\vy,z\\cond\\vM)}{f(\\vy,z\\cond\\vM) + f(\\vy,1-z\\cond\\vM)},\n\\end{align*}\nwhere the factor $0.5$ and the Gaussian normalization constants cancel in the conditional. Thus, even though the marginal likelihood function was inaccessible, the point-wise $Q$-function has a nice closed form. 
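A minimal numerical sketch of these conditionals for the balanced isotropic mixture (the function and variable names are ours, chosen for illustration):

```python
import math

def posterior(y, mu0, mu1):
    """f(z | y, M) for the balanced isotropic two-component mixture;
    the mixing weight 0.5 and the Gaussian normalizers cancel in the ratio."""
    s0 = math.exp(-sum((a - b) ** 2 for a, b in zip(y, mu0)) / 2)
    s1 = math.exp(-sum((a - b) ** 2 for a, b in zip(y, mu1)) / 2)
    return s0 / (s0 + s1), s1 / (s0 + s1)

# A point close to mu0 gets most of its posterior mass assigned to z = 0.
w0, w1 = posterior([0.9, 0.0], mu0=[1.0, 0.0], mu1=[-1.0, 0.0])
```

The two weights are nonnegative and sum to one, exactly the properties of the soft assignments used by the E-step.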
For the E-step construction, for any two ensembles $\\vM^t = (\\vmu^{t,0},\\vmu^{t,1})$ and $\\vM = (\\vmu^0,\\vmu^1)$,\n\\begin{multline*}\nQ_{\\vy}(\\vM\\cond\\vM^t) = \\bE_{z \\sim f(\\cdot\\cond\\vy,\\vM^t)}\\bs{\\log f(\\vy,z\\cond\\vM)}\\\\\n\t\t\t\t= f(0\\cond\\vy,\\vM^t)\\cdot\\log f(\\vy,0\\cond\\vM) + f(1\\cond\\vy,\\vM^t)\\cdot\\log f(\\vy,1\\cond\\vM)\\\\\n\t\t\t\t= -\\frac{1}{2}\\br{w^0_t(\\vy)\\cdot\\norm{\\vy - \\vmu^0}_2^2 + w^1_t(\\vy)\\cdot\\norm{\\vy - \\vmu^1}_2^2},\n\\end{multline*}\nup to an additive constant that does not depend on $\\vM$, where $w^z_t(\\vy) = e^{-\\frac{\\norm{\\vy - \\vmu^{t,z}}_2^2}{2}}\\br{e^{-\\frac{\\norm{\\vy - \\vmu^{t,0}}_2^2}{2}} + e^{-\\frac{\\norm{\\vy - \\vmu^{t,1}}_2^2}{2}}}^{-1}$. Note that $w^z_t(\\vy) \\geq 0$ for $z = 0,1$ and $w^0_t(\\vy) + w^1_t(\\vy) = 1$. Also note that $w^z_t(\\vy)$ is larger if $\\vy$ is closer to $\\vmu^{t,z}$ i.e., it measures the affinity of a point to the center. Given a sample of data points $\\vy_1,\\vy_2,\\ldots,\\vy_n$, the $Q$-function is\n\\begin{align*}\nQ(\\vM\\cond\\vM^t) &= \\frac{1}{n}\\sum_{i=1}^nQ_{\\vy_i}(\\vM\\cond\\vM^t)\\\\\n\t\t\t\t\t\t\t\t\t &= -\\frac{1}{2n}\\sum_{i=1}^n \\br{w^0_t(\\vy_i)\\cdot\\norm{\\vy_i - \\vmu^0}_2^2 + w^1_t(\\vy_i)\\cdot\\norm{\\vy_i - \\vmu^1}_2^2}.\n\\end{align*}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{gmm.pdf}\n\\caption[Gaussian Mixture Models and the EM Algorithm]{The data is generated from a mixture model: circle points from $\\cN(\\vmu^{\\ast,0},I_p)$ and triangle points from $\\cN(\\vmu^{\\ast,1},I_p)$ but their origin is unknown. The EM algorithm performs soft clustering assignments to realize this and keeps increasing $w^0_t$ values for circle points and $w^1_t$ values for triangle points. As a result, the estimated means $\\vmu^{t,z}$ rapidly converge to the true means $\\vmu^{\\ast,z}$, $z=0,1$.}\n\\label{fig:gmm}\n\\end{figure}\n\n\\noindent\\textbf{M-step Construction} This also has a closed form solution in this case\\elink{exer:em-gmm-m}. 
If $\\vM^{t+1} = (\\vmu^{t+1,0},\\vmu^{t+1,1}) = {\\arg\\max}_\\vM\\ Q(\\vM\\cond\\vM^t)$, then\n\\[\n\\vmu^{t+1,z} = \\frac{\\sum_{i=1}^n w^z_t(\\vy_i)\\cdot\\vy_i}{\\sum_{i=1}^n w^z_t(\\vy_i)},\n\\]\ni.e., each mean is updated to the average of the data points, weighted by their affinities to that component. Note that the M-step can be executed in linear time and does not require an explicit construction of the $Q$-function at all! One just needs to use the M-step repeatedly -- the $Q$-function is implicit in the M-step.\n\nThe reader would notice a similarity between the EM algorithm for Gaussian mixture models and Lloyd's algorithm \\citep{Lloyd1982} for k-means clustering. In fact\\elink{exer:em-em-k-means}, Lloyd's algorithm implements exactly the AM-LVM algorithm (see Algorithm~\\ref{algo:em-hard-altopt}) that performs ``hard'' assignments, assigning each data point completely to one of the clusters, whereas the EM algorithm makes ``soft'' assignments, allowing each point to have different levels of affinity to different clusters. Figure~\\ref{fig:gmm} depicts the working of the EM algorithm on a toy GMM problem.\n\n\\subsection{Mixed Regression}\nAs we have seen in \\S~\\ref{chap:intro}, regression problems and their variants have several applications in data analysis and machine learning. One such variant is that of mixed regression. Mixed regression is especially useful when we suspect that our data is actually composed of several \\emph{sub-populations} which cannot be explained well using a single model.\n\nFor example, consider the previous example of predicting family expenditure. Although we may have data from families across a nation, it may be unwise to try and explain it using a single model due to various reasons. The prices of various commodities and services may vary across urban and rural areas and similar consumption in two regions may very well result in different expenditures. 
Moreover, there may exist parameters such as total income which are not revealed in a survey due to privacy issues, but nevertheless influence expenditure.\n\nThus, there may actually be several models, each corresponding to a certain income bracket or a certain geographical region, which \\emph{together} explain the data very well. This poses a challenge since the income bracket, or geographical location of a family may not have been recorded as a part of the survey due to privacy or other reasons!\n\nTo formalize the above scenario, consider two linear models $\\bt^{\\ast,0}, \\bt^{\\ast,1} \\in \\bR^p$. For each data point $\\vx_i \\in \\bR^p$, first one of the models is selected by performing a Bernoulli trial with bias $\\phi_1$ to get $z_i \\in \\bc{0,1}$ and then the response is generated as\n\\[\ny_i = \\vx_i^\\top\\bt^{\\ast,z_i} + \\eta_i,\n\\]\nwhere $\\eta_i$ is i.i.d. Gaussian noise $\\eta_i \\sim \\cN(0,\\sigma_{z_i}^2)$. This can be cast as a parametric model by considering density functions of the form\n\\[\nf(\\cdot\\cond\\cdot,\\bc{\\phi_z,\\bt^{\\ast,z},\\sigma_z}_{z=0,1}) = \\phi_0\\cdot g(\\cdot\\cond\\cdot,\\bt^{\\ast,0},\\sigma_0) + \\phi_1\\cdot g(\\cdot\\cond\\cdot,\\bt^{\\ast,1},\\sigma_1),\n\\]\nwhere $\\sigma_z,\\phi_z > 0$, $\\phi_0 + \\phi_1 = 1$, and for any $(\\vx,y) \\in \\bR^p\\times\\bR$, we have\n\\[\ng(y\\cond\\vx,\\bt^{\\ast,z},\\sigma_z) = \\exp\\br{-\\frac{(y - \\vx^\\top\\bt^{\\ast,z})^2}{2\\sigma_z^2}}.\n\\]\nNote that although such a model generates data in the form of triplets $(\\vx_1,y_1,z_1),(\\vx_2,y_2,z_2),\\ldots,(\\vx_n,y_n,z_n)$, we are only allowed to observe $(\\vx_1,y_1),(\\vx_2,y_2),\\ldots,(\\vx_n,y_n)$ as the data. For the sake of simplicity, we will yet again look at the special case when the Bernoulli trials are fair i.e., $\\phi_0 = \\phi_1 = 0.5$ and $\\sigma_0 = \\sigma_1 = 1$. Thus, the only unknown parameters are the models $\\bt^{\\ast,0}$ and $\\bt^{\\ast,1}$. 
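The generative process just described can be sketched as follows (a toy sampler of ours under the fair-coin, unit-variance simplifications; all names are illustrative):

```python
import random

def sample_mixed_regression(x, theta0, theta1, rng):
    """Draw one (y, z): pick a model with a fair coin, add unit Gaussian noise.
    In the estimation problem only (x, y) is observed; z stays hidden."""
    z = rng.randrange(2)                      # fair Bernoulli trial
    theta = theta1 if z == 1 else theta0
    y = sum(a * b for a, b in zip(x, theta)) + rng.gauss(0.0, 1.0)
    return y, z

rng = random.Random(0)
draws = [sample_mixed_regression([1.0, 2.0], [1.0, 0.0], [-1.0, 0.0], rng)
         for _ in range(5)]
observed = [y for y, _ in draws]              # what the learner gets to see
```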
Let $\\vW = (\\bt^0,\\bt^1) \\in \\bR^{p \\times 2}$ denote the parametric mixed model. Our job is to recover $\\vW^\\ast = (\\bt^{\\ast,0},\\bt^{\\ast,1})$.\n\nA particularly interesting special case arises when we further impose the constraint $\\bt^{\\ast,0} = -\\bt^{\\ast,1}$, i.e., the two models in the mixture are tied together to be negatives of each other. This model is especially useful in the \\emph{phase retrieval} problem. Although we will study this problem in more generality in \\S~\\ref{chap:phret}, we present a special case here.\n\nPhase retrieval is a problem that arises in several imaging situations such as X-ray crystallography where, after data $\\bc{(\\vx_i,y_i)}_{i=1,\\ldots,n}$ has been generated as $y_i = \\ip{\\bto}{\\vx_i}$, the sign of the response (or more generally, the phase of the response if the response is complex-valued) is omitted and we are presented with just $\\bc{(\\vx_i,\\abs{y_i})}_{i=1,\\ldots,n}$. In such a situation, we can use the latent variable $z_i = \\text{sign}(y_i)$ to denote the omitted sign information. In this setting, it can be seen that the mixture model with $\\bt^{\\ast,0} = -\\bt^{\\ast,1}$ is very appropriate since each data point $(\\vx_i,\\abs{y_i})$ will be nicely explained by either $\\bto$ or $-\\bto$ depending on the value of $\\text{sign}(y_i)$.\n\nWe will revisit this problem in detail in \\S~\\ref{chap:phret}. For now we move on to discuss the E and M-step constructions for the mixed regression problem. We leave details of the constructions as an exercise\\elink{exer:em-mr-em}.\\\\\n\n\\noindent\\textbf{E-step Construction} The point-wise $Q$-function has a closed form expression. 
Given two ensembles $\\vW^t = (\\bt^{t,0},\\bt^{t,1})$ and $\\vW = (\\bt^{0},\\bt^{1})$,\n\\[\nQ_{(\\vx,y)}(\\vW|\\vW^t) = -\\frac{1}{2}\\br{\\alpha^0_{t,(\\vx,y)}\\cdot(y - \\vx^\\top\\bt^0)^2 + \\alpha^1_{t,(\\vx,y)}\\cdot(y - \\vx^\\top\\bt^1)^2},\n\\]\nwhere $\\alpha^z_{t,(\\vx,y)} = e^{-\\frac{(y - \\vx^\\top\\bt^{t,z})^2}{2}}\\bs{e^{-\\frac{(y - \\vx^\\top\\bt^{t,0})^2}{2}} + e^{-\\frac{(y - \\vx^\\top\\bt^{t,1})^2}{2}}}^{-1}$. Note that $\\alpha^z_{t,(\\vx,y)} \\geq 0$ for $z = 0,1$ and $\\alpha^0_{t,(\\vx,y)} + \\alpha^1_{t,(\\vx,y)} = 1$. Also note that $\\alpha^z_{t,(\\vx,y)}$ is larger if $\\bt^{t,z}$ gives less regression error for the point $(\\vx,y)$ than $\\bt^{t,1-z}$ i.e., if $(y - \\vx^\\top\\bt^{t,z})^2 < (y - \\vx^\\top\\bt^{t,1-z})^2$. Thus, the data point $(\\vx,y)$ feels greater affinity to the model that fits it better, which is, intuitively, an appropriate thing to do.\\\\\n\n\\noindent\\textbf{M-step Construction} The maximizer of the $Q$-function has a closed form solution in this case as well. If $\\vW^{t+1} = (\\bt^{t+1,0},\\bt^{t+1,1}) = \\arg\\max_{\\vW}Q(\\vW|\\vW^t)$, where $Q(\\vW|\\vW^t)$ is the sample $Q$-function created from a data sample $(\\vx_1,y_1),(\\vx_2,y_2),\\ldots,(\\vx_n,y_n)$, then it is easy to see that the M-step update is given by the solution to two weighted least squares problems with weights given by $\\alpha^z_{t,(\\vx_i,y_i)}$ for $z \\in \\bc{0,1}$, which have closed form solutions given by\n\\[\n\\bt^{t+1,z} = \\br{\\sum_{i=1}^n\\alpha^z_{t,(\\vx_i,y_i)}\\cdot \\vx_i\\vx_i^\\top}^{-1}\\sum_{i=1}^n\\alpha^z_{t,(\\vx_i,y_i)}\\cdot y_i\\vx_i.\n\\]\nNote that to update $\\bt^{t,0}$, for example, the M-step essentially performs least squares only over those points $(\\vx_i,y_i)$ such that $\\alpha^0_{t,(\\vx_i,y_i)}$ is large and ignores points where $\\alpha^0_{t,(\\vx_i,y_i)} \\approx 0$. This is akin to identifying points that belong to sub-population $0$ strongly and performing regression only over them. 
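For intuition, here is one full EM iteration in the scalar case $p = 1$, where each weighted least squares problem collapses to a ratio of weighted sums. This is a sketch of ours under the fair-coin, unit-variance assumptions; the toy data and names are illustrative.

```python
import math

def em_step_mixed_regression(data, t0, t1):
    """One EM iteration for scalar mixed regression: soft-assign each (x, y)
    to the model that fits it better, then solve two weighted least squares."""
    num = [0.0, 0.0]
    den = [0.0, 0.0]
    for x, y in data:
        r0 = math.exp(-((y - x * t0) ** 2) / 2)
        r1 = math.exp(-((y - x * t1) ** 2) / 2)
        a0 = r0 / (r0 + r1)              # affinity of (x, y) to model 0
        a1 = 1.0 - a0                    # affinity of (x, y) to model 1
        for z, a in ((0, a0), (1, a1)):
            num[z] += a * x * y          # weighted version of X^T y
            den[z] += a * x * x          # weighted version of X^T X
    return num[0] / den[0], num[1] / den[1]

# Points generated (noiselessly) from theta = +2 and theta = -2:
data = [(1.0, 2.0), (2.0, 4.0), (1.0, -2.0), (2.0, -4.0)]
t0, t1 = em_step_mixed_regression(data, 0.5, -0.5)
# t0 moves toward +2 and t1 toward -2 after a single iteration
```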
One need not construct the $Q$-function explicitly here either and may simply keep repeating the M-step. Figure~\\ref{fig:mrem} depicts the working of the EM algorithm on a toy mixed regression problem.\\\\\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{mrem.pdf}\n\\caption[Mixed Regression and the EM Algorithm]{The data contains two sub-populations (circle and triangle points) and cannot be properly explained by a single linear model. EM rapidly realizes that the circle points should belong to the solid model and the triangle points to the dashed model. Thus, $\\alpha^0_t$ keeps going up for circle points and $\\alpha^1_t$ keeps going up for triangle points. As a result, only circle points contribute significantly to learning the solid model and only triangle points contribute significantly to the dashed model.}\n\\label{fig:mrem}\n\\end{figure}\n\n\\noindent\\textbf{Note on Initialization}: When executing the EM algorithm, it is important to initialize the models properly. As we saw in \\S~\\ref{chap:altmin}, this is true of all alternating strategies. Careless initialization can lead to poor results. For example, if the EM algorithm is initialized for the mixed regression problem with $\\vW^1 = (\\bt^{1,0},\\bt^{1,1})$ such that $\\bt^{1,0} = \\bt^{1,1}$, then it is easy to see that the EM algorithm will never learn two distinct models and we will have $\\bt^{t,0} = \\bt^{t,1}$ for all $t$. We will revisit initialization issues shortly when discussing convergence analyses for the EM algorithm.\n\nWe refer the reader to \\citep{BalakrishnanWY2017,YangBW2015} for examples of the EM algorithm applied to other problems such as learning hidden Markov models and regression with missing covariates.\n\n\\section{A Monotonicity Guarantee for EM}\n\\label{sec:em-monotone}\nWe now present a simple but useful monotonicity property of the EM algorithm that guarantees that the procedure never worsens the likelihood of the data during its successive updates. 
This assures us that EM does not diverge, although it does not ensure convergence to the true parameter $\\vth^\\ast$ either. For gAM, a similar result was immediate from the nature of the alternating updates.\n\n\\begin{theorem}\n\\label{thm:em-monotone}\nThe EM algorithm (Algorithm~\\ref{algo:em}), when executed with a population E-step and fully corrective M-step, ensures that the population likelihood never decreases across iterations, i.e., for all $t$,\n\\[\n\\bE_{\\vy\\sim f_{\\vth^\\ast}}f(\\vy\\cond\\vtheta^{t+1}) \\geq \\bE_{\\vy\\sim f_{\\vth^\\ast}}f(\\vy\\cond\\vtheta^t).\n\\]\nIf executed with the sample E-step on data $\\vy_1,\\ldots,\\vy_n$, EM ensures that the sample likelihood never decreases across iterations, i.e., for all $t$,\n\\[\n\\cL(\\vtheta^{t+1};\\vy_1,\\ldots,\\vy_n) \\geq \\cL(\\vtheta^t;\\vy_1,\\ldots,\\vy_n).\n\\]\n\\end{theorem}\n\nA similar result holds for gradient M-steps too but we do not consider that here. To prove this result, we will need the following simple observation\\elink{exer:em-monotone}. Recall that for any $\\vtheta^0 \\in \\Theta$ and $\\vy \\in \\cY$, we defined the terms $Q_{\\vy}(\\vtheta\\cond\\vtheta^0) = \\Ee{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}{\\log f(\\vy,\\vz\\cond\\vtheta)}$ and $R_{\\vy}(\\vtheta^0) = \\bE_{\\vz \\sim f(\\cdot\\cond\\vy,\\vtheta^0)}\\bs{\\log f(\\vz\\cond\\vy,\\vtheta^0)}$.\n\n\\begin{lemma}\n\\label{lem:em-monotone-point}\nFor any $\\vtheta^0 \\in \\Theta$ and any $\\vy \\in \\cY$, we have\n\\[\n\\log f(\\vy\\cond\\vtheta^0) = Q_{\\vy}(\\vtheta^0\\cond\\vtheta^0) - R_{\\vy}(\\vtheta^0).\n\\]\n\\end{lemma}\n\nThis is a curious result since we have earlier seen that for any $\\vtheta \\in \\Theta$, we have the inequality $\\log f(\\vy\\cond\\vtheta) \\geq Q_{\\vy}(\\vtheta\\cond\\vtheta^0) - R_{\\vy}(\\vtheta^0)$ by an application of Jensen's inequality (see \\S~\\ref{sec:em-em}). The above lemma suggests that the inequality is actually tight at $\\vtheta = \\vtheta^0$. 
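For completeness, the lemma follows in one line from the factorization $f(\\vy,\\vz\\cond\\vtheta^0) = f(\\vz\\cond\\vy,\\vtheta^0)\\cdot f(\\vy\\cond\\vtheta^0)$, using only the definitions already introduced:

```latex
\begin{align*}
Q_{\vy}(\vtheta^0\cond\vtheta^0) - R_{\vy}(\vtheta^0)
  &= \bE_{\vz \sim f(\cdot\cond\vy,\vtheta^0)}\bs{\log\frac{f(\vy,\vz\cond\vtheta^0)}{f(\vz\cond\vy,\vtheta^0)}}\\
  &= \bE_{\vz \sim f(\cdot\cond\vy,\vtheta^0)}\bs{\log f(\vy\cond\vtheta^0)} = \log f(\vy\cond\vtheta^0),
\end{align*}
```

since the ratio inside the logarithm equals $f(\\vy\\cond\\vtheta^0)$ for every $\\vz$, making the expectation trivial.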
We now prove Theorem~\\ref{thm:em-monotone}.\n\n\\begin{proof}[Proof (of Theorem~\\ref{thm:em-monotone}).]\nConsider the sample E-step with $Q(\\vtheta\\cond\\vtheta^t) = \\frac{1}{n}\\sum_{i=1}^nQ_{\\vy_i}(\\vtheta\\cond\\vtheta^t)$. A similar argument works for population E-steps. The fully corrective M-step ensures $Q(\\vtheta^{t+1}\\cond\\vtheta^t) \\geq Q(\\vtheta^t\\cond\\vtheta^t)$ i.e.,\n\\[\n\\frac{1}{n}\\sum_{i=1}^nQ_{\\vy_i}(\\vtheta^{t+1}\\cond\\vtheta^t) \\geq \\frac{1}{n}\\sum_{i=1}^nQ_{\\vy_i}(\\vtheta^t\\cond\\vtheta^t).\n\\]\nSubtracting the same terms from both sides gives us\n\\[\n\\frac{1}{n}\\sum_{i=1}^n\\br{Q_{\\vy_i}(\\vtheta^{t+1}\\cond\\vtheta^t) - R_{\\vy_i}(\\vtheta^t)} \\geq \\frac{1}{n}\\sum_{i=1}^n\\br{Q_{\\vy_i}(\\vtheta^t\\cond\\vtheta^t) - R_{\\vy_i}(\\vtheta^t)}.\n\\]\nUsing the inequality $\\log f(\\vy\\cond\\vtheta^{t+1}) \\geq Q_{\\vy}(\\vtheta^{t+1}\\cond\\vtheta^t) - R_{\\vy}(\\vtheta^t)$ on the left hand side and applying Lemma~\\ref{lem:em-monotone-point} to the right hand side gives us\n\\[\n\\frac{1}{n}\\sum_{i=1}^n\\log f(\\vy_i\\cond\\vtheta^{t+1}) \\geq \\frac{1}{n}\\sum_{i=1}^n\\log f(\\vy_i\\cond\\vtheta^t),\n\\]\nwhich proves the result.\n\\end{proof}\n\n\\section{Local Strong Concavity and Local Strong Smoothness}\nIn order to prove stronger convergence guarantees for the EM algorithm, we need to introduce a few structural properties of parametric distributions. 
Let us recall the EM algorithm with a population E-step and fully corrective M-step for the sake of simplicity.\n\\begin{enumerate}\n\t\\item (E-step) $Q(\\cdot\\cond\\vtheta^t) = \\bE_{\\vy \\sim f_{\\vth^\\ast}}Q_{\\vy}(\\cdot\\cond\\vtheta^t)$\n\t\\item (M-step) $\\vtheta^{t+1} = \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ Q(\\vtheta\\cond\\vtheta^t)$\n\\end{enumerate}\nFor any $\\vtheta^0 \\in \\Theta$, we will use $q_{\\vtheta^0}(\\cdot) = Q(\\cdot\\cond\\vtheta^0)$ as a shorthand for the $Q$-function with respect to $\\vtheta^0$ (constructed at the population or sample level depending on the E-step) and let $M(\\vtheta^0) := \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ q_{\\vtheta^0}(\\vtheta)$ denote the output of the M-step if $\\vtheta^0$ is the current parameter. Let $\\vth^\\ast$ denote a parameter that optimizes the population likelihood\n\\[\n\\vth^\\ast \\in \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ \\Ee{\\vy \\sim f(\\vy|\\vth^\\ast)}{\\cL(\\vtheta;\\vy)} = \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ \\Ee{\\vy \\sim f(\\vy|\\vth^\\ast)}{f(\\vy\\cond\\vtheta)}.\n\\]\nRecall that our overall goal is indeed to recover a parameter such as $\\vth^\\ast$ that maximizes the population level likelihood (or the sample likelihood if using sample E-steps). Now the $Q$-function satisfies\\elink{exer:em-self-consis-q} the following \\emph{self-consistency} property:\n\\[\n\\vth^\\ast \\in \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ q_{\\vth^\\ast}(\\vtheta).\n\\]\nThus, if we could somehow get hold of the $Q$-function $q_{\\vth^\\ast}(\\cdot)$, then a single M-step would solve the problem! However, this is a circular argument since getting hold of $q_{\\vth^\\ast}(\\cdot)$ would require finding $\\vth^\\ast$ first.\n\nTo proceed along the previous argument, we need to refine this observation. 
Not only should the M-step refuse to deviate from the optimum $\\vth^\\ast$ if initialized there, it should also behave in a relatively calm manner in the neighborhood of the optimum. The following properties characterize ``nice'' $Q$-functions that ensure this happens.\n\nFor the sake of simplicity, we will assume that all $Q$-functions are continuously differentiable, as well as that the estimation problem is unconstrained i.e., $\\Theta = \\bR^p$. Note that this ensures $\\nabla q_{\\vth^\\ast}(\\vth^\\ast) = \\vzero$ due to self-consistency. Also note that since we are looking at maximization problems, we will require the $Q$-function to satisfy ``concavity'' properties instead of ``convexity'' properties.\n\n\\begin{definition}[Local Strong Concavity]\nA statistical estimation problem with a population likelihood maximizer $\\vth^\\ast$ satisfies the $(r,\\alpha)$-Local Strong Concavity (LSC) property if there exist $\\alpha, r > 0$, such that the function $q_{\\vth^\\ast}(\\cdot)$ is $\\alpha$-strongly concave in a neighborhood ball of radius $r$ around $\\vth^\\ast$ i.e., for all $\\vtheta^1,\\vtheta^2 \\in \\cB_2(\\vth^\\ast,r)$,\n\\[\nq_{\\vth^\\ast}(\\vtheta^1) - q_{\\vth^\\ast}(\\vtheta^2) - \\ip{\\nabla q_{\\vth^\\ast}(\\vtheta^2)}{\\vtheta^1 - \\vtheta^2} \\leq -\\frac{\\alpha}{2}\\norm{\\vtheta^1 - \\vtheta^2}_2^2.\n\\]\n\\end{definition}\n\nThe reader would find LSC similar to restricted strong convexity (RSC) in Definition~\\ref{defn:res-strong-cvx-smooth-fn}, with the ``restriction'' being the neighborhood $\\cB_2(\\vth^\\ast,r)$ of $\\vth^\\ast$. Also note that only $q_{\\vth^\\ast}(\\cdot)$ is required to satisfy the LSC property, and not the $Q$-functions corresponding to every $\\vtheta$.\n\nWe will also require a counterpart to restricted strong smoothness (RSS). 
For that, we introduce the notion of \\emph{Lipschitz} gradients.\n\\begin{definition}[Lipschitz Gradients]\n\\label{defn:lip-grad}\nA differentiable function $f: \\bR^p \\rightarrow \\bR$ is said to have $\\beta$-Lipschitz gradients if for all $\\vx,\\vy\\in\\bR^p$, we have\n\\[\n\\norm{\\nabla f(\\vx) - \\nabla f(\\vy)}_2 \\leq \\beta\\cdot\\norm{\\vx-\\vy}_2.\n\\]\n\\end{definition}\nWe advise the reader to relate this notion to that of Lipschitz functions (Definition~\\ref{defn:lip}). It can be shown\\elink{exer:em-ss-lip-grad} that all functions with $L$-Lipschitz gradients are also $L$-strongly smooth (Definition~\\ref{defn:strong-cvx-smooth-fn}). Using this notion we are now ready to introduce our next properties.\n\n\\begin{definition}[Local Strong Smoothness]\n\\label{defn:lss-em}\nA statistical estimation problem with a population likelihood maximizer $\\vth^\\ast$ satisfies the $(r,\\beta)$-Local Strong Smoothness (LSS) property if there exist $\\beta, r > 0$, such that for all $\\vtheta^1,\\vtheta^2 \\in \\cB_2(\\vth^\\ast,r)$, the function $q_{\\vth^\\ast}(\\cdot)$ satisfies\n\\[\n\\norm{\\nabla q_{\\vth^\\ast}(M(\\vtheta^1)) - \\nabla q_{\\vth^\\ast}(M(\\vtheta^2))}_2 \\leq \\beta\\cdot\\norm{\\vtheta^1-\\vtheta^2}_2\n\\]\n\\end{definition}\n\nThe above property ensures that in the restricted neighborhood around the optimum, the $Q$-function $q_{\\vth^\\ast}(\\cdot)$ is strongly smooth. The similarity to RSS is immediate. Note that this property also generalizes the self-consistency property we saw a moment ago.\n\nSelf-consistency forces $\\nabla q_{\\vth^\\ast}(\\vth^\\ast) = \\vzero$ at the optimum. LSS forces such behavior to extend around the optimum as well. To see this, simply set $\\vtheta^2 = \\vth^\\ast$ with LSS and observe the corollary $\\norm{\\nabla q_{\\vth^\\ast}(M(\\vtheta^1))}_2 \\leq \\beta\\cdot\\norm{\\vtheta^1-\\vth^\\ast}_2$. 
The curious reader may wish to relate this corollary to the Robust Bistability property (Definition~\\ref{defn:rob-bistable}) and the \\emph{Local Error Bound} property introduced by \\cite{LuoT1993}.\n\nThe LSS property offers, as another corollary, a useful property of statistical estimation problems called the \\emph{First Order Stability} property (introduced by \\cite{BalakrishnanWY2017} in a more general setting).\n\n\\begin{definition}[First Order Stability \\citep{BalakrishnanWY2017}]\nA statistical estimation problem with a population likelihood maximizer $\\vth^\\ast$ satisfies the $(r,\\gamma)$-First Order Stability (FOS) property if there exist $\\gamma >0, r > 0$ such that the gradients of the functions $q_\\vtheta(\\cdot)$ are stable in a neighborhood of $\\vth^\\ast$ i.e., for all $\\vtheta \\in \\cB_2(\\vth^\\ast,r)$,\n\\[\n\\norm{\\nabla q_\\vtheta(M(\\vtheta)) - \\nabla q_{\\vth^\\ast}(M(\\vtheta))}_2 \\leq \\gamma\\cdot\\norm{\\vtheta - \\vth^\\ast}_2.\n\\]\n\\end{definition}\n\n\\begin{lemma}\n\\label{lemma:lss-lrc-fos}\nA statistical estimation problem that satisfies the $(r,\\beta)$-LSS property also satisfies the $(r,\\beta)$-FOS property.\n\\end{lemma}\n\\begin{proof}\nSince $M(\\vtheta)$ maximizes the function $q_\\vtheta(\\cdot)$ due to the M-step, and the problem is unconstrained and the $Q$-functions differentiable, we get $\\nabla q_{\\vtheta}(M(\\vtheta)) = \\vzero$. 
Thus, we have, using the triangle inequality,\n\\begin{align*}\n&\\norm{\\nabla q_\\vtheta(M(\\vtheta)) - \\nabla q_{\\vth^\\ast}(M(\\vtheta))}_2 = \\norm{\\nabla q_{\\vth^\\ast}(M(\\vtheta))}_2 \\\\\n&\\leq \\norm{\\nabla q_{\\vth^\\ast}(M(\\vtheta)) - \\nabla q_{\\vth^\\ast}(M(\\vth^\\ast))}_2 + \\norm{\\nabla q_{\\vth^\\ast}(M(\\vth^\\ast))}_2\\\\\n&\\leq \\beta\\cdot\\norm{\\vtheta-\\vth^\\ast}_2,\n\\end{align*}\nusing self-consistency to get $M(\\vth^\\ast) = \\vth^\\ast$ and $\\nabla q_{\\vth^\\ast}(\\vth^\\ast) = \\vzero$.\n\\end{proof}\n\n\\section{A Local Convergence Guarantee for EM}\nWe will now prove a convergence result much stronger than Theorem~\\ref{thm:em-monotone}: if EM is initialized ``close'' to the optimal parameter $\\vth^\\ast$ for ``nice'' problems, then it approaches $\\vth^\\ast$ at a linear rate. For the sake of simplicity, we will analyze EM with population E-steps and fully corrective M-steps. We refer the reader to \\citep{BalakrishnanWY2017} for analyses of sample E-steps and stochastic M-steps. The following \\emph{contraction} lemma will be crucial in the analysis of the EM algorithm.\n\n\\begin{lemma}\n\\label{lem:em-local-contract}\nSuppose we have a statistical estimation problem with a population likelihood maximizer $\\vth^\\ast$ that, for some $\\alpha,\\beta,r > 0$, satisfies the $(r,\\alpha)$-LSC and $(r,\\beta)$-LSS properties. 
Then in the region $\\cB_2(\\vth^\\ast,r)$, the $M$ operator corresponding to the fully corrective M-step is contractive, i.e., for all $\\vtheta \\in \\cB_2(\\vth^\\ast,r)$,\n\\[\n\\norm{M(\\vtheta) - M(\\vth^\\ast)}_2 \\leq \\frac{\\beta}{\\alpha}\\cdot\\norm{\\vtheta - \\vth^\\ast}_2.\n\\]\n\\end{lemma}\n\nSince by the self-consistency property, we have $M(\\vth^\\ast) = \\vth^\\ast$ and the EM algorithm sets $\\vtheta^{t+1} = M(\\vtheta^t)$ due to the M-step, Lemma~\\ref{lem:em-local-contract} immediately guarantees the following local convergence property\\elink{exer:em-conv-like}.\n\n\\begin{theorem}\n\\label{thm:em-local-conv}\nSuppose a statistical estimation problem with population likelihood maximizer $\\vth^\\ast$ satisfies the $(r,\\alpha)$-LSC and $(r,\\beta)$-LSS properties such that $\\beta < \\alpha$. Let the EM algorithm (Algorithm~\\ref{algo:em}) be initialized with $\\vtheta^1 \\in \\cB_2(\\vth^\\ast,r)$ and executed with population E-steps and fully corrective M-steps. Then after at most $T = \\bigO{\\log\\frac{1}{\\epsilon}}$ steps, we have $\\norm{\\vtheta^T - \\vth^\\ast}_2 \\leq \\epsilon$.\n\\end{theorem}\n\nNote that the above result holds only if $\\beta < \\alpha$, in other words, if the condition number $\\kappa = \\beta\/\\alpha < 1$. We hasten to warn the reader that whereas in previous sections we always had $\\kappa \\geq 1$, here the LSC and LSS properties are defined differently (LSS involves the M-step whereas LSC does not) and thus it is possible that we have $\\kappa < 1$.\n\nAlso, since all functions satisfy $(0,0)$-LSC, it is plausible that for well-behaved problems, even for $\\alpha > \\beta$, there should exist some small radius $r(\\alpha)$ so that the $(r(\\alpha),\\alpha)$-LSC property holds. This may require the EM algorithm to be initialized closer to the optimum for the convergence properties to kick in. 
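As a quick numerical illustration of this linear rate, any operator that is $\\kappa$-contractive around $\\vth^\\ast$ with $\\kappa = \\beta\/\\alpha < 1$ drives the error down geometrically. The toy operator below is a stand-in of our own, not an actual M-step:

```python
# Toy kappa-contractive "M operator" around theta_star: each application
# shrinks the distance to theta_star by exactly the factor kappa = beta/alpha.
theta_star = 3.0
kappa = 0.5

def M(theta):
    return theta_star + kappa * (theta - theta_star)

theta = 10.0                       # assume this lies in the required ball
errors = []
for _ in range(20):
    theta = M(theta)
    errors.append(abs(theta - theta_star))

# After t steps the error is kappa**t times the initial error, so reaching
# accuracy epsilon takes t = O(log(1 / epsilon)) iterations.
```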
We now prove Lemma~\\ref{lem:em-local-contract} below.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:em-local-contract}]\nSince we have differentiable $Q$-functions and an unconstrained estimation problem, we immediately get a lot of useful results. We note that this lemma holds even for constrained estimation problems but the arguments are more involved, which we wish to avoid. Let $\\vtheta \\in \\cB_2(\\vth^\\ast,r)$ be any parameter in the $r$-neighborhood of $\\vth^\\ast$.\\\\\n\n\\noindent\\textbf{(Apply Local Strong Concavity)} Upon a two-sided application of LSC and using $\\nabla q_{\\vth^\\ast}(\\vth^\\ast) = \\vzero$, we get\n\\begin{align*}\nq_{\\vth^\\ast}(\\vth^\\ast) - q_{\\vth^\\ast}(M(\\vtheta)) - \\ip{\\nabla q_{\\vth^\\ast}(M(\\vtheta))}{\\vth^\\ast - M(\\vtheta)} &\\leq -\\frac{\\alpha}{2}\\norm{M(\\vtheta) - \\vth^\\ast}_2^2\\\\\nq_{\\vth^\\ast}(M(\\vtheta)) - q_{\\vth^\\ast}(\\vth^\\ast) &\\leq -\\frac{\\alpha}{2}\\norm{\\vth^\\ast - M(\\vtheta)}_2^2,\n\\end{align*}\nadding which gives us the inequality\n\\[\n\\ip{\\nabla q_{\\vth^\\ast}(M(\\vtheta))}{\\vth^\\ast - M(\\vtheta)} \\geq \\alpha\\cdot\\norm{M(\\vtheta) - \\vth^\\ast}_2^2.\n\\]\n\\noindent\\textbf{(Apply Local Strong Smoothness)} Since $M(\\vtheta)$ maximizes the function $q_\\vtheta(\\cdot)$ due to the M-step, we get $\\nabla q_{\\vtheta}(M(\\vtheta)) = \\vzero$. 
Thus,\n\\begin{align*}\n\\ip{\\nabla q_{\\vth^\\ast}(M(\\vtheta))}{\\vth^\\ast - M(\\vtheta)} = \\ip{\\nabla q_{\\vth^\\ast}(M(\\vtheta)) - \\nabla q_{\\vtheta}(M(\\vtheta))}{\\vth^\\ast - M(\\vtheta)}\\\\\n\\leq \\norm{\\nabla q_{\\vth^\\ast}(M(\\vtheta)) - \\nabla q_{\\vtheta}(M(\\vtheta))}_2\\norm{\\vth^\\ast - M(\\vtheta)}_2\n\\end{align*}\nUsing Lemma~\\ref{lemma:lss-lrc-fos} to invoke the $(r,\\beta)$-FOS property further gives us\n\\[\n\\ip{\\nabla q_{\\vth^\\ast}(M(\\vtheta))}{\\vth^\\ast - M(\\vtheta)} \\leq \\beta\\cdot\\norm{\\vtheta-\\vth^\\ast}_2\\norm{\\vth^\\ast - M(\\vtheta)}_2\n\\]\nCombining the two results proved above gives us\n\\[\n\\alpha\\cdot\\norm{M(\\vtheta) - \\vth^\\ast}_2^2 \\leq \\beta\\cdot\\norm{M(\\vtheta) - \\vth^\\ast}_2\\cdot\\norm{\\vtheta - \\vth^\\ast}_2,\n\\]\nwhich finishes the proof.\n\\end{proof}\n\n\\subsection{A Note on the Application of Convergence Guarantees}\nWe conclude the discussion with some comments on the feasibility, in practical settings, of the structural assumptions we used to prove the convergence guarantees. Rigorous proofs that these properties are indeed satisfied in practical applications are beyond the scope of this monograph; these can be found in \\citet{BalakrishnanWY2017}. Note that for the convergence guarantees (specifically Lemma~\\ref{lem:em-local-contract}) to hold, a problem need only satisfy the LSC and FOS properties, with LSS being equivalent to FOS due to Lemma~\\ref{lemma:lss-lrc-fos}.\\\\\n\n\\noindent\\textbf{Gaussian Mixture Models} To analyze the LSC and FOS properties, we need to look at the population version of the $Q$-function. 
Given the point-wise $Q$-function derivation in \\S~\\ref{sec:em-app}, we get for $\\vM = (\\vmu^0,\\vmu^1)$\n\\[\nQ(\\vM\\cond\\vM^\\ast) = -\\frac{1}{2} \\bE_{\\vy \\sim f_{\\vM^\\ast}}\\bs{w^0(\\vy)\\cdot\\norm{\\vy - \\vmu^0}_2^2 + w^1(\\vy)\\cdot\\norm{\\vy - \\vmu^1}_2^2},\n\\]\nwhere $w^z(\\vy) = e^{-\\frac{\\norm{\\vy - \\vmu^{\\ast,z}}_2^2}{2}}\\bs{e^{-\\frac{\\norm{\\vy - \\vmu^{\\ast,0}}_2^2}{2}} + e^{-\\frac{\\norm{\\vy - \\vmu^{\\ast,1}}_2^2}{2}}}^{-1}$ for $z = 0,1$. It can be seen that the function $q_{\\vM^\\ast}(\\cdot)$ satisfies $\\nabla^2 q_{\\vM^\\ast}(\\cdot) \\preceq -w\\cdot I$ where $w = \\min\\bc{\\E{w^0(\\vy)},\\E{w^1(\\vy)}} > 0$ and hence this problem satisfies the $(\\infty,w)$-LSC property, i.e., it is globally strongly concave.\n\nEstablishing the FOS property is more involved. However, it can be shown that the problem does satisfy the $(r,\\alpha)$-FOS property with $r = \\Omega(\\norm{\\vM^\\ast}_2)$ and $\\alpha = \\exp(-\\Om{\\norm{\\vM^\\ast}_2^2})$ under suitable conditions.\\\\\n\n\\noindent\\textbf{Mixed Regression} We again use the point-wise $Q$-function construction to build the population $Q$-function. For any $\\vW = (\\bt^0,\\bt^1)$,\n\\[\nQ(\\vW\\cond\\vW^\\ast) = -\\frac{1}{2}\\E{\\alpha^0_{(\\vx,y)}\\cdot(y - \\vx^\\top\\bt^0)^2 + \\alpha^1_{(\\vx,y)}\\cdot(y - \\vx^\\top\\bt^1)^2},\n\\]\nwhere $\\alpha^z_{(\\vx,y)} = e^{-\\frac{(y - \\vx^\\top\\bt^{\\ast,z})^2}{2}}\\bs{e^{-\\frac{(y - \\vx^\\top\\bt^{\\ast,0})^2}{2}} + e^{-\\frac{(y - \\vx^\\top\\bt^{\\ast,1})^2}{2}}}^{-1}$. Assuming $\\E{\\vx\\vx^\\top} = I$ for the sake of simplicity, we get $\\nabla^2 q_{\\vW^\\ast}(\\cdot) \\preceq -\\alpha\\cdot I$ where $\\alpha = \\min\\bc{\\lambda_{\\min}\\br{\\E{\\alpha^0_{(\\vx,y)}\\cdot\\vx\\vx^\\top}}, \\lambda_{\\min}\\br{\\E{\\alpha^1_{(\\vx,y)}\\cdot\\vx\\vx^\\top}}}$, i.e., the problem satisfies the $(\\infty,\\alpha)$-LSC property and is globally strongly concave. 
Establishing the FOS property is more involved but the problem does satisfy the $(r,\\alpha)$-FOS property with $r = \\Omega(\\norm{\\vW^\\ast}_2)$ and $\\alpha = \\Om{1}$ under suitable conditions.\\\\\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:em-em-k-means}\nShow that for Gaussian mixture models with a balanced isotropic mixture, the AM-LVM algorithm (Algorithm~\\ref{algo:em-hard-altopt}) exactly recovers Lloyd's algorithm for k-means clustering. Note that AM-LVM in this case prescribes setting $w^0_t(\\vy) = 1$ if $\\norm{\\vy - \\vmu^{t,0}}_2 \\leq \\norm{\\vy - \\vmu^{t,1}}_2$ and $0$ otherwise and also setting $w^1_t(\\vy) = 1 - w^0_t(\\vy)$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-sgd}\nShow that, in expectation, the stochastic EM update rule is equivalent to the sample E-step followed by the gradient M-step, i.e., $\\E{M^{\\text{sto}}(\\vtheta^t,Q_t)\\cond\\vtheta^t} = M^{\\text{grad}}(\\vtheta^t,Q^\\text{sam}_t(\\cdot\\cond\\vtheta^t))$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-gmm-m}\nDerive the fully corrective and gradient M-steps for the Gaussian mixture modeling problem. Show that they have closed forms.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-mr-em}\nDerive the E and M-step constructions for the mixed regression problem with fair Bernoulli trials.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-monotone}\nProve Lemma~\\ref{lem:em-monotone-point}.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-self-consis-q}\nLet $\\hat\\vtheta$ be a population likelihood maximizer, i.e.,\n\\[\n\\hat\\vtheta \\in \\underset{\\vtheta \\in \\Theta}{\\arg\\max}\\ \\Ee{\\vy \\sim f(\\vy|\\vth^\\ast)}{f(\\vy\\cond\\vtheta)}\n\\]\nThen show that $\\hat\\vtheta \\in \\underset{\\vtheta\\in\\Theta}{\\arg\\max}\\ q_{\\hat\\vtheta}(\\vtheta)$. 
\\textit{Hint}: One way to show this result is to use Theorem~\\ref{thm:em-monotone} and Lemma~\\ref{lem:em-monotone-point}.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-ss-lip-grad}\nShow that a function $f$ (whether convex or not) that has $L$-Lipschitz gradients is necessarily $L$-strongly smooth. Also show that for any given $L > 0$, there exist functions that are $L$-strongly smooth but do not have $L$-Lipschitz gradients.\\\\\n\\textit{Hint}: Use the fundamental theorem of calculus for line integrals for the first part. For the second part try using a quadratic function.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-conv-like}\nFor any statistical estimation problem with population likelihood maximizer $\\vth^\\ast$ that satisfies the LSC and LSS properties with appropriate constants, show that parameters close to $\\vth^\\ast$ are approximate population likelihood maximizers themselves. Show this by finding constants $\\epsilon_0, D > 0$ (that may depend on the LSC, LSS constants) such that for any $\\vtheta$, if $\\norm{\\vtheta - \\vth^\\ast}_2 \\leq \\epsilon < \\epsilon_0$, then\n\\[\n\\Ee{\\vy \\sim f(\\vy|\\vth^\\ast)}{f(\\vy\\cond\\vtheta)} \\geq \\Ee{\\vy \\sim f(\\vy|\\vth^\\ast)}{f(\\vy\\cond\\vth^\\ast)} - D\\cdot\\epsilon.\n\\]\n\\textit{Hint}: Use Lemma~\\ref{lem:em-monotone-point} and Exercise~\\ref{exer:em-ss-lip-grad}.\n\\end{exer}\n\\begin{exer}\n\\label{exer:em-altmax-monotone}\nSimilar to how the EM algorithm ensures monotone progress with respect to the likelihood objective (see \\S~\\ref{sec:em-monotone}), show that the AM-LVM algorithm (see Algorithm~\\ref{algo:em-hard-altopt}) ensures monotone progress with respect to the following optimization problem\n\\[\n\\max_{\\substack{\\vtheta\\in\\Theta\\\\\\hat\\vz_1,\\ldots,\\hat\\vz_n\\in\\cZ}}\\ \\cL\\br{\\vtheta;\\bc{(\\vy_i,\\hat\\vz_i)}_{i=1}^n}.\n\\]\nMore specifically, show that if the iterates obtained by AM-LVM at time $t$ are $\\vtheta^t,\\bc{\\hat\\vz^t_i}_{i=1}^n$, 
then we have for all $t \\geq 1$\n\\[\n\\cL\\br{\\vtheta^{t+1};\\bc{(\\vy_i,\\hat\\vz^{t+1}_i)}_{i=1}^n} \\geq \\cL\\br{\\vtheta^t;\\bc{(\\vy_i,\\hat\\vz^t_i)}_{i=1}^n}\n\\]\n\\end{exer}\n\n\\section{Bibliographic Notes}\n\\label{sec:em-bib}\nThe EM algorithm was given its name and formally introduced in the seminal work of \\citet{DempsterLR1977}. Although several results exist that attempt to characterize the convergence properties of the EM algorithm, prominent among them is the work of \\citet{Wu1983} which showed that for problems with a uni-modal likelihood function and some regularity conditions that make the estimation problem well-posed, the EM algorithm converges to the (unique) global optimum.\n\nWe have largely followed the work of \\citet{BalakrishnanWY2017} in the treatment of the topic in this section. Recent results have attempted to give local convergence results for the EM algorithm, similar to the ones we discussed. Examples include analyses of the Baum-Welch algorithm used to learn hidden Markov models \\citep{YangBW2015}, the EM algorithm \\citep{BalakrishnanWY2017}, and the more general problem of M-estimation \\citep{AndresenS2016} (recall that maximum likelihood estimators are members of the broader class of M-estimators). The recurring message here is that upon proper initialization, the EM algorithm does converge to the global optimum.\n\nFor certain problems, such as mixed regression, it is known how to get such initializations in polynomial time under regularity assumptions \\citep{YiCS2014} which makes the entire procedure a polynomial time solution to get globally consistent parameters. Other recent advances in the area include extensions to high dimensional estimation problems by way of regularization \\citep{YiC2015} or directly incorporating sparsity structure into the procedure \\citep{WangGNL2015}. 
We refer the reader to the work of \\citet{BalakrishnanWY2017} and other more recent works for a more detailed review of the literature in this area.\n\nAs a closing note, we reiterate our previous comment on similarities between EM and alternating minimization. Both have an alternating pattern in their execution with each alternation typically being cheap and easy to execute. Indeed, for several areas in which the EM approach has been found successful, such as learning mixture models, regression with missing or corrupted data, phase retrieval etc., there also exist efficient alternating minimization algorithms. We will look at some of them in later sections. Having said that, the EM algorithm does distinguish itself within the general alternating minimization approach in its single-minded goal of approaching the MLE by promoting parameters that increase the likelihood of the data.\n\\chapter{Introduction}\n\\label{chap:intro}\n\nThis section will set the stage for subsequent discussions by motivating some of the non-convex optimization problems we will be studying using real-life examples, as well as setting up notation for the same.\n\n\\section{Non-convex Optimization}\nThe generic form of an analytic optimization problem is the following\n\\begin{align*}\n\\min_{\\vx \\in \\bR^p}\\ & f(\\vx)\\\\\n\\text{s.t.}\\ & \\vx \\in \\cC,\n\\end{align*}\nwhere $\\vx$ is the \\emph{variable} of the problem, $f: \\bR^p \\rightarrow \\bR$ is the \\emph{objective function} of the problem, and $\\cC \\subseteq \\bR^p$ is the \\emph{constraint set} of the problem. 
When used in a machine learning setting, the objective function allows the algorithm designer to encode proper and expected behavior for the machine learning model, such as fitting well to training data with respect to some loss function, whereas the constraint allows restrictions on the model to be encoded, for instance, restrictions on model size.\n\nAn optimization problem is said to be \\emph{convex} if the objective is a convex function and the constraint set is a convex set. We refer the reader to \\S~\\ref{chap:tools} for formal definitions of these terms. An optimization problem that violates either one of these conditions, i.e., one that has a non-convex objective, or a non-convex constraint set, or both, is called a \\emph{non-convex} optimization problem. In this monograph, we will discuss non-convex optimization problems with non-convex objectives and convex constraints (\\S~\\ref{chap:altmin}, \\ref{chap:em}, \\ref{chap:saddle}, and \\ref{chap:matrec}), as well as problems with non-convex constraints but convex objectives (\\S~\\ref{chap:pgd}, \\ref{chap:spreg}, \\ref{chap:rreg}, \\ref{chap:phret}, and \\ref{chap:matrec}). Such problems arise in many application areas.\n\n\\section{Motivation for Non-convex Optimization}\nModern applications frequently require learning algorithms to operate in extremely high dimensional spaces. Examples include web-scale document classification problems where $n$-gram-based representations can have dimensionalities in the millions or more, recommendation systems with millions of items being recommended to millions of users, and signal processing tasks such as face recognition and image processing, as well as bio-informatics tasks such as splice and gene detection, all of which present similarly high dimensional data.\n\nDealing with such high dimensionalities necessitates the imposition of structural constraints on the learning models being estimated from data. 
Such constraints are not only helpful in regularizing the learning problem, but often essential to prevent the problem from becoming ill-posed. For example, suppose we know how a user rates some items and wish to infer how this user would rate other items, possibly in order to inform future advertisement campaigns. To do so, it is essential to impose some structure on how a user's ratings for one set of items influence ratings for other kinds of items. Without such structure, it becomes impossible to infer any new user ratings. As we shall soon see, such structural constraints often turn out to be non-convex.\n\nIn other applications, the natural objective of the learning task is a non-convex function. Common examples include training deep neural networks and tensor decomposition problems. Although non-convex objectives and constraints allow us to accurately model learning problems, they often present a formidable challenge to algorithm designers. This is because unlike convex optimization, we do not possess a handy set of tools for solving non-convex problems. Several non-convex optimization problems are known to be NP-hard to solve. The situation is made bleaker by a range of non-convex problems that are not only NP-hard to solve optimally, but NP-hard to solve approximately as well \\citep{MekaJCD2008}.\n\n\\section{Examples of Non-Convex Optimization Problems}\nBelow we present some areas where non-convex optimization problems arise naturally when devising learning problems.\\\\\n\n\\noindent\\textbf{Sparse Regression} The classical problem of linear regression seeks to recover a linear model which can effectively predict a response variable as a linear function of covariates. For example, we may wish to predict the average expenditure of a household (the response) as a function of the education levels of the household members, their annual salaries and other relevant indicators (the covariates). 
The ability to do so allows economic policy decisions to be better informed by revealing, for instance, how education level affects expenditure.\n\nMore formally, we are provided a set of $n$ covariate\/response pairs $(\\vx_1, y_1),\\ldots,(\\vx_n,y_n)$ where $\\vx_i \\in \\bR^p$ and $y_i \\in \\bR$. The linear regression approach makes the modeling assumption $y_i = \\vx_i^\\top\\bto + \\eta_i$ where $\\bto \\in \\bR^p$ is the underlying linear model and $\\eta_i$ is some benign additive noise. Using the data provided $\\bc{\\vx_i,y_i}_{i=1,\\ldots,n}$, we wish to recover the model $\\bto$ as faithfully as possible.\n\nA popular way to recover $\\bto$ is using the \\emph{least squares} formulation\n\\[\n\\bth = \\underset{\\bt\\in\\bR^p}{\\arg\\min}\\ \\sum_{i=1}^n\\br{y_i - \\vx_i^\\top\\bt}^2.\n\\]\nThe linear regression problem, as well as the least squares estimator, are extremely well studied and their behavior precisely known. However, this age-old problem acquires new dimensions in situations where either we expect only a few of the $p$ features\/covariates to be actually relevant to the problem but do not know their identity, or else are working in extremely data-starved settings, i.e., $n \\ll p$.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{fam.pdf}\n\\caption[Sparse Recovery for Automated Feature Selection]{Not all available parameters and variables may be required for a prediction or learning task. Whereas the family size may significantly influence family expenditure, the eye color of family members does not directly or significantly influence it. Non-convex optimization techniques, such as sparse recovery, help discard irrelevant parameters and promote compact and accurate models.}%\n\\label{fig:fam}\n\\end{figure}\n\nThe first problem often arises when there is an excess of covariates, several of which may be spurious or have no effect on the response. \\S~\\ref{chap:spreg} discusses several such practical examples. 
For now, consider the example depicted in Figure~\\ref{fig:fam}, that of expenditure prediction in a situation where the list of indicators includes irrelevant ones such as whether the family lives in an odd-numbered house or not, which should arguably have no effect on expenditure. It is useful to eliminate such variables from consideration to promote consistency of the learned model.\n\nThe second problem is common in areas such as genomics and signal processing, which face moderate to severe \\emph{data starvation}, where the number of data points $n$ available to estimate the model is small compared to the number of model parameters $p$ to be estimated, i.e., $n \\ll p$. Standard statistical approaches require at least $n \\geq p$ data points to ensure a consistent estimation of all $p$ model parameters and are unable to offer accurate model estimates in the face of data-starvation.\n\nBoth these problems can be handled by the \\emph{sparse recovery} approach, which seeks to fit a sparse model vector (i.e., a vector with, say, no more than $s$ non-zero entries) to the data. The least squares formulation, modified as a sparse recovery problem, is given below\n\\begin{align*}\n\\bth_\\text{sp} = \\underset{\\bt \\in \\bR^p}{\\arg\\min}\\ & \\sum_{i=1}^n\\br{y_i - \\vx_i^\\top\\bt}^2\\\\\n\\text{s.t.}\\ & \\bt \\in \\cB_0(s).\n\\end{align*}\n\nAlthough the objective function in the above formulation is convex, the constraint $\\norm{\\bt}_0 \\leq s$ (equivalently $\\bt \\in \\cB_0(s)$ -- see list of mathematical notation at the beginning of this monograph) corresponds to a non-convex constraint set\\elink{exer:tools-nonconv-sp}. Sparse recovery effortlessly solves the twin problems of discarding irrelevant covariates and countering data-starvation since typically only $n \\geq s\\log p$ (as opposed to $n \\geq p$) data points are required for sparse recovery to work, which drastically reduces the data requirement. 
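The claim that roughly $n \geq s\log p$ samples suffice can be checked empirically. A natural heuristic for the formulation above is projected gradient descent onto the non-convex set $\cB_0(s)$, i.e., iterative hard thresholding (IHT, one of the non-convex methods compared later in Figure~\ref{fig:intro-comparison}). The following Python sketch recovers a planted sparse model with $n \ll p$; the dimensions, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 400, 1000, 10                       # n << p, yet n well above s*log(p)
A = rng.standard_normal((n, p)) / np.sqrt(n)  # random design with approx. unit-norm columns
theta_star = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
theta_star[support] = rng.standard_normal(s)  # planted s-sparse model
y = A @ theta_star                            # noiseless responses

def project_B0(v, s):
    """Project v onto B_0(s): keep only the s largest-magnitude entries."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

theta = np.zeros(p)
for _ in range(200):
    # gradient step on the least squares objective, then projection onto B_0(s)
    theta = project_B0(theta + A.T @ (y - A @ theta), s)

print(np.linalg.norm(theta - theta_star))     # near zero: the planted model is recovered
```

Ordinary least squares is hopeless here since the linear system is underdetermined ($n < p$), yet the projected iteration recovers the planted model on this random instance.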
Unfortunately, however, sparse recovery is an NP-hard problem \\citep{Natarajan1995}.\\\\\n\n\\noindent\\textbf{Recommendation Systems} Several internet search engines and e-commerce websites utilize recommendation systems to offer items to users that they would benefit from, or like, the most. The problem of recommendation encompasses benign recommendations for songs etc., all the way to critical recommendations in personalized medicine.\n\nTo be able to make accurate recommendations, we need very good estimates of how each user likes each item (song), or would benefit from it (drug). We usually have first-hand information for some user-item pairs, for instance if a user has specifically rated a song or if we have administered a particular drug to a user and seen the outcome. However, users typically rate only a handful of the hundreds of thousands of songs in any commercial catalog and it is not feasible, or even advisable, to administer every drug to a user. Thus, for the vast majority of user-item pairs, we have no direct information.\n\nIt is useful to visualize this problem as a \\emph{matrix completion} problem: for a set of $m$ users $u_1,\\ldots,u_m$ and $n$ items $a_1,\\ldots,a_n$, we have an $m \\times n$ \\emph{preference matrix} $A = [A_{ij}]$ where $A_{ij}$ encodes the preference of the $i\\th$ user for the $j\\th$ item. We are able to directly view only a small number of entries of this matrix, for example, whenever a user explicitly rates an item. However, we wish to recover the remaining entries, i.e., complete this matrix. This problem is closely linked to the \\emph{collaborative filtering} technique popular in recommendation systems.\n\nNow, it is easy to see that unless there exists some structure in the matrix, and by extension, in the way users rate items, there would be no relation between the unobserved entries and the observed ones. This would result in there being no unique way to complete the matrix. 
Thus, it is essential to impose some structure on the matrix. A structural assumption popularly made is that of low rank: we wish to fill in the missing entries of $A$ assuming that $A$ is a low rank matrix. This can make the problem well-posed and ensure a unique solution since the additional low rank structure links the entries of the matrix together. The unobserved entries can no longer take values independently of the values observed by us. Figure~\\ref{fig:cf} depicts this visually.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{cf.pdf}\n\\caption[Matrix Completion for Recommendation Systems]{Only the entries of the ratings matrix with thick borders are observed. Notice that users rate infrequently and some items are not rated even once. Non-convex optimization techniques such as low-rank matrix completion can help recover the unobserved entries, as well as reveal hidden features that are descriptive of user and item properties, as shown on the right hand side.}%\n\\label{fig:cf}\n\\end{figure}\n\nIf we denote by $\\Omega \\subset [m] \\times [n]$ the set of observed entries of $A$, then the low rank matrix completion problem can be written as\n\\begin{align*}\n\\hat A_\\text{lr} = \\underset{X \\in \\bR^{m \\times n}}{\\arg\\min}\\ & \\sum_{(i,j) \\in \\Omega}\\br{X_{ij} - A_{ij}}^2\\\\\n\\text{s.t.}\\ & \\rank(X) \\leq r.\n\\end{align*}\n\nThis formulation also has a convex objective but a non-convex rank constraint\\elink{exer:pgd-nc-rank}. This problem can be shown to be NP-hard as well. Interestingly, we can arrive at an alternate formulation by imposing the low-rank constraint indirectly. It turns out that\\elink{exer:pgd-low-rank} assuming the ratings matrix to have rank at most $r$ is equivalent to assuming that the matrix $A$ can be written as $A = UV^\\top$ with the matrices $U \\in \\bR^{m \\times r}$ and $V \\in \\bR^{n \\times r}$ having at most $r$ columns. 
This leads us to the following alternate formulation\n\\[\n\\hat A_\\text{lv} = \\underset{\\substack{U \\in \\bR^{m \\times r}\\\\V \\in \\bR^{n \\times r}}}{\\arg\\min}\\ \\sum_{(i,j) \\in \\Omega}\\br{U_i^\\top V_j - A_{ij}}^2.\n\\]\nThere are no constraints in the formulation. However, the formulation requires joint optimization over a pair of variables $(U,V)$ instead of a single variable. More importantly, it can be shown\\elink{exer:altmin-marg-conv} that the objective function is non-convex in $(U,V)$.\n\nIt is curious to note that the matrices $U$ and $V$ can be seen as encoding $r$-dimensional descriptions of users and items respectively. More precisely, for every user $i \\in [m]$, we can think of the vector $U^i \\in \\bR^r$ (i.e., the $i$-th row of the matrix $U$) as describing user $i$, and for every item $j \\in [n]$, use the row vector $V^j \\in \\bR^r$ to describe the item $j$ in vector form. The rating given by user $i$ to item $j$ can now be seen to be $A_{ij} \\approx \\ip{U^i}{V^j}$. Thus, recovering the rank $r$ matrix $A$ also gives us a set of $r$-dimensional latent vectors describing the users and items. These latent vectors can be extremely valuable in themselves as they can help us in understanding user behavior and item popularity, as well as be used in ``content''-based recommendation systems which can effectively utilize item and user features.\\\\\n\nThe above examples, and several others from machine learning, such as low-rank tensor decomposition, training deep networks, and training structured models, demonstrate the utility of non-convex optimization in naturally modeling learning tasks. However, most of these formulations are NP-hard to solve exactly, and sometimes even approximately. 
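Although the factored objective is non-convex in $(U,V)$ jointly, it is convex in $U$ for fixed $V$ and vice versa, which suggests alternating minimization: solve a small least squares problem for each row of $U$, then for each row of $V$, and repeat. A minimal Python sketch on a synthetic low-rank matrix (the dimensions, sampling rate, and sweep count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # planted rank-r matrix
mask = rng.random((m, n)) < 0.5               # observed entries Omega (50% sampling)

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
for _ in range(50):
    for i in range(m):                        # least squares for the user vector U^i
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], A[i, obs], rcond=None)[0]
    for j in range(n):                        # least squares for the item vector V^j
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], A[obs, j], rcond=None)[0]

err = np.linalg.norm(U @ V.T - A) / np.linalg.norm(A)
print(err)                                    # relative error on all entries, observed or not
```

Note that the factors are identifiable only up to an invertible $r \times r$ transformation, so the comparison is made on the completed matrix $UV^\top$ rather than on $U,V$ themselves.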
In the following discussion, we will briefly introduce a few approaches, classical as well as contemporary, that are used in solving such non-convex optimization problems.\n\n\\section{The Convex Relaxation Approach}\nFaced with the challenge of non-convexity, and the associated NP-hardness, a traditional workaround in the literature has been to modify the problem formulation itself so that existing tools can be readily applied. This is often done by \\emph{relaxing} the problem so that it becomes a convex optimization problem. Since this allows familiar algorithmic techniques to be applied, the so-called \\emph{convex relaxation} approach has been widely studied. For instance, there exist relaxed, convex problem formulations for both the recommendation system and the sparse regression problems. For sparse linear regression, the relaxation approach gives us the popular LASSO formulation.\n\nNow, in general, such modifications change the problem drastically, and the solutions of the relaxed formulation can be poor solutions to the original problem. However, it is known that if the problem possesses certain nice structure, then under careful relaxation, these distortions, formally referred to as a ``relaxation gap'', are absent, i.e., solutions to the relaxed problem would be optimal for the original non-convex problem as well.\n\nAlthough a popular and successful approach, this still has limitations, the most prominent of them being scalability. Although the relaxed convex optimization problems are solvable in polynomial time, it is often challenging to solve them \\emph{efficiently} for large-scale problems.\n\n\\section{The Non-Convex Optimization Approach}\nInterestingly, in recent years, a new wisdom has permeated the fields of machine learning and signal processing, one that advises not to relax the non-convex problems and instead solve them directly. 
This approach has often been dubbed the \\emph{non-convex optimization} approach owing to its goal of optimizing non-convex formulations directly.\n\nTechniques frequently used in non-convex optimization approaches include simple and efficient primitives such as projected gradient descent, alternating minimization, the expectation-maximization algorithm, stochastic optimization, and variants thereof. These are very fast in practice and remain favorites of practitioners.\n\n\\begin{figure}[t]\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{spreg-comp.pdf}\n\\caption{Sparse Recovery (\\S~\\ref{chap:spreg})}\n\\label{fig:intro-comparison-spreg}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{rreg-comp.pdf}\n\\caption{Robust Regression (\\S~\\ref{chap:rreg})}\n\\label{fig:intro-comparison-rreg}\n\\end{subfigure}\\\\\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{matrec-comp.pdf}\n\\caption{Matrix Recovery (\\S~\\ref{chap:matrec})}\n\\label{fig:intro-comparison-matrec}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{matcomp-comp.pdf}\n\\caption{Matrix Completion (\\S~\\ref{chap:matrec})}\n\\label{fig:intro-comparison-matcomp}\n\\end{subfigure}%\n\\caption[Relaxation vs. Non-convex Optimization Methods]{An empirical comparison of run-times offered by various approaches to four different non-convex optimization problems. LASSO, extended LASSO, SVT are relaxation-based methods whereas IHT, gPGD, FoBa, AM-RR, SVP, ADMiRA are non-convex methods. In all cases, non-convex optimization techniques offer routines that are faster, often by an order of magnitude or more, than relaxation-based methods. Note that Figures~\\ref{fig:intro-comparison-matrec} and \\ref{fig:intro-comparison-matcomp} employ a $y$-axis at logarithmic scale. 
The details of the methods are presented in the sections linked with the respective figures.}%\n\\label{fig:intro-comparison}\n\\end{figure}\n\nAt first glance, however, these efforts seem doomed to fail, given the aforementioned NP-hardness results. However, in a series of deep and illuminating results, it has been repeatedly revealed that if the problem possesses nice structure, then not only do relaxation approaches succeed, but non-convex optimization algorithms do too. In such nice cases, non-convex approaches are able to not only avoid NP-hardness, but actually offer provably optimal solutions. In fact, in practice, they often handsomely outperform relaxation-based approaches in terms of speed and scalability. Figure~\\ref{fig:intro-comparison} illustrates this for some applications that we will investigate more deeply in later sections.\n\nVery interestingly, it turns out that problem structures that allow non-convex approaches to avoid NP-hardness results are very similar to those that allow their convex relaxation counterparts to avoid distortions and a large relaxation gap! Thus, it seems that if the problems possess nice structure, convex relaxation-based approaches, as well as non-convex techniques, both succeed. However, non-convex techniques usually offer more scalable solutions.\n\n\\section{Organization and Scope}\nOur goal in this monograph is to present basic tools, both algorithmic and analytic, that are commonly used in the design and analysis of non-convex optimization algorithms, as well as present results which best represent the non-convex optimization philosophy. The presentation should enthuse, as well as equip, the interested reader and allow further readings, independent investigations, and applications of these techniques in diverse areas.\n\nGiven this broad aim, we shall appropriately restrict the number of areas we cover in this monograph, as well as the depth in which we cover each area. 
For instance, the literature abounds in results that seek to perform optimizations with more and more complex structures being imposed, from sparse recovery to low rank matrix recovery to low rank tensor recovery. However, we shall avoid venturing too far into these progressions. Similarly, within the problem of sparse recovery, there exist results for recovery in the simple least squares setting, the more involved setting of sparse M-estimation, as well as the still more involved setting of sparse M-estimation in the presence of outliers. Whereas we will cover sparse least squares estimation in depth, we will refrain from delving too deeply into the more involved sparse M-estimation problems.\n\nThat being said, the entire presentation will be self-contained and accessible to anyone with a basic background in algebra and probability theory. Moreover, the bibliographic notes given at the end of the sections will provide pointers that should enable the reader to explore the state of the art not covered in this monograph.\n\\chapter{Low-rank Matrix Recovery}\n\\label{chap:matrec}\n\nIn this section, we will look at the problem of low-rank matrix recovery in detail. Although simple to motivate as an extension of the sparse recovery problem that we studied in \\S~\\ref{chap:spreg}, the problem rapidly distinguishes itself in requiring specific tools, both algorithmic and analytic. 
We will start our discussion with a milder version of the problem as a warm up and move on to the problem of low-rank matrix completion which is an active area of research.\n\n\\section{Motivating Applications}\nWe will take the following two running examples to motivate the problem of low-rank matrix recovery.\\\\\n\n\\noindent\\textbf{Collaborative Filtering} Recommendation systems are popularly used to model the preference patterns of users, say at an e-commerce website, for items being sold on that website, although the principle of recommendation extends to several other domains that demand \\emph{personalization} such as education and healthcare. Collaborative filtering is a popular technique for building recommendation systems.\n\nThe collaborative filtering approach seeks to exploit co-occurring patterns in the observed behavior across users in order to predict future user behavior. This approach has proven successful in addressing users that interact very sparingly with the system. Consider a set of $m$ users $u_1,\\ldots,u_m$, and $n$ items $a_1,\\ldots,a_n$. Our goal is to predict the preference score $s_{(i,j)}$ that is indicative of the interest user $u_i$ has in item $a_j$.\n\nHowever, we get direct access to (noisy estimates of) actual preference scores for only a few items per user by looking at clicks, purchases etc. That is to say, if we consider the $m \\times n$ \\emph{preference matrix} $A = [A_{ij}]$ where $A_{ij} = s_{(i,j)}$ encodes the (true) preference of the $i\\th$ user for the $j\\th$ item, we get to see only $k \\ll m\\cdot n$ entries of $A$, as depicted in Figure~\\ref{fig:matrec-cf}. Our goal is to recover the remaining entries.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{cf.pdf}\n\\caption[Low-rank Matrix Completion for Recommendation]{In a typical recommendation system, users rate items very infrequently and certain items may not get rated even once. The figure depicts a ratings matrix. 
Only the matrix entries with a bold border are observed. Low-rank matrix completion can help recover the unobserved entries, as well as reveal hidden features that are descriptive of user and item properties, as shown on the right-hand side.}%\n\label{fig:matrec-cf}\n\end{figure}\n\nThe problem of paucity of available data is readily apparent in this setting. In its nascent form, the problem is not even well posed and does not admit a unique solution. A popular way of overcoming these problems is to assume a low-rank structure in the preference matrix. \n\nAs we saw in Exercise~\ref{exer:pgd-low-rank}, this is equivalent to assuming that there is an $r$-dimensional vector $\vu_i$ denoting the $i\th$ user and an $r$-dimensional vector $\va_j$ denoting the $j\th$ item such that $s_{(i,j)} \approx \ip{\vu_i}{\va_j}$. Thus, if $\Omega \subset [m] \times [n]$ is the set of entries that have been observed by us, then the problem of recovering the unobserved entries can be cast as the following optimization problem:\n\[\n\underset{\substack{X \in \bR^{m \times n}\\\rank(X) \leq r}}{\min}\ \sum_{(i,j) \in \Omega}\br{X_{ij} - A_{ij}}^2.\n\]\nThis problem can be shown to be NP-hard \citep{HardtMRW2014} but has generated an enormous amount of interest across communities. We shall give special emphasis to this \emph{matrix completion} problem in the second part of this section.\\\n\n\noindent\textbf{Linear Time-invariant Systems} Linear Time-invariant (LTI) systems are widely used in modeling dynamical systems in fields such as engineering and finance. The \emph{response behavior} of these systems is characterized by a model vector $\vh = [h(0),h(1),\ldots,h(2N-1)]$. 
The \\emph{order} of such a system is given by the rank of the following Hankel matrix\n\\[\n\\text{hank}(\\vh) =\\left[\n\\begin{array}{cccc}\n\th(0) & h(1) & \\ldots & h(N)\\\\\n\th(1) & h(2) & \\ldots & h(N+1)\\\\\n\t\\vdots & \\vdots & \\ddots & \\vdots\\\\\n\th(N-1) & h(N) & \\ldots & h(2N-1)\n\\end{array}\n\\right]\n\\]\nGiven a sequence of inputs $\\va = [a(1),a(2),\\ldots,a(N)]$ to the system, the output of the system is given by\n\\[\ny(N) = \\sum_{t=0}^{N-1}a(N-t)h(t)\n\\]\nIn order to recover the model parameters of a system, we repeatedly apply i.i.d. Gaussian impulses $a(i)$ to the system for $N$ time steps and then observe the output of the system. This process is repeated, say $k$ times, to yield observation pairs $\\bc{(\\va^i,y^i)}_{i=1}^k$. Our goal now, is to take these observations and identify an LTI vector $\\vh$ that best fits the data. However, for the sake of accuracy and ease of analysis \\citep{FazelPST2013}, it is advisable to fit a low-order model to the data. Let the matrix $A \\in \\bR^{k\\times N}$ contain the i.i.d. Gaussian impulses applied to the system. Then the problem of fitting a low-order model can be shown to reduce to the following constrained optimization problem with a rank objective and an affine constraint.\n\\begin{equation*}\n\\begin{array}{cc}\n\t\\min & \\rank(\\text{hank}(\\vh))\\\\\n\t\\text{s.t.} & A\\vh = \\vy,\n\\end{array}\n\\end{equation*}\nThe above problem is a non-convex optimization problem due to the objective being the minimization of the rank of a matrix. Several other problems in metric embedding and multi-dimensional scaling, image compression, low rank kernel learning and spatio-temporal imaging can also be reduced to low rank matrix recovery problems \\citep{JainMD2010, RechtFP2010}.\n\n\\section{Problem Formulation}\nThe two problems considered above can actually be cast in a single problem formulation, that of \\emph{Affine Rank Minimization} (ARM). 
Consider a low rank matrix $X^\ast \in \bR^{m \times n}$ that we wish to recover and an affine transformation $\cA: \bR^{m \times n} \rightarrow \bR^k$. The transformation can be seen as a concatenation of $k$ real-valued affine transformations $\cA_i: \bR^{m \times n} \rightarrow \bR$. We are given the transformation $\cA$ and its (possibly noisy) action $\vy = \cA(X^\ast) \in \bR^k$ on the matrix $X^\ast$, and our goal is to recover this matrix by solving the following optimization problem.\n\begin{equation}\n\begin{array}{cl}\n\t\min & \rank(X)\\\n\t\text{s.t.} & \cA(X) = \vy,\n\end{array}\n\tag*{(ARM)}\label{eq:arm}\n\end{equation}\nThis problem can be shown to be NP-hard via a reduction from the sparse recovery problem\elink{exer:matrec-np-hard}. The LTI modeling problem can be easily seen to be an instance of ARM with the Gaussian impulses being delivered to the system resulting in a $k$-dimensional affine transformation of the Hankel matrix corresponding to the system. The collaborative filtering problem is also an instance of ARM, although less obviously so. To see this, for any $(i,j) \in [m]\times[n]$, let $O^{(i,j)} \in \bR^{m\times n}$ be the matrix such that its $(i,j)$-th entry $O^{(i,j)}_{ij}=1$ and all other entries are zero. Then, simply define the affine transformation $\cA_{(i,j)} : X \mapsto \text{tr}(X^\top O^{(i,j)}) = X_{ij}$. Thus, if we observe $k$ user-item ratings, the ARM problem effectively operates with a $k$-dimensional affine transformation of the underlying rating matrix.\n\nDue to its similarity to the sparse recovery problem, we will first discuss the general ARM problem. However, we will find it beneficial to cast the collaborative filtering problem as a \emph{Low-rank Matrix Completion} problem instead. In this problem, we have an underlying low rank matrix $X^\ast$ of which we observe entries in a set $\Omega \subset {[m]\times[n]}$. 
Then the low rank matrix completion problem can be stated as\n\begin{equation}\n\underset{\substack{X \in \bR^{m \times n}\\ \rank(X) \leq r}}{\min}\ \norm{\Pi_{\Omega}(X - X^\ast)}_F^2,\tag*{(LRMC)}\label{eq:lrmc}\n\end{equation}\nwhere $\Pi_\Omega(X)$ is defined, for any matrix $X$, as\n\[\n\Pi_\Omega(X)_{i,j} = \left\{\n\begin{array}{cc}\n\tX_{i,j} & \text{ if } (i,j) \in \Omega\\\n\t0 & \text{ otherwise}.\n\end{array}\n\right.\n\]\nThe above formulation succinctly captures our objective to find a \emph{completion} of the ratings matrix that is both low rank and agrees with the user ratings that are actually observed. As pointed out earlier, this problem is NP-hard \citep{HardtMRW2014}.\n\nBefore moving on to present algorithms for the ARM and LRMC problems, we discuss some matrix design properties that would be required in the convergence analyses of the algorithms.\n\n\section{Matrix Design Properties}\nSimilar to sparse recovery, there exist design properties that ensure that the general NP-hardness of the ARM and LRMC problems can be overcome in well-behaved problem settings. In fact, given the similarity between the ARM and sparse recovery problems, it is tempting to try and import concepts such as RIP into the matrix-recovery setting.\n\nIndeed, this was exactly the first line of attack adopted in the literature. What followed was a beautiful body of work that generalized both structural notions such as RIP and algorithmic techniques such as IHT to address the ARM problem. Given the generality of these constructs, as well as the smooth transition they offer from our study of sparse recovery, we feel compelled to present them to the reader.\n\n\subsection{The Matrix Restricted Isometry Property}\nThe generalization of RIP to matrix settings, referred to as matrix RIP, follows in a relatively straightforward manner and was first elucidated by \cite{RechtFP2010}. 
Quite in line with the sparse recovery setting, the intuition dictates that recovery should be possible only if the affine transformation does not identify two distinct low-rank matrices, i.e., does not map them to the same vector of measurements. A more robust version dictates that no low-rank matrix should be distorted significantly by this transformation, which gives us the following.\n\n\begin{definition}[Matrix Restricted Isometry Property \citep{RechtFP2010}]\n\label{defn:matrix-rip}\nA linear map $\cA: \bR^{m \times n} \rightarrow \bR^k$ is said to satisfy the matrix restricted isometry property of order $r$ with constant $\delta_r \in [0,1)$ if for all matrices $X$ of rank at most $r$, we have\n\begin{center}\n\t$(1-\delta_r)\cdot\norm{X}_F^2 \leq \norm{\cA(X)}_2^2 \leq (1+\delta_r)\cdot\norm{X}_F^2$.\n\end{center}\n\end{definition}\nFurthermore, the work of \cite{RechtFP2010} also showed that linear maps or affine transformations arising in random measurement models, such as those in image compression and LTI systems, do satisfy RIP with requisite constants whenever the number of affine measurements satisfies $k = \bigO{nr}$ \citep{OymakRS15a}. Note, however, that these are settings in which the design of the affine map is within our control. For settings where the restricted condition number of the affine map is not within our control, more involved analysis is required. The bibliographic notes point to some of these results.\n\nGiven the relatively simple extension of the RIP definitions to the matrix setting, it is all the more tempting to attempt to apply gPGD-style techniques to solve the ARM problem, particularly since we saw how IHT succeeded in offering scalable solutions to the sparse recovery problem. The works of \cite{GoldfarbM2011,JainMD2010} showed that this is indeed possible. 
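As a quick sanity check (not a proof), the near-isometry property of Definition~\ref{defn:matrix-rip} can be probed empirically for a random Gaussian measurement ensemble. The sketch below is our own illustration; all sizes, names, and the specific ensemble are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, k = 30, 30, 2, 2000  # k = O(nr) random measurements

# Gaussian measurement ensemble: A_i(X) = <G_i, X> / sqrt(k)
G = rng.standard_normal((k, m, n))

def affine_map(X):
    # stack the k measurements <G_i, X> / sqrt(k) into a vector in R^k
    return np.tensordot(G, X, axes=([1, 2], [0, 1])) / np.sqrt(k)

# a random rank-r test matrix
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
ratio = np.sum(affine_map(X) ** 2) / np.linalg.norm(X, "fro") ** 2
# for this ensemble, ratio concentrates around 1 (deviation roughly sqrt(2/k))
```

A single random draw like this only checks the inequality for one matrix; the actual matrix RIP statement is uniform over all matrices of rank at most $r$.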
We will explore this shortly.\n\n\\subsection{The Matrix Incoherence Property}\nWe begin this discussion by warning the reader that there are two distinct notions prevalent in literature, both of which are given the same name, that of the matrix incoherence property. The first of these notions was introduced in \\S~\\ref{sec:rip-ensure} as a property that can be used to ensure the RIP property in matrices. However a different property, but bearing the same name, finds application in matrix completion problems which we now introduce. We note that the two properties are not known to be strongly related in a formal sense and the coincidental clash of the names seems to be a matter of legacy.\n\nNevertheless, the intuition behind the second notion of matrix incoherence is similar to that for RIP in that it seeks to make the problem well posed. Consider the matrix $A = \\sum_{t=1}^r s_t\\cdot\\ve_{i_t}\\bar\\ve_{j_t}^\\top \\in \\bR^{m \\times n}$ where $\\ve_i$ are the canonical orthonormal vectors in $\\bR^m$ and $\\bar\\ve_j$ are the canonical orthonormal vectors in $\\bR^n$. Clearly $A$ has rank at most $r$.\n\nHowever, this matrix $A$ is non-zero only at $r$ locations. Thus, it is impossible to recover the entire matrix uniquely unless these very $r$ locations $\\bc{(i_t,j_t)}_{t=1,\\ldots,r}$ are actually observed. Since in recommendation settings, we only observe a few random entries of the matrix, there is a good possibility that none of these entries will ever be observed. This presents a serious challenge for the matrix completion problem -- the low rank structure is not sufficient to ensure unique recovery!\n\nTo overcome this and make the LRMC problem well posed with a unique solution, an additional property is imposed. This so-called matrix incoherence property prohibits low rank matrices that are also sparse. 
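This tension can be made concrete numerically. In terms of the incoherence parameter $\mu$ defined below, the spiky matrix $\ve_1\ve_1^\top$ attains the worst possible value $\mu = \sqrt m$, whereas a flat rank-one matrix attains the best possible value $\mu = 1$. The helper function is our own sketch, written for illustration only.

```python
import numpy as np

def incoherence(A, tol=1e-10):
    # smallest mu for which the rows of the singular factors U, V have
    # norms bounded by mu * sqrt(r/m) and mu * sqrt(r/n) respectively
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)
    r = int((s > tol * s[0]).sum())                       # numerical rank
    mu_u = np.sqrt(m / r) * np.linalg.norm(U[:, :r], axis=1).max()
    mu_v = np.sqrt(n / r) * np.linalg.norm(Vt[:r].T, axis=1).max()
    return max(mu_u, mu_v)

m = 100
e1 = np.zeros(m); e1[0] = 1.0
spiky = np.outer(e1, e1)                     # sparse rank-1: mu = sqrt(m) = 10
flat = np.outer(np.ones(m), np.ones(m)) / m  # flat rank-1: mu = 1
```

The spiky matrix is supported on a single entry and can never be completed from a random sample that misses that entry; the flat matrix is maximally spread out and is the benign case for completion.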
A side effect of this imposition is that for incoherent matrices, observing a small random set of entries is enough to uniquely determine the unobserved entries of the matrix.\n\n\\begin{definition}[Matrix Incoherence Property \\citep{CandesR2009}]\n\\label{defn:matrix-incoherence}\nA matrix $A \\in \\bR^{m \\times n}$ of rank $r$ is said to be incoherent with parameter $\\mu$ if its left and right singular matrices have bounded row norms. More specifically, let $A = U \\Sigma V^\\top$ be the SVD of $A$. Then $\\mu$-incoherence dictates that $\\norm{U^i}_2 \\leq \\frac{\\mu\\sqrt r}{\\sqrt m}$ for all $i \\in [m]$ and $\\norm{V^j}_2 \\leq \\frac{\\mu\\sqrt r}{\\sqrt n}$ for all $j \\in [n]$. A stricter version of this property requires all entries of $U$ to satisfy $\\abs{U_{ij}} \\leq \\frac{\\mu}{\\sqrt m}$ and all entries of $V$ to satisfy $\\abs{V_{ij}} \\leq \\frac{\\mu}{\\sqrt n}$.\n\\end{definition}\n\nA low rank incoherent matrix is guaranteed to be \\emph{far}, i.e., well distinguished, from any sparse matrix, something that is exploited by algorithms to give guarantees for the LRMC problem.\n\n\\begin{algorithm}[t]\n\t\\caption{Singular Value Projection (SVP)}\n\t\\label{algo:svp}\n\t\\begin{algorithmic}[1]\n\t\t\t\\REQUIRE Linear map $\\cA$, measurements $\\vy$, target rank $q$, step length $\\eta$\n\t\t\t\\ENSURE A matrix $\\hat X$ with rank at most $q$\n\t\t\t\\STATE $X^1 \\leftarrow \\vzero^{m \\times n}$\n\t\t\t\\FOR{$t = 1, 2, \\ldots$}\n\t\t\t\t\\STATE $Y^{t+1} \\leftarrow X^t - \\eta\\cdot \\cA^\\top(\\cA(X^t) - \\vy)$\n\t\t\t\t\\STATE Compute top $q$ singular vectors\/values of $Y^{t+1}$: $U^t_q,\\Sigma^t_q,V^t_q$\n\t\t\t\t\\STATE $X^{t+1} \\leftarrow U^t_q\\Sigma^t_q(V^t_q)^\\top$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$X^t$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Low-rank Matrix Recovery via Proj. Gradient Descent}\nWe will now apply the gPGD algorithm to the ARM problem. 
To do so, first consider the following reformulation of the ARM problem to make it more compatible with the projected gradient descent iterations.\n\begin{equation}\n\begin{array}{cl}\n\t\min & \frac{1}{2}\norm{\cA(X) - \vy}_2^2\\\n\t\text{s.t.} & \rank(X) \leq r\n\end{array}\n\tag*{(ARM-2)}\label{eq:arm-2}\n\end{equation}\nApplying the gPGD algorithm to the above formulation gives us the \emph{Singular Value Projection} (SVP) algorithm (Algorithm~\ref{algo:svp}). Note that in this case, the projection needs to be carried out onto the set of low rank matrices. However, as we saw in \S~\ref{sec:non-cvx-proj}, this can be efficiently done by computing the singular value decomposition of the iterates.\n\nSVP offers ease of implementation and speed similar to IHT. Moreover, it applies to ARM problems in general. If the matrix RIP property is appropriately satisfied, then SVP guarantees a linear rate of convergence to the optimum, much like IHT. All these make SVP a very attractive choice for solving low rank matrix recovery problems.\n\nBelow, we give a convergence proof for SVP in the noiseless case, i.e., when $\vy = \cA(X^\ast)$. The proof is similar in spirit to the convergence proof we saw for the IHT algorithm in \S~\ref{chap:spreg} but differs in crucial aspects since sparsity in this case is apparent not in the signal domain (the matrix is not itself sparse) but in the spectral domain (the set of singular values of the matrix is sparse). The analysis can be extended to noisy measurements as well and can be found in \citep{JainMD2010}.\n\n\section{A Low-rank Matrix Recovery Guarantee for SVP}\nWe will now present a convergence result for the SVP algorithm. 
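Before the analysis, the following minimal sketch of Algorithm~\ref{algo:svp} may help fix ideas. It represents the affine map densely by its measurement matrices $G_i$, so that $\cA(X)_i = \ip{G_i}{X}$ and the adjoint is $\cA^\top(\vz) = \sum_i z_i G_i$; this dense representation, together with all names and sizes, is our own illustrative choice.

```python
import numpy as np

def svp(G, y, q, eta, iters=200):
    # G: (k, m, n) array of measurement matrices G_i, y: (k,) measurements
    k, m, n = G.shape
    X = np.zeros((m, n))
    for _ in range(iters):
        resid = np.tensordot(G, X, axes=([1, 2], [0, 1])) - y  # A(X) - y
        Y = X - eta * np.tensordot(resid, G, axes=(0, 0))      # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :q] * s[:q]) @ Vt[:q]                        # project onto rank q
    return X

# noiseless recovery of a random rank-1 matrix from Gaussian measurements
rng = np.random.default_rng(0)
m, n, r, k = 6, 6, 1, 500
G = rng.standard_normal((k, m, n)) / np.sqrt(k)   # normalized, near-isometric map
X_star = np.outer(rng.standard_normal(m), rng.standard_normal(n))
y = np.tensordot(G, X_star, axes=([1, 2], [0, 1]))
X_hat = svp(G, y, q=r, eta=0.8)                   # eta roughly 1/(1 + delta_2r)
```

On this toy instance the measurements are heavily overdetermined, so the iterates converge rapidly; for genuinely large problems one would use an implicit representation of $\cA$ and a truncated SVD.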
As before, although the general convergence result for gPGD can indeed be applied here, we will see, just as we did in \S~\ref{chap:spreg}, that the problem-specific analysis we present here is finer and reveals more insights about the ARM problem structure.\n\n\begin{theorem}\n\label{thm:svp-conv-proof-rip}\nSuppose $\cA: \bR^{m \times n} \rightarrow \bR^k$ is an affine transformation that satisfies the matrix RIP property of order $2r$ with constant $\delta_{2r} < \frac{1}{3}$. Let $X^\ast \in \bR^{m \times n}$ be a matrix of rank at most $r$ and let $\vy = \cA(X^\ast)$. Then the SVP Algorithm (Algorithm~\ref{algo:svp}), when executed with a step length $\eta = 1\/(1+\delta_{2r})$, and a target rank $q = r$, ensures $\norm{X^t - X^\ast}_F^2 \leq \epsilon$ after $t = \bigO{\log\frac{\norm{\vy}_2^2}{2\epsilon}}$ iterations of the algorithm.\n\end{theorem}\n\begin{proof}\nNotice that the notions of \emph{sparsity} and \emph{support} are very different in ARM from what they were for sparse regression. Consequently, the exact convergence proof for IHT (Theorem~\ref{thm:iht-conv-proof-rip}) is not applicable here. We will first establish an intermediate result that will show that after $t = \bigO{\log\frac{\norm{\vy}_2^2}{2\epsilon}}$ iterations, SVP ensures $\norm{\cA(X^t) - \vy}_2^2 \leq \epsilon$. We will then use the matrix RIP property (Definition~\ref{defn:matrix-rip}) to deduce\n\[\n\norm{\cA(X^t) - \vy}_2^2 = \norm{\cA(X^t - X^\ast)}_2^2 \geq (1 - \delta_{2r})\cdot\norm{X^t - X^\ast}_F^2,\n\]\nwhich will conclude the proof. 
To prove this intermediate result, let us denote the objective function as\n\\[\nf(X) = \\frac{1}{2}\\norm{\\cA(X) - \\vy}_2^2 = \\frac{1}{2}\\norm{\\cA(X - X^\\ast)}_2^2.\n\\]\nAn application of the matrix RIP property then gives us\n\\begin{align*}\n&f(X^{t+1})\\\\\n&= f(X^t) + \\ip{\\cA(X^t - X^\\ast)}{\\cA(X^{t+1} - X^t)} + \\frac{1}{2}\\norm{\\cA(X^{t+1} - X^{t})}_2^2\\\\\n\t\t\t\t\t\t\t\t\t\t&\\leq f(X^t) + \\ip{\\cA(X^t - X^\\ast)}{\\cA(X^{t+1} - X^t)} + \\frac{(1 + \\delta_{2r})}{2}\\norm{X^{t+1} - X^{t}}_F^2.\n\\end{align*}\nThe following steps now introduce the intermediate variable $Y^{t+1}$ into the analysis in order to link the successive iterates by using the fact that $X^{t+1}$ was the result of a non-convex projection operation.\n\\begin{align*}\n&\\ip{\\cA(X^t - X^\\ast)}{\\cA(X^{t+1} - X^t)} + \\frac{(1 + \\delta_{2r})}{2}\\cdot\\norm{X^{t+1} - X^{t}}_F^2\\\\\n&\\quad = \\frac{1 + \\delta_{2r}}{2}\\cdot\\norm{X^{t+1} - Y^{t+1}}_F^2 - \\frac{1}{2(1+\\delta_{2r})}\\cdot\\norm{\\cA^\\top(\\cA(X^t - X^\\ast))}_F^2\\\\\n&\\quad \\leq \\frac{1 + \\delta_{2r}}{2}\\cdot\\norm{X^\\ast - Y^{t+1}}_F^2 - \\frac{1}{2(1+\\delta_{2r})}\\cdot\\norm{\\cA^\\top(\\cA(X^t - X^\\ast))}_F^2\\\\\n&\\quad = \\ip{\\cA(X^t - X^\\ast)}{\\cA(X^\\ast - X^t)} + \\frac{(1 + \\delta_{2r})}{2}\\cdot\\norm{X^\\ast - X^{t}}_F^2\\\\\n&\\quad \\leq \\ip{\\cA(X^t - X^\\ast)}{\\cA(X^\\ast - X^t)} + \\frac{(1 + \\delta_{2r})}{2(1 - \\delta_{2r})}\\cdot\\norm{\\cA(X^\\ast - X^{t})}_2^2\\\\\n&\\quad = -f(X^t) - \\frac{1}{2}\\norm{\\cA(X^\\ast - X^{t})}_2^2 + \\frac{(1 + \\delta_{2r})}{2(1 - \\delta_{2r})}\\cdot\\norm{\\cA(X^\\ast - X^{t})}_2^2.\n\\end{align*}\nThe first step uses the identity $Y^{t+1} = X^t - \\eta\\cdot \\cA^\\top(\\cA(X^t) - \\vy)$ from Algorithm~\\ref{algo:svp}, the fact that we set $\\eta = \\frac{1}{1 + \\delta_{2r}}$, and elementary rearrangements. 
The second step follows from the fact that $\norm{X^{t+1} - Y^{t+1}}_F^2 \leq \norm{X^\ast - Y^{t+1}}_F^2$ by virtue of the SVD step which makes $X^{t+1}$ the best rank-$r$ approximation to $Y^{t+1}$ in terms of the Frobenius norm (recall that $X^\ast$ also has rank at most $r$). The third step simply rearranges things in the reverse order of the way they were arranged in the first step, the fourth step uses the matrix RIP property, and the fifth step makes elementary manipulations. This, upon rearrangement, and using $\norm{\cA(X^{t} - X^\ast)}_2^2 = 2f(X^t)$, gives us\n\[\nf(X^{t+1}) \leq \frac{2\delta_{2r}}{1 - \delta_{2r}}\cdot f(X^t).\n\]\nSince $f(X^1) = \frac{1}{2}\norm{\vy}_2^2$, as we set $X^1 = \vzero^{m \times n}$, if $\delta_{2r} < 1\/3$ (i.e., $\frac{2\delta_{2r}}{1 - \delta_{2r}} < 1$), we have the claimed convergence result.\n\end{proof}\n\nOne can, in principle, apply the SVP technique to the matrix completion problem as well. However, on the LRMC problem, SVP is outperformed by gAM-style approaches which we study next. Although the superior performance of gAM on the LRMC problem was well documented empirically, it took some time before a theoretical understanding could be obtained. This was first done in the works of \cite{Keshavan2012, JainNS2013}. 
These results set off a long line of works that progressively improved both the algorithm and its analysis.\n\n\begin{algorithm}[t]\n\t\caption{AltMin for Matrix Completion (AM-MC)}\n\t\label{algo:altmin-lrmc}\n\t\begin{algorithmic}[1]\n\t\t\t\REQUIRE Matrix $A \in \bR^{m \times n}$ of rank $r$ observed at entries in the set $\Omega$, sampling probability $p$, stopping time $T$\n\t\t\t\ENSURE A matrix $\hat X$ with rank at most $r$\n\t\t\t\STATE Partition $\Omega$ into $2T+1$ sets $\Omega_0,\Omega_1,\ldots,\Omega_{2T}$ uniformly and randomly%\n\t\t\t\STATE $U^1 \leftarrow \text{SVD}(\frac{1}{p}\Pi_{\Omega_0}(A),r)$, the top $r$ left singular vectors of $\frac{1}{p}\Pi_{\Omega_0}(A)$%\n\t\t\t\FOR{$t = 1, 2, \ldots, T$}\n\t\t\t\t\STATE $V^{t+1} \leftarrow \arg\min_{V \in \bR^{n \times r}}\ \norm{\Pi_{\Omega_{t}}(U^tV^\top - A)}_F^2$\n\t\t\t\t\STATE $U^{t+1} \leftarrow \arg\min_{U \in \bR^{m \times r}}\ \norm{\Pi_{\Omega_{T+t}}(U(V^{t+1})^\top - A)}_F^2$\n\t\t\t\ENDFOR\n\t\t\t\STATE \textbf{return} {$U^{T+1}(V^{T+1})^\top$}\n\t\end{algorithmic}\n\end{algorithm}\n\n\section{Matrix Completion via Alternating Minimization}\nWe will now look at the alternating minimization technique for solving the low-rank matrix completion problem. As we have observed before, the LRMC problem admits an equivalent reformulation where the low rank structure constraint is eliminated and instead, the solution is described in terms of two low-rank components:\n\begin{equation}\n\underset{\substack{U \in \bR^{m \times r}\\ V \in \bR^{n \times r}}}{\min}\ \norm{\Pi_{\Omega}(UV^\top - X^\ast)}_F^2.\tag*{(LRMC*)}\label{eq:lrmc2}\n\end{equation}\nIn this case, fixing either $U$ or $V$ reduces the above problem to a simple least squares problem for which we have very efficient and scalable solvers. As we saw in \S~\ref{chap:altmin}, such problems are excellent candidates for the gAM algorithm to be applied. 
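In code, one round of this alternation is just two batches of least-squares solves. The sketch below is our own simplification of Algorithm~\ref{algo:altmin-lrmc}: it keeps the SVD-based initialization but, for brevity, reuses the same observed set $\Omega$ in every round instead of the fresh partitions $\Omega_0,\ldots,\Omega_{2T}$ required by the analysis.

```python
import numpy as np

def am_mc(A_obs, mask, r, T=30):
    # A_obs: matrix holding the observed entries (entries outside mask ignored)
    # mask:  boolean array marking the observed set Omega
    m, n = A_obs.shape
    p_hat = mask.mean()                      # estimate of the sampling probability
    M0 = np.where(mask, A_obs, 0.0) / p_hat  # (1/p) * Pi_Omega(A)
    U = np.linalg.svd(M0)[0][:, :r]          # SVD-based initialization
    V = np.zeros((n, r))
    for _ in range(T):
        for j in range(n):                   # fix U: least squares per column
            rows = mask[:, j]
            V[j] = np.linalg.lstsq(U[rows], A_obs[rows, j], rcond=None)[0]
        for i in range(m):                   # fix V: least squares per row
            cols = mask[i, :]
            U[i] = np.linalg.lstsq(V[cols], A_obs[i, cols], rcond=None)[0]
    return U @ V.T

# rank-1 toy completion with roughly 60% of the entries observed
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(20), rng.standard_normal(15))
mask = rng.random(A.shape) < 0.6
A_hat = am_mc(np.where(mask, A, 0.0), mask, r=1)
```

Each inner solve touches only the observed entries of one row or column, which is what makes this approach scale to very large, very sparsely observed matrices.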
The AM-MC algorithm (see Algorithm~\ref{algo:altmin-lrmc}) applies the gAM approach to the reformulated LRMC problem. The AM-MC approach is the method of choice for practitioners in the context of collaborative filtering \citep{ChenH2012,KorenBV2009,ZhouWSP2008}. However, AM-MC, like other gAM-style algorithms, does require proper initialization and tuning.\n\n\section{A Low-rank Matrix Completion Guarantee for AM-MC}\nWe will now analyze the convergence properties of the AM-MC method for matrix completion. To simplify the presentation, we will restrict ourselves to the case when the matrix $A \in \bR^{m \times n}$ is rank one. This will allow us to present the essential arguments without getting involved with technical details. Let $A = \vu^\ast(\vv^\ast)^\top$ be a $\mu$-incoherent matrix that needs to be recovered, where $\norm{\vu^\ast}_2 = \norm{\vv^\ast}_2 = 1$. It is easy to see that $\mu$-incoherence implies $\norm{\vu^\ast}_\infty \leq \frac{\mu}{\sqrt m}$ and $\norm{\vv^\ast}_\infty \leq \frac{\mu}{\sqrt n}$.\n\nWe will also assume the Bernoulli sampling model, i.e., that the set of observed indices $\Omega$ is generated by selecting each entry $(i,j)$ for inclusion in $\Omega$ in an i.i.d. fashion with probability $p$. More specifically, $(i,j)\in \Omega$ iff $\delta_{ij} = 1$ where $\delta_{ij}=1$ with probability $p$ and $0$ otherwise. \n\nFor simplicity, we will assume that each iteration of the AM-MC procedure receives a fresh set of samples $\Omega$ from $A$. This can be achieved in practice by randomly partitioning the available set of samples into as many groups as there are iterations of the procedure. The completion guarantee will proceed in two steps. In the first step, we will show that the initialization step in Algorithm~\ref{algo:altmin-lrmc} itself brings AM-MC within a constant distance of the optimal solution. 
Next, we will show that this close initialization is sufficient for AM-MC to ensure a linear rate of convergence to the optimal solution.\\\n\n\noindent{\bf Initialization}: We will now show that the initialization step (Step 2 in Algorithm~\ref{algo:altmin-lrmc}) provides a point $(\vu^1, \vv^1)$ which is at most a constant $c>0$ distance away from $(\vu^\ast, \vv^\ast)$. To show this, we need a matrix Bernstein-style argument, which we provide here for the rank-$1$ case.\n\n\begin{theorem}\cite[Theorem 1.6]{Tropp2012}\n\label{thm:matrix-bern}\nConsider a finite sequence $\bc{Z_k}$ of independent random matrices of dimension $m \times n$. Assume each matrix satisfies $\E{Z_k} = \vzero$ and $\norm{Z_k}_2 \leq R$ almost surely and denote\n\[\n\textstyle\sigma^2 := \max\bc{\norm{\sum_k \E{Z_kZ_k^\top}}_2, \norm{\sum_k \E{Z_k^\top Z_k}}_2}.\n\]\nThen, for all $t \geq 0$, we have\n\[\n\textstyle\Pr{\norm{\sum_k Z_k}_2 \geq t} \leq (m+n)\cdot\exp\br{\frac{-t^2\/2}{\sigma^2 + Rt\/3}}.\n\]\n\end{theorem}\n\nBelow we apply this inequality to analyze the initialization step for AM-MC in the rank-$1$ case. We point the reader to \cite{Recht2011} and \cite{KeshavanMO2010} for a more precise argument analyzing the initialization step in the general rank-$r$ case.\n\n\begin{theorem}\n\label{thm:mc_init}\nFor a rank one matrix $A$ satisfying the $\mu$-incoherence property, let the observed samples $\Omega$ be generated with sampling probability $p$ as described above. Let $\vu^1, \vv^1$ be the singular vectors of $\frac{1}{p}\Pi_{\Omega}(A)$ corresponding to its largest singular value. 
Then for any $\\epsilon > 0$, if $p \\geq \\frac{45\\mu^2}{\\epsilon^2}\\cdot\\frac{\\log(m+n)}{\\min\\bc{m,n}}$, then with probability at least $1-1\/(m+n)^{10}$:\n\\[\n\\norm{\\frac{1}{p}\\Pi_\\Omega(A)-A}_2\\leq \\epsilon, \\quad \\norm{\\vu^1 - \\vu^\\ast}_2 \\leq \\epsilon, \\quad \\norm{\\vv^1 - \\vv^\\ast}_2 \\leq \\epsilon.\n\\]\nMoreover, the vectors $\\vu^1$ and $\\vv^1$ are also $2\\mu$-incoherent.\n\\end{theorem}\n\\begin{proof}\nNotice that the statement of the theorem essentially states that once enough entries in the matrix have been observed (as dictated by the requirement $p \\geq \\frac{45\\mu^2}{\\epsilon^2}\\cdot\\frac{\\log(m+n)}{\\min\\bc{m,n}}$) an SVD step on the incomplete matrix will yield components $\\vu^1,\\vv^1$ that are very close to the components of the complete matrix $\\vu^\\ast,\\vv^\\ast$. Moreover, since $\\vu^\\ast,\\vv^\\ast$ are incoherent by assumption, the estimated components $\\vu^1,\\vv^1$ will be so too.\n\nTo apply Theorem~\\ref{thm:matrix-bern} to prove this result, we will first express $\\frac{1}{p}\\Pi_{\\Omega}(A)$ as a sum of random matrices. We first rewrite $\\frac{1}{p}\\Pi_{\\Omega}(A)=\\frac{1}{p}\\sum_{ij}\\delta_{ij} A_{ij}\\ve_i\\ve_j^\\top=\\sum_{ij} W_{ij}$ where $\\delta_{ij} = 1$ if $(i,j) \\in \\Omega$ and $0$ otherwise. Note that the Bernoulli sampling model assures us that the random variables $W_{ij}=\\frac{1}{p}\\delta_{ij} A_{ij}\\ve_i\\ve_j^\\top$ are independent and that $\\E{\\delta_{ij}} = p$. This gives us $\\E{W_{ij}}=A_{ij}\\ve_i\\ve_j^\\top$. Note that $\\sum_{ij}A_{ij}\\ve_i\\ve_j^\\top = A$.\n\nThe matrices $Z_{ij} = W_{ij} - A_{ij}\\ve_i \\ve_j^\\top$ shall serve as our \\emph{random matrices} in the application of Theorem~\\ref{thm:matrix-bern}. Clearly $\\E{Z_{ij}} = \\vzero$. We also have $\\max_{ij}\\|W_{ij}\\|_2 \\leq \\frac{1}{p} \\max_{ij} |A_{ij}|\\leq \\frac{\\mu^2}{p\\sqrt{mn}}$ due to the incoherence assumption. 
Applying the triangle inequality gives us $\\max_{ij} \\norm{Z_{ij}}_2 \\leq \\max_{ij}\\|W_{ij}\\|_2 + \\max_{ij}\\|A_{ij}\\ve_i \\ve_j^\\top\\|_2 \\leq \\br{1 + \\frac{1}{p}}\\frac{\\mu^2}{\\sqrt{mn}} \\leq \\frac{2\\mu^2}{p\\sqrt{mn}}$.\n\nMoreover, as $A_{ij} = \\vu^\\ast_i\\vv^\\ast_j$ and $\\norm{\\vv^\\ast}_2 = 1$, we have $\\E{\\sum_{ij} W_{ij} W_{ij}^\\top}=\\frac{1}{p}\\sum_i\\sum_j A_{ij}^2 \\ve_i \\ve_i^\\top = \\frac{1}{p} \\sum_{i} (\\vu^\\ast_i)^2\\ve_i \\ve_i^\\top$. Due to incoherence $\\norm{\\vu^\\ast}_\\infty \\leq \\frac{\\mu}{\\sqrt m}$, we get $\\norm{\\E{\\sum_{ij} W_{ij} W_{ij}^\\top}}_2\\leq \\frac{\\mu^2}{p\\cdot m}$, which can be shown to give us\n\\[\n\\norm{\\E{\\sum_{ij} Z_{ij} Z_{ij}^\\top}}_2 \\leq \\br{\\frac{1}{p}-1}\\cdot\\frac{\\mu^2}{m} \\leq \\frac{\\mu^2}{p\\cdot m}\n\\]\nSimilarly, we can also get $\\norm{\\E{\\sum_{ij} Z_{ij}^\\top Z_{ij}}}_2\\leq \\frac{\\mu^2}{p\\cdot n}$. Now using Theorem~\\ref{thm:matrix-bern} gives us, with probability at least $1 - \\delta$,\n\\[\n\\norm{\\frac{1}{p}\\Pi_\\Omega(A) - A}_2 \\leq \\frac{2\\mu^2}{3p\\sqrt{mn}}\\log\\bs{\\frac{m+n}{\\delta}} + \\sqrt{\\frac{\\mu^2}{p\\cdot\\min\\bc{m,n}}\\log\\bs{\\frac{m+n}{\\delta}}}\n\\]\nIf $p \\geq \\frac{45\\mu^2\\log(m+n)}{\\epsilon^2\\cdot\\min\\bc{m,n}}$, we have with probability at least $1-1\/(m+n)^{10}$,\n\\[\n\\norm{\\frac{1}{p}\\Pi_\\Omega(A) - A}_2 \\leq \\epsilon.\n\\]\nThe proof now follows by applying the Davis-Kahan inequality \\citep{GolubVL1996} with the above bound. It can be shown \\citep{JainN2015} that the vectors that are recovered as a result of this initialization are incoherent as well.\n\\end{proof}\n\n\\noindent{\\bf Linear Convergence}: We will now show that, given the initialization above, the AM-MC procedure converges to the true solution with a linear rate of convergence. This will involve showing a few intermediate results, such as showing that the alternation steps preserve incoherence. 
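The initialization guarantee of Theorem~\ref{thm:mc_init} is easy to witness numerically: the top singular vectors of $\frac{1}{p}\Pi_\Omega(A)$ for a random rank-one $A$ already land within a small constant distance of $(\vu^\ast, \vv^\ast)$. The sizes, sampling level, and error threshold below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 1200, 900, 0.6
u = rng.standard_normal(m); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
A = np.outer(u, v)                             # rank-1 target with ||A||_2 = 1

mask = rng.random((m, n)) < p                  # Bernoulli sampling of Omega
M = np.where(mask, A, 0.0) / p                 # (1/p) * Pi_Omega(A)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
u1, v1 = U[:, 0], Vt[0]
if u1 @ u < 0:                                 # resolve the SVD sign ambiguity
    u1, v1 = -u1, -v1
err = max(np.linalg.norm(u1 - u), np.linalg.norm(v1 - v))
# err is a small constant, far below the trivial distance between unit vectors
```

This one draw only illustrates the constant-distance claim; the theorem makes it precise, with high probability, for every incoherent rank-one matrix.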
Since Theorem~\ref{thm:mc_init} shows that $\vu^1$ is $2\mu$-incoherent, this will establish the incoherence of all future iterates. Preserving incoherence will be crucial in showing the next result, which shows that successive iterates get increasingly close to the optimum. Put together, these will establish the convergence result. First, recall that in the $t\th$ iteration of the AM-MC algorithm, $\vv^{t+1}$ is updated as \n\[\n\vv^{t+1} = \underset{\vv}{\arg\min}\ \sum_{ij} \delta_{ij}(\vu_i^t \vv_j-\vu^\ast_i\vv^\ast_j)^2, \n\]\nwhich gives us\n\begin{equation}\n\vv^{t+1}_j=\frac{\sum_{i} \delta_{ij}\vu^\ast_i \vu^t_i}{\sum_i \delta_{ij} (\vu^t_i)^2}\cdot \vv^\ast_j.\n\label{eq:v-update}\n\end{equation}\nNote that this means that if $\vu^\ast=\vu^t$, then $\vv^{t+1}=\vv^\ast$. Also, note that if $\delta_{ij}=1$ for all $(i,j)$, which happens when the sampling probability satisfies $p=1$, we have $\vv^{t+1} = \frac{\ip{\vu^t}{\vu^\ast}}{\norm{\vu^t}_2^2}\cdot\vv^\ast$. This is reminiscent of the \emph{power method} used to recover the leading singular vectors of a matrix. Indeed, if we let $\tilde\vu = \vu^t\/\norm{\vu^t}_2$, we get $\norm{\vu^t}_2\cdot\vv^{t+1} = \ip{\tilde\vu}{\vu^\ast}\cdot\vv^\ast$ if $p=1$.\n\nThis allows us to rewrite the update \eqref{eq:v-update} as a noisy power update:\n\begin{equation}\n\norm{\vu^t}_2\cdot\vv^{t+1} = \ip{\tilde\vu}{\vu^\ast}\cdot\vv^\ast - B^{-1}(\ip{\tilde\vu}{\vu^\ast} B - C)\vv^\ast\n\label{eq:v-update-mod}\n\end{equation}\nwhere $B,C\in \bR^{n\times n}$ are diagonal matrices with $B_{jj}=\frac{1}{p}\sum_i \delta_{ij} (\tilde{\vu}_i)^2$ and $C_{jj}=\frac{1}{p}\sum_i \delta_{ij} \tilde{\vu}_i \vu^\ast_i$. 
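The $p = 1$ case of this power-method analogy can be verified directly: solving the fully observed least-squares problem column by column reproduces the closed-form update $\frac{\ip{\vu^t}{\vu^\ast}}{\norm{\vu^t}_2^2}\cdot\vv^\ast$ exactly. The toy sizes below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 12
u_star = rng.standard_normal(m); u_star /= np.linalg.norm(u_star)
v_star = rng.standard_normal(n); v_star /= np.linalg.norm(v_star)
A = np.outer(u_star, v_star)          # fully observed rank-1 matrix
u_t = rng.standard_normal(m)          # an arbitrary current iterate

# p = 1: least-squares update for v, solved simultaneously for every column
v_next = np.linalg.lstsq(u_t[:, None], A, rcond=None)[0].ravel()
# closed-form power-method-style update from the text
v_closed = (u_t @ u_star) / (u_t @ u_t) * v_star
```

With $p < 1$ the two quantities differ, and \eqref{eq:v-update-mod} measures exactly this deviation through the diagonal matrices $B$ and $C$.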
The following two lemmata show that if $\vu^t$ is $2\mu$-incoherent and if $p$ is large enough, then: a) $\vv^{t+1}$ is also $2\mu$-incoherent, and b) the angular distance between $\vv^{t+1}$ and $\vv^\ast$ decreases as compared to that between $\vu^t$ and $\vu^\ast$. The following lemma will aid the analysis.\n\begin{lemma}\n\label{lem:ip-preserve}\nSuppose $\va,\vb\in\bR^n$ are two fixed $\mu$-incoherent unit vectors. Also suppose $\delta_i, i \in [n]$ are i.i.d. Bernoulli random variables such that $\delta_i = 1$ with probability $p$ and 0 otherwise. Then, for any $\epsilon > 0$, if $p > \frac{27\mu^2\log n}{n\epsilon^2}$, we have $\abs{\frac{1}{p}\sum_i\delta_i\va_i\vb_i - \ip{\va}{\vb}} \leq \epsilon$ with probability at least $1-1\/n^{10}$.\n\end{lemma}\n\begin{proof}\nDefine $Z_i = \br{\frac{\delta_i}{p}-1}\va_i\vb_i$. Using the incoherence of the vectors, we get $\E{Z_i} = 0$, $\sum_{i=1}^n\E{Z_i^2} = \br{\frac{1}{p}-1}\sum_{i=1}^n(\va_i\vb_i)^2 \leq \frac{\mu^2}{pn}$ since $\norm{\vb}_2 = 1$, and $\abs{Z_i} \leq \frac{\mu^2}{pn}$ almost surely. Applying the Bernstein inequality gives us\n\[\n\Pr{\abs{\frac{1}{p}\sum_i\delta_i\va_i\vb_i - \ip{\va}{\vb}} > t} \leq 2\exp\br{\frac{-3pnt^2}{6\mu^2+2\mu^2t}},\n\]\nwhich, upon simple manipulations, gives us the result.\n\end{proof}\n\n\begin{lemma}\n\label{lem:incoherence-preserve}\nWith probability at least $\min\bc{1 - 1\/n^{10}, 1 - 1\/m^{10}}$, if a pair of iterates $(\vu^t,\vv^t)$ in the execution of the AM-MC procedure are $2\mu$-incoherent, then so are the next pair of iterates $(\vu^{t+1},\vv^{t+1})$.\n\end{lemma}\n\begin{proof}\nSince $\norm{\tilde\vu}_2 = 1$, an application of Lemma~\ref{lem:ip-preserve} tells us that with high probability, for all $j$, we have $\abs{B_{jj} - 1} \leq \epsilon$ as well as $\abs{C_{jj} - \ip{\tilde\vu}{\vu^\ast}} \leq \epsilon$. 
Also, using the triangle inequality, we get $\\norm{\\vu^t}_2 \\geq 1 - \\epsilon$. Using these and the incoherence of $\\vv^\\ast$ in the update equation for $\\vv^{t+1}$ \\eqref{eq:v-update-mod}, we have\n\\begin{align*}\n\\abs{\\vv^{t+1}_j} &\\leq \\frac{1}{1-\\epsilon}\\abs{\\ip{\\tilde\\vu}{\\vu^\\ast}\\vv^\\ast_j - \\frac{1}{B_{jj}}(\\ip{\\tilde\\vu}{\\vu^\\ast}B_{jj} - C_{jj})\\vv^\\ast_j}\\\\\n\t\t\t\t\t\t\t\t\t&\\leq \\frac{1}{1-\\epsilon}\\abs{\\ip{\\tilde\\vu}{\\vu^\\ast}\\vv^\\ast_j} + \\frac{1}{1-\\epsilon}\\abs{\\frac{1}{B_{jj}}(\\ip{\\tilde\\vu}{\\vu^\\ast}B_{jj} - C_{jj})\\vv^\\ast_j}\\\\\n\t\t\t\t\t\t\t\t\t&\\leq \\frac{1}{(1-\\epsilon)^2}\\br{\\abs{\\ip{\\tilde\\vu}{\\vu^\\ast}} + \\abs{\\ip{\\tilde\\vu}{\\vu^\\ast}(1+\\epsilon) - (\\ip{\\tilde\\vu}{\\vu^\\ast} - \\epsilon)}}\\frac{\\mu}{\\sqrt n} \\leq \\frac{1+2\\epsilon}{(1-\\epsilon)^2}\\frac{\\mu}{\\sqrt n}.\n\\end{align*}\nFor $\\epsilon < 1\/6$, the result now holds.\n\\end{proof}\n\nWe note that whereas Lemma~\\ref{lem:ip-preserve} is proved for fixed vectors, we seem to have inappropriately applied it to $\\tilde\\vu$ in the proof of Lemma~\\ref{lem:incoherence-preserve}, which is not a fixed vector as it depends on the randomness used in sampling the entries of the matrix revealed to the algorithm. However, notice that the AM-MC procedure in Algorithm~\\ref{algo:altmin-lrmc} uses fresh samples $\\Omega_t$ and $\\Omega_{T+t}$ for each iteration. 
This ensures that $\\tilde\\vu$ does behave like a fixed vector with respect to Lemma~\\ref{lem:ip-preserve}.\n\n\\begin{lemma}\n\\label{lem:lin-conv-altmin}\nFor any $\\epsilon > 0$, if $p > \\frac{80\\mu^2\\log(m+n)}{\\epsilon^2\\min\\bc{m,n}}$ and $\\vu^t$ is $2\\mu$-incoherent, the next iterate $\\vv^{t+1}$ satisfies\n\\[\n1- \\ip{\\frac{\\vv^{t+1}}{\\norm{\\vv^{t+1}}_2}}{\\vv^\\ast}^2 \\leq \\frac{\\epsilon}{(1-\\epsilon)^3}\\br{1- \\ip{\\frac{\\vu^t}{\\norm{\\vu^t}_2}}{\\vu^\\ast}^2}.\n\\]\nSimilarly, for any $2\\mu$-incoherent iterate $\\vv^{t+1}$, the next iterate satisfies \n\\[\n1- \\ip{\\frac{\\vu^{t+1}}{\\norm{\\vu^{t+1}}_2}}{\\vu^\\ast}^2 \\leq \\frac{\\epsilon}{(1-\\epsilon)^3}\\br{1- \\ip{\\frac{\\vv^{t+1}}{\\norm{\\vv^{t+1}}_2}}{\\vv^\\ast}^2}.\n\\]\n\\end{lemma}\n\\begin{proof}\nUsing the modified form of the update for $\\vv^{t+1}$ \\eqref{eq:v-update-mod}, we get, for any unit vector $\\vv_\\bot$ such that $\\ip{\\vv_\\bot}{\\vv^\\ast} = 0$,\n\\begin{align*}\n\\abs{\\norm{\\vu^t}_2\\cdot\\ip{\\vv^{t+1}}{\\vv_\\bot}} &= \\abs{\\ip{\\vv_\\bot}{B^{-1}(\\ip{\\tilde\\vu}{\\vu^\\ast} B - C)\\vv^\\ast}}\\\\\n&\\leq \\norm{B^{-1}}_2\\norm{(\\ip{\\tilde\\vu}{\\vu^\\ast} B - C)\\vv^\\ast}_2\\\\\n&\\leq \\frac{1}{1-\\epsilon}\\norm{(\\ip{\\tilde\\vu}{\\vu^\\ast} B - C)\\vv^\\ast}_2,\n\\end{align*}\nwhere the last step follows from an application of Lemma~\\ref{lem:ip-preserve}. To bound the remaining term, let $Z_{ij} = \\frac{1}{p}\\delta_{ij}(\\ip{\\tilde\\vu}{\\vu^\\ast}(\\tilde\\vu_i)^2 - \\tilde\\vu_i\\vu^\\ast_i)\\vv^\\ast_j\\ve_j \\in \\bR^n$. Clearly $\\sum_{i=1}^m\\sum_{j=1}^nZ_{ij} = (\\ip{\\tilde\\vu}{\\vu^\\ast} B - C)\\vv^\\ast$. Note that due to fresh samples being used by Algorithm~\\ref{algo:altmin-lrmc} at every step, the vector $\\tilde\\vu$ appears as a constant vector to the random variables $\\delta_{ij}$. 
Given this, note that\n\\begin{align*}\n\\E{\\sum_{i=1}^mZ_{ij}} &= \\sum_{i=1}^m(\\ip{\\tilde\\vu}{\\vu^\\ast}(\\tilde\\vu_i)^2 - \\tilde\\vu_i\\vu^\\ast_i)\\vv^\\ast_j\\ve_j\\\\\n\t\t\t\t\t\t\t\t\t\t\t &= \\ip{\\tilde\\vu}{\\vu^\\ast}\\sum_{i=1}^m(\\tilde\\vu_i)^2\\vv^\\ast_j\\ve_j - \\sum_{i=1}^m\\tilde\\vu_i\\vu^\\ast_i\\vv^\\ast_j\\ve_j\\\\\n\t\t\t\t\t\t\t\t\t\t\t &= (\\ip{\\tilde\\vu}{\\vu^\\ast}\\norm{\\tilde\\vu}_2^2 - \\ip{\\tilde\\vu}{\\vu^\\ast})\\vv^\\ast_j\\ve_j = \\vzero,\n\\end{align*}\nsince $\\norm{\\tilde\\vu}_2 = 1$. Thus $\\E{\\sum_{ij}Z_{ij}} = \\vzero$ as well. Now, we have $\\max_i(\\tilde\\vu_i)^2 = \\frac{1}{\\norm{\\vu^t}_2^2}\\cdot\\max_i(\\vu^t_i)^2 \\leq \\frac{4\\mu^2}{m\\norm{\\vu^t}_2^2} \\leq \\frac{4\\mu^2}{m(1-\\epsilon)^2}$ since $\\norm{\\vu^t - \\vu^\\ast}_2 \\leq \\epsilon$. This allows us to bound\n\\begin{align*}\n\\abs{\\E{\\sum_{ij} Z_{ij}^\\top Z_{ij}}} &= \\frac{1}{p}\\sum_{i=1}^m\\sum_{j=1}^n (\\ip{\\tilde\\vu}{\\vu^\\ast}(\\tilde\\vu_i)^2 - \\tilde\\vu_i\\vu^\\ast_i)^2 (\\vv^\\ast_j)^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= \\frac{1}{p}\\sum_{i=1}^m(\\tilde\\vu_i)^2(\\ip{\\tilde\\vu}{\\vu^\\ast}\\tilde\\vu_i - \\vu^\\ast_i)^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t&\\leq \\frac{4\\mu^2}{pm(1-\\epsilon)^2}\\sum_{i=1}^m \\br{\\ip{\\tilde\\vu}{\\vu^\\ast}^2(\\tilde\\vu_i)^2 + (\\vu^\\ast_i)^2 - 2\\ip{\\tilde\\vu}{\\vu^\\ast}\\tilde\\vu_i\\vu^\\ast_i}\\\\\n\t\t\t\t\t\t\t\t\t\t\t&\\leq \\frac{16\\mu^2}{pm} (1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2),\n\\end{align*}\nwhere we set $\\epsilon = 0.5$. In the same way we can show $\\norm{\\E{\\sum_{ij} Z_{ij} Z_{ij}^\\top}}_2 \\leq \\frac{16\\mu^2}{pm} (1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2)$ as well. Using a similar argument we can show $\\norm{Z_{ij}}_2 \\leq \\frac{4\\mu^2}{p\\sqrt{mn}}\\sqrt{1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2}$. 
Applying the Bernstein inequality now tells us that, for any $\\epsilon > 0$, if $p > \\frac{80\\mu^2\\log(m+n)}{\\epsilon^2\\min\\bc{m,n}}$, then with probability at least $1 - 1\/n^{10}$, we have\n\\[\n\\norm{(\\ip{\\tilde\\vu}{\\vu^\\ast} B - C)\\vv^\\ast}_2 \\leq \\epsilon\\cdot\\sqrt{1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2}.\n\\]\nSince $\\norm{\\vu^t}_2 \\geq 1 -\\epsilon$ is guaranteed by the initialization step, we now get\n\\[\n\\abs{\\ip{\\vv^{t+1}}{\\vv_\\bot}} \\leq \\frac{\\epsilon}{(1-\\epsilon)^2}\\sqrt{1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2}.\n\\]\nLet $\\vv^{t+1}_\\bot$ and $\\vv^{t+1}_\\parallel$ be the components of $\\vv^{t+1}$ perpendicular and parallel to $\\vv^\\ast$, respectively. Then the above guarantees that $\\norm{\\vv^{t+1}_\\bot}_2 \\leq \\frac{\\epsilon}{(1-\\epsilon)^2}\\cdot\\sqrt{1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2}$. This gives us, upon applying the Pythagoras theorem,\n\\[\n\\norm{\\vv^{t+1}}^2_2 = \\norm{\\vv^{t+1}_\\bot}^2_2 + \\norm{\\vv^{t+1}_\\parallel}^2_2 \\leq \\frac{\\epsilon^2}{(1-\\epsilon)^4}\\br{1- \\ip{\\tilde\\vu}{\\vu^\\ast}^2} + \\norm{\\vv^{t+1}_\\parallel}^2_2.\n\\]\nSince $\\norm{\\vv^{t+1}_\\parallel}_2 = \\ip{\\vv^{t+1}}{\\vv^\\ast}$ and $\\norm{\\vv^{t+1}}_2 \\geq 1-\\epsilon$ as $\\norm{\\vv^{t+1} - \\vv^\\ast}_2 \\leq \\epsilon$ due to the initialization, rearranging the terms gives us the result.\n\\end{proof}\n\nUsing these results, it is easy to establish the main theorem.\n\n\\begin{theorem}\n\\label{thm:altmin-conv}\nLet $A=\\vu^\\ast(\\vv^\\ast)^\\top$ be a unit rank matrix where $\\vu^\\ast \\in \\bR^m$ and $\\vv^\\ast \\in \\bR^n$ are two $\\mu$-incoherent unit vectors. Let the matrix be observed at a set of indices $\\Omega\\subseteq [m]\\times [n]$, where each index is observed with probability $p$. If $p \\geq C\\cdot\\frac{\\mu^2\\log(m+n)}{\\epsilon^2\\min\\bc{m,n}}$ for some large enough constant $C$, then with probability at least $1 - 1\/\\min\\bc{m,n}^{10}$, AM-MC generates iterates which are $2\\mu$-incoherent. 
Moreover, within $\\bigO{\\log\\frac{1}{\\epsilon}}$ iterations, AM-MC also ensures that $\\norm{\\frac{\\vu^t}{\\norm{\\vu^t}_2}-\\vu^\\ast}_2 \\leq \\epsilon$ and $\\norm{\\frac{\\vv^t}{\\norm{\\vv^t}_2} - \\vv^\\ast}_2 \\leq \\epsilon$.\n\\end{theorem}\n\n\\section{Other Popular Techniques for Matrix Recovery}\nAs has been the case with the other problems we have studied so far, the first approaches to solving the ARM and LRMC problems were relaxation based approaches \\citep{CandesR2009,CandesT2009,RechtFP2010,Recht2011}. These approaches relax the non-convex rank objective in the \\eqref{eq:arm} formulation using the (convex) \\emph{nuclear norm}\n\\begin{equation*}\n\\begin{array}{cl}\n\t\\min & \\norm{X}_\\ast\\\\\n\t\\text{s.t.} & \\cA(X) = \\vy,\n\\end{array}\n\\label{eq:arm-conv}\n\\end{equation*}\nwhere the nuclear norm of a matrix $\\norm{X}_\\ast$ is the sum of all singular values of the matrix $X$. The nuclear norm is known to provide the tightest convex envelope of the rank function, just as the $\\ell_1$ norm provides a relaxation to the sparsity norm $\\norm{\\cdot}_0$ \\citep{RechtFP2010}. Similar to sparse recovery, under matrix-RIP settings, these relaxations can be shown to offer exact recovery \\citep{RechtFP2010,HastieTW2016}.\n\n\\begin{figure}[t]\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{matrec-comp.pdf}\n\\caption{Matrix Recovery}\n\\label{fig:matrec-comparison-matrec}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{.5\\columnwidth}\n\\centering \\includegraphics[width=\\columnwidth]{matcomp-comp.pdf}\n\\caption{Matrix Completion}\n\\label{fig:matrec-comparison-matcomp}\n\\end{subfigure}%\n\\caption[Relaxation vs. Non-convex Methods for Matrix Recovery]{An empirical comparison of run-times offered by the SVT, ADMiRA and SVP methods on synthetic matrix recovery and matrix completion problems with varying matrix sizes. 
The SVT method due to \\cite{CaiCS2010} is an efficient implementation of the nuclear norm relaxation technique. ADMiRA is a pursuit-style method due to \\cite{LeeB2010}. For the ARM task in Figure~\\ref{fig:matrec-comparison-matrec}, the rank of the true matrix was set to $r=5$ whereas it was set to $r=2$ for the LRMC task in Figure~\\ref{fig:matrec-comparison-matcomp}. SVP is clearly the most scalable of the methods in both cases, whereas the relaxation-based SVT technique does not scale very well to large matrices. Note, however, that for the LRMC problem, AM-MC (not shown in the figure) outperforms even SVP. Figures adapted from \\cite{MekaJCD2008}.}%\n\\label{fig:matrec-comparison}\n\\end{figure}\n\nAlso similar to sparse recovery, there exist pursuit-style techniques for matrix recovery, most notable among them being the ADMiRA method \\citep{LeeB2010} that extends the orthogonal matching pursuit approach to the matrix recovery setting. However, this method can be a bit sluggish when recovering matrices of moderately large rank, since it builds up the rank of its estimate incrementally.\n\nBefore concluding, we present the reader with an empirical comparison of these various methods. Figure~\\ref{fig:matrec-comparison} provides a comparison of these methods on synthetic matrix recovery and matrix completion problems with increasing dimensionality of the (low-rank) matrix being recovered. 
The graphs indicate that non-convex optimization methods such as SVP are far more scalable, often by an order of magnitude, than relaxation-based methods.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:matrec-np-hard}\nShow that low-rank matrix recovery is NP-hard.\\\\\n\\textit{Hint}: Take the sparse recovery problem in~\\eqref{eq:spreg} and reduce it to the reformulation~\\eqref{eq:arm-2} of the matrix recovery problem.\\end{exer}\n\\begin{exer}\n\\label{exer:matrec-monotone}\nShow that the matrix RIP constant is monotonic in its order, i.e., if a linear map $\\cA$ satisfies matrix RIP of order $r$ with constant $\\delta_r$, then it also satisfies matrix RIP for all orders $r' \\leq r$ with $\\delta_{r'} \\leq \\delta_{r}$.\n\\end{exer}\n\n\\section{Bibliographic Notes}\nThere are a lot of aspects of low-rank matrix recovery that we have not covered in our discussion. Here we briefly point out some of these.\n\nSimilar to the sparse regression setting, ill-conditioned problems require special care in the matrix recovery setting as well. For the general ARM problem, the work of \\cite{JainTK2014} does this by first proposing appropriate versions of the RSC-RSS properties (see Definition~\\ref{defn:rsc-rss}) for the matrix case, and then suitably modifying SVP-style techniques to function even in high condition number settings. The final result is similar to the one for sparse recovery (see Theorem~\\ref{thm:iht-conv-proof-rsc-rss}), wherein a more ``relaxed'' projection step is required by using a rank $q > r$ while executing the SVP algorithm.\n\nIt turns out to be challenging to prove convergence results for SVP for the LRMC problem. This is primarily due to the difficulty in establishing the matrix RIP property for the affine transformation used in the problem. The affine map simply selects a few elements of the matrix and reproduces them, which makes establishing RIP properties harder in this setting. 
Specifically, even though the initialization step can be shown to yield a matrix that satisfies matrix RIP if the underlying matrix is low-rank and incoherent \\citep{JainMD2010}, it becomes challenging to show that RIP-ness is maintained across iterates. \\cite{JainN2015} overcome this by executing the SVP algorithm in a \\emph{stage-wise} fashion, which resembles ADMiRA-like pursuit approaches.\n\nSeveral works have furthered the alternating minimization approach itself by reducing its sample complexity \\citep{Hardt2014}, giving recovery guarantees independent of the condition number of the problem \\citep{HardtW2014,JainN2015}, designing universal sampling schemes for recovery \\citep{BhojanapalliJ2014}, as well as tackling settings where some of the revealed entries of the matrix may be corrupted \\citep{ChenXCS2016, CherapanamjeriGJ2017}.\n\nAnother interesting line of work for matrix completion is that of \\cite{GeLM2016} and \\cite{SunL2015}, which shows that under certain regularization assumptions, the matrix completion problem does not have any non-optimal stationary points in a neighborhood of the global minimum. Thus, once one is close enough to the global minimum, one can use methods such as alternating minimization, gradient descent, and the stochastic gradient descent variants we studied in \\S~\\ref{chap:saddle}. \n\\part{Introduction and Basic Tools}\n\n\\input{intro}\n\\input{tools}\n\n\\part{Non-convex Optimization Primitives}\n\n\\input{pgd}\n\\input{altmin}\n\\input{em}\n\\input{saddle}\n\n\\part{Applications}\n\n\\input{spreg}\n\\input{matrec}\n\\input{rreg}\n\\input{phret}\n\n\\backmatter \n\n\\bibliographystyle{plainnat}\n\n\\chapter*{Mathematical Notation}\n\\label{chap:not}\n\\addcontentsline{toc}{chapter}{Mathematical Notation}\n\\markboth{\\sffamily\\slshape Mathematical Notation}\n{\\sffamily\\slshape Mathematical Notation}\n\\begin{itemize}\n\t\\item The set of real numbers is denoted by $\\bR$. 
The set of natural numbers is denoted by $\\bN$.\n\t\\item The cardinality of a set $S$ is denoted by $\\abs{S}$.\n\t\\item Vectors are denoted by boldface, lower-case letters, for example, $\\vx,\\vy$. The zero vector is denoted by $\\vzero$. A vector $\\vx \\in \\bR^p$ will be in column format. The transpose of a vector is denoted by $\\vx^\\top$. The $i\\th$ coordinate of a vector $\\vx$ is denoted by $\\vx_i$.\n\t\\item Matrices are denoted by upper-case letters, for example, $A, B$. $A_i$ denotes the $i\\th$ column of the matrix $A$ and $A^j$ denotes its $j\\th$ row. $A_{ij}$ denotes the element at the $i\\th$ row and $j\\th$ column.\n\t\\item For a vector $\\vx \\in \\bR^p$ and a set $S \\subset [p]$, the notation $\\vx_S$ denotes the vector $\\vz \\in \\bR^p$ such that $\\vz_i = \\vx_i$ for $i \\in S$, and $\\vz_i = 0$ otherwise. Similarly for matrices, $A_S$ denotes the matrix $B$ with $B_i = A_i$ for $i \\in S$ and $B_i = \\vzero$ for $i \\notin S$. Also, $A^S$ denotes the matrix $B$ with $B^i = A^i$ for $i \\in S$ and $B^i = \\vzero^\\top$ for $i \\notin S$.\n\t\\item The support of a vector $\\vx$ is denoted by $\\supp(\\vx) := \\bc{i : \\vx_i \\neq 0}$. A vector $\\vx$ is referred to as $s$-sparse if $\\abs{\\supp(\\vx)} \\leq s$.\n\t\\item The canonical directions in $\\bR^p$ are denoted by $\\ve_i,\\ i = 1, \\ldots, p$.\n\t\\item The identity matrix of order $p$ is denoted by $I_{p\\times p}$ or simply $I_p$. The subscript may be omitted when the order is clear from context.\n\t\\item For a vector $\\vx \\in \\bR^p$, the notation $\\norm{\\vx}_q = \\sqrt[\\leftroot{1}\\uproot{1}q]{\\sum_{i=1}^p\\abs{\\vx_i}^q}$ denotes its $L_q$ norm. As special cases we define $\\norm{\\vx}_\\infty := \\max_i\\ \\abs{\\vx_i}$, $\\norm{\\vx}_{-\\infty} := \\min_i\\ \\abs{\\vx_i}$, and $\\norm{\\vx}_0 := \\abs{\\supp(\\vx)}$.\n\t\\item Balls with respect to various norms are denoted as $\\cB_q(r) := \\bc{\\vx \\in \\bR^p, \\norm{\\vx}_q \\leq r}$. 
As a special case the notation $\\cB_0(s)$ is used to denote the set of $s$-sparse vectors.\n\t\\item For a matrix $A \\in \\bR^{m \\times n}$, $\\sigma_1(A)\\geq \\sigma_2(A)\\geq \\ldots\\geq\\sigma_{\\min\\bc{m,n}}(A)$ denote its singular values. The Frobenius norm of $A$ is defined as $\\norm{A}_F := \\sqrt{\\sum_{i,j} A_{ij}^2}=\\sqrt{\\sum_i \\sigma_i(A)^2}$. The nuclear norm of $A$ is defined as $\\norm{A}_\\ast := \\sum_i \\sigma_i(A)$.\n\t\\item The trace of a square matrix $A \\in \\bR^{m \\times m}$ is defined as $\\text{tr}(A) = \\sum_{i=1}^m A_{ii}$.\n\t\\item The spectral norm (also referred to as the operator norm) of a matrix $A$ is defined as $\\norm{A}_2 := \\max_i \\sigma_i(A)$.\n\t\\item Random variables are denoted using upper case letters such as $X,Y$.\n\t\\item The expectation of a random variable $X$ is denoted by $\\E{X}$. In cases where the distribution of $X$ is to be made explicit, the notation $\\bE_{X\\sim\\cD}\\bs{X}$, or else simply $\\bE_\\cD\\bs{X}$, is used.\n\t\\item $\\text{\\textsf{Unif}}(\\cX)$ denotes the uniform distribution over a compact set $\\cX$.\n\t\\item The standard \\emph{big-Oh} notation is used to describe the asymptotic behavior of functions. The \\emph{soft-Oh} notation is employed to hide poly-logarithmic factors i.e., $f = \\softO{g}$ will imply $f = \\bigO{g \\log^c(g)}$ for some absolute constant $c$.\n\\end{itemize}\n\\chapter*{Preface}\n\\label{chap:org}\n\\addcontentsline{toc}{chapter}{Preface}\n\\markboth{\\sffamily\\slshape Preface}\n{\\sffamily\\slshape Preface}\n\nOptimization as a field of study has permeated much of science and technology. The advent of the digital computer and a tremendous subsequent increase in our computational prowess has increased the impact of optimization in our lives. 
Today, everything from tiny details such as airline schedules to leaps and strides in medicine, physics, and artificial intelligence relies on modern advances in optimization techniques.\n\nFor a large portion of this period of excitement, our energies were focused largely on convex optimization problems, given our deep understanding of the structural properties of convex sets and convex functions. However, modern applications in domains such as signal processing, bioinformatics, and machine learning are often dissatisfied with convex formulations alone since there exist non-convex formulations that better capture the problem structure. For applications in these domains, models trained using non-convex formulations often offer excellent performance and other desirable properties such as compactness and reduced prediction times.\n\nExamples of applications that benefit from non-convex optimization techniques include gene expression analysis, recommendation systems, clustering, and outlier and anomaly detection. In order to get solutions to these problems that are scalable and accurate, we require a deeper understanding of the non-convex optimization problems that naturally arise in these settings.\n\nSuch an understanding was lacking until very recently, and non-convex optimization found little attention as an active area of study, being regarded as intractable. Fortunately, a long line of works has recently led areas such as computer science, signal processing, and statistics to realize that the general abhorrence of non-convex optimization problems hitherto practiced was misguided. 
These works demonstrated, in a beautiful way, that although non-convex optimization problems do suffer from intractability in general, those that arise in \\emph{natural settings} such as machine learning and signal processing possess additional structure that allows the intractability results to be circumvented.\n\nThe first of these works still religiously stuck to convex optimization as the method of choice and instead sought to show that certain classes of non-convex problems, which possess suitable additional structure as offered by natural instances of those problems, could be converted to convex problems without any loss. More precisely, it was shown that the original non-convex problem and the modified convex problem possessed a common optimum and thus, the solution to the convex problem would automatically solve the non-convex problem as well! However, these approaches had a price to pay in terms of the time it took to solve these so-called \\emph{relaxed} convex problems. In several instances, these relaxed problems, although not intractable to solve, were nevertheless challenging to solve at large scales.\n\nIt took a second wave of still more recent results to usher in provable non-convex optimization techniques which abstained from relaxations, solved the non-convex problems in their native forms, and yet seemed to offer the same quality of results as relaxation methods did. These newer results were accompanied by a newer realization that, for a wide range of applications such as sparse recovery, matrix completion, and robust learning, among others, these direct techniques are faster, often by an order of magnitude or more, than relaxation-based techniques while offering solutions of similar accuracy.\n\nThis monograph wishes to tell the story of this realization and the wisdom we gained from it, from the point of view of machine learning and signal processing applications. 
The monograph will introduce the reader to a lively world of non-convex optimization problems with rich structure that can be exploited to obtain extremely scalable solutions to these problems. Put a bit more dramatically, it will seek to show how problems that were once avoided, having been shown to be NP-hard to solve, now have solvers that operate in near-linear time, by carefully analyzing and exploiting additional task structure! It will seek to inform the reader on how to look for such structure in diverse application areas, as well as equip the reader with a sound background in fundamental tools and concepts required to analyze such problem areas and come up with newer solutions.\\\\\n\n\\noindent\\textbf{How to use this monograph} We have made efforts to make this monograph as self-contained as possible while not losing focus on the main topic of non-convex optimization techniques. Consequently, we have devoted entire sections to present a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts. As such, this monograph can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning.\n\nOn the other hand, it is also possible to cherry-pick portions of the monograph, such as the section on sparse recovery, or the EM algorithm, for inclusion in a broader course. Several courses such as those in machine learning, optimization, and signal processing may benefit from the inclusion of such topics. However, we advise that relevant background sections (see Figure~\\ref{fig:org}) be covered beforehand.\n\nWhile striving for breadth, the limits of space have constrained us from looking at some topics in much detail. Examples include the construction of design matrices that satisfy the RIP\/RSC properties and pursuit-style methods, but there are several others. 
However, for all such omissions, the bibliographic notes at the end of each section can always be consulted for references to details of the omitted topics. We have also been unable to address several application areas such as dictionary learning, advances in low-rank tensor decompositions, topic modeling and community detection in graphs but have provided pointers to prominent works in these application areas too.\n\nThe organization of this monograph is outlined below with Figure~\\ref{fig:org} presenting a suggested order of reading the various sections.\\\\\n\n\\begin{figure}[t!]\n\\includegraphics[width=\\columnwidth]{org.pdf}\n\\caption[Suggested Order of Reading the Sections]{A schematic showing the suggested order of reading the sections. For example, concepts introduced in \\S~\\ref{chap:pgd} and \\ref{chap:altmin} are helpful for \\S~\\ref{chap:rreg} but a thorough reading of \\S~\\ref{chap:saddle} is not required for the same. Similarly, we recommend reading \\S~\\ref{chap:em} after going through \\S~\\ref{chap:altmin} but a reader may choose to proceed to \\S~\\ref{chap:spreg} directly after reading \\S~\\ref{chap:pgd}.}\n\\label{fig:org}\n\\end{figure}\n\n\\noindent\\textbf{Part I: Introduction and Basic Tools}\\\\\nThis part will offer an introductory note and a section exploring some basic definitions and algorithmic tools in convex optimization. These sections are recommended to readers not intimately familiar with basics of numerical optimization.\\\\\n\n\\noindent\\textbf{Section 1 - Introduction} This section will give a more relaxed introduction to the area of non-convex optimization by discussing applications that motivate the use of non-convex formulations. The discussion will also clarify the scope of this monograph.\\\\\n\n\\noindent\\textbf{Section 2 - Mathematical Tools} This section will set up notation and introduce some basic mathematical tools in convex optimization. 
This section is basically a handy repository of useful concepts and results and can be skipped by a reader familiar with them. Parts of the section may instead be referred back to, as and when needed, using the cross-referencing links in the monograph.\n\n\\noindent\\textbf{Part II: Non-convex Optimization Primitives}\\\\\nThis part will equip the reader with a collection of primitives most widely used in non-convex optimization problems.\\\\\n\n\\noindent\\textbf{Section 3 - Non-convex Projected Gradient Descent} This section will introduce the simple and intuitive projected gradient descent method in the context of non-convex optimization. Variants of this method will be used in later sections to solve problems such as sparse recovery and robust learning.\\\\\n\n\\noindent\\textbf{Section 4 - Alternating Minimization} This section will introduce the principle of alternating minimization, which is widely used in optimization problems over two or more (groups of) variables. The methods introduced in this section will be used in later sections to solve problems such as low-rank matrix recovery, robust regression, and phase retrieval.\\\\\n\n\\noindent\\textbf{Section 5 - The EM Algorithm} This section will introduce the EM algorithm which is a widely used optimization primitive for learning problems with latent variables. Although EM is a form of alternating minimization, given its significance, the section gives it special attention. This section will discuss some recent advances in the analysis and applications of this method and look at two case studies in learning Gaussian mixture models and mixed regression to illustrate the algorithm and its analyses.\\\\\n\n\\noindent\\textbf{Section 6 - Stochastic Non-convex Optimization} This section will look at some recent advances in using stochastic optimization techniques for solving optimization problems with non-convex objectives. 
The section will also introduce the problem of tensor factorization as a case study for the algorithms being studied.\\\\\n\n\\noindent\\textbf{Part III - Applications}\\\\\nThis part will take a look at four interesting applications in the areas of machine learning and signal processing and explore how the non-convex optimization techniques introduced earlier can be used to solve these problems.\n\n\\noindent\\textbf{Section 7 - Sparse Recovery} This section will look at a very basic non-convex optimization problem, that of performing linear regression to fit a sparse model to the data. The section will discuss conditions under which it is possible to do so in polynomial time and show how the non-convex projected gradient descent method studied earlier can be used to offer provably optimal solutions. The section will also point to other techniques used to solve this problem, as well as refer to extensions and related results.\\\\\n\n\\noindent\\textbf{Section 8 - Low-rank Matrix Recovery} This section will address the more general problem of low rank matrix recovery with specific emphasis on low-rank matrix completion. The section will gently introduce low-rank matrix recovery as a generalization of sparse linear regression that was studied in the previous section and then move on to look at matrix completion in more detail. The section will apply both the non-convex projected gradient descent and alternating minimization methods in the context of low-rank matrix recovery, analyzing simple cases and pointing to relevant literature.\\\\\n\n\\noindent\\textbf{Section 9 - Robust Regression} This section will look at a widely studied area of machine learning, namely robust learning, from the point of view of regression. Algorithms that are robust to (adversarial) corruption in data are sought after in several areas of signal processing and learning. 
The section will explore how to use the projected gradient and alternating minimization techniques to solve the robust regression problem and also look at applications of robust regression to robust face recognition and robust time series analysis.\\\\\n\n\\noindent\\textbf{Section 10 - Phase Retrieval} This section will look at some recent advances in the application of non-convex optimization to phase retrieval. This problem lies at the heart of several imaging techniques such as X-ray crystallography and electron microscopy. A lot remains to be understood about this problem and existing algorithms often struggle to cope with the retrieval problems presented in practice.\\\\\n\nThe area of non-convex optimization has considerably widened in both scope and application in recent years and newer methods and analyses are being proposed at a rapid pace. While this makes researchers working in this area extremely happy, it also makes summarizing the vast body of work in a monograph such as this more challenging. We have striven to strike a balance between presenting results that are the best known, and presenting them in a manner accessible to a newcomer. However, in all cases, the bibliographic notes at the end of each section do contain pointers to the state of the art in that area and can be referenced for follow-up readings.\\\\\n\n\\noindent Prateek Jain, Bangalore, India\\\\\n\\noindent Purushottam Kar, Kanpur, India\\\\\n\\noindent \\today\n\n\n\n\n\n\n\\chapter{Non-Convex Projected Gradient Descent}\n\\label{chap:pgd}\n\nIn this section we will introduce and study gradient descent-style methods for non-convex optimization problems. In \\S~\\ref{chap:tools}, we studied the projected gradient descent method for convex optimization problems. Unfortunately, the algorithmic and analytic techniques used in convex problems fail to extend to non-convex problems. 
In fact, non-convex problems are NP-hard to solve and thus, no algorithmic technique should be expected to succeed on these problems in general.\n\nHowever, the situation is not so bleak. As we discussed in \\S~\\ref{chap:intro}, several breakthroughs in non-convex optimization have shown that non-convex problems that possess nice additional structure can be solved not just in polynomial time, but rather efficiently too. Here, we will study the inner workings of projected gradient methods on such structured non-convex optimization problems.\n\nThe discussion will be divided into three parts. The first part will take a look at constraint sets that, despite being non-convex, possess additional structure so that projections onto them can be carried out efficiently. The second part will take a look at structural properties of objective functions that can aid optimization. The third part will present and analyze a simple extension of the PGD algorithm for non-convex problems. We will see that for problems that do possess nicely structured objective functions and constraint sets, the PGD-style algorithm does converge to the global optimum in polynomial time with a linear rate of convergence.\n\nWe would like to point out to the reader that our emphasis in this section will be on generality and exposition of basic concepts. We will seek to present easily accessible analyses for problems that have non-convex objectives. However, the price we will pay for this generality is in the fineness of the results we present. The results discussed in this section are not the best possible and more refined and problem-specific results will be discussed in subsequent sections where specific applications will be discussed in detail.\n\n\\section{Non-Convex Projections}\n\\label{sec:non-cvx-proj}\nExecuting the projected gradient descent algorithm with non-convex problems requires projections onto non-convex sets. 
Now, a quick look at the projection problem\n\[\n\\Pi_\\cC(\\vz) := \\underset{\\vx\\in\\cC}{\\argmin}\\ \\norm{\\vx-\\vz}_2\n\]\nreveals that this is an optimization problem in itself. Thus, when the set $\\cC$ to be projected onto is non-convex, the projection problem can itself be NP-hard. However, for several well-structured sets, projection can be carried out efficiently despite the sets being non-convex.\n\n\\subsection{Projecting into Sparse Vectors}\nIn the sparse linear regression example discussed in \\S~\\ref{chap:intro},\n\[\n\\bth = \\underset{\\norm{\\bt}_0 \\leq s}{\\argmin}\\ \\sum_{i=1}^n\\br{y_i - \\vx_i^\\top\\bt}^2,\n\]\napplying projected gradient descent requires projections onto the set of $s$-sparse vectors, i.e., $\\cB_0(s) := \\bc{\\vx\\in\\bR^p, \\norm{\\vx}_0 \\leq s}$. The following result shows that the projection $\\Pi_{\\cB_0(s)}(\\vz)$ can be carried out by simply sorting the coordinates of the vector $\\vz$ according to magnitude and setting all except the top-$s$ coordinates to zero.\n\n\\begin{lemma}\n\\label{lem:sparse-proj}\nFor any vector $\\vz \\in \\bR^p$, let $\\sigma$ be the permutation that sorts the coordinates of $\\vz$ in decreasing order of magnitude, i.e., $\\abs{\\vz_{\\sigma(1)}} \\geq \\abs{\\vz_{\\sigma(2)}} \\geq \\ldots \\geq \\abs{\\vz_{\\sigma(p)}}$. Then the vector $\\hat\\vz := \\Pi_{\\cB_0(s)}(\\vz)$ is obtained by setting $\\hat\\vz_{\\sigma(j)} = \\vz_{\\sigma(j)}$ for $j \\leq s$ and $\\hat\\vz_{\\sigma(j)} = 0$ for $j > s$, i.e., by retaining the $s$ coordinates of largest magnitude and setting the rest to zero.\n\\end{lemma}\n\\begin{proof}\nWe first notice that since the function $x \\mapsto x^2$ is an increasing function on the positive half of the real line, we have $\\underset{\\vx\\in\\cC}{\\argmin}\\ \\norm{\\vx-\\vz}_2 = \\underset{\\vx\\in\\cC}{\\argmin}\\ \\norm{\\vx-\\vz}_2^2$. Next, we observe that the vector $\\hat\\vz := \\Pi_{\\cB_0(s)}(\\vz)$ must satisfy $\\hat\\vz_i = \\vz_i$ for all $i \\in \\supp(\\hat\\vz)$, for otherwise we could decrease the objective value $\\norm{\\hat\\vz -\\vz}_2^2$ by ensuring this. 
Having established this, we get $\\norm{\\hat\\vz - \\vz}_2^2 = \\sum_{i \\notin \\supp(\\hat\\vz)} \\vz_i^2$. This is clearly minimized when $\\supp(\\hat\\vz)$ contains the $s$ coordinates of $\\vz$ with the largest magnitudes.\n\\end{proof}\n\n\\subsection{Projecting into Low-rank Matrices}\nIn the recommendation systems problem, as discussed in \\S~\\ref{chap:intro},\n\[\n\\hat A_\\text{lr} = \\underset{\\rank(X) \\leq r}{\\arg\\min}\\ \\sum_{(i,j) \\in \\Omega}\\br{X_{ij} - A_{ij}}^2,\n\]\nwe need to project onto the set of low-rank matrices. Let us first define this problem formally. Consider matrices of a certain order, say $m \\times n$, and let $\\cC \\subset \\bR^{m\\times n}$ be an arbitrary set of matrices. Then, the projection operator $\\Pi_\\cC(\\cdot)$ is defined as follows: for any matrix $A \\in \\bR^{m\\times n}$,\n\[\n\\Pi_\\cC(A) := \\underset{X\\in\\cC}{\\argmin}\\ \\norm{A - X}_F,\n\]\nwhere $\\norm{\\cdot}_F$ is the Frobenius norm over matrices. For low-rank projections we require $\\cC$ to be the set of low-rank matrices $\\cB_{\\rank}(r) := \\bc{A \\in \\bR^{m\\times n}, \\rank(A) \\leq r}$. Yet again, this projection can be done efficiently by performing a \\emph{Singular Value Decomposition} on the matrix $A$ and retaining the top $r$ singular values and vectors. The Eckart-Young-Mirsky theorem proves that this indeed gives us the projection.\n\n\\begin{theorem}[Eckart-Young-Mirsky theorem]\n\\label{thm:eym-thm}\nFor any matrix $A \\in \\bR^{m \\times n}$, let $U\\Sigma V^\\top$ be the singular value decomposition of $A$ such that $\\Sigma = \\diag(\\sigma_1,\\sigma_2,\\ldots,\\sigma_{\\min(m,n)})$ where $\\sigma_1 \\geq \\sigma_2 \\geq \\ldots \\geq \\sigma_{\\min(m,n)}$. 
Then for any $r \\leq {\\min(m,n)}$, the matrix $\\hat A_{(r)} := \\Pi_{\\cB_{\\rank}(r)}(A)$ can be obtained as $U_{(r)}\\Sigma_{(r)}V_{(r)}^\\top$ where $U_{(r)} := \\bs{U_1 U_2 \\ldots U_r}$, $V_{(r)} := \\bs{V_1 V_2 \\ldots V_r}$, and $\\Sigma_{(r)} := \\diag(\\sigma_1,\\sigma_2,\\ldots,\\sigma_r)$.\n\\end{theorem}\n\nAlthough we have stated the above result with the Frobenius norm defining the projection, the Eckart-Young-Mirsky theorem actually applies to any unitarily invariant norm including the Schatten norms and the operator norm. The proof of this result is beyond the scope of this monograph.\n\nBefore moving on, we caution the reader that the ability to efficiently project onto the non-convex sets mentioned above does not imply that non-convex projections are as nicely behaved as their convex counterparts. Indeed, none of the projections mentioned above satisfy projection properties I or II (Lemmata~\\ref{lem:proj-prop-1} and \\ref{lem:proj-prop-2}). This will pose a significant challenge while analyzing PGD-style algorithms for non-convex problems since, as the reader would recall, these properties were crucially used in all the convergence proofs discussed in \\S~\\ref{chap:tools}.\n\n\\section{Restricted Strong Convexity and Smoothness}\nIn \\S~\\ref{chap:tools}, we saw how optimization problems with convex constraint sets and objective functions that are convex and have bounded gradients, or else are strongly convex and smooth, can be effectively optimized using PGD, with much faster rates of convergence if the objective is strongly convex and smooth. However, when the constraint set fails to be convex, these results fail to apply.\n\nThere are several workarounds to this problem, the simplest being to convert the constraint set into a convex one, possibly by taking its \\emph{convex hull}\\footnote{The convex hull of any set $\\cC$ is the ``smallest'' convex set $\\bar\\cC$ that contains $\\cC$. 
Formally, we define $\\bar\\cC = \\!\\!\\!\\!\\!\\underset{\\substack{S \\supseteq \\cC\\\\S \\text{ is convex}}}\\bigcap \\!\\!\\!\\!\\!S$. If $\\cC$ is convex then it is its own convex hull.}, which is what relaxation methods do. However, a much less drastic alternative exists that is widely popular in non-convex optimization literature.\n\nThe intuition is a simple one and generalizes much of the insights we gained from our discussion in \\S~\\ref{chap:tools}. The first thing we need to notice\\elink{exer:pgd-rsc} is that the convergence results for the PGD algorithm in \\S~\\ref{chap:tools} actually do not require the objective function to be convex (or strongly convex\/strongly smooth) over the entire $\\bR^p$. These properties are only required to be satisfied over the constraint set being considered. A natural generalization that emerges from this insight is the concept of \\emph{restricted} properties that are discussed below.\n\n\\begin{definition}[Restricted Convexity]\n\\label{defn:res-cvx-fn}\nA continuously differentiable function $f: \\bR^p \\rightarrow \\bR$ is said to satisfy restricted convexity over a (possibly non-convex) region $\\cC \\subseteq \\bR^p$ if for every $\\vx,\\vy \\in \\cC$ we have $f(\\vy) \\geq f(\\vx) + \\ip{\\nabla f(\\vx)}{\\vy - \\vx}$, where $\\nabla f(\\vx)$ is the gradient of $f$ at $\\vx$.\n\\end{definition}\n\nAs before, a more general definition that extends to non-differentiable functions, uses the notion of subgradient to replace the gradient in the above expression.\n\n\\begin{definition}[Restricted Strong Convexity\/Smoothness]\n\\label{defn:res-strong-cvx-smooth-fn}\nA continuously differentiable function $f: \\bR^p \\rightarrow \\bR$ is said to satisfy $\\alpha$-restricted strong convexity (RSC) and $\\beta$-restricted strong smoothness (RSS) over a (possibly non-convex) region $\\cC \\subseteq \\bR^p$ if for every $\\vx,\\vy \\in \\cC$, we have\n\\[\n\\frac{\\alpha}{2}\\norm{\\vx-\\vy}_2^2 \\leq f(\\vy) - f(\\vx) 
- \\ip{\\nabla f(\\vx)}{\\vy - \\vx} \\leq \\frac{\\beta}{2}\\norm{\\vx-\\vy}_2^2.\n\]\n\\end{definition}\n\nNote that, as Figure~\\ref{fig:rsc} demonstrates, even non-convex functions can satisfy the RSC\/RSS properties over suitable subsets. Conversely, functions that satisfy RSC\/RSS need not be convex. It turns out that in several practical situations, such as those explored by later sections, the objective functions in the non-convex optimization problems do satisfy the RSC\/RSS properties described above, in some form.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{rsc.pdf}\n\\caption[Restricted Strong Convexity and Strong Smoothness]{A depiction of restricted convexity properties. $f$ is clearly non-convex over the entire real line but is convex within the cross-hatched region bounded by the dotted vertical lines. $g$ is a non-convex function that satisfies restricted strong convexity. Outside the cross-hatched region (again bounded by the dotted vertical lines), $g$ fails to even be convex as its curve falls below its tangent, but within the region, it actually exhibits strong convexity.}%\n\\label{fig:rsc}\n\\end{figure}\n\nWe also remind the reader that the RSC\/RSS definitions presented here are quite generic and are presented to better illustrate the basic concepts. Indeed, for specific non-convex problems such as sparse recovery, low-rank matrix recovery, and robust regression, the later sections will develop more refined versions of these properties that are better tailored to those problems. In particular, for sparse recovery problems, the RSC\/RSS properties can be shown to be related\\elink{exer:spreg-rsc-is-rsc} to the well-known restricted isometry property (RIP).\n\n\\section{Generalized Projected Gradient Descent}\nWe now present the generalized projected gradient descent algorithm (gPGD) for non-convex optimization problems. The procedure is outlined in Algorithm~\\ref{algo:gpgd}. 
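To make the procedure concrete before we analyze it, the following sketch (our own illustration in NumPy, not code from the original text) implements the gPGD loop of Algorithm~\ref{algo:gpgd} together with the two non-convex projection operators discussed in \S~\ref{sec:non-cvx-proj}, and applies it to a toy noiseless sparse regression instance. The problem sizes are arbitrary, and the step length is set to $\eta = 1/\beta$ with $\beta$ taken as the (unrestricted) smoothness constant of the least-squares objective.

```python
import numpy as np

def project_sparse(z, s):
    """Pi_{B_0(s)}: retain the s largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(z)
    top = np.argsort(-np.abs(z))[:s]   # indices of the top-s entries by magnitude
    out[top] = z[top]
    return out

def project_low_rank(A, r):
    """Pi_{B_rank(r)}: truncated SVD, as sanctioned by the Eckart-Young-Mirsky theorem."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * S[:r]) @ Vt[:r]

def gpgd(grad_f, project, x0, eta, T):
    """gPGD: a gradient step followed by a (possibly non-convex) projection."""
    x = x0
    for _ in range(T):
        x = project(x - eta * grad_f(x))
    return x

# Toy noiseless sparse regression: recover an s-sparse theta* from y = X theta*.
rng = np.random.default_rng(0)
n, p, s = 500, 20, 3
theta_star = np.zeros(p)
theta_star[:s] = [1.0, -2.0, 0.5]
X = rng.standard_normal((n, p))
y = X @ theta_star

f_grad = lambda t: 2.0 * X.T @ (X @ t - y) / n      # gradient of (1/n)||X t - y||_2^2
beta = 2.0 * np.linalg.eigvalsh(X.T @ X / n).max()  # smoothness constant; eta = 1/beta
theta_hat = gpgd(f_grad, lambda z: project_sparse(z, s), np.zeros(p), eta=1.0 / beta, T=100)
```

On such a small, well-conditioned instance the iterates recover the planted sparse vector essentially exactly, at a visibly linear rate.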
The reader would find it remarkably similar to the PGD procedure in Algorithm~\\ref{algo:pgd}. However, a crucial difference is in the projections made. Whereas PGD utilized convex projections, the gPGD procedure, if invoked with a non-convex constraint set $\\cC$, utilizes non-convex projections instead.\n\nWe will perform the convergence analysis for the gPGD algorithm assuming that the projection step in the algorithm is carried out exactly. As we saw in the preceding discussion, this can be accomplished efficiently for non-convex sets arising in several interesting problem settings. However, despite this, the convergence analysis will remain challenging due to the non-convexity of the problem.\n\nFirstly, we will not be able to assume that the objective function we are working with is convex over the entire $\\bR^p$. Secondly, non-convex projections do not satisfy projection properties I or II. Finally, the first order optimality condition (\\cite[Proposition 1.3]{Bubeck2015}) we used to prove Theorem~\\ref{thm:pgd-sc-ss-proof} also fails to hold for non-convex constraint sets. Since the analyses for the PGD algorithm crucially used these results, we will have to find workarounds to all of them. We will denote the optimal function value as $f^\\ast = \\min_{\\vx \\in \\cC}f(\\vx)$ and any optimizer as $\\vx^\\ast \\in \\cC$ such that $f(\\vx^\\ast) = f^\\ast$.\n\nTo simplify the presentation we will assume that $\\nabla f(\\vx^\\ast) = \\vzero$. This assumption is satisfied whenever the objective function is differentiable and the optimal point $\\vx^\\ast$ lies in the interior of the constraint set $\\cC$. However, many sets such as $\\cB_0(s)$ do not possess an interior (although they may still possess a \\emph{relative} interior) and this assumption fails by default on such sets. Nevertheless, this assumption will greatly simplify the presentation as well as help us focus on the key issues. 
Moreover, convergence results can be shown without making this assumption too.\n\n\\begin{algorithm}[t]\n\t\\caption{Generalized Projected Gradient Descent (gPGD)}\n\t\\label{algo:gpgd}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Objective function $f$, constraint set $\\cC$, step length $\\eta$\n\t\t\t\\ENSURE A point $\\hat\\vx \\in \\cC$ with near-optimal objective value\n\t\t\t\\STATE $\\vx^1 \\leftarrow \\vzero$\n\t\t\t\\FOR{$t = 1, 2, \\ldots, T$}\n\t\t\t\t\\STATE $\\vz^{t+1} \\leftarrow \\vx^t - \\eta\\cdot\\nabla f(\\vx^t)$\n\t\t\t\t\\STATE $\\vx^{t+1} \\leftarrow \\Pi_\\cC(\\vz^{t+1})$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\hat\\vx_{\\text{final}} = \\vx^T$}\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nTheorem~\\ref{thm:gpgd-rsc-rss-proof} gives the convergence proof for gPGD. The reader will notice that the convergence rate offered by gPGD is similar to the one offered by the PGD algorithm for convex optimization (see Theorem~\\ref{thm:pgd-sc-ss-proof}). However, the gPGD algorithm requires a more careful analysis of the structure of the objective function and constraint set since it is working with a non-convex optimization problem.\n\n\\begin{theorem}\n\\label{thm:gpgd-rsc-rss-proof}\nLet $f$ be a (possibly non-convex) function satisfying the $\\alpha$-RSC and $\\beta$-RSS properties over a (possibly non-convex) constraint set $\\cC$ with $\\beta\/\\alpha < 2$. Let Algorithm~\\ref{algo:gpgd} be executed with a step length $\\eta = \\frac{1}{\\beta}$. Then after at most $T = \\bigO{\\frac{\\alpha}{2\\alpha - \\beta}\\log\\frac{1}{\\epsilon}}$ steps, $f(\\vx^T) \\leq f(\\vx^\\ast) + \\epsilon$.\n\\end{theorem}\n\nThis result holds even when the step length is set to values that are large enough but yet smaller than $1\/\\beta$. 
However, setting $\\eta = \\frac{1}{\\beta}$ simplifies the proof and allows us to focus on the key concepts.\n\n\\begin{proof}[Proof (of Theorem~\\ref{thm:gpgd-rsc-rss-proof}).]\nRecall that the proof of Theorem~\\ref{thm:pgd-sc-ss-proof} used the SC\/SS properties for the analysis. We will replace these by the RSC\/RSS properties -- we will use RSC to track the global convergence of the algorithm and RSS to locally assess the progress made by the algorithm in each iteration. We will use $\\Phi_t = f(\\vx^{t}) - f(\\vx^\\ast)$ as the potential function.\\\\\n\n\\noindent\\textbf{(Apply Restricted Strong Smoothness)} Since both $\\vx^t,\\vx^{t+1} \\in \\cC$ due to the projection steps, we apply the $\\beta$-RSS property to them.\n\\begin{align*}\nf(\\vx^{t+1}) - f(\\vx^t) &\\leq \\ip{\\nabla f(\\vx^t)}{\\vx^{t+1}-\\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^t-\\vx^{t+1}}_2^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= \\frac{1}{\\eta}\\ip{\\vx^t - \\vz^{t+1}}{\\vx^{t+1}-\\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^t-\\vx^{t+1}}_2^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= \\frac{\\beta}{2}\\br{\\norm{\\vx^{t+1} - \\vz^{t+1}}_2^2 - \\norm{\\vx^t - \\vz^{t+1}}_2^2}\n\\end{align*}\nNotice that this step crucially uses the fact that $\\eta = 1\/\\beta$.\\\\\n\n\\noindent\\textbf{(Apply Projection Property)} We are again stuck with the unwieldy $\\vz^{t+1}$ term. However, unlike before, we cannot apply projection properties I or II as non-convex projections do not satisfy them. Instead, we resort to Projection Property-O (Lemma~\\ref{lem:proj-prop-o}), that all projections (even non-convex ones) must satisfy. 
Applying this property gives us\n\\begin{align*}\nf(\\vx^{t+1}) - f(\\vx^t) &\\leq \\frac{\\beta}{2}\\br{\\norm{\\vx^\\ast - \\vz^{t+1}}_2^2 - \\norm{\\vx^t - \\vz^{t+1}}_2^2}\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= \\frac{\\beta}{2}\\br{\\norm{\\vx^\\ast - \\vx^t}_2^2 + 2\\ip{\\vx^\\ast - \\vx^t}{\\vx^t - \\vz^{t+1}}}\\\\\n\t\t\t\t\t\t\t\t\t\t\t&= \\frac{\\beta}{2}\\norm{\\vx^\\ast - \\vx^t}_2^2 + \\ip{\\vx^\\ast - \\vx^t}{\\nabla f(\\vx^t)}\n\\end{align*}\n\n\\noindent\\textbf{(Apply Restricted Strong Convexity)} Since both $\\vx^t,\\vx^\\ast \\in \\cC$, we apply the $\\alpha$-RSC property to them. However, we do so in two ways:\n\\begin{align*}\nf(\\vx^\\ast) - f(\\vx^t) &\\geq \\ip{\\nabla f(\\vx^t)}{\\vx^\\ast - \\vx^t} + \\frac{\\alpha}{2}\\norm{\\vx^t - \\vx^\\ast}_2^2\\\\\nf(\\vx^t) - f(\\vx^\\ast) &\\geq \\ip{\\nabla f(\\vx^\\ast)}{\\vx^t - \\vx^\\ast} + \\frac{\\alpha}{2}\\norm{\\vx^t - \\vx^\\ast}_2^2 \\geq \\frac{\\alpha}{2}\\norm{\\vx^t - \\vx^\\ast}_2^2,\n\\end{align*}\nwhere in the second line we used the fact that we assumed $\\nabla f(\\vx^\\ast) = \\vzero$. We recall that this assumption can be done away with, at the cost of a more complicated proof. Simple manipulations with the two equations give us\n\[\n\\ip{\\nabla f(\\vx^t)}{\\vx^\\ast - \\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^\\ast - \\vx^t}_2^2 \\leq \\br{2 - \\frac{\\beta}{\\alpha}}\\br{f(\\vx^\\ast) - f(\\vx^t)}\n\]\nPutting this in the earlier expression gives us\n\[\nf(\\vx^{t+1}) - f(\\vx^t) \\leq \\br{2 - \\frac{\\beta}{\\alpha}}\\br{f(\\vx^\\ast) - f(\\vx^t)}\n\]\nThe above inequality is quite interesting. It tells us that the larger the gap between $f(\\vx^\\ast)$ and $f(\\vx^t)$, the larger will be the drop in objective value in going from $\\vx^t$ to $\\vx^{t+1}$. The form of the result is also quite fortunate as it assures us that we will cover a constant fraction $\\br{2-\\frac{\\beta}{\\alpha}}$ of the remaining ``distance'' to $\\vx^\\ast$ at each step! 
Adding $f(\\vx^t) - f(\\vx^\\ast)$ to both sides and rearranging gives\n\[\n\\Phi_{t+1} \\leq (\\kappa-1)\\Phi_t,\n\]\nwhere $\\kappa = \\beta\/\\alpha$. Note that we always have $\\kappa \\geq 1$\\elink{exer:pgd-kappa} and by assumption $\\kappa = \\beta\/\\alpha < 2$, so that we always have $\\kappa - 1 \\in [0,1)$. This proves the result after simple manipulations.\n\\end{proof}\nWe see that the condition number has yet again played a crucial role in deciding the convergence rate of the algorithm, this time for a non-convex problem. However, we see that the condition number is defined differently here, using the RSC\/RSS constants instead of the SC\/SS constants as we did in \\S~\\ref{chap:tools}.\n\nThe reader would notice that while there was no restriction on the condition number $\\kappa$ in the analysis of the PGD algorithm (see Theorem~\\ref{thm:pgd-sc-ss-proof}), the analysis of the gPGD algorithm does require $\\kappa < 2$. It turns out that this restriction can be done away with for specific problems. However, the analysis becomes significantly more complicated. Resolving this issue in general is beyond the scope of this monograph but we will revisit this question in \\S~\\ref{chap:spreg} when we study sparse recovery in ill-conditioned settings, i.e., with large condition numbers.\n\nIn subsequent sections, we will see more refined versions of the gPGD algorithm for different non-convex optimization problems, as well as more refined and problem-specific analyses. 
In all cases we will see that the RSC\/RSS assumptions we make can be satisfied in practice and that gPGD-style algorithms offer very good performance on practical machine learning and signal processing problems.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:pgd-rsc}\nVerify that the basic convergence result for the PGD algorithm in Theorem~\\ref{thm:pgd-conv-proof} continues to hold when the constraint set $\\cC$ is convex and $f$ only satisfies \\emph{restricted convexity} over $\\cC$ (i.e., $f$ is not convex over the entire $\\bR^p$). Verify that the result for strongly convex and smooth functions in Theorem~\\ref{thm:pgd-sc-ss-proof} also continues to hold if $f$ satisfies RSC and RSS over a convex constraint set $\\cC$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:pgd-kappa}\nLet the function $f$ satisfy the $\\alpha$-RSC and $\\beta$-RSS properties over a set $\\cC$. Show that the condition number $\\kappa = \\frac{\\beta}{\\alpha} \\geq 1$. Note that the function $f$ and the set $\\cC$ may both be non-convex.\n\\end{exer}\n\\begin{exer}\n\\label{exer:pgd-low-rank}\nRecall the recommendation systems problem we discussed in \\S~\\ref{chap:intro}. Show that assuming the ratings matrix to be rank-$r$ is equivalent to assuming that with every user $i \\in [m]$ there is associated a vector $\\vu_i \\in \\bR^r$ describing that user, and with every item $j \\in [n]$ there is associated a vector $\\vv_j \\in \\bR^r$ describing that item such that the rating given by user $i$ to item $j$ is $A_{ij} = \\vu_i^\\top\\vv_j$.\\\\\n\\textit{Hint}: Use the singular value decomposition for $A$.\n\\end{exer}\n\\chapter{Phase Retrieval}\n\\label{chap:phret}\n\n\\newcommand{\\dist}[1]{\\text{dist}\\br{#1}}\n\nIn this section, we will take a look at the phase retrieval problem, a non-convex optimization problem with applications in several domains. We briefly mentioned this problem in \\S~\\ref{chap:em} when we were discussing mixed regression. 
At a high level, phase retrieval is equivalent to discovering a complex signal using observations that reveal only the magnitudes of (complex) linear measurements of that signal. The phases of the measurements are not revealed to us.\n\nOver the reals, this reduces to solving a system of quadratic equations, which is known to be computationally intractable to solve exactly. Fortunately, however, typical phase retrieval systems require solving quadratic systems that have nice randomized structures in the coefficients of the quadratic equations. These structures can be exploited to efficiently solve these systems. In this section we will look at various algorithms that achieve this. \n\n\\section{Motivating Applications}\nThe problem of phase retrieval arises in areas of signal processing where the phase of a signal being measured is irrevocably lost, or, at best, unreliably obtained. The recovery of the signal in such situations becomes closely linked to our ability to perform phase retrieval.\\\\\n\n\\noindent\\textbf{X-ray Crystallography} The goal in X-ray crystallography is to find the structure of a small molecule by bombarding it with X-rays from various angles and measuring the trajectories and intensities of the diffracted photons on a film. These quantities can be used to glean the internal three-dimensional structure of the electron cloud within the crystal, revealing the atomic arrangements therein. This technique has been found to be immensely useful in imaging specimens with a crystalline structure and has been historically significant in revealing the interior composition of several compounds, both inorganic and organic.\n\nA notable example is that of nucleic acids such as DNA whose structure was revealed in the seminal work of \\cite{FranklinG1953a} which immediately led Watson and Crick to propose its double helix structure. 
The reader may be intrigued by the now famous \\emph{Photo 51} which provided critical support to the helical structure theory of DNA \\citep{FranklinG1953b}. We refer the reader to the expository work of \\cite{Lucas2008} for a technical and historical account of how these discoveries came to be.\\\\\n\n\\noindent\\textbf{Transmission Electron Microscopy (TEM)} This technique uses a focused beam of electrons instead of high energy photons to image the object of study. It works best with ultra thin specimens which do not absorb electrons but let the beam of electrons pass through. The electrons interact with the atomic structure of the specimen and are captured at the other end using a sensing mechanism. TEM is capable of resolutions far higher than those possible using photonic imaging techniques due to the extremely small de-Broglie wavelength of electrons.\\\\\n\n\\noindent\\textbf{Coherent Diffraction Imaging (CDI)} This is a widely used technique for studying nanostructures such as nanotubes, nanocrystals and the like. A highly coherent beam of X-rays is made incident on the object of study and the diffracted rays are allowed to interfere to produce a diffraction pattern which is used to recover the structure of the object. A key differentiating factor in CDI is the absence of any lenses to focus light onto the specimen, as opposed to other methods such as TEM\/X-ray crystallography which use optical or electromagnetic lenses to focus the incident beam and then refocus the diffracted beam. The absence of any lenses in CDI is very advantageous since it results in aberration-free patterns. Moreover, this way the resolution of the technique depends only on the wavelength and other properties of the incident rays, rather than on the material of the lens.\n\n\\section{Problem Formulation}\nBombarding a structure with X-rays or other beams can be shown to be equivalent to taking random Fourier measurements of its internal structure. 
More specifically, let $\\bto \\in \\bC^p$ denote the vector that represents the density of electrons throughout the crystal, signifying its internal structure. Then, under simplifying regularity assumptions such as perfect periodicity, X-ray crystallography can be modeled as transforming $\\bto$ into $\\vy = X\\bto \\in \\bC^n$ where $X \\in \\bC^{n\\times p}$ has each of its rows sampled from a Fourier measurement matrix, i.e., $X_{ab}=\\exp\\br{-\\frac{2\\pi i(a-1)(b-1)}{p}}, b = 1,\\ldots,p$ for some random $a \\in [p]$ (here $i$ denotes the imaginary unit).\n\nUnfortunately, the measurements made by the sensors in the above applications are not able to observe the Fourier measurements exactly. Instead of observing the complex-valued measurements $y_k \\in \\bC$ themselves, all we observe are the real-valued magnitudes $\\abs{y_k} \\in \\bR$, the phase being lost in the signal acquisition process. A natural question is whether it is still possible to recover $\\bto$.\n\nNote that if the signal and the measurement matrices are real then $|y_k|^2=(\\vx_k^\\top\\bto)^2$. Thus, the goal is simply to recover a model $\\bt$ by solving the system of $n$ quadratic equations described above. Although the problem of solving a system of quadratic equations is intractable in general, in the special case of phase retrieval, it is possible to exploit the fact that the vectors $\\vx_k$ are sampled randomly to develop recovery algorithms that are efficient, as well as require only a small number $n$ of measurements.\n\nWe note that the state of the art in the phase retrieval literature is as yet unable to guarantee recovery from random Fourier measurements, which closely model situations arising in crystallography and other imaging applications. Current analyses mostly consider the case when the sensing matrix $X$ has random Gaussian entries. A notable exception to the above is the work of \\cite{CandesLS2015} which is able to address \\emph{coded diffraction} measurements. 
These are more relevant to what is used in practice, but are still complicated to design. For the sake of simplicity, we will only focus on the Gaussian measurement case in our discussions. This would allow us to study the algorithmic techniques used to solve the problem without getting involved in the intricacies of Fourier or coded diffraction measurements.\n\nTo formalize our problem statement, we let $\\bto \\in \\bC^p$ be an unknown $p$-dimensional signal, $X = [\\vx_1,\\vx_2,\\ldots,\\vx_n]^\\top \\in \\bC^{n \\times p}$ be a measurement matrix. Our goal is to recover $\\bto$ given $X$ and $\\abs{\\vy}$, where $\\vy = [y_1,y_2,\\ldots,y_n]^\\top = X\\bto \\in \\bC^n$ and $\\abs{y_k} = \\abs{\\vx_k^\\top\\bto}$. Our focus would be on the special case where $X_{kj} \\stackrel{\\text{i.i.d.}}{\\sim} \\cN(0,1) +i\\cdot \\cN(0,1)$ where $i^2=-1$. We would like to recover $\\bto$ efficiently using only $n = \\softO{p}$ measurements. We will study how gAM and gPGD-style techniques can be used to solve this problem. We will point to approaches using the relaxation technique in the bibliographic notes.\\\\\n\n\\noindent{\\bf Notation}: We will abuse the notation $\\vx^\\top$ to denote the conjugate transpose of a complex vector $\\vx \\in \\bC^p$, something that is usually denoted by $\\vx^\\ast$. A random vector $\\vx = \\va + i\\vb \\in \\bC^n$ will be said to be distributed according to the standard Gaussian distribution over $\\bC^n$, denoted as $\\cN_\\bC(\\vzero,I)$ if $\\va,\\vb \\in \\bR^n$ are independently distributed as standard (real valued) Gaussian vectors, i.e., $\\va,\\vb \\sim \\cN(\\vzero,I_n)$.\n\n\\section{Phase Retrieval via Alternating Minimization}\nOne of the most popular approaches for solving the phase retrieval problem is a simple application of the gAM approach, first proposed by \\cite{GerchbergO1972} more than four decades ago. 
The intuition behind the approach is routine -- there exist two parameters of interest in the problem -- the phase of the data points i.e., $y_k\/\\abs{y_k}$, and the true signal $\\bto$. Just as before, we observe a two-way connection between these parameters. Given the true phase values of the points $\\phi_k = y_k\/|y_k|$, the signal $\\bto$ can be recovered simply by solving a system of linear equations: $\\phi_k \\cdot|y_k|=\\vx_k^\\top\\bt, k = 1,\\ldots,n$. On the other hand, given $\\bto$, estimating the phase of the points is straightforward as $\\phi_k=(\\vx_k^\\top\\bto)\/|\\vx_k^\\top\\bto|$.\n\nGiven the above, a gAM-style approach naturally arises: we alternately estimate $\\phi_k$ and $\\bto$. More precisely, at the $t$-th step, we first estimate $\\phi_k^t$ using $\\btp$ as $\\phi_k^t=(\\vx_k^\\top\\btp)\/|\\vx_k^\\top\\btp|$ and then update our estimate of $\\bt$ by solving a least squares problem $\\btt=\\arg\\min_{\\bt} \\sum_k |\\phi_k^t |y_k| - \\vx_k^\\top\\bt|^2$ over complex variables. Algorithm~\\ref{algo:phret_am} presents the details of this \\emph{Gerchberg-Saxton Alternating Minimization} (GSAM) method. To avoid correlations, the algorithm performs these alternations on distinct sets of points at each time step. 
These disjoint sets can be created by sub-sampling the overall available data.\n\n\\begin{algorithm}[t]\n\t\\caption{Gerchberg-Saxton Alternating Minimization (GSAM)}\n\t\\label{algo:phret_am}\n\t\\begin{algorithmic}[1]\n\t\t\t\\REQUIRE Measurement matrix $X\\in \\bC^{n \\times p}$, observed response magnitudes $|\\vy|\\in \\bR_+^n$, desired accuracy $\\epsilon$\n\t\t\t\\ENSURE A signal $\\bth \\in \\bC^p$\n\t\t\t\\STATE Set $T \\leftarrow \\log 1\/\\epsilon$\n\t\t\t\\STATE Partition $n$ data points into $T+1$ sets $S_0,S_1,\\ldots,S_T$\n\t\t\t\\STATE $\\bt^0 \\leftarrow \\text{eig}\\br{\\frac{1}{|S_0|}\\sum_{k \\in S_0} |y_k|^2\\cdot\\vx_k\\vx_k^\\top, 1}$ \\hfill \/\/\\emph{Leading eigenvector}\n\t\t\t\\FOR{$t = 1, 2, \\ldots, T$}\n\t\t\t\t\\STATE Phase Estimation: $\\phi_k = \\vx_k^\\top \\btp\/|\\vx_k^\\top \\btp|$, for all $k \\in S_t$\n\t\t\t\t\\STATE Signal Estimation: $\\btt = \\arg\\min_{\\bt \\in \\bC^p}\\sum_{k \\in S_t} ||y_k|\\cdot\\phi_k- \\vx_k^\\top\\bt|^2$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\bt^T$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIn their original work, \\cite{GerchbergO1972} proposed to use a random vector $\\bt^0$ for initialization. However, the recent work of \\cite{NetrapalliJS13} demonstrated that a more careful initialization, in particular the largest eigenvector of $M=\\sum_k |y_k|^2\\cdot\\vx_k \\vx_k^\\top$, is beneficial. 
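Both the spectral initialization and the alternations of Algorithm~\ref{algo:phret_am} fit in a few lines of NumPy. The sketch below is our own illustration under the Gaussian measurement model: for brevity it reuses all $n$ samples in every iteration instead of performing the sample splitting of Algorithm~\ref{algo:phret_am}, writes the measurements with a plain (unconjugated) transpose, which is equivalent in distribution, and measures error only up to the unrecoverable global phase.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 20

# Planted unit-norm complex signal and Gaussian measurements; only magnitudes are observed.
theta_star = rng.standard_normal(p) + 1j * rng.standard_normal(p)
theta_star /= np.linalg.norm(theta_star)
A = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
y_abs = np.abs(A @ theta_star)

# Spectral initialization: leading eigenvector of (1/n) sum_k |y_k|^2 x_k x_k^*.
M = (A.conj().T * y_abs**2) @ A / n
theta = np.linalg.eigh(M)[1][:, -1]       # eigh sorts eigenvalues in ascending order

# Gerchberg-Saxton alternations: phase estimation, then a least-squares signal estimate.
for _ in range(50):
    phase = A @ theta
    phase = phase / np.abs(phase)
    theta = np.linalg.lstsq(A, y_abs * phase, rcond=None)[0]

# Error modulo the global phase: align theta to theta_star before comparing.
c = np.vdot(theta, theta_star)
dist = np.linalg.norm(theta * c / np.abs(c) - theta_star)
```

With $n/p$ this large, the initializer already lands close to $\bto$ and the alternations then drive the (phase-aligned) error down to essentially machine precision.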
Such a spectral initialization leads to an initial solution that is already at most a (small) constant distance away from the optimal solution $\\bto$.\n\nAs we have seen to be the case with most gAM-style approaches, including the EM algorithm, this approximately optimal initialization is crucial to allow the alternating minimization procedure to take over and push the iterates toward the globally optimal solution.\\\\\n\n\\noindent\\textbf{Notions of Convergence}: Before we proceed to give convergence guarantees for the GSAM algorithm, note that an exact recovery of $\\bto$ is impossible, since phase information is totally lost. More specifically, two signals $\\bto$ and $e^{i\\theta}\\cdot\\bto$ for some $\\theta \\in \\bR$ will generate exactly the same responses when phase information is eliminated. Thus, the best we can hope for is to recover $\\bto$ up to a phase shift. There are several ways of formalizing notions of convergence modulo a phase shift. We also note that complete proofs of the convergence results will be tedious and hence we will only give proof sketches for them.\n\n\\section{A Phase Retrieval Guarantee for GSAM}\nWhile the GSAM approach has been known for decades, its convergence properties and rates were not understood until the work of \\cite{NetrapalliJS13} who analyzed the heuristic (with spectral initialization as described in Algorithm~\\ref{algo:phret_am}) for Gaussian measurement matrices. In particular, they proved that GSAM recovers an $\\epsilon$-optimal estimate of $\\bto$ in roughly $T = \\log(1\/\\epsilon)$ iterations so long as the number of measurements satisfies $n = \\softOm{p \\log^3 p\/\\epsilon}$.\n\nThis result is established by first proving a linear convergence guarantee for the GSAM procedure assuming a sufficiently nice initialization. 
Next, it is shown that the spectral initialization described in Algorithm~\ref{algo:phret_am} does indeed satisfy this condition.\\\n\n\noindent{\bf Linear Convergence}: For this part, it is assumed that the GSAM procedure has been initialized at $\bt^0$ such that $\norm{\bt^0 - \bto}_2 \leq \frac{1}{100}$. The following result (which we state without proof) shows that the alternating procedure hereafter ensures a linear rate of convergence.\n\n\begin{theorem}\n\label{thm:phret_am}\nLet $y_k= \vx_k^\top\bto$ for $k = 1,\ldots,n$ where $\vx_k \sim \cN_\bC(\vzero,I)$ and $n \geq C\cdot p \log^3(p\/\epsilon)$ for a suitably large constant $C$. If the initialization satisfies $\norm{\bt^0 - \bto}_2 \leq \frac{1}{100}$, then with probability at least $1-1\/n^2$, GSAM outputs an $\epsilon$-accurate solution $\norm{\bt^T - \bto}_2 \leq \epsilon$ in no more than $T = \bigO{\log\frac{\norm{\vy}_2}{\epsilon}}$ steps.\n\end{theorem}\n\nIn practice, one can use fast approximate solvers such as the conjugate gradient method to solve the least squares problem at each iteration. These take $\bigO{np\log(1\/\epsilon)}$ time to solve a least squares instance. Since $n = \softO{p\log^3p}$ samples are enough, the GSAM algorithm operates with computation time at most $\softO{p^2\log^3(p\/\epsilon)}$.\\\n\n\noindent{\bf Initialization}: We now establish the utility of the initialization step. The proof hinges on a simple observation. Consider the random variable $Z = |y|^2\cdot\vx\vx^\top$ corresponding to a randomly chosen vector $\vx \sim \cN_\bC(\vzero, I)$ and $y = \vx^\top\bto$. For the sake of simplicity, let us assume that $\norm{\bto}_2 = 1$. 
Since $\\vx = [x_1,x_2,\\ldots,x_p]^\\top$ is a spherically symmetric vector in the complex space $\\bC^p$, the random variable $\\vx^\\top\\bto$ has an identical distribution as the vector $e^{i\\theta}\\cdot\\vx^\\top\\ve_1$ where $\\ve_1 = [1,0,0,\\ldots,0]^\\top$, for any $\\theta \\in \\bR$. Using the above, it is easy to see that\n\\[\n\\E{|y|^2\\cdot\\vx\\x^\\top} = \\E{|x_1|^2\\cdot\\vx\\x^\\top} = 4\\cdot\\ve_1\\ve_1^\\top + 4\\cdot I\n\\]\nUsing a slightly more tedious calculation involving unitary transformations, we can extend the above to show that, in general,\n\\[\n\\E{|y|^2\\cdot\\vx\\x^\\top} = 4\\cdot\\bto(\\bto)^\\top + 4\\cdot I =: D\n\\]\nThe above clearly indicates that the largest eigenvector of the matrix $D$ is along $\\bto$. Now notice that the matrix whose leading eigenvector we are interested in during initialization,\n\\[\nS := \\frac{1}{|S_0|}\\sum_{k \\in S_0} |y_k|^2\\cdot\\vx_k\\vx_k^\\top,\n\\]\nis simply an empirical estimate to the expectation $\\E{|y|^2\\cdot\\vx\\x^\\top} = D$. Indeed, we have $\\E{S} = D$. Thus, it is reasonable to expect that the leading eigenvector of $S$ would also be aligned to $\\bto$. We can make this statement precise using results from the concentration of finite sums of self-adjoint independent random matrices from \\citep{Tropp2012}.\n\n\\begin{theorem}\n\\label{thm:phret_init}\nThe spectral initialization method (Step 3 of Algorithm~\\ref{algo:phret_am}), with probability at least $1 - 1\/|S_0|^2 \\leq 1 - 1\/p^2$, ensures an initialization $\\bt^0$ such that $\\norm{\\bt^0 - \\bto}_2 \\leq c$ for any constant $c > 0$, so long as it is executed with a randomly chosen set $S_0$ of data points of size $|S_0| \\geq C\\cdot p\\log p$ for a suitably large constant $C$ depending on $c$.\n\\end{theorem}\n\\begin{proof}\nTo make the analysis simple, we will continue to assume that $\\bto = e^{i\\theta}\\cdot\\ve_1$ for some $\\theta \\in \\bR$. 
We can use Bernstein-style results for matrix concentration (for instance, see \\cite[Theorem 1.5]{Tropp2012}) to show that for any chosen constant $c > 0$, if $n \\geq C\\cdot p\\log p$ for a large enough constant $C$ that depends on the constant $c$, then with probability at least $1 - 1\/|S_0|^2$, we have\n\\[\n\\norm{S - D}_2 \\leq c\n\\]\nNote that the norm being used above is the spectral\/operator norm on matrices. Given this, it is possible to get a handle on the leading eigenvalue of $S$. Observe that since $\\bt^0$ is the leading eigenvector of $S$, and since we have assumed $\\bto = e^{i\\theta}\\cdot\\ve_1$, we have\n\\begin{align*}\n\\textstyle\\abs{\\ip{\\bt^0}{S\\bt^0}} &\\geq \\abs{\\ip{\\bto}{S\\bto}}\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &\\geq \\abs{\\ip{\\bto}{D\\bto}} - \\abs{\\ip{\\bto}{(S-D)\\bto}}\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &\\geq 8 - \\norm{S-D}_2\\norm{\\bto}_2^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &\\geq 8 - c\n\\end{align*}\nOn the other hand, using the triangle inequality again, we have\n\\begin{align*}\n\\textstyle\\abs{\\ip{\\bt^0}{S\\bt^0}} &= \\textstyle\\abs{\\ip{\\bt^0}{(S-D)\\bt^0} + \\ip{\\bt^0}{D\\bt^0}}\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &\\leq \\textstyle\\norm{S-D}_2\\norm{\\bt^0}_2^2 + 4\\abs{\\bt^0_1}^2 + 4\\norm{\\bt^0}_2^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t &= \\textstyle c + 4\\abs{\\bt^0_1}^2 + 4,\n\\end{align*}\nwhere in the second step we have used $D = 4\\cdot\\ve_1\\ve_1^\\top + 4\\cdot I$. The two opposing inequalities on the quantity $\\abs{\\ip{\\bt^0}{S\\bt^0}}$ give us $\\abs{\\bt^0_1}^2 \\geq 1 - c\/2$. 
Noticing that $\\abs{\\bt^0_1}^2 = \\abs{\\ip{\\bt^0}{\\ve_1}} = \\abs{\\ip{\\bt^0}{\\bto}}$ then establishes\n\\[\n\\textstyle\\norm{\\bt^0-\\bto}_2^2 = 2(1 - \\abs{\\ip{\\bt^0}{\\bto}}) \\leq 2(1 - \\sqrt{1-c\/2}) \\leq c\/2\n\\]\nwhich finishes the proof.\n\\end{proof}\n\n\\section{Phase Retrieval via Gradient Descent}\nThere also exists a relatively straightforward reformulation of the phase retrieval problem that allows a simple gPGD-style approach to be applied. The recent work of \\cite{CandesLS2015} did this by reformulating the phase retrieval problem in terms of the following objective\n\\[\nf(\\bt)=\\sum_{k=1}^n\\br{|y_k|^2 - |\\vx_k^\\top \\bt|^2}^2,\n\\]\nand then performing gradient descent (over complex variables) on this unconstrained optimization problem. In the same work, this technique was named the\\emph{Wirtinger's flow} algorithm, presumably as a reference to the notions of Wirtinger derivatives, and shown to offer provable convergence to the global optimum, just like the Gerchberg-Saxton method, when initialization is performed using a spectral method. Algorithm~\\ref{algo:phret_wf} outlines the Wirtinger's Flow (WF) algorithm. Note that WF offers accelerated update times. Note that unlike the GSAM approach, the WF algorithm does not require sub-sampling but needs to choose a step size parameter instead. 
\n\n\\begin{algorithm}[t]\n\t\\caption{Wirtinger's Flow for Phase Retrieval (WF)}\n\t\\label{algo:phret_wf}\n\t\\begin{algorithmic}[1]\n\t\t\t\\REQUIRE Measurement matrix $X \\in \\bC^{n \\times p}$, observed response magnitudes $|\\vy| \\in \\bR_+^n$, step size $\\eta$\n\t\t\t\\ENSURE A signal $\\bth \\in \\bC^p$\n\t\t\t\\STATE $\\bt^0 \\leftarrow \\text{eig}(\\frac{1}{n}\\sum_{k=1}^n |y_k|^2\\cdot\\vx_k\\vx_k^\\top, 1)$\\hfill\/\/{\\em Leading eigenvector}\n\t\t\t\\FOR{$t = 1, 2,\\dots$}\n\t\t\t\\STATE $\\btt \\leftarrow \\btp - 2\\eta\\cdot\\sum_k(|\\vx_k^\\top\\btp|^2-|y_k|^2)\\vx_k\\vx_k^\\top\\btp$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\btt$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{A Phase Retrieval Guarantee for WF}\nSince the Wirtinger's Flow algorithm uses an initialization technique that is identical to that used by the Gerchberg-Saxton method, we can straightaway apply Theorem~\\ref{thm:phret_init} to assure ourselves that with probability at least $1-1\/n^2$, we have $\\norm{\\bt^0 - \\bto}_2 \\leq c$ for any constant $c > 0$. Starting from such an initial point, \\cite{CandesLS2015} argue that each step of the gradient descent procedure decreases the distance to optima by at least a constant (multiplicative) factor. This allows a linear convergence result to be established for the WF procedure, similar to the GSAM approach, however, with each iteration being much less expensive, being a gradient descent step, rather than the solution to a complex-valued least squares problem. \n\\begin{theorem}\n\\label{thm:phret_wf}\nLet $y_k= \\vx_k^\\top\\bto$ for $k = 1,\\ldots,n$ where $\\vx_k \\sim \\cN_\\bC(\\vzero,I)$. Also, let $n \\geq C\\cdot p \\log p$ for a suitably large constant $C$. 
If the initialization satisfies $\norm{\bt^0 - \bto}_2 \leq \frac{1}{100}$, then with probability at least $1-1\/p^2$, WF outputs an $\epsilon$-accurate solution $\norm{\bt^T - \bto}_2 \leq \epsilon$ in no more than $T = \bigO{\log\frac{\norm{\vy}_2}{\epsilon}}$ steps.\n\end{theorem}\n \cite{CandesLS2015} also studied a coded diffraction pattern (CDP) model which uses measurements $X$ that are more ``practical'' for X-ray crystallography-style applications and are based on a combination of random multiplicative perturbations of a standard Fourier measurement. For such measurements, \cite{CandesLS2015} provided a result similar to Theorem~\ref{thm:phret_wf} but with a slightly inferior rate of convergence: the new procedure is able to guarantee an $\epsilon$-optimal solution only after $T = \bigO{p\cdot\log\frac{\norm{\vy}_2}{\epsilon}}$ steps, i.e., a multiplicative factor of $p$ larger than that required by the WF algorithm for Gaussian measurements.\n\n\section{Bibliographic Notes}\nThe phase retrieval problem has been studied extensively in the X-ray crystallography literature where the focus is mostly on studying Fourier measurement matrices \citep{CandesL2014}. For Gaussian measurements, initial results were obtained using a technique called \emph{Phase-lift}, which essentially viewed the quadratic equation $|y_k|^2=|\vx_k^\top\bto|^2$ as the linear measurement of a rank-one matrix, i.e., $|y_k|^2= \ip{\vx_k\vx_k^\top}{W^\ast}$ where $W^\ast = \bto(\bto)^\top$.\n\nThis rank-one constraint was then replaced by a nuclear norm constraint and the resulting problem was solved as a semi-definite program (SDP). This technique was shown to achieve the information-theoretically optimal sample complexity of $n = \Om{p}$ (recall that the GSAM and WF techniques require $n = \Om{p \log p}$ or more). However, the running time of the Phase-lift algorithm is prohibitive at $O(np^2+p^3)$. 
\n\nIn addition to the standard phase retrieval problem, several works \citep{NetrapalliJS13, JaganathanOH2013} have also studied the sparse phase retrieval problem where the goal is to recover a sparse signal $\bto \in \bC^p$, with $\norm{\bto}_0 \leq s \ll p$, using only magnitude measurements $|y_k|=|\vx_k^\top\bto|$. The best known results for such problems require $n \geq s^3 \log p$ measurements for $s$-sparse signals. This is significantly worse than the information-theoretically optimal $\bigO{s\log p}$ number of measurements. However, \cite{JaganathanOH2013} showed that for a phase-lift style technique, one cannot hope to solve the problem using fewer than $\bigO{s^2\log p}$ measurements. \n\chapter{Robust Linear Regression}\n\label{chap:rreg}\n\nIn this chapter, we will look at the problem of robust linear regression. Simply put, it is the task of performing linear regression in the presence of adversarial \emph{outliers} or \emph{corruptions}. Let us take a look at some motivating applications. \n\n\section{Motivating Applications}\n\label{sec:rreg-intro}\nThe problem of regression has widespread application in signal processing, financial and economic analysis, as well as machine learning and data analytics. In most real-life applications, the data presented to the algorithm has some amount of noise in it. However, at times, data may be riddled with missing values and corruptions, possibly even of a malicious nature. It is useful to design algorithms that can assure stable operation even in the presence of such corruptions.\\\n\n\noindent\textbf{Face Recognition} The task of face recognition is widely useful in areas such as biometrics and automated image annotation. 
In biometrics, a fundamental problem is to identify whether a new face image belongs to a registered individual or not. This problem can be cast as a regression problem by trying to fit various features of the new image to corresponding features of existing images of the individual in the registered database. More specifically, assume that images are represented as $n$-dimensional feature vectors, say using simple pixel-based features. Also assume that there already exist $p$ images of the person in the database.\n\nOur task is to represent the new image $\vx^t \in \bR^n$ in terms of the database images $X = [\vx_1,\ldots,\vx_p] \in \bR^{n \times p}$ of that person. A nice way to do this is to perform a linear interpolation as follows\n\[\n\min_{\bt \in \bR^p} \norm{\vx^t - X\bt}_2^2 = \sum_{i=1}^n(\vx^t_i - X^i\bt)^2.\n\]\nIf the person is genuine, then there will exist a combination $\bto$ such that for all $i$, we have $\vx^t_i \approx X^i\bto$ i.e., all features can be faithfully reconstructed. Thus, the fit will be nice and we will admit the person. However, the same becomes problematic if the new image has occlusions. For example, the person may be genuine but wearing a pair of sunglasses or sporting a beard. In such cases, some of the pixels $\vx^t_i$ will appear corrupted, cause us to get a poor fit, and result in a false alarm. More specifically,\n\[\n\vx^t_i = X^i\bto + \vb^\ast_i\n\]\nwhere $\vb^\ast_i = 0$ on uncorrupted pixels but can take abnormally large and unpredictable values for corrupted pixels, such as those corresponding to the sunglasses. Being able to still correctly identify the person involves computing the least squares fit in the presence of such corruptions. The challenge is to do this without requiring any manual effort to identify the locations of the corrupted pixels. 
Figure~\\ref{fig:rreg-facerec} depicts this problem setting visually.\\\\\n\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{facerec.pdf}\n\\caption[Face Recognition under Occlusions]{A corrupted image $\\vy$ can be interpreted as a combination of a clean image $\\vy^\\ast$ and a corruption mask $\\vb^\\ast$ i.e., $\\vy = \\vy^\\ast + \\vb^\\ast$. The mask encodes the locations of the corrupted pixels as well as the values of the corruptions. The clean image can be (approximately) recovered as an affine combination of existing images in a database as $\\vy^\\ast \\approx X\\bto$. Face reconstruction and recognition in such a scenario constitutes a robust regression problem. Note that the corruption mask $\\vb^\\ast$ is sparse since only a few pixels are corrupted. Images courtesy the Yale Face Database B.}%\n\\label{fig:rreg-facerec}\n\\end{figure}\n\n\\noindent\\textbf{Time Series Analysis} This is a problem that has received much independent attention in statistics and signal processing due to its applications in modeling sequence data such as weather data, financial data, and DNA sequences. However, the underlying problem is similar to that of regression. A sequence of timestamped observations $\\bc{y_t}$ for $t = 0, 1, \\ldots$ are made which constitute the time series. 
Note that the ordering of the samples is critical here.\n\nThe popular auto-regressive (AR) time series model uses a generative mechanism wherein the observation $y_t$ at time $t$ is obtained as a fixed linear combination of $p$ previous observations plus some noise.\n\[\ny_t = \sum_{i=1}^p\bto_iy_{t-i} + \eta_t\n\]\nWe can cast this into a regression framework by constructing covariates $\vx_t := \bs{y_{t-1},y_{t-2},\ldots,y_{t-p}}^\top \in \bR^p$ and rewriting the above as\n\[\ny_t = \vx_t^\top\bto + \eta_t,\n\]\nwhere $\bto \in \bR^p$ is an unknown model and $\eta_t$ is Gaussian noise generated independently of previous observations, i.e., $\eta_t | \bc{y_{t-1},y_{t-2},\ldots} \sim \cN(0,\sigma^2)$. The number $p$ is known as the \emph{order} of the time series and captures how many historical observations affect the current one.\n\nIt is not uncommon to encounter situations where the time series experiences gross corruptions. Examples may include observation or sensing errors or unmodeled factors such as dips and upheavals in stock prices due to political or socio-economic events. Thus, we have\n\[\ny_t = \vx_t^\top\bto + \eta_t + \vb^\ast_t\n\]\nwhere $\vb^\ast_t$ can take unpredictable values for corrupted time instances and $0$ otherwise. In the time series literature, two corruption models are popular. In the \emph{additive} model, observations at time steps subsequent to a corruption are constructed using the uncorrupted values, i.e., $\vb^\ast_t$ does not influence the values $y_\tau$ for $\tau > t$. Of course, observers may detect the corruption at time $t$ but the underlying time series goes on as though nothing happened. \n\nHowever, in the \emph{innovative} model, observations at time instances subsequent to a corruption use the corrupted value, i.e., $\vb^\ast_t$ is involved in constructing the values $y_\tau$ for $\tau = t+1,\ldots,t+p$. 
Innovative corruptions are simpler to handle as although the observation at the moment of the corruption, i.e., $y_t$, appears to deviate from that predicted by the base model, i.e., $\vx_t^\top\bto$, subsequent observations fall in line with the predictions once more (unless there are more corruptions down the line). In the additive model, however, observations can seem to deviate from the predictions of the base model for several iterations. In particular, $y_\tau$ can disagree with $\vx_\tau^\top\bto$ for times $\tau = t,t+1,\ldots,t+p$, even if there is only a single corruption at time $t$.\n\nA time series analysis technique which seeks to study the ``usual'' behavior of the model involved, such as stock prices, might wish to exclude such aberrations, whether additive or innovative. However, it is unreasonable, as well as error-prone, to expect manual exclusion of such corruptions, which motivates the problem of robust time series analysis. Note that time series analysis is a more challenging problem than regression since the ``covariates'' in this case $\vx_t, \vx_{t+1},\ldots$ are heavily correlated with each other as they share a large number of coordinates, whereas in regression they are usually assumed to be independent.\n\n\section{Problem Formulation}\nLet us recall the regression model studied in \S~\ref{sec:em-pml} and \S~\ref{sec:spreg-prob-form}. In a linear regression setting, responses are modeled as a (possibly sparse) linear function of the data features to which some additive noise is added, i.e., we have\n\[\ny_i = \vx_i^\top\bto + \eta_i.\n\]\nHowever, notice that in previous discussions, we termed the noise benign, and even found it appropriate to assume it could be modeled as a Gaussian random variable. This is usually acceptable when the noise is expected to be non-deliberate or the result of small and unstructured perturbations. 
However, the outliers or corruptions in the examples that we just saw seem to defy these assumptions.\n\nIn the face recognition case, salt and pepper noise due to sensor disturbances, or lighting changes, can be considered benign noise. However, it is improper to consider structured occlusions such as sunglasses or beards as benign, as some of them may even be a part of a malicious attempt to fool the system. Even in the time series analysis setting, there might be malicious forces at play in the stock market which cause stock prices to significantly deviate from their usual variations and trends.\n\nSuch corruptions can severely disrupt the modeling process and hamper our understanding of the system. To handle them, we will need a more refined model that distinguishes between benign and unstructured errors, and errors that are deliberate, malicious and structured. The \emph{robust regression} model best describes this problem setting:\n\[\ny_i = \vx_i^\top\bto + \eta_i + \vb^\ast_i,\n\]\nwhere the variable $\vb^\ast_i$ encodes the \emph{additional} corruption introduced into the response. Our goal is to take a set of $n$ (possibly) corrupted data points $(\vx_i,y_i)_{i=1}^n$ and recover the underlying parameter vector $\bto$, i.e.,\n\begin{equation}\n\underset{\substack{\bt\in\bR^p, \vb\in\bR^n\\\norm{\vb}_0 \leq k}}{\min}\ \norm{\vy - X\bt - \vb}_2^2.\n\tag*{(ROB-REG)}\label{eq:rreg}\n\end{equation}\n\nThe variables $\vb^\ast_i$ can be unbounded in magnitude and of arbitrary sign. However, we assume that only a few data points are corrupted, i.e., the vector $\vb^\ast = [\vb^\ast_1,\vb^\ast_2,\ldots,\vb^\ast_n]$ is sparse: $\norm{\vb^\ast}_0 \leq k$. Indeed, it is impossible\elink{exer:rreg-impossible} to recover the model $\bto$ if more than half the points are corrupted, i.e., $k \geq n\/2$. A worthy goal is to develop algorithms that can tolerate as large a value of $k$ as possible. 
We will study how two non-convex optimization techniques, namely gAM and gPGD, can be used to solve this problem. We point to other approaches, as well as extensions such as robust sparse recovery, in the bibliographic notes.\n\n\\section{Robust Regression via Alternating Minimization}\nThe key to applying alternating minimization to the robust regression problem is to identify the two critical parameters in this problem. Let us assume that there is no Gaussian noise in the model i.e., $\\vy = X\\bto + \\vb^\\ast$ where $\\vb^\\ast$ is $k$-sparse but can contain unbounded entries in its support.\n\n\\begin{algorithm}[t]\n\t\\caption{AltMin for Robust Regression (AM-RR)}\n\t\\label{algo:torrent}\n\t\\begin{algorithmic}[1]\n\t\t\t\\REQUIRE Data $X, \\vy$, number of corruptions $k$\n\t\t\t\\ENSURE An accurate model $\\bth \\in \\bR^p$\n\t\t\t\\STATE $\\bt^1 \\leftarrow \\vzero$, $S_1 = [1:n-k]$\n\t\t\t\\FOR{$t = 1, 2, \\ldots$}\n\t\t\t\t\\STATE $\\btn \\leftarrow \\arg\\min_{\\bt \\in \\bR^p}\\ \\sum_{i \\in S_t}(y_i - \\vx_i^\\top\\bt)^2$\n\t\t\t\t\\STATE $S_{t+1} \\leftarrow \\arg\\min_{|S| = n-k}\\ \\sum_{i \\in S}(y_i - \\vx_i^\\top\\btn)^2$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\btt$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIt can be seen that $\\bto$ and $\\bar{\\supp(\\vb^\\ast)} =: S_\\ast$, i.e., the true model and the locations of the uncorrupted points, are the two most crucial elements since given one, finding the other is very simple. Indeed, if someone were to magically hand us $\\bto$, it is trivial to identify $S_\\ast$ by simply identifying data points where $y_i = \\vx_i^\\top\\bto$. On the other hand, given $S_\\ast$, it is simple to obtain $\\bto$ by simply solving a least squares regression problem on the set of data points in the set $S_\\ast$. 
Thus we can rewrite the robust regression problem as\n\\begin{equation}\n\\underset{\\substack{\\bt \\in \\bR^p\\\\|S| = n-k}}{\\min}\\ \\norm{\\vy_S - X^S\\bt}_2^2\n\\tag*{(ROB-REG-2)}\\label{eq:rreg-2}\n\\end{equation}\nThis gives us a direct way of applying the gAM approach to this problem as outlined in the AM-RR algorithm (Algorithm~\\ref{algo:torrent}). The work of \\cite{BhatiaJK2015} showed that this technique and its variants offer scalable solutions to the robust regression problem.\n\nIn order to execute the gAM protocol, AM-RR maintains a model estimate $\\btt$ and an \\emph{active set} $S_t \\subset [n]$ of points that are deemed clean at the moment. At every time step, true to the gAM philosophy, AM-RR first fixes the active set and updates the model, and then fixes the model and updates the active set. The first step turns out to be nothing but least squares over the active set. For the second step, it is easy to see that the optimal solution is achieved simply by taking the $n-k$ data points with the smallest residuals (by magnitude) with respect to the updated model and designating them to be the active set.\n\n\\section{A Robust Recovery Guarantee for AM-RR}\nTo simplify the analysis, we will assume that there is no Gaussian noise in the model i.e., $\\vy = X\\bto + \\vb^\\ast$ where $\\vb^\\ast$ is $k$-sparse. 
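Before turning to the analysis, the two alternations of Algorithm~\ref{algo:torrent} can be sketched directly in NumPy. The synthetic instance below (sizes, corruption magnitudes, iteration count) is our own illustrative choice, not the reference implementation of \cite{BhatiaJK2015}.

```python
import numpy as np

def am_rr(X, y, k, T=30):
    """Sketch of AM-RR: alternately fit least squares on the active set,
    then re-select the n-k points with smallest absolute residual."""
    n, p = X.shape
    active = np.arange(n - k)                 # S_1 = [1 : n-k]
    theta = np.zeros(p)
    for _ in range(T):
        theta = np.linalg.lstsq(X[active], y[active], rcond=None)[0]
        active = np.argsort(np.abs(y - X @ theta))[: n - k]
    return theta

rng = np.random.default_rng(0)
n, p, k = 500, 20, 50
X = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p)
y = X @ theta_star
corrupt = rng.choice(n, size=k, replace=False)
y[corrupt] += rng.uniform(5, 50, size=k)      # large, arbitrarily placed corruptions
theta_hat = am_rr(X, y, k)
print(np.linalg.norm(theta_hat - theta_star))  # small once the clean set is found
```

In the noiseless setting, once the active set coincides with the clean points $S_\ast$, the least squares step recovers $\bto$ exactly.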
To present the analysis of the AM-RR algorithm, we will need the notions of subset strong convexity and smoothness.\n\\begin{definition}[Subset Strong Convexity\/Smoothness Property \\citep{BhatiaJK2015}]\n\\label{defn:ssc-sss}\nA matrix $X \\in \\bR^{n \\times p}$ is said to satisfy the $\\alpha_k$-subset strong convexity (SSC) property and the $\\beta_k$-subset smoothness property (SSS) of order $k$ if for all sets $S \\subset [n]$ of size $|S| \\leq k$, we have, for all $\\vv \\in \\bR^p$,\n\\begin{center}\n\t$\\alpha_k\\cdot\\norm{\\vv}_2^2 \\leq \\norm{X^S\\vv}_2^2 \\leq \\beta_k\\cdot\\norm{\\vv}_2^2$.\n\\end{center}\n\\end{definition}\nThe SSC\/SSS properties require that the design matrix formed by taking any subset of $k$ points from the data set of $n$ points act as an approximate isometry on all $p$ dimensional points. These properties are related to the traditional RSC\/RSS properties and it can be shown (see for example, \\citep{BhatiaJK2015}) that RIP-inducing distributions over matrices (see \\S~\\ref{sec:rip-ensure}) also produce matrices that satisfy the SSC\/SSS properties, with high probability. However, it is interesting to note that whereas the RIP definition is concerned with column subsets of the design matrix, SSC\/SSS concerns itself with row subsets.\n\nThe nature of the SSC\/SSS properties is readily seen to be very appropriate for AM-RR to succeed. Since the algorithm uses only a subset of data points to estimate the model vector, it is essential that smaller subsets of data points of size $n-k$ (in particular the true subset of clean points $S_\\ast$) also allow the model to be recovered. This is equivalent\\elink{exer:rreg-ssc} to requiring that the design matrices formed by smaller subsets of data points not identify distinct model vectors. This is exactly what the SSC property demands.\n\nGiven this, we can prove the following convergence guarantee for the AM-RR algorithm. 
The reader would notice that the algorithm, despite being a gAM-style algorithm, does not require precise and careful initialization of the model and active set. This is in stark contrast to other gAM-style approaches we have seen so far, namely EM and AM-MC, both of which demanded careful initialization.\n\n\\begin{theorem}\nLet $X \\in \\bR^{n \\times p}$ satisfy the SSC property at order $n-k$ with parameter $\\alpha_{n-k}$ and the SSS property at order $k$ with parameter $\\beta_k$ such that $\\beta_k\/\\alpha_{n-k} < \\frac{1}{\\sqrt 2 + 1}$. Let $\\bto \\in \\bR^p$ be an arbitrary model vector and $\\vy = X\\bto + \\vb^\\ast$ where $\\norm{\\vb^\\ast}_0 \\leq k$ is a sparse vector of possibly unbounded corruptions. Then AM-RR yields an $\\epsilon$-accurate solution $\\norm{\\btt - \\bto}_2 \\leq \\epsilon$ in no more than $\\bigO{\\log\\frac{\\norm{\\vb^\\ast}_2}{\\epsilon}}$ steps.\n\\end{theorem}\n\\begin{proof}\nLet $\\vr^t = \\vy - X\\btt$ denote the vector of residuals at time $t$ and let $C_t = (X^{S_t})^\\top X^{S_t}$ and $S_\\ast = \\bar{\\supp(\\vb^\\ast)}$. 
Then the model update step of AM-RR solves a least squares problem ensuring\n\begin{align*}\n\btn &= C_t^{-1}(X^{S_t})^\top\vy_{S_t}\\\n&= C_t^{-1}(X^{S_t})^\top(X^{S_t}\bto + \vb^\ast_{S_t})\\\n&= \bto + C_t^{-1}(X^{S_t})^\top\vb^\ast_{S_t}.\n\end{align*}\nThe residuals with respect to this new model can be computed as\n\[\n\vr^{t+1} = \vy - X\btn = \vb^\ast - XC_t^{-1}(X^{S_t})^\top\vb^\ast_{S_t}.\n\]\nHowever, the active-set update step selects the set with the smallest residuals, in particular, ensuring that\n\[\n\norm{\vr^{t+1}_{S_{t+1}}}^2_2 \leq \norm{\vr^{t+1}_{S_\ast}}^2_2.\n\]\nPlugging the expression for $\vr^{t+1}$ into both sides of this inequality, using $\vb^\ast_{S_\ast} = \vzero$ and the fact that for any matrix $X$ and vector $\vv$ we have\n\[\n\norm{X^S\vv}_2^2 - \norm{X^T\vv}_2^2 = \norm{X^{S\backslash T}\vv}_2^2 - \norm{X^{T\backslash S}\vv}_2^2 \leq \norm{X^{S\backslash T}\vv}_2^2,\n\]\ngives us, upon some simplification,\n\begin{align*}\n\norm{\vb^\ast_{S_{t+1}}}_2^2 \leq{}& \norm{X^{S_\ast\backslash S_{t+1}}C_t^{-1}(X^{S_t})^\top\vb^\ast_{S_t}}_2^2\\\n&{}+ 2(\vb^\ast_{S_{t+1}})^\top X^{S_{t+1}}C_t^{-1}(X^{S_t})^\top\vb^\ast_{S_t}\\\n\leq{}& \frac{\beta_k^2}{\alpha_{n-k}^2}\norm{\vb^\ast_{S_t}}_2^2 + 2\cdot\frac{\beta_k}{\alpha_{n-k}}\cdot\norm{\vb^\ast_{S_{t+1}}}_2\cdot\norm{\vb^\ast_{S_t}}_2,\n\end{align*}\nwhere the last step follows from an application of the SSC\/SSS properties and the Cauchy-Schwarz inequality, by noticing that $|S_\ast\backslash S_{t+1}| \leq k$ and that $\vb^\ast_{S_t}$ and $\vb^\ast_{S_{t+1}}$ are all $k$-sparse vectors since $\vb^\ast$ itself is a $k$-sparse vector. 
Solving the above quadratic inequality gives us\n\[\n\norm{\vb^\ast_{S_{t+1}}}_2 \leq (\sqrt 2 + 1)\cdot\frac{\beta_k}{\alpha_{n-k}}\cdot\norm{\vb^\ast_{S_t}}_2.\n\]\nThe above result proves that in $t = \bigO{\log\frac{\norm{\vb^\ast}_2}{\epsilon}}$ iterations, the alternating minimization procedure will identify an active set $S_t$ such that $\norm{\vb^\ast_{S_t}}_2 \leq \epsilon$. It is easy\elink{exer:rreg-ls} to see that a least squares step on this active set will yield a model $\bth$ satisfying\n\[\n\norm{\bth - \bto}_2 = \norm{C_t^{-1}(X^{S_t})^\top\vb^\ast_{S_t}}_2 \leq \frac{\beta_k}{\alpha_{n-k}}\cdot\epsilon \leq \epsilon,\n\]\nsince $\beta_k\/\alpha_{n-k} < 1$. This concludes the convergence guarantee.\n\end{proof}\n\nThe crucial assumption in the previous result is the requirement $\beta_k\/\alpha_{n-k} < \frac{1}{\sqrt 2 + 1}$. Clearly, as $k \rightarrow 0$, we have $\beta_k \rightarrow 0$ but if the matrix $X$ is well conditioned we still have $\alpha_{n-k} > 0$. Thus, for small enough $k$, it is assured that we will have $\beta_k\/\alpha_{n-k} < \frac{1}{\sqrt 2 + 1}$. The point at which this occurs is the so-called \emph{breakdown point} of the algorithm -- it is the largest number $k$ such that the algorithm can tolerate $k$ possibly adversarial corruptions and yet guarantee recovery.\n\nNote that the quantity $\kappa_k = \beta_k\/\alpha_{n-k}$ acts as the effective condition number of the problem. It plays the same role as the condition number did in the analysis of the gPGD and IHT algorithms. It can be shown that for RIP-inducing distributions (see \cite{BhatiaJK2015}), AM-RR can tolerate $k = \Omega(n)$ corruptions.\n\nFor the specific case of the design matrix being generated from a Gaussian distribution, it can be shown that we have $\alpha_{n-k} = \Om{n-k}$ and $\beta_k = \bigO{k}$. 
This in turn can be used to show that we have $\beta_k\/\alpha_{n-k} < \frac{1}{\sqrt 2 + 1}$ whenever $k \leq n\/70$. This means that AM-RR can tolerate up to $n\/70$ corruptions when the design matrix is Gaussian. This indicates a high degree of robustness in the algorithm since these corruptions can be completely adversarial in terms of their location, as well as their magnitude. Note that AM-RR is able to ensure this without requiring any specific initialization.\n\n\section{Alternating Minimization via Gradient Updates}\n\label{sec:rreg-hyb}\nSimilar to the gradient-based EM heuristic we looked at in \S~\ref{sec:em-implement}, we can make the alternating minimization process in AM-RR much cheaper by executing a gradient step for the alternations. More specifically, we can execute step 3 of AM-RR as\n\[\n\btn \leftarrow \btt - \eta\cdot\sum_{i \in S_t}(\vx_i^\top\btt - y_i)\vx_i,\n\]\nfor some step size parameter $\eta$. It can be shown (see \cite{BhatiaJK2015}) that this process enjoys the same linear rate of convergence as AM-RR. However, notice that both alternations (model update as well as active set update) in this gradient-descent version can be carried out in (near-)linear time. This is in contrast with AM-RR, which takes super-quadratic time in each alternation to discover the least squares solution. In practice, this makes the gradient version much faster (often by an order of magnitude) as compared to the fully corrective version.\n\nHowever, once we have obtained a reasonably good estimate of $S_\ast$, it is better to execute the least squares solution to obtain the final solution in a single stroke. Thus, the gradient and fully corrective steps can be mixed together to great effect. In practice, such \emph{hybrid} techniques offer the fastest convergence. 
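In code, the change amounts to replacing the least squares solve in step 3 of AM-RR by a gradient step, with one fully corrective least squares at the end as the hybrid finish. The step size below is our own crude choice for Gaussian designs, intended only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 20, 50
X = rng.standard_normal((n, p))
theta_star = rng.standard_normal(p)
y = X @ theta_star
corrupt = rng.choice(n, size=k, replace=False)
y[corrupt] += rng.uniform(5, 50, size=k)

eta = 1.0 / n            # crude step size guess for a Gaussian design
active = np.arange(n - k)
theta = np.zeros(p)
for _ in range(500):
    Xa, ya = X[active], y[active]
    theta = theta - eta * Xa.T @ (Xa @ theta - ya)       # O(np)-time model update
    active = np.argsort(np.abs(y - X @ theta))[: n - k]  # active set update
# Hybrid finish: one fully corrective least squares on the final active set
theta = np.linalg.lstsq(X[active], y[active], rcond=None)[0]
print(np.linalg.norm(theta - theta_star))  # small once the clean set is found
```

Both updates inside the loop run in time linear in the size of the data, which is what makes this variant attractive at scale.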
We refer the reader to \S~\ref{sec:rreg-empcomp} for a brief discussion on this and to \citep{BhatiaJK2015} for details.\n\n\section{Robust Regression via Projected Gradient Descent}\n\label{sec:rreg-gpgd}\nIt is possible to devise an alternate formulation for the robust regression problem that allows us to apply the gPGD technique instead. The work of \citet{BhatiaJKK2017} uses this alternate formulation to arrive at a solution that enjoys consistency properties. Note that if someone gave us a good estimate $\hat\vb$ of the corruption vector, we could use it to clean up the responses as $\vy - \hat\vb$, and re-estimate the model as\n\[\n\hat\bt(\hat\vb) = \underset{\bt \in \bR^p}{\arg\min}\ \norm{(\vy - \hat\vb) - X\bt}_2^2 = (X^\top X)^{-1}X^\top(\vy-\hat\vb).\n\]\nThe residuals corresponding to this new model estimate would be\n\[\n\norm{\vy - \hat\vb - X\cdot\hat\bt(\hat\vb)}_2^2 = \norm{(I - P_X)(\vy - \hat\vb)}_2^2,\n\]\nwhere $P_X = X(X^\top X)^{-1}X^\top$. The above calculation shows that an equivalent formulation for the robust regression problem \eqref{eq:rreg} is the following\n\[\n\underset{\norm{\vb}_0 = k}\min\ \norm{(I - P_X)(\vy - \vb)}_2^2,\n\]\nto which we can apply gPGD (Algorithm~\ref{algo:gpgd}) since it now resembles a sparse recovery problem! This problem enjoys\elink{exer:rreg-rip} the restricted isometry property whenever the design matrix $X$ is sampled from an RIP-inducing distribution (see \S~\ref{sec:rip-ensure}). This shows that an application of the gPGD technique will guarantee recovery of the optimal corruption vector $\vb^\ast$ at a linear rate. Once we have an $\epsilon$-optimal estimate $\hat\vb$ of $\vb^\ast$, a good model $\hat\bt$ can be found by solving the least squares problem\n\[\n\hat\bt(\hat\vb) = (X^\top X)^{-1}X^\top(\vy-\hat\vb),\n\]\nas before. 
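As a sanity check of this reformulation, the following sketch applies hard-thresholding projected gradient steps to $f(\vb) = \norm{(I - P_X)(\vy - \vb)}_2^2$ in the simplest case of a one-dimensional model ($p = 1$), where $P_X$ is the projection onto a single vector. The instance sizes and corruption magnitudes are illustrative choices.

```python
import random

# gPGD on min_{||b||_0 = k} ||(I - P_X)(y - b)||^2 for a 1-D design,
# where P_X v = x (x.v)/||x||^2. Since grad f(b) = -2 (I - P_X)(y - b),
# a step length of 1/2 gives the update z = b + (I - P_X)(y - b),
# followed by hard thresholding onto the k largest entries.
random.seed(1)
n, k, theta_star = 200, 20, 3.0
x = [random.gauss(0, 1) for _ in range(n)]
b_star = [40.0 if i < k else 0.0 for i in range(n)]
y = [theta_star * x[i] + b_star[i] for i in range(n)]
xx = sum(xi * xi for xi in x)

def residual(b):
    """(I - P_X)(y - b), computed without forming the projection matrix."""
    v = [y[i] - b[i] for i in range(n)]
    c = sum(x[i] * v[i] for i in range(n)) / xx
    return [v[i] - c * x[i] for i in range(n)]

b = [0.0] * n
for _ in range(50):
    r = residual(b)
    z = [b[i] + r[i] for i in range(n)]          # gradient step, eta = 1/2
    top = sorted(range(n), key=lambda i: -abs(z[i]))[:k]
    b = [0.0] * n
    for i in top:                                # projection: hard thresholding
        b[i] = z[i]

err = max(abs(b[i] - b_star[i]) for i in range(n))
print(err < 1e-6)   # the corruption vector b* is recovered
```

Cleaning up the responses as $\vy - \hat\vb$ and solving least squares then recovers the model, exactly as described in the text.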
It is a simple exercise to show that if $\norm{\hat\vb - \vb^\ast}_2 \leq \epsilon$, then\n\[\n\norm{\hat\bt(\hat\vb) - \bto}_2 \leq \bigO{\frac{\epsilon}{\alpha}},\n\]\nwhere $\alpha$ is the RSC parameter of the problem. Note that this shows how the gPGD technique can be applied to perform recovery not only when the parameter is sparse in the model domain (for instance in the gene expression analysis problem), but also when the parameter is sparse in the data domain, as in the robust regression example.\n\n\begin{figure}[t]\n\begin{subfigure}[t]{.5\columnwidth}\n\centering \includegraphics[width=\columnwidth]{rreg-comp.pdf}\n\caption{Run-time Comparison}\n\label{fig:rreg-comparison-rreg}\n\end{subfigure}\n\hfill\n\begin{subfigure}[t]{.5\columnwidth}\n\centering \includegraphics[width=\columnwidth]{rreg-hyb.pdf}\n\caption{Convergence Comparison}\n\label{fig:rreg-comparison-hyb}\n\end{subfigure}%\n\caption[Empirical Performance on Robust Regression Problems]{An empirical comparison of the performance offered by various approaches for robust regression. Figure~\ref{fig:rreg-comparison-rreg} (adapted from \citep{BhatiaJKK2017}) compares Extended LASSO, a state-of-the-art relaxation method by \cite{NguyenT2013b}, AM-RR, and the gPGD method from \S~\ref{sec:rreg-gpgd} on a robust regression problem in $p = 1000$ dimensions with $30\%$ of data points corrupted. Non-convex techniques such as AM-RR and gPGD are more than an order of magnitude faster, and scale much better, than Extended LASSO. Figure~\ref{fig:rreg-comparison-hyb} (adapted from \citep{BhatiaJK2015}) compares various solvers on a robust regression problem in $p = 300$ dimensions with $1800$ data points, of which $40\%$ are corrupted. 
The solvers include the gAM-style solver AM-RR, a variant using gradient-based updates, a hybrid method (see \S~\ref{sec:rreg-hyb}), and the DALM method \citep{YangZBSM2013}, a state-of-the-art solver for relaxed LASSO-style formulations. The hybrid method is the fastest of all the techniques. In general, all AM-RR variants are much faster than the relaxation-based method.}%\n\label{fig:rreg-comparison}\n\end{figure}\n\n\section{Empirical Comparison}\n\label{sec:rreg-empcomp}\nBefore concluding, we present some discussion on the empirical performance of various algorithms on the robust regression problem. Figure~\ref{fig:rreg-comparison-rreg} compares the running times of various robust regression approaches on synthetic problems as the number of data points available in the dataset increases. Similar trends are seen if the data dimensionality increases. The graph shows that non-convex techniques such as AM-RR and gPGD offer much more scalable solutions than relaxation-based methods. Figure~\ref{fig:rreg-comparison-hyb} similarly demonstrates how variants of the basic AM-RR procedure can offer significantly faster convergence. Figure~\ref{fig:faceamrr} looks at a realistic example of face reconstruction under occlusions and demonstrates how AM-RR is able to successfully recover the face image even when significant portions of the face are occluded.\n\n\begin{figure}[t]\n\includegraphics[width=\columnwidth]{faceamrr.pdf}\n\caption[Robust Face Reconstruction]{An experiment on face reconstruction using robust regression techniques. Two face images were taken and different occlusions were applied to them. Using the model described in \S~\ref{sec:rreg-intro}, reconstruction was attempted using both ordinary least squares (OLS) and robust regression (AM-RR). It is clear that AM-RR achieves far superior reconstruction of the images and is able to correctly figure out the locations of the occlusions. 
Images courtesy of the Yale Face Database B.}%\n\label{fig:faceamrr}\n\end{figure}\n\n\section{Exercises}\n\begin{exer}\n\label{exer:rreg-impossible}\nShow that it is impossible to recover the model vector if a fully adaptive adversary is able to corrupt more than half the responses, i.e., if $k \geq n\/2$. A fully adaptive adversary is one that is allowed to perform corruptions after observing the clean covariates as well as the uncorrupted responses.\\\n\textit{Hint}: The adversary can make it impossible to distinguish between two models, the real model, and another one of its choosing.\n\end{exer}\n\begin{exer}\n\label{exer:rreg-ssc}\nShow that the SSC property ensures that there exists no subset $S$ of the data, $|S| \leq k$, and no two distinct model vectors $\vv^1,\vv^2\in\bR^p$ such that $X^S\vv^1 = X^S\vv^2$.\n\end{exer}\n\begin{exer}\n\label{exer:rreg-ls}\nShow that executing the AM-RR algorithm for a single step starting from an active set $S_t$ such that $\norm{\vb^\ast_{S_t}}_2 \leq \epsilon$ ensures in the very next step that $\norm{\btt - \bto}_2 \leq \epsilon$.\n\end{exer}\n\begin{exer}\n\label{exer:rreg-rip}\nShow that if the design matrix $X$ satisfies RIP, then the objective function $f(\vb) = \norm{(I - P_X)(\vy - \vb)}_2^2$ enjoys RIP (of the same order as $X$ but with possibly different constants) as well.\n\end{exer}\n\begin{exer}\n\label{exer:rreg-1-2}\nProve that \eqref{eq:rreg} and \eqref{eq:rreg-2} are equivalent formulations, i.e., they yield the same model.\n\end{exer}\n\n\section{Bibliographic Notes}\nThe problem of robust estimation has been well studied in the statistics community. Indeed, there exist entire texts devoted to this area \citep{RousseeuwL1987,MaronnaMY2006} which look at robust estimators for regression and other problems. 
However, these methods often involve estimators, such as the least median of squares estimator, that have an exponential time complexity.\n\nIt is notable that these infeasible estimators often have attractive theoretical properties such as a high breakdown point. For instance, the work of \citet{Rousseeuw1984} shows that the least median of squares method enjoys a breakdown point of as high as $n\/2 - p$. In contrast, AM-RR is only able to handle $n\/70$ errors. However, whereas the gradient descent version of AM-RR can be executed in near-linear time, the least median of squares method requires time exponential in $p$.\n\nThere have been relaxation-based approaches to solving robust regression and time series problems as well. Chief among them are the works of \cite{ChenD2012, ChenCM2013}, which look at the Dantzig selector methods and the \emph{trimmed product} techniques to perform estimation, and the works of \cite{WrightYGSM2009, NguyenT2013}. The work of \cite{ChenCM2013} considers corruptions not only in the responses, but in the covariates as well. However, these methods tend to scale poorly to really large-scale problems owing to the non-smooth nature of the optimization problems that they end up solving. The non-convex optimization techniques we have studied, on the other hand, require linear time or else have closed-form updates. \n\nRecent years have seen the application of non-convex techniques to robust estimation. However, these works can be traced back to the classical work of \citet{FischlerB1981} that developed the RANSAC algorithm, which is very widely used in fields such as computer vision. The RANSAC algorithm samples multiple candidate active sets and returns the least squares estimate on the set with least residual error.\n\nAlthough the RANSAC method does not enjoy strong theoretical guarantees in the face of an adaptive adversary and a large number of corruptions, the method is seen to work well when there are very few outliers. 
Later works, such as that of \citet{SheO2011}, applied soft-thresholding techniques to the problem, followed by the work of \cite{BhatiaJK2015}, which applied the alternating minimization algorithm we studied here. \cite{BhatiaJK2015} also looked at the problem of robust sparse recovery.\n\nThe time series literature has also seen the application of various techniques for robust estimation including robust M-estimators in both the additive and innovative outlier models \citep{MartinZ1978, StockingD1987} and least trimmed squares \citep{CrouxJ2008}.\n\chapter{Stochastic Optimization Techniques}\n\label{chap:saddle}\n\nIn previous sections, we have looked at specific instances of optimization problems with non-convex objective functions. In \S~\ref{chap:altmin} we looked at problems in which the objective can be expressed as a function of two variables, whereas in \S~\ref{chap:em} we looked at objective functions with latent variables that arose in probabilistic settings. In this section, we will look at the problem of optimization with non-convex objectives in a more general setting.\n\nSeveral machine learning and signal processing applications such as deep learning, topic modeling, etc., generate optimization problems that have non-convex objective functions. The global optimization of non-convex objectives, i.e., finding the global optimum of the objective function, is an NP-hard problem in general. Even the seemingly simple problem of minimizing quadratic functions of the kind $\vx^\top A\vx$ over convex constraint sets becomes NP-hard the moment the matrix $A$ is allowed to have even one negative eigenvalue.\n\nAs a result, a much sought-after goal in applications with non-convex objectives is to find a local minimum of the objective function. The main hurdle in achieving local optimality is the presence of saddle points, which can mislead optimization methods such as gradient descent by stalling their progress. 
Saddle points are best avoided as they signal inflection in the objective surface and, unlike local optima, they need not optimize the objective function in any meaningful way.\n\nThe recent years have seen much interest in this problem, particularly with the advent of deep learning where folklore wisdom tells us that in the presence of sufficient data, even locally optimal solutions to the problem of learning the edge weights of a network perform quite well \citep{ChoromanskaHMALC2015}. In these settings, techniques such as convex relaxations, as well as the non-convex optimization techniques that we have studied, such as EM, gAM, and gPGD, do not apply directly. Instead, one has to attempt optimizing a non-convex objective directly.\n\nThe problem of avoiding or escaping saddle points is actually quite challenging in itself given the wide variety of configurations saddle points can appear in, especially in high-dimensional problems. It should be noted that there exist saddle configurations, bypassing which is intractable in itself. For such cases, even finding locally optimal solutions is an NP-hard problem \citep{AnandkumarG2016}.\n\nIn our discussion, we will look at recent results which show that if the function being optimized possesses certain nice structural properties, then an application of very intuitive algorithmic techniques can guarantee local optimality of the solutions. This will be yet another instance of a result where the presence of a structural property (such as RSC\/RSS, MSC\/MSS, or LSC\/LSS as studied in previous sections) makes the problem well behaved and allows efficient algorithms to offer provably good solutions. Our discussion will be largely aligned with the work of \cite{GeHJY2015}. The bibliographic notes will point to other works.\n\n\section{Motivating Applications}\n\label{sec:saddle-app}\nA wide range of problems in machine learning and signal processing generate optimization problems with non-convex objectives. 
Of particular interest to us would be the problem of \emph{Orthogonal Tensor Decomposition}. This problem has been shown to be especially useful in modeling a large variety of learning problems including training deep Recurrent Neural Networks \citep{SedghiA2016}, Topic Modeling, learning Gaussian Mixture Models and Hidden Markov Models, Independent Component Analysis \citep{AnandkumarGHKT2014}, and reinforcement learning \citep{AzizzadenesheliLA2016}.\n\nDescribing how these machine learning problems can be reduced to tensor decomposition would involve details that distract us from our main objective. To keep the discussion focused and brief, we request the reader to refer to these papers for the reductions to tensor decomposition. We will be most interested in the tensor decomposition problem itself.\n\nWe will restrict our study to $4\th$-order tensors, which can be interpreted as $4$-dimensional arrays. Tensors are easily constructed using \emph{outer products}, also known as \emph{tensor products}. An outer product of $2^{\text{nd}}$ order produces a $2^{\text{nd}}$ order tensor which is nothing but a matrix. For any $\vu, \vv \in \bR^p$, their outer product is defined as $\vu \otimes \vv := \vu\vv^\top \in \bR^{p\times p}$ which is a $p \times p$ matrix, whose $(i,j)$-th entry is $\vu_i\vv_j$.\n\nWe can similarly construct $4\th$-order tensors. For any $\vu,\vv,\vw,\vx \in \bR^p$, let $T = \vu\otimes\vv\otimes\vw\otimes\vx \in \bR^{p\times p\times p\times p}$. The $(i,j,k,l)$-th entry of this tensor, for any $i,j,k,l \in [p]$, will be $T_{i,j,k,l} = \vu_i\cdot\vv_j\cdot\vw_k\cdot\vx_l$. The set of $4\th$-order tensors is closed under addition and scalar multiplication. 
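The entry-wise definitions above are easy to realize in code; the following toy sketch (pure Python, illustrative vectors) builds $\vu \otimes \vv$ and a fourth-order outer product explicitly.

```python
# Outer products as defined above: u ⊗ v is the p × p matrix with entries
# u_i v_j, and u ⊗ v ⊗ w ⊗ x is the 4th-order tensor with entries
# u_i v_j w_k x_l. Toy vectors and nested lists; illustrative only.
u = [1.0, 2.0]
v = [3.0, -1.0]
w = [0.5, 0.0]
x = [2.0, 4.0]
p = len(u)

M = [[u[i] * v[j] for j in range(p)] for i in range(p)]        # u ⊗ v
T = [[[[u[i] * v[j] * w[k] * x[l] for l in range(p)]
       for k in range(p)] for j in range(p)] for i in range(p)]

print(M[0][1])        # u_0 * v_1 = -1.0
print(T[1][0][0][1])  # u_1 * v_0 * w_0 * x_1 = 2 * 3 * 0.5 * 4 = 12.0
```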
We will study a special class of $4\\th$-order tensors known as \\emph{orthonormal} tensors which have an \\emph{orthonormal decomposition} as follows\n\\[\nT = \\sum_{i=1}^r \\vu_i \\otimes \\vu_i \\otimes \\vu_i\\otimes \\vu_i,\n\\]\nwhere the vectors $\\vu_i$ are orthonormal \\emph{components} of the tensor $T$ i.e., $\\vu_i^\\top\\vu_j = 0$ if $i \\neq j$ and $\\norm{\\vu_i}_2 = 1$. The above tensor is said to have rank $r$ since it has $r$ components in its decomposition. If an orthonormal decomposition of a tensor exists, it can be shown to be unique.\n\nJust as a matrix $A \\in \\bR^{p \\times p}$ defines a bi-linear form $A: (\\vx,\\vy) \\mapsto \\vx^\\top A \\vy$, similarly a tensor defines a \\emph{multi-linear form}. For orthonormal tensors, the multilinear form has a simple expression. In particular, if $T$ has the orthonormal form described above, we have\n\\begin{equation*}\n\\begin{array}{cl}\nT(\\vv,\\vv,\\vv,\\vv) &= \\sum_{i=1}^r (\\vu_i^\\top\\vv)^4 \\in \\bR\\\\\nT(I,\\vv,\\vv,\\vv) &= \\sum_{i=1}^r (\\vu_i^\\top\\vv)^3\\cdot\\vu_i \\in \\bR^p\\\\\nT(I,I,\\vv,\\vv) &= \\sum_{i=1}^r (\\vu_i^\\top\\vv)^2\\cdot\\vu_i\\vu_i^\\top \\in \\bR^{p \\times p}\\\\\nT(I,I,I,\\vv) &= \\sum_{i=1}^r (\\vu_i^\\top\\vv)\\cdot(\\vu_i\\otimes\\vu_i\\otimes\\vu_i) \\in \\bR^{p \\times p \\times p}\n\\end{array}\n\\end{equation*}\nThe problem of orthonormal tensor decomposition involves recovering all $r$ components of a rank-$r$, 4-th order tensor $T$. An intuitive way to do this is to iteratively find all the components.\n\nFirst we recover, say w.l.o.g., the first component $\\vu_1$. Then we perform a \\emph{peeling\/deflation} step by subtracting that component and creating a new tensor $T^{(1)} = T - \\vu_1 \\otimes \\vu_1 \\otimes \\vu_1\\otimes \\vu_1$ and repeating the process. 
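Both the multilinear-form identity $T(\vv,\vv,\vv,\vv) = \sum_{i=1}^r (\vu_i^\top\vv)^4$ and the deflation step can be verified numerically on a toy orthonormal tensor; the sizes and components below are illustrative choices.

```python
# Check T(v,v,v,v) = sum_i (u_i . v)^4 and the deflation step
# T^(1) = T - u_1 ⊗ u_1 ⊗ u_1 ⊗ u_1 on a toy orthonormal tensor
# (p = 3, r = 2, standard-basis components; illustrative only).
p, r = 3, 2
U = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]        # orthonormal u_1, u_2

def rank_tensor(comps):
    """sum_m u_m ⊗ u_m ⊗ u_m ⊗ u_m as a nested p^4 array."""
    return [[[[sum(u[i] * u[j] * u[k] * u[l] for u in comps)
               for l in range(p)] for k in range(p)]
             for j in range(p)] for i in range(p)]

def form(T, v):
    """The multilinear form T(v, v, v, v)."""
    return sum(T[i][j][k][l] * v[i] * v[j] * v[k] * v[l]
               for i in range(p) for j in range(p)
               for k in range(p) for l in range(p))

T = rank_tensor(U)
v = [0.5, -2.0, 1.0]
lhs = form(T, v)
rhs = sum(sum(U[m][i] * v[i] for i in range(p)) ** 4 for m in range(r))
print(abs(lhs - rhs) < 1e-12)          # the identity holds

T1 = rank_tensor(U[1:])                # deflated tensor T - u_1^{⊗ 4}
print(form(T1, U[0]) == 0.0)           # u_1 no longer contributes
```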
Note that the rank of the tensor $T^{(1)}$ is only $r-1$ and thus this procedure terminates in just $r$ steps.\n\nTo execute the above algorithm, all that is required is to solve the individual step of recovering a single component. This requires us to solve the following optimization problem.\n\n\begin{equation}\n\begin{array}{cl}\n\t\max & T(\vu,\vu,\vu,\vu) = \sum_{i=1}^r (\vu_i^\top\vu)^4\\\n\t\text{s.t.} & \norm{\vu}_2 = 1,\n\end{array}\n\tag*{(LRTD)}\label{eq:lrtd}\n\end{equation}\n\nThis is the non-convex optimization problem\elink{exer:saddle-non-conv} that we will explore in more detail. We will revisit this problem later after looking at some techniques to optimize non-convex objectives.\n\n\section{Saddles and why they Proliferate}\nTo better understand the challenges posed by saddle points, let us take good old gradient descent as an optimization algorithm. As we studied in \S~\ref{chap:tools}, the procedure involves repeatedly taking steps in the direction opposite to the gradient.\n\[\n\vx^{t+1} = \vx^t - \eta_t\cdot\nabla f(\vx^t)\n\]\nNow, it can be shown\elink{exer:saddle-smooth-nc}\elink{exer:saddle-smooth-nc-2} that the procedure is guaranteed to make progress at \emph{every} time step, provided the function $f$ is strongly smooth and the step length is small enough. However, the procedure stalls at \emph{stationary points} where the gradient of the function vanishes, i.e., $\nabla f(\vx) = \vzero$. This includes local optima, which are of interest, and saddle points, which simply stall descent algorithms.\n\nOne way to distinguish saddle points from local optima is by using the \emph{second derivative test}. The Hessian of a doubly differentiable function has only positive eigenvalues at local minima and only negative ones at local maxima. Saddles, on the other hand, are unpredictable. 
The \\emph{simple saddles} which we shall study here, reveal themselves by having both positive and negative eigenvalues in the Hessian. The bibliographic notes discuss more complex saddle structures.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\columnwidth]{saddle.pdf}\n\\includegraphics[width=0.5\\columnwidth]{saddle2d.pdf}\n\\caption[Emergence of Saddle Points]{The function on the left $f(x) = x^4 - 4\\cdot x^2 + 4$ has two global optima $\\bc{-\\sqrt 2, \\sqrt 2}$ separated by a local maxima at $0$. Using this function, we construct on the right, a higher dimensional function $g(x,y) = f(x) + f(y) +8$ which now has $4$ global minima separated by $4$ saddle points. The number of such minima and saddle points can explode exponentially in learning problems with symmetry (indeed $g(x,y,z) = f(x) + f(y) + f(z) + 12$ has $8$ local minima and saddle points). Plot on the right courtesy \\url{academo.org}}\n\\label{fig:saddle}\n\\end{figure}\n\nThe reasons for the origin of saddle points is quite intriguing too. Figure~\\ref{fig:saddle} shows how saddles may emerge and their numbers increase exponentially with increasing dimensionality. Consider the tensor decomposition problem in \\eqref{eq:lrtd}. It can be shown\\elink{exer:saddle-multi-opt} that all the $r$ components are optimal solutions to this problem. Thus, the problem possesses a beautiful symmetry which allows us to recover the components in any order we like. However, it is also easy to show\\elink{exer:saddle-multi-non-opt} that general convex combinations of the components are not optimal solutions to this problem. Thus, we automatically obtain $r$ isolated optima spread out in space, interspersed with saddle points.\n\nThe applications we discussed, such as Gaussian mixture models, also have such an internal symmetry -- the optimum is unique only up to permutation. Indeed, it does not matter in which order do we recover the components of a mixture model, so long as we recover all of them. 
However, this very symmetry gives rise to saddle points \citep{GeHJY2015}, since a convex combination of two permutations of the optimal solution is, in general, not itself an optimal solution. This gives us, in general, an exponential number of optima, separated by (exponentially many) saddle points.\n\nBefore moving forward, we remind the reader that techniques we have studied so far for non-convex optimization, namely EM, gAM, and gPGD, are far too specific to be applied to non-convex objectives in general, and to the problems we encounter with tensor decomposition in particular. We need more generic solutions for the task of local optimization of non-convex objectives.\n\n\section{The Strict Saddle Property}\n\label{sec:saddle-fsp}\nGiven that the Hessian of the function is the first point of inquiry when trying to distinguish saddle points from other stationary points, it seems natural to use second-order methods to escape saddle points. Indeed, the bibliographic notes discuss several such approaches that use the Hessian, or estimates thereof, to escape saddle points. However, these are expensive and do not scale to high-dimensional problems. Consequently, we would be more interested in scalable first-order methods.\n\nGiven the above considerations, a natural course of action is to identify properties that an objective function should satisfy in order to allow gradient descent-style techniques to escape its saddle points. The recent works of \cite{GeHJY2015,SunQW2015} give an intuitive answer to this question. They observe that if a saddle point $\vx$ for a function $f$ contains directions of steep descent, then it is possible, at least in principle, for a gradient descent procedure to discover this direction and ``fall'' along it. 
The existence of such directions makes a saddle unstable -- it behaves like a local maximum along these directions and a slight perturbation is very likely to cause gradient descent to roll down the function surface. We notice that this will indeed be the case if for some direction $\vu \in \bR^p$, we have $\vu^\top \nabla^2f(\vx)\vu \ll 0$. Figure~\ref{fig:strict-saddle} depicts a toy case with the function $f(x,y) = x^2-y^2$, which exhibits a saddle at the origin but nevertheless also presents a direction of steep descent.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.5\columnwidth]{strict-saddle.pdf}\n\caption[The Strict Saddle Property]{The function $f(x,y) = x^2 - y^2$ exhibits a saddle at the origin $(0,0)$. The Hessian of this function at the origin is $\text{diag}(2,-2)$ and since $\lambda_{\min}(\nabla^2f((0,0))) = -2$, the saddle satisfies the strict-saddle property. Indeed, the saddle does offer a prominent descent path along the $y$ axis, which can be used to escape the saddle point. In \S~\ref{sec:saddle_analysis} we will see how the NGD algorithm is able to provably escape this saddle point. 
Plot courtesy \\url{academo.org}}\n\\label{fig:strict-saddle}\n\\end{figure}\n\nConsider the following unconstrained optimization problem.\n\\begin{equation}\n\t\\min_{\\vx\\in\\bR^p} f(\\vx)\n\\tag*{(NOPT)}\\label{eq:nopt}\n\\end{equation}\nThe following \\emph{strict saddle property} formalizes the requirements we have discussed in a more robust manner.\n \n\\begin{definition}[Strict Saddle Property \\citep{GeHJY2015}]\n\\label{defn:ssp}\nA twice differentiable function $f(\\vx)$ is said to satisfy the $(\\alpha, \\gamma, \\kappa, \\xi)$-strict saddle (SSa) property, if for every local minimum $\\vx^\\ast$ of the function, the function is $\\alpha$-strongly convex in the region $\\cB_2(\\vx^\\ast,2\\xi)$ and moreover, every point $\\vx_0 \\in \\bR^p$ satisfies at least one of the following properties: \n\\begin{enumerate}\n\t\\item (Non-stationary Point) $\\|\\nabla f(\\vx_0)\\|_2 \\geq \\kappa$\n\t\\item\t(Strict Saddle Point) $\\lambda_{\\min}(\\nabla^2f(\\vx_0))\\leq -\\gamma$\n\t\\item\t(Approx. Local Min.) For some local minimum $\\vx^\\ast$, $\\|\\vx_0-\\vx^\\ast\\|_2 \\leq \\xi$.\n\\end{enumerate}\n\\end{definition}\n\nThe above property places quite a few restrictions on the function. The function must be strongly-convex in the neighborhood of every local optima and every point that is not an approximate local minimum, must offer a direction of steep descent. This the point may do by having a steep gradient (case 1) or else (if the point is a saddle point) have its Hessian offer an eigenvector with a large negative eigenvalue which then offers a steep descent direction (case 2). 
We shall later see that there exist interesting applications that do satisfy this property.\n\n\section{The Noisy Gradient Descent Algorithm}\n\label{sec:saddle-algo}\nGiven that the strict-saddle property assures us of the existence of a steep descent direction until we reach close to an approximate-local minimum, it should come as no surprise that simple techniques should be able to achieve local optimality on functions that are strict saddle. We will now look at one such simple approach to exploit the strict saddle property and achieve local optimality.\n\nThe idea behind the approach is very simple: at every saddle point that may stall gradient descent, the SSa property ensures that there exists a direction of steep descent. If we perturb the gradient, there is a chance it will point in the general direction of steep descent and escape the saddle. However, if we are at a local minimum or a non-stationary point, then this perturbation would not affect us much.\n\nAlgorithm~\ref{algo:saddle-ngd} gives the details of this approach. At each time step, it perturbs the gradient using a unit vector pointing in a random direction in the hope that if we are currently stuck at a saddle point, the perturbation will cause us to discover the steep (enough) descent direction, allowing us to continue making progress. One easy way to obtain a random point on the unit sphere is to sample a random standard Gaussian vector $\vw \sim \cN(\vzero,I_{p \times p})$ and normalize it: $\vzeta = \frac{1}{\norm{\vw}_2}\cdot\vw$. Notice that since the perturbation $\vzeta^t$ is chosen uniformly from the unit sphere, we have $\E{\vzeta^t} = \vzero$ and by linearity of expectation, $\E{\vg^t\cond\vx^t} = \nabla f(\vx^t)$.\n\nSuch an unbiased estimate $\vg^t$ is often called a \emph{stochastic gradient} and is widely used in machine learning and optimization. 
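The perturbation scheme itself takes only a few lines; the following sketch runs it on the toy saddle $f(x,y) = x^2 - y^2$, starting exactly at the origin. The step length and iteration count are illustrative choices.

```python
import math
import random

# Noisy gradient descent on f(x, y) = x^2 - y^2, initialized at the
# saddle (0, 0). Plain gradient descent would stay there forever; the
# spherical noise lets the iterates fall along the y direction.
random.seed(0)

def grad(x, y):
    return 2 * x, -2 * y

x, y, eta = 0.0, 0.0, 0.01
for _ in range(2000):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    nrm = math.hypot(w1, w2)
    z1, z2 = w1 / nrm, w2 / nrm          # uniform point on the unit sphere
    gx, gy = grad(x, y)
    x -= eta * (gx + z1)
    y -= eta * (gy + z2)

print(x * x - y * y < -1.0)  # f has dropped far below the saddle value f(0,0) = 0
```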
Recall that even the EM algorithm studied in \S~\ref{chap:em} had a variant that used stochastic gradients. In several machine learning applications, the objective can be written as a finite sum $f(\vx) = \frac{1}{n}\sum_{i=1}^n f(\vx;\vtheta^i)$, where $\vtheta^i$ may denote the $i$-th data point. This allows us to construct a stochastic gradient estimate in an even more inexpensive manner. At each time step, simply sample a data point $I_t \sim \text{\textsf{Unif}}([n])$ and let\n\[\n\vg^t = \nabla f(\vx^t;\vtheta^{I_t}) + \vzeta^t\n\]\nNote that we still have $\E{\vg^t\cond\vx^t} = \nabla f(\vx^t)$ but with a much cheaper construction for $\vg^t$. However, in order to simplify the discussion, we will continue to work with the setting $\vg^t = \nabla f(\vx^t) + \vzeta^t$. \n\nWe note that we have set the step lengths to be around $1\/\sqrt{T}$, where $T$ is the total number of iterations for which we are going to execute the NGD algorithm. This is similar to what we used in the projected gradient descent approach (see Algorithm~\ref{algo:pgd} and Theorem~\ref{thm:pgd-conv-proof}). Although in practice one may set the step length to $\eta_t \approx 1\/\sqrt{t}$ here as well, the analysis becomes more involved.\n\n\begin{algorithm}[t]\n\t\caption{Noisy Gradient Descent (NGD)}\n\t\label{algo:saddle-ngd}\n\t\begin{algorithmic}[1]\n\t\t{\n\t\t\t\REQUIRE Objective $f$, max step length $\eta_{\max}$, tolerance $\epsilon$\n\t\t\t\ENSURE A locally optimal point $\hat\vx \in \bR^p$\n\t\t\t\STATE $\vx^1 \leftarrow \text{\textsf{INITIALIZE}}()$\n\t\t\t\STATE Set $T \leftarrow 1\/\eta^2$, where $\eta = \min\bc{\epsilon^2\/\log^2(1\/\epsilon), \eta_{\max}}$\n\t\t\t\FOR{$t= 1, 2, \ldots, T$}\n\t\t\t\t\STATE Sample perturbation $\vzeta^t \sim S^{p-1}$\hfill\/\/\texttt{Random pt. 
on unit sphere}%\n\t\t\t\t\STATE $\vg^t \leftarrow \nabla f(\vx^t) + \vzeta^t$\n\t\t\t\t\STATE $\vx^{t+1} \leftarrow \vx^t - \eta\cdot\vg^t$\n\t\t\t\ENDFOR\n\t\t\t\STATE \textbf{return} $\vx^T$\n\t\t}\n\t\end{algorithmic}\n\end{algorithm}\n\n\section{A Local Convergence Guarantee for NGD}\n\label{sec:saddle_analysis}\nWe will now analyze the NGD algorithm for its convergence properties. We note that the complete proof requires results that are quite technical and beyond the scope of this monograph. We will instead present the essential elements of the proof and point the curious reader to \citep{GeHJY2015} for the complete analysis. To simplify the notation, we will often omit specifying the exact constants as well.\n\nIn the following, we will assume that the function $f$ satisfies the strict saddle property and is $\beta$-strongly smooth (see Definition~\ref{defn:strong-cvx-smooth-fn}). We will also assume that the function is bounded, i.e., $\abs{f(\vx)} \leq B$ for all $\vx \in \bR^p$, and has $\rho$-Lipschitz Hessians, i.e., for every $\vx,\vy \in \bR^p$, we have $\|\nabla^2 f(\vx)-\nabla^2 f(\vy)\|_2 \leq \rho\cdot\|\vx-\vy\|_2$, where $\norm{\cdot}_2$ for matrices denotes the spectral\/operator norm of the matrix.\n\nBefore commencing with the actual proof, we first present an overview. The definition of the SSa property clearly demarcates three different regimes:\n\begin{enumerate}\n\t\item Non-stationary points, i.e., points where the gradient is ``large'' enough: in this case, standard (stochastic) gradient descent is powerful enough to ensure a large enough decrease in the objective function value in a single step\elink{exer:saddle-smooth-nc}\n\t\item Saddle points, i.e., points where the gradient is close to $\vzero$. 
Here, the SSa property ensures that at least one highly ``negative'' Hessian direction exists: in this case, traditional (stochastic) gradient descent may fail but the additional noise ensures an escape from the saddle point with high probability\n\t\item Local minima, i.e., points where the gradient is close to $\vzero$ but which have a positive definite Hessian due to strong convexity: in this case, standard (stochastic) gradient descent by itself would converge to the corresponding local minimum\n\end{enumerate}\n\nThe above three regimes will be formally studied below in three separate lemmata. Note that the analysis for non-stationary points as well as for points near local minima is similar to the standard stochastic gradient descent analysis for convex functions. However, the analysis for saddle points is quite interesting and shows that the added random noise ensures an escape from the saddle point.\n\nTo further understand the inner workings of the NGD algorithm, let us perform a warm-up exercise by showing that the NGD algorithm will, with high probability, escape the saddle point in the function $f(x,y) = x^2-y^2$ that we considered in Figure~\ref{fig:strict-saddle}.\n\n\begin{theorem}\n\label{thm:saddle-toy}\nConsider the function $f(x,y) = x^2 - y^2$ on two variables. If initialized at the saddle point $(0,0)$ of this function with step length $\eta < 1$, with high probability, we have $f(x^t,y^t) \rightarrow -\infty$ as $t \rightarrow \infty$.\n\end{theorem}\n\n\begin{proof}\nFor an illustration, see Figure~\ref{fig:strict-saddle}. Note that the function $f(x,y) = x^2 - y^2$ has trivial minima at the limiting points $(0,\pm\infty)$, where the function value approaches $-\infty$. Thus, the statement of the theorem claims that NGD approaches the ``minimum'' function value.\n\nIn any case, we are interested in showing that NGD escapes the saddle point $(0,0)$. The gradient of $f$ is $\vzero$ at the origin $(0,0)$. 
Thus, if a gradient descent procedure is initialized at the origin for this function, it will remain stuck there forever, making no non-trivial updates.\n\nThe NGD algorithm, on the other hand, when initialized at the saddle point $(0,0)$, after $t$ iterations, can be shown to reach the point $(x^t,y^t)$ where $x^t = \\sum_{\\tau=0}^{t-1} (1-\\eta)^{t-\\tau-1}\\zeta_1^\\tau$ and $y^t=\\sum_{\\tau=0}^{t-1} (1+\\eta)^{t-\\tau-1}\\zeta_2^\\tau$, where $(\\zeta^\\tau_1,\\zeta^\\tau_2) \\in \\bR^2$ is the noise vector added to the gradient at each step. Since $\\eta < 1$, as $t \\rightarrow \\infty$, it is easy to see that with high probability, $x^t$ remains bounded while $|y^t|\\rightarrow \\infty$, which indicates a successful escape from the saddle point, as well as progress towards the global optima. \n\\end{proof} \n\nWe now formalize the intuitions developed above. The following lemma shows that even if we are at a saddle point, NGD will still ensure a large drop in function value in not too many steps. The proof of this result is a bit subtle and we will just provide a sketch.\n\n\\begin{lemma}\n\t\\label{lem:ngd-saddle}\n\tIf NGD is executed on a function $f: \\bR^p \\rightarrow \\bR$ that is $\\beta$-strongly smooth, satisfies the $(\\alpha, \\gamma, \\kappa, \\xi)$-SSa property, and has $\\rho$-Lipschitz Hessians, with step length $\\eta \\leq 1\/(\\beta+\\rho)^2$, and an iterate $\\vx^t$ satisfies $\\norm{\\nabla f(\\vx^t)}_2 \\leq \\sqrt{\\eta\\beta}$ and $\\lambda_{\\min}(\\nabla^2 f(\\vx^t)) \\leq -\\gamma$, then NGD ensures that after at most $s \\leq \\frac{3\\log p}{\\eta\\gamma}$ steps,\n\t\\[\n\t\\E{f(\\vx^{t+s})\\cond\\vx^t} \\leq f(\\vx^t) - \\eta\/4.\n\t\\]\n\\end{lemma}\n\\begin{proof}\n\tFor the sake of notational simplicity, let $t = 0$. The overall idea of the proof is to rely on the one large negative eigendirection of the Hessian to induce a drop in function value. 
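This mechanism is easy to check numerically on the toy example of Theorem~\ref{thm:saddle-toy}. The sketch below (the step length $\eta = 0.1$, the iteration budget, and the random seed are illustrative choices, not values prescribed by the analysis) runs NGD on $f(x,y) = x^2 - y^2$ starting from the saddle point $(0,0)$:

```python
import math
import random

def ngd_toy(steps=200, eta=0.1, seed=0):
    """Noisy gradient descent on f(x, y) = x^2 - y^2, started at the saddle (0, 0)."""
    random.seed(seed)
    x, y = 0.0, 0.0
    for _ in range(steps):
        # Perturbation drawn uniformly from the unit circle S^1.
        theta = random.uniform(0.0, 2.0 * math.pi)
        zx, zy = math.cos(theta), math.sin(theta)
        # grad f = (2x, -2y); NGD step with the injected noise.
        gx, gy = 2.0 * x + zx, -2.0 * y + zy
        x, y = x - eta * gx, y - eta * gy
    return x, y

x, y = ngd_toy()
# x stays bounded (a geometrically damped sum of noise terms), while |y|
# grows like (1 + 2*eta)^t, so f(x, y) = x^2 - y^2 becomes very negative.
print(x * x - y * y)
```

With the exact gradient $\nabla f = (2x, -2y)$, the $y$-iterate evolves as $y^{t+1} = (1+2\eta)y^t - \eta\zeta_2^t$, so noise along the negative-curvature direction is amplified geometrically, while the $x$-iterate remains bounded.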
The hope is that random fluctuations will eventually nudge NGD in the steep descent direction and upon discovery, the larger and larger gradient values will accumulate to let the NGD procedure escape the saddle.\n\t\n\tSince the effects of the Hessian are most apparent in the second order Taylor expansion, we will consider the following function\n\t\\[\n\t\\hat f(\\vx) = f(\\vx^0) + \\ip{\\nabla f(\\vx^0)}{\\vx - \\vx^0} + \\frac{1}{2}(\\vx - \\vx^0)^\\top\\cH(\\vx - \\vx^0),\n\t\\]\n\twhere $\\cH = \\nabla^2 f(\\vx^0) \\in \\bR^{p\\times p}$. The proof will proceed by first imagining that NGD was executed on the function $\\hat f(\\cdot)$ instead, showing that the function value indeed drops, and then finishing off by showing that things do not change too much if NGD is executed on the function $f(\\cdot)$ instead. Note that to make this claim, we will need Hessians to vary smoothly which is why we assumed the Lipschitz Hessian condition. This seems to be a requirement for several follow-up results as well \\citep{Jin0NKJ17, AgarwalA-ZBHM2017}. An interesting and challenging open problem is to obtain similar results for non-convex optimization without the Lipschitz-Hessian assumption. \n\t\nWe will let $\\vx^t, t \\geq 1$ denote the iterates of NGD when executed on $f(\\cdot)$ and $\\hat\\vx^t$ denote the iterates of NGD when executed on $\\hat f(\\cdot)$. We will fix $\\hat\\vx^0 = \\vx^0$. Using some careful calculations, we can get\n\t\\[\n\t\\hat\\vx^t - \\hat\\vx^0 = -\\eta\\sum_{\\tau=0}^{t-1}(I-\\eta\\cH)^\\tau\\nabla f(\\vx^0) - \\eta\\sum_{\\tau=0}^{t-1}(I-\\eta\\cH)^{t-\\tau-1}\\vzeta^\\tau\n\t\\]\nWe note that both terms in the right hand expression above correspond to ``small'' vectors. The first term is small as we know that $\\hat\\vx^0$ satisfies $\\norm{\\nabla f(\\hat\\vx^0)}_2 \\leq \\sqrt{\\eta\\beta}$ by virtue of being close to a stationary point as is assumed in the statement of this result. 
The second term is small as the $\\vzeta^\\tau$ are random unit vectors with expectation $\\vzero$. Using these intuitions, we will first show that $\\hat f(\\hat\\vx^t)$ is significantly smaller than $\\hat f(\\vx^0) = f(\\vx^0)$ after sufficiently many steps. Then, using the property of Lipschitz Hessians, we will obtain a descent guarantee for $f(\\vx^t)$. \n\t\nNow, notice that NGD chooses the noise vectors $\\vzeta^\\tau$ for any $\\tau \\geq 0$, independently of $\\vx^0$ and $\\cH$. Moreover, for any two $\\tau \\neq \\tau'$, the vectors $\\vzeta^\\tau,\\vzeta^{\\tau'}$ are also chosen independently. We also know that the noise vectors are isotropic, i.e., $\\E{\\vzeta^\\tau (\\vzeta^\\tau)^\\top}=I_p$. These observations and some straightforward calculations give us the following upper bound on the suboptimality of $\\hat\\vx^t$ with respect to the Taylor approximation $\\hat f$. As we have noted, this upper bound can be converted to an upper bound on the suboptimality of $\\vx^t$ with respect to the actual function $f$ using some more effort.\n\\begin{align*}\n\\bE[\\hat f(\\hat \\vx^t)-f(\\hat\\vx^0)]\\\\\n={}& -\\eta \\nabla f(\\vx^0)^\\top \\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau}\\nabla f(\\vx^0)\\\\\n&{}+\\frac{\\eta^2}{2}\\nabla f(\\vx^0)^\\top \\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau} \\cH \\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau}\\nabla f(\\vx^0)\\\\\n&{}+\\frac{\\eta^2}{2}\\text{tr}\\left(\\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{2\\tau}\\cH\\right)\\\\\n={}& -\\eta \\nabla f(\\vx^0)^\\top B \\nabla f(\\vx^0) + \\frac{\\eta^2}{2}\\text{tr}\\left(\\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{2\\tau}\\cH\\right), \\label{eq:ngd1}\n\\end{align*}\nwhere $\\text{tr}(\\cdot)$ is the trace operator and $B=\\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau}-\\frac{\\eta}{2} \\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau} \\cH \\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{\\tau}$. 
It is easy to verify that $B \\succeq 0$ for all step lengths $\\eta\\leq \\frac{1}{\\|\\cH\\|_2}$, i.e., in particular for all $\\eta \\leq \\frac{1}{\\beta}$. \n\t\nFor any value of $t \\geq \\frac{3\\log p}{\\eta\\gamma}$ (which is the setting for the parameter $s$ in the statement of the lemma), the second term in the final expression above can be simplified to give us\n\\[\n\\frac{\\eta^2}{2}\\text{tr}\\left(\\sum_{\\tau=0}^{t-1}(I_p-\\eta \\cH)^{2\\tau}\\cH\\right)=\\frac{\\eta^2}{2}\\sum_{i=1}^p \\lambda_i \\sum_{\\tau=0}^{t-1} (1-\\eta \\lambda_i)^{2\\tau}\\leq -\\frac{\\eta}{2},\n\\]\nwhere $\\lambda_i$ is the $i$-th eigenvalue of $\\cH$; the final inequality uses $\\lambda_p \\leq -\\gamma$. This gives us\n\\[\n\\E{\\hat f(\\hat \\vx^t)-f(\\vx^0)}\\leq -\\frac{\\eta}{2}.\n\\]\nNote that the above equation only shows descent for $\\hat f(\\hat \\vx^t)$. One can now show \\cite[Lemma 19]{GeHJY2015} that the iterates obtained by NGD on $\\hat f(\\cdot)$ do not deviate too far from those obtained on the actual function $f(\\cdot)$, using the Lipschitz-Hessian property. The proof is concluded by combining these two results. \n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:ngd-non-stationary}\nIf NGD is executed on a function that is $\\beta$-strongly smooth and satisfies the $(\\alpha, \\gamma, \\kappa, \\xi)$-SSa property, with step length $\\eta \\leq \\frac{1}{\\beta}\\cdot\\min\\bc{1,\\kappa^2}$, then for any iterate $\\vx^t$ that satisfies $\\norm{\\nabla f(\\vx^t)}_2 \\geq \\sqrt{\\eta\\beta}$, NGD ensures that\n\\[\n\\E{f(\\vx^{t+1})\\ |\\ \\vx^t} \\leq f(\\vx^t) - \\frac{\\beta}{2}\\cdot\\eta^2.\n\\]\n\\end{lemma}\n\\begin{proof}\nThis is the most carefree case as we are neither close to any local optimum nor to a saddle point. Unsurprisingly, the proof of this lemma follows from standard arguments since we have assumed the function to be strongly smooth. 
Since $\\vx^{t+1} = \\vx^t - \\eta\\cdot(\\nabla f(\\vx^t) + \\vzeta^t)$, we have, by an application of the strong smoothness property (see Definition~\\ref{defn:strong-cvx-smooth-fn}),\n\\[\nf(\\vx^{t+1}) \\leq f(\\vx^t) - \\ip{\\nabla f(\\vx^t)}{\\eta\\cdot(\\nabla f(\\vx^t) + \\vzeta^t)} + \\frac{\\beta\\eta^2}{2}\\cdot\\norm{\\nabla f(\\vx^t) + \\vzeta^t}_2^2.\n\\]\nUsing the facts $\\norm{\\vzeta^t}_2 = 1$ and $\\E{\\vzeta^t\\ |\\ \\vx^t} = \\vzero$, we get\n\\[\n\\E{f(\\vx^{t+1})\\ |\\ \\vx^t} \\leq f(\\vx^t) - \\eta\\br{1 - \\frac{\\beta\\eta}{2}}\\norm{\\nabla f(\\vx^t)}_2^2 + \\frac{\\beta\\eta^2}{2}.\n\\]\nSince we have $\\eta \\leq \\frac{1}{\\beta}\\cdot\\min\\bc{1,\\kappa^2}$ by assumption and $\\norm{\\nabla f(\\vx^t)}_2 \\geq \\sqrt{\\eta\\beta}$, we get\n\\[\n\\E{f(\\vx^{t+1})\\ |\\ \\vx^t} \\leq f(\\vx^t) - \\frac{\\beta}{2}\\cdot\\eta^2,\n\\]\nwhich proves the result.\n\\end{proof}\n\nThe final intermediate result is an \\emph{entrapment} lemma. It shows that once NGD gets sufficiently close to a local optimum, it gets trapped there for a really long time. Although the function $f$ satisfies strong convexity and smoothness properties in the neighborhood $\\cB_2(\\vx^\\ast,2\\xi)$, the proof of this result is still non-trivial due to the perturbations $\\vzeta^t$. Had the perturbations not been there, we could have utilized the analysis of the PGD algorithm\\elink{exer:saddle-smooth-nc-2} to show that we would converge to the local optimum $\\vx^\\ast$ at a linear rate.\n\nThe problem is that the perturbations do not diminish -- we always have $\\norm{\\vzeta^t}_2 = 1$. This prevents us from ever converging to the local optimum. Moreover, a sequence of unfortunate perturbations may have us kicked out of this nice neighborhood. 
The next result shows that we will not get kicked out of the neighborhood of $\\vx^\\ast$ for a really long time.\n\n\\begin{lemma}\n\\label{lem:ngd-local-nbhd}\nIf NGD is executed on a function that is $\\beta$-strongly smooth and satisfies the $(\\alpha, \\gamma, \\kappa, \\xi)$-SSa property, with step length $\\eta \\leq \\min\\bc{\\frac{\\alpha}{\\beta^2}, \\xi^2\\log^{-1}(\\frac{1}{\\delta\\xi})}$ for some $\\delta > 0$, then if some iterate $\\vx^t$ satisfies $\\norm{\\vx^t - \\vx^\\ast}_2 \\leq \\xi$, then NGD ensures that with probability at least $1 - \\delta$, for all $s \\in \\bs{t,t+\\frac{1}{\\eta^2}\\log\\frac{1}{\\delta}}$, we have\n\\[\n\\norm{\\vx^s - \\vx^\\ast}_2 \\leq \\sqrt{\\eta\\log\\frac{1}{\\eta\\delta}} \\leq \\xi.\n\\]\n\\end{lemma}\n\\begin{proof}\nUsing strong convexity in the neighborhood of $\\vx^\\ast$ and the fact that $\\vx^\\ast$, being a local minimum, satisfies $\\nabla f(\\vx^\\ast) = \\vzero$, gives us\n\\begin{align*}\nf(\\vx^t) &\\geq f(\\vx^\\ast) + \\frac{\\alpha}{2}\\norm{\\vx^t - \\vx^\\ast}_2^2\\\\\nf(\\vx^\\ast) &\\geq f(\\vx^t) + \\ip{\\nabla f(\\vx^t)}{\\vx^\\ast - \\vx^t} + \\frac{\\alpha}{2}\\norm{\\vx^t - \\vx^\\ast}_2^2.\n\\end{align*}\nTogether, the above two expressions give us\n\\[\n\\ip{\\nabla f(\\vx^t)}{\\vx^t - \\vx^\\ast} \\geq \\alpha\\cdot\\norm{\\vx^t - \\vx^\\ast}_2^2.\n\\]\nSince $f$ is $\\beta$-smooth, using the co-coercivity of the gradient for smooth convex functions (recall that $f$ is strongly convex, in this neighborhood of $\\vx^\\ast$), we conclude that $f$ has $\\beta$-Lipschitz gradients, which gives us\n\\[\n\\norm{\\nabla f(\\vx^t)}_2 = \\norm{\\nabla f(\\vx^t) - \\nabla f(\\vx^\\ast)}_2 \\leq \\beta\\cdot\\norm{\\vx^t - \\vx^\\ast}_2.\n\\]\nUsing the above results, $\\E{\\vzeta^t\\ |\\ \\vx^t} = \\vzero$ and $\\eta \\leq \\frac{\\alpha}{\\beta^2}$ gives us\n\\begin{align*}\n\\E{\\left.\\norm{\\vx^{t+1} - \\vx^\\ast}_2^2\\ \\right|\\ \\vx^t} ={}& \\E{\\left.\\norm{\\vx^t - \\eta(\\nabla 
f(\\vx^t) + \\vzeta^t) - \\vx^\\ast}_2^2\\ \\right|\\ \\vx^t}\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t={}& \\norm{\\vx^t - \\vx^\\ast}_2^2 - 2\\eta\\ip{\\nabla f(\\vx^t)}{\\vx^t - \\vx^\\ast}\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t&{}+ \\eta^2\\norm{\\nabla f(\\vx^t)}_2^2 + \\eta^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\leq{}& (1-2\\eta\\alpha+\\eta^2\\beta^2)\\cdot\\norm{\\vx^t - \\vx^\\ast}_2^2 + \\eta^2\\\\\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\leq{}& (1-\\eta\\alpha)\\cdot\\norm{\\vx^t - \\vx^\\ast}_2^2 + \\eta^2,\n\\end{align*}\nwhich, upon some manipulation, gives us\n\\[\n\\E{\\left.\\norm{\\vx^{t+1} - \\vx^\\ast}_2^2 - \\frac{\\eta}{\\alpha}\\ \\right|\\ \\vx^t} \\leq (1-\\eta\\alpha)\\br{\\norm{\\vx^t - \\vx^\\ast}_2^2 - \\frac{\\eta}{\\alpha}}.\n\\]\nThis tells us that on expectation, the distance of the iterates $\\vx^t$ from the local optimum $\\vx^\\ast$ will hover around $\\sqrt{\\eta\/\\alpha}$, which points towards the claimed result. The subsequent steps in the proof of this result require techniques from martingale theory, which we wish to avoid. We refer the reader to \\cite[Lemma 16]{GeHJY2015} for the details.\n\\end{proof}\n\nNotice that the above result traps the iterates within a ball of radius $\\xi$ around the local minimum $\\vx^\\ast$. Also notice that all points that are approximate local minima satisfy the preconditions of this lemma due to the SSa property, and consequently NGD gets trapped near such points. We now present the final convergence guarantee for NGD.\n\n\\begin{theorem}\n\\label{thm:conv-ngd}\nFor any $\\epsilon,\\delta > 0$, suppose NGD is executed on a function that is $\\beta$-strongly smooth, has $\\rho$-Lipschitz Hessians, and satisfies the $(\\alpha, \\gamma, \\kappa, \\xi)$-SSa property, with a step length $\\eta < \\eta_{\\max} = \\min\\bc{\\frac{\\epsilon^2}{\\log(1\/\\epsilon\\delta)},\\frac{\\alpha}{\\beta^2},\\frac{\\xi^2}{\\log(1\/\\xi\\delta)},\\frac{1}{(\\beta+\\rho)^2},\\frac{\\kappa^2}{\\beta}}$. 
Then, with probability at least $1 - \\delta$, after $T \\geq \\log p\/\\eta^2\\cdot\\log(2\/\\delta)$ iterations, NGD produces an iterate $\\vx^T$ that is $\\epsilon$-close to some local optimum $\\vx^\\ast$, i.e., $\\norm{\\vx^T - \\vx^\\ast}_2 \\leq \\epsilon$.\n\\end{theorem}\n\\clearpage\n\\begin{proof}\nWe partition the space $\\bR^p$ into 3 regions:\n\\begin{enumerate}\n\t\\item $\\kR_1 = \\bc{\\vx: \\norm{\\nabla f(\\vx)}_2 \\geq \\sqrt{\\eta\\beta}}$\n\t\\item $\\kR_2 = \\bc{\\vx: \\norm{\\nabla f(\\vx)}_2 < \\sqrt{\\eta\\beta}, \\lambda_{\\min}(\\nabla^2 f(\\vx)) \\leq -\\gamma}$\n\t\\item $\\kR_3 = \\bR^p\\backslash(\\kR_1\\cup\\kR_2)$\n\\end{enumerate}\nSince $\\sqrt{\\eta\\beta}\\leq \\kappa$ due to the setting of $\\eta_{\\max}$, the region $\\kR_1$ contains all points considered non-stationary by the SSa property (Definition~\\ref{defn:ssp}), and possibly some other points as well. Similarly, the region $\\kR_2$ can be shown to contain only saddle points. Since the SSa property assures us that a point that is neither non-stationary nor a saddle point is definitely an approximate local minimum, we deduce that the region $\\kR_3$ contains only approximate local minima.\n\nThe proof will use the following line of argument: since Lemmata~\\ref{lem:ngd-non-stationary} and \\ref{lem:ngd-saddle} assure us that whenever the NGD procedure is in regions $\\kR_1$ or $\\kR_2$ there is a large drop in function value, we should expect the procedure to enter region $\\kR_3$ sooner or later, since the function value cannot go on decreasing indefinitely. However, Lemma~\\ref{lem:ngd-local-nbhd} shows that once we are in region $\\kR_3$, we are trapped there. In the following analysis, we will ignore all non-essential constants and log factors.\n\nRecall that we let the NGD procedure last for $T = 1\/\\eta^2\\log(2\/\\delta)$ steps. Below we will show that in any sequence of $1\/\\eta^2$ steps, there is at least a $1\/2$ chance of encountering an iterate $\\vx^t \\in \\kR_3$. 
Since the entire procedure lasts $\\log(2\/\\delta)$ such sequences, we will conclude that with probability at least $1 - \\delta\/2$, we will encounter at least one iterate in the region $\\kR_3$ in the $T$ steps we execute.\n\nHowever, Lemma~\\ref{lem:ngd-local-nbhd} shows that once we enter the $\\kR_3$ neighborhood, with probability at least $1-\\delta\/2$, we are trapped there for at least $T$ steps. Applying the union bound will establish that with probability at least $1 - \\delta$, the NGD procedure will output $\\vx^T \\in \\kR_3$. Since we set $\\eta \\leq \\epsilon^2\/\\log(1\/\\epsilon\\delta)$, this will conclude the proof.\n\nWe are now left with proving that in every sequence of $1\/\\eta^2$ steps, there is at least a $1\/2$ chance of NGD encountering an iterate $\\vx^t \\in \\kR_3$. To do so, we set up the notion of \\emph{epochs}. These correspond, roughly, to the amount of time taken by NGD to reduce the function value by a significant amount. The first epoch starts at time $\\tau_1 = 0$. 
Subsequently, we define\n\\[\n\\tau_{i+1} = \\left\\{\n\\begin{array}{cl}\n\t\\tau_i + 1 &\\quad \\text{ if } \\vx^{\\tau_i} \\in \\kR_1 \\cup \\kR_3\\\\\n\t\\tau_i + \\frac{1}{\\eta} &\\quad \\text{ if } \\vx^{\\tau_i} \\in \\kR_2\n\\end{array}\n\\right.\n\\]\nIgnoring constants and other non-essential factors, we can rewrite the results of Lemmata~\\ref{lem:ngd-non-stationary} and \\ref{lem:ngd-saddle} as follows\n\\begin{align*}\n\\E{f(\\vx^{\\tau_{i+1}}) - f(\\vx^{\\tau_i})\\ |\\ \\vx^{\\tau_i} \\in \\kR_1} &\\leq -\\eta^2\\\\\n\\E{f(\\vx^{\\tau_{i+1}}) - f(\\vx^{\\tau_i})\\ |\\ \\vx^{\\tau_i} \\in \\kR_2} &\\leq -\\eta\n\\end{align*}\nPutting these together gives us\n\\[\n\\E{f(\\vx^{\\tau_{i+1}}) - f(\\vx^{\\tau_i})\\ |\\ \\vx^{\\tau_i} \\notin \\kR_3} \\leq -\\E{(\\tau_{i+1}-\\tau_i)\\ |\\ \\vx^{\\tau_i} \\notin \\kR_3}\\cdot\\eta^2\n\\]\nDefine the event $\\kE_t := \\bc{\\nexists\\ j \\leq t: \\vx^j \\in \\kR_3}$ and let $\\vone_E$ denote the indicator variable for event $E$ i.e., $\\vone_E = 1$ if $E$ occurs and $\\vone_E = 0$ otherwise. Then we have\n\\begin{align*}\n\\E{f(\\vx^{\\tau_{i+1}})\\cdot\\vone_{\\kE_{\\tau_{i+1}}} - f(\\vx^{\\tau_i})\\cdot\\vone_{\\kE_{\\tau_i}}} = \\E{f(\\vx^{\\tau_{i+1}})\\cdot(\\vone_{\\kE_{\\tau_{i+1}}} - \\vone_{\\kE_{\\tau_i}})} + \\E{(f(\\vx^{\\tau_{i+1}}) - f(\\vx^{\\tau_i}))\\cdot\\vone_{\\kE_{\\tau_i}}}\\\\\n\\leq B\\cdot(\\Pr{\\kE_{\\tau_{i+1}}} - \\Pr{\\kE_{\\tau_i}}) - \\eta^2\\cdot\\E{\\tau_{i+1} - \\tau_i\\ |\\ \\vone_{\\kE_{\\tau_i}}}\\cdot\\Pr{\\kE_{\\tau_i}},\n\\end{align*}\nwhere we have used the fact that $\\abs{f(\\vx)} \\leq B$ for all $\\vx \\in \\bR^p$. Since $\\kE_{t+1} \\Rightarrow \\kE_t$, we have $\\Pr{\\kE_{t+1}} \\leq \\Pr{\\kE_t}$. 
Summing the expressions above from $i = 1$ to $j$ and using $\\vx^1 \\notin \\kR_3$ gives us\n\\[\n\\E{f(\\vx^{\\tau_{j+1}})\\cdot\\vone_{\\kE_{\\tau_{j+1}}}} - f(\\vx^1) \\leq - \\eta^2\\E{\\tau_{j+1}}\\cdot\\Pr{\\kE_{\\tau_j}}.\n\\]\nHowever, since the function is bounded, i.e., $\\abs{f(\\vx)} \\leq B$, the left hand side cannot be smaller than $-2B$. Thus, if $\\E{\\tau_{j+1}} \\geq 4B\/\\eta^2$, then we must have $\\Pr{\\kE_{\\tau_j}} \\leq 1\/2$. This concludes the proof.\n\\end{proof}\n\nWe have not presented exact constants in the results to avoid clutter. The NGD algorithm actually requires the step length to be set to $\\eta < \\eta_{\\max} \\leq \\frac{1}{p}$, where $p$ is the ambient dimensionality. Now, since NGD is run for $\\Om{1\/\\eta^2}$ iterations and each iteration takes $\\bigO{p}$ time to execute, the total run-time of NGD is $\\bigO{p^3}$, which can be prohibitive. The bibliographic notes discuss more recent results that offer run-times that are linear in $p$. Also note that the NGD procedure requires $\\softO{1\/\\epsilon^4}$ iterations to converge within an $\\epsilon$ distance of a local optimum.\n\n\\section{Constrained Optimization with Non-convex Objectives}\n\n\\begin{algorithm}[t]\n\t\\caption{Projected Noisy Gradient Descent (PNGD)}\n\t\\label{algo:saddle-pngd}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Objective $f$, max step length $\\eta_{\\max}$, tolerance $\\epsilon$\n\t\t\t\\ENSURE A locally optimal point $\\hat\\vx \\in \\bR^p$\n\t\t\t\\STATE $\\vx^1 \\leftarrow \\text{\\textsf{INITIALIZE}}()$\n\t\t\t\\STATE Set $T \\leftarrow 1\/\\eta^2$, where $\\eta = \\min\\bc{\\epsilon^2\/\\log^2(1\/\\epsilon), \\eta_{\\max}}$\n\t\t\t\\FOR{$t= 1, 2, \\ldots, T$}\n\t\t\t\t\\STATE Sample perturbation $\\vzeta^t \\sim S^{p-1}$\\hfill\/\/\\texttt{Random pt. 
on unit sphere}%\n\t\t\t\t\\STATE $\\vg^t \\leftarrow \\nabla f(\\vx^t) + \\vzeta^t$\n\t\t\t\t\\STATE $\\vx^{t+1} \\leftarrow \\Pi_\\cW(\\vx^t - \\eta\\cdot\\vg^t)$\\hfill\/\/\\texttt{Project onto constraint set}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} $\\vx^T$\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nWe will now present extensions of the above discussion to cases where the optimization problem is constrained. We will concentrate on constrained optimization problems with equality constraints.\n\\begin{equation}\n\\begin{array}{cl}\n\t\\underset{\\vx\\in\\bR^p}\\min & f(\\vx)\\\\\n\t\\text{s.t.} & c_i(\\vx) = 0, i \\in [m]\n\\end{array}\n\\tag*{(CNOPT)}\\label{eq:nopt-cons}\n\\end{equation}\n\nLet $\\cW := \\bc{\\vx \\in \\bR^p: c_i(\\vx) = 0, i \\in [m]}$ denote the constraint set. In general, $\\cW$ is a \\emph{manifold} and we will assume that this manifold is \\emph{nice}, in that it is smooth and does not have corners. It is natural to attempt to solve the problem using a gPGD-like approach. Indeed, Algorithm~\\ref{algo:saddle-pngd} extends the NGD algorithm to include a projection step. The algorithm is actually identical to the NGD algorithm save for the step that projects the iterates onto the manifold $\\cW$. This projection step can be tricky in general, but can be performed efficiently, for instance, when the constraint functions $c_i$ are linear (see \\cite[Section 6.2]{BoydV2004}).\n\nThe complete analysis of this algorithm (see \\cite[Appendix B]{GeHJY2015}), although similar in parts to that of the NGD algorithm, is beyond the scope of this monograph. Instead, we just develop the design concepts and intuition used in the analysis. 
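As a concrete reference point, here is a minimal sketch of the PNGD update for the simplest manifold constraint, the unit sphere $\bc{\vx : \norm{\vx}_2 = 1}$, on which the projection $\Pi_\cW$ is simply normalization. The objective, step length, iteration budget, and seed below are illustrative choices, not values from the analysis:

```python
import math
import random

def pngd_sphere(grad_f, p, steps=2000, eta=0.01, seed=0):
    """Projected noisy gradient descent with the unit sphere {x : ||x||_2 = 1}
    as the constraint manifold; projection is plain normalization."""
    random.seed(seed)
    # Random initialization on the sphere.
    x = [random.gauss(0.0, 1.0) for _ in range(p)]
    nrm = math.sqrt(sum(v * v for v in x))
    x = [v / nrm for v in x]
    for _ in range(steps):
        # Perturbation drawn uniformly from S^{p-1} (normalized Gaussian).
        z = [random.gauss(0.0, 1.0) for _ in range(p)]
        znrm = math.sqrt(sum(v * v for v in z))
        g = grad_f(x)
        x = [xi - eta * (gi + zi / znrm) for xi, gi, zi in zip(x, g, z)]
        # Projection step: pull the iterate back onto the manifold.
        nrm = math.sqrt(sum(v * v for v in x))
        x = [v / nrm for v in x]
    return x

# Toy objective mirroring the tensor problem: f(x) = -||x||_4^4.
grad = lambda x: [-4.0 * v ** 3 for v in x]
x = pngd_sphere(grad, p=5)
```

On this toy instance, which mirrors the objective $-\norm{\vx}_4^4$ arising in the tensor decomposition application, the iterate typically concentrates on a single coordinate; the projection line is the only difference from the NGD update.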
The first step is to convert the above constrained optimization into an unconstrained one so that some of the tools used for NGD may be reapplied here.\n\nA very common way to do so is to first construct the \\emph{Lagrangian} \\cite[Chapter 5]{BoydV2004} of the problem defined as\n\\[\n\\cL(\\vx,\\vlambda) = f(\\vx) - \\sum_{i=1}^m\\lambda_ic_i(\\vx),\n\\]\nwhere $\\lambda_i$ are \\emph{Lagrange multipliers}. It is easy to verify that the solution to the problem \\eqref{eq:nopt-cons} coincides with that of the following problem\n\\[\n\\min_{\\vx \\in \\bR^p}\\max_{\\vlambda \\in \\bR^m}\\ \\cL(\\vx,\\vlambda)\n\\]\nNote that the above problem is unconstrained. For any $\\vx \\in \\bR^p$, define $\\vlambda^\\ast(\\vx) := \\arg\\min_{\\vlambda}\\ \\norm{\\nabla f(\\vx) - \\sum_{i=1}^m\\lambda_i\\nabla c_i(\\vx)}$ and $\\cL^\\ast(\\vx) := \\cL(\\vx,\\vlambda^\\ast(\\vx))$. We also define the tangent and normal spaces of the manifold $\\cW$ as follows.\n\n\\begin{definition}[Normal and Tangent Space]\n\\label{defn:saddle_tangent}\nFor a manifold $\\cW$ defined as the intersection of $m$ constraints of the form $c_i(\\vx) = 0$, given any $\\vx \\in \\cW$, define its tangent space as $\\cT(\\vx) = \\bc{\\vv\\ |\\ \\ip{\\nabla c_i(\\vx)}{\\vv} = 0, i \\in [m]}$ and its normal space as $\\cT^c(\\vx) = \\text{span}\\bc{\\nabla c_1(\\vx),\\ldots,\\nabla c_m(\\vx)}$.\n\\end{definition}\n\nIf we think of $\\cW$ as a smooth surface, then at any point, the tangent space defines the \\emph{tangent plane} of the surface at that point and the normal space consists of all vectors orthogonal to the tangent plane. The reason behind defining all the above quantities is the following. 
In unconstrained optimization we have the first and second order optimality conditions: if $\\vx^\\ast$ is a local minimum for $f(\\cdot)$, then $\\nabla f(\\vx^\\ast) = \\vzero$ and $\\nabla^2 f(\\vx^\\ast) \\succeq 0$.\n\nAlong the same lines, for constrained optimization problems, similar conditions exist characterizing stationary and optimal points: one can show \\citep{WrightN1999} that if $\\vx^\\ast$ is a local optimum for the constrained problem \\eqref{eq:nopt-cons}, then we must have $\\nabla \\cL^\\ast(\\vx^\\ast) = \\vzero$, as well as $\\vv^\\top\\nabla^2 \\cL^\\ast(\\vx^\\ast)\\vv \\geq 0$ for all $\\vv \\in \\cT(\\vx^\\ast)$. This motivates us to propose the following extension of the strict saddle property for constrained optimization problems.\n\n\\begin{definition}[Strict Constrained Saddle Property \\citep{GeHJY2015}]\n\\label{defn:scsp}\nA twice differentiable function $f(\\vx)$ with a constraint set $\\cW$ is said to satisfy the $(\\alpha, \\gamma, \\kappa, \\xi)$-strict constrained saddle (SCSa) property, if for every local minimum $\\vx^\\ast \\in \\cW$, we have $\\vv^\\top\\nabla^2\\cL^\\ast(\\vx')\\vv \\geq \\alpha$ for all $\\vv \\in \\cT(\\vx'), \\norm{\\vv}_2 = 1$ and for all $\\vx'$ in the region $\\cB_2(\\vx^\\ast,2\\xi)$, and moreover, any point $\\vx_0 \\in \\bR^p$ satisfies at least one of the following properties: \n\\begin{enumerate}\n\t\\item (Non-Stationary) $\\|\\nabla \\cL^\\ast(\\vx_0)\\|_2 \\geq \\kappa$\n\t\\item\t(Strict Saddle) $\\vv^\\top\\nabla^2\\cL^\\ast(\\vx_0)\\vv \\leq -\\gamma$ for some $\\vv \\in \\cT(\\vx_0), \\norm{\\vv}_2 = 1$\n\t\\item\t(Approx. Local Min.) 
For some local minimum $\\vx^\\ast$, $\\|\\vx_0-\\vx^\\ast\\|_2 \\leq \\xi$.\n\\end{enumerate}\n\\end{definition}\nThe following local convergence result can then be shown for the PNGD algorithm.\n\n\\begin{theorem}\n\\label{thm:conv-pngd}\nSuppose PNGD is executed on a constrained optimization problem that satisfies the $(\\alpha, \\gamma, \\kappa, \\xi)$-SCSa property, has $\\rho$-Lipschitz Hessians and whose constraint set is a smooth manifold $\\cW$. Then there exists a constant $\\eta_{\\max} = \\Theta(1)$ such that for any $\\epsilon,\\delta > 0$, if we set $\\eta = \\min\\bc{\\eta_{\\max},\\frac{\\epsilon^2}{\\log(1\/\\epsilon\\delta)}}$, then with probability at least $1 - \\delta$, after $T \\geq \\log p\/\\eta^2\\cdot\\log(2\/\\delta)$ iterations, PNGD produces an iterate $\\vx^T$ that is $\\epsilon$-close to some local optimum $\\vx^\\ast$ i.e., $\\norm{\\vx^T - \\vx^\\ast}_2 \\leq \\epsilon$.\n\\end{theorem}\n\nThe PNGD procedure requires $\\softO{1\/\\epsilon^4}$ iterations to converge within an $\\epsilon$ distance of a local optimum, similar to NGD. 
The proof of the above theorem \\cite[Lemma 35]{GeHJY2015} proceeds by showing that the PNGD updates can be rewritten as\n\\[\n\\vx^{t+1} = \\vx^t - \\eta\\cdot(\\nabla\\cL^\\ast(\\vx^t) + \\Pi_{\\cT(\\vx^t)}(\\vzeta^t)) + \\viota^t,\n\\]\nwhere $\\Pi_{\\cT(\\vx^t)}(\\cdot)$ is the projection of a vector onto the tangent space $\\cT(\\vx^t)$ and $\\viota^t$ is an error vector with small norm $\\norm{\\viota^t}_2 \\leq \\eta^2$.\n\n\\section{Application to Orthogonal Tensor Decomposition}\n\\label{sec:saddle-app-otd}\nRecall that we reduced the tensor decomposition problem to solving the following optimization problem \\eqref{eq:lrtd}\n\\begin{equation*}\n\\begin{array}{cl}\n\t\\max & T(\\vu,\\vu,\\vu,\\vu) = \\sum_{i=1}^r (\\vu_i^\\top\\vu)^4\\\\\n\t\\text{s.t.} & \\norm{\\vu}_2 = 1,\n\\end{array}\n\\end{equation*}\nThe above problem has internal symmetry with respect to permutation of the components, as well as sign flips (both $\\vu_i$ and $-\\vu_i$ are valid components) which gives rise to saddle points as we discussed earlier. Using some simple but slightly tedious calculations (see \\cite[Theorem 44, Lemmata 45, 46]{GeHJY2015}) it is possible to show that the above optimization problem does satisfy the $(3,7\/p,1\/p^c,1\/p^c)$-SCSa property for some constant $c > 0$. All other requirements for Theorem~\\ref{thm:conv-pngd} can also be shown to be satisfied.\n\nIt is easy to see that any solution to the problem must lie in the span of the components. Since the components $\\vu_i$ are orthonormal, this means that it suffices to look for solutions of the form $\\vu = \\sum_{i=1}^rx_i\\vu_i$. This gives us $T(\\vu,\\vu,\\vu,\\vu) = \\sum_{i=1}^rx_i^4$, and $\\norm{\\vu}_2 = \\norm{\\vx}_2$ where $\\vx = [x_1,x_2,\\ldots,x_r]$. 
This allows us to formulate the equivalent problem as\n\\begin{equation*}\n\\begin{array}{cl}\n\t\\min & -\\norm{\\vx}_4^4\\\\\n\t\\text{s.t.} & \\norm{\\vx}_2 = 1.\n\\end{array}\n\\end{equation*}\nNote that the above problem is non-convex as the objective function is concave. However, it is also possible to show\\elink{exer:saddle-multi-opt}\\elink{exer:saddle-multi-non-opt} that the only local minima of the optimization problem \\eqref{eq:lrtd} are $\\pm\\vu_i$. This is most fortunate since it shows that all local minima are actually global minima! Thus, the \\emph{deflation} strategy alluded to earlier can be successfully applied. We discover one component, say $\\vu_1$, by applying the PNGD algorithm to \\eqref{eq:lrtd}. Having recovered this component, we create a new tensor $T' = T - \\vu_1\\otimes\\vu_1\\otimes\\vu_1\\otimes\\vu_1$, apply the procedure again to discover a second component, and so on.\n\nThe work of \\cite{GeHJY2015} also discusses techniques to recover all $r$ components simultaneously, tensors with different positive weights on the components, as well as tensors of other orders.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:saddle-non-conv}\nShow that the optimization problem in the formulation~\\eqref{eq:lrtd} is non-convex in that it has a non-convex objective as well as a non-convex constraint set. Show that the constraint set may be convexified without changing the optimum of the problem.\n\\end{exer}\n\\begin{exer}\n\\label{exer:saddle-smooth-nc}\nConsider a differentiable function $f$ that is $\\beta$-strongly smooth but possibly non-convex. Show that if gradient descent is performed on $f$ with a static step length $\\eta \\leq \\frac{2}{\\beta}$, i.e.,\n\\[\n\\vx^{t+1} = \\vx^t - \\eta\\cdot\\nabla f(\\vx^t),\n\\]\nthen the function value $f$ will never increase across iterations, i.e., $f(\\vx^{t+1}) \\leq f(\\vx^t)$. 
This shows that on smooth functions, gradient descent enjoys monotonic progress whenever the step length is small enough.\\\\\n\\textit{Hint}: apply the SS property relating consecutive iterates.\n\\end{exer}\n\\begin{exer}\n\\label{exer:saddle-smooth-nc-2}\nFor the same setting as the previous problem, show that if we have $\\eta \\in\\bs{\\frac{1}{2\\beta},\\frac{1}{\\beta}}$ instead, then within $T = \\bigO{\\frac{1}{\\epsilon^2}}$ steps, gradient descent is guaranteed to identify an $\\epsilon$-stationary point, i.e., for some $t \\leq T$, we must have $\\norm{\\nabla f(\\vx^t)}_2 \\leq \\epsilon$.\\\\\n\\textit{Hint}: first apply SS to show that $f(\\vx^t) - f(\\vx^{t+1}) \\geq \\frac{1}{4\\beta}\\cdot\\norm{\\nabla f(\\vx^t)}_2^2$. Then use the fact that the total improvement in function value over $T$ time steps cannot exceed $f(\\vx^0) - f^\\ast$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:saddle-multi-opt}\nShow that every component vector $\\vu_i$ is a globally optimal point for the optimization problem in~\\eqref{eq:lrtd}.\\\\\n\\textit{Hint}: Observe the reduction in \\S~\\ref{sec:saddle-app-otd}. Show that it is equivalent to finding the maximum $L_2$ norm vector(s) among unit $L_1$ norm vectors. Find the Lagrangian dual of this optimization problem and simplify.\n\\end{exer}\n\\begin{exer}\n\\label{exer:saddle-multi-non-opt}\nGiven a rank-$r$ orthonormal tensor $T$, construct a non-trivial convex combination of its components that has unit $L_2$ norm but achieves a suboptimal objective value in the formulation~\\eqref{eq:lrtd}.\n\\end{exer}\n\n\\section{Bibliographic Notes}\nProgress in this area was extremely rapid at the time of writing this monograph. The following discussion outlines some recent advances.\n\nThe general problem of local optimization has been of interest in the optimization community for several years. 
Some of the earlier results in the area concentrated on second order or quasi-Newton methods, understandably so, since the Hessians can be used to distinguish between local optima and saddle points. Some key works include those of \\cite{NesterovP2006,DauphinPGCGB2014,SunQW2015,Yuan2015}. However, these techniques can be expensive and hence are not used in high-dimensional applications.\n\nAlgorithms for specialized applications, such as the EM algorithm we studied in \\S~\\ref{chap:em}, have received independent attention with respect to their local convergence properties. We refer the reader to the bibliographic notes \\S~\\ref{sec:em-bib} for the same. In recent years, the optimization of general non-convex problems has been studied from several perspectives. Below we outline some of the major thrust areas.\\\\\n\n\\noindent\\textbf{First Order Methods} Given their efficiency, the question of whether first order methods can offer local optimality has captured the interest of the field. In particular, much attention has been paid to standard gradient descent and its variants. The work of \\cite{LeeSJR2016} showed that when initialized at a random point, gradient descent avoids saddle points almost surely, although the work provides no definite rate of convergence in general.\n\nThe subsequent works of \\cite{SunQW2015,GeHJY2015,AnandkumarG2016} introduced structural properties such as the strict saddle property, and demonstrated that crisp convergence rates can be ensured for problems that do satisfy these structural properties. 
It is notable that the work of \\cite{SunQW2015} reconsidered second order methods while \\cite{GeHJY2015} were able to show that noisy stochastic gradient descent itself suffices.\n\nThe technique of using randomly perturbed (stochastic) gradients to escape saddle points receives attention in the more general framework of \\emph{Langevin Dynamics}, which studies cases in which the perturbations are non-isotropic or else are applied at a scale that adapts to the problem. The recent work of \\cite{ZhangLC2017} shows a powerful result that offers, for empirical risk minimization problems that are ubiquitous in machine learning, a guarantee of convergence to a local optimum of the population risk functional. This is a useful result since a majority of works in the literature focus on identifying local optima of the empirical risk functional, which depend on the training data, but which may correspond to bad solutions with respect to the population risk.\\\\\n\n\\noindent\\textbf{Non-smooth Optimization} All techniques we have discussed thus far in this monograph require the objective function to at least be differentiable. In fact, we go one step further and assume (restricted) smoothness. The work of \\cite{ReddiHSPS2016} shows how variance reduction techniques may be exploited to ensure $\\epsilon$-convergence to a first-order stationary point $\\vx^t$ (effectively a saddle point) where we have $\\norm{\\nabla f(\\vx^t)}_2 \\leq \\epsilon$, in no more than $\\bigO{1\/\\epsilon}$ iterations of stochastic mini-batch gradient descent even when the objective function is non-convex and non-smooth at the same time. We urge the reader to compare this with Exercise~\\ref{exer:saddle-smooth-nc-2} where it took $\\bigO{1\/\\epsilon^2}$ iterations to do the same, that too using full gradient descent. Also notable is the fact that in our analysis (see Theorem~\\ref{thm:conv-ngd}), it took NGD $\\bigO{1\/\\epsilon^4}$ steps to reach a local optimum. 
However, two caveats exist: 1) the method proposed by \\cite{ReddiHSPS2016} only reaches a saddle point, and 2) it assumes a \\emph{finite-sum} objective, i.e., one that has a decomposable form.\\\\\n\n\\noindent\\textbf{Accelerated Optimization} The works of \\cite{CarmonDHS2017,AgarwalA-ZBHM2017} extend the work of \\cite{ReddiHSPS2016} by offering convergence faster than $\\bigO{1\/\\epsilon^2}$ to a stationary point for general (non finite-sum) non-convex objectives. However, it should be noted that these techniques assume a smooth objective and convergence is guaranteed only to a saddle point, not a local minimum. Whereas the work of \\cite{CarmonDHS2017} invokes a variant of Nesterov's accelerated gradient technique to offer $\\epsilon$-convergence in $\\bigO{1\/\\epsilon^{7\/4}}$ iterations, the work of \\cite{AgarwalA-ZBHM2017} employs a second-order method to offer $\\epsilon$-convergence, also in $\\bigO{1\/\\epsilon^{7\/4}}$ iterations.\\\\\n\n\\noindent\\textbf{High-dimensional Optimization} When considering problems in high dimensions, it becomes difficult to execute algorithms whose run-times scale super-linearly in the ambient dimensionality. This is one reason why first-order methods are preferred. However, as we saw, the overall run-time of Theorem~\\ref{thm:conv-ngd} scales as $\\bigO{p^3}$ which makes the method prohibitive even for moderate-dimensional problems. The work of \\cite{Jin0NKJ17} proposes another perturbed gradient method that offers a run-time that depends only quasi-linearly on the ambient dimension. 
It should be noted that the works of \\cite{CarmonDHS2017,AgarwalA-ZBHM2017} also offer run-times that are linear in the ambient dimensionality although, as noted earlier, convergence is only guaranteed to a saddle point by these methods.\\\\\n\n\\noindent\\textbf{Escaping Higher-order Saddles} In our discussion, we were occupied with the problem of avoiding getting stuck at simple saddle points which readily reveal themselves by having distinctly positive and negative eigenvalues in the Hessian. However, there may exist more complex \\emph{degenerate saddle} points at which the Hessian has only non-negative eigenvalues and which thus masquerade as local minima. Such configurations yield complex cases such as \\emph{monkey saddles} and \\emph{connected saddles}. We did not address these. The work of \\cite{AnandkumarG2016} proposes a method based on the Cubic Regularization algorithm of \\cite{NesterovP2006} which is able to escape some of these more complex saddle points and achieve convergence to a point that enjoys \\emph{third order optimality}.\\\\\n\n\\noindent\\textbf{Training Deep Networks} Given the popularity of deep networks in several areas of learning and signal processing, as well as the fact that the task of training deep networks corresponds to non-convex optimization, a lot of recent efforts have focused on the problem of efficiently and provably training deep networks using non-convex optimization techniques. Some notable works include provable methods for training multi-layered perceptrons \\citep{GoelKlivans2017,ZhongSJBD2017,LiYuan2017} and special cases of convolutional networks known as non-overlapping convolutional networks \\citep{BrutzkusGloberson2017}. Whereas the works \\citep{BrutzkusGloberson2017,ZhongSJBD2017,LiYuan2017} utilize gradient-descent based techniques, \\cite{GoelKlivans2017} uses an application of isotonic regression and kernel techniques. 
The work of \\cite{LiYuan2017} shows that the inclusion of an \\emph{identity map} into the network eases optimization by making the training problem well-posed.\n\\chapter{Sparse Recovery}\n\\label{chap:spreg}\n\nIn this section, we will take a look at sparse recovery and sparse linear regression as applications of non-convex optimization. These are extremely well studied problems and find applications in several practical settings. This will be the first of four ``application'' sections where we apply non-convex optimization techniques to real-world problems.\n\n\\section{Motivating Applications}\nWe will take the following two running examples to motivate the sparse regression problem in two different settings.\\\\\n\n\\noindent\\textbf{Gene Expression Analysis} The availability of DNA micro-array gene expression data makes it possible to identify genetic explanations for a wide range of phenotypical traits such as physiological properties or even disease progressions. In such data, we are given, say for $n$ human test subjects participating in the study, the expression levels of a large number $p$ of genes (encoded as a real vector $\\vx_i \\in \\bR^p$), and the corresponding phenotypical trait $y_i \\in \\bR$. Figure~\\ref{fig:gene} depicts this for a hypothetical study on Type-I diabetes. For the sake of simplicity, we are considering cases where the phenotypical trait can be modeled as a real number -- this real number may indicate the severity of a condition or the level of some other biological measurement. More expressive models exist in the literature where the target phenotypical trait is itself represented as a vector \\cite[see for example,][]{JainT2015}.\n\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{gene.pdf}\n\\caption[Gene Expression Analysis as Sparse Regression]{Gene expression analysis can help identify genetic bases for physiological conditions. 
The expression matrix on the right has 32 rows and 15 columns: each row represents one gene being tracked and each column represents one test subject. A bright red (green) shade in a cell indicates an elevated (depressed) expression level of the corresponding gene in the test subject with respect to a reference population. A black\/dark shade indicates an expression level identical to the reference population. Notice that most genes do not participate in the progression of the disease in a significant manner. Moreover, the number of genes being tracked is much larger than the number of test subjects. This makes the problem of gene expression analysis an ideal application for sparse recovery techniques. Please note that names of genes and expression levels in the figure are illustrative and do not correspond to actual experimental observations. Figure adapted from \\citep{WilsonELRYSD-SMCS2003}.}%\n\\label{fig:gene}\n\\end{figure}\n\nFor the sake of simplicity, we assume that the phenotypical response is linearly linked to the gene expression levels, i.e., for some $\\bto \\in \\bR^p$, we have $y_i = \\vx_i^\\top\\bto + \\eta_i$ where $\\eta_i$ is some noise. The goal then is to use gene expression data to deduce an estimate for $\\bto$. Having access to the model $\\bto$ can be instrumental in discovering possible genetic bases for diseases, traits, etc. Consequently, this problem has significant implications for understanding physiology and developing novel medical interventions to treat and prevent diseases\/conditions.\n\nHowever, the problem fails to reduce to a simple linear regression problem for two important reasons. Firstly, although the number of genes whose expression levels are being recorded is usually very large (running into several tens of thousands), the number of samples (test subjects) is usually not nearly as large, i.e. $n \\ll p$. Traditional regression algorithms fall silent in such data-starved settings as they usually expect $n > p$. 
Secondly, and more importantly, we do not expect all genes being tracked to participate in realizing the phenotype. Indeed, the whole objective of this exercise is to identify a small set of genes which most prominently influence the given phenotype. Note that this implies that the vector $\\bto$ is very sparse. Traditional linear regression cannot guarantee the recovery of a sparse model.\\\\\n\n\\noindent\\textbf{Sparse Signal Transmission and Recovery} The task of transmitting and acquiring signals is a key problem in engineering. In several application areas such as magnetic resonance imaging and radio communication, \\emph{linear} measurement techniques, for example, sampling, are commonly used to acquire a signal. The task then is to reconstruct the original signal from these measurements. For the sake of simplicity, suppose we wish to sense\/transmit signals represented as vectors in $\\bR^p$. For various reasons (conserving energy, protection against data corruption, etc.), we may not want to transmit the signal directly and instead create a \\emph{sensing mechanism} wherein a signal $\\bt \\in \\bR^p$ is encoded into a signal $\\vy \\in \\bR^n$ and it is $\\vy$ that is transmitted. At the receiving end, $\\vy$ must be decoded back into $\\bt$. A popular way of creating sensing mechanisms -- also called \\emph{designs} -- is to come up with a set of $n$ linear functionals $\\vx_i: \\bR^p \\rightarrow \\bR$ and for any signal $\\bt \\in \\bR^p$, record the values $y_i = \\vx_i^\\top\\bt$. If we denote $X = [\\vx_1,\\ldots,\\vx_n]^\\top$ and $\\vy = [y_1,\\ldots,y_n]^\\top$, then $\\vy = X\\bt$ is transmitted. Note as a special case that if $n = p$ and $\\vx_i = \\ve_i$, then $X = I_{p\\times p}$ and $\\vy = \\bt$, i.e. we transmit the original signal itself.\n\nIf $p$ is very large then we naturally look for designs with $n \\ll p$. However, elementary results in algebra dictate that the recovery of $\\bt$ from $\\vy$ cannot be guaranteed even if $n = p - 1$. 
There is an irrecoverable loss of information and there could be (infinitely) many signals $\\vw$, all of which map to the same transmitted signal $\\vy$, making it impossible to recover the original signal uniquely. A result similar in spirit, called the Shannon-Nyquist theorem, holds for analog or continuous-time signals. Although this seems to spell doom for any efforts to perform \\emph{compressed} sensing and transmission, these negative results can actually be overcome by observing that in several useful settings, the signals we are interested in are actually very sparse, i.e. $\\bt \\in \\cB_0(s) \\subset \\bR^p$, $s \\ll p$. This realization is critical since it allows the possibility of specialized design matrices being used to transmit sparse signals in a highly compressed manner, i.e. with $n \\ll p$ but without any loss of information. However, the recovery problem now requires a sparse vector to be recovered from the transmitted signal $\\vy$ and the design matrix $X$.\n\n\\section{Problem Formulation}\n\\label{sec:spreg-prob-form}\nIn both the examples considered above, we wish to recover a sparse linear model $\\bt \\in \\cB_0(s) \\subset \\bR^p$ that fits the data, i.e., $y_i \\approx \\vx_i^\\top\\bt$, hence the name \\emph{sparse recovery}. In the gene analysis problem, the support of such a model $\\bt$ is valuable in revealing the identity of the genes involved in promoting the given phenotype\/disease. Similarly, in the sparse signal recovery problem, $\\bt$ is the (sparse) signal itself.\n\nThis motivates the sparse linear regression problem. In the following, we shall use $\\vx_i \\in \\bR^p$ to denote the features (gene expression levels\/measurement functionals). Each such feature vector will constitute a data point. There will be a \\emph{response} variable $y_i \\in \\bR$ (phenotype response\/measurement) associated with each data point. 
We will assume that the response variables are being generated using some underlying sparse \\emph{model} $\\bto \\in \\cB_0(s)$ (gene influence pattern\/sparse signal) as $y_i = \\vx_i^\\top\\bto + \\eta_i$ where $\\eta_i$ is some benign noise.\n\nIn both the gene expression analysis problem and the sparse signal recovery problem, recovering $\\bto$ from the data $(\\vx_1,y_1),\\ldots,(\\vx_n,y_n)$ then requires us to solve the following optimization problem: $\\underset{\\bt\\in\\bR^p, \\norm{\\bt}_0 \\leq s}{\\min}\\ \\sum_{i=1}^n\\br{y_i - \\vx_i^\\top\\bt}^2$. Rewriting the above in more succinct notation gives us\n\\begin{equation}\n\\underset{\\substack{\\bt\\in\\bR^p\\\\\\norm{\\bt}_0 \\leq s}}{\\min}\\ \\norm{\\vy - X\\bt}_2^2,\\tag*{(SP-REG)}\\label{eq:spreg}\n\\end{equation}\nwhere $X = [\\vx_1,\\ldots,\\vx_n]^\\top$ and $\\vy = [y_1,\\ldots,y_n]^\\top$. It is common to model the additive noise as \\emph{white noise}, i.e., $\\eta_i \\sim \\cN(0,\\sigma^2)$ for some $\\sigma > 0$. It should be noted that the sparse regression problem in \\eqref{eq:spreg} is an NP-hard problem \\citep{Natarajan1995}.\n\n\\section{Sparse Regression: Two Perspectives}\nAlthough we cast both the gene analysis and sparse signal recovery problems in the same framework of sparse linear regression, there are subtle but crucial differences between the two problem settings.\n\nNotice that the problem in sparse signal recovery is to come up with both a design matrix $X$ and a recovery algorithm $\\cA: \\bR^n \\times \\bR^{n \\times p} \\rightarrow \\bR^p$ such that all sparse signals can be accurately recovered from the measurements, i.e. $\\forall \\bt \\in \\cB_0(s) \\subset \\bR^p$, we have $\\cA(X\\bt, X) \\approx \\bt$.\n\nOn the other hand, in the gene expression analysis task, we do not have as direct a control over the effective design matrix $X$. In this case, the role of the design matrix is played by the gene expression data of the $n$ test subjects. 
Although we may choose which individuals we wish to include in our study, even this choice does not give us a direct handle on the properties of the design matrix. Our job here is restricted to coming up with an algorithm $\\cA$ which can, given the gene expression data for $p$ genes in $n$ test subjects, figure out a sparse set of $s$ genes which collectively promote the given phenotype i.e. for any $\\bt \\in \\cB_0(s) \\subset \\bR^p$ and given $X \\in \\bR^{n \\times p}$, we desire $\\cA(X\\bt, X) \\approx \\bt$.\n\nThe distinction between the two settings is now apparent: in the first setting, the design matrix is totally in our control. We may design it to have specific properties as required by the recovery algorithm. However, in the second case, the design matrix is mostly given to us. We have no fine control over its properties.\n\nThis will make an important difference in the algorithms that operate in these settings since algorithms for sparse signal recovery would be able to make very stringent assumptions regarding the design matrix since we ourselves create this matrix from scratch. However, for the same reason, algorithms working in \\emph{statistical learning} settings such as the gene expression analysis problem, would have to work with relaxed assumptions that can be expected to be satisfied by natural data. We will revisit this point later once we have introduced the reader to algorithms for performing sparse regression.\n\n\\section{Sparse Recovery via Projected Gradient Descent}\nThe formulation in \\eqref{eq:spreg} looks strikingly similar to the convex optimization problem \\eqref{eq:cons-cvx-opt} we analyzed in \\S~\\ref{chap:tools} if we take $f(\\bt) = \\norm{\\vy - X\\bt}_2^2$ as the (convex) objective function and $\\cC = \\cB_0(s)$ as the (non-convex) constraint set. Given this, it is indeed tempting to adapt Algorithm~\\ref{algo:pgd} to solve this problem. 
The only difference would be that the projection step would now have to project onto a non-convex set $\\cB_0(s)$. However, as we have seen in \\S~\\ref{sec:non-cvx-proj}, this can be done efficiently. The resulting algorithm is a variant of the gPGD algorithm (Algorithm~\\ref{algo:gpgd}) that we studied in \\S~\\ref{chap:pgd} and is referred to as \\emph{Iterative Hard Thresholding} (IHT) in literature. The IHT algorithm is outlined in Algorithm~\\ref{algo:iht}.\n\n\\begin{algorithm}[t]\n\t\\caption{Iterative Hard-thresholding (IHT)}\n\t\\label{algo:iht}\n\t\\begin{algorithmic}[1]\n\t\t\t\\REQUIRE Data $X, \\vy$, step length $\\eta$, projection sparsity level $k$\n\t\t\t\\ENSURE A sparse model $\\bth \\in \\cB_0(k)$\n\t\t\t\\STATE $\\bt^1 \\leftarrow \\vzero$\n\t\t\t\\FOR{$t = 1, 2, \\ldots$}\n\t\t\t\t\\STATE $\\vz^{t+1} \\leftarrow \\btt - \\eta\\cdot \\frac{1}{n}X^\\top(X\\btt - \\vy)$\n\t\t\t\t\\STATE $\\btn \\leftarrow \\Pi_{\\cB_0(k)}(\\vz^{t+1})$ \\hfill\/\/\\texttt{see \\S~\\ref{sec:non-cvx-proj}}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE \\textbf{return} {$\\btt$}\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIt should come as no surprise that Algorithm~\\ref{algo:iht} turns out to be extremely simple to implement, as well as extremely fast in execution, given that only gradient steps are required. Indeed, IHT is a method of choice for practitioners given its ease of use and speed. However, much less is clear about the recovery guarantees of this algorithm, especially since we have already stated that \\eqref{eq:spreg} is NP-hard to solve \\citep{Natarajan1995}. Note that since the problem involves non-convex constraints, Theorem~\\ref{thm:pgd-conv-proof} no longer applies. 
This seems to destroy all hope of proving a recovery guarantee, until one observes that the NP-hardness result does not preclude the possibility of solving this problem efficiently when there is special structure in the design matrix $X$.\n\nIndeed, if $X = I_{p\\times p}$, it is trivial to recover \\emph{any} underlying sparse model $\\bto$ by simply returning $\\vy$. This toy case actually holds the key to efficient sparse recovery. Notice that when $X = I$, the design matrix is an \\emph{isometry} -- it completely preserves the geometry of the space $\\bR^p$. However, one could argue that this is an uninteresting and expensive design with $n = p$. In a long line of illuminating results, which we shall now discuss, it was revealed that even if the design matrix is not a global isometry such as $I$ but a \\emph{restricted isometry} that only preserves the geometry of sparse vectors, recovery is possible with $n \\ll p$.\n\nThe observant reader may be wondering why we are not applying the gPGD analysis from \\S~\\ref{chap:pgd} directly here, and whether notions similar to the RSC\/RSS notions discussed there make sense here too. We request the reader to read on. We will find that not only do those notions extend here, but they also have beautiful interpretations. Moreover, instead of directly applying the gPGD analysis (Theorem~\\ref{thm:gpgd-rsc-rss-proof}), we will see a simpler convergence proof tailored to the sparse recovery problem which also gives a sharper result.\n\n\\section{Restricted Isometry and Other Design Properties}\nAs we observed previously, design matrices such as $I_p$ which preserve the geometry of signals\/models seem to allow for recovery of sparse signals. We now formalize this intuition further and develop specific conditions on the design matrix $X$ which guarantee \\emph{universal recovery} i.e. 
for every $\\bt \\in \\cB_0(s)$, it is possible to uniquely recover $\\bt$ from the measurements $X\\bt$.\n\nIt is easy to see that a design matrix that \\emph{identifies} sparse vectors cannot guarantee universal recovery. Suppose we have a design matrix $X \\in \\bR^{n \\times p}$ such that for some $\\bt_1, \\bt_2 \\in \\cB_0(s)$ and $\\bt_1 \\neq \\bt_2$, we get $\\vy_1 = \\vy_2$ where $\\vy_1 = X\\bt_1$ and $\\vy_2 = X\\bt_2$. In this case, it is information theoretically impossible to distinguish between $\\bt_1$ and $\\bt_2$ on the basis of measurements made using $X$ i.e. using $\\vy_1$ (or $\\vy_2$). Consequently, this design matrix cannot be used for universal recovery since it produces measurements that confuse between sparse vectors. It can be seen\\elink{exer:spreg-confuse} that such a design matrix will not identify just one pair of sparse vectors but an infinite set of pairs (indeed an entire subspace) of sparse vectors.\n\nThus, it is clear that the design matrix must preserve the geometry of the set of sparse vectors while projecting them from a high $p$-dimensional space to a low $n$-dimensional space. Recall that in sparse recovery settings, we usually have $n \\ll p$. The \\emph{Nullspace Property} presented below, and the others thereafter, are formalizations of this intuition. For any subset of coordinates $S \\subset [p]$, let us define the set $\\cC(S) := \\bc{\\vw \\in \\bR^p, \\norm{\\vw_S}_1 \\geq \\norm{\\vw_{\\bar S}}_1}$. This is the (convex) set of points that place a majority of their weight on coordinates in the set $S$. Define $\\cC(k) := \\bigcup_{S: |S| = k}\\cC(S)$ to be the (non-convex\\elink{exer:spreg-c-k-non-conv}) set of points that place a majority of their weight on some $k$ coordinates. 
Note that $\\cC(k) \\supset \\cB_0(k)$ since $k$-sparse vectors put \\emph{all} their weight on some $k$ coordinates.\n\n\\begin{definition}[Nullspace Property \\citep{CohenDDeV2009}]\n\\label{defn:ns-prop}\nA matrix $X \\in \\bR^{n \\times p}$ is said to satisfy the null-space property of order $k$ if $\\text{ker}(X) \\cap \\cC(k) = \\bc{\\vzero}$, where $\\text{ker}(X) = \\{\\vw \\in \\bR^p: X\\vw = \\vzero\\}$ is the kernel of the linear transformation induced by $X$ (also called its null-space).\n\\end{definition}\nIf a design matrix satisfies this property, then vectors in its null-space are disallowed from concentrating a majority of their weight on any $k$ coordinates. Clearly no $k$-sparse vector is present in the null-space either. If a design matrix has the null-space property of order $2s$, then it can never identify two $s$-sparse vectors\\elink{exer:spreg-nsp-iden} -- something that we have already seen as essential to ensure universal recovery. A strengthening of the Nullspace Property gives us the Restricted Eigenvalue Property.\n\n\\begin{definition}[Restricted Eigenvalue Property \\citep{RaskuttiWY2010}]\n\\label{def:re}\nA matrix $X \\in \\bR^{n \\times p}$ is said to satisfy the restricted eigenvalue property of order $k$ with constant $\\alpha$ if for all $\\bt \\in \\cC(k)$, we have $\\frac{1}{n}\\norm{X\\bt}_2^2 \\geq \\alpha\\cdot\\norm{\\bt}_2^2$.\n\\end{definition}\nThis means that not only are $k$-sparse vectors absent from the null-space, they actually retain a good fraction of their length after projection as well. In particular, if $k = 2s$, then for any $\\bt_1,\\bt_2 \\in \\cB_0(s)$, we have $\\frac{1}{n}\\norm{X(\\bt_1 - \\bt_2)}_2^2 \\geq \\alpha\\cdot\\norm{\\bt_1 - \\bt_2}_2^2$. Thus, the distance between \\emph{any two} sparse vectors is never greatly diminished after projection. Such behavior is the hallmark of an isometry, which preserves the geometry of vectors. 
The next property further explicates this and is, not surprisingly, called the Restricted Isometry Property.\n\n\\begin{definition}[Restricted Isometry Property \\citep{CandesT2005}]\nA matrix $X \\in \\bR^{n \\times p}$ is said to satisfy the restricted isometry property (RIP) of order $k$ with constant $\\delta_k \\in [0,1)$ if for all $\\bt \\in \\cB_0(k)$, we have\n\\begin{center}\n\t$(1-\\delta_k)\\cdot\\norm{\\bt}_2^2 \\leq \\frac{1}{n}\\norm{X\\bt}_2^2 \\leq (1+\\delta_k)\\cdot\\norm{\\bt}_2^2$.\n\\end{center}\n\\end{definition}\nThe above property is most widely used in analyzing sparse recovery and compressive sensing algorithms. However, it is a bit restrictive since it requires the distortion parameters to be of the kind $(1 \\pm \\delta)$ for $\\delta \\in [0,1)$. A generalization of this property that is especially useful in settings where the properties of the design matrix are not strictly controlled by us, such as the gene expression analysis problem, is the following notion of restricted strong convexity and smoothness.\n\n\\begin{definition}[Restricted Strong Convexity\/Smoothness Property \\citep{JainTK2014, JalaliJR2011}]\n\\label{defn:rsc-rss}\nA matrix $X \\in \\bR^{n \\times p}$ is said to satisfy the $\\alpha$-restricted strong convexity (RSC) property and the $\\beta$-restricted smoothness (RSS) property of order $k$ if for all $\\bt \\in \\cB_0(k)$, we have\n\\begin{center}\n\t$\\alpha\\cdot\\norm{\\bt}_2^2 \\leq \\frac{1}{n}\\norm{X\\bt}_2^2 \\leq \\beta\\cdot\\norm{\\bt}_2^2$.\n\\end{center}\n\\end{definition}\n\nThe only difference between the RIP and the RSC\/RSS properties is that the former forces constants to be of the form $1 \\pm \\delta_k$ whereas the latter does not impose any such constraints. 
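For a concrete design matrix, the constants in Definition~\ref{defn:rsc-rss} can be probed numerically by computing extreme eigenvalues of restricted Gram matrices. The sketch below is a minimal illustration of this idea: since checking all $\binom{p}{k}$ supports is intractable, it samples only a few random supports, so the resulting estimates merely bracket the true constants (the reported $\alpha$ is an upper bound on the true RSC constant, and the reported $\beta$ a lower bound on the true RSS constant).

```python
import numpy as np

# Estimate RSC/RSS constants of order k for a design X by sampling
# random supports S with |S| = k and computing the extreme eigenvalues
# of the restricted Gram matrices (1/n) * X_S^T X_S. Sampling supports
# only brackets the true constants, which range over all supports.
def estimate_rsc_rss(X, k, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    alpha, beta = np.inf, 0.0
    for _ in range(n_trials):
        S = rng.choice(p, size=k, replace=False)
        G = X[:, S].T @ X[:, S] / n          # restricted Gram matrix
        eigs = np.linalg.eigvalsh(G)         # sorted ascending
        alpha = min(alpha, eigs[0])          # smallest restricted eigenvalue seen
        beta = max(beta, eigs[-1])           # largest restricted eigenvalue seen
    return alpha, beta

n, p, k = 200, 1000, 10
X = np.random.default_rng(1).normal(size=(n, p))   # standard Gaussian design
alpha, beta = estimate_rsc_rss(X, k)
# For Gaussian designs with n much larger than k log(p/k), both constants
# concentrate around 1, i.e., X acts as a restricted isometry.
print(alpha, beta)
```

For the dimensions used here, the estimates straddle $1$ from below and above, which is exactly the restricted isometry behavior discussed next.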
The reader will notice the similarities between the definition of restricted strong convexity and smoothness given here and Definition~\\ref{defn:res-strong-cvx-smooth-fn}, where we defined restricted strong convexity and smoothness notions for general functions. The reader is invited to verify\\elink{exer:spreg-rsc-is-rsc} that the two are indeed related.\n\nIndeed, Definition~\\ref{defn:res-strong-cvx-smooth-fn} can be seen as a generalization of Definition~\\ref{defn:rsc-rss} to general functions \\citep{JainTK2014}. For twice differentiable functions, both definitions can be seen as placing restrictions on the (restricted) eigenvalues of the Hessian of the function.\n\nIt is a useful exercise to verify\\elink{exer:spreg-hierarchy} that these properties fall in a hierarchy: RSC-RSS $\\Rightarrow$ REP $\\Rightarrow$ NSP for an appropriate setting of constants. We will next establish the main result of this section: if the design matrix satisfies the RIP condition with appropriate constants, then the IHT algorithm does indeed guarantee universal sparse recovery. Subsequently, we will give pointers to recent results that guarantee universal recovery in gene expression analysis-like settings.\n\n\\section{Ensuring RIP and other Properties}\n\\label{sec:rip-ensure}\nSince properties such as RIP, RE and RSC are so crucial for guaranteed sparse recovery, it is important to study problem settings in which these are satisfied by actual data. A lot of research has gone into explicit construction of matrices that provably satisfy the RIP property.\\\\\n\n\\noindent\\textbf{Random Designs}: The simplest of these results are the so-called random design constructions which guarantee that if the matrix is sampled from certain well-behaved distributions, then it will satisfy the RIP property with high probability. 
For instance, the work of \\citet{BaraniukDdVW2008} shows the following result:\n\n\\begin{theorem}\\cite[Theorem 5.2]{BaraniukDdVW2008}\nLet $\\cD$ be a distribution over matrices in $\\bR^{n \\times p}$ such that for any fixed $\\vv \\in \\bR^p, \\epsilon > 0$,\n\\[\n\\Prr{X \\sim \\cD^{n \\times p}}{\\abs{\\norm{X\\vv}_2^2 - \\norm{\\vv}_2^2} > \\epsilon\\cdot\\norm{\\vv}_2^2} \\leq 2\\exp(-\\Omega(n)).\n\\]\nThen, for any $k < p\/2$, matrices $X$ generated from this distribution also satisfy the RIP property at order $k$ with constant $\\delta$ with probability at least $1 - 2\\exp(-\\Omega(n))$ whenever $n \\geq \\Omega\\br{\\frac{k}{\\delta^2}\\log\\frac{p}{k}}$.\n\\end{theorem}\n\nThus, a distribution over matrices that, for every \\emph{fixed} vector, acts as an almost isometry with high probability, is also guaranteed to, with very high probability, generate matrices that act as a restricted isometry \\emph{simultaneously} over all sparse vectors. Such matrix distributions are easy to construct -- one simply needs to sample each entry of the matrix independently according to one of the following distributions:\n\\begin{enumerate}\n\\item sample each entry from the Gaussian distribution $\\cN(0,1\/n)$.\n\\item set each entry to $\\pm 1\/\\sqrt n$ with equal probability.\n\\item set each entry to $0$ w.p. $2\/3$ and $\\pm \\sqrt{3\/n}$ w.p. $1\/6$.\n\\end{enumerate}\n\nThe work of \\citet{AgarwalNW2012} shows that the RSC\/RSS properties are satisfied whenever rows of the matrix $X$ are drawn from a sub-Gaussian distribution over $p$-dimensional vectors with a non-singular covariance matrix. This result is useful since it shows that real-life data, which can often be modeled as vectors being drawn from sub-Gaussian distributions, will satisfy these properties with high probability. 
This is crucial for sparse recovery and other algorithms to be applicable to real-life problems such as the gene-expression analysis problem.\n\nIf one can tolerate a slight blowup in the number of rows of the matrix $X$, then there exist better constructions with the added benefit of allowing fast matrix-vector products. The initial work of \\citet{CandesT2005} itself showed that selecting each row of a Fourier transform matrix independently with probability $\\bigO{k\\frac{\\log^6 p}{p}}$ results in an RIP matrix with high probability. More recently, this was improved to $\\bigO{k\\log^2 k\\frac{\\log p}{p}}$ in the work of \\cite{HavivR17}. A matrix-vector product of a $k$-sparse vector with such a matrix takes only $\\bigO{k\\log^2 p}$ time whereas a dense matrix filled with Gaussians would have taken up to $\\bigO{k^2\\log p}$ time. There exist more involved hashing-based constructions that simultaneously offer reduced sample complexity and fast matrix-vector multiplications \\citep{NelsonPW2014}.\\\\\n\n\\noindent\\textbf{Deterministic Designs}: There exist far fewer and far weaker results for deterministic constructions of RIP matrices. The initial results in this direction all involved constructing \\emph{incoherent} matrices. A matrix $X \\in \\bR^{n \\times p}$ with unit-norm columns is said to be $\\mu$-incoherent if for all $i \\neq j \\in [p]$, $\\abs{\\ip{X_i}{X_j}} \\leq \\mu$. A $\\mu$-incoherent matrix always satisfies\\elink{exer:spreg-inc-rip} RIP at order $k$ with parameter $\\delta = (k-1)\\mu$.\n\nDeterministic constructions of incoherent matrices with $\\mu = \\bigO{\\frac{\\log p}{\\sqrt n\\log n}}$ are well known since the work of \\citet{Kashin1975}. However, such constructions require $n = \\tilde\\Omega\\br{\\frac{k^2\\log^2 p}{\\delta^2}}$ rows which is quadratically more than what random designs require. 
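Unlike RIP constants, which are intractable to compute in general, the coherence of a given matrix can be computed exactly with a single Gram-matrix computation; this is what makes incoherence-based certificates attractive. A minimal sketch (using a random matrix purely for illustration, not one of the deterministic constructions discussed here):

```python
import numpy as np

# Coherence mu of a matrix with unit-norm columns:
#   mu = max over i != j of |<X_i, X_j>|.
# A mu-incoherent matrix satisfies RIP at order k with delta = (k - 1) * mu.
def coherence(X):
    Xn = X / np.linalg.norm(X, axis=0)    # normalize columns to unit norm
    G = np.abs(Xn.T @ Xn)                 # absolute Gram matrix
    np.fill_diagonal(G, 0.0)              # ignore the diagonal <X_i, X_i> = 1
    return G.max()

rng = np.random.default_rng(0)
n, p, k = 256, 1024, 5
X = rng.normal(size=(n, p))
mu = coherence(X)        # on the order of sqrt(log p / n) for Gaussian matrices
delta = (k - 1) * mu     # implied RIP constant at order k
print(mu, delta)
```

With these dimensions, the implied $\delta = (k-1)\mu$ typically comes out close to or above $1$ even for a modest $k$, which illustrates concretely why coherence-based certificates require the quadratically larger number of rows mentioned above.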
The first result to improve upon these constructions came from the work of \\citet{BourgainDFKK2011}, which gave deterministic combinatorial constructions that assured the RIP property with $n = \\softO{\\frac{k^{(2-\\epsilon)}}{\\delta^2}}$ for some constant $\\epsilon > 0$. However, to date, substantially better constructions are not known.\n\n\\section{A Sparse Recovery Guarantee for IHT}\nWe will now establish a convergence result for the IHT algorithm. Although the analysis for the gPGD algorithm (Theorem~\\ref{thm:gpgd-rsc-rss-proof}) can be adapted here, the following proof is much more closely tuned to the sparse recovery problem and offers a tighter analysis and several problem-specific insights.\n\\begin{theorem}\n\\label{thm:iht-conv-proof-rip}\nSuppose $X \\in \\bR^{n \\times p}$ is a design matrix that satisfies the RIP property of order $3s$ with constant $\\delta_{3s} < \\frac{1}{2}$. Let $\\bto \\in \\cB_0(s) \\subset \\bR^p$ be an arbitrary sparse vector and let $\\vy = X\\bto$. Then the IHT algorithm (Algorithm~\\ref{algo:iht}), when executed with a step length $\\eta = 1$ and a projection sparsity level $k = s$, ensures $\\norm{\\btt - \\bto}_2 \\leq \\epsilon$ after at most $t = \\bigO{\\log\\frac{\\norm{\\bto}_2}{\\epsilon}}$ iterations of the algorithm.\n\\end{theorem}\n\\begin{proof}\nWe start off with some notation. Let $S^\\ast := \\supp(\\bto)$ and $S^t := \\supp(\\btt)$. Let $I^t := S^t \\cup S^{t+1} \\cup S^\\ast$ denote the union of the supports of the two consecutive iterates and the optimal model. The reason behind defining this quantity is that we are assured that while analyzing this update step, the two \\emph{error vectors} $\\btt - \\bto$ and $\\btn - \\bto$, which will be the focal point of the analysis, have support within $I^t$. Note that $|I^t| \\leq 3s$. 
Please refer to the notation section at the beginning of this monograph for the interpretation of the notation $\\vx_I$ and $A_I$ for a vector $\\vx$, matrix $A$ and set $I$.\n\nWith $\\eta = 1$, we have (refer to Algorithm~\\ref{algo:iht}), $\\vz^{t+1} = \\btt - \\frac{1}{n}X^\\top(X\\btt - \\vy)$. However, due to the (non-convex) projection step $\\btn = \\Pi_{\\cB_0(k)}(\\vz^{t+1})$, applying projection property-O gives us\n\\[\n\\norm{\\btn - \\vz^{t+1}}_2^2 \\leq \\norm{\\bto - \\vz^{t+1}}_2^2.\n\\]\nNote that none of the other projection properties are applicable here since the set of sparse vectors is a non-convex set. Now, by Pythagoras' theorem, for any vector $\\vv \\in \\bR^p$, we have $\\norm{\\vv}_2^2 = \\norm{\\vv_I}_2^2 + \\norm{\\vv_{\\bar I}}_2^2$ which gives us\n\\[\n\\norm{\\btn_I - \\vz^{t+1}_I}_2^2 + \\norm{\\btn_{\\bar I} - \\vz^{t+1}_{\\bar I}}_2^2 \\leq \\norm{\\bto_I - \\vz^{t+1}_I}_2^2 + \\norm{\\bto_{\\bar I} - \\vz^{t+1}_{\\bar I}}_2^2\n\\]\nNow it is easy to see that $\\btn_{\\bar I} = \\bto_{\\bar I} = \\vzero$. 
Hence we have\n\\[\n\\norm{\\btn_I - \\vz^{t+1}_I}_2 \\leq \\norm{\\bto_I - \\vz^{t+1}_I}_2\n\\]\nUsing the fact that $\\vy = X\\bto$, and denoting $\\bar X = \\frac{1}{\\sqrt n} X$ we get\n\\[\n\\norm{\\btn_I - (\\btt_I - \\bar X_I^\\top \\bar X(\\btt - \\bto))}_2 \\leq \\norm{\\bto_I - (\\btt_I - \\bar X_I^\\top \\bar X(\\btt - \\bto))}_2\n\\]\nAdding and subtracting $\\bto_I$ from the expression inside the norm operator on the LHS, rearranging, and applying the triangle inequality for norms gives us\n\\[\n\\norm{\\btn_I - \\bto_I}_2 \\leq 2\\norm{(\\btt_I - \\bto_I) - \\bar X_I^\\top \\bar X(\\btt - \\bto)}_2\n\\]\nAs $\\btt_I = \\btt, \\btn_I = \\btn$, observing that $X(\\btt - \\bto) = X_I(\\btt - \\bto)$ gives us\n\\begin{align*}\n\\norm{\\btn - \\bto}_2 &\\leq 2\\norm{(I - \\bar X_I^\\top\\bar X_I)(\\btt - \\bto)}_2\\\\\n&\\leq 2\\norm{I - \\bar X_I^\\top\\bar X_I}_2\\cdot\\norm{\\btt - \\bto}_2\\\\\n&\\leq 2\\delta_{3s}\\norm{\\btt - \\bto}_2,\n\\end{align*}\nwhich finishes the proof since $2\\delta_{3s} < 1$ ensures a linear rate of convergence. The second inequality above follows from the definition of the spectral norm and the third inequality follows from the fact that RIP implies\\elink{exer:spreg-rip-eigen} that for any $\\abs I \\leq 3s$, the eigenvalues of the matrix $\\bar X_I^\\top\\bar X_I$ lie in the interval $[1-\\delta_{3s}, 1+\\delta_{3s}]$, so that the eigenvalues of $I - \\bar X_I^\\top\\bar X_I$ lie in $[-\\delta_{3s}, \\delta_{3s}]$.\n\\end{proof}\n\nWe note that this result holds even if the hard thresholding level is set to $k > s$. It is easy to see that the condition $\\delta_{3s} < \\frac 12$ is equivalent to the \\emph{restricted condition number} (over $3s$-sparse vectors) of the corresponding sparse recovery problem being upper bounded by $\\kappa_{3s} < 3$. Similar to Theorem~\\ref{thm:gpgd-rsc-rss-proof}, here also we require an upper bound on the restricted condition number of the problem.
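The update analyzed above is straightforward to implement. The following is a minimal sketch of IHT in the noiseless setting (in Python, assuming the NumPy library; the problem sizes are illustrative, and the random Gaussian design is used only because such matrices satisfy RIP-style conditions with high probability):

```python
import numpy as np

def hard_threshold(z, k):
    """Projection onto B_0(k): keep the k largest-magnitude entries of z."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht(X, y, k, eta=1.0, iters=300):
    """IHT: a gradient step on the least-squares objective, then hard thresholding."""
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(iters):
        theta = hard_threshold(theta - eta * X.T @ (X @ theta - y) / n, k)
    return theta

# Noiseless recovery y = X theta* with an illustrative random Gaussian design
rng = np.random.default_rng(0)
n, p, s = 400, 800, 8
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
theta_star[support] = rng.standard_normal(s)
theta_hat = iht(X, X @ theta_star, k=s)
```

Note that, exactly as in the theorem, the step length is simply $\\eta = 1$ and the thresholding level is the true sparsity $k = s$; no tuning is needed beyond knowing $s$.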
It is interesting to note that a direct application of Theorem~\\ref{thm:gpgd-rsc-rss-proof} would have instead required $\\delta_{2s} < \\frac 13$ (or equivalently $\\kappa_{2s} < 2$), which can be shown to be a harsher requirement than what we have achieved. Moreover, applying Theorem~\\ref{thm:gpgd-rsc-rss-proof} would have also required us to set the step length to the specific quantity $\\eta = \\frac{1}{1+\\delta_s}$ while executing the gPGD algorithm, whereas the IHT algorithm requires only $\\eta = 1$.\n\nAn alternative proof of this result appears in the work of \\cite{GargK2009}, which also requires the condition $\\delta_{2s} < \\frac{1}{3}$. The above result extends to a more general setting where there is additive noise in the model $\\vy = X\\bto + \\veta$. In this setting, it is known (see for example, \\cite[][Theorem 3]{JainTK2014} or \\cite[][Theorem 2.3]{GargK2009}) that if the objective function in question (for the sparse recovery problem this is the least-squares objective) satisfies the $(\\alpha,\\beta)$ RSC\/RSS properties at level $2s$, then the following is guaranteed for the output $\\hat\\bt$ of the IHT algorithm (assuming the algorithm is run for roughly $\\bigO{\\log n}$ iterations)\n\\[\n\\norm{\\hat\\bt - \\bto}_2 \\leq \\frac{3\\sqrt s}{\\alpha}\\norm{\\frac{X^\\top\\veta}{n}}_\\infty\n\\]\nThe consistency of the above solution can be verified in several interesting situations. For example, if the design matrix has normalized columns, i.e., $\\norm{X_i}_2 \\leq \\sqrt{n}$, and the noise entries $\\eta_i$ are generated i.i.d. from a Gaussian distribution $\\cN(0,\\sigma^2)$, independently of the design $X$, then the quantity $\\norm{\\frac{X^\\top\\veta}{n}}_\\infty$ is of the order of $\\sigma\\sqrt{\\frac{\\log p}{n}}$ with high probability.
In the above setting, IHT guarantees, with high probability,\n\\[\n\\norm{\\hat\\bt - \\bto}_2 \\leq \\softO{\\frac{\\sigma}{\\alpha}\\sqrt\\frac{s\\log p}{n}},\n\\]\ni.e., $\\norm{\\hat\\bt - \\bto}_2 \\rightarrow 0$ as $n \\rightarrow \\infty$, thus establishing consistency.\n\n\\section{Other Popular Techniques for Sparse Recovery}\n\\label{sec:spreg-other}\nThe IHT method \\citep{Blumensath2011} is part of a larger class of \\emph{hard thresholding techniques}, which also includes algorithms such as Gradient Descent with Sparsification (GraDeS) \\citep{GargK2009} and Hard Thresholding Pursuit (HTP) \\citep{Foucart2011}. Apart from these gradient descent-style techniques, several other approaches have been developed for the sparse recovery problem over the years. Here we briefly survey them.\n\n\\subsection{Pursuit Techniques}\nA popular non-convex optimization technique for sparse recovery and a few related optimization problems is that of discovering support elements iteratively. This technique is embodied in pursuit-style algorithms. We warn the reader that the popular \\emph{Basis Pursuit} algorithm is actually a convex relaxation technique and not related to the other pursuit algorithms we discuss here. The terminology is a bit confusing but seems to be a matter of legacy.\n\nThe pursuit family of algorithms includes Orthogonal Matching Pursuit (OMP) \\citep{TroppG2007}, Orthogonal Matching Pursuit with Replacement (OMPR) \\citep{JainTD2011}, Compressive Sampling Matching Pursuit (CoSaMP) \\citep{NeedellT2008}, and the Forward-backward (FoBa) algorithm \\citep{Zhang2011}.\n\nPursuit methods work by gradually discovering the elements in the support of the true model vector $\\bto$. At every time step, these techniques add a new support element to an active support set (which is empty to begin with) and solve a traditional least-squares problem on the active support set.
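A minimal sketch of OMP along these lines follows (in Python, assuming the NumPy library; problem sizes are illustrative). Each iteration adds the coordinate most correlated with the current residual and then re-solves the least-squares problem restricted to the active support:

```python
import numpy as np

def omp(X, y, s):
    """Orthogonal Matching Pursuit sketch: greedily grow the support one
    coordinate at a time, re-fitting by least squares on the active support."""
    p = X.shape[1]
    support, residual = [], y.copy()
    for _ in range(s):
        scores = np.abs(X.T @ residual)   # correlation of each atom with the residual
        scores[support] = -np.inf         # never re-pick an already-chosen atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    theta = np.zeros(p)
    theta[support] = coef
    return theta

# Illustrative noiseless recovery with a random Gaussian design
rng = np.random.default_rng(0)
n, p, s = 200, 500, 6
X = rng.standard_normal((n, p)) / np.sqrt(n)
theta_star = np.zeros(p)
theta_star[rng.choice(p, size=s, replace=False)] = 1.0 + rng.random(s)
theta_hat = omp(X, X @ theta_star, s)
```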
This least-squares problem has no sparsity constraints, and is hence a convex problem which can be solved easily.\n\nThe support set is then updated by adding a new support element. It is common to add the coordinate where the gradient of the objective function has the highest magnitude among coordinates not already in the support. FoBa-style techniques augment this method by having \\emph{backward} steps where support elements that were erroneously picked earlier are discarded when the error is detected.\n\nPursuit-style methods are, in general, applicable whenever the structure in the (non-convex) constraint set in question can be represented as a combination of a small number of \\emph{atoms}. Examples include sparse recovery, where the atoms are individual coordinates: every $s$-sparse vector is a linear combination of some $s$ of these atoms.\n\nOther examples include low-rank matrix recovery, which we will study in detail in \\S~\\ref{chap:matrec}, where the atoms are rank-one matrices. The SVD theorem tells us that every $r$-rank matrix can indeed be expressed as a sum of $r$ rank-one matrices. There exist works \\citep{TewariRD2011} that give generic methods to perform sparse recovery in such structurally constrained settings.\n\n\\subsection{Convex Relaxation Techniques for Sparse Recovery}\nConvex relaxation techniques have been extremely popular for the sparse recovery problem. In fact they formed the first line of attack on non-convex optimization problems starting with the seminal work of \\cite{CandesT2005,CandesRT2006,Donoho2006} that, for the first time, established polynomial time, globally convergent solutions for the compressive sensing problem.\n\nA flurry of work then followed on relaxation-based techniques \\citep{Candes2008,DonohoMM2009,Foucart2010,NegahbanRWY2012} that vastly expanded the scope of the problems being studied, the techniques being applied, as well as their analyses. 
It is important to note that all methods, whether relaxation-based or not, have to assume some design property, such as the NSP\/REP\/RIP\/RSC-RSS properties discussed earlier, in order to give provable guarantees.\n\nThe relaxation approach first converts non-convex problems into convex ones and then solves them. This approach, applied to the sparse regression problem, gives us the so-called LASSO problem, which has been studied extensively. Consider the sparse recovery problem \\eqref{eq:spreg}.\n\\begin{equation*}\n\\underset{\\substack{\\bt\\in\\bR^p\\\\\\norm{\\bt}_0 \\leq s}}{\\min}\\ \\norm{\\vy - X\\bt}_2^2.\n\\end{equation*}\nNon-convexity arises in the problem due to the constraint $\\norm{\\bt}_0 \\leq s$, as $\\norm{\\cdot}_0$ is not a valid norm. The relaxation approach fixes this problem by changing the constraint to use the $L_1$ norm instead, i.e.,\n\\begin{equation}\n\\underset{\\substack{\\bt\\in\\bR^p\\\\\\norm{\\bt}_1 \\leq R}}{\\min}\\ \\norm{\\vy - X\\bt}_2^2,\\tag*{(LASSO-1)}\\label{eq:lasso-1}\n\\end{equation}\nor by using its regularized version instead\n\\begin{equation}\n\\underset{\\substack{\\bt\\in\\bR^p}}{\\min}\\ \\frac{1}{2n}\\norm{\\vy - X\\bt}_2^2 + \\lambda_n\\norm{\\bt}_1.\\tag*{(LASSO-2)}\\label{eq:lasso-2}\n\\end{equation}\nThe choice of the $L_1$ norm is motivated mainly by its convexity, as well as by formal results that assure us that the relaxation gap is small or non-existent. Both the above formulations \\eqref{eq:lasso-1} and \\eqref{eq:lasso-2} are indeed convex but include parameters such as $R$ and $\\lambda_n$ that must be carefully tuned to ensure proper convergence.
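One standard way to solve \\eqref{eq:lasso-2} is proximal gradient descent (often called ISTA), in which each gradient step on the quadratic term is followed by entrywise soft thresholding. A minimal sketch follows (in Python, assuming the NumPy library; the problem instance and the choice of $\\lambda_n$ are purely illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1: entrywise soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, iters=5000):
    """Proximal gradient descent on the objective
    (1/2n) ||y - X theta||_2^2 + lam * ||theta||_1."""
    n, p = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2   # step = 1 / (smoothness of the quadratic)
    theta = np.zeros(p)
    for _ in range(iters):
        theta = soft_threshold(theta - eta * X.T @ (X @ theta - y) / n, eta * lam)
    return theta

# Illustrative noiseless instance with a small, hand-picked lam
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[rng.choice(p, size=s, replace=False)] = rng.choice([-2.0, 2.0], size=s)
y = X @ theta_star
theta_hat = ista_lasso(X, y, lam=0.01)
```

Note that the soft-thresholding step is exactly what makes the iterates sparse; the quality of the recovered solution depends on tuning $\\lambda_n$, in line with the remark above.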
Although the optimization problems \\eqref{eq:lasso-1} and \\eqref{eq:lasso-2} are vastly different from \\eqref{eq:spreg}, a long line of beautiful results, starting from the seminal work of \\cite{CandesT2005,CandesRT2006,Donoho2006}, showed that if the design matrix $X$ satisfies RIP with appropriate constants, and if the parameters of the relaxations $R$ and $\\lambda_n$ are appropriately tuned, then the solutions to the relaxations are indeed solutions to the original problems as well.\n\nBelow we state one such result from the recent text by \\citet{HastieTW2016}. We recommend this text to any reader looking for a well-curated compendium of techniques and results on the relaxation approach to several non-convex optimization problems arising in machine learning.\n\n\\begin{theorem}\\cite[Theorem 11.1]{HastieTW2016}\nConsider a sparse recovery problem $\\vy = X\\bto + \\veta$ where the model $\\bto$ is $s$-sparse and the design matrix $X$ satisfies the restricted-eigenvalue condition (see Definition~\\ref{def:re}) of order $s$ with constant $\\alpha$. Then the following hold:\n\\begin{enumerate}\n\t\\item Any solution $\\hat\\bt_1$ to \\eqref{eq:lasso-1} with $R = \\norm{\\bto}_1$ satisfies\n\t\\[\n\t\\norm{\\hat\\bt_1 - \\bto}_2 \\leq \\frac{4}{\\alpha}\\sqrt s \\norm{\\frac{X^\\top\\veta}{n}}_\\infty.\n\t\\]\n\t\\item Any solution $\\hat\\bt_2$ to \\eqref{eq:lasso-2} with $\\lambda_n \\geq 2\\norm{X^\\top\\veta\/n}_\\infty$ satisfies\n\t\\[\n\t\\norm{\\hat\\bt_2 - \\bto}_2 \\leq \\frac{3}{\\alpha}\\sqrt s \\lambda_n.\n\t\\]\n\\end{enumerate}\n\\end{theorem}\n\nThe reader can verify that the above bounds are competitive with the bounds for the IHT algorithm that we discussed previously.
We refer the reader to \\cite[Chapter 11]{HastieTW2016} for more consistency results for the LASSO formulations.\n\n\\subsection{Non-convex Regularization Techniques}\nInstead of performing a complete convex relaxation of the problem, there exist approaches that only partly relax the problem. A popular approach in this direction uses $L_q$ regularization with $0 < q < 1$ \\citep{Chartrand2007,Foucart-Lai2009,WangXT2011}. The resulting problem still remains non-convex but becomes slightly better behaved, in that the objective function is almost-everywhere differentiable. For instance, the following optimization problem may be used to solve the noiseless sparse recovery problem.\n\\begin{align*}\n\\underset{\\bt\\in\\bR^p}{\\min}&\\ \\norm{\\bt}_q,\\\\\n\\text{s.t.}&\\ \\vy = X\\bt\n\\end{align*}\nFor noisy settings, one may replace the constraint with a soft constraint such as $\\norm{\\vy - X\\bt}_2 \\leq \\epsilon$, or else move to an unconstrained version like LASSO with the $L_1$ norm replaced by the $L_q$ norm. The choice of the regularization norm $q$ is dictated by the application, and usually any value within a certain subinterval of $(0,1)$ can be chosen.\n\nThere has been interest in characterizing both the global and the local optima of these optimization problems for their recovery properties \\citep{Chen-Gu2015}. In general, $L_q$-regularized formulations, if solved exactly, can guarantee recovery under much weaker conditions than those required by the LASSO formulations and IHT. For instance, the RIP condition that $L_q$-regularized formulations need in order to guarantee universal recovery can be as weak as $\\delta_{2k+1} < 1$ \\citep{Chartrand2007}. This is very close to the requirement $\\delta_{2k} < 1$ that must be made by any algorithm in order to ensure that the solution even be unique.
However, solving these non-convex regularized problems at large scale itself remains challenging and is an active area of research.\n\n\\begin{figure}[t]\n\\centering \\includegraphics[width=0.5\\columnwidth]{spreg-comp.pdf}\n\\caption[Relaxation vs. Non-convex Methods for Sparse Recovery]{An empirical comparison of run-times offered by the LASSO, FoBa and IHT methods on sparse regression problems with varying dimensionality $p$. All problems had sparsity $s = 100$ and were given $n = 2s\\cdot\\log p$ data points. IHT is clearly the most scalable of the methods, followed by FoBa. The relaxation technique does not scale very well to high dimensions. Figure adapted from \\citet{JainTK2014}.}%\n\\label{fig:spreg-comparison-spreg}\n\\end{figure}\n\n\\subsection{Empirical Comparison}\nTo give the reader an appreciation of the empirical performance of the various methods we have discussed for sparse recovery, Figure~\\ref{fig:spreg-comparison-spreg} provides a comparison of some of these methods on a synthetic sparse regression problem. The graph plots the running time taken by the various methods to solve the same sparse linear regression problem with sparsity $s = 100$ but with dimensionalities increasing from $p = 5000$ to $p = 25000$. The graph indicates that non-convex optimization methods such as IHT and FoBa are far more scalable than relaxation-based methods. It should be noted that although pursuit-style techniques are scalable, they can become sluggish if the true support set size $s$ is not very small, since these techniques discover support elements one by one.\n\n\\section{Extensions}\n\\label{sec:spreg-ext}\nIn the preceding discussion, we studied the problem of sparse linear regression and the IHT technique to solve the problem. These basic results can be augmented and generalized in several ways.
The work of \\cite{NegahbanRWY2012} greatly expanded the scope of sparse recovery techniques beyond simple least-squares to the more general M-estimation problem. The work of \\cite{BhatiaJK2015} offered solutions to the \\emph{robust} sparse regression problem where the responses may be corrupted by an adversary. We will explore the robust regression problem in more detail in \\S~\\ref{chap:rreg}. We discuss a few more such extensions below.\n\n\\subsection{Sparse Recovery in Ill-Conditioned Settings}\nAs we discussed before, the bound $\\delta_{3s} < \\frac{1}{2}$ on the RIP constant required by Theorem~\\ref{thm:iht-conv-proof-rip} effectively places a bound on the \\emph{restricted condition number} $\\kappa_{3s}$ of the design matrix. In our case the bound translates to $\\kappa_{3s} = \\frac{1 + \\delta_{3s}}{1 - \\delta_{3s}} < 3$. However, in cases such as the gene expression analysis problem where the design matrix is not totally under our control, the restricted condition number might be much larger than $3$.\n\nFor instance, it can be shown that if the expression levels of two genes are highly correlated, then the design matrix becomes ill-conditioned. In such settings, it is much more appropriate to assume that the design matrix satisfies restricted strong convexity and smoothness (RSC\/RSS), which allows us to work with design matrices having arbitrarily large condition numbers. It turns out that the IHT algorithm can be modified \\cite[see for example,][]{JainTK2014} to work in these ill-conditioned recovery settings.\n\n\\begin{theorem}\n\\label{thm:iht-conv-proof-rsc-rss}\nSuppose $X \\in \\bR^{n \\times p}$ is a design matrix that satisfies the restricted strong convexity and smoothness property of order $2k+s$ with constants $\\alpha_{2k+s}$ and $\\beta_{2k+s}$ respectively. Let $\\bto \\in \\cB_0(s) \\subset \\bR^p$ be an arbitrary $s$-sparse vector and let $\\vy = X\\bto$.
Then the IHT algorithm, when executed with a step length $\\eta < \\frac{2}{\\beta_{2k+s}}$, and a projection sparsity level $k \\geq 32\\br{\\frac{\\beta_{2k+s}}{\\alpha_{2k+s}}}^2s$, ensures $\\norm{\\btt - \\bto}_2 \\leq \\epsilon$ after $t = \\bigO{\\frac{\\beta_{2k+s}}{\\alpha_{2k+s}}\\log\\frac{\\norm{\\bto}_2}{\\epsilon}}$ iterations of the algorithm.\n\\end{theorem}\n\nNote that the above result does not place any restrictions on the condition number or the RSC\/RSS constants of the problem. The result also mimics Theorem~\\ref{thm:gpgd-rsc-rss-proof} in its dependence on the (restricted) condition number of the optimization problem, i.e., $\\kappa_{2k+s} = \\frac{\\beta_{2k+s}}{\\alpha_{2k+s}}$. The proof of this result is somewhat tedious and is hence omitted.\n\n\\subsection{Recovery from a Union of Subspaces}\nIf we look closely, the set $\\cB_0(s)$ is simply a union of $p \\choose s$ linear subspaces, each subspace encoding a specific sparsity pattern. It is natural to wonder whether the methods and analyses described above also hold when the vector to be recovered belongs to a general union of subspaces. More specifically, consider a family of linear subspaces $\\cH_1,\\ldots,\\cH_L \\subset \\bR^p$ and denote the union of these subspaces by $\\cH = \\bigcup_{i=1}^L\\cH_i$. The restricted strong convexity and restricted strong smoothness conditions can be appropriately modified to suit this setting by requiring a design matrix $X: \\bR^p \\rightarrow \\bR^n$ to satisfy, for every $\\bt_1, \\bt_2 \\in \\cH$,\n\\[\n\\alpha\\cdot\\norm{\\bt_1 - \\bt_2}_2^2 \\leq \\norm{X(\\bt_1 - \\bt_2)}_2^2 \\leq \\beta\\cdot\\norm{\\bt_1 - \\bt_2}_2^2,\n\\]\nwhere $\\alpha$ and $\\beta$ play the roles of the strong convexity and smoothness constants (we reserve $L$ here for the number of subspaces). It turns out that IHT, with an appropriately modified projection operator $\\Pi_{\\cH}(\\cdot)$, can ensure recovery of vectors that are guaranteed to reside in a small union of low-dimensional subspaces.
Moreover, a linear rate of convergence, as we have seen for the IHT algorithm in the sparse regression case, can still be achieved. We refer the reader to the work of \\cite{Blumensath2011} for more details of this extension.\n\n\\subsection{Dictionary Learning}\nA very useful extension of sparse recovery, known as sparse \\emph{coding} or dictionary learning, emerges when we attempt to learn the design matrix as well. Thus, all we are given are observations $\\vy_1,\\ldots,\\vy_m \\in \\bR^n$ and we wish to learn a design matrix $X \\in \\bR^{n \\times p}$ such that the observations $\\vy_i$ can be represented as sparse combinations $\\vw_i \\in \\bR^p$ of the columns of the design matrix, i.e., $\\vy_i \\approx X\\vw_i$ with $\\norm{\\vw_i}_0 \\leq s \\ll p$. The problem has several applications in the fields of computer vision and signal processing and has seen a lot of interest in the recent past.\n\nThe alternating minimization technique, where one alternates between estimating the design matrix and the sparse representations, is especially popular for this problem. Methods mostly differ in the exact implementation of these alternations. Some notable works in this area include \\citep{AgarwalAJN2016,AroraGM2014,GribonvalJB2015,SpielmanWW2012}.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:spreg-confuse}\nSuppose a design matrix $X \\in \\bR^{n \\times p}$ satisfies $X\\vw_1 = X\\vw_2$ for some $\\vw_1 \\neq \\vw_2 \\in \\bR^p$.
Then show that there exists an entire subspace $\\cH \\subset \\bR^p$ such that for all $\\vw,\\vw' \\in \\cH$, we have $X\\vw = X\\vw'$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-c-k-non-conv}\nShow that the set $\\cC(k) := \\bigcup_{S: |S| = k}\\cC(S)$ is non-convex.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-nsp-iden}\nShow that if a design matrix $X$ satisfies the null-space property of order $2s$, then for any two distinct $s$-sparse vectors $\\vv^1,\\vv^2 \\in \\cB_0(s)$, $\\vv^1 \\neq \\vv^2$, it must be the case that $X\\vv^1 \\neq X\\vv^2$. \n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-rsc-is-rsc}\nShow that the RSC\/RSS notion introduced in Definition~\\ref{defn:rsc-rss} is equivalent to the RSC\/RSS notion in Definition~\\ref{defn:res-strong-cvx-smooth-fn} defined in \\S~\\ref{chap:pgd} for an appropriate choice of function and constraint sets.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-hierarchy}\nShow that RSC-RSS $\\Rightarrow$ REP $\\Rightarrow$ NSP i.e. a matrix that satisfies the RSC\/RSS condition for some constants, must satisfy the REP condition for some constants which in turn must force it to satisfy the null-space property.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-inc-rip}\nShow that every $\\mu$-incoherent matrix satisfies the RIP property at order $k$ with parameter $\\delta = (k-1)\\mu$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-rip-eigen} \nSuppose the matrix $X \\in \\bR^{n \\times p}$ satisfies RIP at order $s$ with constant $\\delta_s$. Then show that for any set $I \\subset [p], \\abs I \\leq s$, the smallest eigenvalue of the matrix $X_I^\\top X_I$ is lower bounded by $(1-\\delta_{s})$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:spreg-monotone}\nShow that the RIP constant is monotonic in its order i.e. 
if a matrix $X$ satisfies RIP of order $k$ with constant $\\delta_k$, then it also satisfies RIP for all orders $k' \\leq k$ with $\\delta_{k'} \\leq \\delta_{k}$.\n\\end{exer}\n\n\\section{Bibliographic Notes}\n\\label{sec:spreg-bib}\nThe literature on sparse recovery techniques is too vast for this monograph to cover. We have already covered several directions in \\S\\S~\\ref{sec:spreg-other} and \\ref{sec:spreg-ext} and point the reader to references therein.\n\\chapter{Mathematical Tools}\n\\label{chap:tools}\n\nThis chapter will introduce concepts, algorithmic tools, and analysis techniques used in the design and analysis of optimization algorithms. It will also explore simple convex optimization problems, which will serve as a warm-up exercise.\n\n\\section{Convex Analysis}\nWe recall some basic definitions in convex analysis. Studying these will help us appreciate the structural properties of non-convex optimization problems later in the monograph. For the sake of simplicity, unless stated otherwise, we will assume that functions are continuously differentiable. We begin with the notion of a convex combination.\n\n\\begin{definition}[Convex Combination]\nA convex combination of a set of $n$ vectors $\\vx_i \\in \\bR^p$, $i = 1 \\ldots n$, is a vector $\\vx_{\\vtheta} := \\sum_{i=1}^n\\theta_i\\vx_i$ where $\\vtheta = \\br{\\theta_1,\\theta_2,\\ldots,\\theta_n}$, $\\theta_i \\geq 0$ and $\\sum_{i=1}^n\\theta_i = 1$.\n\\end{definition}\n\nA set that is closed under arbitrary convex combinations is a convex set. A standard definition is given below. Geometrically speaking, convex sets are those that contain all line segments that join two points inside the set. As a result, they cannot have any inward ``bulges''.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{cvxset.pdf}\n\\caption[Convex and Non-convex Sets]{A convex set is closed under convex combinations.
The presence of even a single uncontained convex combination makes a set non-convex. Thus, a convex set cannot have inward ``bulges''. In particular, the set of sparse vectors is non-convex.}%\n\\label{fig:cvxset}\n\\end{figure}\n\n\\begin{definition}[Convex Set]\nA set $\\cC \\subset \\bR^p$ is considered convex if, for every $\\vx,\\vy \\in \\cC$ and $\\lambda \\in [0,1]$, we have $(1-\\lambda)\\cdot\\vx + \\lambda\\cdot\\vy \\in \\cC$ as well.\n\\end{definition}\n\nFigure~\\ref{fig:cvxset} gives visual representations of prototypical convex and non-convex sets. A related notion is that of convex functions, which behave in a characteristic manner under convex combinations. There are several definitions of convex functions: some are more basic and general, while others are more restrictive but easier to use. One of the simplest definitions, which does not involve notions of derivatives, defines a convex function $f : \\bR^p \\rightarrow \\bR$ as one for which, for every $\\vx,\\vy \\in \\bR^p$ and every $\\lambda \\in [0,1]$, we have $f((1-\\lambda)\\cdot\\vx + \\lambda\\cdot\\vy) \\leq (1-\\lambda)\\cdot f(\\vx) + \\lambda\\cdot f(\\vy)$. For continuously differentiable functions, a more usable definition follows.\n\n\\begin{definition}[Convex Function]\n\\label{defn:cvx-fn}\nA continuously differentiable function $f: \\bR^p \\rightarrow \\bR$ is considered convex if for every $\\vx,\\vy \\in \\bR^p$ we have $f(\\vy) \\geq f(\\vx) + \\ip{\\nabla f(\\vx)}{\\vy - \\vx}$, where $\\nabla f(\\vx)$ is the gradient of $f$ at $\\vx$.\n\\end{definition}\n\nA more general definition that extends to non-differentiable functions uses the notion of a \\emph{subgradient} to replace the gradient in the above expression. A special class of convex functions is the class of \\emph{strongly convex} and \\emph{strongly smooth} functions. These are critical to the study of algorithms for non-convex optimization.
Figure~\\ref{fig:cvxfn} provides a handy visual representation of these classes of functions.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{cvxfn.pdf}\n\\caption[Convex, Strongly Convex and Strongly Smooth Functions]{A convex function is lower bounded by its own tangent at all points. The growth of strongly convex and strongly smooth functions is, respectively, lower and upper bounded by quadratic functions: the former cannot grow too slowly, and the latter cannot grow too fast. In each figure, the shaded area describes regions the function curve is permitted to pass through.}%\n\\label{fig:cvxfn}\n\\end{figure}\n\n\\begin{definition}[Strongly Convex\/Smooth Function]\n\\label{defn:strong-cvx-smooth-fn}\nA continuously differentiable function $f: \\bR^p \\rightarrow \\bR$ is considered $\\alpha$-strongly convex (SC) and $\\beta$-strongly smooth (SS) if for every $\\vx,\\vy \\in \\bR^p$, we have\n\\[\n\\frac{\\alpha}{2}\\norm{\\vx-\\vy}_2^2 \\leq f(\\vy) - f(\\vx) - \\ip{\\nabla f(\\vx)}{\\vy - \\vx} \\leq \\frac{\\beta}{2}\\norm{\\vx-\\vy}_2^2.\n\\]\n\\end{definition}\n\nIt is useful to note that strong convexity places a quadratic lower bound on the growth of the function at every point -- the function must rise at least as fast as a quadratic function. How fast it rises is characterized by the SC parameter $\\alpha$. Strong smoothness similarly places a quadratic upper bound and does not let the function grow too fast, with the SS parameter $\\beta$ dictating the upper limit.\n\nWe will soon see that these two properties are extremely useful in forcing optimization algorithms to rapidly converge to optima. Note that whereas strongly convex functions are definitely convex, strong smoothness does not imply convexity\\elink{exer:tools-ss-cvx}. Strongly smooth functions may very well be non-convex.
A property similar to strong smoothness is that of Lipschitzness, which we define below.\n\n\\begin{definition}[Lipschitz Function]\n\\label{defn:lip}\nA function $f: \\bR^p \\rightarrow \\bR$ is $B$-Lipschitz if for every $\\vx,\\vy \\in \\bR^p$, we have\n\\[\n\\abs{f(\\vx) - f(\\vy)} \\leq B\\cdot\\norm{\\vx - \\vy}_2.\n\\]\n\\end{definition}\n\nNotice that Lipschitzness places an upper bound on the growth of the function that is linear in the perturbation i.e., $\\norm{\\vx-\\vy}_2$, whereas strong smoothness (SS) places a quadratic upper bound. Also notice that Lipschitz functions need not be differentiable. However, differentiable functions with bounded gradients are always Lipschitz\\elink{exer:tools-diff-lip}. Finally, an important property that generalizes the behavior of convex functions on convex combinations is Jensen's inequality.\n\n\\begin{lemma}[Jensen's Inequality]\n\\label{lem:jensen's}\nIf $X$ is a random variable taking values in the domain of a convex function $f$, then $\\E{f(X)} \\geq f(\\E{X})$.\n\\end{lemma}\nThis property will be useful while analyzing iterative algorithms.\n\n\\section{Convex Projections}\nThe projected gradient descent technique is a popular method for constrained optimization problems, both convex as well as non-convex. The \\emph{projection} step plays an important role in this technique. Given any closed set $\\cC \\subset \\bR^p$, the projection operator $\\Pi_\\cC(\\cdot)$ is defined as\n\\[\n\\Pi_\\cC(\\vz) := \\underset{\\vx\\in\\cC}{\\argmin}\\ \\norm{\\vx-\\vz}_2.\n\\]\nIn general, one need not use only the $L_2$ norm in defining projections, but it is the most commonly used one. If $\\cC$ is a convex set, then the above problem reduces to a convex optimization problem.
In several useful cases, one has access to a closed form solution for the projection.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{proj.pdf}\n\\caption[Convex Projections and their Properties]{A depiction of projection operators and their properties. Projections reveal a closest point in the set being projected onto. For convex sets, Projection Property-I ensures that the angle $\\theta$ is always non-acute. Sets that satisfy Projection Property-I also satisfy Projection Property-II. Projection Property-II may be violated by non-convex sets. Projecting onto them may take the projected point $\\vz$ closer to certain points in the set (for example, $\\hat\\vz$) but farther from others (for example, $\\vx$).}%\n\\label{fig:proj}\n\\end{figure}\n\nFor instance, if $\\cC = \\cB_2(1)$ i.e., the unit $L_2$ ball, then projection is equivalent\\elink{exer:tools-proj-l2} to a normalization step\n\\[\n\\Pi_{\\cB_2(1)}(\\vz) = \\begin{cases}\\vz\/\\norm{\\vz}_2 &\\mbox{if } \\norm{\\vz}_2 > 1\\\\ \\vz & \\mbox{otherwise}\\end{cases}.\n\\]\nFor the case $\\cC = \\cB_1(1)$, the projection step reduces to the popular \\emph{soft thresholding} operation. If $\\hat\\vz := \\Pi_{\\cB_1(1)}(\\vz)$, then $\\hat\\vz_i = \\text{sign}(\\vz_i)\\cdot\\max\\bc{\\abs{\\vz_i} - \\theta, 0}$, where $\\theta$ is a threshold that can be decided by a sorting operation on the vector \\citep[see][for details]{DuchiS-SSC2008}.\n\nProjections onto convex sets have some very useful properties which come in handy while analyzing optimization algorithms. In the following, we will study three properties of projections. These are depicted visually in Figure~\\ref{fig:proj} to help the reader build intuition.\n\n\\begin{lemma}[Projection Property-O]\n\\label{lem:proj-prop-o}\nFor any set (convex or not) $\\cC \\subset \\bR^p$ and $\\vz \\in \\bR^p$, let $\\hat\\vz := \\Pi_\\cC(\\vz)$.
Then for all $\\vx \\in \\cC$, $\\norm{\\hat\\vz - \\vz}_2 \\leq \\norm{\\vx - \\vz}_2$.\n\\end{lemma}\n\nThis property follows by simply observing that the projection step solves the optimization problem $\\min_{\\vx\\in\\cC}\\norm{\\vx-\\vz}_2$. Note that this property holds for all sets, whether convex or not. However, the following two properties are guaranteed to hold only for convex sets.\n\n\\begin{lemma}[Projection Property-I]\n\\label{lem:proj-prop-1}\nFor any convex set $\\cC \\subset \\bR^p$ and any $\\vz \\in \\bR^p$, let $\\hat\\vz := \\Pi_\\cC(\\vz)$. Then for all $\\vx \\in \\cC$, $\\ip{\\vx - \\hat\\vz}{\\vz - \\hat\\vz} \\leq 0$.\n\\end{lemma}\n\\begin{proof}\n\nTo prove this, assume the contrary: suppose that for some $\\vx \\in \\cC$, we have $\\ip{\\vx - \\hat\\vz}{\\vz - \\hat\\vz} > 0$. Now, since $\\cC$ is convex and $\\hat\\vz,\\vx \\in \\cC$, for any $\\lambda \\in [0,1]$, we have $\\vx_\\lambda := \\lambda\\cdot\\vx + (1 - \\lambda)\\cdot\\hat\\vz \\in \\cC$. We will now show that for some value of $\\lambda \\in [0,1]$, it must be the case that $\\norm{\\vz - \\vx_\\lambda}_2 < \\norm{\\vz - \\hat\\vz}_2$. This will contradict the fact that $\\hat\\vz$ is the closest point in the convex set to $\\vz$ and prove the lemma. All that remains to be done is to find such a value of $\\lambda$. The reader can verify that any value of $0 < \\lambda < \\min\\bc{1, \\frac{2\\ip{\\vx-\\hat\\vz}{\\vz-\\hat\\vz}}{\\norm{\\vx-\\hat\\vz}_2^2}}$ suffices. Since we assumed $\\ip{\\vx - \\hat\\vz}{\\vz - \\hat\\vz} > 0$, any value of $\\lambda$ chosen this way lies in $(0,1)$.\n\\end{proof}\nProjection Property-I can be used to prove a very useful \\emph{contraction property} for convex projections.
In some sense, a convex projection brings a point closer to \\emph{all} points in the convex set simultaneously.\n\\begin{lemma}[Projection Property-II]\n\\label{lem:proj-prop-2}\nFor any convex set $\\cC \\subset \\bR^p$ and any $\\vz \\in \\bR^p$, let $\\hat\\vz := \\Pi_\\cC(\\vz)$. Then for all $\\vx \\in \\cC$, $\\norm{\\hat\\vz - \\vx}_2 \\leq \\norm{\\vz - \\vx}_2$.\n\\end{lemma}\n\\begin{proof}\nWe have the following elementary inequalities\n\\begin{align*}\n\\norm{\\vz - \\vx}_2^2 &= \\norm{(\\hat\\vz - \\vx) - (\\hat\\vz - \\vz)}_2^2\\\\\n\t\t\t\t\t\t\t\t\t &= \\norm{\\hat\\vz - \\vx}_2^2 + \\norm{\\hat\\vz - \\vz}_2^2 - 2\\ip{\\hat\\vz - \\vx}{\\hat\\vz - \\vz}\\\\\n\t\t\t\t\t\t\t\t\t &\\geq \\norm{\\hat\\vz - \\vx}_2^2 + \\norm{\\hat\\vz - \\vz}_2^2 \\tag*{(Projection Property-I)}\\\\\n\t\t\t\t\t\t\t\t\t &\\geq \\norm{\\hat\\vz - \\vx}_2^2\\tag*{\\qedhere}\n\\end{align*}\n\\end{proof}\nNote that Projection Properties-I and II are also called \\emph{first order} properties and can be violated if the underlying set is non-convex. However, Projection Property-O, often called a \\emph{zeroth order} property, always holds, whether the underlying set is convex or not.\n\n\\section{Projected Gradient Descent}\nWe now move on to study the projected gradient descent algorithm. This is an extremely simple and efficient technique that can effortlessly scale to large problems. Although we will apply this technique to non-convex optimization tasks later, we first look at its behavior on convex optimization problems as a warm up exercise. We warn the reader that the proof techniques used in the convex case do not apply directly to non-convex problems. 
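Since PGD relies on a projection oracle, it is worth noting that the closed-form projections discussed above are straightforward to implement. The following is a minimal sketch (our own illustration, not code from this text) of the $L_2$-ball normalization step and the $L_1$-ball soft-thresholding step, with the threshold $\theta$ found by sorting as in the cited procedure of Duchi et al.:

```python
import numpy as np

def project_l2_ball(z, r=1.0):
    """Projection onto B_2(r): rescale z only if it lies outside the ball."""
    norm = np.linalg.norm(z)
    return z if norm <= r else (r / norm) * z

def project_l1_ball(z, r=1.0):
    """Projection onto B_1(r) via soft thresholding.

    The threshold theta is found by a sorting operation on the magnitudes,
    following the approach referenced in the text (Duchi et al., 2008).
    """
    if np.abs(z).sum() <= r:
        return z.copy()                        # already inside the ball
    u = np.sort(np.abs(z))[::-1]               # magnitudes, descending
    cumsum = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (cumsum - r) / k > 0)[0][-1]
    theta = (cumsum[rho] - r) / (rho + 1.0)
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)
```

Both operators run in (near-)linear time, the $L_1$ case being dominated by the sort.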
Consider the following optimization problem:\n\\begin{equation}\n\t\\label{eq:cons-cvx-opt}\\tag*{(CVX-OPT)}\n\t\\begin{split}\n\t\t\\min_{\\vx \\in \\bR^p}\\ & f(\\vx) \\\\\n\t\t\\text{s.t.}\\ & \\vx \\in \\cC.\n\t\\end{split}\n\\end{equation}\nIn the above optimization problem, $\\cC \\subset \\bR^p$ is a convex constraint set and $f: \\bR^p \\rightarrow \\bR$ is a convex objective function. We will assume that we have oracle access to the gradient and projection operators, i.e., for any point $\\vx \\in \\bR^p$ we are able to access $\\nabla f(\\vx)$ and $\\Pi_\\cC(\\vx)$.\n\\begin{algorithm}[t]\n\t\\caption{Projected Gradient Descent (PGD)}\n\t\\label{algo:pgd}\n\t\\begin{algorithmic}[1]\n\t\t{\n\t\t\t\\REQUIRE Convex objective $f$, convex constraint set $\\cC$, step lengths $\\eta_t$\n\t\t\t\\ENSURE A point $\\hat\\vx \\in \\cC$ with near-optimal objective value\n\t\t\t\\STATE $\\vx^1 \\leftarrow \\vzero$\n\t\t\t\\FOR{$t = 1, 2, \\ldots, T$}\n\t\t\t\t\\STATE $\\vz^{t+1} \\leftarrow \\vx^t - \\eta_t\\cdot\\nabla f(\\vx^t)$\n\t\t\t\t\\STATE $\\vx^{t+1} \\leftarrow \\Pi_\\cC(\\vz^{t+1})$\n\t\t\t\\ENDFOR\n\t\t\t\\STATE (OPTION 1) \\textbf{return} {$\\hat\\vx_{\\text{final}} = \\vx^T$}\n\t\t\t\\STATE (OPTION 2) \\textbf{return} {$\\hat\\vx_{\\text{avg}} = (\\sum_{t=1}^T \\vx^t)\/T$}\n\t\t\t\\STATE (OPTION 3) \\textbf{return} {$\\hat\\vx_{\\text{best}} = \\arg\\min_{t\\in[T]} f(\\vx^t)$}\n\t\t}\n\t\\end{algorithmic}\n\\end{algorithm}\t\t\t\n\nThe projected gradient descent algorithm is stated in Algorithm~\\ref{algo:pgd}. The procedure generates iterates $\\vx^t$ by taking steps guided by the gradient in an effort to reduce the function value locally. Finally it returns either the final iterate, the average iterate, or the best iterate.\n\n\\section{Convergence Guarantees for PGD}\nWe will analyze PGD for objective functions that are either a) convex with bounded gradients, or b) strongly convex and strongly smooth. 
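The gradient-step-then-project loop of Algorithm~\ref{algo:pgd} can be sketched in a few lines. The code below is our own illustration (not from this text), run on a toy constrained least-squares problem whose solution is known in closed form:

```python
import numpy as np

def pgd(grad_f, project, x0, T, eta):
    """Projected gradient descent: a gradient step followed by a projection."""
    x = x0
    iterates = [x]
    for _ in range(T):
        z = x - eta * grad_f(x)   # z^{t+1} <- x^t - eta * grad f(x^t)
        x = project(z)            # x^{t+1} <- Pi_C(z^{t+1})
        iterates.append(x)
    x_final = iterates[-1]                # OPTION 1: final iterate
    x_avg = sum(iterates[:-1]) / T        # OPTION 2: (x^1 + ... + x^T) / T
    return x_final, x_avg

# Toy problem: min ||x - c||^2  s.t.  ||x||_2 <= 1, with c outside the ball;
# the constrained optimum is c / ||c||_2.
c = np.array([3.0, 4.0])
grad = lambda x: 2.0 * (x - c)
project = lambda z: z if np.linalg.norm(z) <= 1.0 else z / np.linalg.norm(z)
x_final, x_avg = pgd(grad, project, np.zeros(2), T=100, eta=0.1)
```

Here the optimum is $c/\norm{c}_2 = (0.6, 0.8)$, and both returned iterates end up close to it.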
Let $f^\\ast = \\min_{\\vx \\in \\cC}\\ f(\\vx)$ be the optimal value of the optimization problem. A point $\\hat\\vx \\in \\cC$ will be said to be an $\\epsilon$-optimal solution if $f(\\hat\\vx) \\leq f^\\ast + \\epsilon$.\n\n\\subsection{Convergence with Bounded Gradient Convex Functions}\nConsider a convex objective function $f$ with bounded gradients over a convex constraint set $\\cC$, i.e., $\\norm{\\nabla f(\\vx)}_2 \\leq G$ for all $\\vx \\in \\cC$.\n\n\\begin{theorem}\n\\label{thm:pgd-conv-proof}\nLet $f$ be a convex objective with bounded gradients and Algorithm~\\ref{algo:pgd} be executed for $T$ time steps with step lengths $\\eta_t = \\eta = \\frac{1}{\\sqrt T}$. Then, for any $\\epsilon > 0$, if $T = \\bigO{\\frac{1}{\\epsilon^2}}$, then $\\frac{1}{T}\\sum_{t=1}^Tf(\\vx^t) \\leq f^\\ast + \\epsilon$.\n\\end{theorem}\n\nWe see that the PGD algorithm in this setting ensures that the function value of the iterates approaches $f^\\ast$ \\emph{on average}. We can use this result to prove the convergence of the PGD algorithm. If we use OPTION 3, i.e., $\\hat\\vx_{\\text{best}}$, then since by construction, we have $f(\\hat\\vx_{\\text{best}}) \\leq f(\\vx^t)$ for all $t$, by applying Theorem~\\ref{thm:pgd-conv-proof}, we get\n\\[\nf(\\hat\\vx_{\\text{best}}) \\leq \\frac{1}{T}\\sum_{t=1}^Tf(\\vx^t) \\leq f^\\ast + \\epsilon.\n\\]\nIf we use OPTION 2, i.e., $\\hat\\vx_{\\text{avg}}$, which is cheaper since we do not have to perform function evaluations to find the best iterate, we can apply Jensen's inequality (Lemma~\\ref{lem:jensen's}) to get the following:\n\\[\nf(\\hat\\vx_{\\text{avg}}) = f\\br{\\frac{1}{T}\\sum_{t=1}^T\\vx^t} \\leq \\frac{1}{T}\\sum_{t=1}^Tf(\\vx^t) \\leq f^\\ast + \\epsilon.\n\\]\nNote that Jensen's inequality may be applied only when the function $f$ is convex.
Now, whereas OPTION 1 i.e., $\\hat\\vx_{\\text{final}}$, is the cheapest and does not require any additional operations, $\\hat\\vx_{\\text{final}}$ does not converge to the optimum for convex functions in general and may oscillate close to the optimum. However, we shall shortly see that $\\hat\\vx_{\\text{final}}$ does converge if the objective function is strongly smooth. Recall that strongly smooth functions may not grow at a faster-than-quadratic rate.\n\nThe reader would note that we have set the step length to a value that depends on the total number of iterations $T$ for which the PGD algorithm is executed. This is called a \\emph{horizon-aware} setting of the step length. In case we are not sure what the value of $T$ would be, a \\emph{horizon-oblivious} setting of $\\eta_t = \\frac{1}{\\sqrt t}$ can also be shown to work\\elink{exer:tools-hor}.\n\n\\begin{proof}[Proof (of Theorem~\\ref{thm:pgd-conv-proof}).]\nLet $\\vx^\\ast \\in \\arg\\min_{\\vx \\in \\cC}\\ f(\\vx)$ denote any point in the constraint set where the optimum function value is achieved. Such a point always exists if the constraint set is closed and the objective function continuous. We will use the following \\emph{potential function} $\\Phi_t = f(\\vx^t)-f(\\vx^\\ast)$ to track the progress of the algorithm. Note that $\\Phi_t$ measures the sub-optimality of the $t$-th iterate. Indeed, the statement of the theorem is equivalent to claiming that $\\frac{1}{T}\\sum_{t=1}^T\\Phi_t \\leq \\epsilon$.\\\\\n\n\\noindent\\textbf{(Apply Convexity)} We apply convexity to upper bound the potential function at every step. 
Convexity is a global property and very useful in getting an upper bound on the level of sub-optimality of the current iterate in such analyses.\n\\[\n\\Phi_t = f(\\vx^t)-f(\\vx^\\ast) \\leq \\ip{\\nabla f(\\vx^t)}{\\vx^t-\\vx^\\ast}.\n\\]\nWe now do some elementary manipulations:\n\\begin{align*}\n&\\ip{\\nabla f(\\vx^t)}{\\vx^t-\\vx^\\ast} = \\frac{1}{\\eta}\\ip{\\eta\\cdot\\nabla f(\\vx^t)}{\\vx^t-\\vx^\\ast}\\\\\n&= \\frac{1}{2\\eta}\\br{\\norm{\\vx^t-\\vx^\\ast}_2^2 + \\norm{\\eta\\cdot\\nabla f(\\vx^t)}_2^2 - \\norm{\\vx^t - \\eta\\cdot\\nabla f(\\vx^t) -\\vx^\\ast}_2^2}\\\\\n&= \\frac{1}{2\\eta}\\br{\\norm{\\vx^t-\\vx^\\ast}_2^2 + \\norm{\\eta\\cdot\\nabla f(\\vx^t)}_2^2 - \\norm{\\vz^{t+1} -\\vx^\\ast}_2^2}\\\\\n&\\leq \\frac{1}{2\\eta}\\br{\\norm{\\vx^t-\\vx^\\ast}_2^2 + \\eta^2G^2 - \\norm{\\vz^{t+1} -\\vx^\\ast}_2^2},\n\\end{align*}\nwhere the first step applies the identity $2ab = a^2 + b^2 - (a-b)^2$, the second step uses the update step of the PGD algorithm that sets $\\vz^{t+1} \\leftarrow \\vx^t - \\eta_t\\cdot\\nabla f(\\vx^t)$, and the third step uses the fact that the objective function $f$ has bounded gradients.\n\n\\noindent\\textbf{(Apply Projection Property)} We apply Lemma~\\ref{lem:proj-prop-2} to get\n\\[\n\\norm{\\vz^{t+1} -\\vx^\\ast}_2^2 \\geq \\norm{\\vx^{t+1} -\\vx^\\ast}_2^2.\n\\]\nPutting all these together gives us\n\\[\n\\Phi_t \\leq \\frac{1}{2\\eta}\\br{\\norm{\\vx^t-\\vx^\\ast}_2^2 - \\norm{\\vx^{t+1} -\\vx^\\ast}_2^2} + \\frac{\\eta G^2}{2}.\n\\]\nThe above expression is interesting since it tells us that, apart from the $\\eta G^2\/2$ term which is small as $\\eta = \\frac{1}{\\sqrt T}$, the current sub-optimality $\\Phi_t$ is small if the consecutive iterates $\\vx^t$ and $\\vx^{t+1}$ are close to each other (and hence similar in distance from $\\vx^\\ast$).\n\nThis observation is quite useful since it tells us that once PGD
stops making a lot of progress, it actually converges to the optimum! In hindsight, this is to be expected. Since we are using a constant step length, only a vanishing gradient can cause PGD to stop progressing. However, for convex functions, this only happens at global optima. Summing the expression up across time steps, performing telescopic cancellations, using $\\vx^1 = \\vzero$, and dividing throughout by $T$ gives us\n\\begin{align*}\n\\frac{1}{T}\\sum_{t=1}^T\\Phi_t &\\leq \\frac{1}{2\\eta T}\\br{\\norm{\\vx^\\ast}_2^2 - \\norm{\\vx^{T+1} - \\vx^\\ast}_2^2} + \\frac{\\eta G^2}{2}\\\\\n&\\leq \\frac{1}{2\\sqrt T}\\br{\\norm{\\vx^\\ast}_2^2 + G^2},\n\\end{align*}\nwhere in the second step, we have used the fact that $\\norm{\\vx^{T+1} - \\vx^\\ast}_2 \\geq 0$ and $\\eta = 1\/\\sqrt T$. This gives us the claimed result.\n\\end{proof}\n\n\\subsection{Convergence with Strongly Convex and Smooth Functions}\nWe will now prove a stronger guarantee for PGD when the objective function is strongly convex and strongly smooth (see Definition~\\ref{defn:strong-cvx-smooth-fn}).\n\\begin{theorem}\n\\label{thm:pgd-sc-ss-proof}\nLet $f$ be an objective that satisfies the $\\alpha$-SC and $\\beta$-SS properties. Let Algorithm~\\ref{algo:pgd} be executed with step lengths $\\eta_t = \\eta = \\frac{1}{\\beta}$. Then after at most $T = \\bigO{\\frac{\\beta}{\\alpha}\\log\\frac{\\beta}{\\epsilon}}$ steps, we have $f(\\vx^T) \\leq f(\\vx^\\ast) + \\epsilon$.\n\\end{theorem}\n\nThis result is particularly nice since it ensures that the final iterate $\\hat\\vx_{\\text{final}} = \\vx^T$ converges, allowing us to use OPTION 1 in Algorithm~\\ref{algo:pgd} when the objective is SC\/SS. A further advantage is the accelerated rate of convergence.
Whereas for general convex functions, PGD requires $\\bigO{\\frac{1}{\\epsilon^2}}$ iterations to reach an $\\epsilon$-optimal solution, for SC\/SS functions, it requires only $\\bigO{\\log\\frac{1}{\\epsilon}}$ iterations.\n\nThe reader would notice the insistence on the step length being set to $\\eta = \\frac{1}{\\beta}$. In fact, the proof we show below crucially uses this setting. In practice, for many problems, $\\beta$ may not be known to us or may be expensive to compute, which presents a problem. However, as it turns out, it is not necessary to set the step length exactly to $1\/\\beta$. The result can be shown to hold even for values of $\\eta < 1\/\\beta$ which are nevertheless large enough, but the proof becomes more involved. In practice, the step length is tuned globally by doing a grid search over several $\\eta$ values, or per-iteration using line search mechanisms, to obtain a step length value that assures good convergence rates.\n\n\\begin{proof}[Proof (of Theorem~\\ref{thm:pgd-sc-ss-proof}).]\nThis proof is a nice opportunity for the reader to see how the SC\/SS properties are utilized in a convergence analysis. As with convexity in the proof of Theorem~\\ref{thm:pgd-conv-proof}, the strong convexity property is a global property that will be useful in assessing the progress made so far by relating the optimal point $\\vx^\\ast$ with the current iterate $\\vx^t$. Strong smoothness, on the other hand, will be used locally to show that the procedure makes significant progress between iterates.\n\nWe will prove the result by showing that after at most $T = \\bigO{\\frac{\\beta}{\\alpha}\\log\\frac{\\beta}{\\epsilon}}$ steps, we will have $\\norm{\\vx^T - \\vx^\\ast}_2^2 \\leq \\frac{2\\epsilon}{\\beta}$. This already tells us that we are very close to the optimum. However, we can use this to show that $\\vx^T$ is $\\epsilon$-optimal in function value as well.
Since we are very close to the optimum, it makes sense to apply strong smoothness to upper bound the sub-optimality as follows:\n\\[\nf(\\vx^T) \\leq f(\\vx^\\ast) + \\ip{\\nabla f(\\vx^\\ast)}{\\vx^T - \\vx^\\ast} + \\frac{\\beta}{2}\\norm{\\vx^T-\\vx^\\ast}_2^2.\n\\]\nNow, since $\\vx^\\ast$ is an optimal point for the constrained optimization problem with a convex constraint set $\\cC$, the first order optimality condition \\citep[see][Proposition 1.3]{Bubeck2015} gives us $\\ip{\\nabla f(\\vx^\\ast)}{\\vx - \\vx^\\ast} \\leq 0$ for any $\\vx \\in \\cC$. Applying this condition with $\\vx = \\vx^T$ gives us\n\\[\nf(\\vx^T) - f(\\vx^\\ast) \\leq \\frac{\\beta}{2}\\norm{\\vx^T-\\vx^\\ast}_2^2 \\leq \\epsilon,\n\\]\nwhich proves that $\\vx^T$ is an $\\epsilon$-optimal point. We now show $\\norm{\\vx^T - \\vx^\\ast}_2^2 \\leq \\frac{2\\epsilon}{\\beta}$. Given that we wish to show convergence in terms of the iterates, and not in terms of the function values, as we did in Theorem~\\ref{thm:pgd-conv-proof}, a natural potential function for this analysis is $\\Phi_t = \\norm{\\vx^t - \\vx^\\ast}_2^2$.\\\\\n\n\\noindent\\textbf{(Apply Strong Smoothness)} As discussed before, we use strong smoothness to show that PGD always makes significant progress in each iteration.\n\\begin{align*}\n&f(\\vx^{t+1}) - f(\\vx^t) \\leq \\ip{\\nabla f(\\vx^t)}{\\vx^{t+1}-\\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^t-\\vx^{t+1}}_2^2\\\\\n&= \\ip{\\nabla f(\\vx^t)}{\\vx^{t+1}-\\vx^\\ast} + \\ip{\\nabla f(\\vx^t)}{\\vx^\\ast-\\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^t-\\vx^{t+1}}_2^2\\\\\n&= \\frac{1}{\\eta}\\ip{\\vx^t - \\vz^{t+1}}{\\vx^{t+1}-\\vx^\\ast} + \\ip{\\nabla f(\\vx^t)}{\\vx^\\ast-\\vx^t} + \\frac{\\beta}{2}\\norm{\\vx^t-\\vx^{t+1}}_2^2\n\\end{align*}\n\n\\noindent\\textbf{(Apply Projection Rule)} The above expression contains an unwieldy term $\\vz^{t+1}$.
Since this term only appears during projection steps, we eliminate it by applying Projection Property-I (Lemma~\\ref{lem:proj-prop-1}) to get\n\\begin{align*}\n\\ip{\\vx^t - \\vz^{t+1}}{\\vx^{t+1}-\\vx^\\ast} &\\leq \\ip{\\vx^t - \\vx^{t+1}}{\\vx^{t+1}-\\vx^\\ast}\\\\\n&= \\frac{\\norm{\\vx^t-\\vx^\\ast}_2^2 - \\norm{\\vx^t - \\vx^{t+1}}_2^2 - \\norm{\\vx^{t+1}-\\vx^\\ast}_2^2}{2}.\n\\end{align*}\nUsing $\\eta = 1\/\\beta$ and combining the above results gives us\n\\[\nf(\\vx^{t+1}) - f(\\vx^t) \\leq \\ip{\\nabla f(\\vx^t)}{\\vx^\\ast-\\vx^t} + \\frac{\\beta}{2}\\br{\\norm{\\vx^t-\\vx^\\ast}_2^2 - \\norm{\\vx^{t+1}-\\vx^\\ast}_2^2}.\n\\]\n\n\\noindent\\textbf{(Apply Strong Convexity)} The above expression is perfect for a telescoping step but for the inner product term. Fortunately, this can be eliminated using strong convexity.\n\\[\n\\ip{\\nabla f(\\vx^t)}{\\vx^\\ast-\\vx^t} \\leq f(\\vx^\\ast) - f(\\vx^t) - \\frac{\\alpha}{2}\\norm{\\vx^t-\\vx^\\ast}_2^2.\n\\]\nCombining this with the above gives us\n\\[\nf(\\vx^{t+1}) - f(\\vx^\\ast) \\leq \\frac{\\beta-\\alpha}{2}\\norm{\\vx^t-\\vx^\\ast}_2^2 - \\frac{\\beta}{2}\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2.\n\\]\nThe above form seems almost ready for a telescoping exercise. However, something much stronger can be said here, especially due to the $\\frac{-\\alpha}{2}\\norm{\\vx^t-\\vx^\\ast}_2^2$ term. Notice that we have $f(\\vx^{t+1}) \\geq f(\\vx^\\ast)$. This means\n\\[\n\\frac{\\beta}{2}\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2 \\leq \\frac{\\beta-\\alpha}{2}\\norm{\\vx^t-\\vx^\\ast}_2^2,\n\\]\nwhich can be written as\n\\[\n\\Phi_{t+1} \\leq \\br{1 - \\frac{\\alpha}{\\beta}}\\Phi_t \\leq \\exp\\br{-\\frac{\\alpha}{\\beta}}\\Phi_t,\n\\]\nwhere we have used the fact that $1 - x \\leq \\exp(-x)$ for all $x \\in \\bR$. What we have arrived at is a very powerful result as it assures us that the potential value goes down by a constant fraction at every iteration!
Applying this result recursively gives us\n\\[\n\\Phi_{t+1} \\leq \\exp\\br{-\\frac{\\alpha t}{\\beta}}\\Phi_1 = \\exp\\br{-\\frac{\\alpha t}{\\beta}}\\norm{\\vx^\\ast}_2^2,\n\\]\nsince $\\vx^1 = \\vzero$. Thus, we deduce that $\\Phi_T = \\norm{\\vx^T - \\vx^\\ast}_2^2 \\leq \\frac{2\\epsilon}{\\beta}$ after at most $T = \\bigO{\\frac{\\beta}{\\alpha}\\log\\frac{\\beta}{\\epsilon}}$ steps, which finishes the proof.\n\\end{proof}\n\nWe notice that the convergence of the PGD algorithm is of the form $\\norm{\\vx^{t+1}-\\vx^\\ast}_2^2 \\leq \\exp\\br{-\\frac{\\alpha t}{\\beta}}\\norm{\\vx^\\ast}_2^2$. The number $\\kappa := \\frac{\\beta}{\\alpha}$ is the \\emph{condition number} of the optimization problem. The concept of condition number is central to numerical optimization. Below we give an informal and generic definition for the concept. In later sections we will see the condition number appearing repeatedly in the context of the convergence of various optimization algorithms for convex, as well as non-convex problems. The exact numerical form of the condition number (for instance, here it is $\\beta\/\\alpha$) will also change depending on the application at hand. However, in general, all these definitions of condition number will satisfy the following property.\n\n\\begin{definition}[Condition Number - Informal]\n\\label{defn:condition-number}\nThe condition number of a function $f : \\cX \\rightarrow \\bR$ is a scalar $\\kappa \\in \\bR$ that bounds how much the function value can change relative to a perturbation of the input.\n\\end{definition}\n\nFunctions with a small condition number are stable and changes to their input do not affect the function output values too much. However, functions with a large condition number can be quite jumpy and experience abrupt changes in output values even if the input is changed slightly. To gain a deeper appreciation of this concept, consider a differentiable function $f$ that is also $\\alpha$-SC and $\\beta$-SS.
Consider a stationary point for $f$, i.e., a point $\\vx$ such that $\\nabla f(\\vx) = \\vzero$. For a general function, such a point can be a local optimum or a saddle point. However, since $f$ is strongly convex, $\\vx$ is the (unique) global minimum\\elink{exer:tools-sc-unique-minima} of $f$. Then we have, for any other point $\\vy$,\n\\[\n\\frac{\\alpha}{2}\\norm{\\vx-\\vy}_2^2 \\leq f(\\vy) - f(\\vx) \\leq \\frac{\\beta}{2}\\norm{\\vx-\\vy}_2^2.\n\\]\nDividing throughout by $\\frac{\\alpha}{2}\\norm{\\vx-\\vy}_2^2$ gives us\n\\[\n\\frac{f(\\vy) - f(\\vx)}{\\frac{\\alpha}{2}\\norm{\\vx-\\vy}_2^2} \\in \\bs{1,\\frac{\\beta}{\\alpha}} = [1,\\kappa].\n\\]\nThus, upon perturbing the input from the global minimum $\\vx$ to a point $\\norm{\\vx-\\vy}_2 =: \\epsilon$ distance away, the function value changes in a controlled manner: it goes up by at least $\\frac{\\alpha\\epsilon^2}{2}$ but by at most $\\kappa\\cdot\\frac{\\alpha\\epsilon^2}{2}$. Such a well-behaved response to perturbations is easy for optimization algorithms to exploit to give fast convergence.\n\nThe condition number of the objective function can significantly affect the convergence rate of algorithms. Indeed, if $\\kappa = \\frac{\\beta}{\\alpha}$ is small, then $\\exp\\br{-\\frac{\\alpha}{\\beta}} = \\exp\\br{-\\frac{1}{\\kappa}}$ would be small, ensuring fast convergence. However, if $\\kappa \\gg 1$ then $\\exp\\br{-\\frac{1}{\\kappa}} \\approx 1$ and the procedure might offer slow convergence.\n\n\\section{Exercises}\n\\begin{exer}\n\\label{exer:tools-ss-cvx}\nShow that strong smoothness does not imply convexity by constructing a non-convex function $f: \\bR^p \\rightarrow \\bR$ that is $1$-SS.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-diff-lip}\nShow that if a differentiable function $f$ has bounded gradients, i.e., $\\norm{\\nabla f(\\vx)}_2 \\leq G$ for all $\\vx \\in \\bR^p$, then $f$ is Lipschitz.
What is its Lipschitz constant?\\\\\n\\textit{Hint}: use the mean value theorem.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-proj-l2}\nShow that for any point $\\vz \\notin \\cB_2(r)$, the projection onto the ball is given by $\\Pi_{\\cB_2(r)}(\\vz) = \\frac{r}{\\norm{\\vz}_2}\\cdot\\vz$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-hor}\nShow that a \\emph{horizon-oblivious} setting of $\\eta_t = \\frac{1}{\\sqrt t}$ while executing the PGD algorithm with a convex function with bounded gradients also ensures convergence.\\\\\n\\textit{Hint}: the convergence rates may be a bit different for this setting.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-sc-unique-minima}\nShow that if $f: \\bR^p \\rightarrow \\bR$ is a strongly convex function that is differentiable, then there is a unique point $\\vx^\\ast \\in \\bR^p$ that minimizes the function value $f$ i.e., $f(\\vx^\\ast) = \\min_{\\vx \\in \\bR^p}\\ f(\\vx)$.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-nonconv-sp}\nShow that the set of sparse vectors $\\cB_0(s) \\subset \\bR^p$ is non-convex for any $s < p$. What happens when $s = p$?\n\\end{exer}\n\\begin{exer}\n\\label{exer:pgd-nc-rank}\nShow that $\\cB_\\text{rank}(r) \\subseteq \\bR^{n \\times n}$, the set of $n \\times n$ matrices with rank at most $r$, is non-convex for any $r < n$. What happens when $r = n$?\n\\end{exer}\n\\begin{exer}\nConsider the Cartesian product set $\\cC = \\bR^{m \\times r} \\times \\bR^{n \\times r}$. Show that it is convex.\n\\end{exer}\n\\begin{exer}\nConsider a least squares optimization problem with a strongly convex and smooth objective. 
Show that the condition number of this problem is equal to the condition number of the Hessian matrix of the objective function.\n\\end{exer}\n\\begin{exer}\n\\label{exer:tools-sc-unique-minima-cons}\nShow that if $f: \\bR^p \\rightarrow \\bR$ is a strongly convex function that is differentiable, then optimization problems with $f$ as an objective and a convex constraint set $\\cC$ always have a unique solution, i.e., there is a unique point $\\vx^\\ast \\in \\cC$ that is a solution to the optimization problem $\\arg\\min_{\\vx\\in\\cC}\\ f(\\vx)$. This generalizes the result in Exercise~\\ref{exer:tools-sc-unique-minima}.\\\\\n\\textit{Hint}: use the first order optimality condition (see proof of Theorem~\\ref{thm:pgd-sc-ss-proof}).\n\\end{exer}\n\n\\section{Bibliographic Notes}\n\\label{sec:tools-bib}\nThe sole aim of this discussion was to give a self-contained introduction to concepts and tools in convex analysis and descent algorithms in order to seamlessly introduce non-convex optimization techniques and their applications in subsequent sections. However, given the limited scope of this discussion, we are unable to cover several useful and interesting results concerning convex functions and optimization techniques. We refer the reader to literature in the field of optimization theory for a much more relaxed and deeper introduction to the area of convex optimization. Some excellent examples include \\citep{Bertsekas2016,BoydV2004,Bubeck2015,Nesterov2013,SraNW2011}.\n\n\\section{Introduction}\n\nUltra-Diffuse Galaxies (UDGs, \\citealt{vandokkum}) are an extreme class of low surface brightness galaxies (LSB, e.g. \\citealt{sandage}; \\citealt{impey}; \\citealt{conselice}) with dwarf-like surface brightness ($\\mu_g \\gtrsim$ 24 mag arcsec$^{-2}$) but L$^\\star$-like effective radius ($R_e \\gtrsim$ 1.5 kpc).
They have colors of passively evolving stellar populations (although some of them, especially in the field, can host ongoing star formation), exponential-like S\\'ersic profiles, and an axis ratio distribution with a peak around $b\/a \\sim$ 0.7$-$0.8 (e.g. \\citealt{koda}; \\citealt{vanderburg}; \\citealt{roman}; \\citealt{Aku}, \\citealt{largepaper}).\n\n\nIn recent years UDGs have drawn a lot of attention because of their potential to test galaxy formation and evolution models under such extreme conditions. At the same time, the discovery of UDGs in a range of environments (e.g. \\citealt{vandokkum}; \\citealt{merritt}; \\citealt{martinez}; \\citealt{vanderburg}; \\citealt{roman2}) represents a major opportunity to study the effects of environment on shaping galaxies. \n\nOne of the first noticed characteristics of UDGs in galaxy clusters was the relation between the number of UDGs and the mass of their host cluster: the N(UDGs)--M$_{200}$\\footnote{Here M$_{200}$ is used as a proxy of the cluster mass. It is defined as the mass enclosed by R$_{200}$, the radius at which the mean density is 200 times the critical density of the Universe.} relation (\\citealt{vanderburg}, hereafter vdB+16). This relation is potentially very interesting for studying the role of the environment in shaping UDGs, since it gives information about the environments in which UDGs are preferentially found.\nvdB+16 noticed a very tight relation: N(UDGs) $\\propto$ M$_{200} ^{0.93 \\pm 0.16}$, where N(UDGs) is the number of UDGs inside R$_{200}$. \\citet{roman2} (hereafter RT17b) extended this relation to galaxy groups and found N(UDGs) $\\propto$ M$_{200} ^{0.85 \\pm 0.05}$, a 3$\\sigma$ sublinear relation. This slope implies that UDGs are more abundant, per unit host cluster mass, in low-mass systems.
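Slopes such as these are typically obtained from a linear fit in log-log space, since a power law N(UDGs) $\propto$ M$_{200}^{a}$ becomes a straight line there. A minimal sketch with made-up illustrative numbers (not the measured samples of vdB+16 or RT17b):

```python
import numpy as np

# Hypothetical (M200, N_UDG) pairs, for illustration only -- not real data.
m200 = np.array([2e13, 5e13, 2e14, 7e14, 2e15])   # host halo masses [M_sun]
n_udg = np.array([8, 17, 60, 180, 470])           # UDG counts within R200

# N(UDGs) ~ M200^a: log10 N = a * log10 M + b, so the slope a
# falls out of a least-squares fit in log-log space.
a, b = np.polyfit(np.log10(m200), np.log10(n_udg), 1)
```

A fitted slope $a$ below one would indicate relatively more UDGs per unit host mass in low-mass systems; a slope above one, the opposite.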
RT17b suggested that a slope less than one is an indication that UDGs either preferentially form in low-mass groups or are more efficiently destroyed in very massive clusters, supporting a picture in which UDGs are accreted from groups and\/or the field onto clusters, where some UDGs get destroyed due to interactions with the environment.\nHowever, \\cite{vanderburg2}, also studying the low-mass regime of the relation, found a slope of 1.11 $\\pm$ 0.07, concluding that UDGs are more abundant, per unit cluster mass, towards more massive clusters.\nThe nature of this relation is thus not fully determined, and given its importance for our general understanding of UDGs and UDG formation models (e.g. \\citealt{amorisco}; \\citealt{amoriscohaloes}; \\citealt{carleton}), it is essential to reconcile these discrepancies, particularly whether the slope is linear or not.\n\nAnother clue to understanding the impact of the cluster environment on UDGs is their deficit in the inner regions of clusters (e.g. \\citealt{vandokkum}; \\citealt{merritt}; vdB+16; \\citealt{wittmann}; \\citealt{Aku}). \nWhile this could be a bias due to lower detectability in cluster centres, it is also possible that UDGs are unable to survive due to the strong potential forces (see for instance \\citealt{merritt}, and the detailed analysis by \\citealt{wittmann}).\nHowever, a consistent investigation of this effect with homogeneous data has not yet been undertaken.\\\\\n\n\\noindent\nWith the aim of understanding more about the formation and evolution of UDGs in galaxy clusters, we present here the results of a homogeneous analysis of both phenomena.\nUsing data on new UDG detections in eight galaxy clusters from \\cite{largepaper}, hereafter Paper II, we perform a detailed comparison of our sample with UDGs in clusters in the literature. The rest of this work is organized as follows. In Section \\ref{sec:sample} we present our data.
In Section \\ref{sec:results} we describe our findings regarding the abundance of UDGs and their central depletion. Finally, in Section \\ref{sec:discussion} we discuss our results and summarise our conclusions. \n\nThroughout this work we use magnitudes in the AB system and we adopt a $\\Lambda$CDM cosmology with $\\Omega_\\textnormal{m}$ = 0.3, $\\Omega_{\\Lambda}$ = 0.7 and H$_\\textnormal{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$.\n\n\n\n\\section{Cluster sample}\n\\label{sec:sample}\nOur observations come from a deep photometric survey (PIs Peletier \\& Aguerri) that our team is carrying out of a set of X-ray selected, nearby (0.02 < $z$ < 0.04) galaxy clusters, which will be followed up with the new WEAVE spectrograph \\citep{dalton}: the Kapteyn IAC WEAVE INT Clusters Survey (\\texttt{KIWICS}). For these observations we use the 2.5-m Isaac Newton Telescope of the Roque de los Muchachos Observatory on La Palma, Spain. The observations from \\texttt{KIWICS} are ideal for studying the evolution of LSBs at low redshift, covering at least 1 $R_{200}$ (in projection) in each cluster, but the fields of view are usually larger.\n\nThe images consist of deep $r$- (total integration time $\\sim$1.5h) and $g$-band (total integration time $\\sim$0.5h) observations, reduced using the \\texttt{Astro-WISE} environment \\citep{astrowise}. For illustration, the mean depth of the $r$-band in our whole sample is $\\sim$29.3 mag arcsec$^{-2}$ when measured at a 3$\\sigma$ level and averaged over boxes of 10 arcsec, comparable to the depth of many observations of UDGs in the literature (see RT17b). \nA detailed description of the observational strategy, data reduction processes, and the search for UDGs is given in Paper II, so we just briefly summarise the main aspects.\n\nThe sample consists of a set of eight relatively well-virialized and isolated clusters (see Table \\ref{tab:sample}).
We follow the strategy of detecting the potential UDG candidates using \\texttt{SExtractor} \\citep{sextractor} (based on their size and surface brightness) and then fitting the galaxies with \\texttt{GALFIT} \\citep{galfit}, using the pipeline described in \\cite{Aku} and \\cite{Aku2} to retrieve the structural parameters. Simulations and sanity checks are done to determine the detection limits and completeness level of the sample, and to ensure its purity. These simulations (cf. Figure 3 in Paper II) show that the completeness level for our sample is similar to that of vdB+16, and they help us to find an efficient way to run \\texttt{SExtractor}, lowering the rate of false positives.\nIn Paper II we find 442 UDG candidates in these eight clusters, 247 being at projected clustercentric distances within R$_{200}$. The definition of a UDG\\footnote{We realize that by allowing high S\\'ersic ($n < 4$) objects to be included we are allowing relatively concentrated objects, but, in agreement with the literature, we do not want to restrict our sample by excluding these objects a priori. In any case, the contribution of galaxies with $n \\geq 2$ is < 3\\%. The cut in color also aims to reject background objects that might look like UDGs but do not have colors representative of stellar populations of low-$z$ galaxies.} used in Paper II is galaxies with mean effective surface brightness $\\langle\\mu(r,R_e)\\rangle \\geq$ 24.0 mag arcsec$^{-2}$, effective radius $R_e \\geq$ 1.5 kpc, S\\'ersic index $n < 4$, and color $g-r < 1.2$ mag. All the galaxies are corrected for Galactic extinction (taken from \\citealt{schlafly}), and while the effect is marginal given the redshifts of our sample, $k$-corrections (from \\citealt{chilingarian}) and surface brightness dimming (from \\citealt{tolman1,tolman2}) corrections are also taken into account.
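The UDG definition above translates directly into a boolean mask over a catalogue of fitted structural parameters. A minimal sketch, with hypothetical arrays of our own invention rather than the actual Paper II catalogue:

```python
import numpy as np

# Hypothetical catalogue of fitted structural parameters (illustration only).
mu_eff = np.array([24.5, 23.2, 25.1, 24.8])   # <mu(r, Re)> [mag arcsec^-2]
r_eff  = np.array([1.8, 2.0, 1.2, 3.1])       # effective radius Re [kpc]
sersic = np.array([0.9, 1.1, 0.8, 4.5])       # Sersic index n
g_r    = np.array([0.6, 0.5, 0.7, 0.4])       # g - r colour [mag]

# Paper II UDG definition: <mu> >= 24.0, Re >= 1.5 kpc, n < 4, g - r < 1.2.
is_udg = (mu_eff >= 24.0) & (r_eff >= 1.5) & (sersic < 4.0) & (g_r < 1.2)
```

Only the first hypothetical object passes all four cuts; the others fail the surface brightness, size, and Sérsic criteria respectively.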
\nFor a description of the data, the results on the structural parameters and scaling relations of UDGs, and their implications for our understanding of the formation and evolution of UDGs, please refer to Paper II. As an illustration, Figure \\ref{fig:examples} shows examples of some of the UDG candidates found in Paper II.\n\n\t\\begin{table*}\n\t\\caption{ID, coordinates, redshift, M$_{200}$, R$_{200}$, and number of UDGs with $R_{e,c} \\geq$ 1.5 kpc for the clusters in our sample.}\n\t\\label{tab:sample}\n\t\\begin{center}\n\t\\begin{tabular}{lccccccc}\n\t\\hline\n\n Cluster & RA & DEC & Redshift & M$_{200}$ & R$_{200}$ & N(UDGs) & N(UDGs)\\\\\n & (hh:mm:ss) & ($^{\\textnormal{o}}~:~^{'}~:~^{''}$) & & ($\\times$10$^{13}$ M$_\\odot$)& (kpc) & raw & decontaminated\\\\ \\hline\n \\noalign{\\smallskip}\n RXCJ1204.4+0154 & 12:04:25.2 & +01:54:02 & 0.0200 & 2.9 $\\pm$ 0.9 & 630 $\\pm$ 60 & 15 & 14 \\\\ \\noalign{\\smallskip}\n Abell 779 & 09:19:49.2 & +33:45:37 & 0.0231 & 4.0 $\\pm$ 1.2 & 700 $\\pm$ 70 & 21 & 20\\\\ \\noalign{\\smallskip}\n RXCJ1223.1+1037 & 12:23:06.5 & +10:27:26 & 0.0256 & 2.0 $\\pm$ 0.6 & 550 $\\pm$ 60 & 11 & 11 \\\\ \\noalign{\\smallskip} \n MKW 4s & 12:06:37.4 & +28:11:01 & 0.0274 & 2.3 $\\pm$ 0.7 & 580 $\\pm$ 60 & 5 & 5\\\\ \\noalign{\\smallskip}\n RXCJ1714.3+4341 & 17:14:18.6 & +43:41:23 & 0.0275 & 0.6 $\\pm$ 0.2 & 370 $\\pm$ 40 & 7 & 7\\\\ \\noalign{\\smallskip}\n Abell 2634 & 23:38:25.7 & +27:00:45 & 0.0312 & 26.6 $\\pm$ 8.0 & 1310 $\\pm$ 130 & 60 & 55\\\\ \\noalign{\\smallskip}\n Abell 1177 & 11:09:43.1 & +21:45:43 & 0.0319 & 3.8 $\\pm$ 1.1 & 690 $\\pm$ 70 & 9 & 8\\\\ \\noalign{\\smallskip}\n Abell 1314 & 11:34:50.5 & +49:03:28 & 0.0327 & 7.6 $\\pm$ 2.3 & 870 $\\pm$ 90 & 19 & 16\\\\\n \\hline\n\t\\end{tabular}\n\t\\end{center}\n\t\\end{table*}\n \n\\begin{figure*} \n\\centering\n\\includegraphics[scale=0.74]{fig1.eps}\n\\caption{Examples of some of the UDG candidates found in Paper II.
Top panels show the UDGs, middle panels the \\texttt{GALFIT} models with the recovered structural parameters for each, and bottom panels the residuals. The white bars in the top boxes show a scale of 5 arcsec. The effective radii in the middle panels are in kpc, surface brightnesses in mag arcsec$^{-2}$ and colors in magnitudes.}\n\\label{fig:examples}\n\\end{figure*}\n\n\\section{Results}\n\\label{sec:results}\n\\subsection{The frequency of UDGs in nearby clusters}\n\\label{sec:abundance}\nIn this section we aim to study the frequency of UDGs in clusters in a homogeneous way. We use our own dataset and complement it with literature data treated in a consistent way. We show how different considerations lead to different slopes for the relation, but overall, our analysis favors a sublinear behavior when galaxy groups are considered.\n\nA careful and homogeneous analysis of the abundance of UDGs over the full explored range of cluster mass is still missing in most of the literature: when populating the N(UDGs)--M$_{200}$ plane, the numbers of UDGs given in each work are used directly, without taking into account the fact that the definition of a UDG is slightly different in the different papers.\nFurthermore, there are several methods for determining the cluster mass, M$_{200}$, and this may have an impact on the relation.\n\nWe start by studying the relation only for the clusters of Paper II. We take the number of UDGs inside the projected R$_{200}$ of each cluster, and we statistically decontaminate it. The decontamination is done by analyzing observations of a blank field, which was observed under the same conditions and with the same strategy as the cluster sample and} processed following the same procedure (using \\texttt{SExtractor} and \\texttt{GALFIT} to select and characterize the UDG candidates) as for the cluster images; we then measure how many blank-field galaxies would have been classified as UDGs.
\nThe decontaminated number of UDGs in the cluster is then found by subtracting the expected contribution of interlopers from the number of UDGs found in the cluster. \nSince we will later compare with the literature, we consider UDGs with $R_{e,c}\\geq$ 1.5 kpc\\footnote{$R_{e,c}$ = $R_e \\sqrt{b\/a}$; this is slightly more restrictive than using the non-circularized effective radius, $R_e$, as in Paper II.}.\nIt is worth mentioning that we used \\texttt{GALFIT} to analyze all blank-field galaxies larger than the angular size that a galaxy with an effective radius of 1.5 kpc would have at $z = 0.7$, and therefore we can use the same blank-field galaxies to perform background decontamination (for similar data) of UDGs up to that redshift. Additionally, a second blank field was observed and analyzed to check that the results are independent of which blank field is used.\n\nFor each cluster, M$_{200}$ is derived in Paper II by fitting a Gaussian to the redshift distribution of the galaxies in the cluster (from the Sloan Digital Sky Survey (SDSS) and NASA\/IPAC Extragalactic Database (NED) databases), estimating its corresponding velocity dispersion $\\sigma$, and then using the $\\sigma$--$M_{200}$ relation of \\cite{munari}.\n\nFitting\\footnote{We use an Orthogonal Distance Regression fit, taking into account the uncertainties on both axes. The uncertainties on the y-axis are Poissonian, and come from considering uncertainties in the measurement and in the background subtraction.} the abundance relation for these eight clusters, shown in Figure \\ref{fig:abundance}, we find N(UDGs) $\\propto$ M$_{200} ^{0.82 \\pm 0.24}$, a sublinear slope, although only at the $\\sim$1$\\sigma$ level\\footnote{We note that the scatter in the relation of Paper II (i.e. considering non-circularized effective radii) is smaller, with a slope of 0.81 $\\pm$ 0.17.}. This slope is consistent, within the uncertainties, with the slopes found by vdB+16 and RT17b.
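The fit described in the footnote can be sketched with \\texttt{scipy.odr}, treating the power law as a straight line in log--log space with uncertainties on both axes. The masses and decontaminated counts below are those of Table \\ref{tab:sample}; the flat 30\\% mass errors and the extra unit background term in the count errors are illustrative simplifications of the error budget, not the exact values used in the paper.

```python
import numpy as np
from scipy import odr

# M200 (10^13 Msun) and decontaminated N(UDGs) from Table 1.
m200 = np.array([2.9, 4.0, 2.0, 2.3, 0.6, 26.6, 3.8, 7.6])
n_udg = np.array([14.0, 20.0, 11.0, 5.0, 7.0, 55.0, 8.0, 16.0])
m_err = 0.3 * m200               # ~30% mass uncertainties, as in Table 1
n_err = np.sqrt(n_udg) + 1.0     # Poisson term plus a background term (illustrative)

def power_law(beta, log_m):
    """log N = slope * log M + intercept."""
    return beta[0] * log_m + beta[1]

# Propagate the errors to log10 space: d(log10 x) = dx / (x ln 10).
data = odr.RealData(np.log10(m200), np.log10(n_udg),
                    sx=m_err / (m200 * np.log(10)),
                    sy=n_err / (n_udg * np.log(10)))
fit = odr.ODR(data, odr.Model(power_law), beta0=[1.0, 0.0]).run()
slope, slope_err = fit.beta[0], fit.sd_beta[0]
```

Unlike ordinary least squares, ODR minimises the orthogonal residuals, so the uncertainties on the masses (x-axis) also influence the fitted slope.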
We call this \\textsc{case 0}.\n\nTo expand the mass range of our study, we complement our sample with the samples of vdB+16 and RT17b. These two samples allow us to perform a homogeneous analysis (these samples have similar depths and completeness, the methodology for the detection and characterization of UDG candidates was the same, and the luminosity and surface brightness distributions also resemble each other), as explained below.\n\nFirst, vdB+16 selected UDGs with the same criteria as Paper II in surface brightness, but using R$_{e,c}$ > 1.5 kpc. Therefore their selection criteria are equivalent to ours. Their original MegaCam magnitudes are converted to our \\texttt{SDSS} filters\\footnote{We use the equations $g_{\\textsc{mega}} = g_{\\textsc{sdss}}-0.153\\times(g_{\\textsc{sdss}}-r_{\\textsc{sdss}})$ and $r_{\\textsc{mega}} = r_{\\textsc{sdss}}-0.024\\times(g_{\\textsc{sdss}}-r_{\\textsc{sdss}})$, as given in \\url{http:\/\/www1.cadc-ccda.hia-iha.nrc-cnrc.gc.ca\/community\/CFHTLS-SG\/docs\/extra\/filters.html}}, and we apply $k$- and surface brightness dimming corrections in the same way as for our sample. Finally, we keep galaxies with $b\/a > 0.1$ and $-1 < g-r < 1.2$; this cut removes $\\sim$ 5\\% of the original sample. To decontaminate this sample, since the depth of the data is similar to ours, and the farthest cluster lies at $z$ < 0.07, we use the same blank field we used for our data, following exactly the same procedure. As a first guess, we use the M$_{200}$ and R$_{200}$ reported by the authors in their paper.\n\nSecond, RT17b selected all the galaxies in their groups with $\\mu(g,0) \\geq$ 23.5 mag arcsec$^{-2}$ and $R_e$ > 1.3 kpc. Assuming a color $g-r$ = 0.6 and a S\\'ersic profile with $n=1$ (the mean color and S\\'ersic index for UDGs according to Paper II), this corresponds to $\\langle\\mu(r,R_e)\\rangle \\geq$ 24.03 mag arcsec$^{-2}$. Therefore we assume that this sample is also complete for our analysis.
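Note that the equations in the footnote above give the MegaCam magnitudes as a function of the SDSS ones, so converting the vdB+16 photometry to SDSS requires inverting this $2\\times2$ linear system. A minimal sketch (the function name is ours):

```python
def megacam_to_sdss(g_mega, r_mega):
    """Invert the CFHT/MegaCam relations
        g_M = g_S - 0.153 (g_S - r_S)
        r_M = r_S - 0.024 (g_S - r_S)
    to recover the SDSS g and r magnitudes."""
    # Subtracting the two relations: g_M - r_M = (1 - 0.153 + 0.024) (g_S - r_S)
    color_sdss = (g_mega - r_mega) / (1.0 - 0.153 + 0.024)
    g_sdss = g_mega + 0.153 * color_sdss
    r_sdss = r_mega + 0.024 * color_sdss
    return g_sdss, r_sdss
```

Subtracting the two relations isolates the SDSS color, after which both magnitudes follow directly.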
We then take the parameters from the S\\'ersic fit and apply $k$-corrections and surface brightness dimming corrections. \nThese authors performed a very careful analysis looking for possible interlopers, and did not find any other LSB near their fields. Furthermore, their data include two colors, and both colors of their galaxies are in agreement with those of spectroscopic members. Moreover, the groups are nearby ($z$ = 0.0141--0.0266) Hickson Compact Groups (HCGs, \\citealt{hickson}) that by definition are isolated structures, and the galaxies are at relatively small projected distances from the centres of the groups. Finally, the association of several blue galaxies in RT17b with their corresponding HCG has been confirmed by spectroscopic observations \\citep{spekkens}. For these reasons, we do not apply extra background decontamination to this dataset. For M$_{200}$ and R$_{200}$, as in RT17b, we take the mean $\\sigma$ values of the group and group+environment from \\cite{hcgs}, and treat the data in the same way as ours. Of the eleven galaxies studied in RT17b, four fulfill our UDG definition and are at projected clustercentric distances < 1 R$_{200}$, so we include them in our analysis.\n\nTwo other papers would be particularly interesting to compare with: the groups by \\cite{vanderburg2} and the very massive clusters by \\cite{lee}. However, the characteristics and methodologies of those works are not fully consistent with the rest of the data used here.
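The equivalence used above between the RT17b cut and our selection can be verified explicitly; the numerical offsets below are our own evaluation of the standard S\\'ersic relations (e.g. Graham \\& Driver 2005) for $n = 1$, quoted only as a consistency check. For $n = 1$, $b_1 \\simeq 1.678$, and the central, effective and mean effective surface brightnesses are related by
\\[
\\mu_e = \\mu_0 + \\frac{2.5\\, b_1}{\\ln 10} \\simeq \\mu_0 + 1.82, \\qquad \\langle\\mu\\rangle_e \\simeq \\mu_e - 0.70,
\\]
which gives $\\langle\\mu(g,R_e)\\rangle \\simeq 23.5 + 1.82 - 0.70 = 24.62$ mag arcsec$^{-2}$; subtracting the assumed color $g-r = 0.6$ then yields $\\langle\\mu(r,R_e)\\rangle \\simeq 24.02$ mag arcsec$^{-2}$, matching the quoted value of 24.03 to within rounding.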
In the case of \\cite{vanderburg2}, i) their dataset is $\\sim$ 0.5 mag shallower than ours, ii) it has no color constraints (which can increase the presence of interlopers; perhaps that also explains the relatively high S\\'ersic indices they found), and iii) it goes up to $z \\sim$ 0.1, so the purity can be affected, the cosmological dimming is as high as $\\sim$ 0.4 mag arcsec$^{-2}$, and the effects of having a PSF of the size of UDGs at $z \\sim$ 0.1 might also play a role; all this may affect to different degrees the results by \\cite{vanderburg2}, but in any case our analysis is not fully compatible with that work. In the case of \\cite{lee}, their clusters are at redshifts higher than what we can decontaminate with our blank field, and the extrapolation from the observed number of UDGs to the reported number inside R$_{200}$ is very large. Given these concerns, we decided not to include those works, for the sake of homogeneity.\n\nWe therefore have a homogeneous set of 19 systems, as shown in Figure \\ref{fig:abundance}. As indicated in Table \\ref{tab:slopes}, we find the fit N(UDGs) $\\propto$ M$_{200}^{0.74 \\pm 0.04}$, again a sublinear slope (hereafter \\textsc{case 1}). \\cite{vanderburg2} claimed that the $\\sim$10$^{12}$ M$_\\odot$ groups of RT17b may not be fully representative if most $\\sim$10$^{12}$ M$_\\odot$ haloes do not host UDGs. While this is not yet clear, for the sake of completeness we also fit the relation without taking into account the two lowest mass groups (\\textsc{case 2}); this increases the slope to N(UDGs) $\\propto$ M$_{200}^{0.84 \\pm 0.07}$, still shallower than 1.\n\n\nWe also check the effect that different mass determinations have on the relation. In particular, the masses in vdB+16 come from the dynamical study by \\cite{sifon} and probably suffer from different systematic effects than the masses of our sample or the rest of the literature, since the membership criteria and $\\sigma-$M$_{200}$ calibrations are different.
To study this we derive the M$_{200}$ and R$_{200}$ for the vdB+16 clusters in the same way as for our sample. The differences in the inferred masses are significant, with a median (mean) factor of 3.2 (3.8) and a standard deviation of 2; the M$_{200}$ values we derive are always smaller than the original dynamical masses. Taking this into account, we decide to perform two more fits considering the newly derived M$_{200}$ values for the vdB+16 sample, which of course affects R$_{200}$ and therefore N(UDGs). We also note that the redshift distributions near these massive clusters are not as normally distributed as for our sample, which might affect the purity of the sample.\n\nIn any case, if we now consider the 19 systems with the new mass measurements (\\textsc{case 3}), we find, as for \\textsc{case 2}, N(UDGs) $\\propto$ M$_{200}^{0.84 \\pm 0.07}$ (although the relation has a different intercept). Finally, we consider the new mass measurements without considering the two lowest mass groups (\\textsc{case 4}), and this significantly increases the slope (and error) to N(UDGs) $\\propto$ M$_{200}^{1.06 \\pm 0.12}$. \n\\begin{table*}\n\\caption{Slope of the N(UDGs)--M$_{200}$ relation for the cases described in the text. The second column indicates whether the M$_{200}$ determinations are homogeneous with ours (for the vdB+16 clusters, whether we re-derived M$_{200}$ in the same way as for our sample), the third column indicates if the two lowest mass groups from RT17b are used, the fourth column specifies if only galaxies with $n < 2$ were used, and the last column gives the slope of the relation for each case.}\n\\label{tab:slopes}\n\\begin{center}\n\\begin{tabular}{lcccc}\n\t\\hline\n\n \\textsc{case} & M$_{200}$ homogeneous? & RT17b $\\sim$10$^{12}$ M$_\\odot$ groups? & constraint $n<2$?
& slope\\\\ \\hline\n \\noalign{\\smallskip}\n \\textcolor{black}{vdB+16} & yes & no & no & 0.93 $\\pm$ 0.16\\\\ \\noalign{\\smallskip}\n \\textcolor{black}{RT17b} & no & yes & no & 0.85 $\\pm$ 0.05\\\\ \\noalign{\\smallskip}\n \\textcolor{black}{vdB+17} & no & no & no & 1.11 $\\pm$ 0.07\\\\ \\noalign{\\smallskip}\n \\textcolor{blue}{\\textsc{case 0}} & --- & --- & no & 0.82 $\\pm$ 0.24\\\\ \\noalign{\\smallskip}\n \\textcolor{red}{\\textsc{case 1}} & no & yes & no & 0.74 $\\pm$ 0.04\\\\ \\noalign{\\smallskip}\n \\textcolor{orange}{\\textsc{case 2}} & no & no & no & 0.84 $\\pm$ 0.07 \\\\ \\noalign{\\smallskip}\n \\textcolor{crimson}{\\textsc{case 3}} & yes & yes & no & 0.84 $\\pm$ 0.07\\\\ \\noalign{\\smallskip}\n \\textcolor{darkred}{\\textsc{case 4}} & yes & no & no & 1.06 $\\pm$ 0.12\\\\ \\noalign{\\smallskip}\n \\textcolor{magenta}{\\textsc{case 5}} & yes & yes & yes & 0.77 $\\pm$ 0.06\\\\ \\noalign{\\smallskip}\n \\textcolor{pink}{\\textsc{case 6}} & yes & no & yes & 0.96 $\\pm$ 0.11\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.627]{fig2.eps}\n\\caption{Abundance of UDGs. \\textit{Top:} The N(UDGs)--M$_{200}$ relation using the original M$_{200}$ values of the clusters in vdB+16. The corresponding fits, considering the 10$^{12}$ M$_\\odot$ groups by RT17b (\\textsc{case 1}) and without considering them (\\textsc{case 2}), are shown with solid lines. \\textit{Middle:} Same as the top panel but considering our own mass determinations for the clusters in vdB+16 and the corresponding fits considering (\\textsc{case 3}) and ignoring (\\textsc{case 4}) the 10$^{12}$ M$_\\odot$ groups. \\textit{Bottom:} Abundance of UDGs considering only galaxies with $n < 2$, taking into account the 10$^{12}$ M$_\\odot$ groups (\\textsc{case 5}) and not taking them into account (\\textsc{case 6}).
See the text for details.}\n\\label{fig:abundance}\n\\end{figure}\n\n\nMotivated by the almost non-existent population of highly resolved UDGs with S\\'ersic index > 2 (e.g. RT17b; \\citealt{trujillo}; \\citealt{Aku}; \\citealt{cohen}), it is worth exploring how the abundance relation behaves if we consider in our analysis only galaxies with $n < 2$. We study this in \\textsc{case 5} and \\textsc{case 6}, considering and excluding the 10$^{12}$ M$_\\odot$ groups, respectively. The result is shown in the bottom panel of Figure \\ref{fig:abundance}. As expected, since the sample of vdB+16 contains a higher contribution of galaxies with $n > 2$ than ours, the new constraint lowers the slope of the relation. As stated in Table \\ref{tab:slopes}, \\textsc{case 5} has a slope of 0.77 $\\pm$ 0.06, while \\textsc{case 6} has a slope of 0.96 $\\pm$ 0.11.\n\n\nOverall, our analysis shows the importance of applying the same selection criteria, as well as the same mass estimation methods, when studying the abundance of UDGs. It also indicates the dependence of the slope on the cluster mass regime considered, as we discuss in Section \\ref{subsec:disc1}.\n\n\\subsection{The lack of UDGs in the centre of clusters}\n\\label{sec:depletion}\nTo study the lack of UDGs in the innermost regions of clusters, we look at the projected distances at which the innermost UDGs appear. \nFor this, we plot these distances as a function of the cluster mass in Figure \\ref{fig:dist2first} (for the two mass estimations for the clusters from vdB+16). As can be seen, a striking relation appears, where UDGs in low-mass systems\\footnote{The groups by RT17b are not used here, for the sake of consistency with the derived surface density profile; see below.} appear at larger distances (relative to R$_{200}$). While an initial conclusion that low-mass groups destroy the UDGs in their inner regions (supported also theoretically, cf.
\\citealt{mihos}) can be made, this apparent effect is an artefact: the probability of finding a UDG at any position is higher for more massive clusters, because they have more UDGs than groups.\nTherefore, we decide to compare these empirical points with a simple prediction based on what we could expect from the observed distribution of UDGs. \n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.68]{fig3.eps}\n\\caption{Projected distances to the innermost UDGs as a function of the host cluster mass, M$_{200}$. Left panel shows the relation for our sample and the sample of vdB+16 with their original mass determinations, while the right panel shows the same but for our mass estimation of their clusters. The predicted positions using the Einasto profile and the different cases (different N(UDGs)-M$_{200}$ relations) are shown for each panel. The crosses and numbers in black show the physical distances corresponding to the distance and cluster mass indicated by the dotted lines, for clusters at $z=0$. See the text for details.}\n\\label{fig:dist2first}\n\\end{figure*}\n\nvdB+16 used an Einasto profile \\citep{einasto} to characterize the radial surface density profile of UDGs, demonstrating that it produces a reasonable fit to their data.\nIn Paper II the surface density distribution of our sample is also studied, finding strong similarities between our profile and the profile from vdB+16.\nMotivated by this similar behavior, we combine both profiles to build a general Einasto profile for the UDGs in both samples. The details about the derivation of this profile can be found in Paper II.\n\nAfter deriving the general Einasto profile, we convert it to a probability function. 
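The conversion of the combined Einasto profile into a probability function, and the random placement used in our experiment, can be sketched with inverse-transform sampling. The profile parameters below are illustrative placeholders, not the fitted values of the combined Paper II + vdB+16 profile.

```python
import numpy as np

rng = np.random.default_rng(42)

def einasto_surface_density(r, r_s=0.64, alpha=0.92):
    """Projected Einasto-like profile; r in units of R200.
    r_s and alpha are illustrative, not the fitted values."""
    return np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

# Probability of finding a UDG between r and r + dr scales as Sigma(r) * 2*pi*r.
r_grid = np.linspace(1e-4, 1.0, 2000)
pdf = einasto_surface_density(r_grid) * 2.0 * np.pi * r_grid
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def sample_radii(n):
    """Inverse-transform sampling of n projected radii (in R200 units)."""
    return np.interp(rng.random(n), cdf, r_grid)

def mean_innermost(n_udg, trials=2000):
    """Expected distance to the innermost UDG for a cluster hosting n_udg members."""
    return np.mean([sample_radii(n_udg).min() for _ in range(trials)])
```

Because richer clusters receive more random draws, the expected distance to the innermost UDG shrinks with N(UDGs); this is the purely geometric effect against which the observed distances are compared.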
Subsequently, we take a range of values in M$_{200}$ and inject the expected number of UDGs according to the N(UDGs)--M$_{200}$ relation (for the different cases mentioned above, excluding \\textsc{case 0}, which only considers our data), at random positions drawn from the probability function of the surface density profile, extrapolated to the cluster centres. The result of this experiment in Figure \\ref{fig:dist2first} shows that the trend remains, with the ratios between the observed and the Einasto-derived distances increasing towards the high-mass clusters. We discuss the possible implications of this result in Section \\ref{subsec:disc2} below.\n\n\n\n\n\\section{Discussion and conclusions}\n\\label{sec:discussion}\n\n\\subsection{The frequency of UDGs}\n\\label{subsec:disc1}\nAs we have shown by using more clusters analyzed in a homogeneous way, different data used to infer the N(UDGs)--M$_{200}$ relation imply different slopes.\nGiven that we are confident about the high purity of our sample, and for the sake of homogeneity, we consider in principle \\textsc{cases 0, 3} and \\textsc{4} as the most relevant. While \\textsc{case 0} is consistent with the other two, \\textsc{cases 3} and \\textsc{4} are statistically different. This shows that the selection of the mass range determines the behavior of the relation: if one considers groups down to $\\sim$10$^{12}$ M$_\\odot$ (RT17b) the slope is sublinear, but otherwise it is consistent with being linear. \nIf one takes into account only galaxies with S\\'ersic index smaller than 2, as in \\textsc{case 5} and \\textsc{case 6}, then the slope becomes sublinear regardless of the mass regime considered, although the uncertainties of \\textsc{case 6} make it consistent with linear too.
It is also worth mentioning that these slopes are all in agreement with the observed abundance of dwarfs in clusters: 0.91 $\\pm$ 0.11 \\citep{dwarfs}.\nNotwithstanding the results by \\cite{vanderburg2} (whose limitations have been explained above), several studies of deep imaging in low-density environments (e.g. \\citealt{merritt}; \\citealt{martinez}; RT17b; \\citealt{muller}; \\citealt{cohen}; Paper II) have found the presence of LSBs and UDGs. We take this as evidence that the 10$^{12}$ M$_\\odot$ groups of RT17b are rather representative, and thus the slope of the abundance relation for UDGs is more likely to be sublinear, as in \\textsc{case 3}, or as in \\textsc{case 5} if one imposes the extra constraint of a small S\\'ersic index. Moreover, a selection bias present in most of the literature on UDGs should be taken into account: blue UDGs are brighter than red UDGs, which allows them to escape the surface brightness criterion used to define a UDG, as discussed in \\cite{trujillo} (for instance, of the eleven galaxies studied in RT17b, only five meet our definition of UDG); this implies that blue analogues of the UDG population are systematically missed. Given that low-density environments have a larger contribution of blue galaxies than high-density environments, it is clear that this selection bias affects galaxy groups more strongly than massive galaxy clusters. Therefore, the slope of the N(UDGs)--M$_{200}$ relation as studied here and in the literature can be seen as an upper limit, and taking into account the contribution of bluer analogues the slope of the abundance relation would be even more sublinear.\n\n\nAs discussed in RT17b and \\cite{vanderburg2}, a sublinear behavior implies that UDGs are more abundant, per unit host cluster mass, in low-mass systems. This could happen if UDGs preferentially form\/survive more easily in groups, or if they are destroyed in high-mass clusters.
An alternative could be that the subhalo mass distribution of UDGs \\citep{amoriscohaloes} is different for clusters of different mass, assuming the halo mass function is approximately universal \\citep{jenkins}; in that case linearity would not be expected.\nA combination of all the above scenarios is of course possible, but with our current data we are not able to distinguish between them. \n\n\\subsection{The depletion of UDGs in the centre of clusters}\n\\label{subsec:disc2}\nA number of works have suggested that the absence of UDGs in the centre of clusters is due to UDGs not being able to survive the strong tidal forces there (e.g. \\citealt{vandokkum,merritt}; vdB+16). In particular, \\cite{wittmann} discussed the topic in detail, arguing in favor of this scenario.\nMoreover, as summarised by \\cite{smith}, simulations often model the tidal effects of the cluster potential as scaling with the inverse cube of the projected distance and with the third power of the size of the galaxy (e.g. \\citealt{byrd}). This means that the innermost galaxies in clusters are expected to be more affected, and given that UDGs are large galaxies, they are even more prone to this effect.
Moreover, as those authors mention, harassment \\citep{moore} can also cause tidal mass loss, and low surface brightness disk galaxies are more susceptible to this loss (see also \\citealt{gnedin}).\nGiven the clear absence of UDGs in basically all the cluster centres studied in the literature, and considering the typical sizes of the central brightest cluster galaxies (BCGs), it is likely that the observed lack of UDGs is not only explained by an observational bias: even for the most massive clusters studied here, the expected half-light radius of a BCG is around $\\sim$ 5-20 kpc (following \\citealt{laporte} and \\citealt{sizes}), but the innermost UDGs appear at larger projected clustercentric distances.\n\n\nIn our systematic study of the lack of UDGs in the innermost regions of clusters, we find hints that the central depletion of UDGs is caused by their destruction in the most massive clusters: the differences between the predicted (from the random placement of UDGs in an Einasto distribution) and observed distances to the innermost UDGs grow systematically towards high-mass systems. Our simulations are rather schematic since, for instance, we treat UDGs as point sources, but they give an idea of the expected positions if no additional physical processes were involved. A more realistic approach would be to inject mock UDGs with their expected structural parameters, following the observed radial surface density distribution, into a set of different cluster images with a diversity of BCGs, and then look for the innermost UDGs, but such an analysis is beyond the scope of this paper.\nAs discussed, potential-driven forces and harassment are likely to be behind the origin of UDGs avoiding the cluster centres. However, considering the predicted distances for the most massive clusters, galactic cannibalism may also be playing a role: for a 10$^{15}$ M$_\\odot$ cluster, the expected distance is $\\sim$ 5-10 kpc, which is of the order of the size of the BCG in that kind of massive cluster.
This could be one of the mechanisms making the slope of the abundance relation sublinear. Conversely, if the slope is in fact linear, there should be a mechanism, effective in massive clusters, that restores the linearity by creating more UDGs.\\\\\n\n\n\\noindent\nTo summarize our results, using new observations of UDGs in eight clusters, and complementing them with literature data, we performed a homogeneous analysis to study the abundance of UDGs and their central depletion in galaxy clusters. Our analysis shows the sensitivity of the slope of the N(UDGs)--M$_{200}$ relation to the data used to derive it.\nBased on the current evidence we support a sublinear behavior for the relation, but we show the effects that different constraints have on the result. We found hints of one mechanism that could be making the abundance slope sublinear: from looking at the projected distance to the innermost UDG, we noticed that the deficit of UDGs increases with the cluster mass, supporting the idea that environmental effects are destroying UDGs in the central regions of high-mass systems.\n\n\n\n\\section*{Acknowledgements}\n\nWe thank the anonymous referee for constructive comments that helped to improve this paper.\nWe thank Remco van der Burg for sharing the data of vdB+16 with us, as well as for many clarifications on it. We also thank Javier Rom\\'an for the data of RT17b and for enlightening discussions about our results. PEMP thanks the Netherlands Research School for Astronomy (NOVA) for the funding via the NOVA MSc Fellowship. PEMP, RFP and AV acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk\\l odowska-Curie grant agreement No. 721463 to the SUNDIAL ITN network. JALA acknowledges support from the Spanish Ministerio de Econom\\'ia y Competitividad (MINECO) through the grants AYA2013-43188-P and AYA2017-83204-P.
AV would like to thank the Vilho, Yrj\\\"o, and Kalle V\\\"ais\\\"al\\\"a Foundation of the Finnish Academy of Science and Letters for the funding during the writing of this paper. We have made an extensive use of SIMBAD and ADS services, as well as of the Python packages NumPy \\citep{numpy}, Matplotlib \\citep{hunter} and Astropy \\citep{astropy}, for which we are thankful.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nRecently, Facebook developed the ``shop section'' that encourages users to buy and sell products on their home pages. \nAmazon launched the ``social media influencer program'' that allows influential users to promote products to their followers.\n\\nop{Twitter introduces the ``Buy now'' button that lets users buy products with a single click.}\nThese and many other incoming innovative business initiatives alike integrate social networks and e-commerce, and introduce the new \\emph{social e-commerce networks}, which contain rich information of the social interactions and e-commercial transactions and activities of users. Essentially, in such a network, every vertex is associated with a transaction database, which records the user's activities. In this paper, we call such networks \\emph{database networks}.\n\nCan we find interesting patterns in social e-commerce networks and gain valuable business insights? There are two critical and complementary aspects of information: the network structures and the transactions associated with vertices. 
Consequently, it is interesting to find overlapping social groups (network structures) such that people in the same group share the same dominant e-commercial activity patterns.\nFor example, we may find a group of people who frequently buy diapers together with beer.\nThe finding of social groups of people associated with the same dominant e-commercial activity patterns provides valuable knowledge about the strong buying habits of social communities, and is very useful in personalized advertising and business marketing.\nWe call this data mining task \\emph{finding theme communities from database networks}.\n\nFinding theme communities is also meaningful in other database networks beyond social e-commerce networks. For example, as reported in our empirical study, we can model location-based social networks and the associated check-in information as database networks, where each user is a vertex and the set of locations that the user checks in at within a period (e.g., a day) is a transaction. A theme community in this database network represents a group of friends who frequently visit the same set of locations. Moreover, in a co-author network, authors are vertices and two authors are linked if they have co-authored before. The network can be enhanced by associating each author with a transaction database where each transaction is the set of keywords in an article published by the author. Then, a theme community in this database network is a group of collaborating authors who share the same research interest.\n\n\\nop{\nA social e-commerce network contain rich information about the social interactions and e-commercial activities of every single user. \n\nThen, we can promote the best brand of diaper and beer to the other young fathers \n\nwealthy ladies who frequently buy \n\nSuch networks often contain rich information about the social interactions and e-commercial activities of every single user.
\n\n\nLarge social network and e-commerce companies have been struggling to seize the promising business opportunity of social e-commerce. \n}\n\n\n\n\\nop{Trump's triumph in the 2016 US presidential election has been a very hot topic in social networks, such as Facebook and Twitter. \nFor each citizen, rather than squeezing the complicated online activities into a single vector of less interpretable numerical features, \nit is much more informative and natural to log one's detailed daily activities with a transaction of interpretable attributes, and comprehensively record his\/her overall online activities using a well organized transactional database.\nThe rich detailed information in such personal databases provide us a much wider range of interpretable perspectives to analyze the arbitrarily overlapped communities in social networks. \n\nTake the US presidential election as an example, according to the CNN news reports,\nsome groups of people like Trump because of his taxation policy and business acumen, \nsome dislike Trump for his immigration policy and racist remarks, \nthere are also a considerable number of people who vote for Trump because they are tired of Hillary's hawkishness and hoping Trump can make America great again.\nAs a matter of fact, if we look from different perspectives of view, citizens holding different opinions about Trump form different communities and one citizen is usually involved in multiple communities.\nIt is interesting to find such groups of people having the same opinion in the same community, as well as discovering such opinions that hold such groups of people together.\nWe call this data mining task \\emph{finding theme communities from database network}.}\n\n\\nop{\nTechnically, we assume a \\emph{database network}, where each edge indicates the positive relationship between two vertices and each vertex is associated with a transaction database named \\emph{vertex database}. 
Every vertex database contains one or more \\emph{transactions} and each transaction is a set of \\emph{items}. \nGiven a database network, finding theme communities aims to find cohesive subgraphs such that every vertex database in the same subgraph \\emph{frequently} contains the same pattern. \nSuch \\emph{pattern} is regarded as the \\emph{theme}, which is a set of items that is contained by one or more transactions in a vertex database.\n\n\n\nMany real world networks naturally exist in the form of database network. Finding theme communities from database network is a useful exercise that enjoys many interesting applications.\nFor example, a social e-commerce network is a database network, where each vertex represents a user, each transaction stores the items bought in the same purchase, and all purchases of the same user constitute a vertex database. Finding theme communities in such database network reveals the groups of friends who share the same strong buying habits in the same group.\nIn location-based social networks, each vertex represents a user, each item is a check-in location, each transaction stores the daily check-in locations of the same user, and the daily check-in locations made by the same user through all the time form a vertex database. In this case, finding theme communities detects the groups of friends such that people in the same group frequently visit the same set of locations.\nThe network of co-authoring scholars is also a database network, where each vertex is an author, each item is a keyword, each transaction stores the keywords of one paper, and all papers published by the same author form a vertex database. Finding theme communities from such database network discovers the communities of co-authors who hold the same major research interest in the same community. \n\\nop{Interestingly, finding theme communities can be used in applications well beyond social networks. 
\nFor example, the ethernet can be smoothly modelled as a database network, where each switch is a vertex, each daily error log of a switch is a transaction and all daily error logs of the same switch form a vertex database. \nFrom this database network, finding theme communities identifies cohesive groups of switches that share the same pattern of errors in the same group.\n}\n}\n\n\nThere are established community detection methods for \\emph{vertex attributed networks} \\nop{where each vertex is associated with a set of attributes }and \\emph{simple networks} without vertex attributes, as reviewed in Section~\\ref{Sec:rw}. Can we adapt existing methods to tackle the problem of finding theme communities? Unfortunately, the answer is no, due to the following challenges.\n\nFirst, the vertex databases in a database network pose a major challenge for the existing methods that work well in vertex attributed networks. \nTo the best of our knowledge, existing methods for vertex attributed networks only consider the case where each vertex is associated with a single set of items. Those methods cannot distinguish the different frequencies of patterns in millions of vertex databases. Moreover, we cannot simply transform each vertex database into a single set of items by taking the union of all transactions in the vertex database, because doing so discards the valuable information about item co-occurrence and pattern frequency.\n\nSecond, every vertex database in a database network contains an exponential number of patterns.\nSince the community detection methods that work in simple networks only detect the theme communities of one pattern at a time, we would have to perform community detection for each of the exponentially many patterns, which is computationally intractable.\n\n\\nop{Simple network community detection methods can be applied to find the theme community of a valid theme. 
\nand each theme may or may not be a \\emph{valid theme} that forms a theme community.\n}\n\n\\nop{\nit is computationally impractical to enumerate all theme communities by applying simple network community detection methods on exponential number of themes.\n}\n\n\\nop{finding theme communities is a chicken and egg problem -- it is challenging to find a cohesive subgraph without a specific pattern and it is also difficult to identify a frequent pattern without a specific subgraph.\nSince no pattern or community is known in advance, we cannot find theme community by community detection methods in simple networks. }\n\n\n\n\\nop{\neach vertex database contains exponential number of patterns and rich information about item co-occurrence and pattern frequency.\nThe exponential number of patterns makes it impractical to enumerate all theme communities by simple network methods. The vertex attributed network methods are incapable of exploiting the rich structural information in vertex databases.}\n\nLast but not least, a large database network usually contains a huge number of arbitrarily overlapping theme communities. Efficiently detecting overlapping theme communities and indexing them for fast query-answering are both challenging problems.\n\nIn this paper, we tackle the novel problem of finding theme communities from database networks and make the following contributions.\n\nFirst, we introduce database network as a network of vertices associated with transaction databases. A database network contains rich information about item co-occurrence, pattern frequency and graph structure. 
This presents novel opportunities to find theme communities with meaningful themes that consist of frequently co-occurring items.\n\nSecond, we motivate the novel problem of finding theme communities from database networks and prove that even counting the number of theme communities in a database network is \\#P-hard.\n\nThird, we design a greedy algorithm to find theme communities by detecting \\emph{maximal pattern trusses} for every pattern in all vertex databases.\nTo improve the efficiency in practice, we first investigate several useful properties of maximal pattern trusses, and then apply these properties to design two effective pruning methods that reduce the time cost by more than two orders of magnitude in our experiments without losing community detection accuracy.\n\nFourth, we advocate the construction of a data warehouse of maximal pattern trusses. To facilitate indexing and query answering in the data warehouse, we show that a maximal pattern truss can be efficiently decomposed and stored in a linked list. Using this decomposition, we design an efficient theme community indexing tree.\nWe also develop an efficient query answering method that takes only 1 second to retrieve 1 million theme communities from the indexing tree.\n\nLast, we report extensive experiments on both real and synthetic datasets and demonstrate the efficiency of the proposed theme community finding method and indexing method. \nA case study on a large database network shows that finding theme communities discovers meaningful groups of collaborating scholars who share the same research interest.\n\nThe rest of the paper is organized as follows. We review related work in Section~\\ref{Sec:rw} and formulate the theme community finding problem in Section~\\ref{sec:prob}. We present a baseline method and a maximal pattern truss detection method in Section~\\ref{sec:baseline}. 
We develop our major theme community finding algorithms in Section~\\ref{sec:algo}, and the indexing and query answering algorithms in Section~\\ref{sec:algo}. We report a systematic empirical study in Section~\\ref{sec:exp} and conclude the paper in Section~\\ref{sec:con}.\n\n\n\\nop{\nThird, we formulate the task of finding theme communities in database network as finding maximal pattern trusses.\nA \\emph{maximal pattern truss} is a cohesive subgraph in a \\emph{theme network}, which is induced from a database network by retaining all vertices containing the same pattern. \n\nSince each pattern corresponds to a unique theme network, pattern truss naturally allows arbitrary overlaps between theme communities with different themes.\n\nSecond, pattern truss smoothly incorporates item co-occurrence, pattern frequency and graph cohesiveness by the proposed edge \\emph{cohesion}. This enables us to simultaneously discover both theme and community without knowing any of them in advance.\n}\n\n\n\n\n\\section{Appendix}\n\n\\subsection{The Proof of Theorem~\\ref{theo:hardness}}\n\\label{Apd:hardness}\n\n\\begin{theorem}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the problem of counting the number of theme communities in $G$ is \\#P-hard.\n\\begin{proof}\nWe prove this by reduction from the \\emph{Frequent Pattern Counting} (FPC) problem, which is known to be \\#P-complete~\\cite{gunopulos2003discovering}.\n\nGiven a transaction database $\\mathbf{d}$ and a minimum support threshold $\\alpha\\in[0,1]$, an instance of the FPC problem is to count the number of patterns $\\mathbf{p}$ in $\\mathbf{d}$ such that $f(\\mathbf{p}) > \\alpha$.\nHere, $f(\\mathbf{p})$ is the frequency of $\\mathbf{p}$ in $\\mathbf{d}$.\n\nWe construct a database network $G=(V,E,D,S)$, where $V=\\{v_1, v_2, v_3\\}$ has only $3$ vertices; $E=\\{(v_1, v_2), (v_2, v_3), (v_3, v_1)\\}$ forms a triangle; each vertex is associated with a copy of $\\mathbf{d}$, that is, 
$D=\\{\\mathbf{d}_1, \\mathbf{d}_2, \\mathbf{d}_3 \\mid \\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}\\}$; and $S$ is the set of all items appearing in $\\mathbf{d}$. Clearly, $G$ can be constructed in $O(|\\mathbf{d}|)$ time.\n\nFor any pattern $\\mathbf{p}\\subseteq S$, since $\\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}$, it follows that $f_1(\\mathbf{p})=f_2(\\mathbf{p})=f_3(\\mathbf{p})=f(\\mathbf{p})$. According to Definition~\\ref{Def:edge_cohesion}, $eco_{12}(G_\\mathbf{p})=eco_{13}(G_\\mathbf{p})=eco_{23}(G_\\mathbf{p})=f(\\mathbf{p})$. \nAccording to Definition~\\ref{Def:theme_community}, $G_\\mathbf{p}$ is a theme community in $G$ if and only if $f(\\mathbf{p})> \\alpha$.\nTherefore, for any threshold $\\alpha\\in[0,1]$, the number of theme communities in $G$ is equal to the number of patterns in $\\mathbf{d}$ satisfying $f(\\mathbf{p})>\\alpha$, which is exactly the answer to the FPC problem.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\n\\subsection{The Proof of Theorem~\\ref{Prop:gam}}\n\\label{Apd:gam}\n\n\\begin{theorem}[Graph Anti-monotonicity]\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\n\\begin{proof}\nSince the maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ by proving that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$.\n\nWe construct a subgraph $H_{\\mathbf{p}_1}=(V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1})$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThat is, $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nNext, we prove that $H_{\\mathbf{p}_1}$ is a subgraph of $G_{\\mathbf{p}_1}$. 
\nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows that $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows that $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. \nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\nSince $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nNow we prove that $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, the following inequality holds for every triangle $\\triangle_{ijk}$ in $H_{\\mathbf{p}_1}$.\n\\begin{equation}\n\\label{eq:mono}\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nSince $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows that the set of triangles in $C^*_{\\mathbf{p}_2}(\\alpha)$ is exactly the same as the set of triangles in $H_{\\mathbf{p}_1}$.\nTherefore, we can derive from Equation~\\ref{eq:mono} and Definition~\\ref{Def:edge_cohesion} that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))\n\\end{equation}\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha$ in $G_{\\mathbf{p}_2}$, it follows that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))>\\alpha\n\\end{equation}\nSince $E_{\\mathbf{p}_1} = E^*_{\\mathbf{p}_2}(\\alpha)$, 
it follows\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha)) > \\alpha\n\\end{equation}\nThis means that $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$.\nThe theorem follows.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\\subsection{The Proof of Theorem~\\ref{Obs:discrete}}\n\\label{Apd:discrete}\n\n\\begin{theorem}\nGiven a theme network $G_\\mathbf{p}$, a cohesion threshold $\\alpha_2$ and a maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ in $G_\\mathbf{p}$ whose minimum edge cohesion is $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$, if $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.\n\n\\begin{proof}\nFirst, we prove that $\\alpha_2 > \\alpha_1$. \nSince $C^*_{\\mathbf{p}}(\\alpha_1)$ is a maximal pattern truss with respect to threshold $\\alpha_1$, from Definition~\\ref{Def:pattern_truss}, we have $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_1), eco_{ij}(C^*_\\mathbf{p}(\\alpha_1)) > \\alpha_1$. \nSince $\\beta$ is the minimum edge cohesion of all the edges in $C^*_{\\mathbf{p}}(\\alpha_1)$, $\\beta > \\alpha_1$. \nSince $\\alpha_2 \\geq \\beta$, $\\alpha_2 > \\alpha_1$.\n\nSecond, we prove that $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. 
\nSince $\\alpha_2 > \\alpha_1$, we know from Definition~\\ref{Def:pattern_truss} that $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_2), eco_{ij}(C^*_\\mathbf{p}(\\alpha_2)) > \\alpha_2> \\alpha_1$.\nThis means that $C^*_\\mathbf{p}(\\alpha_2)$ is a pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$.\nSince $C^*_\\mathbf{p}(\\alpha_1)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$, from Definition~\\ref{Def:maximal_pattern_truss} we have $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$.\n\nLast, we prove that $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$. \nLet $e^*_{ij}$ be the edge with minimum edge cohesion $\\beta$ in $ E^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2\\geq \\beta$, according to Definition~\\ref{Def:pattern_truss}, $e^*_{ij}\\not\\in E^*_\\mathbf{p}(\\alpha_2)$. \nThus, $E^*_{\\mathbf{p}}(\\alpha_1)\\neq E^*_{\\mathbf{p}}(\\alpha_2)$ and $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$.\nRecall that $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. The theorem follows.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\\label{Sec:rw}\nTo the best of our knowledge, finding theme communities from database networks is a new problem that has not been formulated or tackled in the literature.\nBroadly, it is related to truss detection and vertex attributed network clustering.\n\n\\subsection{Truss Detection}\nTruss detection aims to detect $k$-trusses in unweighted networks.\nCohen~\\cite{cohen2008trusses} defined a $k$-truss as a subgraph $S$ where each edge in $S$ is contained in at least $k-2$ triangles in $S$. 
He also proposed a polynomial time algorithm for efficient $k$-truss detection~\\cite{cohen2008trusses}.\nAs demonstrated by many studies~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph}, $k$-truss naturally models cohesive communities in social networks and is elegantly related to other graph structures, such as $k$-core~\\cite{seidman1983network} and $k$-clique~\\cite{luce1950connectivity}.\n\\nop{and $k$-plex~\\cite{seidman1978graph}. }\n\nThe elegance of $k$-truss has attracted much research attention. Huang~\\emph{et~al.}~\\cite{huang2014querying} designed an online community search method to find $k$-truss communities that contain a query vertex. They also proposed a memory efficient index structure to support fast $k$-truss search and online index update. Wang~\\emph{et~al.}~\\cite{wang2012truss} focused on the truss decomposition problem, which is to find all non-empty $k$-trusses for all possible values of $k$ in large unweighted networks. They first improved the in-memory algorithm proposed by Cohen~\\cite{cohen2008trusses}, and then proposed two efficient methods to deal with large networks that cannot be held in memory. \nHuang~\\emph{et~al.}~\\cite{huang2016truss} proposed a new structure named $(k,\\gamma)$-truss that further extends the concept of $k$-truss from deterministic networks to probabilistic networks. They also proposed several algorithmic tools for the detection, decomposition and approximation of $(k,\\gamma)$-trusses.\n\nAll the methods mentioned above perform well in detecting communities in networks without vertex databases. 
However, since a database network contains an exponential number of patterns and we do not know which pattern forms a theme community, enumerating all theme communities in the database network requires performing community detection for each of the exponentially many patterns, which is computationally intractable.\n\n\\nop{\n\\subsection{Frequent Pattern Mining}\nFrequent pattern mining aims to find frequent patterns from a transactional database.\nIn the field of data mining, this is a well studied problem with many well known solutions. \nTo list a few, Agrawal~\\emph{et~al.}~\\cite{agrawal1994fast} proposed Apriori to find frequent patterns by candidate generation.\nHan~\\emph{et~al.}~\\cite{han2000mining} proposed the FP-Growth algorithm that avoids candidate generation by FP-Tree.\nYan~\\emph{et~al.}~\\cite{yan2002gspan} proposed gSpan to find the subgraphs that are frequently contained by a graph database.\nThe graph database is a set of graphs, which is substantially different from the database network.\n \nMost frequent pattern mining methods focus on mining frequent patterns from a single database, \nthus, they cannot find theme communities from a network of vertex databases.\n}\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Database network]{\\includegraphics[width=58mm]{Figs\/DBN_dbnetwork.pdf}}\n\\subfigure[Theme network $G_{\\mathbf{p}}$]{\\includegraphics[width=58mm]{Figs\/DBN_theme_network1}}\n\\subfigure[Theme network $G_{\\mathbf{q}}$]{\\includegraphics[width=58mm]{Figs\/DBN_theme_network2}}\n\\caption{A toy example of database network, theme network and theme community. The pattern frequencies are labeled beside each vertex. The theme community marked in bold in (b) is valid when $\\alpha\\in [0, 0.2)$. 
The theme community marked in bold in (c) is valid when $\\alpha\\in[0.2, 0.4)$.}\n\\label{Fig:toy_example}\n\\end{figure*}\n\n\\subsection{Vertex Attributed Network Clustering}\nA vertex attributed network is a network where each vertex is associated with a set of items. \nVertex attributed network clustering aims to find cohesive communities such that all vertices within the same community contain the same set of items.\n\nVarious methods have been proposed to solve this problem. Among them, frequent pattern mining based methods have proven effective.\nBerlingerio~\\emph{et~al.}~\\cite{berlingerio2013abacus} proposed ABACUS to find multi-dimensional communities by mining frequent closed itemsets from multi-dimensional community memberships.\nMoser~\\emph{et~al.}~\\cite{moser2009mining} devised CoPaM to efficiently find all maximal cohesive communities by exploiting various pruning strategies.\nPrado~\\emph{et~al.}~\\cite{prado2013mining} designed several interestingness measures and the corresponding mining algorithms for cohesive communities.\nMoosavi~\\emph{et~al.}~\\cite{moosavi2016community} applied frequent pattern mining to find cohesive groups of users who share similar features.\nThere are also effective methods that are based on \ngraph weighting~\\cite{steinhaeuser2008community, cruz2011semantic}, \nstructural embedding~\\cite{combe2012combining, dang2012community}, \nrandom walk~\\cite{zhou2009graph}, \nstatistical inference~\\cite{balasubramanyan2011block,yang2013community},\nmatrix compression~\\cite{akoglu2012pics},\nsubspace clustering~\\cite{gunnemann2011db,wang2016semantic},\nand $(k, d)$-truss detection~\\cite{huang2017attribute}.\n\nHowever, none of these methods can exploit the rich and useful information of item co-occurrences and pattern frequencies in a database network.\n\n\n\n\n\\section{Problem Definition}\n\\label{sec:prob}\n\nIn this section, we first introduce the notions of database network, theme network and theme 
community, and then formalize the theme community finding problem.\n\n\\subsection{Database Network and Theme Network}\nLet $S=\\{s_1, \\ldots, s_m\\}$ be a set of items. An \\emph{itemset} $\\mathbf{x}$ is a subset of $S$, that is, $\\mathbf{x}\\subseteq S$.\nA \\emph{transaction} $\\mathbf{t}$ is an itemset. Transaction $\\mathbf{t}$ is said to \\emph{contain} itemset $\\mathbf{x}$ if $\\mathbf{x}\\subseteq \\mathbf{t}$. The \\emph{length} of transaction $\\mathbf{t}$, denoted by $|\\mathbf{t}|$, is the number of items in $\\mathbf{t}$.\nA \\emph{transaction database} $\\mathbf{d}=\\{\\mathbf{t}_1, \\ldots, \\mathbf{t}_h\\}$ ($h\\geq 1$) is a multi-set of transactions, that is, an itemset may appear multiple times as transactions in a transaction database.\n\nA \\emph{database network} is an undirected graph $G=(V,E,D,S)$, where each vertex is associated with a transaction database, that is,\n\\begin{itemize}\n\\item $V=\\{v_1, \\ldots, v_n\\}$ is a set of vertices;\n\\item $E=\\{e_{ij}=(v_i, v_j) \\mid v_i,v_j \\in V\\}$ is a set of edges;\n\\item $D=\\{\\mathbf{d}_1, \\ldots, \\mathbf{d}_n\\}$ is a set of transaction databases, where $\\mathbf{d}_i$ is the transaction database associated with vertex $v_i$; and\n\\item $S=\\{s_1, \\cdots, s_m\\}$ is the set of items that constitute all transaction databases in $D$. That is, $\\cup_{\\mathbf{d}_i\\in D} \\cup_{\\mathbf{t}\\in\\mathbf{d}_i} \\mathbf{t}= S$.\n\\end{itemize}\n\nFigure~\\ref{Fig:toy_example}(a) gives a toy database network of 9 vertices, where each vertex is associated with a transaction database, whose details are omitted due to the limit of space. 
\n\n\nA \\emph{theme} is an itemset $\\mathbf{p}\\subseteq S$, which is also known as a \\emph{pattern} in the field of frequent pattern mining~\\cite{agrawal1994fast,han2000mining}.\nThe \\emph{length} of $\\mathbf{p}$, denoted by $|\\mathbf{p}|$, is the number of items in $\\mathbf{p}$.\nThe \\emph{frequency} of $\\mathbf{p}$ in transaction database $\\mathbf{d}_i$, denoted by $f_i(\\mathbf{p})$, is the proportion of transactions in $\\mathbf{d}_i$ that contain $\\mathbf{p}$~\\cite{agrawal1994fast,han2000mining}. \n$f_i(\\mathbf{p})$ is also called the frequency of $\\mathbf{p}$ on vertex $v_i$.\nIn the rest of this paper, we use the terms \\emph{theme} and \\emph{pattern} interchangeably.\n\nGiven a pattern $\\mathbf{p}$, the \\emph{theme network} $G_\\mathbf{p}$ is a subgraph induced from $G$ by the set of vertices satisfying $f_i(\\mathbf{p}) > 0$. We denote it by $G_\\mathbf{p}=(V_\\mathbf{p}, E_\\mathbf{p})$, where \n$V_\\mathbf{p}=\\{v_i \\in V \\mid f_i(\\mathbf{p}) > 0\\}$ is the set of vertices and \n$E_\\mathbf{p}=\\{e_{ij}\\in E \\mid v_i, v_j \\in V_\\mathbf{p}\\}$ is the set of edges.\n\n\n\\nop{\nA \\emph{pattern} $p$ is an itemset that is also regarded as the \\emph{theme} of the theme network. .\nThe \\emph{frequency} of pattern $p$ on vertex $v_i$ is defined as the proportion of transactions in $d_i$ that contains $p$~\\cite{agrawal1994fast,han2000mining}. 
.\n}\n\n\\nop{The theme network $G_p$ is induced from database network $G$ by the set of vertices satisfying $f_i(p) > 0$.}\n\n\\nop{\n\\item $F_p=\\{f_i(p) \\mid v_i \\in V_p\\}$ is the set of vertex weights, where $f_i(p)$ is the weight of $v_i$.\nA higher weight $f_i(p)$ indicates that pattern $p$ is more frequent on $v_i$.\nA higher weight indicates that pattern $p$ is more significant on $v_i$, thus $v_i$ gets a larger weight in the theme network $G_p$.\nLater, when applying standard graph operations (e.g., union, intersection, etc) on $G_p$, we treat $G_p$ as a standard unweighted graph and ignore $F_p$.}\n\nFigures~\\ref{Fig:toy_example}(b) and~\\ref{Fig:toy_example}(c) present two theme networks induced by two different patterns $\\mathbf{p}$ and $\\mathbf{q}$, respectively. The edges and vertices marked in dashed lines are not included in the theme networks.\n\\nop{\\mcout{What are $\\mathbf{p}_1$ and $\\mathbf{p}_2$, respectively, in this example?}}\n\\nop{\nsince $f_6(p_1)=0$, $v_6$ and the edges connected to $v_6$ are removed when inducing $G_{p_1}$. The rest of the vertices and edges consist the theme network induced by $p_1$. 
For the theme network induced by $p_2$ (see Figure~\\ref{Fig:toy_example}(c)), we remove $v_4$ and its connected edges, since $f_4(p_2)=0$.\n}\n\nSince a theme network can be induced by every pattern $\\mathbf{p} \\subseteq S$, a database network $G$ induces at most $2^{|S|}$ theme networks, where $G$ itself can be regarded as the theme network of $\\mathbf{p}=\\emptyset$.\n\n\n\\subsection{Theme Community}\n\\label{Sec:theme_community}\n\nA \\emph{theme community} is a subgraph of a theme network whose vertices form a cohesively connected subgraph.\nIntuitively, in a good theme community with theme $\\mathbf{p}$, every vertex is expected to satisfy at least one of the following \\emph{criteria}:\n\\begin{enumerate}\n\\item It has a high frequency of $\\mathbf{p}$ in its vertex database.\n\\item It is connected to a large proportion of vertices in the theme community.\n\\nop{\\item The frequency of $\\mathbf{p}$ on every vertex of the theme community is high.}\n\\end{enumerate}\n\nThe rationale is that a vertex with a high frequency of $\\mathbf{p}$ is a \\emph{theme leader} that strengthens the theme coherence of the theme community, while a vertex connecting to many vertices in the community is a \\emph{social leader} that strengthens the edge connections in the theme community.\nBoth theme leaders and social leaders are important members of the theme community.\nTake Figure~\\ref{Fig:toy_example}(b) as an example: $v_1, v_2, v_3, v_4$, and $v_5$ form a theme community with theme $\\mathbf{p}$, where $v_1$ and $v_4$ are theme leaders, $v_3$ and $v_5$ are social leaders, and $v_2$ is both a theme leader and a social leader.\n\n\\nop{\nAccording to the above criterion, in a good theme community with theme $\\mathbf{p}$, two cohesively connected vertices $v_i, v_j$ should either have high frequencies of $\\mathbf{p}$ or have a large number of common neighbouring vertices in the theme community. 
\n}\n\nThe above criteria inspire us to measure the cohesion of edge $e_{ij}=(v_i, v_j)$ by considering the number of common neighbour vertices of $v_i$ and $v_j$, as well as the frequencies of $\\mathbf{p}$ on $v_i$, $v_j$ and their common neighbour vertices. \nThe rationale is that, in a good theme community with theme $\\mathbf{p}$, two connected vertices $v_i$ and $v_j$ should have a large edge cohesion if they each have a high frequency of $\\mathbf{p}$ or have many common neighbours in the theme community. \n\n\\nop{\nThis inspires us to measure the cohesion of edge $e_{ij}$ by considering both the number of common neighbour vertices of $v_i, v_j$ and the frequencies of $\\mathbf{p}$ on these vertices. \n\\nop{The rationale is that, if $v_i, v_j$ belong to the same theme community, the edge $(v_i, v_j)$ should have a larger edge cohesion as the two vertices have high frequencies of $\\mathbf{p}$ and have many common neighbors with higher frequencies of $\\mathbf{p}$.}\n}\n\n\\nop{\nThe above criteria inspire us to measure the cohesion of connection between two connected vertices $v_i, v_j$ by considering both the frequencies of $\\mathbf{p}$ on these vertices and the number of their common neighbour vertices in the theme community. \nThe intuition is that $v_i, v_j$ should have a large edge cohesion if they have high frequencies of $\\mathbf{p}$ or have a large number of common neighbours in the theme community.\n}\n\n\\nop{\nAccording to the above criterion, in a good theme community with theme $\\mathbf{p}$, two vertices $v_i, v_j$ connected by edge $e_{ij}=(v_i, v_j)$ should have high frequencies of $\\mathbf{p}$ and have a large number of common neighbouring vertices with high frequencies of $\\mathbf{p}$. 
\n}\n\n\\nop{\nThe relationship between $v_i$ and $v_j$ is more cohesive if they have more common neighbouring vertices with larger frequency of $\\mathbf{p}$.\nBased on this insight, we design \\emph{edge cohesion} (see Definition~\\ref{Def:edge_cohesion}) that measures the cohesiveness between $v_i$ and $v_j$ by comprehensively considering the number of common neighbouring vertices and the frequency of $\\mathbf{p}$ on these neighbouring vertices.\n}\n\n\\nop{Denote by $v_i$ and $v_j$ two connected vertices in a theme community with theme $\\mathbf{p}$, let $CN(v_i, v_j)$ denote the set of common neighbour vertices of $v_i$ and $v_j$ in the theme community, }\n\n\\nop{According to the above criterion, for any two connected vertices $v_i$ and $v_j$ in the theme community, $f_i(\\mathbf{p})$ and $f_j(\\mathbf{p})$ should be large.}\n\n\\nop{\nWe model theme community by maximal pattern truss.\nIn the following, we first define the fundamental concepts such as edge cohesion, pattern truss, maximal pattern truss. Then, we give the formal definition of theme community.\n}\n\n\\nop{, which extends the concept of $k$-truss~\\cite{cohen2008trusses} from unweighted network to the vertex weighted theme network. }\n\n\\begin{table}[t]\n\\centering\\small\n\\caption{Frequently used notations.}\n\\label{Table:notations}\n\\begin{tabular}{|c|p{65mm}|}\n\\hline\n\\makecell[c]{Notation} & \\makecell[c]{Description} \\\\ \\hline\n\n$G$\n& The database network. \\\\ \\hline\n\n$G_\\mathbf{p}$\n& The theme network induced by pattern $\\mathbf{p}$. \\\\ \\hline\n\n$S$\n& The complete set of items in $G$. \\\\ \\hline\n\n$|\\cdot|$\n& The set cardinality operator. \\\\ \\hline\n\n$C^*_\\mathbf{p}(\\alpha)$\n& The maximal pattern truss in theme network $G_\\mathbf{p}$ with respect to threshold $\\alpha$. \\\\ \\hline\n\n$\\mathcal{L}_\\mathbf{p}$\n& The linked list storing the decomposed results of maximal pattern truss $C^*_\\mathbf{p}(0)$. 
\\\\ \\hline\n\n$f_i(\\mathbf{p})$\n& The frequency of pattern $\\mathbf{p}$ on vertex $v_i$. \\\\ \\hline\n\n\\nop{\n$eco_{ij}(C_\\mathbf{p})$\n& See Definition~\\ref{Def:edge_cohesion}. \\\\ \\hline\n}\n\n$\\epsilon$\n& The threshold of pattern frequency for TCS. \\\\ \\hline\n\n\\nop{\n$s_{n_i}$\n& The item stored in node $n_i$ in the TC-Tree. \\\\ \\hline\n}\n\n\\nop{\n$\\mathcal{L}_{\\mathbf{p}_i}$\n& The linked list stored in node $n_i$ in the TC-Tree. \\\\ \\hline\n}\n\n\\end{tabular}\n\\end{table}\n\nLet $v_k$ be a common neighbor of two connected vertices $v_i$ and $v_j$. Then, $v_i, v_j$ and $v_k$ form a \\emph{triangle}, denoted by $\\triangle_{ijk}= \\{v_i, v_j, v_k\\}$.\nSince every common neighbor of $v_i$ and $v_j$ corresponds to a unique triangle that contains edge $e_{ij}=(v_i, v_j)$, the number of common neighbors of $v_i$ and $v_j$ is exactly the number of triangles that contain $e_{ij}=(v_i, v_j)$.\n\nNow we define \\emph{edge cohesion}, which measures the cohesion between two connected vertices $v_i, v_j$ by considering both the number of triangles containing $e_{ij}=(v_i, v_j)$ and the frequencies of $\\mathbf{p}$ on the vertices of those triangles.\n\n\\nop{\nthe number of common neighbouring vertices is exactly the same as \nwe obtain the number of common neighbouring vertices by counting the number of triangles that contains $e_{ij}$.\nA \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ is a clique of 3 vertices $v_i, v_j, v_k$.\n\n\nIn a graph $G=(V, E)$, three vertices $v_i, v_j, v_k \\in V$ forms a \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ if $(v_i, v_j), (v_j, v_k), (v_i, v_k) \\in E$, denoted by $\\triangle_{ijk} \\subseteq G$.\n\\mc{The number of triangles $\\triangle_{ijk}$ containing edge $e_{ij}=(v_i, v_j)$ is the number of the common neighbouring vertices shared by $v_i$ and $v_j$.}\n}\n\n\\nop{\nAccording to White~\\emph{et al.}~\\cite{white2001cohesiveness}, for two users in a social network, the number of their 
common friends is a natural measurement for the cohesiveness of their social tie. \nAs demonstrated by Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, such measurement is robust and effective in detecting cohesive communities.\nInspired by such observation, for theme network $G_\\mathbf{p}$, we model the \\emph{edge cohesion} between two vertices by considering the number of their common neighbours and the frequency of $\\mathbf{p}$ on all neighbouring vertices.\n}\n\n\\nop{In a graph $G=(V, E)$, three vertices $v_i, v_j, v_k \\in V$ forms a \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ if $(v_i, v_j), (v_j, v_k), (v_i, v_k) \\in E$, denoted by $\\triangle_{ijk} \\subseteq G$.}\n\n\\nop{\nAs demonstrated by Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, the $k$-truss structure \n\nAccording to Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, for two users in a social network, measuring their connection strength by the number of their common friends is a natural and effective way in detecting cohesive social communities.\n\nthe number of their common friends naturally measures the strength of their social tie.\nInspired by such observation, proposed the community structure $k$-truss, which measures the cohesion of an edge by the number of triangles containing it.\n}\n\n\\begin{definition}[Edge Cohesion]\n\\label{Def:edge_cohesion}\nConsider a theme network $G_\\mathbf{p}$ and a subgraph $C_\\mathbf{p}\\subseteq G_\\mathbf{p}$, for an edge $e_{ij}=(v_i, v_j)$ in $C_\\mathbf{p}$, the \\textbf{edge cohesion} of $e_{ij}$ in $C_\\mathbf{p}$ is \n\\begin{equation}\n\\label{Eqn:cohesion}\neco_{ij}(C_\\mathbf{p})=\\sum\\limits_{\\triangle_{ijk}\\subseteq C_\\mathbf{p}} \\min{(f_i(\\mathbf{p}), f_j(\\mathbf{p}), 
f_k(\\mathbf{p}))} \\nonumber\n\\end{equation}\n\\end{definition}\n\n\\nop{\n\\mcout{We need to explain the intuition before edge cohesion. Why do we want to sum over all triangles?}\n}\n\n\\begin{example}\nIn Figure~\\ref{Fig:toy_example}(b), for subgraph $C_{\\mathbf{p}}$ induced by the set of vertices $\\{v_1, v_2, v_3, v_4, v_5\\}$, edge $e_{12}$ is contained by $\\triangle_{123}$ and $\\triangle_{125}$, thus the edge cohesion of $e_{12}$ is $eco_{12}(C_{\\mathbf{p}})=\\min(f_1(\\mathbf{p}),$ $ f_2(\\mathbf{p}), f_3(\\mathbf{p})) + \\min(f_1(\\mathbf{p}), f_2(\\mathbf{p}), $ $f_5(\\mathbf{p}))=0.2$.\n\\end{example}\n\nIn a subgraph $C_\\mathbf{p}$, if for every vertex $v_i$ in the subgraph, $f_i(\\mathbf{p}) = 1$, then the edge cohesion $eco_{ij}(C_\\mathbf{p})$ equals the number of triangles containing edge $e_{ij}$. In this case, $eco_{ij}(C_\\mathbf{p})$ is exactly the edge cohesion used by Cohen~\\cite{cohen2008trusses} to define $k$-truss.\n\n\\nop{\nApparently, a large edge cohesion $eco_{ij}(C_\\mathbf{p})$ indicates that vertices $v_i, v_j$ either have high frequencies of $\\mathbf{p}$ or have a large number of common neighbouring vertices whose vertex databases contain theme $\\mathbf{p}$.\nAccording to the criteria of good theme community, if every edge in $C_\\mathbf{p}$ has a large edge cohesion, $C_\\mathbf{p}$ will be a good theme community. 
Based on this insight, }\nNow, we propose \\emph{pattern truss}, a subgraph such that the cohesion of every edge in the subgraph is larger than a threshold.\n\n\\begin{definition}[Pattern Truss]\n\\label{Def:pattern_truss}\nGiven a minimum cohesion threshold $\\alpha \\geq 0$, a \\textbf{pattern truss} $C_\\mathbf{p}(\\alpha)=(V_\\mathbf{p}(\\alpha), E_\\mathbf{p}(\\alpha))$ is an edge-induced subgraph of $G_\\mathbf{p}$ on the set of edges $E_\\mathbf{p}(\\alpha)=\\{e_{ij} \\mid eco_{ij}(C_\\mathbf{p}(\\alpha)) > \\alpha \\}$.\n\\end{definition}\n\n\\nop{\nIn other words, a pattern truss $C_\\mathbf{p}(\\alpha)$ is a subgraph in $G_\\mathbf{p}$ such that the cohesion $eco_{ij}(C_\\mathbf{p}(\\alpha))$ of every edge in $E_\\mathbf{p}(\\alpha)$ is larger than $\\alpha$.\n}\n\n\\nop{\n\\mcout{It is unclear whether $C_\\mathbf{p}$ is the induced subgraph of $G_\\mathbf{p}$ on $V_\\mathbf{p}(\\alpha)$. That is, for any $v_i, v_j \\in V_\\mathbf{p}(\\alpha)$, is edge $(v_i, v_j) \\in E_\\mathbf{p}(\\alpha)$? I don't think it is the case. This should be clarified.}\n\n\\mc{Answer: $C_\\mathbf{p}(\\alpha)$ is an edge-induced subgraph of $G_\\mathbf{p}$ on $E_\\mathbf{p}(\\alpha)$. That is, for any edge $(v_i, v_j) \\in E_\\mathbf{p}(\\alpha)$, we have $v_i, v_j \\in V_\\mathbf{p}(\\alpha)$.}\n}\n\n\\nop{\n\\begin{itemize}\n\\item $V_p(\\alpha)=\\{v_i \\mid \\exists e_{ij}\\in E_p(\\alpha) \\}$.\n\\item $E_p(\\alpha)=\\{e_{ij} \\mid w_{ij}(p) > \\alpha, e_{ij}\\in E_p\\}$.\n\\end{itemize}\nthe \\emph{cohesion} of edge $e_{ij}=(v_i, v_j)$ is defined as \n\\begin{equation}\n\\label{Eqn:cohesion}\nw_{ij}(p)=\\sum\\limits_{\\triangle_{ijk}\\subseteq C_p(\\alpha)} \\min{(f_i(p), f_j(p), f_k(p))}\n\\end{equation}\nwhere $\\triangle_{ijk} = (v_i, v_j, v_k)$ represents any triangle in $C_p(\\alpha)$ that contains edge $e_{ij}$. 
}\n\n\\nop{Pattern truss has the following useful properties:}\n\nIf $\\alpha = k-3$ and $\\forall v_i \\in V_\\mathbf{p}(\\alpha), f_i(\\mathbf{p}) = 1$, a pattern truss $C_\\mathbf{p}(\\alpha)$ becomes a $k$-truss~\\cite{cohen2008trusses}, which is a subgraph where every edge in the subgraph is contained in at least $(k-2)$ triangles.\nFurthermore, if $C_\\mathbf{p}(\\alpha)$ is also a maximal connected subgraph in $G_\\mathbf{p}$, it will also be a $(k-1)$-core~\\cite{seidman1983network}.\n\nSimilar to $k$-truss, a pattern truss is not necessarily a connected subgraph.\nFor example, in Figure~\\ref{Fig:toy_example}(b), when $\\alpha \\in [0, 0.2)$, the subgraph marked in bold lines is a pattern truss, but is not connected.\n\nIt is easy to see that, for a given $\\alpha$, the union of multiple pattern trusses is still a pattern truss, because adding vertices and edges to a pattern truss never decreases the cohesion of any existing edge.\n\\nop{Take Figure~\\ref{Fig:toy_example}(c) as an example, when $\\alpha \\in [0.2, 0.4)$, $\\{v_2, v_3, v_5, v_6\\}$, $\\{v_3, v_5, v_6, v_7, v_9\\}$ and the union of them (i.e., $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$) are all valid pattern trusses.}\n\n\n\n\n\\begin{definition}[Maximal Pattern Truss]\n\\label{Def:maximal_pattern_truss}\nA \\textbf{maximal pattern truss} in $G_{\\mathbf{p}}$ with respect to a minimum cohesion threshold $\\alpha$ is a pattern truss such that no proper supergraph of it is a pattern truss with respect to $\\alpha$ in $G_{\\mathbf{p}}$.\n\\end{definition}\n\nClearly, a maximal pattern truss in $G_{\\mathbf{p}}$ with respect to $\\alpha$ is the union of all pattern trusses in $G_{\\mathbf{p}}$ with respect to the same threshold $\\alpha$. Moreover, a maximal pattern truss is not necessarily a connected subgraph.\nWe denote by $C^*_\\mathbf{p}(\\alpha)=(V^*_\\mathbf{p}(\\alpha), E^*_\\mathbf{p}(\\alpha))$ the maximal pattern truss in $G_\\mathbf{p}$. 
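To exercise the definitions above on a small example, the following Python sketch computes the cohesion of every edge by intersecting neighbour sets, the basic step that both Definition~\ref{Def:edge_cohesion} and the detection algorithm rely on. The toy graph and pattern frequencies here are hypothetical illustrations, not the values of Figure~\ref{Fig:toy_example}.

```python
def edge_cohesion(edges, freq):
    """Cohesion of every edge per Definition (Edge Cohesion): for edge (i, j),
    sum min(f_i, f_j, f_k) over all common neighbours k, i.e. over every
    triangle that contains (i, j)."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    return {
        (i, j): sum(min(freq[i], freq[j], freq[k]) for k in adj[i] & adj[j])
        for i, j in edges
    }

# Hypothetical toy subgraph and pattern frequencies f_i(p).
edges = [(1, 2), (1, 3), (2, 3), (1, 5), (2, 5)]
freq = {1: 0.3, 2: 0.4, 3: 0.1, 5: 0.1}

eco = edge_cohesion(edges, freq)
# e_12 lies in triangles (1,2,3) and (1,2,5): 0.1 + 0.1 = 0.2
print(eco[(1, 2)])  # → 0.2
```

An edge then belongs to a pattern truss $C_\mathbf{p}(\alpha)$ only if its cohesion exceeds $\alpha$; in this toy graph, with $\alpha = 0.15$, edge $e_{12}$ qualifies while the other four edges (each with cohesion $0.1$) do not.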
\n\nNow we are ready to define theme community.\n\\begin{definition}[Theme Community]\n\\label{Def:theme_community}\nGiven a minimum cohesion threshold $\\alpha$, a \\textbf{theme community} is a maximal connected subgraph in the maximal pattern truss with respect to $\\alpha$ in a theme network.\n\\end{definition}\n\n\n\n\\nop{\nA \\emph{maximal pattern truss} is a pattern truss that is not a subgraph of any other pattern truss. We denote maximal pattern truss by $C^*_p(\\alpha)=\\{V^*_p(\\alpha), E^*_p(\\alpha)\\}$.\n\nWe list the following useful facts about pattern truss: \n\\begin{itemize}\n\\item Pattern truss degenerates to $k$-truss~\\cite{cohen2008trusses} if $\\forall v_i \\in V_p, f_i(p) = 1$ and $\\alpha = k-2$. \n\\item A pattern truss is not necessarily a connected subgraph. For example, in Figure~\\ref{Fig:toy_example}(b), $\\{v_1, v_2, v_3, v_4,$ $ v_5, v_7, v_8, v_9\\}$ is a valid pattern truss when $\\alpha \\in [0, 0.2)$.\n\\item The union of two pattern trusses is also a pattern truss. This is because adding new vertices and edges to an existing pattern truss does not decrease the cohesion of any original edge of the pattern truss. 
For example, in Figure~\\ref{Fig:toy_example}(c), $\\{v_2, v_3, v_5, v_6\\}$, $\\{v_3, v_5, v_6, v_7, v_9\\}$ and the union of them (i.e., $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$) are all valid pattern trusses when $\\alpha \\in [0.2, 0.4)$.\n\\item For a fixed threshold $\\alpha$, a theme network can only have one unique maximal pattern truss, which is the union of all valid pattern trusses in the theme network.\n\\end{itemize}\n}\n\n\\begin{example}\nIn Figure~\\ref{Fig:toy_example}(b), when $\\alpha \\in [0, 0.2)$, $\\{v_1, v_2, v_3, v_4, v_5\\}$ and $\\{v_7, v_8, v_9\\}$ are two theme communities in $G_{\\mathbf{p}}$.\nIn Figure~\\ref{Fig:toy_example}(c), when $\\alpha \\in [0.2, 0.4)$, $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$ is a theme community in $G_{\\mathbf{q}}$, and partially overlaps with the two theme communities in $G_{\\mathbf{p}}$.\n\\end{example}\n\n\\nop{\n shows another theme community in theme network $G_{\\mathbf{p}_2}$ when $\\alpha \\in [0.2, 0.4)$. Such community is $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$, which partially overlaps with both the theme communities in $G_{\\mathbf{p}_1}$.}\n\n\\nop{\n\\begin{table}[t]\n\\caption{\\mc{Three types of good theme communities.}}\n\\centering\n\\label{Table:thcm_types}\n\\begin{tabular}{|c|c|c|}\n\\hline\nTypes \t& Theme Coherence & Edge Connection \\\\ \\hline\nType-I & Medium & High \t\t\t\\\\ \\hline\nType-II & High & Medium \t\t\\\\ \\hline\nType-III & High & High \t \t\t\\\\ \\hline\n\\end{tabular}\n\\end{table}\n}\nThere are several important benefits from modeling theme communities using maximal pattern trusses. 
\nFirst, there exist polynomial-time algorithms to find maximal pattern trusses.\nSecond, maximal pattern trusses of different theme networks may overlap with each other, which reflects application scenarios in which a vertex may participate in communities of different themes.\nLast, as proved in Sections~\\ref{Sec:pompt} and~\\ref{Sec:mpt_dec}, maximal pattern trusses have several useful properties that enable us to design efficient mining and indexing algorithms for theme community finding.\n\n\n\\nop{One interesting observation is that even though the left community has a lower average vertex frequency than the right community, it is equally valid as the right community under the same threshold $\\alpha$. This is because the left community has a denser edge connection that contains more triangles, and the edge cohesion comprehensively considers both edge connections and vertex frequencies.}\n\n\\subsection{Problem Definition and Complexity}\n\\label{Sec:tcfp}\n\\nop{We define the theme community finding problem as follows.}\n\\begin{definition}[Theme Community Finding]\n\\label{Def:theme_comm_finding}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the \\textbf{problem of theme community finding} is to compute all theme communities in $G$.\n\\end{definition}\n\nSince extracting maximal connected subgraphs from a maximal pattern truss is straightforward, the core of the theme community finding problem is to identify the maximal pattern trusses of all theme networks. \nThis is a challenging problem, since a database network can induce up to $2^{|S|}$ theme networks and each theme network may contain a maximal pattern truss. \nAs a result, exhaustively finding theme communities for all themes is computationally intractable. 
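The $2^{|S|}$ blow-up is easy to make concrete: every non-empty subset of the item universe $S$ is a candidate theme. A short Python sketch (the item universe here is a hypothetical example):

```python
from itertools import chain, combinations

def candidate_patterns(items):
    """Enumerate every non-empty subset of the item universe: an exhaustive
    search must induce and examine one theme network per pattern."""
    return chain.from_iterable(
        combinations(items, r) for r in range(1, len(items) + 1)
    )

S = ["a", "b", "c", "d"]  # hypothetical item universe, |S| = 4
patterns = list(candidate_patterns(S))
print(len(patterns))  # → 15, i.e. 2^4 - 1 non-empty patterns
```

At $|S| = 30$ the count already exceeds $10^9$, which is why the anti-monotonicity properties established in Section~\ref{Sec:pompt} are essential for pruning the search.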
\n\n\\nop{The proof of Theorem~\\ref{theo:hardness} is in Section~\\ref{Apd:hardness} of the appendix.}\n\n\\begin{theorem}\n\\label{theo:hardness}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the problem of counting the number of theme communities in $G$ is \\#P-hard.\n\\nop{\n\\begin{proof}\nWe prove by a reduction from the \\emph{Frequent Pattern Counting} (FPC) problem, which is known to be \\#P-complete~\\cite{gunopulos2003discovering}.\n\nGiven a transaction database $\\mathbf{d}$ and a minimum support threshold $\\alpha\\in[0,1]$, an instance of the FPC problem is to count the number of patterns $\\mathbf{p}$ in $\\mathbf{d}$ such that $f(\\mathbf{p}) > \\alpha$.\nHere, $f(\\mathbf{p})$ is the frequency of $\\mathbf{p}$ in $\\mathbf{d}$.\n\nWe construct a database network $G=(V,E,D,S)$, where $V=\\{v_1, v_2, v_3\\}$ has only $3$ vertices; $E=\\{(v_1, v_2), (v_2, v_3), (v_3, v_1)\\}$ and forms a triangle; each vertex is associated with a copy of $\\mathbf{d}$, that is, $D=\\{\\mathbf{d}_1, \\mathbf{d}_2, \\mathbf{d}_3 \\mid \\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}\\}$; and $S$ is the set of all items appearing in $\\mathbf{d}$. Apparently, $G$ can be constructed in $O(|\\mathbf{d}|)$ time.\n\nFor any pattern $\\mathbf{p}\\subseteq S$, since $\\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}$, it follows $f_1(\\mathbf{p})=f_2(\\mathbf{p})=f_3(\\mathbf{p})=f(\\mathbf{p})$. According to Definition~\\ref{Def:edge_cohesion}, $eco_{12}(G_\\mathbf{p})=eco_{13}(G_\\mathbf{p})=eco_{23}(G_\\mathbf{p})=f(\\mathbf{p})$. 
\nAccording to Definition~\\ref{Def:theme_community}, $G_\\mathbf{p}$ is a theme community in $G$ if and only if $f(\\mathbf{p})> \\alpha$.\nTherefore, for any threshold $\\alpha\\in[0,1]$, the number of theme communities in $G$ is equal to the number of patterns in $\\mathbf{d}$ satisfying $f(\\mathbf{p})>\\alpha$, which is exactly the answer to the FPC problem.\n\\end{proof}\n}\n\\end{theorem}\nThe proof of Theorem~\\ref{theo:hardness} is given in Appendix~\\ref{Apd:hardness}.\n\n\\nop{\n\\mc{Given a database network $G$ and a threshold $\\alpha$, if an algorithm can enumerate all theme communities in $G$, it is also able to count them. \nFrom this perspective, the theme community finding problem (Definition~\\ref{Def:theme_comm_finding}) is at least as hard as the problem of counting the number of theme communities in $G$.}\n}\n\n\nIn the rest of the paper, we develop an exact algorithm for theme community finding and investigate various techniques to speed up the search.\n\n\\nop{\nHowever, as proved in Section~\\ref{Sec:pompt}, the size of maximal pattern truss decreases monotonously when the length of pattern increases. \nWith carefully designed algorithms, we can safely prune a pattern $\\mathbf{p}$ if all its sub-patterns $\\{\\mathbf{h} \\mid \\mathbf{h}\\subset \\mathbf{p}, \\mathbf{h}\\neq \\emptyset\\}$ cannot induce pattern truss.\nSuch prunning leads to high detection efficiency without accuracy loss.\n}\n\\section{A Baseline and Maximal Pattern Truss Detection}\n\\label{sec:baseline}\n\nIn this section, we present a baseline for theme community finding. Before that, we introduce \\emph{Maximal Pattern Truss Detector} (MPTD) that detects the maximal pattern truss of a given theme network $G_\\mathbf{p}$ with respect to a threshold $\\alpha$. 
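As a preview of what MPTD computes, the peeling strategy can be sketched in a few lines of Python: repeatedly delete any edge whose cohesion is at most $\alpha$, updating the cohesions of the two other edges of every triangle broken by the deletion. This is our own illustrative implementation on a hypothetical toy network; Algorithm~\ref{Alg:mptd} states the procedure precisely.

```python
def mptd(edges, freq, alpha):
    """Peel away 'unqualified' edges (cohesion <= alpha); the surviving
    edges are exactly the edge set of the maximal pattern truss."""
    def key(i, j):
        return (i, j) if i < j else (j, i)

    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)

    # Initial cohesion of every edge: sum over the triangles containing it.
    eco = {key(i, j): sum(min(freq[i], freq[j], freq[k]) for k in adj[i] & adj[j])
           for i, j in edges}

    queue = [e for e, c in eco.items() if c <= alpha]
    while queue:
        i, j = queue.pop()
        if (i, j) not in eco:
            continue  # edge was already removed
        for k in adj[i] & adj[j]:  # triangles destroyed by deleting (i, j)
            for e in (key(i, k), key(j, k)):
                eco[e] -= min(freq[i], freq[j], freq[k])
                if eco[e] <= alpha:
                    queue.append(e)
        del eco[(i, j)]
        adj[i].discard(j)
        adj[j].discard(i)
    return set(eco)  # E*_p(alpha); its endpoints give V*_p(alpha)

# Hypothetical toy theme network: a triangle plus a pendant edge.
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
freq = {1: 0.5, 2: 0.4, 3: 0.6, 4: 0.9}
print(sorted(mptd(edges, freq, alpha=0.3)))  # → [(1, 2), (1, 3), (2, 3)]
```

Note that deletions can cascade: removing the pendant edge $(3,4)$ here breaks no triangle, but in denser graphs one deletion can push neighbouring edges below the threshold, which is exactly what the queue handles.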
\n\n\\subsection{Maximal Pattern Truss Detection}\n\n\\begin{algorithm}[t]\n\\caption{Maximal Pattern Truss Detector (MPTD)}\n\\label{Alg:mptd}\n\\KwIn{A theme network $G_\\mathbf{p}$ and a user input $\\alpha$.}\n\\KwOut{The maximal pattern truss $C^*_\\mathbf{p}(\\alpha)$ in $G_\\mathbf{p}$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize: $Q\\leftarrow \\emptyset$.\n \\FOR{each $e_{ij}\\in E_\\mathbf{p}$}\n \t\\STATE $eco_{ij}(G_\\mathbf{p})\\leftarrow 0$.\n \t\\FOR{each $v_k\\in \\triangle_{ijk}$}\n\t\t\\STATE $eco_{ij}(G_\\mathbf{p})\\leftarrow eco_{ij}(G_\\mathbf{p}) + \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\\ENDFOR\n\t\\STATE \\textbf{if} $eco_{ij}(G_\\mathbf{p}) \\leq \\alpha$ \\textbf{then} $Q.push(e_{ij})$.\n \\ENDFOR\n \n \\WHILE{$Q\\neq\\emptyset$}\n \t\\STATE $Q.pop(e_{ij})$.\n\t\\FOR{each $v_k\\in \\triangle_{ijk}$}\n\t\t\\STATE $eco_{ik}(G_\\mathbf{p})\\leftarrow eco_{ik}(G_\\mathbf{p}) - \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\t\\STATE $eco_{jk}(G_\\mathbf{p})\\leftarrow eco_{jk}(G_\\mathbf{p}) - \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\t\\STATE \\textbf{if} $eco_{ik}(G_\\mathbf{p})\\leq \\alpha$ \\textbf{then} $Q.push(e_{ik})$.\n\t\t\\STATE \\textbf{if} $eco_{jk}(G_\\mathbf{p})\\leq \\alpha$ \\textbf{then} $Q.push(e_{jk})$.\n\t\\ENDFOR\n\t\\STATE Remove $e_{ij}$ from $G_{\\mathbf{p}}$.\n \\ENDWHILE\n \\STATE $C^*_\\mathbf{p}(\\alpha)\\leftarrow G_{\\mathbf{p}}$.\n\\BlankLine\n\\RETURN $C^*_\\mathbf{p}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\nGiven $G_\\mathbf{p}$ and $\\alpha$, an edge in $G_\\mathbf{p}$ is referred to as an \\emph{unqualified edge} if the edge cohesion is not larger than $\\alpha$.\nThe key idea of MPTD is to remove all unqualified edges so that the remaining edges and connected vertices constitute the maximal pattern truss.\n\nAs shown in Algorithm~\\ref{Alg:mptd}, MPTD consists of two phases. 
Phase 1 (Lines 1-8) computes the initial cohesion of each edge and pushes unqualified edges into queue $Q$. Phase 2 (Lines 9-18) removes the unqualified edges in $Q$ from $E_\\mathbf{p}$. Since removing $e_{ij}$ also breaks $\\triangle_{ijk}$, we update $eco_{ik}(G_\\mathbf{p})$ and $eco_{jk}(G_\\mathbf{p})$ in Lines 12-13. After the update, if $e_{ik}$ or $e_{jk}$ becomes unqualified, it is pushed into $Q$ (Lines 14-15).\nFinally, the remaining edges and connected vertices are returned as the maximal pattern truss.\n\n\\nop{\nThe correctness of MPTD is based on the fact that MPTD does not remove any edge from a valid maximal pattern truss $C^*_p(\\alpha)$. \nThis is because MPTD only removes unqualified edges and all edges in $C^*_p(\\alpha)$ are qualified edges according to the definition of $C^*_p(\\alpha)$. \n}\n\nWe show the correctness of MPTD as follows. \nIf $C^*_\\mathbf{p}(\\alpha)= \\emptyset$, then all edges in $E_\\mathbf{p}$ are removed as unqualified edges and MPTD returns $\\emptyset$. \nIf $C^*_\\mathbf{p}(\\alpha)\\neq \\emptyset$, then all edges in $E_\\mathbf{p}\\setminus E_\\mathbf{p}^*(\\alpha)$ are removed as unqualified edges and MPTD returns exactly $C^*_\\mathbf{p}(\\alpha)$.\n\n\n\\nop{we decompose the theme network $G_p$ into a maximal pattern truss $C^*_p(\\alpha)$ and the remaining subgraph denoted by $\\overline{C^*_p(\\alpha)}=G_p \\setminus C^*_p(\\alpha)$.\nSince adding new edges or vertices to $C^*_p(\\alpha)$ does not decrease the cohesion of the existing edges in $C^*_p(\\alpha)$, the edges and vertices in $\\overline{C^*_p(\\alpha)}$ will not decrease the cohesion of any edge in $C^*_p(\\alpha)$. Furthermore, $C^*_p(\\alpha)$ being the maximal pattern truss equally means all edges in $\\overline{C^*_p(\\alpha)}$ are unqualified, thus $\\overline{C^*_p(\\alpha)}$ will be removed by MPTD and the maximal pattern truss $C^*_p(\\alpha)$ is found. 
Additionally, if theme network $G_p$ does not contain any maximal pattern truss, we have $C^*_p(\\alpha)=\\emptyset$ and MPTD will remove all edges from $G_p$.}\n\nThe time complexity of Algorithm~\\ref{Alg:mptd} is dominated by the complexity of triangle enumeration for each edge $e_{ij}$ in $E_\\mathbf{p}$. \nThis requires checking all neighbouring vertices of $v_i$ and $v_j$, which costs $\\mathcal{O}(d(v_i) + d(v_j))$ time, where $d(v_i)$ and $d(v_j)$ are the degrees of $v_i$ and $v_j$, respectively.\nSince all edges in $E_\\mathbf{p}$ are checked, the cost for Lines 1-8 in Algorithm~\\ref{Alg:mptd} is $\\mathcal{O}(\\sum_{e_{ij}\\in E_\\mathbf{p}} (d(v_i) + d(v_j))) = \\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$.\nThe cost of Lines 9-18 is also $\\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$. The worst case happens when all edges are removed. Therefore, the time complexity of MPTD is $\\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$. As a result, MPTD can efficiently find the maximal pattern truss of a sparse theme network. \n\n\n\n\n\n\n\\subsection{Theme Community Scanner: A Baseline}\n\\label{sec:tcs}\n\nSince a database network $G$ may induce up to $2^{|S|}$ theme networks, running MPTD on all theme networks is impractical.\nIn this section, we introduce a baseline method, called \\emph{Theme Community Scanner} (TCS).\nThe key idea is to detect maximal pattern truss on each theme network using MPTD, and improve the detection efficiency by pre-filtering out the patterns whose maximum frequencies in all vertex databases cannot reach a minimum frequency threshold $\\epsilon$. \nThe intuition is that patterns with low frequencies are not likely to be the theme of a theme community.\n\nGiven a frequency threshold $\\epsilon$, TCS first obtains the set of candidate patterns $\\mathcal{P}=\\{\\mathbf{p} \\mid \\exists v_i\\in V, f_i(\\mathbf{p}) > \\epsilon\\}$ by enumerating all patterns in each vertex database. 
Then, for each candidate pattern $\\mathbf{p}\\in \\mathcal{P}$, we induce theme network $G_\\mathbf{p}$ and find the maximal pattern truss by MPTD. The final result is a set of maximal pattern trusses, denoted by $\\mathbb{C}(\\alpha)=\\{C^*_\\mathbf{p}(\\alpha) \\mid C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset, \\mathbf{p}\\in \\mathcal{P}\\}$.\n\n\\nop{However, since a pattern $\\mathbf{p}$ with small frequency can still form a strong theme community if there are a large number of vertices that contain $\\mathbf{p}$ and form a densely connected subgraph.}\n\nThe pre-filtering method of TCS improves the detection efficiency of theme communities. However, it may miss some theme communities, since a pattern $\\mathbf{p}$ with relatively small frequencies in all vertex databases can still form a good theme community if a large number of vertices containing $\\mathbf{p}$ form a densely connected subgraph.\nTCS thus trades accuracy for efficiency.\nA detailed discussion of the effect of $\\epsilon$ is given in Section~\\ref{Sec:eop}.\n\n\\nop{\nHowever, there is no proper $\\epsilon$ that works for all theme networks and setting $\\epsilon=0$ is the only way to guarantee exact results.\nDetailed discussion on the effect of $\\epsilon$ will be introduced in Section~\\ref{Sec:eop}.\nPre-filtering out the patterns with low frequencies improves the efficiency of TCS, however, since \nIn sum, TCS performs a trade-off between efficiency and accuracy.\nHowever, there is no proper $\\epsilon$ that works for all theme networks and setting $\\epsilon=0$ is the only way to guarantee exact results.\nDetailed discussion on the effect of $\\epsilon$ will be introduced in Section~\\ref{Sec:eop}.\n}\n\n\\nop{\n\\section{A Naive Method}\nIn this section, we introduce a naive method named \\emph{Theme Community Scanner} (TCS) to solve the theme community finding problem in 
Definition~\\ref{Def:theme_comm_finding}.\n\n\\begin{algorithm}[t]\n\\caption{Theme Community Scanner}\n\\label{Alg:tcs}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathcal{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize maximal pattern truss set: $\\mathcal{C}(\\alpha)\\leftarrow \\emptyset$.\n \\STATE Initialize candidate pattern set: $P\\leftarrow \\emptyset$.\n \\FOR{each database $d_i\\in D$}\n \t\\STATE Find pattern set $P_i=\\{p \\mid f_i(p)>\\epsilon\\}$.\n\t\\STATE Update pattern set: $P\\leftarrow P \\cup P_i$.\n \\ENDFOR\n \n \\FOR{each pattern $p\\in P$}\n \t\\STATE Induce theme network $G_p$.\n\t\\STATE Remove vertices in $V'=\\{v_i \\mid f_i(p) \\leq \\epsilon \\}$ from $G_p$.\n \t\\STATE Call Algorithm~\\ref{Alg:mptd} to find $C^*_p(\\alpha)$ in $G_p$.\n \t\\STATE Update: $\\mathcal{C}(\\alpha)\\leftarrow \\mathcal{C}(\\alpha) \\cup C^*_p(\\alpha)$.\n \\ENDFOR\n\\BlankLine\n\\RETURN $\\mathcal{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThe key idea of TCS is to directly apply MPTD (Algorithm~\\ref{Alg:mptd}) in each induced theme network $G_p$, while attempting to improve the efficiency by pre-filtering out the vertices whose pattern frequency $f_i(p)$ is not larger than a small threshold $\\epsilon$. \n\nThe intuition for the pre-filtering step is simply that vertices with low pattern frequency are less likely to be contained in a theme community. \nAs an immediate result of the pre-filtering step, we can skip a theme network $G_p$ if no vertex $v_i\\in V_p$ has a large pattern frequency $f_i(p)>\\epsilon$. Thus, the efficiency of theme community finding can be improved. \n\nAlgorithm~\\ref{Alg:tcs} presents the details of TCS. Steps 1-2 perform initialization. Steps 3-6 enumerates patterns in each database and obtains the candidate pattern set $P=\\{p \\mid \\exists v_i\\in V, f_i(p) > \\epsilon\\}$. 
Steps 7-12 apply MPTD on the theme networks induced by candidate patterns to find the set of maximal pattern trusses $\\mathcal{C}(\\alpha)$. In the end, we can easily obtain theme communities by extracting maximal connected subgraphs from each maximal pattern truss in $\\mathcal{C}(\\alpha)$.\n\nFigure~\\ref{Fig:toy_example}(b)-(c) show some good examples on the effect of the pre-filtering step. In Figure~\\ref{Fig:toy_example}(c), setting $\\epsilon=0.1$ will filter out both $v_1$ and $v_8$ in $G_{p_2}$. This improves the efficiency of TCS without affecting the theme community $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$. However, in Figure~\\ref{Fig:toy_example}(b), setting $\\epsilon=0.1$ will filter out both $v_3$ and $v_5$, which dismantles the theme community $\\{v_1, v_2, v_3, v_4, v_5\\}$. \n\nIn fact, the pre-filtering step is a trade-off between efficiency and accuracy. However, there is no proper $\\epsilon$ that uniformly works for all theme networks, since different patterns can induce completely different theme networks. \nTherefore, setting $\\epsilon=0$ is the only way to obtain exact theme community finding results. \nDetailed discussion on the effect of $\\epsilon$ will be discussed in Section~\\ref{Sec:eop}.\n}\n\\section{Theme Community Finding}\n\\label{sec:algo}\n\nIn this section, we first explore several fundamental properties of maximal pattern truss that enable fast theme community finding. 
Then, we introduce two efficient and exact theme community finding methods.\n\\nop{\\emph{Theme Community Finder Apriori (TCFA)} and \\emph{Theme Community Finder Intersection (TCFI)}.}\n\n\\subsection{Properties of Maximal Pattern Truss}\n\\label{Sec:pompt}\n\n\\begin{theorem}[Graph Anti-monotonicity]\n\\label{Prop:gam}\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{Prop:gam} is given in Appendix~\\ref{Apd:gam}.\n\n\\nop{\n\\begin{proof}\nSince maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ by proving $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$.\n\nWe construct a subgraph $H_{\\mathbf{p}_1}=(V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1})$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThat is, $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nNext, we prove $H_{\\mathbf{p}_1}$ is a subgraph in $G_{\\mathbf{p}_1}$. \nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\nRecall that $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nNow we prove $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, the following inequation holds for every triangle $\\triangle_{ijk}$ in $H_{\\mathbf{p}_1}$.\n\\begin{equation}\n\\label{eq:mono}\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nSince $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows that the set of triangles in $C^*_{\\mathbf{p}_2}(\\alpha)$ is exactly the same as the set of triangles in $H_{\\mathbf{p}_1}$.\nTherefore, we can derive from Equation~\\ref{eq:mono} and Definition~\\ref{Def:edge_cohesion} that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))\n\\end{equation}\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha$ in $G_{\\mathbf{p}_2}$, it follows\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))>\\alpha\n\\end{equation}\nRecall that $E_{\\mathbf{p}_1} = E^*_{\\mathbf{p}_2}(\\alpha)$, it follows\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha)) > \\alpha\n\\end{equation}\nThis means $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nRecall that 
$H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$.\nThe Theorem follows.\n\\end{proof}\n}\n\n\\nop{\n\n\nFirst, we prove $C^*_{\\mathbf{p}_2}$ is a subgraph in $G_{\\mathbf{p}_1}$. \nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. \nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\n\nSecond, we construct a subgraph $H_{\\mathbf{p}_1}=\\{V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1}\\}$ in theme network $G_{\\mathbf{p}_1}$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThis means $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nTo prove the first property, since the maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_1}(\\alpha)\\neq\\emptyset$ by finding a non-empty pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\n\n\n\\begin{proof}\nSince $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$.\nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\n\nWe prove this in two cases.\nFirst, if $C^*_{\\mathbf{p}_2}(\\alpha)=\\emptyset$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ holds.\nSecond, if $C^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$, we know from the proof of Proposition~\\ref{Prop:pam} that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$. \nRecall Definition~\\ref{Def:maximal_pattern_truss} that $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses in $G_{\\mathbf{p}_1}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\\end{proof}\n}\n\n\\begin{proposition}[Pattern Anti-monotonicity]\n\\label{Prop:pam}\nFor $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$ and a cohesion threshold $\\alpha$, the following two properties hold.\n\\begin{enumerate}\n\t\\item If $C^*_{\\mathbf{p}_2}(\\alpha)\\neq \\emptyset$, then $C^*_{\\mathbf{p}_1}(\\alpha)\\neq\\emptyset$.\n\t\\item If $C^*_{\\mathbf{p}_1}(\\alpha)= \\emptyset$, then $C^*_{\\mathbf{p}_2}(\\alpha)= \\emptyset$.\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nAccording to Theorem~\\ref{Prop:gam}, since $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\nThe proposition follows immediately.\n\\end{proof}\n\n\\nop{it follows $V^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$, $E^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$ and $\\forall v_i\\in V^*_{\\mathbf{p}_2}(\\alpha), f_i(\\mathbf{p}_2)>0$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, it follows $\\forall v_i\\in V_{\\mathbf{p}_1}, f_i(\\mathbf{p}_1)>0$ and $H_{\\mathbf{p}_1}\\neq\\emptyset$. \n}\n\\nop{According to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\nConsidering $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n}\n\n\\nop{\nFirst, we prove $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$. \n\nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. \nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\nConsidering $H_{\\mathbf{p}_1}\\subseteq C^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nSecond, we define $H_{\\mathbf{p}_1}=\\{V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1}\\}$, where $V_{\\mathbf{p}_1}$\n\nwe prove $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$. Since $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, it follows $\\forall v_i \\in V^*_{\\mathbf{p}_2}(\\alpha), f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. 
\nTherefore, for each $\\triangle_{ijk}$ in $C^*_{\\mathbf{p}_2}(\\alpha)$, we have\n\\begin{equation}\\nonumber\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nThus, according to Definition~\\ref{Def:edge_cohesion} it follows that \n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(\\mathbf{p}_1)\\geq eco_{ij}(\\mathbf{p}_2)\n\\end{equation}\n}\n\n\n\n\n\n\n\\begin{proposition}[Graph Intersection Property]\n\\label{Lem:gip}\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_3$ and $\\mathbf{p}_2 \\subseteq \\mathbf{p}_3$, then $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha) \\cap C^*_{\\mathbf{p}_2}(\\alpha)$.\n\\end{proposition}\n\n\\begin{proof}\nSince $\\mathbf{p}_1 \\subseteq \\mathbf{p}_3$, according to Theorem~\\ref{Prop:gam}, $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$. \nSimilarly, $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_2}(\\alpha)$. 
The proposition follows.\n\\end{proof}\n\n\\nop{\nThe proofs of Propositions~\\ref{Prop:pam}, \\ref{Prop:gam} and \\ref{Lem:gip} can be found in Sections~\\ref{Apd:pam}, \\ref{Apd:gam} and \\ref{Apd:gip} of the appendix, respectively.\n}\n\n\\begin{algorithm}[t]\n\\caption{Generate Apriori Candidate Patterns}\n\\label{Alg:apriori}\n\\KwIn{The set of Length-$(k-1)$ qualified patterns $\\mathcal{P}^{k-1}$.}\n\\KwOut{The set of Length-$k$ candidate patterns $\\mathcal{M}^k$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize: $\\mathcal{M}^k\\leftarrow \\emptyset$.\n \t\\FOR{$\\{\\mathbf{p}^{k-1}, \\mathbf{q}^{k-1}\\} \\subset \\mathcal{P}^{k-1} \\land |\\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}| = k$}\n\t\t\\STATE $\\mathbf{p}^k\\leftarrow \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$.\n\t\t\\STATE \\textbf{if} all length-$(k-1)$ sub-patterns of $\\mathbf{p}^k$ are qualified \\textbf{then} $\\mathcal{M}^k\\leftarrow \\mathcal{M}^k \\cup \\mathbf{p}^k$.\n\t\t\n\t\n\t\n\t\\ENDFOR\n\\BlankLine\n\\RETURN $\\mathcal{M}^k$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n\\caption{Theme Community Finder Apriori (TCFA)}\n\\label{Alg:tcfa}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \n \\STATE Initialize: $\\mathcal{P}^1$, $\\mathbb{C}^1(\\alpha)$, $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha)$, $k\\leftarrow2$.\n \\WHILE{$\\mathcal{P}^{k-1}\\neq \\emptyset$}\n \t\\STATE Call Algorithm~\\ref{Alg:apriori}: $\\mathcal{M}^k\\leftarrow \\mathcal{P}^{k-1}$.\n\t\\STATE $\\mathcal{P}^k\\leftarrow \\emptyset$, $\\mathbb{C}^k(\\alpha)\\leftarrow \\emptyset$.\n\t \\FOR{each length-$k$ pattern $\\mathbf{p}^k\\in \\mathcal{M}^k$}\n \t\t\\STATE Induce $G_{\\mathbf{p}^k}$ from $G$.\n\t\t\\STATE Compute $C^*_{\\mathbf{p}^k}(\\alpha)$ using $G_{\\mathbf{p}^k}$ by Algorithm~\\ref{Alg:mptd}.\n\t\n\t\t\\STATE \\textbf{if} 
$C^*_{\\mathbf{p}^k}(\\alpha) \\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^k(\\alpha)\\leftarrow \\mathbb{C}^k(\\alpha) \\cup C^*_{\\mathbf{p}^k}(\\alpha)$, and $\\mathcal{P}^k\\leftarrow \\mathcal{P}^k\\cup \\mathbf{p}^k$.\n \t\\ENDFOR\n\t\\STATE $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}(\\alpha) \\cup \\mathbb{C}^k(\\alpha)$.\n \t\\STATE $k\\leftarrow k+1$.\n \\ENDWHILE\n\\BlankLine\n\\RETURN $\\mathbb{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\nop{\n\\begin{algorithm}[t]\n\\caption{Initialization}\n\\label{Alg:init}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{Length-$1$ qualified pattern set $P^1$ and the corresponding maximal pattern trusses $\\mathbb{C}^1(\\alpha)$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialization: $P^1\\leftarrow \\emptyset$, $\\mathbb{C}^1(\\alpha)\\leftarrow \\emptyset$.\n \\FOR{each length-1 pattern $p^1\\in S$}\n \t\\STATE Induce theme network $G_{p^1}$ and call Algorithm~\\ref{Alg:mptd}.\n\t\\STATE \\textbf{if} $\\exists C^*_{p^1}(\\alpha)\\subset G_{p^1}$ and $C^*_{p^1}\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^1(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha) \\cup C^*_{p^1}(\\alpha)$ and $P^1\\leftarrow P^1\\cup p^1$.\n \\ENDFOR\n\\BlankLine\n\\RETURN $\\{P^1, \\mathbb{C}^1(\\alpha)\\}$.\n\\end{algorithmic}\n\\end{algorithm}\n}\n\n\\nop{\n\\begin{algorithm}[t]\n\\caption{Theme Community Finder Intersection}\n\\label{Alg:tcfi}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \n \\STATE Initialize: $\\mathcal{P}^1$, $\\mathbb{C}^1(\\alpha)$, $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha)$, $k\\leftarrow2$.\n \\WHILE{$\\mathcal{P}^{k-1}\\neq \\emptyset$}\n \t\\STATE Call Algorithm~\\ref{Alg:apriori}: $\\mathcal{M}^k\\leftarrow \\mathcal{P}^{k-1}$.\n\t\\STATE $\\mathcal{P}^k\\leftarrow \\emptyset$, $\\mathbb{C}^k(\\alpha)\\leftarrow \\emptyset$.\n\t 
\\FOR{each length-$k$ pattern $\\mathbf{p}^k\\in \\mathcal{M}^k$}\n \t\t\\STATE Induce $G'_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{p}^{k-1}}(\\alpha)$.\n\t\t\\STATE Call Algorithm~\\ref{Alg:mptd} with $G'_{\\mathbf{p}^k}$ as input.\n\t\t\\STATE \\textbf{if} $\\exists C^*_{\\mathbf{p}^k}(\\alpha)\\subset G'_{\\mathbf{p}^k}$ and $C^*_{\\mathbf{p}^k}\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^k(\\alpha)\\leftarrow \\mathbb{C}^k(\\alpha) \\cup C^*_{\\mathbf{p}^k}(\\alpha)$, $\\mathcal{P}^k\\leftarrow \\mathcal{P}^k\\cup \\mathbf{p}^k$. \n \t\\ENDFOR\n\t\n\t\\STATE $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}(\\alpha) \\cup \\mathbb{C}^k(\\alpha)$.\n \t\\STATE $k\\leftarrow k+1$.\n \\ENDWHILE\n\\BlankLine\n\\RETURN $\\mathbb{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n}\n\n\n\\subsection{Theme Community Finder Apriori}\n\\label{Sec:tcfa}\nIn this subsection, we introduce \\emph{Theme Community Finder Apriori} (TCFA) to solve the theme community finding problem.\nThe key idea of TCFA is to improve the efficiency of theme community finding by pruning unqualified patterns early, in an Apriori-like manner~\\cite{agrawal1994fast}.\n\nA pattern $\\mathbf{p}$ is said to be \\emph{unqualified} if $C^*_\\mathbf{p}(\\alpha)=\\emptyset$, and to be \\emph{qualified} if $C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset$.\nFor two patterns $\\mathbf{p}_1$ and $\\mathbf{p}_2$, if $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, $\\mathbf{p}_1$ is called a \\emph{sub-pattern} of $\\mathbf{p}_2$.\n\nAccording to the second property of Proposition~\\ref{Prop:pam}, for two patterns $\\mathbf{p}_1$ and $\\mathbf{p}_2$, if $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$ and $\\mathbf{p}_1$ is unqualified, then $\\mathbf{p}_2$ is also unqualified; thus, $\\mathbf{p}_2$ can be pruned immediately without running MPTD (Algorithm~\\ref{Alg:mptd}) on $G_{\\mathbf{p}_2}$. 
Therefore, we can prune a length-$k$ pattern if any of its length-$(k-1)$ sub-patterns is unqualified.\n\nAlgorithm~\\ref{Alg:apriori} shows how we generate the set of length-$k$ candidate patterns by retaining only the length-$k$ patterns whose length-$(k-1)$ sub-patterns are all qualified.\nThis efficiently prunes a large number of unqualified patterns without running MPTD\n\\nop{\nSpecifically, line 3 generates a possible length-$k$ candidate pattern $\\mathbf{p}^k$ by merging two length-$(k-1)$ qualified patterns $\\{\\mathbf{p}^{k-1}, \\mathbf{q}^{k-1}\\}$. \nLine 4 retains $\\mathbf{p}^k$ as a candidate pattern if all its length-$(k-1)$ sub-patterns are qualified. \n}\n\nAlgorithm~\\ref{Alg:tcfa} introduces the details of TCFA. Line 1 calculates the set of length-1 qualified patterns $\\mathcal{P}^1=\\{\\mathbf{p} \\subset S \\mid C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset, |\\mathbf{p}| = 1\\}$ and the corresponding set of maximal pattern trusses $\\mathbb{C}^1(\\alpha) = \\{C^*_\\mathbf{p}(\\alpha) \\mid \\mathbf{p}\\in \\mathcal{P}^1 \\}$. \nThis requires to run MPTD (Algorithm~\\ref{Alg:mptd}) on each theme network induced by a single item in $S$. \n\\nop{Since the theme networks induced by different items are independent, we can run this process in parallel. Our implementation use multiple threads for this step.}\nLine 3 calls Algorithm~\\ref{Alg:apriori} to generate the set of length-$k$ candidate patterns $\\mathcal{M}^k$.\nLines 5-9 remove the unqualified candidate patterns in $\\mathcal{M}^k$ by discarding every candidate pattern that cannot form a non-empty maximal pattern truss.\nIn this way, Lines 2-12 iteratively generate the set of length-$k$ qualified patterns $\\mathcal{P}^k$ from $\\mathcal{P}^{k-1}$ until no qualified patterns can be found. 
\nLast, the exact set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ is returned.\n\n\\nop{Specifically, step 3 calls Algorithm~\\ref{Alg:apriori} to generate candidate pattern set $L^k$; steps 4-9 obtain $P^k$ by retaining only the qualified patterns in $L^k$.}\n\n\\nop{\nAlgorithm~\\ref{Alg:apriori} shows how we generate the length-$k$ candidate patterns and prune unqualified patterns in an Apriori-like manner.\nAccording to the second property of Proposition~\\ref{Prop:pam}, for two patterns $p_1$ and $p_2$, if $p_1 \\subset p_2$ and $p_1$ is unqualified, then $p_2$ is unqualified and can be immediately pruned without running Algorithm~\\ref{Alg:mptd} on $G_{p_2}$. \nAs a result, we perform efficient pattern pruning by retaining only the length-$k$ candidate patterns whose length-$(k-1)$ sub-patterns are all qualified patterns.\nSpecifically, step 3 generates a possible length-$k$ candidate pattern $p^k$ by merging two length-$(k-1)$ qualified patterns $\\{p^{k-1}, q^{k-1}\\}$. Step 4 retains $p^k$ as a candidate pattern if all its length-$(k-1)$ sub-patterns are qualified. \n}\n\nCompared with the baseline TCS in Section~\\ref{sec:tcs}, TCFA achieves a substantial efficiency improvement by effectively pruning a large number of unqualified patterns using the Apriori-like method in Algorithm~\\ref{Alg:apriori}.\nHowever, due to the well-known limitation of Apriori~\\cite{agrawal1994fast}, the set of candidate patterns $\\mathcal{M}^k$ is often very large and still contains many unqualified candidate patterns. \nConsequently, Lines 5-9 of Algorithm~\\ref{Alg:tcfa} become the bottleneck of TCFA.\nWe solve this problem in the next subsection.\n\n\n\\nop{\ncan be identified and removed by running Algorithm~\\ref{Alg:mptd} on the corresponding theme networks, \n\nThus, checking the qualification of such unqualified patterns in lines 5-9 of Algorithm~\\ref{Alg:tcfa} becomes the bottleneck of computational efficiency. 
\n}\n\n\\subsection{Theme Community Finder Intersection}\n\\label{Sec:tcfi}\nThe \\emph{Theme Community Finder Intersection} (TCFI) method significantly improves the efficiency of TCFA by further pruning unqualified patterns in $\\mathcal{M}^k$ using Proposition~\\ref{Lem:gip}.\n\nConsider pattern $\\mathbf{p}^k$ of length $k$ and patterns $\\mathbf{p}^{k-1}$ and $\\mathbf{q}^{k-1}$ of length $k-1$.\nAccording to Proposition~\\ref{Lem:gip}, if $\\mathbf{p}^k = \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$, then $C^*_{\\mathbf{p}^k}(\\alpha)\\subseteq C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$. \nTherefore, if $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)=\\emptyset$, then $C^*_{\\mathbf{p}^k}(\\alpha)=\\emptyset$, and we can prune $\\mathbf{p}^k$ immediately.\nIf $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq\\emptyset$, we can induce the theme network $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$\nand find $C^*_{\\mathbf{p}^k}(\\alpha)$ within $G_{\\mathbf{p}^k}$ by MPTD.\n\n\\nop{\nDenote $H=C^*_{p^{k-1}}(\\alpha) \\cap C^*_{q^{k-1}}(\\alpha)$ and let $G'_{p^k}$ denote the theme network induced from $H$, we can derive the following results using Lemma~\\ref{Lem:gip}. \nFirst, if $H=\\emptyset$, then $C^*_{p^k}(\\alpha)=\\emptyset$, thus we can prune $p^k$ immediately.\nSecond, if $H\\neq\\emptyset$, then $C^*_{p^k}(\\alpha) \\subseteq G'_{p^k}$, thus we can run Algorithm~\\ref{Alg:mptd} on $G'_{p^k}$ to find $C^*_{p^k}(\\alpha)$.\n}\n\nAccordingly, TCFI improves TCFA by modifying only Line 6 of Algorithm~\\ref{Alg:tcfa}.\nInstead of inducing $G_{\\mathbf{p}^k}$ from $G$, TCFI induces $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ when $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq \\emptyset$. 
\nHere, $\\mathbf{p}^{k-1}$ and $\\mathbf{q}^{k-1}$ are qualified patterns in $\\mathcal{P}^{k-1}$ such that $\\mathbf{p}^k = \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$.\n\nUsing the graph intersection property in Proposition~\\ref{Lem:gip}, TCFI efficiently prunes a large number of unqualified candidate patterns and dramatically improves the detection efficiency. \nFirst, TCFI prunes a large number of candidate patterns in $\\mathcal{M}^k$ by simply checking whether $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)=\\emptyset$.\nSecond, when $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq\\emptyset$, inducing $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is more efficient than inducing $G_{\\mathbf{p}^k}$ from $G$, since $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is often much smaller than $G$. \nThird, $G_{\\mathbf{p}^k}$ induced from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is often much smaller than $G_{\\mathbf{p}^k}$ induced from $G$, which significantly reduces the time cost of running MPTD on $G_{\\mathbf{p}^k}$.\nFourth, according to Theorem~\\ref{Prop:gam}, the size of a maximal pattern truss decreases when the length of the pattern increases. Thus, when a pattern grows longer, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ decreases rapidly, which significantly improves the pruning effectiveness of TCFI.\nLast, as will be discussed in Section~\\ref{Sec:eotcf}, most maximal pattern trusses are small local subgraphs in a database network. Such small subgraphs in different local regions of a large sparse database network generally do not intersect with each other.\n\n\n\\nop{\nFurther more, according to Proposition~\\ref{Prop:gam}, the size of maximal pattern truss reduces when pattern length $k$ increases. 
Therefore, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ often decreases with the increase of $k$. As a result, TCFI tends to be more effective when $k$ increases. \nMore often than not, in real world database networks, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ decreases significantly with the increase of pattern length $k$.\nAs the pattern grows longer, the pruning of TCFI becomes impressively effective, leading to its significant efficiency performance in real world database networks.\n\n\nSo far, we have proposed TCS, TCFA and TCFI to solve the theme community finding problem in Definition~\\ref{Def:theme_comm_finding}. \nHowever, when the user inputs a new threshold $\\alpha$, all methods have to recompute from scratch. Thus, they are not efficient in answering frequent user queries. \nFortunately, such expensive re-computation can be avoided by decomposing and indexing all maximal pattern trusses. In the next section, we introduce our efficient theme community indexing structure that achieves fast user query answering.\n\n\nIn the next section, we introduce an indexing structure that efficiently indexes decomposed maximal pattern trusses in a tree to achieve fast query answering.\n\nTo quickly answer user queries, we introduce an efficient theme community indexing structure .\n}\n\n\n\\section{Theme Community Indexing}\n\\label{sec:index}\n\nWhen a user inputs a new cohesion threshold $\\alpha$, TCS, TCFA and TCFI have to recompute from scratch. \nCan we save the re-computation cost by decomposing and indexing all maximal pattern trusses to achieve fast user query answering?\nIn this section, we propose the \\emph{Theme Community Tree} (TC-Tree) for fast user query answering.\nWe first introduce how to decompose a maximal pattern truss.\nThen, we illustrate how to build a TC-Tree from decomposed maximal pattern trusses. 
Last, we present a querying method that efficiently answers user queries.\n\n\\subsection{Maximal Pattern Truss Decomposition}\n\\label{Sec:mpt_dec}\nWe first explore how to decompose a maximal pattern truss into multiple disjoint sets of edges\n\n\\nop{\n\\begin{property}\n\\label{Obs:anti_alpha}\n$\\forall p\\subseteq S$, if $\\alpha_2 > \\alpha_1 \\geq 0$, then $C^*_{p}(\\alpha_2)\\subseteq C^*_{p}(\\alpha_1)$. \n\\end{property}\n\nProperty~\\ref{Obs:anti_alpha} shows that the size of maximal pattern truss reduces when the threshold increases from $\\alpha_1$ to $\\alpha_2$.\nAccording to Definition~\\ref{Def:pattern_truss}, the cohesion of all edges in $C^*_{p}(\\alpha_1)$ are larger than $\\alpha_1$. Since $\\alpha_2 > \\alpha_1$, there can be some edges in $C^*_{p}(\\alpha_1)$ whose cohesion is smaller than $\\alpha_2$.\nSince $C^*_{p}(\\alpha_2)$ is formed by removing such edges from $C^*_{p}(\\alpha_1)$, we have $C^*_{p}(\\alpha_2)\\subseteq C^*_{p}(\\alpha_1)$.\n}\n\n\\begin{theorem}\n\\label{Obs:discrete}\nGiven a theme network $G_\\mathbf{p}$, a cohesion threshold $\\alpha_2$ and a maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ in $G_\\mathbf{p}$ whose minimum edge cohesion is $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$, if $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.\n\\nop{\n$\\forall \\mathbf{p}\\subseteq S$, if $C^*_{\\mathbf{p}}(\\alpha_1)\\neq\\emptyset$, $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$ and $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.}\n\\end{theorem}\n\nThe proof of Theorem~\\ref{Obs:discrete} is given in Appendix~\\ref{Apd:discrete}.\n\n\n\\nop{\n\\begin{proof}\n\\nop{\nRecall that $C^*_\\mathbf{p}(\\alpha_1)$ and $C^*_\\mathbf{p}(\\alpha_2)$ are edge-induced subgraphs induced from the sets of edges 
$E^*_\\mathbf{p}(\\alpha_1)$ and $E^*_\\mathbf{p}(\\alpha_2)$, respectively, we can prove $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$ by proving $E^*_\\mathbf{p}(\\alpha_2)\\subset E^*_\\mathbf{p}(\\alpha_1)$ as follows.\n}\n\\nop{For notational efficiency, we denote $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$.}\n\nFirst, we prove $\\alpha_2 > \\alpha_1$. \nSince $C^*_{\\mathbf{p}}(\\alpha_1)$ is a maximal pattern truss with respect to threshold $\\alpha_1$, from Definition~\\ref{Def:pattern_truss}, we have $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_1), eco_{ij}(C^*_\\mathbf{p}(\\alpha_1)) > \\alpha_1$. \nSince $\\beta$ is the minimum edge cohesion of all the edges in $C^*_{\\mathbf{p}}(\\alpha_1)$, $\\beta > \\alpha_1$. \nSince $\\alpha_2 \\geq \\beta$, $\\alpha_2 > \\alpha_1$.\n\nSecond, we prove $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2 > \\alpha_1$, we know from Definition~\\ref{Def:pattern_truss} that $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_2), eco_{ij}(C^*_\\mathbf{p}(\\alpha_2)) > \\alpha_2> \\alpha_1$.\nThis means that $C^*_\\mathbf{p}(\\alpha_2)$ is a pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$.\nSince $C^*_\\mathbf{p}(\\alpha_1)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$, from Definition~\\ref{Def:maximal_pattern_truss} we have $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$.\n\nLast, we prove $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$. \nLet $e^*_{ij}$ be the edge with minimum edge cohesion $\\beta$ in $ E^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2\\geq \\beta$, According to Definition~\\ref{Def:pattern_truss}, $e^*_{ij}\\not\\in E^*_\\mathbf{p}(\\alpha_2)$. 
\nThus, $E^*_{\\mathbf{p}}(\\alpha_1)\\neq E^*_{\\mathbf{p}}(\\alpha_2)$ and $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$.\nRecall that $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. The theorem follows.\n\\end{proof}\n}\n\nTheorem~\\ref{Obs:discrete} indicates that the maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ shrinks only when the cohesion threshold $\\alpha_2 \\geq \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$.\nTherefore, we can decompose a maximal pattern truss of $G_\\mathbf{p}$ into a sequence of disjoint sets of edges using a sequence of ascending cohesion thresholds $\\mathcal{A}_\\mathbf{p}=\\alpha_0, \\alpha_1, \\cdots, \\alpha_h$, where $\\alpha_0 = 0$, $\\alpha_k = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_{k-1})} eco_{ij}(C^*_\\mathbf{p}(\\alpha_{k-1}))$ and $k\\in[1,h]$.\n\nFor $\\alpha_0 = 0$, we call MPTD\nto calculate $C^*_\\mathbf{p}(\\alpha_0)$, which is the largest maximal pattern truss in $G_\\mathbf{p}$. \nFor $\\alpha_1, \\ldots, \\alpha_h$, we decompose $C^*_\\mathbf{p}(\\alpha_0)$ into a sequence of removed sets of edges $R_\\mathbf{p}(\\alpha_1), \\ldots, R_\\mathbf{p}(\\alpha_h)$, where $R_\\mathbf{p}(\\alpha_{k})=E^*_\\mathbf{p}(\\alpha_{k-1}) \\setminus E^*_\\mathbf{p}(\\alpha_{k})$ is the set of edges removed when $C^*_\\mathbf{p}(\\alpha_{k-1})$ shrinks to $C^*_\\mathbf{p}(\\alpha_{k})$. \nThe decomposition iterates until all edges in $C^*_\\mathbf{p}(\\alpha_0)$ are removed.\n\nThe decomposition results are stored in a linked list $\\mathcal{L}_\\mathbf{p}=\\mathcal{L}_\\mathbf{p}(\\alpha_1), \\ldots, \\mathcal{L}_\\mathbf{p}(\\alpha_h)$, where the $k$-th node stores $\\mathcal{L}_\\mathbf{p}(\\alpha_k) = (\\alpha_k, R_\\mathbf{p}(\\alpha_k))$. Since $\\mathcal{L}_\\mathbf{p}$ stores the same number of edges as in $E^*_\\mathbf{p}(\\alpha_0)$, it does not incur much extra memory cost. 
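As a sketch of how the decomposed list supports fast truss retrieval, the snippet below stores the pairs (alpha_k, R_p(alpha_k)) as plain tuples, recovers the edge set E*_p(alpha) as the union of the removed sets whose thresholds exceed alpha, and reads off the upper bound alpha*_p from the last entry. The list-of-tuples representation and function names are our own illustrative choices; producing the decomposition itself requires repeatedly running MPTD as described above.

```python
def truss_edges(decomposition, alpha):
    """Recover E*_p(alpha) from the decomposed list
    L_p = [(alpha_1, R_1), ..., (alpha_h, R_h)], where R_k is the
    set of edges removed when the threshold grows to alpha_k.
    An edge survives threshold alpha iff it is only removed at
    some alpha_k > alpha."""
    edges = set()
    for alpha_k, removed in decomposition:
        if alpha_k > alpha:
            edges |= removed
    return edges

def alpha_upper_bound(decomposition):
    # Thresholds are ascending, so the last entry holds alpha*_p;
    # for any alpha >= alpha*_p the maximal pattern truss is empty.
    return decomposition[-1][0]
```

For example, with the decomposition [(1, {e1}), (2, {e2, e3})], querying alpha = 0 returns all three edges, alpha = 1.5 returns {e2, e3}, and any alpha >= 2 returns the empty set.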
\n\n\\nop{\n$C^*_{p}(\\alpha_i)$ reduces to $C^*_{p}(\\alpha_{i+1})$ when $\\alpha_{i+1}\\in [\\beta_i, \\beta_{i+1})$, where $\\beta_i = \\psi_p(\\alpha_i)$.\n}\n\nUsing $\\mathcal{L}_\\mathbf{p}$, we can efficiently get the maximal pattern truss $C^*_\\mathbf{p}(\\alpha)$ for any $\\alpha\\geq 0$ by first obtaining $E^*_\\mathbf{p}(\\alpha)$ as\n\\begin{equation}\n\\label{Eqn:get_mpt}\nE^*_\\mathbf{p}(\\alpha)=\\bigcup\\limits_{\\alpha_k>\\alpha} R_\\mathbf{p}(\\alpha_k)\n\\end{equation}\nand then inducing $V^*_\\mathbf{p}(\\alpha)$ from $E^*_\\mathbf{p}(\\alpha)$ according to Definition~\\ref{Def:pattern_truss}.\n$\\mathcal{L}_\\mathbf{p}$ also provides the nontrivial range of $\\alpha$ for $G_\\mathbf{p}$. \nThe upper bound of $\\alpha$ in $G_\\mathbf{p}$ is $\\alpha^*_\\mathbf{p}=\\max\\mathcal{A}_\\mathbf{p}$, since $\\forall \\alpha\\geq \\alpha^*_\\mathbf{p}$ we have $C^*_\\mathbf{p}(\\alpha)=\\emptyset$. \nTherefore, the nontrivial range of $\\alpha$ for $G_\\mathbf{p}$ is $\\alpha\\in[0, \\alpha^*_\\mathbf{p})$. \n$\\alpha^*_\\mathbf{p}$ can be easily obtained by visiting the last entry of $\\mathcal{L}_\\mathbf{p}$.\n\n\\subsection{Theme Community Tree}\nA TC-Tree, denoted by $\\mathcal{T}$, is an extension of a \\emph{set enumeration tree} (SE-Tree)~\\cite{rymon1992search} and is carefully customized for efficient theme community indexing and query answering.\n\n\\nop{\nA SE-Tree is a basic data structure that enumerates all the patterns in the power set of $S$. \nEach node of the SE-Tree stores an item in $S$. \nThe $i$-th node of the SE-Tree is denoted by $n_i$, the item stored in $n_i$ is denoted by $s_{n_i}\\in S$.\nWe map each item in $S$ to a unique rank using a \\emph{ranking function} $\\psi: S \\rightarrow \\mathbb{Z}^+$. 
\nFor the $j$-th item $s_j$ in $S$, the ranking function maps $s_j$ to its rank $j$ in $S$, that is, $\\psi(s_j) = j$.\n\\nop{We compare a pair of items $s_i$ and $s_j$ in $S$ by a \\emph{pre-imposed order}, that is, \\emph{if $j > i$ then $s_j > s_i$}.}\nThen, for every node $n_i$ of the SE-Tree, we build a child of $n_i$ for each item $s_j\\in S$ that satisfies $\\psi(s_j) > \\psi(s_{n_i})$.\nIn this way, every node $n_i$ of the SE-Tree uniquely represents a pattern in the power set of $S$, denoted by $\\mathbf{p}_i\\subseteq S$, which is the union of the items stored in all the nodes along the path from the root to $n_i$. \nAs shown in Figure~\\ref{Fig:tct}, the SE-Tree of $S=\\{s_1, s_2, s_3, s_4\\}$ has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$.\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, where $s_{n_0}=\\emptyset$, $s_{n_1}=s_1$, $s_{n_6}=s_3$ and $s_{n_{13}}=s_4$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.\n}\n\nAn SE-Tree is a basic data structure that enumerates all the subsets of a set $S$. \nA total order $\\prec$ on the items in $S$ is assumed. \nThus, any subset of $S$ can be written as a sequence of items in order $\\prec$. \n\nEvery node of an SE-Tree uniquely represents a subset of $S$.\nThe root node represents $\\emptyset$. \nFor subsets $S_1$ and $S_2$ of $S$, the node representing $S_2$ is the child of the node representing $S_1$, if $S_1 \\subset S_2$; $|S_2 \\setminus S_1|=1$; and $S_1$ is a prefix of $S_2$ when $S_1$ and $S_2$ are written as sequences of items in order $\\prec$.\n\nEach node of an SE-Tree stores only the item in $S$ that is appended to its parent's subset to form its own subset.\nIn this way, the set of items represented by node $n_i$ is the union of the items stored in all the nodes along the path from the root to $n_i$.\nFigure~\\ref{Fig:tct} shows an example of the SE-Tree of set $S=\\{s_1, s_2, s_3, s_4\\}$. 
\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, thus the set of items represented by $n_{13}$ is $\\{s_1, s_3, s_4\\}$.\n\n\\nop{The SE-Tree efficiently stores the item $S_2\\setminus S_1$ in the child node that represents $S_2$. In this way, }\n\n\n\n\n\\nop{\nEach node of the SE-Tree stores an item in $S$. \nThe $i$-th node of the SE-Tree is denoted by $n_i$, the item stored in $n_i$ is denoted by $s_{n_i}\\in S$.\nWe map each item in $S$ to a unique rank using a \\emph{ranking function} $\\psi: S \\rightarrow \\mathbb{Z}^+$. \nFor the $j$-th item $s_j$ in $S$, the ranking function maps $s_j$ to its rank $j$ in $S$, that is, $\\psi(s_j) = j$.\n\\nop{We compare a pair of items $s_i$ and $s_j$ in $S$ by a \\emph{pre-imposed order}, that is, \\emph{if $j > i$ then $s_j > s_i$}.}\nThen, for every node $n_i$ of the SE-Tree, we build a child of $n_i$ for each item $s_j\\in S$ that satisfies $\\psi(s_j) > \\psi(s_{n_i})$.\nIn this way, every node $n_i$ of the SE-Tree uniquely represents a pattern in the power set of $S$, denoted by $\\mathbf{p}_i\\subseteq S$, which is the union of the items stored in all the nodes along the path from the root to $n_i$. \nAs shown in Figure~\\ref{Fig:tct}, the SE-Tree of $S=\\{s_1, s_2, s_3, s_4\\}$ has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$.\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, where $s_{n_0}=\\emptyset$, $s_{n_1}=s_1$, $s_{n_6}=s_3$ and $s_{n_{13}}=s_4$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.\n}\n\n\\nop{\nBuilding a SE-Tree requires a pre-imposed order of the items in $S$. \nWe define such \\emph{pre-imposed order} as: \\emph{$\\forall s_i, s_j\\in S$, if $j > i$ then $s_j > s_i$}.\n\\mc{The following description of SE-tree is confusing and inaccurate. 
The mapping between nodes $i_i$ and items $s_j$ is unclear.}\n\\todo{As shown in Figure~\\ref{Fig:tct}, for $S=\\{s_1, s_2, s_3, s_4\\}$, the SE-Tree has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$. Each node $n_i$ uniquely represents a pattern $\\mathbf{p}_i$, which is the union of items in all nodes along the path from $n_0$ to $n_i$. \nTake $n_{13}$ for an example, the path from $n_0$ to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.}\n}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=86mm]{Figs\/theme_community_tree.pdf}\n\\caption{An example of SE-Tree and TC-Tree when $S=\\{s_1, s_2, s_3, s_4\\}$ and $\\mathcal{L}_{\\mathbf{p}_0}=\\mathcal{L}_{\\mathbf{p}_2}=\\mathcal{L}_{\\mathbf{p}_8}=\\mathcal{L}_{\\mathbf{p}_9}=\\mathcal{L}_{\\mathbf{p}_{11}}=\\mathcal{L}_{\\mathbf{p}_{14}}=\\mathcal{L}_{\\mathbf{p}_{15}}=\\emptyset$. The SE-Tree includes all the nodes marked in solid and dashed lines. The TC-Tree contains only the nodes in solid lines.}\n\\label{Fig:tct}\n\\end{figure}\n\nA TC-Tree is an extension of an SE-Tree.\nIn a TC-Tree, each node $n_i$ represents a pattern $\\mathbf{p}_i$, which is a subset of $S$. \nThe item stored in $n_i$ is denoted by $s_{n_i}$. \nWe also store the decomposed maximal pattern truss $\\mathcal{L}_{\\mathbf{p}_i}$ in $n_i$. \nTo save memory, we omit the nodes $n_j$ $(j\\geq 1)$ whose decomposed maximal pattern trusses are $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$. \n\nWe can build a TC-Tree in a top-down manner efficiently.\nIf $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$, we can prune the entire subtree rooted at $n_j$ immediately.\nThis is because, for node $n_j$ and its descendant $n_d$, we have $\\mathbf{p}_j\\subset \\mathbf{p}_d$. Since $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$, we can derive from Proposition~\\ref{Prop:pam} that $\\mathcal{L}_{\\mathbf{p}_d}=\\emptyset$. 
As a result, all descendants of $n_j$ can be immediately pruned.\n\nAlgorithm~\\ref{Alg:tctb} gives the details of building a TC-Tree $\\mathcal{T}$. \nLines 2-5 generate the nodes at the first layer of $\\mathcal{T}$.\nSince the theme networks induced by different items in $S$ are independent, we can compute $\\mathcal{L}_{\\mathbf{p}_i}$ in parallel. Our implementation uses multiple threads for this step.\nLines 6-12 iteratively build the rest of the nodes of $\\mathcal{T}$ in breadth-first order.\nHere, $n_f.siblings$ is the set of nodes that have the same parent as $n_f$. The children of $n_f$, denoted by $n_c$, are built in Lines 8-11.\n\\nop{The pre-imposed order of items (i.e., $n_b.s_b > n_f.s_f$) is applied in Line 9 to ensure that the TC-Tree is a subtree of SE-Tree. \\mc{What do you mean by ``sub-tree'' here?, Do you mean a sub-graph?}}\nIn Line 9, we apply Proposition~\\ref{Lem:gip} to efficiently calculate $\\mathcal{L}_{\\mathbf{p}_c}$.\nSince $\\mathbf{p}_c=\\mathbf{p}_f \\cup \\mathbf{p}_b$, we have $\\mathbf{p}_f\\subset \\mathbf{p}_c$ and $\\mathbf{p}_b\\subset \\mathbf{p}_c$. \nFrom Proposition~\\ref{Lem:gip}, we know $C^*_{\\mathbf{p}_c}(0)\\subseteq C^*_{\\mathbf{p}_f}(0) \\cap C^*_{\\mathbf{p}_b}(0)$.\nTherefore, we can find $C^*_{\\mathbf{p}_c}(0)$ within a small subgraph $C^*_{\\mathbf{p}_f}(0) \\cap C^*_{\\mathbf{p}_b}(0)$ using MPTD,\nand then get $\\mathcal{L}_{\\mathbf{p}_c}$ by decomposing $C^*_{\\mathbf{p}_c}(0)$. \n\n\nIn summary, every node of a TC-Tree stores the decomposed maximal pattern truss $\\mathcal{L}_\\mathbf{p}$ of a unique pattern $\\mathbf{p}\\subseteq S$. \nSince $\\mathcal{L}_\\mathbf{p}$ also stores the nontrivial range of $\\alpha$ in $G_\\mathbf{p}$, we can easily use the TC-Tree to obtain the range of $\\alpha$ for all theme networks in $G$. 

This range helps users set their queries.

\begin{algorithm}[t]
\caption{Build Theme Community Tree}
\label{Alg:tctb}
\KwIn{A database network $G$.}
\KwOut{The TC-Tree $\mathcal{T}$ with root node $n_0$.}
\BlankLine
\begin{algorithmic}[1]
 \STATE Initialization: $Q\leftarrow\emptyset$, $s_{n_0}\leftarrow\emptyset$, $\mathcal{L}_{\mathbf{p}_0}\leftarrow\emptyset$.
 \FOR{each item $s_i\in S$}
 	\STATE $s_{n_i}\leftarrow s_i$, $\mathbf{p}_i\leftarrow s_i$ and compute $\mathcal{L}_{\mathbf{p}_i}$.
	\STATE \textbf{if} $\mathcal{L}_{\mathbf{p}_i}\neq \emptyset$ \textbf{then} $n_0.addChild(n_i)$ and $Q.push(n_i)$.
 \ENDFOR
 \WHILE{$Q\neq\emptyset$}
 	\STATE $Q.pop(n_{f})$.
	\FOR{each node $n_b\in n_f.siblings$}
		\STATE \textbf{if} $s_{n_f} \prec s_{n_b}$ \textbf{then} $s_{n_c} \leftarrow s_{n_b}$, $\mathbf{p}_c \leftarrow \mathbf{p}_f \cup \mathbf{p}_b$, and compute $\mathcal{L}_{\mathbf{p}_c}$.
		\STATE \textbf{if} $\mathcal{L}_{\mathbf{p}_c}\neq \emptyset$ \textbf{then} $n_f.addChild(n_c)$ and $Q.push(n_c)$.
	\ENDFOR
 \ENDWHILE
\BlankLine
\RETURN The TC-Tree $\mathcal{T}$ with root node $n_0$.
\end{algorithmic}
\end{algorithm}

\begin{algorithm}[t]
\caption{Query Theme Community Tree}
\label{Alg:qtct}
\KwIn{A TC-Tree $\mathcal{T}$, a query pattern $\mathbf{q}$ and a threshold $\alpha_\mathbf{q}$.}
\KwOut{The set of maximal pattern trusses $\mathbb{C}_\mathbf{q}(\alpha_\mathbf{q})$.}
\BlankLine
\begin{algorithmic}[1]
 \STATE Initialization: $Q\leftarrow n_0$, $\mathbb{C}_\mathbf{q}(\alpha_\mathbf{q})\leftarrow\emptyset$.
 \WHILE{$Q\neq\emptyset$}
 	\STATE $Q.pop(n_{f})$.
	\FOR{each node $n_c\in n_f.children$ such that $s_{n_c}\in \mathbf{q}$}
			\STATE Get $C^*_{\mathbf{p}_c}(\alpha_\mathbf{q})$ from $\mathcal{L}_{\mathbf{p}_c}$ by Equation~\ref{Eqn:get_mpt}.
			\STATE \textbf{if} 
$C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}(\\alpha_\\mathbf{q})\\leftarrow \\mathbb{C}(\\alpha_\\mathbf{q}) \\cup C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})$, and $Q.push(n_c)$.\n\t\n\t\\ENDFOR\n \\ENDWHILE\n\\BlankLine\n\\RETURN The set of maximal pattern trusses $\\mathbb{C}(\\alpha_\\mathbf{q})$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Querying Theme Community Tree}\nIn this subsection, we introduce how to query TC-Tree by a pattern $\\mathbf{q}$ and a cohesion threshold $\\alpha_\\mathbf{q}$. \nThe \\emph{answer} to query $(\\mathbf{q}, \\alpha_\\mathbf{q})$ is the set of maximal pattern trusses with respect to $\\alpha_\\mathbf{q}$ for any sub-pattern of $\\mathbf{q}$, that is, $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})=\\{C^*_\\mathbf{p}(\\alpha_\\mathbf{q}) \\mid C^*_\\mathbf{p}(\\alpha_\\mathbf{q})\\neq \\emptyset, \\mathbf{p}\\subseteq \\mathbf{q}\\}$.\nWith $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})$, one can easily extract theme communities by finding the maximal connected subgraphs in all the retrieved maximal pattern trusses.\n\nAs shown in Algorithm~\\ref{Alg:qtct}, the querying method simply traverses the TC-Tree in breadth first order and collects maximal pattern trusses that satisfy the conditions of the answer. \n\nThe efficiency of Algorithm~\\ref{Alg:qtct} comes from three factors.\nFirst, in Line 4, if $s_{n_c}\\not\\in \\mathbf{q}$, then $\\mathbf{p}_c\\not\\subset \\mathbf{q}$ and the patterns of all descendants of $n_c$ are not sub-patterns of $\\mathbf{q}$. 
Therefore, we can prune the entire subtree rooted at $n_c$.
Second, in Line 6, if $C^*_{\mathbf{p}_c}(\alpha_\mathbf{q})=\emptyset$, we can prune the entire subtree rooted at $n_c$, because, according to Proposition~\ref{Prop:pam}, no descendant of $n_c$ can induce a maximal pattern truss with respect to $\alpha_\mathbf{q}$.
Last, in Line 5, getting $C^*_{\mathbf{p}_c}(\alpha_\mathbf{q})$ from $\mathcal{L}_{\mathbf{p}_c}$ is efficient using Equation~\ref{Eqn:get_mpt}.

In summary, the TC-Tree enables fast query answering.
As demonstrated in Section~\ref{Sec:eotci}, a TC-Tree is easy to build and efficient to query, and scales well to index a large number of maximal pattern trusses using a practical amount of memory.

\section{Experiments}
\label{sec:exp}

\begin{table}[t]
\caption{Statistics of the database networks. 
\#Items (total) is the total number of items stored in all vertex databases, and \#Items (unique) is the number of unique items in $S$.}
\centering
\label{Table:dss}
\begin{tabular}{| p{23mm} | p{10.5mm} | p{10.5mm} | p{12mm} | p{10.5mm}|}
\hline
 & \makecell[c]{BK} & \makecell[c]{GW} & \makecell[c]{AMINER} & \makecell[c]{SYN} \\ \hline
\#Vertices & $5.1{\times}10^4$ & $1.1{\times}10^5$ & $1.1{\times}10^6$ & $1.0{\times}10^6$ \\ \hline
\#Edges & $2.1{\times}10^5$ & $9.5{\times}10^5$ & $2.6{\times}10^6$ & $1.0{\times}10^7$ \\ \hline
\#Transactions & $1.2{\times}10^6$ & $2.0{\times}10^6$ & $3.1{\times}10^6$ & $6.1{\times}10^6$ \\ \hline
\#Items (total) & $1.7{\times}10^6$ & $3.5{\times}10^6$ & $9.2{\times}10^6$ & $1.3{\times}10^8$ \\ \hline
\#Items (unique) & $1.8{\times}10^3$ & $5.7{\times}10^3$ & $1.2{\times}10^4$ & $1.0{\times}10^4$ \\ \hline
\end{tabular}
\end{table}

In this section, we evaluate the performance of Theme Community Scanner (TCS), Theme Community Finder Apriori (TCFA), Theme Community Finder Intersection (TCFI) and Theme Community Tree (TC-Tree).
TCS, TCFA and TCFI are implemented in Java. To efficiently process large database networks, we implement TC-Tree in C++ and parallelize Lines 2-5 of Algorithm~\ref{Alg:tctb} with 4 threads using OpenMP.
All experiments are performed on a PC running Windows 7 with a Core-i7-3370 CPU (3.40 GHz), 32GB of memory and a 5400 rpm hard drive.

Since TC-Tree is an indexing method and is not directly comparable with the other theme community detection methods, we compare the performance of TCS, TCFA and TCFI in Sections~\ref{Sec:eop} and~\ref{Sec:eotcf}, and evaluate the performance of TC-Tree in Section~\ref{Sec:eotci}. Last, we present some interesting case studies in Section~\ref{Sec:cs}.

The performance of TCS, TCFA and TCFI is evaluated on the following aspects. 
First, ``Time Cost'' measures the total runtime of each method.
Second, ``Number of Patterns (NP)'', ``Number of Vertices (NV)'' and ``Number of Edges (NE)'' are the total numbers of patterns, vertices and edges in all detected maximal pattern trusses, respectively.
When counting NV, a vertex is counted $k$ times if it is contained in $k$ different maximal pattern trusses; likewise, when counting NE, an edge is counted $k$ times if it is contained in $k$ different maximal pattern trusses.
NP is equal to the number of maximal pattern trusses, since each maximal pattern truss uniquely corresponds to a pattern.
The evaluation metrics of TC-Tree are discussed in Section~\ref{Sec:eotci}.

The following datasets are used.

\textbf{Brightkite (BK)}
The Brightkite dataset is a public check-in dataset produced by the location-based social networking website \url{BrightKite.com}~\cite{BKGW_data}.
It includes a friendship network of 58,228 users and 4,491,143 user check-ins that record the check-in time and location.
We construct a database network from this dataset by taking the user friendship network as the network of the database network.
To create the vertex database of a user, we treat each check-in location as an item and cut the user's check-in history into periods of 2 days. The set of check-in locations within a period is transformed into a transaction.
A theme community in this database network represents a group of friends who frequently visit the same set of places.

\textbf{Gowalla (GW)}
The Gowalla dataset is a public dataset produced by the location-based social networking website \url{Gowalla.com}~\cite{BKGW_data}.
It includes a friendship network of 196,591 users and 6,442,890 user check-ins that record the check-in time and location.
We transform this dataset into a database network in the same way as BK. 
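The check-in-to-transaction conversion used for BK and GW can be sketched as follows. This is a minimal sketch: the record format `(user, time, location)` and the bucketing helper are illustrative, not part of the released code.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def checkins_to_vertex_databases(checkins, period_days=2):
    """Turn raw check-ins into per-user transaction databases.

    `checkins` is an iterable of (user, time, location) triples, where
    `time` is a datetime. Each `period_days`-day window of a user's
    history becomes one transaction: the set of locations visited in
    that window.
    """
    per_user = defaultdict(list)
    for user, t, loc in checkins:
        per_user[user].append((t, loc))
    databases = {}
    for user, events in per_user.items():
        events.sort()                              # chronological order
        start = events[0][0]                       # first check-in anchors windows
        buckets = defaultdict(set)
        for t, loc in events:
            idx = (t - start) // timedelta(days=period_days)
            buckets[idx].add(loc)                  # locations = items of one period
        databases[user] = [sorted(tx) for _, tx in sorted(buckets.items())]
    return databases
```

Each resulting per-user list of transactions plays the role of one vertex database $\mathbf{d}_i$.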
\n\n\\textbf{AMINER}\nThe AMINER dataset is built from the Citation network v2 (CNV2) dataset~\\cite{Aminer_data}.\nCNV2 contains 1,397,240 papers. We transform it into a database network in the following two steps. First, we treat each author as a vertex and build an edge between a pair of authors who co-author at least one paper. \nSecond, to build the vertex database for an author, we treat each keyword in the abstract of a paper as an item, and all the keywords in the abstract of a paper is turned into a transaction. An author vertex is associated with a transaction database of all papers the author publishes.\nIn this database network, a theme community represents a group of authors who collaborate closely and share the same research interest that is described by the same set of keywords.\n\n\n\\newcommand{43mm}{43mm}\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Time cost (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_BK.pdf}}\n\\subfigure[\\#Patterns (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_BK.pdf}}\n\\subfigure[\\#Vertices (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_BK.pdf}}\n\\subfigure[\\#Edges (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_BK.pdf}}\n\\subfigure[Time cost (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_GW.pdf}}\n\\subfigure[\\#Patterns (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_GW.pdf}}\n\\subfigure[\\#Vertices (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_GW.pdf}}\n\\subfigure[\\#Edges (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_GW.pdf}}\n\\subfigure[Time cost (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_DBLP.pdf}}\n\\subfigure[\\#Patterns (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_DBLP.pdf}}\n\\subfigure[\\#Vertices (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_DBLP.pdf}}\n\\subfigure[\\#Edges 
(AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_DBLP.pdf}}\n\\caption{The effects of user input $\\alpha$ and threshold $\\epsilon$ on BK, GW and AMINER. In (f)-(h) and (j)-(l), the performance of NP, NE and NV are all zero when $\\alpha=2.0$, however, we could not draw zero in the figure since the y-axes are in log scale.}\n\\label{Fig:effect_of_parameters}\n\\end{figure*}\n\n\n\\textbf{Synthetic (SYN) dataset.}\nThe synthetic dataset is built to evaluate the scalability of TC-Tree. We first generate a network with 1 million vertices using the Java Universal Network\/Graph Framework (JUNG)~\\cite{JUNG}.\nThen, in order to make the vertex databases of neighbour vertices share some common patterns, we generate the transaction databases on each vertex in three steps.\nFirst, we randomly select 1000 seed vertices. \nThen, to build the transaction database of each seed vertex, we randomly sample multiple itemsets from $S$ and store each sampled itemset as a transaction in the transaction database.\nLast, to build the transaction database of each non-seed vertex, we first sample multiple transactions from the transaction databases of the neighbor vertices, then randomly change 10\\% of the items in each sampled transaction to different items randomly picked in $S$.\nIn this way, we iteratively generate the transaction databases of all vertices by a breadth first search of the network. 

For each vertex $v_i$ with degree $d(v_i)$, the number of transactions in the vertex database $\mathbf{d}_i$ is set to $\lceil{e^{ 0.1\times d(v_i)}}\rceil$, and the length of each transaction in $\mathbf{d}_i$ is set to $\lceil{e^{ 0.13\times d(v_i)}}\rceil$.

The statistics of all datasets are given in Table~\ref{Table:dss}.

\begin{figure*}[t]
\centering
\subfigure[Time cost (BK)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_time_BK.pdf}}
\subfigure[\#Patterns (BK)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_npat_BK.pdf}}
\subfigure[\#Vertices (BK)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_nv_BK.pdf}}
\subfigure[\#Edges (BK)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_ne_BK.pdf}}
\subfigure[Time cost (GW)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_time_GW.pdf}}
\subfigure[\#Patterns (GW)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_npat_GW.pdf}}
\subfigure[\#Vertices (GW)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_nv_GW.pdf}}
\subfigure[\#Edges (GW)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_ne_GW.pdf}}
\subfigure[Time cost (AMINER)]{\includegraphics[width=43mm]{Figs/EXP2_EfficiencyVSEdgeNum_time_DBLP.pdf}}
\subfigure[\#Patterns 
(AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_npat_DBLP.pdf}}\n\\subfigure[\\#Vertices (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_nv_DBLP.pdf}}\n\\subfigure[\\#Edges (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_ne_DBLP.pdf}}\n\\caption{The Time Cost, NP, NV\/NP and NE\/NP of TCS, TCFA and TCFI on different sizes of real world database networks. NV\/NP and NE\/NP are the average number of vertices and edges in all detected maximal pattern trusses, respectively.}\n\\label{Fig:scalability_results}\n\\end{figure*}\n\n\\subsection{Effect of Parameters}\n\\label{Sec:eop}\nIn this subsection, we analyze the effects of the cohesion threshold $\\alpha$ and the frequency threshold $\\epsilon$ for TCS in the real world database networks.\nThe settings of parameters are $\\alpha \\in \\{0.0, 0.1, 0.2, 0.3, 0.5, 1.0, 1.5, 2.0\\}$ and $\\epsilon\\in\\{0.1, 0.2, 0.3\\}$. \nWe do not evaluate the performance of TCS for $\\epsilon=0.0$ and $\\epsilon > 0.3$, because TCS is too slow to stop in reasonable time when $\\epsilon=0.0$ and it loses too much accuracy when $\\epsilon>0.3$.\nSince TCS with $\\epsilon\\in\\{0.1, 0.2, 0.3\\}$ still run too slow on the original database networks of BK, GW and AMINER, we use small database networks that are sampled from the original database networks by performing a breadth first search from a randomly picked seed vertex.\nFrom BK and GW, we obtain sampled database networks with 10,000 edges. \nFor AMINER, we sample a database network of 5,000 edges.\n\nFigures~\\ref{Fig:effect_of_parameters}(a), \\ref{Fig:effect_of_parameters}(e) and~\\ref{Fig:effect_of_parameters}(i) show the time cost of all methods on BK, GW and AMINER, respectively.\nThe cost of TCS does not change when $\\alpha$ increases. 
This is because the cost of TCS is largely dominated by the size of the set of candidate patterns $\mathcal{P}$ (see Section~\ref{sec:tcs}), which is not affected by $\alpha$.
However, when $\epsilon$ increases, the size of $\mathcal{P}$ shrinks, so the cost of TCS decreases.
When $\alpha$ increases, the costs of TCFA and TCFI both decrease. This is because, for both TCFA and TCFI, a larger $\alpha$ reduces the size of the set of qualified patterns $\mathcal{P}^{k-1}$, and thus reduces the number of generated candidate patterns in $\mathcal{M}^k$. This improves the effectiveness of the early pruning of TCFA and TCFI.
The cost of TCFA is sensitive to $\alpha$.
This is because the cost of TCFA is largely dominated by the number of candidate patterns in $\mathcal{M}^k$, which is generated by taking length-$k$ unions of the patterns in $\mathcal{P}^{k-1}$.
When $\alpha$ decreases, the size of $\mathcal{P}^{k-1}$ increases rapidly, and the number of candidate patterns in $\mathcal{M}^k$ becomes very large.
In contrast, the cost of TCFI is stable with respect to $\alpha$ and is much lower than the cost of TCFA when $\alpha$ is small.
The reason is that most maximal pattern trusses are small local subgraphs that do not intersect with each other, so many unqualified patterns in $\mathcal{M}^k$ are easily pruned by TCFI using the graph intersection property in Proposition~\ref{Lem:gip}.

According to the experimental results on the small 5,000-edge database network of AMINER, when $\alpha=0$, TCFA calls MPTD 622,852 times and TCFI calls MPTD 152,396 times. This indicates that TCFI effectively prunes 75.5\% of the candidate patterns. However, in Figure~\ref{Fig:effect_of_parameters}(i), TCFI is nearly 3 orders of magnitude faster than TCFA when $\alpha=0$. 
This is because, for each run of MPTD, TCFA computes the maximal pattern truss in the large theme network induced from the entire database network, while TCFI computes the maximal pattern truss within the small theme network induced from the intersection of two maximal pattern trusses.

In Figures~\ref{Fig:effect_of_parameters}(a),~\ref{Fig:effect_of_parameters}(e) and \ref{Fig:effect_of_parameters}(i), when $\alpha\geq 1$, the cost of TCFA is comparable with that of TCFI on all database networks.
This is because, when $\alpha\geq 1$, GW and AMINER contain only one maximal pattern truss each, and BK contains no more than three maximal pattern trusses that intersect with one another.
In this case, TCFA does not generate many unqualified candidate patterns, and TCFI does not prune any candidate patterns by the graph intersection property.

Figures~\ref{Fig:effect_of_parameters}(b)-(d), ~\ref{Fig:effect_of_parameters}(f)-(h) and ~\ref{Fig:effect_of_parameters}(j)-(l) show the performance in NP, NV and NE of all methods on BK, GW and AMINER, respectively.
TCFA and TCFI produce exactly the same results for all values of $\alpha$ on all database networks.
Whether TCS produces the exact results highly depends on the frequency threshold $\epsilon$, the cohesion threshold $\alpha$ and the database network.
For example, in Figures~\ref{Fig:effect_of_parameters}(b)-(d) and Figures~\ref{Fig:effect_of_parameters}(f)-(h), TCS ($\epsilon=0.1$) cannot produce the same results as TCFA and TCFI unless $\alpha\geq 0.2$.
For TCS ($\epsilon=0.2$) and TCS ($\epsilon=0.3$), the value of $\alpha$ required to produce the same results as TCFA and TCFI varies across database networks.
The reason is that vertices with small pattern frequencies can still form a good maximal pattern truss with large edge cohesion if they form a densely connected subgraph. 

Such maximal pattern trusses may be lost if the patterns with low frequencies are dropped by the pre-filtering step of TCS.
This clearly shows that TCS trades accuracy for efficiency.

In summary, TCFI produces the best detection results of maximal pattern trusses and achieves the best efficiency for all values of the user input $\alpha$ on all database networks.

\begin{figure*}[t]
\centering
\subfigure[Query by alpha (BK)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qba_BK.pdf}}
\subfigure[Query by alpha (GW)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qba_GW.pdf}}
\subfigure[Query by alpha (AMINER)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qba_DBLP.pdf}}
\subfigure[Query by alpha (SYN)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qba_SYN.pdf}}
\subfigure[Query by pattern (BK)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qbp_BK.pdf}}
\subfigure[Query by pattern (GW)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qbp_GW.pdf}}
\subfigure[Query by pattern (AMINER)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qbp_DBLP.pdf}}
\subfigure[Query by pattern (SYN)]{\includegraphics[width=43mm]{Figs/EXP3_IndexingPerformances_qbp_SYN.pdf}}
\caption{Query performance of TC-Tree. (a)-(d) show the QBA performance. 
(e)-(h) show the QBP performance.}
\label{Fig:query_performances}
\end{figure*}

\subsection{Efficiency of Theme Community Finding}
\label{Sec:eotcf}
In this subsection, we analyze how the runtime of all methods changes as the size of the database network increases.
For each database network, we generate a series of database networks of different sizes by sampling the original database network with the breadth-first search sampling method introduced in Section~\ref{Sec:eop}.
Since TCS and TCFA run too slowly on large database networks, we stop reporting their performance when they take more than one day.
The performance of TCFI is evaluated on all sizes of database networks, including the original ones. To evaluate the worst-case performance of all methods, we set $\alpha=0$.

Figures~\ref{Fig:scalability_results}(a),~\ref{Fig:scalability_results}(e) and \ref{Fig:scalability_results}(i) show the time cost of all methods on BK, GW and AMINER, respectively.
The cost of all methods increases when the number of sampled edges increases. This is because increasing the size of the database network increases the number of maximal pattern trusses. 
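The breadth-first search edge sampling used to produce these differently sized networks can be sketched as follows; the adjacency-map representation is an assumption for illustration, not our actual implementation.

```python
from collections import deque

def bfs_sample_edges(adj, seed, max_edges):
    """Sample a connected subnetwork by breadth-first search from `seed`,
    keeping undirected edges in discovery order until `max_edges` have
    been collected. `adj` maps each vertex to a list of its neighbours."""
    seen_edges, order = set(), []
    visited, queue = {seed}, deque([seed])
    while queue and len(order) < max_edges:
        u = queue.popleft()
        for v in adj[u]:
            e = (min(u, v), max(u, v))          # canonical undirected edge
            if e not in seen_edges:
                seen_edges.add(e)
                order.append(e)
                if len(order) == max_edges:
                    break
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order
```

Truncating `order` at different values of `max_edges` yields the nested family of sampled database networks used in this subsection.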

The cost of TCFI grows much more slowly than that of TCS and TCFA.
The reason is that TCS generates candidate patterns by enumerating the patterns of all vertex databases, while TCFA generates candidate patterns by pairwise unions of the patterns of the detected maximal pattern trusses.
Both generate a large number of unqualified candidate patterns.
TCFI generates candidate patterns only by pairwise unions of the patterns of two intersecting maximal pattern trusses, and runs MPTD on the small intersection of the two trusses.
This effectively reduces the number of candidate patterns and significantly reduces the time cost.
As a result, TCFI achieves the best scalability and is more than two orders of magnitude faster than TCS and TCFA on large database networks.

Figures~\ref{Fig:scalability_results}(b),~\ref{Fig:scalability_results}(f) and~\ref{Fig:scalability_results}(j) show the performance in NP of all methods. When the number of sampled edges increases, the NPs of all methods increase. 
This is because increasing the size of the database network increases the number of maximal pattern trusses, which is exactly NP.\nTCFI and TCFA produce exactly the same results. However, due to the accuracy loss caused by pre-filtering patterns with low frequencies, TCS cannot produce the same results as TCFI and TCFA.\n\nIn Figures~\\ref{Fig:scalability_results}(c)-(d),~\\ref{Fig:scalability_results}(g)-(h) and~\\ref{Fig:scalability_results}(k)-(l), we report the average numbers of vertices and edges per detected maximal pattern truss, that is, NV\/NP and NE\/NP, respectively. \nThe trends of the NV\/NP and NE\/NP curves differ across database networks. \nThis is because each database network is sampled by conducting breadth-first search from a randomly selected seed vertex, \nand the distributions of maximal pattern trusses differ across database networks.\nIf smaller maximal pattern trusses tend to be sampled earlier than larger ones, NV\/NP and NE\/NP increase when the number of sampled edges increases.\nIn contrast, if smaller maximal pattern trusses tend to be sampled later than larger ones, NV\/NP and NE\/NP decrease.\nWe can also see that the average numbers of vertices and edges in detected maximal pattern trusses are always small. \nThis demonstrates that most maximal pattern trusses are small local subgraphs in a database network. \nSuch small subgraphs in different local regions of a large sparse database network generally do not intersect with each other. Therefore, using the graph intersection property, TCFI can efficiently prune a large number of unqualified patterns and achieve much better scalability.\n\n\\subsection{Efficiency of Theme Community Indexing}\n\\label{Sec:eotci}\nNow we analyze the indexing scalability and query efficiency of TC-Tree in both the real and synthetic database networks.\n\nThe indexing performance of TC-Tree in all database networks is shown in Table~\\ref{Table:iptct}.
``Indexing Time'' is the cost to build a TC-Tree; ``Memory'' is the peak memory usage when building a TC-Tree; ``\\#Nodes'' is the number of nodes in a TC-Tree, which is also the number of maximal pattern trusses in the database network, since every TC-Tree node stores a unique maximal pattern truss.\n\n\\begin{table}[t]\n\\caption{Indexing performance of TC-Tree.}\n\\centering\n\\label{Table:iptct}\n\\begin{tabular}{| c | c | c | c | }\n\\hline\n \t & Indexing Time & Memory & \\#Nodes\t \t\t\\\\ \\hline\nBK & 179 \\;\\;\\;\\, seconds & 0.3 \\;\\,GB & 18,581 \t\t \\\\ \\hline\nGW & 1,594 \\;\\;seconds & 2.6 \\;\\,GB & 11,750,761 \t \\\\ \\hline\nAMINER & 41,068 seconds & 28.3 GB & 152,067,019 \t \\\\ \\hline\nSYN & 35,836 seconds & 26.6 GB & 132,985,944 \t \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nBuilding a TC-Tree is efficient in both Indexing Time and Memory.\nFor the large database networks of AMINER and SYN, TC-Tree scales well, indexing more than 130 million nodes each.\nTC-Tree costs more time on AMINER than on SYN, because the database network of AMINER contains more unique items, which produces a larger set of candidate patterns.\n\nWe evaluate the performance of the TC-Tree querying method (Algorithm~\\ref{Alg:qtct}) under two settings: \n1) Query by Alpha (QBA), which queries a TC-Tree with a threshold $\\alpha_\\mathbf{q}$ by setting $\\mathbf{q}=S$; and\n2) Query by Pattern (QBP), which queries a TC-Tree with a pattern $\\mathbf{q}$ by setting $\\alpha_\\mathbf{q}=0$. The results are shown in Figure~\\ref{Fig:query_performances}, where ``Query Time'' is the cost of querying a TC-Tree and ``Retrieved Nodes (RN)'' is the number of nodes retrieved from the TC-Tree.\n\nTo evaluate how the QBA performance changes when $\\alpha_\\mathbf{q}$ increases, we use $\\alpha_\\mathbf{q}\\in\\{0.0, 0.1, 0.2, \\cdots, \\alpha_\\mathbf{q}^*\\}$, a finite sequence that starts from 0.0 and increases by 0.1 per step until Algorithm~\\ref{Alg:qtct} returns $\\emptyset$.
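To make the two settings concrete, the sketch below shows one plausible minimal reading of the query procedure; the node layout, names, and the cohesion-based subtree pruning are our illustrative assumptions, not the actual implementation of Algorithm~\ref{Alg:qtct}.

```python
# Hypothetical sketch of querying a TC-Tree. Each node stores the item that
# extends its parent's pattern and the cohesion of its maximal pattern truss.
class TCTreeNode:
    def __init__(self, item=None, cohesion=0.0):
        self.item = item          # item extending the parent's pattern
        self.cohesion = cohesion  # cohesion of the stored maximal pattern truss
        self.children = {}        # item -> TCTreeNode

def query(node, q, alpha_q):
    """Collect nodes whose pattern uses only items in q and whose stored
    truss has cohesion above alpha_q (subtrees are pruned by cohesion,
    which the graph anti-monotonicity property would justify)."""
    found = []
    for item, child in node.children.items():
        if item in q and child.cohesion > alpha_q:
            found.append(child)
            found.extend(query(child, q, alpha_q))
    return found

# QBA: q = S (all items), vary alpha_q.  QBP: fix pattern q, alpha_q = 0.
root = TCTreeNode()
a = root.children["a"] = TCTreeNode("a", 0.5)
a.children["b"] = TCTreeNode("b", 0.3)
root.children["c"] = TCTreeNode("c", 0.2)
qba = query(root, {"a", "b", "c"}, 0.25)  # one step of a QBA threshold sweep
qbp = query(root, {"a"}, 0.0)             # QBP restricted to pattern {a}
```

Under QBA the item test is vacuous and only the threshold filters nodes; under QBP the threshold is vacuous and only the pattern restricts the traversal, mirroring the two settings above.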
$\\alpha_\\mathbf{q}^*$ is the largest $\\alpha_\\mathbf{q}$ for which Algorithm~\\ref{Alg:qtct} does not return $\\emptyset$. \nFor each $\\alpha_\\mathbf{q}$, the Query Time is the average of 1,000 runs.\n\nIn Figures~\\ref{Fig:query_performances}(a)-(d), when $\\alpha_\\mathbf{q}$ increases, both RN and Query Time decrease. \nThis is because a larger $\\alpha_\\mathbf{q}$ reduces the number of maximal pattern trusses, which decreases both RN and Query Time. Interestingly, in Figure~\\ref{Fig:query_performances}(c), we have $\\alpha_\\mathbf{q}^*=106.9$ in the database network of AMINER. This is because the CNV2 dataset~\\cite{Aminer_data} contains a paper about the ``IBM Blue Gene\/L super computer'' that is co-authored by 115 authors.\n\nFigures~\\ref{Fig:query_performances}(c)-(d) show the excellent QBA performance of the proposed querying method (Algorithm~\\ref{Alg:qtct}) on the large database networks of AMINER and SYN. The proposed querying method can retrieve 1 million maximal pattern trusses within 1 second.\n\n\n\nFigures~\\ref{Fig:query_performances}(e)-(h) show how the performance of QBP changes when the query pattern length increases. \nTo generate query patterns with different lengths, we randomly sample 1,000 nodes from each layer of the TC-Tree and use the patterns of the sampled nodes as query patterns. \nWe do not set the query pattern length larger than the maximum depth of the TC-Tree, since such patterns do not correspond to any maximal pattern trusses in the database network. Each reported Query Time is the average of 1,000 runs using different query patterns of the same length.\nAs shown in Figures~\\ref{Fig:query_performances}(e)-(h), both RN and Query Time increase when the query pattern length increases.
\nThis is because querying the TC-Tree with a longer query pattern visits more TC-Tree nodes and retrieves more maximal pattern trusses.\n\nIn summary, TC-Tree is scalable in both time and memory when indexing large database networks.\n\n\\subsection{A Case Study}\n\\label{Sec:cs}\nIn this subsection, we present some interesting theme communities discovered in the database network of AMINER using the proposed TC-Tree method.\nEach detected theme community represents a group of co-working scholars who share the same research interest characterized by a set of keywords.\nWe present 6 interesting theme communities in Figure~\\ref{Fig:case_study} and show the corresponding sets of keywords in Table~\\ref{Table:listofpat}.\n\nTake Figures~\\ref{Fig:case_study}(a)-(b) as an example: the research interest of the theme community in Figure~\\ref{Fig:case_study}(a) is ``data mining'' and ``sequential pattern''. \nIf we narrow down the research interest of this theme community with an additional keyword ``intrusion detection'', the theme community in Figure~\\ref{Fig:case_study}(a) reduces to the theme community in Figure~\\ref{Fig:case_study}(b).
This result demonstrates that the size of a theme community shrinks when the length of the pattern increases, which is consistent with Theorem~\\ref{Prop:gam}.\n\nThe results in Figures~\\ref{Fig:case_study}(a)-(d) show that four researchers, Philip S.\\ Yu, Jiawei Han, Jian Pei and Ke Wang, actively coauthor with different groups of researchers in different sub-disciplines of data mining, such as sequential pattern mining, intrusion detection, frequent pattern mining and privacy protection.\nThese results demonstrate that the proposed TC-Tree method can discover arbitrarily overlapping theme communities with different themes.\n\n\n\\begin{table}[t]\n\\caption{The sets of keywords for theme communities.}\n\\centering\n\\label{Table:listofpat}\n\\begin{tabular}{|p{2mm} p{78mm}|}\n\\hline\n$p_1:$ & data mining, sequential pattern \\\\ \\hline\n$p_2:$ & data mining, sequential pattern, intrusion detection \\\\ \\hline\n$p_3:$ & data mining, search space, complete set, pattern mining \\\\ \\hline\n$p_4:$ & data mining, sensitive information, privacy protection \\\\ \\hline\n$p_5:$ & principal component analysis, linear discriminant analysis, dimensionality reduction, component analysis \\\\ \\hline\n$p_6:$ & image retrieval, image database, relevance feedback, semantic gap\\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=88mm]{Figs\/EXP4_CaseStudy.pdf}\n\\caption{Interesting theme communities found on the database network of AMINER.}\n\\label{Fig:case_study}\n\\end{figure}\n\n\nInterestingly, TC-Tree also discovers interdisciplinary research communities that are formed by researchers from different research areas. \nAs shown in Figures~\\ref{Fig:case_study}(e)-(f), the research activities of Jiawei Han and Jian Pei are not limited to data mining.\nFigure~\\ref{Fig:case_study}(e) indicates that Jiawei Han collaborated with some researchers in linear discriminant analysis.
\nFigure~\\ref{Fig:case_study}(f) shows that Jian Pei collaborated with some researchers in image retrieval. \nMore interestingly, both Jiawei Han and Jian Pei collaborated with Jun Zhao, Xiaofei He, etc.\nThe two theme communities in Figures~\\ref{Fig:case_study}(e)-(f) overlap heavily in vertices, but differ in themes.\n\nIn summary, the proposed theme community finding method allows arbitrary overlap between theme communities with different themes, and it is able to efficiently and accurately discover meaningful theme communities from large database networks.\n\n\n\n\\section{Conclusions and Future Work}\n\\label{sec:con}\n\nIn this paper, we tackle the novel problem of finding theme communities from database networks. \nWe first introduce the novel concept of database network, which is a natural abstraction of many real world networks.\nThen, we propose TCFI and TC-Tree, which efficiently discover and index millions of theme communities in large database networks.\nAs demonstrated by extensive experiments on both synthetic and real world database networks, TCFI and TC-Tree are highly efficient and scalable.\nAs future work, we will extend TCFI and TC-Tree to find theme communities in \\emph{edge database networks}, where each edge is associated with a transaction database that describes complex relationships between vertices.\n\n\\section{Introduction}\n\\label{sec:intro}\n\nRecently, Facebook developed the ``shop section'' that encourages users to buy and sell products on their home pages. \nAmazon launched the ``social media influencer program'' that allows influential users to promote products to their followers.\nThese and many other emerging business initiatives integrate social networks and e-commerce, introducing \\emph{social e-commerce networks}, which contain rich information about the social interactions and e-commerce transactions and activities of users. Essentially, in such a network, every vertex is associated with a transaction database, which records the user's activities.
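Such a network of per-vertex transaction databases can be sketched as a minimal data structure; the class and method names below are illustrative assumptions, not an actual implementation.

```python
from collections import defaultdict

# Minimal illustrative model of a database network: an undirected graph in
# which every vertex carries its own transaction database (a vertex database).
class DatabaseNetwork:
    def __init__(self):
        self.adj = defaultdict(set)         # vertex -> neighboring vertices
        self.vertex_db = defaultdict(list)  # vertex -> list of transactions

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def add_transaction(self, v, items):
        # One transaction, e.g. the items bought in a single purchase.
        self.vertex_db[v].append(frozenset(items))

g = DatabaseNetwork()
g.add_edge("alice", "bob")
g.add_transaction("alice", {"beer", "diaper"})
g.add_transaction("alice", {"milk"})
g.add_transaction("bob", {"beer", "diaper"})
```

A pattern's frequency at a vertex is then the fraction of that vertex's transactions containing the pattern, which is the per-vertex quantity the later definitions build on.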
In this paper, we call such networks \\emph{database networks}.\n\nCan we find interesting patterns in social e-commerce networks and gain valuable business insights? There are two critical and complementary aspects of information: the network structures and the transactions associated with vertices. Consequently, it is interesting to find overlapping social groups (network structures) such that people in the same group share the same dominant e-commerce activity patterns.\nFor example, we may find a group of people who frequently buy diapers with beer together.\nFinding social groups of people who share the same dominant e-commerce activity patterns provides valuable knowledge about the strong buying habits of social communities, and is very useful in personalized advertising and business marketing.\nWe call this data mining task \\emph{finding theme communities from database networks}.\n\nFinding theme communities is also meaningful in other database networks beyond social e-commerce networks. For example, as reported in our empirical study, we can model a location-based social network and its associated check-in information as a database network, where each user is a vertex and the set of locations that a user checks in within a period (e.g., a day) forms a transaction. A theme community in this database network represents a group of friends who frequently visit the same set of locations. Moreover, in a co-author network, authors are vertices and two authors are linked if they have co-authored a paper. The network can be enhanced by associating each author with a transaction database where each transaction is the set of keywords in an article published by the author. Then, a theme community in this database network is a group of collaborating authors who share the same research interest.\n\nThere are established community detection methods for \\emph{vertex attributed networks} and \\emph{simple networks} without vertex attributes, as reviewed in Section~\\ref{Sec:rw}. Can we adapt existing methods to tackle the problem of finding theme communities? Unfortunately, the answer is no, due to the following challenges.\n\nFirst, the vertex databases in a database network create a huge challenge for the existing methods that work well in vertex attributed networks.
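A toy example with hypothetical data makes the challenge concrete: two vertex databases whose transactions union to the same item set can carry very different pattern frequencies, so collapsing a vertex database to a single set of items loses exactly the information theme community finding needs.

```python
# Hypothetical toy data: two vertex databases with identical item unions
# but sharply different pattern frequencies.
def pattern_frequency(vertex_db, pattern):
    """Fraction of transactions in the vertex database containing the pattern."""
    return sum(pattern <= t for t in vertex_db) / len(vertex_db)

db1 = [{"beer", "diaper"}, {"beer", "diaper"}, {"milk"}]
db2 = [{"beer"}, {"diaper"}, {"beer", "diaper", "milk"}]

# Both vertex databases flatten to the same attribute set ...
assert set().union(*db1) == set().union(*db2) == {"beer", "diaper", "milk"}

# ... but the pattern {beer, diaper} occurs with very different frequencies.
f1 = pattern_frequency(db1, {"beer", "diaper"})  # 2/3
f2 = pattern_frequency(db2, {"beer", "diaper"})  # 1/3
```

Any method that only sees the flattened attribute set must treat both vertices identically, even though the pattern is dominant at one vertex and incidental at the other.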
\nTo the best of our knowledge, existing methods for vertex attributed networks only consider the case where each vertex is associated with a single set of items. Those methods cannot distinguish the different frequencies of patterns in millions of vertex databases. Besides, we cannot simply transform each vertex database into a single set of items by taking the union of all transactions in the vertex database, because this discards the valuable information about item co-occurrence and pattern frequency.\n\nSecond, every vertex database in a database network contains an exponential number of patterns.\nSince the community detection methods that work on simple networks only detect the theme communities of one pattern at a time, we would have to perform community detection for each of the exponential number of patterns, which is computationally intractable.\n\nLast but not least, a large database network usually contains a huge number of arbitrarily overlapping theme communities. Efficiently detecting overlapping theme communities and indexing them for fast query answering are both challenging problems.\n\nIn this paper, we tackle the novel problem of finding theme communities from database networks and make the following contributions.\n\nFirst, we introduce the database network as a network of vertices associated with transaction databases. A database network contains rich information about item co-occurrence, pattern frequency and graph structure. This presents novel opportunities to find theme communities with meaningful themes that consist of frequently co-occurring items.\n\nSecond, we motivate the novel problem of finding theme communities from a database network and prove that even counting the number of theme communities in a database network is \\#P-hard.\n\nThird, we design a greedy algorithm to find theme communities by detecting \\emph{maximal pattern trusses} for every pattern in all vertex databases.\nTo improve the efficiency in practice, we first investigate several useful properties of maximal pattern trusses, then apply these properties to design two effective pruning methods that reduce the time cost by more than two orders of magnitude in our experiments without losing community detection accuracy.\n\nFourth, we advocate the construction of a data warehouse of maximal pattern trusses. To facilitate indexing and query answering in the data warehouse, we show that a maximal pattern truss can be efficiently decomposed and stored in a linked list.
Using such decomposition, we design an efficient theme community indexing tree.\nWe also develop an efficient query answering method that takes only 1 second to retrieve 1 million theme communities from the indexing tree.\n\nLast, we report extensive experiments on both real and synthetic datasets and demonstrate the efficiency of the proposed theme community finding method and indexing method. \nThe case study in a large database network shows that finding theme communities discovers meaningful groups of collaborating scholars who share the same research interest.\n\nThe rest of the paper is organized as follows. We review related work in Section~\\ref{Sec:rw} and formulate the theme community finding problem in Section~\\ref{sec:prob}. We present a baseline method and a maximal pattern truss detection method in Section~\\ref{sec:baseline}. We develop our major theme community finding algorithms in Section~\\ref{sec:algo}, and the indexing and query answering algorithms in Section~\\ref{sec:algo}. We report a systematic empirical study in Section~\\ref{sec:exp} and conclude the paper in Section~\\ref{sec:con}.\n\n\n\\section{Appendix}\n\n\\subsection{The Proof of Theorem~\\ref{theo:hardness}.}\n\\label{Apd:hardness}\n\n\\begin{theorem}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the problem of counting the number of theme communities in $G$ is \\#P-hard.\n\\begin{proof}\nWe prove by a reduction from the \\emph{Frequent Pattern Counting} (FPC) problem, which is known to be \\#P-complete~\\cite{gunopulos2003discovering}.\n\nGiven a transaction database $\\mathbf{d}$ and a minimum support threshold $\\alpha\\in[0,1]$, an instance of the FPC problem is to count the number of patterns $\\mathbf{p}$ in $\\mathbf{d}$ such that $f(\\mathbf{p}) > \\alpha$.\nHere, $f(\\mathbf{p})$ is the frequency of $\\mathbf{p}$ in $\\mathbf{d}$.\n\nWe construct a database network $G=(V,E,D,S)$, where $V=\\{v_1, v_2, v_3\\}$ has only $3$ vertices; $E=\\{(v_1, v_2), (v_2, v_3), (v_3, v_1)\\}$ and forms a triangle; each vertex is associated with a copy of $\\mathbf{d}$, that is, $D=\\{\\mathbf{d}_1, \\mathbf{d}_2, \\mathbf{d}_3 \\mid \\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}\\}$; and $S$ is the set of all items appearing in $\\mathbf{d}$. Clearly, $G$ can be constructed in $O(|\\mathbf{d}|)$ time.\n\nFor any pattern $\\mathbf{p}\\subseteq S$, since $\\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}$, it follows that $f_1(\\mathbf{p})=f_2(\\mathbf{p})=f_3(\\mathbf{p})=f(\\mathbf{p})$. According to Definition~\\ref{Def:edge_cohesion}, $eco_{12}(G_\\mathbf{p})=eco_{13}(G_\\mathbf{p})=eco_{23}(G_\\mathbf{p})=f(\\mathbf{p})$.
\nAccording to Definition~\\ref{Def:theme_community}, $G_\\mathbf{p}$ is a theme community in $G$ if and only if $f(\\mathbf{p})> \\alpha$.\nTherefore, for any threshold $\\alpha\\in[0,1]$, the number of theme communities in $G$ is equal to the number of patterns in $\\mathbf{d}$ satisfying $f(\\mathbf{p})>\\alpha$, which is exactly the answer to the FPC problem.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\n\\subsection{The Proof of Theorem~\\ref{Prop:gam}}\n\\label{Apd:gam}\n\n\\begin{theorem}[Graph Anti-monotonicity]\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\n\\begin{proof}\nSince the maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ by proving that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$.\n\nWe construct a subgraph $H_{\\mathbf{p}_1}=(V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1})$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThat is, $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nNext, we prove that $H_{\\mathbf{p}_1}$ is a subgraph of $G_{\\mathbf{p}_1}$. \nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows that $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows that $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\nSince $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nNow we prove that $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, the following inequality holds for every triangle $\\triangle_{ijk}$ in $H_{\\mathbf{p}_1}$.\n\\begin{equation}\n\\label{eq:mono}\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nSince $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows that the set of triangles in $C^*_{\\mathbf{p}_2}(\\alpha)$ is exactly the same as the set of triangles in $H_{\\mathbf{p}_1}$.\nTherefore, we can derive from Equation~\\ref{eq:mono} and Definition~\\ref{Def:edge_cohesion} that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))\n\\end{equation}\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha$ in $G_{\\mathbf{p}_2}$, it follows that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))>\\alpha\n\\end{equation}\nSince $E_{\\mathbf{p}_1} = E^*_{\\mathbf{p}_2}(\\alpha)$, it follows that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha)) > \\alpha\n\\end{equation}\nThis means $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince 
$H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$.\nThe theorem follows.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\\subsection{The Proof of Theorem~\\ref{Obs:discrete}}\n\\label{Apd:discrete}\n\n\\begin{theorem}\nGiven a theme network $G_\\mathbf{p}$, a cohesion threshold $\\alpha_2$ and a maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ in $G_\\mathbf{p}$ whose minimum edge cohesion is $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$, if $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.\n\n\\begin{proof}\nFirst, we prove $\\alpha_2 > \\alpha_1$. \nSince $C^*_{\\mathbf{p}}(\\alpha_1)$ is a maximal pattern truss with respect to threshold $\\alpha_1$, from Definition~\\ref{Def:pattern_truss}, we have $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_1), eco_{ij}(C^*_\\mathbf{p}(\\alpha_1)) > \\alpha_1$. \nSince $\\beta$ is the minimum edge cohesion over all the edges in $C^*_{\\mathbf{p}}(\\alpha_1)$, it follows that $\\beta > \\alpha_1$. \nSince $\\alpha_2 \\geq \\beta$, we have $\\alpha_2 > \\alpha_1$.\n\nSecond, we prove $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2 > \\alpha_1$, we know from Definition~\\ref{Def:pattern_truss} that $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_2), eco_{ij}(C^*_\\mathbf{p}(\\alpha_2)) > \\alpha_2> \\alpha_1$.\nThis means that $C^*_\\mathbf{p}(\\alpha_2)$ is a pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$.\nSince $C^*_\\mathbf{p}(\\alpha_1)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$, from Definition~\\ref{Def:maximal_pattern_truss} we have $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$.\n\nLast, we prove $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$. 
\nLet $e^*_{ij}$ be the edge with minimum edge cohesion $\\beta$ in $E^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2\\geq \\beta$, according to Definition~\\ref{Def:pattern_truss}, $e^*_{ij}\\not\\in E^*_\\mathbf{p}(\\alpha_2)$. \nThus, $E^*_{\\mathbf{p}}(\\alpha_1)\\neq E^*_{\\mathbf{p}}(\\alpha_2)$ and $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$.\nRecall that $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. The theorem follows.\n\\end{proof}\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\\label{Sec:rw}\nTo the best of our knowledge, finding theme communities in database networks is a new problem that has not been formulated or tackled in the literature.\nBroadly, it is related to truss detection and vertex attributed network clustering.\n\n\\subsection{Truss Detection}\nTruss detection aims to detect $k$-trusses in unweighted networks.\nCohen~\\cite{cohen2008trusses} defined a $k$-truss as a subgraph in which each edge is contained in at least $k-2$ triangles of the subgraph. He also proposed a polynomial-time algorithm for efficient $k$-truss detection~\\cite{cohen2008trusses}.\nAs demonstrated by many studies~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph}, $k$-truss naturally models cohesive communities in social networks and is elegantly related to other graph structures, such as $k$-core~\\cite{seidman1983network} and $k$-clique~\\cite{luce1950connectivity}.\n\\nop{and $k$-plex~\\cite{seidman1978graph}. }\n\nThe elegance of $k$-truss has attracted much research attention. Huang~\\emph{et~al.}~\\cite{huang2014querying} designed an online community search method to find $k$-truss communities that contain a query vertex. They also proposed a memory-efficient index structure to support fast $k$-truss search and online index update. 
Wang~\\emph{et~al.}~\\cite{wang2012truss} focused on the truss decomposition problem, which is to find all non-empty $k$-trusses for all possible values of $k$ in large unweighted networks. They first improved the in-memory algorithm proposed by Cohen~\\cite{cohen2008trusses}, and then proposed two efficient methods to deal with large networks that cannot be held in memory. \nHuang~\\emph{et~al.}~\\cite{huang2016truss} proposed a new structure named $(k,\\gamma)$-truss that further extends the concept of $k$-truss from deterministic networks to probabilistic networks. They also proposed several algorithmic tools for the detection, decomposition and approximation of $(k,\\gamma)$-trusses.\n\nAll the methods mentioned above perform well in detecting communities in networks without vertex databases. However, since a database network contains an exponential number of patterns and we do not know in advance which patterns form theme communities, enumerating all theme communities in the database network requires performing community detection for each of the exponentially many patterns, which is computationally intractable.\n\n\\nop{\n\\subsection{Frequent Pattern Mining}\nFrequent pattern mining aims to find frequent patterns from a transactional database.\nIn the field of data mining, this is a well studied problem with many well known solutions. 
\nTo list a few, Agrawal~\\emph{et~al.}~\\cite{agrawal1994fast} proposed Apriori to find frequent patterns by candidate generation.\nHan~\\emph{et~al.}~\\cite{han2000mining} proposed the FP-Growth algorithm that avoids candidate generation by FP-Tree.\nYan~\\emph{et~al.}~\\cite{yan2002gspan} proposed gSpan to find the subgraphs that are frequently contained by a graph database.\nThe graph database is a set of graphs, which is substantially different from the database network.\n \nMost frequent pattern mining methods focus on mining frequent patterns from a single database, \nthus, they cannot find theme communities from a network of vertex databases.\n}\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Database network]{\\includegraphics[width=58mm]{Figs\/DBN_dbnetwork.pdf}}\n\\subfigure[Theme network $G_{\\mathbf{p}}$]{\\includegraphics[width=58mm]{Figs\/DBN_theme_network1}}\n\\subfigure[Theme network $G_{\\mathbf{q}}$]{\\includegraphics[width=58mm]{Figs\/DBN_theme_network2}}\n\\caption{A toy example of a database network, theme networks and theme communities. The pattern frequencies are labeled beside each vertex. The theme communities marked in bold in (b) are valid when $\\alpha\\in [0, 0.2)$. The theme community marked in bold in (c) is valid when $\\alpha\\in[0.2, 0.4)$.}\n\\label{Fig:toy_example}\n\\end{figure*}\n\n\\subsection{Vertex Attributed Network Clustering}\nA vertex attributed network is a network where each vertex is associated with a set of items. \nVertex attributed network clustering aims to find cohesive communities such that all vertices within the same community contain the same set of items.\n\nVarious methods have been proposed to solve this problem. 
Among them, frequent pattern mining based methods have proven effective.\nBerlingerio~\\emph{et~al.}~\\cite{berlingerio2013abacus} proposed ABACUS to find multi-dimensional communities by mining frequent closed itemsets from multi-dimensional community memberships.\nMoser~\\emph{et~al.}~\\cite{moser2009mining} devised CoPaM to efficiently find all maximal cohesive communities by exploiting various pruning strategies.\nPrado~\\emph{et~al.}~\\cite{prado2013mining} designed several interestingness measures and the corresponding mining algorithms for cohesive communities.\nMoosavi~\\emph{et~al.}~\\cite{moosavi2016community} applied frequent pattern mining to find cohesive groups of users who share similar features.\nThere are also effective methods based on \ngraph weighting~\\cite{steinhaeuser2008community, cruz2011semantic}, \nstructural embedding~\\cite{combe2012combining, dang2012community}, \nrandom walk~\\cite{zhou2009graph}, \nstatistical inference~\\cite{balasubramanyan2011block,yang2013community},\nmatrix compression~\\cite{akoglu2012pics},\nsubspace clustering~\\cite{gunnemann2011db,wang2016semantic},\nand $(k, d)$-truss detection~\\cite{huang2017attribute}.\n\nHowever, none of these methods can exploit the rich information of item co-occurrences and pattern frequencies in a database network.\n\n\n\n\n\\section{Problem Definition}\n\\label{sec:prob}\n\nIn this section, we first introduce the notions of database network, theme network and theme community, and then formalize the theme community finding problem.\n\n\\subsection{Database Network and Theme Network}\nLet $S=\\{s_1, \\ldots, s_m\\}$ be a set of items. An \\emph{itemset} $\\mathbf{x}$ is a subset of $S$, that is, $\\mathbf{x}\\subseteq S$.\nA \\emph{transaction} $\\mathbf{t}$ is an itemset. Transaction $\\mathbf{t}$ is said to \\emph{contain} itemset $\\mathbf{x}$ if $\\mathbf{x}\\subseteq \\mathbf{t}$. 
The \\emph{length} of transaction $\\mathbf{t}$, denoted by $|\\mathbf{t}|$, is the number of items in $\\mathbf{t}$.\nA \\emph{transaction database} $\\mathbf{d}=\\{\\mathbf{t}_1, \\ldots, \\mathbf{t}_h\\}$ ($h\\geq 1$) is a multi-set of transactions, that is, an itemset may appear multiple times as transactions in a transaction database.\n\nA \\emph{database network} is an undirected graph $G=(V,E,D,S)$, where each vertex is associated with a transaction database, that is,\n\\begin{itemize}\n\\item $V=\\{v_1, \\ldots, v_n\\}$ is a set of vertices;\n\\item $E=\\{e_{ij}=(v_i, v_j) \\mid v_i,v_j \\in V\\}$ is a set of edges;\n\\item $D=\\{\\mathbf{d}_1, \\ldots, \\mathbf{d}_n\\}$ is a set of transaction databases, where $\\mathbf{d}_i$ is the transaction database associated with vertex $v_i$; and\n\\item $S=\\{s_1, \\ldots, s_m\\}$ is the set of items that constitute all transaction databases in $D$. That is, $\\cup_{\\mathbf{d}_i\\in D} \\cup_{\\mathbf{t}\\in\\mathbf{d}_i} \\mathbf{t}= S$.\n\\end{itemize}\n\nFigure~\\ref{Fig:toy_example}(a) gives a toy database network of 9 vertices, where each vertex is associated with a transaction database, whose details are omitted due to space limitations. \n\n\nA \\emph{theme} is an itemset $\\mathbf{p}\\subseteq S$, which is also known as a \\emph{pattern} in the field of frequent pattern mining~\\cite{agrawal1994fast,han2000mining}.\nThe \\emph{length} of $\\mathbf{p}$, denoted by $|\\mathbf{p}|$, is the number of items in $\\mathbf{p}$.\nThe \\emph{frequency} of $\\mathbf{p}$ in transaction database $\\mathbf{d}_i$, denoted by $f_i(\\mathbf{p})$, is the proportion of transactions in $\\mathbf{d}_i$ that contain $\\mathbf{p}$~\\cite{agrawal1994fast,han2000mining}. 
\n$f_i(\\mathbf{p})$ is also called the frequency of $\\mathbf{p}$ on vertex $v_i$.\nIn the rest of this paper, we use the terms \\emph{theme} and \\emph{pattern} interchangeably.\n\nGiven a pattern $\\mathbf{p}$, the \\emph{theme network} $G_\\mathbf{p}$ is a subgraph induced from $G$ by the set of vertices satisfying $f_i(\\mathbf{p}) > 0$. We denote it by $G_\\mathbf{p}=(V_\\mathbf{p}, E_\\mathbf{p})$, where \n$V_\\mathbf{p}=\\{v_i \\in V \\mid f_i(\\mathbf{p}) > 0\\}$ is the set of vertices and \n$E_\\mathbf{p}=\\{e_{ij}\\in E \\mid v_i, v_j \\in V_\\mathbf{p}\\}$ is the set of edges.\n\n\n\\nop{\nA \\emph{pattern} $p$ is an itemset that is also regarded as the \\emph{theme} of the theme network. .\nThe \\emph{frequency} of pattern $p$ on vertex $v_i$ is defined as the proportion of transactions in $d_i$ that contains $p$~\\cite{agrawal1994fast,han2000mining}. .\n}\n\n\\nop{The theme network $G_p$ is induced from database network $G$ by the set of vertices satisfying $f_i(p) > 0$.}\n\n\\nop{\n\\item $F_p=\\{f_i(p) \\mid v_i \\in V_p\\}$ is the set of vertex weights, where $f_i(p)$ is the weight of $v_i$.\nA higher weight $f_i(p)$ indicates that pattern $p$ is more frequent on $v_i$.\nA higher weight indicates that pattern $p$ is more significant on $v_i$, thus $v_i$ gets a larger weight in the theme network $G_p$.\nLater, when applying standard graph operations (e.g., union, intersection, etc) on $G_p$, we treat $G_p$ as a standard unweighted graph and ignore $F_p$.}\n\nFigures~\\ref{Fig:toy_example}(b) and~\\ref{Fig:toy_example}(c) present two theme networks induced by two different patterns $\\mathbf{p}$ and $\\mathbf{q}$, respectively. The edges and vertices marked in dashed lines are not included in the theme networks.\n\\nop{\\mcout{What are $\\mathbf{p}_1$ and $\\mathbf{p}_2$, respectively, in this example?}}\n\\nop{\nsince $f_6(p_1)=0$, $v_6$ and the edges connected to $v_6$ are removed when inducing $G_{p_1}$. 
The rest of the vertices and edges consist the theme network induced by $p_1$. For the theme network induced by $p_2$ (see Figure~\\ref{Fig:toy_example}(c)), we remove $v_4$ and its connected edges, since $f_4(p_2)=0$.\n}\n\nSince each pattern $\\mathbf{p} \\subseteq S$ induces a theme network, a database network $G$ can induce at most $2^{|S|}$ theme networks, where $G$ itself can be regarded as the theme network of $\\mathbf{p}=\\emptyset$.\n\n\n\\subsection{Theme Community}\n\\label{Sec:theme_community}\n\nA \\emph{theme community} is a subgraph of a theme network whose vertices form a cohesively connected subgraph.\nIntuitively, in a good theme community with theme $\\mathbf{p}$, every vertex is expected to satisfy at least one of the following \\emph{criteria}:\n\\begin{enumerate}\n\\item It has a high frequency of $\\mathbf{p}$ in its vertex database.\n\\item It is connected to a large proportion of vertices in the theme community.\n\\nop{\\item The frequency of $\\mathbf{p}$ on every vertex of the theme community is high.}\n\\end{enumerate}\n\nThe rationale is that a vertex with a high frequency of $\\mathbf{p}$ is a \\emph{theme leader} that strengthens the theme coherence of the theme community, and a vertex connecting to many vertices in the community is a \\emph{social leader} that strengthens the edge connection in the theme community.\nBoth theme leaders and social leaders are important members of the theme community.\nTake Figure~\\ref{Fig:toy_example}(b) as an example: $v_1, v_2, v_3, v_4$, and $v_5$ form a theme community with theme $\\mathbf{p}$, where $v_1$ and $v_4$ are theme leaders, $v_3$ and $v_5$ are social leaders, and $v_2$ is both a theme leader and a social leader.\n\n\\nop{\nAccording to the above criterion, in a good theme community with theme $\\mathbf{p}$, two cohesively connected vertices $v_i, v_j$ should either have high frequencies of $\\mathbf{p}$ or have a large number of common neighbouring vertices in the theme community. 
\n}\n\nThe above criteria inspire us to measure the cohesion of edge $e_{ij}=(v_i, v_j)$ by considering the number of common neighbour vertices of $v_i$ and $v_j$, as well as the frequencies of $\\mathbf{p}$ on $v_i$, $v_j$ and their common neighbour vertices. \nThe rationale is that, in a good theme community with theme $\\mathbf{p}$, two connected vertices $v_i$ and $v_j$ should have a large edge cohesion if each of them has a high frequency of $\\mathbf{p}$ or if they have many common neighbours in the theme community. \n\n\\nop{\nThis inspires us to measure the cohesion of edge $e_{ij}$ by considering both the number of common neighbour vertices of $v_i, v_j$ and the frequencies of $\\mathbf{p}$ on these vertices. \n\\nop{The rationale is that, if $v_i, v_j$ belong to the same theme community, the edge $(v_i, v_j)$ should have a larger edge cohesion as the two vertices have high frequencies of $\\mathbf{p}$ and have many common neighbors with higher frequencies of $\\mathbf{p}$.}\n}\n\n\\nop{\nThe above criteria inspire us to measure the cohesion of connection between two connected vertices $v_i, v_j$ by considering both the frequencies of $\\mathbf{p}$ on these vertices and the number of their common neighbour vertices in the theme community. \nThe intuition is that $v_i, v_j$ should have a large edge cohesion if they have high frequencies of $\\mathbf{p}$ or have a large number of common neighbours in the theme community.\n}\n\n\\nop{\nAccording to the above criterion, in a good theme community with theme $\\mathbf{p}$, two vertices $v_i, v_j$ connected by edge $e_{ij}=(v_i, v_j)$ should have high frequencies of $\\mathbf{p}$ and have a large number of common neighbouring vertices with high frequencies of $\\mathbf{p}$. 
\n}\n\n\\nop{\nThe relationship between $v_i$ and $v_j$ is more cohesive if they have more common neighbouring vertices with larger frequency of $\\mathbf{p}$.\nBased on this insight, we design \\emph{edge cohesion} (see Definition~\\ref{Def:edge_cohesion}) that measures the cohesiveness between $v_i$ and $v_j$ by comprehensively considering the number of common neighbouring vertices and the frequency of $\\mathbf{p}$ on these neighbouring vertices.\n}\n\n\\nop{Denote by $v_i$ and $v_j$ two connected vertices in a theme community with theme $\\mathbf{p}$, let $CN(v_i, v_j)$ denote the set of common neighbour vertices of $v_i$ and $v_j$ in the theme community, }\n\n\\nop{According to the above criterion, for any two connected vertices $v_i$ and $v_j$ in the theme community, $f_i(\\mathbf{p})$ and $f_j(\\mathbf{p})$ should be large.}\n\n\\nop{\nWe model theme community by maximal pattern truss.\nIn the following, we first define the fundamental concepts such as edge cohesion, pattern truss, maximal pattern truss. Then, we give the formal definition of theme community.\n}\n\n\\nop{, which extends the concept of $k$-truss~\\cite{cohen2008trusses} from unweighted network to the vertex weighted theme network. }\n\n\\newcommand{65mm}{65mm}\n\\begin{table}[t]\n\\centering\\small\n\\caption{Frequently used notations.}\n\\label{Table:notations}\n\\begin{tabular}{|c|p{65mm}|}\n\\hline\n\\makecell[c]{Notation} & \\makecell[c]{Description} \\\\ \\hline\n\n$G$\n& The database network. \\\\ \\hline\n\n$G_\\mathbf{p}$\n& The theme network induced by pattern $\\mathbf{p}$. \\\\ \\hline\n\n$S$\n& The complete set of items in $G$. \\\\ \\hline\n\n$|\\cdot|$\n& The set volume operator. \\\\ \\hline\n\n$C^*_\\mathbf{p}(\\alpha)$\n& The maximal pattern truss in theme network $G_\\mathbf{p}$ with respect to threshold $\\alpha$. \\\\ \\hline\n\n$\\mathcal{L}_\\mathbf{p}$\n& The linked list storing the decomposed results of maximal pattern truss $C^*_\\mathbf{p}(0)$. 
\\\\ \\hline\n\n$f_i(\\mathbf{p})$\n& The frequency of pattern $\\mathbf{p}$ on vertex $v_i$. \\\\ \\hline\n\n\\nop{\n$eco_{ij}(C_\\mathbf{p})$\n& See Definition~\\ref{Def:edge_cohesion}. \\\\ \\hline\n}\n\n$\\epsilon$\n& The threshold of pattern frequency for TCS. \\\\ \\hline\n\n\\nop{\n$s_{n_i}$\n& The item stored in node $n_i$ in the TC-Tree. \\\\ \\hline\n}\n\n\\nop{\n$\\mathcal{L}_{\\mathbf{p}_i}$\n& The linked list stored in node $n_i$ in the TC-Tree. \\\\ \\hline\n}\n\n\\end{tabular}\n\\end{table}\n\nLet $v_k$ be a common neighbor of two connected vertices $v_i$ and $v_j$. Then, $v_i, v_j$ and $v_k$ form a \\emph{triangle}, denoted by $\\triangle_{ijk}= \\{v_i, v_j, v_k\\}$.\nSince every common neighbor of $v_i$ and $v_j$ corresponds to a unique triangle that contains edge $e_{ij}=(v_i, v_j)$, the number of common neighbors of $v_i$ and $v_j$ is exactly the number of triangles that contain $e_{ij}=(v_i, v_j)$.\n\nNow we define \\emph{edge cohesion}, which measures the cohesion between two connected vertices $v_i, v_j$ by considering both the number of triangles containing $e_{ij}=(v_i, v_j)$ and the frequencies of $\\mathbf{p}$ on the vertices of those triangles.\n\n\\nop{\nthe number of common neighbouring vertices is exactly the same as \nwe obtain the number of common neighbouring vertices by counting the number of triangles that contains $e_{ij}$.\nA \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ is a clique of 3 vertices $v_i, v_j, v_k$.\n\n\nIn a graph $G=(V, E)$, three vertices $v_i, v_j, v_k \\in V$ forms a \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ if $(v_i, v_j), (v_j, v_k), (v_i, v_k) \\in E$, denoted by $\\triangle_{ijk} \\subseteq G$.\n\\mc{The number of triangles $\\triangle_{ijk}$ containing edge $e_{ij}=(v_i, v_j)$ is the number of the common neighbouring vertices shared by $v_i$ and $v_j$.}\n}\n\n\\nop{\nAccording to White~\\emph{et al.}~\\cite{white2001cohesiveness}, for two users in a social network, the number of their 
common friends is a natural measurement for the cohesiveness of their social tie. \nAs demonstrated by Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, such measurement is robust and effective in detecting cohesive communities.\nInspired by such observation, for theme network $G_\\mathbf{p}$, we model the \\emph{edge cohesion} between two vertices by considering the number of their common neighbours and the frequency of $\\mathbf{p}$ on all neighbouring vertices.\n}\n\n\\nop{In a graph $G=(V, E)$, three vertices $v_i, v_j, v_k \\in V$ forms a \\emph{triangle} $\\triangle_{ijk}= (v_i, v_j, v_k)$ if $(v_i, v_j), (v_j, v_k), (v_i, v_k) \\in E$, denoted by $\\triangle_{ijk} \\subseteq G$.}\n\n\\nop{\nAs demonstrated by Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, the $k$-truss structure \n\nAccording to Cohen~\\cite{cohen2008barycentric,cohen2008trusses,cohen2009graph} and Huang~\\emph{et al.}~\\cite{huang2014querying,huang2016truss}, for two users in a social network, measuring their connection strength by the number of their common friends is a natural and effective way in detecting cohesive social communities.\n\nthe number of their common friends naturally measures the strength of their social tie.\nInspired by such observation, proposed the community structure $k$-truss, which measures the cohesion of an edge by the number of triangles containing it.\n}\n\n\\begin{definition}[Edge Cohesion]\n\\label{Def:edge_cohesion}\nConsider a theme network $G_\\mathbf{p}$ and a subgraph $C_\\mathbf{p}\\subseteq G_\\mathbf{p}$, for an edge $e_{ij}=(v_i, v_j)$ in $C_\\mathbf{p}$, the \\textbf{edge cohesion} of $e_{ij}$ in $C_\\mathbf{p}$ is \n\\begin{equation}\n\\label{Eqn:cohesion}\neco_{ij}(C_\\mathbf{p})=\\sum\\limits_{\\triangle_{ijk}\\subseteq C_\\mathbf{p}} \\min{(f_i(\\mathbf{p}), f_j(\\mathbf{p}), 
f_k(\\mathbf{p}))} \\nonumber\n\\end{equation}\n\\end{definition}\n\n\\nop{\n\\mcout{We need to explain the intuition before edge cohesion. Why do we want to sum over all triangles?}\n}\n\n\\begin{example}\nIn Figure~\\ref{Fig:toy_example}(b), for subgraph $C_{\\mathbf{p}}$ induced by the set of vertices $\\{v_1, v_2, v_3, v_4, v_5\\}$, edge $e_{12}$ is contained by $\\triangle_{123}$ and $\\triangle_{125}$, thus the edge cohesion of $e_{12}$ is $eco_{12}(C_{\\mathbf{p}})=\\min(f_1(\\mathbf{p}),$ $ f_2(\\mathbf{p}), f_3(\\mathbf{p})) + \\min(f_1(\\mathbf{p}), f_2(\\mathbf{p}), $ $f_5(\\mathbf{p}))=0.2$.\n\\end{example}\n\nIn a subgraph $C_\\mathbf{p}$, if for every vertex $v_i$ in the subgraph, $f_i(\\mathbf{p}) = 1$, then the edge cohesion $eco_{ij}(C_\\mathbf{p})$ equals the number of triangles containing edge $e_{ij}$. In this case, $eco_{ij}(C_\\mathbf{p})$ is exactly the edge cohesion used by Cohen~\\cite{cohen2008trusses} to define $k$-truss.\n\n\\nop{\nApparently, a large edge cohesion $eco_{ij}(C_\\mathbf{p})$ indicates that vertices $v_i, v_j$ either have high frequencies of $\\mathbf{p}$ or have a large number of common neighbouring vertices whose vertex databases contain theme $\\mathbf{p}$.\nAccording to the criteria of good theme community, if every edge in $C_\\mathbf{p}$ has a large edge cohesion, $C_\\mathbf{p}$ will be a good theme community. 
Based on this insight, }\nNow, we propose \\emph{pattern truss}, a subgraph such that the cohesion of every edge in the subgraph is larger than a threshold.\n\n\\begin{definition}[Pattern Truss]\n\\label{Def:pattern_truss}\nGiven a minimum cohesion threshold $\\alpha \\geq 0$, a \\textbf{pattern truss} $C_\\mathbf{p}(\\alpha)=(V_\\mathbf{p}(\\alpha), E_\\mathbf{p}(\\alpha))$ is an edge-induced subgraph of $G_\\mathbf{p}$ on the set of edges $E_\\mathbf{p}(\\alpha)=\\{e_{ij} \\mid eco_{ij}(C_\\mathbf{p}(\\alpha)) > \\alpha \\}$.\n\\end{definition}\n\n\\nop{\nIn other words, a pattern truss $C_\\mathbf{p}(\\alpha)$ is a subgraph in $G_\\mathbf{p}$ such that the cohesion $eco_{ij}(C_\\mathbf{p}(\\alpha))$ of every edge in $E_\\mathbf{p}(\\alpha)$ is larger than $\\alpha$.\n}\n\n\\nop{\n\\mcout{It is unclear whether $C_\\mathbf{p}$ is the induced subgraph of $G_\\mathbf{p}$ on $V_\\mathbf{p}(\\alpha)$. That is, for any $v_i, v_j \\in V_\\mathbf{p}(\\alpha)$, is edge $(v_i, v_j) \\in E_\\mathbf{p}(\\alpha)$? I don't think it is the case. This should be clarified.}\n\n\\mc{Answer: $C_\\mathbf{p}(\\alpha)$ is an edge-induced subgraph of $G_\\mathbf{p}$ on $E_\\mathbf{p}(\\alpha)$. That is, for any edge $(v_i, v_j) \\in E_\\mathbf{p}(\\alpha)$, we have $v_i, v_j \\in V_\\mathbf{p}(\\alpha)$.}\n}\n\n\\nop{\n\\begin{itemize}\n\\item $V_p(\\alpha)=\\{v_i \\mid \\exists e_{ij}\\in E_p(\\alpha) \\}$.\n\\item $E_p(\\alpha)=\\{e_{ij} \\mid w_{ij}(p) > \\alpha, e_{ij}\\in E_p\\}$.\n\\end{itemize}\nthe \\emph{cohesion} of edge $e_{ij}=(v_i, v_j)$ is defined as \n\\begin{equation}\n\\label{Eqn:cohesion}\nw_{ij}(p)=\\sum\\limits_{\\triangle_{ijk}\\subseteq C_p(\\alpha)} \\min{(f_i(p), f_j(p), f_k(p))}\n\\end{equation}\nwhere $\\triangle_{ijk} = (v_i, v_j, v_k)$ represents any triangle in $C_p(\\alpha)$ that contains edge $e_{ij}$. 
}\n\n\\nop{Pattern truss has the following useful properties:}\n\nIf $\\alpha = k-3$ and $\\forall v_i \\in V_\\mathbf{p}(\\alpha), f_i(\\mathbf{p}) = 1$, a pattern truss $C_\\mathbf{p}(\\alpha)$ becomes a $k$-truss~\\cite{cohen2008trusses}, which is a subgraph where every edge in the subgraph is contained by at least $(k-2)$ triangles.\nFurthermore, if $C_\\mathbf{p}(\\alpha)$ is also a maximal connected subgraph in $G_\\mathbf{p}$, it will also be a $(k-1)$-core~\\cite{seidman1983network}.\n\nSimilar to a $k$-truss, a pattern truss is not necessarily a connected subgraph.\nFor example, in Figure~\\ref{Fig:toy_example}(b), when $\\alpha \\in [0, 0.2)$, the subgraph marked in bold lines is a pattern truss, but it is not connected.\n\nIt is easy to see that, for a given $\\alpha$, the union of multiple pattern trusses is still a pattern truss.\n\\nop{Take Figure~\\ref{Fig:toy_example}(c) as an example, when $\\alpha \\in [0.2, 0.4)$, $\\{v_2, v_3, v_5, v_6\\}$, $\\{v_3, v_5, v_6, v_7, v_9\\}$ and the union of them (i.e., $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$) are all valid pattern trusses.}\n\n\n\n\n\\begin{definition}[Maximal Pattern Truss]\n\\label{Def:maximal_pattern_truss}\nA \\textbf{maximal pattern truss} in $G_{\\mathbf{p}}$ with respect to a minimum cohesion threshold $\\alpha$ is a pattern truss such that no proper superset of it is a pattern truss with respect to $\\alpha$ in $G_{\\mathbf{p}}$.\n\\end{definition}\n\nClearly, a maximal pattern truss in $G_{\\mathbf{p}}$ with respect to $\\alpha$ is the union of all pattern trusses in $G_{\\mathbf{p}}$ with respect to the same threshold $\\alpha$. Moreover, a maximal pattern truss is not necessarily a connected subgraph.\nWe denote by $C^*_\\mathbf{p}(\\alpha)=(V^*_\\mathbf{p}(\\alpha), E^*_\\mathbf{p}(\\alpha))$ the maximal pattern truss in $G_\\mathbf{p}$. 
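To make edge cohesion and maximal pattern truss concrete, the following Python sketch (illustrative only, not from the paper; the graph representation and function name are our own) computes $C^*_\mathbf{p}(\alpha)$ by repeatedly peeling every edge whose cohesion is at most $\alpha$, in the spirit of classic $k$-truss peeling:

```python
def maximal_pattern_truss(edges, freq, alpha):
    """Peel edges with cohesion <= alpha until a fixpoint is reached.

    edges: iterable of frozenset({u, v}) edges of the theme network G_p.
    freq:  dict mapping each vertex v_i to f_i(p), the frequency of the
           pattern p on that vertex.
    Returns the edge set of the maximal pattern truss C*_p(alpha).
    """
    edges = set(edges)
    changed = True
    while changed:
        # Build adjacency lists of the currently surviving subgraph.
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        survivors = set()
        for e in edges:
            u, v = tuple(e)
            # Each common neighbour w of u and v closes one triangle,
            # contributing min(f_u, f_v, f_w) to the cohesion of (u, v).
            eco = sum(min(freq[u], freq[v], freq[w]) for w in adj[u] & adj[v])
            if eco > alpha:
                survivors.add(e)
        changed = survivors != edges
        edges = survivors
    return edges
```

Removing an edge can lower the cohesion of the remaining edges, so the peeling must be iterated until no further edge drops out; the surviving edge set is the union of all pattern trusses, i.e. the maximal pattern truss, and the theme communities are its connected components.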
\n\nNow we are ready to define theme community.\n\\begin{definition}[Theme Community]\n\\label{Def:theme_community}\nGiven a minimum cohesion threshold $\\alpha$, a \\textbf{theme community} is a maximal connected subgraph in the maximal pattern truss with respect to $\\alpha$ in a theme network.\n\\end{definition}\n\n\n\n\\nop{\nA \\emph{maximal pattern truss} is a pattern truss that is not a subgraph of any other pattern truss. We denote maximal pattern truss by $C^*_p(\\alpha)=\\{V^*_p(\\alpha), E^*_p(\\alpha)\\}$.\n\nWe list the following useful facts about pattern truss: \n\\begin{itemize}\n\\item Pattern truss degenerates to $k$-truss~\\cite{cohen2008trusses} if $\\forall v_i \\in V_p, f_i(p) = 1$ and $\\alpha = k-2$. \n\\item A pattern truss is not necessarily a connected subgraph. For example, in Figure~\\ref{Fig:toy_example}(b), $\\{v_1, v_2, v_3, v_4,$ $ v_5, v_7, v_8, v_9\\}$ is a valid pattern truss when $\\alpha \\in [0, 0.2)$.\n\\item The union of two pattern trusses is also a pattern truss. This is because adding new vertices and edges to an existing pattern truss does not decrease the cohesion of any original edge of the pattern truss. 
For example, in Figure~\\ref{Fig:toy_example}(c), $\\{v_2, v_3, v_5, v_6\\}$, $\\{v_3, v_5, v_6, v_7, v_9\\}$ and the union of them (i.e., $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$) are all valid pattern trusses when $\\alpha \\in [0.2, 0.4)$.\n\\item For a fixed threshold $\\alpha$, a theme network can only have one unique maximal pattern truss, which is the union of all valid pattern trusses in the theme network.\n\\end{itemize}\n}\n\n\\begin{example}\nIn Figure~\\ref{Fig:toy_example}(b), when $\\alpha \\in [0, 0.2)$, $\\{v_1, v_2, v_3, v_4, v_5\\}$ and $\\{v_7, v_8, v_9\\}$ are two theme communities in $G_{\\mathbf{p}}$.\nIn Figure~\\ref{Fig:toy_example}(c), when $\\alpha \\in [0.2, 0.4)$, $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$ is a theme community in $G_{\\mathbf{q}}$, and partially overlaps with the two theme communities in $G_{\\mathbf{p}}$.\n\\end{example}\n\n\\nop{\n shows another theme community in theme network $G_{\\mathbf{p}_2}$ when $\\alpha \\in [0.2, 0.4)$. Such community is $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$, which partially overlaps with both the theme communities in $G_{\\mathbf{p}_1}$.}\n\n\\nop{\n\\begin{table}[t]\n\\caption{\\mc{Three types of good theme communities.}}\n\\centering\n\\label{Table:thcm_types}\n\\begin{tabular}{|c|c|c|}\n\\hline\nTypes \t& Theme Coherence & Edge Connection \\\\ \\hline\nType-I & Medium & High \t\t\t\\\\ \\hline\nType-II & High & Medium \t\t\\\\ \\hline\nType-III & High & High \t \t\t\\\\ \\hline\n\\end{tabular}\n\\end{table}\n}\nThere are several important benefits from modeling theme communities using maximal pattern trusses. 
\nFirst, there exist polynomial-time algorithms to find maximal pattern trusses.\nSecond, maximal pattern trusses of different theme networks may overlap with each other, which reflects the application scenarios where a vertex may participate in communities of different themes.\nLast, as will be proved in Sections~\\ref{Sec:pompt} and~\\ref{Sec:mpt_dec}, maximal pattern trusses have many good properties that enable us to design efficient mining and indexing algorithms for theme community finding.\n\n\n\\nop{One interesting observation is that even though the left community has a lower average vertex frequency than the right community, it is equally valid as the right community under the same threshold $\\alpha$. This is because the left community has a denser edge connection that contains more triangles, and the edge cohesion comprehensively considers both edge connections and vertex frequencies.}\n\n\\subsection{Problem Definition and Complexity}\n\\label{Sec:tcfp}\n\\nop{We define the theme community finding problem as follows.}\n\\begin{definition}[Theme Community Finding]\n\\label{Def:theme_comm_finding}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the \\textbf{problem of theme community finding} is to compute all theme communities in $G$.\n\\end{definition}\n\nSince extracting maximal connected subgraphs from a maximal pattern truss is straightforward, the core of the theme community finding problem is to identify the maximal pattern trusses of all theme networks. \nThis is a challenging problem, since a database network can induce up to $2^{|S|}$ theme networks and each theme network may contain a maximal pattern truss. \nAs a result, exhaustively finding theme communities for all themes is computationally intractable. 
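To make the exponential search space concrete, the following sketch (a hypothetical three-item vocabulary, for illustration only) enumerates every non-empty candidate pattern that a brute-force method would have to examine, one theme network per pattern.

```python
from itertools import chain, combinations

def all_patterns(S):
    """Enumerate every non-empty pattern p that is a subset of the
    item vocabulary S.  There are 2^|S| - 1 of them, so inducing and
    scanning one theme network per pattern does not scale with |S|."""
    return list(chain.from_iterable(combinations(sorted(S), r)
                                    for r in range(1, len(S) + 1)))

patterns = all_patterns({"a", "b", "c"})
# a 3-item vocabulary already yields 2^3 - 1 = 7 candidate patterns
```

Doubling the vocabulary squares the number of candidate patterns, which is why the pruning techniques developed later are essential.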
\n\n\\nop{The proof of Theorem~\\ref{theo:hardness} is in Section~\\ref{Apd:hardness} of the appendix.}\n\n\\begin{theorem}\n\\label{theo:hardness}\nGiven a database network $G$ and a minimum cohesion threshold $\\alpha$, the problem of counting the number of theme communities in $G$ is \\#P-hard.\n\\nop{\n\\begin{proof}\nWe prove by a reduction from the \\emph{Frequent Pattern Counting} (FPC) problem, which is known to be \\#P-complete~\\cite{gunopulos2003discovering}.\n\nGiven a transaction database $\\mathbf{d}$ and a minimum support threshold $\\alpha\\in[0,1]$, an instance of the FPC problem is to count the number of patterns $\\mathbf{p}$ in $\\mathbf{d}$ such that $f(\\mathbf{p}) > \\alpha$.\nHere, $f(\\mathbf{p})$ is the frequency of $\\mathbf{p}$ in $\\mathbf{d}$.\n\nWe construct a database network $G=(V,E,D,S)$, where $V=\\{v_1, v_2, v_3\\}$ has only $3$ vertices; $E=\\{(v_1, v_2), (v_2, v_3), (v_3, v_1)\\}$ and forms a triangle; each vertex is associated with a copy of $\\mathbf{d}$, that is, $D=\\{\\mathbf{d}_1, \\mathbf{d}_2, \\mathbf{d}_3 \\mid \\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}\\}$; and $S$ is the set of all items appearing in $\\mathbf{d}$. Apparently, $G$ can be constructed in $O(|\\mathbf{d}|)$ time.\n\nFor any pattern $\\mathbf{p}\\subseteq S$, since $\\mathbf{d}_1=\\mathbf{d}_2=\\mathbf{d}_3=\\mathbf{d}$, it follows $f_1(\\mathbf{p})=f_2(\\mathbf{p})=f_3(\\mathbf{p})=f(\\mathbf{p})$. According to Definition~\\ref{Def:edge_cohesion}, $eco_{12}(G_\\mathbf{p})=eco_{13}(G_\\mathbf{p})=eco_{23}(G_\\mathbf{p})=f(\\mathbf{p})$. 
\nAccording to Definition~\\ref{Def:theme_community}, $G_\\mathbf{p}$ is a theme community in $G$ if and only if $f(\\mathbf{p})> \\alpha$.\nTherefore, for any threshold $\\alpha\\in[0,1]$, the number of theme communities in $G$ is equal to the number of patterns in $\\mathbf{d}$ satisfying $f(\\mathbf{p})>\\alpha$, which is exactly the answer to the FPC problem.\n\\end{proof}\n}\n\\end{theorem}\nThe proof of Theorem~\\ref{theo:hardness} is given in Appendix~\\ref{Apd:hardness}.\n\n\\nop{\n\\mc{Given a database network $G$ and a threshold $\\alpha$, if an algorithm can enumerate all theme communities in $G$, it is also able to count them. \nFrom this perspective, the theme community finding problem (Definition~\\ref{Def:theme_comm_finding}) is at least as hard as the problem of counting the number of theme communities in $G$.}\n}\n\n\nIn the rest of the paper, we develop an exact algorithm for theme community finding and investigate various techniques to speed up the search.\n\n\\nop{\nHowever, as proved in Section~\\ref{Sec:pompt}, the size of maximal pattern truss decreases monotonously when the length of pattern increases. \nWith carefully designed algorithms, we can safely prune a pattern $\\mathbf{p}$ if all its sub-patterns $\\{\\mathbf{h} \\mid \\mathbf{h}\\subset \\mathbf{p}, \\mathbf{h}\\neq \\emptyset\\}$ cannot induce pattern truss.\nSuch prunning leads to high detection efficiency without accuracy loss.\n}\n\\section{A Baseline and Maximal Pattern Truss Detection}\n\\label{sec:baseline}\n\nIn this section, we present a baseline for theme community finding. Before that, we introduce \\emph{Maximal Pattern Truss Detector} (MPTD) that detects the maximal pattern truss of a given theme network $G_\\mathbf{p}$ with respect to a threshold $\\alpha$. 
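The peeling strategy behind MPTD can be sketched as follows. This is an illustrative simplification of Algorithm~\\ref{Alg:mptd} below (the input representation, the \texttt{freq} map, and the function name are hypothetical), not the exact procedure: edges whose cohesion is not larger than $\\alpha$ are iteratively removed, and each removal updates the cohesion of the edges that shared a triangle with it.

```python
from collections import deque

def mptd(edges, freq, alpha):
    """Peel away 'unqualified' edges (cohesion <= alpha) until every
    remaining edge has cohesion > alpha; the surviving edges form the
    maximal pattern truss (possibly empty)."""
    edges = {tuple(sorted(e)) for e in edges}
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    # initial cohesion: sum over triangles (i, j, k) of min frequency
    eco = {(i, j): sum(min(freq[i], freq[j], freq[k])
                       for k in adj[i] & adj[j])
           for (i, j) in edges}
    queue = deque(e for e in edges if eco[e] <= alpha)
    while queue:
        i, j = queue.popleft()
        if (i, j) not in edges:
            continue  # already removed
        # removing e_ij breaks every triangle (i, j, k)
        for k in adj[i] & adj[j]:
            for e in (tuple(sorted((i, k))), tuple(sorted((j, k)))):
                was = eco[e]
                eco[e] -= min(freq[i], freq[j], freq[k])
                if was > alpha >= eco[e]:
                    queue.append(e)  # e just became unqualified
        edges.discard((i, j))
        adj[i].discard(j)
        adj[j].discard(i)
    return edges
```

On a triangle whose vertices all have frequency $0.3$, this sketch keeps all three edges for $\\alpha = 0.2$ and removes everything for $\\alpha = 0.3$, matching the strict inequality $eco_{ij} > \\alpha$ in the definition.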
\n\n\\subsection{Maximal Pattern Truss Detection}\n\n\\begin{algorithm}[t]\n\\caption{Maximal Pattern Truss Detector (MPTD)}\n\\label{Alg:mptd}\n\\KwIn{A theme network $G_\\mathbf{p}$ and a user input $\\alpha$.}\n\\KwOut{The maximal pattern truss $C^*_\\mathbf{p}(\\alpha)$ in $G_\\mathbf{p}$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize: $Q\\leftarrow \\emptyset$.\n \\FOR{each $e_{ij}\\in E_\\mathbf{p}$}\n \t\\STATE $eco_{ij}(G_\\mathbf{p})\\leftarrow 0$.\n \t\\FOR{each $v_k\\in \\triangle_{ijk}$}\n\t\t\\STATE $eco_{ij}(G_\\mathbf{p})\\leftarrow eco_{ij}(G_\\mathbf{p}) + \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\\ENDFOR\n\t\\STATE \\textbf{if} $eco_{ij}(G_\\mathbf{p}) \\leq \\alpha$ \\textbf{then} $Q.push(e_{ij})$.\n \\ENDFOR\n \n \\WHILE{$Q\\neq\\emptyset$}\n \t\\STATE $Q.pop(e_{ij})$.\n\t\\FOR{each $v_k\\in \\triangle_{ijk}$}\n\t\t\\STATE $eco_{ik}(G_\\mathbf{p})\\leftarrow eco_{ik}(G_\\mathbf{p}) - \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\t\\STATE $eco_{jk}(G_\\mathbf{p})\\leftarrow eco_{jk}(G_\\mathbf{p}) - \\min(f_i(\\mathbf{p}), f_j(\\mathbf{p}), f_k(\\mathbf{p}))$.\n\t\t\\STATE \\textbf{if} $eco_{ik}(G_\\mathbf{p})\\leq \\alpha$ \\textbf{then} $Q.push(e_{ik})$.\n\t\t\\STATE \\textbf{if} $eco_{jk}(G_\\mathbf{p})\\leq \\alpha$ \\textbf{then} $Q.push(e_{jk})$.\n\t\\ENDFOR\n\t\\STATE Remove $e_{ij}$ from $G_{\\mathbf{p}}$.\n \\ENDWHILE\n \\STATE $C^*_\\mathbf{p}(\\alpha)\\leftarrow G_{\\mathbf{p}}$.\n\\BlankLine\n\\RETURN $C^*_\\mathbf{p}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\nGiven $G_\\mathbf{p}$ and $\\alpha$, an edge in $G_\\mathbf{p}$ is referred to as an \\emph{unqualified edge} if the edge cohesion is not larger than $\\alpha$.\nThe key idea of MPTD is to remove all unqualified edges so that the remaining edges and connected vertices constitute the maximal pattern truss.\n\nAs shown in Algorithm~\\ref{Alg:mptd}, MPTD consists of two phases. 
Phase 1 (Lines 1-8) computes the initial cohesion of each edge and pushes unqualified edges into queue $Q$. Phase 2 (Lines 9-18) removes the unqualified edges in $Q$ from $E_\\mathbf{p}$. Since removing $e_{ij}$ also breaks $\\triangle_{ijk}$, we update $eco_{ik}(G_\\mathbf{p})$ and $eco_{jk}(G_\\mathbf{p})$ in Lines 12-13. After the update, if $e_{ik}$ or $e_{jk}$ becomes unqualified, it is pushed into $Q$ (Lines 14-15).\nLast, the remaining edges and connected vertices are returned as the maximal pattern truss.\n\n\\nop{\nThe correctness of MPTD is based on the fact that MPTD does not remove any edge from a valid maximal pattern truss $C^*_p(\\alpha)$. \nThis is because MPTD only removes unqualified edges and all edges in $C^*_p(\\alpha)$ are qualified edges according to the definition of $C^*_p(\\alpha)$. \n}\n\nWe show the correctness of MPTD as follows. \nSince every edge of a pattern truss has cohesion larger than $\\alpha$ as long as the pattern truss is intact, MPTD, which only removes unqualified edges, never removes an edge of $C^*_\\mathbf{p}(\\alpha)$. \nIf $C^*_\\mathbf{p}(\\alpha)= \\emptyset$, then all edges in $E_\\mathbf{p}$ are removed as unqualified edges and MPTD returns $\\emptyset$. \nIf $C^*_\\mathbf{p}(\\alpha)\\neq \\emptyset$, then all edges in $E_\\mathbf{p}\\setminus E_\\mathbf{p}^*(\\alpha)$ are removed as unqualified edges and MPTD returns exactly $C^*_\\mathbf{p}(\\alpha)$.\n\n\n\\nop{we decompose the theme network $G_p$ into a maximal pattern truss $C^*_p(\\alpha)$ and the remaining subgraph denoted by $\\overline{C^*_p(\\alpha)}=G_p \\setminus C^*_p(\\alpha)$.\nSince adding new edges or vertices to $C^*_p(\\alpha)$ does not decrease the cohesion of the existing edges in $C^*_p(\\alpha)$, the edges and vertices in $\\overline{C^*_p(\\alpha)}$ will not decrease the cohesion of any edge in $C^*_p(\\alpha)$. Furthermore, $C^*_p(\\alpha)$ being the maximal pattern truss equally means all edges in $\\overline{C^*_p(\\alpha)}$ are unqualified, thus $\\overline{C^*_p(\\alpha)}$ will be removed by MPTD and the maximal pattern truss $C^*_p(\\alpha)$ is found. 
Additionally, if theme network $G_p$ does not contain any maximal pattern truss, we have $C^*_p(\\alpha)=\\emptyset$ and MPTD will remove all edges from $G_p$.}\n\nThe time complexity of Algorithm~\\ref{Alg:mptd} is dominated by the complexity of triangle enumeration for each edge $e_{ij}$ in $E_\\mathbf{p}$. \nThis requires checking all neighbouring vertices of $v_i$ and $v_j$, which costs $\\mathcal{O}(d(v_i) + d(v_j))$ time, where $d(v_i)$ and $d(v_j)$ are the degrees of $v_i$ and $v_j$, respectively.\nSince all edges in $E_\\mathbf{p}$ are checked, the cost of Lines 1-8 in Algorithm~\\ref{Alg:mptd} is $\\mathcal{O}(\\sum_{e_{ij}\\in E_\\mathbf{p}} (d(v_i) + d(v_j))) = \\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$.\nThe cost of Lines 9-18 is also $\\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$, where the worst case happens when all edges are removed. Therefore, the time complexity of MPTD is $\\mathcal{O}(\\sum_{v_i\\in V_\\mathbf{p}} d^2(v_i))$. As a result, MPTD can efficiently find the maximal pattern truss of a sparse theme network. \n\n\n\n\n\n\n\\subsection{Theme Community Scanner: A Baseline}\n\\label{sec:tcs}\n\nSince a database network $G$ may induce up to $2^{|S|}$ theme networks, running MPTD on every theme network is impractical.\nIn this subsection, we introduce a baseline method called \\emph{Theme Community Scanner} (TCS).\nThe key idea is to detect the maximal pattern truss of each theme network using MPTD, and to improve the detection efficiency by pre-filtering out the patterns whose maximum frequency over all vertex databases does not reach a minimum frequency threshold $\\epsilon$. \nThe intuition is that patterns with low frequencies are unlikely to be the theme of a theme community.\n\nGiven a frequency threshold $\\epsilon$, TCS first obtains the set of candidate patterns $\\mathcal{P}=\\{\\mathbf{p} \\mid \\exists v_i\\in V, f_i(\\mathbf{p}) > \\epsilon\\}$ by enumerating all patterns in each vertex database. 
Then, for each candidate pattern $\\mathbf{p}\\in \\mathcal{P}$, we induce theme network $G_\\mathbf{p}$ and find the maximal pattern truss by MPTD. The final result is a set of maximal pattern trusses, denoted by $\\mathbb{C}(\\alpha)=\\{C^*_\\mathbf{p}(\\alpha) \\mid C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset, \\mathbf{p}\\in \\mathcal{P}\\}$.\n\n\\nop{However, since a pattern $\\mathbf{p}$ with small frequency can still form a strong theme community if there are a large number of vertices that contain $\\mathbf{p}$ and form a densely connected subgraph.}\n\nThe pre-filtering method of TCS improves the detection efficiency of theme communities. However, it may miss some theme communities, since a pattern $\\mathbf{p}$ with relatively small frequencies in all vertex databases can still form a good theme community if a large number of vertices containing $\\mathbf{p}$ form a densely connected subgraph.\nAs a result, TCS trades accuracy for efficiency.\nThe effect of $\\epsilon$ is discussed in detail in Section~\\ref{Sec:eop}.\n\n\\nop{\nHowever, there is no proper $\\epsilon$ that works for all theme networks and setting $\\epsilon=0$ is the only way to guarantee exact results.\nDetailed discussion on the effect of $\\epsilon$ will be introduced in Section~\\ref{Sec:eop}.\nPre-filtering out the patterns with low frequencies improves the efficiency of TCS, however, since \nIn sum, TCS performs a trade-off between efficiency and accuracy.\nHowever, there is no proper $\\epsilon$ that works for all theme networks and setting $\\epsilon=0$ is the only way to guarantee exact results.\nDetailed discussion on the effect of $\\epsilon$ will be introduced in Section~\\ref{Sec:eop}.\n}\n\n\\nop{\n\\section{A Naive Method}\nIn this section, we introduce a naive method named \\emph{Theme Community Scanner} (TCS) to solve the theme community finding problem in 
Definition~\\ref{Def:theme_comm_finding}.\n\n\\begin{algorithm}[t]\n\\caption{Theme Community Scanner}\n\\label{Alg:tcs}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathcal{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize maximal pattern truss set: $\\mathcal{C}(\\alpha)\\leftarrow \\emptyset$.\n \\STATE Initialize candidate pattern set: $P\\leftarrow \\emptyset$.\n \\FOR{each database $d_i\\in D$}\n \t\\STATE Find pattern set $P_i=\\{p \\mid f_i(p)>\\epsilon\\}$.\n\t\\STATE Update pattern set: $P\\leftarrow P \\cup P_i$.\n \\ENDFOR\n \n \\FOR{each pattern $p\\in P$}\n \t\\STATE Induce theme network $G_p$.\n\t\\STATE Remove vertices in $V'=\\{v_i \\mid f_i(p) \\leq \\epsilon \\}$ from $G_p$.\n \t\\STATE Call Algorithm~\\ref{Alg:mptd} to find $C^*_p(\\alpha)$ in $G_p$.\n \t\\STATE Update: $\\mathcal{C}(\\alpha)\\leftarrow \\mathcal{C}(\\alpha) \\cup C^*_p(\\alpha)$.\n \\ENDFOR\n\\BlankLine\n\\RETURN $\\mathcal{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThe key idea of TCS is to directly apply MPTD (Algorithm~\\ref{Alg:mptd}) in each induced theme network $G_p$, while attempting to improve the efficiency by pre-filtering out the vertices whose pattern frequency $f_i(p)$ is not larger than a small threshold $\\epsilon$. \n\nThe intuition for the pre-filtering step is simply that vertices with low pattern frequency are less likely to be contained in a theme community. \nAs an immediate result of the pre-filtering step, we can skip a theme network $G_p$ if no vertex $v_i\\in V_p$ has a large pattern frequency $f_i(p)>\\epsilon$. Thus, the efficiency of theme community finding can be improved. \n\nAlgorithm~\\ref{Alg:tcs} presents the details of TCS. Steps 1-2 perform initialization. Steps 3-6 enumerates patterns in each database and obtains the candidate pattern set $P=\\{p \\mid \\exists v_i\\in V, f_i(p) > \\epsilon\\}$. 
Steps 7-12 apply MPTD on the theme networks induced by candidate patterns to find the set of maximal pattern trusses $\\mathcal{C}(\\alpha)$. In the end, we can easily obtain theme communities by extracting maximal connected subgraphs from each maximal pattern truss in $\\mathcal{C}(\\alpha)$.\n\nFigure~\\ref{Fig:toy_example}(b)-(c) show some good examples on the effect of the pre-filtering step. In Figure~\\ref{Fig:toy_example}(c), setting $\\epsilon=0.1$ will filter out both $v_1$ and $v_8$ in $G_{p_2}$. This improves the efficiency of TCS without affecting the theme community $\\{v_2, v_3, v_5, v_6, v_7, v_9\\}$. However, in Figure~\\ref{Fig:toy_example}(b), setting $\\epsilon=0.1$ will filter out both $v_3$ and $v_5$, which dismantles the theme community $\\{v_1, v_2, v_3, v_4, v_5\\}$. \n\nIn fact, the pre-filtering step is a trade-off between efficiency and accuracy. However, there is no proper $\\epsilon$ that uniformly works for all theme networks, since different patterns can induce completely different theme networks. \nTherefore, setting $\\epsilon=0$ is the only way to obtain exact theme community finding results. \nDetailed discussion on the effect of $\\epsilon$ will be discussed in Section~\\ref{Sec:eop}.\n}\n\\section{Theme Community Finding}\n\\label{sec:algo}\n\nIn this section, we first explore several fundamental properties of maximal pattern truss that enable fast theme community finding. 
Then, we introduce two efficient and exact theme community finding methods.\n\\nop{\\emph{Theme Community Finder Apriori (TCFA)} and \\emph{Theme Community Finder Intersection (TCFI)}.}\n\n\\subsection{Properties of Maximal Pattern Truss}\n\\label{Sec:pompt}\n\n\\begin{theorem}[Graph Anti-monotonicity]\n\\label{Prop:gam}\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{Prop:gam} is given in Appendix~\\ref{Apd:gam}.\n\n\\nop{\n\\begin{proof}\nSince maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ by proving $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss with respect to threshold $\\alpha$ in theme network $G_{\\mathbf{p}_1}$.\n\nWe construct a subgraph $H_{\\mathbf{p}_1}=(V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1})$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThat is, $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nNext, we prove $H_{\\mathbf{p}_1}$ is a subgraph in $G_{\\mathbf{p}_1}$. \nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\nRecall that $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nNow we prove $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, the following inequation holds for every triangle $\\triangle_{ijk}$ in $H_{\\mathbf{p}_1}$.\n\\begin{equation}\n\\label{eq:mono}\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nSince $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows that the set of triangles in $C^*_{\\mathbf{p}_2}(\\alpha)$ is exactly the same as the set of triangles in $H_{\\mathbf{p}_1}$.\nTherefore, we can derive from Equation~\\ref{eq:mono} and Definition~\\ref{Def:edge_cohesion} that\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))\n\\end{equation}\nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha$ in $G_{\\mathbf{p}_2}$, it follows\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha))>\\alpha\n\\end{equation}\nRecall that $E_{\\mathbf{p}_1} = E^*_{\\mathbf{p}_2}(\\alpha)$, it follows\n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E_{\\mathbf{p}_1}, eco_{ij}(H_{\\mathbf{p}_1})\\geq eco_{ij}(C^*_{\\mathbf{p}_2}(\\alpha)) > \\alpha\n\\end{equation}\nThis means $H_{\\mathbf{p}_1}$ is a pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\nRecall that 
$H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$, it follows that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$.\nThe Theorem follows.\n\\end{proof}\n}\n\n\\nop{\n\n\nFirst, we prove $C^*_{\\mathbf{p}_2}$ is a subgraph in $G_{\\mathbf{p}_1}$. \nSince $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. \nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. \nSince $C^*_{\\mathbf{p}_2}(\\alpha)$ is the maximal pattern truss in $G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$.\n\nSecond, we construct a subgraph $H_{\\mathbf{p}_1}=\\{V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1}\\}$ in theme network $G_{\\mathbf{p}_1}$, where $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$. \nThis means $H_{\\mathbf{p}_1}=C^*_{\\mathbf{p}_2}(\\alpha)$.\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nTo prove the first property, since the maximal pattern truss $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$, we can prove $C^*_{\\mathbf{p}_1}(\\alpha)\\neq\\emptyset$ by finding a non-empty pattern truss with respect to threshold $\\alpha$ in $G_{\\mathbf{p}_1}$.\n\n\n\\begin{proof}\nSince $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, by the anti-monotonicity of patterns~\\cite{agrawal1994fast,han2000mining}, it follows $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$.\nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\n\nWe prove this in two cases.\nFirst, if $C^*_{\\mathbf{p}_2}(\\alpha)=\\emptyset$, then $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$ holds.\nSecond, if $C^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$, we know from the proof of Proposition~\\ref{Prop:pam} that $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$. \nRecall Definition~\\ref{Def:maximal_pattern_truss} that $C^*_{\\mathbf{p}_1}(\\alpha)$ is the union of all pattern trusses in $G_{\\mathbf{p}_1}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\n\\end{proof}\n}\n\n\\begin{proposition}[Pattern Anti-monotonicity]\n\\label{Prop:pam}\nFor $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$ and a cohesion threshold $\\alpha$, the following two properties hold.\n\\begin{enumerate}\n\t\\item If $C^*_{\\mathbf{p}_2}(\\alpha)\\neq \\emptyset$, then $C^*_{\\mathbf{p}_1}(\\alpha)\\neq\\emptyset$.\n\t\\item If $C^*_{\\mathbf{p}_1}(\\alpha)= \\emptyset$, then $C^*_{\\mathbf{p}_2}(\\alpha)= \\emptyset$.\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nAccording to Theorem~\\ref{Prop:gam}, since $\\mathbf{p}_1\\subseteq \\mathbf{p}_2$, $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$.\nThe proposition follows immediately.\n\\end{proof}\n\n\\nop{it follows $V^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$, $E^*_{\\mathbf{p}_2}(\\alpha)\\neq\\emptyset$ and $\\forall v_i\\in V^*_{\\mathbf{p}_2}(\\alpha), f_i(\\mathbf{p}_2)>0$.\nSince $\\forall v_i \\in V, f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$, it follows $\\forall v_i\\in V_{\\mathbf{p}_1}, f_i(\\mathbf{p}_1)>0$ and $H_{\\mathbf{p}_1}\\neq\\emptyset$. \n}\n\\nop{According to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. 
\nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\nConsidering $V_{\\mathbf{p}_1}=V^*_{\\mathbf{p}_2}(\\alpha)$ and $E_{\\mathbf{p}_1}=E^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n}\n\n\\nop{\nFirst, we prove $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$. \n\nAccording to the definition of theme network, it follows $G_{\\mathbf{p}_2}\\subseteq G_{\\mathbf{p}_1}$. \nSince $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_2}$, it follows $C^*_{\\mathbf{p}_2}(\\alpha)\\subseteq G_{\\mathbf{p}_1}$.\nConsidering $H_{\\mathbf{p}_1}\\subseteq C^*_{\\mathbf{p}_2}(\\alpha)$, it follows $H_{\\mathbf{p}_1}\\subseteq G_{\\mathbf{p}_1}$.\n\nSecond, we define $H_{\\mathbf{p}_1}=\\{V_{\\mathbf{p}_1}, E_{\\mathbf{p}_1}\\}$, where $V_{\\mathbf{p}_1}$\n\nwe prove $C^*_{\\mathbf{p}_2}(\\alpha)$ is a pattern truss in $G_{\\mathbf{p}_1}$. Since $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, it follows $\\forall v_i \\in V^*_{\\mathbf{p}_2}(\\alpha), f_i(\\mathbf{p}_1) \\geq f_i(\\mathbf{p}_2)$. 
\nTherefore, for each $\\triangle_{ijk}$ in $C^*_{\\mathbf{p}_2}(\\alpha)$, we have\n\\begin{equation}\\nonumber\n\\begin{split}\n\t\\min(f_i(\\mathbf{p}_1), f_j(\\mathbf{p}_1), f_k(\\mathbf{p}_1)) \\geq\\min(f_i(\\mathbf{p}_2), f_j(\\mathbf{p}_2), f_k(\\mathbf{p}_2))\n\\end{split}\n\\end{equation}\nThus, according to Definition~\\ref{Def:edge_cohesion} it follows that \n\\begin{equation}\\nonumber\n\t\\forall e_{ij}\\in E^*_{\\mathbf{p}_2}(\\alpha), eco_{ij}(\\mathbf{p}_1)\\geq eco_{ij}(\\mathbf{p}_2)\n\\end{equation}\n}\n\n\n\n\n\n\n\\begin{proposition}[Graph Intersection Property]\n\\label{Lem:gip}\nIf $\\mathbf{p}_1 \\subseteq \\mathbf{p}_3$ and $\\mathbf{p}_2 \\subseteq \\mathbf{p}_3$, then $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha) \\cap C^*_{\\mathbf{p}_2}(\\alpha)$.\n\\end{proposition}\n\n\\begin{proof}\nSince $\\mathbf{p}_1 \\subseteq \\mathbf{p}_3$, according to Theorem~\\ref{Prop:gam}, $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_1}(\\alpha)$. \nSimilarly, $C^*_{\\mathbf{p}_3}(\\alpha)\\subseteq C^*_{\\mathbf{p}_2}(\\alpha)$. 
The proposition follows.\n\\end{proof}\n\n\\nop{\nThe proofs of Propositions~\\ref{Prop:pam}, \\ref{Prop:gam} and \\ref{Lem:gip} can be found in Sections~\\ref{Apd:pam}, \\ref{Apd:gam} and \\ref{Apd:gip} of the appendix, respectively.\n}\n\n\\begin{algorithm}[t]\n\\caption{Generate Apriori Candidate Patterns}\n\\label{Alg:apriori}\n\\KwIn{The set of Length-$(k-1)$ qualified patterns $\\mathcal{P}^{k-1}$.}\n\\KwOut{The set of Length-$k$ candidate patterns $\\mathcal{M}^k$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialize: $\\mathcal{M}^k\\leftarrow \\emptyset$.\n \t\\FOR{$\\{\\mathbf{p}^{k-1}, \\mathbf{q}^{k-1}\\} \\subset \\mathcal{P}^{k-1} \\land |\\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}| = k$}\n\t\t\\STATE $\\mathbf{p}^k\\leftarrow \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$.\n\t\t\\STATE \\textbf{if} all length-$(k-1)$ sub-patterns of $\\mathbf{p}^k$ are qualified \\textbf{then} $\\mathcal{M}^k\\leftarrow \\mathcal{M}^k \\cup \\mathbf{p}^k$.\n\t\t\n\t\n\t\n\t\\ENDFOR\n\\BlankLine\n\\RETURN $\\mathcal{M}^k$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n\\caption{Theme Community Finder Apriori (TCFA)}\n\\label{Alg:tcfa}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \n \\STATE Initialize: $\\mathcal{P}^1$, $\\mathbb{C}^1(\\alpha)$, $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha)$, $k\\leftarrow2$.\n \\WHILE{$\\mathcal{P}^{k-1}\\neq \\emptyset$}\n \t\\STATE Call Algorithm~\\ref{Alg:apriori}: $\\mathcal{M}^k\\leftarrow \\mathcal{P}^{k-1}$.\n\t\\STATE $\\mathcal{P}^k\\leftarrow \\emptyset$, $\\mathbb{C}^k(\\alpha)\\leftarrow \\emptyset$.\n\t \\FOR{each length-$k$ pattern $\\mathbf{p}^k\\in \\mathcal{M}^k$}\n \t\t\\STATE Induce $G_{\\mathbf{p}^k}$ from $G$.\n\t\t\\STATE Compute $C^*_{\\mathbf{p}^k}(\\alpha)$ using $G_{\\mathbf{p}^k}$ by Algorithm~\\ref{Alg:mptd}.\n\t\n\t\t\\STATE \\textbf{if} 
$C^*_{\\mathbf{p}^k}(\\alpha) \\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^k(\\alpha)\\leftarrow \\mathbb{C}^k(\\alpha) \\cup C^*_{\\mathbf{p}^k}(\\alpha)$, and $\\mathcal{P}^k\\leftarrow \\mathcal{P}^k\\cup \\mathbf{p}^k$.\n \t\\ENDFOR\n\t\\STATE $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}(\\alpha) \\cup \\mathbb{C}^k(\\alpha)$.\n \t\\STATE $k\\leftarrow k+1$.\n \\ENDWHILE\n\\BlankLine\n\\RETURN $\\mathbb{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\nop{\n\\begin{algorithm}[t]\n\\caption{Initialization}\n\\label{Alg:init}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{Length-$1$ qualified pattern set $P^1$ and the corresponding maximal pattern trusses $\\mathbb{C}^1(\\alpha)$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialization: $P^1\\leftarrow \\emptyset$, $\\mathbb{C}^1(\\alpha)\\leftarrow \\emptyset$.\n \\FOR{each length-1 pattern $p^1\\in S$}\n \t\\STATE Induce theme network $G_{p^1}$ and call Algorithm~\\ref{Alg:mptd}.\n\t\\STATE \\textbf{if} $\\exists C^*_{p^1}(\\alpha)\\subset G_{p^1}$ and $C^*_{p^1}\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^1(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha) \\cup C^*_{p^1}(\\alpha)$ and $P^1\\leftarrow P^1\\cup p^1$.\n \\ENDFOR\n\\BlankLine\n\\RETURN $\\{P^1, \\mathbb{C}^1(\\alpha)\\}$.\n\\end{algorithmic}\n\\end{algorithm}\n}\n\n\\nop{\n\\begin{algorithm}[t]\n\\caption{Theme Community Finder Intersection}\n\\label{Alg:tcfi}\n\\KwIn{A database network $G$ and a user input $\\alpha$.}\n\\KwOut{The set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ in $G$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \n \\STATE Initialize: $\\mathcal{P}^1$, $\\mathbb{C}^1(\\alpha)$, $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}^1(\\alpha)$, $k\\leftarrow2$.\n \\WHILE{$\\mathcal{P}^{k-1}\\neq \\emptyset$}\n \t\\STATE Call Algorithm~\\ref{Alg:apriori}: $\\mathcal{M}^k\\leftarrow \\mathcal{P}^{k-1}$.\n\t\\STATE $\\mathcal{P}^k\\leftarrow \\emptyset$, $\\mathbb{C}^k(\\alpha)\\leftarrow \\emptyset$.\n\t 
\\FOR{each length-$k$ pattern $\\mathbf{p}^k\\in \\mathcal{M}^k$}\n \t\t\\STATE Induce $G'_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{p}^{k-1}}(\\alpha)$.\n\t\t\\STATE Call Algorithm~\\ref{Alg:mptd} with $G'_{\\mathbf{p}^k}$ as input.\n\t\t\\STATE \\textbf{if} $\\exists C^*_{\\mathbf{p}^k}(\\alpha)\\subset G'_{\\mathbf{p}^k}$ and $C^*_{\\mathbf{p}^k}\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}^k(\\alpha)\\leftarrow \\mathbb{C}^k(\\alpha) \\cup C^*_{\\mathbf{p}^k}(\\alpha)$, $\\mathcal{P}^k\\leftarrow \\mathcal{P}^k\\cup \\mathbf{p}^k$. \n \t\\ENDFOR\n\t\n\t\\STATE $\\mathbb{C}(\\alpha)\\leftarrow \\mathbb{C}(\\alpha) \\cup \\mathbb{C}^k(\\alpha)$.\n \t\\STATE $k\\leftarrow k+1$.\n \\ENDWHILE\n\\BlankLine\n\\RETURN $\\mathbb{C}(\\alpha)$.\n\\end{algorithmic}\n\\end{algorithm}\n}\n\n\n\\subsection{Theme Community Finder Apriori}\n\\label{Sec:tcfa}\nIn this subsection, we introduce \\emph{Theme Community Finder Apriori} (TCFA) to solve the theme community finding problem.\nThe key idea of TCFA is to improve theme community finding efficiency by early pruning unqualified patterns in an Apriori-like manner~\\cite{agrawal1994fast}.\n\nA pattern $\\mathbf{p}$ is said to be \\emph{unqualified} if $C^*_\\mathbf{p}(\\alpha)=\\emptyset$, and to be \\emph{qualified} if $C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset$.\nFor two patterns $\\mathbf{p}_1$ and $\\mathbf{p}_2$, if $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$, $\\mathbf{p}_1$ is called a \\emph{sub-pattern} of $\\mathbf{p}_2$.\n\nAccording to the second property of Proposition~\\ref{Prop:pam}, for two patterns $\\mathbf{p}_1$ and $\\mathbf{p}_2$, if $\\mathbf{p}_1 \\subseteq \\mathbf{p}_2$ and $\\mathbf{p}_1$ is unqualified, then $\\mathbf{p}_2$ is unqualified, thus $\\mathbf{p}_2$ can be immediately pruned without running MPTD (Algorithm~\\ref{Alg:mptd}) on $G_{\\mathbf{p}_2}$. 
Therefore, we can prune a length-$k$ pattern if any of its length-$(k-1)$ sub-patterns is unqualified.\n\nAlgorithm~\\ref{Alg:apriori} shows how we generate the set of length-$k$ candidate patterns by retaining only the length-$k$ patterns whose length-$(k-1)$ sub-patterns are all qualified.\nThis efficiently prunes a large number of unqualified patterns without running MPTD.\n\\nop{\nSpecifically, line 3 generates a possible length-$k$ candidate pattern $\\mathbf{p}^k$ by merging two length-$(k-1)$ qualified patterns $\\{\\mathbf{p}^{k-1}, \\mathbf{q}^{k-1}\\}$. \nLine 4 retains $\\mathbf{p}^k$ as a candidate pattern if all its length-$(k-1)$ sub-patterns are qualified. \n}\n\nAlgorithm~\\ref{Alg:tcfa} introduces the details of TCFA. Line 1 calculates the set of length-1 qualified patterns $\\mathcal{P}^1=\\{\\mathbf{p} \\subset S \\mid C^*_\\mathbf{p}(\\alpha)\\neq\\emptyset, |\\mathbf{p}| = 1\\}$ and the corresponding set of maximal pattern trusses $\\mathbb{C}^1(\\alpha) = \\{C^*_\\mathbf{p}(\\alpha) \\mid \\mathbf{p}\\in \\mathcal{P}^1 \\}$. \nThis requires running MPTD (Algorithm~\\ref{Alg:mptd}) on each theme network induced by a single item in $S$. \n\\nop{Since the theme networks induced by different items are independent, we can run this process in parallel. Our implementation use multiple threads for this step.}\nLine 3 calls Algorithm~\\ref{Alg:apriori} to generate the set of length-$k$ candidate patterns $\\mathcal{M}^k$.\nLines 5-9 remove the unqualified candidate patterns in $\\mathcal{M}^k$ by discarding every candidate pattern that cannot form a non-empty maximal pattern truss.\nIn this way, Lines 2-12 iteratively generate the set of length-$k$ qualified patterns $\\mathcal{P}^k$ from $\\mathcal{P}^{k-1}$ until no qualified patterns can be found. 
\nLast, the exact set of maximal pattern trusses $\\mathbb{C}(\\alpha)$ is returned.\n\n\\nop{Specifically, step 3 calls Algorithm~\\ref{Alg:apriori} to generate candidate pattern set $L^k$; steps 4-9 obtain $P^k$ by retaining only the qualified patterns in $L^k$.}\n\n\\nop{\nAlgorithm~\\ref{Alg:apriori} shows how we generate the length-$k$ candidate patterns and prune unqualified patterns in an Apriori-like manner.\nAccording to the second property of Proposition~\\ref{Prop:pam}, for two patterns $p_1$ and $p_2$, if $p_1 \\subset p_2$ and $p_1$ is unqualified, then $p_2$ is unqualified and can be immediately pruned without running Algorithm~\\ref{Alg:mptd} on $G_{p_2}$. \nAs a result, we perform efficient pattern pruning by retaining only the length-$k$ candidate patterns whose length-$(k-1)$ sub-patterns are all qualified patterns.\nSpecifically, step 3 generates a possible length-$k$ candidate pattern $p^k$ by merging two length-$(k-1)$ qualified patterns $\\{p^{k-1}, q^{k-1}\\}$. Step 4 retains $p^k$ as a candidate pattern if all its length-$(k-1)$ sub-patterns are qualified. \n}\n\nCompared with the baseline TCS in Section~\\ref{sec:tcs}, TCFA achieves a substantial efficiency improvement by effectively pruning a large number of unqualified patterns using the Apriori-like method in Algorithm~\\ref{Alg:apriori}.\nHowever, due to the well-known limitation of Apriori~\\cite{agrawal1994fast}, the set of candidate patterns $\\mathcal{M}^k$ is often very large and still contains many unqualified candidate patterns. \nConsequently, Lines 5-9 of Algorithm~\\ref{Alg:tcfa} become the bottleneck of TCFA.\nWe solve this problem in the next subsection.\n\n\n\\nop{\ncan be identified and removed by running Algorithm~\\ref{Alg:mptd} on the corresponding theme networks, \n\nThus, checking the qualification of such unqualified patterns in lines 5-9 of Algorithm~\\ref{Alg:tcfa} becomes the bottleneck of computational efficiency. 
\n}\n\n\\subsection{Theme Community Finder Intersection}\n\\label{Sec:tcfi}\nThe \\emph{Theme Community Finder Intersection} (TCFI) method significantly improves the efficiency of TCFA by further pruning unqualified patterns in $\\mathcal{M}^k$ using Proposition~\\ref{Lem:gip}.\n\nConsider pattern $\\mathbf{p}^k$ of length $k$ and patterns $\\mathbf{p}^{k-1}$ and $\\mathbf{q}^{k-1}$ of length $k-1$.\nAccording to Proposition~\\ref{Lem:gip}, if $\\mathbf{p}^k = \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$, then $C^*_{\\mathbf{p}^k}(\\alpha)\\subseteq C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$. \nTherefore, if $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)=\\emptyset$, then $C^*_{\\mathbf{p}^k}(\\alpha)=\\emptyset$. Thus, we can prune $\\mathbf{p}^k$ immediately.\nIf $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq\\emptyset$, we can induce the theme network $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$\nand find $C^*_{\\mathbf{p}^k}(\\alpha)$ within $G_{\\mathbf{p}^k}$ by MPTD.\n\n\\nop{\nDenote $H=C^*_{p^{k-1}}(\\alpha) \\cap C^*_{q^{k-1}}(\\alpha)$ and let $G'_{p^k}$ denote the theme network induced from $H$, we can derive the following results using Lemma~\\ref{Lem:gip}. \nFirst, if $H=\\emptyset$, then $C^*_{p^k}(\\alpha)=\\emptyset$, thus we can prune $p^k$ immediately.\nSecond, if $H\\neq\\emptyset$, then $C^*_{p^k}(\\alpha) \\subseteq G'_{p^k}$, thus we can run Algorithm~\\ref{Alg:mptd} on $G'_{p^k}$ to find $C^*_{p^k}(\\alpha)$.\n}\n\nAccordingly, TCFI improves TCFA by modifying only Line 6 of Algorithm~\\ref{Alg:tcfa}.\nInstead of inducing $G_{\\mathbf{p}^k}$ from $G$, TCFI induces $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ when $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq \\emptyset$. 
\nHere, $\\mathbf{p}^{k-1}$ and $\\mathbf{q}^{k-1}$ are qualified patterns in $\\mathcal{P}^{k-1}$ such that $\\mathbf{p}^k = \\mathbf{p}^{k-1} \\cup \\mathbf{q}^{k-1}$.\n\nUsing the graph intersection property in Proposition~\\ref{Lem:gip}, TCFI efficiently prunes a large number of unqualified candidate patterns and dramatically improves the detection efficiency. \nFirst, TCFI prunes a large number of candidate patterns in $\\mathcal{M}^k$ by simply checking whether $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)=\\emptyset$.\nSecond, when $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)\\neq\\emptyset$, inducing $G_{\\mathbf{p}^k}$ from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is more efficient than inducing $G_{\\mathbf{p}^k}$ from $G$, since $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is often much smaller than $G$. \nThird, $G_{\\mathbf{p}^k}$ induced from $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ is often much smaller than $G_{\\mathbf{p}^k}$ induced from $G$, which significantly reduces the time cost of running MPTD on $G_{\\mathbf{p}^k}$.\nFourth, according to Theorem~\\ref{Prop:gam}, the size of a maximal pattern truss decreases when the length of the pattern increases. Thus, when a pattern grows longer, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ decreases rapidly, which significantly improves the pruning effectiveness of TCFI.\nLast, as will be discussed in Section~\\ref{Sec:eotcf}, most maximal pattern trusses are small local subgraphs in a database network. Such small subgraphs in different local regions of a large sparse database network generally do not intersect with each other.\n\n\n\\nop{\nFurther more, according to Proposition~\\ref{Prop:gam}, the size of maximal pattern truss reduces when pattern length $k$ increases. 
Therefore, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ often decreases with the increase of $k$. As a result, TCFI tends to be more effective when $k$ increases. \nMore often than not, in real world database networks, the size of $C^*_{\\mathbf{p}^{k-1}}(\\alpha) \\cap C^*_{\\mathbf{q}^{k-1}}(\\alpha)$ decreases significantly with the increase of pattern length $k$.\nAs the pattern grows longer, the pruning of TCFI becomes impressively effective, leading to its significant efficiency performance in real world database networks.\n\n\nSo far, we have proposed TCS, TCFA and TCFI to solve the theme community finding problem in Definition~\\ref{Def:theme_comm_finding}. \nHowever, when the user inputs a new threshold $\\alpha$, all methods have to recompute from scratch. Thus, they are not efficient in answering frequent user queries. \nFortunately, such expensive re-computation can be avoided by decomposing and indexing all maximal pattern trusses. In the next section, we introduce our efficient theme community indexing structure that achieves fast user query answering.\n\n\nIn the next section, we introduce an indexing structure that efficiently indexes decomposed maximal pattern trusses in a tree to achieve fast query answering.\n\nTo quickly answer user queries, we introduce an efficient theme community indexing structure .\n}\n\n\n\\section{Theme Community Indexing}\n\\label{sec:index}\n\nWhen a user inputs a new cohesion threshold $\\alpha$, TCS, TCFA and TCFI have to recompute from scratch. \nCan we save the re-computation cost by decomposing and indexing all maximal pattern trusses to achieve fast user query answering?\nIn this section, we propose the \\emph{Theme Community Tree} (TC-Tree) to support fast query answering.\nWe first introduce how to decompose a maximal pattern truss.\nThen, we illustrate how to build a TC-Tree with decomposed maximal pattern trusses. 
Last, we present a querying method that efficiently answers user queries.\n\n\\subsection{Maximal Pattern Truss Decomposition}\n\\label{Sec:mpt_dec}\nWe first explore how to decompose a maximal pattern truss into multiple disjoint sets of edges\n\n\\nop{\n\\begin{property}\n\\label{Obs:anti_alpha}\n$\\forall p\\subseteq S$, if $\\alpha_2 > \\alpha_1 \\geq 0$, then $C^*_{p}(\\alpha_2)\\subseteq C^*_{p}(\\alpha_1)$. \n\\end{property}\n\nProperty~\\ref{Obs:anti_alpha} shows that the size of maximal pattern truss reduces when the threshold increases from $\\alpha_1$ to $\\alpha_2$.\nAccording to Definition~\\ref{Def:pattern_truss}, the cohesion of all edges in $C^*_{p}(\\alpha_1)$ are larger than $\\alpha_1$. Since $\\alpha_2 > \\alpha_1$, there can be some edges in $C^*_{p}(\\alpha_1)$ whose cohesion is smaller than $\\alpha_2$.\nSince $C^*_{p}(\\alpha_2)$ is formed by removing such edges from $C^*_{p}(\\alpha_1)$, we have $C^*_{p}(\\alpha_2)\\subseteq C^*_{p}(\\alpha_1)$.\n}\n\n\\begin{theorem}\n\\label{Obs:discrete}\nGiven a theme network $G_\\mathbf{p}$, a cohesion threshold $\\alpha_2$ and a maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ in $G_\\mathbf{p}$ whose minimum edge cohesion is $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$, if $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.\n\\nop{\n$\\forall \\mathbf{p}\\subseteq S$, if $C^*_{\\mathbf{p}}(\\alpha_1)\\neq\\emptyset$, $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$ and $\\alpha_2 \\geq \\beta$, then $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$.}\n\\end{theorem}\n\nThe proof of Theorem~\\ref{Obs:discrete} is given in Appendix~\\ref{Apd:discrete}.\n\n\n\\nop{\n\\begin{proof}\n\\nop{\nRecall that $C^*_\\mathbf{p}(\\alpha_1)$ and $C^*_\\mathbf{p}(\\alpha_2)$ are edge-induced subgraphs induced from the sets of edges 
$E^*_\\mathbf{p}(\\alpha_1)$ and $E^*_\\mathbf{p}(\\alpha_2)$, respectively, we can prove $C^*_{\\mathbf{p}}(\\alpha_2)\\subset C^*_{\\mathbf{p}}(\\alpha_1)$ by proving $E^*_\\mathbf{p}(\\alpha_2)\\subset E^*_\\mathbf{p}(\\alpha_1)$ as follows.\n}\n\\nop{For notational efficiency, we denote $\\beta = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$.}\n\nFirst, we prove $\\alpha_2 > \\alpha_1$. \nSince $C^*_{\\mathbf{p}}(\\alpha_1)$ is a maximal pattern truss with respect to threshold $\\alpha_1$, from Definition~\\ref{Def:pattern_truss}, we have $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_1), eco_{ij}(C^*_\\mathbf{p}(\\alpha_1)) > \\alpha_1$. \nSince $\\beta$ is the minimum edge cohesion of all the edges in $C^*_{\\mathbf{p}}(\\alpha_1)$, $\\beta > \\alpha_1$. \nSince $\\alpha_2 \\geq \\beta$, $\\alpha_2 > \\alpha_1$.\n\nSecond, we prove $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2 > \\alpha_1$, we know from Definition~\\ref{Def:pattern_truss} that $\\forall e_{ij} \\in E^*_\\mathbf{p}(\\alpha_2), eco_{ij}(C^*_\\mathbf{p}(\\alpha_2)) > \\alpha_2> \\alpha_1$.\nThis means that $C^*_\\mathbf{p}(\\alpha_2)$ is a pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$.\nSince $C^*_\\mathbf{p}(\\alpha_1)$ is the maximal pattern truss with respect to cohesion threshold $\\alpha_1$ in $G_\\mathbf{p}$, from Definition~\\ref{Def:maximal_pattern_truss} we have $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$.\n\nLast, we prove $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$. \nLet $e^*_{ij}$ be the edge with minimum edge cohesion $\\beta$ in $ E^*_\\mathbf{p}(\\alpha_1)$. \nSince $\\alpha_2\\geq \\beta$, According to Definition~\\ref{Def:pattern_truss}, $e^*_{ij}\\not\\in E^*_\\mathbf{p}(\\alpha_2)$. 
\nThus, $E^*_{\\mathbf{p}}(\\alpha_1)\\neq E^*_{\\mathbf{p}}(\\alpha_2)$ and $C^*_\\mathbf{p}(\\alpha_2)\\neq C^*_\\mathbf{p}(\\alpha_1)$.\nRecall that $C^*_\\mathbf{p}(\\alpha_2)\\subseteq C^*_\\mathbf{p}(\\alpha_1)$. The theorem follows.\n\\end{proof}\n}\n\nTheorem~\\ref{Obs:discrete} indicates that the size of the maximal pattern truss $C^*_{\\mathbf{p}}(\\alpha_1)$ decreases only when cohesion threshold $\\alpha_2 \\geq \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_1)} eco_{ij}(C^*_\\mathbf{p}(\\alpha_1))$.\nTherefore, we can decompose a maximal pattern truss of $G_\\mathbf{p}$ into a sequence of disjoint sets of edges using a sequence of ascending cohesion thresholds $\\mathcal{A}_\\mathbf{p}=\\alpha_0, \\alpha_1, \\cdots, \\alpha_h$, where $\\alpha_0 = 0$, $\\alpha_k = \\min\\limits_{e_{ij}\\in E^*_\\mathbf{p}(\\alpha_{k-1})} eco_{ij}(C^*_\\mathbf{p}(\\alpha_{k-1}))$ and $k\\in[1,h]$.\n\nFor $\\alpha_0 = 0$, we call MPTD\nto calculate $C^*_\\mathbf{p}(\\alpha_0)$, which is the largest maximal pattern truss in $G_\\mathbf{p}$. \nFor $\\alpha_1, \\ldots, \\alpha_h$, we decompose $C^*_\\mathbf{p}(\\alpha_0)$ into a sequence of removed sets of edges $R_\\mathbf{p}(\\alpha_1), \\ldots, R_\\mathbf{p}(\\alpha_h)$, where $R_\\mathbf{p}(\\alpha_{k})=E^*_\\mathbf{p}(\\alpha_{k-1}) \\setminus E^*_\\mathbf{p}(\\alpha_{k})$ is the set of edges removed when $C^*_\\mathbf{p}(\\alpha_{k-1})$ shrinks to $C^*_\\mathbf{p}(\\alpha_{k})$. \nThe decomposition iterates until all edges in $C^*_\\mathbf{p}(\\alpha_0)$ are removed.\n\nThe decomposition results are stored in a linked list $\\mathcal{L}_\\mathbf{p}=\\mathcal{L}_\\mathbf{p}(\\alpha_1), \\ldots, \\mathcal{L}_\\mathbf{p}(\\alpha_h)$, where the $k$-th node stores $\\mathcal{L}_\\mathbf{p}(\\alpha_k) = (\\alpha_k, R_\\mathbf{p}(\\alpha_k))$. Since $\\mathcal{L}_\\mathbf{p}$ stores the same number of edges as in $E^*_\\mathbf{p}(\\alpha_0)$, it does not incur much extra memory cost. 
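The decomposition loop above can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: `cohesion_of` is a hypothetical callable standing in for the edge-cohesion computation inside MPTD, and edges are represented abstractly as hashable keys. `decompose_truss` builds the chain of $(\alpha_k, R_\mathbf{p}(\alpha_k))$ pairs, and `truss_edges_at` recovers the edges surviving a threshold $\alpha$ as the union of the removed sets with $\alpha_k > \alpha$.

```python
def decompose_truss(edges, cohesion_of):
    """Decompose the largest maximal pattern truss C*_p(0) into the chain
    [(alpha_1, R_p(alpha_1)), ..., (alpha_h, R_p(alpha_h))].

    cohesion_of(survivors) -> {edge: cohesion} is a hypothetical stand-in
    for the edge-cohesion computation used by MPTD.
    """
    chain, survivors = [], set(edges)
    while survivors:
        alpha_k = min(cohesion_of(survivors).values())  # next threshold
        removed = set()
        while True:  # cascade: removals can drop other edges below alpha_k
            doomed = {e for e, c in cohesion_of(survivors).items() if c <= alpha_k}
            if not doomed:
                break
            survivors -= doomed
            removed |= doomed
        chain.append((alpha_k, removed))
    return chain


def truss_edges_at(chain, alpha):
    """Recover E*_p(alpha) as the union of R_p(alpha_k) with alpha_k > alpha."""
    return set().union(*(r for a_k, r in chain if a_k > alpha))
```

The chain is stored in ascending threshold order, so the last entry directly gives the upper bound $\alpha^*_\mathbf{p}$ mentioned below.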
\n\n\\nop{\n$C^*_{p}(\\alpha_i)$ reduces to $C^*_{p}(\\alpha_{i+1})$ when $\\alpha_{i+1}\\in [\\beta_i, \\beta_{i+1})$, where $\\beta_i = \\psi_p(\\alpha_i)$.\n}\n\nUsing $\\mathcal{L}_\\mathbf{p}$, we can efficiently get the maximal pattern truss $C^*_\\mathbf{p}(\\alpha)$ for any $\\alpha\\geq 0$ by first obtaining $E^*_\\mathbf{p}(\\alpha)$ as\n\\begin{equation}\n\\label{Eqn:get_mpt}\nE^*_\\mathbf{p}(\\alpha)=\\bigcup\\limits_{\\alpha_k>\\alpha} R_\\mathbf{p}(\\alpha_k)\n\\end{equation}\nand then inducing $V^*_\\mathbf{p}(\\alpha)$ from $E^*_\\mathbf{p}(\\alpha)$ according to Definition~\\ref{Def:pattern_truss}.\n$\\mathcal{L}_\\mathbf{p}$ also provides the nontrivial range of $\\alpha$ for $G_\\mathbf{p}$. \nThe upper bound of $\\alpha$ in $G_\\mathbf{p}$ is $\\alpha^*_\\mathbf{p}=\\max\\mathcal{A}_\\mathbf{p}$, since $\\forall \\alpha\\geq \\alpha^*_\\mathbf{p}$ we have $C^*_\\mathbf{p}(\\alpha)=\\emptyset$. \nTherefore, the nontrivial range of $\\alpha$ for $G_\\mathbf{p}$ is $\\alpha\\in[0, \\alpha^*_\\mathbf{p})$. \n$\\alpha^*_\\mathbf{p}$ can be easily obtained by visiting the last entry of $\\mathcal{L}_\\mathbf{p}$.\n\n\\subsection{Theme Community Tree}\nA TC-Tree, denoted by $\\mathcal{T}$, is an extension of a \\emph{set enumeration tree} (SE-Tree)~\\cite{rymon1992search} and is carefully customized for efficient theme community indexing and query answering.\n\n\\nop{\nA SE-Tree is a basic data structure that enumerates all the patterns in the power set of $S$. \nEach node of the SE-Tree stores an item in $S$. \nThe $i$-th node of the SE-Tree is denoted by $n_i$, the item stored in $n_i$ is denoted by $s_{n_i}\\in S$.\nWe map each item in $S$ to a unique rank using a \\emph{ranking function} $\\psi: S \\rightarrow \\mathbb{Z}^+$. 
\nFor the $j$-th item $s_j$ in $S$, the ranking function maps $s_j$ to its rank $j$ in $S$, that is, $\\psi(s_j) = j$.\n\\nop{We compare a pair of items $s_i$ and $s_j$ in $S$ by a \\emph{pre-imposed order}, that is, \\emph{if $j > i$ then $s_j > s_i$}.}\nThen, for every node $n_i$ of the SE-Tree, we build a child of $n_i$ for each item $s_j\\in S$ that satisfies $\\psi(s_j) > \\psi(s_{n_i})$.\nIn this way, every node $n_i$ of the SE-Tree uniquely represents a pattern in the power set of $S$, denoted by $\\mathbf{p}_i\\subseteq S$, which is the union of the items stored in all the nodes along the path from the root to $n_i$. \nAs shown in Figure~\\ref{Fig:tct}, the SE-Tree of $S=\\{s_1, s_2, s_3, s_4\\}$ has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$.\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, where $s_{n_0}=\\emptyset$, $s_{n_1}=s_1$, $s_{n_6}=s_3$ and $s_{n_{13}}=s_4$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.\n}\n\nA SE-Tree is a basic data structure that enumerates all the subsets of a set $S$. \nA total order $\\prec$ on the items in $S$ is assumed. \nThus, any subset of $S$ can be written as a sequence of items in order $\\prec$. \n\nEvery node of a SE-Tree uniquely represents a subset of $S$.\nThe root node represents $\\emptyset$. \nFor subsets $S_1$ and $S_2$ of $S$, the node representing $S_2$ is the child of the node representing $S_1$, if $S_1 \\subset S_2$; $|S_2 \\setminus S_1|=1$; and $S_1$ is a prefix of $S_2$ when $S_1$ and $S_2$ are written as sequences of items in order $\\prec$.\n\nEach node of a SE-Tree only stores the item in $S$ that is appended to the parent node to extend the child from the parent.\nIn this way, the set of items represented by node $n_i$ is the union of the items stored in all the nodes along the path from the root to $n_i$.\nFigure~\\ref{Fig:tct} shows an example of the SE-tree of set $S=\\{s_1, s_2, s_3, s_4\\}$. 
\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, thus the set of items represented by $n_{13}$ is $\\{s_1, s_3, s_4\\}$.\n\n\\nop{The SE-Tree efficiently stores the item $S_2\\setminus S_1$ in the child node that represents $S_2$. In this way, }\n\n\n\n\n\\nop{\nEach node of the SE-Tree stores an item in $S$. \nThe $i$-th node of the SE-Tree is denoted by $n_i$, the item stored in $n_i$ is denoted by $s_{n_i}\\in S$.\nWe map each item in $S$ to a unique rank using a \\emph{ranking function} $\\psi: S \\rightarrow \\mathbb{Z}^+$. \nFor the $j$-th item $s_j$ in $S$, the ranking function maps $s_j$ to its rank $j$ in $S$, that is, $\\psi(s_j) = j$.\n\\nop{We compare a pair of items $s_i$ and $s_j$ in $S$ by a \\emph{pre-imposed order}, that is, \\emph{if $j > i$ then $s_j > s_i$}.}\nThen, for every node $n_i$ of the SE-Tree, we build a child of $n_i$ for each item $s_j\\in S$ that satisfies $\\psi(s_j) > \\psi(s_{n_i})$.\nIn this way, every node $n_i$ of the SE-Tree uniquely represents a pattern in the power set of $S$, denoted by $\\mathbf{p}_i\\subseteq S$, which is the union of the items stored in all the nodes along the path from the root to $n_i$. \nAs shown in Figure~\\ref{Fig:tct}, the SE-Tree of $S=\\{s_1, s_2, s_3, s_4\\}$ has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$.\nFor node $n_{13}$, the path from the root to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, where $s_{n_0}=\\emptyset$, $s_{n_1}=s_1$, $s_{n_6}=s_3$ and $s_{n_{13}}=s_4$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.\n}\n\n\\nop{\nBuilding a SE-Tree requires a pre-imposed order of the items in $S$. \nWe define such \\emph{pre-imposed order} as: \\emph{$\\forall s_i, s_j\\in S$, if $j > i$ then $s_j > s_i$}.\n\\mc{The following description of SE-tree is confusing and inaccurate. 
The mapping between nodes $i_i$ and items $s_j$ is unclear.}\n\\todo{As shown in Figure~\\ref{Fig:tct}, for $S=\\{s_1, s_2, s_3, s_4\\}$, the SE-Tree has 16 nodes $\\{n_0, n_1, n_2, \\cdots, n_{15}\\}$. Each node $n_i$ uniquely represents a pattern $\\mathbf{p}_i$, which is the union of items in all nodes along the path from $n_0$ to $n_i$. \nTake $n_{13}$ for an example, the path from $n_0$ to $n_{13}$ contains nodes $\\{n_0, n_1, n_6, n_{13}\\}$, thus $\\mathbf{p}_{13}=\\{s_1, s_3, s_4\\}$.}\n}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=86mm]{Figs\/theme_community_tree.pdf}\n\\caption{An example of SE-Tree and TC-Tree when $S=\\{s_1, s_2, s_3, s_4\\}$ and $\\mathcal{L}_{\\mathbf{p}_0}=\\mathcal{L}_{\\mathbf{p}_2}=\\mathcal{L}_{\\mathbf{p}_8}=\\mathcal{L}_{\\mathbf{p}_9}=\\mathcal{L}_{\\mathbf{p}_{11}}=\\mathcal{L}_{\\mathbf{p}_{14}}=\\mathcal{L}_{\\mathbf{p}_{15}}=\\emptyset$. SE-Tree includes all nodes marked in solid and dashed lines. TC-tree contains only the nodes in solid line. }\n\\label{Fig:tct}\n\\end{figure}\n\nA TC-Tree is an extension of a SE-Tree.\nIn a TC-Tree, each node $n_i$ represents a pattern $\\mathbf{p}_i$, which is a subset of $S$. \nThe item stored in $n_i$ is denoted by $s_{n_i}$. \nWe also store the decomposed maximal pattern truss $\\mathcal{L}_{\\mathbf{p}_i}$ in $n_i$. \nTo save memory, we omit the nodes $n_j$ $(j\\geq 1)$ whose decomposed maximal pattern trusses are $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$. \n\nWe can build a TC-Tree in a top-down manner efficiently.\nIf $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$, we can prune the entire subtree rooted at $n_j$ immediately.\nThis is because, for node $n_j$ and its descendant $n_d$, we have $\\mathbf{p}_j\\subset \\mathbf{p}_d$. Since $\\mathcal{L}_{\\mathbf{p}_j}=\\emptyset$, we can derive from Proposition~\\ref{Prop:pam} that $\\mathcal{L}_{\\mathbf{p}_d}=\\emptyset$. 
As a result, all descendants of $n_j$ can be immediately pruned.\n\nAlgorithm~\\ref{Alg:tctb} gives the details of building a TC-Tree $\\mathcal{T}$. \nLines 2-5 generate the nodes at the first layer of $\\mathcal{T}$.\nSince the theme networks induced by different items in $S$ are independent, we can compute $\\mathcal{L}_{\\mathbf{p}_i}$ in parallel. Our implementation uses multiple threads for this step.\nLines 6-12 iteratively build the rest of the nodes of $\\mathcal{T}$ in breadth first order.\nHere, $n_f.siblings$ is the set of nodes that have the same parent as $n_f$. The children of $n_f$, denoted by $n_c$, are built in Lines 8-11.\n\\nop{The pre-imposed order of items (i.e., $n_b.s_b > n_f.s_f$) is applied in Line 9 to ensure that the TC-Tree is a subtree of SE-Tree. \\mc{What do you mean by ``sub-tree'' here?, Do you mean a sub-graph?}}\nIn Line 9, we apply Proposition~\\ref{Lem:gip} to efficiently calculate $\\mathcal{L}_{\\mathbf{p}_c}$.\nSince $\\mathbf{p}_c=\\mathbf{p}_f \\cup \\mathbf{p}_b$, we have $\\mathbf{p}_f\\subset \\mathbf{p}_c$ and $\\mathbf{p}_b\\subset \\mathbf{p}_c$. \nFrom Proposition~\\ref{Lem:gip}, we know $C^*_{\\mathbf{p}_c}(0)\\subseteq C^*_{\\mathbf{p}_f}(0) \\cap C^*_{\\mathbf{p}_b}(0)$.\nTherefore, we can find $C^*_{\\mathbf{p}_c}(0)$ within a small subgraph $C^*_{\\mathbf{p}_f}(0) \\cap C^*_{\\mathbf{p}_b}(0)$ using MPTD,\nand then get $\\mathcal{L}_{\\mathbf{p}_c}$ by decomposing $C^*_{\\mathbf{p}_c}(0)$. \n\n\nIn summary, every node of a TC-Tree stores the decomposed maximal pattern truss $\\mathcal{L}_\\mathbf{p}$ of a unique pattern $\\mathbf{p}\\subseteq S$. \nSince $\\mathcal{L}_\\mathbf{p}$ also stores the nontrivial range of $\\alpha$ in $G_\\mathbf{p}$, we can easily use the TC-Tree to obtain the range of $\\alpha$ for all theme networks in $G$. 
\nThis range helps the users to set their queries.\n\n\\begin{algorithm}[t]\n\\caption{Build Theme Community Tree}\n\\label{Alg:tctb}\n\\KwIn{A database network $G$.}\n\\KwOut{The TC-Tree $\\mathcal{T}$ with root node $n_0$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialization: $Q\\leftarrow\\emptyset$, $s_{n_0}\\leftarrow\\emptyset$, $\\mathcal{L}_{\\mathbf{p}_0}\\leftarrow\\emptyset$.\n \n \\FOR{each item $s_i\\in S$}\n \t\\STATE $s_{n_i}\\leftarrow s_i$, $\\mathbf{p}_i\\leftarrow s_i$ and compute $\\mathcal{L}_{\\mathbf{p}_i}$.\n\t\\STATE \\textbf{if} $\\mathcal{L}_{\\mathbf{p}_i}\\neq \\emptyset$ \\textbf{then} $n_0.addChild(n_i)$ and $Q.push(n_i)$. \n \\ENDFOR\n \n \\WHILE{$Q\\neq\\emptyset$}\n \t\\STATE $Q.pop(n_{f})$.\n\t\\FOR{each node $n_b\\in n_f.siblings$}\n\t\t\\STATE \\textbf{if} $s_{n_f} \\prec s_{n_b}$ \\textbf{then} $s_{n_c} \\leftarrow s_{n_b}$, $\\mathbf{p}_c \\leftarrow \\mathbf{p}_f \\cup \\mathbf{p}_b$, and compute $\\mathcal{L}_{\\mathbf{p}_c}$.\n\t\t\\STATE \\textbf{if} $\\mathcal{L}_{\\mathbf{p}_c}\\neq \\emptyset$ \\textbf{then} $n_f.addChild(n_c)$ and $Q.push(n_c)$.\n\t\\ENDFOR\n \\ENDWHILE\n\\BlankLine\n\\RETURN The TC-Tree $\\mathcal{T}$ with root node $n_0$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n\\caption{Query Theme Community Tree}\n\\label{Alg:qtct}\n\\KwIn{A TC-Tree $\\mathcal{T}$, a query pattern $\\mathbf{q}$ and a threshold $\\alpha_\\mathbf{q}$.}\n\\KwOut{The set of maximal pattern trusses $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})$.}\n\\BlankLine\n\\begin{algorithmic}[1]\n \\STATE Initialization: $Q\\leftarrow n_0$, $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})\\leftarrow\\emptyset$.\n \\WHILE{$Q\\neq\\emptyset$}\n \t\\STATE $Q.pop(n_{f})$.\n\t\\FOR{each node $n_c\\in n_f.children \\land s_{n_c}\\in \\mathbf{q}$}\n\t\n\t\n\t\t\t\\STATE Get $C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})$ from $\\mathcal{L}_{\\mathbf{p}_c}$ by Equation~\\ref{Eqn:get_mpt}.\n\t\t\t\\STATE \\textbf{if} 
$C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})\\neq \\emptyset$ \\textbf{then} $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})\\leftarrow \\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q}) \\cup C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})$, and $Q.push(n_c)$.\n\t\n\t\\ENDFOR\n \\ENDWHILE\n\\BlankLine\n\\RETURN The set of maximal pattern trusses $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Querying Theme Community Tree}\nIn this subsection, we introduce how to query a TC-Tree with a pattern $\\mathbf{q}$ and a cohesion threshold $\\alpha_\\mathbf{q}$. \nThe \\emph{answer} to query $(\\mathbf{q}, \\alpha_\\mathbf{q})$ is the set of maximal pattern trusses with respect to $\\alpha_\\mathbf{q}$ for any sub-pattern of $\\mathbf{q}$, that is, $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})=\\{C^*_\\mathbf{p}(\\alpha_\\mathbf{q}) \\mid C^*_\\mathbf{p}(\\alpha_\\mathbf{q})\\neq \\emptyset, \\mathbf{p}\\subseteq \\mathbf{q}\\}$.\nWith $\\mathbb{C}_\\mathbf{q}(\\alpha_\\mathbf{q})$, one can easily extract theme communities by finding the maximal connected subgraphs in all the retrieved maximal pattern trusses.\n\nAs shown in Algorithm~\\ref{Alg:qtct}, the querying method simply traverses the TC-Tree in breadth-first order and collects maximal pattern trusses that satisfy the conditions of the answer. \n\nThe efficiency of Algorithm~\\ref{Alg:qtct} comes from three factors.\nFirst, in Line 4, if $s_{n_c}\\not\\in \\mathbf{q}$, then $\\mathbf{p}_c\\not\\subset \\mathbf{q}$ and the patterns of all descendants of $n_c$ are not sub-patterns of $\\mathbf{q}$. 
Therefore, we can prune the entire subtree rooted at $n_c$.\nSecond, in Line 6, if $C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})=\\emptyset$, we can prune the entire subtree rooted at $n_c$, because, according to Proposition~\\ref{Prop:pam}, no descendants of $n_c$ can induce a maximal pattern truss with respect to $\\alpha_\\mathbf{q}$.\nLast, in Line 5, getting $C^*_{\\mathbf{p}_c}(\\alpha_\\mathbf{q})$ from $\\mathcal{L}_{\\mathbf{p}_c}$ is efficient using Equation~\\ref{Eqn:get_mpt}.\n\n\\nop{\nthen $C^*_{p_{g}}(\\alpha_q)=\\emptyset$ for all children $n_{g}$ of $n_c$. This is an immediate result of the second property of Proposition~\\ref{Prop:pam}, since $p_c\\subset p_g$. Therefore, when $C^*_{p_c}(\\alpha_q)=\\emptyset$, . \n}\n\nIn summary, TC-Tree enables fast user query answering. \nAs demonstrated in Section~\\ref{Sec:eotci}, TC-Tree is easy to build and efficient to query, and scales well to index a large number of maximal pattern trusses using practical size of memory.\n\n\n\n\n\\section{Experiments}\n\\label{sec:exp}\n\n\\begin{table}[t]\n\\caption{Statistics of the database networks. 
\\#Items (total) is the total number of items stored in all vertex databases, and \\#Items (unique) is the number of unique items in $S$.}\n\\centering\n\\label{Table:dss}\n\\begin{tabular}{| p{23mm} | p{10.5mm} | p{10.5mm} | p{12mm} | p{10.5mm}|}\n\\hline\n & \\makecell[c]{BK} & \\makecell[c]{GW} & \\makecell[c]{AMINER} & \\makecell[c]{SYN} \\\\ \\hline\n\\#Vertices & $5.1{\\times}10^4$ & $1.1{\\times}10^5$ & $1.1{\\times}10^6$ & $1.0{\\times}10^6$ \\\\ \\hline\n\\#Edges & $2.1{\\times}10^5$ & $9.5{\\times}10^5$ & $2.6{\\times}10^6$ & $1.0{\\times}10^7$ \\\\ \\hline\n\\#Transactions & $1.2{\\times}10^6$ & $2.0{\\times}10^6$ & $3.1{\\times}10^6$ & $6.1{\\times}10^6$ \\\\ \\hline\n\\#Items (total) & $1.7{\\times}10^6$ & $3.5{\\times}10^6$ & $9.2{\\times}10^6$ & $1.3{\\times}10^8$ \\\\ \\hline\n\\#Items (unique) & $1.8{\\times}10^3$ & $5.7{\\times}10^3$ & $1.2{\\times}10^4$ & $1.0{\\times}10^4$ \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\nIn this section, we evaluate the performance of Theme Community Scanner (TCS), Theme Community Finder Apriori (TCFA), Theme Community Finder Intersection (TCFI) and Theme Community Tree (TC-Tree). \nTCS, TCFA and TCFI are implemented in Java. In order to efficiently process large database networks, we implement TC-Tree in C++ and\nthe parallel steps in Lines 2-5 of Algorithm~\\ref{Alg:tctb} with 4 threads using OpenMP.\nAll experiments are performed on a PC running Windows 7 with Core-i7-3370 CPU (3.40 GHz), 32GB memory and a 5400 rpm hard drive.\n\nSince TC-Tree is an indexing method and is not directly comparable with the other theme community detection methods, we compare the performance of TCS, TCFA and TCFI in Sections~\\ref{Sec:eop} and~\\ref{Sec:eotcf}, and evaluate the performance of TC-Tree in Section~\\ref{Sec:eotci}. Last, we present some interesting case studies in Section~\\ref{Sec:cs}. \n\nThe performance of TCS, TCFA and TCFI is evaluated on the following aspects. 
First, ``Time Cost'' measures the total runtime of each method.\nSecond, ``Number of Patterns (NP)'', ``Number of Vertices (NV)'' and ``Number of Edges (NE)'' are the total numbers of patterns, vertices and edges in all detected maximal pattern trusses, respectively. \nWhen counting NV, a vertex is counted $k$ times if it is contained in $k$ different maximal pattern trusses. \nFor NE, an edge is counted $k$ times if it is contained in $k$ different maximal pattern trusses.\nNP is equal to the number of maximal pattern trusses, since each maximal pattern truss uniquely corresponds to a pattern.\nThe evaluation metrics of TC-Tree are discussed in Section~\\ref{Sec:eotci}.\n\nThe following datasets are used.\n\n\\textbf{Brightkite (BK)}\nThe Brightkite dataset is a public check-in dataset produced by the location-based social networking website \\url{BrightKite.com}~\\cite{BKGW_data}. \nIt includes a friendship network of 58,228 users and 4,491,143 user check-ins that contain the check-in time and location.\nWe construct a database network using this dataset by taking the user friendship network as the network structure of the database network. Moreover,\nto create the vertex database for a user, we treat each check-in location as an item, and cut the check-in history of a user into periods of 2 days. The set of check-in locations within a period is transformed into a transaction.\nA theme community in this database network represents a group of friends who frequently visit the same set of places.\n\n\\textbf{Gowalla (GW)} \nThe Gowalla dataset is a public dataset produced by the location-based social networking website \\url{Gowalla.com}~\\cite{BKGW_data}. \nIt includes a friendship network of 196,591 users and 6,442,890 user check-ins that contain the check-in time and location. \nWe transform this dataset into a database network in the same way as BK. 
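The per-user transaction construction described for BK and GW can be sketched as follows. The 2-day period length follows the text; the function name, the `(timestamp, location)` input layout, and the choice to align periods to a user's first check-in are illustrative assumptions.

```python
from collections import defaultdict

PERIOD = 2 * 24 * 3600  # 2-day periods, in seconds (as in the BK/GW setup)

def checkins_to_transactions(checkins):
    """Cut one user's check-in history into 2-day periods; the set of
    locations visited in each period becomes one transaction.

    checkins: iterable of (unix_timestamp, location_id) pairs.
    """
    checkins = list(checkins)
    if not checkins:
        return []
    start = min(t for t, _ in checkins)  # assumption: align to first check-in
    buckets = defaultdict(set)
    for t, loc in checkins:
        buckets[(t - start) // PERIOD].add(loc)
    return [buckets[k] for k in sorted(buckets)]
```

Applying this to every user yields one transaction database per vertex of the friendship network.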
\n\n\\textbf{AMINER}\nThe AMINER dataset is built from the Citation network v2 (CNV2) dataset~\\cite{Aminer_data}.\nCNV2 contains 1,397,240 papers. We transform it into a database network in the following two steps. First, we treat each author as a vertex and build an edge between every pair of authors who co-author at least one paper. \nSecond, to build the vertex database of an author, we treat each keyword in the abstract of a paper as an item, and the keywords in the abstract of each paper are turned into a transaction. An author vertex is thus associated with a transaction database covering all papers the author publishes.\nIn this database network, a theme community represents a group of authors who collaborate closely and share the same research interest, described by a common set of keywords.\n\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Time cost (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_BK.pdf}}\n\\subfigure[\\#Patterns (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_BK.pdf}}\n\\subfigure[\\#Vertices (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_BK.pdf}}\n\\subfigure[\\#Edges (BK)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_BK.pdf}}\n\\subfigure[Time cost (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_GW.pdf}}\n\\subfigure[\\#Patterns (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_GW.pdf}}\n\\subfigure[\\#Vertices (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_GW.pdf}}\n\\subfigure[\\#Edges (GW)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_GW.pdf}}\n\\subfigure[Time cost (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_time_DBLP.pdf}}\n\\subfigure[\\#Patterns (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_npat_DBLP.pdf}}\n\\subfigure[\\#Vertices (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_nv_DBLP.pdf}}\n\\subfigure[\\#Edges 
(AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP1_EffectOfPara_ne_DBLP.pdf}}\n\\caption{The effects of the cohesion threshold $\\alpha$ and the frequency threshold $\\epsilon$ on BK, GW and AMINER. In (f)-(h) and (j)-(l), NP, NV and NE are all zero when $\\alpha=2.0$; the zero values cannot be drawn in the figure since the y-axes are in log scale.}\n\\label{Fig:effect_of_parameters}\n\\end{figure*}\n\n\n\\textbf{Synthetic (SYN) dataset.}\nThe synthetic dataset is built to evaluate the scalability of TC-Tree. We first generate a network with 1 million vertices using the Java Universal Network\/Graph Framework (JUNG)~\\cite{JUNG}.\nThen, to make the vertex databases of neighbouring vertices share some common patterns, we generate the transaction database of each vertex in three steps.\nFirst, we randomly select 1000 seed vertices. \nSecond, to build the transaction database of each seed vertex, we randomly sample multiple itemsets from $S$ and store each sampled itemset as a transaction in the transaction database.\nLast, to build the transaction database of each non-seed vertex, we first sample multiple transactions from the transaction databases of the neighbouring vertices, then randomly change 10\\% of the items in each sampled transaction to other items randomly picked from $S$.\nIn this way, we iteratively generate the transaction databases of all vertices by a breadth first search of the network. 
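The three-step SYN generation above can be sketched as follows. This is a toy-scale version under stated assumptions: the adjacency is given as a dict, the parameters are shrunk from the paper's values, and the function name \texttt{generate\_syn} is ours.

```python
import random

def generate_syn(adj, items, n_seeds=2, n_tx=3, tx_len=4, mutate=0.1, seed=0):
    """Sketch of the SYN construction: seed vertices sample transactions
    directly from the item universe S (`items`); every other vertex, visited
    in BFS order, samples transactions from an already-built neighbour
    database and rewrites ~10% of the items, so that neighbouring vertex
    databases share common patterns."""
    rng = random.Random(seed)
    seeds = rng.sample(list(adj), n_seeds)
    # Step 1+2: seed databases sampled straight from the item universe.
    db = {v: [rng.sample(items, tx_len) for _ in range(n_tx)] for v in seeds}
    frontier = list(seeds)
    while frontier:                              # breadth-first generation
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v in db:
                    continue
                txs = []
                for _ in range(n_tx):
                    t = list(rng.choice(db[u]))  # sample from built neighbour
                    for i in range(len(t)):
                        if rng.random() < mutate:    # rewrite ~10% of items
                            t[i] = rng.choice(items)
                    txs.append(t)
                db[v] = txs
                nxt.append(v)
        frontier = nxt
    return db
```

On a connected network every vertex is reached from some seed, so each vertex receives a database whose transactions are noisy copies of its neighbours', which is exactly what makes nearby vertices likely to support common patterns.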
\nFor each vertex $v_i$ with degree $d(v_i)$, the number of transactions in vertex database $\\mathbf{d}_i$ is set to $\\lceil{e^{ 0.1\\times d(v_i)}}\\rceil$, and the length of each transaction in $\\mathbf{d}_i$ is set to $\\lceil{e^{ 0.13\\times d(v_i)}}\\rceil$.\n\nThe statistics of all datasets are given in Table~\\ref{Table:dss}.\n\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Time cost (BK)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_time_BK.pdf}}\n\\subfigure[\\#Patterns (BK)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_npat_BK.pdf}}\n\\subfigure[\\#Vertices (BK)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_nv_BK.pdf}}\n\\subfigure[\\#Edges (BK)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_ne_BK.pdf}}\n\\subfigure[Time cost (GW)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_time_GW.pdf}}\n\\subfigure[\\#Patterns (GW)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_npat_GW.pdf}}\n\\subfigure[\\#Vertices (GW)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_nv_GW.pdf}}\n\\subfigure[\\#Edges (GW)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_ne_GW.pdf}}\n\\subfigure[Time cost (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_time_DBLP.pdf}}\n\\subfigure[\\#Patterns 
(AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_npat_DBLP.pdf}}\n\\subfigure[\\#Vertices (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_nv_DBLP.pdf}}\n\\subfigure[\\#Edges (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP2_EfficiencyVSEdgeNum_ne_DBLP.pdf}}\n\\caption{The Time Cost, NP, NV\/NP and NE\/NP of TCS, TCFA and TCFI on different sizes of real world database networks. NV\/NP and NE\/NP are the average numbers of vertices and edges, respectively, in all detected maximal pattern trusses.}\n\\label{Fig:scalability_results}\n\\end{figure*}\n\n\\subsection{Effect of Parameters}\n\\label{Sec:eop}\nIn this subsection, we analyze the effects of the cohesion threshold $\\alpha$ and the frequency threshold $\\epsilon$ for TCS in the real world database networks.\nThe settings of parameters are $\\alpha \\in \\{0.0, 0.1, 0.2, 0.3, 0.5, 1.0, 1.5, 2.0\\}$ and $\\epsilon\\in\\{0.1, 0.2, 0.3\\}$. \nWe do not evaluate the performance of TCS for $\\epsilon=0.0$ and $\\epsilon > 0.3$, because TCS cannot finish in reasonable time when $\\epsilon=0.0$ and loses too much accuracy when $\\epsilon>0.3$.\nSince TCS with $\\epsilon\\in\\{0.1, 0.2, 0.3\\}$ still runs too slowly on the original database networks of BK, GW and AMINER, we use small database networks sampled from the original ones by performing a breadth first search from a randomly picked seed vertex.\nFrom BK and GW, we obtain sampled database networks with 10,000 edges. \nFor AMINER, we sample a database network of 5,000 edges.\n\nFigures~\\ref{Fig:effect_of_parameters}(a), \\ref{Fig:effect_of_parameters}(e) and~\\ref{Fig:effect_of_parameters}(i) show the time cost of all methods on BK, GW and AMINER, respectively.\nThe cost of TCS does not change when $\\alpha$ increases. 
This is because the cost of TCS is largely dominated by the size of the set of candidate patterns $\\mathcal{P}$ (see Section~\\ref{sec:tcs}), which is not affected by $\\alpha$. \nHowever, when $\\epsilon$ increases, the size of $\\mathcal{P}$ shrinks, and thus the cost of TCS decreases.\nWhen $\\alpha$ increases, the costs of TCFA and TCFI both decrease. This is because, for both TCFA and TCFI, a larger $\\alpha$ reduces the size of the set of qualified patterns $\\mathcal{P}^{k-1}$, and thus reduces the number of generated candidate patterns in $\\mathcal{M}^k$. This improves the effectiveness of the early pruning of TCFA and TCFI.\nThe cost of TCFA is sensitive to $\\alpha$, because it is largely dominated by the number of candidate patterns in $\\mathcal{M}^k$, which is generated by taking the length-$k$ unions of the patterns in $\\mathcal{P}^{k-1}$.\nWhen $\\alpha$ decreases, the size of $\\mathcal{P}^{k-1}$ increases rapidly, and the number of candidate patterns in $\\mathcal{M}^k$ becomes very large. \nIn contrast, the cost of TCFI is stable with respect to $\\alpha$ and is much lower than the cost of TCFA when $\\alpha$ is small.\nThe reason is that most maximal pattern trusses are small local subgraphs that do not intersect with each other, so many unqualified patterns in $\\mathcal{M}^k$ are easily pruned by TCFI using the graph intersection property in Proposition~\\ref{Lem:gip}.\n\nAccording to the experimental results on the small database network of AMINER with 5,000 edges, when $\\alpha=0$, TCFA calls MPTD 622,852 times, while TCFI calls MPTD 152,396 times. This indicates that TCFI effectively prunes 75.5\\% of the candidate patterns. However, in Figure~\\ref{Fig:effect_of_parameters}(i), TCFI is nearly three orders of magnitude faster than TCFA when $\\alpha=0$. 
This is because, for each run of MPTD, TCFA computes the maximal pattern truss in the large theme network induced from the entire database network, whereas TCFI computes the maximal pattern truss within the small theme network induced from the intersection of two maximal pattern trusses.\n\nIn Figures~\\ref{Fig:effect_of_parameters}(a),~\\ref{Fig:effect_of_parameters}(e) and \\ref{Fig:effect_of_parameters}(i), when $\\alpha\\geq 1$, the cost of TCFA is comparable with that of TCFI in all database networks. \nThis is because, when $\\alpha\\geq 1$, GW and AMINER contain only one maximal pattern truss each, and BK contains no more than three maximal pattern trusses that intersect with each other.\nIn this case, TCFA does not generate many unqualified candidate patterns, and TCFI does not prune any candidate patterns by the graph intersection property.\n\nFigures~\\ref{Fig:effect_of_parameters}(b)-(d), \\ref{Fig:effect_of_parameters}(f)-(h) and \\ref{Fig:effect_of_parameters}(j)-(l) show the performance in NP, NV and NE of all methods on BK, GW and AMINER, respectively. \nTCFA and TCFI produce identical exact results for all values of $\\alpha$ in all database networks. \nWhether TCS produces the exact results depends heavily on the frequency threshold $\\epsilon$, the cohesion threshold $\\alpha$ and the database network.\nFor example, in Figures~\\ref{Fig:effect_of_parameters}(b)-(d) and Figures~\\ref{Fig:effect_of_parameters}(f)-(h), TCS ($\\epsilon=0.1$) cannot produce the same results as TCFA and TCFI unless $\\alpha\\geq 0.2$.\nFor TCS ($\\epsilon=0.2$) and TCS ($\\epsilon=0.3$), the values of $\\alpha$ required to produce the same results as TCFA and TCFI vary across database networks.\nThe reason is that vertices with small pattern frequencies can still form a good maximal pattern truss with large edge cohesion if they form a densely connected subgraph. 
\nSuch maximal pattern trusses may be lost if the patterns with low frequencies are dropped by the pre-filtering step of TCS.\nThis clearly shows that TCS trades accuracy for efficiency.\n\nIn summary, TCFI produces exact detection results for maximal pattern trusses and achieves the best efficiency for all values of $\\alpha$ on all database networks. \n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Query by alpha (BK)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qba_BK.pdf}}\n\\subfigure[Query by alpha (GW)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qba_GW.pdf}}\n\\subfigure[Query by alpha (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qba_DBLP.pdf}}\n\\subfigure[Query by alpha (SYN)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qba_SYN.pdf}}\n\\subfigure[Query by pattern (BK)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qbp_BK.pdf}}\n\\subfigure[Query by pattern (GW)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qbp_GW.pdf}}\n\\subfigure[Query by pattern (AMINER)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qbp_DBLP.pdf}}\n\\subfigure[Query by pattern (SYN)]{\\includegraphics[width=43mm]{Figs\/EXP3_IndexingPerformances_qbp_SYN.pdf}}\n\\caption{Query performance of TC-Tree. (a)-(d) show the QBA performance. 
(e)-(h) show the QBP performance.}\n\\label{Fig:query_performances}\n\\end{figure*}\n\n\n\\subsection{Efficiency of Theme Community Finding}\n\\label{Sec:eotcf}\nIn this subsection, we analyze how the runtime of all methods changes when the size of the database network increases. \nFor each database network, we generate a series of database networks of different sizes by sampling the original database network using the breadth first search sampling method introduced in Section~\\ref{Sec:eop}.\nSince TCS and TCFA run too slowly on large database networks, we stop reporting their performance once they cost more than one day. \nThe performance of TCFI is evaluated on all sizes of database networks, including the original ones. To evaluate the worst case performance of all methods, we set $\\alpha=0$.\n\nFigures~\\ref{Fig:scalability_results}(a),~\\ref{Fig:scalability_results}(e) and \\ref{Fig:scalability_results}(i) show the time cost of all methods on BK, GW and AMINER, respectively.\nThe cost of all methods increases when the number of sampled edges increases. This is because increasing the size of the database network increases the number of maximal pattern trusses. 
\nThe cost of TCFI grows much slower than that of TCS and TCFA.\nThe reason is that TCS generates candidate patterns by enumerating the patterns of all vertex databases, while TCFA generates candidate patterns by pairwise unions of the patterns of the detected maximal pattern trusses; both generate a large number of unqualified candidate patterns.\nTCFI generates candidate patterns by pairwise unions of the patterns of two intersecting maximal pattern trusses, and runs MPTD on the small intersection of the two trusses.\nThis effectively reduces the number of candidate patterns and significantly reduces the time cost.\nAs a result, TCFI achieves the best scalability and is more than two orders of magnitude faster than TCS and TCFA on large database networks.\n\nFigures~\\ref{Fig:scalability_results}(b),~\\ref{Fig:scalability_results}(f) and~\\ref{Fig:scalability_results}(j) show the performance in NP of all methods. When the number of sampled edges increases, the NPs of all methods increase. 
This is because increasing the size of the database network increases the number of maximal pattern trusses, which is equal to NP.\nBoth TCFI and TCFA produce the same exact results. However, due to the accuracy loss caused by pre-filtering the patterns with low frequencies, TCS cannot produce the same results as TCFI and TCFA.\n\nIn Figures~\\ref{Fig:scalability_results}(c)-(d),~\\ref{Fig:scalability_results}(g)-(h) and~\\ref{Fig:scalability_results}(k)-(l), we show the average numbers of vertices and edges in detected maximal pattern trusses by NV\/NP and NE\/NP, respectively. \nThe trends of the NV\/NP and NE\/NP curves differ across database networks. \nThis is because each database network is sampled by conducting a breadth first search from a randomly selected seed vertex, and the distributions of maximal pattern trusses differ across database networks.\nIf smaller maximal pattern trusses are sampled earlier than larger ones, NV\/NP and NE\/NP increase when the number of sampled edges increases; if smaller maximal pattern trusses are sampled later than larger ones, NV\/NP and NE\/NP decrease.\nWe can also see that the average numbers of vertices and edges in detected maximal pattern trusses are always small. \nThis demonstrates that most maximal pattern trusses are small local subgraphs in a database network. \nSuch small subgraphs in different local regions of a large sparse database network generally do not intersect with each other. Therefore, using the graph intersection property, TCFI can efficiently prune a large number of unqualified patterns and achieve much better scalability.\n\n\\subsection{Efficiency of Theme Community Indexing}\n\\label{Sec:eotci}\nNow we analyze the indexing scalability and query efficiency of TC-Tree in both the real and synthetic database networks.\n\nThe indexing performance of TC-Tree in all database networks is shown in Table~\\ref{Table:iptct}. 
``Indexing Time'' is the cost to build a TC-Tree; ``Memory'' is the peak memory usage when building a TC-Tree; ``\\#Nodes'' is the number of nodes in a TC-Tree, which is also the number of maximal pattern trusses in a database network, since every TC-Tree node stores a unique maximal pattern truss.\n\n\\begin{table}[t]\n\\caption{Indexing performance of TC-Tree.}\n\\centering\n\\label{Table:iptct}\n\\begin{tabular}{| c | c | c | c | }\n\\hline\n \t & Indexing Time & Memory & \\#Nodes\t \t\t\\\\ \\hline\nBK & 179 \\;\\;\\;\\, seconds & 0.3 \\;\\,GB & 18,581 \t\t \\\\ \\hline\nGW & 1,594 \\;\\;seconds & 2.6 \\;\\,GB & 11,750,761 \t \\\\ \\hline\nAMINER & 41,068 seconds & 28.3 GB & 152,067,019 \t \\\\ \\hline\nSYN & 35,836 seconds & 26.6 GB & 132,985,944 \t \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nBuilding a TC-Tree is efficient in both Indexing Time and Memory.\nFor the large database networks of AMINER and SYN, TC-Tree scales well, indexing more than 130 million nodes.\nTC-Tree costs more time on AMINER than on SYN, because the database network of AMINER contains more unique items, which produces a larger set of candidate patterns.\n\nWe evaluate the performance of the TC-Tree querying method (Algorithm~\\ref{Alg:qtct}) under two settings: \n1) Query by Alpha (QBA), which queries a TC-Tree with a threshold $\\alpha_\\mathbf{q}$ by setting $\\mathbf{q}=S$.\n2) Query by Pattern (QBP), which queries a TC-Tree with pattern $\\mathbf{q}$ by setting $\\alpha_\\mathbf{q}=0$. The results are shown in Figure~\\ref{Fig:query_performances}, where ``Query Time'' is the cost of querying a TC-Tree and ``Retrieved Nodes (RN)'' is the number of nodes retrieved from a TC-Tree.\n\nTo evaluate how QBA performance changes when $\\alpha_\\mathbf{q}$ increases, we use $\\alpha_\\mathbf{q}\\in\\{0.0, 0.1, 0.2, \\cdots, \\alpha_\\mathbf{q}^*\\}$, which is a finite sequence that starts from 0.0 and is increased by 0.1 per step until Algorithm~\\ref{Alg:qtct} returns $\\emptyset$. 
$\\alpha_\\mathbf{q}^*$ is the largest $\\alpha_\\mathbf{q}$ for which Algorithm~\\ref{Alg:qtct} does not return $\\emptyset$. \nFor each $\\alpha_\\mathbf{q}$, the Query Time is the average of 1,000 runs.\n\nIn Figures~\\ref{Fig:query_performances}(a)-(d), when $\\alpha_\\mathbf{q}$ increases, both RN and Query Time decrease. \nThis is because a larger $\\alpha_\\mathbf{q}$ reduces the number of maximal pattern trusses, and thus decreases RN and Query Time. Interestingly, in Figure~\\ref{Fig:query_performances}(c), we have $\\alpha_\\mathbf{q}^*=106.9$ in the database network of AMINER. This is because the CNV2 dataset~\\cite{Aminer_data} contains a paper about the ``IBM Blue Gene\/L super computer'' that is co-authored by 115 authors.\n\nFigures~\\ref{Fig:query_performances}(c)-(d) show the excellent QBA performance of the proposed querying method (Algorithm~\\ref{Alg:qtct}) on the large database networks of AMINER and SYN. The proposed querying method can retrieve 1 million maximal pattern trusses within 1 second.\n\n\n\nFigures~\\ref{Fig:query_performances}(e)-(h) show how the performance of QBP changes when the query pattern length increases. \nTo generate query patterns with different lengths, we randomly sample 1,000 nodes from each layer of the TC-Tree and use the patterns of the sampled nodes as query patterns. \nSetting the query pattern length larger than the maximum depth of the TC-Tree does not make sense, since such patterns do not correspond to any maximal pattern trusses in the database network. Each Query Time reported is an average of 1,000 runs using different query patterns of the same length.\nAs shown in Figures~\\ref{Fig:query_performances}(e)-(h), both RN and Query Time increase when the Query Pattern Length increases. 
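As a rough sketch of these two query modes (our own simplification, not the actual Algorithm~\ref{Alg:qtct}; we assume each node stores its cohesion value and children keyed by item, and that cohesion never increases from parent to child):

```python
# Hedged sketch of QBA/QBP-style retrieval over an item-keyed tree.
# Assumption (ours): a node's alpha is no larger than its parent's,
# so a whole subtree can be pruned once alpha drops below alpha_q.
class Node:
    def __init__(self, item=None, alpha=0.0):
        self.item, self.alpha = item, alpha
        self.children = {}  # item -> Node

def query_by_alpha(node, alpha_q, out=None):
    """QBA-like: collect all descendants whose cohesion reaches alpha_q."""
    out = [] if out is None else out
    for child in node.children.values():
        if child.alpha >= alpha_q:   # otherwise prune the whole subtree
            out.append(child)
            query_by_alpha(child, alpha_q, out)
    return out

def query_by_pattern(root, pattern):
    """QBP-like: walk the item-keyed path for a (sorted) query pattern."""
    node = root
    for item in sorted(pattern):
        node = node.children.get(item)
        if node is None:
            return None
    return node
```

Under this sketch, a larger threshold prunes more subtrees (fewer retrieved nodes), while a longer query pattern walks a longer root-to-node path.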
\nThis is because querying the TC-Tree with a longer query pattern visits more TC-Tree nodes and retrieves more maximal pattern trusses.\n\nIn summary, TC-Tree is scalable in both time and memory when indexing large database networks.\n\n\n\\subsection{A Case Study}\n\\label{Sec:cs}\nIn this subsection, we present some interesting theme communities discovered in the database network of AMINER using the proposed TC-Tree method.\nEach detected theme community represents a group of collaborating scholars who share the same research interest, characterized by a set of keywords.\nWe present six interesting theme communities in Figure~\\ref{Fig:case_study} and show the corresponding sets of keywords in Table~\\ref{Table:listofpat}.\n\nTake Figures~\\ref{Fig:case_study}(a)-(b) as an example: the research interest of the theme community in Figure~\\ref{Fig:case_study}(a) is ``data mining'' and ``sequential pattern''. \nIf we narrow down the research interest of this theme community with an additional keyword ``intrusion detection'', the theme community in Figure~\\ref{Fig:case_study}(a) reduces to the theme community in Figure~\\ref{Fig:case_study}(b). 
This result demonstrates that the size of a theme community decreases as the length of the pattern increases, which is consistent with Theorem~\\ref{Prop:gam}.\n\nThe results in Figures~\\ref{Fig:case_study}(a)-(d) show that four researchers, Philip S.\\ Yu, Jiawei Han, Jian Pei and Ke Wang, actively coauthor with different groups of researchers in different sub-disciplines of data mining, such as sequential pattern mining, intrusion detection, frequent pattern mining and privacy protection.\nThese results demonstrate that the proposed TC-Tree method can discover arbitrarily overlapping theme communities with different themes.\n\n\n\\begin{table}[t]\n\\caption{The sets of keywords for theme communities.}\n\\centering\n\\label{Table:listofpat}\n\\begin{tabular}{|p{2mm} p{78mm}|}\n\\hline\n$p_1:$ & data mining, sequential pattern \\\\ \\hline\n$p_2:$ & data mining, sequential pattern, intrusion detection \\\\ \\hline\n$p_3:$ & data mining, search space, complete set, pattern mining \\\\ \\hline\n$p_4:$ & data mining, sensitive information, privacy protection \\\\ \\hline\n$p_5:$ & principal component analysis, linear discriminant analysis, dimensionality reduction, component analysis \\\\ \\hline\n$p_6:$ & image retrieval, image database, relevance feedback, semantic gap\\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=88mm]{Figs\/EXP4_CaseStudy.pdf}\n\\caption{Interesting theme communities found on the database network of AMINER.}\n\\label{Fig:case_study}\n\\end{figure}\n\n\nInterestingly, TC-Tree also discovers interdisciplinary research communities formed by researchers from different research areas. \nAs shown in Figures~\\ref{Fig:case_study}(e)-(f), the research activities of Jiawei Han and Jian Pei are not limited to data mining.\nFigure~\\ref{Fig:case_study}(e) indicates that Jiawei Han collaborated with some researchers in linear discriminant analysis. 
\nFigure~\\ref{Fig:case_study}(f) shows that Jian Pei collaborated with some researchers in image retrieval. \nMore interestingly, both Jiawei Han and Jian Pei collaborated with Jun Zhao, Xiaofei He, and others.\nThe two theme communities in Figures~\\ref{Fig:case_study}(e)-(f) overlap heavily in vertices but differ in themes.\n\n\\nop{\nProf. Jiawei Han co-worked with the researchers in linear discriminant analysis and Prof. Jian Pei was actively associated with the researchers in the field of image retrieval. Interestingly, both the theme communities in Figure~\\ref{Fig:case_study}(e)-(f) involve the research group of Prof. Jun Zhao, Prof. Chun Chen, Prof. Xiaofei He, Prof. Jiajun Bu and Prof. 
Can Wang from Zhejiang University, China.\n}\n\nIn summary, the proposed theme community finding method allows arbitrary overlap between theme communities with different themes, and it efficiently and accurately discovers meaningful theme communities from large database networks.\n\n\n\n\\section{Conclusions and Future Work}\n\\label{sec:con}\n\nIn this paper, we tackle the novel problem of finding theme communities from database networks. \nWe first introduce the concept of the database network, which is a natural abstraction of many real-world networks.\nThen, we propose TCFI and TC-Tree, which efficiently discover and index millions of theme communities in large database networks.\nAs demonstrated by extensive experiments on both synthetic and real-world database networks, TCFI and TC-Tree are highly efficient and scalable.\nAs future work, we will extend TCFI and TC-Tree to find theme communities in \\emph{edge database networks}, where each edge is associated with a transaction database that describes complex relationships between vertices.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{The proof of Theorem \\ref{Theorem irreducible quotient ring}}\n\nWe begin this section by giving some preliminaries.\n\nLet $\\mathcal{S}$ be a finite commutative semigroup.\nThe operation on $\\mathcal{S}$ is denoted by $+$.\nThe identity element of $\\mathcal{S}$, denoted $0_{\\mathcal{S}}$ (if it exists), is the unique element $e$ of\n$\\mathcal{S}$ such that $e+a=a$ for every $a\\in \\mathcal{S}$. 
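These notions can be modeled concretely. The following sketch (our own illustrative encoding, not part of the paper) implements the semigroup $C_n\cup\{\infty\}$, a cyclic group of order $n$ with an extra absorbing element $\infty$ adjoined (so $\infty+a=\infty$ for every $a$), together with the sum of a finite sequence of its elements:

```python
# Illustrative model (not from the paper): the commutative semigroup
# C_n ∪ {∞} -- residues mod n under addition, plus an absorbing
# element INF with INF + a = INF for every a.
INF = "inf"

def make_op(n):
    def op(a, b):
        if a == INF or b == INF:
            return INF          # absorbing element
        return (a + b) % n      # group part C_n; identity is 0
    return op

def sigma(op, seq, identity=0):
    """Sum of all terms of a sequence (identity for the empty one)."""
    total = identity
    for x in seq:
        total = op(total, x)
    return total
```

Here `0` plays the role of the identity element and `INF` the role of the absorbing element just defined; this toy semigroup reappears in the lemmas below.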
If $\\mathcal{S}$ has an identity element $0_{\\mathcal{S}}$, let\n$$U(\\mathcal{S})=\\{a\\in \\mathcal{S}: a+a'=0_{\\mathcal{S}} \\mbox{ for some }a'\\in \\mathcal{S}\\}$$ be the group of units\nof $\\mathcal{S}$.\nThe zero element of $\\mathcal{S}$, denoted\n$\\infty_{\\mathcal{S}}$ (if it exists), is the unique element $z$ of $\\mathcal{S}$ such that $z+a=z$ for every\n$a\\in \\mathcal{S}$.\nLet $$T=x_1x_2\\cdot\\ldots\\cdot x_n=\\prod\\limits_{x\\in \\mathcal{S}} x^{\\ {\\rm v}_x(T)}$$ be a sequence\nof elements in the semigroup $\\mathcal{S}$, where ${\\rm v}_x(T)$ denotes the multiplicity of $x$ in the sequence $T$, and let $\\sigma(T)=x_1+x_2+\\cdots+x_n$ denote the sum of all terms of $T$.\nLet $T_1,T_2\\in\n\\mathcal{F}(\\mathcal{S})$ be two sequences on $\\mathcal{S}$, where $\\mathcal{F}(\\mathcal{S})$ denotes the set of all sequences of elements in $\\mathcal{S}$. We call $T_2$\na subsequence of $T_1$ if $${\\rm v}_x(T_2)\\leq {\\rm v}_x(T_1)$$ for every element $x\\in \\mathcal{S}$; in particular, if $T_2\\neq T_1$, we call $T_2$ a {\\bf proper} subsequence of $T_1$, and write $$T_3=T_1 \\cdot T_2^{-1}$$ to mean the unique subsequence of $T_1$ with $T_2\\cdot T_3=T_1$. By $\\lambda$ we denote the\nempty sequence.\nIf $\\mathcal{S}$ has an identity element $0_{\\mathcal{S}}$, we allow $T=\\lambda$ to be empty and adopt the convention\nthat $\\sigma(\\lambda)=0_\\mathcal{S}$.\nWe say that $T$ is {\\it\nreducible} if $\\sigma(T')=\\sigma(T)$ for some proper subsequence $T'$ of $T$\n(note that $T'$ may be the empty sequence $\\lambda$ if $\\mathcal{S}$\nhas the identity element $0_{\\mathcal{S}}$ and $\\sigma(T)=0_{\\mathcal{S}}$). Otherwise, we call $T$\n{\\it irreducible}. For more related terminology used in additive problems in semigroups, one is referred to \\cite{wang}.\n\n\n\\begin{lemma}(\\cite{GH}, Lemma 6.1.3) \\label{Lemma recusive Davenport constant} \\ Let $G$ be a finite abelian group, and let $H$ be a subgroup of $G$. 
Then, $D(G)\\geq D(G\/H)+D(H)-1$.\n\\end{lemma}\n\nFor any finite commutative semigroup $\\mathcal{S}$ with identity $0_{\\mathcal{S}}$, since $U(\\mathcal{S})$ is a nonempty subsemigroup of $\\mathcal{S}$, we have the following.\n\n\\begin{lemma}\\label{proposition D(U(G))leq D(G)} (see \\cite{wanggao}, Proposition 1.2) \\\nLet $\\mathcal{S}$ be a finite commutative semigroup with identity. Then $D(U(\\mathcal{S}))\\leq\nD(\\mathcal{S})$.\n\\end{lemma}\n\n\\begin{lemma}\\label{Lemma product of cyclic semigroups} Let $k\\geq 1$, and let $n_1,n_2,\\ldots,n_k\\geq 2$ be positive integers. Let $\\mathcal{S}_i=C_{n_i}\\cup \\{\\infty_i\\}$ be the semigroup obtained from a cyclic group of order $n_i$ by adjoining a zero element $\\infty_i$, for each $i\\in [1,k]$. Let $\\mathcal{S}=\\mathcal{S}_1\\times \\mathcal{S}_2\\times \\cdots \\times\\mathcal{S}_k$ be the product of $\\mathcal{S}_1,\\mathcal{S}_2,\\ldots,\\mathcal{S}_k$. Then $D(\\mathcal{S})=D(U(\\mathcal{S}))$.\n\\end{lemma}\n\n \\begin{proof} \\ By Lemma \\ref{proposition D(U(G))leq D(G)}, we need only to show that\n$$D(\\mathcal{S})\\leq D(U(\\mathcal{S})).$$ Observe that $$U(\\mathcal{S})=U(\\mathcal{S}_1)\\times \\cdots \\times U(\\mathcal{S}_k)=C_{n_1}\\times \\cdots \\times C_{n_k}.$$\nLet $g_i$ be a generator of the cyclic group $C_{n_i}$. Let ${\\bf a}=(a_1,a_2,\\ldots,a_k)$ be an element of $\\mathcal{S}$. 
We define $$\\mathcal{J}({\\bf a})=\\{i\\in [1,k]:a_i=\\infty_i\\}.$$\nWe see that for each $i\\in [1,k]$, either $a_i=\\infty_i$ or $a_i=m_i g_i$ where $m_i\\in[1,n_i]$, and moreover, ${\\bf a}\\in U(\\mathcal{S})$ if and only if $\\mathcal{J}({\\bf a})=\\emptyset$.\n\nFor any index set $I\\subseteq [1,k]$, let $\\psi_{I}$ be the canonical epimorphism of $\\mathcal{S}$ onto the semigroup $\\prod\\limits_{i\\in I}\\mathcal{S}_i$ given by\n$$\\psi_{I}({\\bf x})=(x_1',x_2',\\ldots,x_{k}')$$ with $$x_i'=x_i \\mbox{ for } i\\in I$$ and $$x_i'=0 \\mbox{ for } i\\in [1,k]\\setminus I,$$\nwhere ${\\bf x}=(x_1,x_2,\\ldots,x_k)$ denotes an arbitrary element of $\\mathcal{S}$. Note that\n$\\psi_{I}(\\mathcal{S})$ is a subsemigroup of $\\mathcal{S}$ which is isomorphic to the semigroup $\\prod\\limits_{i\\in I}\\mathcal{S}_i$, the product of the semigroups $\\mathcal{S}_i$ with $i\\in I$.\n\n\n\nNow take an arbitrary sequence $T$ of elements of $\\mathcal{S}$ of length at least $D(U(\\mathcal{S}))$.\nBy applying Lemma \\ref{Lemma recusive Davenport constant} recursively, we have that\n\\begin{align}\\label{equation |T|geq k+1}\n\\begin{array}{llll}\n|T|&\\geq & D(U(\\mathcal{S}))\\\\\n&=& D(\\prod\\limits_{i\\in [1,k]} C_{n_i})\\\\\n&\\geq& D(\\prod\\limits_{i\\in [1,k-1]} C_{n_i})+D(C_{n_k})-1\\geq D(\\prod\\limits_{i\\in [1,k-1]} C_{n_i})+1\\\\\n&\\geq & D(\\prod\\limits_{i\\in [1,k-2]} C_{n_i})+2\\\\\n&\\vdots& \\\\\n&\\geq & D(C_{n_1})+(k-1)\\\\\n&\\geq & 2+(k-1)\\\\\n&=& k+1.\\\\\n\\end{array}\n\\end{align}\n\nIt suffices to show that $T$ contains a proper subsequence $T'$ with $\\sigma(T')=\\sigma(T)$.\n\nSuppose first that all the terms of $T$ are from $U(\\mathcal{S})$, i.e., $\\mathcal{J}({\\bf x})= \\emptyset$ for each term ${\\bf x}$ of $T$. 
Since $|T|\\geq D(U(\\mathcal{S}))$, it follows that $T$ contains a {\\bf nonempty} subsequence $V$ with $\\sigma(V)=0_{\\mathcal{S}}$, i.e., the sum of all terms from $V$ is the identity element of $\\mathcal{S}$. This implies that $\\sigma(T V^{-1})=\\sigma(T V^{-1})+0_{\\mathcal{S}}=\\sigma(T V^{-1})+\\sigma(V)=\\sigma(T)$. Then $T'=T V^{-1}$ is the required proper subsequence of $T$, and we are done.\nHence, we assume that not all the terms of $T$ are from $U(\\mathcal{S})$, that is, $$\\mathcal{J}(\\sigma(T))\\neq \\emptyset.$$\n\nNote that for each $i\\in \\mathcal{J}(\\sigma(T))$, there exists at least one term of $T$, say ${\\bf a_i}$, such that $$i\\in \\mathcal{J}({\\bf a_i}).$$ It follows that there exists\na nonempty subsequence $V$ of $T$ of length at most $|\\mathcal{J}(\\sigma(T))|$ such that $$\\mathcal{J}(\\sigma(V))=\\mathcal{J}(\\sigma(T)).$$\nLet $$L=[1,k]\\setminus \\mathcal{J}(\\sigma(T)).$$\nNote that $\\psi_{L}(TV^{-1})$ is a sequence of elements in $U(\\psi_{L}(\\mathcal{S}))\\cong U(\\prod\\limits_{i\\in L} \\mathcal{S}_i)=\\prod\\limits_{i\\in L} C_{n_i}$. 
By \\eqref{equation |T|geq k+1}, we have that $$|TV^{-1}|\\geq 1.$$\nBy applying Lemma \\ref{Lemma recusive Davenport constant} recursively, we have that\n$$\\begin{array}{llll}\nD(U(\\psi_{L}(\\mathcal{S})))&=&D(\\prod\\limits_{i\\in L} C_{n_i}) \\\\\n&\\leq & D(\\prod\\limits_{i\\in [1,k]} C_{n_i})-(k-|L|)\\\\\n&=& D(\\prod\\limits_{i\\in [1,k]} C_{n_i})- |\\mathcal{J}(\\sigma(T))|\\\\\n&\\leq& D(\\prod\\limits_{i\\in [1,k]} C_{n_i})- |V|\\\\\n&=& D(U(\\mathcal{S}))- |V|\\\\\n&\\leq & |T|- |V|\\\\\n&=& |TV^{-1}|\\\\\n&=& |\\psi_{L}(TV^{-1})|.\\\\\n\\end{array}$$\nIt follows that $TV^{-1}$ contains a {\\bf nonempty} subsequence $W$ such that $\\sigma(\\psi_{L}(W))$ is the identity element of the group $U(\\psi_{L}(\\mathcal{S}))$, i.e., $$\\sigma(\\psi_{L}(W))=0_{U(\\psi_{L}(\\mathcal{S}))}=0_{\\mathcal{S}}.$$ Since $V\\mid TW^{-1}$, it follows that $\\mathcal{J}(\\sigma(TW^{-1}))=\\mathcal{J}(\\sigma(V))=\\mathcal{J}(\\sigma(T)),$ which implies that\n$$\\sigma(TW^{-1})+\\sigma(\\psi_{L}(W))=\\sigma(TW^{-1})+\\sigma(W).$$\nThen we have that\n$$\\sigma(TW^{-1})=\\sigma(TW^{-1})+0_{\\mathcal{S}}=\\sigma(TW^{-1})+\\sigma(\\psi_{L}(W))=\\sigma(TW^{-1})+\\sigma(W)=\\sigma(T),$$\nand thus, $T'=TW^{-1}$ is the required proper subsequence of $T$, and we are done.\n\\end{proof}\n\n\\bigskip\n\n\\noindent {\\bf Proof of Theorem \\ref{Theorem irreducible quotient ring}} \\\nLet $$f(x)=f_1(x) f_2(x)\\cdots f_k(x)$$ where $f_1(x), f_2(x), \\ldots,f_k(x)\\in {\\mathbb F}_p[x]$ are irreducible and pairwise non-associate. 
By the Chinese Remainder Theorem, we have $$\\frac{{\\mathbb F}_p[x]}{\\langle f(x)\\rangle}\\cong \\frac{{\\mathbb F}_p[x]}{\\langle f_1(x)\\rangle} \\times \\frac{{\\mathbb F}_p[x]}{\\langle f_2(x)\\rangle} \\times \\cdots \\times \\frac{{\\mathbb F}_p[x]}{\\langle f_k(x)\\rangle}.$$\nIt follows that the multiplicative semigroup $\\mathcal{S}_{f(x)}^p$ of the ring $\\frac{{\\mathbb F}_p[x]}{\\langle f(x)\\rangle}$ is isomorphic to the product of the multiplicative semigroups of $\\frac{{\\mathbb F}_p[x]}{\\langle f_1(x)\\rangle},\\frac{{\\mathbb F}_p[x]}{\\langle f_2(x)\\rangle},\\ldots,\\frac{{\\mathbb F}_p[x]}{\\langle f_k(x)\\rangle}$, i.e.,\n$$\\mathcal{S}_{f(x)}^p\\cong \\mathcal{S}_{f_1(x)}^p\\times \\mathcal{S}_{f_2(x)}^p\\times \\cdots\\times \\mathcal{S}_{f_k(x)}^p.$$\nSince the polynomial $f_i(x)$ is irreducible for each $i\\in [1,k]$, the quotient $\\frac{{\\mathbb F}_p[x]}{\\langle f_i(x)\\rangle}$ is a finite field, and thus the semigroup $\\mathcal{S}_{f_i(x)}^p$ is a cyclic group with a zero element adjoined. Then the conclusion follows from Lemma \\ref{Lemma product of cyclic semigroups} immediately.\n\\qed\n\n\\section{Concluding remarks}\n\nIn this final section, we propose further research by suggesting the following conjecture.\n\n\\begin{conj}\\label{Conjecture 1} \\ For any prime $p>2$, let $f(x)\\in {\\mathbb F}_p[x]$ with ${\\rm deg}(f(x))\\geq 1$.\nThen $$D(\\mathcal{S}_{f(x)}^p)=D(U(\\mathcal{S}_{f(x)}^p)).$$\n\\end{conj}\n\nFrom Theorem \\ref{Theorem irreducible quotient ring}, we need only to verify the case where there exists some irreducible polynomial $g(x)\\in {\\mathbb F}_p[x]$ with $g(x)^2\\mid f(x)$. 
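The Chinese Remainder decomposition used in the proof of Theorem~\ref{Theorem irreducible quotient ring} can be checked by brute force in a toy case (a sketch under our own encoding, not part of the proof; we take $p=3$ and $f(x)=x(x+1)$, a product of two non-associate irreducibles):

```python
# Toy check (our own sketch) of the CRT isomorphism for p = 3,
# f(x) = x(x+1): the map g -> (g mod x, g mod (x+1)) = (g(0), g(-1))
# is a multiplication-preserving bijection on F_3[x]/<f(x)>.
p = 3

def mul_mod_f(g, h):
    a, b = g; c, d = h                  # g = a + b*x, h = c + d*x
    # (a+bx)(c+dx) = ac + (ad+bc)x + bd*x^2, and x^2 = -x mod f
    return ((a * c) % p, (a * d + b * c - b * d) % p)

def phi(g):
    a, b = g
    return (a % p, (a - b) % p)         # (g(0), g(-1)) mod p

elems = [(a, b) for a in range(p) for b in range(p)]
assert len({phi(g) for g in elems}) == p * p        # bijection
for g in elems:
    for h in elems:
        x0, x1 = phi(g); y0, y1 = phi(h)
        # multiplication is coordinatewise on the CRT side
        assert phi(mul_mod_f(g, h)) == ((x0 * y0) % p, (x1 * y1) % p)
```

For example, $x\cdot(x+1)\equiv 0 \pmod{f(x)}$ in this toy ring, exhibiting the zero element of the multiplicative semigroup.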
Therefore, we close this paper by making a preliminary verification for the example $f(x)=(x+1)^2$.\n\n\\begin{prop}\\label{Theorem quotient ring}\n\\ For a prime $p>2$, $$D(\\mathcal{S}_{(x+1)^2}^p)=D(U(\\mathcal{S}_{(x+1)^2}^p)).$$\n\\end{prop}\n\n\\begin{proof} \\\nIn view of Lemma \\ref{proposition D(U(G))leq D(G)},\nwe need only to show that $D(S_{(x+1)^2}^p)\\leq\nD(U(S_{(x+1)^2}^p)).$ Take an arbitrary sequence $T$ of elements in the semigroup $S_{(x+1)^2}^p$ with length\n\\begin{equation}\\label{equation length of T}\n|T|=D(U(S_{(x+1)^2}^p)).\n\\end{equation} It suffices to show that $T$ is reducible. Note\nthat\n\\begin{equation}\\label{equation units}\nU(S_{(x+1)^2}^p)=\\{ax+b: a,b\\in {\\mathbb F}_p \\mbox{ and }a\\neq b\\}.\n\\end{equation}\nLet $g$ be a primitive root of the prime $p$. Take the sequence $V={\\bf a}_1\n{\\bf a}_2\\cdot\\ldots \\cdot {\\bf a}_{p-1}$ of elements in $S_{(x+1)^2}^p$, where ${\\bf a}_1=x$ and ${\\bf a}_2= \\cdots ={\\bf a}_{p-1} =g$. It is easy to check that $V$ is irreducible, which implies\n\\begin{equation}\\label{equation length of davenport}\nD(U(S_{(x+1)^2}^p)) \\geq |V|+1=p.\n\\end{equation}\n\n\nSuppose first that all the terms of $T$ are from\n$U(S_{(x+1)^2}^p)$. By \\eqref{equation length of T}, we have that $T$ is reducible, and we are done.\n\nSuppose that $T$ contains two terms, say ${\\bf a}_1,{\\bf a}_2$,\nwhich are not in $U(S_{(x+1)^2}^p)$. By \\eqref{equation units}, we have that $(x+1)^2$ divides the product of the two polynomials ${\\bf a}_1$ and ${\\bf a}_2$, that is,\nthe sum of the two elements ${\\bf a}_1$ and ${\\bf a}_2$ is the zero element of the semigroup\n$S_{(x+1)^2}^p$, i.e.,\n$${\\bf a}_1+{\\bf a}_2=\\infty_{S_{(x+1)^2}^p}.$$\nThen $\\sigma(T)=\\sigma(T\\cdot {\\bf a}_1^{-1}{\\bf a}_2^{-1})+\\sigma({\\bf a}_1{\\bf a}_2)=\\sigma(T\\cdot {\\bf a}_1^{-1}{\\bf a}_2^{-1})+\\infty_{S_{(x+1)^2}^p}=\\infty_{S_{(x+1)^2}^p}=\\sigma({\\bf a}_1{\\bf a}_2)$. 
Combined with \\eqref{equation length of davenport} and \\eqref{equation length of T}, which give $|T|\\geq p>2$, we have that ${\\bf a}_1{\\bf a}_2$ is the required {\\bf proper} subsequence of $T$, and we are done.\n\nIt remains to consider the case that\n$T$ contains exactly one element outside the group $U(S_{(x+1)^2}^p)$, say $${\\bf a}_1\\in S_{(x+1)^2}^p\\setminus U(S_{(x+1)^2}^p) \\mbox{ and }{\\bf a}_i\\in U(S_{(x+1)^2}^p) \\mbox{ for }i=2,3,\\ldots, |T|.$$\nBy \\eqref{equation units}, we have that\n$${\\bf a}_1=m(x+1),$$ where\n$m\\in\n{\\mathbb F}_p$. We may\nassume $T{\\bf a}_1^{-1}$ is irreducible (otherwise, $T$ would be reducible, and we are done). By the definition of irreducible sequences, we have that\n$$0_{S_{(x+1)^2}^p}=0_{U(S_{(x+1)^2}^p)}\\notin\n\\sum(T{\\bf a}_1^{-1}),$$ where the set $$\\sum(T{\\bf a}_1^{-1})=\\{\\sigma(T'): ~T' ~{\\rm is~ a ~ nonempty ~ subsequence ~ of }~ T{\\bf a}_1^{-1} \\}$$ consists of all the elements of the semigroup $S_{(x+1)^2}^p$ that can be represented as the sum of several distinct terms from the sequence $T{\\bf a}_1^{-1}$. We see that the sequence $T{\\bf a}_1^{-1}$ is a zero-sum free sequence of elements in the group $U(S_{(x+1)^2}^p)$ of length $D(U(S_{(x+1)^2}^p))-1$. Since $U(S_{(x+1)^2}^p)$ is a cyclic group of order $p(p-1)$, it follows that $$\\sum(T{\\bf a}_1^{-1}) =U(S_{(x+1)^2}^p)\\setminus \\{0_{U(S_{(x+1)^2}^p)}\\},$$\nwhich implies that\nthere exists a {\\bf nonempty} subsequence $W$ of $T{\\bf a}_1^{-1}$ with $\\sigma(W)=x+2$.\nWe see that\n$$\\sigma(W)+{\\bf a}_1=(x+2)\\cdot m(x+1)=m(x+1)={\\bf a}_1,$$ since $(x+2)(x+1)\\equiv x+1 \\pmod{(x+1)^2}$. Let $T'=TW^{-1}$. Then we have\nthat $$\\sigma(T')=\\sigma(T'{\\bf a}_1^{-1})+{\\bf a}_1=\\sigma(T'{\\bf a}_1^{-1})+({\\bf a}_1+\\sigma(W))=\\sigma(T')+\\sigma(W)=\\sigma(T),$$ and thus, $T'$ is the required proper subsequence of $T$. 
This completes the proof.\n\\end{proof}\n\n\n\n\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgements}\n\n\\noindent This work is supported by NSFC (61303023, 11301381, 11301531, 11371184, 11471244), Science and Technology Development Fund of Tianjin Higher\nInstitutions (20121003), Doctoral Fund of Tianjin Normal University (52XB1202), NSF of Henan Province (grant no. 142300410304).\n\n\n\n\\bigskip\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}