diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzehft" "b/data_all_eng_slimpj/shuffled/split2/finalzzehft" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzehft" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:intro}\n\n\\IEEEPARstart{L}{earning} convolutional operators from large datasets is a growing trend in signal\/image processing, computer vision, and machine learning.\nThe widely known \\emph{patch-domain} approaches for learning kernels (e.g., filter, dictionary, frame, and transform) extract patches from training signals for simple mathematical formulation and optimization, yielding (sparse) features of training signals \\cite{Aharon&Elad&Bruckstein:06TSP, Elad&Aharon:06TIP, Yaghoobi&etal:13TSP, Hawe&Kleinsteuber&Diepold:13TIP, Mairal&Bach&Ponce:14FTCGV, Cai&etal:14ACHA, Ravishankar&Bressler:15TSP, Pfister&Bresler:15SPIE, Coates&NG:bookCh}.\nDue to memory demands, using many overlapping patches across the training signals hinders using large datasets\nand building hierarchies on the features, \ne.g., deconvolutional neural networks \\cite{Zeiler&etal:10CVPR}, \nconvolutional neural network (CNN) \\cite{LeCun&etal:98ProcIEEE}, \nand multi-layer convolutional sparse coding \\cite{Papyan&Romano&Elad:17JMLR}.\nFor similar reasons, the memory requirement of patch-domain approaches discourages learned kernels from being applied to large-scale inverse problems. \n\n\n\n\nTo moderate these limitations of the patch-domain approach, the so-called \\emph{convolution} perspective has been recently introduced by learning filters and obtaining (sparse) representations directly from the original signals without storing many overlapping patches, e.g., convolutional dictionary learning (CDL) \\cite{Zeiler&etal:10CVPR, Bristow&etal:13CVPR, Heide&eta:15CVPR, Wohlberg:16TIP, Chun&Fessler:18TIP, Chun&Fessler:17SAMPTA}. \nFor large datasets, CDL using careful algorithmic designs \\cite{Chun&Fessler:18TIP} is more suitable for learning filters than patch-domain dictionary learning \\cite{Aharon&Elad&Bruckstein:06TSP}; in addition, CDL can learn translation-invariant filters without obtaining highly redundant sparse representations \\cite{Chun&Fessler:18TIP}.\nThe CDL method applies the convolution perspective for learning kernels within \\dquotes{synthesis} signal models.\nWithin \\dquotes{analysis} signal models, however, there exist no prior frameworks using the convolution perspective for learning convolutional operators, whereas patch-domain approaches for learning analysis kernels are introduced in \\cite{Yaghoobi&etal:13TSP, Hawe&Kleinsteuber&Diepold:13TIP, Cai&etal:14ACHA, Ravishankar&Bressler:15TSP, Pfister&Bresler:15SPIE}.\n(See brief descriptions about synthesis and analysis signal models in \\cite[Sec.~\\Romnum{1}]{Hawe&Kleinsteuber&Diepold:13TIP}.)\n\n\n\nResearchers interested in dictionary learning have actively studied the structures of kernels learned by the patch-domain approach \\cite{Yaghoobi&etal:13TSP, Hawe&Kleinsteuber&Diepold:13TIP, Cai&etal:14ACHA, Ravishankar&Bressler:15TSP, Pfister&Bresler:15SPIE, Barchiesi&Plumbley:13TSP, Bao&Cai&Ji:13ICCV, Ravishankar&Bressler:13ICASSP}. 
\nIn training CNNs (see Appendix~\\ref{sec:CNN}), however,\nthere has been less study of filter structures with nonconvex constraints, e.g., the orthogonality and unit-norm constraints in Section~\\ref{sec:CAOL}, although it is thought that diverse (i.e., incoherent) filters can improve performance for some applications, e.g., image recognition \\cite{Coates&NG:bookCh}.\nOn the application side, researchers have applied (deep) NNs to signal\/image recovery problems.\nRecent works combined model-based image reconstruction (MBIR) algorithms with image refining networks \\cite{Yang&etal:16NIPS, Zhang&etal:17CVPR, Chen&Pock:17PAMI, Chen&etal:17arXiv, Wu&etal:17Fully3D, Romano&Elad&Milanfar:17SJIS, Buzzard&etal:18SJIS, Chun&Fessler:18IVMSP, Chun&etal:18arXiv:momnet, Chun&etal:18Allerton}.\nIn these \\emph{iterative} NN methods, \nthe refining NNs should be nonexpansive to guarantee fixed-point convergence \n\\cite{Chun&etal:18arXiv:momnet}; \nhowever, their training does not consider filter diversity constraints, e.g., the orthogonality constraint in Section~\\ref{sec:CAOL}, \nand thus it is unclear whether the trained NNs are nonexpansive mappings \\cite{Chun&etal:18Allerton}.\n\n\n\\begin{figure*}[!pt]\n\\centering\n\n\\begin{tabular}{c}\n\\includegraphics[trim={3cm 11.2cm 3cm 11.3cm},clip,scale=0.46]{.\/Fig_v1\/diagram_abstract.pdf}\n\\end{tabular}\n\n\\vspace{-0.25em}\n\\caption{A general flowchart from\nlearning sparsifying operators $\\cO$ to \nsolving inverse problems via MBIR using learned operators $\\cO^\\star$;\nsee Section~\\ref{sec:back}.\nFor the $l\\rth$ training sample $x_l$, \n$F(\\cO; x_l)$ measures its sparse representation or sparsification errors, and\nthe sparsity of the representation generated by $\\cO$.\n}\n\\label{diag:abstract}\n\\end{figure*}\n\n\n\n\nThis paper proposes \\textit{1)} a new \\textit{convolutional analysis operator learning} (CAOL) framework that learns an analysis sparsifying regularizer with the convolution perspective, and \\textit{2)} a new convergent \\textit{Block Proximal Extrapolated Gradient method using a Majorizer} (BPEG-M \\cite{Chun&Fessler:18TIP}) for solving block multi-nonconvex problems \\cite{Xu&Yin:17JSC}.\nTo learn diverse filters, we propose \\textit{a)} CAOL with an orthogonality constraint that enforces a tight-frame (TF) filter condition in a convolution perspective, and \\textit{b)} CAOL with a regularizer that promotes filter diversity.\nFor CAOL, BPEG-M with sharper majorizers converges significantly faster than the state-of-the-art Block Proximal Gradient (BPG) method \\cite{Xu&Yin:17JSC}.\nThis paper also introduces a new X-ray computed tomography (CT) MBIR model using a convolutional sparsifying regularizer learned via CAOL \\cite{Chun&Fessler:18Asilomar}.\n\n\n\n\nThe remainder of this paper is organized as follows. 
\nSection~\\ref{sec:back} reviews how learned regularizers can help solve inverse problems.\nSection~\\ref{sec:CAOL} proposes the two CAOL models.\nSection~\\ref{sec:reBPG-M} introduces BPEG-M with several generalizations, analyzes its convergence, and applies a momentum coefficient formula and restarting technique from \\cite{Chun&Fessler:18TIP}.\nSection~\\ref{sec:CAOL+BPGM} applies the proposed BPEG-M methods to the CAOL models, designs two majorization matrices, and describes memory flexibility and applicability of parallel computing to BPEG-M-based CAOL.\nSection~\\ref{sec:CTrecon} introduces the CT MBIR model using a convolutional regularizer learned via CAOL \\cite{Chun&Fessler:18Asilomar}, along with its properties, i.e., its mathematical relation to a convolutional autoencoder, the importance of TF filters, and its algorithmic role in signal recovery.\nSection~\\ref{sec:result} reports numerical experiments that show \\textit{1)} the importance of sharp majorization in accelerating BPEG-M, and \\textit{2)} the benefits of BPEG-M-based CAOL -- acceleration, convergence, and memory flexibility.\nAdditionally, Section~\\ref{sec:result} reports sparse-view CT experiments that show \\textit{3)} the CT MBIR using learned convolutional regularizers significantly improves the reconstruction quality compared to that using a conventional edge-preserving (EP) regularizer, and \\textit{4)} more and wider filters in a learned regularizer better preserve edges in reconstructed images.\nFinally, Appendix~\\ref{sec:CNN} mathematically formulates unsupervised training of CNNs via CAOL, and shows that its updates attained via BPEG-M correspond to three important CNN operators.\nAppendix~\\ref{sec:egs} introduces some potential applications of CAOL to image processing, imaging, and computer vision.\n\n\n\n\n\n\n\n\\section{Background: MBIR Using \\emph{Learned} Regularizers} \\label{sec:back}\n\nTo recover a signal $x \\in \\bbC^{N'}$ from a data vector $y \\in \\bbC^m$, \none often considers the following MBIR optimization problem\n(Appendix~\\ref{sec:notation} provides mathematical notations): \n$\n\\argmin_{x \\in \\cX} f(x; y) + \\gamma \\, g(x),\n$\nwhere $\\cX$ is a feasible set,\n$f(x; y)$ is a data fidelity function that models the imaging physics (or image formation) and noise statistics, \n$\\gamma \\!>\\! 0$ is a regularization parameter,\nand $g(x)$ is a regularizer, such as total variation \\cite[\\S2--3]{Arridge&etal:19AN}. \nHowever, when inverse problems are extremely ill-conditioned, \nthe MBIR approach using hand-crafted regularizers $g(x)$ has limitations in recovering signals.\nAlternatively, there has been a growing trend in learning sparsifying regularizers\n(e.g., convolutional regularizers \\cite{Chun&Fessler:18TIP, Chun&Fessler:17SAMPTA, Chun&etal:19SPL, Chun&Fessler:18Asilomar, Crockett&etal:19CAMSAP}) \nfrom training datasets and applying the learned regularizers to the following MBIR problem \\cite{Arridge&etal:19AN}:\n\\be{\n\\label{eq:mbir:learn}\n\\argmin_{x \\in \\cX} f(x; y) + \\gamma g(x; \\cO^\\star),\n\\tag{B1}\n}\nwhere a learned regularizer $g(x; \\cO^\\star)$ quantifies consistency between \nany candidate $x$ and the training data that is encapsulated in some trained sparsifying operators $\\cO^\\star$.\nThe diagram in Fig.~\\ref{diag:abstract} shows the general process from\ntraining sparsifying operators to solving inverse problems via \\R{eq:mbir:learn}.\nSuch models \\R{eq:mbir:learn} arise in a wide range of applications. 
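In all such applications, the learned regularizer enters the reconstruction cost in the same way; for concreteness, the following sketch (Python\/NumPy; hypothetical filters and sizes) illustrates one way to evaluate a learned convolutional sparsifying regularizer $g(x; \\cO^\\star)$ of the form proposed in Section~\\ref{sec:CAOL}, assuming circular 2D convolution: for each filter response value $v$, minimizing $\\frac{1}{2}(v-z)^2$ plus an $\\ell^0$ penalty $\\alpha$ on a nonzero code $z$ gives the closed-form value $\\min( v^2 \/ 2, \\alpha)$.

\\begin{verbatim}
# Illustrative sketch (toy data, hypothetical pre-learned filters): evaluating
# a convolutional sparsifying regularizer g(x; D) for a candidate image x,
# assuming circular 2D convolution.
import numpy as np

def conv2_circ(x, d):
    # circular 2D convolution via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(d, s=x.shape)))

def learned_conv_reg(x, filters, alpha):
    # per response v: min over z of 0.5*(v - z)^2 + alpha*[z != 0] = min(0.5*v^2, alpha)
    return sum(np.minimum(0.5 * conv2_circ(x, d) ** 2, alpha).sum()
               for d in filters)

x = np.random.randn(64, 64)
filters = [np.random.randn(7, 7) / 7.0 for _ in range(10)]   # stand-ins for learned d_k
print(learned_conv_reg(x, filters, alpha=1e-3))
\\end{verbatim}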
\nSee some examples in Appendix~\\ref{sec:egs}.\n\nThis paper describes multiple aspects of learning convolutional regularizers. \nThe next section first starts with proposing a new convolutional regularizer.\n\n\n\n\n\n\\section{CAOL: Models \\textit{Learning} Convolutional Regularizers} \\label{sec:CAOL}\n\nThe goal of CAOL is to find a set of filters that \\dquotes{best} sparsify a set of training images.\nCompared to hand-crafted regularizers,\nlearned convolutional regularizers \ncan better extract \\dquotes{true} features of estimated images and \nremove \\dquotes{noisy} features with thresholding operators.\nWe propose the following CAOL model:\n\\ea{\n\\label{sys:CAOL}\n\\argmin_{D = [d_1, \\ldots, d_K]} \\min_{\\{ z_{l,k} \\}} &~ F(D, \\{ z_{l,k} \\}) + \\beta g (D), \\tag{P0}\n\\\\\nF(D, \\{ z_{l,k} \\}) & := \\sum_{l=1}^L \\sum_{k=1}^K \\frac{1}{2} \\left\\| d_k \\circledast x_l - z_{l,k} \\right\\|_2^2 + \\alpha \\| z_{l,k} \\|_0, \\nn\n} \nwhere $\\circledast$ denotes a convolution operator (see details about boundary conditions in the supplementary material), $\\{ x_l \\in \\bbC^N : l =1,\\ldots,L \\}$ is a set of training images, $\\{ d_{k} \\in \\bbC^{R}: k = 1,\\ldots, K \\}$ is a set of convolutional kernels, $\\{ z_{l,k} \\in \\bbC^N : l = 1,\\ldots,L, k=1,\\ldots,K \\}$ is a set of sparse codes, and $g ( D )$ is a regularizer or constraint that encourages filter diversity or incoherence, $\\alpha \\!>\\! 0$ is a thresholding parameter controlling the sparsity of features $\\{ z_{l,k} \\}$, and $\\beta > 0$ is a regularization parameter for $g (D)$.\nWe group the $K$ filters into a matrix $D \\in \\bbC^{R \\times K}$:\n\\be{\n\\label{eq:D}\nD := \\left[ \\arraycolsep=3pt \\begin{array}{ccc} d_1 & \\ldots & d_K \\end{array} \\right].\n}\nFor simplicity, we fix the dimension for training signals, i.e., $\\{ x_l, z_{l,k} \\in \\bbC^N \\}$, but the proposed model \\R{sys:CAOL} can use training signals of different dimension, i.e., $\\{ x_l, z_{l,k} \\in \\bbC^{N_l} \\}$.\nFor sparse-view CT in particular, \nthe diagram in Fig.~\\ref{diag:caol-ctmbir} shows the process from\nCAOL \\R{sys:CAOL} to solving its inverse problem via\nMBIR using learned convolutional regularizers.\n\nThe following two subsections design the constraint or regularizer $g(D)$ \nto avoid redundant filters (without it, all filters could be identical).\n\n\n\n\n\n\n\\begin{figure*}[!pt]\n\\centering\n\n\\begin{tabular}{c}\n\\includegraphics[trim={2.2cm 9.5cm 2.2cm 9.9cm},clip,scale=0.46]{.\/Fig_v1\/diagram_caol-ctmbir.pdf}\n\\end{tabular}\n\n\\vspace{-0.25em}\n\\caption{A flowchart from CAOL \\R{sys:CAOL} to \nMBIR using a convolutional sparsifying regularizer learned via CAOL \\R{sys:CT&CAOL}\nin sparse-view CT.\nSee details of the CAOL process \\R{sys:CAOL} and its variants \\R{sys:CAOL:orth}--\\R{sys:CAOL:div}, \nand the CT MBIR process \\R{sys:CT&CAOL} in \nSection~\\ref{sec:CAOL} and Section~\\ref{sec:CTrecon}, respectively.\n}\n\\label{diag:caol-ctmbir}\n\\end{figure*}\n\n\n\n\n\n\n\\subsection{CAOL with Orthogonality Constraint} \\label{sec:CAOL:orth}\n\nWe first propose a CAOL model with a nonconvex orthogonality constraint on the filter matrix $D$ in \\R{eq:D}:\n\\be{\n\\label{sys:CAOL:orth}\n\\argmin_{D} \\min_{\\{ z_{l,k} \\}} ~ F(D, \\{ z_{l,k} \\}) \\quad \\mathrm{subj.~to} ~ D D^H = \\frac{1}{R} \\cdot I. 
\\tag{P1}\n} \nThe orthogonality condition $D D^H = \\frac{1}{R} I$ in \\R{sys:CAOL:orth} enforces a TF condition on the filters $\\{ d_k \\}$ in CAOL \\R{sys:CAOL}.\nProposition~\\ref{p:TFconst} below formally states this relation.\n\n\\prop{[Tight-frame filters]\\label{p:TFconst}\nFilters satisfying the orthogonality constraint \n$D D^H = \\frac{1}{R} I$ in \\R{sys:CAOL:orth} satisfy the following TF condition in a convolution perspective:\n\\be{\n\\label{eq:CAOL:TFcond}\n\\sum_{k=1}^K \\nm{ d_k \\circledast x }_2^2 = \\nm{ x }_2^2, \\quad \\forall x \\in \\bbC^N,\n}\nfor both circular and symmetric boundary conditions. \n}\n\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:p:TF} of the supplementary material.\n}\n\n\nProposition~\\ref{p:TFconst} corresponds to a TF result from patch-domain approaches; see~Section~\\ref{sec:prf:p:TF}. (Note that the patch-domain approach in \\cite[Prop.~3]{Cai&etal:14ACHA} requires $R = K$.)\nHowever, we constrain the filter dimension to be $R \\leq K$ to have an efficient solution for CAOL model \\R{sys:CAOL:orth}; see Proposition~\\ref{p:orth} later.\nThe following section proposes a more flexible CAOL model in terms of the filter dimensions $R$ and $K$.\n\n\n\n\n\\subsection{CAOL with Diversity Promoting Regularizer}\n\nAs an alternative to the CAOL model \\R{sys:CAOL:orth}, we propose a CAOL model with a diversity promoting regularizer and a nonconvex norm constraint on the filters $\\{ d_k \\}$:\n\\ea{\n\\label{sys:CAOL:div}\n\\argmin_{D} \\min_{\\{ z_{l,k} \\}} &~ F(D, \\{ z_{l,k} \\}) + \\frac{\\beta}{2} \\overbrace{\\left\\| D^H D - \\frac{1}{R} \\cdot I \\right\\|_{\\mathrm{F}}^2}^{\\mathrm{\\hbox{$=: g_{\\text{div}} (D)$}}}, \\nn\n\\\\\n\\mathrm{subject~to} ~&~ \\| d_{k} \\|_2^2 = \\frac{1}{R}, \\quad k = 1,\\ldots, K. \\tag{P2}\n} \nIn the CAOL model \\R{sys:CAOL:div}, we consider the following:\n\\bulls{\n\\item The constraint in \\R{sys:CAOL:div} forces the learned filters $\\{ d_k \\}$ to have uniform energy. In addition, it avoids the \\dquotes{scale ambiguity} problem \\cite{Rem&Karin:10TIT}. \n\n\\item The regularizer in \\R{sys:CAOL:div}, $g_{\\text{div}} (D)$, promotes filter diversity, i.e., incoherence between $d_k$ and $\\{ d_{k'} : k' \\neq k \\}$, measured by $| \\ip{d_k}{d_{k'}} |^2$ for $k \\neq k'$.\n}\n\nWhen $R = K$ and $\\beta \\rightarrow \\infty$, the model \\R{sys:CAOL:div} becomes \\R{sys:CAOL:orth} since $D^H D = \\frac{1}{R} I$ implies $D D^H = \\frac{1}{R} I$ (for square matrices $A$ and $B$, if $AB=I$ then $BA = I$).\nThus \\R{sys:CAOL:div} generalizes \\R{sys:CAOL:orth} by relaxing the off-diagonal elements of the equality constraint in \\R{sys:CAOL:orth}. (In other words, when $R=K$, the orthogonality constraint in \\R{sys:CAOL:orth} enforces the TF condition and promotes the filter diversity.)\nOne price of this generalization is the extra tuning parameter $\\beta$.\n\n\n\n\\R{sys:CAOL:orth}--\\R{sys:CAOL:div} are challenging nonconvex optimization problems and block optimization approaches seem suitable. 
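Before turning to the optimization method, we note that the TF property in Proposition~\\ref{p:TFconst} is easy to check numerically; the following small sketch (Python\/NumPy; a randomly generated orthogonal matrix standing in for learned filters, $3 \\times 3$ real filters with $R = K = 9$, and a circular boundary) verifies \\R{eq:CAOL:TFcond}.

\\begin{verbatim}
# Numerical check of the tight-frame filter condition, assuming 3 x 3 real
# filters (R = K = 9) and a circular boundary condition.
import numpy as np

R = K = 9
Q, _ = np.linalg.qr(np.random.randn(R, R))
D = Q / np.sqrt(R)                          # D @ D.T = (1/R) * I
assert np.allclose(D @ D.T, np.eye(R) / R)

x = np.random.randn(32, 32)
energy = 0.0
for k in range(K):
    d = D[:, k].reshape(3, 3)
    v = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(d, s=x.shape))  # d_k (conv) x
    energy += np.linalg.norm(v) ** 2
print(np.isclose(energy, np.linalg.norm(x) ** 2))   # True: sum_k ||d_k conv x||^2 = ||x||^2
\\end{verbatim}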
\nThe following section proposes a new block optimization method with momentum and majorizers, \nto rapidly solve the multiple block multi-nonconvex problems proposed in this paper, while guaranteeing convergence to critical points.\n\n\n\n\n\n\n\n\n\n\\section{BPEG-M: Solving Block Multi-Nonconvex Problems with Convergence Guarantees} \\label{sec:reBPG-M}\n\nThis section describes a new optimization approach, BPEG-M,\nfor solving block multi-nonconvex problems like \n\\textit{a)} CAOL \\R{sys:CAOL:orth}--\\R{sys:CAOL:div},\\footnote{\nA block coordinate descent algorithm can be applied to CAOL \\R{sys:CAOL:orth};\nhowever, its convergence guarantee in solving CAOL \\R{sys:CAOL:orth} is not yet known and might require stronger sufficient conditions than BPEG-M \\cite{Tseng:2001JOTA}.\n} \n\\textit{b)} CT MBIR \\R{sys:CT&CAOL} using learned convolutional regularizer via \\R{sys:CAOL:orth} (see Section~\\ref{sec:CTrecon}),\nand \\textit{c)} \\dquotes{hierarchical} CAOL \\R{sys:CNN:orth} (see Appendix~\\ref{sec:CNN}).\n\n\n\\subsection{BPEG-M -- Setup} \\label{sec:BPGM:setup}\n\nWe treat the variables of the underlying optimization problem either as a single block or multiple disjoint blocks. \nSpecifically, consider the following \\textit{block multi-nonconvex} optimization problem:\n\\ea{\n\\label{sys:multiConvx}\n\\min &~ F(x_1,\\ldots, x_B) := f(x_1,\\ldots,x_B) + \\sum_{b=1}^B g_b (x_b),\n}\nwhere variable $x$ is decomposed into $B$ blocks $x_1 ,\\ldots,x_B$ ($\\{ x_b \\in \\bbR^{n_b} : b=1,\\ldots,B \\}$), $f$ is assumed to be continuously differentiable, but functions $\\{ g_b : b = 1,\\ldots,B \\}$ are not necessarily differentiable. \nThe function $g_b$ can incorporate the constraint $x_b \\in \\cX_b$, by allowing any $g_b$ to be extended-valued, e.g., $g_b (x_b) = \\infty$ if $x_b \\notin \\cX_b$, for $b=1,\\ldots,B$.\nIt is standard to assume that both $f$ and $\\{ g_b \\}$ are closed and proper and the sets $\\{ \\cX_b \\}$ are closed and nonempty.\nWe do \\textit{not} assume that $f$, $\\{ g_b \\}$, or $\\{ \\cX_b \\}$ are convex. Importantly, $g_b$ can be a nonconvex $\\ell^p$ quasi-norm, $p \\in [0, 1)$.\nThe general block multi-convex problem in \\cite{Chun&Fessler:18TIP, Xu&Yin:13SIAM} is a special case of \\R{sys:multiConvx}.\n\n\nThe BPEG-M framework considers a more general concept than Lipschitz continuity of the gradient as follows:\n\n\\defn{[$M$-Lipschitz continuity] \\label{d:QM}\nA function $g: \\bbR^n \\rightarrow \\bbR^{n}$ is \\emph{$M$-Lipschitz continuous} on $\\bbR^n$ if there exist a (symmetric) positive definite matrix $M$ such that\n\\bes{\n\\nm{g(x) - g(y)}_{M^{-1}} \\leq \\nm{x - y}_{M}, \\quad \\forall x,y,\n}\nwhere $\\nm{x}_{M}^2 := x^T M x$.\n}\n\nLipschitz continuity is a special case of $M$-Lipschitz continuity with $M$ equal to a scaled identity matrix with \na Lipschitz constant of the gradient $\\nabla f$ (e.g., for $f(x) = \\frac{1}{2} \\| Ax - b\\|_2^2$, the (smallest) Lipschitz constant of $\\nabla f$ is the maximum eigenvalue of $A^T A$).\nIf the gradient of a function is $M$-Lipschitz continuous, then we obtain the following quadratic majorizer (i.e., surrogate function \\cite{Lange&Hunter&Yang:00JCGS, Jacobson&Fessler:07TIP}) at a given point $y$ without assuming convexity:\n\n\\lem{[Quadratic majorization (QM) via $M$-Lipschitz continuous gradients]\n\\label{l:QM}\nLet $f : \\bbR^n \\rightarrow \\bbR$. 
If $\\nabla f$ is $M$-Lipschitz continuous, then\n\\bes{\nf(x) \\leq f(y) + \\ip{\\nabla f(y)}{x-y} + \\frac{1}{2} \\nm{x - y}_M^2, \\quad \\forall x,y \\in \\bbR^n.\n}\n}\n\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:l:QM} of the supplementary material.\n}\n\nExploiting Definition~\\ref{d:QM} and Lemma~\\ref{l:QM}, the proposed method, BPEG-M, is given as follows.\nTo solve \\R{sys:multiConvx}, we minimize a majorizer of $F$ cyclically over each block $x_1,\\ldots,x_B$, while fixing the remaining blocks at their previously updated variables. Let $x_b^{(i+1)}$ be the value of $x_b$ after its $i\\rth$ update, and define\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\bes{\nf_b^{(i+1)}(x_b) := f \\Big( x_1^{(i+1)}, \\ldots, x_{b-1}^{(i+1)}, x_b, x_{b+1}^{(i)}, \\ldots, x_{B}^{(i)} \\Big), \\quad \\forall b,i.\n}\n\\endgroup\nAt the $b\\rth$ block of the $i\\rth$ iteration, we apply Lemma~\\ref{l:QM} to functional $f_b^{(i+1)}(x_b)$ with a $M_b^{(i+1)}$-Lipschitz continuous gradient, and minimize the majorized function.\\footnote{The quadratically majorized function allows a unique minimizer if $g_b^{(i+1)} (x_b)$ is convex and $\\cX_b^{(i+1)}$ is a convex set (note that $M_b^{(i+1)} \\!\\succ\\! 0$).} Specifically, BPEG-M uses the updates\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\fontsize{9.5pt}{11.4pt}\\selectfont\n\\allowdisplaybreaks\n\\ea{\nx_b^{(i+1)} \n& = \\argmin_{ x_b } \\hspace{0.1em} \\ip{ \\nabla_{x_b} f_b^{(i+1)} (\\acute{x}_b^{(i+1)}) }{ x_b - \\acute{x}_b^{(i+1)} } \\nn\n\\\\\n& \\hspace{4.5em} + \\frac{1}{2} \\nm{ x_b - \\acute{x}_b^{(i+1)} }_{\\widetilde{M}_b^{(i+1)}}^2 + g_b (x_b) \\label{update:x} \n\\\\\n&= \\argmin_{ x_b } \\hspace{0.1em} \\frac{1}{2} \\bigg\\| x_b - \\bigg( \\acute{x}_b^{(i+1)} - \\Big( \\widetilde{M}_b^{(i+1)} \\Big)^{\\!\\!\\!-1} \\nn\n\\\\\n& \\hspace{8.0em} \\cdot \\nabla_{x_b} f_b^{(i+1)} (\\acute{x}_b^{(i+1)}) \\bigg) \\bigg\\|_{\\widetilde{M}_b^{(i+1)}}^2 \\!\\! + g_b (x_b) \\nn\n\\\\\n&= \\mathrm{Prox}_{g_b}^{\\widetilde{M}_b^{(i+1)}} \\!\\! \\bigg( \\! \\underbrace{ \\acute{x}_b^{(i+1)} - \\Big( \\! \\widetilde{M}_b^{(i+1)} \\! \\Big)^{\\!\\!\\!-1} \\! \\nabla_{x_b} f_b^{(i+1)} (\\acute{x}_b^{(i+1)}) }_{\\text{\\emph{extrapolated gradient step using a majorizer} of $f_b^{(i+1)}$}} \\! \\bigg), \\nn\n}\n\\endgroup\nwhere\n\\be{\n\\label{update:xacute}\n\\acute{x}_b^{(i+1)} = x_{b}^{(i)} + E_b^{(i+1)} \\left( x_b^{(i)} - x_b^{(i-1)} \\right),\n}\nthe proximal operator is defined by\n\\bes{\n\\mathrm{Prox}_g^M (y) := \\argmin_{x} \\, \\frac{1}{2} \\nm{x - y}_M^2 + g(x),\n}\n$\\nabla f_b^{(i+1)} (\\acute{x}_b^{(i+1)})$ is the block-partial gradient of $f$ at $\\acute{x}_b^{(i+1)}$, an \\textit{upper-bounded majorization matrix} is updated by\n\\be{\n\\label{update:Mtilde}\n\\widetilde{M}_b^{(i+1)} = \\lambda_b \\cdot M_b^{(i+1)} \\succ 0, \\qquad \\lambda_b > 1,\n}\nand $M_b^{(i+1)} \\!\\in\\! 
\\bbR^{n_b \\times n_b}$ is a symmetric positive definite \\textit{majorization matrix} of $\\nabla f_b^{(i+1)}$.\nIn \\R{update:xacute}, the $\\bbR^{n_b \\times n_b}$ matrix $E_b^{(i+1)} \\succeq 0$ is an \\textit{extrapolation matrix} that accelerates convergence in solving block multi-convex problems \\cite{Chun&Fessler:18TIP}.\nWe design it in the following form:\n\\be{\n\\label{update:Eb}\nE_b^{(i+1)} = e_b^{(i)} \\cdot \\frac{\\delta (\\lambda_b - 1)}{2 (\\lambda_b + 1)} \\cdot \\left( M_b^{(i+1)} \\right)^{-1\/2} \\left( M_b^{(i)} \\right)^{1\/2}, \n}\nfor some $\\{ 0 \\leq e_b^{(i)} \\leq 1 : \\forall b, i \\}$ and $\\delta < 1$, to satisfy condition \\R{cond:Wb} below.\nIn general, choosing $\\lambda_b$ values in \\R{update:Mtilde}--\\R{update:Eb} to accelerate convergence is application-specific.\nAlgorithm~\\ref{alg:BPGM} summarizes these updates.\n\n\n\n\n\n\n\\begin{algorithm}[t!]\n\\caption{BPEG-M}\n\\label{alg:BPGM}\n\n\\begin{algorithmic}\n\\REQUIRE $\\{ x_b^{(0)} = x_b^{(-1)} : \\forall b \\}$, $\\{ E_b^{(i)} \\in [0, 1], \\forall b,i \\}$, $i=0$\n\n\\WHILE{a stopping criterion is not satisfied}\n\n\\FOR{$b = 1,\\ldots,B$}\n\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\fontsize{9.5pt}{11.4pt}\\selectfont\n\\STATE Calculate $M_b^{(i+1)}$, $\\displaystyle \\widetilde{M}_b^{(i+1)}$ by \\R{update:Mtilde}, and $E_b^{(i+1)}$ by \\R{update:Eb}\n\\STATE $\\displaystyle \\acute{x}_b^{(i+1)} = \\, x_{b}^{(i)} + E_b^{(i+1)} \\! \\left( x_b^{(i)} - x_b^{(i-1)} \\right)$\n\\STATE $\\displaystyle x_b^{(i+1)} = \\, \\ldots$ \\\\ $\\displaystyle \\mathrm{Prox}_{g_b}^{\\widetilde{M}_b^{(i+1)}} \\!\\! \\bigg( \\!\\! \\acute{x}_b^{(i+1)} \\!-\\! \\Big( \\! \\widetilde{M}_b^{(i+1)} \\! \\Big)^{\\!\\!\\!\\!-1} \\!\\! \\nabla f_b^{(i+1)} (\\acute{x}_b^{(i+1)}) \\!\\! \\bigg)$\n\\endgroup\n\n\\ENDFOR\n\n\\STATE $i = i+1$\n\n\\ENDWHILE\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\n\nThe majorization matrices $M_b^{(i)}$ and $\\widetilde{M}_b^{(i+1)}$ in \\R{update:Mtilde} influence the convergence rate of BPEG-M.\nA tighter majorization matrix (i.e., a matrix giving tighter bounds in the sense of Lemma~\\ref{l:QM}) provided faster convergence rate \\cite[Lem.~1]{Fessler&etal:93TNS}, \\cite[Fig.~2--3]{Chun&Fessler:18TIP}.\nAn interesting observation in Algorithm~\\ref{alg:BPGM} is that there exists a tradeoff between majorization sharpness via \\R{update:Mtilde} and extrapolation effect via \\R{update:xacute} and \\R{update:Eb}. For example, increasing $\\lambda_b$ (e.g., $\\lambda_b = 2$) allows more extrapolation but results in looser majorization; setting $\\lambda_b \\rightarrow 1$ results in sharper majorization but provides less extrapolation.\n\n\n\\rem{\\label{r:BPGM}\nThe proposed BPEG-M framework -- with key updates \\R{update:x}--\\R{update:xacute} -- generalizes the BPG method \\cite{Xu&Yin:17JSC}, and has several benefits over BPG \\cite{Xu&Yin:17JSC} and BPEG-M introduced earlier in \\cite{Chun&Fessler:18TIP}:\n\\begin{itemize}\n\\item The BPG setup in \\cite{Xu&Yin:17JSC} is a particular case of BPEG-M using a scaled identity majorization matrix $M_b$ with a Lipschitz constant of $\\nabla f_b^{(i+1)} (\\acute{x}_b^{(i+1)})$. The BPEG-M framework can significantly accelerate convergence by allowing sharp majorization; see \\cite[Fig.~2--3]{Chun&Fessler:18TIP} and Fig.~\\ref{fig:Comp:diffBPGM}. 
\nThis generalization was first introduced for block multi-convex problems in \\cite{Chun&Fessler:18TIP}, \nbut the proposed BPEG-M in this paper addresses the more general problem, block multi-(non)convex optimization.\n\n\\item BPEG-M is useful for controlling the tradeoff between majorization sharpness and extrapolation effect in different blocks, by allowing each block to use different $\\lambda_b$ values. If tight majorization matrices can be designed for a certain block $b$, then it could be reasonable to maintain the majorization sharpness by setting $\\lambda_b$ very close to 1. \nWhen setting $\\lambda_b = 1 + \\epsilon$ (e.g., $\\epsilon$ is a machine epsilon) and using $E_b^{(i+1)} = 0$ (no extrapolation), solutions of the original and its upper-bounded problem become (almost) identical. In such cases, it is unnecessary to solve the upper bounded problem \\R{update:x}, and the proposed BPEG-M framework allows using the solution of $f_b^{(i+1)}(x_b)$ without QM; see Section~\\ref{sec:CAOL:spCd}. This generalization was not considered in \\cite{Xu&Yin:17JSC}.\n\n\\item The condition for designing the extrapolation matrix \\R{update:Eb}, i.e., \\R{cond:Wb} in Assumption 3, is more general than that in \\cite[(9)]{Chun&Fessler:18TIP} (e.g., \\R{eg:Wb:eigen}).\nSpecifically, the matrices $E_b^{(i+1)}$ and $M_b^{(i+1)}$ in \\R{update:Eb} need not be diagonalized by the same basis.\n\\end{itemize}\n}\n\n\n\n\n\n\n\nThe first two generalizations lead to the question, \\dquotes{Under the sharp QM regime (i.e., having tight bounds in Lemma~\\ref{l:QM}), what is the best way in controlling $\\{ \\lambda_b \\}$ in \\R{update:Mtilde}--\\R{update:Eb} in Algorithm~\\ref{alg:BPGM}?}\nOur experiments show that, if sufficiently sharp majorizers are obtained for partial or all blocks, then giving more weight to sharp majorization provides faster convergence compared to emphasizing extrapolation; for example, $\\lambda_{b} = 1+\\epsilon$ gives faster convergence than $\\lambda_b = 2$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{BPEG-M -- Convergence Analysis} \\label{sec:convg-analysis}\n\nThis section analyzes the convergence of Algorithm~\\ref{alg:BPGM} under the following assumptions.\n\n\\begin{itemize}\n\\item[] {\\em Assumption 1)} $F$ is proper and lower bounded in $\\dom(F)$, $f$ is continuously differentiable, $g_b$ is proper lower semicontinuous, $\\forall b$.\\footnote{\n$F : \\bbR^n \\rightarrow (- \\infty, + \\infty]$ is proper if $\\dom F \\neq \\emptyset$. \n$F$ is lower bounded in $\\dom(F) := \\{ x : F(x) < \\infty \\}$ if $\\inf_{x \\in \\dom(F)} F(x) > -\\infty$.\n$F$ is lower semicontinuous at point $x_0$ if $\\liminf_{x \\rightarrow x_0} F(x) \\geq F(x_0)$.\n}\n\\R{sys:multiConvx} has a critical point $\\bar{x}$, i.e., $0 \\in \\partial F(\\bar{x})$,\nwhere $\\partial F(x)$ denotes the limiting subdifferential of $F$ at $x$ (see \\cite[\\S 1.9]{Kruger:03JMS}, \\cite[\\S 8]{Rockafellar&Wets:book}).\n\\item[] {\\em Assumption 2)} The block-partial gradients of $f$, $\\nabla f_b^{(i+1)}$, are $M_b^{(i+1)}$-Lipschitz continuous, i.e.,\n\\ea{\n\\label{eq:QMbound}\n&~ \\nm{ \\nabla_{x_b} f_b^{(i+1)} (u) - \\nabla_{x_b} f_b^{(i+1)} (v) }_{\\big( \\! M_b^{(i+1)} \\! 
\\big)^{\\!-1}} \n\\nn\n\\\\\n& \\leq \\nm{u - v}_{M_b^{(i+1)}},\n}\nfor $u,v \\in \\bbR^{n_b}$, and (unscaled) majorization matrices satisfy $m_{b} I_{n_b} \\preceq M_b^{(i+1)}$ with $0 < m_{b} < \\infty$, $\\forall b, i$.\n\\item[] {\\em Assumption 3)} The extrapolation matrices $E_b^{(i+1)} \\succeq 0$ satisfy\n\\be{\n\\label{cond:Wb}\n\\Big( E_b^{(i+1)} \\Big)^T M_b^{(i+1)} E_b^{(i+1)} \\preceq \\frac{\\delta^2 (\\lambda_b - 1)^2}{4 (\\lambda_b + 1)^2} \\cdot M_b^{(i)},\n}\nfor any $\\delta < 1$, $\\forall b,i$.\n\\end{itemize}\n\n\nCondition \\R{cond:Wb} in Assumption 3 generalizes that in \\cite[Assumption~3]{Chun&Fessler:18TIP}.\nIf eigenspaces of $E_b^{(i+1)}$ and $M_b^{(i+1)}$ coincide (e.g., diagonal and circulant matrices), $\\forall i$ \\cite[Assumption~3]{Chun&Fessler:18TIP}, \\R{cond:Wb} becomes\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\be{\n\\label{eg:Wb:eigen}\nE_b^{(i+1)} \\preceq \\frac{\\delta (\\lambda_b - 1)}{2 (\\lambda_b + 1)} \\cdot \\left( M_b^{(i)} \\right)^{1\/2} \\left( M_b^{(i+1)} \\right)^{-1\/2},\n} \n\\endgroup\nas similarly given in \\cite[(9)]{Chun&Fessler:18TIP}. \nThis generalization allows one to consider arbitrary structures of $M_b^{(i)}$ across iterations. \n\n\n\n\\lem{[Sequence bounds]\n\\label{l:seqBound}\nLet $\\{ \\widetilde{M}_b : b = 1,\\ldots,B \\}$ and $\\{ E_b : b = 1,\\ldots,B \\}$ be as in \\R{update:Mtilde}--\\R{update:Eb}, respectively. \nThe cost function decrease for the $i\\rth$ update satisfies:\n\\ea{\n\\label{eq:l:seqBound}\nF_b (x_b^{(i)}) - F_b (x_b^{(i+1)}) \n& \\geq \\frac{\\lambda_b - 1}{4} \\nm{ x_b^{(i)} - x_b^{(i+1)} }_{M_b^{(i+1)}}^2 \\nn\n\\\\\n& \\hspace{1.2em} - \\frac{ ( \\lambda_b - 1 ) \\delta^2}{4} \\nm{ x_b^{(i-1)} - x_b^{(i)} }_{ M_b^{(i)}}^2\n}\n}\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:l:seqBound} of the supplementary material.\n}\n\n \nLemma~\\ref{l:seqBound} generalizes \\cite[Lem.~1]{Xu&Yin:17JSC} using $\\{ \\lambda_b = 2 \\}$. \nTaking the majorization matrices in \\R{eq:l:seqBound} to be scaled identities with Lipschitz constants, i.e., $M_b^{(i+1)} \\!=\\! L_b^{(i+1)} \\cdot I$ and $M_b^{(i)} \\!=\\! L_b^{(i)} \\cdot I$, where $L_b^{(i+1)}$ and $L_b^{(i)}$ are Lipschitz constants, the bound \\R{eq:l:seqBound} becomes equivalent to that in \\cite[(13)]{Xu&Yin:17JSC}.\nNote that BPEG-M for block multi-convex problems in \\cite{Chun&Fessler:18TIP} can be viewed within BPEG-M in Algorithm~\\ref{alg:BPGM}, by similar reasons in \\cite[Rem.~2]{Xu&Yin:17JSC} -- bound \\R{eq:l:seqBound} holds for the block multi-convex problems by taking $E_b^{(i+1)}$ in \\R{eg:Wb:eigen} as $E_b^{(i+1)} \\preceq \\delta \\cdot ( M_b^{(i)} )^{1\/2} ( M_b^{(i+1)} )^{-1\/2}$ in \\cite[Prop.~3.2]{Chun&Fessler:18TIP}. 
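To make the updates in Algorithm~\\ref{alg:BPGM} concrete, the following toy sketch (Python\/NumPy; synthetic data, two blocks coupled by a quadratic term, an $\\ell^0$ penalty on one block and a nonnegativity constraint on the other, and diagonal majorization matrices) instantiates the extrapolation step \\R{update:xacute}, the majorized gradient step, and the proximal mappings in \\R{update:x}; it is only an illustration, not the CAOL algorithm of Section~\\ref{sec:CAOL+BPGM}.

\\begin{verbatim}
# Toy two-block BPEG-M-style iteration (illustration only):
# f(x1, x2) = 0.5*||A1 x1 + A2 x2 - b||^2, g1 = alpha*||.||_0, g2 = {x2 >= 0}.
import numpy as np

rng = np.random.default_rng(0)
A = [rng.standard_normal((50, 20)), rng.standard_normal((50, 20))]
b = rng.standard_normal(50)
alpha, lam, delta = 0.1, 1.001, 0.99                 # l0 weight, lambda_b, delta
M = [np.abs(Ab).T @ (np.abs(Ab) @ np.ones(20)) for Ab in A]  # diagonal majorizers of Ab' Ab
e = delta * (lam - 1) / (2 * (lam + 1))              # extrapolation weight; M fixed over iterations
x, x_prev = [np.zeros(20), np.zeros(20)], [np.zeros(20), np.zeros(20)]

for _ in range(300):
    for bi in (0, 1):
        Mt = lam * M[bi]                             # upper-bounded majorizer
        x_ac = x[bi] + e * (x[bi] - x_prev[bi])      # extrapolation
        xs = [x_ac if j == bi else x[j] for j in (0, 1)]
        grad = A[bi].T @ (A[0] @ xs[0] + A[1] @ xs[1] - b)
        y = x_ac - grad / Mt                         # majorized (preconditioned) gradient step
        x_prev[bi] = x[bi]
        if bi == 0:                                  # prox of alpha*||.||_0 in the Mt-weighted norm
            x[bi] = np.where(np.abs(y) >= np.sqrt(2 * alpha / Mt), y, 0.0)
        else:                                        # prox of the nonnegativity indicator
            x[bi] = np.maximum(y, 0.0)
print(0.5 * np.linalg.norm(A[0] @ x[0] + A[1] @ x[1] - b) ** 2)
\\end{verbatim}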
\n\n\n\n\n\n\\prop{[Square summability]\n\\label{p:sqSum}\nLet $\\{ x^{(i+1)} : i \\geq 0 \\}$ be generated by Algorithm \\ref{alg:BPGM}.\nWe have\n\\be{\n\\label{eq:p:sqSum}\n\\sum_{i=0}^{\\infty} \\nm{ x^{(i)} - x^{(i+1)} }_2^2 < \\infty.\n}\n}\n\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:p:sqSum} of the supplementary material.\n}\n\n\n\n\n\nProposition~\\ref{p:sqSum} implies that\n\\be{\n\\label{eq:convg_to0}\n\\nm{ x^{(i)} - x^{(i+1)} }_2^2 \\rightarrow 0,\n}\nand \\R{eq:convg_to0} is used to prove the following theorem:\n\n\n\n\n\n\\thm{[A limit point is a critical point]\n\\label{t:subseqConv}\nUnder Assumptions 1--3, let $\\{ x^{(i+1)} : i \\geq 0 \\}$ be generated by Algorithm \\ref{alg:BPGM}.\nThen any limit point $\\bar{x}$ of $\\{ x^{(i+1)} : i \\geq 0 \\}$ is a critical point of \\R{sys:multiConvx}. \nIf the subsequence $\\{ x^{(i_j+1)} \\}$ converges to $\\bar{x}$, then\n\\bes{\n\\lim_{j \\rightarrow \\infty} F(x^{(i_j + 1 )}) = F(\\bar{x}).\n} \n}\n\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:t:subseqConv} of the supplementary material.\n}\n\nFinite limit points exist if the generated sequence $\\{ x^{(i+1)} : i \\geq 0 \\}$ is bounded; see, for example, \\cite[Lem.~3.2--3.3]{Bao&Ji&Shen:14ACHA}. For some applications, the boundedness of $\\{ x^{(i+1)} : i \\geq 0 \\}$ can be satisfied by choosing appropriate regularization parameters, e.g., \\cite{Chun&Fessler:18TIP}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Restarting BPEG-M} \\label{sec:reBPGM}\n\nBPG-type methods \\cite{Chun&Fessler:18TIP, Xu&Yin:17JSC, Xu&Yin:13SIAM} can be further accelerated by applying \\textit{1)} a momentum coefficient formula similar to those used in fast proximal gradient (FPG) methods \\cite{Beck&Teboulle:09SIAM, Nesterov:07CORE, Tseng:08techRep}, and\/or \\textit{2)} an adaptive momentum restarting scheme \\cite{ODonoghue&Candes:15FCM, Giselsson&Boyd:14CDC}; see \\cite{Chun&Fessler:18TIP}. \nThis section applies these two techniques to further accelerate BPEG-M in Algorithm~\\ref{alg:BPGM}.\n\nFirst, we apply the following increasing momentum-coefficient formula to \\R{update:Eb} \\cite{Beck&Teboulle:09SIAM}: \n\\be{\n\\label{eq:mom_coeff}\ne_b^{(i+1)} = \\frac{\\theta^{(i)} - 1}{\\theta^{(i+1)}}, \\quad \\theta^{(i+1)} = \\frac{1 + \\sqrt{1 + 4 (\\theta^{(i)})^2}}{2}.\n}\nThis choice guarantees fast convergence of the FPG method \\cite{Beck&Teboulle:09SIAM}.\nSecond, we apply a momentum restarting scheme \\cite{ODonoghue&Candes:15FCM, Giselsson&Boyd:14CDC}, when the following \\textit{gradient-mapping} criterion is met \\cite{Chun&Fessler:18TIP}:\n\\be{\n\\label{eq:restart:grad}\n\\cos \\! \\left( \\Theta \\! \\left( M_b^{(i+1)} \\!\\! \\left( \\acute{x}_b^{(i+1)} - x_b^{(i+1)} \\right), x_b^{(i+1)} - x_b^{(i)} \\right) \\right) > \\omega,\n}\nwhere $\\Theta (\\vartheta, \\vartheta')$ denotes the angle between two nonzero real vectors $\\vartheta$ and $\\vartheta'$, defined by $\\cos \\Theta (\\vartheta, \\vartheta') := \\ip{\\vartheta}{\\vartheta'} \/ ( \\nm{\\vartheta}_2 \\nm{\\vartheta'}_2 )$, and $\\omega \\in [-1, 0]$.\nThis scheme restarts the algorithm whenever the momentum, i.e., $x_b^{(i+1)} - x_b^{(i)}$, is likely to lead the algorithm in an unhelpful direction, as measured by the gradient mapping at the $x_b^{(i+1)}$-update.\nWe refer to BPEG-M combined with the methods \\R{eq:mom_coeff}--\\R{eq:restart:grad} as restarting BPEG-M (reBPEG-M). 
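For reference, a compact sketch of these two ingredients is given below (Python\/NumPy; the diagonal of $M_b^{(i+1)}$ is passed as a vector, and all names are illustrative).

\\begin{verbatim}
# Sketch of the two reBPEG-M ingredients: the increasing momentum coefficients
# and the gradient-mapping restart test (illustrative helper functions).
import numpy as np

def next_momentum(theta):
    # one step of the momentum-coefficient recursion; returns (e_b, theta_next)
    theta_next = (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2)) / 2.0
    return (theta - 1.0) / theta_next, theta_next

def restart_needed(Mb_diag, x_acute, x_new, x_old, omega=0.0):
    # restart when the momentum direction x_new - x_old disagrees with the
    # gradient mapping Mb*(x_acute - x_new) beyond the angle threshold omega
    u = Mb_diag * (x_acute - x_new)
    v = x_new - x_old
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-30)
    return c > omega
\\end{verbatim}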
Section~\\ref{sec:reBPGM-supp} in the supplementary material summarizes the updates of reBPEG-M.\n\nTo solve the block multi-nonconvex problems proposed in this paper (e.g., \\R{sys:CAOL:orth}--\\R{sys:CT&CAOL}), we apply reBPEG-M (a variant of Algorithm~\\ref{alg:BPGM}; see Algorithm~\\ref{alg:reBPGM}), promoting fast convergence to a critical point. \n\n\n\n\n\n\n\n\n\\section{Fast and Convergent CAOL via BPEG-M} \\label{sec:CAOL+BPGM}\n\nThis section applies the general BPEG-M approach to CAOL.\nThe CAOL models \\R{sys:CAOL:orth} and \\R{sys:CAOL:div} satisfy the assumptions of BPEG-M; see Assumptions 1--3 in Section~\\ref{sec:convg-analysis}.\nThey readily satisfy Assumption~1 of BPEG-M.\nTo show the continuous differentiability of $f$ and the lower boundedness of $F$, consider that \\textit{1)} $\\sum_{l} \\sum_{k} \\frac{1}{2} \\left\\| d_k \\circledast x_l - z_{l,k} \\right\\|_2^2$ in \\R{sys:CAOL} is continuously differentiable with respect to $D$ and $\\{ z_{l,k} \\}$;\n\\textit{2)} the sequences $\\{ D^{(i+1)} \\}$ are bounded, because they are in the compact set $\\cD_{\\text{\\R{sys:CAOL:orth}}} = \\{ D: D D^H = \\frac{1}{R} I \\}$ and $\\cD_{\\text{\\R{sys:CAOL:div}}} = \\{ d_k : \\| d_k \\|_2^2 = \\frac{1}{R}, \\forall k \\}$ in \\R{sys:CAOL:orth} and \\R{sys:CAOL:div}, respectively; \nand \\textit{3)} the positive thresholding parameter $\\alpha$ ensures that the sequence $\\{ z_{l,k}^{(i+1)} \\}$ is bounded (otherwise the cost would diverge).\nIn addition, for both \\R{sys:CAOL:orth} and \\R{sys:CAOL:div}, the lower semicontinuity of the regularizer $g_b$ holds, $\\forall b$. \nFor $D$-optimization, the indicator function of the sets $\\cD_{\\text{\\R{sys:CAOL:orth}}}$ and $\\cD_{\\text{\\R{sys:CAOL:div}}}$ is lower semicontinuous, because the sets are compact.\nFor $\\{z_{l,k}\\}$-optimization, the $\\ell^0$-quasi-norm is a lower semicontinuous function.\nAssumptions~2 and 3 are satisfied by the majorization matrix designs in this section -- see Sections~\\ref{sec:CAOL:filt}--\\ref{sec:CAOL:spCd} later -- and the extrapolation matrix design in \\R{update:Eb}, respectively.\n\nSince CAOL models \\R{sys:CAOL:orth} and \\R{sys:CAOL:div} satisfy the BPEG-M conditions, we solve \\R{sys:CAOL:orth} and \\R{sys:CAOL:div} by the reBPEG-M method with a two-block scheme, i.e., we alternately update all filters $D$ and all sparse codes $\\{z_{l,k} : l = 1,\\ldots,L, k=1,\\ldots,K\\}$. \nSections~\\ref{sec:CAOL:filt} and \\ref{sec:CAOL:spCd} describe details of $D$-block and $\\{ z_{l,k} \\}$-block optimization within the BPEG-M framework, respectively.\nThe BPEG-M-based CAOL algorithm is particularly useful for learning convolutional regularizers from large datasets because of its memory flexibility and parallel computing applicability, as described in Section~\\ref{sec:CAOL:memory} and Sections~\\ref{sec:CAOL:filt}--\\ref{sec:CAOL:spCd}, respectively.\n\n\n\n\n\n\n\n\n\\subsection{Filter Update: $D$-Block Optimization} \\label{sec:CAOL:filt}\n\nWe first investigate the structure of the system matrix in the filter update for \\R{sys:CAOL}.\nThis is useful for \\textit{1)} accelerating majorization matrix computation in filter updates (e.g., Lemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI}) and \\textit{2)} applying $R \\!\\times\\!
N$-sized adjoint operators (e.g., $\\Psi_l^H$ in \\R{eq:Psi_l} below) to an $N$-sized vector without needing the Fourier approach \\cite[Sec.~\\Romnum{5}-A]{Chun&Fessler:18TIP} that uses commutativity of convolution and Parseval's relation.\nGiven the current estimates of $\\{ z_{l,k} : l=1,\\ldots,L, k=1,\\ldots,K \\}$, the filter update problem of \\R{sys:CAOL} is equivalent to\n\\be{\n\\label{sys:filter}\n\\argmin_{\\{d_k\\}} \\frac{1}{2} \\sum_{k=1}^K \\sum_{l=1}^L \\left\\| \\Psi_l d_k - z_{l,k} \\right\\|_2^2 + \\beta g (D),\n}\nwhere $D$ is defined in \\R{eq:D}, $\\Psi_l \\in \\bbC^{N \\times R}$ is defined by\n\\be{\n\\label{eq:Psi_l}\n\\Psi_l := \\left[ \\begin{array}{ccc} P_{B_1} \\hat{x}_l & \\ldots & P_{B_R} \\hat{x}_l \\end{array} \\right],\n}\n$P_{B_r} \\in \\bbC^{N \\times \\hat{N}}$ is the $r\\rth$ (rectangular) selection matrix that selects $N$ rows corresponding to the indices $B_r = \\{ r, \\ldots, r+N-1 \\}$ from $I_{\\hat{N}}$, $\\{ \\hat{x}_l \\in \\bbC^{\\hat{N}} : l=1,\\ldots,L \\}$ is a set of padded training data, $\\hat{N} = N+R-1$.\nNote that applying $\\Psi_l^H$ in \\R{eq:Psi_l} to a vector of size $N$ is analogous to calculating cross-correlation between $\\hat{x}_l$ and the vector, i.e., $( \\Psi_l^H \\hat{z}_{l,k} )_{r} = \\sum_{n=1}^N \\hat{x}_{n+r-1}^{*} (\\hat{z}_{l,k})_n$, $r=1,\\ldots,R$.\nIn general, $\\hat{(\\cdot)}$ denotes a padded signal vector.\n\n\n\n\n\n\n\n\n\n\\subsubsection{Majorizer Design} \\label{sec:filt:maj}\n\n\n\nThis subsection designs multiple majorizers for the $D$-block optimization and compares their required computational complexity and tightness. \nThe next proposition considers the structure of $\\Psi_l$ in \\R{eq:Psi_l} to obtain the Hessian $\\sum_{l=1}^L \\Psi^H_l \\Psi_l \\in \\bbC^{R \\times R}$ in \\R{sys:filter} for an arbitrary boundary condition.\n\\prop{[Exact Hessian matrix $M_D$] \\label{p:MDexhess}\nThe following matrix $M_D \\in \\bbC^{R \\times R}$ is identical to $\\sum_{l=1}^L \\Psi^H_l \\Psi_l$:\n\\be{\n\\label{eq:soln:filterHess}\n\\left[ M_D \\right]_{r,r'} = \\sum_{l=1}^L \\ip{P_{B_r} \\hat{x}_l}{P_{B_{r'}} \\hat{x}_l}, \\quad r,r' = 1,\\ldots,R.\n}\n}\n\n\nA sufficiently large number of training signals (with $N \\geq R$), $L$, can guarantee $M_D = \\sum_{l=1}^L \\Psi^H_l \\Psi_l \\succ 0$ in Proposition~\\ref{p:MDexhess}.\nThe drawback of using Proposition~\\ref{p:MDexhess} is its polynomial computational complexity, i.e., $O (L R^2 N)$ -- see Table \\ref{tab:MD:compt}. \nWhen $L$ (the number of training signals) or $N$ (the size of training signals) are large, the quadratic complexity with the size of filters -- $R^2$ -- can quickly increase the total computational costs when multiplied by $L$ and $N$. (The BPG setup in \\cite{Xu&Yin:17JSC} additionally requires $O (R^3)$ because it uses the eigendecomposition of \\R{eq:soln:filterHess} to calculate the Lipschitz constant.)\n\n\n\n\n\n\nConsidering CAOL problems \\R{sys:CAOL} themselves, different from CDL \\cite{Chun&Fessler:18TIP, Chun&Fessler:17SAMPTA, Wohlberg:16TIP, Heide&eta:15CVPR, Bristow&etal:13CVPR}, the complexity $O (L R^2 N)$ in applying Proposition~\\ref{p:MDexhess} is reasonable. 
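For 1D training signals, for example, the matrix in \\R{eq:soln:filterHess} can be assembled directly as in the following sketch (Python\/NumPy; one circular padding convention chosen for illustration), whose cost scales as $O(L R^2 N)$.

\\begin{verbatim}
# Sketch: exact Hessian M_D for 1D training signals, assuming circular padding;
# M_D[r, r'] = sum_l <window_r(x_l), window_r'(x_l)>.
import numpy as np

def exact_hessian_MD(signals, R):
    M_D = np.zeros((R, R))
    for x in signals:
        x_pad = np.concatenate([x, x[:R - 1]])                 # padded signal, length N + R - 1
        W = np.stack([x_pad[r:r + x.size] for r in range(R)])  # R shifted length-N windows
        M_D += W @ W.T
    return M_D

signals = [np.random.randn(256) for _ in range(10)]            # L = 10, N = 256
M_D = exact_hessian_MD(signals, R=7)
print(np.all(np.linalg.eigvalsh(M_D) > 0))                     # positive definite for enough data
\\end{verbatim}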
In BPEG-M-based CDL \\cite{Chun&Fessler:18TIP, Chun&Fessler:17SAMPTA}, a majorization matrix for kernel update is calculated every iteration because it depends on updated sparse codes; however, in CAOL, one can precompute $M_D$ via Proposition~\\ref{p:MDexhess} (or Lemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI} below) without needing to change it every kernel update.\nThe polynomial computational cost in applying Proposition~\\ref{p:MDexhess} becomes problematic only when the training signals change.\nExamples include \\textit{1)} hierarchical CAOL, e.g., CNN in Appendix~\\ref{sec:CNN}, \\textit{2)} \\dquotes{adaptive-filter MBIR} particularly with high-dimensional signals \\cite{Cai&etal:14ACHA, Xu&etal:12TMI, Elad&Aharon:06TIP}, and \\textit{3)} online learning \\cite{Liu&etal:17arXiv, Mairal&etal:09ICML}. Therefore, we also describe a more efficiently computable majorization matrix at the cost of looser bounds (i.e., slower convergence; see Fig~\\ref{fig:Comp:diffBPGM}). \nApplying Lemma~\\ref{l:diag(|At|W|A|1)}, we first introduce a diagonal majorization matrix $M_D$ for the Hessian $\\sum_{l} \\Psi_l^H \\Psi_l$ in \\R{sys:filter}:\n\n\n\n\\begin{table}[!pt]\t\n\n\\centering\n\\renewcommand{\\arraystretch}{1.1}\n\t\n\\caption{Computational complexity of different majorization matrix designs for the filter update problem \\R{sys:filter}}\t\n\\label{tab:MD:compt}\n\t\n\\begin{tabular}{C{2.2cm}C{2cm}}\n\\hline \\hline\nLemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI} & Proposition~\\ref{p:MDexhess} \\\\ \n\\hline\n$O( L R N )$ & $O( L R^2 N )$ \\\\\n\\hline \\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\\lem{[Diagonal majorization matrix $M_D$] \\label{l:MDdiag}\nThe following matrix $M_D \\in \\bbC^{R \\times R}$ satisfies $M_D \\succeq \\sum_{l=1}^L \\Psi^H_l \\Psi_l$:\n\\be{\n\\label{eq:soln:filterMaj:loose}\nM_D = \\diag \\! \\left( \\sum_{l=1}^L | \\Psi_l^H | | \\Psi_l | 1_{R} \\right),\n}\nwhere $|\\cdot|$ takes the absolute values of the elements of a matrix. \n}\nThe majorization matrix design in Lemma~\\ref{l:MDdiag} is more efficient to compute than that in Proposition~\\ref{p:MDexhess}, because no $R^2$-factor is needed for calculating $M_D$ in Lemma~\\ref{l:MDdiag}, i.e., $O( L R N )$; see Table~\\ref{tab:MD:compt}. \nDesigning $M_D$ in Lemma~\\ref{l:MDdiag} takes fewer calculations than \\cite[Lem.~5.1]{Chun&Fessler:18TIP} using Fourier approaches, when $R \\!<\\! \\log ( \\hat{N} )$.\nUsing Lemma~\\ref{l:diag(|A|1)}, we next design a potentially sharper majorization matrix than \\R{eq:soln:filterMaj:loose}, while maintaining the cost $O( L R N )$:\n\\lem{[Scaled identity majorization matrix $M_D$] \\label{l:MDscaleI}\nThe following matrix $M_D \\in \\bbC^{R \\times R}$ satisfies $M_D \\succsim \\sum_{l=1}^L \\Psi^H_l \\Psi_l$:\n\\be{\n\\label{eq:soln:filterMaj}\nM_D = \\sum_{r=1}^R \\left| \\sum_{l=1}^L \\ip{P_{B_1} \\hat{x}_l}{P_{B_{r}} \\hat{x}_l} \\right| \\cdot I_R,\n}\nfor a circular boundary condition.\n}\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:l:MD} of the supplementary material.\n}\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\begin{tabular}{c}\n\\vspace{-0.25em}\\includegraphics[scale=0.5, trim=0 0.2em 2.2em 1.2em, clip]{.\/Fig_v1\/obj_detDvsrandD_fruit_v1.eps} \\\\\n{\\small (a) The fruit dataset ($L = 10$, $N = 100 \\!\\times\\! 100$)} \\\\\n\\vspace{-0.25em}\\includegraphics[scale=0.5, trim=0 0.2em 2.2em 1em, clip]{.\/Fig_v1\/obj_detDvsrandD_city_v1.eps} \\\\\n{\\small (b) The city dataset ($L = 10$, $N = 100 \\!\\times\\! 
100$)} \\\\\n\\end{tabular}\n\n\\vspace{-0.25em}\n\\caption{Cost minimization comparisons in CAOL \\R{sys:CAOL:orth} with different BPG-type algorithms and datasets ($R \\!=\\! K \\!=\\! 49$ and $\\alpha \\!=\\! 2.5 \\!\\times\\! 10^{-4}$; solution \\R{eq:soln:spCode:exact} was used for sparse code updates; BPG (Xu \\& Ying '17) \\cite{Xu&Yin:17JSC} used the maximum eigenvalue of Hessians for Lipschitz constants; the cross mark \\text{\\sffamily x} denotes a termination point).\nA sharper majorization leads to faster convergence of BPEG-M;\nfor all the training datasets considered in this paper, the majorization matrix in Proposition~\\ref{p:MDexhess} is sharper than those in Lemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI}. \n}\n\\label{fig:Comp:diffBPGM}\n\\end{figure}\n\n\n\n\\begin{figure}[!tp]\n\\centering\n\\begin{tabular}{c}\n\\vspace{-0.25em}\\includegraphics[scale=0.58, trim=0 0.2em 2em 1em, clip]{.\/Fig_v1\/obj_diffBPGM_fruit_paper_v2.eps} \\\\\n{\\small (a) The fruit dataset ($L = 10$, $N = 100 \\!\\times\\! 100$)} \\\\\n\\vspace{-0.25em}\\includegraphics[scale=0.58, trim=0 0.2em 2em 1em, clip]{.\/Fig_v1\/obj_diffBPGM_city_paper_v2.eps} \\\\\n{\\small (b) The city dataset ($L = 10$, $N = 100 \\!\\times\\! 100$)} \\\\\n\\end{tabular}\n\n\\vspace{-0.25em}\n\\caption{Cost minimization comparisons in CAOL \\R{sys:CAOL:orth} with different BPEG-M algorithms and datasets (Lemma~\\ref{l:MDdiag} was used for $M_D$; $R \\!=\\! K \\!=\\! 49$; deterministic filter initialization and random sparse code initialization). \nUnder the sharp majorization regime, maintaining sharp majorization (i.e., $\\lambda_D \\!=\\! 1+\\epsilon$) provides faster convergence than giving more weight on extrapolation (i.e., $\\lambda_D \\!=\\! 2$).\n(The same behavior was found in sparse-view CT application \\cite[Fig.~3]{Chun&Fessler:18Asilomar}.)\nThere exist no differences in convergence between solution \\R{eq:soln:spCode:exact} and solution \\R{eq:soln:spCode} using $\\{ \\lambda_Z \\!=\\! 1+\\epsilon \\}$. \n}\n\\label{fig:Comp:exact_vs_approx}\n\\end{figure}\n\n\n\n\n\nFor all the training datasets used in this paper, we observed that the tightness of majorization matrices in Proposition~\\ref{p:MDexhess} and Lemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI} for the Hessian $\\sum_{l} \\Psi^H_l \\Psi_l$ is given by\n\\be{\n\\label{eq:MD:tight}\n\\sum_{l=1}^L \\Psi^H_l \\Psi_l = \\text{\\R{eq:soln:filterHess}} \\preceq \\text{\\R{eq:soln:filterMaj}} \\preceq \\text{\\R{eq:soln:filterMaj:loose}}.\n}\n(Note that \\R{eq:soln:filterHess}\\,$\\preceq$\\,\\R{eq:soln:filterMaj:loose} always holds regardless of training data.)\nFig.~\\ref{fig:Comp:diffBPGM} illustrates the effects of the majorizer sharpness in \\R{eq:MD:tight} on CAOL convergence rates.\nAs described in Section~\\ref{sec:BPGM:setup}, selecting $\\lambda_D$ (see \\R{sys:prox:filter:orth} and \\R{sys:prox:filter:div} below) controls the tradeoff between majorization sharpness and extrapolation effect. 
\nWe found that using fixed $\\lambda_D = 1+\\epsilon$ \ngives faster convergence than $\\lambda_D = 2 $; see Fig.~\\ref{fig:Comp:exact_vs_approx} (this behavior is more obvious in solving the CT MBIR model in \\R{sys:CT&CAOL} via BPEG-M -- see \\cite[Fig.~3]{Chun&Fessler:18Asilomar}).\nThe results in Fig.~\\ref{fig:Comp:exact_vs_approx} and \\cite[Fig.~3]{Chun&Fessler:18Asilomar} show that, under the sharp majorization regime, maintaining sharper majorization is more critical in accelerating the convergence of BPEG-M than giving more weight to extrapolation.\n\n\n\nSections \\ref{sec:prox:filter:orth} and \\ref{sec:prox:filter:div} below apply the majorization matrices designed in this section to proximal mappings of $D$-optimization in \\R{sys:CAOL:orth} and \\R{sys:CAOL:div}, respectively.\n\n\n\n\n\n\n\n\n\n\\subsubsection{Proximal Mapping with Orthogonality Constraint} \\label{sec:prox:filter:orth}\n\nThe corresponding proximal mapping problem of \\R{sys:filter} using the orthogonality constraint in \\R{sys:CAOL:orth} is given by\n\\ea{\n\\label{sys:prox:filter:orth}\n\\{ d_k^{(i+1)} \\} = \\argmin_{\\{ d_k \\}} &~ \n\\sum_{k=1}^K \\frac{1}{2} \\left\\| d_k - \\nu_k^{(i+1)} \\right\\|_{\\widetilde{M}_D}^2, \\nn\n\\\\\n\\mathrm{subject~to} &~ D D^H = \\frac{1}{R} \\cdot I,\n}\nwhere\n\\ea{\n\\nu_{k}^{(i+1)} &= \\textstyle \\acute{d}_k^{(i+1)} - \\widetilde{M}_D^{-1} \\sum_{l=1}^L \\Psi_l^H \\Big( \\Psi_l \\acute{d}_k^{(i+1)} - z_{l,k} \\Big),\n\\label{eq:nuk} \n\\\\\n\\acute{d}_k^{(i+1)} &= d_k^{(i)} + E_D^{(i+1)} \\Big( d_k^{(i)} - d_k^{(i-1)} \\Big), \\label{eq:dk_ac}\n}\nfor $k=1,\\ldots,K$, and $\\widetilde{M}_D = \\lambda_D M_D$ by \\R{update:Mtilde}. \nOne can parallelize over $k = 1,\\ldots,K$ in computing $\\{ \\nu_k^{(i+1)} \\}$ in \\R{eq:nuk}.\nThe proposition below provides an optimal solution to \\R{sys:prox:filter:orth}:\n\n\\prop{\n\\label{p:orth}\nConsider the following constrained minimization problem:\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\ea{\n\\label{p:eq:TF}\n\\min_{D} ~ \\nm{ \\widetilde{M}_D^{1\/2} D - \\widetilde{M}_D^{1\/2} \\mathcal{V} }_{\\mathrm{F}}^2, \\quad \n\\mathrm{subj.~to} ~ D D^H = \\frac{1}{R} \\cdot I,\n}\n\\endgroup\nwhere $D$ is given as \\R{eq:D}, $\\mathcal{V} = [\\nu_{1}^{(i+1)} \\cdots \\nu_{K}^{(i+1)}] \\in \\bbC^{R \\times K}$, $\\widetilde{M}_D = \\lambda_D M_D$, and $M_D \\in \\bbR^{R \\times R}$ is given by \\R{eq:soln:filterHess}, \\R{eq:soln:filterMaj:loose}, or \\R{eq:soln:filterMaj}.\nThe optimal solution to \\R{p:eq:TF} is given by\n\\bes{\n D^\\star = \\frac{1}{\\sqrt{R}} \\cdot U \\left[ \\arraycolsep=1pt \\begin{array}{cc} I_R, & 0_{R \\times (K-R)} \\end{array} \\right] V^H, \\quad \\mbox{for}~ R \\leq K,\n}\nwhere $\\widetilde{M}_D \\mathcal{V}$ has (full) singular value decomposition, $\\widetilde{M}_D \\mathcal{V} = U \\Lambda V^H$.\n}\n\\prf{\\renewcommand{\\qedsymbol}{}\nSee Section~\\ref{sec:prf:p:orth} of the supplementary material.\n}\n\n\nWhen using Proposition~\\ref{p:MDexhess}, $\\widetilde{M}_D \\nu_{k}^{(i+1)}$ of $\\widetilde{M}_D \\mathcal{V}$ in Proposition~\\ref{p:orth} simplifies to the following update:\n\\bes{\n\\widetilde{M}_D \\nu_{k}^{(i+1)} = (\\lambda_D - 1) M_D \\acute{d}_k^{(i+1)} + \\sum_{l=1}^L \\Psi_l^H z_{l,k}.\n}\nSimilar to obtaining $\\{ \\nu_k^{(i+1)} \\}$ in \\R{eq:nuk}, computing $\\{ \\widetilde{M}_D \\nu_k^{(i+1)} : k=1,\\ldots,K \\}$ is parallelizable over 
$k$.\n\n\n\n\n\n\n\n\n\\subsubsection{Proximal Mapping with Diversity Promoting Regularizer} \\label{sec:prox:filter:div}\n\nThe corresponding proximal mapping problem of \\R{sys:filter} using the norm constraint and diversity promoting regularizer in \\R{sys:CAOL:div} is given by\n\\ea{\n\\label{sys:prox:filter:div}\n\\{ d_k^{(i+1)} \\} = \\argmin_{\\{ d_k \\}} &~ \n\\sum_{k=1}^K \\frac{1}{2} \\left\\| d_k - \\nu_k^{(i+1)} \\right\\|_{\\widetilde{M}_D}^2 + \\frac{\\beta}{2} g_{\\text{div}}( D ), \\nn\n\\\\\n\\mathrm{subject~to} &~ \\left\\| d_k \\right\\|_2^2 = \\frac{1}{R}, \\quad k = 1,\\ldots, K,\n}\nwhere $g_{\\text{div}}( D )$, $\\nu_{k}^{(i+1)}$, and $\\acute{d}_k^{(i+1)}$ are given as in \\R{sys:CAOL:div}, \\R{eq:nuk}, and \\R{eq:dk_ac}, respectively.\nWe first decompose the regularization term $g_{\\text{div}}(D)$ as follows:\n\\begingroup\n\\allowdisplaybreaks\n\\ea{\n\\label{eq:DtD}\ng_{\\text{div}} (D)\n& = \\sum_{k=1}^K \\sum_{k'=1}^K \\big( d_k^H d_{k'} d_{k'}^H d_k - R^{-1} \\big) \\nn\n\\\\\n& = \\sum_{k=1}^K d_k^H \\bigg( \\sum_{k' \\neq k} d_{k'} d_{k'}^H \\bigg) d_k + \\big( d_k^H d_k - R^{-1} \\big)^2 \\nn\n\\\\\n& = \\sum_{k=1}^K d_k^H \\Gamma_k d_k, \n}\n\\endgroup\nwhere the equality in \\R{eq:DtD} holds by using the constraint in \\R{sys:prox:filter:div}, and the Hermitian matrix $\\Gamma_k \\in \\bbC^{R \\times R}$ is defined by\n\\be{\n\\label{eq:Ek}\n\\Gamma_k := \\sum_{k' \\neq k} d_{k'} d_{k'}^H.\n}\nUsing \\R{eq:DtD} and \\R{eq:Ek}, we rewrite \\R{sys:prox:filter:div} as\n\\ea{\n\\label{sys:prox:filter2}\nd_k^{(i+1)} = \\argmin_{d_k} &~ \n\\frac{1}{2} \\left\\| d_k - \\nu_k^{(i)} \\right\\|_{\\widetilde{M}_D}^2 + \\frac{\\beta}{2} d_k^H \\Gamma_k d_k, \\nn\n\\\\\n\\mathrm{subject~to} &~ \\left\\| d_k \\right\\|_2^2 = \\frac{1}{R}, \\quad k = 1,\\ldots,K.\n}\nThis is a quadratically constrained quadratic program with $\\{ \\widetilde{M}_D + \\beta \\Gamma_k \\succ 0 : k = 1,\\ldots,K\\}$. \nWe apply an accelerated Newton's method to solve \\R{sys:prox:filter2}; see Section~\\ref{sec:Newton}.\nSimilar to solving \\R{sys:prox:filter:orth} in Section~\\ref{sec:prox:filter:orth}, solving \\R{sys:prox:filter:div} is a small-dimensional problem ($K$ separate problems of size $R$). \n\n\n\n\n\\subsection{Sparse Code Update: $\\{z_{l,k} \\}$-Block Optimization} \\label{sec:CAOL:spCd}\n\nGiven the current estimate of $D$, the sparse code update problem for \\R{sys:CAOL} is given by\n\\be{\n\\label{sys:spCode}\n\\argmin_{\\{ z_{l,k} \\}} \\sum_{l=1}^L \\sum_{k=1}^K \\frac{1}{2} \\left\\| d_k \\circledast x_{l} - z_{l,k} \\right\\|_2^2 + \\alpha \\left\\| z_{l,k} \\right\\|_0.\n}\nThis problem separates readily, \nallowing parallel computation with $LK$ threads.\nAn optimal solution to \\R{sys:spCode} is efficiently obtained by the well-known hard thresholding:\n\\be{\n\\label{eq:soln:spCode:exact}\nz_{l,k}^{(i+1)} = \\cH_{\\!\\sqrt{2\\alpha}} \\left( d_k \\circledast x_{l} \\right), \n}\nfor $k=1,\\ldots,K$ and $l=1,\\ldots,L$, where \n\\be{\n\\label{eq:def:hardthr}\n\\cH_a (x)_{n} := \\left\\{ \\begin{array}{cc} 0, & | x_{n} | < a_{n}, \\\\ x_{n}, & | x_{n} | \\geq a_{n}. \\end{array} \\right.\n}\nfor all $n$. \nConsidering $\\lambda_Z$ (in $\\widetilde{M}_Z = \\lambda_Z M_Z$) as $\\lambda_Z \\!\\rightarrow\\! 1$, the solution obtained by the BPEG-M approach becomes equivalent to \\R{eq:soln:spCode:exact}. 
\nTo show this, observe first that the BPEG-M-based solution (using $M_Z = I_{N}$) to \\R{sys:spCode} is obtained by\n\\ea{\n\\label{eq:soln:spCode}\nz_{l,k}^{(i+1)} &= \\cH_{\\!\\sqrt{ \\frac{2\\alpha}{\\lambda_Z}}} \\Big( \\zeta_{l,k}^{(i+1)} \\Big),\n\\\\\n\\zeta_{l,k}^{(i+1)} &= \\left( 1 - \\lambda_Z^{-1} \\right) \\cdot \\acute{z}_{l,k}^{(i+1)} + \\lambda_Z^{-1} \\cdot d_k \\circledast x_{l},\n\\nn \\\\\n\\acute{z}_{l,k}^{(i+1)} &= z_{l,k}^{(i)} + E_Z^{(i+1)} \\Big( z_{l,k}^{(i)} - z_{l,k}^{(i-1)} \\Big). \\nn\n}\nThe downside of applying solution \\R{eq:soln:spCode} is that it would require additional memory to store the corresponding extrapolated points -- $\\{ \\acute{z}_{l,k}^{(i+1)} \\}$ -- and the memory grows with $N$, $L$, and $K$.\nConsidering the sharpness of the majorizer in \\R{sys:spCode}, i.e., $M_Z = I_{N}$, and the memory issue, it is reasonable to consider the solution \\R{eq:soln:spCode} with no extrapolation, i.e., $\\{ E_Z^{(i+1)} = 0 \\}$:\n\\bes{\nz_{l,k}^{(i+1)} \n= \\cH_{\\! \\sqrt{\\frac{2\\alpha}{\\lambda_Z}}} \\Big( (\\lambda_Z - 1) \\lambda_Z^{-1} \\cdot z_{l,k}^{(i)} + \\lambda_Z^{-1} \\cdot d_k \\circledast x_{l} \\Big)\n}\nwhich becomes equivalent to \\R{eq:soln:spCode:exact} as $\\lambda_Z \\!\\rightarrow\\! 1$.\n\n\n\n\n\nSolution \\R{eq:soln:spCode:exact} has two benefits over \\R{eq:soln:spCode}: compared to \\R{eq:soln:spCode}, \\R{eq:soln:spCode:exact} requires only half the memory to update all $z_{l,k}^{(i+1)}$ vectors and no additional computations related to $\\acute{z}_{l,k}^{(i+1)}$.\nWhile having these benefits, empirically \\R{eq:soln:spCode:exact} has convergence rates equivalent to those of \\R{eq:soln:spCode} using $\\{ \\lambda_Z \\!=\\! 1+\\epsilon \\}$; see Fig.~\\ref{fig:Comp:exact_vs_approx}. Throughout the paper, we solve the sparse coding problems (e.g., \\R{sys:spCode} and $\\{ z_k \\}$-block optimization in \\R{sys:CT&CAOL}) via optimal solutions in the form of \\R{eq:soln:spCode:exact}.\n\n\n\n\n\n\n\\subsection{Lower Memory Use than Patch-Domain Approaches} \\label{sec:CAOL:memory}\n\nThe convolution perspective in CAOL \\R{sys:CAOL} requires much less memory than conventional patch-domain approaches; \nthus, it is more suitable for learning filters from large datasets or applying the learned filters to high-dimensional MBIR problems.\nFirst, consider the training stage (e.g., \\R{sys:CAOL}).\nThe patch-domain approaches, e.g., \\cite{Cai&etal:14ACHA, Ravishankar&Bressler:15TSP, Aharon&Elad&Bruckstein:06TSP}, require about $R$ times more memory to store training signals. For example, 2D patches extracted by $\\sqrt{R} \\!\\times\\! \\sqrt{R}$-sized windows (with \\dquotes{stride} one and periodic boundaries \\cite{Cai&etal:14ACHA, Papyan&Romano&Elad:17JMLR}, as used in convolution) require about $R$ (e.g., $R = 64$ \\cite{Aharon&Elad&Bruckstein:06TSP, Ravishankar&Bressler:15TSP}) times more memory than storing the original image of size $\\sqrt{N} \\!\\times\\! \\sqrt{N}$. 
For $L$ training images, their memory usage dramatically increases with a factor $L R N$.\nThis becomes even more problematic in forming hierarchical representations, e.g., CNNs -- see Appendix~\\ref{sec:CNN}.\nUnlike the patch-domain approaches, the memory use of CAOL \\R{sys:CAOL} only depends on the $L N$-factor to store training signals.\nAs a result, the BPEG-M algorithm for CAOL \\R{sys:CAOL:orth} requires about two times less memory than the patch-domain approach \\cite{Cai&etal:14ACHA} (using BPEG-M).\nSee Table~\\ref{tab:AOL}-B.\n(Both the corresponding BPEG-M algorithms use identical computations per iteration that scale with $L R^2 N$; see Table~\\ref{tab:AOL}-A.)\n\n\n\n\n\nSecond, consider solving MBIR problems. Different from the training stage, the memory burden depends on how one applies the learned filters.\nIn \\cite{Pfister&Bresler:17ICASSP}, the learned filters are applied with the conventional convolutional operators -- e.g., $\\circledast$ in \\R{sys:CAOL} -- and, thus, there exists no additional memory burden. \nHowever, in \\cite{Elad&Aharon:06TIP, Chun&etal:17Fully3D, Zheng&etal:19TCI}, the $\\sqrt{R} \\!\\times\\! \\sqrt{R}$-sized learned kernels are applied with a matrix constructed by many overlapping patches extracted from the updated image at each iteration. \nIn adaptive-filter MBIR problems \\cite{Elad&Aharon:06TIP, Cai&etal:14ACHA, Pfister&Bresler:15SPIE}, the memory issue pervades the patch-domain approaches.\n\n\n\n\n\\begin{table}[!t]\t\n\n\\centering\n\\renewcommand{\\arraystretch}{1.1}\n\t\n\\caption{\nComparisons of computational complexity and memory usages between CAOL and patch-domain approach\n}\t\n\\label{tab:AOL}\n\t\n\\begin{tabular}{C{2.15cm}|C{2.85cm}|C{2.3cm}}\n\\hline \\hline\n\\multicolumn{3}{c}{A.~Computational complexity per BPEG-M iteration}\n\\\\\n\\hline\n& Filter update & Sparse code update\n\\\\\n\\hline\nCAOL \\R{sys:CAOL:orth} & $O(L K R N) + O(R^2 K)$ & $O(L K R N)$ \n\\\\ \n\\hline\nPatch-domain \\cite{Cai&etal:14ACHA}$^\\dagger$ & $O (L R^2 N) + O (R^3)$ & $O (L R^2 N)$\n\\\\\n\\hline \\hline\n\\end{tabular}\n\n\\vspace{0.75pc}\n\n\\begin{tabular}{C{2.15cm}|C{2.3cm}|C{2.3cm}}\n\\hline \\hline\n\\multicolumn{3}{c}{B.~Memory usage for BPEG-M algorithm}\n\\\\\n\\hline\n& Filter update & Sparse code update\n\\\\\n\\hline\nCAOL \\R{sys:CAOL:orth} & $O(LN) + O(RK)$ & $O(L K N)$\n\\\\ \n\\hline\nPatch-domain \\cite{Cai&etal:14ACHA}$^\\dagger$ & $O(L R N) + O(R^2)$ & $O(L R N)$\n\\\\\n\\hline \\hline\n\\end{tabular}\n\n\\medskip\n\\begin{myquote}{0.1in}\n$^\\dagger$\nThe patch-domain approach \\cite{Cai&etal:14ACHA} considers the orthogonality constraint in \\R{sys:CAOL:orth} with $R \\!=\\! 
K$; see Section~\\ref{sec:CAOL:orth}.\nThe estimates consider all the extracted overlapping patches of size $R$ with the stride parameter $1$ and periodic boundaries, as used in convolution.\n\\end{myquote}\n\\end{table}\n\n\n\n\n\n\n\n\\section{Sparse-View CT MBIR using Convolutional Regularizer Learned via CAOL, and BPEG-M} \\label{sec:CTrecon}\n\nThis section introduces a specific example of applying the learned convolutional regularizer, i.e., $F(D^\\star, \\{ z_{l,k} \\})$ in \\R{sys:CAOL}, from a representative dataset to recover images in \\textit{extreme} imaging that collects highly undersampled or noisy measurements.\nWe choose a sparse-view CT application since image reconstruction there poses interesting challenges: Poisson noise in the measurements, nonuniform noise and resolution properties in the reconstructed images, and complicated (or no) structure in the system matrices. \nFor CT, undersampling schemes can significantly reduce the radiation dose and cancer risk from CT scanning. \nThe proposed approach can be applied to other applications (by replacing the data fidelity and spatial strength regularization terms in \\R{sys:CT&CAOL} below).\n\n\n\nWe pre-learn TF filters $\\{ d_k^\\star \\in \\bbR^K : k = 1,\\ldots,K \\}$ via CAOL \\R{sys:CAOL:orth} with a set of high-quality (e.g., normal-dose) CT images $\\{ x_l : l = 1,\\ldots,L \\}$.\nTo reconstruct a linear attenuation coefficient image $x \\in \\bbR^{N'}$ from post-log measurement $y \\in \\bbR^m$ \\cite{Chun&Talavage:13Fully3D, Chun&etal:17Fully3D}, we apply the learned convolutional regularizer to CT MBIR and solve the following block multi-nonconvex problem \\cite{Chun&Fessler:18Asilomar, Crockett&etal:19CAMSAP}:\n\\begingroup\n\\allowdisplaybreaks\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\ea{\n\\label{sys:CT&CAOL}\n\\argmin_{x \\geq 0} & \\underbrace{ \\frac{1}{2} \\left\\| y - A x \\right\\|_W^2 }_{\\text{data fidelity $f(x;y)$}} + \n\\nn \\\\\n\\gamma \\cdot & \\underbrace{ \\min_{\\{ z_k \\}} \\sum_{k=1}^K \\frac{1}{2} \\left\\| d_{k}^\\star \\circledast x - z_k \\right\\|_2^2 \n+ \\alpha' \\sum_{n=1}^{N'} \\psi_{n} \\phi( (z_{k})_n ) }_{\\text{learned convolutional regularizer $g(x,\\{z_k\\}; \\{ d_k \\})$}}. \\tag{P3}\n}\n\\endgroup\n\\!\\!Here, $A \\in \\bbR^{m \\times N'}$ is a CT system matrix, $W \\in \\bbR^{m \\times m}$ is a (diagonal) weighting matrix with elements $\\{ W_{l,l} = \\rho_l^2 \/ ( \\rho_l + \\sigma^2 ) : l = 1,\\ldots,m \\}$ based on a Poisson-Gaussian model for the pre-log measurements $\\rho \\in \\bbR^m$ with electronic readout noise variance $\\sigma^2$ \\cite{Chun&Talavage:13Fully3D, Chun&etal:17Fully3D, Zheng&etal:19TCI}, $\\psi \\in \\bbR^{N'}$ is a pre-tuned spatial strength regularization vector \\cite{Fessler&Rogers:96TIP} with non-negative elements\n$\\{ \\psi_{n} = ( \\sum_{l=1}^m A_{l,n}^2 W_{l,l} )^{1\/2} \/ ( \\sum_{l=1}^m A_{l,n}^2 )^{1\/2} : n = 1,\\ldots,N' \\}$\\footnote{\nSee details of computing $\\{ A_{l,j}^2 : \\forall l,j \\}$ in \\cite{Chun&Fessler:18Asilomar}.\n} \nthat promotes uniform resolution or noise properties in the reconstructed image \\cite[Appx.]{Chun&etal:17Fully3D}, the indicator function $\\phi(a)$ equals $0$ if $a = 0$ and $1$ otherwise, $z_k \\in \\bbR^{N'}$ is the unknown sparse code for the $k\\text{th}$ filter, \nand $\\alpha' \\!>\\! 0$ is a thresholding parameter.
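To make the learned convolutional regularizer in \R{sys:CT&CAOL} concrete, note that the inner minimization over $\{ z_k \}$ is separable and has the elementwise closed form $\min_z \frac{1}{2}(u - z)^2 + \alpha' \psi_n \phi(z) = \min ( \frac{1}{2} u^2, \alpha' \psi_n )$, attained by hard thresholding. The following minimal NumPy sketch evaluates the cost in \R{sys:CT&CAOL} using this closed form; it is only an illustration under simplifying assumptions (a dense system matrix \texttt{A}, diagonal weights stored as a vector, and circular-boundary convolutions), and the function names are ours rather than part of any released implementation.

\begin{verbatim}
import numpy as np

def conv2_circ(x, d):
    # 2-D circular convolution of image x with a small kernel d (zero-padded).
    dpad = np.zeros_like(x)
    dpad[:d.shape[0], :d.shape[1]] = d
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(dpad)))

def ct_mbir_cost(x, y, A, w, filters, alpha_p, psi, gamma):
    # Data fidelity 0.5 * || y - A x ||_W^2 with diagonal weights w (vector).
    r = y - A @ x.ravel()
    data_fid = 0.5 * np.sum(w * r**2)
    # Learned convolutional regularizer, minimized over {z_k} in closed form:
    # per pixel, min(0.5 * (d_k conv x)_n^2, alpha' * psi_n).
    reg = 0.0
    for d in filters:
        u = conv2_circ(x, d)
        reg += np.sum(np.minimum(0.5 * u**2, alpha_p * psi))
    return data_fid + gamma * reg
\end{verbatim}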
\n\n\n\n\n\nWe solved \\R{sys:CT&CAOL} via reBPEG-M in Section~\\ref{sec:reBPG-M} with a two-block scheme \\cite{Chun&Fessler:18Asilomar}, and summarize the corresponding BPEG-M updates as\n\\begingroup\n\\allowdisplaybreaks\n\\ea{\n\\label{eq:soln:CT:bpgm}\nx^{(i+1)} &= \\bigg[ \\big( \\widetilde{M}_A + \\gamma I_R \\big)^{-1} \\cdot \\bigg( \\widetilde{M}_A \\eta^{(i+1)} + \n\\nn \\\\\n& \\hspace{1.95em} \\gamma \\sum_{k=1}^K ( P_f d_k^\\star ) \\circledast \\cH_{\\!\\sqrt{2 \\alpha' \\psi}} \\big( d_k^\\star \\circledast x^{(i)} \\big) \\bigg) \\bigg]_{\\geq 0},\n}\nwhere \n\\ea{\n\\eta^{(i+1)} &= \\acute{x}^{(i+1)} - \\widetilde{M}_A^{-1} A^T W \\Big( A \\acute{x}^{(i+1)} -y \\Big),\n\\label{eq:soln:CT:bpgm:eta} \\\\\n\\acute{x}^{(i+1)} &= x^{(i)} + E_A^{(i+1)} \\Big( x^{(i)} - x^{(i-1)} \\Big),\n\\nn\n}\n\\endgroup\n$\\widetilde{M}_A = \\lambda_A M_A$ by \\R{update:Mtilde}, a diagonal majorization matrix $M_A \\succeq A^T W A$ is designed by Lemma~\\ref{l:diag(|At|W|A|1)}, \nand $P_f \\in \\bbC^{R \\times R}$ flips a column vector in the vertical direction (e.g., it rotates 2D filters by $180^\\circ$).\nInterpreting the update \\R{eq:soln:CT:bpgm} leads to the following two remarks:\n\n\n\n\n\n\n\n\\rem{\n\\label{r:autoenc}\nWhen the convolutional regularizer learned via CAOL \\R{sys:CAOL:orth} is applied to MBIR, it works as an autoencoding CNN: \n\\be{\n\\label{eq:autoenc}\n\\cM ( x ) = \\sum_{k=1}^K (P_f d_k^\\star) \\circledast \\cH_{\\!\\sqrt{2 \\alpha_k'}} \\left( d_k^\\star \\circledast x \\right)\n}\n(setting $\\psi = 1_{N'}$ and generalizing $\\alpha'$ to $\\{ \\alpha'_k : k=1,\\ldots,K \\}$ in \\R{sys:CT&CAOL}).\nThis is an explicit mathematical motivation\nfor constructing architectures of iterative regression CNNs for MBIR, \ne.g., BCD-Net \\cite{Chun&Fessler:18IVMSP, Chun&etal:19MICCAI, Lim&etal:19TMI, Lim&etal:18NSSMIC}\nand Momentum-Net \\cite{Chun&etal:18Allerton, Chun&etal:18arXiv:momnet}. \nParticularly when the learned filters $\\{ d_k^\\star \\}$ in \\R{eq:autoenc} satisfy the TF condition, they are useful for compacting energy of an input signal $x$ and removing unwanted features via the non-linear thresholding in \\R{eq:autoenc}.\n}\n\n\n\n\\rem{\n\\label{r:autoenc-bpgm}\nUpdate \\R{eq:soln:CT:bpgm} improves the solution $x^{(i+1)}$ by weighting between \\textit{a)} the extrapolated point considering the data fidelity, i.e., $\\eta^{(i+1)}$ in \\R{eq:soln:CT:bpgm:eta}, and \\textit{b)} the \\dquotes{refined} update via the ($\\psi$-weighting) convolutional autoencoder, i.e., $\\sum_{k} ( P_f d_k^\\star ) \\circledast \\cH_{\\!\\sqrt{2 \\alpha' \\psi}} ( d_k^\\star \\circledast x^{(i)} )$.\n}\n\n\n\n\n\n\n\n\n\n\\section{Results and Discussion} \\label{sec:result}\n\n\\subsection{Experimental Setup} \\label{sec:exp}\n\nThis section examines the performance (e.g., scalability, convergence, and acceleration) and behaviors (e.g., effects of model parameters on filters structures and effects of dimensions of learned filter on MBIR performance) of the proposed CAOL algorithms and models, respectively.\n\n\\subsubsection{CAOL} \\label{sec:exp:CAOL}\n\nWe tested the introduced CAOL models\/algorithms for four datasets: \n\\textit{1)} the fruit dataset with $L = 10$ and $N = 100 \\!\\times\\! 100$ \\cite{Zeiler&etal:10CVPR}; \n\\textit{2)} the city dataset with $L = 10$ and $N = 100 \\!\\times\\! 100$ \\cite{Heide&eta:15CVPR}; \n\\textit{3)} the CT dataset of $L = 80$ and $N = 128 \\!\\times\\! 128$, created by dividing down-sampled $512 \\!\\times\\! 
512$ XCAT phantom slices \\cite{Segars&etal:08MP} into $16$ sub-images \\cite{Olshausen&Field:96Nature, Bristow&etal:13CVPR} -- referred to as the CT-(\\romnum{1}) dataset; \n\\textit{4)} the CT dataset with $L = 10$ and $N = 512 \\!\\times\\! 512$ from down-sampled $512 \\!\\times\\! 512$ XCAT phantom slices \\cite{Segars&etal:08MP} -- referred to as the CT-(\\romnum{2}) dataset.\nThe preprocessing includes intensity rescaling to $[0,1]$ \\cite{Zeiler&etal:10CVPR, Bristow&etal:13CVPR, Heide&eta:15CVPR} and\/or (global) mean subtraction \\cite[\\S 2]{Jarrett&etal:09ICCV}, \\cite{Aharon&Elad&Bruckstein:06TSP}, as conventionally used in many sparse coding studies, e.g., \\cite{Aharon&Elad&Bruckstein:06TSP, Jarrett&etal:09ICCV, Zeiler&etal:10CVPR, Bristow&etal:13CVPR, Heide&eta:15CVPR}. \nFor the fruit and city datasets, we trained $K = 49$ filters of size $R = 7 \\!\\times\\! 7$.\nFor the CT-(\\romnum{1}) dataset, we trained filters of size $R = 5 \\!\\times\\! 5$, with $K = 25$ or $K = 20$.\nFor CT reconstruction experiments, we learned the filters from the CT-(\\romnum{2}) dataset; however, we did not apply mean subtraction because it is not modeled in \\R{sys:CT&CAOL}. \n\nThe parameters for the BPEG-M algorithms were defined as follows.\\footnote{The remaining BPEG-M parameters not described here are identical to those in \\cite[\\Romnum{7}-A2]{Chun&Fessler:18TIP}.}\nWe set the regularization parameters $\\alpha, \\beta$ as follows:\n\\begin{itemize}\n\\item {CAOL \\R{sys:CAOL:orth}}: To investigate the effects of $\\alpha$, we tested \\R{sys:CAOL:orth} with different $\\alpha$'s in the case $R = K$. For the fruit and city datasets, we used $\\alpha = 2.5 \\!\\times\\! \\{ 10^{-5}, 10^{-4} \\}$; for the CT-(\\romnum{1}) dataset, we used $\\alpha = \\{ 10^{-4}, 2 \\!\\times\\! 10^{-3} \\}$. \nFor the CT-(\\romnum{2}) dataset (for CT reconstruction experiments), see details in \\cite[Sec.~\\Romnum{5}1]{Chun&Fessler:18Asilomar}.\n\n\\item {CAOL \\R{sys:CAOL:div}}: Once $\\alpha$ is fixed from the CAOL \\R{sys:CAOL:orth} experiments above, we tested \\R{sys:CAOL:div} with different $\\beta$'s to see its effects in the case $R > K$. For the CT-(\\romnum{1}) dataset, we fixed $\\alpha = 10^{-4}$, and used $\\beta = \\{ 5 \\!\\times\\! 10^6, 5 \\!\\times\\! 10^4 \\}$.\n\\end{itemize}\nWe set $\\lambda_D = 1+\\epsilon$ as the default.\nWe initialized filters in either deterministic or random ways.\nThe deterministic filter initialization follows that in \\cite[Sec.~3.4]{Cai&etal:14ACHA}.\nWhen filters were randomly initialized, we used a scaled one-vector for the first filter.\nWe initialized sparse codes mainly in a deterministic way that applies \\R{eq:soln:spCode:exact} based on $\\{ d_k^{(0)} \\}$.\nUnless otherwise specified, we used the random filter and deterministic sparse code initializations.\nFor BPG \\cite{Xu&Yin:17JSC}, we used the maximum eigenvalue of Hessians for Lipschitz constants in \\R{sys:filter}, and applied the gradient-based restarting scheme in Section~\\ref{sec:reBPGM}.\nWe terminated the iterations if the relative error stopping criterion (e.g., \\cite[(44)]{Chun&Fessler:18TIP}) was met before reaching the maximum number of iterations.\nWe set the tolerance value as $10^{-13}$ for the CAOL algorithms using Proposition~\\ref{p:MDexhess}, and $10^{-5}$ for those using Lemmas~\\ref{l:MDdiag}--\\ref{l:MDscaleI}, and the maximum number of iterations to $2 \\!\\times\\!
10^4$.\n\nThe CAOL experiments used the convolutional operator learning toolbox~\\cite{chun:19:convolt}.\n\n\n\n\n\\subsubsection{Sparse-View CT MBIR with Learned Convolutional Regularizer via CAOL} \\label{sec:exp:CT}\n\nWe simulated sparse-view sinograms of size $888 \\times 123$ (\\quotes{detectors or rays} $\\times$ \\quotes{regularly spaced projection views or angles}, where $984$ is the number of full views) with GE LightSpeed fan-beam geometry corresponding to a monoenergetic source with $10^5$ incident photons per ray and no background events, and electronic noise variance $\\sigma^2 \\!=\\! 5^2$. \nWe avoided an inverse crime in our imaging simulation and reconstructed images with a coarser grid with $\\Delta_x \\!=\\! \\Delta_y \\!=\\! 0.9766$ mm; see details in \\cite[Sec.~\\Romnum{5}-A2]{Chun&Fessler:18Asilomar}.\n\n\n\n\nFor EP MBIR, we finely tuned its regularization parameter to achieve both good root mean square error (RMSE) and structural similarity index measurement \\cite{Wang&etal:04TIP} values. \nFor the CT MBIR model \\R{sys:CT&CAOL}, we chose the model parameters $\\{ \\gamma, \\alpha' \\}$ that showed a good tradeoff between the data fidelity term and the learned convolutional regularizer, and set $\\lambda_A \\!=\\! 1+\\epsilon$.\nWe evaluated the reconstruction quality by the RMSE (in a modified Hounsfield unit, HU, where air is $0$ HU and water is $1000$ HU) in a region of interest.\nSee further details in \\cite[Sec.~\\Romnum{5}-A2]{Chun&Fessler:18Asilomar} and Fig.~\\ref{fig:CTrecon}.\n\n\nThe imaging simulation and reconstruction experiments used the Michigan image reconstruction toolbox~\\cite{fessler:16:irt}.\n\n\n\n\n\n\n\n\n\\subsection{CAOL with BPEG-M} \\label{sec:result:CAOL}\n\n\nUnder the sharp majorization regime (i.e., partial or all blocks have sufficiently tight bounds in Lemma~\\ref{l:QM}), the proposed convergence-guaranteed BPEG-M can achieve significantly faster CAOL convergence rates compared with the state-of-the-art BPG algorithm \\cite{Xu&Yin:17JSC} for solving block multi-nonconvex problems, by several generalizations of BPG (see Remark~\\ref{r:BPGM}) and two majorization designs (see Proposition~\\ref{p:MDexhess} and Lemma~\\ref{l:MDscaleI}). See Fig.~\\ref{fig:Comp:diffBPGM}.\nIn controlling the tradeoff between majorization sharpness and extrapolation effect of BPEG-M (i.e., choosing $\\{ \\lambda_b \\}$ in \\R{update:Mtilde}--\\R{update:Eb}), maintaining majorization sharpness is more critical than gaining stronger extrapolation effects to accelerate convergence under the sharp majorization regime.\nSee Fig.~\\ref{fig:Comp:exact_vs_approx}. \n\n\nWhile using about two times less memory (see Table~\\ref{tab:AOL}), CAOL \\R{sys:CAOL} learns TF filters corresponding to those given by the patch-domain TF learning in \\cite[Fig.~2]{Cai&etal:14ACHA}.\nSee Section~\\ref{sec:CAOL:memory} and Fig.~\\ref{fig:filters_BPGM} with deterministic $\\{ d_k^{(0)} \\}$.\nNote that BPEG-M-based CAOL \\R{sys:CAOL} requires even less memory than BPEG-M-based CDL in \\cite{Chun&Fessler:18TIP}, by using exact sparse coding solutions (e.g., \\R{eq:soln:spCode:exact} and \\R{eq:soln:CT:bpgm}) without saving their extrapolated points.\nIn particular, \nwhen tested with the large CT dataset of $\\{ L \\!=\\! 40, N \\!=\\! 512 \\!\\times\\! 
512 \\}$,\nthe BPEG-M-based CAOL algorithm ran fine,\nwhile BPEG-M-based CDL \\cite{Chun&Fessler:18TIP} and patch-domain AOL \\cite{Cai&etal:14ACHA} were terminated due to exceeding available memory.\\footnote{\nTheir double-precision MATLAB implementations were tested on 3.3 GHz Intel Core i5 CPU with 32 GB RAM.\n}\nIn addition, the CAOL models \\R{sys:CAOL:orth} and \\R{sys:CAOL:div} are easily parallelizable with $K$ threads.\nCombining these results, the BPEG-M-based CAOL is a reasonable choice for learning filters from large training datasets. \nFinally, \\cite{Chun&etal:19SPL} shows theoretically how using many samples can improve CAOL, \naccentuating the benefits of the low memory usage of CAOL.\n\n\n\nThe effects of parameters for the CAOL models are shown as follows.\nIn CAOL~\\R{sys:CAOL:orth}, as the thresholding parameter $\\alpha$ increases, the learned filters have more elongated structures; see Figs.~\\ref{fig:CAOL:filters:CT}(a) and \\ref{fig:CAOL:filters:fruit&city&ct-ii}.\nIn CAOL~\\R{sys:CAOL:div}, when $\\alpha$ is fixed, increasing the filter diversity promoting regularizer $\\beta$ successfully lowers coherences between filters (e.g., $g_{\\text{div}}(D)$ in \\R{sys:CAOL:div}); see Fig.~\\ref{fig:CAOL:filters:CT}(b).\n\n\n\n\\begin{figure}[!t]\n\\small\\addtolength{\\tabcolsep}{-4pt}\n\\centering\n\n\\begin{tabular}{cc}\n\n\\vspace{-0.1em}\\hspace{-0.05em}\\includegraphics[scale=0.5, trim=2em 5.2em 0em 4em, clip]{.\/Fig\/filt_tfMexhess_CTeg1.eps} &\n\\vspace{-0.1em}\\hspace{-0.4em}\\includegraphics[scale=0.5, trim=2em 5.2em 0.1em 4em, clip]{.\/Fig\/filt_tfMexhess_CTeg2.eps} \\\\\n\n{\\small (a1) $\\alpha = 10^{-4}$} & {\\small (a2) $\\alpha = 2 \\!\\times\\! 10^{-3}$} \\\\\n\n\\multicolumn{2}{c}{\\small (a) Learned filters via CAOL \\R{sys:CAOL:orth} ($R = K = 25$)} \\\\\n\n\\includegraphics[scale=0.506, trim=2em 1.0em 0em 0.4em, clip]{.\/Fig\/filt_divMexhess_CTeg2-1.eps} &\n\\includegraphics[scale=0.506, trim=2em 1.0em 0em 0.4em, clip]{.\/Fig\/filt_divMexhess_CTeg1-1.eps} \\vspace{-0.3em} \\\\\n\n\\small{$g_{\\text{div}}(D) = \\mathbf{8.96 \\!\\times\\! 10^{-6}}$} & \\small{$g_{\\text{div}}(D) = \\mathbf{0.12}$} \\\\\n\n\\small{(b1) $\\alpha = 10^{-4}$, $\\beta = 5 \\!\\times\\! 10^{6}$} & {\\small (b2) $\\alpha = 10^{-4}$, $\\beta = 5 \\!\\times\\! 10^{4}$} \\\\\n\n\\multicolumn{2}{c}{\\small (b) Learned filters via CAOL \\R{sys:CAOL:div} ($R = 25, K = 20$) } \n\n\\end{tabular}\n\n\\vspace{-0.25em}\n\\caption{Examples of learned filters with different CAOL models and parameters (Proposition~\\ref{p:MDexhess} was used for $M_D$; the CT-(\\romnum{1}) dataset with a symmetric boundary condition).}\n\\label{fig:CAOL:filters:CT}\n\\end{figure}\n\n\n\n\nIn adaptive MBIR (e.g., \\cite{Elad&Aharon:06TIP, Cai&etal:14ACHA, Pfister&Bresler:15SPIE}), one may apply adaptive image denoising \\cite{Donoho:95TIT, Donoho&Johnstone:95JASA, Chang&Yu&Vetterli:00TIP, Blu&Luisier:07TIP, Liu&etal:15CVPR, Pfister&Bresler:17ICASSP} to optimize thresholding parameters. 
\nHowever, if CAOL \\R{sys:CAOL} and testing the learned convolutional regularizer to MBIR (e.g., \\R{sys:CT&CAOL}) are separated, selecting \\dquotes{optimal} thresholding parameters in (unsupervised) CAOL is challenging -- similar to existing dictionary or analysis operator learning methods.\nOur strategy to select the thresholding parameter $\\alpha$ in CAOL~\\R{sys:CAOL:orth} (with $R=K$) is given as follows.\nWe first apply the first-order finite difference filters $\\{ d_k : \\| d_k \\|_2^2 = 1\/R, \\forall k \\}$ (e.g., $\\frac{1}{\\sqrt{2R}} [1, -1]^T$ in 1D) to all training signals and find their sparse representations, and then find $\\alpha_{\\mathrm{est}}$ that corresponds to the largest $95 (\\pm 1)\\%$ of non-zero elements of the sparsified training signals. This procedure defines the range $[ \\frac{1}{10} \\alpha_{\\mathrm{est}}, \\alpha_{\\mathrm{est}}]$ to select desirable $\\alpha^\\star$ and its corresponding filter $D^\\star$. We next ran CAOL~\\R{sys:CAOL:orth} with multiple $\\alpha$ values within this range. Selecting $\\{ \\alpha^\\star, D^\\star \\}$ depends on application. \nFor CT MBIR, $D^\\star$ that both has (short) first-order finite difference filters and captures diverse (particularly diagonal) features of training signals, gave good RMSE values and well preserved edges; see Fig.~\\ref{fig:CAOL:filters:fruit&city&ct-ii}(c) and \\cite[Fig.~2]{Chun&Fessler:18Asilomar}. \n\n\n\n\n\n\\begin{figure*}[!t]\n\\centering\n\\small\\addtolength{\\tabcolsep}{-6.5pt}\n\\renewcommand{\\arraystretch}{0.1}\n\n \\begin{tabular}{ccccc}\n\n\n \\small{(a) Ground truth} \n & \n \\small{(b) Filtered back-projection} & \\small{(c) EP} \n & \n \\specialcell[c]{\\small (d) Proposed MBIR \\mbox{\\R{sys:CT&CAOL}}, \\\\ \\small with \\R{eq:autoenc} of $R \\!=\\! K \\!=\\! 25$} \n & \n \\specialcell[c]{\\small (e) Proposed MBIR \\mbox{\\R{sys:CT&CAOL}}, \\\\ \\small with \\R{eq:autoenc} of $R \\!=\\! K \\!=\\! 
49$} \\\\\n \n \\begin{tikzpicture}\n \t\t\\begin{scope}[spy using outlines={rectangle,yellow,magnification=1.25,size=15mm,connect spies}]\n \t\t\t\\node {\\includegraphics[viewport=35 105 275 345, clip, width=3.4cm,height=3.4cm]{.\/Fig_CTrecon\/x_true.png}};\n\t\t\t\\spy on (0.8,-0.95) in node [left] at (-0.3,-2.2);\n \t\t\\draw [->, thick, red] (0.1,0.85+0.3) -- (0.1,0.85);\n \t\t\\draw [->, thick, red] (-0.65+0.3,0.1) -- (-0.65,0.1);\n \t\t\\draw [->, thick, red] (-1.1,-1.3) -- (-1.1+0.3,-1.3); \n\t\t\t\\draw [red] (-1.35,0.15) circle [radius=0.28];\n\t\t\t\\draw [red] (-1.35,-0.45) circle [radius=0.28];\n\t\t\\end{scope}\n \\end{tikzpicture} \n &\n \\begin{tikzpicture}\n \t\t\\begin{scope}[spy using outlines={rectangle,yellow,magnification=1.25,size=15mm,connect spies}]\n \t\t\t\\node {\\includegraphics[viewport=35 105 275 345, clip, width=3.4cm,height=3.4cm]{.\/Fig_CTrecon\/x123_fbp.png}};\n\t\t\t\\spy on (0.8,-0.95) in node [left] at (-0.3,-2.2);\n\t\t\t\\node [black] at (0.7,-1.95) {\\small $\\mathrm{RMSE} = 82.8$};\n\t\t\\end{scope}\n \\end{tikzpicture} \n &\n \\begin{tikzpicture}\n \t\t\\begin{scope}[spy using outlines={rectangle,yellow,magnification=1.25,size=15mm,connect spies}]\n \t\t\t\\node {\\includegraphics[viewport=35 105 275 345, clip, width=3.4cm,height=3.4cm]{.\/Fig_CTrecon\/x123_ep.png}};\n\t\t\t\\spy on (0.8,-0.95) in node [left] at (-0.3,-2.2);\n \t\t\\draw [->, thick, red] (0.1,0.85+0.3) -- (0.1,0.85);\n \t\t\\draw [->, thick, red] (-0.65+0.3,0.1) -- (-0.65,0.1);\n \t\t\\draw [->, thick, red] (-1.1,-1.3) -- (-1.1+0.3,-1.3); \n\t\t\t\\node [black] at (0.7,-1.95) {\\small $\\mathrm{RMSE} = 40.8$};\n\t\t\\end{scope}\n \\end{tikzpicture} \n &\n \\begin{tikzpicture}\n \t\t\\begin{scope}[spy using outlines={rectangle,yellow,magnification=1.25,size=15mm,connect spies}]\n \t\t\t\\node {\\includegraphics[viewport=35 105 275 345, clip, width=3.4cm,height=3.4cm]{.\/Fig_CTrecon_v1\/x123_caol5x5_old.png}};\n\t\t\t\\spy on (0.8,-0.95) in node [left] at (-0.3,-2.2);\n \t\t\\draw [->, thick, red] (0.1,0.85+0.3) -- (0.1,0.85);\n \t\t\\draw [->, thick, red] (-0.65+0.3,0.1) -- (-0.65,0.1);\n \t\t\\draw [->, thick, red] (-1.1,-1.3) -- (-1.1+0.3,-1.3); \n\t\t\t\\draw [red] (-1.35,0.15) circle [radius=0.28];\n\t\t\t\\draw [red] (-1.35,-0.45) circle [radius=0.28];\n\t\t\t\\node [blue] at (0.7,-1.95) {\\small $\\mathrm{RMSE} = 35.2$};\n\t\t\\end{scope}\n \\end{tikzpicture} \n &\n \\begin{tikzpicture}\n \t\t\\begin{scope}[spy using outlines={rectangle,yellow,magnification=1.25,size=15mm,connect spies}]\n \t\t\t\\node {\\includegraphics[viewport=35 105 275 345, clip, width=3.4cm,height=3.4cm]{Fig_CTrecon_v1\/x123_caol7x7_old.png}};\n\t\t\t\\spy on (0.8,-0.95) in node [left] at (-0.3,-2.2);\n \t\t\\draw [->, thick, red] (0.1,0.85+0.3) -- (0.1,0.85);\n \t\t\\draw [->, thick, red] (-0.65+0.3,0.1) -- (-0.65,0.1);\n \t\t\\draw [->, thick, red] (-1.1,-1.3) -- (-1.1+0.3,-1.3); \n\t\t\t\\draw [red] (-1.35,0.15) circle [radius=0.28];\n\t\t\t\\draw [red] (-1.35,-0.45) circle [radius=0.28];\n\t\t\t\\node [red] at (0.7,-1.95) {\\small $\\mathrm{RMSE} = 34.7$};\n\t\t\\end{scope}\n \\end{tikzpicture} \n\n \\end{tabular}\n\n \\vspace{-0.25em}\n\\caption{\nComparisons of reconstructed images from different reconstruction methods for sparse-view CT ($123$ views ($12.5$\\% sampling); for the MBIR model \\R{sys:CT&CAOL}, convolutional regularizers were trained by CAOL \\R{sys:CAOL:orth} -- see \\cite[Fig.~2]{Chun&Fessler:18Asilomar}; display window is within $[800, 1200]$ HU) \\cite{Chun&Fessler:18Asilomar}. 
\nThe MBIR model \\R{sys:CT&CAOL} using convolutional sparsifying regularizers trained via CAOL \\R{sys:CAOL:orth} shows higher image reconstruction accuracy compared to the EP reconstruction; see red arrows and magnified areas. \nFor the MBIR model \\R{sys:CT&CAOL}, the autoencoder (see Remark~\\ref{r:autoenc}) using the filter dimension $R \\!=\\! K \\!=\\! 49$ improves reconstruction accuracy of that using $R \\!=\\! K \\!=\\! 25$; compare the results in (d) and (e).\nIn particular, the larger dimensional filters improve the edge sharpness of reconstructed images; see circled areas. The corresponding error maps are shown in Fig.~\\ref{fig:CTrecon:err} of the supplementary material.\n}\n\\label{fig:CTrecon}\n\\end{figure*}\n\n\n\n\n\n\n\n\\subsection{Sparse-View CT MBIR with Learned Convolutional Sparsifying Regularizer (via CAOL) and BPEG-M} \\label{sec:result:CT}\n\n\n\nIn sparse-view CT using only $12.5$\\% of the full projections views, the CT MBIR \\R{sys:CT&CAOL} using the learned convolutional regularizer via CAOL \\R{sys:CAOL:orth} outperforms EP MBIR; it reduces RMSE by approximately $5.6$--$6.1$HU.\nSee the results in Figs.~\\ref{fig:CTrecon}(c)--(e).\nThe model \\R{sys:CT&CAOL} can better recover high-contrast regions (e.g., bones) -- see red arrows and magnified areas in Fig.~\\ref{fig:CTrecon}(c)--(e). \nNonetheless, the filters with $R\\!=\\!K\\!=\\!5^2$ in the ($\\psi$-weighting) autoencoding CNN, i.e., $\\sum_{k} ( P_f d_k^\\star ) \\circledast \\cH_{\\!\\sqrt{2 \\alpha' \\psi}} ( d_k^\\star \\circledast (\\cdot) )$ in \\R{eq:autoenc}, can blur edges in low-contrast regions (e.g., soft tissues) while removing noise. See Fig.~\\ref{fig:CTrecon}(d) -- the blurry issues were similarly observed in \\cite{Chun&etal:17Fully3D, Zheng&etal:19TCI}. \nThe larger dimensional kernels (i.e., $R\\!=\\!K\\!=\\!7^2$) in the convolutional autoencoder can moderate this issue, while further reducing RMSE values; compare the results in Fig.~\\ref{fig:CTrecon}(d)--(e).\nIn particular, the larger dimensional convolutional kernels capture more diverse features -- see \\cite[Fig.~2]{Chun&Fessler:18Asilomar}) -- and the diverse features captured in kernels are useful to further improve the performance of the proposed MBIR model \\R{sys:CT&CAOL}. (The importance of diverse features in kernels was similarly observed in CT experiments with the learned autoencoders having a fixed kernel dimension; see Fig.~\\ref{fig:CAOL:filters:fruit&city&ct-ii}(c).)\nThe RMSE reduction over EP MBIR is comparable to that of CT MBIR \\R{sys:CT&CAOL} \nusing the $\\{ R, K \\!=\\! 8^2 \\}$-dimensional filters trained via the patch-domain AOL \\cite{Ravishankar&Bressler:15TSP}; \nhowever, at each BPEG-M iteration, this MBIR model using the trained (non-TF) filters via patch-domain AOL \\cite{Ravishankar&Bressler:15TSP} requires more computations than the proposed CT MBIR model \\R{sys:CT&CAOL} using the learned convolutional regularizer via CAOL \\R{sys:CAOL:orth}.\nSee related results and discussion in \nFig.~\\ref{fig:CTrecon:TL} and Section~\\ref{sec:result:supp}, respectively. 
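The refining operation discussed above is simple to prototype. The sketch below applies the convolutional autoencoder \R{eq:autoenc} of Remark~\ref{r:autoenc} (with $\psi = 1_{N'}$) to an image given a set of learned 2D filters; it assumes circular boundaries and models $P_f$ as a $180^\circ$ rotation of each kernel, and the helper names are ours, so it should be read as an illustration rather than the code behind Fig.~\ref{fig:CTrecon}.

\begin{verbatim}
import numpy as np

def conv2_circ(x, d):
    # 2-D circular convolution of image x with a small kernel d (zero-padded).
    dpad = np.zeros_like(x)
    dpad[:d.shape[0], :d.shape[1]] = d
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(dpad)))

def hard_thresh(u, t):
    # Elementwise hard thresholding H_t(u).
    return np.where(np.abs(u) >= t, u, 0.0)

def conv_autoencoder(x, filters, alphas):
    # M(x) = sum_k (P_f d_k) conv H_{sqrt(2 alpha_k)}(d_k conv x).
    out = np.zeros_like(x)
    for d, a in zip(filters, alphas):
        z = hard_thresh(conv2_circ(x, d), np.sqrt(2.0 * a))
        out += conv2_circ(z, d[::-1, ::-1])
    return out
\end{verbatim}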
\n \n\n\n\nOn the algorithmic side, the BPEG-M framework can guarantee the convergence of CT MBIR \\R{sys:CT&CAOL}.\nUnder the sharp majorization regime in BPEG-M, maintaining the majorization sharpness is more critical than having stronger extrapolation effects -- see \\cite[Fig.~3]{Chun&Fessler:18Asilomar}, as similarly shown in CAOL experiments (see Section~\\ref{sec:result:CAOL}).\n\n\n\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nDeveloping rapidly converging and memory-efficient CAOL engines is important, since it is a basic element in training CNNs in an unsupervised learning manner (see Appendix~\\ref{sec:CNN}). \nStudying structures of convolutional kernels is another fundamental issue, since it can avoid learning redundant filters or provide energy compaction properties to filters.\nThe proposed BPEG-M-based CAOL framework has several benefits.\nFirst, the orthogonality constraint and diversity promoting regularizer in CAOL are useful in learning filters with diverse structures.\nSecond, the proposed BPEG-M algorithm significantly accelerates CAOL over the state-of-the-art method, BPG \\cite{Xu&Yin:17JSC}, with our sufficiently sharp majorizer designs.\nThird, BPEG-M-based CAOL uses much less memory compared to patch-domain AOL methods \\cite{Yaghoobi&etal:13TSP, Hawe&Kleinsteuber&Diepold:13TIP, Ravishankar&Bressler:15TSP}, and easily allows parallel computing.\nFinally, the learned convolutional regularizer provides the autoencoding CNN architecture in MBIR, and outperforms EP reconstruction in sparse-view CT.\n\n\nSimilar to existing unsupervised synthesis or analysis operator learning methods, the biggest remaining challenge of CAOL is optimizing its model parameters. \nThis would become more challenging when one applies CAOL to train CNNs (see Appendix~\\ref{sec:CNN}).\nOur first future work is developing \\dquotes{task-driven} CAOL that is particularly useful to train thresholding values.\nOther future works include further acceleration of BPEG-M in Algorithm~\\ref{alg:BPGM}, designing sharper majorizers requiring only $O( L R N )$ for the filter update problem of CAOL~\\R{sys:CAOL}, and applying the CNN model learned via \\R{sys:CNN:orth} to MBIR. \n\n\n\n\n\n\n\n\n\n\n\\section*{Appendix}\n\\renewcommand{\\thesubsection}{\\Alph{subsection}}\n\n\\subsection{Training CNN in a unsupervised manner via CAOL} \\label{sec:CNN}\n\nThis section mathematically formulates an unsupervised training cost function for classical CNN (e.g., LeNet-5 \\cite{LeCun&etal:98ProcIEEE} and AlexNet \\cite{Krizhevsky&etal:12NIPS}) and solves the corresponding optimization problem, via the CAOL and BPEG-M frameworks studied in Sections~\\ref{sec:CAOL}--\\ref{sec:CAOL+BPGM}.\nWe model the three core modules of CNN: \\textit{1)} convolution, \\textit{2)} pooling, e.g., average \\cite{LeCun&etal:98ProcIEEE} or max \\cite{Jarrett&etal:09ICCV}, and \\textit{3)} thresholding, e.g., RELU \\cite{Nair&Hinton:10ICML}, while considering the TF filter condition in Proposition~\\ref{p:TFconst}. \nParticularly, the orthogonality constraint in CAOL \\R{sys:CAOL:orth} leads to a sharp majorizer, and BPEG-M is useful to train CNNs with convergence guarantees.\nNote that it is unclear how to train such diverse (or incoherent) filters described in Section~\\ref{sec:CAOL} by the most common CNN optimization method, the stochastic gradient method in which gradients are computed by back-propagation. 
The major challenges include \\textit{a)} the non-differentiable hard thresholding operator related to the $\\ell^0$-norm in \\R{sys:CAOL}, \\textit{b)} the nonconvex filter constraints in \\R{sys:CAOL:orth} and \\R{sys:CAOL:div}, \\textit{c)} using the identical filters in both encoder and decoder (e.g., $W$ and $W^H$ in Section~\\ref{sec:prf:p:TF}), and \\textit{d)} vanishing gradients.\n\nFor simplicity, we consider a two-layer CNN with a single training image, but one can extend the CNN model \\R{sys:CNN:orth} (see below) to \\dquotes{deep} layers with multiple images. The first layer consists of \\textit{1c)} convolutional, \\textit{1t)} thresholding, and \\textit{1p)} pooling layers; the second layer consists of \\textit{2c)} convolutional and \\textit{2t)} thresholding layers.\nExtending CAOL \\R{sys:CAOL:orth}, we model two-layer CNN training as the following optimization problem:\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\allowdisplaybreaks\n\\ea{\n\\label{sys:CNN:orth}\n\\argmin_{\\{ d_{k}^{[1]}, d_{k,k'}^{[2]} \\}} \\min_{\\{ z_{k}^{[1]}, z_{k'}^{[2]} \\}}\n&~ \\sum_{k=1}^{K_1} \\frac{1}{2} \\left\\| d_{k}^{[1]} \\circledast x - z_{k}^{[1]} \\right\\|_2^2 + \\alpha_1 \\left\\| z_{k}^{[1]} \\right\\|_0 \\nn\n\\\\\n+ &\\, \\frac{1}{2} \\left\\| \\left( \\sum_{k=1}^{K_1} \\left[ \\arraycolsep=2pt \\begin{array}{c} d_{k,1}^{[2]} \\circledast P z_{k}^{[1]} \\\\ \\vdots \\\\ d_{k,K_2}^{[2]} \\circledast P z_{k}^{[1]} \\end{array} \\right] \\right) - \\left[ \\arraycolsep=2pt \\begin{array}{c} z_{1}^{[2]} \\\\ \\vdots \\\\ z_{K_2}^{[2]} \\end{array} \\right] \\right\\|_2^2 \\nn\n\\\\\n+ &\\, \\alpha_2 \\sum_{k'=1}^{K_2} \\left\\| z_{k'}^{[2]} \\right\\|_0 \\nn\n\\\\\n\\mathrm{subject~to} ~~~~&~ D^{[1]} \\big( D^{[1]} \\big)^H = \\frac{1}{R_1} \\cdot I, \\nn\n\\\\\n&~ D_k^{[2]} \\big( D_k^{[2]} \\big)^H = \\frac{1}{R_2} \\cdot I, \\quad k = 1,\\ldots,K_1, \n\\tag{A1}\n} \n\\endgroup\nwhere $x \\in \\bbR^{N}$ is the training data, $\\{ d_{k}^{[1]} \\in \\bbR^{R_1} : k = 1,\\ldots,K_1 \\}$ is a set of filters in the first convolutional layer, $\\{ z_{k}^{[1]} \\in \\bbR^{N} : k = 1,\\ldots,K_1\\}$ is a set of features after the first thresholding layer, $\\{ d_{k,k'}^{[2]} \\in \\bbR^{R_2} : k' = 1,\\ldots,K_2 \\}$ is a set of filters for each of $\\{ z_{k}^{[1]} \\}$ in the second convolutional layer, $\\{ z_{k'}^{[2]} \\in \\bbR^{N\/\\omega} : k' = 1,\\ldots,K_2\\}$ is a set of features after the second thresholding layer, $D^{[1]}$ and $\\{ D_k^{[2]} \\}$ are similarly given as in \\R{eq:D}, $P \\in \\bbR^{(N\/\\omega) \\times N}$ denotes an average pooling \\cite{LeCun&etal:98ProcIEEE} operator (see its definition below), and $\\omega$ is the size of the pooling window.
\nA bracketed superscript on vectors and matrices indicates the corresponding layer.\nHere, we model a simple average pooling operator $P \\in \\bbR^{(N\/\\omega) \\times N}$ by a block diagonal matrix with row vector $\\frac{1}{\\omega} 1_\\omega^T \\in \\bbR^{\\omega}$: $P := \\frac{1}{\\omega} \\bigoplus_{j=1}^{N\/\\omega} 1_\\omega^T$.\nWe obtain a majorization matrix of $P^T P$ by $P^T P \\preceq \\diag( P^T P 1_N ) = \\frac{1}{\\omega} I_N$ (using Lemma~\\ref{l:diag(|At|W|A|1)}).\nFor the 2D case, the structure of $P$ changes, but $P^T P \\preceq \\frac{1}{\\omega} I_N$ still holds.\n\n\n\n\n\n\n\nWe solve the CNN training model in \\R{sys:CNN:orth} via the BPEG-M techniques in Section~\\ref{sec:CAOL+BPGM}, and relate the solutions of \\R{sys:CNN:orth} and modules in the two-layer CNN training. The symbols in the following items denote the CNN modules.\n\\begin{itemize}\n[\\setlength{\\IEEElabelindent}{\\IEEEilabelindentA}]\n\\item[\\textit{1c})] Filters in the first layer, $\\{ d_{k}^{[1]} \\}$: Updating the filters is straightforward via the techniques in Section~\\ref{sec:prox:filter:orth}. \n\n\\item[\\textit{1t})] Features at the first layer, $\\{ z_{k}^{[1]} \\}$: Using BPEG-M with the $k\\rth$ set of TF filters $\\{ d_{k,k'}^{[2]} : k' \\}$ and $P^T P \\preceq \\frac{1}{\\omega} I_N$ (see above), the proximal mapping for $z_{k}^{[1]}$ is\n\\begingroup\n\\setlength{\\thinmuskip}{1.5mu}\n\\setlength{\\medmuskip}{2mu plus 1mu minus 2mu}\n\\setlength{\\thickmuskip}{2.5mu plus 2.5mu}\n\\be{\n\\label{eq:CNN:layer1:spCd}\n\\min_{z_{k}^{[1]}} \\frac{1}{2} \\left\\| d_{k}^{[1]} \\circledast x - z_{k}^{[1]} \\right\\|_2^2 + \\frac{1}{2\\omega'} \\left\\| z_{k}^{[1]} - \\zeta_{k}^{[1]} \\right\\|_2^2 + \\alpha_1 \\left\\| z_{k}^{[1]} \\right\\|_0,\n}\n\\endgroup\nwhere $\\omega' = \\omega \/ \\lambda_Z$ and $\\zeta_{k}^{[1]}$ is given by \\R{update:x}. Combining the first two quadratic terms in \\R{eq:CNN:layer1:spCd} into a single quadratic term leads to an optimal update for \\R{eq:CNN:layer1:spCd}:\n\\bes{\nz_{k}^{[1]} = \\cH_{\\! \\sqrt{ 2 \\frac{\\omega' \\alpha_1}{\\omega' + 1} }} \\left( d_{k}^{[1]} \\circledast x + \\frac{1}{\\omega'} \\zeta_{k}^{[1]} \\right), \\quad k \\in [K_1],\n} \nwhere the hard thresholding operator $\\cH_a (\\cdot)$ with a thresholding parameter $a$ is defined in \\R{eq:def:hardthr}.\n\n\n\\item[\\textit{1p})] Pooling, $P$: Applying the pooling operator $P$ to $\\{ z_{k}^{[1]} \\}$ gives input data -- $\\{ P z_{k}^{[1]} \\}$ -- to the second layer.\n\n\\item[\\textit{2c})] Filters in the second layer, $\\{ d_{k,k'}^{[2]} \\}$: We update the $k\\rth$ set of filters $\\{ d_{k,k'}^{[2]} : \\forall k' \\}$ in a sequential way. Updating the $k\\rth$ set of filters is straightforward via the techniques in Section~\\ref{sec:prox:filter:orth}.
\n\n\\item[\\textit{2t})] Features at the second layer, $\\{ z_{k'}^{[2]} \\}$: The corresponding update is given by\n\\bes{\nz_{k'}^{[2]} = \\cH_{\\!\\sqrt{2 \\alpha_2}} \\left( \\sum_{k=1}^{K_1} d_{k,k'}^{[2]} \\circledast P z_{k}^{[1]} \\right), \\quad k' \\in [K_2].\n}\n\\end{itemize}\n\nConsidering the introduced mathematical formulation of training CNNs \\cite{LeCun&etal:98ProcIEEE} via CAOL, BPEG-M-based CAOL has the potential to be a basic engine to rapidly train CNNs with big data (i.e., training data consisting of many (high-dimensional) signals).\n\n\n\n\\subsection{Examples of $\\{ f(x;y), \\cX \\}$ in MBIR model \\R{eq:mbir:learn} using learned regularizers} \\label{sec:egs}\n\n\nThis section introduces some potential applications \nof the MBIR model \\R{eq:mbir:learn} with learned regularizers\nin image processing, imaging, and computer vision.\nWe first consider a quadratic data fidelity function of the form $f(x;y) = \\frac{1}{2} \\| y - A x \\|_W^2$.\nExamples include\n\\begin{itemize}\n\\item Image deblurring (with $W \\!=\\! I$ for simplicity), \nwhere $y$ is a blurred image, $A$ is a blurring operator, and $\\cX$ is a box constraint; \n\\item Image denoising (with $A \\!=\\! I$),\nwhere $y$ is a noisy image corrupted by additive white Gaussian noise (AWGN),\n$W$ is the inverse covariance matrix corresponding to AWGN statistics,\nand $\\cX$ is a box constraint;\n\\item Compressed sensing (with $\\{ W \\!=\\! I, \\cX \\!=\\! \\bbC^{N'} \\}$ for simplicity) \\cite{Chun&Adcock:17TIT, Chun&Adcock:18ACHA},\nwhere $y$ is a measurement vector, \nand $A$ is a compressed sensing operator,\ne.g., subgaussian random matrix, bounded orthonormal system, subsampled isometries, certain types of random convolutions;\n\\item Image inpainting (with $W \\!=\\! I$ for simplicity),\nwhere $y$ is an image with missing entries, $A$ is a masking operator, and $\\cX$ is a box constraint;\n\\item Light-field photography from focal stack data \nwith $f(x;y) = \\sum_{c} \\| y_c - \\sum_{s} A_{c,s} x_{s} \\|_2^2$,\nwhere $y_c$ denotes measurements collected at the $c\\rth$ sensor, \n$A_{c,s}$ models camera imaging geometry at the $s\\rth$ angular position for the $c\\rth$ detector, \n$x_s$ denotes the $s\\rth$ sub-aperture image, $\\forall c,s$,\nand $\\cX$ is a box constraint \\cite{Block&Chun&Fessler:18IVMSP, Chun&etal:18arXiv:momnet}.\n\\end{itemize}\nExamples that use a nonlinear data fidelity function include \nimage classification using the logistic function \\cite{Mairal&etal:09NIPS}, \nmagnetic resonance imaging considering unknown magnetic field variation \\cite{Fessler:10SPM}, \nand positron emission tomography \\cite{Lim&etal:19TMI}.\n\n\n\n\n\\subsection{Notation} \\label{sec:notation}\n\nWe use $\\nm{\\cdot}_{p}$ to denote the $\\ell^p$-norm and write $\\ip{\\cdot}{\\cdot}$ for the standard inner product on $\\bbC^N$. \nThe weighted $\\ell^2$-norm with a Hermitian positive definite matrix $A$ is denoted by $\\nm{\\cdot}_{A} = \\nm{ A^{1\/2} (\\cdot) }_2$.\n$\\nm{\\cdot}_{0}$ denotes the $\\ell^0$-quasi-norm, i.e., the number of nonzeros of a vector. \nThe Frobenius norm of a matrix is denoted by $\\| \\cdot \\|_{\\mathrm{F}}$. \n$( \\cdot )^T$, $( \\cdot )^H$, and $( {\\cdot} )^*$ indicate the transpose, complex conjugate transpose (Hermitian transpose), and complex conjugate, respectively.
\n$\\diag(\\cdot)$ denotes the conversion of a vector into a diagonal matrix or diagonal elements of a matrix into a vector.\n$\\bigoplus$ denotes the matrix direct sum of matrices.\n$[C]$ denotes the set $\\{1,2,\\ldots,C\\}$. \nDistinct from the index $i$, we denote the imaginary unit $\\sqrt{-1}$ by $\\imath$. \nFor (self-adjoint) matrices $A,B \\in \\bbC^{N \\times N}$, the notation $B \\preceq A$ denotes that $A-B$ is a positive semi-definite matrix. \n\n\n\n\n\\section*{Acknowledgment}\n\nWe thank Xuehang Zheng for providing CT imaging simulation setup, and Dr. Jonghoon Jin for constructive feedback on CNNs.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nThe Planck 2013 and 2015 data releases open new directions in precision cosmology with regard to a more advanced\ninvestigation of the statistical isotropy and non-Gaussianity of the cosmic microwave background (CMB)~\\cite{Planck1, Planck2, Planck_is, Planck_is2015}.\nWhile generally confirming the Gaussianity and the statistical isotropy of the CMB (for the relevant multipole domain $\\ell\\ge 50$), the Planck\nscience team confirmed the existence of a variety of anomalies in the temperature anisotropy on large angular scales ($\\ell\\le 50$)\npreviously seen in WMAP data~\\cite{Planck_is,Planck_is2015}. Among these are the lack of power in the quadrupole component~\\cite{low_quadrupole} (see, however,~\\cite{wmap9b}\nand~\\cite{Planck_is}), the alignment of the quadrupole and octupole components~\\cite{Copi1, wmap9b, Planck_is}, the unusual symmetry of the\noctupole~\\cite{mhansen}, anisotropies in the temperature angular power spectrum~\\cite{WMAP7:powerspectra, Planck_is, Hansen2004, Eriksen2003},\n preferred directions~\\cite{Pref.Direction1, Pref.Direction2, Copi2, Planck_is, Planck_is2015, Akrami2014}, asymmetry in the power of even and odd modes~\\cite{parity1, parity2, wmap9b, Planck_is, Planck_is2015} and the Cold Spot~\\cite{vielva2003, Coldspot1, Planck_is, Planck_is2015}.\n Some of these anomalies are probably a consequence of the residuals of foreground effects that could be a major source of contamination in the primordial $E$- and $B$-modes of polarization (in this connection see~\\cite{Planck_bicep}).\n\nThe statistics of $B$-mode polarization that can be derived from ongoing and planned CMB experiments will be crucial for the determination of the\ncosmological gravitational waves associated with inflation~\\cite{inflation} at the range of multipoles $50\\le \\ell\\le 150$, closer to\nthe domain of interest for BICEP2 and Planck~\\cite{bicep,Planck_bicep}. It seems likely that $B$-mode polarization in this range is affected\nby Galactic dust emission, the statistical properties of which are very poorly known. (For $\\ell >150$ we expect contamination of the $B$-modes\ndue to lensing effects the precise nature of which is also not fully understood.) In the absence of such knowledge it is difficult to make \\emph{a priori}\nproposals for the best estimator of non-Gaussianity and statistical anisotropies in the derived $B$-modes due to possible contamination by foreground residuals.\nThus, we believe that it is of value to propose additional model-independent tests aimed at providing an improved quantitative understanding of the magnitude of non-Gaussianity in current CMB data. Such tests would also be useful for the analysis of forthcoming CMB data sets. 
In this paper\nwe propose use of the Kullback--Leibler (KL) divergence as such a test. The goal of this paper is to illustrate the utility of the KL divergence in studying the properties of the CMB signal --- a Gaussian or almost Gaussian signal. The KL divergence is likely to be even more useful for very non-Gaussian cases such as the statistical behaviour of the Minkowski functionals for a single map or the pixel-pixel cross-correlation coefficient between two maps, when calculated in small areas. We will consider such issues in a separate publication.\n\nThe one-point probability distribution function (PDF) would seem to be a reasonable starting point for the investigation of non-Gaussianity.\nSuch tests have been applied by the Planck team to a variety of derived CMB temperature maps including the SMICA, NILC, SEVEM and\nCommander maps~\\cite{Planck_is,Planck_is2015}. In practice, these tests involve comparison of the various temperature fluctuation maps with an ensemble\nof simulated maps. The CMB temperature is characterized by a power spectrum, $C_\\ell^\\text{Planck}$, which\ncorresponds to the Planck 2013 concordance \\LCDM\\ model with the cosmological parameters listed in~\\cite{Planck_cp}.\nThe simulated maps are obtained as Monte Carlo (MC) Gaussian draws on this power spectrum (in harmonic space). When processed with the Planck component separation tool, the resulting simulation maps contain both CMB information as well as various residuals from foregrounds, the uncertainties of instrumental effects, etc. For the Planck 2013 data release, the corresponding $10^3$ full focal plane simulations are referred to as the FFP7 simulations.\nThey reflect the intrinsic properties of the SMICA, NILC, SEVEM and Commander maps. Differences between the FFP7 maps and the various empirical maps can provide useful information regarding non-Gaussianity. In the following, we shall restrict our attention to the SMICA map.\n\nIn practice, the non-parametric Kolmogorov--Smirnov (KS) test is often used to assess the similarity of two distributions. The KS test\ncharacterizes the difference between the two cumulative distribution functions (CDF) in terms of the maximum absolute deviation between them. The KS estimator, $\\kappa$, is defined as\n\\begin{equation}\n\\kappa= \\sqrt{n} \\max[|F(x)-F_n(x)|] \\, ,\n\\label{eq:KS_definition}\n\\end{equation}\nwhere $F(x)$ is the theoretical expectation of the CDF and $F_n(x)$ is obtained from a data sample with $n$ elements. Here, both $F(x)$ and $F_n(x)$ must be normalized to the range $[0,1]$. Note that $F(x)$ is normally a continuous function or should at least be defined for all possible values of the data sample $F_n(x)$. It is clear from Eq.~\\eqref{eq:KS_definition} that the KS estimator $\\kappa$ is local in the sense that its value will be determined at a point, $x$, where the PDFs corresponding to $F(x)$ and $F_n(x)$ cross. We note that the use of PDFs in Eq.~\\eqref{eq:KS_definition} instead of CDFs would result in a\nmaximal sensitivity to the largest local anomaly. \n\nUnlike the case of vectors (where the scalar product provides a standard measure), there is no generic ``best''\nmeasure for quantifying the similarity of two distributions. Thus, we believe that it is also useful to consider the Kullback--Leibler divergence for two discrete probability distributions, $P$ and $Q$. 
The KL divergence on the set of points $i$ is defined~\\cite{kl} as\n\\be \\label{eq:KL_definition}\nK(P\\|Q)=\\sum_i P_i\\log\\left(\\frac{P_i}{Q_i}\\right) \\, .\n\\ee\nIn other words, the KL divergence is the expectation value of the logarithmic difference between the two probability distributions as computed with\nweights of $P_i$. Typically, $P$ represents the distribution of the data, while $Q$ represents a theoretical expectation of the data. Unlike the\nKS test, the KL divergence is non-local. Indeed, we shall indicate below that it is in a sense ``maximally'' non-local. It is familiar in information\ntheory, where it represents the difference between the intrinsic entropy,\n$H_P$, of the distribution $P$ and the cross-entropy, $H_{PQ}$, between $P$ and $Q$,\n\\be\nK(P\\|Q)=H_{PQ}-H_P, \\qquad H_P=-\\sum_i P_i\\log P_i, \\qquad H_{PQ}=-\\sum_i P_i\\log Q_i \\, .\n\\label{eq:entropy}\n\\ee\nIn more practical terms, consider the most probable result of $N$ independent random draws on the distribution $P$. When $N$ is large, the number of\ndraws at point $i$ is simply $n_i = N P_i$. Now construct the probabilities, $\\Pi_P$ and $\\Pi_Q$, that this most probable result was drawn at random\non distribution $P$ or $Q$, respectively. The KL divergence of Eq.~\\eqref{eq:KL_definition} is simply $N^{-1}\\log{(\\Pi_P \/ \\Pi_Q)}$.\n We note that simulations of the CMB\nmap, drawn independently in harmonic space, have correlations in pixel space.\n Nevertheless we regard this argument as motivation for applying the KL divergence to CMB pixels.\n\nThe main goal of this paper is to illustrate the implementation of the KL divergence for\nthe analysis of the statistical properties of the derived CMB maps in the low multipole domain as a complementary test to the methods listed in~\\cite{Planck_is,Planck_is2015}. The structure of the paper is as follows. In Section~\\ref{sec:KL_divergence_and_Planck_data} we present some properties of the KL divergence. We also use it to analyze the Planck SMICA map and compare it to both the FFP7 set and to a purely Gaussian ensemble. In addition, we compare the two ensembles and compare the KL divergence to the KS test in the low multipole domain of the CMB map. In Section~\\ref{sec:discussion} we discuss the results. Note that the Planck papers~\\cite{Planck_is,Planck_is2015} test the Gaussianity of the one-point PDF by analyzing its variance, skewness and kurtosis. In this sense, the KL divergence is simply another test on the global shape of the PDF. Here, we restrict our analysis by using the SMICA map and the corresponding simulations. The extension of the method to $E$- and $B$-modes of polarization does not require any modification.\n\n\n\n\\section{KL divergence and Planck data}\n\\label{sec:KL_divergence_and_Planck_data}\n\n\\subsection{Preliminary remarks: Properties of the KL divergence}\n\\label{subsec:Preliminary_remarks_KL_divergence}\n\nAs noted above, the KL divergence provides a measure of the similarity of two known distributions. In many cases, however, one of these\ndistributions is not known and must rather be approximated by the average PDF for a statistical ensemble of realizations of the random field. The\nquestion then arises of how closely the resulting proxy reflects the properties of the true underlying distribution. 
To offer some insight in this\nmatter, we consider a toy model based on a discrete Gaussian distribution, $P_k$, with\n\\be\nP_k \\sim \\exp{\\left[ - k^2\/25 \\right]}\\, ,\n\\ee\nand $k$ an integer between $-10$ and $+10$ subject to the obvious normalization condition. The mean value of\n$k$ is $0$ and the variance is approximately $25\/2$. Suppose for simplicity that we define an individual data\nset as $N$ random draws on this distribution. Each such data set can be regarded as a proxy, $Q_i$ for the underlying\ndistribution, and can be used to calculate the KL divergence $K(P\\|Q)$ defined in Eq.~\\eqref{eq:KL_definition}. For a given value of $N$, we repeat this\nprocess $M$ times and compute the average KL divergence, $\\overline{K}$, and the root-mean-square (RMS) deviation of the KL divergence, $\\Delta K = \\sqrt{\\overline{K^2} - \\overline{K}{}^2 }$. The results of\nthis exercise are shown in Table~\\ref{tab:Change_N}, where we have used the common value of $M=1000$.\n\\begin{table}\n\t\\centering\n\t\\caption{Mean and RMS values of the KL divergence for a discrete Gaussian distribution.}\n\t\\vspace{10pt}\n\t\\begin{tabular}{rcccc}\n\t\t\\hline\n\t\t\\noalign{\\smallskip}\n\t\t\\multicolumn{1}{c}{$N$} & $\\overline{K}$ & $\\Delta K$ & $N\\overline{K}$ & $N\\Delta K$ \\\\\n\t\t\\hline\n\t\t 4000 & 0.002540 & 0.000796 & 10.160 & 3.18 \\\\\n\t\t 8000 & 0.001253 & 0.000413 & 10.024 & 3.30 \\\\\n\t\t16000 & 0.000636 & 0.000200 & 10.176 & 3.20 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab:Change_N}\n\\end{table}\nSeveral things seem clear. Both the average value of the KL divergence and the RMS deviation from this average value vanish like $1\/N$ for large $N$. From general arguments,\nthe KL divergence cannot be negative. Fig.~\\ref{fig:KL_hist_change_N} shows a histogram of the distribution of KL divergences\nobtained with $M = 20000$ for the cases $N=100$ and $N=1000$.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\halfW]{KL_hist_change_N.ps}\n\t\\caption{Histogram of KL divergences with $M=20000$ for $N=100$ (\\emph{black})\n\tand $N=1000$ (\\emph{red}). Note that the horizontal axis is measured in units of the\n\tcorresponding value of $\\overline{K}$.}\n\t\\label{fig:KL_hist_change_N}\n\\end{figure}\nThe KL divergences here are measured\nin units of the corresponding value of $\\overline{K}$. The fact that these distributions scale like $1\/N$ is\nobvious. These histograms suggest power law suppression near zero and Gaussian behaviour\nfor large values of the KL divergence.\n\n\nThe results shown in Fig.~\\ref{fig:KL_hist_change_N} are not specific to the KL divergence, and qualitatively\nsimilar results would be obtained for any measure chosen to describe the similarity of two distributions.\nAll that is required is that the measure chosen is always positive and vanishes when the distributions\nbeing compared are identical. When drawn as here, each individual data set can be thought of\nas a combination of the ``exact'' distribution plus the amplitudes of $(N-1)$ ``fluctuations''.\\footnote{Due to\nthe fact that each data set contains exactly $N$ draws.} Obviously, the amplitudes of every one of these\nfluctuations must be exactly zero if the measure is to be $K = 0$. Of course, there are many combinations of the\nfluctuation amplitudes that will give any fixed non-zero value of the measure, and their number increases as\n$K$ grows from zero. 
In contrast, for the case of only two options (e.g., ``heads'' and ``tails'') subject to a constraint\non the total number of draws, there is only a single degree of freedom. In this case $K = 0$ is actually the most\nprobable value.\n\nA few additional remarks can help clarify the properties of the KL divergence. Consider that each individual\ndata set is sufficiently large that $Q_i = P_i + \\delta P_i$ where $\\delta P_i$ is small. Under these conditions,\nterms linear in $\\delta P_i$ vanish as a consequence of normalization, and the KL divergence is given simply as\n\\be\nK = \\frac{1}{2} \\, \\sum_i \\, \\frac{\\delta P_i^2}{P_i} \\, .\n\\label{eq:KL_small_fluctuation}\n\\ee\nWe see that the distributions $P$ and $Q$ are now treated symmetrically in spite of the asymmetry that is\napparent in Eq.~\\eqref{eq:KL_definition}. Elementary arguments suggest that the average number of draws on bin $i$ will be\n$N P_i \\pm \\sqrt{N P_i}$. The corresponding proxy for the underlying distribution will be $P_i \\pm \\sqrt{P_i \/ N}$.\nGiven this result, Eq.~\\eqref{eq:KL_small_fluctuation} suggests that, for fixed bin sizes and in the limit $N \\to \\infty$, each bin will make\na contribution to the KL divergence of roughly equal size. It is hard to imagine a greater degree of non-locality. Moreover, in this limit the KL divergence is expected to be of order $N_b\/N$, where $N_b$ is the number of bins. In other words, if we define\n\\be \\label{eq:alpha_definition}\n\\alpha=\\frac{N}{N_b}K\\, ,\n\\ee\nwe expect that $\\alpha\\sim\\mathcal{O}(1)$. This realization allows us to assign a rough scale for the expected KL divergence. If two given distributions yield a much larger value of $\\alpha$, we can conclude that they differ significantly from each other without resorting to comparisons with an ensemble. In the example shown in Table~\\ref{tab:Change_N} we are using $N_b=30$, meaning $\\alpha$ is small. This is to be expected since we are indeed using the true distribution to draw the data under examination.\n\n\n\\subsection{Preliminary remarks: Planck data}\n\\label{subsec:Preliminary_remarks_Planck_data}\n\nWe have performed the KL divergence test on the CMB map obtained by the Planck collaboration~\\cite{Planck_cp} using the SMICA method. Since we are interested in the\nstatistical properties of the CMB map on large scales, we first degrade the map from its native \\textsc{HEALPix}~\\cite{HEALPix} resolution of $\\Nside=2048$\nto $\\Nside=32$. We then construct the convolution of this map with a Gaussian smoothing kernel of $5\\deg$ FWHM and retain only the harmonic coefficients with\n$\\ell \\le \\lmax=96$. The SMICA map provides a useful estimation of the CMB temperature fluctuations for a very large fraction --- but not all --- of the sky. We use the SMICA inpainting mask to exclude heavily contaminated regions, mainly the Galactic plane. At the resolution considered here, the mask removes about 6\\% of the pixels, leaving the number of pixels under consideration to be $N=11565$. Since the analysis is performed in pixel space,\napplication of the mask is trivial.\n\nIn order to estimate the statistical significance of our results, we compare them to ensembles of realizations. In this work we use two different ensembles.\nThis is done in order to cross-check the significance estimations and also allows us to compare the two ensembles. The first of these ensembles is the\nFFP7 set described above. We degrade and smooth the FFP7 maps in the same manner as we did the SMICA map. 
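The degrading, smoothing, and masking steps just described can be reproduced along the following lines. The sketch uses the publicly available \texttt{healpy} package purely for illustration (the text does not prescribe a particular software stack), and the file names are placeholders.

\begin{verbatim}
import numpy as np
import healpy as hp

NSIDE_OUT, LMAX, FWHM_RAD = 32, 96, np.radians(5.0)

def degrade_and_smooth(path):
    # Downgrade a HEALPix map to Nside = 32, then smooth it with a 5-deg FWHM
    # Gaussian kernel, keeping only multipoles up to lmax = 96.
    m = hp.ud_grade(hp.read_map(path), NSIDE_OUT)
    return hp.smoothing(m, fwhm=FWHM_RAD, lmax=LMAX)

smica = degrade_and_smooth("smica_cmb_map.fits")    # placeholder file name
mask = hp.ud_grade(hp.read_map("smica_mask.fits"), NSIDE_OUT) > 0.5
pixels = smica[mask]    # roughly 6% of pixels are removed by the mask
\end{verbatim}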
We expect that the effects of detector noise will be minor on the large scales considered here. To test this expectation we therefore also make use of the best-fit power spectrum,\n$C_\\ell^\\text{Planck}$~\\cite{Planck_cp}, to generate an ensemble of $10^3$ Gaussian random realizations free of residuals. As in the\ncase of the FFP7 ensemble, we restrict the multipole domain to $\\lmax=96$ and smooth the harmonic coefficients with a Gaussian filter of $5\\deg$ FWHM.\nWe also multiply the coefficients with the pixel window function associated with an $\\Nside=32$ pixelization before converting them to\nan $\\Nside=32$ map.\n\nIn our analysis we calculate the KL divergence for the SMICA map in pixel space. We calculate a histogram of the temperature fluctuations in the\nunmasked pixels by taking bins of width $8\\uK$ in the range $[-200,200]\\uK$, meaning that the number of bins is $N_b=51$. Values outside this range are attributed to the edge bins. This histogram is taken as the $P$ probability distribution of Eq.~\\eqref{eq:KL_definition}. For $Q$, the expected distribution, we turn to the ensemble of simulations, either FFP7 or\nthe Gaussian realizations. We calculate the histogram for each simulation (using the same range and binning), and take $Q$ to be the mean of all histograms. These histograms are shown in Fig.~\\ref{subfig:maps_histograms} together with error bars showing the 5--95\\% range for the FFP7 set.\n\\begin{figure}\n\t\\centering\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\halfW]{T_hist.ps}\n\t\t\\label{subfig:maps_histograms}\n\t}\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\halfW]{KL_hist.ps}\n\t\t\\label{subfig:KL_histograms}}\n\t \\caption{\\subref{subfig:maps_histograms}~Number of counts versus amplitude of the SMICA map (\\emph{red}), the FFP7 (\\emph{black}) and\n\tthe Gaussian (\\emph{blue}) ensembles. The error bars show the 5--95\\% range for the FFP7 set. \\subref{subfig:KL_histograms}~Histograms of normalized KL\n\tdivergence values for the FFP7 ensemble (\\emph{black}) and the Gaussian ensemble (\\emph{blue}). The values for the SMICA map are shown as red vertical\n\tlines, compared to the FFP7 mean distribution (\\emph{solid}) and to the Gaussian mean distribution (\\emph{dotted}).}\n\t\\label{fig:KLResults}\n\\end{figure}\nIt appears that the histogram of the SMICA map deviates from the reference histograms by~$\\approx2\\sigma$ primarily in the vicinity of the peak of the distribution. However, this estimation relies on a local feature. The KL divergence provides us with a recipe to sum all the deviations from the entire range of the distribution with appropriate weights.\n\nBefore using Eq.~\\eqref{eq:KL_definition} to calculate the KL divergence, it is necessary to pay particular attention to bins in which either $P$ or $Q$ has small values.\nThe case in which $P_i=0$ is not problematic since $P_i\\log P_i \\to 0$ in this limit. Bins for which $Q_i=0$, however, should not be included since the\nKL divergence is logarithmically divergent as $Q_i \\to 0$. Such a result is not unreasonable since it is impossible to draw to a bin if its probability is\nstrictly $0$. In practice, however, small values of $Q_i$ are merely a consequence of the size of our ensemble. 
We have chosen to ignore bins for\nwhich $Q_i<5$~pixels in order to minimize the sensitivity to small non-statistical fluctuations in the extreme tails of the $Q$ distribution.\n\n\n\\subsection{The Basic Results}\n\\label{subsec:The_Basic_Results}\n\nWe have calculated the KL divergences between the SMICA histogram and the histograms made from the FFP7 and the Gaussian ensembles. After normalization using the number of valid pixels, $N$, and the number of bins, $N_b$, in Eq.~\\eqref{eq:alpha_definition}, we have obtained $\\alpha=6.47$ and $6.28$, respectively. If these values were significantly larger than the expected order of magnitude, we would conclude that the distributions were in disagreement. Since this is not the case, we must compare the results to ensembles of values of $\\alpha$.\nIn order to calculate the $p$-values, we repeat the calculation of the KL divergence, replacing the distribution of the map, $P$, with that\nof each of the random simulations. This results in two histograms of normalized $K$ values, i.e.\\ $\\alpha$ values, for FFP7 (Gaussian) maps compared to the FFP7 (Gaussian) mean distribution,\nshown in Fig.~\\ref{subfig:KL_histograms}. It is evident that, as expected, $\\alpha\\lesssim10$ for most simulations. We find that $5.6\\%$ of the FFP7 simulations and $6.3\\%$ of the Gaussian simulations get a higher KL\ndivergence than the SMICA map. We see that the KL divergence of the SMICA map from the expected distribution is not significant. As expected, differences\nbetween the two reference ensembles, the FFP7 simulations and the pure Gaussian realizations, are quite small.\nIn order to demonstrate explicitly the similarity between the two ensembles with respect to the KL divergence, we have also tested each of the FFP7 simulations\nagainst the mean distribution of the Gaussian ensemble and vice versa. The results of this calculation are extremely similar to those shown in Fig.~\\ref{subfig:KL_histograms}, and we can conclude that the added complexity of the FFP7 simulations relative to that of simple Gaussian realizations plays a minor role at this resolution.\n\n\nIn addition to the KL divergence, and as a basis for comparison, we also use the KS test to compare between the histogram of the SMICA map and the mean histogram for each of the ensembles. The KS test, defined in Eq.~\\eqref{eq:KS_definition}, requires the use of the CDF. For the SMICA map, the CDF is calculated from the data without any binning. The reference CDF, however, is calculated by first fitting the mean histogram of the ensemble (either FFP7 or the Gaussian realizations) to a Gaussian, and then using the fitted parameters in the expression for a Gaussian CDF. As in the case of the KL divergence, for each ensemble, we compare the SMICA map to the mean histogram of the ensemble and also create a histogram of KS test values by taking each realization separately and comparing it to the mean. The results are shown in Fig.~\\ref{fig:KSResults}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\halfW]{KS_hist.ps}\n\t\\caption{Histograms of KS test values for the FFP7 ensemble (\\emph{black}) and the Gaussian ensemble (\\emph{blue}). The values for the SMICA map are shown as red vertical\nlines, compared to the FFP7 mean distribution (\\emph{solid}) and to the Gaussian mean distribution (\\emph{dotted}).}\n\t\\label{fig:KSResults}\n\\end{figure}\nThe KS test values we get when comparing the SMICA map to the FFP7 and Gaussian simulations are $\\kappa=8.32$ and $8.21$, respectively. 
The corresponding $p$-values are $3.0\\%$ and $2.6\\%$. Again we see that the results for the two ensembles are in good agreement. Moreover, while the $p$-values of the KS test are lower than those of the KL divergence, the SMICA map still appears to be consistent with the reference ensembles and not anomalous.\n\n\nAs we can see from Fig.~\\ref{subfig:maps_histograms}, there are well-defined temperature ranges in which the SMICA histogram is above or\nbelow the reference. Thus, in Fig.~\\ref{fig:smica_regions} we plot the SMICA map, showing only the temperature range $|T|\\le50\\uK$ where the SMICA\nhistogram is above the reference and the temperature range $50\\uK\\le|T|\\le120\\uK$ where it is below.\n\\begin{figure}\n\t\\centering\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\thirdW,angle=90]{Map50.ps}\n\t\t\\label{subfig:smica_-50_50}\n\t}\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\thirdW,angle=90]{Map120.ps}\n\t\t\\label{subfig:smica_abs_50_120}\n\t}\n\t\\caption{The SMICA map, showing only those regions where \\subref{subfig:smica_-50_50}~$|T|\\le50\\uK$ and \\subref{subfig:smica_abs_50_120}~$50\\uK\\le|T|\\le120\\uK$. The small Galactic mask used in the analysis appears as a thin horizontal gray line in the center of the maps, and the masked pixels are not included in any of the temperature ranges. In panel~\\subref{subfig:smica_abs_50_120}, the black curves mark the location of the ecliptic plane.}\n\t\\label{fig:smica_regions}\n\\end{figure}\nWe see that there is no apparent tendency for the contributions from either of these temperature ranges to be localized in specific regions of the sky. We do, however, pay special attention to the region of the ecliptic plane. As is apparent from fig.~3 of~\\cite{Planck15Maps}, the SMICA map is susceptible to contamination from foreground residuals in the region of the ecliptic. We therefore include in Fig.~\\ref{subfig:smica_abs_50_120} curves showing the location of the ecliptic plane and suggest that the number of cold spots in the ecliptic band might be unexpected. As it is not the focus of this work, we have not performed any quantitative analysis regarding the spatial distribution of hot or cold regions of the map. The maps in Fig.~\\ref{fig:smica_regions} provide\nan additional general indication that the SMICA temperature map is not anomalous. Nevertheless, we again emphasize that small foreground residuals, like those suspected to lie in the area of the ecliptic plane, while insignificant to the analysis of temperature fluctuations, can become extremely important when analyzing the CMB polarization pattern, specifically $B$-mode polarization. \n\n\n\n\\subsection{Interchangeability of $P$ and $Q$}\n\\label{sub:interchangeability_of_P_and_Q}\n\nAs has been noted above, the KL divergence is not symmetric with respect to the interchange of the distributions $P$ and $Q$ except in the limit $P \\to Q$.\nThis is a reminder of the fact that the KL divergence is not a true metric of the distance between $P$ and $Q$. So far, we have followed the common practice of taking $P$ to be the distribution of the data and $Q$ the expected distribution~\\cite{kl}. However, it is worth checking what happens when these roles are reversed. We have performed two tests involving interchange of the two distributions. 
First, we simply calculate the KL divergence $K(Q\\|P)$, where $P$ is again the SMICA histogram and $Q$ is the mean histogram of the FFP7 ensemble.\\footnote{Note that with the roles reversed, bins with $P_i < 5$ are now ignored and bins with small values of $Q_i$ are all counted.} This value is then compared to the ensemble of values computed with $P$ replaced by each of the FFP7 maps. The\nresulting histogram, after normalization using Eq.~\\eqref{eq:alpha_definition}, is presented in Fig.~\\ref{subfig:PQ_and_QP} together with the histogram of $K(P\\|Q)$ (presented above) as a reference.\n\\begin{figure}\n\t\\centering\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\halfW]{KL_QP_hist.ps}\n\t\t\\label{subfig:PQ_and_QP}\n\t}\n\t\\subfigure[]{\n\t\t\\includegraphics[width=\\halfW]{Delta_KL_hist.ps}\n\t\t\\label{subfig:PQ_-_QP}\n\t}\n\t\\caption{\\subref{subfig:PQ_and_QP}~Histograms for the usual KL divergence $K(P\\|Q)$ (\\emph{black}) and for the reversed divergence $K(Q\\|P)$\n\t(\\emph{blue}). The red and blue vertical lines are the values for the SMICA map for the normal and reversed tests, respectively. All $K$ values have been normalized using Eq.~\\eqref{eq:alpha_definition}.\n\\subref{subfig:PQ_-_QP}\n\t~Histogram of the normalized difference $\\alpha(P\\|Q)-\\alpha(Q\\|P)$. The vertical line is the value for the SMICA map.}\n\t\\label{fig:PQQP}\n\\end{figure}\nWe can see that the two histograms and the corresponding $p$-values are similar. The value $p = 6.5\\%$ was obtained for the reversed test; the\n$p$-value for the normal test is $5.6\\%$ as stated above. It is apparent that, although similar, the reversed histogram is slightly but consistently shifted towards smaller $\\alpha$ values than the normal histogram. The value for the SMICA map is also lower for the reversed test. However, SMICA is a single map, whereas the shift between the histograms is only a statistical statement about the ensemble as a whole. Therefore, the second test is to examine the difference $\\Delta\\alpha = \\alpha(P\\|Q)-\\alpha(Q\\|P)$ between the normal and reversed normalized KL divergences of the \\emph{same} map. Fig.~\\ref{subfig:PQ_-_QP} shows the resulting histogram for the FFP7 ensemble together with the value for SMICA. The SMICA map shows an entirely typical $\\Delta\\alpha$, yielding a $p$-value of $49.4\\%$. A similar test performed versus the Gaussian ensemble gives very similar results.\n\nThe tendency of the KL divergences to become smaller when $P$ and $Q$ are interchanged can be understood easily. Since $P$ is calculated from a single map, it tends to fluctuate more than $Q$, which is the mean of all ensemble distributions. Therefore, when $P$ appears only inside the logarithm, as is the case in the reversed $K(Q\\|P)$, the fluctuations are suppressed relative to the normal $K(P\\|Q)$. We see here that the SMICA map not only shows the expected qualitative behavior upon interchanging $P$ and $Q$, it is also quantitatively shifted by the expected amount. While the KL divergence in general is not symmetric under the interchange of $P$ and $Q$, we conclude that when testing the one-dimensional temperature distribution of the CMB on large scales, reversing the two makes little difference.\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nWe have discussed the applicability of the Kullback--Leibler divergence for the assessment of departures from Gaussianity of CMB temperature\nmaps on large scales. 
We have illustrated this on the SMICA map, comparing it to both the set of FFP7 simulations and a set of $10^3$ Gaussian draws.\nWe have shown that it is consistent with each of these reference sets to a level of about 6\\%. We have used the KL divergence to compare the FFP7 and Gaussian reference sets and have shown that they are in good agreement. This suggests that the additional instrumental effects and foreground residuals included in the FFP7 simulations are unimportant on the scales considered here. Since the KL divergence is not symmetric in $P$ and $Q$, we have performed tests to demonstrate that their interchange has little effect on these conclusions. Finally, we have repeated these calculations using the Kolmogorov--Smirnov test. The resulting\n$p$-value of about 3\\% suggests that the differences between the two tests are not large. We note that there is no guarantee that these tests will always\ngive similar results. For example, the KL divergence is likely to be far more sensitive than the KS test for situations where there are large relative differences in the small amplitude tails of the distributions. We have also repeated all the tests on the CMB data of the 2015 release from Planck, which recently became publicly available.\\footnote{See the Planck Legacy Archive \\url{http:\/\/pla.esac.esa.int\/pla\/}.} The results on the 2015 data set are in very good agreement with those reported here.\n\nThe difficulty in devising tests for the assessment of non-Gaussianity of the temperature and polarization maps of the CMB lies in our ignorance of\nthe nature of the non-Gaussian residuals from foregrounds and systematic effects that could propagate to the maps. In such circumstances, it seems advisable to adopt a procedure that uses as much information in the maps as possible. With its connection to the intrinsic and cross-entropy of the distributions $P$ and $Q$,\nthe KL divergence would appear to be the natural choice. Given the correlations between the pixels of the CMB, a consequence of a random draw in harmonic space, this is not necessarily the case. However, the non-locality of the KL divergence and its sensitivity to the tails of the distributions still suggest that it is a valuable complement to the KS test and might be a useful alternative. Indeed, one should utilize a variety of methods and tests to identify possible contamination of the cosmological product. Obviously, \\emph{any\\\/} suggestion of an anomalous result would indicate the need for more sophisticated analyses to assess the quality of the CMB maps.\n\n\n\\acknowledgments\n\nWe would like to thank P. Naselsky for his enthusiastic support and encouragement of the work reported in this paper and for many valuable scientific discussions.\n\nThis work is based on observations obtained with Planck,\\footnote{\\url{http:\/\/www.esa.int\/Planck}} an ESA\nscience mission with instruments and contributions directly funded by ESA\nMember States, NASA, and Canada. The development of Planck has been\nsupported by: ESA; CNES and CNRS \/ INSU-IN2P3-INP (France); ASI, CNR, and\nINAF (Italy); NASA and DoE (USA); STFC and UKSA (UK); CSIC, MICINN and JA\n(Spain); Tekes, AoF and CSC (Finland); DLR and MPG (Germany); CSA (Canada);\nDTU Space (Denmark); SER\/SSO (Switzerland); RCN (Norway); SFI (Ireland);\nFCT\/MCTES (Portugal); and PRACE (EU). 
A description of the Planck Collaboration and a list of its members,\nincluding the technical or scientific activities in which they have been\ninvolved, can be found at the Planck web page.\\footnote{\\url{http:\/\/www.cosmos.esa.int\/web\/planck\/planck-collaboration}}\n\nWe acknowledge the use of the NASA Legacy Archive for Microwave Background Data Analysis (LAMBDA). Our data analysis made use of the GLESP package\\footnote{\\url{http:\/\/www.glesp.nbi.dk\/}}~\\cite{Glesp}, and of \\textsc{HEALPix}~\\cite{HEALPix}. This work is supported in part by Danmarks Grundforskningsfond which allowed the establishment of the Danish Discovery Center, FNU grants 272-06-0417, 272-07-0528 and 21-04-0355, the National Natural Science Foundation of China (Grant No.\\ 11033003), the National Natural Science Foundation for Young Scientists of China (Grant No.\\ 11203024) and the Youth Innovation Promotion Association, CAS.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAs part of the proof of his eponymous theorem~\\cite{Szemeredi75} on arithmetic progressions in dense sets of integers, Szemer\\'edi developed (a variant of what is now known as) the graph {\\em regularity lemma}~\\cite{Szemeredi78}.\nThe lemma roughly states that the vertex set of every graph can be partitioned into a bounded number of parts such that\nalmost all the bipartite graphs induced by pairs of parts in the partition are quasi-random.\nIn the past four decades this lemma has become one of the (if not the) most powerful tools in extremal combinatorics, with applications in many other areas of mathematics. We refer the reader to~\\cite{KomlosShSiSz02,RodlSc10}\nfor more background on the graph regularity lemma, its many variants and its numerous applications.\n\nPerhaps the most important and well-known application of the graph regularity lemma is the original\nproof of the {\\em triangle removal lemma}, which states that if an $n$-vertex graph $G$ contains only $o(n^3)$ triangles, then one can turn $G$ into a triangle-free graph by removing only $o(n^2)$ edges (see \\cite{ConlonFox13} for more details).\nIt was famously observed by Ruzsa and Szemer\\'edi~\\cite{RuzsaSz76} that the triangle removal lemma implies Roth's theorem~\\cite{Roth54}, the special case of Szemer\\'edi's theorem for $3$-term arithmetic progressions.\nThe problem of extending the triangle removal lemma to the hypergraph\nsetting was raised by Erd\\H{o}s, Frankl and R\\\"odl~\\cite{ErdosFrRo86}. One of the main motivations for obtaining such a result was the observation of Frankl and R\\\"odl~\\cite{FrankRo02} (see also~\\cite{Solymosi04}) that such a result would allow one to extend the Ruzsa--Szemer\\'edi~\\cite{RuzsaSz76} argument and thus obtain an alternative proof of Szemer\\'edi's theorem for progressions of arbitrary length.\n\nThe quest for a hypergraph regularity lemma, which would allow one to prove a hypergraph removal lemma, took about 20 years.\nThe first milestone was the result of Frankl and R\\\"odl~\\cite{FrankRo02}, who obtained a regularity lemma for $3$-uniform hypergraphs. 
About 10 years later, the approach of~\\cite{FrankRo02} was extended to hypergraphs of arbitrary uniformity by R\\\"odl, Skokan, Nagle and Schacht~\\cite{NagleRoSc06, RodlSk04}.\nAt the same time, Gowers~\\cite{Gowers07} obtained an alternative version of the regularity lemma for $k$-uniform hypergraphs (from now on we will use $k$-graphs instead of $k$-uniform hypergraphs).\nShortly after, Tao~\\cite{Tao06} and R\\\"odl and Schacht~\\cite{RodlSc07,RodlSc07-B} obtained two more versions of the lemma.\n\nAs it turned out, the main difficulty with obtaining a regularity lemma for $k$-graphs was defining the correct notion of hypergraph regularity that would:\n$(i)$ be strong enough to allow one to prove a counting lemma, and\n$(ii)$ be weak enough to be satisfied by every hypergraph (see the discussion in~\\cite{Gowers06} for more on this issue).\nAnd indeed, the above-mentioned variants of the hypergraph regularity lemma rely on four different notions of quasi-randomness, which\nto this date are still not known to be equivalent\\footnote{This should be contrasted with the setting of graphs in which\n(almost) all notions of quasi-randomness are not only known to be equivalent but even effectively equivalent. See e.g.~\\cite{ChungGrWi89}.} (see~\\cite{NaglePoRoSc09} for some partial results). What all of these proofs {\\em do} have in common, however,\nis that they supply only Ackermann-type bounds for the size of a regular partition.\\footnote{Another variant of the hypergraph regularity lemma was obtained in~\\cite{ElekSz12}. This approach does not supply any quantitative bounds.} \nMore precisely, if we let $\\Ack_1(x)=2^x$ and then define $\\Ack_k(x)$ to be the $x$-times iterated\\footnote{$\\Ack_2(x)$ is thus a tower of exponents of height $x$,\n\t$\\Ack_3(x)$ is the so-called wowzer function, etc.} version of $\\Ack_{k-1}$, then all the above proofs guarantee to produce a regular partition of a $k$-graph whose order can be bounded from above by an $\\Ack_k$-type function.\n\nOne of the most important applications of the $k$-graph regularity lemma was that it gave the first explicit\nbounds for the multidimensional generalization of Szemer\\'edi's theorem, see~\\cite{Gowers07}. The original proof of this result, obtained by Furstenberg and Katznelson~\\cite{FurstenbergKa78}, relied on Ergodic Theory and thus supplied no quantitative bounds at all.\nExamining the reduction between these theorems~\\cite{Solymosi04} reveals that if one could improve the Ackermann-type bounds for the $k$-graph regularity\nlemma, by obtaining (say) $\\Ack_{k_0}$-type upper bounds (for all $k$), then one would obtain the first primitive recursive bounds for the\nmultidimensional generalization of Szemer\\'edi's theorem. Let us note that obtaining such bounds just for\nvan der Waerden's theorem~\\cite{Shelah89} and Szemer\\'edi's theorem~\\cite{Szemeredi75} (which are two special cases) were open problems for many decades until\nthey were finally solved by Shelah~\\cite{Shelah89} and Gowers~\\cite{Gowers01}, respectively.\nFurther applications of the $k$-graph regularity lemma (and the hypergraph removal lemma in particular) are described in~\\cite{RodlNaSkScKo05} and~\\cite{RodlTeScTo06} as well as in R\\\"odl's recent ICM survey~\\cite{Rodl14}.\n\n\nA famous result of Gowers~\\cite{Gowers97} states that the $\\Ack_2$-type upper bounds for graph regularity\nare unavoidable. 
Several improvements~\\cite{FoxLo17},\nvariants~\\cite{ConlonFo12,KaliSh13,MoshkovitzSh18} and simplifications~\\cite{MoshkovitzSh16} of Gowers' lower bound were recently obtained, but no analogous lower bound was derived even for $3$-graph regularity.\nThe numerous applications of the hypergraph regularity lemma naturally lead to the question of whether one can improve upon the Ackermann-type\nbounds mentioned above and obtain primitive recursive bounds\nfor the $k$-graph regularity lemma. Tao~\\cite{Tao06-h} predicted that the answer to this question is negative, in the sense that one cannot obtain better than $\\Ack_k$-type upper bounds for the $k$-graph regularity lemma for every $k \\ge 2$.\nThe main result presented here and in the followup \\cite{MSk} confirms this prediction.\n\n\n\\begin{theo}{\\bf[Main result, informal statement]}\\label{thm:main-informal}\nThe following holds for every $k\\geq 2$: every regularity lemma for $k$-graphs satisfying some\nmild conditions can only guarantee to produce partitions of size bounded by an $\\Ack_k$-type function.\n\\end{theo}\n\nIn this paper we will focus on proving the key ingredient needed for obtaining Theorem~\\ref{thm:main-informal},\nstated as Lemma~\\ref{theo:core} in Subsection~\\ref{subsec:overview}, and on showing how it can be used in order to prove Theorem~\\ref{thm:main-informal}\nfor $k=3$. In a nutshell, the key idea is to use the graph construction given by Lemma~\\ref{theo:core} in order\nto construct a $3$-graph by taking a certain ``product'' of two graphs that are hard for graph regularity, in order to get\na $3$-graph that is hard for $3$-graph regularity. See the discussion following Lemma~\\ref{theo:core} in Subsection~\\ref{subsec:overview}.\nDealing with $k=3$ in this paper will allow us to present all the new ideas needed in order to actually prove Theorem~\\ref{thm:main-informal}\nfor arbitrary $k$, in the slightly friendlier setting of $3$-graphs.\nIn a followup paper~\\cite{MSk}, we will show how Lemma~\\ref{theo:core} can be used in order to prove\nTheorem~\\ref{thm:main-informal} for all $k \\ge 2$.\n\nIn this paper we will also show how to derive from Theorem~\\ref{thm:main-informal} tight lower bounds for\nthe $3$-graph regularity lemmas due to Frankl and R\\\"odl~\\cite{FrankRo02} and to Gowers~\\cite{Gowers06}.\n\n\\begin{coro}\\label{coro:FR-LB}\nThere is an $\\Ack_3$-type lower bound for the $3$-graph regularity lemmas of Frankl and R\\\"odl~\\cite{FrankRo02} and of Gowers~\\cite{Gowers06}.\n\\end{coro}\n\nIn \\cite{MSk} we will show how to derive from Theorem~\\ref{thm:main-informal} a tight lower bound for\nthe $k$-graph regularity lemma due to R\\\"odl and Schacht~\\cite{RodlSc07}.\n\n\\begin{coro}\\label{coro:RS-LB}\nThere is an $\\Ack_k$-type lower bound for the $k$-graph regularity lemma of R\\\"odl and Schacht~\\cite{RodlSc07}.\n\\end{coro}\n\n\nBefore getting into the gory details of the proof, let us informally discuss what we think are some\ninteresting aspects of the proof of Theorem \\ref{thm:main-informal}.\n\n\\paragraph{Why is it hard to ``step up''?}\nThe reason why the upper bound for graph regularity is of tower-type\nis that the process of constructing a regular partition of a graph proceeds by a sequence of steps, each increasing the size of the partition exponentially.\nThe main idea behind Gowers' lower bound for graph regularity~\\cite{Gowers97} is in ``reverse engineering'' the proof of the upper bound; in other words,\nin showing that (in some sense) the process of building 
the partition using a sequence of exponential refinements is unavoidable.\nNow, a common theme in all proofs of the hypergraph regularity lemma\nfor $k$-graphs is that they proceed by induction on $k$; that is, in the process of constructing a regular\npartition of the input $k$-graph $H$, the proof applies the $(k-1)$-graph regularity lemma on certain $(k-1)$-graphs\nderived from $H$. This is why one gets $\\Ack_k$-type upper bounds. So with~\\cite{Gowers97} in mind, one might guess\nthat in order to prove a matching lower bound one should ``reverse engineer'' the proof of the upper bound and show that such a process is unavoidable. However, this turns out to be false! As we argued in~\\cite{MoshkovitzSh18}, in order to prove an {\\em upper bound} for (say) $3$-graph regularity it is in fact enough to iterate a relaxed version of graph regularity which we call the ``sparse regular approximation lemma'' (SRAL for short).\nTherefore, in order to prove an $\\Ack_3$-type {\\em lower bound} for $3$-graph\nregularity one cannot simply ``step up'' an $\\Ack_2$-type lower bound for graph regularity. Indeed, a necessary condition\nwould be to prove an $\\Ack_2$-type lower bound for SRAL. See also the discussion following Lemma~\\ref{theo:core} in Subsection \\ref{subsec:overview}\non how do we actually use a graph construction in order to get a $3$-graph construction.\n\n\n\\paragraph{A new notion of graph\/hypergraph regularity:}\nIn a recent paper \\cite{MoshkovitzSh18} we proved an $\\Ack_2$-type\nlower bound for SRAL.\nAs it turned out, even this lower bound was not enough to allow us to step up the graph lower bound\ninto a $3$-graph lower bound. To remedy this, in the present paper we introduce an even weaker notion of graph\/hypergraph regularity\nwhich we call $\\langle \\d \\rangle$-regularity. This notion seems to be right at the correct level of ``strength'';\non the one hand, it is strong enough to allow one to prove $\\Ack_{k-1}$-type lower bounds for $(k-1)$-graph regularity, while at the\nsame time weak enough to allow one to induct, that is, to use it in order to then prove $\\Ack_{k}$-type lower bounds for $k$-graph regularity.\nAnother critical feature of our new notion of hypergraph regularity is that it has (almost) nothing to do with hypergraphs!\nA disconcerting aspect of all proofs of the hypergraph regularity lemma is that they involve a very complicated nested\/inductive structure.\nFurthermore, one has to introduce an elaborate hierarchy of constants that controls how regular one level of the partition is compared to\nthe previous one. What is thus nice about our new notion is that it involves only various kinds of instances of graph $\\langle \\d \\rangle$-regularity.\nAs a result, our proof is (relatively!) simple.\n\n\\paragraph{How do we find witnesses for $3$-graph irregularity?}\nThe key idea in Gowers' lower bound~\\cite{Gowers97} for graph regularity was in constructing a graph $G$, based on a sequence of\npartitions ${\\cal P}_1,{\\cal P}_2,\\ldots$ of $V(G)$, with the following\ninductive property: if a vertex partition $\\Z$ refines ${\\cal P}_i$\nbut does not refine ${\\cal P}_{i+1}$ then $\\Z$ is not $\\epsilon$-regular.\nThe key step of the proof of~\\cite{Gowers97} is in finding witnesses showing\nthat pairs of clusters of $\\Z$ are irregular. The main difficulty in extending this\nstrategy to $k$-graphs already reveals itself in the setting of $3$-graphs. 
In a nutshell, while\nin graphs, a witness to irregularity of a pair of clusters $A,B \\in \\Z$ is\n{\\em any} pair of large subsets $A' \\sub A$ and $B' \\sub B$, in the setting of $3$-graphs\nwe have to find three large edge-sets (called a {\\em triad}, see Section~\\ref{sec:FR}) that\nhave an additional property: they must together form a graph containing many triangles.\nIt thus seems quite hard to extend Gowers' approach already to the setting of $3$-graphs. By working\nwith the much weaker notion of $\\langle \\d \\rangle$-regularity, we circumvent this\nissue since two of the edge sets in our version of a triad are always complete bipartite graphs.\nSee Subsection~\\ref{subsec:definitions}.\n\n\\paragraph{What, then, is the meaning of Theorem \\ref{thm:main-informal}?}\nOur main result, stated formally as Theorem~\\ref{theo:main}, establishes an $\\Ack_3$-type lower bound\nfor $\\langle \\d \\rangle$-regularity of $3$-graphs, that is, for a specific new\nversion of the hypergraph regularity lemma. Therefore, we immediately get $\\Ack_3$-type lower bounds for\nany $3$-graph regularity lemma which is at least as strong as our new lemma, that is, for any lemma\nwhose requirements\/guarantees imply those that are needed in order to satisfy our new notion of regularity.\nIn particular, we will prove Corollary \\ref{coro:FR-LB} by showing that the regularity notions\nused in these lemmas are at least as strong as $\\langle \\d \\rangle$-regularity.\n\nIn \\cite{MSk} we will prove Theorem~\\ref{thm:main-informal} in its full generality by extending Theorem~\\ref{theo:main}\nto arbitrary $k$-graphs. This proof, though technically more involved, will be\nquite similar at its core to the way we derive Theorem~\\ref{theo:main} from Lemma~\\ref{theo:core} in the present paper.\nThe deduction of Corollary \\ref{coro:RS-LB}, which appears in \\cite{MSk}, will also turn out to be quite similar to the way\nCorollary \\ref{coro:FR-LB} is derived from Theorem~\\ref{theo:main} in the present paper.\n\n\\paragraph{How strong is our lower bound?} Since Theorem \\ref{thm:main-informal} gives a lower bound for\n$\\langle \\d \\rangle$-regularity and Corollaries \\ref{coro:FR-LB} and \\ref{coro:RS-LB} show that\nthis notion is at least as weak as previously used notions of regularity, it is natural to ask:\n$(i)$ is this notion equivalent to one of the other notions? $(ii)$ is this notion strong enough for proving the\nhypergraph removal lemma, which was one of the main reasons for developing\nthe hypergraph regularity lemma? We will prove that the answer to both questions is {\\em negative} by showing that already for\ngraphs, $\\langle \\d \\rangle$-regularity (for $\\d$ a fixed constant) is not strong enough even for proving the triangle removal lemma.\nThis of course makes our lower bound even stronger as it already applies to a very weak notion of regularity.\nRoughly speaking, the proof proceeds by first taking a random tripartite graph, showing (using routine probabilistic\narguments) that with high probability the graph is $\\langle \\d \\rangle$-regular yet contains a small number of triangles.\nOne then shows that removing these triangles, and then taking a blowup of the resulting graph, gives a triangle-free graph of positive density that is $\\langle \\d \\rangle$-regular. 
The full details will appear in \\cite{MSk}.\n\n\n\\paragraph{How tight is our bound?} Roughly speaking, we will show that for a $k$-graph with $pn^k$ edges,\nevery $\\langle \\d \\rangle$-regular partition has order at least $\\Ack_k(\\log 1\/p)$. In a recent paper \\cite{MoshkovitzSh16}\nwe proved that in graphs, one can prove a matching $\\Ack_2(\\log 1\/p)$ upper bound, even for a slightly stronger notion than $\\langle \\d \\rangle$-regularity.\nThis allowed us to obtain a new proof of Fox's $\\Ack_2(\\log 1\/\\epsilon)$ upper bound for the graph removal lemma \\cite{Fox11} (since the stronger notion allows to count small subgraphs). We believe that it should\nbe possible to match our lower bounds with $\\Ack_k(\\log 1\/p)$ upper bounds (even for a slightly stronger notion analogous to the one used in \\cite{MoshkovitzSh16}). We think that it should be possible to deduce from such an upper bound an $\\Ack_k(\\log 1\/\\epsilon)$ upper bound\nfor the $k$-graph removal lemma. The best known bounds for this problem are (at least) $\\Ack_k(\\poly(1\/\\epsilon))$.\n\n\n\n\n\\subsection{Paper overview}\n\nIn Section~\\ref{sec:define} we will first define the new notion of hypergraph regularity, which we term $\\langle \\d \\rangle$-regularity,\nfor which we will prove our main lower bound. We will then give the formal version of Theorem~\\ref{thm:main-informal} (see Theorem~\\ref{theo:main}).\nThis will be followed by the statement of our core technical result, Lemma~\\ref{theo:core},\nand an overview of how this technical result is used in the proof of Theorem~\\ref{theo:main}.\nThe proof of Theorem~\\ref{theo:main} appears in Section~\\ref{sec:LB}. \nWe refer the reader to~\\cite{MSk} for the proof of Lemma~\\ref{theo:core}.\nIn Section \\ref{sec:FR} we prove Corollary~\\ref{coro:FR-LB}.\nIn Appendix~\\ref{sec:FR-appendix} we give the proof of certain technical claims missing from Section~\\ref{sec:FR}.\n\n\n\n\n\n\n\\section{$\\langle \\d \\rangle$-regularity and Proof Overview}\\label{sec:define}\n\n\nFormally, a \\emph{$3$-graph} is a pair $H=(V,E)$, where $V=V(H)$ is the vertex set and $E=E(H) \\sub \\binom{V}{3}$ is the edge set of $H$.\nThe number of edges of $H$ is denoted $e(H)$ (i.e., $e(H)=|E|$).\nThe $3$-graph $H$ is \\emph{$3$-partite} on (disjoint) vertex classes $(V_1,V_2,V_3)$ if every edge of $H$ has a vertex from each $V_i$.\nThe \\emph{density} of a $3$-partite $3$-graph $H$ is $e(H)\/\\prod_{i=1}^3 |V_i|$.\nFor a bipartite graph $G$, the set of edges of $G$ between disjoint vertex subsets $A$ and $B$ is denoted by $E_G(A,B)$; the density of $G$ between $A$ and $B$ is denoted by $d_G(A,B)=e_G(A,B)\/|A||B|$, where $e_G(A,B)=|E_G(A,B)|$. We use $d(A,B)$ if $G$ is clear from context.\nWhen it is clear from context, we sometimes identify a hypergraph with its edge set. In particular, we will write $V_1 \\times V_2$ for the complete bipartite graph on vertex classes $(V_1,V_2)$.\nFor partitions $\\P,\\Q$ of the same underlying set, we say that $\\Q$ \\emph{refines} $\\P$, denoted $\\Q \\prec \\P$, if every member of $\\Q$ is contained in a member of $\\P$.\nWe say that $\\P$ is \\emph{equitable} if all its members have the same size.\\footnote{In a regularity lemma one allows the parts to differ in size by at most $1$ so that it applies to all (hyper-)graphs. 
For our lower bound this is unnecessary.}\nWe use the notation $x \\pm \\e$ for a number lying in the interval $[x-\\e,\\,x+\\e]$.\n\nIn the following definition, and in the rest of the paper, we will sometimes identify a graph or a $3$-graph with its edge set when the vertex set is clear from context.\n\n\\begin{definition}[$2$-partition]\\label{def:2-partition}\nA \\emph{$2$-partition} $(\\Z,\\E)$ on a vertex set $V$ consists of a partition $\\Z$ of $V$ and a family of edge disjoint bipartite graphs\n$\\E$ so that:\n\\begin{itemize}\n\\item Every $E \\in \\E$ is a bipartite graph\nwhose two vertex sets are distinct $Z,Z' \\in \\Z$.\n\\item For every $Z \\neq Z' \\in \\Z$, the complete bipartite graph $Z \\times Z'$ is the union of graphs from $\\E$.\n\\end{itemize}\n\n\n\n\n\n\\end{definition}\n\nPut differently, a $2$-partition consists of vertex partition $\\Z$ and a collection of bipartite graphs $\\E$ such that\n$\\E$ is a refinement of the collection of complete bipartite graphs $\\{Z \\times Z' : Z \\neq Z' \\in \\Z \\}$.\n\n\n\\subsection{$\\langle \\d \\rangle$-regularity of graphs and hypergraphs}\\label{subsec:definitions}\n\nIn this subsection we define our new\\footnote{For $k=3$, related notions of regularity were studied in~\\cite{ReiherRoSc16,Towsner17}.} notion of $\\langle\\d\\rangle$-regularity, first for graphs and then for $3$-graphs in Definition~\\ref{def:k-reg} below.\nLet us first recall Szemer\\'edi's notion of $\\epsilon$-regularity.\nA bipartite graph on $(A,B)$ is \\emph{$\\e$-regular} if for all subsets $A' \\sub A$, $B' \\sub B$ with $|A'|\\ge\\e|A|$, $|B'|\\ge\\e|B|$ we have $|d(A',B') -d(A,B)| \\le \\e$.\nA vertex partition $\\P$ of a graph is $\\e$-regular\nif\nthe bipartite graph induced on each but at most $\\e|\\P|^2$ of the pairs $(A,B)$ with $A \\neq B \\in \\P$ is $\\e$-regular.\nSzemer\\'edi's graph regularity lemma says that every graph\nhas an $\\e$-regular equipartition of order at most some $\\Ack_2(\\poly(1\/\\e))$.\nWe now introduce a weaker notion of graph regularity which we will use throughout the paper.\n\\begin{definition}[graph $\\langle\\d\\rangle$-regularity]\\label{def:star-regular}\n\tA bipartite graph $G$ on $(A,B)$ is \\emph{$\\langle \\d \\rangle$-regular} if for all subsets $A' \\sub A$, $B' \\sub B$ with $|A'| \\ge \\d|A|$, $|B'|\\ge\\d|B|$ we have $d_G(A',B') \\ge \\frac12 d_G(A,B)$.\\\\\n\tA vertex partition $\\P$ of a graph $G$ is \\emph{$\\langle \\d \\rangle$-regular}\n\n\tif one can add\/remove at most $\\d \\cdot e(G)$ edges so that the bipartite graph induced on each $(A,B)$ with $A \\neq B \\in \\P$ is $\\langle \\d \\rangle$-regular.\n\n\\end{definition}\n\nFor the reader worried that in Definition~\\ref{def:star-regular} we merely replaced the $\\e$\nfrom the definition of $\\e$-regularity with $\\d$, we refer to the discussion following Theorem~\\ref{theo:main} below.\n\n\n\n\n\nThe definition of $\\langle\\d\\rangle$-regularity for hypergraphs involves the $\\langle\\d\\rangle$-regularity notion for graphs, applied to certain auxiliary graphs which are defined as follows.\n\n\n\n\n\\begin{definition}[The auxiliary graph $G_{H}^i$]\\label{def:aux}\nFor a $3$-partite $3$-graph $H$ on vertex classes $(V_1,V_2,V_3)$,\nwe define a bipartite graph $G_{H}^1$ on the vertex classes $(V_2 \\times V_3,\\,V_1)$ by\n$$E(G_{H}^1) = \\big\\{ ((v_2,v_3),v_1) \\,\\big\\vert\\, (v_1,v_2,v_3) \\in E(H) \\big\\} \\;.$$\nThe graphs $G_{H}^2$ and $G_{H}^3$ are defined in an analogous manner.\n\\end{definition}\n\nImportantly, for a 
$2$-partition (as defined in Definition~\\ref{def:2-partition}) to be $\\langle\\d\\rangle$-regular it must first satisfy a requirement on the regularity of its parts.\n\n\n\\begin{definition}[$\\langle\\d\\rangle$-good partition]\\label{def:k-good}\nA $2$-partition $(\\Z,\\E)$ on $V$ is \\emph{$\\langle\\d\\rangle$-good} if all bipartite graphs in $\\E$\n(between any two distinct vertex clusters of $\\Z$) are $\\langle \\d \\rangle$-regular.\n\\end{definition}\n\n\n\n\n\n\nFor a $2$-partition $(\\Z,\\E)$ of a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$ with $\\Z \\prec \\{V_1,V_2,V_3\\}$, for every $1 \\le i \\le 3$ we denote $\\Z_i = \\{Z \\in \\Z \\,\\vert\\, Z \\sub V_i\\}$, and we denote $\\E_i = \\{E \\in \\E \\,\\vert\\, E \\sub V_j \\times V_k\\}$ where $\\{i,j,k\\}=\\{1,2,3\\}$.\nSo for example, $\\E_1$ is thus a partition of $V_2 \\times V_3$.\n\n\\begin{definition}[$\\langle\\d\\rangle$-regular partition]\\label{def:k-reg}\nLet $H$ be a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$\nand $(\\Z,\\E)$ be a $\\langle \\d \\rangle$-good $2$-partition with $\\Z \\prec \\{V_1,V_2,V_3\\}$.\nWe say that $(\\Z,\\E)$ is a \\emph{$\\langle \\d \\rangle$-regular} partition of $H$ if\nfor every $1 \\le i \\le 3$,\n$\\E_i \\cup \\Z_i$ is a $\\langle \\d \\rangle$-regular partition of $G_H^i$.\n\n\n\n\n\n\\end{definition}\n\n\n\n\n\n\n\n\n\n\t\t\n\t\t\n\n\n\\subsection{Formal statement of the main result}\n\nWe are now ready to formally state our tight lower bound for $3$-graph $\\langle \\d \\rangle$-regularity (the formal version of\nTheorem~\\ref{thm:main-informal} above for $k=3$). Recall that we define the {\\em tower} functions $\\twr(x)$ to be a tower of exponents of height $x$,\nand then define the {\\em wowzer} function $\\wow(x)$ to be the $x$-times iterated tower function, that is\n$\\wow(x)= \\underbrace{\\twr(\\twr(\\cdots(\\twr(1))\\cdots))}_{x \\text{ times}}$.\n\n\\begin{theo}[Main result]\\label{theo:main}\nFor every $s \\in \\N$ there is a $3$-partite $3$-graph $H$ on vertex classes of equal size and of density at least $2^{-s}$,\nand a partition $\\V_0$ of $V(H)$ with $|\\V_0| \\le 2^{300}$, such that if\n$(\\Z,\\E)$ is a $\\langle 2^{-73} \\rangle$-regular partition of $H$ with\n$\\Z \\prec \\V_0$ then $|\\Z| \\ge \\wow(s)$.\n\n\n\\end{theo}\n\nLet us draw the reader's attention to an important and perhaps surprising aspect of Theorem~\\ref{theo:main}.\nAll the known tower-type lower bounds for graph regularity depend on the error parameter $\\epsilon$,\nthat is, they show the existence of graphs $G$ with the property that every $\\epsilon$-regular partition of $G$ is of order at least $\\Ack_2(\\poly(1\/\\e))$.\nThis should be contrasted with the fact that our lower bounds for $\\langle \\d \\rangle$-regularity holds for a {\\em fixed} error parameter $\\delta$.\nIndeed, instead of the dependence on the error parameter, our lower bound depends on the {\\em density} of the graph.\nThis delicate difference makes it possible for us to prove Theorem~\\ref{theo:main} by iterating the construction described in the next subsection.\n\n\n\\subsection{The core construction and proof overview}\\label{subsec:overview}\n\nThe graph construction in Lemma~\\ref{theo:core} below is the main technical result we will need in order to prove Theorem~\\ref{theo:main}.\nWe will first need to define ``approximate'' refinement (a notion that goes back to Gowers~\\cite{Gowers97}).\n\\begin{definition}[Approximate refinements]\nFor sets $S,T$ we write $S \\sub_\\b T$ if $|S 
\\sm T| < \\b|S|$.\nFor a partition $\\P$ we write $S \\in_\\b \\P$ if $S \\sub_\\b P$ for some $P \\in \\P$.\nFor partitions $\\P,\\Q$ of the same set of size $n$ we write $\\Q \\prec_\\b \\P$ if\n$$\\sum_{\\substack{Q \\in \\Q\\colon\\\\Q \\notin_\\b \\P}} |Q| \\le \\b n \\;.$$\n\\end{definition}\nNote that for $\\Q$ equitable, $\\Q \\prec_\\b \\P$ if and only if\nall but at most $\\b|\\Q|$ parts $Q \\in \\Q$ satisfy $Q \\in_\\b \\P$.\nWe note that throughout the paper we will only use approximate refinements with $\\b \\le 1\/2$, and so if $S \\in_\\b \\P$ then $S \\sub_\\b P$ for a unique $P \\in \\P$.\n\nWe stress that in Lemma~\\ref{theo:core} below we only use notions related to graphs. In particular, $\\langle \\d \\rangle$-regularity refers to Definition~\\ref{def:star-regular}.\n\n\\begin{lemma}[Core construction]\\label{theo:core}\nLet $\\Lside$ and $\\Rside$ be disjoint sets. Let\n$\\L_1 \\succ \\cdots \\succ \\L_s$ and $\\R_1 \\succ \\cdots \\succ \\R_s$ be two sequences of $s$ successively refined equipartitions of $\\Lside$ and $\\Rside$, respectively,\nthat satisfy for every $i \\ge 1$ that:\n\\begin{enumerate}\n\\item\\label{item:core-minR}\n$|\\R_i|$ is a power of $2$ and $|\\R_1| \\ge 2^{200}$,\n\\item\\label{item:core-expR} $|\\R_{i+1}| \\ge 4|\\R_i|$ if $i < s$,\n\\item\\label{item:core-expL} $|\\L_i| = 2^{|\\R_i|\/2^{i+10}}$.\n\\end{enumerate}\nThen there exists a sequence of $s$ successively refined edge equipartitions $\\G_1 \\succ \\cdots \\succ \\G_s$ of $\\Lside \\times \\Rside$ such that for every $1 \\le j \\le s$, $|\\G_j|=2^j$, and the following holds for every $G \\in \\G_j$ and $\\d \\le 2^{-20}$.\nFor every $\\langle \\d \\rangle$-regular partition $\\P \\cup \\Q$ of $G$, where $\\P$ and $\\Q$ are partitions of $\\Lside$ and $\\Rside$, respectively, and every $1 \\le i \\le j$, if $\\Q \\prec_{2^{-9}} \\R_{i}$ then $\\P \\prec_{\\g} \\L_{i}$ with $\\g = \\max\\{2^{5}\\sqrt{\\d},\\, 32\/\\sqrt[6]{|\\R_1|} \\}$.\n\\end{lemma}\n\\begin{remark}\\label{remequi}\nEvery $G \\in \\G_j$ is a bipartite graph of density $2^{-j}$ since $\\G_j$ is equitable.\n\\end{remark}\n\nAs mentioned before, the proof of Lemma~\\ref{theo:core} appears in~\\cite{MSk}.\nLet us end this section by explaining the role Lemma~\\ref{theo:core} plays in the proof of Theorem \\ref{theo:main}.\n\n\\paragraph{Using graphs to construct $3$-graphs:}\nPerhaps the most surprising aspect of the proof of Theorem~\\ref{theo:main} is that in order to construct a $3$-graph we also use the graph construction of Lemma~\\ref{theo:core} in a somewhat unexpected way.\nIn this case, $\\Lside$ will be a complete bipartite graph and the $\\L_i$'s will be partitions of this complete bipartite graph themselves given by another application of Lemma~\\ref{theo:core}.\nThe partitions will be of wowzer-type growth, and the second application of Lemma~\\ref{theo:core} will ``multiply'' the graph partitions (given by the $\\L_i$'s) to\ngive a partition of the complete $3$-partite $3$-graph into $3$-graphs that are hard for $\\langle \\d \\rangle$-regularity.\nWe will take $H$ in Theorem~\\ref{theo:main} to be an arbitrary $3$-graph in this partition.\n\n\n\n\n\n\\paragraph{Why is Lemma~\\ref{theo:core} one-sided?}\nAs is evident from the statement of Lemma~\\ref{theo:core}, it is one-sided in nature; that is, under the premise that the partition $\\Q$\nrefines $\\R_i$ we may conclude that $\\P$ refines $\\L_i$.\nIt is natural to ask if one can do away with this assumption, that is, be able to show that\nunder the 
same assumptions $\\Q$ refines $\\R_i$ and $\\P$ refines $\\L_i$.\nAs we mentioned in the previous item, in order to prove a wowzer-type lower bound for $3$-graph\nregularity we have to apply Lemma~\\ref{theo:core} with a sequence of partitions that grows as a wowzer-type function.\nNow, in this setting, Lemma~\\ref{theo:core} does not hold without the one-sided assumption, because if it did, then\none would have been able to prove a wowzer-type lower bound for graph $\\langle \\d \\rangle$-regularity, and hence also for Szemer\\'edi's regularity lemma.\nPut differently, if one wishes to have a construction that holds with arbitrarily fast growing partition sizes, then\none has to introduce the one-sided assumption.\n\n\\paragraph{How do we remove the one-sided assumption?}\nThe proof of Theorem \\ref{theo:main} proceeds by first proving a one-sided version of Theorem \\ref{theo:main},\nstated as Lemma~\\ref{lemma:ind-k}. In order to get a construction that does not require such a one-sided assumption,\nwe will need one final trick; we will take $6$ clusters of\nvertices and arrange $6$ copies\nof this one-sided construction\nalong the $3$-edges of a cycle. This will give us a ``circle of implications'' that will eliminate the one-sided assumption.\nSee Subsection \\ref{subsec:pasting}.\n\n\n\n\n\n\n\n\\section{Proof of Theorem~\\ref{theo:main}}\\label{sec:LB}\n\n\n\n\\renewcommand{\\k}{r}\n\n\\renewcommand{\\t}{t}\n\\newcommand{\\w}{w}\n\n\\newcommand{\\GG}{\\mathbf{G}}\n\\newcommand{\\FF}{\\mathbf{F}}\n\\newcommand{\\VV}{\\mathbf{V}}\n\n\n\\renewcommand{\\Hy}[1]{H_{{#1}}}\n\\renewcommand{\\A}{A}\n\n\\newcommand{\\subs}{\\subset_*}\n\\newcommand{\\pad}{P}\n\\renewcommand{\\K}{\\mathcal{K}}\n\\newcommand{\\U}{U}\n\n\n\\renewcommand{\\k}{k}\n\n\\renewcommand{\\K}{\\mathcal{K}}\n\n\\renewcommand{\\r}{k}\n\n\nThe purpose of this section is to prove the main result, Theorem~\\ref{theo:main}.\nThis section is self-contained save for the application of Lemma~\\ref{theo:core}.\nThe key step of the proof, stated as Lemma~\\ref{lemma:ind-k} and proved in Subsection \\ref{subsec:key}, relies on a subtle construction that uses Lemma~\\ref{theo:core} twice. This lemma only gives a ``one-sided'' lower bound for $3$-graph regularity, in the spirit of Lemma~\\ref{theo:core}.\nIn Subsection~\\ref{subsec:pasting} we show how to use Lemma~\\ref{lemma:ind-k} in order to complete the proof of Theorem~\\ref{theo:main}.\n\nWe first observe a simple yet crucial property of $2$-partitions, stated as Claim~\\ref{claim:uniform-refinement} below, which we will need later.\nThis property relates $\\d$-refinements of partitions and $\\langle \\d \\rangle$-regularity of partitions,\nand relies\non Claim~\\ref{claim:refinement-union}. Here, as well as in the rest of this section, we will use\nthe definitions and notations introduced in Section \\ref{sec:define}.\nIn particular, recall that if a vertex partition $\\Z$ of vertex classes $(V_1,V_2,V_3)$ satisfies $\\Z \\prec \\{V_1,V_2,V_3\\}$,\nthen for every $1 \\le i \\le 3$ we denote $\\Z_i = \\{Z \\in \\Z \\,\\vert\\, Z \\sub V_i\\}$.\nMoreover, if a $2$-partition $(\\Z,\\E)$, satisfies $\\Z \\prec \\{V_1,V_2,V_3\\}$ we denote $\\E_i = \\{E \\in \\E \\,\\vert\\, E \\sub V_j \\times V_k\\}$ where $\\{i,j,k\\}=\\{1,2,3\\}$. 
We will first need the following easy claim regarding the union of $\\langle \\d\\rangle$-regular graphs.\n\n\n\\begin{claim}\\label{claim:star-union\n\tLet $G_1,\\ldots,G_\\ell$ be mutually edge-disjoint bipartite graphs on the same vertex classes $(Z,Z')$.\n\tIf every $G_i$ is $\\langle \\d \\rangle$-regular then $G=\\bigcup_{i=1}^\\ell G_i$ is also $\\langle \\d \\rangle$-regular.\n\\end{claim}\n\\begin{proof\n\n\tLet $S \\sub Z$, $S' \\sub Z'$ with $|S| \\ge \\d|Z|$, $|S'| \\ge \\d|Z'|$.\n\tThen\n\t$$d_G(S,S') = \\frac{e_G(S,S')}{|S||S'|}\n\t= \\sum_{i=1}^\\ell \\frac{e_{G_i}(S,S')}{|S||S'|}\n\t= \\sum_{i=1}^\\ell d_{G_i}(S,S')\n\t\\ge \\sum_{i=1}^\\ell \\frac12 d_{G_i}(Z,Z')\n\t= \\frac12 d_{G}(Z,Z') \\;,$$\n\twhere the second and last equalities follow from the mutual disjointness of the $G_i$, and the inequality follows from the $\\langle \\d \\rangle$-regularity of each $G_i$.\n\tThus, $G$ is $\\langle \\d \\rangle$-regular, as claimed.\n\\end{proof}\n\nWe use the following claim regarding approximate refinements.\n\n\\begin{claim}\\label{claim:refinement-union}\n\tIf $\\Q \\prec_\\d \\P$ then there exist $P \\in \\P$ and $Q$ that is a union of members of $\\Q$ such that $|P \\triangle Q| \\le 3\\d|P|$.\n\\end{claim}\n\\begin{proof}\n\tFor each $P\\in \\P$ let $\\Q(P) = \\{Q \\in \\Q \\colon Q \\sub_\\d P\\}$,\n\tand denote $P_\\Q = \\bigcup_{Q \\in \\Q(P)} Q$.\n\tWe have\n\t\\begin{align*}\n\t\\sum_{P \\in \\P} |P \\triangle P_\\Q|\n\t&= \\sum_{P \\in \\P} |P_\\Q \\sm P| + \\sum_{P \\in \\P} |P \\sm P_\\Q|\n\t= \\sum_{P \\in \\P} \\sum_{\\substack{Q \\in \\Q \\colon\\\\Q \\sub_\\d P}} |Q \\sm P|\n\t+ \\sum_{P \\in \\P} \\sum_{\\substack{Q \\in \\Q \\colon\\\\Q \\nsubseteq_\\d P}} |Q \\cap P| \\\\\n\t&\\le \\sum_{P \\in \\P} \\sum_{\\substack{Q \\in \\Q \\colon\\\\Q \\sub_\\d P}} \\d|Q|\n\t+ \\Big( \\sum_{\\substack{Q \\in \\Q\\colon\\\\Q \\notin_\\d \\P}} |Q|\n\t+ \\sum_{\\substack{Q \\in \\Q \\colon\\\\Q \\in_\\d \\P}} \\d|Q| \\Big)\n\t\\le 3\\d\\sum_{Q \\in \\Q} |Q|\n\t= 3\\d\\sum_{P \\in \\P} |P| \\;,\n\t\\end{align*}\n\twhere the last inequality uses the statement's assumption $\\Q \\prec_\\d \\P$ to bound the middle summand.\n\tBy averaging, there exists $P \\in \\P$ such that $|P \\triangle P_\\Q| \\le 3\\d|P|$, thus completing the proof.\n\\end{proof}\n\nThe property of $2$-partitions that we need is as follows.\n\\begin{claim}\\label{claim:uniform-refinement}\n\tLet $\\P=(\\Z,\\E)$ be a $2$-partition with $\\Z \\prec \\{V_1,V_2,V_3\\}$,\n\tand let $\\G$ be a partition of $V_1\\times V_2$ with $\\E_3 \\prec_\\d \\G$.\n\tIf $(\\Z,\\E)$ is $\\langle \\d \\rangle$-good\n\tthen $\\Z_1 \\cup \\Z_2$ is a $\\langle 3\\d \\rangle$-regular partition of some $G \\in \\G$.\n\\end{claim}\n\n\\begin{proof}\n\tPut $\\E=\\E_3$.\n\tBy Claim~\\ref{claim:refinement-union}, since $\\E \\prec_\\d \\G$ there exist $G \\in \\G$ (a bipartite graph on $(V_1,V_2)$) and $G_\\E$ that is a union of members of $\\E$ (and thus also a bipartite graph on $(V_1,V_2)$) such that $|G \\triangle G_\\E| \\le 3\\d|G|$.\n\n\tLetting $Z_1 \\in \\Z_1$, $Z_2 \\in \\Z_2$, to complete the proof it suffices to show that the induced bipartite graph $G_\\E[Z_1,Z_2]$ is $\\langle \\d \\rangle$-regular (recall Definition~\\ref{def:star-regular}).\n\tBy Definition~\\ref{def:2-partition}, $G_\\E[Z_1,Z_2]$ is a union of bipartite graphs from $\\E$ on $(Z_1,Z_2)$.\n\tSince every graph in $\\E$ is $\\langle \\d \\rangle$-regular by the statement's assumption that $(\\Z,\\E)$ is $\\langle \\d \\rangle$-good (recall 
Definition~\\ref{def:k-good}), we have that $G_\\E[Z_1,Z_2]$ is a union of $\\langle \\d \\rangle$-regular bipartite graphs on $(Z_1,Z_2)$.\n\tBy Claim~\\ref{claim:star-union}, $G_\\E[Z_1,Z_2]$ is $\\langle \\d \\rangle$-regular as well, thus completing the proof.\n\\end{proof}\n\nWe will later need the following easy (but slightly tedious to state) claim.\n\\begin{claim}\\label{claim:restriction}\n\tLet $H$ be a $3$-partite $3$-graph on vertex classes $(V_1,V_2,V_3)$, and let $H'$ be the induced $3$-partite $3$-graph on vertex classes $(V_1',V_2',V_3')$ with $V_i' \\sub V_i$ and $\\a \\cdot e(H)$ edges.\n\tIf $(\\Z,\\E)$ is a $\\langle \\d \\rangle$-regular partition of $H$ with\n\t$\\Z \\prec \\bigcup_{i=1}^3 \\{V_i,\\,V_i \\sm V_i'\\}$\n\n\tthen its restriction $(\\Z',\\E')$ to $V(H')$ is a $\\langle \\d\/\\a \\rangle$-regular partition of $H'$.\n\\end{claim}\n\\begin{proof}\n\tRecall Definition~\\ref{def:k-reg}.\n\tClearly, $(\\Z',\\E')$ is $\\langle \\d \\rangle$-good. We will show that $\\E'_1 \\cup \\Z'_1$ is a $\\langle \\d\/\\a \\rangle$-regular partition of $G^1_{H'}$.\n\tThe argument for $G^2_{H'}$, $G^3_{H'}$ will be analogous, hence the proof would follow.\n\tObserve that $G^1_{H'}$ is an induced subgraph of $G^1_{H}$, namely, $G^1_{H'} = G^i_{H}[V_2' \\times V_3',\\, V_1']$.\n\tBy assumption, $e(H') = \\a e(H)$, and thus $e(G^1_{H'}) = \\a e(G^1_{H})$.\n\tBy the statement's assumption on $\\Z$ and since $\\E_1 \\cup \\Z_1$ is a $\\langle \\d \\rangle$-regular partition of $G^1_{H}$, we deduce---by adding\/removing at most $\\d e(G^1_{H}) = (\\d\/\\a)e(G^1_{H'})$ edges of $G^1_{H'}$---that $\\E'_1 \\cup \\Z'_1$ is a $\\langle \\d\/\\a \\rangle$-regular partition of $G_{H'}^1$.\n\tAs explained above, this completes the proof.\n\\end{proof}\n\n\nFinally, we will need the following claim regarding approximate refinements.\n\\begin{claim}\\label{claim:refinement-size}\n\tIf $\\Q \\prec_{1\/2} \\P$ and $\\P$ is equitable then $|\\Q| \\ge \\frac14|\\P|$.\n\\end{claim}\n\\begin{proof}\n\tWe claim that the underlying set $U$ has a subset $U^*$ of size $|U^*|\\ge \\frac14|U|$ such that the partitions $\\Q^*=\\{Q \\cap U^* \\,\\vert\\, Q \\in \\Q \\} \\setminus \\{\\emptyset\\}$ and $\\P^*=\\{P \\cap U^* \\,\\vert\\, P \\in \\P \\} \\setminus \\{\\emptyset\\}$\n\n\tof $U^*$ satisfy $\\Q^* \\prec \\P^*$.\n\tIndeed, let $U^* = \\bigcup_{Q} Q \\cap P_Q$ where the union is over all $Q \\in \\Q$ satisfying $Q \\sub_{1\/2} P_Q$ for a (unique) $P_Q \\in \\P$.\n\n\tAs claimed, $|U^*| = \\sum_{Q \\in_{1\/2} \\P} |Q \\cap P_Q| \\ge \\sum_{Q \\in_{1\/2} \\P} \\frac12|Q| \\ge \\frac14|U|$, using $\\Q \\prec_{1\/2} \\P$ for the last inequality.\n\n\tNow, since $\\P$ is equitable, $|\\P^*| \\ge \\frac14|\\P|$.\n\n\tThus, $|\\Q| \\ge |\\Q^*| \\ge |\\P^*| \\ge \\frac14|\\P|$, as desired.\n\\end{proof}\n\n\n\\renewcommand{\\K}{k}\n\\renewcommand{\\w}{w}\n\n\\subsection{$3$-graph key argument}\\label{subsec:key}\n\n\nWe next introduce a few more definitions that are needed for the statement of Lemma \\ref{lemma:ind-k}.\nLet $e(i) = 2^{i+10}$. 
We define the following tower-type function $\\t\\colon\\N\\to\\N$;\n\\begin{equation}\\label{eq:t}\n\\t(i+1) = \\begin{cases}\n2^{\\t(i)\/e(i)}\t&\\text{if } i \\ge 1\\\\\n2^{250}\t&\\text{if } i = 0 \\;.\n\\end{cases}\n\\end{equation}\nIt is easy to prove, by induction on $i$, that $\\t(i) \\ge e(i)\\t(i-1)$ for $i \\ge 2$ (for the induction step,\n$t(i+1) \\ge 2^{\\t(i-1)} = t(i)^{e(i-1)}$, so $t(i+1)\/e(i+1) \\ge \\t(i)^{e(i-1)-i-11} \\ge \\t(i)$).\nThis means that $t$ is monotone increasing, and that $\\t$ is an integer power of $2$ (follows by induction as $t(i)\/e(i) \\ge 1$ is a positive power of $2$ and in particular an integer).\nWe record the following facts regarding $\\t$ for later use:\n\\begin{equation}\\label{eq:monotone}\n\\t(i) \\ge 4\\t(i-1) \\quad\\text{ and }\\quad\n\\text{ $\\t(i)$ is a power of $2$} \\;.\n\\end{equation}\nFor a function $f:\\N\\to\\N$ with $f(i) \\ge i$ we denote\n\\begin{equation}\\label{eq:f*}\nf^*(i) = \\t\\big(f(i)\\big)\/e(i) \\;.\n\\end{equation}\nNote that $f^*(i)$ is indeed a positive integer (by the monotonicity of $\\t$, $f^*(i) \\ge \\t(i)\/e(i)$ is a positive power of $2$).\nIn fact, $f^*(i) \\ge f(i)$ (as $f^*(i) \\ge 4^{f(i)}\/e(i)$ using~(\\ref{eq:monotone})).\nWe recursively define the function $\\w\\colon\\N\\to\\N$ as follows;\n\\begin{equation}\\label{eq:Ak}\n\\w(i+1) = \\begin{cases}\n\\w^*(i)\t\t&\\text{if } i \\ge 1\\\\\n1\t\t&\\text{if } i = 0 \\;.\n\\end{cases}\n\\end{equation}\nIt is evident that $\\w$ is a wowzer-type function; in fact, one can check that:\n\\begin{equation}\\label{eq:A_k}\n\\w(i) \\ge \\wow(i) \\;.\n\\end{equation}\n\n\n\n\n\n\n\n\n\\begin{lemma}[Key argument]\\label{lemma:ind-k}\n\n\n\tLet $s \\in \\N$, let $\\Vside^1,\\Vside^2,\\Vside^3$ be mutually disjoint sets of equal size and let $\\V^1 \\succ\\cdots\\succ \\V^m$ be a sequence of\n\t$m=\\w^*(s)+1$ successive equitable refinements of $\\{\\Vside^1,\\Vside^2,\\Vside^3\\}$\n\twith $|\\V^i_1|=|\\V^i_2|=|\\V^i_3|=\\t(i)$ for every\\footnote{Since we assume that each $\\V^i$ refines $\\{\\Vside^1,\\Vside^2,\\Vside^3\\}$ then $\\V^i_1$ is (by the notation mentioned before Claim \\ref{claim:star-union}) the restriction of $\\V^i$ to $\\Vside^1$.} $1 \\leq i \\leq m$.\n\tThen there is a $3$-partite $3$-graph $H$ on $(\\Vside^1,\\Vside^2,\\Vside^3)$ of density $d(H)=2^{-s}$ satisfying the following property:\\\\\n\tIf $(\\Z,\\E)$ is a $\\langle 2^{-70} \\rangle$-regular partition of $H$ and for some\n\t$1 \\le i \\le \\w(s)$ $(< m)$ we have $\\Z_3 \\prec_{2^{-9}} \\V^i_3$ and $\\Z_2 \\prec_{2^{-9}} \\V^i_2$ then we also have $\\Z_1 \\prec_{2^{-9}} \\V^{i+1}_1$.\n\\end{lemma}\n\n\\begin{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tPut $s':=\\w^*(s)$, so that $m = s'+1$.\n\tApply Lemma~\\ref{theo:core} with\n\t$$\\Lside=\\Vside^1,\\quad \\Rside=\\Vside^2 \\quad\\text{ and }\\quad \\V^2_1 \\succ \\cdots \\succ \\V^{s'+1}_1 ,\\quad \\V^1_2 \\succ \\cdots \\succ \\V^{s'}_2 \\;,$$\n\tand let\n\t\\begin{equation}\\label{eq:main-k-colors}\n\t\\G^1 \\succ \\cdots \\succ \\G^{s'} \\quad\\text{ with }\\quad |G^\\ell|=2^\\ell \\text{ for every } 1 \\le \\ell \\le s'\n\t\\end{equation}\n\tbe the resulting sequence of $s'$ successively refined equipartitions of $\\Vside^1 \\times \\Vside^2$.\n\n\n\n\n\t\n\t\\begin{prop}\\label{prop:main-k-hypo}\n\t\tLet $1 \\le \\ell \\le s'$ and $G \\in \\G^\\ell$.\n\t\tFor every $\\langle 2^{-28} \\rangle$-regular partition $\\Z_1 \\cup \\Z_2$ of $G$ (where $\\Z_1$ and $\\Z_2$ are partitions of $\\Vside^1$ and 
$\\Vside^2$, respectively) and every $1 \\le i \\le \\ell$,\n\t\tif $\\Z_2 \\prec_{2^{-9}} \\V^i_2$\n\t\tthen $\\Z_1 \\prec_{2^{-9}} \\V^{i+1}_1$.\n\t\\end{prop}\n\t\\begin{proof}\n\t\tFirst we need to verify that we may apply Lemma~\\ref{theo:core} as above.\n\t\tAssumptions~\\ref{item:core-minR},~\\ref{item:core-expR} in Lemma~\\ref{theo:core} hold by~(\\ref{eq:monotone}) and the fact that $|\\V^j_2|=\\t(j)$.\n\t\tAssumption~\\ref{item:core-expL} is satisfied since for every $1 \\le j \\le s$ we have\n\t\t$$|\\V^{j+1}_1| = \\t(j+1) = 2^{\\t(j)\/e(j)} = 2^{|\\V^{j}_2|\/e(j)} \\;,$$\n\t\twhere the second equality uses the definition of the function $\\t$ in~(\\ref{eq:t}).\n\t\tWe can thus use Lemma~\\ref{theo:core} to infer that the fact that $\\Z_2 \\prec_{2^{-9}} \\V^i_2$ implies that $\\Z_1 \\prec_x \\V^{i+1}_1$ with $x=\\max\\{2^{5}\\sqrt{2^{-28}},\\, 32\/\\sqrt[6]{\\t(1)} \\} = 2^{-9}$, using~(\\ref{eq:t}).\n\t\\end{proof}\n\t\n\n\n\n\n\n\n\n\n\n\n\t\n\tFor each $1 \\le j \\le s$ let\n\t\\begin{equation}\\label{eq:main-k-dfns}\n\t\\G^{(j)} = \\G^{\\w^*(j)}\n\t\\quad\\text{ and }\\quad\n\t\\V^{(j)} = \\V^{\\w(j)}_3 \\;.\n\t\\end{equation}\t\n\tAll these choices are well defined since $\\w^*(j)$ satisfies $1 \\le \\w^*(1) \\le \\w^*(j) \\le \\w^*(s) = s'$, and since $\\w(j)$ satisfies $1 \\le \\w(1) \\le \\w(j) \\le \\w(s) \\le m$. Observe that we have thus chosen two subsequences of $\\G^1,\\cdots,\\G^{s'}$ and $\\V^1_3,\\ldots,\\V^m_3$, each of length $s$.\n\tRecalling that each $\\G^{(j)}$ is a partition of $\\Vside^1 \\times \\Vside^2$, we now apply Lemma~\\ref{theo:core} again with\n\t$$\n\t\\Lside=\\Vside^1 \\times \\Vside^2,\\quad \\Rside=\\Vside^3 \\quad\\text{ and }\\quad \\G^{(1)} \\succ \\cdots \\succ \\G^{(s)}, \\quad \\V^{(1)} \\succ \\cdots \\succ \\V^{(s)} \\;.\n\t$$\n\tThe output of this application of Lemma~\\ref{theo:core} consists of a sequence of $s$ (successively refined)\n\tequipartitions of $(\\Vside^1 \\times \\Vside^2)\\times\\Vside^3$.\n\tWe can think of the $s$-th partition of this sequence as a collection of $2^s$ bipartite graphs on vertex sets\n\t$(\\Vside^1\\times\\Vside^{2},\\,\\Vside^3)$. 
For the rest of the proof let $G'$ be be any of these graphs.\n\tBy Remark \\ref{remequi} we have\n\t\\begin{equation}\\label{eq:ind-colors2}\n\td(G')=2^{-s} \\;.\n\t\\end{equation}\n\t\\begin{prop}\\label{prop:ind-prop2}\n\t\tFor every $\\langle 2^{-70} \\rangle$-regular partition $\\E \\cup \\V$ of $G'$ (where $\\E$ and $\\V$ are partitions of $\\Vside^1\\times\\Vside^{2}$ and $\\Vside^3$ respectively)\n\t\tand every $1 \\le j' \\le s$,\n\t\tif $\\V \\prec_{2^{-9}} \\V^{(j')}$ then $\\E \\prec_{2^{-30}} \\G^{(j')}$.\n\t\\end{prop}\n\t\\begin{proof}\n\t\tFirst we need to verify that we may apply Lemma~\\ref{theo:core} as above.\n\t\tNote that $|\\G^{(j)}|=2^{\\w^*(j)}$ by~(\\ref{eq:main-k-colors}) and (\\ref{eq:main-k-dfns}),\n\t\tand that $|\\V^{(j)}|=\\t(\\w(j))$ by (\\ref{eq:main-k-dfns}) and the statement's assumption that $|\\V^i_3|=\\t(i)$.\n\t\tTherefore,\n\t\t\\begin{equation}\\label{eq:main-k-orders}\n\t\t|\\G^{(j)}| = 2^{\\w^*(j)} = 2^{\\t(\\w(j))\/e(j)} = 2^{|\\V^{(j)}|\/e(j)} \\;,\n\t\n\t\t\\end{equation}\n\t\twhere the second equality relies on~(\\ref{eq:f*}).\n\t\tMoreover, note that $\\t(\\w(1)) = \\t(1) = 2^{300}$.\n\t\tNow, Assumptions~\\ref{item:core-minR} and~\\ref{item:core-expR} in Lemma~\\ref{theo:core} follow from the fact that $|\\V^{(j)}|=\\t(\\w(j))$, from~(\\ref{eq:monotone})\n\t\tand the fact that $|\\V^{(1)}| = \\t(\\w(1)) \\ge 2^{200}$ by~(\\ref{eq:Ak}). \t\n\t\n\t\tAssumption~\\ref{item:core-expL} follows from~(\\ref{eq:main-k-orders}).\n\t\tWe can thus use Lemma~\\ref{theo:core} to infer that the fact that $\\V \\prec_{2^{-9}} \\V^{(j')}$ implies that\n\t\t$\\E \\prec_x \\G^{(j')}$ with $x=\\max\\{2^{5}\\sqrt{2^{-70}},\\, 32\/\\sqrt[6]{\\t(\\w(1))} \\} = 2^{-30}$.\n\t\n\t\n\t\n\t\\end{proof}\n\t\n\tLet $H$ be the $3$-partite $3$-graph on vertex classes $(\\Vside^1,\\Vside^2,\\Vside^3)$ with edge set\n\t$$\n\tE(H) = \\big\\{ (v_1,v_2,v_3) \\,:\\, ((v_1,v_{2}),v_3) \\in E(G') \\big\\} \\;,\n\t$$\t\n\tand note that we have (recall Definition \\ref{def:aux})\n\t\\begin{equation}\\label{eqH}\n\tG'=G_{H}^3\\;.\n\t\\end{equation}\n\tWe now prove that $H$ satisfies the properties in the statement of the lemma.\n\n\n\n\n\tFirst, note that by~(\\ref{eq:ind-colors2}) and (\\ref{eqH}) we have $d(H)=2^{-s}$, as needed.\n\tAssume now that $i$ is such that\n\t\\begin{equation}\\label{eq:ind-i-assumption}\n\t1 \\le i \\le \\w(s)\n\t\\end{equation}\n\tand:\n\t\\begin{enumerate}\n\t\t\\item\\label{item:ind-reg}\n\t\t$(\\Z,\\E)$ is a $\\langle 2^{-70} \\rangle$-regular partition of $H$, and\n\t\t\\item\\label{item:ind-refine} $\\Z_3 \\prec_{2^{-9}} \\V^i_3$ and $\\Z_2 \\prec_{2^{-9}} \\V^i_2$.\n\t\\end{enumerate}\t\n\tWe need to show that\n\t\\begin{equation}\\label{eq:ind-goal}\n\t\\Z_1 \\prec_{2^{-9}} \\V^{i+1}_1 \\;.\n\t\\end{equation}\n\n\tSince Item~\\ref{item:ind-reg} guarantees that $(\\Z,\\E)$ is a $\\langle 2^{-70} \\rangle$-regular partition of $H$,\n\n\twe get from Definition~\\ref{def:k-reg} and (\\ref{eqH}) that\n\t\\begin{equation}\\label{eq:ind-reg}\n\t\\text{$\\E_3 \\cup \\Z_3$ is a $\\langle 2^{-70} \\rangle$-regular partition of } G'.\n\t\\end{equation}\n\tLet\n\t\\begin{equation}\\label{eq:ind-j'}\n\t1 \\le j' \\le s\n\t\\end{equation}\n\tbe the unique integer satisfying (the equality here is just (\\ref{eq:Ak}))\n\t\\begin{equation}\\label{eq:ind-sandwich}\n\t\\w(j') \\le i < \\w(j'+1) = \\w^*(j')\\;.\n\t\\end{equation}\n\tNote that (\\ref{eq:ind-j'}) holds due to~(\\ref{eq:ind-i-assumption}).\n\tRecalling~(\\ref{eq:main-k-dfns}),\n\tthe lower bound 
in~(\\ref{eq:ind-sandwich}) implies that $\\V^i_3 \\prec \\V^{\\w(j')} = \\V^{(j')}$.\n\tTherefore, the assumption $\\Z_3 \\prec_{2^{-9}} \\V^i_3$ in~\\ref{item:ind-refine} implies that\n\t\\begin{equation}\\label{eq:ind-Zk}\n\t\\Z_3 \\prec_{2^{-9}} \\V^{(j')} \\;.\n\t\\end{equation}\n\tApply Proposition~\\ref{prop:ind-prop2} on $G'$, using~(\\ref{eq:ind-reg}),~(\\ref{eq:ind-j'}) and~(\\ref{eq:ind-Zk}), to deduce that\n\t\\begin{equation}\\label{eq:ind-E}\n\t\\E_3 \\prec_{2^{-30}} \\G^{(j')} = \\G^{\\w^*(j')} \\;,\n\t\\end{equation}\n\twhere for the equality again recall~(\\ref{eq:main-k-dfns}).\n\tSince $(\\Z,\\E)$ is a $\\langle 2^{-70} \\rangle$-regular partition of $H$ (by Item~\\ref{item:ind-reg} above)\n\tit is in particular $\\langle 2^{-70} \\rangle$-good. By~(\\ref{eq:ind-E}) we may thus apply Claim~\\ref{claim:uniform-refinement}\n\tto conclude that\n\t\\begin{equation}\\label{eq:ind-reg2}\n\t\\Z_1 \\cup \\Z_2 \\text{ is a }\n\t\\langle 2^{-28} \\rangle\n\t\\text{-regular partition of some $G\\in\\G^{\\w^*(j')}$.}\n\t\\end{equation}\n\tBy~(\\ref{eq:ind-reg2}) we may apply Proposition~\\ref{prop:main-k-hypo} with $G$, $\\Z_1\\cup\\Z_2$, $\\ell=\\w^*(j')$ and $i$, observing (crucially)\n\tthat $i \\leq \\ell$ by (\\ref{eq:ind-sandwich}). We thus conclude that the fact $\\Z_2 \\prec_{2^{-9}} \\V^i_2$ (stated in~\\ref{item:ind-refine})\n\timplies that $\\Z_1 \\prec_{2^{-9}} \\V^{i+1}_1$, thus proving~(\\ref{eq:ind-goal}) and completing the proof.\n\\end{proof}\n\n\n\n\n\\subsection{Putting everything together}\\label{subsec:pasting}\n\n\nWe can now prove our main theorem, Theorem~\\ref{theo:main}, which we repeat here for convenience.\n\\addtocounter{theo}{-2}\n\\begin{theo}[Main theorem\n\tLet $s \\in \\N$.\n\tThere exists a $3$-partite $3$-graph $H$ on vertex classes of equal size and of density at least $2^{-s}$,\n\tand a partition $\\V_0$ of $V(H)$ with $|\\V_0| \\le 2^{300}$, such that if\n\t$(\\Z,\\E)$ is a $\\langle 2^{-73} \\rangle$-regular partition of $H$ with\n\t$\\Z \\prec \\V_0$ then $|\\Z| \\ge \\wow(s)$.\n\n\n\n\n\n\n\n\n\n\n\n\t\n\n\n\n\n\n\n\n\n\n\n\n\\end{theo}\n\\addtocounter{theo}{+1}\n\n\n\n\\begin{proof}\n\n\n\n\n\n\n\n\tLet the $3$-graph $B$ be the tight $6$-cycle; that is, $B$ is the $3$-graph on vertex classes $\\{0,1,\\ldots,5\\}$ with edge set $E(B)=\\{\\{0,1,2\\},\\{1,2,3\\},\\{2,3,4\\},\\{3,4,5\\},\\{4,5,0\\},\\{5,0,1\\}\\}$.\n\tNote that $B$ is $3$-partite with vertex classes $(\\{0,3\\},\\{1,4\\},\\{2,5\\}\\}$.\n\n\tPut $m=\\w^*(s-1)+1$ and let $n \\ge \\t(m)$.\n\tLet $\\Vside^0,\\ldots,\\Vside^{5}$ be $6$ mutually disjoint sets of size $n$ each.\n\tLet $\\V^1 \\succ\\cdots\\succ \\V^m$ be an arbitrary sequence of $m$ successive equitable refinements of $\\{\\Vside^0,\\ldots,\\Vside^{5}\\}$ with $|\\V^i_h|=\\t(i)$ for every $1 \\le i \\le m$ and $0 \\le h \\le 5$, which exists as $n$ is large enough.\n\n\tExtending the notation $\\Z_i$ (above Definition~\\ref{def:k-reg}), for every $0 \\le x \\le 5$\n\n\twe henceforth denote the restriction of the vertex partition $\\Z$ to $\\Vside^x$ by $\\Z_x = \\{Z \\in \\Z \\,\\vert\\, Z \\sub \\Vside^x\\}$.\t\n\tFor each edge $e=\\{x,x+1,x+2\\} \\in E(B)$ (here and henceforth when specifying an edge, the integers are implicitly taken modulo $6$)\n\tapply Lemma~\\ref{lemma:ind-k} with\n\t$$s-1,\\,\n\t\\Vside^{x},\\Vside^{x+1},\\Vside^{x+2}\n\t\\text{ and }\n\t(\\V^{1}_x \\cup \\V^1_{x+1} \\cup \\V^1_{x+2})\n\t\\succ\\cdots\\succ (\\V^{m}_{x}\\cup\\V^{m}_{x+1}\\cup\\V^{m}_{x+2}) \\;.$$\n\n\tLet $H_e$ denote the 
resulting $3$-partite $3$-graph on $(\\Vside^{x},\\Vside^{x+1},\\Vside^{x+2})$.\n\tNote that $d(H_e) = 2^{-(s-1)}$.\n\tMoreover, let\n\t$$c = 2^{-9} \\quad\\text{ and }\\quad K=\\w(s-1)+1 \\;.$$\n\tThen $H_e$ has the property that for every $\\langle 2^{-70} \\rangle$-regular partition $(\\Z',\\E')$ of $H_e$ and every $1 \\le i < K$,\n\n\n\n\n\n\n\n\t\\begin{equation}\\label{eq:paste-property}\n\t\\text{if $\\Z'_{x+2} \\prec_{c} \\V^i_{x+2}$ and $\\Z'_{x+1} \\prec_{c} \\V^i_{x+1}$ then $\\Z'_x \\prec_{c} \\V^{i+1}_x$.}\n\t\\end{equation}\t\n\tWe construct our $3$-graph on the vertex set $\\Vside:=\\Vside^0 \\cup\\cdots\\cup \\Vside^5$ as\n\t$E(H) = \\bigcup_{e} E(H_e)$; that is, $H$ is the edge-disjoint union of all six $3$-partite $3$-graphs $H_e$ constructed above.\n\tNote that $H$ is a $3$-partite $3$-graph (on vertex classes $(\\Vside^0 \\cup \\Vside^3,\\, \\Vside^1 \\cup \\Vside^4,\\, \\Vside^2 \\cup \\Vside^5))$ of density $\\frac68 2^{-(s-1)} \\ge 2^{-s}$, as needed.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tWe will later use the following fact.\n\t\\begin{prop}\\label{prop:restriction}\n\t\tLet $(\\Z,\\E)$ be an $\\langle 2^{-73}\\rangle$-regular partition of $H$ and let\n\t\t$e \\in E(B)$.\n\t\n\t\n\t\n\t\tIf $\\Z \\prec \\{\\Vside^0,\\ldots,\\Vside^{5}\\}$\n\t\n\t\tthen the restriction $(\\Z',\\E')$ of $(\\Z,\\E)$ to $V(H_e)$ is a $\\langle 2^{-70} \\rangle$-regular partition of $H_e$.\n\t\\end{prop}\n\t\\begin{proof}\n\t\tImmediate from Claim~\\ref{claim:restriction} using the fact that $e(H_e) = \\frac16 e(H)$.\n\t\n\t\\end{proof}\t\n\n\t\n\tNow, let $(\\Z,\\E)$ be a $\\langle 2^{-73} \\rangle$-regular partition of $H$\n\twith $\\Z \\prec \\V^1$.\n\tOur goal will be to show that\n\t\\begin{equation}\\label{eq:paste-goal}\n\t\\Z \\prec_{c} \\V^{K} \\;.\n\t\\end{equation}\n\tProving~(\\ref{eq:paste-goal}) would complete the proof, by setting $\\V_0$ in the statement to be $\\V^1$ here (notice $|\\V^1|=3\\t(1) \\le 2^{300}$ by~(\\ref{eq:t}));\n\tindeed, Claim~\\ref{claim:refinement-size} would imply that\n\t$$|\\Z| \\ge \\frac14|\\V^{K}| = \\frac14 \\cdot 6 \\cdot \\t(K)\n\t\\ge \\t(K)\n\t\\ge \\t(\\w(s-1))\n\t\\ge \\w(s)\n\t\\ge \\wow(s) \\;,$$\n\n\n\twhere the last inequality uses~$(\\ref{eq:A_k})$.\n\n\n\n\tAssume towards contradiction that $\\Z \\nprec_{c} \\V^{K}$. By averaging,\n\t\\begin{equation}\\label{eq:assumption}\n\t\\Z_h \\nprec_c \\V^{K}_h \\text{ for some } 0 \\le h \\le 5.\n\t\\end{equation}\n\tFor each $0 \\le h \\le 5$ let $1 \\le \\b(h) \\le K$ be the largest integer satisfying $\\Z_h \\prec_c \\V^{\\b(h)}_h$,\n\twhich is well defined since $\\Z_h \\prec_c \\V^1_h$,\n\tsince in fact $\\Z \\prec \\V^1$.\n\tPut $\\b^* = \\min_{0 \\le h \\le 5} \\b(h)$, and note that by~(\\ref{eq:assumption}),\n\t\\begin{equation}\\label{eq:paste-star}\n\t\\b^* < K \\;.\n\t\\end{equation}\n\tLet $0 \\le x \\le 5$ minimize $\\b$, that is, $\\b(x)=\\b^*$.\n\tTherefore:\n\t\\begin{equation}\\label{eqcontra}\n\t\\Z_{x+2} \\prec_c \\V^{\\b^*}_{x+2} \\mbox{~,~} \\Z_{x+1} \\prec_c \\V^{\\b^*}_{x+1} \\mbox{ and }\n\t\\Z_{x} \\nprec_c \\V^{\\b^*+1}_{x}.\t\t\n\t\\end{equation}\n\tLet $e=\\{x,x+1,x+2\\} \\in E(B)$.\n\tLet $(\\Z',\\E')$ be the restriction of $(\\Z,\\E)$ to $V(H_e)=\\Vside^{x} \\cup \\Vside^{x+1} \\cup \\Vside^{x+2}$, which is a $\\langle 2^{-70} \\rangle$-regular partition of $H_e$ by Proposition~\\ref{prop:restriction}. 
Since $\\Z'_x=\\Z_x$, $\\Z'_{x+1}=\\Z_{x+1}$, $\\Z'_{x+2}=\\Z_{x+2}$ we get\n\tfrom (\\ref{eqcontra}) a contradiction to~(\\ref{eq:paste-property}) with $i=\\beta^*$.\n\tWe have thus proved~(\\ref{eq:paste-goal}) and so the proof is complete.\n\\end{proof}\n\n\n\n\t\n\t\n\n\n\n\n\\section{Wowzer-type Lower Bounds for $3$-Graph Regularity Lemmas}\\label{sec:FR}\n\n\\renewcommand{\\K}{\\mathcal{K}}\n\nThe purpose of this section is to apply Theorem \\ref{theo:main} in order to prove Corollary~\\ref{coro:FR-LB},\nthus giving wowzer-type (i.e., $\\Ack_3$-type) lower bounds for the $3$-graph regularity lemmas of Frankl and R\\\"odl~\\cite{FrankRo02} and of Gowers~\\cite{Gowers06}.\nWe will start by giving the necessary definitions for Frankl and R\\\"odl's lemma\nand state our corresponding lower bound.\nNext we will state the necessary definitions for Gower's lemma and state our corresponding lower bound.\nThe formal proofs would then follow.\n\n\n\n\n\\subsection{Frankl and R\\\"odl's $3$-graph regularity}\n\n\n\n\n\n\n\\begin{definition}[$(\\ell,t,\\e_2)$-equipartition,~\\cite{FrankRo02}]\\label{def:ve-partition}\n\n\tAn \\emph{$(\\ell,t,\\e_2)$-equipartition} on a set $V$ is a $2$-partition $(\\Z,\\E)$ on $V$ where $\\Z$ is an equipartition of order $|\\Z|=t$ and every graph in $\\E$ is $\\e_2$-regular\\footnote{Here, and in several places in this section, we of course refer to the ``traditional'' notion of Szemer\\'edi's $\\e$-regularity, as defined at the beginning of Section \\ref{sec:define}. } of density $\\ell^{-1} \\pm \\e_2$.\n\\end{definition}\n\n\\begin{remark}\n\tIf $\\e_2 \\le \\frac12\\ell^{-1}$ then $(Z,\\E)$ has at most $2\\ell$ bipartite graphs between every pair of clusters of $\\Z$.\n\\end{remark}\n\n\n\nA \\emph{triad} of a $2$-partition $(\\Z,\\E)$ is any tripartite graph whose three vertex classes are in $\\Z$ and three edge sets are in $\\E$. 
We often identify a triad with a triple of its edge sets $(E_1,E_2,E_3)$.\nThe \\emph{density} of a triad $P$ in a $3$-graph $H$ is $d_H(P)=|E(H) \\cap T(P)|\/|T(P)|$ (and $0$ if $|T(P)|=0$).\nA \\emph{subtriad} of $P$ is any subgraph of $P$ on the same vertex classes.\n\n\n\n\n\n\n\n\\begin{definition}[$3$-graph $\\e$-regularity~\\cite{FrankRo02}]\\label{def:FR-reg}\n\n\tLet $H$ be a $3$-graph.\n\tA triad $P$ is \\emph{$\\e$-regular} in $H$ if every subtriad $P'$ with $|T(P')| \\ge \\e|T(P)|$ satisfies $|d_H(P')-d_H(P)| \\le \\e$.\\\\\n\tAn $(\\ell,t,\\e_2)$-equipartition $\\P$ on $V(H)$ is an \\emph{$\\e$-regular} partition of $H$ if\n\n\t$\\sum_P |T(P)| \\le \\e|V|^3$ where the sum is over all triads of $\\P$ that are not $\\e$-regular in $H$.\t\n\\end{definition}\n\nThe $3$-graph regularity of Frankl and R\\\"odl~\\cite{FrankRo02} states, very roughly, that for every $\\e>0$ and every function $\\e_2\\colon\\N\\to(0,1]$, every $3$-graph has an $\\e$-regular $(\\ell,t,\\e_2(\\ell))$-equipartition where $t,\\ell$ are bounded by a wowzer-type function.\nIn fact, the statement in~\\cite{FrankRo02} uses a considerably stronger notion of regularity of a partition than in Definition~\\ref{def:FR-reg} that involves an additional function $r(t,\\ell)$ which we shall not discuss here (as discussed in~\\cite{FrankRo02}, this stronger notion was crucial for allowing them to prove the $3$-graph removal lemma).\nOur lower bound below applies even to the weaker notion stated above, which corresponds to taking $r(t,\\ell)\\equiv 1$.\n\n\n\nUsing Theorem~\\ref{theo:main} we can deduce a wowzer-type \\emph{lower} bound for Frankl and R\\\"odl's $3$-graph regularity lemma.\nThe proof of this lower bound appears in Subsection~\\ref{subsec:FR-LB-proof}.\n\n\n\n\\begin{theo}[Lower bound for Frankl and R\\\"odl's regularity lemma]\\label{theo:FR-LB}\n\tPut $c = 2^{-400}$.\n\tFor every $s \\in \\N$ there exists a $3$-partite $3$-graph $H$ of density $p=2^{-s}$, and a partition $\\V_0$ of $V(H)$ with $|\\V_0| \\le 2^{300}$,\n\tsuch that\n\tif $(\\Z,\\E)$ is an $\\e$-regular $(\\ell,t,\\e_2(\\ell))$-equipartition of $H$,\n\twith $\\e \\le c p$, $\\e_2(\\ell) \\le c \\ell^{-3}$ and $\\Z \\prec \\V_0$, then $|\\Z| \\ge \\wow(s)$.\n\\end{theo}\n\n\\begin{remark}\n\tOne can easily remove the assumption $\\Z \\prec \\V_0$ by taking the common refinement of $\\Z$ with $\\V_0$ (and adjusting $\\E$ appropriately).\nSince $|\\V_0|=O(1)$ this has only a minor effect on the parameters $\\e,\\ell,t,\\e_2(\\ell)$ of the partition and thus\none gets essentially the same lower bound. We omit the details of this routine transformation.\n\\end{remark}\n\n\n\\subsection{Gowers' $3$-graph regularity}\n\nHere we consider the $3$-graph regularity Lemma due to Gowers~\\cite{Gowers06}.\n\n\n\\begin{definition}[$\\a$-quasirandomness, see Definition~6.3 in \\cite{Gowers06}]\\label{def:quasirandom}\n\tLet $H$ be a $3$-graph, and let $P=(E_0,E_1,E_2)$ be a triad with $d(E_0)=d(E_1)=d(E_2)=:d$ on vertex classes $(X,Y,Z)$ with $|X|=|Y|=|Z|=:n$. 
We say that $P$ is \\emph{$\\a$-quasirandom} in $H$ if\n\t$$\\sum_{x_0,x_1 \\in X}\\sum_{y_0,y_1 \\in Y}\\sum_{z_0,z_1 \\in Z} \\prod_{i,j,k\\in\\{0,1\\}} f(x_i,y_j,z_k) \\le \\a d^{12}n^6 \\;,$$\n\n\twhere\n\t$$f(x,y,z) =\n\t\\begin{cases}\n\t1-d_H(P)\t\t\t\t&\\text{if } (x,y,z) \\in T(P), (x,y,z) \\in E(H)\\\\\n\t-d_H(P) \t\t\t\t&\\text{if } (x,y,z) \\in T(P), (x,y,z) \\notin E(H)\\\\\n\t0 \t\t\t\t\t\t&\\text{if } (x,y,z) \\notin T(P) \\;.\n\t\\end{cases}$$\n\tAn $(\\ell,t,\\e_2)$-equipartition $\\P$ on $V(H)$ is an \\emph{$\\a$-quasirandom} partition of $H$ if\n\n\t$\\sum_P |T(P)| \\le \\a|V|^3$ where the sum is over all triads of $\\P$ that are not $\\a$-quasirandom in $H$.\t\n\\end{definition}\n\nThe $3$-graph regularity lemma of Gowers~\\cite{Gowers06} (see also~\\cite{NaglePoRoSc09}) can be equivalently phrased as stating that, very roughly, for every $\\a>0$ and every function $\\e_2\\colon\\N\\to(0,1]$, every $3$-graph has an $\\a$-quasirandom $(\\ell,t,\\e_2(\\ell))$-equipartition where $t,\\ell$ are bounded by a wowzer-type function.\n\nOne way to prove a wowzer-type lower bound for Gowers' $3$-graph regularity lemma is along similar lines to the proof of Theorem~\\ref{theo:FR-LB}.\nHowever, there is shorter proof using the fact that Gowers' notion of quasirandomness implies Frankl and R\\\"odl's notion of regularity.\nIn all that follows we make the rather trivial assumption that, in the notation above, $\\a,1\/\\ell \\le 1\/2$.\n\n\\begin{prop}[\\cite{NagleRoSc17}]\\label{prop:Schacht}\n\tThere is $C \\ge 1$ such that the following holds;\n\tif a triad $P=(E_0,E_1,E_2)$ is $\\e^C$-quasirandom and for every $0 \\le i \\le 2$ the bipartite graph $E_i$ is $d(E_i)^C$-regular then $P$ is $\\e$-regular.\t\n\n\n\n\n\n\\end{prop}\n\n\nOur lower bound for Gowers' $3$-graph regularity lemma is as follows.\n\n\n\\begin{theo}[Lower bound for Gowers' regularity lemma]\\label{theo:Gowers-LB}\n\n\n\tFor every $s \\in \\N$ there exists a $3$-partite $3$-graph $H$ of density $p=2^{-s}$, and a partition $\\V_0$ of $V(H)$ with $|\\V_0| \\le 2^{300}$,\n\tsuch that\n\tif $(\\Z,\\E)$ is an $\\a$-quasirandom $(\\ell,t,\\e_2(\\ell))$-equipartition of $H$,\n\n\twith $\\a \\le \\poly(p)$, $\\e_2(\\ell) \\le \\poly(1\/\\ell)$ and $\\Z \\prec \\V_0$, then $|\\Z| \\ge \\wow(s)$.\n\\end{theo}\n\n\n\n\n\n\\begin{proof\n\n\tGiven $s$, let $H$ and $\\V_0$ be as in Theorem~\\ref{theo:FR-LB}.\n\tLet $\\P=(\\Z,\\E)$ be an $\\a$-quasirandom $(\\ell,t,\\e_2(\\ell))$-equipartition of $H$ with $\\Z \\prec \\V_0$, $\\a \\le (cp)^C$ and $\\e_2(\\ell) \\le \\min\\{c\\ell^{-3},\\,(2\\ell)^{-C}\\}$, where $c$ and $C$ are as in Theorem~\\ref{theo:FR-LB} and Proposition~\\ref{prop:Schacht} respectively.\n\tWe will show that $\\P$ is a $cp$-regular partition of $H$,\n\n\twhich would complete the proof using Theorem~\\ref{theo:FR-LB} and the fact that $\\e_2 \\le c\\ell^{-3}$.\n\tLet $P=(E_0,E_1,E_2)$ be a triad of $\\P$ that is $\\a$-quasirandom in $H$.\n\tNote that, by our choice of $\\e_2(\\ell)$, for every $0 \\le i \\le 2$ we have $d(E_i) \\ge 1\/\\ell - \\e_2(\\ell) \\ge 1\/2\\ell$; thus, since $\\e_2(\\ell) \\le (1\/2\\ell)^{C} \\le d(E_i)^C$, we have that $E_i$ is $d(E_i)^C$-regular.\n\tApplying Proposition~\\ref{prop:Schacht} on $P$ we deduce that $P$ is $\\e$-regular with $\\e=\\a^{1\/C} \\le cp$.\n\tSince $\\P$ is an $\\a$-quasirandom partition of $H$ we have, by Definition~\\ref{def:quasirandom} and since $\\a \\le \\e$, that $\\P$ is an $\\e$-regular partition of $H$, as 
needed.\n\n\n\\end{proof}\n\n\n\\subsection{Proof of Theorem~\\ref{theo:FR-LB}}\\label{subsec:FR-LB-proof}\n\nThe proof of Theorem~\\ref{theo:FR-LB} will follow quite easily from Theorem~\\ref{theo:main} together with Claim~\\ref{claim:reduction} below.\nClaim~\\ref{claim:reduction} basically shows that a $\\langle \\d \\rangle$-regularity ``analogue'' of Frankl and R\\\"odl's notion of regularity implies graph $\\langle \\d \\rangle$-regularity.\nHere it will be convenient to say that a graph partition is \\emph{perfectly} $\\langle \\d \\rangle$-regular if all pairs of distinct clusters are $\\langle \\d \\rangle$-regular without modifying any of the graph's edges.\nFurthermore, we will henceforth abbreviate $t(P)=|T(P)|$ for a triad $P$.\nWe will only sketch the proof of Claim~\\ref{claim:reduction}, deferring the full details to the Appendix~\\ref{sec:FR-appendix}.\n\n\\begin{claim}\\label{claim:reduction}\n\tLet $H$ be a $3$-partite $3$-graph on vertex classes $(\\Aside,\\Bside,\\Cside)$,\n\tand let $(\\Z,\\E)$ be an $(\\ell,t,\\e_2)$-equipartition of $H$ with $\\Z \\prec \\{\\Aside,\\Bside,\\Cside\\}$ such that for every triad $P$ of $\\P$ and every subtriad $P'$ of $P$ with $t(P') \\ge \\d \\cdot t(P)$ we have $d_H(P') \\ge \\frac23 d_H(P)$.\n\n\tIf $\\e_2(\\ell) \\le (\\d^2\/88)\\ell^{-3}$\n\tthen $\\E_3 \\cup \\Z_3$ is a perfectly $\\langle 2\\sqrt{\\d} \\rangle$-regular partition of $G_H^3$.\n\\end{claim}\n\n\\begin{proof}[Proof (sketch):]\nWe remind the reader that the vertex classes of $G_H^3$ are $(\\Aside \\times \\Bside,\\, \\Cside)$ (recall Definition~\\ref{def:aux}), and that $\\E_3$ and $\\Z_3$ are the partition of $\\Aside\\times\\Bside$ induced by $\\E$ and the partition of $\\Cside$ induced by $\\Z$, respectively. Suppose $(\\Z,\\E)$ is as in the statement of the claim, and define $\\E'$ as follows:\nfor every $A \\in \\Z_1$ and $C \\in \\Z_3$, replace all the bipartite graphs between $A$ and $C$ with the complete bipartite graph $A \\times C$.\nDo the same for every $B \\in \\Z_2$ and $C \\in \\Z_3$ (we do {\\em not} change the partitions between $\\Aside$ and $\\Bside$). The simple (yet somewhat tedious to prove)\nobservation is that if all triads of $(\\Z,\\E)$ are regular then all triads of $(\\Z,\\E')$ are essentially as regular.\nOnce this observation is proved, the proof of the claim reduces to checking definitions. We thus defer the proof to Appendix~\\ref{sec:FR-appendix}.\n\\end{proof}\n\n\nUsing Claim~\\ref{claim:reduction}, we now prove our wowzer lower bound.\n\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{theo:FR-LB}]\n\tPut $\\a = 2^{-73}$.\n\n\tWe have\n\t\\begin{equation}\\label{eq:FR-LB-ineq}\n\tc = 2^{-400} \\le \\a^4\/1500 \\;.\n\t\\end{equation}\n\n\n\tGiven $s$, let $H$ and $\\V_0$ be as in Theorem~\\ref{theo:main}.\n\tLet $\\P=(\\Z,\\E)$ be an $\\e$-regular $(\\ell,t,\\e_2(\\ell))$-equipartition of $H$\n\twith $\\e \\le c p$, $\\e_2(\\ell) \\le c \\ell^{-3}$ and $\\Z \\prec \\V_0$.\n\tThus, the bound $|Z| \\ge \\wow(s)$ would follow from Theorem~\\ref{theo:main}\n\n\tif we show that $\\P$ is an $\\langle \\a \\rangle$-regular partition of $H$.\n\n\n\tFirst we need to show that $\\P$ is $\\langle \\a \\rangle$-good (recall Definition~\\ref{def:k-good}). 
Let $E$ be a graph with $E \\in \\E$ on vertex classes $(Z,Z')$ (so $Z \\neq Z' \\in \\Z$).\n\tWe need to show that $E$ is $\\langle \\a \\rangle$-regular.\n\n\tSince $\\P$ is an $(\\ell,t,\\e_2(\\ell))$-equipartition we have\n\t(recall Definition~\\ref{def:ve-partition}) that $E$ is $\\e_2(\\ell)$-regular and $d(E) \\ge \\ell^{-1} - \\e_2(\\ell)$.\n\tThe statement's assumption on $\\e_2(\\ell)$ thus implies\n\t$d(E) \\ge 2\\e_2(\\ell)$.\n\tIt follows that for every $S \\sub Z$, $S' \\sub Z'$ with $|S| \\ge \\e_2(\\ell)|Z|$, $|S'| \\ge \\e_2(\\ell)|Z'|$ we have $d_E(S,S') \\ge d(E)-\\e_2(\\ell) \\ge \\frac12 d(E)$.\n\tThis proves that $E$ is $\\langle \\e_2(\\ell) \\rangle$-regular, and since $\\e_2(\\ell) \\le c \\le \\a$, that $E$ is $\\langle \\a \\rangle$-regular, as needed.\n\n\t\n\t\n\n\n\n\n\n\n\tIt remains to show that the $\\langle \\a \\rangle$-good $\\P$ is an $\\langle \\a \\rangle$-regular partition of $H$ (recall Definition~\\ref{def:k-reg}).\n\tBy symmetry, it suffices to show that $\\E_3 \\cup \\Z_3$ is an $\\langle \\a \\rangle$-regular partition of $G_{H}^3$.\n\tLet $H'$ be obtained from $H$ by removing all ($3$-)edges in triads of $\\P$ that are either not $\\e$-regular in $H$ or have density at most $3\\e$ in $H$.\n\tBy Definition~\\ref{def:FR-reg}, the number of edges removed from $H$ to obtain $H'$ is at most\n\t\\begin{equation}\\label{eq:FR-LB-modify}\n\t\\e|V(H)|^3 + 3\\e|V(H)|^3 \\le 4\\cdot c p |V(H)|^3\n\t\\le (\\a p\/27)|V(H)|^3 = \\a\\cdot e(H) \\;,\n\t\\end{equation}\n\n\twhere the second inequality uses ~(\\ref{eq:FR-LB-ineq}),\n\tand the equality uses the fact that all three vertex classes of $H$ are of the same size.\n\n\n\tThus, in $H'$, every non-empty triad of $\\P$ is $\\e$-regular and of density at least $3\\e$.\n\tPut $\\d = (\\a\/2)^2$.\n\tAgain by Definition~\\ref{def:FR-reg}, for every triad $P$ of $\\P$ and every subtraid $P'$ of $P$ with $t(P') \\ge \\d \\cdot t(P)$ ($\\ge \\e \\cdot t(P)$ by~(\\ref{eq:FR-LB-ineq})) we have $d_{H'}(P') \\ge d_{H'}(P)-\\e \\ge \\frac23 d_{H'}(P)$.\n\n\tIt follows from applying Claim~\\ref{claim:reduction} with $H'$ and $\\d$,\n\n\tusing~(\\ref{eq:FR-LB-ineq}),\n\n\n\tthat $\\E_3 \\cup \\Z_3$ is a perfectly $\\langle \\a \\rangle$-regular partition of $G_{H'}^3$.\n\tNote that~(\\ref{eq:FR-LB-modify}) implies that\n\tone can add\/remove at most $\\a \\cdot e(G_{H}^3)$ edges of $G_{H}^3$ to obtain $G_{H'}^3$.\n\tThus, $\\E_3 \\cup \\Z_3$ is an $\\langle \\a \\rangle$-regular partition of $G_{H}^3$, and as explained above, this completes the proof.\n\n\n\\end{proof}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDengue Fever is a mosquito-borne disease with a significant global burden. Half of the world's population and 129 countries are at risk of infection \\cite{brady2012refining}. Between 100 and 400 million infections are registered each year globally \\cite{bhatt2013global}. Dengue presents as a flu-like illness with symptoms ranging from mild to severe. There are four serotypes, meaning up to four infections are possible for each person over their lifetime, with potentially life-threatening complications arising from severe Dengue. There is no specific treatment or universal vaccine, and the primary preventative measure is vector control. This underscores the importance of disease surveillance. Predictive models can help efficiently allocate public health resources to combat Dengue, with the goal of reducing the overall disease burden \\cite{who2020dengue}. 
Dengue incidence is associated with many different risk factors. These include climate conditions such as rainfall, extreme weather events, and temperature \\cite{chien2014impact, thu1998effect, fan2013identifying}, land use \\cite{kilpatrick2012drivers}, and poverty \\cite{mulligan2014dengue}. These factors may have nonlinear, context-specific, and time-variant effects on disease incidence, which poses a challenge to disease modeling. \n\nThe literature on Dengue forecasting is multi-disciplinary, uniting expertise from areas such as epidemiology, environmental science, computer science, and mathematics. Modeling frameworks include both theoretical and data-driven approaches. Compartmental models, for example, estimate the dynamics of Dengue infections in a population over time, and are based on extensive knowledge of the host, vector, and transmission process. Another simulation technique, agent based modeling is useful for estimating impacts of interventions, such as the release of sterile males to control the mosquito population \\cite{isidoro2009agent}. Statistical time series forecasting takes a more data-centric approach and is effective at modeling the highly auto-correlated nature of Dengue Fever \\cite{johansson2016evaluating, riley2020sarima_preprint}. Machine learning models leverage the increasing data availability on risk factors of disease spreading and offer non-parametric approaches that require less detailed knowledge of the disease and context \\cite{guo2017developing}. \n\nNeural networks are a subset of machine learning algorithms, which have made significant contributions to medicine and public health, including applications such as medical image analysis for disease diagnosis \\cite{qayyum2018medical}, identifying abnormalities in signals such as electrocardiographs (ECG) \\cite{papik1998application}, and optimizing decisions of health care providers, hospitals, and policy-makers \\cite{shahid2019applications}. \nNeural networks are also used to forecast diseases, including Malaria \\cite{zinszer2012scoping}, Influenza. \\cite{alessa2017review}, and Covid-19 \\cite{bullock2020mapping}. \n\nThis review examines the use of neural networks for Dengue Fever prediction. The objective is to summarize the existing literature and also provide an introduction to this still somewhat novel modeling technique. Our contributions are as follows:\n\\begin{itemize}\n\n \\item We summarize the technical decisions made in the literature, including architecture selection and hyper-parameter tuning. \n \\item We examine the data inputs (such as climate or population demographics) and best predictors of Dengue fever identified in specific contexts.\n \\item We review the relative performance of different neural network architectures and comparator models, such as other machine learning techniques. \n\\end{itemize}\n\n\n\nTo the knowledge of the authors, no systematic review of the literature on neural networks applied to Dengue Fever prediction has yet been conducted. \nSiriyasatien et al (2018) \\cite{siriyasatien2018dengue} provide a broader review of data science models applied for Dengue Fever prediction. Racloz et al (2012) \\cite{racloz2012surveillance} review the literature on surveillance systems for Dengue Fever. The study finds that most papers use logistic or multiple regression to analyze Dengue risk factors. 
(Seasonal) Auto-Regressive Integrated Moving Average (S\/ARIMA) models have become a popular choice to incorporate auto-regressions but are not suitable for all data types. Though also apt at analyzing auto-regressive behavior, the study does not review any literature on neural networks or machine learning for Dengue surveillance. We aim to fill this knowledge gap in the present review.\n\n\n\n\n\n\\section{A Primer on Neural Networks}\n\nThis section gives an overview of some of the neural network models that are relevant to Dengue prediction. We explain the model first intuitively, then mathematically, and focus on feed-forward neural networks, which are used most often for Dengue prediction. Many high-quality textbooks and open source tutorials offer a deeper introduction to neural networks (for example \\cite{deng2014deep, haykin1999neural}). Several Python libraries include easy-to-use implementations of neural network models (for example TensorFlow \\cite{tensorflow}, scikit-learn \\cite{scikit-learn}, or Keras \\cite{chollet2015keras}). We also provide sample code for Dengue prediction in a Github repository\\footnote{https:\/\/github.com\/KRoster\/NN4Dengue}.\n\n\nLike other machine learning models, neural networks learn to execute specific supervised tasks, such as predicting the number of Dengue infections, based on a large set of labeled examples. Given a set of inputs, such as climate conditions, the model learns to estimate the output, such as next month's Dengue incidence. After making a first guess, the model looks at the correct answer and updates its parameters to reduce its error on the next iteration until its predictions are optimized.\n\nComputationally, neural networks are represented as a network of processing units (\"neurons\" or \"hidden units\") that transform and pass information from one layer to the next. During training, we distinguish between forward- and backward-propagation, depending on the direction of the information flow. During the forward-propagation step, input information passes between units, each time being transformed according to a set of weights and a non-linear activation function. The prediction error of the output relative to the true label is computed. Back-propagation is then used to reduce this error on the next iteration: The weights of each unit, the parameters that define how information is combined, are updated according to their influence on the prediction error. This combination of forward- and backward-propagation is repeated several times until the predictions are increasingly accurate.\n\nAfter this training phase, model performance is tested on a hold-out test set. Since the test set is not used to train the model, it can give a good indication of how well the model will generalize to new data, once applied in the real world.\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{\"nn_fully_annotated\".png}\n \\caption{Sample feed-forward neural network architecture}\n \\medskip\n \\small\n The figure shows a sample neural network architecture with 4 input features, 2 hidden layers with 5 and 2 nodes respectively and a single node in the output layer.\n \\label{fig:nn annotated}\n\\end{figure}\n\nThere are many different neural network architectures that are designed for different kinds of inputs and tasks. The architecture determines how the units are connected, how information flows through the network, and how many weights need to be optimized. 
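\n\nTo make the training loop described above concrete, the short sketch below specifies and fits a small feed-forward network with the Keras library mentioned earlier; the synthetic data, layer sizes, and training settings are illustrative assumptions rather than recommendations. With four inputs, hidden layers of five and two units, and a single output, such a network has $(4\\cdot 5+5)+(5\\cdot 2+2)+(2\\cdot 1+1)=40$ trainable weights and biases.\n\\begin{verbatim}\nimport numpy as np\nfrom tensorflow import keras\n\n# Illustrative only: synthetic data standing in for four weekly\n# predictors (e.g., lagged cases, temperature, rainfall, humidity).\nrng = np.random.default_rng(0)\nX_train, y_train = rng.random((100, 4)), rng.random(100)\n\n# A small network with hidden layers of 5 and 2 units (40 parameters).\nmodel = keras.Sequential([\n    keras.Input(shape=(4,)),\n    keras.layers.Dense(5, activation='relu'),\n    keras.layers.Dense(2, activation='relu'),\n    keras.layers.Dense(1)   # predicted Dengue incidence next period\n])\n\n# Optimizer, loss, epochs and batch size are placeholder\n# hyper-parameters to be tuned on a validation set.\nmodel.compile(optimizer='adam', loss='mse')\nmodel.fit(X_train, y_train, epochs=200, batch_size=16,\n          validation_split=0.2, verbose=0)\n\\end{verbatim}\n\n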
An example of a feed-forward network is illustrated in figure \\ref{fig:nn annotated}. It shows the forward propagation of information through the individual neurons of the network, which was described above. The information that is passed through a given neuron $j$ in layer $l$, called its activation $a_{j}^{[l]}$, is computed as the linear combination of weights $w_{ij}^{[l]}$, biases $b_{j}^{[l]}$, and inputs from the previous layer $a_{i}^{[l-1]}$, which is passed through a nonlinear activation function $g^{[l]}$: \n\n\\begin{flalign} \\label{eq forward prop}\n a_{j}^{[l]} &= g^{[l]}(\\sum\\limits_{i=1}^{n_l} w_{ij}^{[l]} a_{i}^{[l-1]} + b_{j}^{[l]}) \\\\\n \\text{with: } a_{j}^{[0]} &= X_{j} \\\\\n \\hat{y} &= a^{[k]}\n\\end{flalign}\n\n\\noindent where ${n_l}$ is the number of neurons in layer $l$, $k$ is the number of layers (the input layer is counted as layer $0$), and $\\hat{y}$ is the prediction that is generated in the final layer.\n\n\nThe gradient of the loss with respect to a given weight $w_{ij}^{[l]}$ or bias $b_{j}^{[l]}$ tells the model how the given parameter needs to be adjusted, with the learning rate $\\alpha$ determining the size of the adjustment:\n\n\\begin{flalign} \\label{eq gradient descent}\n \\theta &= \\theta - \\alpha \\frac{d\\mathcal{L}}{d\\theta}\n\\end{flalign}\n\n\\noindent where $\\theta$ is a given set of weights or biases to be updated and $\\mathcal{L}$ is the loss.\n\n\n\nA benefit of neural networks is that parameters are updated by the model itself, not predefined by the researcher. They do not rely on strong assumptions and knowledge of the disease. Yet some hyper-parameters of neural network models must still be set by the researcher and are generally determined through a combination of domain knowledge and iterative experimentation. These hyper-parameters are tuned using a validation set or through cross-validation. Hyper-parameters may include the learning rate (by how much the weights are updated at each backward propagation step), the number of epochs (how often the forward and backward propagation steps are repeated), the mini-batch size (how many training examples are processed at each step), how weights are initialized (for example all zeros or random values), the number of hidden units in each layer, and the number of layers. Other important decisions include the choice of activation function in each layer, the choice of loss function, and the relative size of training, validation, and test sets. The optimal choices and ranges tested for these hyper-parameters depend, among other aspects, on the size and type of data available and the nature of the predictive task. One aim of this review is to summarize the choice of parameters deemed optimal by researchers in the existing literature to assist future researchers in the hyper-parameter tuning process. \n\nBesides feed-forward neural networks, this review includes two other categories of models used for Dengue forecasting. Recurrent neural networks (RNN) were developed for the analysis of sequence data, such as time series (sequences of observations in time) or text (sequences of words). They are cyclical (they contain loops) and process sequences of inputs together, giving the network a \"memory\" to remember historical feature states. RNNs are used in contexts where the evolution of input features matters for the target feature, where the prediction relies on information at multiple time steps. 
An example is machine translation, where the meaning of a word is influenced by its context, by the other words in the sentence. The number of Dengue cases may also depend on the sequence of risk factors or the sequence of previous Dengue incidence. Using time series data, RNNs may help capture the auto-correlation of the disease and lagged influence of risk factors, such as rainfall or travel patterns. Within the family of RNNs we distinguish different architectures, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) \\cite{hochreiter1997lstm, schmidhuber2015deep, lipton2015critical}. \n\nConvolutional neural networks (CNN) are another subclass of deep neural network models with applications in computer vision \\cite{lecun1990handwritten}, such as image segmentation and classification. Inputs into CNNs are three-dimensional, such as a two-dimensional image with three color channels, which together form a 3-D matrix of pixel values. The architectures of CNNs include different kinds of operations, such as convolution and pooling, which help identify key regions in the image that inform the final prediction. For example, the model may learn that the presence of a body of water (shiny pixels in a satellite image) influences the number of Dengue cases we should expect. Crucial in this example is that the type of relevant landscape feature is not defined by the researcher but learned by the model. However, CNNs may also be used for data processing prior to prediction, for example to classify land use (such as vegetation or urban areas) from satellite images, which is known to influence Dengue spread and which can then be fed into a separate predictive model for Dengue.\n\n\nThe tasks executed by machine learning models in general, and neural networks in particular, fall into two different categories: classification and prediction. Classification is the process of assigning a categorical label to an example based on its inputs, such as determining whether a patient has Dengue fever or a different disease. Prediction is the process of forecasting a future state based on historical or current inputs, for example predicting next month's Dengue incidence from this month's rainfall. \n\nThis review is focused on predictive models of Dengue Fever at the population level. In addition to classic regression models, we include papers using classification for prediction approaches, which produce categorical instead of continuous outputs. For example, a model may predict whether a city will experience an outbreak (binary classification) or which risk category will prevail (multi-class classification). \n\n\n\n\n\n\n\n\\section{Methodology}\n\n\\begin{figure}[!h]\n    \\centering\n    \\includegraphics[width=0.8\\textwidth]{\"PRISMA_2009Flow_filled_new\".pdf}\n    \\caption{PRISMA Flow Chart}\n    \\label{fig:prisma flow}\n\\end{figure}\n\nThis review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines \\cite{prisma2009}. The PRISMA flow chart is represented in figure \\ref{fig:prisma flow}. A systematic search was conducted using Web of Science\/ Knowledge, Scopus (abstract, title, keywords), PubMed, and Science Direct (abstract, title) databases. References of papers appearing in the search results were also examined for relevant works. 
The searches were conducted between 21 July and 17 August 2020 and used the following search string:\n\\begin{center}\n \"( deep learning OR neural network ) AND dengue\"\n\\end{center}\n\n\\begin{comment}\nThe databases yielded the following numbers of results:\n\n\\begin{itemize}\n \\item Web of Science \/ Knowledge: 92 (abstract screened: 46) (full text review: 26)\n \\item PubMed: 21 (abstract screened: 4) (full text review: 4)\n \\item Scopus: 23 (abstract screened: 18) (full text review: 8)\n \\item Science Direct: 13 (abstract screened: 3) (full text review: 3)\n \\item references of other papers: 31 (abstract screened: 27) (full text review: 21)\n\\end{itemize}\n\n\\end{comment}\n\n\n\n\n\n\nThe papers were examined for inclusion in the review in three stages: by title, by abstract, and finally by full text. In each phase, the following inclusion and exclusion criteria were applied liberally to determine which papers would be considered in the next phase. If the information in the title and\/or abstract was inconclusive, the paper was included in the full-text review. Inclusion and exclusion criteria are as follows:\n\n\\begin{itemize}\n \\item Studies must implement a neural network or deep learning technique, either as the main method or as a comparator model. Reviews of the literature on neural networks for Dengue forecasting would also be considered. \n \\item Studies must predict Dengue fever incidence or risk. There are no restrictions on the type of target variable used. For example, the number of Dengue cases, Dengue incidence rate, or a binary Dengue risk variable are all accepted as target features.\n \\item Studies must examine Dengue in a human population. Models for disease diagnosis of individuals are excluded. Studies modeling the location of vectors without relation to Dengue incidence are excluded. Studies examining animal hosts are excluded.\n \\item As the search query is in English, only English language articles were identified and included in the review.\n\\end{itemize}\n\n\n\n\n\n\\section{Results}\nThis section summarizes the findings of the review. The full list of papers and their properties are provided in table 1 at the end of this section.\n\n\n\\subsection{Prediction Target}\nWithin the criteria of study selection, there was significant variation in the study specifications, including the formulation of the target feature. Most studies predict the future number of Dengue cases based on historical time series. However, there are some notable exceptions. Andersson et al (2019) \\cite{andersson2019combining} predict a static incidence rate at the neighborhood level in Rio de Janeiro, Brazil. Two studies predict the risk of a Dengue outbreak: Abeyrathna et al (2019) \\cite{abeyrathna2019scheme} formulate their task as a binary classification of 'outbreak' or 'no outbreak', while Anno et al (2019) \\cite{anno2019spatiotemporal} classify cities according to five risk categories. \n\nEarly forecasts provide more time for public health response but may have lower confidence. In most of the selected studies, researchers predict one period ahead corresponding to the measurement frequency of the data used. A single study predicted the present: Livelo and Cheng (2018) \\cite{livelo2018intelligent} use social media activity in the Philippines to predict the present situation of Dengue infection. They use a neural network model to classify tweets according to a set of different categories of Dengue-related topics. 
Their weekly Dengue index based on this Twitter data is correlated with actual Dengue case counts. Koh et al. (2018) \\cite{koh2018model} compare one and two-week forecast horizons. Soemsap et al. (2014) \\cite{soemsap2014forecasting} use a two-week horizon with weekly data inputs. Dharmawardhana et al (2017) \\cite{dharmawardana2017predictive} predict Dengue cases in Sri Lankan districts four weeks ahead using weekly data on cases, climate, human mobility, and vegetation indices. \nChakraborty et al. (2019) \\cite{Chakraborty2019forecasting} use the longest forecasting horizon. They use three different datasets of Dengue cases, two of which are weekly (Peru and Puerto Rico), and one which is measured monthly (Philippines). For the weekly data, they compare three and six month horizons. For the monthly data, they compare a horizon of six months with one year forecasts. For Peru and the Philippines, the best model changes with the prediction horizon. \n\n\n\n\n\\subsection{Data Sources}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{\"articles_for_review_variables_used2\".pdf}\n \\caption{Variable types used in the selected studies}\n \\medskip\n \\small\n The figure illustrates how frequently studies included individual data types as well as combinations thereof. Each row represents a data type and the horizontal bars measure their frequency of inclusion in the selected papers. The columns represent the combinations of variables, and the vertical bars measure how often the given combinations occurred. \n \\label{fig:variables}\n\\end{figure}\n\nBesides epidemiological data on the history of Dengue Fever, which is used in all studies, the most common predictors are meteorological variables (used in 14 studies) (see figure \\ref{fig:variables}). Additional data sources include: vegetation indices and other landscape features extracted from satellite images (in two studies), human mobility, specifically mobile phone data (used in one study), aerial and street view images (in one study), social media data, specifically from Twitter (one study), vector information (one study), and demography (one study). Three papers use only epidemiological data.\n\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{\"articles_for_review_distribution_of_geographic_scale\".png}\n \\caption{Geographic granularity used in the selected studies}\n \\label{fig:geographic granularity}\n\\end{figure}\n\nMost studies examine granular geographic regions, such as cities (nine studies) or districts (six studies). Two studies even modeled Dengue at the sub-city (neighborhood) level (see figure \\ref{fig:geographic granularity}). Granular scales can be beneficial in two ways: they allow for local predictions and accordingly targeted public health response. They also generally result in larger training datasets with more observations than national or state-level aggregations, which plays to the strength of neural networks. The studies cover countries in both Asia and Latin America, which are considered the primary Dengue endemic regions \\cite{who2020dengue}. A total of 13 countries are included (see figure \\ref{fig:countries map}), with the Philippines and Brazil appearing most often. 
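\n\nMost of these data sources are weekly or monthly tabular series reported per city or district, so a common preprocessing step is to align the epidemiological and meteorological series and add lagged copies of each predictor before fitting a model. The pandas sketch below illustrates this step; the column names, lag lengths, and toy values are assumptions for the example rather than choices made in the reviewed studies.\n\\begin{verbatim}\nimport pandas as pd\n\n# Toy weekly series: case counts and two climate covariates.\ncases = pd.DataFrame({'week': range(10), 'cases': range(10)})\nclimate = pd.DataFrame({'week': range(10),\n                        'rainfall': range(10),\n                        'temperature': range(10)})\n\ndf = cases.merge(climate, on='week')\n# Add lagged predictors (1- and 2-week lags are assumptions).\nfor col in ['cases', 'rainfall', 'temperature']:\n    for lag in (1, 2):\n        df[col + '_lag' + str(lag)] = df[col].shift(lag)\ndf['target'] = df['cases'].shift(-1)   # next week's cases\ndf = df.dropna()\n\\end{verbatim}\n\n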
\n\n\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{\"articles_for_review_geographic_distribution\".png}\n \\caption{Countries covered by the selected studies}\n \\label{fig:countries map}\n\\end{figure}\n\n\n\n\n\\subsection{Model Selection}\n\n\\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{\"articles_for_review_models_used2\".pdf}\n \\caption{Models used in the selected studies}\n \\label{fig:models}\n \\medskip\n \\small\n The figure illustrates how frequently studies used different forecasting models as well as combinations thereof. Each row represents an algorithm and the horizontal bars measure their frequency of inclusion in the selected papers. Green rows refer to neural network models, while blue bars show comparator models, such as machine learning algorithms. The columns illustrate which combinations of models were used, the frequency of which is measured by the vertical bars. \n\\end{figure}\n\n\nMost studies implement a shallow neural network with just one hidden layer. Four studies use deeper feed-forward architectures. Three studies employ recurrent neural networks, two of which used LSTM units and one used a GRU. Three studies implemented CNNs. \n\n\nThe neural network models were compared to a range of other models. Support Vector Machine (SVM) models were most common (six studies). Generalized Additive Models (GAMs), (S)ARIMA and linear autoregressive (AR) models occurred twice each. The remaining comparators were used in just one paper each: Gradient Boosting Machine (GBM), XGBoost, Linear regression, Poisson regression, Tsetlin Machine (TM), Cellular automata, a compartmental model, and a naive baseline. Figure \\ref{fig:models} shows how frequently different model types and combinations thereof were used in the selected literature. \n\nThe most common comparator model used is SVM. Evidence suggests that both SVM and neural networks perform well, and which is better depends on the context. In Xu et al (2020) \\cite{xu2020forecast}, an LSTM neural network outperforms an SVM model in predicting the number of Dengue fever cases in 20 Chinese cities. Transfer learning further increased the performance gains in cities with few overall Dengue cases. Abeyrathna et al (2019) \\cite{abeyrathna2019scheme} use a classification for prediction approach to identify whether districts in the Philippines are likely to experience a Dengue Fever outbreak. Whether SVM or the neural network model is better in the context of this study depends on the evaluation metric used: SVM has a higher precision, while ANN has a higher F1-score. Yusof and Mustaffa (2011) \\cite{yusof2011dengue} and Kesorn et al (2015) \\cite{kesorn2015morbidity} both find SVM to have a higher performance than the neural network. \n\nFour studies use a feed-forward network with at least two hidden layers. They provide varying evidence to the performance relative to shallow neural networks or other machine learning models. In \\cite{dharmawardana2017predictive}, a model with two hidden layers performs better than an XGBoost model in forecasting Dengue in Sri Lankan districts. In two studies (\\cite{abeyrathna2019scheme} and \\cite{baquero2018dengue}), the deep neural network provides better predictions than alternative neural network models, but is not the best model overall: In \\cite{baquero2018dengue}, the MLP with two hidden layers performs better than the LSTM-RNN to forecast Dengue in a Brazilian city, but a GAM is best overall. 
In \\cite{abeyrathna2019scheme}, a model with three hidden layers has lower errors than models with a single or five hidden layers. However, the best F1 score in classifying binary outbreak risk is achieved by a Tsetlin Machine. The final study \\cite{rahayu2019prediction} uses a model with three hidden layers but does not employ any comparator models.\n\n\nPerformance of the RNN models is also mixed. The two studies using LSTM-RNNs produce contradicting results despite having similar study designs. Baquero et al. (2018) \\cite{baquero2018dengue} and Xu et al. (2020) \\cite{xu2020forecast} both use LSTM-RNNs to forecast Dengue at the city level, compare their model to a feed-forward neural network and a GAM, use meteorological and epidemiological input features, and evaluate performance using RMSE. Xu et al. (2020) \\cite{xu2020forecast} find the LSTM to have the best performance relative to all comparator models, whereas in the study by Baquero et al. (2018) \\cite{baquero2018dengue}, the LSTM-ANN performs worse than the simple neural network and the best model overall is the GAM. \n\nThe three studies using CNNs do not provide comparisons to other types of models, though one study compares different CNN architectures. However, the key advantage of the examined CNN studies is the granular geographic resolution in predictions. The models take satellite and other images as inputs and can therefore generate forecasts at the sub-city level. Andersson et al. (2019) \\cite{andersson2019combining} compare different kinds of CNN architectures (based on DenseNet-161) for each of three input types, specifically satellite imagery, street view images, and the combination of the two. Their model predicts Dengue incidence rates at the neighborhood level in Rio de Janeiro, Brazil. Best performance is achieved when combining street and aerial images. Rehman et al. (2019) \\cite{rehman2019deep} implement a pre-trained CNN (based on U-Net architecture) to extract landscape features from satellite imagery, which they use in a compartmental SIR model to forecast Dengue in neighborhoods of two Pakistani cities. Anno et al. (2019) \\cite{anno2019spatiotemporal} use a CNN (based on AlexNet architecture) to classify five levels of outbreak risk in cities in Taiwan, based on images capturing sea surface temperatures. \n\n\n\n\n\n\n\n\\subsection{Architecture}\nArchitectures vary across the different models. The studies implementing shallow models with one hidden layer cumulatively tested 1-30 hidden units. Three studies (\\cite{wijekoon2014prediction, aburas2010dengue, soemsap2014forecasting}) tested a broad number of values across this range, and their optimal number of hidden units were related to the number of input features. Two studies \\cite{wijekoon2014prediction, aburas2010dengue} identified four hidden units as the optimum. Both used meteorological and epidemiological variables and both measured four different pieces of information. Wijekoon et al (2014) \\cite{wijekoon2014prediction} used temperature, rainfall, humidity and Dengue cases. Aburas et al (2010) \\cite{aburas2010dengue} included a total of seven features that measured the same information but with additional lags. The third study \\cite{soemsap2014forecasting} identified 25 nodes as the optimum, which matches the larger input size of 32 features. 
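\n\nSuch choices are usually made empirically, by comparing several candidate sizes on held-out data. The sketch below illustrates one common way of doing so with the scikit-learn library cited earlier; the synthetic data, the candidate layer sizes, and the time-series cross-validation are assumptions for illustration rather than the procedure used in the studies above.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.neural_network import MLPRegressor\nfrom sklearn.model_selection import GridSearchCV, TimeSeriesSplit\n\n# Synthetic data standing in for lagged case and climate features.\nrng = np.random.default_rng(0)\nX, y = rng.random((150, 7)), rng.random(150)\n\n# Candidate numbers of hidden units are assumptions for the example.\nsearch = GridSearchCV(\n    MLPRegressor(max_iter=2000, random_state=0),\n    param_grid={'hidden_layer_sizes': [(4,), (8,), (16,), (25,)]},\n    cv=TimeSeriesSplit(n_splits=3),\n    scoring='neg_root_mean_squared_error')\nsearch.fit(X, y)\nprint(search.best_params_)\n\\end{verbatim}\n\n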
Instead of trying different values, Datoc et al (2016) \\cite{datoc2016forecasting} fixed the number of hidden units to the number of input features, and selected the best-performing model among eight models with three to five nodes. The best model used three nodes (corresponding to three input features). \n\nAbeyrathna et al (2019) \\cite{abeyrathna2019scheme} compared shallow and deep architectures. Their best neural network model had three hidden layers with (20, 150, 100) units, respectively. This model performed better than the two shallow models with five or 20 neurons in a single hidden layer. It also outperformed a five-hidden-layer model with (20, 200, 150, 100, 50) topology. \n\nBaquero et al (2018) \\cite{baquero2018dengue} also used deep architectures with two hidden layers, with (10 or 20) neurons in the first layer and (5 or 10) neurons in the second. They implemented both MLP and LSTM models. The best neural network model was an MLP with (20,10) neurons, though a Generalized Additive Model (GAM) and an ensemble both performed better than the neural networks. \n\nThe other LSTM model was implemented by Xu et al (2020) \\cite{xu2020forecast} with a different architecture. They used a single hidden layer with 64 neurons. Unlike Baquero et al (2018) \\cite{baquero2018dengue}, their model outperformed GAM as well as SVR and GBM. \n\n\n\n\\subsection{Model Evaluation}\nThe studies utilize a range of metrics to assess model performance, in part due to different prediction targets. The most common measures are Root Mean Squared Error (RMSE) and Pearson's correlation, which are each used in seven studies. Mean Absolute Error (MAE), Mean Squared Error (MSE), and accuracy are also commonly used.\n\n\n\n\n\n\\section{Discussion}\n\nThe literature on neural networks applied to Dengue forecasting is still somewhat scarce, but the reviewed studies include some promising examples, where neural networks outperform other machine learning or statistical approaches. The literature therefore suggests that neural networks may be appropriate for inclusion in a set of candidate models when forecasting Dengue incidence. However, there is variation as to whether neural network models or other approaches perform best. Future research may compare different model types and architectures on multiple datasets, to better understand how the ideal model varies by context, such as geography or data availability.\n\nThough many risk factors of Dengue have been identified, most neural network models limit inputs to meteorological and epidemiological data. Future studies may evaluate the value of alternative predictors, especially as NNs tend to deal well with high-dimensional problems. Along the same lines, there is space for more incorporation of non-causal predictors of Dengue fever from sources such as social media. Some papers in this review have leveraged these new predictors, specifically from Twitter \\cite{livelo2018intelligent}, street view and satellite images \\cite{andersson2019combining}, mobile phone records \\cite{dharmawardana2017predictive}, and features derived from satellite imagery \\cite{dharmawardana2017predictive, rehman2019deep}. These data sources have also been effective with other machine learning models, for example Baidu search data to predict Dengue in a Chinese province \\cite{guo2017developing}. \n\nTransfer learning was implemented in \\cite{xu2020forecast}, where it provided promising results. 
The authors trained a model on a city with high Dengue incidence and used it to predict disease in lower-incidence geographies. This was the only study in this review that used transfer learning outside the context of pre-trained CNNs. This may present another avenue for further research, especially in locations where data is scarce. \n\n\\subsubsection*{Acknowledgments}\n\nKirstin Roster gratefully acknowledges support from the S\u00e3o Paulo Research Foundation (FAPESP) (grant number 2019\/26595-7). Francisco Rodrigues acknowledges partial support from CNPq (grant number 309266\/2019-0) and FAPESP (grant number 2019\/23293-0).\n\n\\begin{comment}\n\\begin{table}[]\n \\centering\n \n \n {\\small %\n \\begin{tabular}{L{3cm} | L{1.5cm} | L{1.5cm} | L{2cm} | L{1.5cm} | L{3cm} }\n \\toprule\n Reference & Country & Geospatial scale & Target & Prediction horizon & Input data \\\\\n \\midrule\n \n \\bottomrule\n \\end{tabular}\n }%\n \n\\end{table}\n\\end{comment}\n\n\n\\includepdf[pages={1-}, fitpaper]{\"Tableofincludedpapers_2021-05-31\".pdf}\n\n\n\\newpage\n\\nocite{*}\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSubmodular set functions are defined by the following condition for all pairs of sets $S,T$:\n$$ f(S \\cup T) + f(S \\cap T) \\leq f(S) + f(T),$$\nor equivalently by the property that the {\\em marginal value} of any element,\n$f_S(j) = f(S+j)-f(S)$, satisfies $f_T(j) \\leq f_S(j)$, whenever\n$j \\notin T \\supset S$. In addition, a set function is called monotone\nif $f(S) \\leq f(T)$ whenever $S \\subseteq T$.\nThroughout this paper, we assume that $f(S)$ is nonnegative.\n\nSubmodular functions have been studied in the context of combinatorial\noptimization since the 1970's, especially in connection with matroids\n\\cite{E70,E71,NWF78,NWF78II,NW78,W82a,W82b,L83,F97}.\nSubmodular functions appear mostly for the following two reasons:\n(i) submodularity arises naturally in various combinatorial settings,\nand many algorithmic applications use it either explicitly or implicitly;\n(ii) submodularity has a natural interpretation as the property\nof {\\em diminishing returns}, which defines an important class of\nutility\/valuation functions.\nSubmodularity as an abstract concept is both general enough to be useful for applications\nand it carries enough structure to allow strong positive results.\nA fundamental algorithmic result is that any submodular function\ncan be {\\em minimized} in strongly polynomial time \\cite{FFI01,Lex00}.\n\nIn contrast to submodular minimization, submodular maximization problems are\ntypically hard to solve exactly. Consider the classical problem\nof maximizing a monotone submodular function subject to a cardinality constraint,\n$\\max \\{f(S): |S|\\leq k\\}$. 
It is known that this problem admits a $(1-1\/e)$-approximation\n\\cite{NWF78} and this is optimal for two different reasons:\n(i) Given only black-box access to $f(S)$, we cannot achieve a better approximation,\nunless we ask exponentially many value queries \\cite{NW78}.\nThis holds even if we allow unlimited computational power.\n(ii) In certain special cases where $f(S)$ has a compact representation on the input,\nit is NP-hard to achieve an approximation better than $1-1\/e$ \\cite{Feige98}.\nThe reason why the hardness threshold is the same in both cases\nis apparently not well understood.\n\nAn optimal $(1-1\/e)$-approximation for the problem $\\max \\{f(S): |S| \\leq k\\}$ where $f$ is\nmonotone submodular is achieved\nby a simple greedy algorithm \\cite{NWF78}. This seems to be rather coincidental; for\nother variants of submodular maximization, such as unconstrained (nonmonotone)\nsubmodular maximization \\cite{FMV07}, monotone submodular maximization subject\nto a matroid constraint \\cite{NWF78II,CCPV07,Vondrak08}, or submodular\nmaximization subject to linear constraints \\cite{KTS09,LMNS09},\ngreedy algorithms achieve suboptimal results. A tool which has proven useful\nin approaching these problems is the {\\em multilinear relaxation}.\n\n\\\n\n\\noindent{\\bf Multilinear relaxation.}\nLet us consider a discrete optimization problem $\\max \\{f(S): S \\in {\\cal F}\\}$, where\n$f:2^X \\rightarrow {\\boldmath R}$ is the objective function and ${\\cal F} \\subset 2^X$\nis the collection of feasible solutions. In case $f$ is a linear function,\n$f(S) = \\sum_{j \\in S} w_j$, it is natural to replace this problem by a linear\nprogramming problem. For a general set function $f(S)$, however, a linear\nrelaxation is not readily available. Instead, the following relaxation\nhas been proposed \\cite{CCPV07,Vondrak08,CCPV09}:\nFor ${\\bf x} \\in [0,1]^X$, let $\\hat{{\\bf x}}$ denote a random vector in $\\{0,1\\}^X$\nwhere each coordinate $x_i$ is rounded independently to $1$ with probability $x_i$\nand $0$ otherwise.\\footnote{We denote vectors consistently in boldface $({\\bf x})$ and their coordinates in italics $(x_i)$.\nWe also identify vectors in $\\{0,1\\}^n$ with subsets of $[n]$ in a natural way.\n}\nWe define\n$$ F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})] = \\sum_{S \\subseteq X} f(S) \\prod_{i \\in S} x_i\n \\prod_{j \\notin S} (1-x_j).$$\nThis is the unique {\\em multilinear polynomial} which coincides with $f$ on $\\{0,1\\}$-vectors.\nWe remark that although we cannot compute the exact value of $F({\\bf x})$ for a given\n${\\bf x} \\in [0,1]^X$ (which would require querying $2^n$ possible values of $f(S)$),\nwe can compute $F({\\bf x})$ approximately, by random sampling. 
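\nAs a concrete illustration of this sampling estimate (a minimal sketch added for exposition, not part of the formal development; the value oracle \\texttt{f}, the representation of ${\\bf x}$ as a dictionary, and the number of samples are assumptions made only for this example):\n\\begin{verbatim}\nimport random\n\ndef estimate_F(f, x, num_samples=1000):\n    # f: value oracle mapping a frozenset of elements to a nonnegative value\n    # x: dict mapping each element of the ground set to a probability in [0,1]\n    total = 0.0\n    for _ in range(num_samples):\n        # round each coordinate independently: keep i with probability x[i]\n        sample = frozenset(i for i, p in x.items() if random.random() < p)\n        total += f(sample)\n    return total \/ num_samples\n\\end{verbatim}\nBy standard concentration bounds, polynomially many independent samples estimate $F({\\bf x})$ within a small additive error relative to $\\max_S f(S)$, with high probability.\n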
Sometimes this causes\ntechnical issues, which we also deal with in this paper.\n\nInstead of the discrete problem $\\max \\{f(S): S \\in {\\cal F}\\}$,\nwe consider a continuous optimization problem $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$,\nwhere $P({\\cal F})$ is the convex hull of characteristic vectors corresponding to ${\\cal F}$,\n$$ P({\\cal F})\n = \\left\\{ \\sum_{S \\in {\\cal F}} \\alpha_S {\\bf 1}_S:\n \\sum_{S \\in {\\cal F}} \\alpha_S = 1, \\alpha_S \\geq 0 \\right\\}.$$\nThe reason why the extension $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$ is useful\nfor submodular maximization problems is that $F({\\bf x})$ has convexity properties\nthat allow one to solve the continuous problem $\\max \\{F({\\bf x}): {\\bf x} \\in P\\}$\n(within a constant factor) in a number of interesting cases.\nMoreover, fractional solutions can be often\nrounded to discrete ones without losing {\\em anything} in terms of the objective\nvalue. Then, our ability to solve the multilinear relaxation\ntranslates directly into an algorithm for the original problem.\nIn particular, this is true when the collection of feasible solutions\nforms a matroid.\n\n{\\em Pipage rounding} was originally developed by Ageev and Sviridenko\nfor rounding solutions in the bipartite matching polytope \\cite{AS04}.\nThe technique was adapted to matroid polytopes by Calinescu et al.\n \\cite{CCPV07}, who proved that for any submodular function $f(S)$\nand any ${\\bf x}$ in the matroid base polytope $B({\\cal M})$,\nthe fractional solution ${\\bf x}$ can be rounded to a base\n$B \\in {\\cal B}$ such that $f(B) \\geq F({\\bf x})$.\nThis approach leads to an optimal $(1-1\/e)$-approximation for\nthe Submodular Welfare Problem,\nand more generally for monotone submodular maximization subject\nto a matroid constraint \\cite{CCPV07,Vondrak08}.\nIt is also known that\nthe factor of $1-1\/e$ is optimal for the Submodular Welfare Problem\nboth in the NP framework \\cite{KLMM05} and in the value oracle model\n\\cite{MSV08}. Under the assumption that the submodular function $f(S)$\nhas {\\em curvature} $c$, there is a $\\frac{1}{c}(1-e^{-c})$-approximation\nand this is also optimal in the value oracle model \\cite{VKyoto08}.\nThe framework of pipage rounding can be also extended to nonmonotone submodular functions;\nthis presents some additional issues which we discuss in this paper.\n\nFor the problem of\nunconstrained (nonmonotone) submodular maximization, a $2\/5$-approximation\nwas developed in \\cite{FMV07}. This algorithm implicitly uses\nthe multilinear relaxation $\\max \\{F({\\bf x}): {\\bf x} \\in [0,1]^X\\}$.\nFor {\\em symmetric} submodular functions, it is shown in \\cite{FMV07}\nthat a uniformly random solution ${\\bf x} = (1\/2,\\ldots,1\/2)$ gives\n$F({\\bf x}) \\geq \\frac12 OPT$, and there is no better approximation algorithm\nin the value oracle model. Recently, a $1\/2$-approximation was found\nfor unconstrained maximization of a general nonnegative submodular function \\cite{BFNS12}.\nThis algorithm can be formulated in the multilinear relaxation framework, but also\nas a randomized combinatorial algorithm.\n\nUsing additional techniques, the multilinear relaxation can be also applied\nto submodular maximization with knapsack constraints\n($\\sum_{j \\in S} c_{ij} \\leq 1$). 
For the problem of maximizing a monotone\nsubmodular function subject to a constant number of knapsack constraints,\nthere is a $(1-1\/e-\\epsilon)$-approximation algorithm for any $\\epsilon > 0$ \\cite{KTS09}.\nFor maximizing a nonmonotone submodular function\nsubject to a constant number of knapsack constraints,\na $(1\/5-\\epsilon)$-approximation was designed in \\cite{LMNS09}.\n\nOne should mention that not all the best known results for submodular maximization\nhave been achieved using the multilinear relaxation. The greedy algorithm\nyields a $1\/(k+1)$-approximation for monotone submodular maximization\nsubject to $k$ matroid constraints \\cite{NWF78II}. Local search methods\nhave been used to improve this to a $1\/(k+\\epsilon)$-approximation,\nand to obtain a $1\/(k+1+1\/(k-1)+\\epsilon)$-approximation\nfor the same problem with a nonmonotone submodular function,\nfor any $\\epsilon>0$ \\cite{LMNS09,LSV09}.\nFor the problem of maximizing a nonmonotone submodular function\nover the {\\em bases} of a given matroid, local search yields\na $(1\/6-\\epsilon)$-approximation, assuming that the matroid contains two disjoint bases\n\\cite{LMNS09}.\n\n\n\\subsection{Our results}\n\nOur main contribution (Theorem~\\ref{thm:general-hardness}) \nis a general hardness construction that yields\ninapproximability results in the value oracle model in an automated\nway, based on what we call the {\\em symmetry gap} for some fixed instance.\nIn this generic fashion, we are able to replicate a number of previously\nknown hardness results (such as the optimality of the factors\n$1-1\/e$ and $1\/2$ mentioned above),\nand we also produce new hardness results using this construction\n(Theorem~\\ref{thm:matroid-bases}).\nOur construction helps explain the particular hardness thresholds obtained under\nvarious constraints, by exhibiting a small instance where the threshold can be\nseen as the gap between the optimal solution and the best symmetric solution.\nThe query complexity results in \\cite{FMV07,MSV08,VKyoto08}\ncan be seen in hindsight as special cases of Theorem~\\ref{thm:general-hardness},\nbut the construction in this paper is somewhat different and\ntechnically more involved than the previous proofs for particular cases.\n\n\\paragraph{Concrete results}\nBefore we proceed to describe our general hardness result, we present its implications\nfor two more concrete problems.\nWe also provide closely matching approximation algorithms for these two problems,\nbased on the multilinear relaxation. In the following, we assume that the objective function is given by a value oracle\nand the feasibility constraint is given by a membership oracle: a value oracle for $f$ returns the value $f(S)$ for a given set $S$,\nand a membership oracle for ${\\cal F}$ answers whether $S \\in {\\cal F}$ for a given set $S$.\n\nFirst, we consider the problem of maximizing a nonnegative (possibly nonmonotone) submodular function\nsubject to a \\emph{matroid base} constraint. (This generalizes for example the maximum bisection\nproblem in graphs.) We show that the approximability of this problem is related to base\npackings in the matroid. 
We use the following definition.\n\n\\begin{definition}\n\\label{def:fractional-base}\nFor a matroid ${\\cal M}$ with a collection of bases ${\\cal B}$, the fractional base\npacking number is the maximum possible value of $\\sum_{B \\in {\\cal B}} \\alpha_B$\nfor $\\alpha_B \\geq 0$ such that $\\sum_{B \\in {\\cal B}: j \\in B} \\alpha_B \\leq 1$ for\nevery element $j$.\n\\end{definition}\n\n\\noindent{\\bf Example.}\nConsider a uniform matroid with bases ${\\cal B} = \\{ B \\subset [n]: |B| = k \\}$. ($[n]$ denotes the set of integers\n$\\{1,2,\\cdots,n\\}$.)\nHere, we can take each base with a coefficient $\\alpha_B = 1 \/ {n-1 \\choose k-1}$, which satisfies\nthe condition $\\sum_{B \\in {\\cal B}: j \\in B} \\alpha_B \\leq 1$ for every element $j$ since every element\nis contained in ${n-1 \\choose k-1}$ bases. We obtain that the fractional packing number is at least\n${n \\choose k} \/ {n-1 \\choose k-1} = \\frac{n}{k}$. It is also easy to check that the fractional packing number\ncannot be larger than $\\frac{n}{k}$.\n\n\\begin{theorem}\n\\label{thm:matroid-bases}\nFor any $\\nu$ in the form $\\nu = \\frac{k}{k-1}, k \\geq 2$,\nand any fixed $\\epsilon>0$, a $(1-\\frac{1}{\\nu}+\\epsilon) = (\\frac{1}{k}+\\epsilon)$-approximation for the problem\n$\\max \\{f(S): S \\in {\\cal B}\\}$, where $f(S)$ is a nonnegative submodular function,\nand ${\\cal B}$ is a collection of bases in a matroid with fractional packing number at least $\\nu$,\nwould require exponentially many value queries.\n\nOn the other hand, for any $\\nu \\in (1,2]$,\nthere is a randomized $\\frac{1}{2}(1-\\frac{1}{\\nu}-o(1))$-approximation\nfor the same problem.\n\\end{theorem}\n\nIn case the matroid contains two disjoint bases ($\\nu=2$), we obtain a\n$(\\frac14-o(1))$-approximation, improving the previously known factor of $\\frac16-o(1)$ \\cite{LMNS09}.\nIn the range of $\\nu \\in (1,2]$, our positive and negative results are within a factor of $2$.\nFor maximizing a submodular function over the bases of a general matroid, \nwe obtain the following.\n\n\\begin{corollary}\n\\label{coro:matroid-bases}\nFor the problem $\\max \\{f(S): S \\in {\\cal B}\\}$, where $f(S)$ is a nonnegative submodular function,\nand ${\\cal B}$ is a collection of bases in a matroid, any constant-factor approximation requires\nan exponential number of value queries.\n\\end{corollary}\n\nWe also consider the problem of maximizing a nonnegative submodular function subject to a matroid independence\nconstraint. \n\n\\begin{theorem}\n\\label{thm:matroid-indep}\nFor any $\\epsilon>0$, a $(\\frac12 + \\epsilon)$-approximation for the problem $\\max \\{f(S): S \\in {\\cal I} \\}$,\nwhere $f(S)$ is a nonnegative submodular function, and ${\\cal I}$ is a collection of independent sets in a matroid,\nwould require exponentially many value queries.\n\nOn the other hand, there is a randomized $\\frac14 (-1+\\sqrt{5}-o(1)) \\simeq 0.309$-approximation for the same problem.\n\\end{theorem}\n\nOur algorithmic result improves a previously known $(\\frac14 - o(1))$-approximation \\cite{LMNS09}.\nThe hardness threshold follows from our general result, but also quite easily from \\cite{FMV07}.\n\n\n\\medskip\n\\noindent{\\bf Hardness from the symmetry gap.}\nNow we describe our general hardness result.\nConsider an instance $\\max \\{f(S): S \\in {\\cal F}\\}$ which exhibits a certain\ndegree of symmetry. This is formalized by the notion of a {\\em symmetry group}\n${\\cal G}$. 
We consider permutations $\\sigma \\in {\\bf S}(X)$ where ${\\bf S}(X)$\nis the symmetric group (of all permutations) on the ground set $X$.\nWe also use $\\sigma$ for the naturally induced mapping of subsets of $X$:\n$\\sigma(S) = \\{ \\sigma(i): i \\in S \\}$.\nWe say that the instance is invariant\nunder ${\\cal G} \\subset {\\bf S}(X) $, if for any $\\sigma \\in {\\cal G}$\nand any $S \\subseteq X$,\n$f(S) = f(\\sigma(S))$ and $S \\in {\\cal F} \\Leftrightarrow \\sigma(S) \\in {\\cal F}$.\nWe emphasize that even though we apply $\\sigma$ to sets, it must be derived\nfrom a permutation on $X$.\nFor ${\\bf x} \\in [0,1]^X$, we define the ``symmetrization of ${\\bf x}$'' as\n$$\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})],$$\nwhere $\\sigma \\in {\\cal G}$ is uniformly random and $\\sigma({\\bf x})$ denotes ${\\bf x}$ with coordinates permuted by $\\sigma$.\n\n\\medskip\n{\\bf Erratum:}\nThe main hardness result in the conference version of this paper \\cite{Vondrak09} was formulated\nfor an arbitrary feasibility constraint ${\\cal F}$, invariant under ${\\cal G}$.\nUnfortunately, this was an error and the theorem does not hold in that form\n --- an algorithm could gather some information from querying ${\\cal F}$, and combining this\nwith information obtained by querying the objective function $f$ it could possibly determine the hidden optimal solution.\nThe possibility of gathering information from the membership oracle for ${\\cal F}$ was neglected in the proof. (The reason for this\nwas probably that the feasibility constraints used in concrete applications of the theorem were very simple and indeed\ndid not provide any information about the optimum.)\nNevertheless, to correct this issue, one needs to impose a stronger symmetry constraint on ${\\cal F}$, namely\nthe condition that $S \\in {\\cal F}$ depends only on the symmetrized version of $S$, $\\overline{{\\bf 1}_S} = {\\bf E}_{\\sigma \\in {\\cal G}}[{\\bf 1}_{\\sigma(S)}]$.\nThis is the case in all the applications of the hardness theorem in \\cite{Vondrak09} and \\cite{OV11} and hence\nthese applications are not affected.\n\n\\begin{definition}\n\\label{def:total-sym}\nWe call an instance $\\max \\{f(S): S \\in {\\cal F}\\}$ on a ground set $X$ strongly symmetric with respect to a group of permutations ${\\cal G}$ on $X$, if $f(S) = f(\\sigma(S))$ for all $S \\subseteq X$ and $\\sigma \\in {\\cal G}$, and $S \\in {\\cal F} \\Leftrightarrow S' \\in {\\cal F}$ whenever ${\\bf E}_{\\sigma \\in {\\cal G}}[{\\bf 1}_{\\sigma(S)}] = {\\bf E}_{\\sigma \\in {\\cal G}}[{\\bf 1}_{\\sigma(S')}] $.\n\\end{definition}\n\n\\noindent{\\bf Example.} A cardinality constraint, ${\\cal F} = \\{S \\subseteq [n]: |S| \\leq k \\}$, is strongly symmetric with respect to all permutations, because the condition $S \\in {\\cal F}$ depends only on the symmetrized vector $\\overline{{\\bf 1}_S} = \\frac{|S|}{n} {\\bf 1}$.\nSimilarly, a partition matroid constraint, ${\\cal F} = \\{S \\subseteq [n]: |S \\cap X_i| \\leq k \\}$ for disjoint sets $X_i$, is also strongly symmetric.\nOn the other hand, consider a family of feasible solution ${\\cal F} = \\{ \\{1,2\\}, \\{2,3\\}, \\{3,4\\}, \\{4,1\\} \\}$. This family is invariant under a group generated by the cyclic rotation $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow 4 \\rightarrow 1$. 
It is not strongly symmetric, because the condition $S \\in {\\cal F}$ does not depend only on the symmetrized vector $\\overline{{\\bf 1}_S} = (\\frac14 |S|,\\frac14 |S|,\\frac14 |S|,\\frac14 |S|)$;\nsome pairs are feasible and others are not.\n\n\n\\\n\nNext, we define the symmetry gap as the ratio between the optimal solution\nof $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$ and the best {\\em symmetric} solution\nof this problem.\n\n\\begin{definition}[Symmetry gap]\nLet $\\max \\{ f(S): S \\in {\\cal F} \\}$ be an instance on a ground set $X$,\nwhich is strongly symmetric with respect to ${\\cal G} \\subset {\\bf S}(X)$.\nLet $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$ be the multilinear extension of $f(S)$\nand $P({\\cal F}) = \\mbox{conv}(\\{{\\bf 1}_I: I \\in {\\cal F} \\})$ the polytope\nassociated with $\\cal F$. Let $\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$.\nThe symmetry gap of $\\max \\{ f(S): S \\in {\\cal F} \\}$ is defined as\n$\\gamma = \\overline{OPT} \/ OPT$ where\n$$OPT = \\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F})\\},$$\n$$\\overline{OPT} = \\max \\{F(\\bar{{\\bf x}}): {\\bf x} \\in P({\\cal F}) \\}.$$\n\\end{definition}\n\n\\noindent\nWe give examples of computing the symmetry gap in Section~\\ref{section:hardness-applications}.\nNext, we need to define the notion of a {\\em refinement} of an instance.\nThis is a natural way to extend a family of feasible sets to a larger ground set.\nIn particular, this operation preserves the types of constraints that we care about,\nsuch as cardinality constraints, matroid independence, and matroid base constraints.\n\n\\begin{definition}[Refinement]\nLet ${\\cal F} \\subseteq 2^X$, $|X|=k$ and $|N| = n$. We say that\n$\\tilde{{\\cal F}}\\subseteq 2^{N \\times X}$ is a refinement of $\\cal F$, if\n$$ \\tilde{{\\cal F}} = \\left\\{ \\tilde{S} \\subseteq N \\times X \\ \\big| \\ (x_1,\\ldots,x_k) \\in P({{\\cal F}})\n \\mbox{ where } x_j = \\frac{1}{n} |\\tilde{S} \\cap (N \\times \\{j\\})| \\right\\}. $$\n\\end{definition}\n\nIn other words, in the refined instance, each element $j \\in X$ is replaced by a set $N \\times \\{j\\}$.\nWe call this set the {\\em cluster} of elements corresponding to $j$.\nA set $\\tilde{S}$ is in $\\tilde{{\\cal F}}$ if and only if the fractions $x_j$ of the respective clusters that are intersected by $\\tilde{S}$ form a vector ${\\bf x} \\in P({\\cal F})$, i.e. 
a convex combination of sets in ${\\cal F}$.\n\nOur main result is that the symmetry gap for any strongly symmetric instance\ntranslates automatically into hardness of approximation for refined instances.\n(See Definition~\\ref{def:total-sym} for the notion of being ``strongly symmetric\".)\nWe emphasize that this is a query-complexity lower bound,\nand hence independent of assumptions such as $P \\neq NP$.\n\n\\begin{theorem}\n\\label{thm:general-hardness}\nLet $\\max \\{ f(S): S \\in {\\cal F} \\}$ be an instance of nonnegative\n(optionally monotone) submodular maximization, strongly symmetric with respect to ${\\cal G}$,\nwith symmetry gap $\\gamma = \\overline{OPT} \/ OPT$.\nLet $\\cal C$ be the class of instances $\\max \\{\\tilde{f}(S): S \\in \\tilde{{\\cal F}}\\}$\nwhere $\\tilde{f}$ is nonnegative (optionally monotone) submodular\nand $\\tilde{{\\cal F}}$ is a refinement of ${\\cal F}$.\nThen for every $\\epsilon > 0$,\nany (even randomized) $(1+\\epsilon) \\gamma$-approximation algorithm for the class $\\cal C$ would require\nexponentially many value queries to $\\tilde{f}(S)$.\n\\end{theorem}\n\nWe remark that the result holds even if the class $\\cal C$ is restricted\nto instances which are themselves symmetric under a group related to ${\\cal G}$\n(see the discussion in Section~\\ref{section:hardness-proof}, after the proofs of Theorem~\\ref{thm:general-hardness}\nand \\ref{thm:multilinear-hardness}).\nOn the algorithmic side, submodular maximization seems easier for symmetric instances\nand in this case we obtain optimal approximation factors, up to lower-order terms\n(see Section~\\ref{section:symmetric}).\n\nOur hardness construction yields impossibility results also for solving\nthe continuous problem $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F}) \\}$. In the case of matroid constraints,\nthis is easy to see, because an approximation to the continuous problem gives the same\napproximation factor for the discrete problem (by pipage rounding, see Appendix~\\ref{app:pipage}).\nHowever, this phenomenon\nis more general and we can show that the value of a symmetry gap translates into\nan inapproximability result for the multilinear optimization problem under any constraint\nsatisfying a symmetry condition. 
\n\n\\begin{theorem}\n\\label{thm:multilinear-hardness}\nLet $\\max \\{ f(S): S \\in {\\cal F} \\}$ be an instance of nonnegative\n(optionally monotone) submodular maximization, strongly symmetric with respect to ${\\cal G}$,\nwith symmetry gap $\\gamma = \\overline{OPT} \/ OPT$.\nLet $\\cal C$ be the class of instances $\\max \\{\\tilde{f}(S): S \\in \\tilde{{\\cal F}}\\}$\nwhere $\\tilde{f}$ is nonnegative (optionally monotone) submodular\nand $\\tilde{{\\cal F}}$ is a refinement of ${\\cal F}$.\nThen for every $\\epsilon > 0$,\nany (even randomized) $(1+\\epsilon) \\gamma$-approximation algorithm for the multilinear relaxation\n$\\max \\{\\tilde{F}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}})\\}$ of problems in $\\cal C$ would require\nexponentially many value queries to $\\tilde{f}(S)$.\n\\end{theorem}\n\n\n\\\n\n\\noindent{\\bf Additions to the conference version and follow-up work.}\nAn extended abstract of this work appeared in IEEE FOCS 2009 \\cite{Vondrak09}.\nAs mentioned above, the main theorem in \\cite{Vondrak09} suffers from a technical flaw.\nThis does not affect the applications, but the general theorem in \\cite{Vondrak09} is not correct.\nWe provide a corrected version of the main theorem with a complete proof\n(Theorem~\\ref{thm:general-hardness}) and we extend this hardness result to the problem of solving\n the multilinear relaxation (Theorem~\\ref{thm:multilinear-hardness}).\n\nSubsequently, further work has been done which exploits the symmetry gap concept. \nIn \\cite{OV11}, it has been proved using Theorem~\\ref{thm:general-hardness} that\nmaximizing a nonnegative submodular function subject to a matroid independence constraint\nwith a factor better than $0.478$ would require exponentially many queries. Even in the case of a cardinality constraint,\n$\\max \\{f(S): |S| \\leq k\\}$ cannot be approximated within a factor better than $0.491$ using subexponentially many\nqueries \\cite{OV11}.\nIn the case of a matroid base constraint, assuming that the fractional base packing number\nis $\\nu = \\frac{k}{k-1}$ for some $k \\geq 2$, there is no $(1-e^{-1\/k}+\\epsilon)$-approximation in the value oracle model \\cite{OV11}, improving the hardness of $(1-\\frac{1}{\\nu}+\\epsilon) = (\\frac{1}{k}+\\epsilon)$-approximation from this paper.\nThese applications are not affected by the flaw in \\cite{Vondrak09},\nand they are implied by the corrected version of Theorem~\\ref{thm:general-hardness} here.\n\nRecently \\cite{DV11}, it has been proved using the symmetry gap technique that combinatorial auctions with submodular bidders do not admit any truthful-in-expectation $1\/m^\\gamma$-approximation, where $m$ is the number of items and $\\gamma>0$ some absolute constant.\nThis is the first nontrivial hardness result for truthful-in-expectation mechanisms for combinatorial auctions;\nit separates the classes of monotone submodular functions and coverage functions,\nwhere a truthful-in-expectation $(1-1\/e)$-approximation is possible \\cite{DRY11}.\nThe proof is self-contained and does not formally refer to \\cite{Vondrak09}.\n\nMoreover, this hardness result for truthful-in-expectation mechanisms as well as the main hardness result in this paper\nhave been converted from the oracle setting to a computational complexity setting \\cite{DV12a,DV12b}. 
This recent work\nshows that the hardness of approximation arising from symmetry gap is not limited to instances given by an oracle, but holds\nalso for instances encoded explicitly on the input, under a suitable complexity-theoretic assumption.\n\n\\\n\n\\noindent{\\bf Organization.}\nThe rest of the paper is organized as follows.\nIn Section~\\ref{section:hardness-applications}, we present applications\nof our main hardness result (Theorem~\\ref{thm:general-hardness}) to concrete cases,\nin particular we show how it implies the hardness statements in Theorem~\\ref{thm:matroid-indep}\nand \\ref{thm:matroid-bases}.\nIn Section~\\ref{section:hardness-proof}, we present the proofs of Theorem~\\ref{thm:general-hardness}\n and Theorem~\\ref{thm:multilinear-hardness}.\nIn Section~\\ref{section:algorithms}, we prove the algorithmic results\nin Theorem~\\ref{thm:matroid-indep} and \\ref{thm:matroid-bases}.\nIn Section~\\ref{section:symmetric}, we discuss the special case of symmetric instances.\nIn the Appendix,\nwe present a few basic facts concerning submodular functions,\nan extension of pipage rounding to matroid independence polytopes (rather than matroid base polytopes),\nand other technicalities that would hinder the main exposition.\n\n\n\n\\section{From symmetry to inapproximability: applications}\n\\label{section:hardness-applications}\n\nBefore we get into the proof of Theorem~\\ref{thm:general-hardness},\nlet us show how it can be applied to a number of specific problems.\nSome of these are hardness results that were proved previously by\nan ad-hoc method. The last application is a new one\n(Theorem~\\ref{thm:matroid-bases}).\n\n\\\n\n\\noindent{\\bf Nonmonotone submodular maximization.}\nLet $X = \\{1,2\\}$ and for any $S \\subseteq X$,\n$f(S) = 1$ if $|S|=1$, and $0$ otherwise.\nConsider the instance $\\max \\{ f(S): S \\subseteq X \\}$.\nIn other words, this is the Max Cut problem on the graph $K_2$.\nThis instance exhibits a simple symmetry, the group of all (two) permutations\non $\\{1,2\\}$. We get $OPT = F(1,0) = F(0,1) = 1$, while $\\overline{OPT} = F(1\/2,1\/2)\n = 1\/2$. Hence, the symmetry gap is $1\/2$.\n\n\\begin{figure}[here]\n\\begin{tikzpicture}[scale=.50]\n\n\\draw (-10,0) node {};\n\n\\filldraw [fill=gray,line width=1mm] (0,0) rectangle (4,2);\n\n\\filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1)\n .. controls +(0,-1) and +(0,-1) .. (0.5,1);\n\n\n\\fill (1,1) circle (5pt);\n\\fill (3,1) circle (5pt);\n\\draw (1,1) -- (3,1);\n\n\\draw (-1,1) node {$X$};\n\n\\draw (7,1) node {$\\bar{x}_1 = \\bar{x}_2 = \\frac{1}{2}$};\n\n\\end{tikzpicture}\n\n\\caption{Symmetric instance for nonmonotone submodular maximization: Max Cut on the graph $K_2$.\nThe white set denotes the optimal solution, while $\\bar{{\\bf x}}$ is the (unique) symmetric solution.}\n\n\\end{figure}\n\n\nSince $f(S)$ is nonnegative submodular and there is no constraint on $S \\subseteq X$,\nthis will be the case for any refinement of the instance as well.\nTheorem~\\ref{thm:general-hardness} implies immediately the following: any algorithm achieving\na $(\\frac12 + \\epsilon)$-approximation for nonnegative (nonmonotone) submodular\nmaximization requires exponentially many value queries\n(which was previously known \\cite{FMV07}). Note that a ``trivial instance\" implies\na nontrivial hardness result. 
This is typically the case in applications of Theorem~\\ref{thm:general-hardness}.\n\nThe same symmetry gap holds if we impose some simple constraints:\nthe problems $\\max \\{ f(S): |S| \\leq 1 \\}$ and $\\max \\{ f(S): |S| = 1 \\}$\nhave the same symmetry gap as above. Hence, the hardness threshold of $1\/2$\nalso holds for nonmonotone submodular maximization under cardinality constraints\nof the type $|S| \\leq n\/2$, or $|S| = n\/2$. This proves the hardness part of\nTheorem~\\ref{thm:matroid-indep}. This can be derived quite easily\nfrom the construction of \\cite{FMV07} as well.\n\n\\\n\n\\noindent{\\bf Monotone submodular maximization.}\nLet $X = [k]$ and $f(S) = \\min \\{|S|, 1\\}$.\nConsider the instance $\\max \\{f(S): |S| \\leq 1 \\}$.\nThis instance is invariant under all permutations on $[k]$, the symmetric group ${\\bf S}_k$.\nNote that the instance is {\\em strongly symmetric} (Def.~\\ref{def:total-sym}) with respect to ${\\bf S}_k$,\nsince the feasibility constraint $|S| \\leq 1$ depends only on the symmetrized vector\n$\\overline{{\\bf 1}_S} = (\\frac{1}{k}|S|,\n\\ldots, \\frac{1}{k}|S|)$. We get $OPT = F(1,0,\\ldots,0) = 1$, while\n$\\overline{OPT} = F(1\/k,1\/k,\\ldots,1\/k) = 1 - (1-1\/k)^k$.\n\n\\iffalse\n\\begin{figure}[here]\n\\begin{tikzpicture}[scale=.50]\n\n\\draw (-8,0) node {};\n\n\\filldraw [fill=gray,line width=1mm] (0,0) rectangle (9,2);\n\n\\filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1)\n .. controls +(0,-1) and +(0,-1) .. (0.5,1);\n\n\n\\fill (1,1) circle (5pt);\n\\fill (2,1) circle (5pt);\n\\fill (3,1) circle (5pt);\n\\fill (4,1) circle (5pt);\n\\fill (5,1) circle (5pt);\n\\fill (6,1) circle (5pt);\n\\fill (7,1) circle (5pt);\n\\fill (8,1) circle (5pt);\n\n\\draw [dotted] (1,1) -- (8,1);\n\n\\draw (-1,1) node {$X$};\n\n\\draw (11,1) node {$\\bar{x}_{i} = \\frac{1}{k}$};\n\n\\end{tikzpicture}\n\n\\caption{Symmetric instance for monotone submodular maximization.}\n\n\\end{figure}\n\\fi\n\n\n\nHere, $f(S)$ is monotone submodular and any refinement of $\\cal F$ is\na set system of the type $\\tilde{{\\cal F}} = \\{S: |S| \\leq \\ell \\}$.\nBased on our theorem, this implies that any approximation better than $1 - (1-1\/k)^k$\nfor monotone submodular maximization subject\nto a cardinality constraint would require exponentially many value queries.\nSince this holds for any fixed $k$, we get the same hardness\nresult for any $\\beta > \\lim_{k \\rightarrow \\infty} (1 - (1-1\/k)^k) = 1-1\/e$\n(which was previously known \\cite{NW78}).\n\n\\iffalse\n\n\\noindent{\\bf Submodular welfare maximization.}\nLet $X = [k] \\times [k]$,\n$ {\\cal F} = \\{ S: S$ contains at most $1$ pair $(i,j)$ for each $j \\}$,\nand $f(S) = |\\{ i: \\exists (i,j) \\in S \\}|$.\nConsider the instance $\\max \\{f(S): S \\in {\\cal F}\\}$.\nThis instance can be interpreted as an allocation problem of $k$ items to\n$k$ players. A set $S$ represents an assignment in the sense that\n$(i,j) \\in S$ if item $j$ is allocated to player $i$.\nA player is satisfied is she receives at least 1 item;\nthe objective function is the number of satisfied players.\nThis instance exhibits the symmetry of all permutations of the players\n(and also all permutations of the items, although we do not use it here).\nNote that the feasibility constraint $S \\in {\\cal F}$ depends only on the symmetrized\nvector $\\overline{{\\bf 1}_S}$ which averages out the allocation of each item\nacross all players. 
Therefore the instance is strongly symmetric with respect\nto permutations of players.\nAn optimum solution allocates each item to a different player, and $OPT = k$.\nThe symmetrized optimum allocates a $1\/k$-fraction of each item to each player,\nwhich gives $\\overline{OPT} = F(\\frac{1}{k},\\frac{1}{k},\\ldots,\\frac{1}{k}) =\n k(1-(1-\\frac{1}{k})^k)$.\n\n\\begin{figure}[here]\n\\begin{tikzpicture}[scale=.50]\n\n\\draw (-8,0) node {};\n\n\\filldraw [fill=gray,line width=1mm] (0,0) rectangle (8,2);\n\\filldraw [fill=white] (0.5,1) .. controls +(0,1) and +(0,1) .. (1.5,1)\n .. controls +(0,-1) and +(0,-1) .. (0.5,1);\n\\filldraw [fill=gray,line width=1mm] (0,2) rectangle (8,4);\n\\filldraw [fill=white] (2.5,3) .. controls +(0,1) and +(0,1) .. (3.5,3)\n .. controls +(0,-1) and +(0,-1) .. (2.5,3);\n\\filldraw [fill=gray,line width=1mm] (0,4) rectangle (8,6);\n\\filldraw [fill=white] (4.5,5) .. controls +(0,1) and +(0,1) .. (5.5,5)\n .. controls +(0,-1) and +(0,-1) .. (4.5,5);\n\\filldraw [fill=gray,line width=1mm] (0,6) rectangle (8,8);\n\\filldraw [fill=white] (6.5,7) .. controls +(0,1) and +(0,1) .. (7.5,7)\n .. controls +(0,-1) and +(0,-1) .. (6.5,7);\n\n\n\\fill (1,1) circle (5pt);\n\\fill (3,1) circle (5pt);\n\\fill (5,1) circle (5pt);\n\\fill (7,1) circle (5pt);\n\\fill (1,3) circle (5pt);\n\\fill (3,3) circle (5pt);\n\\fill (5,3) circle (5pt);\n\\fill (7,3) circle (5pt);\n\\fill (1,5) circle (5pt);\n\\fill (3,5) circle (5pt);\n\\fill (5,5) circle (5pt);\n\\fill (7,5) circle (5pt);\n\\fill (1,7) circle (5pt);\n\\fill (3,7) circle (5pt);\n\\fill (5,7) circle (5pt);\n\\fill (7,7) circle (5pt);\n\n\\draw [dotted] (1,1) -- (7,1);\n\\draw [dotted] (1,3) -- (7,3);\n\\draw [dotted] (1,5) -- (7,5);\n\\draw [dotted] (1,7) -- (7,7);\n\n\\draw (-1.5,4) node {players};\n\\draw (4,9) node {items};\n\n\\draw (10,4) node {$\\bar{x}_{ij} = \\frac{1}{k}$};\n\n\\end{tikzpicture}\n\n\\caption{Symmetric instance for submodular welfare maximization.}\n\n\\end{figure}\n\nA refinement of this instance can be interpreted as an allocation problem\nwhere we have $n$ copies of each item, we still have $k$ players,\nand the utility functions are monotone submodular. Our theorem implies\nthat for submodular welfare maximization with $k$ players, a better approximation factor\nthan $1 - (1-\\frac{1}{k})^k$ is impossible.\\footnote{We note that \nour theorem here assumes an oracle model where only the total value can be\nqueried for a given allocation. This is actually enough for the $(1-1\/e)$-approximation\nof \\cite{Vondrak08} to work. However, the hardness result holds even if each player's valuation\nfunction can be queried separately; this result was proved in \\cite{MSV08}.}\n\n\\fi\n\n\\\n\n\\noindent{\\bf Submodular maximization over matroid bases.}\nLet $X = A \\cup B$, $A = \\{a_1, \\ldots, a_k\\}$, $B = \\{b_1, \\ldots, b_k\\}$\nand ${\\cal F} = \\{ S: |S \\cap A| = 1 \\ \\& \\ |S \\cap B| = k-1 \\}$.\nWe define $f(S) = \\sum_{i=1}^{k} f_i(S)$ where\n$f_i(S) = 1$ if $a_i \\in S \\ \\& \\ b_i \\notin S$, and $0$ otherwise.\nThis instance can be viewed as a Maximum Directed Cut problem on a graph\nof $k$ disjoint arcs, under the constraint that exactly one arc tail\nand $k-1$ arc heads should be on the left-hand side ($S$).\nAn optimal solution is for example $S = \\{a_1, b_2, b_3, \\ldots, b_k \\}$,\nwhich gives $OPT = 1$.\nThe symmetry here is that we can apply the same permutation to $A$ and $B$\nsimultaneously. 
Again, the feasibility of a set $S$ depends only on the symmetrized vector $\\overline{{\\bf 1}_S}$:\nin fact $S \\in {\\cal F}$ if and only if $\\overline{{\\bf 1}_S} = (\\frac1k,\\ldots,\\frac1k,1-\\frac1k,\\ldots,1-\\frac1k)$.\nThere is a unique symmetric solution\n$\\bar{{\\bf x}} = (\\frac1k,\\ldots,\\frac1k,1-\\frac1k,\\ldots,1-\\frac1k)$, and\n$\\overline{OPT} = F(\\bar{{\\bf x}}) = {\\bf E}[f(\\hat{\\bar{{\\bf x}}})] = \\sum_{i=1}^{k}\n {\\bf E}[f_i(\\hat{\\bar{{\\bf x}}})] = \\frac1k$\n(since each arc appears in the directed cut induced by $\\hat{\\bar{{\\bf x}}}$ with probability $\\frac{1}{k^2}$).\n\n\\begin{figure}[here]\n\\begin{tikzpicture}[scale=.60]\n\n\\draw (-6,0) node {};\n\n\\filldraw [fill=gray,line width=1mm] (0,0) rectangle (9,2);\n\\filldraw [fill=gray,line width=1mm] (0,2) rectangle (9,4);\n\n\\filldraw [fill=white] (0.5,3) .. controls +(0,1) and +(0,1) .. (1.5,3)\n .. controls +(0,-1) and +(0,-1) .. (0.5,3);\n\\filldraw [fill=white] (1.5,1) .. controls +(0,1) and +(0,1) .. (8.5,1)\n .. controls +(0,-1) and +(0,-1) .. (1.5,1);\n\n\n\\fill (1,1) circle (5pt);\n\\fill (2,1) circle (5pt);\n\\fill (3,1) circle (5pt);\n\\fill (4,1) circle (5pt);\n\\fill (5,1) circle (5pt);\n\\fill (6,1) circle (5pt);\n\\fill (7,1) circle (5pt);\n\\fill (8,1) circle (5pt);\n\\fill (1,3) circle (5pt);\n\\fill (2,3) circle (5pt);\n\\fill (3,3) circle (5pt);\n\\fill (4,3) circle (5pt);\n\\fill (5,3) circle (5pt);\n\\fill (6,3) circle (5pt);\n\\fill (7,3) circle (5pt);\n\\fill (8,3) circle (5pt);\n\\draw[-latex] [line width=0.5mm] (1,3) -- (1,1);\n\\draw[-latex] [line width=0.5mm] (2,3) -- (2,1);\n\\draw[-latex] [line width=0.5mm] (3,3) -- (3,1);\n\\draw[-latex] [line width=0.5mm] (4,3) -- (4,1);\n\\draw[-latex] [line width=0.5mm] (5,3) -- (5,1);\n\\draw[-latex] [line width=0.5mm] (6,3) -- (6,1);\n\\draw[-latex] [line width=0.5mm] (7,3) -- (7,1);\n\\draw[-latex] [line width=0.5mm] (8,3) -- (8,1);\n\n\n\\draw (-1,1) node {$B$};\n\\draw (-1,3) node {$A$};\n\n\\draw (11,3) node {$\\bar{x}_{a_i} = \\frac{1}{k}$};\n\\draw (11,1) node {$\\bar{x}_{b_i} = 1 - \\frac{1}{k}$};\n\n\\end{tikzpicture}\n\n\\caption{Symmetric instance for submodular maximization over matroid bases.}\n\n\\end{figure}\nThe refined instances are instances of (nonmonotone) submodular maximization\nover the bases of a matroid, where the ground set is partitioned into $A \\cup B$\nand we should take a $\\frac1k$-fraction of $A$ and a $(1-\\frac1k)$-fraction of $B$.\n(This means that the fractional packing number of bases is $\\nu = \\frac{k}{k-1}$.)\nOur theorem implies that for this class of instances, an approximation better\nthan $1\/k$ is impossible - this proves the hardness part of Theorem~\\ref{thm:matroid-bases}.\n\n\\\n\nObserve that in all the cases mentioned above, the multilinear relaxation is equivalent\nto the original problem, in the sense that any fractional solution can be rounded\nwithout any loss in the objective value. This implies that the same hardness factors apply to\nsolving the multilinear relaxation of the respective problems. 
In particular, using the last result\n(for matroid bases), we obtain that the multilinear optimization problem $\\max \\{ F({\\bf x}): {\\bf x} \\in P \\}$\ndoes not admit a constant factor for nonnegative submodular functions and matroid base polytopes.\n(We remark that a $(1-1\/e)$-approximation can be achieved for any {\\em monotone} submodular function\nand any solvable polytope, i.e.~polytope over which we can optimize linear functions \\cite{Vondrak08}.)\n\nAs Theorem~\\ref{thm:multilinear-hardness} shows, this holds more generally - any symmetry gap\nconstruction gives an inapproximability result for solving the multilinear optimization problem\n$\\max \\{F({\\bf x}): {\\bf x} \\in P\\}$. This in fact implies limits on what hardness results we can\npossibly hope for using this technique. For instance, we cannot prove using the symmetry gap\nthat the monotone submodular maximization problem subject to the intersection of $k$ matroid constraints\ndoes not admit a constant factor - because we would also prove that the respective multilinear relaxation\ndoes not admit such an approximation. But we know from \\cite{Vondrak08} that a $(1-1\/e)$-approximation\nis possible for the multilinear problem in this case.\n\nHence, the hardness arising from the symmetry gap is related to the difficulty of solving\nthe multilinear optimization problem rather than the difficulty of rounding a fractional solution.\nThus this technique is primarily suited to optimization problems where the multilinear optimization problem\ncaptures closely the original discrete problem.\n \n\n\n\n\\section{From symmetry to inapproximability: proof}\n\\label{section:hardness-proof}\n\n\\paragraph{The roadmap}\nAt a high level,\nour proof resembles the constructions of \\cite{FMV07,MSV08}.\nWe construct instances based on continuous functions $F({\\bf x})$, $G({\\bf x})$,\nwhose optima differ by a gap\nfor which we want to prove hardness. Then we show that after a certain perturbation,\nthe two instances are very hard to distinguish.\nThis paper generalizes the ideas of \\cite{FMV07,MSV08} and brings\ntwo new ingredients. First, we show that the functions\n$F({\\bf x}), G({\\bf x})$, which are ``pulled out of the hat'' in \\cite{FMV07,MSV08},\ncan be produced in a natural way from the multilinear relaxation\nof the respective problem, using the notion of a {\\em symmetry gap}.\nSecondly, the functions $F({\\bf x}), G({\\bf x})$ are perturbed in a way that makes\nthem indistinguishable and this forms the main technical part\nof the proof. In \\cite{FMV07}, this step is quite simple.\nIn \\cite{MSV08}, the perturbation is more complicated, but still relies\non properties of the functions $F({\\bf x}), G({\\bf x})$ specific to that application.\nThe construction that we present here (Lemma~\\ref{lemma:final-fix})\nuses the symmetry properties of a fixed instance in a generic fashion.\n\n\n\\\n\nFirst, let us present an outline of our construction. 
Given an instance\n$\\max \\{f(S): S \\in {\\cal F}\\}$ exhibiting a symmetry gap $\\gamma$,\nwe consider two smooth submodular\\footnote{\"Smooth submodularity\"\nmeans the condition $\\mixdiff{F}{x_i}{x_j} \\leq 0$ for all $i,j$.}\nfunctions, $F({\\bf x})$ and $G({\\bf x})$.\nThe first one is the multilinear extension $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$,\nwhile the second one is its symmetrized version $G({\\bf x}) = F(\\bar{{\\bf x}})$.\nWe modify these functions slightly so that we obtain\nfunctions $\\hat{F}({\\bf x})$ and $\\hat{G}({\\bf x})$ with the following property:\nFor any vector ${\\bf x}$ which is close to its symmetrized version\n$\\bar{{\\bf x}}$, $\\hat{F}({\\bf x}) = \\hat{G}({\\bf x})$.\nThe functions $\\hat{F}({\\bf x}), \\hat{G}({\\bf x})$ induce instances of\nsubmodular maximization on the refined ground sets. The way we define\ndiscrete instances based on $\\hat{F}({\\bf x}), \\hat{G}({\\bf x})$ is natural,\nusing the following lemma.\nEssentially, we interpret the fractional variables as fractions of clusters\nin the refined instance.\n\n\\begin{lemma}\n\\label{lemma:smooth-submodular}\nLet $F:[0,1]^X \\rightarrow {\\boldmath R}$,\n $N = [n]$, $n \\geq 1$, and define $f:2^{N \\times X} \\rightarrow {\\boldmath R}$\nso that $f(S) = F({\\bf x})$ where $x_i = \\frac{1}{n} |S \\cap (N \\times \\{i\\})|$. Then\n\\begin{enumerate}\n\\item If $\\partdiff{F}{x_i} \\geq 0$ everywhere for each $i$, then\n$f$ is monotone.\n\\item If the first partial derivatives of $F$ are absolutely continuous\\footnote{\nA function $F:[0,1]^X \\rightarrow {\\boldmath R}$ is absolutely continuous,\nif $\\forall \\epsilon>0; \\exists \\delta>0; \\sum_{i=1}^{t} ||{\\bf x}_i-{\\bf y}_i|| < \\delta \\Rightarrow\n\\sum_{i=1}^{t} |F({\\bf x}_i) - F({\\bf y}_i)| < \\epsilon$.} and $\\mixdiff{F}{x_i}{x_j} \\leq 0$ almost everywhere for all $i,j$,\nthen $f$ is submodular.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFirst, assume $\\partdiff{F}{x_i} \\geq 0$ everywhere for all $i$. This implies that $F$ is nondecreasing in every coordinate,\ni.e. $F({\\bf x}) \\leq F({\\bf y})$ whenever ${\\bf x} \\leq {\\bf y}$. This means that $f(S) \\leq f(T)$ whenever $S \\subseteq T$.\n\nNext, assume $\\partdiff{F}{x_i}$ is absolutely continuous for each $i$ and $\\mixdiff{F}{x_j}{x_i} \\leq 0$ almost everywhere for all $i,j$.\nWe want to prove that $\\partdiff{F}{x_i} |_{\\bf x} \\geq \\partdiff{F}{x_i} |_{\\bf y}$ whenever ${\\bf x} \\leq {\\bf y}$, which implies that the marginal\nvalues of $f$ are nonincreasing.\n\nLet ${\\bf x} \\leq {\\bf y}$,\nfix $\\delta>0$ arbitrarily small, and pick ${\\bf x}',{\\bf y}'$ such that $||{\\bf x}'-{\\bf x}||<\\delta, ||{\\bf y}'-{\\bf y}||<\\delta, {\\bf x}' \\leq {\\bf y}'$ and\non the line segment $[{\\bf x}', {\\bf y}']$, we have $\\mixdiff{F}{x_j}{x_i} \\leq 0$ except for a set of (1-dimensional) measure zero. If such a pair of points\n${\\bf x}', {\\bf y}'$ does not exist, it means that there are sets $A,B$ of positive measure such that\n${\\bf x} \\in A, {\\bf y} \\in B$ and for any\n${\\bf x}' \\in A, {\\bf y}' \\in B$, the line segment $[{\\bf x}',{\\bf y}']$ contains a subset of positive (1-dimensional) measure where $\\mixdiff{F}{x_j}{x_i}$ \nfor some $j$ is positive or undefined. 
This would imply that $[0,1]^X$ contains a subset of positive measure where\n$\\mixdiff{F}{x_j}{x_i}$ for some $j$ is positive or undefined, which we assume is not the case.\n\nTherefore, there is a pair of points ${\\bf x}', {\\bf y}'$ as described above.\nWe compare $\\partdiff{F}{x_i} |_{{\\bf x}'}$ and $\\partdiff{F}{x_i} |_{{\\bf y}'}$\nby integrating along the line segment $[{\\bf x}', {\\bf y}']$. Since $\\partdiff{F}{x_i}$ is absolutely continuous and $\\partdiff{}{x_j} \\partdiff{F}{x_i} = \\mixdiff{F}{x_j}{x_i} \\leq 0$ along this line segment for all $j$ except for a set of measure zero, we obtain $\\partdiff{F}{x_i} |_{{\\bf x}'} \\geq \\partdiff{F}{x_i} |_{{\\bf y}'}$. This is true for ${\\bf x}', {\\bf y}'$ arbitrarily\nclose to ${\\bf x}, {\\bf y}$, and hence by continuity of $\\partdiff{F}{x_i}$, we get $\\partdiff{F}{x_i} |_{{\\bf x}} \\geq \\partdiff{F}{x_i} |_{{\\bf y}}$.\nThis implies that the marginal values of $f$ are nonincreasing.\n\\end{proof}\n\n\nThe way we construct $\\hat{F}({\\bf x}), \\hat{G}({\\bf x})$ is such that,\ngiven a large enough refinement of the ground set,\nit is impossible to distinguish the instances corresponding to\n$\\hat{F}({\\bf x})$ and $\\hat{G}({\\bf x})$. As we argue more precisely later,\nthis holds because if the ground set is large and labeled in a random way\n(considering the symmetry group of the instance), a query about a vector ${\\bf x}$\neffectively becomes a query about the symmetrized vector $\\bar{{\\bf x}}$.\nWe would like this property to imply that all queries with high probability fall\nin the region where $\\hat{F}({\\bf x}) = \\hat{G}({\\bf x})$ and the inability to distinguish between $\\hat{F}$\nand $\\hat{G}$ gives the hardness result that we want.\nThe following lemma gives the precise properties of $\\hat{F}({\\bf x})$\nand $\\hat{G}({\\bf x})$ that we need.\n\n\\begin{lemma}\n\\label{lemma:final-fix}\nConsider a function $f:2^X \\rightarrow {\\boldmath R}_+$ invariant under a group\nof permutations $\\cal G$ on the ground set $X$.\nLet $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$, $\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$,\nand fix any $\\epsilon > 0$.\nThen there is $\\delta > 0$ and functions $\\hat{F}, \\hat{G}:[0,1]^X \\rightarrow {\\boldmath R}_+$\n(which are also symmetric with respect to ${\\cal G}$), satisfying:\n\\begin{enumerate}\n\\item For all ${\\bf x} \\in [0,1]^X$, $\\hat{G}({\\bf x}) = \\hat{F}(\\bar{{\\bf x}})$.\n\\item For all ${\\bf x} \\in [0,1]^X$, $|\\hat{F}({\\bf x}) - F({\\bf x})| \\leq \\epsilon$.\n\\item Whenever $||{\\bf x} - \\bar{{\\bf x}}||^2 \\leq \\delta$, $\\hat{F}({\\bf x}) = \\hat{G}({\\bf x})$\nand the value depends only on $\\bar{{\\bf x}}$.\n\\item The first partial derivatives of $\\hat{F}, \\hat{G}$ are absolutely continuous. \n\\item If $f$ is monotone, then $\\partdiff{\\hat{F}}{x_i} \\geq 0$ and\n $\\partdiff{\\hat{G}}{x_i} \\geq 0$ everywhere.\n\\item If $f$ is submodular, then $\\mixdiff{\\hat{F}}{x_i}{x_j} \\leq 0$ and\n $\\mixdiff{\\hat{G}}{x_i}{x_j} \\leq 0$ almost everywhere.\n\\end{enumerate}\n\\end{lemma}\n\nThe proof of this lemma is the main technical part of this paper\nand we defer it to the end of this section. Assuming this lemma,\nwe first finish the proof of the main theorem. 
We prove the following.\n\n\\begin{lemma}\n\\label{lemma:indistinguish}\nLet $\\hat{F}, \\hat{G}$ be the two functions provided by Lemma~\\ref{lemma:final-fix}.\nFor a parameter $n \\in {\\boldmath Z}_+$ and $N = [n]$, define two discrete functions\n$\\hat{f}, \\hat{g}: 2^{N \\times X} \\rightarrow {\\boldmath R}_+$ as follows:\nLet $\\sigma^{(i)}$ be an arbitrary permutation in ${\\cal G}$ for each $i \\in N$.\nFor every set $S \\subseteq N \\times X$, we define a vector $\\xi(S) \\in [0,1]^X$ by\n$$ \\xi_j(S) = \\frac{1}{n} \\left|\\{i \\in N: (i,\\sigma^{(i)}(j)) \\in S \\}\\right|.$$\nLet us define:\n$$ \\hat{f}(S) = \\hat{F}(\\xi(S)), \\ \\ \\ \\ \\ \\hat{g}(S) = \\hat{G}(\\xi(S)).$$\nIn addition, let $\\tilde{{\\cal F}} = \\{\\tilde{S}: \\xi(\\tilde{S}) \\in P({\\cal F})\\}$\nbe a feasibility constraint such that the condition $S \\in {\\cal F}$ depends\nonly on the symmetrized vector $\\overline{{\\bf 1}_S}$.\nThen deciding whether an instance given by value\/membership oracles is\n$\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}}\\}$ or $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$\n(even by a randomized algorithm, with any constant probability of success)\nrequires an exponential number of queries.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\sigma^{(i)} \\in {\\cal G}$ be chosen independently at random for each $i \\in N$ and consider the instances\n$\\max\\{\\hat{f}(S): S \\in \\tilde{{\\cal F}} \\}$, $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$ as described in the lemma. We show that\nevery deterministic algorithm will follow the same computation path and return the same answer on both instances,\nwith high probability. By Yao's principle, this means that every randomized algorithm returns\nthe same answer for the two instances with high probability, for some particular $\\sigma^{(i)} \\in {\\cal G}$.\n\nThe feasible sets in the refined instance, $S \\in \\tilde{{\\cal F}}$, are such that\nthe respective vector $\\xi(S)$ is in the polytope $P({{\\cal F}})$.\nSince the instance is strongly symmetric, the condition $S \\in {\\cal F}$ depends\nonly on the symmetrized vector $\\overline{{\\bf 1}_S}$. Hence, the condition $\\xi(S) \\in P({\\cal F})$\ndepends only on the symmetrized vector $\\overline{\\xi(S)}$. Therefore, $S \\in \\tilde{{\\cal F}} \\Leftrightarrow\n\\xi(S) \\in P({\\cal F}) \\Leftrightarrow \\overline{\\xi(S)} \\in P({\\cal F})$.\nWe have $\\overline{\\xi(S)}_j = {\\bf E}_{\\sigma \\in {\\cal G}}[\\frac{1}{n} \\left|\\{i \\in N: (i,\\sigma^{(i)}(\\sigma(j))) \\in S \\}\\right|].$ The distribution of $\\sigma^{(i)} \\circ \\sigma$ is the same as that of $\\sigma$, i.e.~uniform over ${\\cal G}$. Hence, $\\overline{\\xi(S)}_j$ and consequently the condition $S \\in \\tilde{{\\cal F}}$ does not depend on $\\sigma^{(i)}$ in any way.\nIntuitively, an algorithm cannot learn any information about the permutations $\\sigma^{(i)}$ by querying the feasibility oracle,\nsince the feasibility condition $S \\in \\tilde{{\\cal F}}$ does not depend on $\\sigma^{(i)}$ for any $i \\in N$.\n\nThe main part of the proof is to show that even queries to the objective function are unlikely to reveal\nany information about the permutations $\\sigma^{(i)}$.\nThe key observation is that for any fixed query $Q$ to the objective function,\nthe associated vector ${\\bf q} = \\xi(Q)$ is very likely to be close\nto its symmetrized version $\\bar{{\\bf q}}$.\nTo see this, consider a query $Q$. 
The associated vector ${\\bf q} = \\xi(Q)$\nis determined by\n$$ q_j = \\frac{1}{n} |\\{i \\in N: (i,\\sigma^{(i)}(j)) \\in Q\\}|\n = \\frac{1}{n} \\sum_{i=1}^{n} Q_{ij} $$\nwhere $Q_{ij}$ is an indicator variable of the event $(i,\\sigma^{(i)}(j)) \\in Q$.\nThis is a random event due to the randomness in $\\sigma^{(i)}$.\nWe have\n$$ {\\bf E}[Q_{ij}] = \\Pr[Q_{ij}=1] = \\Pr_{\\sigma^{(i)} \\in {\\cal G}}[(i,\\sigma^{(i)}(j)) \\in Q].$$\nAdding up these expectations over $i \\in N$, we get\n$$ \\sum_{i \\in N} {\\bf E}[Q_{ij}]\n = \\sum_{i \\in N} \\Pr_{\\sigma^{(i)} \\in {\\cal G}}[(i,\\sigma^{(i)}(j)) \\in Q] \n = {\\bf E}_{\\sigma \\in {\\cal G}} [|\\{i \\in N: (i,\\sigma(j)) \\in Q \\}|].$$\nFor the purposes of expectation, the independence of $\\sigma^{(1)},\n \\ldots, \\sigma^{(n)}$ is irrelevant and that is why we replace them by one random permutation $\\sigma$.\nOn the other hand, consider the symmetrized vector $\\bar{{\\bf q}}$, with coordinates\n$$ \\bar{q}_j = {\\bf E}_{\\sigma \\in {\\cal G}}[q_{\\sigma(j)}]\n = \\frac{1}{n} {\\bf E}_{\\sigma \\in {\\cal G}}[|\\{i \\in N: (i,\\sigma^{(i)}(\\sigma(j))) \\in Q\\}|]\n = \\frac{1}{n} {\\bf E}_{\\sigma \\in {\\cal G}}[|\\{i \\in N: (i,\\sigma(j)) \\in Q \\}|] $$\nusing the fact that the distribution of $\\sigma^{(i)} \\circ \\sigma$\nis the same as the distribution of $\\sigma$ - uniformly random over ${\\cal G}$.\nNote that the vector ${\\bf q}$ depends on the random permutations $\\sigma^{(i)}$\nbut the symmetrized vector $\\bar{{\\bf q}}$ does not; this will be also useful\nin the following. For now, we summarize that\n$$ \\bar{q}_j = \\frac{1}{n} \\sum_{i=1}^{n} {\\bf E}[Q_{ij}] = {\\bf E}[q_j].$$\nSince each permutation $\\sigma^{(i)}$ is chosen independently, the random variables\n$\\{Q_{ij}: 1 \\leq i \\leq n\\}$ are independent (for a fixed $j$).\nWe can apply Chernoff's bound (see e.g. \\cite{AlonSpencer}, Corollary A.1.7):\n$$ \\Pr\\left[\\left|\\sum_{i=1}^{n} Q_{ij}- \\sum_{i=1}^{n} {\\bf E}[Q_{ij}] \\right| > a \\right]\n < 2e^{-2a^2 \/ n}.$$\nUsing $q_j = \\frac{1}{n} \\sum_{i=1}^{n} Q_{ij}$,\n $\\bar{q}_j = \\frac{1}{n} \\sum_{i=1}^{n} {\\bf E}[Q_{ij}]$\nand setting $a = n \\sqrt{\\delta\/|X|}$, we obtain\n$$ \\Pr\\left[|q_j - \\bar{q}_j| > \\sqrt{{\\delta}\/{|X|}}\\right]\n < 2e^{-2 n \\delta \/ |X|}.$$\nBy the union bound,\n\\begin{equation}\n\\label{eq:D(q)}\n \\Pr[||{\\bf q}-\\bar{{\\bf q}}||^2 > \\delta] \\leq \\sum_{j \\in X} \\Pr[|q_j-\\bar{q}_j|^2 > {\\delta}\/{|X|}] < 2|X| e^{-2n \\delta\/|X|}.\n\\end{equation}\nNote that while $\\delta$ and $|X|$ are constants, $n$ grows as the size\nof the refinement and hence the probability is exponentially small in the\nsize of the ground set $N \\times X$.\n\nDefine $D({\\bf q}) = ||{\\bf q}-\\bar{{\\bf q}}||^2$. As long as $D({\\bf q}) \\leq \\delta$ for every query issued by the algorithm,\nthe answers do not depend on the randomness of the input. 
This is because\nthen the values of $\\hat{F}({\\bf q})$ and $\\hat{G}({\\bf q})$ depend only on $\\bar{{\\bf q}}$,\nwhich is independent of the random permutations $\\sigma^{(i)}$,\nas we argued above.\nTherefore, assuming that $D({\\bf q}) \\leq \\delta$ for each query,\nthe algorithm will always follow the same path\nof computation and issue the same sequence of queries $\\cal S$.\n(Note that this is just a fixed sequence which can be written down\nbefore we started running the algorithm.)\nAssume that $|{\\cal S}| = e^{o(n)}$, i.e.,~the number of queries is subexponential in $n$.\nBy (\\ref{eq:D(q)}), using a union bound over all $Q \\in {\\cal S}$,\nit happens with probability $1 - e^{-\\Omega(n)}$ that ${D}({\\bf q}) = ||{\\bf q}-\\bar{{\\bf q}}||^2 \\leq \\delta$ for all $Q \\in {\\cal S}$.\n(Note that $\\delta$ and $|X|$ are constants here.)\nThen, the algorithm indeed follows this path of computation and gives the same answer.\nIn particular, the answer does not depend on whether the instance is $\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}}\\}$\nor $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}}\\}$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:general-hardness}]\nFix an $\\epsilon > 0$.\nGiven an instance $\\max \\{f(S): S \\in {\\cal F}\\}$\nstrongly symmetric under ${\\cal G}$, let $\\hat{F}, \\hat{G}: [0,1]^X \\rightarrow {\\boldmath R}$\nbe the two functions provided by Lemma~\\ref{lemma:final-fix}.\nWe choose a large number $n$ and consider a refinement $\\tilde{{\\cal F}}$\non the ground set $N \\times X$, where $N = [n]$.\nWe define discrete instances of submodular maximization\n$\\max \\{ \\hat{f}(S): S \\in \\tilde{{\\cal F}} \\}$ and $\\max \\{ \\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$.\nAs in Lemma~\\ref{lemma:indistinguish}, for each $i \\in N$\nwe choose a random permutation $\\sigma^{(i)} \\in {\\cal G}$.\nThis can be viewed as a random shuffle of the labeling of the ground set\nbefore we present it to an algorithm.\nFor every set $S \\subseteq N \\times X$, we define a vector $\\xi(S) \\in [0,1]^X$ by\n$$ \\xi_j(S) = \\frac{1}{n} \\left|\\{i \\in N: (i,\\sigma^{(i)}(j)) \\in S \\}\\right|.$$\nIn other words, $\\xi_j(S)$ measures the fraction of copies of element $j$\ncontained in $S$; however, for each $i$ the $i$-copies of all elements\nare shuffled by $\\sigma^{(i)}$. Next, we define\n$$ \\hat{f}(S) = \\hat{F}(\\xi(S)), \\ \\ \\ \\ \\ \\hat{g}(S) = \\hat{G}(\\xi(S)).$$\nWe claim that $\\hat{f}$ and $\\hat{g}$ are submodular (for any fixed $\\xi$).\nNote that the effect\nof $\\sigma^{(i)}$ is just a renaming (or shuffling) of the elements\nof $N \\times X$, and hence for the purpose of proving submodularity we can assume\nthat $\\sigma^{(i)}$ is the identity for all $i$. Then, $\\xi_j(S) = \\frac{1}{n}\n |S \\cap (N \\times \\{j\\})|$. Due to Lemma~\\ref{lemma:smooth-submodular},\nthe property $\\mixdiff{\\hat{F}}{x_i}{x_j} \\leq 0$ (almost everywhere) implies that $\\hat{f}$ is submodular.\nIn addition, if the original instance was monotone, then $\\partdiff{\\hat{F}}{x_j} \\geq 0$\nand $\\hat{f}$ is monotone. The same holds for $\\hat{g}$.\n\nThe value of $\\hat{g}(S)$ for any feasible solution $S \\in \\tilde{{\\cal F}}$\nis bounded by $\\hat{g}(S) = \\hat{G}(\\xi(S)) = \\hat{F}(\\overline{\\xi(S)})\n\\leq \\overline{OPT} + \\epsilon$.\nOn the other hand, let ${\\bf x}^*$ denote a point where the optimum of the continuous problem\n$\\max \\{\\hat{F}({\\bf x}): {\\bf x} \\in P({{\\cal F}}) \\}$ is attained, i.e. 
$\\hat{F}({\\bf x}^*) \\geq OPT - \\epsilon$.\nFor a large enough $n$, we can approximate the point ${\\bf x}^*$ arbitrarily closely\nby a rational vector with $n$ in the denominator,\nwhich corresponds to a discrete solution $S^* \\in \\tilde{{\\cal F}}$ whose\nvalue $\\hat{f}(S^*)$ is at least, say, $OPT - 2 \\epsilon$.\nHence, the ratio between the optima of the {\\em discrete} optimization\nproblems $\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}}\\}$ and $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$\ncan be made at most $\\frac{\\overline{OPT} + \\epsilon}{OPT - 2 \\epsilon}$, i.e. arbitrarily close\nto the symmetry gap $\\gamma = \\frac{\\overline{OPT}}{OPT}$.\n\nBy Lemma~\\ref{lemma:indistinguish}, distinguishing the two instances\n$\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}} \\}$ and $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$,\neven by a randomized algorithm, requires an exponential number of value queries.\nTherefore, we cannot estimate the optimum within a factor better than $\\gamma$.\n\\end{proof}\n\nNext, we prove Theorem~\\ref{thm:multilinear-hardness} (again assuming Lemma~\\ref{lemma:final-fix}),\ni.e.~an analogous hardness result for solving the multilinear optimization problem.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:multilinear-hardness}]\nGiven a symmetric instance $\\max \\{f(S): S \\in {\\cal F}\\}$ and $\\epsilon>0$, we construct\nrefined and modified instances $\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}} \\}$,\n $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\}$,\nderived from the continuous functions $\\hat{F}, \\hat{G}$ provided by Lemma~\\ref{lemma:final-fix},\nexactly as we did in the proof of Theorem~\\ref{thm:general-hardness}.\nLemma~\\ref{lemma:indistinguish} states that these two instances cannot be distinguished using a subexponential number of value queries. Furthermore, the gap between the two modified instances corresponds to the symmetry gap $\\gamma$ of the original instance: $\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}} \\} \\geq OPT - 2 \\epsilon$ and $\\max \\{\\hat{g}(S): S \\in \\tilde{{\\cal F}} \\} \\leq \\overline{OPT} + \\epsilon = \\gamma OPT + \\epsilon$.\n\nNow we consider the multilinear relaxations of the two refined instances, $\\max \\{\\check{F}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}})\\}$\nand $\\max \\{\\check{G}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}})\\}$. Note that $\\check{F}, \\check{G}$, (although related to $\\hat{F},\n\\hat{G}$) are not exactly the same as the functions $\\hat{F}, \\hat{G}$; in particular, they are defined\non a larger (refined) domain. However, we show that the gap between the optima of the two instances remains\nthe same.\n\nFirst, the value of $\\max \\{\\check{F}({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$ is at least the optimum of the discrete problem,\n$\\max \\{\\hat{f}(S): S \\in \\tilde{{\\cal F}} \\}$, which is at least $OPT - 2\\epsilon$ as in the proof of\nTheorem~\\ref{thm:general-hardness}. The value of $\\max \\{\\check{G}({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$ can be analyzed as follows. For any fractional solution ${\\bf x} \\in P({\\cal F})$, the value of $\\check{G}({\\bf x})$\nis the expectation ${\\bf E}[\\hat{g}(\\hat{{\\bf x}})]$, where $\\hat{{\\bf x}}$ is obtained by independently rounding the\ncoordinates of ${\\bf x}$ to $\\{0,1\\}$. Recall that $\\hat{g}$ is obtained by discretizing the continuous function\n$\\hat{G}$ (using Lemma~\\ref{lemma:smooth-submodular}). 
In particular, $\\hat{g}(S) = \\hat{G}(\\tilde{{\\bf x}})$ where\n$\\tilde{x}_i = \\frac{1}{n} |S \\cap (N \\times \\{i\\})|$ is the fraction of the respective cluster contained in $S$, and $|N| = n$ is the size of each cluster (the refinement parameter). If ${\\bf 1}_S = \\hat{{\\bf x}}$, i.e. $S$ is chosen by independent sampling with probabilities according to ${\\bf x}$, then for large $n$ the fractions $\\frac{1}{n}|S \\cap (N \\times \\{i\\})|$ will be strongly concentrated around their expectation. As $\\hat{G}$ is continuous, we get\n$\\lim_{n \\rightarrow \\infty} {\\bf E}[\\hat{g}(\\hat{{\\bf x}})] = \\lim_{n \\rightarrow \\infty} {\\bf E}[\\hat{G}(\\tilde{{\\bf x}})]\n = \\hat{G}({\\bf E}[\\tilde{{\\bf x}}]) = \\hat{G}(\\bar{{\\bf x}})$.\nHere, $\\bar{{\\bf x}}$ is the vector ${\\bf x}$ projected back to the original ground set $X$, where the coordinates\nof each cluster have been averaged. By construction of the refinement, if ${\\bf x} \\in P(\\tilde{{\\cal F}})$ then\n$\\bar{{\\bf x}}$ is in the polytope corresponding to the original instance, $P({\\cal F})$.\nTherefore, $\\hat{G}(\\bar{{\\bf x}}) \\leq\n\\max \\{\\hat{G}({\\bf x}): {\\bf x} \\in P({\\cal F}) \\} \\leq \\gamma OPT + \\epsilon$. For large enough $n$, this means that\n$\\max \\{\\check{G}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}}) \\} \\leq \\gamma OPT + 2 \\epsilon$. This holds for\nan arbitrarily small fixed $\\epsilon>0$, and hence the gap between the instances \n$\\max \\{\\check{F}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}})\\}$ and $\\max \\{\\check{G}({\\bf x}): {\\bf x} \\in P(\\tilde{{\\cal F}})\\}$\n(which cannot be distinguished) can be made arbitrarily close to $\\gamma$.\n\\end{proof}\n\n\n\\paragraph{Hardness for symmetric instances}\nWe remark that since Lemma~\\ref{lemma:final-fix} provides functions $\\hat{F}$ and $\\hat{G}$ symmetric\nunder ${\\cal G}$, the refined instances that we define are invariant with respect to\nthe following symmetries: permute the copies of each element in an arbitrary\nway, and permute the classes of copies according to any permutation\n$\\sigma \\in {\\cal G}$. This means that our hardness results also hold\nfor instances satisfying such symmetry properties.\n\n\\\n\nIt remains to prove Lemma~\\ref{lemma:final-fix}.\nBefore we move to the final construction of $\\hat{F}({\\bf x})$ and $\\hat{G}({\\bf x})$,\nwe construct as an intermediate step a function $\\tilde{F}({\\bf x})$ which is helpful\nin the analysis.\n\n\\\n\n\\paragraph{Construction}\nLet us construct a function $\\tilde{F}({\\bf x})$ which satisfies the following:\n\\begin{itemize}\n\\item For ${\\bf x}$ ``sufficiently close'' to $\\bar{{\\bf x}}$, $\\tilde{F}({\\bf x}) = G({\\bf x})$.\n\\item For ${\\bf x}$ ``sufficiently far away'' from $\\bar{{\\bf x}}$, $\\tilde{F}({\\bf x}) \\simeq F({\\bf x})$.\n\\item The function $\\tilde{F}({\\bf x})$ is ``approximately\" smooth submodular.\n\\end{itemize}\nOnce we have $\\tilde{F}({\\bf x})$, we can fix it to obtain a smooth submodular\nfunction $\\hat{F}({\\bf x})$, which is still close to the original function $F({\\bf x})$.\nWe also fix $G({\\bf x})$ in the same way, to obtain a function $\\hat{G}({\\bf x})$\nwhich is equal to $\\hat{F}({\\bf x})$ whenever ${\\bf x}$ is close to $\\bar{{\\bf x}}$. We defer\nthis step until the end.\n\nWe define $\\tilde{F}({\\bf x})$ as a convex linear combination\nof $F({\\bf x})$ and $G({\\bf x})$, guided by a ``smooth transition'' function, depending\non the distance of ${\\bf x}$ from $\\bar{{\\bf x}}$. 
The form that we use is the following:\\footnote{We remark\nthat a construction analogous to \\cite{MSV08}\nwould be $\\tilde{F}({\\bf x}) = F({\\bf x}) - \\phi(H({\\bf x}))$ where $H({\\bf x}) = F({\\bf x}) - G({\\bf x})$.\nWhile this makes the analysis easier in \\cite{MSV08}, it cannot be used in general.\nRoughly speaking, the problem is that in general the partial derivatives of $H({\\bf x})$\nare not bounded in any way by the value of $H({\\bf x})$.}\n$$ \\tilde{F}({\\bf x}) = (1-\\phi(D({\\bf x}))) F({\\bf x}) + \\phi(D({\\bf x})) G({\\bf x}) $$\nwhere $\\phi:{\\boldmath R}_+ \\rightarrow [0,1]$ is a suitable smooth function,\nand\n$$ D({\\bf x}) = ||{\\bf x} - \\bar{{\\bf x}}||^2 = \\sum_i (x_i - \\bar{x}_i)^2.$$\nThe idea is that when ${\\bf x}$ is close to $\\bar{{\\bf x}}$, $\\phi(D({\\bf x}))$ should be close to $1$,\ni.e. the convex linear combination should give most of the weight to $G({\\bf x})$.\nThe weight should shift gradually to $F({\\bf x})$ as ${\\bf x}$ gets further away from $\\bar{{\\bf x}}$.\nTherefore, we define $\\phi(t) = 1$ in a small interval $t \\in [0,\\delta]$,\nand $\\phi(t)$ tends to $0$ as $t$ increases.\nThis guarantees that $\\tilde{F}({\\bf x}) = G({\\bf x})$ whenever\n$D({\\bf x}) = ||{\\bf x} - \\bar{{\\bf x}}||^2 \\leq \\delta$.\nWe defer the precise construction of $\\phi(t)$ to Lemma~\\ref{lemma:phi-construction},\nafter we determine what properties we need from $\\phi(t)$.\nNote that regardless of the definition of $\\phi(t)$, $\\tilde{F}({\\bf x})$\nis symmetric with respect to ${\\cal G}$, since $F({\\bf x}), G({\\bf x})$ and $D({\\bf x})$ are.\n\n\\\n\n\\paragraph{Analysis of the construction}\nDue to the construction of $\\tilde{F}({\\bf x})$, it is clear that when\n$D({\\bf x}) = ||{\\bf x}-\\bar{{\\bf x}}||^2$ is small, $\\tilde{F}({\\bf x}) = G({\\bf x})$.\nWhen $D({\\bf x})$ is large, $\\tilde{F}({\\bf x}) \\simeq F({\\bf x})$.\nThe main issue, however, is whether we can say something about the first\nand second partial derivatives of $\\tilde{F}$. This is crucial for\nthe properties of monotonicity and submodularity, which we would like to preserve.\nLet us write $\\tilde{F}({\\bf x})$ as\n$$ \\tilde{F}({\\bf x}) = F({\\bf x}) - \\phi(D({\\bf x})) H({\\bf x}) $$\nwhere $H({\\bf x}) = F({\\bf x}) - G({\\bf x})$. By differentiating once, we get\n\\begin{equation}\n\\label{eq:partdiff}\n\\partdiff{\\tilde{F}}{x_i} = \\partdiff{F}{x_i} - \\phi(D({\\bf x})) \\partdiff{H}{x_i}\n - \\phi'(D({\\bf x})) \\partdiff{D}{x_i} H({\\bf x})\n\\end{equation}\nand by differentiating twice,\n\\begin{eqnarray}\n\\label{eq:mixdiff}\n \\mixdiff{\\tilde{F}}{x_i}{x_j} & = & \\mixdiff{F}{x_i}{x_j}\n- \\phi({D}({\\bf x})) \\mixdiff{H}{x_i}{x_j}\n - \\phi''({D}({\\bf x})) \\partdiff{{D}}{x_i} \\partdiff{{D}}{x_j} H({\\bf x}) \\\\\n & & - \\phi'({D}({\\bf x})) \\left( \\partdiff{{D}}{x_j} \\partdiff{H}{x_i}\n + \\mixdiff{{D}}{x_i}{x_j} H({\\bf x}) + \\partdiff{{D}}{x_i} \\partdiff{H}{x_j} \\right) \\nonumber.\n\\end{eqnarray}\n\nThe first two terms on the right-hand sides of (\\ref{eq:partdiff}) and\n(\\ref{eq:mixdiff}) are not bothering us, because they form\nconvex linear combinations of the derivatives of $F({\\bf x})$ and $G({\\bf x})$,\nwhich have the properties that we need. 
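Explicitly, since $H({\bf x}) = F({\bf x}) - G({\bf x})$ and $\phi(D({\bf x})) \in [0,1]$, these first two terms can be written as convex combinations, e.g.\n$$ \partdiff{F}{x_i} - \phi(D({\bf x})) \partdiff{H}{x_i}\n = (1-\phi(D({\bf x}))) \partdiff{F}{x_i} + \phi(D({\bf x})) \partdiff{G}{x_i}, $$\nand analogously for the second partial derivatives, so they inherit the sign conditions satisfied by $F({\bf x})$ and $G({\bf x})$.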
The remaining terms might cause\nproblems, however, and we need to estimate them.\n\nOur strategy is to define $\phi(t)$ in such a way that it eliminates\nthe influence of partial derivatives of $D$ and $H$ where they become too large.\nRoughly speaking, $D$ and $H$ have negligible partial derivatives when ${\bf x}$\nis very close to $\bar{{\bf x}}$. As ${\bf x}$ moves away from $\bar{{\bf x}}$, the partial\nderivatives grow but then the behavior of $\phi(t)$ must be such that\ntheir influence is suppressed.\n\nWe start with the following important claim.\footnote{\nWe remind the reader that $\nabla F$, the gradient of $F$, is a vector\nwhose coordinates are the first partial derivatives $\partdiff{F}{x_i}$.\nWe denote by $\nabla F |_{{\bf x}}$ the gradient evaluated at ${\bf x}$.}\n\n\begin{lemma}\n\label{lemma:grad-symmetry}\nAssume that $F:[0,1]^X \rightarrow {\boldmath R}$ is differentiable and invariant under a group of\npermutations of coordinates ${\cal G}$. Let $G({\bf x}) = F(\bar{{\bf x}})$,\nwhere $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$.\nThen for any ${\bf x} \in [0,1]^X$,\n $$ \nabla{G}|_{\bf x} = \nabla{F}|_{\bar{{\bf x}}}.$$\n\end{lemma}\n\n\begin{proof}\nTo avoid confusion, we use ${\bf x}$ for the arguments of the functions $F$ and $G$,\nand ${\bf u}$, $\bar{{\bf u}}$, etc. for points where their partial derivatives are evaluated.\nTo rephrase, we want to prove that for any point ${\bf u}$ and any coordinate $i$,\nthe partial derivative of $G$ at ${\bf u}$ equals the partial derivative of $F$ at $\bar{{\bf u}}$:\n$\partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = \partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}}$.\n\nFirst, consider $F({\bf x})$. We assume that $F({\bf x})$ is invariant under\na group of permutations of coordinates $\cal G$,\ni.e. $F({\bf x}) = F(\sigma({\bf x}))$ for any $\sigma \in {\cal G}$.\nDifferentiating both sides at ${\bf x}={\bf u}$, we get by the chain rule:\n$$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf u}} =\n \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\sigma({\bf u})} \partdiff{}{x_i} (\sigma({\bf x}))_j\n = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\sigma({\bf u})} \partdiff{x_{\sigma(j)}}{x_i}. $$\nHere, $\partdiff{x_{\sigma(j)}}{x_i} = 1$ if $\sigma(j) = i$, and $0$ otherwise.\nTherefore,\n$$ \partdiff{F}{x_i} \Big|_{{\bf x}={\bf u}} =\n \partdiff{F}{x_{\sigma^{-1}(i)}} \Big|_{{\bf x}=\sigma({\bf u})}. $$\nNow, if we evaluate the left-hand side at $\bar{{\bf u}}$, the right-hand side is evaluated\nat $\sigma(\bar{{\bf u}}) = \bar{{\bf u}}$, and hence for any $i$ and any $\sigma \in {\cal G}$,\n\begin{equation}\n\label{eq:F-inv}\n\partdiff{F}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} = \partdiff{F}{x_{\sigma^{-1}(i)}} \Big|_{{\bf x}=\bar{{\bf u}}}.\n\end{equation}\nTurning to $G({\bf x}) = F(\bar{{\bf x}})$, let us write $\partdiff{G}{x_i}$\nusing the chain rule:\n$$ \partdiff{G}{x_i} \Big|_{{\bf x}={\bf u}} = \partdiff{}{x_i} F(\bar{{\bf x}}) \Big|_{{\bf x}={\bf u}}\n = \sum_j \partdiff{F}{x_j} \Big|_{{\bf x}=\bar{{\bf u}}} \cdot \partdiff{\bar{x}_j}{x_i}.
$$\nWe have $\\bar{x}_j = {\\bf E}_{\\sigma \\in {\\cal G}}[{\\bf x}_{\\sigma(j)}]$, and\nso\n$$ \\partdiff{G}{x_i} \\Big|_{{\\bf x}={\\bf u}} = \\sum_j \\partdiff{F}{x_j} \\Big|_{{\\bf x}=\\bar{{\\bf u}}}\n \\cdot \\partdiff{}{x_i} {\\bf E}_{\\sigma \\in {\\cal G}}[x_{\\sigma(j)}] \n = {\\bf E}_{\\sigma \\in {\\cal G}} \\left[\\sum_j \\partdiff{F}{x_j} \\Big|_{x=\\bar{u}}\n \\cdot \\partdiff{x_{\\sigma(j)}}{x_i} \\right]. $$\nAgain, $\\partdiff{x_{\\sigma(j)}}{x_i} = 1$ if $\\sigma(j) = i$ and $0$ otherwise.\nConsequently, we obtain\n$$ \\partdiff{G}{x_i} \\Big|_{{\\bf x}={\\bf u}} = {\\bf E}_{\\sigma \\in {\\cal G}}\n \\left[ \\partdiff{F}{x_{\\sigma^{-1}(i)}} \\Big|_{{\\bf x}=\\bar{{\\bf u}}} \\right]\n = \\partdiff{F}{x_i} \\Big|_{{\\bf x}=\\bar{{\\bf u}}} $$\nwhere we used Eq. (\\ref{eq:F-inv}) to remove the dependence on $\\sigma \\in {\\cal G}$.\n\\end{proof}\n\n\nObserve that the symmetrization operation $\\bar{{\\bf x}}$ is idempotent, i.e.\n$\\bar{\\bar{{\\bf x}}} = \\bar{{\\bf x}}$.\nBecause of this, we also get $\\nabla{G}|_{\\bar{{\\bf x}}} = \\nabla{F}|_{\\bar{{\\bf x}}}$.\nNote that $G(\\bar{{\\bf x}}) = F(\\bar{{\\bf x}})$ follows from the definition,\nbut it is not obvious that the same holds for gradients, since their\ndefinition involves points where $G({\\bf x}) \\neq F({\\bf x})$. For second partial\nderivatives, the equality no longer holds, as can be seen from\na simple example such as $F(x_1,x_2) = 1 - (1-x_1)(1-x_2)$,\n$G(x_1,x_2) = 1 - (1-\\frac{x_1+x_2}{2})^2$.\n\nNext, we show that the functions $F({\\bf x})$ and $G({\\bf x})$ are very similar\nin the close vicinity of the region where $\\bar{{\\bf x}} = {\\bf x}$. Recall our definitions:\n$H({\\bf x}) = F({\\bf x}) - G({\\bf x})$, $D({\\bf x}) = ||{\\bf x} - \\bar{{\\bf x}}||^2$.\nBased on Lemma~\\ref{lemma:grad-symmetry}, we know that $H(\\bar{{\\bf x}}) = 0$\nand $\\nabla H|_{\\bar{{\\bf x}}} = 0$. In the following lemmas,\nwe present bounds on $H({\\bf x})$, $D({\\bf x})$ and their partial derivatives.\n\n\n\\begin{lemma}\n\\label{lemma:H-bounds}\nLet $f:2^X \\rightarrow [0,M]$ be invariant under a permutation group $\\cal G$.\nLet $\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$,\n$D({\\bf x}) = ||{\\bf x} - \\bar{{\\bf x}}||^2$ and $H({\\bf x}) = F({\\bf x}) - G({\\bf x})$ where $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$\nand $G({\\bf x}) = F(\\bar{{\\bf x}})$. Then\n\\begin{enumerate}\n\\item $ |\\mixdiff{H}{x_i}{x_j}| \\leq 8M $ everywhere, for all $i,j$;\n\\item $ ||\\nabla H({\\bf x})|| \\leq 8M|X| \\sqrt{D({\\bf x})}$;\n\\item $ |H({\\bf x})| \\leq 8M|X| \\cdot D({\\bf x}). $\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFirst, let us get a bound on the second partial derivatives.\nAssuming without loss of generality $x_i=x_j=0$, we have\\footnote{${\\bf x} \\vee {\\bf y}$\ndenotes the coordinate-wise maximum, $({\\bf x} \\vee {\\bf y})_i = \\max \\{x_i,y_i\\}$ and\n${\\bf x} \\wedge {\\bf y}$ denotes the coordinate-wise minimum, $({\\bf x} \\wedge {\\bf y})_i = \\min \\{x_i,y_i\\}$.}\n$$ \\mixdiff{F}{x_i}{x_j} = {\\bf E}[f(\\hat{{\\bf x}} \\vee ({\\bf e}_i+{\\bf e}_j)) -\n f(\\hat{{\\bf x}} \\vee {\\bf e}_i) - f(\\hat{{\\bf x}} \\vee {\\bf e}_j) + f(\\hat{{\\bf x}})] $$\n(see \\cite{Vondrak08}). 
Consequently,\n$$ \Big| \mixdiff{F}{x_i}{x_j} \Big| \leq 4 \max |f(S)| = 4 M.$$\nIt is a little more involved to analyze $\mixdiff{G}{x_i}{x_j}$.\nSince $G({\bf x}) = F(\bar{{\bf x}})$ and $\bar{{\bf x}} = {\bf E}_{\sigma \in {\cal G}}[\sigma({\bf x})]$,\nwe get by the chain rule:\n$$ \mixdiff{G}{x_i}{x_j} = \sum_{k,\ell} \mixdiff{F}{x_k}{x_\ell}\n \partdiff{\bar{x}_k}{x_i} \partdiff{\bar{x}_\ell}{x_j} \n = {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \sum_{k,\ell} \mixdiff{F}{x_k}{x_\ell}\n \partdiff{x_{\sigma(k)}}{x_i} \partdiff{x_{\tau(\ell)}}{x_j} \right].$$\nIt is convenient here to use the Kronecker symbol, $\delta_{i,j}$,\nwhich is $1$ if $i=j$ and $0$ otherwise. Note that\n$\partdiff{x_{\sigma(k)}}{x_i} = \delta_{i,\sigma(k)} = \delta_{\sigma^{-1}(i),k}$,\netc. Using this notation, we get\n$$ \mixdiff{G}{x_i}{x_j} = {\bf E}_{\sigma,\tau \in {\cal G}} \left[ \sum_{k,\ell}\n\mixdiff{F}{x_k}{x_\ell} \delta_{\sigma^{-1}(i),k} \delta_{\tau^{-1}(j),\ell} \right] \n = {\bf E}_{\sigma,\tau \in {\cal G}}\n \left[ \mixdiff{F}{x_{\sigma^{-1}(i)}}{x_{\tau^{-1}(j)}} \right], $$\n$$ \Big| \mixdiff{G}{x_i}{x_j} \Big| \leq {\bf E}_{\sigma,\tau \in {\cal G}}\n\left[ \Big| \mixdiff{F}{x_{\sigma^{-1}(i)}}{x_{\tau^{-1}(j)}} \Big| \right]\n \leq 4 M $$\nand therefore\n$$ \Big| \mixdiff{H}{x_i}{x_j} \Big|\n = \Big| \mixdiff{F}{x_i}{x_j} - \mixdiff{G}{x_i}{x_j} \Big|\n \leq 8 M.$$\n\nNext, we estimate $\partdiff{H}{x_i}$ at a given point ${\bf u}$, depending\non its distance from $\bar{{\bf u}}$. Consider\nthe line segment between $\bar{{\bf u}}$ and ${\bf u}$. The function $H({\bf x}) = F({\bf x}) - G({\bf x})$\nis $C^\infty$-differentiable, and hence we can apply the mean value theorem\nto $\partdiff{H}{x_i}$: There exists a point $\tilde{{\bf u}}$ on the line segment\n$[\bar{{\bf u}}, {\bf u}]$ such that\n$$ \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} - \partdiff{H}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}}\n = \sum_j \mixdiff{H}{x_j}{x_i} \Big|_{{\bf x}=\tilde{{\bf u}}} (u_j-\bar{u}_j).$$\nRecall that $\partdiff{H}{x_i} \Big|_{{\bf x}=\bar{{\bf u}}} = 0$.\nApplying the Cauchy-Schwarz inequality to the right-hand side, we get\n$$ \left( \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} \right)^2\n \leq \sum_j \left( \mixdiff{H}{x_j}{x_i} \Big|_{{\bf x}=\tilde{{\bf u}}} \right)^2\n || {\bf u} - \bar{{\bf u}} ||^2 \leq (8M)^2 |X| ||{\bf u} - \bar{{\bf u}}||^2.$$\nAdding up over all $i \in X$, we obtain\n$$ || \nabla H({\bf u}) ||^2 = \sum_{i}\left( \partdiff{H}{x_i} \Big|_{{\bf x}={\bf u}} \right)^2\n \leq (8M|X|)^2 ||{\bf u} - \bar{{\bf u}}||^2.$$\nFinally, we estimate the growth of $H({\bf u})$. Again, by the mean value theorem,\nthere is a point $\tilde{{\bf u}}$ on the line segment $[\bar{{\bf u}},{\bf u}]$, such that\n$$ H({\bf u}) - H(\bar{{\bf u}}) = ({\bf u} - \bar{{\bf u}}) \cdot \nabla H(\tilde{{\bf u}}).$$\nUsing $H(\bar{{\bf u}}) = 0$, the Cauchy-Schwarz inequality and the above bound\non $\nabla H$,\n$$ (H({\bf u}))^2 \leq ||\nabla H({\tilde{{\bf u}}})||^2 ||{\bf u} - \bar{{\bf u}}||^2\n \leq (8M|X|)^2 ||\tilde{{\bf u}}-\bar{{\bf u}}||^2 ||{\bf u}-\bar{{\bf u}}||^2.
$$\nClearly, $||\\tilde{{\\bf u}} - \\bar{{\\bf u}}|| \\leq ||{\\bf u} - \\bar{{\\bf u}}||$, and therefore\n$$ |H({\\bf u})| \\leq 8M|X| \\cdot ||{\\bf u}-\\bar{{\\bf u}}||^2.$$\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:D-bounds}\nFor the function $D({\\bf x}) = ||{\\bf x} - \\bar{{\\bf x}}||^2$, we have\n\\begin{enumerate}\n\\item $\\nabla D = 2({\\bf x} - \\bar{{\\bf x}})$, and therefore\n $||\\nabla D|| = 2 \\sqrt{D({\\bf x})}$.\n\\item For all $i,j$, $|\\mixdiff{D}{x_i}{x_j}| \\leq 2$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet us write $D({\\bf x})$ as\n$$ D({\\bf x}) = \\sum_i (x_i - \\bar{x}_i)^2 = \\sum_i {\\bf E}_{\\sigma \\in {\\cal G}}[x_i - x_{\\sigma(i)}]\n {\\bf E}_{\\tau \\in {\\cal G}}[x_i - x_{\\tau(i)}]. $$\nTaking the first partial derivative,\n$$ \\partdiff{D}{x_j} = 2 \\sum_i {\\bf E}_{\\sigma \\in {\\cal G}}[x_i - x_{\\sigma(i)}]\n \\partdiff{}{x_j} {\\bf E}_{\\tau \\in {\\cal G}}[x_i - x_{\\tau(i)}].$$\nAs before, we have $\\partdiff{x_i}{x_j} = \\delta_{ij}$.\nUsing this notation, we get\n\\begin{eqnarray*}\n\\partdiff{D}{x_j} & = & 2 \\sum_i {\\bf E}_{\\sigma \\in {\\cal G}}[x_i - x_{\\sigma(i)}]\n {\\bf E}_{\\tau \\in {\\cal G}}[\\delta_{ij} - \\delta_{\\tau(i),j}] \\\\\n & = & 2 \\sum_i {\\bf E}_{\\sigma,\\tau \\in {\\cal G}}[(x_i - x_{\\sigma(i)})\n (\\delta_{ij} - \\delta_{i,\\tau^{-1}(j)})] \\\\\n & = & 2 \\ {\\bf E}_{\\sigma,\\tau \\in {\\cal G}}[x_j - x_{\\sigma(j)} - x_{\\tau^{-1}(j)}\n + x_{\\sigma(\\tau^{-1}(j))}].\n\\end{eqnarray*}\nSince the distributions of $\\sigma(j)$, $\\tau^{-1}(j)$ and $\\sigma(\\tau^{-1}(j))$\nare the same, we obtain\n$$ \\partdiff{D}{x_j} = 2 \\ {\\bf E}_{\\sigma \\in {\\cal G}}[x_j - x_{\\sigma(j)}]\n = 2(x_j - \\bar{x}_j) $$\nand\n$$ ||\\nabla D||^2 = \\sum_j \\Big| \\partdiff{D}{x_j} \\Big|^2\n = 4 \\sum_j (x_j - \\bar{x}_j)^2 = 4 D({\\bf x}).$$\n\nFinally, the second partial derivatives are\n$$ \\mixdiff{D}{x_i}{x_j} = 2 \\partdiff{}{x_i} (x_j - \\bar{x}_j)\n = 2 \\partdiff{}{x_i} {\\bf E}_{\\sigma \\in {\\cal G}}[x_j - x_{\\sigma(j)}]\n = 2 {\\bf E}_{\\sigma \\in {\\cal G}}[\\delta_{ij} - \\delta_{i,\\sigma(j)}] $$\nwhich is clearly bounded by $2$ in the absolute value.\n\\end{proof}\n\nNow we come back to $\\tilde{F}({\\bf x})$ and its partial derivatives.\nRecall equations (\\ref{eq:partdiff}) and (\\ref{eq:mixdiff}).\nThe problematic terms are those involving $\\phi'(D({\\bf x}))$ and $\\phi''(D({\\bf x}))$.\nUsing our bounds on $H({\\bf x})$, $D({\\bf x})$ and their derivatives, however,\nwe notice that $\\phi'(D({\\bf x}))$ always appears with factors on the order\nof $D({\\bf x})$ and $\\phi''(D({\\bf x}))$ appears with factors on the order of $(D({\\bf x}))^2$.\nThus, it is sufficient if $\\phi(t)$ is defined so that we have control\nover $t \\phi'(t)$ and $t^2 \\phi''(t)$. 
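For instance, any function with a power-law tail $\phi(t) = c \, t^{-2\alpha}$ satisfies\n$$ t \phi'(t) = -2\alpha \, \phi(t), \ \ \ \ \ t^2 \phi''(t) = 2\alpha (1+2\alpha) \, \phi(t), $$\nso both quantities are $O(\alpha)$ whenever $\phi(t) \leq 1$; the construction below uses exactly such a tail (shifted by $1$), joined to the constant value $1$ near the origin by a quadratic piece.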
The following lemma describes\nthe function that we need.\n\n\begin{lemma}\n\label{lemma:phi-construction}\nFor any $\alpha, \beta > 0$, there is $\delta \in (0, \beta)$\nand a function $\phi:{\boldmath R}_+ \rightarrow [0,1]$ with an absolutely continuous first derivative\nsuch that\n\begin{enumerate}\n\item For $t \leq \delta$, $\phi(t) = 1$.\n\item For $t \geq \beta$, $\phi(t) < e^{-1/\alpha}$.\n\item For all $t \geq 0$, $|t \phi'(t)| \leq 4 \alpha$.\n\item For almost all $t \geq 0$, $|t^2 \phi''(t)| \leq 10 \alpha$.\n\end{enumerate}\n\end{lemma}\n\n\begin{proof}\nFirst, observe that if we prove the lemma for some particular value $\beta>0$, we can also prove it\nfor any other value $\beta'>0$, by modifying the function as follows: $\tilde{\phi}(t) = \phi(\beta t / \beta')$.\nThis corresponds to a scaling of the parameter $t$ by $\beta' / \beta$. Observe that then $|t \tilde{\phi}'(t)| = |\frac{\beta}{\beta'} t \phi'(\beta t / \beta')| \leq 4 \alpha$ and $|t^2 \tilde{\phi}''(t)| = |(\frac{\beta}{\beta'})^2 t^2 \phi''(\beta t / \beta')| \leq 10 \alpha$, so the conditions are still satisfied.\n\nTherefore, we can assume without\nloss of generality that $\beta>0$ is a value of our choice,\nfor example $\beta = e^{1/(2\alpha^2)}+1$.\nIf we want to prove the result for a different value of $\beta$,\nwe can just scale the argument $t$ and the constant $\delta$\nby $\beta / (e^{1/(2\alpha^2)}+1)$; the bounds on\n$t \phi'(t)$ and $t^2 \phi''(t)$ still hold.\n\nWe can assume that $\alpha \in (0,\frac18)$ because for larger $\alpha$,\nthe statement only gets weaker. As we argued, we can assume WLOG that\n$\beta = e^{1/(2\alpha^2)}+1$. We set $\delta = 1$ and\n$\delta_2 = 1 + (1+\alpha)^{-1/2} \leq 2$.\n(We remind the reader that in general these values will be scaled depending\non the actual value of $\beta$.) We define the function as follows:\n\begin{enumerate}\n\item $\phi(t) = 1$ for $t \in [0,\delta]$.\n\item $\phi(t) = 1 - \alpha (t-1)^2$ for $t \in [\delta, \delta_2]$.\n\item $\phi(t) = (1+\alpha)^{-1-\alpha} (t - 1)^{-2 \alpha}$\n for $t \in [\delta_2, \infty)$.\n\end{enumerate}\n\n\noindent\nLet's verify the properties of $\phi(t)$.\nFor $t \in [0,\delta]$, we have\n$\phi'(t) = \phi''(t) = 0$. For $t \in [\delta,\delta_2]$, we have\n$$ \phi'(t) = -2\alpha \left(t-1 \right), \ \ \ \ \ \ \n \phi''(t) = -2\alpha, $$\nand for $t \in [\delta_2, \infty)$,\n$$ \phi'(t) = -{2\alpha}{(1+\alpha)^{-1-\alpha}} \left( t-1 \right)^{-2\alpha-1}, $$\n$$ \phi''(t) = {2\alpha(1+2\alpha)}{(1+\alpha)^{-1-\alpha}} \left( t-1 \right)^{-2\alpha-2}. $$\nFirst, we check that the values and first derivatives agree at the breakpoints.\nFor $t = \delta = 1$, we get $\phi(1) = 1$ and $\phi'(1) = 0$.\nFor $t = \delta_2 = 1 + (1+\alpha)^{-1/2}$, we get\n$\phi(\delta_2) = (1+\alpha)^{-1}$ and\n$\phi'(\delta_2) = -2 \alpha (1+\alpha)^{-1/2}$.\nNext, we need to check that $\phi(t)$ is very small\nfor $t \geq \beta$. The function is decreasing for $t > \beta$,\ntherefore it is enough to check $t = \beta = e^{1/(2\alpha^2)}+1$:\n$$ \phi\left(\beta \right) = (1+\alpha)^{-1-\alpha} (\beta - 1)^{-2\alpha}\n \leq (\beta-1)^{-2\alpha} = e^{-1/\alpha}.
$$\nThe derivative bounds are satisfied trivially for $t \\in [0,\\delta]$.\nFor $t \\in [\\delta, \\delta_2]$, using $t \\leq \\delta_2 = 1 + (1+\\alpha)^{-1\/2}$,\n$$ |t \\phi'(t)| = t \\cdot 2 \\alpha (t-1)\n \\leq 2 \\alpha (1 + (1+\\alpha)^{-1\/2}) (1+\\alpha)^{-1\/2} \\leq 4 \\alpha $$\nand using $\\alpha \\in (0,\\frac18)$,\n$$ |t^2 \\phi''(t)| = t^2 \\cdot 2 \\alpha \\leq 2 \\alpha (1 + (1+\\alpha)^{-1\/2})^2\n \\leq 8 \\alpha.$$\nFor $t \\in [\\delta_2,\\infty)$, using $t-1 \\geq (1+\\alpha)^{-1\/2}$,\n\\begin{eqnarray*}\n|t \\phi'(t)| & = & t \\cdot \\frac{2 \\alpha}{(1+\\alpha)^{1+\\alpha}} \\left(t-1 \\right)^{-2\\alpha-1} \n = \\frac{2 \\alpha}{1+\\alpha} \\left( (1 + \\alpha)\n\\left( t-1 \\right)^2 \\right)^{-\\alpha} \\frac{t}{t-1} \\\\\n& \\leq & \\frac{2 \\alpha}{1 + \\alpha} \\cdot \\frac{t}{t-1}\n \\leq \\frac{2 \\alpha}{1+\\alpha} \\cdot \\frac{1 + (1+\\alpha)^{-1\/2}}{(1+\\alpha)^{-1\/2}}\n = 2 \\alpha \\cdot \\frac{1 + (1+\\alpha)^{-1\/2}}{(1+\\alpha)^{1\/2}} \\leq 4 \\alpha\n\\end{eqnarray*}\nand finally, using $\\alpha \\in (0,\\frac18)$,\n\\begin{eqnarray*}\n |t^2 \\phi''(t)| & = & t^2 \\cdot \\frac{2\\alpha(1+2\\alpha)}{(1+\\alpha)^{1+\\alpha}}\n \\left( t-1 \\right)^{-2\\alpha-2} \n = \\frac{2\\alpha(1+2\\alpha)}{1+\\alpha}\n \\left( (1 + \\alpha) \\left( t-1 \\right)^2 \\right)^{-\\alpha}\n \\left( \\frac{t}{t-1} \\right)^2 \\\\\n& \\leq & \\frac{2\\alpha(1+2\\alpha)}{1+\\alpha} \\cdot\n \\left( \\frac{1 + (1+\\alpha)^{-1\/2}}{(1+\\alpha)^{-1\/2}} \\right)^2\n \\leq 8 \\alpha (1 + 2 \\alpha) \\leq 10 \\alpha.\n\\end{eqnarray*}\n\\end{proof}\n\nUsing Lemmas~\\ref{lemma:H-bounds}, \\ref{lemma:D-bounds} and \\ref{lemma:phi-construction},\nwe now prove bounds on the derivatives of $\\tilde{F}({\\bf x})$.\n\n\\begin{lemma}\n\\label{lemma:F-bounds}\nLet $ \\tilde{F}({\\bf x}) = (1-\\phi(D({\\bf x}))) F({\\bf x}) + \\phi(D({\\bf x})) G({\\bf x}) $\nwhere $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$, $f:2^X \\rightarrow [0,M]$,\n$G({\\bf x}) = F(\\bar{{\\bf x}})$, $D({\\bf x})=||{\\bf x}-\\bar{{\\bf x}}||^2$\nare as above and $\\phi(t)$ is provided by Lemma~\\ref{lemma:phi-construction}\nfor a given $\\alpha>0$.\nThen, whenever $\\mixdiff{F}{x_i}{x_j} \\leq 0$,\n$$\n \\mixdiff{\\tilde{F}}{x_i}{x_j}\n \\leq 512 M |X| \\alpha. $$\nIf, in addition, $\\partdiff{F}{x_i} \\geq 0$, then\n$$\n\\partdiff{\\tilde{F}}{x_i}\n \\geq -64 M |X| \\alpha. $$\n\\end{lemma}\n\n\\begin{proof}\nWe have\n$ \\tilde{F}({\\bf x}) = F({\\bf x}) - \\phi(D({\\bf x})) H({\\bf x}) $\nwhere $H({\\bf x}) = F({\\bf x}) - G({\\bf x})$. By differentiating once, we get\n$$ \\partdiff{\\tilde{F}}{x_i} = \\partdiff{F}{x_i}\n - \\phi(D({\\bf x})) \\partdiff{H}{x_i} - \\phi'(D({\\bf x})) \\partdiff{D}{x_i} H({\\bf x}), $$\ni.e.\n$$ \\Big| \\partdiff{\\tilde{F}}{x_i} -\n \\left(\\partdiff{F}{x_i} - \\phi(D({\\bf x})) \\partdiff{H}{x_i} \\right) \\Big|\n = \\Big| \\phi'(D({\\bf x})) \\partdiff{D}{x_i} H({\\bf x}) \\Big|.$$\nBy Lemma~\\ref{lemma:H-bounds} and \\ref{lemma:D-bounds},\nwe have $|\\partdiff{D}{x_i}| = 2 |x_i - \\bar{x}_i| \\leq 2$\nand $|H({\\bf x})| \\leq 8M|X| \\cdot D({\\bf x})$. 
Therefore,\n$$ \\Big| \\partdiff{\\tilde{F}}{x_i} -\n \\left(\\partdiff{F}{x_i} - \\phi(D({\\bf x})) \\partdiff{H}{x_i} \\right) \\Big|\n \\leq 16 M |X| D({\\bf x}) \\cdot \\Big| \\phi'(D({\\bf x})) \\Big|.$$\nBy Lemma~\\ref{lemma:phi-construction}, $|D({\\bf x}) \\phi'(D({\\bf x}))| \\leq 4 \\alpha$, and hence\n$$ \\Big| \\partdiff{\\tilde{F}}{x_i} -\n \\left(\\partdiff{F}{x_i} - \\phi(D({\\bf x})) \\partdiff{H}{x_i} \\right) \\Big|\n \\leq 64 M |X| \\alpha.$$\nAssuming that $\\partdiff{F}{x_i} \\geq 0$, we also have $\\partdiff{G}{x_i} \\geq 0$\n(see Lemma~\\ref{lemma:grad-symmetry}) and therefore,\n$ \\partdiff{F}{x_i} - \\phi(D({\\bf x})) \\partdiff{H}{x_i}\n = (1-\\phi(D({\\bf x}))) \\partdiff{F}{x_i} + \\phi(D({\\bf x})) \\partdiff{G}{x_i} \\geq 0$.\nConsequently,\n$$ \\partdiff{\\tilde{F}}{x_i} \\geq - 64 M |X| \\alpha.$$\nBy differentiating $\\tilde{F}$ twice, we obtain\n\\begin{eqnarray*}\n\\mixdiff{\\tilde{F}}{x_i}{x_j} & = &\n \\mixdiff{F}{x_i}{x_j} - \\phi({D}({\\bf x})) \\mixdiff{H}{x_i}{x_j}\n - \\phi''({D}({\\bf x})) \\partdiff{{D}}{x_i} \\partdiff{{D}}{x_j} H({\\bf x}) \\\\\n& & - \\phi'({D}({\\bf x})) \\left( \\partdiff{{D}}{x_j} \\partdiff{H}{x_i}\n + \\mixdiff{{D}}{x_i}{x_j} H({\\bf x})\n + \\partdiff{{D}}{x_i} \\partdiff{H}{x_j} \\right).\n\\end{eqnarray*}\nAgain, we use Lemma~\\ref{lemma:H-bounds} and \\ref{lemma:D-bounds}\nto bound $|H({\\bf x})| \\leq 8M|X| D({\\bf x})$, $|\\partdiff{H}{x_i}| \\leq 8M|X| \\sqrt{D({\\bf x})}$,\n $|\\mixdiff{H}{x_i}{x_j}| \\leq 8M$, $|\\partdiff{D}{x_i}| \\leq 2 \\sqrt{D({\\bf x})}$\nand $|\\mixdiff{D}{x_i}{x_j}| \\leq 2$. We get\n\\begin{eqnarray*}\n\\Big| \\mixdiff{\\tilde{F}}{x_i}{x_j} - \\mixdiff{F}{x_i}{x_j} +\n \\phi({D}({\\bf x})) \\mixdiff{H}{x_i}{x_j} \\Big|\n \\leq 32 M|X| \\Big| D^2({\\bf x}) \\phi''({D}({\\bf x})) \\Big|\n + 48 M|X| \\Big| D({\\bf x}) \\phi'({D}({\\bf x})) \\Big|\n\\end{eqnarray*}\nObserve that $\\phi'(D({\\bf x}))$ appears with $D({\\bf x})$ and\n$\\phi''(D({\\bf x}))$ appears with $(D({\\bf x}))^2$.\nBy Lemma~\\ref{lemma:phi-construction}, $|D({\\bf x}) \\phi'(D({\\bf x}))| \\leq 4 \\alpha$\nand $|D^2({\\bf x}) \\phi''(D({\\bf x}))| \\leq 10 \\alpha$. Therefore,\n\\begin{eqnarray*}\n\\Big| \\mixdiff{\\tilde{F}}{x_i}{x_j} - \\left( \\mixdiff{F}{x_i}{x_j}\n - \\phi({D}({\\bf x})) \\mixdiff{H}{x_i}{x_j} \\right) \\Big|\n \\leq 320 M|X|\\alpha + 192 M|X| \\alpha = 512 M |X| \\alpha.\n\\end{eqnarray*}\nIf $\\mixdiff{F}{x_i}{x_j} \\leq 0$ for all $i,j$, then also $\\mixdiff{G}{x_i}{x_j} =\n {\\bf E}_{\\sigma,\\tau \\in {\\cal G}}[\\mixdiff{F}{x_{\\sigma^{-1}(i)}}{x_{\\tau^{-1}(j)}}] \\leq 0$\n (see the proof of Lemma~\\ref{lemma:H-bounds}). 
Also, \n$\mixdiff{F}{x_i}{x_j} - \phi({D}({\bf x})) \mixdiff{H}{x_i}{x_j}\n = (1-\phi(D({\bf x}))) \mixdiff{F}{x_i}{x_j} + \phi(D({\bf x})) \mixdiff{G}{x_i}{x_j} \leq 0$.\nWe obtain\n\begin{eqnarray*}\n\mixdiff{\tilde{F}}{x_i}{x_j} \leq 512 M |X| \alpha.\n\end{eqnarray*}\n\end{proof}\n\nFinally, we can finish the proof of Lemma~\ref{lemma:final-fix}.\n\n\begin{proof}[Proof of Lemma~\ref{lemma:final-fix}]\nLet $\epsilon>0$ and $f:2^X \rightarrow [0,M]$.\nWe choose $\beta = \frac{\epsilon}{16 M|X|}$, so that $|H({\bf x})| = |F({\bf x}) - G({\bf x})| \leq \epsilon/2$\nwhenever $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \leq \beta$\n(by Lemma~\ref{lemma:H-bounds}, which states that $|H({\bf x})| \leq 8 M |X| D({\bf x})$).\nAlso, let $\alpha = \frac{\epsilon}{2000 M |X|^3}$.\nFor these values of $\alpha,\beta>0$,\nlet $\delta > 0$ and $\phi:{\boldmath R}_+ \rightarrow [0,1]$\nbe provided by Lemma~\ref{lemma:phi-construction}.\nWe define\n$$ \tilde{F}({\bf x}) = (1-\phi(D({\bf x}))) F({\bf x}) + \phi(D({\bf x})) G({\bf x}). $$\nLemma~\ref{lemma:F-bounds} provides bounds on the first and second\npartial derivatives of $\tilde{F}({\bf x})$.\nFinally, we modify $\tilde{F}({\bf x})$ so that it satisfies the required\nconditions (submodularity and optionally monotonicity).\nFor that purpose, we add a suitable multiple of the following function:\n$$ J({\bf x}) = |X|^2 + 3|X| \sum_{i \in X} x_i - \left(\sum_{i \in X} x_i \right)^2.$$\nWe have $0 \leq J({\bf x}) \leq 3|X|^2$,\n$\partdiff{J}{x_i} = 3|X| - 2 \sum_{i \in X} x_i \geq |X|$.\nFurther, $\mixdiff{J}{x_i}{x_j} = -2$. Note also that $J(\bar{{\bf x}}) = J({\bf x})$,\nsince $J({\bf x})$ depends only on the sum of all coordinates $\sum_{i \in X} x_i$.\nTo make $\tilde{F}({\bf x})$ submodular and optionally monotone, we define:\n$$ \hat{F}({\bf x}) = \tilde{F}({\bf x}) + 256 M |X| \alpha J({\bf x}),$$\n$$ \hat{G}({\bf x}) = G({\bf x}) + 256 M |X| \alpha J({\bf x}).
$$\nWe verify the properties of $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$:\n\begin{enumerate}\n\item\nFor any ${\bf x} \in P({\cal F})$, we have\n\begin{eqnarray*}\n\hat{G}({\bf x}) & = & G({\bf x}) + 256 M |X| \alpha J({\bf x}) \\\n & = & F(\bar{{\bf x}}) + 256 M |X| \alpha J(\bar{{\bf x}}) \\\n & = & \hat{F}(\bar{{\bf x}}).\n\end{eqnarray*}\n\n\item When $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \geq \beta$,\nLemma~\ref{lemma:phi-construction} guarantees\nthat $0 \leq \phi(D({\bf x})) < e^{-1/\alpha} \leq \alpha$ and\n\begin{eqnarray*}\n|\hat{F}({\bf x}) - F({\bf x})| & \leq & \phi(D({\bf x})) |G({\bf x}) - F({\bf x})| + 256 M |X| \alpha J({\bf x}) \\\n & \leq & \alpha M + 768 M |X|^3 \alpha \\\n & \leq & \epsilon\n\end{eqnarray*}\nusing $0 \leq F({\bf x}), G({\bf x}) \leq M$, $|X| \leq J({\bf x}) \leq 3|X|^2$ and $\alpha = \frac{\epsilon}{2000 M |X|^3}$.\n\nWhen $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 < \beta$, we chose the value of $\beta$ so that\n$|G({\bf x}) - F({\bf x})| < \epsilon/2$ and so by the above,\n\begin{eqnarray*}\n|\hat{F}({\bf x}) - F({\bf x})| & \leq & \phi(D({\bf x})) |G({\bf x}) - F({\bf x})| + 256 M |X| \alpha J({\bf x}) \\\n & \leq & \epsilon/2 + 768 M |X|^3 \alpha \\\n & \leq & \epsilon.\n\end{eqnarray*}\n\n\item Due to Lemma~\ref{lemma:phi-construction}, $\phi(t) = 1$ for $t \in [0,\delta]$.\nHence, whenever $D({\bf x}) = ||{\bf x} - \bar{{\bf x}}||^2 \leq \delta$, we have\n$ \tilde{F}({\bf x}) = G({\bf x}) = F(\bar{{\bf x}})$, which depends only on $\bar{{\bf x}}$.\nAlso, we have $\hat{F}({\bf x}) = \hat{G}({\bf x}) = F(\bar{{\bf x}}) + 256 M |X| \alpha J({\bf x})$\nand again, $J({\bf x})$ depends only on $\bar{{\bf x}}$ (in fact, only on the average\nof all coordinates of ${\bf x}$). Therefore, $\hat{F}({\bf x})$ and $\hat{G}({\bf x})$\nin this case depend only on $\bar{{\bf x}}$.\n\n\item The first partial derivatives of $\hat{F}$ are given by the formula\n$$ \partdiff{\hat{F}}{x_i} = \partdiff{F}{x_i}\n - \phi(D({\bf x})) \partdiff{H}{x_i} - \phi'(D({\bf x})) \partdiff{D}{x_i} H({\bf x}) + 256 M|X|\alpha \partdiff{J}{x_i}. $$\nThe functions $F, H, D, J$ are infinitely differentiable, so the only possible issue is with $\phi$.\nBy inspecting our construction of $\phi$ (Lemma~\ref{lemma:phi-construction}), we can see that it is piecewise \ninfinitely differentiable, and $\phi'$ is continuous at the breakpoints. Therefore, it is also\nabsolutely continuous. This implies that $\partdiff{\hat{F}}{x_i}$ is absolutely continuous.\n\nThe function $\hat{G}({\bf x}) = F(\bar{{\bf x}}) + 256 M|X|\alpha J({\bf x})$ is infinitely differentiable,\nso its first partial derivatives are also absolutely continuous.\n\n\item Assuming $\partdiff{F}{x_i} \geq 0$, we get $\partdiff{\tilde{F}}{x_i}\n \geq - 64 M |X| \alpha$ by Lemma~\ref{lemma:F-bounds}. Using $\partdiff{J}{x_i}\n \geq |X|$, we get $\partdiff{\hat{F}}{x_i} = \partdiff{\tilde{F}}{x_i}\n + 256 M|X| \alpha \partdiff{J}{x_i} \geq 0$. The same holds for $\partdiff{\hat{G}}{x_i}$\nsince $\partdiff{G}{x_i} \geq 0$.\n\n\item Assuming $\mixdiff{F}{x_i}{x_j} \leq 0$, we get $\mixdiff{\tilde{F}}{x_i}{x_j}\n \leq 512 M |X| \alpha$ by Lemma~\ref{lemma:F-bounds}. Using $\mixdiff{J}{x_i}{x_j}\n = -2$, we get $\mixdiff{\hat{F}}{x_i}{x_j} = \mixdiff{\tilde{F}}{x_i}{x_j}\n + 256 M |X| \alpha \mixdiff{J}{x_i}{x_j} \leq 0$.
The same holds for\n$\mixdiff{\hat{G}}{x_i}{x_j}$ since $\mixdiff{G}{x_i}{x_j} \leq 0$.\n\end{enumerate}\n\end{proof}\n\nThis concludes the proofs of our main hardness results.\n\n\n\n\section{Algorithms using the multilinear relaxation}\n\label{section:algorithms}\n\n\nHere we turn to our algorithmic results. First, we discuss\nthe problem of maximizing a submodular (but not necessarily\nmonotone) function subject to a matroid independence constraint.\n\n\n\subsection{Matroid independence constraint}\n\label{section:submod-independent}\n\nConsider the problem $\max \{ f(S): S \in {\cal I} \}$, where ${\cal I}$ is the collection\nof independent sets in a matroid ${\cal M}$.\nWe design an algorithm based on the multilinear relaxation of the problem,\n$\max \{ F({\bf x}): {\bf x} \in P({\cal M}) \}$.\nOur algorithm can be seen as ``continuous local search'' in the matroid polytope $P({\cal M})$,\nconstrained in addition by the box $[0,t]^X$\nfor some fixed $t \in [0,1]$. The intuition is that this forces\nour local search to use fractional solutions that are more fuzzy\nthan integral solutions and therefore less likely to get stuck in a local\noptimum. On the other hand, restraining the search space too much would not\ngive us much freedom in searching for a good fractional point.\nThis leads to a tradeoff and an optimal choice of $t \in [0,1]$\nwhich we leave for later.\n\nThe matroid polytope is defined as\n$ P({\cal M}) = \mbox{conv} \{ {\bf 1}_I: I \in {\cal I} \} $,\nor equivalently \cite{E70} as\n$ P({\cal M}) = \{ {\bf x} \geq 0: \forall S; \sum_{i \in S} x_i \leq r_{\cal M}(S) \}$,\nwhere $r_{\cal M}(S)$ is the rank function of ${\cal M}$.\nWe define\n$$ P_t({\cal M}) = P({\cal M}) \cap [0,t]^X = \{ {\bf x} \in P({\cal M}): \forall i; x_i \leq t \}.$$\nWe consider the problem $\max \{F({\bf x}): {\bf x} \in P_t({\cal M})\}$.\nWe remind the reader that\n$F({\bf x}) = {\bf E}[f(\hat{{\bf x}})]$ denotes the multilinear extension.\nOur algorithm works as follows.\n\n\\\n\n\noindent{\bf Fractional local search in $P_t({\cal M})$} \\\n (given $t = \frac{r}{q}$, $r \leq q$ integer)\n\begin{enumerate}\n\item Start with ${\bf x} := (0,0,\ldots,0)$. Fix $\delta = 1/q$.\n\item If there are $i,j \in X$ and a direction ${\bf v} \in\n \{{\bf e}_j, -{\bf e}_i, {\bf e}_j - {\bf e}_i \}$ such that\n${\bf x} + \delta {\bf v} \in P_t({\cal M})$ and $F({\bf x} + \delta {\bf v}) > F({\bf x})$,\nset ${\bf x} := {\bf x} + \delta {\bf v}$ and repeat.\n\item If there is no such direction ${\bf v}$, apply pipage rounding to ${\bf x}$\nand return the resulting solution.\n\end{enumerate}\n\n\noindent{\em Notes.}\nThe procedure as presented here would not run in polynomial\ntime. A modification which runs in polynomial time is that\nwe move to a new solution only if $F({\bf x}+\delta {\bf v}) >\n F({\bf x}) + \frac{\delta}{poly(n)} OPT$ (where we first get\na rough estimate of $OPT$ using previous methods).\nFor simplicity, we analyze the variant above\nand finally discuss why we can modify it without\nlosing too much in the approximation factor. We also defer\nthe question of how to estimate the value of $F({\bf x})$\nto the end of this section.\n\nFor $t=1$, we have $\delta=1$ and the procedure\nreduces to discrete local search. However, it is known that discrete\nlocal search alone does not give any approximation guarantee.
With\nadditional modifications, an algorithm based on discrete local search\nachieves a $(\\frac14-o(1))$-approximation \\cite{LMNS09}.\n\nOur version of fractional local search avoids this issue and leads\ndirectly to a good fractional solution. Throughout the algorithm,\nwe maintain ${\\bf x}$ as a linear combination of $q$ independent sets\nsuch that no element appears in more than $r$ of them. A local step\ncorresponds to an add\/remove\/switch operation preserving this condition.\n\nFinally, we use pipage rounding to convert a fractional solution into\nan integral one. As we show in Lemma~\\ref{lemma:pipage} (in the Appendix),\na modification of the technique from \\cite{CCPV07}\ncan be used to find an integral solution without any loss in the objective\nfunction.\n\n\\begin{theorem}\n\\label{thm:submod-matroid-approx}\nThe fractional local search algorithm for any fixed $t \\in [0,\\frac12 (3-\\sqrt{5})]$\nreturns a solution of value at least $(t - \\frac12 t^2) OPT$,\nwhere $OPT = \\max \\{f(S): S \\in {\\cal I}\\}$.\n\\end{theorem}\n\nWe remark that for $t=\\frac12(3-\\sqrt{5})$, we would obtain a $\\frac14(-1+\\sqrt{5})\n \\simeq 0.309$-approximation, improving the factor of $\\frac14$ \\cite{LMNS09}.\nThis is not a rational value, but we can pick a rational $t$\narbitrarily close to $\\frac12 (3-\\sqrt{5})$.\nFor values $t > \\frac12 (3-\\sqrt{5})$, our analysis does not yield\na better approximation factor.\n\nFirst, we discuss properties of the point found by the fractional local search algorithm.\n\n\\begin{lemma}\n\\label{lemma:local-opt}\nThe outcome of the fractional local search algorithm $x$ is\na ``fractional local optimum'' in the following sense.\n(All the partial derivatives are evaluated at $x$.)\n\\begin{itemize}\n\\item For any $i$ such that ${\\bf x} - \\delta {\\bf e}_i \\in P_t({\\cal M})$,\n$\\partdiff{F}{x_i} \\geq 0.$\n\\item For any $j$ such that ${\\bf x} + \\delta {\\bf e}_j \\in P_t({\\cal M})$,\n$\\partdiff{F}{x_j} \\leq 0.$\n\\item For any $i,j$ such that ${\\bf x} + \\delta ({\\bf e}_j - {\\bf e}_i) \\in P_t({\\cal M})$,\n$\\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0.$\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nWe use the property (see \\cite{CCPV07}) that along any direction ${\\bf v} = \\pm {\\bf e}_i$\nor ${\\bf v} = {\\bf e}_i - {\\bf e}_j$, the function $F({\\bf x} + \\lambda {\\bf v})$ is a convex\nfunction of $\\lambda$. Also, observe that if it is possible to move from ${\\bf x}$\nin the direction of ${\\bf v}$ by any nonzero amount, then it is possible to move\nby $\\delta {\\bf v}$, because all coordinates of ${\\bf x}$ are integer multiples of $\\delta$\nand all the constraints also have coefficients which are integer multiples of $\\delta$.\nTherefore, if $\\frac{dF}{d\\lambda} > 0$ and it is possible\nto move in the direction of ${\\bf v}$, we would get\n$F({\\bf x} + \\delta {\\bf v}) > F({\\bf x})$ and the fractional local search would continue.\n\nIf ${\\bf v} = -{\\bf e}_i$ and it is possible to move along $-{\\bf e}_i$,\nwe get $\\frac{dF}{d\\lambda} = -\\partdiff{F}{x_i} \\leq 0$. Similarly,\nif ${\\bf v} = {\\bf e}_j$ and it is possible to move along ${\\bf e}_j$,\nwe get $\\frac{dF}{d\\lambda} = \\partdiff{F}{x_j} \\leq 0$. 
Finally,\nif ${\\bf v} = {\\bf e}_j - {\\bf e}_i$ and it is possible to move along ${\\bf e}_j - {\\bf e}_i$,\nwe get $\\frac{dF}{d\\lambda} = \\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0$.\n\\end{proof}\n\nWe refer to the following exchange property for matroids\n(which follows easily from \\cite{Schrijver}, Corollary 39.12a; see also \\cite{LMNS09}).\n\n\\begin{lemma}\n\\label{lemma:exchange}\nIf $I, C \\in {\\cal I}$, then for any $j \\in C \\setminus I$,\nthere is $\\pi(j) \\subseteq I \\setminus C$, $|\\pi(j)| \\leq 1$,\nsuch that $I \\setminus \\pi(j) + j \\in {\\cal I}$.\nMoreover, the sets $\\pi(j)$ are disjoint\n(each $i \\in I \\setminus C$ appears at most once as $\\pi(j) = \\{i\\}$).\n\\end{lemma}\n\nUsing this, we prove a lemma about fractional local optima\nwhich generalizes Lemma 2.2 in \\cite{LMNS09}.\n\n\\begin{lemma}\n\\label{lemma:fractional-search}\nLet ${\\bf x}$ be the outcome of fractional local search over $P_t({\\cal M})$.\nLet $C \\in {\\cal I}$ be any independent set.\nLet $C' = \\{i \\in C: x_i < t\\}$. Then\n$$ 2 F({\\bf x}) \\geq F({\\bf x} \\vee {\\bf 1}_{C'}) + F({\\bf x} \\wedge {\\bf 1}_C).$$\n\\end{lemma}\n\nNote that for $t = 1$, the lemma reduces to\n$2 F({\\bf x}) \\geq F({\\bf x} \\vee {\\bf 1}_C) + F({\\bf x} \\wedge {\\bf 1}_C)$\n(similar to Lemma 2.2 in \\cite{LMNS09}).\nFor $t < 1$, however, it is necessary to replace $C$ by $C'$\nin the first expression, which becomes apparent in the proof.\nThe reason is that we do not have any information on $\\partdiff{F}{x_i}$\nfor coordinates where $x_i = t$.\n\n\\begin{proof}\nLet $C \\in {\\cal I}$ and assume ${\\bf x} \\in P_t({\\cal M})$ is a local optimum.\nSince ${\\bf x} \\in P({\\cal M})$, we can decompose it into a convex linear\ncombination of vertices of $P({\\cal M})$, ${\\bf x} = \\sum_{I \\in {\\cal I}} x_I {\\bf 1}_I$\nwhere $\\sum x_I = 1$..\nBy the smooth submodularity of $F({\\bf x})$ (see \\cite{Vondrak08}),\n\\begin{eqnarray*}\nF({\\bf x} \\vee {\\bf 1}_{C'}) - F({\\bf x}) \\leq \\sum_{j \\in C'} (1 - x_j) \\partdiff{F}{x_j} \n = \\sum_{j \\in C'} \\sum_{I: j \\notin I} x_I \\partdiff{F}{x_j}\n = \\sum_I x_I \\sum_{j \\in C' \\setminus I} \\partdiff{F}{x_j}.\n\\end{eqnarray*}\nAll partial derivatives here are evaluated at ${\\bf x}$.\nOn the other hand, also by submodularity,\n\\begin{eqnarray*}\n F({\\bf x}) - F({\\bf x} \\wedge {\\bf 1}_C) \\geq \\sum_{i \\notin C} x_i \\partdiff{F}{x_i}\n = \\sum_{i \\notin C} \\sum_{I: i \\in I} x_I \\partdiff{F}{x_i}\n = \\sum_I x_I \\sum_{i \\in I \\setminus C} \\partdiff{F}{x_i}.\n\\end{eqnarray*}\nTo prove the lemma, it remains to prove the following.\n\n\\\n\n\\noindent {\\bf Claim.} Whenever $x_I > 0$,\n$\\sum_{j \\in C' \\setminus I} \\partdiff{F}{x_j}\n \\leq \\sum_{i \\in I \\setminus C} \\partdiff{F}{x_i}$.\n\n\\\n\n\\noindent {\\em Proof:}\nFor any $I \\in {\\cal I}$, we can apply\nLemma~\\ref{lemma:exchange} to get a mapping $\\pi$ such that\n$I \\setminus \\pi(j) + j \\in {\\cal I}$ for any $j \\in C \\setminus I$.\nNow, consider $j \\in C' \\setminus I$, i.e. 
$j \\in C \\setminus I$ and $x_j < t$.\n\nIf $\\pi(j) = \\emptyset$, is possible to move from ${\\bf x}$ in the direction\nof ${\\bf e}_j$, because $I + j \\in {\\cal I}$ and hence we can replace $I$ by $I+j$\n(or at least we can do this for some nonzero fraction of its coefficient)\nin the linear combination.\nBecause $x_j < t$, we can move by a nonzero amount inside $P_t({\\cal M})$.\nBy Lemma~\\ref{lemma:local-opt}, $\\partdiff{F}{x_j} \\leq 0$.\n\nSimilarly, if $\\pi(j) = \\{i\\}$, it is possible to move in the direction\nof ${\\bf e}_j - {\\bf e}_i$, because $I$ can be replaced by $I \\setminus \\pi(j) + i$\nfor some nonzero fraction of its coefficient. By Lemma~\\ref{lemma:local-opt},\nin this case $\\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0$.\n\nFinally, for any $i \\in I$ we have $x_i > 0$ and therefore we can decrease\n$x_i$ while staying inside $P_t({\\cal M})$. By Lemma~\\ref{lemma:local-opt},\nwe have $\\partdiff{F}{x_i} \\geq 0$ for all $i \\in I$.\nThis means\n$$ \\sum_{j \\in C' \\setminus I} \\partdiff{F}{x_j}\n \\leq \\sum_{j \\in C' \\setminus I: \\pi(j) = \\emptyset} \\partdiff{F}{x_j} +\n \\sum_{j \\in C' \\setminus I: \\pi(j) = \\{i\\}} \\partdiff{F}{x_i}\n \\leq \\sum_{i \\in I \\setminus C} \\partdiff{F}{x_i} $$\nusing the inequalities we derived above, and the fact that\neach $i \\in I \\setminus C$ appears at most once in $\\pi(j)$.\nThis proves the Claim, and hence the Lemma.\n\\end{proof}\n\nNow we are ready to prove Theorem~\\ref{thm:submod-matroid-approx}.\n\n\\begin{proof}\nLet ${\\bf x}$ be the outcome of the fractional local search over $P_t({\\cal M})$.\nDefine $A = \\{ i: x_i = t \\}$.\nLet $C$ be the optimum solution and $C' = C \\setminus A = \\{ i \\in C: x_i < t\\}$.\nBy Lemma~\\ref{lemma:fractional-search},\n$$ 2 F({\\bf x}) \\geq F({\\bf x} \\vee {\\bf 1}_{C'}) + F({\\bf x} \\wedge {\\bf 1}_C).$$\nFirst, let's analyze $F({\\bf x} \\wedge {\\bf 1}_C)$. We apply Lemma~\\ref{lemma:rnd-threshold} (in the Appendix),\nwhich states that $F({\\bf x} \\wedge {\\bf 1}_C) \\geq {\\bf E}[f(T({\\bf x} \\wedge {\\bf 1}_C))]$. Here,\n$T({\\bf x} \\wedge {\\bf 1}_C)$ is a random threshold set corresponding to the\nvector ${\\bf x} \\wedge {\\bf 1}_C$, i.e.\n$$ T({\\bf x} \\wedge {\\bf 1}_C) = \\{ i: ({\\bf x} \\wedge {\\bf 1}_C)_i > \\lambda \\}\n = \\{ i \\in C: x_i > \\lambda \\} = T({\\bf x}) \\cap C $$\nwhere $\\lambda \\in [0,1]$ is uniformly random.\nEquivalently,\n$$ F({\\bf x} \\wedge {\\bf 1}_C) \\geq {\\bf E}[f(T({\\bf x}) \\cap C)].$$\nDue to the definition of a threshold set, with probability $t$ we have\n$\\lambda < t$ and $T({\\bf x})$ contains $A = \\{ i: x_i = t \\} = C \\setminus C'$.\nThen, $f(T({\\bf x}) \\cap C) + f(C') \\geq f(C)$ by submodularity.\nWe conclude that\n\\begin{equation}\n\\label{eq:1}\nF({\\bf x} \\wedge {\\bf 1}_C) \\geq t (f(C) - f(C')).\n\\end{equation}\nNext, let's analyze $F({\\bf x} \\vee {\\bf 1}_{C'})$. We consider the ground set\npartitioned into $X = C \\cup \\bar{C}$, and we apply Lemma~\\ref{lemma:submod-split} (in the Appendix).\n(Here, $\\bar{C}$ denotes $X \\setminus C$, the complement of $C$ inside $X$.)\nWe get\n$$ F({\\bf x} \\vee {\\bf 1}_{C'}) \\geq {\\bf E}[f((T_1({\\bf x} \\vee {\\bf 1}_{C'}) \\cap C)\n \\cup (T_2({\\bf x} \\vee {\\bf 1}_{C'}) \\cap \\bar{C}))].$$\nThe random threshold sets look as follows: $T_1({\\bf x} \\vee {\\bf 1}_{C'}) \\cap C\n = (T_1({\\bf x}) \\cup C') \\cap C$ is equal to $C$ with probability $t$,\nand equal to $C'$ otherwise. 
$T_2({\\bf x} \\vee {\\bf 1}_{C'}) \\cap \\bar{C}\n = T_2({\\bf x}) \\cap \\bar{C}$ is empty with probability $1-t$. \n(We ignore the contribution when $T_2({\\bf x}) \\cap \\bar{C} \\neq \\emptyset$.)\nBecause $T_1$ and $T_2$ are independently sampled, we get\n$$ F({\\bf x} \\vee {\\bf 1}_{C'}) \\geq t(1-t) f(C) + (1-t)^2 f(C').$$\nProvided that $t \\in [0,\\frac12 (3 - \\sqrt{5})]$, we have $t \\leq (1-t)^2$.\nThen, we can write\n\\begin{equation}\n\\label{eq:1'}\nF({\\bf x} \\vee {\\bf 1}_{C'}) \\geq t(1-t) f(C) + t f(C').\n\\end{equation}\nCombining equations (\\ref{eq:1}) and (\\ref{eq:1'}), we get\n\\begin{eqnarray*}\n& & F({\\bf x} \\vee {\\bf 1}_{C'}) + F({\\bf x} \\wedge {\\bf 1}_C)\n \\geq t (f(C) - f(C')) + t(1-t) f(C) + t \\, f(C')\n = (2t - t^2) f(C).\n\\end{eqnarray*}\nTherefore,\n$$ F({\\bf x}) \\geq \\frac12 (F({\\bf x} \\vee {\\bf 1}_{C'}) + F({\\bf x} \\wedge {\\bf 1}_C))\n \\geq (t - \\frac12 t^2) f(C).$$\nFinally, we apply the pipage rounding technique which does not\nlose anything in terms of objective value (see Lemma~\\ref{lemma:pipage}).\n\\end{proof}\n\n\\paragraph{Technical remarks}\nIn each step of the algorithm, we need to estimate values of $F({\\bf x})$\nfor given ${\\bf x} \\in P_t({\\cal M})$. We accomplish this by using the expression\n$F({\\bf x}) = {\\bf E}[f(R({\\bf x}))]$ where $R({\\bf x})$ is a random set associated with ${\\bf x}$.\nBy standard bounds, if the values of $f(S)$ are in a range $[0,M]$,\nwe can achieve accuracy $M \/ poly(n)$ using a polynomial number of samples.\nWe use the fact that $OPT \\geq \\frac{1}{n} M$ (see\nLemma~\\ref{lemma:solution-value} in the Appendix) and therefore\nwe can achieve $OPT \/ poly(n)$ additive error in polynomial time.\n\nWe also relax the local step condition: we move to the next solution\nonly if $F({\\bf x} + \\delta {\\bf v}) > F({\\bf x}) + \\frac{\\delta}{poly(n)} OPT$\nfor a suitable polynomial in $n$. This way, we can only make a polynomial number of steps.\nWhen we terminate, the local optimality conditions (Lemma~\\ref{lemma:local-opt})\nare satisfied within an additive error of $OPT \/ poly(n)$,\nwhich yields a polynomially small error in the approximation bound.\n\n\n\\subsection{Matroid base constraint}\n\\label{section:submod-bases}\n\nLet us move on to the problem $\\max \\{f(S): S \\in {\\cal B}\\}$ where\n${\\cal B}$ are the bases of a matroid.\nFor a fixed $t \\in [0,1]$, let us consider an algorithm which can be seen\nas local search inside the base polytope $B({\\cal M})$, further constrained\nby the box $[0,t]^X$. The matroid base polytope is defined as\n$ B({\\cal M}) = \\mbox{conv} \\{ {\\bf 1}_B: B \\in {\\cal B} \\} $\nor equivalently \\cite{E70} as\n$ B({\\cal M}) = \\{ {\\bf x} \\geq 0: \\forall S \\subseteq X; \\sum_{i \\in S} x_i \\leq r_{\\cal M}(S),\n \\sum_{i \\in X} x_i = r_{\\cal M}(X) \\}, $\nwhere $r_{\\cal M}$ is the matroid rank function of ${\\cal M}$. 
Finally, we define\n$$ B_t({\\cal M}) = B({\\cal M}) \\cap [0,t]^X = \\{ {\\bf x} \\in B({\\cal M}): \\forall i \\in X; x_i \\leq t \\}.$$\nObserve that $B_t({\\cal M})$ is nonempty if and only if there is a convex linear\ncombination ${\\bf x} = \\sum_{B \\in {\\cal B}} \\xi_B {\\bf 1}_B$ such that $x_i \\in [0,t]$ for all $i$.\nThis is equivalent to saying that there is a linear combination (a fractional base packing)\n${\\bf x}' = \\sum_{B \\in {\\cal B}} \\xi'_B {\\bf 1}_B$ such that $x'_i \\in [0,1]$\nand $\\sum \\xi'_B \\geq \\frac{1}{t}$, in other words the fractional base packing\nnumber is $\\nu \\geq \\frac{1}{t}$. Since the optimal fractional packing of bases\nin a matroid can be found efficiently (see Corollary 42.7a in \\cite{Schrijver}, Volume B),\nwe can efficiently find the minimum $t \\in [\\frac12,1]$ such that $B_t({\\cal M}) \\neq \\emptyset$.\nThen, our algorithm is the following.\n\n\\\n\n\\noindent{\\bf Fractional local search in $B_t({\\cal M})$} \\\\\n (given $t = \\frac{r}{q}$, $r \\leq q$ integer)\n\\begin{enumerate}\n\\item Let $\\delta = \\frac{1}{q}$.\nAssume that ${\\bf x} \\in B_t({\\cal M})$; adjust ${\\bf x}$ (using pipage rounding)\nso that each $x_i$ is an integer multiple of $\\delta$.\nIn the following, this property will be maintained.\n\\item If there is a direction ${\\bf v} = {\\bf e}_j - {\\bf e}_i$ such that\n${\\bf x} + \\delta {\\bf v} \\in B_t({\\cal M})$ and $F({\\bf x} + \\delta {\\bf v}) > F({\\bf x})$,\nthen set ${\\bf x} := {\\bf x} + \\delta {\\bf v}$ and repeat.\n\\item If there is no such direction ${\\bf v}$, apply pipage rounding to ${\\bf x}$\nand return the resulting solution.\n\\end{enumerate}\n\n\\noindent{\\em Notes.}\nWe remark that the starting point can be found as a convex linear combination\nof $q$ bases, ${\\bf x} = \\frac{1}{q} \\sum_{i=1}^{q} {\\bf 1}_{B_i}$, such that\nno element appears in more than $r$ of them, using matroid union techniques (see Theorem 42.9 in \\cite{Schrijver}).\nIn the algorithm, we maintain this representation.\nThe local search step corresponds to switching a pair of elements in one base,\nunder the condition that no element is used in more than $r$ bases at the same time.\nFor now, we ignore the issues of estimating $F({\\bf x})$ and stopping the local search\nwithin polynomial time. We discuss this at the end of this section.\n\nFinally, we use pipage rounding to convert the fractional solution ${\\bf x}$\ninto an integral one of value at least $F({\\bf x})$ (Lemma~\\ref{lemma:base-pipage} in the Appendix).\nNote that it is not necessarily true that any of the bases in\na convex linear combination ${\\bf x} = \\sum \\xi_B {\\bf 1}_{B}$ achieves the value $F({\\bf x})$.\n\n\\begin{theorem}\n\\label{thm:submod-bases-approx}\nIf there is a fractional packing of $\\nu \\in [1,2]$ bases in ${\\cal M}$,\nthen the fractional local search algorithm with $t = \\frac{1}{\\nu}$\nreturns a solution of value at least $\\frac12 (1-t) \\ OPT.$\n\\end{theorem}\n\nFor example, assume that ${\\cal M}$ contains two disjoint bases $B_1, B_2$\n(which is the case considered in \\cite{LMNS09}). 
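(In this case ${\\bf x} = \\frac12 ({\\bf 1}_{B_1} + {\\bf 1}_{B_2})$ has all coordinates in $\\{0,\\frac12\\}$, so ${\\bf x} \\in B_{1\/2}({\\cal M})$ and the fractional base packing number is $\\nu \\geq 2$.)\n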
Then,\nthe algorithm can be used with $t = \\frac{1}{2}$\nand we obtain a $(\\frac{1}{4}-o(1))$-approximation, improving the\n$(\\frac{1}{6}-o(1))$-approximation from \\cite{LMNS09}.\nIf there is a fractional packing of more than 2 bases,\nour analysis still gives only a $(\\frac{1}{4}-o(1))$-approximation.\nIf the dual matroid ${\\cal M}^*$ admits a better fractional packing of bases,\nwe can consider the problem $\\max \\{f(\\bar{S}): S \\in {\\cal B}^* \\}$ which\nis equivalent. For a uniform matroid, ${\\cal B} = \\{B: |B|=k\\}$,\nthe fractional base packing number is either at least $2$ or the same\nholds for the dual matroid, ${\\cal B}^* = \\{B: |B|=n-k\\}$ (as noted in \\cite{LMNS09}).\nTherefore, we get a $(\\frac{1}{4}-o(1))$-approximation for any uniform matroid.\nThe value $t=1$ can be used for any matroid,\nbut it does not yield any approximation guarantee.\n\n\n\n\n\\paragraph{Analysis of the algorithm}\nWe turn to the properties of fractional local optima.\nWe will prove that the point ${\\bf x}$ found by the fractional local search algorithm\nsatisfies the following conditions that allow us to compare $F({\\bf x})$ to the actual\noptimum.\n\n\\begin{lemma}\n\\label{lemma:local-opt2}\nThe outcome of the fractional local search algorithm ${\\bf x}$\nis a ``fractional local optimum'' in the following sense.\n\\begin{itemize}\n\\item For any $i,j$ such that ${\\bf x} + \\delta ({\\bf e}_j - {\\bf e}_i) \\in B_t({\\cal M})$,\n$$\\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0.$$\n(The partial derivatives are evaluated at ${\\bf x}$.)\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nAs in Lemma~\\ref{lemma:local-opt},\nobserve that the coordinates of ${\\bf x}$ are always integer multiples of $\\delta$,\ntherefore if it is possible to move from ${\\bf x}$ in the direction of\n${\\bf v} = {\\bf e}_j - {\\bf e}_i$ by any nonzero amount, then it is possible to\nmove by $\\delta {\\bf v}$.\nWe use the property that for any direction ${\\bf v} = {\\bf e}_j - {\\bf e}_i$,\nthe function $F({\\bf x} + \\lambda {\\bf v})$ is a convex function of $\\lambda$ \\cite{CCPV07}.\nTherefore, if $\\frac{dF}{d\\lambda} > 0$ and it is possible\nto move in the direction of ${\\bf v}$, we would get\n$F({\\bf x} + \\delta {\\bf v}) > F({\\bf x})$ and the fractional local search would continue.\nFor ${\\bf v} = {\\bf e}_j - {\\bf e}_i$, we get\n$$ \\frac{dF}{d\\lambda} = \\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0.$$\n\\end{proof}\n\nWe refer to the following exchange property for matroid bases\n(see \\cite{Schrijver}, Corollary 39.21a).\n\n\\begin{lemma}\n\\label{lemma:exchange2}\nFor any $B_1, B_2 \\in {\\cal B}$, there is a bijection $\\pi:B_1 \\setminus B_2\n \\rightarrow B_2 \\setminus B_1$ such that\n$\\forall i \\in B_1 \\setminus B_2$; $B_1-i+\\pi(i) \\in {\\cal B}$.\n\\end{lemma}\n\nUsing this, we prove a lemma about fractional local optima\nanalogous to Lemma 2.2 in \\cite{LMNS09}.\n\n\\begin{lemma}\n\\label{lemma:fractional-search2}\nLet ${\\bf x}$ be the outcome of fractional local search over $B_t({\\cal M})$.\nLet $C \\in {\\cal B}$ be any base. 
Then there is ${\\bf c} \\in [0,1]^X$ satisfying\n\\begin{itemize}\n\\item $c_i = t$, if $i \\in C$ and $x_i = t$\n\\item $c_i = 1$, if $i \\in C$ and $x_i < t$\n\\item $0 \\leq c_i \\leq x_i$, if $i \\notin C$\n\\end{itemize}\nsuch that\n$$ 2 F({\\bf x}) \\geq F({\\bf x} \\vee {\\bf c}) + F({\\bf x} \\wedge {\\bf c}).$$\n\\end{lemma}\n\nNote that for $t = 1$, we can set ${\\bf c} = {\\bf 1}_C$.\nHowever, in general we need this more complicated formulation.\nIntuitively, ${\\bf c}$ is obtained from ${\\bf x}$ by raising the variables $x_i, i \\in C$\nand decreasing $x_i$ for $i \\notin C$.\nHowever, we can only raise the variables $x_i, i \\in C$, where $x_i$ is below\nthe threshold $t$, otherwise we do not have any information about $\\partdiff{F}{x_i}$.\nAlso, we do not necessarily decrease all the variables outside of $C$ to zero.\n\n\\begin{proof}\nLet $C \\in {\\cal B}$ and assume ${\\bf x} \\in B_t({\\cal M})$ is a fractional local optimum.\nWe can decompose ${\\bf x}$ into a convex linear combination of vertices\nof $B({\\cal M})$, ${\\bf x} = \\sum \\xi_B {\\bf 1}_B$.\nBy Lemma~\\ref{lemma:exchange2}, for each base $B$ there is a bijection\n$\\pi_B:B \\setminus C \\rightarrow C \\setminus B$ such that\n$\\forall i \\in B \\setminus C$; $B - i + \\pi_B(i) \\in {\\cal B}$.\n\nWe define $C' = \\{i \\in C: x_i < t \\}$.\nThe reason we consider $C'$ is that\nif $x_i = t$, there is no room for an exchange step increasing $x_i$,\nand therefore Lemma~\\ref{lemma:local-opt2} does not give any information\nabout $\\partdiff{F}{x_i}$.\nWe construct the vector ${\\bf c}$ by starting from ${\\bf x}$, and for each $B$\nswapping the elements in $B \\setminus C$ for their image under $\\pi_B$,\nprovided it is in $C'$, until we raise the coordinates on $C'$ to $c_i=1$.\nFormally, we set $c_i = 1$ for $i \\in C'$, $c_i = t$ for $i \\in C \\setminus C'$,\nand for each $i \\notin C$, we define\n$$ c_i = x_i - \\sum_{B: i \\in B, \\pi_B(i) \\in C'} \\xi_B.$$\n\nIn the following, all partial derivatives are evaluated at ${\\bf x}$.\nBy the smooth submodularity of $F({\\bf x})$ (see \\cite{CCPV09}),\n\\begin{eqnarray}\n\\label{eq:2}\nF({\\bf x} \\vee {\\bf c}) - F({\\bf x}) & \\leq & \\sum_{j: c_j > x_j} (c_j - x_j) \\partdiff{F}{x_j}\n = \\sum_{j \\in C'} (1 - x_j) \\partdiff{F}{x_j}\n = \\sum_B \\sum_{j \\in C' \\setminus B} \\xi_B \\partdiff{F}{x_j}\n\\end{eqnarray}\nbecause $\\sum_{B: j \\notin B} \\xi_B = 1 - x_j$ for any $j$.\nOn the other hand, also by smooth submodularity,\n\\begin{eqnarray*}\nF({\\bf x}) - F({\\bf x} \\wedge {\\bf c}) & \\geq & \\sum_{i: c_i < x_i} (x_i - c_i) \\partdiff{F}{x_i}\n = \\sum_{i \\notin C} (x_i - c_i) \\partdiff{F}{x_i}\n = \\sum_{i \\notin C} \\sum_{B: i \\in B, \\pi_B(i) \\in C'} \\xi_B \\partdiff{F}{x_i}\n\\end{eqnarray*}\nusing our definition of $c_i$. In the last sum,\nfor any nonzero contribution, we have $\\xi_B > 0$, $i \\in B$ and $j = \\pi_B(i) \\in C'$,\ni.e. $x_j < t$. 
Therefore it is possible to move in the direction ${\\bf e}_j - {\\bf e}_i$\n(we can switch from $B$ to $B-i+j$).\nBy Lemma~\\ref{lemma:local-opt2},\n$$ \\partdiff{F}{x_j} - \\partdiff{F}{x_i} \\leq 0.$$\nTherefore, we get\n\\begin{eqnarray}\nF({\\bf x}) - F({\\bf x} \\wedge {\\bf c}) & \\geq &\n\\sum_{i \\notin C} \\sum_{B: i \\in B, j=\\pi_B(i) \\in C'} \\xi_B \\partdiff{F}{x_j}\n\\label{eq:3}\n = \\sum_B \\sum_{i \\in B \\setminus C: j=\\pi_B(i) \\in C'} \\xi_B \\partdiff{F}{x_j}.\n\\end{eqnarray}\nBy the bijective property of $\\pi_B$, this is equal to\n$\\sum_B \\sum_{j \\in C' \\setminus B} \\xi_B \\partdiff{F}{x_j}$.\nPutting (\\ref{eq:2}) and (\\ref{eq:3}) together, we get\n$F({\\bf x} \\vee {\\bf c}) - F({\\bf x}) \\leq F({\\bf x}) - F({\\bf x} \\wedge {\\bf c})$.\n\\end{proof}\n\nNow we are ready to prove Theorem~\\ref{thm:submod-bases-approx}.\n\n\\begin{proof}\nAssuming that $B_t({\\cal M}) \\neq \\emptyset$, we can find a starting point ${\\bf x}_0 \\in B_t({\\cal M})$.\nFrom this point, we reach a fractional local optimum ${\\bf x} \\in B_t({\\cal M})$\n(see Lemma~\\ref{lemma:local-opt2}).\nWe want to compare $F({\\bf x})$ to the actual optimum; assume that $OPT = f(C)$.\n\nAs before, we define $C' = \\{i \\in C: x_i < t\\}$.\nBy Lemma~\\ref{lemma:fractional-search2}, we know that\nthe fractional local optimum satisfies:\n\\begin{equation}\n\\label{eq:4}\n2 F({\\bf x}) \\geq F({\\bf x} \\vee {\\bf c}) + F({\\bf x} \\wedge {\\bf c})\n\\end{equation}\nfor some vector ${\\bf c}$ such that $c_i = t$ for all $i \\in C \\setminus C'$,\n$c_i = 1$ for $i \\in C'$ and $0 \\leq c_i \\leq x_i$ for $i \\notin C$.\n\nFirst, let's analyze $F({\\bf x} \\vee {\\bf c})$. We have\n\\begin{itemize}\n\\item $({\\bf x} \\vee {\\bf c})_i = 1$ for all $i \\in C'$.\n\\item $({\\bf x} \\vee {\\bf c})_i = t$ for all $i \\in C \\setminus C'$.\n\\item $({\\bf x} \\vee {\\bf c})_i \\leq t$ for all $i \\notin C$.\n\\end{itemize}\nWe apply Lemma~\\ref{lemma:submod-split} to the partition $X = C \\cup \\bar{C}$.\nWe get\n$$ F({\\bf x} \\vee {\\bf c}) \\geq {\\bf E}[f((T_1({\\bf x} \\vee {\\bf c}) \\cap C) \\cup (T_2({\\bf x} \\vee {\\bf c}) \\cap \\bar{C}))] $$\nwhere $T_1({\\bf x})$ and $T_2({\\bf x})$ are independent threshold sets.\nBased on the information above, $T_1({\\bf x} \\vee {\\bf c}) \\cap C = C$ with probability $t$\nand $T_1({\\bf x} \\vee {\\bf c}) \\cap C = C'$ otherwise. On the other hand,\n$T_2({\\bf x} \\vee {\\bf c}) \\cap \\bar{C} = \\emptyset$ with probability at least $1-t$.\nThese two events are independent. We conclude that on the right-hand side,\nwe get $f(C)$ with probability at least $t(1-t)$, or $f(C')$ with probability\nat least $(1-t)^2$:\n\\begin{equation}\n\\label{eq:5}\nF({\\bf x} \\vee {\\bf c}) \\geq t(1-t) f(C) + (1-t)^2 f(C').\n\\end{equation}\nTurning to $F({\\bf x} \\wedge {\\bf c})$, we see that\n\\begin{itemize}\n\\item $({\\bf x} \\wedge {\\bf c})_i = x_i$ for all $i \\in C'$.\n\\item $({\\bf x} \\wedge {\\bf c})_i = t$ for all $i \\in C \\setminus C'$.\n\\item $({\\bf x} \\wedge {\\bf c})_i \\leq t$ for all $i \\notin C$.\n\\end{itemize}\nWe apply Lemma~\\ref{lemma:submod-split} to $X = C \\cup \\bar{C}$.\n$$ F({\\bf x} \\wedge {\\bf c}) \\geq {\\bf E}[f((T_1({\\bf x} \\wedge {\\bf c}) \\cap C) \\cup (T_2({\\bf x} \\wedge {\\bf c}) \\cap \\bar{C}))].$$\nWith probability $t$, $T_1({\\bf x} \\wedge {\\bf c}) \\cap C$ contains $C \\setminus C'$\n(and maybe some elements of $C'$). 
In this case, $f(T_1({\\bf x} \\wedge {\\bf c}) \\cap C)\n \\geq f(C) - f(C')$ by submodularity.\nAlso, $T_2({\\bf x} \\wedge {\\bf c}) \\cap \\bar{C}$ is empty with probability at least $1-t$.\nAgain, these two events are independent.\nTherefore,\n$ F({\\bf x} \\wedge {\\bf c}) \\geq t(1-t) (f(C) - f(C')).$\nIf $f(C') > f(C)$, this bound is vacuous; otherwise, we can replace $t(1-t)$ by\n$(1-t)^2$, because $t \\geq 1\/2$. In any case,\n\\begin{equation}\n\\label{eq:6}\nF({\\bf x} \\wedge {\\bf c}) \\geq (1-t)^2 (f(C) - f(C')).\n\\end{equation}\nCombining (\\ref{eq:4}), (\\ref{eq:5}) and (\\ref{eq:6}),\n$$ F({\\bf x}) \\geq \\frac12 (F({\\bf x} \\vee {\\bf c}) + F({\\bf x} \\wedge {\\bf c}))\n \\geq \\frac12 (t(1-t) f(C) + (1-t)^2 f(C)) = \\frac12 (1-t) f(C).$$\n\\end{proof}\n\n\\paragraph{Technical remarks}\nAgain, we have to deal with the issues of estimating $F({\\bf x})$ and\nstopping the local search in polynomial time. We do this exactly\nas we did at the end of Section~\\ref{section:submod-independent}.\nOne issue to be careful about here is that if $f:2^X \\rightarrow [0,M]$,\nour estimates of $F({\\bf x})$ are within an additive error of $M \/ poly(n)$.\nIf the optimum value $OPT = \\max \\{f(S): S \\in {\\cal B}\\}$ is very small\ncompared to $M$, the error might be large compared to $OPT$ which would be\na problem. The optimum could in fact be very small in general.\nBut it holds that if ${\\cal M}$ contains no loops and co-loops\n(which can be eliminated easily), then $OPT \\geq \\frac{1}{n^2} M$\n(see Appendix~\\ref{app:base-value}). Then, our sampling errors are\non the order of $OPT \/ poly(n)$ which yields a $1\/poly(n)$ error\nin the approximation bound.\n\n\n\\section{Approximation for symmetric instances}\n\\label{section:symmetric}\n\nWe can achieve a better approximation assuming that the instance exhibits\na certain symmetry. This is the same kind of symmetry that we use in our hardness\nconstruction (Section~\\ref{section:hardness-proof}) and the hard instances exhibit\nthe same symmetry as well. It turns out that our approximation in this case\nmatches the hardness threshold up to lower order terms.\n\nSimilar to our hardness result, the symmetries that we consider\nhere are permutations of the ground set $X$, corresponding\nto permutations of coordinates in ${\\boldmath R}^X$. We start with some basic\nproperties which are helpful in analyzing symmetric instances.\n\n\\begin{lemma}\n\\label{lemma:grad-sym}\nAssume that $f:2^X \\rightarrow {\\boldmath R}$ is invariant with respect to\na group of permutations ${\\cal G}$ and $F({\\bf x}) = {\\bf E}[f(\\hat{{\\bf x}})]$.\nThen for any symmetrized vector $\\bar{{\\bf c}} = {\\bf E}_{\\sigma \\in {{\\cal G}}}[\\sigma({\\bf c})]$,\n$\\nabla F |_{\\bar{{\\bf c}}}$ is also symmetric w.r.t. ${\\cal G}$. I.e., for any $\\tau \\in {\\cal G}$,\n$$ \\tau(\\nabla F |_{{\\bf x}=\\bar{{\\bf c}}}) = \\nabla F |_{{\\bf x}=\\bar{{\\bf c}}}. $$\n\\end{lemma}\n\n\n\\begin{proof}\nSince $f(S)$ is invariant under ${\\cal G}$, so is $F({\\bf x})$,\ni.e. $F({\\bf x}) = F(\\tau({\\bf x}))$ for any $\\tau \\in {\\cal G}$.\nDifferentiating both sides at ${\\bf x}={\\bf c}$, we get by the chain rule:\n$$ \\partdiff{F}{x_i} \\Big|_{{\\bf x}={\\bf c}} =\n \\sum_j \\partdiff{F}{x_j} \\Big|_{{\\bf x}=\\tau({\\bf c})} \\partdiff{}{x_i} (\\tau(x))_j\n = \\sum_j \\partdiff{F}{x_j} \\Big|_{{\\bf x}=\\tau({\\bf c})} \\partdiff{x_{\\tau(j)}}{x_i}. 
$$\nHere, $\\partdiff{x_{\\tau(j)}}{x_i} = 1$ if $\\tau(j) = i$, and $0$ otherwise.\nTherefore,\n$$ \\partdiff{F}{x_i} \\Big|_{{\\bf x}={\\bf c}} =\n \\partdiff{F}{x_{\\tau^{-1}(i)}} \\Big|_{{\\bf x}=\\tau({\\bf c})}. $$\nNote that $\\tau(\\bar{{\\bf c}}) = {\\bf E}_{\\sigma \\in {\\cal G}}[\\tau(\\sigma({\\bf c}))]\n = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf c})] = \\bar{{\\bf c}}$\nsince the distribution of $\\tau \\circ \\sigma$ is equal to the distribution\nof $\\sigma$. Therefore,\n$$ \\partdiff{F}{x_i} \\Big|_{{\\bf x}=\\bar{{\\bf c}}} = \\partdiff{F}{x_{\\tau^{-1}(i)}}\n \\Big|_{{\\bf x}=\\bar{{\\bf c}}} $$\nfor any $\\tau \\in {\\cal G}$.\n\\end{proof}\n\n\nNext, we prove that the ``symmetric optimum''\n$\\max \\{F(\\bar{{\\bf x}}): {\\bf x} \\in P({\\cal F})\\}$ gives a solution which is\na local optimum for the original instance $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$.\n(As we proved in Section~\\ref{section:hardness-proof}, in general\nwe cannot hope to find a better solution than the symmetric optimum.)\n\n\\begin{lemma}\n\\label{lemma:sym-local-opt}\nLet $f:2^X \\rightarrow {\\boldmath R}$ and ${\\cal F} \\subset 2^X$\nbe invariant with respect to a group of permutations $\\cal G$.\nLet $\\overline{OPT} = \\max \\{ F(\\bar{{\\bf x}}): {\\bf x} \\in P({\\cal F}) \\}$\nwhere $\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$,\nand let ${\\bf x}_0$ be the symmetric point\nwhere $\\overline{OPT}$ is attained ($\\bar{{\\bf x}}_0 = {\\bf x}_0$).\nThen ${\\bf x}_0$ is a local optimum for the problem\n$\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F}) \\}$, in the sense that\n$({\\bf x}-{\\bf x}_0) \\cdot \\nabla F |_{{\\bf x}_0} \\leq 0$ for any ${\\bf x} \\in P({\\cal F})$.\n\\end{lemma}\n\n\\begin{proof}\nAssume for the sake of contradiction that $({\\bf x}-{\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0} > 0$\nfor some ${\\bf x} \\in P({\\cal F})$. We use the symmetry properties of $f$ and ${\\cal F}$\nto show that $(\\bar{{\\bf x}} - {\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0} > 0$ as well.\nRecall that ${\\bf x}_0 = \\bar{{\\bf x}}_0$. We have\n$$ (\\bar{{\\bf x}} - {\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0}\n = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x}-{\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0}] \n = {\\bf E}_{\\sigma \\in {\\cal G}}[({\\bf x}-{\\bf x}_0) \\cdot \\sigma^{-1}(\\nabla F|_{{\\bf x}_0})]\n = ({\\bf x}-{\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0} > 0 $$\nusing Lemma~\\ref{lemma:grad-sym}. Hence, there would be a direction\n$\\bar{{\\bf x}} - {\\bf x}_0$ along which an improvement can be obtained.\nBut then, consider a small $\\delta > 0$ such that\n${\\bf x}_1 = {\\bf x}_0 + \\delta (\\bar{{\\bf x}} - {\\bf x}_0) \\in P({\\cal F})$ and also\n$F({\\bf x}_1) > F({\\bf x}_0)$. The point ${\\bf x}_1$ is symmetric ($\\bar{{\\bf x}}_1 = {\\bf x}_1$)\nand hence it would contradict the assumption that $F({\\bf x}_0) = \\overline{OPT}$.\n\\end{proof}\n\n\n\\subsection{Submodular maximization over independent sets in a matroid}\n\nLet us derive an optimal approximation result for the problem\n$\\max \\{f(S): S \\in {\\cal I}\\}$ under the assumption that the instance is ``element-transitive''. 
\n\n\\begin{definition}\nFor a group ${\\cal G}$ of permutations on $X$, the orbit of an element $i \\in X$\nis the set $\\{ \\sigma(i): \\sigma \\in {\\cal G} \\}$.\n${\\cal G}$ is called element-transitive, if the orbit of any element is\nthe entire ground set $X$.\n\\end{definition}\n\nIn this case, we show that it is easy to achieve an optimal $(\\frac12-o(1))$-approximation\nfor a matroid independence constraint.\n\n\\begin{theorem}\n\\label{thm:sym-submod-independent}\nLet $\\max \\{f(S): S \\in {\\cal I}\\}$ be an instance symmetric with respect\nto an element-transitive group of permutations ${\\cal G}$. Let\n$\\overline{OPT} = \\max \\{F(\\bar{{\\bf x}}): {\\bf x} \\in P({\\cal M})\\}$\n where $\\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$.\nThen $\\overline{OPT} \\geq \\frac12 OPT$.\n\\end{theorem}\n\n\\begin{proof}\nLet $OPT = f(C)$.\nBy Lemma~\\ref{lemma:sym-local-opt}, $\\overline{OPT} = F({\\bf x}_0)$\nwhere ${\\bf x}_0$ is a local optimum for the problem $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal M})\\}$.\nThis means it is also a local optimum in the sense of Lemma~\\ref{lemma:local-opt},\nwith $t=1$.\nBy Lemma~\\ref{lemma:fractional-search},\n$$ 2 F({\\bf x}_0) \\geq F({\\bf x}_0 \\vee {\\bf 1}_C) + F({\\bf x}_0 \\wedge {\\bf 1}_C).$$\nAlso, ${\\bf x}_0 = \\bar{{\\bf x}}_0$. As we are dealing with an element-transitive\ngroup of symmetries, this means all the coordinates of ${\\bf x}_0$ are equal,\n${\\bf x}_0 = (\\xi,\\xi,\\ldots,\\xi)$. Therefore, ${\\bf x}_0 \\vee {\\bf 1}_C$ is equal\nto $1$ on $C$ and $\\xi$ outside of $C$. By Lemma~\\ref{lemma:rnd-threshold} (in the Appendix),\n$$ F({\\bf x}_0 \\vee {\\bf 1}_C) \\geq (1-\\xi) f(C).$$\nSimilarly, ${\\bf x}_0 \\wedge {\\bf 1}_C$ is equal to $\\xi$ on $C$ and $0$ outside of $C$.\nBy Lemma~\\ref{lemma:rnd-threshold},\n$$ F({\\bf x}_0 \\wedge {\\bf 1}_C) \\geq \\xi f(C).$$\nCombining the two bounds,\n$$ 2 F({\\bf x}_0) \\geq F({\\bf x}_0 \\vee {\\bf 1}_C) + F({\\bf x}_0 \\wedge {\\bf 1}_C) \n \\geq (1-\\xi) f(C) + \\xi f(C) = f(C) = OPT.$$\n\\end{proof}\n\nSince all symmetric solutions ${\\bf x} = (\\xi,\\xi,\\ldots,\\xi)$ form\na 1-parameter family, and $F(\\xi,\\xi,\\ldots,\\xi)$ is a concave function,\nwe can search for the best symmetric solution (within any desired accuracy)\nby binary search. By standard techniques, we get the following.\n\n\\begin{corollary}\nThere is a $(\\frac12 - o(1))$-approximation (``brute force'' search\nover symmetric solutions) for the problem $\\max \\{f(S): S \\in {\\cal I}\\}$\nfor instances symmetric under an element-transitive group of permutations.\n\\end{corollary}\n\nThe hard instances for submodular maximization subject to a matroid\nindependence constraint correspond to refinements of the\nMax Cut instance for the graph $K_2$ (Section~\\ref{section:hardness-applications}).\nIt is easy to see that such instances are element-transitive,\nand it follows from Section~\\ref{section:hardness-proof} that a $(\\frac12+\\epsilon)$-approximation\nfor such instances would require exponentially many value queries.\nTherefore, our approximation for element-transitive instances is optimal.\n\n\n\\subsection{Submodular maximization over bases}\n\nLet us come back to the problem of submodular maximization over the bases\nof a matroid. 
The property that $\\overline{OPT}$ is a local optimum\nwith respect to the original problem $\\max \\{F({\\bf x}): {\\bf x} \\in P({\\cal F})\\}$\nis very useful in arguing about the value of $\\overline{OPT}$.\nWe already have tools to deal with local optima from\nSection~\\ref{section:submod-bases}. Here we prove the following.\n\n\\begin{lemma}\n\\label{lemma:base-local-opt}\nLet $B({\\cal M})$ be the matroid base polytope of ${\\cal M}$ and\n${\\bf x}_0 \\in B({\\cal M})$ a local maximum for the submodular maximization problem\n$\\max \\{F({\\bf x}): {\\bf x} \\in B({\\cal M})\\}$, in the sense that $({\\bf x}-{\\bf x}_0) \\cdot \\nabla F|_{{\\bf x}_0} \\leq 0$\nfor any ${\\bf x} \\in B({\\cal M})$. Assume in addition that ${\\bf x}_0 \\in [s,t]^X$. Then\n$$ F({\\bf x}_0) \\geq \\frac12 (1-t+s) \\cdot OPT.$$\n\\end{lemma}\n\n\\begin{proof}\nLet $OPT = \\max \\{ f(B): B \\in {\\cal B} \\} = f(C)$.\nWe assume that ${\\bf x}_0 \\in B({\\cal M})$ is a local optimum with respect\nto any direction ${\\bf x}-{\\bf x}_0$, ${\\bf x} \\in B({\\cal M})$, so it is also a local optimum with respect\nto the fractional local search in the sense of Lemma~\\ref{lemma:fractional-search2},\nwith $t=1$. The lemma implies that\n$$ 2 F({\\bf x}_0) \\geq F({\\bf x}_0 \\vee {\\bf 1}_C) + F({\\bf x}_0 \\wedge {\\bf 1}_C).$$\nBy assumption, the coordinates of ${\\bf x}_0 \\vee {\\bf 1}_C$ are equal to $1$ on $C$\nand at most $t$ outside of $C$. With probability $1-t$, a random threshold in $[0,1]$\nfalls between $t$ and $1$, and Lemma~\\ref{lemma:rnd-threshold} (in the Appendix) implies that\n$$ F({\\bf x}_0 \\vee {\\bf 1}_C) \\geq (1-t) \\cdot f(C).$$\nSimilarly, the coordinates of ${\\bf x}_0 \\wedge {\\bf 1}_C$ are $0$ outside of $C$,\nand at least $s$ on $C$. A random threshold falls between $0$ and $s$\nwith probability $s$, and Lemma~\\ref{lemma:rnd-threshold} implies that\n$$ F({\\bf x}_0 \\wedge {\\bf 1}_C) \\geq s \\cdot f(C).$$\nPutting these inequalities together, we get\n$$2 F({\\bf x}_0) \\geq F({\\bf x}_0 \\vee {\\bf 1}_C) + F({\\bf x}_0 \\wedge {\\bf 1}_C) \\geq (1-t+s) \\cdot f(C).$$\n\\end{proof}\n\n\\\n\n\\noindent\n{\\bf Totally symmetric instances.}\nThe application we have in mind here is a special case of submodular\nmaximization over the bases of a matroid, which we call {\\em totally symmetric}.\n\n\\begin{definition}\n\\label{def:totally-symmetric}\nWe call an instance $\\max \\{f(S): S \\in {\\cal F} \\}$ totally symmetric with respect\nto a group of permutations ${\\cal G}$, if both $f(S)$ and ${\\cal F}$ are invariant\nunder ${\\cal G}$ and moreover, there is a point ${\\bf c} \\in P({\\cal F})$ such that\n${\\bf c} = \\bar{{\\bf x}} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf x})]$ for every ${\\bf x} \\in P({\\cal F})$.\nWe call ${\\bf c}$ the center of the instance.\n\\end{definition}\n\nNote that this is indeed stronger than just being invariant under ${\\cal G}$.\nFor example, an instance on a ground set $X = X_1 \\cup X_2$ could be symmetric\nwith respect to any permutation of $X_1$ and any permutation of $X_2$.\nFor any ${\\bf x} \\in P({\\cal F})$, the symmetric vector $\\bar{{\\bf x}}$ is constant\non $X_1$ and constant on $X_2$. However, in a totally symmetric instance,\nthere is a unique symmetric point.\n\n\\paragraph{Bases of partition matroids}\nA canonical example of a totally symmetric instance is as follows.\nLet $X = X_1 \\cup X_2 \\cup \\ldots \\cup X_m$ and let integers $k_1,\\ldots,k_m$\nbe given. 
This defines a partition matroid ${\\cal M} = (X,{\\cal B})$, whose bases are\n$$ {\\cal B} = \\{B: \\forall j; |B \\cap X_j| = k_j \\}.$$\nThe associated matroid base polytope is\n$$ B({\\cal M}) = \\{{\\bf x} \\in [0,1]^X: \\forall j; \\sum_{i \\in X_j} x_i = k_j \\}.$$\nLet ${\\cal G}$ be a group of permutations such that the orbit of each element\n$i \\in X_j$ is the entire set $X_j$.\nThis implies that for any ${\\bf x} \\in B({\\cal M})$,\n$\\bar{{\\bf x}}$ is the same vector ${\\bf c}$, with coordinates $k_j \/ |X_j|$ on $X_j$.\nIf $f(S)$ is also invariant under ${\\cal G}$, we have a totally symmetric\ninstance $\\max \\{f(S): S \\in {\\cal B} \\}$. \n\n\\paragraph{Example: welfare maximization}\nTo present a more concrete example, consider $X_j = \\{ a_{j1}, \\ldots, a_{jk} \\}$\nfor each $j \\in [m]$, a set of bases ${\\cal B} = \\{B: \\forall j; |B \\cap X_j| = 1\\}$,\nand an objective function in the form $f(S) = \\sum_{i=1}^{k} v(\\{j: a_{ji} \\in S\\})$,\nwhere $v:2^{[m]} \\rightarrow {\\boldmath R}_+$ is a submodular function.\nThis is a totally symmetric instance, which captures the welfare maximization\nproblem for combinatorial auctions where each player has the same valuation function.\n(Including element $a_{ji}$ in the solution corresponds to allocating item $j$ to player $i$;\nsee \\cite{CCPV09} for more details.) We remark that here we consider possibly\nnonmonotone submodular functions, which is not common for combinatorial auctions;\nnevertheless the problem still makes sense.\n\n\\\n\nWe show that for such instances, the center point achieves an improved approximation.\n\n\\begin{theorem}\n\\label{thm:sym-solution}\nLet $\\max \\{f(S): S \\in {\\cal B}\\}$ be a totally symmetric instance.\nLet the fractional packing number of bases be $\\nu$ and the fractional\npacking number of dual bases $\\nu^*$. Then the center point ${\\bf c}$\nsatisfies\n$$ F({\\bf c}) \\geq \\left(1 - \\frac{1}{2\\nu} - \\frac{1}{2\\nu^*} \\right) OPT.$$\n\\end{theorem}\n\nRecall that in the general case, we get a $\\frac12 (1-1\/\\nu-o(1))$-approximation\n(Theorem~\\ref{thm:submod-bases-approx}). By passing to the dual matroid,\nwe can also obtain a $\\frac12 (1-1\/\\nu^*-o(1))$-approximation,\nso in general, we know how to achieve a $\\frac12 (1-1\/\\max\\{\\nu,\\nu^*\\}-o(1))$-approximation.\nFor totally symmetric instances where $\\nu=\\nu^*$, we improve this\nto the optimal factor of $1-1\/\\nu$.\n\n\\begin{proof}\nSince there is a unique center ${\\bf c} = \\bar{{\\bf x}}$ for any ${\\bf x} \\in B({\\cal M})$,\nthe center is also the symmetric optimum: $F({\\bf c}) = \\max \\{ F(\\bar{{\\bf x}}): {\\bf x} \\in B({\\cal M})\\}$.\nDue to Lemma~\\ref{lemma:sym-local-opt}, ${\\bf c}$ is a local optimum\nfor the problem $\\max \\{F({\\bf x}): {\\bf x} \\in B({\\cal M})\\}$.\n\nBecause the fractional packing number of bases is $\\nu$, we have $c_i \\leq 1\/\\nu$\nfor all $i$. Similarly, because the fractional packing number of dual bases\n(complements of bases) is $\\nu^*$,\nwe have $1-c_i \\leq 1\/\\nu^*$. 
This means that ${\\bf c} \\in [1-1\/\\nu^*,1\/\\nu]^X$.\nLemma~\\ref{lemma:base-local-opt} implies that\n$$ 2 F({\\bf c}) \\geq \\left( 1-\\frac{1}{\\nu} + 1-\\frac{1}{\\nu^*} \\right) OPT.$$\n\\end{proof}\n\n\\begin{corollary}\nLet $\\max \\{f(S): S \\in {\\cal B}\\}$ be an instance\non a partition matroid where every base takes at least\nan $\\alpha$-fraction of each part, at most a $(1-\\alpha)$-fraction of each part,\nand the submodular function $f(S)$ is invariant under a group ${\\cal G}$\nwhere the orbit of each $i \\in X_j$ is $X_j$.\nThen, the center point ${\\bf c} = {\\bf E}_{\\sigma \\in {\\cal G}}[\\sigma({\\bf 1}_B)]$\n(equal for any $B \\in {\\cal B}$) satisfies $F({\\bf c}) \\geq \\alpha \\cdot OPT$.\n\\end{corollary}\n\n\\begin{proof}\nIf the orbit of any element $i \\in X_j$ is the entire set $X_j$,\nit also means that $\\sigma(i)$ for a random $\\sigma \\in {\\cal G}$\nis uniformly distributed over $X_j$ (by the transitive property of ${\\cal G}$).\nTherefore, symmetrizing any fractional vector ${\\bf x} \\in B({\\cal M})$ gives\nthe same vector $\\bar{{\\bf x}} = {\\bf c}$, where $c_i = k_j \/ |X_j|$ for $i \\in X_j$.\nAlso, our assumptions mean that the fractional packing number of bases\nis at least $1\/(1-\\alpha)$, and the fractional packing number of dual bases is\nalso at least $1\/(1-\\alpha)$.\nDue to Theorem~\\ref{thm:sym-solution}, the center ${\\bf c}$ satisfies\n$F({\\bf c}) \\geq \\alpha \\cdot OPT$.\n\\end{proof}\n\nThe hard instances for submodular maximization over matroid bases\nthat we describe in Section~\\ref{section:hardness-applications} are exactly of this form\n(see the last paragraph of Section~\\ref{section:hardness-applications}, with $\\alpha=1\/k$).\nThere is a unique symmetric solution,\n${\\bf x} = (\\alpha,\\alpha,\\ldots,\\alpha, 1-\\alpha,1-\\alpha,\\ldots,1-\\alpha)$.\nThe fractional base packing number for these matroids is\n$\\nu = 1\/(1-\\alpha)$ and Theorem~\\ref{thm:general-hardness} implies that\nany $(\\alpha+\\epsilon) = (1-1\/\\nu+\\epsilon)$-approximation for such matroids would\nrequire exponentially many value queries. Therefore, our approximation\nin this special case is optimal.\n\n\n\\section*{Acknowledgment}\nThe author would like to thank Jon Lee and Maxim Sviridenko for helpful discussions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}