diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdxyu" "b/data_all_eng_slimpj/shuffled/split2/finalzzdxyu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdxyu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nVector equilibrium problem is a unified model of several problems, for\ninstance, vector variational inequalities and vector optimization problems.\nFor further relevant information on this topic, the reader is referred to\nthe following recent publications available in our bibliography: [1-7],\n[9-12], [14-17].\n\nIn this paper, we will suppose that $X$\\textit{\\ }is a nonempty, convex and\ncompact set in a Hausdorff locally convex space $E$ , $A:X\\rightarrow 2^{X}$\nand $F:X\\times X\\times X\\rightarrow 2^{X}$ are correspondences and $C\\subset\nX$ is a nonempty closed convex cone with int$C\\neq \\emptyset $.\n\nWe consider the following generalized strong vector quasi-equilibrium\nproblem (in short, GSVQEOP):\\newline\n\nfind $x^{\\ast }\\in X$ such that $x^{\\ast }\\in \\overline{A}(x^{\\ast })$ and\neach $u\\in A(x^{\\ast })$ implies that $F(u,x^{\\ast },z)\\nsubseteq $int$C$\nfor each $z\\in A(x^{\\ast }).$\n\n\\section{Preliminary results}\n\nLet $X$, $Y$ be topological spaces and $T:X\\rightarrow 2^{Y}$ be a\ncorrespondence. $T$ is said to be \\textit{upper semicontinuous} if for each \nx\\in X$ and each open set $V$ in $Y$ with $T(x)\\subset V$, there exists an\nopen neighborhood $U$ of $x$ in $X$ such that $T(x)\\subset V$ for each $y\\in\nU$. $T$ is said to be \\textit{lower semicontinuous} if for each x$\\in X$ and\neach open set $V$ in $Y$ with $T(x)\\cap V\\neq \\emptyset $, there exists an\nopen neighborhood $U$ of $x$ in $X$ such that $T(y)\\cap V\\neq \\emptyset $\nfor each $y\\in U$. $T$ is said to have \\textit{open lower sections} if \nT^{-1}(y):=\\{x\\in X:y\\in T(x)\\}$ is open in $X$ for each $y\\in Y.$\n\nThe following lemma will be crucial in the proofs.\n\n\\begin{lemma}\n(Yannelis and Prabhakar, \\cite{yan}). \\textit{Let }$X$\\textit{\\ be a\nparacompact Hausdorff topological space and }$Y$\\textit{\\ be a Hausdorff\ntopological vector space. Let }$T:X\\rightarrow 2^{Y}$\\textit{\\ be a\ncorrespondence with nonempty convex values\\ and for each }$y\\in Y$\\textit{, \n$T^{-1}(y)$\\textit{\\ is open in }$X$\\textit{. 
Then, }$T$\\textit{\\ has a\ncontinuous selection that is, there exists a continuous function }\nf:X\\rightarrow Y$\\textit{\\ such that }$f(x)\\in T(x)$\\textit{\\ for each }\nx\\in X$\\textit{.\\medskip }\n\\end{lemma}\n\nThe correspondence $\\overline{T}$ is defined by $\\overline{T}(x):=\\{y\\in\nY:(x,y)\\in $cl$_{X\\times Y}$ Gr $T\\}$ (the set cl$_{X\\times Y}$ Gr $(T)$ is\ncalled the adherence of the graph of $T$)$.$ It is easy to see that cl \nT(x)\\subset \\overline{T}(x)$ for each $x\\in X.$\n\nIf $X$ and $Y$ are topological vector spaces, $K$ is a nonempty subset of \nX, $ $C$ is a nonempty closed convex cone and $T:K\\rightarrow 2^{Y}$ is a\ncorrespondence, then \\cite{luc}, $T$ is called \\textit{upper }$C$\\textit\n-continuous at} $x_{0}\\in K$ if, for any neighborhood $U$ of the origin in \nY,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\\in V,$ \nT(x)\\subset T(x_{0})+U+C.$ $T$ is called \\textit{lower }$C$\\textit\n-continuous at} $x_{0}\\in K$ if, for any neighborhood $U$ of the origin in \nY,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\\in V,$ \nT(x_{0})\\subset T(x)+U-C.$\n\nThe property of properly $C-$quasi-convexity for correspondences is\npresented below.\n\nLet $X$ be a nonempty convex subset of a topological vector space\\textit{\\ }\nE,$ $Y$ be a topological vector space, and $C$ be a pointed closed convex\ncone in $Z$ with its interior int$C\\neq \\emptyset .$ Let $T:X\\rightarrow\n2^{Y}$ be a correspondence with nonempty values. $T$ is said to be \\textit\nproperly }$C-$\\textit{quasi-convex on} $X$ (\\cite{long}), if for any \nx_{1},x_{2}\\in X$ and $\\lambda \\in \\lbrack 0,1],$ either $T(x_{1})\\subset\nT(\\lambda x_{1}+(1-\\lambda )x_{2})+C$ or $T(x_{2})\\subset T(\\lambda\nx_{1}+(1-\\lambda )x_{2})+C.\\medskip $\n\nIn order to establish our main theorems, we need to prove some auxiliary\nresults. 
The starting point is the following statement:

\begin{theorem}
Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$ and let $\mathcal{C}$ be a nonempty subset of $X\times X.$ Assume that the following conditions are fulfilled:
\end{theorem}

\textit{a) $\mathcal{C}^{-}(y)=\{x\in X:(x,y)\in \mathcal{C}\}$ is open for each $y\in X;$}

\textit{b) $\mathcal{C}^{+}(x)=\{y\in X:(x,y)\in \mathcal{C}\}$ is convex and nonempty for each $x\in X.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $(x^{\ast },x^{\ast })\in \mathcal{C}$.}

\begin{proof}
Let us define the correspondence $T:X\rightarrow 2^{X}$ by

$T(x)=\mathcal{C}^{+}(x)$ for each $x\in X.$

The correspondence $T$ has nonempty, convex values and open lower sections.

We apply Yannelis and Prabhakar's Lemma and we obtain that $T$ has a continuous selection $f:X\rightarrow X.$

According to the Tychonoff fixed point theorem \cite{is}, there exists $x^{\ast }\in X$ such that $f(x^{\ast })=x^{\ast }.$ Hence, $x^{\ast }\in T(x^{\ast })$ and, obviously, $(x^{\ast },x^{\ast })\in \mathcal{C}.$
\end{proof}

The next two results are direct consequences of Theorem 1.

\begin{theorem}
Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $A:X\rightarrow 2^{X}$ and $P:X\times X\rightarrow 2^{X}$ be correspondences such that the following conditions are fulfilled:
\end{theorem}

\textit{a) $A$ has nonempty, convex values and open lower sections;}

\textit{b) the set $\{y\in X: A(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X;$}

\textit{c) $\{y\in X: A(x)\cap P(x,y)=\emptyset \}$ is convex for each $x\in X;$}

\textit{d) $\{x\in X:A(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$}

\begin{proof}
Let us define the set $\mathcal{C}=\{(x,y)\in X\times X: A(x)\cap P(x,y)=\emptyset \}\cap $Gr$A.$

Then, $\mathcal{C}^{+}(x)=\{y\in X: A(x)\cap P(x,y)=\emptyset \}\cap A(x)$ for each $x\in X$ and

$\mathcal{C}^{-}(y)=A^{-1}(y)\cap \{x\in X:A(x)\cap P(x,y)=\emptyset \}$ for each $y\in X.$

Assumption b) implies that $\mathcal{C}$ is nonempty. The set $\mathcal{C}^{-}(y)$ is open for each $y\in X$ since Assumptions a) and d) hold.

According to Assumptions a), b) and c), $\mathcal{C}^{+}(x)$ is nonempty and convex for each $x\in X.$

All hypotheses of Theorem 1 are fulfilled, and then there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$
\end{proof}

We establish the following result as a consequence of Theorem 2.
It will be used in order to prove the existence of solutions for the considered vector quasi-equilibrium problem.

\begin{theorem}
Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $A:X\rightarrow 2^{X}$ and $P:X\times X\rightarrow 2^{X}$ be correspondences such that the following conditions are fulfilled:
\end{theorem}

\textit{a) $A$ has nonempty, convex values and open lower sections;}

\textit{b) the set $\{y\in X: \overline{A}(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X;$}

\textit{c) $\{y\in X: \overline{A}(x)\cap P(x,y)=\emptyset \}$ is convex for each $x\in X;$}

\textit{d) $\{x\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$}

We note that, according to Theorem 2, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $\overline{A}(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$ Obviously, $\overline{A}(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset $ implies $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$

If $A(x)=X$ for each $x\in X$, Theorem 2 implies the following corollary.

\begin{corollary}
Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $P:X\times X\rightarrow 2^{X}$ be a correspondence such that the following conditions are fulfilled:
\end{corollary}

\textit{a) $\{y\in X: P(x,y)=\emptyset \}$ is nonempty and convex for each $x\in X;$}

\textit{b) $\{x\in X:P(x,y)=\emptyset \}$ is open for each $y\in X.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $P(x^{\ast },x^{\ast })=\emptyset .$}

By applying an approximation method of proof, we can prove Theorem 4.

\begin{theorem}
Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$ and let $\mathcal{C}$ be a nonempty, closed subset of $X\times X.$ Assume that there exists a sequence $(G_{k})_{k\in \mathbb{N}^{\ast }}$ of subsets of $X\times X$ such that the following conditions are fulfilled:
\end{theorem}

\textit{a) for each $k\in \mathbb{N}^{\ast },$ $G_{k}^{-}(y)=\{x\in X:(x,y)\in G_{k}\}$ is open for each $y\in X;$}

\textit{b) for each $k\in \mathbb{N}^{\ast },$ $G_{k}^{+}(x)=\{y\in X:(x,y)\in G_{k}\}$ is convex and nonempty for each $x\in X;$}

\textit{c) $G_{k}\supseteq G_{k+1}$ for each $k\in \mathbb{N}^{\ast };$}

\textit{d) for every open set $G$ with $G\supset \mathcal{C},$ there exists $k\in \mathbb{N}^{\ast }$ such that $G_{k}\subseteq G.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $(x^{\ast },x^{\ast })\in \mathcal{C}$.}

\begin{proof}
For each $k\in \mathbb{N}^{\ast },$ we apply Theorem 1.
Let $x^{k}\\in X$\nsuch that $(x^{k},x^{k})\\in G_{k}.$ Since $X$ is a compact set, we can\nconsider that the sequence $(x^{k})_{k}$ converges to some $x^{\\ast }\\in X.$\nWe claim that $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}.$\n\nIndeed, let us suppose, by way of contradiction, that $(x^{\\ast },x^{\\ast\n})\\notin \\mathcal{C}.$ Since $\\mathcal{C}$\\textit{\\ }is nonempty and\ncompact, we can choose a neighborhood $V_{(x^{\\ast },x^{\\ast })}$ of \n(x^{\\ast },x^{\\ast })$ and an open set $G$ such that $G\\supset \\mathcal{C}$\nand $V_{(x^{\\ast },x^{\\ast })}\\cap G=\\emptyset .$ According to Assumptions\nd) and c), there exists $k_{1}\\in \\mathbb{N}^{\\ast }$ such that \nG_{k}\\subseteq G$ for each $k\\geq k_{1}.$ Since $V_{(x^{\\ast },x^{\\ast })}$\nis a neighborhood of $(x^{\\ast },x^{\\ast }),$ there exists $k_{2}\\in \\mathbb\nN}^{\\ast }$ such that $(x^{k},x^{k})\\in V_{(x^{\\ast },x^{\\ast })}$ for each \nk\\geq k_{2}.$ $\\ $Hence, for $k\\geq $max($k_{2},k_{1}),$ \n(x^{k},x^{k})\\notin G_{k},$ which is a contradiction.\n\nConsequently, $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}.$\n\\end{proof}\n\nTheorem 5 is a consequence of Theorem 4 and it will be used in Section 3 in\norder to prove the existence of solutions for GSVQEP.\n\n\\begin{theorem}\n\\textit{Let }$X$\\textit{\\ be a nonempty, convex and compact set in a\nHausdorff locally convex space }$E$\\textit{\\ , and let }$A:X\\rightarrow\n2^{X} $\\textit{\\ and }$P:X\\times X\\rightarrow 2^{X}$\\textit{\\ be\ncorrespondences such that the following conditions are fulfilled:}\n\\end{theorem}\n\n\\textit{a) }$A$\\textit{\\ has nonempty, convex values and open lower sections\n}\n\n\\textit{b) the set }$U=\\{(x,y)\\in X\\times X:$\\textit{\\ }$\\overline{A}(x)\\cap\nP(x,y)=\\emptyset \\}$\\textit{\\ \\ is closed} \\textit{and} $U\\cap $Gr$\\overline\nA}$ \\textit{is nonempty;}\n\n\\textit{c) there exists a sequence }$(P_{k})_{k\\in N}$\\textit{\\ of\ncorrespondences, where, for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ }\nP_{k}:X\\times X\\rightarrow 2^{X}$\\textit{\\ and let }$U_{k}=\\{(x,y)\\in\nX\\times X:$\\textit{\\ }$A(x)\\cap P_{k}(x,y)=\\emptyset \\}$\\textit{. 
Assume\nthat:}\n\n\\textit{\\ }$\\ \\ \\ $\\textit{c1) }$U_{k}^{+}(x)=\\{y\\in X:$\\textit{\\ }\n\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}$\\textit{\\ is convex for each }\nx\\in X$\\textit{\\ and }$U_{k}^{+}(x)\\cap A(x)\\neq \\emptyset $\\textit{\\ for\neach }$x\\in X;$\n\n\\textit{\\ \\ \\ \\ c2 ) }$U_{k}^{-}(y)=\\{x\\in X:\\overline{A}(x)\\cap\nP_{k}(x,y)=\\emptyset \\}\\ $\\textit{\\ is open for each }$y\\in X;$\n\n\\textit{\\ \\ \\ \\ c3) for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ }\nP_{k}(x,y)\\subseteq P_{k+1}(x,y)$\\textit{\\ for each }$(x,y)\\in X\\times X;$\n\n\\textit{\\ \\ \\ \\ \\ c4) for every open set }$G$\\textit{\\ with }$G\\supset U\\cap \n$Gr$\\overline{A},$\\textit{\\ there exists }$k\\in \\mathbb{N}^{\\ast }$\\textit{\\\nsuch that }$G\\supseteq U_{k}\\cap $Gr$A.$\n\n\\textit{Then, there exists }$x^{\\ast }\\in $\\textit{\\ }$X$\\textit{\\ such that \n}$\\ x^{\\ast }\\in \\overline{A}(x^{\\ast })$\\textit{\\ and }$A(x^{\\ast })\\cap\nP(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $\n\n\\begin{proof}\nLet us define $\\mathcal{C=}U\\cap $Gr$\\overline{A}.$ According to Assumptions\nb) and c), $\\mathcal{C\\ }$\\ is a nonempty and closed subset of $X\\times X.$\n\nFurther, \\textit{\\ }for each\\textit{\\ }$k\\in \\mathbb{N}^{\\ast },$ let us\ndefine $G_{k}=U_{k}\\cap $Gr$A\\subseteq X\\times X.$\n\nThen, for each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{+}(x)$ $=\\{y\\in X:(x,y)\\in\nG_{k}\\}=U_{k}^{+}(x)\\cap A(x)$ is nonempty and convex for each\\textit{\\ }\nx\\in X,$ since Assumptions a) and c1) hold.\n\nFor each $k\\in \\mathbb{N}^{\\ast },$ $G_{k}^{-}(y)$ $=\\{x\\in X:(x,y)\\in\nG_{k}\\}=U_{k}^{-}(y)\\cap A^{-1}(y)$ is open for each\\textit{\\ }$y\\in X,$\nsince Assumptions a) and c2) hold.\n\nAssumption c3) implies that, \\textit{\\ }for each\\textit{\\ }$k\\in \\mathbb{N\n^{\\ast },$ $U_{k+1}\\subseteq U_{k}$ and then, $G_{k}\\supseteq G_{k+1}$ and\nAssumption c4) implies that for every open set $G$ with $G\\supset \\mathcal{C\n,$ there exists $k\\in \\mathbb{N}^{\\ast }$ such that $G_{k}\\subseteq \\mathcal\nC}.$\n\nAll hypotheses of Theorem 4 are verified. Therefore, there exists $x^{\\ast\n}\\in X$ such that $(x^{\\ast },x^{\\ast })\\in \\mathcal{C}$.\n\nConsequently, there exists $x^{\\ast }\\in X$ such that $\\ x^{\\ast }\\in \n\\overline{A}(x^{\\ast })$\\textit{\\ }and\\textit{\\ }$A(x^{\\ast })\\cap P(x^{\\ast\n},x^{\\ast })=\\emptyset .\\medskip $\n\\end{proof}\n\n\\section{Main results}\n\nThis section is devoted to the study of the existence of solutions for the\nconsidered generalized strong vector quasi-equilibrium problem. We derive\nour main results by using the auxiliary theorems concerning correspondences,\nwhich have been established in the previous section. This new approach to\nsolve GSVQEP is intended to provide new conditions under which the solutions\nexist.\\medskip\n\nThe first theorem states that GSVQEP has solutions if $F(\\cdot ,y,\\cdot )$\nis lower ($-C$)-semicontinuous for each $y\\in X$ and $F(u,\\cdot ,z)$\\ is\nproperly $C-$\\ quasi-convex for each $(u,z)\\in X\\times X.$\n\n\\begin{theorem}\n\\textit{Let }$F:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ be a\ncorrespondence with nonempty values. 
Suppose that:
\end{theorem}

\textit{a) $A$ has nonempty, convex values and open lower sections;}

\textit{b) for each $x\in X,$ there exists $y\in A(x)$ such that each $u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq $int$C$ for each $z\in \overline{A}(x);$}

\textit{c) $F(\cdot ,y,\cdot ): X\times X\rightarrow 2^{X}$ is lower ($-C$)-semicontinuous for each $y\in X;$}

\textit{d) for each $(u,z)\in X\times X,$ $F(u,\cdot ,z):X\rightarrow 2^{X}$ is properly $C$-quasi-convex.}

\textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and each $u\in A(x^{\ast })$ implies that $F(u,x^{\ast },z)\nsubseteq $int$C$ for each $z\in A(x^{\ast }),$ that is, $x^{\ast }$ is a solution for GSVQEP.}

\begin{proof}
Let us define $P:X\times X\rightarrow 2^{X}$ by

$P(x,y)=\{u\in X: \exists z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}$ for each $(x,y)\in X\times X.$

Assumption b) implies that the set $\{y\in X: \overline{A}(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X.$

We claim that the set $E(x)$ is convex for each $x\in X,$ where

$E(x)=\{y\in X: \overline{A}(x)\cap P(x,y)=\emptyset \}=\{y\in X:$ each $u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq C$ for each $z\in \overline{A}(x)\}.$

Indeed, let us fix $x_{0}\in X$ and let us consider $y_{1},y_{2}\in E(x_{0}).$ This means that each $u\in \overline{A}(x_{0})$ implies that $F(u,y_{1},z)\nsubseteq C$ and $F(u,y_{2},z)\nsubseteq C$ for each $z\in \overline{A}(x_{0}).$

Let $y(\lambda )=\lambda y_{1}+(1-\lambda )y_{2}$ be defined for each $\lambda \in \lbrack 0,1].$

We claim that $y(\lambda )\in E(x_{0})$ for each $\lambda \in \lbrack 0,1].$

Suppose, on the contrary, that there exist $\lambda _{0}\in \lbrack 0,1],$ $u^{\prime }\in \overline{A}(x_{0})$ and $z^{\prime }\in \overline{A}(x_{0})$ such that $F(u^{\prime },y(\lambda _{0}),z^{\prime })\subseteq C.$ Since $F(u^{\prime },\cdot ,z^{\prime }):X\rightarrow 2^{X}$ is properly $C$-quasi-convex, we have that:

$F(u^{\prime },y_{1},z^{\prime })\subseteq F(u^{\prime },y(\lambda _{0}),z^{\prime })+C$ or $F(u^{\prime },y_{2},z^{\prime })\subseteq F(u^{\prime },y(\lambda _{0}),z^{\prime })+C.$

On the other hand, it is true that $F(u^{\prime },y(\lambda _{0}),z^{\prime })\subseteq C.$ We obtain that:

$F(u^{\prime },y_{j},z^{\prime })\subseteq C+C\subseteq C$ for $j=1$ or for $j=2$.

This contradicts the assumption that $y_{1},y_{2}\in E(x_{0})$. Consequently, $E(x_{0})$ is convex and Assumption c) from Theorem 3 is fulfilled.

Now, we will prove that $D(y)=\{x\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$

In order to do this, we will show that $^{C}D(y)$ is closed for each $y\in X,$ where $^{C}D(y)=\{x\in X:\overline{A}(x)\cap P(x,y)\neq \emptyset \}=\{x\in X:$ there exist $u,z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}.$

Let $(x_{\alpha })_{\alpha \in \Lambda }$ be a net in $^{C}D(y)$ such that lim$_{\alpha }x_{\alpha }=x_{0}.$ Then,
there exist $u_{\\alpha },z_{\\alpha\n}\\in \\overline{A}(x_{\\alpha })$ such that $F(u_{\\alpha },y,z_{\\alpha\n})\\subseteq C.$\n\nSince $X$ is a compact set, we can suppose that $(u_{\\alpha })_{\\alpha \\in\n\\Lambda },(z_{\\alpha })_{\\alpha \\in \\Lambda }$ are convergent nets and let\nlim$_{\\alpha }u_{\\alpha }=u_{0}$ and lim$_{\\alpha }z_{\\alpha }=z_{0}.$\n\nThe closedness of $\\overline{A}$ implies that $u_{0},z_{0}\\in \\overline{A\n(x_{0}).$\n\nNow, we claim that $F(u_{0},y,z_{0})\\subseteq C.$\n\nSince $F(u_{\\alpha },y,z_{\\alpha })\\subseteq C$ and $F(\\cdot ,y,\\cdot ):\n\\textit{\\ }$X\\times X\\rightarrow 2^{X}$\\textit{\\ }is lower ($-C\n)-semicontinuous, for each neighborhood $U$ of the origin in $X,$ there\nexists a subnet $(u_{\\beta },z_{\\beta })_{\\beta }$ of $(u_{\\alpha\n},z_{\\alpha })_{\\alpha }$ such that $F(u_{0},y,z_{0})\\subset F(u_{\\beta\n},y,z_{\\beta })+U+C.$ Hence, $F(u_{0},y,z_{0})\\subset U+C.$\n\nWe will show that $F(u_{0},y,z_{0})\\subset C.$ Suppose, by way of\ncontradiction, that there exists $t\\in F(u_{0},y,z_{0})\\cap ^{C}C.$ We note\nthat $B=C-t$ is a closed set which does not contain $0.$ It follows that \n^{C}B$ is open and contains $0.$ Since $X$ is locally convex, there exists a\nconvex neighborhood $U_{1}$ of origin such that $U_{1}\\subset X\\backslash B$\nand $U_{1}=-U_{1}$. Thus, $0\\notin B+U_{1}$ and then, $t\\notin C+U_{1},$\nwhich is a contradiction. Therefore, $F(u_{0},y,z_{0})\\subset C.$\n\nWe proved that there exist $u_{0},z_{0}\\in \\overline{A}(x_{0})$ such that \nF(u_{0},y,z_{0})\\subseteq C.$ It follows that $^{C}D(y)$ is closed. Then, \nD(y)$ is an open set and Assumption d) from Theorem 3 is fulfilled.\n\nConsequently, all conditions of Theorem 3 are verified, so that there exists \n$x^{\\ast }\\in $\\ $X$\\ such that $\\ x^{\\ast }\\in A(x^{\\ast })$ and $A(x^{\\ast\n})\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $ Obviously, $x^{\\ast }\n\\textit{\\ }is a solution for GSVQEP.\\medskip\n\\end{proof}\n\nIn order to obtain a second result concerning the existence of solutions of\nGSVQEP, we use an approximation method and Theorem 5. We mention that this\nresult does not require convexity properties for the correspondence $F.$\n\n\\begin{theorem}\n\\textit{Let }$F:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ be a\ncorrespondence. Suppose that:}\n\\end{theorem}\n\n\\textit{a) }$A$\\textit{\\ has nonempty, convex values and open lower\nsections; }$\\overline{A}$\\textit{\\ is lower semicontinuous;}\n\n\\textit{b) }$F$\\textit{\\ is upper semicontinuous with nonempty, closed\nvalues;}\n\n\\textit{c) }$U\\cap $Gr$\\overline{A}$\\textit{\\ is nonempty, where }$U=\\{\n\\textit{\\ }$(x,y)\\in X\\times X:u\\in \\overline{A}(x)$\\textit{\\ implies that }\nF(u,y,z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in \\overline{A\n(x)\\};$\n\n\\textit{d) there exists a sequence }$(F_{k})_{k\\in N}$\\textit{\\ of\ncorrespondences, such that, for each }$k\\in \\mathbb{N}^{\\ast },$\\textit{\\ }\nF_{k}:X\\times X\\times X\\rightarrow 2^{X}$\\textit{\\ and let }\nU_{k}=\\{(x,y)\\in X\\times X:u\\in \\overline{A}(x)$\\textit{\\ implies that }\nF_{k}(u,y,z)\\nsubseteq $\\textit{int}$C$\\textit{\\ for each }$z\\in \\overline{A\n(x)\\}$\\textit{. 
\textit{Assume that:}

\textit{d1) for each $k\in \mathbb{N}^{\ast }$ and for each $x\in X,$ there exists $y\in A(x)$ such that each $u\in \overline{A}(x)$ implies that $F_{k}(u,y,z)\nsubseteq $int$C$ for each $z\in A(x);$}

\textit{d2) for each $k\in \mathbb{N}^{\ast }$ and for each $(u,z)\in X\times X,$ $F_{k}(u,\cdot ,z):X\rightarrow 2^{X}$ is properly $C$-quasi-convex;}

\textit{d3) for each $k\in \mathbb{N}^{\ast }$ and for each $y\in X,$ $F_{k}(\cdot ,y,\cdot ): X\times X\rightarrow 2^{X}$ is lower ($-C$)-semicontinuous;}

\textit{d4) for each $k\in \mathbb{N}^{\ast },$ for each $(x,y)\in X\times X,$ and for each $u\in X$ with the property that $\exists z\in \overline{A}(x)$ such that $F_{k}(u,y,z)\subseteq C,$ there exists $z^{\prime }\in \overline{A}(x)$ such that $F_{k+1}(u,y,z^{\prime })\subseteq C;$}

\textit{d5) for every open set $G$ with $G\supset U\cap $Gr$\overline{A},$ there exists $k\in \mathbb{N}^{\ast }$ such that $G\supseteq U_{k}\cap $Gr$A.$}

\textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in \overline{A}(x^{\ast })$ and each $u\in A(x^{\ast })$ implies that $F(u,x^{\ast },z)\nsubseteq $int$C$ for each $z\in A(x^{\ast }),$ that is, $x^{\ast }$ is a solution for GSVQEP.}

\begin{proof}
Let us define $P:X\times X\rightarrow 2^{X}$ by

$P(x,y)=\{u\in X: \exists z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}$ for each $(x,y)\in X\times X.$

We claim that $U=\{(x,y)\in X\times X:u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq $int$C$ for each $z\in \overline{A}(x)\}$ is closed.

Let $(x^{0},y^{0})\in $cl$U.$ Then, there exists a net $(x^{\alpha },y^{\alpha })_{\alpha \in \Lambda }$ in $U$ such that $\lim_{\alpha }(x^{\alpha },y^{\alpha })=(x^{0},y^{0})\in X\times X.$ Let $u\in \overline{A}(x^{0})$ and $z\in \overline{A}(x^{0}).$ Since $\overline{A}$ is lower semicontinuous and $\lim_{\alpha }x^{\alpha }=x^{0},$ there exist nets $(u^{\alpha })_{\alpha \in \Lambda }$ and $(z^{\alpha })_{\alpha \in \Lambda }$ in $X$ such that $u^{\alpha },z^{\alpha }\in \overline{A}(x^{\alpha })$ for each $\alpha \in \Lambda $ and $\lim_{\alpha }u^{\alpha }=u,$ $\lim_{\alpha }z^{\alpha }=z.$ Since $(x^{\alpha },y^{\alpha })_{\alpha \in \Lambda }$ is a net in $U,$ then, for each $\alpha \in \Lambda ,$ $F(u^{\alpha },y^{\alpha },z^{\alpha })\nsubseteq $int$C,$ that is, $F(u^{\alpha },y^{\alpha },z^{\alpha })\cap W\neq \emptyset ,$ where $W=X\backslash $int$C;$ that is, there exists a net $(t^{\alpha })_{\alpha \in \Lambda }$ in $X$ such that $t^{\alpha }\in F(u^{\alpha },y^{\alpha },z^{\alpha })\cap W$ for each $\alpha \in \Lambda .$

Since $X$ is compact, we can suppose that $\lim_{\alpha }t^{\alpha }=t^{0}.$ The closedness of $W$ implies that $t^{0}\in W.$ We invoke here the closedness of $F$ and we conclude that $t^{0}\in F(u,y^{0},z).$ Therefore, $F(u,y^{0},z)\cap W\neq \emptyset ,$ and, thus,
$u\\in \\overline{A}(x^{0})$\nimplies $F(u,y^{0},z)\\nsubseteq $int$C$ for each $z\\in \\overline{A}(x^{0}).$\nHence, $U$ is closed.\n\nFor each $k\\in \\mathbb{N}^{\\ast },$ let us define $P_{k}:X\\times\nX\\rightarrow 2^{X},$ by\n\n$P_{k}(x,y)=\\{u\\in X:$ $\\exists z\\in \\overline{A}(x)$ such that \nF_{k}(u,y,z)\\subseteq C\\}$ for each $(x,y)\\in X\\times X$ and\n\n$U_{k}=\\{(x,y)\\in X\\times X:u\\in \\overline{A}(x)$ implies that \nF_{k}(u,y,z)\\nsubseteq $int$C$ for each $z\\in \\overline{A}(x)\\}=\\{(x,y)\\in\nX\\times X:$\\textit{\\ }$\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}.$\n\n\\textit{\\ }Let\\textit{\\ }$k\\in \\mathbb{N}^{\\ast }.$\n\nAssumption d1) implies that $U_{k}^{+}(x)\\cap A(x)\\neq \\emptyset $\\textit{\\ \nfor each\\textit{\\ }$x\\in X$ and Assumption d2) implies that $U_{k}^{+}(x)$\nis convex \\textit{\\ }for each\\textit{\\ }$x\\in X$ (we use a similar proof\nwith the one of Theorem 6)\n\nSince $F_{k}(\\cdot ,y,\\cdot ):$\\textit{\\ }$X\\times X\\rightarrow 2^{X}\n\\textit{\\ }is lower ($-C$)-semicontinuous for each $y\\in X,$ by following an\nargument similar with the one from the proof of Theorem 6, we can prove that\n\n$U_{k}^{-}(y)=\\{x\\in X:\\overline{A}(x)\\cap P_{k}(x,y)=\\emptyset \\}\\ $ is\nopen for each\\textit{\\ }$y\\in X.$\n\nAssumption d4) implies that $P_{k}(x,y)\\subseteq P_{k+1}(x,y)$\\textit{\\ }for\neach\\textit{\\ }$(x,y)\\in X\\times X.$\n\nAll conditions of Theorem 5 are verified, so that there exists $x^{\\ast }\\in \n$\\ $X$\\ such that $\\ x^{\\ast }\\in \\overline{A}(x^{\\ast })$ and $A(x^{\\ast\n})\\cap P(x^{\\ast },x^{\\ast })=\\emptyset .\\medskip $ Obvioulsy, $x^{\\ast }\n\\textit{\\ }is a solution for GSVQEP.\\medskip\n\\end{proof}\n\n\\section{Concluding remarks}\n\nThis paper developed a framework for discussing the existence of solutions\nfor a generalized strong vector quasi-equilibrium problem. The results have\nbeen obtained under assumptions which are different than the existing ones\nin literature. An approximation technique of proof has been developed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Sec-1}\n\n\\subsection{Background}\n\\label{Sec-1.1}\n\nMicroorganisms are ubiquitous in nature and manage critical ecosystem services, ranging from global nutrient cycling to human health. Microbes do not exist in a vacuum in nature, but instead are members of diverse communities known as microbiomes, wherein microbes may interact with other microbes of different species. These microbial interactions may be favorable or antagonistic and are crucial for the successful establishment and maintenance of a microbial community, and frequently result in important pathogenic or beneficial effect to the host or environment \\citep{braga2016microbial}. A thorough understanding of how microbe interact with one another is critical to uncovering the underlying role microorganisms play in the host or environment.\nHowever, it is a highly challenging biological task as it is estimated that only 1\\% of bacteria are cultivatable. The inability to culture the majority of microbial species has motivated the use of culture-independent methods for microbiome studies in different environments \\citep{faust2012microbial, tang2019microbial}. \n\nFortunately, recent innovations in in situ DNA sequencing provide opportunities to infer how microbes interact with one another or their environments. 
Modern studies on microbial interactions frequently rely on DNA sequencing techniques through the bioinformatic analysis of taxonomically diagnostic genetic markers (e.g., 16S rRNA) sequenced directly from a sample. The counts of the taxonomically diagnostic genetic markers can be used to represent the abundance of microbial species, e.g., Operational Taxonomic Units (OTUs) or phylotypes (e.g., genera), in a sample. Here, the frequency with which a taxon's marker is observed in a sequence library represents its relative abundance in the community. When such abundance data are available from many communities, interactions among microbiota can be inferred through \\textcolor{blue}{statistical} correlation analysis \\citep{faust2012microbial}. \\textcolor{blue}{For example}, if the relative abundances of two microbial taxa are statistically correlated, then it is inferred that they interact on some level. This approach has been used to document interaction networks in the healthy human microbiome \\citep{faust2012microbial}, as well as free-living microbial communities \\citep{freilich2010large}, and has been useful for generating hypotheses of host-microbiome interaction \\citep{morgan2015associations}.\n\n\\subsection{Methodological Innovations}\n\\label{Sec-1.2}\n\nDespite the great potential of microbiome interaction networks as a tool to advance microbiome research, the power of this approach has been limited by the availability of effective statistical methods and computationally efficient estimation techniques. Microbial abundance data possess a few important features that pose tremendous challenges to standard statistical tools. First, the data are represented as compositional counts of the 16S rRNA sequences because the total count of sequences per sample is predetermined by how deeply the sequencing is conducted, a concept named sequencing depth. The counts only carry information about the relative abundances of the taxa instead of their absolute abundances. Second, the sequencing depth is always finite and often varies considerably across samples in a microbiome dataset. Thus, the observed relative abundance of a taxon in a sample is only an estimator of its true relative abundance with the variance depending on sample-specific sequencing depth, causing the ``heteroscedasticity'' issue \\citep{mcmurdie2014waste}. Third, the data are high-dimensional in nature. It is likely that the number of taxa is far more than the number of samples in any biological experiment.\n\n\nWhen such abundance data are available, one common strategy to resolve the interactions among microbial taxa is to use correlation-type analyses. For example, after the sample correlations are calculated between \\textcolor{blue}{the relative abundances of} each pair of microbial taxa, a threshold is then applied such that an interaction is deemed present if the sample correlation exceeds the threshold. 
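As a minimal sketch of this thresholding strategy in R (the count matrix \texttt{x} and the cutoff of $0.3$ below are hypothetical choices for illustration, not a prescription from any particular method):

\begin{verbatim}
## x: an n-by-(K+1) matrix of observed counts for n samples
rel.abund <- x / rowSums(x)        # observed relative abundances
cor.mat   <- cor(rel.abund)        # pairwise sample correlations
adj.mat   <- abs(cor.mat) > 0.3    # edge if |correlation| > cutoff
diag(adj.mat) <- FALSE             # no self-interactions
\end{verbatim}

Simple as it is, this strategy operates on relative abundances and marginal correlations, which motivates the refinements discussed next.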
More \\textcolor{blue}{recently developed methods} have started to \\textcolor{blue}{account for} the compositional feature and aim to construct sparse networks for the absolute abundances instead of relative abundances, \\textcolor{blue}{including} SparCC \\citep{friedman2012inferring}, CCLasso \\citep{fang2015cclasso}, and REBACCA \\citep{ban2015investigating}, \\textcolor{blue}{among others.}\n\n\\textcolor{blue}{The above-mentioned network methods based on marginal correlations} could lead to spurious correlations that are caused by confounding factors such as other \\textcolor{blue}{species} in the same community. \\textcolor{blue}{To eliminate the detection of spurious correlations}, interactions among taxa can be modeled through their conditional dependencies given the other taxa. The Gaussian graphical model \\textcolor{blue}{is a classical approach to modeling} the conditional dependency, \\textcolor{blue}{in which the conditional dependency is determined by the nonzero entries of the inverse covariance matrix of a multivariate normal distribution used to model the data.} Graphical lasso \\citep{yuan2007model, banerjee2008model, friedman2008sparse} and neighborhood selection \\citep{meinshausen2006high} are two commonly used methods to estimate sparse inverse covariance matrix under the \\textcolor{blue}{high-dimensional} Gaussian graphical model. However, \\textcolor{blue}{when applied to microbiome abundance data, its multivariate normality assumption was violated by either the count or the compositional features of the data.}\n\nSeveral methods have been proposed to infer microbial conditional dependence networks based on Gaussian graphical models, such as SPIEC-EASI \\citep{kurtz2015sparse}, gCoda \\citep{fang2017gcoda}, CD-trace \\citep{yuan2019compositional}, and SPRING \\citep{yoon2019microbial}. In order to transform the discrete counts to continuous variables and to remove the compositionality constraint, all these methods take the centered log-ratio transformation \\citep{aitchison1986statistical} on the observed counts as their first step. However, the centered log-ratio transformation suffers from an undefined inverse covariance matrix of the transformed data. To partially address this issue, these methods impose a sparsity assumption on the inverse covariance matrix and adding an $L_1$-norm penalty to their objective functions. Additionally, these methods ignore the heteroscedasticity issue as they simply treat the observed relative abundances as the truth. Ignoring the heteroscedasticity issue could impact downstream analysis including constructing microbial interaction networks \\citep{mcmurdie2014waste}.\n\nIn this article, we provide a new statistical tool to help unleash the full potential of microbiome interaction networks as a research tool in the microbiome field. We adopt the logistic normal multinomial distribution to model the compositional count data \\citep{aitchison1986statistical, billheimer2001statistical, xia2013logistic}. Compared to previous methods, this model accounts for the heteroscedasticity issue as the sequencing depth is treated as the number of trials in the multinomial distribution. Additionally, the additive log-ratio transformation applied to the multinomial probabilities results in a well-defined inverse covariance matrix in contrast to the centered log-ratio transformation. Based on this model, we develop an efficient algorithm that iterates between Newton-Raphson and graphical lasso for estimating a sparse inverse covariance matrix. 
We call this new approach ``compositional graphical lasso''. We establish the theoretical convergence of the algorithm and illustrate the advantage of compositional graphical lasso in comparison to current methods under a variety of simulation scenarios. We further apply the developed method to the data from the Zebrafish Parasite Infection Study \citep{gaulke2019longitudinal} (see Section \ref{Sec-1.3}) to investigate how microbial interactions associate with parasite infection.

\subsection{Zebrafish Parasite Infection Study}
\label{Sec-1.3}

Helminth parasites represent a significant threat to the health of human and animal populations, and there is a growing need for tools to treat, diagnose, and prevent these infections. A growing body of evidence points to the gut microbiome as an agent that interacts with parasites to influence their success in the gut. To clarify how the gut microbiome varies in accordance with parasitic infection dynamics, the Zebrafish Parasite Infection Study, a recent effort conducted at Oregon State University \citep{gaulke2019longitudinal}, assessed the association between an intestinal helminth of zebrafish, \textit{Pseudocapillaria tomentosa}, and the gut microbiome of 210 4-month-old 5D line zebrafish. Among these fish, 105 were exposed to \textit{P.\ tomentosa} and the remaining 105 were unexposed controls. At each of the seven time points after exposure, a randomly selected group of 30 fish (15 exposed and 15 unexposed) were euthanized and fecal samples were collected. The parasite burden and tissue damage in \textit{P.\ tomentosa}-infected fish were also monitored over 12 weeks of infection.

Previous analyses \citep{gaulke2019longitudinal} of the Zebrafish Parasite Infection Study data have revealed that parasite exposure, burden, and intestinal lesions were correlated with gut microbial diversity. They also identified individual taxa whose abundance associated with parasite burden, suggesting that gut microbiota may influence \textit{P.\ tomentosa} success. Numerous associations between taxon abundance, burden, and gut pathologic changes were also observed, indicating that the magnitude of microbiome disruption during infection varies with infection severity. However, it remains unclear how parasite success may disrupt or be modulated by the microbial interactions in the gut. Understanding how microbial interactions associate with parasitic infection can help identify potential drugs or diagnostic tests for parasitic infection.

\subsection{Benchmarking and Novel Biological Discoveries}
\label{Sec-1.4}

To evaluate the performance of our method and to benchmark it against previously available tools, we take advantage of a unique data resource provided by the \textit{Tara} Oceans Project on ocean plankton. This data set is particularly suitable for method comparison because it includes an experimentally validated sub-network of the plankton interactome that has served as a gold standard for method benchmarking. Compared to other methods, compositional graphical lasso performs better in reconstructing the microbial interactions that are validated by the literature.
In addition, it performs the best in picking up the keystone taxa, which are those with an excessive number of interactions with other taxa in the literature, such as \textit{Amoebophyra} \citep{chambouvet2008control}, \textit{Blastodinium} \citep{skovgaard2012parasitic}, \textit{Phaeocystis} \citep{verity2007current}, and \textit{Syndinium} \citep{skovgaard2005phylogenetic}. All these genera are well-described keystone organisms in marine ecosystems. Finally, compositional graphical lasso affords an opportunity to resolve novel modulators of community composition. For example, \textit{Euduboscquella} \citep{bachvaroff2012molecular}, one of a few described genera within the syndinean dinoflagellates (an enigmatic lineage with abundant diversity in marine environmental clone libraries), was ranked high in the degree distribution by compositional graphical lasso only.

To investigate the role the gut microbiome plays in parasite infections, we apply compositional graphical lasso to the Zebrafish Parasite Infection Study data. {\color{blue}Interestingly, compositional graphical lasso identifies changes in interaction degree between infected and uninfected individuals for three taxa, \textit{Photobacterium}, \textit{Gemmobacter}, and \textit{Paucibacter}, which are inversely predicted by other methods. Further investigation of these method-specific changes in taxon interactions reveals their biological plausibility, and provides insight into their relevance in the context of parasite-linked changes in the zebrafish gut microbiome. In particular, based on our observations, we speculate on the potential pathobiotic roles of \textit{Photobacterium} and \textit{Gemmobacter} in the zebrafish gut, and the potential probiotic role of \textit{Paucibacter}. Future studies should seek to experimentally validate the ecological roles of \textit{Photobacterium}, \textit{Gemmobacter}, and \textit{Paucibacter} in the zebrafish gut, including their impacts on the rest of the microbial community and their roles in infection-induced tissue damage.}

\section{Compositional Graphical Lasso}
\label{Sec-2}

\subsection{Logistic Normal Multinomial Model}
\label{Sec-2.1}

Consider a microbiome abundance dataset with $n$ independent samples, each of which is composed of the observed counts of $K + 1$ taxa, denoted by $\mathbf{x}_i = (x_{i,1}, \ldots, x_{i,K+1})'$ for the $i$-th sample, $i = 1,\ldots,n$. Due to the compositional property of the data, the total count of all taxa for each sample $i$ is a fixed number, denoted by $M_i$. Naturally, a multinomial distribution is imposed on the observed counts:
\begin{equation} 
\mathbf{x}_i | \mathbf{p}_i \sim \text{Multinomial}(M_i; p_{i,1}, \ldots, p_{i,K+1}), \label{multinomial}
\end{equation}
where $\mathbf{p}_i = (p_{i,1}, \ldots, p_{i,K+1})'$ are the multinomial probabilities for all taxa and $\sum_{k=1}^{K+1} p_{i,k} = 1$.

To apply the additive log-ratio transformation \citep{aitchison1986statistical} on the multinomial probabilities, we choose one taxon, without loss of generality the $(K+1)$-th taxon, as a reference to which all the other taxa are compared. The transformed multinomial probabilities are given by
\begin{equation}
z_{i,k} = \log (\frac{p_{i, k}}{p_{i, K + 1}}),\ i = 1,\ldots,n, \ k = 1,\ldots,K. \label{log.ratio.transformation}
\end{equation}
Let $\mathbf{z}_i = (z_{i,1}, \ldots, z_{i,K})'$ for $i = 1,\ldots,n$, and further assume that they follow an i.i.d.
multivariate normal distribution\n\\begin{equation}\n\\mathbf{z}_1, \\ldots, \\mathbf{z}_n \\stackrel{i.i.d.}{\\sim} N (\\boldsymbol\\mu, \\boldsymbol\\Sigma), \\label{logistic.normal}\n\\end{equation}\nwhere $\\boldsymbol\\mu$ is the mean and $\\boldsymbol\\Sigma$ is the covariance matrix. Let $\\boldsymbol\\Omega = \\boldsymbol\\Sigma^{-1}$ be the inverse covariance matrix or the precision matrix.\n\nThe above model given in (\\ref{multinomial})--(\\ref{logistic.normal}) is often referred to as the logistic normal multinomial model. In this model, a multinomial distribution is imposed on the compositional counts, which is the distribution of the observed data given the multinomial probabilities. In addition, to capture the variation of the multinomial probabilities across samples, we impose a logistic normal distribution on the multinomial probabilities as a prior distribution. We thereby obtain as our final model the logistic normal multinomial model, which is a hierarchical model with two levels.\n\nThe logistic normal multinomial model has a long history in modeling compositional count data and it has also been applied to analyze microbiome abundance data. For example, \\citet{xia2013logistic} proposed a penalized regression under this model to identify a subset of covariates that are associated with the taxon composition. Our objective is different from \\citet{xia2013logistic} as we aim to reveal the microbial interaction network by finding a sparse estimator of the inverse covariance matrix $\\boldsymbol{\\Omega}$ in (\\ref{logistic.normal}). It is also noteworthy that \\cite{jiang2020microbial} has the same objective as ours. However, \\cite{jiang2020microbial} did not make full use of the logistic normal multinomial model as it focused on correcting the bias of a naive estimator of the $\\boldsymbol{\\Sigma}$ that does not require the logistic normal part of the model. By contrast, we aim to find an estimator of $\\boldsymbol{\\Omega}$ directly based on the logistic normal multinomial model.\n\n\n\\subsection{Objective Function}\n\\label{Sec-2.2}\n\n{\\color{blue}From the logistic normal multinomial model in (\\ref{multinomial})--(\\ref{logistic.normal}), we aim to derive an objective function of $\\boldsymbol{\\Omega}$ to estimate the microbial interaction network. To this end, we take a two-step procedure similar to SPIEC-EASI \\citep{kurtz2015sparse}. In the first step, we find estimated values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ based on the logistic normal multinomial model given $\\mu$ and $\\boldsymbol{\\Sigma}$; in the second step, we find an estimate of $\\boldsymbol{\\Omega}$ based on the estimated values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$.\n\nIn the first step, we consider the posterior distribution of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ given the data $\\mathbf{x}_1,\\ldots,\\mathbf{x}_n$ and find the maximum a posteriori (MAP) estimates of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$. 
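Before deriving the posterior, it may help to see the generative model (\ref{multinomial})--(\ref{logistic.normal}) in code. The following R sketch simulates data from the model; the values of $n$, $K$, $M_i$, $\boldsymbol\mu$, and $\boldsymbol\Sigma$ below are arbitrary choices for illustration:

\begin{verbatim}
## Sketch: simulate from the logistic normal multinomial model
library(MASS)
n <- 50; K <- 10; M <- rep(1000, n)        # samples, taxa, depths
mu <- rep(0, K); Sigma <- diag(K)          # normal parameters
z <- mvrnorm(n, mu, Sigma)                 # latent log-ratios
p <- cbind(exp(z), 1) / (rowSums(exp(z)) + 1)   # invert the transform
x <- t(sapply(1:n, function(i) rmultinom(1, M[i], p[i, ])))  # counts
\end{verbatim}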
For $i=1,\\ldots,n$, the logarithm of the posterior density function of $\\mathbf{z}_i$ given $\\mathbf{x}_i$ is\n\\begin{align*} \n& \\log[f_{\\boldsymbol{\\mu},\\boldsymbol{\\Omega}}(\\mathbf{z}_i | \\mathbf{x}_i)] \\propto \\log[f_{\\boldsymbol{\\mu},\\boldsymbol{\\Omega}}(\\mathbf{x}_i, \\mathbf{z}_i)] \\\\\n\\propto{}& \\sum_{k=1}^{K+1} x_{i,k} \\log p_{i,k} + \\frac12 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}) \\\\\n={} &\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1) + \\frac12 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}),\n\\end{align*}\nwhere $\\propto$ denotes that two quantities are equal up to a term not depending on $\\mathbf{z}_i$. In the above derivation, we ignored the marginal density function of $\\mathbf{x}_i$ that does not depend on $\\mathbf{z}_i$, which does not affect the estimation of $\\mathbf{z}_i$.}\n\nBy independence between all the samples, the logarithm of the posterior density function of $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$ given the data $(\\mathbf{x}_1,\\ldots,\\mathbf{x}_n)$ can be written as (again, ignoring a term independent of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$):\n\\begin{equation} \n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right] + \\frac{n}2 \\log[\\det(\\boldsymbol{\\Omega})] - \\frac12 \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}). \\label{objective.function.1}\n\\end{equation}\nGiven the values of the multivariate normal parameters $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Omega}$, one can maximize (\\ref{objective.function.1}) with respect to $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$. This leads to the MAP estimator $(\\hat\\mathbf{z}_1,\\ldots,\\hat\\mathbf{z}_n)$.\n\n{\\color{blue}In the second step, we find a sparse inverse covariance estimator of $\\boldsymbol{\\Omega}$ based on the estimate values $(\\hat\\mathbf{z}_1,\\ldots,\\hat\\mathbf{z}_n)$. Hereby, we use the graphical lasso estimator, which minimizes the $L_1$ penalized negative log-likelihood function as follows:\n\\begin{equation}\n-\\frac{1}2 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac1{2n} \\sum_{i=1}^n (\\hat\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\hat\\mathbf{z}_i - \\boldsymbol{\\mu}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1. \\label{objective.function.2}\n\\end{equation}\n\nIt turns out that the above two-step procedure is equivalent to minimizing an overall objective function with respect to both $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ and $(\\boldsymbol{\\mu},\\boldsymbol{\\Omega})$:\n\\begin{align}\n\\ell(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n,\\boldsymbol{\\mu},\\boldsymbol{\\Omega}) ={}& -\\frac1n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right] \\notag\\\\\n& -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1. 
\\label{objective.function}\n\\end{align}\nIn other words, $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$ are all treated as unknown parameters in the minimization of the objective function (\\ref{objective.function}).\n\nIt is noteworthy that similar ideas to the above two-step procedure have been used in existing microbial interaction network estimation methods. For example, SPIEC-EASI is also a two-step procedure. In its first step, the abundance counts are converted into their centered log-ratio transformed data; in its second step, either graphical lasso or neighborhood selection is applied to estimate a sparse inverse covariance matrix based on the centered log-ratio transformed data in the first step.\n\nAlthough both our method and SPIEC-EASI can be regarded as two-step procedures, we underline two important distinctions between them. First, our method is based on the additive log-ratio transformation and SPIEC-EASI uses the centered log-ratio transformation. As mentioned in the Introduction, the centered log-ratio transformation suffers from an undefined inverse covariance matrix of the transformed data while the additive log-ratio transformation does not. Second, in the first step, our method accounts for the sequencing depths [$M_i$'s in (\\ref{multinomial})] and further the uncertainty of the observed relative abundances, and thus addresses the heteroscedasticity issue. However, the heteroscedasticity issue is ignored in SPIEC-EASI.} \n\n\n\\subsection{Computational Algorithm}\n\\label{Sec-2.3}\n\nThe objective function (\\ref{objective.function}) naturally includes three sets of parameters $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$, which motivates us to apply a block coordinate descent algorithm. A block coordinate descent algorithm minimizes the objective function iteratively for each set of parameters given current values of the other sets. Given the initial values $(\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)})$, $\\boldsymbol{\\mu}^{(0)}$, and $\\boldsymbol{\\Omega}^{(0)}$, a block coordinate algorithm repeats the following steps cyclically for iteration $t= 0,1,2,\\ldots$ until the algorithm converges.\n\\begin{enumerate}[nosep]\n\\item Given $\\boldsymbol{\\mu}^{(t)}$ and $\\boldsymbol{\\Omega}^{(t)}$, find $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ that maximizes (\\ref{objective.function}).\n\\item Given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\Omega}^{(t)}$, find $\\boldsymbol{\\mu}^{(t+1)}$ that maximizes (\\ref{objective.function}).\n\\item Given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\mu}^{(t+1)}$, find $\\boldsymbol{\\Omega}^{(t+1)}$ that maximizes (\\ref{objective.function}).\n\\end{enumerate}\n\nBelow, we will present the details of this algorithm in each iteration. For the initial values $(\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)})$, we use their maximum likelihood estimators from the multinomial distribution, i.e.,\n\\[ {z}_{i,k}^{(0)} = \\log (\\frac{x_{i, k}}{x_{i, K + 1}}),\\ i = 1,\\ldots,n, \\ k = 1,\\ldots,K.\\] \nIf $x_{i, k} = 0$ for some $i$ and $k$, we add a small constant to it to evaluate the log ratio. 
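In code, this initialization step may look as follows (a sketch; the pseudocount of $0.5$ added to zero counts is an arbitrary choice):

\begin{verbatim}
## Sketch: initialize z^{(0)} by additive log-ratios of the counts,
## adding a pseudocount to zero entries before taking ratios
alr_init <- function(x, pseudo = 0.5) {
  x <- x + pseudo * (x == 0)
  log(x[, -ncol(x), drop = FALSE] / x[, ncol(x)])
}
z0 <- alr_init(x)
\end{verbatim}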
For the initial value $\\boldsymbol{\\mu}^{(0)}$, we have a closed form minimizer of $\\boldsymbol{\\mu}$ for (\\ref{objective.function}) given the values of $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, which is $\\boldsymbol{\\mu} = \\bar{\\mathbf{z}} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i$. Therefore, we set the initial value as $\\boldsymbol{\\mu}^{(0)} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i^{(0)}$. Finally, for the initial value $\\boldsymbol{\\Omega}^{(0)}$, we use the estimate of the graphical lasso algorithm taking the sample covariance matrix computed from $\\mathbf{z}_1^{(0)},\\ldots,\\mathbf{z}_n^{(0)}$ as input.\n\nIn step 1, given $\\boldsymbol{\\mu}^{(t)}$ and $\\boldsymbol{\\Omega}^{(t)}$, minimizing the objective function (\\ref{objective.function}) with respect to $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$ is equivalent to minimizing the following objective function with respect to each $\\mathbf{z}_i$ separately, for $i = 1,\\ldots,n$:\n\\begin{equation}\n\\ell_i^{(t)}(\\mathbf{z}_i) = \\frac12 (\\mathbf{z}_i - \\boldsymbol{\\mu}^{(t)})' \\boldsymbol{\\Omega}^{(t)} (\\mathbf{z}_i - \\boldsymbol{\\mu}^{(t)}) - \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right]. \\label{objective.function.zi}\n\\end{equation}\nThe above objective function is a smooth and convex function in $\\mathbf{z}_i$ with the following positive definite Hessian matrix\n\\begin{equation*}\n\\boldsymbol{\\Omega}^{(t)} + \\frac{M_i}{\\left(\\sum_{k=1}^K e^{z_{i,k}} + 1\\right)^2} \\left\\{\\left(\\sum_{k=1}^K e^{z_{i,k}} + 1\\right) \\text{diag} (e^{\\mathbf{z}_i}) - (e^{\\mathbf{z}_i})(e^{\\mathbf{z}_i})'\\right\\},\n\\end{equation*}\nwhere $e^{\\mathbf{z}_i} = (e^{z_{i,1}}, \\ldots, e^{z_{i,K}})'$ and $\\text{diag}(e^{\\mathbf{z}_i})$ is the diagonal matrix with the diagonal elements $e^{z_{i,1}}, \\ldots, e^{z_{i,K}}$. Therefore, we apply the Newton-Raphson algorithm to find the minimizer numerically. In addition, we implement a line search procedure in each Newton-Raphson iteration following the Armijo rule \\citep{armijo1966minimization}. This procedure ensures sufficient decrease in the objective function at each iteration to prevent possible divergence of the algorithm.\n\nStep 2 is similar to the initialization step, in which $\\boldsymbol{\\mu}$ has a closed-form solution and is updated as $\\bar{\\mathbf{z}}^{(t+1)} = \\frac1n \\sum_{i=1}^n \\mathbf{z}_i^{(t+1)}$ from the current numerical values of $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ that are computed from the Newton-Raphson algorithm in step 1.\n\nIn step 3, given $(\\mathbf{z}_1^{(t+1)},\\ldots,\\mathbf{z}_n^{(t+1)})$ and $\\boldsymbol{\\mu}^{(t+1)} = \\bar{\\mathbf{z}}^{(t+1)}$, the objective function for $\\boldsymbol{\\Omega}$ can be simplified as\n\\begin{align}\n\\ell^{(t)}(\\boldsymbol{\\Omega}) &= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i^{(t+1)} - \\boldsymbol{\\mu}^{(t+1)})' \\boldsymbol{\\Omega} (\\mathbf{z}_i^{(t+1)} - \\boldsymbol{\\mu}^{(t+1)}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1, \\notag \\\\\n&= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\frac{1}{2} \\mathrm{tr} (\\mathbf{S}^{(t+1)} \\boldsymbol{\\Omega}) + \\lambda \\|\\boldsymbol{\\Omega}\\|_1, \\label{objective.function.omega} \n\\end{align}\nwhere $\\mathbf{S}^{(t+1)} = \\frac{1}{n} \\sum_{i=1}^n (\\mathbf{z}_i^{(t+1)} - \\bar{\\mathbf{z}}^{(t+1)}) (\\mathbf{z}_i^{(t+1)} - \\bar{\\mathbf{z}}^{(t+1)})'$. 
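For concreteness, here is a short R sketch of the step 2 update together with the computation of $\mathbf{S}^{(t+1)}$ (the variable names are hypothetical; \texttt{z} denotes the $n\times K$ matrix whose rows are the current iterates $\mathbf{z}_i^{(t+1)}$):

\begin{verbatim}
mu.new <- colMeans(z)              # step 2: closed-form update of mu
zc     <- sweep(z, 2, mu.new)      # center each row at mu.new
S.new  <- crossprod(zc) / nrow(z)  # sample covariance S^{(t+1)}
\end{verbatim}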
It is obvious that minimizing the objective function (\\ref{objective.function.omega}) becomes a graphical lasso problem \\citep{yuan2007model, banerjee2008model, friedman2008sparse}. It is well known that the graphical lasso objective function is a convex function in $\\boldsymbol{\\Omega}$ \\citep{banerjee2008model} and efficient algorithms have been developed for its optimization \\citep{friedman2008sparse}. In this paper, we implement this step using the graphical lasso algorithm included in the \\texttt{huge} \\citep{zhao2012huge} package in R.\n\nThe above block coordinate descent algorithm iterates between Newton-Raphson and graphical lasso and is designed specifically to optimize the objective function (\\ref{objective.function}) for compositional count data. Therefore, we name this algorithm the compositional graphical lasso algorithm, and the entire approach the compositional graphical lasso method including both the model and the algorithm for the analysis of microbiome abundance data.\n\n\\subsection{Theoretical Convergence}\n\\label{Sec-2.4}\n\nUnfortunately, the objective function (\\ref{objective.function}) is not necessarily a convex function jointly in $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$. However, \nwe have shown that it is convex in each of the three subsets of its parameters. The convergence property of such an optimization problem has been studied in the literature. For example, \\cite{tseng2001convergence} studied the convergence property of a block coordinate descent method applied to minimize a nonconvex function with certain separability and regularity properties. We will establish the convergence property of the compositional graphical lasso algorithm following \\cite{tseng2001convergence}.\n\nRecall that our algorithm treats the three sets of parameters $(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n)$, $\\boldsymbol{\\mu}$, and $\\boldsymbol{\\Omega}$ as three blocks and optimizes for each block iteratively. In addition, as in \\cite{tseng2001convergence}, the objective function (\\ref{objective.function}) can be regarded as the sum of two parts, the first of which is an inseparable but differentiable function given by\n\\begin{equation}\n\\ell_0(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n,\\boldsymbol{\\mu},\\boldsymbol{\\Omega}) = \\frac{1}{2n} \\sum_{i=1}^n (\\mathbf{z}_i - \\boldsymbol{\\mu})' \\boldsymbol{\\Omega} (\\mathbf{z}_i - \\boldsymbol{\\mu}), \\label{objective.function.inseparable}\n\\end{equation}\nand the second of which is a sum of separable and differentiable functions given by $\\ell_1(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n) + \\ell_2(\\boldsymbol{\\Omega})$, where\n\\begin{align}\n\\ell_1(\\mathbf{z}_1,\\ldots,\\mathbf{z}_n) &= -\\frac1n\\sum_{i=1}^n \\left[\\sum_{k=1}^{K} x_{i,k} z_{i,k} - M_i \\log(\\sum_{k=1}^K e^{z_{i,k}} + 1)\\right], \\label{objective.function.separable.1}\\\\\n\\ell_2(\\boldsymbol{\\Omega}) &= -\\frac12 \\log[\\det(\\boldsymbol{\\Omega})] + \\lambda \\|\\boldsymbol{\\Omega}\\|_1. \\label{objective.function.separable.2}\n\\end{align} \n\\cite{tseng2001convergence} established the convergence property of a block coordinate descent algorithm under regularity conditions on $\\ell_0$, $\\ell_1$, and $\\ell_2$.\n\nTo present the main convergence property of the compositional graphical lasso algorithm, let's review the definition of a cluster point in real analysis. 
A cluster point of a set $\\mathcal{A} \\subset \\mathbb{R}^n$ is a real vector $\\mathbf{a} \\in \\mathbb{R}^n$ such that for every $\\delta > 0$, there exists a point $\\mathbf{x}$ in $\\mathcal{A}\\setminus \\{\\mathbf{a}\\}$ satisfying $\\|\\mathbf{x} - \\mathbf{a}\\|_2 < \\delta$. Obviously, any limit point of the set $\\mathcal{A}$ is a cluster point. Furthermore, define a cluster point of the compositional graphical lasso algorithm to be a cluster point of the set $\\{(\\mathbf{z}_1^{(t)},\\ldots,\\mathbf{z}_n^{(t)},\\boldsymbol{\\mu}^{(t)},\\boldsymbol{\\Omega}^{(t)}): t = 0,1,2,\\ldots\\}$, which collects the minimizers found at each iteration $t$. The following theorem presents a theoretical property of every cluster point of our algorithm.\n\n\\begin{thm} \\label{Thm-1}\nAny cluster point of the compositional graphical lasso algorithm is a stationary point of the objective function (\\ref{objective.function}).\n\\end{thm}\n\nThe proof of Theorem \\ref{Thm-1} can be found in the supplementary materials. This theorem guarantees that a cluster point, usually a limit point, of the compositional graphical lasso algorithm is at least a stationary point. It is noteworthy that there exists a global minimizer for the objective function in (\\ref{objective.function}) because we have proved the coerciveness of the objective function in the proof \\citep[Lemma 8.4]{calafiore2014optimization}. Therefore, to achieve global optimization in practice, one can run the algorithm multiple times starting with different initial values and choose the solution that yields the smallest objective function value.\n\nIn addition, the values of the objective function at each iteration, i.e., \\\\\n$\\{\\ell(\\mathbf{z}_1^{(t)},\\ldots,\\mathbf{z}_n^{(t)},\\boldsymbol{\\mu}^{(t)},\\boldsymbol{\\Omega}^{(t)}): t = 0,1,2,\\ldots\\}$, will always converge. This is because the objective function is bounded below (as $\\ell_0 + \\ell_2$ is bounded below as shown in the proof and $\\ell_1 > 0$ by definition) and our algorithm produces non-increasing objective function values from one iteration to the next. Therefore, the values of the objective function will always converge to a limit. In practice, we have always observed the numerical convergence of both the minimizers and the values of the objective function after a certain number of iterations.\n\n\\subsection{Tuning Parameter Selection}\n\\label{Sec-2.5}\n\nThere is a large body of literature on the selection of a tuning parameter in the variable selection framework. Common approaches can be broadly categorized into three types: criterion-based methods, prediction-based methods, and stability-based methods. Criterion-based methods such as the Akaike information criterion (AIC) \\citep{akaike1974new} and the Bayesian information criterion (BIC) \\citep{schwarz1978estimating} balance the model complexity and the goodness of fit; prediction-based methods such as cross validation \\citep{stone1974cross, geisser1975predictive} and generalized cross validation \\citep{golub1979generalized} aim to minimize the expected prediction error of the selected model on independent datasets. 
Stability-based methods such as stability selection \\citep{meinshausen2010stability} and the Stability Approach to Regularization Selection (StARS) \\citep{liu2010stability} select a model with high stability under subsampling or bootstrapping of the original data.\n\nIn this work, we apply StARS to select the tuning parameter $\\lambda$ in our objective function (\\ref{objective.function}). In StARS, we draw $N$ subsamples without replacement from the original dataset with $n$ observations, each of size $b$. For each value of the tuning parameter $\\lambda$, we obtain an estimate of $\\boldsymbol{\\Omega}$, i.e., a network, for each subsample. Then, we measure the total instability of these resultant networks across the $N$ subsamples. The total instability of these networks is defined by averaging the instability of each edge over all possible edges, where the instability of an edge is estimated as twice the sample variance of the Bernoulli indicator of whether this edge is selected in each of the $N$ subsamples.\n\nStarting from a large penalty, which corresponds to the empty network, the instability of the networks increases as $\\lambda$ decreases. StARS stops and selects the tuning parameter as the smallest value of $\\lambda$ for which the instability of the resultant networks is still less than a threshold $\\beta > 0$. In principle, StARS selects the tuning parameter so that the resultant network is the densest among the networks whose total instability is below the threshold $\\beta$, without violating the underlying sparsity assumption. The selected network is the ``densest on the sparse side,\" as the procedure starts with the empty network and stops when the instability first crosses the threshold.\n\n\\section{Simulation Studies}\n\\label{Sec-3}\n\n\\subsection{Simulation Settings}\n\\label{Sec-3.1}\n\nTo evaluate the performance of compositional graphical lasso, we conduct \\textcolor{blue}{simulation studies} and compare it with other network estimation methods.\n\nGiven that our goal is to estimate the true network, i.e., $\\boldsymbol{\\Omega}$ in (\\ref{logistic.normal}), we consider the following three types of true precision matrices $\\boldsymbol{\\Omega} = (\\omega_{kl})_{1\\le k,l \\le K}$, which differ in the pattern of edge distribution as well as in the degree of connectedness.\n\n\\begin{enumerate}[nosep]\n\\item Chain network: $\\omega_{kk} = 1.5$, $\\omega_{kl} = 0.5$ if $|k - l| = 1$, and $\\omega_{kl} = 0$ if $|k - l| > 1$. A node is designed to be connected to its adjacent nodes, and the connectedness of nodes is balanced.\n\\item Random network: $\\omega_{kl} = 1$ with probability $3\/K$ for $k \\ne l$. A node is connected to all other nodes randomly with a fixed probability, set to be $3\/K$. Similar to the chain structure, the connectedness of nodes is balanced.\n\\item Hub network: All nodes are randomly split into $\\lceil K \/ 20 \\rceil$ disjoint groups, and a hub node $k$ is selected from each group. For any other node $l$ in the same group, $\\omega_{kl} = 1$. All the remaining entries of $\\boldsymbol{\\Omega}$ are zero. Here, the nodes are partitioned into groups at random, but each node is then connected to the hub node of its group deterministically. 
The degree of connectedness among nodes is extremely unbalanced in this case: each hub node is connected to all the other nodes in its group (around 20 nodes), while every other node is connected only to the hub node of its group, i.e., to just one node.\n\\end{enumerate}\n\nIn addition to the true network, we also consider two other factors that are expected to influence the result. The first factor is the sequencing depth, $M_i$, in the multinomial distribution (\\ref{multinomial}). \\textcolor{blue}{We simulate $M_i$ from uniform distributions, the details of which will be discussed in the following subsections.} The second factor is the degree of variation in the logistic normal distribution (\\ref{logistic.normal}). For each of the three types of precision matrices, we consider an additional factor by multiplying $\\boldsymbol{\\Omega}$ by a positive constant $c$ so that the true precision matrix is $c\\boldsymbol{\\Omega}$. We choose $c = 1$ and $c = 1\/5$ separately and call the two settings low and high compositional variation, respectively.\n\nThe data are simulated following the logistic normal multinomial model in (\\ref{multinomial})--(\\ref{logistic.normal}). We first simulate $\\mathbf{z}_i \\sim N (\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ independently for $i = 1,\\ldots,n$; then, we perform the inverse log-ratio transformation (also known as the softmax transformation, the inverse transformation of (\\ref{log.ratio.transformation})) to obtain the multinomial probabilities $\\mathbf{p}_i$ for $i = 1,\\ldots,n$; lastly, we simulate multinomial counts $\\mathbf{x}_i$ from a multinomial distribution with sequencing depth $M_i$ and probabilities $\\mathbf{p}_i$. Throughout this simulation study, we fix $n = 100$ and $K = 200$.\n\nThe simulation results are based on 100 replicates. For each replicate, we apply compositional graphical lasso, neighborhood selection, and graphical lasso separately to obtain a sparse estimator of $\\boldsymbol{\\Omega}$. For neighborhood selection and graphical lasso, we first obtain an estimate of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ from the multinomial distribution via the additive log-ratio transformation\n\\begin{equation}\n\\tilde{z}_{i,k} = \\log \\left(\\frac{x_{i, k}}{x_{i, K + 1}}\\right),\\ i = 1,\\ldots,n, \\ k = 1,\\ldots,K, \\label{z.surrogates}\n\\end{equation}\nand then apply neighborhood selection and graphical lasso directly on the estimates $\\tilde{\\mathbf{z}}_1,\\ldots,\\tilde{\\mathbf{z}}_n$ by treating them as surrogates for their true counterparts, i.e., $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$. These methods are almost identical to SPIEC-EASI, although we replaced the centered log-ratio transformation in SPIEC-EASI with the additive log-ratio transformation for a fair comparison.\n\nTo compare the performance of the three methods in terms of network recovery, all three methods are applied with a sequence of tuning parameter values, and their true positive rates (TPR) and false positive rates (FPR) in terms of edge selection are recorded for each value of $\\lambda$. An ROC curve is plotted from the average TPR and the average FPR over the 100 replicates for each of the tuning parameters.\n\nIn addition, we apply StARS to select an optimal tuning parameter $\\lambda$. Following the recommendation in \\citet{liu2010stability}, we set the threshold for the total instability to be $\\beta = 0.05$, the size of each subsample $b = 7 \\sqrt{n}$, and the number of subsamples $N = 50$. 
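\n\nFor concreteness, the edge-instability computation at a fixed $\\lambda$ can be sketched in a few lines of Python; the function name is ours, and the input is assumed to be the $N$ boolean adjacency matrices (one per subsample) obtained from the supports of the estimated precision matrices.\n\\begin{verbatim}\nimport numpy as np\n\ndef total_instability(adjacency_list):\n    # adjacency_list: N boolean (K x K) adjacency matrices estimated on\n    # the N subsamples at a common value of the tuning parameter.\n    theta = np.mean(np.stack(adjacency_list), axis=0)  # selection frequency\n    xi = 2.0 * theta * (1.0 - theta)  # twice the Bernoulli sample variance\n    upper = np.triu_indices(theta.shape[0], k=1)\n    return xi[upper].mean()           # average over all possible edges\n\\end{verbatim}\nThe tuning parameter is then selected as the smallest $\\lambda$ for which this total instability remains below $\\beta$, as described above.\n\n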
Once the optimal tuning parameter is determined by StARS, we fit the whole dataset with the selected tuning parameter and evaluate the resultant network using three criteria: precision, recall, and F1 score, which are defined as\n\\begin{equation*}\n\\text{Precision} = \\frac{\\text{TP}}{\\text{TP} + \\text{FP}},\\quad \\text{Recall} = \\frac{\\text{TP}}{\\text{TP} + \\text{FN}}, \\quad \\text{F1} = \\frac{2 \\times \\text{Precision} \\times \\text{Recall}}{\\text{Precision} + \\text{Recall}},\n\\end{equation*}\nwhere TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively. \n\n\\subsection{\\textcolor{blue}{Simulation Results for Dense Data}}\n\\label{Sec-3.2}\n\n\\textcolor{blue}{In this subsection, we evaluate the performance of compositional graphical lasso on dense data, in which most of the simulated counts are nonzero. This corresponds to a simulation setting where the sequencing depths $M_i$ are large relative to the number of taxa $K + 1$. Still, to evaluate the effect of the sequencing depth on the performance of network estimation methods, we simulate $M_i$ from two uniform distributions, Uniform$(20K, 40K)$ and Uniform$(100K, 200K)$, and call the two settings low and high sequencing depth in this subsection, respectively.}\n\nFigure \\ref{ROC} presents the ROC curves for compositional graphical lasso (Comp-gLASSO), neighborhood selection (MB), and graphical lasso (gLASSO), from which we can see that compositional graphical lasso dominates its competitors in terms of edge selection in all settings. In particular, the advantage of compositional graphical lasso over neighborhood selection and graphical lasso is most obvious when the compositional variation is high and the sequencing depth is low, no matter which type of network structure is considered. In contrast, the three methods perform very similarly for all types of network structures when the compositional variation is low and the sequencing depth is high. The difference between compositional graphical lasso and the rest is intermediate for the other two settings, when both compositional variation and sequencing depth are high or when both are low. Graphical lasso and neighborhood selection tend to perform similarly to each other, although graphical lasso seems to outperform neighborhood selection by a small margin in some settings.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{ROC_MB.pdf}\n\\caption{ROC curves for compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). Solid blue: Comp-gLASSO; dashed red: gLASSO; dotted black: MB. $\\mathbf{h\/l, h\/l}$: high\/low sequencing depth, high\/low compositional variation.}\n\\label{ROC}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{bar_graph_MB.pdf}\n\\caption{Recall, precision and F1 score for the network selected by StARS for compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). Red (left): Comp-gLASSO; green (middle): gLASSO; blue (right): MB. $\\mathbf{h\/l, h\/l}$: high\/low sequencing depth, high\/low compositional variation.}\n\\label{Recall.Precision.F1}\n\\end{figure}\n\nThe above observations agree with our expectation about how the two factors, compositional variation and sequencing depth, affect the comparison between the methods. 
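\n\nAs a reference for the surrogate-based competitors recalled in the next paragraph, the generative pipeline of Section \\ref{Sec-3.1} and the surrogates in (\\ref{z.surrogates}) can be sketched as follows; the function names are ours, and the sketch assumes strictly positive counts so that the log-ratios are well defined.\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(0)\n\ndef simulate_counts(n, mu, Sigma, M):\n    # z_i ~ N(mu, Sigma); the inverse additive log-ratio (softmax with a\n    # reference category) gives p_i; counts are multinomial with depth M_i.\n    Z = rng.multivariate_normal(mu, Sigma, size=n)\n    E = np.exp(Z)\n    P = np.hstack([E, np.ones((n, 1))]) \/ (1.0 + E.sum(1, keepdims=True))\n    return np.stack([rng.multinomial(M[i], P[i]) for i in range(n)])\n\ndef alr_surrogates(X):\n    # z_tilde_{i,k} = log(x_{i,k} \/ x_{i,K+1}), treating the last\n    # category as the reference taxon.\n    return np.log(X[:, :-1] \/ X[:, [-1]])\n\\end{verbatim}\n\n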
Recall that neighborhood selection and graphical lasso replace the true values of $\\mathbf{z}_1,\\ldots,\\mathbf{z}_n$ by their estimates\/surrogates $\\tilde{\\mathbf{z}}_1,\\ldots,\\tilde{\\mathbf{z}}_n$ as in (\\ref{z.surrogates}) without taking into account the estimation accuracy or uncertainty of these surrogates. First, a higher sequencing depth leads to more accurate surrogates $\\tilde{\\mathbf{z}}_1,\\ldots,\\tilde{\\mathbf{z}}_n$; therefore, it is not surprising to see that the three methods perform more similarly when the sequencing depth is high. Second, a higher compositional variation results in a higher variation in the $\\mathbf{z}_i$'s, and further in the multinomial probabilities $\\mathbf{p}_i$'s. Since neighborhood selection and graphical lasso ignore the multinomial component in the model, it is also not surprising to see that their performance deteriorates under a high compositional variation.\n\nFigure \\ref{Recall.Precision.F1} presents the recall, precision, and F1 score from 50 replicates of the estimated network resulting from the tuning parameter selected by StARS. The first observation is that the precisions of both compositional graphical lasso and graphical lasso are much worse than their recalls, whereas the precisions and recalls are more comparable for neighborhood selection. Interestingly, StARS results in a much sparser network for neighborhood selection than for the other methods under the same stability threshold, suggesting that fewer edges selected by neighborhood selection are stable enough (within the instability threshold). When it comes to method comparison, compositional graphical lasso has much higher recall than neighborhood selection in most settings, but has comparable or lower precision in most of the settings with high sequencing depth. The network from compositional graphical lasso has a higher F1 score than the ones from neighborhood selection in most settings, except when sequencing depth is high and compositional variation is low for chain and hub networks. In addition, the network from compositional graphical lasso has precision, recall, and F1 scores that are higher than or comparable to those from graphical lasso in all settings. Similar to the observations from the ROC curves, the advantage of compositional graphical lasso is more obvious with a low sequencing depth or a high compositional variation.\n\n\\subsection{\\textcolor{blue}{Simulation Results for Sparse Data}}\n\\label{Sec-3.3}\n\n\\textcolor{blue}{In this subsection, we evaluate the performance of compositional graphical lasso on sparse data, in which a range of proportions of the compositional counts are simulated as zero. We only present the simulation results for the chain network, given that the simulation results for the other network types are similar. To examine how our method performs for different sparsity levels, we simulate $M_i$ from four uniform distributions, Uniform$(8K, 16K)$, Uniform$(4K, 8K)$, Uniform$(2K, 4K)$, and Uniform$(K, 2K)$. Note that the sparsity level of the simulated data also depends on the compositional variation (see Section \\ref{Sec-3.1}). In a typical simulated dataset with high compositional variation, the sparsity level is around 40\\%, 50\\%, 60\\%, and 70\\% with the four uniform distributions for $M_i$; in a typical simulated dataset with low compositional variation, the sparsity level is around 5\\%, 10\\%, 25\\%, and 40\\% with the four uniform distributions for $M_i$. 
In both cases, we refer to these four sparsity levels as 1, 2, 3, and 4, with 1 the least sparse and 4 the most sparse. Due to the limited space, we place the figures resulting from this simulation in the supplementary materials.}\n\n\\textcolor{blue}{Figure \\ref{ROC_MB_Sparse} presents the ROC curves for compositional graphical lasso (Comp-gLASSO), neighborhood selection (MB), and graphical lasso (gLASSO). Similar to Figure \\ref{ROC}, we can see that compositional graphical lasso dominates its competitors in terms of edge selection across all sparsity levels. As the sequencing depths for sparse data are relatively small compared to the ones used for dense data, the advantage of compositional graphical lasso over neighborhood selection and graphical lasso is clearly and consistently observed in all simulation settings. In addition, graphical lasso and neighborhood selection tend to perform similarly to each other, although graphical lasso seems to outperform neighborhood selection by a small margin. This result is consistent with that observed for dense data in Section \\ref{Sec-3.2}.}\n\n\\textcolor{blue}{Figure \\ref{Recall.Precision.F1.Sparse} presents the recall, precision, and F1 score from 50 replicates of the estimated network resulting from the tuning parameter selected by StARS. The first observation, from comparing Figure \\ref{Recall.Precision.F1.Sparse} with Figure \\ref{Recall.Precision.F1}, is that all methods perform worse for sparse data than for dense data; moreover, all methods perform worse as the sparsity level increases. Similar to Figure \\ref{Recall.Precision.F1}, Figure \\ref{Recall.Precision.F1.Sparse} implies that compositional graphical lasso outperforms graphical lasso and neighborhood selection, with a higher recall, precision, and F1 score consistently observed in most settings. The only exception is that, when data are extremely sparse with a low compositional variation, graphical lasso seems to outperform compositional graphical lasso by a very small margin, although neither method works well in this setting. For sparse data, StARS results in an almost empty network for neighborhood selection, resulting in zero recall, precision, and F1 score for most settings. This agrees with our observation from the simulation results for dense data that neighborhood selection tends to produce a much sparser network than compositional graphical lasso and graphical lasso when its tuning parameter is selected by StARS.}\n\n\\section{Real Data}\n\\label{Sec-4}\n\n\\subsection{Benchmark Study: \\textit{Tara} Oceans Project}\n\\label{Sec-4.1}\nTo better understand the ocean, the largest ecosystem on Earth, the \\textit{Tara} Oceans Project aims to build the global ocean interactome that can be used to predict the dynamics and structure of ocean ecosystems. To achieve this, the \\textit{Tara} Oceans Consortium sampled both plankton and environmental data at 210 sites from the world's oceans using the 110-foot research schooner \\textit{Tara} during the \\textit{Tara} Oceans Expedition (2009-2013). The data collected were later processed using sequencing and imaging techniques. One unique advantage of the \\textit{Tara} Oceans Project is that it has generated a list of 91 genus-level marine planktonic interactions that have been validated in the literature \\citep{lima2015determinants}. 
Though this list only comprises interactions between microbes that represent a small fraction of the total marine eukaryotic diversity and is therefore far from complete, it could serve as partial ground truth for us to evaluate the interactions identified by different methods. Thus, our major goal is to use the \\textit{Tara} Oceans Project as a benchmark study in order to compare the performance of different methods in constructing the planktonic interactions.\n\nAs the partial ground truth is a list of genus-level interactions, we choose to analyze the genus-level abundance data, which are aggregated from the original OTU abundance data downloadable from the \\textit{Tara} Oceans Project data repository (\\url{http:\/\/taraoceans.sb-roscoff.fr\/EukDiv\/}). As a benchmark study, we include in our analysis only the 81 genera that are involved in the list of gold-standard interactions. In addition, we discard the samples with too few sequence reads (fewer than 100), leaving 324 samples in our analysis.\n\nSimilar to the simulation study, we apply compositional graphical lasso, graphical lasso, and neighborhood selection to estimate the interaction network among the 81 genera. We first pick the genus \\textit{Acrosphaera}, which has the largest average relative abundance among those genera not involved in the gold-standard list, and use this genus as the reference taxon for all three methods. Then, we apply each method with a sequence of 70 decreasing tuning parameter values, resulting in a sequence of interaction networks starting from an empty network. Finally, we apply StARS to find the optimal tuning parameter, where the parameters $\\beta$, $b$, and $N$ are set as in the simulation study.\n\n\\begin{figure}[htbp]\n\\centering\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{TARA_path_half.pdf}\n \\caption{}\n \\label{tara_path}\n\\end{subfigure}%\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{TARA_degree_half.pdf}\n \\caption{}\n \\label{tara_degree}\n\\end{subfigure}\n\\caption{(a): Number of identified literature interactions versus number of edges of the estimated network from the \\textit{Tara} dataset. (b): The degree distribution of vertices from the networks selected by StARS. Solid red: compositional graphical lasso; dashed green: graphical lasso; dashed dotted blue: neighborhood selection.}\n\\label{network.P}\n\\end{figure}\n\nFirst, we compare the three methods in terms of how highly they rank the literature validated interactions among their top reported edges. Specifically, we start with a large value of the tuning parameter that results in an empty network, then decrease the tuning parameter so that the network becomes denser, and stop when the network has about 200 edges (out of a total of 3240 possible edges). At each value of the tuning parameter, we plot the number of literature validated interactions included in the network versus the total number of edges of the network, resulting in a step function for each method (Figure \\ref{tara_path}). From Figure \\ref{tara_path}, we observe that compositional graphical lasso identifies slightly more literature validated interactions than graphical lasso until the total number of edges reaches 175, after which graphical lasso identifies one more literature validated interaction. Neighborhood selection selects far fewer literature validated interactions than either compositional graphical lasso or graphical lasso. 
These observations imply that compositional graphical lasso slightly outperforms graphical lasso in reconstructing the literature validated interactions, while its advantage over neighborhood selection is much more obvious.\n\nSecond, we compare the overall topologies of the three interaction networks with tuning parameters selected by StARS. We find that compositional graphical lasso, graphical lasso, and neighborhood selection identify 749, 921, and 190 edges, respectively, with the same instability threshold used in StARS. This agrees with our observation in the simulation study that the network from neighborhood selection is much sparser than those from compositional graphical lasso and graphical lasso. The degree distributions from the networks estimated by the three methods are shown in Figure \\ref{tara_degree}.\nThe centers of the three degree distributions are ranked as neighborhood selection, compositional graphical lasso, and graphical lasso in ascending order, which is also reflected in the densities of the three interaction networks.\n\nThird, we investigate the high-degree nodes, i.e., hub genera, in the interaction networks. High-degree nodes are often thought to represent taxa that elicit an effect on a large number of other members of their community, such as keystone taxa and generalistic parasites. It is observed that there are a few hub genera that have an excessive number of interactions with other genera in the literature, such as \\textit{Amoebophrya}, \\textit{Blastodinium}, and \\textit{Parvilucifera}, which are referred to as benchmark hub genera. Although the literature validated interactions are rather incomplete, it is still of interest to evaluate how well the three methods pick up those benchmark hub genera. Since the densities of the networks from the three methods are rather different, it is hard to compare the degrees of the hub genera from the three networks directly, but it is reasonable to compare the ranks of those degrees within each degree distribution. The method that assigns lower ranks (with the degrees of genera ranked in descending order) to those hubs in its degree distribution is believed to pick up the benchmark hub genera better. \n\nA list of the 7 benchmark hubs (each with degree $\\geq 5$) along with their degrees from the incomplete network constructed from the literature is shown in Table \\ref{tab:degree rank of lit hubs}, followed by the corresponding ranks of those genera in the degree distributions from each of the three methods and their corresponding degrees in parentheses. We can see that compositional graphical lasso generates lower ranks than graphical lasso for all 7 genera, while neighborhood selection generates lower ranks than compositional graphical lasso for 3 genera, and the opposite for the other 4 genera. 
Overall, compositional graphical lasso performs the best among the three methods in picking up the benchmark hub genera.\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{lllll}\n \\hline\n & Literature & Comp-gLASSO & gLASSO & MB \\\\ \n \\hline\n \\textit{Amoebophrya} & 1 (21) & 31 (19) & 47 (21) & 2 (9) \\\\ \n \\textit{Blastodinium} & 2 (12) & 13 (23) & 26 (24) & 30 (5) \\\\ \n \\textit{Parvilucifera} & 2 (12) & 46 (17) & 67 (19) & 58 (3) \\\\ \n \\textit{Syndinium} & 4 (7) & 14 (23) & 27 (24) & 19 (6) \\\\ \n \\textit{Vampyrophrya} & 4 (7) & 34 (19) & 60 (20) & 6 (8) \\\\ \n \\textit{Phaeocystis} & 6 (6) & 1 (31) & 3 (29) & 17 (6) \\\\ \n \\textit{Pirsonia} & 7 (5) & 64 (15) & 68 (19) & 33 (5) \\\\\n \\hline\n\\end{tabular}\n\\caption{\\label{tab:degree rank of lit hubs} For the hub genera, their ranks in each degree distribution (in descending order) from the literature, compositional graphical lasso (Comp-gLASSO), graphical lasso (gLASSO) and neighborhood selection (MB). The numbers in parentheses are the corresponding degrees of the genera.}\n\\end{table}\n\nWe find that compositional graphical lasso predicts a high degree for genera that are known to act as keystone species or parasitize other taxa, many of which are also high-degree in the literature validated network. For example, \\textit{Phaeocystis} elicits a high degree in both the literature validated and compositional graphical lasso networks. This genus is a well-described keystone organism in some marine ecosystems \\citep{verity2007current}, where it causes large phytoplankton blooms and plays an important role in the global cycling of carbon and sulfur \\citep{verity1996organism}. Similarly, these two networks predict a high degree for taxa that are known parasites, such as \\textit{Blastodinium} \\citep{skovgaard2012parasitic}, \\textit{Amoebophrya} \\citep{chambouvet2008control}, and \\textit{Syndinium} \\citep{skovgaard2005phylogenetic}. In some instances, compositional graphical lasso uniquely reveals high-degree genera even when compared to the literature validated network, such as in the case of the parasite \\textit{Euduboscquella} \\citep{bachvaroff2012molecular}, which is ranked 5th by compositional graphical lasso but has only 1 interaction in the literature. Given that the literature validated network only captures a subset of interactions that exist in nature (i.e., those interactions that have been explicitly tested), we posit that compositional graphical lasso affords an opportunity to resolve novel modulators of community composition, such as keystone taxa and generalistic parasites, that follow-up experiments can validate.\n\n\\subsection{Zebrafish Parasite Infection Study}\n\\label{Sec-4.2}\nThe Zebrafish Parasite Infection Study, recently conducted at Oregon State University, used a zebrafish helminth intestinal infection model to resolve how the gut microbiome and parasite burden covary during infection \\citep{gaulke2019longitudinal}. This study quantified the infection burden of an intestinal helminth of zebrafish, \\textit{Pseudocapillaria tomentosa}, and the gut microbiome in 210 4-month-old zebrafish. Half of these fish were exposed to the parasite, \\textit{P.\\ tomentosa}, and the other half were unexposed. Given that not all exposed fish would ultimately be infected, parasite burden was more accurately measured as the total number of worms in their fecal samples. 
In addition, the same fecal samples were used to measure the abundance of the gut microbiome species via 16S rRNA sequencing, which resulted in gut microbiome data for 207 fish. Among these fish, 81 were infected after being exposed to \\textit{P.\\ tomentosa} and the other 126 fish were not infected, as indicated by the total number of worms. Our major goal is to evaluate how the microbial interaction network associates with successful parasite infection in the gut.\n\nSimilar to the \\textit{Tara} Oceans Project, we choose to analyze the genus-level abundance data. We discard those genera that have a nonzero abundance in fewer than 5\\% of the samples, leaving 42 genera in our analysis. In addition, we follow the same strategy as in \\citet{jiang2020microbial} to define a reference genus, i.e., combining those OTUs that do not have a genus-level taxonomic classification into a pseudo-genus and using it as the reference genus. The analysis is conducted in the same way as in the \\textit{Tara} Oceans Project, except that the methods (compositional graphical lasso, graphical lasso, and neighborhood selection) are applied separately for the uninfected and the infected fish, resulting in three interaction networks for each group of fish. Analyzing the uninfected and infected fish separately allows us to compare the microbial interaction networks between the two groups.\n\nFirst, we compare the overall topologies of the interaction networks with tuning parameters selected by StARS. We find that compositional graphical lasso, graphical lasso, and neighborhood selection identify 262, 312, and 78 edges, respectively, for uninfected fish, and 241, 259, and 60 edges, respectively, for infected fish. Comparing the two groups of fish, the interaction networks for the uninfected fish are slightly denser than those for the infected fish. The comparison among the three methods agrees with our observation in the simulation study and in the \\textit{Tara} Oceans Project that the network from neighborhood selection tends to be much sparser than those from compositional graphical lasso and graphical lasso. The degree distributions based on the three methods are shown in Figure \\ref{fish_degree}, which again suggests high similarity between the two groups of fish. Similar to the \\textit{Tara} Oceans Project, neighborhood selection results in the lowest median degree, followed by compositional graphical lasso and graphical lasso, regardless of whether the fish are infected or not.\n\n\\begin{figure}[htbp]\n\\centering\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{not_infected_degree.pdf}\n \\caption{}\n \\label{not_infected_degree}\n\\end{subfigure}%\n\\begin{subfigure}{.4\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{infected_degree.pdf}\n \\caption{}\n \\label{infected_degree}\n\\end{subfigure}\n\\caption{(a): The degree distribution of vertices from the networks selected by StARS for uninfected fish. (b): The degree distribution of vertices from the networks selected by StARS for infected fish. Solid red: compositional graphical lasso; dashed green: graphical lasso; dashed dotted blue: neighborhood selection.}\n\\label{fish_degree}\n\\end{figure}\n\nSecond, we further investigate the degree of each node in the interaction networks. This analysis helps identify those high-degree nodes, i.e., hub genera, which are indicative of keystone taxa in the microbial community. 
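\n\nA minimal sketch of this degree analysis is given below; the function name is ours, and the estimated network is assumed to be encoded by the support of the estimated precision matrix.\n\\begin{verbatim}\nimport numpy as np\n\ndef degree_ranking(Omega_hat, labels, tol=1e-8):\n    # An edge (k, l) is present when the off-diagonal entry omega_kl of\n    # the estimated precision matrix is numerically nonzero.\n    A = np.abs(Omega_hat) > tol\n    np.fill_diagonal(A, False)\n    deg = A.sum(axis=1)\n    order = np.argsort(-deg)       # hub genera first (descending degree)\n    return [(labels[j], int(deg[j])) for j in order]\n\\end{verbatim}\n\n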
{\\color{blue}Due to the limited space, we refer to Table \\ref{fish_hubs} in the supplementary materials, which presents all 42 genera with their degrees in each network in descending order.} Although the networks are similar in density between the two groups of fish, we have observed a substantial difference between the degrees of individual nodes. To see this, we sort the nodes in each network based on their degrees and compare the top ten nodes between networks. Taking compositional graphical lasso as an example, only three out of the top ten nodes overlap between uninfected and infected fish: \\textit{Paucibacter}, \\textit{Photobacterium}, and \\textit{Fusobacterium}. This suggests that the network structures of the uninfected and infected fish are more distinct than their overall topologies might suggest. The hub genera identified by different methods are quite different from each other too. For the group of uninfected fish, only three out of the top ten nodes overlap between the networks from the three methods: \\textit{Aeromonas}, \\textit{Photobacterium}, and \\textit{Rheinheimera}; similarly, for the group of infected fish, only three out of the top ten nodes overlap between the networks from the three methods: \\textit{Fusobacterium}, \\textit{Paucibacter}, and \\textit{Yersinia}.\n\n{\\color{blue}Third, we compare the networks generated from uninfected and infected fish to clarify the relationship between infection and the interactions that exist in the gut microbiome, motivated by the hypothesis that intestinal dysbiosis is defined not only by changes in community composition, but also by how members of the community ecologically interact. In order to understand this relationship, we identify microbes whose interaction set changes between infected and uninfected hosts, and then compare these results across methods. One of the bacterial taxa that compositional graphical lasso identifies as increasing in interaction degree among infected individuals, while other methods estimate a decrease in detected interaction degree, is the genus \\textit{Gemmobacter}. Our earlier work \\citep{gaulke2019longitudinal} showed that the abundance of this taxon positively links to parasite exposure, indicating that it is more ecologically successful in infected versus uninfected intestines. The analysis from compositional graphical lasso indicates that this increase in \\textit{Gemmobacter} relative abundance in infected individuals is coincident with an increase in the ecological interactions between \\textit{Gemmobacter} and other members of the gut microbiome, as it interacts with a greater number of other taxa in infected individuals. While \\textit{Gemmobacter} is not a particularly well-studied genus, prior work links it to dysbiotic diseases in a variety of hosts \\citep{bates2022microbiome, ni2020gut}. Our observation of an increase in its interaction with other taxa in infected individuals strengthens the hypothesis that \\textit{Gemmobacter} may act as an agent of dysbiosis, as the increase in its relative abundance not only links to a disease context, but also accompanies a more integrated role in the microbial community (i.e., it potentially impacts a larger number of taxa in the gut).\n\nThe results from compositional graphical lasso also uniquely suggest that the \\textit{Paucibacter} genus in uninfected individuals displays a higher number of identified interactions relative to infected individuals. 
The other methods we evaluated find that \\textit{Paucibacter}'s interactions are relatively consistent between infected and uninfected individuals. The results from compositional graphical lasso are notable because the genus \\textit{Paucibacter} includes clades that have been identified as being conserved in the healthy zebrafish gut microbiome \\citep{sharpton2021phylogenetic}, presumably because these taxa are critical to the commensal microbial community. Based on our observations, namely that disruption to the broader microbiome by parasite infection appears to disrupt the set of interactions between \\textit{Paucibacter} and other taxa in the community, we hypothesize that \\textit{Paucibacter} is a keystone member of the healthy zebrafish gut microbiome and that dysbiosis induced by parasite exposure occurs in part by disrupting \\textit{Paucibacter} in the gut. Collectively, our observations reveal patterns of interaction between infected and uninfected individuals that clarify the potential ecological roles of \\textit{Gemmobacter} and \\textit{Paucibacter}, which future research should seek to empirically test.}\n\n{\\color{blue}In addition to these observations about \\textit{Gemmobacter} and \\textit{Paucibacter}, compositional graphical lasso also finds insightful patterns about \\textit{Photobacterium}, a genus that contains pathogens of marine fish \\citep{romalde2002photobacterium, osorio2018photobacterium}, though its role as a pathogen in zebrafish is less clear.} However, as prior work has cultured \\textit{Photobacterium} from the zebrafish gut \\citep{cantas2012culturable} and other studies have demonstrated that \\textit{Photobacterium} elicits toxicity to zebrafish cells in culture \\citep{osorio2018photobacterium}, these observations collectively point to \\textit{Photobacterium} as a potential pathogen or pathobiont of zebrafish. Consistent with this speculation, prior work found that \\textit{Photobacterium} statistically explained variation in \\textit{P. tomentosa} worm burden as well as intestinal inflammation induced by worm infection \\citep{gaulke2019longitudinal}. We sought to determine if our interaction network analysis could clarify the role of \\textit{Photobacterium} in the zebrafish gut microbiome, especially as it relates to parasite infection.\n\nInterestingly, \\textit{Photobacterium} is identified as an interaction hub (i.e., a relatively high-degree node) in all networks assembled by the methods we evaluated. However, compositional graphical lasso uniquely reveals that the number of interactions linked to \\textit{Photobacterium} increases in infected fish (19 edges) as compared to uninfected fish (16 edges), whereas the other approaches observe the opposite pattern (graphical lasso: 11 versus 17 edges; neighborhood selection: 3 versus 5 edges). These differences in the \\textit{Photobacterium} subgraph across approaches also include variation in the specific taxa that \\textit{Photobacterium} is inferred to interact with. {\\color{blue}To better understand changes in the set of \\textit{Photobacterium} interactions, we extracted the first-degree sub-graph connected to \\textit{Photobacterium} that each of the three methods produced for both the infected and uninfected host interaction networks. 
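\n\nA minimal sketch of this extraction is given below; the function name is ours, and each network is assumed to be encoded as a boolean adjacency matrix over the same ordered list of genus labels.\n\\begin{verbatim}\nimport numpy as np\n\ndef first_degree_subgraph(A, labels, center='Photobacterium'):\n    # Return the neighbors of the center taxon and the adjacency matrix\n    # restricted to the center and its first-degree neighborhood.\n    i = labels.index(center)\n    neighbors = np.flatnonzero(A[i])\n    keep = np.concatenate(([i], neighbors))\n    return [labels[j] for j in neighbors], A[np.ix_(keep, keep)]\n\\end{verbatim}\n\n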
Compositional graphical lasso alone estimates eight interactions in the \\textit{Photobacterium}-specific infected host network, and six of these eight genera have previously been linked to parasite exposure, disease burden, or histopathology score.} Two of these parasite etiology-linked taxa are \\textit{Aeromonas} and \\textit{Gemmobacter}, genera which are notable because they contain microbes that elicit pathogenic or pathobiotic phenotypes. For example, the \\textit{Aeromonas} genus includes well-characterized and abundant opportunistic pathogens, such as \\textit{A. hydrophila} \\citep{saraceni2016establishment}, and the \\textit{Gemmobacter} genus, as noted above, has been shown to opportunistically increase in abundance \\citep{huang2020exposure} in dysbiotic fish gut communities that display impeded overall gut function. {\\color{blue}Given that \\textit{Photobacterium} proliferates in the infected host gut, these observations suggest that \\textit{Photobacterium} may work alongside and even promote the growth of other pathobiotic taxa to disrupt the composition of the gut microbiome and induce dysbiosis upon infection.}\n\nBased on these collective observations, we hypothesize that \\textit{Photobacterium} is an intestinal pathobiont of the zebrafish gut and contributes to dysbiosis in infected fish. \\textit{Photobacterium} is a relatively prevalent genus in the zebrafish intestines, even in uninfected fish. However, \\textit{P. tomentosa} infection links to an increase in the relative abundance of \\textit{Photobacterium}, which also positively associates with intestinal hyperplasia \\citep{gaulke2019longitudinal}, possibly due to the cytotoxic effect of \\textit{Photobacterium}. Given that \\textit{Photobacterium} is also an interaction hub whose influence on the community increases in infected fish, at least according to compositional graphical lasso, infection-induced changes in \\textit{Photobacterium} relative abundance may drive additional changes in the success of other taxa in the microbiome, including opportunistic pathogens and pathobionts like members of \\textit{Aeromonas} and \\textit{Gemmobacter}. Going forward, future studies should seek to experimentally validate our novel hypothesis about the pathobiotic role of \\textit{Photobacterium} in the zebrafish gut, including its impact on the rest of the microbial community and its role in infection-induced tissue damage. If our hypothesis is accurate, \\textit{Photobacterium} may serve as an important model taxon for discerning how the gut microbiome and helminths interact to impact infection outcomes.\n\n{\\color{blue}Collectively, these results provide evidence for our hypothesis that in the case of intestinal helminth infection, dysbiosis may be defined not only by a change in the composition of the gut microbiome, but also by a restructuring of how key members of the microbial community interact with the rest of the microbiota. Notably, these patterns were only observed by compositional graphical lasso, as the other methods did not observe unique evidence of such infection-associated inversions for \\textit{Gemmobacter}, \\textit{Paucibacter}, or \\textit{Photobacterium}.\n\nFinally, we visualize the six genus interaction networks, three for infected fish and three for uninfected fish (Figure \\ref{fish_StARS}). For better visualization, we keep only the top $100$ edges for the networks resulting from compositional graphical lasso and graphical lasso, as they are much denser than the ones from neighborhood selection. 
Their edges are ranked in exactly the same way as in the \\textit{Tara} Oceans Study, first by selection probability and then by edge weight (see Figure \\ref{tara_StARS} in the supplementary materials for details). For all networks, darker blue implies a larger absolute value of the partial correlation.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=0.8]{Fish_StARS_genus_level.pdf}\n\\caption{Inferred networks from each method with edges filtered by selection probability and ranked by edge weight separately for the groups of uninfected and infected fish. In each network, darker blue implies stronger (larger in absolute value) edge weight.}\n\\label{fish_StARS}\n\\end{figure}}\n\n\\section{Discussion}\n\\label{Sec-5}\n\nA growing body of work points to the gut microbiome as an agent to treat, diagnose, and prevent human diseases. Developing these utilities requires foundational knowledge about the pathobiotic {\\color{blue}or probiotic} role that the gut microbiome plays in the development and progression of these diseases. While most studies have focused on investigating how the abundance of gut microbes covaries with a disease, such as infection \\citep{gaulke2019longitudinal}, how the microbial interactions associate with diseases is largely unknown. Understanding how the microbial interactions associate with a disease is critical for identifying microbes as pathobionts {\\color{blue}or probiotics}, which are candidates for potential drugs or diagnostic tests.\n\nThis work focuses on gaining a better understanding of the underlying role that microorganisms play in their communities by constructing microbial interaction networks. We propose a novel method called compositional graphical lasso that simultaneously accounts for the following key features of microbiome abundance data: (a) the data are compositional counts and only carry information about relative abundances of the taxa; (b) the observed relative abundance is subject to heteroscedasticity as its variance depends on the varying sequencing depth across samples; (c) the data are high-dimensional as the number of taxa is often larger than the number of samples. We have demonstrated the advantages of our approach over previously proposed methods in simulations and on a benchmark dataset.\n\nWe apply compositional graphical lasso to the data from the Zebrafish Parasite Infection Study, which used a parasite infection model to identify gut microbiota that positively and negatively associate with infection burden \\citep{gaulke2019longitudinal}. {\\color{blue}Our approach identified method-specific changes in interaction degree between infected and uninfected individuals for three taxa, \\textit{Photobacterium}, \\textit{Gemmobacter}, and \\textit{Paucibacter}. Further investigation of these method-specific taxa interaction changes reveals their biological plausibility, and provides insight into their relevance in the context of parasite-linked changes in the zebrafish gut microbiome. In particular, we hypothesize that \\textit{Photobacterium} and \\textit{Gemmobacter} are pathobionts of the zebrafish gut under parasitic infection and that \\textit{Paucibacter} is a probiotic. Future studies should seek to experimentally validate their ecological roles in the zebrafish gut, including their impacts on the rest of the microbial community and their roles in infection-induced tissue damage.}\n\nIt is noteworthy that compositional graphical lasso requires one to choose a reference taxon. 
As a general rule of thumb, we recommend choosing a ``common'' taxon that has a high average relative abundance and relatively few zero counts across samples, as its true relative abundance serves as the denominator in the additive log-ratio transformation. In practice, we also suggest that a user choose a taxon that is not of direct interest to investigate, as the reference taxon will not be represented in the resultant network. {\\color{blue}It is also of interest to study how much the choice of the reference taxon may affect the estimated network, and whether some robustness could be guaranteed across different choices of the reference. Because the robustness against the choice of the reference is so important not just for compositional graphical lasso, but for many other network estimation methods as well, we have chosen to devote a major effort to this issue in a separate ongoing project. In this ongoing work, we have theoretically established the reference-invariance property of the inverse covariance matrix of a class of additive log-ratio transformation-based models, including the logistic normal multinomial model as a special case. However, as it is beyond the scope of this paper, this invariance property will be detailed in the future.}\n\n\\bibliographystyle{asa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nBlind decoding, also known as blind detection, requires the receiver of a set of bits to identify if said bits compose a codeword of a particular channel code. In 3GPP LTE\/LTE-Advanced standards blind detection is used by the user equipment (UE) to receive control information related to the downlink shared channel. The UE attempts the decoding of a set of candidates, to identify if one of the candidates holds its control information. \nBlind detection will be required in the $5^{\\text{th}}$ generation wireless communication standard (5G) as well: ongoing discussions are considering a substantial reduction of the time frame allocated to blind detection, from $16\\mu$s to $4\\mu$s. Blind detection must be performed very frequently, and given the high number of decoding attempts that must be performed in a limited time \\cite{3GPP_R8}, it can lead to large implementation costs and high energy consumption. Blind detection solutions for codes adopted in previous generation standards can be found in \\cite{Moosavi_GLOBECOM11,Xia_TSP14,Zhou_ENT13}.\n\nPolar codes are a class of capacity-achieving error correcting codes, introduced by Ar{\\i}kan in \\cite{arikan}. They are characterized by simple encoding and decoding algorithms, and have been selected for use in 5G \\cite{3gpp_polar}. In \\cite{arikan}, the successive-cancellation (SC) decoding algorithm has been proposed as well. It is optimal for infinite code lengths, but its error-correction performance degrades quickly at moderate and short code lengths. In its original formulation, it also suffers from long decoding latency. SC list (SCL) decoding has been proposed in \\cite{tal_list} to improve the error-correction performance of SC, at the cost of increased decoding latency. 
In \\cite{sarkis, hashemi_SSCL, xiong_symbol, hashemi_FSSCL}, a series of techniques has been proposed, aimed at improving the decoding speed of both SC and SCL without sacrificing error-correction performance.\n\nBlind detection of polar codes has recently been addressed in \\cite{Condo_COMML17}, where a blind detection scheme fitting within 3GPP LTE-A and future 5G requirements has been proposed. It is based on a two-step scheme: a first SC decoding phase helps select a set of candidates, subsequently decoded with SCL. An early stopping criterion for SCL is also proposed to reduce average latency. Another recent work on polar code blind detection \\cite{Giard_BD} detaches itself from 4G-5G standard requirements and proposes a metric on which the outcome of the blind detection can be based.\n\nIn this work, we extend the blind detection scheme presented in \\cite{Condo_COMML17} and its early stopping criterion by considering SCL also in the first decoding phase, and provide improved detection accuracy results. We then propose an architecture to implement the blind detection scheme: it relies on an SCL decoder with tunable list size that can be used for both the first and second decoding stages. The architecture is synthesized and implementation results are reported for various system parameters.\n\nThe rest of the paper is organized as follows. Section~\\ref{sec:prel} introduces background information on polar codes and blind detection. Section~\\ref{sec:blind} details the proposed blind detection scheme, and provides simulation results to evaluate its performance. The architecture of the blind detection system is detailed in Section~\\ref{sec:HW}, and implementation results are given in Section~\\ref{sec:impl}. Finally, Section~\\ref{sec:conc} draws the conclusion.\n\n\\section{Preliminaries} \\label{sec:prel}\n\n\\subsection{Polar Codes} \\label{sec:prel:PC}\n\nA polar code $\\mathcal{P}(N,K)$ is a linear block code of length $N=2^n$ and rate $K\/N$, and it can be expressed as the concatenation of two polar codes of length $N\/2$. This is due to the fact that the encoding process is represented by a modulo-$2$ matrix multiplication as\n\\begin{equation}\n\\mathbf{x} = \\mathbf{u} \\mathbf{G}^{\\otimes n}\\text{,}\n\\end{equation}\nwhere $\\mathbf{u} = \\{u_0,u_1,\\ldots,u_{N-1}\\}$ is the input vector, $\\mathbf{x} = \\{x_0,x_1,\\ldots,x_{N-1}\\}$ is the codeword, and the generator matrix $\\mathbf{G}^{\\otimes n}$ is the $n$-th Kronecker product of the polarizing matrix $\\mathbf{G}=\\bigl[\\begin{smallmatrix} 1&0\\\\ 1&1 \\end{smallmatrix} \\bigr]$. The polarization effect brought by polar codes makes it possible to divide the $N$-bit input vector $\\mathbf{u}$ into reliable and unreliable bit-channels.\nThe $K$ information bits are assigned to the most reliable bit-channels of $\\mathbf{u}$, while the remaining $N-K$, called frozen bits, are set to a predefined value, usually $0$.\nCodeword $\\mathbf{x}$ is transmitted through the channel, and the decoder receives the log-likelihood ratio (LLR) vector $\\mathbf{y} = \\{y_0,y_1,\\ldots,y_{N-1}\\}$.\n\nIn the seminal work on polar codes \\cite{arikan}, the SC decoder is proposed. The SC-based decoding process can be represented as a binary tree search, in which the tree is explored depth first, with priority given to the left branches. Fig.~\\ref{fig:tree} shows an example of the SC decoding tree for $\\mathcal{P}(16,8)$, where nodes at stage $s$ contain $2^s$ bits. 
White leaf nodes are frozen bits, while black leaf nodes are information bits.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[scale=1.8, thick]\n\n\\fill (0,0) circle [radius=.05];\n\n\\fill (-1,-.5) circle [radius=.05];\n\\fill (1,-.5) circle [radius=.05];\n\n\\fill (-1.5,-1) circle [radius=.05];\n\\fill (-.5,-1) circle [radius=.05];\n\\fill (.5,-1) circle [radius=.05];\n\\fill (1.5,-1) circle [radius=.05];\n\n\\fill (-1.75,-1.5) circle [radius=.05];\n\\fill (-1.25,-1.5) circle [radius=.05];\n\\fill (-.75,-1.5) circle [radius=.05];\n\\fill (-.25,-1.5) circle [radius=.05];\n\\fill (.25,-1.5) circle [radius=.05];\n\\fill (.75,-1.5) circle [radius=.05];\n\\fill (1.25,-1.5) circle [radius=.05];\n\\fill (1.75,-1.5) circle [radius=.05];\n\n\\draw (-1.875,-2) circle [radius=.05];\n\\draw (-1.625,-2) circle [radius=.05];\n\\draw (-1.375,-2) circle [radius=.05];\n\\draw (-1.125,-2) circle [radius=.05];\n\\draw (-.875,-2) circle [radius=.05];\n\\fill (-.625,-2) circle [radius=.05];\n\\fill (-.375,-2) circle [radius=.05];\n\\fill (-.125,-2) circle [radius=.05];\n\\fill (.125,-2) circle [radius=.05];\n\\fill (.375,-2) circle [radius=.05];\n\\fill (.625,-2) circle [radius=.05];\n\\fill (.875,-2) circle [radius=.05];\n\\draw (1.125,-2) circle [radius=.05];\n\\draw (1.375,-2) circle [radius=.05];\n\\draw (1.625,-2) circle [radius=.05];\n\\fill (1.875,-2) circle [radius=.05];\n\n\\draw (0,-.05) -- (-1,-.45);\n\\draw (0,-.05) -- (1,-.45);\n\n\\draw (-1,-.55) -- (-1.5,-.95);\n\\draw (-1,-.55) -- (-.5,-.95);\n\\draw (1,-.55) -- (.5,-.95);\n\\draw (1,-.55) -- (1.5,-.95);\n\n\\draw (-1.5,-1.05) -- (-1.75,-1.45);\n\\draw (-1.5,-1.05) -- (-1.25,-1.45);\n\\draw (-.5,-1.05) -- (-.75,-1.45);\n\\draw (-.5,-1.05) -- (-.25,-1.45);\n\\draw (.5,-1.05) -- (.25,-1.45);\n\\draw (.5,-1.05) -- (.75,-1.45);\n\\draw (1.5,-1.05) -- (1.25,-1.45);\n\\draw (1.5,-1.05) -- (1.75,-1.45);\n\n\\draw (-1.75,-1.55) -- (-1.875,-1.95);\n\\draw (-1.75,-1.55) -- (-1.625,-1.95);\n\\draw (-1.25,-1.55) -- (-1.375,-1.95);\n\\draw (-1.25,-1.55) -- (-1.125,-1.95);\n\\draw (-.75,-1.55) -- (-.875,-1.95);\n\\draw (-.75,-1.55) -- (-.625,-1.95);\n\\draw (-.25,-1.55) -- (-.375,-1.95);\n\\draw (-.25,-1.55) -- (-.125,-1.95);\n\\draw (.25,-1.55) -- (.125,-1.95);\n\\draw (.25,-1.55) -- (.375,-1.95);\n\\draw (.75,-1.55) -- (.625,-1.95);\n\\draw (.75,-1.55) -- (.875,-1.95);\n\\draw (1.25,-1.55) -- (1.125,-1.95);\n\\draw (1.25,-1.55) -- (1.375,-1.95);\n\\draw (1.75,-1.55) -- (1.625,-1.95);\n\\draw (1.75,-1.55) -- (1.875,-1.95);\n\n\\draw [very thin,gray,dashed] (-2,0) -- (2,0);\n\\draw [very thin,gray,dashed] (-2,-.5) -- (2,-.5);\n\\draw [very thin,gray,dashed] (-2,-1) -- (2,-1);\n\\draw [very thin,gray,dashed] (-2,-1.5) -- (2,-1.5);\n\\draw [very thin,gray,dashed] (-2,-2) -- (2,-2);\n\n\\node at (-2.3,0) {$s=4$};\n\\node at (-2.3,-.5) {$s=3$};\n\\node at (-2.3,-1) {$s=2$};\n\\node at (-2.3,-1.5) {$s=1$};\n\\node at (-2.3,-2) {$s=0$};\n\n\\end{tikzpicture}\n\\caption{Binary tree example for $\\mathcal{P}(16,8)$. White circles at $s=0$ are frozen bits, black circles at $s=0$ are information bits.}\n\\label{fig:tree}\n\\end{figure}\n\nFig.~\\ref{fig:MessagePassing} portrays the message passing among SC tree nodes. Parents pass LLR values $\\alpha$ to children, that send in return the hard bit estimates $\\beta$. 
\nThe left and right branch messages $\alpha^\text{l}$ and $\alpha^\text{r}$, in the hardware-friendly version of \cite{leroux}, are computed as\n\begin{align}\n\alpha^{\text{l}}_i = & \text{sgn}(\alpha_i)\text{sgn}(\alpha_{i+2^{s-1}})\min(|\alpha_i|,|\alpha_{i+2^{s-1}}|) \text{,} \label{eq1} \\\n\alpha^{\text{r}}_i =& \alpha_{i+2^{s-1}} + (1-2\beta^\text{l}_i)\alpha_i \text{,}\n\label{eq2}\n\end{align}\nwhile $\beta$ is computed as\n\begin{equation}\n\beta_i =\n \begin{cases}\n \beta^\text{l}_i\oplus \beta^\text{r}_i, & \text{if} \quad i < 2^{s-1} \text{,}\\\n \beta^\text{r}_{i-2^{s-1}}, & \text{otherwise},\n \end{cases}\n \label{eq3}\n\end{equation}\nwhere $\oplus$ denotes the bitwise XOR. The SC operations are scheduled according to the following order: each node receives $\alpha$ first, then sends $\alpha^\text{l}$, receives $\beta^\text{l}$, sends $\alpha^\text{r}$, receives $\beta^\text{r}$, and finally sends $\beta$.\nWhen a leaf node is reached, $\beta_i$ is set as the estimated bit $\hat{u}_i$:\n\begin{equation}\n\hat{u}_i =\n \begin{cases}\n 0 \text{,} & \text{if } i \in \mathcal{F} \text{ or } \alpha_{i}\geq 0\text{,}\\\n 1 \text{,} & \text{otherwise,}\n \end{cases} \label{eq6}\n\end{equation}\nwhere $\mathcal{F}$ is the set of frozen bits.\n\nThe SC decoding process requires the full exploration of the tree; however, in \cite{alamdar, sarkis} it has been shown that it is possible to prune the tree by identifying patterns in the sequence of frozen and information bits, achieving substantial speedups. This improved SC decoding is called fast simplified SC (Fast-SSC).\n\n\nSC decoding suffers from modest error-correction performance at moderate and short code lengths. To improve it, the SCL algorithm was proposed in \cite{tal_list}. It is based on the same process as SC, but each time a bit is estimated at a leaf node, both its possible values $0$ and $1$ are considered. A set of $L$ codeword candidates is stored, so that a bit estimation results in $2L$ new candidates, half of which must be discarded. To this purpose, a path metric (PM) is associated with each candidate and updated at every new estimate: the $L$ paths with the lowest PM survive.
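\nTo make the SC update rules (\ref{eq1})--(\ref{eq3}) and the leaf decision (\ref{eq6}) concrete, the following minimal Python sketch (our own illustration, assuming floating-point LLR vectors rather than the quantized values of a hardware decoder) implements one node update.\n\begin{verbatim}\nimport numpy as np\n\ndef alpha_left(a):\n    # Eq. (1): min-sum update for the left child.\n    h = len(a) // 2\n    a1, a2 = a[:h], a[h:]\n    return np.sign(a1) * np.sign(a2) * np.minimum(np.abs(a1), np.abs(a2))\n\ndef alpha_right(a, beta_l):\n    # Eq. (2): update for the right child, using left hard decisions.\n    h = len(a) // 2\n    return a[h:] + (1 - 2 * beta_l) * a[:h]\n\ndef beta_up(beta_l, beta_r):\n    # Eq. (3): XOR on the left half, copy on the right half\n    # (beta_l, beta_r are integer 0/1 arrays).\n    return np.concatenate([beta_l ^ beta_r, beta_r])\n\ndef leaf_decision(alpha_i, is_frozen):\n    # Eq. (6): frozen bits are 0; otherwise threshold the LLR.\n    return 0 if is_frozen or alpha_i >= 0 else 1\n\end{verbatim}\nA full SC decoder would apply these functions recursively over the tree of Fig.~\ref{fig:tree}, following the schedule described above.\n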
In the LLR-based SCL proposed in \cite{balatsoukas}, the hardware-friendly formulation of the PM is\n\begin{align}\n \text{PM}_{{i}_l} =& \begin{cases}\n \text{PM}_{{i-1}_l}, & \text{if } \hat{u}_{i_l} = \frac{1}{2}\left(1-\text{sgn}\left(\alpha_{i_l}\right)\right)\text{,}\\\n \text{PM}_{{i-1}_l} + |\alpha_{i_l}|, & \text{otherwise,}\n \end{cases} \label{eq7}\n\end{align} \nwhere $l$ is the path index and $\hat{u}_{i_l}$ is the estimate of bit $i$ at path $l$.\nAs with SC decoding, SCL tree pruning techniques relying on the identification of frozen-information bit patterns have been proposed in \cite{hashemi_SSCL,hashemi_FSSCL}, called simplified SCL (SSCL) and Fast-SSCL.\n\n\begin{figure}\n\centering\n\begin{tikzpicture}[scale=.5]\n\n\draw [very thin,gray,dashed] (-2,0) -- (2,0);\n\draw [very thin,gray,dashed] (-2,-2) -- (2,-2);\n\draw [very thin,gray,dashed] (-2,-4) -- (2,-4);\n\n\node at (-3,0) {$s+1$};\n\node at (-3,-2) {$s$};\n\node at (-3,-4) {$s-1$};\n\n\fill (0,0) circle [radius=.25];\n\fill (0,-2) circle [radius=.2];\n\fill (-1.5,-4) circle [radius=.15];\n\fill (1.5,-4) circle [radius=.15];\n\n\draw [->,very thick] (-.1,-.4) -- (-.1,-1.7) node [left,midway,rotate=0] {$\alpha$};\n\draw [->,very thick] (.1,-1.7) -- (.1,-.4) node [right,midway,rotate=0] {$\beta$};\n\n\draw [->,very thick] (-.25,-2.2) -- (-1.45,-3.75) node [left,midway,rotate=0] {$\alpha^{\text{l}}$};\n\draw [->,very thick] (-1.3,-3.85) -- (-.1,-2.3) node [right,near start,rotate=0] {$\beta^{\text{l}}$};\n\draw [<-,very thick] (.25,-2.2) -- (1.45,-3.75) node [right,midway,rotate=0] {$\beta^{\text{r}}$};\n\draw [<-,very thick] (1.3,-3.85) -- (.1,-2.3) node [left,near start,rotate=0] {$\alpha^{\text{r}}$};\n\n\end{tikzpicture}\n\caption{Message passing in the tree graph representation of SC decoding.}\n\label{fig:MessagePassing}\n\end{figure}\n\n\n\n\subsection{Blind Detection}\n\nThe physical downlink control channel (PDCCH) is used in 3GPP LTE\/LTE-Advanced to transmit the downlink control information (DCI) related to the downlink shared channel. The DCI carries information regarding the channel resource allocation, transport format and hybrid automatic repeat request, and allows the UE to receive, demodulate, and decode the corresponding data transmission.\n\nA cyclic redundancy check (CRC) is attached to the DCI payload before transmission. The CRC is masked according to an ID, like the radio network temporary identifier (RNTI), of the UE to which the transmission is directed, or according to one of the system-wide IDs. Finally, the DCI is encoded with a convolutional code. The UE is not aware of the format with which the DCI has been transmitted: it thus has to explore a combination of PDCCH locations, PDCCH formats, and DCI formats in the common search space (CSS) and UE-specific search space (UESSS) and attempt decoding to identify useful DCIs. This process is called blind decoding, or blind detection. For each PDCCH candidate in the search space, the UE performs channel decoding, and demasks the CRC with its ID. If no error is found in the CRC, the DCI is considered to carry the UE control information.\n\n\nBased on LTE standard R8 \cite{3GPP_R8}, the performance specifications for the blind detection process are the following:\n\begin{itemize}\n \item The DCI of PDCCH is from $8$ to $57$ bits plus $16$-bit CRC, masked by $16$-bit ID.\n \item In UESSS, a maximum of $2$ DCI formats can be sent per transmission time interval (TTI) for $2$ potential frame lengths.
Therefore, $16$ candidate locations in UESSS $\rightarrow$ $32$ candidates.\n \item In CSS, a maximum of $2$ DCI formats can be sent per TTI for $2$ potential frame lengths. Therefore, $6$ candidate locations in CSS $\rightarrow$ $12$ candidates.\n \item Code length could be between $72$ and $576$ bits.\n \item Information length (including $16$-bit CRC) could be between $24$ and $73$ bits.\n \item The target signal-to-noise ratio (SNR) depends on the targeted block error rate (BLER) of $10^{-2}$.\n \item There are two types of false-alarm scenarios: Type-1, when the UE ID is not transmitted but detected, and Type-2, when the UE ID is transmitted but another one is detected. The target false-alarm rate (FAR) is below $1.52\times 10^{-5}$.\n \item Missed detection occurs when the UE ID is transmitted but not detected. The missed detection rate (MDR) should be close to the BLER curve.\n \item The available time frame for blind detection is $16\mu$s.\n\end{itemize}\n\n\n\n\section{Blind Detection Scheme} \label{sec:blind}\n\nIn \cite{Condo_COMML17}, polar codes have been considered within a blind detection framework, and a blind detection scheme has been proposed. Some frozen bit positions are used to transmit the RNTI instead. Fig.~\ref{fig:scheme} shows the block diagram of the devised blind detection scheme. $C_1$ candidates are received at the same time: in this case, $C_1=44$. The $C_1$ candidates are decoded with the simple SC algorithm, and a PM is obtained for each candidate, equivalent to the LLR of the last decoded bit: thanks to the serial nature of SC decoding, the LLR of the last bit can be interpreted as a reliability measure on the decoding process.\nThe PMs are then sorted, to help the selection of the best candidates to forward to the following decoding phase. $C_2$ candidates are in fact selected to be decoded with the more powerful SCL decoding algorithm, which guarantees better error-correction performance at a higher implementation complexity. The $C_2$ candidates are chosen as:\n\begin{enumerate}\n \item All candidates whose ID, after the first phase, matches the one assigned to the UE. If more than $C_2$ are present, the ones with the highest PMs are selected.\n \item If free slots among the $C_2$ remain, the candidates with the smallest PMs are selected. The candidates with large PMs have a higher probability of being correctly decoded: if their ID does not match the one assigned to the UE, it is probably a different one. On the other hand, candidates with small PMs have a higher chance of being incorrectly decoded, and a transmission to the UE might be hiding among them.\n\end{enumerate}\nAfter the SCL decoding phase, if one of the $C_2$ candidates matches the UE ID, it is selected, otherwise no selection is attempted.\n\n\begin{figure}[t!]\n \centering\n \input{.\/Arch.tikz}\n \caption{Polar codes blind detection scheme.}\n \label{fig:scheme}\n\end{figure}\n\nIn \cite{Condo_COMML17}, an early stopping criterion has been proposed as well, to reduce the latency and energy expenditure of the second phase of the blind detection scheme. The first phase requires the full decoding of each candidate, to identify the $C_2$ codewords that will be sent to the second phase. In the second phase, however, all codewords whose ID does not match the UE ID will be discarded. Thus, as soon as the ID is shown to be different, the decoding can be interrupted. Since SC-based decoding algorithms estimate codeword bits sequentially, the ID evaluation can be performed every time an ID bit is estimated. In case the estimated bit is different from the UE ID bit, the decoding is stopped.
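\nAs a sketch of this criterion (our own illustration, with decoder internals abstracted away and hypothetical names), the test performed whenever a bit is estimated can be written as follows.\n\begin{verbatim}\ndef id_check(bit_index, bit_estimate, id_positions, ue_id_bits):\n    # Early stopping test, called after each estimated bit.\n    # id_positions: indices of the bits carrying the ID;\n    # ue_id_bits: expected ID bits, aligned with id_positions.\n    # Returns False when decoding of this candidate should stop.\n    if bit_index in id_positions:\n        expected = ue_id_bits[id_positions.index(bit_index)]\n        if bit_estimate != expected:\n            return False  # ID mismatch: interrupt decoding\n    return True\n\end{verbatim}\n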
Three methods to choose the bits assigned to the ID have been described in \cite{Condo_COMML17}:\n\begin{itemize}\n \item ID mode 1: the ID bits are the $16$ most reliable bits after the $K$ information bits. \n \item ID mode 2: the ID bits are the $16$ most reliable bits, while the $K$ information bits are the most reliable bits after the $16$ ID bits.\n \item ID mode 3: considering the order with which bits are decoded in SC-based algorithms, the ID bits are the first $16$ to be decoded among the $K+16$ most reliable bits.\n\end{itemize}\nThe three techniques yield negligible differences in terms of error-correction performance, while ID mode 3 yields considerable advantages over mode 1 and mode 2 when early stopping is applied. In fact, since the ID bits are decoded earlier, the average percentage of estimated bits decreases, and the reduction in average latency is more substantial.\n\nIn this work, we generalize the blind detection scheme proposed in \cite{Condo_COMML17}, by considering SCL also for the first decoding phase. In particular, we consider a list size $L_1\ge1$ for the first decoding phase, and a list size $L_{\max}>L_1$ for the second decoding phase. It should be noted that when $L_1=1$, the blind detection scheme reverts to that of \cite{Condo_COMML17}.\n\n\n\n\subsection{Simulation Results} \label{sec:simres}\n\n\begin{figure}[t!]\n \centering\n \input{.\/blerHW.tikz}\n \\\n \ref{blerlegend}\n \caption{BLER curves with SCL when $L=8$.}\n \label{fig:BLER}\n\end{figure}\n\n\begin{figure}[t!]\n \centering\n \input{.\/MissedDetectionHW.tikz}\n \\\n \ref{MDRlegend}\n \caption{Missed detection rates after the second decoding phase with $L_1=2$, $L_{\max}=8$, and $C_2=5$. Transmissions include $C_1\/2$ cases of $N_1=256$ and $C_1\/2$ cases of $N_2=512$.}\n \label{fig:MD_SCL}\n\end{figure}\n\n\n\nTo evaluate the effectiveness of the proposed blind detection scheme, simulations were performed. The BLER, MDR, and FAR have been measured on the additive white Gaussian noise (AWGN) channel, with binary phase-shift keying (BPSK) modulation, as different code parameters vary. We focused on polar codes with block lengths $N=\{256, 512\}$, since in \cite{Condo_COMML17} it has been shown that they constitute the most critical cases in terms of speed. Four information lengths $K=\{8, 16, 32, 57\}$ have been considered, while the number of ID bits has been set to $16$. The 3GPP standardization committee has decided that information bits in polar codes must be assigned to the $K$ most reliable bit-channels \cite{3gpp_polar_AH}: thus, the ID bits have been assigned according to ID mode 1. The ID values assigned to the $C_1$ candidates are randomly selected over $16$ bits. While different numbers of candidates passed to the second phase have been considered in \cite{Condo_COMML17}, we have focused here on $C_2=5$, for which a good tradeoff between accuracy and latency is found. At the same time, we set $L_{\max}=8$ and $L_1=2$: it is a representative case for which $L_{\max}$ guarantees good error-correction performance, and at which SCL decoders can be implemented with reasonable complexity.\n\n\nFig.~\ref{fig:BLER} plots the BLER curves for all the considered code lengths and rates.
As expected, their error-correction performance improves as the code length increases and the code rate decreases. In Fig.~\ref{fig:MD_SCL}, the first of the metrics specific to the blind detection problem, the MDR, is depicted. The MDR can be defined as the number of missed detections divided by the number of transmissions in which the UE ID was sent. The curves in Fig.~\ref{fig:MD_SCL} have been obtained considering $C_1\/2$ candidates of length $N_1=256$, and $C_1\/2$ candidates of length $N_2=512$ in each transmission, with $K_1=K_2$ information bits. Together with the MDR, in Fig.~\ref{fig:MD_SCL} the BLER curves relative to the aggregate transmissions are portrayed. It can be seen that the MDR curve is always lower than the corresponding BLER curve.\n\n\nThe FAR curves for the considered case study are portrayed in Fig.~\ref{fig:FAR}. The system target FAR is equivalent to the FAR obtained with a $16$-bit CRC: in 5G, a CRC at least $16$ bits long is foreseen. Here, we evaluate the additional contribution that the proposed blind detection scheme can bring in lowering the FAR on top of the CRC. It can be seen that the FAR is kept below the $10^{-4}$ threshold at SNR values for which the BLER is still very high, and decreases as the channel conditions improve. In the blind detection method presented in \cite{Giard_BD}, the FAR increases as the MDR decreases. On the other hand, the proposed scheme allows both to be decreased at the same time, thus avoiding performance limitations that could make it unappealing for 5G standard applications.\n\nThe impact of the devised early stopping criterion on the average number of estimated bits is shown in Fig.~\ref{fig:avg_est}, for $K=32$ and $K=57$. These results consider each of the $C_2$ candidates separately, since the number of candidates of length $N_1$ and $N_2$ in the second phase depends on the PMs received from the first phase, and thus on channel SNR. The solid curves have been obtained in cases where the UE ID was sent through the considered code, while the dashed curves refer to cases where it was not. \n\n\n\n\begin{itemize}\n \item For $N=256$ (curves with a circle marker), it is possible to observe the same behavior noted in \cite{Condo_COMML17} for $N=128$. In case the UE ID was sent, as the channel conditions improve, the number of estimated bits increases until stabilizing at a maximum average value. This phenomenon can be explained by the fact that when the SNR is low, it is more likely that the codeword carrying the UE ID is not selected to be among the $C_2$ candidates. Thus the decoders in the second phase easily encounter ID bits different from the UE ID early in the decoding process. As the channel conditions improve, the codeword with the UE ID falls among the $C_2$ candidates with rising probability. Consequently, the decoder tasked with its decoding does not interrupt the process, reaching $100\%$ estimated bits, while the remaining $C_2-1$ decoders stop the decoding early, thus averaging the estimated bit percentage at a stable value ($67\%$ for $K=32$ and $61\%$ for $K=57$).
The dashed curves show instead a stable value regardless of channel conditions: since among the $C_2$ candidates there is never one carrying the UE ID, all second phase decoders tend to stop the decoding early, at a percentage independent of the SNR, and mostly influenced by the position of the bits assigned to the ID.\n\item For $N=512$ (curves with a cross marker), a similar behavior to the $N=256$ case can be observed when the UE ID is not sent, with the average number of estimated bits stable at all the considered SNR values. On the other hand, when the UE ID is sent, the trend is different: at low SNR values, the percentage of estimated bits is very close to $100\%$. As the SNR value increases, the average starts to decrease, until it settles on a stable value.\nThis behavior is due to the fact that at low SNR, it is very unlikely that a codeword with $N=512$ is among the $C_2$ second phase candidates if the UE ID is not matching: the longer code length and lower rate contribute to a higher decoding reliability during the first phase, which allows unlikely candidates to be screened out better than in the $N=256$ case.\n\end{itemize}\n\n\n\n\begin{figure}[t!]\n \centering\n \input{.\/FalseAlarmHW.tikz}\n \\\n \ref{FARlegend}\n \caption{False alarm rates after the second decoding phase with $L_1=2$, $L_{\max}=8$, and $C_2=5$. Transmissions include $C_1\/2$ cases of $N_1=256$ and $C_1\/2$ cases of $N_2=512$.}\n \label{fig:FAR}\n\end{figure}\n\n\begin{figure}[t!]\n \centering\n \input{.\/AvgEstHW.tikz}\n \\\n \ref{ESlegend}\n \caption{Average percentage of estimated bits during the second decoding phase with early stopping when $L_{\max}=8$ and $C_2=5$.}\n \label{fig:avg_est}\n\end{figure}\n\n\n\n\n\section{Hardware Architecture} \label{sec:HW}\nTo evaluate the implementation cost of the devised blind detection scheme, we designed a decoder architecture that supports it, portrayed in Fig.~\ref{fig:BDarch}. An array of flexible list size SCL decoders handles both the first and second decoding phases. A dedicated module selects the $C_2$ candidates for the second phase according to the criteria described in Section~\ref{sec:blind}.\n\n\begin{figure}[t!]\n \centering\n \includegraphics[scale=0.55]{.\/BDsys.pdf}\n \caption{Polar codes blind detection system architecture.}\n \label{fig:BDarch}\n\end{figure}\n\n\subsection{Flexible list size SCL decoder}\n\nWe based our SCL decoder architecture on that of \cite{hashemi_SSCL_TCASI,hashemi_FSSCL}: the decoding process follows the one described in Section~\ref{sec:prel:PC} for a list size $L_{\max}$. Most of the datapath and memories are instantiated $L_{\max}$ times: multiple candidates are stored at the same time, with the best candidate being selected at the end of the decoding. While in \cite{hashemi_SSCL_TCASI,hashemi_FSSCL} the final candidate is selected according to a CRC check, in the proposed architecture no CRC is considered, and the validity of the final candidate is based on the matching ID and on the PM value.\n\nThe SC decoding tree is descended by computing (\ref{eq1}) and (\ref{eq2}) at each stage $s$, with priority being given to left branches. These calculations are performed by $L_{\max}$ parallel sets of $P$ processing elements (PEs), with $P$ being a power of $2$. In the stages for which $2^s>2P$, the operations in (\ref{eq1}) and (\ref{eq2}) are performed over $2^s\/(2P)$ steps, while a single step is needed otherwise.
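\nAs a worked example of this scheduling (with assumed values $N=512$ and $P=64$, chosen only for illustration), the number of steps per stage can be computed as follows.\n\begin{verbatim}\n# Steps per stage s: 2^s / (2P) when 2^s > 2P, one step otherwise.\nN, P = 512, 64               # assumed example values\nn = N.bit_length() - 1       # n = log2(N) = 9\nfor s in range(n, 0, -1):\n    width = 2 ** s\n    steps = width // (2 * P) if width > 2 * P else 1\n    print("stage", s, "->", steps, "step(s)")\n# Stages 9 and 8 need 4 and 2 steps; all lower stages need 1.\n\end{verbatim}\n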
Internal memories store the updated LLR values between stages.\n\nPEs receive two LLR values as input, and concurrently compute both $\alpha^{\text{l}}$ and $\alpha^{\text{r}}$ according to (\ref{eq1}) and (\ref{eq2}), respectively. The correct output is selected depending on the index of the leaf node to be estimated. When a leaf node is reached, the decoder controller module identifies the leaf node as either an information bit or a frozen bit. If a frozen bit is found, the paths are not split: the bit is estimated as $0$, and the $L$ memories are updated with the same bit and LLR values. Instead, in case of an information bit, both $0$ and $1$ are considered, so that paths are split, and the PMs are updated for the $2L$ candidates according to (\ref{eq7}). Afterwards, the PMs are sorted, identifying the $L$ surviving paths.\n\nAll memories in the decoder are registers, enabling the internal LLR and $\beta$ values to be read, updated by the PEs, and written back in a single clock cycle. At the same time, the paths are either updated or split and updated, and the new PMs computed. In the following clock cycle, in case the paths were split, the PMs are sorted and the surviving paths selected.\n\nCodes with different code lengths can be decoded by storing the appropriate memory offsets for every considered code in a dedicated memory.\n\nThis baseline decoder has been modified to better fit the needs of the proposed blind detection scheme. In order to maximize resource sharing, the SCL decoder has been sized for $L_{\max}>L_1$, and the effective list size can be selected through a dedicated input.\nThe $L_{\max}-L_1$ paths that are not used in the first decoding phase are used to decode up to $\left\lfloor(L_{\max}-L_1)\/L_1\right\rfloor$ additional candidates at the same time. In order to exploit the unused paths, additional functional modules are necessary.\n\begin{itemize}\n \item The baseline decoder uses a single memory to store the channel LLR values, sharing it among the different paths. If different codewords have to be decoded at the same time, the channel memory needs to be instantiated not once, but $\left\lfloor L_{\max}\/L_1\right\rfloor$ times.\n \item The decoder relies on sorting and selection logic that identifies the $L_{\max}$ surviving paths after paths are split. To support the parallel decoding of $\left\lfloor L_{\max}\/L_1\right\rfloor$ candidates, as many sorting and selection modules targeting the selection of $L_1$ paths out of $2L_1$ are instantiated.\n\end{itemize}\nIf $L_1=1$ is selected, the path splitting and PM sorting steps are bypassed, reverting the decoders to the standard SC case. Since a single set of SCL decoders can handle both decoding phases, the total number of decoders is $N_{\text{SCL}_{\max}}$ (see Fig. \ref{fig:BDarch}). However, the effective number of decoders for the first decoding phase is $N_{\text{SCL}_1}=N_{\text{SCL}_{\max}}\times \left\lfloor L_{\max}\/L_1\right\rfloor$.\n\nThe early stopping technique described in Section~\ref{sec:blind} has also been implemented. The decoder receives as input the position of the ID bits and the value of the UE ID: every time a bit in an ID position is estimated, the bit value is compared to the expected UE ID bit. All paths whose estimated bit does not match the UE ID bit are deactivated. This operation is performed after the $L$ surviving paths have been selected, in order not to force the survival of unlikely paths, which would increase the FAR.
In case all paths have been deactivated, the decoding is stopped. The early stopping logic can be activated and deactivated by means of a dedicated control signal. Since the same hardware is used for both decoding phases, early stopping is enabled only during the second one.\n\n\n\n\subsection{PM sorting and candidate selection}\n\n\begin{figure*}[t!]\n \centering\n \includegraphics{.\/C2sel.pdf}\n \caption{PM sorting and candidate selection architecture.}\n \label{fig:C2sel}\n\end{figure*}\n\nFig.~\ref{fig:C2sel} depicts the architecture of the PM sorting and candidate selection block. It processes the output of the first decoding phase to select the $C_2$ candidates for the second phase, and selects the overall system output based on the results from the second phase. For each of the $N_{\text{SCL}_1}$ first phase decoders, a PM and a flag signalling a UE ID match are received. They are stored every time the respective \texttt{Valid} signal is raised by the decoder. The \texttt{Valid} signal is also used as an enable for the PM and UE ID match register address counter, and for the counter keeping track of how many codewords had a matching UE ID after the first phase. When all the $C_1$ candidates have gone through the first decoding phase, a \texttt{Valid} signal is issued to the sorter module, which receives as input all the stored PMs. The sorter module returns the $C_2$ minimum PMs in as many clock cycles: each PM is compared to all the others, and a single clock cycle is necessary to identify the minimum one, which is then excluded from subsequent comparisons. When the $C_2$ minima have been found, the selector module considers how many candidates had a matching UE ID after the first phase, and selects the $C_2$ candidates for the second phase among them and those with the minimum PM values. The $C_2$ candidates are sent to the $N_{\text{SCL}_{\max}}$ decoders by means of a dedicated counter. Returning PMs and UE ID match flags are received and compared by another selector: when all $C_2$ candidates have been decoded, the selected codeword, if any, is output.\n\n\n\section{Implementation Results} \label{sec:impl}\n\nThe architecture proposed in Section~\ref{sec:HW} has been described in VHDL and synthesized in TSMC 65~nm CMOS technology. Table~\ref{tab:asic} reports the synthesis results for the architecture sized for a maximum code length $N_{\max}=512$, a maximum list size $L_{\max}=8$, $C_2=5$, and a target frequency $f=1$~GHz. Various $N_{\text{SCL}_{\max}}$ values have been considered, leading to different latencies and area occupations. Since during the first decoding phase $L_1=2$, the effective number of decoders $N_{\text{SCL}_1}$ is equal to $4N_{\text{SCL}_{\max}}$, even if only $N_{\text{SCL}_{\max}}$ decoders are physically instantiated.\nRegarding the area, the $N_{\text{SCL}_{\max}}$ SCL decoders account for the majority of the complexity, ranging from $97.8\%$ when $N_{\text{SCL}_{\max}}=1$ to $99.7\%$ when $N_{\text{SCL}_{\max}}=5$. The logic complexity of the PM sorting and candidate selection module remains almost unchanged as $N_{\text{SCL}_{\max}}$ varies, being mainly affected by $C_1$ and $C_2$.
Memories have been synthesized with registers only, without the use of RAM, and account for $36\%$ of the total area occupation.\n\nThe worst case latency of the proposed blind detection system can be found as \n\begin{align} \label{eq:lat2}\n\begin{split}\n T_{\text{bd}}=&\left\lceil \frac{C_1}{N_{\text{SCL}_1}} \right\rceil \left(\frac{T^1_{\text{SCL}}}{2}+\frac{T^2_{\text{SCL}}}{2}\right)\\\n &+ T_{\text{sort}} + \left\lceil \frac{C_2}{N_{\text{SCL}_{\max}}} \right\rceil \max\left(T^1_{\text{SCL}},T^2_{\text{SCL}}\right)~ \text{,}\\\n\end{split} \n\end{align}\nwhere $T^1_{\text{SCL}}$ and $T^2_{\text{SCL}}$ are the SCL decoding latencies for codes of length $N_1$ and $N_2$, respectively, while $T_{\text{sort}}$ is the number of time steps required to sort the PMs of the first decoding phase and obtain the $C_2$ candidates out of the $C_1$ candidate locations. Also, it is worth remembering that for the proposed architecture, $N_{\text{SCL}_1}=\left\lfloor L_{\max}\/L_1\right\rfloor \times N_{\text{SCL}_{\max}}$.\nThe SCL decoding latency can be found as \cite{balatsoukas}\n\begin{equation*}\nT^x_{\text{SCL}}=2N_x+K_x+16-2 \text{,}\n\end{equation*}\nfor $x\in\{1,2\}$.\nFrom the results presented in Table \ref{tab:asic}, it is possible to see that even when considering the relatively old 65~nm technology node, the $16\mu$s worst case latency target can be reached with a single SCL decoder running at a frequency of $1$~GHz, while $N_{\text{SCL}_{\max}}=5$ guarantees a worst case latency of $3.6\mu$s, meeting the $4\mu$s target as well.\n\nHowever, considering only the worst case latency is an overly pessimistic scenario. To begin with, while there is no guarantee on how the $C_2$ candidates are distributed among $N_1$ and $N_2$, simulation results have shown that we can expect the $C_2$ candidates either to favor the shorter code length, or to be equally divided between $N_1$ and $N_2$ candidates. Thus, the factor $$\left\lceil\frac{C_2}{N_{\text{SCL}_{\max}}} \right\rceil\max\left(T^1_{\text{SCL}},T^2_{\text{SCL}}\right)$$ in (\ref{eq:lat2}), which represents the contribution of the second decoding phase, could be better expressed as \n\begin{equation*}\n\left\lceil\frac{\left\lceil C_2\/2\right\rceil}{N_{\text{SCL}_{\max}}} \right\rceil T^1_{\text{SCL}} + \left\lceil\frac{C_2-\left\lceil C_2\/2\right\rceil}{N_{\text{SCL}_{\max}}} \right\rceil T^2_{\text{SCL}}~.\n\end{equation*}\nNote that this is still a conservative assumption, since it assumes the $C_2$ candidates to be equally divided between the two code lengths. We can refine this assumption by taking into account the effect of early stopping. We can approximate the latency reduction with a multiplicative factor $E^x$ associated with $T^x_{\text{SCL}}$. Consequently, the average latency of the blind detection system can be expected to be substantially lower than the worst case given by (\ref{eq:lat2}).\n\n\begin{definition}[$\epsilon_0$-modal set]\nA set $M \subset \mathcal{X}$ is an \emph{$\epsilon_0$-modal set} of $f$ if there exists $f_M > \epsilon_0$ such that $\sup_{x\in M} f(x) = f_M$ and $M$ is a CC of the level set ${\mathcal{X}^{f_M - \epsilon_0}} := \{ x : f(x) \ge f_M - \epsilon_0 \}$.\n\end{definition}\n\nWe require the following Assumption~\ref{assumption2} on $\epsilon_0$-modal sets.
Note that under Assumption~\\ref{assumption-main} on modal-sets, Assumption~\\ref{assumption2} on $\\epsilon_0$-modal sets will hold for $\\epsilon_0$ sufficiently small.\n\n\\begin{assumption}\\label{assumption2}\nThe $\\epsilon_0$-modal sets are on the interior of $\\mathcal{X}$ and $f_M \\ge 2\\epsilon_0$ for all $\\epsilon_0$-modal sets $M$. \n\\end{assumption}\n\n\\begin{remark}\nSince each $\\epsilon_0$-modal set contains a modal-set, it follows that the number of $\\epsilon_0$-modal sets is finite.\n\\end{remark}\n\nThe following extends Proposition~\\ref{prop:main-assumptions} to show the additional properties of the regions around the $\\epsilon_0$-modal sets necessary in our analysis. The proof is in Appendix~\\ref{supportinglemmas}.\n\n\\begin{proposition}[Extends Proposition~\\ref{prop:main-assumptions}] \\label{prop:main-assumptions-general}\nFor any $\\epsilon_0$-modal set $M$, there exists $\\lambda_M, A_M, r_M, l_M, u_M, r_s, S_M$ such that the following holds. $A_M$ is a CC of $\\mathcal{X}^{\\lambda_M} := \\{x : f(x) \\ge \\lambda_M\\}$ containing $M$ which satisifies the following.\n\\begin{itemize} \n\\item \\emph{$A_M$ isolates $M$ by a valley}: $A_M$ does not intersect any other $\\epsilon_0$-modal sets and $A_M$ and $\\mathcal{X}^{\\lambda_M} \\backslash A_M$ are $r_s$-separated by $S_M$ with $r_s>0$ where $r_s$ does not depend on $M$. \n\n\\item \\emph{$A_M$ is full-dimensional}: $A_M$ contains an envelope $B(M, r_M)$ of $M$, with $r_M>0$. \n\\item \\emph{$f$ is smooth around some maximum modal-set in $M$}: There exists modal-set $M_0 \\subseteq M$ such that $f$ has density $f_M$ on $M_0$ and $f_M - f(x) \\le u_M(d(x, M_0))$ for $x \\in B(M_0, r_M)$\n\\item \\emph{$f$ is both \\emph{smooth} and has \\emph{curvature} around $M$}: $u_M$ and $l_M$ are increasing continuous functions on $[0, r_M]$, $u_M(0) = l_M(0) = 0$ and $u_M(r), l_M(r) > 0$ for $r > 0$, and\n\\begin{align*}\nl_M(d(x, M)) \\le f_M - \\epsilon_0 - f(x) \\le u_M(d(x, M)) \\forall x \\in B(M, r_M).\n\\end{align*}\n\n\\end{itemize}\n\\end{proposition}\n\n\n\n\\begin{algorithm}[tb]\n \\caption{M-cores (estimating $\\epsilon_0$-modal-sets)}\n \\label{alg:epsilonmodalset}\n\\begin{algorithmic}\n \\STATE Initialize $\\widehat{\\mathcal{M}}:= \\emptyset$. Define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \n \\STATE Sort the $X_i$'s in descending order of $f_k$ values. \n \\FOR{$i=1$ {\\bfseries to} $n$}\n \\STATE Define $\\lambda := f_k(X_i)$.\n \\STATE Let $A$ be the CC of $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ that contains $X_i$.\n \\IF{$A$ is disjoint from all cluster-cores in $\\widehat{\\mathcal{M}}$}\n \\STATE Add $\\widehat{M} := \\{ x \\in A : f_k(x) > \\lambda - \\beta_k \\lambda - \\epsilon_0 \\}$ to\n $\\widehat{\\mathcal{M}}$. \n \\ENDIF\n \\ENDFOR\n \\STATE \\textbf{return} $\\widehat{\\mathcal{M}}$. \n\n\\end{algorithmic}\n\\end{algorithm}\n\n\nNext we give admissibility conditions for $\\epsilon_0$-modal sets. The only changes (compared to admissibility conditions for modal-sets) are the constant factors. In particular, when $\\epsilon_0=0$ and $\\tilde\\epsilon = 0$ it is the admissibility conditions for modal-sets. As discussed in the main text, a larger $\\tilde\\epsilon$ value will prune more aggressively at the cost of requiring a larger number of samples. Furthermore, it is implicit below that $\\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$. This ensures that we don't prune too aggressively that the estimated $\\epsilon_0$-modal sets merge together. 
\n\n\n\\begin{definition} $k$ is {\\bf admissible} for an $\\epsilon_0$-modal set $M$ if (letting $u_M^{-1}, l_M^{-1}$ be the inverses of $u_M, l_M$)\n\\begin{align*}\n\\max \\left\\{ \\left(\\frac{24C_{\\delta, n} (\\sup_{x \\in \\mathcal{X}} f(x) + \\epsilon_0)}{l_M(\\min\\{r_M, r_s\\}\/2) - \\tilde\\epsilon} \\right)^2, 2^{7 + d} C_{\\delta, n}^2 \\right\\} \\\\\n\\le k \\le \\frac{v_d \\cdot (f_M - \\epsilon_0)}{2^{2+2d}} \\left(u_M^{-1} \\left ( \\frac{C_{\\delta, n} (f_M - \\epsilon_0)}{2\\sqrt{k}}\\right) \\right)^d \\cdot n.\n\\end{align*}\n\\end{definition}\n\n\n\n\n\n\n\n\n\n\\section{Supporting lemmas and propositions}\\label{supportinglemmas}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:main-assumptions-general}]\nLet $M$ be an $\\epsilon_0$-modal set with maximum density $f_M$ and minimum density $f_M - \\epsilon_0$ (i.e. $f_M - \\epsilon_0 \\le f(x) \\le f_M$ for $x\\in M$). \nDefine ${\\mathcal{X}^{\\lambda}} := \\{ x : f(x) \\ge \\lambda\\}$.\nLet $A_1,...,A_m$ be the CCs of ${\\mathcal{X}^{f_M - \\epsilon_0}}$ (there are a finite number of CCs since each CC contains at least one modal-set and the number of modal-sets is finite). \nDefine $r_{\\text{min}} := \\min_{A_i \\neq A_j} \\inf_{x \\in A_i, x' \\in A_j} |x - x'|$, which is the minimum distance between pairs of points in different CCs.\nNext, define the one-sided Hausdorff distance for closed sets $A, B$: $d_{H'}(A, B) := \\max_{x \\in A} \\min_{x \\in B} |x - y|$. Then consider\n$g(t) := d_{H'}({\\mathcal{X}^{f_M - \\epsilon_0 - t}}, {\\mathcal{X}^{f_M - \\epsilon_0}})$.\n\nSince $f$ is continuous and has a finite number of modal-sets, $g$ has a finite number of points of discontinuity (i.e. when $f_M - \\epsilon_0 - t$ is the density of some modal-set) and we have $g(t) \\rightarrow 0$ as $t \\rightarrow 0$. \nThus, there exists $0 < \\lambda_M < f_M - \\epsilon_0$ such that $g(f_M - \\epsilon_0 - \\lambda_M) < \\frac{1}{4}r_{\\text{min}}$ and there are no modal-sets or $\\epsilon_0$-modal sets with minimum density in $[\\lambda_M, f_M - \\epsilon_0)$. \nFor each $A_i$, there exists exactly one CC of $\\mathcal{X}^{\\lambda_M}$, $A_i'$, such that\n$A_i \\subset A_i'$. Since $g(f_M - \\epsilon_0 - \\lambda_M) < \\frac{1}{4}r_{\\text{min}}$, it follows that $A_i' \\subseteq B(A_i, \\frac{1}{4}r_{\\text{min}})$. Thus, the $A_i'$'s are pairwise separated by distance at least $\\frac{1}{2}r_{\\text{min}}$. Moreover, there are no other CCs in ${\\mathcal{X}^{f_M - \\epsilon_0}}$ because there are no modal-sets with density in $[\\lambda_M, f_M - \\epsilon_0)$. \n\nThen, let $A_M$ be the CC of $\\mathcal{X}^{\\lambda_M}$ containing $M$. Then $A_M$ contains no other $\\epsilon_0$-modal sets and it is $\\frac{1}{5}r_{\\text{min}}$-separated by $\\mathcal{X}^{\\lambda_M} \\backslash M$ by some set $S_M$ (i.e. take $S_M := \\{x : d(x, A_M) = \\frac{1}{5} r_{\\text{min}}\\}$). Since there is a finite number of modal-sets, it suffices to take $r_s$ to be the minimum of the corresponding $\\frac{1}{5}r_{\\text{min}}$ for each $\\epsilon_0$-modal set. This resolves the first part of the proposition.\n\nLet $h(r) := \\inf_{x \\in B(M, r)} f(x)$. Since $f$ is continuous, $h$ is continuous and decreasing with $h(0) = f_M - \\epsilon_0 > \\lambda_M$. Take $r_M > 0$ sufficiently small so that $h(r_M) > \\lambda_M$. This resolves the second part of the proposition. \n\n\nTake $M_0$ to be some modal-set with density $f_M$ in $M$. 
Such an $M_0$ must exist since $M$ contains local maxima at level $f_M$.\nFor each $r$, let $u_M(r) := \max\{f_M - \epsilon_0 - \inf_{x \in B(M, r)} f(x), f_M - \inf_{x \in B(M_0, r)} f(x) \}$. Then, we have $f_M - f(x) \le u_M(d(x, M_0))$ and $f_M - \epsilon_0 - f(x) \le u_M(d(x, M))$. Clearly $u_M$ is increasing on $[0, r_M]$ with $u_M(0) = 0$, and continuous since $f$ is continuous. If $u_M$ is not strictly increasing, then we can replace it with a strictly increasing continuous function while still having $u_M(r) \rightarrow 0$ as $r \rightarrow 0$ (e.g. by adding an appropriate strictly increasing continuous function). This resolves the third part of the proposition and the upper bound in the fourth part of the proposition. \n\nNow, define $g_M(t) := d({\mathcal{X}^{f_M - \epsilon_0 - t}} \cap {A_M}, M)$ \nfor $t \in [0, \frac{1}{2} (f_M - \epsilon_0 - \lambda_M)]$. \nThen $g_M$ is continuous and strictly increasing, with $g_M(0) = 0$. \nDefine $l_M$ to be the inverse of $g_M$. Clearly $l_M$ is continuous, strictly increasing, and $l_M(r) \rightarrow 0$ as $r \rightarrow 0$. From the definition of $g_M$, it follows that for $x \in B(M, r_M)$, $f_M - \epsilon_0 - f(x) \ge l_M(d(x, M))$,\nas desired.\n\end{proof}\n\nWe need the following result giving guarantees on the empirical mass of balls.\n\begin{lemma}[\citep{CD10}] \label{ball_bounds} \nPick $0 < \delta < 1$. Assume that $k \ge d \log n$. Then with probability at least $1 - \delta$, for every ball $B \subset \mathbb{R}^d$ we have\n\begin{align*}\n\mathcal{F}(B) \ge C_{\delta, n} \frac{\sqrt{d \log n}}{n} &\Rightarrow \mathcal{F}_n(B) > 0\\\n\mathcal{F}(B) \ge \frac{k}{n} + C_{\delta, n} \frac{\sqrt{k}}{n} &\Rightarrow \mathcal{F}_n(B) \ge \frac{k}{n} \\\n\mathcal{F}(B) \le \frac{k}{n} - C_{\delta, n}\frac{\sqrt{k}}{n} &\Rightarrow \mathcal{F}_n(B) < \frac{k}{n}.\n\end{align*}\n\end{lemma}\n\n\n\nLemma~\ref{fk_bounds} of \cite{dasgupta2014optimal} establishes convergence rates for $f_k$. \n\n\begin{definition}\label{rhat} For $x \in \mathbb{R}^d$ and $\epsilon > 0$, define \n$\hat{r}(\epsilon, x):=\sup\left\{r : \sup_{x' \in B(x, r)} \left(f(x') - f(x)\right) \le \epsilon \right\}$ and\n$\check{r}(\epsilon, x):=\sup\left\{r : \sup_{x' \in B(x, r)} \left(f(x) - f(x')\right) \le \epsilon \right\}$.\n\end{definition}\n\n\begin{lemma}[Bounds on $f_k$]\label{fk_bounds} Suppose that $\frac{C_{\delta, n}}{\sqrt{k}} < \frac{1}{2}$. Then the following two statements each hold with probability at least $1 - \delta$: \n\begin{align*}\nf_k(x) < \left(1 + 2\frac{C_{\delta, n}}{\sqrt{k}} \right)(f(x) + \epsilon),\n\end{align*}\nfor all $x\in \mathbb{R}^d$ and all $\epsilon > 0$ provided $k$ satisfies $v_d\cdot \hat{r}(\epsilon, x)^d \cdot (f(x) + \epsilon) \ge \frac{k}{n} - C_{\delta, n}\frac{\sqrt{k}}{n}$.\n\begin{align*}\nf_k(x) \ge \left(1 - \frac{C_{\delta, n}}{\sqrt{k}} \right)(f(x) - \epsilon),\n\end{align*}\nfor all $x\in \mathbb{R}^d$ and all $\epsilon > 0$ provided $k$ satisfies $v_d\cdot \check{r}(\epsilon, x)^d \cdot (f(x) - \epsilon) \ge \frac{k}{n} + C_{\delta, n}\frac{\sqrt{k}}{n}$. \n\end{lemma}\n\n\n\begin{lemma}[Extends Lemma~\ref{r_n_upper_bound}] \label{r_n_upper_bound_general}(Upper bound on $r_n$) Let $M$ be an $\epsilon_0$-modal set with maximum density $f_M$ and suppose that $k$ is admissible.
With probability at least $1 - \\delta$,\n\\begin{align*}\nr_n(M) \\le \\left(\\frac{2C_{\\delta, n} \\sqrt{d \\log n}}{n\\cdot v_d\\cdot (f_M - \\epsilon_0)}\\right)^{1\/d}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{r_n_upper_bound_general}]\nDefine $r_0 := \\left(\\frac{2C_{\\delta, n} \\sqrt{d \\log n}}{nv_d\\cdot (f_M - \\epsilon_0)}\\right)^{1\/d}$ and $r := (4k\/(nv_df_M))^{1\/d}$. Since $k$ is admissible, we have that $u_M(r_0) \\le u_M(r) \\le (f_M - \\epsilon_0)\/ 2$. We have\n\\begin{align*}\n\\mathcal{F}(B(x, r_0)) &\\ge v_d{r_0}^d(f_M -\\epsilon_0 - u_M(r_0)) \\ge v_d{r_0}^d (f_M - \\epsilon_0)\/2 \n= \\frac{C_{\\delta, n} \\sqrt{d\\log n}}{n}.\n\\end{align*}\nBy Lemma~\\ref{ball_bounds}, this implies that $\\mathcal{F}_n (B(x, r_0)) > 0$ with probability at least $1 - \\delta$ and therefore we have $r_n(x) \\le r_0$.\n\\end{proof}\n\n\n\n\n\n\\section{Isolation Results} \\label{appendix:isolation}\n\n\n\nThe following extends Lemma~\\ref{isolation} to handle more general\n$\\epsilon_0$-modal sets and pruning parameter $\\tilde\\epsilon$. \n\n\\begin{lemma}[Extends Lemma~\\ref{isolation}] (Isolation) \\label{isolation_general} \nLet $M$ be an $\\epsilon_0$-modal set and $k$ be admissible for $M$. Suppose $0 \\le \\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$ and let $\\hat{x}_M := \\argmax_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$. Then the following holds\nwith probability at least $1-5\\delta$: when processing sample point $\\hat{x}_M$ in Algorithm~\\ref{alg:epsilonmodalset} we will add $\\widehat{M}$ to $\\widehat{\\mathcal{M}}$ where \n$\\widehat{M}$ does not contain points outside of $\\mathcal{X}_M$.\n\\end{lemma}\n\n\\begin{proof} Define $\\widehat{f}_M := f_k(\\hat{x}_M)$, $\\lambda = \\widehat{f}_M$ and $\\bar{r} := \\min\\{r_M, r_s\\} \/ 2$.\nIt suffices to show that (\\rm{i}) $\\mathcal{X} \\backslash \\mathcal{X}_M$ and $B(M, \\bar{r})$ are disconnected in $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ and (\\rm{ii}) $\\hat{x}_M \\in B(M, \\bar{r})$. \n\nIn order to show (\\rm{i}), we first show that $G(\\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ contains no points from $B(S_M, r_s\/2)$ and no points from $\\mathcal{X}_M \\backslash B(M, \\bar{r})$. Then, all that will be left is showing that there are no edges between $B(M, \\bar{r})$ and $\\mathcal{X} \\backslash \\mathcal{X}_M$.\n \n\nWe first prove bounds on $f_k$ that will help us show (\\rm{i}) and (\\rm{ii}). Let $\\bar{F} := f_M - \\epsilon_0 - l_M(\\bar{r}\/2)$. Then for all $x \\in \\mathcal{X}_M \\backslash B(M, \\bar{r})$, we have $\\hat{r}(\\bar{F} - f(x), x) \\ge \\bar{r}\/2$. Thus the conditions for Lemma~\\ref{fk_bounds} are satisfied by the admissibility of $k$ and hence $f_k(x) < \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}} \\right) \\bar{F}$. Now,\n\\begin{align*}\n\\sup_{x \\in \\mathcal{X}_M\\backslash B(M, \\bar{r})} f_k(x) &< (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) \\bar{F} = (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) (f_M - \\epsilon_0 - l_M(\\bar{r}\/2))\\\\\n &\\le (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}})^3 \\widehat{f}_M - (1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}) \\cdot (\\epsilon_0 + l_M(\\bar{r}\/2)) \\le \\lambda- 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon},\n\\end{align*}\nwhere the second inequality holds by using Lemma~\\ref{fk_bounds} as follows. Choose $x \\in M_0$ and $\\epsilon = \\frac{C_{\\delta, n}}{2\\sqrt{k}} f_M$. Then $\\check{r}(\\epsilon, x) \\ge u^{-1}(\\epsilon)$. 
The conditions for Lemma~\\ref{fk_bounds} hold by the admissibility of $k$ and thus $\\widehat{f}_M \\ge f_k(x) \\ge (1 - C_{\\delta, n}\/\\sqrt{k})^2 f_M$. Furthermore it follows from Lemma~\\ref{fk_bounds} that $\\widehat{f}_M < (1 + 2C_{\\delta, n}\/\\sqrt{k}) f_M$; \ncombine this admissibility of $k$ to obtain the last inequality. Finally, from the above, we also have $\\sup_{x \\in \\mathcal{X}_M\\backslash B(M, \\bar{r})} f_k(x) < \\widehat{f}_M$, implying (\\rm{ii}).\n\n\\noindent Next, if $x \\in B(S_M, r_s\/2)$, then $\\hat{r}(\\bar{F} - f(x) , x) \\ge \\bar{r}\/2$ and the same holds for $B(S_M, r_s\/2)$:\n\\begin{align*}\n\\sup_{x \\in B(S_M, r_s\/2)} f_k(x) < \n\\lambda- 9\\beta_k \\lambda - \\epsilon_0 -\\tilde{\\epsilon}.\n\\end{align*}\nThus, $G(\\lambda- 9\\beta_k \\lambda - \\epsilon_0 - \\tilde{\\epsilon})$ contains no point from $B(S_M, r_s\/2)$ and no point from $\\mathcal{X}_M \\backslash B(M, \\bar{r})$. \n\n\\noindent All that remains is showing that there is no edge between $B(M, \\bar{r})$ and $\\mathcal{X} \\backslash \\mathcal{X}_M$. It suffices to show that any such edge will have length less than $r_s$ since $B(S_M, r_s\/2)$ separates them by a width of $r_s$. We have for all $x \\in B(M, \\bar{r})$, \n\\begin{align*}\\mathcal{F}(B(x, \\bar{r})) \\ge v_d\\bar{r}^d\\inf_{x' \\in B(x, 2\\bar{r})} f(x') \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}.\n\\end{align*}\nThus by Lemma~\\ref{ball_bounds}, we have $r_k(x) \\le \\bar{r} < r_s$, establishing (\\rm{i}).\n\n\\end{proof}\n\n\n\n\\section{Integrality Results} \\label{appendix:integrality}\n\nThe goal is to show that the $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ refered to above contains $B(M, r_n(M))$. We give a condition under which $B(M, r_n(M)) \\cap X_{[n]}$ would be connected in $G(\\lambda)$ for some $\\lambda$. It is adapted from arguments in Theorem V.2 in \\cite{CDKvL14}.\n\n\n\\begin{lemma}\\label{connectedness} (Connectedness)\nLet $M$ be an $\\epsilon_0$-modal set and $k$ be admissible for $M$. Then with probability at least $1 - \\delta$, $B(M, r_n(M)) \\cap X_{[n]}$ is connected in $G(\\lambda)$ if \n\\begin{align*}\n\\lambda \\le \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)^2 (f_M - \\epsilon_0).\n\\end{align*}\n\\end{lemma}\n\n\n\\begin{proof}\nFor simplicity of notation, let $A := B(M, r_n(M))$. It suffices to prove the result for $\\lambda = (1 - \\frac{C_{\\delta, n}}{\\sqrt{k}})^2 (f_M - \\epsilon_0)$.\nDefine $r_{\\lambda} = (k\/(nv_d\\lambda))^{1\/d}$ and $r_o = (k\/(2nv_df_M))^{1\/d}$. \nFirst, we show that each $x \\in B(A, r_\\lambda)$, there is a sample point in $B(x, r_o)$. \nWe have for $x\\in B(A, r_\\lambda)$,\n\\begin{align*}\n\\mathcal{F}(B(x, r_o)) &\\ge v_d r_o^d \\inf_{x' \\in B(x, r_o + r_\\lambda )} f(x') \\ge v_d r_o^d(f_M - \\epsilon_0 - u_M(r_o + r_\\lambda + r_n(M))) \\\\\n&\\ge v_d r_o^d (f_M - \\epsilon_0) \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right) \\ge C_{\\delta, n} \\frac{\\sqrt{d \\log n}}{n}.\n\\end{align*}\nThus by Lemma~\\ref{ball_bounds} we have that with probability at least $1 - \\delta$, $B(x, r_o)$ contains a sample uniformly over $x \\in B(A, r_\\lambda)$. \n\n \\noindent Now, let $x$ and $x'$ be two points in $A\\cap X_{[n]}$. We now show that there exists $x = x_0,x_1,...,x_p = x'$ such that $||x_i - x_{i+1}|| < r_o$ and $x_i \\in B(A, r_o)$. 
Since $A$ is connected and the density in $B(A, r_o + r_\lambda)$ is lower bounded by a positive quantity, for arbitrary $\gamma \in (0, 1)$ we can choose $x = z_0, z_1,...,z_p = x'$ where $||z_{i+1} - z_i|| \le \gamma r_o$. Next, choose $\gamma$ sufficiently small such that \n\begin{align*}\nv_d\left( \frac{(1-\gamma)r_o}{2}\right)^d\gamma \ge \frac{C_{\delta, n}\sqrt{d\log n}}{n},\n\end{align*}\nso that there exists a sample point $x_i$ in $B(z_i, (1-\gamma)r_o\/2)$. Moreover, we obtain that \n\begin{align*}\n||x_{i+1} - x_i|| &\le ||x_{i+1} - z_{i+1}|| + ||z_{i+1} - z_i|| + ||z_i -x_i|| \le r_o.\n\end{align*} \n\n\noindent All that remains is to show $(x_i, x_{i+1}) \in G(\lambda)$. We see that $x_i \in B(A, r_o)$, and for each $x \in B(A, r_o)$, we have \n\begin{align*}\n\mathcal{F}(B(x, r_\lambda)) &\ge v_dr_\lambda^d \inf_{x' \in B(x, r_o + r_\lambda)} f(x') \n\ge v_d r_\lambda^d (f_M - \epsilon_0) \left(1 - \frac{C_{\delta, n}}{\sqrt{k}}\right)\n\ge \frac{k}{n} + \frac{C_{\delta, n} \sqrt{k}}{n}.\n\end{align*}\n Thus $r_k(x_i) \le r_\lambda$ for all $i$. Therefore, each $x_i$ is a vertex of $G(\lambda)$. Finally, $||x_{i+1} - x_i|| \le r_o \le \min \{ r_k(x_i), r_k(x_{i+1})\}$ and thus $(x_i, x_{i+1}) \in G(\lambda)$. Therefore, $A \cap X_{[n]}$ is connected in $G(\lambda)$, as desired. \n \end{proof}\n\n\n\nThe following extends Lemma~\ref{integrality} to handle more general $\epsilon_0$-modal sets. \n\begin{lemma}[Extends Lemma~\ref{integrality}] (Integrality) \label{integrality_general} \nLet $M$ be an $\epsilon_0$-modal set with density $f_M$, and suppose $k$ is admissible for $M$. Let $\hat{x}_M := \argmax_{x \in \mathcal{X}_M \cap X_{[n]}} f_k(x)$. Then the following holds with probability at least $1 - 3\delta$. When processing sample point $\hat{x}_M$ in Algorithm~\ref{alg:epsilonmodalset}, if we add $\widehat{M}$ to $\widehat{\mathcal{M}}$, then $B(M, r_n(M)) \cap X_{[n]} \subseteq \widehat{M}$.\n\end{lemma}\n\n\begin{proof} Define $\widehat{f}_M := f_k(\hat{x}_M)$ and $\lambda := \widehat{f}_M$. It suffices to show that $B(M, r_n(M)) \cap X_{[n]}$ is connected in $G(\lambda - 9\beta_k \lambda - \epsilon_0 - \tilde{\epsilon})$. By Lemma~\ref{connectedness}, $B(M, r_n(M)) \cap X_{[n]}$ is connected in $G(\lambda_0)$ when $\lambda_0 \le (1 - \frac{C_{\delta, n}}{\sqrt{k}})^2 (f_M - \epsilon_0) $. Indeed, we have that\n\begin{align*}\n\left(1 - \frac{C_{\delta, n}}{\sqrt{k}}\right)^2 (f_M - \epsilon_0)\n&\ge \widehat{f}_M \left(1 - \frac{C_{\delta, n}}{\sqrt{k}}\right)^2 \/ \left(1 + 2\frac{C_{\delta, n}}{\sqrt{k}}\right) - \left(1 - \frac{C_{\delta, n}}{\sqrt{k}}\right)^2 \epsilon_0 \\\n&\ge \lambda - \beta_k \lambda - \epsilon_0 \ge \lambda - 9\beta_k \lambda - \epsilon_0 - \tilde{\epsilon},\n\end{align*}\nwhere the first inequality follows from Lemma~\ref{fk_bounds},\nas desired.\n\end{proof}\n\n\n\section{Theorem~\ref{theo:main}} \label{appendix:theomain}\n\nCombining isolation and integrality, we obtain the following extension of Corollary~\ref{identification}.\n\n\begin{corollary}[Extends Corollary~\ref{identification}] (Identification) \label{identification_general}\nSuppose we have the assumptions of Lemmas~\ref{isolation_general} and~\ref{integrality_general} for $\epsilon_0$-modal set $M$. Define $\widehat{f}_M := \max_{x \in \mathcal{X}_M \cap X_{[n]}} f_k(x)$ and $\lambda := \widehat{f}_M$.
With probability at least $1 - 5\\delta$, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that $B(M, r_n(M)) \\cap X_{[n]} \\subseteq \\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M \\cap X_{[n]} : f_k(x) \\ge \\lambda- \\beta_k \\lambda - \\epsilon_0 \\}$\n\\end{corollary}\n\n\\begin{proof}\nBy Lemma~\\ref{isolation_general}, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ which contains only points in $\\mathcal{X}_M$ with maximum $f_k$ value of $\\widehat{f}_M$. Thus, we have $\\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M \\cap X_{[n]} : f_k(x) \\ge \\widehat{f}_M - \\beta_k \\widehat{f}_M - \\epsilon_0 \\}$. By Lemma~\\ref{integrality_general}, $B(M, r_n(M)) \\cap X_{[n]} \\subseteq \\widehat{M}$.\n\\end{proof}\n\n\nThe following extends Theorem~\\ref{theo:main} to handle more general $\\epsilon_0$-modal sets and pruning parameter $\\tilde{\\epsilon}$. \n\n\\begin{theorem}[Extends Theorem~\\ref{theo:main}] \\label{theo:main_general}\nLet $\\delta > 0$ and $M$ be an $\\epsilon_0$-modal set. Suppose $k$ is admissible for $M$ and $0 \\le \\tilde\\epsilon < l_M(\\min\\{r_M, r_s\\}\/2)$. Then with probability at least $1 - 6\\delta$, there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that \n \\begin{align*}\n d(M, \\widehat{M}) \\le l_M^{-1}\\left(\\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M\\right), \\end{align*}\n which goes to $0$ as $C_{\\delta, n}\/\\sqrt{k} \\rightarrow 0$.\n\\end{theorem}\n\n\\begin{proof} Define $\\tilde{r} = l_M^{-1}\\left(\\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M\\right)$. There are two directions to show: $\\max_{x \\in \\widehat{M}} d(x, M) \\le \\tilde{r}$ and $\\sup_{x \\in M} d(x, \\widehat{M}) \\le \\tilde{r}$ with probability at least $1 - \\delta$.\n\nWe first show $\\max_{x \\in \\widehat{M}} d(x, M) \\le \\tilde{r}$.\nBy Corollary~\\ref{identification_general} we have $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that $\\widehat{M} \\subseteq \\{ x \\in \\mathcal{X}_M : f_k(x) \\ge \\widehat{f}_M - \\beta_k \\widehat{f}_M - \\epsilon_0 \\}$ where $\\widehat{f}_M := \\max_{x \\in \\mathcal{X}_M \\cap X_{[n]}} f_k(x)$. \nHence, it suffices to show \n\\begin{align}\\label{consistency_to_show}\n\\inf_{x \\in B(M_0, r_n(M))} f_k(x) \\ge \\sup_{\\mathcal{X}_M \\backslash B(M, \\tilde{r})} f_k(x) + \\beta_k \\widehat{f}_M + \\epsilon_0.\n\\end{align}\nDefine $r := (4\/f_Mv_d)^{1\/d}(k\/n)^{1\/d}$. For any $x \\in B(M_0, r + r_n(M))$, $f(x) \\ge f_M - u_M(r + r_n(M)) := \\check{F}$. Thus, for any $x \\in B(M_0, r_n(M))$ we can let $\\epsilon = f(x) - \\check{F}$ and thus $\\check{r}(\\epsilon, x) \\ge r$ and hence the conditions for Lemma~\\ref{fk_bounds} are satisfied. Therefore, with probability at least $1-\\delta$,\n\\begin{align}\\label{consistency_bound1}\n \\inf_{x \\in B(M_0, r_n(M))} f_k(x) \\ge \\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - u_M(r +r_n(M))).\n\\end{align}\nFor any $x\\in \\mathcal{X}_M \\backslash B(M, \\tilde{r}\/2)$, $f(x) \\le f_M - \\epsilon_0 - l_M(\\tilde{r}\/2) := \\hat{F}$. Now, for any $x \\in \\mathcal{X} \\backslash B(M, \\tilde{r})$, let $\\epsilon := \\hat{F} - f(x)$. We have $\\hat{r}(\\epsilon, x) \\ge \\tilde{r}\/2 = l^{-1}_M (8C_{\\delta, n}\/\\sqrt{k})\/2 \\ge l^{-1}_M (u_M(2r)) \/ 2 \\ge r$ (since $l_M$ is increasing and $l_M \\le u_M$) and thus the conditions for Lemma~\\ref{fk_bounds} hold. 
Hence, with probability at least $1 - \\delta$,\n\\begin{align}\\label{consistency_bound2}\n\\sup_{x \\in \\mathcal{X}_M \\backslash B(M, \\tilde{r})} f_k(x) \\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - \\epsilon_0 - l_M(\\tilde{r})).\n\\end{align}\nThus, by (\\ref{consistency_bound1}) and (\\ref{consistency_bound2}) applied to (\\ref{consistency_to_show}) it suffices to show that\n\\begin{align}\\label{consistency_to_show_2}\n&\\left(1 - \\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - u_M(r + r_n(M))) \n\\ge \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)(f_M - \\epsilon_0 - l_M(\\tilde{r})) + \\beta_k \\widehat{f}_M + \\epsilon_0,\n\\end{align}\nwhich holds when \n\\begin{align}\\label{consistency_to_show_3}\nl_M(\\tilde{r}) \\ge u_M(r + r_n(M)) + \\frac{3C_{\\delta, n}}{\\sqrt{k}}f_M + \\beta_k \\widehat{f}_M.\n\\end{align}\nThe admissibility of $k$ ensures that $r_n(M) \\le r \\le r_M\/2$ so that the regions of $\\mathcal{X}$ we are dealing with in this proof are confined within $B(M_0, r_M)$ and $B(M, r_M) \\backslash M$.\n\nBy the admissibility of $k$, $u_M(2r) \\le \\frac{C_{\\delta, n}}{2\\sqrt{k}}f_M$. This gives\n\\begin{align*}\nl_M(\\tilde{r}) &= \\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M \\ge u_M(2r) + \\frac{15C_{\\delta,n }}{2\\sqrt{k}}f_M \n\\ge u_M(r + r_n(M)) + \\frac{3C_{\\delta, n}}{\\sqrt{k}} f_M + \\beta_k \\widehat{f}_M,\n\\end{align*}\nwhere the second inequality holds since $C_{\\delta, n}\/\\sqrt{k} < 1\/16$, $u$ is increasing, $r \\ge r_n(M)$, and $\\widehat{f}_M \\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right)f_M$ by Lemma~\\ref{fk_bounds}. Thus, showing (\\ref{consistency_to_show_3}), as desired.\n\nThis shows one direction of the Hausdorff bound. We now show the other direction, that $\\sup_{x \\in M} d(x, M) \\le \\tilde{r}$.\n\nIt suffices to show for each point $x \\in M$ that the distance to the closest sample point $r_n(x) \\le \\tilde{r}$ since $\\widehat{M}$ contains these sample points by Corollary~\\ref{identification_general}. However, by Lemma~\\ref{r_n_upper_bound_general} and the admissibility of $k$, $r_n(x) \\le \\tilde{r}$ as desired.\n\\end{proof}\n\n\n\n\n\n\\section{Theorem~\\ref{pruning}} \\label{appendix:pruning}\n\nWe need the following Lemma~\\ref{connectedness_pruning} which gives guarantees us that \ngiven points in separate CCs of the pruned graph, these points will also be in separate CCs of $f$ at a nearby level. \\cite{CDKvL14} gives a result for a different graph and the proof can be adapted to give the same result for our graph (but slightly different assumptions on $k$).\n\n\\begin{lemma}[Separation of level sets under pruning, \\cite{CDKvL14}] \\label{connectedness_pruning} \nFix $\\epsilon > 0$ and let $r(\\epsilon) := \\inf_{x \\in \\mathbb{R}^d} \\min \\{\\hat{r}(\\epsilon, x), \\check{r}(\\epsilon, x) \\}$. Define $\\Lambda := \\max_{x \\in \\mathbb{R}^d} f(x)$ and assume $\\tilde\\epsilon_0 \\ge 2 \\epsilon + \\beta_k(\\lambda_f + \\epsilon)$ and let $\\tilde{G}(\\lambda)$ be the graph with vertices in $G(\\lambda)$ and edges between pairs of vertices if they are connected in $G(\\lambda - \\tilde\\epsilon_0)$. Then the following holds with probability at least $1 - \\delta$.\\\\\n\nLet $\\tilde{A}_1$ and $\\tilde{A}_2$ denote two disconnected sets of points $\\tilde{G}(\\lambda)$. Define $\\lambda_f := \\inf_{x \\in \\tilde{A}_1 \\cup \\tilde{A}_2}f(x)$. 
Then $\\tilde{A}_1$ and $\\tilde{A}_2$ are disconnected in the level set $\\{ x \\in \\mathcal{X} : f(x) \\ge \\lambda_f\\}$ if $k$ satisfies\n\\begin{align*}\nv_d (r(\\epsilon) \/ 2) ^d (\\lambda_f - \\epsilon) \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}\n\\end{align*}\nand \n\\begin{align*}\nk \\ge \\max \\{ 8 \\Lambda C_{\\delta, n}^2\/(\\lambda_f - \\epsilon), 2^{d + 7} C_{\\delta, n}^2 \\}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nWe prove the contrapositive. Let $A$ be a CC of $\\{x \\in \\mathcal{X} : f(x) \\ge \\lambda_f \\}$ with $\\lambda_f = \\min_{x \\in A \\cap X_{[n]}} f(x)$. Then it suffices to show $A \\cap X_{[n]}$ is connected in $G(\\lambda')$ for $\\lambda' := \\min_{x \\in A \\cap X_{[n]}} f_k(x) - \\tilde\\epsilon_0$. \n\nWe first show $A \\cap X_{[n]}$ is connected in $G(\\lambda)$ for $\\lambda = (\\lambda_f - \\epsilon) \/ (1 + C_{\\delta, n}\/\\sqrt{k})$; then all that remains is to show $\\lambda' \\le \\lambda$.\n\nDefine $r_o := (k\/(2nv_d\\Lambda))^{1\/d}$ and $r_\\lambda := (k\/(nv_d\\lambda))^{1\/d}$. Then from the first assumption on $k$, it follows that $r_\\lambda \\le r(\\epsilon)\/2$. Now for each $x \\in B(A, r_\\lambda)$, we have \n\\begin{align*}\n\\mathcal{F}(B(x, r_o)) \\ge v_d r_o^d \\inf_{x' \\in B(x, r_o + r_\\lambda )} f(x') \n\\ge v_d r_o^d(\\lambda_f - \\epsilon) \\ge C_{\\delta, n} \\frac{\\sqrt{d \\log n}}{n}.\n\\end{align*}\nThus, by Lemma~\\ref{ball_bounds} we have with probability at least $1-\\delta$ that $B(x, r_o)$ contains a sample point.\n\nNow, in the same way as in Lemma~\\ref{connectedness}, we have the following. If $x$ and $x'$ are two points in $A\\cap X_{[n]}$, then there exist $x = x_0,x_1,...,x_p = x'$ such that $||x_i - x_{i+1}|| < r_o$ and $x_i \\in B(A, r_o)$. \n\nWe next show that $(x_i, x_{i+1}) \\in G(\\lambda)$. We see that $x_i \\in B(A, r_o)$. Moreover, for each $x \\in B(A, r_o)$, we have \n\\begin{align*}\n\\mathcal{F}(B(x, r_\\lambda)) &\\ge v_dr_\\lambda^d \\inf_{x' \\in B(x, r_o + r_\\lambda)} f(x') \n\\ge v_d r_\\lambda^d (\\lambda_f - \\epsilon) \\ge \\frac{k}{n} + \\frac{C_{\\delta, n} \\sqrt{k}}{n}.\n\\end{align*}\n Thus $r_k(x_i) \\le r_\\lambda$ for all $i$. Therefore, $x_i \\in G(\\lambda)$ for all $x_i$. Finally, $||x_{i+1} - x_i|| \\le r_o \\le \\min \\{ r_k(x_i), r_k(x_{i+1})\\}$ and thus $(x_i, x_{i+1}) \\in G(\\lambda)$. Therefore, $A \\cap X_{[n]}$ is connected in $G(\\lambda)$.\n\nAll that remains is showing $\\lambda' \\le \\lambda$. We have\n\\begin{align*}\n\\lambda' = \\min_{x \\in A \\cap X_{[n]}} f_k(x) - \\tilde\\epsilon_0 \n\\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}} \\right) (\\lambda_f + \\epsilon) - \\tilde\\epsilon_0\n\\le \\lambda,\n\\end{align*}\nwhere the first inequality holds by Lemma~\\ref{fk_bounds}, and the second inequality holds from the assumption on $\\tilde\\epsilon_0$, as desired. \n\n\\end{proof}\n\n\nWe state the pruning result for more general choices of $\\tilde{\\epsilon}$. Its proof is standard and given here for completeness. (See e.g. \\cite{dasgupta2014optimal}). \n\n\\begin{theorem}[Extends Theorem~\\ref{pruning}] \\label{pruning_general}\nLet $0< \\delta < 1$ and $\\tilde\\epsilon \\ge 0$. There exists $\\lambda_0 = \\lambda_0(n, k)$ such that the following holds with probability at least $1 - \\delta$. 
All $\\epsilon_0$-modal set estimates in $\\widehat{\\mathcal{M}}$ chosen at level $\\lambda \\ge \\lambda_0$ can be injectively mapped to $\\epsilon_0$-modal sets $\\braces{M: \\lambda_M \\geq \\min_{\\{x\\in \\mathcal{X}_{[n]} : f_k(x) \\ge \\lambda - \\beta_k \\lambda \\}} f(x)}$, provided $k$ is admissible for all such $M$. \n\nIn particular, if $f$ is H\\\"older-continuous (i.e. $|f(x) - f(x')| \\le c||x - x'||^\\alpha $ for some $0 < \\alpha\\le 1$, $c > 0$) and $\\tilde\\epsilon = 0$, \nthen $\\lambda_0 \\to 0$ as $n\\to \\infty$, provided \n$C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$, for some $C_1, C_2$ independent of $n$. \n\\end{theorem}\n\n\\begin{proof}\nDefine $r(\\epsilon) := \\inf_{x \\in \\mathbb{R}^d} \\min \\{\\hat{r}(\\epsilon, x), \\check{r}(\\epsilon, x) \\}$. Since $f$ is uniformly continuous, it follows that $r(0) = 0$, $r$ is increasing, and $r(\\epsilon) > 0$ for $\\epsilon > 0$. \n\nThus, there exists $\\tilde{\\lambda}_{n,k,\\tilde\\epsilon} > 0$ such that \n\\begin{align*}\n\\tilde{\\lambda}_{n, k, \\tilde\\epsilon} = \\frac{k}{n\\cdot v_d \\cdot r((8\\beta_k \\tilde{\\lambda}_{n,k,\\tilde\\epsilon} + \\tilde{\\epsilon})\/3)^d}.\n\\end{align*}\nDefine\n\\begin{align*}\n\\lambda_0 := \\max \\{\\tilde\\lambda_{n,k,\\tilde\\epsilon}, 32\\beta_k \\sup_{x \\in \\mathcal{X}} f(x) + 4 \\tilde\\epsilon \\}.\n\\end{align*}\n\nLet us identify each estimated $\\epsilon_0$-modal set $\\widehat{M}$ with the point $\\hat{x}_M := \\argmax_{x \\in \\widehat{M}} f_k(x)$. Let us call these points modal-points. Then it suffices to show that there is an injection from modal-points to the $\\epsilon_0$-modal sets. \n\nDefine $G'(\\lambda)$ to be the graph with vertices in $G(\\lambda - \\beta_k \\lambda)$ and edges between vertices if they are in the same CC of $G(\\lambda - 9\\beta_k \\lambda - \\tilde\\epsilon)$, and define $X_{[n]}^\\lambda := \\{ x : f_k(x) \\ge \\lambda \\}$. \nLet $\\tilde{A}_{i, \\lambda} := \\tilde{A}_i \\cap X_{[n]}^\\lambda$ for $i = 1,...,m$ be the vertices of the CCs of $G'(\\lambda)$ which do not contain any modal-points chosen thus far as part of estimated modal-sets. \n\nFix level $\\lambda > 0$ such that $\\lambda_f := \\inf_{x \\in X_{[n]}^\\lambda} f(x) \\ge \\lambda_0\/2$. Then the conditions are satisfied for Lemma~\\ref{connectedness_pruning} with $\\epsilon = (8\\beta_k\\lambda + \\tilde\\epsilon)\/3$. Suppose that $\\tilde{A}_{1,\\lambda},...,\\tilde{A}_{m,\\lambda}$ are in ascending order according to $\\lambda_{i, f} := \\min_{x \\in \\tilde{A}_{i, \\lambda}} f(x)$. Starting with $i = 1$, by Lemma~\\ref{connectedness_pruning}, $\\mathcal{X}^{\\lambda_{1, f}}$ can be partitioned into disconnected subsets $A_1$ and $\\mathcal{X}^{\\lambda_{1, f}} \\backslash A_1$ containing respectively $\\tilde{A}_{1, \\lambda}$ and $\\cup_{i=2}^m \\tilde{A}_{i, \\lambda}$. Assign the modal-point $\\argmax_{x \\in \\tilde{A}_{1, \\lambda}} f_k(x)$ to any $\\epsilon_0$-modal set in $A_1$. Repeat the same argument successively for any $\\tilde{A}_{i, \\lambda}$ and $\\cup_{j=i+1}^m \\tilde{A}_{j, \\lambda}$ until all modal-points are assigned to distinct $\\epsilon_0$-modal sets in disjoint sets $A_i$. \n\nNow by Lemma~\\ref{connectedness_pruning}, $\\mathcal{X}^{\\lambda_f}$ can be partitioned into disconnected subsets $A$ and $\\mathcal{X}^{\\lambda_f} \\backslash A$ containing respectively $\\tilde{A}_\\lambda := \\cup_{i=1}^m \\tilde{A}_{i, \\lambda}$ and $\\mathcal{X}_{[n]}^{\\lambda_f} \\backslash \\tilde{A}_{\\lambda}$. 
Thus, the modal-points in $\\tilde{A}_\\lambda$ were assigned to $\\epsilon_0$-modal sets in $A$. \n\n\\noindent Now we repeat the argument for all $\\lambda' > \\lambda$ to show that the modal-points in $X_{[n]}^\\lambda \\backslash \\tilde{A}_\\lambda$ can be assigned to distinct $\\epsilon_0$-modal sets in $\\mathcal{X}^\\lambda \\backslash A^\\lambda$. (We have $\\lambda'_f := \\min_{x \\in X_{[n]}^{\\lambda' - \\beta_k\\lambda'} }f(x) \\ge \\lambda_f$).\n\n\\noindent Finally, it remains to show that $\\lambda \\ge \\lambda_0$ implies $\\lambda_f \\ge \\lambda_0 \/2$. We have $\\lambda_0 \/ 4 \\ge 8\\beta_k\\lambda + \\tilde\\epsilon$, thus $r(\\lambda_0 \/ 4) \\ge r(8\\beta_k\\lambda + \\tilde\\epsilon)$. It follows that\n\\begin{align*}\nv_d(r(\\lambda_0\/4))^d\\cdot (\\lambda_0\/4) \\ge \\frac{k}{n} + C_{\\delta, n}\\frac{\\sqrt{k}}{n}.\n\\end{align*} \nHence, for all $x$ such that $f(x) \\le \\lambda_0 \/ 2$, we have\n\\begin{align*}\nf_k(x) \\le \\left(1 + 2\\frac{C_{\\delta, n}}{\\sqrt{k}}\\right) (f(x) + \\lambda_0 \/ 4) \\le \\lambda_0.\n\\end{align*}\n\nTo see the second part, suppose we have $C_1, C_2 > 0$ such that $C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$. This combined with the fact that $r(\\epsilon) \\ge (\\epsilon\/C)^{1\/\\alpha}$ implies $\\lambda_0 \\rightarrow 0$, as desired.\n\n\\end{proof}\n\n\n\n\n\\section{Point-Cloud Density} \\label{appendix:pointcloud}\nHere we formalize the fact that modal-sets can serve as good models for high-density structures in data, for instance \na low-dimensional structure $M$ $+$ noise. \n\n\\begin{lemma} (Point Cloud with Gaussian Noise)\nLet $M \\subseteq \\mathbb{R}^d$ be compact (with possibly multiple connected-components of differing dimension). Then there exists a density $f$ over $\\mathbb{R}^d$ such that the density is uniform on $M$ and has Gaussian decay around $M$, i.e.\n\\begin{align*}\nf(x) = \\frac{1}{Z} \\exp(-d(x, M)^2\/(2\\sigma^2)),\n\\end{align*} \nwhere $\\sigma > 0$ and $Z > 0$ depends on $M, \\sigma$. Thus, the modal-sets of $f$ are the connected-components of $M$. \n\\end{lemma}\n\n\\begin{proof}\nSince $M$ is compact in $\\mathbb{R}^d$, it is bounded. Thus there exists $R > 0$ such that $M \\subseteq B(0, R)$. \nIt suffices to show that for any $\\sigma > 0$,\n\\begin{align*}\n\\int_{\\mathbb{R}^d} \\exp(-d(x, M)^2\/(2\\sigma^2)) dx < \\infty.\n\\end{align*}\nBy a scaling of $x$ by $\\sigma$, it suffices to show that \n\\begin{align*}\n\\int_{\\mathbb{R}^d} g(x) dx < \\infty,\n\\end{align*}\nwhere $g(x) := \\exp(-\\frac{1}{2} d(x, M)^2)$. Consider level sets $\\mathcal{X}^\\lambda := \\{ x \\in \\mathbb{R}^d : g(x) \\ge \\lambda \\}$. \nNote that $\\mathcal{X}^\\lambda \\subseteq B(M, \\sqrt{2 \\log(1\/\\lambda)})$ based on the decay in $g$ around $M$. Clearly the image of $g$ is $(0, 1]$, so consider partitioning this range into intervals $[1\/2, 1], [1\/3, 1\/2], ...$. Then it follows that \n\\begin{align*}\n\\int_{\\mathbb{R}^d} g(x) dx \n&\\le \\sum_{n=2}^\\infty \\text{Vol}(\\mathcal{X}^{1\/n}) \\left(\\frac{1}{n-1} - \\frac{1}{n}\\right)\n\\le \\sum_{n=2}^\\infty \\frac{\\text{Vol}(B(M, \\sqrt{2 \\log(n)})) }{(n-1)n}\\\\\n&\\le \\sum_{n=2}^\\infty \\frac{\\text{Vol}(B(0, R + \\sqrt{2 \\log(n)})) }{(n-1)n}\n= \\sum_{n=2}^\\infty \\frac{v_d(R + \\sqrt{2 \\log(n)})^d }{(n-1)n} \\\\\n&\\le v_d\\cdot 2^{d-1} \\sum_{n=2}^\\infty \\frac{R^d + (2 \\log(n))^{d\/2}}{(n-1)n} < \\infty,\n\\end{align*}\nwhere the last inequality holds by convexity of $t \\mapsto t^d$ (so that $(R + t)^d \\le 2^{d-1}(R^d + t^d)$). 
As desired.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\section{Implementation} \\label{appendix:implementation}\n\nIn this section, we explain how to implement Algorithm~\\ref{alg:epsilonmodalset} (which supersedes Algorithm~\\ref{alg:modalset}) efficiently. Here we assume that for our sample $X_{[n]}$, we have the $k$-nearest neighbors for each sample point. In our implementation, we simply use a kd-tree, although one could replace it with any method that can produce the $k$-nearest neighbors for all the sample points. In particular, one could use approximate $k$-NN methods if scale is an issue. \n\nWe now turn to what remains: constructing a data structure that maintains the CCs of the mutual $k$-NN graph as we traverse down the levels. At level $\\lambda$ in Algorithm~\\ref{alg:epsilonmodalset}, we must keep track of the mutual $k$-NN graph for points $x$ such that $f_k(x) \\ge \\lambda - 9\\beta_k \\lambda - \\epsilon_0 - \\tilde\\epsilon$. Thus, as $\\lambda$ decreases, we add more vertices (and the corresponding edges to mutual $k$-nearest neighbors). Algorithm~\\ref{alg:epsilonmodalsetinterface} shows what functions this data structure must support: namely, adding nodes and edges, getting CCs of nodes, and checking if a CC intersects with the current estimates of the $\\epsilon_0$-modal sets.\n\nWe implement this as a disjoint-set forest. \nThe CCs are represented as the disjoint sets of the forest. Adding a node corresponds to a make-set operation, while adding an edge corresponds to a union operation. We can identify each CC with the root of its set's tree, and thus getConnectedComponent and componentSeen can be implemented in a straightforward way (a minimal Python sketch of this bookkeeping is given further below).\n\nIn sum, the bulk of the time complexity is in preprocessing the data. This consists of obtaining the initial $k$-NN graph, i.e. distances to nearest neighbors; this one-time operation has worst-case cost of order $O(n^2)$, similar to usual clustering procedures (e.g. Mean-Shift, K-Means, Spectral Clustering), but average-case cost $O(nk \\log n)$. After this preprocessing step, the estimation procedure itself requires just $O(nk)$ operations, each with amortized cost $O(\\alpha(n))$, where $\\alpha$ is the inverse Ackermann function. Thus, the implementation provided in Algorithm~\\ref{alg:epsilonmodalsetimpl} is near-linear in $k$ and $n$. \n\n\n\n\n\\begin{algorithm}[tb]\n \\caption{Interface for Mutual $k$-NN graph construction}\n \\label{alg:epsilonmodalsetinterface}\n\\begin{algorithmic}\n\\STATE InitializeGraph() \\hfill \/\/ Creates an empty graph\n\\STATE addNode(G, node) \\hfill \/\/ Adds a node\n\\STATE addEdge(G, node1, node2) \\hfill \/\/ Adds an edge\n\\STATE getConnectedComponent(G, node) \\hfill\/\/ Get the vertices in node's CC\n\\STATE componentSeen(G, node) \\hfill \/\/ Checks whether node's CC intersects with the estimates. If not, then marks the component as seen.\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\begin{algorithm}[tb]\n \\caption{Implementation of M-cores (Algorithm~\\ref{alg:epsilonmodalset})}\n \\label{alg:epsilonmodalsetimpl}\n\\begin{algorithmic}\n\n\\STATE Let $\\text{kNNSet}(x)$ be the $k$-nearest neighbors of $x \\in X_{[n]}$. 
\n\\STATE $\\widehat{\\mathcal{M}} \\leftarrow \\{ \\}$\n\\STATE $G \\leftarrow \\text{InitializeGraph()}$ \n\\STATE Sort points in descending order of $f_k$ values\n\\STATE Let $p \\leftarrow 1$\n\\FOR{$i = 1,...,n$}\n\\STATE $\\lambda \\leftarrow f_k(X_i)$\n\\WHILE {$p < n$ and $f_k(X_p) \\ge \\lambda - 9\\beta_k \\lambda -\\epsilon_0 - \\tilde\\epsilon$}\n\\STATE addNode($G, X_p$)\n\\FOR{$x \\in \\text{kNNSet}(X_p) \\cap G$}\n\\STATE addEdge($G, x, X_p$)\n\\ENDFOR \n\\STATE $p \\leftarrow p + 1$\n\\ENDWHILE\n\\IF{not componentSeen($G, X_i$)}\n\\STATE $\\text{toAdd} \\leftarrow \\text{getConnectedComponent}(G, X_i)$\n\\STATE Delete all $x$ from $\\text{toAdd}$ where $f_k(x) < \\lambda - \\beta_k \\lambda$\n\\STATE $\\widehat{\\mathcal{M}} \\leftarrow \\widehat{\\mathcal{M}} + \\{\\text{toAdd}\\}$\n\\ENDIF\n\\ENDFOR\n\\RETURN{$\\widehat{\\mathcal{M}}$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Practical Setup} \n\nThe analysis prescribes a setting of $\\beta_k = O(1\/\\sqrt{k})$. Throughout the experiments we simply fix $\\beta_k = 2 \/ \\sqrt{k}$, and let our choice of $k$ be the essential parameter. \nAs we will see, M-cores yields competitive and stable performance for a wide-range of settings of $k$. The implementation can be done efficiently and is described in Appendix~\\ref{appendix:implementation}. \n\n\n\nWe will release an optimized Python\/C++ version of the code at \\cite{url}. \n\n\n\n\\subsection{Qualitative Experiments on General Structures} \n\\begin{figure}[h]\n\\centering \n\\includegraphics[width=0.155\\textwidth]{eye1_original}\n\\includegraphics[width=0.155\\textwidth]{eye1_filtered_new}\n\\includegraphics[width=0.155\\textwidth]{eye1_estimated_new}\n\\includegraphics[width=0.155\\textwidth]{eye2_original}\n\\includegraphics[width=0.155\\textwidth]{eye2_filtered_new}\n\\includegraphics[width=0.155\\textwidth]{eye2_estimated_new}\n\\caption{Diabetic Retinopathy: (Left 3 figures) An unhealthy eye, (Right 3 figures) A healthy eye. \nIn both cases, shown are (1) original image, (2) a filter applied to the image, (3) modal-sets (structures of capillaries) estimated by M-cores on the corresponding filtered image. The unhealthy eye is characterized by a proliferation of \ndamaged blood capillaries, while a healthy eye has visually fewer capillaries. The analysis task is to automatically discover the higher number of capillary-structures in the unhealthy eye. M-cores discovers $29$ structures for unhealthy eye vs $6$ for healthy eye.\n}\n\\label{fig:eye}\n\\end{figure} \n\nWe start with a qualitative experiment highlighting the flexibility of the procedure in fitting \na large variety of high-density structures. For these experiments, we use $k = \\frac{1}{2} \\cdot \\log^2 n$, which is within the\ntheoretical range for admissible values of $k$ (see Theorem \\ref{theo:main} and Remark \\ref{kadmissible}). \n\nWe consider a medical imaging problem. Figure~\\ref{fig:eye} displays the procedure applied to the Diabetic Retinopathy detection problem \\cite{drd}. While this is by no means an end-to-end treatment of this detection problem, it gives a sense of M-cores' versatility in fitting \nreal-world patterns. In particular, M-cores automatically estimates a reasonable number of clusters, independent \nof shape, while pruning away (most importantly in the case of the healthy eye) false clusters due to noisy data. As a result, it correctly picks up a much larger number of clusters in the case of the unhealthy eye. 
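\n\nFor completeness, we also give a minimal Python sketch of the disjoint-set forest described in Appendix~\\ref{appendix:implementation}. This is an illustration only (it is not the optimized Python\/C++ release, and the function names simply mirror Algorithm~\\ref{alg:epsilonmodalsetinterface}):\n\\begin{verbatim}\n# Minimal union-find sketch of the mutual k-NN graph\n# interface; illustrative only.\nclass Graph:\n    def __init__(self):\n        self.parent = {}   # node -> parent in its tree\n        self.members = {}  # root -> nodes of the set\n        self.seen = set()  # roots already claimed by estimates\n\ndef InitializeGraph():\n    return Graph()\n\ndef _find(G, x):  # root lookup with path halving\n    while G.parent[x] != x:\n        G.parent[x] = G.parent[G.parent[x]]\n        x = G.parent[x]\n    return x\n\ndef addNode(G, x):  # make-set\n    G.parent[x] = x\n    G.members[x] = [x]\n\ndef addEdge(G, x, y):  # union by size\n    rx, ry = _find(G, x), _find(G, y)\n    if rx == ry:\n        return\n    if len(G.members[rx]) < len(G.members[ry]):\n        rx, ry = ry, rx\n    G.parent[ry] = rx\n    G.members[rx] += G.members.pop(ry)\n    if ry in G.seen:  # a merged seen CC stays marked\n        G.seen.discard(ry)\n        G.seen.add(rx)\n\ndef getConnectedComponent(G, x):\n    return G.members[_find(G, x)]\n\ndef componentSeen(G, x):  # check, then mark\n    r = _find(G, x)\n    if r in G.seen:\n        return True\n    G.seen.add(r)\n    return False\n\\end{verbatim}\nEach operation above has amortized cost $O(\\alpha(n))$ (up to the set bookkeeping), matching the complexity discussion of Appendix~\\ref{appendix:implementation}.\n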
\n\n\n\n\\subsection{Clustering applications}\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[width=1\\textwidth]{real_data_examples_new.png}\n\\caption{Comparison on real datasets (along the rows) across different hyperparameter settings for each algorithm (along the columns). The hyperparameters being tuned are displayed at the bottom of the figure for each clustering algorithm. Scores: the blue line with triangular markers is Adjusted-Mutual-Information, and the dotted red line is Adjusted-Rand-Index. }\n\\label{fig:realworld}\n\\end{figure*} \nWe now evaluate the performance of M-cores on clustering applications, where for\n{\\bf clustering:} we assign every point $x_i\\in X_{[n]}$ to $\\argmin_{\\widehat{M} \\in \\widehat{\\mathcal{M}}} d(x_i, \\widehat M)$, i.e. to the closest estimated modal-set. \n\nWe compare M-cores to two common density-based clustering procedures, DBSCAN and Mean-Shift, as implemented in the \\textit{sci-kit-learn} package. Mean-Shift clusters data around point-modes, i.e. local-maxima of $f$, and is therefore most similar to M-cores in its objective. \n\n\n\n{\\bf Clustering scores.} We compute two established scores which evaluate a clustering against a labeled ground-truth. The \\emph{rand-index}-score is the $0$-$1$ accuracy in grouping pairs of points, (see e.g. \\cite{hubert}); the \\emph{mutual information}-score is the (information theoretic) mutual-information between the distributions induced by the clustering and \nthe ground-truth (each cluster is a mass-point of the distribution, see e.g. \\cite{vinh2010information}). For both scores we report the \\emph{adjusted} version, which adjusts the score so that a random clustering (with the same number of clusters as the ground-truth) scores near $0$ (see e.g. \\cite{hubert}, \\cite{vinh2010information}). \n\n{\\bf Datasets.} Phonemes \\cite{hastie2005elements}, and UCI datasets: Glass, Seeds, Iris, and Wearable Computing. They are described in the table below\n\\begin{center}\n \\begin{tabular}{ | p{2cm}| p{0.7cm} | p{0.35cm} | p{0.7cm} | p{8cm} |}\n \\hline\n {\\small \\emph{Dataset}} & $n$ & $d$ & {\\small \\emph{Labels}} & {\\small \\emph{Description}} \\\\ \\hline\n {\\small Phonemes} & {\\small 4509} & {\\small 256} & {\\small 5} & {\\small Log-periodograms of spoken phonemes} \\\\ \\hline\n {\\small Glass} & {\\small 214} & {\\small 7} & {\\small 6} & \\small{Properties of different types of glass} \\\\ \\hline\n {\\small Seeds} & {\\small 210} & {\\small 7} & {\\small 3} & \\small{Geometric measurements of wheat kernels} \\\\ \\hline\n {\\small Iris} & {\\small 150} & {\\small 4} & {\\small 3} & \\small{Various measurements over species of flowers} \\\\ \\hline\n {\\small Wearable} & {\\small 10000} & {\\small 12} & {\\small 5} & \n \\small{4 sensors on a human body, recording body posture and activity} \\\\ \\hline\n \\end{tabular}\n\\end{center}\n\n{\\bf Results.} Figure~\\ref{fig:realworld} reports the performance of the procedures for each dataset. Rather than reporting the performance of the \nprocedures under \\emph{optimal-tuning}, we report their performance \\emph{over a range} of hyperparameter settings, \nmindful of the fact that optimal-tuning is hardly found in practice (this is a general problem in clustering given the lack of ground-truth to guide tuning). \n\n\nFor M-cores we vary the parameter $k$. For DBSCAN and Mean-Shift, we vary the main parameters, respectively \\emph{eps} (choice of level-set), and \\emph{bandwidth} (used in density estimation). 
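\n\nAs an illustration of this protocol, a minimal Python sketch of the scoring loop is given below (using \\textit{sci-kit-learn}; the arrays \\texttt{X} and \\texttt{y} are assumed to hold the features and ground-truth labels, the grids \\texttt{eps\\_grid} and \\texttt{bw\\_grid} are the swept hyperparameter values, and the analogous sweep for M-cores varies $k$):\n\\begin{verbatim}\n# Sweep a hyperparameter and record both adjusted scores\n# against the ground-truth labels y.\nfrom sklearn.cluster import DBSCAN, MeanShift\nfrom sklearn.metrics import (adjusted_rand_score,\n                             adjusted_mutual_info_score)\n\ndef sweep(X, y, make_clusterer, grid):\n    scores = []\n    for h in grid:\n        labels = make_clusterer(h).fit_predict(X)\n        scores.append((h,\n                       adjusted_rand_score(y, labels),\n                       adjusted_mutual_info_score(y, labels)))\n    return scores\n\ndbscan_scores = sweep(X, y, lambda e: DBSCAN(eps=e), eps_grid)\nms_scores = sweep(X, y, lambda b: MeanShift(bandwidth=b), bw_grid)\n\\end{verbatim}\n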
\nM-cores yields competitive performance across the board, with stable scores over a large range of values of $k$ (relative to sample size). Such stable performance to large changes in $k$ is quite desirable, considering that proper tuning of hyperparameters remains a largely open problem in clustering. \n\\vspace{0.3cm}\n\n{\\bf Conclusion} \\\\\n\n\\vspace{-0.2cm} \n\nWe presented a theoretically-motivated procedure which can consistently estimate modal-sets, i.e. nontrivial high-density structures in data, under benign distributional conditions. This procedure is easily implemented and yields competitive and stable scores in clustering applications. \n\n\n\n\n\n\\title{Modal-set estimation with an application to clustering}\n\n\n\\author{\n  Heinrich Jiang \\thanks{Much of this work was done when this author was at Princeton University Mathematics Department.}\\\\\n  \\texttt{heinrich.jiang@gmail.com} \\\\\n  \\And\n  Samory Kpotufe\\\\\n  ORFE, Princeton University\\\\\n  \\texttt{samory@princeton.edu}\\\\\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\n{\\bf Abstract}. We present a first procedure that can estimate -- with statistical consistency guarantees -- any local-maxima of a density, under benign distributional conditions. The procedure estimates all such local maxima, or \\emph{modal-sets}, of any bounded shape or dimension, including usual point-modes. In practice, modal-sets can arise as dense low-dimensional structures in noisy data, and more generally serve to better model the rich variety of locally-high-density structures in data. \n\nThe procedure is then shown to be competitive on clustering applications, and moreover is quite stable to a wide range of settings of its tuning parameter. \n\\end{abstract}\n\n\n\n\\section{Introduction}\nMode estimation is a basic problem in data analysis. \n Modes, i.e. points of locally high density, serve as a measure of central tendency and are therefore important in unsupervised problems such as outlier detection, image or audio segmentation, and clustering in particular (as cluster cores). In the present work, we are interested in capturing a wider generality of \\emph{modes}, i.e. general structures (other than single-points) of locally high density, that can arise in modern data. \n\nFor example, application data in $\\mathbb{R}^d$ (e.g. speech, vision) are often well modeled as arising from \na lower-dimensional structure $M$ + noise. In other words, such data is densest on $M$, hence \nthe ambient density $f$ is more closely modeled as locally maximal at (or near) $M$, a nontrivial subset of $\\mathbb{R}^d$, \nrather than maximal only at single points in $\\mathbb{R}^d$. 
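\n\nAs a simple concrete instance (this construction is formalized in Appendix~\\ref{appendix:pointcloud}), take $M$ to be the unit circle in $\\mathbb{R}^2$ and\n\\begin{align*}\nf(x) = \\frac{1}{Z} \\exp\\left(-d(x, M)^2\/(2\\sigma^2)\\right),\n\\end{align*}\ni.e. a density that is uniform on $M$ and has Gaussian decay away from it: $f$ is then maximal exactly on $M$, so the natural \\emph{mode} of such data is the whole circle rather than any single point.\n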
Such a situation is illustrated in Figure \\ref{fig:clustercores}. \n\nWe therefore extend the notion of \\emph{mode} to any connected \nsubset of $\\mathbb{R}^d$ where the unknown density $f$ is locally maximal; we refer to these as \\emph{modal-sets} of $f$. A modal-set can be of any bounded shape and dimension, from $0$-dimensional (point modes) to full-dimensional surfaces, and the notion aims to capture the possibly rich variety of dense structures in data. \n\nOur main contribution is a procedure, M(odal)-cores, that consistently estimates all such modal-sets from data, of general shape and dimension, with minimal assumptions on the unknown $f$. The procedure builds on recent developments in topological data analysis \n\\cite{SN10, CD10, KV11, RSNW12, balakrishnan2013cluster, CDKvL14, eldridge2015beyond}, and works by traversing certain $k$-NN graphs which encode level sets of a $k$-NN density estimate. We show that, if $f$ is continuous on compact support, the Hausdorff distance between any modal-set and its estimate vanishes as $n\\to \\infty$ (Theorem \\ref{theo:main}); the estimation rate for point-modes matches (up to $\\log n$) the known minimax rates. Furthermore, under a mild additional smoothness condition on $f$ (H\\\"older continuity), \\emph{false} structures (due to empirical variability) are correctly identified and pruned. We know of no such general statistical guarantees in mode estimation. \n\nWhile there is often a gap between theoretical procedures and practical ones, the present procedure is easy to implement and yields competitive scores on clustering applications; here, as in \\emph{mode-based clustering}, clusters are simply defined as high-density regions of the data, and the estimated modal-sets serve as the centers of these regions, i.e. as \\emph{cluster-cores}. A welcome aspect of the resulting clustering procedure is its stability to settings of the tuning parameter $k$ (from $k$-NN): it maintains high clustering scores (computed with knowledge of the ground-truth) over a wide range of settings of $k$, for various datasets. \n Such stability to tuning is of practical importance, since the ground-truth is typically unknown, and clustering procedures come with tuning parameters that are hard to set in practice. Practitioners therefore use various rules of thumb and can thus benefit from procedures that are less sensitive to their hyperparameters. \n \n\n\nIn the next section we put our result in context with respect to previous work on mode estimation and density-based clustering in general. \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure} \n\\centering \n\\includegraphics[width=0.25\\textwidth]{3d_2000_samples.jpg}\n\\includegraphics[width=0.25\\textwidth]{3d_density.jpg}\n\\includegraphics[width=0.25\\textwidth]{3d_estimated.jpg}\n\n\\caption{Main phase of M-cores. (Left) Point cloud generated as three 1-dimensional rings + noise. (Middle) The 3 rings, and (Right) their estimate (as modal-sets) by M-cores.}\n\\label{fig:clustercores}\n\\end{figure}\n\n\\subsection*{Related Work} \n$\\bullet$ Much theoretical work on mode-estimation is concerned with understanding the statistical difficulty of the problem, and as such, often only considers the case of densities with single point-modes \\cite{parzen1962estimation, chernoff1964estimation, eddy1980optimum, devroye1979recursive, tsybakov1990recursive, abraham2004asymptotic}. \nThe more practical case of densities with multiple point-modes has received less attention in the theoretical literature. 
However there exist practical estimators, e.g., the popular \\emph{Mean-Shift} procedure (which doubles as a clustering procedure), which are however harder to analyze. Recently, \\cite{arias2013estimation} shows the consistency of a variant of Mean-Shift. Other recent work of \\cite{genovese2013nonparametric} derives a method for pruning false-modes obtained by mode-seeking procedures. Also recent, the work of \\cite{dasgupta2014optimal} shows that point-modes of a $k$-NN density estimate $f_k$ approximate the true modes of the unknown density $f$, assuming $f$ only has point-modes and bounded Hessian at the modes; their procedure, therefore operates on level-sets of $f_k$ (similar to ours), but fails in the presence of more general high-density structures such as modal-sets. To handle such general structures, we have to identify more appropriate level-sets to operate on, the main technical difficulty being that local-maxima of $f_k$ can be relatively far (in Hausdorff) from those of $f$, for instance single-point modes rather than more general modal-sets, due to data-variability. \nThe present procedure handles general structures, and is consistent under the much weaker conditions of continuity (of $f$) on a compact domain.\n\nA related line of work, which seeks more general structures than point-modes, is that of \\emph{ridge} estimation (see e.g. \\citep{ozertem2011locally, genovese2014nonparametric}). A ridge is typically defined as a lower-dimensional structure away from which the density curves (in some but not all directions), and can serve to capture various lower-dimensional patterns apparent in point clouds. In contrast, the modal-sets defined here can be full-dimensional and are always local maxima of the density. Also, unlike in ridge estimation, we do not require local differentiability of the unknown $f$, nor knowledge of the dimension of the structure, thus allowing a different but rich set of practical structures. \n\n$\\bullet$ A main application of the present work, and of mode-estimation in general, is \\emph{density-based clustering}. Such clustering was formalized in early work of \\cite{carmichael1968finding, hartigan1975clustering, H81}, and can take various forms, each with their advantage. \n\nIn its hierarchical version, one is interested in estimating the connected components (CCs) of \\emph{all} level sets \n$\\braces{f \\ge \\lambda}_{\\lambda>0}$ of the unknown density $f$. Many recent works analyze approaches that consistently estimate such a hierarchy under quite general conditions, e.g. \\cite{SN10, CD10, KV11, RSNW12, balakrishnan2013cluster, CDKvL14, eldridge2015beyond}. \n\nIn the \\emph{flat} clustering version, one is interested in estimating the CCs of $\\braces{f \\ge \\lambda}$ for a single $\\lambda$, somehow appropriately chosen \n\\citep{RV09, SSN09, MHL09, RW10, S11, sriperumbudur2012consistency}. The popular DBSCAN procedure \\citep{ester1996density} can be viewed as estimating such single level set. The main disadvantage here is in the ambiguity in the choice of $\\lambda$, especially when the levels $\\lambda$ of $f$ have different numbers of clusters (CCs). \n\n\nAnother common flat clustering approach, most related to the present work, is \\emph{mode-based} clustering. 
The approach clusters points to estimated modes of $f$, a fixed target, and therefore does away with the ambiguity in choosing an appropriate level $\\lambda$ of $f$ \\citep{fukunaga1975estimation, cheng1995mean, comaniciu2002mean, li2007nonparametric, chazal2013persistence}. As previously discussed, \nthese approaches are however hard to analyze in that mode-estimation is itself not an easy problem. Popular examples are extensions of $k$-Means to categorical data \\cite{chaturvedi2001k}, and the many variants of Mean-Shift which cluster points by gradient ascent to the closest mode. \nNotably, the recent work \\cite{wasserman2014feature} analyzes clustering error of Mean-Shift in a general high-dimensional setting with potentially irrelevant features. The main assumption is that $f$ only has point-modes. \n\n\n\n\n\n\\section{Overview of Results}\n\\label{sec:Overview}\n\\input{Overview.tex}\n\n\n\\section{Analysis Overview}\n\\label{sec:analysis}\n\\input{Analysis.tex}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\input{Experiments.tex}\n\n\n{\n\n\\subsection{Basic Setup and Definitions}\nWe have samples $X_{[n]} = \\{X_1,...,X_n\\}$ drawn i.i.d. from \na distribution $\\mathcal{F}$ over $\\mathbb{R}^d$ with density $f$. We let $\\mathcal{X}$ denote the support of $f$. \nOur main aim is to estimate all local maxima of $f$, or \\emph{modal-sets} of $f$, as we will soon define. \n\nWe first require the following notions of distance between sets. \n\\begin{definition}\\label{ball} \nFor $M\\subset \\mathcal{X}$, $x\\in \\mathcal{X}$, let $d(x, M) := \\inf_{x'\\in M} \\norm{x - x'}$. \nThe {\\bf Hausdorff} distance between $A, B \\subset \\mathcal{X}$ is defined as \n$d(A,B) := \\max \\{ \\sup_{x \\in A} d(x, B), \\sup_{y \\in B} d(y, A) \\}.$\n\\end{definition}\n\nA modal set, defined below, extends the notion of a point-mode to general subsets of $\\mathcal{X}$ \nwhere $f$ is locally maximal. These can arise for instance, as discussed earlier, in applications where high-dimensional \ndata might be modeled as a (disconnected) manifold $\\mathcal{M}$ + ambient noise, each connected component of which \ninduces a modal set of $f$ in ambient space $\\mathbb{R}^D$ (see e.g. Figure \\ref{fig:clustercores}). \n\n\\begin{definition}\\label{modal-set} For any $M \\subset \\mathcal{X}$ and $r>0$, define the \\emph{envelope} $B(M, r) := \\{x : d(x, M) \\leq r\\}$. A connected set $M$ is a {\\bf modal-set} of $f$ if $\\forall x \\in M$, $f(x) = f_M$ for some fixed $f_M$, and there exist $r>0$ such that $f(x) < f_M$ for all $x \\in B(M, r) \\backslash M$. \n\\end{definition}\n\n\\begin{remark} \nThe above definition can be relaxed to \\emph{$\\epsilon_0$-modal sets}, i.e., to allow $f$ to vary by a small $\\epsilon_0$ on $M$. Our results extend easily to this more relaxed definition, with minimal changes to some constants. This is because the procedure operates on $f_k$, and therefore already needs to account for variations in $f_k$ \non $M$. This is described in Appendix~\\ref{appendix:eps-modal-set}. \n\\end{remark}\n\n\n\n\n\n\\subsection{Estimating Modal-sets}\nThe algorithm relies on nearest-neighbor density estimate $f_k$, defined as follows. \n\n\\begin{definition} \\label{kNNdensity} Let $r_k(x) := \\min \\{ r : |B(x, r) \\cap X_{[n]} | \\ge k \\}$. 
\nDefine the {\\bf $k$-NN density estimate} as\n$$f_k(x) := \\frac{k}{n\\cdot v_d\\cdot r_k(x)^d},\n\\text{where } v_d \\text{ is the volume of the unit ball in } \\mathbb{R}^d.$$\n\\end{definition}\n\nFurthermore, we need an estimate of the level-sets of $f$; various recent works on cluster-tree estimation (see e.g. \\cite{CDKvL14}) have shown that such level sets are encoded by subgraphs of certain \\emph{modified} $k$-NN graphs. \nHere, however, we directly use $k$-NN graphs, simplifying implementation details, but requiring a bit of side analysis.\n\n\\begin{definition}\n Let $G(\\lambda)$ denote the (mutual) {\\bf $k$-NN graph} with vertices $\\{ x \\in X_{[n]} : f_k(x) \\ge \\lambda\\}$ and an edge between $x$ and $x'$ iff $||x - x'|| \\le \\min \\{r_k(x), r_k(x') \\}$. \n \\end{definition}\n $G(\\lambda)$ can be viewed as approximating the $\\lambda$-level set of $f_k$, hence approximates the $\\lambda$-level set of $f$ (implicit in the connectedness result in Appendix~\\ref{appendix:integrality}). \n \nAlgorithm \\ref{alg:modalset} (M-cores) estimates the modal-sets of the unknown $f$.\nIt is based on various insights described below. \nA basic idea, used for instance in point-mode estimation \\cite{dasgupta2014optimal}, is to proceed top-down on the level sets of $f_k$ (i.e. on $G(\\lambda), \\, \\lambda \\to 0$), and identify new modal-sets as they appear in separate CCs at a level $\\lambda$. \n\nHere, however, we have to be careful: the CCs of $G(\\lambda)$ (essentially modes of $f_k$) might be \nsingleton points (since $f_k$ typically takes distinct values at distinct samples $x\\in X_{[n]}$) while the modal-sets to be estimated might be of any dimension and shape. Fortunately, if a datapoint $x$ locally maximizes $f_k$ and belongs to some modal-set $M$ of $f$, then the rest of $M\\cap X_{[n]}$ must be at a nearby level; Algorithm \\ref{alg:modalset} therefore proceeds by checking a nearby level ($\\lambda - 9\\beta_k \\lambda$) from which it picks a specific set of points as an estimate of $M$. The main parameter here is $\\beta_k$, which is worked out explicitly in terms of $k$ and requires no a priori knowledge of distributional parameters. The confidence level $\\delta$ can be viewed in practice as fixed (e.g. $\\delta = 0.05$). The essential algorithmic parameter is therefore just $k$, which, as we will show, can be chosen over a wide range (w.r.t. $n$) while ensuring statistical consistency. \n\n\\begin{definition}Let $0< \\delta < 1$. Define $C_{\\delta, n} := 16\\log(2\/\\delta)\\sqrt{d \\log n}$, and define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \\end{definition}\n\nWe note that the above definition of $\\beta_k$ is somewhat conservative (needed towards theoretical guarantees), since the exact constants $C_{\\delta, n}$ turn out to have little effect in implementation. \n\nA further algorithmic difficulty is that a level $G(\\lambda)$ might have too many CCs w.r.t. the ground truth. For example, due to variability in the data, $f_k$ might have more modal-sets than $f$, inducing too many CCs at some level $G(\\lambda)$. Fortunately, it can be shown that the nearby level $\\lambda - 9\\beta_k \\lambda$ will likely have the right number of CCs. Such lookups down to a lower level act as a way of \\emph{pruning false modal-sets}, and trace back to earlier work \\citep{KV11} on pruning cluster-trees. 
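\n\nAs a rough illustration of the two definitions above (not the released implementation), $f_k$ and the mutual $k$-NN graph at a level $\\lambda$ can be computed as follows, where \\texttt{X} is the $n \\times d$ sample array:\n\\begin{verbatim}\n# k-NN density estimate f_k and mutual k-NN graph G(lambda);\n# illustrative sketch using a kd-tree for neighbor queries.\nimport numpy as np\nfrom scipy.spatial import cKDTree\nfrom scipy.special import gamma\n\ndef knn_density(X, k):\n    n, d = X.shape\n    v_d = np.pi ** (d \/ 2) \/ gamma(d \/ 2 + 1)  # unit-ball volume\n    # distance to the k-th nearest sample (self included)\n    r_k = cKDTree(X).query(X, k=k)[0][:, -1]\n    return k \/ (n * v_d * r_k ** d), r_k\n\ndef graph_at_level(X, f_k, r_k, lam):\n    # vertices: f_k(x) >= lam; edge iff\n    # ||x - x'|| <= min(r_k(x), r_k(x'))\n    V = np.where(f_k >= lam)[0]\n    edges = [(i, j) for a, i in enumerate(V) for j in V[a + 1:]\n             if np.linalg.norm(X[i] - X[j]) <= min(r_k[i], r_k[j])]\n    return V, edges\n\\end{verbatim}\nWe now return to the pruning lookups described above.\n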
Here, we need further care: \nwe run the risk of over-estimating a given $M$ if we look too far down (aggressive pruning), since \na CC at a lower level might contain points \\emph{far outside} of a modal-set $M$. \nTherefore, the main difficulty here is figuring out how far down to look and yet not over-estimate \\emph{any} $M$ (to ensure consistency). In particular, our lookup \\emph{distance} of $9\\beta_k \\lambda$ is adapted to the level $\\lambda$, unlike in aggressive pruning. \n\nFinally, for clustering with M-cores, we can simply assign every data-point to the closest estimated modal-set (acting as cluster-cores).\n\n\\begin{algorithm}[tb]\n \\caption{M-cores (estimating modal-sets).}\n \\label{alg:modalset}\n\\begin{algorithmic}\n \\STATE Initialize $\\widehat{\\mathcal{M}}:= \\emptyset$. Define $\\beta_k = 4\\frac{C_{\\delta, n}}{\\sqrt{k}}$. \n \\STATE Sort the $X_i$'s in decreasing order of $f_k$ values (i.e. $f_k(X_i) \\geq f_k(X_{i+1})$). \n \\FOR{$i=1$ {\\bfseries to} $n$}\n \\STATE Define $\\lambda := f_k(X_i)$.\n \\STATE Let $A$ be the CC of $G(\\lambda - 9\\beta_k \\lambda)$ that contains $X_i$. \\hfill (\\rm{i})\n \\IF{$A$ is disjoint from all cluster-cores in $\\widehat{\\mathcal{M}}$}\n \\STATE Add $\\widehat{M} := \\{ x \\in A : f_k(x) > \\lambda - \\beta_k \\lambda \\}$ to\n $\\widehat{\\mathcal{M}}$. \n \\ENDIF\n \\ENDFOR\n \\STATE \\textbf{return} $\\widehat{\\mathcal{M}}$. \\textit{ \/\/ Each $\\widehat M\\in \\widehat{\\mathcal{M}}$ is a cluster-core estimating \n a modal-set of the unknown $f$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\subsection{Consistency Results}\nOur consistency results rely on the following mild assumptions. \n\\begin{assumption} $f$ is continuous with compact support $\\mathcal{X}$. Furthermore, $f$ has a finite number of modal-sets, \nall in the interior of its support $\\mathcal{X}$. \n\\label{assumption-main}\n\\end{assumption}\n\n\nWe will express the convergence of the procedure explicitly in terms of quantities that characterize the behavior of $f$ at the boundary of every modal set. The first quantity has to do with how \\emph{salient} a modal-set is, i.e. whether it is sufficiently \\emph{separated} from other modal sets. We start with the following definition of \\emph{separation}. \n\n\\begin{definition}\\label{rsalient} \nTwo sets $A, A' \\subset \\mathcal{X}$ are {\\bf $r$-separated} if there exists a set $S$ such that every path from $A$\n to $A'$ crosses $S$ and $\\sup_{x \\in B(S, r)} f(x) < \\inf_{x \\in A\\cup A'} f(x)$.\n\\end{definition}\nThe next quantities characterize the \\emph{change} in $f$ in a neighborhood of a modal set $M$. \nThe existence of a proper such neighborhood $A_M$, and appropriate functions $u_M$ and $l_M$ capturing smoothness and curvature, follow from the above assumptions on $f$. This is captured in the proposition below. \n\n\\begin{proposition}\n\\label{prop:main-assumptions}\nLet $M$ be a modal-set of $f$. Then there exists a CC $A_M$ of some level-set $\\mathcal{X}^{\\lambda_M} := \\{x : f(x) \\ge \\lambda_M\\}$, containing $M$, such that the following holds. \n\\begin{itemize} \n\\item \\emph{$A_M$ isolates $M$ by a valley}: $A_M$ does not intersect any other modal-set; and $A_M$ and $\\mathcal{X}^{\\lambda_M} \\backslash A_M$ are $r_s$-separated (by some $S_M$) for some $r_s>0$ independent of $M$. \n\\item \\emph{$A_M$ is full-dimensional}: $A_M$ contains an envelope $B(M, r_M)$ of $M$, for some $r_M>0$. 
\n\\item \\emph{$f$ is both \\emph{smooth} and has \\emph{curvature} around $M$}: there exist functions $u_M$ and $l_M$, increasing and continuous on $[0, r_M]$, $u_M(0) = l_M(0) = 0$, such that $\\forall x \\in B(M, r_M)$, \n\\begin{align*}\nl_M(d(x, M)) \\le f_M - f(x) \\le u_M(d(x, M)). \n\\end{align*}\n\n\\end{itemize}\n\\end{proposition}\n\n\n\nFinally, our consistency guarantees require the following admissibility condition on $k = k(n)$. \nThis condition results, roughly, from needing the density estimate $f_k$ to properly approximate the behavior \nof $f$ in the neighborhood of a modal-set $M$. In particular, we intuitively need $f_k$ values to be smaller for \npoints far from $M$ than for points close to $M$, and this should depend on the smoothness and curvature of $f$ \naround $M$ (as captured by $u_M$ and $l_M$). \n\n\\begin{definition} $k$ is {\\bf admissible} for a modal-set $M$ if (we let $u_M^{-1}$ denote the inverse of $u_M$):\n$$\\max \\left\\{ \\left(\\frac{24 \\sup_{x \\in \\mathcal{X}} f(x)}{l_M(\\min\\{r_M, r_s\\}\/2)} \\right)^2, 2^{7 + d} \\right\\}\\cdot C_{\\delta, n}^2 \\le k \\le \\frac{v_d \\cdot f_M}{2^{2+2d}} \\left(u_M^{-1} \\left ( \nf_M\\frac{C_{\\delta, n}}{2\\sqrt{k}}\\right) \\right)^d \\cdot n.$$\n\\end{definition}\n\n\\begin{remark}\\label{kadmissible} The admissibility condition on $k$, although seemingly opaque, allows for a wide range of settings of $k$. For example, suppose $u_M(t) = c t^{\\alpha}$ for some $c, \\alpha > 0$. These are polynomial tail conditions common in mode estimation, following e.g. from H\\\"older assumptions on $f$. \nAdmissibility then (ignoring $\\log (1\/\\delta)$ factors) is immediately seen to correspond to the wide range\n$$C_1\\cdot \\log n \\leq k \\leq C_2\\cdot n^{2\\alpha\/(2\\alpha + d)},$$ where $C_1, C_2$ are constants depending on $M$, but independent of $k$ and $n$. It's clear then that even the simple choice $k = \\Theta(\\log^2 n)$ is always admissible \\emph{for any} $M$ for $n$ sufficiently large.\n\\end{remark}\n\n\n{\\bf Main theorems.} We then have the following two main consistency results for Algorithm \\ref{alg:modalset}. Theorem \\ref{theo:main} states a rate (in terms of $l_M$ and $u_M$) at which \nany modal-set $M$ is approximated by some estimate in $\\widehat{\\mathcal{M}}$; Theorem \\ref{pruning} establishes \\emph{pruning} guarantees. \n\\begin{theorem} \n\\label{theo:main}\nLet $0< \\delta < 1$. The following holds with probability at least $1- 6\\delta$, simultaneously for all modal-sets $M$ of $f$. Suppose $k$ is admissible for $M$. Then there exists $\\widehat{M} \\in \\widehat{\\mathcal{M}}$ such that the following holds. Let $l_M^{-1}$ denote the inverse of $l_M$. \n \\begin{align*}\n  d(M, \\widehat{M}) \\le l_M^{-1}\\left(\\frac{8C_{\\delta,n }}{\\sqrt{k}}f_M\\right), \n  \\text{ which goes to } 0 \\text{ as } C_{\\delta, n}\/\\sqrt{k} \\rightarrow 0.\n \\end{align*}\n\\end{theorem}\nIf $k$ is admissible for all modal-sets $M$ of $f$, then $\\widehat{\\mathcal{M}}$ estimates all modal-sets of $f$ \nat the above rates. These rates can be instantiated under the settings in Remark~\\ref{kadmissible}: \nsuppose $l_M(t) = c_1 t^{\\alpha_1}$, $u_M(t) = c t^\\alpha$, $\\alpha_1 \\geq \\alpha$; then \nthe above bound becomes $d(M, \\widehat{M}) \\lesssim k^{-1\/(2\\alpha_1)}$ for admissible $k$. As in the remark, $k = \\Theta (\\log^2 n)$ is admissible, simultaneously for all $M$ (for $n$ sufficiently large), and therefore all modal-sets of $f$ are recovered at the above rate. 
\nIn particular, taking large $k = O(n^{2\\alpha\/(2\\alpha + d)})$ optimizes the rate to $O(n^{-\\alpha\/(2\\alpha_1\\alpha + \\alpha_1 d)})$. Note that for $\\alpha_1 = \\alpha = 2$, the resulting rate ($n^{-1\/(4+d)}$) is tight (see e.g. \\cite{tsybakov1990recursive} for matching lower-bounds in the case of point-modes $M = \\braces{x}$). \n\nFinally, Theorem~\\ref{pruning} (pruning guarantees) states that any estimated modal-set in $\\widehat{\\mathcal{M}}$, at a sufficiently high level (w.r.t. $k$), corresponds to a \\emph{true} modal-set of $f$ at a similar level. Its proof consists of showing that if two sets of points are wrongly disconnected at level $\\lambda$, they remain connected at the nearby level $\\lambda - 9\\beta_k \\lambda$ (so are reconnected by the procedure). The main technicality is the dependence of the nearby level on the empirical $\\lambda$; the proof is less involved and given in Appendix~\\ref{appendix:pruning}. \n\n\\begin{theorem} \\label{pruning}\nLet $0< \\delta < 1$. There exists $\\lambda_0 = \\lambda_0(n, k)$ such that the following holds with probability at least $1 - \\delta$. All modal-set estimates in $\\widehat{\\mathcal{M}}$ chosen at level $\\lambda \\ge \\lambda_0$ can be injectively mapped to modal-sets $\\braces{M: \\lambda_M \\geq \\min_{\\{x\\in \\mathcal{X}_{[n]} : f_k(x) \\ge \\lambda - \\beta_k \\lambda \\}} f(x)}$, provided $k$ is admissible for all such $M$. \n\nIn particular, if $f$ is H\\\"older-continuous (i.e. $|f(x) - f(x')| \\le c||x - x'||^\\alpha $ for some $0 < \\alpha\\le 1$, $c > 0$), \nthen $\\lambda_0 \\xrightarrow{n \\to \\infty} 0$, provided \n$C_1 \\log n \\le k \\le C_2 n^{2\\alpha \/ (2\\alpha + d)}$, for some $C_1, C_2$ independent of $n$. \n\n\n\\end{theorem}\n\\begin{remark}\nThus with little additional smoothness ($\\alpha \\approx 0$) over uniform continuity of $f$, any estimate above level $\\lambda_0 \\to 0$ corresponds to a true modal-set of $f$. We note that these pruning guarantees can be strengthened as needed by implementing a more aggressive pruning: simply replace $G(\\lambda - 9\\beta_k \\lambda)$ in the procedure (on line (\\rm{i})) with $G(\\lambda - 9\\beta_k \\lambda - \\tilde\\epsilon)$ using a \\emph{pruning parameter} $\\tilde \\epsilon \\ge 0$. This allows $\\lambda_0 \\rightarrow 0$ faster. However, the rates of Theorem \\ref{theo:main} (while maintained) then require a larger initial sample size $n$. \nThis is discussed in Appendix~\\ref{appendix:pruning}. \n\\end{remark}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the framework of research and development of novel acceleration schemes and technology, the upgrade of the SPARC\\_LAB test facility \\cite{ferrario2013sparc_lab} at INFN-LNF is foreseen, based on a high gradient linac. High brightness electron bunches are fundamental for the successful development of plasma-based accelerators, for instance where external injection schemes are considered, i.e. particle beam driven and laser driven plasma wakefield accelerators (PWFA and LWFA, respectively). Indeed, the ultimate beam brightness and its stability and reproducibility are strongly influenced by the RF-generated electron beam.\n\nIn this scenario the SPARC\\_LAB upgrade, named EuPRAXIA@SPARC\\_LAB \\cite{ferrario_eaac2017}, might be one of the possible candidates to host EuPRAXIA (European Plasma Research Accelerator with eXcellence In Applications) \\cite{Walker:IPAC2017-TUOBB3}. 
EuPRAXIA is a design study in the framework of Horizon 2020 (INFRADEV-1-2014), funded to bring together for the first time novel acceleration schemes, based for instance on plasmas, modern lasers, the latest correction\/feedback technologies and large-scale user areas. Such a research infrastructure would achieve the required quantum leap in accelerator technology towards more compact and more cost-effective accelerators, opening new horizons for applications and research.\n\nThe preliminary EuPRAXIA@SPARC\\_LAB linac layout is based on an S-band Gun, three S-band TW structures and an X-band booster with a bunch compressor \\cite{ferrario_eaac2017}. The booster design has been driven by the need of a high accelerating gradient required to achieve a high facility compactness, which is one of the main goals of the EuPRAXIA project. The baseline technology chosen for the EuPRAXIA@SPARC\\_LAB booster is X-band. The total space allocated for the linac accelerating sections is $\\approx$25 m, corresponding to an active length of $\\approx$16 m taking into account the space required to accommodate beam diagnostics, magnetic elements, vacuum equipment and flanges. Two average accelerating gradient options are foreseen for the X-band linac: high gradient (HG) of 57 MV\/m and very high gradient (VHG) of 80 MV\/m, corresponding to double the power of the HG case. The RF linac layout is based on klystrons with SLEDs \\cite{farkas} that feed several TW accelerating structures. The operating mode is the $2\\pi\/3$ mode at 11.9942 GHz. The preliminary RF system parameters are summarized in Table \\ref{tab:RF_parameters} \\cite{cpi_website}.\n\n\\begin{table}[hbt]\n\\center{\n\\caption{RF system parameters.}\n \\label{tab:RF_parameters}\n\\begin{tabular}{lcr}\n\\midrule\nFrequency & 11.9942 GHz\\\\\nPeak RF power & \\SI {50}{\\mega W}\\\\\nRF pulse length $t_k$ & \\SI {1.5}{\\micro s}\\\\\nUnloaded Q-factor $Q_0$ of SLED & 180000\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\nIn this paper we illustrate the preliminary RF design of the X-band booster. The single cell parameters have been calculated by electromagnetic (e.m.) simulations. On the basis of these results, the accelerating structure length and geometry have been optimized by numerical studies. Finally, the basic RF power distribution layout has been designed. \n\n\\section{Single cell study}\\label{sec:single_cell}\n\nThe main single cell parameters (shunt impedance per unit length $R$, normalized group velocity $v_g\/c$, Q-factor $Q$, peak value of modified Poynting vector \\cite{poynting,poynting4} normalized to the average accelerating field $S_{c\\;max}\/E_{acc}^2$) as a function of the iris radius $a$ have been calculated with ANSYS Electronics Desktop \\cite{hfss_website}. The results are reported in Fig. \\ref{fig:cell_parameters}.\n\n\n\nAccording to beam dynamics calculations and single bunch beam break up limits, an average iris radius $\\langle a \\rangle$=3.2 mm has been taken into account (the corresponding parameters are given in Tab. 
\\ref{tab:a_3.2}) \\cite{vaccarezza_eaac2017}.\n\n\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{cell_parameters.pdf}\n\\caption{Single cell parameters as a function of the iris radius.}\n\\label{fig:cell_parameters}\n\\end{figure}\n\n\\begin{table}[hbt]\n\\center{\n\\caption{Single cell parameters for an iris radius of 3.2 mm.}\n \\label{tab:a_3.2}\n\\begin{tabular}{lcr}\n\\midrule\niris radius $a$ [mm] & 3.2\\\\\niris thickness $t$ [mm]& 2.5\\\\\ncell radius $b$ [mm] & 10.139\\\\\ncell length $d$ [mm]& 8.332\\\\\n$R$ [M$\\Omega$\/m] & 93\\\\\n$v_g\/c$ [$\\%$] & 1.382\\\\\n$Q$ & 6396\\\\\n$S_{c\\;max}\/E_{acc}^2$ [A\/V] & $3.9 \\cdot 10^{-4}$\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\section{Analytical optimization of structure effective shunt impedance}\n\nThe accelerating gradient distribution along the structure after one filling time $t_f$ is given by the formula \\cite{anal_grudiev}:\n\\begin{linenomath*}\n\\begin{equation}\\label{eq:G}\nG(z,t_f) = G_0[t_f-\\tau(z)] g(z),\n\\end{equation}\n\\end{linenomath*}\nwhere $z$ is the longitudinal position, $\\tau(z) = \\int_{0}^{z} \\frac{dz^\\prime}{v_g(z^\\prime)}$ is the signal time delay and $g(z)$ is defined as:\n\\begin{linenomath*}\n\\begin{equation}\ng(z)=\\sqrt{\\frac{v_g(0)}{v_g(z)}} \\sqrt{\\frac{R(z)Q(0)}{R(0)Q(z)}} e^{-\\frac{1}{2}\\int_{0}^{z} \\frac{\\omega}{v_g(z^\\prime)Q(z^\\prime)} dz^\\prime}.\n\\end{equation}\n\\end{linenomath*}\n$G_0$ is the gradient at the beginning of the structure given by:\n\\begin{linenomath*}\n\\begin{equation}\nG_0(t)=\\sqrt{\\frac{\\omega R(0) P_0(t)}{v_g(0) Q(0)}}.\n\\end{equation}\n\\end{linenomath*}\n$P_0(t)$ is the input RF power and, due to the SLED, is given by:\n\\begin{linenomath*}\n\\begin{equation}\nP_0(t)=P_k \\cdot k_{SLED}^2(t),\n\\end{equation}\n\\end{linenomath*}\nwhere $P_k$ is the power from the klystron and $k_{SLED}(t)$ is the SLED electric field gain factor \\cite{farkas}.\n\nIntegrating Eq. \\eqref{eq:G} along the structure length $L_s$, we obtain the accelerating voltage $V_a$.\n\nThe efficiency of the structure is given by the effective shunt impedance per unit length $R_s$ defined as \\cite{neal}:\n\\begin{linenomath*}\n\\begin{equation}\\label{eq:R_s}\nR_s = \\frac{V_a^2}{P_k L_s} = \\frac{V_a \\langle G \\rangle}{P_k} = \\frac{\\langle G \\rangle^2 L_s}{P_k},\n\\end{equation}\n\\end{linenomath*}\n where $\\langle G \\rangle = V_a\/L_s$ is the average accelerating gradient. \n\n\n\nFor a constant impedance (CI) structure $R_s$, as a function of the section attenuation $\\tau_s$ ($= \\frac{\\omega}{2 v_g Q}L_s$), is given by \\cite{leduff}:\n\\begin{linenomath*}\n\\begin{align}\\label{eq:Rs_CI}\nR_s =\\;&2 \\tau_s R \\left\\{ \\frac{1 - \\frac{2 Q_l}{Q_e}}{\\tau_s} \\left( 1 - e^{-\\tau_s} \\right) + \\right. \\nonumber \\\\ &\\left. 
+ \\frac{\\frac{2 Q_l}{Q_e} \\left[ 2 - e^{- \\left( \\frac{\\omega t_k}{2 Q_l} - \\tau_s \\frac{Q}{Q_l} \\right)} \\right]}{\\tau_s \\left( 1 - \\frac{Q_l}{Q_e} \\right)} \\left( e^{-\\tau_s \\frac{Q_l}{Q_e}} - e^{-\\tau_s} \\right) \\right\\}^2,\n\\end{align}\n\\end{linenomath*}\nwhere $Q_l = \\frac{Q_0 Q_e}{Q_0 + Q_e}$ is the loaded Q-factor of SLED (being $Q_e$ the external quality factor).\nFor a constant gradient (CG) structure $R_s$ is given by \\cite{farkas}:\n\\begin{linenomath*}\n\\begin{align}\\label{eq:Rs_CG}\nR_s =\\;&R \\frac{2 \\tau_s}{1+\\tau_s} \\left\\{ 1 - \\frac{2 Q_l}{Q_e} + \\frac{2 Q_l}{Q_e} \\left[ 2 - e^{-\\frac{\\omega t_k}{2 Q_l}} \\left( \\frac{1 + \\tau_s}{1 - \\tau_s} \\right)^{\\frac{Q}{2 Q_l}} \\right] \\cdot \\right. \\nonumber \\\\ &\\left. \\cdot \\frac{1 - \\tau_s}{2 \\tau_s} \\frac{1}{1-Q\/2Q_l} \\left[ \\left( \\frac{1 + \\tau_s}{1 - \\tau_s} \\right)^{1 - \\frac{Q}{2 Q_l}} - 1 \\right] \\right\\}^2.\n\\end{align}\n\\end{linenomath*}\nFigure \\ref{fig:Rs_CI_CG} shows $R_s$ as a function of $\\tau_s$ for both structures, with the parameters of Tabs. \\ref{tab:RF_parameters} and \\ref{tab:a_3.2}. In both cases the value of the external Q-factor $Q_e$ of the SLED has been chosen in order to maximize $R_s$. \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[scale=1]{Rs_CI_CG.pdf}\n\\caption{Effective shunt impedance per unit length for the CI and CG structure.}\n\\label{fig:Rs_CI_CG}\n\\end{figure}\n\nThe accelerating gradient for a CI structure is given by \\cite{leduff}:\n\\begin{linenomath*}\n\\begin{align}\nG(z,t_f) = &\\sqrt{\\frac{\\omega}{v_g} \\frac{R}{Q} P_k}\\;e^{-\\tau_s\\frac{z}{L_s}} \\cdot \\nonumber \\\\ &\\cdot \\left\\{ 1 + \\frac{2 Q_l}{Q_e} \\left[ e^{- \\frac{\\omega (L_s - z)}{2 v_g Q_l}} \\left( 2 - e^{- \\frac{\\omega \\left(t_k - t_f\\right)}{2 Q_l}} \\right) - 1 \\right] \\right\\},\n\\end{align}\n\\end{linenomath*}\nwhile for a CG structure is given by \\cite{farkas}:\n\\begin{linenomath*}\n\\begin{align}\nG(z,t_f) = &\\sqrt{\\frac{2 \\tau_s}{1 + \\tau_s} \\frac{R}{L_s} P_k} \\left\\{ 1 + \\frac{2 Q_l}{Q_e} \\cdot \\right. \\nonumber \n\\\\ &\\left. \\cdot \\left[ \\left( \\frac{1 + \\tau_s \\left(1 - \\frac{2z}{L_s}\\right)}{1 - \\tau_s} \\right)^{- \\frac{Q}{2 Q_l}} \\left( 2 - e^{-\\frac{\\omega \\left(t_k - t_f\\right)}{2 Q_l}} \\right) - 1 \\right] \\right\\}.\n\\end{align}\n\\end{linenomath*}\nThe previous formulas \\eqref{eq:Rs_CI},\\eqref{eq:Rs_CG} allow calculating the optimum $\\tau_s$ (=$\\tau_{s0}$) that maximize $R_s$. This fixes also the filling time of the structure, i.e. 
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{G_CI_CG.pdf}
\caption{Normalized gradient distribution along the structure for the CI and CG structure.}
\label{fig:G_CI_CG}
\end{figure}

The final optimized structure parameters are summarized in Table \ref{tab:CI_vs_CG}.

\begin{table}[hbt]
\center{
\caption{CI and CG structure parameters (analytical study).}
 \label{tab:CI_vs_CG}
\begin{tabular}{lcr}
\toprule
\textbf{Parameter} & \textbf{CI} & \textbf{CG}\\
\midrule
$R_s$ [M$\Omega$/m] & 343 & 344\\
Optimal structure length $L_s$ [m] & 0.474 & 0.432\\
Filling time $t_f$ [ns] & 114 & 118\\
External Q-factor $Q_e$ of SLED & 20030 & 21170\\
\bottomrule
\end{tabular}
}
\end{table}

\section{Numerical optimization}

In the analytical study, the CG solution is only approximate because of the assumption of a constant $R/Q$ \cite{lapostolle} along the structure. In the CI case, on the other hand, it is easy to verify that for VHG operation the maximum value of $S_{c\;max}$ compatible with a breakdown rate (BDR) lower than $10^{-6}$ bpp/m \cite{poynting,poynting4} is exceeded. For these reasons we also performed a numerical study.

For this purpose we have considered a linear tapering of the irises, sketched in Figure \ref{fig:tapering} and defined by the modulation angle $\theta$. We have then calculated (by Eq. \eqref{eq:G}) the gradient profile along the structure for different $\theta$ and $L_s$. In the calculation we have used the polynomial fits of the single cell parameters illustrated in section \ref{sec:single_cell}. From the gradient profiles we have finally calculated the effective shunt impedance per unit length and the peak value of the modified Poynting vector (a schematic version of this scan is sketched at the end of this section).

\begin{figure}[htb]
\centering
\includegraphics*[width=180pt]{linear_tapering_2.png}
\caption{Sketch of the linear iris tapering.}
\label{fig:tapering}
\end{figure}

Figure \ref{fig:BFL_G} shows, as an example, the normalized gradient profile as a function of $\theta$ for $L_s$ equal to 0.5 m. $R_s$ and $S_{c\;max}$ (VHG case), as functions of $\theta$ and for different $L_s$, are given in Figures \ref{fig:BFL_Rs} and \ref{fig:BFL_Sc}. In Figure \ref{fig:BFL_Sc} we have also reported the maximum value of $S_{c\;max}$ that, according to the scaling law given in \cite{poynting}, allows keeping the BDR below $10^{-6}$ bpp/m. The plot shows that the 0.4 m and 0.5 m long structures have the same efficiency, while the 0.667 m case is worse. Concerning $S_{c\;max}$, the 0.4 m solution is better but requires a larger number of structures per unit length.
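The following Python sketch reproduces the scan schematically. The tapering is parameterized so that $a$ varies linearly around the mean radius of 3.2 mm (with $\theta = \SI{0.1}{\degree}$ and $L_s = 0.5$ m this reproduces the first-last radii of Tab. \ref{tab:final_parameters}); the fit functions are crude placeholders for the actual polynomial fits of the data in Fig. \ref{fig:cell_parameters}, and the SLED is neglected (flat input pulse), so the numbers are illustrative only:
\begin{verbatim}
import numpy as np

# Sketch of the numerical optimization loop over the modulation angle.
c, omega = 299792458.0, 2 * np.pi * 11.9942e9
Pk, Ls   = 40e6, 0.5                   # input power [W], length [m]

# Placeholder fits of the single-cell parameters vs iris radius a [m]
vg = lambda a: 1.4e-2 * c * (a / 3.2e-3)**3
R  = lambda a: 93e6 * (3.2e-3 / a)**0.5
Qf = lambda a: 6396.0 * np.ones_like(a)

def rs_of_theta(theta_deg, n=1001):
    z = np.linspace(0.0, Ls, n)
    a = 3.2e-3 + (Ls / 2 - z) * np.tan(np.radians(theta_deg))
    v, r, q = vg(a), R(a), Qf(a)
    att = np.concatenate(([0.0],
                          np.cumsum(np.diff(z) * (omega / (v * q))[:-1])))
    g  = np.sqrt(v[0] / v) * np.sqrt(r * q[0] / (r[0] * q)) \
         * np.exp(-att / 2)
    G0 = np.sqrt(omega * r[0] * Pk / (v[0] * q[0]))   # flat input pulse
    G  = G0 * g                                       # Eq. (1) after tf
    Va = np.sum(0.5 * (G[1:] + G[:-1]) * np.diff(z))
    return Va**2 / (Pk * Ls)                          # Eq. (5)

for th in (0.0, 0.05, 0.1, 0.15):
    print(f"theta = {th:.2f} deg -> Rs = {rs_of_theta(th)/1e6:.0f} MOhm/m")
\end{verbatim}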
In conclusion, the 0.5 m case with $\theta$=\SI{0.1}{\degree} has been chosen as the design baseline for the X-band linac.

\begin{figure}[htb]
\centering
\includegraphics[scale=1]{BFL_G_theta_60cells_fl.pdf}
\caption{Normalized gradient after one filling time as a function of the modulation angle (0.5 m case).}
\label{fig:BFL_G}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics[scale=1]{BFL_Rs_48_60_80cells.pdf}
\caption{Effective shunt impedance per unit length as a function of the modulation angle for three structure lengths.}
\label{fig:BFL_Rs}
\end{figure}

\begin{figure}[htb]
\centering
\includegraphics*[scale=1]{BFL_Sc_max_48_60_80cells_80MVm.pdf}
\caption{Peak value of the modified Poynting vector as a function of the modulation angle for three structure lengths (VHG case).}
\label{fig:BFL_Sc}
\end{figure}


\section{Linac basic layout}

According to the results of the numerical study, the TW X-band accelerating sections optimized for the EuPRAXIA@SPARC\_LAB application are 0.5 m long and have an effective shunt impedance per unit length of \SI{346}{\mega\ohm/m}. Commercially available X-band klystrons \cite{cpi_website} provide up to 50 MW peak power with \SI{1.5}{\micro s} long RF pulses. We have estimated the RF losses in the waveguide distribution system at $\approx$1 dB and, as a consequence, $\approx$40 MW of available input power per klystron. The basic RF module of the EuPRAXIA@SPARC\_LAB X-band linac can be conveniently composed of a group of 8 TW sections assembled on a single girder and powered by one (for HG) or two (for VHG) klystrons, by means of one pulse compressor system and a waveguide network splitting and transporting the RF power to the input couplers of the sections. The sketch of the basic module is given in Fig. \ref{fig:RF_layout}, while the final main linac parameters are shown in Tab. \ref{tab:final_parameters}; a simple consistency check of the power and energy budget is sketched below.

\begin{figure}[htb]
\centering
\includegraphics[width=180pt]{RF_layout.png}
\caption{RF power distribution layout of a single module for the HG and VHG cases.}
\label{fig:RF_layout}
\end{figure}

\begin{table}[hbt]
\center{
\caption{X-band linac parameters.}
 \label{tab:final_parameters}
\begin{tabular}{lcr}
\toprule
Frequency of operation [GHz] & \multicolumn{2}{c}{11.9942}\\
RF pulse length $t_f$ [ns] & \multicolumn{2}{c}{129}\\
Unloaded Q-factor $Q_0$ of SLED & \multicolumn{2}{c}{180000}\\
External Q-factor $Q_e$ of SLED & \multicolumn{2}{c}{21800}\\
$a$ first-last cell [mm] & \multicolumn{2}{c}{$3.636 - 2.764$}\\
Structure length $L_s$ [m] & \multicolumn{2}{c}{0.5}\\
Active length $L_t$ [m] & \multicolumn{2}{c}{16}\\
No. of structures $N_s$ & \multicolumn{2}{c}{32}\\
$v_g/c$ first-last cell [\%] & \multicolumn{2}{c}{$2.23 - 0.77$}\\
$R_s$ [M$\Omega$/m] & \multicolumn{2}{c}{346}\\
\midrule
& HG & VHG\\
Average gradient $\langle G \rangle$ [MV/m] & 57 & 80\\
Energy gain $W_{gain}$ [MeV] & 912 & 1280\\
Total required RF power $P_{RF}$ [MW] & 150 & 296\\
No. of klystrons $N_k$ & 4 & 8\\
\bottomrule
\end{tabular}
}
\end{table}
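A minimal Python check of the quoted power and energy budget, assuming the $\approx$1 dB waveguide loss estimated above and the values of Tab. \ref{tab:final_parameters}:
\begin{verbatim}
# Sketch: consistency check of the power and energy budget (Tab. 4).
klystron_power = 50e6                 # peak power per klystron [W]
loss_dB        = 1.0                  # estimated waveguide losses
P_in = klystron_power * 10**(-loss_dB / 10)
print(f"available input power ~ {P_in / 1e6:.0f} MW")     # ~40 MW

L_t = 32 * 0.5                        # 32 structures x 0.5 m = 16 m
for name, G in (("HG", 57e6), ("VHG", 80e6)):
    # energy gain W_gain = <G> * L_t -> 912 MeV (HG), 1280 MeV (VHG)
    print(f"{name}: W_gain = {G * L_t / 1e6:.0f} MeV")
\end{verbatim}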
\section{Conclusions}
In this paper we have illustrated the preliminary RF design of the EuPRAXIA@SPARC\_LAB X-band linac. This has been done by performing e.m. simulations of the single cell and by analytical and numerical optimization of the structure efficiency, taking into account the available space, the minimization of the local field quantity (modified Poynting vector) and the commercially available klystrons. The final linac layout and main parameters have also been presented.

\section*{Acknowledgments}

This work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 653782.


\bibliographystyle{elsarticle-num}