\\section{Introduction}\n\nThe development of sensor, peer-to-peer, and ad hoc wireless networks\nhas stimulated interest in distributed algorithms for data\naggregation, in which nodes in a network compute a function of local\nvalues at the individual nodes. These networks typically do not have\ncentralized agents that organize the computation and communication\namong the nodes. Furthermore, the nodes in such a network may not\nknow the complete topology of the network, and the topology may change\nover time as nodes are added and other nodes fail. In light of the\npreceding considerations, distributed computation is of vital\nimportance in these modern networks.\n\nWe consider the problem of computing separable functions in a\ndistributed fashion in this paper. A separable function can be\nexpressed as the sum of the values of individual functions. Given a\nnetwork in which each node has a number, we seek a distributed\nprotocol for computing the value of a separable function of the\nnumbers at the nodes. Each node has its own estimate of the value of\nthe function, which evolves as the protocol proceeds. Our goal is to\nminimize the amount of time required for all of these estimates to be\nclose to the actual function value.\n\nIn this work, we are interested in {\\em totally distributed}\ncomputations, in which nodes have a local view of the state of the\nnetwork. Specifically, an individual node does not have information\nabout nodes in the network other than its neighbors. To accurately\nestimate the value of a separable function that depends on the numbers\nat all of the nodes, each node must obtain information about the other\nnodes in the network. This is accomplished through communication\nbetween neighbors in the network. 
Over the course of the protocol,\nthe global state of the network effectively diffuses to each\nindividual node via local communication among neighbors.\n\nMore concretely, we assume that each node in the network knows only\nits neighbors in the network topology, and can contact any neighbor to\ninitiate a communication. On the other hand, we assume that the nodes\ndo not have unique identities (i.e., a node has no unique identifier\nthat can be attached to its messages to identify the source of the\nmessages). This constraint is natural in ad-hoc and mobile networks,\nwhere there is a lack of infrastructure (such as IP addresses or\nstatic GPS locations), and it limits the ability of a distributed\nalgorithm to recreate the topology of the network at each node. In\nthis sense, the constraint also provides a formal way to distinguish\ndistributed algorithms that are truly local from algorithms that\noperate by gathering enormous amounts of global information at all the\nnodes.\n\nThe absence of identifiers for nodes makes it difficult, without\nglobal coordination, to simply transmit every node's value throughout\nthe network so that each node can identify the values at all the\nnodes. As such, we develop an algorithm for computing separable\nfunctions that relies on an {\\em order- and duplicate-insensitive}\nstatistic \\cite{ngsa} of a set of numbers, the minimum. The algorithm\nis based on properties of exponential random variables, and reduces\nthe problem of computing the value of a separable function to the\nproblem of determining the minimum of a collection of numbers, one for\neach node.\n\nThis reduction leads us to study the problem of\n{\\em information spreading} or {\\em information dissemination} in a\nnetwork. 
In this problem, each node starts with a message, and the\nnodes must spread the messages throughout the network using local\ncommunication so that every node eventually has every message.\nBecause the minimum of a collection of numbers is not affected by the\norder in which the numbers appear, nor by the presence of duplicates\nof an individual number, the minimum computation required by our\nalgorithm for computing separable functions can be performed by any\ninformation spreading algorithm. Our analysis of the algorithm for\ncomputing separable functions establishes an upper bound on its\nrunning time in terms of the running time of the information spreading\nalgorithm it uses as a subroutine.\n\nIn view of our goal of distributed computation, we analyze a\n{\\em gossip} algorithm for information spreading. Gossip algorithms\nare a useful tool for achieving fault-tolerant and scalable\ndistributed computations in large networks. In a gossip algorithm,\neach node repeatedly initiates communication with a small number of\nneighbors in the network, and exchanges information with those\nneighbors.\n\nThe gossip algorithm for information spreading that we study is\nrandomized, with the communication partner of a node at any time\ndetermined by a simple probabilistic choice. We provide an upper\nbound on the running time of the algorithm in terms of the\n{\\em conductance} of a stochastic matrix that governs how nodes choose\ncommunication partners. 
By using the gossip algorithm to compute\nminima in the algorithm for computing separable functions, we obtain\nan algorithm for computing separable functions whose performance on\ncertain graphs compares favorably with that of known iterative\ndistributed algorithms \\cite{bgps} for computing averages in a\nnetwork.\n\n\\subsection{Related work}\n\\label{sec:related}\n\nIn this section, we present a brief summary of related work.\nAlgorithms for computing the number of distinct elements in a multiset\nor data stream \\cite{fm, streamdistinct} can be adapted to compute\nseparable functions using information spreading \\cite{clkb}. We are\nnot aware, however, of a previous analysis of the amount of time\nrequired for these algorithms to achieve a certain accuracy in the\nestimates of the function value when the computation is totally\ndistributed (i.e., when nodes do not have unique identities). These\nadapted algorithms require the nodes in the network to make use of a\ncommon hash function. In addition, the discreteness of the counting\nproblem makes the resulting algorithms for computing separable\nfunctions suitable only for functions in which the terms in the sum\nare integers. Our algorithm is simpler than these algorithms, and can\ncompute functions with non-integer terms.\n\n\n\nThere has been a lot of work on the distributed computation of\naverages, a special case of the problem of reaching agreement or\nconsensus among processors via a distributed computation. Distributed\nalgorithms for reaching consensus under appropriate conditions have\nbeen known since the classical work of Tsitsiklis\n\\cite{tsitsiklis-thesis} and Tsitsiklis, Bertsekas, and Athans\n\\cite{tba} (see also the book by Bertsekas and Tsitsiklis\n\\cite{pardiscomp}). Averaging algorithms compute the ratio of the sum\nof the input numbers to $n$, the number of nodes in the network, and\nnot the exact value of the sum. 
Thus, such algorithms cannot be\nextended in general to compute arbitrary separable functions. On the\nother hand, an algorithm for computing separable functions can be used\nto compute averages by separately computing the sum of the input\nnumbers, and the number of nodes in the graph (using one as the input\nat each node).\n\n\nRecently, Kempe, Dobra, and Gehrke showed the existence of a\nrandomized iterative gossip algorithm for averaging with the optimal\naveraging time \\cite{kempe}. This result was restricted to complete\ngraphs. The algorithm requires that the nodes begin the computation\nin an asymmetric initial state in order to compute separable\nfunctions, a requirement that may not be convenient for large networks\nthat do not have centralized agents for global coordination.\nFurthermore, the algorithm suffers from the possibility of oscillation\nthroughout its execution.\n\nIn a more recent paper, Boyd, Ghosh, Prabhakar, and Shah presented a\nsimpler iterative gossip algorithm for averaging that addresses some\nof the limitations of the Kempe et al. algorithm \\cite{bgps}.\nSpecifically, the algorithm and analysis are applicable to arbitrary\ngraph topologies. Boyd et al. showed a connection between the\naveraging time of the algorithm and the mixing time (a property that\nis related to the conductance, but is not the same) of an appropriate\nrandom walk on the graph representing the network. They also found an\noptimal averaging algorithm as a solution to a semi-definite program.\n\nFor completeness, we contrast our results for the problem of averaging\nwith known results. As we shall see, iterative averaging, which has\nbeen a common approach in the previous work, is an order slower than\nour algorithm for many graphs, including ring and grid graphs. 
In\nthis sense, our algorithm is quite different from (and has advantages\nover) the known averaging algorithms.\n\nOn the topic of information spreading, gossip algorithms for\ndisseminating a message to all nodes in a complete graph in which\ncommunication partners are chosen uniformly at random have been\nstudied for some time \\cite{frieze, rumor2, epidemic}. Karp,\nSchindelhauer, Shenker, and V\\\"{o}cking presented a\n{\\em push and pull} gossip algorithm, in which communicating nodes\nboth send and receive messages, that disseminates a message to all $n$\nnodes in a graph in $O(\\log n)$ time with high probability\n\\cite{kssv}. In this work, we have provided an analysis of the time\nrequired for a gossip algorithm to disseminate $n$ messages to $n$\nnodes for the more general setting of arbitrary graphs and non-uniform\nrandom choices of communication partners. For other related results,\nwe refer the reader to \\cite{rumor3, gossip1, gossip2}. We take note\nof the similar (independent) recent work of Ganesh, Massouli\\'{e}, and\nTowsley \\cite{gmt}, and Berger, Borgs, Chayes, and Saberi \\cite{bbcs},\non the spread of epidemics in a network.\n\n\n\\subsection{Organization}\n\nThe rest of the paper is organized as follows. Section\n\\ref{sec:prelim} presents the distributed computation problems we\nstudy and an overview of our results. In Section \\ref{sec:comp}, we\ndevelop and analyze an algorithm for computing separable functions in\na distributed manner. Section \\ref{sec:infdis} contains an analysis\nof a simple randomized gossip algorithm for information spreading,\nwhich can be used as a subroutine in the algorithm for computing\nseparable functions. In Section \\ref{sec:appl}, we discuss\napplications of our results to particular types of graphs, and compare\nour results to previous results for computing averages. 
Finally, we\npresent conclusions and future directions in Section \\ref{sec:conc}.\n\n\\section{Preliminaries and Results}\n\\label{sec:prelim}\n\nWe consider an arbitrary connected network, represented by an\nundirected graph $G = (V, E)$, with $|V| = n$ nodes. For notational\npurposes, we assume that the nodes in $V$ are numbered arbitrarily so\nthat $V = \\{1, \\dots, n\\}$. A node, however, does not have a unique\nidentity that can be used in a computation. Two nodes $i$ and $j$ can\ncommunicate with each other if (and only if) $(i, j) \\in E$.\n\nTo capture some of the resource constraints in the networks in which\nwe are interested, we impose a {\\em transmitter gossip} constraint on\nnode communication. Each node is allowed to contact at most one other\nnode at a given time for communication. However, a node can be\ncontacted by multiple nodes simultaneously.\n\nLet $2^{V}$ denote the power set of the vertex set $V$ (the set of all\nsubsets of $V$). For an $n$-dimensional vector\n$\\vec{x} \\in \\mathbf{R}^{n}$, let $x_{1}, \\dots, x_{n}$ be the components\nof $\\vec{x}$.\n\\begin{definition}\nWe say that a function $f : \\mathbf{R}^{n} \\times 2^{V} \\to \\mathbf{R}$\nis {\\em separable} if there exist functions $f_{1}, \\dots, f_{n}$ such\nthat, for all $S \\subseteq V$,\n\\begin{equation}\nf(\\vec{x}, S) = \\sum_{i \\in S} f_{i}(x_{i}).\n\\label{sepsum}\n\\end{equation}\n\\label{sepfunc}\n\\end{definition}\n\n\\noindent\n{\\bf Goal.} Let $\\cal{F}$ be the class of separable functions $f$ for\nwhich $f_{i}(x) \\geq 1$ for all $x \\in \\mathbf{R}$ and $i = 1, \\dots, n$.\nGiven a function $f \\in \\cal{F}$, and a vector $\\vec{x}$ containing\ninitial values $x_{i}$ for all the nodes, the nodes in the network are\nto compute the value $f(\\vec{x}, V)$ by a distributed computation,\nusing repeated communication between nodes.\n\n\\begin{note}\nConsider a function $g$ for which there exist functions\n$g_{1}, \\dots, g_{n}$ satisfying, for all $S \\subseteq V$, 
the\ncondition $g(\\vec{x}, S) = \\prod_{i \\in S} g_{i}(x_{i})$ in lieu of\n(\\ref{sepsum}). Then, $g$ is {\\em logarithmic separable}, i.e.,\n$f = \\log_b g$ is separable. Our algorithm for computing separable\nfunctions can be used to compute the function $f = \\log_{b} g$. The\ncondition $f_{i}(x) \\geq 1$ corresponds to $g_{i}(x) \\geq b$ in this\ncase. This lower bound of $1$ on $f_{i}(x)$ is arbitrary, although\nour algorithm does require the terms $f_{i}(x_{i})$ in the sum to be\npositive.\n\\end{note}\n\nBefore proceeding further, we list some practical situations where the\ndistributed computation of separable functions arises naturally. By\ndefinition, the sum of a set of numbers is a separable function.\n\\renewcommand{\\labelenumi}{(\\arabic{enumi})}\n\\begin{enumerate}\n\\item\n{\\em Summation.} Let the value at each node be $x_{i} = 1$. Then, the\nsum of the values is the number of nodes in the network.\n\n\\item\n{\\em Averaging.} According to Definition \\ref{sepfunc}, the average of\na set of numbers is not a separable function. However, the nodes can\nestimate the separable function $\\sum_{i = 1}^{n} x_{i}$ and $n$\nseparately, and use the ratio between these two estimates as an\nestimate of the mean of the numbers.\n\nSuppose the values at the nodes are measurements of a quantity of\ninterest. Then, the average provides an unbiased maximum likelihood\nestimate of the measured quantity. 
For example, if the nodes are\ntemperature sensors, then the average of the sensed values at the\nnodes gives a good estimate of the ambient temperature.\n\\end{enumerate}\n\nFor more sophisticated applications of a distributed averaging\nalgorithm, we refer the reader to \\cite{distr_eigvec} and\n\\cite{MSZ}.\nAveraging is used for the distributed computation of the top $k$\neigenvectors of a graph in \\cite{distr_eigvec}, while in \\cite{MSZ}\naveraging is used in a throughput-optimal distributed scheduling\nalgorithm in a wireless network.\n\n\\noindent{\\bf Time model.} In a distributed computation, a time model\ndetermines when nodes communicate with each other. We consider two\ntime models, one synchronous and the other asynchronous, in this\npaper. The two models are described as follows.\n\\begin{enumerate}\n\\item\n{\\em Synchronous time model:} Time is slotted commonly across all\nnodes in the network. In any time slot, each node may contact one of\nits neighbors according to a random choice that is independent of the\nchoices made by the other nodes. The simultaneous communication\nbetween the nodes satisfies the transmitter gossip constraint.\n\n\\item\n{\\em Asynchronous time model:} Each node has a clock that ticks at the\ntimes of a rate $1$ Poisson process. Equivalently, a common clock\nticks according to a rate $n$ Poisson process at times\n$C_{k}, k \\geq 1$, where $\\{C_{k + 1} - C_{k}\\}$ are i.i.d.\nexponential random variables of rate $n$. On clock tick $k$, one of\nthe $n$ nodes, say $I_{k}$, is chosen uniformly at random. We\nconsider this global clock tick to be a tick of the clock at node\n$I_{k}$. When a node's clock ticks, it contacts one of its neighbors\nat random. In this model, time is discretized according to clock\nticks. 
On average, there are $n$ clock ticks per one unit of absolute\ntime.\n\\end{enumerate}\n\nIn this paper, we measure the running times of algorithms in absolute\ntime, which is the number of time slots in the synchronous model, and\nis (on average) the number of clock ticks divided by $n$ in the\nasynchronous model. To obtain a precise relationship between clock\nticks and absolute time in the asynchronous model, we appeal to tail\nbounds on the probability that the sample mean of i.i.d. exponential\nrandom variables is far from its expected value. In particular, we\nmake use of the following lemma, which also plays a role in the\nanalysis of the accuracy of our algorithm for computing separable\nfunctions.\n\\begin{lemma}\n\\label{discrete-to-cont}\nFor any $k \\geq 1$, let $Y_{1}, \\dots, Y_{k}$ be i.i.d. exponential\nrandom variables with rate $\\lambda$. Let\n$R_{k} = \\frac{1}{k} \\sum_{i = 1}^{k} Y_{i}$. Then, for any\n$\\epsilon \\in (0, 1\/2)$,\n\\begin{eqnarray}\n\\Pr \\left(\\left|R_k - \\frac{1}{\\lambda}\\right|\n\\geq \\frac{\\epsilon}{\\lambda} \\right)\n& \\leq & 2 \\exp\\left(-\\frac{\\epsilon^{2} k}{3}\\right).\n\\label{e:dtoc1}\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof}\nBy definition,\n$E[R_{k}] = \\frac{1}{k}\\sum_{i = 1}^{k} \\lambda^{-1} = \\lambda^{-1}$.\nThe inequality in (\\ref{e:dtoc1}) follows directly from Cram\\'{e}r's\nTheorem (see \\cite{dembo}, pp. $30$, $35$) and properties of\nexponential random variables.\n\\end{proof}\n\nA direct implication of Lemma \\ref{discrete-to-cont} is the following\ncorollary, which bounds the probability that the absolute time $C_{k}$\nat which clock tick $k$ occurs is far from its expected value.\n\\begin{corollary}\\label{discrete-to-contc}\nFor $k \\geq 1$, $E[C_{k}] = k\/n$. 
Further, for any\n$\\epsilon \\in (0, 1\/2)$,\n\\begin{eqnarray}\n\\Pr \\left( \\left|C_{k} - \\frac{k}{n}\\right|\n\\geq \\frac{\\epsilon k}{n} \\right)\n& \\leq & 2 \\exp\\left(-\\frac{\\epsilon^{2} k}{3}\\right).\n\\label{e:dtoc}\n\\end{eqnarray}\n\\end{corollary}\n\nOur algorithm for computing separable functions is randomized, and is\nnot guaranteed to compute the exact quantity\n$f(\\vec{x}, V) = \\sum_{i = 1}^{n} f_{i}(x_{i})$ at each node in the\nnetwork. To study the accuracy of the algorithm's estimates, we\nanalyze the probability that the estimate of $f(\\vec{x}, V)$ at every\nnode is within a $(1 \\pm \\epsilon)$ multiplicative factor of the true\nvalue $f(\\vec{x}, V)$ after the algorithm has run for some period of\ntime. In this sense, the error in the estimates of the algorithm is\nrelative to the magnitude of $f(\\vec{x}, V)$.\n\nTo measure the amount of time required for an algorithm's estimates to\nachieve a specified accuracy with a specified probability, we define\nthe following quantity. For an algorithm ${\\cal C}$ that estimates\n$f(\\vec{x}, V)$, let $\\hat{y}_i(t)$ be the estimate of $f(\\vec{x}, V)$ at\nnode $i$ at time $t$. 
Furthermore, for notational convenience, given\n$\\epsilon > 0$, let $A_{i}^{\\epsilon}(t)$ be the following event.\n\\[\nA_{i}^{\\epsilon}(t)\n= \\left\\{\\hat{y}_{i}(t) \\not\\in\n\\left[(1 - \\epsilon)f(\\vec{x}, V),\n(1 + \\epsilon)f(\\vec{x}, V) \\right] \\right\\}\n\\]\n\\begin{definition}\nFor any $\\epsilon > 0$ and $\\delta \\in (0, 1)$, the\n($\\epsilon$, $\\delta$)-computing time of $\\cal{C}$, denoted\n$T_{\\cal{C}}^{\\text{cmp}}(\\epsilon, \\delta)$, is\n\\[\nT_{\\cal{C}}^{\\text{cmp}}(\\epsilon, \\delta)\n= \\sup_{f \\in \\cal{F}} \\sup_{\\vec{x} \\in \\mathbf{R}^{n}}\n\\inf \\Big\\{\\tau : \\forall t \\geq \\tau,\n\\Pr \\big(\\cup_{i = 1}^{n} A_{i}^{\\epsilon}(t) \\big)\n\\leq \\delta \\Big\\}.\n\\]\n\\end{definition}\n\n\\noindent\nIntuitively, the significance of this definition of the\n$(\\epsilon, \\delta)$-computing time of an algorithm $\\cal{C}$ is that,\nif $\\cal{C}$ runs for an amount of time that is at least\n$T_{\\cal{C}}^{\\text{cmp}}(\\epsilon, \\delta)$, then the probability that the\nestimates of $f(\\vec{x}, V)$ at the nodes are all within a\n$(1 \\pm \\epsilon)$ factor of the actual value of the function is at\nleast $1 - \\delta$.\n\nAs noted before, our algorithm for computing separable functions is\nbased on a reduction to the problem of information spreading, which is\ndescribed as follows. Suppose that, for $i = 1, \\dots, n$, node $i$\nhas the one message $m_{i}$. The task of information spreading is to\ndisseminate all $n$ messages to all $n$ nodes via a sequence of local\ncommunications between neighbors in the graph. In any single\ncommunication between two nodes, each node can transmit to its\ncommunication partner any of the messages that it currently holds. We\nassume that the data transmitted in a communication must be a set of\nmessages, and therefore cannot be arbitrary information.\n\nConsider an information spreading algorithm $\\cal{D}$, which specifies\nhow nodes communicate. 
For each node $i \\in V$, let $S_{i}(t)$ denote\nthe set of nodes that have the message $m_{i}$ at time $t$. While\nnodes can gain messages during communication, we assume that they do\nnot lose messages, so that $S_{i}(t_{1}) \\subseteq S_{i}(t_{2})$ if\n$t_{1} \\leq t_{2}$. Analogous to the $(\\epsilon, \\delta)$-computing\ntime, we define a quantity that measures the amount of time required\nfor an information spreading algorithm to disseminate all the messages\n$m_{i}$ to all the nodes in the network.\n\\begin{definition}\nFor $\\delta \\in (0, 1)$, the $\\delta$-information-spreading time\nof the algorithm $\\cal{D}$, denoted $T_{\\cal{D}}^{\\text{spr}}(\\delta)$, is\n\\[\nT_{\\cal{D}}^{\\text{spr}}(\\delta)\n= \\inf\n\\left\\{t : \\Pr \\left(\\cup_{i = 1}^{n} \\{S_{i}(t) \\neq V\\} \\right)\n\\leq \\delta \\right\\}.\n\\]\n\\end{definition}\n\nIn our analysis of the gossip algorithm for information spreading, we\nassume that when two nodes communicate, each node can send all of its\nmessages to the other in a single communication. This rather\nunrealistic assumption of {\\em infinite} link capacity is merely for\nconvenience, as it provides a simpler analytical characterization of\n$T_{\\cal{C}}^{\\text{cmp}}(\\epsilon, \\delta)$ in terms of\n$T_{\\cal{D}}^{\\text{spr}}(\\delta)$. Our algorithm for computing separable\nfunctions requires only links of unit capacity.\n\n\\subsection{Our contribution}\n\\label{ssec:contrib}\n\nThe main contribution of this paper is the design of a distributed\nalgorithm to compute separable functions of node values in an\narbitrary connected network. Our algorithm is randomized, and in\nparticular uses exponential random variables. This usage of\nexponential random variables is analogous to that in an \nalgorithm by Cohen\\footnote{We thank Dahlia Malkhi for pointing\nthis reference out to us.}\n for estimating the sizes of sets in a graph \\cite{cohen}. 
The\nbasis for our algorithm is the following property of the exponential\ndistribution.\n\\begin{property}\n\\label{p1}\nLet $W_{1}, \\dots, W_{n}$ be $n$ independent random variables such\nthat, for $i = 1, \\dots, n$, the distribution of $W_{i}$ is\nexponential with rate $\\lambda_{i}$. Let $\\bar{W}$ be the minimum of\n$W_{1}, \\dots, W_{n}$. Then, $\\bar{W}$ is distributed as an exponential\nrandom variable of rate $\\lambda = \\sum_{i = 1}^{n} \\lambda_{i}$.\n\\end{property}\n\\begin{proof}\nFor an exponential random variable $W$ with rate $\\lambda$, for any\n$z \\in \\Reals_{+}$,\n\\[\n\\Pr(W > z) = \\exp(-\\lambda z).\n\\]\nUsing this fact and the independence of the random variables $W_{i}$,\nwe compute $\\Pr(\\bar{W} > z)$ for any $z \\in \\Reals_{+}$.\n\\begin{eqnarray*}\n\\Pr(\\bar{W} > z)\n& = & \\Pr \\left(\\cap_{i = 1}^{n} \\{W_{i} > z\\} \\right) \\\\\n& = & \\prod_{i = 1}^{n} \\Pr(W_{i} > z) \\\\\n& = & \\prod_{i = 1}^{n} \\exp(-\\lambda_{i} z) \\\\\n& = & \\exp\\left(-z \\sum_{i = 1}^{n} \\lambda_{i} \\right).\n\\end{eqnarray*}\nThis establishes the property stated above.\n\\end{proof}\n\nOur algorithm uses an information spreading algorithm as a subroutine,\nand as a result its running time is a function of the running time of\nthe information spreading algorithm it uses. 
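The property just established can be checked numerically with a short Monte Carlo simulation. The sketch below is purely illustrative (the rates are arbitrary choices, not values from the paper): it samples the minimum of independent exponentials and compares its empirical mean with $1/\sum_i \lambda_i$, the mean of the claimed exponential distribution.

```python
import random

# Sanity check of the property above: the minimum of independent
# exponentials with rates lambda_1, ..., lambda_n is exponential
# with rate sum(lambda_i). The rates below are illustrative.
rates = [0.5, 1.0, 2.5]
total_rate = sum(rates)  # 4.0, so the claimed mean is 0.25

random.seed(1)
trials = 200_000
# Sample min(W_1, ..., W_n) repeatedly and average.
mins = [min(random.expovariate(r) for r in rates) for _ in range(trials)]
empirical_mean = sum(mins) / trials

print(round(empirical_mean, 3), round(1 / total_rate, 3))
```

With this many trials, the empirical mean of the minimum agrees with $1/\sum_i \lambda_i$ to within about one percent.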
The faster the\ninformation spreading algorithm is, the better our algorithm performs.\nSpecifically, the following result provides an upper bound on the\n($\\epsilon$, $\\delta$)-computing time of the algorithm.\n\\begin{theorem}\n\\label{thm:main1}\nGiven an information spreading algorithm $\\cal{D}$ with\n$\\delta$-spreading time $T_{\\cal{D}}^{\\text{spr}}(\\delta)$ for\n$\\delta \\in (0, 1)$, there exists an algorithm ${\\cal{A}}$ for\ncomputing separable functions $f \\in \\cal{F}$ such that, for any\n$\\epsilon \\in (0, 1)$ and $\\delta \\in (0, 1)$,\n\\[\nT_{\\cal{A}}^{\\text{cmp}}(\\epsilon, \\delta)\n= O\\left( \\epsilon^{-2} (1 + \\log \\delta^{-1})\nT_{\\cal{D}}^{\\text{spr}}(\\delta\/2) \\right).\n\\]\n\\end{theorem}\n\nMotivated by our interest in decentralized algorithms, we analyze a\nsimple randomized gossip algorithm for information spreading. When\nnode $i$ initiates a communication, it contacts each node $j \\neq i$\nwith probability $P_{ij}$. With probability $P_{ii}$, it does not\ncontact another node. The $n \\times n$ matrix $P = [P_{ij}]$\ncharacterizes the algorithm; each matrix $P$ gives rise to an\ninformation spreading algorithm $\\cal{P}$. We assume that $P$ is\nstochastic, and that $P_{ij} = 0$ if $i \\neq j$ and $(i, j) \\notin E$,\nas nodes that are not neighbors in the graph cannot communicate with\neach other. 
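The partner-selection rule just described can be sketched in a few lines. This is an illustrative sketch of one synchronous round, not the paper's full protocol: `gossip_round` is a hypothetical helper, and the 3-node ring matrix is an arbitrary example of a doubly stochastic $P$ supported on the edge set.

```python
import random

def gossip_round(P, rng):
    """One synchronous gossip round: node i contacts node j with
    probability P[i][j]; choosing j == i means node i contacts nobody,
    recorded as None. Assumes P is stochastic with P[i][j] = 0 whenever
    (i, j) is not an edge. Illustrative sketch only."""
    n = len(P)
    contacts = []
    for i in range(n):
        # random.choices samples index j with probability P[i][j].
        j = rng.choices(range(n), weights=P[i])[0]
        contacts.append(None if j == i else j)
    return contacts

# Hypothetical example: a 3-node ring with uniform choices over
# neighbours (a symmetric, hence doubly stochastic, matrix).
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
print(gossip_round(P, random.Random(0)))
```

Each node's choice is independent of the others, and a node never initiates more than one contact per round, so the transmitter gossip constraint is respected.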
Section \\ref{sec:infdis} describes the data transmitted\nbetween two nodes when they communicate.\n\nWe obtain an upper bound on the $\\delta$-information-spreading time of\nthis gossip algorithm in terms of the {\\em conductance} of the matrix\n$P$, which is defined as follows.\n\\begin{definition}\nFor a stochastic matrix $P$, the conductance of $P$, denoted\n$\\Phi(P)$, is\n\\[\n\\Phi(P)\n= \\min_{S \\subset V, \\; 0 < |S| \\leq n\/2}\n\\frac{\\sum_{ i \\in S, j \\notin S} P_{ij}}{ |S|}.\n\\]\n\\end{definition}\n\n\\noindent\nIn general, the above definition of conductance is not the same as the\nclassical definition \\cite{sinclair}. However, we restrict our\nattention in this paper to doubly stochastic matrices $P$. When $P$\nis doubly stochastic, these two definitions are equivalent. Note that\nthe definition of conductance implies that $\\Phi(P) \\leq 1$.\n\\begin{theorem}\n\\label{thm:main2}\nConsider any doubly stochastic matrix $P$ such that if $i \\neq j$ and\n$(i, j) \\notin E$, then $P_{ij} = 0$. There exists an information\ndissemination algorithm $\\cal{P}$ such that, for any\n$\\delta \\in (0, 1)$,\n\\[\nT_{\\cal{P}}^{\\text{spr}}(\\delta)\n= O\\left(\\frac{\\log n + \\log \\delta^{-1}}{\\Phi(P)}\\right).\n\\]\n\\end{theorem}\n\\begin{note}\nThe results of Theorems \\ref{thm:main1} and \\ref{thm:main2} hold for\nboth the synchronous and asynchronous time models. 
Recall that the\nquantities $T_{\\cal{C}}^{\\text{cmp}}(\\epsilon, \\delta)$ and\n$T_{\\cal{D}}^{\\text{spr}}(\\delta)$ are defined with respect to absolute time\nin both models.\n\\end{note}\n\n\n\\noindent\n{\\bf A comparison.} Theorems \\ref{thm:main1} and \\ref{thm:main2} imply\nthat, given a doubly stochastic matrix $P$, the time required for our\nalgorithm to obtain a $(1 \\pm \\epsilon)$ approximation with\nprobability at least $1 - \\delta$ is\n$O\\left(\\frac{\\epsilon^{-2} (1 + \\log \\delta^{-1})\n(\\log n + \\log \\delta^{-1})}{\\Phi(P)}\\right)$.\nWhen the network size $n$ and the accuracy parameters $\\epsilon$ and\n$\\delta$ are fixed, the running time scales in proportion to\n$1\/\\Phi(P)$, a factor that captures the dependence of the algorithm on\nthe matrix $P$. Our algorithm can be used to compute the average of a\nset of numbers. For iterative averaging algorithms such as the ones\nin \\cite{tsitsiklis-thesis} and \\cite{bgps}, the convergence time\nlargely depends on the mixing time of $P$, which is lower bounded by\n$\\Omega(1\/\\Phi(P))$ (see \\cite{sinclair}, for example). Thus, our\nalgorithm is (up to a $\\log n$ factor) no slower than the fastest\niterative algorithm based on time-invariant linear dynamics.\n\n\n\n\\section{Function Computation}\n\\label{sec:comp}\n\nIn this section, we describe our algorithm for computing the value\n$y = f(\\vec{x}, V) = \\sum_{i = 1}^{n} f_{i}(x_{i})$ of the separable\nfunction $f$, where $f_{i}(x_{i}) \\geq 1$. For simplicity of\nnotation, let $y_{i} = f_{i}(x_{i})$. Given $x_{i}$, each node can\ncompute $y_{i}$ on its own. Next, the nodes use the algorithm shown\nin Fig. \\ref{compalg}, which we refer to as COMP, to compute estimates\n$\\hat{y}_{i}$ of $y = \\sum_{i = 1}^{n} y_{i}$. 
The quantity $r$ is a\nparameter to be chosen later.\n\\begin{figure}[htbp]\n\\centering\n\\begin{minipage}{\\textwidth}\n\\hrulefill\n\n\\noindent\n{\\bf Algorithm COMP}\n\n\\renewcommand{\\labelenumi}{{\\bf \\arabic{enumi}.}}\n\\renewcommand{\\labelenumii}{(\\alph{enumii})}\n\\begin{enumerate}\n\n\\item[{\\bf 0.}]\nInitially, for $i = 1, \\dots, n$, node $i$ has the value\n$y_{i} \\geq 1$.\n\n\\item\nEach node $i$ generates $r$ independent random numbers\n$W_{1}^{i}, \\dots, W_{r}^{i}$, where the distribution of each\n$W_{\\ell}^{i}$ is exponential with rate $y_{i}$ (i.e., with mean\n$1\/y_{i}$).\n\n\\item\n\\label{minstep}\nEach node $i$ computes, for $\\ell = 1, \\dots, r$, an estimate\n$\\hat{W}_{\\ell}^{i}$ of the minimum\n$\\bar{W}_{\\ell} = \\min_{i = 1}^{n} W_{\\ell}^{i}$. This computation can be\ndone using an information spreading algorithm as described below.\n\n\\item\nEach node $i$ computes\n$\\hat{y}_{i} = \\frac{r}{\\sum_{\\ell = 1}^{r} \\hat{W}_{\\ell}^{i}}$ as its\nestimate of $\\sum_{i = 1}^{n} y_{i}$.\n\\end{enumerate}\n\\hrulefill\n\\end{minipage}\n\\caption{An algorithm for computing separable functions.}\n\\label{compalg}\n\\end{figure}\n\nWe describe how the minimum is computed as required by step\n{\\bf \\ref{minstep}} of the algorithm in Section \\ref{ssec:minima}.\nThe running time of the algorithm COMP depends on the running time of\nthe algorithm used to compute the minimum.\n\nNow, we show that COMP effectively estimates the function value $y$\nwhen the estimates $\\hat{W}_{\\ell}^{i}$ are all correct by providing a\nlower bound on the conditional probability that the estimates produced\nby COMP are all within a $(1 \\pm \\epsilon)$ factor of $y$.\n\\begin{lemma}\n\\label{lem:estaccuracy}\nLet $y_{1}, \\dots, y_{n}$ be real numbers (with $y_{i} \\geq 1$ for\n$i = 1, \\dots, n$), $y = \\sum_{i = 1}^{n} y_{i}$, and\n$\\bar{W} = (\\bar{W}_{1}, \\dots, \\bar{W}_{r})$, where the $\\bar{W}_{\\ell}$ are as\ndefined in the algorithm COMP. 
For any node $i$, let\n$\\hat{W}^{i} = (\\hat{W}_{1}^{i}, \\dots, \\hat{W}_{r}^{i})$, and let $\\hat{y}_{i}$ be\nthe estimate of $y$ obtained by node $i$ in COMP. For any\n$\\epsilon \\in (0, 1\/2)$,\n\\[\n\\begin{split}\n\\Pr & \\left( \\cup_{i = 1}^{n} \\left\\{\n\\left|\\hat{y}_{i} - y \\right| > 2 \\epsilon y \\right\\}\n\\mid \\forall i \\in V, \\: \\hat{W}^{i} = \\bar{W} \\right) \n \\leq 2\\exp\\left(-\\frac{\\epsilon^{2} r}{3} \\right).\n\\end{split}\n\\]\n\\end{lemma}\n\n\\begin{proof}\nObserve that the estimate $\\hat{y}_{i}$ of $y$ at node $i$ is a function\nof $r$ and $\\hat{W}^{i}$. Under the hypothesis that $\\hat{W}^{i} = \\bar{W}$ for\nall nodes $i \\in V$, all nodes produce the same estimate\n$\\hat{y} = \\hat{y}_{i}$ of $y$. This estimate is\n$\\hat{y} = r \\left(\\sum_{\\ell = 1}^{r} \\bar{W}_{\\ell} \\right)^{-1}$, and so\n$\\hat{y}^{-1} = \\left(\\sum_{\\ell = 1}^{r} \\bar{W}_{\\ell} \\right)r^{-1}$.\n\nProperty \\ref{p1} implies that each of the $n$ random variables\n$\\bar{W}_{1}, \\dots, \\bar{W}_{r}$ has an exponential distribution with rate\n$y$. 
From Lemma \\ref{discrete-to-cont}, it follows that for any\n$\\epsilon \\in (0, 1\/2)$,\n\\begin{equation}\n\\begin{split}\n\\Pr & \\left( \\left|\\hat{y}^{-1} - \\frac{1}{y}\\right|\n> \\frac{\\epsilon}{y}\n\\;\\Big|\\; \\forall i \\in V, \\: \\hat{W}^{i} = \\bar{W} \\right) \n~ \\leq 2\\exp\\left(-\\frac{\\epsilon^2 r}{3}\\right).\n\\end{split}\n\\label{e1}\n\\end{equation}\nThis inequality bounds the conditional probability of the event\n$\\{\\hat{y}^{-1} \\not\\in\n[(1 - \\epsilon) y^{-1}, (1 + \\epsilon)y^{-1}]\\}$,\nwhich is equivalent to the event\n$\\{\\hat{y} \\not\\in [(1 + \\epsilon)^{-1}y, (1 - \\epsilon)^{-1}y]\\}$.\nNow, for $\\epsilon \\in (0, 1\/2)$,\n\\begin{equation}\n\\begin{split}\n(1 - \\epsilon)^{-1} & \\in\n\\left[ 1 + \\epsilon, 1 + 2\\epsilon \\right], \n~ (1 + \\epsilon)^{-1}\n~ \\in \\left[1 - \\epsilon, 1 - 2\\epsilon\/3\\right].\n\\end{split}\n\\label{e2}\n\\end{equation}\nApplying the inequalities in (\\ref{e1}) and (\\ref{e2}), we conclude\nthat for $\\epsilon \\in (0, 1\/2)$,\n\\[\n\\begin{split}\n\\Pr & \\left(\\left| \\hat{y} - y \\right| > 2 \\epsilon y\n\\mid \\forall i \\in V, \\: \\hat{W}^{i} = \\bar{W} \\right) ~\n \\leq 2 \\exp\\left(-\\frac{\\epsilon^2 r}{3}\\right).\n\\end{split}\n\\]\n\n\\noindent\nNoting that the event\n$\\cup_{i = 1}^{n} \\{|\\hat{y}_{i} - y| > 2 \\epsilon y\\}$ is equivalent to\nthe event $\\{|\\hat{y} - y| > 2 \\epsilon y\\}$ when $\\hat{W}^{i} = \\bar{W}$ for all\nnodes $i$ completes the proof of Lemma \\ref{lem:estaccuracy}.\n\\end{proof}\n\n\\subsection{Using information spreading to compute minima}\n\\label{ssec:minima}\n\nWe now elaborate on step {\\bf \\ref{minstep}} of the algorithm COMP.\nEach node $i$ in the graph starts this step with a vector\n$W^{i} = (W_{1}^{i}, \\dots, W_{r}^{i})$, and the nodes seek the vector\n$\\bar{W} = (\\bar{W}_{1}, \\dots, \\bar{W}_{r})$, where\n$\\bar{W}_{\\ell} = \\min_{i = 1}^{n} W_{\\ell}^{i}$. 
In the information\nspreading problem, each node $i$ has a message $m_{i}$, and the nodes\nare to transmit messages across the links until every node has every\nmessage.\n\nIf all link capacities are infinite (i.e., in one time unit, a node\ncan send an arbitrary amount of information to another node), then an\ninformation spreading algorithm $\\cal{D}$ can be used directly to\ncompute the minimum vector $\\bar{W}$. To see this, let the message\n$m_{i}$ at the node $i$ be the vector $W^{i}$, and then apply the\ninformation spreading algorithm to disseminate the vectors. Once\nevery node has every message (vector), each node can compute $\\bar{W}$ as\nthe component-wise minimum of all the vectors. This implies that the\nrunning time of the resulting algorithm for computing $\\bar{W}$ is the\nsame as that of the information spreading algorithm.\n\nThe assumption of infinite link capacities allows a node to transmit\nan arbitrary number of vectors $W^{i}$ to a neighbor in one time unit.\nA simple modification to the information spreading algorithm, however,\nyields an algorithm for computing the minimum vector $\\bar{W}$ using links\nof capacity $r$. To this end, each node $i$ maintains a single\n$r$-dimensional vector $w^{i}(t)$ that evolves in time, starting with\n$w^{i}(0) = W^{i}$.\n\nSuppose that, in the information dissemination algorithm, node $j$\ntransmits the messages (vectors) $W^{i_{1}}, \\dots, W^{i_{c}}$ to node\n$i$ at time $t$. Then, in the minimum computation algorithm, $j$\nsends to $i$ the $r$ quantities $w_{1}, \\dots, w_{r}$, where\n$w_{\\ell} = \\min_{u = 1}^{c} W_{\\ell}^{i_{u}}$. The node $i$ sets\n$w_{\\ell}^{i}(t^{+}) = \\min(w_{\\ell}^{i}(t^{-}), w_{\\ell})$ for\n$\\ell = 1, \\dots, r$, where $t^{-}$ and $t^{+}$ denote the times\nimmediately before and after, respectively, the communication. 
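In code, the per-communication update just described is a one-line component-wise minimum over the $r$ slots (a sketch; the function name is ours):

```python
def merge_min(w_local, w_received):
    """Node i's update on receiving r quantities from a partner:
    w_l^i(t+) = min(w_l^i(t-), w_l) for each coordinate l."""
    return [min(a, b) for a, b in zip(w_local, w_received)]
```

Because min is associative, commutative, and idempotent, it does not matter how often or in what order vectors are merged — duplicate deliveries are harmless, which is exactly why the reduction to minima tolerates gossip-style dissemination.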
At any\ntime $t$, we will have $w^{i}(t) = \\bar{W}$ for all nodes $i \\in V$ if, in\nthe information spreading algorithm, every node $i$ has all the\nvectors $W^{1}, \\dots, W^{n}$ at the same time $t$. In this way, we\nobtain an algorithm for computing the minimum vector $\\bar{W}$ that uses\nlinks of capacity $r$ and runs in the same amount of time as the\ninformation spreading algorithm.\n\nAn alternative to using links of capacity $r$ in the computation of\n$\\bar{W}$ is to make the time slot $r$ times larger, and impose a unit\ncapacity on all the links. Now, a node transmits the numbers\n$w_{1}, \\dots, w_{r}$ to its communication partner over a period of\n$r$ time slots, and as a result the running time of the algorithm for\ncomputing $\\bar{W}$ becomes greater than the running time of the\ninformation spreading algorithm by a factor of $r$. The preceding\ndiscussion, combined with the fact that nodes only gain messages as an\ninformation spreading algorithm executes, leads to the following\nlemma.\n\\begin{lemma}\n\\label{lem:mininfdis}\nSuppose that the COMP algorithm is implemented using an information\nspreading algorithm $\\cal{D}$ as described above. Let $\\hat{W}^{i}(t)$\ndenote the estimate of $\\bar{W}$ at node $i$ at time $t$. For any\n$\\delta \\in (0, 1)$, let $t_{m} = r T_{\\cal{D}}^{\\text{spr}}(\\delta)$.\nThen, for any time $t \\geq t_{m}$, with probability at least\n$1 - \\delta$, $\\hat{W}^{i}(t) = \\bar{W}$ for all nodes $i \\in V$.\n\\end{lemma}\n\nNote that the amount of data communicated between nodes during the\nalgorithm COMP depends on the values of the exponential random\nvariables generated by the nodes. 
Since the nodes compute minima of\nthese variables, we are interested in a probabilistic lower bound on\nthe values of these variables (for example, suppose that the nodes\ntransmit the values $1\/W_{\\ell}^{i}$ when computing the minimum\n$\\bar{W}_{\\ell} = 1\/\\max_{i = 1}^{n} \\{1\/W_{\\ell}^{i}\\}$).\nTo this end, we use the fact that each $\\bar{W}_{\\ell}$ is an exponential\nrandom variable with rate $y$ to obtain that, for any constant\n$c > 1$, the probability that any of the minimum values $\\bar{W}_{\\ell}$\nis less than $1\/B$ (i.e., any of the inverse values $1\/W_{\\ell}^{i}$\nis greater than $B$) is at most $\\delta\/c$, where $B$ is proportional\nto $cry\/\\delta$.\n\n\\subsection{Proof of Theorem \\ref{thm:main1}}\n\nNow, we are ready to prove Theorem \\ref{thm:main1}. In particular, we\nwill show that the COMP algorithm has the properties claimed in\nTheorem \\ref{thm:main1}. To this end, consider using an information spreading algorithm $\\cal{D}$ with\n$\\delta$-spreading time $T_{\\cal{D}}^{\\text{spr}}(\\delta)$ for\n$\\delta \\in (0, 1)$ as the subroutine in the COMP algorithm. For any\n$\\delta \\in (0, 1)$, let $\\tau_{m} = rT_{\\cal{D}}^{\\text{spr}}(\\delta\/2)$.\nBy Lemma \\ref{lem:mininfdis}, for any time $t \\geq \\tau_{m}$, the\nprobability that $\\hat{W}^{i} \\neq \\bar{W}$ for any node $i$ at time $t$ is at\nmost $\\delta\/2$.\n\nOn the other hand, suppose that $\\hat{W}^{i} = \\bar{W}$ for all nodes $i$ at\ntime $t \\geq \\tau_{m}$. 
For any $\\epsilon \\in (0, 1)$, by choosing\n$r \\geq 12 \\epsilon^{-2} \\log (4 \\delta^{-1})$ so that\n$r = \\Theta(\\epsilon^{-2}(1 + \\log \\delta^{-1}))$, we obtain from\nLemma \\ref{lem:estaccuracy} that\n\\begin{equation}\n\\begin{split}\n\\Pr & \\left(\\cup_{i = 1}^{n} \\left\\{\\hat{y}_{i}\n\\not\\in \\left[ (1 - \\epsilon) y, (1 + \\epsilon) y \\right] \\right\\}\n\\mid \\forall i \\in V, \\: \\hat{W}^{i} = \\bar{W} \\right) ~\n~ \\leq \\delta\/2.\n\\end{split}\n\\label{e3}\n\\end{equation}\nRecall that $T_{COMP}^{\\text{cmp}}(\\epsilon, \\delta)$ is the smallest time\n$\\tau$ such that, under the algorithm COMP, at any time $t \\geq \\tau$,\nall the nodes have an estimate of the function value $y$ within a\nmultiplicative factor of $(1 \\pm \\epsilon)$ with probability at least\n$1 - \\delta$. By a straightforward union bound of events and\n(\\ref{e3}), we conclude that, for any time $t \\geq \\tau_{m}$,\n\\[\n\\Pr \\left(\\cup_{i = 1}^{n} \\left\\{\\hat{y}_{i} \\not\\in\n\\left[ (1 - \\epsilon) y, (1 + \\epsilon) y \\right] \\right\\} \\right)\n\\leq \\delta.\n\\]\nFor any $\\epsilon \\in (0, 1)$ and $\\delta \\in (0, 1)$, we now have, by\nthe definition of $(\\epsilon, \\delta)$-computing time,\n\\begin{eqnarray*}\nT_{COMP}^{\\text{cmp}}(\\epsilon, \\delta)\n& \\leq & \\tau_{m} \\\\\n& = & O \\left(\\epsilon^{-2} (1 + \\log \\delta^{-1})\nT_{\\cal{D}}^{\\text{spr}}(\\delta\/2) \\right).\n\\end{eqnarray*}\nThis completes the proof of Theorem \\ref{thm:main1}.\n\n\\section{Information spreading}\n\\label{sec:infdis}\n\nIn this section, we analyze a randomized gossip algorithm for\ninformation spreading. The method by which nodes choose partners to\ncontact when initiating a communication and the data transmitted\nduring the communication are the same for both time models defined in\nSection \\ref{sec:prelim}. 
These models differ in the times at which\nnodes contact each other: in the asynchronous model, only one node can\nstart a communication at any time, while in the synchronous model all\nthe nodes can communicate in each time slot.\n\n\nThe information spreading algorithm that we study is presented in\nFig. \\ref{spreadalg}, which makes use of the following notation. Let\n$M_{i}(t)$ denote the set of messages node $i$ has at time $t$.\nInitially, $M_{i}(0) = \\{m_{i}\\}$ for all $i \\in V$. For a\ncommunication that occurs at time $t$, let $t^{-}$ and $t^{+}$\ndenote the times immediately before and after, respectively, the\ncommunication occurs.\n\nAs mentioned in Section \\ref{ssec:contrib}, the nodes choose\ncommunication partners according to the probability distribution\ndefined by an $n \\times n$ matrix $P$. The matrix $P$ is\nnon-negative and stochastic, and satisfies $P_{ij} = 0$ for any pair\nof nodes $i \\neq j$ such that $(i, j) \\not\\in E$. For each such\nmatrix $P$, there is an instance of the information spreading\nalgorithm, which we refer to as SPREAD($P$).\n\\begin{figure}[htbp]\n\\centering\n\\begin{minipage}{\\textwidth}\n\\hrulefill\n\n\\noindent\n{\\bf Algorithm SPREAD($P$)}\n\n\\noindent\nWhen a node $i$ initiates a communication at time $t$:\n\n\\renewcommand{\\labelenumi}{{\\bf \\arabic{enumi}.}}\n\\begin{enumerate}\n\n\\item\nNode $i$ chooses a node $u$ at random, and contacts $u$. 
The choice\nof the communication partner $u$ is made independently of all other\nrandom choices, and the probability that node $i$ chooses any node $j$\nis $P_{ij}$.\n\n\\item\nNodes $u$ and $i$ exchange all of their messages, so that\n\\[\nM_{i}(t^{+}) = M_u(t^+) = M_{i}(t^{-}) \\cup M_{u}(t^{-}).\n\\]\n\\end{enumerate}\n\\hrulefill\n\\end{minipage}\n\\caption{A gossip algorithm for information spreading.}\n\\label{spreadalg}\n\\end{figure}\n\nWe note that the data transmitted between two communicating nodes in\nSPREAD conform to the {\\em push and pull mechanism}. That is, when node $i$\ncontacts node $u$ at time $t$, both nodes $u$ and $i$ exchange all of\ntheir information with each other. We also note that the\ndescription in the algorithm assumes that the communication\nlinks in the network have infinite capacity. As discussed in Section\n\\ref{ssec:minima}, however, an information spreading algorithm that\nuses links of infinite capacity can be used to compute minima using\nlinks of unit capacity.\n\nThis algorithm is simple, distributed, and satisfies the transmitter\ngossip constraint. We now present analysis of the information\nspreading time of SPREAD($P$) for doubly stochastic matrices $P$ in\nthe two time models. The goal of the analysis is to prove Theorem\n\\ref{thm:main2}. To this end, for any $i \\in V$, let\n$S_{i}(t) \\subseteq V$ denote the set of nodes that have the message\n$m_{i}$ after any communication events that occur at absolute time $t$\n(communication events occur on a global clock tick in the asynchronous\ntime model, and in each time slot in the synchronous time model). 
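A direct simulation of SPREAD($P$) in the asynchronous model takes only a few lines (a sketch under our naming, with $P$ given as a row-stochastic matrix of contact probabilities):

```python
import random

def spread(P, ticks, seed=1):
    """Asynchronous push-pull gossip. On each global clock tick one node i,
    chosen uniformly at random, contacts j with probability P[i][j]; both
    nodes then hold the union of their message sets. Returns the sets M_i."""
    n = len(P)
    M = [{i} for i in range(n)]  # M_i(0) = {m_i}
    rng = random.Random(seed)
    for _ in range(ticks):
        i = rng.randrange(n)                          # node i initiates
        j = rng.choices(range(n), weights=P[i])[0]    # partner chosen via P
        union = M[i] | M[j]                           # push and pull
        M[i], M[j] = set(union), set(union)
    return M
```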
At\nthe start of the algorithm, $S_{i}(0) = \\{i\\}$.\n\n\\subsection{Asynchronous model}\n\nAs described in Section \\ref{sec:prelim}, in the asynchronous time\nmodel the global clock ticks according to a Poisson process of rate\n$n$, and on a tick one of the $n$ nodes is chosen uniformly at random.\nThis node initiates a communication, so the times at which the\ncommunication events occur correspond to the ticks of the clock. On\nany clock tick, at most one pair of nodes can exchange messages by\ncommunicating with each other.\n\nLet $k \\geq 0$ denote the index of a clock tick. Initially, $k = 0$,\nand the corresponding absolute time is $0$. For simplicity of\nnotation, we identify the time at which a clock tick occurs with its\nindex, so that $S_{i}(k)$ denotes the set of nodes that have the\nmessage $m_{i}$ at the end of clock tick $k$. The following lemma\nprovides a bound on the number of clock ticks required for every node\nto receive every message.\n\\begin{lemma}\n\\label{lem:asynchticks}\nFor any $\\delta \\in (0, 1)$, define \n\\[\nK(\\delta)\n= \\inf\\{k \\geq 0:\n\\Pr(\\cup_{i = 1}^{n} \\{S_{i}(k) \\neq V\\}) \\leq \\delta \\}.\n\\]\nThen,\n\\[\nK(\\delta)\n= O\\left( n \\frac{\\log n + \\log \\delta^{-1}}{\\Phi(P)}\\right).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nFix any node $v \\in V$. We study the evolution of the size of the set\n$S_{v}(k)$. For simplicity of notation, we drop the subscript $v$,\nand write $S(k)$ to denote $S_{v}(k)$.\n\nNote that $|S(k)|$ is monotonically non-decreasing over the course of\nthe algorithm, with the initial condition $|S(0)| = 1$. For the\npurpose of analysis, we divide the execution of the algorithm into two\nphases based on the size of the set $S(k)$. In the first phase,\n$|S(k)| \\leq n\/2$, and in the second phase $|S(k)| > n\/2$.\n\nUnder the gossip algorithm, after clock tick $k + 1$, we have either\n$|S(k + 1)| = |S(k)|$ or $|S(k + 1)| = |S(k)| + 1$. 
Further, the size\nincreases if a node $i \\in S(k)$ contacts a node $j \\notin S(k)$, as\nin this case $i$ will push the message $m_{v}$ to $j$. For each such\npair of nodes $i$, $j$, the probability that this occurs on clock tick\n$k + 1$ is $P_{ij}\/n$. Since only one node is active on each clock\ntick,\n\\begin{equation}\nE[|S(k + 1)| - |S(k)| \\mid S(k)]\n\\geq \\sum_{i \\in S(k), j \\notin S(k)} \\frac{P_{ij}}{n}.\n\\label{expinc}\n\\end{equation}\n\n\\noindent\nWhen $|S(k)| \\leq n\/2$, it follows from (\\ref{expinc}) and the\ndefinition of the conductance $\\Phi(P)$ of $P$ that\n\\begin{eqnarray}\nE[|S(k + 1)| - |S(k)| \\mid S(k)]\n& \\geq & \\frac{|S(k)|}{n}\n\\frac{\\sum_{i \\in S(k), j \\notin S(k)} P_{ij} }{|S(k)|}\n\\notag \\\\\n& \\geq &\n\\frac{|S(k)|}{n} \\min_{S \\subset V, \\; 0 < |S| \\leq n\/2}\n\\frac{\\sum_{ i \\in S, j \\notin S} P_{ij}}{|S|}\n\\notag \\\\\n& = & \\frac{|S(k)|}{n} \\Phi(P)\n\\notag \\\\\n& = & |S(k)| \\hat{\\Phi},\n\\label{e4}\n\\end{eqnarray}\n\n\\noindent\nwhere $\\hat{\\Phi} = \\frac{\\Phi(P)}{n}$.\n\nWe seek an upper bound on the duration of the first phase. To this\nend, let\n\\begin{equation*}\nZ(k) = \\frac{\\exp \\left(\\frac{\\hat{\\Phi}}{4} k \\right)}{|S(k)|}.\n\\end{equation*}\n\n\\noindent\nDefine the stopping time $L = \\inf \\{k : |S(k)| > n\/2\\}$, and\n$L \\land k = \\min(L, k)$. If $|S(k)| > n\/2$, then\n$L \\land (k + 1) = L \\land k$, and thus\n$E[Z(L \\land (k + 1)) \\mid S(L \\land k)] = Z(L \\land k)$.\n\nNow, suppose that $|S(k)| \\leq n\/2$, in which case\n$L \\land (k + 1) = (L \\land k) + 1$. 
The function $g(z) = 1\/z$ is\nconvex for $z > 0$, which implies that, for $z_{1}, z_{2} > 0$,\n\\begin{equation}\ng(z_{2}) \\geq g(z_{1}) + g'(z_{1})(z_{2} - z_{1}).\n\\label{convderunder}\n\\end{equation}\n\n\\noindent\nApplying \\eqref{convderunder} with $z_{1} = |S(k + 1)|$ and\n$z_{2} = |S(k)|$ yields\n\\begin{equation*}\n\\frac{1}{|S(k + 1)|}\n\\leq \\frac{1}{|S(k)|}\n- \\frac{1}{|S(k + 1)|^{2}} (|S(k + 1)| - |S(k)|).\n\\end{equation*}\n\n\\noindent\nSince $|S(k + 1)| \\leq |S(k)| + 1 \\leq 2|S(k)|$, it follows that\n\\begin{equation}\n\\frac{1}{|S(k + 1)|}\n\\leq \\frac{1}{|S(k)|}\n- \\frac{1}{4 |S(k)|^{2}} (|S(k + 1)| - |S(k)|).\n\\label{invsizeupper}\n\\end{equation}\n\nCombining \\eqref{e4} and \\eqref{invsizeupper}, we obtain that, if\n$|S(k)| \\leq n\/2$, then\n\\begin{equation*}\nE \\left[\\frac{1}{|S(k + 1)|} \\;\\Big|\\; S(k) \\right]\n\\leq \\frac{1}{|S(k)|} \\left(1 - \\frac{\\hat{\\Phi}}{4} \\right)\n\\leq \\frac{1}{|S(k)|} \\exp \\left(-\\frac{\\hat{\\Phi}}{4} \\right),\n\\end{equation*}\n\n\\noindent\nas $1 - z \\leq \\exp(-z)$ for $z \\geq 0$. 
This implies that\n\\begin{eqnarray*}\nE[Z(L \\land (k + 1)) \\mid S(L \\land k)]\n& = & E \\left[\n\\frac{\\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land (k + 1)) \\right)}\n{|S(L \\land (k + 1))|} \\;\\bigg|\\; S(L \\land k) \\right]\n\\notag \\\\\n& = & \\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land k) \\right)\n\\exp \\left(\\frac{\\hat{\\Phi}}{4} \\right)\nE \\left[ \\frac{1}\n{|S((L \\land k) + 1)|} \\;\\Big|\\; S(L \\land k) \\right]\n\\notag \\\\\n& \\leq & \\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land k) \\right)\n\\exp \\left(\\frac{\\hat{\\Phi}}{4} \\right)\n\\exp \\left(-\\frac{\\hat{\\Phi}}{4} \\right)\n\\frac{1}{|S(L \\land k)|}\n\\notag \\\\\n& = & Z(L \\land k),\n\\end{eqnarray*}\n\n\\noindent\nand therefore $Z(L \\land k)$ is a supermartingale.\n\nSince $Z(L \\land k)$ is a supermartingale, we have the inequality\n$E[Z(L \\land k)] \\leq E[Z(L \\land 0)] = 1$ for any $k > 0$, as\n$Z(L \\land 0) = Z(0) = 1$. The fact that the set $S(k)$ can contain\nat most the $n$ nodes in the graph implies that\n\\begin{equation*}\nZ(L \\land k)\n= \\frac{\\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land k) \\right)}\n{|S(L \\land k)|}\n\\geq \\frac{1}{n} \\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land k) \\right),\n\\end{equation*}\n\n\\noindent\nand so\n\\begin{equation*}\nE \\left[\\exp \\left(\\frac{\\hat{\\Phi}}{4} (L \\land k) \\right) \\right]\n\\leq n E[Z(L \\land k)] \\leq n.\n\\end{equation*}\n\n\\noindent\nBecause $\\exp(\\hat{\\Phi}(L \\land k)\/4) \\uparrow \\exp (\\hat{\\Phi}L\/4)$\nas $k \\to \\infty$, the monotone convergence theorem implies that\n\\begin{equation*}\nE \\left[\\exp \\left(\\frac{\\hat{\\Phi}L}{4} \\right) \\right] \\leq n.\n\\end{equation*}\n\n\\noindent\nApplying Markov's inequality, we obtain that, for\n$k_{1} = 4(\\ln 2 + 2 \\ln n + \\ln (1\/\\delta))\/\\hat{\\Phi}$,\n\\begin{eqnarray}\n\\Pr (L > k_{1})\n& = & \\Pr \\left(\\exp \\left(\\frac{\\hat{\\Phi} L}{4} \\right)\n> \\frac{2n^{2}}{\\delta} \\right)\n\\notag \\\\\n& < & 
\\frac{\\delta}{2n}.\n\\label{e6a}\n\\end{eqnarray}\n\nFor the second phase of the algorithm, when $|S(k)| > n\/2$, we study\nthe evolution of the size of the set of nodes that do not have the\nmessage, $|S(k)^{c}|$. This quantity will decrease as the message\nspreads from nodes in $S(k)$ to nodes in $S(k)^{c}$. For simplicity,\nlet us consider restarting the process from clock tick $0$ after $L$\n(i.e., when more than half the nodes in the graph have the message),\nso that we have $|S(0)^{c}| \\leq n\/2$.\n\nIn clock tick $k + 1$, a node $j \\in S(k)^{c}$ will receive the\nmessage if it contacts a node $i \\in S(k)$ and pulls the message from\n$i$. As such,\n\\begin{equation*}\nE[|S(k)^{c}| - |S(k + 1)^{c}| \\mid S(k)^{c}]\n\\geq \\sum_{j \\in S(k)^{c}, i \\notin S(k)^{c}} \\frac{P_{ji}}{n},\n\\end{equation*}\n\n\\noindent\nand thus\n\\begin{eqnarray}\n\\notag\nE[|S(k + 1)^{c}| \\mid S(k)^{c}]\n& \\leq & |S(k)^c|\n- \\frac{\\sum_{j \\in S(k)^{c}, i \\notin S(k)^c} P_{ji}}{n}\n\\notag \\\\\n& = & |S(k)^c|\n\\left(1\n- \\frac{\\sum_{j \\in S(k)^c, i \\notin S(k)^c} P_{ji}}{n |S(k)^c|} \\right)\n\\notag \\\\\n& \\leq & |S(k)^c| \\left( 1 - \\hat{\\Phi} \\right).\n\\label{e:5}\n\\end{eqnarray}\n\nWe note that this inequality holds even when $|S(k)^{c}| = 0$, and as\na result it is valid for all clock ticks $k$ in the second phase.\nRepeated application of \\eqref{e:5} yields\n\\begin{eqnarray*}\nE[|S(k)^{c}|]\n& = & E[E[|S(k)^{c}| \\mid S(k - 1)^{c}]] \\\\\n& \\leq & \\left(1 - \\hat{\\Phi} \\right)E[|S(k - 1)^{c}|] \\\\\n& \\leq & \\left(1 - \\hat{\\Phi} \\right)^{k} E[|S(0)^{c}|] \\\\\n& \\leq & \\exp \\left(-\\hat{\\Phi} k \\right) \\left(\\frac{n}{2} \\right).\n\\end{eqnarray*}\n\nFor\n$k_{2} = \\ln (n^{2}\/\\delta)\/\\hat{\\Phi} =\n(2\\ln n + \\ln (1\/\\delta))\/\\hat{\\Phi}$,\nwe have $E[|S(k_{2})^{c}|] \\leq \\delta\/(2n)$. 
Markov's inequality now\nimplies the following upper bound on the probability that not all of\nthe nodes have the message at the end of clock tick $k_{2}$ in the\nsecond phase.\n\\begin{eqnarray}\n\\Pr(|S(k_{2})^{c}| > 0) & = & \\Pr(|S(k_{2})^{c}| \\geq 1)\n\\notag \\\\\n& \\leq & E[|S(k_{2})^{c}|]\n\\notag \\\\\n& \\leq & \\frac{\\delta}{2n}.\n\\label{e8}\n\\end{eqnarray}\n\nCombining the analysis of the two phases, we obtain that, for\n$k' = k_{1} + k_{2} = O((\\log n + \\log \\delta^{-1})\/\\hat{\\Phi})$,\n$\\Pr(S_{v}(k') \\neq V) \\leq \\delta\/n$. Applying the union bound over\nall the nodes in the graph, and recalling that\n$\\hat{\\Phi} = \\Phi(P)\/n$, we conclude that\n\\begin{eqnarray*}\nK(\\delta)\n& \\leq & k' \n~ = ~ O\\left(n \\frac{\\log n + \\log \\delta^{-1}}{\\Phi(P)}\\right).\n\\end{eqnarray*}\nThis completes the proof of Lemma \\ref{lem:asynchticks}.\n\\end{proof}\n\nTo extend the bound in Lemma \\ref{lem:asynchticks} to absolute time,\nobserve that Corollary \\ref{discrete-to-contc} implies that the\nprobability that\n$\\kappa = K(\\delta\/3) + 27 \\ln (3\/\\delta) =\nO(n(\\log n + \\log \\delta^{-1})\/\\Phi(P))$\nclock ticks do not occur in absolute time\n$(4\/3) \\kappa\/n = O((\\log n + \\log \\delta^{-1})\/\\Phi(P))$ is at most\n$2 \\delta\/3$. Applying the union bound now yields\n$T_{SPREAD(P)}^{\\text{spr}}(\\delta) =\nO((\\log n + \\log \\delta^{-1})\/\\Phi(P))$,\nthus establishing the upper bound in Theorem \\ref{thm:main2} for the\nasynchronous time model.\n\n\\subsection{Synchronous model}\n\nIn the synchronous time model, in each time slot every node contacts a\nneighbor to exchange messages. Thus, $n$ communication events may\noccur simultaneously. Recall that absolute time is measured in rounds\nor time slots in the synchronous model.\n\nThe analysis of the randomized gossip algorithm for information\nspreading in the synchronous model is similar to the analysis for the\nasynchronous model. 
However, we need additional analytical arguments\nto reach analogous conclusions due to the technical challenges\npresented by multiple simultaneous transmissions.\n\nIn this section, we sketch a proof of the time bound in Theorem\n\\ref{thm:main2},\n$T_{SPREAD(P)}^{\\text{spr}}(\\delta) =\nO((\\log n + \\log \\delta^{-1})\/\\Phi(P))$,\nfor the synchronous time model. Since the proof follows a similar\nstructure as the proof of Lemma \\ref{lem:asynchticks}, we only point\nout the significant differences.\n\nAs before, we fix a node $v \\in V$, and study the evolution of the\nsize of the set $S(t) = S_{v}(t)$. Again, we divide the execution of\nthe algorithm into two phases based on the evolution of $S(t)$: in the\nfirst phase $|S(t)| \\leq n\/2$, and in the second phase\n$|S(t)| > n\/2$. In the first phase, we analyze the increase in\n$|S(t)|$, while in the second we study the decrease in $|S(t)^{c}|$.\nFor the purpose of analysis, in the first phase we ignore the effect\nof the increase in $|S(t)|$ due to the {\\em pull} aspect of protocol:\nthat is, when node $i$ contacts node $j$, we assume (for the purpose\nof analysis) that $i$ sends the messages it has to $j$, but that $j$\ndoes not send any messages to $i$. Clearly, an upper bound obtained\non the time required for every node to receive every message under\nthis restriction is also an upper bound for the actual algorithm.\n\nConsider a time slot $t + 1$ in the first phase. For $j \\notin S(t)$,\nlet $X_{j}$ be an indicator random variable that is $1$ if node $j$\nreceives the message $m_{v}$ via a push from some node $i \\in S(t)$ in\ntime slot $t + 1$, and is $0$ otherwise. 
The probability that $j$\ndoes not receive $m_{v}$ via a push is the probability that no node\n$i \\in S(t)$ contacts $j$, and so\n\\begin{eqnarray}\nE[X_{j} \\mid S(t)]\n& = & 1 - \\Pr(X_{j} = 0 \\mid S(t))\n\\notag \\\\\n& = & 1 - \\prod_{i \\in S(t)} (1 - P_{ij})\n\\notag \\\\\n& \\geq & 1 - \\prod_{i \\in S(t)} \\exp(-P_{ij})\n\\notag \\\\\n& = & 1 - \\exp \\left(-\\sum_{i \\in S(t)} P_{ij} \\right).\n\\label{pullproblower}\n\\end{eqnarray}\n\n\\noindent\nThe Taylor series expansion of $\\exp(-z)$ about $z = 0$ implies that,\nif $0 \\leq z \\leq 1$, then\n\\begin{equation}\n\\exp(-z) \\leq 1 - z + z^{2}\/2 \\leq 1 - z + z\/2 = 1 - z\/2.\n\\label{taylorexpsecondterm}\n\\end{equation}\n\n\\noindent\nFor a doubly stochastic matrix $P$, we have\n$0 \\leq \\sum_{i \\in S(t)} P_{ij} \\leq 1$, and so we can combine\n\\eqref{pullproblower} and \\eqref{taylorexpsecondterm} to obtain\n\\begin{equation*}\nE[X_{j} \\mid S(t)]\n\\geq \\frac{1}{2} \\sum_{i \\in S(t)} P_{ij}.\n\\end{equation*}\n\nBy linearity of expectation,\n\\begin{eqnarray*}\nE[|S(t + 1)| - |S(t)| \\mid S(t)]\n& = & \\sum_{j \\not\\in S(t)} E[X_{j} \\mid S(t)]\n\\notag \\\\\n& \\geq & \\frac{1}{2} \\sum_{i \\in S(t), j \\not\\in S(t)} P_{ij}\n\\notag \\\\\n& = & \\frac{|S(t)|}{2}\n\\frac{\\sum_{i \\in S(t), j \\not\\in S(t)} P_{ij}}{|S(t)|}.\n\\end{eqnarray*}\n\n\\noindent\nWhen $|S(t)| \\leq n\/2$, we have\n\\begin{equation}\nE[|S(t + 1)| - |S(t)| \\mid S(t)]\n\\geq |S(t)| \\frac{\\Phi(P)}{2}.\n\\label{synchexpinccond}\n\\end{equation}\n\nInequality \\eqref{synchexpinccond} is analogous to inequality\n\\eqref{e4} for the asynchronous time model, with $\\Phi(P)\/2$ in the\nplace of $\\hat{\\Phi}$. We now proceed as in the proof of Lemma\n\\ref{lem:asynchticks} for the asynchronous model. 
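The elementary bound in \eqref{taylorexpsecondterm} — $\exp(-z) \leq 1 - z + z^{2}/2 \leq 1 - z/2$ on $[0, 1]$ — can be verified numerically in a couple of lines:

```python
import math

# For 0 <= z <= 1: exp(-z) <= 1 - z + z^2/2 (alternating Taylor series)
# and z^2 <= z there, so exp(-z) <= 1 - z/2, the bound used above.
for k in range(101):
    z = k / 100.0
    assert math.exp(-z) <= 1.0 - z + z * z / 2.0 + 1e-12
    assert math.exp(-z) <= 1.0 - z / 2.0 + 1e-12
```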
Note that\n$|S(t + 1)| \\leq 2 |S(t)|$ here in the synchronous model because of\nthe restriction in the analysis to only consider the push aspect of\nthe protocol in the first phase, as each node in $S(t)$ can push a\nmessage to at most one other node in a single time slot. Repeating\nthe analysis from the asynchronous model leads to the conclusion that\nthe first phase of the algorithm ends in\n$O\\left(\\frac{\\log n + \\log \\delta^{-1}}{{\\Phi(P)}}\\right)$ time with\nprobability at least $1 - \\delta\/2n$.\n\nThe analysis of the second phase is the same as that presented for the\nasynchronous time model, with $\\hat{\\Phi}$ replaced by $\\Phi$. As a\nsummary, we obtain that it takes at most\n$O\\left(\\frac{\\log n + \\log \\delta^{-1}}{{\\Phi(P)}}\\right)$ time for the\nalgorithm to spread all the messages to all the nodes with probability\nat least $1 - \\delta$. This completes the proof of Theorem\n\\ref{thm:main2} for the synchronous time model.\n\n\n\\section{Applications}\n\\label{sec:appl}\n\nWe study here the application of our preceding results to several\ntypes of graphs. In particular, we consider complete graphs,\nconstant-degree expander graphs, and grid graphs. We use grid graphs\nas an example to compare the performance of our algorithm for\ncomputing separable functions with that of a known iterative averaging\nalgorithm.\n\nFor each of the three classes of graphs mentioned above, we are\ninterested in the $\\delta$-information-spreading time\n$T_{SPREAD(P)}^{\\text{spr}}(\\delta)$, where $P$ is a doubly stochastic\nmatrix that assigns equal probability to each of the neighbors of any\nnode. 
Specifically, the probability $P_{ij}$ that a node $i$ contacts\na node $j \\neq i$ when $i$ becomes active is $1\/\\Delta$, where\n$\\Delta$ is the maximum degree of the graph, and\n$P_{ii} = 1 - d_{i}\/\\Delta$, where $d_{i}$ is the degree of $i$.\nRecall from Theorem \\ref{thm:main1} that the information dissemination\nalgorithm SPREAD($P$) can be used as a subroutine in an algorithm for\ncomputing separable functions, with the running time of the resulting\nalgorithm being a function of $T_{SPREAD(P)}^{\\text{spr}}(\\delta)$.\n\n\\subsection{Complete graph}\n\nOn a complete graph, the transition matrix $P$ has $P_{ii} = 0$ for\n$i = 1, \\dots, n$, and $P_{ij} = 1\/(n - 1)$ for $j \\neq i$. This\nregular structure allows us to directly evaluate the conductance of\n$P$, which is $\\Phi(P) \\approx 1\/2$. This implies that the\n($\\epsilon$, $\\delta$)-computing time of the algorithm for computing\nseparable functions based on SPREAD($P$) is\n$O(\\epsilon^{-2} (1 + \\log \\delta^{-1})(\\log n + \\log \\delta^{-1}))$.\nThus, for a constant $\\epsilon \\in (0, 1)$ and $\\delta = 1\/n$, the\ncomputation time scales as $O(\\log^{2} n)$.\n\n\\subsection{Expander graph}\n\nExpander graphs have been used for numerous applications, and explicit\nconstructions are known for constant-degree expanders \\cite{rvw}. We\nconsider here an undirected graph in which the maximum degree of any\nvertex, $\\Delta$, is a constant. Suppose that the edge expansion of\nthe graph is\n\\begin{equation*}\n\\min_{S \\subset V, \\; 0 < |S| \\leq n\/2}\n\\frac{|F(S, S^{c})|}{|S|} = \\alpha,\n\\end{equation*}\n\n\\noindent\nwhere $F(S, S^{c})$ is the set of edges in the cut $(S, S^{c})$, and\n$\\alpha > 0$ is a constant. The transition matrix $P$ satisfies\n$P_{ij} = 1\/\\Delta$ for all $i \\neq j$ such that $(i, j) \\in E$, from\nwhich we obtain $\\Phi(P) \\geq \\alpha\/\\Delta$. 
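For small $n$, the conductance can be computed by brute force directly from its definition, which gives a concrete check on the values quoted in this section (the enumeration is exponential in $n$, so this is a sanity check only; the function name is ours):

```python
from itertools import combinations

def conductance(P):
    """Phi(P) = min over S with 0 < |S| <= n/2 of
    (sum of P[i][j] over i in S, j not in S) / |S|, by exhaustive search."""
    n = len(P)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in combinations(range(n), size):
            out = set(range(n)) - set(S)
            cut = sum(P[i][j] for i in S for j in out)
            best = min(best, cut / size)
    return best
```

For the complete graph with $P_{ij} = 1/(n-1)$ off the diagonal, the minimizing cut is a half split, giving $\Phi(P) = (n/2)/(n-1)$ for even $n$, which tends to $1/2$.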
When $\\alpha$ and\n$\\Delta$ are constants, this leads to a similar conclusion as in the\ncase of the complete graph: for any constant $\\epsilon \\in (0, 1)$ and\n$\\delta = 1\/n$, the computation time is $O(\\log^{2} n)$.\n\n\\subsection{Grid}\n\\label{gridsec}\n\nWe now consider a $d$-dimensional grid graph on $n$ nodes, where\n$c = n^{1\/d}$ is an integer. Each node in the grid can be represented\nas a $d$-dimensional vector $a = (a_{i})$, where\n$a_{i} \\in \\{1, \\dots, c\\}$ for $1 \\leq i \\leq d$. There is one node\nfor each distinct vector of this type, and so the total number of\nnodes in the graph is $c^{d} = (n^{1\/d})^{d} = n$. For any two nodes\n$a$ and $b$, there is an edge $(a, b)$ in the graph if and only if,\nfor some $i \\in \\{1, \\dots, d\\}$, $|a_{i} - b_{i}| = 1$, and\n$a_{j} = b_{j}$ for all $j \\neq i$.\n\nIn \\cite{isogrid}, it is shown that the isoperimetric number of this\ngrid graph is\n\\begin{equation*}\n\\min_{S \\subset V, \\; 0 < |S| \\leq n\/2}\n\\frac{|F(S, S^{c})|}{|S|}\n= \\Theta \\left(\\frac{1}{c} \\right)\n= \\Theta \\left(\\frac{1}{n^{1\/d}} \\right).\n\\end{equation*}\n\n\\noindent\nBy the definition of the edge set, the maximum degree of a node in the\ngraph is $2d$. This means that $P_{ij} = 1\/(2d)$ for all $i \\neq j$\nsuch that $(i, j) \\in E$, and it follows that\n$\\Phi(P) = \\Omega \\left(\\frac{1}{dn^{1\/d}} \\right)$. Hence, for any\n$\\epsilon \\in (0, 1)$ and $\\delta \\in (0, 1)$, the\n($\\epsilon$, $\\delta$)-computing time of the algorithm for computing\nseparable functions is\n$O(\\epsilon^{-2} (1 + \\log \\delta^{-1})(\\log n + \\log \\delta^{-1})\nd n^{1\/d})$.\n\n\\subsection{Comparison with Iterative Averaging}\n\nWe briefly contrast the performance of our algorithm for computing\nseparable functions with that of the iterative averaging algorithms in\n\\cite{tsitsiklis-thesis} \\cite{bgps}. 
As noted earlier, the running time of our algorithm is proportional to\n$1\/\\Phi(P)$, which is a lower bound on the running time of iterative\nalgorithms based on a stochastic matrix $P$.\n\nIn particular, when our algorithm is used to compute the average of a\nset of numbers (by estimating the sum of the numbers and the number of\nnodes in the graph) on a $d$-dimensional grid graph, it follows from\nthe analysis in Section \\ref{gridsec} that the amount of time required\nto ensure the estimate is within a $(1 \\pm \\epsilon)$ factor of the\naverage with probability at least $1 - \\delta$ is\n$O(\\epsilon^{-2} (1 + \\log \\delta^{-1})(\\log n + \\log \\delta^{-1})\ndn^{1\/d})$\nfor any $\\epsilon \\in (0, 1)$ and $\\delta \\in (0, 1)$. So, for a\nconstant $\\epsilon \\in (0, 1)$ and $\\delta = 1\/n$, the computation\ntime scales as $O(dn^{1\/d} \\log^{2} n)$ with the size of the graph,\n$n$. The algorithm in \\cite{bgps} requires $\\Omega(n^{2\/d} \\log n)$\ntime for this computation. Hence, the running time of our algorithm\nis (for fixed $d$, and up to logarithmic factors) the {\\em square\nroot} of the running time of the iterative algorithm! This\nrelationship holds on other graphs for which the spectral gap is\nproportional to the square of the conductance.\n\n\n\\section{Conclusions and Future Work}\n\\label{sec:conc}\n\nIn this paper, we presented a novel algorithm for computing separable\nfunctions in a totally distributed manner. The algorithm is based on\nproperties of exponential random variables, and the fact that the\nminimum of a collection of numbers is an order- and\nduplicate-insensitive statistic.\n\nOperationally, our algorithm makes use of an information spreading\nmechanism as a subroutine. This led us to the analysis of a\nrandomized gossip mechanism for information spreading. 
We obtained an\nupper bound on the information spreading time of this algorithm in\nterms of the conductance of a matrix that characterizes the algorithm.\n\nIn addition to computing separable functions, our algorithm improves\nthe computation time for the canonical task of averaging. For\nexample, on graphs such as paths, rings, and grids, the performance of\nour algorithm is of a smaller order than that of a known iterative\nalgorithm.\n\nWe believe that our algorithm will lead to the following totally\ndistributed computations: (1) an approximation algorithm for convex\nminimization with linear constraints; and (2) a ``packet marking''\nmechanism in the Internet. These areas, in which summation is a key\nsubroutine, will be topics of our future research.\n\n\\section{Acknowledgments}\n\nWe thank Ashish Goel for a useful discussion and for providing\nsuggestions, based on previous work \\cite{Goel}, when we started this\nwork.\n\n\\bibliographystyle{abbrv}\n\n\\section{Introduction}\n\nABCG~85{} is a very rich cluster located at a redshift z=0.0555. We\nperformed a detailed analysis of this cluster from the X-ray point of\nview, based on Einstein IPC data (Gerbal et al. 1992 and references\ntherein). In the optical, no photometric data were available at that\ntime, except for an incomplete photometric catalogue by Murphy (1984),\nand about 150 redshifts were published in the literature only after we\ncompleted our first X-ray analysis (Beers et al. 1991, Malumuth et\nal. 1992). We therefore undertook a more complete analysis of this cluster,\nwith the aim of obtaining both photometric and redshift data at\noptical wavelengths and better X-ray data from the ROSAT data bank\n(Pislar et al. 1997, Lima--Neto et al. 1997). We present here our photometric\ndata. The redshift catalogue is published in a companion paper (Durret\net al. 
1997a) and the analysis of all these combined optical data will be\npresented in Paper III (Durret et al. in preparation).\n\n\section{The photographic plate data}\n\n\subsection{Method for obtaining the catalogue}\n\nWe decided to obtain a photometric catalogue of the galaxies in the\ndirection of the Abell 85 cluster of galaxies by first processing the field\n681 in the SRC-J Schmidt atlas. This blue glass copy plate (IIIaJ$+$GG385)\nwas investigated with the MAMA (Machine \`a Mesurer pour l'Astronomie)\nfacility located at the Centre d'Analyse des Images at the Observatoire de\nParis and operated by CNRS\/INSU (Institut National des Sciences de l'Univers).\nIn order to also get information on the neighbouring galaxy distribution, the\ncentral 5$^{\circ}\times$~5$^{\circ}$ area has been searched for objects\nusing the on-line mode with the 10~$\mu$m step size available at that time.\nThe algorithmic steps involved are well known. They can be summarized as\nfollows~:\nfirst a local background estimate and its variance are computed from\npixel values inside a 256~$\times$~256 window, then pixels with a\nnumber of counts higher than the background value plus three times the\nvariance are flagged, which leads to defining an object as a set of\nconnected flagged pixels; an overlapping zone of 512 pixels is used in\nboth directions for each individual scan. Although this method may appear\nrather crude, its efficiency is nevertheless quite high for properly\ndetecting and measuring simple and isolated objects smaller than the\nbackground scale. The region where ABCG~85 is located is not crowded by\nstellar images ($b_{\rm II}\simeq-72^{\circ}$), so that most of the objects\nlarger than a few pixels can indeed be detected this way. 
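These thresholding-and-connection steps can be sketched compactly. The following is a toy stand-in, not the MAMA pipeline: a single global median and standard deviation replace the windowed background estimate and its variance, and the 512-pixel overlap handling is omitted.

```python
import statistics

def detect_objects(image, k=3.0):
    """Flag pixels above background + k*spread and group 4-connected
    flagged pixels into objects (toy version of the plate-scanning steps)."""
    pixels = [v for row in image for v in row]
    background = statistics.median(pixels)
    spread = statistics.pstdev(pixels)      # stand-in for the local variance
    flagged = {(i, j)
               for i, row in enumerate(image)
               for j, v in enumerate(row)
               if v > background + k * spread}
    objects, seen = [], set()
    for seed in flagged:
        if seed in seen:
            continue
        seen.add(seed)
        stack, component = [seed], []
        while stack:                         # flood-fill one connected object
            i, j = stack.pop()
            component.append((i, j))
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in flagged and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        objects.append(component)
    return objects

# Flat background at 10 with a 2x2 source and an isolated bright pixel:
image = [[10.0] * 8 for _ in range(8)]
for i in (1, 2):
    for j in (1, 2):
        image[i][j] = 100.0
image[5][5] = 100.0
objects = detect_objects(image)              # two objects, of 4 and 1 pixels
```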
\nThe result was a list of more than 10$^5$ objects distributed over\nthe $\\sim$~25 square degrees of the field bounded by 0$^{\\rm\nh}$31$^{\\rm mn}$30.4$^{\\rm s}$ $<\\alpha<$ 0$^{\\rm h}$53$^{\\rm\nmn}$10.6$^{\\rm s}$ and $-$12$^{\\circ}$18'19.43\" $<\\delta<$\n$-$7$^{\\circ}$05'13.88\" (equinox 2000.0, as hereafter), with their\ncoordinates, their shape parameters (area, elliptical modelling) and\ntwo flux descriptors (peak density, sum of background-subtracted\npixel values).\n\nThe astrometric reduction of the whole catalogue was performed with\nrespect to 91 stars of the PPM star catalogue (Roeser \\& Bastian 1991)\nspread over the field, using a 3$^{\\rm rd}$-order polynomial fitting. The\nresiduals of the fit yielding the instrumental constants were smaller\nthan 0.25 arcsecond and the astrometry of our catalogue indeed appears \nto be very good, as confirmed by our multi-object fibre spectroscopy where\nthe galaxies were always found to be very close ($<$~2.0 arcseconds, i.e.\n3 pixels) to the expected positions.\n\nSince the required CCD observations were not available at that time, a\npreliminary photometric calibration of these photographic data has\nbeen done using galaxies with known total blue magnitude. The\nmagnitude of stars is certainly much easier to define, but such\nhigh-surface brightness objects suffer from severe saturation effects\non Schmidt plates when they are bright enough to be included in\navailable photometric catalogues. So, 83 galaxies were selected from\nthe Lyon Extragalactic Database (LEDA) in order to compare their\nmagnitude to their measured blue flux. A small region around each of\nthese objects was scanned and this image has been used: i) to identify\nthe object among its neighbours within the coordinate list and ii) to\nassess the quality of the flux value stored in the on-line catalogue\nwith respect to close, overlapping or merged objects. 
The 74 remaining\nundisturbed objects identified with no ambiguity came from eight\ndifferent catalogues in the literature. Whatever the intrinsic\nuncertainties about the integrated MAMA fluxes are, systematic\neffects were found with respect to the parent catalogue in a flux\nversus magnitude plot, as well as discrepancies for some objects\nbetween the LEDA and the Centre de Donn\'ees Astronomiques de Strasbourg (CDS)\ndatabases. Consequently, three catalogues including 12 objects were\nremoved and the LEDA magnitude of 5 objects was replaced by a CDS\nvalue which seemed in better agreement with their aspect and with the overall\ntrend when compared to similar objects. Later, 7 objects far from the overall\ntrend were discarded. These successive rejections resulted in a set of\n55 objects distributed over a six magnitude range. The magnitude\nzero-point for our photographic catalogue was obtained by plotting the\nflux of these objects against their expected magnitude. An rms scatter\nof 0.34 mag was computed around the linear fit.\n\n\subsection{Classification of the objects}\n\nMost of the diffuse objects included in our main catalogue were\nautomatically selected according to their lower surface brightness\nwhen compared to stars. As usual for glass copies of survey plates,\nthe discrimination power of this brightness criterion drops sharply\nfor objects fainter than approximately 19$^{\rm th}$ magnitude, and so does\nthe completeness of the resulting catalogue if no\ncontamination is allowed for. The number of galaxy candidates brighter\nthan this limit within the investigated area appeared, however, to be\nalready large enough to get a much better view of the bright galaxy\ndistribution than using the deeper but very incomplete catalogue\npublished by Murphy (1984). Moreover, including the faintest objects was\nnot necessary for the redshift survey of the Abell~85 cluster of\ngalaxies we were planning (see Durret et al. 1997). 
Hence, no attempt was\nmade to reach a fainter completeness limit. Nonetheless, in order to\nselect galaxies, the decision curve which has been computed in the Flux\nvs. Area parameter space was fitted to the data so that\nsome objects identified by Murphy from CCD frames as faint galaxies\nwere also classified as galaxies by us. Next, a further test based on\nthe elongation was performed in order to reject linear plate flaws or\nartefacts, as well as to recover bright elongated galaxies first\nclassified as stars due to strong saturation effects. Finally,\nspurious detections occurring around very bright stars (area greater\nthan 10$^3$ pixels) due to a wrong estimate of the local background were\ntentatively removed by checking their location with respect to these\nbright objects.\nIn this way, a list of more than 25,000 galaxy candidates over the 25\nsquare degrees of our SRC-J~681 blue field was obtained.\n\n\begin{figure*}[ht!]\n\centerline{\psfig{figure=DS1413F1.ps,height=14cm}}\n\caption[ ]{Spatial distribution of the 11,862 galaxies brighter than \nB$_{\rm J}=$~19.75 in the SRC-J~681 field. The large overdensities are \nindicated by \nsuperimposed isopleths from a density map computed by the method introduced \nby Dressler (1980) with $N=$~50~; eleven isopleths are drawn from 850 to \n2,850 galaxies\/square degree.}\n\protect\label{allplate}\n\end{figure*}\n\nThe distribution of these galaxies is displayed in Fig.~\ref{allplate}\nfor objects brighter than B$_{\rm J}=$~19.75. 
The Abell~85 cluster is\nclearly visible, as well as several other density enhancements which\nare mostly located along the direction defined by the cluster\nellipticity.\n\n\subsection{Completeness and accuracy of the classification}\n\n\begin{figure}[h!]\n\centerline{\psfig{figure=DS1413F2.ps,height=7cm,angle=-90}}\n\caption[ ]{Differential magnitude distribution of the 25$\times$10$^3$ galaxy \ncandidates in the SRC-J 681 field.}\n\protect\label{fdl}\n\end{figure}\n\nThe differential luminosity distribution of the galaxy candidates\nindicates that the sample appears quite complete down to the\nB$_{\rm J}=$~19.75 magnitude (see Fig.~\ref{fdl}). To go further, we first\ntested the completeness of this overall list by cross-identifying it\nwith three catalogues from the literature (Murphy 1984, Beers et\nal. 1991, Malumuth et al. 1992) with the help of images obtained from\nthe mapping mode of the MAMA machine. It appeared that: i) all but one\ngalaxy of the Malumuth et al. (1992) catalogue of 165 objects are actually\nclassified as galaxies, with a mean offset between individual\npositions equal to 1.10~$\pm$~0.06 arcsecond ; ii) 94\% of the 35\ngalaxies listed by Beers et al. (1991) inside the area are included in our\ncatalogue, only 2 bright objects which suffer from severe saturation\nbeing misclassified. Note that such an effect also caused 5 of the 83\ngalaxies chosen as photometric standards to be misclassified, which\ngives the same percentage as for the sample by Beers et al. The\ncomparison with the faint CCD catalogue built by Murphy (1984) in the so-called\n$r_{\rm F}$ band (quite similar to that obtained using a photographic IIIaF\nemulsion with an R filter) was performed only for objects which were visible\non the photographic plate with secure identification (only uncertain X\nand Y coordinates are provided in the paper) and classified without any\ndoubt as galaxies from our visual examination. 
There remained 107\nobjects out of 170, among which 88 are brighter than B$_{\rm J}\sim$~19.75\n($r_{\rm F}\sim$~18.5). Down to this flux limit, 82 objects ($\sim$~93\%)\nare in agreement, thereby validating the choice of our decision curve in the\nFlux vs. Area parameter space. These cross-identifications therefore indicate\nthat the completeness of our catalogue is about 95\% for such objects,\nas expected from similar studies at high galactic latitude.\n\nIn order to confirm this statement and to study the homogeneity of our\ngalaxy catalogue, we then decided to verify carefully its reliability\ninside the region of the Abell~85 cluster of galaxies itself. The\ncentre of ABCG~85 was assumed to be located at the equatorial\ncoordinates given in the literature, $\alpha=$~0$^{\rm h}$41$^{\rm\nmn}$49.8$^{\rm s}$ and $\delta=$~$-$9$^{\circ}$17'33.\", and a square\nregion of $\pm$1$^\circ$ around this position was defined; such an\nangular distance corresponds to $\sim$~2.7~Mpc~h$_{100}^{-1}$ at the\nredshift of the cluster ($z=$~0.0555). However, let us remark that the\nposition of the central D galaxy is slightly different,\n$\alpha=$~0$^{\rm h}$41$^{\rm mn}$50.5$^{\rm s}$ and\n$\delta=$~$-$9$^{\circ}$18'11.\", and so is the centre we found from\nour X-ray analysis of the diffuse component of this cluster, i.e.:\n$\alpha=$~0$^{\rm h}$41$^{\rm mn}$51.9$^{\rm s}$ and\n$\delta=$~$-$9$^{\circ}$18'17.\" (Pislar et al. 1997). For all our\nfuture studies, we then chose to define the cluster centre as that of\nthis X-ray component.\n\nThe distribution of the $\sim$~4,100 candidates within the area was\nfirst visually inspected to remove remaining conspicuous\nfalse detections around some stars as well as some defects mainly due to a\nsatellite track crossing the field. This cleaned catalogue contains a\nlittle more than 4,000 galaxy-like objects, half of which are brighter\nthan B$_{\rm J}=$~19.75. 
The intrinsic quality of this list has then been\nchecked against a visual classification of all the recorded objects\nwithin a $\\pm$~11'25\" area covering the region already\nobserved by Murphy (1984) around the location $\\alpha=$~0$^{\\rm\nh}$41$^{\\rm mn}$57.0$^{\\rm s}$ and $\\delta=$~$-$9$^{\\circ}$23'05\". The\ninspection of the corresponding MAMA frame of 2048~$\\times$~2048\npixels enabled us to give a morphological code to each object, as well\nas to flag superimposed objects and to deblend manually 10 galaxies\n(new positions and flux estimates for each galaxy member). Of course,\nthe discrimination power of this visual examination decreases for\nstar-like objects fainter than B$_{\\rm J}=$~18.5 ($r_{\\rm F}\\sim$~17.3)\ndue to the sampling involved (pixel size of 0.67\"), and an exact\nclassification of such objects appeared to be hopeless above the {\\it\na priori} completeness limit of our automated galaxy list guessed to\nbe B$_{\\rm J}=$~19.75. Down to this limit, our results can be summarized as\nfollows~: i) $\\sim$94\\% of the selected galaxies are true galaxies\n(including 7 multiple galaxies and 2 mergers with stars), while 4\\%\nmay be galaxies~; ii) 7 genuine galaxies are missed (4\\%). Since these\ncontamination and incompleteness levels of 5--6\\% were\nsatisfactory, we decided to set the completeness limit for our\nautomated galaxy catalogue at this magnitude B$_{\\rm J}=$~19.75.\n\n\\subsection{The photographic plate catalogue}\n\n\\begin{figure}[ht!]\n\\centerline{\\psfig{figure=DS1413F3.ps,height=8cm}}\n\\caption[ ]{Positions of the 4,232 galaxies detected on the photographic plate\nrelative to the centre of the cluster defined as the centre of the diffuse\nX-ray component. 
North is to the top and East to the left.}\n\\protect\\label{mamaxy}\n\\end{figure}\n\nFor objects fainter than our completeness limit, the visual check of\nthe inner ($\\pm$~11'25\") part of our object list has enabled us to\nconfirm the galaxy identification of 135 galaxy candidates as well as\nto select 214 misclassified faint galaxies. The total number of\ngalaxies included in the visual sample down to the detection limit is\n541, whereas the initial list only contains 338 candidates within the\nsame area. Keeping in mind that both catalogues are almost identical\nfor objects brighter than B$_{\\rm J}=$~19.75, we decided to replace the\nautomated list by the visual one inside this $\\pm$~11'25\" central\narea. Note that about 150 objects remained unclassified, including 26\ngalaxies from the CCD list by Murphy. We added these 26 galaxies to\nthe final catalogue whose galaxies are plotted in Fig.~\\ref{mamaxy}.\n\nTable~1 lists the merged catalogue of 4,232 galaxies obtained from the \nSRC-J~681 plate in the $\\pm 1^{\\circ}$ field of ABCG~85, with V and R\nmagnitudes computed using the transformation laws obtained from our CCD\ndata (see \\S 3.3). This Table includes the following information~: running\nnumber~; equatorial coordinates (equinox 2000.0)~; ellipticity ; position\nangle of the major axis~; B$_{\\rm J}$, V, and R magnitudes~; X and Y\npositions in arcsecond relative to the centre defined as that of the diffuse\nX-ray emission of the cluster (see above)~; cross-identifications with the\nlists by Malumuth et al. (1992), Beers et al. (1991) and Murphy (1984).\n\n\\section{The CCD data}\n\n\\subsection{Description of the observations}\n\n\\begin{figure}[ht!]\n\\centerline{\\psfig{figure=DS1413F4.ps,height=5cm}}\n\\caption[ ]{Distribution of the fields observed with CCD imaging. The size\nof each field is 6.4$\\times$6.4~arcmin$^2$. 
Positions are drawn relative to \nthe centre with equatorial coordinates\n$\alpha=$~0$^{\rm h}$41$^{\rm mn}$46.0$^{\rm s}$ and\n$\delta=$~$-$9$^{\circ}$20'10\".}\n\protect\label{champsccd}\n\end{figure}\n\nThe observations were performed with the Danish 1.5m telescope at\nESO La Silla during 2 nights on November 2 and 3, 1994 (the third night was \ncloudy, and this accounts for the missing fields in Fig.~\ref{champsccd}). \nA sketch of the observed fields is displayed in Fig.~\ref{champsccd}. Field~1\nwas centered on the coordinates~: 00$^{\rm h}$41$^{\rm mn}$46.00$^{\rm s}$, \n$-9^\circ$20'10.0\" (2000.0). There was almost no overlap between the\nvarious fields (only a few arcseconds). The Johnson\nV and R filters were used. Exposure times were 10~mn for all fields; 1~mn \nexposures were also taken for a number of fields with bright objects in\norder to avoid saturation. The detector was CCD~\#28 with 1024$^2$ pixels of \n24~$\mu$m, giving a sampling on the sky of 0.377\"\/pixel, and a size of \n6.4$\times$6.4~arcmin$^2$ for each field. \nThe seeing was poor the first night~: 1.5--2\" for fields 1 and 2, 2--3\" for \nfield 3 (in which consequently the number of galaxies detected is much \nsmaller), and good the second night~: 0.75--1.1\". On the \nother hand, the photometric quality of the first night was better than that\nof the second one. However, the observation of many standard stars per night\nmade a correct photometric calibration possible even for the second night as\nindicated by a comparison with an external magnitude list~: the photometric\ncatalogues from the six fields have the same behaviour for both nights\n(see e.g. Fig.~8).\n\n\n\subsection{Data reduction}\n\nCorrections for bias and flat-field were performed in the usual way with\nthe IRAF software. 
Only flat fields obtained on the sky at twilight and\ndawn were used; dome flat fields were discarded because they showed too\nmuch structure.\n\nEach field was reduced separately. The photometric calibration took\ninto account the exposure time, the time at which the exposure had\nbeen made, the color index (V-R), the airmass, and a second order term\nincluding both the color index and airmass. The photometric\ncharacteristics of both nights were estimated separately.\n\nObjects were automatically detected using the task\nDAOPHOT\/DAOFIND. This task performs a convolution with a gaussian\nhaving previously chosen characteristics, taking into account the\nseeing in each frame (FWHM of the star-like profiles in the image) as\nwell as the CCD readout noise and gain. Objects are identified as the\npeaks of the convolved image which are higher than a given threshold\nabove the local sky background (chosen as approximately equal to\n4~$\\sigma$ of the image mean sky level). A list of detected objects\nis thus produced and interactively corrected on the displayed image so\nas to discard spurious objects, add undetected ones (usually close to\nthe CCD edges) and dispose of false detections caused by the events\nflagged in the previous section. Since exposure times were the same in\nV and R, the number of objects detected in the R band is of course much\nlarger.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=DS1413F5.ps,height=6cm}}\n\\caption[ ]{Positions of the galaxies detected in the R band relative to the\ncentre defined as the centre of the diffuse X-ray emission (see text).}\n\\protect\\label{ccdxy}\n\\end{figure}\n\nWe used the package developed by O.~Le F\\`evre (Le F\\`evre et al. 
\n1986) to obtain for each field a catalogue with the (x,y) galaxy positions, \nisophotal radii, ellipticities, major axes, position angles, and V and R \nmagnitudes within the 26.5 isophote.\nStar-galaxy separation was performed based on a compactness parameter q\ndetermined by Le F\`evre et al. (1986, see also Slezak et al. 1988), as\ndescribed in detail e.g. by Lobo et al. (1997). We chose q=1.45 as the best\nseparation limit between galaxies and stars; very bright stars were \nclassified as galaxies with this criterion and had to be eliminated manually.\nAfter eliminating repeated detections of a few objects, we obtained a total \nnumber of 805 galaxies detected in R, among which 381 are detected in V. \nThe errors on these CCD magnitudes are in all cases smaller than 0.2 magnitude,\nand their rms accuracy is about 0.1 magnitude; these rather large values are\ndue to the bad seeing during the first night and to the poor photometric\nconditions during the second night.\n\nPositions of the galaxies detected in the R band relative to the\ncentre defined above are displayed in Fig.~\ref{ccdxy}. Notice the smaller\nnumber of galaxies detected in field 3 due to a sudden worsening of the seeing \nduring the exposure on this field.\nThe astrometry of this CCD catalogue is accurate to 1.5--2.0 arcseconds as\nverified from the average mutual angular distance between CCD and MAMA\nequatorial coordinates for 174 galaxies included in both catalogues.\n\n\begin{figure}\n\centerline{\psfig{figure=DS1413F6.ps,height=7cm}}\n\caption[ ]{Histogram of all the R magnitudes of the galaxies in the\nCCD catalogue.}\n\protect\label{ccdrmag}\n\end{figure}\n\nThe histogram of the R magnitudes in the CCD catalogue is displayed in\nFig.~\ref{ccdrmag}. It will be discussed in detail in Paper~III\n(Durret et al. in preparation). 
The turnover value of this histogram\nis located between R=22 and R=23, suggesting that our catalogue is\nroughly complete up to R=22.\n\nThe (V-R) colours are plotted as a function of R for the 381 galaxies\ndetected in the V band in our CCD catalogue\n(Fig.~\\ref{coul}). Unfortunately, since the observed CCD field is\nsmall, there are only 50 of these galaxies with measured redshifts,\nand therefore it is not possible to derive a colour-magnitude relation\nfrom which to establish a membership criterion for the cluster.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=DS1413F7.ps,height=8cm}}\n\\caption[ ]{(V-R) colour as a function of R for the 381 galaxies detected\nin the V band in our CCD catalogue. The 50 galaxies indicated with a square are\nthose with redshifts in the interval 13,350 -- 20,000~km~s$^{-1}$ assumed to belong to\nABCG~85{}.}\n\\protect\\label{coul}\n\\end{figure}\n\n\\subsection{Transformation laws between the photometric systems}\n\n576 stars were also measured on the CCD images and used to calculate \ncalibration relations between our photographic plate B$_{\\rm J}$ \\ magnitudes and \nour V and R CCD magnitudes.\n\nFor stars:\\\\\n$$V_{CCD}=B_{\\rm J}- 40.8302+3.6656\\ B_{\\rm J} -0.082567\\ B_{\\rm J}^2 $$\n$$R_{CCD}=B_{\\rm J} -10.12663+0.430772\\ B_{\\rm J} $$\n\nFor galaxies where only R is detected:\\\\\n$$R_{CCD}=B_{\\rm J} -3.03532+0.121963\\ B_{\\rm J}$$\n\nFor galaxies where both V and R are detected:\\\\\n$$V_{CCD}=B_{\\rm J}-2.13942+0.108905\\ B_{\\rm J}$$\n$$R_{CCD}=B_{\\rm J}-0.566762(V-R)-2.29919+0.10482\\ B_{\\rm J}$$\n\nThe observed R band CCD magnitude $R_{CCD}$ as a function of the R\nmagnitude calculated from the photographic B$_{\\rm J}$ magnitude is plotted\nin Fig.~\\ref{rr5} for galaxies, showing the quality of the correlation\nfor the six different CCD fields, especially for objects brighter than R$=$19.\nAll the CCD fields appear to behave 
identically.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=DS1413F8.ps,height=8cm}}\n\\caption[ ]{Observed R band CCD magnitude $R_{CCD}$ as a function of the \nR magnitude calculated from the photographic B$_{\\rm J}$ magnitude. The six different \nsymbols correspond to the six CCD fields described above.}\n\\protect\\label{rr5}\n\\end{figure}\n\n\\subsection{The CCD catalogue}\n\nThe CCD photometric data for the galaxies in the field of ABCG~85{} are given \nin Table~2. This Table includes for each object the following information~:\nrunning number~; equatorial coordinates (equinox 2000.0)~; isophotal radius~;\nellipticity~; position angle of the major axis~; V and R magnitudes~; X and\nY positions in arcsecond relative to the centre assumed to have coordinates\n$\\alpha = 0^{\\rm h}41^{\\rm mn}51.90^{\\rm s}$ and \n$\\delta = -9^\\circ$18'17.0\" (equinox 2000.0) (this centre was chosen to\ncoincide with that of the diffuse X-ray gas component as defined by Pislar\net al. (1997) ).\\\\\n\n\\section{Conclusions} \n\nOur redshift catalogue is submitted jointly in a companion paper\n(Durret et al. 1997). Together with the catalogues presented here, it\nis used to give an interpretation of the optical properties of ABCG~85{}\n(Durret et al. in preparation, Paper~III), in relation with the X-ray\nproperties of this cluster (Pislar et al. 1997, Lima--Neto et al. 1997, \nPapers~I and II).\n\n\\acknowledgements {We are very grateful to the MAMA team at\nObservatoire de Paris for help when scanning the photographic plate,\nand to Cl\\'audia Mendes de Oliveira for her cheerful assistance at the\ntelescope. 
CL is fully supported by the BD\/2772\/93RM grant awarded\nby JNICT, Portugal.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Overview}\n\addcontentsline{toc}{section}{Overview}\n\nMany problems in high-dimensional statistics are believed to exhibit gaps between what can be achieved \emph{information-theoretically} (or \emph{statistically}, i.e., with unbounded computational power) and what is possible with bounded computational power (e.g., in polynomial time). Examples include finding planted cliques \cite{J-clique,DM-clique,MPW-clique,pcal} or dense communities \cite{block-model-1,block-model-2,HS-bayesian} in random graphs, extracting variously structured principal components of random matrices \cite{BR-sparse,LKZ-mmse,LKZ-sparse} or tensors \cite{HSS-tensor,sos-hidden}, and solving or refuting random constraint satisfaction problems \cite{alg-barriers,refuting-any-csp}.\n\nAlthough current techniques cannot prove that such average-case problems require super-polynomial time (even assuming $P \ne NP$), various forms of rigorous evidence for hardness have been proposed. 
These include:\n\\begin{itemize}\n \\item failure of Markov chain Monte Carlo methods \\cite{J-clique,DFJ-mcmc};\n \\item failure of local algorithms \\cite{GS-local,DM-hidden,tensor-local,subopt-local-maxcut};\n \\item methods from statistical physics which suggest failure of belief propagation or approximate message passing algorithms \\cite{block-model-1,block-model-2,LKZ-mmse, LKZ-sparse} (see \\cite{stat-phys-survey} for a survey or \\cite{BPW-phys-notes} for expository notes);\n \\item structural properties of the solution space \\cite{alg-barriers,KMRSZ,GS-local,GZ-reg,GZ-clique};\n \\item geometric analysis of non-convex optimization landscapes \\cite{ABC,matrix-tensor};\n \\item reductions from planted clique (which has become a ``canonical'' problem believed to be hard in the average case) \\cite{BR-sparse,HWX-reduction,WBS,hard-rip,bresler-sparse,bresler-pca};\n \\item lower bounds in the statistical query model \\cite{sq-kearns,sq-half,sq-clique,sq-sat,sq-gaussian,sq-robust};\n \\item lower bounds against the sum-of-squares hierarchy \\cite{grig-parity,sch-parity,DM-clique,MPW-clique,HSS-tensor,MW-sos,pcal,sos-hidden} (see \\cite{sos-survey} for a survey).\n\\end{itemize} \n\n\\noindent In these notes, we survey another emerging method, which we call the \\emph{low-degree method}, for understanding computational hardness in average-case problems. \nIn short, we explore a conjecture that the behavior of a certain quantity -- the second moment of the \\emph{low-degree likelihood ratio} -- reveals the computational complexity of a given statistical task. 
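For orientation, this quantity can be previewed here; the formulation below is the standard one, and the precise definitions (including how the degree bound $D$ is chosen) are developed in the sections that follow. Writing $L_n = \mathrm{d}\PP_n/\mathrm{d}\QQ_n$ for the likelihood ratio and $L_n^{\le D}$ for its orthogonal projection onto polynomials of degree at most $D$ in $L^2(\QQ_n)$, one computes

```latex
% Preview of the central quantity (standard formulation; formal
% definitions follow in the main text).
\[
  \big\| L_n^{\le D} \big\|
  \,=\,
  \mathop{\mathbb{E}}_{\bY \sim \QQ_n}
  \Big[ \big( L_n^{\le D}(\bY) \big)^2 \Big]^{1/2} .
\]
```

The heuristic, made precise later, is that if this norm remains bounded as $n \to \infty$ for $D$ growing mildly with $n$ (e.g., logarithmically), the testing problem is predicted to be hard for the corresponding class of algorithms, whereas divergence predicts that a successful degree-$D$ test exists.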
\nWe find the low-degree method particularly appealing because it is simple, widely applicable, and can be used to study a wide range of time complexities (e.g., polynomial, quasipolynomial, or nearly-exponential).\nFurthermore, rather than simply positing a certain ``optimal algorithm,'' the underlying conjecture captures an interpretable structural feature that seems to dictate whether a problem is easy or hard. \nFinally, and perhaps most importantly, predictions using the low-degree method have been carried out for a variety of average-case problems, and so far have always reproduced widely-believed results.\n\nHistorically, the low-degree method arose from the study of the sum-of-squares (SoS) semidefinite programming hierarchy. In particular, the method is implicit in the \\emph{pseudo-calibration} approach to proving SoS lower bounds \\cite{pcal}. Two concurrent papers \\cite{HS-bayesian,sos-hidden} later articulated the idea more explicitly. In particular, Hopkins and Steurer \\cite{HS-bayesian} were the first to demonstrate that the method can capture sharp thresholds of computational feasibility such as the Kesten--Stigum threshold for community detection in the stochastic block model. The low-degree method was developed further in the PhD thesis of Hopkins \\cite{sam-thesis}, which includes a precise conjecture about the complexity-theoretic implications of low-degree predictions. In comparison to sum-of-squares lower bounds, the low-degree method is much simpler to carry out and appears to always yield the same results for natural average-case problems.\n\nIn these notes, we aim to provide a self-contained introduction to the low-degree method; we largely avoid reference to SoS and instead motivate the method in other ways. We will briefly discuss the connection to SoS in Section~\\ref{sec:sos}, but we refer the reader to \\cite{sam-thesis} for an in-depth exposition of these connections.\n\nThese notes are organized as follows. 
In Section~\\ref{sec:decisiontheory}, we present the low-degree method and motivate it as a computationally-bounded analogue of classical statistical decision theory. In Section~\\ref{sec:agn}, we show how to carry out the low-degree method for a general class of additive Gaussian noise models. In Section~\\ref{sec:examples}, we specialize this analysis to two classical problems: the spiked Wigner matrix and spiked Gaussian tensor models. Finally, in Section~\\ref{sec:ldlr-conj-2}, we discuss various forms of heuristic and formal evidence for correctness of the low-degree method; in particular, we highlight a formal connection between low-degree lower bounds and the failure of spectral methods (Theorem~\\ref{thm:spectral-hard}).\n\n\n\\section{Towards a Computationally-Bounded Decision Theory}\\label{sec:decisiontheory}\n\n\\subsection{Statistical-to-Computational Gaps in Hypothesis Testing}\n\nThe field of \\emph{statistical decision theory} (see, e.g., \\cite{LR-sdt,LeCam-sdt} for general references) is concerned with the question of how to decide optimally (in some quantitative sense) between several statistical conclusions.\nThe simplest example, and the one we will mainly be concerned with here, is that of \\emph{simple hypothesis testing}: we observe a dataset that we believe was drawn from one of two probability distributions, and want to make an inference (by performing a statistical \\emph{test}) about which distribution we think the dataset was drawn from.\n\nHowever, one important practical aspect of statistical testing usually is not included in this framework, namely the \\emph{computational cost} of actually performing a statistical test.\nIn these notes, we will explore ideas from a line of recent research about how one mathematical method of classical decision theory might be adapted to predict the capabilities and limitations of \\emph{computationally bounded statistical tests}.\n\nThe basic problem that will motivate us is the following.\nSuppose 
$\\PPP = (\\PP_n)_{n \\in \\NN}$ and $\\QQQ = (\\QQ_n)_{n \\in \\NN}$ are two sequences of probability distributions over a common sequence of measurable spaces $\\sS = ((\\sS_n, \\sF_n))_{n \\in \\NN}$.\n(In statistical parlance, we will think throughout of $\\PPP$ as the model of the \\emph{alternative hypothesis} and $\\QQQ$ as the model of the \\emph{null hypothesis}. Later on, we will consider hypothesis testing problems where the distributions $\\PPP$ include a ``planted'' structure, making the notation a helpful mnemonic.)\nSuppose we observe $\\bY \\in \\sS_n$ which is drawn from one of $\\PP_n$ or $\\QQ_n$.\nWe hope to recover this choice of distribution in the following sense.\n\n\\begin{definition}\n We say that a sequence of events $(A_n)_{n \\in \\NN}$ with $A_n \\in \\sF_n$ occurs with \\emph{high probability (in $n$)} if the probability of $A_n$ tends to 1 as $n \\to \\infty$.\n\\end{definition}\n\n\\begin{definition}\\label{def:stat-ind}\n A sequence of (measurable) functions $f_n: \\sS_n \\to \\{ \\tp, \\tq \\}$ is said to \\emph{strongly distinguish}\\footnote{We will only consider this so-called \\emph{strong} version of distinguishability, where the probability of success must tend to 1 as $n \\to \\infty$, as opposed to the \\emph{weak} version where this probability need only be bounded above $\\frac{1}{2}$. For high-dimensional problems, the strong version typically coincides with important notions of estimating the planted signal (see Section~\\ref{sec:extensions}), whereas the weak version is often trivial.} $\\PPP$ and $\\QQQ$ if $f_n(\\bY) = \\tp$ with high probability when $\\bY \\sim \\PP_n$, and $f_n(\\bY) = \\tq$ with high probability when $\\bY \\sim \\QQ_n$. 
If such $f_n$ exist, we say that $\\PPP$ and $\\QQQ$ are \\emph{statistically distinguishable}.\n\\end{definition}\n\nIn our computationally bounded analogue of this definition, let us for now only consider polynomial time tests (we will later consider various other restrictions on the time complexity of $f_n$, such as subexponential time).\nThen, the analogue of Definition~\\ref{def:stat-ind} is the following.\n\n\\begin{definition}\n $\\PPP$ and $\\QQQ$ are said to be \\emph{computationally distinguishable} if there exists a sequence of measurable \\textbf{and computable in time polynomial in $\\bm n$} functions $f_n: \\sS_n \\to \\{\\tp, \\tq\\}$ such that $f_n$ strongly distinguishes $\\PPP$ and $\\QQQ$.\n\\end{definition}\n\n\\noindent Clearly, computational distinguishability implies statistical distinguishability.\nOn the other hand, a substantial body of theoretical evidence suggests that statistical distinguishability does not in general imply computational distinguishability.\nOccurrences of this phenomenon are called \\emph{statistical-to-computational (stat-comp) gaps}.\nTypically, such a gap arises in the following slightly more specific way. Suppose the sequence $\\PPP$ has a further dependence on a \\emph{signal-to-noise} parameter $\\lambda > 0$, so that $\\PPP_\\lambda = (\\PP_{\\lambda, n})_{n \\in \\NN}$.\nThis parameter should describe, in some sense, the strength of the structure present under $\\PPP$ (or, in some cases, the number of samples received). The following is one canonical example.\n\n\\begin{example}[Planted Clique Problem \\cite{J-clique,kucera-clique}]\nUnder the null model $\\QQ_n$, we observe an $n$-vertex Erd\\H{o}s--R\\'enyi graph $\\sG(n,1\/2)$, i.e., each pair $\\{i, j\\}$ of vertices is connected with an edge independently with probability $1\/2$. The signal-to-noise parameter $\\lambda$ is an integer $1 \\le \\lambda \\le n$. 
Under the planted model $\\PP_{\\lambda,n}$, we first choose a subset of vertices $S \\subseteq [n]$ of size $|S| = \\lambda$ uniformly at random. We then observe a graph where each pair $\\{i, j\\}$ of vertices is connected with probability $1$ if $\\{i,j\\} \\subseteq S$ and with probability $1\/2$ otherwise. In other words, the planted model consists of the union of $\\sG(n,1\/2)$ with a planted \\emph{clique} (a fully-connected subgraph) on $\\lambda$ vertices.\n\\end{example}\n\n\\noindent As $\\lambda$ varies, the problem of testing between $\\PPP_\\lambda$ and $\\QQQ$ can change from statistically impossible, to statistically possible but computationally hard, to computationally easy.\nThat is, there exists a threshold $\\lambda_{\\mathsf{stat}}$ such that for any $\\lambda > \\lambda_{\\mathsf{stat}}$, $\\PPP_{\\lambda}$ and $\\QQQ$ are statistically distinguishable, but for $\\lambda < \\lambda_{\\mathsf{stat}}$ they are not.\nThere also exists a threshold $\\lambda_{\\mathsf{comp}}$ such that for any $\\lambda > \\lambda_{\\mathsf{comp}}$, $\\PPP_\\lambda$ and $\\QQQ$ are computationally distinguishable, and (conjecturally) for $\\lambda < \\lambda_{\\mathsf{comp}}$ they are not.\nClearly we must have $\\lambda_{\\mathsf{comp}} \\geq \\lambda_{\\mathsf{stat}}$, and a stat-comp gap corresponds to strict inequality $\\lambda_{\\mathsf{comp}} > \\lambda_{\\mathsf{stat}}$. For instance, the two models in the planted clique problem are statistically distinguishable when $\\lambda \\ge (2+\\varepsilon) \\log_2 n$ (since $2\\log_2 n$ is the typical size of the largest clique in $\\sG(n,1\/2)$), so $\\lambda_{\\mathsf{stat}} = 2 \\log_2 n$. 
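To make the planted clique problem concrete, here is a toy simulation of our own (not part of the notes; the helper names are hypothetical). It samples the null and planted models and applies the simple global edge-count test: the planted clique adds about $\lambda^2/4$ extra edges on average, while the null edge count has standard deviation about $n/(2\sqrt{2})$, so this crude statistic already strongly distinguishes the models once $\lambda$ is a moderately large constant times $\sqrt{n}$.

```python
import math
import random

def count_edges(n, clique_size=0, rng=None):
    """Sample G(n, 1/2), optionally with a planted clique on `clique_size`
    uniformly random vertices, and return the number of edges."""
    rng = rng or random.Random()
    S = set(rng.sample(range(n), clique_size)) if clique_size else set()
    m = 0
    for i in range(n):
        for j in range(i + 1, n):
            if i in S and j in S:
                m += 1                    # clique pair: edge with probability 1
            else:
                m += rng.getrandbits(1)   # other pair: edge with probability 1/2
    return m

def edge_count_test(n, m, num_sds=4.0):
    """Output 'p' (planted) when the edge count m exceeds its null mean by
    num_sds null standard deviations, and 'q' (null) otherwise."""
    pairs = n * (n - 1) // 2
    mean, sd = pairs / 2, math.sqrt(pairs / 4)
    return "p" if m > mean + num_sds * sd else "q"

rng = random.Random(0)
n, lam = 400, 100                         # lambda = 5 * sqrt(n)
m_null = count_edges(n, 0, rng)
m_planted = count_edges(n, lam, rng)
print(edge_count_test(n, m_null), edge_count_test(n, m_planted))
```

With $n = 400$ and $\lambda = 100 = 5\sqrt{n}$, the planted edge count sits roughly $17$ null standard deviations above the null mean, so a $4$-standard-deviation threshold separates the models essentially perfectly; at $\lambda$ near $2 \log_2 n$, by contrast, this statistic is hopeless.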
However, the best known polynomial-time distinguishing algorithms only succeed when $\\lambda = \\Omega(\\sqrt{n})$ \\cite{kucera-clique,AKS-clique}, and so (conjecturally) $\\lambda_{\\mathsf{comp}} \\approx \\sqrt{n}$, a large stat-comp gap.\n\nThe remarkable method we discuss in these notes allows us, through a relatively straightforward calculation, to predict the threshold $\\lambda_{\\mathsf{comp}}$ for many of the known instances of stat-comp gaps.\nWe will present this method as a modification of a classical second moment method for studying $\\lambda_{\\mathsf{stat}}$.\n\n\n\\subsection{Classical Asymptotic Decision Theory}\n\nIn this section, we review some basic tools available from statistics for understanding statistical distinguishability.\nWe retain the same notations from the previous section in the later parts, but in the first part of the discussion will only be concerned with a single pair of distributions $\\PP$ and $\\QQ$ defined on a single measurable space $(\\sS, \\sF)$.\nFor the sake of simplicity, let us assume in either case that $\\PP_n$ (or $\\PP$) is absolutely continuous with respect to $\\QQ_n$ (or $\\QQ$, as appropriate).\\footnote{For instance, what will be relevant in the examples we consider later, any pair of non-degenerate multivariate Gaussian distributions satisfy this assumption.}\n\n\\subsubsection{Basic Notions}\n\nWe first define the basic objects used to make hypothesis testing decisions, and some ways of measuring their quality.\n\n\\begin{definition}\n A \\emph{test} is a measurable function $f: \\sS \\to \\{\\tp, \\tq\\}$.\n\\end{definition}\n\n\\begin{definition}\n The \\emph{type I error} of $f$ is the event of falsely rejecting the null hypothesis, i.e., of having $f(\\bY) = \\tp$ when $\\bY \\sim \\QQ$.\n The \\emph{type II error} of $f$ is the event of falsely failing to reject the null hypothesis, i.e., of having $f(\\bY) = \\tq$ when $\\bY \\sim \\PP$.\n The probabilities of these errors are denoted\n 
\\begin{align*}\n \\alpha(f) &\\colonequals \\QQ\\left(f(\\bY) = \\tp\\right), \\\\\n \\beta(f) &\\colonequals \\PP\\left(f(\\bY) = \\tq\\right).\n \\end{align*}\n The probability $1 - \\beta(f)$ of correctly rejecting the null hypothesis is called the \\emph{power} of $f$.\n\\end{definition}\n\\noindent\nThere is a tradeoff between type I and type II errors.\nFor instance, the trivial test that always outputs $\\tp$ will have maximal power, but will also have maximal probability of type I error, and vice-versa for the trivial test that always outputs $\\tq$.\nThus, typically one fixes a tolerance for one type of error, and then attempts to design a test that minimizes the probability of the other type.\n\n\\subsubsection{Likelihood Ratio Testing}\n\nWe next present the classical result showing that it is in fact possible to identify the test that is optimal in the sense of the above tradeoff.\\footnote{It is important to note that, from the point of view of statistics, we are restricting our attention to the special case of deciding between two ``simple'' hypotheses, where each hypothesis consists of the dataset being drawn from a specific distribution. Optimal testing is more subtle for ``composite'' hypotheses in parametric families of probability distributions, a more typical setting in practice. The mathematical difficulties of this extended setting are discussed thoroughly in \\cite{LR-sdt}.}\n\n\\begin{definition}\n Let $\\PP$ be absolutely continuous with respect to $\\QQ$. 
The \\emph{likelihood ratio}\\footnote{For readers not familiar with the Radon--Nikodym derivative: if $\\PP$, $\\QQ$ are discrete distributions then $L(\\bY) = \\PP(\\bY)\/\\QQ(\\bY)$; if $\\PP$, $\\QQ$ are continuous distributions with density functions $p$, $q$ (respectively) then $L(\\bY) = p(\\bY)\/q(\\bY)$.} of $\\PP$ and $\\QQ$ is\n \\begin{equation*}\n L(\\bY) \\colonequals \\frac{d\\PP}{d\\QQ}(\\bY).\n \\end{equation*}\n The \\emph{thresholded likelihood ratio test} with threshold $\\eta$ is the test\n \\begin{equation*}\n L_{\\eta}(\\bY) \\colonequals \\left\\{\\begin{array}{lcl} \\tp & : & L(\\bY) > \\eta \\\\ \\tq & : & L(\\bY) \\leq \\eta \\end{array}\\right\\}.\n \\end{equation*}\n\\end{definition}\n\n\\noindent Let us first present a heuristic argument for why thresholding the likelihood ratio might be a good idea. Specifically, we will show that the likelihood ratio is optimal in a particular ``$L^2$ sense'' (which will be of central importance later), i.e., when its quality is measured in terms of first and second moments of a testing quantity.\n\n\\begin{definition}\\label{def:hilbert}\n For (measurable) functions $f,g: \\sS \\to \\RR$, define the inner product and norm induced by $\\QQ$:\n \\begin{align*}\n \\la f, g \\ra &\\colonequals \\Ex_{\\bY \\sim \\QQ}\\left[ f(\\bY) g(\\bY) \\right], \\\\\n \\|f\\| &\\colonequals \\sqrt{\\langle f,f \\rangle}.\n \\end{align*}\n Let $L^2(\\QQ)$ denote the Hilbert space consisting of functions $f$ for which $\\|f\\| < \\infty$, endowed with the above inner product and norm.\\footnote{For a more precise definition of $L^2(\\QQ_n)$ (in particular including issues around functions differing on sets of measure zero) see a standard reference on real analysis such as \\cite{SS-real}.}\n\\end{definition}\n\n\\begin{proposition}\n \\label{prop:lr-optimal-l2}\n If $\\PP$ is absolutely continuous with respect to $\\QQ$, then the unique solution $f^*$ of the optimization problem\n \\begin{equation*}\n 
\\begin{array}{ll}\n \\text{maximize} & \\displaystyle \\Ex_{\\bY \\sim \\PP} [f(\\bY)] \\\\[10pt]\n \\text{subject to} & \\displaystyle \\Ex_{\\bY \\sim \\QQ} [f(\\bY)^2] = 1\n \\end{array}\n \\end{equation*}\n is the (normalized) likelihood ratio\n \\[\n f^\\star = L\/\\|L\\|,\n \\]\n and the value of the optimization problem is $\\|L\\|$.\n\\end{proposition}\n\\begin{proof}\n We may rewrite the objective as\n \\[\\Ex_{\\bY \\sim \\PP} f(\\bY) = \\Ex_{\\bY \\sim \\QQ} \\left[L(\\bY) f(\\bY) \\right] = \\langle L,f \\rangle,\\]\n and rewrite the constraint as $\\|f\\| = 1$. The result now follows since $\\langle L, f \\rangle \\leq \\|L\\| \\cdot \\|f\\| = \\|L\\|$ by the Cauchy--Schwarz inequality, with equality if and only if $f$ is a scalar multiple of $L$.\n\\end{proof}\n\\noindent\nIn words, this means that if we want a function to be as large as possible in expectation under $\\PP$ while remaining bounded (in the $L^2$ sense) under $\\QQ$, we can do no better than the likelihood ratio.\nWe will soon return to this type of $L^2$ reasoning in order to devise computationally-bounded statistical tests.\n\nThe following classical result shows that the above heuristic is accurate, in that the thresholded likelihood ratio tests achieve the optimal tradeoff between type I and type II errors.\n\\begin{lemma}[Neyman--Pearson Lemma \\cite{N-P}]\n \\label{lem:neyman-pearson}\n Fix an arbitrary threshold $\\eta \\ge 0$. Among all tests $f$ with $\\alpha(f) \\leq \\alpha(L_\\eta) = \\QQ(L(\\bY) > \\eta)$, $L_{\\eta}$ is the test that maximizes the power $1 - \\beta(f)$.\n\\end{lemma}\n\\noindent \nWe provide the standard proof of this result in Appendix~\\ref{app:neyman-pearson} for completeness. 
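As a small numerical illustration of the Neyman--Pearson guarantee (a toy computation of ours, with hypothetical function names), consider a single observation with $P = N(\mu, 1)$ and $Q = N(0, 1)$. Here $L(y) = \exp(\mu y - \mu^2/2)$ is increasing in $y$ for $\mu > 0$, so the thresholded likelihood ratio test is equivalent to thresholding $y$ itself; the sketch compares its power, at matched type I error level $\alpha$, against a mismatched two-sided test.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_quantile(p):
    """Inverse of Phi by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def power_lr(mu, alpha):
    """Power of the thresholded likelihood ratio test for P = N(mu, 1) vs
    Q = N(0, 1).  For mu > 0, L(y) = exp(mu*y - mu^2/2) is increasing in y,
    so thresholding L is the same as thresholding y."""
    t = z_quantile(1.0 - alpha)       # type I error exactly alpha under Q
    return 1.0 - Phi(t - mu)          # probability y > t when y ~ N(mu, 1)

def power_two_sided(mu, alpha):
    """Power of the (here suboptimal) test that rejects when |y| > t."""
    t = z_quantile(1.0 - alpha / 2.0)
    return (1.0 - Phi(t - mu)) + Phi(-t - mu)
```

At $\alpha = 0.05$ and $\mu = 1$, the likelihood ratio test has power about $0.26$, while the two-sided test achieves only about $0.17$; no level-$\alpha$ test can beat the former, per the lemma.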
(The proof is straightforward but not important for understanding the rest of these notes, and it can be skipped on a first reading.)\n\n\\subsubsection{Le Cam's Contiguity}\n\\label{sec:contig}\n\nSince the likelihood ratio is, in the sense of the Neyman--Pearson lemma, an optimal statistical test, it stands to reason that it should be possible to argue about statistical distinguishability solely by computing with the likelihood ratio.\nWe present one simple method by which such arguments may be made, based on a theory introduced by Le~Cam \\cite{lecam}.\n\nWe will work again with sequences of probability measures $\\PPP = (\\PP_n)_{n \\in \\NN}$ and $\\QQQ = (\\QQ_n)_{n \\in \\NN}$, and will denote by $L_n$ the likelihood ratio $d \\PP_n\/d \\QQ_n$. Norms and inner products of functions are those of $L^2(\\QQ_n)$.\nThe following is the crucial definition underlying the arguments to come.\n\n\\begin{definition}\n A sequence $\\PPP$ of probability measures is \\emph{contiguous} to a sequence $\\QQQ$, written $\\PPP \\triangleleft \\QQQ$, if whenever $A_n \\in \\sF_n$ with $\\QQ_n(A_n) \\to 0$ (as $n \\to \\infty$), then $\\PP_n(A_n) \\to 0$ as well.\n\\end{definition}\n\n\\begin{proposition}\n If $\\PPP \\triangleleft \\QQQ$ or $\\QQQ \\triangleleft \\PPP$, then $\\QQQ$ and $\\PPP$ are statistically indistinguishable (in the sense of Definition~\\ref{def:stat-ind}, i.e., no test can have both type I and type II error probabilities tending to 0).\n\\end{proposition}\n\\begin{proof}\n We give the proof for the case $\\PPP \\triangleleft \\QQQ$, but the other case may be shown by a symmetric argument.\n For the sake of contradiction, let $(f_n)_{n \\in \\NN}$ be a sequence of tests distinguishing $\\PPP$ and $\\QQQ$, and let $A_n = \\{\\bY: f_n(\\bY) = \\tp\\}$.\n Then, $\\PP_n(A_n^c) \\to 0$ and $\\QQ_n(A_n) \\to 0$.\n But, by contiguity, $\\QQ_n(A_n) \\to 0$ implies $\\PP_n(A_n) \\to 0$ as well, so $\\PP_n(A_n^c) \\to 1$, a 
contradiction.\n\\end{proof}\n\\noindent\nIt therefore suffices to establish contiguity in order to prove negative results about statistical distinguishability.\nThe following classical second moment method gives a means of establishing contiguity through a computation with the likelihood ratio.\n\\begin{lemma}[Second Moment Method for Contiguity]\n \\label{lem:second-moment-contiguity}\n If $\\|L_n\\|^2 \\colonequals \\EE_{\\bY \\sim \\QQ_n}[L_n(\\bY)^2]$ remains bounded as $n \\to \\infty$ (i.e., $\\limsup_{n \\to \\infty} \\|L_n\\|^2 < \\infty$), then $\\PPP \\triangleleft \\QQQ$.\n\\end{lemma}\n\\begin{proof}\n Let $A_n \\in \\sF_n$.\n Then, using the Cauchy--Schwarz inequality,\n \\begin{equation*}\n \\PP_n(A_n) = \\Ex_{\\bY \\sim \\PP_n} [\\One_{A_n}(\\bY)] = \\Ex_{\\bY \\sim \\QQ_n}\\left[ L_n(\\bY) \\One_{A_n}(\\bY)\\right] \\leq \\left(\\Ex_{\\bY \\sim \\QQ_n}[L_n(\\bY)^2]\\right)^{1\/2} \\left(\\QQ_n(A_n)\\right)^{1\/2},\n \\end{equation*}\n and so $\\QQ_n(A_n) \\to 0$ implies $\\PP_n(A_n) \\to 0$.\n\\end{proof}\n\n\\noindent This second moment method has been used to establish contiguity for various high-dimensional statistical problems (see e.g., \\cite{MRZ-spectral,BMVVX-pca,PWBM-pca,PWB-tensor}). Typically the null hypothesis $\\QQ_n$ is a ``simpler'' distribution than $\\PP_n$ and, as a result, $d\\PP_n\/d\\QQ_n$ is easier to compute than $d\\QQ_n\/d\\PP_n$. In general, and essentially for this reason, establishing $\\QQQ \\triangleleft \\PPP$ is often more difficult than $\\PPP \\triangleleft \\QQQ$, requiring tools such as the \\emph{small subgraph conditioning method} (introduced in \\cite{subgraph-1,subgraph-2} and used in, e.g., \\cite{MNS-rec,BMNN-community}). 
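As a minimal worked example of the second moment method (our toy computation, not from the notes, with a hypothetical helper name), take $\PP_n = N(\lambda/\sqrt{n}, 1)^{\otimes n}$ and $\QQ_n = N(0, 1)^{\otimes n}$. The likelihood ratio depends on $\bY$ only through $S = \sum_i Y_i$, namely $L_n = \exp(\theta S - n\theta^2/2)$ with $\theta = \lambda/\sqrt{n}$, and the Gaussian moment-generating function gives $\|L_n\|^2 = \exp(\lambda^2)$ for every $n$; this is bounded for fixed $\lambda$, so the lemma yields contiguity. The sketch confirms the $n$-independence by one-dimensional quadrature.

```python
import math

def second_moment(n, lam, grid=100001, width=12.0):
    """Compute ||L_n||^2 = E_{Y ~ Q_n}[L_n(Y)^2] by quadrature over
    S ~ N(0, n), for P_n = N(lam/sqrt(n), 1)^n versus Q_n = N(0, 1)^n,
    where L_n(Y)^2 = exp(2*theta*S - n*theta^2) with theta = lam/sqrt(n).
    (The grid width is adequate for moderate lam, e.g. lam around 1.)"""
    theta = lam / math.sqrt(n)
    sd = math.sqrt(n)
    lo, hi = -width * sd, width * sd
    h = (hi - lo) / (grid - 1)
    total = 0.0
    for i in range(grid):
        s = lo + i * h
        dens = math.exp(-s * s / (2 * n)) / math.sqrt(2 * math.pi * n)
        total += dens * math.exp(2 * theta * s - n * theta * theta) * h
    return total
```

For $\lambda = 1$, `second_moment(n, 1.0)` returns approximately $e \approx 2.718$ for $n = 10$, $100$, and $1000$ alike, matching $\exp(\lambda^2)$.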
Fortunately, one-sided contiguity $\\PPP \\triangleleft \\QQQ$ is sufficient for our purposes.\n\nNote that $\\|L_n\\|$, the quantity that controls contiguity per the second moment method, is the same as the optimal value of the $L^2$ optimization problem in Proposition~\\ref{prop:lr-optimal-l2}:\n\\begin{equation*}\n \\left\\{\\begin{array}{rl}\n \\text{maximize} & \\EE_{\\bY \\sim \\PP_n} [f(\\bY)] \\\\[5pt]\n \\text{subject to} & \\EE_{\\bY \\sim \\QQ_n} [f(\\bY)^2] = 1\n \\end{array}\\right\\} = \\|L_n\\|.\n\\end{equation*}\nWe might then be tempted to conjecture that $\\PPP$ and $\\QQQ$ are statistically distinguishable \\emph{if and only if} $\\|L_n\\| \\to \\infty$ as $n \\to \\infty$. However, this is incorrect: there are cases when $\\PPP$ and $\\QQQ$ are not distinguishable, yet a rare ``bad'' event under $\\PP_n$ causes $\\|L_n\\|$ to diverge.\nTo overcome this failure of the ordinary second moment method, some previous works (e.g., \\cite{BMNN-community,BMVVX-pca,PWB-tensor,PWBM-pca}) have used \\emph{conditional} second moment methods to show indistinguishability, where the second moment method is applied to a modified $\\PPP$ that conditions on these bad events not occurring.\n\n\\subsection{Basics of the Low-Degree Method}\n\\label{sec:ldlr-conj}\n\nWe now describe the \\emph{low-degree} analogues of the notions described in the previous section, which together constitute a method for restricting the classical decision-theoretic second moment analysis to computationally-bounded tests.\nThe premise of this \\emph{low-degree method} is to take low-degree multivariate polynomials in the entries of the observation $\\bY$ as a proxy for efficiently-computable functions. 
The ideas in this section were first developed in a sequence of works in the sum-of-squares optimization literature \\cite{pcal,HS-bayesian,sos-hidden,sam-thesis}.\n\nIn the computationally-unbounded case, Proposition~\\ref{prop:lr-optimal-l2} showed that the likelihood ratio optimally distinguishes $\\PPP$ from $\\QQQ$ in the $L^2$ sense.\nFollowing the same heuristic, we will now find the low-degree polynomial that best distinguishes $\\PPP$ from $\\QQQ$ in the $L^2$ sense. In order for polynomials to be defined, we assume here that $\\mathcal{S}_n \\subseteq \\RR^N$ for some $N = N(n)$, i.e., our data (drawn from $\\PP_n$ or $\\QQ_n$) is a real-valued vector (which may be structured as a matrix, tensor, etc.).\n\n\\begin{definition}\\label{def:LDLR}\n Let $\\sV^{\\leq D}_n \\subset L^2(\\QQ_n)$ denote the linear subspace of polynomials $\\sS_n \\to \\RR$ of degree at most $D$.\n Let $\\sP^{\\leq D}: L^2(\\QQ_n) \\to \\sV^{\\leq D}_n$ denote the orthogonal projection\\footnote{To clarify, orthogonal projection is with respect to the inner product induced by $\\QQ_n$ (see Definition~\\ref{def:hilbert}).} operator to this subspace.\n Finally, define the \\emph{$D$-low-degree likelihood ratio ($D$-LDLR)} as $L_n^{\\leq D} \\colonequals \\sP^{\\leq D} L_n$.\n\\end{definition}\n\\noindent We now have a low-degree analogue of Proposition~\\ref{prop:lr-optimal-l2}, which first appeared in \\cite{HS-bayesian,sos-hidden}.\n\\begin{proposition}\n The unique solution $f^*$ of the optimization problem\n \\begin{equation}\n \\label{eq:l2-opt-low}\n \\begin{array}{ll}\n \\text{maximize} & \\displaystyle\\Ex_{\\bY \\sim \\PP_n} [f(\\bY)] \\\\[10pt]\n \\text{subject to} & \\displaystyle\\Ex_{\\bY \\sim \\QQ_n} [f(\\bY)^2] = 1, \\\\[10pt] & f \\in \\sV_n^{\\leq D},\n \\end{array}\n \\end{equation}\n is the (normalized) $D$-LDLR\n \\[\n f^\\star = L^{\\le D}_n\/\\|L^{\\le D}_n\\|,\n \\]\n and the value of the optimization problem is $\\|L_n^{\\leq 
D}\\|$.\n\\end{proposition}\n\\begin{proof}\nAs in the proof of Proposition~\\ref{prop:lr-optimal-l2}, we can restate the optimization problem as maximizing $\\langle L_n,f \\rangle$ subject to $\\|f\\| = 1$ and $f \\in \\sV_n^{\\le D}$. Since $\\sV_n^{\\leq D}$ is a linear subspace of $L^2(\\QQ_n)$, the result is then simply a restatement of the variational description and uniqueness of the orthogonal projection in $L^2(\\QQ_n)$ (i.e., the fact that $L_n^{\\le D}$ is the unique closest element of $\\sV_n^{\\le D}$ to $L_n$).\n\\end{proof}\n\nThe following informal conjecture is at the heart of the low-degree method. It states that a computational analogue of the second moment method for contiguity holds, with $L_n^{\\leq D}$ playing the role of the likelihood ratio. Furthermore, it postulates that polynomials of degree roughly $\\log(n)$ are a proxy for polynomial-time algorithms. This conjecture is based on \\cite{HS-bayesian,sos-hidden,sam-thesis}, particularly Conjecture~2.2.4 of \\cite{sam-thesis}.\n\n\\begin{conjecture}[Informal]\n \\label{conj:low-deg-informal}\n For ``sufficiently nice'' sequences of probability measures $\\PPP$ and $\\QQQ$, if there exists $\\varepsilon > 0$ and $D = D(n) \\ge (\\log n)^{1+\\varepsilon}$ for which $\\|L_n^{\\leq D}\\|$ remains bounded as $n \\to \\infty$, then there is no polynomial-time algorithm that strongly distinguishes (see Definition~\\ref{def:stat-ind}) $\\PPP$ and $\\QQQ$.\n\\end{conjecture}\n\\noindent We will discuss this conjecture in more detail later (see Section~\\ref{sec:ldlr-conj-2}), including the informal meaning of ``sufficiently nice'' and a variant of the LDLR based on \\emph{coordinate degree} considered by \\cite{sos-hidden,sam-thesis} (see Section~\\ref{sec:discuss-conj}). 
A more general form of the low-degree conjecture (Hypothesis~2.1.5 of \\cite{sam-thesis}) states that degree-$D$ polynomials are a proxy for time-$n^{\\tilde\\Theta(D)}$ algorithms, allowing one to probe a wide range of time complexities. We will see that the converse of these low-degree conjectures often holds in practice; i.e., if $\\|L_n^{\\le D}\\| \\to \\infty$, then there exists a distinguishing algorithm of runtime roughly $n^D$. As a result, the behavior of $\\|L_n^{\\le D}\\|$ precisely captures the (conjectured) power of computationally-bounded testing in many settings.\n\nThe remainder of these notes is organized as follows.\nIn Section~\\ref{sec:agn}, we work through the calculations of $L_n$, $L_n^{\\leq D}$, and their norms for a general family of additive Gaussian noise models. In Section~\\ref{sec:examples}, we apply this analysis to a few specific models of interest: the spiked Wigner matrix and spiked Gaussian tensor models.\nIn Section~\\ref{sec:ldlr-conj-2}, we give some further discussion of Conjecture~\\ref{conj:low-deg-informal}, including evidence (both heuristic and formal) in its favor.\n\n\n\n\\section{The Additive Gaussian Noise Model}\n\\label{sec:agn}\n\nWe will now describe a concrete class of hypothesis testing problems and analyze them using the machinery introduced in the previous section.\nThe examples we discuss later (spiked Wigner matrix and spiked tensor) will be specific instances of this general class.\n\n\\subsection{The Model}\n\n\\begin{definition}[Additive Gaussian Noise Model]\n\\label{def:agn}\nLet $N = N(n) \\in \\NN$ and let $\\bX$ (the ``signal'') be drawn from some distribution $\\sP_n$ (the ``prior'') over $\\RR^N$. Let $\\bZ \\in \\RR^N$ (the ``noise'') have i.i.d.\\ entries distributed as $\\sN(0,1)$. 
Then, we define $\\PPP$ and $\\QQQ$ as follows.\n\\begin{itemize}\n \\item Under $\\PP_n$, observe $\\bY = \\bX + \\bZ$.\n \\item Under $\\QQ_n$, observe $\\bY = \\bZ$.\n\\end{itemize}\n\\end{definition}\n\\noindent \nOne typical situation takes $\\bX$ to be a low-rank matrix or tensor. The following is a particularly important and well-studied special case, which we will return to in Section~\\ref{sec:spiked-matrix}.\n\n\\begin{example}[Wigner Spiked Matrix Model]\n\\label{ex:wig}\nConsider the additive Gaussian noise model with $N = n^2$, $\\RR^N$ identified with $n \\times n$ matrices with real entries, and $\\sP_n$ defined by $\\bX = \\lambda \\bx \\bx^{\\top} \\in \\RR^{n \\times n}$, where $\\lambda = \\lambda(n) > 0$ is a signal-to-noise parameter and $\\bx$ is drawn from some distribution $\\sX_n$ over $\\RR^n$. Then, the task of distinguishing $\\PP_{n}$ from $\\QQ_n$ amounts to distinguishing $\\lambda \\bx\\bx^\\top + \\bZ$ from $\\bZ$ where $\\bZ \\in \\RR^{n \\times n}$ has i.i.d.\\ entries distributed as $\\sN(0, 1)$. (This variant is equivalent to the more standard model in which the noise matrix is symmetric; see Appendix~\\ref{app:symm}.)\n\\end{example}\n\n\\noindent\nThis problem is believed to exhibit stat-comp gaps for some choices of $\\sX_n$ but not others; see, e.g., \\cite{LKZ-mmse,LKZ-sparse,mi-rank-one,BMVVX-pca,PWBM-pca}.\nAt a heuristic level, the typical \\emph{sparsity} of vectors under $\\sX_n$ seems to govern the appearance of a stat-comp gap.\n\n\\begin{remark}\nIn the spiked Wigner problem, as in many others, one natural statistical task besides distinguishing the null and planted models is to non-trivially estimate the vector $\\bx$ given $\\bY \\sim \\PP_n$, i.e., to compute an estimate $\\hat \\bx = \\hat \\bx (\\bY)$ such that $|\\langle \\hat \\bx, \\bx \\rangle|\/(\\|\\hat \\bx\\| \\cdot \\|\\bx\\|) \\ge \\varepsilon$ with high probability, for some constant $\\varepsilon > 0$. 
Typically, for natural high-dimensional problems, non-trivial estimation of $\\bx$ is statistically or computationally possible precisely when it is statistically or computationally possible (respectively) to strongly distinguish $\\PPP$ and $\\QQQ$; see Section~\\ref{sec:extensions} for further discussion.\n\\end{remark}\n\n\\subsection{Computing the Classical Quantities}\n\nWe now show how to compute the likelihood ratio and its $L^2$-norm under the additive Gaussian noise model. (This is a standard calculation; see, e.g., \\cite{MRZ-spectral,BMVVX-pca}.)\n\n\\begin{proposition}\n \\label{prop:agn-L}\n Suppose $\\PPP$ and $\\QQQ$ are as defined in Definition~\\ref{def:agn}, with a sequence of prior distributions $(\\sP_n)_{n \\in \\NN}$.\n Then, the likelihood ratio of $\\PP_n$ and $\\QQ_n$ is\n \\begin{equation*}\n L_n(\\bY) = \\frac{d\\PP_n}{d\\QQ_n}(\\bY) = \\Ex_{\\bX \\sim \\sP_n}\\left[ \\exp\\left(-\\frac{1}{2}\\|\\bX\\|^2 + \\la \\bX, \\bY \\ra \\right)\\right].\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n Write $\\sL$ for the Lebesgue measure on $\\RR^N$.\n Then, expanding the Gaussian densities,\n \\begin{align}\n \\frac{d\\QQ_n}{d\\sL}(\\bY) \n &= (2\\pi)^{-N \/ 2}\\cdot \\exp\\left(-\\frac{1}{2}\\|\\bY\\|^2\\right) \\label{eq:agn-ln-denom}\\\\\n \\frac{d\\PP_n}{d\\sL}(\\bY) \n &= (2\\pi)^{-N \/ 2}\\cdot \\Ex_{\\bX \\sim \\sP_n}\\left[\\exp\\left(-\\frac{1}{2}\\|\\bY - \\bX\\|^2\\right)\\right] \\nonumber \\\\\n &= (2\\pi)^{-N \/ 2}\\cdot \\exp\\left(-\\frac{1}{2}\\|\\bY\\|^2\\right) \\cdot \\EE_{\\bX \\sim \\sP_n}\\left[\\exp\\left(-\\frac{1}{2}\\|\\bX\\|^2 + \\la \\bX, \\bY \\ra\\right)\\right]\\label{eq:agn-ln-num},\n \\end{align}\n and $L_n$ is given by the quotient of \\eqref{eq:agn-ln-num} and \\eqref{eq:agn-ln-denom}.\n\\end{proof}\n\n\\begin{proposition}\n\\label{prop:agn-L-norm}\nSuppose $\\PPP$ and $\\QQQ$ are as defined in Definition~\\ref{def:agn}, with a sequence of prior distributions $(\\sP_n)_{n \\in 
\\NN}$.\nThen,\n\\begin{equation}\\label{eq:gaussian-2nd}\n\\|L_n\\|^2 = \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n} \\exp(\\langle \\bX^1, \\bX^2 \\rangle),\n\\end{equation}\nwhere $\\bX^1, \\bX^2$ are drawn independently from $\\sP_n$.\n\\end{proposition}\n\\begin{proof}\nWe apply the important trick of rewriting a squared expectation as an expectation over the two independent ``replicas'' $\\bX^1, \\bX^2$ appearing in the result:\n\\begin{align*}\n\\|L_n\\|^2 &= \\Ex_{\\bY \\sim \\QQ_n} \\left[\\left(\\Ex_{\\bX \\sim \\sP_n} \\exp\\left(\\langle \\bY,\\bX \\rangle - \\frac{1}{2} \\|\\bX\\|^2\\right)\\right)^2\\right] \\\\\n&= \\Ex_{\\bY \\sim \\QQ_n} \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n} \\exp\\left(\\langle \\bY,\\bX^1 + \\bX^2\\rangle - \\frac{1}{2} \\|\\bX^1\\|^2 - \\frac{1}{2} \\|\\bX^2\\|^2\\right),\n\\intertext{where $\\bX^1$ and $\\bX^2$ are drawn independently from $\\sP_n$. We now swap the order of the expectations,}\n&= \\Ex_{\\bX^1,\\bX^2 \\sim \\sP_n} \\left[\\exp\\left(-\\frac{1}{2} \\|\\bX^1\\|^2 - \\frac{1}{2} \\|\\bX^2\\|^2\\right)\\Ex_{\\bY \\sim \\QQ_n}\\exp\\left(\\langle \\bY,\\bX^1 + \\bX^2\\rangle \\right)\\right],\n\\intertext{and the inner expectation may be evaluated explicitly using the moment-generating function of a Gaussian distribution (if $y \\sim \\sN(0,1)$, then for any fixed $t \\in \\RR$, $\\EE[\\exp(ty)] = \\exp(t^2\/2)$),}\n&= \\Ex_{\\bX^1,\\bX^2} \\exp\\left(-\\frac{1}{2} \\|\\bX^1\\|^2 - \\frac{1}{2} \\|\\bX^2\\|^2 + \\frac{1}{2}\\|\\bX^1 + \\bX^2\\|^2\\right),\n\\end{align*}\nfrom which the result follows by expanding the term inside the exponential.\\footnote{Two techniques from this calculation are elements of the ``replica method'' from statistical physics: (1) writing a power of an expectation as an expectation over independent ``replicas'' and (2) changing the order of expectations and evaluating the moment-generating function. 
The interested reader may see \\cite{MPV-spin-glass} for an early reference, or \\cite{MM-IPC,BPW-phys-notes} for two recent presentations.}\n\\end{proof}\n\nTo apply the second moment method for contiguity, it remains to show that~\\eqref{eq:gaussian-2nd} is $O(1)$ using problem-specific information about the distribution $\\sP_n$. For spiked matrix and tensor models, various general-purpose techniques for doing this are given in \\cite{PWBM-pca,PWB-tensor}.\n\n\n\\subsection{Computing the Low-Degree Quantities}\n\\label{sec:agn-low-deg}\n\nIn this section, we will show that the norm of the LDLR (see Section~\\ref{sec:ldlr-conj}) takes the following remarkably simple form under the additive Gaussian noise model.\n\\begin{theorem}\n \\label{thm:agn-ldlr-norm}\n Suppose $\\PPP$ and $\\QQQ$ are as defined in Definition~\\ref{def:agn}, with a sequence of prior distributions $(\\sP_n)_{n \\in \\NN}$.\n Let $L_n^{\\leq D}$ be as in Definition~\\ref{def:LDLR}.\n Then,\n \\begin{equation}\\label{eq:gaussian-2nd-low}\n \\|L_n^{\\leq D}\\|^2 = \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n}\\left[ \\sum_{d = 0}^D \\frac{1}{d!} \\la \\bX^1, \\bX^2 \\ra^d \\right],\n \\end{equation}\n where $\\bX^1, \\bX^2$ are drawn independently from $\\sP_n$.\n\\end{theorem}\n\n\\begin{remark}\\label{rem:taylor}\nNote that~\\eqref{eq:gaussian-2nd-low} can be written as $\\EE_{\\bX^1,\\bX^2} [\\exp^{\\le D}(\\langle \\bX^1,\\bX^2 \\rangle)]$, where $\\exp^{\\le D}(t)$ denotes the degree-$D$ truncation of the Taylor series of $\\exp(t)$. This can be seen as a natural low-degree analogue of the full second moment~\\eqref{eq:gaussian-2nd}. 
However, the low-degree Taylor series truncation in $\\exp^{\\le D}$ is conceptually distinct from the low-degree projection in $L_n^{\\le D}$, because the latter corresponds to truncation in the Hermite orthogonal polynomial basis (see below), while the former corresponds to truncation in the monomial basis.\n\\end{remark}\n\n\\noindent Our proof of Theorem~\\ref{thm:agn-ldlr-norm} will follow the strategy of \\cite{HS-bayesian,sos-hidden,sam-thesis} of expanding $L_n$ in a basis of orthogonal polynomials with respect to $\\QQ_n$, which in this case are the \\emph{Hermite polynomials}.\n\nWe first give a brief and informal review of the multivariate Hermite polynomials (see Appendix~\\ref{app:hermite} or the reference text \\cite{Szego-OP} for further information).\nThe univariate Hermite polynomials\\footnote{We will not actually use the definition of the univariate Hermite polynomials (although we will use certain properties that they satisfy as needed), but the definition is included for completeness in Appendix~\\ref{app:hermite}.} are a sequence $h_k(x) \\in \\RR[x]$ for $k \\geq 0$, with $\\deg h_k = k$.\nThey may be normalized as $\\what{h}_k(x) = h_k(x) \/ \\sqrt{k!}$, and with this normalization satisfy the orthonormality conditions\n\\begin{equation}\n \\label{eq:hermite-orth-1d}\n \\Ex_{y \\sim \\sN(0, 1)}\\what{h}_k(y)\\what{h}_\\ell(y) = \\delta_{k\\ell}.\n\\end{equation}\nThe multivariate Hermite polynomials in $N$ variables are indexed by $\\bm\\alpha \\in \\NN^N$, and are merely products of the $h_k$: $H_{\\bm\\alpha}(\\bx) = \\prod_{i = 1}^N h_{\\alpha_i}(x_i)$.\nThey also admit a normalized variant $\\what{H}_{\\bm\\alpha}(\\bx) = \\prod_{i = 1}^N \\what{h}_{\\alpha_i}(x_i)$, and with this normalization satisfy the orthonormality conditions\n\\begin{equation*}\n \\Ex_{\\bY \\sim \\sN(0, \\bm I_N)}\\what{H}_{\\bm\\alpha}(\\bY)\\what{H}_{\\bm\\beta}(\\bY) = \\delta_{\\bm\\alpha \\bm\\beta},\n\\end{equation*}\nwhich may be inferred directly from 
\\eqref{eq:hermite-orth-1d}.\n\nThe collection of those $\\what{H}_{\\bm\\alpha}$ for which $|\\bm\\alpha| \\colonequals \\sum_{i = 1}^N \\alpha_i \\leq D$ forms an orthonormal basis for $\\sV_n^{\\leq D}$ (which, recall, is the subspace of polynomials of degree $\\le D$).\nThus we may expand\n\\begin{equation}\n \\label{eq:agn-Ln-expansion}\n L_n^{\\leq D}(\\bY) = \\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| \\leq D}} \\la L_n, \\what{H}_{\\bm\\alpha}\\ra \\what{H}_{\\bm\\alpha}(\\bY) = \\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| \\leq D}} \\frac{1}{\\prod_{i = 1}^N \\alpha_i!} \\la L_n, H_{\\bm\\alpha}\\ra H_{\\bm\\alpha}(\\bY),\n\\end{equation}\nand in particular we have\n\\begin{equation}\n \\label{eq:agn-Ln-norm-expansion}\n \\|L_n^{\\leq D}\\|^2 = \\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| \\leq D}} \\frac{1}{\\prod_{i = 1}^N \\alpha_i!} \\la L_n, H_{\\bm\\alpha}\\ra^2.\n\\end{equation}\nOur main task is then to compute quantities of the form $\\la L_n, H_{\\bm\\alpha} \\ra$. Note that these can be expressed either as $\\EE_{\\bY \\sim \\QQ_n}[L_n(\\bY) H_{\\bm\\alpha}(\\bY)]$ or $\\EE_{\\bY \\sim \\PP_n}[H_{\\bm\\alpha}(\\bY)]$.\nWe will give three techniques for carrying out this calculation, each depending on a different identity satisfied by the Hermite polynomials. 
Each will give a proof of the following remarkable formula, which shows that the quantities $\\la L_n, H_{\\bm\\alpha} \\ra$ are simply the moments of $\\sP_n$.\n\n\\begin{proposition}\n \\label{prop:agn-components}\n For any $\\bm\\alpha \\in \\NN^N$,\n \\begin{equation*}\n \\langle L_n, H_{\\bm\\alpha} \\rangle = \\Ex_{\\bX \\sim \\sP_n}\\left[ \\prod_{i = 1}^N X_i^{\\alpha_i} \\right].\n \\end{equation*}\n\\end{proposition}\n\n\\noindent Before continuing with the various proofs of Proposition~\\ref{prop:agn-components}, let us show how to use it to complete the proof of Theorem~\\ref{thm:agn-ldlr-norm}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:agn-ldlr-norm}]\n By Proposition~\\ref{prop:agn-components} substituted into \\eqref{eq:agn-Ln-norm-expansion}, we have\n \\begin{align*}\n \\|L_n^{\\leq D}\\|^2\n &= \\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| \\leq D}} \\frac{1}{\\prod_{i = 1}^N \\alpha_i!} \\left(\\Ex_{\\bX \\sim \\sP_n}\\left[\\prod_{i = 1}^N X_i^{\\alpha_i}\\right]\\right)^2, \\\\\n \\intertext{and performing the ``replica'' manipulation (from the proof of Proposition~\\ref{prop:agn-L-norm}) again, this may be written}\n &= \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n}\\left[\\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| \\leq D}} \\frac{1}{\\prod_{i = 1}^N \\alpha_i!} \\prod_{i = 1}^N (X^1_iX^2_i)^{\\alpha_i}\\right] \\\\\n &= \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n}\\left[\\sum_{d = 0}^D \\frac{1}{d!}\\sum_{\\substack{\\bm\\alpha \\in \\NN^N \\\\ |\\bm\\alpha| = d}} \\binom{d}{\\alpha_1 \\cdots \\alpha_N} \\prod_{i = 1}^N (X^1_iX^2_i)^{\\alpha_i}\\right] \\\\\n &= \\Ex_{\\bX^1, \\bX^2 \\sim \\sP_n}\\left[\\sum_{d = 0}^D \\frac{1}{d!}\\la \\bX^1, \\bX^2 \\ra^d \\right],\n \\end{align*}\n where the last step uses the multinomial theorem.\n\\end{proof}\n\nWe now proceed to the three proofs of Proposition~\\ref{prop:agn-components}. 
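Before doing so, Proposition~\\ref{prop:agn-components} can also be sanity-checked numerically. The following sketch (in Python; the two-atom prior and the multi-index are chosen purely for illustration) evaluates $\\la L_n, H_{\\bm\\alpha} \\ra = \\EE_{\\bY \\sim \\PP_n}[H_{\\bm\\alpha}(\\bY)]$ by Gauss--Hermite quadrature and compares it to the moment $\\EE_{\\bX \\sim \\sP_n}[\\prod_i X_i^{\\alpha_i}]$:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# hermegauss integrates against the weight exp(-z^2/2); renormalizing the
# weights by sqrt(2*pi) turns the rule into an expectation over N(0, 1).
nodes, weights = He.hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)

def h(k, x):
    # Probabilists' (unnormalized) Hermite polynomial h_k evaluated at x.
    c = np.zeros(k + 1)
    c[k] = 1.0
    return He.hermeval(x, c)

# A toy discrete prior P_n on R^2: two atoms with equal probability (illustrative).
atoms = [np.array([0.7, -1.2]), np.array([0.3, 0.5])]
probs = [0.5, 0.5]
alpha = (3, 2)

# <L_n, H_alpha> = E_{Y ~ P_n}[H_alpha(Y)], computed coordinate-wise since
# Y_i = X_i + Z_i with independent Z_i ~ N(0, 1).
lhs = sum(p * np.prod([np.dot(weights, h(a, nodes + x[i]))
                       for i, a in enumerate(alpha)])
          for p, x in zip(probs, atoms))
# The claimed value: the moment E_{X ~ P_n}[prod_i X_i^{alpha_i}].
rhs = sum(p * np.prod(x ** np.array(alpha)) for p, x in zip(probs, atoms))
assert abs(lhs - rhs) < 1e-10
```

Since the quadrature rule is exact for polynomials of this degree, the two sides agree to machine precision.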
For the sake of brevity, we omit here the (standard) proofs of the three Hermite polynomial identities on which these proofs are based, but the interested reader may review those proofs in Appendix~\\ref{app:hermite}.\n\n\\subsubsection{Proof 1: Hermite Translation Identity}\n\nThe first (and perhaps simplest) approach to proving Proposition~\\ref{prop:agn-components} uses the following formula for the expectation of a Hermite polynomial evaluated on a Gaussian random variable of non-zero mean.\n\n\\begin{proposition}\n \\label{prop:hermite-translation}\n For any $k \\geq 0$ and $\\mu \\in \\RR$,\n \\[ \\Ex_{y \\sim \\sN(\\mu, 1)}\\left[h_k(y)\\right] = \\mu^k. \\]\n\\end{proposition}\n\n\\begin{proof}[Proof 1 of Proposition~\\ref{prop:agn-components}]\n We rewrite $\\la L_n, H_{\\bm\\alpha} \\ra$ as an expectation with respect to $\\PP_n$:\n \\begin{align*}\n \\la L_n, H_{\\bm\\alpha} \\ra \n &= \\Ex_{\\bY \\sim \\QQ_n}\\left[L_n(\\bY)H_{\\bm\\alpha}(\\bY)\\right] \\\\\n &= \\Ex_{\\bY \\sim \\PP_n}\\left[H_{\\bm\\alpha}(\\bY)\\right] \\\\\n &= \\Ex_{\\bY \\sim \\PP_n}\\left[\\prod_{i = 1}^N h_{\\alpha_i}(Y_i) \\right] \\\\\n \\intertext{and recall $\\bY = \\bX + \\bZ$ for $\\bX \\sim \\sP_n$ and $\\bZ \\sim \\sN(\\bm 0, \\bm I_N)$ under $\\PP_n$,}\n &= \\Ex_{\\bX \\sim \\sP_n}\\left[\\Ex_{\\bZ \\sim \\sN(\\bm 0, \\bm I_N)}\\prod_{i = 1}^N h_{\\alpha_i}(X_i + Z_i) \\right] \\\\\n &= \\Ex_{\\bX \\sim \\sP_n}\\left[\\prod_{i = 1}^N\\Ex_{z \\sim \\sN(X_i, 1)}h_{\\alpha_i}(z)\\right] \\\\\n &= \\Ex_{\\bX \\sim \\sP_n}\\left[\\prod_{i = 1}^N X_i^{\\alpha_i}\\right],\n \\end{align*}\n where we used Proposition~\\ref{prop:hermite-translation} in the last step.\n\\end{proof}\n\n\n\\subsubsection{Proof 2: Gaussian Integration by Parts}\n\nThe second approach to proving Proposition~\\ref{prop:agn-components} uses the following generalization of a well-known integration by parts formula for Gaussian random variables.\n\\begin{proposition}\n \\label{prop:gaussian-ibp}\n If $f: \\RR \\to
\\RR$ is $k$ times continuously differentiable and $f(y)$ and its first $k$ derivatives are bounded by $O(\\exp(|y|^\\alpha))$ for some $\\alpha \\in (0, 2)$, then\n \\begin{equation*}\n \\Ex_{y \\sim \\sN(0, 1)}\\left[h_k(y)f(y)\\right] = \\Ex_{y \\sim \\sN(0, 1)}\\left[\\frac{d^k f}{dy^k}(y)\\right].\n \\end{equation*}\n\\end{proposition}\n\\noindent\n(The better-known case is $k = 1$, where one may substitute $h_1(x) = x$.)\n\n\\begin{proof}[Proof 2 of Proposition~\\ref{prop:agn-components}]\nWe simplify using Proposition~\\ref{prop:gaussian-ibp}:\n\\begin{align*}\n \\la L_n, H_{\\bm\\alpha} \\ra = \\Ex_{\\bY \\sim \\QQ_n}\\left[L_n(\\bY)\\prod_{i = 1}^N h_{\\alpha_i}(Y_i)\\right] = \\Ex_{\\bY \\sim \\QQ_n}\\left[ \\frac{\\partial^{|\\bm\\alpha|}L_n}{\\partial Y_1^{\\alpha_1} \\cdots \\partial Y_N^{\\alpha_N}}(\\bY)\\right].\n\\end{align*}\nDifferentiating $L_n$ under the expectation, we have\n\\[ \\frac{\\partial^{|\\bm\\alpha|}L_n}{\\partial Y_1^{\\alpha_1} \\cdots \\partial Y_N^{\\alpha_N}}(\\bY) = \\Ex_{\\bX \\sim \\sP_n}\\left[\\,\\prod_{i = 1}^N X_i^{\\alpha_i}\\, \\exp\\left(-\\frac{1}{2}\\|\\bX\\|^2 + \\la \\bX, \\bY \\ra\\right)\\right]. \\]\nTaking the expectation over $\\bY$, we have $\\EE_{\\bY \\sim \\QQ_n} \\exp(\\la \\bX, \\bY \\ra) = \\exp(\\frac{1}{2}\\|\\bX\\|^2)$, so the exponential factors cancel and the result follows.\n\\end{proof}\n\n\n\\subsubsection{Proof 3: Hermite Generating Function}\n\nFinally, the third approach to proving Proposition~\\ref{prop:agn-components} uses the following generating function for the Hermite polynomials.\n\n\\begin{proposition}\n \\label{prop:hermite-gf}\n For any $x, y \\in \\RR$,\n \\[ \\exp\\left(xy - \\frac{1}{2}x^2\\right) = \\sum_{k = 0}^\\infty \\frac{1}{k!}x^k h_k(y). 
\\]\n\\end{proposition}\n\n\\begin{proof}[Proof 3 of Proposition~\\ref{prop:agn-components}]\n We may use Proposition~\\ref{prop:hermite-gf} to expand $L_n$ in the Hermite polynomials directly:\n \\begin{align*}\n L_n(\\bY)\n &= \\Ex_{\\bX \\sim \\sP_n}\\left[\\exp\\left(\\la \\bX, \\bY \\ra - \\frac{1}{2}\\|\\bX\\|^2\\right)\\right] \\\\\n &= \\Ex_{\\bX \\sim \\sP_n}\\left[\\,\\prod_{i = 1}^N\\left(\\sum_{k = 0}^\\infty\\frac{1}{k!}X_i^k h_k(Y_i)\\right)\\right] \\\\\n &= \\sum_{\\bm\\alpha \\in \\NN^N} \\frac{1}{\\prod_{i = 1}^N \\alpha_i!}\\, \\EE_{\\bX \\sim \\sP_n}\\left[\\,\\prod_{i = 1}^N X_i^{\\alpha_i}\\right] H_{\\bm\\alpha}(\\bm Y).\n \\end{align*}\n Comparing with the expansion \\eqref{eq:agn-Ln-expansion} then gives the result.\n\\end{proof}\n\nNow that we have the simple form~\\eqref{eq:gaussian-2nd-low} for the norm of the LDLR, it remains to investigate its convergence or divergence (as $n \\to \\infty$) using problem-specific statistics of $\\bX$. In the next section we give some examples of how to carry out this analysis.\n\n\n\n\\section{Examples: Spiked Matrix and Tensor Models}\n\\label{sec:examples}\n\nIn this section, we perform the low-degree analysis for a particular important case of the additive Gaussian model: the \\emph{order-$p$ spiked Gaussian tensor model}, also referred to as the \\emph{tensor PCA (principal component analysis)} problem. This model was introduced by \\cite{RM-tensor} and has received much attention recently. 
The special case $p=2$ of the spiked tensor model is the so-called \\emph{spiked Wigner matrix model} which has been widely studied in random matrix theory, statistics, information theory, and statistical physics; see \\cite{leo-survey} for a survey.\n\nIn concordance with prior work, our low-degree analysis of these models illustrates two representative phenomena: the spiked Wigner matrix model exhibits a sharp computational phase transition, whereas the spiked tensor model (with $p \\ge 3$) has a ``soft'' tradeoff between statistical power and runtime which extends through the subexponential-time regime. A low-degree analysis of the spiked tensor model has been carried out previously in \\cite{sos-hidden,sam-thesis}; here we give a sharper analysis that more precisely captures the power of subexponential-time algorithms.\n\nIn Section~\\ref{sec:spiked-tensor}, we carry out our low-degree analysis of the spiked tensor model. In Section~\\ref{sec:spiked-matrix}, we devote additional attention to the special case of the spiked Wigner model, giving a refined analysis that captures its sharp phase transition and applies to a variety of distributions of ``spikes.''\n\n\n\\subsection{The Spiked Tensor Model}\n\\label{sec:spiked-tensor}\n\nWe begin by defining the model.\n\n\\begin{definition}\nAn $n$-dimensional order-$p$ \\emph{tensor} $\\bT \\in (\\RR^n)^{\\otimes p}$ is a multi-dimensional array with $p$ dimensions each of length $n$, with entries denoted $T_{i_1,\\ldots,i_p}$ where $i_j \\in [n]$. For a vector $\\bx \\in \\RR^n$, the \\emph{rank-one tensor} $\\bx^{\\otimes p} \\in (\\RR^n)^{\\otimes p}$ has entries $(\\bx^{\\otimes p})_{i_1,\\ldots,i_p} = x_{i_1} x_{i_2} \\cdots x_{i_p}$.\n\\end{definition}\n\n\\begin{definition}[Spiked Tensor Model]\n\\label{def:spiked-tensor}\nFix an integer $p \\ge 2$. 
The order-$p$ \\emph{spiked tensor model} is the additive Gaussian noise model (Definition~\\ref{def:agn}) with $\\bX = \\lambda \\bx^{\\otimes p}$, where $\\lambda = \\lambda(n) > 0$ is a signal-to-noise parameter and $\\bx \\in \\RR^n$ (the ``spike'') is drawn from some probability distribution $\\sX_n$ over $\\RR^n$ (the ``prior''), normalized so that $\\|\\bx\\|^2 \\to n$ in probability as $n \\to \\infty$. In other words:\n\\begin{itemize}\n \\item Under $\\PP_n$, observe $\\bY = \\lambda \\bx^{\\otimes p} + \\bZ$.\n \\item Under $\\QQ_n$, observe $\\bY = \\bZ$.\n\\end{itemize}\nHere, $\\bZ$ is a tensor with i.i.d.\\ entries distributed as $\\sN(0,1)$.\\footnote{This model is equivalent to the more standard model in which the noise is symmetric with respect to permutations of the indices; see Appendix~\\ref{app:symm}.}\n\\end{definition}\n\n\\noindent Throughout this section we will focus for the sake of simplicity on the Rademacher spike prior, where $\\bx$ has i.i.d.\\ entries $x_i \\sim \\Unif(\\{\\pm 1\\})$. We focus on the problem of strongly distinguishing $\\PP_n$ and $\\QQ_n$ (see Definition~\\ref{def:stat-ind}), but, as is typical for high-dimensional problems, the problem of estimating $\\bx$ seems to behave in essentially the same way (see Section~\\ref{sec:extensions}). \n\nWe first state our results on the behavior of the LDLR for this model.\n\n\\begin{theorem}\\label{thm:tensor-lowdeg}\nConsider the order-$p$ spiked tensor model with $\\bx$ drawn from the Rademacher prior, $x_i \\sim \\Unif(\\{\\pm 1\\})$ i.i.d.\\ for $i \\in [n]$. Fix sequences $D = D(n)$ and $\\lambda = \\lambda(n)$. 
For constants $0 < A_p < B_p$ depending only on $p$, we have the following.\\footnote{Concretely, one may take $A_p = \\frac{1}{\\sqrt{2}} p^{-p\/4-1\/2}$ and $B_p = \\sqrt{2} e^{p\/2} p^{-p\/4}$.}\n\\begin{enumerate}\n \\item[(i)] If $\\lambda \\le A_p\\, n^{-p\/4} D^{(2-p)\/4}$ for all sufficiently large $n$, then $\\|L_n^{\\le D}\\| = O(1)$.\n \\item[(ii)] If $\\lambda \\ge B_p\\, n^{-p\/4} D^{(2-p)\/4}$ and $D \\le \\frac{2}{p}n$ for all sufficiently large $n$, and $D = \\omega(1)$, then $\\|L_n^{\\le D}\\| = \\omega(1)$.\n\\end{enumerate}\n(Here we are considering the limit $n \\to \\infty$ with $p$ held fixed, so $O(1)$ and $\\omega(1)$ may hide constants depending on $p$.)\n\\end{theorem}\n\n\\noindent Before we prove this, let us interpret its meaning. If we take degree-$D$ polynomials as a proxy for $n^{\\tilde\\Theta(D)}$-time algorithms (as discussed in Section~\\ref{sec:ldlr-conj}), our calculations predict that an $n^{O(D)}$-time algorithm exists when $\\lambda \\gg n^{-p\/4} D^{(2-p)\/4}$ but not when $\\lambda \\ll n^{-p\/4} D^{(2-p)\/4}$. (Here we ignore log factors, so we use $A \\ll B$ to mean $A \\le B\/\\mathrm{polylog}(n)$.) These predictions agree precisely with the previously established statistical-versus-computational tradeoffs in the spiked tensor model! It is known that polynomial-time distinguishing algorithms exist when $\\lambda \\gg n^{-p\/4}$ \\cite{RM-tensor,HSS-tensor,tensor-hom,sos-fast}, and sum-of-squares lower bounds suggest that there is no polynomial-time distinguishing algorithm when $\\lambda \\ll n^{-p\/4}$ \\cite{HSS-tensor,sos-hidden}. \n\nFurthermore, one can study the power of subexponential-time algorithms, i.e., algorithms of runtime $n^{n^\\delta} = \\exp(\\tilde{O}(n^{\\delta}))$ for a constant $\\delta \\in (0,1)$. 
Such algorithms are known to exist when $\\lambda \\gg n^{-p\/4-\\delta(p-2)\/4}$ \\cite{strongly-refuting,BGG,BGL,kikuchi}, matching our prediction.\\footnote{Some of these results only apply to minor variants of the spiked tensor problem, but we do not expect this difference to be important.}\nThese algorithms interpolate smoothly between the polynomial-time algorithm which succeeds when $\\lambda \\gg n^{-p\/4}$, and the exponential-time exhaustive search algorithm which succeeds when $\\lambda \\gg n^{(1-p)\/2}$. (Distinguishing the null and planted distributions is information-theoretically impossible when $\\lambda \\ll n^{(1-p)\/2}$ \\cite{RM-tensor,PWB-tensor,tensor-phys,tensor-stat}, so this is indeed the correct terminal value of $\\lambda$ for computational questions.) The tradeoff between statistical power and runtime that these algorithms achieve is believed to be optimal, and our results corroborate this claim. Our results are sharper than the previous low-degree analysis for the spiked tensor model \\cite{sos-hidden,sam-thesis}, in that we pin down the precise constant $\\delta$ in the subexponential runtime. (Similarly precise analyses of the tradeoff between subexponential runtime and statistical power have been obtained for CSP refutation \\cite{strongly-refuting} and sparse PCA \\cite{subexp-sparse}.)\n\nWe now begin the proof of Theorem~\\ref{thm:tensor-lowdeg}. 
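As a concrete warm-up, Theorem~\\ref{thm:agn-ldlr-norm} makes $\\|L_n^{\\le D}\\|^2$ exactly computable for small instances: under the Rademacher prior, the overlap $\\la \\bx^1, \\bx^2 \\ra$ is a sum of $n$ i.i.d.\\ signs, whose moments admit a closed form. The following sketch (all parameters are illustrative, not from the text) evaluates the resulting sum $\\sum_{d \\le D} \\frac{\\lambda^{2d}}{d!} \\EE[\\la \\bx^1, \\bx^2 \\ra^{pd}]$ at signal strengths below and above the scale $n^{-p\/4} D^{(2-p)\/4}$:

```python
import math
from math import comb

def overlap_moment(n, m):
    # Exact E[<x1,x2>^m] for independent Rademacher x1, x2 in {+-1}^n:
    # the overlap is a sum of n i.i.d. signs, distributed as n - 2*Binom(n, 1/2).
    return sum(comb(n, j) * (n - 2 * j) ** m for j in range(n + 1)) / 2 ** n

def ldlr_norm_sq(n, p, D, lam):
    # ||L_n^{<=D}||^2 = sum_{d<=D} lam^{2d}/d! * E[<x1,x2>^{pd}]  (Theorem agn-ldlr-norm)
    return sum(lam ** (2 * d) / math.factorial(d) * overlap_moment(n, p * d)
               for d in range(D + 1))

n, p, D = 30, 3, 5                       # illustrative small instance
scale = n ** (-p / 4) * D ** ((2 - p) / 4)
weak = ldlr_norm_sq(n, p, D, 0.1 * scale)   # below the threshold scale
strong = ldlr_norm_sq(n, p, D, 5.0 * scale) # above the threshold scale
assert ldlr_norm_sq(n, p, D, 0.0) == 1.0
assert weak < 1.1      # norm stays near 1 in the bounded regime
assert strong > 10.0   # norm is already large in the diverging regime
```

Even at this small size, the norm separates cleanly between the two regimes of Theorem~\\ref{thm:tensor-lowdeg}.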
Since the spiked tensor model is an instance of the additive Gaussian model, we can apply the formula from Theorem~\\ref{thm:agn-ldlr-norm}: letting $\\bx^1,\\bx^2$ be independent draws from $\\sX_n$,\n\\begin{equation}\\label{eq:tensor-L}\n\\|L_n^{\\le D}\\|^2 = \\Ex_{\\bx^1,\\bx^2} \\exp^{\\le D}(\\lambda^2 \\langle \\bx^1,\\bx^2 \\rangle^p) = \\sum_{d=0}^{D} \\frac{\\lambda^{2d}}{d!} \\Ex_{\\bx^1,\\bx^2} [\\langle \\bx^1,\\bx^2 \\rangle^{pd}].\n\\end{equation}\nWe will give upper and lower bounds on this quantity in order to prove the two parts of Theorem~\\ref{thm:tensor-lowdeg}.\n\n\\subsubsection{Proof of Theorem~\\ref{thm:tensor-lowdeg}: Upper Bound}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:tensor-lowdeg}(i)]\n\nWe use the moment bound\n\\begin{equation}\\label{eq:subg-mom}\n\\Ex_{\\bx^1,\\bx^2}[|\\langle \\bx^1,\\bx^2 \\rangle|^k] \\le (2n)^{k\/2} k \\Gamma(k\/2)\n\\end{equation}\nfor any integer $k \\ge 1$. This follows from $\\langle \\bx^1,\\bx^2 \\rangle$ being a \\emph{subgaussian} random variable with variance proxy $n$ (see Appendix~\\ref{app:subg} for details on this notion, and see Proposition~\\ref{prop:subg-mom} for the bound~\\eqref{eq:subg-mom}). Plugging this into~\\eqref{eq:tensor-L},\n\\[ \\|L_n^{\\le D}\\|^2 \\le 1 + \\sum_{d=1}^D \\frac{\\lambda^{2d}}{d!} (2n)^{pd\/2}pd\\,\\Gamma(pd\/2) =: 1 + \\sum_{d=1}^D T_d. \\]\nNote that $T_1 = O(1)$ provided $\\lambda = O(n^{-p\/4})$ (which will be implied by~\\eqref{eq:tensor-lam} below). Consider the ratio between successive terms:\n\\[ r_d := \\frac{T_{d+1}}{T_d} = \\frac{\\lambda^2}{d+1} (2n)^{p\/2} p \\,\\frac{\\Gamma(p(d+1)\/2)}{\\Gamma(pd\/2)}. \\]\nUsing the bound $\\Gamma(x+a)\/\\Gamma(x) \\le (x+a)^a$ for all $a, x > 0$ (see Proposition~\\ref{prop:gamma}), we find\n\\[ r_d \\le \\frac{\\lambda^2}{d+1} (2n)^{p\/2} p [p(d+1)\/2]^{p\/2} \\le \\lambda^2 p^{p\/2+1} n^{p\/2} (d+1)^{p\/2-1}. 
\\]\nThus if $\\lambda$ is small enough, namely if\n\\begin{equation}\\label{eq:tensor-lam}\n\\lambda \\le \\frac{1}{\\sqrt{2}}\\, p^{-p\/4-1\/2} n^{-p\/4} D^{(2-p)\/4},\n\\end{equation}\nthen $r_d \\le 1\/2$ for all $1 \\le d < D$.\nIn this case, by comparing with a geometric sum we may bound $\\|L_n^{\\le D}\\|^2 \\le 1 + 2 T_1 = O(1)$.\n\\end{proof}\n\n\n\n\n\n\\subsubsection{Proof of Theorem~\\ref{thm:tensor-lowdeg}: Lower Bound}\n\\label{sec:tensor-lower}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:tensor-lowdeg}(ii)]\nNote that $\\langle \\bx^1,\\bx^2 \\rangle = \\sum_{i=1}^n s_i$ where $s_1, \\dots, s_n$ are i.i.d.\\ Rademacher random variables, so $\\EE_{\\bx^1,\\bx^2}[\\langle \\bx^1,\\bx^2 \\rangle^{2k + 1}] = 0$, and\n$$\\Ex_{\\bx^1,\\bx^2}[\\langle \\bx^1,\\bx^2 \\rangle^{2k}] = \\EE\\left[\\left(\\sum_{i=1}^n s_i\\right)^{2k}\\right] = \\sum_{i_1,i_2,\\ldots,i_{2k} \\in [n]} \\EE[s_{i_1} s_{i_2} \\cdots s_{i_{2k}}].$$\nBy counting only the terms $\\EE[s_{i_1} s_{i_2} \\cdots s_{i_{2k}}]$ in which each $s_i$ appears either 0 or 2 times, we have\n\\begin{equation}\\label{eq:mom-bound}\n\\Ex_{\\bx^1,\\bx^2} [\\langle \\bx^1,\\bx^2 \\rangle^{2k}] \\ge \\binom{n}{k}\\frac{(2k)!}{2^k}.\n\\end{equation}\nLet $d$ be the largest integer such that $d \\le D$ and $pd$ is even. By our assumption $D \\le \\frac{2}{p}n $, we then have $pd\/2 \\le n$.\nWe now bound $\\|L_n^{\\leq D}\\|^2$ by only the degree-$pd$ term of~\\eqref{eq:tensor-L}, and using the bounds $\\binom{n}{k} \\ge (n\/k)^k$ (for $1 \\le k \\le n$) and $(n\/e)^n \\le n! 
\\le n^n$, we can lower bound that term as follows:\n\\begin{align*}\n\\|L_n^{\\le D}\\|^2 &\\ge \\frac{\\lambda^{2d}}{d!} \\Ex_{\\bx^1,\\bx^2}[\\langle \\bx^1,\\bx^2 \\rangle^{pd}] \\\\\n&\\ge \\frac{\\lambda^{2d}}{d!} \\binom{n}{pd\/2} \\frac{(pd)!}{2^{pd\/2}} \\\\\n&\\ge \\frac{\\lambda^{2d}}{d^d} \\left(\\frac{2n}{pd}\\right)^{pd\/2} \\frac{(pd\/e)^{pd}}{2^{pd\/2}} \\\\\n&= \\left(\\lambda^2 e^{-p} p^{p\/2} n^{p\/2} d^{p\/2-1} \\right)^d.\n\\end{align*}\nNow, if $\\lambda$ is large enough, namely if\n$$\\lambda \\ge \\sqrt{2} e^{p\/2} p^{-p\/4} n^{-p\/4} D^{(2-p)\/4}$$\nand $D = \\omega(1)$, then $\\|L_n^{\\le D}\\|^2 \\ge (2-o(1))^d = \\omega(1)$.\n\\end{proof}\n\n\n\\subsection{The Spiked Wigner Matrix Model: Sharp Thresholds}\n\\label{sec:spiked-matrix}\n\nWe now turn our attention to a more precise understanding of the case $p = 2$ of the spiked tensor model, which is more commonly known as the \\emph{spiked Wigner matrix model}.\nOur results from the previous section (specialized to $p=2$) suggest that if $\\lambda \\gg n^{-1\/2}$ then there should be a polynomial-time distinguishing algorithm, whereas if $\\lambda \\ll n^{-1\/2}$ then there should not even be a subexponential-time distinguishing algorithm (that is, no algorithm of runtime $\\exp(n^{1-\\varepsilon})$ for any $\\varepsilon > 0$). In this section, we will give a more detailed low-degree analysis that identifies the precise value of $\\lambda \\sqrt{n}$ at which this change occurs. This type of sharp threshold has been observed in various high-dimensional inference problems; another notable example is the Kesten-Stigum transition for community detection in the stochastic block model \\cite{block-model-1,block-model-2,MNS-rec,massoulie,MNS-proof}. It was first demonstrated by \\cite{HS-bayesian} that the low-degree method can capture such sharp thresholds.\n\nTo begin, we recall the problem setup. 
Since the interesting regime is $\\lambda = \\Theta(n^{-1\/2})$, we define $\\hat\\lambda = \\lambda \\sqrt{2n}$ and take $\\hat\\lambda$ to be constant (not depending on $n$). With this notation, the spiked Wigner model is as follows:\n\\begin{itemize}\n \\item Under $\\PP_n$, observe $\\bY = \\frac{\\hat\\lambda}{\\sqrt{2n}} \\bx\\bx^\\top + \\bZ$ where $\\bx \\in \\RR^n$ is drawn from $\\sX_n$.\n \\item Under $\\QQ_n$, observe $\\bY = \\bZ$.\n\\end{itemize}\nHere $\\bZ$ is an $n \\times n$ random matrix with i.i.d.\\ entries distributed as $\\sN(0,1)$. (This asymmetric noise model is equivalent to the more standard symmetric one; see Appendix~\\ref{app:symm}.) We will consider various spike priors $\\sX_n$, but require the following normalization.\n\n\\begin{assumption}\\label{asm:spike-family}\nThe spike prior $(\\sX_n)_{n \\in \\NN}$ is normalized so that $\\bx \\sim \\sX_n$ satisfies $\\|\\bx\\|^2 \\to n$ in probability as $n \\to \\infty$.\n\\end{assumption}\n\n\n\\subsubsection{The Canonical Distinguishing Algorithm: PCA}\n\\label{sec:pca}\n\nThere is a simple reference algorithm for testing in the spiked Wigner model, namely \\emph{PCA (principal component analysis)}, by which we simply mean thresholding the largest eigenvalue of the (symmetrized) observation matrix.\n\n\\begin{definition}\n The \\emph{PCA test} for distinguishing $\\PPP$ and $\\QQQ$ is the following statistical test, computable in polynomial time in $n$.\n Let $\\overline{\\bY} \\colonequals (\\bY + \\bY^\\top) \/ \\sqrt{2n} = \\frac{\\hat\\lambda}{n} \\bx\\bx^\\top + \\bW$, where $\\bW = (\\bZ + \\bZ^\\top) \/ \\sqrt{2n}$ is a random matrix with the GOE distribution.\\footnote{Gaussian Orthogonal Ensemble (GOE): $\\bW$ is a symmetric $n \\times n$ matrix with entries $W_{ii} \\sim \\sN(0,2\/n)$ and $W_{ij} = W_{ji} \\sim \\sN(0,1\/n)$, independently.}\n Then, let\n \\begin{equation*}\n f_{\\hat\\lambda}^{\\mathsf{PCA}}(\\bY) \\colonequals \\left\\{\\begin{array}{lcl} \\tp & : & 
\\lambda_{\\max}(\\overline{\\bY}) > t(\\hat\\lambda) \\\\ \\tq & : & \\lambda_{\\max}(\\overline{\\bY}) \\leq t(\\hat\\lambda) \\end{array}\\right\\}\n \\end{equation*}\n where the threshold is set to $t(\\hat\\lambda) \\colonequals 2 + (\\hat\\lambda + \\hat\\lambda^{-1} - 2) \/ 2$.\n\\end{definition}\n\\noindent\nThe theoretical underpinning of this test is the following seminal result from random matrix theory, the analogue for Wigner matrices of the celebrated ``BBP transition'' \\cite{BBP}.\n\\begin{theorem}[\\cite{FP-bbp,BGN}]\n \\label{thm:wig-bbp}\n Let $\\hat\\lambda$ be constant (not depending on $n$). Let $\\overline{\\bY} = \\frac{\\hat\\lambda}{n} \\bx\\bx^\\top + \\bW$ with $\\bW \\sim \\GOE(n)$ and arbitrary $\\bx \\in \\RR^n$ with $\\|\\bx\\|^2 = n$.\n \\begin{itemize}\n \\item If $\\hat\\lambda \\leq 1$, then $\\lambda_{\\max}(\\overline{\\bY}) \\to 2$ as $n \\to \\infty$ almost surely, and $\\la \\bv_{\\max}(\\overline{\\bY}), \\bx\/\\sqrt{n} \\ra^2 \\to 0$ almost surely (where $\\lambda_{\\max}$ denotes the largest eigenvalue and $\\bv_{\\max}$ denotes the corresponding unit-norm eigenvector).\n \\item If $\\hat\\lambda > 1$, then $\\lambda_{\\max}(\\overline{\\bY}) \\to \\hat\\lambda + \\hat\\lambda^{-1} > 2$ as $n \\to \\infty$ almost surely, and $\\la \\bv_{\\max}(\\overline{\\bY}), \\bx\/\\sqrt{n} \\ra^2 \\to 1 - \\hat\\lambda^{-2}$ almost surely.\n \\end{itemize}\n\\end{theorem}\n\n\\noindent Thus, the PCA test exhibits a sharp threshold: it succeeds when $\\hat\\lambda > 1$, and fails when $\\hat\\lambda \\le 1$. 
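This transition is easy to observe empirically. The following sketch (a Rademacher spike, with dimension and signal strengths chosen purely for illustration) simulates the symmetrized observation matrix at one value of $\\hat\\lambda$ below and one above the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_eig_spiked_goe(n, lam_hat):
    # lambda_max of Ybar = (lam_hat/n) x x^T + W with W ~ GOE(n) and x Rademacher.
    x = rng.choice([-1.0, 1.0], size=n)
    G = rng.normal(size=(n, n))
    W = (G + G.T) / np.sqrt(2 * n)   # GOE: diagonal variance 2/n, off-diagonal 1/n
    Y = (lam_hat / n) * np.outer(x, x) + W
    return np.linalg.eigvalsh(Y)[-1]

n = 1500
below = largest_eig_spiked_goe(n, 0.5)   # subcritical: edge of the bulk, near 2
above = largest_eig_spiked_goe(n, 2.0)   # supercritical: near lam_hat + 1/lam_hat = 2.5
assert abs(below - 2.0) < 0.2
assert abs(above - 2.5) < 0.2
```

At $\\hat\\lambda = 0.5$ the top eigenvalue is indistinguishable from the bulk edge at $2$, while at $\\hat\\lambda = 2$ it pops out near $\\hat\\lambda + \\hat\\lambda^{-1}$, as Theorem~\\ref{thm:wig-bbp} predicts.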
(Furthermore, the leading eigenvector achieves non-trivial estimation of the spike $\\bx$ when $\\hat\\lambda > 1$ and fails to do so when $\\hat\\lambda \\le 1$.)\n\n\\begin{corollary}\n For any $\\hat\\lambda > 1$ and any spike prior family $(\\sX_n)_{n \\in \\NN}$ valid per Assumption~\\ref{asm:spike-family}, $f_{\\hat\\lambda}^{\\mathsf{PCA}}$ is a polynomial-time statistical test strongly distinguishing $\\PPP_{\\lambda}$ and $\\QQQ$.\n\\end{corollary}\n\n\\noindent For some spike priors $(\\sX_n)$, it is known that PCA is statistically optimal, in the sense that distinguishing (or estimating the spike) is information-theoretically impossible when $\\hat\\lambda < 1$.\nThese priors include the prior with $\\bx$ drawn uniformly from the sphere of radius $\\sqrt{n}$ and the priors with $\\bx$ having i.i.d.\\ $\\sN(0, 1)$ or Rademacher (uniformly $\\pm 1$) entries \\cite{MRZ-spectral,DAM,BMVVX-pca,PWBM-pca}. For these priors, we thus have $\\hat\\lambda_{\\mathsf{stat}} = \\hat\\lambda_{\\mathsf{comp}} = 1$, and there is no stat-comp gap.\n\nA different picture emerges for other spike priors, such as the \\emph{sparse Rademacher prior}\\footnote{In the sparse Rademacher prior, each entry of $\\bx$ is nonzero with probability $\\rho$ (independently), and the nonzero entries are drawn uniformly from $\\{\\pm 1\/\\sqrt{\\rho}\\}$.} with constant density $\\rho = \\Theta(1)$. If $\\rho$ is smaller than a particular small constant (roughly $0.09$ \\cite{mi-rank-one}), it is known that $\\hat\\lambda_{\\mathsf{stat}} < 1$. More precisely, an exponential-time exhaustive search algorithm succeeds in part of the regime where PCA fails \\cite{BMVVX-pca}. 
For any given $\\rho$, the precise threshold $\\hat\\lambda_{\\mathsf{stat}}$ can be computed using the \\emph{replica-symmetric formula} from statistical physics (see, e.g., \\cite{LKZ-mmse,LKZ-sparse,mi-rank-one,BDMKLZ-spiked,LM-symm,finite-size,replica-short,detection-wig,tensor-stat}, or \\cite{leo-survey} for a survey). However, for any constant $\\rho$, it is believed that $\\hat\\lambda_{\\mathsf{comp}} = 1$, i.e., that no \\emph{polynomial-time} algorithm can ``beat PCA.'' This has been conjectured in the same statistical physics literature based on failure of the \\emph{approximate message passing (AMP)} algorithm \\cite{LKZ-mmse,LKZ-sparse,mi-rank-one}.\n\nWe will next give a low-degree analysis corroborating that conjecture: for a large class of spike priors (including the sparse Rademacher prior with constant $\\rho$), our predictions suggest that any distinguishing algorithm requires nearly-exponential time whenever $\\hat\\lambda < 1$.\nOne interesting case we will \\emph{not} cover is that of the sparse Rademacher prior with $\\rho = o(1)$. In such models, there actually \\emph{are} subexponential-time algorithms that succeed for some $\\hat\\lambda < 1$; these are described in \\cite{subexp-sparse} along with matching lower bounds using the low-degree method. Furthermore, there are polynomial-time algorithms that can beat the PCA threshold once $\\rho \\lesssim 1\/\\sqrt{n}$; this is the much-studied ``sparse PCA'' regime (see, e.g., \\cite{JL04,JL09,AW-sparse,BR-sparse,KNV,DM-sparse,subexp-sparse}).\n\n\n\n\\subsubsection{Low-Degree Analysis: Informally, with the ``Gaussian Heuristic''}\n\\label{sec:spiked-wigner-gaussian-heuristic}\n\nWe first give a heuristic low-degree analysis of the spiked Wigner model which suggests $\\hat\\lambda_{\\mathsf{comp}} = 1$ (matching PCA) for sufficiently ``reasonable'' spike priors $(\\sX_n)$. 
In the next section, we will state and prove a rigorous statement to this effect.\n\nRecall from~\\eqref{eq:tensor-L} the expression for the norm of the LDLR:\n\\begin{equation*}\n\\|L_n^{\\le D}\\|^2 = \\sum_{d=0}^{D} \\frac{\\lambda^{2d}}{d!} \\Ex_{\\bx^1,\\bx^2} [\\langle \\bx^1,\\bx^2 \\rangle^{2d}] = \\sum_{d=0}^{D} \\frac{1}{d!}\\left(\\frac{\\hat\\lambda^2}{2n}\\right)^d \\Ex_{\\bx^1,\\bx^2} [\\langle \\bx^1,\\bx^2 \\rangle^{2d}].\n\\end{equation*}\n\n\\noindent To predict $\\hat\\lambda_{\\mathsf{comp}}$, it remains to determine whether $\\|L_n^{\\le D}\\|$ converges or diverges as $n \\to \\infty$, as a function of $\\hat\\lambda$.\nRecall that, per Assumption~\\ref{asm:spike-family}, $\\sX_n$ is normalized so that $\\|\\bx\\|^2 \\approx n$. Thus when, e.g., $\\bx$ has i.i.d.\\ entries, we may expect the following informal central limit theorem:\n\\begin{quote}\n ``When $\\bx^1, \\bx^2 \\sim \\sX_n$ independently, $\\la \\bx^1, \\bx^2 \\ra$ is distributed approximately as $\\sN(0, n)$.''\n\\end{quote}\nAssuming this heuristic applies to the first $2D$ moments of $\\la \\bx^1, \\bx^2 \\ra$, and recalling that the Gaussian moments are $\\EE_{g \\sim \\sN(0,1)} [g^{2k}] = (2k-1)!! = \\prod_{i = 1}^k (2i - 1)$, we may estimate\n\\begin{equation*}\n \\|L_n^{\\leq D}\\|^2 \\approx \\sum_{d=0}^{D} \\frac{1}{d!}\\left(\\frac{\\hat\\lambda^2}{2n}\\right)^d \\Ex_{g \\sim \\sN(0, n)} [g^{2d}] = \\sum_{d=0}^{D} \\frac{1}{d!}\\left(\\frac{\\hat\\lambda^2}{2n}\\right)^d n^d(2d - 1)!! =: \\sum_{d=0}^D T_d.\n\\end{equation*}\nImagine $D$ grows slowly with $n$ (e.g., $D \\approx \\log n$) in order to predict the power of polynomial-time algorithms. 
The ratio of consecutive terms above is $$\\frac{T_{d+1}}{T_d} = \\hat\\lambda^2\\cdot \\frac{2d+1}{2(d+1)} \\approx \\hat\\lambda^2,$$\nsuggesting that $\\|L_n^{\\le D}\\|$ should diverge if $\\hat\\lambda > 1$ and converge if $\\hat\\lambda < 1$.\n\nWhile this style of heuristic analysis is often helpful for guessing the correct threshold, it can break down if $D$ is too large or if $\\bx$ is too sparse. In the next section, we therefore give a rigorous analysis of $\\|L_n^{\\le D}\\|$.\n\n\n\n\\subsubsection{Low-Degree Analysis: Formally, with Concentration Inequalities}\n\\label{sec:wig-bound-L}\n\nWe now give a rigorous proof that $\\|L_n^{\\leq D}\\| = O(1)$ when $\\hat\\lambda < 1$ (and $\\|L_n^{\\leq D}\\| = \\omega(1)$ when $\\hat\\lambda > 1$), provided the spike prior is ``nice enough.'' Specifically, we require the following condition on the prior.\n\n\\begin{definition}\\label{def:local-chernoff}\nA spike prior $(\\sX_n)_{n \\in \\NN}$ admits a \\emph{local Chernoff bound} if for any $\\eta > 0$ there exist $\\delta > 0$ and $C > 0$ such that for all $n$,\n\\begin{equation*}\n\\Pr\\left\\{|\\la \\bx^1,\\bx^2 \\ra| \\ge t\\right\\} \\le C \\exp\\left(-\\frac{1}{2n}(1-\\eta)t^2\\right) \\quad \\text{for all } t \\in [0,\\delta n]\n\\end{equation*}\nwhere $\\bx^1,\\bx^2$ are drawn independently from $\\sX_n$.\n\\end{definition}\n\\noindent For instance, any prior with i.i.d.\\ \\emph{subgaussian} entries admits a local Chernoff bound; see Proposition~\\ref{prop:local-chernoff} in Appendix~\\ref{app:subg}. This includes the sparse Rademacher prior with any constant density $\\rho$. 
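For the Rademacher prior one can see a bound of this form very concretely: the overlap $\\la \\bx^1, \\bx^2 \\ra$ is a sum of $n$ i.i.d.\\ signs, so Hoeffding's inequality gives $\\Pr\\{|\\la \\bx^1, \\bx^2 \\ra| \\ge t\\} \\le 2\\exp(-t^2\/2n)$, a local Chernoff bound with $\\eta = 0$ and $C = 2$ (valid here for all $t$, not just $t \\le \\delta n$). The sketch below (with an illustrative $n$) checks this against the exact binomial tail:

```python
from math import comb, exp

def overlap_tail(n, t):
    # Exact P(|<x1,x2>| >= t) for independent Rademacher x1, x2 in {+-1}^n:
    # the overlap is distributed as n - 2*Binom(n, 1/2).
    return sum(comb(n, j) for j in range(n + 1) if abs(n - 2 * j) >= t) / 2 ** n

# Hoeffding's inequality: P(|overlap| >= t) <= 2 exp(-t^2/(2n)).
n = 200
for t in range(0, n + 1, 10):
    assert overlap_tail(n, t) <= 2 * exp(-t ** 2 / (2 * n))
```

Since Hoeffding's inequality is a theorem, the exact tail sits below the Gaussian-type bound at every $t$.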
The following is the main result of this section, which predicts that for this class of spike priors, any algorithm that beats the PCA threshold requires nearly-exponential time.\n\n\\begin{theorem}\\label{thm:wig-bound-L}\nSuppose $(\\sX_n)_{n \\in \\NN}$ is a spike prior that (i) admits a local Chernoff bound, and (ii) satisfies $\\|\\bx\\|^2 \\le \\sqrt{2}n$ almost surely when $\\bx \\sim \\sX_n$. Then, for the spiked Wigner model with $\\hat\\lambda < 1$ and any $D = D(n) = o(n\/\\log n)$, we have $\\|L_n^{\\leq D}\\| = O(1)$ as $n \\to \\infty$.\n\\end{theorem}\n\n\\begin{remark}\nThe upper bound $\\|\\bx\\|^2 \\le \\sqrt{2}n$ is without loss of generality (provided $\\|\\bx\\|^2 \\to n$ in probability). This is because we can define a modified prior $\\tilde \\sX_n$ that draws $\\bx \\sim \\sX_n$ and outputs $\\bx$ if $\\|\\bx\\|^2 \\le \\sqrt{2}n$ and $\\bm 0$ otherwise. If $(\\sX_n)_{n \\in \\NN}$ admits a local Chernoff bound then so does $(\\tilde \\sX_n)_{n \\in \\NN}$. And, if the spiked Wigner model is computationally hard with the prior $(\\tilde \\sX_n)_{n \\in \\NN}$, it is also hard with the prior $(\\sX_n)_{n \\in \\NN}$, since the two differ with probability $o(1)$.\n\\end{remark}\n\nThough we already know that a polynomial-time algorithm (namely PCA) exists when $\\hat\\lambda > 1$, we can check that indeed $\\|L_n^{\\le D}\\| = \\omega(1)$ in this regime. For the sake of simplicity, we restrict this result to the Rademacher prior.\n\\begin{theorem}\\label{thm:wig-above}\nConsider the spiked Wigner model with the Rademacher prior: $\\bx$ has i.i.d.\\ entries $x_i \\sim \\Unif(\\{\\pm 1\\})$. If $\\hat\\lambda > 1$, then for any $D = \\omega(1)$ we have $\\|L_n^{\\le D}\\| = \\omega(1)$.\n\\end{theorem}\n\\noindent The proof is a simple modification of the proof of Theorem~\\ref{thm:tensor-lowdeg}(ii) in Section~\\ref{sec:tensor-lower}; we defer it to Appendix~\\ref{app:wig-above}. 
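Both regimes can also be observed numerically before the proof. Using the exact overlap moments of the Rademacher prior in the expansion $\\|L_n^{\\le D}\\|^2 = \\sum_{d=0}^{D} \\frac{1}{d!}(\\hat\\lambda^2\/2n)^d\\, \\EE[\\la \\bx^1, \\bx^2 \\ra^{2d}]$, the following sketch (with illustrative $n$ and $D$) shows the norm staying bounded when $\\hat\\lambda < 1$ and blowing up when $\\hat\\lambda > 1$:

```python
import math
from math import comb

def overlap_moment(n, m):
    # Exact E[<x1,x2>^m] for independent Rademacher x1, x2 in {+-1}^n.
    return sum(comb(n, j) * (n - 2 * j) ** m for j in range(n + 1)) / 2 ** n

def wigner_ldlr_norm_sq(n, lam_hat, D):
    # ||L_n^{<=D}||^2 for the spiked Wigner model (p = 2) with Rademacher prior.
    return sum((lam_hat ** 2 / (2 * n)) ** d / math.factorial(d)
               * overlap_moment(n, 2 * d) for d in range(D + 1))

n, D = 400, 40                              # illustrative parameters
below = wigner_ldlr_norm_sq(n, 0.8, D)
above = wigner_ldlr_norm_sq(n, 1.2, D)
# Rademacher moments are dominated termwise by N(0, n) moments, so for
# lam_hat < 1 the norm stays below the Gaussian-heuristic value (1 - lam_hat^2)^{-1/2}.
assert 1.0 < below < (1 - 0.8 ** 2) ** -0.5 + 1e-6
assert above > 100.0                        # diverging regime
```

The subcritical value sits below the Gaussian-heuristic limit $(1 - \\hat\\lambda^2)^{-1\/2} \\approx 1.67$, while the supercritical value is already enormous at $D = 40$.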
The remainder of this section is devoted to proving Theorem~\\ref{thm:wig-bound-L}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:wig-bound-L}]\nStarting from the expression for $\\|L_n^{\\le D}\\|^2$ (see Theorem~\\ref{thm:agn-ldlr-norm} and Remark~\\ref{rem:taylor}), we split $\\|L_n^{\\leq D}\\|^2$ into two terms, as follows:\n$$\\|L_n^{\\leq D}\\|^2 = \\Ex_{\\bx^1, \\bx^2}\\left[\\exp^{\\leq D}\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right)\\right]\n=: R_1 + R_2,$$\nwhere\n\\begin{align*}\nR_1 &\\colonequals \\Ex_{\\bx^1, \\bx^2}\\left[\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\exp^{\\leq D}\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right)\\right], \\\\\nR_2 &\\colonequals \\Ex_{\\bx^1, \\bx^2}\\left[\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| > \\varepsilon n}\\,\\exp^{\\leq D}\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right)\\right].\n\\end{align*}\nHere $\\varepsilon > 0$ is a small constant to be chosen later. We call $R_1$ the \\emph{small deviations} and $R_2$ the \\emph{large deviations}, and we will bound these two terms separately.\n\\paragraph{Bounding the large deviations.}\nUsing that $\\|\\bx\\|^2 \\le \\sqrt{2}n$, that $\\exp^{\\le D}(t)$ is increasing for $t \\ge 0$, and the local Chernoff bound (taking $\\varepsilon$ to be a sufficiently small constant),\n\\begin{align*}\nR_2 &\\le \\Pr\\left\\{|\\langle \\bx^1,\\bx^2 \\rangle| > \\varepsilon n\\right\\} \\exp^{\\leq D}\\left(2 \\lambda^2 n^2\\right)\\\\\n&\\le C \\exp\\left(-\\frac{1}{3}\\varepsilon^2 n\\right) \\sum_{d=0}^D \\frac{(\\hat\\lambda^2 n)^d}{d!} \\\\\n\\intertext{and noting that the last term of the sum is the largest since $\\hat\\lambda^2 n > D$,}\n&\\le C \\exp\\left(-\\frac{1}{3}\\varepsilon^2 n\\right) (D+1) \\frac{(\\hat\\lambda^2 n)^D}{D!}\\\\\n&= \\exp\\left[\\log C - \\frac{1}{3} \\varepsilon^2 n + \\log(D+1) + 2D \\log \\hat\\lambda + D \\log n - \\log(D!)\\right]\\\\\n&= o(1)\n\\end{align*}\nprovided $D = o(n\/\\log n)$.\n\n\\paragraph{Bounding the
small deviations.}\n\nWe adapt an argument from \\cite{PWB-tensor}. Here we do not need to make use of the truncation to degree $D$ at all, and instead simply use the bound $\\exp^{\\le D}(t) \\le \\exp(t)$ for $t \\ge 0$.\nWith this, we bound\n\\begin{align*}\nR_1 &= \\Ex_{\\bx^1, \\bx^2}\\left[\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\exp^{\\leq D}\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right)\\right]\\\\\n&\\le \\Ex_{\\bx^1, \\bx^2}\\left[\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\exp\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right)\\right]\\\\\n&= \\int_0^\\infty \\Pr\\left\\{\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\exp\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right) \\ge u\\right\\} du\\\\\n&\\le 1 + \\int_1^\\infty \\Pr\\left\\{\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\exp\\left(\\lambda^2\\la \\bx^1, \\bx^2 \\ra^2\\right) \\ge u\\right\\} du\\\\\n&= 1 + \\int_0^\\infty \\Pr\\left\\{\\One_{|\\langle \\bx^1,\\bx^2 \\rangle| \\le \\varepsilon n}\\,\\langle \\bx^1,\\bx^2 \\rangle^2 \\ge t\\right\\} \\lambda^2\\exp\\left(\\lambda^2 t\\right) dt \\tag{where $\\exp\\left(\\lambda^2 t\\right) = u$}\\\\\n&\\le 1 + \\int_0^\\infty C \\exp\\left(-\\frac{1}{2n}(1-\\eta)t\\right) \\lambda^2\\exp\\left(\\lambda^2 t\\right) dt \\tag{using the local Chernoff bound}\\\\\n&\\le 1 + \\frac{C \\hat\\lambda^2}{2n} \\int_0^\\infty \\exp\\left(-\\frac{1}{2n}(1 - \\eta - \\hat\\lambda^2)t\\right) dt\\\\\n&= 1 + C \\hat\\lambda^2 (1 - \\eta - \\hat\\lambda^2)^{-1} \\tag{provided $\\hat\\lambda^2 < 1-\\eta$}\\\\\n&= O(1).\n\\end{align*}\nSince $\\hat\\lambda < 1$, we can choose $\\eta > 0$ small enough so that $\\hat\\lambda^2 < 1 - \\eta$, and then choose $\\varepsilon$ small enough so that the local Chernoff bound holds. 
(Here, $\\eta$ and $\\varepsilon$ depend on $\\hat\\lambda$ and the spike prior, but not on $n$.)\n\\end{proof}\n\n\n\n\\section{More on the Low-Degree Method}\n\\label{sec:ldlr-conj-2}\n\nIn this section, we return to the general considerations introduced in Section~\\ref{sec:ldlr-conj} and describe some of the nuances in and evidence for the main conjecture underlying the low-degree method (Conjecture~\\ref{conj:low-deg-informal}). Specifically, we investigate the question of what can be concluded (both rigorously and conjecturally) from the behavior of the low-degree likelihood ratio (LDLR) defined in Definition~\\ref{def:LDLR}. We present conjectures and formal evidence connecting the LDLR to computational complexity, discussing various caveats and counterexamples along the way. \n\nIn Section~\\ref{sec:LDLR-poly}, we explore to what extent the $D$-LDLR controls whether or not degree-$D$ polynomials can distinguish $\\PPP$ from $\\QQQ$. Then, in Section~\\ref{sec:LDLR-alg}, we explore to what extent the LDLR controls whether or not \\emph{any} efficient algorithm can distinguish $\\PPP$ and $\\QQQ$.\n\n\n\\subsection{The LDLR and Thresholding Polynomials}\n\\label{sec:LDLR-poly}\n\nHeuristically, since $\\|L_n^{\\le D}\\|$ is the value of the $L^2$ optimization problem \\eqref{eq:l2-opt-low}, we might expect the behavior of $\\|L_n^{\\le D}\\|$ as $n \\to \\infty$ to dictate whether or not degree-$D$ polynomials can distinguish $\\PPP$ from $\\QQQ$: it should be possible to strongly distinguish (in the sense of Definition~\\ref{def:stat-ind}, i.e., with error probabilities tending to $0$) $\\PPP$ from $\\QQQ$ by \\emph{thresholding} a degree-$D$ polynomial (namely $L_n^{\\le D}$) if and only if $\\|L_n^{\\le D}\\| = \\omega(1)$. 
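As a sanity check on this heuristic, consider a toy univariate pair (an illustrative example of ours, not one of the models above): $\\QQ = \\sN(0,1)$ and $\\PP = \\sN(\\mu,1)$ for a constant $\\mu$. The Hermite expansion of $L(y) = e^{\\mu y - \\mu^2/2}$ has $k$-th coefficient $\\mu^k/\\sqrt{k!}$, so $\\|L^{\\le D}\\|^2 = \\sum_{k=0}^{D} \\mu^{2k}/k!$, which can be tabulated directly:

```python
import math

def ldlr_norm_sq(mu, D):
    # ||L^{<=D}||^2 = sum_{k=0}^{D} mu^(2k) / k!  for the toy pair
    # Q = N(0,1), P = N(mu,1): the k-th Hermite coefficient of
    # L(y) = exp(mu*y - mu^2/2) is mu^k / sqrt(k!).
    return sum(mu ** (2 * k) / math.factorial(k) for k in range(D + 1))

mu = 1.5
norms = [ldlr_norm_sq(mu, D) for D in range(31)]
print(norms[-1], math.exp(mu ** 2))
```

The truncated norm is nondecreasing in $D$ yet bounded, converging to $\\|L\\|^2 = e^{\\mu^2}$, and indeed a single sample cannot strongly distinguish these two distributions, consistent with the heuristic.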
We now discuss to what extent this heuristic is correct.\n\n\\begin{question}\\label{q:thresh-poly-yes}\nIf $\\|L_n^{\\le D}\\| = \\omega(1)$, does this imply that it is possible to strongly distinguish $\\PPP$ and $\\QQQ$ by thresholding a degree-$D$ polynomial?\n\\end{question}\n\n\\noindent We have already mentioned (see Section~\\ref{sec:contig}) a counterexample when $D = \\infty$: there are cases where $\\PPP$ and $\\QQQ$ are not statistically distinguishable, yet $\\|L_n\\| \\to \\infty$ due to a rare ``bad'' event under $\\PP_n$. Examples of this phenomenon are fairly common (e.g., \\cite{BMNN-community,BMVVX-pca,PWBM-pca,PWB-tensor}). However, after truncation to only low-degree components, this issue seems to disappear. For instance, in sparse PCA, $\\|L_n^{\\le D}\\| \\to \\infty$ only occurs when either (i) there actually is an $n^{\\tilde{O}(D)}$-time distinguishing algorithm, or (ii) $D$ is ``unreasonably large,'' in the sense that there is a trivial $n^{t(n)}$-time exhaustive search algorithm and $D \\gg t(n)$ \\cite{subexp-sparse}. Indeed, we do not know any example of a natural problem where $\\|L_n^{\\le D}\\|$ diverges spuriously for a ``reasonable'' value of $D$ (in the above sense), although one can construct unnatural examples by introducing a rare ``bad'' event in $\\PP_n$. Thus, it seems that for natural problems and reasonable growth of $D$, the smoothness of low-degree polynomials regularizes $L_n^{\\le D}$ in such a way that the answer to Question~\\ref{q:thresh-poly-yes} is typically ``yes.'' This convenient feature is perhaps related to the probabilistic phenomenon of \\emph{hypercontractivity}; see Appendix~\\ref{app:hyp} and especially Remark~\\ref{rem:low-dom}.\n\nAnother counterexample to Question~\\ref{q:thresh-poly-yes} is the following. 
Take $\\PPP$ and $\\QQQ$ that are ``easy'' for degree-$D$ polynomials to distinguish, i.e., $\\|L_n^{\\le D}\\| \\to \\infty$ and $\\PPP, \\QQQ$ can be strongly distinguished by thresholding a degree-$D$ polynomial. Define a new sequence of ``diluted'' planted measures $\\PPP^\\prime$ where $\\PP^\\prime_n$ samples from $\\PP_n$ with probability $1/2$, and otherwise samples from $\\QQ_n$. Letting $L'_n = d\\PP'_n/d\\QQ_n$, we have $\\|(L'_n)^{\\le D}\\| \\to \\infty$, yet $\\PPP'$ and $\\QQQ$ cannot be strongly distinguished (even statistically). While this example is perhaps somewhat unnatural, it illustrates that a rigorous positive answer to Question~\\ref{q:thresh-poly-yes} would need to restrict to $\\PPP$ that are ``homogeneous'' in some sense.\n\nThus, while we have seen some artificial counterexamples, the answer to Question~\\ref{q:thresh-poly-yes} seems to typically be ``yes'' for natural high-dimensional problems, so long as $D$ is not unreasonably large. We now turn to the converse question.\n\n\\begin{question}\\label{q:thresh-poly-no}\nIf $\\|L_n^{\\le D}\\| = O(1)$, does this imply that it is impossible to strongly distinguish $\\PPP$ and $\\QQQ$ by thresholding a degree-$D$ polynomial?\n\\end{question}\n\n\\noindent Here, we are able to give a positive answer in a particular formal sense.\nThe following result addresses the contrapositive of Question~\\ref{q:thresh-poly-no}: it shows that distinguishability by thresholding low-degree polynomials implies exponential growth of the norm of the LDLR.\n\n\\begin{theorem}\\label{thm:thresh-poly-hard}\nSuppose $\\QQ$ draws $\\bY \\in \\RR^N$ with entries either i.i.d.\\ $\\sN(0,1)$ or i.i.d.\\ $\\Unif(\\{\\pm 1\\})$. Let $\\PP$ be any measure on $\\RR^N$ that is absolutely continuous with respect to $\\QQ$. 
Let $f: \\RR^N \\to \\RR$ be a polynomial of degree $\\le d$ satisfying\n\\begin{equation}\\label{eq:strong-thresh}\n\\Ex_{\\bY \\sim \\PP}[f(\\bY)] \\ge A \\quad \\text{and} \\quad \\QQ(|f(\\bY)| \\ge B) \\le \\delta\n\\end{equation}\nfor some $A > B > 0$, some $k \\in \\NN$, and some $\\delta \\le \\frac{1}{2} \\cdot 3^{-4kd}$. Then\n$$\\|L^{\\le 2kd}\\| \\ge \\frac{1}{2}\\left(\\frac{A}{B}\\right)^{2k}.$$\n\\end{theorem}\n\n\\noindent We defer the proof to Appendix~\\ref{app:thresh-poly-hard}. To understand what the result shows, imagine, for example, that $A > B$ are both constants, and $k$ grows slowly with $n$ (e.g., $k \\approx \\log n$). Then, if a degree-$d$ polynomial (where $d$ may depend on $n$) can distinguish $\\PPP$ from $\\QQQ$ in the sense of \\eqref{eq:strong-thresh} with $\\delta = \\frac{1}{2} \\cdot 3^{-4kd}$, then $\\|L_n^{\\le 2kd}\\| \\to \\infty$ as $n \\to \\infty$. Note, though, that one weakness of this result is that we require the explicit quantitative bound $\\delta \\leq \\frac{1}{2} \\cdot 3^{-4kd}$, rather than merely $\\delta = o(1)$.\n\nThe proof (see Appendix~\\ref{app:thresh-poly-hard}) is a straightforward application of \\emph{hypercontractivity} (see e.g., \\cite{boolean-book}), a type of result stating that random variables obtained by evaluating low-degree polynomials on weakly-dependent distributions (such as i.i.d.\\ ones) are well-concentrated and otherwise ``reasonable.''\nWe have restricted to the case where $\\QQ$ is i.i.d.\\ Gaussian or Rademacher because hypercontractivity results are most readily available in these cases, but we expect similar results to hold more generally.\n\n\n\\subsection{Algorithmic Implications of the LDLR}\n\\label{sec:LDLR-alg}\n\nHaving discussed the relationship between the LDLR and low-degree polynomials, we now discuss the relationship between low-degree polynomials and the power of \\emph{any} computationally-bounded algorithm.\n\nAny degree-$D$ polynomial has at most $n^D$ monomial 
terms and so can be evaluated in time $n^{O(D)}$ (assuming that the individual coefficients are easy to compute). However, certain degree-$D$ polynomials can of course be computed faster, e.g., if the polynomial has few nonzero monomials or has special structure allowing it to be computed via a spectral method (as in the \\emph{color coding} trick \\cite{color-coding} used by~\\cite{HS-bayesian}).\nDespite such special cases, it appears that for average-case high-dimensional hypothesis testing problems, degree-$D$ polynomials are typically as powerful as general $n^{\\tilde{\\Theta}(D)}$-time algorithms; this informal conjecture appears as Hypothesis~2.1.5 in \\cite{sam-thesis}, building on the work of \\cite{pcal,HS-bayesian,sos-hidden} (see also our previous discussion in Section~\\ref{sec:ldlr-conj}). We will now explain the nuances and caveats of this conjecture, and give evidence (both formal and heuristic) in its favor.\n\n\\subsubsection{Robustness}\n\\label{sec:robust}\n\nAn important counterexample that we must be careful about is XOR-SAT. In the random 3-XOR-SAT problem, there are $n$ $\\{\\pm 1 \\}$-valued variables $x_1,\\ldots,x_n$ and we are given a formula consisting of $m$ random constraints of the form $x_{i_\\ell} x_{j_\\ell} x_{k_\\ell} = b_\\ell$ for $\\ell \\in [m]$, with $b_\\ell \\in \\{\\pm 1\\}$. The goal is to determine whether there is an assignment $x \\in \\{\\pm 1\\}^n$ that satisfies all the constraints. Regardless of $m$, this problem can be solved in polynomial time using Gaussian elimination over the finite field $\\mathbb{F}_2$. However, when $n \\ll m \\ll n^{3/2}$, the low-degree method nevertheless predicts that the problem should be computationally hard, i.e., it is hard to distinguish between a random formula (which is unsatisfiable with high probability) and a formula with a planted assignment. 
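To see concretely why Gaussian elimination sidesteps these predictions, note that under the substitution $x_i = (-1)^{y_i}$, $b_\\ell = (-1)^{c_\\ell}$, each constraint becomes the linear equation $y_{i_\\ell} \\oplus y_{j_\\ell} \\oplus y_{k_\\ell} = c_\\ell$ over $\\mathbb{F}_2$. The following sketch (a minimal implementation of ours, with rows stored as bitmasks) decides satisfiability in polynomial time regardless of $m$:

```python
def xorsat_satisfiable(constraints, n):
    # Decide consistency of XOR constraints y_{i1} ^ ... ^ y_{ir} = c over
    # GF(2) by incremental Gaussian elimination.  Each constraint is a pair
    # (list_of_variable_indices, c) with c in {0, 1}.
    pivots = {}                          # highest set bit -> reduced row (mask, c)
    for idxs, c in constraints:
        mask = 0
        for i in idxs:
            mask ^= 1 << i               # repeated indices cancel mod 2
        while mask:
            b = mask.bit_length() - 1
            if b in pivots:
                pmask, pc = pivots[b]
                mask ^= pmask            # eliminate the leading variable
                c ^= pc
            else:
                pivots[b] = (mask, c)    # new pivot row
                break
        if mask == 0 and c == 1:
            return False                 # derived 0 = 1: inconsistent
    return True
```

The brittleness discussed next is visible here: the cancellation over $\\mathbb{F}_2$ is exact, and nothing in this procedure survives if we only ask for an assignment satisfying a $1-\\delta$ fraction of the constraints.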
This pitfall is not specific to the low-degree method: sum-of-squares lower bounds, statistical query lower bounds, and the cavity method from statistical physics also incorrectly suggest the same (this is discussed in \\cite{sq-parity,stat-phys-survey,sos-notes}).\n\nThe above discrepancy can be addressed (see, e.g., Lecture~3.2 of \\cite{sos-notes}) by noting that Gaussian elimination is very brittle, in the sense that it no longer works to search for an assignment satisfying only a $1-\\delta$ fraction of the constraints (as in this case it does not seem possible to leverage the problem's algebraic structure over $\\mathbb{F}_2$). Another example of a brittle algorithm is the algorithm of \\cite{reg-LLL} for linear regression, which uses Lenstra-Lenstra-Lov\\'asz lattice basis reduction \\cite{LLL} and only tolerates an exponentially-small level of noise. Thus, while there sometimes exist efficient algorithms that are ``high-degree'', these tend not to be robust to even a tiny amount of noise. As with SoS lower bounds, we expect that the low-degree method correctly captures the limits of \\emph{robust} hypothesis testing \\cite{sos-hidden} for high-dimensional problems. (Here, ``robust'' refers to the ability to handle a small amount of noise, and should not be confused with the specific notion of \\emph{robust inference} \\cite{sos-hidden} or with other notions of robustness that allow adversarial corruptions \\cite{FK-semirandom,robust-est}.)\n\n\\subsubsection{Connection to Sum-of-Squares}\n\\label{sec:sos}\n\nThe sum-of-squares (SoS) hierarchy \\cite{parrilo-thesis,lasserre} is a hierarchy of increasingly powerful semidefinite programming (SDP) relaxations for general polynomial optimization problems. Higher levels of the hierarchy produce larger SDPs and thus require more time to solve: level $d$ typically requires time $n^{O(d)}$. SoS lower bounds show that certain levels of the hierarchy fail to solve a given problem. 
As SoS seems to be at least as powerful as all known algorithms for many problems, SoS lower bounds are often thought of as the ``gold standard'' of formal evidence for computational hardness of average-case problems. For instance, if any constant level $d$ of SoS fails to solve a problem, this is strong evidence that no polynomial-time algorithm exists to solve the same problem (modulo the robustness issue discussed above). \n\nIn order to prove SoS lower bounds, one needs to construct a valid primal certificate (also called a \\emph{pseudo-expectation}) for the SoS SDP. The \\emph{pseudo-calibration} approach \\cite{pcal} provides a strategy for systematically constructing a pseudo-expectation; however, showing that the resulting object is valid (in particular, showing that a certain associated matrix is positive semidefinite) often requires substantial work. As a result, proving lower bounds against constant-level SoS programs is often very technically challenging (as in~\\cite{pcal,sos-hidden}). We refer the reader to~\\cite{sos-survey} for a survey of SoS and pseudo-calibration in the context of high-dimensional inference.\n\nOn the other hand, it was observed by the authors of \\cite{pcal,sos-hidden} that the bottleneck for the success of the pseudo-calibration approach seems to typically be a simple condition, none other than the boundedness of the norm of the LDLR (see Conjecture~3.5 of \\cite{sos-survey} or Section~4.3 of \\cite{sam-thesis}).\\footnote{More specifically, $(\\|L_n^{\\le D}\\|^2 - 1)$ is the variance of a certain pseudo-expectation value generated by pseudo-calibration, whose actual value in a valid pseudo-expectation must be exactly 1. It appears to be impossible to ``correct'' this part of the pseudo-expectation if the variance is diverging with $n$.} Through a series of works \\cite{pcal,HS-bayesian,sos-hidden,sam-thesis}, the low-degree method emerged from investigating this simpler condition in its own right. 
It was shown in \\cite{HS-bayesian} that SoS can be used to achieve sharp computational thresholds (such as the Kesten--Stigum threshold for community detection), and that the success of the associated method also hinges on the boundedness of the norm of the LDLR. So, historically speaking, the low-degree method can be thought of as a ``lite'' version of SoS lower bounds that is believed to capture the essence of what makes SoS succeed or fail (see~\\cite{HS-bayesian,sos-hidden,sos-survey,sam-thesis}).\n\nA key advantage of the low-degree method over traditional SoS lower bounds is that it greatly simplifies the technical work required, allowing sharper results to be proved with greater ease. Moreover, the low-degree method is arguably more natural in the sense that it is not specific to any particular SDP formulation and instead seems to capture the essence of what makes problems computationally easy or hard. On the other hand, some would perhaps argue that SoS lower bounds constitute stronger evidence for hardness than low-degree lower bounds (although we do not know any average-case problems for which they give different predictions).\n\nWe refer the reader to~\\cite{sam-thesis} for more on the relation between SoS and the low-degree method, including evidence for why the two methods are believed to predict the same results.\n\n\n\n\\subsubsection{Connection to Spectral Methods}\n\\label{sec:spectral}\n\nFor high-dimensional hypothesis testing problems, a popular class of algorithms are the \\emph{spectral methods}, algorithms that build a matrix $\\bM$ using the data and then threshold its largest eigenvalue. (There are also spectral methods for estimation problems, usually extracting an estimate of the signal from the leading eigenvector of $\\bM$.) 
Often, spectral methods match the best\\footnote{Here, ``best'' is in the sense of strongly distinguishing $\\PP_n$ and $\\QQ_n$ throughout the largest possible regime of model parameters.} performance among all known polynomial-time algorithms. Some examples include the non-backtracking and Bethe Hessian spectral methods for the stochastic block model \\cite{spectral-redemption,massoulie,nb-spectrum,bethe-hessian}, the covariance thresholding method for sparse PCA \\cite{DM-sparse}, and the tensor unfolding method for tensor PCA \\cite{RM-tensor,HSS-tensor}. As demonstrated in \\cite{HSS-tensor,sos-fast}, it is often possible to design spectral methods that achieve the same performance as SoS; in fact, some formal evidence indicates that low-degree spectral methods (where each matrix entry is a constant-degree polynomial of the data) are as powerful as any constant-degree SoS relaxation \\cite{sos-hidden}\\footnote{In \\cite{sos-hidden}, it is shown that for a fairly general class of average-case hypothesis testing problems, if SoS succeeds in some range of parameters then there is a low-degree spectral method whose maximum \\emph{positive} eigenvalue succeeds (in a somewhat weaker range of parameters). However, the resulting matrix could \\emph{a priori} have an arbitrarily large (in magnitude) negative eigenvalue, which would prevent the spectral method from running in polynomial time. For this same reason, it seems difficult to establish a formal connection between SoS and the LDLR via spectral methods.}.\n\nAs a result, it is interesting to try to prove lower bounds against the class of spectral methods. Roughly speaking, the largest eigenvalue in absolute value of a polynomial-size matrix $\\bM$ can be computed using $O(\\log n)$ rounds of power iteration, and thus can be thought of as an $O(\\log n)$-degree polynomial; more specifically, the associated polynomial is $\\Tr(\\bM^{2k})$ where $k \\sim \\log(n)$. 
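This heuristic is easy to check numerically (a sketch of ours, assuming \\texttt{numpy}): since $\\Tr(\\bM^{2k}) = \\sum_i \\lambda_i^{2k}$, its $(2k)$-th root is sandwiched between $\\|\\bM\\|$ and $L^{1/2k}\\|\\bM\\|$, so taking $k$ of order $\\log L$ already recovers the operator norm up to a constant factor:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 200                                    # matrix dimension
A = rng.standard_normal((L, L))
M = (A + A.T) / np.sqrt(2 * L)             # a random symmetric (Wigner-like) matrix

k = int(np.ceil(np.log(L)))                # k ~ log L
# Tr(M^{2k}) is a polynomial of degree 2k in the entries of M.
trace_proxy = np.trace(np.linalg.matrix_power(M, 2 * k)) ** (1 / (2 * k))
op_norm = np.linalg.norm(M, 2)             # largest |eigenvalue|
print(trace_proxy, op_norm)
```

With $k \\ge \\log L$ the sandwiching factor $L^{1/2k}$ is at most $e^{1/2}$, so the degree-$2k$ trace proxy pins down $\\|\\bM\\|$ up to a constant.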
The following result makes this precise, giving a formal connection between the low-degree method and the power of spectral methods. The proof is given in Appendix~\\ref{app:spectral-hard} (and is similar to that of Theorem~\\ref{thm:thresh-poly-hard}).\n\n\\begin{theorem}\\label{thm:spectral-hard}\nSuppose $\\QQ$ draws $\\bY \\in \\RR^N$ with entries either i.i.d.\\ $\\sN(0,1)$ or i.i.d.\\ $\\Unif(\\{\\pm 1\\})$. Let $\\PP$ be any measure on $\\RR^N$ that is absolutely continuous with respect to $\\QQ$. Let $\\bM = \\bM(\\bY)$ be a real symmetric $L \\times L$ matrix, each of whose entries is a polynomial in $\\bY$ of degree $\\le d$. Suppose\n\\begin{equation}\\label{eq:strong-thresh-spec}\n\\Ex_{\\bY \\sim \\PP}\\|\\bM\\| \\ge A \\quad \\text{and} \\quad \\QQ(\\|\\bM\\| \\ge B) \\le \\delta\n\\end{equation}\n(where $\\|\\cdot\\|$ denotes matrix operator norm) for some $A > B > 0$, some $k \\in \\NN$, and some $\\delta \\le \\frac{1}{2} \\cdot 3^{-4kd}$. Then\n$$\\|L^{\\le 2kd}\\| \\ge \\frac{1}{2L}\\left(\\frac{A}{B}\\right)^{2k}.$$\n\\end{theorem}\n\\noindent For example, suppose we are interested in polynomial-time spectral methods, in which case we should consider $L = \\mathrm{poly}(n)$ and $d = O(1)$. If there exists a spectral method with these parameters that distinguishes $\\PPP$ from $\\QQQ$ in the sense of \\eqref{eq:strong-thresh-spec} for some constants $A > B$, and with $\\delta \\to 0$ faster than any inverse polynomial (in $n$), then there exists a choice of $k = O(\\log n)$ such that $\\|L^{\\le O(\\log n)}\\| = \\omega(1)$. And, by contrapositive, if we could show that $\\|L^{\\le D}\\| = O(1)$ for some $D = \\omega(\\log n)$, that would imply that there is no spectral method with the above properties. This justifies the choice of logarithmic degree in Conjecture~\\ref{conj:low-deg-informal}. 
Similarly to Theorem~\\ref{thm:thresh-poly-hard}, one weakness of Theorem~\\ref{thm:spectral-hard} is that we can only rule out spectral methods whose failure probability is smaller than any inverse polynomial, instead of merely $o(1)$.\n\n\\begin{remark}\nAbove, we have argued that polynomial-time spectral methods correspond to polynomials of degree roughly $\\log(n)$. What if we are instead interested in subexponential runtime $\\exp(n^\\delta)$ for some constant $\\delta \\in (0,1)$? One class of spectral method computable with this runtime is that where the dimension is $L \\approx \\exp(n^\\delta)$ and the degree of each entry is $d \\approx n^\\delta$ (such spectral methods often arise based on SoS \\cite{strongly-refuting,BGG,BGL}). To rule out such a spectral method using Theorem~\\ref{thm:spectral-hard}, we would need to take $k \\approx \\log(L) \\approx n^\\delta$ and would need to show $\\|L^{\\le D}\\| = O(1)$ for $D \\approx n^{2\\delta}$. However, Conjecture~\\ref{conj:low-deg-informal} postulates that time-$\\exp(n^\\delta)$ algorithms should instead correspond to degree-$n^\\delta$ polynomials, and this correspondence indeed appears to be the correct one based on the examples of tensor PCA (see Section~\\ref{sec:spiked-tensor}) and sparse PCA (see \\cite{subexp-sparse}).\n\nAlthough this seems at first to be a substantial discrepancy, there is evidence that there are actually spectral methods of dimension $L \\approx \\exp(n^\\delta)$ and \\emph{constant} degree $d = O(1)$ that achieve optimal performance among $\\exp(n^\\delta)$-time algorithms. Such a spectral method corresponds to a degree-$n^\\delta$ polynomial, as expected. These types of spectral methods have been shown to exist for tensor PCA \\cite{kikuchi}.\n\\end{remark}\n\n\n\n\\subsubsection{Formal Conjecture}\n\\label{sec:discuss-conj}\n\nWe next discuss the precise conjecture that Hopkins \\cite{sam-thesis} offers on the algorithmic implications of the low-degree method. 
Informally, the conjecture is that for ``sufficiently nice'' $\\PPP$ and $\\QQQ$, if $\\|L_n^{\\le D}\\| = O(1)$ for some $D \\ge (\\log n)^{1+\\varepsilon}$, then there is no polynomial-time algorithm that strongly distinguishes $\\PPP$ and $\\QQQ$. We will not state the full conjecture here (see Conjecture~2.2.4 in \\cite{sam-thesis}) but we will briefly discuss some of the details that we have not mentioned yet.\n\nLet us first comment on the meaning of ``sufficiently nice'' distributions. Roughly speaking, this means that: \n\\begin{enumerate}\n \\item $\\QQ_n$ is a product distribution,\n \\item $\\PP_n$ is sufficiently symmetric with respect to permutations of its coordinates, and\n \\item $\\PP_n$ is then perturbed by a small amount of additional noise.\n\\end{enumerate}\nConditions (1) and (2) or minor variants thereof are fairly standard in high-dimensional inference problems. The reason for including (3) is to rule out non-robust algorithms such as Gaussian elimination (see Section~\\ref{sec:robust}).\n\nOne difference between the conjecture of \\cite{sam-thesis} and the conjecture discussed in these notes is that \\cite{sam-thesis} considers the notion of \\emph{coordinate degree} rather than \\emph{polynomial degree}. A polynomial has coordinate degree $\\le D$ if no monomial involves more than $D$ variables; however, each individual variable can appear with arbitrarily-high degree in a monomial.\\footnote{Indeed, coordinate degree need not be phrased in terms of polynomials, and one may equivalently consider the linear subspace of $L^2(\\QQ_n)$ of functions that is spanned by functions of at most $D$ variables at a time.} In \\cite{sam-thesis}, the low-degree likelihood ratio is defined as the projection of $L_n$ onto the space of polynomials of coordinate degree~$\\le D$. The reason for this is to capture, e.g., algorithms that preprocess the data by applying a complicated high-degree function entrywise. 
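To make the distinction concrete (a toy illustration of ours, with monomials encoded by exponent vectors): the monomial $y_1^7 y_2$ has polynomial degree $8$ but coordinate degree $2$, since only two variables appear.

```python
def polynomial_degree(monomials):
    # total degree: max over monomials of the sum of the exponents
    return max(sum(exps) for exps in monomials)

def coordinate_degree(monomials):
    # coordinate degree: max over monomials of the number of distinct
    # variables involved (each may appear with arbitrarily high power)
    return max(sum(1 for e in exps if e > 0) for exps in monomials)

poly = [(7, 1, 0), (0, 2, 0)]   # y1^7 * y2  +  y2^2  (coefficients omitted)
print(polynomial_degree(poly), coordinate_degree(poly))  # 8 and 2
```

Every polynomial of total degree $\\le D$ also has coordinate degree $\\le D$, but not conversely.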
However, we are not aware of any natural problem in which it is important to work with coordinate degree instead of polynomial degree. While working with coordinate degree gives lower bounds that are formally stronger, we work with polynomial degree throughout these notes because it simplifies many of the computations.\n\n\n\\subsubsection{Empirical Evidence and Refined Conjecture}\n\nPerhaps the strongest form of evidence that we have in favor of the low-degree method is simply that it has been carried out on many high-dimensional inference problems and seems to always give the correct predictions, coinciding with widely-believed conjectures. These problems include planted clique \\cite{sam-thesis} (implicit in \\cite{pcal}), community detection in the stochastic block model \\cite{HS-bayesian,sam-thesis}, the spiked tensor model \\cite{sos-hidden,sam-thesis}, the spiked Wishart model \\cite{BKW-sk}, and sparse PCA \\cite{subexp-sparse}. In these notes we have also carried out low-degree calculations for the spiked Wigner model and spiked tensor model (see Section~\\ref{sec:examples}). Some of the early results \\cite{HS-bayesian,sos-hidden} showed only $\\|L_n^{\\le D}\\| = n^{o(1)}$ as evidence for hardness, which was later improved to $O(1)$ \\cite{sam-thesis}. Some of the above results \\cite{sos-hidden,sam-thesis} use coordinate degree instead of degree (as we discussed in Section~\\ref{sec:discuss-conj}). 
Throughout the above examples, the low-degree method has proven to be versatile in that it can predict both sharp threshold behavior as well as precise smooth tradeoffs between subexponential runtime and statistical power (as illustrated in the two parts of Section~\\ref{sec:examples}).\n\nAs discussed earlier, there are various reasons to believe that if $\\|L_n^{\\le D}\\| = O(1)$ for some $D = \\omega(\\log n)$ then there is no polynomial-time distinguishing algorithm; for instance, this allows us to rule out a general class of spectral methods (see Theorem~\\ref{thm:spectral-hard}). However, we have observed that in numerous examples, the LDLR actually has the following more precise behavior that does not involve the extra factor of $\\log(n)$.\n\\begin{conjecture}[Informal]\nLet $\\PPP$ and $\\QQQ$ be ``sufficiently nice.'' If there exists a polynomial-time algorithm to strongly distinguish $\\PPP$ and $\\QQQ$ then $\\|L_n^{\\le D}\\| = \\omega(1)$ for any $D = \\omega(1)$.\n\\end{conjecture}\n\n\\noindent In other words, if $\\|L_n^{\\le D}\\| = O(1)$ for some $D = \\omega(1)$, this already constitutes evidence that there is no polynomial-time algorithm.\nThe above seems to be a cleaner version of the main low-degree conjecture that remains correct for many problems of practical interest.\n\n\\subsubsection{Extensions}\n\\label{sec:extensions}\n\nWhile we have focused on the setting of hypothesis testing throughout these notes, we remark that low-degree arguments have also shed light on other types of problems such as \\emph{estimation} (or \\emph{recovery}) and \\emph{certification}.\n\nFirst, as we have mentioned before, non-trivial estimation\\footnote{Non-trivial estimation of a signal $\\bx \\in \\RR^n$ means having an estimator $\\hat\\bx$ achieving $|\\langle \\hat \\bx, \\bx \\rangle|/(\\|\\hat \\bx\\| \\cdot \\|\\bx\\|) \\ge \\varepsilon$ with high probability, for some constant $\\varepsilon > 0$.} typically seems to be precisely as hard as 
strong distinguishing (see Definition~\\ref{def:stat-ind}), in the sense that the two problems share the same $\\lambda_{\\mathsf{stat}}$ and $\\lambda_{\\mathsf{comp}}$. For example, the statistical thresholds for testing and recovery are known to coincide for problems such as the two-groups stochastic block model \\cite{MNS-rec,massoulie,MNS-proof} and the spiked Wigner matrix model (for a large class of spike priors) \\cite{finite-size,detection-wig}. Also, for any additive Gaussian noise model, any lower bound against hypothesis testing using the second moment method (Lemma~\\ref{lem:second-moment-contiguity}) or a conditional second moment method also implies a lower bound against recovery \\cite{BMVVX-pca}. More broadly, we have discussed (see Section~\\ref{sec:spectral}) how suitable spectral methods typically give optimal algorithms for high-dimensional problems; such methods typically succeed at testing and recovery in the same regime of parameters, because whenever the leading eigenvalue undergoes a phase transition, the leading eigenvector will usually do so as well (see Theorem~\\ref{thm:wig-bbp} for a simple example). Thus, low-degree evidence that hypothesis testing is hard also constitutes evidence that non-trivial recovery is hard, at least heuristically. Note, however, that there is no formal connection (in either direction) between testing and recovery (see \\cite{BMVVX-pca}), and there are some situations in which the testing and recovery thresholds differ (e.g., \\cite{planting-trees}).\n\nIn a different approach, Hopkins and Steurer \\cite{HS-bayesian} use a low-degree argument to study the recovery problem more directly. In the setting of community detection in the stochastic block model, they examine whether there is a low-degree polynomial that can non-trivially estimate whether two given network nodes are in the same community. 
They show that such a polynomial exists only when the parameters of the model lie above the problem's widely-conjectured computational threshold, the Kesten--Stigum threshold. This constitutes direct low-degree evidence that recovery is computationally hard below the Kesten--Stigum threshold.\n\nA related (and more refined) question is that of determining the optimal estimation error (i.e., the best possible correlation between the estimator and the truth) for any given signal-to-noise parameter $\\lambda$. Methods such as \\emph{approximate message passing} can often answer this question very precisely, both statistically and computationally (see, e.g., \\cite{AMP,LKZ-mmse,DAM,BDMKLZ-spiked}, or \\cite{stat-phys-survey,BPW-phys-notes,leo-survey} for a survey). One interesting question is whether one can recover these results using a variant of the low-degree method.\n\nAnother type of statistical task is \\emph{certification}. Suppose that $\\bY \\sim \\QQ_n$ has some property $\\sP$ with high probability. We say an algorithm \\emph{certifies} the property $\\sP$ if (i) the algorithm outputs ``yes'' with high probability on $\\bY \\sim \\QQ_n$, and (ii) if $\\bY$ does not have property $\\sP$ then the algorithm \\emph{always} outputs ``no.'' In other words, when the algorithm outputs ``yes'' (which is usually the case), this constitutes a proof that $\\bY$ indeed has property $\\sP$. Convex relaxations (including SoS) are a common technique for certification. In \\cite{BKW-sk}, the low-degree method is used to argue that certification is computationally hard for certain structured PCA problems. The idea is to construct a \\emph{quiet planting} $\\PP_n$, which is a distribution for which (i) $\\bY \\sim \\PP_n$ never has property $\\sP$, and (ii) the low-degree method indicates that it is computationally hard to strongly distinguish $\\PP_n$ and $\\QQ_n$. 
In other words, this gives a reduction from a hypothesis testing problem to a certification problem, since any certification algorithm can be used to distinguish $\\PP_n$ and $\\QQ_n$. (Another example of this type of reduction, albeit relying on a different heuristic for computational hardness, is \\cite{hard-rip}.)\n\n\n\\section*{Acknowledgments}\n\\addcontentsline{toc}{section}{Acknowledgments}\nWe thank the participants of a working group on the subject of these notes, organized by the authors at the Courant Institute of Mathematical Sciences during the spring of 2019.\nWe also thank Samuel B.\\ Hopkins, Philippe Rigollet, and David Steurer for helpful discussions. \n\n\n\\addcontentsline{toc}{section}{References}\n\\bibliographystyle{alpha}\n\n\\section{Introduction and motivation}\n\nThe work of this current paper is motivated by a question posed in a seminal paper by Mahler \\cite{Mahler_1984}; namely, how well can we approximate points in the middle-third Cantor set by:\n\\begin{enumerate}[(i)]\n\\itemsep-2pt\n\\item{rational numbers contained in the Cantor set, or} \\label{Mahler part 1}\n\\item{rational numbers not in the Cantor set?}\n\\end{enumerate}\n\nThe first contribution to this question was arguably made by Weiss \\cite{Weiss_2001}, who showed that almost no point in the middle-third Cantor set is \\emph{very well approximable} with respect to the natural probability measure on the middle-third Cantor set. Since this initial contribution, numerous authors have contributed to answering these questions, approaching them from many different perspectives. 
For example, Levesley, Salp, and Velani \\cite{LSV_2007} considered triadic approximation in the middle-third Cantor set, different subsets of the first named author, Baker, Chow, and Yu \\cite{ABCY, ACY, Baker4} studied dyadic approximation in the middle-third Cantor set, Kristensen \\cite{Kristensen_2006} considered approximation of points in the middle-third Cantor set by algebraic numbers, and Tan, Wang and Wu \\cite{TWW_2021} have recently studied part (\\ref{Mahler part 1}) by introducing a new notion of the ``height'' of a rational number. There has also been considerable effort invested in trying to generalise some of the above results to more general self-similar sets in $\\R$ and also to various fractal sets in higher dimensions. See, for example, \\cite{Allen-Barany, Baker1, Baker2, Baker3, Baker-Troscheit, BFR_2011, FS_2014, FS_2015, Khalil-Luethi, KLW_2005, PV_2005, WWX_Cantor, Yu_self-similar_2021} and references therein. The results in this paper can be thought of as a contribution to answering a natural $d$-dimensional weighted variation of part (\\ref{Mahler part 1}) of Mahler's question. In particular, we will be interested in weighted approximation in $d$-dimensional ``missing digit'' sets. \n\n Before we introduce the general framework we will consider here, we provide a very brief overview of some of the classical results on weighted Diophantine approximation in the ``usual'' Euclidean setting which provide further motivation for the current work. Fix $d \\in \\N$ and let $\\Psi=(\\psi_{1}, \\dots , \\psi_{d})$ be a $d$-tuple of approximating functions $\\psi_{i}:\\N \\to [0, \\infty)$ with $\\psi_{i}(r) \\to 0$ as $r \\to \\infty$ for each $1 \\leq i \\leq d$. 
The set of weighted simultaneously $\\Psi$-well-approximable points in $\\R^d$ is defined as\n\\begin{equation*}\nW_{d}(\\Psi):= \\left\{ \\mathbf{x} = (x_{1}, \\dots , x_{d}) \\in [0,1]^{d}: \\left|x_{i}-\\frac{p_{i}}{q}\\right| < \\psi_{i}(q)\\, ,\\; 1 \\leq i \\leq d, \\text{ for i.m.} \\; (p_1, \\dots, p_d, q) \\in \\Z^{d} \\times \\N \\right\\},\n\\end{equation*}\nwhere i.m. denotes infinitely many. Note that the special case where each approximating function is the same, that is $\\Psi=(\\psi, \\dots , \\psi)$, is generally the more intensively studied setting. The case where each approximating function is potentially different, usually referred to as \\emph{weighted simultaneous approximation}, is a natural generalisation of this. Simultaneous approximation (i.e. when the approximating function is the same in each coordinate axis) can generally be seen as a metric generalisation of Dirichlet's Theorem, whereas weighted simultaneous approximation is a metric generalisation of Minkowski's Theorem. Weighted simultaneous approximation has attracted interest in the past few decades due to work of Schmidt and natural connections to Littlewood's Conjecture, see for example \\cite{BRV_2016, BPV_2011, An_2013, An_2016, Schmidt_1983}. \n \nMotivated by classical works due to the likes of Khintchine \\cite{Khintchine_24, Khintchine_25} and Jarn\\'{i}k \\cite{Jarnik_31} which tell us, respectively, about the Lebesgue measure and Hausdorff measures of the sets of classical simultaneously $\\Psi$-well-approximable points (i.e. when $\\Psi=(\\psi, \\dots, \\psi)$), one may naturally also wonder about the ``size'' of sets of weighted simultaneously $\\Psi$-well-approximable points in terms of Lebesgue measure, Hausdorff dimension, and Hausdorff measures. 
Khintchine \\cite{Weighted_Khintchine} showed that if $\\psi: \\N \\to [0,\\infty)$ and $\\Psi(q)=(\\psi(q)^{\\tau_1}, \\dots, \\psi(q)^{\\tau_d})$ for some $\\boldsymbol{\\tau} = (\\tau_1,\\dots,\\tau_d) \\in (0,1)^d$ with $\\tau_1+\\tau_2+\\dots+\\tau_d=1$, then\n\\begin{equation*}\n\\lambda_d(W_d(\\Psi))=\\begin{cases}\n\\quad 0 \\quad &\\text{ if } \\quad \\sum_{q=1}^{\\infty} q^d \\psi(q) < \\infty, \\\\[2ex]\n\\quad 1 \\quad &\\text{ if } \\quad \\sum_{q=1}^{\\infty} q^d \\psi(q) = \\infty, \\text{ and $q^d\\psi(q)$ is monotonic}.\n\\end{cases}\n\\end{equation*}\nThroughout we use $\\lambda_d(X)$ to denote the $d$-dimensional Lebesgue measure of a set \\mbox{$X \\subset \\R^d$}. For more general approximating functions $\\Psi(q)=(\\psi_1(q), \\dots, \\psi_d(q))$, with $\\prod_{i=1}^{d}\\psi_i(q)$ monotonically decreasing and $\\psi_{i}(q)<q^{-1}$ for each $1 \\leq i \\leq d$, analogous zero-one statements are also known. Regarding Hausdorff dimension, for approximating functions of the shape\n\\[\\Psi(q)=\\left(q^{-(1+t_{1})}, \\dots, q^{-(1+t_{d})}\\right) \\quad \\text{with} \\quad \\mathbf{t}=(t_{1}, \\dots, t_{d}) \\in \\R^{d}_{\\geq 0},\\] \nRynne \\cite{Rynne_1998} proved that if $\\sum_{i=1}^{d}t_{i} \\geq 1$, then \n\\begin{equation*}\n\\dimh W_{d}(\\Psi)=\\min_{1 \\leq k \\leq d} \\left\\{ \\frac{1}{t_{k}+1} \\left( d+1+\\sum_{i:t_{k} \\geq t_{i}}(t_{k}-t_{i})\\right) \\right\\}.\n\\end{equation*}\n Throughout, we write $\\dimh{X}$ to denote the \\emph{Hausdorff dimension} of a set $X \\subset \\R^d$; we refer the reader to \\cite{Falconer} for definitions and properties of Hausdorff dimension and Hausdorff measures. Rynne's result has recently been extended to a more general class of approximating functions by Wang and Wu \\cite[Theorem 10.2]{WW2019}.\n\nIn recent years, there has been rapidly growing interest in whether similar statements can be proved when we intersect $W_d(\\Psi)$ with natural subsets of $[0,1]^{d}$, such as submanifolds or fractals. 
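Rynne's dimension formula is a minimum over finitely many candidate exponents and can be evaluated mechanically. The following short numerical sketch (the function name and the sample weights are our own choices, not taken from the cited works) computes it:

```python
import math

def rynne_dimension(t):
    """Evaluate Rynne's formula for dim_H W_d(Psi) when
    Psi(q) = (q^-(1+t_1), ..., q^-(1+t_d)) and t_1 + ... + t_d >= 1."""
    d = len(t)
    assert sum(t) >= 1, "the formula requires t_1 + ... + t_d >= 1"
    candidates = []
    for k in range(d):
        # sum of (t_k - t_i) over the directions i with t_k >= t_i
        surplus = sum(t[k] - ti for ti in t if t[k] >= ti)
        candidates.append((d + 1 + surplus) / (t[k] + 1))
    return min(candidates)

# d = 2 with weights t = (1, 0): the minimum is attained in the first direction.
print(rynne_dimension((1.0, 0.0)))  # -> 2.0
```

For equal weights $t_1 = \dots = t_d$ the surplus term vanishes and every candidate reduces to the classical simultaneous value $(d+1)/(1+t_1)$, which the sketch reproduces.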
The study of such questions has been further incentivised by many remarkable works of recent decades, such as \\cite{KLW_2005, KM_1998, VV_2006}, and applications to other areas, such as wireless communication theory \\cite{ABLVZ_2016}.\n\n\\section{$d$-dimensional missing digit sets and main results}\n\nIn this paper we study weighted approximation in $d$-dimensional missing digit sets, which are natural extensions of classical missing digit sets (i.e. generalised Cantor sets) in $\\R$ to higher dimensions. A very natural class of higher dimensional missing digit sets included within our framework is the class of \\emph{four corner Cantor sets} (or \\emph{Cantor dust}) in $\\R^2$ with contraction ratio $\\frac{1}{n}$ for integers $n \\geq 3$.\n\nThroughout we consider $\\R^d$ equipped with the supremum norm, which we denote by $\\supn{\\cdot}$. For subsets $X,Y \\subset \\R^{d}$ we define $\\mathrm{diam}(X) = \\sup\\{\\|u-v\\|:u,v \\in X\\}$ and $\\operatorname{dist}(X,Y)= \\inf\\{\\|x-y\\|: x \\in X, y \\in Y\\}$. We define \\emph{higher-dimensional missing digit sets} via iterated function systems as follows. Let $b \\in \\N$ be such that $b \\geq 3$ and let $J_1,\\dots,J_d$ be proper subsets of $\\set{0,1,\\dots,b-1}$ such that for each $1 \\leq i \\leq d$, we have\n\\[N_i:=\\#J_i\\geq 2.\\]\nSuppose $J_i=\\set{a^{(i)}_1,\\dots,a^{(i)}_{N_i}}$. For each $1 \\leq i \\leq d$, we define the iterated function system \n\\[\\Phi^i=\\left\\{f_j:[0,1]\\to[0,1]\\right\\}_{j=1}^{N_i} \\quad \\text{where} \\quad f_j(x)=\\frac{x+a^{(i)}_j}{b}.\\]\nLet $K_i$ be the attractor of $\\Phi^i$; that is, $K_i \\subset \\R$ is the unique non-empty compact set which satisfies\n\\[K_i=\\bigcup_{j=1}^{N_i}{f_j(K_i)}.\\]\n We know that such a set exists due to work of Hutchinson \\cite{Hutchinson}. Equivalently, $K_i$ is the set of $x \\in [0,1]$ for which there exists a base $b$ expansion of $x$ consisting only of digits from $J_i$. 
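To make the one-dimensional construction concrete, the following numerical sketch (function and variable names are our own) iterates the maps $f_j(x)=(x+a_j)/b$ for the middle-third Cantor set $K_3(\{0,2\})$ and recovers the dimension $\log N_i/\log b$ by counting the level-$n$ basic intervals:

```python
import math
from fractions import Fraction

def level_n_left_endpoints(b, J, n):
    """Left endpoints of the level-n basic intervals of K_b(J),
    obtained by iterating the IFS maps f_a(x) = (x + a) / b."""
    pts = [Fraction(0)]
    for _ in range(n):
        pts = [(p + a) / b for p in pts for a in J]
    return sorted(set(pts))

b, J, n = 3, (0, 2), 5                    # middle-third Cantor set K_3({0,2})
pts = level_n_left_endpoints(b, J, n)
count = len(pts)                          # N_i^n = 2^5 = 32 basic intervals
dim_estimate = math.log(count) / math.log(b**n)
gamma_i = math.log(len(J)) / math.log(b)  # log 2 / log 3 ~ 0.6309
print(count, dim_estimate, gamma_i)
```

Since each level multiplies the interval count by $N_i$ while shrinking lengths by a factor $1/b$, the estimate $\log(\text{count})/\log(b^n)$ agrees with $\log N_i/\log b$ at every level.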
In view of this, we will also use the notation $K_{b}(J_i)$ to denote this set. For example, in this notation, the classical middle-third Cantor set is precisely the set $K_3(\\{0,2\\})$. We call the sets $K_b(J_i)$ \\emph{missing digit sets} since they consist of numbers with base-$b$ expansions missing specified digits. Note that, for each $1 \\leq i \\leq d$, the Hausdorff dimension of $K_i$, which we will denote by $\\gamma_i$, is given by \n\\[\\gamma_i = \\dimh{K_i} = \\frac{\\log{N_i}}{\\log{b}}.\\]\n\nWe will be interested in the \\emph{higher-dimensional missing digit set}\n\\[K:=\\prod_{i=1}^{d}{K_i}\\]\nformed by taking the Cartesian product of the sets $K_i$, $1 \\leq i \\leq d$. As a natural concrete example, we note that the \\emph{four corner Cantor set} in $\\R^2$ with contraction ratio $\\frac{1}{b}$ (with $b \\geq 3$ an integer) can be written in our notation as $K_{b}(\\{0,b-1\\}) \\times K_{b}(\\{0,b-1\\})$. \n\nWe note that $K$ is the attractor of the iterated function system \n\\[\\Phi=\\left\\{f_{(j_1,\\dots,j_d)}:[0,1]^d \\to [0,1]^d, (j_1,\\dots,j_d) \\in \\prod_{i=1}^{d}{J_i}\\right\\}\\]\nwhere\n\\[f_{(j_1,\\dots,j_d)}\\begin{pmatrix} x_1 \\\\ \\vdots \\\\ x_d\\end{pmatrix} = \\begin{pmatrix}\\frac{x_1+j_1}{b} \\\\ \\vdots \\\\ \\frac{x_d+j_d}{b}\\end{pmatrix}.\\]\nNotice that $\\Phi$ consists of \n\\[N:=\\prod_{i=1}^{d}{N_i}\\]\nmaps and so, for convenience, we will write \n\\[\\Phi = \\left\\{g_j:[0,1]^d\\to[0,1]^d\\right\\}_{j=1}^{N}\\]\nwhere the $g_j$'s are just the maps $f_{(j_1,\\dots,j_d)}$ from above written in some order. The Hausdorff dimension of $K$, which we denote by $\\gamma$, is\n\\[\\gamma = \\dimh{K} = \\frac{\\log{N}}{\\log{b}}.\\]\n\nWe will write\n\\[\\Lambda = \\set{1,2,\\dots,N} \\qquad \\text{and} \\qquad \\Lambda^*=\\bigcup_{n=0}^{\\infty}{\\Lambda^n}.\\]\nWe write $\\mathbf{i}$ to denote a word in $\\Lambda$ or $\\Lambda^*$ and we write $|\\mathbf{i}|$ to denote the length of $\\mathbf{i}$. 
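The product construction can likewise be sketched in a few lines. Below (with names of our own choosing) we list the maps $g_j$ of the product IFS for the four corner Cantor set $K_3(\{0,2\}) \times K_3(\{0,2\})$ and apply a composition of two of them; note that the image of the origin under a word of length $n$ is a rational point with denominator $b^n$:

```python
import math
from fractions import Fraction
from itertools import product

b = 3
J = [(0, 2), (0, 2)]   # four corner Cantor set: K_3({0,2}) x K_3({0,2})

# The maps g_j of the product IFS are indexed by digit vectors (j_1, ..., j_d).
maps = list(product(*J))
N = len(maps)                         # N = N_1 * N_2 = 4
gamma = math.log(N) / math.log(b)     # log 4 / log 3 ~ 1.2619

def apply_word(word, x):
    """Apply the composition of the listed maps to x in [0,1]^d,
    where each letter of the word is a digit vector (j_1, ..., j_d)."""
    for digits in reversed(word):     # the innermost map acts first
        x = tuple((xi + ji) / b for xi, ji in zip(x, digits))
    return x

origin = (Fraction(0), Fraction(0))
image = apply_word([maps[3], maps[0]], origin)   # a word of length 2
print(N, gamma, image)                           # image = (2/3, 2/3)
```

The dimension of the product set is the sum of the coordinate dimensions, here $2\log 2/\log 3$, matching $\log N/\log b$ with $N = 4$ and $b = 3$.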
For $\\mathbf{i} \\in \\Lambda^*$ we will also use the shorthand notation\n\\[g_{\\mathbf{i}} = g_{i_1} \\circ g_{i_2} \\circ \\dots \\circ g_{i_{|\\mathbf{i}|}}.\\]\nWe adopt the convention that $g_{\\emptyset}(x)=x$.\n\nLet $\\Psi: \\Lambda^* \\to [0,\\infty)$ be an approximating function. For each $x \\in K$, we define the set \n\\begin{align*} \nW(x,\\Psi)=\\left\\{y \\in K: \\supn{y-g_{\\mathbf{i}}(x)}<\\Psi(\\mathbf{i}) \\text{ for infinitely many } \\mathbf{i} \\in \\Lambda^*\\right\\}.\n\\end{align*}\nThe following theorem is a special case of \\cite[Theorem 1.1]{Allen-Barany}. \n\n\\begin{theorem} \\label{self-similar approx thm}\nLet $\\Phi$ and $K$ be as defined above. Let $x \\in K$ and let $\\varphi: \\N \\to [0,\\infty)$ be a monotonically decreasing function. Let $\\Psi(\\mathbf{i}) = \\mathrm{diam}(g_{\\mathbf{i}}(K))\\varphi(|\\mathbf{i}|)$. Then, for $s>0$, \n\\[\n{\\cal H}^{s}(W(x,\\Psi))=\n\\begin{cases}\n0 & \\text{if} \\quad \\sum_{\\mathbf{i} \\in \\Lambda^*}{\\Psi(\\mathbf{i})^s}<\\infty, \\\\[2ex]\n{\\cal H}^s(K) & \\text{if} \\quad \\sum_{\\mathbf{i} \\in \\Lambda^*}{\\Psi(\\mathbf{i})^s}=\\infty.\n\\end{cases}\n\\]\n\\end{theorem}\nOf particular interest to us here is the following easy corollary.\n\\begin{corollary} \\label{simultaneous corollary}\nLet $\\Phi$ and $K$ be as above and suppose that $\\mathrm{diam}(K)=1$. Let $\\psi: \\N \\to [0,\\infty)$ be such that $b^n\\psi(b^n)$ is monotonically decreasing and define $\\varphi: \\N \\to [0,\\infty)$ by $\\varphi(n)=b^n\\psi(b^n)$. Let $\\Psi(\\mathbf{i}) = \\mathrm{diam}(g_{\\mathbf{i}}(K))\\varphi(|\\mathbf{i}|)$. Recall that $\\gamma = \\dimh{K}$. 
Then, for $x \\in K$, we have\n\\[\n{\\cal H}^{\\gamma}(W(x,\\Psi))=\n\\begin{cases}\n0 & \\text{if} \\quad \\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}}<\\infty, \\\\[2ex]\n{\\cal H}^{\\gamma}(K) & \\text{if} \\quad \\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}}=\\infty.\n\\end{cases}\n\\]\n\\end{corollary}\n\n\\begin{proof}\nIt follows from Theorem \\ref{self-similar approx thm} that\n\\[\n{\\cal H}^{\\gamma}(W(x,\\Psi))=\n\\begin{cases}\n0 & \\text{if} \\quad \\sum_{\\mathbf{i} \\in \\Lambda^*}{\\Psi(\\mathbf{i})^{\\gamma}}<\\infty, \\\\[2ex]\n{\\cal H}^{\\gamma}(K) & \\text{if} \\quad \\sum_{\\mathbf{i} \\in \\Lambda^*}{\\Psi(\\mathbf{i})^{\\gamma}}=\\infty.\n\\end{cases}\n\\]\nHowever, in this case, by the definition of $\\varphi$ and our assumption that $\\mathrm{diam}(K)=1$, we have\n\\[\\sum_{\\mathbf{i} \\in \\Lambda^*}{\\Psi(\\mathbf{i})^{\\gamma}} = \\sum_{n=1}^{\\infty}{\\sum_{\\substack{\\mathbf{i} \\in \\Lambda^* \\\\ |\\mathbf{i}| = n}}{(\\mathrm{diam}(g_{\\mathbf{i}}(K))\\varphi(|\\mathbf{i}|))^{\\gamma}}} = \\sum_{n=1}^{\\infty}{\\sum_{\\substack{\\mathbf{i} \\in \\Lambda^* \\\\ |\\mathbf{i}| = n}}{\\psi(b^n)^{\\gamma}}} = \\sum_{n=1}^{\\infty}{N^n \\psi(b^n)^{\\gamma}} = \\sum_{n=1}^{\\infty}{(b^n \\psi(b^n))^{\\gamma}}. \\qedhere \\]\n\\end{proof}\n\nFor an approximating function $\\psi: \\N \\to [0,\\infty)$, define\n\\begin{align} \\label{W x psi}\nW(x,\\psi)=\\left\\{y \\in K: \\supn{y-g_{\\mathbf{i}}(x)} < \\psi(b^{|\\mathbf{i}|}) \\text{ for infinitely many } \\mathbf{i} \\in \\Lambda^*\\right\\}.\n\\end{align}\nIn essence, $W(x,\\psi)$ is a set of ``simultaneously $\\psi$-well-approximable'' points in $K$. The following statement regarding these sets can be deduced immediately from Corollary \\ref{simultaneous corollary}.\n\n\\begin{corollary} \\label{W x psi corollary}\nLet $\\Phi$ and $K$ be defined as above and let $\\psi: \\N \\to [0,\\infty)$ be such that $b^n\\psi(b^n)$ is monotonically decreasing. 
Suppose further that $\\mathrm{diam}(K)=1$. Then, \n\\[\n{\\cal H}^{\\gamma}(W(x,\\psi))=\n\\begin{cases}\n0 & \\text{if} \\quad \\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}}<\\infty, \\\\[2ex]\n{\\cal H}^{\\gamma}(K) & \\text{if} \\quad \\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}}=\\infty.\n\\end{cases}\n\\]\n\\end{corollary}\n\nHere we will be interested in weighted versions of the sets $W(x,\\psi)$. More specifically, for $\\mathbf{t}=(t_1,\\dots,t_d) \\in \\R^d_{\\geq 0}$ and for $x \\in K$, we define the \\emph{weighted approximation set}\n\\[W(x,\\psi,\\mathbf{t}) = \\left\\{\\mathbf{y}=(y_1,\\dots,y_d) \\in K: |y_j-g_{\\mathbf{i}}(x)_j|<\\psi(b^{|\\mathbf{i}|})^{1+t_j}, 1 \\leq j \\leq d, \\text{ for i.m. } \\mathbf{i} \\in \\Lambda^*\\right\\}.\\]\nHere we are using the notation $g_{\\mathbf{i}}(x)=(g_{\\mathbf{i}}(x)_1,\\dots,g_{\\mathbf{i}}(x)_d)$. Our main results relating to the Hausdorff dimension of sets of the form $W(x,\\psi,\\mathbf{t})$ are as follows.\n\n\\begin{theorem} \\label{lower bound theorem}\nLet $\\Phi$ and $K$ be defined as above. Recall that $\\gamma = \\dimh{K}$ and $\\gamma_i=\\dimh{K_i}$ for each $1 \\leq i \\leq d$. Let $x \\in K$ and let $\\psi: \\N \\to [0, \\infty)$ be such that $b^n\\psi(b^n)$ is monotonically decreasing. Further suppose that $\\mathrm{diam}(K)=1$ and\n\\[\\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}} = \\infty.\\]\nThen, for $\\mathbf{t} = (t_1,\\dots,t_d) \\in \\R^d_{\\geq 0}$, we have\n\\[\\dimh{W(x,\\psi,\\mathbf{t})} \\geq \\min_{1 \\leq k \\leq d}\\left\\{\\frac{1}{1+t_k}\\left(\\gamma + \\sum_{j:t_j \\leq t_k}{(t_k-t_j)\\gamma_{j}}\\right)\\right\\}.\\]\n\\end{theorem}\n\nIf $\\psi$ satisfies more stringent divergence conditions, then we can show that the lower bound given in Theorem \\ref{lower bound theorem} in fact gives an exact formula for the Hausdorff dimension of $W(x,\\psi,\\mathbf{t})$. 
More precisely, we are able to show the following.\n\n\\begin{theorem} \\label{upper bound theorem}\nLet $\\Phi$ and $K$ be as defined above. Let $x \\in K$ and let $\\psi:\\N \\to [0,\\infty)$ be such that:\n\\begin{enumerate}[(i)]\n \\item{$b^n\\psi(b^n)$ is monotonically decreasing,}\n \\item{$\\displaystyle{\\sum_{n=1}^{\\infty}{(b^n \\psi(b^n))^{\\gamma}}=\\infty}, \\quad$ and}\n \\item{$\\displaystyle{\\sum_{n=1}^{\\infty}{(b^n \\psi(b^n)^{1+\\varepsilon})^{\\gamma}}<\\infty} \\quad$ for every $\\varepsilon>0$.}\n\\end{enumerate}\nThen, for $\\mathbf{t} = (t_1, \\dots, t_d) \\in \\R^d_{\\geq 0}$, we have \n\\[\\dimh{W(x,\\psi,\\mathbf{t})} = \\min_{1 \\leq k \\leq d}\\left\\{\\frac{1}{1+t_k}\\left(\\gamma + \\sum_{j:t_j \\leq t_k}{(t_k-t_j)\\gamma_{j}}\\right)\\right\\}.\\]\n\\end{theorem}\nAs an example of an approximating function which satisfies conditions $(i)-(iii)$, one can think of $\\psi(q)=\\left(q(\\log_{b}q)^{1\/\\gamma}\\right)^{-1}$. This function naturally appears when one considers analogues of Dirichlet's theorem in missing digit sets (see \\cite{BFR_2011, FS_2014}).\n As a corollary to Theorem \\ref{upper bound theorem} we deduce the following statement which can be interpreted as a higher-dimensional weighted generalisation of \\cite[Theorem 4]{LSV_2007}. In \\cite[Theorem 4]{LSV_2007}, Levesley, Salp, and Velani establish the Hausdorff measure of the set of points in a one-dimensional base-$b$ missing digit set (i.e. of the form $K_b(J)$ in our present notation) which can be well-approximated by rationals with denominators which are powers of $b$. Before we state our corollary, we fix one more piece of notation. Given an approximating function $\\psi: \\N \\to [0,\\infty)$, an infinite subset ${\\cal B} \\subset \\N$, and $\\mathbf{t} = (t_1,\\dots,t_d) \\in \\R_{\\geq 0}^d$, we define\n\\[W_{{\\cal B}}(\\psi,\\mathbf{t})=\\left\\{x \\in K: \\left|x_{i}-\\frac{p_i}{q}\\right|<\\psi(q)^{1+t_i}, 1 \\leq i \\leq d, \\text{ for i.m. 
} (p_1,\\dots,p_d,q) \\in \\Z^d \\times {\\cal B} \\right\\}.\\]\n\n\\begin{corollary} \\label{LSV_equivalent}\nFix $b \\in \\N$ with $b \\geq 3$ and let ${\\cal B}=\\{b^n: n=0,1,2,\\dots\\}$. Let $K$ be a higher dimensional missing digit set as defined above (with base $b$) and write $\\gamma=\\dimh{K}$. Furthermore, suppose that $\\set{0,b-1} \\subset J_i$ for every $1 \\leq i \\leq d$. In particular, this also means that $\\mathrm{diam}{K}=1$. Let $\\psi: \\N \\to [0,\\infty)$ be an approximating function such that \n\\begin{enumerate}[(i)]\n\\item{$b^{n}\\psi(b^{n})$ is monotonically decreasing with $b^{n}\\psi(b^{n}) \\to 0$ as $n \\to \\infty$},\n\\item{$\\displaystyle{\\sum_{n=1}^{\\infty}{(b^n \\psi(b^n))^{\\gamma}}=\\infty}, \\quad$ and}\n\\item{$\\displaystyle{\\sum_{n=1}^{\\infty}{(b^n \\psi(b^n)^{1+\\varepsilon})^{\\gamma}}<\\infty} \\quad$ for every $\\varepsilon>0$.}\n\\end{enumerate}\nThen\n\\begin{equation*}\n\\dimh W_{{\\cal B}}(\\psi, \\mathbf{t}) = \\min_{1 \\leq k \\leq d}\\left\\{\\frac{1}{1+t_k}\\left(\\gamma + \\sum_{j:t_j \\leq t_k}{(t_k-t_j)\\, \\gamma_j}\\right)\\right\\}.\n\\end{equation*}\n\\end{corollary}\n\n\\begin{proof} \n\nObserve that the conditions imposed in the statement of Corollary \\ref{LSV_equivalent} guarantee that Theorem \\ref{upper bound theorem} is applicable. 
Furthermore, by our assumption that $b^{n}\\psi(b^{n}) \\to 0$ as $n \\to \\infty$, we may assume without loss of generality that $\\psi(b^n) < b^{-n}$ for all $n \\in \\N$.\n\nNext, we note that if $\\mathbf{p}=(p_1,\\dots,p_d) \\in \\Z^d$ and $\\frac{\\mathbf{p}}{b^n} = \\left(\\frac{p_1}{b^n},\\dots,\\frac{p_d}{b^n}\\right) \\notin K$, then we must have \n\\[\\operatorname{dist}\\left(\\frac{\\mathbf{p}}{b^n},K\\right) \\geq b^{-n}, \\quad \\text{where} \\quad \\operatorname{dist}(x,K)=\\inf\\set{\\|x-y\\|: y \\in K}.\\] \n(Recall that we use $\\|\\cdot\\|$ to denote the supremum norm in $\\R^d$.)\nThus we need only concern ourselves with pairs $(\\mathbf{p},q) \\in \\Z^{d} \\times {\\cal B}$ for which $\\frac{\\mathbf{p}}{q} \\in K$.\n\nLet $G=\\left\\{x=(x_1,\\dots,x_d) \\in \\{0,1\\}^{d}\\right\\}$ and note that $G \\subset K$ by the assumption that $\\{0,b-1\\} \\subset J_{i}$ for each $1\\leq i \\leq d$. For any $x \\in G$ and any $\\mathbf{j} \\in \\Lambda^{n}$ it is possible to write $g_{\\mathbf{j}}(x)=\\frac{\\mathbf{p}}{b^{n}}$ for some $\\mathbf{p} \\in (\\N\\cup \\set{0})^{d}$. Hence\n\\[W(x, \\psi, \\mathbf{t}) \\subset W_{{\\cal B}}(\\psi, \\mathbf{t}).\\]\nFurthermore, the set of all rational points of the form $\\frac{\\mathbf{p}}{b^{n}}$ contained in $K$ is\n\\begin{equation*}\n\\bigcup_{x \\in G} \\bigcup_{\\mathbf{j} \\in \\Lambda^{n}} g_{\\mathbf{j}}(x).\n\\end{equation*}\nHence\n\\begin{equation*} \nW_{{\\cal B}}(\\psi, \\mathbf{t}) \\subset \\bigcup_{x \\in G} W(x, \\psi, \\mathbf{t}).\n \\end{equation*}\nBy the finite stability of Hausdorff dimension (see \\cite{Falconer}), Corollary \\ref{LSV_equivalent} now follows from Theorem~\\ref{upper bound theorem}.\n\\end{proof}\n\n\nNotice that in Theorem \\ref{lower bound theorem}, Theorem \\ref{upper bound theorem}, and Corollary \\ref{LSV_equivalent}, we insist on the same underlying base $b$ in each coordinate direction. 
This is somewhat unsatisfactory and one might hope to be able to obtain results where we can have different bases $b_i$ in each coordinate direction. The first steps towards proving results relating to weighted approximation in this setting can be seen in \\cite[Section 12]{WW2019}. Proving more general results with different bases in different coordinate directions is likely to be a very challenging problem since such sets are \\emph{self-affine} and, generally speaking, self-affine sets are more difficult to deal with than self-similar or self-conformal sets. Indeed, very little is currently known even regarding non-weighted approximation in self-affine sets.\n\n\n\\noindent{\\bf Structure of the paper:} The remainder of the paper will be arranged as follows. In Section~\\ref{measure theory section} we will present some measure theoretic preliminaries which will be required for the proofs of our main results. The key tool required for proving Theorem \\ref{lower bound theorem} is a mass transference principle for rectangles proved recently by Wang and Wu \\cite{WW2019}. We introduce this in Section \\ref{mtp section}. In Section \\ref{lower bound section} we present our proof of Theorem \\ref{lower bound theorem} and we conclude in Section \\ref{upper bound section} with the proof of Theorem \\ref{upper bound theorem}.\n\n\\section{Some Measure Theoretic Preliminaries} \\label{measure theory section}\n\nRecall that $\\gamma = \\dimh{K}$ and that $\\gamma_i=\\dimh{K_i}$ for $1 \\leq i \\leq d$, where $K$ and $K_i$ are as defined above. Furthermore, note that $0 < {\\cal H}^{\\gamma}(K) < \\infty$ and $0< {\\cal H}^{\\gamma_{i}}(K_{i})<\\infty$ for each $1 \\leq i \\leq d$, see for example \\cite[Theorem 9.3]{Falconer}. 
Let us define the measures\n\\[\\mu:=\\frac{{\\cal H}^{\\gamma}|_{K}}{{\\cal H}^{\\gamma}(K)} \\qquad \\text{and} \\qquad \\mu_i:=\\frac{{\\cal H}^{\\gamma_i}|_{K_i}}{{\\cal H}^{\\gamma_i}(K_i)} \\quad \\text{for each } 1 \\leq i \\leq d.\\]\nSo, for $X \\subset \\R^d$, we have \n\\[\\mu(X) = \\frac{{\\cal H}^{\\gamma}(X \\cap K)}{{\\cal H}^{\\gamma}(K)}.\\]\nSimilarly, for $X \\subset \\R$, for each $1 \\leq i \\leq d$ we have \n\\[\\mu_i(X) = \\frac{{\\cal H}^{\\gamma_i}(X \\cap K_i)}{{\\cal H}^{\\gamma_i}(K_i)}.\\]\nNote that $\\mu$ defines a probability measure supported on $K$ and, for each $1 \\leq i \\leq d$, $\\mu_i$ defines a probability measure supported on $K_i$. \nNote also that the measure $\\mu$ is $\\delta$-Ahlfors regular with $\\delta = \\gamma$ and, for each $1 \\leq i \\leq d$, the measure $\\mu_i$ is $\\delta$-Ahlfors regular with $\\delta=\\gamma_i$ (see, for example, \\cite[Theorem 4.14]{Mattila_1999}).\n\nWe will also be interested in the product measure\n\\[\\Mu:=\\prod_{i=1}^{d}{\\mu_i}.\\]\nWe note that $\\Mu$ is $\\delta$-Ahlfors regular with $\\delta = \\gamma$. This fact follows straightforwardly from the Ahlfors regularity of each of the $\\mu_i$'s.\n\n\\begin{lemma} \\label{M regularity lemma}\nThe product measure $\\Mu = \\prod_{i=1}^{d}{\\mu_i}$ on $\\R^d$ is $\\delta$-Ahlfors regular with $\\delta=\\gamma$.\n\\end{lemma}\n\n\\begin{proof}\nLet $B=\\prod_{i=1}^{d}{B(x_i,r)}$, $r>0$, be an arbitrary ball in $\\R^d$. The aim is to show that \\mbox{$\\Mu(B) \\asymp r^{\\gamma}$}. Recall that for each $1 \\leq i \\leq d$, the measure $\\mu_i$ is $\\delta$-Ahlfors regular with \\mbox{$\\delta = \\gamma_i = \\dimh{K_i} = \\frac{\\log{N_i}}{\\log{b}}$}. Also recall that $N=\\prod_{i=1}^{d}{N_i}$ and $\\gamma = \\dimh{K} = \\frac{\\log{N}}{\\log{b}}$. 
Thus, we have\n\\[\\Mu(B) = \\prod_{i=1}^{d}{\\mu_i(B(x_i,r))} \\asymp \\prod_{i=1}^{d}{r^{\\gamma_i}} = r^{\\sum_{i=1}^{d}{\\gamma_i}}.\\]\nNote that\n\\[\\sum_{i=1}^{d}{\\gamma_i} = \\sum_{i=1}^{d}{\\frac{\\log{N_i}}{\\log{b}}} = \\frac{\\log(\\prod_{i=1}^{d}{N_i})}{\\log{b}} = \\frac{\\log{N}}{\\log{b}} = \\gamma.\\]\nHence, $\\Mu(B) \\asymp r^{\\gamma}$ as claimed.\n\\end{proof}\n\nWe also note that, up to a constant factor, the product measure $\\Mu$ is equivalent to the measure $\\mu = \\frac{{\\cal H}^{\\gamma}|_K}{{\\cal H}^{\\gamma}(K)}$.\n\n\\begin{lemma}\\label{measure equivalence lemma}\nLet $\\Mu=\\prod_{i=1}^{d}{\\mu_i}$. Then, up to a constant factor, $\\Mu$ is equivalent to $\\mu$; i.e. for any Borel set $F \\subset \\R^d$, we have $\\Mu(F) \\asymp \\mu(F)$.\n\\end{lemma}\n\nLemma \\ref{measure equivalence lemma} follows immediately upon combining Lemma \\ref{M regularity lemma} with \\cite[Proposition 2.2 (a) + (b)]{Falconer_techniques}.\n\nIn our present setting, where $K$ is a self-similar set with well-separated components, we can actually show the stronger statement that $\\mu=\\Mu$.\n\n\\begin{proposition} \\label{measures_are_equal}\nThe measures $\\mu$ and $\\Mu$ are equal, i.e. for every Borel set $F \\subset \\R^d$, we have $\\mu(F) = \\Mu(F)$.\n\\end{proposition}\n\n\\begin{proof}\nFor each $1 \\leq i \\leq d$, there exists a unique Borel probability measure (see, for example, \\cite[Theorem 2.8]{Falconer_techniques}) $m_i$ satisfying\n\\begin{align} \\label{measure uniqueness 1}\nm_i &= \\sum_{j=1}^{N_i}{\\frac{1}{N_i}m_i \\circ f_j^{-1}}.\n\\end{align}\nLikewise, there exists a unique Borel probability measure $m$ satisfying\n\\begin{align} \\label{measure uniqueness 2}\n m &= \\sum_{j=1}^{N}{\\frac{1}{N}m \\circ g_j^{-1}}.\n\\end{align}\n\nWe begin by showing that $\\mu_i$ satisfies \\eqref{measure uniqueness 1} for each $1 \\leq i \\leq d$. 
Note that ${\\cal H}^{\\gamma_i}(f_{j_1}(K_i) \\cap f_{j_2}(K_i)) = 0$ for any $1\\leq j_1,j_2 \\leq N_i$ with $j_1 \\neq j_2$. Thus, for any Borel set $X \\subset \\R$, we have\n\\begin{align*}\n \\mu_i(X) &= \\frac{1}{{\\cal H}^{\\gamma_i}(K_i)}{\\cal H}^{\\gamma_i}(X \\cap K_{i}) \\\\\n &= \\frac{1}{{\\cal H}^{\\gamma_i}(K_i)}\\sum_{j=1}^{N_i}{{\\cal H}^{\\gamma_i}(X \\cap f_j(K_i))} \\\\\n &= \\frac{1}{{\\cal H}^{\\gamma_i}(K_i)}\\sum_{j=1}^{N_i}{{\\cal H}^{\\gamma_i}(f_j(f_j^{-1}(X) \\cap K_i))} \\\\\n &= \\frac{1}{{\\cal H}^{\\gamma_i}(K_i)}\\sum_{j=1}^{N_i}{\\left(\\frac{1}{b}\\right)^{\\gamma_i}{\\cal H}^{\\gamma_i}(f_j^{-1}(X) \\cap K_i)} \\\\\n &= \\frac{1}{{\\cal H}^{\\gamma_i}(K_i)}\\sum_{j=1}^{N_i}{\\frac{1}{N_i}{\\cal H}^{\\gamma_i}(f_j^{-1}(X) \\cap K_i)} \\\\\n &= \\sum_{j=1}^{N_i}{\\frac{1}{N_i}\\mu_i \\circ f_j^{-1}(X)}.\n\\end{align*}\n\nBy an almost identical argument, it can be shown that $\\mu$ satisfies \\eqref{measure uniqueness 2}.\n\nFinally, we show that $\\Mu$ also satisfies \\eqref{measure uniqueness 2} and, hence, by the uniqueness of solutions to~\\eqref{measure uniqueness 2}, we conclude that $\\Mu$ must be equal to $\\mu$. Since $\\mu_i$ satisfies \\eqref{measure uniqueness 1} for each $1 \\leq i \\leq d$, we have \n\\begin{align*}\n\\Mu &= \\prod_{i=1}^{d}{\\mu_i} \\\\\n &= \\prod_{i=1}^{d}{\\left(\\sum_{j=1}^{N_i}{\\frac{1}{N_i}\\mu_i \\circ f_j^{-1}}\\right)} \\\\\n &= \\sum_{\\mathbf{j} = (j_1, \\dots, j_d) \\in \\prod_{i=1}^{d}{\\{1,\\dots,N_i\\}}}{\\frac{1}{N}\\prod_{i=1}^{d}{\\mu_i \\circ f_{j_i}^{-1}}} \\\\\n &= \\sum_{j=1}^{N}{\\frac{1}{N}\\Mu\\circ g_j^{-1}}. \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\section{Mass transference principle for rectangles} \\label{mtp section}\n\nTo prove Theorem \\ref{lower bound theorem}, we will use the mass transference principle for rectangles established recently by Wang and Wu in \\cite{WW2019}. 
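Before turning to the mass transference machinery, we remark that the stationarity relation just established for the natural measure on a missing digit set can be checked numerically. In the sketch below (the discretisation scheme and function name are our own), the measure of an initial segment $[0,t]$ of the middle-third Cantor set is computed by giving each level-$n$ basic interval mass $N_i^{-n}$:

```python
from fractions import Fraction

b, J = 3, (0, 2)          # middle-third Cantor set K_3({0,2})
Ni = len(J)

def measure_of_initial_segment(end, n):
    """mu_i([0, end]) computed at resolution n: the natural self-similar
    measure gives each of the Ni**n level-n basic intervals mass Ni**-n.
    Exact whenever [0, end] is a union of level-n basic intervals."""
    pts = [Fraction(0)]
    for _ in range(n):
        pts = [(p + a) / b for p in pts for a in J]
    length = Fraction(1, b**n)
    inside = sum(1 for p in pts if p + length <= end)
    return Fraction(inside, Ni**n)

# Stationarity mu(X) = sum_j (1/Ni) mu(f_j^{-1}(X)) for X = [0, 1/9]:
# f_0^{-1}(X) = [0, 1/3], while f_2^{-1}(X) = [-2, -5/3] carries no mass.
lhs = measure_of_initial_segment(Fraction(1, 9), 4)
rhs = Fraction(1, Ni) * measure_of_initial_segment(Fraction(1, 3), 4)
print(lhs, rhs)   # both equal 1/4
```

Both sides evaluate to $1/4$, as the relation predicts: the cylinder $[0,1/9]$ is the image of $[0,1/3]$ under the first map, which carries half the mass.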
The work of Wang and Wu generalises the famous Mass Transference Principle originally proved by Beresnevich and Velani \\cite{BV_MTP}. Since its initial discovery in \\cite{BV_MTP}, the Mass Transference Principle has found many applications, especially in Diophantine Approximation, and has by now been extended in numerous directions. See \\cite{Allen-Beresnevich, Allen-Baker, BV_MTP, BV_Slicing, Koivusalo-Rams, WWX2015, WW2019, Zhong2021} and references therein for further information. Here we shall state the general ``full measure'' mass transference principle from rectangles to rectangles established by Wang and Wu in \\cite[Theorem~3.4]{WW2019}.\n\nFix an integer $d \\geq 1$. For each $1 \\leq i \\leq d$, let $(X_i,|\\cdot|_i,m_i)$ be a bounded locally compact metric space equipped with a $\\delta_i$-Ahlfors regular probability measure $m_i$. We consider the product space $(X, |\\cdot|, m)$ where\n\\[X = \\prod_{i=1}^{d}{X_i}, \\qquad |\\cdot|=\\max_{1 \\leq i \\leq d}{|\\cdot|_i}, \\quad \\text{and} \\quad m=\\prod_{i=1}^{d}{m_i}.\\]\nNote that a ball $B(x,r)$ in $X$ is the product of balls in $\\{X_i\\}_{1 \\leq i \\leq d}$;\n\\[B(x,r) = \\prod_{i=1}^{d}{B(x_i,r)} \\quad \\text{for} \\quad x=(x_1,\\dots,x_d).\\]\n\nLet $J$ be an infinite countable index set and let $\\beta: J \\to \\R_{\\geq 0}: \\alpha \\mapsto \\beta_{\\alpha}$ be a positive function such that for any $M > 1$, the set\n\\[\\{\\alpha \\in J: \\beta_{\\alpha} < M\\}\\]\nis finite. Let $\\rho: \\R_{\\geq 0} \\to \\R_{\\geq 0}$ be a non-increasing function such that $\\rho(u) \\to 0$ as $u \\to \\infty$.\n\nFor each $1 \\leq i \\leq d$, let $\\{R_{\\alpha,i}: \\alpha \\in J\\}$ be a sequence of subsets of $X_i$. 
Then, the \\emph{resonant sets} in $X$ that we will be concerned with are\n\\[\\left\\{R_{\\alpha}=\\prod_{i=1}^{d}{R_{\\alpha,i}}:\\alpha \\in J\\right\\}.\\]\nFor a vector $\\mathbf{a} = (a_1,\\dots,a_d) \\in \\R_{>0}^d$, write\n\\[\\Delta(R_{\\alpha}, \\rho(\\beta_{\\alpha})^{\\mathbf{a}}) = \\prod_{i=1}^{d}{\\Delta(R_{\\alpha,i},\\rho(\\beta_{\\alpha})^{a_i})},\\]\nwhere $\\Delta(R_{\\alpha,i},\\rho(\\beta_\\alpha)^{a_i})$ appearing on the right-hand side denotes the $\\rho(\\beta_\\alpha)^{a_i}$-neighbourhood of $R_{\\alpha,i}$ in $X_i$. We call $\\Delta(R_{\\alpha,i},\\rho(\\beta_{\\alpha})^{a_i})$ the \\emph{part of $\\Delta(R_{\\alpha},\\rho(\\beta_{\\alpha})^{\\mathbf{a}})$ in the $i$th direction.}\n\nFix $\\mathbf{a} = (a_1, \\dots, a_d) \\in \\R_{>0}^d$ and suppose $\\mathbf{t} = (t_1,\\dots,t_d) \\in \\R_{\\geq 0}^d$. We are interested in the set\n\\[W_{\\mathbf{a}}(\\mathbf{t}) = \\left\\{x \\in X: x \\in \\Delta(R_{\\alpha},\\rho(\\beta_{\\alpha})^{\\mathbf{a}+\\mathbf{t}}) \\quad \\text{for i.m. } \\alpha \\in J \\right\\}.\\]\nWe can think of $\\Delta(R_{\\alpha},\\rho(\\beta_{\\alpha})^{\\mathbf{a}+\\mathbf{t}})$ as a smaller ``rectangle'' obtained by shrinking the ``rectangle'' $\\Delta(R_{\\alpha},\\rho(\\beta_{\\alpha})^{\\mathbf{a}})$.\n\nFinally, we require that the resonant sets satisfy a certain \\emph{$\\kappa$-scaling} property, which in essence ensures that locally our sets behave like affine subspaces.\n\n\\begin{definition} \\label{kappa scaling}\nLet $0 \\leq \\kappa < 1$. 
For each $1 \\leq i \\leq d$, we say that $\\{R_{\\alpha,i}\\}_{\\alpha \\in J}$ has the \\emph{$\\kappa$-scaling property} if for any $\\alpha \\in J$ and any ball $B(x_i,r)$ in $X_i$ with centre $x_i \\in R_{\\alpha,i}$ and radius $r>0$, for any $ 0 < \\varepsilon < r$, we have\n\\[c_1 r^{\\delta_i \\kappa}\\varepsilon^{\\delta_i(1-\\kappa)} \\leq m_i(B(x_i,r) \\cap \\Delta(R_{\\alpha,i},\\varepsilon)) \\leq c_2 r^{\\delta_i \\kappa} \\varepsilon^{\\delta_i(1-\\kappa)}\\]\nfor some absolute constants $c_1, c_2 > 0$.\n\\end{definition}\nIn our case $\\kappa=0$ since our resonant sets are points. For justification of this, and calculations of $\\kappa$ for other resonant sets, see \\cite{Allen-Baker}. Wang and Wu established the following mass transference principle for rectangles in \\cite{WW2019}.\n\n\\begin{theorem}[Wang -- Wu, \\cite{WW2019}] \\label{Theorem WW}\nAssume that for each $1 \\leq i \\leq d$, the measure $m_i$ is $\\delta_i$-Ahlfors regular and that the resonant set $R_{\\alpha,i}$ has the $\\kappa$-scaling property for $\\alpha \\in J$. 
Suppose\n\\[m\\left(\\limsup_{\\substack{\\alpha \\in J \\\\ \\beta_{\\alpha} \\to \\infty}}{\\Delta(R_{\\alpha},\\rho(\\beta_{\\alpha})^{\\mathbf{a}}})\\right) = m(X).\\]\nThen we have\n\\[\\dimh{W_{\\mathbf{a}}(\\mathbf{t})} \\geq s(\\mathbf{t}):= \\min_{A \\in {\\cal A}}\\left\\{\\sum_{k \\in {\\cal K}_1}{\\delta_k} + \\sum_{k \\in {\\cal K}_2}{\\delta_k} + \\kappa \\sum_{k \\in {\\cal K}_3}{\\delta_k}+(1-\\kappa)\\frac{\\sum_{k \\in {\\cal K}_3}{a_k \\delta_k} - \\sum_{k \\in {\\cal K}_2}{t_k \\delta_k}}{A}\\right\\},\\]\nwhere\n\\[{\\cal A} = \\{a_i, a_i+t_i: 1 \\leq i \\leq d\\}\\]\nand for each $A \\in {\\cal A}$, the sets ${\\cal K}_1, {\\cal K}_2, {\\cal K}_3$ are defined as\n\\[{\\cal K}_1 = \\{k: a_k \\geq A\\}, \\quad {\\cal K}_2=\\{k: a_k+t_k \\leq A\\} \\setminus {\\cal K}_1, \\quad {\\cal K}_3=\\{1,\\dots,d\\}\\setminus ({\\cal K}_1 \\cup {\\cal K}_2)\\]\nand thus give a partition of $\\{1,\\dots,d\\}$.\n\\end{theorem}\n\n\\section{Proof of Theorem \\ref{lower bound theorem}} \\label{lower bound section}\n\nTo prove Theorem \\ref{lower bound theorem}, we will apply Theorem \\ref{Theorem WW} with $X_i = K_i$, $m_i=\\mu_i$ and $\\abs{\\cdot}_i = \\abs{\\cdot}$ (absolute value in $\\R$) for each $1 \\leq i \\leq d$. Then, in our setting, we will be interested in the product space $(X, \\supn{\\cdot}, \\Mu)$ where\n\\[X = \\prod_{i=1}^{d}{K_i} \\; = K, \\qquad \\Mu = \\prod_{i=1}^{d}{\\mu_i},\\]\nand $\\supn{\\cdot}$ denotes the supremum norm in $\\R^d$. 
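The computation carried out in the remainder of this section — specialising the exponent $s(\mathbf{t})$ of the Wang--Wu theorem to $\kappa = 0$, $\mathbf{a} = (1,\dots,1)$ and $\delta_k = \gamma_k$, and simplifying it to the closed-form minimum in our lower bound theorem — can be checked numerically. The following sketch uses hypothetical sample values; all function names are our own:

```python
def wang_wu_exponent(gammas, t):
    """s(t) of the Wang--Wu theorem specialised to kappa = 0,
    a = (1, ..., 1) and delta_k = gamma_k (the setting of this section)."""
    d = len(t)
    best = float("inf")
    for A in {1.0} | {1.0 + tk for tk in t}:          # the candidate set cal A
        K1 = [k for k in range(d) if 1.0 >= A]
        K2 = [k for k in range(d) if 1.0 + t[k] <= A and k not in K1]
        K3 = [k for k in range(d) if k not in K1 and k not in K2]
        val = (sum(gammas[k] for k in K1) + sum(gammas[k] for k in K2)
               + (sum(gammas[k] for k in K3)
                  - sum(t[k] * gammas[k] for k in K2)) / A)
        best = min(best, val)
    return best

def closed_form(gammas, t):
    """min_k (gamma + sum_{j : t_j <= t_k} (t_k - t_j) gamma_j) / (1 + t_k)."""
    g = sum(gammas)
    return min((g + sum((tk - tj) * gj for tj, gj in zip(t, gammas) if tj <= tk))
               / (1 + tk) for tk in t)

# Hypothetical sample values: two directions, each of dimension log 2 / log 3.
gammas, t = (0.6309, 0.6309), (0.4, 0.1)
print(wang_wu_exponent(gammas, t), closed_form(gammas, t))  # both ~ 1.0365
```

For $A = 1 + t_k$ one has ${\cal K}_2 = \{j : t_j \leq t_k\}$ and ${\cal K}_3 = \{j : t_j > t_k\}$, and a short rearrangement shows the two expressions coincide term by term, which the sketch confirms numerically.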
Recall that for each $1 \\leq i \\leq d$, the measure $\\mu_i$ is $\\delta_i$-Ahlfors regular with \n\\[\\delta_i = \\gamma_i = \\dimh{K_i}\\]\nand the measure $\\Mu$ is $\\delta$-Ahlfors regular with\n\\[\\delta = \\gamma = \\dimh{K}.\\]\n\nFor us, the appropriate indexing set is \n\\[{\\cal J} = \\{\\mathbf{i} \\in \\Lambda^*\\}.\\]\nWe define our \\emph{weight function} $\\beta: \\Lambda^* \\to \\R_{\\geq 0}$ by\n\\[\\beta_{\\mathbf{i}} = \\beta(\\mathbf{i}) = |\\mathbf{i}|.\\]\nNote that $\\beta$ satisfies the requirement that for any real number $M > 1$ the set $\\set{\\mathbf{i} \\in \\Lambda^*: \\beta_{\\mathbf{i}} < M}$ is finite. Next we define $\\rho: \\R_{\\geq 0} \\to \\R_{\\geq 0}$ by\n\\[\\rho(u) = \\psi(b^u).\\]\nSince $b^n\\psi(b^n)$ is monotonically decreasing by assumption, it follows that $\\psi(b^n)$ is monotonically decreasing and $\\psi(b^n) \\to 0$ as $n \\to \\infty$.\n\nFor a fixed $x = (x_1,\\dots,x_d) \\in K$, we define the resonant sets of interest as follows. For each $\\mathbf{i} \\in \\Lambda^*$, take\n\\[R_{\\mathbf{i}}^x=g_{\\mathbf{i}}(x).\\]\nCorrespondingly, for each $1 \\leq j \\leq d$, \n\\[R_{\\mathbf{i},j}^x = g_{\\mathbf{i}}(x)_j,\\]\nwhere $g_{\\mathbf{i}}(x) = (g_{\\mathbf{i}}(x)_1, \\dots, g_{\\mathbf{i}}(x)_d)$. So, $R_{\\mathbf{i},j}^x$ is the coordinate of $g_{\\mathbf{i}}(x)$ in the $j$th direction. In each coordinate direction, the $\\kappa$-scaling property is satisfied with $\\kappa = 0$, since our resonant sets are points.\n\nLet us fix $\\mathbf{a} = (1,1,\\dots,1) \\in \\R_{>0}^d$. Then, in this case, we note that \n\\[\\limsup_{\\substack{\\alpha \\in {\\cal J} \\\\ \\beta_\\alpha \\to \\infty}}{\\Delta(R_{\\alpha}^{x}, \\rho(\\beta_\\alpha)^{\\mathbf{a}})} = \\limsup_{\\substack{\\mathbf{i} \\in \\Lambda^* \\\\ |\\mathbf{i}| \\to \\infty}}{\\Delta(g_{\\mathbf{i}}(x), \\psi(b^{|\\mathbf{i}|})^{\\mathbf{a}})} = W(x,\\psi),\\]\nwhere $W(x,\\psi)$ is as defined in \\eqref{W x psi}. 
Moreover, it follows from Corollary \\ref{W x psi corollary} and Proposition \\ref{measures_are_equal} that $\\Mu(W(x,\\psi)) = \\Mu(K)$, since we assumed that $\\sum_{n=1}^{\\infty}{(b^n\\psi(b^n))^{\\gamma}} = \\infty$.\n\nNow suppose that $\\mathbf{t} = (t_1,\\dots,t_d) \\in \\R_{\\geq 0}^d$. Then, in our case, \n\\[W_{\\mathbf{a}}(\\mathbf{t}) = W(x,\\psi,\\mathbf{t}),\\]\nwhich is the set we are interested in. So, recalling that $\\kappa = 0$ in our setting, we may now apply Theorem \\ref{Theorem WW} directly to conclude that\n\\[\\dimh{W(x,\\psi,\\mathbf{t})} \\geq \\min_{A \\in {\\cal A}}\\left\\{\\sum_{k \\in {\\cal K}_1}{\\delta_k} + \\sum_{k \\in {\\cal K}_2}{\\delta_k}+\\frac{\\sum_{k \\in {\\cal K}_3}{\\delta_k}-\\sum_{k \\in {\\cal K}_2}{t_k \\delta_k}}{A}\\right\\} =: s(\\mathbf{t}),\\]\nwhere\n\\[{\\cal A} = \\set{1} \\cup \\set{1+t_i: 1 \\leq i \\leq d}\\]\nand for each $A \\in {\\cal A}$ the sets ${\\cal K}_1, {\\cal K}_2, {\\cal K}_3$ are defined as follows:\n\\[{\\cal K}_1 = \\set{k: 1 \\geq A}, \\quad {\\cal K}_2 = \\{k: 1+t_k \\leq A\\} \\setminus {\\cal K}_1, \\quad \\text{and} \\quad {\\cal K}_3 = \\set{1,\\dots,d} \\setminus ({\\cal K}_1 \\cup {\\cal K}_2). \\]\nNote that ${\\cal K}_1, {\\cal K}_2, {\\cal K}_3$ give a partition of $\\set{1,\\dots,d}$.\n\nTo obtain a neater expression for $s(\\mathbf{t})$, as given in the statement of Theorem \\ref{lower bound theorem}, we consider the possible cases which may arise. To this end, let us suppose, without loss of generality, that \n\\[0 \\leq t_{i_1} \\leq t_{i_2} \\leq \\dots \\leq t_{i_d}.\\]\n\n\\smallskip\n\\underline{{\\bf Case 1: $A=1$}}\n\nIf $A = 1$, then ${\\cal K}_1 = \\set{1,\\dots,d}$, ${\\cal K}_2 = \\emptyset$, and ${\\cal K}_3 = \\emptyset$. 
In this case, the ``dimension number'' simplifies to\n\\[\\sum_{j=1}^{d}{\\delta_{j}} = \\sum_{j=1}^{d}{\\dimh{K_j}} = \\sum_{j=1}^{d}{\\frac{\\log{N_j}}{\\log{b}}} = \\frac{\\log{\\left(\\prod_{j=1}^{d}N_{j}\\right)}}{\\log{b}} = \\frac{\\log{N}}{\\log{b}} = \\dimh{K}.\\]\n\n\\smallskip\n\\underline{{\\bf Case 2:} $A=1+t_{i_k}$ with $t_{i_k}>0$}\n\\nopagebreak\n\nSuppose $A= 1+t_{i_k}$ for some $1 \\leq k \\leq d$ and that $t_{i_k}>0$ (otherwise we are in Case 1). Let $k'$, with $k \\leq k' \\leq d$, be the maximal index such that $t_{i_k} = t_{i_{k'}}$. In this case, \n\\[{\\cal K}_1 = \\emptyset, \\qquad {\\cal K}_2 = \\set{i_1, \\dots, i_{k'}}, \\quad \\text{and} \\quad {\\cal K}_3 = \\set{i_{k'+1},\\dots,i_d}\\]\nand the ``dimension number'' is\n\\begin{align*}\n \\sum_{j=1}^{k'}{\\delta_{i_j}} + \\frac{\\sum_{j=k'+1}^{d}{\\delta_{i_j}}-\\sum_{j=1}^{k'}{t_{i_j}\\delta_{i_j}}}{1+t_{i_k}} &= \\frac{1}{1+t_{i_k}}\\left((1+t_{i_k})\\sum_{j=1}^{k'}{\\delta_{i_j}} + \\sum_{j=k'+1}^{d}{\\delta_{i_j}}-\\sum_{j=1}^{k'}{t_{i_j}\\delta_{i_j}}\\right) \\\\\n &= \\frac{1}{1+t_{i_k}}\\left(\\sum_{j=1}^{d}{\\delta_{i_j}}+\\sum_{j=1}^{k'}{\\delta_{i_j}(t_{i_k}-t_{i_j})}\\right) \\\\\n &= \\frac{1}{1+t_{i_k}}\\left(\\dimh{K} + \\sum_{j=1}^{k'}{(t_{i_k}-t_{i_j})\\dimh{K_{i_j}}}\\right).\n\\end{align*}\n\nPutting the two cases together, we conclude that \n\\[\\dimh{W(x,\\psi,\\mathbf{t})} \\geq \\min_{1 \\leq k \\leq d}\\left\\{\\frac{1}{1+t_k}\\left(\\gamma + \\sum_{j:t_j \\leq t_k}{(t_k-t_j)\\gamma_{j}}\\right)\\right\\},\\]\nas claimed. 
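As a quick numerical sanity check of the simplification just carried out (the dimensions $\gamma_j$ and exponents $t_j$ below are hypothetical illustrative values, not data from the paper), one can compare the raw minimum over $A \in \{1\} \cup \{1+t_i : 1 \leq i \leq d\}$ with the closed form obtained from Cases 1 and 2:

```python
import numpy as np

# Hypothetical illustrative data: gamma_j = dim_H K_j for two middle-third
# Cantor factors, and an exponent vector t (chosen only for this check).
gamma = [np.log(2) / np.log(3), np.log(2) / np.log(3)]
t = [0.0, 1.0]

def raw_min(gamma, t):
    """The 'dimension number' with kappa = 0 and a = (1,...,1), minimised
    over A in {1} union {1 + t_i}."""
    d = len(t)
    vals = []
    for A in {1.0} | {1.0 + ti for ti in t}:
        K1 = [k for k in range(d) if 1.0 >= A]          # here a_k = 1 for all k
        K2 = [k for k in range(d) if 1.0 + t[k] <= A and k not in K1]
        K3 = [k for k in range(d) if k not in K1 and k not in K2]
        vals.append(sum(gamma[k] for k in K1) + sum(gamma[k] for k in K2)
                    + (sum(gamma[k] for k in K3)
                       - sum(t[k] * gamma[k] for k in K2)) / A)
    return min(vals)

def closed_form(gamma, t):
    """The simplified expression from the two-case analysis."""
    d = len(t)
    G = sum(gamma)  # gamma = dim_H K
    return min((G + sum((t[k] - t[j]) * gamma[j]
                        for j in range(d) if t[j] <= t[k])) / (1.0 + t[k])
               for k in range(d))

assert abs(raw_min(gamma, t) - closed_form(gamma, t)) < 1e-12
```

The two expressions agree because, for $A = 1 + t_{i_k}$, multiplying through by $1 + t_{i_k}$ regroups the sums exactly as in Case 2, while the Case 1 value $\gamma$ is never smaller than the term at the minimal $t_k$.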
This completes the proof of Theorem \\ref{lower bound theorem}.\n\n\\section{Proof of Theorem \\ref{upper bound theorem}} \\label{upper bound section}\nLet \n\\begin{equation*}\n{\\cal A}_{n}(x, \\psi, \\mathbf{t}):= \\bigcup_{\\mathbf{i} \\in \\Lambda^{n}} \\Delta\\left(R^{x}_{\\mathbf{i}}, \\psi(b^{n})^{1+\\mathbf{t}}\\right)=\\bigcup_{\\mathbf{i} \\in \\Lambda^{n}}\\prod_{j=1}^{d}B\\left( R^{x}_{\\mathbf{i},j}, \\psi(b^{n})^{1+t_{j}} \\right).\n\\end{equation*}\nThen\n\\begin{equation*}\nW(x, \\psi, \\mathbf{t}) = \\limsup_{n \\to \\infty} {\\cal A}_{n}(x, \\psi, \\mathbf{t}) \\, .\n\\end{equation*}\nFor any $m \\in \\N$ we have that \n\\begin{equation} \\label{cover_1}\nW(x, \\psi, \\mathbf{t}) \\subset \\bigcup_{n \\geq m} {\\cal A}_{n}(x, \\psi, \\mathbf{t}) \\, .\n\\end{equation}\nObserve that ${\\cal A}_{n}(x, \\psi, \\mathbf{t})$ is a collection of $N^{n}=(b^{n})^{\\gamma}$ rectangles with sidelengths $2\\psi(b^{n})^{1+t_{j}}$ in each $j$th coordinate axis.\n\nFix some $1 \\leq k \\leq d$. Throughout suppose that $n$ is sufficiently large such that $\\psi(b^{n})<1$. Condition $(i)$ of Theorem \\ref{upper bound theorem} implies that $\\psi(b^{n})^{1+t_{k}} \\leq \\psi(b^{n}) \\to 0$ as $n \\to \\infty$, and so for any $\\rho>0$ there exists a sufficiently large positive integer $n_{0}(\\rho)$ such that \n\\begin{equation*}\n\\psi(b^{n})^{1+t_{k}} \\leq \\rho \\quad \\text{ for all } n \\geq n_{0}(\\rho).\n\\end{equation*}\n Suppose $n \\geq n_0(\\rho)$ and that for each $1 \\leq j \\leq d$ we can construct an efficient finite $\\psi(b^{n})^{1+t_{k}}$-cover ${\\cal B}_{j}(\\mathbf{i},k,\\rho)$ for $B\\left(R^{x}_{\\mathbf{i},j}, \\psi(b^{n})^{1+t_{j}}\\right)$ with cardinality $\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho)$ for each $\\mathbf{i} \\in \\Lambda^{n}$. 
Then we can construct a $\\psi(b^{n})^{1+t_{k}}$-cover of\n$\\Delta\\left(R^{x}_{\\mathbf{i}}, \\psi(b^{n})^{1+\\mathbf{t}}\\right)$ for each $\\mathbf{i} \\in \\Lambda^{n}$ with cardinality $\\prod_{j=1}^{d}\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho)$ by considering the Cartesian product of the individual covers ${\\cal B}_j(\\mathbf{i}, k, \\rho)$ for each $1 \\leq j \\leq d$. By \\eqref{cover_1}\n\\begin{equation} \\label{cover_2}\n \\bigcup_{n \\geq n_{0}(\\rho)} {\\cal A}_{n}(x, \\psi, \\mathbf{t})\n\\end{equation}\nis a cover of $W(x, \\psi, \\mathbf{t})$. So, supposing that we can find such covers ${\\cal B}_{j}(\\mathbf{i},k,\\rho)$, we have that\n\\begin{equation*}\n\\bigcup_{n \\geq n_{0}(\\rho)}\\bigcup_{\\mathbf{i} \\in \\Lambda^{n}}\\prod_{j=1}^{d}{\\cal B}_{j}(\\mathbf{i},k,\\rho)\n\\end{equation*}\nis a $\\psi(b^{n})^{1+t_{k}}$-cover of $W(x,\\psi,\\mathbf{t})$.\n \nTo calculate the values $\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho)$ we consider two possible cases depending on the fixed $1\\leq k\\leq d$. If $t_{j} \\geq t_{k}$, then $\\psi(b^{n})^{1+t_{j}} \\leq \\psi(b^{n})^{1+t_{k}}$, and so $B\\left(R^{x}_{\\mathbf{i},j}, \\psi(b^{n})^{1+t_{j}}\\right)$ can be covered by a bounded number of balls of radius $\\psi(b^{n})^{1+t_{k}}$; in this case $\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho) \\ll 1$. Now suppose that $t_{j} < t_{k}$. Let $u$ be the unique integer such that\n\\begin{equation} \\label{u_bound}\nb^{-u} \\leq 2\\psi(b^{n})^{1+t_{j}} < b^{-u+1},\n\\end{equation}\nand let $A$ denote the collection of words $\\mathbf{a} \\in \\Lambda_{j}^{u}$ for which $f_{\\mathbf{a}}([0,1]) \\cap B\\left(R^{x}_{\\mathbf{i},j}, \\psi(b^{n})^{1+t_{j}}\\right) \\neq \\emptyset$. Here the basic intervals $f_{\\mathbf{a}}([0,1])$, $\\mathbf{a} \\in \\Lambda_{j}^{u}$, have length $b^{-u}$ and, by \\eqref{u_bound},\n\\begin{equation*}\nb^{-u+1} > \\mathrm{diam} \\left( B\\left(R^{x}_{\\mathbf{i},j}, \\psi(b^{n})^{1+t_{j}}\\right) \\right).\n\\end{equation*}\n Observe that $f_{\\mathbf{b}}([0,1]) \\subset f_{\\mathbf{a}}([0,1])$ if and only if $\\mathbf{b}=\\mathbf{a}\\mathbf{c}$ for $\\mathbf{c} \\in \\Lambda^{*}_{j}:=\\bigcup_{n=0}^{\\infty}{\\Lambda_j^n}$, where we write $\\mathbf{a}\\mathbf{c}$ to denote the concatenation of the two words $\\mathbf{a}$ and $\\mathbf{c}$. Let $v \\geq 0$ be the unique integer such that \n\\begin{equation} \\label{v_bound}\n b^{-u-v} \\leq \\psi(b^{n})^{1+t_{k}} < b^{-u-v+1}.\n \\end{equation}\nNote that $v$ is well defined since $\\psi(b^{n})^{1+t_{k}} < 2\\psi(b^{n})^{1+t_{j}} < b^{-u+1}$, and so $v \\geq 0$. 
Then \n\\begin{equation*}\n\\bigcup_{\\substack{\\mathbf{a} \\in A \\\\ \\mathbf{c} \\in \\Lambda_{j}^{v}}} f_{\\mathbf{a} \\mathbf{c}}([0,1]) \\supset B\\left(R^{x}_{\\mathbf{i},j}, \\psi(b^{|\\mathbf{i}|})^{1+t_{j}}\\right).\n\\end{equation*}\nNotice that the left-hand side above gives rise to a $\\psi(b^n)^{1+t_k}$-cover of the right-hand side; let us denote this cover by ${\\cal B}_{j}(\\mathbf{i},k,\\rho)$. By the above arguments an easy upper bound on $\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho)$ is seen to be $2N_{j}^{v}$. Furthermore, by \\eqref{u_bound} and \\eqref{v_bound} we have that\n\\begin{equation*}\n\\#{\\cal B}_{j}(\\mathbf{i},k,\\rho) \\leq 2N_{j}^{v} = 2 (b^{v})^{\\gamma_{j}} \\overset{\\eqref{v_bound}}{\\leq} 2\\left(b^{1-u}\\psi(b^{n})^{-1-t_{k}}\\right)^{\\gamma_{j}} \\overset{\\eqref{u_bound}}{\\leq}2^{1+\\gamma_j}b^{\\gamma_{j}}\\psi(b^{n})^{(t_{j}-t_{k})\\gamma_{j}}.\n\\end{equation*}\n\nSumming over $1 \\leq j \\leq d$ and $\\mathbf{i} \\in \\Lambda^n$ for each $n \\geq n_0(\\rho)$ we see that\n \\begin{align}\n{\\cal H}^{s}_{\\rho}(W(x, \\psi, \\mathbf{t})) \\, & \\ll \\sum_{n \\geq n_0(\\rho)}{\\left(\\left(\\psi(b^n)^{1+t_k}\\right)^s \\times \\sum_{\\mathbf{i} \\in \\Lambda^n}{\\prod_{j=1}^{d}{\\#{\\cal B}_j(\\mathbf{i},k,\\rho)}}\\right)} \\nonumber \\\\ \n&\\ll \\sum_{n\\geq n_{0}(\\rho)} \\left(\\psi(b^{n})^{1+t_{k}}\\right)^{s}N^{n} \\prod_{j:t_{j} < t_{k}}{\\psi(b^{n})^{(t_{j}-t_{k})\\gamma_{j}}}. \\nonumber\n\\end{align}\nSo, for any\n\\[s > s_{0}:=\\frac{1}{1+t_{k}}\\left(\\gamma + \\sum_{j:t_{j} \\leq t_{k}}{(t_{k}-t_{j})\\gamma_{j}}\\right) > 0,\\]\nwe have\n\\[{\\cal H}^{s}_{\\rho}(W(x,\\psi,\\mathbf{t})) \\to 0 \\quad \\text{as } \\rho \\to 0.\\]\n This implies that $\\dimh W(x, \\psi, \\mathbf{t}) \\leq s_{0}$. The above argument holds for any initial choice of~$k$, and so we conclude that\n\\begin{equation*}\n\\dimh W(x, \\psi, \\mathbf{t}) \\leq \\min_{1 \\leq k \\leq d} \\left\\{ \\frac{1}{1+t_{k}}\\left(\\gamma+ \\sum_{j:t_j \\leq t_k}{(t_k-t_j)\\gamma_{j}}\\right)\\right\\},\n\\end{equation*}\nwhich completes the proof of Theorem \\ref{upper bound theorem}.\n\nThe main difficulty in extending our results to $\\dim(M) = n > 2$ is\nthe nature of the perturbation constructed. 
As in Contreras~\\cite{GonDomtd}, Lemmas 7.3 and 7.4, we need to solve a matrix equation in the Lie algebra of the symplectic group $Sp(n)$ of $2n \\times 2n$ symplectic matrices. The solubility of this equation is strongly related to the existence of repeated eigenvalues of the matrix $H_{pp}$ in local coordinates. The problem is that we cannot change this characteristic by adding a potential. Moreover, the equation involved is itself very complicated.\n\nHowever, we point out that the Kupka-Smale Theorem in dimension 2 is a strong result in the study of generic Lagrangians because it works below the critical level. Moreover, it can be combined with other results on the structure of Aubry-Mather sets in surfaces, such as Haefliger's theorem for Mather measures with rational homology in an orientable surface, which asserts that such measures are supported on periodic orbits, and results of Contreras and Iturriaga~\\cite{GoReConvex} on the nonexistence of conjugate points in the Aubry-Mather sets, in order to guarantee the hyperbolicity of the periodic orbits in this set.\n\n\\section{The Kupka-Smale Theorem}\n\\vspace{0.3cm}\n\nWe consider $(M,g)$ an $n$-dimensional smooth compact Riemannian manifold without boundary, $L: TM \\rightarrow \\mathbb{R}$ a convex and fiberwise superlinear Lagrangian on $M$ (see~\\cite{GoReMin} for definitions), and $H: T^{*} M \\rightarrow \\mathbb{R}$ the associated Hamiltonian obtained by the Legendre transform.\n\nIn the study of generic properties of Lagrangians we use the concept of genericity due to Ma\\~n\\'e. The idea is that the properties studied in Aubry-Mather theory become much stronger in this generic setting. 
For more details and applications see \\cite{GoReConvex}, \\cite{GoReMin}, \\cite{Jan} and \\cite{ManeGenProp}.\n\nWe will say that a property $\\mathcal{P}$ is generic, in Ma\\~n\\'e's sense, for $L$ if there exists a generic set $\\mathcal{O} \\subset C^{\\infty}(M;\\mathbb{R})$, in the $C^{\\infty}$ topology, such that, for all $f \\in \\mathcal{O}$, $L+f$ has the property $\\mathcal{P}$.\n\nConsider $E_{L}(x,v)=\\frac{\\partial L}{\\partial v} (x,v)\\cdot v - L(x,v)$, the energy function associated to $L$, and $\\varepsilon^{k}_{L}=\\{(x,v) \\in TM \\mid E_{L}(x,v)=k\\}$, the set of all points in the energy level $k$.\n\nLet $\\theta \\in TM$ be a periodic point of the Euler-Lagrange flow $\\phi_{t}^{L}: TM \\to TM$ with positive minimal period $T_{min}$. Fixing a local section $\\Sigma$ transversal to this flow, contained in the energy level of $\\theta$, there exists a smooth function $\\tau : U \\subset \\Sigma \\to \\mathbb{R}$ with $\\tau(\\theta)=T_{min}$, the time of first return to $\\Sigma$, such that the map $P(\\Sigma, \\theta): U \\to \\Sigma$ given by\n$$P(\\Sigma, \\theta)(\\theta')=\\phi_{\\tau(\\theta')}^{L}(\\theta')$$\nis a local diffeomorphism and $\\theta$ is a fixed point of $P(\\Sigma, \\theta)$. This map is called the Poincar\\'e first return map. We will say that $\\theta$ (or the orbit of $\\theta$) is a nondegenerate orbit of order $m \\geq 1$ for $L$ if\n$$Ker((d_{\\theta} P(\\Sigma, \\theta))^{m}-Id)=\\{0\\}.$$\nNondegeneracy of order $m$ means that $d_{\\theta} P(\\Sigma, \\theta)$ does not have $m$th roots of unity as eigenvalues.\n\nIf we are interested in the Hamiltonian viewpoint of the described Lagrangian dynamics, then we consider the Hamiltonian $H$ associated to $L$ by the Legendre transform in the velocity, that is,\n$$\\displaystyle H(x, p) = \\max_{v \\in T_{x} M} \\{pv - L (x, v) \\}.$$\nLet $X^{H}$ be the Hamiltonian field, which is the unique field $X^{H}$ on $T^{*} M$ such 
that $\\omega_{\\vartheta} (X^{H}(\\vartheta), \\xi) = d_{\\vartheta} H (\\xi)$ for all $\\xi \\in T_{\\vartheta} T^{*} M$. In the local coordinates $(x, p)$, $X^{H} = H_{p} \\, \\frac{\\partial \\,}{\\partial x} - H_{x} \\, \\frac{\\partial \\,}{\\partial p}$. We denote by $\\psi_{t}^{H}: T^{*} M \\to T^{*} M$ the flow on $T^{*}M$ associated with the Hamiltonian field $X^{H}: T^{*} M \\rightarrow TT^{*} M$. This flow preserves the canonical symplectic form $\\omega$. Since $L$ is a convex and superlinear Lagrangian, $H$ is a convex and superlinear Hamiltonian. Using the Legendre transform\n\\begin{center} $p=L_v (x, v)$ and $v=H_p (x, p)$, \\end{center}\nwe have that $H_{pp} (x, p)$ is positive definite on $T^{*}_{x} M$, uniformly in $x \\in M$. Observe that the Legendre transform associates the energy level $\\varepsilon_{k}^{L}$ with the level set $H^{- 1}(k)$ of $H$. From the conjugation between the Lagrangian and Hamiltonian viewpoints, the nondegeneracy of an orbit is equivalent in both senses.\n\nOne can prove that the restriction of the symplectic form $\\omega$ to $T_{\\vartheta} \\Sigma$ is a nondegenerate closed form; therefore the Poincar\\'e map is symplectic. Moreover,\n$$d_{\\vartheta} \\psi_{T_{min}}^{H} (\\xi) = - d_{\\vartheta} \\tau (\\xi) X^{H} + d_{\\vartheta} P (\\Sigma, \\vartheta)(\\xi), \\; \\forall \\xi \\in T_{\\vartheta} \\Sigma.$$ Therefore we have that \\begin{center} \\( d_{\\vartheta} \\psi_{T_{min}}^{H} \\mid_{T_{\\vartheta} H^{- 1} (k)} = \\left [ \\begin{matrix} 1 & d_{\\vartheta} \\tau \\\\ 0 & d_{\\vartheta} P (\\Sigma, \\vartheta)\n\\end{matrix} \\right], \\) \\end{center} and in general, for $T=m T_{min}$,\\\\ $d_{\\vartheta} \\psi_{T}^{H} (\\xi) = -d_{\\vartheta} \\tau \\left(\\sum_{i=0}^{m-1} d_{\\vartheta} P (\\Sigma, \\vartheta)^{i}\\right)(\\xi) X^{H} + d_{\\vartheta} P (\\Sigma, \\vartheta)^{m} (\\xi), \\; \\forall \\xi \\in T_{\\vartheta} 
\\Sigma$.\n\nSo, the condition that $\\vartheta$ is nondegenerate of order $m \\geq 1$ is equivalent to saying that the algebraic multiplicity of $\\lambda=1$ as an eigenvalue of $d_{\\vartheta} \\psi_{T}^{H} \\mid_{T_{\\vartheta} H^{- 1} (k)}$ is equal to 1, because the characteristic polynomials are related by\n$\\mathfrak{p}_{d_{\\vartheta} \\psi_{T}^{H}} (\\lambda) = (1- \\lambda) \\cdot \\mathfrak{p}_{d_{\\vartheta} P(\\Sigma, \\vartheta)^{m}} (\\lambda)$.\n\nOur main result is the Kupka-Smale Theorem, which parallels the Bumpy Metric Theorem proved by Anosov~\\cite{Anos}, but here in the Lagrangian setting.\n\nWe state our main result just for $\\dim(M)=2$ because we do not know the proof of Theorem~\\ref{PerturbacaoLocaldaOrbita} when $\\dim(M)=n \\geq 3$. This problem remains open.\n\n\\begin{theorem}\\label{KS}\\textnormal{(Kupka-Smale Theorem)}\n Suppose $\\dim(M)=2$. Let $L: TM \\to \\mathbb{R}$ be a convex and fiberwise superlinear Lagrangian on $M$. 
Then, for each $k \\in \\mathbb{R}$, the following property is generic for $L$:\\\\\n i) $\\varepsilon ^{k}_{L}$ is regular;\\\\\n ii) any periodic orbit in the level $\\varepsilon ^{k}_{L}$ is nondegenerate of all orders;\\\\\n iii) all heteroclinic intersections in this level are transversal.\n\\end{theorem}\n\\section{Proofs of the main results}\n\nGiven $k \\in \\mathbb{R}$, we define the set of regular potentials for $k$ as\n$$ \\mathcal{R}(k)=\\{ \\;f \\in C^{\\infty}(M;\\mathbb{R}) \\mid \\varepsilon ^{k}_{f}:=(H+f)^{-1}(k) \\;\\text{ is regular} \\; \\},$$\n where $H$ is the associated Hamiltonian.\n\n\\begin{lemma}\\label{compacidade}\nConsider $k \\in \\mathbb{R}$ and $f_{0} \\in C^{\\infty}(M;\\mathbb{R})$. For each sequence $f_{n} \\to f_{0}$ in the $C^{\\infty}(M;\\mathbb{R})$ topology and points $\\vartheta_{n}=(x_{n},p_{n}) \\in \\varepsilon ^{k}_{f_{n}}$, there exists a subsequence $\\vartheta_{n_{i}} \\to \\vartheta_{0} \\in \\varepsilon ^{k}_{f_{0}}$.\n\\end{lemma}\n\nIn fact, this lemma is an easy consequence of the compactness of the energy level.\n\n\\begin{theorem}\\label{PotRegAbertoDenso}\\textnormal{(Regularity of the energy level)}\n Given $k \\in \\mathbb{R}$, the subset $\\mathcal{R}(k)$ is open and dense in $C^{\\infty}(M;\\mathbb{R})$.\n\\end{theorem}\n\\begin{proof}\nThe openness of the set $\\mathcal{R}(k)$ follows directly from Lemma~\\ref{compacidade}. In order to obtain the density of $\\mathcal{R}(k)$ in $C^{\\infty}(M;\\mathbb{R})$, consider $f_{0} \\in C^{\\infty}(M;\\mathbb{R})$ and a neighborhood $\\mathcal{U}$ of $f_{0}$ containing a ball of radius $\\varepsilon > 0$ centered at $f_{0}$. We claim that $\\mathcal{U} \\cap \\mathcal{R}(k) \\neq \\varnothing$. 
Indeed, if this were not the case, we could obtain a contradiction by considering the Hamiltonians $H_{\\delta}:=H + (f_{0}+ \\delta)$ with $\\delta \\in (0,\\varepsilon)$.\n\\end{proof}\n\n{\\large \\textbf{The Nondegeneracy Lemma}}\\\\\n\nGiven $k \\in \\mathbb{R}$ and $0< a \\leq b \\in \\mathbb{R}$, we define the set $\\mathcal{G}_{k}^{a,b} \\subseteq \\mathcal{R}(k)$ as $\\mathcal{G}_{k}^{a,b}= \\{f \\in \\mathcal{R}(k) \\mid $ all periodic points $\\vartheta \\in (H+f)^{-1}(k)$ with $T_{min}(\\vartheta) \\leq a$ are nondegenerate of order $m$ for $H+f$, \\, $\\forall \\, m \\leq \\frac{b}{T_{min}} \\}$. Observe that if $a,a',b,b' \\in \\mathbb{R}$ satisfy $0 < a, a' < \\infty$, $a \\leq a'$ and $b \\leq b'$, then $\\mathcal{G}_{k}^{a',b'} \\subseteq \\mathcal{G}_{k}^{a,b}$.\nWe define $\\displaystyle \\mathcal{G}(k)=\\bigcap_{n=1}^{+ \\infty} \\mathcal{G}_{k}^{n,n}$. Then $\\mathcal{G}(k)$ is the set of all potentials $f \\in \\mathcal{R}(k)$ such that all periodic orbits with positive period in the energy level $(H+f)^{-1}(k)$ are nondegenerate of all orders for $H+f$.\n\\begin{lemma}\\label{NondegeneracyLemma}\\textnormal{(Nondegeneracy Lemma)}\nGiven $k \\in \\mathbb{R}$ and $0< c \\in \\mathbb{R}$, the set $\\mathcal{G}_{k}^{c,c}$ is open and dense in $C^{\\infty}(M;\\mathbb{R})$.\n\\end{lemma}\nIf $\\mathcal{G}(k)$ is generic, then generically in $L$ the energy level $k$ is regular and all periodic orbits in this level are nondegenerate of all orders for $H+f$. Thus, we must prove that $\\mathcal{G}_{k}^{c,c}$ is open in $C^{\\infty}(M;\\mathbb{R})$ for all $c \\in \\mathbb{R}_{+}$ and dense in $\\mathcal{R}(k)$, since Theorem~\\ref{PotRegAbertoDenso} implies that $\\mathcal{R}(k)$ is dense in $C^{\\infty}(M;\\mathbb{R})$. 
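Numerically, the nondegeneracy condition entering the definition of $\mathcal{G}_{k}^{a,b}$ is easy to test on the linearised Poincar\'e map. The matrices below are hypothetical toy examples, not derived from any particular Lagrangian:

```python
import numpy as np

def nondegenerate(dP, m, tol=1e-9):
    """An orbit with linearised Poincare map dP is nondegenerate of order m
    iff dP^m does not have 1 as an eigenvalue, i.e. no eigenvalue of dP is
    an m-th root of unity."""
    eigs = np.linalg.eigvals(np.linalg.matrix_power(dP, m))
    return all(abs(ev - 1.0) > tol for ev in eigs)

# Hyperbolic toy example: eigenvalues 2 and 1/2 (det = 1), nondegenerate of
# every order.
dP_hyp = np.array([[2.0, 0.0], [0.0, 0.5]])
assert all(nondegenerate(dP_hyp, m) for m in range(1, 20))

# Elliptic toy example: rotation by pi/3; its 6th power is the identity, so
# the orbit is degenerate of order 6 but nondegenerate of orders 1..5.
c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
dP_ell = np.array([[c, -s], [s, c]])
assert all(nondegenerate(dP_ell, m) for m in range(1, 6))
assert not nondegenerate(dP_ell, 6)

# The restriction of the linearised flow to the energy level is block
# triangular [[1, *], [0, dP]], so its spectrum is {1} together with the
# spectrum of dP, matching the factorisation of characteristic polynomials.
dpsi = np.block([[np.eye(1), np.array([[0.3, -0.1]])],
                 [np.zeros((2, 1)), dP_hyp]])
assert np.allclose(sorted(np.linalg.eigvals(dpsi).real), [0.5, 1.0, 2.0])
```

The last check mirrors the relation $\mathfrak{p}_{d\psi_T}(\lambda)=(1-\lambda)\,\mathfrak{p}_{dP^m}(\lambda)$ used earlier: the eigenvalue $1$ coming from the flow direction sits alongside the spectrum of the Poincar\'e map.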
\nThe proof of this lemma requires a sequence of technical constructions.\n\\begin{lemma}\\label{MinimoPerioLema}\nGiven $k \\in \\mathbb{R}$ and $f_{0} \\in \\mathcal{R}(k)$, there exist a neighborhood $\\mathcal{U}$ of $f_{0}$ in $C^{\\infty}(M;\\mathbb{R})$ and $0< \\alpha:=\\alpha(\\mathcal{U},f_{0})$ such that, for all $f \\in \\mathcal{U}$, the period of every periodic orbit of $H+f$ in the level $(H+f)^{-1}(k)$ is bounded below by $\\alpha$.\n\\end{lemma}\n\\begin{proof}\nIf we suppose that our claim is false, we get the existence of sequences $\\mathcal{U} \\ni f_{n} \\to f_{0}$, $T_{n} > 0$ with $T_{n} \\to 0$, and $\\vartheta_{n} \\in (H+f_{n})^{-1}(k)$ such that $\\psi_{T_{n}}^{H+f_{n}}(\\vartheta_{n})=\\vartheta_{n}$.\n\nFrom Lemma~\\ref{compacidade}, we can choose a subsequence $\\vartheta_{n_{i}} \\to \\vartheta_{0}$ such that\n$$d_{T^{*}M}(\\psi_{t}^{H+f_{0}}(\\vartheta_{0}), \\vartheta_{0})=0, \\; \\forall t>0,$$ that is, $\\vartheta_{0} \\in (H+f_{0})^{-1}(k)$ is a fixed point, contradicting the fact that $f_{0} \\in \\mathcal{R}(k)$.\n\\end{proof}\n\\begin{lemma}\\label{openlema}\nGiven $k \\in \\mathbb{R}$ and $a,b \\in \\mathbb{R}$ with $0 < a \\leq b < \\infty$, the set $\\mathcal{G}_{k}^{a,b}$ is open in $C^{\\infty}(M;\\mathbb{R})$.\n\\end{lemma}\n\\begin{proof}\nIf $\\mathcal{G}_{k}^{a,b} \\neq \\varnothing$, take $f_{0} \\in \\mathcal{G}_{k}^{a,b}$. If $f_{0}$ is not an interior point, we get the existence of a sequence $f_{n} \\to f_{0}$ with $f_{n} \\not\\in \\mathcal{G}_{k}^{a,b}$. 
Therefore, there exist $\\vartheta_{n} \\in (H+f_{n})^{-1}(k)$, $T_{n}=T_{min}(\\vartheta_{n}) \\in (0,a]$ and natural numbers $\\ell_{n} \\geq 1$ such that $\\ell_{n}T_{n} \\leq b$, $\\psi_{\\ell_{n}T_{n}}^{H+f_{n}}(\\vartheta_{n})= \\vartheta_{n}$ and $d_{\\vartheta_{n}} \\psi_{\\ell_{n}T_{n}}^{H+f_{n}}$ has $1$ as an eigenvalue with algebraic multiplicity bigger than 1.\nConsider $\\mathcal{U}_{0}$ and $0 < \\alpha:=\\alpha(\\mathcal{U}_{0},f_{0})$ given by Lemma~\\ref{MinimoPerioLema}, so that $T_{n} \\geq \\alpha$ and the integers $\\ell_{n} \\leq b/\\alpha$ assume finitely many values. Passing to a subsequence, Lemma~\\ref{compacidade} yields a periodic point $\\vartheta_{0} \\in (H+f_{0})^{-1}(k)$ with $T_{min}(\\vartheta_{0}) \\leq a$ which is degenerate of some order $m \\leq b/T_{min}(\\vartheta_{0})$, contradicting $f_{0} \\in \\mathcal{G}_{k}^{a,b}$.\n\\end{proof}\n\nLet $\\gamma \\subset (H+f)^{-1}(k)$ be a hyperbolic periodic orbit and denote by $W^{ss}(\\gamma)$ and $W^{us}(\\gamma)$ its stable and unstable sets. Given $a > 0$ we define the local stable and local unstable submanifolds of $\\gamma$ as being\n$$W^{s}_{a}(\\gamma)=\\{ \\theta \\in W^{ss}(\\gamma) \\mid d_{W^{ss}(\\gamma)}(\\theta, \\gamma) < a \\}$$\nand\n$$W^{u}_{a}(\\gamma)=\\{ \\theta \\in W^{us}(\\gamma) \\mid d_{W^{us}(\\gamma)}(\\theta, \\gamma) < a \\}.$$\nThey are Lagrangian submanifolds of $TM$.\n\nIn order to prove the Kupka-Smale Theorem, we define\n$\\mathcal{K}^{a}_{k}=\\{ f \\in \\mathcal{G}^{a,a}_{k} \\mid \\forall \\gamma_{1}, \\gamma_{2} \\subset (H+f)^{-1}(k)$ hyperbolic periodic orbits\\\\\nfor $L+f$ with period $\\leq a$ we have $W^{s}_{a}(\\gamma_{1}) \\pitchfork W^{u}_{a}(\\gamma_{2}) \\}$ and\n$\\displaystyle \\mathcal{K}(k)=\\bigcap_{n \\in \\mathbb{N}} \\mathcal{K}^{n}_{k}$.\nIt is clear that the properties (i), (ii) and (iii) of the Kupka-Smale Theorem are valid for all $f \\in \\mathcal{K}(k)$.\nThus, in order to prove the Kupka-Smale Theorem, we must show that $\\mathcal{K}(k)$ is generic, or equivalently, that each $\\mathcal{K}^{n}_{k}$ is an open and dense set (in the $C^{\\infty}$ topology).\nSince the local stable and unstable manifolds depend $C^{1}$ continuously, on compact parts, on the Lagrangian vector field, we get the openness of $\\mathcal{K}^{n}_{k}$, because transversality is an open property.\n\nThe next lemma can be found in Paternain~\\cite{PaternainGeodFlows}, Proposition 2.11, p.~34, for the geodesic case, but here we present a Lagrangian version.\n\n\\begin{lemma} 
\\label{PropTwistDoFibradoVertical} \\textnormal{(Twist Property of the vertical bundle)} Let $L$ be a smooth, convex and superlinear Lagrangian on $M$, $\\theta \\in TM$ and $F \\subset T_{\\theta} TM$ a Lagrangian subspace for the twist form in $T^{*}M$. Then the set\n$$\\mathcal{Z}_{F}=\\{ t \\in \\mathbb{R} \\mid d_{\\theta}\\phi_{t}^{L}(F) \\cap V(\\phi_{t}^{L}(\\theta)) \\neq \\{0\\} \\}$$ is discrete, where $V$ is the vertical bundle.\n\\end{lemma}\n\nThe next lemma allows us to make a local perturbation of a potential $f$ in such a way that the corresponding stable and unstable manifolds become transversal at a certain heteroclinic point $\\theta$. The density of $\\mathcal{K}^{a}_{k}$ follows from Lemma~\\ref{DensidadeTransvVariedEstInst}.\n\\begin{lemma}\\label{PertTransvVariedEstInst}\nLet $L$ be a Lagrangian and $f \\in C^{\\infty}(M,\\mathbb{R})$. Given hyperbolic periodic orbits $\\gamma_{1}, \\gamma_{2} \\subset (H-f)^{-1}(k)$ with period $\\leq a$, a point $\\theta \\in W_{a}^{u}(\\gamma_{2})$ such that the canonical projection $\\pi \\mid_{W_{a}^{u}(\\gamma_{2})}$ is a local diffeomorphism at $\\theta$, and neighborhoods $U, V$ of $\\theta$ in $TM$ such that $\\theta \\in V \\subset \\bar{V} \\subset U$, there exists $\\bar{f} \\in C^{\\infty}(M,\\mathbb{R})$ such that\\\\\ni) $\\bar{f}$ is $C^{\\infty}$ close to $f$;\\\\\nii) $\\mathop{\\rm supp}\\nolimits( f - \\bar{f}) \\subset \\pi (U)$; \\\\\niii) $\\gamma_{1}, \\gamma_{2} \\subset (H-\\bar{f})^{-1}(k)$ are hyperbolic periodic orbits for $\\bar{f}$, with the same periods as for $f$; \\\\\niv) the connected component of $W_{a}^{u}(\\gamma_{2}) \\cap V$ that contains $\\theta$ is transversal to $W^{s}(\\gamma_{1})$.\n\\end{lemma}\n\\begin{proof}\nInitially we consider the Hamiltonian $H-f$ associated to the Lagrangian $L+f$ by the Legendre transform $\\mathcal{L}$:\n$$ (H-f)(x,p)=\\sup_{v \\in T_{x}M} \\{ p(v)- (L+f)(x,v)\\} $$\nwith the canonical symplectic 
form of $T^{*}M$, $\\omega = \\sum dx_{i} \\wedge dp_{i}$.\n\nWe know, from the general theory of Hamiltonian systems, that $\\gamma_{1}, \\gamma_{2}$ are in correspondence, by the Legendre transform, with hyperbolic periodic orbits of the same periods, $\\tilde{\\gamma_{1}}, \\tilde{\\gamma_{2}} \\subset (H-f)^{-1}(k)$, for the Hamiltonian flow $\\psi_{t}^{H-f}$. Consider $\\tilde{W}^{u}(\\tilde{\\gamma_{2}})$ and $\\tilde{W}^{s}(\\tilde{\\gamma_{1}})$, respectively, the invariant submanifolds; they are Lagrangian submanifolds of $T^{*}M$, and $\\vartheta= \\mathcal{L}(\\theta) \\in \\tilde{W}^{u}(\\tilde{\\gamma_{2}})$.\nIf $\\pi:TM \\to M$ and $\\pi^{*}:T^{*}M \\to M$ are the canonical projections, then $d_{\\vartheta}\\pi^{*}= d_{\\theta}\\pi \\circ (d_{\\theta}\\mathcal{L})^{-1}$. Therefore the canonical projection $\\pi^{*} \\mid_{\\tilde{W}^{u}(\\tilde{\\gamma_{2}})}$ is a local diffeomorphism at $\\vartheta$. Moreover, $X^{H-f}(\\vartheta)=d_{\\theta}\\mathcal{L} \\circ X^{L+f}(\\theta) \\neq 0$. Thus we can prove the lemma in the Hamiltonian setting.\nBy \\cite{GonzaloPaternGenGeodFlowPositEntropy}, Lemma A3, we can find a neighborhood $U$ of $\\vartheta$, and $V \\subset U$ with $V \\subset \\bar{V} \\subset U$, and a Lagrangian submanifold $\\mathcal{N}$, $C^{\\infty}$ close to $\\tilde{W}^{u}(\\tilde{\\gamma_{2}})$, satisfying the following conditions:\\\\\n1) $\\vartheta \\in \\{ U \\backslash \\bar{V} \\}$;\\\\\n2) $\\mathcal{N} \\cap \\{ U \\backslash \\bar{V} \\} = \\tilde{W}^{u}(\\tilde{\\gamma_{2}}) \\cap \\{ U \\backslash \\bar{V} \\} \\subset (H - f)^{-1}(k)$;\\\\\n3) $\\mathcal{N} \\cap \\bar{V} \\pitchfork \\tilde{W}^{s}(\\tilde{\\gamma_{1}}) \\cap \\bar{V}$.\n\nAs $\\mathcal{N}$ is $C^{\\infty}$ close to $\\tilde{W}^{u}(\\tilde{\\gamma_{2}})$, the canonical projection $\\pi^{*} \\mid_{\\mathcal{N}}$ is a local diffeomorphism at $\\vartheta$. 
If $U$ is small enough, then $\\mathcal{N} \\cap U =\\{ (x, p(x)) \\mid x \\in \\pi^{*}(U)\\}$, that is, $\\mathcal{N}\\mid_{U}$ is the graph of a $C^{\\infty}$ map $p$. We define the following potential $\\bar{f} \\in C^{\\infty}(M,\\mathbb{R})$:\n$$\n\\bar{f}(x)= \\left \\{\n\\begin{array}{ll}\nf(x) & \\text{if } x \\in \\pi^{*}(U)^{C},\\\\\nH(x,p(x))-k & \\text{if } x \\in \\pi^{*}(U).\n\\end{array}\n\\right.\n$$\n\nObserve that $\\mathop{\\rm supp}\\nolimits( f - \\bar{f}) \\subset \\pi^{*}(U)$ and $\\vartheta \\not\\in \\mathop{\\rm supp}\\nolimits( f - \\bar{f})$; moreover, choosing $U$ small enough, we will have that $\\pi^{*}(U) \\cap \\{ \\tilde{\\gamma_{1}}, \\tilde{\\gamma_{2}} \\} =\\varnothing$, and therefore $\\tilde{\\gamma_{1}}, \\tilde{\\gamma_{2}}$ are still hyperbolic periodic orbits with the same periods for the Hamiltonian flow $\\psi_{t}^{H-\\bar{f}}$, contained in $(H-\\bar{f})^{-1}(k)$.\nWe denote by $\\bar{W}^{u}(\\tilde{\\gamma_{2}})$ and $\\bar{W}^{s}(\\tilde{\\gamma_{1}})$ the invariant manifolds for the new flow $\\psi_{t}^{H-\\bar{f}}$.\nClearly $(H-\\bar{f})(\\mathcal{N})=k$. By \\cite{GonzaloPaternGenGeodFlowPositEntropy}, Lemma A1, we have that $\\mathcal{N}$ is $\\psi_{t}^{H-\\bar{f}}$ invariant. Since $\\bar{W}^{u}(\\tilde{\\gamma_{2}})$ depends only on negative times, and the connected component of $\\bar{W}^{u}(\\tilde{\\gamma_{2}}) \\cap U$ that contains $\\vartheta$ and $\\mathcal{N}$ coincide in a neighborhood of $\\tilde{\\gamma_{2}}$ disjoint from $\\mathop{\\rm supp}\\nolimits( f - \\bar{f})$, we have $\\mathcal{N}=\\bar{W}^{u}(\\tilde{\\gamma_{2}})$.\nOn the other hand, as $\\bar{W}^{s}(\\tilde{\\gamma_{1}})$ depends only on positive times and $f=\\bar{f}$ on $\\{ U \\backslash \\bar{V} \\}$, we have $\\bar{W}^{s}(\\tilde{\\gamma_{1}}) = \\tilde{W}^{s}(\\tilde{\\gamma_{1}})$. 
Since $\\mathcal{N} \\cap \\bar{V} \\pitchfork \\tilde{W}^{s}(\\tilde{\\gamma_{1}}) \\cap \\bar{V}$, we have $\\bar{W}^{u}(\\tilde{\\gamma_{2}}) \\cap \\bar{V} \\pitchfork \\tilde{W}^{s}(\\tilde{\\gamma_{1}}) \\cap \\bar{V}$. By the initial considerations we choose $L+ \\bar{f}$. The lemma is proved.\n\\end{proof}\n\n\\begin{lemma}\\label{DensidadeTransvVariedEstInst}\nThe set $\\mathcal{K}^{n}_{k}$ is dense in $C^{\\infty}(M,\\mathbb{R})$ for all $n \\in \\mathbb{N}$.\n\\end{lemma}\n\\begin{proof}\nTake $f_{0} \\in C^{\\infty}(M,\\mathbb{R})$. By the Nondegeneracy Lemma we can find $f' \\in \\mathcal{G}_{k}^{n,n}$ arbitrarily close to $f_{0}$, since this set is open and dense. Thus, it is enough to find $f$ arbitrarily close to $f'$ such that, for any hyperbolic periodic orbits $\\gamma_{1}, \\gamma_{2} \\subset (H-f)^{-1}(k)$ of period $\\leq n$, we have $W_{n}^{s}( \\gamma_{1}) \\pitchfork W_{n}^{u}( \\gamma_{2})$. Then $f \\in \\mathcal{K}^{n}_{k}$ and $f$ is arbitrarily close to $f_{0}$.\nGiven hyperbolic periodic orbits $\\gamma_{1}, \\gamma_{2} \\subset (H+f')^{-1}(k)$ of period $\\leq n$, in order to conclude that $W_{n}^{s}( \\gamma_{1}) \\pitchfork W_{n}^{u}( \\gamma_{2})$ it suffices to prove that $W_{n}^{s}( \\gamma_{1}) \\pitchfork_{\\mathcal{D}} W_{n}^{u}( \\gamma_{2})$, where $\\mathcal{D}$ is a fundamental domain of $W^{u}( \\gamma_{2})$, because if $W^{s}( \\gamma_{1}) \\pitchfork_{\\theta} W^{u}( \\gamma_{2})$ then $W^{s}( \\gamma_{1}) \\pitchfork_{\\phi_{t}^{L+f'}(\\theta)} W^{u}( \\gamma_{2})$ for all $t$.\nTake $\\mathcal{D}$ a fundamental domain of $W^{u}( \\gamma_{2})$ and $\\theta \\in \\mathcal{D}$. By the inverse function theorem we know that $\\pi|_{W^{u}( \\gamma_{2})}$ is a local diffeomorphism at $\\theta$ if, and only if, $T_{\\theta}W^{u}( \\gamma_{2}) \\cap Ker\\, d_{\\theta} \\pi =\\{0\\}$. 
As $W^{u}( \\gamma_{2})$ is a Lagrangian\nsubmanifold we have, from Lemma~\\ref{PropTwistDoFibradoVertical},\nthat\n$\\{ t \\in \\mathbb{R} \\mid d_{\\theta}\\phi_{t}^{L+f'}(T_{\\theta}W^{u}( \\gamma_{2}))\n\\cap Ker \\, d_{\\phi_{t}^{L+f'}(\\theta)} \\pi \\neq \\{0\\} \\}$ is\ndiscrete. Then there exists $t(\\theta)$ arbitrarily close to $0$ such\nthat $\\pi|_{W^{u}( \\gamma_{2})}$ is a local diffeomorphism at\n$\\tilde{\\theta}=\\phi_{t(\\theta)}^{L+f'}(\\theta)$.\nAs $f' \\in \\mathcal{G}_{k}^{n,n}$, we can choose $t(\\theta)$ such\nthat $\\pi(\\tilde{\\theta})$ does not intersect any periodic orbit of\nperiod $\\leq n$. Fix a neighborhood $U$ of $\\tilde{\\theta}$, arbitrarily small,\nsuch that $\\pi(U)$ does not intersect any\nperiodic orbit of period $\\leq n$. Taking $V$, a neighborhood of\n$\\tilde{\\theta}$, such that $V \\subset \\bar{V} \\subset U$, from\nLemma~\\ref{PertTransvVariedEstInst} we can find $f_{1}=f'$ in\n$\\pi(U)^{C}$ such that the connected component of\n$W_{n}^{u}( \\gamma_{2}) \\cap \\bar{V}$ (for the new flow)\ncontains $\\tilde{\\theta}$ and is transversal to $W^{s}( \\gamma_{1})$\n(for the new flow). Taking $\\bar{V}_{1} =\n\\phi_{t'}^{L+f_{1}}(\\bar{V})$ we will have that $W_{n}^{u}(\n\\gamma_{2}) \\pitchfork W^{s}( \\gamma_{1})$.\nWe can cover the fundamental domain $\\mathcal{D}$ with a finite number\nof neighborhoods like $\\bar{V}_{1}$, say $W_{1},..., W_{s}$.\nSince transversality is an open condition and the local\nstable (unstable) manifold depends continuously on compact parts, we\ncan choose successively $W_{i+1}$ such that the transversality in\n$W_{j}, \\; j \\leq i$, is preserved. 
Thus,\n$W_{n}^{u}( \\gamma_{2}) \\pitchfork W_{n}^{s}( \\gamma_{1})$.\n\\end{proof}\n\n\\section{Proof of the Reduction Lemma} \\label{ReducLema}\n\nFor the proof of the Reduction Lemma\n(Lemma~\\ref{ReduLocalDaPrimParte}) we will use an induction\nmethod similar to the one used by Anosov~\\cite{Anos}, using transversality arguments\nas described in Abraham \\cite{Ab} and \\cite{AbRb}. In this direction, we recall a useful theorem, the Parametric Transversality Theorem of Abraham.\n\nRecall that if $\\mathcal{X}$ is a topological space, a subset $\\mathcal{R} \\subseteq\n\\mathcal{X}$ is said to be generic if $\\mathcal{R}$ is a countable\nintersection of open and dense sets. The space\n$\\mathcal{X}$ is a Baire space if every generic subset is dense. For additional results and definitions of Differential Topology, see~\\cite{AbRb}, \\cite{AbMardRat} or \\cite{KbMg}.\n\\begin{theorem}\\label{abraham}(\\cite{AbRb}, pg. 48, Abraham's Parametric Transversality Theorem)\nConsider $\\mathbb{X}$ a finite-dimensional manifold (with\nboundary or boundaryless), $\\mathbb{Y}$ a boundaryless manifold and $S\n\\subseteq \\mathbb{Y}$ a submanifold with finite codimension.\nConsider $\\mathcal{B}$ a boundaryless manifold, $\\rho : \\mathcal{B}\n\\rightarrow C^{\\infty}(\\mathbb{X};\\mathbb{Y})$ a smooth\nrepresentation and its evaluation $ev_{\\rho} : \\mathcal{B} \\times\n\\mathbb{X} \\rightarrow \\mathbb{Y}$. If $\\mathbb{X}$ and\n$\\mathcal{B}$ are Baire spaces and $ev_{\\rho} \\pitchfork S $ then\nthe set $\\mathcal{R}=\\{ \\varphi \\in \\mathcal{B} \\mid \\rho_{\\varphi}\n\\pitchfork S \\}$ is a generic subset (and hence dense) of\n$\\mathcal{B}$.\n\\end{theorem}\n\n\n\nGiven a Hamiltonian $H$ we define the associated normal field\nas the gradient field $Y^{H}=\\nabla H= H_{x} \\, \\frac{\\partial \\,}{\\partial x} + H_{p} \\, \\frac{\\partial \\,}{\\partial p}$ in $T^{*}M$. 
Observe that $JY^{H}=X^{H}$, where $J$ is the canonical symplectic matrix. We denote by $\\psi_{s}^{H^{\\perp}}: T^{*}M \\times (-\\varepsilon,\\varepsilon) \\rightarrow T^{*}M$ the flow in $T^{*}M$ generated by the normal field.\n Let us briefly describe the properties of the normal field. Initially\n observe that $\\omega_{\\vartheta} (Y^{H}, X^{H})= H_x^{2} +\n H_p^{2}, \\; \\forall \\vartheta \\in T^{*}M$. If $X^{H}(\\vartheta) \\neq\n 0$ then $0 \\neq Y^{H}(\\vartheta) \\not\\in T_{\\vartheta}H^{-1}(k)$, where\n $k=H(\\vartheta)$, that is, $Y^{H}$ points outward from the energy level.\n From the compactness of the energy level $H^{-1}(k)$\n we have that the flow of the normal field, restricted\n to $H^{-1}(k)$, is defined in $H^{-1}(k) \\times\n (-\\varepsilon(H),\\varepsilon(H))$, where $\\varepsilon(H) >0$ is uniform in\n $H^{-1}(k)$. Then the flow of the normal\n field is defined in a neighborhood of the energy level $H^{-1}(k)$.\n The action of the differential of the normal flow along an orbit is given by\n$$\n\\left \\{\n\\begin{array}{l}\n\\dot{Z}^{H}(s) = \\mathcal{H}(\\gamma(s)) Z^{H}(s) \\\\\nZ^{H}(0)=Y^{H}(\\vartheta)\n\\end{array}\n\\right .\n$$\nwhere $\\mathcal{H}$ is the Hessian matrix of $H$. 
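The identity $\\omega_{\\vartheta} (Y^{H}, X^{H})= H_x^{2} + H_p^{2}$ can be checked in one line; this is a sketch, assuming the convention $\\omega_{\\vartheta}(u,v)=\\langle Ju,v \\rangle$ implicit in $JY^{H}=X^{H}$:\n$$\n\\omega_{\\vartheta}(Y^{H}, X^{H})=\\langle JY^{H}, X^{H} \\rangle=\\langle X^{H}, X^{H} \\rangle=\\| \\nabla H \\|^{2}=H_x^{2} + H_p^{2}.\n$$\nIn particular $\\omega_{\\vartheta}(Y^{H}, X^{H})>0$ whenever $X^{H}(\\vartheta) \\neq 0$, which is what makes $Y^{H}$ transversal to the regular energy level.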
The main property of $Y^{H}$ is to establish a symplectic decomposition of $T_{\\vartheta}T^{*}M$, given by the next proposition.\n\n\\begin{proposition}\\label{BaseCampoNormal} Consider the normal field\n$Y^{H}$ associated to $H$ in the regular energy level $H^{-1}(k)$.\nThen, for each $\\vartheta \\in H^{-1}(k) $, periodic of period $T\n>0$, there exists a symplectic base $\\{\nu_{1},...,u_{n},u_{1}^{*},...,u_{n}^{*} \\}$ of\n$T_{\\vartheta}T^{*}M $ verifying\\\\\n(i) $u_{1}= X^{H} $ and $u_{1}^{*}=-\\frac{1}{H_x^{2} +\n H_p^{2}} Y^{H} $;\\\\\n(ii) $\\mathcal{W}_{1}= \\langle u_{1}, u_{1}^{*} \\rangle ^{\\perp}\n\\subset T_{\\vartheta} H^{-1}(k) $; in particular, $T_{\\vartheta}\nH^{-1}(k) = \\langle u_1 \\rangle \\oplus \\mathcal{W}_{1}$;\\\\\n(iii) If $\\Sigma \\subset H^{-1}(k)$ is a section transversal to the\nflow, such that $T_{\\vartheta} \\Sigma= \\mathcal{W}_{1}$,\nwe have that $d_{\\vartheta} P(\\Sigma,\\vartheta)\n\\mathcal{W}_{1} \\subseteq \\mathcal{W}_{1} $;\\\\\n(iv) If $T=m T_{min}(\\vartheta)$, then\n$d_{\\vartheta} \\psi_{T}^{H} u_{1}=u_{1}$ and $d_{\\vartheta}\n\\psi_{T}^{H} u_{1}^{*}=c u_{1} + u_{1}^{*} +\\xi$, $\\xi \\in\n\\mathcal{W}_{1}$.\nIn particular, $(d_{\\vartheta} \\psi_{T}^{H} -Id)\n(T_{\\vartheta}T^{*}M) \\subseteq \\langle u_{1} \\rangle \\oplus\n\\mathcal{W}_{1}=T_{\\vartheta} H^{-1}(k) $;\\\\\n(v) There exists $\\varepsilon >0$, uniform in $\\vartheta \\in H^{-1}(k)$, such\nthat the map $e_{\\vartheta} : (-\\varepsilon, \\varepsilon) \\to~\\mathbb{R}$ given by\n$e_{\\vartheta}(s)=H \\circ \\psi_{s}^{H^{\\perp}}(\\vartheta)$ is\ninjective with $e_{\\vartheta}(0)=k$.\n\\end{proposition}\n\n\nUsing the normal field we are able to construct a representation, in order to apply the Parametric Transversality Theorem.\n\n\\begin{proposition}\\label{repres}\n Given $k \\in \\mathbb{R}$, $0<a<b$, $\\varepsilon>0$ as in Proposition~\\ref{BaseCampoNormal}~(v),\n and the sets, $\\mathcal{U}_{f_0} \\subset \\mathcal{R}(k)$ a $C^{\\infty}$ neighborhood\n of 
$f_0$, $\\alpha=\\alpha(\\mathcal{U}_{f_0}) >0$\n as in Lemma~\\ref{MinimoPerioLema}, $\\mathbb{X}= T^{*}M \\times\n (a,b) \\times (-\\varepsilon, \\varepsilon)$ and $\\mathbb{Y}= T^{*}M \\times T^{*}M \\times\n \\mathbb{R}$.\n Then the map $\\rho : \\mathcal{U}_{f_0} \\rightarrow\n C^{\\infty}(\\mathbb{X};\\mathbb{Y})$ given by\n $\\rho(f):=\\rho_{f}$, where\n \\begin{center}$\\rho_{f}(\\vartheta, t, s)=(\\psi_{s}^{H+f^{\\perp}}\n (\\vartheta), \\psi_{t}^{H+f}(\\vartheta),\n (H+f)(\\vartheta)-k)$ \\end{center}\n is an injective representation (see~\\cite{Ab}~or~\\cite{AbRb}).\n\\end{proposition}\n\n\\begin{proof}\nInitially we point out that $\\rho$\nis well defined, since $\\mathbb{Y}$ has the structure of a product manifold. Writing\n$\\rho_{f}=(\\rho_{f}^{1},\\rho_{f}^{2},\\rho_{f}^{3})$, where\n $\\rho_{f}^{1}(\\vartheta, t,s)=\\psi_{s}^{H+f^{\\perp}}(\\vartheta) $,\n$\\rho_{f}^{2}(\\vartheta, t,s) = \\psi_{t}^{H+f}(\\vartheta) $,\n$\\rho_{f}^{3}(\\vartheta, t,s) = (H+f)(\\vartheta)-k$,\nwe see that each coordinate is a smooth function. Thus\n$\\rho_{f}\\in C^{\\infty}(\\mathbb{X};\\mathbb{Y})$.\nObserve that $\\rho$ is injective. Indeed, if $\\rho_{f_{1}}=\\rho_{f_{2}}$\nthen $(H+f_{1})(\\vartheta)-k =\n(H+f_{2})(\\vartheta)-k$, for all $\\vartheta \\in T^{*}M$, so $f_{1}(x) = f_{2}(x)\n$, for all $x \\in M$. Thus $f_{1} = f_{2}$.\nWe must verify that $ev_{\\rho} : \\mathcal{U}_{f_0} \\times \\mathbb{X}\n\\to \\mathbb{Y}$ is smooth. 
Since $\\mathcal{U}_{f_0} \\times\n\\mathbb{X}$ has the structure of a product\nmanifold, we can write\n$$d_{(f,x)} ev_{\\rho}:= \\frac{\\partial ev_{\\rho}}{\\partial f} (f,x)\n+ \\frac{\\partial ev_{\\rho}}{\\partial x} (f,x), \\; \\forall\n(f,x)\\in \\mathcal{U}_{f_0} \\times \\mathbb{X}.$$\nIt is clear that $\\frac{\\partial ev_{\\rho}}{\\partial x}\n(f,x)$ is always defined, as $ \\frac{\\partial ev_{\\rho}}{\\partial x} (f,x)=d_{x}\\rho_{f}$.\nMore precisely, given $(\\xi,\\dot{t},\\dot{s}) \\in T_{x=(\\vartheta,T,S)}\\mathbb{X}$\nwe have\n$$\\frac{\\partial ev_{\\rho}}{\\partial x}\n(f,x)(\\xi,\\dot{t},\\dot{s})=d_{x}\\rho_{f}(\\xi,\\dot{t},\\dot{s}) =\\frac{d \\,}{dr}\\rho_{f}(\\vartheta(r), t(r), s(r)) \\mid_{r=0}=$$\n$$=\\frac{d \\,}{dr} (\\psi_{s(r)}^{H+f^{\\perp}}(\\vartheta(r)), \\psi_{t(r)}^{H+f}(\\vartheta(r)),\n(H+f)(\\vartheta(r))-k)\\mid_{r=0}=$$\n$$( d_{\\vartheta} \\psi_{S}^{H+f^{\\perp}}(\\xi) +\n\\dot{s} Y^{H+f}(\\psi_{S}^{H+f^{\\perp}}(\\vartheta)), d_{\\vartheta}\\psi_{T}^{H+f}(\\xi) +\n\\dot{t} X^{H+f}(\\psi_{T}^{H+f}(\\vartheta)),$$ $$ d_{\\vartheta}(H+f)(\\xi) ).$$\nObserve that, if $S=0$ and $\\psi_{T}^{H+f}(\\vartheta)=\\vartheta$,\nthen $$\\frac{\\partial ev_{\\rho}}{\\partial x}\n(f,x)(\\xi,\\dot{t},\\dot{s})=( \\xi + \\dot{s} Y^{H+f}(\\vartheta), d_{\\vartheta}\\psi_{T}^{H+f}(\\xi) +\n\\dot{t} X^{H+f}(\\vartheta),$$ $$ d_{\\vartheta}(H+f)(\\xi) ).$$\nHowever we must show that $\\frac{\\partial ev_{\\rho}}{\\partial f}\n(f,x)$ is always defined. By the structure of $C^{\\infty}(M;\\mathbb{R})$\nwe know that this fact is equivalent to showing that there exist\n\\begin{center} $\\frac{d \\,}{dr} \\psi_{S}^{H + f + r h ^{\\perp}}(\\vartheta) \\mid_{r=0}$ ,\n$ \\frac{d \\,}{dr} \\psi_{T}^{H + f + r h }(\\vartheta)\n\\mid_{r=0}$ and $\\frac{d \\,}{dr} (H + f + r h ) (\\vartheta) \\mid_{r=0}$\n\\end{center}\nfor any $h \\in C^{\\infty}(M;\\mathbb{R})$ and\n$x=(\\vartheta,T, S) \\in \\mathbb{X}$. 
From some straightforward\ncalculations (see~\\cite{PhilHart}, pg. 46), we get\\\\\n$\\frac{d \\,}{dr} (H + f + r h ) (\\vartheta) \\mid_{r=0}=h\\circ\\pi(\\vartheta)$,\\\\\n$ \\frac{d \\,}{dr} \\psi_{T}^{H + f + r h }(\\vartheta)\n\\mid_{r=0}=Z_{h}(T)= d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt$ and\\\\\n$ \\frac{d \\,}{dr} \\psi_{S}^{H + f + r h ^{\\perp} }(\\vartheta)\n\\mid_{r=0}=Z^{h}(S)= d_{\\vartheta}\\psi_{S}^{H + f^{\\perp} }\n\\int_{0}^{S} (d_{\\vartheta}\\psi_{s}^{H + f^{\\perp} })^{-1}\nb^{h}(s)ds$.\\\\\nThus $\\displaystyle \\frac{\\partial ev_{\\rho}}{\\partial f} (f,x)\n(h)=$\n $$ ( d_{\\vartheta}\\psi_{S}^{H + f^{\\perp} }\n\\int_{0}^{S} (d_{\\vartheta}\\psi_{s}^{H + f^{\\perp} })^{-1}\nb^{h}(s)ds \\; , \\; d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt \\; ,$$ $$ \\; h\\circ\\pi(\\vartheta)).$$\n If $S=0$ and $\\psi_{T}^{H+f}(\\vartheta)=\\vartheta$,\n then $$\\displaystyle \\frac{\\partial ev_{\\rho}}{\\partial f} (f,x)\n(h)=( 0 , \\; d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt \\; , \\; h \\circ \\pi(\\vartheta)).$$\nThus $ev_{\\rho}$ is smooth\nand therefore $\\rho$ is a representation.\n\\end{proof}\n\nDefine the null diagonal $\\Delta_{0} \\subseteq\n\\mathbb{Y}$ given by $\\Delta_{0}=\\{(\\vartheta,\\vartheta,0) \\mid \\vartheta \\in T^{*}M\\}$.\nCombining Propositions~\\ref{BaseCampoNormal} and \\ref{repres} we get\n\n\\begin{lemma}\\label{PeriodPontoSaoNaoDegenTrans}\nWith the same notation as in Proposition~\\ref{repres}, we have that,\n$\\forall f \\in \\mathcal{U}_{f_0}$, with $T \\in (a,b)$ and $S \\in (-\\varepsilon, \\varepsilon)$,\n\\begin{itemize}\n\\item[i)] If $\\vartheta$ is a periodic orbit of positive period $T$ for $H+f$\nin the level $(H+f)^{-1}(k)$ then\n$\\rho_{f}(\\vartheta, T, 0) \\in \\Delta_{0}$.\nConversely, if $\\rho_{f}(\\vartheta, T, S) \\in 
\\Delta_{0}$\nthen $S=0$ and $\\vartheta$ is a periodic orbit of positive period $T$\nfor $H+f$ in the level $(H+f)^{-1}(k)$.\n\n\\item[ii)] If $\\vartheta$ is a periodic orbit of positive period $T$ for $H+f$\nin the level $(H+f)^{-1}(k)$, then $\\vartheta$ is\nnondegenerate of order $ m = \\frac{T}{T_{min}(\\vartheta)}$ if, and only if,\n$\\rho_{f} \\pitchfork _{(\\vartheta, T, 0)} \\Delta_{0}$.\n\\end{itemize}\n\\end{lemma}\n\nThe next corollary is an easy consequence of Lemma~\\ref{PeriodPontoSaoNaoDegenTrans}.\n\n\\begin{corollary}\\label{PeriodNivelSaoNaoDegenTrans}\nWith the same notation as in Lemma~\\ref{PeriodPontoSaoNaoDegenTrans} we have that,\ngiven $ f \\in \\mathcal{U}_{f_0}$, all periodic orbits $\\vartheta $, with positive period,\n$T_{min}(\\vartheta) \\in (a,b)$, in $(H+f)^{-1}(k)$,\nare nondegenerate for $H+f$ of order $m$, $\\forall m \\leq \\frac{b}{T_{min}}$,\nif, and only if, $\\rho_{f} \\pitchfork \\Delta_{0}$.\n\\end{corollary}\n\nThe previous corollary shows that the nondegeneracy of the periodic orbits\nof positive period in an interval $(a,b)$, for a given energy level $(H+f)^{-1}(k)$,\nis equivalent to the transversality of the map $\\rho_{f}$ with respect to the diagonal $\\Delta_{0}$. 
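The formula $Z_{h}(T)= d_{\\vartheta}\\psi_{T}^{H+f} \\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H+f})^{-1} b_{h}(t)dt$ used in the proof of Proposition~\\ref{repres} is the classical variation-of-constants solution of a linear inhomogeneous equation $\\dot{Z}=A(t)Z+b(t)$, $Z(0)=0$. The following is a minimal numerical sketch of that identity on a toy linear system; the matrix $A$ and the forcing $b$ are arbitrary choices for illustration, not the actual Hamiltonian linearization:

```python
import numpy as np

# Toy check of the variation-of-constants identity
#   Z(T) = Phi_T * integral_0^T Phi_t^{-1} b(t) dt,   Z(0) = 0,
# for Z' = A Z + b(t).  A and b are hypothetical choices.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = lambda t: np.array([np.sin(t), 1.0])

T, n = 1.0, 4000
h = T / n

def rk4(f, y, t, h):
    # one classical Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# fundamental solution Phi_t:  Phi' = A Phi, Phi_0 = I
Phi = [np.eye(2)]
for i in range(n):
    Phi.append(rk4(lambda t, X: A @ X, Phi[-1], i * h, h))

# direct integration of Z' = A Z + b(t), Z(0) = 0
Z = np.zeros(2)
for i in range(n):
    Z = rk4(lambda t, z: A @ z + b(t), Z, i * h, h)

# variation of constants, with the integral done by the trapezoid rule
vals = np.array([np.linalg.solve(Phi[i], b(i * h)) for i in range(n + 1)])
Z_formula = Phi[n] @ np.trapz(vals, dx=h, axis=0)

assert np.allclose(Z, Z_formula, atol=1e-6)
```

Here `Phi` plays the role of $d_{\\vartheta}\\psi_{t}^{H+f}$ and `b` the role of $b_{h}(t)$.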
\nThe key element for the proof of Lemma~\\ref{ReduLocalDaPrimParte} is\nthe next lemma.\n\\begin{lemma}\\label{evaluationtransverlema}\nConsider the representation $\\rho$ as in Proposition~\\ref{repres} and its evaluation\nin $\\mathcal{U}_{f_0}$, that is, $ev : \\mathcal{U}_{f_0} \\times \\mathbb{X} \\rightarrow\n\\mathbb{Y}$, given by $ev(f, \\vartheta, t , s)= \\rho_{f}(\\vartheta, t ,s)$.\nSuppose that $ev(f , \\vartheta, T , S) \\in \\Delta_{0}$. Then,\n\\begin{itemize}\n\\item[i)] If $\\vartheta$ is nondegenerate of order $m=\\frac{T}{T_{min}}$ for $H+f$\nthen $ev \\pitchfork _{(f, \\vartheta, T ,S)}\n\\Delta_{0}$;\n\\item[ii)] If $T = T_{min}(\\vartheta)$ then $ev \\pitchfork _{(f, \\vartheta, T, S )}\n\\Delta_{0}$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\n\ni) We know that $ev(f, \\vartheta, T ,S)=\\rho_{f}(\\vartheta, T ,S)$,\ntherefore $\\rho_{f}(\\vartheta, T, S )\\in \\Delta_{0}$, and $S=0$.\nIf $\\vartheta$ is nondegenerate of order $m=\\frac{T}{T_{min}}$\nfor $H+f$ then, from Lemma~\\ref{PeriodPontoSaoNaoDegenTrans}~(ii),\n$\\rho_{f} \\pitchfork _{(\\vartheta, T, 0 )} \\Delta_{0}$,\nand in particular $ev \\pitchfork _{(f, \\vartheta, T, 0)}\n\\Delta_{0}$.\\\\\n\nii) As $ev(f, \\vartheta, T , S) \\in \\Delta_{0}$, we must show that\n\\begin{center}\n$d_{(f, \\vartheta, T , 0)} ev \\left( T_{(f, \\vartheta, T, 0 )} (\\mathcal{U}_{f_0} \\times\n\\mathbb{X}) \\right)\n+ T_{(\\vartheta , \\vartheta, 0)} \\Delta_{0}= T_{(\\vartheta , \\vartheta, 0)}\\mathbb{Y} $.\n\\end{center}\nTake any $(u,v,w ) \\in T_{(\\vartheta,\\vartheta\n,0)}\\mathbb{Y}$, $(\\zeta, \\zeta, 0) \\in T_{(\\vartheta,\\vartheta\n,0)} \\Delta_{0} $ and \\\\ $(h, \\xi, \\dot{t}, \\dot{s}) \\in T_{(f, \\vartheta, T, 0 )}\n(\\mathcal{U}_{f_0} \\times \\mathbb{X})$. 
From Proposition~\\ref{repres} we have that\\\\\n$d_{(f,\\vartheta, T , 0)} ev_{\\rho}(h, \\xi, \\dot{t},\n\\dot{s})=$\\\\\n$=( 0 , \\; d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt \\; , \\; h\\circ\\pi(\\vartheta))\n+ \\\\\n( \\xi + \\dot{s} Y^{H+f}(\\vartheta), d_{\\vartheta}\\psi_{T}^{H+f}(\\xi) +\n\\dot{t} X^{H+f}(\\vartheta), d_{\\vartheta}(H+f)(\\xi) )$\\\\\n$=( \\xi + \\dot{s} Y^{H+f}(\\vartheta) , \\; d_{\\vartheta}\\psi_{T}^{H+f}(\\xi) +\n\\dot{t} X^{H+f}(\\vartheta)+ d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt \\; , \\;\\\\ h\\circ\\pi(\\vartheta) + d_{\\vartheta}(H+f)(\\xi) ).$\\\\\nTherefore $ev_{\\rho} \\pitchfork _{(f, \\vartheta, T, 0 )}\n\\Delta_{0}$ if, and only if, the system\n\\begin{center}\n\\(\n\\left \\{\n\\begin{array}{clcr}\nu=\\xi + \\dot{s} Y^{H+f}(\\vartheta) +\\zeta \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad\n\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad (1)\\\\\nv=d_{\\vartheta}\\psi_{T}^{H+f}(\\xi) +\n\\dot{t} X^{H+f}(\\vartheta)+ d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt + \\zeta \\; \\; \\, (2)\\\\\nw=h\\circ\\pi(\\vartheta) +d_{\\vartheta}(H+f)(\\xi) \\quad \\quad \\quad \\quad\n\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad(3)\n\\end{array}\n\\right.\n\\)\n\\end{center}\nhas a solution. Using the coordinates of Proposition~\\ref{BaseCampoNormal}\nand taking $\\zeta=u -\\xi - \\dot{s} Y^{H+f}(\\vartheta) $, we have that equation (2),\nrestricted to the set of solutions of (3),\n$$\\mathcal{V}_{w}=\\left\\{h \\in C^{\\infty}(M,\\mathbb{R}), \\, \\xi = a X^{H+f}(\\vartheta)\n + b_{0} Y^{H+f}(\\vartheta) + U \\; \\right\\},$$\nwhere\n$b_{0}=\\frac{w-h\\circ\\pi(\\vartheta)}{d_{\\vartheta}(H+f)(Y^{H+f}(\\vartheta))}$,\nhas the expression\\\\\n$(\\dot{t}+b_{0}c+\\tau_{0}) 
X^{H+f}(\\vartheta)+ (c^{*}- \\dot{s}) Y^{H+f}(\\vartheta) +\n(d_{\\vartheta} P(\\Sigma, \\vartheta)-Id)(U) + b_{0}U_{0} + \\\\\nd_{\\vartheta}\\psi_{T}^{H + f}\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt= \\tilde{a} X^{H+f}(\\vartheta) + \\tilde{b} Y^{H+f}(\\vartheta) +\n\\bar{U}$, where\n$$v-u= \\tilde{a} X^{H+f}(\\vartheta) + \\tilde{b} Y^{H+f}(\\vartheta) + \\bar{U},$$\n$$(d_{\\vartheta}\\psi_{T}^{H+f} -id)(Y^{H+f}(\\vartheta))=c\nX^{H+f}(\\vartheta) + c^{*} Y^{H+f}(\\vartheta) + U_{0}$$ and\n$$(d_{\\vartheta}\\psi_{T}^{H+f} -id)(U)=\\tau_{0}\nX^{H+f}(\\vartheta) + (d_{\\vartheta} P(\\Sigma,\n\\vartheta)-Id)(U).$$\nThat is, the system always has a solution if the expression\n\\begin{center}\n$(\\dot{t}+b_{0}c+\\tau_{0}) X^{H+f}(\\vartheta) + (c^{*}- \\dot{s}) Y^{H+f}(\\vartheta) +\n(d_{\\vartheta} P(\\Sigma, \\vartheta)-Id)(U) + b_{0}U_{0} +\nd_{\\vartheta}\\psi_{T}^{H + f}\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt$\n\\end{center}\nis surjective onto $T_{\\vartheta}T^{*}M$.\nSo, we must show that $$ d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt$$ generates a $(2n-2)$-dimensional space complementary\nto the space generated by $ X^{H+f}(\\vartheta)$ and $ Y^{H+f}(\\vartheta) $ in\n$T_{\\vartheta}T^{*}M$, which is the claim of the next lemma.\n\\end{proof}\n\n\\begin{lemma}\\label{sobrejetividade}\nWith the same notations as in Lemma~\\ref{evaluationtransverlema}, the map\n$ \\mathcal{B}: C^{\\infty}(M ; \\mathbb{R})$\n$\\to~T_{\\vartheta}T^{*}M$,\n\\begin{center}\n$\\displaystyle \\mathcal{B}(h)=- d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\nb_{h}(t)dt$\n\\end{center}\ngenerates a space complementary to\n$\\langle X^{H+f}(\\vartheta), Y^{H+f}(\\vartheta) \\rangle$.\n\\end{lemma}\n\\begin{proof}\nIn order to prove this claim it is enough to restrict the map $\\displaystyle\n\\mathcal{B}$ to a subspace suitably chosen in 
$C^{\\infty}(M ; \\mathbb{R})$.\nConsider $t_{0} \\in (0,T)$, $\\varepsilon >0$, and\ndenote by $\\mathcal{A}_{t_{0}}$ the space of smooth functions\n$$\\mathcal{A}_{t_{0}}=\\{\\alpha: \\mathbb{R} \\to \\mathbb{R}^{n-1} \\mid\n\\alpha(t)=(a_{1}(t),...,a_{n-1}(t)) \\neq 0, \\,\n\\forall t \\in (t_{0}-\\varepsilon, t_{0}+\\varepsilon) \\}. $$\nWe assume that $x(t) = \\pi(\\gamma(t))$,\nwhere $\\gamma(t)=\\psi^{H+f}_{t}(\\vartheta)$, has no self-intersections\nfor $t \\in (t_{0} -\\varepsilon, t_{0} + \\varepsilon)$, and that\n$H_{p}(\\gamma(t))=d\\pi X^{H+f}(\\gamma(t)) \\neq 0$. Then there exists a\nsystem of tubular coordinates $\\mathcal{V}$, in a neighborhood of\n$\\pi(\\gamma(t_{0}))$, $F: \\mathcal{V} \\to \\mathbb{R}^{n}$, such that\\\\\ni) $F(x)=(t,z_{1},...,z_{n-1})$;\\\\\nii) $F(x(t))=(t,0,...,0)$.\\\\\nObserve that, by construction, $d_{x(t)}F H_{p}(\\gamma(t))=(1,0,...,0)$.\nConsider a bump function $ \\sigma :M \\to \\mathbb{R}$ such that $\\mathop{\\rm supp}\\nolimits(\\sigma) \\subset \\mathcal{V}$, $\\sigma \\mid_{\\mathcal{V}_{0}} \\equiv 1$, with $x(t_{0}) \\in \\mathcal{V}_{0} \\subset \\mathcal{V}$.\nDefine the perturbation space\n$\\mathcal{F}_{t_{0}} \\subset C^{\\infty}(M ; \\mathbb{R})$ as being\n$$\\mathcal{F}_{t_{0}}=\\{h_{\\alpha,\\beta}(x)=\\tilde{h}_{\\alpha,\\beta}(x) \\cdot \\sigma(x)\n \\, \\mid \\,\\alpha, \\beta \\in \\mathcal{A}_{t_{0}}\\}, $$\nwhere $\\tilde{h}_{\\alpha,\\beta}(x)=\\langle \\alpha(t) \\delta_{t_{0}}(t) + \\beta(t)\n\\dot{\\delta}_{t_{0}}(t) \\, , \\, z \\rangle$, $F(x)=(t,z)$ and\n$\\delta_{t_{0}}$ is a smooth approximation of the Dirac delta at the point $t=t_{0}$.\nGiven $h_{\\alpha,\\beta} \\in \\mathcal{F}_{t_{0}}$ we get\n$d_{x}h_{\\alpha,\\beta}=d_{x} \\tilde{h}_{\\alpha,\\beta} \\cdot \\sigma(x) +\n\\tilde{h}_{\\alpha,\\beta} \\cdot d_{x} \\sigma(x)$. 
\nOn the other hand,\n$$d_{x} \\tilde{h}_{\\alpha,\\beta}=(\\langle \\frac{d \\,}{dt}(\\alpha(t)\n\\delta_{t_{0}}(t) + \\beta(t)\n\\dot{\\delta}_{t_{0}}(t)),z \\rangle, \\alpha(t) \\delta_{t_{0}}(t) + \\beta(t)\n\\dot{\\delta}_{t_{0}}(t)) d_{x}F. $$\nEvaluating at $x(t) $ and using that\n$h_{\\alpha,\\beta}(x(t))=0$ and $\\sigma(x(t))=1$, we get\n$$d_{x(t)}h_{\\alpha,\\beta}=(0, \\alpha(t) \\delta_{t_{0}}(t) + \\beta(t)\n\\dot{\\delta}_{t_{0}}(t)) d_{x(t)}F.$$\nIn particular,\n$$d_{x(t)}h_{\\alpha,\\beta} H_{p}(\\gamma(t))=(0, \\alpha(t) \\delta_{t_{0}}(t) + \\beta(t)\n\\dot{\\delta}_{t_{0}}(t)) d_{x(t)}F H_{p}(\\gamma(t))=0$$\nfor any $h_{\\alpha,\\beta} \\in \\mathcal{F}_{t_{0}}$.\n\nWe claim that\\\\\n1) $\\mathcal{B}(\\mathcal{F}_{t_{0}}) \\subset T(H+f)^{-1}(k)$;\\\\\n2) $X^{H+f}(\\vartheta) \\not\\in \\mathcal{B}(\\mathcal{F}_{t_{0}})$;\\\\\n3) $\\dim(\\mathcal{B}(\\mathcal{F}_{t_{0}}))=2n-2$;\\\\\n4) In particular, $\\mathcal{B}(\\mathcal{F}_{t_{0}})$\ngenerates a space complementary to $\\langle X^{H+f}(\\vartheta), $ $Y^{H+f}(\\vartheta)\n\\rangle $.\\\\\nIn order to get (1), consider\n $\\alpha_{0} = d_{x(t)}h_{\\alpha,0}=(0, \\alpha(t) \\delta_{t_{0}}(t))\nd_{x(t)}F = \\alpha_{1} \\delta_{t_{0}}(t) $ and\n $\\beta_{0} = d_{x(t)}h_{0,\\beta}=(0, \\beta(t) \\dot{\\delta}_{t_{0}}(t))\nd_{x(t)}F = \\beta_{1} \\dot{\\delta}_{t_{0}}(t) $;\nthen\n$$ \\mathcal{B}(h_{\\alpha})= d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\n\\left [\n\\begin{matrix}\n0 \\\\\n\\alpha_{0}\n\\end{matrix}\n\\right ]\ndt$$\n\\text{ and } $$\\mathcal{B}(h_{\\beta})= d_{\\vartheta}\\psi_{T}^{H + f}\n\\int_{0}^{T} (d_{\\vartheta}\\psi_{t}^{H + f })^{-1}\n\\left [\n\\begin{matrix}\n0 \\\\\n\\beta_{0}\n\\end{matrix}\n\\right ]\ndt.$$\nObserve that\n\\( \\omega( \\left [ \\begin{matrix} 0 \\\\ \\alpha_{0} \\end{matrix} \\right\n], X^{H+f}(\\gamma(t)))= \\alpha_{0} H_{p}(\\gamma(t))=0 \\)\n\\text{ and }\\\\\n\\(\\omega( \\left 
[\\begin{matrix} 0 \\\\ \\beta_{0} \\end{matrix} \\right\n], X^{H+f}(\\gamma(t)))= \\beta_{0} H_{p}(\\gamma(t))=0,\\)\ntherefore\n$\\left [ \\begin{matrix} 0 \\\\ \\alpha_{0} \\end{matrix} \\right\n]$ and $\\left [ \\begin{matrix} 0 \\\\ \\beta_{0} \\end{matrix} \\right\n]$ are in $T(H+f)^{-1}(k)$.\nThus $\\mathcal{B}(\\mathcal{F}_{t_{0}})\\subset\nT(H+f)^{-1}(k)$.\n\nIn order to get (2), we let $\\delta_{t_{0}} \\to \\delta_{Dirac}$\nand write $\\mathcal{B}(h_{\\alpha})$ and $\\mathcal{B}(h_{\\beta})$ as\n$$ \\displaystyle \\mathcal{B}(h_{\\alpha})= d_{\\vartheta}\\psi_{T}^{H + f}(d_{\\vartheta}\\psi_{t_{0}}^{H + f\n})^{-1} \\left [ \\begin{matrix} 0 \\\\ \\alpha_{1}(t_{0}) \\end{matrix} \\right].$$\nAnalogously,\n$$ \\displaystyle \\mathcal{B}(h_{\\beta})= d_{\\vartheta}\\psi_{T}^{H + f}(d_{\\vartheta}\\psi_{t_{0}}^{H + f\n})^{-1}\n\\left\\{ J \\mathcal{H}^{H+f}(t_{0}) \\left[ \\begin{matrix} 0 \\\\ \\beta_{1}(t_{0})\n\\end{matrix} \\right] - \\left[ \\begin{matrix} 0 \\\\ \\dot{\\beta}_{1}(t_{0})\n\\end{matrix} \\right]\n\\right\\}. $$\n\nIf we assume, by contradiction, that $X^{H+f}(\\gamma(t))=\\mathcal{B}(h_{\\alpha})\n+\\mathcal{B}(h_{\\beta})$ then\n$$X^{H+f}(\\gamma(t_{0}))= \\left\\{ \\left [ \\begin{matrix}\n0 \\\\ \\alpha_{1}(t_{0}) \\end{matrix} \\right] +\nJ \\mathcal{H}^{H+f}(t_{0}) \\left [ \\begin{matrix} 0 \\\\ \\beta_{1}(t_{0})\n\\end{matrix} \\right ] - \\left [ \\begin{matrix} 0 \\\\ \\dot{\\beta}_{1}(t_{0})\n\\end{matrix} \\right ] \\right\\}.$$\nFrom this equality we have\n$H_{p}(\\gamma(t_{0}))= H_{pp}(\\gamma(t_{0})) \\beta_{1}(t_{0})$.\nSince $H_{p}(\\gamma(t_{0})) \\neq 0$, we have $n-1$ linearly independent choices for $\\beta_{1}(t_{0})$. Indeed, $dF$ is an isomorphism and for all $\\beta (t_{0}) \\in \\mathbb{R}^{n-1}$ we have $ \\beta_{1}(t_{0})H_{p}(\\gamma(t_{0}))=(0,\\beta (t_{0}))dFH_{p}(\\gamma(t_{0}))=0$. 
Thus\n $0=\\beta_{1}(t_{0})H_{p}(\\gamma(t_{0}))=\\beta_{1}(t_{0})\nH_{pp}(\\gamma(t_{0})) \\beta_{1}(t_{0})$, contradicting the superlinearity of $H$.\nFor (3), observe that in (2) we got the limit representation\n$$\\mathcal{B}(h_{\\alpha})\n+\\mathcal{B}(h_{\\beta})= d_{\\vartheta}\\psi_{T}^{H + f}\n(d_{\\vartheta}\\psi_{t_{0}}^{H + f })^{-1} $$ $$ \\left\\{ \\left [ \\begin{matrix} 0 &\n\\quad H_{pp}(\\gamma(t_{0})) \\\\\nI_{n} & -H_{xp}(\\gamma(t_{0})) \\end{matrix} \\right]\n\\left [ \\begin{matrix} \\alpha_{1}(t_{0}) \\\\ \\beta_{1}(t_{0}) \\end{matrix} \\right ]-\n\\left [ \\begin{matrix} 0 \\\\ \\dot{\\beta}_{1}(t_{0}) \\end{matrix} \\right ] \\right\\}.$$\nFrom this equation we get $\\dim( \\{ \\mathcal{B}(h_{\\alpha})\n+\\mathcal{B}(h_{\\beta}) \\})= \\dim(\\{\\alpha_{1}(t_{0}),$ $ \\beta_{1}\n(t_{0})\\})=2n-2$, since $\\left [ \\begin{matrix} 0 & \\quad H_{pp}(\\gamma(t_{0})) \\\\\nI_{n} & -H_{xp}(\\gamma(t_{0})) \\end{matrix} \\right]$ is an isomorphism.\n\nFinally, we observe that claim (1) is true independently of the\napproximation $\\delta_{t_{0}}$ of the Dirac delta at the point $t=t_{0}$.\nMoreover, claims (2) and (3) remain true for $\\delta_{t_{0}}$ close enough to the Dirac delta.\n\\end{proof}\n\n\nThe next theorem allows us to make a local perturbation of a\nperiodic orbit nondegenerate of order $\\leq m$ in such a way that it becomes\nnondegenerate of order $\\leq 2m$. The proof is given only in dimension 2,\nand the $n$-dimensional case is still open. 
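Returning to the dimension count in the proof of Lemma~\\ref{sobrejetividade}: it rests on the block matrix with blocks $0$, $H_{pp}$, $I_{n}$, $-H_{xp}$ being an isomorphism, which only uses the invertibility of $H_{pp}$ (guaranteed by convexity). A minimal numerical sketch, with hypothetical blocks standing in for $H_{pp}$ and $H_{xp}$:

```python
import numpy as np

# The dimension count uses that M = [[0, Hpp], [I_n, -Hxp]] is invertible
# whenever Hpp is (convexity gives Hpp positive definite).  Hypothetical blocks:
rng = np.random.default_rng(0)
n = 3
R = rng.normal(size=(n, n))
Hpp = R @ R.T + n * np.eye(n)      # positive definite, stands in for H_pp
Hxp = rng.normal(size=(n, n))      # arbitrary, stands in for H_xp

M = np.block([[np.zeros((n, n)), Hpp],
              [np.eye(n), -Hxp]])

# swapping the two block rows shows |det M| = |det Hpp| != 0
assert abs(np.linalg.det(M)) > 1e-8
assert np.isclose(abs(np.linalg.det(M)), abs(np.linalg.det(Hpp)))
```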
Almost all parts of the argument\nremain valid in the $n$-dimensional case, but we do not know how to show\nthe surjectivity of the representation in this case.\n\n\n\\begin{theorem}\\label{PerturbacaoLocaldaOrbita}\\textnormal{(Local perturbation of periodic orbits)}\nLet $\\dim(M)=2$, $\\mathbb{H}:~T^{*}M \\to \\mathbb{R}$ be a smooth, convex\nand superlinear Hamiltonian and $\\gamma= \\{\n\\psi_{t}^{\\mathbb{H}}(\\vartheta_{0}) \\mid 0 \\leq t \\leq T \\}\n\\subseteq \\mathbb{H}^{-1}(k)$, where $\\mathbb{H}^{-1}(k)$ is a\nregular energy level, $T$ is the minimal period of $\\gamma$, and\n$\\gamma$ is isolated in this energy level, nondegenerate of order $\n\\leq m \\in \\mathbb{N}$. Then there exists a potential $f_{0} \\in\nC^{\\infty}(M,\\mathbb{R})$, arbitrarily close to zero, with\n$\\mathop{\\rm supp}\\nolimits(f_{0}) \\subset \\mathcal{U} \\subset M$, such that\n$\\gamma$ is nondegenerate of order $\\leq 2m$ for\n$\\mathbb{H}+f_{0}$. Moreover, $\\mathcal{U}$ can be chosen arbitrarily small.\n\\end{theorem}\n\n\\begin{proof}\nChoose $t_{0} \\in (0,T)$ and $E(0)=\\{e_{1}(0), e_{2}(0),\ne_{1}^{*}(0), e_{2}^{*}(0) \\}$ a symplectic frame at $\\gamma(t_{0})$\nwith $e_{1}(0)=X^{\\mathbb{H}}(\\gamma(t_{0}))$. 
Consider\n$$E(t)=\\{e_{1}(t), e_{2}(t), e_{1}^{*}(t), e_{2}^{*}(t) \\},$$ where\n$\\xi(t)= d_{\\gamma(t_{0})} \\psi_{-t}^{\\mathbb{H}} \\xi, \\quad \\forall\n\\xi \\in E(0)$, for $t \\in (0,r)$ with $r >0$ arbitrarily small.\n\nThen we can decompose the matrix of the differential of the flow, in\nthe base $E(0)$, $[d_{\\gamma(t_{0})}\n\\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)} \\in Sp(2)$, as\n$ [d_{\\gamma(t_{0})} \\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)} =\n[d_{\\gamma(t_{0}-r)} \\psi_{r}^{\\mathbb{H}}]_{E(0)}^{E(r)} \\cdot\n[d_{\\gamma(t_{0})} \\psi_{T-r}^{\\mathbb{H}}]_{E(r)}^{E(0)}.$\nBy construction we have that $[d_{\\gamma(t_{0}-r)}\n\\psi_{r}^{\\mathbb{H}}]_{E(0)}^{E(r)}=I_{4}$, therefore\n$$ \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad [d_{\\gamma(t_{0})} \\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)} =\n[d_{\\gamma(t_{0})} \\psi_{T-r}^{\\mathbb{H}}]_{E(r)}^{E(0)}. \\quad\\quad \\quad \\quad \\quad \\quad \\quad \\quad\n\\quad (1)$$\nConsider $U$ an arbitrarily small neighborhood of\n$\\gamma(t_{0})$ in $T^{*}M$ and $r$ small enough, in such a way that\n$\\hat{\\gamma}= \\{ \\psi_{t}^{\\mathbb{H}}(\\vartheta_{0}) \\mid t \\in\n(t_{0}-r,t_{0}) \\} \\subseteq \\mathbb{H}^{-1}(k) \\cap U$. 
Fix\n$t_{1} \\in (t_{0}-r,t_{0})$ and $V$ a neighborhood of\n$\\gamma(t_{1})$ in $T^{*}M$, small enough, in such a way that $V \\subset U$\nand that $\\gamma(t_{0}),\\gamma(t_{0}-r) \\not\\in \\overline{V}$.\nSuppose that we have $\\tilde{\\mathbb{H}}: T^{*}M \\to \\mathbb{R}$, a\nsmooth Hamiltonian representing a smooth perturbation of\n$\\mathbb{H}$, such that $\\mathop{\\rm supp}\\nolimits(\\tilde{\\mathbb{H}}-\\mathbb{H})\n\\subset V$, $jet_{1}(\\tilde{\\mathbb{H}})\\mid_{\\gamma(t)}=\njet_{1}(\\mathbb{H})\\mid_{\\gamma(t)}$, and\n$ [d_{\\gamma(t_{0})} \\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)} =\n[d_{\\gamma(t_{0}-r)} \\psi_{r}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(r)}\n\\cdot [d_{\\gamma(t_{0})}\n\\psi_{T-r}^{\\tilde{\\mathbb{H}}}]_{E(r)}^{E(0)}$.\nSince $\\mathop{\\rm supp}\\nolimits(\\tilde{\\mathbb{H}}-\\mathbb{H}) \\subset V$, we have\n$[d_{\\gamma(t_{0})} \\psi_{T-r}^{\\tilde{\\mathbb{H}}}]_{E(r)}^{E(0)}=\n[d_{\\gamma(t_{0})} \\psi_{T-r}^{\\mathbb{H}}]_{E(r)}^{E(0)}$. 
By (1),\n$ [d_{\\gamma(t_{0})}\n\\psi_{T-r}^{\\mathbb{H}}]_{E(r)}^{E(0)}= [d_{\\gamma(t_{0})}\n\\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)}$, so\n$$\\quad \\quad \\quad \\quad \\quad \\quad[d_{\\gamma(t_{0})} \\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)} =\n[d_{\\gamma(t_{0}-r)} \\psi_{r}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(r)}\n\\cdot [d_{\\gamma(t_{0})} \\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)}. \\quad \\quad \\quad\\quad\n\\quad (2)$$\nFrom the construction of the perturbation described above we have\nthat $[d_{\\gamma(t_{0})}\n\\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)}$ has the expression\n$$\n[d_{\\gamma(t_{0})}\n\\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)} =\n\\left [\n\\begin{matrix}\n1 & \\alpha & \\sigma & \\beta \\\\\n0 & A & \\hat{\\alpha} & B \\\\\n0 & 0 & 1 & 0 \\\\\n0 & C & \\hat{\\beta} & D \\\\\n\\end{matrix}\n\\right ]\n\\, \\in Sp(2),\n$$\nbecause the energy level at $\\gamma(t_{0})$ and\n$\\gamma(t_{0}-r)$ is the same for $\\tilde{\\mathbb{H}}$ and\n$\\mathbb{H}$, and thus it is invariant under the action of the flows of\nboth Hamiltonians. 
Let $\\hat{Sp(2)}$ be the following subgroup of $Sp(2)$,\n$$\n\\hat{Sp(2)} =\n\\left \\{\n\\left [\n\\begin{matrix}\n1 & \\alpha & \\sigma & \\beta \\\\\n0 & A & \\hat{\\alpha} & B \\\\\n0 & 0 & 1 & 0 \\\\\n0 & C & \\hat{\\beta} & D\n\\end{matrix}\n\\right ]\n\\in SL(4)\n\\,\n\\left |\n\\,\n\\left [\n\\begin{matrix}\n\\hat{\\alpha} \\\\\n\\hat{\\beta}\n\\end{matrix}\n\\right ]\n\\right.\n=\n\\left [\n\\begin{matrix}\nA & B \\\\\nC & D\n\\end{matrix}\n\\right ]\nJ\n\\left [\n\\begin{matrix}\n\\alpha & \\beta\n\\end{matrix}\n\\right ]^{*}\n\\right \\}\n$$\nand consider the projection $\\pi : \\hat{Sp(2)} \\to Sp(1)$ given by\n$$\n\\pi \\left(\\left [\n\\begin{matrix}\n1 & \\alpha & \\sigma & \\beta \\\\\n0 & A & \\hat{\\alpha} & B \\\\\n0 & 0 & 1 & 0 \\\\\n0 & C & \\hat{\\beta} & D\n\\end{matrix}\n\\right ]\n\\right )=\n\\left [\n\\begin{matrix}\nA & B \\\\\nC & D\n\\end{matrix}\n\\right ],\n$$\nwhich is a homomorphism of Lie groups.\nObserve that $$[d_{\\gamma(t_{0}-r)}\n\\psi_{r}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(r)}, [d_{\\gamma(t_{0})}\n\\psi_{T}^{\\mathbb{H}}]_{E(0)}^{E(0)} \\in \\hat{Sp(2)}$$ and\n$\\det([d_{\\gamma(t_{0})} \\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)} -\n\\lambda I_{4})=(\\lambda -1)^{2} \\det(\\pi([d_{\\gamma(t_{0})}\n\\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)}) - \\lambda I_{2}).\n$ Thus $\\gamma$ will be a nondegenerate\norbit of order $\\leq 2m$ for the perturbed Hamiltonian if\n$\\pi([d_{\\gamma(t_{0})}\n\\psi_{T}^{\\tilde{\\mathbb{H}}}]_{E(0)}^{E(0)})$ does not have roots\nof unity of order~$\\leq~2m$ as eigenvalues. 
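The determinant identity above can be checked by expanding $\\det(M-\\lambda I_{4})$ along the first column and then along the third row, which gives $(\\lambda -1)^{2} \\det(\\pi(M)-\\lambda I_{2})$ for any matrix $M$ of the displayed form. A small numerical sketch with hypothetical entries, the hat entries being given by the subgroup constraint:

```python
import numpy as np

# Check of det(M - lam*I_4) = (lam - 1)^2 * det(pi(M) - lam*I_2)
# for a hypothetical matrix M of the form above.
rng = np.random.default_rng(1)
A, B, C, D = rng.normal(size=4)
alpha, beta, sigma = rng.normal(size=3)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
P = np.array([[A, B], [C, D]])          # the projection pi(M)

# hat entries from the subgroup constraint [ahat; bhat] = P J [alpha; beta]^*
ahat, bhat = P @ J @ np.array([alpha, beta])

M = np.array([[1.0, alpha, sigma, beta],
              [0.0, A,     ahat,  B],
              [0.0, 0.0,   1.0,   0.0],
              [0.0, C,     bhat,  D]])

for lam in (0.3, -1.7, 2.5):
    lhs = np.linalg.det(M - lam * np.eye(4))
    rhs = (lam - 1.0) ** 2 * np.linalg.det(P - lam * np.eye(2))
    assert np.isclose(lhs, rhs, atol=1e-8)
```

In particular, the eigenvalues of $M$ other than the double eigenvalue $1$ are exactly those of $\\pi(M)$, which is what reduces the nondegeneracy question to $Sp(1)$.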
Since the symplectic matrices that are $2m$-elementary~\footnote{A symplectic
matrix is $N$-elementary if its principal eigenvalues (the
eigenvalues $\lambda$ such that $|\lambda| < 1$ or $Re(\lambda)
\geq 0$) are multiplicatively independent over the integers, that is,
if $\prod \lambda_{i}^{p_{i}}=1$, where $\sum p_{i} =N$, then $p_{i}=0,
\, \forall i $.} (in particular, such matrices do not have roots of unity of
order~$\leq~2m$ as eigenvalues) form an open and dense subset of
$Sp(1)$, it suffices to show that, for a suitable choice of the perturbation
space, the correspondence
$\tilde{\mathbb{H}} \to \pi([d_{\gamma(t_{0})}
\psi_{T}^{\tilde{\mathbb{H}}}]_{E(0)}^{E(0)})$, applied to a
neighborhood of $\mathbb{H}$, generates an open neighborhood of
$\pi([d_{\gamma(t_{0})} \psi_{T}^{\mathbb{H}}]_{E(0)}^{E(0)})$ in $Sp(1)$.
Using the homomorphism property,
$$ \pi( [d_{\gamma(t_{0})} \psi_{T}^{\tilde{\mathbb{H}}}]_{E(0)}^{E(0)}) =
\pi( [d_{\gamma(t_{0}-r)}
\psi_{r}^{\tilde{\mathbb{H}}}]_{E(0)}^{E(r)}) \cdot \pi(
[d_{\gamma(t_{0})} \psi_{T}^{\mathbb{H}}]_{E(0)}^{E(0)} ). $$
We define $\mathbb{X}_{0}=\pi( [d_{\gamma(t_{0})}
\psi_{T}^{\mathbb{H}}]_{E(0)}^{E(0)} )$ and
$\hat{S}(\tilde{\mathbb{H}})=\pi( [d_{\gamma(t_{0}-r)}
\psi_{r}^{\tilde{\mathbb{H}}}]_{E(0)}^{E(r)})$.
Since the right translation $\mathbb{X} \mapsto \mathbb{X} \cdot
\mathbb{X}_{0}$ is a diffeomorphism of the Lie group $Sp(1)$,
we need to show that the map $\tilde{\mathbb{H}} \to
\hat{S}(\tilde{\mathbb{H}})$, applied to a neighborhood of
$\mathbb{H}$, generates an open neighborhood of $I_{2}$ in $Sp(1)$.
In order to construct the perturbation space we consider a
local Lagrangian submanifold $\mathcal{N} \subset \mathbb{H}^{-1}(k)$
through $\gamma(t_{0})$.
We can reduce, if necessary, the
size of the neighborhood $U$ of $\gamma(t_{0})$ chosen previously
in such a way that $U$ admits the parameterization
$(x=(x_{1},x_{2}),p=(p_{1},p_{2})):U \to \mathbb{R}^{2+2}$ as in
\cite{GonzaloPaternGenGeodFlowPositEntropy}, Lemma A3, that is,\\
a) $\mathcal{N} \cap U = \{(x,0) \}$; \\
b) $\omega=dx \wedge dp$;\\
c) $X^{\mathbb{H}}|_{\mathcal{N} \cap U}= 1 \frac{\partial}{\partial
x_{1}}$.\\
In these coordinates we can see that $\hat{\gamma}=\{ (t,0,0,0) \mid
t \in (t_{0}-r,t_{0}) \}$. Consider the perturbation space
$$\hat{\mathcal{F}}=\{ f:T^{*}M \to \mathbb{R} \mid \mathop{\rm supp}\nolimits(f) \subset
\hat{W} \subset W \},$$ where $W = \mathcal{N} \cap V $ and $\hat{W}$
is a compact set contained in $W$ that contains $\gamma(t_{1})$ in its
interior. Observe that $\hat{\mathcal{F}}$ can be identified with
$C^{\infty}(\hat{W}, \mathbb{R})$; therefore we may regard
$\hat{\mathcal{F}}$ as a vector space. Consider the
following finite dimensional subspace $\mathcal{F} \subset
\hat{\mathcal{F}}$,
$$\mathcal{F}=\{ f \mid f(x,p)=\eta(x)(a \delta_{t_{1}}(x_{1}) + b
\delta_{t_{1}}'(x_{1}) + c
\delta_{t_{1}}''(x_{1}))\frac{1}{2}x_{2}^{2}, \, a,b,c \in
\mathbb{R} \},$$ where $\eta$ is a fixed function with
$\mathop{\rm supp}\nolimits(\eta) \subset \hat{W}$ and $\eta \equiv 1$ in some
neighborhood of $\gamma(t_{1})$ in $\mathcal{N}$. Moreover,
$\delta_{t_{1}}$ is a smooth approximation of the Dirac delta at
the point $t_{1}$. Now we are able to define the differentiable map
$S: \mathcal{F} \to Sp(1)$ given by
$S(f)=\hat{S}(\mathbb{H}+f)=\pi( [d_{\gamma(t_{0}-r)}
\psi_{r}^{\mathbb{H} +f }]_{E(0)}^{E(r)}) $. Observe that
$dim(\mathcal{F})=3=dim(Sp(1))$ and $S(0)=\hat{S}(\mathbb{H}+0)=
\pi( [d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H} +0 }]_{E(0)}^{E(r)})=
I_{2}$.
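Before computing the differential, it is worth recording why a perturbation $f \in \mathcal{F}$ does not move the orbit itself (this is the fact $jet_{1}(h)|_{\gamma}=0$ used below; the following is a direct check in the coordinates just introduced). Along $\hat{\gamma}$ we have $x_{2}=0$ and $p=0$, and every $h=\eta \tilde{h} \in \mathcal{F}$ is proportional to $x_{2}^{2}$, so
$$
h|_{\hat{\gamma}} = 0, \qquad
dh = \eta \, (a \delta_{t_{1}} + b \delta_{t_{1}}' + c \delta_{t_{1}}'') \, x_{2} \, dx_{2}
+ \frac{x_{2}^{2}}{2} \, d \big( \eta \, (a \delta_{t_{1}} + b \delta_{t_{1}}' + c \delta_{t_{1}}'') \big),
$$
and both terms vanish at $x_{2}=0$. Hence $\mathbb{H}+f$ and $\mathbb{H}$ have the same orbit $\gamma$, and the perturbation only changes the linearized flow along it.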
Thus we must show that
$$d_{0}\mathcal{F} :T_{0} \mathcal{F} \cong \mathcal{F} \to
T_{Id_{2 \times 2}}Sp(1) \cong sp(1)$$ is surjective.
Given $h \in \mathcal{F}$ we have
$
d_{0}\mathcal{F}(h)=
\pi( \frac{d}{dl} [d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H} +lh
}]_{E(0)}^{E(r)} |_{l=0} ).
$
Consider $\xi \in T_{\gamma(t_{0}-r)}T^{*}M$ and, for $t \in (0, r)$,
define $$\xi(t,l)=d_{\gamma(t_{0}-r)}
\psi_{t}^{\mathbb{H} +lh } \xi.$$ For fixed $l$ this defines a field
along $\gamma$ that satisfies the equation
$$
\left \{
\begin{array}{l}
\dot{\xi}(t,l) = J Hess(\mathbb{H}+lh)(\gamma(t)) \xi(t,l) \\
\xi(0,l)=\xi.
\end{array}
\right.
$$
Taking the derivative of the equation above with respect to $l$ and
using the commutativity of the derivatives we get
$$\frac{d}{dt}( \frac{d}{dl} \xi(t,l)|_{l=0}) = J Hess(h)
\xi(t,l)|_{l=0} + J Hess(\mathbb{H}) \frac{d}{dl} \xi(t,l)|_{l=0}.$$
Denote $\mathcal{H}= Hess(\mathbb{H})$, $\xi(t)=\xi(t,l)|_{l=0}$ and
$\mathbb{Y}(t)= \frac{d}{dl} \xi(t,l)|_{l=0}$; then
$$
\left \{
\begin{array}{clcr}
\dot{\mathbb{Y}}(t) =J \mathcal{H} \mathbb{Y}(t)+ J Hess(h) \xi(t)\\
\mathbb{Y}(0)=0.
\end{array}
\right.
$$
Applying the method of variation of constants and using
$$
\left \{
\begin{array}{l}
\dot{\xi}(t) = J \mathcal{H}(\gamma(t)) \xi(t) \\
\xi(0)=\xi,
\end{array}
\right.
$$
we get
$$\mathbb{Y}(r)=d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}}
\int_{0}^{r} d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}}
J Hess(h) d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \xi \, dt.$$
Remember that $\mathbb{Y}(r)= \frac{d}{dl} \xi(r,l)|_{l=0}=
\frac{d}{dl} d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H} +lh }(\xi)
|_{l=0}$, so
$$ \frac{d}{dl} d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H} +lh }
|_{l=0} =d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}}
\int_{0}^{r} d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}}
J Hess(h) d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \, dt.$$
From this calculation,
$$d_{0}\mathcal{F}(h)=\pi \left( [d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}}
\int_{0}^{r} d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}} J Hess(h)
d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \, dt]_{E(0)}^{E(r)} \right ). \quad (3)$$
In order to evaluate the expression (3) we need to compute $J Hess(h)$.
All the integrals will be computed with the Dirac delta itself rather
than with its approximations; the same conclusions hold for a
sufficiently good approximation.
Consider $\tilde{h}(x)=(a \delta_{t_{1}}(x_{1}) + b
\delta_{t_{1}}'(x_{1}) + c
\delta_{t_{1}}''(x_{1}))\frac{1}{2}x_{2}^{2}$ and
$h(x)=\eta(x) \tilde{h}(x)$; then
$d h= \eta \, d \tilde{h} + \tilde{h} \, d \eta$ and
$d^{2}h= \eta \, d^{2}\tilde{h} +d \tilde{h}^{*} d \eta + d \eta^{*} d \tilde{h} + \tilde{h} \, d^{2}
\eta$. Since $Hess(h)(\gamma)= d_{\gamma}^{2}h$ and
$jet_{1}(h)|_{\gamma}=0$, we have that $Hess(h)(\gamma)=\eta(\gamma)
d_{\gamma}^{2}\tilde{h}$.
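As a sanity check of the variation-of-constants step above, write $\Phi(t)=d_{\gamma(t_{0}-r)} \psi_{t}^{\mathbb{H}}$ for the linearized flow (a notation introduced only for this check), so that $\dot{\Phi}=J \mathcal{H} \Phi$, $\Phi(0)=I$ and $\xi(t)=\Phi(t) \xi$. Setting
$$
\mathbb{Y}(t)=\Phi(t) \int_{0}^{t} \Phi(s)^{-1} \, J Hess(h) \, \Phi(s) \, \xi \, ds,
$$
we get $\mathbb{Y}(0)=0$ and
$$
\dot{\mathbb{Y}}(t)= \dot{\Phi}(t) \int_{0}^{t} \Phi(s)^{-1} \, J Hess(h) \, \Phi(s) \, \xi \, ds
+ \Phi(t) \Phi(t)^{-1} \, J Hess(h) \, \Phi(t) \xi
= J \mathcal{H} \mathbb{Y}(t) + J Hess(h) \xi(t),
$$
which is exactly the linear equation satisfied by $\mathbb{Y}$; evaluating at $t=r$ and using $\Phi(s)^{-1}=d\psi_{-s}^{\mathbb{H}}$ along the orbit gives the displayed formula.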
Now,
$(d_{\gamma}^{2}\tilde{h})_{ij}= a \delta_{t_{1}}(t_{0}-r +t) + b \delta_{t_{1}}'(t_{0}-r +t)
+ c \delta_{t_{1}}''(t_{0}-r +t)$ if $ij=22$, and equal to 0 otherwise.
Taking the $x_{1}$-support of $\delta_{t_{1}}$ small enough, we can
assume that
$
J Hess(h)(\gamma)=
\hat{A} \delta_{t_{1}}(t_{0}-r +t) +
\hat{B} \delta_{t_{1}}'(t_{0}-r +t) +
\hat{C} \delta_{t_{1}}''(t_{0}-r +t),
$
where
$(\hat{A})_{ij}= -a $ if $ij=42$, and equal to 0 otherwise,
$(\hat{B})_{ij}= -b $ if $ij=42$, and equal to 0 otherwise, and
$(\hat{C})_{ij}= -c $ if $ij=42$, and equal to 0 otherwise.\\
Denote
\begin{align*}
\hat{I}_{1}= d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \int_{0}^{r}
d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}} \; \hat{A} \;
d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \; \delta_{t_{1}}(t_{0}-r +t) \, dt, \\
\hat{I}_{2}= d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \int_{0}^{r}
d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}} \; \hat{B} \;
d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \; \delta_{t_{1}}'(t_{0}-r +t) \, dt, \\
\hat{I}_{3}= d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \int_{0}^{r}
d_{\gamma(t_{0}-r)}\psi_{-t}^{\mathbb{H}} \; \hat{C} \;
d_{\gamma(t_{0}-r)}\psi_{t}^{\mathbb{H}} \; \delta_{t_{1}}''(t_{0}-r +t) \, dt.
\\
\end{align*}
Thus, using $\int g(t) \, \delta_{t_{1}}'(t_{0}-r+t) \, dt = -g'(t_{1}-t_{0}+r)$,
$\int g(t) \, \delta_{t_{1}}''(t_{0}-r+t) \, dt = g''(t_{1}-t_{0}+r)$ and
$\frac{d}{dt}\big( d\psi_{-t}^{\mathbb{H}} \, \hat{B} \,
d\psi_{t}^{\mathbb{H}} \big) = d\psi_{-t}^{\mathbb{H}} \, [\hat{B}, J
\mathcal{H}] \, d\psi_{t}^{\mathbb{H}}$,\\
$\hat{I}_{1} = d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \;
d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}} \; \hat{A}
\; d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}, $\\
$\hat{I}_{2} = - d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \;
d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}}
 \; [\hat{B}, J \mathcal{H}] \;
d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}},
$\\
and\\
$\hat{I}_{3}=d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \;
d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}} \;(
[[\hat{C}, J \mathcal{H}], J \mathcal{H}] + [\hat{C}, J
\dot{\mathcal{H}}] ) \; \\
d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}$.\\
Define $\mathcal{Z}= \hat{A} - [\hat{B}, J \mathcal{H}] +
[\hat{C}, J \dot{\mathcal{H}}] + [[\hat{C}, J \mathcal{H}], J
\mathcal{H}]$. Then

$$d_{0}\mathcal{F}(h)=\pi \left( [d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \;
d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}} \;
\mathcal{Z} \;
d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(0)}^{E(r)}
\right ).$$
Writing this matrix in the bases $E(0)$ and $E(r)$,
at each point of the curve, we get
$$[d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}} \;
d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}} \;
\mathcal{Z} \;
d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(0)}^{E(r)}=
[d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}}]_{E(0)}^{E(r)} \;
[d_{\gamma(t_{0}-r)}\psi_{-(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(r)}^{E(t_{0}-t_{1})}
\; [\mathcal{Z}]_{E(t_{0}-t_{1})}^{E(t_{0}-t_{1})} \;
[d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(t_{0}-t_{1})}^{E(r)}.$$
Moreover $[d_{\gamma(t_{0}-r)} \psi_{r}^{\mathbb{H}}]_{E(0)}^{E(r)}=
I_{4}$ and there exists a symplectic conjugation $\mathbb{G} \in
$Sp(2)$ between the basis $E(t_{0}-t_{1})$ and the
canonical symplectic
basis
$\left\lbrace \frac{\partial}{\partial x_{1}}(\gamma(t_{1})),
\frac{\partial}{\partial x_{2}}(\gamma(t_{1})), \right. $
$\left. \frac{\partial}{\partial p_{1}}(\gamma(t_{1})),
\frac{\partial}{\partial p_{2}}(\gamma(t_{1})) \right\rbrace $ such
that $[\mathcal{Z}]_{E(t_{0}-t_{1})}^{E(t_{0}-t_{1})} =
\mathbb{G}^{-1} \mathcal{Z} \mathbb{G}$. Thus
 $d_{0}\mathcal{F}(h)=$
$$\pi \left( \mathbb{G} [d_{\gamma(t_{0}-r)}
\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(r)}^{E(t_{0}-t_{1})} \right)
^{-1} \pi \left( \mathcal{Z}\right) \, \pi \left( \mathbb{G}
[d_{\gamma(t_{0}-r)}\psi_{(t_{1}-t_{0}+r)}^{\mathbb{H}}]_{E(r)}^{E(t_{0}-t_{1})}
\right),$$ that is, it suffices to show that the map $(a,b,c) \mapsto
\pi(\mathcal{Z})$ is surjective onto $sp(1)$. A simple calculation shows that
$
\pi(\mathcal{Z})=
\left [
\begin{matrix}
z_{11} & z_{12} \\
z_{21} & z_{22} \\
\end{matrix}
\right ]
$
where\\
$z_{11}=-b \mathbb{H}_{p_{2}p_{2}} +2 c \mathbb{H}_{p_{2}p_{2}}
\mathbb{H}_{x_{2}p_{2}} + \dot{\mathbb{H}}_{p_{2}p_{2}}$,\\
$z_{12}= 2c (\mathbb{H}_{p_{2}p_{2}})^{2}$,\\
$z_{21}=-a + 2 b \mathbb{H}_{x_{2}p_{2}}+ 2 c
\mathbb{H}_{p_{1}p_{2}} \mathbb{H}_{x_{1}x_{2}} + 2 c
\mathbb{H}_{p_{2}p_{2}}\mathbb{H}_{x_{2}x_{2}} -2 c
\mathbb{H}_{x_{2}p_{1}} \mathbb{H}_{x_{1}p_{2}}\\ -4 c
(\mathbb{H}_{x_{2}p_{2}})^{2} -2 c \dot{\mathbb{H}}_{x_{2}p_{2}}$,\\
$z_{22}=b \mathbb{H}_{p_{2}p_{2}} -2 c \mathbb{H}_{p_{2}p_{2}}
\mathbb{H}_{x_{2}p_{2}} - \dot{\mathbb{H}}_{p_{2}p_{2}}$.\\

Remember that
$
sp(1)=
\left \{
\left [
\begin{matrix}
B & C \\
A & -B \\
\end{matrix}
\right ] \; \Big| \; A,B,C \in
\mathbb{R}
\right \}
$
and $\mathbb{H}_{p_{2}p_{2}} \neq 0$. Since the correspondence
$(c,b,a) \mapsto (z_{12}, z_{11}, z_{21})$ is affine and triangular,
with the nonzero coefficients $2(\mathbb{H}_{p_{2}p_{2}})^{2}$,
$-\mathbb{H}_{p_{2}p_{2}}$ and $-1$ on the diagonal, we have the
surjectivity. Since $\dim(\mathcal{F})=\dim(Sp(1))$, the inverse
function theorem then implies that the image of $S$ contains an open
neighborhood of $I_{2}$ in $Sp(1)$, as required. In order to conclude the proof we must find a potential in $M$ adapted to this perturbation.
Consider $f \\in \\mathcal{F}$ arbitrarily close\nto zero such that $\\pi\n([d_{\\gamma(t_{0})}\\psi_{T}^{\\mathbb{H}+f}]_{E(0)}^{E(0)})$ is\nnondegenerate of order $\\leq 2m$. Let us remember that the\n$x$-support of $f$ is contained in $W$ which is an arbitrarily small\nneighborhood of $\\gamma(t_{1})$ in $\\mathcal{N}$.\nConsider $(\\hat{x},\\hat{p})$ the canonic symplectic coordinates in\n$\\gamma(t_{1})$, and $\\hat{\\pi} : T^{*}M \\to \\mathbb{R}$ given by\n$\\hat{\\pi}(\\hat{x},\\hat{p})=\\hat{x}$. As we are free to\ndislocate the point $t_{1}$ by a $\\varepsilon$ arbitrarily small, we\ncan use the twist property of the vertical fiber bundle as in\nLemma~\\ref{PropTwistDoFibradoVertical} to conclude that\n$\\hat{\\pi}|_{\\mathcal{N}}$ is a local diffeomorphism in\n$\\gamma(t_{1})$. Take a diffeomorphism $q:W~\\subset~\\mathcal{N}~\\to~M$ given by\n$q(x)=\\hat{x}$, where $(x,0)\\equiv (\\hat{x},\\hat{p})$ in\n$\\mathcal{N}$. Choose the potential\n$$f_{0}(\\hat{x})=\n \\left \\{\n \\begin{matrix}\n f(q^{-1}(\\hat{x})) \\quad \\quad x \\in \\hat{\\pi}(W)\\\\\n 0 \\quad \\quad \\quad \\quad \\quad \\quad x \\not\\in \\hat{\\pi}(W),\n \\end{matrix}\n \\right.\n$$\nby construction, $\\mathbb{H}(\\hat{x},\\hat{p}) + f_{0}(\\hat{x})$ has\nthe desired property. The lemma is proven.\n\\end{proof}\n\n\\textbf{Conjecture:}\\\\\n\\textit{If $\\dim(M)=n$ then $\\pi(\\mathcal{Z})$ is surjective in $sp(n-1)$.}\\\\\n\nThe main obstruction to prove this conjecture is that if $\\mathcal{Z}= \\hat{A} - [\\hat{B}, J \\mathcal{H}] +\n[\\hat{C}, J \\dot{\\mathcal{H}}] + [[\\hat{C}, J \\mathcal{H}], J\n\\mathcal{H}]$ then we need to solve equations like $UX+XU=D$, in the space of simetric $n-1 \\times n-1$. But it is well known (see \\cite{GonDomtd}) that, in our case, the solving of this type of equations requires additional hypothesis on the eigenvalues of $\\mathbb{H}_{p p }$, which are not generic in Ma\\~n\\'e's sense. 
On the other hand, our approach is essentially the only way to construct perturbations by potentials, so we hope that in the future we will be able to solve this equation in higher dimensions.


\begin{lemma}\label{a2a E denso Em aa}
Given $k \in \mathbb{R}$, $f_{0} \in \mathcal{R}(k)$,
$\mathcal{U}_{f_{0}} \subseteq \mathcal{R}$ a $C^{\infty}$ neighborhood of
$f_{0}$, $\alpha=\alpha(\mathcal{U}_{f_{0}})>0$ as in
Lemma~\ref{MinimoPerioLema}, and $a \in \mathbb{R}$ such that $0 < a <
\infty$ and $\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}} \neq
\varnothing$, we have that $\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$.
\end{lemma}
\begin{proof}
Take $f \in \mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$ and let $\mathcal{U}$ be an arbitrary
neighborhood of $f$. We must show that $\mathcal{U} \cap
(\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}) \neq \varnothing$.
From the definition of $\mathcal{G}_{k}^{a,a}$ we have that all periodic orbits
of $H+f$ in the level $k$ with minimal period $\leq
a$ are nondegenerate of order $m \leq \frac{a}{T_{min}}$.
Consider the map $\rho_{f}: T^{*}M \times (0,a) \times (-\varepsilon,\varepsilon)
\to T^{*}M \times T^{*}M \times \mathbb{R}$. From
Corollary~\ref{PeriodNivelSaoNaoDegenTrans} we have that
$\rho_{f} \pitchfork \Delta_{0}$.
Moreover, by Lemma~\ref{PeriodPontoSaoNaoDegenTrans},~(i),
$$\rho_{f}^{-1}(\Delta_{0})=\{(\vartheta, T, 0) \mid \vartheta
\in (H+f)^{-1}(k), \; T \in (0,a) ,
\psi_{T}^{H+f}(\vartheta)=\vartheta \}.$$
Observe that $\rho_{f}^{-1}(\Delta_{0}) \subset (H+f)^{-1}(k) \times
[0,a] \times \{0\}$, which is a compact set. As $\Delta_{0}$ is closed, we have that
$\rho_{f}^{-1}(\Delta_{0})$ is a submanifold of dimension 1 with a finite
number of connected components.
Since each periodic orbit
$\{\psi_{t}^{H+f}(\vartheta) \mid
t \in [0,T], \; (\vartheta, T, 0) \in \rho_{f}^{-1}(\Delta_{0})\}
\subset \rho_{f}^{-1}(\Delta_{0})$ is a connected component of
dimension 1, the number of distinct periodic orbits of
$H+f$ in the level $k$ with minimal period $\leq a$ is finite.
Denote by $\{\psi_{t}^{H+f}(\vartheta_{i}) \mid
t \in [0, T_{i} = T_{min}(\vartheta_{i})]\}$, with $(\vartheta_{i}, T_{i}, 0) \in
\rho_{f}^{-1}(\Delta_{0})$ and $i=1,\dots,N$, the $N$ periodic orbits of
$H+f$ in the level $k$, with their respective minimal periods.
From Theorem~\ref{PerturbacaoLocaldaOrbita} we can find a sum
of $N$ potentials $f_{0}=f_{1}+\dots+f_{N}$,
arbitrarily close to 0, such that all these orbits are nondegenerate of order $ \leq 2m$
for $(H+f)+f_{0}$. The claim is proven because $f + f_{0} \in \mathcal{U} \cap
(\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}})$.
\end{proof}


\begin{lemma}\label{transa2a}
With the same notation as in Lemma~\ref{a2a E denso Em aa}, if
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}} \neq \varnothing$,
then $ev_{\rho} : \mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}
\times T^{*}M \times (0,2a) \times (-\varepsilon,\varepsilon)
\to T^{*}M \times T^{*}M \times \mathbb{R}$ is transversal to $\Delta_{0}$.
\end{lemma}
\begin{proof}
Indeed, given $(f, \vartheta, T, S) \in
\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}
\times T^{*}M \times (0,2a) \times (-\varepsilon,\varepsilon)$,
if $ev(f, \vartheta , T, S) \not\in \Delta_{0}$
we are done. So we can assume that $ev(f, \vartheta, T, S)
\in \Delta_{0}$, that is, $\vartheta$ is a periodic orbit of
$H+f$ in the level $k$ with minimal period
$T_{min}=T_{min}(\vartheta)$, and $S=0$.

If $T=T_{min}$ then $ev \pitchfork _{(f, \vartheta, T, 0)}
\Delta_{0}$ by Lemma~\ref{evaluationtransverlema}, (ii).
On the
other hand, if $T =m T_{min}$ with $m \geq 2$, we have $ m \leq 2a/T_{min}$, that is,
$ T_{min} \leq 2a /m \leq a$, so $\vartheta$ is nondegenerate of order $m$,
because $f \in \mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$.
Thus $ev \pitchfork _{(f, \vartheta, T, S)} \Delta_{0}$ by
Lemma~\ref{evaluationtransverlema}, (i).
\end{proof}


\begin{lemma}\label{3meiosinter2a}
With the same notation as in Lemma~\ref{a2a E denso Em aa}, if
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}} \neq \varnothing$,
then $(\mathcal{G}_{k}^{3a/2,3a/2} \cap
\mathcal{U}_{f_{0}}) \cap (\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}})$
is dense in $\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$.
\end{lemma}

\begin{proof}
Consider $\mathcal{B}= \mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$,
which is a submanifold of $C^{\infty}(M;\mathbb{R})$ because it is open.
From Lemma~\ref{transa2a} we have that $ev_{\rho} : \mathcal{G}_{k}^{a,2a}
\cap \mathcal{U}_{f_{0}} \times T^{*}M \times (0,2a) \times (-\varepsilon,\varepsilon)
\to T^{*}M \times T^{*}M \times \mathbb{R}$ is transversal to $\Delta_{0}$.
Then Theorem~\ref{abraham} implies that $\mathfrak{R}=\{ f \in
\mathcal{G}_{k}^{a,2a} \mid \rho_{f} \pitchfork \Delta_{0} \}$ is a generic
subset of $\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$.
In
particular $\mathfrak{R}$ is dense in
$\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$.
We claim that $\mathfrak{R} \subset (\mathcal{G}_{k}^{3a/2,3a/2} \cap
\mathcal{U}_{f_{0}}) \cap (\mathcal{G}_{k}^{a,2a} \cap
\mathcal{U}_{f_{0}})$.
Indeed, take $f \in \mathfrak{R}$. By Corollary~\ref{PeriodNivelSaoNaoDegenTrans},
since $\rho_{f} \pitchfork \Delta_{0}$, all periodic orbits of the flow defined by $H+f$ in the energy level $k$
with minimal period $T_{min}$ are nondegenerate of order
$m \leq \frac{2a}{T_{min}}$.
If we have a periodic orbit of $H+f$ in the level $k$
with minimal period $T_{min} \leq 3a/2$, take
$m' \leq \frac{3a/2}{T_{min}} = \frac{2a}{(4/3) T_{min}} \leq
\frac{2a}{T_{min}}$; then this orbit is nondegenerate of order $m'$,
and in particular $f \in \mathcal{G}_{k}^{3a/2,3a/2} \cap \mathcal{U}_{f_{0}}$.
Therefore $(\mathcal{G}_{k}^{3a/2,3a/2}\cap \mathcal{U}_{f_{0}}) \cap
(\mathcal{G}_{k}^{a,2a}\cap \mathcal{U}_{f_{0}})$ is dense in
$\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$.
\end{proof}


\begin{lemma}\label{3a23a2 denso Em aa}
With the same notation as in Lemma~\ref{a2a E denso Em aa}, if
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}} \neq \varnothing$,
we have that $\mathcal{G}_{k}^{3a/2,3a/2} \cap
\mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$.
\end{lemma}
\begin{proof}
From Lemma~\ref{3meiosinter2a} we have that
$(\mathcal{G}_{k}^{3a/2,3a/2} \cap \mathcal{U}_{f_{0}}) \cap
(\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}})$ is dense in
$\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$, and from
Lemma~\ref{a2a E denso Em aa}, $\mathcal{G}_{k}^{a,2a} \cap \mathcal{U}_{f_{0}}$
is dense in $\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$; therefore
$\mathcal{G}_{k}^{3a/2,3a/2} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a}
\cap
\mathcal{U}_{f_{0}}$.
\end{proof}


{\large \textbf{ Proof of the Lemma~\ref{ReduLocalDaPrimParte}:}} \label{ProvaLemaReducao}


\begin{proof}
Let $k \in \mathbb{R}$, $f_{0} \in \mathcal{R}(k)$,
$\mathcal{U}_{f_{0}} \subseteq \mathcal{R}$ a $C^{\infty}$ neighborhood
of $f_{0}$, and $\alpha=\alpha(\mathcal{U}_{f_{0}})>0$ be as in
Lemma~\ref{MinimoPerioLema}. Take $c \in \mathbb{R}_{+}$; if $c <
\alpha$ then $\mathcal{G}_{k}^{c,c} \cap
\mathcal{U}_{f_{0}}=\mathcal{U}_{f_{0}}$ by Lemma~\ref{MinimoPerioLema}.
So we may assume that $c \geq \alpha$, and we fix $a$ with $\alpha > a >0$.

We claim that $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell}a,
(\frac{3}{2})^{\ell}a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$ for all $\ell
\in \mathbb{N}$. The proof is by induction on $\ell$.

For $\ell =1$ observe that $\mathcal{G}_{k}^{a,a} \cap
\mathcal{U}_{f_{0}}=\mathcal{U}_{f_{0}} \neq \varnothing $, because
$\alpha > a >0$. Therefore $\mathcal{G}_{k}^{\frac{3}{2} a,
\frac{3}{2} a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$ by
Lemma~\ref{3a23a2 denso Em aa}.

Suppose that $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell}a,
(\frac{3}{2})^{\ell}a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$, with $\ell
\geq 1$.
Then $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell}a,
(\frac{3}{2})^{\ell}a} \cap \mathcal{U}_{f_{0}} \neq \varnothing
$ by the density and, taking $a'=(\frac{3}{2})^{\ell}a$, we have
that $\mathcal{G}_{k}^{ \frac{3}{2} a' , \frac{3}{2} a'} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a',a'} \cap \mathcal{U}_{f_{0}}$ by Lemma~\ref{3a23a2 denso Em aa}.
So $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell+1}a,
(\frac{3}{2})^{\ell+1}a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$, concluding
the proof of the claim.

Consider $\ell_{0}$ such that $(\frac{3}{2})^{\ell_{0}}a >
c$. Then $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell_{0}}a,
(\frac{3}{2})^{\ell_{0}}a} \cap \mathcal{U}_{f_{0}} \subset
\mathcal{G}_{k}^{c,c} \cap \mathcal{U}_{f_{0}} \subset
\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}=\mathcal{U}_{f_{0}}$.
Since $\mathcal{G}_{k}^{(\frac{3}{2})^{\ell_{0}}a,
(\frac{3}{2})^{\ell_{0}}a} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{G}_{k}^{a,a} \cap \mathcal{U}_{f_{0}}$, we conclude that
$\mathcal{G}_{k}^{c,c} \cap \mathcal{U}_{f_{0}}$ is dense in
$\mathcal{U}_{f_{0}}$. The lemma is proven.
\end{proof}
\vskip 10mm



\vspace{0.3cm} \textbf{{\Large Acknowledgements }} \vspace{0.3cm}

I wish to thank Dr. Gonzalo Contreras for the invitation to work at CIMAT, where part of this work was done. I thank my thesis advisor, Dr. Artur O. Lopes, for several important remarks and corrections, and Patrick Z. A. for help in translating this manuscript. This work is part
of my PhD thesis in the Programa de P\'os-Gradua\c{c}\~ao em Matem\'atica - UFRGS.