\\section{Introduction}\n\nThe symmetric group on $N$ letters acts naturally on $\\mathbb{R}^{N}$ (for $N=2,3,\\ldots$) but not irreducibly, because the vector $\\left(\n1,1,\\ldots,1\\right) $ is f\\\/ixed. However the important basis consisting of\nnonsymmetric Jack polynomials is def\\\/ined for $N$ variables and does not behave\nwell under restriction to the orthogonal complement of $\\left( 1,1,\\ldots\n,1\\right) $, in general. In this paper we consider the one exception to this\nsituation, occurring when $N=4$. In this case there is a coordinate system,\nessentially the $4\\times4$ Hadamard matrix, which allows a dif\\\/ferent basis of\npolynomials, derived from the type-$B$ nonsymmetric Jack polynomials for the\nsubgroup $D_{3}$ of the octahedral group $B_{3}$. We will construct an\northogonal basis for the $L^{2}$-space of the measure%\n\\[\n\\prod_{1\\leq i<j\\leq4}\\left\\vert x_{i}-x_{j}\\right\\vert ^{2\\kappa}e^{-\\left\\vert x\\right\\vert ^{2}\/2}dx,\n\\]\nwhere $\\kappa>0$.\n\nWe will use the following notations: $\\mathbb{N}_{0}$ denotes the set of nonnegative integers; $\\mathbb{N}_{0}^{N}$ is the set of compositions (or multi-indices), if $\\alpha=\\left(\n\\alpha_{1},\\ldots,\\alpha_{N}\\right) \\in \\mathbb{N}_{0}^{N}$ then $\\left\\vert \\alpha\\right\\vert :=\\sum_{i=1}^{N}\\alpha_{i}$ and\nthe length of $\\alpha$ is $\\ell\\left( \\alpha\\right) :=\\max\\left\\{\ni:\\alpha_{i}>0\\right\\} $. Let $\\mathbb{N}_{0}^{N,+}$ denote the subset of partitions, that is, $\\lambda\\in\n\\mathbb{N}\n_{0}^{N}$ and $\\lambda_{i}\\geq\\lambda_{i+1}$ for $1\\leq i<N$. For $x\\in\\mathbb{R}^{N}$ let $x\\sigma_{ij}$ denote $x$ with the coordinates $x_{i}$, $x_{j}$ interchanged; the Dunkl operators and Cherednik--Dunkl operators for $\\mathcal{S}_{N}$ are%\n\\begin{gather*}\n\\mathcal{D}_{i}f\\left( x\\right) :=\\frac{\\partial}{\\partial x_{i}}f\\left( x\\right) +\\kappa\\sum_{j\\neq i}\\frac{f\\left( x\\right) -f\\left( x\\sigma_{ij}\\right) }{x_{i}-x_{j}},\\\\\n\\mathcal{U}_{i}f\\left( x\\right) :=\\mathcal{D}_{i}\\left( x_{i}f\\left( x\\right) \\right) -\\kappa\\sum_{1\\leq j<i}f\\left( x\\sigma_{ij}\\right) ,\n\\end{gather*}\nand there is a symmetric bilinear form $\\left\\langle \\cdot,\\cdot\\right\\rangle _{\\kappa}$ on polynomials for which these operators are self-adjoint and $\\left\\langle f,f\\right\\rangle _{\\kappa}>0$ when $f\\neq0$ and $\\kappa\\geq0$. The operators\n$\\mathcal{U}_{i}$ have the very useful property of acting as triangular\nmatrices on the monomial basis furnished with a certain partial order. 
However\nthe good properties depend completely on the use of $%\n\\mathbb{R}\n^{N}$ even though the group $\\mathcal{S}_{N}$ acts irreducibly on $\\left(\n1,1,\\ldots,1\\right) ^{\\bot}$. We suggest that an underlying necessity for the\nexistence of an analog of $\\left\\{ \\mathcal{U}_{i}\\right\\} $ for any\nref\\\/lection group $W$ is the existence of a $W$-orbit in which any two points\nare orthogonal or antipodal (as in the analysis of the hyperoctahedral group\n$B_{N}$). This generally does not hold for the action of $\\mathcal{S}_{N}$ on\n$\\left( 1,\\ldots,1\\right) ^{\\bot}$. We consider the exceptional case $N=4$\nand exploit the isomorphism between $\\mathcal{S}_{4}$ and the group of type\n$D_{3}$, that is, the subgroup of $B_{3}$ whose simple roots are $\\left(\n1,-1,0\\right)$, $\\left( 0,1,-1\\right)$, $\\left( 0,1,1\\right) $. We map these\nroot vectors to the simple roots $\\left( 0,1,-1,0\\right)$, $\\left(\n0,0,1,-1\\right)$, $\\left( 1,-1,0,0\\right) $ of $\\mathcal{S}_{4}$, in the same\norder. This leads to the linear isometry%\n\\begin{gather}\ny_{1} =\\tfrac{1}{2}\\left( x_{1}+x_{2}-x_{3}-x_{4}\\right), \\nonumber\\\\\ny_{2} =\\tfrac{1}{2}\\left( x_{1}-x_{2}+x_{3}-x_{4}\\right), \\nonumber\\\\\ny_{3} =\\tfrac{1}{2}\\left( x_{1}-x_{2}-x_{3}+x_{4}\\right), \\nonumber\\\\\ny_{0} =\\tfrac{1}{2}\\left( x_{1}+x_{2}+x_{3}+x_{4}\\right) .\\label{y2x}\n\\end{gather}\nConsider the group $D_{3}$ acting on $\\left( y_{1},y_{2},y_{3}\\right) $ and\nuse the type-$B_{3}$ Dunkl operators with the parameter $\\kappa^{\\prime}=0$\n(associated with the class of sign-changes $y_{i}\\mapsto-y_{i}$ which are not\nin $D_{3}$). Let~$\\sigma_{ij}$, $\\tau_{ij}$ denote the ref\\\/lections in\n$y_{i}-y_{j}=0$, $y_{i}+y_{j}=0$ respectively. 
Then for $i=1,2,3$ let%\n\\begin{gather*}\n\\mathcal{D}_{i}^{B}f\\left( y\\right) =\\frac{\\partial}{\\partial y_{i}%\n}f\\left( y\\right) +\\kappa\\sum_{j=1,j\\neq i}^{3}\\left( \\frac{f\\left(\ny\\right) -f\\left( y\\sigma_{ij}\\right) }{y_{i}-y_{j}}+\\frac{f\\left(\ny\\right) -f\\left( y\\tau_{ij}\\right) }{y_{i}+y_{j}}\\right) ,\\\\\n\\mathcal{U}_{i}^{B}f\\left( y\\right) =\\mathcal{D}_{i}^{B}\\left(\ny_{i}f\\left( y\\right) \\right) -\\kappa\\sum_{1\\leq j<i}\\left( f\\left( y\\sigma_{ij}\\right) +f\\left( y\\tau_{ij}\\right) \\right) .\n\\end{gather*}\n\n\\begin{definition}\nFor $\\alpha\\in\\mathbb{N}_{0}^{N}$ and $1\\leq i\\leq N$ let%\n\\begin{gather*}\nr\\left( \\alpha,i\\right) :=\\#\\left\\{ j:\\alpha_{j}>\\alpha_{i}\\right\\}\n+\\#\\left\\{ j:1\\leq j\\leq i,\\alpha_{j}=\\alpha_{i}\\right\\} ,\\\\\n\\xi_{i}\\left( \\alpha\\right) :=\\left( N-r\\left( \\alpha,i\\right)\n\\right) \\kappa+\\alpha_{i}+1.\n\\end{gather*}\n\\end{definition}\n\n\nClearly for a f\\\/ixed $\\alpha\\in\\mathbb{N}_{0}^{N}$ the values $\\left\\{\nr\\left( \\alpha,i\\right) :1\\leq i\\leq N\\right\\} $ consist of all of\n$\\left\\{ 1,\\ldots,N\\right\\} $; let $w$ be the inverse function of $i\\mapsto\nr\\left( \\alpha,i\\right) $ so that $w\\in\\mathcal{S}_{N}$, $r\\left(\n\\alpha,w\\left( i\\right) \\right) =i$ and $\\alpha^{+}=w\\alpha\\ $(note that\n$\\alpha\\in\\mathbb{N}_{0}^{N,+}$ if and only if $r\\left( \\alpha,i\\right) =i$\nfor all $i$). 
Then%\n\\[\n\\mathcal{U}_{i}x^{\\alpha}=\\xi_{i}\\left( \\alpha\\right) x^{\\alpha}%\n+q_{\\alpha,i}\\left( x\\right)\n\\]\nwhere $q_{\\alpha,i}\\left( x\\right) $ is a sum of terms $\\pm\\kappa x^{\\beta}$\nwith $\\alpha\\vartriangleright\\beta$.\n\n\\begin{definition}\nFor $\\alpha\\in\\mathbb{N}_{0}^{N}$, let$\\ \\zeta_{\\alpha}$ denote the $x$-monic\nsimultaneous eigenfunction (NSJP), that is, $\\mathcal{U}_{i}\\zeta_{\\alpha}%\n=\\xi_{i}\\left( \\alpha\\right) \\zeta_{\\alpha}$ for $1\\leq i\\leq N$ and\n\\[\n\\zeta_{\\alpha}=x^{\\alpha}+\\sum\\limits_{\\alpha\\vartriangleright\\beta}%\nA_{\\beta\\alpha}x^{\\beta},\n\\]\nwith coef\\\/f\\\/icients $A_{\\beta\\alpha}\\in\\mathbb{Q}\\left( \\kappa\\right) $,\nrational functions of $\\kappa$.\n\\end{definition}\n\n\nThere are norm formulae for the pairing $\\left\\langle \\cdot,\\cdot\\right\\rangle\n_{\\kappa}$.\nSuppose $\\alpha\\in\\mathbb{N}_{0}^{N}$ and $\\ell\\left( \\alpha\\right) =m$; the\n\\textit{Ferrers diagram} of $\\alpha$ is the set $\\left\\{ \\left(\ni,j\\right) :1\\leq i\\leq m,0\\leq j\\leq\\alpha_{i}\\right\\} .$ For each node\n$\\left( i,j\\right) $ with $1\\leq j\\leq\\alpha_{i}$ there are two special\nsubsets of the Ferrers diagram, the \\textit{arm} $\\left\\{ \\left( i,l\\right)\n:j<l\\leq\\alpha_{i}\\right\\} $ and the \\textit{leg} $\\left\\{ \\left( l,j\\right) :l>i,j\\leq\\alpha_{l}\\leq\\alpha_{i}\\right\\} \\cup\\left\\{ \\left(\nl,j-1\\right) :l<i,j\\leq\\alpha_{l}+1\\leq\\alpha_{i}\\right\\} $; the \\textit{leg-length} of the node is $\\#\\left\\{ l:l>i,j\\leq\\alpha_{l}\\leq\\alpha\n_{i}\\right\\} +\\#\\left\\{ l:l<i,j\\leq\\alpha_{l}+1\\leq\\alpha_{i}\\right\\} $. The Laguerre polynomial $L_{n}^{a}$ has parameter $a>-1$, and%\n\\[\nL_{n}^{a}\\left( t\\right) =\\frac{\\left( a+1\\right) _{n}}{n!}\\sum_{i=0}%\n^{n}\\frac{\\left( -n\\right) _{i}}{\\left( a+1\\right) _{i}}\\frac{t^{i}}{i!}.\n\\]\nThe result of applying $e^{-\\Delta_{B}\/2}$ to a polynomial $x_{E_{k}}%\n\\zeta_{\\alpha}\\left( y^{2}\\right) $ is a complicated expression involving\nsome generalized binomial coef\\\/f\\\/icients (see \\cite[Proposition~9.4.5]{DX}). 
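The Laguerre polynomials just defined are straightforward to evaluate from the series. The following short check is ours, not from the paper; it compares the series against the standard closed forms $L_{1}^{a}(t)=a+1-t$ and $L_{2}^{0}(t)=(t^{2}-4t+2)/2$:

```python
from math import factorial

def poch(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    p = 1.0
    for k in range(n):
        p *= x + k
    return p

def laguerre(n, a, t):
    """Laguerre polynomial L_n^a(t) from the hypergeometric series above."""
    s = sum(poch(-n, i) / poch(a + 1, i) * t**i / factorial(i)
            for i in range(n + 1))
    return poch(a + 1, n) / factorial(n) * s

# checks against closed forms: L_1^a(t) = a + 1 - t, L_2^0(t) = (t^2 - 4t + 2)/2
print(laguerre(1, 0.5, 2.0))   # 1.5 - 2.0 = -0.5
print(laguerre(2, 0.0, 1.0))   # (1 - 4 + 2)/2 = -0.5
```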
For the\nsymmetric cases $j_{\\lambda}\\left( y^{2}\\right) $ and $y_{1}y_{2}%\ny_{3}j_{\\lambda}\\left( y^{2}\\right) ,\\lambda\\in\n\\mathbb{N}\n_{0}^{3,+}$ these coef\\\/f\\\/icients were investigated by Lassalle \\cite{L} and\nOkoun\\-kov and Olshanski \\cite[equation~(3.2)]{OO}; in the latter paper there is an\nexplicit formula.\n\nFinally we can use our orthogonal basis to analyze a modif\\\/ication of the\ntype-$A$ quantum Calogero--Sutherland model with four particles on a line and\nharmonic conf\\\/inement. By resca\\-ling, the Hamiltonian (with exchange terms) can\nbe written as:\n\\[\n\\mathcal{H}=-\\Delta+\\frac{\\left\\vert x\\right\\vert ^{2}}{4}+2\\kappa\\sum_{1\\leq\ni<j\\leq4}\\frac{\\kappa-\\sigma_{ij}}{\\left( x_{i}-x_{j}\\right) ^{2}}.\n\\]\n\nA standard example of a Mercer kernel is $K(x,y)=\\exp\\left( -\\|x-y\\|^{2}\/\\sigma^{2}\\right) $, $\\sigma>0$ (Gaussian).\n\n\n\n\\begin{theorem} \\label{thm1}\nLet $K:{\\cal X} \\times {\\cal X} \\rightarrow \\mbox{I}\\!\\mbox{R}$ be a symmetric and positive definite function. Then there\nexists a Hilbert space of functions ${\\cal H}$ defined on ${\\cal X}$ admitting $K$ as a reproducing kernel.\nConversely, let ${\\cal H}$ be a Hilbert space of functions $f: {\\cal X} \\rightarrow \\mbox{I}\\!\\mbox{R}$ satisfying\n$\\forall x \\in {\\cal X}, \\exists \\kappa_x>0,$ such that $|f(x)| \\le \\kappa_x \\|f\\|_{\\cal H},\n\\quad \\forall f \\in {\\cal H}. $\nThen ${\\cal H}$ has a reproducing kernel $K$.\n\\end{theorem}\n\n\n\\begin{theorem}\\label{thm4}\n Let $K(x,y)$ be a positive definite kernel on a compact domain or a manifold $X$. Then there exists a Hilbert\nspace $\\mathcal{F}$ and a function $\\Phi: X \\rightarrow \\mathcal{F}$ such that\n$$K(x,y)= \\langle \\Phi(x), \\Phi(y) \\rangle_{\\mathcal{F}} \\quad \\mbox{for} \\quad x,y \\in X.$$\n $\\Phi$ is called a feature map, and $\\mathcal{F}$ a feature space\\footnote{The dimension of the feature space can be infinite, for example in the case of the Gaussian kernel.}.\n\\end{theorem}\n\n\n\nGiven Theorem~\\ref{thm4}, and property [iv.] 
in Proposition~\\ref{prop1}, note that we can take\n$\\Phi(x):=K_x:=K(x,\\cdot)$ in which case $\\mathcal{F}=\\mathcal{H}$ -- the ``feature space'' is the RKHS itself, as opposed to an isomorphic space.\nWe will make extensive use of this feature map. The fact that Mercer kernels are positive definite and\nsymmetric is also key; these properties ensure that kernels induce positive, symmetric matrices and\nintegral operators, reminiscent of similar properties enjoyed by gramians and covariance matrices.\nFinally, in practice one typically first chooses a Mercer kernel in order to choose an RKHS:\nTheorem~\\ref{thm1} guarantees the existence of a Hilbert space admitting such a function as its\nreproducing kernel.\n\nA key observation, however, is that working in an RKHS allows one to immediately find nonlinear versions of\nalgorithms which can be expressed in terms of inner products. Consider an algorithm expressed in\nterms of the inner product $\\langle x, x^{\\prime} \\rangle_{\\mathcal{X}}$ with $x, x^{\\prime} \\in \\mathcal{X}$. Now assume that\ninstead of looking at a state $x$, we look at its $\\Phi$ image in $\\mathcal{H}$,\n\\[\\label{eqn:phi-mapped-data}\n\\begin{array}{rccl}\n\\Phi&:& X & \\rightarrow {\\cal H} \\\\\n& & x &\\mapsto \\Phi(x) \\;.\n\\end{array}\\]\nIn the RKHS, the inner product $\\langle \\Phi(x), \\Phi(x^{\\prime}) \\rangle$ is\n\\[\\langle \\Phi(x), \\Phi(x^{\\prime}) \\rangle= K(x,x^{\\prime}) \\]\nby the reproducing property. Hence, a nonlinear variant of the original algorithm may be implemented\nusing kernels in place of inner products on $\\mathcal{X}$.\n\n\\section{Empirical Gramians in RKHS}\nIn this Section we recall empirical gramians for linear systems~\\cite{moore}, as well as a notion of empirical gramians for nonlinear systems in RKHS introduced in~\\cite{allerton}. The goal of the construction we describe here is to provide meaningful, data-based empirical controllability and observability gramians for nonlinear systems. 
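Before developing the gramians, the kernel-as-inner-product viewpoint of the previous section can be illustrated numerically. This is a minimal sketch of ours, using the standard Gaussian kernel $K(x,y)=\exp(-\|x-y\|^{2}/\sigma^{2})$ and arbitrary sample states:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix of the Gaussian kernel K(x, y) = exp(-||x - y||^2 / sigma^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / sigma**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))            # 30 arbitrary states in R^3
K = gaussian_kernel(X, X)

# Mercer properties: K is symmetric and positive semidefinite, and K[i, j]
# equals the feature-space inner product <Phi(x_i), Phi(x_j)>
sym_err = float(np.abs(K - K.T).max())
min_eig = float(np.linalg.eigvalsh(K).min())
print(sym_err, min_eig > -1e-8)
```

Any algorithm that only touches the data through inner products can now be run on `K` instead of on raw coordinates.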
In~\\cite{allerton}, observability and controllability gramians were used for balanced model reduction; here, however, we use these quantities to analyze nonlinear control properties and random dynamical systems. We note that a related notion of gramians for nonlinear systems is briefly discussed in~\\cite{gray}; however, no method for computing or estimating them was given.\n\n\\subsection{Empirical Gramians for Linear Systems}\\label{sec:linear_gramians}\nTo compute the gramians for the linear system (\\ref{linsys}), one can attempt to solve the\nLyapunov equations (\\ref{lyap_lin}) directly, although this can be computationally prohibitive. For linear systems, the gramians may be approximated by way of matrix multiplications implementing primal and adjoint systems (see the method of snapshots, e.g.~\\cite{Rowley05}). Alternatively, for any system, linear or nonlinear, one may take the simulation-based approach introduced by B.C. Moore~\\cite{moore} for reduction of linear systems, and subsequently extended to nonlinear systems in~\\cite{lall}. The method\nproceeds by exciting each coordinate of the input with impulses from the zero initial state $(x_0=0)$. The system's responses are sampled, and the sample covariance is taken as an approximation to the controllability gramian. Denote the set of canonical orthonormal basis vectors in $\\mbox{I}\\!\\mbox{R}^n$ by $\\{e_i\\}_{i}$. Let $u^i(t) = \\delta(t)e_i$ be the input signal for the $i$-th simulation, and let $x^i(t)$ be the corresponding response of the system. Form the matrix $X(t) = \\bigl[x^1(t)\n~\\cdots~ x^q(t)\\bigr] \\in \\mbox{I}\\!\\mbox{R}^{n\\times q}$, so that $X(t)$ is seen as a data matrix with\ncolumn observations given by the respective responses $x^i(t)$. 
Then the $(n\\times n)$ controllability gramian is given by\n\\[\nW_{c,\\text{lin}} = \\frac{1}{q}\\int_0^{\\infty}X(t)X(t)^{\\!\\top\\!} dt.\n\\]\nWe can approximate this integral by sampling the matrix function $X(t)$ within a finite time interval $[0,T]$\nassuming for instance the regular partition $\\{t_i\\}_{i=1}^N, t_i = (T\/N)i$. This leads to the {\\em empirical controllability gramian}\n\\[\\label{eqn:Wchat_lin}\n\\widehat{W}_{c,\\text{lin}} = \\frac{T}{Nq}\\sum_{i=1}^N X(t_i)X(t_i)^{\\!\\top\\!} .\n\\]\n\nThe observability gramian is estimated by\nfixing $u(t) = 0$, setting $x_0 = e_i$ for $i=1,\\ldots,n$, and measuring the corresponding system output\nresponses $y^i(t)$. Now assemble the output responses into a matrix $Y(t) = [y^1(t) ~\\cdots~ y^n(t)]\\in\n\\mbox{I}\\!\\mbox{R}^{p\\times n}$. The $(n\\times n)$ observability gramian $W_{o,\\text{lin}}$ and its empirical\ncounterpart $\\widehat{W}_{o,\\text{lin}}$ are respectively given by\n\\[\nW_{o,\\text{lin}} = \\frac{1}{p}\\int_0^{\\infty}Y(t)^{\\!\\top\\!}Y(t) dt\\]\nand\n\\[\\label{eqn:Wohat_lin}\n\\widehat{W}_{o,\\text{lin}} = \\frac{T}{Np}\\sum_{i=1}^N \\widetilde{Y}(t_i)\\widetilde{Y}(t_i)^{\\!\\top\\!}\n\\]\nwhere $\\widetilde{Y}(t) = Y(t)^{\\!\\top\\!}$.\nThe matrix $\\widetilde{Y}(t_i)\\in\\mbox{I}\\!\\mbox{R}^{n\\times p}$ can be thought of as a data matrix with column observations\n\\begin{equation}\\label{eqn:obs_data}\nd_j(t_i) = \\bigl(y_j^1(t_i), \\ldots, y_j^n(t_i)\\bigr)^{\\!\\!\\top\\!} \\in\\mbox{I}\\!\\mbox{R}^n,\n\\end{equation}\nfor $j=1,\\ldots,p, \\,\\,i=1,\\ldots, N$ so that $d_j(t_i)$ corresponds to the response at time $t_i$ of the\nsingle output coordinate $j$ to each of\nthe (separate) initial conditions $x_0=e_k, k=1,\\ldots,n$.\n\n\\subsection{Empirical Gramians in RKHS Characterizing Nonlinear Systems}\\label{sec:rkhs-gramians}\nConsider the generic nonlinear system\n\\[\\label{sigma}\n\\left\\{\\begin{array}{rcl}\\dot{x}&=&F(x,u)\\\\ y &=& h(x), 
\\end{array}\\right.\n\\]\nwith $x \\in \\mbox{I}\\!\\mbox{R}^n$, $u \\in \\mbox{I}\\!\\mbox{R}^q$, $y\\in \\mbox{I}\\!\\mbox{R}^p$, $F(0)=0$ and $h(0)=0$.\nAssume that the linearization of~\\eqref{sigma} around the origin is controllable, observable and\n$A=\\frac{\\partial F}{\\partial x}|_{x=0}$ is asymptotically stable.\n\nRKHS counterparts to the empirical quantities~\\eqref{eqn:Wchat_lin},\\eqref{eqn:Wohat_lin} defined above for the system~\\eqref{sigma} can be defined by considering feature-mapped lifts of the simulated samples in $\\mathcal{H}_K$. In the following, and without loss of generality,\n{\\em we assume the data are centered in feature space}, and that the observability\nsamples and controllability samples are centered separately. See~(\\cite{smola}, Ch. 14) for a\ndiscussion on implicit data centering in RKHS with kernels.\n\nFirst, observe that the gramians $\\widehat{W}_c, \\widehat{W}_o$ can be viewed as the sample covariance of a collection of $N\\cdot q, N\\cdot p$ vectors in $\\mbox{I}\\!\\mbox{R}^n$ scaled by $T$, respectively. Then applying $\\Phi$ to the samples as in~\\eqref{eqn:phi-mapped-data}, we obtain the corresponding gramians in the RKHS associated to $K$ as bounded linear operators on $\\mathcal{H}_K$:\n\\begin{align}\n\\widehat{W}_c &= \\frac{T}{Nq}\\sum_{i=1}^N\\sum_{j=1}^q \\Phi(x^j(t_i))\\otimes \\Phi(x^j(t_i)) \\label{eqn:emp_Wc_rkhs}\\\\\n\\widehat{W}_o &= \\frac{T}{Np}\\sum_{i=1}^N\\sum_{j=1}^p \\Phi(d_j(t_i))\\otimes\\Phi(d_j(t_i))\\nonumber\n\\end{align}\nwhere the samples $x_j,d_j$ are as defined in Section~\\ref{sec:linear_gramians}, and $a\\otimes b=a\\scal{b}{\\cdot}$ denotes the tensor product in $\\mathcal{H}$. 
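For reference, the linear empirical gramian~\eqref{eqn:Wchat_lin} can be computed in a few lines. The sketch below is ours and uses a toy stable diagonal system, chosen so that the impulse responses and the exact gramian are available in closed form:

```python
import numpy as np

# toy stable linear system xdot = A x + u with A diagonal, so the impulse
# responses x^i(t) = expm(A t) e_i are available exactly (illustrative choice)
A = np.diag([-1.0, -2.0])
n = q = 2
T, N = 20.0, 4000
ts = (T / N) * np.arange(1, N + 1)

W_hat = np.zeros((n, n))
for t in ts:
    X_t = np.diag(np.exp(np.diag(A) * t))   # columns are x^1(t), x^2(t)
    W_hat += X_t @ X_t.T
W_hat *= T / (N * q)                        # empirical controllability gramian

# exact value for this system: (1/q) * diag(1/2, 1/4)
W_true = np.diag([0.5, 0.25]) / q
print(np.round(W_hat, 3))
```

The RKHS versions above replace each column `x^j(t_i)` by its feature-space image before forming the same sample covariance.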
From here on we will use the notation $W_c, W_o$ to refer to RKHS versions of\nthe true (integrated) gramians, and $\\widehat{W}_c, \\widehat{W}_o$ to refer to RKHS versions of the empirical gramians.\n\nLet $\\boldsymbol{\\Psi}$ denote the matrix whose columns are the (scaled) observability samples mapped into feature space by $\\Phi$, and let $\\boldsymbol{\\Phi}$ be the matrix similarly built from the feature space representation of the\ncontrollability samples. Then we may alternatively express the gramians above as\n$\\widehat{W}_c=\\boldsymbol{\\Phi}\\bPhi^{\\!\\top\\!}$ and $\\widehat{W}_o=\\boldsymbol{\\Psi}\\bPsi^{\\!\\top\\!}$, and define two other important quantities:\n\\begin{itemize}\\itemsep 0pt\n\\item The \\emph{controllability kernel matrix} $K_c\\in\\mbox{I}\\!\\mbox{R}^{Nq\\times Nq}$ of kernel\nproducts\n\\begin{align}\nK_c &= \\boldsymbol{\\Phi}^{\\!\\top\\!}\\boldsymbol{\\Phi} \\\\\n(K_c)_{\\mu\\nu} &= K(x_\\mu, x_\\nu) = \\scal{\\Phi(x_\\mu)}{\\Phi(x_\\nu)}_{\\mathcal{F}}\n\\end{align}\nfor $\\mu,\\nu=1,\\ldots,Nq$ where we have re-indexed the set of vectors $\\{x^{j}(t_i)\\}_{i,j} =\n\\{x_{\\mu}\\}_{\\mu}$ to use a single linear index.\n\\item The \\emph{observability kernel matrix} $K_o\\in\\mbox{I}\\!\\mbox{R}^{Np\\times Np}$,\n\\begin{align}\nK_o &= \\boldsymbol{\\Psi}^{\\!\\top\\!}\\boldsymbol{\\Psi}\\\\\n(K_o)_{\\mu\\nu} &= K(d_\\mu, d_\\nu) = \\scal{\\Phi(d_\\mu)}{\\Phi(d_\\nu)}_{\\mathcal{F}}\n\\end{align}\nfor $\\mu,\\nu=1,\\ldots,Np$, where we have again re-indexed the set $\\{d_j(t_i)\\}_{i,j}=\\{d_\\mu\\}_{\\mu}$ for\nsimplicity.\n\\end{itemize}\nNote that $K_c,K_o$ may be highly ill-conditioned. 
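Replacing the abstract feature map by an explicit finite-dimensional one (a toy choice of ours) makes the relation between $\widehat{W}_c=\boldsymbol{\Phi}\boldsymbol{\Phi}^{\top}$ and $K_c=\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}$ easy to check numerically; in particular the two matrices share their nonzero eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
x = rng.standard_normal(m)

# explicit finite-dimensional feature map phi(x) = (1, x, x^2), standing in
# for the (possibly infinite-dimensional) RKHS feature map Phi
Phi = np.stack([np.ones(m), x, x**2])       # 3 x m, columns are phi(x_i)

W_hat = Phi @ Phi.T / m                     # empirical gramian, 3 x 3
K = Phi.T @ Phi                             # kernel matrix, m x m

ew = np.sort(np.linalg.eigvalsh(W_hat))[::-1]
ek = np.sort(np.linalg.eigvalsh(K / m))[::-1]
# nonzero spectra coincide; K/m carries m - 3 extra (near-)zero eigenvalues
print(np.round(ew, 6), np.round(ek[:3], 6))
```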
The SVD\nmay be used to show that $\\widehat{W}_c$ and $K_c$ ($\\widehat{W}_o$ and $K_o$) have the same singular values (up to zeros).\n\n\n\\section{Nonlinear Control Systems in RKHS}\nIn this section, we introduce empirical versions of the controllability and observability energies~\\eqref{L_c}-\\eqref{L_o} for stable\nnonlinear control systems of the form~\\eqref{control_nonlin}, that can be estimated from observed data. Our underlying assumption is that a given nonlinear system may be treated as if it were linear in a suitable feature space. That reproducing kernel Hilbert spaces provide rich representations capable of capturing strong nonlinearities in the original input (data) space lends validity to this assumption.\n\nIn general little is known about the energy functions in the nonlinear setting. However,\nScherpen~\\cite{scherpen_thesis} has shown that the energy functions $L_c(x)$ and $L_o(x)$ defined in~\\eqref{L_c} and~\\eqref{L_o} satisfy a Hamilton-Jacobi and a Lyapunov equation, respectively.\n\\begin{theorem}\\label{thm:scherp1}\\cite{scherpen_thesis} Consider the nonlinear control system (\\ref{sigma}) with $F(x,u)=f(x)+G(x)u$. If the origin is an asymptotically stable equilibrium\nof $f(x)$ on a neighborhood $W$ of the origin, then for\nall $x \\in W$, $L_o(x)$ is the unique smooth solution of\n\\[\\label{Lo_hjb} \\frac{\\partial L_o}{\\partial x}(x)f(x)+\\frac{1}{2}h^{\\!\\top\\!}(x)h(x)=0,\\quad L_o(0)=0 \\]\nunder the assumption that (\\ref{Lo_hjb}) has a smooth solution on $W$. 
Furthermore for all $x \\in W$, $L_c(x)$\nis the unique smooth solution of\n\\[\\label{Lc_hjb} \\frac{\\partial L_c}{\\partial x}(x)f(x)+\\frac{1}{2} \\frac{\\partial L_c}{\\partial\nx}(x)g(x)g^{\\!\\top\\!}(x) \\frac{\\partial^{\\!\\top\\!}L_c}{\\partial x}(x)=0,\\; L_c(0)=0\\]\nunder the assumption that (\\ref{Lc_hjb}) has a smooth solution $\\bar{L}_c$ on $W$ and that the origin is an\nasymptotically stable equilibrium of $-(f(x)+g(x)g^{\\!\\top\\!}(x) \\frac{\\partial \\bar{L}_c}{\\partial x}(x))$ on $W$.\n\\end{theorem}\nWe would like to avoid solving explicitly the PDEs (\\ref{Lo_hjb})- (\\ref{Lc_hjb}) and instead find good\nestimates of their solutions directly from simulated or observed data.\n\n\\subsection{Energy Functions}\\label{sec:energy_fns}\nFollowing the linear theory developed in Section~\\ref{sec:linear-control}, we would like to\ndefine analogous controllability and observability energy functions paralleling~\\eqref{eqn:lin_Lc}-\\eqref{eqn:lin_Lo}, but adapted to the nonlinear setting. We first treat the controllability\nfunction. Let $\\mu_{\\infty}$ on the statespace $\\mathcal{X}$ denote the unknown invariant measure of the nonlinear system~\\eqref{sigma} when driven by white Gaussian noise. We will consider here the case where the controllability samples $\\{x_i\\}_{i=1}^m$ are i.i.d. random draws from $\\mu_{\\infty}$, and $\\mathcal{X}$ is a compact subset of $\\mbox{I}\\!\\mbox{R}^n$. The former assumption is implicitly made in much of the empirical balancing literature, and if a system is simulated for long time intervals, it should hold approximately in practice. 
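The sampling assumption just discussed can be realized in simulation. The sketch below is ours (a toy scalar SDE $dx=-x^{3}\,dt+dw$ with parameters we chose); it generates approximate draws from the invariant measure by Euler--Maruyama integration with burn-in and thinning:

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_maruyama(drift, x0, dt, n_steps, rng):
    """Simulate dx = drift(x) dt + dw (unit noise) with Euler--Maruyama."""
    dw = np.sqrt(dt) * rng.standard_normal(n_steps)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + dw[k]
    return x

# toy scalar system dx = -x^3 dt + dw; after a burn-in, thinned states serve
# as (approximately independent) draws from the invariant measure
path = euler_maruyama(lambda x: -x**3, x0=0.0, dt=0.01, n_steps=200_000, rng=rng)
samples = path[50_000::100]        # discard the transient, thin the chain
print(len(samples), round(float(samples.var()), 3))
```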
If we take $\\Phi(x)=K_x$, the infinite-data limit of~\\eqref{eqn:emp_Wc_rkhs} is given by\n\\[\\label{eqn:covop_gramian}\nW_c = \\mathbb{E}_{\\mu_{\\infty}} [\\widehat{W}_{c}] = \\int_{\\mathcal{X}}\\scal{\\cdot}{K_x}K_xd\\mu_{\\infty}(x).\n\\]\n\nIn general neither $W_c$ nor its empirical approximation $\\widehat{W}_c$ are invertible, so to define a controllability energy similar to~\\eqref{eqn:lin_Lc} one is tempted to define $L_c$ on $\\mathcal{H}$ as\n\\mbox{$L_c(h)=\\scal{W_c^{\\dag}h}{h}$}, where $A^{\\dag}$ denotes the pseudoinverse of the operator $A$. However, the domain of $W_c^{\\dag}$ is equal to the range of $W_c$, and so in general $K_x$ may not be in the domain of $W_c^{\\dag}$. We will therefore introduce the orthogonal projection $W_c^{\\dag}W_c$ mapping $\\mathcal{H}\\mapsto\\text{range}(W_c)$ and define the nonlinear control energy on $\\mathcal{H}$ as\n\\begin{equation}\\label{eqn:best_lc}\nL_c(h) = \\scal{W_c^{\\dag}(W_c^{\\dag}W_c)h}{h}.\n\\end{equation}\nWe will consider finite sample approximations to~\\eqref{eqn:best_lc}, however a further complication is that $\\widehat{W}_c^{\\dag}\\widehat{W}_c$ may not converge to $W_c^{\\dag}W_c$ in the limit of infinite data (taking the pseudoinverse is not a continuous operation), and $\\widehat{W}_c^{\\dag}$ can easily be ill-conditioned in any event. Thus one needs to impose regularization, and we replace the pseudoinverse $A^{\\dag}$ with a regularized inverse $(A + \\lambda I)^{-1}, \\lambda > 0$ throughout. We note that the preceding observations were also made in~\\cite{RosascoDensity}. Intuitively,\nregularization prevents the estimator from overfitting to a bad or unrepresentative sample\nof data. 
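The instability of the pseudoinverse, and the boundedness conferred by the regularized inverse $(A+\lambda I)^{-1}$, can be seen on a toy rank-deficient matrix (our example, purely illustrative):

```python
import numpy as np

# rank-deficient "gramian" with one tiny spurious eigenvalue, as might arise
# from sampling noise
W = np.diag([1.0, 1e-12, 0.0])
lam = 1e-3

pinv_sol = np.linalg.pinv(W)                  # blows up on the 1e-12 direction
reg_sol = np.linalg.inv(W + lam * np.eye(3))  # bounded by 1/lam in norm

print(pinv_sol[1, 1], reg_sol[1, 1])          # order 1e12 versus order 1e3
```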
We thus define the estimator \\mbox{$\\hat{L}_c:\\mathcal{X}\\to\\mbox{I}\\!\\mbox{R}_+$} (that is, on the domain $\\{K_x~|~ x\\in\\mathcal{X}\\}\\subseteq\\mathcal{H}$) to be\n\\begin{equation}\\label{eqn:rkhs_lc_def}\n\\hat{L}_c(x)=\\tfrac{1}{2}\\bigl\\langle(\\widehat{W}_c + \\lambda I)^{-2}\\widehat{W}_c K_x,K_x\\bigr\\rangle, \\quad x\\in\\mathcal{X}\n\\end{equation}\nwith infinite-data limit\n\\[\nL_c^{\\lambda}(x) = \\tfrac{1}{2}\\scal{(W_c + \\lambda I)^{-2}W_c K_x}{K_x},\n\\]\nwhere $\\lambda > 0$ is the regularization parameter.\n\nTowards deriving an equivalent but computable expression for $\\hat{L}_c$ defined in terms of kernels, we\nrecall the sampling operator $S_{\\mathbf{x}}$ of~\\cite{SmaleIntegral} and its adjoint. Let $\\mathbf{x} = \\{x_i\\}_{i=1}^{m}$ denote a generic sample of $m$ data points. To $\\mathbf{x}$ we can associate the operators\n\\begin{alignat*}{4}\nS_{\\mathbf{x}} &: \\mathcal{H} &\\to &\\, \\mbox{I}\\!\\mbox{R}^{m}, &\\quad h &\\in\\mathcal{H} &\\mapsto&\\, \\bigl(h(x_1),\\ldots,h(x_{m})\\bigr)\\\\\nS_{\\mathbf{x}}^{\\ast} &:\\mbox{I}\\!\\mbox{R}^{m} &\\to &\\, \\mathcal{H}, &\\quad c &\\in\\mbox{I}\\!\\mbox{R}^{m} &\\mapsto&\\, \\textstyle\\sum_{i=1}^{m}c_iK_{x_i}\\,.\n\\end{alignat*}\nIf $\\mathbf{x}$ is the collection of $m=Nq$ controllability samples, one can check that\n$\\widehat{W}_c = \\tfrac{1}{m}\\SxsS_{\\mathbf{x}}$ and $K_c=S_{\\mathbf{x}}S_{\\mathbf{x}}^{\\ast}$. 
Consequently,\n\\begin{align*}\n\\hat{L}_c(x) &=\\tfrac{1}{2}\\scal{(\\tfrac{1}{m}\\SxsS_{\\mathbf{x}} + \\lambda I)^{-2}\\tfrac{1}{m}\\SxsS_{\\mathbf{x}} K_{x}}{K_{x}}\\\\\n&=\\tfrac{1}{2m}\\scal{S_{\\mathbf{x}}^{\\ast}(\\tfrac{1}{m}S_{\\mathbf{x}}S_{\\mathbf{x}}^{\\ast} + \\lambda I)^{-2}S_{\\mathbf{x}} K_{x}}{K_{x}}\\\\\n&= \\tfrac{1}{2m}{\\bf k_c}(x)^{\\!\\top\\!}(\\tfrac{1}{m}K_c + \\lambda I)^{-2}{\\bf k_c}(x),\n\\end{align*}\nwhere ${\\bf k_c}(x):=S_{\\mathbf{x}} K_x = \\bigl(K(x,x_{\\mu})\\bigr)_{\\mu=1}^{Nq}$ is the $Nq$-dimensional column vector\ncontaining the kernel products between $x$ and the controllability samples.\n\nSimilarly, letting $\\mathbf{x}$ now denote the collection of $m=Np$ observability samples, we can approximate the future output energy by\n\\begin{align}\n\\hat{L}_o(x) &= \\tfrac{1}{2}\\bigl\\langle\\widehat{W}_oK_x,K_x\\bigr\\rangle \\\\\\label{eqn:Lo_rkhs}\n&= \\tfrac{1}{2m}\\bigl\\langle\\SxsS_{\\mathbf{x}} K_x, K_x\\bigr\\rangle \\nonumber \\\\\n &= \\tfrac{1}{2m}{\\bf k_o}(x)^{\\!\\top\\!}{\\bf k_o}(x)\n = \\tfrac{1}{2m}\\nor{{\\bf k_o}(x)}_2^2\\nonumber\n\\end{align}\nwhere ${\\bf k_o}(x):=\\bigl(K(x,d_{\\mu})\\bigr)_{\\mu=1}^{Np}$ is the $Np$-dimensional column vector\ncontaining the kernel products between $x$ and the observability samples.\nWe collect the above results into the following definition:\n\\begin{definition}\\label{def:rkhs_energies} Given a nonlinear control system of the form~\\eqref{sigma}, we define the kernel\ncontrollability energy function and the kernel observability energy function as, respectively,\n\\begin{align}\n\\hat{L}_c(x) &= \\tfrac{1}{2Nq}{\\bf k_c}(x)^{\\!\\top\\!}(\\tfrac{1}{Nq}K_c + \\lambda I)^{-2}{\\bf k_c}(x) \\\\ \\label{eqn:lc_hat}\n\\hat{L}_o(x) &= \\tfrac{1}{2Np}\\nor{{\\bf k_o}(x)}_2^2 \\;.\n\\end{align}\n\\end{definition}\nNote that the kernels used to define $\\hat{L}_c$ and $\\hat{L}_o$ need not be the same.\n\n\\subsection{Consistency}\nWe'll now turn to showing that the estimator 
$\\hat{L}_c$ is consistent, but note that\n{\\em we do not address the approximation error} between the energy function estimates and the true\n but unknown underlying functions. Controlling the approximation error requires making specific assumptions\nabout the nonlinear system, and we leave this question open.\n\nIn the following we will make an important set of assumptions regarding the kernel $K$ and the\nRKHS $\\mathcal{H}$ it induces.\n\\begin{assumption}\\label{ass:rkhs}\nThe reproducing kernel $K$ defined on the compact statespace $\\mathcal{X}\\subset\\mbox{I}\\!\\mbox{R}^n$ is locally Lipschitz, measurable and defines a completely regular RKHS. Furthermore the diagonal of $K$ is uniformly bounded,\n\\[\\label{eqn:kappa}\n\\kappa^2 = \\sup_{x\\in\\mathcal{X}}K(x,x) <\\infty.\n\\]\n\\end{assumption}\nSeparable RKHSes are induced by continuous kernels on separable spaces $\\mathcal{X}$.\nSince $\\mathcal{X}\\subset\\mbox{I}\\!\\mbox{R}^n$ is separable and locally Lipschitz functions are also continuous, $\\mathcal{H}$ will always be separable. {\\em Completely regular} RKHSes are introduced in~\\cite{RosascoDensity} and\nthe reader is referred to this reference for details. Briefly, complete regularity ensures recovery of level sets of {\\em any} distribution, in the limit of infinite data. The Gaussian kernel does not define a completely regular RKHS, but the $L_1$ exponential and Laplacian kernels do~\\cite{RosascoDensity}.\n\nWe introduce some additional notation. 
Let $W_{c,m}$ denote the empirical RKHS gramian formed from a sample of size $m$ observations, and let the corresponding control energy estimate in Definition~\\ref{def:rkhs_energies} involving $W_{c,m}$ and regularization parameter $\\lambda$ be denoted by $L_{c,m}^{\\lambda}$.\n\nThe following preliminary lemma provides finite sample error bounds for Hilbert-Schmidt covariance matrices\non real, separable reproducing kernel Hilbert spaces.\n\\begin{lemma}[\\cite{RosascoIntegral} Theorem 7; Props. 8, 9]\\label{lem:cov_conc}\n\\mbox{}\n\\begin{itemize}\\itemsep 0pt\n\\item[(i)] The operators $W_c, W_{c,m}$ are Hilbert-Schmidt.\n\\item[(ii)] Let $\\delta\\in(0,1]$. With probability at least $1-\\delta$,\n\\[\n\\nor{W_c-W_{c,m}}_{HS} \\leq \\frac{2\\sqrt{2}\\kappa^2}{\\sqrt{m}}\\log^{1\/2}\\frac{2}{\\delta}.\n\\]\n\\end{itemize}\n\\end{lemma}\n\nThe following theorem establishes consistency of the estimator $L_{c,m}^{\\lambda}$, the proof of which follows the method of integral operators developed by~\\cite{SmaleIntegral,AndreaFastRates} and subsequently adopted in the context of density estimation by~(\\cite{RosascoDensity}, Theorem 1).\n\\begin{theorem}\n\\mbox{}\n\\begin{itemize}\\itemsep 0pt\n\\item[(i)] Fix $\\lambda > 0$. 
For each $x\\in\\mathcal{X}$, with probability at least $1-\\delta$,\n\\[\n\\bigl|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)\\bigr| \\leq \\frac{2\\sqrt{2}\\kappa^4(\\lambda^2 + \\kappa^4)}{\\lambda^4\\sqrt{m}}\\log^{1\/2}\\frac{2}{\\delta} .\n\\]\n\\item[(ii)] If $(K,\\mathcal{X},\\mu_{\\infty})$ is such that\n \\[\\label{eqn:bounded_pinv}\n \\sup_{x\\in\\mathcal{X}}\\|W_c^{\\dag}(W_c^{\\dag}W_c)K_x\\|_{\\mathcal{H}}<\\infty,\n \\]\nthen for all $x\\in\\mathcal{X}$,\n$$\n\\displaystyle\\lim_{\\lambda\\to 0}|L_c^{\\lambda}(x) - L_c(x)| = 0.\n$$\n\\item[(iii)] If the condition~\\eqref{eqn:bounded_pinv} holds and the sequence $\\{\\lambda_m\\}_m$ satisfies $\\displaystyle\\lim_{m\\to\\infty}\\lambda_m=0$ with\n$\\displaystyle\\lim_{m\\to\\infty}\\tfrac{\\log^{1\/2}m}{\\lambda_m\\sqrt{m}} = 0$, then\n$$\n\\lim_{m\\to\\infty}\\bigl|L_{c,m}^{\\lambda}(x) - L_c(x)\\bigr| = 0,\\quad\\text{almost surely.}\n$$\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nFor (i), the sample error, we have\n\\begin{align*}\n2\\bigl|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)\\bigr|\n& \\leq \\nor{(W_{c,m} + \\lambda I)^{-2}W_{c,m} - (W_c+ \\lambda I)^{-2}W_c }\\nor{K_{x}}^2_{\\mathcal{H}} \\\\\n& \\leq \\bigl\\|(W_{c} + \\lambda I)^{-2}[\\lambda^2(W_{c,m} - W_c) + W_c(W_c - W_{c,m})W_{c,m}](W_{c,m} + \\lambda I)^{-2} \\bigr\\|\\kappa^2\\\\\n& \\leq \\frac{\\kappa^2(\\lambda^2 + \\kappa^4)}{\\lambda^4}\\nor{W_{c,m} - W_c}_{HS}\n\\end{align*}\nwhere $\\nor{\\cdot}$ refers to the operator norm. The second inequality follows from spectral calculus and~\\eqref{eqn:kappa}. 
The third line follows making use of the estimates $\\nor{(W_{c,m} + \\lambda I)^{-2}}\\leq \\lambda^{-2}, \\nor{(W_c + \\lambda I)^{-2}}\\leq \\lambda^{-2}, \\|W_c\\|_{HS}\\leq \\kappa^2, \\|W_{c,m}\\|_{HS}\\leq\\kappa^2$ (and the fact that $\\lambda > 0$ so that the relevant quantities are invertible).\nPart (i) then follows applying Lemma~\\ref{lem:cov_conc} to the quantity\n$\\nor{W_{c,m} - W_c}_{HS}$.\nFor (ii), the approximation error, note that the compact self-adjoint operator $W_c$ can be expanded onto an orthonormal basis $\\{\\sigma_i,\\phi_i\\}$. We then have\n\\begin{align*}\n 2\\bigl|L_{c}^{\\lambda}(x) - L_c(x)\\bigr|\n& = \\bigl|\\bigl\\langle[(W_{c} + \\lambda I)^{-2}W_{c} - W_c^{\\dag}(W_c^{\\dag}W_c)]K_x,K_x\\bigr\\rangle\\bigr| \\\\\n& = \\left| \\sum_i\\frac{\\sigma_i}{(\\sigma_i + \\lambda)^2}|\\langle \\phi_i,K_x\\rangle|^2 -\n\\sum_{i:\\sigma_i>0}\\frac{1}{\\sigma_i}|\\langle\\phi_i,K_x\\rangle|^2\\right| \\\\\n& \\leq \\lambda\\sum_{i:\\sigma_i>0}\\frac{2\\sigma_i + \\lambda}{(\\sigma_i + \\lambda)^2\\sigma_i}|\\langle \\phi_i,K_x\\rangle|^2 .\n\\end{align*}\nThe last quantity above can be seen to converge to 0 as $\\lambda\\to 0$ since the sum converges for all $x$ under the condition~\\eqref{eqn:bounded_pinv}.\nLastly for part (iii), we see that if $m\\to\\infty$ and $\\lambda^2\\to 0$ slower than $\\sqrt{m}$ then the sample error (i) goes to 0 while (ii) also holds. 
For almost sure convergence in part (i), we additionally require that for any $\\varepsilon\\in(0,\\infty)$,\n$$\n\\sum_m\\mathbb{P}\\bigl(|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)| > \\varepsilon\\bigr)\\leq\n\\sum_m e^{-\\mathcal{O}(m\\lambda^4_m\\varepsilon^2)} < \\infty.$$\nThe choice $\\lambda_m = \\log^{-1\/2}m$ satisfies this requirement, as can be seen from the fact that for large enough $M<\\infty$, $\\sum_{m>M} e^{-m\/\\log^2 m} \\leq \\sum_{m>M} e^{-\\sqrt{m}} < \\infty$.\n\\end{proof}\nWe note that the condition~\\eqref{eqn:bounded_pinv} required in part (ii) of the theorem has also been discussed in the context of support estimation in forthcoming work from the authors of~\\cite{RosascoDensity}.\n\n\\subsection{Observability and Controllability Ellipsoids}\nGiven the preceding, we can estimate the reachable and observable sets of a nonlinear control system as level sets of the RKHS energy functions $\\hat{L}_c, \\hat{L}_o$ from Definition~\\ref{def:rkhs_energies}:\n\\begin{definition}\nGiven a nonlinear control system~\\eqref{control_nonlin}, its reachable set can be estimated as\n\\[\n\\label{reachable_estimate}\n\\widehat{\\cal R}_{\\tau}=\\{x\\in\\mathcal{X}~|~\\hat{L}_c(x) \\leq \\tau \\}\n\\]\nand its observable set can be estimated as\n\\[\n\\label{observable_estimate}\n\\widehat{\\cal E}_{\\tau'}=\\{x\\in\\mathcal{X}~|~\\hat{L}_o(x) \\geq \\tau' \\}\n\\]\nfor suitable choices of the threshold parameters $\\tau, \\tau'$.\n\\end{definition}\nIf the energy function estimates above are replaced with the true energy functions, and\n$\\tau=\\tau'=1\/2$, one obtains a finite sample approximation to the controllability and observability ellipsoids defined in Section~\\ref{sec:linear-control} if the system is linear. In general, $\\tau$ may be chosen empirically based on the data, using for instance a cross-validation procedure. 
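Definition~\ref{def:rkhs_energies} translates directly into code. The sketch below is ours (hypothetical controllability samples, a Laplacian-type kernel and a bandwidth we chose); it evaluates $\hat{L}_c$ and thresholds it as in~\eqref{reachable_estimate}:

```python
import numpy as np

rng = np.random.default_rng(3)

def kernel(X, Y):
    """Laplacian-type kernel exp(-||x - y||_1) (completely regular, per the text)."""
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.exp(-d)

# hypothetical controllability samples, clustered near the origin
Xs = 0.4 * rng.standard_normal((250, 2))
m, lam = len(Xs), 0.1
Ainv = np.linalg.inv(kernel(Xs, Xs) / m + lam * np.eye(m))
M2 = Ainv @ Ainv                             # (K_c/m + lam I)^{-2}

def L_c_hat(x):
    """(1/2m) k_c(x)^T (K_c/m + lam I)^{-2} k_c(x), as in the definition above."""
    kc = kernel(np.atleast_2d(np.asarray(x, float)), Xs).ravel()
    return float(kc @ M2 @ kc) / (2 * m)

def reachable(x, tau):
    """Membership in the estimated reachable set {x : L_c_hat(x) <= tau}."""
    return L_c_hat(x) <= tau

# calibrate the threshold on a state known to be well inside the sampled region
tau = 5.0 * L_c_hat([0.0, 0.0])
print(reachable([0.05, 0.0], tau))
```

The observable-set estimate is analogous, with $\hat{L}_o(x)=\|{\bf k_o}(x)\|_2^2/(2m)$ thresholded from below.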
Note that in the linear setting, the ellipsoid of strongly observable states is more commonly characterized as $\\{x~|~x^{\\!\\top\\!}W_o^{-1}x\\leq 1\\} = \\{W_o^{\\frac{1}{2}}x ~|~ \\nor{x}\\leq 1\\}$; hence the definition~\\eqref{observable_estimate}.\n\n\\section{Estimation of Invariant Measures for Ergodic Nonlinear SDEs}\\label{sec:nonlinear-sdes}\nIn this section we consider {\\em ergodic} nonlinear SDEs of the form~\\eqref{sde_nonlin}, where the invariant (or ``stationary'') measure is a key quantity providing a great deal of insight. In the context of control, the support of the stationary distribution corresponds to the reachable set of the nonlinear control system and may be estimated by~\\eqref{reachable_estimate}. Solving a Fokker-Planck equation of the form~\\eqref{ito} is one way to determine the probability distribution describing the solution to an SDE. However, for nonlinear systems finding an explicit solution to the Fokker-Planck equation --or even its steady-state solution-- is a challenging problem. The study of the existence of steady-state solutions can be traced back to the 1960s~\\cite{fuller,zakai}; however, explicit formulas for steady-state solutions of the Fokker-Planck equation exist in only a few special cases (see~\\cite{butchart,da_prato1992,fuller,guinez,liberzon,risken} for example). Such special cases often involve conservative systems or second-order vector fields. Hartmann~\\cite{hartmann:08}, among others, has studied balanced truncation in the context of linear SDEs, where empirical estimation of gramians plays a key role.\n\nWe propose here a data-based non-parametric estimate of the solution to the steady-state Fokker-Planck equation~\\eqref{steady} for a nonlinear SDE, by combining the relation~\\eqref{rho_approx} with the control energy estimate~\\eqref{eqn:lc_hat}. 
Following the general theme of this paper, we make use of the theory from the linear Gaussian setting described in Section~\\ref{sec:linear-sdes}, but in a suitable reproducing kernel Hilbert space. Other estimators have of course been proposed in the literature for approximating invariant measures and for density estimation from data more generally (see e.g.~\\cite{biau,froyland,froyland1,kilminster,RosascoDensity}); to our knowledge, however, no existing estimation technique combines RKHS theory with nonlinear dynamical control systems. An advantage of our approach over other non-parametric methods is that the invariant density is approximated by way of a regularized fitting process, giving the user an additional degree of freedom in the regularization parameter.\n\nOur setting adopts the perspective that the nonlinear stochastic system~\\eqref{sde_nonlin} behaves approximately linearly when mapped via $\\Phi$ into the RKHS $\\mathcal{H}$, and as such may be modeled by an infinite dimensional linear system in $\\mathcal{H}$. Although this system is {\\em unknown}, we know that it is linear and that we can estimate its gramians and control energies from observed data. Furthermore, we know that the invariant measure of the system in $\\mathcal{H}$ is zero-mean Gaussian with covariance given by the controllability gramian. 
Thus the original nonlinear system's invariant measure on $\\mathcal{X}$ should be reasonably approximated by the pullback along $\\Phi$ of the Gaussian invariant measure associated with the linear infinite dimensional SDE in $\\mathcal{H}$.\n\nWe summarize the setting in the following {\\em modeling Assumption}:\n\\begin{assumption}\\label{ass:ou_proc}\nLet $\\mathcal{H}$ be a real, possibly infinite dimensional RKHS satisfying Assumption~\\ref{ass:rkhs}.\n\\begin{itemize}\n\\item[(i)] Given a suitable choice of kernel $K$, if the $\\mbox{I}\\!\\mbox{R}^d$-valued stochastic process $x(t)$ is a solution to the (ergodic) stochastically excited nonlinear system~\\eqref{sde_nonlin}, the $\\mathcal{H}$-valued stochastic process \\mbox{$(\\Phi\\circ x)(t)=:X(t)$} can be reasonably modeled as an Ornstein-Uhlenbeck process\n\\[\\label{eqn:infdim-sde}\ndX(t) = AX(t)dt + \\sqrt{C}dW(t), \\quad X(0)=0\\in\\mathcal{H}\n\\]\nwhere $A$ is linear, negative and is the infinitesimal generator of a strongly continuous semigroup $e^{tA}$, $C$ is linear, continuous, positive and self-adjoint, and $W(t)$ is the cylindrical Wiener process.\n\\item[(ii)] The measure $P_{\\infty}$ is the invariant measure of the OU process~\\eqref{eqn:infdim-sde} and $P_{\\infty}$ is the pushforward along $\\Phi$ of the unknown invariant measure $\\mu_{\\infty}$ on the statespace $\\mathcal{X}$ we would like to approximate.\n\\item[(iii)] The measure $\\mu_{\\infty}$ is absolutely continuous with respect to Lebesgue measure, and so admits a density.\n\\end{itemize}\n\\end{assumption}\nWe will proceed in deriving an estimate of the invariant density under these assumptions, but note that\nthere are interesting systems for which the assumptions may not always hold in practice. For example, uncontrollable systems may not have a unique invariant measure. 
In these cases one must interpret the results discussed here as heuristic in nature.\n\nIt is known that a mild solution $X(t)$ to the SDE~\\eqref{eqn:infdim-sde} exists\nand is unique (\\cite{da_prato1992}, Thm. 5.4. pg. 121). Furthermore, the controllability gramian\nassociated to~\\eqref{eqn:infdim-sde}\n\\[\\label{eqn:infdim-gramian}\nW_ch = \\int_0^{\\infty}e^{tA}Ce^{tA^*}hdt,\\quad h\\in\\mathcal{H}\n\\]\nis trace class (\\cite{da_prato2006}, Lemma 8.19), and the unique measure $P_{\\infty}$ invariant with respect to the Markov semigroup associated to the OU process has characteristic function (\\cite{da_prato2006}, Theorem 8.20)\n\\[\\label{eqn:inv-meas}\n\\widetilde{P}_{\\infty}(h) = \\exp\\Bigl(-\\tfrac{1}{2}\\scal{W_ch}{h}\\Bigr),\\quad h\\in\\mathcal{H} \\;.\n\\]\nWe will use the notation $\\widetilde{P}$ to refer to the Fourier transform of the measure $P$.\nThe law of the solution $X(t)$ to problem~\\eqref{eqn:infdim-sde} given initial condition $X(0)=0$ is\nGaussian with zero mean and covariance operator $Q_t=\\int_{0}^t e^{sA}Ce^{sA^*}ds$. 
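In finite dimensions the defining integral can be cross-checked against the standard Lyapunov characterization of the gramian: for stable $A$, $W_c = \int_0^{\infty}e^{tA}Ce^{tA^{\top}}dt$ solves $AW_c + W_cA^{\top} + C = 0$. A quick numeric sanity check (scipy; the matrices below are illustrative):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])      # a stable drift matrix
C = np.array([[1.0, 0.2],
              [0.2, 0.5]])       # a positive definite diffusion covariance

W = solve_continuous_lyapunov(A, -C)     # solves A W + W A^T = -C

# crude trapezoidal quadrature of the defining integral (tail beyond t=20 is negligible)
ts = np.linspace(0.0, 20.0, 20001)
vals = np.array([expm(t * A) @ C @ expm(t * A.T) for t in ts])
dt = ts[1] - ts[0]
W_quad = dt * (vals.sum(axis=0) - 0.5 * vals[0] - 0.5 * vals[-1])
assert np.allclose(W, W_quad, atol=1e-4)
```

The Lyapunov solve is how the gramian would be computed in practice; the quadrature is only there to confirm it agrees with the integral definition.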
Thus\n\\begin{align*}\nW_c &= \\lim_{t\\to\\infty}\\mathbb{E}[X(t)\\otimes X(t)]\\\\\n & = \\int_{\\mathcal{H}}\\scal{\\cdot}{h}{h}dP_{\\infty}(h)\\\\\n&= \\int_{\\mathcal{X}}\\scal{\\cdot}{K_x}K_xd\\mu_{\\infty}(x)\n\\end{align*}\nwhere the last integral follows pulling $P_{\\infty}$ back to $\\mathcal{X}$ via $\\Phi$, establishing\nthe equivalence between~\\eqref{eqn:infdim-gramian} and ~\\eqref{eqn:covop_gramian}.\n\nGiven that the measure $P_{\\infty}$ has Fourier transform~\\eqref{eqn:inv-meas} and by Assumption~\\ref{ass:ou_proc} is interpreted as the pushforward of $\\mu_{\\infty}$ (that is, for Borel sets $B\\in\\mathcal{B}(\\mathcal{H})$, $P_{\\infty}(B)=(\\Phi_*\\mu_{\\infty})(B)=\\mu_{\\infty}(\\Phi^{-1}(B))$ formally), we\nhave that $\\widetilde{\\mu}_{\\infty}(x) = \\exp\\bigl(-\\tfrac{1}{2}\\scal{W_cK_x}{K_x}\\bigr)$.\n\n\nThe invariant measure $\\mu_{\\infty}$ is defined on a finite dimensional space, so together with\npart (iii) of Assumption~\\ref{ass:ou_proc}, we may consider the corresponding (Radon-Nikodym) density\n$$\\rho_{\\infty}(x) \\propto \\exp\\bigl(-\\tfrac{1}{2}\\scal{W_c^{\\dag}(W_c^{\\dag}W_c)K_x}{K_x}\\bigr)$$\nwhenever the condition~\\eqref{eqn:bounded_pinv} holds. If~\\eqref{eqn:bounded_pinv} does not hold or if we are considering a finite data sample, then we regularize to arrive at\n\\[\n\\rho_{\\infty}(x) \\propto \\exp\\bigl(-\\tfrac{1}{2}\\scal{(W_c + \\lambda I)^{-1}K_x}{K_x}\\bigr)\n\\]\nas discussed in Section~\\ref{sec:linear-sdes} (see Eq.~\\ref{rho_approx}) and Section~\\ref{sec:energy_fns}. 
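As a toy illustration in the simplest case: for the one-dimensional linear SDE $dx = -a\,x\,dt + \sqrt{c}\,dW$ with the linear kernel ($K_x = x$), the gramian is the stationary variance $c/(2a)$, and the regularized expression is a Gaussian density with variance $w + \lambda$. A hedged sketch (all constants are illustrative, and we draw samples directly from the known stationary law rather than simulating the SDE):

```python
import numpy as np

a, c, lam = 1.0, 0.5, 1e-3
w_true = c / (2 * a)              # stationary variance = controllability gramian

rng = np.random.default_rng(0)
xs = rng.normal(0.0, np.sqrt(w_true), size=200_000)
w_hat = np.mean(xs ** 2)          # empirical gramian (linear kernel: K_x = x)

rho_hat = lambda x: np.exp(-0.5 * x ** 2 / (w_hat + lam))   # unnormalized estimate
rho_true = lambda x: np.exp(-0.5 * x ** 2 / w_true)

# log-density ratios between two test points agree closely
r_hat = np.log(rho_hat(1.0) / rho_hat(2.0))
r_true = np.log(rho_true(1.0) / rho_true(2.0))
assert abs(r_hat - r_true) < 0.05
```
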
This density may be\nestimated from data $\\{x_i\\}_{i=1}^N$ since the controllability energy may be estimated from data: at a new point $x$, we have\n\\[\n \\hat{\\rho}_{\\infty}(x) = Z^{-1}\\exp\\bigl(-\\hat{L}_c(x)\\bigr)\n\\]\n where $\\hat{L}_c$ is the empirical approximation computed according to Definition~\\ref{def:rkhs_energies},\nand the constant $Z$ may be either computed analytically in some cases or simply estimated from the data\nsample to enforce summation to unity.\nWe may also estimate, for example, level sets of $\\rho_{\\infty}$ (such as the support) by considering level\nsets of the regularized control energy function estimator, $\\{x\\in\\mathcal{X}~|~ L_{c,m}(x) \\leq \\tau\\}$.\n\n\n\\section{Conclusion}\nTo summarize our contributions, we have introduced estimators for the controllability\/observability energies and the reachable\/observable sets of nonlinear control systems. We showed that the controllability energy\nestimator may be used to approximate the stationary solution of the Fokker-Planck equation governing nonlinear SDEs (and its support).\n\nThe estimators we derived were based on applying linear methods for control and random\ndynamical systems to nonlinear control systems and SDEs, once mapped into an\ninfinite-dimensional RKHS acting as a ``linearizing space''. These results collectively argue that there is a reasonable passage from linear dynamical systems theory to a data-based nonlinear dynamical systems theory through reproducing kernel Hilbert spaces.\n\nWe leave for future work the formulation of data-based estimators for Lyapunov exponents and the controllability\/observability operators $\\Psi_c,\\Psi_o$ associated to nonlinear systems.\n\n\\section*{Acknowledgements}\nWe thank Lorenzo Rosasco and Jonathan Mattingly for helpful discussions. 
BH thanks the European Union for financial support received through an International Incoming Marie Curie Fellowship, and JB gratefully acknowledges support under NSF contracts NSF-IIS-08-03293 and NSF-CCF-08-08847 to M. Maggioni.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Missing Proofs from Section \\ref{sec:prelim}}\n\\subsection{Proof of Proposition~\\ref{prop:monotone}}\n\\label{sec:proof-prop:monotone}\n\\propmonotone*\n\\begin{proof}\nFor all practical purposes we may assume $\\bidi(\\typespacei)$ to be compact.\nFix the distributions $\\distsmi$ and strategies $\\bidsmi(\\cdot)$ of other bidders.\nTo simplify notation when $\\bidsmi(\\cdot)$ is fixed, let the interim allocation $\\alloci(\\bidi)$ be $\\Ex[\\valsmi \\sim \\distsmi]{\\alloci(\\bidi, \\bidsmi(\\valsmi))}$, \nthe interim payment\n$\\paymenti(\\bidi) \\coloneqq \\Ex[\\valsmi \\sim \\distsmi]{\\paymenti(\\bidi, \\bidsmi(\\valsmi))}$,\nand the interim utility\n$\\utili(\\vali, \\bidi) \\coloneqq \\utili(\\vali, \\bidi, \\bidsmi(\\cdot))$.\nWithout loss of generality, we may assume for each $\\vali$, $\\utili(\\vali, \\bidi(\\vali)) = \\max_{\\val \\in \\typespacei} \\utili(\\vali, \\bidi(\\val))$\n(Otherwise we can first readjust $\\bidi(\\cdot)$ this way, which only weakly improves the utility of all types.)\n\nSuppose $\\bidi(\\cdot)$ is non-monotone, i.e., there exist $\\vali' > \\vali$, such that $\\bidi(\\vali') < \\bidi(\\vali)$. \nBy the assumption that $\\utili(\\vali, \\bidi(\\vali)) = \\max_{\\val \\in \\typespacei} \\utili(\\vali, \\bidi(\\val))$ for each $\\vali$, we have \n\\begin{equation}\\label{eq:monotone-maximize-1}\n \\vali x_i(\\bidi(\\vali)) - p_i(\\bidi(\\vali)) \\ge \\vali x_i(\\bidi(\\vali')) - p_i(\\bidi(\\vali'));\n\\end{equation}\n\\begin{equation}\\label{eq:monotone-maximize-2}\n \\vali' x_i(\\bidi(\\vali')) - p_i(\\bidi(\\vali')) \\ge \\vali' x_i(\\bidi(\\vali)) - p_i(\\bidi(\\vali)). 
\n\\end{equation}\nAdding \\eqref{eq:monotone-maximize-1} and \\eqref{eq:monotone-maximize-2}, we obtain\n\\begin{equation}\n (\\vali' - \\vali)[x_i(\\bidi(\\vali')) - x_i(\\bidi(\\vali))] \\ge 0. \n\\end{equation}\nSince $\\vali' > \\vali$, we get \n\\begin{equation}\n x_i(\\bidi(\\vali')) \\ge x_i(\\bidi(\\vali)).\n\\end{equation}\nIn both the first price auction and the all pay auction we also have $x_i(\\bidi(\\vali')) \\le x_i(\\bidi(\\vali))$ because the probability that $i$ receives the item cannot decrease if her bid increases. \nTherefore, it must be\n\\begin{equation}\\label{eq:monotone-equal-alloc}\n x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali)). \n\\end{equation}\n\nPlugging \\eqref{eq:monotone-equal-alloc} into \\eqref{eq:monotone-maximize-1} and \\eqref{eq:monotone-maximize-2}, we obtain\n\\begin{equation}\\label{eq:monotone-equal-pay}\n p_i(\\bidi(\\vali')) = p_i(\\bidi(\\vali)). \n\\end{equation}\n\n\nFor the all pay auction, since bidder $i$ pays her bid whether or not she wins the item, \\eqref{eq:monotone-equal-pay} implies $\\bidi(\\vali)=\\bidi(\\vali')$, a contradiction. \n\nFor the first price auction, for any bid~$\\bid$ made by bidder~$i$, $p_i(\\bid) = \\bid\\cdot x_i(\\bid)$. By \\eqref{eq:monotone-equal-pay}, $\\bidi(\\vali')x_i(\\bidi(\\vali')) = \\bidi(\\vali) x_i(\\bidi(\\vali))$. On the other hand, $x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali))$ and $\\bidi(\\vali')>\\bidi(\\vali)$, so we must have\n\\[x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali)) = 0. \\]\nIn other words, $\\bidi(\\vali)$ must be monotone non-decreasing everywhere except maybe for values whose bids are so low that the bidder does not win and hence obtains zero utility. 
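The exchange step above (adding the two incentive inequalities) can be sanity-checked by brute force; in this sketch, `x1, p1` and `x2, p2` play the roles of the interim allocation and payment at the bids $\bidi(\vali)$ and $\bidi(\vali')$ respectively:

```python
import random

random.seed(0)
checked = 0
for _ in range(100_000):
    v, vp = sorted(random.uniform(0, 1) for _ in range(2))   # values with v < vp
    x1, x2 = random.uniform(0, 1), random.uniform(0, 1)      # interim allocations
    p1, p2 = random.uniform(0, 1), random.uniform(0, 1)      # interim payments
    ic1 = v * x1 - p1 >= v * x2 - p2      # type v weakly prefers its own bid
    ic2 = vp * x2 - p2 >= vp * x1 - p1    # type vp weakly prefers its own bid
    if ic1 and ic2 and vp > v:
        assert x2 >= x1 - 1e-12           # allocation must be weakly monotone
        checked += 1
assert checked > 1000
```
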
\nLetting the bidder bid~$0$ for all values on which her allocation is~$0$ does not affect her utility and yields a monotone bidding strategy.\n\\end{proof}\n\n\n\\section{Missing Proofs from Section \\ref{sec:fpa}}\n\\subsection{Upper Bound}\n\\subsubsection{Proof of Lemma~\\ref{lem:pseudo-dimension-utility-class}}\n\\label{sec:proof-lem:pseudo-dimension-utility-class}\n\\pdimutil*\n\n\n\n\n\n\n\\begin{proof}[Proof]\nWe discussed the case with $n=2$ in Section~\\ref{sec:fpa-upper-bound}. Now we consider the general case with $n > 2$ bidders. \nWe give the proof for the random-allocation tie-breaking rule; \nthe proof for the no-allocation rule is similar (and in fact simpler).\nFor ease of notation, we use $\\Pinputs^k$ to denote $\\samplesmi^{k}$.\nRecall that each $\\Pinputs^k$ is a vector in $\\mathbb R^{n-1}$.\nWe write its $j^{\\text{-th}}$ component as $\\Pinputi[j]^k$.\nWe start with a simple observation: for any $\\vali$ and $\\bids(\\cdot)$, the output of $h^{\\vali, \\bids(\\cdot)}$ on any input \n$\\Pinputs^k$\nmust be one of the following $n+1$ values: \n$\\vali-\\bidi, \\frac{\\vali-\\bidi}{2}, \\ldots, \\frac{\\vali-\\bidi}{n}$, or~$0$; this value is fully determined by the $n-1$ comparisons $\\bidi \\lesseqqgtr \\bidi[j](\\Pinputi[j]^k)$ for each $j\\ne i$. \nWe argue that the hypothesis class $\\mathcal{H}_i$ can be divided into $O(m^{2n})$ sub-classes $\\{\\mathcal{H}_i^{\\mathbf k }\\}_{\\mathbf k \\in [m+1]^{2(n-1)}}$ \nsuch that each sub-class $\\mathcal{H}_i^{\\mathbf k }$ generates at most $O(m^n)$ different label vectors. \nThus $\\mathcal{H}_i$ generates at most $O(m^{3n})$ label vectors in total. \nTo pseudo-shatter $m$~samples, we need $O(m^{3n})\\ge 2^m$, which implies $m = O(n\\log n)$. \n\nWe now define sub-classes $\\{\\mathcal{H}_i^{\\mathbf k}\\}_{\\mathbf k}$, each indexed by $\\mathbf k \\in [m + 1]^{2(n-1)}$. 
\nFor each dimension $j \\ne i$, we sort the $m$ samples by their $j^{\\text{-th}}$ coordinates non-decreasingly, and use $\\pi(j, \\cdot)$ to denote the resulting permutation over $\\{1, 2, \\ldots, m\\}$; formally, let $\\Pinputi[j]^{\\pi(j, 1)} \\le \\Pinputi[j]^{\\pi(j, 2)}\\le \\cdots \\le \\Pinputi[j]^{\\pi(j, m)}$.\nFor each hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot)$ and each $j$, we define two special positions; these positions are similar to the position~$k$ in the case of two bidders, but we now need a pair in order to keep track of ties under the more complex random-allocation tie-breaking rule.\nLet $k_{j, 1} \\coloneqq \\max\\left(\\{0\\} \\cup \\{k: \\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) < \\bidi(\\vali) \\}\\right)$, and let $k_{j, 2} \\coloneqq \\min\\left(\\{m + 1\\} \\cup \\{k: \\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) > \\bidi(\\vali) \\}\\right)$.\nAs in the case of two bidders, this is well defined because of the monotonicity of~$\\bidi[j](\\cdot)$. It also follows that, if $k_{j, 1} < k_{j, 2} - 1$, then for any $k$ such that $k_{j, 1} < k < k_{j, 2}$, we must have $\\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) = \\bidi(\\vali)$.
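A small sketch of these two boundary positions, using 1-based positions as in the proof; here `B` stands for $\bidi(\vali)$ and `bids_sorted` for the nondecreasing sequence $\bidi[j](\Pinputi[j]^{\pi(j,k)})$:

```python
def boundary_positions(bids_sorted, B):
    """Return (k1, k2) with k1 = max({0} U {k: bid < B}) and
    k2 = min({m+1} U {k: bid > B}); positions strictly between them are ties at B."""
    m = len(bids_sorted)
    k1 = max([0] + [k + 1 for k, b in enumerate(bids_sorted) if b < B])
    k2 = min([m + 1] + [k + 1 for k, b in enumerate(bids_sorted) if b > B])
    return k1, k2

bids = [0.1, 0.3, 0.3, 0.3, 0.7]                 # a monotone profile with ties at 0.3
assert boundary_positions(bids, 0.3) == (1, 5)   # positions 2..4 are ties
assert boundary_positions(bids, 0.05) == (0, 1)  # no bid below B
assert boundary_positions(bids, 0.9) == (5, 6)   # no bid above B, so k2 = m + 1
```
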
\nOn the $k^{\\text{-th}}$ sample~$\\Pinputs^k$, \na hypothesis's membership in \n$\\mathcal{H}_i^{\\mathbf k}$ suffices to specify whether bidder~$i$ is a winner on this sample, and, if so, the number of other winning bids at a tie.\nTherefore, the class index~$\\mathbf k$ determines a mapping $c: [m] \\to \\{0, 1, \\ldots, n\\}$, with $c(k) > 0$ meaning bidder~$i$ is a winner on sample~$\\Pinputs^k$ at a tie with $c(k)-1$ other bidders, and $c(k) = 0$ meaning bidder~$i$ is a loser on sample~$\\Pinputs^k$. \nThe output of a hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot) \\in \\mathcal{H}_i^{\\mathbf k}$ on sample~$\\Pinputs^k$ is then $(\\vali - \\bidi(\\vali)) \/ c(k)$ if $c(k) > 0$ and 0 otherwise. \nThe same utility is output on two samples $\\Pinputs^k$ and~$\\Pinputs^{k'}$ whenever $c(k) = c(k')$. \nTherefore, if we look at the labels assigned to a set~$S$ of samples that are mapped to the same nonzero integer by~$c$, there can be at most $|S| + 1 \\leq m + 1$ patterns of labels, because we compare the same utility with $|S|$ witnesses; the set of samples mapped to~$0$ by~$c$ has only one pattern of labels. \nThe vector of labels generated by a hypothesis in such a sub-class is a concatenation of these patterns. \nThe image of~$c$ contains at most $n$ nonzero integers, and so there are at most $(m+1)^n$ label vectors.\n\nTo conclude, the total number of label vectors generated by $\\mathcal{H}_i=\\bigcup_{\\mathbf k } \\mathcal{H}_i^{\\mathbf k }$ is at most \n\\[ (m+1)^{2(n-1)} (m+1)^{n} \\le (m+1)^{3n}. \\]\nTo pseudo-shatter $m$~samples, we need $(m+1)^{3n}\\ge 2^m$, which implies $m=O(n\\log n)$.\n\\end{proof}\n\n\\subsubsection{Proof of Lemma~\\ref{lem:relation-uniform-convergence}}\n\\label{sec:proof-lem:relation-uniform-convergence}\n\\uniformprod*\n\n\\begin{proof}\nThink of the samples $\\samples$ as an $m \\times n$ matrix $(\\samplei^j)$, where each row $j\\in[m]$ represents sample~$\\samples^j$, and each column~$i\\in[n]$ consists of the values sampled from~$\\disti$. 
\nThen we draw $n$ permutations $\\pi_1, ..., \\pi_n$ of $[m]=\\{1, \\ldots, m\\}$ independently and uniformly at random, and permute the $m$ elements in column~$i$ by~$\\pi_i$. \nRegard each new row $j$ as a new sample, denoted by $\\permSamples^j = (\\samplei[1]^{\\pi_1(j)}, \\samplei[2]^{\\pi_2(j)}, ..., \\samplei[n]^{\\pi_n(j)})$. \nGiven $\\pi_1, \\ldots, \\pi_n$, the ``permuted samples'' $\\permSamples^j$, $j=1, \\ldots, m$ then have the same distributions as $m$ i.i.d.\\@ random draws from~$\\dists$. \n\nFor $h \\in \\mathcal{H}$, let $p_h$ be $\\Ex[\\vals\\sim\\dists]{h(\\vals)}$. \nThen by the definition of $(\\epsilon, \\delta)$-uniform convergence (but not on product distribution),\n\\begin{equation}\\label{eq:samples_pi}\n\\Prx[\\samples, \\pi]{\\exists h \\in \\mathcal{H},\\ \\left| p_h - \\frac{1}{m }\\sum_{j=1}^{m} h(\\permSamples^j) \\right|\\ge\\epsilon} \\le \\delta.\n\\end{equation}\n\nFor a set of fixed samples $\\samples = (\\samples^1, \\ldots, \\samples^m)$, recall that $\\empDisti[i]$ is the uniform distribution over $\\{\\samplei[i]^{1}, \\ldots, \\samplei[i]^{m}\\}$, and $\\empDists = \\prod_{i=1}^n \\empDisti[i]$. \nWe show that the expected \nvalue of $h$ on $\\empDists$ satisfies $\\Ex[\\vals\\sim\\empDists]{h(\\vals)} = \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)}$. 
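This identity can also be confirmed by brute force for tiny $m$ and $n$, enumerating all tuples of column permutations (a sketch; the test function `h` is arbitrary):

```python
import numpy as np
from itertools import permutations, product

m, n = 3, 2
rng = np.random.default_rng(0)
S = rng.uniform(size=(m, n))                   # sample matrix; column i holds draws from F_i
h = lambda v: v[0] ** 2 + np.sin(v[1])         # an arbitrary test function

# left side: average over all (m!)^n tuples of column permutations
perm_tuples = list(product(permutations(range(m)), repeat=n))
lhs = np.mean([
    np.mean([h([S[pis[i][j], i] for i in range(n)]) for j in range(m)])
    for pis in perm_tuples
])

# right side: expectation under the empirical product measure (all m^n grid points)
rhs = np.mean([h([S[k[i], i] for i in range(n)])
               for k in product(range(m), repeat=n)])
assert abs(lhs - rhs) < 1e-12
```

Agreement is exact (up to rounding) because each grid point of the empirical product measure appears equally often across the permutation tuples.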
This is because\n\\begin{align*}\n \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)}\n ={}&\\frac{1}{m} \\sum_{j=1}^{m} \\Ex[\\pi]{h(\\permSamples^j)} \\\\\n ={}&\\frac{1}{m}\\sum_{j=1}^m \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n})\\ \\cdot \\\\\n & \\hspace{8em}\n \t\\Prx[\\pi]{\\pi_1(j)=k_1, \\ldots, \\pi_n(j)=k_n} \\\\\n ={}&\\frac{1}{m}\\sum_{j=1}^m \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n})\\cdot \\frac{1}{m^n}\\\\\n ={}&\\frac{1}{m^n} \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n}) \\\\\n ={}&\\Ex[\\vals \\sim \\empDists]{h(\\vals)}.\n\\end{align*}\n\nThus, \n\\begin{align*}\n \\left| p_h - \\Ex[\\vals\\sim\\empDists]{h(\\vals)}\\right| \n ={}&\\left| p_h - \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)} \\right| \\\\\n \\le{}& \\Ex[\\pi]{ \\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|}\\\\\n \\le{}& \\Prx[\\pi]{ \\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon}\\cdot H \\\\\n & \\hspace{1em} + \\left(1-\\Prx[\\pi]{\\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon}\\right)\\cdot\\epsilon \\\\\n \\le{}& \\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\cdot H + \\epsilon, \n\\end{align*}\nwhere in the last step we define the event\n\\[ \\mathrm{Bad}(h, \\pi, \\samples) = \\mathbb{I}\\left[\\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon\\right].\\]\nBy simple calculation, whenever $\\left| p_h - \\Ex[\\vals\\sim\\empDists]{h(\\vals)}\\right| \\ge 2\\epsilon$, we have $\\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\ge \\epsilon\/H$.\n\nFinally, consider the random draw $\\samples\\sim\\dists$:\n\\begin{align*}\n \\Prx[\\samples]{\\exists h\\in \\mathcal{H}, \\ \\left| p_h - \\Ex[\\vals \\sim \\empDists]{h(\\vals)} \\right|\\ge 2\\epsilon} \n \\le{}& \\Prx[\\samples]{\\exists h \\in \\mathcal{H}, \\ 
\\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\ge \\frac{\\epsilon}{H} } \n \\\\\n \n \\le{}& \\Prx[\\samples]{\\Prx[\\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } \\ge \\frac{\\epsilon}{H}}.\n \\end{align*}\n \n By Markov's inequality, this is in turn upper bounded by\n \n \n \\begin{align*}\n \\frac{H}{\\epsilon}\\Ex[\\samples]{\\Prx[\\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } }\n \n ={}& \\frac{H}{\\epsilon}\\Prx[\\samples, \\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } \\\\\n \n \\le{}& \\frac{H\\delta}{\\epsilon} && \\text{ By \\eqref{eq:samples_pi}} \n\\end{align*}\n\\end{proof}\n\n\\subsection{Lower Bound: Proof of Theorem~\\ref{thm:lower-bound-learning-util}}\n\\label{sec:proof-thm:lower-bound-learning-utility}\n\n\\lowerbound*\n\nFixing $\\epsilon > 0$, fixing $c_1 = 2000$, we first define two value distributions.\nLet $\\dist^+$ be a distribution supported on $\\{0, 1\\}$, and for $\\val \\sim \\dist^+$, $\\Prx{\\val = 0} = 1 - \\frac{1 + c_1 \\epsilon}{n}$, and $\\Prx{\\val = 1} = \\frac{1 + c_1 \\epsilon}{n}$. \nSimilarly define $\\dist^-$: for $\\val \\sim \\dist^-$, $\\Prx{\\val = 0} = 1 - \\frac{1 - c_1 \\epsilon}{n}$, and $\\Prx{\\val = 1} = \\frac{1 - c_1 \\epsilon}{n}$. 
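These two Bernoulli-type distributions are close in KL divergence; the $O(\epsilon^2/n)$ rate claimed below can be checked numerically (the values of $n$ and $\epsilon$ are illustrative, chosen so that $c_1\epsilon < \frac{1}{2}$ as required):

```python
import math

c1 = 2000   # as fixed in the construction

def kl_bernoulli(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

for n in (10, 100, 1000):
    for eps in (1e-5, 1e-4, 2e-4):             # keeps c1 * eps < 1/2
        p_plus = (1 + c1 * eps) / n            # Pr[v = 1] under F^+
        p_minus = (1 - c1 * eps) / n           # Pr[v = 1] under F^-
        assert kl_bernoulli(p_plus, p_minus) <= 10 * c1 ** 2 * eps ** 2 / n
```
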
\n\nLet $\\kl(\\dist^+; \\dist^-)$ denote the KL-divergence between the two distributions.\n\\begin{claim}\n\\label{cl:lb-kl}\n$\\kl(\\dist^+; \\dist^-)= O(\\frac {\\epsilon^2}{n})$.\n\\end{claim}\n\\begin{proof}\nBy definition,\n\\begin{align*}\n \\kl(\\dist^+; \\dist^-) \n ={}& \\frac{1 + c_1\\epsilon}{n} \\ln \\left( \\frac{1 + c_1\\epsilon}{1 - c_1\\epsilon} \\right) \n + \\frac{n - 1 - c_1 \\epsilon}{n} \\ln \\left(\\frac{n - 1 - c_1 \\epsilon}{n - 1 +c_1 \\epsilon}\\right) \\\\\n ={}& \\frac 1n \\ln \\left( \\frac {1 + c_1\\epsilon}{1 - c_1 \\epsilon} \\cdot \\frac{(1 - \\frac{c_1 \\epsilon}{n - 1})^{n-1}}{(1 + \\frac{c_1 \\epsilon}{n-1})^{n-1}}\n \\right)\n + \\frac {c_1 \\epsilon}{n} \\ln \\left(\\frac{1 + c_1 \\epsilon}{1 - c_1 \\epsilon} \\cdot \n \\frac{1 + \\frac{c_1 \\epsilon}{n-1}}{1 - \\frac{c_1 \\epsilon}{n-1}}\\right) \\\\\n \\leq{}& \\frac 1n \\ln \\left( \\frac {1 + c_1\\epsilon}{1 - c_1 \\epsilon} \\cdot \\frac{\\left(1 - \\frac{c_1 \\epsilon}{n - 1}\\right)^{n-1}}{1 + c_1 \\epsilon} \\right)\n + \\frac {2c_1 \\epsilon}{n} \\ln \\left(1 + \\frac{2c_1 \\epsilon}{1 - c_1 \\epsilon} \\right)\n\\\\\n \\leq{}& \\frac 1 n \\ln \\left( \n \\frac{1 - c_1 \\epsilon + \\frac 1 2 (c_1 \\epsilon)^2}{1 - c_1 \\epsilon}\n \\right) + \\frac{8c_1^2 \\epsilon^2}{n} \\\\\n \\leq{}& \\frac{10c_1^2 \\epsilon^2}{n}.\n\\end{align*}\nIn the last two inequalities we used $c_1 \\epsilon < \\frac 1 2$ and $\\ln (1+x) \\leq 1+x$ for all $x > 0$.\n\\end{proof}\n\nIt is well known that upper bounds on KL-divergence implies information theoretic lower bound on the number of samples to distinguish distributions \\cite[e.g.][]{MansourNotes}.\n\\begin{corollary}\n\\label{cor:lb-kl-single}\nGiven $t$ i.i.d.\\@ samples from $\\dist^+$ or~$\\dist^-$, if $t \\leq \\frac{n}{80c_1^2 \\epsilon^2}$, no algorithm~$\\mathcal{H}$ that maps samples to $\\{\\dist^+, \\dist^-\\}$ can do the following: when the samples are from~$\\dist^+$, $\\mathcal{H}$ outputs~$\\dist^+$ with probability at 
least $\\frac 2 3$, and if the samples are from~$\\dist^-$, $\\mathcal{H}$ outputs~$\\dist^-$ with probability at least~$\\frac 2 3$. \n\\end{corollary}\n\nWe now construct product distributions using $\\dist^+$ and~$\\dist^-$. \nFor any $S \\subseteq [n - 1]$, define the product distribution $\\dists_S$ to be $\\prod_i \\disti$ where $\\disti = \\dist^+$ if $i \\in S$, $\\disti = \\dist^-$ if $i \\in [n-1] \\setminus S$, and $\\disti[n]$ is a point mass on value~$1$.\nFor any $j \\in [n - 1]$ and $S \\subseteq [n - 1]$, distinguishing $\\dists_{S \\cup \\{j\\}}$ and $\\dists_{S \\setminus \\{j\\}}$ by samples from the product distribution is no easier than distinguishing $\\dist^+$ and $\\dist^-$, because the coordinates of the samples not from $\\disti[j]$ contain no information about~$\\disti[j]$. \n\n\\begin{corollary}\n\\label{cor:lb-kl}\nFor any $j \\in [n - 1]$ and $S \\subseteq [n - 1]$, given $t$ i.i.d.\\@ samples from $\\dists_{S \\cup \\{j\\}}$ or $\\dists_{S \\setminus \\{j\\}}$, if $t \\leq \\frac n {80 c_1^2 \\epsilon^2}$, no algorithm~$\\mathcal{H}$ can do the following: when the samples are from $\\dists_{S \\cup \\{j\\}}$, $\\mathcal{H}$ outputs~$\\dists_{S \\cup \\{j\\}}$ with probability at least $\\frac 2 3$, and when the samples are from $\\dists_{S \\setminus \\{j\\}}$, $\\mathcal{H}$ outputs~$\\dists_{S \\setminus \\{j\\}}$ with probability at least~$\\frac 2 3$.\n\\end{corollary}\n\nWe now use Corollary~\\ref{cor:lb-kl} to derive an information-theoretic lower bound on learning utilities for monotone bidding strategies, for distributions in $\\{\\dists_S\\}_{S \\subseteq [n]}$.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:lower-bound-learning-util}]\nWithout loss of generality, assume $n$ is odd. \nLet $S$ be an arbitrary subset of~$[n - 1]$ of size either $\\lfloor n\/2 \\rfloor$ or $\\lceil n\/ 2 \\rceil$.\nWe focus on the interim utility of bidder~$n$ with value~$1$ and bidding $\\frac 1 2$. 
\nDenote this bidding strategy by~$\\bidi[n](\\cdot)$.\nThe other bidders may adopt one of two bidding strategies.\nOne of them is $\\bid^+(\\cdot)$: $\\bid^+(0) = 0$ and $\\bid^+(1) = \\frac 1 2 + \\eta$ for sufficiently small $\\eta > 0$. \nThe other bidding strategy $\\bid^-(\\cdot)$ maps all values to~$0$.\nFor $T \\subseteq [n-1]$, let $\\bids_T(\\cdot)$ be the profile of bidding strategies where $\\bidi(\\cdot) = \\bid^+(\\cdot)$ for $i \\in T$, and $\\bidi(\\cdot) = \\bid^-(\\cdot)$ for $i \\notin T$. \n\n\nFor the distribution $\\dists_S$, \n\\begin{align*}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot)\\right) \n ={}& \\frac 1 2 \\Prx{\\max_{i \\in T} \\vali = 0} \\\\\n ={}& \\frac 1 2 \n \\left(1 - \n \\frac{1 + c_1 \\epsilon}{n}\n \\right)^{|S \\cap T|}\n \\left(1 - \n \\frac{1 - c_1 \\epsilon}{n}\n \\right)^{|T\\setminus S|}\n \\\\\n ={}& \\frac 1 2\n \\left(\n 1 - \\frac{1 + c_1 \\epsilon}{n}\n \\right)^{|T|}\n \\left(\n \\frac{n - 1 + c_1 \\epsilon}{n - 1 - c_1 \\epsilon}\n \\right)^{|T \\setminus S|}.\n\\end{align*}\nTherefore, for $T, T' \\subseteq [n-1]$ with $|T| = |T'| $,\n\\begin{align*}\n \\frac{\\utili[n](1, \\frac 1 2, \\bids_T(\\cdot))}{\\utili[n](1, \\frac 1 2, \\bids_{T'}(\\cdot))} ={}& \\left(\n 1 + \\frac{2c_1 \\epsilon \/ (n-1)}{1 - \\frac{c_1 \\epsilon}{n-1}} \n \\right)^{|T\\setminus S| - |T' \\setminus S|} \\\\\n \\geq{}& 1 + \\frac{2c_1 \\epsilon}{n-1} \\cdot (|T \\setminus S| - |T' \\setminus S|);\n\\end{align*}\nSuppose $|T \\setminus S| \\geq |T' \\setminus S|$ and $|T| = |T'| \\geq \\lfloor \\frac n 2 \\rfloor$, then\n\\begin{align}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot)\\right) - \\utili[n]\\left(1, \\frac 1 2, \\bids_{T'}(\\cdot)\\right) \\ge{}& (|T \\setminus S| - |T' \\setminus S|) \\cdot \\frac {2c_1 \\epsilon}{n-1} \\cdot \\utili[n]\\left(1, \\frac 1 2, \\bids_{T'}(\\cdot) \\right) \n\\notag \\\\\n\\geq{}& (|T \\setminus S| - |T' \\setminus S|) \\cdot \\frac {2c_1 \\epsilon}{n-1} \\cdot \\frac 1 {8 e^2}, 
\n\\label{eq:util-diff-eps}\n\\end{align}\nwhere the last inequality holds because $\\utili[n](1, \\frac 1 2, \\bids_{T'}(\\cdot)) \\ge \\frac{1}{2} (1 - \\frac{2}{n})^n = \\frac{1}{2} [(1 - \\frac{2}{n})^\\frac{n}{2}]^2\\ge \\frac{1}{2} (\\frac{1}{2e})^2 = \\frac{1}{8e^2}$. \n\nNow suppose an algorithm~$\\mathcal{A}$ $(\\epsilon, \\delta)$-learns the utilities of all monotone bidding strategies with $t$ samples~$\\samples$ for $t \\leq \\frac{n}{80c_1^2 \\epsilon^2}$.\nDefine $\\mathcal{H}: \\mathbb R_+^{n \\times t} \\times \\mathbb N \\to 2^{[n-1]}$ to be the function that outputs, among all $T\\subseteq [n-1]$ of size~$k$, the set~$T$ maximizing the utility that $\\mathcal{A}$ estimates for bidder~$n$ when the other bidders bid according to $\\bids_T(\\cdot)$. \nFormally, \n\\begin{align*}\n \\mathcal{H}(\\samples, k) = \\argmax_{T \\subseteq [n-1], |T| = k} \\mathcal{A}\\left(\\samples, n, 1, (\\bids_{T}(\\cdot), \\bidi[n](\\cdot)) \\right).\n\\end{align*}\n\nBy Definition~\\ref{def:util-learn-ensemble}, for any $S$ with $|S| = \\lfloor n \/ 2 \\rfloor$, for samples drawn from~$\\dists_S$, with probability at least $1 - \\delta$,\n\\begin{equation*}\n \\mathcal{A}\\left(\\samples, n, 1, (\\bids_{[n-1]\\setminus S}(\\cdot), \\bidi[n](\\cdot))\\right)\n \\geq \\utili[n]\\left(1, \\frac 1 2, \\bids_{[n-1] \\setminus S}(\\cdot) \\right) - \\epsilon;\n\\end{equation*}\nand for any $T \\subseteq[n-1]$ with $|T| = \\lceil n \/ 2 \\rceil$,\n\\begin{equation*}\n \\mathcal{A}\\left(\\samples, n, 1, (\\bids_T(\\cdot), \\bidi[n](\\cdot))\\right)\n \\leq \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot) \\right) + \\epsilon.\n\\end{equation*}\nTherefore, for $W = \\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$, \n\\begin{align*}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_W(\\cdot) \\right) \\geq \n \\utili[n] \\left(1, \\frac 1 2, \\bids_{[n-1]\\setminus S}(\\cdot) \\right) - 2\\epsilon.\n\\end{align*}\nSince $|W| = |[n-1]\\setminus S| = \\lceil n \/ 2 \\rceil$, by \\eqref{eq:util-diff-eps},\n\\begin{align*}\n \\left(\\lceil \\frac n 2 \\rceil - 
|W \\setminus S| \\right) \\cdot \\frac{c_1 \\epsilon}{(n-1)4e^2} \\leq 2\\epsilon.\n\\end{align*}\nSo\n\\begin{align*}\n |W \\cap S| \\leq (n - 1) \\cdot \\frac{8e^2}{c_1}.\n\\end{align*}\nIn other words, with probability at least $ 1- \\delta$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is the complement of~$S$ except for at most a $\\frac{8e^2}{c_1}$ fraction of the coordinates in $[n-1]$.\n\nSimilarly, for $S$ of cardinality $\\lceil n \/ 2 \\rceil$, \n\\begin{align*}\n |\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil) \\cap S| \\leq (n - 1) \\cdot \\frac{8e^2}{c_1} + 1.\n\\end{align*}\nTake $c_2$ to be $\\frac{8e^2}{c_1}$; by the choice of~$c_1$ we have $c_2<\\frac 1 {20}$. For all large enough $n$ and all $S$ of size~$\\lfloor n \/ 2 \\rfloor$ or $\\lceil n \/ 2 \\rceil$, with probability at least $1 - \\delta$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ correctly outputs the elements not in~$S$ with the exception of at most a $c_2$ fraction of the coordinates.\n\nLet $\\mathcal {S}$ be the set of all subsets of $[n-1]$ of size either $\\lceil n \/ 2 \\rceil$ or $\\lfloor n \/ 2 \\rfloor$.\nConsider any~$S \\in \\mathcal {S}$. \nLet $\\theta(S) \\subseteq [n-1]$ denote the set of coordinates whose memberships in~$S$ are correctly predicted by $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ with probability at least $2\/3$; that is, $i \\in \\theta(S)$ if{f} with probability at least $2\/3$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is correct about whether $i \\in S$. \nLet the cardinality $|\\theta(S)|$ be $z(n-1)$. 
Suppose we draw coordinate $i$ uniformly at random from $[n-1]$, and independently draw samples $\\samples$ from $\\dists_S$. Then the probability that $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is correct about whether $i\\in S$ satisfies:\n\\begin{align*}\n \\Prx[i, \\samples]{\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)\\text{ is correct about whether }i\\in S}\n \\geq{}& (1 - c_2) (1 - \\delta) \\\\\n \\geq{}& 0.9,\n\\end{align*}\nand \n\\begin{align*}\n \\Prx[i, \\samples]{\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)\\text{ is correct about whether }i\\in S}\n \\le{}& \\Prx[i]{i\\in \\theta(S)}\\cdot 1 + \\Prx[i]{i\\notin \\theta(S)}\\cdot\\frac{2}{3} \\\\\n ={}& z\\cdot 1 + (1-z)\\cdot \\frac 2 3.\n\\end{align*} \nCombining the two bounds gives $z \\geq 0.7$; in particular, $z > 0.6$. \nIf a pair of sets $S$ and~$S'$ differ in only one coordinate~$i$, and $i \\in \\theta(S) \\cap \\theta(S')$, then $\\mathcal{H}(\\cdot)$ serves as an algorithm that tells apart $\\dists_S$ and~$\\dists_{S'}$, contradicting Corollary~\\ref{cor:lb-kl}. \nWe now show, with a counting argument, that such a pair $S$, $S'$ must exist.\n\nSince for each $S \\in \\mathcal {S}$, $|\\theta(S)| \\geq 0.6(n-1)$, there exists a coordinate $i \\in [n-1]$ and $\\mathcal T \\subseteq \\mathcal {S}$, with $|\\mathcal T| \\geq 0.6 |\\mathcal {S}|$, such that for each $S \\in \\mathcal T$, $i \\in \\theta(S)$. \nBut $\\mathcal {S}$ can be decomposed into $|\\mathcal {S}| \/ 2$ pairs of sets, such that within each pair, the two sets differ by one in size, and precisely one of them contains coordinate~$i$. 
\nTherefore among these pairs there must exist one $(S, S')$ with $S, S' \\in \\mathcal T$ (otherwise $|\\mathcal T| \\leq |\\mathcal {S}| \/ 2 < 0.6 |\\mathcal {S}|$), i.e., $i \\in \\theta(S)$ and $i \\in \\theta(S')$.\nUsing $\\mathcal{H}$, which is induced by~$\\mathcal{A}$, we can tell apart $\\dists_S$ and~$\\dists_{S'}$ with probability at least $2\/3$, which is a contradiction to Corollary~\\ref{cor:lb-kl}.\nThis completes the proof of Theorem~\\ref{thm:lower-bound-learning-util}.\n\\end{proof}\n\n\n\\section{Missing Proofs from Section \\ref{sec:search}}\n\\subsection{Proof of Theorem \\ref{thm:epsNEpoa}}\n\\label{sec:proof-thm:epsNEpoa}\n\\epsNEpoa*\n\nRecall that ${\\mathbb{A}}_i(\\dstrats)$ indicates whether bidder~$i$ receives the item, and \n${\\mathbb{I}}_i(\\dstrats)$ indicates whether bidder~$i$ inspects her value.\nThe expected utility of bidder~$i$ can be decomposed into a welfare term and a payment term, as follows:\n\\[\\utili^{\\DA(\\dists, \\costs)}(\\dstrats) = \\Ex{{\\mathbb{A}}_i(\\dstrats)(\\vali - \\dbidi(\\vali)) - {\\mathbb{I}}_i(\\dstrats)\\costi} = \\Ex{{\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi} - \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)},\\]\nwhere the randomness is over $\\vals\\sim\\dists$ and the randomness of mixed strategies.\nThe social welfare, in turn, can be expressed as the sum of the bidders' utilities and payments: \n\\begin{align*}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) = \\sum_{i=1}^n\\Ex{ {\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi}\n = \\sum_{i=1}^n \\left[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) + \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}\\right].\n\\end{align*}\n\nNow suppose $\\dstrats$ is an $\\epsilon$-NE; then \n\\begin{align}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) & \\ge \\sum_{i=1}^n \\left[\\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi)-\\epsilon + \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}\\right] \\nonumber \\\\\n & = \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) + 
\\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)} - n\\epsilon, \n\\label{eq:SW-eps-NE}\n\\end{align}\nfor any set of strategies $\\{\\dstrati'\\}_{i\\in[n]}$.\n\nFor each $i$, let $\\thresholdi$ be the index of $(\\disti, \\costi)$. \nRecall that we use $\\kappa_i$ to denote $\\min\\{\\vali, \\thresholdi\\}$. \nWe will construct strategies $\\dstrati'$ that satisfy the following inequality:\n\\begin{equation}\n\\label{eq:smoothness-goal}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) \\ge (1-\\frac{1}{e})\\Ex{\\max_{i\\in[n]} \\kappa_i} - \\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}. \n\\end{equation}\nBy \\eqref{eq:SW-eps-NE} and \\eqref{eq:smoothness-goal} we have\n$\\SW^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge (1-\\frac{1}{e})\\Ex{\\max_{i\\in[n]} \\kappa_i} - n\\epsilon$. Since $\\Ex{\\max_{i\\in[n]} \\kappa_i} = \\OPT^{(\\dists, \\costs)}$ (Lemma~\\ref{lem:optimal-welfare}), the theorem is proved. \n\nNow we construct $\\dstrati'$. Each $\\dstrati'$ is a mixed strategy that does the following: sample a random variable $Z\\in [\\frac{1}{e}, 1]$ with probability density function $f_Z(z) = \\frac{1}{z}$ (a valid density, since $\\int_{1\/e}^{1} \\frac{\\dd z}{z} = 1$); inspect the value at threshold price $\\dtimei'=(1-Z)\\thresholdi$; and claim the item at price $\\dbidi'(\\vali) = (1-Z)\\kappa_i$. Note that $\\dstrati'$ claims above $\\thresholdi$, and we will make use of the following property of strategies that claim above $\\thresholdi$: \n\\begin{claim}\\label{cl:utility-pandora}\nFor any strategy $\\dstrati'$ that claims above $\\thresholdi$, \n$\\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)\\vali - {\\mathbb{I}}_i(\\dstrati', \\dstratsmi)\\costi} = \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)\\kappa_i}$.\n\\end{claim}\n\\begin{proof}\nFor convenience we write ${\\mathbb{A}}_i = {\\mathbb{A}}_i(\\dstrati', \\dstratsmi)$ and ${\\mathbb{I}}_i = {\\mathbb{I}}_i(\\dstrati', \\dstratsmi)$. 
By linearity of expectation and the definition of index,\n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i}\\costi \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i}\\Ex[\\vali\\sim\\disti]{\\max\\{\\vali - \\thresholdi, 0\\}}. \n\\end{align*}\nNote that $\\vali$ and ${\\mathbb{I}}_i$ are independent because bidder~$i$ doesn't know her value before inspection. Thus\n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i\\max\\{\\vali - \\thresholdi, 0\\}} \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i \\max\\{\\vali - \\thresholdi, 0\\}} \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{A}}_i \\max\\{\\vali - \\thresholdi, 0\\} + ({\\mathbb{A}}_i - {\\mathbb{I}}_i) \\max\\{\\vali - \\thresholdi, 0\\}}. \n\\end{align*}\nBecause $\\dstrati'$ claims above $\\thresholdi$, we have ${\\mathbb{A}}_i = {\\mathbb{I}}_i$ whenever $\\vali>\\thresholdi$. This implies $({\\mathbb{A}}_i - {\\mathbb{I}}_i) \\max\\{\\vali - \\thresholdi, 0\\} = 0$ and \n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n = \\Ex{{\\mathbb{A}}_i(\\vali - \\max\\{\\vali - \\thresholdi, 0\\})}\n = \\Ex{{\\mathbb{A}}_i\\kappa_i}.\n\\end{align*}\n\\end{proof}\n\nNow we argue that the $\\{\\dstrati'\\}_{i\\in[n]}$ constructed above satisfy \\eqref{eq:smoothness-goal}. By Claim~\\ref{cl:utility-pandora}, we have $\\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) = \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))}$. 
Summing over $i\\in[n]$, \n\\begin{align*}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) & = \\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))} \\\\ \n & = \\Ex{ \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))} \\\\\n & = \\Ex{ \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrati', \\dstratsmi) Z\\kappa_i}.\n\\end{align*}\nFor any fixed value profile $\\vals=(\\vali)$, let $i^*\\coloneqq \\argmax_{i\\in[n]}\\{\\kappa_i\\}$. Since ${\\mathbb{A}}_i(\\dstrati', \\dstratsmi) Z\\kappa_i \\ge 0$, we have \n\\begin{align}\\label{eq:utility-maxindex}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) \\ge \\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}}.\n\\end{align}\n\\begin{claim}\\label{cl:smoothness-step}\nFor any $\\vals$, $\\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} \\ge (1-\\frac{1}{e}) \\kappa_{i^*} - \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)$. 
\n\\end{claim}\n\\begin{proof}\nLet $p\\coloneqq \\max_{j\\ne {i^*}} \\dbidi[j](\\vali[j])$.\nIf $p\\ge (1-\\frac{1}{e})\\kappa_{i^*}$, then $\\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} \\ge 0 \\ge (1-\\frac{1}{e}) \\kappa_{i^*} - p$.\nOtherwise, note that whenever bidder~$i^*$'s bid $(1-Z)\\kappa_{i^*}$ is above $p$, she wins the item, thus\n\\begin{align*}\n \\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} & = \\int_{1\/e}^{1-p\/\\kappa_{i^*}} z\\kappa_{i^*} f_Z(z) \\dd z = \\int_{1\/e}^{1-p\/\\kappa_{i^*}} z\\kappa_{i^*} \\frac{1}{z} \\dd z \\\\\n & = (1- \\frac{1}{e} - \\frac{p}{\\kappa_{i^*}}) \\kappa_{i^*} = (1- \\frac{1}{e})\\kappa_{i^*} - p.\n\\end{align*}\nThe proof is completed by observing that $\\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali) = \\max_{i\\in[n]} \\dbidi(\\vali) \\ge p$.\n\\end{proof}\nTaking expectations over $\\vals\\sim\\dists$, we see that \\eqref{eq:utility-maxindex} and Claim~\\ref{cl:smoothness-step} immediately imply \\eqref{eq:smoothness-goal}. \n\n\n\\subsection{Pandora's Box Problem and Its Sample Complexity}\n\\label{sec:pandora}\n\\input{pandora.tex}\n\n\n\\subsection{Descending Auction with Search Costs}\n\\label{sec:fpa-pandora}\nIn this section, we briefly review the main results of \\citet{KWW16} in Section~\\ref{sec:KWW16}, and then in Section~\\ref{sec:sample-KWW16} present our learning results in auctions with search costs.\nRecall that in this setting, we consider a single-item auction, where each bidder~$i$ has a value~$\\vali \\in [0, H]$ drawn independently from distribution~$\\disti$, but \n$\\vali$ is not known to anyone at the beginning of the auction. \nIn order to observe the value, bidder~$i$ needs to pay a known search cost $\\costi\\in[0, H]$. 
\n\n\n\\subsubsection{Transformation with Distributional Knowledge}\n\\label{sec:KWW16}\n\n\\paragraph{Descending auction with search costs.} \nIn a \\emph{descending auction} (or Dutch auction), a publicly visible price descends continuously from~$H$. \nAt any point, any bidder may claim the item at the current price. \nWith search costs, a bidder's strategy $\\dstrati$ consists of two parts:\\footnote{Note that there is no private information at the beginning of the auction.} a threshold price $\\dtimei$ and a mapping $\\dbidi(\\cdot)$ from values to bids. \nConcretely, bidder~$i$ decides to inspect when the price descends to~$\\dtimei$, at which point she pays the search cost and immediately learns her value~$\\vali$. \nAfter seeing her value, the bidder chooses a purchase price $\\dbidi(\\vali) \\leq \\dtimei$ at which to claim the item.\nThe latter is equivalent to submitting a bid $\\dbidi(\\vali)\\le\\dtimei$. \n\nWe say a strategy $\\dstrati=(\\dtimei, \\dbidi(\\cdot))$ is \\emph{monotone} if $\\dbidi(\\cdot)$ is monotone non-decreasing. A strategy is \\emph{mixed} if it is a distribution over pure strategies. Mixed strategies allow bidders to randomize over the threshold price $\\dtimei$ and the purchase price $\\dbidi(\\vali)$. Abusing notation, we also use $\\dstrati$ to denote a mixed strategy.\nWe say a \\emph{mixed} strategy $\\dstrati$ is \\emph{monotone} if it is a distribution over monotone pure strategies. \n\nWe use $\\DA(\\dists, \\costs)$ to denote a descending auction on value distributions $\\dists$ with search costs $\\costs$, and let $\\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\dstratsmi)$ be the expected utility of bidder $i$ when bidders use strategies $\\dstrats=(\\dstrati, \\dstratsmi)$ and their values are drawn from $\\dists$. Note that this utility is ex ante, since the value is unknown until the bidder searches. 
\nThe solution concept we consider is therefore a Nash equilibrium instead of a Bayes Nash equilibrium. \n\n\\begin{defn} \nIn $\\DA(\\dists, \\costs)$, a (mixed) strategy profile $\\dstrats$ is an \\emph{$\\epsilon$-Nash equilibrium} ($\\epsilon$-NE) if for each bidder~$i$ and any strategy $\\dstrati'$, \n\\[\\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\dstratsmi) \\ge \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) - \\epsilon.\\]\nIf $\\epsilon = 0$, $\\dstrats$ is a \\emph{Nash equilibrium}.\n\\end{defn}\n\nWe use $\\FPA(\\dists)$ to denote the first price auction with value distributions~$\\dists$. \nDenote by $\\utili^{\\FPA(\\dists)}(\\fstrats)$ the (ex ante) expected utility of bidder~$i$ in $\\FPA(\\dists)$, when the bidders use strategy profile~$\\fstrats$. \nWe can similarly define the Nash equilibrium for a first price auction. \n\\begin{defn} \nIn $\\FPA(\\dists)$, a (mixed) strategy profile $\\fstrats$ is an \\emph{$\\epsilon$-Nash equilibrium} ($\\epsilon$-NE) if for each bidder~$i$ and any strategy $\\fstrati'$,\n\\[\\utili^{\\FPA(\\dists)}(\\fstrati, \\fstratsmi) \\ge \\utili^{\\FPA(\\dists)}(\\fstrati', \\fstratsmi) - \\epsilon.\\]\nIf $\\epsilon = 0$, $\\fstrats$ is a \\emph{Nash equilibrium}.\n\\end{defn}\n\nNote that Nash equilibrium is an ex ante notion, in contrast to BNE (Definition~\\ref{def:bne}), which is an interim notion and requires every type to best respond.\nIn a first price auction, an $\\epsilon$-BNE must be an $\\epsilon$-NE, but the reverse is not true.\n\nWith no search cost, the Dutch auction is well known to be equivalent to a first price auction. \nGiven a Dutch auction with search costs, \\citet{KWW16} constructed a first price auction with transformed value distributions and no search costs, and showed that an NE of this FPA corresponds to an NE of the Dutch auction with search costs. 
\n\n\n\\begin{defn}\\label{def:fpa-pandora-index}\nGiven a distribution $\\disti$ and a search cost $\\costi$, define the \\emph{index} $\\thresholdi$ of $(\\disti, \\costi)$ to be the unique solution to $\\Ex[\\vali\\sim\\disti]{ \\max\\{\\vali - \\thresholdi, 0\\} } = \\costi$. If $\\costi=0$, let $\\thresholdi=H$. \nWe always assume $\\Ex[\\vali\\sim\\disti]{\\vali}\\ge \\costi$, so that $\\thresholdi\\in[0, H]$. (Otherwise the search cost would be so high that the bidder should never search for the value.)\n\\end{defn}\n\nFor a distribution $\\dist$ and some $\\threshold\\in \\mathbb R$, we define \n$\\dist^{\\threshold}$ to be the distribution of $\\min\\{\\val, \\threshold\\}$ where $\\val\\sim \\dist$. \nFor a product distribution~$\\dists$ and a vector $\\thresholds$, we use $\\dists^{\\thresholds}$ to denote the product distribution whose $i^{\\text{th}}$ component is $\\disti^{\\thresholdi}$. \nA key insight of \\citet{KWW16} is a pair of utility-preserving mappings between strategies in $\\DA(\\dists, \\costs)$ and $\\FPA(\\dists^{\\thresholds})$, where $\\thresholds$ is the vector of indices for $(\\dists, \\costs)$.\n\n\n\\begin{defn}\\label{def:strategy-mappings}\nFor each bidder $i$, given distribution $\\disti$ and $\\thresholdi \\in [0, H]$, define two mappings:\\footnote{We describe mappings for pure strategies here. For mixed strategies, their images are naturally distributions over the images of pure strategies under $\\lambda$ and $\\mu$.}\n\\begin{enumerate} \n\\item $\\lambda^{\\thresholdi}$: a monotone strategy $\\fstrati:[0, \\thresholdi]\\to\\mathbb R_+$ in $\\FPA(\\dists^{\\thresholds})$ is mapped by $\\lambda^{\\thresholdi}$ to \nthe strategy in $\\DA(\\dists, \\costs)$ with threshold price $\\dtimei=\\fstrati(\\thresholdi)$ and bidding function $\\dbidi(\\vali) = \\fstrati(\\min \\{\\vali, \\thresholdi\\})$. \n(By the monotonicity of~$\\fstrati$, we have $\\dbidi(\\vali)\\le \\dtimei$.) 
\n\n\\item $\\mu^{(\\disti, \\thresholdi)}$: a strategy $\\dstrati = (\\dtimei, \\dbidi(\\cdot))$ in $\\DA(\\dists, \\costs)$ is mapped by $\\mu^{(\\disti, \\thresholdi)}$ to a strategy $\\fstrati=\\mu^{(\\disti, \\thresholdi)}(\\dstrati)$ in $\\FPA(\\dists^{\\thresholds})$, with $\\fstrati(\\vali) = \\dbidi(\\vali)$ for $\\vali<\\thresholdi$ and $\\fstrati(\\thresholdi)=\\dbidi(\\vali')$, where $\\vali'$ is a random variable drawn from~$\\disti$ conditioned on $\\vali' \\ge \\thresholdi$.\n\\end{enumerate}\n\\end{defn}\n\n\\noindent The superscripts $\\thresholdi$ and $(\\disti, \\thresholdi)$ should make it clear that the mapping~$\\lambda^{\\thresholdi}$ is determined solely by~$\\thresholdi$, while $\\mu^{(\\disti, \\thresholdi)}$ depends on both the distribution and~$\\thresholdi$. \n\nA strategy $\\dstrati = (\\dtimei, \\dbidi(\\cdot))$ in a descending auction is said to \\emph{claim above $\\thresholdi$} \nif $\\dbidi(\\vali) =\\dtimei$ for all $\\vali\\ge \\thresholdi$, i.e., the bidder claims the item immediately if she finds the value of the item greater than or equal to~$\\thresholdi$. \n\n\\begin{claim}[Claim 2 of \\citealp{KWW16}]\n\\label{claim:strategy-equivalence}\nGiven a distribution $\\disti$ whose index is $\\thresholdi$, \n\\begin{enumerate}\n \\item If $\\dstrati$ claims above $\\thresholdi$, then $\\dstrati = \\lambda^{\\thresholdi}(\\mu^{(\\disti, \\thresholdi)}(\\dstrati))$. \n \\item If $\\fstrati$ is monotone, then $\\fstrati = \\mu^{(\\disti, \\thresholdi)}(\\lambda^{\\thresholdi}(\\fstrati))$. \n\\end{enumerate}\n\\end{claim}\n\n\\begin{thm}[Claim 3 of \\citealp{KWW16}]\n\\label{thm:DA_FPA_transform}\nSuppose $\\thresholds$ is the vector of indices of $(\\dists, \\costs)$ (Definition~\\ref{def:fpa-pandora-index}). 
\n\\begin{enumerate}\n \\item For any monotone mixed strategy profile $\\fstrats = (\\fstrati, \\fstratsmi)$ for $\\FPA(\\dists^{\\thresholds})$, for each bidder~$i$, \n\\[ \\utili^{\\FPA(\\dists^{\\thresholds})}(\\fstrats) = \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\thresholds}(\\fstrats)).\\]\n\\item For any mixed (not necessarily monotone) strategy profile $\\dstrats = (\\dstrati, \\dstratsmi)$ for $\\DA(\\dists, \\costs)$, for each bidder~$i$,\n\\[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\thresholds})}(\\mu^{(\\dists, \\thresholds)}(\\dstrats)),\\]\nwhere equality holds if $\\dstrati$ claims above $\\thresholdi$.\n\\end{enumerate}\n\\end{thm}\n\n\n\\begin{thm}[\\citealp{KWW16}]\\label{thm:fpa-pandora-NE-BNE}\nConsider $\\DA(\\dists, \\costs)$ and $\\FPA(\\dists^{\\thresholds})$, where $\\thresholds$ is the vector of indices of $(\\dists, \\costs)$.\nIf $\\fstrats$ is a BNE in $\\FPA(\\dists^{\\thresholds})$, then $\\lambda^{\\thresholds}(\\fstrats)$ is an NE in $\\DA(\\dists, \\costs)$. Conversely, if $\\dstrats$ is an NE in $\\DA(\\dists, \\costs)$, then $\\mu^{(\\dists, \\thresholds)}(\\dstrats)$ is an NE in $\\FPA(\\dists^{\\thresholds})$. \n\\end{thm}\n\n\n\nFinally, we review a welfare guarantee shown by \\citeauthor{KWW16}. \nCombining Theorem~\\ref{thm:fpa-pandora-NE-BNE} with a known bound on the Price of Anarchy for the first price auction \\citep{ST13}, \\citeauthor{KWW16} concluded that the welfare of an NE in a Dutch auction with search costs is at least a $(1-1\/e)$-fraction of the maximum expected welfare.\n\nFor our purposes, we generalize their conclusion to $\\epsilon$-NE. 
\nFormally, let ${\\mathbb{A}}_i(\\dstrats)$ be an indicator variable for whether bidder~$i$ receives the item, and let ${\\mathbb{I}}_i(\\dstrats)$ be an indicator variable for whether bidder~$i$ inspects her value.\nThe social welfare of a strategy profile $\\dstrats$ is \n\\begin{equation}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) = \\Ex{\\sum_{i=1}^n \\left({\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi \\right)},\n\\end{equation}\nwhere the randomness is over $\\vals\\sim\\dists$ and the randomness of mixed strategies.\nLet $\\OPT^{(\\dists, \\costs)}$ be the maximum expected welfare, obtained by the Pandora's Box algorithm (Theorem~\\ref{thm:optimal-pandora}) on distributions $\\disti[1], \\ldots, \\disti[n]$ and costs $\\costi[1], \\ldots, \\costi[n]$.\n\\begin{restatable}[Corollary 1 and Theorem 1 of \\citealp{KWW16}]{lemma}{optimalwelfare} \\label{lem:optimal-welfare}\nLet $\\thresholdi$ be the index of $(\\disti, \\costi)$ and $\\kappa_i=\\min\\{\\vali, \\thresholdi\\}$; then \n$\\OPT^{(\\dists, \\costs)} = \\Ex{\\max_{i\\in[n]}\\kappa_i}$. \n\\end{restatable}\n\\begin{restatable}[A slight generalization of \\citealp{KWW16}]{thm}{epsNEpoa}\n\\label{thm:epsNEpoa}\nSuppose $\\dstrats$ is an $\\epsilon$-NE in $\\DA(\\dists, \\costs)$; then $\\SW^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge (1-\\frac{1}{e})\\OPT^{(\\dists, \\costs)} - n\\epsilon$. \n\\end{restatable}\n\\noindent The proof of Theorem~\\ref{thm:epsNEpoa} follows the smoothness framework \\citep{ST13} and is given in Appendix~\\ref{sec:proof-thm:epsNEpoa}. 
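As a concrete illustration of how an index (Definition~\\ref{def:fpa-pandora-index}) and the capped values $\\kappa_i$ of Lemma~\\ref{lem:optimal-welfare} can be computed, the following sketch (ours, not part of \\citealp{KWW16}; all function names are illustrative) finds $\\thresholdi$ by bisection, using that $\\threshold \\mapsto \\Ex[\\vali\\sim\\disti]{\\max\\{\\vali - \\threshold, 0\\}}$ is continuous and non-increasing, is at least $\\costi$ at $\\threshold = 0$ (by the standing assumption $\\Ex[\\vali\\sim\\disti]{\\vali} \\ge \\costi$), and vanishes at $\\threshold = H$.

```python
# Sketch only: computing the index of Definition "fpa-pandora-index"
# for a finite-support distribution.  All names are ours, not from KWW16.

def excess(values, weights, sigma):
    """E_{v~F}[max(v - sigma, 0)] for a distribution with finite support."""
    return sum(w * max(v - sigma, 0.0) for v, w in zip(values, weights))

def index(values, weights, cost, H, tol=1e-9):
    """Bisection for the index: the unique sigma in [0, H] with
    excess(sigma) = cost (assumes 0 < cost <= E[v], so a root exists)."""
    lo, hi = 0.0, H
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if excess(values, weights, mid) > cost:
            lo = mid   # excess still too large: the index lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

def kappa(v, sigma):
    """Capped value min(v, sigma), as in Lemma "optimal-welfare"."""
    return min(v, sigma)

# Uniform-on-[0,1] check, discretized at bin midpoints: for the continuous
# uniform, excess(s) = (1 - s)^2 / 2, so cost 0.02 gives index 0.8.
grid = [(k + 0.5) / 1000 for k in range(1000)]
weights = [1.0 / 1000] * 1000
sigma = index(grid, weights, cost=0.02, H=1.0)   # close to 0.8
```

For the uniform distribution on $[0, H]$ the excess has the closed form $(H-\\threshold)^2\/(2H)$, so the index is $H - \\sqrt{2\\costi H}$; the midpoint discretization above recovers this value.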
\n\n\\subsubsection{Transformation with Samples}\n\\label{sec:sample-KWW16}\n\nWe are now ready to present our learning results on auctions with search costs.\nIn \\citet{KWW16}, the utility- and equilibrium-preserving mappings $\\lambda^{\\thresholds}$ and $\\mu^{(\\dists, \\thresholds)}$ depend on the value distributions.\nWe examine the number of samples needed to compute approximations of these mappings, when the value distributions are unknown.\nWe find that, given the search costs, $\\tilde O(1 \/ \\epsilon^2)$ value samples suffice to construct mappings between strategies that approximately preserve utility; with $\\tilde O(n \/ \\epsilon^2)$ samples, any equilibrium of the first price auction without search costs on a transformed empirical distribution can be mapped to an approximate equilibrium of the descending auction on the true distribution.\nBy Theorem~\\ref{thm:epsNEpoa}, such an approximate equilibrium in the descending auction must obtain a $(1 - 1\/e)$-approximation to the optimal welfare. \nTo make use of this result, a market designer could collect $\\tilde O(n \/ \\epsilon^2)$ value samples to compute an approximate Nash equilibrium in this FPA, which then maps to an approximate Nash equilibrium in the Dutch auction. This approximate equilibrium can serve as bidding guidance for the participants, and guarantees approximate efficiency of the market.\n\n\n\nWhen the value distributions $\\disti$ are unknown (but the costs~$\\costi$ are known), the mapping~$\\lambda^{\\thresholds}$ \nis also unknown, \nbecause each index~$\\thresholdi$ is determined by the distribution~$\\disti$. \nInstead, we estimate each index~$\\hat{\\threshold}_i$ from samples and use the corresponding mapping~$\\lambda^{\\hat{\\thresholds}}$. \n\n\\begin{defn}\\label{defn:fpa-pandora-empirical}\nPartition the samples $\\samples$ into two sets, $\\samples^A$ and $\\samples^B$, each of size $m\/2$. 
\nDenote the empirical product distributions on $\\samples^A$ and $\\samples^B$ as $\\empDists^A$ and $\\empDists$, respectively. \nThe \\emph{empirical indices} are the indices $\\hat{\\thresholds}$ for $(\\empDists^A, \\costs)$; namely, $\\hat{\\threshold}_i$ is the unique solution to\n$\\Ex[\\vali\\sim\\empDisti^A]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}} = \\costi$.\nThe \\emph{empirical counterpart} of $\\DA(\\dists, \\costs)$ is $\\FPA(\\empDists^{\\hat{\\thresholds}})$. \nThe \\emph{empirical mappings} are $\\lambda^{\\hat{\\thresholds}}$ and $\\mu^{(\\dists, \\hat{\\thresholds})}$, computed as in Definition~\\ref{def:strategy-mappings}.\n\\end{defn}\n\nNote that $\\mu^{(\\dists, \\hat{\\thresholds})}$ depends on distributions while $\\lambda^{\\hat{\\thresholds}}$ does not. The following theorem, analogous to Theorem~\\ref{thm:DA_FPA_transform}, shows that the empirical mappings $\\lambda^{\\hat{\\thresholds}}$ and $\\mu^{(\\dists, \\hat{\\thresholds})}$ approximately preserve the utilities with high probability. 
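To make the computation of the empirical indices concrete, here is a small sketch (ours; the function names are illustrative, and it assumes $\\costi > 0$, since Definition~\\ref{def:fpa-pandora-index} sets the index to $H$ when $\\costi = 0$). The empirical excess $\\threshold \\mapsto \\frac{2}{m}\\sum_{\\vali \\in \\samples^A_i} \\max\\{\\vali - \\threshold, 0\\}$ is piecewise linear and non-increasing, so $\\hat{\\threshold}_i$ can be found by bisection; capping the $\\samples^B$ values at $\\hat{\\threshold}_i$ then yields draws from $\\empDisti^{\\hat{\\threshold}_i}$.

```python
# Sketch only (our own illustration of Definition "fpa-pandora-empirical"):
# compute empirical indices from the first half of the samples and cap the
# second half.  Assumes cost > 0 and empirical mean of the A-half >= cost.

def empirical_index(sample_A, cost, H, tol=1e-9):
    """Bisection for hat-sigma solving (1/|A|) * sum_v max(v - sigma, 0) = cost."""
    def excess(sigma):
        return sum(max(v - sigma, 0.0) for v in sample_A) / len(sample_A)
    lo, hi = 0.0, H
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def empirical_transform(samples, costs, H):
    """samples[i] = bidder i's sample values; returns, per bidder, the
    empirical index and the second-half samples capped at that index."""
    out = []
    for vals, cost in zip(samples, costs):
        half = len(vals) // 2
        A, B = vals[:half], vals[half:]
        sigma_hat = empirical_index(A, cost, H)
        out.append((sigma_hat, [min(v, sigma_hat) for v in B]))
    return out

# One bidder whose A-half is {0, 1} with equal mass and cost 0.1: the
# empirical excess at sigma in [0, 1] is (1 - sigma) / 2, so the
# empirical index is 1 - 2 * 0.1 = 0.8.
result = empirical_transform([[0.0, 1.0, 0.0, 1.0]], [0.1], H=1.0)
sigma_hat, capped_B = result[0]
```

Note that, as in the definition, the index is estimated only from the A-half while the B-half is capped, keeping the two halves independent.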
\n\\begin{thm}\\label{thm:fpa-pandora-utility-intermediate}\nFor any $\\epsilon, \\delta > 0$, there is \n$M = O \\left(\\frac{H^2}{\\epsilon^2} \\left[\\log\\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right)$, \nsuch that for all $m > M$, with probability at least $1-\\delta$ over the random draw of $\\samples^A$, \n\\begin{enumerate}\n\\item For any monotone mixed strategy profile $\\fstrats$ in $\\FPA(\\dists^{\\hat{\\thresholds}})$, \nfor each bidder~$i$, \n\\[ \\left| \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) - \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) \\right| \\le \\epsilon.\\]\n\\item For any mixed strategy profile $\\dstrats$ in $\\DA(\\dists, \\costs)$, for each bidder~$i$,\n\\[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) + \\epsilon.\\]\nIf $\\dstrati$ claims above $\\hat{\\threshold}_i$, then we also have $\\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) - \\epsilon$. \n\\end{enumerate}\n\n\\end{thm}\n\\noindent \nBefore proving Theorem~\\ref{thm:fpa-pandora-utility-intermediate}, \nwe first derive a few important consequences.\n\n\\begin{corollary}\\label{lem:fpa-pandora-NE-intermediate}\nFor any $\\epsilon, \\delta > 0$, and $m > M$ as in the condition of\nTheorem~\\ref{thm:fpa-pandora-utility-intermediate}, with probability at least $1-\\delta$, \n\\begin{enumerate}\n\\item For any monotone strategy profile $\\fstrats$,\nif $\\fstrats$ is an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$, then $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ is an $(\\epsilon'+2\\epsilon)$-NE in $\\DA(\\dists, \\costs)$. 
\n\\item Conversely, for any strategy profile $\\dstrats$ that claims above $\\hat{\\thresholds}$, if $\\dstrats$ is an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$, then $\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)$ is an $(\\epsilon'+2\\epsilon)$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$. \n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nWe prove the two items respectively, \n\\begin{enumerate}\n\\item Let $\\fstrats=(\\fstrati, \\fstratsmi)$ be an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$ satisfying the condition in the statement. For any strategy $\\dstrati$, by Theorem~\\ref{thm:fpa-pandora-utility-intermediate} item 2, \n\\begin{align*}\n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\lambda^{\\hat{\\thresholds}}(\\fstratsmi))\n \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrati), \\mu^{(\\dists, \\hat{\\thresholds})}(\\lambda^{\\hat{\\thresholds}}(\\fstratsmi))) + \\epsilon. & \n\\end{align*}\nSince $\\fstratsmi$ is monotone, by Claim~\\ref{claim:strategy-equivalence} item 2, we have $\\mu^{(\\dists, \\hat{\\thresholds})}(\\lambda^{\\hat{\\thresholds}}(\\fstratsmi)) = \\fstratsmi$. Thus, \n\\begin{align*}\n \n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\lambda^{\\hat{\\thresholds}}(\\fstratsmi))\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrati), \\fstratsmi) + \\epsilon &\\\\\n \\shortintertext{\\hfill $\\fstrats$ is an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$}\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) + \\epsilon' + \\epsilon & \\\\\n \\shortintertext{\\hfill Theorem \\ref{thm:fpa-pandora-utility-intermediate} item 1}\n & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) + \\epsilon' + 2\\epsilon. 
& \n\\end{align*}\n\n\\item For any strategy $\\fstrati$, by Proposition~\\ref{prop:monotone}, there exists some monotone strategy $\\fstrati'$, such that \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati, \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati', \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)). \n\\end{align*}\nThen by Theorem~\\ref{thm:fpa-pandora-utility-intermediate} item 1, \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati', \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrati'), \\lambda^{\\hat{\\thresholds}}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi))) + \\epsilon.\n\\end{align*}\nSince $\\dstratsmi$ claims above $\\hat{\\thresholds}_{-i}$, by Claim~\\ref{claim:strategy-equivalence} item 1, we have $\\lambda^{\\hat{\\thresholds}}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) = \\dstratsmi$. Thus \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati, \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrati'), \\dstratsmi) + \\epsilon & \\\\\n \\shortintertext{\\hfill $\\dstrats$ is an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$}\n & \\le \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) + \\epsilon' + \\epsilon & \\\\\n \\shortintertext{\\hfill Theorem \\ref{thm:fpa-pandora-utility-intermediate} item 2}\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) + \\epsilon' + 2\\epsilon. 
& \n\\end{align*}\n\\end{enumerate}\n\\end{proof}\n\n\nAs a consequence of Corollary~\\ref{lem:fpa-pandora-NE-intermediate}, Corollary~\\ref{cor:find-BNE} and Theorem~\\ref{thm:epsNEpoa}, any approximate BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ is transformed by $\\lambda^{\\hat{\\thresholds}}$ to a nearly efficient approximate NE in $\\DA(\\dists, \\costs)$, as formalized by the following theorem.\n\\begin{restatable}{thm}{fpapandorane}\n\\label{thm:fpa-pandora-NE-v2}\nFor any $\\epsilon, \\epsilon', \\delta > 0$, there is $M = O \\left(\\frac{H^2}{\\epsilon^2} \\left[n\\log n\\log \\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right)$, such that for all $m > M$, with probability at least $1-\\delta$ over random draws of samples~$\\samples$, we have: for\nany monotone strategy profile $\\fstrats$ that is an $\\epsilon'$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$, $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$\nis an $(\\epsilon'+4\\epsilon)$-NE in $\\DA(\\dists, \\costs)$; \nmoreover, $\\SW^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) \\ge (1-\\frac{1}{e})\\OPT^{(\\dists, \\costs)} - n(\\epsilon'+4\\epsilon)$.\n\\end{restatable}\n\\begin{proof}\nFirst use Corollary~\\ref{cor:find-BNE} on distributions $\\dists^{\\hat{\\thresholds}}$. \nNote that $\\empDists^{\\hat{\\thresholds}}$ is an empirical product distribution for $\\dists^{\\hat{\\thresholds}}$; this is because $\\empDists$ consists of samples $\\samples^B$, whereas $\\hat{\\thresholds}$ is determined by samples in $\\samples^A$, and $\\samples^A$ and~$\\samples^B$ are disjoint. Thus, with probability at least $1-\\delta\/2$ over the random draw of $\\samples^B$, any monotone strategy profile $\\fstrats$ that is an $\\epsilon'$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ is an $(\\epsilon'+2\\epsilon)$-BNE in $\\FPA(\\dists^{\\hat{\\thresholds}})$. 
An $(\\epsilon'+2\\epsilon)$-BNE must be an $(\\epsilon'+2\\epsilon)$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$, so by Corollary~\\ref{lem:fpa-pandora-NE-intermediate}, with probability at least $1-\\delta\/2$ over the random draw of $\\samples^A$, $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ is an $(\\epsilon'+4\\epsilon)$-NE in $\\DA(\\dists, \\costs)$.\nThe welfare guarantee follows from Theorem~\\ref{thm:epsNEpoa}. \n\\end{proof}\nTheorem \\ref{thm:fpa-pandora-NE-v2} does not include the reverse direction, i.e., from an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$ to an $(\\epsilon'+4\\epsilon)$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ \n(cf.\\@ Theorem~\\ref{thm:fpa-pandora-NE-BNE}). \nThis is for two reasons:\n(1) Such a transformation results in an $(\\epsilon'+4\\epsilon)$-NE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$, but an $(\\epsilon'+4\\epsilon)$-NE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ is not necessarily an $(\\epsilon'+4\\epsilon)$-BNE.\n(2) Unlike interim utility, ex ante utility cannot be learned from samples directly; in other words, $\\utili^{\\FPA(\\empDists^{\\hat{\\thresholds}})}(\\fstrats)$ does not necessarily approximate $\\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats)$ even if $\\fstrats$ is monotone. This is because the computation of ex ante utility takes an expectation over bidder~$i$'s own value, whereas the computation of interim utility does not. \n\n\\paragraph{Proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}.}\nThe main idea is as follows: For item 1, we need to show that the utility of a strategy profile $\\fstrats$ in $\\FPA(\\dists^{\\hat{\\thresholds}})$ approximates the utility of its image $\\dstrats=\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ in $\\DA(\\dists, \\costs)$. We wish to use Theorem~\\ref{thm:DA_FPA_transform} to do so, but it cannot be used directly because $\\hat{\\thresholds}$ is not the indices of $(\\dists, \\costs)$. 
Instead, we construct a set of ``empirical costs''~$\\hat{\\costs}$ such that $\\hat{\\thresholds}$ becomes the indices of $(\\dists, \\hat{\\costs})$. Then Theorem~\\ref{thm:DA_FPA_transform} can be used to show that $\\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats)=\\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats)$. With an additional lemma (Lemma~\\ref{lem:costs_close}) which shows that $\\hat{\\costs}$ approximates $\\costs$ up to an $\\epsilon$-error, we are able to establish the following chain of approximate equalities:\n\\[\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) = \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats) \\stackrel{\\epsilon}{\\approx} \\utili^{\\DA(\\dists, \\costs)}(\\dstrats).\n\\]\nThe proof for item 2 is similar. \n\nFormally, define $\\hat{\\costs}=(\\hat{\\cost}_i)_{i\\in[n]}$, where\n\\begin{equation}\n\\hat{\\cost}_i \\coloneqq \\Ex[\\vali\\sim\\disti]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}}.\n\\end{equation}\nNote that $\\hat{\\cost}_i$ is determined by samples~$\\samples^A$ since the empirical index $\\hat{\\threshold}_i$ is computed from $\\samples^A$. 
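The construction above can be made concrete with a short Python sketch. This is our own illustration: the helper names (`empirical_index`, `empirical_cost`) and the bisection solver are assumptions, not part of the paper. The empirical index $\\hat{\\threshold}_i$ is the threshold whose empirical cost on the sample set $\\samples^A$ matches the true cost $\\costi$ (see the proof of Lemma~\\ref{lem:costs_close} below); since the map $\\threshold\\mapsto\\Ex{\\max\\{\\vali-\\threshold,0\\}}$ is continuous and non-increasing, bisection on $[-H,H]$ applies. The empirical cost $\\hat{\\cost}_i$ is then the expectation of $\\max\\{\\vali-\\hat{\\threshold}_i,0\\}$ under $\\disti$, approximated here by an independent set of samples standing in for the true distribution.

```python
def empirical_index(samples_A, cost, H):
    # The map theta -> mean(max(v - theta, 0)) over samples_A is continuous
    # and non-increasing, so the empirical index theta_hat solving
    # emp_cost(theta_hat) = cost can be found by bisection on [-H, H].
    emp_cost = lambda t: sum(max(v - t, 0.0) for v in samples_A) / len(samples_A)
    lo, hi = -H, H
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if emp_cost(mid) > cost:
            lo = mid  # empirical cost still too large: raise the index
        else:
            hi = mid
    return 0.5 * (lo + hi)

def empirical_cost(samples, theta_hat):
    # hat{c}_i = E_{v ~ F_i}[max(v - theta_hat, 0)]; the true distribution
    # F_i is stood in for by an independent set of samples.
    return sum(max(v - theta_hat, 0.0) for v in samples) / len(samples)
```

For instance, with samples $\\{0, 0.5, 1\\}$, cost $0.25$ and $H=1$, the solver returns $\\hat{\\threshold}\\approx 0.375$, and the empirical cost at that index is again $0.25$.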
\n\\begin{restatable}{lemma}{costsclose}\n\\label{lem:costs_close}\nThere is $M = O\\left(\\frac{H^2}{\\epsilon^2}\\left[\\log\\frac{H}{\\epsilon} + \\log\\frac{n}{\\delta}\\right]\\right)$, such that if $m\/2 > M$, then with probability at least $1-\\delta$ over the random draw of $\\samples^A$, for each $i\\in[n]$, $|\\costi - \\hat{\\cost}_i|\\le\\epsilon$.\n\\end{restatable}\n\\begin{proof}\nThe main idea of the proof is to show that the class $\\mathcal{H}_i=\\{h^{\\threshold}\\;\\mid\\;\\threshold\\in[-H, H]\\}$ where $h^{\\threshold}(x)=\\max\\{x-\\threshold, 0\\}$ has pseudo-dimension $\\Pdim(\\mathcal{H}_i) = O(1)$ and thus uniformly converges with $O\\left(\\frac{H^2}{\\epsilon^2}\\left[\\log\\frac{H}{\\epsilon} + \\log\\frac{1}{\\delta}\\right]\\right)$ samples.\n\nFormally, consider the pseudo-dimension $d$ of the class $\\mathcal{H}_i=\\{h^{\\threshold}\\;\\mid\\;\\threshold\\in[-H, H]\\}$ where $h^\\threshold(x)\\coloneqq\\max\\{x-\\threshold, 0\\}$ for $x\\in[0, H]$ (thus $h^\\threshold(x)\\in[0, 2H]$). We claim that $d=O(1)$. To see this, fix any $d$ samples $(x_1, x_2, \\ldots, x_d)$ and any witnesses $(\\Pwitnessi[1], \\Pwitnessi[2], \\ldots, \\Pwitnessi[d])$; we bound the number of distinct labelings that can be given by $\\mathcal{H}_i$ to these samples. Each sample $x_j$ induces a partition of the parameter space (the space of $\\threshold$) $[-H, H]$ into two intervals $[-H, x_j]$ and $(x_j, H]$, such that for any $\\threshold\\le x_j$, $h^{\\threshold}(x_j) = x_j-\\threshold$, and for $\\threshold > x_j$, $h^{\\threshold}(x_j)=0$. All $d$ samples partition $[-H, H]$ into (at most) $d+1$ consecutive intervals, $I_1, \\ldots, I_{d+1}$, such that within each interval $I_k$, $h^{\\threshold}(x_j)$ is either $x_j-\\threshold$ for all $\\threshold\\in I_k$ or $0$ for all $\\threshold\\in I_k$, for each $j\\in[d]$. 
We further divide each $I_k$ using the witnesses $\\Pwitnessi[j]$'s: for each $j\\in[d]$, if $h^{\\threshold}(x_j) = x_j-\\threshold$ for $\\threshold\\in I_k$, then we cut $I_k$ at the point $\\threshold = x_j - \\Pwitnessi[j]$; in this way we cut each $I_k$ into at most $d+1$ sub-intervals. Within each sub-interval $I'\\subseteq I_k$, the labeling of the $d$ samples given by all $h^{\\threshold}$ ($\\threshold\\in I'$) is the same. Since there are at most $(d+1)^2$ sub-intervals in total, there are at most $(d+1)^2$ distinct labelings. \nTo pseudo-shatter $d$ samples, we must have $2^d \\leq (d+1)^2$, which gives $d=O(1)$. \n\nBy the definition of $\\hat{\\threshold}_i$, we have \n\\[\\costi=\\Ex[\\vali\\sim \\empDisti^A]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}} = \\Ex[\\vali\\sim \\empDisti^A]{h^{\\hat{\\threshold}_i}(\\vali)}, \\]\nand $\\hat{\\threshold}_i\\in [-H, H]$. Also note that $\\hat{\\cost}_i = \\Ex[\\vali\\sim \\disti]{h^{\\hat{\\threshold}_i}(\\vali)}$. \nThus the conclusion $|\\costi-\\hat{\\cost}_i| \\le \\epsilon$ follows from Theorem~\\ref{thm:pseudo-dimension} and a union bound over $i\\in[n]$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:DA_utility_close}\nSuppose $|\\costi-\\hat{\\cost}_i|\\le\\epsilon$; then for any strategies~$\\dstrats$, \n\\begin{equation*}\n\t\\left| \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) - \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats)\\right|\\le\\epsilon.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nCouple the realizations of values (and threshold prices and bids if the strategies are randomized) in $\\DA(\\dists, \\costs)$ and $\\DA(\\dists, \\hat{\\costs})$. When bidders use the same strategies $\\dstrats$ in the two auctions $\\DA(\\dists, \\costs)$ and $\\DA(\\dists, \\hat{\\costs})$, bidder~$i$ receives the same allocation and pays the same price. 
\nThe only difference between bidder~$i$'s utilities in these two auctions is the difference between the search costs she pays, which is upper-bounded by $|\\costi-\\hat{\\cost}_i|\\le \\epsilon$.\n\\end{proof}\n\nNow we finish the proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}. \n\\begin{proof}[Proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}]\nFirst consider item 1. We use $a\\stackrel{\\epsilon}{\\approx}b$ to denote $|a-b|\\le\\epsilon$. Given any monotone strategies $\\fstrats$ for $\\FPA(\\empDists^{\\hat{\\thresholds}})$, \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) \n ={}& \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) && \\text{Theorem \\ref{thm:DA_FPA_transform} item 1 } \\\\\n \\stackrel{\\epsilon}{\\approx}{}& \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) &&\\text{Lemma \\ref{lem:DA_utility_close}}. \n\\end{align*}\n\nThen for item 2, given any strategies $\\dstrats$ for $\\DA(\\dists, \\costs)$, by Lemma \\ref{lem:DA_utility_close}, \n\\begin{align*}\n \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \n \\stackrel{\\epsilon}{\\approx} \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats).\n\\end{align*}\nBy Theorem \\ref{thm:DA_FPA_transform} item 2, we have $\\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats))$, where ``$=$'' holds if $\\dstrati$ claims above $\\hat{\\threshold}_i$, which concludes the proof. 
\n\\end{proof}\n\n\n\n\n\n\\section{Proof of Theorem~\\ref{thm:util-learn-upper-bound}}\n\n\\subsubsection{Pseudo-dimension and the Proof of Theorem \\ref{thm:util-learn-upper-bound}}\n\\label{sec:pseudodim}\n\nPseudo-dimension is a well-known tool for upper bounding sample complexity \\citep[see, e.g.][]{anthony2009neural}, and has been applied to learning in mechanism design \\citep{MR15, MR16, BSV18, BSV19}.\n\n\\begin{defn}\n\t\\label{def:pseudo-dimension}\n\tGiven a class $\\mathcal{H}$ of real-valued functions on input space $\\mathcal{X}$, a set of inputs $\\Pinputi[1], \\ldots, \\Pinputi[m]$ is said to be \\emph{pseudo-shattered} if there exist \\emph{witnesses} $\\Pwitnessi[1], \\ldots, \\Pwitnessi[m] \\in \\mathbb R$ such that for any label vector $\\Plabels\\in\\{1, -1\\}^m$, there exists $h_{\\Plabels}\\in \\mathcal{H}$ such that $\\sgn(h_{\\Plabels}(\\Pinputi) - \\Pwitnessi) = \\Plabeli$ for each $i=1, \\ldots, m$, where $\\sgn(y)=1$ if $y>0$ and $-1$ if $y<0$. The \\emph{pseudo-dimension} of $\\mathcal{H}$, $\\Pdim(\\mathcal{H})$, is the size of the largest set of inputs that can be pseudo-shattered by $\\mathcal{H}$. 
\n\\end{defn}\n\n\\begin{defn}\n\t\\label{def:uniform-convergence}\n\tFor $\\epsilon>0, \\delta \\in (0, 1)$, a class of functions $\\mathcal{H}: \\mathcal{X} \\to \\mathbb R$ is \\emph{$(\\epsilon, \\delta)$-uniformly convergent with sample complexity $M$} if\n\t\tfor any $m \\geq M$, \n\tfor any distribution $\\dist$ on~$\\mathcal{X}$, \n\tif $\\sample^1, \\ldots, \\sample^{m}$ are i.i.d.\\@ samples from~$\\dist$, \n\twith probability at least $1 - \\delta$, for every $h \\in \\mathcal{H}$,\n\t$\n\t\t\\left| \\Ex[\\Pinput \\sim \\dist]{h(\\Pinput)} - \\frac 1 {m} \\sum_{j = 1}^{m} h(\\sample^j) \\right| < \\epsilon.\n\t$\n\\end{defn}\n\n\n\\begin{thm}[See \\citealp{anthony2009neural}]\n\t\\label{thm:pseudo-dimension}\n\tLet $\\mathcal{H}$ be a class of functions with range $[0, H]$ and pseudo-dimension $d=\\Pdim(\\mathcal{H})$. Then, \n\tfor any $\\epsilon>0$, $\\delta\\in(0, 1)$, \n\t$\\mathcal{H}$ is $(\\epsilon, \\delta)$-uniformly convergent with sample complexity $O\\left( (\\frac{H}{\\epsilon})^2 [d\\log(\\frac{H}{\\epsilon}) + \\log(\\frac{1}{\\delta})] \\right)$.\n\\end{thm}\n\nWe show Theorem~\\ref{thm:util-learn-upper-bound} by treating the utilities on monotone bidding strategies as a class of functions, whose uniform convergence implies that $\\emp$ learns the interim utilities.\n\n\nFor each bidder $i$, let $h^{\\vali, \\bids(\\cdot)}$ be the function that maps the opponents' values to bidder~$i$'s ex post utility, that is, \n\\[h^{\\vali, \\bids(\\cdot)}(\\valsmi) = \\expostUi(\\vali, \\bidi(\\vali), \\bidsmi(\\valsmi)).\\]\nLet $\\mathcal{H}_i$ be the set of all such functions corresponding to the set of monotone strategies, \n\\[\\mathcal{H}_i = \\left\\{h^{\\vali, \\bids(\\cdot) }(\\cdot) \\;\\mid\\; \\vali \\in \\typespacei,~~ \\bids(\\cdot) \\text{ is monotone} \\right\\}. 
\\]\n\nBy \\eqref{eq:interim-util}, the expectation of $h^{\\vali, \\bids(\\cdot)}(\\cdot)$ over~$\\distsmi$ is the interim utility of bidder $i$: \n\\[\\Ex[\\valsmi\\sim \\distsmi]{h^{\\vali, \\bids(\\cdot)}(\\valsmi)} = \\utili(\\vali, \\bidi(\\vali), \\bidsmi(\\cdot)). \\]\nBy Definition~\\ref{def:emp}, on samples $\\samples = (\\samples^1, \\ldots, \\samples^{m})$, \n\\[\\emp(\\samples, i, \\vali, \\bids(\\cdot)) = \\frac 1 {m} \\sum_{j = 1}^{m} h^{\\vali, \\bids(\\cdot)}(\\samples^j_{-i}).\n\\]\n\nThus, \n\\begin{align}\n \\left| \\emp(\\samples, i, \\vali, \\bids(\\cdot)) - \\utili(\\vali, \\bidi(\\vali), \\bidsmi(\\cdot)) \\right|\n =\n \\left| \\Ex[\\valsmi]{h^{\\vali, \\bids(\\cdot)}(\\valsmi)} - \\frac 1 {m} \\sum_{j = 1}^{m} h^{\\vali, \\bids(\\cdot)}(\\samples^j_{-i})\\right|.\n\t\\label{eq:emp-func}\n\\end{align}\n\nThe right-hand side of~\\eqref{eq:emp-func} is the difference between the expectation of $h^{\\vali, \\bids(\\cdot)}$ on the distribution $\\distsmi$ and that on the empirical distribution with samples drawn from $\\distsmi$.\nNow by Theorem~\\ref{thm:pseudo-dimension},\nto bound the number of samples needed by $\\emp$ to $(\\epsilon, \\delta)$-learn the utilities over monotone strategies, \nit suffices to bound the pseudo-dimension of~$\\mathcal{H}_i$.\nWith the following key lemma, the proof is completed by observing that the range of each $h^{\\vali, \\bids(\\cdot)}$ is within $[-H, H]$ and by taking a union bound over $i \\in [n]$.\n\n\n\n\\begin{restatable}{lemma}{pdimutil}\n\\label{lem:pseudo-dimension-utility-class}\nIf tie breaking is random-allocation or no-allocation, then $\\Pdim(\\mathcal{H}_i) = O(n \\log n)$.\n\\end{restatable}\n\nThe proof of Lemma~\\ref{lem:pseudo-dimension-utility-class} follows a powerful framework introduced by \\citet{MR16} and \\citet{BSV18} for bounding the pseudo-dimension of a class $\\mathcal{H}$ of functions: given samples that are to be pseudo-shattered and for any (fixed) witnesses, one 
classifies the functions in~$\\mathcal{H}$ into categories, so that functions in the same category must output the same label on all the samples; by counting and bounding the number of such categories, one can bound the number of shattered samples.\nOur proof follows this strategy. To bound the number of categories, we make use of monotonicity of bidding functions, which is specific to our problem.\n\nWe give a proof below for the simplest case with two bidders and no-allocation tie-breaking rule, and relegate the full proof to Appendix~\\ref{sec:proof-lem:pseudo-dimension-utility-class}. \n\n\\begin{proof}[Proof of Lemma~\\ref{lem:pseudo-dimension-utility-class} for a special case.] \nConsider $n=2$ and no-allocation tie-breaking rule. Fix an arbitrary set of $m$ samples $\\samplesmi^1, \\ldots, \\samplesmi^{m}$. \nConsider any set of potential witnesses $(\\Pwitnessi[1], \\Pwitnessi[2], \\ldots, \\Pwitnessi[m])$. \nEach hypothesis in $\\mathcal{H}_i$ then gives every sample~$\\samplesmi^j$ a label according to the witness~$\\Pwitnessi[j]$, giving rise to a label vector in $\\{-1, +1\\}^m$.\nWe show that $\\mathcal{H}_i$ can be divided into $m+1$ sub-classes $\\mathcal{H}_i^0, \\ldots, \\mathcal{H}_i^{m}$, such that each sub-class $\\mathcal{H}_i^k$ generates at most $m+1$ different label vectors. \nThus $\\mathcal{H}_i$ generates at most $(m+1)^2$ label vectors in total. \nTo pseudo-shatter $\\samplesmi^1, \\ldots, \\samplesmi^m$, we need $2^m$ different label vectors; therefore $(m+1)^2\\ge 2^m$, which implies $m = O(1)$. \n\nWe now show how $\\mathcal{H}_i$ is thus divided.\nNote that, for $n = 2$, each $\\samplesmi^k$ is just a real number and we can sort them; for ease of notation let \n$\\Pinput^k$ denote $\\samplei[-i]^{k}$ for $k=1, \\ldots, m$ \nand suppose $\\Pinput^1\\le \\Pinput^2\\le \\cdots \\le \\Pinput^m$. 
\nWe put hypothesis $h^{\\vali, \\bids(\\cdot)}$ into the $k$-th sub-class, $\\mathcal{H}_i^k$, if\n\\[ \\bidi[-i](\\Pinput^k) < \\bidi(\\vali) \\ \\text{ and }\\ \\bidi[-i](\\Pinput^{k+1}) \\ge \\bidi(\\vali),\\]\nwith the conventions $\\bidi[-i](\\Pinput^{0}) \\coloneqq -\\infty$ and $\\bidi[-i](\\Pinput^{m+1}) \\coloneqq +\\infty$ covering the cases $k=0$ and $k=m$.\nThis is well defined because, by assumption, $\\bidi[-i](\\Pinput)$ is monotone non-decreasing in $\\Pinput$.\n\nWe now show that each sub-class $\\mathcal{H}_i^k$ gives rise to at most $m+1$ label vectors.\nFor any $h^{\\vali, \\bids(\\cdot)}\\in \\mathcal{H}_i^k$, we have $h^{\\vali, \\bids(\\cdot)}(\\Pinput^{j}) = \\vali - \\bidi(\\vali)$ for any $j \\le k$ (because bidder~$i$'s bid $\\bidi(\\vali)$ is higher than the opponent's),\nand $h^{\\vali, \\bids(\\cdot)}(\\Pinput^{j}) = 0$ for any $j > k$.\nOn the first $k$ samples $\\Pinput^{1}, \\ldots, \\Pinput^{k}$, \nany fixed hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot) \\in \n\\mathcal{H}_i^k$ \noutputs a constant $\\vali - \\bidi(\\vali)$;\nas one varies this constant and compares it with the $k$ witnesses $\\Pwitnessi[1], \\ldots, \\Pwitnessi[k]$, there are only $k+1$ possible results from the comparisons.\nOn the remaining $m - k$ samples, only one pattern is possible, since all hypotheses in $\\mathcal{H}_i^k$ output $0$ on these samples.\nTherefore, at most $k+1 \\le m+1$ label vectors can be generated by $\\mathcal{H}_i^k$. \n\\end{proof}\n\n\n\n\n\n\n\\subsubsection{Learning on Empirical Product Distributions and Equilibrium Preservation}\n\\label{sec:empp}\n\nThe empirical distribution estimator approximates interim utilities with high probability, but this does not immediately imply that one may take the first price auction on the empirical distribution as a close approximation to the auction on the original distribution. \nThis is because the empirical distribution over samples is \\emph{correlated} --- the values $\\samplei[1]^j, \\ldots, \\samplei[n]^j$ are drawn as a vector, instead of independently. 
\nStandard notions, such as Bayes Nash equilibria, defined on product distributions become intricate on correlated distributions, and there is no reason to expect the latter to correspond to the equilibria in the original auction.\nTherefore, it is desirable that utilities can also be learned on a \\emph{product} distribution arising from the samples, where each bidder's value is independently drawn, uniformly from the $m$ samples of her value. \nWe show that this can indeed be done, without substantial increase in the number of samples.\nThe key technical step, Lemma~\\ref{lem:relation-uniform-convergence},\nis a reduction from learning on empirical distribution to learning on empirical product distribution. We believe this lemma is of independent interest.\nIn fact, in Section~\\ref{sec:search} we invoke Lemma~\\ref{lem:relation-uniform-convergence} in a different context, that of learning in Pandora's Box problem; the reduction is crucial there for obtaining a polynomial-time learning algorithm.\n\n\\begin{defn}\n\\label{def:empp}\nGiven samples $\\samples = (\\samples^1, \\ldots, \\samples^{m})$, \n$\\empDisti$ is defined to be the uniform distribution over $\\{\\samplei^1, \\ldots, \\samplei^m\\}$. 
The \\emph{empirical product distribution} $\\empDists$ is the product distribution\n $\\empDists\\coloneqq\\prod_{i=1}^n \\empDisti$.\n\\end{defn}\n\n\t\\begin{defn}\n\t\t\\label{def:uniform-convergence-product}\n\t\tFor $\\epsilon>0, \\delta \\in (0, 1)$, a class of functions $\\mathcal{H}: \\prod_{i=1}^n \\typespacei \\to \\mathbb R$ is \\emph{$(\\epsilon, \\delta)$-uniformly convergent on product distribution with sample complexity $M$} if\n\t\tfor any $m \\geq M$, \n\tfor any product distribution $\\dists$ on~$\\prod_{i=1}^n \\typespacei$, \n\tif $\\samples^1, \\ldots, \\samples^{m}$ are i.i.d.\\@ samples from~$\\dists$, \n\twith probability at least $1 - \\delta$, for every $h \\in \\mathcal{H}$,\n\t\\[\n\n\t\t\\left| \\Ex[\\types \\sim \\dists]{h(\\types)} - \\Ex[\\types \\sim \\empDists]{h(\\types)} \\right| < \\epsilon,\n\t\\]\n\n\twhere $\\empDists$ is the empirical product distribution.\n\t\\end{defn}\n\n\\begin{restatable}{lemma}{uniformprod}\n\t\\label{lem:relation-uniform-convergence}\nLet $\\mathcal{H}$ be a class of functions from a product space $\\typespaces$ to $[0, H]$. \nIf $\\mathcal{H}$ is $(\\epsilon, \\delta)$-uniformly convergent with sample complexity $m=m(\\epsilon, \\delta)$, then $\\mathcal{H}$ is $\\left(2\\epsilon, \\frac{H\\delta}{\\epsilon}\\right)$-uniformly convergent on product distribution with sample complexity $m$. 
\n\\end{restatable}\n\\noindent Lemma~\\ref{lem:relation-uniform-convergence} is closely related to a concentration inequality by \\citet{DHP16}.\n\\citeauthor{DHP16} show that for any single function $h:\\typespaces\\to[0, H]$, the expectation of $h$ on the empirical product distribution is close to its expectation on any product distribution with high probability.\nOur lemma generalizes this to show a simultaneous concentration for a family of functions, \nand seems more handy for applications such as ours.\n\nCombining Theorem~\\ref{thm:util-learn-upper-bound} with Lemma~\\ref{lem:relation-uniform-convergence}, we derive our learning results on the empirical product distribution.\n\n\\begin{defn}\nThe \\emph{empirical product distribution estimator} $\\empp$ estimates interim utilities of a bidding strategy on the empirical product distribution~$\\empDists$. Formally, \nfor bidder~$i$ with value~$\\vali$, for bidding strategy profile $\\bids(\\cdot)$,\n\\begin{equation}\n\\empp(\\samples, i, \\vali, \\bids(\\cdot)) \\coloneqq \n\\Ex[\\valsmi\\sim \\empDistsmi] { \\expostUi(\\vali, \\bidi(\\vali), \\bidsmi(\\valsmi)) }. 
\\label{eq:def_empp}\n\\end{equation}\n\\end{defn}\n\n\\begin{thm}\\label{thm:util-learn-upper-bound-product}\nSuppose $\\typespacei\\subseteq[0, H]$ for each $i\\in[n]$, and the tie-breaking rule is random-allocation or no-allocation.\nFor any $\\epsilon>0, \\delta \\in (0, 1)$, there is\n\\begin{equation}\nM = O \\left(\\frac{H^2}{\\epsilon^2} \\left[n\\log n\\log \\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right),\n\\label{eq:util-learn-upper-bound-product}\n\\end{equation}\nsuch that for any $m \\geq M$, \nthe empirical product distribution estimator $\\empp$ $(\\epsilon, \\delta)$-learns with $m$ samples\nthe utilities over the set of all monotone bidding strategies.\n\\end{thm}\n\n\nBy Theorem~\\ref{thm:util-learn-upper-bound-product}, utilities in the FPA on the empirical product distribution approximate those in the FPA on the original distribution; therefore the two auctions share the same set of approximate equilibria:\n\n\\begin{corollary}\\label{cor:find-BNE}\nSuppose $\\typespacei\\subseteq[0, H]$ for each $i\\in[n]$ and the tie-breaking rule is random-allocation or no-allocation. \nFor any $\\epsilon, \\epsilon'>0, \\delta \\in (0, 1)$, for $m$ satisfying~\\eqref{eq:util-learn-upper-bound-product}, \n with probability at least $1-\\delta$ over random draws of $\\samples$, for any monotone bidding strategy profile $\\bids(\\cdot)$, if $\\bids(\\cdot)$ is an $\\epsilon'$-BNE in the first price auction on value distribution $\\empDists=\\prod_i\\empDisti$, then $\\bids(\\cdot)$ is an $(\\epsilon'+2\\epsilon)$-BNE in the first price auction on value distribution $\\dists = \\prod_i \\disti$. \n Conversely, if $\\bids(\\cdot)$ is an $\\epsilon'$-BNE in the first price auction on value distribution~$\\dists$, then $\\bids(\\cdot)$ is an $(\\epsilon'+2\\epsilon)$-BNE in the first price auction on value distribution~$\\empDists$. \n\\end{corollary}\n\nCorollary~\\ref{cor:find-BNE} has an interesting consequence. 
\n\\citet{SWZ20} gave a polynomial-time\nalgorithm for computing Bayes Nash equilibrium in first price auctions on discrete value distributions. \nThe empirical product distribution is discrete, so one can run \\citeauthor{SWZ20}'s algorithm on it.\nCorollary~\\ref{cor:find-BNE} immediately implies:\n\\begin{corollary}\n\\label{cor:polytime-equilibria}\nThere is a Monte Carlo randomized algorithm for computing an $\\epsilon$-BNE in a first price auction with $n$ bidders on arbitrary product value distributions. \nThe running time of the algorithm is polynomial in $n$ and~$\\frac 1 {\\epsilon}$. \n\\end{corollary}\n\nNote that the running time of the algorithm does not depend on the size of the distributions' support, and the algorithm works for continuous distributions as well.\n\nResults very similar to Theorem~\\ref{thm:util-learn-upper-bound}, Theorem~\\ref{thm:util-learn-upper-bound-product}, and Corollary~\\ref{cor:find-BNE} apply to the all pay auction, with the same bounds on the number of samples. \nThe proofs are almost identical and so are omitted.\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{intro.tex}\n\n\\paragraph{Additional Related Works.}\n\\label{sec:related}\n\\input{related.tex}\n\n\n\\section{Preliminaries on Auctions}\n\\label{sec:prelim}\n\\input{prelim.tex}\n\n\\section{Sample Complexity of Utility Estimation}\n\\label{sec:fpa}\n\\input{fpa.tex}\n\n\\subsection{Upper Bound on Sample Complexity}\n\\label{sec:fpa-upper-bound}\n\\input{fpa-upper-bound.tex}\n\n\\subsection{Lower Bound on Sample Complexity}\n\\label{sec:lower-bound}\n\\input{fpa-lower-bound.tex}\n\n\\section{Auctions with Costly Search}\n\\label{sec:search}\n\\input{auction-with-search.tex}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this work we obtained almost tight sample complexity bounds for learning utilities in first price auctions and all pay auctions. 
\nWhereas utilities for unconstrained bidding strategies are hard to learn, we show that learning is made possible by focusing on monotone bidding strategies, which is sufficient for all practical purposes.\nWe also extended the results to auctions where search costs are present.\n\nMonotonicity is a natural assumption on bidding strategies in a single item auction, but it does not generalize to multi-parameter settings, where characterization of equilibrium is notoriously difficult.\nIt is an interesting question whether our results can be generalized to multi-item auctions, such as simultaneous first-price auctions, via more general, lossless structural assumptions on the bidding strategies. \n\nOur results also depend on the values being drawn independently. \nWhen bidders' values are correlated, the conditional distribution of opponents' values changes with a bidder's value, and any na\\\"ive utility learning algorithm needs a number of samples that grows linearly with the size of a bidder's type space.\nIt is interesting whether there are meaningful tractable middle grounds for utility learning between product distributions and arbitrary correlated distributions.\n\n\\bibliographystyle{abbrvnat}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn their celebrated paper $\\cite{BrezisCoronLieb}$, Brezis, Coron and Lieb showed, in the context of harmonic maps and liquid crystals theory, the existence of a close relation between sphere-valued harmonic maps having prescribed topological singularities at given points in $\\R^3$ and {\\it minimal connections} between those points, i.e., optimal mass transportation networks (in the sense of Monge-Kantorovich) having those points as marginals. 
This relation was further clarified by Almgren, Browder and Lieb in $\\cite{abh}$, who recovered the results in $\\cite{BrezisCoronLieb}$ by interpreting the (minimal connection) optimal transportation problem as a suitable Plateau problem for rectifiable currents having the given marginals as prescribed boundary.\n\nOur aim is to consider minimizing configurations for maps valued into manifolds and with prescribed topological singularities when the energy is possibly more general than the Dirichlet energy, and investigate the connection with Plateau problems for currents (or flat chains) with coefficients in suitable groups. The choice of these groups is linked to the topology of the involved target manifolds. \n\nIn this paper we will consider the particular case where the manifold is a product of spheres and the maps have assigned point singularities, and we will show, in Theorem \\ref{thm1} below, that energy minimizing configurations are related to Steiner-type optimal networks connecting the given points, i.e., solutions of the Steiner problem or solutions of the Gilbert-Steiner irrigation problem. The investigation of maps with values into products of spheres arises in several physical problems, such as the study of the structure of minimizers of two-component Ginzburg-Landau functionals, where the reference (ground state) manifold is a torus ($\\mathbb{S}^{1}\\times \\mathbb{S}^{1}$) (see \\cite{Stan}), or the case of Dipole-Free $^3$He-A, where the order parameter takes values into $(\\mathbb{S}^{2}\\times$ SO(3))$\/\\Z_{2}$, whose covering space is $\\mathbb{S}^{2}\\times \\mathbb{S}^{3}$ (see \\cite{Mermin, Christopher}).\nIn a companion paper in preparation $\\cite{CVO}$ we will discuss and state the results which correspond to more general situations. 
Let us also stress that the generalization of the results to a broader class of energies (and thus different norms) is not moot, this being the case, for instance, for dislocations in crystals (see \\cite{CoGarMas}). \n\n\nSteiner tree problems and Gilbert-Steiner (single sink) problems can be formulated as follows: given $n$ distinct points $P_{1},\\ldots, P_{n}$ in $\\R^{d}$, where $d, n \\geq 2$, we are looking for an optimal connected transportation network, $L = \\cup_{i=1}^{n-1}\\lambda_i$, along which the unit masses initially located at $P_{1},\\ldots, P_{n-1}$ are transported to the target point $P_n$ (single sink); here $\\lambda_i$ can be seen as the path of the $i^{\\rm th}$ mass flowing from $P_{i}$ to $P_{n}$, and the cost of moving a mass $m$ along a segment with length $l$ is proportional to $lm^{\\alpha}$, $\\alpha\\in[0,1]$. Therefore, we are led to consider the problem\n$$\n\\inf \\left\\{ I_\\alpha(L):\\,L = \\bigcup_{i=1}^{n-1}\\lambda_i\\text{ with }\\lbrace P_{i}, P_{n} \\rbrace \\subset \\lambda_{i}\\text{, for every }i=1,\\ldots, n-1 \\right\\}\n\\leqno{(I)}\n$$\nwhere the energy $I_\\alpha$ is computed as $I_\\alpha(L)=\\int_L |\\theta(x)|^\\alpha d{\\mathcal H}^1(x)$, with $\\theta(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i}(x)$. Let us notice that $\\theta$ stands for the mass density along the network. 
In particular, we consider the range $\\alpha\\in[0,1]$:\n\\begin{itemize}\n\\item when $\\alpha=0$ the problem is equivalent to optimizing the total length of the graph $L$, as in the Steiner Tree Problem (STP); \n\\item when $\\alpha=1$ the problem $(I)$ becomes the well-known Monge-Kantorovich problem;\n\\item and when $0<\\alpha<1$ the problem is known as the Gilbert-Steiner problem, or, more generally, as a branched optimal transport problem, due to the fact that the cost is proportional to a concave function $\\theta^{\\alpha}$, which favours the clustering of the mass during the transportation, thus giving rise to the branched structures which characterize the solutions (we refer the reader to $\\cite{Bernot2009}$ for an overview on the topic). \n\\end{itemize}\n\n\nIn the last decade, the communities of Calculus of Variations and Geometric Measure Theory have made significant efforts to study (Gilbert-)Steiner problems in many aspects, such as existence, regularity, stability and numerical feasibility (see for example \\cite{Xia, PaSt, MaMa2, MaMa, MariaAnonioAndrea, MariaAnonioAndreaPegonProuff, OuSa, BoLeSa, MaOuVe, BoOrOu, BoOu, BoOrOu2} and references therein). Among all the significant results, we would like to mention recent works in $\\cite{MaMa2, MaMa}$ and $\\cite{BoOrOu, BoOrOu2}$, which are closely related to the present paper. To be more precise, in $\\cite{MaMa2, MaMa}$ the authors turn the problem $(I)$ into the problem of mass-minimization of integral currents with multiplicities in a suitable group. For the sake of readability we postpone proper definitions about currents to Section \\ref{section2}; in this introduction we only recall that a $1$-dimensional integral current with coefficients in a group can be thought of as a formal sum of finitely many curves and countably many loops with coefficients in a given normed abelian group. 
For instance, considering the group $\\Z^{n-1}$ and assigning to the boundary datum $P_{1}, P_{2},\\ldots, P_{n-1}, P_{n}$ the multiplicities $e_{1},e_{2},\\ldots,e_{n-1},-(e_{1}+\\ldots+ e_{n-1})$, respectively (where $\\lbrace e_{i} \\rbrace_{1 \\leq i \\leq n-1}$ is the canonical basis of $\\R^{n-1}$), we recover the standard model in \\cite{MaMa2,MaMa}. \n\nIn fact, we can interpret the network $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ as the superposition of $n-1$ paths $\\lambda_i$ connecting $P_{i}$ to $P_{n}$ labelled with multiplicity $e_{i}$. This point of view requires a density function with values in $\\Z^{n-1}$, which corresponds to the so-called $1$-dimensional current with coefficients in the group $\\Z^{n-1}$. Furthermore, by equipping $\\Z^{n-1}$ with a certain norm (depending on the cost of the problem), we may define the notion of mass of those currents, and problem $(I)$ turns out to be equivalent to the following Plateau problem:\n$$\n\\inf \\left\\{ \\mathbb{M}(T):\\,\\partial T = e_{1}\\delta_{P_1}+e_{2}\\delta_{P_2}+\\ldots+e_{n-1}\\delta_{P_{n-1}}-(e_{1}+e_{2}+\\ldots+e_{n-1})\\delta_{P_n} \\right\\}\n\\leqno{(M)}\n$$\nwhere $T$ is a 1-dimensional current with coefficients in the group $\\Z^{n-1}$ (again, we refer the reader to Section \\ref{section2} for rigorous definitions). For mass minimization, there is the very useful notion of calibration (see Section \\ref{section3}), that is, a tool to prove minimality when dealing with concrete configurations (see Example \\ref{examplecalib}). To be precise, a calibration is a sufficient condition for minimality; see Definition \\ref{Calibration} and the following remarks.\n\nIn \\cite{BoOrOu, BoOrOu2}, by using \\cite{MaMa2, MaMa}, a variational approximation of the problem $(I)$ was provided through Modica-Mortola type energies in the planar case, and through Ginzburg-Landau type energies (see \\cite{ABO2}) in higher dimensional ambient spaces via $\\Gamma$-convergence. 
The corresponding numerical treatment is also presented there.\n\nFollowing \\cite{MaMa2, MaMa}, \\cite{BoOrOu, BoOrOu2}, and the strategy outlined in \\cite{abh} (relating the energy of harmonic maps with prescribed point singularities to the mass of $1$-dimensional classical integral currents), we provide here a connection between energy functionals comparable to the $k$-harmonic energy of maps with prescribed point singularities and the (Gilbert-)Steiner problems $(I)$. More precisely, let $P_{1},\\ldots,P_{n-1}, P_{n}$ in $\\R^{d}$ be given, and consider the spaces $H_{i}$ defined as the subsets of $W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ where the functions are constant outside a neighbourhood of the segment joining $P_i,P_n$ and have distributional Jacobian $\\frac{\\alpha_{d-1}}{d}( \\delta_{P_i}-\\delta_{P_n})$, respectively. Here $\\alpha_{d-1}$ is the surface area of the unit ball in $\\R^{d}$.\n\nLet $\\psi$ be a norm on $\\R^{n-1}$ which will be specified in Section \\ref{section3} (see \\eqref{normeuclidean}), and set \n\\begin{equation}\\label{def_h}\n\t\\mathbb{H}({\\bf u})=\\int_{\\R^{d}} \\psi(|\\nabla u_{1}|^{d-1}, |\\nabla u_{2}|^{d-1},\\ldots,|\\nabla u_{n-1}|^{d-1})\\,dx\n\\end{equation}\nwhere ${\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1}$. The functional $\\mathbb{H}$ is the so-called $k$-harmonic energy; it is modeled on the $(d-1)$-Dirichlet energy. We will consider here a class of energies $\\mathbb E$ for maps in $H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1}$ which are suitably related to $\\mathbb M$ and $\\mathbb H$, according to Definition \\ref{def:suiten} below. 
In this case, we investigate the problem of characterizing\n$$\n\\inf \\left\\{ \\mathbb{E}({\\bf u}):\\,{\\bf u}\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1} \\right\\}.\n\\leqno{(H)}\n$$\nThe main contribution of this paper is the following equivalence result in the minimization problem for the mass $\\mathbb M$ and an energy $\\mathbb E$ which is suitably related to $\\mathbb M$ and $\\mathbb H$.\n\n\\begin{theorem}\\label{thm1}\n\nAssume that a minimizer of the problem $(M)$ admits a calibration (see Definition \\ref{Calibration}). Consider an energy functional $\\mathbb{E}$ which is suitably related to $\\mathbb M$ and $\\mathbb H$, in the sense of Definition \\ref{def:suiten}. Then, we have\n\t\\begin{equation}\\label{thmharmonic}\n\t\\inf{\\mathbb{E}}=\\alpha_{d-1} \\inf{\\mathbb{M}}\n\t\\end{equation}\n\tor equivalently, in view of \\cite{MaMa2, MaMa},\n\t\\begin{equation}\\label{thmharmonic2}\n\t\\inf{\\mathbb{E}}=\\alpha_{d-1} \\inf{I_\\alpha}\\,.\n\t\\end{equation}\n\\end{theorem}\n\nAt present, we cannot dispense with the assumption on the existence of a calibration, because it is still not known whether a calibration, or even a weak version of it, is not only a sufficient but also a necessary condition for minimality (see Section \\ref{section2}). Nonetheless, dropping this assumption we can still state some partial results, as follows. 
\n\n\\begin{remark}{\\rm\n\t\\begin{enumerate}\n\t\t\\item[(i)] If $\\alpha=1$, $\\psi=\\|\\cdot \\|_{1}$, $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$, then we are able to prove that \\eqref{thmharmonic} still holds true, as a variant of the main result in \\cite{BrezisCoronLieb}.\n\t\t\\item[(ii)] In case $0\\leq \\alpha <1$, we obtain the following chain of inequalities\n\t\t\\begin{equation}\\label{compareintro}\n\t\t\\alpha_{d-1} \\inf{\\mathbb{M}}=\\alpha_{d-1}\\inf{I_\\alpha}\\geq \\inf{\\mathbb{E}}\\, \\geq \\alpha_{d-1} \\inf{\\mathbb{N}}\\,.\n\t\t\\end{equation}\n\t\tThe investigation of equality in \\eqref{compareintro} when $0\\leq \\alpha <1$ is delicate and will be considered in forthcoming works.\n\t\n\t\\end{enumerate}\n\n}\\end{remark}\n\\begin{remark}\\label{conjecture}\n{\\rm\tWe believe that the assumption of the existence of a calibration is not too restrictive. We actually conjecture that minimizing configurations for the problem $(M)$ admit a calibration in case of uniqueness, which is somehow a generic property (see \\cite{Cal_Ma_Stein}). In Example \\ref{examplecalib} we carry out the construction of configurations of $n$ points in $\\R^{n-1}$ with $n-2$ branching points which are generic in character, and we show that these configurations admit a calibration.\n}\\end{remark}\nThe organization of the paper is as follows: in Section \\ref{section2} we briefly review some basic notions of Geometric Measure Theory which will be used in the paper; in Section \\ref{section3} we recall (Gilbert-)Steiner problems and briefly describe their connection with Plateau's problem for currents with coefficients in a group. 
Finally, in Section \\ref{Prooftheorem1} we prove Theorem \\ref{thm1}.\n\\section{Preliminaries and notations}\\label{section2}\n\\subsection{Rectifiable currents with coefficients in a group $G$}\nIn this section, we present the notion of $1$-dimensional currents with coefficients in the group $\\R^{n-1}$ in the ambient space $\\R^{d}$ with $n, d\\geq 2$. We refer to \\cite{Ma} for a more detailed exposition of the subject. \n\nConsider $\\R^{n-1}$ equipped with a norm $\\psi$ and its dual norm $\\psi^{*}$. Denote by $\\Lambda_{1}(\\R^{d})$ the space of $1$-dimensional vectors and by $\\Lambda^{1}(\\R^{d})$ the space of $1$-dimensional covectors in $\\R^{d}$.\n\\begin{definition}{\\rm An $(\\R^{n-1})^{*}$-valued $1$-covector on $\\R^{d}$ is a bilinear map\n\t$$w : \\Lambda_{1}(\\R^{d})\\times \\R^{n-1}\\longrightarrow \\R\\,.\n\t$$\n\n\tLet $\\lbrace e_{1},e_{2},\\ldots,e_{n-1} \\rbrace$ be an orthonormal basis of $\\R^{n-1}$, and let $\\lbrace e^{*}_{1},e^{*}_{2},\\ldots,e^{*}_{n-1} \\rbrace$ be its dual. Then, each $(\\R^{n-1})^{*}$-valued $1$-covector on $\\R^{d}$ can be represented as \n\t$w=w_{1} e^{*}_{1}+\\ldots+w_{n-1}e^{*}_{n-1}\\,,$\n\twhere $w_{i}$ is a ``classical'' $1$-dimensional covector in $\\R^{d}$ for each $i=1,\\ldots,n-1$. 
To be precise, the action of $w$ on a pair $(\\tau,\\theta)\\in\\Lambda_1(\\R^d)\\times\\R^{n-1}$ can be computed as\n\t\\[\n\\langle w;\\tau,\\theta\\rangle=\\sum_{i=1}^{n-1}\\theta_i\\langle w_i,\\tau\\rangle\\,,\t\n\t\\]\n\twhere the scalar product on the right hand side is the standard Euclidean scalar product in $\\R^d$.\nWe denote by $\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})$ the space of $(\\R^{n-1})^{*}$-valued $1$-covectors in $\\R^{d}$, endowed with the (comass) norm:\n$$\n| w |_{c,\\psi}:=\\sup \\lbrace \\psi^{*} ( \\langle w ; \\tau, \\cdot \\rangle ) \\, : \\, \\vert \\tau \\vert \\leq 1\\rbrace\\,.$$\nSimilarly, we can define the space of $\\R^{n-1}$-valued $1$-vectors in $\\R^{d}$, denoted by $\\Lambda_{1, (\\R^{n-1},\\psi)}(\\R^{d})$ and endowed with the pre-dual (mass) norm: for any $v\\in \\Lambda_{1, (\\R^{n-1},\\psi)}(\\R^{d})$ we define\n\\begin{equation}\\label{nuclearnorm}\n\\begin{aligned}\n| v |_{m,\\psi}:= & \\sup \\lbrace \\langle w, v \\rangle \\, : \\, \\vert w \\vert_{c,\\psi} \\leq 1, w\\in \\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) \\rbrace\\,\\\\\n= & \\inf \\left\\{ \\sum_{l=1}^L \\psi (z_l) |\\tau_l| \\, : \\, \\tau_{1},\\ldots,\\tau_{L} \\in \\Lambda_{1}(\\R^{d}), \\, z_1, \\ldots, z_L \\in \\R^{n-1} \\mbox{ s.t. }v=\\sum_{l=1}^{L}z_l\\otimes\\tau_{l} \\right\\}\\,.\n\\end{aligned}\n\\end{equation}\n}\n\\end{definition}\n\\begin{definition}{\\rm\nAn $(\\R^{n-1})^{*}$-valued $1$-dimensional differential form defined on $\\R^{d}$ is a map \n$$\\omega: \\R^{d} \\longrightarrow \\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\,.$$\nLet us remark that the regularity of $\\omega$ is inherited from the components $\\omega_{i}$, $i=1,\\ldots,n-1$. \n\tLet $\\phi=(\\phi_1,\\ldots,\\phi_{n-1})$ be a function of class $C^{1}(\\R^d;\\R^{n-1})$. We denote\n\t$${\\rm d}\\phi:={\\rm d\\phi_{1}}e^{*}_{1}+\\ldots+{\\rm d}\\phi_{n-1}e^{*}_{n-1},$$\n\twhere ${\\rm d}\\phi_{i}$ is the differential of $\\phi_{i}$. 
Thus ${\\rm d}\\phi \\in C(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) )$.\n}\\end{definition}\n\\begin{definition}{\\rm\nA $1$-dimensional current $T$ with coefficients in $(\\R^{n-1},\\psi)$ is a linear and continuous map\n\t$$T: C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\right) \\longrightarrow \\R\\,.$$\n\tHere the continuity is meant with respect to the (locally convex) topology on $C^\\infty_c(\\R^d;\\Lambda^1_{(\\R^{n-1},\\psi)}(\\R^d))$ defined in analogy with the topology on $C^\\infty_c(\\R^d;\\R)$ which allows the definition of distributions.\n\tThe mass of $T$ is defined as\n\t\\[\n\t\\mathbb{M}(T):=\\sup \\left\\{ T(\\omega):\\, \\sup_{x\\in \\R^{d}}|\\omega(x)|_{c,\\psi} \\leq 1 \\right\\}\\,.\n\t\\]\n\tMoreover, if $T$ is a $1$-dimensional current with coefficients in $(\\R^{n-1}, \\psi)$, we define the boundary $\\partial T$ of $T$ as a distribution with coefficients in $(\\R^{n-1},\\psi)$, $\\partial T: C^{\\infty}_{c}(\\R^{d};(\\R^{n-1},\\psi) ) \\longrightarrow \\R $, such that $$\\partial T(\\phi):=T({\\rm d}\\phi)\\,.$$ \n\tThe mass of $\\partial T$ is defined as\n\t\\[\n\t\\mathbb{M}(\\partial T):=\\sup \\left\\{ T({\\rm d}\\varphi):\\, \\sup_{x\\in \\R^{d}} \\psi^*(\\varphi(x))\\leq 1 \\right\\}\\,.\n\t\\]\nA current $T$ is said to be normal if $\\mathbb{M}(T)+\\mathbb{M}(\\partial T)<\\infty$.\n}\\end{definition}\n\\begin{definition}{\\rm\n\tA $1$-dimensional rectifiable current with coefficients in the normed (abelian) group $(\\Z^{n-1}, \\psi)$ is a ($1$-dimensional) normal current (with coefficients in $(\\R^{n-1},\\psi)$)\tsuch that there exist a $1$-dimensional rectifiable set $\\Sigma\\subset\\R^d$, an approximate tangent vectorfield $\\tau \\, : \\, \\Sigma \\longrightarrow \\Lambda_{1}(\\R^{d})$, and a density function $\\theta : \\Sigma \\longrightarrow \\Z^{n-1}$ such that \n\t$$T(\\omega)=\\int_{\\Sigma}\\langle \\omega (x); \\tau (x), \\theta (x) \\rangle \\,d\\mathcal{H}^{1}(x)$$\n\tfor every 
$\\omega \\in C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) \\right)$. We denote such a current $T$ by the triple $\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$.\n}\\end{definition}\n\n\\begin{remark}\\label{rmk:mass}{\\rm\nThe mass of a rectifiable current $T=\\llbracket\\Sigma,\\tau,\\theta\\rrbracket$ with coefficients in $(\\Z^{n-1}, \\psi)$ can be computed as\n\t$$\\mathbb{M}(T):=\\sup \\left\\{ T(\\omega):\\, \\sup_{x\\in \\R^{d}}|\\omega(x)|_{c,\\psi} \\leq 1 \\right\\}=\\int_{\\Sigma}\\psi (\\theta(x))\\,d\\mathcal{H}^{1}(x)\\,.$$\nMoreover, $\\partial T: C^{\\infty}_{c}(\\R^{d};(\\R^{n-1},\\psi) ) \\longrightarrow \\R $ is a measure and there exist $x_{1},\\ldots,x_{m} \\in \\R^{d}$, $p_{1},\\ldots,p_{m} \\in \\Z^{n-1}$ such that\n\t$$\\partial T(\\phi)=\\sum_{j=1}^{m}p_{j}\\phi(x_{j}).$$\nFinally, the mass of the boundary $\\mathbb{M}(\\partial T)$ coincides with $\\sum_{j=1}^{m}\\psi(p_{j})$.\n\t}\\end{remark}\n\t\\begin{remark}{\\rm\n\t\tIn the trivial case $n=2$, we consider rectifiable currents with coefficients in the discrete group $\\Z$ and we recover the classical definition of integral currents (see, for instance, \\cite{FeBook}).\n\t}\\end{remark}\nFinally, it is useful to define the components of $T$ with respect to the index $i\\in\\{1,\\ldots,n-1\\}$: for every $1$-dimensional test form $\\tilde\\omega\\in C^\\infty_c(\\R^d;\\Lambda^1(\\R^d))$ we set\n\t$$T^{i}(\\tilde\\omega):=T(\\tilde\\omega e^{*}_{i})\\,.$$\n\tNotice that $T^{i}$ is a classical integral current (with coefficients in $\\Z$). Roughly speaking, in some situations we are allowed to see a current with coefficients in $\\R^{n-1}$ through its components $(T^{1},\\ldots,T^{n-1})$.\n\t\nWhen dealing with the Plateau problem in the setting of currents, it is important to point out a couple of critical features. For the sake of clarity, we recall them here for the particular case of $1$-dimensional currents, but the matter does not depend on the dimension. 
\n\\begin{remark}\\label{lavrentiev}{\\rm\nIf a boundary $\\{P_1,\\ldots,P_n\\}\\subset\\R^d$ is given, then the problem of the minimization of mass is well posed in the framework of rectifiable currents and in the framework of normal currents as well. In both cases the existence of minimizers follows from the direct method and, in particular, from the closure of both classes of currents. Obviously\n\\begin{align*}\n& \\min\\{\\mathbb{M}(T):\\,T\\text{ normal current with coefficients in }\\R^{n-1}\\text{ and boundary }\\{P_1,\\ldots,P_n\\}\\}\\\\\n\\le & \\min\\{\\mathbb{M}(T):\\,T\\text{ rectifiable current with coefficients in }\\Z^{n-1}\\text{ and boundary }\\{P_1,\\ldots,P_n\\}\\}\\,,\n\\end{align*}\nbut whether the inequality is actually an identity is not known for currents with coefficients in groups. The same question about the occurrence of a Lavrentiev gap between normal and integral currents holds for classical currents of dimension greater than $1$ and it is closely related to the problem of the decomposition of a normal current into rectifiable ones (see \\cite{Ma} for a proper overview of this issue). }\\end{remark}\n\nA formidable tool for proving the minimality of a certain current is to show the existence of a calibration.\n\n\\begin{definition}\\label{Calibration}{\\rm\n\tConsider a rectifiable current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ with coefficients in $\\Z^n$, in the ambient space $\\R^{d}$. 
A smooth $(\\R^{n})^{*}$-valued differential form $\\omega$ in $\\R^{d}$ is a calibration for $T$ if the following conditions hold:\n\t\\begin{enumerate}\n\t\t\\item[(i)]\\label{clr1} for a.e.\\ $x\\in \\Sigma$ we have that $\\langle \\omega(x); \\tau(x), \\theta (x)\\rangle=\\psi (\\theta(x));$ \n\t\t\\item[(ii)]\\label{clr2} the form is closed, i.e., ${\\rm d}\\omega=0;$\n\t\t\\item[(iii)]\\label{clr3} for every $x\\in \\R^{d}$, for every unit vector $t \\in \\R^{d}$ and for every $h\\in \\Z^{n}$, we have that\n\t\t$$\\langle \\omega(x); t, h \\rangle \\leq \\psi (h)\\,.$$\n\t\\end{enumerate}\n}\\end{definition}\n\nIt is straightforward to prove that the existence of a calibration associated with a current implies the minimality of the current itself. Indeed, with the notation in Definition \\ref{Calibration}, if $T'=\\llbracket\\Sigma',\\tau',\\theta'\\rrbracket$ is a competitor, i.e., $T'$ is a rectifiable current with coefficients in $\\Z^n$ and $\\partial T'=\\partial T$, then\n\\[\n{\\mathbb M}(T)=\\int_{\\Sigma}\\psi(\\theta)=\\int_{\\Sigma}\\langle\\omega;\\tau,\\theta\\rangle=\\int_{\\Sigma'}\\langle\\omega;\\tau',\\theta'\\rangle\\le\\int_{\\Sigma'}\\psi(\\theta')={\\mathbb M}(T')\\,.\n\\]\n\nWe stress the fact that the existence of a calibration is a sufficient condition for the minimality of a current, so exhibiting one is always a worthwhile attempt when a current is a good candidate for mass minimization. Nonetheless, it is also natural to wonder whether every mass-minimizing current has its own calibration, and this problem can be tackled in two ways: for specific currents or classes of currents (such as holomorphic subvarieties) one has to face an extension problem with the (competing) constraints (ii) and (iii), since condition (i) already prescribes the behaviour of the form on the support of the current. 
In general, one may attempt to prove the existence of a calibration as a result of a functional argument, picking it in the dual space of normal currents, but this approach has two still unsolved problems:\n\\begin{itemize}\n\\item the calibration is merely an element of the dual space of normal currents, thus it is far from being smooth;\n\\item this argument works in the space of normal currents and it is not known whether a minimizer in this class is rectifiable as well (see Remark \\ref{lavrentiev}).\n\\end{itemize}\nIn any case, for the specific currents with coefficients in $\\Z^n$ which match the energy-minimizing networks of a branched optimal transport problem (with a subadditive cost), we think that the Lavrentiev phenomenon cannot occur, as explained in Remark \\ref{conjecture}. \n\n\\subsection{Distributional Jacobian}\nWe recall the definition of the distributional Jacobian of a function $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\R^{d})\\cap L^{\\infty}_{\\rm loc}(\\R^{d}; \\R^{d})$, see also \\cite{JeSo02, ABO1}.\n\n\\begin{definition} Let $u$ be in $W^{1,d-1}_{\\rm loc}(\\R^{d}; \\R^{d})\\cap L^{\\infty}_{\\rm loc}(\\R^{d}; \\R^{d})$. We define the pre-jacobian $ju \\in L^1_{\\rm loc}(\\R^d;\\R^d)$ as\n$$ju\n:=(\\det(u,u_{x_{2}},\\ldots,u_{x_{d}}), \\det(u_{x_{1}},u,\\ldots,u_{x_{d}}),\n\\ldots,\\det(u_{x_{1}},\\ldots,u_{x_{d-1}}, u))\\,,$$\nwhere $u_{x_j}$ is an $L^{d-1}_{\\rm loc}(\\R^d;\\R^d)$ representative of the partial derivative of $u$ with respect to the $j^{\\rm th}$ direction. Thus we define the Jacobian $Ju$ of $u$ as $\\frac{1}{d}{\\rm d}(ju)$ in the sense of distributions. 
More explicitly, if $\\phi \\in C^{\\infty}_{c}(\\R^{d};\\R)$ is a test function, then one has\n\\begin{equation}\\label{distrib_jac}\n\\int_{\\R^{d}}\\phi\\, Ju\\,dx=-\\frac{1}{d}\\int_{\\R^{d}}\\nabla \\phi \\cdot ju\\,dx\\,.\n\\end{equation}\nThe identity required in \\eqref{distrib_jac} is clearer if one notices that $ju$ has been chosen in such a way that ${\\rm div}(\\varphi\\, j\\tilde u)=\\nabla\\varphi\\cdot j\\tilde u+d\\varphi\\det D\\tilde u$ whenever $\\tilde u$ is smooth enough to allow the differential computation.\n\\end{definition}\n\nOnce the singularities of the problem $\\{P_1,\\ldots,P_n\\}$ have been prescribed, we can also introduce the energy spaces $H_{i}$, for each $i=1,\\ldots,n-1$. By definition, a map $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ belongs to $H_i$ if $Ju=\\frac{\\alpha_{d-1}}{d}( \\delta_{P_i}-\\delta_{P_n})$, and there exists a radius $r=r(u)>0$ such that $u$ is constant outside $B(0, r(u))\\ni P_{i}, P_{n}$, where\n$B(0, r)$ is the open ball of radius $r$ centered at $0$. \n\nFor any $\\textbf{u}\\in H_1\\times \\ldots \\times H_{n-1}$, we define the (matrix-valued) pre-jacobian of $\\textbf{u}$ by\n\\begin{equation}\n\\textbf{ju}=(ju_1,\\ldots,ju_{n-1})\n\\end{equation}\nand its Jacobian by\n\\begin{equation}\n\\textbf{Ju}=(Ju_1,\\ldots,Ju_{n-1})\\,.\n\\end{equation}\nWe observe that $\\textbf{ju}$ is actually a $1$-dimensional normal current with coefficients in $\\R^{n-1}$. 
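As a model computation behind the definition of the spaces $H_{i}$ (a classical one, see e.g.\\ the computations in \\cite{BrezisCoronLieb}), assume the normalization $\\alpha_{d-1}=\\mathcal{H}^{d-1}(\\mathbb{S}^{d-1})$ and consider the radial map $u(x)=\\frac{x-P}{|x-P|}$ for some $P\\in\\R^{d}$. A direct computation gives $ju(x)=\\frac{x-P}{|x-P|^{d}}$, a field which is divergence-free away from $P$ and whose flux through every small sphere centered at $P$ equals $\\alpha_{d-1}$; hence, testing \\eqref{distrib_jac} with any $\\phi\\in C^{\\infty}_{c}(\\R^{d};\\R)$,\n$$\nJu=\\frac{\\alpha_{d-1}}{d}\\,\\delta_{P}\\,.\n$$\nThe dipole constructions of \\cite{BrezisCoronLieb, abh} modify such maps so as to be constant outside a neighbourhood of the segment joining $P_{i}$ and $P_{n}$, producing elements of $H_{i}$ with $Ju=\\frac{\\alpha_{d-1}}{d}(\\delta_{P_{i}}-\\delta_{P_{n}})$. 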
Moreover\n\\begin{equation}\n\\frac{1}{d}\\partial \\, \\textbf{ju}=-\\textbf{Ju}\\,.\n\\end{equation}\n\n\\begin{definition}\\label{def:suiten} Given $P_1,\\ldots,P_n\\in \\R^d$ and a norm $\\psi$ on $\\R^{n-1}$, a functional $\\mathbb{E}$ defined on $H_{1}\\times \\ldots \\times H_{n-1}$ is said to be suitably related to $\\mathbb{M}$ and $\\mathbb{H}$ (see \\eqref{def_h} for its definition) if the following properties hold.\n\\begin{itemize}\n\t\\item[(i)] $\\mathbb{M}(\\textbf{ju})\\leq \\mathbb{E}({\\bf u})$, where $\\textbf{ju}$ is the normal current defined by the pre-jacobian.\n\t\\item[(ii)] If there exist an open set $U\\subset\\R^d$ and a subset $I$ of the set of labels $\\{1,\\ldots,n-1\\}$ such that, on $U$, $u_i=u_l$ for every pair $i,l\\in I$ and $u_i=0$ otherwise, then we have \n\t\\begin{equation}\n\t\t\\mathbb{E}({\\bf u}\\chi_U) \\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}} \\mathbb{H}({\\bf u}\\chi_U)\\,,\n\t\\end{equation}\n\twhere $\\chi_U$ is the characteristic function of $U$.\n\t\\item[(iii)] When $k=1$ (i.e., when there is just one component), the functional $\\mathbb{E}$ coincides with the harmonic energy considered in \\cite{BrezisCoronLieb}.\n\\end{itemize}\n\\end{definition}\nLet us point out that requirement {\\it (ii)} is tailored to the dipole construction maps ${\\bf u}=(u_{1},\\ldots,u_{n-1})$ in Step $1$ of the proof of Theorem \\ref{thm1}.\n\nWe consider the following problem:\n$$\n\\inf \\left\\{\\mathbb{E}({\\bf u}), \\hspace{0.2cm} {\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1} \\right\\}.\n\\leqno{(H)}\n$$\nAs indicated in the introduction, the inspiration for considering the problem $(H)$ and comparing it with the irrigation problem $(I)$ comes from the works \\cite{MaMa2, MaMa} and \\cite{abh}. More precisely, \\cite{MaMa2, MaMa} provided a new framework for the problem $(I)$ by proving it to be equivalent to the mass-minimization problem for currents with coefficients in the group $\\Z^{n-1}$ endowed with a suitable norm. 
The point is to look at each irrigation network $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ encoded in the current $T=(T^{1},\\ldots, T^{n-1})$ where $T^{i}$ is a classical current supported by $\\lambda_{i}$, and the irrigation cost of $L$ is the mass of the current $T$. Then, combining this point of view with \\cite{abh} (see also \\cite{BrezisCoronLieb}), where the energy of harmonic maps with prescribed point singularities was related to $1$-dimensional classical currents, we are led to investigate the problem $(H)$ in connection with problem $(I)$.\n\nBefore moving to the next section, we provide a candidate for the functional $\\mathbb{E}$ satisfying the properties in Definition \\ref{def:suiten}.\nLet $\\textbf{u}=(u_1,\\ldots,u_{n-1}) \\in H_1\\times \\ldots \\times H_{n-1}$. Let $e_1,\\ldots,e_{n-1}$ be the canonical basis of $\\R^{n-1}$, and let $I$ be a subset of $\\lbrace 1,\\ldots, n-1 \\rbrace$; then we denote by\n$e_{I}$ the sum $\\sum_{i\\in I}e_{i}$. We define the energy density $\\textbf{e}(\\textbf{u})$ at a point $x\\in\\R^d$ as\n\\begin{align}\n\\textbf{e}(\\textbf{u})(x)=(d-1)^{-\\frac{d-1}{2}}\\inf\\Bigg\\{ & \\sum_{I\\in{\\mathcal{I}}}\\|e_I \\|_{\\alpha}|\\nabla u_I(x)|^{d-1}:\\, \\mbox{where }\\textbf{ju}(x)=\\sum_{I\\in{\\mathcal I}}ju_{I}(x)\\otimes e_I\\,\\label{energydensityforE}\\\\\n & \\text{and } \\mathcal{I}\\text{ is a partition of }\\{1,\\ldots,n-1\\}\\Bigg\\}\\,.\\nonumber\n\\end{align}\nTo be precise, here the matrix $\\textbf{ju} (x)$ is decomposed according to a partition ${\\mathcal I}$ of the set $\\{1,\\ldots,n-1\\}$ in such a way that, for each class $I\\in{\\mathcal I}$, $ju_i(x)=ju_l(x)$ for every pair $i,l\\in I$; the common value is denoted by $ju_I(x)$ (and similarly for $\\nabla u_I(x)$). \n\nAs an example, take ${\\bf u}=(u_1,u_2)\\in H_1\\times H_2$ for some choice of the points $P_1,P_2,P_3\\in\\R^d$. 
Then, at some point $x\\in \\R^d$, either $ju_1(x)\\neq ju_2(x)$ or $ju_1(x)=ju_2(x)$.\n\\begin{itemize}\n\\item If $ju_1(x)\\neq ju_2(x)$, then the unique decomposition that we are allowing is $\\textbf{ju}(x)=ju_1(x)\\otimes e_1+ju_2(x)\\otimes e_2$ and $\\textbf{e}(\\textbf{u})(x)=c_d(|\\nabla u_1(x)|^{d-1}+|\\nabla u_2(x)|^{d-1})$, where we abbreviated $c_d=(d-1)^{-\\frac{d-1}{2}}$.\n\\item If $ju_1(x)=ju_2(x)$, then, thanks to the subadditivity of $\\|\\cdot\\|_\\alpha$, the most convenient decomposition is $\\textbf{ju}(x)=ju_1(x)\\otimes(e_1+e_2)$ and $\\textbf{e}(\\textbf{u})(x)=c_d\\|e_1+e_2\\|_\\alpha|\\nabla u_1(x)|^{d-1}$.\n\\end{itemize}\nFinally, we consider the functional\n\\begin{equation}\\label{energyforE}\n\\mathbb{E}(\\textbf{u})=\\int_{\\R^{d}}{\\textbf e}(\\textbf{u})(x)\\, dx.\n\\end{equation}\n\\begin{prop}\\label{functionalE}Let $\\psi$ be the norm \ndefined as\n\\begin{equation}\n\\psi(h)=\\begin{cases} \n\\|h\\|_{\\alpha}=\\left(\\sum_{j=1}^{n-1}|h_{j}|^{\\frac{1}{\\alpha}}\\right)^{\\alpha} & \\mbox{in case } \\alpha \\in (0, 1], \\, h\\in \\Z^{n-1} \\\\ \n\\|h\\|_{0}=\\max \\lbrace |h_{1}|,\\ldots,|h_{n-1}| \\rbrace & \\mbox{in case } \\alpha=0, \\, h\\in \\Z^{n-1}\\,.\n\\end{cases}\n\\end{equation}\nLet $\\mathbb{E}$ be the functional defined above, in \\eqref{energyforE}. If $\\alpha=1$, i.e., $\\psi=\\|\\cdot \\|_{1}$, we choose $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$. \nThen $\\mathbb{E}$ is suitably related to $\\mathbb{M}$ and $\\mathbb{H}$ in the sense of Definition \\ref{def:suiten}.\n\\end{prop}\n\\begin{proof}\nWe start with property {\\it (i)}. Let $\\omega \\in C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\right)$ be a test form with comass norm $\\sup_{x\\in \\R^d} |\\omega(x)|_{c,\\psi} \\leq 1$. 
By using the very definition of $|\\cdot |_{m,\\psi}$, see \\eqref{nuclearnorm}, we obtain\n\\begin{equation}\\label{comparenuclearnorm}\n\\begin{aligned}\n|\\, \\textbf{ju} (\\omega)\\,|=\\left|\\int_{\\R^d}\\langle \\textbf{ju}(x), \\omega(x) \\rangle\\, dx\\right| \\leq \\int_{\\R^{d}} |\\textbf{ju} (x)|_{m,\\psi}\\,dx\\,.\n\\end{aligned}\n\\end{equation}\nOn the other hand, as already observed, for a.e.\\ $x\\in \\R^d$ we have\n\\begin{equation*}\n\\begin{aligned}\n|\\textbf{ju}(x)|_{m,\\psi}\\leq \\inf\\left\\{\\sum_{I\\in{\\mathcal{I}}}\\|e_I \\|_{\\alpha}|j u_I(x)|:\\, \\mbox{where }\\textbf{ju}(x)=\\sum_{I\\in{\\mathcal I}}ju_{I}(x)\\otimes e_I,\\,\\mathcal{I}\\text{ partition of }\\{1,\\ldots,n-1\\}\\right\\}\\,.\n\\end{aligned}\n\\end{equation*}\nObserve that for any $v\\in H_{l}$, $l=1,\\ldots,n-1$, one has for a.e.\\ $x\\in \\R^d$\n\\begin{equation}\n|jv(x)|\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}}|\\nabla v(x)|^{d-1}\\,,\n\\end{equation}\nsee also \\cite{BrezisCoronLieb}, page 64, and \\cite{abh}, Section A.1.3.\nTherefore, we obtain that for a.e.\\ $x\\in \\R^d$\n\\begin{equation}\n|\\textbf{ju}(x)|_{m,\\psi}\\leq {\\bf e}(\\textbf{u})(x)\\,.\n\\end{equation}\nThis in turn implies that\n\\begin{equation}\n\\begin{aligned}\n|\\, \\textbf{ju} (\\omega)\\,|\\leq \\mathbb{E} (\\textbf{u})\\,.\n\\end{aligned}\n\\end{equation}\nSo, by the arbitrariness of $\\omega$, we conclude that\n\\begin{equation}\n\\mathbb{M}(\\textbf{ju})\\leq \\mathbb{E} (\\textbf{u}).\n\\end{equation} \n\nConcerning property {\\it(ii)}, assume that, in some open set $U$, each $u_{i}$ is equal to either $0$ or a given function $v\\in W^{1,d-1}_{\\rm loc}(\\R^d,\\mathbb{S}^{d-1})$, thus in $U$ the pre-jacobian $\\textbf{ju}$ can be written as $\\textbf{ju}=jv\\otimes e_{I}$, for some $I\\subset \\lbrace 1,\\ldots,n-1 \\rbrace$. 
This implies that\n\\begin{equation}\n{\\bf e}(\\textbf{u})(x)\\leq \\| e_{I} \\|_{\\alpha} \\frac{1}{(d-1)^{\\frac{d-1}{2}}} |\\nabla v (x)|^{d-1}\n\\end{equation}\nfor a.e.\\ $x\\in U$, so we can conclude that\n\\begin{equation}\n\\mathbb{E}(\\textbf{u}\\chi_U)\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}(\\textbf{u}\\chi_U)\\,.\n\\end{equation}\n\n\nFinally, if $k=1$ (i.e., we have just one component $\\textbf{u}=u$), it is obvious that\n\\begin{equation}\n{\\bf e}(\\textbf{u})=\\frac{1}{(d-1)^{\\frac{d-1}{2}}} |\\nabla u|^{d-1}.\n\\end{equation}\nTo conclude the proof, we observe that, in case $\\alpha=1$, that is, $\\psi=\\| \\cdot \\|_{1}$, $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$ and this functional obviously satisfies the three properties.\n\\end{proof}\n\\section{(Gilbert-)Steiner problems and currents with coefficients in a group}\\label{section3}\nLet us briefly recall the Gilbert-Steiner problem and the Steiner tree problem and see how they can be turned into a mass-minimization problem for integral currents in a suitable group. \n\nLet $n$ distinct points $P_{1},\\ldots, P_{n}$ in $\\R^{d}$ be given. Denote by $G(A)$, with $A=\\lbrace P_{1},\\ldots, P_{n}\\rbrace$, the set of all acyclic graphs $L = \\bigcup_{i=1}^{n-1}\\lambda_i$, along which the unit masses located at $P_{1},\\ldots, P_{n-1}$ are transported to the target point $P_n$ (single sink). Here $\\lambda_i$ is a simple rectifiable curve and represents the path of the mass at $P_{i}$ flowing from $P_{i}$ to $P_{n}$. In \\cite{MaMa2, MaMa}, the occurrence of cycles in minimizers is ruled out; thus the problem $(I)$ is proved to be equivalent to \n\n$$\n\\inf \\left\\{ \\int_L |\\theta(x)|^\\alpha \\,d{\\mathcal H}^1(x), \\;\\; L\\in G(A), \\;\\;\\theta(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i}(x) \\right\\}\n\\leqno{(I)}\n$$\nwhere $\\theta$ is the mass density along the network $L$. 
Moreover, in \\cite{MaMa2, MaMa} the problem $(I)$ can be turned into a mass-minimization problem for integral currents with coefficients in the group $\\Z^{n-1}$: the idea is to label differently the masses located at $P_{1}, P_{2},\\ldots, P_{n-1}$ (source points) and to associate the source points $P_{1},\\ldots, P_{n-1}$ to the single sink $P_{n}$. Formally, we produce a $0$-dimensional rectifiable current (that is, a measure) with coefficients in $\\Z^{n-1}$, given by the difference between\n$$\\mu^{-}=e_{1}\\delta_{P_{1}}+e_{2}\\delta_{P_{2}}+ \\ldots +e_{n-1}\\delta_{P_{n-1}} \\mbox{ and }\\mu^{+}=(e_{1}+\\ldots+e_{n-1})\\delta_{P_{n}}\\,.$$\nWe recall that $\\lbrace e_{1},e_{2},\\ldots,e_{n-1} \\rbrace$ is the canonical basis of $\\R^{n-1}$. The measures $\\mu^{-}, \\mu^{+}$ are the marginals of the problem $(I)$. To any acyclic graph $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ we associate a current $T$ with coefficients in the group $\\Z^{n-1}$ as follows: to each $\\lambda_{i}$ associate the current $T_{i}=\\llbracket\\lambda_{i},\\tau_{i},e_{i}\\rrbracket$, where $\\tau_{i}$ is the tangent vector of $\\lambda_{i}$. We associate to the graph $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ the current $T=(T_{1},\\ldots,T_{n-1})$ with coefficients in $\\Z^{n-1}$. 
By construction we obtain $$\\partial T=\\mu^{+}-\\mu^{-}\\,.$$\nChoosing the norm $\\psi$ on $\\Z^{n-1}$ as\n\\begin{equation}\\label{normeuclidean}\n\\psi(h)=\\begin{cases} \n\\|h\\|_{\\alpha}=\\left(\\sum_{j=1}^{n-1}|h_{j}|^{\\frac{1}{\\alpha}}\\right)^{\\alpha} & \\mbox{in case } \\alpha \\in (0, 1], \\, h\\in \\Z^{n-1} \\\\ \n\\|h\\|_{0}=\\max \\lbrace |h_{1}|,\\ldots,|h_{n-1}| \\rbrace & \\mbox{in case } \\alpha=0, \\, h\\in \\Z^{n-1}\\,,\n\\end{cases}\n\\end{equation}\n\n\nin view of Remark \\ref{rmk:mass}, the problem $(I)$ is equivalent to \n$$\n\\inf \\left\\{ \\mathbb{M}(T), \\hspace{0.2cm} \\partial T =\\mu^{+}-\\mu^{-} \\right\\}\n\\,.\\leqno{(M)}\n$$\nWe refer the reader to \\cite{MaMa2, MaMa} for more details. From now on we restrict our attention to the coefficients group $(\\Z^{n-1}, \\|\\cdot\\|_{\\alpha})$, $0\\leq \\alpha \\leq 1$.\n\\begin{remark}\\label{prejacobiancurrent}\nLet $\\textbf{u}=(u_1,\\ldots,u_{n-1})\\in H_1\\times \\ldots \\times H_{n-1}$. One has\n\\begin{equation}\\label{boundaryofprejacobian}\n\\frac{1}{\\alpha_{d-1}} \\partial \\,\\textbf{ju}=\\mu^{+}-\\mu^{-}\\,.\n\\end{equation}\n\\end{remark}\n\nWe remark that turning the problem $(I)$ into a mass-minimization problem allows one to rely on the (dual) notion of calibration, which is a useful tool to prove minimality, especially when dealing with concrete configurations. We also recall that the existence of a calibration (see Definition \\ref{Calibration}) associated with a current $T$ implies that $T$ is a mass-minimizing current for the boundary $\\partial T$.\n\\begin{example}\\label{examplecalib}{\\rm\n\t\tLet us consider an irrigation problem with $\\alpha=\\frac{1}{2}$. We will consider a minimal network joining $n+1$ points in $\\R^{n}$; the construction of the network is explained below. 
Let us stress that in this example the coincidence of the dimension of the ambient space with the dimension of the space of coefficients is needed.\n\t\t\nAdopting the point of view of \\cite{HarveyLawson}, we propose a calibration first, and only {\\it a posteriori} we construct a current which fulfills the requirement (i) in Definition \\ref{Calibration}. We briefly recall that the problem $(I)$ can be seen as the mass-minimization problem for currents with coefficients in $\\Z^{n}$ with the norm $\\Vert \\cdot \\Vert_{\\frac{1}{2}}$. \n\nLet $\\{{\\rm d}x_1,\\ldots,{\\rm d}x_n\\}$ be the (dual) basis of covectors of $\\R^n={\\rm span}(e_1,\\ldots,e_n)$. We now prove that the differential form \n\t\t\\[\n\t\t\\omega=\n\t\t\\begin{bmatrix}\n\t\t{\\rm d}x_{1}\\\\\n\t\t{\\rm d}x_{2}\\\\\n\t\t\\vdots \\\\\n\t\t{\\rm d}x_{n}\n\t\t\\end{bmatrix}\n\t\t\\]\n\t\tsatisfies conditions (ii) and (iii) in Definition $\\ref{Calibration}$. Obviously ${\\rm d}\\omega=0$. Moreover,\n\t\tlet $\\tau=(\\tau_{1}, \\tau_{2},\\ldots,\\tau_{n})\\in \\R^{n}$ be a unit vector (with respect to the Euclidean norm). Thus, for our choice of the norm $\\psi=\\|\\cdot\\|_{\\frac 12}$ we can compute $\\Vert \\langle \\omega; \\tau, \\cdot \\rangle \\Vert_{\\frac{1}{2}}=( \\tau_{1}^{2}+\\tau_{2}^{2}+\\tau_{3}^{2}+\\ldots+\\tau_{n}^{2})^{\\frac{1}{2}}=1$. \n\t\t\nWe will now build a configuration of $n+1$ points $P_{1}, P_{2}, \\ldots, P_{n+1}$ in $\\R^{n}$ calibrated by $\\omega$. Notice that the network has $n-1$ branching points and is somewhat generic in character. More precisely, our strategy in building such a configuration is to choose the endpoints and branching points following the directions parallel to $e_{1}, e_{2}, e_{3}, \\ldots, e_{n}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, \\ldots,e_{1}+e_{2}+\\ldots+e_{n-1}, e_{1}+e_{2}+\\ldots+e_{n}$. We illustrate the construction in $\\R^{3}, \\R^{4}$. 
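The identity $\Vert \langle \omega; \tau, \cdot \rangle \Vert_{\frac{1}{2}}=1$ can be spot-checked numerically; a sketch with random Euclidean unit vectors (the dimension $n=5$ is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # ambient dimension, hypothetical

# Since omega = (dx_1, ..., dx_n), the pairing <omega; tau> is tau itself,
# and ||<omega; tau>||_{1/2} = (tau_1^2 + ... + tau_n^2)^{1/2} = |tau| = 1.
for _ in range(100):
    tau = rng.normal(size=n)
    tau /= np.linalg.norm(tau)          # random Euclidean unit vector
    pairing = tau                       # <dx_i; tau> = tau_i
    norm_half = float((np.abs(pairing) ** 2).sum() ** 0.5)
    assert np.isclose(norm_half, 1.0)
```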
This process can be extended to any dimension.\n\n\\begin{itemize}\n\\item In $\\R^{3}$, let us consider $P_{1}=(-1, 0, 0)$, $P_{2}=(0, -1, 0)$, $P_{3}=(1, 1, -1)$, $P_{4}=(2, 2, 1)$. Take, as \n\t\t\tbranching points, $G_{1}=(0, 0, 0)$, $G_{2}=(1, 1, 0)$. Now consider the current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ with support $\\Sigma$ obtained by the union of the segments $\\overline{P_1G_1},\\overline{P_2G_1},\\overline{G_1G_2},\\overline{P_3G_2},\\overline{G_2P_4}$. \n\t\t\t\\begin{figure}[tbh]\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tabular}{cc}\n\t\t\t\t\t\\includegraphics[width=0.4\\linewidth]{Constructionpoints}\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\caption{The picture illustrates the construction of $T$.}\n\t\t\t\t\\label{fig:1d_exe}\n\t\t\t\\end{figure}\n\t\t\t\n\t\t\tThe multiplicity $\\theta$ is set as\n\t\t\t$$\\theta(x)\n\t\t\t=\\begin{cases} \n\t\t\te_{1} & \\mbox{if } x\\in \\overline{P_{1}G_{1}} \\\\ \n\t\t\te_{2} & \\mbox{if } x\\in \\overline{P_{2}G_{1}} \\\\\n\t\t\te_{1}+e_{2} & \\mbox{if } x\\in \\overline{G_{1}G_{2}} \\\\\n\t\t\te_{3} & \\mbox{if } x\\in \\overline{P_{3}G_{2}} \\\\\n\t\t\te_{1}+e_{2}+e_{3} & \\mbox{if } x\\in \\overline{G_{2}P_{4}} \\\\\n\t\t\t0 & \\mbox{elsewhere}.\\\\\n\t\t\t\\end{cases}$$\t\t\t\nWe observe that $T$ is calibrated by $\\omega$, thus $T$ is a minimal network for the irrigation problem with sources $P_1,P_2$ and $P_3$ and sink $P_4$. Notice that the edges of the network meet at the branching points at $90$ degree angles, as is known for branched optimal structures with cost determined by $\\alpha=1\/2$.\n\\item In $\\R^{4}$, we keep the points $P_{1}=(-1, 0, 0, 0)$, $P_{2}=(0, -1, 0, 0)$, $P_{3}=(1, 1, -1, 0)$ and, in general, the whole network of the example above as embedded in $\\R^{4}$. We relabel $G_{3}:=(2,2,1,0)$. We now pick $P_{4}$ and $P_{5}$ in such a way that $\\overrightarrow{P_4G_{3}}=e_{4}$ and $\\overrightarrow{G_{3}P_{5}}=e_{1}+e_{2}+e_{3}+e_{4}$. 
For instance, we choose\t$P_{4}=(2, 2, 1, -1)$ and $P_{5}=(3, 3, 2, 1)$. As before, the marginals of the irrigation problem are $P_1,P_2,P_3,P_4$ as sources and $P_5$ as sink, while $G_1,G_2,G_3$ are branching points.\n\nLet us now consider the current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ supported on the union of the segments $\\overline{P_1G_1},\\overline{P_2G_1},\\overline{G_1G_2},\\overline{P_3G_2},\\overline{G_2G_3},\\overline{P_4G_3},\\overline{G_3P_5}$ and with multiplicity $\\theta$ given by\n$$\n\t\t\t\\theta(x)\n\t\t\t=\\begin{cases} \n\t\t\te_{1} & \\mbox{if } x\\in \\overline{P_{1}G_{1}} \\\\ \n\t\t\te_{2} & \\mbox{if } x\\in \\overline{P_{2}G_{1}} \\\\\n\t\t\te_{1}+e_{2} & \\mbox{if } x\\in \\overline{G_{1}G_{2}} \\\\\n\t\t\te_{3} & \\mbox{if } x\\in \\overline{P_{3}G_{2}} \\\\\n\t\t\te_{1}+e_{2}+e_{3} & \\mbox{if } x\\in \\overline{G_{2}G_{3}} \\\\\n\t\t\te_{4} & \\mbox{if } x\\in \\overline{P_{4}G_{3}} \\\\\n\t\t\te_{1}+e_{2}+e_{3}+e_{4} & \\mbox{if } x\\in \\overline{G_{3}P_{5}} \\\\\n\t\t\t0 & \\mbox{elsewhere}.\\\\\n\t\t\t\\end{cases}$$\n\t\t\tIt is easy to check that the orientation of each segment is parallel to its multiplicity vector, \n\t\t\ttherefore $T$ is calibrated by $\\omega$.\t\t\n\t\t\t\\item This procedure can be replicated to construct a configuration of $n+1$ points $P_{1}, P_{2}, \\ldots, P_{n+1}$ in $\\R^{n}$ calibrated by $\\omega$, always in the case $\\alpha=1\/2$.\t\n\t\t\\end{itemize}\n\t\t}\\end{example}\n\t\t\\begin{example}{\\rm\n\t\t\tWe now consider a Steiner tree problem. As in the previous example, we aim to construct calibrated configurations joining $n+1$ points in $\\R^{n}$ (with $n-1$ branching points). 
Consider the following differential form:\n\t\t\t\\[\n\t\t\t\\omega=\n\t\t\t\\begin{bmatrix}\n\t\t\t\\frac{1}{2}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{2}{\\rm d}x_{2}\\\\\n\t\t\t\\frac{1}{2}{\\rm d}x_{1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{2}\\\\\n\t\t\t\\frac{-1}{2}{\\rm d}x_{1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{3}\\\\\t\n\t\t\t\\frac{-1}{4}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{4}{\\rm d}x_{3}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{4}\\\\\n\t\t\t\\frac{-1}{8}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{8}{\\rm d}x_{3}+\\frac{\\sqrt{3}}{4}{\\rm d}x_{4}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{5}\\\\\n\t\t\t\\vdots\\\\\n\t\t\t\\frac{-1}{2^{n-2}}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{2^{n-2}}{\\rm d}x_{3}+\\frac{\\sqrt{3}}{2^{n-3}}{\\rm d}x_{4}+\\ldots+\\frac{\\sqrt{3}}{2^{n-k}}{\\rm d}x_{k+1}+\\ldots+\\frac{\\sqrt{3}}{4}{\\rm d}x_{n-1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{n}\n\t\t\t\\end{bmatrix}\\,.\n\t\t\t\\]\n\t\t\tIt is easy to check that the differential form $\\omega$ is a calibration only among those currents having multiplicities \n\t\t\t$e_{1}, e_{2}, e_{3}, \\ldots, e_{n}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, \\ldots,e_{1}+e_{2}+\\ldots+e_{n-1}, e_{1}+e_{2}+\\ldots+e_{n}$, and hence it allows us to prove the minimality of configurations in the class of currents with those multiplicities (cf.~\\cite{Marcello} for the notion of calibrations in families). 
Nevertheless, this is enough to prove the global minimality of some configurations.\n\t\t\t\n\t\t\\begin{itemize}\n\t\t\t\\item \t Consider $n=3$ and\n\t\t\t$P_{1}=\\left(\\frac{-1}{2},\\frac{\\sqrt{3}}{2},0\\right)$, $P_{2}=\\left(\\frac{-1}{2},\\frac{-\\sqrt{3}}{2}, 0\\right)$, $P_{3}=\\left(\\frac{\\sqrt{6}}{2}-\\frac{1}{2}, 0 ,\\frac{\\sqrt{3}}{2}\\right)$, $P_{4}=\\left(\\frac{\\sqrt{6}}{2}-\\frac{1}{2}, 0 , -\\frac{\\sqrt{3}}{2}\\right)$ (see also the example in \\cite[Section $3$]{BoOrOu2}).\nIndeed, we observe that the lengths $|\\overline{P_{1}P_{2}}|=|\\overline{P_{1}P_{3}}|=|\\overline{P_{1}P_{4}}|=|\\overline{P_{2}P_{3}}|=|\\overline{P_{2}P_{4}}|=|\\overline{P_{3}P_{4}}|=\\sqrt{3}$, meaning that the convex envelope of the points $P_{1},P_{2},P_{3},P_{4}$ is a regular tetrahedron: this observation allows us to restrict our investigation to currents having multiplicities $e_{1}, e_{2}, e_{3}, e_{1}+e_{2}, e_{1}+e_{2}+e_{3}$. More precisely, given any $1$-dimensional integral current $T$ with $\\partial T=(e_{1}+e_{2}+e_{3})\\delta_{P_{4}}-e_{1}\\delta_{P_{1}}-e_{2}\\delta_{P_{2}}-e_{3}\\delta_{P_{3}}$ whose support is an acyclic graph with two additional Steiner points, we can always construct a corresponding current $L$ with multiplicities $e_{1}, e_{2}, e_{3}$, $e_{1}+e_{2}$, $e_{1}+e_{2}+e_{3}$ having the same boundary as $T$ and such that $\\mathbb{M}(T)=\\mathbb{M}(L)$, thanks to the symmetric configuration $P_{1}, P_{2}, P_{3}, P_{4}$ combined with the fact that any minimal configuration cannot have fewer than two Steiner points. 
Indeed, by contradiction, if a minimal configuration for the vertices of a tetrahedron had a single Steiner point, then this configuration would violate the well-known property that edges meet at $120$ degree angles at Steiner points.\nTherefore, $\\omega$ calibrates the current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$, where $S_{1}=(0, 0, 0), S_{2}=\\left(\\frac{\\sqrt{6}}{2}-1, 0, 0\\right)$ are the Steiner points, $\\Sigma=\\overline{P_{1}S_{1}}\\cup \\overline{P_{2}S_{1}} \\cup \\overline{S_{1}S_{2}} \\cup \\overline{P_{3}S_{2}} \\cup \\overline{S_{2}P_{4}}$ and the multiplicity is given by\n\t\t\t$$\\theta(x)\n\t\t\t=\\begin{cases} \n\t\t\te_{1} & \\mbox{if } x\\in \\overline{P_{1}S_{1}} \\\\ \n\t\t\te_{2} & \\mbox{if } x\\in \\overline{P_{2}S_{1}} \\\\\n\t\t\te_{1}+e_{2} & \\mbox{if } x\\in \\overline{S_{1}S_{2}} \\\\\n\t\t\te_{3} & \\mbox{if } x\\in \\overline{P_{3}S_{2}} \\\\\n\t\t\te_{1}+e_{2}+e_{3} & \\mbox{if } x\\in \\overline{S_{2}P_{4}} \\\\\n\t\t\t0 & \\mbox{elsewhere}\\,.\\\\\n\t\t\t\\end{cases}$$\t\n\t\t\t\\item Using the same strategy as in Example \\ref{examplecalib}, we can build a configuration $P_{1}, P_{2}, P_{3}, P_{4}, P_{5}$ in $\\R^{4}$ starting from the points $P_{1}, P_{2}, P_{3}, P_{4}$ above, in such a way that the new configuration is calibrated by $\\omega$ among all currents with multiplicities $e_{1}, e_{2}, e_{3}, e_{4}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, e_{1}+e_{2}+e_{3}+e_{4}$. This construction can be extended to any dimension.\n\t\t\\end{itemize}\n}\\end{example}\n\n\n\\section{Proof of the main result}\\label{Prooftheorem1}\n\n\n\n\n\n\n\n\n\n\n\nThe proof of Theorem $\\ref{thm1}$ is much in the spirit of the dipole construction of $\\cite{BrezisCoronLieb, abh}$ (in the version of $\\cite{ABO1}$); it combines the properties of the functional $\\mathbb{E}$ with the existence of a calibration. 
\n\\begin{proof}\nLet $\\mathbb{E}$ be the functional which fulfills the requirements of Definition \\ref{def:suiten}.\n\tIn the first steps we prove the inequality \n\t$$\\inf{\\mathbb{E}}\\leq \\alpha_{d-1} \\inf{I_\\alpha}.$$\n\t\nWe briefly recall the dipole construction (see, for instance, \\cite[Theorem $3.1$, Theorem $8.1$]{BrezisCoronLieb}). Given a segment $\\overline{AB}\\subset\\R^d$ and a pair of parameters $\\beta,\\gamma>0$, we define \n\\begin{equation}\\label{neigh}\nU:=\\{x\\in\\R^d:\\,{\\rm dist}(x,\\overline{AB})<\\min\\{\\beta,\\gamma\\,{\\rm dist(x,\\{A,B\\})}\\}\\}\\subset\\R^d\n\\end{equation} \nto be a pencil-shaped neighbourhood with core $\\overline{AB}$ and parameters $\\beta,\\gamma$. For any fixed $\\varepsilon>0$, the dipole construction produces a function $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ with the following properties:\n\t\\begin{itemize}\n\t\\item $u\\equiv (0,\\ldots,0,1)$ in $\\R^d\\setminus U$;\n\t\\item $Ju=\\frac{\\alpha_{d-1}}{d}(\\delta_{A}-\\delta_{B})$;\n\t\\item moreover the map $u$ satisfies the following inequality\n\t\\begin{equation}\\label{estimateharmonic}\n\t\\frac{1}{(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}}\\int_{\\R^{d}}|\\nabla u|^{d-1}dx \\leq |AB|+\\varepsilon\\,,\n\t\\end{equation}\n\t\\end{itemize}\t\n\\textbf{Step 1.} \t\nLet $L=\\bigcup_{i=1}^{n-1}\\lambda_i$ be an acyclic connected polyhedral graph, and $T$ be the associated current with coefficients in $\\Z^{n-1}$ corresponding to $L$. Since $L$ is polyhedral, it can also be written as $L=\\bigcup_{j=1}^{k} I_{j}$, where $I_j$ are weighted segments. For each segment $I_{j}$ we can find parameters $\\delta_j,\\gamma_j>0$ such that the pencil-shaped neighbourhood $U_j = \\left\\{ x \\in \\R^d:\\, \\text{dist}(x,I_{j}) \\leq \\min \\left\\{ \\beta_{j}, \\gamma_j\\text{dist}(x,\\partial I_{j}) \\right\\} \\right\\}$ (modelled after \\eqref{neigh}) is essentially disjoint from $U_\\ell$ for every $\\ell\\neq j$. 
Then, for every $i=1,\\ldots,n-1$, let \n\t$V_{i}=\\bigcup_{j\\in K_i} U_j$\n\tbe a sharp covering of the path $\\lambda_{i}$. To be precise, we choose $K_i\\subset\\{1,\\ldots,k\\}$ such that $V_i\\cap U_\\ell$ is at most an endpoint of the segment $I_\\ell$, if $\\ell\\notin K_i$.\n\t\t\\begin{figure}[tbh]\n\t\t\\centering\n\t\t\\begin{tabular}{cc}\n\t\t\t\\includegraphics[width=0.4\\linewidth]{Dipoleconstruction}\n\t\t\\end{tabular}\n\t\t\\caption{A dipole construction of a Y-shaped graph connecting $3$ points.}\n\t\t\\label{fig:1d_exe2}\n\t\t\\end{figure}\t\n\t\n\t\nFor each path $\\lambda_i$, $i=1,\\ldots,n-1$, we build the map $u_{i}\\in H_{i}$ in such a way that it coincides with a dipole associated to the segment $I_j$ in the neighbourhood $U_j$ for each $j\\in K_i$. We put $u_i\\equiv (0,\\ldots,0,1)$ in $\\R^d\\setminus V_i$.\n\t\nWe obtain that $u_{i}\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ and satisfies $Ju_{i}=\\frac{\\alpha_{d-1}}{d}(\\delta_{P_{i}}-\\delta_{P_{n}})$. Moreover, summing up inequality \\eqref{estimateharmonic} repeated for each segment $I_j$ with $j\\in K_i$, the following inequality holds\n\t\\begin{equation*}\n\t\\frac{1}{(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}}\\int_{\\R^{d}}|\\nabla u_{i}|^{d-1}dx \\leq \\mathbb{M}(T_{i})+k\\varepsilon\\,,\n\t\\end{equation*}\n\twhere $T_{i}$ is the (classical) integral current corresponding to the $i^{\\rm th}$ component of $T$. \n\t\n\t\nIn particular, let us stress that the maps $u_1,\\ldots,u_{n-1}$ have the following further property: if some paths $\\lambda_{i_1},\\lambda_{i_2},\\ldots,\\lambda_{i_m}$ have a common segment $I_j$ for some $j\\in K_{i_1}\\cap K_{i_2}\\cap\\ldots\\cap K_{i_m}$, then $u_{i_1},\\ldots,u_{i_m}$ agree in $U_j$. 
Furthermore, setting $h_{i_{1}, i_{2},\\ldots,i_{m}}=(0,\\ldots,|\\nabla u_{i_{1}}|^{d-1},\\ldots, |\\nabla u_{i_{m}}|^{d-1},\\ldots,0)$, we obtain\n\\begin{equation*}\n\t\t\\frac{1}{(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}}\\int_{U_j}||h_{i_{1}, i_{2},\\ldots,i_{m}}||_{\\alpha}dx \\leq m^{\\alpha}(|I_j|+k\\varepsilon)\\,.\n\t\t\\end{equation*}\n\t\tThis bound holds for every $\\alpha\\in[0,1]$.\n\t\n\tCombining all the previous observations, we can conclude that, given any $\\tilde\\epsilon >0$, there exist maps $u_{i}\\in H_{i}$, $i=1,\\ldots,n-1$, such that \n\t\\begin{align*}\n\t\\int_{\\R^{d}} ||(|\\nabla u_{1}|^{d-1}, |\\nabla u_{2}|^{d-1},\\ldots,|\\nabla u_{n-1}|^{d-1})||_{\\alpha}\\,dx\n\t\\leq & (d-1)^{\\frac{d-1}{2}}\\alpha_{d-1} \\int_{L}|\\theta (x)|^{\\alpha}d\\mathcal{H}^{1}(x)+\\tilde\\epsilon\n\t\\\\ = & (d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}\\mathbb{M}(T)+\\tilde\\epsilon\\,,\n\t\\end{align*}\n\twhere $\\theta(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i}(x)$.\nThus, by the properties of $\\mathbb{E}$, one obtains that\n\\begin{equation}\n\\inf \\mathbb{E} \\leq \\mathbb{E}(\\textbf{u})\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}} \\mathbb{H}(\\textbf{u}) \\leq \\alpha_{d-1}\\mathbb{M}(T)+\\tilde\\epsilon.\n\\end{equation}\n\t\n\t\\noindent {\\bf Step 2.} Considering an arbitrary acyclic graph $L=\\bigcup_{i=1}^{n-1}\\lambda_i$, there is a sequence of acyclic polyhedral graphs $\\left(L_{m} \\right)_{m\\ge 1}$, $L_{m}=\\bigcup_{i=1}^{n-1}\\lambda^{m}_i$, such that\n\tthe Hausdorff distance $d_{H}(\\lambda^m_i, \\lambda_i) \\leq \\frac{1}{m}$; moreover (see \\cite[Lemma $3.10$]{BoOrOu}), denoting by $T$ and $T_{m}$ the associated currents with coefficients in $\\Z^{n-1}$, we also have that\n\t$$\\mathbb{M}(T_{m})=\\int_{L_{m}}|\\theta_{m}(x)|^{\\alpha}\\,d\\mathcal{H}^{1}(x) \\leq \\int_{L}|\\theta(x)|^{\\alpha}\\,d\\mathcal{H}^{1}(x)+\\frac{1}{m}=\\mathbb{M}(T)+\\frac{1}{m}\\,,$$\n\twhere $\\theta_{m}(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i^{m}}(x)$. \n\tOn the other hand, by the previous construction there exists a sequence $\\lbrace {\\bf u}_{m} \\rbrace_{m}$, ${\\bf u}_{m}=(u_{1,m},\\ldots,u_{n-1,m})\\in H_{1}\\times \\ldots \\times H_{n-1}$, such that\n\t\\begin{align*}\n\t\\inf \\mathbb{E} \\leq \\mathbb{E}(\\textbf{u}_{m})\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}} \\mathbb{H}(\\textbf{u}_{m})\n\t&\\leq \\alpha_{d-1} \\int_{L_{m}}|\\theta_{m}(x)|^{\\alpha}d\\mathcal{H}^{1}(x)+\\frac{1}{m}\\\\\n\t&=\\alpha_{d-1}\\mathbb{M}(T_{m})+\\frac{1}{m}\\\\\n\t&\\leq \\alpha_{d-1}\\mathbb{M}(T)+\\frac{1+\\alpha_{d-1}}{m}\\\\\n\t&=\\alpha_{d-1} \\int_{L}|\\theta(x)|^{\\alpha}d\\mathcal{H}^{1}(x)+\\frac{1+\\alpha_{d-1}}{m}\\,.\\\\\n\t\\end{align*}This implies that\n\t\\begin{equation}\n\t\\inf{\\mathbb{E}}\\leq \\alpha_{d-1} \\inf{I_\\alpha}=\\alpha_{d-1} \\inf \\mathbb{M}.\n\t\\end{equation}\nOn the other hand, by property $(i)$ of Definition \\ref{def:suiten}, we also have, for any $\\textbf{u}=(u_1,\\ldots,u_{n-1})\\in H_{1}\\times \\ldots \\times H_{n-1}$,\n\\begin{equation}\n\\alpha_{d-1} \\inf \\mathbb{N} \\leq \\mathbb{M}(\\textbf{ju}) \\leq \\mathbb{E}(\\textbf{u})\n\\end{equation}\n(see Remark \\ref{prejacobiancurrent} for why the constant $\\alpha_{d-1}$ appears in front of $\\inf \\mathbb{N}$).\nThis allows us to conclude that\n\\begin{equation}\n\\alpha_{d-1} \\inf \\mathbb{N} \\leq \\inf \\mathbb{E}.\n\\end{equation}\nTherefore we obtain the following chain of inequalities:\n\\begin{equation}\n\\alpha_{d-1} \\inf \\mathbb{N} \\leq \\inf{\\mathbb{E}}\\leq \\alpha_{d-1} \\inf{I_\\alpha}=\\alpha_{d-1} \\inf \\mathbb{M}.\n\\end{equation}\nSince, by assumption, a minimizer of the problem $(M)$ admits a calibration, we have \n\\begin{equation}\n\\inf \\mathbb{N}=\\inf \\mathbb{M}=\\inf{I_\\alpha}.\n\\end{equation}\nThis also means that \n\\begin{equation}\n\\alpha_{d-1}\\inf \\mathbb{N}=\\alpha_{d-1}\\inf 
\\mathbb{M}=\\alpha_{d-1}\\inf{I_\\alpha}=\\inf \\mathbb{E}\n\\end{equation}\nwhich is sought the conclusion.\n\\end{proof}\n\\begin{remark}{\\rm\n\n\n\n\n\tIn the proof of Theorem $\\ref{thm1}$, step 3, we must assume the existence of a calibration $\\omega$. Observe that, without this assumption, we still can deduce from that\n\t\\begin{equation}\\label{compare6}\n\t\\alpha_{d-1} \\inf{\\mathbb{M}}=\\alpha_{d-1}\\inf{I_\\alpha} \\geq \\inf{\\mathbb{E}} \\geq \\alpha_{d-1}\\, \\inf{\\mathbb{N}}\n\t\\end{equation}\n\twhere $\\inf{\\mathbb{N}}$ is the infimum of the problem obtained measuring the mass among $1$-dimensional normal currents with coefficients in $\\R^{n-1}$ (compare with Remark \\ref{lavrentiev}). \n\t\n\tMoreover, in case $\\alpha=1$, $\\psi=\\| \\cdot \\|_{1}$, $\\mathbb{E}=\\mathbb{H}$. First, $(I)$ turns out to coincide with the Monge-Kantorovich problem. Then,\n\t$$\\inf{\\mathbb{H}}\\geq (d-1)^{\\frac{d-1}{2}}\\alpha_{d-1} \\inf{I_\\alpha}=(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1} \\inf{\\mathbb{M}\\,.}$$\n\tTo see this is to use the results of Brezis-Coron-Lieb $\\cite{BrezisCoronLieb}$ separately for each map $u_{i}$, $i=1,\\ldots,n-1$, for the energy\n\t$$\\mathbb{H}({\\bf u})=\\int_{\\R^{d}}(|\\nabla u_{1}|^{d-1}+ |\\nabla u_{2}|^{d-1}+\\ldots+|\\nabla u_{n-1}|^{d-1})\\,dx\\,,$$\n\twhere, again, ${\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times \\ldots \\times H_{n-1}$.\t\t\n\tThe investigation of equality cases in $\\eqref{compare6}$, when $0\\leq \\alpha <1$, will be considered in forthcoming works.\n\n}\\end{remark}\n\n\n\\section*{Acknowledgements}\nThe authors are partially supported by GNAMPA-INdAM. 
The research of the third\nauthor has been supported by European Union's Horizon 2020 programme through project 752018.\n\nThe authors wish to warmly thank Giacomo Canevari for extremely fruitful and enlightening discussions.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nRedistribution of angular momentum in astrophysical systems is a major driver of their dynamical and\nsecular evolution. Galactic bars facilitate this process by means of gravitational torques, triggered \ninternally (spontaneously) or externally (interactively). Important aspects of stellar bar evolution\nare still being debated --- their origin and evolutionary changes in \nmorphology, growth and decay, are not entirely clear. \nTheoretical studies of angular momentum redistribution in disk-halo systems\nhave been limited almost exclusively to {\\it nonrotating} halos, following pioneering \nworks on linear perturbation theory by Lynden-Bell (1962), Lynden-Bell \\& Kalnajs (1972), Tremaine \\& \nWeinberg (1984) and Weinberg (1985), which underscored the dominant role of orbital resonances. Numerical \nsimulations have confirmed the angular momentum flow away from disks embedded in\naxisymmetric (e.g., Sellwood 1980; Debattista \\& Sellwood 1998, 2000; Tremaine \\& Ostriker 1999;\nVilla-Vargas et al. 2009, 2010; review by Shlosman 2013) and triaxial \n(e.g., El-Zant \\& Shlosman 2002; El-Zant et al. 2003; Berentzen et al. 2006; Berentzen \n\\& Shlosman 2006; Heller et al. 2007; Machado \\& Athanassoula 2010; Athanassoula et al. \n2013) halos. Resonances have been confirmed to account for the lion's share of angular momentum \ntransfer (e.g., Athanassoula 2002, 2003; Martinez-Valpuesta et al. 
2006; Weinberg \\& Katz 2007). \nIn this paradigm, the halo serves as the pure sink and the disk as the net source of angular momentum.\n\nHowever, realistic cosmological halos are expected to possess a net angular momentum, acquired \nduring the maximum expansion epoch (e.g., Hoyle 1949; White 1978) and possibly during\nthe subsequent evolution (Barnes \\& Efstathiou 1987; but see Porciani et al. 2002). Simulations \nhave quantified the distribution of spin values,\n$\\lambda\\equiv J_{\\rm h}\/(\\sqrt{2} M_{\\rm vir}R_{\\rm vir}v_{\\rm c})$, for cosmological dark matter (DM) \nhalos to follow a lognormal distribution, where $J_{\\rm h}$ is the\nangular momentum, $M_{\\rm vir}$ and $R_{\\rm vir}$ --- the halo virial mass and radius, and $v_{\\rm c}$\n--- the circular velocity at $R_{\\rm vir}$, with the mean value $\\lambda = 0.035\\pm 0.005$ (e.g., \nBullock et\nal. 2001). Spinning halos can increase the rate of the angular momentum absorption --- \nan issue brought up by Weinberg (1985) but never fully addressed since. Only recently has it been\nconfirmed numerically that the bar instability timescale is indeed shortened for $\\lambda>0$\n(Saha \\& Naab 2013). But these models had been terminated immediately after the bar instability had \nreached its peak, and hence avoided completely the secular stage of bar evolution.\n\nThe $\\lambda=0$ halos consist of two populations of DM particles, prograde and retrograde\n(with respect to the disk spin). The amount of angular momentum in {\\it each} of these populations can vary\nfrom zero for nearly radial orbits to a maximal one for nearly circular orbits. (Both extremes are \nmentioned \nfor pedagogical reasons only.) These extremes in angular momentum correspond to extremes in velocity\nanisotropy. Various degrees of velocity anisotropy \nin the halo lie in between, and represent a rich variety of dynamical models. 
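Since the spin parameter is dimensionless, any consistent unit system may be used to evaluate it; a minimal sketch with purely illustrative numbers, also drawing a lognormal spin distribution whose median is set to $0.035$ for illustration:

```python
import numpy as np

def spin_parameter(J_h, M_vir, R_vir, v_c):
    """Spin parameter lambda = J_h / (sqrt(2) * M_vir * R_vir * v_c);
    dimensionless, so any consistent unit system works."""
    return J_h / (np.sqrt(2.0) * M_vir * R_vir * v_c)

# toy halo in consistent (arbitrary) units -- values are hypothetical:
lam = spin_parameter(J_h=1.0, M_vir=10.0, R_vir=2.0, v_c=1.0)
assert np.isclose(lam, 1.0 / (np.sqrt(2.0) * 20.0))

# lognormal distribution of spins with median 0.035 (illustrative width)
rng = np.random.default_rng(1)
spins = rng.lognormal(mean=np.log(0.035), sigma=0.5, size=200000)
assert abs(np.median(spins) - 0.035) < 0.002
```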
\nStellar bars mediate the angular momentum transfer in such disk-halo systems with a broad range\nof efficiencies. The current paradigm of stellar bar evolution assumes an idealized \nversion of a nonrotating DM halo which cannot account for the whole bounty of associated processes. \nWe address these issues in a subsequent paper (in preparation).\n\nIn this Letter we demonstrate for the first time that secular growth of galactic bars in spinning DM \nhalos is damped more strongly with increasing $\\lambda$, and this effect is the result of a modified \nangular momentum transfer. Section~2 describes our numerical methods.\nResults are given in section~3. \n \n\\section{Numerics and Initial Conditions}\n\\label{sec:num}\n\nWe use the $N$-body part of the tree-particle-mesh Smoothed Particle Hydrodynamics code \nGADGET-3 originally described in Springel (2005). The units of mass and distance are taken as \n$10^{11}\\,M_\\odot$ and 1\\,kpc, respectively. \nWe use $N_{\\rm h} = 10^6$ particles for the DM halo, and $N_{\\rm d} = 2\\times 10^5$ for stars.\nConvergence models have been run with $N_{\\rm h} = 4\\times 10^6$ and $N_{\\rm d} = 4\\times 10^5$,\nin compliance with the Dubinski et al. (2009) study of discrete resonance interactions\nbetween the bar and halo orbits.\nThe gravitational softening is $\\epsilon_{\\rm grav}=50$\\,pc for stars and DM.\nTo simplify the analysis we have ignored the stellar bulge. The opening angle $\\theta$ of the\ntree code has been reduced from 0.5 used in cosmological simulations to 0.4 which increases\nthe quality of the force calculations. Our models\nhave been run for 10\\,Gyr with an energy conservation of 0.08\\% and angular momentum\nconservation of 0.05\\% over this time. \n\nTo construct the initial conditions, we have used a novel method introduced by Rodionov \\& Sotnikova\n(2006), see also Rodionov et al. (2009). We provide only minimal details for this method,\nwhich is elaborated elsewhere. 
It is based on the constrained\nevolution of a dynamical system. The basic steps include (1) constructing the model using prescribed\npositions of the particles with some (non-equilibrium) velocities, (2) allowing the particles to evolve \nfor a short time which leads to modified positions and velocities, (3) returning the particles\nto the old positions with the new velocities, and (4) iterating on the previous steps until\nvelocities converge to equilibrium values. This results in the near-equilibrium dynamical system\nwhich is then evolved. \n\nThe initial disk has been constructed as exponential, with the volume density given by \n\n\\begin{equation}\n\\rho_{\\rm d}(R,z) = \\biggl(\\frac{M_{\\rm d}}{4\\pi h^2 z_0}\\biggr)\\,{\\rm exp}(-R\/h) \n \\,{\\rm sech}^2\\biggl(\\frac{z}{z_0}\\biggr),\n\\end{equation}\nwhere $M_{\\rm d}=6.3\\times 10^{10}\\,M_\\odot$ is the disk mass, $h=2.85$\\,kpc is its radial \nscalelength, and $z_0=0.6$\\,kpc is the scaleheight. $R$ and $z$ represent the cylindrical coordinates. \n\nThe halo density is given by Navarro, Frenk \\& White (1996, NFW):\n\n\\begin{equation}\n\\rho_{\\rm h}(r) = \\frac{\\rho_{\\rm s}\\,e^{-(r\/r_{\\rm t})^2}}{[(r+r_{\\rm c})\/r_{\\rm s}](1+r\/r_{\\rm s})^2}\n\\end{equation}\nwhere $\\rho(r)$ is the DM density in spherical coordinates, $\\rho_{\\rm s}$\nis the (fitting) density parameter, and $r_{\\rm s}=9$\\,kpc is the characteristic radius, where the power \nlaw slope is (approximately) equal\nto $-2$, and $r_{\\rm c}$ is a central density core. We used the Gaussian cutoffs at \n$r_{\\rm t}=86$\\,kpc for the halo and $R_{\\rm t}=6h\\sim 17$\\,kpc\nfor the disk models, respectively. The halo mass is $M_{\\rm h} = 6.3\\times 10^{11}\\,M_\\odot$,\nand halo-to-disk mass ratio within $R_{\\rm t}$ is $\\sim 2$. Other ratios have\nbeen explored as well. Oblate halos with various \npolar-to-equatorial axis ratios, $q=c\/a$, \nhave been analyzed, with $0.8\\ltorder q\\ltorder 1$. 
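Once the vertical $\,{\rm sech}^2$ profile is integrated out, the exponential disk of equation (1) has a closed-form enclosed mass; a numerical cross-check with the parameters quoted above, in code units of $10^{11}\,M_\odot$ and kpc:

```python
import numpy as np

M_d, h = 0.63, 2.85   # disk mass (10^11 M_sun) and radial scalelength (kpc)

def disk_mass_within(R):
    """Enclosed mass of the infinite exponential disk, integrated over z:
    M(<R) = M_d * [1 - (1 + R/h) * exp(-R/h)]."""
    x = R / h
    return M_d * (1.0 - (1.0 + x) * np.exp(-x))

# cross-check against direct integration of Sigma(R) = M_d/(2 pi h^2) e^{-R/h}
R = np.linspace(0.0, 30.0, 300001)
f = 2.0 * np.pi * R * M_d / (2.0 * np.pi * h**2) * np.exp(-R / h)
M_num = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(R)))   # trapezoid rule
assert np.isclose(M_num, disk_mass_within(30.0), rtol=1e-4)
```

Note that this sketch ignores the Gaussian truncation at $R_{\rm t}$ used in the actual models; it only checks the analytic form of the untruncated profile.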
Here, we limit our discussion to cuspy halos \nwith $q\\sim 1$, and a small \ncore of $r_{\\rm c}=1.4$\\,kpc. Other profiles, such as the large core NFW and\nisothermal sphere density profiles, have been implemented as well, and\nresulted in qualitatively similar evolution. Dispersion velocity anisotropy, $\\beta$, has been \nconstrained initially to be mild, using the novel method of Constrained Evolution \ndiscussed above. Velocities have been taken to be isotropic in the central region and the\nanisotropy increased to $\\beta\\sim 0.3$ outside the disk.\n\nDisk radial dispersion velocities have been taken as $\\sigma_{\\rm R}(R)= \\sigma_{\\rm R,0}\\,{\\rm\nexp}(-R\/2h)$ with $\\sigma_{\\rm R,0}=143\\,{\\rm km\\,s^{-1}}$. This results in $Q=1.5$\nat $R\\sim 2.42\\,h$, and increasing values toward the center and outer disk. Vertical velocity\ndispersions are $\\sigma_{\\rm z}(R)=\\sigma_{\\rm z,0}\\,{\\rm exp}(-R\/2h)$, with $\\sigma_{\\rm\nz,0}=98\\,{\\rm km\\,s^{-1}}$.\n\nTo form spinning halos, we have flipped the angular momenta, $J_{\\rm z}$, \nof a prescribed fraction of DM particles which are on retrograde orbits with respect to the disk, \nby reversing their velocities, in line with Lynden-Bell's (1960) Maxwell demon. Only $\\lambda\\sim 0-0.09$ \nmodels are discussed here. \nThe $\\lambda < 0$ cases are simpler, due to a decreased fraction of prograde halo particles able to\nresonate with the bar\/disk particles (e.g., Christodoulou et al. 1995). \nThe implemented velocity reversals preserve the solution to the Boltzmann\nequation and do not alter the DM density profile or velocity magnitudes (e.g., Lynden-Bell 1960, 1962; \nWeinberg 1985). For spherical halos, the invariancy under velocity reversals is a direct corollary of \nthe Jeans (1919) theorem (see also Binney \\& Tremaine 2008). 
The most general distribution function \nfor such systems is a sum of $f(E,J^2)$, where $E$ is the energy and $J$ --- the magnitude of the total angular \nmomentum, and of an odd function of $J_{\\rm z}$, \ni.e., $g(E,J,J_{\\rm z})$ (Lynden-Bell 1960). If $g\\neq 0$, the spherical system has \na net rotation around this axis.\n\nWe left the disk parameters unchanged, while the halo models differ in spin\n$\\lambda$. The value of $\\lambda$ has been added to the model name using the last two significant \ndigits, e.g., P60 means $\\lambda=0.060$ and ``P'' stands for prograde. \n\n\\section{Results}\n\\label{sec:res}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=0,scale=0.47]{fig1.ps}\n\\end{center}\n\\caption{{\\it Upper:} Evolution of the bar amplitudes, $A_2$ (normalized by the monopole term $A_0$), \nfor spherical NFW halos with $q=1$.\nShown are the P00, P45, P60 and P90 models. {\\it Lower:} Evolution of the bar pattern speed, $\\Omega_{\\rm b}$, \nin the above models.\n}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[angle=0,scale=0.9]{fig2.ps}\n\\end{center}\n\\caption{Disk-bar surface density contours (face-on, edge-on, and end-on) at $t=10$\\,Gyr, for the NFW \nhalos with $q=1$: P00 (left column), P45 (center) and P90 (right) models. Note the different bulge\nshapes: X-shaped for P00, boxy\/X-shaped for P45, and boxy for P90, as well as the decreasing strength of\n{\\it ansae} with increasing $\\lambda$ (see text).\n}\n\\end{figure*}\n\nAll models presented here have an identical\nmass distribution, both in DM and stars. Hence, any differences in the\nevolution must follow from the initial distribution of angular momentum in the DM halos and\nits redistribution in the bar-disk-halo system.\nFigure\\,1 displays the evolution of the stellar bars through the amplitudes of the Fourier $m=2$ mode, $A_2$, \nand their pattern speeds, $\\Omega_{\\rm b}$, for 10\\,Gyr. 
This timescale is probably close to the maximum \nuninterrupted growth of galactic disks in the cosmological framework, and hence to the lifetime of\nthe bars. The normalized (by the monopole term $A_0$) bar amplitude has been defined here as \n\n\\begin{equation}\n\\frac{A_2}{A_0} = \\frac{1}{A_0}\\sum_{i=1}^{N_{\\rm d}} m_{\\rm i}\\,e^{2i\\phi_{\\rm i}},\n\\end{equation} \nfor $R\\leq 14$\\,kpc. The summation is performed over all disk particles with the mass $m=m_{\\rm i}$\nat angles $\\phi_{\\rm i}$ in the disk plane. $\\Omega_{\\rm b}$ is obtained\nfrom the phase angle $\\phi= 0.5\\,{\\rm tan^{-1}}[{\\rm Im}(A_2)\/{\\rm Re}(A_2)]$ evolution with time.\nWe divide the evolution into two phases: the dynamical phase, which consists of the initial \nbar instability and terminates with the vertical buckling instability of the bars and formation of \nboxy\/peanut-shaped bulges (e.g., Combes et al. 1990; Pfenniger \\& Friedli 1991; Raha et al. 1991; \nPatsis et al. 2002; \nAthanassoula 2005; Berentzen et al. 2007). Buckling weakens the bar but does not dissolve it \n(Martinez-Valpuesta \\& Shlosman 2004). Repeated bucklings increase the size of the bulge \n(Martinez-Valpuesta \\& Shlosman 2005; Martinez-Valpuesta et al. 2006). One buckling has been\nobserved in the models presented here --- following it,\nthe bar enters the second phase, that of secular evolution.\n\nThe most striking development observed in models of Figure\\,1 during the secular phase is an increased \ndamping of the bar amplitude and a slower or absent bar growth for $\\lambda\\gtorder 0.03$. \nThe P00 model ($\\lambda=0$) displays healthy growth after buckling.\nThe P30 and P45 bars have a slower growth rate than the P00 bar, and do not recover their pre-buckling \nstrength even after 10\\,Gyr. But models P60 and P90 show no growth in $A_2$ at all. \nThe corresponding pattern\nspeed evolution, $\\Omega_{\\rm b}(t)$, for these models differs substantially as well. 
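The amplitude and phase defined above are simple to compute from particle data; a sketch on toy particle sets, taking the modulus of the complex sum for the plotted amplitude and using `arctan2` as a robust variant of the quoted $\tan^{-1}$ formula:

```python
import numpy as np

def bar_amplitude(m, phi):
    """Normalized m=2 Fourier amplitude |A_2|/A_0 and phase angle
    phi_bar = 0.5 * atan2(Im A_2, Re A_2) from particle masses and angles."""
    A2 = np.sum(m * np.exp(2j * phi))
    A0 = np.sum(m)
    return float(np.abs(A2) / A0), float(0.5 * np.arctan2(A2.imag, A2.real))

rng = np.random.default_rng(2)
m = np.ones(10000)                     # equal-mass particles, hypothetical

# axisymmetric disk: angles uniform, so |A_2|/A_0 is small (~1/sqrt(N))
amp_iso, _ = bar_amplitude(m, rng.uniform(0.0, 2.0 * np.pi, m.size))
assert amp_iso < 0.05

# perfect bar along phi = 0: all particles at phi = 0 or pi gives |A_2|/A_0 = 1
phi_bar = np.where(rng.random(m.size) < 0.5, 0.0, np.pi)
amp_bar, phase = bar_amplitude(m, phi_bar)
assert np.isclose(amp_bar, 1.0) and abs(phase) < 1e-12
```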
The P90 bar 
displays a perfectly flat $\Omega_{\rm b}(t)$ and does not lose its angular momentum to the
disk and/or the halo. This includes both the internal angular momentum (i.e., circulation) and the
tumbling. A similar trend between the final $\Omega_{\rm b}$ and $\lambda$ can also be
observed in Figure\,7 of Debattista \& Sellwood (2000), although low
resolution apparently prevented any conclusion of this sort.
 
Figure\,2 compares the end products of the secular evolution of barred disks in the P00, P45 and P90 models. 
The differences appear to be profound. First, the bar size clearly anticorrelates with $\lambda$ --- 
this is
a reflection of the inability of the bar potential to capture additional orbits and grow in length and
mass. Second, the {\it ansae} (handles) feature is the strongest in the P00 bar, while it is smaller in 
size for P45 
and completely absent in the P90 bar. Ansae have been associated with captured disk orbits librating around
the bar (Martinez-Valpuesta 2006; Martinez-Valpuesta et al. 2006). This is another indication that
the bar in high-$\lambda$ models does not grow. Note that the surface density in the disk is clearly affected:
trapping of the disk orbits by the P00 bar creates low-density regions in the disk, but not in P90. We
analyzed the properties of the halo `ghost' bar (Holley-Bockelmann et al. 2005; Athanassoula
2007; Shlosman 2008), and found no growth there either. The offset angle between the ghost and stellar
bars remains near zero (within the error margin). Third, the face-on morphology of the P00 bar is 
rectangular, while that of P90 is elliptical. Fourth, the bulges that formed as a result of the buckling
instability show the same size$-\lambda$ anticorrelation, as seen in the edge-on (i.e., along the bar's 
minor axis) frames. Furthermore, they differ 
in shape as well: the P00 bulge has an X-shape, P45 is boxy/X-shaped, and P90 is boxy.
Trapped 3-D orbits
are responsible for the bulge shape (e.g., Patsis et al. 2002; Athanassoula 2005; Martinez-Valpuesta et al. 
2006). 

Even more intriguing is the near or complete absence of secular braking in the P60 and P90 bars. Although the
bars are weak, the constancy of $\Omega_{\rm b}$ and $A_2$ over 6\,Gyr in P90 points to no angular momentum 
transfer away from the bar, or, alternatively, to an opposite flux from the halo which compensates for the 
loss of angular momentum by the bar. As we show below, it is the second possibility that is realized.
While the P60 and P90 models exhibit extremes of this effect, it is visible at various levels in all models 
with $\lambda\gtorder 0.02$. 

While most of the angular momentum transfer away from the bar is due to resonances, we 
deal with this aspect of the problem elsewhere. Here, however, we do quantify the {\it rate} of the
overall angular momentum transfer between the disk and the halo, i.e., accounting for both resonant and
non-resonant angular momentum redistribution. This is accomplished by dividing the disk and halo into
nested cylindrical shells and constructing a two-dimensional map of the angular momentum change in each shell as
a function of $R$ and $t$ (e.g., Villa-Vargas et al. 2009, 2010). Such a color-coded diagram is shown in 
Figure\,3 for disk stars (lower frames), $\langle\dot J_*\rangle\equiv (\partial J_*/\partial t)_{\rm R}$, 
and for halo particles (upper frames), 
$\langle\dot J_{\rm DM}\rangle\equiv (\partial J_{\rm DM}/\partial t)_{\rm R}$, 
where the brackets indicate time-averaging. 

\begin{figure*}
\begin{center}
\includegraphics[angle=-90,scale=0.6]{fig3.ps}
\end{center}
\caption{\underline{DM halos} ({\it upper frames}): Rate of angular momentum flow $\dot J$ as a function of 
cylindrical 
radius and time for the P00 (left), P30 (middle), and P60 (right) models with $q=1$ NFW DM halos. 
\nThe color palette corresponds to gain\/loss rates (i.e., red\/blue) using a logarithmic scale in color. \nThe cylindrical shells have $\\Delta R = 1$\\,kpc, extending to $z=\\pm \\infty$. \n\\underline{Stellar disks} ({\\it lower frames}): same for (identical) disk models embedded in the P00 (left), \nP30 (middle), and \nP60 (right) halos, except $\\Delta R = 0.5$\\,kpc, and $|\\Delta z| = 3$\\,kpc. Positions of major disk \nresonances, ILR, CR, and OLR, have been marked.\n}\n\\end{figure*}\n\nThe diagrams for P00 are the easiest to understand. The red (blue) colors correspond to the absorption\n(emission) of the angular momentum. The continuity of these colors for the P00 disk represents the emission\nand absorption of angular momentum by the disk prime resonances. For example, the dominant blue band drifting \nto larger\n$R$ with time is associated with the emission of angular momentum by the inner Lindblad resonance (ILR),\nand the additional blue band corresponds to the Ultra-Harmonic resonance (UHR). The dominant red band\nfollows the corotation resonance (CR) and the outer Lindblad resonance (OLR). \n\nThe number of DM particles on prograde orbits has steadily increased with $\\lambda$, raising the\npossibility of resonant coupling between them and the bar orbits. This is supported by linear theory\n(Weinberg 1985 and refs. therein) and by numerical simulations (Saha \\& Naab 2013). \nIndeed, we observe increased emission of angular momentum by the ILR and corresponding enhanced absorption\nby the halo. Halo particles are late to pick up the angular momentum\nfrom the bar (due to their higher velocity dispersion), but the exchange is visible already before buckling. \nEnhanced coupling between the orbits is the reason for the shorter timescale for bar instability.\n\nThe secular evolution of bars, however, proceeds under quite different conditions. 
The bar cannot be 
considered as a linear perturbation, and the halo orbits have already been heavily perturbed, with some having 
been captured by the stellar bar. Hence, one expects the halo orbits near the bar to be tightly
correlated with it. The upper frames in Figure\,3 display the rate of angular momentum flow in the DM halo.
While the P00 halo appears to be completely dominated by the absorption of angular momentum at all major
resonances (ILR, CR, OLR), P30 shows quite different behavior and emits angular momentum at the ILR. The loss
of angular momentum in this region of the DM halo is even more intense in P90. 
Already at the buckling, we can observe a weak blue band of emission in the P30 halo, alongside a strong
absorption, instead of the pure absorption in P00. Note that the linear resonances, shown by continuous curves,
appear to be a bad approximation to the actual nonlinear resonances given by the color bands,
because they are calculated under the assumption of circular orbits. In the P90 halo, a strong {\it emission} 
is visible at the position of the disk ILR, which continues as a band 
alongside weakened absorption. So the absorption gradually weakens and moves out with increasing $\lambda$, 
while the emission strengthens and spreads. The disk emission and absorption by the major resonances 
also differ with changing $\lambda$ --- the disk gradually develops an intermittent behavior, especially
at the ILR in P60, where the blue and red bands alternate. Such cyclical behavior is not seen 
in the P00 disk, but becomes visible in the P30 disk and dominates the inner P60 disk. Hence, the spinning halo 
appears to emit and absorb angular momentum recurrently. The halo as a whole still absorbs
angular momentum from the disk in P30, while the net flux is zero for P90.

This result is anticipated. The ability to pump angular momentum into a selected
number of halo particles by means of a stellar bar is not without limits.
As the angular momentum of the
prograde population in the halo is increased, its ability to absorb angular momentum should saturate, and,
under certain conditions, even be reversed. After buckling, the bar weakens substantially, as seen in
Figure\,1. At later stages, when the bar is expected to resume its growth, the near (disk) halo orbits can possess
more angular momentum than the bar region, which has been losing it for some time. For this prograde 
population, an increase of $\lambda$ simply
raises the initial angular momentum, and saturation sets in earlier. {\it What emerges as a fundamental
property of a DM halo is the angular momentum and its distribution for the prograde population of 
orbits, irrespective of the value of $\lambda$.}

The evolution of galactic bars is inseparable from the cosmological evolution of their host galaxies. We find
that the secular growth of bars is significantly anticorrelated with the halo spin for $\lambda\gtorder
0.03$. This means that the majority of halos will adversely affect the bar strength, and, therefore, the
angular momentum transfer and the bar braking. 
Beyond the dynamical consequences, bars in spinning halos will be systematically smaller, which 
will make their detection at larger redshifts more difficult. This trend can be further strengthened 
because, during mergers, for a limited time period of $\sim 1-2$\,Gyr, $\lambda$ has been shown to
increase (e.g., Hetznecker \& Burkert 2006; Romano-Diaz et al. 2007; Shlosman 2013). Weaker bars
are known to possess star formation along the offset shocks, unlike strong bars, and are less 
efficient in moving the gas inward. Furthermore, the damping of the bar amplitude has implications for disk
morphology, stellar populations, and abundance gradients. 

To summarize, we have investigated the dynamical and secular evolution of stellar bars in spinning DM halos.
\nIn a representative set of numerical models, we find that\nthe angular momentum flow in the disk-halo system is substantially affected by the momentum distribution\nin the prograde population of DM particles, and is not limited\nto the momentum flux from the disk to halo. The associated bar pattern speed slowdown is minimized\nand ceases for larger $\\lambda$. This means that the bar does not experience gravitational torques\nand its amplitude remains steady, while the angular momentum, both internal circulation and tumbling, is \npreserved. This trend becomes\nvisible for $\\lambda\\gtorder 0.02$ and dominates the bar evolution for halos with $\\lambda\\gtorder 0.03$.\nBecause of a lognormal distribution of $\\lambda$ with a mean value of $0.035\\pm 0.005$, a substantial \nfraction of DM halos will be affected. We analyze the\nrate of angular momentum change by subdividing the disk-halo system into nested cylindrical shells, and \nshow that the DM {\\it halo can both absorb and emit angular momentum}, resulting in a reduction of \nthe net transfer of angular momentum from the disk to the halo.\nThe ability of the halo material to both emit and absorb angular momentum has important corollaries.\n\n\\acknowledgements \nWe are grateful to Sergey Rodionov for guidance with the iterative method to construct initial \nconditions, and to Ingo Berentzen, Jun-Hwan Choi, Emilio Romano-Diaz, Raphael Sadoun and Jorge \nVilla-Vargas for help with numerical\nissues. We thank Volker Springel for providing us with the original version of GADGET-3. This work \nhas been partially supported by grants from the NSF and the STScI (to I.S.). Simulations have been\nperformed on the University of Kentucky DLX Cluster.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}