\\section{Introduction}\n\nFollowing the canonical approach to a quantum theory of gravity, a\n(Dirac) observable is by definition a self-adjoint operator on the full \nHilbert space (not only on the Hilbert space induced from the full Hilbert\nspace by restricting to the space of solutions of the quantum constraints)\nwhich weakly commutes with the constraint operators. Equivalently, an\nobservable leaves the physical Hilbert space of solutions to the quantum\nconstraints invariant.\n\nThere are no Dirac observables known in either classical or quantum\ngravity, except for the asymptotically flat case where it is well-known that\nthe Poincar\\'e generators at spatial infinity form a closed Dirac observable\nalgebra.\n\nIn the present paper we address the question of how to quantize these\ngenerators. We work with the real version \\cite{Barbero} of the\noriginally complex connection formulation of general relativity \\cite{0}.\nThe associated real connection representation has been made a solid\nfoundation of the quantum theory in the series of papers \n\\cite{1,2,3,4,5,6,7}. This rigorous mathematical framework, which is based \non the earlier pioneering work on loop variables, for instance \n\\cite{Gambini,RS} for gauge theories and gravity respectively (the latter\nof which was designed for the complex variables and thus was lacking an \nappropriate inner product), equips us with the tools necessary to ask \nwhether the various operators constructed are densely defined, symmetric,\nself-adjoint, diffeomorphism invariant and so forth.\n\nBut even though this kinematical framework is available, it is still quite\nsurprising that something like an ADM energy operator can actually be\nconstructed. 
The reason for this is that in the representation under\nconsideration the ADM energy function turns out to be a rather\nnon-polynomial function of the canonical momenta and thus it is far from\nclear how to define it. It turns out that the same technique that \nenabled one to define the Wheeler-DeWitt operator \nfor 3+1 Lorentzian gravity \\cite{8,9,10}, the Wheeler-DeWitt operator for \n2+1 Euclidean gravity \\cite{10a}, length operators \\cite{11}, and matter \nHamiltonians when coupled to \ngravity \\cite{12} can be employed to define Poincar\\'e quantum operators.\\\\\n\\\\\nThe plan of the present paper is as follows :\n\nIn section 2 we review the necessary mathematical background from\n\\cite{1,2,3,4,5,6,7,15}.\n\nIn section 3 we regularize the ADM energy operator. There are at least\ntwo natural orderings, one in which the operator becomes densely defined \non the full physical Hilbert space as defined in \\cite{15} and one in which \nit is not. Nevertheless the latter\noperator should be the physically relevant one because, when restricting the \nHilbert space to a subspace which is suggested to us by the \nasymptotically flat structure available, this operator turns out to be \npositive-semidefinite by \ninspection and essentially self-adjoint on the physical Hilbert space.\nThis result reveals that several of the speculations spelled out in \n\\cite{Smolin}, which were based on several unproved assumptions, \nwere premature : after taking the quantum dynamics of the theory and the \nquantum asymptotic and regularity conditions on the Hilbert space \nappropriately into account, the quantum positivity of energy theorem is \n{\\em not} violated.\\\\ \nIt should be said from the outset, however, that \nthe ``quantum positivity of energy theorem\" that we provide rests, \nbesides on a quite particular regularization procedure of the ADM energy \noperator \nwhich exploits the fall-off behaviour of the fields at spatial infinity \nquite crucially, on 
one additional technical assumption (the {\\em tangle \nassumption}, see below) whose physical significance is unclear.\nAlthough it can also be motivated by the structure available at \nspatial infinity, it should be stressed that without this assumption \nthe positivity theorem would not hold.\n\nIn section 4 we naturally extend the unitary representation of the \ndiffeomorphism group to include asymptotic translations and rotations \nand compute the symmetry algebra between time translations and the \nspatial Euclidean group, that is, we verify the algebra of the little \ngroup of the Poincar\\'e group. We find no anomaly. As is well-known, the\nlittle group suffices \nto induce the unitary irreducible representations of the Poincar\\'e\ngroup. In this paper we do not, however, address the more difficult problem\nof how to define a boost quantum operator.\n\n\n\\section{Preliminaries}\n\nWe begin with a compact review of the relevant notions from \\cite{7,15}.\nThe interested reader is urged to consult these papers and references \ntherein.\\\\ \n\\\\ \nWe assume that spacetime is of the form $M=\\Rl\\times \\Sigma$ where \n$\\Sigma$ has an asymptotically flat topology, that is, there is \na compact set $B\\subset\\Sigma$ such that $\\Sigma-B$ is homeomorphic to\n$\\Rl^3$ with a compact ball cut out. We also assume that $\\partial\\Sigma$\nis homeomorphic with the 2-sphere. The case of more than one component of\n$\\partial\\Sigma$ (e.g. several asymptotic ends or horizons etc.) can be \ntreated in a similar way.\\\\\nDenote by $a,b,c,..$ spatial tensor indices and by $i,j,k,..$ $su(2)$ \nindices. 
The gravitational phase space is described by a canonical pair \n$(A_a^i,E^a_i\/\\kappa)$ where $A_a^i$ is an $SU(2)$ connection on the \nhypersurface\n$\\Sigma$, $E^a_i$ is an $\\mbox{ad}(SU(2))$ transforming vector density\nand $\\kappa$ is the gravitational coupling constant.\nThis means that the symplectic structure is given by\n$\\{A_a^i(x),E^b_j(y)\\}=\\kappa\\delta(x,y)\\delta^b_a\\delta^i_j$. The \nrelation with the co-triad $e_a^i$, the extrinsic curvature $K_{ab}$\nand the three-metric $q_{ab}=e^i_a e^i_b$ is \n$E^a_i=\\frac{1}{2}\\epsilon^{abc}\\epsilon_{ijk}e^j_b e^k_c$ and \n$A_a^i=\\Gamma_a^i+\\mbox{sgn}(\\det((e_c^j)))K_{ab} e^b_j$ where\n$\\epsilon^{abc}$ is the metric independent, completely skew tensor density\nof weight one, $\\Gamma_a^i$ is the spin-connection of $e_a^i$ and $e^a_i$\nis the inverse of the matrix $e_a^i$. \n\nBy $\\gamma$ we will denote in the sequel a closed, piecewise analytic graph\nembedded into a $d$-dimensional smooth manifold $\\Sigma$ (the case of interest\nin general relativity is $d=3$).\nThe set of its edges will be denoted $E(\\gamma)$ and the set of its \nvertices $V(\\gamma)$. By suitably subdividing edges into two halves we \ncan assume that all of them are outgoing from a vertex (the remaining \nendpoint of the so divided edges is not a vertex of the graph because \nit is a point of analyticity). Let $A$ be a $G$-connection for a compact \ngauge group\n$G$ (the case of interest in general relativity is $G=SU(2)$). We will denote\nby $h_e(A)$ the holonomy of $A$ along the edge $e$. Let $\\pi_j$ be a \n(once and for all fixed) representative of the equivalence class of the \n$j$-th irreducible representation of $G$ (in general relativity\n$j$ is just a spin quantum number) and label each edge $e$ of $\\gamma$ \nwith a label $j_e$. Let $v$ be an \n$n$-valent vertex of $\\gamma$ and let $e_1,..,e_n$ be the edges incident \nat $v$. 
Consider the decomposition of the tensor product \n$\\otimes_{k=1}^n\\pi_{j_{e_k}}$ into irreducibles and denote by \n$\\pi_{c_v}(1)$ the\nlinearly independent projectors onto the irreducible representations $c_v$ \nthat appear. \n\\begin{Definition} \\label{def0}\nAn extended spin-network state is defined by\n\\begin{equation} \\label{1}\nT_{\\gamma,\\vec{j},\\vec{c}}(A):=\n\\mbox{tr}(\\otimes_{v\\in V(\\gamma)}\n[\\pi_{c_v}(1)\\cdot\\otimes_{e\\in E(\\gamma),v\\in e}\\pi_{j_e}(h_e(A))])\n\\end{equation}\nwhere $\\vec{j}=\\{j_e\\}_{e\\in E(\\gamma)},\\;\\vec{c}=\\{c_v\\}_{v\\in \nV(\\gamma)}$. In what follows we will use a compound label\n$I\\equiv(\\gamma(I),\\vec{j}(I),\\vec{c}(I))$. An ordinary spin-network\nstate is an extended one with all vertex projectors corresponding\nto singlets.\n\\end{Definition}\nThus, a spin-network state is a particular function of \nsmooth connections restricted to a graph with definite transformation \nproperties under gauge transformations at the vertices. Their importance is \nthat they form\nan orthonormal basis for a Hilbert space ${\\cal H}\\equiv{\\cal H}_{aux}$, \ncalled the auxiliary Hilbert space. Orthonormality means that\n\\begin{equation} \\label{2}\n<T_I,T_{I'}>\n\\equiv\n<T_I,T_{I'}>_{aux}\n=\\delta_{\\gamma\\gamma'}\\delta_{\\vec{j},\\vec{j}'}\n\\delta_{\\vec{c},\\vec{c}'}\\;.\n\\end{equation}\nAnother way to describe $\\cal H$ is by displaying it as a space of\nsquare integrable functions $L_2({\\overline {{\\cal A}\/{\\cal G}}},d\\mu_0)$. Here ${\\overline {{\\cal A}\/{\\cal G}}}$ is a space \nof distributional connections modulo gauge transformations, typically \nnon-smooth and $\\mu_0$ is a \nrigorously defined, $\\sigma$-additive, diffeomorphism invariant probability\nmeasure on ${\\overline {{\\cal A}\/{\\cal G}}}$. The space ${\\overline {{\\cal A}\/{\\cal G}}}$ is the maximal extension of \nthe space ${{\\cal A}\/{\\cal G}}$ of smooth connections such that (the Gel'fand \ntransform of) spin-network functions \nare still continuous. 
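For orientation, the simplest instance of (\ref{1}) is a single piecewise analytic loop $\alpha$ carrying spin $j$, with the trivial vertex contraction; the state then reduces to the familiar Wilson loop,

```latex
T_{\alpha,j}(A)=\mbox{tr}\,\pi_j\left(h_\alpha(A)\right),
```

which is gauge invariant because a gauge transformation conjugates $h_\alpha(A)$ by a group element at the base point and the trace is invariant under conjugation.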
The inner product can be extended, with the same\northonormality relations, to any smooth (rather than analytic) graph with a \nfinite number of edges and to non-gauge-invariant functions. It is only \nthe latter description of $\\cal H$ which enables one to verify that \nthe inner product $<.,.>$ is the unique one that incorporates the \ncorrect reality conditions that $A,E$ are in fact real valued. The\ninner product (\\ref{2}) was postulated earlier (see remarks in \\cite{RDP})\nfor the {\\em complex} connection formulation. But\nit was not until the construction of the Ashtekar-Lewandowski measure $\\mu_0$\nthat one could show that this inner product is actually the correct one, \nand for the real connection formulation only.\n\nWe will denote by $\\Phi$ the finite linear combinations of spin-network \nfunctions and call it the space of cylindrical functions. A function \n$f_\\gamma$ is said to be cylindrical with respect to a graph \n$\\gamma$ whenever it is a linear combination of spin-network functions on \nthat graph such that $\\pi_{j_e}$ is not the trivial representation for any\n$e\\in E(\\gamma)$. \nThe space $\\Phi$ can be equipped with one of the standard nuclear topologies\ninduced from $G^n$ because on each graph $\\gamma$ every cylindrical \nfunction $f_\\gamma$ becomes a function $f_n$ on $G^n$ where $n$ is the \nnumber of \nedges $e$ of $\\gamma$ through the simple relation $f_\\gamma(A)=\nf_n(h_{e_1}(A),..,h_{e_n}(A))$. This \nturns it into a topological vector space. By $\\Phi'$ we mean the topological\ndual of $\\Phi$, that is, the bounded linear functionals on $\\Phi$. 
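The orthonormality (\ref{2}) can be made concrete in a $U(1)$ toy model on a single edge, where the Ashtekar-Lewandowski measure reduces to the Haar measure on $U(1)$ and spin-network functions reduce to characters $\chi_k(\theta)=e^{ik\theta}$. A minimal numerical sketch (an illustration only, not the $SU(2)$ construction itself):

```python
import numpy as np

def haar_inner_u1(k, l, n=512):
    """Inner product <chi_k, chi_l> with respect to Haar measure on U(1),
    computed by a uniform Riemann sum, which is exact for |k - l| < n."""
    theta = 2 * np.pi * np.arange(n) / n
    return np.mean(np.exp(1j * k * theta) * np.conj(np.exp(1j * l * theta)))

# Characters of inequivalent irreducibles are orthonormal: <chi_k, chi_l> = delta_{kl}.
print(abs(haar_inner_u1(3, 3)))  # → 1.0
print(abs(haar_inner_u1(3, 5)))  # ≈ 0
```

The same mechanism, with Haar measure on one copy of $SU(2)$ per edge and the Peter-Weyl theorem in place of Fourier analysis, underlies the orthonormality of the spin-network basis.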
General\ntheorems on nuclear spaces show that the inclusion $\\Phi\\subset{\\cal \nH}\\subset\\Phi'$ (Gel'fand triple) holds.\n\nSo far we have dealt with solutions to the Gauss constraint only, that is,\nwe have explicitly solved it by dealing with gauge invariant functions only.\nWe now turn to the solutions to the diffeomorphism constraint (we follow \n\\cite{15}).\\\\ \nRoughly speaking one constructs a certain subspace $\\Phi_{Diff}$ of \n$\\Phi'$ by ``averaging spin-network states\nover the diffeomorphism group\", following the subsequent recipe :\\\\\nTake a spin-network state $T_I$ and consider its orbit $\\{T_I\\}$ under the \ndiffeomorphism group. Here we mean orbit under asymptotically identity \ndiffeomorphisms only ! Then construct the distribution\n\\begin{equation} \\label{3}\n[T_I]:=\\sum_{T\\in\\{T_I\\}} T\n\\end{equation}\nwhich can be explicitly shown to be an \nelement of $\\Phi'$. Any other vector is averaged by \nfirst decomposing it into spin-network states and then averaging those \nspin-network states separately. Certain technical difficulties having to \ndo with superselection rules and graph symmetries \\cite{7} were removed in \n\\cite{15}.\n\nAn inner product on the space of the so constructed states is given by\\\\\n\\begin{equation} \\label{4}\n<[f],[g]>_{Diff}:=[f](g)\n\\end{equation}\nwhere the brackets stand for the averaging \nprocess and the right hand side means evaluation of a distribution on a \ntest function. The completion of $\\Phi_{Diff}$ with respect to \n$<.,.>_{Diff}$ is denoted ${\\cal H}_{Diff}$.\n\nFinally, the Hamiltonian constraint is solved as follows \\cite{10} :\nOne can explicitly write down an algorithm of how to construct the most\ngeneral solution. 
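As a quick sanity check on the averaging recipe (standard, but worth recording): writing $\hat{U}(\varphi)$ for the unitary action $f_\gamma\mapsto f_{\varphi(\gamma)}$ of an asymptotically identity diffeomorphism $\varphi$, the average (\ref{3}) is diffeomorphism invariant as an element of $\Phi'$, since $\hat{U}(\varphi)^{-1}$ merely permutes the orbit $\{T_I\}$; schematically,

```latex
[T_I](\hat{U}(\varphi)f)
=\sum_{T\in\{T_I\}}<T,\hat{U}(\varphi)f>
=\sum_{T\in\{T_I\}}<\hat{U}(\varphi)^{-1}T,f>
=[T_I](f).
```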
It turns out that \none can construct ``basic\" solutions $s_\\mu\\in\\Phi'$ which are\nmutually orthonormal with respect to $<.,.>_{Diff}$ (in a generalized sense)\nand diffeomorphism invariant.\nThe span of these solutions is equipped \nwith the natural orthonormal basis $s_\\mu$ (in the generalized sense).\nOne now defines a ``projector\" \n\\begin{equation} \\label{5}\n\\hat{\\eta}f:=[[f]]:=\\sum_\\mu s_\\mu <s_\\mu,[f]>_{Diff}\n\\end{equation}\nfor each $f\\in\\Phi$ and so obtains a \nsubspace $\\Phi_{Ham}\\subset\\Phi'$. \nThe physical inner product \\cite{15} is defined by \n\\begin{equation} \\label{6}\n<[[f]],[[g]]>_{phys}:=[[f]]([g])\\;.\n\\end{equation}\nFinally, the physical Hilbert space is just the completion of \n$\\Phi_{Ham}$ with respect to $<.,.>_{phys}$.\n\n\n\\section{Regularization of the ADM Hamiltonian}\n\nThere are many ways to write the ADM-Hamiltonian \nwhich are all classically weakly identical. We are going to choose a form \nwhich is a pure surface integral and which depends only on $E^a_i$ \nbecause in this case the associated operator will be almost diagonal in a \nspin-network basis so that we can claim that spin-network states really\ndo provide a non-linear Fock representation for quantum general relativity\nas announced in \\cite{8,9}.\n\nThe appropriate form of the classical symmetry generators was derived in \n\\cite{13,13a}.\nAlthough that paper was written for the {\\em complex} Ashtekar variables,\nall results can be taken over by carefully removing factors of $i$ at \nvarious places. 
We find for the surface part of the Hamiltonian (expression \n(4.31) in \\cite{13}, we \nuse that $\\tiN=N\/\\sqrt{\\det(q)},\\;D_a\\tiN=(D_a N)\/\\sqrt{\\det(q)}$ where\n$N$ is the scalar lapse function)\n\\begin{equation} \\label{20}\nE(N)=-\\frac{2}{\\kappa}\\int_{\\partial\\Sigma} dS_a\\frac{N}{\\sqrt{\\det(q)}}\nE^a_i\\partial_b E^b_i\\;.\n\\end{equation}\nIt is easy, instructive and for the sign of the ADM energy crucial to see \nthat (\\ref{20}) really equals \nthe classical expression $+\\frac{1}{\\kappa}\\int_{\\partial\\Sigma}dS_a \n(q_{ab,b}-q_{bb,a})$ due to ADM :\nUsing that $E^a_i=\\frac{1}{2}\\epsilon^{abc}\\epsilon_{ijk} e_a^j e_b^k$\nwe have the chain of identities\n\\begin{eqnarray} \\label{7}\n&&-\\frac{2}{\\sqrt{\\det(q)}}E^a_i\\partial_b E^b_i\n=-\\mbox{sgn}(\\det(e))e^a_i\\epsilon^{bcd}\\epsilon_{ijk}[e_c^j e_d^k]_{,b}\n\\nonumber\\\\\n&=&-2\\mbox{sgn}(\\det(e))e^a_i\\epsilon^{bcd}\\epsilon_{ijk}e_c^j e_{d,b}^k\n\\nonumber\\\\\n&=& -2\\mbox{sgn}(\\det(e))q^{af}\\epsilon^{bcd}\\epsilon_{ijk}e_f^i e_c^j \ne_{d,b}^k\n=-2q^{af}\\epsilon^{bcd}\\sqrt{\\det(q)} \\epsilon_{fce}e^e_k e_{d,b}^k\n\\nonumber\\\\\n&=&-4q^{ac}\\delta^b_{[c}\\delta^d_{e]}\\sqrt{\\det(q)} e^e_i e_{d,b}^i\n\\nonumber\\\\\n&=& 4\\sqrt{\\det(q)} q^{ac} q^{ed} e_d^i e_{[c,e]}^i\n=2\\sqrt{\\det(q)} q^{ac} q^{bd} e_d^i[e_{c,b}^i-e_{b,c}^i]\n\\nonumber\\\\\n&=& \\sqrt{\\det(q)} q^{ac} q^{bd} \n[2e_{(d}^i e_{c),b}^i+2e_{[d}^i e_{c],b}^i\n-2e_{(d}^i e_{b),c}^i-2e_{[d}^i e_{b],c}^i]\n\\nonumber\\\\\n&=& \\sqrt{\\det(q)} q^{ac} q^{bd} \n[(q_{cd,b}-q_{bd,c})\n+2e_{[d}^i e_{c],b}^i]\\;\n\\end{eqnarray}\nNow we expand $e_a^i(x)=\\delta_a^i+\\frac{f_a^i(x\/r)}{r}+o(1\/r^2)$ where\n$r^2=\\delta_{ab}x^a x^b$ defines the asymptotic Cartesian frame. The function\n$f_a^i(x\/r)$ only depends on the angular coordinates of the asymptotic \nsphere and is related to the analogous expansion \n$q_{ab}(x)=\\delta_{ab}+\\frac{f_{ab}(x\/r)}{r}+o(1\/r^2)$ by \n$f_{ab}\\delta^b_i=f_a^i$. 
Consider now the \nremainder in the last line of (\\ref{7}). Its $o(1\/r^2)$ part vanishes \nbecause $f_{[ab]}=0$ and this concludes the proof.\n\nIn the sequel we focus on the energy functional $E_{ADM}=E(N=1)$. We will \nquantize it in two different ways corresponding to two quite different \nfactor orderings. Each of the orderings has certain advantages and \ncertain disadvantages which we will point out. \n\n\n\\subsection{Ordering I : No state space restriction}\n\nIn this subsection we will derive a form of the operator which is densely \ndefined on the whole Hilbert space $\\cal H$ (and extends to the spaces \n${\\cal H}_{Diff},\\;{\\cal H}_{phys}$ defined above) without imposing any further\nrestriction that corresponds to asymptotic flatness.\n\nUsing again that\n$E^a_i=\\frac{1}{2}\\epsilon_{ijk}\\epsilon^{abc}e^j_b e^k_c$ we can write \nit as\n\\begin{equation} \\label{21}\nE_{ADM}=\\lim_{S\\to \\partial\\Sigma} E_{ADM}(S) \n\\mbox{ where } E_{ADM}(S)=-\\frac{2}{\\kappa}\\int_S\n\\frac{1}{\\sqrt{\\det(q)}} \\epsilon^{ijk}e^j\\wedge e^k \\partial_b E^b_i\n\\end{equation}\nand $S$ is a closed 2-surface which is topologically a sphere.\nThe idea is to point-split expression (\\ref{21}) and to use that\n$$\n[\\mbox{sgn}(\\det(e))e_a^i](x)=\\frac{1}{2\\kappa}\\{A_a^i(x),V(x,\\epsilon)\\}\n$$\nwhere $V(x,\\epsilon)=\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\\sqrt{\\det(q)}(y)$\nand $\\chi_\\epsilon$ is the (smoothed out) characteristic function of a \nbox of coordinate volume $\\epsilon^3$. 
Since \n$$\n\\lim_{\\epsilon\\to 0}\\frac{\\chi_\\epsilon(x,y)}{\\epsilon^3}=\\delta(x,y)\n\\mbox{ so that } \\lim_{\\epsilon\\to 0}\\frac{V(x,\\epsilon)}{\\epsilon^3}\n=\\sqrt{\\det(q)}(x)\n$$\nwe have\n\\begin{eqnarray} \\label{22}\n&&E_{ADM}(S)\\nonumber\\\\\n&=&\\lim_{\\epsilon\\to 0}\n-\\frac{2}{\\kappa}\\int_S \n\\frac{1}{\\epsilon^3\\sqrt{\\det(q)}(x)}\n\\epsilon^{ijk}e^j(x)\\wedge e^k(x)\\int_\\Sigma d^3y\n\\chi_\\epsilon(x,y)(\\partial_b E^b_i)(y)\n\\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}-\\frac{2}{\\kappa}\\int_S \n\\frac{\\epsilon^{ijk}}{V(x,\\epsilon)}\ne^j(x)\\wedge e^k(x)\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)(\\partial_b E^b_i)(y)\n\\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n-\\frac{1}{2\\kappa^3}\\int_S \n\\frac{\\epsilon^{ijk}}{V(x,\\epsilon)}\n\\{A^j(x),V(x,\\epsilon)\\}\\wedge \\{A^k(x),V(x,\\epsilon)\\}\n\\int_\\Sigma d^3y \\chi_\\epsilon(x,y) (\\partial_b E^b_i)(y) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n-\\frac{2}{\\kappa^3}\\int_S \\epsilon^{ijk} \n\\{A^j(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \\{A^k(x),\\sqrt{V(x,\\epsilon)}\\}\n\\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\n(\\partial_b E^b_i)(y) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0}\n\\frac{4}{\\kappa^3}\\int_S\n\\mbox{tr}(\\{A(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \n\\{A(x),\\sqrt{V(x,\\epsilon)}\\} \\int_\\Sigma d^3y \\chi_\\epsilon(x,y)\n(\\partial_b E^b)(y)) \\nonumber\\\\\n&=& -\\lim_{\\epsilon\\to 0}\n\\frac{4}{\\kappa^3}\\int_S \n\\mbox{tr}(\\{A(x),\\sqrt{V(x,\\epsilon)}\\}\\wedge \n\\{A(x),\\sqrt{V(x,\\epsilon)}\\} \\int_\\Sigma d^3y \n[\\partial_{y^b}\\chi_\\epsilon(x,y)] E^b(y)) \\nonumber\\\\\n&=& \\lim_{\\epsilon\\to 0} E^\\epsilon_{ADM}(S)\n\\end{eqnarray}\nwhere in the next-to-last step we have taken a trace with \nrespect to generators $\\tau_i$ of $su(2)$ obeying \n$[\\tau_i,\\tau_j]=\\epsilon_{ijk}\\tau_k$ and in \nthe last step we have performed an integration by parts\n(the boundary term at $\\partial\\Sigma$ does not contribute for finite $S$\nand 
$\\epsilon$ sufficiently small).\nThus, we absorbed the $1\/\\sqrt{\\det(q)}$ into a square-root within a \nPoisson-bracket and simultaneously the singular $1\/\\epsilon^3$ into a \nvolume functional. Classically we could have dropped the $1\/\\sqrt{\\det(q)}$\n(although the integrand would then no longer be a density of weight one \nand would strictly speaking not be the boundary integral of a variation of the \nHamiltonian constraint) due to the classical boundary conditions which \ntell us that $\\det(q)$ tends to $1$. \\\\\nWe now quantize $E^\\epsilon_{ADM}(S)$. This consists of two parts : in the \nfirst we focus on the volume integral in (\\ref{22}) and replace $E^a_i$ by\n$\\hat{E}^a_i=-i\\hbar\\kappa\\delta\/\\delta A_a^i$. In the second step we \ntriangulate $S$ exactly as the hypersurface of 2+1 gravity in \\cite{10a}, \nreplace the volume \nfunctional by the volume operator and Poisson brackets by commutators times\n$1\/(i\\hbar)$. \n\nSo let $f_\\gamma$ be a function cylindrical with \nrespect to a graph $\\gamma$. 
Since we are only interested in the \nlimit $S\\to\\partial\\Sigma$ we may assume that \\\\\n1) $\\gamma$ lies entirely within the closed ball whose boundary is $S$ but\\\\\n2) $\\gamma$ may intersect $S$ at an endpoint of one of its edges and may \neven have edges that lie entirely inside $S$.\\\\\nFurthermore we can label the edges of $\\gamma$ in \nsuch a way that an edge either intersects $S$ transversally (with an \norientation outgoing from the intersection point with $S$) or lies entirely \nwithin $S$.\\\\ \nComing to the first step we have for the $y$ integral involved in\n$E^\\epsilon_{ADM}(S)$ :\n\\begin{eqnarray} \\label{23}\n&& \\int_\\Sigma d^3y [\\partial_a\\chi_\\epsilon(x,y)]\n\\hat{E}^a_i(y) f_\\gamma \\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_\\Sigma d^3y \n[\\partial_a\\chi_\\epsilon(x,y)]\n\\int_0^1 dt \\dot{e}^a(t) \\delta(y,e(t))X^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_0^1 dt \\dot{e}^a(t)\n[\\partial_{y^a}\\chi_\\epsilon(x,y)_{y=e(t)}]\nX^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)}\\int_0^1 dt \n[\\frac{d}{dt}\\chi_\\epsilon(x,e(t))]\nX^i_e(t) f_\\gamma \n\\nonumber\\\\\n&=& -i\\hbar\\kappa \\sum_{e\\in E(\\gamma)} \\lim_{n\\to\\infty}\n\\sum_{k=1}^n [\\chi_\\epsilon(x,e(t_k))-\\chi_\\epsilon(x,e(t_{k-1}))]\nX^i_e(t_{k-1}) f_\\gamma \n\\end{eqnarray}\nwhere $E(\\gamma)$ is the set of edges of $\\gamma$, $X^i_e(t):=\n[h_e(0,t)\\tau_i h_e(t,1)]_{AB} \\partial\/\\partial[h_e(0,1)]_{AB}$\nand $0=t_0<t_1<\\ldots<t_n=1$ is a partition of $[0,1]$.\nb) there exists $\\epsilon>0$ such that $-t^2 \nN^2(x)+\\Psi[\\hat{L}(c(\\vec{N},x,t))^2\\xi]<0$ for each $00$ such that the four vector \n$\\hat{P}^\\mu(x,t):=(\\hat{H}_{matter}'(N_{x,t})\n+\\hat{V}_{matter}'(\\vec{N}_{x,t}),0)$ is \neither zero or \nfuture directed and timelike for every $x\\in\\Sigma$ and $00$ for \neach $x,00$ if $v_n\\in R$. Thus we have \n$\\hat{V}(B_m)f_n=\\lambda\\ell_p^3\\delta_{m,n}f_n$. 
Consider the \ninfinite product state $\\xi:=\\prod_{n=1}^\\infty f_n$ which is a regular \n(non-cylindrical) spin-network state on the infinite graph \n$\\gamma=\\cup_n \\gamma_n$ and which is in fact\nnormalized, $||\\xi||=1$ thanks to the disjointness of the graphs $\\gamma_n$\nbecause of which $||\\xi||=\\prod_n||f_n||$ due to the properties of the \nAshtekar-Lewandowski measure. We now choose $k:=\\lambda$\nand find that for any macroscopic $R$, that is, any $R$ that contains \nmany of the boxes $B_n$, it holds that \n$\\hat{V}(R)\\xi=V_0(R)[1+o(\\ell_p^3\/V_0(R))]\\xi$. Now, since no state\nwhich is cylindrical with respect to any of the \ngraphs $\\gamma_n$ can be in the image of the Hamiltonian constraint\n\\cite{8,9,10} it follows from its definition \\cite{15} that the $\\hat{\\eta}$\noperator reduces to group averaging with respect to the diffeomorphism group\nbecause of which the group averaged diffeomorphism invariant state \n$\\Psi=[\\xi]=\\hat{\\eta}\\xi$ is normalized as well with respect to the \nphysical inner product \\cite{15} $||\\Psi||_{phys}^2=\\Psi(\\xi)=||\\xi||^2=1$.\nThus indeed $\\Psi([\\hat{V}(R)-V_0(R)]\\xi)=o(\\ell_p^3\/V_0(R))$ is satisfied.\nIt is clear that the construction can be repeated for the surface operator\nas well because most of the intersections of the macroscopic surface \n$S$ with the $\\gamma_n$ will not be in vertices of the $\\gamma_n$ so \nthat $\\hat{V}(R),\\hat{A}(S)$ can be simultaneously diagonalized up to \nerrors of order of $\\ell_p^2\/A_0(S)$. Thus, almost every $f_n$ can be \nchosen as a simultaneous eigenvector of $\\hat{V}(R),\\hat{A}(S)$. Finally,\nany macroscopic, for simplicity non-self-intersecting (any loop is a \nproduct of these), loop $\\alpha$ on our particular $\\gamma$ is of the \nproduct form $\\alpha=\\circ^n \n\\alpha_n^{k_n},\\;\\alpha_n\\subset\\gamma_n,\\;k_n\\in\\{0,1\\}$ where $\\;k_n=0$ \nexcept for finitely many. 
The $SU(2)$ Mandelstam algebra is too \ncomplicated to exhibit an explicit solution for $SU(2)$ so let us \nargue with a $U(1)$ substitute that the condition stated in definition\n(\\ref{def1}) is reasonable. For $U(1)$ we have \n$h_\\alpha=\\prod_{k_n=1}h_{\\alpha_n}$. Now, if we \nchoose for simplicity $\\alpha_n=\\gamma_n$ then \n$f_n=\\sum_{k=-N}^N a_k h_{\\alpha_n}^k$ where $\\chi_k(g)=g^k$ is the \ncharacter of the irreducible representation of $U(1)$ with weight $k$.\nSince $\\chi_k\\chi_l=\\chi_{k+l}$ the condition stated in the definition \namounts to asking that (for $U(1)$) $1=\\prod_{k_n=1}\\sum_k \n|a_k|^2=\\prod_{k_n=1}\\sum_{k=-N+1}^N\\bar{a}_k a_{k+1}$ up to some \ncorrections. Indeed, if we could choose all $a_k$ to be equal \n($=1\/\\sqrt{2N+1}$) then the error would be $1-\\prod_{k_n=1} [1-1\/(2N+1)]$ \nwhich is small provided that $\\sum_{k_n=1}1=o(L(\\alpha)\/\\ell_p)<0$ such that $\\xi$ is,\nfor each $00$\nthe standard vector is associated with the spin of the particle in the rest\nframe and the covering group of the stabilizer group is given by $SU(2)$.\nIn the massless case the standard vector is associated with the helicity\nof the particle (spin in momentum direction) and \nthe covering group of the stabilizer group is given by $U(1)$, \nphysically important representations being two-valued. Thus, the \nrotations at spatial infinity determine the unitary irreducible \nrepresentation of the particle state in question.}. \nThis is enough to construct particle states since the \nirreducible unitary representations of the little group induce a unique\nunitary irreducible representation of the full Poincar\\'e group. So far \nwe have not constructed an operator corresponding to a boost generator, which\nis more difficult to obtain than the ADM energy operator.\n\nFirst of all we must clarify on which space to represent the Poincar\\'e \ngroup, respectively its generators. 
To that end it is helpful to remember\nhow the classical Poincar\\'e generators are realized as a subalgebra of \nthe Poisson algebra \\cite{13,13a}.\\\\ \nLet $H(N),V(\\vec{N})$ be the Hamiltonian and diffeomorphism constraint \nfunctionals, respectively. Both functionals are integrals over $\\Sigma$ of \nlocal \ndensities and both converge and are functionally differentiable only\nif the lapse and shift functions $N,\\vec{N}$ vanish at $\\partial\\Sigma$.\nIn order to be able to describe the Poincar\\'e group corresponding to\nthe asymptotically constant or even diverging functions ($x^a$ is a \ncartesian frame at spatial infinity)\n$N=a+\\chi_a x^a,N^a=a^a+\\epsilon^{abc}\\phi_b x^c$ where $(a,a^a)$ is \na four translation, $\\phi^a$ are rotation angles and $\\chi^a$ are boost\nparameters, one proceeds as follows : let $S$ be a bounded two-surface\nthat is topologically a sphere and let $B(S)$ be the (intersection of \n$\\Sigma$ with the) closed ball such that $\\partial B(S)=S$. For \neach $S$ one defines \n$E(N,S):=H(N,S)+E_{ADM}(N,S)+B(N,S),\\;P(\\vec{N},S):=V(\\vec{N},S)+P_{ADM}(\\vec{N},S)$\nwhere the parameter $S$ means that volume integrals are restricted to\n$B(S)$ only (a classical regularization of the divergent integrals) and the \n``counter-terms\" $E_{ADM}(N,S),B(N,S),P_{ADM}(\\vec{N},S)$ are the surface \nintegrals defined in \\cite{13} and correspond to ADM energy, boost and \nmomentum. One can show that $\\lim_{S\\to\\partial\\Sigma} E(N,S), \n\\lim_{S\\to\\partial\\Sigma} P(\\vec{N},S)$ exist. 
Moreover, for each finite $S$,\n$E(N,S),P(\\vec{N},S)$ are functionally differentiable so that it is \nmeaningful to compute the Poisson brackets\n\\begin{eqnarray} \\label{33}\n\\{E(M,S),E(N,S)\\}=P(q^{ab}(MN_{,b}-M_{,b}N),S)\\nonumber\\\\\n\\{E(M,S),P(\\vec{N},S)\\}=E({\\cal L}_{\\vec{N}}M,S)\\nonumber\\\\\n\\{P(\\vec{M},S),P(\\vec{N},S)\\}=P({\\cal L}_{\\vec{M}}\\vec{N},S)\\;.\n\\end{eqnarray}\nThe crucial point is that one computes the Poisson brackets a) at finite \n$S$ and b) on the full phase space and then takes the limit \n$S\\to\\partial\\Sigma$ or restricts to the constraint surface of the phase \nspace (where $H(N,S)=V(\\vec{N},S)=0$). Notice that the numerical value \nof, say, $E(N,S)$ equals $H(N,S)$ for a gauge transformation for which $N\\to \n0$ as $S\\to\\partial\\Sigma$. On the other hand, on the constraint surface \nfor a symmetry for which $N\\not\\to 0$ as $S\\to \\partial\\Sigma$ it equals a \ntime translation or a boost respectively. A similar remark holds for \n$P(\\vec{N},S)$. One therefore interprets (\\ref{33}) as follows :\nif $M,N$ are both pure gauge then the constraint algebra closes. If $M$\nis a symmetry and $N$ pure gauge then the energy (or boost generator) is gauge \ninvariant.\nIf $M,N$ are both symmetries then time translations commute with each other,\ntime translations and boosts give a spatial translation and a boost with a \nboost gives a rotation, in other words the symmetry algebra closes.\n\nIn quantum theory we will therefore proceed as follows : \\\\\nRecall \\cite{8,9,10,15} that the Hamiltonian constraint $\\hat{H}(N)$\n(for asymptotically vanishing $N$) is only well-defined on the subspace\nof $\\Phi'$ corresponding to distributions on $\\Phi$\nwhich are invariant under diffeomorphisms that approach identity at \n$\\partial\\Sigma$. Thus we can expect the symmetry algebra to hold only on\nsuch distributions as well. 
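Before turning to the quantum checks, it is instructive to record the simplest classical instance of (\ref{33}): for constant lapses $M=a_1$, $N=a_2$ (pure time translations) and a rotation or translation shift $\vec{N}$,

```latex
\{E(a_1,S),E(a_2,S)\}=P(q^{ab}(a_1\partial_b a_2-a_2\partial_b a_1),S)=0,
\qquad
\{E(a_1,S),P(\vec{N},S)\}=E({\cal L}_{\vec{N}}a_1,S)=E(N^a\partial_a a_1,S)=0,
```

so two asymptotic time translations commute and the energy is invariant under asymptotic rotations and spatial translations, exactly as required of the little group of the Poincar\'e algebra.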
In fact, we will just choose $\\Psi$ to be a \nsolution to all constraints.\\\\ \nNext, in view of the fact that even the classical symmetry algebra \nonly holds provided one first computes Poisson brackets at finite $S$ and \nthen takes the limit, we will check the quantum algebra first \nby evaluating $\\Psi$ on $\\hat{E}(N_S) f_S$ \nfor functions $f_S\\in\\Phi$ cylindrical with \nrespect to a graph which lies in the interior of $B(S)$ (it may \nintersect $S$ in such a way that the volume operator does not vanish\nat the intersection point for any of the eigenvectors into which\n$f_S$ may be decomposed) and lapse functions $N_S$ which grow at infinity like\nsymmetries but which are supported in $B(S)\\cup S$ {\\em including $S$}, and \nthen to take the limit $S\\to\\partial\\Sigma$ \n(the support fills all of $\\Sigma$ as $S\\to \\partial\\Sigma$ in this \nprocess). \\\\ \n\nWe come to the definition of $\\hat{E}(N),\\hat{P}(\\vec{N})$. \nFirst we treat the spatial Euclidean group.\\\\\nThe unitary representation of the diffeomorphism group defined by\n$\\hat{U}(\\varphi) f_\\gamma=f_{\\varphi(\\gamma)}$, which for the purpose of \nsolving the diffeomorphism constraint was so far only defined for \ndiffeomorphisms that asymptotically approach the identity, can easily \nbe extended to three-diffeomorphisms which correspond to asymptotic spatial\ntranslations or rotations. Instead of defining the generator \n$\\hat{P}(\\vec{N})$ though \n(which does not exist on $\\cal H$ \\cite{7}) we content ourselves with the \nexponentiated version $\\hat{U}(\\varphi(\\vec{N}))$ where $\\varphi(\\vec{N})$\nis the diffeomorphism generated by the six parameter shift vector field\n$N^a=a^a+\\epsilon_{abc} \\phi^b x^c$ for some cartesian frame $x^a$ possibly \ncorrected by an asymptotically vanishing vector field corresponding to a \ngauge transformation. 
It is trivial to check that \n\\begin{equation} \\label{34}\n\\hat{U}(\\varphi(\\vec{N}))\n\\hat{U}(\\varphi(\\vec{N}'))\n\\hat{U}(\\varphi(\\vec{N}))^{-1}\n\\hat{U}(\\varphi(\\vec{N}'))^{-1}\n=\\hat{U}(\\varphi({\\cal L}_{\\vec{N}}\\vec{N}'))\n\\end{equation}\nwhere $\\cal L$ denotes the Lie derivative, \nso that there are no anomalies coming from the spatial Euclidean group.\nThis expression was derived by applying it to any function $f_S$ cylindrical \nwith respect to a graph with support in $B(S)$. \n\nWe now turn to the time translations. As already mentioned we will not \nconsider boosts in this paper, so that $\\chi_a\\equiv 0$ in \nthe four parameter family of lapse functions $N=a+\\chi_a x^a$\n(modulo a correction which vanishes at $\\partial\\Sigma$).\nDefine the operator on ${\\cal H}$\n\\begin{equation} \\label{35}\n\\hat{E}(N):=\\hat{H}(N)\n+\\hat{E}_{ADM}(N)\n\\end{equation}\nwhere $\\hat{H}(N)$ is the Lorentzian Hamiltonian constraint.\nNotice that $\\hat{E}(N)$, just as the Hamiltonian constraint in \n\\cite{8,9,10,15}, carries a certain prescription dependence \nwhich is removed by evaluating its dual on $\\Phi_{Diff}$. We will not\nrepeat these details here and refrain from indicating this prescription\ndependence in (\\ref{35}); however, the prescription dependence has \nconsequences for the commutator algebra that we will discuss below in \ngreat detail.\n\nLet us verify the commutators of the time \ntranslations among themselves and of the time translations with spatial \ntranslations and rotations. 
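As an illustration of (\ref{34}), let $\vec{N}$ be a pure rotation, $N^a=\epsilon_{abc}\phi^b x^c$, and $\vec{N}'$ a pure translation, $N'^a=a'^a$. Using $\partial_b N^a=\epsilon_{acb}\phi^c$ one finds

```latex
({\cal L}_{\vec{N}}\vec{N}')^a
  = N^b \partial_b N'^a - N'^b \partial_b N^a
  = 0 - a'^b \epsilon_{acb}\phi^c
  = -\epsilon_{abc}\phi^b a'^c\;,
```

which is again a constant vector field, i.e. an asymptotic translation (by $-\vec{\phi}\times\vec{a}'$); the commutator of a rotation with a translation is thus a translation, consistent with the structure of the spatial Euclidean group.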
We have \n\\begin{eqnarray} \\label{36}\n&&\\Psi([\\hat{E}(M),\\hat{E}(N)]f_\\gamma)=\n\\Psi([\\hat{H}(M),\\hat{H}(N)]f_\\gamma)\n+\\Psi([\\hat{E}_{ADM}(M),\\hat{E}_{ADM}(N)]f_\\gamma)\n\\nonumber\\\\\n&+&\\Psi(\\{[\\hat{E}_{ADM}(M),\\hat{H}(N)]+[\\hat{H}(M),\\hat{E}_{ADM}(N)]\\}\nf_\\gamma)\\;.\n\\end{eqnarray}\nThe first term vanishes for the same reason as in \\cite{8,9,10,15}, \nalthough one needs one additional argument: the \nHamiltonian constraint does not act at vertices that it creates. Therefore,\nit can be written as a double sum over vertices $v,v'$ of $\\gamma$ alone and \neach of these terms is of the form \n$$\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{H}_{v',\\gamma(v)}\\hat{H}_{v,\\gamma}-\n\\hat{H}_{v,\\gamma(v')}\\hat{H}_{v',\\gamma}]f_\\gamma)\n$$\nwhere the notation means that $\\hat{H}_{v,\\gamma}$ is a family of\nconsistently defined operators, each of which \nacts on cylindrical functions which depend on the graph $\\gamma$, and \n$\\gamma(v)$ is a graph on which $\\hat{H}_{v,\\gamma}f_\\gamma$ depends. \nThis expression clearly is non-vanishing only if $v\\not=v'$, but then \nit can be shown that the operators $\\hat{H}_{v,..}$ and \n$\\hat{H}_{v',..}$ actually commute. Still, this does not show that\nthe term above vanishes; however, it can be shown that \n$\\hat{H}_{v',\\gamma(v)}\\hat{H}_{v,\\gamma}f_\\gamma$ and \n$\\hat{H}_{v,\\gamma(v')}\\hat{H}_{v',\\gamma}f_\\gamma$ are related by a \ndiffeomorphism \\cite{9}. In \\cite{9} that was enough to show that the \ncommutator vanishes because we were dealing there only with vertices \nwhich do not intersect $S$, as otherwise both lapse functions identically \nvanish for a pure gauge transformation. Thus the diffeomorphism that \nrelates the two terms above could be chosen to have support inside \n$B(S)$ and $\\Psi$ is invariant under such diffeomorphisms. In the present \ncontext that \nneed not be true. 
However, the crucial point is now that by the \ntangle property all edges of $\\gamma$ that intersect $S$ must intersect\n$S$ transversally. Therefore the arcs that the Hamiltonian constraint \nattaches to $\\gamma$, and whose position is the only thing by which the \ntwo above vectors differ, {\\em lie inside $B(S)$ and do not intersect $S$}. \nTherefore, again the two vectors are related by a diffeomorphism which \nhas support inside $B(S)$, that is, they are related by a gauge \ntransformation, and therefore the commutator vanishes.\n\nWe turn to the second term in (\\ref{36}). Now we obtain a double sum\nover vertices of $\\gamma$ which lie in $S$ and each term is \nof the form\n$$\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{E}_{v',ADM},\\hat{E}_{v,ADM}]f_\\gamma)\n$$\nwhich is significantly simpler than before because $\\hat{E}_{v,ADM}$\ndoes not alter the graph. Notice that the commutator makes sense because\n$\\hat{E}_{ADM,v}$ leaves the span of non-zero volume eigenvectors invariant.\nNow for $v\\not=v'$ the commutator trivially vanishes, this time without \nemploying diffeomorphism invariance of $\\Psi$.\n\nFinally the last term in (\\ref{36}) is a double sum over vertices \n$v,v'$ of $\\gamma$, where $v$ must lie in $S$, of the form\n\\begin{equation} \\label{37}\n(M(v)N(v')-M(v')N(v))\n\\Psi([\\hat{H}_{v',\\gamma},\\hat{E}_{v,ADM}]f_\\gamma)\\;.\n\\end{equation}\nThe fact that $\\hat{E}_{ADM}$ does not alter the graph was used to write\n(\\ref{37}) as a commutator without employing diffeomorphism \ninvariance of $\\Psi$. Now it may happen that, although $f_\\gamma$ is in \nthe domain of $\\hat{E}_{v,ADM}$, $\\hat{H}_{v,\\gamma}f_\\gamma$ is no \nlonger in the domain, and so (\\ref{37}), for $v=v'$, is in danger of \nbeing \na meaningless product of something that blows up times zero, while that\ncannot happen for $v\\not=v'$. 
However, \nsince $\\Psi$ is a solution we conclude first of all that \n(\\ref{37}) equals\n\\begin{equation} \\label{38}\n-(M(v)N(v')-M(v')N(v))\n[\\hat{E}_{v,ADM}\\Psi](\\hat{H}_{v',\\gamma} f_\\gamma)\n\\end{equation}\nand, since $\\Psi$ is also in the domain of $\\hat{E}_{ADM}$, both \n$\\hat{E}_{v,ADM}\\Psi$ and $\\hat{H}_{v',\\gamma} f_\\gamma$\nare well-defined elements of $\\Phi'$ and $\\Phi$ respectively, so we conclude \nthat in the case $v=v'$ (\\ref{37}) indeed vanishes. On the other hand,\nthe same argument as before shows that the commutator trivially vanishes for \n$v\\not=v'$.\n\nLet us now check the commutator between time translations and spatial\ntranslations and rotations $\\varphi$. We have\n\\begin{eqnarray} \\label{39}\n&&\\Psi([\\hat{U}(\\varphi)^{-1}\\hat{E}(N)\\hat{U}(\\varphi)-\\hat{E}(N)]f_\\gamma)\n\\nonumber\\\\\n&=& \\sum_{v\\in V(\\gamma)}[N(\\varphi(v))\n\\Psi(\\hat{U}(\\varphi^{-1})\\hat{H}_{\\varphi(v),\\varphi(\\gamma)}\nf_{\\varphi(\\gamma)})\n-N(v)\\Psi(\\hat{H}_{v,\\gamma}f_\\gamma)]\n\\nonumber\\\\\n&+& \n\\sum_{v\\in \nV(\\gamma)\\cap \nS}[N(\\varphi(v))\\Psi(\\hat{U}(\\varphi^{-1})\\hat{E}_{ADM,\\varphi(v)} \nf_{\\varphi(\\gamma)})-N(v)\\Psi(\\hat{E}_{ADM,v}f_\\gamma)]\\;.\n\\end{eqnarray}\nSince $\\hat{E}_{ADM}$ does not change the graph on which a function depends\nwe have identically \n$\\hat{U}(\\varphi^{-1})\\hat{E}_{ADM,\\varphi(v)} f_{\\varphi(\\gamma)}\n=\\hat{E}_{ADM,v} f_\\gamma$.\\\\ \nNow, as explained in more detail in \\cite{9}, the operator $\\hat{H}(N)$\ndepends on a certain prescription of how to attach loops to graphs. Since \nin the interior of $B(S)$ there is no background metric\navailable, this prescription can only be topological in nature and therefore\ngraphs differing\nby a diffeomorphism $\\varphi$ are assigned graphs by $\\hat{H}(N)$ which\nare diffeomorphic by a diffeomorphism $\\varphi'$ which may not coincide with\n$\\varphi$. 
That is, in the interior of $B(S)$, $\\hat{H}(N)$ is \nonly covariant up to a diffeomorphism. On the other hand,\nsince one has the fixed background metric $\\delta_{ab}$ at $S$ one can \nmake $\\hat{H}(N)$ precisely covariant at $S$, that is, the prescription \nsatisfies $\\varphi_{|S}=\\varphi'_{|S}$.\nTherefore, with this sense of covariance of $\\hat{H}(N)$ \nit is true that \n$\\hat{U}(\\varphi^{-1})\\hat{H}_{\\varphi(v),\\varphi(\\gamma)}f_{\\varphi(\\gamma)}$\nand $\\hat{H}_{v,\\gamma}f_\\gamma$ differ at most by a diffeomorphism \nwith support in the interior of $B(S)$.\\\\ \nIn conclusion we obtain\n$$\n\\Psi([\\hat{E}(N),\\hat{U}(\\varphi)]f_\\gamma)=\n\\Psi(\\hat{E}(\\varphi^\\star N-N)f_\\gamma)\n$$ \nwhich is what we were looking for.\n\nWe conclude that the subalgebra of the Poincar\\'e algebra considered here \n(excluding boosts) is faithfully implemented.\\\\ \\\\\n\\\\\n\\\\\n{\\large Acknowledgements}\\\\\n\\\\\nThis research project was supported in part by DOE-Grant\nDE-FG02-94ER25228 to Harvard University.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMost of the accretion onto the supermassive black hole (SMBH) found in the center of most massive galaxies is heavily \nobscured by the surrounding dust and gas (e.g., \\citealp{fabian99a}). In the local Universe, $\\sim$75\\% of the Seyfert 2 galaxies\nare heavily-obscured ($N_H$$>$10$^{23}$~cm$^{-2}$; \\citealp{risaliti99}). Many of these, if at $z$$\\gtrsim$1, where most of the black hole growth\noccurs, would not be identified in X-rays even in very deep ($>$1 Msec) Chandra or XMM\/Newton exposures \\citep{treister04}. Locating and \nquantifying this heavily obscured SMBH growth, in particular at high redshifts, is currently one of the fundamental problems \nin astrophysics. 
\n\nBecause the energy absorbed at optical to X-ray wavelengths is later re-emitted in the mid-IR, it is expected that all Active Galactic Nuclei (AGN), even\nthe most obscured ones, should be very bright mid-IR sources (e.g., \\citealp{martinez06}). Hence, it is not surprising\nthat a large number of heavily obscured --- even Compton-thick ($N_H$$>$10$^{24}$cm$^{-2}$) --- AGN have been found amongst the Luminous and\nUltra-luminous Infrared Galaxies ((U)LIRGs; L$_{IR}$$>$10$^{11}$ and $>$10$^{12}$L$_\\odot$ respectively), both locally \\citep{iwasawa09} and at \nhigh redshift \\citep{bauer10}. Deep X-ray observations performed using the XMM-Newton (e.g., \\citealp{braito03,braito04}), Chandra \\citep{teng05} and \nSuzaku \\citep{teng09} observatories have shown that most ULIRGs are intrinsically faint X-ray sources, most likely due to the effects of obscuration, while \ntheir X-ray spectra show combined signatures of starburst and AGN activity. The key features observed in the X-ray spectra of ULIRGs are \na soft thermal component, typically associated with star formation, a heavily-obscured (N$_H$$\\sim$10$^{24}$ cm$^{-2}$) power-law associated \nwith the AGN direct emission, and a prominent emission line at $\\sim$6.4~keV, identified with fluorescence emission from iron in the \nK$_\\alpha$ ionization level, originating either in the accretion disk or in the surrounding material \\citep{matt91}. \n\nThe presence of heavily-obscured AGN among the most extreme ULIRGs at $z$$\\simeq$1-2 has recently been established from deep\nSpitzer observations \\citep{daddi07,fiore08,treister09c}. Most of these sources have very high, quasar-like, intrinsic luminosities, and hence most likely\ndo not constitute the bulk of the heavily-obscured AGN population \\citep{treister10}. Establishing the fraction of (U)LIRGs that host a lower luminosity AGN is \na more challenging task. 
Recent works based on X-ray stacking \\citep{fiore09} and using 70-$\\mu$m selected sources \\citep{kartaltepe10} report \na steep decrease in the fraction of AGN with decreasing IR luminosity, going from $\\sim$100\\% at L$_\\textnormal{IR}$=10$^{13}$~L$_\\odot$ \nto $<$10\\% at L$_\\textnormal{IR}$=10$^{10}$~L$_\\odot$. In the local Universe, \\citet{schawinski10} found that the incidence of low-luminosity, Seyfert-like, \nAGN as a function of stellar mass is more complicated, and is influenced by other parameters. For example, the dependence of AGN fraction on stellar mass\ncan be opposite if galaxy morphology is considered (increases with decreasing mass in the early-type galaxy population).\n\nIn this work, we estimate the fraction of heavily-obscured AGN in mid-IR-luminous and massive galaxies at high redshift, few of which are individually \ndetected in X-rays. The main goal is to constrain the amount of obscured SMBH accretion happening in distant galaxies. This can be done thanks\nto the very deep X-ray observations available in the Chandra Deep Fields and the very low and stable Chandra background, which allows for the efficient\nstacking of individually undetected sources. Throughout this letter, we assume a $\\Lambda$CDM cosmology with $h_0$=0.7, $\\Omega_m$=0.27\nand $\\Omega_\\Lambda$=0.73, in agreement with the most recent cosmological observations \\citep{hinshaw09}.\n\n\\section{Analysis and Results}\n\nBy stacking individually-undetected sources selected at longer wavelengths, it is possible to detect very faint X-ray emitters using Chandra observations. For \nexample, this technique was used successfully by \\citet{brandt01} in the Chandra Deep Field North (CDF-N) to measure\nthe average X-ray emission from a sample of Lyman break galaxies at $z$$\\simeq$2--4 and by \\citet{rubin04} to detect\nX-rays from red galaxies at $z$$\\sim$2. 
More recently, samples of heavily-obscured AGN candidates selected based\non their mid-IR properties have been studied in X-rays via Chandra stacking (e.g., \\citealp{daddi07,fiore08,treister09c}).\n\nThe 4 Msec Chandra observations of the Chandra Deep Field South (CDF-S) are currently\nthe deepest view of the X-ray sky. In addition, the CDF-S has been observed extensively at many wavelengths. The multiwavelength data available on the (E)CDF-S were \npresented by \\citet{treister09c}. Very relevant for this work are the deep Spitzer observations available in this field, using both the Infrared Array Camera (IRAC) and the \nMultiband Imaging Photometer for Spitzer (MIPS), from 3.6 to 24~$\\mu$m. Also critical is the availability of good quality photometric \nredshifts ($\\Delta$$z$\/(1+$z$)=0.008 for $R$$<$25.2) obtained thanks to deep observations in 18 medium-band optical filters performed using \nSubaru\/Suprime-Cam \\citep{cardamone10}. \n\nWe generated our sample starting with the 4959 Spitzer\/MIPS 24~$\\mu$m sources in the region covered by the Chandra observations that have\nphotometric redshift $z$$>$0.5, and hence rest-frame E$>$4~keV emission falling in the high-sensitivity Chandra range. In addition, sources \nindividually detected in X-rays and reported in the catalogs of \\citet{luo08}, \\citet{lehmer05} or \\citet{virani06} were removed from our \nsample, as these sources would otherwise dominate the stacked spectrum. We then inspected the remaining sources to eliminate\nindividual detections in the 4 Msec data not present in the 2 Msec catalog of \\citet{luo08}. We further excluded 28 sources that meet the \nselection criteria of \\citet{fiore08} for heavily obscured AGN, $f_{24}$\/$f_R$$>$1000 and $R$-$K$$>$4.5 (Vega), because we expect these sources to \ncontain an intrinsically luminous AGN (quasar), while the aim of this work is to find additional hidden accretion in less luminous objects. 
The median \nredshift of the sources in our final sample is 1.32 (average $z$=1.5) with a standard deviation of 0.77.\n\nIn order to perform X-ray stacking in the rest-frame, we started with the regenerated level 2 merged event files created by the Chandra X-ray \nCenter\\footnote{Data publicly available at http:\/\/cxc.harvard.edu\/cda\/whatsnew.html}. For each source, we extracted all events in a circle of \n30$''$ radius centered on the optical position. The energy of each event was then converted to the rest frame using the photometric redshift of the \nsource. Using standard CIAO \\citep{fruscione06} tools we then generated seven X-ray images for each source covering the energy range from 1-8 keV \nin the rest-frame with a fixed width of 1 keV. Images for individual sources were then co-added to measure the stacked signal. Total counts were measured\nin a fixed 5$''$ aperture, while the background was estimated by randomly placing apertures with the same area, 5$''$ to 30$''$ away from the center.\n\nSeveral groups have found (e.g, \\citealp{kartaltepe10} and references therein) that the fraction of galaxies containing an AGN is a strong function of\ntheir IR luminosity. Hence, it is a natural choice to divide our sample in terms of total IR luminosity. The infrared luminosity was estimated from the \nobserved 24~$\\mu$m luminosity assuming the relation found by \\citet{takeuchi05}: $\\log$(L$_{IR}$)=1.02+0.972 $\\log$(L$_{12~\\mu m}$). We further \nassumed that the $k$ correction between observed-frame 24~$\\mu$m and rest-frame 12~$\\mu$m luminosity for these sources \nis negligible, as shown by \\citet{treister09c}. We then separated our sample into 4 overlapping bins: $L_{IR}$$>$10$^{11}$$L_\\odot$, \n$L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ and $L_{IR}$$>$10$^{10}$$L_\\odot$ \nand stacked them independently. 
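The luminosity binning and the rest-frame co-adding described above can be sketched as follows. This is illustrative only: event extraction, aperture photometry, and background subtraction are omitted, and the luminosity relation is the Takeuchi et al. (2005) one quoted in the text, with the $k$ correction between observed 24~$\mu$m and rest-frame 12~$\mu$m neglected as assumed above.

```python
import math

def log_l_ir(log_l_12um):
    # Takeuchi et al. (2005) relation quoted in the text; the k-correction
    # between observed 24 um and rest-frame 12 um is taken as negligible
    return 1.02 + 0.972 * log_l_12um

def stack_rest_frame(sources, e_min=1.0, n_bins=7):
    """Co-add event energies into seven 1 keV rest-frame bins from 1 to 8 keV.

    `sources` is a list of (photo_z, [observed event energies in keV]) pairs;
    event extraction, apertures, and background subtraction are omitted.
    """
    counts = [0] * n_bins
    for z, energies in sources:
        for e_obs in energies:
            e_rest = e_obs * (1.0 + z)      # shift the event to the rest frame
            k = math.floor(e_rest - e_min)  # index of the 1 keV wide bin
            if 0 <= k < n_bins:
                counts[k] += 1
    return counts
```

For example, $\log L_{12\mu m}=10$ gives $\log L_{IR}=10.74$, i.e. a source belonging to the $L_{IR}>5\times 10^{10}L_\odot$ and $L_{IR}>10^{10}L_\odot$ bins, and a 2.4 keV observed event from a $z$=1 source lands in the 4-5 keV rest-frame bin.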
The number of sources in each sample is 670, 1545, 2342 and 3887 respectively.\n\nIn Fig.~\\ref{obs_spec} we present the stacked spectra as a function of rest-frame energy, both in total counts and normalized at 1 keV to highlight the \ndifference in spectral shape among the different samples. The spectra begin to diverge at E$\\gtrsim$5 keV, where we expect the AGN emission to\ndominate even for heavily-obscured sources. There is a clear trend of stronger high-energy X-ray emission with increasing IR luminosity.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.22\\textwidth]{f1a.eps}\n\\includegraphics[width=0.22\\textwidth]{f1b.eps}\n\\end{center}\n\\caption{{\\it Left panel:} Stacked background-subtracted Chandra counts as a function of rest-frame energy from 1 to 8 keV. Samples were selected\nbased on their IR luminosity in the following overlapping bins: $L_{IR}$$>$10$^{11}$$L_\\odot$ ({\\it filled circles}), $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ \n({\\it squares}), 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ ({\\it triangles}) and $L_{IR}$$>$10$^{10}$$L_\\odot$ ({\\it open circles}).\n{\\it Right panel}: same as left panel but normalized at 1 keV in order to highlight the differences in spectral shape among the different samples.\nThe largest differences are at E$\\gtrsim$5 keV, where there is a clear trend in the relative intensity as a function of IR luminosity,\nsuggesting a larger fraction of AGN in the most luminous IR sources.}\n\\label{obs_spec}\n\\end{figure}\n\n\n\\section{Discussion}\n\nThe spectra shown in Fig.~\\ref{obs_spec} cannot be directly interpreted, as the detector-plus-telescope response information is lost after the conversion to \nrest-frame energy and stacking. Hence, we perform simulations assuming different intrinsic X-ray spectra in order to constrain the nature of the sources \ndominating the co-added signal. 
We use the XSPEC code \\citep{arnaud96} to convolve several intrinsic input spectra with the latest response \nfunctions\\footnote{Obtained from http:\/\/cxc.harvard.edu\/caldb\/calibration\/acis.html} for the Chandra ACIS-I camera used in the CDF-S observations. We\nthen compare these simulated spectra with the observations in our sample of IR-selected sources.\n\nThe low energy spectrum of (U)LIRGs is dominated by a combination of a thermal plasma component with temperatures $kT$$\\simeq$0.7~keV, particularly\nimportant at E$<$3 keV, and the emission from high-mass X-ray binaries (HMXBs) at 1$<$E (keV)$<$10 (e.g., \\citealp{persic02}). For each source, we\ngenerated a simulated spectrum using a combination of these two components, taking into account the luminosity and redshift of the source. For\nthe HMXB population we assumed a power-law given by $\\Gamma$=1.2 and cutoff energy $E_c$=20 keV, consistent with recent observations \n(e.g., \\citealp{lutovinov05}). This component was normalized assuming the relation between IR and X-ray luminosity in starburst galaxies found \nby \\citet{ranalli03}. For the thermal component, we assumed a black body with temperature $kT$=0.7 keV. The normalization of this component\nwas then adjusted to match the observations at $E$$<$3 keV.\n\nIn order to compute the possible contribution from heavily-obscured AGN to the stacked spectrum we assumed the observed X-ray spectrum of the nearby\nULIRG IRAS19254-7245, as observed by Suzaku \\citep{braito09}. In addition to the starburst emission described above, the X-ray spectrum is described by\nan absorbed, Compton-thick, power-law with $\\Gamma$=1.9, $N_H$=10$^{24}$~cm$^{-2}$, and a possible scattered component, characterized by a \npower-law with $\\Gamma$=1.9, no absorption, and 1\\% of the intensity of the direct emission. 
The resulting simulated spectral components and the comparison\nwith the observed stacked spectrum for sources in the four samples defined above are shown in Fig.~\\ref{simul_spec}. \n\n\\begin{figure}\n\\begin{center}\n\\plotone{f2.eps}\n\\end{center}\n\\caption{Stacked background-subtracted Chandra counts as a function of rest-frame energy, as in Fig.~\\ref{obs_spec}. {\\it Black data points (filled circles)} show\nthe stacked X-ray signal for sources binned by IR luminosity. The {\\it cyan dashed lines (stars)} shows the simulated spectra for the HMXB population\nnormalized using the \\citet{ranalli03} relation between star-formation rate and X-ray luminosity. The {\\it blue dashed lines (open squares)} show simulated thermal spectra\ncorresponding to a black body with $kT$=0.7 keV, required to explain the E$<$3 keV emission. An absorbed AGN spectrum, given by a power-law with $\\Gamma$=1.9 and a \nfixed $N_H$=10$^{24}$~cm$^{-2}$, is shown by the {\\it red dashed lines (open circles)}. In addition, a scattered AGN component, characterized by a 1\\% reflection of the underlying \nunobscured power-law, is shown by the {\\it green dashed lines (open triangles)}. The resulting summed spectrum ({\\it black solid lines}) is in very good \nagreement with the observed counts. The strong detection in the stacked spectrum at E$>$5 keV, in particular at the higher IR luminosities, confirms the presence of a significant \nnumber of heavily-obscured AGN in these samples.}\n\\label{simul_spec}\n\\end{figure}\n\nIt is not possible to explain the observed stacked spectral shape using only a plausible starburst spectrum without invoking an AGN component, which dominates \nat E$>$5~keV. 
The average intrinsic rest-frame 2-10 keV AGN luminosity needed to explain the observed spectrum, assuming that every source in the sample contains an AGN of the \nsame luminosity, is 6$\\times$10$^{42}$~erg~s$^{-1}$ for sources with $L_{IR}$$>$10$^{11}$$L_\\odot$, 3$\\times$10$^{42}$~erg~s$^{-1}$ \nfor sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, 5$\\times$10$^{41}$~erg~s$^{-1}$ in the sample with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$\nand 7$\\times$10$^{41}$~erg~s$^{-1}$ for sources with $L_{IR}$$>$10$^{10}$$L_\\odot$. All of these are (intrinsically) very low-luminosity AGN; even if there is a range, it is extremely\nunlikely to include high-luminosity quasars like those discussed in previous stacking papers. An alternative possibility is that the extra emission at E$>$5~keV is due entirely to \nthe Fe~K$\\alpha$ line, provided the errors in the photometric redshifts in these samples are significantly larger than the values reported by \\citet{cardamone10}. \nRegardless of the template assumed for the AGN emission, we obtain similar values for the average AGN luminosity in each sample.\n\nThe median hard X-ray luminosity for the Chandra sources with measured photometric redshifts in the catalog of \\citet{luo08} is 4.1$\\times$10$^{43}$ erg~s$^{-1}$ for \nthe sources in the $L_{IR}$$>$10$^{11}$$L_\\odot$ sample, 3.5$\\times$10$^{43}$~erg~s$^{-1}$ in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ group, 5.7$\\times$10$^{42}$~erg~s$^{-1}$ for \nsources with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ and 1.6$\\times$10$^{43}$~erg~s$^{-1}$ in the $L_{IR}$$>$10$^{10}$$L_\\odot$ sample. Hence, \nif the heavily-obscured AGN in our stacked samples have the same median intrinsic luminosity this would indicate that 15\\% (98 sources) of the 670 galaxies \nwith $L_{IR}$$>$10$^{11}$$L_\\odot$ contain a heavily-obscured AGN. 
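This estimate amounts to taking, in each IR bin, the ratio of the average stacked AGN luminosity to the median luminosity of the individually detected AGN. As a quick arithmetic check, using only the luminosities quoted in the text (a sketch; the values are treated as exact):

```python
# (number of stacked sources, average stacked AGN L_X, median L_X of
# X-ray-detected AGN), in erg/s, as quoted in the text for each IR bin
samples = [
    ("L_IR > 1e11 L_sun",        670,  6e42, 4.1e43),
    ("L_IR > 5e10 L_sun",        1545, 3e42, 3.5e43),
    ("1e10 < L_IR < 5e10 L_sun", 2342, 5e41, 5.7e42),
    ("L_IR > 1e10 L_sun",        3887, 7e41, 1.6e43),
]

for name, n, l_stacked, l_detected in samples:
    frac = l_stacked / l_detected  # fraction of galaxies hosting a hidden AGN
    print(f"{name}: {100 * frac:.0f}% ({frac * n:.0f} sources)")
```

This reproduces the 15\% (98 sources), $\sim$9\% (132 and 205 sources) and $<$5\% figures quoted in the text.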
This fraction is $\\sim$9\\% (132 and 205 sources respectively) in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ and\n5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ samples. For sources with $L_{IR}$$>$10$^{10}$$L_\\odot$ this fraction is $<$5\\%. The integrated intrinsic \nX-ray emission in the rest-frame 2-10 keV band due to the heavily-obscured AGN in this sample, obtained by multiplying the intrinsic X-ray luminosity by the number of sources \nand dividing by the studied area, is $\\sim$4.6$\\times$10$^{46}$~erg~cm$^{-2}$~s$^{-1}$~deg$^{-2}$. For comparison, the total emission from all the X-ray \ndetected AGN in the CDF-S is 1.63$\\times$10$^{47}$~erg~cm$^{-2}$~s$^{-1}$~deg$^{-2}$. Hence, this extra AGN activity can account for $\\sim$22\\% of the total SMBH accretion.\nAdding this to the obscured SMBH growth in X-ray detected AGN \\citep{luo08}, we confirm that most SMBH growth, $\\sim$70\\%, is significantly obscured and missed by even the \ndeepest X-ray surveys \\citep{treister04,treister10}.\n\nPerforming a similar study on the 28 sources with $f_{24}$\/$f_R$$>$1000 and $R$-$K$$>$4.5 that we previously excluded, we find a very hard X-ray\nspectrum, harder than that of the $L_{IR}$$>$10$^{11}$$L_\\odot$ sources. This spectrum is consistent with a population of luminous AGN with intrinsic rest-frame 2-10 keV \nluminosity $\\sim$2$\\times$10$^{43}$~erg~s$^{-1}$ and negligible contribution from the host galaxy, except at E$<$2 keV where the thermal component is $\\sim$30\\% of the\ntotal emission. This result justifies our choice of removing these sources from our study (otherwise they would dominate the stacked signal), while at the same time it confirms the\nAGN nature of the vast majority of these sources, in contrast to the suggestion that the extra IR emission could be due to star-formation processes \\citep{donley08, pope08,georgakakis10}. 
\nA similar result for these high-luminosity sources was found by \\citet{fiore10}: In a sample of 99 mid-IR excess sources in the COSMOS field he found a strong stacked signal \nat E$\\sim$6~keV, which he interpreted as due to the Fe~K$\\alpha$ line, a clear signature of AGN emission and high obscuration (see discussion below).\n\n\\subsection{Multiwavelength Properties}\n\nBy design, none of the sources in our sample are individually detected in X-rays, nor do they satisfy the selection criteria of \\citet{fiore08}. However, it is interesting to investigate\nif they present other AGN signatures. For example, 237 out of the 1545 sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ in our sample (15\\%) are found inside the\nAGN IRAC color-color region defined by \\citet{stern05}. For comparison, in the sample of 2342 sources with 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ --- \nin which from the stacked hard X-ray signal we determined a negligible AGN fraction --- there are 327 galaxies (14\\%) in the \\citet{stern05} region. This suggests that the \nIRAC color-color diagram cannot be used to identify heavily-obscured low-luminosity AGN, because the near-IR emission in these sources is dominated by the host \ngalaxy \\citep{cardamone08}. At longer wavelengths, 83 of the 1545 sources with $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ were detected in the deep VLA observations of the \nCDF-S \\citep{kellermann08}. In contrast, only 33 sources in the 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ sample were detected in these \nobservations. Using the $q_{24}$ ratio between 1.4 GHz and 24~$\\mu$m flux densities (e.g., \\citealp{appleton04}), we find that in the $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$ \nsample, only 14 sources have $q_{24}$$<$-0.23 and can be considered ``radio-loud'' \\citep{ibar08}, and in the 5$\\times$10$^{10}$$L_\\odot$$>$$L_{IR}$$>$10$^{10}$$L_\\odot$ \nsample only 10 sources have $q_{24}$$<$-0.23. 
Hence, we conclude that the fraction of bona fide radio-loud sources is negligible and that in most cases the radio emission \nis produced by star-formation processes.\n\n\\subsection{AGN Fraction Versus Stellar Mass}\n\nIn order to investigate the fraction of heavily-obscured AGN as a function of other galaxy parameters, we performed X-ray stacking of samples sorted by stellar mass.\nStellar masses were taken from \\citet{cardamone10}, who performed spectral fitting to the extensive optical and near-IR spectro-photometry using FAST \\citep{kriek09} and the stellar \ntemplates of \\citet{maraston05} assuming the \\citet{kroupa01} initial mass function and solar metallicity. We further restricted our sample to sources with $z$$<$1.2, for \nwhich photometric redshifts and stellar masses are very well determined ($\\Delta$z\/(1+$z$)=0.007). We then divided the sample into three mass bins: M$>$10$^{11}$M$_\\odot$, \n10$^{11}$$>$M (M$_\\odot$)$>$10$^{10}$ and 10$^{10}$$>$M (M$_\\odot$)$>$10$^{9}$. The resulting stacked X-ray spectra are shown in Fig.~\\ref{obs_spec_mass}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.2]{f3a.eps}\n\\includegraphics[scale=0.2]{f3b.eps}\n\\end{center}\n\\caption{Stacked Chandra counts for galaxies binned as a function of their stellar mass. The {\\it left panel} shows the spectra for the following\nbins: M$>$10$^{11}$M$_\\odot$ ({\\it red squares}), 10$^{10}$$<$M$<$10$^{11}$M$_\\odot$ ({\\it blue triangles}) and 10$^{9}$$<$M$<$10$^{10}$M$_\\odot$ ({\\it black circles}).\n{\\it Right panel:} same but normalized at 1 keV. In the M$>$10$^{11}$M$_\\odot$ sample, the strong excess at E=6-7~keV, which we associate with the Fe~K$\\alpha$ line, is an \nindicator of AGN activity. 
Similarly, for the sources with 10$^{10}$$<$M$<$10$^{11}$M$_\\odot$ there is a hard X-ray spectrum, also suggesting a significant AGN fraction.\nThese preliminary results indicate that these heavily-obscured moderate-luminosity AGN are predominantly present in the most massive galaxies.}\n\\label{obs_spec_mass}\n\\end{figure}\n\n\nFor sources with M$>$10$^{11}$M$_\\odot$, there is a significant excess at 6-7~keV, above a spectrum that otherwise declines with increasing energy. This might \nbe due to the presence of the Fe~K$\\alpha$ line, a clear indicator of AGN activity. Contrary to the case of stacking as a function of IR luminosity (Fig.~\\ref{simul_spec}), \nhere we do not find evidence for an absorbed power-law ---the 6-7 keV feature is simply too sharply peaked. Possibly the restriction to $z$$<$1.2 for the mass-binned stacking, where \nphotometric redshifts are most accurate, reveals an emission line that is broadened by less accurate photometric redshifts in the full sample. That is, the feature in the $L_{IR}$-binned stack \nthat we interpreted as a heavily absorbed power law may instead be an Fe~K$\\alpha$ line broadened artificially by bad photometric redshifts. In the 10$^{11}$$>$M (M$_\\odot$)$>$10$^{10}$ \nsample we found a significant hardening of the X-ray spectrum (Fig.~\\ref{obs_spec_mass}), suggesting the presence of a significant fraction of AGN. In contrast, only a soft spectrum, \nconsistent with star-formation emission, can be seen for sources with 10$^{10}$$>$M (M$_\\odot$)$>$10$^{9}$. Taken together, these results indicate that AGN are predominantly present \nin the most massive galaxies, in agreement with the conclusions of \\citet{cardamone10b} and others. This will be elaborated in a paper currently in preparation.\n\n\\subsection{Space Density of Heavily-Obscured AGN}\n\nThe fraction of Compton-thick AGN in the local Universe is still heavily debated. 
\\citet{treister09b} reported a fraction of $\\sim$8\\% in a flux-limited sample of sources\ndetected in the Swift\/BAT all-sky \\citep{tueller08} and International Gamma-Ray Astrophysics Laboratory (INTEGRAL; \\citealp{krivonos07}) surveys. From an INTEGRAL volume-limited \nsurvey at $z$$<$0.015, \\citet{malizia09} found a higher fraction of 24\\%, suggesting that even surveys at E$>$10~keV are potentially \nbiased against the detection of Compton-thick AGN. The fraction of moderate-luminosity Compton-thick sources in our sample of sources \nwith $L_{IR}$$>$5$\\times$10$^{10}$$L_\\odot$, relative to all AGN in the CDF-S, is $\\sim$25\\% (132\/525), assuming that Compton-thick and Compton-thin AGN have similar \nmedian intrinsic luminosities. This indicates that there is no major evolution in the number of moderate-luminosity heavily-obscured AGN from $z$=0 to 2. In contrast, at higher \nluminosities, \\citet{treister10} reported that the ratio of obscured to unobscured quasars increased from $\\sim$1 at $z$=0 to $\\sim$2-3 at $z$$\\simeq$2. Hence, although all these \nestimates are still uncertain, it appears that the evolution of Compton-thick AGN depends strongly on their luminosity. We further speculate that this is an indication either that the \ntriggering of low-luminosity AGN is not related to major mergers of gas-rich galaxies, as found by \\citet{treister10} for high-luminosity quasars, or that the time delay\nbetween galaxy interactions and black hole growth is long \\citep{schawinski10b}.\n\n\\acknowledgements\n\nWe thank the referee, Fabrizio Fiore, for very useful and constructive comments. 
Support for the work of ET and KS was provided by the National\nAeronautics and Space Administration through Chandra\/Einstein\nPost-doctoral Fellowship Award Numbers PF8-90055 and PF9-00069\nrespectively issued by the Chandra X-ray Observatory Center, which is\noperated by the Smithsonian Astrophysical Observatory for and on\nbehalf of the National Aeronautics and Space Administration under contract\nNAS8-03060. CMU and CC acknowledge support from NSF grants \nAST-0407295, AST-0449678, AST-0807570 and Yale University.\n\n\n\n\\section{Introduction}\nWith the rapid development of the Internet of Things, smart in-vehicle applications (i.e., autonomous driving, image assisted navigation and multimedia entertainment) have been widely applied to smart vehicles \\cite{Feng}\\cite{Zhang}, which can provide a more comfortable and safer environment for drivers and passengers. Since these in-vehicle applications consume huge computation resources and require low execution latency, the cloud server is employed to afford these complex computing tasks, which results in a serious burden on the backhaul network \\cite{Huang0}. Luckily, vehicular edge computing (VEC) servers with powerful computing capacity are densely deployed along the roadside and attached to road side units (RSUs). Therefore, it is worthwhile to study how to efficiently utilize the computing capacity of the VEC server to support low-latency and energy-efficient in-vehicle services. \n\nTo improve the offloading efficiency of vehicle terminals, researchers have proposed many offloading and resource allocation methods in VEC-based networks. Zhang et al. \\cite{Mao} proposed a hierarchical cloud-based VEC offloading framework to reduce the execution delay, where a backup server was utilized to offer extra computation resource for the VEC server. 
However, the priority of tasks was not considered in the offloading process, so delay-sensitive tasks (i.e., image assisted navigation) may not be processed in time for high-speed vehicles. To further reduce the task execution delay, Liu et al. \\cite{Liu} studied the task offloading problem by treating the vehicles and the RSUs as the two sides of a matching problem to minimize the task execution delay, but the computation capacity of the VEC server should be fully exploited to decrease the energy consumption of in-vehicle applications. In addition, Li et al. \\cite{Li} considered the influence of the time-varying channel on the task offloading strategies and formulated the problem of joint radio and computation resource allocation. However, the variety of vehicle speeds was not considered, so high-speed vehicles may not obtain sufficient computation and wireless resources. To accelerate the radio and computation allocation process, Dai et al. \\cite{Dai} proposed a low-complexity algorithm to jointly optimize server selection, offloading ratio and computation resource, but the task handover between VEC servers may result in increased delay and packet loss. To make a more accurate offloading decision, Sun et al. \\cite{Sun} proposed a task offloading algorithm that determines where the tasks are performed and their execution order on the VEC server, but the proposed heuristic algorithm may not obtain a satisfying result when the vehicle's channel state fluctuates frequently. Considering the fluctuation of the channel state, Zhan et al. \\cite{Zhan} proposed a deep reinforcement learning-based offloading algorithm to make the task offloading decision, which can be applied in the dynamic environment caused by vehicle mobility. 
\n\nIn general, there are still some problems to be solved for computing task offloading and resource allocation for in-vehicle applications: (1) The impact of vehicle speed on task delay constraints has been neglected, which results in a mismatch between vehicle speed and the allocated computation and wireless resources and cannot guarantee the delay requirements. (2) The large fluctuation of the vehicle's channel state caused by fast mobility has not been considered, which may result in offloading failure. (3) The computation capacity of the VEC server has not been fully exploited because of inaccurate resource allocation strategies. Inspired by the above work, we propose a vehicle speed aware computing task offloading and resource allocation strategy based on multi-agent reinforcement learning. Our work is novel in the following aspects.\n\n\\textbf{(1) Vehicle speed aware delay constraint model:} Different types of tasks and vehicle speeds demand various delay requirements for in-vehicle applications. Therefore, we fully analyze the internal relationship among vehicle speed, task type and delay requirements to propose a vehicle speed aware delay constraint model, which makes the task offloading and resource allocation process more accurate. \n\n\\textbf{(2) The calculation of energy consumption and delay for different types of tasks:} Based on the bandwidth and delay requirements, in-vehicle computing tasks are classified into three types: critical application, high-priority application and low-priority application. 
For different types of tasks, we calculate the energy consumption and delay based on the task transmission and execution processes for different offloading positions, respectively.\n\n\\textbf{(3) Multi-agent reinforcement learning based solution to our formulated problem:} The joint offloading and resource allocation problem is formulated as a Markov decision process (MDP), with the objective of minimizing the energy consumption subject to delay constraints. In addition, multi-agent reinforcement learning is applied to solve the high-dimensional action decision-making.\n\n\n\\section{System Model}\n\\subsection{System Framework}\nThe scenario considered in this paper is the computing task offloading of vehicles on urban roads in a VEC network, as shown in Figure \\ref{system}. RSUs are located along the roadside, and the coverage areas of adjacent RSUs do not overlap. Therefore, according to the coverage areas of RSUs, the road can be divided into several adjacent segments, where a vehicle can only establish a wireless link with the RSU of the current road segment. Each RSU is equipped with a VEC server whose powerful computation capacity can help the vehicle quickly handle computing tasks. Since the delay constraint of the computing task is extremely short, it can be assumed that the vehicle can still receive the task processing result from the previous VEC server when it travels past the road segment horizon. 
In addition, the computing task can also be executed locally to alleviate the traffic and computing burden of the VEC server.\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.5cm]{figure-2.eps}\n\t\\centering\n\t\\caption{The architecture of computing task offloading in a VEC network}\n\t\\label{system}\n\\end{figure}\n\nBecause the VEC server can perceive the state information of vehicles in real time and owns powerful processing capacity, it can generate the optimal computing task offloading and resource allocation strategy (including computing resource allocation and wireless resource allocation) for each vehicle. First, the state information of the vehicle, including task queue, speed, location, remaining computing capacity and wireless resources, is reported to the RSU in real time. The RSU forwards the received state information to the VEC server, which utilizes the extracted state information to calculate the task execution delay and energy consumption and to formulate the optimization problem. The goal is to reduce the energy consumption of all vehicles by optimizing task offloading and resource allocation decisions without exceeding the delay constraint. Then, according to the results of the joint task offloading and resource allocation, the vehicle's computing tasks can be executed locally or offloaded to the VEC server.\n\n\\subsection{Vehicle Speed Aware Delay Constraint Model}\n\nVehicles' computing tasks can be divided into three types: critical application (CA), high-priority application (HPA) and low-priority application (LPA), which need different bandwidth and delay requirements \\cite{Dzi}. We denote these three task types by $\\phi_1$, $\\phi_2$ and $\\phi_3$, respectively. CA tasks generally refer to autonomous driving and road safety tasks, which need ultra-low delay to ensure the safety of vehicle driving. Therefore, this type of task needs to be executed locally, and its delay threshold is set to $Th{{r}_{1}}$. 
The HPA task mainly involves image assisted navigation, parking navigation and some optional security applications. The delay tolerance of an HPA task is related to the current vehicle speed. For vehicles with low speed, a slightly longer computing time is acceptable, while more wireless and computation resources can be allocated to vehicles with high speed, whose tasks can then be processed preferentially. The delay threshold of an HPA task is set to $Th{{r}_{2}}$ when the vehicle speed reaches the maximum road speed limit ${{v}_{\\max }}$. LPA tasks generally include multimedia and passenger entertainment activities, so their delay requirement is relatively slack; the delay threshold is set to $Th{{r}_{3}}$.\n\nIn this paper, it is assumed that the generated computing tasks form an independent task sequence. The computing task of vehicle $k$ to be processed at time $t$ is defined as ${{\\mathcal{I}}_{t}}(k)$. For an HPA task, when the vehicle's speed is low, the delay threshold of the computing task can be relatively long. As the speed increases, the amount of information the vehicle receives from the surrounding environment within the same delay grows rapidly because of the longer distance traveled. Therefore, the allowable delay of the vehicle's computing task should be reduced rapidly. When the speed reaches a higher level, the growth of the information received from the surrounding environment with increasing speed gradually slows down. 
Therefore, the allowable delay of the vehicle's computing task is reduced more slowly.\n\nAccordingly, we select a one-tailed normal function to describe the relationship between the delay constraint, $\\mathcal{T}({{v}_{\\mathcal{I}_t(k)}})$, and speed for task $\\mathcal{I}_t(k)$ of $\\phi _2$, as follows:\n\\begin{equation}\\small\n\\begin{aligned}\n\\mathcal{T}({{v}_{\\mathcal{I}_t(k)}})&=Th{{r}_{2}}\\frac{1}{\\sqrt{2\\pi }\\alpha }\\exp (-\\frac{v_{{{\\mathcal{I}}_{t}}(k)}^{2}}{2{{\\alpha }^{2}}})\/(\\frac{1}{\\sqrt{2\\pi }\\alpha }\\exp (-\\frac{v_{\\max }^{2}}{2{{\\alpha }^{2}}})) \\\\ \n& =\\exp (-\\frac{v_{{{\\mathcal{I}}_{t}}(k)}^{2}-v_{\\max }^{2}}{2{{\\alpha }^{2}}})Th{{r}_{2}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in \\phi _2 \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{v}_{\\mathcal{I}_t(k)}}$ is the current vehicle speed and ${{\\alpha }^{2}}$ is the variance of the normal function. $v_{max}$ is the maximum road speed. To ensure that the probability that the vehicle speed is within the maximum speed exceeds 95\\%, we set $\\alpha ={{v}_{\\max }}\/1.96$. Therefore, we employ $\\Upsilon({{v}_{\\mathcal{I}_t(k)}})$ to represent the delay threshold of computing task ${{\\mathcal{I}}_{t}}(k)$ for all task types, as follows:\n\\begin{equation}\n\\begin{aligned}\n& \\Upsilon ({{\\mathcal{I}}_{t}}\\text{(}k\\text{)})=\\left\\{ \\begin{aligned}\n& Th{{r}_{1}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{1}} \\\\ \n& \\mathcal{T}\\text{(}{{v}_{{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}}}\\text{), }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{2}} \\\\ \n& Th{{r}_{3}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}\\text{(}k\\text{)}\\in {{\\phi }_{3}} \\\\ \n\\end{aligned} \\right. 
\n\\end{aligned}\n\\end{equation}\n\n\\section{Delay and Energy Consumption of Different Offloading Positions}\nFor the generated computing task, the task handover between VEC servers is generally not considered \\cite{Zhang}, to ensure its successful transmission. For HPA and LPA computing tasks generated by the in-vehicle applications at time $t$, there are usually three ways to handle them: holding on, offloading to the VEC server and local execution, as shown in Figure \\ref{delay}.\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.5cm]{figure-1.eps}\n\t\\centering\n\t\\caption{Delay and energy consumption of different offloading positions}\n\t\\label{delay}\n\\end{figure}\nWhen the vehicle's current remaining computing and wireless resources and the VEC server's remaining computing resource are insufficient to process the new computing task, the vehicle's computing task can wait for a certain time until computing and wireless resources are released. \n\\subsection{Delay of Task Execution}\n\\subsubsection{Offloading to the VEC Server}\nFor the local VEC server, we denote the set of vehicles in the service area by $\\mathbb{Q}$ and the number of these vehicles by $K$. When a computing task ${{\\mathcal{I}}_{t}}(k)$ belonging to task type $\\phi_2$ or $\\phi_3$ is offloaded to the VEC server, the task completion time contains the upload time, execution time and download time. For task type $\\phi_2$, since the size of the output file is much smaller than that of the input file, the download time can be ignored. 
Considering that task upload, execution and download cannot be executed simultaneously within one transport time interval (TTI), the consumed time, $TR_{{{\\mathcal{I}}_{t}}(k)}$, needs to be rounded up, which can be described as: \n\\begin{equation}\\small\nTR_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\{ \\begin{aligned}\n& \\left\\lceil \\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}} \\right\\rceil +\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}{{f}^{VEC}}} \\right\\rceil ,\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_2 \\\\ \n& \\left\\lceil \\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}} \\right\\rceil +\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}{{f}^{VEC}}} \\right\\rceil +\\left\\lceil \\frac{{{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,down}} \\right\\rceil ,\\\\&\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_3 \\\\ \n\\end{aligned} \\right.\n\\end{equation}\nwhere ${{c}_{{{\\mathcal{I}}_{t}}(k)}}$ is the file size of task ${{\\mathcal{I}}_{t}}(k)$ and ${{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}$ is the calculation density of processing the task ${{\\mathcal{I}}_{t}}(k)$. ${{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}$ is the scaling ratio of the downloaded task size relative to the uploaded task size. $b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}$ indicates the proportion of computing resource allocated by the VEC server to the task ${{\\mathcal{I}}_{t}}(k)$. ${{f}^{VEC}}$ denotes the CPU frequency of the VEC server. The transmission capacity between vehicle and server can be obtained from the number of allocated channels, the channel bandwidth, the transmission power and the noise power \\cite{Huang}. 
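As a concrete illustration of the delay model above, the following sketch implements the speed-aware threshold $\Upsilon$ and the TTI-rounded VEC completion time $TR$. This is a minimal Python illustration only; all numeric values used with it are hypothetical, not the paper's simulation settings.

```python
import math

def delay_threshold(task_type, v, v_max, thr1, thr2, thr3):
    """Speed-aware delay threshold; speeds and thresholds use consistent units."""
    if task_type == 1:                 # CA: fixed tight threshold, executed locally
        return thr1
    if task_type == 2:                 # HPA: one-tailed normal in the vehicle speed
        alpha = v_max / 1.96           # keeps the speed below v_max with >95% probability
        return math.exp(-(v**2 - v_max**2) / (2 * alpha**2)) * thr2
    return thr3                        # LPA: slack fixed threshold

def vec_completion_ttis(c, kappa, b, f_vec, r_up, r_down, omega, task_type):
    """TTIs to upload, execute and (for LPA only) download a task on the VEC server."""
    ttis = math.ceil(c / r_up) + math.ceil(kappa * c / (b * f_vec))
    if task_type == 3:                 # only the LPA output is large enough to download
        ttis += math.ceil(omega * c / r_down)
    return ttis
```

At `v = v_max` the HPA threshold reduces to `thr2`, and it grows smoothly as the vehicle slows down, matching the discussion above.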
For uplink channel $n$ of VEC server allocated to vehicle $k$, the available uplink transmission capacity, $r_{k,n}^{VEC,up}$, can be expressed as\n\\begin{equation}\\small\nr_{k,n}^{VEC,up}=\\omega _{VEC}^{up}{{\\log }_{2}}\\left( 1+\\frac{P\\cdot h_{k,n}^{VEC,up}}{{{\\sigma }^{2}}+I_{k,n}^{VEC,up}} \\right),\\text{ }for\\text{ }n\\in \\mathbb{N}_{VEC}^{up}\n\\end{equation}\nwhere $\\omega _{VEC}^{up}=B_{VEC}^{up}\/N_{VEC}^{up}$. $B_{VEC}^{up}$ is the uplink bandwidth of VEC server and $N_{VEC}^{up}$ is the number of total channels of VEC server. $\\sigma^2$ denotes noise power and $P$ is transmission power. $I_{k,n}^{VEC,up}$ denotes the interference on channel $n$. $\\mathbb{N}_{VEC}^{up}$ indicates the uplink channel set of VEC server. Let $z_{k,n}^{VEC,up}$ indicate whether the uplink channel $n$ is allocated to the vehicle $k$. If it is allocated, $z_{k,n}^{VEC,up} = 1$, otherwise, $z_{k,n}^{VEC,up} = 0$. Then the uplink transmission capacity between vehicle $k$ and VEC server, $r_{k}^{VEC,up}$, can be depicted as\n\\begin{equation}\nr_{k}^{VEC,up}=\\sum\\limits_{n\\in \\mathbb{N}_{VEC}^{up}}{z_{k,n}^{VEC,up}r_{k,n}^{VEC,up}}\n\\end{equation}\n\nSimilarly, the transmission capacity of downlink channel $n$ of VEC server allocated to vehicle $k$, $r_{k,n}^{VEC,down}$, can be expressed as\n\\begin{equation}\n\\begin{aligned}\nr_{k,n}^{VEC,down}&=\\omega _{VEC}^{down}{{\\log }_{2}}\\left( 1+\\frac{P\\cdot h_{k,n}^{VEC,down}}{{{\\sigma }^{2}}+I_{k,n}^{VEC,down}} \\right),\\\\& \\text{ }for\\text{ }n\\in \\mathbb{N}_{VEC}^{down}\n\\end{aligned}\n\\end{equation}\nwhere $\\omega _{VEC}^{down}=B_{VEC}^{down}\/N_{VEC}^{down}$. 
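The per-channel Shannon rate and its aggregation over a vehicle's allocated channels (shown above for the uplink; the downlink is analogous) can be sketched as follows. A minimal Python illustration; the channel-gain and noise values used with it are hypothetical abstract units.

```python
import math

def channel_rate(b_total, n_channels, p_tx, h, noise, interference):
    """Per-channel Shannon capacity: the band is split equally among the channels."""
    omega = b_total / n_channels       # bandwidth share of one channel
    return omega * math.log2(1.0 + p_tx * h / (noise + interference))

def link_rate(allocation, per_channel_rates):
    """Aggregate rate of one vehicle over its allocated channels.
    `allocation` is the 0/1 vector z_{k,n}; each channel is held by at most one vehicle."""
    return sum(z * r for z, r in zip(allocation, per_channel_rates))
```

With a 100 MHz band split into 100 channels and an SNR of 3, one channel carries 2 Mbit/s, and a vehicle's total rate is just the sum over its assigned channels.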
Then the downlink transmission capacity between vehicle $k$ and the VEC server, $r_{k}^{VEC,down}$, can be depicted as\n\\begin{equation}\nr_{k}^{VEC,down}=\\sum\\limits_{n\\in {{\\mathbb{N}_{VEC}^{down}}}}{z_{k,n}^{VEC,down}r_{k,n}^{VEC,down}}\n\\end{equation}\n\n\\subsubsection{Local Execution}\nFor computing task ${{\\mathcal{I}}_{t}}(k)$ executed locally, the consumed time, $TL_{{{\\mathcal{I}}_{t}}(k)}$, can be expressed as:\n\\begin{equation}\nTL_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\lceil \\frac{{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{{{f}^{k}}} \\right\\rceil \n\\end{equation}\nwhere ${{f}^{k}}$ is the CPU frequency of vehicle $k$. At time $t$, computing task ${{\\mathcal{I}}_{t}}(k)$ can select to hold on, be offloaded to the local VEC server, or be executed locally. Therefore, for computing task ${{\\mathcal{I}}_{t}}(k)$, the total delay from generation to completion, $D({{\\mathcal{I}}_{t}}(k))$, is derived by\n\\begin{equation}\\small\n\\begin{aligned}\nD({{\\mathcal{I}}_{t}}(k))&=t-t_{{{\\mathcal{I}}_{t}}(k)}^{g}+\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}{{T}_{h}}\\\\&+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold})[\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}TR_{{{\\mathcal{I}}_{t}}(k)}^{{}}+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC})T{{L}_{{{\\mathcal{I}}_{t}}(k)}}]\n\\end{aligned}\n\\end{equation}\nwhere $t_{{{\\mathcal{I}}_{t}}(k)}^{g}$ is the generation time of task ${{\\mathcal{I}}_{t}}(k)$. $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}$ indicates whether task ${{\\mathcal{I}}_{t}}(k)$ holds on. If it holds on, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold} = 1$, otherwise, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold} = 0$. ${{T}_{h}}$ denotes the waiting time. $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}$ indicates whether task ${{\\mathcal{I}}_{t}}(k)$ is offloaded to the VEC server. 
If it is offloaded, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC} = 1$, otherwise, $\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC} = 0$.\n\n\\subsection{Energy Consumption of Task Execution}\n\\subsubsection{Offloading to the VEC Server}\nWhen the computing task is offloaded to the VEC server, the energy consumption originates from uploading and downloading the computing task. For task type $\\phi_2$, since the size of the output file is much smaller than that of the input file, the energy consumed by downloading the computing task can be ignored. Therefore, the energy consumption of task $\\mathcal{I}_{t}(k)$ belonging to $\\phi_2$ or $\\phi_3$ offloaded to the VEC server, $ER_{{{\\mathcal{I}}_{t}}(k)}$, can be depicted as\n\\begin{equation}\nER_{{{\\mathcal{I}}_{t}}(k)}^{{}}=\\left\\{ \\begin{aligned}\n& P\\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}},\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_2 \\\\ \n& P(\\frac{{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,up}}\\text{+}\\frac{{{\\omega }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}}{r_{k}^{VEC,down}}),\\text{ }if\\text{ }{{\\mathcal{I}}_{t}}(k)\\in \\phi_3 \\\\ \n\\end{aligned} \\right.\n\\end{equation}\n\n\\subsubsection{Local Execution}\nWhen computing task ${{\\mathcal{I}}_{t}}(k)$ is executed locally, the consumed energy, $EL_{{{\\mathcal{I}}_{t}}(k)}$, can be calculated according to the assigned computation resource, which can be expressed as\n\\begin{equation}\nEL_{{{\\mathcal{I}}_{t}}(k)}^{{}}={{\\xi }_{{{\\mathcal{I}}_{t}}(k)}}{{\\kappa }_{{{\\mathcal{I}}_{t}}(k)}}{{c}_{{{\\mathcal{I}}_{t}}(k)}}{{({{f}^{k}})}^{2}}\n\\end{equation}\nwhere ${\\xi }_{\\mathcal{I}_{t}(k)}$ is the energy density of processing task ${{\\mathcal{I}}_{t}}(k)$ \\cite{Din}.\nAccording to the different offloading positions of computing task ${{\\mathcal{I}}_{t}}(k)$, including the VEC server and the local device, the consumed energy of all vehicles served by the local VEC server, $E(t)$, can be derived 
by\n\\begin{equation}\nE(t)=\\sum\\limits_{k\\in \\mathbb{Q}}{(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold})[\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}E{{R}_{{{\\mathcal{I}}_{t}}(k)}}}+(1-\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC})E{{L}_{{{\\mathcal{I}}_{t}}(k)}}]\n\\end{equation}\n\n\\section{Delay and Energy-Efficiency Driven Computing Task Offloading and Resource Allocation Algorithm Based on Multi-Agent Reinforcement Learning}\n\\subsection{Problem Formulation}\nWe formulate the optimization problem of reducing the energy consumption of each vehicle without exceeding the delay constraint by carrying out an optimal computing task offloading and resource allocation strategy, which can be described as follows:\n\\begin{equation}\\label{problem}\n\\begin{aligned}\n& \\underset{{{X}_{t}}(k,n),\\forall k,n}{\\mathop{\\min }}\\,\\sum\\limits_{t=1}^{T}{E(t)} \\\\ \n& s.t. \\\\ \n& (c1)\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC}+\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold}\\le 1,\\forall k\\in \\mathbb{Q} \\\\ \n& (c2)\\sum\\limits_{k}{b_{{{\\mathcal{I}}_{t}}(k)}^{VEC}}\\le 1,\\forall k\\in \\mathbb{Q} \\\\ \n& (c3)\\sum\\limits_{k}{z_{k,n}^{VEC,up}}\\le 1,\\forall k\\in \\mathbb{Q},n\\in \\mathbb{N}_{VEC}^{up} \\\\ \n& (c4)\\sum\\limits_{k}{z_{k,n}^{VEC,down}}\\le 1,\\forall k\\in \\mathbb{Q},n\\in \\mathbb{N}_{VEC}^{down} \\\\ \n& (c5)D({{\\mathcal{I}}_{t(k)}})\\le \\Upsilon({{v}_{\\mathcal{I}_t(k)}}),\\forall k\\in \\mathbb{Q} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{X}_{t}}(k,n)=(\\tau _{{{\\mathcal{I}}_{t}}(k)}^{VEC},\\tau _{{{\\mathcal{I}}_{t}}(k)}^{Hold},z_{k,n}^{VEC,up},z_{k,n}^{VEC,down})$. Constraint (c1) implies that computing task ${{\\mathcal{I}}_{t(k)}}$ cannot simultaneously be offloaded to the local VEC server, executed locally and hold on. Constraint (c2) indicates that the total computation capacity allocated to computing tasks by the VEC server cannot exceed its own computing capacity. 
Constraint (c3) and constraint (c4) indicate that each channel can be assigned to at most one vehicle in each scheduling period. Constraint (c5) indicates that computing task ${{\\mathcal{I}}_{t(k)}}$ should be completed within the delay constraint.\n\n\\subsection{Deep Reinforcement Learning-Based Solution Method}\nEquation (\\ref{problem}) is a multi-vehicle cooperation and competition problem, which is obviously an NP-hard problem. Therefore, we employ the deep reinforcement learning method to solve the proposed computing task offloading and resource allocation problem. First, we formulate our problem as an MDP to accurately describe the offloading and resource allocation decision process and utilize the multi-agent deep deterministic policy gradient (MADDPG) \\cite{Lowe} to find the optimal policy for the MDP. In what follows, we will present the elements of the MDP, including the state space, action space and reward function.\n\\subsubsection{State Space}\nWe define the state space of vehicle $k$ as ${{s}_{k}}(t)$, including the state information of vehicle $k$, other vehicles and the VEC server, which is depicted as\n\\begin{equation}\\small\n\\begin{aligned}\n{{s}_{k}}(t)&=[{{v}_{1}}(t),...,{{v}_{K}}(t),{{d}_{1}}(t),...,{{d}_{K}}(t),{{c}_{1}}(t),...,{{c}_{K}}(t), \\\\&r{{b}_{VEC}}(t),\ns\\tau _{k}^{Hold}(t),s\\tau _{k}^{VEC}(t),sb_{k}^{VEC}(t), sz_{1}^{VEC,up}(t),\\\\&...,sz_{N_{VEC}^{up}}^{VEC,up}(t),sz_{1}^{VEC,down}(t),...,sz_{N_{VEC}^{down}}^{VEC,down}(t)] \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{v}_{k}}(t),{{d}_{k}}(t),{{c}_{k}}(t)$ are the speed, position and file size to be processed of vehicle $k$ at time $t$, respectively. $r{{b}_{VEC}}(t)$ is the current remaining computation capacity of the VEC server at time $t$. $s\\tau _{k}^{(\\centerdot )}(t)$ indicates whether vehicle $k$ selects the offloading position $(\\centerdot)$ at time $t$. If it is selected, $s\\tau _{k}^{(\\centerdot )}(t) = 1$, otherwise, $s\\tau _{k}^{(\\centerdot )}(t) = 0$. 
$sb_{k}^{VEC}(t)$ is the ratio of computation resource allocated by the VEC server to vehicle $k$ at time $t$. $sz_{1}^{VEC,up}(t),...,sz_{N_{VEC}^{up}}^{VEC,up}(t)$ indicate whether each uplink channel of the VEC server is available at time $t$: if a channel is available, the value is 1, otherwise, the value is 0. $sz_{1}^{VEC,down}(t),...,sz_{N_{VEC}^{down}}^{VEC,down}(t)$ indicate the same for the downlink channels. Therefore, the state space of the system can be defined as: ${{S}_{t}}=({{s}_{1}}(t),...{{s}_{k}}(t)...,{{s}_{K}}(t))$.\n\n\\subsubsection{Action Space}\nSince vehicle $k$ cannot offload multiple tasks simultaneously at time $t$, task ${{\\mathcal{I}}_{t(k)}}$ and vehicle $k$ have a one-to-one correspondence. Therefore, we can represent the action space of task ${{\\mathcal{I}}_{t(k)}}$ with that of vehicle $k$. For vehicle $k$, the action space, $a_k(t)$, contains whether to hold on or offload the task to the VEC server, the computation resource allocated by the VEC server, and the uplink and downlink channels allocated by the VEC server, which can be expressed as\n\\begin{equation}\\small\n\\begin{aligned}\na_{k}^{{}}(t)&=[\\tau _{k}^{Hold}(t),\\tau _{k}^{VEC}(t),b_{k}^{VEC}(t),z_{k,1}^{VEC,up}(t),...,z_{k,N_{VEC}^{up}}^{VEC,up}(t),\\\\&z_{k,1}^{VEC,down}(t),...,z_{k,N_{VEC}^{down}}^{VEC,down}(t)]\n\\end{aligned}\n\\end{equation}\nTherefore, the action space of the system can be defined as: ${{A}_{t}}=\\{{{a}_{1}}(t),...{{a}_{k}}(t)...,{{a}_{K}}(t)\\}$.\n\n\\subsubsection{Reward}\nThe goal of this paper is to reduce the energy consumption of each vehicle terminal without exceeding the task delay constraint, which can be realized by allocating the computation resource and wireless resource of the system. Therefore, we set rewards based on the constraint conditions and the objective function to accelerate the training speed. 
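This constraint-based shaping can be illustrated with a minimal sketch: a base penalty plus weighted indicator terms, one per violated constraint, mirroring the structure of the penalty reward defined next. The weights here are the experimental values listed later in Table I, and summing the channel terms over all channels is a simplifying assumption of this illustration.

```python
def indicator(satisfied):
    """The Lambda term: -1 when the bracketed condition is violated, 0 otherwise."""
    return 0.0 if satisfied else -1.0

def constraint_penalty(tau_vec, tau_hold, b_shares, up_loads, down_loads,
                       ell1=-0.4, g1=0.8, g2=0.5, g3=0.5, g4=0.5):
    """Penalty reward of one vehicle when some of constraints (c1)-(c4) are violated.
    `up_loads`/`down_loads` give, per channel, how many vehicles were assigned to it."""
    r = ell1
    r += g1 * (tau_vec + tau_hold - 1) * indicator(tau_vec + tau_hold <= 1)   # (c1)
    r += g2 * (sum(b_shares) - 1) * indicator(sum(b_shares) <= 1)             # (c2)
    r += g3 * sum((n - 1) * indicator(n <= 1) for n in up_loads)              # (c3)
    r += g4 * sum((n - 1) * indicator(n <= 1) for n in down_loads)            # (c4)
    return r
```

With every constraint satisfied the reward collapses to the base term, and each violation subtracts an amount proportional to how strongly the constraint is exceeded.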
After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ does not satisfy constraints (c1)-(c4), the reward function can be defined as \n\\begin{equation}\\small\n\\begin{aligned}\n& {{r}_{k}}(t)={{\\ell }_{1}}+{{\\Gamma }_{1}}\\cdot (s\\tau _{k}^{VEC}+s\\tau _{k}^{Hold}-1)\\cdot {{\\Lambda }_{(s\\tau _{k}^{VEC}+s\\tau _{k}^{Hold}\\le 1)}}\\\\&+{{\\Gamma }_{2}}\\cdot (\\sum\\limits_{k}{sb_{k}^{VEC}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sb_{k}^{VEC}}\\le 1)}}\\\\& \n+{{\\Gamma }_{3}}\\cdot (\\sum\\limits_{k}{sz_{k,n}^{VEC,up}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sz_{k,n}^{VEC,up}}\\le 1)}}\\\\&+{{\\Gamma }_{4}}\\cdot (\\sum\\limits_{k}{sz_{k,n}^{VEC,down}}-1)\\cdot {{\\Lambda }_{(\\sum\\limits_{k}{sz_{k,n}^{VEC,down}}\\le 1)}} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere ${{\\Lambda }_{(\\centerdot )}}$ indicates that if the condition $(\\centerdot)$ is not satisfied, the value is -1, otherwise, the value is 0. ${{\\ell }_{1}},{{\\Gamma }_{1}},{{\\Gamma }_{2}},{{\\Gamma }_{3}},{{\\Gamma }_{4}}$ are experimental parameters. After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ satisfies constraints (c1)-(c4) but not (c5), the reward function can be defined as \n\\begin{equation}\n{{r}_{k}}(t)={{\\ell }_{2}}+\\exp (Th{{r}_{k}}(t)-\\Upsilon({{v}_{\\mathcal{I}_t(k)}}))\n\\end{equation}\nwhere ${{\\ell }_{2}}$ is an experimental parameter. After taking action ${{a}_{k}}(t)$, if the state of vehicle $k$ satisfies all constraints (c1)-(c5), the reward function can be defined as \n\\begin{equation}\n{{r}_{k}}(t)={{\\ell }_{3}}\\text{+}{{\\Gamma }_{5}}\\cdot \\exp ({{E}_{k}}(t))\n\\end{equation}\nwhere ${{\\ell }_{3}},{{\\Gamma }_{5}}$ denote experimental parameters.\n\\subsubsection{Joint Delay and Energy-Efficiency Algorithm Based on MADDPG}\nThe centralized training process is composed of $K$ agents, whose network parameters are $\\theta =\\{{{\\theta }_{1}},...,{{\\theta }_{K}}\\}$. 
We denote $\\mu =\\{{{\\mu }_{{{\\theta }_{1}}}},...,{{\\mu }_{{{\\theta }_{K}}}}\\}$ (abbreviated as ${{\\mu }_{k}}$) as the set of all agents' deterministic policies. For the deterministic policy ${{\\mu }_{k}}$ of agent $k$, the gradient can be depicted as\n\\begin{equation}\n\\begin{aligned}\n&{{\\nabla }_{{{\\theta }_{k}}}}J({{\\mu }_{k}})=\\\\&{{\\mathbb{E}}_{S,A\\sim \\mathcal{D}}}[{{\\nabla }_{{{\\theta }_{k}}}}{{\\mu }_{k}}({{a}_{k}}|{{s}_{k}}){{\\nabla }_{{{a}_{k}}}}Q_{k}^{\\mu }(S,{{a}_{1}},...,{{a}_{K}}){{|}_{{{a}_{k}}={{\\mu }_{k}}({{s}_{k}})}}]\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{D}$ is the experience replay buffer, which contains a series of tuples $(S,A,{{S}^{'}},R)$. $Q_{k}^{\\mu }(S,{{a}_{1}},...,{{a}_{K}})$ is the Q-value function. The critic network can be updated according to the loss function as follows\n\\begin{equation}\n\\begin{aligned}\n& \\mathcal{L}({{\\theta }_{k}})={{\\mathbb{E}}_{S,A,R,{{S}^{'}}}}[{{(Q_{k}^{\\mu }(S,a_{1}^{{}},...,a_{K}^{{}})-y)}^{2}}] \\\\ \n& \\text{where} \\;\\; y=r_{k}^{{}}+\\gamma Q_{k}^{{{\\mu }^{'}}}({{S}^{'}},a_{1}^{'},...,a_{K}^{'}){{|}_{a_{j}^{'}=\\mu _{j}^{'}({{s}_{j}})}} \\\\ \n\\end{aligned}\n\\end{equation}\nwhere $\\gamma $ is the discount factor. The actor network is updated using the sampled policy gradient, which can be expressed as\n\\begin{equation}\n{{\\nabla }_{{{\\theta }_{k}}}}J\\approx \\frac{1}{X}\\sum\\limits_{j}{{{\\nabla }_{{{\\theta }_{k}}}}}{{\\mu }_{k}}(s_{k}^{j}){{\\nabla }_{{{a}_{k}}}}Q_{k}^{\\mu }({{S}^{j}},a_{1}^{j},...,a_{K}^{j}){{|}_{{{a}_{k}}={{\\mu }_{k}}(s_{k}^{j})}}\n\\end{equation}\nwhere $X$ is the size of the mini-batch and $j$ is the index of samples. The specific joint delay and energy-efficiency algorithm based on MADDPG (JDEE-MADDPG) is shown in Algorithm 1.\n\\begin{algorithm}[!h]\n\t\\caption{JDEE-MADDPG}\n\t\\label{alg1}\n\n\t\\textbf{Initialize:} the positions, speed, task queue, computing resources and wireless resources of all vehicles. 
Initialize the computing and wireless resources of the VEC server. Initialize the weights of the actor and critic networks.\n\t\n\t\\For{episode = 1:M}\n\t{\n\t\tInitialize a random process $\\mathcal{N}$ for action exploration;\n\t\n\t\tReceive initial state $S$;\n\t\t\n\t\t\\For{each vehicle $k=1,...,K$}\n\t\t{\n\t\t\tExecute action ${{a}_{k}}$ and obtain new state $s_{k}^{'}$;\n\t\t\t\n\t\t\t\\uIf{$s_{k}^{'}$ does not satisfy constraints (c1)-(c4) in Eq.(12):}\n\t\t\t\t{\n\t\t\t\t \tObtain the reward of vehicle $k$ based on Eq.(15);\n\t\t\t }\n\t\t\t\\uElseIf{$s_{k}^{'}$ satisfies constraints (c1)-(c4) but not (c5) in Eq.(12):}\t \n\t\t\t\t{\n\t\t\t\t\tObtain the reward of vehicle $k$ based on Eq.(16);\n\t\t\t\t}\n\t\t\t\\uElseIf{$s_{k}^{'}$ satisfies all constraints (c1)-(c5) in Eq.(12):}\n\t\t\t\t{\n\t\t\t\t\tObtain the reward of vehicle $k$ based on Eq.(17);\n\t\t\t\t}\n\n\t\t\t \\textbf{end}\n\t\t\t \n Obtain the joint action $A$, new state ${{S}^{'}}$ and reward $R$;\n \n Store $(S,A,{{S}^{'}},R)$ in replay buffer $\\mathcal{D}$;\n \n\t\t}\n\t\t\\For{each vehicle $k=1,...,K$}\n\t\t{\n\t\t\tSample a random mini-batch of $X$ samples $({{S}^{j}},{{A}^{j}},{{R}^{j}},{{S}^{'}}^{j})$ from $\\mathcal{D}$;\n\t\t\t\n\t\t\tUpdate the critic network by minimizing the loss function, Eq.(19);\n\t\t\t\n\t\t\tUpdate the actor network using the sampled policy gradient, Eq.(20);\n\t\t} \n\t\tUpdate the target network parameters of each vehicle $k$: $\\theta _{k}^{'}\\leftarrow \\delta {{\\theta }_{k}}+(1-\\delta )\\theta _{k}^{'}$\n\t}\n\\end{algorithm}\n\n\n\\section{Simulation Results}\n\\subsection{Parameter Setting}\nThe specific simulation parameters are presented in Table I and Table II. The algorithms compared in this section are as follows: \n\n\\textbf{All Local Execution (AL):} All computation tasks are executed locally.\n\n\\textbf{All VEC Execution (AV):} The CA tasks are executed locally, while HPA and LPA tasks are executed on the VEC server. 
The resource allocation strategy is based on the task size.\n\n\\textbf{Random Offloading (RD):} The HPA and LPA tasks are executed locally or in the VEC server according to a uniform distribution. The resource allocation strategy is based on the task size.\n\n\\textbf{Energy and Delay Greedy (EDG):} The offloading strategy is based on the vehicle's channel state and the resource allocation strategy on the task size, in order to greedily decrease the energy cost and execution delay at each step. \n\n\\begin{table}[!h]\n\t\\caption{Simulation Parameter Configuration}\n\t\\begin{tabular}{|c|c|ccc}\n\t\t\\cline{1-2}\n\t\t\\bf{Parameter} & \\bf{Value} & & & \\\\ \\cline{1-2}\n\t\tNumber of vehicles & 5, 7, 9, 11, 13 & & & \\\\ \\cline{1-2}\n\t\tSize of task queue & 10 & & & \\\\ \\cline{1-2}\n\t\tSize of task input & {[}0.2, 1{]} Mb & & & \\\\ \\cline{1-2}\n\t\tSpeed of vehicle & {[}30, 50{]}, {[}50, 80{]}, {[}30, 80{]} km\/h & & & \\\\ \\cline{1-2}\n\t\tRSU's coverage range & 500 m & & & \\\\ \\cline{1-2}\n\t\tRSU's bandwidth & 100 MHz & & & \\\\ \\cline{1-2}\n\t\tChannel model & Typical Urban & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Transmission power between\\\\ vehicle and RSU\\end{tabular} & 0.5 W & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Computation capacity \\\\ of VEC server\\end{tabular} & 10 G Cycles\/s & & & \\\\ \\cline{1-2}\n\t\t\\begin{tabular}[c]{@{}c@{}}Computation capacity \\\\ of vehicle\\end{tabular} & 1, 1.2, 1.4, 1.6, 1.8 G Cycles\/s & & & \\\\ \\cline{1-2}\n\t\tComputation density & {[}20, 50{]} Cycles\/bit & & & \\\\ \\cline{1-2}\n\t\tWaiting time of hold on & 20, 50 ms & & & \\\\ \\cline{1-2}\n\t\tDelay threshold & 10, 40, 100 ms & & & \\\\ \\cline{1-2}\t\t\n\t\t\\begin{tabular}[c]{@{}c@{}}Output data size\/ input \\\\ data size ratio\\end{tabular} & 0.1 & & & \\\\ \\cline{1-2}\n\t\tEnergy density {\\cite{Dai}} & $1.25\\times 10^{-26}$ J\/Cycle & & & \\\\ \\cline{1-2}\n\t\tParameters of reward & 
\\begin{tabular}[c]{@{}c@{}}${{\\Gamma }_{1}}=0.8$, ${{\\Gamma }_{2}},{{\\Gamma }_{3}},{{\\Gamma }_{4}},{{\\Gamma }_{5}}=0.5$ \\\\ ${{\\ell }_{1}}=-0.4,{{\\ell }_{2}}=-0.2,{{\\ell }_{3}}=0.5$ \\end{tabular} & & & \\\\ \\cline{1-2}\n\t\\end{tabular}\n\\end{table}\n\n\\begin{table}[!h]\n\t\\caption{The Neural Network and Training Parameters}\n\t\\begin{tabular}{|c|c|c|c|}\n\t\t\\hline\n\t\t\\textbf{Parameter} & \\textbf{Value} & \\textbf{Parameter} & \\textbf{Value} \\\\ \\hline\n\t\tLayers & 3 & Layer Type & Fully Connected \\\\ \\hline\n\t\t\\multicolumn{1}{|c|}{Hidden Units} & 512 & Learning Rate of Critic & 0.001 \\\\ \\hline\n\t\tOptimizer & Adam & Learning Rate of Actor & 0.0001 \\\\ \\hline\n\t\tEpisode & 140000 & Activation Function & ReLU \\\\ \\hline\n\t\tMini-batch & 128 & Buffer Size & 20000 \\\\ \\hline\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Performance Evaluation}\nWe validate the algorithm performance in terms of convergence, task completion delay and energy consumption under different simulation configurations. 
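Before turning to the results, the soft target-network update performed at the end of each episode of Algorithm 1, $\\theta _{k}^{'}\\leftarrow \\delta {{\\theta }_{k}}+(1-\\delta )\\theta _{k}^{'}$, can be sketched in a few lines. This is an illustrative Python sketch with made-up toy values (the function name `soft_update` is ours), not the authors' implementation:

```python
# Illustrative sketch (not the authors' implementation) of the soft
# target-network update run at the end of each JDEE-MADDPG episode:
#     theta'_k <- delta * theta_k + (1 - delta) * theta'_k
# applied element-wise to each vehicle agent's parameter vector.

def soft_update(theta, theta_target, delta):
    """Blend the online parameters into the target parameters."""
    return [delta * w + (1.0 - delta) * wt for w, wt in zip(theta, theta_target)]

# Toy example: two scalar "weights"; the target slowly tracks the online network.
theta_online = [1.0, 2.0]
theta_target = [0.0, 0.0]
for _ in range(3):  # three episodes
    theta_target = soft_update(theta_online, theta_target, delta=0.1)
print(theta_target)  # approaches [1.0, 2.0] as more episodes pass
```

A small $\\delta$ makes the target actors and critics change slowly, which stabilizes the bootstrapped target $y$ used in the critic loss.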
\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8.6cm]{figure0.eps}\n\t\\centering\n\t\\caption{Convergence property for different numbers of vehicles.}\n\t\\label{convergence}\n\\end{figure}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure1.eps}\n\t\\centering\n\t\\caption{Average task completion delay of different algorithms: (a) the number of vehicles is 5, (b) the number of vehicles is 7, (c) the number of vehicles is 9, (d) the number of vehicles is 11, (e) the number of vehicles is 13.}\n\t\\label{figure1}\n\\end{figure*}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure2.eps}\n\t\\centering\n\t\\caption{Average task energy consumption of different algorithms: (a) the number of vehicles is 5, (b) the number of vehicles is 7, (c) the number of vehicles is 9, (d) the number of vehicles is 11, (e) the number of vehicles is 13.}\n\t\\label{figure2}\n\\end{figure*}\n\\begin{figure*}[!h]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure3.eps}\n\t\\centering\n\t\\caption{Average task completion delay and energy consumption of different algorithms: (a)(b) the vehicle speed range is [30, 50], (c)(d) the vehicle speed range is [50, 80].}\n\t\\label{figure3}\n\\end{figure*}\n\nIn Figure \\ref{convergence}, we present the convergence performance of our proposed JDEE-MADDPG algorithm for different numbers of vehicles. It can be seen that as the number of training episodes increases, the average reward of vehicles rises gradually and eventually stabilizes at a positive value. In the initial stage, the average reward of our proposed JDEE-MADDPG algorithm with fewer vehicles is higher than that with more vehicles, because more vehicles enlarge the dimensions of the state and action spaces, so our proposed JDEE-MADDPG algorithm needs more exploration. 
Therefore, the average reward with a large number of vehicles is lower than that with a small number of vehicles. As the number of training episodes increases, our proposed JDEE-MADDPG algorithm gradually converges; the average reward of 5 vehicles is the highest, while the average reward of 13 vehicles is the lowest. The reason is that more vehicles compete more fiercely for the limited computation and wireless resources, which decreases the reward associated with energy consumption and delay. \n\nIn Figure \\ref{figure1}, we present the comparison of average task completion delay with different numbers of vehicles when the vehicle speed ranges from 30 to 80 km\/h. It can be seen that, compared with the AL, AV and RD algorithms, our proposed JDEE-MADDPG algorithm always maintains a lower task completion delay for each vehicle. This is because our proposed JDEE-MADDPG algorithm can allocate the computation and wireless resources to vehicles more accurately based on the task priority, task size, vehicle speed and vehicle's channel state. In addition, the task completion delay of some vehicles under the EDG algorithm is lower than under our proposed JDEE-MADDPG algorithm, because our algorithm sacrifices a little task completion delay to decrease the energy consumption of vehicle terminals without exceeding the task delay constraint. \n\n\nIn Figure \\ref{figure2}, we show the comparison of average task energy consumption with different numbers of vehicles when the vehicle speed ranges from 30 to 80 km\/h. It can be observed that, compared with the other algorithms, our proposed JDEE-MADDPG algorithm always maintains a lower level of energy consumption. 
This is because our proposed JDEE-MADDPG algorithm can always make the optimal offloading and resource allocation decisions based on the task priority, task size, vehicle speed and vehicle's channel state, and reduce the energy consumption of all vehicles as much as possible.\n\n\nIn Figure \\ref{figure3}, we compare the average task completion delay and energy consumption for different vehicle speed ranges when the number of vehicles is 9. Compared with the AL, AV and RD algorithms, our proposed JDEE-MADDPG algorithm performs better in terms of delay and energy consumption. This is because our proposed JDEE-MADDPG algorithm can utilize more information about the vehicle terminals and the VEC server, i.e., vehicle position, vehicle speed, task queue, channel state and remaining computation resources, to make the optimal offloading and resource allocation strategy. Besides, the task completion delay of some vehicles under our proposed JDEE-MADDPG algorithm remains higher than under the EDG algorithm, because our algorithm usually allocates more wireless and computation resources of the VEC server to high-speed vehicles, within the task delay constraint.\n\n\n\n\\section{Conclusion}\nIn this paper, we propose a vehicle speed aware computing task offloading and resource allocation algorithm to achieve energy efficiency for all vehicles within the task delay constraint. First, we establish the vehicle speed-based delay constraint model based on task types and vehicle speed. Then we calculate the task completion delay and energy consumption for different offloading positions based on the allocated computation and wireless resources. Finally, we formulate the mathematical model with the objective of minimizing the energy consumption of all vehicles subject to the delay constraint. 
The MADDPG method is utilized to obtain the offloading and resource allocation strategy. Simulation results show that the proposed JDEE-MADDPG algorithm can decrease energy consumption and task completion delay compared with other algorithms under different numbers of vehicles and vehicle speed ranges.\n\n\\section*{Acknowledgement}\nThis research work was supported in part by the National Science Foundation of China (61701389, U1903213), the Natural Science Basic Research Plan in Shaanxi Province of China (2018JQ6022) and the Shaanxi Key R\\&D Program (2018ZDCXL-GY-04-03-02).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn \\cite{LSY1, LSY2}, the authors introduced the notion of \\textit{holomorphic homogeneous regular}. Then in \\cite{Y:squeezing}, the equivalent notion of \\textit{uniformly squeezing} was introduced. Motivated by these studies, in \\cite{DGZ1}, the authors introduced the \\textit{squeezing function} as follows.\n\nDenote by $\\ball(r)$ the ball of radius $r>0$ centered at the origin 0. Let $\\Omega$ be a bounded domain in $\\cv^n$, and $p\\in \\Omega$. For any holomorphic embedding $f:\\Omega\\rightarrow \\ball(1)$, with $f(p)=0$, set\n$$s_{\\Omega,f}(p):=\\sup\\{r>0:\\ \\ball(r)\\subset f(\\Omega)\\}.$$\nThen, the squeezing function of $\\Omega$ at $p$ is defined as\n$$s_\\Omega(p):=\\sup_f\\{s_{\\Omega,f}(p)\\}.$$\n\nMany properties and applications of the squeezing function have been explored by various authors, see e.g. \\cite{DGZ1,DGZ2,DF:Bergman,DFW:squeezing,FS:squeezing,FW:squeezing,KZ:squeezing}.\n\nIt is clear that squeezing functions are invariant under biholomorphisms, and they are positive and bounded above by 1. It is a natural and interesting problem to study the uniform lower and upper bounds of the squeezing function.\n\nIt was shown recently in \\cite{KZ:squeezing} that the squeezing function is uniformly bounded below for bounded convex domains (cf. \\cite[Theorem 1.1]{Frankel}). 
On the other hand, in \\cite{DGZ1}, the authors showed that the squeezing function is not uniformly bounded below on certain domains with non-smooth boundaries, such as punctured balls. In \\cite{DF}, the authors constructed a smooth pseudoconvex domain in $\\cv^3$ on which the quotient of the Bergman metric and the Kobayashi metric is not bounded above near an infinite type point. By \\cite[Theorem 3.3]{DGZ2}, the squeezing function is not uniformly bounded below on this domain.\n\nThese studies raise the question: Is the squeezing function always uniformly bounded below near a smooth finite type point? In this paper, we answer the question negatively. More precisely, we have the following\n\n\\begin{thm}\\label{T:main}\nLet $\\Omega$ be a bounded domain in $\\cv^3$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d<\\infty$. Moreover, assume that the regular order of contact at $q$ is greater than $2d$ along two smooth complex curves not tangent to each other. Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\n\\begin{rmk}\nThe proof gives the estimate $s_\\Omega(p)\\leq C \\delta^{\\frac{1}{2d(2d+1)}}$ for some points approaching the boundary.\n\\end{rmk}\n\nIn section \\ref{S:prelim}, we recall some preliminary notions and results. In section \\ref{S:proof}, we prove Theorem \\ref{T:main}.\n\n\\section{Preliminaries}\\label{S:prelim}\n\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $n\\ge 2$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$. The \\textit{Bloom-Graham type} of $\\Omega$ at $q$ is the maximal order of contact of complex manifolds of dimension $n-1$ tangent to $\\partial \\Omega$ at $q$ (see e.g. \\cite{BG}). 
Choose local coordinates $(z,t)\\in \\cv^{n-1}\\times \\cv$ such that the complex manifold of dimension $n-1$ with the maximal order of contact is given by $\\{t=0\\}$. Then $\\Omega$ is locally given by $\\rho(z,t)<0$, where $\\rho(z,t)=\\Re t+P(z)+Q(z,t)$ with $Q(z,0)\\equiv 0$ and $\\deg P(z)=d$. (We say that the degree of $P$ is $d$ if the Taylor expansion of $P$ has no nonzero term of degree less than $d$.) Since $\\Omega$ is pseudoconvex, we actually have $d=2k$ (see e.g. \\cite{D'Angelo}).\n\nFor $1\\le k\\le n-1$, let $\\varphi:\\cv^k\\rightarrow \\cv^n$ be analytic with $\\varphi(0)=q$ and $\\textup{rank} d\\varphi(0)=k$. Then the \\textit{regular order of contact} at $q$ along the $k$-dimensional complex manifold defined by $\\varphi$ is defined as $\\deg \\rho\\circ\\varphi$ (see e.g. \\cite{D'Angelo}).\n\nDenote by $\\Delta$ the unit disc in $\\cv$. Let $p\\in \\Omega$ and $\\zeta\\in \\cv^n$. The \\textit{Kobayashi metric} is defined as\n$$K_\\Omega(p,\\zeta):=\\inf\\{\\alpha:\\ \\alpha>0,\\ \\exists\\ \\phi:\\Delta\\rightarrow \\Omega,\\ \\phi(0)=p,\\ \\alpha \\phi'(0)=\\zeta\\}.$$\nThen the \\textit{Kobayashi indicatrix} is defined as (see e.g. \\cite{K:indicatrix})\n$$D_\\Omega(p):=\\{\\zeta\\in \\cv^n:\\ K_\\Omega(p,\\zeta)<1\\}.$$\nFor each unit vector $e\\in \\cv^n$, set $D_\\Omega(p,e):=\\max\\{|\\eta|:\\ \\eta\\in \\cv,\\ \\eta e\\in D_\\Omega(p)\\}$. By the definition of Kobayashi indicatrix, the following three lemmas are clear.\n\n\\begin{lem}\\label{L:ball}\n$D_{\\ball(r)}(0)=\\ball(r)$.\n\\end{lem}\n\n\\begin{lem}\\label{L:indicatrix}\nLet $\\Omega_1$ and $\\Omega_2$ be two domains in $\\cv^n$ with $\\Omega_1\\subset \\Omega_2$. Then for each $p\\in \\Omega_1$, $D_{\\Omega_1}(p)\\subset D_{\\Omega_2}(p)$.\n\\end{lem}\n\n\\begin{lem}\\label{L:indicatrix1}\nLet $\\Omega$ be a domain in $\\cv^n$ and $f:\\Omega\\rightarrow \\cv^n$ a biholomorphic map. 
Then for each $p\\in \\Omega$, $D_{f(\\Omega)}(f(p))=f'(p)D_\\Omega(p)$.\n\\end{lem}\n\nWe also need the following localization lemma (see e.g. \\cite[Lemma 3]{FL:metrics}). We will use $\\gtrsim$ (resp. $\\lesssim$, $\\simeq$) to mean $\\ge$ (resp. $\\le$, $=$) up to a positive constant.\n\n\\begin{lem}\\label{L:localization}\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $q\\in \\partial \\Omega$ and $U$ a neighborhood of $q$. If $V\\subset\\subset U$ and $q\\in V$, then\n$$K_\\Omega(p,\\zeta)\\simeq K_{\\Omega\\cap U}(p,\\zeta),\\ \\ \\ \\forall\\ p\\in V,\\ \\zeta\\in \\cv^n.$$\n\\end{lem}\n\nBy the above lemma, when we consider the size of the Kobayashi indicatrix in the next section, we will work in $\\Omega\\cap U$.\n\n\\section{Estimate of the squeezing function}\\label{S:proof}\n\nWe first choose local coordinates adapted to our purpose.\n\n\\begin{lem}\\label{L:normal}\nLet $\\Omega$ be a bounded domain in $\\cv^{n+1}$, $n\\ge 1$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $2k$, $k\\ge 1$. Then there exist local coordinates $(z,t)=(z_1,\\cdots,z_n,u+iv)$ such that $q=(0,0)$ and $\\Omega$ is locally given by $\\rho(z,t)<0$ with\n\\begin{equation}\\label{E:normal}\n\\rho(z,t)=u+P(z)+Q(z)+vR(z)+u^2+v^2+o(u^2,uv,v^2,u|z|^{2k}),\n\\end{equation}\nwhere $P(z)$ is plurisubharmonic, homogeneous of degree $2k,$ but not pluriharmonic, $\\deg Q(z)\\ge 2k+1$ and $\\deg R(z)\\ge k+1$.\n\\end{lem}\n\\begin{proof}\nBy assumption, we have a local defining function of the form\n$$\\rho(z,t)=u+P(z)+Q(z)+au^2+buv+cv^2+uA(z)+vB(z)+o(|t|^2),$$\nwhere $P(z)$ is plurisubharmonic, homogeneous of degree $2k,$ but not pluriharmonic, $\\deg Q(z)\\ge 2k+1$,\nand $\\deg A(z),B(z)\\ge 1$. By changing $t$ to $t+dt^2$ and multiplying with $1+eu$ or $1+ev$, we can freely change the quadratic terms in $u,v$. 
Thus, we can assume that\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+uA(z)+vB(z)+o(|t|^2).$$\nMultiplying with $1-A(z)$, we get a new $A$, say $A'$ of degree at least $2$. Multiplying with $1-A'(z)$ we get\na new $A$ of degree at least $4$. Continuing, we can further assume that\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB(z)+o(|t|^2, u|z|^{2k}).$$\n\nWrite $B(z)=B_s(z)+B'(z)$, where $B_s(z)$ is the lowest order homogeneous term of degree $s\\ge 1$. Assume that $B_s(z)$ is pluriharmonic. Then there exists a holomorphic function $F(z)=A(z)-iB_s(z)$. Change again $\\rho$ to add the term $uA(z)$ with this new $A(z)$. Then $\\rho$ takes the form\n$$\\begin{aligned}\n\\rho(z,t)&=u+P(z)+Q(z)+u^2+v^2+uA(z)+vB_s(z)+vB'(z)+o(|t|^2,u|z|^{2k})\\\\\n&=u+P(z)+Q(z)+u^2+v^2+\\Re(tF(z))+vB'(z)+o(|t|^2,u|z|^{2k}).\n\\end{aligned}$$\nBy absorbing $\\Re(tF(z))$ into $u$, we get\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB'(z)+o(|t|^2,u|z|^{2k}).$$\nContinuing this process, we can assume that $\\rho$ takes the form\n$$\\rho(z,t)=u+P(z)+Q(z)+u^2+v^2+vB_l(z)+vB'(z)+o(|t|^2,u|z|^{2k}),$$\nwhere $B_l(z)$ is not pluriharmonic and $l\\le k$ or $l\\ge k+1$. We assume the first alternative, otherwise the proof is done. We will arrive at a contradiction to pseudoconvexity.\n\nNote that $P(z)$ is plurisubharmonic but not pluriharmonic. This implies that there exists a complex line through the origin on which the restriction of $P$ is subharmonic, but not harmonic (cf. \\cite{Forelli}). Pick a tangent vector $\\xi=(\\xi_1,\\dots \\xi_n)$ so that the Levi form of $P$ calculated at a point $\\eta\\xi$, $\\eta=|\\eta|e^{i\\theta}$, in the direction of $\\xi$ is $|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2$. Here $G$ is a smooth nonnegative function which at most vanishes at finitely many angles. 
Choose $\\lambda$ such that $\\sigma=(\\xi,\\lambda)$ is a complex tangent vector to $\\partial \\Omega$, i.e.\n$$\\sum_{j=1}^n \\frac{\\partial \\rho}{\\partial z_j}\\xi_j+ \\frac{\\partial \\rho}{\\partial t}\\lambda=0.$$\nThen we have $|\\lambda|=O(|\\eta|^{2k-1}+|v||\\eta|^{l-1}+|t|^2+|u||\\eta|^{2k-1})\\|\\xi\\|$.\n\nThe Levi form of $\\rho$ at a boundary point $(\\eta\\xi,t)$, along the tangent vector $\\sigma$, is\n$$\\begin{aligned}\n\\mathcal L(\\rho,\\sigma)=&|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2+|\\lambda|^2+\\mathcal L(vB_l(z),\\sigma)+\\cdots\\\\\n=&|\\eta|^{2k-2}G(\\theta)\\|\\xi\\|^2+|\\lambda|^2+\\Re(\\sum_{j=1}^n\\frac{\\partial B_l}{\\partial z_j}i\\xi_j\\overline{\\lambda})+v\\sum_{k,m}\\frac{\\partial^2 B_l}{\\partial z_k\\overline z_m}\\xi_k\\overline{\\xi}_m+\\cdots.\n\\end{aligned}$$\n\nSince $B_l$ is not pluriharmonic, we can assume after changing $\\xi$ slightly that $\\frac{\\partial^2 B_l}{\\partial z_k\\partial \\overline{z}_m}\\xi_k\\overline{\\xi}_m\\neq 0$. Next choose $v=\\pm C|\\eta|^k$ with $C>\\max_\\theta\\{G(\\theta)\\}$. The second term is $o(|\\eta|^{2k-2}\\|\\xi\\|^2)$ and the third term is $o(|\\eta|^{k+l-2}\\|\\xi\\|^2)$. The last term is $\\simeq |\\eta|^{k+l-2}\\|\\xi\\|^2$ and, since $l\\le k$, at least $O(|\\eta|^{2k-2}\\|\\xi\\|^2)$. Thus we have $\\mathcal L(\\rho,\\sigma)<0$. This is a contradiction.\n\\end{proof}\n\nBy Lemma \\ref{L:normal}, we can choose local coordinates $(z,w,t)=(z,w,u+iv)$ near $q$ such that $q=(0,0,0)$ and $\\Omega$ is locally given by $\\rho(z,w,t)<0$, where\n\\begin{equation}\\label{E:rho}\n\\rho(z,w,t)=u+P(z,w)+Q(z,w)+vR(z,w)+u^2+v^2+o(u^2,uv,v^2,u|(z,w)|^{2k}).\n\\end{equation}\nHere $P(z,w)$ is homogeneous of degree $2k$ with $P(z,0)=P(0,w)=0$, $\\deg Q(z,w)\\ge 2k+1$ with $\\deg Q(z,0)\\ge 4k+1$ and $\\deg Q(0,w)\\ge 4k+1$, and $\\deg R(z,w)\\ge k+1$. Set $p=(0,0,-\\delta)$ with $0<\\delta\\ll 1$.\n\n\\begin{lem}\\label{L:10}\nLet $\\zeta_1=(1,0,0)$ and $\\zeta_2=(0,1,0)$. 
Then $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{4k+1}}$.\n\\end{lem}\n\\begin{proof}\nConsider the linear map $\\phi:\\Delta\\rightarrow \\cv^3$ with $\\phi(\\tau)=(\\beta\\tau,0,-\\delta)$ for $\\tau\\in \\Delta$, and $|\\beta|=\\epsilon \\delta^{\\frac{1}{4k+1}}$ for $0<\\epsilon\\ll 1$. Then\n$$\\rho\\circ\\phi(\\tau)\\le -\\delta+C|\\beta\\tau|^{4k+1}+o(\\delta)<-\\delta+\\epsilon\\delta+o(\\delta)<0.$$\nTherefore, $K_\\Omega(p,\\zeta_1)\\lesssim \\delta^{-\\frac{1}{4k+1}}$. The argument in the direction $\\zeta_2$ is similar.\n\\end{proof}\n\nLet $(a,b,0)$ be a point so that $P(a \\tau,b \\tau)$ is a subharmonic homogeneous polynomial of degree $2k$ which is not harmonic. Then both $a$ and $b$ must be nonzero. By scaling in each variable, we can assume that $a=b=1\/{\\sqrt {2}}$. \n\n\\begin{lem}\\label{L:11}\nLet $\\zeta=\\frac{1}{\\sqrt{2}}(1,1,0).$ Then $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{4k}}$.\n\\end{lem}\n\\begin{proof}\nFor $z,w$ small, we have\n$$\\begin{aligned}\nv^2\/2+vR(z,w)&\\ge v^2\/2-C|v|\\|z,w\\|^{k+1}+C^2\\|z,w\\|^{2k+2}-C^2\\|z,w\\|^{2k+2}\\\\\n&\\ge -C^2\\|z,w\\|^{2k+2}.\n\\end{aligned}$$\nTherefore,\n$$\\begin{aligned}\n\\rho&\\ge u+P(z,w)+Q(z,w)-C^2\\|z,w\\|^{2k+2}+u^2+v^2\/2+o(u^2,uv,v^2,u|(z,w)|^{2k})\\\\\n&\\ge u+P(z,w)+\\tilde{Q}(z,w)+u^2\/2+v^2\/4+o(u|(z,w)|^{2k})=:\\tilde{\\rho}.\n\\end{aligned}$$\n\nConsider an analytic map $\\phi:\\Delta\\rightarrow \\Omega$ with\n$$\\phi(\\tau)=(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau),-\\delta+h(\\tau)),\\ \\ \\ |f(\\tau)|,|g(\\tau)|,|h(\\tau)|\\le |\\tau|^2.$$\nThen $\\tilde{\\rho}\\circ\\phi(\\tau)\\leq\\rho\\circ\\phi(\\tau)<0$. 
And we have\n$$\\begin{aligned}\n\\tilde{\\rho}(\\phi(\\tau))=&-\\delta+\\Re h(\\tau)+P(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau))+\\tilde{Q}(\\beta\\tau+f(\\tau),\\beta\\tau+g(\\tau))\\\\\n&+u^2\/2+v^2\/4+o(u|(z,w)|^{2k})<0,\n\\end{aligned}$$\nand\n\\begin{equation}\\label{E:average}\n\\frac{1}{2\\pi}\\int_0^{2\\pi} \\tilde{\\rho}(\\phi(|\\tau|e^{i\\theta}))d\\theta<0.\n\\end{equation}\n\nNote that, by the homogeneous expansion of $P(z,w)$, we have for $|\\tau|<1$ small\n\\begin{equation}\\label{E:xitau}\n|\\beta\\tau|^{2k}-\\sum_{i=0}^{2k-1} |\\beta|^i|\\tau|^{4k-i}\\lesssim \\delta.\n\\end{equation}\nChoose $|\\tau|=c|\\beta|$ for some small constant $c>0.$ Then \\eqref{E:xitau} gives\n$$\\left|\\frac{\\beta}{2}\\right|^{4k}\\lesssim \\delta.$$\nHence, $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{4k}}$. \n\\end{proof}\n\n\\begin{lem}\\label{L:squeezing}\nLet $D$ be a bounded domain in $\\cv^n$, $n\\ge 2$, containing the origin. Assume that there exist two linearly independent nonzero vectors $\\zeta_1,\\zeta_2\\in D$ and $\\epsilon>0$ such that $\\epsilon(\\zeta_1+\\zeta_2)\\not\\in D$. Then there does not exist a linear map $L:D\\rightarrow \\cv^n$, with $L(0)=0$, such that $\\ball(3\\epsilon)\\subset L(D)\\subset \\ball(1)$.\n\\end{lem}\n\\begin{proof}\nLet $L:D\\rightarrow \\cv^n$ be a linear map with $L(0)=0$ and suppose $\\ball(3\\epsilon)\\subset L(D)\\subset \\ball(1)$. Since $\\epsilon(\\zeta_1+\\zeta_2)\\not\\in D$ and $L$ is linear, we have $\\epsilon(L(\\zeta_1)+L(\\zeta_2))\\not\\in L(D)$. This implies that $\\epsilon(L(\\zeta_1)+L(\\zeta_2))\\not\\in \\ball(3\\epsilon)$ and thus $\\|L(\\zeta_1)+L(\\zeta_2)\\|\\ge 3$. However,\n$\\|L(\\zeta_1)+L(\\zeta_2)\\|\\leq \\|L(\\zeta_1)\\|+\\|L(\\zeta_2)\\|\\leq 1+1=2.$\nThis completes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{T:main}]\nChoose local coordinates $(z,w,t)$ such that $q=(0,0,0)$ and let $p=(0,0,-\\delta)$ for $\\delta>0$ small. 
Let $\\zeta_1=(1,0,0)$ and $\\zeta_2=(0,1,0)$, the two directions along which the regular order of contact at $q$ is greater than $2d=4k$. By Lemma \\ref{L:10}, $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{4k+1}}$. By Lemma \\ref{L:11}, $K_\\Omega(p,\\frac{1}{\\sqrt{2}}(\\zeta_1+\\zeta_2))\\gtrsim \\delta^{-\\frac{1}{4k}}$.\n\nChoose $\\lambda>0$ with $\\lambda\\gtrsim \\delta^{\\frac{1}{4k+1}}$ such that $\\lambda \\zeta_1,\\lambda \\zeta_2\\in D_\\Omega(p)$. Then for $\\epsilon \\simeq \\delta^{\\frac{1}{4k(4k+1)}}$, we have $\\epsilon (\\lambda \\zeta_1+ \\lambda \\zeta_2)\\not\\in D_\\Omega(p)$. Thus, by Lemma \\ref{L:squeezing}, there does not exist a linear map $L:D_\\Omega(p)\\rightarrow \\cv^3$ such that $\\ball(3\\epsilon)\\subset L(D_\\Omega(p))\\subset \\ball(1)$.\n\nLet $f$ be a biholomorphism of $\\Omega$ into $\\ball(1)$ such that $f(p)=0$ and $\\ball(c)\\subset f(\\Omega)$ for some $c>0$. Set $L=f'(p)$. Then, by Lemmas \\ref{L:ball}, \\ref{L:indicatrix} and \\ref{L:indicatrix1}, $\\ball(c)\\subset L(D_\\Omega(p))\\subset \\ball(1)$. Therefore, we have $c\\lesssim \\delta^{\\frac{1}{4k(4k+1)}}$. Since $f$ is arbitrary, we get $s_\\Omega(p) \\lesssim \\delta^{\\frac{1}{4k(4k+1)}}$. Since $\\delta$ can be arbitrarily small, this completes the proof.\n\\end{proof}\n\n\\begin{rmk}\nTheorem \\ref{T:main} does not hold if only assuming that the regular order of contact at $q$ is greater than $2d$ along one smooth complex curve. For instance, consider $\\Omega$ given by\n$$\\{(z,w,t)\\in \\cv^3:\\ |t|^2+|z|^2+|w|^6<1\\}.$$\nThen at $q=(0,0,1)$, the Bloom-Graham type is $2$ and the regular order of contact along $(0,1,0)$ is $6>4$. 
But $\\Omega$ is a bounded convex domain and thus the squeezing function has a uniform lower bound by \\cite{KZ:squeezing}.\n\\end{rmk}\n\n\\begin{rmk}\nUsing similar arguments, one can extend Theorem \\ref{T:main} to higher dimensions as follows.\n\n\\begin{thm}\nLet $\\Omega$ be a bounded domain in $\\cv^n$, $n\\ge 4$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d$. Moreover, assume that the regular order of contact at $q$ is $d$ along a two-dimensional complex surface $\\Sigma$ and the regular order of contact at $q$ is greater than $2d$ along two smooth complex curves not tangent to each other contained in $\\Sigma$. Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\\end{rmk}\n\n\\begin{rmk}\nAfter the completion of this work, it was brought to our attention by Gregor Herbort that a similar comparison result to \\cite{DF} was obtained for the following domain in \\cite{DFH:Bergman}:\n$$\\Omega:=\\{(z,w,t)\\in \\cv^3:\\ \\Re t+|z|^{12}+|w|^{12}+|z|^2|w|^4+|z|^6|w|^2<0\\}.$$\nTherefore, by our remark in the introduction, the squeezing function does not have a uniform lower bound on this domain. More generally, we have the following\n\\begin{thm}\nLet $\\Omega$ be a bounded domain in $\\cv^3$, and $q\\in \\partial\\Omega$. Assume that $\\Omega$ is smooth and pseudoconvex in a neighborhood of $q$ and the Bloom-Graham type of $\\Omega$ at $q$ is $d<\\infty$. Let $\\rho$ be a defining function of $\\Omega$ near $q$ in the normal form \\eqref{E:normal} and assume that the leading homogeneous term $P(z)$ only contains positive terms. Moreover, assume that the regular order of contact at $q$ is greater than $d$ along two smooth complex curves not tangent to each other. 
Then the squeezing function $s_\\Omega(p)$ has no uniform lower bound near $q$.\n\\end{thm}\n\\begin{proof}[Sketch of proof]\nIn Lemma \\ref{L:10}, we get $K_\\Omega(p,\\zeta_1),K_\\Omega(p,\\zeta_2)\\lesssim \\delta^{-\\frac{1}{2k+1}}$, by the same argument. In Lemma \\ref{L:11}, we get $K_\\Omega(p,\\zeta)\\gtrsim \\delta^{-\\frac{1}{2k}}$, by noticing that instead of \\eqref{E:xitau} we have $|\\xi\\tau|^{2k}\\lesssim \\delta$ since all terms of $P(z)$ are positive. Then arguing exactly as in the proof of Theorem \\ref{T:main}, we get $s_\\Omega(p) \\lesssim \\delta^{\\frac{1}{2k(2k+1)}}$.\n\\end{proof}\n\\end{rmk}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent advances in computer vision and machine learning have dramatically increased the accuracy of face recognition technologies \\cite{schroff2015facenet, masi2018deep, deng2019arcface}. Face recognition is already commonly used in commercial products such as Apple's FaceID \\cite{faceid} or at border checkpoints in airports where the portrait on a digitized biometric passport is compared with the holder's face. Most people have little to no concerns about these specific applications as they are limited in scope and have a single well defined goal. As the technologies mature however it becomes possible to deploy them on a much larger scale. The most well known example of this is the large scale use of CCTV cameras equipped with intelligent analysis software. In this way, facial recognition checkpoints are deployed at areas like gas stations, shopping centers, and mosque entrances \\cite{larson2018china, NOS}.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/overview_.png}\n \\setlength{\\abovecaptionskip}{-2pt}\n \\setlength{\\belowcaptionskip}{-2pt}\n \\caption{Overview of our proposed approach. For each subject, we automatically extract one high quality frame (or face crop) and encrypt it for storage. 
Access to the images is only possible after legal authorization in case of an investigation.}\n \\label{fig:overview}\n\\end{figure}\nFrom a domestic security point of view, these technologies are extremely useful. They can be used to find missing children, identify and track criminals or locate potential witnesses. There are many examples where CCTV footage in combination with facial recognition software has supported and accelerated criminal investigations. In 2016 for example, the ``man in the hat'' responsible for the Brussels terror attacks was identified thanks to FBI facial recognition software \\cite{maninhat}. \n\\\\\n\\newline\nThe proliferation of facial recognition technology however also raises many valid privacy concerns. A fundamental human rights principle is that surveillance should be necessary and proportionate. This principle was adopted by the UN Human Rights Council (HRC) in the ``the right to privacy in the digital age'' resolution which states that ``States should ensure that any interference with the right to privacy is consistent with the principles of legality, necessity and proportionality''\\cite{unitednations}.\n\\\\\n\\newline\nGovernments have to balance both aspects of this technology before they implement a certain solution. On the one hand, there are the obvious advantages of a large scale blanket surveillance system but this clearly violates the proportionality principle. On the other hand, a recent study showed that a majority of the public considers it to be acceptable for law enforcement to use facial recognition tools to assess security threats in public spaces as long as it is within a clearly defined regulatory framework \\cite{smith2019more}.\n\\\\\nAn interesting research direction is the development of more privacy-friendly alternatives that can still support law enforcement in criminal investigations. 
In this work we present an intelligent frame selection approach that can be used as a building block in a more privacy-friendly alternative to large scale face recognition. The problem of key frame selection is defined as selecting a single frame out of a video stream that represents the content of the scene \\cite{wolf1996key}. In the context of facial recognition, our goal is to record a single clear crop of each person visible in the stream without revealing his or her identity. In such a way, the minimization strategy of privacy preserving technologies \\cite{DomingoFerrer2020} is implemented by collecting only the minimal necessary amount of personal data.\nAccording to \\cite{duncan2007engineering}, data minimization is an unavoidable first step to engineer systems in line with the principles of privacy by design \\cite{cavoukian2009privacy}. Next, to ensure the confidentiality of the collected data, all images are encrypted (i.e. ``hide'' strategy) and access can only be provided to law enforcement after legal authorization as shown in figure \\ref{fig:overview}.\n\\\\\n\\newline\nExtracting face crops which are suitable for recognition is a difficult problem as surveillance footage is typically blurry because of the motion of the subjects. In addition, it is hard to quantify the quality of a single frame as it depends on multiple aspects such as the angle, illumination and head position. Frame selection techniques have been used before in the context of face recognition to reduce the computational cost or increase the recognition rate \\cite{wong_cvprw_2011, anantharajah2012quality, qi2018boosting, vignesh2015face, best2017automatic, hernandez2020biometric, terhorst2020ser}. In this work we introduce a novel deep learning based technique to perform intelligent frame selection. The key contribution is a new face image quality assessment (FQA) approach inspired by anomaly detection. 
Unlike previous solutions, our system is trained completely unsupervised, without the need for labeled face images and without access to a face recognition system. Our approach can be used for the same tasks as previous frame selection techniques, but in this work we propose to use it as a building block for a more privacy-friendly alternative to large scale face recognition.\n\\\\\n\\newline\nThis paper is organized as follows. An overview of the related work is presented in section \\ref{section:related}. Next, we propose a privacy preserving alternative to large scale facial recognition in section \\ref{section:privacy_preserving_appr}. Our novel FQA method is introduced in section \\ref{section:approach} and experimentally validated in sections \\ref{section:experimental_setup} and \\ref{section:results}. We conclude in section \\ref{section:conclusion} and give some pointers for future research.\n\n\\section{Related work}\n\\label{section:related}\nDeep learning has dominated the field of facial recognition in the past years. Recent techniques reach accuracies of 99.63\\% \\cite{schroff2015facenet} on the Labeled Faces in The Wild dataset \\cite{huang2008labeled}, the most commonly used benchmark dataset. These data-driven methods outperform techniques based on engineered features by a large margin \\cite{masi2018deep}. Face recognition is typically subdivided into face verification and face identification. Face verification is the task of deciding whether two pictures show the same person, whereas face identification tries to determine the identity of the person on an image.\n\\\\\n\\newline\nA large amount of research is dedicated to extracting high quality face image representations. These can then be used to calculate the similarity between two face crops or as input to a classification model. Different loss functions have been designed to obtain meaningful face image representations.
First attempts used the softmax loss \\cite{sun2014deep} while recent approaches focus on Euclidean distance-based losses \\cite{schroff2015facenet}, cosine-margin-based losses \\cite{wang2017normface} and variations of the softmax loss \\cite{deng2019arcface}. Given these high quality feature embeddings, we can directly perform face verification by calculating the distance between two images (i.e. one-to-one matching). Face identification requires a database which contains a reference image of every identity. To identify a person on an image, the probe image's embedding is compared with those of all images in the reference database (i.e. one-to-many matching).\n\\\\\n\\newline\nFace images are a popular biometric since they can be collected in unconstrained environments and without the user's active participation. These properties, in combination with the widespread deployment of surveillance cameras, give rise to severe privacy concerns. As a result, researchers explore techniques to secure the data collected by surveillance cameras and are developing privacy preserving facial recognition systems. The techniques differ in exactly what privacy-sensitive aspects they protect. Some techniques avoid deducing soft biometrics (e.g. age, gender, race) from data collected for verification or recognition purposes \\cite{mirjalili2018gender, mirjalili2020privacynet}. Other techniques focus on face image de-identification \\cite{newton2005preserving}, which eliminates the possibility for a facial recognition system to identify the subject while still preserving some facial characteristics. A very powerful technique hides the input image and the facial recognition result from the server that performs the recognition, using a cryptographically enhanced facial recognition system \\cite{erkin2009privacy}.\n\\\\\n\\newline\nA lot of effort has gone into the development of face image quality assessment (FQA) metrics.
Face image quality is defined as the suitability of an image for consistent, accurate and reliable recognition \\cite{hernandez2020biometric}. FQA methods aim to predict a value which describes the quality of a probe image. Most previous FQA techniques focus on improving the performance of a face recognition system \\cite{wong_cvprw_2011, anantharajah2012quality, qi2018boosting, vignesh2015face, best2017automatic, hernandez2020biometric, terhorst2020ser}. An FQA system is then used as a first step to make sure that we only feed high quality images to the face recognition model, therefore increasing the accuracy and reducing the computational cost. A first family of FQA techniques (Full-reference and reduced-reference FQA) assumes the presence of a high quality sample of the probe image's subject. These methods do not work for unknown subjects, which is necessary for our purpose as we will explain in the next section. A second family of FQA techniques develops hand-crafted features to assess the quality of a face image \\cite{anantharajah2012quality, REFICAO} while more recent studies apply data driven methods and report a considerable increase in performance. Different studies \\cite{qi2018boosting, vignesh2015face} propose a supervised approach where a model is trained to predict the distance between the feature embeddings of two images. Since two samples are necessary to perform a comparison, one low quality sample can affect the quality score of a high quality sample. This is commonly solved by assuming that an image compliant with the ICAO guidelines \\cite{REFICAO} represents perfect quality \\cite{hernandez2020biometric}. In the work of Hernandez-Ortega et al. a pretrained resnet-50 network \\cite{he2016deep} is modified by replacing the classification layer with two regression layers which output the quality score. Alternatively, it is also possible to use human labeled data \\cite{best2017automatic}. 
The most similar to our work is \\cite{terhorst2020ser} which also proposes an unsupervised approach. Here the quality of an image is measured as its robustness in the embedding space, which is calculated by generating embeddings using random subnetworks of a selected face recognition model.\n\\\\\n\\newline\nCompared to previous work, we introduce a novel completely unsupervised FQA method based on a variational autoencoder. We assume no access to the identities of the people and show that it works well for unseen people. Unlike \\cite{terhorst2020ser}, no facial recognition model is used to calculate a quality score. In contrast to previous work our main goal is not necessarily to improve the facial recognition accuracy but instead we use it as a building block to enable a more privacy-friendly alternative to large scale face recognition, as explained in the next section.\n\n\\section{Frame selection as an alternative to face recognition}\n\\label{section:privacy_preserving_appr}\nIn this section we introduce a framework based on \\cite{simoens2013scalable} that uses intelligent frame selection in the context of face recognition to build a more privacy-friendly alternative to large scale face recognition. Instead of proactively trying to recognize individuals, we follow the presumption of innocence principle and do not indiscriminately perform recognition of people as they go about their daily business. Instead, our system uses key frame extraction to capture a high quality snapshot of every person passing by the camera. These snapshots are encrypted and stored locally on the camera or securely transmitted to a cloud back-end for storage. In a normal situation this data is then automatically deleted after a well defined time period as defined in article 17 of the GDPR (``Right to be forgotten''). In this case, the identity of the people will never be known and no human operator will have access to the unencrypted pictures. 
The images can only be decrypted after legal authorization in case of a criminal investigation or another situation where access to the identities present in a certain location at a certain time is warranted. The high quality images can then be used as input for a face recognition system or to aid the investigation process. Since only high quality crops are stored, the storage overhead is much lower than in a system where the full video is stored. This system is summarized in Figure \\ref{fig:overview} using a video from the ChokePoint dataset \\cite{wong_cvprw_2011}.\n\\\\\n\\newline\nIt is important to note that we need to store at least one crop for each person visible in the video. It is not enough to use a fixed threshold to decide whether a frame should be stored or not, instead we have to store the best frame for each individual even if this best frame still has a relatively low quality compared to images of other individuals (for example because the person never perfectly faces the camera). As a generalization we could also decide to store a short video clip of a few seconds before and after the best frame has been captured.\n\\\\\n\\newline\nAn obvious disadvantage of our approach is that it is not possible to proactively recognize people for example to detect wanted individuals. On the other hand it does support the criminal investigation after the fact. Our system is therefore complementary and more suited to low risk areas where a full blown face recognition system would violate the proportionality principle. \n\n\n\\section{Face image quality assessment}\n\\label{section:approach}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/FQA_overview.png}\n \\caption{Overview of our face quality-aware frame selection system.}\n \\label{fig:fqa_overview}\n\\end{figure}\nThe system proposed in the previous section relies on a Face Quality Assessment block to decide which crops to store. 
Any FQA method can be used, but in this section we introduce a novel technique based on a variational autoencoder. Compared to other FQA methods, this has the benefit of being completely unsupervised. We do not assume access to a face recognition system or to the identities of the people in the dataset. Our method also generalizes well to people outside of the original training dataset.\n\\\\\n\\newline\n An overview of the proposed system is depicted in figure \\ref{fig:fqa_overview}. The first step is to detect all faces in each frame and track them across subsequent frames. We use the MTCNN model \\cite{zhang2016joint} to detect faces in a still video frame. The output is a list of bounding boxes around the detected faces. To track a subject across a video, we simply calculate the Euclidean distance between the bounding boxes of subsequent frames. Bounding boxes that are close to each other are considered to correspond to the same subject. To evaluate the quality of a face crop, we calculate the reconstruction probability of a variational autoencoder (VAE) trained on a dataset of high quality images. The VAE reconstruction probability is commonly used as an anomaly detection metric \\cite{an2015variational} to determine how different an input is from the data seen during training. By training the VAE on a publicly available dataset of high quality face images, we reformulate the FQA task as an anomaly detection task (i.e. how different is this face crop from the high quality face crops seen during training?). The next paragraph explains the VAE and the reconstruction probability in more detail.\n\\\\\n\\newline\nA variational autoencoder (VAE) \\cite{kingma2013auto} is a probabilistic variant of the standard autoencoder (AE). The encoder and decoder are modeled by probabilistic distributions rather than deterministic functions.
The encoder $f_{\\phi}(x)$ models the posterior $q_{\\phi}(z | x)$ of the latent variable $z$ and a decoder $f_{\\theta}(z)$ models the likelihood $p_{\\theta}(x | z)$ of the data $x$ given the latent variable $z$. The prior distribution of the latent variable $p_{\\theta}(z)$ is chosen as a Gaussian distribution $\\mathcal{N}(0, I)$. The posterior $q_{\\phi}(z | x)$ and likelihood $p_{\\theta}(x | z)$ are isotropic multivariate normal distributions $\\mathcal{N}(\\mu_{z}, \\sigma_{z})$ and $\\mathcal{N}(\\mu_{x}, \\sigma_{x})$ respectively. Figure \\ref{fig:vae} shows the forward pass of an image $x$ through the VAE, where the arrows represent a sampling process. To train a VAE using backpropagation, every operation should be differentiable, which is not the case for the sampling operations $z \\sim \\mathcal{N}(\\mu_{z}, \\sigma_{z})$ and $\\hat{x} \\sim \\mathcal{N}(\\mu_{x}, \\sigma_{x})$. Applying the re-parameterization trick fixes this problem. A dedicated random variable $\\epsilon \\sim \\mathcal{N}(0, I)$ is sampled such that the sampling operations can be rewritten as $z = \\mu_{z} + \\epsilon \\cdot \\sigma_z$ and $\\hat{x} = \\mu_{x} + \\epsilon \\cdot \\sigma_x$. The VAE training objective is the expected log likelihood minus the KL divergence between the posterior and the prior, as described in equation \\ref{eq:train_obj}. \n\\begin{equation}\n \\mathcal{L}(x) = E_{q_{\\phi}(z|x)}[\\log p_{\\theta}(x|z)] - KL(q_{\\phi}(z|x)\\,\\|\\,p_{\\theta}(z))\n \\label{eq:train_obj}\n\\end{equation}\nThe first term is the reconstruction term and forces a good reconstruction $\\hat{x}$ of the input data $x$. The KL term regularizes the distribution of the latent space by forcing it to be Gaussian. By training a generative model, like a VAE, the model learns to approximate the training data distribution.
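As a concrete sketch, the re-parameterized sampling step and the objective of equation \ref{eq:train_obj} can be written out numerically as follows. This is a NumPy illustration with per-pixel Gaussian log-likelihood terms and additive constants dropped, not the actual training code (which requires an autodiff framework); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """z = mu + eps * sigma with eps ~ N(0, I): the randomness is moved into
    eps so the sampling step becomes differentiable w.r.t. mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + eps * np.exp(0.5 * logvar)

def elbo(x, x_mu, x_logvar, z_mu, z_logvar):
    """Training objective: expected log likelihood of x under the Gaussian
    decoder minus KL(q(z|x) || N(0, I)), with additive constants dropped."""
    rec = -0.5 * np.sum(x_logvar + (x - x_mu) ** 2 / np.exp(x_logvar))
    kl = -0.5 * np.sum(1.0 + z_logvar - z_mu ** 2 - np.exp(z_logvar))
    return rec - kl
```

The closed-form KL term holds because both the posterior and the prior are Gaussian; training maximizes this quantity (equivalently, minimizes its negative).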
When a large reconstruction error is observed, this is an indication that the data was not sampled from the data distribution the VAE was trained on.\n\\\\\n\\newline\nThe reconstruction probability is a generalization of the reconstruction error that takes the variability of the latent space and the reconstruction into account \\cite{an2015variational}. First, an image $x$ is fed to the encoder which generates the mean vector $\\mu_z$ and standard deviation vector $\\sigma_z$. Next, $L$ samples $\\{z^1, z^2,\\dots,z^L\\}$ are drawn from the latent distribution $\\mathcal{N}(\\mu_z,\\sigma_z)$. Each sample $z^l$ is fed into the decoder to get the distribution of the reconstruction of $x$, described by the mean $\\mu^{l}_{\\hat{x}}$ and the standard deviation $\\sigma^{l}_{\\hat{x}}$. The reconstruction probability is the probability of $x$ averaged over the $L$ samples, as described by equation \\ref{eq:r_prop}.\n\\begin{equation}\n \\mathrm{RP}(x) = \\frac{1}{L} \\sum^{L}_{l=1} \\mathcal{N}(x | \\mu^{l}_{\\hat{x}}, \\sigma^{l}_{\\hat{x}})\n \\label{eq:r_prop}\n\\end{equation}\n\\begin{figure}\n \\centering\n \\includegraphics[width=225pt]{figures\/VAE.png}\n \\caption{Variational autoencoder with encoder $f_{\\phi}(x)$ and decoder $f_{\\theta}(z)$; each arrow represents a sampling process.}\n \\label{fig:vae}\n\\end{figure}\n\\newline\nThe reconstruction probability was originally developed as an anomaly score by \\cite{an2015variational}. When a VAE is trained solely on samples of ``normal'' data, the latent distribution learns to represent these samples in a low dimensional space. Accordingly, samples from ``normal'' data result in a high reconstruction probability while anomalies result in low reconstruction probabilities. We define the biometric quality of a face image as the reconstruction probability calculated by a VAE trained on high quality face images. Correspondingly, a high reconstruction probability is expected for high quality face images.
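A sketch of equation \ref{eq:r_prop} with stand-in encoder and decoder callables (illustrative names; in practice, for $64\times64$ images one would work with log densities throughout to avoid numerical underflow):

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_probability(x, encoder, decoder, L=10):
    """Average density of x under the L decoder distributions obtained by
    sampling the latent code: encoder(x) -> (mu_z, sigma_z) and
    decoder(z) -> (mu_x, sigma_x), all as flat NumPy arrays."""
    mu_z, sigma_z = encoder(x)
    total = 0.0
    for _ in range(L):
        z = mu_z + sigma_z * rng.standard_normal(mu_z.shape)   # sample z^l
        mu_x, sigma_x = decoder(z)
        # log density of an isotropic Gaussian, summed over pixels
        logp = np.sum(-0.5 * np.log(2.0 * np.pi * sigma_x ** 2)
                      - (x - mu_x) ** 2 / (2.0 * sigma_x ** 2))
        total += np.exp(logp)
    return total / L
```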
Note that there is no explicit definition of face image quality and the quality score is independent of any face recognition model. The definition of a high quality face image is derived directly from the training data. \n\\\\\n\\newline\nThe encoder $f_{\\phi}(x)$ consists of 5 consecutive blocks of a convolution layer, batch normalization and a leaky ReLU activation function, followed by two fully connected layers. The outputs of the encoder are the parameters defining $q_{\\phi}(z|x)$. The decoder $f_{\\theta}(z)$ consists of 5 blocks of a transposed convolution layer, batch normalization and a leaky ReLU activation function, again followed by two fully connected layers. The outputs of the decoder are the parameters defining $p_{\\theta}(x|z)$. To calculate the reconstruction probability, $L$ is set to 10. The CelebA dataset \\cite{liu2015faceattributes}, consisting of 202,599 face images, serves as training data. The Adam optimization algorithm \\cite{kingma2014adam} was applied with a learning rate of 0.005 and a batch size of 144. The VAE was trained for 10 epochs. \nEach image is cropped by MTCNN \\cite{zhang2016joint}, resized to 64x64 pixels and converted to grayscale.\n\n\\section{Experimental setup}\n\\label{section:experimental_setup}\nIn this section, we isolate the FQA block for evaluation. According to the National Institute of Standards and Technology (NIST), the default way to quantitatively evaluate an FQA system is to analyze the error vs. reject curve (ERC) \\cite{grother2007performance, grother2020ongoing}. As defined in section \\ref{section:related}, FQA indicates the suitability of an image for recognition. The ERC measures to what extent the rejection of low quality samples increases the verification performance as measured by the false non-match rate (FNMR). The FNMR is the rate at which a biometric matcher miscategorizes two signals from the same individual as being from different individuals \\cite{Schuckers2010}.
Face verification consists of calculating a comparison score of two images and comparing this score with some threshold. The comparison score is defined as the Euclidean distance between the FaceNet \\cite{schroff2015facenet} embeddings of the two images. To avoid a low quality sample affecting the verification performance of a high quality sample, one high quality reference image for every subject is required. For this, an ICAO compliant image is typically used \\cite{hernandez2020biometric}. We used the BioLab framework \\cite{ferrara2012face} to calculate an average ICAO compliance score for all images. For every subject, the image with the highest ICAO score is selected as a reference image. Note that it is not possible to use the BioLab framework as a face quality assessment method directly because it cannot assess all images accurately and it is unable to operate in real-time. Now, assume a set of $N$ genuine image pairs $(x_i^{(1)}, x_i^{(2)})$ (i.e. two images of the same person). Every image pair $(x_i^{(1)}, x_i^{(2)})$ yields a distance $d_i$ (i.e. comparison score). To determine whether two images match, the distance between the two images is compared with a threshold $d_t$. Using a quality predictor function $F$ (i.e. an FQA method), a quality value $q_i$ is calculated for each image pair.
Since $x_i^{(1)}$ is always a high quality image from the reference database, the quality $q_i$ of image pair $(x_i^{(1)}, x_i^{(2)})$ can be written as:\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=225pt]{figures\/example.png}\n \\caption{Sample images from the VggFace2 dataset ranked by different quality metrics.}\n \\label{fig:example_images}\n\\end{figure}\n\\begin{equation}\n q_i = q_i^{(2)} = F(x_i^{(2)}).\n\\end{equation}\nNow consider $R$ as the set of low quality entities, composed of the samples whose predicted quality value lies below some threshold:\n\\begin{equation}\n R(r) = \\{i : q_i < F^{-1}(r), \\forall i < N \\}.\n\\end{equation}\n$F^{-1}$ is the inverse of the empirical cumulative distribution function of the predicted quality scores. The parameter $r$ is the fraction of images to discard, such that $F^{-1}(r)$ equals the quality threshold that corresponds with rejecting a fraction $r$ of all images. Then the FNMR can be written as:\n\\begin{equation}\n \\mathrm{FNMR} = \\frac{|d_i : d_i \\geq d_t, i \\notin R(r)|}{|d_i : d_i \\geq -\\infty, i \\notin R(r)|}\n\\end{equation}\nThe value of $r$ is varied to calculate the FNMR for different fractions of rejected images. The value of $d_t$ is fixed and computed using the inverse of the empirical cumulative distribution function $M^{-1}$ of the distances between reference and probe images:\n\\begin{equation}\n d_t = M^{-1}(1 - f).\n\\end{equation}\nPractically, $f$ is chosen to give some reasonable FNMR. As suggested by \\cite{frontex}, a maximum FNMR of 0.05 is maintained.\n\\section{Results}\n\\label{section:results}\n\\subsection{VggFace2}\nThe VggFace2 dataset \\cite{Cao18} is designed for training face recognition models and consists of more than 3.3 million face images of more than 9000 identities.
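The ERC computation used in our evaluation can be sketched as follows (illustrative names: `distances` holds the genuine-pair comparison scores $d_i$ and `qualities` the predicted values $q_i$):

```python
import numpy as np

def fnmr_curve(distances, qualities, d_t, reject_fractions):
    """Error-vs-reject curve: the FNMR over genuine pairs after discarding
    the fraction r of pairs with the lowest predicted quality."""
    distances = np.asarray(distances)
    order = np.argsort(qualities)              # ascending quality, worst first
    n = len(distances)
    curve = []
    for r in reject_fractions:
        kept = order[int(np.floor(r * n)):]    # reject the lowest-quality fraction r
        curve.append(float(np.mean(distances[kept] >= d_t)))
    return curve
```

A good FQA method drives the curve toward zero as quickly as possible as $r$ increases.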
In the following experiments, our approach (VAE) is compared to FaceQNet (FQN) \\cite{hernandez2020biometric}, a method based on the stochastic embedding robustness (SER) \\cite{terhorst2020ser} and a general image quality assessment (IQA) system \\cite{talebi2018nima}. FaceQNet is a fine-tuned resnet-50 network \\cite{he2016deep} which is trained on large amounts of face images for a face recognition task. The classification layer is then replaced by two layers designed for quality regression. The ground truth quality score is defined as the euclidean distance between the feature embeddings of the probe image and an ICAO compliant image of the same subject. The face image quality assessment method based on stochastic embedding robustness (SER) calculates a quality score by measuring the variations of the embeddings generated by random subnetworks of a resnet-101 \\cite{he2016deep} model trained for facial recognition. The general image quality assessment (IQA) system \\cite{talebi2018nima} does not take face features into account and predicts the general image quality. For all conducted experiments, images were cropped by the MTCNN face detector \\cite{zhang2016joint}.\n\\\\\n\\newline\nFigure \\ref{fig:example_images} shows five images from the VggFace2 dataset \\cite{Cao18} ranked by different quality metrics. This allows a first qualitative evaluation of the five metrics. As presented on the figure, all metrics agree on assigning the highest quality to the first image. All FQA metrics assign a low quality value to the third image because the person looks down, the general IQA method does not take this aspect into account and assigns a higher quality value. For the same reason, the IQA method assigns a low quality value to the fifth image in contrast to all FQA algorithms. On the fourth image, our method agrees with the IQA method and selects it as worst quality image as opposed to the other FQA metrics. 
This suggests that our method attaches more importance to blurriness than the other FQA algorithms. It is remarkable that the second image is ranked second worst by FQN since the person looks straight into the camera.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=225pt]{figures\/FNMR_01.png}\n \\caption{ERC with an initial FNMR of 0.01.}\n \\label{fig:vggface2_fnmr_01}\n\\end{figure}\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=225pt]{figures\/FNMR_001.png}\n \\caption{ERC with an initial FNMR of 0.001.}\n \\label{fig:vggface2_fnmr_001}\n\\end{figure}\n\\\\\n\\newline\nIn a second experiment, we evaluate our proposed FQA algorithm on still face images by analyzing the ERC plots as explained in section \\ref{section:experimental_setup}. For 103,008 images of 300 subjects from the VggFace2 dataset \\cite{Cao18}, a high quality reference image is selected using the ICAO compliance scores. The ERC plots in figures \\ref{fig:vggface2_fnmr_01} and \\ref{fig:vggface2_fnmr_001} display the FNMR for different fractions of rejected images. The red line with the label ``PERFECT'' represents a quality measure which correlates perfectly with the distance scores. When an initial FNMR of 0.01 is set, in an ideal scenario, the FNMR will be zero after rejecting 1\\% of all images. The closer an ERC is to the red line, the better the performance of the used FQA algorithm. For an initial FNMR of 0.01 our approach clearly outperforms FaceQNet, SER and the general image quality assessment program. We hypothesize that SER would perform better if the same type of embeddings were used for verification as for quality estimation. In the conducted experiments SER uses ArcFace \\cite{deng2018arcface} embeddings to estimate face image quality while the FNMR is calculated using FaceNet embeddings. For an initial FNMR of 0.001, the difference with the other approaches is smaller.
It is important to note that our model is considerably smaller than FaceQNet and the resnet-101 model \\cite{he2016deep} used by SER.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=225pt]{figures\/cp_map_0.jpg}\n \\caption{Consecutive face crops from one tracked identity in the ChokePoint dataset. The crop with the green border corresponds with the highest quality calculated by our FQA algorithm.}\n \\label{fig:cp_map_0}\n\\end{figure}\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=225pt]{figures\/cp_plot_0.png}\n \\caption{Logarithm of the reconstruction probability (i.e. face quality) for consecutive face crops.}\n \\label{fig:cp_plot_0}\n\\end{figure}\nFaceQNet comprises 7 times more parameters than our VAE and resnet-101 even 14 times more. Additionally, our method is trained completely unsupervised without the need for ground truth quality values while FaceQNet relies on distance scores as ground truth quality values. The ground truth generation process used by FaceQNet also indicates the dependency on one or more face recognition models. This dependency is even more prominent for SER since a specific face recognition network is used for generating the quality predictions.\n\\subsection{ChokePoint}\nIn a third experiment, we focus on the ChokePoint dataset \\cite{wong_cvprw_2011}. This dataset is designed for conducting experiments on face identification and verification under real-world surveillance conditions. The dataset consists of 48 videos or 64,204 face images. Both crowded and single-identity videos are included. We now evaluate our system qualitatively on selecting one high quality face crop of each identity in a video stream. Figure \\ref{fig:cp_map_0} shows consecutive face crops of an example video. The crop outlined in green is the frame that corresponds with the highest quality value calculated by our FQA algorithm. 
Figure \\ref{fig:cp_plot_0} shows how the quality score changes over time as the subject moves through the scene. We define the quality score as the logarithm of the reconstruction probability. We can see that initially the quality score decreases as the person is moving through a darker area looking down. The shadows and the angle of the face make these crops less useful for face recognition. As the person moves closer to the camera, the brightness increases and the subject becomes clearly visible. This is also reflected in an increasing quality score. The highest score is obtained when the person is close to the camera and is looking in the general direction of the camera. As the person turns away, the score decreases again. This qualitative example shows that our model is indeed able to assign understandable and meaningful scores to each frame. We made videos of this and other examples publicly available \\footnote{\\url{https:\/\/drive.google.com\/drive\/folders\/1GRlFRSxHRfBnfTpI5DG2v2rN3nTCg5Y0?usp=sharing}}. \n\\subsection{Bicycle parking}\nFinally, we also validated our approach on video data from security cameras in a bicycle parking lot, to investigate how well the model generalizes to data collected in the real world. Figure \\ref{fig:bicycle_park} shows three example frames with their corresponding face crops and quality scores. Even though these images are very different from the images the VAE was originally trained on, we can see that the model generalizes well and is able to assign useful scores to each crop.\n\\begin{figure}\n \\centering\n \\includegraphics[width=225pt]{figures\/bicycle_park.png}\n \\caption{Three frames from the footage of a security camera in a bicycle parking lot.
The corresponding face crops and quality scores are depicted below each frame.}\n \\label{fig:bicycle_park}\n\\end{figure}\n\n\\section{Conclusion and future work}\n\\label{section:conclusion}\nIn this study, a novel face image quality assessment method is proposed based on the reconstruction probability of a variational autoencoder. To our knowledge, this is the first time a generative model such as a VAE is used to tackle the problem of face image quality assessment. We demonstrate, through quantitative and qualitative results, that our method can be used as a biometric quality predictor. Unlike other data driven approaches, no facial recognition model is used for training and no explicit definition of face quality is given. Our FQA algorithm is used as a building block in a privacy-friendly alternative to large scale facial recognition. Instead of identifying all detected faces in a video stream, our system saves one high quality face crop without revealing the person's identity. This face crop is encrypted and access is only granted after legal authorization. In this way, the system still supports criminal investigations while not violating the proportionality principle.\n\\\\\n\\newline\nIn future work, we will further optimize the VAE architecture, keeping the constraints on model size and computational complexity in mind, as the final goal is to deploy the model on a stand-alone edge device. It would be interesting to investigate hardware platforms such as FPGAs that allow the model to process data in real-time with low energy consumption, making it possible to embed our system in low cost surveillance cameras. Moreover, our method should be evaluated on other datasets and in combination with alternative feature extractors.\n\n\\section{ Acknowledgments}\nWe thank Klaas Bombeke (MICT UGent) for providing us with the dataset used in the experiment of Figure~9, and Stijn Rammeloo from Barco for the feedback on the initial manuscript.
This research was funded by the imec.ICON SenseCity project and the Flemish Government (AI Research Program).\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}