diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhqbi" "b/data_all_eng_slimpj/shuffled/split2/finalzzhqbi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhqbi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn recent years, there has been a lot of interest in the study of\nout-of-equilibrium systems in quantum field theory, in view of\napplications to high energy heavy ion collisions, cosmology, or cold\natom\nphysics~\\cite{BergeBG1,BergeBS1,BergeBSV1,BergeBSV2,BergeBSV3,BergeGSS1,BergeS4,BergeSS3,FukusG1,Fukus3,KunihMOST1,IidaKMOS1,DusliEGV1,EpelbG1,DusliEGV2,EpelbG2,EpelbG3,KurkeM1,KurkeM2,KurkeM3,YorkKLM1,AttemRS1,BlaizGLMV1,BlaizLM2,HelleJW1,CasalHMS1,Wu1,ReinoS1,GautiS1,GirauS1,FloerW1,GasenP1,GasenKP1}. Generically,\nthe question one would like to address is that of a system prepared in\nsome non-equilibrium initial state, and let to evolve under the sole\nself-interactions of its constituents. The physically relevant\nquantum field theories cannot be solved exactly, and therefore some\napproximation scheme is mandatory in order to make progress. Moreover,\nthe standard perturbative expansion in powers of the interaction\nstrength is in general ill-suited to these out-of-equilibrium\nproblems. Indeed, the coefficients in the perturbative expansion are\ntime dependent and generically growing with time, thereby voiding the\nvalidity of the expansion after some finite time.\n\nThis ``secularity'' problem is resolved by the resummation of an\ninfinite set of perturbative contributions, which can be achieved via\nseveral schemes. The simplest of these schemes is kinetic\ntheory. However, in order to obtain a Boltzmann equation from the\nunderlying quantum field theory, several important assumptions are\nnecessary~\\cite{ArnolMY5}: (i) a relatively smooth system, so that a gradient\nexpansion can be performed, and (ii) the existence of well-defined\nquasiparticles. These limitations, especially the latter, make kinetic\ntheory difficult to justify for describing the early stages of heavy\nion collisions.\n\nCloser to the underlying quantum field theory, two resummation schemes\nhave been widely considered in many works. One of them is the\n2-particle irreducible (2PI)\napproximation~\\cite{LuttiW1,BaymK1,DominM1,DominM2,CornwJT1,Berge3,BergeBRS1,ReinoS2,BransG1,AartsLT1,ReinoS1,HattaN2,HattaN1}. This\nscheme consists in solving the Dyson-Schwinger equations for the\n2-point functions (and possibly for the expectation value of the\nfield, if it differs from zero). The self-energy diagrams that are\nresummed on the propagator are obtained self-consistently from the sum\nof 2PI skeleton vacuum diagrams (often denoted $\\Gamma_2[G]$ in the\nliterature). The only approximation arises from the practical\nnecessity of truncating the functional $\\Gamma_2[G]$ in order to have\nmanageable expressions. In applications, the 2PI scheme suffers from\ntwo limitations. One of them is purely computational: the convolution\nof the self-energy with the propagator takes the form of a memory\nintegral, that in principle requires that one stores the entire\nhistory of the evolution of the system, from the initial time to the\ncurrent time. The needed storage therefore grows quadratically with\ntime\\footnote{This can be alleviated somewhat by an extra\n approximation, in which one stores only the ``recent'' history of\n the system, in a sliding time window that moves with the current\n time.}. 
The second difficulty appears in systems subject to\nlarge fields or large occupation numbers. For instance, in QCD,\n``large'' would mean of order $g^{-1}$ for the fields, and of order\n$g^{-2}$ for the gluon occupation number. In this regime, the\nfunctional $\\Gamma_2[G]$ contains terms that have the same order of\nmagnitude at every order in the loop expansion, and therefore one\ncannot justify truncating it at a finite loop order\\footnote{Such a\n truncation becomes legitimate, even in the large field regime, if\n there is an additional expansion parameter that one can use to\n control the loop expansion in $\\Gamma_2[G]$. In some theories with a\n large number $N$ of constituents (e.g. an $O(N)$ scalar theory in\n the limit $N\\to\\infty$), one can compute exactly the leading term of\n $\\Gamma_2[G]$ in the $1\/N$ expansion~\\cite{Berge3}.}. It turns out\nthat such strong field regimes do arise in real-world situations,\ne.g. in the early stages of heavy ion\ncollisions~\\cite{IancuLM3,IancuV1,LappiM1,GelisIJV1,Gelis15}.\n\nThere is an alternative resummation scheme, which includes all the\nleading contributions in the large field regime and is similarly free\nof secular terms, called the Classical Statistical Approximation\n(CSA)~\\cite{PolarS1,Son1,KhlebT1,GelisLV2,FukusGM1,Jeon4}. It owes its\nname to the way it is implemented in practice, as an average over\nclassical solutions of the field equations of motion, with a Gaussian\nstatistical ensemble of initial conditions. The ability of this method\nto remain valid in the large field regime comes with a tradeoff~: the\nCSA can be tuned to be exact at the 1-loop level, but starting at the\n2-loop order and beyond, it includes only a subset of all the possible\ncontributions. The CSA can be derived via several methods~: from the\npath integral representation of observables~\\cite{Jeon4}, as an\napproximation at the level of the diagrammatic rules in the\nretarded-advanced formalism, or as an exponentiation of the 1-loop\nresult~\\cite{GelisLV2}.\n\nThe diagrammatic rules that define the classical statistical\napproximation allow graphs that have arbitrarily many loops. As in\nany field theory, the loops that arise in this expansion involve an\nintegral over a 4-momentum, and this integral can be ultraviolet\ndivergent. In the underlying --non-approximated-- field theory, we\nknow how to deal with these infinities by redefining a finite number\nof parameters of the Lagrangian (namely the coupling constant, the\nmass and the field normalization). In general, this is done by first\nintroducing an ultraviolet regulator, for instance a momentum cutoff\n$\\Lambda_{_{\\rm UV}}$ on the loop momenta, and by letting the bare\nparameters of the Lagrangian depend on $\\Lambda_{_{\\rm UV}}$ in such a way\nthat physical quantities are independent of $\\Lambda_{_{\\rm UV}}$ (and of\ncourse are finite in the limit $\\Lambda_{_{\\rm UV}}\\to \\infty$). That this\nredefinition is possible is what characterizes a renormalizable field\ntheory. \n\nIn contrast, non-renormalizable theories are theories in which one\nneeds to introduce new operators that did not exist in the Lagrangian\none started from, in order to subtract all the ultraviolet divergences\nthat arise in the loop expansion. This procedure defines an\n``ultraviolet completion'' of the original theory, which is well\ndefined at arbitrary energy scales. 
The predictive power of the\noriginal theory is limited by the order at which it becomes necessary\nto introduce these new operators\\footnote{The predictive power of its\n ultraviolet completion may be quite limited as well, depending on\n how many new operators need to be introduced at each order\n (especially if this number grows very quickly or even worse becomes\n infinite).}.\n\n\nIt can also happen that, starting from a renormalizable field theory,\ncertain approximations of this theory (for instance including certain\nloop corrections, but not all of them) are not renormalizable. This\nwill be our main concern in this paper, in the context of the\nclassical statistical approximation. A recent numerical study\n\\cite{BergeBSV3} showed a pronounced cutoff dependence for rather\nlarge couplings in a computation performed in this approximation\nscheme. This could either mean that the CSA is not renormalizable, or\nthat the CSA is renormalizable but that renormalization was not\nperformed properly in this computation. It is therefore of utmost\nimportance to determine to which class --re\\-nor\\-ma\\-li\\-za\\-ble or\nnon-renormalizable-- the CSA belongs, since this has far reaching\npractical implications on how it can be used in order to make\npredictive calculations, and how to interpret the existing\ncomputations.\n\nNote that the question of the renormalizability of classical\napproximation schemes has already been discussed in quantum field\ntheory at finite temperature\n\\cite{BodekMS1,ArnolSY1,AartsS1,AartsS3,AartsNW1}, following attempts\nto calculate non-perturbatively the sphaleron transition rate\n\\cite{GrigoR1,GrigoRS1,GrigoRS2,AmbjoLS1,AmbjoLS2,AmbjoAPS2,AmbjoAPS1,AmbjoF1}.\nIn this context, one is calculating the leading high temperature\ncontribution, and in the classical approximation the Bose-Einstein\ndistribution gets replaced by $T\/\\omega_{\\boldsymbol k}$. This approximation leads\nto ultraviolet divergences in thermal contributions, that would\notherwise be finite thanks to the exponential tail of the\nBose-Einstein distribution. However, it has been shown that only a\nfinite number of graphs have such divergences, and that they can all\nbe removed by appropriate counterterms. The problem we will consider\nin this paper is different since we are interested in the classical\napproximation of a zero-temperature quantum field theory, where the\nfactors $T\/\\omega_{\\boldsymbol k}$ are replaced by $1\/2$. This changes drastically\nthe ultraviolet behavior.\n\n\nIn the section \\ref{sec:prelim}, we expose the scalar toy model we are\ngoing to use throughout the paper as a support of this discussion, we\nalso remind the reader of the closed time path formalism and of the\nretarded-advanced formalism (obtained from the latter via a simple\nfield redefinition), and we present the classical statistical\napproximation in two different ways (one that highlights its\ndiagrammatic rules, and one that is more closely related to the way it\nis implemented in numerical simulations). Then, we analyze in the\nsection \\ref{sec:UVcount} the ultraviolet power counting in the CSA,\nand show that it is identical to that in the underlying field theory.\nIn the section \\ref{sec:nonren}, we examine all the one-loop 2-point\nand 4-point functions in the CSA, and we show that one of them\nviolates Weinberg's theorem. This leads to contributions that are\nnon-renormalizable in the CSA. 
In the section \\ref{sec:int}, we\ndiscuss the implications of non-renormalizability of the CSA for the\ncalculation of some observables. We also argue that it may be possible\nto systematically subtract the leading non-renormalizable terms by the\naddition of a complex noise term to the classical equations of motion.\nFinally, the section \\ref{sec:concl} is devoted to concluding remarks.\nSome technical derivations are relegated into two appendices.\n\n\n\n\n\\section{Preliminaries}\n\\label{sec:prelim}\n\\subsection{Toy model}\n\\label{sec:model}\nIn order to illustrate our point, let us consider a massless real\nscalar field $\\phi$ in four space-time dimensions, with quartic\nself-coupling, and coupled to an external source $j(x)$,\n\\begin{equation}\n{\\cal L}\\equiv \\frac{1}{2}(\\partial_\\mu\\phi)(\\partial^\\mu\\phi)-\\frac{m^2}{2}\\phi^2\n-\\frac{g^2}{4!}\\phi^4+j\\phi\\; .\n\\end{equation}\nIn this model, $j(x)$ is a real valued function, given once for all as\na part of the description of the model. Sufficient regularity and\ncompactness of this function will be assumed as necessary.\n\nWe also assume that the state of the system at $x^0=-\\infty$ is the\nvacuum state (by adiabatically turning off the couplings at asymptotic\ntimes, we can assume that this is the perturbative vacuum state\n$\\big|0_{\\rm in}\\big>$). Because of the coupling to the external\nsource $j(x)$, the system is driven away from the vacuum state, and\nobservables measured at later times acquire non-trivial values. Our\ngoal is to compute the expectation value of such observables,\nexpressed in terms of the field operator and its derivatives, in the\ncourse of the evolution of the system,\n\\begin{equation}\n\\left<{\\cal O}\\right>\\equiv \\big<0_{\\rm in}\\big|{\\cal O}[\\phi,\\partial\\phi]\\big|0_{\\rm in}\\big>\\; .\n\\label{eq:obs0}\n\\end{equation}\nFor simplicity, one may assume that the observable is a local\n(i.e. depends on the field operator at a single space-time point) or\nmulti-local operator (i.e. depends on the field operator at a finite\nset of space-time points).\n\n\n\\subsection{Closed time path formalism}\n\\label{sec:CTP}\nIt is well known that the proper framework to compute expectation\nvalues such as the one defined in eq.~(\\ref{eq:obs0}) is the\nSchwinger-Keldysh (or ``closed time path'')\nformalism~\\cite{Schwi1,Keldy1}. In this formalism, there are two\ncopies $\\phi_+$ and $\\phi_-$ of the field (corresponding respectively\nto fields in amplitudes and fields in complex conjugated amplitudes),\nand four bare propagators depending on which type of fields they\nconnect. The expectation value of eq.~(\\ref{eq:obs0}) can be expanded\ndiagrammatically (each loop brings an extra power of the coupling\n$g^2$) by a set of rules that generalize the traditional Feynman rules\nin a simple manner~:\n\\begin{itemize}\n\\item[{\\bf i.}] Each vertex of a graph can be of type $+$ or $-$, and\n for a given graph topology one must sum over all the possible\n assignments of the types of these vertices. The rule for the $+$\n vertex ($-ig^2$) and for the $-$ vertex ($+ig^2$) differ only in\n their sign. The same rule applies to the external source $j$.\n\\item[{\\bf ii.}] A vertex of type $\\epsilon$ and a vertex of type\n $\\epsilon'$ must be connected by a bare propagator\n $G^0_{\\epsilon\\epsilon'}$. 
In momentum space, these bare propagators\n read~:\n \\begin{align}\n &G^0_{++}(p)=\\frac{i}{p^2-m^2+i\\epsilon}\\;,&&G^0_{--}(p)=\\frac{-i}{p^2-m^2-i\\epsilon}\\nonumber\\\\\n &G^0_{+-}(p)=2\\pi\\theta(-p^0)\\delta(p^2-m^2)\\;,&&G^0_{-+}(p)=2\\pi\\theta(p^0)\\delta(p^2-m^2)\n \\label{eq:SKprops}\n \\end{align}\n\\end{itemize}\nThe four bare propagators of the Schwinger-Keldysh formalism are\nrelated by a simple algebraic identity,\n\\begin{equation}\n G^0_{++}+G^0_{--}=G^0_{+-}+G^0_{-+}\\; ,\n \\label{eq:SKident}\n\\end{equation}\nthat one can check immediately from eqs.~(\\ref{eq:SKprops}). Note\nthat, on a more fundamental level, this identity follows from the\ndefinition of the various $G_{\\epsilon\\epsilon'}$ as vacuum\nexpectation values of pairs of fields ordered in various ways. For\nthis reason, it is true not only for the bare propagators, but for\ntheir corrections at any order in $g^2$.\n\n\n\\subsection{Retarded-advanced formalism}\n\\label{sec:RA}\nThe Schwinger-Keldysh formalism is not the only one that can be used\nto calculate eq.~(\\ref{eq:obs0}). One can arrange the four bare\npropagators $G^0_{\\epsilon\\epsilon'}$ in a $2\\times 2$ matrix, and\nobtain equivalent diagrammatic rules by applying a ``rotation'' to\nthis matrix~\\cite{AurenB1,EijckKW1,Gelis3,AartsS3}. Among this family of\ntransformations, especially interesting are those that exploit the\nlinear relationship (\\ref{eq:SKident}) among the\n$G^0_{\\epsilon\\epsilon'}$ in order to obtain a vanishing entry in the\nrotated matrix. The retarded-advanced formalism belongs to this class\nof transformations, and its propagators are defined by (let us denote\n$\\alpha=1,2$ the two values taken by the new index)~:\n\\begin{eqnarray}\n{\\mathbbm G}_{\\alpha\\beta}^0\\equiv \n\\sum_{\\epsilon,\\epsilon^\\prime=\\pm}\n\\Omega_{\\alpha\\epsilon}\\Omega_{\\beta\\epsilon^\\prime}\n{G}_{\\epsilon\\epsilon^\\prime}^0\\; ,\n\\label{eq:SK-rotation}\n\\end{eqnarray}\nwith the transformation matrix defined as \n\\begin{equation}\n\\Omega_{\\alpha\\epsilon}\n\\equiv\n\\begin{pmatrix}\n1 & -1 \\\\\n1\/2 & 1\/2 \\\\\n\\end{pmatrix}\\; .\n\\end{equation}\nThe bare rotated propagators read\n\\begin{equation}\n{\\mathbbm G}_{\\alpha\\beta}^0\n=\n\\begin{pmatrix}\n0 & G_{_A}^0\\\\\nG_{_R}^0& G_{_S}^0\\\\\n\\end{pmatrix}\\; ,\n\\end{equation}\nwhere we have introduced\n\\index{Retarded propagator}\n\\begin{equation}\nG_{_R}^0 = G_{++}^0-G_{+-}^0\\;,\\;\nG_{_A}^0 = G_{++}^0-G_{-+}^0\\;,\\;\nG_{_S}^0 = \\frac{1}{2}(G_{++}^0+G_{--}^0)\\; .\n\\end{equation}\n(The subscripts R, A and S stand respectively for {\\sl retarded}, {\\sl\n advanced} and {\\sl symmetric}.)\n\nIt is straightforward to verify that in the rotated formalism, the\nvarious vertices read~:\n\\begin{equation}\n\\Gamma_{\\alpha\\beta\\gamma\\delta}\\equiv -ig^2\\left[\n \\Omega^{-1}_{+\\alpha}\\Omega^{-1}_{+\\beta}\\Omega^{-1}_{+\\gamma}\\Omega^{-1}_{+\\delta}\n -\n \\Omega^{-1}_{-\\alpha}\\Omega^{-1}_{-\\beta}\\Omega^{-1}_{-\\gamma}\\Omega^{-1}_{-\\delta}\n\\right]\\; ,\n\\end{equation}\nwhere\n\\begin{equation}\n\\Omega^{-1}_{\\epsilon\\alpha}=\n\\begin{pmatrix}\n1\/2 & 1 \\\\\n-1\/2 & 1 \\\\\n\\end{pmatrix}\\qquad\\qquad[\\Omega_{\\alpha\\epsilon}\\Omega_{\\epsilon\\beta}^{-1}=\\delta_{\\alpha\\beta}]\\; .\n\\end{equation}\nMore explicitly, we have~:\n\\begin{eqnarray}\n&&\\Gamma_{1111}=\\Gamma_{1122}=\\Gamma_{2222}=0\\nonumber\\\\\n&&\\Gamma_{1222}=-ig^2\\; ,\\quad \\Gamma_{1112}=-ig^2\/4\\; .\n\\end{eqnarray}\n(The vertices not listed explicitly here are obtained by 
trivial\npermutations.) Concerning the insertions of the external source, the\ndiagrammatic rules in the retarded-advanced formalism are~:\n\\begin{equation}\nJ_1=ij\\;,\\quad J_2 = 0\\; .\n\\end{equation}\n\n\\subsection{From the external source to an external classical field}\n\\label{sec:extfield}\nFrom the above rules, we see that an external source can only be\nattached to a propagator endpoint of type 1, i.e. to the lowest time\nendpoint of a retarded or advanced propagator ($G_{12}^0=G_{_A}^0$,\n$G_{21}^0=G_{_R}^0$), as in the formula\n\\begin{equation}\n\\int {\\rm d}^4y\\; G_{21}^0(x,y)\\,J_1(y)\\; .\n\\end{equation}\n(This expression corresponds to the first graph on the left of the\nfigure \\ref{fig:phi4-tree}.) It is easy to see that the external\nsource can be summed to all orders, if one introduces the object\n$\\varphi(x)$ defined diagrammatically in the figure \\ref{fig:phi4-tree}.\n\\begin{figure}[htbp]\n\\begin{center}\n\\setbox1\\hbox to 9cm{\\hfil\\resizebox*{9cm}{!}{\\includegraphics{phi4-tree}}}\n$\\varphi\\equiv\\raise -4.3mm\\box1$\n\\end{center}\n\\caption{\\label{fig:phi4-tree}The first three terms of the\n diagrammatic expansion of the external field in terms of the\n external source, for a field theory with quartic coupling. The red\n dots represent the source insertions $J_1$, and the lines with an\n arrow are bare retarded propagators $G_{21}^0$. The quartic vertices\n are all of the type $\\Gamma_{1222}$.}\n\\end{figure}\nIt is well known that this series obeys the classical equation of motion,\n\\begin{equation}\n(\\square+m^2)\\varphi + \\frac{g^2}{6}\\varphi^3=j\\; ,\n\\end{equation}\nand since all the propagators are all of type $G_{21}^0$, i.e. retarded,\nit obeys the following boundary condition~:\n\\begin{equation}\n\\lim_{x^0\\to-\\infty}\\varphi(x)=0\\; .\n\\end{equation}\nThe source $j$ can be completely eliminated from the diagrammatic\nrules, by adding to the Lagrangian couplings between the field\noperator $\\phi$ and the classical field $\\varphi$,\n\\begin{equation}\n\\Delta{\\cal L}\\equiv \ng^2\\left[\n\\frac{1}{2}\\;\\varphi^2\\;\\phi_1\\;\\phi_2\n+\n\\frac{1}{2}\\;\\varphi\\; \\phi_1\\;\\phi_2^2\n+\n\\frac{1}{4!}\\;\\varphi\\; \\phi_1^3\n\\right]\n\\; .\n\\label{eq:dL}\n\\end{equation}\nNote that, since in all the graphs in the figure \\ref{fig:phi4-tree}\nthe root of the tree is terminated by an index $2$, the classical\nfield $\\varphi$ can only be attached to an index of type $2$ in these vertices.\n\n\n\n\n\\subsection{Classical statistical approximation (CSA)}\n\\label{sec:CSAdef}\n\\subsubsection{Definition from truncated retarded-advanced rules}\n\\label{sec:CSAdef1}\nThe classical statistical approximation consists in dropping all the\ngraphs that contain the vertex $\\Gamma_{2111}$, i.e. in assuming~:\n\\begin{equation}\n\\Gamma_{2111}=0\\qquad (\\mbox{and similarly for the permutations of $2111$})\\; .\n\\end{equation}\nIn the rest of this paper, we will simply call CSA the field theory\nobtained by dropping all the vertices that have 3 indices of type 1 in\nthe retarded-advanced formalism, while everything else remains\nunchanged. 
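As a quick cross-check of the rotated vertices given at the end of the section \\ref{sec:RA}, and of which components are discarded here, the short script below (ours, purely illustrative; the numerical value of $g^2$ is arbitrary) evaluates $\\Gamma_{\\alpha\\beta\\gamma\\delta}$ directly from its definition in terms of $\\Omega^{-1}$~:
\\begin{verbatim}
import itertools
import numpy as np

g2 = 1.0   # the value of g^2 is irrelevant for this check

# Omega^{-1}: rows are epsilon = +,-  and columns are alpha = 1,2
Winv = np.array([[ 0.5, 1.0],
                 [-0.5, 1.0]])

def Gamma(*alphas):
    # Gamma_{abcd} = -i g^2 [ prod_i Winv[+,a_i] - prod_i Winv[-,a_i] ]
    plus  = np.prod([Winv[0, a] for a in alphas])
    minus = np.prod([Winv[1, a] for a in alphas])
    return -1j * g2 * (plus - minus)

for idx in itertools.combinations_with_replacement((0, 1), 4):
    label = "".join(str(a + 1) for a in idx)
    note = "   <- dropped in the CSA" if idx.count(0) == 3 else ""
    print("Gamma_" + label, "=", Gamma(*idx), note)
\\end{verbatim}
It confirms that the only non-vanishing components are $\\Gamma_{1222}$ (and its permutations, equal to $-ig^2$) and $\\Gamma_{1112}$ (and its permutations, equal to $-ig^2\/4$), the latter being precisely the ones set to zero in the classical statistical approximation.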
Therefore, in the retarded-advanced formalism, the CSA is\ndefined by the following diagrammatic rules~:\n\\begin{itemize}\n \\item[{\\bf i.}]{\\bf bare propagators~:}\n \\begin{eqnarray}\n &&\n G_{21}^0(p) = \\frac{i}{(p^0+i\\epsilon)^2-{\\boldsymbol p}^2-m^2}\\;,\\quad\n G_{12}^0(p) = \\frac{i}{(p^0-i\\epsilon)^2-{\\boldsymbol p}^2-m^2}\\;,\\nonumber\\\\\n &&\n G_{22}^0(p) = \\pi\\,\\delta(p^2-m^2)\\;.\n \\label{eq:RA:prop}\n \\end{eqnarray}\n \\item[{\\bf ii.}]{\\bf vertices~:}\n \\begin{equation}\n \\Gamma_{1222}\\mbox{\\ (and permutations)}=-ig^2\\quad,\\quad\\mbox{all other combinations zero}\\; .\n \\label{eq:RA:vtx}\n \\end{equation}\n \\item[{\\bf iii.}]{\\bf external sources~:}\n \\begin{equation}\n J_1=ij\\;,\\quad J_2 = 0\\; .\n \\label{eq:RA:J}\n \\end{equation}\n \\item[{\\bf iii${}^\\prime$.}]{\\bf external field~(see the section \\ref{sec:extfield})~:}\n \\begin{equation}\n \\Phi_1=0\\;,\\quad \\Phi_2 = \\varphi\\; .\n \\label{eq:RA:phi}\n \\end{equation}\n\\end{itemize}\nNote that this ``truncated'' field theory is still quite non trivial,\nin the sense that the above diagrammatic rules allow graphs with\narbitrarily many loops. The numerical simulations that implement the\nCSA provide the sum to all orders of the graphs that can be\nconstructed with these rules (with an accuracy in principle only\nlimited by the statistical errors in the Gaussian average over the\ninitial field fluctuations, since this average is approximated by a\nMonte-Carlo sampling).\n\n\n\\subsubsection{Definition by exponentiation of the 1-loop result}\n\\label{sec:CSAdef2}\nThe previous definition of the CSA makes it very clear what graphs are\nincluded in this approximation and what graphs are not. However, it is\na bit remote from the actual numerical implementation. Let us also\npresent here an alternative --but strictly equivalent-- way of\nintroducing the classical statistical approximation, that directly\nprovides a formulation that can be implemented numerically.\n\nFirstly, observables at leading order are expressible in terms of the\nretarded classical field $\\varphi$ introduced above,\n\\begin{equation}\n\\big<0_{\\rm in}\\big|{\\cal O}[\\phi,\\partial\\phi]\\big|0_{\\rm in}\\big>_{\\rm LO}\n=\n{\\cal O}[\\varphi,\\partial\\varphi]\\; .\n\\label{eq:LO}\n\\end{equation}\nAt next-to-leading order, it has been shown in\n\\cite{GelisLV2,GelisLV3,GelisLV4} that the observable can be expressed\nas follows,\n\\begin{eqnarray}\n&&\\big<0_{\\rm in}\\big|{\\cal O}[\\phi,\\partial\\phi]\\big|0_{\\rm in}\\big>_{\\rm NLO}\n=\n\\nonumber\\\\\n&&\\qquad\\quad=\n\\left[\\frac{1}{2}\\int\\frac{{\\rm d}^3{\\boldsymbol k}}{(2\\pi)^3 2E_{\\boldsymbol k}}\n\\int {\\rm d}^3{\\boldsymbol u}\\, {\\rm d}^3{\\boldsymbol v}\\,\n({\\boldsymbol\\alpha}_{\\boldsymbol k}\\cdot{\\mathbbm T}_{\\boldsymbol u})\n({\\boldsymbol\\alpha}_{\\boldsymbol k}^*\\cdot{\\mathbbm T}_{\\boldsymbol v})\n\\right]\\,\n{\\cal O}[\\varphi,\\partial\\varphi]\\, .\n\\label{eq:NLO}\n\\end{eqnarray}\nIn eq.~(\\ref{eq:NLO}), the operator ${\\mathbbm T}_{\\boldsymbol u}$ is the generator\nof shifts of the initial condition for the classical field $\\varphi$\non some constant time surface (the integration surface for the\nvariables ${\\boldsymbol u}$ and ${\\boldsymbol v}$) located somewhere before the source $j$ is\nturned on\\footnote{This is why eq.~(\\ref{eq:NLO}) does not have a term\n linear in ${\\mathbbm T}_{\\boldsymbol u}$, contrary to the slightly more general\n formulas derived in refs.~\\cite{GelisLV2,GelisLV3,GelisLV4}.}. 
This\nmeans that if we denote $\\varphi[\\varphi_{\\rm init}]$ the classical\nfield as a functional of its initial condition, then for any\nfunctional $F[\\varphi]$ of $\\varphi$, we have\n\\begin{equation}\n\\left[\\exp\\int {\\rm d}^3{\\boldsymbol u}\\;(a\\cdot{\\mathbbm T}_{\\boldsymbol u})\\right]\\;F[\\varphi[\\varphi_{\\rm init}]]=F[\\varphi[\\varphi_{\\rm init}+a]]\\; .\n\\end{equation}\n(This equation can be taken as the definition of ${\\mathbbm T}_{\\boldsymbol u}$.)\nIn eq.~(\\ref{eq:NLO}), the fields ${\\boldsymbol\\alpha}_{\\boldsymbol k}$ are free plane waves of momentum $k^\\mu$~:\n\\begin{equation}\n{\\boldsymbol\\alpha}_{\\boldsymbol k}(u)\\equiv e^{ik\\cdot u}\\;,\\qquad (\\square+m^2){\\boldsymbol\\alpha_k}(u) =0\n\\; .\n\\end{equation}\nNote that in eq.~(\\ref{eq:NLO}), the integration variable ${\\boldsymbol k}$ is a\nloop momentum. In general, the integral over ${\\boldsymbol k}$ therefore diverges\nin the ultraviolet, and must be regularized by a cutoff. After the\nLagrangian parameters have been renormalized at 1-loop, this cutoff\ncan be safely sent to infinity.\n\n\nIn this framework, the classical statistical method is defined as the\nresult of the exponentiation of the operator that appears in the right\nhand side of eq.~(\\ref{eq:NLO}),\n\\begin{eqnarray}\n&&\n\\big<0_{\\rm in}\\big|{\\cal O}[\\phi,\\partial\\phi]\\big|0_{\\rm in}\\big>_{\\rm CSA}\n=\\nonumber\\\\\n&&\\qquad=\n\\exp\\left[\\frac{1}{2}\\int\\!\\!\\frac{{\\rm d}^3{\\boldsymbol k}}{(2\\pi)^3 2E_{\\boldsymbol k}}\n\\int\\!\\! {\\rm d}^3{\\boldsymbol u}\\, {\\rm d}^3{\\boldsymbol v}\\,\n({\\boldsymbol\\alpha}_{\\boldsymbol k}\\cdot{\\mathbbm T}_{\\boldsymbol u})\n({\\boldsymbol\\alpha}_{\\boldsymbol k}^*\\cdot{\\mathbbm T}_{\\boldsymbol v})\n\\right]\n{\\cal O}[\\varphi,\\partial\\varphi]\\, .\n\\label{eq:CSA}\n\\end{eqnarray}\nNote that by construction, the CSA is identical to the underlying\ntheory at LO and NLO, and starts differing from it at\nNNLO and beyond (some higher loop graphs are included but not all\nof them). 
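To make eq.~(\\ref{eq:CSA}) more concrete before turning to its practical implementation, the following minimal sketch (ours; a zero-dimensional reduction to a single mode of frequency $E$, with purely illustrative parameter values) anticipates the Gaussian-average formulation recalled below: classical trajectories are evolved from Gaussian-distributed initial conditions whose widths encode the vacuum $1/2$ per mode, and the observable is averaged over this ensemble~:
\\begin{verbatim}
import numpy as np

# Illustrative zero-dimensional analogue of the classical statistical
# method (all names and parameter values are ours): classical
# trajectories of a single anharmonic oscillator,
#   x'' + E^2 x + (g2/6) x^3 = 0,
# averaged over Gaussian initial conditions with <x^2> = 1/(2E) and
# <p^2> = E/2  ("1/2 per mode").
rng = np.random.default_rng(0)
E, g2 = 1.0, 0.5
dt, nsteps, nconf = 0.01, 2000, 20000

x = rng.normal(scale=np.sqrt(0.5 / E), size=nconf)
p = rng.normal(scale=np.sqrt(0.5 * E), size=nconf)

def force(x):
    return -(E ** 2) * x - g2 * x ** 3 / 6.0

var_x = np.empty(nsteps)
for i in range(nsteps):            # leapfrog integration of the classical EOM
    p += 0.5 * dt * force(x)
    x += dt * p
    p += 0.5 * dt * force(x)
    var_x[i] = np.mean(x ** 2)     # ensemble average of the observable x^2(t)

print("initial <x^2> =", 0.5 / E, "   late-time <x^2> =", var_x[-1])
\\end{verbatim}
In the field theory, the same logic is applied mode by mode on a spatial lattice, with the Gaussian ensemble of initial fluctuations specified below.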
The relation between this formula and the way the classical\nstatistical method is implemented lies in the fact that the\nexponential operator is equivalent to an average over a\nGaussian distribution of initial conditions for the classical field\n$\\varphi$,\n\\begin{eqnarray}\n&&\\exp\\left[\\frac{1}{2}\\int\\frac{{\\rm d}^3{\\boldsymbol k}}{(2\\pi)^3 2E_{\\boldsymbol k}}\n\\int {\\rm d}^3{\\boldsymbol u}\\, {\\rm d}^3{\\boldsymbol v}\\,\n({\\boldsymbol\\alpha}_{\\boldsymbol k}\\cdot{\\mathbbm T}_{\\boldsymbol u})\n({\\boldsymbol\\alpha}_{\\boldsymbol k}^*\\cdot{\\mathbbm T}_{\\boldsymbol v})\n\\right]\\;\nF[\\varphi[\\varphi_{\\rm init}]]\n\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\qquad\n=\n\\int[{\\rm D}a({\\boldsymbol u}){\\rm D}\\dot{a}({\\boldsymbol u})]\n\\;G[a,\\dot{a}]\\;F[\\varphi[\\varphi_{\\rm init}+a]]\\; ,\n\\label{eq:CSA1}\n\\end{eqnarray}\nwhere $G[a,\\dot{a}]$ is a Gaussian distribution, whose elements can be\ngenerated as\n\\begin{equation}\na(u)=\\int\\frac{{\\rm d}^3{\\boldsymbol k}}{(2\\pi)^3 2E_{\\boldsymbol k}}\\;\\left[\nc_{\\boldsymbol k}\\,{\\boldsymbol\\alpha}_{\\boldsymbol k}(u)+c_{\\boldsymbol k}^*\\,{\\boldsymbol\\alpha}_{\\boldsymbol k}^*(u)\n\\right]\\; ,\n\\label{eq:fluct}\n\\end{equation}\nwith $c_{\\boldsymbol k}$ complex Gaussian random numbers defined by\n\\begin{equation}\n\\big<c_{\\boldsymbol k}\\,c_{\\boldsymbol l}\\big>=0\\quad,\\quad \\big<c_{\\boldsymbol k}^*\\,c_{\\boldsymbol l}^*\\big>=0\\quad,\\quad \n\\big<c_{\\boldsymbol k}\\,c_{\\boldsymbol l}^*\\big>=(2\\pi)^3 E_{\\boldsymbol k}\\delta({\\boldsymbol k}-{\\boldsymbol l})\\; .\n\\end{equation}\nWe will not show here the equivalence of the two ways of defining the\nclassical statistical approximation that we have presented in this\nsection. The main reason for recalling the second definition of the\nCSA was to emphasize the meaning of the variable ${\\boldsymbol k}$ in\neqs.~(\\ref{eq:NLO}), (\\ref{eq:CSA}) and (\\ref{eq:fluct}), as a loop\nmomentum. Therefore, an upper limit introduced in the ${\\boldsymbol k}$-integration\nin any of these formulas will effectively play the role of an\nultraviolet cutoff that regularizes loop integrals.\n\nTo make the connection with the diagrammatic rules of the classical\nstatistical approximation introduced in the previous subsection, the\ncutoff on the momentum of the initial fluctuations is an upper limit\nfor the momentum flowing through the $G_{22}^0$ propagators. In\ncontrast, the largest momentum that can flow through the $G_{21}^0$\nand $G_{12}^0$ propagators is only controlled by the discretization of\nspace, i.e. by the inverse lattice cutoff. In some implementations,\nthese two cutoffs are identical, but other implementations have chosen\nto have distinct cutoffs for these two purposes\\footnote{The downside\n of having two separate cutoffs is that this form of regularization\n violates~\\cite{EpelbGTW1} the Kubo-Martin-Schwinger\n identities~\\cite{Kubo1,MartiS1}, which leads to a non-zero\n scattering rate even in the vacuum.}~:\n\\begin{itemize}\n\\item In Refs.~\\cite{BergeBS1,BergeBSV1,BergeBSV2} an explicit cutoff\n $\\Lambda_{_{\\rm UV}}$, distinct from the lattice cutoff, is\n introduced in order to limit the largest ${\\boldsymbol k}$ of the initial\n fluctuations. In this setup, $\\Lambda_{_{\\rm UV}}$ is smaller than\n the lattice momentum cutoff, and the lattice spacing no longer\n controls the ultraviolet limit of the computation.\n\\item In Refs.~\\cite{DusliEGV1,EpelbG1,DusliEGV2,EpelbG3,BergeBSV3}\n fluctuation modes are included up to the lattice momentum cutoff,\n i.e. 
$\\Lambda_{_{\\rm UV}}$ is inversely proportional to the lattice\n spacing.\n\\end{itemize}\nA common caveat of most of these computations is that none has studied\nthe behavior of the results in the limit $\\Lambda_{_{\\rm UV}}\\to\n\\infty$, at the exception of ref.~\\cite{BergeBSV3} where a strong\ndependence on the ultraviolet cutoff was found.\n\n\n\n\n\\section{Ultraviolet power counting}\n\\label{sec:UVcount}\nAfter having defined the classical statistical approximation, we can\nfirst calculate the superficial degree of ultraviolet divergence for\narbitrary graphs in the CSA, in order to see what kind of divergences\none may expect. This is best done by using the definition introduced\nin the section \\ref{sec:CSAdef1}, that defines the CSA by its\ndiagrammatic rules.\n\nLet us consider a generic {\\sl connected} graph ${\\cal G}$ built with\nthese diagrammatic rules, made of~:\n\\begin{itemize}\n \\item $E$ external legs\n \\item $I$ internal lines\n \\item $L$ independent loops\n \\item $V$ vertices of type $\\phi^4$\n \\item $V_2$ vertices of type $\\phi^2\\varphi^2$\n \\item $V_1$ vertices of type $\\phi^3\\varphi$\n\\end{itemize}\nNote that for the internal lines, the superficial degree of divergence\ndoes not distinguish\\footnote{As we shall see later, due to the\n peculiar analytic structure of the integrands of graphs in the CSA,\n this power counting is too naive to accurately reflect the actual\n ultraviolet divergences.} between the propagators $G_{12}^0$,\n$G_{21}^0$ and $G_{22}^0$, because they all have a mass dimension\n$-2$. These numbers are related by the following relations~:\n\\begin{eqnarray}\n E+2I &=& 4V+3V_1+2V_2\\;,\\\\\n L&=& I - (V+V_1+V_2) +1\\; .\n\\end{eqnarray}\nThe first of these identities states that the number of propagator\nendpoints must be equal to the number of slots where they can be\nattached to vertices. The second identity counts the number of\nindependent momenta that can circulate in the loops of the graph.\n\nIn terms of these quantities, the superficial degree of divergence of\nthe graph ${\\cal G}$ is given by\n\\begin{eqnarray}\n\\omega({\\cal G})&=& 4L-2I\\nonumber\\\\\n&=& 4-E-(V_1+2V_2)\\nonumber\\\\\n&=& 4-E-N_\\varphi\\; ,\n\\label{eq:UVcount}\n\\end{eqnarray}\nwhere $N_\\varphi\\equiv V_1+2V_2$ is the number of powers of the\nexternal classical field $\\varphi$ inserted into the graph ${\\cal G}$.\n\nNote that $4-E$ is the superficial degree of divergence of a graph\nwith $E$ external points in a 4-dimensional scalar $\\phi^4$ theory, in\nthe absence of an external field\/source. Therefore, the external field\ncan only decrease the superficial degree of divergence (since\n$N_\\varphi\\ge 0$), which was expected since the couplings to the\nexternal field (see eq.~(\\ref{eq:dL})) have a positive mass dimension,\ni.e. they are super-renormalizable interactions.\n\nThe crucial point about this formula is that the superficial degree of\ndivergence does not depend on the fact that we have excluded the\nvertices of type $2111$ in the classical statistical approximation. In\nother words, the ultraviolet power counting is exactly the same in the\nfull theory and in the CSA. Eq.~(\\ref{eq:UVcount}) suggests that the\nonly ultraviolet divergent quantities are those for which $E\\le 4$,\nexactly as in the unapproximated theory.\n\nAs we shall see in the next section, there is nevertheless an issue\nthat hinders the renormalizability of the classical statistical\napproximation. 
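As a simple illustration of this counting (a small script of ours, not part of the original derivation), one can verify eq.~(\\ref{eq:UVcount}) on a few familiar topologies by computing $I$, $L$ and $\\omega({\\cal G})$ directly from the two identities above~:
\\begin{verbatim}
def omega(E, V, V1=0, V2=0):
    """Superficial degree of divergence from the topological identities
    E + 2I = 4V + 3V1 + 2V2   and   L = I - (V + V1 + V2) + 1."""
    I = (4 * V + 3 * V1 + 2 * V2 - E) // 2   # internal lines (integer for a valid graph)
    L = I - (V + V1 + V2) + 1                # independent loops
    w = 4 * L - 2 * I
    assert w == 4 - E - (V1 + 2 * V2)        # eq.(UVcount)
    return w

print(omega(E=2, V=1))        # one-loop tadpole self-energy            -> 2 (quadratic)
print(omega(E=4, V=2))        # one-loop 4-point function               -> 0 (logarithmic)
print(omega(E=2, V=1, V2=1))  # tadpole with one phi^2 varphi^2 insertion -> 0
\\end{verbatim}
This, however, only constrains the superficial degree of divergence; as announced above, the actual ultraviolet behavior of the CSA turns out to be worse.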
Discarding the $\\Gamma_{1112}$ vertex alters in subtle ways\nthe analytic structure of Green's functions, which leads to a\nviolation of Weinberg's theorem\\footnote{In addition to power counting\n arguments, renormalizability requires some handle on the recursive\n structure of the ultraviolet divergences. This may come in the form\n of Dyson's convergence theorem \\cite{Dyson2}, whose proof was\n completed by Weinberg \\cite{Weinb4} and somewhat simplified by Hahn\n and Zimmermann \\cite{HahnZ1,Zimme1} (see also\n Refs.~\\cite{CasweK1,CasweK2}). This result states that if all the\n divergences in the subgraphs of a given graph ${\\cal G}$ have been\n subtracted, then the remaining divergence is a polynomial of degree\n $\\omega({\\cal G})$ in the external momenta.}. As a consequence,\nultraviolet divergences in the CSA can be stronger than one would\nexpect on the basis of the power counting alone.\n\n\n\n\n\n\\section{Ultraviolet divergences in the CSA}\n\\label{sec:nonren}\n\\subsection{Introduction}\nThe un-truncated $\\phi^4$ theory that we started from is well known to\nbe renormalizable\\footnote{The fact that we are dealing here with a\n field theory coupled to an external source does not spoil this\n property, for sufficiently smooth external sources. See\n Ref.~\\cite{Colli1}, chapter 11.}. This means that all its\nultraviolet divergences can be disposed of by redefining the\ncoefficients in front of the operators that appear in the bare\nLagrangian.\n\nIn the retarded-advanced basis, the Lagrangian of the CSA differs from\nthe Lagrangian of the unapproximated theory in that the vertex\n$\\Gamma_{1112}$ is missing. All the other terms of the Lagrangian are\nunchanged, in particular the operators that are quadratic in the\nfields. In this section we systematically examine 2- and 4-point\nfunctions at one loop, in order to see whether their ultraviolet\nbehavior is compatible with renormalizability or not.\n\n\\subsection{Self-energies at one loop}\nLet us start with the simplest possible loop correction: the one-loop\nself-energy, made of a tadpole graph. Depending on the indices $1$ and\n$2$ assigned to the two external legs, these self-energies are given\nin eq.~(\\ref{eq:S-1loop}).\n\\setbox2\\hbox to 1.2cm{\\resizebox*{1.2cm}{!}{\\includegraphics{S22-1loop}}}\n\\setbox3\\hbox to 1.2cm{\\resizebox*{1.2cm}{!}{\\includegraphics{S12-1loop}}}\n\\begin{eqnarray}\n-i\\big[\\Sigma_{11}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}&=&0\n\\quad,\\qquad\n-i\\big[\\Sigma_{22}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}=\\raise -2.5mm\\box2=0\n\\nonumber\\\\\n-i\\big[\\Sigma_{12}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}&=&\\raise -2.5mm\\box3=-i\\frac{g^2\\Lambda_{_{\\rm UV}}^2}{16\\pi^2}\\; .\n\\label{eq:S-1loop}\n\\end{eqnarray}\n$\\Sigma_{11}$ is zero at one loop in the CSA, because it requires a\nvertex $1112$ that has been discarded. $\\Sigma_{22}$ is also zero,\nbecause it contains a closed loop made of a retarded propagator. The\nonly non-zero self-energy at 1-loop is $\\Sigma_{12}$, which displays\nthe usual quadratic divergence. 
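The coefficient quoted in eq.~(\\ref{eq:S-1loop}) can be recovered numerically: with $G_{22}^0(k)=\\pi\\,\\delta(k^2-m^2)$ and $m=0$, the tadpole reduces to $\\frac{g^2}{2}\\int^{\\Lambda_{_{\\rm UV}}}\\frac{{\\rm d}^3{\\boldsymbol k}}{(2\\pi)^3}\\,\\frac{1}{2|{\\boldsymbol k}|}=\\frac{g^2\\Lambda_{_{\\rm UV}}^2}{16\\pi^2}$, assuming the usual symmetry factor $1/2$. A short numerical check (ours, with arbitrary units for the cutoff)~:
\\begin{verbatim}
import numpy as np
from scipy import integrate

g2, Lambda_UV = 1.0, 50.0

# (g2/2) * integral over |k| < Lambda_UV of d^3k/(2 pi)^3 * 1/(2|k|)
integrand = lambda k: 4.0 * np.pi * k ** 2 / ((2.0 * np.pi) ** 3 * 2.0 * k)
numeric, _ = integrate.quad(integrand, 0.0, Lambda_UV)
numeric *= g2 / 2.0

analytic = g2 * Lambda_UV ** 2 / (16.0 * np.pi ** 2)
print(numeric, analytic)   # the two numbers agree: quadratic growth with the cutoff
\\end{verbatim}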
This can be removed by a mass\ncounterterm in the Lagrangian,\n\\begin{equation}\n\\delta m^2 = -\\frac{g^2\\Lambda_{_{\\rm UV}}^2}{16\\pi^2}\\; ,\n\\end{equation}\nsince the mass term in the Lagrangian is precisely a $\\phi_1\\phi_2$ operator.\n\n\n\n\\subsection{Four point functions at one loop}\n\\subsubsection{Vanishing functions~: $\\Gamma_{1112}$, $\\Gamma_{1111}$ and $\\Gamma_{2222}$}\nThe 4-point function with indices $1112$ is a prime suspect for\nGreen's functions that may cause problems with the renormalizability\nof the CSA. Indeed, the CSA consists in discarding the operator\ncorresponding to this vertex from the Lagrangian. Therefore, if an\nintrinsic\\footnote{Here, we are talking about the overall divergence\n of the function, not the divergences associated to its various\n subgraphs, that may be subtracted by having renormalized the other\n operators of the Lagrangian.} ultraviolet divergent contribution to\nthis Green's function can be generated in the classical statistical\napproximation, then the CSA is not renormalizable.\n\nLet us first consider this 4-point function at 1-loop. At this order,\nthe only possible contribution (up to trivial permutations of the\nexternal legs) to the $\\Gamma_{1112}$ function is\n\\setbox1\\hbox to 3cm{\\resizebox*{3cm}{!}{\\includegraphics{G1112-1loop}}\\hfil}\n\\begin{equation}\n-i\\Gamma_{1112}^{\\rm 1\\ loop}=\\raise -4mm\\box1\\; ,\n\\end{equation}\nwhere the indices 1 and 2 indicate the various vertex\nassignments. (The $-i$ prefactor is a convention, so that the function\n$\\Gamma$ can be viewed directly as a correction to the coupling\nconstant $g^2$.) Because it must contain a vertex of type $1112$, this\nfunction is zero in the classical statistical\napproximation\\footnote{For the calculation of the full $\\Gamma_{1112}$\n at one loop, beyond the classical statistical approximation, see the\n appendix \\ref{app:1222-1112}.},\n\\begin{equation}\n-i\\big[\\Gamma_{1112}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}=0\\; .\n\\end{equation}\nTherefore, this 4-point function does not cause any renormalization\nproblem in the CSA at 1-loop. Similarly, the function $\\Gamma_{1111}$\nat one loop also requires the vertex $1112$, and is therefore zero in the\nclassical statistical approximation\\footnote{At one loop, the\n functions $\\Gamma_{1111}$ and $\\Gamma_{2222}$ are also zero in the\n full theory.},\n\\begin{equation}\n-i\\big[\\Gamma_{1111}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}=0\\; .\n\\end{equation}\nFor the function $\\Gamma_{2222}$ at one loop, the only possibility is\nthe following, \\setbox1\\hbox to\n3cm{\\resizebox*{3cm}{!}{\\includegraphics{G2222-1loop}}\\hfil}\n\\begin{equation}\n-i\\big[\\Gamma_{2222}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}=\\raise -5mm\\box1=0\\; ,\n\\label{eq:2222-1loop}\n\\end{equation}\nwhere we have represented with arrows the $12$ propagators, since they\nare retarded propagators. 
This graph is zero because it is made of a\nsequence of retarded propagators forming a closed loop.\n\n\n\\subsubsection{Logarithmic divergence in $\\Gamma_{1222}$}\nAt one loop, the function $\\Gamma_{1222}$ is given by the graph of\neq.~(\\ref{eq:1222-1loop}) (and several other permutations of the\nindices).\n\\setbox1\\hbox to 3cm{\\resizebox*{3cm}{!}{\\includegraphics{G1222-1loop}}\\hfil}\n\\begin{equation}\n-i\\big[\\Gamma_{1222}\\big]_{_{\\rm CSA}}^{\\rm 1\\ loop}=\\raise -4mm\\box1\n\\sim g^4 \\log(\\Lambda_{_{\\rm UV}})\\; .\n\\label{eq:1222-1loop}\n\\end{equation}\nIt is a straightforward calculation to check that this graph has a\nlogarithmic ultraviolet divergence, which can be removed by the\nstandard 1-loop renormalization of the coupling constant (this is\npossible, since the interaction term $\\phi_1\\phi_2^3$ has been kept in\nthe Lagrangian when doing the classical statistical approximation).\nThe calculation of this 4-point function is detailed in the\nappendix \\ref{app:1222-1112}.\n\n\n\n\\subsubsection{Violation of Weinberg's theorem in $\\Gamma_{1122}$}\nAnother interesting object to study is the 4-point function with\nindices $1122$. There is no such bare vertex in the Lagrangian (both\nfor the unapproximated theory and for the classical statistical\napproximation). Since the full theory is renormalizable, this\nfunction should not have ultraviolet divergences at 1-loop, because such\ndivergences would not be renormalizable. However, since the CSA\ndiscards certain terms, it is not obvious a priori that this conclusion\nstill holds. For the sake of definiteness, let us denote\n$p_1,\\cdots,p_4$ the external momenta of this function (defined to be\nincoming into the graph, therefore $p_1+p_2+p_3+p_4=0$), and let us\nassume that the two indices $1$ are attached to the legs $p_1,p_2$ and\nthe two indices $2$ are attached to the legs $p_3,p_4$. 
At one loop,\nthis 4-point function (in the full field theory) receives the\nfollowing contributions~:\n\\setbox1\\hbox to 2.8cm{\\resizebox*{2.8cm}{!}{\\includegraphics{G1122-S}}}\n\\setbox2\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T1}}}\n\\setbox3\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T2}}}\n\\setbox4\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T3}}}\n\\setbox5\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U1}}}\n\\setbox6\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U2}}}\n\\setbox7\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U3}}}\n\\begin{eqnarray}\n-i\\Gamma_{1122}^{\\rm 1\\ loop}\n&=\n\\underbrace{\\raise -5mm\\box1}_{\\mbox{S channel}}\n&+\n\\underbrace{\n\\raise -11mm\\box2\n+\n\\raise -11mm\\box3\n+\n\\raise -11mm\\box4}_{\\mbox{T channel}}\n\\nonumber\\\\\n&&\n+\n\\underbrace{\n\\raise -11mm\\box5\n+\n\\raise -11mm\\box6\n+\n\\raise -11mm\\box7}_{\\mbox{U channel}}\\; .\n\\label{eq:1122-1loop}\n\\end{eqnarray}\nSince the full theory is renormalizable, the sum of all these graphs\nshould be ultraviolet finite, because there is no $1122$ 4-field\noperator in the bare Lagrangian.\n\nIt is however not obvious that the subset of these graphs that exist\nin the classical statistical approximation is itself ultraviolet finite.\nAmong the T-channel and U-channel graphs, only the first of the three\ngraphs exist in the CSA, since all the other graphs contain the $1112$\nbare vertex,\n\\setbox2\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T1}}}\n\\setbox5\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U1}}}\n\\begin{equation}\n-i\\big[\\Gamma_{1122}\\big]_{{\\rm CSA}}^{\\rm 1\\ loop}\n=\n\\raise -11mm\\box2\n+\n\\raise -11mm\\box5\\; .\n\\label{eq:1122-CSA}\n\\end{equation}\nSome details of the calculation of these graphs are provided in the\nappendix \\ref{app:1122}. One obtains\n\\begin{equation}\n-i\\big[\\Gamma_{1122}\\big]_{{\\rm CSA}}^{\\rm 1\\ loop}\n=-\\frac{g^4}{64\\pi}\\left[\n{\\rm sign}(T)+{\\rm sign}(U)\n+2\\,\\Lambda_{_{\\rm UV}}\n\\left(\n\\frac{\\theta(-T)}{|{\\boldsymbol p}_1+{\\boldsymbol p}_3|}\n+\n\\frac{\\theta(-U)}{|{\\boldsymbol p}_1+{\\boldsymbol p}_4|}\n\\right)\n\\right]\\; ,\n\\label{eq:1122-1loop-1}\n\\end{equation}\nwhere we denote\n\\begin{equation}\nT\\equiv (p_1+p_3)^2\\quad,\\quad U\\equiv (p_1+p_4)^2\\; ,\n\\end{equation}\nand where $\\Lambda_{_{\\rm UV}}$ is an ultraviolet cutoff introduced to\nregularize the integral over the 3-momentum running in the\nloop. As one sees, these graphs have a {\\sl linear} ultraviolet\ndivergence, despite having a superficial degree of divergence equal to\nzero. This property violates Weinberg's theorem since, if it were\napplicable here, it would imply at most a logarithmic divergence with\na coefficient independent of the external momenta. One can attribute\nthis violation to the analytic structure of the integrand\\footnote{For\n the graphs in eq.~(\\ref{eq:1122-CSA}), the integrand is of the form\n $\\delta(K^2)\\delta((P+K)^2)$ where $P\\equiv p_1+p_3$ or $P\\equiv\n p_1+p_4$. Using the first delta function, the argument of the second\n one is $(P+K)^2=2P\\cdot K+P^2$, which is only of degree $1$ in the\n loop momentum. Therefore, the second propagator contributes only\n $-1$ to the actual degree of divergence of the graph, contrary to\n the $-2$ assumed based on dimensionality when computing the\n superficial degree of divergence. 
This discrepancy is also related to the\n impossibility to perform a Wick's rotation when the integrand is\n expressed in terms of delta functions or retarded\/advanced\n propagators.}: unlike in ordinary Feynman perturbation theory, we\ncannot perform a Wick rotation to convert the integral to an integral\nover an Euclidean momentum, which is an important step in the proof of\nWeinberg's theorem. \n\nSince it occurs in the operator $\\phi_1^2\\phi_2^2$, that does not\nappear in the CSA Lagrangian, this linear divergence provides\nincontrovertible proof of the fact that {\\sl the classical statistical\n approximation is not renormalizable}. Moreover, this conclusion is\nindependent of the value of the coupling constant. The only thing one\ngains at smaller coupling is that the irreducible cutoff dependence\ncaused by these terms is weaker.\n\nIt should also be noted that this linear divergence is a purely\nimaginary contribution to the function $\\Gamma_{1122}$ (this can be\nunderstood from the structure of the integrand, that was made of two\ndelta functions, which is reminiscent of the calculation of the\nimaginary part of a Green's function via Cutkosky's cutting rules\n\\cite{Cutko1,t'HooV1}).\n\nIn the appendix \\ref{app:1122}, we also calculate the graphs of\neq.~(\\ref{eq:1122-1loop}) that do not contribute to the CSA, and we\nfind\n\\setbox3\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T2}}}\n\\setbox4\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-T3}}}\n\\setbox6\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U2}}}\n\\setbox7\\hbox to 1.1cm{\\resizebox*{1.1cm}{!}{\\includegraphics{G1122-U3}}}\n\\begin{eqnarray}\n\\raise -11mm\\box3\n+\n\\raise -11mm\\box4\n+\n\\raise -11mm\\box6\n+\n\\raise -11mm\\box7\n&=&\n-\\frac{g^4}{32\\pi}\\Big[\n1\n\\nonumber\\\\\n&&\n\\!\\!\\!\\!-\\Lambda_{_{\\rm UV}}\n\\left(\n\\frac{\\theta(-T)}{|{\\boldsymbol p}_1+{\\boldsymbol p}_3|}\n+\n\\frac{\\theta(-U)}{|{\\boldsymbol p}_1+{\\boldsymbol p}_4|}\n\\right)\n\\Big]\\; .\\nonumber\\\\\n&&\n\\label{eq:1122-1loop-2}\n\\end{eqnarray}\n(Note that the S-channel graph is in fact zero, because it is made of\na sequence of retarded propagators in a closed loop.) By adding\neqs.~(\\ref{eq:1122-1loop-1}) and (\\ref{eq:1122-1loop-2}), we obtain\nthe 1-loop result in the unapproximated theory,\n\\begin{equation}\n-i\\Gamma_{1122}^{\\rm 1\\ loop}=-\\frac{g^4}{32\\pi}\\left[\\theta(T)+\\theta(U)\\right]\\; ,\n\\end{equation}\nwhich is ultraviolet finite, in agreement with the renormalizability\nof the full theory. \n\n\n\\subsection{Two point functions at two loops}\nLet us also mention two problematic 2-point functions at two loops. 
We\njust quote the results here (the derivation will be given in\n\\cite{EpelbGTW1}), for an on-shell momentum $P$ ($P^2=0,p_0>0$)~:\n\\setbox1\\hbox to\n3.5cm{\\resizebox*{3.5cm}{!}{\\includegraphics{S11-2loop}}}\n\\begin{eqnarray}\n-i\\big[\\Sigma_{11}(P)\\big]_{_{\\rm CSA}}^{\\rm 2\\ loop}\n&=&\n\\raise -6.5mm\\box1\n\\nonumber\\\\\n&=&\n-\\frac{g^4}{1024\\pi^3}\\;\\left(\\Lambda_{_{\\rm UV}}^2-\\frac{2}{3}p^2\\right)\\; ,\n\\label{eq:S11-2loop}\n\\end{eqnarray}\n\\setbox1\\hbox to 3.5cm{\\resizebox*{3.5cm}{!}{\\includegraphics{S12-2loop}}}\n\\begin{eqnarray}\n{\\rm Im}\\,\\big[\\Sigma_{12}(P)\\big]_{_{\\rm CSA}}^{\\rm 2\\ loop}\n&=&\n\\raise -6.5mm\\box1\n\\nonumber\\\\\n&=&\n-\\frac{g^4}{1024\\pi^3}\n\\;\\left(\\Lambda_{_{\\rm UV}}^2-\\frac{2}{3}p^2\\right)\\; .\n\\label{eq:S12-2loop}\n\\end{eqnarray}\nAn ultraviolet divergence in $\\Sigma_{11}$ is non-renormalizable,\nsince there is no $\\phi_1^2$ operator in the Lagrangian. Similarly,\nthe divergence at 2-loops in ${\\rm Im}\\,\\Sigma_{12}$ is also\nnon-renormalizable, because it would require an imaginary counterterm,\nthat would break the Hermiticity of the Lagrangian.\n\n\n\n\\section{Consequences on physical observables}\n\\label{sec:int}\n\\subsection{Order of magnitude of the pathological terms}\nSo far, we have exhibited a 4-point function at 1-loop that has an\nultraviolet divergence in the CSA but not if computed in full, and\nthat cannot be renormalized in the CSA because it would require a\ncounterterm for an operator that does not exist in the Lagrangian. \n\nIn practice, this 1-loop function enters as a subdiagram in the loop\nexpansion of observable quantities, making them unrenormalizable. In\norder to assess the damage, it is important to know the lowest order\nat which this occurs. Let us consider in this discussion two\nquantities that have been commonly computed with the classical\nstatistical method: the expectation value of the energy-momentum\ntensor, and the occupation number, which can be extracted from the\n$G_{22}$ propagator.\n\\setbox1\\hbox to 4cm{\\resizebox*{4cm}{!}{\\includegraphics{G22-NNLO}}}\n\\setbox2\\hbox to 3cm{\\resizebox*{3cm}{!}{\\includegraphics{Tmunu-NNLO}}}\n\nFor the $22$ component of the propagator, the first occurrence of the\n$1122$ 4-point function as a subgraph is in the following 1-loop\ncontribution~:\n\\begin{equation}\n\\left[G_{22}\\right]{}_{\\rm CSA}^{^{\\rm NNLO}} = \\raise -4mm\\box1\\; ,\n\\label{eq:G22-NNLO}\n\\end{equation}\nwhere the problematic subdiagram has been highlighted (the propagators\nin purple are of type $G_{22}$). The fact that this problematic\nsubgraphs occurs only at 1-loop and beyond is related to the fact that\nclassical statistical approximation is equivalent to the full theory\nup to (including) NLO. The first differences appear at NNLO, which\nmeans 1-loop for the $G_{22}$ function. In a situation where the\ntypical physical scale is denoted $Q$, the subdiagram is of order $g^4\n\\Lambda_{_{\\rm UV}}\/Q$, and the external field attached to the graph\nis of order $\\Phi_2\\sim Q\/g$ (we assume a system dominated by strong\nfields, as in applications to heavy ion collisions). This 1-loop\ncontribution to $G_{22}$ is of order $g^2\\Lambda_{_{\\rm UV}} Q$, to be\ncompared to $Q^2\/g^2$ at leading order. 
Therefore, the relative\nsuppression of this non-renormalizable contribution is by a factor\n\\begin{equation}\ng^4\\frac{\\Lambda_{_{\\rm UV}}}{Q}\\; .\n\\label{eq:rel}\n\\end{equation}\n\nThe same conclusion holds in the case of the energy-momentum tensor,\nfor which the $1122$ 4-point function enters also at NNLO (in this\ncase, this means two loops), in the following diagram~:\n\\begin{equation}\n\\left[T^{\\mu\\nu}\\right]{}^{^{\\rm NNLO}}_{{\\rm CSA}} = \\raise -13mm\\box2\\; .\n\\label{eq:T-NNLO}\n\\end{equation}\n(The cross denotes the insertion of the $T^{\\mu\\nu}$ operator.) The\norder of magnitude of this graph is $g^2\\Lambda_{_{\\rm UV}}Q^3$, while the\nleading order contribution to the energy-momentum tensor is of order\n$Q^4 \/ g^2$. Therefore, the relative suppression is the same as in\neq.~(\\ref{eq:rel}). \n\nAll these examples suggest that a minimum requirement is that the\nultraviolet cutoff should satisfy\n\\begin{equation}\n\\Lambda_{_{\\rm UV}}\\ll \\frac{Q}{g^4}\\; ,\n\\label{eq:cond0}\n\\end{equation}\nfor the above contributions to give only a small contamination to\ntheir respective observables in a classical statistical computation\nwith cutoff $\\Lambda_{_{\\rm UV}}$. However, one could be a bit more\nambitious and request that this computation be also accurate at\nNLO. For this, we should set the cutoff so that the above diagrams are\nsmall corrections compared to the NLO contributions. This is achieved\nif the highlighted 4-point function in these graphs is small compared\nto the tree-level 4-point function, i.e. $g^2$. This more stringent\ncondition reads\n\\begin{equation}\n\\frac{g^4}{16\\pi}\\frac{\\Lambda_{_{\\rm UV}}}{Q}\\ll g^2\\quad,\\quad\\mbox{i.e.}\\;\\; \\Lambda_{_{\\rm UV}}\\ll \\frac{16\\pi Q}{g^2}\\; ,\n\\label{eq:cond1}\n\\end{equation}\nwhere we have reintroduced the factors $2$ and $\\pi$ from\neq.~(\\ref{eq:1122-1loop-1}), because in practical situations they are\nnumerically important. One can see that this inequality is easy to\nsatisfy at weak coupling $g^2\\ll 1$, and presumably only marginally satisfied at\nlarger couplings $g\\approx 1$.\n\n\n\\subsection{Ultraviolet contamination at asymptotic times}\nThe condition of eq.~(\\ref{eq:cond1}) ensures that the pathological\nNNLO contributions are much smaller than the NLO corrections (the\nlatter are correctly given by the classical statistical\napproximation). Another important aspect of this discussion is\nwhether, by ensuring that the inequality (\\ref{eq:cond1}) is\nsatisfied, one is guaranteed that the contamination by the\npathological terms remains small at all times. It is easy to convince\noneself that this is not the case. In Ref.~\\cite{EpelbGTW1}, we argue\nthat these pathological terms, if not removed, induce corrections that\nbecome comparable to the physical result after a time that varies as\n$Qt_*\\sim 2048\\pi^3 g^{-4} (Q\/\\Lambda_{_{\\rm UV}})^2$. Effectively,\nthese ultraviolet divergent terms act as spurious scatterings with a\nrate proportional to $g^4 \\Lambda_{_{\\rm UV}}^2\/Q$.\n\n\nMoreover, the state reached by the system when $t\\to+\\infty$ is\ncontrolled solely by conservation laws and by a few quantities that\ncharacterize the initial condition, in addition to the ultraviolet\ncutoff. For instance, if the only conserved quantities are energy and\nmomentum, then the asymptotic state depends only on the total energy\nin the system. 
If in addition the particle number was conserved, then\nthe asymptotic state would also depend on the number of particles in\nthe system.\n\nIn particular, the value of the coupling constant does not play a role\nin determining which state is reached at asymptotic times; it only\ncontrols how quickly the system approaches the asymptotic state. This\nmeans that, even if $g^2$ is small so that the inequality\n(\\ref{eq:cond1}) is satisfied, the CSA may evolve the system towards\nan asymptotic state that differs significantly from the true\nasymptotic state, regardless of how small $g^2$ is. Therefore, the\nstrong dependence of the asymptotic state on the ultraviolet cutoff\nobserved in the figure 10 of Ref.~\\cite{BergeBSV3} is not specific to\na ``large'' coupling $g^2=1$. Exactly the same cutoff dependence would\nbe observed at smaller couplings, but the system would need to evolve\nfor a longer time in order to reach it.\n\n\n\n\n\n\\subsection{Could it be fixed?}\nAn important issue is whether one could somehow alter the classical\nstatistical approximation in order to remove the linear divergence\nthat appears in the 1-loop 4-point function $\\Gamma_{1122}$. As a\nsupport of these considerations, let us consider the NNLO correction\nto the function $G_{22}$. Eq.~(\\ref{eq:G22-NNLO}) displays the unique\ncontribution in the classical statistical approximation. However, in\nthe full theory there are two other possible arrangements of the\ninternal $1\/2$ inside the $1122$ subdiagram. This topology with the\ncomplete $1122$ subdiagram reads~:\n\\setbox1\\hbox to 11cm{\\resizebox*{11cm}{!}{\\includegraphics{G22-NNLO-full}}}\n\\begin{equation}\n\\raise -3.5mm\\box1\\; .\n\\label{eq:G22-NNLO-full}\n\\end{equation}\nThe CSA result contains only the first graph, and as a consequence it\nhas a linear ultraviolet divergence, while the sum of the three graphs\nis finite. So the question is: could one reintroduce in the CSA the\ndivergent part of the 2nd and 3rd graphs, in order to compensate the\ndivergence of the first graph?\n\nIn order to better visualize what it would take to do this, let us\nmodify the way the graphs are represented, so that they reflect\nthe space-time evolution of the system and the {\\sl modus operandi} of\npractical implementations of the CSA. The modified representation for\nthe first term in eq.~(\\ref{eq:G22-NNLO-full}) is shown in the figure\n\\ref{fig:G22-CSA}.\n\\begin{figure}[htbp]\n\\begin{center}\n\\resizebox*{!}{2.8cm}{\\includegraphics{G22-CSA}}\n\\end{center}\n\\caption{\\label{fig:G22-CSA}Space-time representation of\n eq.~(\\ref{eq:G22-NNLO}). The propagators with an arrow are retarded\n propagators. The orange circles represent the mean value of the\n initial field. The orange lines represent the link coming from the\n Gaussian fluctuations of the initial field.}\n\\end{figure}\nIn this representation, the lines with arrows are retarded propagators\n(the time flows in the direction of the arrow). The solution of the\nclassical equation of motion is a sum of trees made of retarded\npropagators, where the ``leaves'' of the tree are anchored to the\ninitial surface. In the diagram shown in the figure \\ref{fig:G22-CSA},\nthere are two such trees, both containing one instance of the quartic\ninteraction term. In order to complete the calculation in the CSA, one\nperforms a Gaussian average over the initial value of the classical\nfield. 
Diagrammatically, this average amounts to attaching the leaves\nof the tree to 1-point objects representing the average value of the\ninitial field, or to connecting them pairwise with the 2-point\nfunction that describes the variance of the initial Gaussian\ndistribution. \n\nIt is crucial to note that the trees that appear in the solution of\nthe classical equation of motion are ``oriented''~: three retarded\npropagators can merge at a point, from which a new retarded propagator\nstarts. Let us call this a $3\to 1$ vertex (when read in the\ndirection of increasing time). These trees do not contain any\n$2\to 2$ or $1\to 3$ vertices. Their absence is intimately related to\nthe absence of the $1122$ and $1112$ vertices in the Lagrangian in the\nclassical statistical approximation.\n\nIn the figure \ref{fig:G22-NCSA}, we now show the same representation\nfor the 2nd and 3rd contributions of eq.~(\ref{eq:G22-NNLO-full}).\nFirstly, we see that these graphs contain a $1\to 3$ vertex\n(surrounded by a dotted circle in the figure), in agreement with the\nfact that they do not appear in the CSA.\n\begin{figure}[htbp]\n\begin{center}\n\resizebox*{!}{3cm}{\includegraphics{G22-NCSA}}\n\end{center}\n\caption{\label{fig:G22-NCSA}Space-time representation of the NNLO\n contributions to $G_{22}$ that are not included in the classical\n statistical approximation. The dotted circles outline the $1112$\n vertices, which are missing in the CSA.}\n\end{figure}\nThere is no way to generate the loop contained in these graphs via the\naverage over the initial conditions, because this loop corresponds to\nquantum fluctuations that happen later on in the time\nevolution. By Fourier transforming the divergent part of\nthese diagrams in eq.~(\ref{eq:1122-1loop-2}), we can readily see that\nit is proportional to\n\begin{equation}\n\frac{1}{|{\boldsymbol x}-{\boldsymbol y}|}\,\delta((x^0-y^0)^2-({\boldsymbol x}-{\boldsymbol y})^2)\n\end{equation}\nin coordinate space. Thus, the divergent part of these loops is\nnon-local, with support on the light-cone, as illustrated in the\nfigure \ref{fig:G22-CT}.\n\begin{figure}[htbp]\n\begin{center}\n\resizebox*{!}{3cm}{\includegraphics{G22-CT}}\n\end{center}\n\caption{\label{fig:G22-CT}Space-time representation of the divergent\n part of the graphs of figure \ref{fig:G22-NCSA}. As explained in the\n text, these divergent terms are non-local in space-time, with\n support on the light-cone.}\n\end{figure}\n\nThere can also be arbitrarily many occurrences of these divergent\nsubgraphs in the calculation of an observable in the classical\nstatistical method, as illustrated in the figure \ref{fig:G22-multi}\nin the case of $G_{22}$.\n\begin{figure}[htbp]\n\begin{center}\n\resizebox*{!}{4cm}{\includegraphics{G22-multiloop}}\n\end{center}\n\caption{\label{fig:G22-multi}Contribution to $G_{22}$ with many\n $\Gamma_{1122}$ subgraphs.}\n\end{figure}\nThis implies that these divergences cannot be removed by an overall\nsubtraction, and that one must instead modify the Lagrangian. One\ncould formally subtract them by adding to the action of the theory a\nnon-local counterterm\footnote{Of course, there was no\n $\phi_1^2\phi_2^2$ term in the original bare action. On the other\n hand, we know that in the full theory, there should not be an\n intrinsic ultraviolet divergence in the $1122$ function. 
One should\n view this counterterm as a way of reintroducing some of the terms\n that are beyond the classical statistical approximation, in order to\n restore the finiteness of the $1122$ function.} of the form\n\\begin{equation}\n\\Delta{\\cal S}\\equiv\n-\\frac{i}{2}\\int{\\rm d}^4x\\,{\\rm d}^4y\\; \\left[\\phi_1(x)\\phi_2(x)\\right]\\,\nv(x,y)\\,\n\\left[\\phi_1(y)\\phi_2(y)\\right]\\; ,\n\\end{equation}\nwhere \n\\begin{equation}\nv(x,y)\\equiv \\frac{g^4}{64\\pi^3}\\frac{\\Lambda_{_{\\rm UV}}}{|{\\boldsymbol x}-{\\boldsymbol y}|}\\,\n\\delta((x^0-y^0)^2-({\\boldsymbol x}-{\\boldsymbol y})^2)\n\\end{equation}\nis tuned precisely to cancel the linear divergence in the\n$\\Gamma_{1122}$ function. In order to deal with such a term, the\nsimplest is to perform a {\\sl Hubbard-Stratonovich} transformation\n\\cite{Hubba1,Strat1}, by introducing an auxiliary field $\\zeta(x)$\nvia the following identity\n\\begin{equation}\ne^{i\\Delta{\\cal S}}\n=\n\\int[{\\rm D}\\zeta]\\;\ne^{\\frac{1}{2}\\int_{x,y}\\zeta(x)v^{-1}(x,y)\\zeta(y)}\n\\;\ne^{i\\int_x \\zeta(x)\\,\\phi_1(x)\\phi_2(x)}\\; .\n\\end{equation}\nThe advantage of this transformation is that we have transformed a\nnon-local four-field interaction term into a local interaction with a\nrandom Gaussian auxiliary field.\n\nThe rest of the derivation of the classical statistical method remains\nthe same: the field $\\phi_1$ appears as a Lagrange multiplier for a\nclassical equation of motion for the field $\\varphi$, but now we get an\nextra, stochastic, term in this equation~:\n\\begin{equation}\n (\\square+m^2)\\varphi+\\frac{g^2}{6}\\varphi^3 + i\\xi \\varphi=j\\; .\n \\label{eq:CL}\n\\end{equation}\nNote that we have introduced $\\zeta\\equiv i\\xi$ in order to have a\npositive definite variance for the new variable $\\xi$. From the\nabove derivation, this noise must be Gaussian distributed, with a mean\nand variance given by the following formulas,\n\\begin{eqnarray}\n\\big<\\xi(x)\\big>&=&0\\nonumber\\\\\n\\big<\\xi(x)\\xi(y)\\big>&=& \n\\frac{g^4}{64\\pi^3}\n\\frac{\\Lambda_{_{\\rm UV}}}{|{\\boldsymbol x}-{\\boldsymbol y}|}\\,\\delta((x^0-y^0)^2-({\\boldsymbol x}-{\\boldsymbol y})^2)\n\\; .\n\\label{eq:noise}\n\\end{eqnarray}\nBy construction, the noise term in eq.~(\\ref{eq:CL}), once averaged\nwith eq.~(\\ref{eq:noise}), will insert a non-local counterterm in\nevery place where the $\\Gamma_{1122}$ function can appear. For\ninstance, when applied to the calculation of $G_{22}$, the\ncontribution shown in the figure \\ref{fig:G22-multi} will be\naccompanied by the term shown in the figure \\ref{fig:G22-multi-CT}.\n\\begin{figure}[htbp]\n\\begin{center}\n\\resizebox*{!}{4cm}{\\includegraphics{G22-multiloop-CT}}\n\\end{center}\n\\caption{\\label{fig:G22-multi-CT}Effect of the noise term on the\n topology shown in the figure \\ref{fig:G22-multi}.}\n\\end{figure}\n\n\n\n\nAlthough the noise in eq.~(\\ref{eq:noise}) has non-local space-time\ncorrelations, it is easy to generate it in momentum space, where it\nbecomes diagonal. The main practical difficulty however comes from the\nnon-locality in time\\footnote{This non-locality appears to be a\n reminiscence of the memory effects that exist in the full quantum\n field theory, but are discarded in the classical statistical\n approximation. 
Note that the 2PI resummation scheme also has such\n terms.}: one would need to generate and store the whole\nspatio-temporal dependence for each configuration of the noise term\nprior to solving the modified classical equation of motion.\n\nThe noise term introduced in eq.~(\\ref{eq:CL}) is purely imaginary,\nand it turns the classical field $\\phi$ into a complex valued\nquantity. However, since $\\xi$ is Gaussian distributed with a zero\nmean, any Hermitean observable constructed from $\\phi$ via an average\nover $\\xi$ will be real valued\\footnote{The average over $\\xi$ will\n only retain terms that are even in $\\xi$, and the factors $i$ will\n cancel.}. Eq.~(\\ref{eq:CL}) is therefore a complex Langevin\nequation, and may be subject to the problems sometimes encountered\nwith this kind of equations (lack of convergence, or convergence to\nthe incorrect solution). At the moment, it is an open question whether\neq.~(\\ref{eq:CL}) really offers a practical way of removing the linear\nultraviolet divergences from the classical statistical approximation.\n\n\n\n\n\n\\section{Conclusions and outlook}\n\\label{sec:concl}\nIn this work, we have investigated the ultraviolet behavior of the\nclassical statistical approximation. This has been done by using\nperturbation theory in the retarded-advanced basis, where this\napproximation has a very simple expression, and in calculating all the\none-loop subdiagrams that can possibly be generated with these\ndiagrammatic rules.\n\nThe main conclusion of this study is that the classical approximation\nleads to a 1-loop 4-point function that diverges linearly in the\nultraviolet cutoff. More specifically, the problem lies in the\nfunction $\\Gamma_{1122}$, where the $1,2$ indices refer to the\nretarded-advanced basis. In the unapproximated theory, this function\nis ultraviolet finite, but it violates Weinberg's theorem in the\nclassical statistical approximation, because it has an ultraviolet\ndivergence with a coefficient which is non-polynomial in the external\nmomenta. Moreover, it is non-renormalizable because it corresponds to\nan operator that does not even appear in the Lagrangian one started\nfrom.\n\nThe mere existence of these divergent terms implies that the classical\nstatistical approximation is not renormalizable, no matter what the\nvalue of the coupling constant is. \n\nWe have estimated that the contamination of the results by these\nnon-renormalizable terms is of relative order $g^4\\Lambda_{_{\\rm\n UV}}\/Q$, where $Q$ is the typical physical momentum scale of the\nproblem under consideration. Based on this, the general rule is that\nthe coupling should not be too large, and the cutoff should remain\nclose enough to the physical scales. \n\nIn this paper, we have also proposed that this one-loop spurious\n(because it does not exist in the full theory) divergence may be\nsubtracted by adding a multiplicative Gaussian noise term to the\nclassical equation of motion. This noise term can be tuned in order to\nreintroduce some of the terms of the full theory that had been lost\nwhen doing the classical approximation. In order to subtract the\nappropriate quantity, this noise must be purely imaginary, with a \n2-point correlation given by the Fourier transform of the divergent\nterm. Unfortunately, this correlation is non-local in time, which\nmakes the implementation of this correction quite complicated. 
Whether\nthis can be done in practice remains an open question at this point.\n\nMoreover, the ultraviolet contamination due to these\nnon-renormalizable terms is cumulative over time, and will eventually\ndominate the dynamics of the system no matter how small\n$g^4\\Lambda_{_{\\rm UV}}\/Q$ is. An extensive discussion of this\nasymptotic ultraviolet sensitivity, and of the time evolution that\nleads to the asymptotic state, will be provided in a forthcoming work\n\\cite{EpelbGTW1}.\n\n\n\n\\section*{Acknowledgements}\nWe would like to thank L.~McLerran for useful discussions about\nthis work and closely related issues. This work is supported by the\nAgence Nationale de la Recherche project 11-BS04-015-01.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe advancements in digital multimedia technologies over the past decade have received tremendous consumer acceptance. Stereoscopic 3D (S3D) digital technology has received a lot of attention lately and several industries such as film, gaming, education etc., are focused on S3D content creation. \nThe Motion Pictures Association of America $\\bf{MPAA}$ \\cite{mpaa2015} and $\\bf{Statista}$ \\cite{website:3dstatista} reported that in 2015, the S3D movie box office profit reached \\$1.7 billion and this was an increase of 20\\% compared to 2014. They also reported that the number of S3D screens in 2015 increased by 15\\% compared to 2014 in the United States (US). These numbers are a clear indication of the advancements in S3D technology and its ever increasing popularity with consumers.\n\nS3D video content creation \\cite{3DChain} involves several processing steps like sampling, quantization, synthesis etc. Each step in this processing chain affects the perceptual quality of the source video content leading to a degradation of the Quality of Experience (QoE) of the end user. To guarantee high user satisfaction, it becomes important to assess the loss in quality at each step of the lossy process. Quality assessment (QA) is the process of judging or estimating the quality of a video according to its perceptual experience. QA can be categorized into two types: subjective and objective. In subjective assessment, the viewers quantify their opinion on video quality based on their perceptual feel after viewing the content. Objective quality assessment refers to the automatic quality evaluation process designed to mimic subjective assessment. It has three flavors: full reference (FR) where the pristine source is available for QA, reduced reference (RR) where partial information about the pristine source is available for QA and no-reference (NR), where no pristine source information is available for QA. In this work, we focus on S3D NR video quality assessment (VQA). \n\nThe predominant approach to objectively assessing the quality of S3D content (image and video) has been to leverage the strength of 2D QA algorithms in conjunction with a factor that accounts for depth quality (for e.g.,\\cite{yasakethu2009analyzing,hewage2009depth,regis2013objective,bosc2011towards,banitalebi2016efficient,battisti2015perceptual, Joveluro2010,Flosim3D2017}). The reliance on 2D QA metrics for S3D quality assessment could be explained by the fact that depth perception is a consequence of the relative spatial shift present in two 2D images. Another reason for relying on 2D QA methods could be the fact that they are far more mature given the ubiquity of 2D content. 
Further, access to publicly available S3D subjective quality databases is very limited in general, and even more so in the case of videos. In contrast to the 2D QA reliant approach, we propose an NR S3D VQA algorithm based on statistical properties that are {\\em{innately stereoscopic}} and lightly augment it with a 2D NR spatial score. Towards this end, we propose a joint statistical model of the multi scale subband coefficients of motion and depth components of a natural S3D video sequence. Specifically, we propose a Bivariate Generalized Gaussian Distribution (BGGD) model for this joint distribution. We show that the BGGD model parameters possess good distortion discrimination properties. The model parameters combined with frame-wise 2D NR spatial quality scores are used as features to train an SVR for the NR VQA task. \n\nThe rest of the paper is organized as follows: Section \\ref{sec:literature survey} reviews relevant literature. Section \\ref{Sec:Proposed Method} presents the proposed joint statistical model and the NR S3D VQA algorithm. Section \\ref{sec:results} presents and discusses the results of the proposed NR VQA algorithm. Concluding remarks are made in Section \\ref{sec:conclusions}.\n\\section{Background}\n\\label{sec:literature survey}\nGiven that our contribution spans S3D video modeling and VQA, we review relevant literature in appropriately titled subsections in the following.\n\\subsection{Motion and Depth Dependencies in the Human Visual System (HVS)}\nMaunsell and Van Essen \\cite{maunsell1983functional} performed psychovisual experiments on the monkey visual cortex to explore the disparity selectivity in the middle temporal (MT) visual area. They found that two-thirds of MT area neurons are highly tuned and responsible for binocular disparity processing. Roy \\textit{et al.} \\cite{roy1992disparity} experimented on a large number of MT area neurons (295 neurons) of awake monkeys to explore the dependencies between motion and depth components. They concluded that 95\\% of the tested neurons were highly responsive to the crossed and uncrossed disparities. De Angelis and Newsome \\cite{deangelis1999organization} performed psychovisual experiments to explore the depth tuning in the MT area. They concluded that the MT neurons are primarily responsible for motion perception and also that these neurons are important for depth perception.\nDayan \\textit{et al.} \\cite{dayan2003theoretical} performed an experiment on the MT area of Macaque monkeys to model the firing response rate of a neuron. They hypothesized that the frequency response of neuron firing rate can be accurately modeled with a generalized gaussian distribution. These findings combined with the observation that neuronal responses are tuned to the statistical properties of the sensory input or stimuli \\cite{simoncelli2001natural} motivate us to study and model the joint statistical relationship between the subband coefficients of motion and depth components of the {\\em{stimuli}} i.e., natural S3D videos. \n\\subsection{S3D Natural Scene Statistical Modeling}\nWe briefly review literature on S3D natural scene statistical modeling. \nHuang \\textit{et al.} \\cite{huang2000statistics} explored the range statistics of a natural S3D scene. The range maps are captured using laser range finder and pixel, gradient and wavelet statistics are modeled. Liu \\textit{et al.} \\cite{liu2008disparity,liu2010dichotomy} computed the disparity information from the depth maps which were created using range finders. 
They established a spherically modeled eye structure to construct the disparity maps and finally correlated the constructed disparity maps with the stereopsis mechanism of the HVS. Potetz and Lee \cite{potetz2003statistical} and Liu \textit{et al.} \cite{liu2011statistical,liu2009luminance} studied the statistical relationship between the luminance and spatial disparity maps in a multiscale subband decomposition domain. They concluded that the histograms of luminance and range\/disparity subband coefficients have sharp peaks and heavy tails and can be modeled with a univariate GGD (UGGD). Further, they explored the correlation dependencies between these subband coefficients. Su \textit{et al.} \cite{su2011natural} performed a study to explore the relationship between chrominance and range components. They found that the conditional distribution of chrominance coefficients (given range gradients) is modeled well using a Weibull distribution. While these studies focused on static S3D natural scenes, we found few studies on the joint statistics of motion and depth components of S3D natural videos. This paucity has provided additional motivation for our work. \n\subsection{S3D Video Quality Assessment}\nWe now review S3D video quality assessment methods. These methods can broadly be classified into statistical modeling based and human visual system (HVS) based approaches. \n\nStatistical model based approaches have been very successful in S3D IQA \cite{chen2013no,khan2015full,su2015oriented,appina2016no,hachicha2017no}. Mittal \textit{et al.} \cite{mittal2011algorithmic} proposed an NR S3D VQA metric based on statistical measurements of disparity and differential disparity. They computed the mean, median, kurtosis, skewness and standard deviation values of the disparity and differential disparity maps to estimate the quality of an S3D video. These approaches point to the efficacy of using statistical models for S3D QA, and provide the grounding for our work. \n\nAs discussed earlier, several objective S3D FR VQA models \cite{yasakethu2009analyzing,hewage2009depth,regis2013objective,Joveluro2010,bosc2011towards,banitalebi2016efficient,battisti2015perceptual} are based on applying 2D IQA\/VQA algorithms on individual views and the depth component of an S3D video. In addition to these, we review a few HVS inspired FR, RR and NR VQA algorithms next. \cite{jin2011frqa3d} and \cite{Han2012VCIP} proposed S3D VQA models based on the 3D Discrete Cosine Transform (DCT) and 3D structural models of the spatial, temporal and depth components of an S3D view. Qi \textit{et al.} \cite{qi2016stereoscopic} proposed an FR S3D VQA metric based on measuring the Just Noticeable Differences (JND) on spatial, temporal and depth maps. They computed the temporal JNDs (TJND) of the spatial component and inter-view JNDs (IJND) from the TJNDs to estimate the binocular property. Further, they calculated the similarity maps between reference and distorted spatial JNDs to estimate the spatial quality and finally computed the mean across all JNDs to estimate the overall S3D video quality score. De Silva \textit{et al.} \cite{de20103d} proposed an FR S3D VQA metric based on measuring the perceivable distortion strength from depth maps. They computed the JND value between reference and distorted depth maps to measure the quality of an S3D video. Galkandage \textit{et al.} \cite{galkandage2016stereoscopic} proposed S3D FR IQA and VQA metrics based on an HVS model and temporal features. 
They computed the Energy Quality Metric (EBEQM) scores to measure the spatial quality and finally pooled these scores by using empirical methods to estimate the overall quality score of an S3D video. De Silva \\textit{et al.} \\cite{de2013toward} proposed an S3D FR VQA based on measuring the spatial distortion, blur measurement and content complexity. They measured the structural similarity and edge degradation between reference and distorted views to compute the spatial quality, distortion strength. Content complexity was measured by calculating the spatial and temporal indices of an S3D view.\n\nHewage and Martini \\textit{et al.} \\cite{hewage2011reduced} proposed an S3D RR VQA metric based on depth map edges and S3D view chrominance information. They applied the Sobel operator to compute the edges of a depth map and utilized these edges to extract the chrominance features of an S3D view. Finally, they computed the PSNR values of extracted features to estimate the quality of an S3D video.\nYu \\textit{et al.} \\cite{yu2016binocular} proposed an S3D RR VQA metric based on perceptual properties of the HVS. They relied on motion vector strength to predict the reduced reference frame of a reference video, and binocular fusion and rivalry scores were calculated using the RR frames. Finally these scores were pooled using motion intensities as weights to compute the quality score of an S3D video. \n\nSazzad \\textit{et al.} \\cite{sazzad2010spatio} proposed an S3D NR VQA metric based on spatiotemporal segmentation. They measured structural loss by computing the edge strength degradation in each segment and motion vector length was measured to estimate the temporal cue loss. \nHa and Kim \\cite{ha2011perceptual} proposed an S3D NR VQA metric based on temporal variance, intra and inter disparity measurements. Depth maps are computed by minimizing the MSE values, and motion vector length is calculated to estimate the temporal variations. Intra and inter frame disparities were computed to measure the dependencies between motion and depth components. Solh and AlRegib \\cite{solh2011no} proposed an S3D NR VQA metric based on temporal inconsistencies, spatial and temporal outliers. Spatial and temporal outliers were measured by calculating the difference between ideal and estimated depth maps, and temporal inconsistencies were computed by calculating the standard deviation of difference between depth maps of successive frames. \nHasan \\textit{et al.} \\cite{hasan2014no} proposed an S3D NR VQA metric based on similarity matches and edge visualized areas. They utilized the edge strength to find the visualized areas and computed the energy error of similarity measure estimated between left and right views to calculate the disparity index. \nSilva \\textit{et al.} \\cite{silva2015no} proposed an S3D NR VQA metric based on distortion strength, disparity and temporal depth qualities of a 3D video. They computed the depth cue loss by measuring the spatial distortion strength, and temporal depth qualities were measured by calculating the correlation between histograms of frame motion vectors. \nHan \\textit{et al.} \\cite{han2015extended} proposed an NR S3D VQA metric based on the encoder settings of a transmitted video. They used the ITU-T G.1070 settings to model the packet loss artefacts and quality was measured by computing the correlation between perceptual scores and packet loss rates at different bit rates. 
Mahamood and Ghani \\cite{mahmood2015objective} proposed an S3D NR VQA metric based on computing the motion vector lengths and depth map features. They concluded that the number of bad frames in a video is a good predictor of motion and depth quality of an S3D video. \nYang \\textit{et al.} \\cite{yang2017} proposed an S3D NR VQA metric based on multi view binocular perception model. They applied the curvelet transform on spatial information of an S3D video to extract the texture analysis features and optical flow features were utilized to measure the temporal quality. Finally, they used empirical weight combinations to pool these scores to compute the overall quality score. Chen \\textit{et al.} \\cite{chen2017blind} proposed an S3D NR VQA model based on binocular energy mechanism. They computed the auto-regressive prediction based disparity measurement and natural scene statistics of an S3D video to compute the quality.\n\n\n\\begin{figure*}\n\\captionsetup[subfigure]{justification=centering}\n\\centering\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/l1.png}\n\\subcaption{\\small Reference left view first frame.}\n\\label{fig:l1}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/r1.png}\n\\subcaption{\\small Reference right view first frame.}\n\\label{fig:r1} \n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/l2.png}\n\\subcaption{\\small Reference left view second frame.} \n\\label{fig:l2}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/motionr.png}\n\\subcaption{\\small Optical flow magnitude map.} \n\\label{fig:motionr}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/mr.png}\n\\subcaption{\\small Motion vector magnitude map.} \n\\label{fig:mr}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/depthr.png}\n\\subcaption{\\small Disparity map.} \n\\label{fig:depthr}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/motionhist.png}\n\\subcaption{\\small Optical flow magnitude histogram.} \n\\label{fig:motionhist}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/motionhistm.png}\n\\subcaption{\\small Motion vector magnitude histogram.} \n\\label{fig:motionhistm}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[width=5.5cm,height=3cm]{Figures\/depthhist.png}\n\\subcaption{\\small Disparity map histogram.} \n\\label{fig:depthhist}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.45\\textwidth}\n\\includegraphics[height=4.5cm]{Figures\/3dhist.png}\n\\subcaption{\\small Joint histogram plot of optical flow magnitude and disparity maps.} \n\\label{fig:3dhist}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.5\\textwidth}\n\\includegraphics[height=4.5cm]{Figures\/3dhistm.png}\n\\subcaption{\\small Joint histogram plot of motion vector magnitude and disparity maps.}\n\\label{fig:3dhistm} \n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.45\\textwidth}\n\\includegraphics[height=4.5cm]{Figures\/3dmodelhist.png}\n\\subcaption{\\small Bivariate GGD fit between optical flow magnitude and disparity maps.} 
\n\label{fig:3dmodelhist}\n\end{subfigure}\n\begin{subfigure}[b]{0.5\textwidth}\n\includegraphics[height=4.5cm]{Figures\/3dmodelhistm.png}\n\subcaption{\small Bivariate GGD fit between motion vector magnitude and disparity maps.} \n\label{fig:3dmodelhistm}\n\end{subfigure}\n\caption{Illustration of BGGD model fit between combinations of optical flow magnitude and disparity maps, motion vector magnitude and disparity maps.}\n\label{fig:LumDepth}\n\end{figure*}\nOur literature survey has equipped us with the required background and motivation to study and model the joint statistics of motion and depth in S3D natural videos in a multi-resolution analysis domain. Further, it has given us the grounding to propose an S3D NR VQA algorithm dubbed Video QUality Evaluation using MOtion and DEpth Statistics (VQUEMODES) that relies on the joint statistical model parameters and 2D NR IQA scores. We describe the proposed approach in detail in the following section.\n\n\section{Proposed Method} \n\label{Sec:Proposed Method}\nWe first analyze the joint statistical behavior of the subband coefficients of motion and depth components in S3D natural videos. We then propose a BGGD model for the joint distribution. Subsequently, we describe the proposed S3D NR VQA algorithm. \n\subsection{BGGD Modeling}\nWe empirically show that a Bivariate Generalized Gaussian Distribution (BGGD) accurately captures the dependencies between motion and depth subband coefficients. We use both optical flow vector magnitude and motion vector magnitude to represent motion in our statistical analysis. We do so to investigate the behavior of both fine and coarse motion representations. Further, motion vector computation has significantly lower computational complexity compared to optical flow computation. We perform our analysis using multi-scale (3 scales) and multi-orientation ($0^0,30^0,60^0,90^0,120^0,150^0$) subband decomposition coefficients of the motion (optical flow\/motion vector) and disparity maps. \n\nThe multivariate GGD distribution of a random vector ${\bf{x}} \in \mathbb{R}^N$ is given by \cite{pascal2013parameter}\n\begin{align}\np({\bf{x}}|{\bf{M}},\alpha,\beta)&=\frac{1}{|{\bf{M}}|^\frac{1}{2}}g_{\alpha,\beta}({\bf{x}}^T{\bf{M}}^{-1}{\bf{x}}) ,\\\ng_{\alpha,\beta}(y)&=\frac{\beta\Gamma(\frac{N}{2})}{(2^{\frac{1}{\beta}}\pi\alpha)^{\frac{N}{2}}\Gamma(\frac{N}{2\beta})}e^{-\frac{1}{2}(\frac{y}{\alpha})^{\beta}},\n\label{eqn:mggd}\n\end{align}\nwhere ${\bf{M}}$ is an $N\times N$ symmetric scatter matrix, $\beta$ is the shape parameter, $\alpha$ is the scale parameter and $g_{\alpha, \beta}(\cdot)$ is the density generator. \n\nFigs. \ref{fig:l1} and \ref{fig:l2} show the first and second frames of the left view of the Boxers S3D video respectively, and Fig. \ref{fig:r1} shows the first frame of the right view from the Boxers video sequence of the IRCCYN database \cite{urvoy2012nama3ds1}. Fig. \ref{fig:depthr} shows the disparity map estimated using the SSIM-based stereo matching algorithm \cite{chen2013full}, computed between the first frames of the left and right views. Figs. \ref{fig:motionr} and \ref{fig:mr} show the optical flow and motion vector maps computed between the first and second frames of the left view respectively. The optical flow map is computed using the Black and Anandan \cite{black1993framework} flow algorithm and the motion vector map is estimated using the three-step block motion estimation algorithm \cite{jakubowski2013block}. 
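\nFor concreteness, the density in eq.~(\ref{eqn:mggd}) is straightforward to evaluate numerically once $({\bf{M}},\alpha,\beta)$ are available (we estimate them following \cite{su2015oriented}); the sketch below implements the bivariate case $N=2$ used in this work.\n\begin{verbatim}\n# Sketch: evaluate the bivariate GGD density for a given scatter matrix M,\n# scale alpha and shape beta (direct transcription of the density above).\nimport numpy as np\nfrom scipy.special import gamma\n\ndef bggd_pdf(x, M, alpha, beta):\n    # x: array of shape (..., N); M: (N, N) symmetric scatter matrix\n    N = M.shape[0]\n    y = np.einsum('...i,ij,...j->...', x, np.linalg.inv(M), x)\n    norm = beta * gamma(N / 2.0) / (\n        (2.0 ** (1.0 / beta) * np.pi * alpha) ** (N / 2.0)\n        * gamma(N / (2.0 * beta)))\n    return norm * np.exp(-0.5 * (y / alpha) ** beta) / np.sqrt(np.linalg.det(M))\n\\end{verbatim}\n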
Figs. \\ref{fig:depthhist}, \\ref{fig:motionhist} and \\ref{fig:motionhistm} show the histograms of the subband coefficients of disparity, optical flow magnitude and motion vector magnitude respectively. Fig. \\ref{fig:3dhist} shows the joint histogram between disparity and optical flow magnitude subband coefficients, and Fig. \\ref{fig:3dmodelhist} shows the estimated BGGD model of \\ref{fig:3dhist}. Fig. \\ref{fig:3dhistm} shows the joint histogram of disparity and motion vector magnitude subband coefficients and Fig. \\ref{fig:3dmodelhistm} shows the estimated BGGD model of \\ref{fig:3dhistm}. The BGGD model parameters were estimated using the approach taken by Su \\textit{et al.} \\cite{su2015oriented}. The marginal and joint histogram plots were computed at the first scale and $0^{0}$ orientation of the steerable pyramid decomposition. From the histograms \\ref{fig:depthhist}, \\ref{fig:motionhist} and \\ref{fig:motionhistm} we see that the subband coefficients have sharp peaks and long tails and is consistent with the observations in \\cite{dayan2003theoretical,young1987gaussian}. The joint histograms between disparity and motion components \\ref{fig:3dmodelhist} and \\ref{fig:3dmodelhistm} also have sharp peaks and heavy tails. These empirical findings lead us to propose that a bivariate GGD ($N = 2$) is ideally suited to model the joint distribution of disparity and motion subband coefficients. \n\n\nThe efficacy of the proposed BGGD model is first evaluated on three publicly available S3D video databases: IRCCYN \\cite{urvoy2012nama3ds1}, RMIT \\cite{cheng2012rmit3dv} and LFOVIA \\cite{appina2016subjective1}. The IRCCYN video database has pristine S3D videos and their H.264 and JP2K distorted versions, the RMIT database comprises pristine S3D videos and the LFOVIA database is composed of pristine S3D videos and their H.264 compressed versions.\n\\begin{figure*}[htbp]\n\\centering\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/l1.png\n\\subcaption{\\small Reference left view.}\n\\label{fig:1l1}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/ld.png}\n\\subcaption{\\small H.264 compressed left view.} \n\\label{fig:1ld}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/ld1.png}\n\\subcaption{\\small JP2K distorted left view.} \n\\label{fig:1ld1}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/opticconref.png}\n\\subcaption{\\small Isoprobability contour plot of joint optical flow magnitude and disparity distribution of the reference view. Best fit BGGD model: $\\alpha_o = 5\\times 10^{-14}$, $\\beta_o = 0.0405$, $\\chi_o = 1\\times10^{-7}$.}\n\\label{fig:13dcontref}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/opticcon44.png}\n\\subcaption{\\small Isoprobability contour plot of joint optical flow magnitude and disparity distribution of the H.264 distorted view. Best fit BGGD model: $\\alpha_o = 1\\times10^{-8} $, $\\beta_o = 0.0514$, $\\chi_o = 1\\times10^{-8}$.} \n\\label{fig:13dconth264}\n\\end{subfigure} \n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/opticcon2.png}\n\\subcaption{\\small Isoprobability contour plot of joint optical flow magnitude and disparity distribution of the JP2K distorted view. 
Best fit BGGD model: $\\alpha_o = 1\\times10^{-16} $, $\\beta_o = 0.4238$, $\\chi_o = 5\\times10^{-7}$.} \n\\label{fig:13dcontjp}\n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/3dcontmref.png}\n\\subcaption{\\small Isoprobability contour plot of joint motion vector magnitude and disparity distribution of reference view. Best fit BGGD model: $\\alpha_m = 4\\times10^{-5}$, $\\beta_m = 0.3579$, $\\chi_m = 1 \\times 10^{-8}$.}\n\\label{fig:13dcontmref}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/3dcontmh264.png}\n\\subcaption{\\small Isoprobability contour plot of joint motion vector magnitude and disparity distribution of H.264 distorted view. Best fit BGGD model: $\\alpha_m = 4\\times10^{-4} $, $\\beta_m = 0.4457 $, $\\chi_m =8\\times10^{-7} $.} \n\\label{fig:13dcontmh264}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\includegraphics[height=3cm]{Figures\/3dcontmjp.png}\n\\subcaption{\\small Isoprobability contour plot of joint motion vector magnitude and disparity distribution of JP2K distorted view. Best fit BGGD model: $\\alpha_m = 6\\times10^{-7}$, $\\beta_m = 0.603$, $\\chi_m = 6\\times10^{-7}$.} \n\\label{fig:13dcontmjp}\n\\end{subfigure}\n\\caption{Effects of distortion on the joint distribution of motion and depth subband coefficients ($0^0$ orientation and first scale). It can be seen that the BGGD model parameters are able to track the effects of distortions.}\n\\label{fig:LumDepthDist}\n\\end{figure*}\n\\begin{table*}[htbp]\n\\centering \n\\caption{BGGD features ($\\alpha$, $\\beta$) and goodness of fit ($\\chi$) value of reference S3D videos and its symmetric distortion combinations of IRCCYN, LFOVIA and RMIT databases.}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nVideo Frame &Distortion Type & \\multicolumn{2}{c|}{Test stimuli} & \\multicolumn{6}{c|}{BGGD features ($\\alpha$, $\\beta$) and goodness of fit ($\\chi$) value}\\\\\n\\cline{3-10} \n& & Left & Right & $\\alpha_{o}$ & $\\beta_{o}$ & $\\chi_{o}$ & $\\alpha_{m}$ & $\\beta_{m}$ & $\\chi_{m}$\\\\\n\\hline\n\\multirow{8}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/i1100.png}\n\\end{minipage}\n} & Reference & - & - & 1 $\\times$ $10^{-5}$&0.337 &3 $\\times$ $10^{-6}$& 8 $\\times$ $10^{-6}$&0.312 &\t2 $\\times$ $10^{-6}$ \\\\\n\\cline{2-10}\n& & 32 & 32 & 2 $\\times$ $10^{-6}$&\t0.213 &\t7 $\\times$ $10^{-6}$& 6 $\\times$ $10^{-7}$&\t0.245 &\t5 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& H.264 & 38 & 38 &3 $\\times$ $10^{-5}$&\t0.341 &\t3 $\\times$ $10^{-6}$& 8 $\\times$ $10^{-5}$&\t0.377 &\t2 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& (QP) & 44 & 44 & 1 $\\times$ $10^{-4}$&\t0.407 &\t2 $\\times$ $10^{-6}$& 7 $\\times$ $10^{-4}$&\t0.425 &\t1 $\\times$ $10^{-6}$ \\\\\n\\cline{2-10}\n& & 2 & 2 & 3 $\\times$ $10^{-4}$&0.715&\t1 $\\times$ $10^{-6}$&4 $\\times$ $10^{-6}$\t&0.737 &1 $\\times$ $10^{-7}$\\\\\n\\cline{3-10}\n&JP2K & 8 & 8 & 1 $\\times$ $10^{-4}$&0.698 &2 $\\times$ $10^{-7}$&3 $\\times$ $10^{-5}$&0.724 &2 $\\times$ $10^{-7}$\\\\\n\\cline{3-10}\n& (Bitrate = Mb\/s) & 16 & 16 & 1 $\\times$ $10^{-5}$&0.619 &4 $\\times$ $10^{-7}$&4 $\\times$ $10^{-4}$&0.641 &3 $\\times$ $10^{-7}$\\\\\n\\cline{3-10}\n& & 32 & 32 & 2 $\\times$ $10^{-4}$&0.455 &1 $\\times$ $10^{-6}$&5 $\\times$ $10^{-4}$&0.473 &1 $\\times$ 
$10^{-6}$\\\\\n\\hline\n\\multirow{8}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\vspace{-0.6cm}\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/i3100.png}\n\\end{minipage}\n}& Reference & - & - & 3 $\\times$ $10^{-11}$&0.652 &2 $\\times$ $10^{-7}$&5 $\\times$ $10^{-4}$&\t0.496 &\t2 $\\times$ $10^{-7}$ \\\\\n\\cline{2-10}\n& & 32 & 32 & 6 $\\times$ $10^{-6}$&\t0.284 &\t3 $\\times$ $10^{-7}$&5 $\\times$ $10^{-5}$&\t0.349 &\t1 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& H.264 & 38 & 38 & 1 $\\times$ $10^{-4}$&0.372 &\t2 $\\times$ $10^{-6}$&1 $\\times$ $10^{-4}$&\t0.395 &\t1 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& (QP) & 44 & 44 & 2 $\\times$ $10^{-4}$&\t0.405 &\t1 $\\times$ $10^{-6}$&3 $\\times$ $10^{-4}$&\t0.441 &\t9 $\\times$ $10^{-7}$ \\\\\n\\cline{2-10}\n& & 2 & 2 & 1 $\\times$ $10^{-11}$&\t0.622 &\t2 $\\times$ $10^{-7}$&2 $\\times$ $10^{-9}$&\t0.649 &\t2 $\\times$ $10^{-7}$ \\\\\n\\cline{3-10}\n&JP2K & 8 & 8 & 1 $\\times$ $10^{-11}$&\t0.677 &\t2 $\\times$ $10^{-7}$&\t2 $\\times$ $10^{-10}$&\t0.611 &\t2 $\\times$ $10^{-7}$ \\\\\n\\cline{3-10}\n& (Bitrate = Mb\/s) & 16 & 16 &4 $\\times$ $10^{-11}$&0.585&2 $\\times$ $10^{-7}$&6 $\\times$ $10^{-10}$&\t0.600 &\t2 $\\times$ $10^{-7}$ \\\\\n\\cline{3-10}\n& & 32 & 32 &2 $\\times$ $10^{-8}$&0.548 &\t1 $\\times$ $10^{-7}$&1 $\\times$ $10^{-11}$&\t0.571 &\t2 $\\times$ $10^{-7}$ \\\\\n\\hline\n\\multirow{7}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\vspace{-0.6cm}\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/sofa100.png}\n\\end{minipage}\n}& Reference & - & - & 5 $\\times$ $10^{-5}$&0.361 &\t7 $\\times$ $10^{-7}$ &5 $\\times$ $10^{-5}$&\t0.361 &\t7 $\\times$ $10^{-7}$ \\\\\n\\cline{2-10}\n& & 100 & 100 & 2 $\\times$ $10^{-6}$&\t0.318 &\t3 $\\times$ $10^{-6}$ &7 $\\times$ $10^{-5}$&\t0.365 &\t1 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& H.264 & 200 & 200 &2 $\\times$ $10^{-5}$&\t0.447 &\t8 $\\times$ $10^{-7}$ &3 $\\times$ $10^{-6}$&\t0.281 &\t4 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& (Bitrate=Kbps) & 350 & 350 & 9 $\\times$ $10^{-5}$&0.435&\t5 $\\times$ $10^{-7}$ &2 $\\times$ $10^{-6}$&\t0.268 &\t3 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& & 1200 & 1200 & 2 $\\times$ $10^{-4}$&\t0.438 &\t3 $\\times$ $10^{-7}$ &2 $\\times$ $10^{-5}$&\t0.332 &\t1 $\\times$ $10^{-6}$ \\\\\n\\hline\n\\multirow{7}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\vspace{-0.6cm}\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/walk100.png}\n\\end{minipage}\n}& Reference & - & - & 1 $\\times$ $10^{-4}$&0.205 &\t9 $\\times$ $10^{-6}$&2 $\\times$ $10^{-7}$&0.230 &5 $\\times$ $10^{-6}$ \\\\\n\\cline{2-10}\n& & 100 & 100 &9 $\\times$ $10^{-5}$&0.430 &\t3 $\\times$ $10^{-6}$&4 $\\times$ $10^{-4}$&\t0.449 &\t1 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& H.264 & 200 & 200 & 3 $\\times$ $10^{-5}$&0.288 &5 $\\times$ $10^{-6}$&4 $\\times$ $10^{-5}$&\t0.351 &\t2 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& (Bitrate=Kbps) & 350 & 350 &6 $\\times$ $10^{-5}$&0.268 &\t6 $\\times$ $10^{-6}$&1 $\\times$ $10^{-5}$&0.319 &4 $\\times$ $10^{-6}$ \\\\\n\\cline{3-10}\n& & 1200 & 1200 & 5 $\\times$ $10^{-5}$&\t0.240 &\t1 $\\times$ $10^{-6}$&3 $\\times$ $10^{-6}$&0.277 &4 $\\times$ $10^{-6}$ \\\\\n\\hline\n\\multirow{7}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/1100.png}\n\\end{minipage}\n}& & & & & & &\t&\t &\t \\\\\n& Reference & - & - & 9 $\\times$ $10^{-4}$ & 0.928 & 7 $\\times$ $10^{-6}$ & 8 $\\times$ $10^{-4}$&0.758 &8 $\\times$ $10^{-7}$ \\\\\n& & & & & & &\t&\t &\t \\\\\n& & & & & & 
&\t&\t &\t \\\\\n& & & & & & &\t&\t &\t \\\\\n& & & & & & &\t&\t &\t \\\\\n\\hline\n\\multirow{7}{*}{\\begin{minipage}{.2\\textwidth}\n\\centering\n\\includegraphics[width=2.8cm,height=1.5cm]{Figures\/3100.png}\n\\end{minipage}\n}& & & & & & &\t&\t &\t \\\\\n& Reference & - & - & 4 $\\times$ $10^{-5}$ & 0.279 & 6 $\\times$ $10^{-7}$&6 $\\times$ $10^{-5}$&0.368 &5 $\\times$ $10^{-8}$ \\\\\n& & & & & & &\t&\t &\t \\\\\n& & & & & & &\t&\t &\t \\\\\n& & & & & & &\t&\t &\t \\\\\n& & & & & & &\t&\t &\t \\\\\n\\hline\n\\end{tabular}\n\\label{tab:modelscores}\n\\end{table*}\nTable \\ref{tab:modelscores} shows the estimated BGGD model fitting parameters between disparity and motion (optical flow and motion vector) subband coefficients of the $100^{th}$ frame of a few reference S3D video sequences and their symmetric distortion combinations of the IRCCYN, LFOVIA and RMIT databases. \nThe results are computed at the first scale and $0^{0}$ orientation of the steerable pyramid decomposition. ($\\alpha_{o}, \\beta_{o}$) correspond to optical flow parameters and ($\\alpha_{m}, \\beta_{m}$) correspond to motion vector parameters. $\\chi$ indicates the goodness of fit value and it represents how well the proposed model fits our observations (joint histogram). The low values of the $\\chi$ point to the accuracy of the proposed model. $\\chi_{o}$ is the goodness of fit value computed for the optical flow case. Similarly, $\\chi_{m}$ is goodness of fit value for the motion vector case. In our analysis we observed that $\\chi_o$, $\\chi_m$ are in the range $10^{-8}$ and $10^{-6}$ for all S3D video sequences. From these results it is clear that the proposed BGGD model performs well in capturing the joint statistical dependencies between motion and disparity subband coefficients.\n\n\\subsection{Distortion Discrimination}\nSince we are interested in NR VQA of S3D videos, we explored the efficacy of the proposed model on distorted videos as well. Fig. \\ref{fig:1l1} shows the first frame of left reference view, and Figs. \\ref{fig:1ld} and \\ref{fig:1ld1} show the first frame of H.264 and JP2K distorted videos of corresponding reference view respectively. Due to a lack of other distortion types in the publicly available S3D video databases, we limited our analysis to H.264 and JP2K distortions. Figs. \\ref{fig:13dcontref}, \\ref{fig:13dconth264} and \\ref{fig:13dcontjp} show the isoprobability contour plots of the joint optical flow magnitude and disparity subband coefficient distribution of the reference, H.264 and JP2K distorted frames respectively. These plots are computed at the first scale and $0^{0}$ orientation of the steerable pyramid decomposition. The plots clearly show the strong dependencies between motion and depth components of an S3D view and further, the variation in the dependencies due to distortion. As a consequence, the effects of distortion manifest themselves in a change in the BGGD model parameters. Further, it can also be seen that H.264 and JP2K distortions result in distributions with heavier tails. We observed the same trend in the joint motion vector magnitude and disparity subband coefficients distributions. \n\nFigs. \\ref{fig:13dcontmref}, \\ref{fig:13dcontmh264} and \\ref{fig:13dcontmjp} show the isoprobability contour plots of the joint motion vector magnitude and disparity subband coefficient distribution of the reference, H264 and JP2K distorted frames respectively. 
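\nSuch isoprobability contours are easy to reproduce once a BGGD has been fitted: the fitted density is evaluated on a grid of coefficient values and its level sets are drawn, as in the following sketch (arbitrary toy parameter values, reusing the bggd_pdf function sketched earlier).\n\begin{verbatim}\n# Sketch: isoprobability contours of a fitted BGGD (toy parameter values).\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nM = np.array([[1.0, 0.4], [0.4, 1.0]])   # toy scatter matrix\nalpha, beta = 0.5, 0.6                   # toy scale and shape parameters\n\nu = np.linspace(-3.0, 3.0, 200)\nX, Y = np.meshgrid(u, u)\nZ = bggd_pdf(np.stack([X, Y], axis=-1), M, alpha, beta)\n\nplt.contour(X, Y, Z, levels=8)\nplt.xlabel('motion subband coefficient')\nplt.ylabel('disparity subband coefficient')\nplt.show()\n\\end{verbatim}\n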
As with optical flow, these plots correspond to the subband coefficients at the first scale and $0^0$ orientation of the steerable pyramid decomposition. Again, we see a similar distortion tracking trend in the corresponding BGGD model parameters. \nFig. \ref{fig:featsetrefall} shows the distribution of the BGGD coefficients ($\alpha_m$, $\beta_m$) computed at the first scale and $0^{0}$ orientation for all frames of the reference Boxers video and its H.264 distorted sequences at different QP levels (QP = 32, 38, 44). It is clear that the BGGD features are able to discriminate the quality variations in S3D videos. As mentioned earlier, we consider motion vector magnitude in our work primarily due to its amenability to fast implementation. Figs. \ref{fig:LumDepthDist} and \ref{fig:featsetrefall} demonstrate that motion vectors also have very good distortion discrimination properties.\n\n\subsection{Video Quality Algorithm}\nThe flowchart of the proposed algorithm is shown in Fig. \ref{fig:flowchart}. The feature extraction stage estimates frame-wise BGGD model parameters to represent the motion and depth quality features, and relies on the average 2D NR IQA score as the spatial quality feature. These features are then used to train an SVR for frame-wise quality scores. For video-level quality prediction, the individual frame-wise quality predictions are simply averaged. The algorithm is described in detail in the following. \n\subsection{Feature Extraction}\n\subsubsection{Motion Vector Estimation}\nThe temporal (motion) features are extracted using motion vectors. The motivation to use motion vectors is based on our observation in the previous section that they are as effective as optical flow in distortion discrimination. Also, motion vector computation takes a fraction of the cost of computing the optical flow vectors. We used the three-step search method \cite{jakubowski2013block} to estimate the motion vectors between successive frames and employed a macroblock size of 8 $\times$ 8. Further, the magnitude of the motion vector is used to compute the temporal feature in our algorithm:\n\begin{equation*}\nM_{s}=\sqrt{M_{H}^{2}+M_{V}^{2}},\n\end{equation*}\nwhere $M_{s}$ represents the motion vector strength, and $M_{H}$ and $M_{V}$ are the horizontal and vertical motion vector components.\nSeveral video quality models \cite{FLOSIM2015,ha2011perceptual} have effectively used motion vectors as temporal features.\n\begin{figure}\n\includegraphics[width=8cm]{Figures\/h264refallfeature.png}\n\caption{BGGD model parameter variation of reference and H.264 distortion levels of the Boxers video sequence of IRCCYN S3D database. Each point in the scatter plot represents the model parameter pair ($\alpha_m, \beta_m$) for a frame-pair in the corresponding video. 
The ($\\alpha_m, \\beta_m$) pair clearly discriminate varying quality levels.}\n\\label{fig:featsetrefall}\n\\end{figure}\n\n\\begin{figure}\n\\center\n\\begin{center}\n\\tikzstyle{block} = [rectangle, draw, fill=white!100, text width=7em, text centered, minimum height=2em]\n\\tikzstyle{block1} = [rectangle, draw, fill=white!100, text width=9em, text centered, minimum height=2em]\n\\tikzstyle{block2} = [rectangle, draw, fill=white!100, text width=13em, text centered, minimum height=2em]\n\\tikzstyle{block3} = [rectangle, draw, fill=white!100, text width=17em, text centered, minimum height=2em]\n\\tikzstyle{line} = [draw, color=black!200, line width=0.5mm,-latex']\n\\tikzstyle{line1} = [draw, color=black!200, -latex']\n\\vspace{1cm}\n\\begin{tikzpicture}[ node distance = 2.5cm,auto]\n\t \n\t \n\\node [block,draw=orange,line width=0.5mm] (A) {\\small Stereoscopic video};\n\\node [block, draw=magenta, below of=A, node distance = 1.5cm, xshift=-2cm] (B) {\\small Left video};\n\\node [block, draw=magenta, below of=A, node distance = 1.5cm, xshift=2cm] (C) {\\small Right video};\n\\node[rectangle,draw=blue,line width=0.5mm,dashed, fit=(A) (B) (C)](D) {};\n\n\\node [block1, draw=White, below of=D, node distance = 2.5cm, xshift=-2cm] (E) {\\bf{Temporal Features}};\n\\node [block1, draw=Violet, below of=E, node distance = 1cm, xshift=+0.3cm] (I) {\\small Motion Vector Estimation};\n\\node [block1, draw=Violet, below of=I, node distance = 1.5cm] (J) {\\small Steerable Pyramid Decomposition};\n\\node[rectangle,draw=green,line width=0.5mm,dashed, fit=(I) (J)](K) {};\n\n\\node [block1, draw=White, below of=D, node distance = 2.5cm, xshift=2.2cm] (M) {\\bf{Depth Features}};\n\\node [block1, draw=Violet, below of=M, node distance = 1cm, xshift=0cm] (N) {\\small Depth Map Estimation};\n\\node [block1, draw=Violet, below of=N, node distance = 1.5cm] (O) {\\small Steerable Pyramid Decomposition};\n\\node[rectangle,draw=VioletRed,line width=0.5mm,dashed, fit=(N) (O)](P) {};\n\\node [block1, draw=White, below of=D, node distance = 8.5cm, xshift=2cm] (R) {\\bf{Spatial Features}};\n\\node [block, draw=Violet, below of=R, node distance = 0.8cm, xshift=0cm] (S) {\\small NR 2D IQA models};\n\\node[rectangle,draw=RoyalPurple,line width=0.5mm,dashed, fit=(R) (S)](T) {};\n\\node [block2, draw=Plum, line width=0.5mm, below of=K, node distance = 2.5cm, xshift=1.3cm] (U) {\\bf{BGGD Model Fit Features}};\n\\node[rectangle,draw=Brown,line width=0.5mm,dashed, fit=(E) (K) (M) (P) (U)](Q) {};\n\\node [block, draw=red, line width=0.5mm, dashed, below of=Q, node distance = 4cm, xshift=-2cm] (W) {\\small DMOS scores};\n\\node [block3, draw=PineGreen, line width=0.5mm, below of=Q, node distance = 6cm, xshift=0cm] (V) {\\bf{Supervised Learning, Regression}};\n\\path [line1] (A.south) -- +(0.0em,-0.5em) -- +(-5.66em,-0.5em) -- (B.north);\n\\path [line1] (A.south) -- +(0.0em,-0.5em) -- +(5.66em,-0.5em) -- (C.north);\n\\path [line1] ([xshift=-0cm]I.south) -- ([xshift=-0cm]J.north);\n\\path [line1] ([xshift=-0cm]N.south) -- ([xshift=-0cm]O.north);\n\n\\path [line1] ([xshift=-0cm]K.south) -- +(0.0cm,-0.5cm) -- +(1cm,-0.5cm) --(U.north);\n\\path [line1] ([xshift=-0cm]P.south) -- +(0.0cm,-0.5cm) -- +(-2.3cm,-0.5cm) --(U.north);\n\n\\path [line] ([xshift=-0cm]D.south) -- ([xshift=-0.178cm]Q.north);\n\\path [line] ([xshift=-0.3cm]Q.south) -- ([xshift=-0.3cm]V.north)\n\\path [line] ([xshift=-0cm]D.south) -- +(0.0em,-1em) -- +(-4.2cm,-1em) -- +(-4.2cm,-6.5cm) -- +(-1.815cm,-6.5cm) --(W.north)\n\\path [line] ([xshift=0.5cm]W.south) -- 
([xshift=-1.5cm]V.north)\n\\path [line] ([xshift=-0cm]D.south) -- +(0.0em,-1em) -- +(4.5cm,-1em) -- +(4.5cm,-6.3cm) -- +(2cm,-6.3cm) --(T.north)\n\\path [line] (T.south) -- ([xshift=1.825cm]V.north)\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{\\small Flowchart of the proposed VQUEMODES algorithm.}\n\\label{fig:flowchart}\n\\end{figure}\n\\subsubsection{Disparity Estimation}\nThe human visual system takes two retinal views as input and fuses them into a single scene point by converting the scene differences of the two views into disparity\/depth information. The simple and complex cells extract the structural properties of both views (spatial and temporal) and further, this information is processed in the primary visual cortex to construct a single scene point illusion of depth perception \\cite{levelt1968binocular,hubel1965receptive}. In our work, the disparity maps are computed using an SSIM-based stereo-matching algorithm \\cite{chen2013full} for a given stereoscopic frame pair at every time instant. This algorithm works on the principle of finding the best matching block in the right view of a specific block in the corresponding left view. We chose this algorithm based on a trade-off between accuracy and time complexity. Since the temporal features are computed at a block size of $8 \\times 8$, we downsampled the disparity map subbands to the same size by averaging over an $8 \\times 8$ block. \n\\subsubsection{Motion and Depth Feature Extraction}\nOnce the motion vector map and disparity map is computed for every frame, the BGGD model parameters are estimated using the joint histogram of the subband coefficients of the motion vector magnitude and the disparity map. As mentioned earlier, we rely on the method in \\cite{su2015oriented} for parameter estimation. Specifically, these are computed at three scales and six orientations ($0^0,30^0,60^0,90^0,120^0,150^0$) of the steerable pyramid decomposition. \n\\subsubsection{Spatial Feature Extraction}\nThe third feature we employ in our algorithm is the spatial quality of the video. Spatial quality is computed on a frame-by-frame basis by averaging the spatial qualities of the left and right views of an S3D video. \n\\begin{equation*}\nS_i=\\frac{Spat_{i}^{L}+Spat_{i}^{R}}{2},\n\\end{equation*}\nwhere, $S_{i}$ represents the overall spatial quality score of the $i^{th}$ frame of an S3D video, $L,R$ represent the left and right views of an S3D video respectively. $Spat$ represents the 2D spatial quality score of the frame as computed using a state-of-the-art 2D NR IQA algorithm.\nSpecifically, we rely on the following NR IQA algorithms for estimating the spatial quality score of the frames: \nSparsity Based Image Quality Evaluator (SBIQE) \\cite{sbiqepriya}, Blind\/referenceless Image Spatial Quality Evaluator (BRISQUE) \\cite{mittal2012no}, Natural Image Quality Evaluator (NIQE) \\cite{mittal2013making}. For brevity, we refer the reader to the respective references for descriptions of these algorithms. All these algorithms deliver state-of-the-art performance on standard IQA databases. \n\n\\begin{figure}\n\\center\n\\includegraphics[width=8cm]{Figures\/h264refallNIQE1.png}\n\\caption{Frame-wise NIQE scores of the Boxers video reference sequence and H.264 distortion versions. It is evident that quality variations are clearly tracked.}\n\\label{fig:featspaNIQE}\n\\end{figure}\nFig. \\ref{fig:featspaNIQE} shows the frame-wise NIQE scores of reference and H.264 distorted versions of the Boxers video sequence of IRCCYN S3D video database. 
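\nFor completeness, the block-matching principle behind the disparity estimation described above can be sketched as follows; this is a simplified illustration rather than the exact implementation of \cite{chen2013full}, and it assumes 8-bit grayscale frames, a fixed $8 \times 8$ block size and a purely horizontal search.\n\begin{verbatim}\n# Sketch: SSIM-based block matching between the left and right views.\n# For each 8x8 block of the left view, search horizontally (to the left)\n# in the right view for the shift that maximizes SSIM.\nimport numpy as np\nfrom skimage.metrics import structural_similarity as ssim\n\ndef disparity_map(left, right, block=8, max_disp=64):\n    H, W = left.shape\n    disp = np.zeros((H // block, W // block))\n    for by in range(H // block):\n        for bx in range(W // block):\n            y, x = by * block, bx * block\n            ref = left[y:y + block, x:x + block]\n            best_score, best_d = -np.inf, 0\n            for d in range(0, min(max_disp, x) + 1):\n                cand = right[y:y + block, x - d:x - d + block]\n                score = ssim(ref, cand, data_range=255)\n                if score > best_score:\n                    best_score, best_d = score, d\n            disp[by, bx] = best_d\n    return disp\n\\end{verbatim}\n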
As Fig.~\ref{fig:featspaNIQE} shows, the NIQE scores clearly vary with the quality level and are therefore a good discriminatory feature for quality prediction. \n\subsection{Supervised Learning and Quality Estimation}\nAs mentioned previously, we used three spatial scales and six orientations in our analysis, resulting in a total of 18 subbands for every stereoscopic video frame. The BGGD model parameters are computed for every subband, resulting in a feature vector $f$= $[\alpha^1\ldots\alpha^{18}; \beta^1\ldots\beta^{18}]$ per frame. For an S3D video, the feature vector set is $[f_1, f_2 \dots f_{n-1}]$, where $n$ is the number of video frames and $f_{i}$ = $[\alpha_i^1\ldots\alpha_i^{18}; \beta_i^1\ldots\beta_i^{18}]$; $1 \leq i \leq n-1$. The spatial quality feature ($S_{i}$) is appended to the aforementioned BGGD features to explicitly account for spatial quality. Finally, the feature vector of a video frame is $f_{i}^{s}=[\alpha_i^1\ldots\alpha_i^{18}; \beta_i^1\ldots\beta_i^{18}; S_{i}]$. We believe that over short temporal durations, the average DMOS score of an S3D video and the frame-level DMOS score are highly correlated and are interchangeable. Therefore, we perform the regression with the frame-level features $f_{i}^{s}$ as inputs and the video-level DMOS score $D$ as their label. For video $V$, \n\begin{equation}\nf_{i}^{s^{V}}=[\alpha_i^1\ldots\alpha_i^{18}; \beta_i^1\ldots\beta_i^{18};S_{i}],\n\label{eqn:consolidated_features}\n\end{equation} \nwith the corresponding label $D_V$. This feature vector and label set is used to train an SVR. \nSVR has been shown to provide good performance even when the available training set is small, to perform accurately in one-versus-rest schemes \cite{rifkin2004defense}, to provide sparse solutions \cite{scholkopf1997comparing}, and to give an accurate estimate of the global minimum \cite{cristianini2000introduction}. In our work, we used the radial basis function (RBF) kernel as it gave the best overall performance. \n\nWe use regression to estimate the scores of the test video frames. It should be noted that the training and regression happen at the frame level. The overall (video-level) quality score is estimated by averaging the frame-level quality estimates. \n\n\section{Results and Discussion}\n\label{sec:results}\nWhile several S3D quality assessment databases have been reported in the literature, few of them are open source. We report our results on two popular publicly available databases: the IRCCYN and LFOVIA S3D VQA databases.\n\nThe IRCCYN database \cite{urvoy2012nama3ds1} is composed of 10 pristine and 70 distorted video sequences with a good variation in texture, motion and depth components. The video sequences were captured with a Panasonic AG-3DA1E twin-lens camera with a baseline separation of 60 mm between the lenses. The video sequences are either 16 or 13 seconds in duration and have a resolution of 1920 $\times$ 1080 pixels with a frame rate of 25 fps. All the video sequences are encoded in YUV420P format and saved in an .avi container. This database covers H.264 and JP2K compression distortions. The H.264 compression artefacts were added with the JM reference software by varying the quantization parameter (QP = 32, 38, 44), and the JP2K artefacts (2, 8, 16, 32 Mb\/s) were added on a frame-by-frame basis for both views. These compression artefacts are symmetrically applied on the left and right videos. 
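\nFor illustration, the supervised learning step described above can be transcribed schematically as follows; the data below are random stand-ins, and the feature standardization and hyperparameter value are illustrative choices rather than the exact settings used in our experiments.\n\begin{verbatim}\n# Sketch: frame-level SVR training and video-level quality prediction.\nimport numpy as np\nfrom sklearn.svm import SVR\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import make_pipeline\n\n# Toy stand-in data: in practice X_train stacks the 37-dimensional frame\n# feature vectors (36 BGGD parameters + 1 spatial score) of the training\n# videos, and y_train repeats each video's DMOS for all of its frames.\nrng = np.random.default_rng(0)\nX_train = rng.normal(size=(1000, 37))\ny_train = rng.uniform(0.0, 100.0, size=1000)\n\nmodel = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0))\nmodel.fit(X_train, y_train)\n\ndef predict_video_quality(frame_features):\n    # frame_features: (num_frames, 37) array for one test video\n    return float(np.mean(model.predict(frame_features)))\n\\end{verbatim}\n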
The subjective study is performed in absolute category rating with hidden reference (ACR-HR) method and they published the DMOS values as quality scores. In our evaluation, we limited our temporal extent to the first 10 seconds of the video to avoid issues with blank frames. This applies specifically to the Boxers and Soccer in the database. \n\nThe LFOVIA database \\cite{appina2016subjective1} has H.264 compressed stereoscopic video sequences. The database has 6 pristine and 144 distorted video sequences and the videos are encoded in YUV 420P format and saved in mp4 container. The compression artifacts were introduced using \\textit{ffmpeg} by changing the bitrate (100, 200, 350, 1200 Kbps) as the quality variation parameter. The video sequences have a resolution of 1836 $\\times$ 1056 pixels with a frame rate of 25fps and a duration of 10 sec. This database is a combination of symmetric and asymmetric stereoscopic video sequences. They also performed the subjective study in ACR-HR method and published DMOS values as quality scores. The RMIT database has 47 reference video sequences and does not have any distorted video sequences or subjective scores. Therefore, we did not perform quality assessment on this database.\n\n\\begin{table*}[htbp]\n\\small\n\\caption{2D and 3D IQA\/VQA performance evaluation on IRCCYN and LFOVIA S3D video databases. {\\bf{Bold names}} indicate NR QA methods.}\n\\centering\n\\begin{tabular}{|c| c| c| c| c| c| c|}\n\\hline\n \\multirow{2}{*}{\\bf Algorithm} & \\multicolumn{3}{c|}{\\bf IRCCYN Database}& \\multicolumn{3}{c|}{\\bf LFOVIA Database}\\\\\n \\cline{2-7} \n & {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} \\\\\n\\hline\nSSIM \\cite{wang2004} & 0.6359\t&\t0.2465\t&\t1.0264 &\t0.8816 & 0.8828\t&\t6.1104\\\\\n\\hline\nMS-SSIM \\cite{wang2003multiscale} &0.9100\t&\t0.8534\t&\t0.6512&0.8172\t&\t0.7888\t&\t8.9467\\\\\n\\hline \n\\hline\n{\\bf{SBIQE}} \\cite{sbiqepriya} &0.0081\t&\t0.0054\t&\t1.2712& 0.0010\t&\t0.0043\t&\t16.0311\\\\\n\\hline\n{\\bf{BRISQUE}} \\cite{mittal2012no} &0.7535\t&\t0.8145\t&\t0.6535&0.6182\t&\t0.6000\t&\t12.6001\\\\\n\\hline\n{\\bf{NIQE}} \\cite{mittal2013making} & 0.5729&\t0.5664\t&0.8464& 0.7206\t&\t0.7376\t&11.1138\\\\\n\\hline\n\\hline \nSTMAD \\cite{vu2011spatiotemporal}& 0.6400 & 0.3495 & 0.9518& 0.6802 & 0.6014 & 9.4918\\\\\n\\hline\nFLOSIM \\cite{FLOSIM2015} & 0.9178&\t0.9111\t&0.4918 &- &- &-\\\\\n\\hline\n\\hline\nChen \\textit{et al.}\\cite{chen2013full} & 0.7886\t&\t0.7861\t&\t0.7464&0.8573\t&\t0.8588\t&6.6655\\\\\n\\hline\nSTRIQE \\cite{khan2015full} & 0.7931\t&\t0.6400\t&\t0.7544& 0.7543\t&\t0.7485\t&\t8.5011\\\\\n\\hline \n\\hline\n\\bf{VQUEMODES (NIQE)}& \\textbf{0.9697}&\\textbf{0.9637}&\\textbf{0.2635} &\\textbf{0.8943}&\\textbf{0.8890}&\\textbf{5.9124}\\\\\n\\hline\n\\end{tabular}\n\\label{table:object1}\t\n\\end{table*}\n\n \\begin{table*}[htbp]\n\\small\n\\caption{2D and 3D IQA\/VQA performance evaluation on different distortions of IRCCYN S3D video database. 
{\\bf{Bold names}} indicate NR QA methods.}\n\\centering\n\\begin{tabular}{|c| c| c| c| c| c| c|}\n\\hline\n \\multirow{2}{*}{\\bf Algorithm} & \\multicolumn{3}{c|}{\\bf H.264}& \\multicolumn{3}{c|}{\\bf JP2K}\\\\\n \\cline{2-7} \n & {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} \\\\\n\\hline\nSSIM \\cite{wang2004} & 0.7674 & 0.5464 & 0.8843 & 0.7283\t&\t0.5974\t&\t0.9202 \\\\\n\\hline\nMS-SSIM \\cite{wang2003multiscale} & 0.8795 & 0.6673 & 0.6955 &0.9414\t&\t0.9299\t&\t0.4327 \\\\\n\\hline\n\\hline\n{\\bf{SBIQE}} \\cite{sbiqepriya} & 0.0062 & 0.0058 & 1.9856 & 0.0120\t&\t0.0574\t&\t1.0289 \\\\\n\\hline\n{\\bf{BRISQUE}} \\cite{mittal2012no} & 0.7915 & 0.7637 & 0.7912 &0.8048\t&\t0.8999\t&\t0.5687 \\\\\n\\hline \n{\\bf{NIQE}} \\cite{mittal2013making} & 0.6814 & 0.6412 & 0.8715 & 0.6558\t&\t0.6427\t&\t0.7157 \\\\\n\\hline\n\\hline\nSTMAD \\cite{vu2011spatiotemporal}& 0.7641&0.7354 & 0.7296& 0.8388&0.7236 &0.7136 \\\\\n\\hline\nFLOSIM \\cite{FLOSIM2015}& 0.9265 & 0.8987 &0.4256 & 0.9665\t&\t0.9495\t&0.3359\\\\\n\\hline\n\\hline\nChen \\textit{et al.} \\cite{chen2013full} & 0.6618 & 0.5720 & 0.6915 & 0.8723\t&\t0.8724\t&\t0.6182 \\\\\n\\hline\nSTRIQE \\cite{khan2015full} & 0.7430 & 0.7167 & 0.8433 & 0.8403\t&\t0.8175\t&\t0.5666 \\\\\n\\hline\n\\hline\n\\bf{VQUEMODES (NIQE)} &\\textbf{0.9594} & \\textbf{0.9439} &\\textbf{0.1791} &\\textbf{0.9859}&\\textbf{0.9666}&\\textbf{0.0912}\\\\\n\\hline\n\\end{tabular}\n\\label{table:object2}\n\\end{table*}\n\n\\begin{table*}[htbp]\n\\small\n\\caption{2D and 3D IQA\/VQA performance evaluation on symmetric and asymmetric stereoscopic videos of LFOVIA S3D video database. {\\bf{Bold names}} indicate NR QA methods.}\n\\centering\n\\begin{tabular}{|c| c| c| c| c| c| c|}\n\\hline\n \\multirow{2}{*}{\\bf Algorithm} & \\multicolumn{3}{c|}{\\bf Symm}& \\multicolumn{3}{c|}{\\bf Asymm}\\\\\n \\cline{2-7} \n & {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} \\\\\n\\hline\nSSIM \\cite{wang2004} & 0.9037 & 0.8991 & 7.0246 & 0.8769 & 0.8755&\t5.8162\t \\\\\n\\hline\nMS-SSIM \\cite{wang2003multiscale} & 0.8901 & 0.8681 & 21.2322 &0.8423\t&\t0.7785\t&\t15.1681\\\\\n\\hline\n\\hline\n{\\bf{SBIQE}} \\cite{sbiqepriya} & 0.0006 & 0.0027 & 25.4484 & 0.0021\t&\t0.0051\t&\t12.0123 \\\\\n\\hline\n{\\bf{BRISQUE}} \\cite{mittal2012no} & 0.7829 & 0.7859 & 15.8298 &0.5411\t&\t0.5303\t&\t10.1719 \\\\\n\\hline \n{\\bf{NIQE}} \\cite{mittal2013making} & 0.8499 & 0.8705 & 13.4076 & 0.6835\t&\t0.6929\t& 8.8334 \\\\\n\\hline\n\\hline\nSTMAD \\cite{vu2011spatiotemporal}& 0.7815 &0.8000 & 10.2358& 0.6534&0.6010 &9.1614 \\\\\n\\hline\n\\hline\nChen \\textit{et al.} \\cite{chen2013full} & 0.9435 & 0.9182 &5.4346 & 0.8370\t&\t0.8376 & 6.6218\\\\\n\\hline\nSTRIQE \\cite{khan2015full} & 0.8275 & 0.8017 &9.2105 & 0.7559\t&\t0.7492\t&7.9321 \\\\\n\\hline\n\\hline\n\\bf{VQUEMODES (NIQE)} &\\textbf{0.9285} &\\textbf{0.9236} & \\textbf{3.9852} &\\textbf{0.8955}&\t\\textbf{0.8490}&\\textbf{6.9563} \\\\\n\\hline\n\\end{tabular}\n\\label{table:object3}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{Performance evaluation on IRCCYN S3D video database with different 2D NR IQA models in proposed algorithm.}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multirow{2}{*}{\\bf Algorithm} & \\multicolumn{3}{c|}{\\bf H.264}& \\multicolumn{3}{c|}{\\bf JP2K}& \\multicolumn{3}{c|}{\\bf Overall}\\\\\n\\cline{3-11}\n& & {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & 
{\\bf RMSE} \\\\\n\\hline\n\\multirow{3}{*}{\\bf{VQUEMODES}} & SBIQE \\cite{sbiqepriya}&0.9262&0.8956&0.3286&0.9706&0.9473&0.2415&0.9622&0.9377&0.3039\\\\\n\\cline{2-11}\n & BRISQUE \\cite{mittal2012no} & 0.9517 & 0.9323 & 0.2319 & 0.9809\t&\t0.9577\t&\t0.1097 & 0.9624\t&\t0.9482\t&\t0.3019 \\\\\n\\cline{2-11}\n & NIQE \\cite{mittal2013making} &\\textbf{0.9594} & \\textbf{0.9439} &\\textbf{0.1791} &\\textbf{0.9859}&\\textbf{0.9666}&\\textbf{0.0912}& \\textbf{0.9697}&\\textbf{0.9637}&\\textbf{0.2635} \\\\\n\\cline{1-11}\n\\hline\n\\end{tabular}\n\\label{table:object4}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{Performance evaluation on LFOVIA S3D video database with different 2D NR IQA models in proposed algorithm.}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n&\\multirow{2}{*}{\\bf Algorithm} & \\multicolumn{3}{c|}{\\bf Symm}& \\multicolumn{3}{c|}{\\bf Asymm}& \\multicolumn{3}{c|}{\\bf Overall}\\\\\n\\cline{3-11}\n& & {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} \\\\\n\\hline\n\\multirow{3}{*}{\\bf{VQUEMODES}} & SBIQE \\cite{sbiqepriya}&0.9000 & 0.8913 & 4.5900 & 0.8532&0.8234&7.1376 & 0.8483&0.8341&7.3476\\\\\n\\cline{2-11}\n & BRISQUE \\cite{mittal2012no} & 0.9125 & 0.9013 &4.3980 &0.8792&0.8489&6.8563 &0.8827\t&0.8693\t&5.9859\\\\\n\\cline{2-11}\n & NIQE \\cite{mittal2013making} &\\textbf{0.9285} &\\textbf{0.9236} & \\textbf{3.9852} &\\textbf{0.8955}&\t\\textbf{0.8490}&\\textbf{6.9563} &\\textbf{0.8943}&\\textbf{0.8890}&\\textbf{5.9124}\\\\\n\\cline{1-11}\n\\hline\n\\end{tabular}\n\\label{table:object5}\n\\end{table*}\n\\begin{table*}\n\\caption{Performance comparison with different 3D VQA metrics on IRCCYN S3D video database. {\\bf{Bold names}} indicate NR QA methods.}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\bf Algorithm}& \\multicolumn{3}{c|}{\\bf H.264}& \\multicolumn{3}{c|}{\\bf JP2K}& \\multicolumn{3}{c|}{\\bf Overall}\\\\\n\\cline{2-10}\n& {\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} &{\\bf LCC} & {\\bf SROCC} & {\\bf RMSE} \\\\\n\\hline\nTemporal FLOSIM \\cite{FLOSIM2015} & 0.6453 & 0.5489 & 0.6958 & 0.8441 & 0.8278 & 0.7027 & 0.7252 & 0.7097 & 0.8528 \\\\\n\\hline\n{\\bf{VQUEMODES}} (no spatial) & 0.9253 & 0.8955 & 0.3555 & 0.9690 & 0.9477 & 0.2572 & 0.9569 & 0.9330 & 0.3162 \\\\\n\\hline\nPQM \\cite{Joveluro2010}& - & - & - & -&-&- & 0.6340&0.6006&0.8784\\\\\n\\hline\n${\\text{Chen}}_{3D}$ \\cite{Flosim3D2017} & 0.7963 & 0.8035 & 2.5835 & 0.9358 & 0.8884 & 3.2863 & 0.8227 & 0.8201 & 2.9763\\\\\n\\hline \n${\\text{STRIQE}}_{3D}$ \\cite{Flosim3D2017} & 0.6836 & 0.6263 & 2.3683 & 0.8778 & 0.8513 & 3.2121 & 0.7599 & 0.7525 & 2.8374 \\\\\n\\hline\n${{\\text{FLOSIM}}_{3D}}$ \\cite{Flosim3D2017} & 0.9589 & 0.9478 & 0.3863 & 0.9738\t&\t0.9548\t&0.2976 & 0.9178&\t0.9111\t&0.4918\\\\\n\\hline \nPHVS-3D \\cite{jin2011frqa3d}& - & - & - & -&-&- & 0.5480&0.5146&0.9501\\\\\n\\hline\n3D-STS \\cite{Han2012VCIP}& - & - & - & -&-&- & 0.6417&0.6214&0.9067\\\\\n\\hline\nSJND-SVA \\cite{qi2016stereoscopic} & 0.5834 & 0.6810 & 0.6672 & 0.8062&0.6901&0.5079 & 0.6503&\t0.6229\t&0.8629\\\\\n\\hline\n{\\bf{Yang \\textit{et al.}}} \\cite{yang2017}& - & - & - & -&-&- & 0.8949&0.8552&0.4929\\\\\n\\hline\n{\\bf{BSVQE}} \\cite{chen2017blind}& 0.9168 & 0.8857 & - & 0.8953&0.8383&- & 0.9239&0.9086&-\\\\\n\\hline\n\\bf{VQUEMODES (NIQE)} &\\textbf{0.9594} & \\textbf{0.9439} &\\textbf{0.1791} 
&\\textbf{0.9859}&\\textbf{0.9666}&\\textbf{0.0912}& \\textbf{0.9697}&\\textbf{0.9637}&\\textbf{0.2635} \\\\\n\\hline\n\\end{tabular}\n\\label{table:object6}\n\\end{table*}\n\\begin{figure}[htbp]\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmosall1.png}\n\\subcaption{\\small VQUEMODES (no spatial).}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmossbiqe1.png}\n\\subcaption{\\small VQUEMODES (SBIQE).} \n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmosbrisqe1.png}\n\\subcaption{\\small VQUEMODES (BRISQUE).} \n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmosniqe1.png}\n\\subcaption{\\small VQUEMODES (NIQE).} \n\\end{subfigure}\n\\caption{Scatter plots of the proposed VQUEMODES algorithm predictions without and with different spatial metrics (SBIQE\/BRISQUE\/NIQE) versus DMOS values on the IRCCYN database.}\n\\label{fig:scatterIRCCYN}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmoslfoviaall1.png}\n\\subcaption{\\small VQUEMODES (no spatial).}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmoslfoviasbiqe1.png}\n\\subcaption{\\small VQUEMODES (SBIQE).} \n\\end{subfigure}\n\\\\\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmoslfoviabrisqe1.png}\n\\subcaption{\\small VQUEMODES (BRISQUE).} \n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[height=3cm]{Figures\/dmoslfovianiqe1.png}\n\\subcaption{\\small VQUEMODES (NIQE).} \n\\end{subfigure}\n\\caption{Scatter plots of the proposed VQUEMODES algorithm predictions without and with different spatial metrics (SBIQE\/BRISQUE\/NIQE) versus DMOS values on the LFOVIA database.}\n\\label{fig:scatterLFOVIA}\n\\end{figure}\n\nFor both databases, 80\\% of the videos are used for SVR training and the remaining samples are used for testing (regression). In other words, the training and test sets are obtained by partitioning the set of available videos in the 80:20 proportion. Once this video-level partitioning is done, the actual training happens at the frame level. During regression, the frame-level scores are estimated and averaged to compute the video-level quality score. We empirically justify the averaging of the frame-level scores to generate the video-level score. In over 1000 regression iterations, we found that the standard deviation of frame-level scores for a given video varied between $0.2 \\times 10^{-8}$ and 0.25. This observation, combined with the high correlation of the average scores with DMOS values shown in Tables \\ref{table:object1}--\\ref{table:object6}, provides evidence for the effectiveness of our approach. \n\nWe used the open-source SVM package {\\em{LIBSVM}} \\cite{chang2011libsvm} in our experiments. \nWe performed the training and testing 1000 times for statistical consistency with a random assignment of video-level samples without overlap between the training and testing sets. The reported results are the average over these 1000 trials. The performance of the proposed metric is measured using the following statistical measures: Linear Correlation Coefficient (LCC), Spearman's Rank Order Correlation Coefficient (SROCC) and Root Mean Square Error (RMSE). 
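To make the frame-level training and video-level aggregation concrete, a minimal sketch is given below. It uses scikit-learn's RBF-kernel SVR as a stand-in for LIBSVM, the function and variable names are ours, and the per-frame feature matrices are assumed to have 37 columns (18 $\\alpha$, 18 $\\beta$, and one spatial score).
\\begin{verbatim}
import numpy as np
from sklearn.svm import SVR

def train_frame_level_svr(frame_features, video_dmos):
    # frame_features: list of (n_frames_v, 37) arrays, one per training video;
    # video_dmos: list of video-level DMOS labels, repeated for every frame.
    X = np.vstack(frame_features)
    y = np.concatenate([np.full(len(f), d)
                        for f, d in zip(frame_features, video_dmos)])
    model = SVR(kernel="rbf", C=1.0, gamma="scale")  # RBF kernel, as in the text
    return model.fit(X, y)

def predict_video_score(model, frame_features):
    # Average the frame-level predictions to obtain the video-level score.
    return float(np.mean(model.predict(frame_features)))
\\end{verbatim}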
LCC signifies the linear dependence between two variables and SROCC reveals the monotonic relationship between two quantities. Higher LCC and SROCC values point to better agreement between the subjective and objective scores. RMSE quantifies the magnitude of the error between the estimated quality scores and the DMOS values. All these results are evaluated after performing a non-linear logistic fit. We followed the standard procedure recommended by the Video Quality Experts Group (VQEG) \\cite{website:LIVE_Database_Report} to perform the non-linear regression with the 4-parameter logistic transform given by \n\\begin{equation*}\nf(x)=\\frac{\\tau_{1}-\\tau_{2}}{1+\\text{exp}({\\frac{x-\\tau_{3}}{|\\tau_{4}|}})}+\\tau_{2},\n\\end{equation*}\nwhere {\\em{x}} denotes the raw objective score, and $\\tau_{1}, \\tau_{2}, \\tau_{3}$ and $\\tau_{4}$ are the free parameters selected to provide the best fit of the predicted scores to the DMOS values. \n\nTable \\ref{table:object1} shows the performance of the proposed metric VQUEMODES (NIQE) on the IRCCYN and LFOVIA databases. We used the NIQE scores as the spatial quality score in the proposed algorithm. We also compared our metric's performance with different state-of-the-art 2D IQA\/VQA and 3D IQA\/VQA models. SSIM \\cite{wang2004} and MS-SSIM \\cite{wang2003multiscale} are FR 2D IQA metrics, and SBIQE \\cite{sbiqepriya}, NIQE \\cite{mittal2013making} and BRISQUE \\cite{mittal2012no} are 2D NR IQA metrics. These IQA metrics were applied on a frame-by-frame basis for each view and the final quality is computed by averaging the frame scores of both views. STMAD \\cite{vu2011spatiotemporal} and FLOSIM \\cite{FLOSIM2015} are 2D FR VQA metrics applied on the individual views, and the final score is computed as the mean of the two view scores. Chen \\textit{et al.} \\cite{chen2013full} and STRIQE \\cite{khan2015full} are stereoscopic FR IQA metrics. These metrics were applied on a frame-by-frame basis to the 3D video and the final quality score is computed as the mean of the frame-level quality scores. From the table it is clear that the proposed metric outperforms all of the 2D IQA\/VQA and 3D IQA models.\n\nTable \\ref{table:object2} shows the proposed metric's evaluation on the different distortions of the IRCCYN database. Table \\ref{table:object3} shows the performance evaluation of the proposed metric on the symmetric and asymmetric S3D video sequences of the LFOVIA database. A symmetric stereoscopic video has both views at the same quality, while an asymmetric stereoscopic video has each view at a different quality. From these results it is clear that the proposed metric delivers state-of-the-art performance compared to the other quality metrics, both across distortion types (H.264 and JP2K) and for symmetric and asymmetric distorted stereoscopic video sequences. \n\nWe checked the efficacy of the proposed algorithm by replacing the NIQE scores with other popular 2D NR IQA methods as well. Tables \\ref{table:object4} and \\ref{table:object5} show the performance evaluation of the proposed metric VQUEMODES on the IRCCYN and LFOVIA databases, respectively. From these results it is clear that the proposed metric demonstrates consistent and state-of-the-art results across spatial metrics and across distortion types. Table \\ref{table:object6} shows the performance comparison of the proposed method with different 3D VQA metrics on the IRCCYN database. 
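As an aside on the evaluation protocol, the four-parameter logistic mapping above can be fitted with standard non-linear least squares. The following is a minimal sketch; the function names and the parameter-initialization heuristic are ours.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, t1, t2, t3, t4):
    # Four-parameter logistic transform mapping objective scores to DMOS.
    return (t1 - t2) / (1.0 + np.exp((x - t3) / np.abs(t4))) + t2

def fit_logistic(objective_scores, dmos):
    # Fit tau_1..tau_4 by non-linear least squares; return mapped scores.
    p0 = [np.max(dmos), np.min(dmos), np.median(objective_scores), 1.0]
    popt, _ = curve_fit(logistic4, objective_scores, dmos, p0=p0, maxfev=10000)
    return logistic4(np.asarray(objective_scores), *popt), popt
\\end{verbatim}
LCC and RMSE are then computed between the mapped scores and the DMOS values, while SROCC is unaffected by such a monotonic mapping.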
PQM \\cite{Joveluro2010}, $FLOSIM_{3D}$ \\cite{Flosim3D2017}, PHVS-3D \\cite{jin2011frqa3d}, 3D-STS \\cite{Han2012VCIP} and SJND-SVA \\cite{qi2016stereoscopic} are S3D FR VQA models. Temporal FLOSIM is a part of the FLOSIM \\cite{FLOSIM2015} metric which computes the quality score of a video based on disturbances in the motion components. $Chen_{3D}$ and $STRIQE_{3D}$ are 3D FR VQA metrics; these models extend the Chen \\textit{et al.} and STRIQE 3D IQA metrics by including the motion scores computed by Temporal FLOSIM. Yang \\textit{et al.} \\cite{yang2017} and BSVQE \\cite{chen2017blind} are S3D NR VQA models.\n\n\nTo highlight the effectiveness of using joint motion and depth statistics for NR VQA, we report the performance of VQUEMODES without including the spatial quality feature in Table \\ref{table:object6}. The corresponding scatter plot for this case is shown in Fig. \\ref{fig:scatterIRCCYN}. It is clear from these numbers and plots that our proposed statistical features are indeed very effective for S3D NR VQA. It is also clear that \nthe combination of the joint motion and depth statistical features with the spatial quality feature (NIQE) shows a small but consistent performance improvement (compared to the stand-alone motion and depth features), indicating that spatial quality does play a role in S3D VQA. Figs. \\ref{fig:scatterIRCCYN} and \\ref{fig:scatterLFOVIA} show the scatter plots of the proposed algorithm with different spatial metrics on the IRCCYN and LFOVIA S3D databases, respectively. These scatter plots also provide corroborative evidence for the small but important role played by the spatial feature in improving overall NR VQA performance. \n\\section{Conclusions and Future work}\n\\label{sec:conclusions}\nInspired by the neurophysiological response of the MT area to motion and depth inputs, we proposed a BGGD model to capture the joint statistical dependencies between motion and disparity subband coefficients of natural S3D videos. \nThe utility and efficacy of the proposed BGGD model were demonstrated in an NR stereo VQA application dubbed VQUEMODES. VQUEMODES was evaluated on the IRCCYN and LFOVIA S3D video databases and shown to have state-of-the-art performance compared to the other 2D and 3D IQA\/VQA metrics.\nWe believe that the proposed model could be useful in several applications such as depth frame estimation from temporal feature maps, visual navigation, denoising, and quality assessment. \n\n\\appendices\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{ieeetr}\n\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nThe novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)\nfirst identified in Wuhan, China in December 2019 quickly spread\nacross the globe, leading to the declaration of a pandemic on March\n11, 2020~\\cite{WHOpandemic}. The emerging disease was termed\nCOVID-19. As of this January 2021 writing, more than 86 million\npeople have been infected, and more than 1.8 million deaths from\nCOVID-19 in more than 218 countries~\\cite{corona1} have been\nconfirmed. About 61 million people have recovered globally.\n\nProperly estimating the severity of any infectious disease is crucial\nfor identifying near-future scenarios, and designing intervention\nstrategies. This is especially true for SARS-CoV-2 given the relative\nease with which it spreads, due to long incubation periods,\nasymptomatic carriers, and stealth\ntransmissions~\\cite{he2020temporal}. 
Most measures of severity are\nderived from the number of deaths, the number of confirmed and\nunconfirmed infections, and the number of secondary cases generated by\na single primary infection, to name a few. Measuring these\nquantities, determining how they evolve in a population, and how they\nare to be compared across groups, and over time, is challenging due to\nmany confounding variables and uncertainties.\n\nFor example, quantifying COVID-19 deaths across jurisdictions must\ntake into account the existence of different protocols in assigning\ncause of death, cataloging co-morbidities \\cite{CDC_comorbidity}, and\nlag time reporting~\\cite{BBC_deaths}. Inconsistencies also arise in\nthe way deaths are recorded, especially when COVID-19 is not the\ndirect cause of death, rather a co-factor leading to complications\nsuch as pneumonia and other respiratory\nailments~\\cite{beaney2020excess}. In Italy, the clinician's best\njudgment is called upon to classify the cause of death of an untested\nperson who manifests COVID-19 symptoms. In some cases, such persons\nare given postmortem tests, and if results are positive, added to the\nstatistics. Criteria vary from region to\nregion~\\cite{onder2020case}. In Germany, postmortem testing is not\nroutinely employed, possibly explaining the large difference in\nmortality between the two countries. In the US, current guidelines\nstate that if typical symptoms are observed, the patient's death can\nbe registered as due to COVID-19 even without a positive\ntest~\\cite{CDC_death_definition}. Certain jurisdictions will list\ndates on which deaths actually occurred, others list dates on which\nthey were reported, leading to potential lag-times. Other countries\ntally COVID-19 related deaths only if they occur in hospital settings,\nwhile others also include those that occur in private and\/or nursing\nhomes.\n\nIn addition to the difficulty in obtaining accurate and uniform\nfatality counts, estimating the prevalence of the disease is also a\nchallenging task. Large-scale testing of a population where a fraction\nof individuals is infected, relies on unbiased sampling, reliable\ntests, and accurate recording of results. One of the main sources of\nsystematic bias arises from the tested subpopulation: due to shortages\nin testing resources, or in response to public health guidelines,\nCOVID-19 tests have more often been conducted on symptomatic persons,\nthe elderly, front-line workers and\/or those returning from\nhot-spots. Such non-random testing overestimates the infected fraction\nof the population.\n\nDifferent types of tests also probe different infected\nsubpopulations. Tests based on reverse-transcription polymerase chain\nreaction (RT-PCR), whereby viral genetic material is detected\nprimarily in the upper respiratory tract and amplified, probe\nindividuals who are actively infected. Serological tests (such as\nenzyme-linked immunosorbent assay, ELISA) detect antiviral antibodies\nand thus measure individuals who have been infected, including those\nwho have recovered.\n\nFinally, different types of tests exhibit significantly different\n``Type I'' (false positive) and ``Type II'' (false negative) error\nrates. The accuracy of RT-PCR tests depends on viral load which may be\ntoo low to be detected in individuals at the early stages of the\ninfection, and may also depend on which sampling site in the body is\nchosen. 
Within serological testing, the kinetics of antibody response\nare still largely unknown and it is not possible to determine if and\nfor how long a person may be immune from reinfection. Instrumentation\nerrors and sample contamination may also result in a considerable\nnumber of false positives and\/or false negatives. These errors\nconfound the inference of the infected fraction. Specifically, at low\nprevalence, Type I false positive errors can significantly bias the \nestimation of the IFR.\n\nOther quantities that are useful in tracking the dynamics of a\npandemic include the number of recovered individuals, tested, or\nuntested. These quantities may not be easily inferred from data and\nneed to be estimated from fitting mathematical models such as SIR-type\nODEs~\\cite{keeling2011modeling}, age-structured\nPDEs~\\cite{bottcher2020case}, or network\/contact\nmodels~\\cite{bottcher2017critical,bottcher2020unifying,pastor2015epidemic}.\n\nAdministration of tests and estimation of all quantities above can\nvary widely across jurisdictions, making it difficult to properly\ncompare numbers across them. In this paper, we incorporate excess\ndeath data, testing statistics, and mathematical modeling to\nself-consistently compute and compare mortality across different\njurisdictions. In particular, we will use excess mortality\nstatistics~\\cite{faust2020comparison,woolf2020excess,kontis2020magnitude}\nto infer the number of COVID-19-induced deaths across different\nregions. We then present a statistical testing model to estimate\njurisdiction-specific infected fractions and mortalities, their\nuncertainty, and their dependence on testing bias and errors. Our\nstatistical analyses and source codes are available at~\\cite{GitHub}.\n\n\\section*{Methods}\n\\subsection*{Mortality measures}\n\nMany different fatality rate measures have been defined to quantify\nepidemic outbreaks~\\cite{who_cfr_ifr}. One of the most common is the\ncase fatality ratio (${\\rm CFR}$) defined as the ratio between the\nnumber of confirmed ``infection-caused'' deaths $D_{\\rm c}$ in a\nspecified time window and the number of infections $N_{\\rm c}$\nconfirmed within the same time window, ${\\rm CFR} = D_{\\rm c}\/N_{\\rm\n c}$~\\cite{xu2020pathological}. Depending on how deaths $D_{\\rm c}$\nare counted and how infected individuals $N_{\\rm c}$ are defined, the\noperational CFR may vary. It may even exceed one, unless all deaths\nare tested and included in $N_{\\rm c}$.\n\nAnother frequently used measure is the infection fatality ratio (IFR)\ndefined as the true number of ``infection-caused'' deaths $D = D_{\\rm\n c} + D_{\\rm u}$ divided by the actual number of cumulative\ninfections to date, $N_{\\rm c} + N_{\\rm u}$. Here, $D_{\\rm u}$ is the\nnumber of unreported infection-caused deaths within a specified\nperiod, and $N_{\\rm u}$ denotes the untested or unreported infections\nduring the same period. Thus, ${\\rm IFR}= D\/(N_{\\rm c}+N_{\\rm u})$.\n\nOne major issue of both CFR and IFR is that they do not account for\nthe time delay between infection and resolution. Both measures may be\nquite inaccurate early in an outbreak when the number of cases grows\nfaster than the number of deaths and\nrecoveries~\\cite{bottcher2020case}. 
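As a toy numerical illustration of these two definitions (all numbers below are made up purely for illustration), the CFR uses only confirmed counts, while the IFR requires estimates of the unreported infections and deaths as well:
\\begin{verbatim}
def case_fatality_ratio(confirmed_deaths, confirmed_cases):
    # CFR = D_c / N_c, with both counts taken over the same time window.
    return confirmed_deaths / confirmed_cases

def infection_fatality_ratio(total_deaths, total_infections):
    # IFR = (D_c + D_u) / (N_c + N_u); includes unreported deaths/infections.
    return total_deaths / total_infections

# Hypothetical numbers: 1,000 confirmed deaths among 20,000 confirmed cases,
# but an estimated 1,200 total deaths among 100,000 total infections.
print(case_fatality_ratio(1000, 20000))        # 0.05
print(infection_fatality_ratio(1200, 100000))  # 0.012
\\end{verbatim}
Such under-ascertainment of infections is one reason why an operational CFR typically exceeds the corresponding IFR.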
An alternative measure that\navoids case-resolution delays is the confirmed resolved mortality\n$M=D_{\\rm c}\/(D_{\\rm c}+R_{\\rm c})$~\\cite{bottcher2020case}, where\n$R_{\\rm c}$ is the cumulative number of confirmed recovered cases\nevaluated in the same specified time window over which $D_{\\rm c}$ is\ncounted. One may also define the true resolved mortality via\n$\\mathcal{M}= D\/(D + R)$, the proportion of the actual number of\ndeaths relative to the total number of deaths and recovered\nindividuals during a specified time period. If we decompose $R =\nR_{\\rm c} + R_{\\rm u}$, where $R_{\\rm c}$ are the confirmed and\n$R_{\\rm u}$, the unreported recovered cases, $\\mathcal{M}= (D_{\\rm\n c}+D_{\\rm u})\/(D_{\\rm c}+D_{\\rm u} + R_{\\rm c}+R_{\\rm u})$.\nThe total confirmed population is defined as \n$N_{\\rm c} = D_{\\rm c} + R_{\\rm c} + I_{\\rm c}$, where $I_{\\rm c}$ the\nnumber of living confirmed infecteds. Applying these definitions to\nany specified time period (typically from the ``start'' of an epidemic\nto the date with the most recent case numbers), we observe that\n$\\mathrm{CFR} \\leq M$ and $\\mathrm{IFR} \\leq {\\cal M}$. After the\nepidemic has long past, when the number of currently infected\nindividuals $I$ approach zero, the two fatality ratios and mortality\nmeasures converge if the component quantities are defined and measured\nconsistently, $\\lim_{t \\to \\infty} \\mathrm{CFR}(t) = \\lim_{t \\to\n \\infty} M(t) $ and $\\lim_{t \\to \\infty}\\mathrm{IFR}(t) = \\lim_{t \\to\n \\infty} \\mathcal{M}(t)$~\\cite{bottcher2020case}.\n\nThe mathematical definitions of the four basic mortality measures $Z =\n{\\rm CFR, IFR}, M, \\cal M$ defined above are given in\nTable~\\ref{tab:mortality_measures} and fall into two categories,\nconfirmed and total. Confirmed measures (CFR and $M$) rely only on\npositive test counts, while total measures (IFR and ${\\cal M}$) rely\non projections to estimate the number of infected persons in the total\npopulation $N$.\n\\begin{table*}[htb]\n\\renewcommand*{\\arraystretch}{2.8}\n\\begin{tabular}{|c|c|c||c|}\\hline\n\\backslashbox{Subpopulation}{Measure $Z$}\n& \\makebox[4em]{Fatality Ratios} & \\makebox[6em]{Resolved Mortality} \n& \\makebox[18em]{Excess Death Indices\n}\n\\\\\\hline\\hline\nConfirmed & \\,\\,\\, $\\displaystyle{ \n{\\rm CFR}= \\frac{D_{\\rm c}}{N_{\\rm c}}}$\\,\\, & \n$\\displaystyle M=\\frac{D_{\\rm c}} {D_{\\rm c}+R_{\\rm c}}$ & \\,\\,$D_{\\rm e}$ per 100,000: \n$\\displaystyle{ \\frac{D_{\\rm c}+D_{\\rm u}}{100,000}}$ \\,\\, \\\\[3pt]\\hline\nTotal & \\,\\,$\\displaystyle {\\rm IFR}=\\frac{D_{\\rm c} + D_{\\rm u}}{ N_{\\rm c} \n+ N_{\\rm u}}$\\,\\, & \n\\,\\, $\\displaystyle {\\cal M}=\\frac{D_{\\rm c}+D_{\\rm u}}\n{D_{\\rm c} + D_{\\rm u} + R_{\\rm c} + R_{\\rm u}}$\\,\\, \n& relative: $\\displaystyle r = \n\\frac{\\sum_{i}\\left[d^{(0)}(i) - {\\frac 1 J} \\sum_{j}^{J} d^{(j)}(i)\\right]}\n{{\\frac 1 J}\\sum_{j}^{J} \\sum_{i}d^{(j)}(i)}$\n\\\\[3pt]\\hline\n\\end{tabular}\n\\vspace{1mm}\n\\caption{\\textbf{Definitions of mortality measures.} Quantities with\n subscript ``c'' and ``u'' denote confirmed (\\textit{i.e.}, positively tested)\n and unconfirmed populations. For instance, $D_{\\rm c}$, $R_{\\rm c}$,\n and $N_{\\rm c}$ denote the total number of confirmed dead,\n recovered, and infected individuals, respectively. $d^{(j)}(i)$ is\n the number of individuals who have died in the $i^{\\rm th}$ time\n window (\\textit{e.g.}, day, week) of the $j^{\\rm th}$ previous\n year. 
The mean number of excess deaths between the periods $k_{\\rm\n s}$ and $k$ this year $\\bar{D}_{\\rm e}$ is thus $\\sum_{i=k_{\\rm\n s}}^k \\left[d^{(0)}(i)- {\\frac 1 J}\\sum_{j=1}^{J}\n d^{(j)}(i)\\right]$. Where the total number of infection-caused\n deaths $D_{\\rm c}+D_{\\rm u}$ appears, it can be estimated using the\n excess deaths $\\bar{D}_{\\rm e}$, as detailed in the main text.\n We have also included raw death numbers\/100,000 and the mean excess\n deaths $r$ relative to the mean number of deaths over the same\n period of time from past years (see Eqs.~\\eqref{DK_STATS}).}\n\\label{tab:mortality_measures}\n\\end{table*}\nOf the measures listed in Table~\\ref{tab:mortality_measures}, the\nfatality ratio CFR and confirmed resolved mortality $M$ do not require\nestimates of unreported infections, recoveries, and deaths and can be\ndirectly derived from the available confirmed counts $D_{\\rm c}$,\n$N_{\\rm c}$, and $R_{\\rm c}$~\\cite{dong2020interactive}. Estimation\nof IFR and the true resolved mortality $\\cal M$ requires additional\nknowledge of the unconfirmed quantities $D_{\\rm u}, N_{\\rm\n u}$, and $R_{\\rm u}$. We describe the possible ways to estimate\nthese quantities, along with the associated sources of bias and\nuncertainty, below.\n\n\\subsection*{Excess deaths data}\n\nAn unbiased way to estimate $D = D_{\\rm c} + D_{\\rm u}$, the\ncumulative number of deaths, is to compare total deaths within a time\nwindow in the current year to those in the same time window of\nprevious years, before the pandemic. If the epidemic is widespread and\nhas appreciable fatality, one may reasonably expect that the excess\ndeaths can be attributed to the\npandemic~\\cite{USData,SpainData,EnglandData,SwitzerlandData,Istat}.\nWithin each affected region, these ``excess'' deaths $D_{\\rm e}$\nrelative to ``historical'' deaths are independent of testing\nlimitations and do not suffer from highly variable definitions of\nvirus-induced death. Thus, within the context of the COVID-19\npandemic, $D_{\\rm e}$ is a more inclusive measure of virus-induced\ndeaths than $D_{\\rm c}$ and can be used to estimate the total number\nof deaths, $D_{\\rm e} \\simeq D_{\\rm c} + D_{\\rm u}$. Moreover, using\ndata from multiple past years, one can also estimate the uncertainty\nin $D_{\\rm e}$.\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width = 0.98\\textwidth]{Fig1.pdf}\n\\caption{\\textbf{Examples of seasonal mortality and excess\n deaths}. The evolution of weekly deaths in (a) New York City (six\n years) and (b) Germany (five years) derived from data in\n Refs.~\\cite{excessdata,ExcessDeathsEconomist}. Grey solid lines and shaded\n regions represent the historical numbers of deaths and corresponding\n confidence intervals defined in Eq.~\\eqref{DK_STATS}. Blue solid\n lines indicate weekly deaths, and weekly deaths that lie outside the\n confidence intervals are indicated by solid red lines. The red\n shaded regions represent statistically significant mean cumulative\n excess deaths $D_{\\rm e}$. The reported weekly confirmed deaths\n $d^{(0)}_{\\rm c}(i)$ (dashed black curves), reported cumulative\n confirmed deaths $D_{\\rm c}(k)$ (dashed dark red curves), weekly\n excess deaths $\\bar{d}_{\\rm e}(i)$ (solid grey curves), and\n cumulative excess deaths $\\bar{D}_{\\rm e}(k)$ (solid red curves) are\n plotted in units of per 100,000 in (c) and (d) for NYC and Germany,\n respectively. 
The excess deaths and the associated 95\\% confidence\n intervals given by the error bars are constructed from historical\n death data in (a-b) and defined in Eqs.~\\eqref{DK_STATS} and\n \\eqref{DE_STATS}. In NYC there is clearly a significant number of\n excess deaths that can be safely attributed to COVID-19, while to date\n in Germany, there have been no significant excess deaths. Excess\n death data from other jurisdictions are shown in the Supplementary\n Information and typically show excess deaths greater than reported\n confirmed deaths (with Germany an exception as shown in (d)).}\n\\label{fig:panels}\n\\end{figure*}\nIn practice, deaths are typically tallied daily,\nweekly~\\cite{euromomo,USData}, or sometimes aggregated\nmonthly~\\cite{ExcessDeathsEconomist,ExcessDeathsCDC} with historical\nrecords dating back $J$ years so that for every period $i$ there are a\ntotal of $J+1$ death values. We denote by $d^{(j)}(i)$ the total\nnumber of deaths recorded in period $i$ from the $j^{\\rm th}$ previous\nyear where $0 \\leq j \\leq J$ and where $j=0$ indicates the current\nyear. In this notation, $D = D_{\\rm c} + D_{\\rm u} = \\sum_i\nd^{(0)}(i)$, where the summation tallies deaths over several periods\nof interest within the pandemic. Note that we can decompose\n$d^{(0)}(i) = d_{\\rm c}^{(0)}(i) + d_{\\rm u}^{(0)}(i)$, to include the\ncontribution from the confirmed and unconfirmed deaths during each\nperiod $i$, respectively.\nTo quantify the total cumulative excess deaths we derive excess deaths\n$d_{\\rm e}^{(j)}(i)=d^{(0)}(i)-d^{(j)}(i)$ per week relative to the\n$j^{\\rm th}$ previous year. Since $d^{(0)}(i)$ is the total number of\ndeaths in week $i$ of the current year, by definition $d_{\\rm\n e}^{(0)}(i) \\equiv 0$. The excess deaths during week $i$,\n$\\bar{d}_{\\rm e}(i)$, averaged over $J$ past years and the associated,\nunbiased variance $\\sigma_{\\rm e}(i)$ are given by\n\n\\begin{align}\n\\bar{d}_{\\rm e}(i) & = {1\\over J}\\sum_{j=1}^{J} d_{\\rm e}^{(j)}(i), \\nonumber \\\\\n\\sigma_{{\\rm e}}^{2}(i)& = {1\\over J-1}\\sum_{j=1}^{J}\n\\left[d_{\\rm e}^{(j)}(i)-\\bar{d}_{\\rm e}(i)\\right]^{2}.\n\\label{DK_STATS}\n\\end{align}\nThe corresponding quantities accumulated over $k$ weeks \ndefine the mean and variance of the cumulative excess deaths\n$\\bar{D}_{\\rm e}(k)$ and $\\Sigma_{{\\rm e}}(k)$\n\n\\begin{align}\n\\bar{D}_{\\rm e}(k) & = {1\\over J}\\sum_{j=1}^{J} \\sum_{i=1}^k d_{\\rm e}^{(j)}(i), \\nonumber \\\\\n\\Sigma_{{\\rm e}}^{2}(k) & = {1\\over J-1}\\sum_{j=1}^{J}\n\\left[\\sum_{i=1}^k d_{\\rm e}^{(j)}(i)-\\bar{D}_{\\rm e}(k) \\right]^{2},\n\\label{DE_STATS}\n\\end{align}\nwhere deaths are accumulated from the first to the $k^{\\rm th}$ week\nof the pandemic. The variance in Eqs.~\\eqref{DK_STATS} and\n\\eqref{DE_STATS} arise from the variability in the baseline number of\ndeaths from the same time period in $J$ previous years.\n\nWe gathered excess death statistics from over 23 countries and all US\nstates. Some of the data derive from open-source online repositories\nas listed by official statistical bureaus and health ministries\n\\cite{USData,SpainData,EnglandData,SwitzerlandData,Istat,ExcessDeathsCDC};\nother data are elaborated and tabulated in\nRef.~\\cite{ExcessDeathsEconomist}. In some countries excess death\nstatistics are available only for a limited number of states or\njurisdictions (\\textit{e.g.}, Brazil).\nThe US death statistics that we use in this study is based on weekly\ndeath data between 2015--2019~\\cite{ExcessDeathsCDC}. 
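Given such tables of weekly deaths, the statistics in Eqs.~\\eqref{DK_STATS} and \\eqref{DE_STATS} reduce to a few array operations. The sketch below is ours and assumes the weekly counts are arranged with the current year in the first row and the $J$ previous years in the remaining rows.
\\begin{verbatim}
import numpy as np

def excess_death_stats(weekly_deaths):
    # weekly_deaths: array of shape (J+1, K); row 0 holds d^(0)(i) for the
    # current year, rows 1..J hold d^(j)(i) for the J previous years.
    weekly_deaths = np.asarray(weekly_deaths, dtype=float)
    current, history = weekly_deaths[0], weekly_deaths[1:]
    weekly_excess = current[None, :] - history     # d_e^(j)(i)
    d_bar = weekly_excess.mean(axis=0)             # mean weekly excess deaths
    sigma2 = weekly_excess.var(axis=0, ddof=1)     # unbiased weekly variance
    cumulative = np.cumsum(weekly_excess, axis=1)  # accumulate over weeks
    D_bar = cumulative.mean(axis=0)                # mean cumulative excess deaths
    Sigma2 = cumulative.var(axis=0, ddof=1)        # its unbiased variance
    return d_bar, sigma2, D_bar, Sigma2
\\end{verbatim}
The last entries of the two cumulative arrays correspond to $\\bar{D}_{\\rm e}(K)$ and $\\Sigma_{\\rm e}^{2}(K)$ used in the mortality estimates below.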
For all other\ncountries, the data collection periods are summarized in\nRef.~\\cite{ExcessDeathsEconomist}. Fig.~\\ref{fig:panels}(a-b) shows\nhistorical death data for NYC and Germany, while\nFig.~\\ref{fig:panels}(c-d) plots the confirmed and excess deaths and\ntheir confidence levels computed from Eqs.~\\eqref{DK_STATS} and\n\\eqref{DE_STATS}. We assume that the cumulative summation is performed\nfrom the start of 2020 to the current week $k=K$ so that $\\bar{D}_{\\rm\n e}(K) \\equiv \\bar{D}_{\\rm e}$ indicates excess deaths at the time of\nwriting. Significant numbers of excess deaths are clearly evident for\nNYC, while Germany thus far has not experienced significant excess\ndeaths.\n\nTo evaluate CFR and $M$, data on only $D_{\\rm c}, N_{\\rm c}$, and\n$R_{\\rm c}$ are required, which are tabulated by many\njurisdictions. To estimate the numerators of IFR and ${\\cal M}$, we\napproximate $D_{\\rm c}+D_{\\rm u} \\approx \\bar{D}_{\\rm e}$ using\nEq.~\\eqref{DE_STATS}.\nFor the denominators, estimates of the unconfirmed infected $N_{\\rm\n u}$ and unconfirmed recovered populations $R_{\\rm u}$ are\nrequired. In the next two sections we propose methods to estimate\n$N_{\\rm u}$ using a statistical testing model and $R_{\\rm u}$ using\na compartmental population model.\n\n\\subsection*{Statistical testing model with bias and testing errors}\n\nThe total number of confirmed and unconfirmed infected individuals\n$N_{\\rm c} + N_{\\rm u}$ appears in the denominator of the IFR. To\nbetter estimate the infected population we present a statistical model\nfor testing in the presence of administration bias and testing\nerrors. Although $N_{\\rm c} + N_{\\rm u}$ used to estimate the IFR\nincludes those who have died, depending on the type of test, it may or\nmay not include those who have recovered. If $S, I, R, D$ are the\nnumbers of susceptible, currently infected, recovered, and deceased\nindividuals, the total population is $N =S + I + R + D$ and the\ninfected fraction can be defined as $f = (N_{\\rm c} + N_{\\rm u})\/N=(I\n+ R + D)\/N$ for tests that include recovered and deceased individuals\n(\\textit{e.g.}, antibody tests), or $f = (N_{\\rm c} + N_{\\rm\n u})\/N=(I+D)\/N$ for tests that only count currently infected\nindividuals (\\textit{e.g.}, RT-PCR tests). If we assume that the total\npopulation $N$ can be inferred from census estimates, the problem of\nidentifying the number of unconfirmed infected persons $N_{\\rm u}$ is\nmapped onto the problem of identifying the true fraction $f$ of the\npopulation that has been infected.\n\\begin{figure*}\n\\includegraphics[width=0.75\\textwidth]{jurisdiction.pdf}\n\\caption{\\textbf{Biased and unbiased testing of a population.}\nA hypothetical scenario of testing a population (total $N=54$\nindividuals) within a jurisdiction (solid black boundary). Filled red\ncircles represent the true number of infected individuals who tested\npositive and the black-filled red circles indicate individuals who\nhave died from the infection. Open red circles denote uninfected\nindividuals who tested positive (false positives) while filled\nred circles with dark gray borders are infected individuals who\ntested negative (false negatives). In the jurisdiction of interest, 5\nindividuals have died of the infection while 16 are truly infected. The true\nfraction $f$ of infected in the entire population is thus $f=16\/54$\nand the true IFR=$5\/16$. However, within the tested (green and blue)\nsamples, false positives are also shown to arise. 
If the apparent positive\nfraction $\\tilde{f}_{\\rm b}$ is derived from a biased sample (blue),\nthe estimated \\textit{apparent} IFR can be quite different from the\ntrue one. For a less biased (more random) testing sample (green\nsample), a more accurate estimate of the total number of infected\nindividuals is $N_{\\rm c}+N_{\\rm u} = \\tilde{f}_{\\rm b} N =\n(5\/14)\\times 54 \\approx 19$ when the single false positive in this\nsample is included, and $\\tilde{f}_{\\rm b} N = (4\/14)\\times 54 \\approx\n15$ when the false positive is excluded, which allows us to more\naccurately infer the IFR. Note that CFR is defined according to the\ntested quantities $D_{\\rm c}\/N_{\\rm c}$ which are precisely $2\/9$ and\n$2\/5$ for the blue and green sample, respectively, if false positives\nare considered. When false negatives are known and factored out,\n$\\mathrm{CFR}=2\/8$ and 2\/4, for the blue and green samples,\nrespectively.}\n\\label{fig:metrics}\n\\end{figure*}\n\nTypically, $f$ is determined by testing a representative sample and\nmeasuring the proportion of infected persons within the\nsample. Besides the statistics of sampling, two main sources of\nsystematic errors arise: the non-random selection of individuals to be\ntested and errors intrinsic to the tests themselves. Biased sampling\narises when testing policies focus on symptomatic or at-risk\nindividuals, leading to over-representation of infected individuals.\n\nFigure~\\ref{fig:metrics} shows a schematic of a hypothetical initial\ntotal population of $N=54$ individuals in a specified\njurisdiction. Without loss of generality we assume there are no\nunconfirmed deaths, $D_{\\rm u} = 0$, and that all confirmed deaths are\nequivalent to excess deaths, so that $\\bar{D}_{\\rm e} = D_{\\rm c} = 5$\nin the jurisdiction represented by Fig.~\\ref{fig:metrics}. Apart from\nthe number of deceased, we also show the number of infected and\nuninfected subpopulations and label them as true positives, false\npositives, and false negatives. The true number of infected\nindividuals is $N_{\\rm c} + N_{\\rm u} = 16$ which yields the true $f =\n16\/54 \\approx 0.296$ and an IFR $= 5\/16 \\approx 0.312$ within the jurisdiction.\n\nAlso shown in Fig.~\\ref{fig:metrics} are two examples of sampling.\nBiased sampling and testing is depicted by the blue contour in which 6\nof the 15 tested individuals are alive and infected, 2 are deceased, and the remaining 7\nare healthy. For simplicity, we start by assuming no testing errors.\nThe measured infected fraction of this sample, $8\/15 \\approx 0.533 > f \\approx\n0.296$, is biased since it includes a higher proportion of infected\npersons, both alive and deceased, than that of the entire\njurisdiction. Using this biased measured infected fraction of $8\/15$\nyields ${\\rm IFR} = 5\/(0.533 \\cdot 54) \\approx 0.174$, which\nsignificantly underestimates the true ${\\rm IFR} \\approx 0.312$. A\nrelatively unbiased sample, shown by the green contour, yields an\ninfected fraction of $4\/14 \\approx 0.286$ and an apparent ${\\rm\n IFR}\\approx 0.324$ which are much closer to the true fraction $f$\nand IFR. In both samples discussed above we neglected testing errors\nsuch as the false positives indicated in Fig.~\\ref{fig:metrics}. Tests\nthat mistakenly count false positives as infections would\nyield a larger $N_{\\rm c}$, resulting in an apparent infected fraction\n$9\/15$ and an even smaller apparent ${\\rm IFR}\\approx 0.154$. 
By\ncontrast, the false positive testing errors on the green sample would\nyield an apparent infected fraction $5\/15 = 0.333$ and IFR= 0.259.\n\n\nGiven that test administration can be biased, we propose a parametric\nform for the apparent or measured infected fraction\n\n\\begin{equation}\nf_{\\rm b}=f_{\\rm b}(f, b)\\equiv {fe^{b} \\over f(e^{b}-1)+1},\n\\label{F_B}\n\\end{equation}\nto connect the apparent (biased sampling) infected fraction $f_{\\rm\n b}$ with the true underlying infection fraction. The bias parameter\n$-\\infty < b < \\infty$ describes how an infected or uninfected\nindividual might be preferentially selected for testing, with $b <0$\n(and $f_{\\rm b}< f$) indicating under-testing of infected individuals,\nand $b>0$ (and $f_{\\rm b}> f$) representing over-testing of infecteds.\nA truly random, unbiased sampling arises only when $b=0$ where $f_{\\rm\n b} = f$. \nGiven $Q$ (possibly biased) tests to date, testing errors, and\nground-truth infected fraction $f$, we derive in the SI the likelihood\nof observing a positive fraction $\\tilde{f}_{\\rm b} = \\tilde{Q}^{+}\/Q$\n(where $\\tilde{Q}^{+}$ is the number of recorded positive tests):\n\n\\begin{equation}\nP(\\tilde{f}_{\\rm b}\\vert \\theta)\\approx\n{1\\over \\sqrt{2\\pi}\\sigma_{\\rm T}}\n\\exp\\left[-{(\\tilde{f}_{\\rm b} -\\mu)^{2}\\over \n2\\sigma_{\\rm T}^{2}}\\right],\n\\label{PTOT}\n\\end{equation}\nin which\n\n\\begin{eqnarray}\n\\mu &\\equiv &\nf_{\\rm b}(f,b)(1-{\\rm FNR}) + (1-f_{\\rm b}(f,b)){\\rm FPR}, \n\\nonumber \\\\\n\\sigma_{\\rm T}^{2} \n&\\equiv & \\mu (1-\\mu)\/Q.\n\\label{MU_SIGMA}\n\\end{eqnarray}\nHere, $\\mu$ is the expected value of the measured and biased fraction\n$\\tilde{f}_{\\rm b}$ and $\\sigma_{\\rm T}^{2}$ is its variance. Note\nthat the parameters $\\theta =\\{Q, f, b, {\\rm FPR}, {\\rm FNR}\\}$ may be\ntime-dependent and change from sample to sample. Along with the\nlikelihood function $P(\\tilde{f}_{\\rm b}\\vert f,\\theta)$, one can also\npropose a prior distribution $P(\\theta\\vert \\alpha)$ with\nhyperparameters $\\alpha$, and apply Bayesian methods to infer $\\theta$\n(see SI).\n\nTo evaluate IFR, we must now estimate $f$ given $\\tilde{f}_{\\rm b} =\n\\tilde{Q}^{+}\/Q$ and possible values for ${\\rm FPR}$, ${\\rm FNR}$,\nand\/or $b$, or the hyperparameters $\\alpha$ defining their\nuncertainty. The simplest maximum likelihood estimate of $f$ can be\nfound by maximizing $P(\\tilde{f}_{\\rm b}\\vert \\theta)$ with respect to\n$f$ given a measured value $\\tilde{f}_{\\rm b}$ and all other parameter\nvalues $\\theta$ specified:\n\\begin{equation}\n\\hat{f} \\approx {\\tilde{f}_{\\rm b} -{\\rm FPR}\n\\over e^{b}(1-{\\rm FNR}-\\tilde{f}_{\\rm b})+ \\tilde{f}_{\\rm b}-{\\rm FPR}}.\n\\label{f_b_hat}\n\\end{equation}\nNote that although FNRs are typically larger than FPRs, small values\nof $f$ and $\\tilde{f}_{\\rm b}$ imply that $\\hat{f}$ and $\\mu$ are more\nsensitive to the FPR, as indicated by Eqs.~\\eqref{MU_SIGMA} and\n\\eqref{f_b_hat}.\n\nIf time series data for $\\tilde{f}_{\\rm b}= \\tilde{Q}^{+}\/Q$ are\navailable, one can evaluate the corrected testing fractions in \nEq.\\,\\eqref{f_b_hat} for each time interval. Assuming that serological\ntests can identify infected individuals long after symptom onset, the\nlatest value of $\\hat{f}$ would suffice to estimate corresponding\nmortality metrics such as the $\\mathrm{IFR}$. For RT-PCR testing, one\ngenerally needs to track how $\\tilde{f}_{\\rm b}$ evolves in time. 
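For orientation, the estimator in Eq.~\\eqref{f_b_hat} and the corresponding corrected IFR are straightforward to evaluate numerically. The sketch below uses our own function names; the example values ($\\tilde{f}_{\\rm b}=0.093$, ${\\rm FPR}=0.05$, ${\\rm FNR}=0.2$) are those quoted later for the US illustration, while the bias value $b=0.5$ is arbitrary.
\\begin{verbatim}
import numpy as np

def corrected_infected_fraction(f_b_tilde, b=0.0, fpr=0.0, fnr=0.0):
    # Maximum-likelihood estimate of the true infected fraction f, given the
    # measured positive-test fraction, the testing bias b, and the error rates.
    return (f_b_tilde - fpr) / (np.exp(b) * (1.0 - fnr - f_b_tilde)
                                + f_b_tilde - fpr)

def corrected_ifr(excess_deaths, population, f_b_tilde, b=0.0, fpr=0.0, fnr=0.0):
    # Corrected IFR ~ D_e / (f_hat * N).
    f_hat = corrected_infected_fraction(f_b_tilde, b, fpr, fnr)
    return excess_deaths / (f_hat * population)

print(corrected_infected_fraction(0.093, b=0.5, fpr=0.05, fnr=0.2))  # ~ 0.036
\\end{verbatim}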
A\nrough estimate would be to use the mean of $\\tilde{f}_{\\rm b}$ over\nthe whole pandemic period to provide a lower bound of the estimated\nprevalence $\\hat{f}$.\n\nThe measured $\\tilde{f}_{\\rm b}$ yields only the apparent ${\\rm IFR} =\n\\bar{D}_{\\rm e}\/(\\tilde{f}_{\\rm b} N)$, but Eq.~\\eqref{f_b_hat} can\nthen be used to evaluate the corrected ${\\rm IFR} \\approx \\bar{D}_{\\rm\n e}\/(\\hat{f} N)$ which will be a better estimate of the true IFR.\nFor example, under moderate bias $\\vert b\\vert \\lesssim 1$ and\nassuming FNR, FPR, $\\tilde {f}_{\\rm {b}} \\lesssim 1$,\nEq.~\\eqref{f_b_hat} relates the apparent and corrected IFRs through\n$\\bar{D}_{\\rm e}\/(\\tilde{f}_{\\rm b} N)\\sim \\bar{D}_{\\rm e}\/((\\hat{f}\ne^{\\rm b} + {\\rm{FPR}}) N)$.\n\nAnother commonly used representation of the IFR is $\\mathrm{IFR} = p\n(D_{\\rm c}+D_{\\rm u})\/N_{\\rm c} = p\\bar{D}_{\\rm e}\/N_{\\rm c}$. This\nexpression is equivalent to our $\\mathrm{IFR} = \\bar{D}_{\\rm e}\/(f N)$\nif $p = N_{\\rm c}\/(N_{\\rm c}+ N_{\\rm u})\\approx \\tilde{Q}^{+}\/(fN)$ is\ndefined as the fraction of infected individuals that are\nconfirmed~\\cite{LI_SCIENCE,chow2020global}. In this alternative\nrepresentation, the $p$ factor implicitly contains the effects of\nbiased testing. Our approach allows the true infected fraction $f$ to\nbe directly estimated from $\\tilde{Q}^{+}$ and $N$.\n\nWhile the estimate $\\hat{f}$ depends strongly on $b$ and ${\\rm FPR}$,\nand weakly on ${\\rm FNR}$, the \\textit{uncertainty} in $f$ will depend\non the uncertainty in the values of $b$, ${\\rm FPR}$, and ${\\rm FNR}$.\nA Bayesian framework is presented in the SI, but under a Gaussian\napproximation for all distributions, the uncertainty in the testing\nparameters can be propagated to the squared coefficient of variation\n$\\sigma_{f}^{2}\/\\hat{f}^{2}$ of the estimated infected\nfraction $\\hat{f}$, as explicitly computed in the SI. Moreover, the\nuncertainties in the mortality indices $Z$, decomposed into the\nuncertainties of their individual components, are listed in\nTable~\\ref{tab:mortality_error}.\n\\subsection*{Using compartmental models to estimate resolved mortalities}\nSince the number of unreported recovered individuals $R_{\\rm u}$\nrequired to calculate ${\\cal M}$ is related neither to excess\ndeaths nor to positive-tested populations, we use an SIR-type\ncompartmental model to relate $R_{\\rm u}$ to other inferable\nquantities~\\cite{keeling2011modeling}. Both unconfirmed recovered\nindividuals and unconfirmed deaths are related to unconfirmed infected\nindividuals who recover at rate $\\gamma_{\\rm u}$ and die at rate\n$\\mu_{\\rm u}$. The equations for the cumulative numbers of unconfirmed\nrecovered individuals and unconfirmed deaths,\n\n\\begin{equation}\n\\frac{\\mbox{d} R_{\\rm u}(t)}{\\mbox{d} t} = \\gamma_{\\rm u}(t) I_{\\rm u}(t),\\qquad\n\\frac {\\mbox{d} D_{\\rm u}(t)}{\\mbox{d} t} = \\mu_{\\rm u}(t) I_{\\rm u}(t),\n\\label{eq:SIR_ODE}\n\\end{equation}\ncan be directly integrated to find $R_{\\rm u}(t) = \\int_{0}^{t}\n\\gamma_{\\rm u}(t')I_{\\rm u}(t')\\mbox{d} t'$ and $D_{\\rm u}(t) =\n\\int_{0}^{t} \\mu_{\\rm u}(t')I_{\\rm u}(t')\\mbox{d} t'$. The rates\n$\\gamma_{\\rm u}$ and $\\mu_{\\rm u}$ may differ from those averaged over\nthe entire population since testing may be biased towards\nsubpopulations with different values of $\\gamma_{\\rm u}$ and $\\mu_{\\rm\n u}$. 
If $\\gamma_{\\rm u}$ and $\\mu_{\\rm u}$ are assumed\napproximately constant over the period of interest, we find $R_{\\rm\n u}\/D_{\\rm u} \\approx \\gamma_{\\rm u}\/\\mu_{\\rm u}\\equiv \\gamma$. We\nnow use $D_{\\rm u} = \\bar{D}_{\\rm e} - D_{\\rm c}$, where both\n$\\bar{D}_{\\rm e}$ and $D_{\\rm c}$ are given by data, to estimate\n$R_{\\rm u} \\approx \\gamma (\\bar{D}_{\\rm e} - D_{\\rm c})$ and write\n$\\mathcal{M}$ as\n\\begin{equation}\n\\mathcal{M} =\\frac{\\displaystyle{\\bar{D}_{\\rm e}}} \n{\\bar{D}_{\\rm e} + R_{\\rm c} + \\gamma\n(\\bar{D}_{\\rm e}-D_{\\rm c})}.\n\\label{eq:Mp_2}\n\\end{equation}\nThus, a simple SIR model transforms the problem of determining the\nnumber of unreported deaths and recovered cases in $\\mathcal{M}$ to the\nproblem of identifying the recovery and death rates in the untested\npopulation. Alternatively, we can make use of the fact that both the\nIFR and resolved mortality $\\mathcal{M}$ should have comparable values\nand match ${\\cal M}$ to $\\mathrm{IFR}\\approx\n0.1-1.5$\\%~\\cite{salje2020estimating,chow2020global,ioannidis2020infection}\nby setting $\\gamma \\equiv \\gamma_{\\rm u}\/\\mu_{\\rm u}\\approx 100-1000$\n(see SI for further information). Note that inaccuracies in confirming\ndeaths may give rise to $D_{\\rm c} > \\bar{D}_{\\rm e}$. Since, by\ndefinition, infection-caused excess deaths must be greater than the\nconfirmed deaths, we set $\\bar{D}_{\\rm e}-D_{\\rm c} = 0$ whenever the data\nhappen to indicate that $\\bar{D}_{\\rm e}$ is less than $D_{\\rm c}$.\n\n\\section*{Results}\n\nHere, we present much of the available worldwide fatality data,\nconstruct the excess death statistics, and compute and compare\nmortalities across jurisdictions. We show that standard mortality\nmeasures significantly underestimate the death toll of COVID-19 for\nmost regions (see Figs.~\\ref{fig:panels} and\n\\ref{fig:excess_rate}). We also use the data to estimate uncertainties\nin the mortality measures and relate them to uncertainties in the\nunderlying components and model parameters.\n\\subsection*{Excess and confirmed deaths} \n\nWe find that in New York City, for example, the number of confirmed\nCOVID-19 deaths between March 10, 2020 and December 10, 2020 is\n19,694~\\cite{NYCdeaths} and thus significantly lower than the 27,938\n(95\\% CI 26,516--29,360) reported excess mortality\ncases~\\cite{USData}. From March 25, 2020 until December 10, 2020,\nSpain counts 65,673 (99\\% confidence interval [CI] 37,061--91,816)\nexcess deaths~\\cite{SpainData}, a number that is substantially larger\nthan the officially reported 47,019 COVID-19\ndeaths~\\cite{AllReps}. The large difference between excess deaths and\nreported COVID-19 deaths in Spain and New York City is also observed\nin Lombardia, one of the most affected regions in Italy. From February\n23, 2020 until April 4, 2020, Lombardia reported 8,656\nCOVID-19 deaths~\\cite{AllReps} but 13,003 (95\\% CI 12,335--13,673) excess\ndeaths~\\cite{Istat}. Starting April 5, 2020, mortality data in\nLombardia stopped being reported in a weekly format. In\nEngland\/Wales, the number of excess deaths from the onset of the\nCOVID-19 outbreak on March 1, 2020 until November 27, 2020 is 70,563\n(95\\% CI 52,250--88,877) whereas the number of reported COVID-19\ndeaths in the same time interval is 66,197~\\cite{excessdata}. 
In\nSwitzerland, the number of excess deaths from March 1, 2020 until\nNovember 29, 2020 is 5,664 (95\\% CI\n4,281--7,047)~\\cite{SwitzerlandData}, slightly larger than the\ncorresponding 4,932 reported COVID-19 deaths~\\cite{AllReps}.\n\n\\begin{figure}\n\\includegraphics{excess_vs_confirmed_global.png}\n\\includegraphics{excess_vs_confirmed_US.png}\n\\caption{\\textbf{Excess deaths versus confirmed deaths across\n different countries\/states.} The number of excess deaths in 2020\n versus confirmed deaths across different countries (a) and US states\n (b). The black solid lines in both panels have slope 1. In (a) the\n blue solid line is a guide-line with slope 3; in (b) the blue solid\n line is a least-squares fit of the data with slope $1.132$ (95\\% CI\n 1.096--1.168; blue shaded region). All data were updated on December\n 10,\n 2020~\\cite{ExcessDeathsEconomist,ExcessDeathsCDC,NYCData,dong2020interactive}.}\n\\label{fig:excess_versus_confirmed} \n\\end{figure}\n\nTo illustrate the significant differences between excess deaths and\nreported COVID-19 deaths in various jurisdictions, we plot the excess\ndeaths against confirmed deaths for various countries and US states as\nof December 10, 2020 in Fig.~\\ref{fig:excess_versus_confirmed}. We\nobserve in Fig.~\\ref{fig:excess_versus_confirmed}(a) that the number\nof excess deaths in countries like Mexico, Russia, Spain, Peru, and\nEcuador is significantly larger than the corresponding number of\nconfirmed COVID-19 deaths. In particular, in Russia, Ecuador, and\nSpain the number of excess deaths is about three times larger than the\nnumber of reported COVID-19 deaths. As described in the Methods\nsection, for certain countries (\\textit{e.g.}, Brazil) excess death data is not\navailable for all states~\\cite{ExcessDeathsEconomist}. For the\nmajority of US states the number of excess deaths is also larger than\nthe number of reported COVID-19 deaths, as shown in\nFig.~\\ref{fig:excess_versus_confirmed}(b). We performed a\nleast-square fit to calculate the proportionality factor $m$ arising\nin $\\bar{D}_{\\rm e} = m D_{\\rm c}$ and found $m \\approx 1.132$ (95\\%\nCI 1.096--1.168). That is, across all US states, the number of excess\ndeaths is about 13\\% larger than the number of confirmed COVID-19\ndeaths.\n\n\n\\subsection*{Estimation of mortality measures and their uncertainties}\n\nWe now use excess death data and the statistical and modeling\nprocedures to estimate mortality measures $Z=$ IFR, CFR, $M$, ${\\cal\n M}$ across different jurisdictions, including all US states and more\nthan two dozen countries.\\footnote{We provide an online dashboard that\n shows the real-time evolution of CFR and $M$ at\n \\url{https:\/\/submit.epidemicdatathon.com\/\\#\/dashboard}}.\nAccurate estimates of the confirmed $N_{\\rm c}$ and dead $D_{\\rm c}$\ninfected are needed to evaluate the CFR. Values for the parameters\n$Q$, FPR, FNR, and $b$ are needed to estimate $N_{\\rm c}+N_{\\rm u} = f\nN$ in the denominator of the IFR, while $\\bar{D}_{\\rm e}$ is needed to\nestimate the number of infection-caused deaths $D_{\\rm c}+D_{\\rm u}$\nthat appear in the numerator of the IFR and ${\\cal M}$. Finally, since\nwe evaluate the resolved mortality $\\mathcal{M}$, through\nEq.\\,\\ref{eq:Mp_2}, estimates of $\\bar{D}_{\\rm e}, D_{\\rm c}, R_{\\rm\n c}$, $\\gamma$, and FPR, FNR (to correct for testing inaccuracies in\n$D_{\\rm c}$ and $R_{\\rm c}$) are necessary. 
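In practice, once $\\bar{D}_{\\rm e}$, $D_{\\rm c}$, $R_{\\rm c}$ and a value (or range) of $\\gamma$ have been chosen, Eq.~\\eqref{eq:Mp_2} reduces to a one-line computation. The sketch below is ours and includes the clamping of $\\bar{D}_{\\rm e}-D_{\\rm c}$ at zero discussed above.
\\begin{verbatim}
def resolved_mortality(excess_deaths, confirmed_deaths,
                       confirmed_recovered, gamma):
    # M = D_e / (D_e + R_c + gamma * (D_e - D_c)), with gamma = gamma_u / mu_u;
    # the unconfirmed-death term is clamped at zero if D_c happens to exceed D_e.
    unconfirmed_deaths = max(excess_deaths - confirmed_deaths, 0.0)
    return excess_deaths / (excess_deaths + confirmed_recovered
                            + gamma * unconfirmed_deaths)
\\end{verbatim}
Scanning $\\gamma$ over the range $100$--$1000$ quoted above then brackets the resulting $\\mathcal{M}$.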
Whenever uncertainties are\navailable or inferable from data, we also include them in our\nanalyses.\n\nEstimates of excess deaths and infected populations themselves suffer\nfrom uncertainty encoded in the variances $\\Sigma_{{\\rm e}}^{2}$ and\n$\\sigma_{f}^{2}$. These uncertainties depend on uncertainties arising\nfrom finite sampling sizes, uncertainty in bias $b$ and uncertainty in\ntest sensitivity and specificity, which are denoted $\\sigma_{b}^{2}$,\n$\\sigma_{\\rm I}^{2}$, and $\\sigma_{\\rm II}^{2}$, respectively. We use\n$\\Sigma^2$ to denote population variances and $\\sigma^2$ to denote\nparameter variances; covariances with respect to any two variables\n${X,Y}$ are denoted as $\\Sigma_{X,Y}$. Variances in the confirmed\npopulations are denoted $\\Sigma_{N_{\\rm c}}^{2}$, $\\Sigma_{R_{\\rm\n c}}^{2}$, and $\\Sigma_{D_{\\rm c}}^{2}$ and also depend on\nuncertainties in testing parameters $\\sigma_{\\rm I}^{2}$ and\n$\\sigma_{\\rm II}^{2}$. The most general approach would be to define a\nprobability distribution or likelihood for observing some value of the\nmortality index in $[Z, Z+\\mbox{d} Z]$. As outlined in the SI, these\nprobabilities can depend on the mean and variances of the components\nof the mortalities, which in turn may depend on hyperparameters that\ndetermine these means and variances. Here, we simply assume\nuncertainties that are propagated to the mortality indices through\nvariances in the model parameters and\nhyperparameters~\\cite{lee2006analyzing}. The squared coefficients of\nvariation of the mortalities are found by linearizing them about the\nmean values of the underlying components and are listed in\nTable~\\ref{tab:mortality_error}.\n\n\\begin{table*}[htb]\n\\renewcommand*{\\arraystretch}{2.8}\n\\begin{tabular}{|c|*{3}{c|}}\\hline \\makebox[4em]{Mortality $Z$} & \\makebox[6em]{Uncertainties} & \\makebox[8em]{CV$^{2}\\displaystyle ={\\Sigma_{Z}^{2}\\over Z^{2}}$}\n \\\\\\hline\\hline $\\displaystyle {\\rm CFR}={D_{\\rm c}\\over N_{\\rm c}}$\n \\,\\, & $\\Sigma_{D_{\\rm c}}^{2}$, $\\Sigma_{N_{\\rm c}}^{2}$, $\\Sigma_{D_{\\rm c}, N_{\\rm c}}$ & \n$\\displaystyle {\\Sigma_{D_{\\rm c}}^{2} \\over D_{\\rm c}^{2}}+\n{\\Sigma_{N_{\\rm c}}^{2} \\over N_{\\rm c}^{2}} - 2{\\Sigma_{D_{\\rm c}, N_{\\rm c}}\n\\over D_{\\rm c}N_{\\rm c}}$ \\\\[3pt]\\hline\n$\\displaystyle {\\rm IFR}={D_{\\rm c} + D_{\\rm u}\\over N_{\\rm c} +\n N_{\\rm u}}\\approx {\\bar{D}_{\\rm e} \\over f N}$ & $\\Sigma_{\\rm e}^{2},\n\\Sigma_{N}^{2}, \\Sigma_{{\\rm e},N}, \\sigma_{f}^{2}$ & $\\displaystyle {{\\sigma}_{f}^{2} \\over\n \\hat{f}^{2}}+ {\\Sigma_{\\rm e}^{2} \\over \\bar{D}_{\\rm\n e}^{2}}+{\\Sigma_{N}^{2} \\over N^{2}}- {2\\Sigma_{{\\rm e}, N}\\over\n \\bar{D}_{\\rm e}N}$ \n\\\\[3pt]\\hline\n$\\displaystyle M={D_{\\rm c}\\over D_{\\rm c}+R_{\\rm c}}$ & \n$\\Sigma_{D_{\\rm c}}^{2}, \\Sigma_{R_{\\rm c}}^{2}, \\Sigma_{D_{\\rm c}, R_{\\rm c}}$ & \n$\\displaystyle M^{2}\\left({R_{\\rm c}\\over D_{\\rm c}}\\right)^{2}\n\\left[{\\Sigma_{D_{\\rm c}}^{2} \\over D_{\\rm c}^{2}}+ {\\Sigma_{R_{\\rm\n c}}^{2} \\over R_{\\rm c}^{2}} - {2\\Sigma_{R_{\\rm c}, D_{\\rm\n c}}\\over R_{\\rm c}D_{\\rm c}}\\right]$\n\\\\[3pt]\\hline\n$\\displaystyle {\\cal M}={\\bar{D}_{\\rm e} \\over \\bar{D}_{\\rm e} +\n R_{\\rm c} +R_{\\rm u}}$ & $\\Sigma_{\\rm e}^{2}, \\Sigma_{R_{\\rm c}}^{2}, \n\\Sigma_{R_{\\rm u}}^{2}, \\Sigma_{R_{\\rm c}, R_{\\rm u}}$ & \n$\\displaystyle (1-{\\cal M})^{2}{\\Sigma_{\\rm e}^{2} \\over \\bar{D}_{\\rm e}^{2}} \n+{\\Sigma_{R_{\\rm c}}^{2}\\over \\Gamma^{2}}\n+{\\Sigma_{R_{\\rm 
u}}^{2}\\over \\Gamma^{2}} \n-{2\\Sigma_{R_{\\rm c}, R_{\\rm u}}\\over \\Gamma^{2}}$\n\\\\[3pt]\\hline\n$\\displaystyle {\\cal M}={\\bar{D}_{\\rm e} \\over \n\\bar{D}_{\\rm e} + R_{\\rm c} + \\gamma(\\bar{D}_{\\rm e}-D_{\\rm c})}$ & \n$\\Sigma_{R_{\\rm c}}^{2}, \\Sigma_{\\rm e}^{2}, \\Sigma_{R_{\\rm c}, \\gamma},\n\\sigma_{\\gamma}^{2}$ & $\\displaystyle (1-{\\cal M})^{2}\n{\\Sigma_{\\rm e}^{2} \\over \\bar{D}_{\\rm e}^{2}} + \n{\\Sigma_{R_{\\rm c}}^{2} \\over \\Gamma^{2}} + \n{(\\bar{D}_{\\rm e}-D_{\\rm c})^{2}\\sigma_{\\gamma}^{2} \\over \\Gamma^{2}} -\n{2(\\bar{D}_{\\rm e}-D_{\\rm c})\\Sigma_{R_{\\rm c}, \\gamma}\\over \\Gamma^{2}}$ \\\\[3pt] \\hline\n\\end{tabular}\n\\caption{\\textbf{Uncertainty propagation for different mortality\n measures.} Table of squared coefficients of variation\n $\\mathrm{CV}^2=\\Sigma_{Z}^{2}\/Z^{2}$ for the different mortality\n indices $Z$ derived using standard error propagation\n expansions~\\cite{lee2006analyzing}. We use $\\Sigma_{N}^{2},\n \\Sigma_{N_{\\rm c}}^2$, $\\Sigma_{R_{\\rm c}}^2$, and $\\Sigma_{D_{\\rm\n c}}^2$ to denote the uncertainties in the total population,\n confirmed cases, recoveries, and deaths, respectively. The variance\n of the number of excess deaths is $\\Sigma_{{\\rm e}}^2$, which\n feature in the IFR and ${\\cal M}$. The uncertainty in the infected\n fraction $\\sigma_{f}^{2}$ that contributes to the uncertainty in IFR\n depends on uncertainties in testing bias and testing errors as shown\n in Eq.~\\eqref{sigma_f}. The term $\\Sigma_{D_{\\rm c}, N_{\\rm c}}$\n represents the covariance between $D_{\\rm c}, N_{\\rm c}$, and\n similarly for all other covariances $\\Sigma_{{\\rm e}, N}$,\n $\\Sigma_{D_{\\rm c}, R_{\\rm c}}$, $\\Sigma_{R_{\\rm c}, R_{\\rm u}}$,\n $\\Sigma_{R_{\\rm c}, \\gamma}$. Since variations in $D_{\\rm e}$ arise\n from fluctuations in past-year baselines and not from current\n intrinsic uncertainty, we can neglect correlations between\n variations in $D_{\\rm e}$ and uncertainty in $R_{\\rm c}, R_{\\rm u}$.\n In the last two rows, representing ${\\cal M}$ expressed in two\n different ways, $\\Gamma \\equiv \\bar{D}_{\\rm e} + R_{\\rm c} + R_{\\rm\n u}$ and $\\bar{D}_{\\rm e} + R_{\\rm c} + \\gamma (\\bar{D}_{\\rm e} -\n D_{\\rm c})$, respectively. Moreover, when using the SIR model to\n replace $D_{\\rm u}$ and $R_{\\rm u}$ with $\\bar{D}_{\\rm e}-D_{\\rm\n c}\\geq 0$, there is no uncertainty associated with $D_{\\rm u}$ and\n $R_{\\rm u}$ in a deterministic model. Thus, covariances cannot be\n defined except through the uncertainty in the parameter $\\gamma =\n \\gamma_{\\rm u}\/\\mu_{\\rm u}$.}\n\\label{tab:mortality_error}\n\\end{table*}\n\n\n\\begin{figure}\n\\includegraphics{Fig4.pdf}\n\\caption{\\textbf{Different mortality measures across different\n regions.} (a) The apparent (dashed lines) and corrected (solid\n lines) IFR in the US, as of November 1, 2020, estimated using excess\n mortality data. We set $\\tilde{f}_b=0.093,0.15$ (black,red), ${\\rm\n FPR}=0.05$, ${\\rm FNR}=0.2$, and $N=330$ million. For the\n corrected IFR, we use $\\hat{f}$ as defined in\n Eq.~\\eqref{f_b_hat}. Unbiased testing corresponds to setting\n $b=0$. For $b>0$ (positive testing bias), infected individuals are\n overrepresented in the sample population. Hence, the corrected\n $\\mathrm{IFR}$ is larger than the apparent $\\mathrm{IFR}$. If $b$ is\n sufficiently small (negative testing bias), the corrected\n $\\mathrm{IFR}$ may be smaller than the apparent $\\mathrm{IFR}$. 
(b)\n The coefficient of variation of $D_{\\rm e}$ (dashed line) and IFR\n (solid lines) with $\\sigma_{\\rm I}=0.02$, $\\sigma_{\\rm II}=0.05$,\n and $\\sigma_{b}=0.2$ (see Tab.~\\ref{tab:mortality_error}).}\n\\label{fig:IFR} \n\\end{figure}\nTo illustrate the influence of different biases $b$ on the\n$\\mathrm{IFR}$ we use $\\hat{f}$ from Eq.~\\eqref{f_b_hat} in the\ncorrected $\\mathrm{IFR}\\approx \\bar{D}_{\\rm e}\/(\\hat{f} N)$. We model\nRT-PCR-certified COVID-19 deaths~\\cite{CDC_classifying} by setting the\n${\\rm FPR}=0.05$~\\cite{watson2020interpreting} and the ${\\rm\n FNR}=0.2$~\\cite{fang2020sensitivity,wang2020detection}. The\nobserved, possibly biased, fraction of positive tests $\\tilde{f}_{\\rm\n b}=\\tilde{Q}^+\/Q$ can be directly obtained from corresponding\nempirical data. As of November 1, 2020, the average of\n$\\tilde{f}_{\\rm b}$ over all tests and across all US states is about\n9.3\\%~\\cite{CDCPositivityRate}. The corresponding number of excess\ndeaths is $\\bar{D}_{\\rm{e}}=294,700$~\\cite{ExcessDeathsEconomist} and\nthe US population is about $N\\approx 330$ million~\\cite{Census}. To\nstudy the influence of variations in $\\tilde{f}_{\\rm b}$, in addition\nto $\\tilde{f}_{\\rm b}=0.093$, we also use a slightly larger\n$\\tilde{f}_{\\rm b}=0.15$ in our analysis. In Fig.~\\ref{fig:IFR} we\nshow the apparent and corrected $\\mathrm{IFR}$s for two values of\n$\\tilde{f}_{\\rm b}$ [Fig.~\\ref{fig:IFR}(a)] and the coefficient of\nvariation $\\rm{CV}_{\\rm IFR}$ [Fig.~\\ref{fig:IFR}(b)] as a function of\nthe bias $b$ and as made explicit in\nTable~\\ref{tab:mortality_measures}. For unbiased testing [$b=0$ in\n Fig.\\,\\ref{fig:IFR}(a)], the corrected $\\mathrm{IFR}$ in the US is\n1.9\\% assuming $\\tilde{f}_{\\rm b}=0.093$ and 0.8\\% assuming\n$\\tilde{f}_{\\rm b}=0.15$. If $b>0$, there is a testing bias towards\nthe infected population, hence, the apparent $\\mathrm{IFR} =\n\\bar{D}_{\\rm e}\/(\\tilde{f}_{\\rm b}N)$ is smaller than the corrected\n$\\mathrm{IFR}$ as can be seen by comparing the solid (corrected IFR)\nand the dashed (apparent IFR) lines in Fig.~\\ref{fig:IFR}(a). For\ntesting biased towards the uninfected population ($b<0$), the\ncorrected $\\mathrm{IFR}$ may be smaller than the apparent\n$\\mathrm{IFR}$. To illustrate how uncertainty in $\\mathrm{FPR}$,\n$\\mathrm{FNR}$, and $b$ affect uncertainty in IFR, we evaluate\nCV$_{\\rm {IFR}}$ as given in Table~\\ref{tab:mortality_error}.\n\n\nThe first term in uncertainty $\\sigma_{f}^{2}\/\\hat{f}^{2}$ given in\nEq.~\\eqref{sigma_f} is proportional to $1\/Q$ and can be assumed to be\nnegligibly small, given the large number $Q$ of tests administered.\nThe other terms in Eq.\\,\\eqref{sigma_f} are evaluated by assuming\n$\\sigma_{b}=0.2, \\sigma_{\\rm I}=0.02$, and $\\sigma_{\\rm II}=0.05$ and\nby keeping FPR = 0.05 and FNR = 0.2. Finally, we infer $\\Sigma_{\\rm\n e}$ from empirical data, neglect correlations between $D_{\\rm e}$\nand $N$, and assume that the variation in $N$ is negligible so that\n$\\Sigma_{{\\rm e},N} = \\Sigma_{N} \\approx 0$. Fig.~\\ref{fig:IFR}(b)\nplots ${\\rm CV}_{\\mathrm{IFR}}$ and ${\\rm CV}_{D_{\\rm e}}$ in the US\nas a function of the underlying bias $b$. The coefficient of variation\n${\\rm CV}_{D_{\\rm e}}$ is about 1\\%, much smaller than ${\\rm\n CV}_{\\mathrm{IFR}}$, and independent of $b$. 
For the values of $b$\nshown in Fig.~\\ref{fig:IFR}(b), ${\\rm CV}_{\\mathrm{IFR}}$ is between\n47--64\\% for $\\tilde{f}_{\\rm b}=0.093$ and between 20--27\\% for\n$\\tilde{f}_{\\rm b}=0.15$.\n\nNext, we compare the mortality measures $Z=$ IFR, CFR, $M$, ${\\cal M}$\nand the relative excess deaths $r$ listed in\nTab.~\\ref{tab:mortality_measures} across numerous jurisdictions. To\ndetermine the CFR, we use the COVID-19 data of\nRefs.~\\cite{NYCData,dong2020interactive}. For the apparent IFR, we use\nthe representation IFR $= p\\bar{D}_{\\rm e}\/N_{\\rm c}$ discussed\nabove. Although $p$ may depend on the stage of the pandemic, typical\nestimates range from 4\\%~\\cite{hortaccsu2020estimating} to\n10\\%~\\cite{chow2020global}. We set $p=0.1$ over the lifetime of the\npandemic. We can also use the apparent IFR $= \\bar{D}_{\\rm e}\/(f N)$;\nhowever, estimating the corrected IFR requires evaluating the bias $b$.\n\\begin{figure*}[htb]\n\\includegraphics{boxplot.png}\n\\includegraphics{distributions.png}\n\\caption{\\textbf{Mortality characteristics in different countries and\n states.} (a) The values of relative excess deaths $r$, the CFR,\n the $\\mathrm{IFR}= p \\bar{D}_{\\mathrm{e}}\/N_{\\rm c}$ with $p=N_{\\rm\n c}\/(N_{\\rm c}+N_{\\rm u}) = 0.1$~\\cite{chow2020global}, the\n confirmed resolved mortality $M$, and the true resolved mortality\n $\\mathcal{M}$ (using $\\gamma=100$) are plotted for various\n jurisdictions. (b) Different mortality measures provide ambiguous\n characterizations of disease severity. (c--g) The probability\n density functions (PDFs) of the mortality measures shown in (a) and\n (b). Note that only very incomplete recovery data are\n available for certain countries (\\textit{e.g.}, US and UK). For\n countries without recovery data, we could not determine $M$ and\n $\\mathcal{M}$. The numbers of jurisdictions that we used in (a) and (c--g)\n are 77, 246, 73, 191, and 21 for the respective mortality measures\n (from left to right). All data were updated on December 10,\n 2020~\\cite{ExcessDeathsEconomist,ExcessDeathsCDC,NYCData,dong2020interactive}.}\n\\label{fig:boxplot} \n\\end{figure*}\nIn Fig.~\\ref{fig:boxplot}(a), we show the values of the relative\nexcess deaths $r$, the CFR, the apparent IFR, the confirmed resolved\nmortality $M$, and the true resolved mortality $\\mathcal{M}$ for\ndifferent (unlabeled) regions. In all cases, we set $p = 0.1$ and\n$\\gamma=100$. As illustrated in Fig.~\\ref{fig:boxplot}(b), some\nmortality measures suggest that COVID-induced fatalities are lower in\ncertain countries compared to others, whereas other measures indicate\nthe opposite. For example, the total resolved mortality $\\mathcal{M}$\nfor Brazil is larger than for Russia and Mexico, most likely due to\nthe relatively low number of reported excess deaths, as can be seen\nfrom Fig.~\\ref{fig:excess_versus_confirmed}(a). On the other hand,\nBrazil's values of $\\mathrm{CFR}$, $\\mathrm{IFR}$, and $M$ are\nsubstantially smaller than those of Mexico [see\n Fig.~\\ref{fig:boxplot}(b)].\n\nThe distributions of all measures $Z$ and relative excess deaths $r$\nacross jurisdictions are shown in Fig.~\\ref{fig:boxplot}(c--g) and encode\nthe global uncertainty of these indices. We also calculate the\ncorresponding mean values across jurisdictions, and use the empirical\ncumulative distribution functions to determine confidence\nintervals. 
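\n\nThe summary statistics reported next can be reproduced with a short Python sketch of the kind below (our illustrative assumption), which computes the mean of a mortality measure across jurisdictions together with an empirical 95\\% confidence interval obtained from the percentiles of the empirical distribution.\n\\begin{verbatim}\nimport numpy as np\n\ndef summarize(values, level=0.95):\n    # values: one mortality measure Z evaluated across jurisdictions\n    z = np.asarray(values, dtype=float)\n    z = z[~np.isnan(z)]                 # drop jurisdictions without data\n    lo = 100.0 * (1.0 - level) \/ 2.0\n    mean = z.mean()\n    ci = np.percentile(z, [lo, 100.0 - lo])   # empirical-CDF interval\n    return mean, tuple(ci)\n\n# Example: CFR values (as fractions) for a handful of jurisdictions\nprint(summarize([0.021, 0.017, 0.035, 0.009, 0.028]))\n\\end{verbatim}\n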
The mean values across all jurisdictions are\n$\\widebar{r}=0.08$ (95\\% CI 0.0025--0.7800),\n$\\widebar{\\mathrm{CFR}}=0.020$ (95\\% CI 0.0000--0.0565),\n$\\widebar{\\mathrm{IFR}}=0.0024$ (95\\% CI 0.0000--0.0150),\n$\\widebar{M}=0.038$ (95\\% CI 0.0000--0.236), and\n$\\widebar{\\mathcal{M}}=0.027$ (95\\% CI 0.000--0.193). For calculating\n$\\widebar{M}$ and $\\widebar{\\mathcal{M}}$, we excluded countries with\nincomplete recovery data. The distributions plotted in\nFig.~\\ref{fig:boxplot}(c--g) can be used to inform our analyses of\nuncertainty or heterogeneity as summarized in\nTab.~\\ref{tab:mortality_error}. For example, the overall variance\n$\\Sigma_{Z}^{2}$ can be determined by fitting the corresponding\nempirical $Z$ distribution shown in Fig.~\\ref{fig:boxplot}(c--g).\nTable~\\ref{tab:mortality_error} displays how the related\n$\\rm{CV}_{Z}^{2}$ can be decomposed into separate terms, each arising\nfrom the variances associated with the components in the definition of\n$Z$. For concreteness, from Fig.~\\ref{fig:boxplot}(e) we obtain\n$\\rm{CV}^2_{\\rm IFR} =\\Sigma_{\\rm IFR}^2\/ \\widebar{\\mathrm{IFR}}^{2}\n\\approx 1.16$, which allows us to place an upper bound on\n$\\sigma_{b}^{2}$ using Eq.~\\eqref{sigma_f}, the results of\nTab.~\\ref{tab:mortality_error}, and\n\\begin{equation}\n\\sigma_{b}^{2} < \\frac{(\\tilde{f}_{\\rm b}-{\\rm\n FPR})^{2}}{\\hat{f}^{2}(1-\\hat{f})^{2}}{\\rm CV}^{2}_{\\rm IFR} \n\\approx \\frac{(\\tilde{f}_{\\rm b}-{\\rm\n FPR})^{2}}{\\hat{f}^{2}(1-\\hat{f})^{2}} 1.16 \n\\end{equation}\nor on $\\sigma_{\\rm I}^{2}$ using $(1-\\hat{f})^{2}\\sigma_{\\rm I}^{2} <\n(\\tilde{f}_{\\rm b}-{\\rm FPR})^{2}{\\rm CV}^{2}_{\\rm IFR}$.\n\nFinally, to provide more insight into the correlations between\ndifferent mortality measures, we plot $M$ against $\\mathrm{CFR}$ and\n$\\mathcal{M}$ against $\\mathrm{IFR}$ in Fig.~\\ref{fig:mortality2}.\nFor most regions, we observe similar values of $M$ and $\\mathrm{CFR}$\nin Fig.~\\ref{fig:mortality2}(a). Although we expect $M \\to\n\\mathrm{CFR}$ and $\\mathcal{M} \\to \\mathrm{IFR}$ towards the end of an\nepidemic, in some regions such as the UK, Sweden, the Netherlands, and\nSerbia, $M \\gg {\\rm CFR}$ due to unreported or incompletely reported\nrecovered cases. About 50\\% of the regions that we show in\nFig.~\\ref{fig:mortality2}(b) have an $\\mathrm{IFR}$ that is\napproximately equal to $\\mathcal{M}$. Again, for regions such as\nSweden and the Netherlands, $\\mathcal{M}$ is substantially larger than\n$\\mathrm{IFR}$ because of incomplete reporting of recovered cases.\n\\begin{figure}\n\\includegraphics{Fig6.pdf}\n\\caption{\\textbf{Different mortality measures across different\n regions.} We show the values of $M$ and $\\mathrm{CFR}$ (a) and\n $\\mathcal{M}$ (using $\\gamma=100$) and $\\mathrm{IFR}= p\n \\bar{D}_{\\mathrm{e}}\/N_{\\rm c}$ with $p=N_{\\rm c}\/(N_{\\rm c}+N_{\\rm\n u}) = 0.1$~\\cite{chow2020global} (b) for different regions. The\n black solid lines have slope 1. If jurisdictions do not report the\n number of recovered individuals, $R_{\\rm c} = 0$ and $M =1$ [light\n red disks in (a)]. In jurisdictions for which the data indicate\n $\\bar{D}_{\\rm e} < D_{\\rm c}$, we set $\\gamma(\\bar{D}_{\\rm e} -\n D_{\\rm c}) = 0$ in the denominator of ${\\cal M}$, which prevents it\n from becoming negative as long as $\\bar{D}_{\\rm e} \\geq 0$. 
All data\n were updated on December 10,\n 2020~\\cite{ExcessDeathsEconomist,ExcessDeathsCDC,NYCData,dong2020interactive}.}\n\\label{fig:mortality2} \n\\end{figure}\n\\section*{Discussion}\n\\label{sec:conclusion}\n\\subsection*{Relevance}\nIn the first few weeks of the initial COVID-19 outbreak in March and\nApril 2020 in the US, the reported death numbers captured only about\ntwo thirds of the total excess deaths~\\cite{woolf2020excess}. This\nmismatch may have arisen from reporting delays, attribution of\nCOVID-19 related deaths to other respiratory illnesses, and secondary\npandemic mortality resulting from delays in necessary treatment and\nreduced access to health care~\\cite{woolf2020excess}. We also observe\nthat the number of excess deaths in the Fall months of 2020 have been\nsignificantly higher than the corresponding reported COVID-19 deaths\nin many US states and countries. The weekly numbers of deaths in\nregions with a high COVID-19 prevalence were up to 8 times higher than\nin previous years. Among the countries that were analyzed in this\nstudy, the five countries with the largest numbers of excess deaths\nsince the beginning of the COVID-19 outbreak (all numbers per 100,000)\nare Peru (256), Ecuador (199), Mexico (151), Spain (136), and Belgium\n(120). The five countries with the lowest numbers of excess deaths\nsince the beginning of the COVID-19 outbreak are Denmark (2), Norway\n(6), Germany (8), Austria (31), and Switzerland\n(33)~\\cite{ExcessDeathsEconomist} \\footnote{Note that Switzerland\n experienced a rapid growth in excess deaths in recent weeks. More\n recent estimates of the number of excess deaths per 100,000 suggest\n a value of 64~\\cite{excessdata}, which is similar to the\n corresponding excess death value observed in Sweden.}. If one\nincludes the months before the outbreak, the numbers of excess deaths\nper 100,000 in 2020 in Germany, Denmark, and Norway are -3209, -707,\nand -34, respectively. In the early stages of the COVID-19 pandemic,\ntesting capabilities were often insufficient to resolve\nrapidly-increasing case and death numbers. This is still the case in\nsome parts of the world, in particular in many developing\ncountries~\\cite{ondoa2020covid}. Standard mortality measures such as\nthe IFR and CFR thus suffer from a time-lag problem.\n\n\n\\subsection*{Strengths and limitations}\nThe proposed use of excess deaths in standard mortality measures may\nprovide more accurate estimates of infection-caused deaths, while\nerrors in the estimates of the fraction of infected individuals in a\npopulation from testing can be corrected by estimating the testing\nbias and testing specificity and sensitivity.\nOne could sharpen estimates of the true\nCOVID-19 deaths by systematically analyzing the statistics of deaths\nfrom all reported causes using a standard protocol such as\nICD-10~\\cite{CDCICD10}. \nFor example, the mean traffic deaths per month in Spain between\n2011-2016 is about 174 persons~\\cite{EUTransport}, so any\npandemic-related changes to traffic volumes would have little impact\nconsidering the much larger number of COVID-19 deaths.\n\n\nDifferent mortality measures are sensitive to different sources of\nuncertainty. Under the assumption that all excess deaths are caused by\na given infectious disease (\\textit{e.g.}, COVID-19), the underlying error in\nthe determined number of excess deaths can be estimated using\nhistorical death statistics from the same jurisdiction. 
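\n\nAs a rough illustration of this point, the following Python sketch (a simplified assumption of ours; the excess-death estimates used in this work are taken from the cited sources) estimates the excess deaths and their uncertainty from weekly all-cause death counts by comparing the pandemic period against a historical baseline.\n\\begin{verbatim}\nimport numpy as np\n\ndef excess_deaths(current_weekly, historical_weekly):\n    # current_weekly: weekly all-cause deaths during the pandemic period\n    # historical_weekly: (years x weeks) weekly deaths in previous years\n    hist = np.asarray(historical_weekly, dtype=float)\n    cur = np.asarray(current_weekly, dtype=float)\n    baseline = hist.mean(axis=0)             # expected deaths per week\n    weekly_var = hist.var(axis=0, ddof=1)    # year-to-year variability\n    d_e = np.sum(cur - baseline)             # excess deaths D_e\n    sigma_e = np.sqrt(np.sum(weekly_var))    # rough uncertainty Sigma_e\n    return d_e, sigma_e\n\n# Example with synthetic numbers (5 prior years, 10 weeks):\nrng = np.random.default_rng(0)\nhist = rng.normal(1000.0, 30.0, size=(5, 10))\ncur = rng.normal(1200.0, 30.0, size=10)\nprint(excess_deaths(cur, hist))\n\\end{verbatim}\n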
Uncertainties\nin mortality measures can also be decomposed into the uncertainties of\ntheir component quantities, including the positive-tested fraction $f$,\nwhich depends on uncertainties in the testing parameters.\n\nAs for all epidemic forecasting and surveillance, our methodology\ndepends on the quality of excess death and COVID-19 case data and\nknowledge of testing parameters. For many countries, the lack of\nbinding international reporting guidelines, testing limitations, and\npossible data tampering~\\cite{aljazeera_deaths} complicates the\napplication of our framework. A striking example of variability is the\nlarge discrepancy between excess deaths $D_{\\rm e}$ and confirmed\ndeaths $D_{\\rm c}$ across many jurisdictions, which renders mortalities\nthat rely on $D_{\\rm c}$ suspect. More research is necessary to\ndisentangle the excess deaths that are directly caused by SARS-CoV-2\ninfections from those that result from postponed medical\ntreatment~\\cite{woolf2020excess}, increased suicide\nrates~\\cite{sher2020impact}, and other indirect factors contributing\nto an increase in excess mortality. Even if the numbers of excess\ndeaths were accurately reported and known to be caused by a given\ndisease, inferring the corresponding number of unreported cases (\\textit{e.g.},\nasymptomatic infections), which appears in the definition of the IFR\nand $\\mathcal{M}$ (see Tab.~\\ref{tab:mortality_measures}), is\nchallenging and only possible if additional models and assumptions are\nintroduced.\n\nAnother complication may arise if the number of excess deaths is not\nsignificantly larger than the historical mean. Then,\nexcess-death-based mortality estimates suffer from large\nuncertainty\/variability and may be meaningless. While we have\nconsidered only the average or last values of $\\tilde{f}_{\\rm b}$, our\nframework can be straightforwardly extended and dynamically applied\nacross successive time windows, using, \\textit{e.g.}, Bayesian or\nKalman filtering approaches.\n\nFinally, we have not resolved the excess deaths or mortalities with\nrespect to age or other attributes such as sex, co-morbidities,\noccupation, etc. We expect that age-structured excess deaths better\nresolve a jurisdiction's overall mortality. By extending our testing\nand modeling approaches to stratified data, one can also\nstraightforwardly infer stratified mortality measures $Z$, providing\nadditional informative indices for comparison.\n\\section*{Conclusions} \nBased on the data presented in Figs.~\\ref{fig:boxplot} and\n\\ref{fig:mortality2}, we conclude that the mortality measures $r$,\n$\\mathrm{CFR}$, $\\mathrm{IFR}$, $M$, and $\\mathcal{M}$ may provide\ndifferent characterizations of disease severity in certain\njurisdictions due to testing limitations and bias, differences in\nreporting guidelines, reporting delays, etc. The propagation of\nuncertainty and coefficients of variation that we summarize in\nTab.~\\ref{tab:mortality_error} can help quantify and compare errors\narising in different mortality measures, thus informing our\nunderstanding of the actual death toll of COVID-19. Depending on the\nstage of an outbreak and the currently available disease monitoring\ndata, certain mortality measures are preferable to others. 
If the\nnumber of recovered individuals is being monitored, the resolved\nmortalities $M$ and $\\mathcal{M}$ should be preferred over\n$\\mathrm{CFR}$ and $\\mathrm{IFR}$, since the latter suffer from errors\nassociated with the time-lag between infection and\nresolution~\\cite{bottcher2020case}. For estimating $\\mathrm{IFR}$ and\n$\\mathcal{M}$, we propose using excess death data and an epidemic\nmodel. In situations in which case numbers cannot be estimated\naccurately, the relative excess deaths $r$ provides a complementary\nmeasure to monitor disease severity. Our analyses of different\nmortality measures reveal the following:\n\n\\begin{itemize}\n\n\\item The CFR and $M$ are defined directly from confirmed deaths $D_{\\rm\n c}$ and suffer from variability in its reporting. Moreover,\n the CFR does not consider resolved cases and is expected to evolve\n during an epidemic. Although $M$ includes resolved cases, the\n additionally required confirmed recovered cases $R_{\\rm c}$ add\n to its variability across jurisdictions. Testing errors affect both\n $D_{\\rm c}$ and $R_{\\rm c}$, but if the FNR and FPR are known, they\n can be controlled using Eq.~\\eqref{PTOT_APP} given in the SI.\n\n\\item The IFR requires knowledge of the true cumulative number of\n disease-caused deaths as well as the true number of infected\n individuals (recovered or not) in a population. We show how these\n can be estimated from excess deaths and testing, respectively. Thus,\n the IFR will be sensitive to the inferred excess deaths and to the\n testing (particularly to the bias in the testing). Across all\n countries analyzed in this study, we found a mean IFR of about\n 0.24\\% (95\\% CI 0.0--1.5\\%), which is similar to the previously\n reported values between 0.1 and\n 1.5\\%~\\cite{salje2020estimating,chow2020global,ioannidis2020infection}.\n\n\\item In order to estimate the resolved true mortality ${\\cal M}$, an\n additional relationship is required to estimate the unconfirmed\n recovered population $R_{\\rm u}$. In this paper, we propose a simple\n SIR-type model in order to relate $R_{\\rm u}$ to measured excess and\n confirmed deaths through the ratio of the recovery rate to the death\n rate. The variability in reporting $D_{\\rm c}$ across different\n jurisdictions generates uncertainty in ${\\cal M}$ and reduces its\n reliability when compared across jurisdictions.\n\n\\item The mortality measures that can most reliably be compared across\n jurisdictions should not depend on reported data, which are subject\n to different protocols, errors, and manipulation\/intentional\n omission. Thus, the per capita excess deaths and relative excess\n deaths $r$ (see last column of Table \\ref{tab:mortality_measures})\n are the measures that provide the most consistent comparisons of\n disease mortality across jurisdictions (provided total deaths are\n accurately tabulated). However, they are the least informative in\n terms of disease severity and individual risk, for which $M$ and\n ${\\cal M}$ are better.\n\n\\item Uncertainty in all mortalities $Z$ can be decomposed into the\n uncertainties in component quantities such as the excess deaths\n or the testing bias. We can use global data to estimate the means and\n variances in $Z$, allowing us to put bounds on the variances of the\n component quantities and\/or parameters. 
\n\n\\end{itemize}\n\nParts of our framework can be readily integrated into or combined with\nmortality surveillance platforms such as the European Mortality\nMonitor (EURO MOMO) project~\\cite{euromomo} and the Mortality\nSurveillance System of the National Center for Health\nStatistics~\\cite{USData} to assess disease burden in terms of\ndifferent mortality measures and their associated uncertainty.\n\\section*{Data availability}\nAll datasets used in this study are available from\nRefs.~\\cite{USData,SpainData,EnglandData,SwitzerlandData,Istat}. The\nsource codes used in our analyses are publicly available at\n\\cite{GitHub}.\n\\section*{Acknowledgements}\nLB acknowledges financial support from the Swiss National Fund\n(P2EZP2\\_191888). The authors also acknowledge financial support from\nthe Army Research Office (W911NF-18-1-0345), the NIH (R01HL146552),\nand the National Science Foundation (DMS-1814364, DMS-1814090).\n\\onecolumngrid\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\IEEEPARstart{W}ireless sensor networks (WSNs) are becoming a mainstream technology constituting the backbone of several emerging technologies, such as the internet of things (IoT) \\cite{Khalil2014} and smart cities \\cite{Rashid2016} (see references therein). Indeed, the flexible nature of WSNs \\cite{Chong2003} enables them to serve such a wide spectrum of applications. However, several aspects of WSNs remain fertile research grounds, especially distributed detection (DD) in WSNs \\cite{Chamberland2007}. In such a scenario, battery-powered sensor nodes (SNs), geographically distributed over a vast region, monitor the region of interest (ROI) in order to detect any intruders. The locations of the SNs are best modeled as a random point process \\cite{Zhang2015}, because they might be out of communication range, out of power, or might even be dropped from an airplane to form a network \\cite{Song2009}. Due to constrained power and bandwidth, the collected data is often compressed into a single-bit decision. Moreover, because the communication range is limited, providing adequate coverage for the large number of SNs is a challenging task. So, the WSN is divided into geographical clusters \\cite{Bandyopadhyay2003} and hierarchically into three tiers: SNs, cluster heads (CHs), and the fusion center (FC). The SNs in each cluster send their data to the CH, which usually has more power and a larger communication range. The CHs in turn report the collected data to the FC, thus acting as high-power relays. Often data is relayed in an amplify-and-forward (AF) or decode-and-forward (DF) fashion over imperfect communication channels \\cite{Hong2007}. \n\nIn this paper, we investigate decision fusion for distributed detection in a randomly deployed clustered WSN operating with constrained power and over imperfect channels. In particular, the channels between SNs and CHs (termed SN-CH) and the channels between CHs and the FC (termed CH-FC) are assumed to suffer from additive white Gaussian noise (AWGN), in contrast to our previous work in \\cite{Aldalahmeh2016}, which assumes ideal channels. To the best of the authors' knowledge, this is the first work that studies fusion rules in the above problem setting.\n\nIn light of the previous framework, the main contributions of this paper are:\n\\begin{enumerate}\n\\item The optimal log-likelihood ratio (LLR) rule is presented first, which is analytically difficult to implement. 
Subsequently, a sub-optimal linear fusion rule (LFR) is derived. Intuitively, the LFR gives more weight to clusters with better detection and channel qualities. \n\\item We propose the approximate maximum likelihood estimator-based fusion rule (LFR-aML) as a practical implementation of the LFR. The LFR-aML estimates the statistical parameters required for the detection fusion rule by solving a constrained maximum likelihood (ML) problem with the aid of the Karush-Kuhn-Tucker (KKT) conditions.\n\\item To quantify the performance of the LFR and its derivatives, we derive Gaussian-tail upper bounds for the detection and the false alarm probabilities.\n\\item The optimal CH transmission power is found in a closed-form manner while still adhering to a specific detection performance. This is achieved by also solving the KKT conditions of the related convex optimization problem.\n\\end{enumerate}\n\nThe rest of the paper is organized as follows. In Section \\ref{sec:Related-Work}, related work is reviewed. The adopted notation is explained in Section \\ref{sec:Notation}. The system model is presented in Section \\ref{sec:System-Model}. The proposed fusion rules are discussed in Section \\ref{sec:Fusion-Rules}, in which we begin by formulating the optimal DD fusion rule for the noisy clustered WSN. Then, the LFR algorithm is derived in light of the previous optimal rule. Finally, the LFR-aML is proposed as the practical implementation of the LFR. The power allocation for the LFR is investigated in Section \\ref{sec:Power-Allocation}. Section \\ref{sec:Practical-Consdr} discusses the practical implementation procedure of the LFR-aML algorithm. Section \\ref{sec:Simulation-Result} presents the simulation results and their discussions. Finally, the conclusions are given in Section \\ref{sec:Conclusion}.\n\\section{Related Work}\n\\label{sec:Related-Work}\nSince the seminal work by Tenney and Sandell \\cite{Tenney1981}, distributed detection has become, and still is, a rich research topic; see, for example, \\cite{Veeravalli2012}, the recent tutorial \\cite{Javadi2016}, and references therein. Fundamental results have been attained for the parallel, tandem, and tree sensor paradigms \\cite{Viswanathan1997}, emphasizing the optimality of fusing and quantizing the local log-likelihood ratios of the distributed sensors. Further extensions of the classical problem were presented in \\cite{Blum1997}, such as weak signal detection and robust detection. \n\nFor the case of single-bit quantization over perfect parallel networks, the optimal Chair-Varshney fusion rule (CVR) was derived in \\cite{Chair1986}, which implicitly requires knowledge of the target parameters (location and power). A generalized likelihood ratio test (GLRT) detector was proposed in \\cite{Niu2006a} where the target parameters are estimated. However, the GLRT is computationally demanding and so the suboptimal counting rule (CR) was proposed in \\cite{Niu2005}, which is simply the sum of positive local detections. Its performance, on the other hand, was investigated in \\cite{Niu2008}. The weighted decision fusion (WDF) is introduced in \\cite{Javadi2015} as an improvement of the CR, where the decisions are weighted first before being fused at the FC. In a different direction, a detector based on the Rao test was suggested in \\cite{Ciuonzo2013} and the generalized Rao test in \\cite{Ciuonzo2017a}, both of which strike a trade-off between complexity and performance. 
In a similar effort, the generalized locally optimum detector was devised in \\cite{Ciuonzo2017} and \\cite{Ciuonzo2018}. However, the previous detectors in general suffered from the problem of spurious detection\\footnote{Spurious detection is defined in \\cite{Guerriero2010} as the event when a target is present and some sensors far from it declare the presence of a target in their vicinity.}, especially in a large WSN. Scan statistics-based detection was proposed in \\cite{Guerriero2010} to overcome the previous problem but at the expense of a significant delay due to the sliding-window structure of the detector. A local vote decision fusion rule (LVDF) was proposed in \\cite{Katenka2008}, in which sensors use neighbouring decisions to correct their decisions locally and then integrate them globally.\n\n\n\n\nDD over multi-hop (tree) sensor networks has also received considerable attention. Asymptotic results were presented in \\cite{Tay2008} and \\cite{Tay2009} for DD with Neyman-Pearson and Bayesian criteria, respectively. Mainly, it was shown that the error probability decays exponentially as the number of SNs increases. Decision fusion rules over multi-hop networks were investigated in \\cite{Lin2005} with flat-fading noisy channels. The relaying SNs decode-and-forward the received data to the FC. The derived suboptimal rules in \\cite{Lin2005} de-emphasizes sensors with more hops. Similarly, authors in \\cite{Tian2007} investigated the same problem but with a binary-symmetric channel (BSC) model in the network, where it was shown that the optimal fusion rule is a weighted order statistic filter.\n\nClustered sensor networks were introduced for DD in \\cite{Ferrari2011}, in which sensors report to CHs that in turn report to a FC\\footnote{In \\cite{Ferrari2011} the CHs are refered to as FCs and the FC is refered to as an access point (AP).}. Majority-like fusion (MLF) rules were used on both the cluster level and the FC level. Surprisingly, results there show that the detection performance of a clustered sensor network is worse than the performance of sensors reporting directly to the FC. This is due to employing the MLF rule in the CHs level, thus introducing additional errors in the decision process. \n\nOn the other hand, in our previous work \\cite{Aldalahmeh2016} we have derived an optimal-cluster-based fusion rule (OCR) for clustered sensor networks. In this context, CHs collect the local SNs decisions and send this data to the FC over ideal channels. In this paper however, we consider a two-hop network in which the SN-CH and CH-FC channels are affected by AWGN. The CHs employ an AF scheme to send the collected data to the FC, in contrast to \\cite{Tian2007} that adopts sending hard decisions over multiple-hop BSCs. Moreover, the adopted AF scheme enables us to minimize the transmission power, provided a specific detection performance is satisfied.\n\n\n\n\n\n\n\\section{Notation}\n\\label{sec:Notation}\nIn this paper we will generally refer to deterministic values by lowercase symbols $(x)$, bold symbols refers to vector values $(\\mathbf{x})$, whereas random values are referred to by uppercase symbols $(X)$. However, we denote the number of CHs as ($M$), global detection threshold as ($\\Gamma$) and the transmitted power as ($P$). The operator $\\succcurlyeq$ refers to element-wise greater than or equal to. The probability of an event $A$ is denoted by $\\ensuremath{\\mathbb{P}}(A)$. 
A normal distributed random variable (RV) $X$ with mean $\\mu$ and variance $\\sigma^2$ is denoted $X \\sim \\mathcal{N}(\\mu,\\sigma^2)$ and Poisson RV $Y$ with mean $\\lambda$ is denoted by $Y \\sim \\text{Pois}(\\lambda)$. The expectation operator with respect to (w.r.t.) RV $X$ is written as $\\ensuremath{\\mathbb{E}}_X[\\cdot]$ and the moment generating function (MGF) for RV $X$ is defined as $M_X(t) = \\ensuremath{\\mathbb{E}}_X \\left[ \\exp(tX) \\right]$. Finally, the estimate of any variable $x$ is denoted by $\\widehat{x}$.\n\n\\section{System Model}\n\\label{sec:System-Model}\nThe WSN is functionally divided into three tiers as shown in Fig. \\ref{fig:WSN_fig}. In this section we present the sensing model, the stochastic geometry model for the SNs deployment, similar to \\cite{Zhang2015} and \\cite{Niu2005}, and the communication model between the three tiers.\n\\begin{figure}[!ht]\n\\centering\n\\WSNfig\n\\caption{The WSN topology, in which the star is the target, gray-shaded nodes are the detecting SNs and white-shaded nodes are the non-detecting SNs.}\n\\label{fig:WSN_fig}\n\\end{figure}\n\\subsection{Sensing and Sensor Deployment Models} \\label{subsec:sensing-model}\nConsider a WSN deployed over a region, $\\mathcal{A} \\subset \\mathbb{R}^2$ where $\\mathcal{A}$ is assumed to be significantly large. The WSN is modeled by a simple Poisson Point Process (PPP) $\\Phi = \\{ \\mathbf{X}_1, \\mathbf{X}_2, \\cdots, \\mathbf{X}_N \\}$ in $\\mathcal{A}$ \\cite{Streit2010}, where $\\mathbf{X}_i \\in \\Phi$ is the the coordinate of the $i$th SN. This implies that the $\\mathbf{X}_i$'s are random variables (RVs) and their number $N = \\vert \\Phi \\vert$ is a Poisson RV having the distribution $N \\sim \\text{Pois}(\\ensuremath{\\mathbb{E}}\\left[N\\right])$ where $\\ensuremath{\\mathbb{E}}\\left[N\\right]$ is the average number of SNs. The SN intensity ($\\lambda$) is defined as the average number of points (SNs) in a unit area. In general, the PPP might be non-homogeneous, i.e., the intensity is location dependent, described by $\\lambda(\\mathbf{x})$ where $\\textbf{x}$ is the location coordinates. This case might arise due to environmental or application specific constraints. However, using the non-homogeneous PPP model complicates the analysis. Thus we adopt a homogeneous PPP in our treatment ($\\lambda(\\mathbf{x})=\\lambda,\\; \\forall \\mathbf{x} \\in \\mathcal{A}$), in which we can approximate the non-homogeneous case appropriately if the $m$th cluster is adequately small. In other words, the intensity in the $m$th cluster does not vary significantly with space, i.e. $\\lambda_{m}(\\mathbf{x}) \\approx \\lambda_{m}$ where $\\lambda_m$ is the $m$th cluster SN intensity. In the homogeneous case, the number of SNs follows the distribution $\\text{Pois}(\\lambda \\vert \\mathcal{A} \\vert)$, where $\\vert \\mathcal{A} \\vert$ is the area of $\\mathcal{A}$. Fig. \\ref{fig:WSN_fig} shows a homogeneous random network deployment.\n\nThe WSN is tasked with the detection of any intruder or target entering the ROI. A target at location $\\mathbf{X}_t \\in \\mathcal{A}$ leaves a signature signal sensed by the SNs, which might be thermal, magnetic, electrical, seismic or electromagnetic signal \\cite{Arora2004}. We adopt the sensing model in \\cite{Niu2008}, in which the signature power in the far-field is assumed to decay quadratically with distance. The target's parameters are given in the vector $\\boldsymbol{\\theta} = [P_t, \\mathbf{X}_t]^T$, where $P_t$ is the target's signal power. 
The noise-free signal received at the $i$th SN located at a given $\\mathbf{x}_i$ has the following amplitude:\n\\begin{eqnarray}\na(\\mathbf{x}_i) = \\frac{\\sqrt{P_t}}{ \\max \\left( d_0, d_i \\right) } \\label{eq:noise-free-signal}\n\\end{eqnarray}\nwhere $d_0$ is the reference distance to the node's sensor and $d_i = \\| \\mathbf{x}_t - \\mathbf{x}_i \\|$ is the Euclidean distance between the target and the $i$th SN. Note that the measured signal is saturated if the distance to the target is smaller than $d_0$. The above model can adequately describe acoustic or electromagnetic signals. \n\nFor a given realization of $\\Phi$, each SN samples the environment to decide whether an intruder is present or not. Assuming conditional independence, the collected data at the $i$th SN under the null and alternative hypotheses, $\\mathcal{H}_0$ and $\\mathcal{H}_1$ respectively, takes the following form:\n\\begin{eqnarray}\n\\ensuremath{\\mathcal{H}}_1: S(\\mathbf{x}_i) &=& a(\\mathbf{x}_i) + Q_i \\\\\n\\ensuremath{\\mathcal{H}}_0: S(\\mathbf{x}_i) &=& Q_i \n\\end{eqnarray}\nwhere $Q_i$ is a white Gaussian noise at the $i$th SN with zero mean and variance $\\sigma^2_s$. However, in practice the collected data are actually correlated \\cite{Sundaresan2011}. The noise is assumed to be identically and independently distributed over all SNs and is not dependent on $\\mathbf{x}_i$. The sensing SNR is defined as $\\text{SNR}_s = P_t\/\\sigma^2_s$. Each SN computes its binary local decision, $I(\\mathbf{x}_i) = \\{0,1\\}$, by comparing the collected data with a local decision threshold $\\tau$, i.e.,\n\\begin{equation}\n\\label{eq:Ii}\nI(\\mathbf{x}_i) = \\begin{cases} 1, & g\\left( S(\\mathbf{x}_i) \\right) \\geq \\tau\t\\\\\n 0, & g\\left( S(\\mathbf{x}_i) \\right) < \\tau \\end{cases}\n\\end{equation}\nwhere $g(\\cdot)$ is the local detection function, e.g., matched filter or energy detector. Here, $\\tau$ is the same for all SNs. Therefore, the local probabilities of false alarm and detection are given respectively by\n\\begin{eqnarray}\nP_{fa} &=& f_0 \\left( \\tau,\\sigma_s \\right) \\label{P_fa} \\\\\nP_d(\\mathbf{x}_i) &=& f_1\\left( a(\\mathbf{x}_i), \\sigma_s, \\tau \\right) \\label{eq:P_d}\n\\end{eqnarray}\nwhere $f_0\\left(\\cdot\\right)$ and $f_1\\left(\\cdot\\right)$ are the false alarm and detection probabilities under $\\ensuremath{\\mathcal{H}}_0$ and $\\ensuremath{\\mathcal{H}}_1$, respectively. These functions depend on the type of local detector used, such as a matched filter or an energy detector. Note, however, that the probability of detection in \\eqref{eq:P_d} also depends on the target parameters, $P_t$ and $\\mathbf{x}_t$ through \\eqref{eq:noise-free-signal}.\n\n\\subsection{Communication Model} \\label{subsec:com-model}\nDue to the large area of the ROI, the WSN is geographically divided into $M$ disjoint zones: $\\ensuremath{\\mathcal{C}}_1, \\ensuremath{\\mathcal{C}}_2, \\cdots, \\ensuremath{\\mathcal{C}}_M $, where $\\ensuremath{\\mathcal{C}}_m \\in \\mathcal{A}$ for $m = 1, \\cdots, M$. For the sake of simplicity, the $\\ensuremath{\\mathcal{C}}_m$'s are assumed to be identical. Each zone is managed by a CH located at $\\mathbf{x}_m \\notin \\Phi$. The number of clusters is fixed and their locations are also fixed and known to the WSN. SNs located at $\\mathbf{x}_i \\in \\ensuremath{\\mathcal{C}}_m$ send their decisions to the $m$th CH. The CHs in turn report the collected decisions back to the FC. The three-tier network is shown in Fig. 
\\ref{fig:WSN_fig}.\n\nDue to cost and bandwidth constraints, SNs use on-off keying (OOK) to transmit their binary local decisions to the CH over a shared multiple access (MAC) AWGN channel. These SNs transmit with the same power $P_0$ within the cluster and are assumed to be synchronized to the same time slot. Hence, the received signal at the $m$th CH is \n\\begin{equation}\n\tY_m = \\sqrt{P_0} \\Lambda_m + W_m \\label{eq:SN-CH}\n\\end{equation}\nwhere \n\\begin{equation}\n\\Lambda_m = \\sum_{\\mathbf{X}_i \\in \\ensuremath{\\mathcal{C}}_m} I(\\mathbf{X}_i), \\; m = 1,\\cdots,M \\label{eq:Lambda_m}\n\\end{equation}\nis the number of positive local decisions in the $m$th cluster and $W_m$ is the AWGN at that CH with distribution $\\mathcal{N}\\left(0,\\sigma^2_{c,m} \\right)$. Note that since $\\Lambda_m$ is actually the result of thinning of the PPP in the $m$th cluster, $\\Lambda_m$ is a Poisson RV distributed as \\cite{Aldalahmeh2016}\n\\begin{equation}\n\\Lambda_m \\sim \\begin{cases} \n\t\t\t\t\t\\textup{Pois} \\left( \\lambda_{0,m} \\right),& \\ensuremath{\\mathcal{H}}_0 \\\\\n\t\t\t\t\t\\textup{Pois} \\left( \\lambda_{1,m} \\right),& \\ensuremath{\\mathcal{H}}_1 \t\t\t\t\t\n\t\t\t \\end{cases} \\label{eq:CR-pdf}\t\t\t\n\\end{equation}\nwhere $\\lambda_{0,m}$ and $\\lambda_{1,m}$ are the mean numbers of the detecting SNs ($\\Lambda_m$) in the $m$th cluster under $\\ensuremath{\\mathcal{H}}_0$ and $\\ensuremath{\\mathcal{H}}_1$, respectively, and are given by\n\\begin{eqnarray}\n\\lambda_{0,m} &=& \\lambda P_{fa} \\vert \\ensuremath{\\mathcal{C}}_m \\vert \\label{eq:lambda_0} \\\\\n\\lambda_{1,m} &=& \\lambda \\int_{\\ensuremath{\\mathcal{C}}_m} P_d(\\mathbf{x}) d\\mathbf{x}. \\label{eq:lambda_m}\n\\end{eqnarray}\n\nNote, however, that in the homogeneous case $\\lambda_{0,m} = \\lambda_0 \\, \\forall m$, since the $\\ensuremath{\\mathcal{C}}_m$'s are assumed to be identical.\n\nHowever, in order to implement the models in \\eqref{eq:SN-CH} and \\eqref{eq:Lambda_m}, the CH controls the SNs' transmission power via a power control scheme. Further discussion is provided in Section \\ref{sec:Practical-Consdr}. Each CH adopts the AF scheme to relay the gathered data to the FC over a dedicated AWGN wireless channel. The CHs are assumed to have larger transmission power capabilities than the SNs. The received signal at the FC from the $m$th CH is \n\\begin{equation}\nZ_m = \\sqrt{P_m} Y_m + V_m, \\; m = 1,\\cdots,M \\label{eq:CH-FC}\n\\end{equation}\nwhere $P_m$ is the transmission power used by the $m$th CH and $V_m \\sim \\mathcal{N}\\left(0,\\sigma^2_{f,m} \\right)$ is the receiver AWGN in the channel between the FC and the $m$th CH.\n\\section{Fusion Rules in Clustered WSNs}\n\\label{sec:Fusion-Rules}\nIn this section, we present the fusion rules for clustered WSNs. The ideal channel case is presented first as a benchmark for comparison. Then we proceed to discuss fusion rules for noisy channels. \n\\subsection{Decision Fusion in Ideal Channel Clustered WSN}\nFor a clustered WSN with ideal communication channels, the majority-like fusion rule has been proposed in \\cite{Ferrari2011}, in which the counting rule is implemented at both the CH and the FC levels. However, since this rule showed degraded performance in random WSNs, we do not discuss it further in this paper. 
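\n\nFor concreteness, the following Python sketch (a simplified simulation under our own assumptions, not part of the derivations) generates the cluster counts $\\Lambda_m$ of \\eqref{eq:Lambda_m}--\\eqref{eq:CR-pdf}, which the ideal-channel rules fuse directly, together with the noisy FC observations obtained via \\eqref{eq:SN-CH} and \\eqref{eq:CH-FC} that are used by the fusion rules developed later.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef simulate_fc_observations(lam, P0, P, sigma_c2, sigma_f2):\n    # lam: mean numbers of detecting SNs per cluster (lambda_{j,m})\n    # P0: SN transmit power; P: CH transmit powers (one per cluster)\n    # sigma_c2, sigma_f2: SN-CH and CH-FC noise variances per cluster\n    lam = np.asarray(lam, dtype=float)\n    Lambda = rng.poisson(lam)                    # counts, as in (eq:CR-pdf)\n    W = rng.normal(0.0, np.sqrt(sigma_c2))       # SN-CH channel noise\n    V = rng.normal(0.0, np.sqrt(sigma_f2))       # CH-FC channel noise\n    Y = np.sqrt(P0) * Lambda + W                 # (eq:SN-CH)\n    Z = np.sqrt(P) * Y + V                       # (eq:CH-FC)\n    return Lambda, Z\n\n# Example: M = 4 clusters, the target mainly covered by cluster 0\nlam_1 = np.array([6.0, 1.2, 1.0, 1.0])           # lambda_{1,m} under H1\nprint(simulate_fc_observations(lam_1, P0=1.0, P=np.full(4, 2.0),\n                               sigma_c2=np.full(4, 0.5),\n                               sigma_f2=np.full(4, 0.5)))\n\\end{verbatim}\n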
\n\nIn \\cite{Aldalahmeh2016}, we proposed the OCR, in which CHs send the sum of the collected SNs' decisions, $\\Lambda_m$, to the FC to be optimally fused, i.e.,\n\\begin{equation}\n\\Lambda_{\\text{OCR}} = \\sum\\limits^M_{m=1} c_m \\Lambda_m\n\\label{eq:OCR} \n\\end{equation}\nwhere the optimal weighing coefficient is $c_m = \\log \\left( \\lambda_{1,m}\/ \\lambda_{0,m} \\right)$. This weighing effectively suppresses the previous spurious detection problem, since the clusters containing these spurious decisions have small weighing coefficients. Note however, that $\\lambda_{1,m}$ depends on the target's parameters, $\\boldsymbol{\\theta}$, through \\eqref{eq:lambda_m}. Thus an exact implementation of \\eqref{eq:OCR} requires knowledge of $\\boldsymbol{\\theta}$. This problem has been circumvented in \\cite{Aldalahmeh2016} by using a complexity-reduced GLRT, in which $\\boldsymbol{\\theta}$ is coarsely estimated.\n\nIt is interesting to note however, that the CR \\cite{Niu2005} is a special case of the OCR when there is only one global cluster encompassing the whole ROI. Thus the fusion rule in \\eqref{eq:OCR} reduces to \n\\begin{equation}\n\\Lambda_{\\text{{\\scriptsize CR}}} = \\sum_{\\mathbf{X}_i \\in \\Phi}^N I(\\mathbf{X}_i). \\label{eq:CR}\n\\end{equation}\n\nThe CR is also used as benchmark performance comparison of the fusion rules.\n\\subsection{Optimal Decision Fusion in a Noisy Clustered WSN}\nIn order to develop the optimal fusion rule in clustered WSNs with noisy channels, we investigate the received signals at the FC. By combining \\eqref{eq:SN-CH} and \\eqref{eq:CH-FC}, the received signal is\n\\begin{equation}\nZ_m = \\sqrt{\\widetilde{P}_m} \\Lambda_m + \\widetilde{V}_m \\label{eq:Z_m}\n\\end{equation}\nwhere $\\widetilde{P}_m = P_m P_0$ and $ \\widetilde{V}_m = \\sqrt{P_m} W_m + V_m$ is the aggregate noise at the $m$th CH-FC channel having a distribution of $\\mathcal{N}\\left( 0, \\widetilde{\\sigma}^2_m \\right)$ where $\\widetilde{\\sigma}^2_m = P_m \\sigma^2_{c,m} + \\sigma^2_{f,m}$. The likelihood-ratio-test (LRT) for the signal in \\eqref{eq:Z_m} is \n{\\small\n\\begin{eqnarray}\n\\Lambda_{\\text{LRT}} &=& \\prod_{m=1}^M \\frac{p\\left( z_m ; \\ensuremath{\\mathcal{H}}_1 \\right)}{p\\left( z_m ; \\ensuremath{\\mathcal{H}}_0 \\right)} = \\prod_{m=1}^M \\frac{\\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ p\\left( z_m \\vert \\Lambda_m \\right) ; \\ensuremath{\\mathcal{H}}_1 \\right] }{ \\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ p\\left( z_m \\vert \\Lambda_m \\right) ; \\ensuremath{\\mathcal{H}}_0 \\right] } \\nonumber \\\\\n &=& \\prod_{m=1}^M \\frac{\\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ \\exp\\left( -\\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z_m - \\sqrt{\\widetilde{P}_m}\\Lambda_m \\right)^2 \\right) ; \\ensuremath{\\mathcal{H}}_1 \\right] }{\\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ \\exp\\left( -\\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z_m - \\sqrt{\\widetilde{P}_m}\\Lambda_m \\right)^2 \\right) ; \\ensuremath{\\mathcal{H}}_0 \\right]}. \\nonumber \\\\ \\label{eq:LRT}\n\\end{eqnarray}\n}\nNote that the expectations in the numerator and denominator are w.r.t. the distributions in \\eqref{eq:CR-pdf}. Therefore, $p\\left( z_m ; \\ensuremath{\\mathcal{H}}_1 \\right)$ is actually the convolution of the Poisson distribution of $\\Lambda_m$ and the Gaussian distribution of the noise leading to the fourth term in \\eqref{eq:LRT}. 
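\n\nAlthough the expectations in \\eqref{eq:LRT} have no convenient closed form, they can be evaluated numerically by truncating the Poisson sums. The following Python sketch (our illustrative assumption, with a scalar channel gain and noise variance for brevity and a truncation level \\texttt{kmax}) computes the marginal densities $p(z_m;\\ensuremath{\\mathcal{H}}_j)$ and hence the LRT of \\eqref{eq:LRT}.\n\\begin{verbatim}\nimport numpy as np\n\ndef marginal_density(z, lam, P_tilde, sigma2, kmax=200):\n    # p(z; H_j) = E_Lambda[ N(z; sqrt(P_tilde) * Lambda, sigma2) ]\n    # with Lambda ~ Pois(lam); the expectation is truncated at kmax.\n    k = np.arange(kmax + 1)\n    log_pois = k * np.log(lam) - lam - np.cumsum(np.log(np.maximum(k, 1)))\n    gauss = np.exp(-0.5 * (z - np.sqrt(P_tilde) * k) ** 2 \/ sigma2)\n    gauss = gauss \/ np.sqrt(2.0 * np.pi * sigma2)\n    return np.sum(np.exp(log_pois) * gauss)\n\ndef lrt_statistic(z, lam0, lam1, P_tilde, sigma2):\n    # Product over clusters of p(z_m; H1) \/ p(z_m; H0), as in (eq:LRT)\n    num = [marginal_density(zm, l1, P_tilde, sigma2) for zm, l1 in zip(z, lam1)]\n    den = [marginal_density(zm, l0, P_tilde, sigma2) for zm, l0 in zip(z, lam0)]\n    return np.prod(np.array(num) \/ np.array(den))\n\\end{verbatim}\n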
Unfortunately, the corresponding log-likelihood ratio (LLR) is still not simpler:\n\\begin{eqnarray}\n\\Lambda_{\\text{{\\scriptsize LLR}}} &=& \\sum_{m=1}^M \\log \\left( \\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ \\exp\\left( -\\frac{s_m}{2} \\left( \\widetilde{z}_m - \\Lambda_m \\right)^2 \\right) ; \\ensuremath{\\mathcal{H}}_1 \\right] \\right) \\nonumber \\\\ \n&-& \\log \\left( \\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ \\exp\\left( -\\frac{s_m}{2} \\left( \\widetilde{z}_m - \\Lambda_m \\right)^2 \\right) ; \\ensuremath{\\mathcal{H}}_0 \\right] \\right) \\label{eq:LLR-Noisy}\n\\end{eqnarray}\nwhere $\\widetilde{z}_m = z_m \/ \\sqrt{\\widetilde{P}_m}$ and $s_m = \\widetilde{P}_m \/ \\widetilde{\\sigma}^2_m$ is the $m$th CH-FC channel SNR.\n\n\\subsection{Linear Decision Fusion Rule (LFR)}\nAlthough the fusion rule in \\eqref{eq:LLR-Noisy} is optimal, unfortunately it is impractical and does not lend itself to analysis. In order to derive a practical rule, we first need to find the distribution of $Z_m$. The MGF of $Z_m$ in \\eqref{eq:Z_m} is computed as:\n\\begin{eqnarray}\nM_{Z_m}(t) &=& M_{\\Lambda_m}\\left( \\sqrt{\\widetilde{P}_m} t \\right) M_{V_m}(t) \\nonumber \\\\\n &=& \\exp\\left( \\lambda_{j,m} \\left( e^{ t\\sqrt{\\widetilde{P}_m} } -1 \\right) + \\frac{\\widetilde{\\sigma}^2_m}{2} t^2 \\right)\n\\end{eqnarray}\nwhere $j=0,1$ for $\\ensuremath{\\mathcal{H}}_0$ and $\\ensuremath{\\mathcal{H}}_1$ respectively. Unfortunately, the above MGF is also intractable. However, using a first order approximation of the exponential function $(e^x \\approx 1+x)$ yields the following:\n\\begin{equation}\nM_{Z_m}(t) \\approx \\exp \\left( \\lambda_{j,m} t \\sqrt{\\widetilde{P}_m} + \\frac{\\widetilde{\\sigma}^2_m}{2} t^2 \\right)\n\\end{equation}\nwhich is the MGF of the Gaussian RV with $p(z_m) \\sim \\mathcal{N} \\left( \\lambda_{j,m} \\sqrt{\\widetilde{P}_m},\\, \\widetilde{\\sigma}^2_m \\right) $. Therefore, we approximate the LLR in \\eqref{eq:LLR-Noisy} as\n\\begin{eqnarray}\n\\Lambda_{\\text{{\\scriptsize LLR}}} &\\approx& \\sum_{m=1}^M \\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z_m - \\lambda_{1,m}\\sqrt{\\widetilde{P}_m} \\right)^2 \\nonumber \\\\\n&-& \\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z_m - \\lambda_{0,m}\\sqrt{\\widetilde{P}_m} \\right)^2.\n\\label{eq:App-LLR}\n\\end{eqnarray}\n\nExpanding the above and rearranging the terms gives \n\\begin{eqnarray}\n\\Lambda_{\\text{{\\scriptsize LLR}}} &\\approx& \\sum\\limits^M_{m=1} \\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z^2_m - 2\\lambda_{1,m}\\sqrt{\\widetilde{P}_m} z_m + \\widetilde{P}_m\\lambda_{1,m}^2 \\right) \\nonumber \\\\\n &-& \\sum\\limits^M_{m=1} \\frac{1}{2\\widetilde{\\sigma}^2_m} \\left( z^2_m - 2\\lambda_{0,m}\\sqrt{\\widetilde{P}_m} z_m + \\widetilde{P}_m\\lambda_{0,m}^2 \\right) \\nonumber \\\\\n &=& - \\sum\\limits^M_{m=1} \\sqrt{\\widetilde{P}_m}\\left( \\frac{\\lambda_{1,m} - \\lambda_{0,m}}{\\widetilde{\\sigma}^2_m} \\right) z_m \\nonumber \\\\\n &+& \\sum\\limits^M_{m=1} \\widetilde{P}_m\\left( \\lambda_{1,m}^2 + \\lambda_{0,m}^2 \\right). \n\\label{eq:LFR-long}\n\\end{eqnarray}\n\n\n\nWhen comparing $\\Lambda_{\\text{{\\scriptsize LLR}}}$ with the detection threshold ($\\Gamma$), the last term in \\eqref{eq:LFR-long} is absorbed by $\\Gamma$ since it is independent of $z_m$ . 
So the resulting linear fusion rule (LFR) becomes:\n\\begin{equation}\n\\Lambda_{\\text{{\\scriptsize LFR}}} = \\sum_{m=1}^M d_m z_m \\gtrless \\Gamma, \\label{eq:LFR}\n\\end{equation}\n\\label{eq:Noisy-FR}\nwhere the linear weighing coefficients are\n\\begin{equation}\nd_m = \\frac{\\sqrt{\\widetilde{P}_m}}{\\widetilde{\\sigma}^2_m}\\left( \\lambda_{1,m} - \\lambda_{0,m} \\right). \\label{eq:dm}\n\\end{equation}\n\nThe LFR is essentially a weighted sum of the data provided by each cluster. The impact of each cluster is reflected by its weight $d_m$, which is a measure of the detection performance and the channel quality of that cluster. \n\n\\begin{remark}\nThe LFR intuitively gives more weight to clusters with better detection, which is manifested in the mean difference term $\\left( \\lambda_{1,m} - \\lambda_{0,m} \\right)$. Also, more weight is given to clusters with good channel quality, i.e., large $\\sqrt{\\widetilde{P}_m}\/\\widetilde{\\sigma}^2_m$.\n\\end{remark}\n\nClearly, the LFR is computationally simple, in contrast to the LLR in \\eqref{eq:LLR-Noisy}. In fact, its computational complexity amounts to $O(M)$ only.\n\\subsection{Approximated Maximum Likelihood-Based LFR (LFR-aML)}\nAlthough the LFR is analytically simple, its implementation requires the $\\lambda_{1,m}$'s to be known by the FC. Hence, we extend the LFR in this section to include an estimation phase prior to the detection phase.\n\n\\subsubsection{Estimation phase}\nAt first glance, the $\\lambda_{1,m}$'s can be computed by initially estimating $\\boldsymbol{\\theta}$ through maximum likelihood estimation from the log-likelihood of the $Z_m$'s under $\\ensuremath{\\mathcal{H}}_1$ (the first term in \\eqref{eq:LLR-Noisy}). However, such an estimation problem is also nonlinear and nonconvex, leading to high computational complexity. So we propose estimating the $\\lambda_{1,m}$'s directly. Still, attempting to do that from the current log-likelihood expression will not provide satisfactory results, since each CH provides a single data point about $Z_m$; consequently, several instances of $Z_m$ are required. Thus we extend the LFR to the multiple-sample case, in which each SN makes $L$ independent decisions, $\\lbrace I_{i,l} \\rbrace_{l=0}^{L-1}$, that are relayed to the FC by the CHs. Further details of the implementation are provided in Section \\ref{sec:Practical-Consdr}.\n\nGiven the set of collected data $\\widetilde{z}_{l,m}$, where $\\widetilde{z}_{l,m}$ is the $l$th sample from the $m$th CH, the corresponding likelihood function is $\\prod_{m=1}^M \\prod_{l=0}^{L-1} p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right)$. It follows directly that the constrained ML problem using the related log-likelihood is \n\\begin{eqnarray}\n\\label{eq:ML}\n&& \\max_{\\lambda_{1,m}} \\, \\sum_{m=1}^M \\sum_{l=0}^{L-1} \\log \\left( p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) \\right)\\\\ \n&& \\text{s.t.} \\quad \\lambda_{1,m} \\geq \\lambda_0 \\quad \\forall m. \\nonumber\n\\end{eqnarray}\n\nEven though the ML problem above is separable in $\\lambda_{1,m}$, it is still complicated. 
Therefore, we propose to solve a suboptimal version of \\eqref{eq:ML} by using the lower bound provided by the following lemma:\n\n\\begin{lemma}\n\\label{lem:Bound}\nA lower bound on the log-likelihood function in \\eqref{eq:ML} is given by\n\\begin{equation}\n\\label{eq:ML_bound}\n\\log\\left( p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) \\right) \\geq \\widehat{\\Lambda}_{l,m}\\log\\lambda_{1,m} -\\lambda_{1,m} + C_1\n\\end{equation}\nwhere \n\\begin{equation}\n\\widehat{\\Lambda}_{l,m} = \\frac{\\sum\\limits_{k=0}^{\\infty} k p\\left( \\widetilde{z}_{l,m} | k \\right)}{\\sum\\limits_{k=0}^{\\infty} p\\left( \\widetilde{z}_{l,m} | k \\right)}\n\\label{eq:zm_hat}\n\\end{equation}\nis the mean estimate of the $l$th received sample from the $m$th CH, $p\\left( \\widetilde{z}_{l,m} | k \\right)$ is the conditional Gaussian distribution given $k$, and \n\\begin{equation}\nC_1 = \\log C_0 - \\sum_{k=0}^{\\infty} \\pi_k \\log k!\n\\end{equation}\nwhere $C_0 = \\sum_{k=0}^{\\infty} p\\left( \\widetilde{z}_{l,m} | k \\right)$ and $\\pi_k = p\\left( \\widetilde{z}_{l,m} | k \\right) \/C_0$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:App-Lemma}.\n\\end{proof}\n\nConsequently, we have the surrogate optimization problem:\n\\begin{eqnarray}\n\\label{eq:app-ML}\n&&\\max_{\\lambda_{1,m}} \\, \\sum_{m=1}^M \\sum_{l=0}^{L-1} \\left( \\widehat{\\Lambda}_{l,m} \\log\\lambda_{1,m} - \\lambda_{1,m} \\right) \\\\\n &&\\text{s.t.} \\quad \\lambda_{1,m} \\geq \\lambda_0 \\quad \\forall m. \\nonumber\n\\end{eqnarray}\n\nThe optimal solution is given in the following lemma.\n\n\\begin{lemma}\n\\label{lem:const-opt}\nThe optimal solution of the constrained optimization problem in \\eqref{eq:app-ML} is\n\n\\begin{eqnarray}\n\\widehat{\\lambda}_{1,m} &=& \\begin{cases} \n\t\t\t\t \\widehat{\\Lambda}_{m}, \\quad \\eta_m \\leq 0 \\\\\n\t\t\t\t \\lambda_0, \\quad \\eta_m > 0\n\t\t\t\\end{cases} \\label{eq:lambda_m_est} \\\\\t\n\\eta_m &=& 1 - \\widehat{\\Lambda}_{m}\/\\lambda_0\n\\label{eq:eta}\t\n\\end{eqnarray}\nwhere \n\\begin{equation}\n\\label{eq:Lambda-ave}\n\\widehat{\\Lambda}_{m} = \\frac{1}{L}\\sum_{l=0}^{L-1} \\widehat{\\Lambda}_{l,m}\n\\end{equation}\nis the average of the $m$th CH's estimates $\\widehat{\\Lambda}_{l,m}$ given in \\eqref{eq:zm_hat}.\n\n\\end{lemma}\n\n\\begin{proof}\nSee Appendix \\ref{app:App-Lemma-const-opt}.\n\\end{proof}\n\nThe estimator $\\widehat{\\lambda}_{1,m}$ in \\eqref{eq:lambda_m_est} is intuitive in the sense that it is the average of all sample estimates when a target is sensed and is simply $\\widehat{\\lambda}_{1,m} = \\lambda_0$ otherwise.\n\\subsubsection{Detection phase}\nGiven the estimates $\\widehat{\\lambda}_{1,m}$'s, we then propose the LFR-aML detector as\n\\begin{equation}\n\\Lambda_{\\text{{\\scriptsize aML}}} = \\sum_{m=1}^M \\widehat{d}_m \\widehat{z}_{m} \\gtrless \\Gamma\n\\label{eq:aML}\n\\end{equation}\nwhere the weighting coefficients now are\n\\begin{equation}\n\\widehat{d}_m = \\frac{\\sqrt{\\widetilde{P}_m}}{\\widetilde{\\sigma}^2_m}\\left( \\widehat{\\lambda}_{1,m} - \\lambda_{0} \\right) \\label{eq:d_m_hat}\n\\end{equation}\nand $\\widehat{z}_m$ is the averaged data for each CH, defined as\n\\begin{equation}\n\\label{eq:z_m_ave}\n\\widehat{z}_m = \\frac{1}{L} \\sum_{l=0}^{L-1} \\widetilde{z}_{l,m}.\n\\end{equation}\n\nThe computational complexity of the LFR-aML, on the other hand, is $O\\left( L M \\right)$, which is larger than that of the LFR. 
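\n\nTo make the estimation and detection phases concrete, a minimal numerical sketch of the LFR-aML is given below. It assumes, consistently with \\eqref{eq:LLR-Noisy}, that $p\\left( \\widetilde{z}_{l,m} | k \\right)$ is Gaussian with mean $k$ and variance $1\/s_m$, and it truncates the infinite sums in \\eqref{eq:zm_hat} at a finite \\texttt{K\\_MAX}; the helper names and the truncation level are illustrative choices rather than part of the proposed scheme.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\nK_MAX = 200  # truncation of the infinite sums in the mean estimate\n\ndef lambda1_hat(z_tilde, s_m, lambda_0):\n    # z_tilde: length-L array of normalized samples of one CH; s_m: CH-FC SNR\n    k = np.arange(K_MAX + 1)\n    # p(z_tilde_l | k): Gaussian with mean k and variance 1/s_m\n    w = norm.pdf(z_tilde[:, None], loc=k[None, :], scale=np.sqrt(1.0 / s_m))\n    Lambda_lm = (w @ k) / w.sum(axis=1)   # per-sample mean estimates\n    Lambda_m = Lambda_lm.mean()           # average over the L samples\n    eta_m = 1.0 - Lambda_m / lambda_0\n    return Lambda_m if eta_m <= 0 else lambda_0\n\ndef lfr_aml_statistic(z_tilde, P_tilde, sigma2_tilde, lambda_0):\n    # z_tilde: (M, L) normalized CH data; returns the LFR-aML test statistic\n    s = P_tilde / sigma2_tilde\n    lam1 = np.array([lambda1_hat(z_tilde[m], s[m], lambda_0)\n                     for m in range(z_tilde.shape[0])])\n    d_hat = np.sqrt(P_tilde) / sigma2_tilde * (lam1 - lambda_0)\n    z_hat = z_tilde.mean(axis=1)          # averaged data per CH\n    return np.sum(d_hat * z_hat)          # compare against the threshold\n\\end{verbatim}\nThe FC then declares $\\ensuremath{\\mathcal{H}}_1$ whenever the returned statistic exceeds the threshold $\\Gamma$.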
\n\\subsection{Linear Fusion Rule Performance}\nDespite being a practical fusion rule, the LFR's detection and false alarm probabilities do not have closed forms that lend themselves to analysis. Thus we resort to finding upper bounds on those tail probabilities in the following theorem.\n\\begin{theorem}\nAn upper bound on the tail probability of the LFR is \n\\begin{equation}\n\\ensuremath{\\mathbb{P}} \\left( \\Lambda_{\\text{{\\scriptsize LFR}}} > z ; \\ensuremath{\\mathcal{H}}_j \\right) \\leq \\exp \\left( - \\frac{\\left( z - \\overline{\\lambda}_{j,d} \\right)^2}{2\\widetilde{\\sigma}^2_{j,d}} \\right)\n\\label{eq:Thrm-Bnd}\n\\end{equation}\n\\label{thrm:Noisy-Mod-Chrnf-Bound}\nfor $j=0,1$ and $z \\geq \\overline{\\lambda}_{j,d}$, where\n\\begin{eqnarray}\n\\overline{\\lambda}_{j,d} &=& \\sum\\limits^M_{m=1} \\lambda_{j,m} d_m \\sqrt{\\widetilde{P}_m} \\label{eq:lambda_bar} \\\\\n\\widetilde{\\sigma}^2_{j,d} &=& \\sum\\limits^M_{m=1} d^2_m \\left( \\lambda_{j,m} \\widetilde{P}_m + \\widetilde{\\sigma}^2_m \\right)\\label{eq:sigma2_bar}.\n\\end{eqnarray}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{app:App-Noisy-Mod-Chrnf-Bound}.\n\\end{proof}\n\nClearly, the upper bound given above is a Gaussian-tail bound. It is interesting to note that $\\overline{\\lambda}_{j,d}$ in \\eqref{eq:lambda_bar} is exactly the mean of the LFR statistic $\\Lambda_{\\text{{\\scriptsize LFR}}}$ in \\eqref{eq:LFR} under $\\ensuremath{\\mathcal{H}}_j$. Thus it is expected that increasing the separation between the means under the two hypotheses will significantly improve the detection probability.\n\\section{Power Allocation}\n\\label{sec:Power-Allocation}\nWSNs are notorious for being power constrained. Hence, the power should be used wisely, especially in critical applications such as intruder detection. It is well established that the main source of power usage in a WSN is wireless communication \\cite{Raghunathan2002}. Thus it is desired to minimize the transmission power used by SNs and CHs while jointly taking into consideration the minimum required detection performance. \n\nFortunately, the linear LFR structure facilitates the use of a power allocation strategy, in contrast to the LLR. The mean-difference (MD) \\cite{Liu2005b} is adopted as the detection performance criterion due to its desirable form. The MD is defined as \n\\begin{eqnarray}\n\\text{MD} &=& \\overline{\\lambda}_{1,d} - \\overline{\\lambda}_{0,d} \\nonumber \\\\\n&=& \\sum\\limits^M_{m=1} d_m \\sqrt{\\widetilde{P}_m} \\left( \\lambda_{1,m} - \\lambda_0 \\right) \\nonumber \\\\\n &=& \\sum\\limits^M_{m=1} \\frac{P_0 P_m \\left( \\lambda_{1,m} - \\lambda_0 \\right)^2}{\\sigma^2_{c,m} P_m + \\sigma^2_{f,m}}.\n\\end{eqnarray}\n\nIn general the MD and the detection performance can be significantly improved by increasing the SN deployment density, $\\lambda$, without the need to increase the transmission power. So, attention should be directed at reducing the transmission power given a specific detection performance constraint\\footnote{Although $P_0$ is responsible for a large part of the consumed power, the $P_m$'s play a more critical role in the WSN, since if any CH runs out of power a significant part of the network is rendered useless.}. 
Thus we wish to solve the following optimization problem:\n\\begin{eqnarray}\n\\label{eq:Pwr-Allc}\n\\min_{\\mathbf{P}} && \\Vert \\mathbf{P} \\Vert_1 \\\\\n\\text{s.t.} \\quad \\mathbf{P} & \\succcurlyeq & \\mathbf{0} \\nonumber \\\\\n \\quad \\text{MD} &=& P_0 \\sum\\limits^M_{m=1} \\frac{ P_m \\left( \\lambda_{1,m} - \\lambda_0 \\right)^2}{\\sigma^2_{c,m} P_m + \\sigma^2_{f,m}} \\geq D_0 \\nonumber\n\\end{eqnarray}\nwhere $\\mathbf{P} = \\left(P_1, P_2, \\cdots, P_M \\right)^T$ is the vector of CH transmission powers and $D_0$ is the minimum mean difference as specified by the network. Since $\\mathbf{P}$ is non-negative, the $l_1$-norm equals the total transmission power used by the CHs. \n\nNote, however, that the above power allocation problem \\eqref{eq:Pwr-Allc} is very similar to the water-filling problem \\cite{Boyd2004}, but here the power sum is minimized in contrast to the conventional water-filling formulation. The above problem is convex, since the objective function is convex and each term of the MD constraint is concave in $P_m$, so that the constraint defines a convex feasible set \\cite{Boyd2004}. Moreover, since each element of $\\mathbf{P}$ is non-negative, $\\Vert \\mathbf{P} \\Vert_1$ can be replaced by the (differentiable) sum of the elements of $\\mathbf{P}$, which allows a closed-form solution to be attained. \n\n\\begin{theorem}\n\\label{thrm:Opt-pwr-alloc}\nThe optimal power allocation based on the formulation in \\eqref{eq:Pwr-Allc} is given as\n\n\\begin{equation}\nP_m = \\left( \\dfrac{ \\lambda_{d,m} \\sigma_{f,m} \\sqrt{\\nu}}{ \\sigma^2_{c,m}} - \\dfrac{\\sigma^2_{f,m}}{\\sigma^2_{c,m}} \\right)^{+} \\label{eq:Pm}\n\\end{equation}\nwhere $(x)^+ = \\max(0,x)$ and\n\\begin{equation}\n\\sqrt{\\nu} = \\dfrac{\\mathlarger{\\sum\\limits^M_{m=1}} \\dfrac{\\lambda_{d,m} \\sigma_{f,m}}{\\sigma^2_{c,m}}}{ \\mathlarger{\\sum\\limits^M_{m=1}} \\dfrac{\\lambda^2_{d,m}}{\\sigma^2_{c,m}} - D_1}\n\\label{eq:nu}\n\\end{equation}\nwith $\\lambda_{d,m} = \\lambda_{1,m} - \\lambda_0$ and $D_1 = D_0 \/ P_0$.\n\n\\end{theorem}\n\n\\begin{proof}\nSee Appendix \\ref{app:Opt-pwr-alloc}.\n\\end{proof}\n \n\nIntuitively, the allocated power is proportional to the cluster's detection performance manifested in the mean difference $\\lambda_{d,m}$ and is inversely proportional to the SN-CH channel noise, $\\sigma^2_{c,m}$. Of course, when using the power allocation algorithm in practice, $\\widehat{\\lambda}_{1,m}$ is used to compute $\\lambda_{d,m}$.\n\nIn this work, we will denote the LFR with the power allocation strategy as LFR-PA, whereas the LFR using equal power allocation is simply denoted by LFR. Similarly, the fusion rule using the estimates in \\eqref{eq:eta} and \\eqref{eq:lambda_m_est} is called LFR-aML.\n\n\\section{Practical Considerations}\n\\label{sec:Practical-Consdr}\nIn this section we discuss how the LFR-aML algorithm is implemented in practice. \n\nThe LFR-aML is preceded by an initialization stage where the communication parameters are estimated. In order to ensure a constant received power at the CHs, thus validating the model in \\eqref{eq:Lambda_m}, a simple power control scheme is implemented. The CHs send pilot signals to the SNs in their clusters, which are used to adapt the SNs' transmission power accordingly. Moreover, the CHs compute the SN-CH channel noise variances, $\\sigma^2_{c,m}$, and likewise the FC computes the CH-FC channel noise variances, $\\sigma^2_{f,m}$. 
Finally, $\\lambda_0$ can be estimated off-line.\n\nThen the LFR-aML is initiated where it performs estimation of the clusters' average number of detecting SNs, global distributed detection and optimal CH power allocation. Algorithm \\ref{alg:LFR_aML} illustrates the complete LFR-aML detection procedure.\n\n\n\\begin{algorithm}[t]\n\\caption{: LFR-aML}\t\n\\label{alg:LFR_aML}\n\t\\textbf{Initialization:}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE CHs send pilot signals to SNs.\n\t\t\\STATE SNs use pilot signal to adjust $P_0$.\t\t\n\t\t\\STATE CHs estimate the $\\lbrace\\sigma^2_{c,m}\\rbrace_{m=1}^M$.\n\t\t\\STATE FC estimate the $\\lbrace\\sigma^2_{f,m}\\rbrace_{m=1}^M$.\n\t\t\\STATE FC estimates $\\lambda_{0}$ via \\eqref{eq:lambda_0}, where $\\lambda_{0,m} = \\lambda_0\\, \\forall m$.\n\t\\end{algorithmic}\n\t\\textbf{LFR-aML:}\t\n\t\\begin{algorithmic}[1]\n\t\\STATE $P_m = P_{tot}\/M$.\n\t\t\\LOOP\n\t\t\\STATEx \\textbf{Estimation:}\n\t\t\\STATE SNs compute $\\lbrace I_{i,l} \\rbrace_{l=0}^{L-1}$ via \\eqref{eq:Ii}.\n\t\t\\STATE SNs send $\\lbrace I_{i,l} \\rbrace_{l=0}^{L-1}$ to the CHs.\n\t\t\\STATE CHs compute $\\lbrace \\widehat{\\lambda}_{1,m} \\rbrace_{m=1}^M$ via \\eqref{eq:lambda_m_est}, \\eqref{eq:eta} and \\eqref{eq:Lambda-ave}.\n\t\t\\STATE CHs compute $\\lbrace\\widehat{z}_m\\rbrace_{m=1}^M$ via \\eqref{eq:z_m_ave}.\n\t\t\\STATE CHs use $P_m$ to send $\\lbrace \\widehat{\\lambda}_{1,m},\\widehat{z}_m\\rbrace_{m=1}^M$ to FC.\t\t\n\t\t\\STATEx \\textbf{Detection:}\n\t\t\\STATE FC computes $\\Lambda_{\\text{{\\scriptsize aML}}}$ via \\eqref{eq:aML}, \\eqref{eq:d_m_hat} and \\eqref{eq:z_m_ave}.\t\t\n\t\t\\STATE FC tests condition $\\left(\\Lambda_{\\text{{\\scriptsize aML}}} \\gtrless \\Gamma\\right)$ for global detection.\n\t\t\\STATEx \\textbf{Power Allocation:}\n\t\t\\STATE FC computes $P_m$'s via \\eqref{eq:Pm} and \\eqref{eq:nu}.\n\t\t\\STATE FC sends $P_m$'s to CHs.\t\n\t\t\\ENDLOOP\n\t\\end{algorithmic}\n\t\n\\end{algorithm}\n\\begin{figure*}[!ht]\n\\centering\n\\subfloat[$\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 5,\\text{dB}, \\, \\forall m.$ \\label{fig:ROC-SNR-Eq-hi}]{ \\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Both_Hi_MC_1e5}} \\hfill\n\\subfloat[$\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = -5\\,\\text{dB}, \\, \\forall m.$\\label{fig:ROC-SNR-Eq-lo}]{ \\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Both_Low_MC_1e5}} \\\\\n\\subfloat[$\\text{SNR}_{f,m} = 0\\,\\text{dB} < \\text{SNR}_{c,m} = 5\\,\\text{dB}, \\, \\forall m.$\\label{fig:ROC-SNR-Cm-gt-Fm}]{ \\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_CH_gt_FC_MC_1e5}}\\hfill\n\\subfloat[$\\text{SNR}_{f,m} = 5\\,\\text{dB} > \\text{SNR}_{c,m} = 0\\,\\text{dB}, \\, \\forall m.$\\label{fig:ROC-SNR-Fm-gt-Cm}]{ \\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_CH_ls_FC_MC_1e5}} \n\\caption{Effect of SN-CH and CH-FC channel SNRs on the detection performance. 
The SN transmission power is, $P_0 = 1$ and the CH transmission power is $P_m = 1,\\, \\forall m$.}\n\\label{fig:ROC-SNR}\n\\end{figure*}\n\\begin{figure*}[t]\n\\centering\n\\subfloat[Cluster containing target has $\\text{SNR}_{c,m} = 5\\,\\text{dB}$, rest of cluster have $\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 2,\\text{dB}.$\\label{fig:ROC-SNR-Target-hi-SNR_c}]{\\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Hi_SNRcm_MC_1e5}}\\hfill\n\\subfloat[Cluster containing target has $\\text{SNR}_{c,m} = -5\\,\\text{dB}$, rest of cluster have $\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 2,\\text{dB}.$\\label{fig:ROC-SNR-Target-lo-SNR_c}]{\\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Low_SNRcm_MC_1e5}} \\\\\n\\subfloat[Cluster containing target has $\\text{SNR}_{f,m} = 5\\,\\text{dB}$, rest of cluster have $\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 2,\\text{dB}.$\\label{fig:ROC-SNR-Target-hi-SNR_f}]{\\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Hi_SNRfm_MC_1e5}}\\hfill\n\\subfloat[Cluster containing target has $\\text{SNR}_{f,m} = -5\\,\\text{dB}$, rest of cluster have $\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 2,\\text{dB}.$\\label{fig:ROC-SNR-Target-lo-SNR_f}]{\\includegraphics[width=.45\\textwidth]{ROC_Sigma2_PFAfxd_Low_SNRfm_MC_1e5}}\n\\caption{Effect of SN-CH and CH-FC channel SNRs on the detection performance. The SN transmission power is $P_0 = 1$ and the CH transmission power is $P_m = 1,\\, \\forall m$.}\n\\label{fig:ROC-SNR-Target}\n\\end{figure*}\n\\begin{figure*}[t]\n\\centering\n \\subfloat[$P_{FA}$ Bound.\\label{fig:PFA-Bnd}]{ \\includegraphics[width=.45\\textwidth]{PFA_Bound_Basic_MC_1e5}}\n \\hfill\n \\subfloat[$P_D$ Bound.\\label{fig:PD-Bnd}]{ \\includegraphics[width=.45\\textwidth]{PD_Bound_Basic_MC_1e5}}\n\n\\caption{Tail Bounds for false alarm probability and detection probability. The SNs transmission power is $P_0 = 1$ and the CHs transmission power is $P_m = 1,\\, \\forall m$.}\n\\label{fig:Tail-Prob-Bnd}\n\\end{figure*}\n\\begin{figure*}[t]\n\\centering\n \\subfloat[ROC.\\label{fig:ROC_aML}]{ \\includegraphics[width=.45\\textwidth]{ROC_MULTITX_MC_1e5}}\n \\hfill\n \\subfloat[Power used $P_m$ by CHs.\\label{fig:PWR_aML}]{ \\includegraphics[width=.45\\textwidth]{PWR_MULTITX_MC_1e5}}\n\\caption{The effect of using the aML estimator on the ROC and the power used at $P_0 = 2$, data samples of $L=5$, $\\text{SNR}_{f,m} = 2 \\text{ and } \\text{SNR}_{c,m} = 2,\\text{dB}, \\, \\forall m.$}\n\\label{fig:ROC_PWR}\n\\end{figure*}\n\\begin{figure*}[t]\n\\centering\n \\subfloat[$P_D$ at $P_{FA} = 0.1$.\\label{fig:PD_K1_FC_CH_2}]{ \\includegraphics[width=.45\\textwidth]{PD_K1_FC_ALL_PFAfxd_5_CH_2_MC_1e5}}\n \\hfill\n \\subfloat[Saved power percentage compared to the LFR rule.\\label{fig:Psav_K1_FC_CH_2}]{ \\includegraphics[width=.45\\textwidth]{Psav_K1_FC_ALL_PFAfxd_5_CH_2_MC_1e5}}\n\\caption{Detection performance and the power saving percentage achieved using the LFR-PA rule at $P_0 = 5$ and $P_m = 2$ and the following conditions: (1) $\\text{SNR}_{f,m} = \\text{SNR}_{c,m} = 2\\text{dB}$, (2) $\\text{SNR}_{f,m} = 2$ and $\\text{SNR}_{c,m} = 5\\text{dB}$ and (3) $\\text{SNR}_{f,m} = 5$ and $\\text{SNR}_{c,m} = 2\\text{dB}$.}\n\\label{fig:PD_Psav_K1_FC_CH_2}\n\\end{figure*}\n\n\\section{Simulation Results and Discussion}\n\\label{sec:Simulation-Result}\nWe simulate a WSN deployed in a $50 \\times 50$ ROI. The intruder's power is $P_0 = 1$ located arbitrarily at $(4,5)$. The sensing SNR is set to 0 dB. 
The SNs have a reference distance of $d_0 = 1$ unit with a local probability of false alarm of $10^{-2}$. The SNs adopt the matched filter as their local detector. The WSN has an SN deployment density of $\\lambda=2$ per unit area and is divided into 9 clusters. The system is simulated for $10^5$ Monte Carlo iterations. Note, however, that although the simulation setting here is arbitrary, similar results hold for different scenarios.\n\n\nFig.\\,\\ref{fig:ROC-SNR} shows the effect of the SNRs of the SN-CH and the CH-FC channels on the detection performance through the ROC graphs. The OCR and CR are included as upper and lower bounds for the proposed algorithms. The case of having equal high SNR for all SN-CH and CH-FC channels is shown in Fig.\\,\\subref*{fig:ROC-SNR-Eq-hi}, in which the LLR and the LFR (employing equal power allocation, $P_m=1\\, \\forall m$) achieve the optimal performance provided by the OCR. By contrast, Fig.\\,\\subref*{fig:ROC-SNR-Eq-lo} illustrates the case of having equal low SNR for all the clusters' channels. Here, the LLR rule performance is as good as that of the CR, whereas the LFR rule performs worse than the latter. This behaviour is explained by the direct dependence of the weighting coefficients in \\eqref{eq:dm} on the SNR. Fig.\\,\\subref*{fig:ROC-SNR-Cm-gt-Fm} shows the case of having better SN-CH channels compared to the CH-FC channels, while Fig.\\,\\subref*{fig:ROC-SNR-Fm-gt-Cm} shows the opposite case. The LFR performance virtually does not change whereas the LLR slightly degrades in the latter case. The LFR behaviour is explained by noting that $\\widetilde{\\sigma}^2_m$ is the weighted sum of $\\sigma^2_{f,m}$ and $\\sigma^2_{c,m}$. The LLR sensitivity might be attributed to its nonlinear form.\n\nNext, Fig.\\,\\ref{fig:ROC-SNR-Target} shows the effect of having a good channel quality specifically in the cluster containing the target. Interestingly, the results show that it is sufficient to have a good channel quality (either SN-CH or CH-FC) in the cluster containing the target to achieve good detection performance, as is evident in Fig.\\,\\subref*{fig:ROC-SNR-Target-hi-SNR_c} and Fig.\\,\\subref*{fig:ROC-SNR-Target-hi-SNR_f}. In a similar manner, a bad channel will significantly decrease the performance, as can be seen in Fig.\\,\\subref*{fig:ROC-SNR-Target-lo-SNR_c} and Fig.\\,\\subref*{fig:ROC-SNR-Target-lo-SNR_f}. Fig.\\,\\ref{fig:Tail-Prob-Bnd} depicts the upper bounds for the $P_{FA}$ and the $P_D$ of the LFR presented in \\eqref{eq:Thrm-Bnd}. \n\nThen the performance of the LFR-aML is compared with the LLR, LFR and the CR in Figure \\ref{fig:ROC_PWR}. To make the comparison fair, all the rules use the received data average value in \\eqref{eq:z_m_ave}. In Figure \\ref{fig:ROC_aML}, the LFR-aML shows satisfactory performance, despite some loss due to the inaccurate $\\lambda_{1,m}$ estimation. On the other hand, Figure \\ref{fig:PWR_aML} shows the power allocation via \\eqref{eq:Pm} over the CHs. Notice that with exact $\\lambda_{1,m}$ values, the power is concentrated in the cluster containing the target. Other clusters may receive no power allocation at all. However, using estimated values leads to spreading the power across the clusters, while giving more power to the target's cluster. \n\nFigure \\ref{fig:PD_K1_FC_CH_2} shows $P_D$ using the LFR-PA plotted with respect to $D_1$, which is the detection performance constraint. The $P_D$ values are fitted with a third-degree polynomial to better emphasize the trend. 
In Figure \\ref{fig:Psav_K1_FC_CH_2}, the power saving using the LFR-PA rule is plotted against $D_1$ for different WSN conditions. The power saving is defined as \n\\begin{equation}\nP_{\\text{sav}} = \\frac{1}{P_{tot}}\\left(P_{tot} - \\sum\\limits^M_{m=1} P_m\\right)\\times 100\\%\n\\end{equation}\nwhere $P_{tot}$ is the total power used by the LFR rule. The OCR, LFR, and CR are plotted for the sake of comparison. It is clear that as $D_1$ increases, detection performance improves (in the range of 8\\%) at the expense of less saved energy. However, the power saving is significantly greater in the case of a better CH-FC channel, as shown. Indeed, at $D_1=5.5$ the power saving is 84\\% for the case of $\\text{SNR}_{f,m}=5$\\,dB and $\\text{SNR}_{c,m}=2$\\,dB, whereas it is 67\\% for both cases $\\text{SNR}_{f,m}=\\text{SNR}_{c,m}=5$\\,dB and $\\text{SNR}_{f,m}=\\text{SNR}_{c,m}=2$\\,dB. Furthermore, the performance of the equal power version is attained with 64\\% power saving. This follows from the dependence of $P_m$ on $\\sigma^2_{f,m}$ in \\eqref{eq:Pm}.\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\nIn this paper we have discussed fusion rules for DD in clustered WSNs with communication channels experiencing AWGN. We derived the optimal log-likelihood ratio fusion rule, which turned out to be analytically intractable. A suboptimal linear fusion rule was subsequently derived, in which the clusters' data are linearly weighted. This rule, intuitively, gives more weight to clusters with better channels and better data quality. However, the LFR requires knowledge of the mean number of detecting SNs. Thus we proposed the LFR-aML, which employs an approximate constrained ML estimator to find the required parameters. \n\nIn addition, Gaussian-tail upper bounds for the LFR's detection and false alarm probabilities were derived using approximated moment generating functions. Moreover, a power allocation strategy was proposed that minimizes the total power used by the WSN. The resulting allocated power is proportional to the expected number of detecting SNs in the cluster.\n\nExtensive simulations show that in order to achieve near optimal detection performance, it is sufficient to have a good channel quality for the cluster(s) containing the detecting SNs and moderate quality in the rest of the clusters. However, when using the LFR-aML, there is a performance gap with the ideal LFR due to the inherent estimation errors. It has been shown that the proposed power allocation strategy can achieve 84\\% power saving with only 5\\% performance reduction compared to the equal power scheme. Furthermore, the same detection performance as the equal power allocation version can be attained with a 14\\% power saving.\n\nIn summary, a fusion rule has been derived for the first time for clustered WSN distributed detection with imperfect channels, achieving significant power savings with only a small reduction in detection performance. Several directions for future work can be pursued, including a non-homogeneous PPP model and other types of channel imperfections such as path-loss, fading and channel failure. 
Furthermore, fusion rules for distributed detection can be investigated in which a decode-and-forward scheme (quantization) is employed at the CH.\n\\appendices\n\\section{Proof of Lemma \\ref{lem:Bound}}\n\\label{app:App-Lemma}\nRecall that the likelihood function of the $l$th received sample from the $m$th cluster under $\\ensuremath{\\mathcal{H}}_1$ is actually the expectation of the conditional distribution, $ p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) = \\ensuremath{\\mathbb{E}}_{\\Lambda_m} \\left[ p\\left( \\widetilde{z}_{l,m} \\vert \\Lambda_m \\right) \\right]$. Since $\\Lambda_m$ is a Poisson RV, the previous expectation becomes\n\\begin{equation}\n p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) = \\sum_{k=0}^{\\infty} p\\left( \\widetilde{z}_{l,m} | k \\right) p\\left( k ; \\lambda_{1,m} \\right)\n\\label{eq:app-lr}\n\\end{equation}\nwhere $\\Lambda_m$ is replaced by $k$ in order to simplify the notation. Note that \\eqref{eq:app-lr} is a convex sum in terms of $p\\left( \\widetilde{z}_{l,m} | k \\right)$ since $p\\left( k ; \\lambda_{1,m} \\right)$ is a proper (Poisson) distribution\\footnote{For any probability mass function all the probabilities must be at most one and must sum to unity.}. A lower bound on $\\log \\left( p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) \\right)$ can be attained via Jensen's inequality; however, it is not useful in its current form. Instead, we proceed by treating the sum in \\eqref{eq:app-lr} as a convex combination of $p\\left( k ; \\lambda_{1,m} \\right)$ where the $p\\left( \\widetilde{z}_{l,m} |k \\right)$ are the weighting coefficients. But first, we need to normalize those coefficients as follows:\n\n\n\\begin{equation}\np\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) = C_0 \\sum_{k=0}^{\\infty} \\pi_k\n p\\left( k ; \\lambda_{1,m} \\right)\n\\end{equation}\nwhere $C_0 = \\sum_{k=0}^{\\infty} p\\left( \\widetilde{z}_{l,m} | k \\right)$ and $\\pi_k = p\\left( \\widetilde{z}_{l,m} | k \\right) \/C_0$ is a discrete distribution. Now we are ready to apply Jensen's inequality to the log of \\eqref{eq:app-lr}, yielding\n\\begin{eqnarray}\n\\log \\left( p\\left( \\widetilde{z}_{l,m} ; \\lambda_{1,m} \\right) \\right) &\\geq& \\sum_{k=0}^{\\infty} \\pi_k \\log p\\left( k ; \\lambda_{1,m} \\right) + \\log C_0 \\nonumber \\\\\n&=& \\sum_{k=0}^{\\infty} \\left( k \\pi_k \\log \\lambda_{1,m} - \\lambda_{1,m} \\pi_k \\right) + C_1 \\nonumber \\\\\n&=& \\widehat{\\Lambda}_{l,m} \\log \\lambda_{1,m} - \\lambda_{1,m} + C_1\n\\end{eqnarray}\nwhere the second line above follows from the definition of the Poisson distribution and $C_1$ is a constant collecting the terms independent of $\\lambda_{1,m}$. The last line results from the definition of $\\widehat{\\Lambda}_{l,m}$ in \\eqref{eq:zm_hat} and the fact that the $\\pi_k$'s sum up to unity.\n\n\\section{Proof of Lemma \\ref{lem:const-opt}}\n\\label{app:App-Lemma-const-opt}\n\nFirst, problem \\eqref{eq:app-ML} is put in the following canonical form:\n\n\\begin{eqnarray}\n\\label{eq:min-app-ML}\n&&\\min_{\\lambda_{1,m}} \\, \\sum_{m=1}^M \\left(\\lambda_{1,m} - \\widehat{\\Lambda}_{m} \\log\\lambda_{1,m} \\right) \\\\\n &&\\text{s.t.} \\quad \\lambda_0 - \\lambda_{1,m} \\leq 0 \\quad \\forall m. \\nonumber\n\\end{eqnarray}\nwhere problem \\eqref{eq:min-app-ML} has been scaled by $1\/L$, and $\\widehat{\\Lambda}_m$ is defined in \\eqref{eq:Lambda-ave}. 
The corresponding Lagrangian is\n\\begin{equation}\n\\mathcal{L} = \\sum_{m=1}^M \\left( \\lambda_{1,m} - \\widehat{\\Lambda}_{m}\\log\\lambda_{1,m} + \\eta_m \\left( \\lambda_0 - \\lambda_{1,m} \\right) \\right)\n\\end{equation}\nwhere $\\eta_m \\geq 0 \\; \\forall m$ are the Lagrange multipliers. The corresponding KKT conditions \\cite{Boyd2004} are\n\\begin{equation}\n\\frac{\\partial \\mathcal{L} }{\\partial \\lambda_{1,m}} = 1 - \\frac{\\widehat{\\Lambda}_{m}}{\\lambda_{1,m}} - \\eta_m = 0 \\label{eq:derv_L}\n\\end{equation}\nfor all $m$ and the complementary slackness conditions are\n\\begin{equation}\n\\eta_m \\left( \\lambda_{1,m} - \\lambda_0 \\right) = 0 \n\\label{eq:comp-slack}\n\\end{equation}\nwith $\\eta_m \\geq 0$. The complementary slackness condition in \\eqref{eq:comp-slack} dictates that $\\lambda_{1,m} = \\lambda_0$ when $\\eta_m > 0$. Solving \\eqref{eq:derv_L} for $\\eta_m$ with $\\lambda_{1,m} = \\lambda_0$ yields $\\eta_m = 1 - \\widehat{\\Lambda}_m\/\\lambda_0$, which is positive only when $\\widehat{\\Lambda}_m < \\lambda_0$. Alternatively, when $\\eta_m = 0$, \\eqref{eq:derv_L} implies $\\lambda_{1,m} = \\widehat{\\Lambda}_m$.\n\n\n\\section{Proof of Theorem \\ref{thrm:Noisy-Mod-Chrnf-Bound}}\n\\label{app:App-Noisy-Mod-Chrnf-Bound}\n\nThe MGF of $\\Lambda_{\\text{{\\scriptsize LFR}}}$ in \\eqref{eq:LFR} under either hypothesis is given, by conditional independence, as \n\\begin{eqnarray}\nM_{\\text{LFR}}(t) &=& \\prod_{m=1}^M M_{Z_m} \\left(t d_m ; \\ensuremath{\\mathcal{H}}_j \\right) \\nonumber \\\\\n &=& \\prod_{m=1}^M M_{\\Lambda_m} \\left(t d_m \\sqrt{\\widetilde{P}_m} ; \\ensuremath{\\mathcal{H}}_j \\right) M_{V_m} (t d_m).\n\\end{eqnarray}\n\nFrom the MGFs of the Gaussian and Poisson distributions we have\n\\begin{eqnarray}\nM_{\\text{LFR}}(t) &=& \\exp \\left( \\sum_{m=1}^M \\left( \\lambda_{j,m} \\left( e^{t d_m \\sqrt{\\widetilde{P}_m}} - 1\\right) + \\frac{d^2_m \\widetilde{\\sigma}^2_m t^2}{2} \\right) \\right). \\nonumber\n\\end{eqnarray}\n\nUsing the second-order Taylor series of the exponential function, the MGF becomes\n{\\small\n\\begin{eqnarray}\nM_{\\text{LFR}}(t) &\\approx& \\exp \\left( \\sum\\limits^M_{m=1} \\left( t \\lambda_{j,m} d_m \\sqrt{\\widetilde{P}_m} + \\frac{t^2}{2} d^2_m \\left( \\lambda_{j,m} \\widetilde{P}_m + \\widetilde{\\sigma}^2_m \\right) \\right) \\right) \\nonumber \\\\\n &=& \\exp \\left( \\overline{\\lambda}_{j,d} t + \\frac{\\widetilde{\\sigma}^2_{j,d}}{2} t^2 \\right),\n\\end{eqnarray}\n}\nwhere $\\overline{\\lambda}_{j,d}$ and $\\widetilde{\\sigma}^2_{j,d}$ are defined in \\eqref{eq:lambda_bar} and \\eqref{eq:sigma2_bar} respectively. The Chernoff bound for $\\Lambda_{\\text{{\\scriptsize LFR}}}$ is\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{P}} \\left( \\Lambda_{\\text{{\\scriptsize LFR}}} > z ; \\ensuremath{\\mathcal{H}}_j \\right) &<& \\inf_{t>0} \\exp\\left(-zt\\right) M_{\\text{LFR}}(t) \\nonumber \\\\\n&=& \\inf_{t>0} \\exp \\left( -zt + \\overline{\\lambda}_{j,d} t + \\frac{\\widetilde{\\sigma}^2_{j,d}}{2} t^2\\right).\n\\end{eqnarray}\n\nThe infimum is found by taking the derivative of the exponential argument, equating it to zero and solving for $t$. 
This gives the optimal $t = \\left( z - \\overline{\\lambda}_{j,d} \\right)\/\\widetilde{\\sigma}^2_{j,d}$, which is positive for $z > \\overline{\\lambda}_{j,d}$, and hence the upper bound\n\\begin{equation}\n\\ensuremath{\\mathbb{P}} \\left( \\Lambda_{\\text{{\\scriptsize LFR}}} > z; \\ensuremath{\\mathcal{H}}_j \\right) < \\exp \\left( - \\frac{\\left( z - \\overline{\\lambda}_{j,d} \\right)^2}{2\\widetilde{\\sigma}^2_{j,d}} \\right).\n\\end{equation}\n\n\n\\section{Proof of Theorem \\ref{thrm:Opt-pwr-alloc}}\n\\label{app:Opt-pwr-alloc}\n\nThe Lagrangian of the canonical optimization problem is\n\\begin{eqnarray}\n\\mathcal{L}\\left( P_m,\\, \\mu_m,\\, \\nu \\right) &=& \\sum\\limits^M_{m=1} P_m - \\sum\\limits^M_{m=1} \\mu_m P_m \\nonumber \\\\\n &+& \\nu \\left( D_1 - \\sum\\limits^M_{m=1} \\frac{ P_m \\lambda_{d,m}^2}{\\sigma^2_{c,m} P_m + \\sigma^2_{f,m}} \\right)\n\\end{eqnarray}\nwhere $\\lambda_{d,m} = \\left( \\lambda_{1,m} - \\lambda_0 \\right)$ and $D_1 = D_0 \/P_0$. Then the related KKT conditions are\n\\begin{eqnarray}\n1 - \\mu_m -\\frac{\\nu \\lambda^2_{d,m} \\sigma^2_{f,m} }{ \\left( \\sigma^2_{c,m} P_m + \\sigma^2_{f,m} \\right)^2 } &=& 0 \\label{eq:Lag-derv} \\\\\n\\mu_m P_m &=& 0 \\label{eq:slack-cond} \\\\\n\\nu \\left( \\sum\\limits^M_{m=1} \\frac{ P_m \\lambda_{d,m}^2}{\\sigma^2_{c,m} P_m + \\sigma^2_{f,m}} - D_1 \\right) &=& 0 \\label{eq:MD-cond} \\\\\n\\mu_m \\geq 0, \\quad \\nu & \\geq & 0\n\\end{eqnarray}\nfor all $m$, where the $\\mu_m$'s and $\\nu$ are the Lagrange multipliers. To solve for $P_m$, we first recognize that $\\mu_m > 0$ leads to $P_m = 0$ by \\eqref{eq:slack-cond}, which would violate the MD condition \\eqref{eq:MD-cond} if it held for all $m$. Hence, for the clusters that receive power, this forces $\\mu_m = 0$ and $\\nu > 0$. The solution for $P_m$ then follows from condition \\eqref{eq:Lag-derv}, as stated in \\eqref{eq:Pm}. Finally, substituting the latter into \\eqref{eq:MD-cond} gives the solution for $\\nu$ stated in \\eqref{eq:nu}.\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\n\nTo learn EBLVMs efficiently, this paper introduces a \\textbf{Bi}-level \\textbf{D}oubly \\textbf{V}ariational \\textbf{L}earning (\\textbf{BiDVL}) framework, which relies on a tractable variational posterior and a variational joint distribution to estimate the gradients in MLE. We formulate BiDVL as a bi-level optimization (BLO) problem, as illustrated in \\cref{fig:bidvl}. Specifically, the gradient estimate is compelled to fit the real one by driving the variational distributions towards the model distributions in the lower level, and the resulting gradient estimate is then used for optimizing the objective in the upper level. Theoretically, BiDVL is equivalent to the original MLE objective under the \\textit{nonparametric assumption} \\cite{Goodfellow2014GenerativeAN}, and to optimize it in practice, we propose an efficient alternative optimization scheme. \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.94\\linewidth]{bidvl.png}\n \\caption{The motivation for BiDVL. Blue items represent the real objective optimization procedure. Red items indicate the approximate one. The real gradient, the gradient estimate, and the bounds are represented by blue, red and black curves, respectively. The bold red curve is the gradient estimate used for upper-level optimization. See \\cref{sec:frame} for a detailed explanation. }\n \\label{fig:bidvl}\n\\end{figure}\n\nMoreover, the drawbacks of doubly variational learning stem from its instability when learning a deep EBLVM on images. For example, AdVIL \\cite{Li2020ToRY}, which can be approximately obtained as a degenerate case of BiDVL, fails to achieve good performance. The reasons for the failure are: 1) A general deep EBLVM has no structural assumption about its posterior, and the unconstrained posterior may transfer among diverse mode structures during training, bringing about a dynamic target for variational learning. 2) Direct interaction between the two variational distributions is overlooked, though they both approximate the EBLVM. 3) Neither the energy-based posterior nor its variational approximation has an effective way of modelling the real one. Such problems may affect other methods \\cite{Han2020JointTO,Bao2020BilevelSM,Bao2021VariationalE} for learning deep EBLVMs. \n\nTo tackle these issues, we decompose the EBLVM into an energy-based marginal distribution and a structural posterior, where the latter is represented by the variational posterior. Then a symmetric KL divergence is chosen in the lower level, doubly linking the variational distributions by sharing the posterior in both levels; a compact BiDVL for image tasks is thus obtained. Finally, the decoupled EBLVM helps to derive an importance weighted KL divergence, in which the samples for the Monte Carlo estimator come from the dataset. By learning from data through this double link, the posterior models the latent space more effectively. \n\nSpecifically, we parametrize the variational distributions with an inference model and an implicit generative model (IGM). Recently, \\cite{Han2019DivergenceTF,Han2020JointTO,Xie2021LearningEM} propose to jointly train an EBM, an inference model and an IGM. However, their formulations are to some extent derived from intuitive combinations, leading to overlap and conflict among the models. BiDVL, derived entirely for learning EBLVMs, demonstrates this consistency through its clear improvement. In summary, we make the following contributions: \n\n1) We introduce an MCMC-free BiDVL framework for learning EBLVMs efficiently. A corresponding alternative optimization scheme is adopted to solve it. 
\n\n2) We notice several problems in learning deep EBLVMs on images. To overcome these, we define a decoupled EBLVM and choose a symmetric KL divergence in the lower level. The resulting compact objective reflects the consistency of learning EBLVMs. \n\n3) We demonstrate impressive image generation quality compared with baseline models. Besides, BiDVL also shows remarkable performance on test image reconstruction and OOD detection. \n\n\\vspace{-0.1cm} \n\\section{Related work}\n\\label{sec:related}\n\n\\noindent \\textbf{Learning EBMs and EBLVMs}. As unnormalized probability models, EBMs and EBLVMs are able to fit the data distribution more efficiently than other generative models. Nonetheless, they require nontrivial sampling from the model when learned by MLE. Many attempts \\cite{Du2019ImplicitGA,Arbel2021GeneralizedEB} leverage MCMC to approximate the procedure but suffer from quite poor efficiency. \\cite{Vrtes2016LearningDI,Bao2020BilevelSM,Bao2021VariationalE,Li2019LearningEM} avoid the nontrivial sampling based on Score Matching \\cite{Hyvrinen2005EstimationON}, but need unstable and inefficient computation of score functions. Another approach is Noise Contrastive Estimation \\cite{Gutmann2012NoiseContrastiveEO}, which performs poorly on high-dimensional images \\cite{Grathwohl2021NoMF}. Furthermore, \\cite{Grathwohl2020LearningTS} proposes to learn the Stein Discrepancy without sampling, \\cite{Yu2020TrainingDE} generalizes MLE to an $f$-divergence variational framework, and \\cite{Grathwohl2021NoMF,Geng2021BoundsAA} propose MCMC-free variational approaches. While all of them aim at learning EBMs, our BiDVL learns EBLVMs in a different way. \n\nA recent attempt \\cite{Li2020ToRY} introduces two variational models to form a min-min-max nested objective. Their method is evaluated on RBM \\cite{Ackley1985ALA} and DBM \\cite{Salakhutdinov2009DeepBM}, but is unstable for learning deep EBLVMs on images. We, however, propose a compact model for stability; see \\cref{sec:cobidvl} for details. \n\n\\noindent \\textbf{Joint training with other models}. Recent works also notice the compatibility between EBMs and other models. \\cite{Nijkamp2020LearningEM,Xiao2020ExponentialTO,Xiao2021VAEBMAS,Arbel2021GeneralizedEB,Pang2020LearningLS} utilize an EBM to exponentially tilt a generative distribution; most of them have a two-stage refinement procedure with inefficient MCMC sampling. \\cite{Song2020DiscriminatorCD,Che2020YourGI} regard the EBM as the discriminator of a GAN, but still use MCMC to modify the generative distribution. \n\n\n\\cite{Han2019DivergenceTF} proposes a \\textit{divergence triangle} loss for jointly training an EBM, an inference model and an IGM; \\cite{Han2020JointTO} extends it to EBLVMs and \\cite{Xie2021LearningEM} extends it with variational MCMC teaching \\cite{Xie2018CooperativeLO}. These studies integrate KL losses intuitively, by actually assuming the EBM always fits the data perfectly, which induces harmful conflict among the models involved. In contrast, our compact objective derives an importance weighted KL loss, and the variational models in BiDVL are optimized simultaneously rather than with their sleep-awake scheme. All the differences originate from our unified framework, showing the consistency for learning EBLVMs. \n\n\\section{Preliminaries}\n\\label{sec:preli}\n\nAn EBLVM defines a joint scalar energy function over the visible variable $v$ and the latent variable $h$. 
The corresponding joint distribution is\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n p_{\\psi}(v,h) = \\frac{\\exp{(-\\mathcal{E}_{\\psi}(v,h))}}{\\int\\exp{(-\\mathcal{E}_{\\psi}(v,h))}\\mathrm{d}h\\mathrm{d}v}, \n \\label{eq:eblvm}\n\\end{equation}\nwhere $\\mathcal{E}_{\\psi}(v,h)$ denotes the energy function over joint space parameterized by $\\psi$, $\\mathcal{Z}(\\psi)=\\int\\exp{(-\\mathcal{E}_{\\psi}(v,h))}\\mathrm{d}h\\mathrm{d}v$ is known as the \\textit{partition function}. The marginal distribution is given by $p_{\\psi}(v)=\\int p_{\\psi}(v,h)\\mathrm{d}h$, while the marginal energy is $\\mathcal{E}_{\\psi}(v)=-\\log\\int\\exp{(-\\mathcal{E}_{\\psi}(v,h))}\\mathrm{d}h$. \n\nTraining EBLVM is typically based on MLE, or equivalently minimizing the KL divergence $\\mathcal{J}(\\psi)=D_{\\rm KL}(q(v)||p_{\\psi}(v))$, where $q(v)$ denotes the empirical data distribution. As \\cite{LeCun2006ATO} notes, the gradient of the negative log-likelihood objective \\wrt $\\psi$ can be computed by\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n\\begin{gathered}\n \\nabla_{\\psi}\\mathbb{E}_{q(v)}[-\\log p_{\\psi}(v)] = \\nabla_{\\psi}D_{\\rm KL}(q(v)\\Vert p_{\\psi}(v)) = \\\\\n \\mathbb{E}_{q(v)p_{\\psi}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi}(v,h) \\right]-\\mathbb{E}_{p_{\\psi}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi}(v,h) \\right], \n\\end{gathered} \n\\label{eq:cd}\n\\end{equation}\nwhere $p_{\\psi}(h|v)=\\frac{\\exp{(-\\mathcal{E}_{\\psi}(v,h))}}{\\int\\exp{(-\\mathcal{E}_{\\psi}(v,h))}\\mathrm{d}h}$ is the posterior distribution over latent variables. In \\cref{eq:cd}, samples from $p_{\\psi}(h|v)$ and $p_{\\psi}(v,h)$ are required for Monte Carlo gradient estimator, which is \\textit{doubly intractable} due to the integrals over the high-dimensional space. Commonly adopted MCMC however suffers from high consumption and is too bulky to be applied to both distributions simultaneously. \n\n\\section{Learning method}\n\\subsection{Proposed Bi-level doubly variational learning}\n\nTo handle the \\textit{doubly intractable} problem efficiently, we propose doubly variational learning, \\ie, introduce two variational models to represent the variational posterior $q_{\\omega_1}(h|v)$ and the variational joint distribution $p_{\\omega_2}(v,h)$, where $\\omega\\doteq(\\omega_1,\\omega_2)$ is the trainable parameters. Sampling from variational distribution requires only one forward step, largely improving efficiency. \n\n\\vspace{-0.7em}\n\\subsubsection{Framework}\n\\label{sec:frame}\n\nIn this part, we introduce the doubly variational learning under the bi-level optimization (BLO) framework. Let $\\mathcal{E}_{\\psi}\\doteq\\mathcal{E}_{\\psi}(v,h)$ for clarity. 
We first rewrite \\cref{eq:cd} as follow to show our motivation: \n\\begin{gather}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\nabla_{\\psi}\\mathcal{J}(\\psi) \\label{eq:split} = \\mathbb{E}_{q(v)p_{\\psi}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{p_{\\psi}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] \\\\\n \\equiv \\mathbb{E}_{q(v)q_{\\omega_1}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{p_{\\omega_2}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] \\tag{\\ref{eq:split}{a}}\\label{eq:splita} \\\\\n + \\mathbb{E}_{q(v)p_{\\psi}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{q(v)q_{\\omega_1}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] \\tag{\\ref{eq:split}{b}}\\label{eq:splitb} \\\\\n + \\mathbb{E}_{p_{\\omega_2}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{p_{\\psi}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right], \\tag{\\ref{eq:split}{c}}\\label{eq:splitc}\n\\end{gather}\nwhere $\\equiv$ denotes the identity \\wrt $\\omega$, in other word, it holds whatever $\\omega_1,\\omega_2$ are. \\Cref{eq:split} implies that, for an appropriate $\\omega$ such that both terms (\\ref{eq:splitb}) and (\\ref{eq:splitc}) equal zero, then term (\\ref{eq:splita}) is an accurate gradient estimation. \n\nThe motivation to formulate a bi-level optimization problem is illustrated in \\cref{fig:bidvl}. Variational distributions $q_{\\omega_1}(h|v)$ and $p_{\\omega_2}(v,h)$ (red marked items) aim to chase the model distributions $p_{\\psi}(h|v)$ and $p_{\\psi}(v,h)$ (blue marked items) in lower-level optimization, reducing the gap between the upper and lower bound of gradient estimate term (\\ref{eq:splita}) (red curve). The gradient estimate is thus compelled to fit the real one, \\ie Eq.(\\ref{eq:split}) (bold blue curve). Then the estimate finally obtained (bold red curve) is used for optimizing the real objective in upper level. \n\nFollowing the formulas described in \\cite{Liu2021InvestigatingBO}, we assume the solution of lower-level (LL) subproblem \\wrt the upper-level (UL) variables is of a singleton, which is known as the \\textit{lower-level singleton} assumption \\cite{Liu2020AGF}. Therefore we define the LL subproblem as follow: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\omega^*(\\psi) = \\arg\\min_{\\omega\\in\\Omega} \\mathcal{J}_{\\rm LL}(\\psi,\\omega), \\\\\n \\mathcal{J}_{\\rm LL}(\\psi,\\omega) = D(q(v)q_{\\omega_1}(h|v),q(v)p_{\\psi}(h|v)) \\\\\n + D(p_{\\omega_2}(v,h),p_{\\psi}(v,h)),\n \\end{gathered}\n \\label{eq:ll}\n\\end{equation}\nwhere $\\Omega$ is the parameter space of variational distributions. $D$ denotes certain proper metric corresponding to the assumption on $\\nabla_{\\psi}\\mathcal{E}_{\\psi}$, since the form of terms (\\ref{eq:splitb}),(\\ref{eq:splitc}) reminds of the general integral probability metrics. See \\cref{sec:diver} for related discussion. We should again emphasize that the solution of LL subproblem is a function \\wrt the UL variables, and as a result, $\\nabla_{\\psi}\\mathcal{J}(\\psi)=\\mathbb{E}_{q(v)q_{\\omega_1^*(\\psi)}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{p_{\\omega_2^*(\\psi)}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right]$ holds only if $\\mathcal{J}_{\\rm LL}(\\psi,\\omega^*(\\psi))=0$. 
Then an equivalent of term (\\ref{eq:splita}) taking $\\omega=\\omega(\\psi)$ into account is given by: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\mathbb{E}_{q(v)q_{\\omega_1(\\psi)}(h|v)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] - \\mathbb{E}_{p_{\\omega_2(\\psi)}(v,h)} \\left[\\nabla_{\\psi}\\mathcal{E}_{\\psi} \\right] \\\\\n = \\frac{\\partial D_{\\rm KL}(q(v)q_{\\omega_1}(h|v)\\Vert p_{\\psi}(v,h))}{\\partial \\psi}|_{\\omega_1=\\omega_1(\\psi)} \\\\\n - \\frac{\\partial D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert p_{\\psi}(v,h))}{\\partial \\psi}|_{\\omega_2=\\omega_2(\\psi)}.\n \\end{gathered} \\label{eq:part}\n\\end{equation}\nInspired by \\cref{eq:part}, we define the UL subproblem as follow: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\psi^* = \\arg\\min_{\\psi\\in\\Psi}\\mathcal{J}_{\\rm UL}(\\psi,\\omega^*(\\psi)), \\\\\n \\mathcal{J}_{\\rm UL}(\\psi,\\omega) = D_{\\rm KL}(q(v)q_{\\omega_1}(h|v)\\Vert p_{\\psi}(v,h)) \\\\\n - D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert p_{\\psi}(v,h)),\n \\end{gathered} \\label{eq:ul}\n\\end{equation}\nwhere $\\Psi$ is the parameter space of energy function. Further, the gradient of UL objective \\wrt $\\psi$ is denoted by \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n\\begin{split}\n &\\ \\frac{\\partial\\mathcal{J}_{\\rm UL}(\\psi,\\omega^*(\\psi))}{\\partial\\psi} = \\frac{\\partial\\mathcal{J}_{\\rm UL}(\\psi,\\omega)}{\\partial\\psi}|_{\\omega=\\omega^*(\\psi)} \\\\\n +&\\ \\left( \\frac{\\partial\\omega^*(\\psi)}{\\partial\\psi^{\\top}} \\right)^{\\top}\\frac{\\partial\\mathcal{J}_{\\rm UL}(\\psi,\\omega)}{\\partial\\omega}|_{\\omega=\\omega^*(\\psi)},\n\\end{split}\n\\label{eq:ulgra}\n\\end{equation}\nwhere $\\frac{\\partial\\omega^*(\\psi)}{\\partial\\psi}$ is known as the Best-Response (BR) Jacobian \\cite{Liu2021InvestigatingBO} which captures the response of LL solution \\wrt the UL variables. \n\nWe argue that, under the \\textit{nonparametric assumption} \\cite{Goodfellow2014GenerativeAN}, the BLO problem (\\ref{eq:ll},\\ref{eq:ul}) in BiDVL is equivalent to optimize the original objective $\\mathcal{J}(\\psi)$. The formal theorem is presented as follow: \n\\begin{theorem} \\label{thm1}\nAssuming that $\\forall\\psi\\in\\Psi,\\exists\\omega\\in\\Omega$ such that \\\\ $D(q(v)q_{\\omega_1}(h|v),q(v)p_{\\psi}(h|v))=0$, and \\\\\n$D(p_{\\omega_2}(v,h),p_{\\psi}(v,h))=0$, then we have\n\\begin{equation*}\n \\mathcal{J}(\\psi) = \\mathcal{J}_{\\rm UL}(\\psi,\\omega^*(\\psi)), \\nabla_{\\psi}\\mathcal{J}(\\psi) = \\nabla_{\\psi}\\mathcal{J}_{\\rm UL}(\\psi,\\omega^*(\\psi)).\n\\end{equation*}\n\\end{theorem}\nThe \\textit{nonparametric assumption} does not always hold in practice, since the variational models have limited capacity. But we can still bound the original objective as $|\\mathcal{J}_{\\rm UL}(\\psi,\\omega^*(\\psi))- \\mathcal{J}(\\psi)|\\leq C^{\\prime}\\cdot\\mathcal{J}_{\\rm LL}(\\psi,\\omega)$. See \\cref{sec:derive,sec:proof} for detailed derivations and proof. \n\n\\vspace{-0.7em}\n\\subsubsection{Alternative optimization}\n\\label{sec:alter}\n\nSolving the BLO problem requires to compute the gradient of UL objective (\\ref{eq:ulgra}). As \\cite{Liu2021InvestigatingBO} notes, obtaining the gradient suffers from handling the BR Jacobian with a recursive derivation process. 
Recent attempts \\cite{Bao2020BilevelSM,Metz2017UnrolledGA} leverage the \\textit{gradient unrolling} technique to estimate the gradient in bi-level optimization problems. Nevertheless, \\textit{gradient unrolling} requires an inner loop to form a recursive computation graph, and more resources are allocated for backtracking through it. It also incurs inferior performance in our case. For efficiency, we ignore the BR Jacobian term in \\cref{eq:ulgra}. Then BiDVL is optimized alternatively, \\ie, when updating the parameters involved in the current level, we consider the rest as fixed. Notice that such missing-term alternative optimization has been widely adopted in reinforcement learning. \n\n\\subsection{BiDVL for Image Tasks}\n\\label{sec:cobidvl}\n\nBiDVL is a general framework for EBLVMs; however, doubly variational learning is usually too unstable to scale to learning deep EBLVMs on image datasets. AdVIL \\cite{Li2020ToRY}, which can be approximately derived from BiDVL by choosing the KL divergence in the lower level, encounters the same difficulties, namely unstable training and inferior performance, when dealing with images. We attribute the failure to three key learning problems which exacerbate the instability on high-dimensional and highly multimodal image datasets:\n\n1) The generally defined undirected EBLVM (\\ref{eq:eblvm}) has no explicit structural assumption about its posterior $p_{\\psi}(h|v)$ (due to the coupled joint energy represented by a network), resulting in a dynamic target, whose mode structure transfers rapidly, for $q_{\\omega_1}(h|v)$ and $p_{\\omega_2}(v,h)$ to chase. \n\n2) Though both variational distributions are to approximate $p_{\\psi}(v,h)$, there is no direct interaction between them, thus misalignment may be induced in variational learning. \n\n3) Neither $p_{\\psi}(h|v)$ nor $q_{\\omega_1}(h|v)$ can model the real posterior effectively, since they learn from each other (\\ref{eq:ll}) rather than from data directly. \n\nNext, we introduce our solutions to the problems above with BiDVL as a necessary framework, and then a practical compact BiDVL for image tasks is consequently constructed. \n\n\\vspace{-0.7em}\n\\subsubsection{Decoupled EBLVM}\n\\label{sec:decoup}\n\nThe problems above imply that an unconstrained posterior is confusing and superfluous for modeling the latent space, which motivates us to \\textit{redefine} $p_{\\psi}(v)$ and $p_{\\psi}(h|v)$ as an energy-based marginal distribution and a structural posterior. Considering the same parameterization of $q_{\\omega_1}(h|v)$ and $p_{\\psi}(h|v)$, and by $\\min D(q(v)q_{\\omega_1}(h|v)||q(v)p_{\\psi}(h|v))$ in \\cref{eq:ll}, we can directly make the energy-based posterior equal to the variational one by sharing the parameters $\\omega_1$; as a consequence, we have $D(q(v)q_{\\omega_1}(h|v),q(v)p_{\\psi}(h|v))=0$, eliminating the confusing chase between posteriors in BiDVL as well as reducing the LL objective. 
We finally formulate a decoupled version of EBLVM: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n p_{\\psi^{\\prime},\\omega_1}(v,h)=p_{\\psi^{\\prime}}(v)q_{\\omega_1}(h|v), \\\\\n p_{\\psi^{\\prime}}(v)\\propto\\exp{(-\\mathcal{E}_{\\psi^{\\prime}}(v))}, \n \\end{gathered}\n \\label{eq:jomod}\n\\end{equation}\nwhere $\\mathcal{E}_{\\psi^{\\prime}}(v)$ is the new marginal energy parametrized with $\\psi^{\\prime}$ and then a decoupled joint energy is \\textit{redefined} by $\\mathcal{E}_{\\psi^{\\prime},\\omega_1}(v,h) = \\mathcal{E}_{\\psi^{\\prime}}(v) - \\log q_{\\omega_1}(h|v)$. Note we are actually optimizing the upper bound of $\\mathcal{J}(\\psi)$, in other words, if the bi-level objectives are reduced, the original objective is reduced in turn, therefore we can optimize $\\omega_1$ in both levels without changing the optimal solution of BiDVL. \n\n\\vspace{-0.7em}\n\\subsubsection{Symmetric KL divergence}\n\\label{sec:symkl}\n\nSince $\\omega_1$ is optimized in both levels, a proper metric in the lower level can link variational distributions, where KL divergence is a feasible choice. However as \\cite{Goodfellow2016Deep,Han2020JointTO} notice, minimizing $D_{\\rm KL}(P\\Vert Q)$ \\wrt $Q$ compels $Q$ to cover the major support of $P$, while minimizing a reverse version $D_{\\rm KL}(Q\\Vert P)$, $Q$ is forced to chase the major modes of $P$. So for integrating both behavior, we choose a symmetric KL divergence, $S(P\\Vert Q)=D_{\\rm KL}(P\\Vert Q)+D_{\\rm KL}(Q\\Vert P)$, in the lower level. Meanwhile, by taking the alternative optimization from \\cref{sec:alter} into account, a compact BiDVL is derived from \\cref{eq:ll,eq:ul}: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\min_{\\omega} \\mathcal{J}_{\\rm LL}(\\omega), \\\\\n \\mathcal{J}_{\\rm LL}(\\omega) = D_{\\rm KL}(p_{\\psi^{\\prime},\\omega_1}(v,h)\\Vert p_{\\omega_2}(v,h)) \\\\\n +\\ D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert p_{\\psi^{\\prime},\\omega_1}(v,h)),\n \\end{gathered}\n \\label{eq:newll}\n\\end{equation}\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\min_{\\psi^{\\prime},\\omega_1}\\mathcal{J}_{\\rm UL}(\\psi^{\\prime},\\omega_1) \\\\\n \\mathcal{J}_{\\rm UL}(\\psi^{\\prime},\\omega_1) = D_{\\rm KL}(q(v)q_{\\omega_1}(h|v)\\Vert p_{\\psi^{\\prime},\\omega_1}(v,h)) \\\\\n -\\ D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert p_{\\psi^{\\prime},\\omega_1}(v,h)), \n \\end{gathered} \n \\label{eq:newul}\n\\end{equation}\nwhere the symmetric KL divergence in the lower level sheds light on the connection between variational distributions in two directions. Based on above solutions, we derive a weighted KL loss shown in next section, which is proven to help modelling the data latent space effectively. \n\n\\vspace{-0.7em}\n\\subsubsection{Optimizing compact BiDVL} \n\\label{sec:ocblo}\n\nIn this part, we present the gradients for optimizing compact BiDVL (\\ref{eq:newll},\\ref{eq:newul}) followed by a simplified version. 
Gradients are computed by: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{split}\n &\\ \\nabla_{\\omega} \\mathcal{J}_{\\rm LL}(\\omega) \\\\\n =&\\ \\nabla_{\\omega} D_{\\rm KL}(p_{\\psi^{\\prime}}(v)q_{\\omega_1}(h|v)\\Vert p_{\\omega_2}(v,h)) \\\\\n +&\\ \\nabla_{\\omega_2} \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\mathcal{E}_{\\psi^{\\prime}}(v)] + \\nabla_{\\omega} D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v)) \\\\\n =&\\ \\nabla_{\\omega} \\int \\textcolor{red}{\\frac{p_{\\psi^{\\prime}}(v)}{q(v)}} q(v) q_{\\omega_1}(h|v)\\log\\frac{q(v)q_{\\omega_1}(h|v)}{p_{\\omega_2}(v,h)}\\mathrm{d}h\\mathrm{d}v \\\\\n +&\\ \\nabla_{\\omega_2} \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\mathcal{E}_{\\psi^{\\prime}}(v)] + \\nabla_{\\omega} D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v)) \\\\\n =&\\ \\nabla_{\\omega} D_{\\textcolor{red}{r(v)}}(q(v)q_{\\omega_1}(h|v)\\Vert p_{\\omega_2}(v,h)) \\\\\n +&\\ \\nabla_{\\omega_2} \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\mathcal{E}_{\\psi^{\\prime}}(v)] + \\textcolor{blue}{\\nabla_{\\omega} D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))}, \\\\\n \\end{split}\n \\label{eq:nllgr}\n\\end{equation}\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{split}\n &\\ \\nabla_{\\psi^{\\prime},\\omega_1} \\mathcal{J}_{\\rm UL}(\\psi^{\\prime},\\omega_1) \\\\\n =&\\ \\mathbb{E}_{q(v)}[\\nabla_{\\psi^{\\prime}}\\mathcal{E}_{\\psi^{\\prime}}(v)] - \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\nabla_{\\psi^{\\prime}}\\mathcal{E}_{\\psi^{\\prime}}(v)] \\\\\n -&\\ \\textcolor{blue}{\\nabla_{\\omega_1} D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))}, \n \\end{split}\n \\label{eq:nulgr}\n\\end{equation}\nwhere $D_{r(v)}$ denotes the importance weighted KL divergence with $r(v)=\\frac{p_{\\psi^{\\prime}}(v)}{q(v)}$ as the importance ratio. The principal reason for utilizing importance ratio is the nontrivial sampling from $p_{\\psi^{\\prime}}(v)$. Notice computing the ratio is still nontrivial, we present a detailed analysis in \\cref{sec:analy}. \n\nImportance weighted term offers additional benefits. Since $D_{\\rm KL}(p_{\\psi^{\\prime}}(v)q_{\\omega_1}(h|v)\\Vert p_{\\omega_2}(v,h))$ has a mode-cover behavior, $p_{\\psi^{\\prime}}(v)$ defined on the whole space may provide fake mode-cover information for $p_{\\omega_2}(v,h)$. However, samples for estimating the weighted term come from dataset, brings $p_{\\omega_2}(v,h)$ faithful mode-cover information and further helps the EBLVM faster convergence and stabler training. Furthermore, minimizing $D_{\\rm KL}(q(v)q_{\\omega_1}(h|v) \\Vert p_{\\omega_2}(v,h))$ is equivalent to optimize the \\textit{evidence lower bound} of VAE, whose reconstructive behavior is proven to model the data latent space effectively. \n\nMinimizing $D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))$ tends to increase the likelihood of generated samples under the posterior, which further enhance the alignment between visible space and latent space. By sharing the variational posterior $q_{\\omega_1}(h|v)$, the decoupled EBLVM thus learns a simpler structural posterior easy to model data latent space. 
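\n\nTo illustrate how the gradients above translate into a training step, a minimal PyTorch-style sketch is given below. Following the interpretation of the first term as an importance weighted reconstruction loss with KL regularization (cf. \\cref{sec:discuss}), it assumes a marginal energy network \\texttt{E}, a Gaussian encoder \\texttt{enc} and a deterministic generator \\texttt{G}; the weight \\texttt{r\\_v} standing in for $r(v)$, the Gaussian reconstruction term and all other concrete choices are illustrative assumptions rather than the exact implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef kl_std_normal(mu, logvar):\n    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions\n    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)\n\ndef bidvl_iteration(E, enc, G, v_data, r_v, opt_lower, opt_upper, n_lower=1):\n    # ----- lower level: update omega = (omega_1, omega_2) -----\n    for _ in range(n_lower):\n        mu, logvar = enc(v_data)\n        h = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparametrization\n        rec = F.mse_loss(G(h), v_data, reduction='none').flatten(1).sum(1)\n        weighted_kl = (r_v * (rec + kl_std_normal(mu, logvar))).mean()\n        h_p = torch.randn_like(mu)                 # h ~ p(h)\n        v_gen = G(h_p)                             # v ~ p_{omega_2}(v)\n        energy_gen = E(v_gen).mean()               # E_{p_{omega_2}}[E_{psi'}(v)]\n        mu_g, logvar_g = enc(v_gen)                # -log q_{omega_1}(h_p|v_gen) + const\n        latent_rec = (0.5 * (h_p - mu_g).pow(2) / logvar_g.exp()\n                      + 0.5 * logvar_g).sum(1).mean()\n        loss_lower = weighted_kl + energy_gen + latent_rec\n        opt_lower.zero_grad(); loss_lower.backward(); opt_lower.step()\n\n    # ----- upper level: update psi' (the omega_1 term is handled as in the\n    # later discussion and omitted here for brevity) -----\n    v_gen = G(torch.randn_like(mu)).detach()\n    loss_upper = E(v_data).mean() - E(v_gen).mean()\n    opt_upper.zero_grad(); loss_upper.backward(); opt_upper.step()\n\\end{verbatim}\nHere \\texttt{opt\\_lower} holds the encoder and generator parameters while \\texttt{opt\\_upper} holds the energy parameters, so the two optimizers play the roles of the two learning rates in the alternative scheme.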
\n\nNotice that the opposite terms \\wrt $\\omega_1$ in \\cref{eq:nllgr,eq:nulgr} can be offset to derive simplified gradients: \n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\nabla_{\\omega} \\mathcal{\\widehat{J}}_{\\rm LL}(\\omega) = \\nabla_{\\omega} D_{\\textcolor{red}{r(v)}}(q(v)q_{\\omega_1}(h|v)\\Vert p_{\\omega_2}(v,h)) \\\\\n + \\nabla_{\\omega_2} \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\mathcal{E}_{\\psi^{\\prime}}(v)] + \\textcolor{blue}{\\nabla_{\\omega_2} D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))},\n \\end{gathered}\n \\label{eq:pllgr}\n\\end{equation}\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{gathered}\n \\nabla_{\\psi^{\\prime}} \\mathcal{\\widehat{J}}_{\\rm UL}(\\psi^{\\prime}) = \\mathbb{E}_{q(v)}[\\nabla_{\\psi^{\\prime}}\\mathcal{E}_{\\psi^{\\prime}}(v)] - \\mathbb{E}_{p_{\\omega_2}(v,h)}[\\nabla_{\\psi^{\\prime}}\\mathcal{E}_{\\psi^{\\prime}}(v)]. \n \\end{gathered}\n \\label{eq:pulgr}\n\\end{equation}\nThe offset gradients are more stable than the original ones, since the opposing terms are merged into a single level; however, this weakens the adversarial learning, as explained in \\cref{sec:analy}. \n\\begin{algorithm}\n\\caption{Bi-level doubly variational learning for EBLVM by alternating stochastic gradient descent}\n\\label{alg:bidvl}\n\\hspace*{0.02in}{\\bf Input:}\nLearning rate schemes $\\alpha$ and $\\beta$; randomly initialized network parameters $\\omega$ and $\\psi^{\\prime}$; number of lower-level steps $N$\n\\begin{algorithmic}[1]\n\\REPEAT\n \\STATE Sample a batch of data\n \\FOR{$n=1,\\cdots,N$}\n \\STATE Optimize the lower-level subproblem (\\ref{eq:newll}) by \\cref{eq:nllgr} (or \\cref{eq:pllgr}): \\\\\n $\\omega \\leftarrow \\omega - \\alpha\\nabla_{\\omega} \\mathcal{J}_{\\rm LL}(\\omega)$\n \\ENDFOR\n \\STATE Optional \\textit{gradient unrolling} following \\cite{Metz2017UnrolledGA}\n \\STATE Optimize the upper-level subproblem (\\ref{eq:newul}) by \\cref{eq:nulgr} (or \\cref{eq:pulgr}): \\\\\n $\\psi^{\\prime} \\leftarrow \\psi^{\\prime} - \\beta\\nabla_{\\psi^{\\prime}} \\mathcal{J}_{\\rm UL}(\\psi^{\\prime},\\omega_1)$ \\\\ \n $\\omega_1 \\leftarrow \\omega_1 - \\alpha\\nabla_{\\omega_1} \\mathcal{J}_{\\rm UL}(\\psi^{\\prime},\\omega_1)$\n\\UNTIL{Convergence or reaching a certain threshold}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Algorithm implementation}\n\\label{sec:algim}\n\nThe training procedure is summarized in \\cref{alg:bidvl}; in this section, a specific implementation is introduced. Our model consists of an EBM, an inference model and a generative model, corresponding to the marginal energy-based distribution $p_{\\psi^{\\prime}}(v)$, the variational posterior $q_{\\omega_1}(h|v)$ and the variational joint distribution $p_{\\omega_2}(v,h)$, respectively. \n\nFor the generative model, BiDVL admits many choices, \\eg implicit generative models (IGMs) or flows \\cite{JimenezRezende2015VariationalIW}. In this work, we simply use an IGM, from which a sample is obtained by passing a latent variable $h$ through a deterministic generator, \\ie, $v=G_{\\omega_2}(h)$, where $h$ is sampled from a known distribution $p(h)$. For the inference model, we employ the standard Gaussian encoder of a VAE, ${\\mathcal{N}(\\mu_{\\omega_1}(v),\\Sigma_{\\omega_1}(v))}$. Sampling from the inference model adopts the reparametrization trick, \\ie $h=\\mu_{\\omega_1}(v)+\\Sigma_{\\omega_1}(v)\\cdot\\epsilon$, where $\\epsilon$ denotes standard Gaussian noise. 
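\nAs a complement to \\cref{alg:bidvl}, the following is a minimal PyTorch-style sketch of one training iteration under this implementation, using the simplified gradients (\\ref{eq:pllgr},\\ref{eq:pulgr}) with $N=1$ and without gradient unrolling. The optimizers, the loss helpers (\\eg \\texttt{lower\\_level\\_loss} and \\texttt{upper\\_level\\_loss} from the sketch above) and the sampling details are illustrative assumptions, not the exact training code. \n\\begin{verbatim}\n# Sketch of one BiDVL training step (alternating SGD, N = 1, no unrolling).\n# opt_omega updates encoder + generator; opt_psi updates the marginal energy.\nimport torch\n\ndef bidvl_step(energy, encoder, generator, opt_omega, opt_psi,\n               v_data, r_v, dim_h):\n    # ---- lower level: update omega = (omega_1, omega_2) ----\n    opt_omega.zero_grad()\n    loss_ll = lower_level_loss(energy, encoder, generator, v_data, r_v)\n    loss_ll.backward()\n    opt_omega.step()\n\n    # ---- upper level: update psi' (offset variant, Eq. (pulgr)) ----\n    opt_psi.zero_grad()            # clears stale energy grads from the lower level\n    with torch.no_grad():          # generated samples are treated as fixed inputs\n        h_gen = torch.randn(v_data.size(0), dim_h, device=v_data.device)\n        v_gen = generator(h_gen)\n    loss_ul = upper_level_loss(energy, v_data, v_gen)\n    loss_ul.backward()\n    opt_psi.step()\n    return loss_ll.item(), loss_ul.item()\n\\end{verbatim}\n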
\n\\subsection{Discussion}\n\\label{sec:discuss}\n\nIn this section, we present some additional properties of BiDVL under this specific implementation. By adopting the structure of a VAE, \\cref{eq:nllgr} contains an importance-weighted VAE loss; in practice we use the importance-weighted reconstruction loss and the KL regularization in \\cref{eq:nllgr}. Meanwhile, $D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))$, which forms another reconstruction over the latent space, takes part in a cycle reconstruction that links the models tightly. \n\nBesides, there are two chase games in \\cref{eq:newll,eq:newul,eq:nllgr,eq:nulgr}:\n\n1) The first chase game is played between the marginal EBM $p_{\\psi^{\\prime}}$ and the IGM $p_{\\omega_2}$. In the lower level, $p_{\\omega_2}$ tends to chase $p_{\\psi^{\\prime}}$, while in the upper level, $p_{\\psi^{\\prime}}$ moves toward the data distribution and escapes from $p_{\\omega_2}$. This game formulates an adversarial learning scheme similar to a GAN; however, a discriminator-guided IGM typically has limited capacity for assigning density on the learned support \\cite{Arbel2021GeneralizedEB}, whereas in BiDVL the IGM is guided by a sophisticated energy-based probability model. \n\n2) The opposite terms in \\cref{eq:nllgr,eq:nulgr} contribute to another chase game, between the variational models. In the lower level, $q_{\\omega_1}$ and $p_{\\omega_2}$ are drawn close over the latent space, while in the upper level, $q_{\\omega_1}$ attempts to escape from the other. Optimizing $\\omega_1$ to increase $D_{\\rm KL}(p_{\\omega_2}(v,h)\\Vert q_{\\omega_1}(h|v))$ in the upper level changes the mean and variance of the inference model aimlessly, resulting in unstable training. Inspired by the adversarial relation between the reconstruction part and the KL regularization part of the VAE loss, we instead minimize $D_{\\rm KL}(q_{\\omega_1}(h|v)\\Vert p(h))$, $v\\sim{p_{\\omega_2}(v)}$, in the upper level to realize this chase game. In effect, it provides $q_{\\omega_1}$ with a fixed target, leading to stable training. \n\nAll of these properties, namely the importance-weighted loss, the additional chase game and the bi-level optimization scheme, originate from the unified framework, indicating the consistency of our approach to learning EBLVMs. \n\n\\section{Experiments}\n\nWe evaluate BiDVL on three tasks: image generation, test image reconstruction and out-of-distribution (OOD) detection. Test image reconstruction is conducted to evaluate the ability to model the latent space; the remaining experiments are designed mainly following \\cite{Han2020JointTO}. In the main paper, experiments are demonstrated on three commonly adopted benchmark datasets: CIFAR-10 \\cite{Krizhevsky2009LearningML}, SVHN \\cite{Netzer2011ReadingDI} and CelebA \\cite{Liu2015DeepLF}. For CIFAR-10 and SVHN, images are resized to $32\\times32$. For CelebA, we resize images to $32\\times32$ and $64\\times64$ to construct CelebA-32 and CelebA-64. All datasets are scaled to $[-1,1]$ during pre-processing. \n\nOur model consists of an EBM, an inference model and an IGM. The structure of the variational models follows \\cite{Han2020JointTO}, which simply cascades several convolutional layers. For stable training, batch normalization \\cite{Ioffe2015BatchNA} layers are inserted between the convolutional layers. The EBM uses convolutional layers to map a sample to a real-valued energy, and spectral normalization is applied to satisfy the constraint on $\\nabla_{\\psi}\\mathcal{E}_{\\psi}(v,h)$ discussed in \\cref{sec:diver}. 
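\nAs an illustration of this architectural choice, below is a minimal PyTorch-style sketch of a spectrally normalized convolutional energy network over $v$; the layer widths, depth and pooling are assumptions for illustration, not the exact network used in our experiments. \n\\begin{verbatim}\n# Sketch: convolutional energy network with spectral normalization on each layer.\nimport torch\nimport torch.nn as nn\nfrom torch.nn.utils import spectral_norm\n\nclass ConvEnergy(nn.Module):\n    def __init__(self, channels=3, width=64):\n        super().__init__()\n        self.features = nn.Sequential(\n            spectral_norm(nn.Conv2d(channels, width, 3, stride=2, padding=1)),\n            nn.SiLU(),\n            spectral_norm(nn.Conv2d(width, 2 * width, 3, stride=2, padding=1)),\n            nn.SiLU(),\n            spectral_norm(nn.Conv2d(2 * width, 4 * width, 3, stride=2, padding=1)),\n            nn.SiLU(),\n        )\n        self.head = spectral_norm(nn.Linear(4 * width, 1))\n    def forward(self, v):\n        f = self.features(v).mean(dim=(2, 3))   # global average pooling\n        return self.head(f).squeeze(-1)          # scalar energy per sample\n\\end{verbatim}\n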
Parameters are optimized via Adam \\cite{Kingma2015AdamAM}. We set the number of lower-level steps $N$ to one and omit the \\textit{gradient unrolling} for faster and more stable training in our experiments. These major choices are discussed in \\cref{sec:analy}. \n\nIn order to evaluate on larger-scale images, we also conduct experiments on CelebA resized to $128\\times128$ with a stronger Resnet-based structure in \\cref{sec:addex}. \n\n\\subsection{Image generation}\n\\label{sec:imgen}\n\nWe show that the proposed model can generate realistic images with visual similarities to the training images by conducting experiments on CIFAR-10, SVHN and CelebA. \\Cref{fig:1} shows $32\\times32$ images randomly generated by the proposed model, and \\cref{fig:2} shows $64\\times64$ images generated on CelebA. We find that the generated images maintain diversity, which confirms that the learned EBLVM covers most modes of the data well. \n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.3\\linewidth]{generate_CIFAR10.png}\n \\includegraphics[width=0.3\\linewidth]{generate_SVHN.png}\n \\includegraphics[width=0.3\\linewidth]{generate_CelebA32.png}\n \\caption{Randomly generated images. Left: CIFAR-10. Middle: SVHN. Right: CelebA-32.}\n \\label{fig:1}\n\\end{figure*}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{generate_CelebA64.png}\n \\caption{Randomly generated images on CelebA-64.}\n \\label{fig:2}\n\\end{figure}\n\nWe adopt the Frechet Inception Distance (FID) \\cite{Heusel2017GANsTB} to reflect sample quality, and the baseline models are chosen from EBMs, VAEs and GANs to align with the perspectives in \\cref{sec:discuss}. Divergence triangle \\cite{Han2020JointTO} is an energy-guided VAE with a model structure similar to ours. IGEBM \\cite{Du2019ImplicitGA} and GEBM \\cite{Arbel2021GeneralizedEB} are MCMC-based EBMs, while VERA \\cite{Grathwohl2021NoMF} is MCMC-free. SNGAN \\cite{Miyato2018SpectralNF} adopts a Resnet structure and obtains 21.7 FID on CIFAR-10. Recent exponential tilting models, which formulate a two-stage refinement procedure with inefficient MCMC sampling, are not considered here. In \\cref{tab:imge1}, we evaluate our best model on CIFAR-10 and CelebA-64. \\Cref{tab:imge2} reports the FID on CelebA-32. Our model achieves better performance than the baseline models, notably reaching 20.75 FID on CIFAR-10, even without an elaborate structure such as Resnet. 
\n\\begin{table}\n \\centering\n \\begin{tabular}{lcc}\n \\toprule\n Model & CIFAR-10 & CelebA-64 \\\\\n \\midrule\n VAE \\cite{Kingma2014Autoencoding} & 109.5 & 99.09 \\\\\n Divergence triangle \\cite{Han2020JointTO} & 30.1 & 24.7 \\\\\n IGEBM \\cite{Du2019ImplicitGA} & 40.58 & - \\\\\n GEBM \\cite{Arbel2021GeneralizedEB} & 23.02 & - \\\\\n VERA \\cite{Grathwohl2021NoMF} & 27.5 & - \\\\\n SNGAN \\cite{Miyato2018SpectralNF} & 21.7 & 50.4 \\\\\n BiDVL (ours) & \\textbf{20.75} & \\textbf{17.24} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Evaluation of sample quality on CIFAR-10 and CelebA-64 via FID.}\n \\label{tab:imge1}\n\\end{table}\n\\begin{table}\n \\centering\n \\begin{tabular}{lc}\n \\toprule\n Model & CelebA-32 \\\\\n \\midrule\n VAE \\cite{Kingma2014Autoencoding} & 38.76 \\\\\n DCGAN \\cite{Radford2016UnsupervisedRL} & 12.50 \\\\\n FCE \\cite{Gao2020FlowCE} & 12.21 \\\\\n GEBM \\cite{Arbel2021GeneralizedEB} & 5.21 \\\\\n BiDVL (ours) & \\textbf{4.47} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Evaluation of sample quality on CelebA-32 using FID.}\n \\label{tab:imge2}\n\\end{table}\n\n\\subsection{Image reconstruction}\n\\label{sec:imrec}\n\nIn this section, we show that our model can learn the low-dimensional structure of the training samples by assessing its performance on test image reconstruction. We note that EBMs and GANs are incapable of image reconstruction without an inference model, whereas an inference model is adopted as the variational posterior in our proposed model, enabling the infer-and-reconstruct procedure. \nExperimentally, our model achieves a low reconstruction error on both the CIFAR-10 and CelebA-64 test sets, because the reconstruction behavior over latent variables helps to further enhance the alignment between the visible space and the latent space. We compare our model, the one with the best generation quality in \\cref{sec:imgen}, to the baseline models in \\cref{tab:imre}, with the root mean square error (RMSE) as the metric. The reconstructed images shown in \\cref{fig:3} demonstrate that our model captures the major information in the test images, and the low reconstruction error implies that the information of the real data flows to the EBLVM through the IGM effectively. \n\\begin{table}\n \\centering\n \\begin{tabular}{lcc}\n \\toprule\n Model & CIFAR-10 & CelebA-64 \\\\\n \\midrule\n VAE \\cite{Kingma2014Autoencoding} & 0.192 & 0.197 \\\\\n ALICE \\cite{Li2017ALICETU} & 0.185 & 0.214 \\\\\n SVAE \\cite{Pu2018SymmetricVA} & 0.258 & 0.209 \\\\\n Divergence triangle \\cite{Han2020JointTO} & 0.177 & 0.190 \\\\\n BiDVL (ours) & \\textbf{0.168} & \\textbf{0.187} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Evaluation of test image reconstruction on CIFAR-10 and CelebA-64 using RMSE.}\n \\label{tab:imre}\n\\end{table}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\linewidth]{test_CIFAR10.png}\n \\includegraphics[width=0.48\\linewidth]{test_recon_CIFAR10.png}\n \\includegraphics[width=0.48\\linewidth]{test_CelebA64.png}\n \\includegraphics[width=0.48\\linewidth]{test_recon_CelebA64.png}\n \\caption{Test image reconstruction. Top: CIFAR-10. Bottom: CelebA-64. Left: test images. Right: reconstruction images.}\n \\label{fig:3}\n\\end{figure}\n
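\nFor reference, the following is a minimal PyTorch-style sketch of the infer-and-reconstruct evaluation reported above (per-image RMSE on images scaled to $[-1,1]$). Using the posterior mean for reconstruction and the helper names are illustrative assumptions, not the exact evaluation code. \n\\begin{verbatim}\n# Sketch: test-image reconstruction error (RMSE) with the encoder and generator.\nimport torch\n\n@torch.no_grad()\ndef reconstruction_rmse(encoder, generator, loader, device='cpu'):\n    errors = []\n    for v, _ in loader:                 # images assumed scaled to [-1, 1]\n        v = v.to(device).flatten(1)\n        h = encoder(v).mean             # posterior mean as the inferred latent code\n        v_rec = generator(h)\n        errors.append(((v_rec - v) ** 2).mean(dim=1).sqrt())\n    return torch.cat(errors).mean().item()\n\\end{verbatim}\n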
\n\n\\subsection{Out-of-distribution detection}\n\\label{sec:ood}\n\nThe EBLVM defines an unnormalized density function which can be used for detecting OOD samples. Generative models such as GANs are infeasible for OOD detection because their model distribution is only implicitly defined, while likelihood-based models such as VAEs and flows are known to overestimate the density of OOD regions and, as a result, fail to distinguish OOD samples. In contrast, the log unnormalized marginal density of the EBLVM, \\ie, the negative marginal energy $-\\mathcal{E}(v)$, is trained to assign low values to OOD regions and high values to data regions, which suits this task. Since the marginal part of the decoupled EBLVM models the visible space faithfully, we regard $-\\mathcal{E}_{\\psi^{\\prime}}(v)$ as the critic of samples. We mainly consider three OOD datasets: uniform noise, SVHN and CelebA. The proposed model is trained on CIFAR-10, whose test split serves as the in-distribution dataset. Following \\cite{Du2019ImplicitGA}, the area under the ROC curve (AUROC) is used as the evaluation metric. Since training EBMs is a high-variance process, as observed in related work \\cite{Han2020JointTO}, our best model is chosen from the early stage of training. \\Cref{tab:ood} shows that our model achieves performance comparable to most baselines chosen from recent EBMs, VAEs and flows. Somewhat surprisingly, our model slightly outperforms JEM, although JEM is an energy-based classifier that is expected to be good at OOD detection. Moreover, VERA \\cite{Grathwohl2021NoMF}, a recent MCMC-free method for learning EBMs, performs very well on SVHN, while our model is better on CelebA. \n\\begin{table}\n \\centering\n \\begin{tabular}{lccc}\n \\toprule\n Model & Random & SVHN & CelebA \\\\\n \\midrule\n IGEBM \\cite{Du2019ImplicitGA} & 1.0 & 0.63 & 0.7 \\\\\n Glow \\cite{Kingma2018GlowGF} & 1.0 & 0.24 & 0.57 \\\\\n SVAE \\cite{Pu2018SymmetricVA} & 0.29 & 0.42 & 0.52 \\\\\n Divergence triangle \\cite{Han2020JointTO} & 1.0 & 0.68 & 0.56 \\\\\n JEM \\cite{Grathwohl2020YourCI} & 1.0 & 0.67 & 0.75 \\\\\n VERA \\cite{Grathwohl2021NoMF} & 1.0 & \\textbf{0.83} & 0.33 \\\\\n BiDVL (ours) & 1.0 & 0.76 & \\textbf{0.77} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Out-of-distribution detection on uniform, SVHN, CelebA test datasets, with CIFAR-10 as the in-distribution dataset. We report the AUROC of negative free energy.}\n \\label{tab:ood}\n\\end{table}\n\n\\subsection{Ablations and analysis}\n\\label{sec:analy}\n\nWe first investigate how the modifications proposed in \\cref{sec:cobidvl} address the aforementioned problems by enhancing learning stability. We conduct experiments on the original BiDVL with the canonical KL divergence, but it is unstable (and even diverges) on CIFAR. Similarly, only decoupling the EBLVM still struggles to converge. On the other hand, training the undecoupled EBLVM improves considerably with the symmetric KL and reaches 32.10 FID on CIFAR, which is nonetheless much worse than our complete model, which reaches 20.75 as reported in \\cref{sec:imgen}. 
Next, we study the remaining design choices in BiDVL. Since computing the importance ratio $r(v)$ is nontrivial, we heuristically estimate it with a basic term and a bias term:\n\\begin{equation}\n\\setlength{\\abovedisplayskip}{5pt}\n\\setlength{\\belowdisplayskip}{5pt}\n \\begin{split}\n r(v) =&\\ \\frac{\\mathbb{E}_{p_{\\psi^{\\prime}}(v)}[q(v)]}{q(v)} \\frac{\\exp{(-\\mathcal{E}_{\\psi^{\\prime}}(v))}}{\\mathbb{E}_{q(v)}[\\exp{(-\\mathcal{E}_{\\psi^{\\prime}}(v))}]} \\\\\n \\approx&\\ r^{\\prime} \\frac{\\exp{(-\\mathcal{E}_{\\psi^{\\prime}}(v))}}{\\mathbb{E}_{q(v)}[\\exp{(-\\mathcal{E}_{\\psi^{\\prime}}(v))}]},\n \\end{split}\n \\label{eq:is}\n\\end{equation}\nwhere the basic term $r^{\\prime}$ corresponds to the average ratio, and the bias term indicates that samples with a higher density under the model distribution should be weighted more. Since training EBMs is a high-variance process, which makes the estimate of the bias term noisy, we also tried replacing the bias term with $\\mathrm{Sigmoid}(\\mathbb{E}_{q(v)}[\\mathcal{E}_{\\psi^{\\prime}}(v)]-\\mathcal{E}_{\\psi^{\\prime}}(v))$; this still led to slightly inferior performance, so we ignore the bias term. \n\nFurthermore, we found the proposed model to be quite sensitive to the basic term, as shown in \\cref{tab:basra}, which studies the influence of the basic ratio on CIFAR. \n\\begin{table}\n \\centering\n \\begin{tabular}{lccccc}\n \\toprule\n $r^{\\prime}$ & 0.01 & 0.05 & 0.1 & 0.5 & 1.0 \\\\\n \\midrule\n FID & 22.62 & 20.75 & 21.90 & 26.82 & 29.72 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{The influence of varying the basic ratio on FID. Models are trained on CIFAR-10.}\n \\label{tab:basra}\n\\end{table}\nThe generation quality degrades as the basic ratio increases and becomes considerably worse with $r^{\\prime}=1.0$. This matches the intuition that the importance ratio should be small on average, since the energy-based distribution is mild and typically assigns density to unrealistic samples. Finally, we use $0.05$ on $32\\times32$ datasets and $0.1$ on $64\\times64$ datasets. 
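\nAs a concrete illustration, the following is a minimal PyTorch-style sketch of the heuristic estimate in \\cref{eq:is}, computed on a data batch. The self-normalized form of the bias term and the function name are illustrative assumptions; in our final setting the bias term is dropped, as discussed above. \n\\begin{verbatim}\n# Sketch: heuristic importance ratio of Eq. (eq:is), estimated on a data batch.\nimport torch\n\ndef importance_ratio(energy, v_data, r_basic=0.05, use_bias=False):\n    with torch.no_grad():\n        e = energy(v_data)                   # E_{psi'}(v) on a batch of data samples\n        if use_bias:\n            # bias term: exp(-E(v)) self-normalized by its batch average\n            bias = torch.softmax(-e, dim=0) * e.numel()\n        else:\n            bias = torch.ones_like(e)        # final setting: bias term ignored\n    return r_basic * bias                    # basic term r' times (optional) bias term\n\\end{verbatim}\n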
Then we study the effect of offsetting the opposite terms in \\cref{eq:nllgr,eq:nulgr}. The implementation details of the non-offset algorithm follow \\cref{sec:discuss}. For comparison, we also evaluate the algorithm without any of the opposite terms concerned. The non-offset algorithm is unstable on CelebA-64 and sometimes does not even converge. \\Cref{tab:offset} shows the influence. \n\\begin{table}\n \\centering\n \\begin{tabular}{lccc}\n \\toprule\n Metric & w\/o terms & offset & not offset \\\\\n \\midrule\n FID & 20.24 & 20.75 & 21.54 \\\\ \n RMSE & 0.175 & 0.168 & 0.170 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{The influence of offsetting the opposite terms on FID and RMSE. Models are trained on CIFAR-10. ``w\/o terms'' removes all opposite terms concerned.}\n \\label{tab:offset}\n\\end{table}\nWe find that the $\\mathrm{Sigmoid}$ estimate of the bias term helps stabilize the non-offset algorithm; however, it tends to overfit early and performs slightly worse. The non-offset version achieves $0.81$ AUROC on SVHN and $0.75$ AUROC on CelebA, implying that the adversarial learning may help BiDVL model the data better. The algorithm without the opposite terms is slightly better at generation, but inferior on reconstruction and OOD detection, which implies that the reconstruction over latent variables enhances the alignment. Our formal experiments are conducted with the offset version. \n\n\\section{Conclusion}\n\\label{sec:concl}\n\nIn this paper, we introduce BiDVL, an MCMC-free framework to efficiently learn EBLVMs. For image tasks, we derive a decoupled EBLVM, consisting of a marginal energy-based distribution and a structured posterior, and choose a symmetric KL divergence in the lower level. Experiments show impressive generative power on the images concerned. In this work, simply stacking several convolutional layers largely restricts our model from scaling to large image datasets. In future work, deeper and more carefully designed networks should be adopted to evaluate BiDVL on larger-scale images. \n\n\\noindent \\textbf{Broader impacts.} The method is for training EBLVMs, and the trained models are able to generate fake content, which may be used with malicious intent. \n\n\\noindent \\textbf{Acknowledgement.} This work is partially supported by the National Natural Science Foundation of China (62141604, 61972016, 62032016, 62106012, 62076016, 62101245) and the Natural Science Foundation of Beijing Municipality (L191007).\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}