diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzllkc" "b/data_all_eng_slimpj/shuffled/split2/finalzzllkc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzllkc" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\nThe numerical integration of functions in many dimensions has been a central topic in numerical\nanalysis for a long time. Current schemes such as adaptive cubature and adaptive Monte Carlo \nperform best for smooth integrand functions. However integrands with discontinuities can arise\nquite naturally in a variety of contexts, such as the calculation of the fermionic self-energy\nin condensed-matter physics or the study of multiphase flows in the context of computational fluid dynamics. \n\nIn one dimension discontinuities are easily accommodated within an adaptive framework simply\nby dividing the region of integration into sub-regions on which the integrand is smooth. \nIn this paper we show how this process can be extended to higher dimensionalities. Our method is applicable\nto integrals which are discontinuous on any number of hyperplanes that contain the origin, and in any number of \ndimensions. We limit our attention to integrals over the entire ${\\mathbb{R}^N}$ --- this\nis not a material limitation as integrals over a proper hyperrectangle can be straightforwardly mapped onto\n${\\mathbb{R}^N}$. We assume that the discontinuities in question arise from terms of the form sign$(C_{\\vec{x}})$ where\n$C_{\\vec{x}}$ is any linear combination of the coordinates. These is precisely the form of the discontinuities \nencountered in the Green's functions of fermionic systems. \n\nWe write the integral in question as \n\\begin{equation}\nI = \\int_{\\mathbb{R}^N} \\prod_{i=1}^{M} F_i(\\vec{x})d\\vec{x}, \n\\label{eq:integral}\n\\end{equation}\nwhere $F_i(\\vec{x})$ is discontinuous on the hyperplane with equation $\\vec{a_i}\\cdot \\vec{x} = 0$. \nWe construct the $M\\times N$ \\emph{discontinuity} matrix $\\mat{C}$ such that $C_{ij} = (\\vec{a_i})_j$. \nWe will here assume $M \\geq N$ --- we will comment on this at end the next section. \n\n\\section{Method}\n\nLet $S$ be the set of all $M\\times M$ diagonal matrices with diagonal components $\\pm 1$ ($|S| = 2^M$). \nTo determine the regions on which the integrand is continuous we thus have to solve the homogeneous system of simultaneous\ninequalities \n\\begin{equation}\n\\mat{C}\\mat{S}_i\\vec{x} \\geq 0,\n\\label{eq:system}\n\\end{equation}\nfor every $\\mat{S}_i \\in S$. Each inequality defines a closed-half space; the solution to the system is \nthe intersection of these half-spaces which can be interpreted geometrically as a convex polytope in \nits half-space representation (see \\cite[p. 31]{Grunbaum67}). In the case of a homogeneous system the resultant polytope is \nin fact a polyhedral (infinite) convex cone~\\cite{skeleton}. \n\nLet $P$ be the set of cones obtained by solving Eq.~\\eqref{eq:system}.\nA set of vectors $W_K = \\left\\{\\vec{w}_1, \\vec{w}_2, \\ldots ..\\vec{w}_p \\right\\}$ is a \\emph{skeleton} of a cone $K$\nif $\\vec{x} = \\sum_{i=1}^p \\lambda_i \\vec{w_i}$ belongs to $K$ for every $\\lambda_i \\geq 0$~\\cite{skeleton}. \nThe duality in the representation of a cone as either a system of linear inequalities or a conical combination \nof the skeleton is the essence of the well known Weyl-Minkowski theorem on cones. 
As we are only interested in \nsubspaces of ${\\mathbb{R}^N}$ with dimension $N$ --- lower-dimensional subspaces correspond to polyhedral facets which do \nnot contribute to the integral --- we can assume that $p \\geq N$. \nThe skeleton of an acute cone is unique up to scalar multiplication of the vectors~\\cite{skeleton}. \nOnce the normalization of the skeleton is fixed, each point in $K$ can be specified through its $\\lambda$ coefficients $(\\lambda_1, \\ldots, \\lambda_p)$.\n\nHaving achieved our goal of partitioning ${\\mathbb{R}^N}$ into regions where the integrand is \ncontinuous we now have to consider how to perform the integration over a cone $K\\in P$. When $p=N$ the polytope \nconstitutes an $N$-simplex which can be readily mapped onto the positive orthant by exploiting the bijection between\n$\\vec{x} \\in K$ and $\\vec{\\lambda}$. When $p>N$ the situation is more complex, for the skeleton is linearly dependent\nand there is no bijection to be exploited. To overcome this problem, each cone $K$ is decomposed into $N$-simplices\n$\\gamma^{K}_1, \\gamma^{K}_2, \\ldots$ which can then be individually mapped onto the positive orthant. The\nset of all simplices $\\digamma = \\left\\{ \\gamma^K_1, \\gamma^K_2, \\ldots | K \\in P \\right\\}$ evidently partitions ${\\mathbb{R}^N}$; the original \nintegration problem has thus been broken down into multiple, separate integrations, one over each \nsimplex in $\\digamma$. The method is inherently parallel --- barring error control considerations each region \nof integration can be processed independently of the others.\n\nTo control the precision of the calculation we use an unsophisticated two-pass scheme. The first pass consists of a \ncrude integration over every simplex $\\gamma \\in \\digamma$, with a relative precision of $10\\%$, yielding a\nresult $\\mu^{(1)}_\\gamma$ with an associated error $\\sigma^{(1)}_\\gamma$. From the $\\mu^{(1)}_\\gamma$ we determine the \nsimplex which contributes the most; let $\\mu_{\\textrm{max}} = \\textrm{max} \\left\\{|\\mu_\\gamma|, \\gamma \\in \\digamma \\right\\}$. \nTo achieve a requested relative precision $f$ on the entire integral $I$ we then repeat the integration, now evaluating \neach simplex to an \\emph{absolute} precision given by $\\epsilon_{\\textrm{abs}} = f\\mu_\\textrm{max}\/\\sqrt\\nu$, where $\\nu = |\\digamma|$\ndenotes the total number of simplices, while obviously not re-evaluating the regions for which $\\sigma^{(1)}_\\gamma < \\epsilon_\\textrm{abs}$.\nThe end result $I=\\sum \\mu^{(2)}_\\gamma$ is then associated with an absolute error\n\\begin{equation}\n\\sigma_I = \\sqrt{\\sum_{\\gamma \\in \\digamma} (\\sigma^{(2)}_\\gamma)^2\/\\nu}.\n\\end{equation}\nIn practice small deviations of the resultant precision from the requested precision may occur when there are\nsignificant cancellations. This is not a particularly grave disadvantage as the actual error is always known.\n\nFinally, we return to the question of the number of constraints. We have been assuming that the number of rows $M$ \nof the constraint matrix $\\mat{C}$ is at least as large as the dimension of the integral, ignoring the case of an integrand which\nhas discontinuities on fewer than $N$ planes. This is dealt with by padding the rows of $\\mat{C}$ with arbitrary vectors \n(so long as they are not parallel to any other vectors) until $M\\geq N$. This trick has the disadvantage of causing \nunnecessary divisions of the region of integration but is necessary to guarantee the existence of cones. 
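\n\nTo make the two-pass scheme concrete, the following sketch (in Python for brevity, rather than our \\verb|C++| implementation) outlines the error management; \\verb|integrate_simplex| is a hypothetical stand-in for any adaptive routine that returns an estimate and an error for a single simplex.\n\\begin{verbatim}\nimport numpy as np\n\ndef integrate_partitioned(simplices, integrand, f_rel):\n    # pass 1: crude estimates at 10% relative precision\n    first = [integrate_simplex(integrand, s, rel_tol=0.1)\n             for s in simplices]          # (mu, sigma) pairs\n    mu_max = max(abs(mu) for mu, _ in first)\n    nu = len(simplices)\n    eps_abs = f_rel * mu_max \/ np.sqrt(nu)\n    # pass 2: refine only where the crude error is too large;\n    # each simplex can be processed in parallel\n    total, var = 0.0, 0.0\n    for s, (mu, sig) in zip(simplices, first):\n        if sig >= eps_abs:\n            mu, sig = integrate_simplex(integrand, s,\n                                        abs_tol=eps_abs)\n        total += mu\n        var += sig**2\n    return total, np.sqrt(var \/ nu)\n\\end{verbatim}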
\n\n\\section{Implementation}\n\nThe process outlined above is implemented in \\verb|C++| with support for matrices provided by GSL~\\cite{gsl}. The\ninput is the integrand and the matrix $\\mat{C}$ constructed as above specifying the discontinuities. The first step is \nthe decomposition of ${\\mathbb{R}^N}$ into the polyhedral cones $P$. To this end we use \\verb|skeleton|~\\cite{skeleton}\nwhich implements a modified version of the Motzkin-Burger algorithm. This package is called from our\ncode and returns the vectors comprising the skeleton of the polyhedral cones in $P$. \n\nTo cut the polyhedral cones in $P$ into $N$-simplices we first project the vectors in $W_K$ onto the\ncone's $(N-1)$-dimensional base. By `base' here we mean the subspace obtained by subtracting from\nall $\\vec{w} \\in W_K$ their components along the axis of the cone and then expressing them as linear\ncombinations of $(N-1)$ orthonormal vectors. We can then construct the desired decomposition of \n$K$ into $N$-simplices by triangulating the points in the $(N-1)$-dimensional base and then adding\nthe origin to these $(N-1)$-simplices. In general this triangulation is not unique. There are\nseveral algorithms to handle the triangulation of the base. We use the Quickhull algorithm\nimplemented in Qhull~\\cite{qhull}. \n\nEach point $\\vec{x}$ of a simplex can be written as a conical combination of its skeleton vectors\nwith coefficients $\\vec{\\lambda}$. To map the integration over the positive orthant onto the unit hypercube we use the substitution\n$\\lambda_i = 1\/u_i - 1$. Depending on the integrand other rules may be more suitable but this\nwas chosen for its simplicity. \n\nThe final step is the integration itself. We use \\verb|HIntLib|~\\cite{hintlib1,hintlib2}, a sophisticated\n\\verb|C++| library that among other things implements adaptive cubature with a variety of rules and a range of Monte\nCarlo methods. It would perhaps be more efficient to use an adaptive code that can directly handle the \nsimplicial geometry, such as \\verb|CUBPACK|~\\cite{cubpack1,cubpack2}, but for practical reasons this \napproach was not followed here. The integrations are performed in parallel using $\\verb|OpenMP|$ (\\verb|HIntLib|'s\nnative parallelization is not used).\n\n\n\\section{Results}\n\nWe test the method with a variety of integrands and for various dimensionalities. To do so we also have\nto prescribe the discontinuities. To streamline the discussion we express Eq.~\\eqref{eq:integral} as\n\\begin{equation}\nI = \\int_{\\mathbb{R}^N} \\prod_{i=1}^{M} F(g_i(\\vec{x}))d\\vec{x}, \n\\label{eq:integral2}\n\\end{equation}\nwhere $\\vec{g}(\\vec{x}) = \\mat{C}\\vec{x}$. A variety of test-matrices $\\mat{C}$ are considered --- they are listed in the Appendix. \nAll integrations are done using \\verb|HIntLib|'s adaptive routines and its implementation of the embedded degree-$7$ rule of Genz and Malik~\\cite{Genz80}. \nWe define the following integrand test-functions\n\\begin{align}\nF_1 (u) &= \\frac{1}{u - \\alpha + i \\beta \\textrm{sign}(u)} \\label{eq:f1} \\\\\nF_2 (u) &= \\frac{1}{u^2 - \\alpha + i \\beta \\textrm{sign}(u)} \\label{eq:f2}.\n\\end{align}\nWe note that $F_1$ is actually the non-interacting Green's function for the Anderson impurity model\nin the flat-band approximation~\\cite{Hewson97}. Our method was developed with this integrand\nin mind --- we also consider the integrand $F_{2}$ to illustrate the more general applicability of the \nmethod. As $F_{1}, F_{2}$ are complex-valued, we consider for brevity only the real part of Eq.~\\eqref{eq:integral2}. 
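\n\nAs an aside, the coordinate mappings used in the implementation above can be summarized in a short sketch (in Python for brevity, for the $p=N$ case); here \\verb|W| is the $N\\times N$ matrix whose rows are the skeleton vectors of a simplicial cone.\n\\begin{verbatim}\nimport numpy as np\n\ndef cube_to_cone(u, W):\n    # unit hypercube -> positive orthant:\n    # lambda_i = 1\/u_i - 1\n    lam = 1.0 \/ u - 1.0\n    jac = np.prod(1.0 \/ u**2)       # |d(lambda)\/d(u)|\n    # positive orthant -> simplicial cone:\n    # x = sum_i lambda_i w_i\n    x = W.T @ lam\n    jac *= abs(np.linalg.det(W))    # |d(x)\/d(lambda)|\n    return x, jac\n\\end{verbatim}\nThe integrand is then evaluated at \\verb|x| and weighted by \\verb|jac| when integrating over the unit hypercube.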
\n\nWe compare the speed-up afforded by our partitioning in terms of the number of integrand evaluations \nrequired to achieve a given relative error. In doing so, we seemingly ignore the computational effort \nrequired for the partitioning itself. In practice this turns out to be essentially insignificant, owing\nto the large computational cost of the integrations. Nevertheless the time spent in partitioning\ncan be reduced by noting that the solution of Eq.~\\eqref{eq:system} and subsequent triangulation can be\ncarried out in parallel for each $\\mat{S_i} \\in S$.\n\nUnless otherwise stated all integrations are carried out to $\\epsilon_{\\textrm{rel}} \\approx 10^{-4}$. Due to the\ntwo-pass technique for controlling the precision of the integration it may happen that the resultant error\nis (sometimes significantly) less than requested. This is to be expected when the $\\mu_\\gamma$ have mostly the\nsame sign, i.e.\\ $|I| \\gg \\mu_{\\textrm{max}}$; in such cases the target absolute error --- which is based on $\\mu_{\\textrm{max}}$ ---\nis smaller than necessary. The resultant error may be less than requested in\nanother way: To obtain an estimate over each simplex $\\gamma$, \\verb|HIntLib| requires a minimum number $N_\\textrm{min}$\nof integrand evaluations. When this yields an estimate of the integral more precise than requested, nothing\ncan be done to reduce the number of evaluations.\n\n\\begin{table}\n\\caption{Results for $F_1$, $\\alpha = -0.2,\\beta=0.1$. \\label{table:f1}}{\n\\begin{tabular}{l | c | c | c | r}\n$N$ & $M$ & $N_{p}$ & $N_{H}$ & $N_{H}\/N_{p}$ \\\\\n\\hline\n2 & 3 & $5.3 \\times 10^4$ & $2.2 \\times 10^6$ & $41.4$ \\\\\n2 & 4 & $8.6 \\times 10^4$ & $8.3 \\times 10^5$ & $9.6$ \\\\\n2 & 5 & $1.3 \\times 10^5$ & $1.2 \\times 10^6$ & $9.0$ \\\\\n2 & 6 & $1.7 \\times 10^5$ & $1.0 \\times 10^6$ & $6.3$ \\\\\n3 & 5 & $3.3 \\times 10^6$ & $>3.0\\times 10^9$ & $>906.7$ \\\\\n3 & 6 & $7.2 \\times 10^6$ & $1.6 \\times 10^9$ & $224.9$ \\\\\n3 & 7 & $9.6 \\times 10^6$ & $2.6 \\times 10^9$ & $265.5$ \\\\\n3 & 8 & $1.4 \\times 10^7$ & $2.5 \\times 10^9$ & $172.9$ \\\\\n3 & 9 & $2.0 \\times 10^7$ & $2.7 \\times 10^9$ & $132.6$ \\\\ \n4 & 7 & $5.8 \\times 10^8$ & $>3.0\\times 10^9$ & $>5.2$ \\\\\n5 & 9 & $4.3 \\times 10^{10}$ & - & -\n\\end{tabular}}\n\\end{table}\n\\begin{table}\n\\caption{Results for $F_2$, $\\alpha = -0.2, \\beta = 0.1$.\\label{table:f2}}{\n\\begin{tabular}{l | c | c | c | c | c | c| r}\n$N$ & $M$ & $N_{p}$ & $\\epsilon^{*}_\\textrm{rel}$ & $N_{H}$ & $N^{*}_{H}$ & $N_{H}\/N_{p}$ & $N^{*}_{H}\/N_{p}$ \\\\\n\\hline\n2 & 3 & $3.6\\times 10^4$ & $1.8\\times10^{-7}$ & $1.3 \\times 10^4$ & $2.5 \\times 10^5$ & $0.35$ & $7.0$ \\\\\n2 & 4 & $4.8\\times 10^4$ & $4.6\\times10^{-6}$ & $4.0 \\times 10^5$ & $8.1 \\times 10^6$ & $8.3$ & $169$ \\\\\n2 & 5 & $6.0\\times 10^4$ & $4.6\\times10^{-6}$ & $6.5 \\times 10^5$ & $1.4 \\times 10^7$ & $10.8$ & $236$ \\\\\n2 & 6 & $7.2\\times 10^4$ & $1.6\\times10^{-6}$ & $1.8 \\times 10^6$ & $1.2 \\times 10^8$ & $24.9$ & $161$ \\\\\n3 & 5 & $1.7\\times 10^5$ & $8.8\\times10^{-5}$ & $9.8 \\times 10^8$ & $1.2 \\times 10^9$ & $5820$ & $7418$ \\\\\n3 & 6 & $2.5\\times 10^5$ & $7.7\\times10^{-5}$ & $1.7 \\times 10^8$ & $2.7 \\times 10^9$ & $6750$ & $10660$ \\\\\n3 & 7 & $3.8\\times 10^5$ & $6.1\\times10^{-5}$ & $>3.0 \\times 10^9$ & $>3.0 \\times 10^9$ & $>7804$ & $>7804$ \\\\\n3 & 8 & $4.8\\times 10^5$ & $8.3\\times10^{-5}$ & $>3.0 
\\times 10^9$ & $>3.0 \\times 10^9$ & $>6244$ & $>6244$ \\\\\n3 & 9 & $5.8\\times 10^5$ & $7.0\\times10^{-5}$ & $>3.0 \\times 10^9$ & $>3.0 \\times 10^9$ & $>5172$ & $>5172$ \\\\\n4 & 7 & $2.7\\times 10^6$ & $9.8\\times10^{-5}$ & $>3.0 \\times 10^9$ & $>3.0 \\times 10^9$ & $>1111$ & $>1111$ \\\\\n5 & 9 & $4.8\\times 10^{7}$& $9.9\\times10^{-5}$ & $-$ & $-$ & $-$ & $-$\n\\end{tabular}}\n\\end{table}\n\nOur results are presented in Tables~\\ref{table:f1} and~\\ref{table:f2}. In both tables $N$ denotes the dimension of integration, $M$ the number of hyperplane discontinuities, \n$N_p$ the number of function evaluations required using the partitioning technique and $N_H$ the number of function\nevaluations required to achieve the requested precision without utilising our partitioning scheme. To make the comparison\nmeaningful we obtain $N_{H}$ using the same adaptive cubature routines in \\verb|HIntLib| with the embedded degree-7 Genz-Malik rule that were employed to carry out the simplicial integrations, \nwith each point $\\vec{x} \\in {\\mathbb{R}^N}$ being mapped to $\\vec{t} \\in [-1, 1]^N$ through $x_i = t_i\/(1-t^2_i)$. We emphasise\nthat our proposed integration method can be used in conjunction with any integration algorithm and is not tied to this\nspecific Genz-Malik rule.\n\nWe note that for many of the integrations in Table~\\ref{table:f2} \nthe resultant precision was substantially better than requested and could not be degraded further. Thus for each integrand we report the relative precision $\\epsilon^{*}_\\textrm{rel}$ \\emph{actually} reached\nby our partitioning scheme. It is not clear whether it would be fairer to judge the efficacy of our method\nby comparing $N_p$ to the function evaluations required to achieve the target relative precision of $10^{-4}$ or the `accidental'\nprecision $\\epsilon^{*}_\\textrm{rel}$. We thus report both quantities, the latter denoted by $N^{*}_{H}$. From our\nresults it is clear that even when calculating the integral to a precision much greater than required our method \ngreatly reduces the number of samples required of the integrand.\n\nFor the integrations attempted without partitioning ${\\mathbb{R}^N}$ we had to impose a maximum of $3\\times10^9$ integrand evaluations to prevent the integrator\nfrom exhausting the $8$~GB of RAM we had at our disposal. Apart from requiring fewer integrand samples, our partitioning\nmethod also drastically reduces the amount of memory required. This is because the grid for each simplex can be discarded\nonce the corresponding integration is complete, rather than having to concurrently store data for all previous grid refinements. We have\nhowever refrained from trying to quantify the improvement in the memory requirements, as this is sensitive to the\ndetails of our implementation and our choice of integration routines; moreover, the benchmarking itself is rather non-trivial given the parallel nature\nof the program. Nevertheless in Table~\\ref{table:f2} it is evident that the integration becomes unmanageable without\nour method even in only four dimensions. \n\nAs the number of simplices into which ${\\mathbb{R}^N}$ is partitioned increases very rapidly with $N$, the success of the method depends on whether the advantages\nof a smooth integrand outweigh the cost of having to set up a new adaptive grid for each simplex, and the \n`unnecessary' function evaluations due to the crudeness of our error management. 
It is evident that it does;\nin Table~\\ref{table:f1} we see that partitioning reduces the number of required integrand evaluations\nby $1-3$ orders of magnitude and in Table~\\ref{table:f2} by up to $4$ orders of magnitude. \n\n\n\\section{Conclusions}\n\nWe have described a method enabling the numerical integration of functions featuring hyperplane discontinuities\nwith existing adaptive cubature schemes. We showed how to construct a set $P$ of convex polyhedral cones\nthat partition ${\\mathbb{R}^N}$. Each cone $K \\in P$ is then partitioned into simplices $\\gamma^{K}_1,\\gamma^K_2, \\ldots$\nwhich comprise the set $\\digamma$, which partitions ${\\mathbb{R}^N}$. Each simplex $\\gamma \\in \\digamma$ can then\nbe mapped onto a hypercube and integrated using any existing multidimensional numerical integration algorithm. \nWe adopted a two-pass scheme to control the precision of our calculation. This allowed the essentially\ncomplete parallelization of the integrations. \n\nOur method can dramatically accelerate the evaluation of such multidimensional integrals. The reduction\nin the number of integrand samples required to obtain an estimate for the integral becomes more pronounced\nas the dimensionality $N$ increases, reaching several orders of magnitude compared to a naive integration.\nMemory requirements are also greatly improved, allowing the evaluation of integrands in higher dimensions \nthan would be otherwise possible. \n\nThe method can be improved by coupling it directly to an integrator aware of the underlying simplicial \ngeometry, thereby eliminating the need for a simplex-hypercube mapping. Our precision control can also potentially be replaced with a more \nadvanced scheme in which no integrand evaluations are discarded and the threads are dynamically synchronised. \nThis could improve the performance of our method but coordinating the threads will require some programming effort.\n\nThe author would like to thank Alex C Hewson, Paul Carter and Ioannis Pesmazoglou for helpful discussions. 
This work was\ngenerously supported by the Engineering and Physical Sciences Research Council.\n\n\\section*{APPENDIX}\nWe list here the test matrices $\\mat{C}_{M \\times N}$ pertinent to Eq.~\\eqref{eq:integral2}.\n\\begin{equation*}\nC_{3\\times2}=\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n1 & 1 \n\\end{pmatrix} \\,\nC_{4\\times2}=\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n2 & 1 \\\\\n1 & -1 \n\\end{pmatrix} \\,\nC_{5\\times2}=\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n2 & 1 \\\\\n1 & -1 \\\\\n-1 & 2 \n\\end{pmatrix} \\,\nC_{6\\times2}=\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n2 & 1 \\\\\n1 & 1 \\\\\n1 & -1 \\\\\n-1 & 2 \n\\end{pmatrix}\n\\end{equation*}\n\n\\begin{equation*}\nC_{5\\times3} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & 1 & -1 \\\\\n-1 & 2 & 1 \n\\end{pmatrix} \\,\nC_{6\\times3} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & -1 & 1 \\\\\n1 & 1 & -1 \\\\\n-1 & 2 & 1 \n\\end{pmatrix} \\,\nC_{7\\times3} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n2 & 1 & -1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & -1 \\\\\n-1 & \\frac{1}{2} & 2 \n\\end{pmatrix}\n\\end{equation*}\n\\begin{equation*}\nC_{8\\times3} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n2 & 1 & -1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & -1 \\\\\n\\frac{1}{2} & \\frac{1}{2} & 1 \\\\\n-1 & \\frac{1}{2} & 2 \n\\end{pmatrix} \\,\nC_{9\\times3} = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n2 & 1 & -1 \\\\\n\\frac{1}{2}& 2 & 1 \\\\\n-1 & 1 & \\frac{1}{2} \\\\\n2 & -1 & 1 \\\\\n1 & 1 & -1 \\\\\n-1 & \\frac{1}{2} & 2 \n\\end{pmatrix}\n\\end{equation*}\n\\begin{equation*}\nC_{7\\times4} = \\begin{pmatrix}\n1 & 0 & 0 & 0 \\\\ \n0 & 1 & 0 & 0 \\\\ \n0 & 0 & 1 & 0 \\\\ \n0 & 0 & 0 & 1 \\\\ \n1 & 1 & 1 & 1 \\\\ \n1 & 2 & 1 & 2 \\\\ \n1 & -2 & 2 & 1 \n\\end{pmatrix}\\,\nC_{9\\times5} = \\begin{pmatrix}\n1 & 0 & 0 & 0 & 0 \\\\ \n0 & 1 & 0 & 0 & 0 \\\\ \n0 & 0 & 1 & 0 & 0 \\\\ \n0 & 0 & 0 & 1 & 0 \\\\ \n0 & 0 & 0 & 0 & 1 \\\\ \n1 & 1 & 1 & 1 & 1 \\\\ \n\\frac{1}{2}& 1 & \\frac{1}{2} & 1 &\\frac{1}{2} \\\\ \n-1 & -1 & \\frac{1}{2} & 1 &2 \\\\ \n2 & 1 &-\\frac{1}{2} & 2 &-\\frac{1}{2}\n\\end{pmatrix}\n\\end{equation*}\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nTransients are celestial objects whose brightness varies on timescales from seconds to years, for example due to explosions of their progenitors. These objects help astronomers to investigate physics under extreme conditions, and there are currently a large number of surveys being carried out to search for extra-galactic transients by regularly monitoring the night sky. \nDetected objects include classical transients, such as supernovae (SNe) that are luminous optical transients from either a thermonuclear explosion of a white dwarf, or the core-collapse (CC) of a massive star, as well as new and exotic transients, e.g. the kilonova AT2017gfo \\citep{GW170817MMA,sss17a,dlt17ck,vista,master,decam,lcogtkn, grawita, Smartt2017}. 
To correctly classify them, spectroscopy is required; however, spectroscopic follow-up resources are scarce, and only a fraction of the currently discovered SNe can be classified in this way \\citep{Kulkarni2018}.\n\nInvestigating the multi-band light curves (LCs) of the transients can also provide information about the mechanism that drives the explosions, such as shock cooling, circumstellar matter (CSM) interaction, radioactive decay or a central engine such as a magnetar. The information is encoded in the LCs in different aspects, e.g. their timescales, shapes and absolute luminosities, both for individual objects and for curated samples of specific types.\nMuch progress has been made by linking multi-wavelength light curves to models, both numerical hydrodynamical models and simpler semi-analytical estimates. For instance, for both SNe Ia and stripped envelope CC SNe, whose emission during the photospheric phase is normally powered by the radioactive decay of synthesized \\Ni, \\citet{arnett1982} constructed a semi-analytic model that relates the bolometric LCs to physical parameters of the explosion and the progenitors, such as the ejected nickel mass, the ejecta mass and the kinetic energy. Therefore, exploring a large sample of SNe provides the opportunity to unveil their underlying physical origins.\nSuch analysis has been performed in a number of studies, e.g. \\cite{barbarino, taddia2018csp, prentice2019, lyman16}, but the different studies often use different approaches and codes, which makes direct comparisons less straightforward.\nA generic code-package to handle light curve fitting for transients of different types, from different surveys, observed with varying cadences, is therefore useful to provide comparable results.\nFor this purpose, we present {\\tt HAFFET}\\xspace, a data-driven model fitter for transients. \nOften some spectroscopic information is also available for the SNe and needed to perform parts of the analysis, and {\\tt HAFFET}\\xspace also includes procedures, for example, for measuring ejecta velocities from spectra.\n\nThis paper aims to serve as a description of {\\tt HAFFET}\\xspace and its capabilities at the time of its first release.\nThis is complemented with a web-based\nusers guide, which the reader should always consult for more up-to-date information\\footnote{\\href{https:\/\/haffet.readthedocs.io\/en\/latest\/}{https:\/\/haffet.readthedocs.io\/en\/latest\/}}. This online documentation also contains more technical information and the code itself, whereas this paper aims to describe some of the possibilities in the code-package.\nFurthermore, as a scientific proof-of-concept of {\\tt HAFFET}\\xspace, we are in a companion paper exploring a large sample of Type Ib and Ic SNe, observed by the Zwicky Transient Facility \\citep[ZTF;][]{Bellm2019, Graham2019}, and spectroscopically classified as part of the Bright Transient Survey (BTS) \\citep{fremling2020,perley2020}. That paper \n(Yang et al. in preparation) contains the analysis of the objects that are used here to demonstrate the capabilities of {\\tt HAFFET}\\xspace, and is \nhereafter referred to as Paper I. \nWe have also estimated the luminosities for ZTF SNe Ic-BL using HAFFET in \\cite{Corsi2022} and in \nAnand et al. (in preparation). \n\nThis paper is organized as follows. \nIn Section~\\ref{sec:haffet} we present the code design of {\\tt HAFFET}\\xspace. 
In Section~\\ref{sec:data}, we describe how {\\tt HAFFET}\\xspace handles meta and observational data, and in Section~\\ref{sec:fitting} we discuss in detail the different available models designed for different types of observational data. In Section~\\ref{sec:application} we further explore the sample of ZTF SNe Ib and Ic with {\\tt HAFFET}\\xspace, complementary to Paper I. Finally, Section~\\ref{sec:conclusion} presents our conclusions and provides a short discussion where we put our results in context.\n\n\\section{HAFFET}\n\\label{sec:haffet}\n\nIn this section, we introduce the code structure as well as the key components of {\\tt HAFFET}\\xspace.\n\n\\subsection{Code Design}\n\\label{sec:cd}\n\nIn Figure~\\ref{fig:flowchart}, we illustrate the main steps as they were generally performed using {\\tt HAFFET}\\xspace for stripped envelope SNe in Paper I: \n\n\\begin{enumerate}\n \\item {\\tt HAFFET}\\xspace starts a Python class named {\\tt snelist} with a meta table\\footnote{In {\\tt HAFFET}\\xspace, we use pandas dataframes for table construction: \\url{https:\/\/pandas.pydata.org\/pandas-docs\/stable\/reference\/api\/pandas.DataFrame.html}} that includes a list of objects. \n \n \\item For each object in the meta table, the meta information, e.g. coordinates, redshifts, etc, is standardized, serving as the basis for a new class named {\\tt snobject}.\n \n \\item For {\\tt snobject}, a number of built-in functions are available to parse both photometric and spectroscopic data from local or remote sources.\n \n \\item To estimate the photometric features, e.g. the peak phase and magnitude, a continuous representation is provided as an optional choice, employing either Gaussian process (GP) regression or analytical SN LC model fits.\n \n \\item Once the epochs from various bands have been matched (via binning or interpolations), the spectral energy distribution (SED) and the (pseudo)-bolometric LCs can be estimated. The host galaxy extinction can also be estimated via color comparison to templates.\n \n \\item Depending on the type of transient, the bolometric LCs can be fitted with a variety of analytical models: we have included the classical Arnett model, and also provide routines for users to import external models from, e.g. MOSFIT \\citep{mosfit} and redback\\footnote{\\href{https:\/\/redback.readthedocs.io\/en\/latest\/}{https:\/\/redback.readthedocs.io\/en\/latest\/}}.\n \n \\item Each spectrum is smoothed, normalized, and continuum-subtracted in order to isolate the emission and absorption features. \n For a selected line, the absorption components are automatically identified and are\n then fitted with various models for measuring spectral properties, in particular the minimum of the absorption features to estimate the ejecta velocities.\n \n \\item All of the photometric and spectral properties from {\\tt snobject} may be called by {\\tt snelist} and collected in parameter space for sample exploration.\n \n\\end{enumerate}\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.6\\textwidth]{sdapy.pdf}\n \\caption{Flowchart of the main steps in the {\\tt HAFFET}\\xspace process, performed on the stripped envelope SNe in Paper I. As shown, after getting observational data for a single SN, {\\tt snobject} is called to fit the multi-band LCs, which can be used to estimate photometric properties, as well as to provide interpolated magnitudes for estimating the color and bolometric LCs. A further step is to fit the bolometric LC with, e.g. the Arnett model; spectral lines can also be fitted for photospheric velocities, which are often used to help break the degeneracy of the Arnett model. Finally, all the fitted parameters from the model fits are collected by {\\tt snelist} for a sample study.}\n \\label{fig:flowchart}\n\\end{figure*}
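\n\nAs a schematic illustration of this workflow, a driver script might look as follows; note that the method names below are hypothetical placeholders rather than the actual {\\tt HAFFET}\\xspace API, for which the online documentation should be consulted.\n\\begin{verbatim}\nfrom sdapy import snelist   # hypothetical import path\n\n# step 1: build the sample from a meta table\nsample = snelist(metafile='meta.csv')\nfor objid, sn in sample.items():  # steps 2-7, per object\n    sn.get_data()                 # photometry and spectra\n    sn.fit_lightcurves()          # GP or analytic LC fits\n    sn.est_hostebv()              # colour-based host E(B-V)\n    sn.make_bolometric()          # match epochs, build SED\n    sn.fit_model('arnett')        # bolometric LC fit\n    sn.fit_specline('FeII5169')   # ejecta velocities\n# step 8: collect fitted parameters for the sample\nsample.collect_parameters()\n\\end{verbatim}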
\n\nThe functions and their usages of {\\tt snobject} and {\\tt snelist} can be found in the online documentation\\footnote{For example, \\url{https:\/\/haffet.readthedocs.io\/en\/latest\/snobject.html} for {\\tt snobject}.}. In this paper, we give a theoretical overview of {\\tt HAFFET}\\xspace, and users should consult the corresponding functions for each section on the documentation page for more information and application examples. \n\n\\subsection{Data Cache and fitting options}\n\\label{sec:cache}\n\n{\\tt HAFFET}\\xspace handles data and fits for various SNe on a case-by-case basis, and below we describe in detail how it operates.\nFirst, a global variable called {\\tt ZTFDATA}\\footnote{{\\tt HAFFET}\\xspace accesses ZTF data via ztfquery \\citep{ztfquery}, which follows the IRSA data structure for data storage. For consistency, {\\tt HAFFET}\\xspace adopts the same global variable and data structure not only for ZTF data but also for other resources such as the ATLAS forced photometry.} has to be defined (often in the shell profiles; if not, the current folder is used). This is the directory where {\\tt HAFFET}\\xspace manages cached data, e.g. the photometric data downloaded from online resources.\nIn order to increase code efficiency, it is important to properly cache the samplings, as model fitting may take a long time, especially when using Monte Carlo methods. \nWe store and load the cached fitting samplings in {\\tt hdf5} format with the joblib\\footnote{\\href{https:\/\/joblib.readthedocs.io\/en\/latest\/index.html}{https:\/\/joblib.readthedocs.io\/en\/latest\/index.html}} package, and all these cached files are stored following a dedicated data structure.\n\nThen, according to the intrinsic properties and data qualities, different SN data may be fitted to models with different options, e.g. whether there is a tail in the SN LCs, what range of photospheric phases to use for the Arnett model fitting, or whether the automatic identification of absorption features is correct.\n{\\tt HAFFET}\\xspace provides default settings in a file for general usage, as well as a possibility for the users to specify parameters for individual SNe\\footnote{The parameter files can be found in the defined {\\tt ZTFDATA} directory, with {\\tt default\\_par.txt} for the general options, and {\\tt individual\\_par.txt} for specific SNe.}.\nAn alternative option to adjust fitting options for particular SNe is to use {\\tt HAFFET}\\xspace with a Graphical User Interface (GUI), see Figure~\\ref{fig:gui}. All the data, fitting parameters, and fitting samplings can be stored and loaded, which is universal to the different modes.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.8\\textwidth]{gui1.png}\n \\includegraphics[width=0.45\\textwidth]{gui2.png}\n \\includegraphics[width=0.45\\textwidth]{gui3.png}\n \\caption{A view of the {\\tt HAFFET}\\xspace GUI showing the model fittings on multi-band LCs, spectral lines and the constructed bolometric LC. In the upper panel we show the control window of the GUI.}\n \\label{fig:gui}\n\\end{figure*}\n\n\\section{Data Input}\n\\label{sec:data}\n\nTwo different kinds of data, i.e. 
meta data and observational data, may be employed in the fitting procedure.\n\n\\subsection{Meta data}\n\\label{sec:meta}\n\nUsers may want to supply additional information for the SNe in addition to the observational data, for example spatial coordinates for forced photometry services or redshifts to correct the observed data to the rest frame.\nIn {\\tt HAFFET}\\xspace, both the Bright Transient Survey explorer\\footnote{\\url{https:\/\/sites.astro.caltech.edu\/ztf\/bts\/explorer.php}} (BTS) catalog and the Open Astronomy Catalog\\footnote{\\url{https:\/\/github.com\/astrocatalogs\/OACAPI}} (OAC) tables are available, providing meta data for the SNe. Users can also create their own tables specifically for their needs, with proper column names following this specific scheme\\footnote{More keywords of the scheme can be found at the {\\tt snelist} part of \\href{https:\/\/github.com\/saberyoung\/HAFFET\/blob\/master\/sdapy\/data\/default_par.txt}{https:\/\/github.com\/saberyoung\/HAFFET\/blob\/master\/sdapy\/data\/default\\_par.txt}.}:\n\n\\begin{verbatim}\n... ...\nidkey : objid # str # meta key as index\nidkey1 : alias # str # meta key as alias\nrakey : ra # str # meta key for ra\ndeckey : dec # str # meta key for dec\nzkey : z # str # meta key for redshift\ndistkey: dist # str # meta key for distance\n... ...\n\\end{verbatim}\n\nBelow is an illustration showing how only columns with certain names can be appropriately identified and propagated \nto the {\\tt HAFFET}\\xspace meta table.\nHere, the object ID and coordinates would be read correctly, but the redshifts would be missed because of the erroneous column name (redshift instead of $z$).\n\n\\begin{verbatim}\nobjid,ra,dec,redshift\nZTF18aajpjdi,13:18:06.62,+27:52:24.2,0.0746\nZTF18aahtkze,11:14:19.24,+34:28:43.7,0.0693\nZTF18aapictz,11:59:37.00,+27:31:50.0,0.084\n... ...\n\\end{verbatim}\n\nThe object ID is the only item \nrequired as the identifier of an object. \nIn most cases, an object will have multiple names, e.g. the survey name and the IAU name. \nIn order to compile all of these names that may be used to search for a certain object, an alias ID column is established as an option. \n\nCoordinates are essential mainly when using forced photometry services, and for getting Milky Way extinctions. There are built-in functions that can handle coordinates of different formats via astropy\\footnote{\\url{https:\/\/docs.astropy.org\/en\/stable\/index.html}}, as well as the healpix\\footnote{\\url{https:\/\/healpy.readthedocs.io\/en\/latest\/index.html}} indices for mollweide plots. \n\nThe redshift for each object should be well determined; otherwise all the \nanalysis will be done in the observer frame.\nUnless distances are specified, redshifts are converted to distances using a flat cosmology with H$_0=70$~km~s$^{-1}$~Mpc$^{-1}$ and $\\Omega_{\\rm{m}} = 0.3$. This is done using the redshifts in the CMB frame, where the dipole parameters were obtained from \\cite{Fixsen}. We also include corrections for peculiar velocities using the formalism of \\cite{2015MNRAS.450..317C}.\nFor each individual SN we include an uncertainty on the peak magnitude corresponding to a 150~km~s$^{-1}$ uncertainty included in the peculiar velocity correction and a systematic $\\pm3$~km~s$^{-1}$~Mpc$^{-1}$ error on the adopted Hubble constant. For the most nearby objects, with the largest peculiar motion corrections, the relative errors on the absolute magnitudes are the largest.\nComparing the estimates for peculiar velocity corrections for our SNe (Paper I) within $z < 0.015$ with values estimated using \\cite{2015ApJ...807...35M} and the corrections provided by NED for the Virgo, Great attractor and Shapley cluster \\citep{Mould2000ApJ...529..786M} we find an rms of 0.19 mag, so for these low-z SNe we add another 0.15 mag to the peak luminosity uncertainty in our error budget.
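\n\nFor reference, the default distance computation described above (without the CMB-frame and peculiar-velocity corrections) corresponds to the following astropy-based snippet.\n\\begin{verbatim}\nfrom astropy.cosmology import FlatLambdaCDM\n\ncosmo = FlatLambdaCDM(H0=70, Om0=0.3)\nz = 0.0746                          # example redshift\nd_l = cosmo.luminosity_distance(z)  # luminosity distance, Mpc\nmu = cosmo.distmod(z)               # distance modulus, mag\n\\end{verbatim}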
\n\nThe transient type matters for some of the analysis, for example which spectral line should be used to determine the photospheric velocity, or which intrinsic colour template should be compared with to estimate the host galaxy reddening. \n{\\tt HAFFET}\\xspace is able to supply this from the BTS or OAC catalogs.\nThere is also a straightforward photometric categorization procedure within {\\tt HAFFET}\\xspace, adapted from \nastrorapid\\footnote{\\url{https:\/\/astrorapid.readthedocs.io\/en\/latest\/index.html}}. Moreover, since {\\tt HAFFET}\\xspace provides spectral line identification, this aids in a more secure classification of the transient types.\n\nWithin {\\tt HAFFET}\\xspace we correct all photometry for Galactic extinction, using the Milky Way (MW) color excess $E(B-V)_{\\mathrm{MW}}$ toward the position of the SNe.\nThese are all obtained from \\cite{2011ApJ...737..103S}\\footnote{By default, {\\tt HAFFET}\\xspace uses the Milky Way extinction in the meta table. If it is not available, and the coordinates are properly parsed, {\\tt HAFFET}\\xspace instead obtains extinctions from the \\cite{2011ApJ...737..103S} map via dustmaps \\citep{dustmaps}, i.e. \\url{https:\/\/dustmaps.readthedocs.io\/en\/latest\/}}. All reddening corrections are applied using the \\cite{1989ApJ...345..245C} extinction law with $R_V=3.1$. \n\nHost galaxy extinction is more uncertain. One method to estimate host-galaxy extinction is to make use of the fact that SNe often have similar intrinsic colors after peak \\citep[e.g.,][]{Drout2011}. This was developed by \\cite{stritzingercolors} and \\cite{taddia2018csp} for stripped envelope SNe and Type II SNe. \nFor super-luminous SNe (SLSNe), \\citet[][their table A4]{zhihao} investigated the $g-r$ colours at peak for a large sample of ZTF objects, which are used by {\\tt HAFFET}\\xspace as templates for SLSNe once the object has both $g$- and $r$-band data.\nTo estimate the values for the host-galaxy extinction, we use these intrinsic SN color-curve templates and attribute the difference between the observed and the intrinsic color to the host galaxy reddening.\n\n\\subsection{Observational data}\n\\label{sec:obsdata}\n\nOnce a list of objects is constructed using an appropriate meta table, each object's observational data must be assigned. \nBesides images (see Fig. \\ref{fig:finder}),\n{\\tt HAFFET}\\xspace can be utilized to work with photometric and spectroscopic data, and in this section we describe how to input these data to {\\tt HAFFET}\\xspace.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.6\\textwidth]{finder.png}\n \\caption{A finder image of the field of SN 2020bcq, automatically made by {\\tt HAFFET}\\xspace. The images are queried with the PS1 Image Cutout Service\\tablenotemark{a}, while the SDSS field stars (the green open circles) are downloaded via astroquery\\tablenotemark{b}. 
The position of SN 2020bcq is marked as the red source in the middle of the inset.}\n \\label{fig:finder}\n\\tablenotetext{a}{\\url{https:\/\/outerspace.stsci.edu\/display\/PANSTARRS\/PS1+Image+Cutout+Service}}\n\\tablenotetext{b}{\\url{https:\/\/astroquery.readthedocs.io\/en\/latest\/}}\n\\end{figure*}\n\n\n\\subsubsection{Using private data and arbitrary input formats}\n\\label{sec:private}\n\nDifferent datasets may have significantly different data formats, depending on how the data were reduced and presented. \nFor example, certain codes favor the \\texttt{json} format, while others make use of the \\texttt{csv} format.\nMany sources may also use various keywords to describe the data, e.g. different notations of magnitudes or fluxes with distinct units.\n\nBy requiring a set of predetermined keywords, {\\tt HAFFET}\\xspace uses a pandas dataframe to handle plain text formats and streamline the procedure.\nIn the example shown below, the photometric data file should use the same keywords in order for them to be properly read\\footnote{More keywords of the scheme can be found at the {\\tt snobject} part of \\href{https:\/\/github.com\/saberyoung\/HAFFET\/blob\/master\/sdapy\/data\/default_par.txt}{https:\/\/github.com\/saberyoung\/HAFFET\/blob\/master\/sdapy\/data\/default\\_par.txt}.}:\n\n\\begin{verbatim}\n... ...\njdkey : jdobs # str # julian date\nmagkey : mag # str # AB magnitude\nemagkey : emag # str # magnitude error\nlimmagkey: limmag# str # limiting magnitude\nfluxkey : flux # str # flux (micro Jy)\nefluxkey : eflux # str # flux error\nfilterkey: filter# str # filters\n... ...\n\\end{verbatim}\n\nIt is important to note that \\texttt{jdkey} and \\texttt{filterkey} are strictly required, and that either mag\/emag or flux\/eflux should be provided in addition.\nBuilt-in functions can then be utilized to convert between magnitudes and fluxes. \n{\\tt HAFFET}\\xspace uses a default zeropoint of 23.9, which is appropriate for converting fluxes measured in micro-Jansky.\n\nIt frequently happens that LCs from different sources do not match up. This can be due to a variety of factors, such as the methodology used to estimate the photometry, the way standard stars were chosen, or not correcting for the different filter functions. Therefore, {\\tt HAFFET}\\xspace adds a \\texttt{source} column whenever a LC is ingested, in order to record the data origin.\n\nSimilar plain text files with either 2 or 3 columns are required for the input of spectra, as well as an observation time that might be used later to estimate the line velocity evolution. \nThe columns give the wavelengths, fluxes, and (optionally) flux errors. When making spectral fits, {\\tt HAFFET}\\xspace has built-in methods to estimate flux errors from the spectral quality if errors are not provided.\nThe flux units of the spectra can be arbitrary, and {\\tt HAFFET}\\xspace is able to flux-calibrate them with interpolated photometric magnitudes.\n\n\\subsubsection{Using data from public online sources}\n\\label{sec:gm}\n\nSince open web servers offer data in a consistent data format, {\\tt HAFFET}\\xspace contains built-in functions for obtaining data from them, i.e. the ZTF and ATLAS Forced Photometry Services, the Open Astronomy Catalog API, and the Growth marshal\/fritz broker for the internal ZTF groups\\footnote{Users can easily add sources from their own API in the {\\tt HAFFET}\\xspace package following a consistent data format, and feel free to contact us for adding them into the GUI.}. A valid set of authorizations is required by {\\tt HAFFET}\\xspace to access these services.
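\n\nAs a concrete illustration of the magnitude--flux convention of Section~\\ref{sec:private} (a zeropoint of 23.9 for fluxes in micro-Jansky), the conversions amount to the following sketch.\n\\begin{verbatim}\nimport numpy as np\n\nZP = 23.9   # AB zeropoint for fluxes in micro-Jansky\n\ndef flux_to_mag(flux, eflux):\n    mag = ZP - 2.5 * np.log10(flux)\n    emag = 2.5 \/ np.log(10) * eflux \/ flux\n    return mag, emag\n\ndef mag_to_flux(mag, emag):\n    flux = 10 ** (-0.4 * (mag - ZP))\n    eflux = np.log(10) \/ 2.5 * emag * flux\n    return flux, eflux\n\\end{verbatim}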
\n\nAs shown in Fig. \\ref{fig:ztfbase}, for the ZTF Forced Photometry Service the user can use {\\tt HAFFET}\\xspace to check if the object is covered by multiple fields, and if so, correct the baselines of the different fields in the same filter. More details can be found in \\citet[][see their Fig. 4]{yuhan2019}.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.6\\textwidth]{baseline.png}\n \\caption{Baseline correction for ZTF forced photometry LCs exemplified by SN 2020bcq. This target was observed in three filters ($g$, $r$ and $i$ band), and two fields (field 674 and 171). Observations associated with different fcqf IDs (using Eq. 1 of \\citealp{yuhan2019}) and counts are shown in distinctive colors. The two red vertical dashed lines are set based on the user option for the selection of the baseline range. The observations within the range are used to set the baseline flux levels, which are aligned by {\\tt HAFFET}\\xspace so that observations from different fields for the same filter are mutually consistent.}\n \\label{fig:ztfbase}\n\\end{figure*}\n\n\\section{Model Fittings}\n\\label{sec:fitting}\n\n{\\tt HAFFET}\\xspace is built to fit functions to a variety of datasets, including spectral absorption features and many kinds of LCs.\nDespite the data being very different, these can be fitted using the same techniques.\nIn {\\tt HAFFET}\\xspace we therefore specify a number of ``models'' that define the functional forms to be fitted to the data, as well as so-called ``engines'' that are used to prepare the dataset for certain fits.\n\nThere are currently nine engines available in {\\tt HAFFET}\\xspace, and these are shown in \nTable~\\ref{tab:engines}, along with a variety of built-in models for each of them, see Table~\\ref{tab:models}.\nIn this section, we describe all built-in engines and models, show how to tune their hyperparameters, and explain how the user can define their own engines and models.\n\n\\begin{table*}\n\\caption{Table of built-in engines defined in {\\tt HAFFET}\\xspace.}\n\\begin{center}\n\\begin{tabular}{cc} \\hline\\hline\nEngine name & Description \\\\ \n\\hline\nmultiband\\_early & multi-band LCs at early phases \\\\ \nmultiband\\_main & multi-band LCs at around the peak epoch \\\\ \nbol\\_early & bolometric LCs at early phases \\\\ \nbol\\_main & bolometric LCs at around the peak epoch \\\\ \nbol\\_tail & bolometric LCs at the tail phases \\\\ \nbol\\_full & bolometric LCs for the full range \\\\ \nsed & spectral energy distribution data, \\\\ \n& i.e. multi-band photometry or spectra \\\\ \nspecline & spectral line absorption feature \\\\ \nspecv\\_evolution & spectral\/photospheric line velocities \\\\\n\\hline\n\\end{tabular}\n\\label{tab:engines}\n\\end{center}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Table of built-in models defined in {\\tt HAFFET}\\xspace. Here we list only the basic models, and in {\\tt HAFFET}\\xspace there are also models with multiple components, e.g. the Arnett model at the main peak plus the tail model, where the boundary time between them is set as a free parameter as well. 
For such a case, we use a bol\\_full engine to prepare the bolometric LCs for model fitting.}\n\\begin{center}\n\\begin{tabular}{cccc} \\hline\\hline\nModel Name & Engine & Description & Reference(s) \\\\ \n\\hline\nbazin & multiband\\_main & parametric model suited for SNe Ia LCs & \\cite{bazin} \\\\ \nvillar & multiband\\_main & parametric model for a broad range of LC & \\cite{villar} \\\\ \n & & morphologies, e.g. with bumps or plateau & \\\\ \nrisepl & multiband\\_early & power law model for the early rise of SNe & \\cite{Miller_2020} \\\\ \nsalt2 & multiband\\_main & empirical model for SNe Ia LCs & \\cite{salt2} \\\\\narnett & bol\\_main & radioactive model for bolometric LCs & \\cite{arnett1982} \\\\ \n & & of SNe Ia or CC at around the peak epochs & \\\\ \ntail & bol\\_tail & radioactive model for bolometric LCs & \\cite{Wygoda2019} \\\\ \n & & of SNe Ia or CC at the tail phases & \\\\ \nsbo & bol\\_early & shock cooling tail model for SNe IIb\/Ib & \\cite{piro2021} \\\\ \nmagnetar & bol\\_main & magnetar model for SLSNe & \\cite{Nicholl_2017} \\\\ \ngauss & specline & Gaussian model for spectral line profile & \\cite{gauss} \\\\ \nvoigt & specline & Voigt model for spectral line profile & \\cite{voigt} \\\\ \nexponential & specv\\_evolution & exponential model for velocity evolution & \\cite{Fremling2018} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:models}\n\\end{center}\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.6\\textwidth]{model_fitter.pdf}\n \\caption{Flowchart illustrating the model fitting procedures of {\\tt HAFFET}\\xspace. As shown, {\\tt HAFFET}\\xspace first obtains the model function and priors, as well as the data to be fitted, as prepared by the engines. Then a minimization is performed to constrain the model parameters. The output can be used as a prior for the MCMC routines. Finally, the fitted samplings can be called by {\\tt HAFFET}\\xspace to, e.g. produce parameter contours and reproduce the observational data.}\n \\label{fig:modelfit}\n\\end{figure*}\n\n\\subsection{Defining models}\n\\label{sec:model}\n\nA \\texttt{python} file that contains the analytical functions and a \\texttt{json} file that specifies the engine and parameters are used to define each model in {\\tt HAFFET}\\xspace.\nBelow is an example using the \\texttt{json} file of the power-law fits (which are further discussed in section~\\ref{sec:multiearly}):\n\n\\begin{verbatim}\nmodelmeta = {\n ...\n 'powerlaw_multiple':\n {\n 'engine': 'multiband_early',\n 'alias':\n [\n 'pl',\n 'powerlaw',\n 'Powerlaw',\n ],\n 'description': 'Power law fits',\n 'func': 'functions.powerlaw_full', \n 'parname' :\n [\n r'$t_\\mathrm{fl}$', \n r\"$A$\", \n r\"$\\alpha$\", \n r\"$C$\",\n ],\n 'par' :\n [\n 'texp', \n 'a', \n 'alpha', \n 'c',\n ],\n 'bestv':\n {\n 'texp' : -18,\n 'a' : 100,\n 'alpha' : 2, \n 'c' : 60,\n },\n 'bounds':\n {\n 'texp' : [-100, 0],\n 'a' : [0, 10000],\n 'alpha' : [0, 10],\n 'c' : [0, 10000],\n },\n 'same_parameter': ['texp'],\n 'fit_error' : True,\n },\n ...\n}\n\\end{verbatim}\n\nAs shown, the \\texttt{powerlaw\\_multiple} model would be called to fit data prepared by the \\texttt{multiband\\_early} engine with the \\texttt{powerlaw\\_full} function, which has four free parameters. As discussed in Section~\\ref{sec:optimize}, there are two different likelihoods available in {\\tt HAFFET}\\xspace. By default, {\\tt HAFFET}\\xspace uses a Gaussian likelihood, while if \\texttt{fit\\_error} is set to True, one more term is included in the likelihood to account for additional model uncertainty. \nThe \\texttt{same\\_parameter} item decides whether to fit multiple band LCs simultaneously or independently. If \\texttt{same\\_parameter} is left blank, {\\tt HAFFET}\\xspace will fit the LCs in different bands one at a time. If a parameter is specified, such as \\texttt{texp} for example, {\\tt HAFFET}\\xspace fits all bands at once with a shared \\texttt{texp}, which in this case is the epoch of first light.
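\n\nTo make the role of \\texttt{fit\\_error} explicit, one standard construction --- whether it matches the exact internal definition is an assumption here --- inflates the data errors with a fractional model term in a Gaussian log-likelihood:\n\\begin{verbatim}\nimport numpy as np\n\ndef ln_likelihood(theta, t, y, yerr, model, fit_error=True):\n    # last entry of theta: log of a fractional model error\n    pars = theta[:-1] if fit_error else theta\n    m = model(t, *pars)\n    s2 = yerr**2\n    if fit_error:\n        s2 = s2 + m**2 * np.exp(2 * theta[-1])\n    return -0.5 * np.sum((y - m)**2 \/ s2\n                         + np.log(2 * np.pi * s2))\n\\end{verbatim}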
\n\n\\subsection{Built-in models}\n\\label{sec:builtin}\n\nIn the sections that follow we outline the fundamental concepts of the built-in models while highlighting the possibility of experimenting with various fitting choices, such as different free parameters, likelihoods, or fitting patterns. \n\n\\subsubsection{multiband\\_early models}\n\\label{sec:multiearly}\n\nPower-law fits operate on multi-band photometric data prepared by the \\texttt{multiband\\_early} engine. \nThese fits can for example be used to estimate explosion epochs and rise times, but the precision with which we can constrain the epoch of first light is also of importance for many other investigations (for example for coincidence studies with neutrino signals or gravitational wave detections).\n\\cite{Miller_2020} developed a Bayesian framework to model the early rise of LCs as a power law in time in their Eq. 1:\n\\begin{equation}\n f_b(t) = C + H[t_\\mathrm{fl}] A_b (t - t_\\mathrm{fl})^{\\alpha_b},\n \\label{eqn:flux_model}\n\\end{equation}\nwhere C is the baseline flux present\nin the reference image prior to SN discovery, $t_\\mathrm{fl}$ is the time of first light, $H[t_\\mathrm{fl}]$ is the Heaviside function equal to 0 for t $< t_\\mathrm{fl}$ and 1 for t $\\geq t_\\mathrm{fl}$, $A_\\mathrm{b}$ is a constant of proportionality in filter b, and $\\alpha_\\mathrm{b}$ is the power-law index in filter b. \n\n\\cite{Miller_2020} applied this framework to a subset of normal SNe Ia from ZTF to simultaneously model the evolution in both the ZTF $g$ and $r$ filters.\nThis assumes that the epoch of first light is \nthe same in the two ZTF bands, which is a fair assumption given the ZTF cadence and the similarity of the SN ejecta opacity at these wavelengths. \nIn {\\tt HAFFET}\\xspace we follow the same methodology to characterize the early emission in the \\texttt{powerlaw\\_multiple} model (see section \\ref{sec:model}), but also provide the possibility to fit the different band LCs independently by removing \\texttt{same\\_parameter} in the \\texttt{powerlaw\\_single} model. \n\nAs in \\cite{Olling2015}, \\cite{Miller_2020} only include observations up to 40\\% of the peak\namplitude of the SN LC, and note that this particular choice, instead of 30\\% or 50\\%, does slightly affect the final inference for the model parameters.\nFor comparison we adopt the same settings as default in the \\texttt{multiband\\_early} engine, where the peak amplitude in each band should be previously estimated via the Gaussian process or other \\texttt{multiband\\_main} models. 
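\n\nA minimal sketch of Eq.~\\eqref{eqn:flux_model} for a single band is given below; the joint $g$+$r$ fit simply shares \\texttt{texp} between the bands.\n\\begin{verbatim}\nimport numpy as np\n\ndef powerlaw_flux(t, texp, a, alpha, c):\n    # C + H[t_fl] * A * (t - t_fl)**alpha\n    f = np.full_like(t, c, dtype=float)\n    rising = t >= texp\n    f[rising] += a * (t[rising] - texp) ** alpha\n    return f\n\\end{verbatim}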
\n\nThe explosion epoch can also be fitted to bolometric models as a free parameter, or estimated as the middle of the last non-detection and the first detection epochs.\nIn {\\tt HAFFET}\\xspace, we provide users the option to choose their method of estimation.\n\n\\subsubsection{multiband\\_main models and Gaussian process}\n\\label{sec:multimain}\n\nThere are various spline techniques available to interpolate the multiple band photometric data around the main peak, i.e. Gaussian Process (GP) regression and the analytic model fittings. \n\nIn {\\tt HAFFET}\\xspace, we provide a non-parametric data-driven interpolation technique (i.e. GP) to serialize the LCs. In order to include color effects, we perform time-series forecasting on the joint multi-band (e.g. $g$ and $r$) fluxes convolved with the wavelengths. \nWe use a flat mean function\\footnote{Other mean functions are also available in {\\tt HAFFET}\\xspace, such as the Bazin or Villar models.} for the flux form, and establish two kernels, i.e. a constant kernel for the wavelengths, and a stationary Matern32 kernel for the fluxes, with the {\\tt GEORGE}\\footnote{\\href{https:\/\/george.readthedocs.io}{https:\/\/george.readthedocs.io}} package.\n\nThe GP interpolation becomes less informative with larger uncertainties and when the data are sparser. For estimating physical parameters for normal SNe, an alternative approach is then to fit the data using analytic functions, such as those used by \\cite{bazin, Karpenka, kessler1, kessler2, villar}.\nFor this purpose we included the \\cite{bazin} and \\cite{villar} models in {\\tt HAFFET}\\xspace. The 'Bazin' model is appropriate for SN LCs without fluctuations, whereas the 'Villar' model is also useful to fit plateaus and bumps.\nFor the Bazin model, the flux form is determined by 5 parameters:\n\n\\begin{equation}\n\\label{eqn:bazin}\nF(t) = \\mathrm{A} \\frac{e^{-(t-t_0)\/\\tau_\\mathrm{fall}}}{1+e^{-(t-t_0)\/\\tau_\\mathrm{rise}}} + \\mathrm{B}\n\\end{equation}\nwhere \\citet[][their fig.~8]{taddia2015sdss} illustrate the different parameters and how they contribute to the light curve properties.\nWe can obtain the date of maximum from the derivative of this function as $t_\\mathrm{max} = t_0 + \\tau_\\mathrm{rise} \\mathrm{log} \\frac{-\\tau_\\mathrm{rise}}{\\tau_\\mathrm{rise}+\\tau_\\mathrm{fall}}$ \\citep{bazin}.\n\nThe functional form of the Villar model is similar to the Bazin model, but incorporates a plateau component by adding two additional free parameters:\n\n\\begin{equation}\n\\label{eqn:villar}\nF(t) = \\begin{cases} \n \\frac{A+\\beta(t-t_0)}{1+e^{-(t-t_0)\/\\tau_\\mathrm{rise}}} & t 0$ and we set $\\bigtriangleup = 1$ in this paper.\n\n\\subsection{Problem Formulation}\nTo ensure that each requested FoV is rendered and transmitted within the VR interaction latency, we aim to optimize the total QoE under fixed VR interaction latency constraint via determining the optimal association between MEC and VR user group, and optimal rendering MEC for model migration. \n\nThe proposed MEC rendering schemes aim at maximizing the long-term total QoE under VR interaction latency constraint in the continuous time slots with respect to the policy $\\pi$ that maps the current state information $S_t$ to the probabilities of selecting possible actions in $A_t$. 
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"0$ and we set $\\bigtriangleup = 1$ in this paper.\n\n\\subsection{Problem Formulation}\nTo ensure that each requested FoV is rendered and transmitted within the VR interaction latency, we aim to optimize the total QoE under a fixed VR interaction latency constraint via determining the optimal association between MECs and VR user groups, and the optimal rendering MEC for model migration. \n\nThe proposed MEC rendering schemes aim at maximizing the long-term total QoE under the VR interaction latency constraint in continuous time slots with respect to the policy $\\pi$ that maps the current state information $S_t$ to the probabilities of selecting possible actions in $A_t$. Therefore, based on the QoE of each VR user, an optimization problem (P1) is formulated as\n\\begin{align}\n (\\rm{P1})~\\max_{\\pi(A_t|S_t)}&\\sum_{t=0}^{\\infty}\\sum_{k = 1}^{K_{\\text{VR}}}\\gamma^{t}\\mathbb{E}_{\\pi}[\\rm{PSNR}_{k}]\\\\\n \\text{s.t.}~&T_{k} \\leq T_k^{\\text{th}},~\\forall k,\n\\end{align}\nwhere $\\gamma\\in[0, 1)$ is the discount factor, which determines the weight of the future QoE; $\\gamma = 0$ means that the agent only considers the immediate reward. The state $S_t$ contains the index of the requested FoV, the location of each VR user, and the computation capability of each MEC. The action $A_{t}$ includes the association between MECs and VR user groups, and the rendering MEC for model migration.\n\nSince the dynamics of the wireless VR system are Markovian in continuous time slots, this is a Partially Observable Markov Decision Process (POMDP) problem, which is generally intractable. Here, the partial observability refers to the fact that the MECs can only know the previous FoV requests and the location of each VR user in the environment, while they are unable to know all the information of the communication environment, including, but not limited to, the channel conditions and the FoV requests in the current time slot. Furthermore, traditional optimization methods may need global information to achieve the optimal solution, which not only increases the overhead of signal transmission, but also increases the computation complexity. Approximate solutions will be discussed in Section III.\n\n\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=7 in]{prediction_learning_system.pdf}\n \\caption{Decoupled learning strategy for MEC rendering schemes in the wireless VR network.}\n \\label{basic_modules}\n\\end{figure*}\n\n\\section{Deep Reinforcement Learning-Based MEC Rendering Scheme}\nSince deep neural networks are among the most powerful non-linear function approximators, DRL is an effective method to optimally solve POMDP problems \\cite{Jiang2}. In this section, to solve (P1), a decoupled learning strategy is proposed for FoV prediction and MEC rendering association, as shown in Fig. 5. Specifically, an RNN model based on GRUs is used to predict the FoV preference of each user over time. Then, four DRL algorithms, including centralized DQN, distributed DQN, centralized AC, and distributed AC, are proposed to select the FoV rendering MEC and the associated MEC for downlink transmission.\n\n\n\\subsection{FoV Prediction}\nBrownian motion is used to simulate the eye movement of VR users over time, and we assume that the uplink received FoV preference of the $k$th VR user at the $t$th time slot is $\\widehat{{F}_{t}^{\\text{oV}}}^{k}\\in\\{1,2,...,N_{\\text{FoV}}\\}$. In order to detect dynamics in the FoV preference of each VR user, the proposed learning scheme aims at utilizing not only the information present in the most recent observation $O_{t}=\\{O_t^{1},O_t^{2},...,O_t^{\\text{K}_{\\text{VR}}}\\}$, where $O_t^{k} = \\{\\widehat{{F}_{t}^{\\text{oV}}}^{k}\\}$, but also the historical information in the previous observations ${H}_{t}=\\{O_{t-T_{0}+1},...,O_{t-2},O_{t-1}\\}$ given a memory window $T_{0}$. To recognize the FoV preference over time, an RNN model with parameters $\\pmb{\\theta}_{\\text{RNN}}$, and specifically a GRU architecture, is leveraged. $\\pmb{\\theta}_{\\text{RNN}}$ consists of both the GRU internal parameters and the weights of the softmax layer. We choose an RNN due to its ability to capture the time correlation of the FoV preference, which helps learn the time-varying FoV preference for better prediction accuracy.
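\n\nA minimal PyTorch sketch of such a GRU-based predictor is shown below; the embedding and hidden-layer sizes are illustrative only.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass FoVPredictor(nn.Module):\n    # GRU over the last T0 FoV indices, with a softmax\n    # (here: logits) over the N_FoV candidate FoVs\n    def __init__(self, n_fov, emb=16, hidden=32):\n        super().__init__()\n        self.embed = nn.Embedding(n_fov, emb)\n        self.gru = nn.GRU(emb, hidden, batch_first=True)\n        self.out = nn.Linear(hidden, n_fov)\n\n    def forward(self, x):          # x: (batch, T0) indices\n        h, _ = self.gru(self.embed(x))\n        return self.out(h[:, -1])  # logits for the next FoV\n\n# training uses the cross-entropy loss defined below, e.g.\n# nn.CrossEntropyLoss() with torch.optim.SGD\n\\end{verbatim}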
We choose an RNN for its ability to capture the time correlation of the FoV preference, which helps learn the time-varying FoV preference for better prediction accuracy.\n\nAs shown in Fig. 5, the GRU layer includes multiple standard GRU units, and the historical observations $[O_{t-T_{0}+1},...,O_{t-1}]$ are sequentially input into the RNN predictor. For the $k$th VR user, the GRU layer is connected to an output layer which consists of a softmax non-linearity with ${N}_{\\text{FoV}}$ output values, representing the predicted probability $\\mathcal{P}\\{\\widehat{{F}_{t}^{\\text{oV}}}^{k} = \\hat{f}|[O_{t-T_{0}+1}^{k},...,O_{t-1}^{k}], \\pmb{\\theta}_{\\text{RNN}}\\}$ of the $\\hat{f}$th FoV ($\\hat{f} = 1,...,{N}_{\\text{FoV}}$) for the $t$th time slot given the historical observations $[O_{t-T_{0}+1}^{k},...,O_{t-1}^{k}]$. \n\nTo adapt the model parameter $\\pmb{\\theta}_{\\text{RNN}}$, standard Stochastic Gradient Descent (SGD) via BackPropagation Through Time (BPTT) \\cite{BPTT} is deployed. At the $(t+1)$th time slot, the parameters $\\pmb{\\theta}_{\\text{RNN}}$ of the RNN predictor can be updated as\n\\begin{equation}\n \\pmb{\\theta}_{\\text{RNN}}^{t+1} = \\pmb{\\theta}_{\\text{RNN}}^{t} - \\lambda_{\\text{RNN}}\\nabla L_{\\text{RNN}}(\\pmb{\\theta}_{\\text{RNN}}^{t}),\n\\end{equation}\nwhere $\\lambda_{\\text{RNN}}\\in(0,1]$ is the learning rate and $\\nabla L_{\\text{RNN}}(\\pmb{\\theta}_{\\text{RNN}}^{t})$ is the gradient of the loss function $L_{\\text{RNN}}(\\pmb{\\theta}_{\\text{RNN}}^{t})$ used to train the RNN predictor. $L_{\\text{RNN}}(\\pmb{\\theta}_{\\text{RNN}}^{t})$ is obtained by accumulating the cross-entropy loss over a mini-batch as \n\\begin{equation}\n L_{\\text{RNN}}^{t}(\\pmb{\\theta}_{\\text{RNN}}) \\!=\\! -\\!\\!\\!\\!\\!\\!\\!\\!\\sum_{t^{'} = t-T_b+1}^{t}\\!\\!\\!\\!\\!\\log\\left(\\!\\mathcal{P}\\{\\widehat{{F}^{\\text{oV}}_{t^{'}}} = \\widetilde{{F}^{\\text{oV}}_{t^{'}}}|O_{t^{'}-T_{0}}^{t^{'}},\\pmb{\\theta}_{\\text{RNN}}\\}\\!\\right),\n\\end{equation}\nwhere\n\\begin{equation}\n O_{t^{'}-T_{0}}^{t^{'}} = [O_{t^{'}-T_{0}+1},...,O_{t^{'}-1},O_{t^{'}}],\n\\end{equation}\nand $T_b$ is the randomly selected mini-batch size.\n\nThrough FoV prediction, MECs are able to know the FoV preference of each VR user in advance. The VR users with the same predicted FoVs can be grouped together. After FoV rendering, the MECs will multicast or unicast the required FoVs to VR users selecting the same FoV, or to a single VR user selecting a unique FoV, respectively.
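\n\nTo make the above concrete, the following is a minimal, illustrative sketch of such a GRU predictor trained with SGD via BPTT; it is not the exact implementation used here, and the hidden size, the one-hot encoding of FoV indices and all names are our own assumptions:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nN_FOV, T0 = 8, 20      # FoV alphabet size and memory window T_0\n\nclass FoVPredictor(nn.Module):\n    # GRU layer followed by a linear (softmax) output over N_FOV indices\n    def __init__(self, hidden=64):\n        super().__init__()\n        self.gru = nn.GRU(input_size=N_FOV, hidden_size=hidden,\n                          batch_first=True)\n        self.out = nn.Linear(hidden, N_FOV)\n\n    def forward(self, o):          # o: (batch, T0-1, N_FOV) one-hot history\n        h, _ = self.gru(o)\n        return self.out(h[:, -1])  # logits for the FoV at the next slot\n\nmodel = FoVPredictor()\nopt = torch.optim.SGD(model.parameters(), lr=0.005)\nloss_fn = nn.CrossEntropyLoss()    # cross-entropy over the mini-batch\n\ndef train_step(history, next_fov):\n    # history: (T_b, T0-1, N_FOV); next_fov: (T_b,) observed FoV indices\n    opt.zero_grad()\n    loss = loss_fn(model(history), next_fov)\n    loss.backward()                # backpropagation through time\n    opt.step()\n    return loss.item()\n\\end{verbatim}\nNote that the softmax layer is implicit in \\texttt{CrossEntropyLoss}, which combines it with the logarithmic term of the loss above.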
\n\n\\subsection{Deep Reinforcement Learning}\nThe main purpose of Reinforcement Learning (RL) is to select proper MECs for the MEC rendering schemes. Through a series of action strategies, MECs are able to interact with the environment and obtain rewards for their actions, which helps to improve their action strategies. After plenty of iterations, MECs can learn the optimal policy that maximizes the long-term rewards. \n\nWe define $S\\in\\mathcal{S}$, $A\\in\\mathcal{A}$, and $R\\in\\mathcal{R}_{e}$ as any state, action and reward from their corresponding sets, respectively. According to the observed environmental state $S_t$ at the $t$th time slot, MECs choose specific actions $A_t$ from the set $\\mathcal{A}$ and receive rewards $R_t$, which are regarded as a metric to measure whether the selected actions are good. Thus, the purpose of the RL algorithm is to find an optimal policy $\\pi$ which maximizes the long-term reward for $A = \\pi(S)$. The optimization objective can be formulated as $\\pi^{*} = \\arg\\max_{\\pi}{V}(S,\\pi)$, where ${V}(S,\\pi)$ is the long-term reward defined below; the detailed descriptions of the state, action and reward of problem (P1) are introduced as follows.\n\\begin{itemize}\n \\item State: At the $t$th time slot, the network state can be denoted as \n \\begin{align}\n S_{t} &= (\\widetilde{\\mathcal{F}_{t}^{\\text{oV}}}, \\mathcal{L}_{k,i}^{t}, \\mathcal{F}_{i}^{\\text{MEC}})\\in \\mathcal{S}, \\\\ \\nonumber\n \\text{with}~\\widetilde{\\mathcal{F}_{t}^{\\text{oV}}} &= \\{\\widetilde{{F}_{t}^{\\text{oV}}}^{1}, \\widetilde{{F}_{t}^{\\text{oV}}}^{2},...,\\widetilde{{F}_{t}^{\\text{oV}}}^{K_{\\text{VR}}}\\}, \\\\ \\nonumber\n \\mathcal{L}_{k,i}^{t} &= \\{ {l}_{k,1}^{t}, {l}_{k,2}^{t},..., {l}_{k,B}^{t} \\},\\\\ \\nonumber\n \\mathcal{F}_{i}^{\\text{MEC}} &= \\{F_{1}^{\\text{MEC}}, F_{2}^{\\text{MEC}},..., F_{B}^{\\text{MEC}}\\},\n \\end{align}\n where $\\widetilde{{F}_{t}^{\\text{oV}}}^{k}$ is the index of the predicted FoV of the $k$th VR user at the $t$th time slot. $l_{k,i}^{t}$ is the distance between the $k$th VR user and the $i$th MEC at the $t$th time slot. $F_{i}^{\\text{MEC}}$ is the computation capability of the $i$th MEC.\n \n \\item Action: The action space can be written as\n \\begin{align}\n {A}_{t} &= \\{\\check{\\mathcal{A}}_{k,q}^{t}, \\acute{\\mathcal{A}}_{k,i}^{t}\\}\\in\\mathcal{A}, \\\\ \\nonumber\n \\text{with}~\\check{\\mathcal{A}}_{k,q}^{t} &= \\{\\check{A}_{k,1}, \\check{A}_{k,2},...,\\check{A}_{k,{N}_{\\text{FoV}}}\\},\\\\ \\nonumber\n \\acute{\\mathcal{A}}_{k,i}^{t} &= \\{\\acute{A}_{k,1}, \\acute{A}_{k,2},...,\\acute{A}_{k,K_{\\text{VR}}}\\}, \\nonumber\n \\end{align}\n where $\\check{A}_{k,q}^{t}\\in\\{0, 1\\}$ and $\\acute{A}_{k,i}^{t}\\in\\{0, 1\\}$ represent whether the $k$th MEC will render the $q$th FoV and serve the $i$th VR user at the $t$th time slot, respectively. For instance, if $\\check{A}_{k,q}^{t} = 1$ and $\\check{A}_{j,q}^{t} = 0$, the $k$th MEC will render and migrate the $q$th FoV to the $j$th ($j\\neq k$) MEC choosing the same FoV. If $\\acute{A}_{k,i}^{t} = 1$, the $k$th MEC will support the downlink transmission of the $i$th VR user; otherwise, it will not.\n \n \\item Reward: The immediate reward $R_t$ is designed as\n \\begin{equation}\n {R}_{t}(S_t,A_t) = \\sum\\limits_{k=1}^{K_{\\text{VR}}} \\text{PSNR}_{k}^{t}.\n \\end{equation}\n\\end{itemize}\n\nThus, the discounted accumulation of the long-term reward can be denoted as\n\\begin{equation}\n {V}(S,\\pi) = \\sum_{t = 1}^{\\infty}(\\gamma)^{t - 1}{R}_{t}(S_t,A_t),\n\\end{equation}\nwhere $\\gamma\\in[0,1)$ is the discount factor.\n\nWhen the number of MECs and VR users is small, the RL algorithm can efficiently obtain the optimal policy. However, when a large number of MECs and VR users exist, the state and action spaces scale proportionally, which inevitably results in massive computation latency and severely affects the performance of the RL algorithm. To address this issue, deep learning is introduced into RL, yielding deep reinforcement learning (DRL): through interaction with the environment, DRL can directly control the behavior of each agent and solve complex decision-making problems. In the DRL framework, two methods can be used to obtain the optimal policy. One is value-based optimization, such as DQN, which indirectly optimizes the policy by optimizing the value function; the other is policy-based optimization, such as AC, which directly optimizes the policy.
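\n\nAs a simple illustration, the immediate reward and its discounted accumulation defined above can be evaluated as in the following sketch (the PSNR values are placeholders):\n\\begin{verbatim}\nimport numpy as np\n\ngamma = 0.9                           # discount factor\n\ndef immediate_reward(psnr_per_user):\n    # R_t: sum of the VR users' PSNR values at time slot t\n    return float(np.sum(psnr_per_user))\n\ndef discounted_return(rewards):\n    # V(S, pi) = sum_t gamma^(t-1) R_t over an episode\n    return sum(gamma**(t - 1) * r\n               for t, r in enumerate(rewards, start=1))\n\\end{verbatim}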
In the following sections, the four DRL algorithms are introduced in detail.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=3.0 in]{DQN.pdf}\n \\caption{The DQN diagram of the MEC rendering scheme.}\n \\label{basic_modules}\n\\end{figure}\n\n\\subsubsection{Centralized DQN}\n\nAs a value-based DRL algorithm, DQN combines a neural network with Q-learning and approximates the state-action value function via a deep neural network (DNN). Using the DQN algorithm, a fraction of the states is sampled and the neural network is applied to train a sufficiently accurate state-action value function, which effectively solves the problem of high dimensionality in the state space. Furthermore, the DQN algorithm uses experience replay to stabilize the learning process of RL. When updating the DQN, some experiences in the replay memory are randomly selected for learning, so that the correlation among training samples is broken and the efficiency of the neural network is improved. In addition, by averaging over the selected samples, the distribution of training samples is smoothed, which avoids training divergence.\n\nAs shown in Fig. 6, the state-action value function $V_{\\text{DQN}}(S,A)$ in the DQN agent can be parameterized by a function ${V}_{\\text{DQN}}(S,A;\\pmb{\\theta}_{\\text{DQN}})$, where $\\pmb{\\theta}_{\\text{DQN}}$ is the weight matrix of a DNN with multiple layers. We consider a conventional DNN, in which the neurons of two adjacent layers are fully connected (fully-connected layers). The input of the DNN is the set of variables in state $S_t$; the hidden layers are Rectified Linear Units (ReLUs) utilizing the function $f(x) = \\max(0,x)$; the output layer consists of linear units, one for each available action in $A_t$. The exploitation is obtained by performing forward propagation of ${V}_{\\text{DQN}}(S,A;\\pmb{\\theta}_{\\text{DQN}})$ with respect to the observed state $S_t$. Moreover, the parameter $\\pmb{\\theta}_{\\text{DQN}}$ can be updated by using SGD as\n\\begin{equation}\n \\pmb{\\theta}_{\\text{DQN}}^{t+1} = \\pmb{\\theta}_{\\text{DQN}}^{t} - \\lambda_{\\text{DQN}}\\nabla L_{\\text{DQN}}(\\pmb{\\theta}_{\\text{DQN}}^{t}),\n\\end{equation}\nwhere $\\lambda_{\\text{DQN}}\\in(0,1]$ is the learning rate and $\\nabla L_{\\text{DQN}}(\\pmb{\\theta}_{\\text{DQN}}^{t})$ is the gradient of the loss function $L_{\\text{DQN}}(\\pmb{\\theta}_{\\text{DQN}}^{t})$ utilized to train the state-action value function. The loss function can be defined as\n\\begin{equation}\n L_{\\text{DQN}}(\\pmb{\\theta}_{\\text{DQN}}^{t}) =(\\hat{V}_{\\text{DQN}}-{V}_{\\text{DQN}}(S_i,A_i;\\pmb{\\theta}_{\\text{DQN}}^{t}))^{2},\n\\end{equation}\nwhere\n\\begin{equation}\n \\hat{V}_{\\text{DQN}} = R_{i+1}+\\gamma\\max_{A} {V}_{\\text{DQN}}(S_{i+1}, A; \\Bar{\\pmb{\\theta}}_{\\text{DQN}}^{t}).\n\\end{equation}\n$(S_i,A_i,S_{i+1},R_{i+1})$ are randomly selected previous samples for some $i\\in\\{t-M_r,...,t\\}$, forming a so-called minibatch; $M_r$ is the replay memory size. $\\Bar{\\pmb{\\theta}}_{\\text{DQN}}^{t}$ is the so-called target Q-network, which is utilized to estimate the future value of the Q-function in the update rule. Meanwhile, $\\Bar{\\pmb{\\theta}}_{\\text{DQN}}^{t}$ is periodically copied from the current value $\\pmb{\\theta}_{\\text{DQN}}^{t}$ and kept fixed for some episodes.
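\n\nA minimal sketch of this update is given below; the layer sizes, the replay-memory interface and all names are illustrative assumptions, while the loss and target computation mirror the equations above:\n\\begin{verbatim}\nimport copy\nimport torch\nimport torch.nn as nn\n\nSTATE_DIM, N_ACTIONS, GAMMA = 24, 16, 0.9   # illustrative sizes\n\n# two hidden ReLU layers, linear output units for all actions\nq_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),\n                      nn.Linear(128, 128), nn.ReLU(),\n                      nn.Linear(128, N_ACTIONS))\ntarget_net = copy.deepcopy(q_net)   # frozen copy, periodically refreshed\nopt = torch.optim.SGD(q_net.parameters(), lr=0.05)\n\ndef dqn_step(S, A, R, S_next):\n    # (S, A, R, S_next): minibatch sampled from the replay memory\n    with torch.no_grad():\n        y = R + GAMMA * target_net(S_next).max(dim=1).values\n    q = q_net(S).gather(1, A.unsqueeze(1)).squeeze(1)\n    loss = ((y - q) ** 2).mean()    # squared TD error\n    opt.zero_grad()\n    loss.backward()\n    opt.step()\n    return loss.item()\n\n# every K steps: target_net.load_state_dict(q_net.state_dict())\n\\end{verbatim}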
The use of a minibatch, rather than a single sample, to update the state-action value function ${V}_{\\text{DQN}}(S,A;\\pmb{\\theta}_{\\text{DQN}})$ improves the convergence reliability of the value function.\n\nBy deriving the loss function in (29) and taking the expectation over the selected previous samples in the minibatch, ${V}_{\\text{DQN}}^{*}(S,A)$ can be obtained. The DQN algorithm is presented in Algorithm 1. \n\n\n\n\\begin{algorithm}[t]\n\\begin{algorithmic}[1]\n\\caption{DQN for dynamic decision-making and optimization of the MEC rendering scheme}\n\\STATE Initialize replay memory $D$ to capacity $\\hat{\\mathcal{N}}$, learning rate $\\lambda_{\\text{DQN}}\\in(0,1]$ and discount factor $\\gamma\\in[0,1)$.\n\\STATE Initialize state-action value function ${V}_{\\text{DQN}}(S,A;\\pmb{\\theta}_{\\text{DQN}})$, the parameters of the primary Q-network $\\pmb{\\theta}_{\\text{DQN}}$ and the target Q-network $\\Bar{\\pmb{\\theta}}_{\\text{DQN}}$.\n\\FOR{episode = 1,...,$M$}\n \\STATE Input the network state $S$ of the MEC rendering scheme.\n \\FOR{t = 1,...,T}\n \\STATE Following the $\\epsilon$-greedy algorithm, with probability $\\epsilon$ select a random action $A_t$ from the action space $\\mathcal{A}$.\n \\STATE Otherwise, select $A_t = \\arg\\max\\limits_{A\\in \\mathcal{A}}{V}_{\\text{DQN}}(S_t,A;\\pmb{\\theta}_{\\text{DQN}})$.\n \\STATE The selected MECs render the predicted or uplink received FoVs and multicast\/unicast them to VR users according to the selected action $A_t$.\n \\STATE MECs observe reward $R_t$ and new state $S_{t+1}$.\n \\STATE Store transition $(S_{t},A_{t},R_{t},S_{t+1})$ in replay memory $D$.\n \\STATE Sample random minibatch of transitions $(S_j,A_j,R_j,S_{j+1})$ from replay memory $D$.\n \\IF{$j+1$ is terminal}\n \\STATE $y_j^{target} = R_j$.\n \\ELSE\n \\STATE $y_j^{target} = R_{j} + \\gamma\\max\\limits_{A}{V}_{\\text{DQN}}(S_{j+1},A;\\Bar{\\pmb{\\theta}}_{\\text{DQN}})$.\n \\ENDIF\n \\STATE Perform a gradient descent step and update parameters $\\pmb{\\theta}_{\\text{DQN}}$ according to (28).\n \\STATE Update parameter $\\Bar{\\pmb{\\theta}}_{\\text{DQN}}$ of the target network every $\\Bar{K}$ steps.\n \\ENDFOR\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Distributed DQN}\nIn the centralized DRL algorithm, a single optimization policy is learned centrally at the central controller, which requires the global observations, rewards, and actions of each MEC. When the number of MECs and VR users increases, the size of the proposed model and its parameters can expand exponentially. In this case, the GPU memory in the central controller not only needs to hold the model and batch of data, but also the intermediate outputs of the feedforward computation. With dense VR users, the GPU memory can be easily overloaded in practice, especially for GPUs with lower computation capability. Meanwhile, as the number of MECs and VR users scales up, centralized DRL can become inefficient due to the following issues. First, the training time is bound by the gradient computation time, and the frequency of parameter updating grows linearly with the number of MECs and VR users. Second, as the frequency of parameter updating grows, it could potentially slow down the optimization process and result in convergence problems \\cite{Ddqn1}. \n\nUnlike the centralized DRL algorithm, the global objective in the distributed DRL algorithm is the combination of each agent's local objective, and each agent optimizes its own objective. In the distributed DQN method, each agent learns independently of the other agents.
When one of the agents selects an action based on the current state, the other agents can be approximated as part of the environment \\cite{DQ_OFDMA}.\n\nIn our model, the central controller stores a copy of the model parameter $\\pmb{\\theta}_{\\text{DDQN}}$. The $i$th MEC obtains the latest model parameter $\\pmb{\\theta}_{\\text{DDQN}}$ from the central controller with $\\widetilde{\\pmb{\\theta}}_{i} = \\pmb{\\theta}_{\\text{DDQN}}$. Based on the observed state $S_{t}^{i}$, it selects an action $A_{t}^{i}$ from the available actions in $\\mathcal{A}^i$. As a result, the environment makes a transition to the new state $S_{t+1}^{i}$, and a reward $R_{t}^{i}$ is generated and fed back to the $i$th MEC. During training, the parameter $\\widetilde{\\pmb{\\theta}}_{i}$ of the $i$th MEC can be updated as \n\\begin{equation}\n \\widetilde{\\pmb{\\theta}}_{i}^{t+1} = \\widetilde{\\pmb{\\theta}}_{i}^{t} - \\lambda_{\\text{DDQN}}\\nabla L_{i}(\\widetilde{\\pmb{\\theta}}_{i}^{t}),\n\\end{equation}\nwhere $\\lambda_{\\text{DDQN}}\\in(0,1]$ is the learning rate and $L_{i}(\\widetilde{\\pmb{\\theta}}_{i}^{t})$ is the loss function of the $i$th MEC, which can be denoted as\n\\begin{equation}\n L_i(\\widetilde{\\pmb{\\theta}}_{i}^{t}) =(\\hat{V}_{\\text{DDQN}}-{V}_{\\text{DDQN}}(S_{j}^{i},A_{j}^{i};\\widetilde{\\pmb{\\theta}}_{i}^{t}))^{2},\n\\end{equation}\nwhere\n\\begin{equation}\n \\hat{V}_{\\text{DDQN}} = R_{j+1}^{i} + \\gamma\\max_{A^{i}}{V}_{\\text{DDQN}}(S_{j+1}^{i},A^{i};\\bar{\\widetilde{\\pmb{\\theta}}}_{i}^{t}).\n\\end{equation}\n$(S_{j}^{i},A_{j}^{i},S_{j+1}^{i},R_{j+1}^{i})$ are randomly selected previous samples for $j\\in\\{t-M_r,...,t\\}$ of the $i$th MEC. $\\bar{\\widetilde{\\pmb{\\theta}}}_{i}^{t}$ is the target Q-network which is used to estimate the future value of the state-action value function in the update rule. Furthermore, by deriving the loss function in (32) and computing the expectation over the selected samples, ${V}_{\\text{DDQN}}^{*}(S^{i},A^{i})$ can be obtained. In addition, the updated parameter $\\widetilde{\\pmb{\\theta}}_{i}$ of the $i$th MEC is transmitted to the central controller, and the model parameter $\\pmb{\\theta}_{\\text{DDQN}}$ is updated as\n\\begin{equation}\n \\pmb{\\theta}_{\\text{DDQN}} = \\frac{1}{K_{\\text{DDQN}}^{\\text{MEC}}}\\sum\\limits_{i=1}^{K_{\\text{DDQN}}^{\\text{MEC}}}\\widetilde{\\pmb{\\theta}}_{i},\n\\end{equation}\nwhere $K_{\\text{DDQN}}^{\\text{MEC}}$ is the number of MECs associated with the VR user groups.
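\n\nThe broadcast and aggregation steps of this distributed scheme can be sketched as follows; the state-dict averaging below is an illustrative stand-in for the actual signalling between the MECs and the central controller:\n\\begin{verbatim}\nimport torch\n\ndef broadcast(theta_global, agents):\n    # each MEC starts from the controller's copy of the parameters\n    for net in agents:\n        net.load_state_dict(theta_global)\n\ndef aggregate(agents):\n    # the controller averages the MECs' locally updated parameters\n    dicts = [net.state_dict() for net in agents]\n    return {k: torch.stack([d[k] for d in dicts]).mean(dim=0)\n            for k in dicts[0]}\n\\end{verbatim}\nIn each round, \\texttt{aggregate} plays the role of the averaging step above, after every associated MEC has performed its local gradient update.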
\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=3.5 in]{actor_critic.pdf}\n \\caption{The Actor-Critic diagram of the MEC rendering scheme.}\n \\label{basic_modules}\n\\end{figure}\n\n\\subsubsection{Centralized AC}\nIn the DQN algorithm, the optimal policy of the MEC rendering scheme is obtained indirectly by optimizing the state-action value function. Unlike the DQN algorithm, the AC algorithm is able to directly optimize the policy of the MEC rendering scheme. \n\nThe core idea of the AC algorithm is to combine the advantages of the Q-learning (value-based) and policy-gradient (policy-based) algorithms. Consequently, the fast convergence of the value-based function and the directness of the policy-based function are both taken into consideration \\cite{AC1, AC2}. As shown in Fig. 7, the AC network consists of two independent networks, namely, an actor network and a critic network. By learning the relationship between the environment and the rewards, the critic network is able to estimate the potential reward of the current state. Then, the critic network guides the actor network to select proper actions and updates the actor network in each epoch. Therefore, the AC algorithm is usually developed as a two-time-scale algorithm, including a critic updating step and an actor updating step, which limits the learning efficiency. A parameterized policy $\\pi(A_t|S_t;\\pmb{\\theta}_{\\text{AC}})$ is learned to select actions according to the current environment state. Then, the critic network obtains the reward feedback from the environment and uses the state-value function $V_{\\text{AC}}(S_t;\\pmb{w}_{\\text{AC}})$ to evaluate the performed action. Meanwhile, a temporal-difference (TD) error is generated to reflect the performance of the performed action.\n\nIn particular, after performing action $A_t$ based on $S_t$ with policy $\\pi$, the critic network uses the TD error to evaluate the action under the current state, which can be expressed as\n\\begin{equation}\n \\delta_t = R_t + \\gamma{V}_{\\text{AC}}(S_{t+1};\\pmb{w}_{\\text{AC}}^{t}) - V_{\\text{AC}}(S_t;\\pmb{w}_{\\text{AC}}^{t}).\n\\end{equation}\nThen, $\\pmb{w}_{\\text{AC}}^{t}$ can be updated as\n\\begin{equation}\n \\pmb{w}_{\\text{AC}}^{t+1} = \\pmb{w}_{\\text{AC}}^{t} + \\lambda_{\\text{critic}}\\delta_{t}\\nabla_{\\pmb{w}_{\\text{AC}}}V_{\\text{AC}}(S_t;\\pmb{w}_{\\text{AC}}^{t}),\n\\end{equation}\nwhere $\\lambda_{\\text{critic}}\\in(0,1]$ is the learning rate of the critic network.\n\nMeanwhile, in the actor network, the policy-gradient method is usually adopted, which directly selects actions via the parameterized policy. The parameter $\\pmb{\\theta}_{\\text{AC}}^{t}$ can be updated as\n\\begin{equation}\n \\pmb{\\theta}_{\\text{AC}}^{t+1} = \\pmb{\\theta}_{\\text{AC}}^{t} + \\lambda_{\\text{actor}}\\delta_{t}\\nabla_{\\pmb{\\theta}_{\\text{AC}}} \\log\\pi(A_t|S_t;\\pmb{\\theta}_{\\text{AC}}^{t}),\n\\end{equation}\nwhere $\\lambda_{\\text{actor}}\\in(0,1]$ is the learning rate of the actor network.\n\nCorrespondingly, the parameters in the actor and critic networks are iteratively updated to maximize the objective function.
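\n\nA minimal sketch of one such two-time-scale update, mirroring (35)-(37), is given below (the network shapes and all names are illustrative assumptions):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nSTATE_DIM, N_ACTIONS, GAMMA = 24, 16, 0.9\n\nactor = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),\n                      nn.Linear(128, N_ACTIONS), nn.Softmax(dim=-1))\ncritic = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),\n                       nn.Linear(128, 1))\nactor_opt = torch.optim.SGD(actor.parameters(), lr=0.005)\ncritic_opt = torch.optim.SGD(critic.parameters(), lr=0.05)\n\ndef ac_step(S, A, R, S_next):\n    # TD error, Eq. (35)\n    delta = (R + GAMMA * critic(S_next) - critic(S)).detach()\n    # critic update, Eq. (36): SGD step along delta * grad V\n    critic_opt.zero_grad()\n    (-delta * critic(S)).sum().backward()\n    critic_opt.step()\n    # actor update, Eq. (37): policy-gradient step\n    actor_opt.zero_grad()\n    logp = torch.log(actor(S)[A])\n    (-delta * logp).sum().backward()\n    actor_opt.step()\n\\end{verbatim}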
The detailed AC algorithm of the MEC rendering scheme is presented in Algorithm 2.\n\n\n\n\n\\begin{algorithm}[t]\n\\begin{algorithmic}[1]\n\\caption{Actor-Critic for dynamic decision-making and optimization of the wireless VR system}\n\\STATE Initialize learning rates $\\lambda_{\\text{critic}}\\in(0,1]$, $\\lambda_{\\text{actor}}\\in(0,1]$ and discount factor $\\gamma\\in[0,1)$.\n\\STATE Initialize parameters $\\pmb{\\theta}_{\\text{AC}}$ and $\\pmb{w}_{\\text{AC}}$ for the actor and critic network, respectively.\n\\STATE Input the network state $S$ of the MEC rendering scheme.\n\\FOR{t = 1,...,T}\n \\STATE According to $\\pi(A|S_t;\\pmb{\\theta}_{\\text{AC}})$, select the action $A_t\\in\\mathcal{A}$.\n \\STATE The selected MECs render the required FoVs and multicast\/unicast them to VR users according to the selected action $A_t$.\n \\STATE MECs calculate the immediate reward $R_t$ and obtain the environment state $S_{t+1}$.\n \\STATE Store transition $(S_t,A_t,R_t,S_{t+1})$.\n \\STATE Calculate TD error $\\delta_t$ according to (35).\n \\STATE Update the parameters $\\pmb{w}_{\\text{AC}}$ of the critic network via (36).\n \\STATE Update the parameters $\\pmb{\\theta}_{\\text{AC}}$ of the actor network via (37).\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsubsection{Distributed AC}\nUnlike the centralized AC algorithm, each agent in the distributed AC algorithm performs actions and obtains rewards based on its own observed state. The critic network in each agent shares its estimate of the value function with the others through the central controller, while the actor network in each agent performs individually, without the need to infer the policies of the others \\cite{Zhang_dis_AC}. \n\n\n\n\nIn our model, the $i$th MEC obtains the latest critic model parameter $\\pmb{w}_{\\text{DAC}}$ from the central controller and sets its own critic parameter $\\Bar{\\pmb{w}}_{t}^{i} = \\pmb{w}_{\\text{DAC}}$. At the $t$th time slot, according to the current environment state $S_{t}^{i}$ obtained by the $i$th MEC, a parameterized policy $\\pi_{i}(S_{t}^{i};\\Bar{\\pmb{\\theta}}_{t}^{i})$ is learned to select action $A_{t}^{i}$. Then, the critic network in the $i$th MEC receives the reward feedback from the environment and evaluates the state-value function $V_{\\text{DAC}}(S_{t}^{i}; \\Bar{\\pmb{w}}_{t}^{i})$. Similarly, the TD error $\\delta_{t}^{i}$ of the $i$th MEC can be calculated to judge the performance of the performed action $A_{t}^{i}$, and the parameter $\\Bar{\\pmb{w}}_{t}^{i}$ of the critic network of the $i$th MEC can be updated as\n\\begin{equation}\n \\Bar{\\pmb{w}}_{t+1}^{i} = \\Bar{\\pmb{w}}_{t}^{i} + \\Bar{\\lambda}_{\\text{critic}}\\delta_{t}^{i}\\nabla_{\\Bar{\\pmb{w}}^{i}}V_{\\text{DAC}}(S_t^{i};\\Bar{\\pmb{w}}_{t}^{i}),\n\\end{equation}\nwhere $\\Bar{\\lambda}_{\\text{critic}}\\in(0,1]$ is the learning rate of the critic network, and\n\\begin{equation}\n \\delta_t^{i} = R_t^{i} + \\gamma{V}_{\\text{DAC}}(S_{t+1}^{i};\\Bar{\\pmb{w}}_{t}^{i}) - V_{\\text{DAC}}(S_t^{i};\\Bar{\\pmb{w}}_{t}^{i}).\n\\end{equation}\nFurthermore, the parameter $\\Bar{\\pmb{\\theta}}_{t}^{i}$ of the actor network of the $i$th MEC can be updated via\n\\begin{equation}\n \\Bar{\\pmb{\\theta}}_{t+1}^{i} = \\Bar{\\pmb{\\theta}}_{t}^{i} +\\Bar{\\lambda}_{\\text{actor}}\\delta_{t}^{i}\\nabla_{\\Bar{\\pmb{\\theta}}^{i}}\\log\\pi(A_t^{i}|S_t^{i};\\Bar{\\pmb{\\theta}}_{t}^{i}),\n\\end{equation}\nwhere $\\Bar{\\lambda}_{\\text{actor}}\\in(0,1]$ is the learning rate of the actor network.
In addition, the updated parameter $\\Bar{\\pmb{w}}_{t}^{i}$ in the critic network of the $i$th MEC is sent to the central controller, and the critic model parameter $\\pmb{w}_{\\text{DAC}}$ is updated as\n\\begin{equation}\n \\pmb{w}_{\\text{DAC}} = \\frac{1}{K_{\\text{DAC}}^{\\text{MEC}}}\\sum\\limits_{i=1}^{K_{\\text{DAC}}^{\\text{MEC}}}\\Bar{\\pmb{w}}^{i},\n\\end{equation}\nwhere $K_{\\text{DAC}}^{\\text{MEC}}$ is the number of MECs associated with the VR user groups. Correspondingly, the parameters in the actor and critic networks are iteratively updated to maximize the objective function.\n\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=3.0in]{prediction_reward.jpg}}\\label{fig_first_case}\n\t\\hfil\n\t\\subfloat[]{\\includegraphics[width=3.0in]{prediction_accuracy.jpg}}\\label{fig_second_case}\n\t\\caption{(a) Total reward of FoV prediction of each epoch via GRU. (b) FoV prediction accuracy of GRU for varying number of VR users.}\n\t\\label{basic_modules}\n\\end{figure}\n\n\n\\section{Simulation Results}\nIn this section, we examine the effectiveness of our proposed schemes with learning algorithms via simulation. The learning algorithms use fully-connected neural networks with two hidden layers, each with 128 ReLU units. We set the memory window as 20, the minibatch size as 64, the learning rate for the RNN as 0.005, the number of MECs as 8, the number of VR users as 8, $N_{\\text{FoV}} = 8$, $D(t) = 3$, $\\sigma^{2} = -110~\\text{dBm}$, $\\gamma = 0.9$, $\\alpha = \\beta = 3$, $\\lambda_{\\text{DQN}} = 0.05$, $\\lambda_{\\text{actor}} = 0.005$, $\\lambda_{\\text{critic}} = 0.05$, $T^{th} = 30 ~\\text{ms}$, $\\mathcal{R} = 1080\\text{p}$, $\\mathcal{C}_{\\mathcal{R}} = 200$, $F_{max}^{\\text{MEC}} = 5~\\text{GHz}$, $F_{min}^{\\text{MEC}} = 4~\\text{GHz}$, $F^{\\text{VR}} = 2~\\text{GHz}$, $f^{\\text{MEC}} = f^{\\text{VR}} = 1000~ \\text{Cycles\/bit}$, and $R^{\\text{fiber}} = 10~ \\text{Gb\/s}$. We consider a square area with side length 100 meters.\n\n\\subsection{FoV Prediction}\nIn the FoV prediction scheme, Brownian motion is deployed to simulate the eye movement of VR users. To obtain high accuracy in predicting the FoV preference of each VR user in continuous time slots, an RNN model based on the GRU architecture is deployed. Fig. 8 (a) plots the total reward of FoV prediction of each epoch via the RNN, and Fig. 8 (b) plots the FoV prediction accuracy of the RNN for a varying number of VR users, respectively. It is observed that the prediction accuracy of the RNN remains at $96\\%$ despite the increasing number of VR devices. This is because the RNN utilizes a memory window of length 20 to store the input observations, which can capture the FoV preference of VR users in the past time slots.\n\n\n\\subsection{MEC Rendering Scheme}\nFour DRL algorithms, including centralized DQN, distributed DQN, centralized AC, and distributed AC, are proposed to select proper MECs to render and transmit the required FoVs to VR users. For simplicity, we use ``w\/ Pred\", ``w\/o Pred\", ``w\/ Migra\", and ``w\/o Migra\" to represent ``with prediction\", ``without prediction\", ``with migration\", and ``without migration\" in the figures, respectively.
To guarantee the fairness of each VR user, we use the average QoE and VR interaction latency in the performance results.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=3.0 in]{qoe_final.jpg}\n \\caption{Total reward of the MEC rendering with prediction and migration scheme of each epoch via centralized\/distributed DQN\/AC learning algorithms.}\n \\label{basic_modules}\n\\end{figure}\n\n\n\nFig. 9 plots the total reward of the MEC rendering with prediction and migration scheme of each epoch via the centralized\/distributed DQN\/AC learning algorithms. Each result is averaged over 100 training trials. It is observed that the total reward and the convergence speed of these four DRL learning algorithms follow the order: Centralized DQN $>$ Distributed DQN $>$ Centralized AC $>$ Distributed AC. This is due to the experience replay mechanism and random sampling in DQN, which use the training samples efficiently and smooth the training distribution over previous behaviours. As the model parameters in the AC algorithm are updated in two steps, including a critic step and an actor step, the convergence speed of the AC algorithm is slower. Apparently, the convergence speed of the centralized learning algorithms is faster than that of the distributed learning algorithms. This is because distributed learning needs more time to learn from each agent with only local observations and rewards, whereas centralized learning can learn from global observations and rewards.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=3.0 in]{uplink_dqn.jpg}\n \\caption{Average VR interaction latency of various MEC and VR rendering schemes via centralized DQN algorithm for varying uplink transmission latency.}\n \\label{basic_modules}\n\\end{figure}\n\nFig. 10 plots the average VR interaction latency of various MEC and VR rendering schemes via the centralized DQN algorithm for varying uplink transmission latency. We observe that all the MEC rendering schemes outperform the VR rendering schemes, with a gain of around $40~\\text{ms}$. This is because the processing ability of the MECs is much higher than that of the VR devices, and the data size of the FoV is smaller than that of the stitched 2D picture, which jointly decrease the rendering and downlink transmission latency. We also observe that the average VR interaction latency of the MEC rendering with prediction and migration scheme remains the same as the uplink transmission latency increases, since the MECs do not need to wait for the uplink transmission of the requested FoV from the VR devices before performing rendering.\n\nIn Fig. 10, we also compare our proposed learning-based schemes with those without learning. Comparing with the MEC\/VR rendering scheme with nearest association plotted using dashed lines, we see that our proposed learning-based MEC\/VR rendering schemes achieve substantial gains in terms of VR interaction latency. This is because, in the non-learning scheme, the VR user needs to transmit its requested FoV through uplink transmission and is always associated with the nearest MEC.
Thus, it is possible that an MEC with low processing ability is selected to render the required FoV, which can increase the rendering latency.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=3.0 in]{different_learning_algorithms.jpg}\n \\caption{Average VR interaction latency of the MEC rendering with prediction and migration scheme and the VR rendering scheme via centralized\/distributed DQN\/AC learning algorithms for varying uplink transmission latency.}\n \\label{basic_modules}\n\\end{figure}\n\n\n\nFig. 11 plots the average VR interaction latency of the MEC rendering with prediction and migration scheme and the VR rendering scheme via the centralized\/distributed DQN\/AC learning algorithms for varying uplink transmission latency. It is observed that the MEC rendering scheme achieves much lower latency (by about $40~\\text{ms}$) compared to the VR rendering scheme. It is also seen that for the same rendering scheme, either MEC or VR, the centralized DQN algorithm achieves the minimum average VR interaction latency. This can be explained by the fact that the centralized learning algorithm learns a single policy common to the whole wireless VR system based on global observations, while in the distributed learning algorithm, each agent only learns its own policy based on local observations.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=3.0in]{number_vr_user_qoe.jpg}}\\label{fig_first_case}\n\t\\hfil\n\t\\subfloat[]{\\includegraphics[width=3.0in]{number_vr_user.jpg}}\\label{fig_second_case}\n\t\\caption{Average QoE and VR interaction latency of MEC rendering with prediction with\/without migration schemes via centralized\/distributed DQN\/AC learning algorithms with increasing number of VR users.}\n\t\\label{basic_modules}\n\\end{figure}\n\nFig. 12 plots the average QoE and VR interaction latency of the MEC rendering with prediction with\/without migration schemes via the centralized\/distributed DQN\/AC learning algorithms with an increasing number of VR users, respectively. As the number of VR users increases, the average QoE of each VR user first decreases and then becomes nearly stable, as shown in Fig. 12 (a), whereas the average VR interaction latency first increases and then becomes nearly stable, as shown in Fig. 12 (b). This is because, with an increasing number of VR users, more MECs are activated to provide the downlink transmission of more different requested FoVs, which increases the interference among those transmissions. When the number of VR users becomes large, all the MECs become active to serve all VR users and render most of the FoVs, so the rendering latency and the interference among VR users become stable. \n\n\n\nInterestingly, we notice that for both the centralized DQN and AC algorithms, the MEC rendering with migration scheme shows a performance gain over that without migration in Fig. 12 (a) and (b). This is because the MECs with higher computing ability are selected to render the same required FoV for migration, which decreases the rendering latency.
Importantly, all the learning-based MEC rendering with prediction schemes substantially outperform the conventional non-learning-based MEC rendering with nearest association scheme.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=3.0in]{different_mec_number_qoe.jpg}}\\label{fig_first_case}\n\t\\hfil\n\t\\subfloat[]{\\includegraphics[width=3.0in]{different_mec_number.jpg}}\\label{fig_second_case}\n\t\\caption{Average QoE and VR interaction latency of MEC rendering with prediction with\/without migration schemes via centralized\/distributed DQN\/AC learning algorithms with increasing number of MECs.}\n\t\\label{basic_modules}\n\\end{figure}\n\nFig. 13 plots the average QoE and VR interaction latency of the MEC rendering with prediction with\/without migration schemes via the centralized\/distributed DQN\/AC learning algorithms with an increasing number of MECs, respectively. As the number of MECs increases, the average QoE of each VR user first increases and then becomes nearly stable, as shown in Fig. 13 (a), whereas the average VR interaction latency first decreases and then becomes nearly stable, as shown in Fig. 13 (b). This is because, as the number of MECs increases, the VR users have more MECs to choose from; thus, nearer MECs with higher execution ability can be utilized to render the required FoVs, which reduces the rendering and downlink transmission latency. However, when the number of MECs becomes large, all MECs may already be activated for rendering and downlink transmission, which leaves little room for improvement.\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\nIn this paper, a decoupled learning strategy was developed to optimize real-time VR video streaming in wireless networks, which considered FoV prediction and rendering MEC association. Specifically, based on the GRU architecture, an RNN model was used to predict the FoV preference of each VR user over time. Then, based on the correlation between the location and the predicted FoV request of VR users, centralized and distributed DRL strategies were proposed to determine the optimal association between MECs and VR user groups, and the optimal rendering MEC for model migration, so as to maximize the long-term QoE of VR users. Simulation results showed that our proposed MEC rendering with prediction and migration scheme based on RNN and DRL algorithms substantially improved the long-term QoE of VR users and reduced the VR interaction latency.\n\n\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn a seminal paper, Rieder, Lebowitz and Lieb investigated the properties of chains of $N$ \nharmonic oscillators, interacting at their ends with stochastic heat baths \\cite{Lebow}.\nThese authors proved that while energy flows from hot to cold baths, the kinetic \ntemperature profile decreases exponentially in the direction of the hotter bath, rather \nthan increasing, and in the bulk its slope vanishes as $N$ grows. Thus, if the kinetic \ntemperature equals the thermodynamic temperature, heat flows against\nthe direction of energy in the bulk of such 1D systems.
Moreover, \nno steady state is reached, because, at the boundaries, heat flows in the opposite direction.\nTaken as a paradox without explanation in Ref.\\cite{Lebow}, this fact reveals that, in \nharmonic chains of oscillators, the kinetic temperature does not correspond to the \nthermodynamic temperature, or the energy flux does not represent a heat flux, or both.\nUnexpected phenomena that seem to contradict the hydrodynamic laws of transport, {\\em e.g.}\\ \ncurrents going against the density gradient, \na phenomenon called ``uphill diffusion'', can be observed in several experimental settings;\nsee e.g.\\ Ref.\\cite{ColGiaGibVer} for further references and a nonequilibrium model \nwith phase transition exhibiting uphill diffusion. However, the thermodynamic relevance of \nsuch models is still under investigation.\n\nAs a matter of fact, temperature and heat pertain to macroscopic objects with microscopic states corresponding\nto Local Thermodynamic Equilibrium (LTE); \nthey cannot be directly identified with mechanical quantities such as kinetic energy and\nenergy flux, Ref.\\cite{LandauV} \\S 9, \\cite{Chibbaro} Chapters 3, 4 \nand 5. LTE is the essence of Thermodynamics: it can be viewed at once as the precondition \nfor the existence of the thermodynamic fields, such as temperature and heat, and as the \nnatural state of objects obeying the thermodynamic laws. The microscopic conditions under \nwhich LTE is expected to hold are extensively discussed in the literature, {\\em e.g.}\\ \n\\cite{Prigogine} Section 15.1, \\cite{Spohn} Section \n2.3, \\cite{Bellissard} Section 3.3, \\cite{Kreuzer} Chapter 1. In short, LTE requires\nthe existence of three well separated time and space scales, so that: {\\bf 1.}\\ a macroscopic \nobject can be subdivided into mesoscopic cells that look like a point to macroscopic \nobservers, while containing a large number of molecules; {\\bf 2.}\\ boundary effects\nare negligible compared to bulk effects, so that the contributions of neighboring cells \nto the mass and energy of a given cell are inappreciable within a cell; \n{\\bf 3.}\\ particle interactions allow the cells to thermalize (positions and velocities \nbecome respectively uniformly and Maxwell-Boltzmann distributed) within times that \nare mere instants on the macroscopic scale. \n\nThat macroscopic observables are not affected by microscopic fluctuations, despite the exceedingly \ndisordered and energetic microscopic motions, is essential for mesoscopic quantities to be sufficiently \nstable that thermodynamic laws apply, {\\em e.g.}\\ \\cite{LandauV}, \\S 1 and \\S 2. This is the case for \na quantity that is spatially weakly inhomogeneous, when the number $N$ of particles in a cell \nis large, and the molecular interactions randomize positions and momenta so that, for \ninstance, the fluctuations of a quantity $\\phi$ of size $O(N)$ are of order $O(\\sqrt{N})$. \nThe bulk of the cell then dominates in- and out-fluxes, and variations of $\\phi$ \nare sufficiently slow on the mesoscopic scale. Quantitatively, the space and time scales \nfor which this description holds depend on the properties of the microscopic components \nof the systems of interest, \\cite{Prigogine,Kreuzer,Spohn,Bellissard,Pigo1,Pigo2}. \n\nUnder the LTE condition, matter can be considered a continuum, obeying hydrodynamic laws,\n{\\em i.e.}\\ balance equations for locally conserved quantities, such \nas mass, momentum and energy \\cite{DeGroot,CR98,RC02}.
For small to moderate driving, \nthey take a linear form, in which the local material properties are expressed\nby the linear transport coefficients. Locality implies that such \ncoefficients do not depend on the conditions of the system far away from \nthe considered region. The thermal conductivity of an iron bar at a given temperature at a given \npoint in space does not depend on the conditions of the bar far from that region; cutting the bar \nin two, or joining it to another bar, without changing the local state, leaves unchanged its local \nproperties.\n\nFluctuations remain of course present in systems made of particles; they are larger for larger systems, \nthey may be observed \\cite{AURIGA1,AURIGA2}, and they play a major role \nin many circumstances, see {\\em e.g.}\\ Refs.\\cite{Mish,HickMish}. This\nmotivates a considerable fraction of research in statistical physics, {\\em e.g.}\\ \\cite{StochTherm,CasatiRev},\nconcerning scales much smaller than the macroscopic ones, or occurring in \nlow dimensional (1D and 2D) systems \\cite{Li1,Dhar,Li2,LepriEd,CasatiJ}. In these phenomena, the linear transport \ncoefficients do not always \nseem to exist \\cite{LepriEd}, the robustness and universality of the thermodynamic laws appear \nto be violated, and the behaviours appear to be strongly affected by boundary conditions and by all parameters \nthat characterize a given object \\cite{ExpAnomFourier,JeppsR,GibRon,Zhao1,Zhao2,onorato,Saito}. \nIt is also well known that chains of oscillators behave more like some kind of \n(non-standard) fluids than like solids, because of the loss of crystalline structure, caused by \ncumulative position fluctuations \\cite{Peierls}. Consequently, a fluid-like (possibly fluctuating) \ndescription has been adopted in a number of papers, cf.\\ Refs.\\cite{NaRam02,MaNa06}.\n\nIn driven systems, the situation is also problematic because equipartition may be violated \\cite{Breaking,Sengers,PSV17},\nthe statistics describing the state of the system are model dependent, and the ergodic properties of the particle dynamics are only partially understood \\cite{GGnoneq,EWSR2016}. Hence, \nthere is no universally accepted microscopic notion of nonequilibrium temperature \\cite{MR,Jou1,Jou2,JouBook,Criado,He,PSV17}. Further, a microscopic definition of heat flux requires a clear distinction between energy transport due to macroscopic motions (convection), and transport without macroscopic motions (conduction), cf.\\ Chapter 4 of Ref.\\cite{Zema}, and Section III.2 and Chapter XI of Ref.\\cite{DeGroot}. In 1D systems, this may not always be possible \\cite{inpreparat}.\n\nOne possible interpretation of these facts is that LTE is violated in some situations, hence that \nthermodynamic concepts may be inappropriate \\cite{JeppsR,GibRon}. Another interpretation is that \nthermodynamic notions should be modified to treat small and strongly nonequilibrium systems, see \n{\\em e.g.}\\ \\cite{MR,Jou1,Jou2,He}.\nIt is therefore interesting to investigate the validity and universality of the mechanical counterparts \nof thermodynamic quantities, in situations in which LTE is not expected to hold, and ``anomalous'' \nphenomena have been reported. \n\nWe address such questions by considering chains of $N$ Lennard-Jones oscillators interacting with \ndeterministic baths at their ends, and without on-site potentials.
Our central findings are that:\n\\begin{itemize}\n\\item\nthermostats at different temperatures induce $O(N)$ distortions of the \nequilibrium lattice, resulting in highly inhomogeneous chains;\n\\item\nthermostats induce collective order $O(N)$ fluctuations, \n{\\em i.e.}\\ ``macroscopic'' motions. Negligible incoherent $O(\\sqrt{N})$ vibrations typical of \n3D equilibrium systems are thus replaced by a kind of convective motion, even in \nchains bounded by still walls.\n\\end{itemize}\n\nThese observations, combined with the results of Ref.\\cite{Lebow} and further literature, \n{\\em e.g.}\\ Refs.\\cite{MR,Jou1,Jou2,JouBook,He,Breaking}, suggest \nthat microscopic definitions appropriate for 3-dimensional equilibrium thermodynamic quantities need \nextra scrutiny in 1D. In particular, we illustrate how $O(N)$ fluctuations and lattice distortions \naffect the collective behaviour of 1D systems, considering the notion of heat flux, $J$ \nsay, given by Eq.(23) of Ref.\\cite{LLP-PhyRep}. This way, we confirm from a different standpoint the conclusions\nreached in previous studies on the inapplicability of standard hydrodynamics \\cite{Hurtado,Politi}. We find \nthat:\n\\begin{itemize}\n\\item\n$J$ is not spatially uniform in steady states. Variations of $J$ decrease if the bath temperature difference is reduced \nat constant $N$, but they do not if the mean temperature gradient is reduced by increasing $N$ at constant bath \ntemperatures. \n\\item\nDividing $J$ by the local \nmass density partially balances the lattice inhomogeneity and yields an approximately uniform quantity.\n\\end{itemize}\nThese observations should be combined with those of Refs.\\cite{Politi,inpreparat}, according to which\ncollective and molecular motions are correlated, making it hard to disentangle convection from conduction,\nbecause single particles push their neighbors, producing a kind of convective cascade.\nThat the difficulties do not ease when $N$ grows indicates \nthat LTE, hence thermodynamic quantities, cannot be established in our 1D systems. We relate this fact \nto the $O(N)$ growth of fluctuations and lattice distortions.\n\n\n\n\\section{Chains of Lennard-Jones oscillators} \\label{sectII}\nConsider a 1D chain of $N$ identical moving particles of mass $m$ \nand positions $x_{i}$, $i=1,...,N$. Add two particles with fixed positions, $x_{0}=0$ and $x_{N+1}=(N+1)a$, where \n$a$ is the lattice spacing. Let nearest neighbors interact via the Lennard-Jones (LJ) potential:\n\\be\nV_{1}(r)=\\epsilon \\left [ \\left (\\frac{a}{r}\\right)^{12} - 2 \\left (\\frac{a}{r}\\right )^{6} \\right ],\n\\label{LJ}\n\\ee \nwhere $r$ is the distance between nearest neighbors: \\cla{$r=|x_{i}-x_{i-1}|$} and $\\epsilon>0$ is \nthe depth of the potential well. Thus, $x_{i}=ai$, with \n$\\, i=0,\\ldots, N+1$, is a configuration of stable mechanical equilibrium for the system. We also \nconsider interactions involving first and second nearest neighbors, with the second \npotential given by \\cite{1st2ndpaper}:\n\\be\n V_{2}(s)=\\epsilon \\left [ \\left (\\frac{2a}{s}\\right)^{12} - 2 \\left (\\frac{2a}{s}\\right )^{6} \\right ],\n\\label{1st2nd}\n\\ee \nwhere \\cla{$s=|x_{i}-x_{i-2}|$}.
Further, we add two particles with fixed positions $x_{-1}=-a$ and $x_{N+2}=(N+2)a$.\nWith potential $V = V_{1}+V_{2}$, the system has the usual stable mechanical equilibrium configuration $x_{i}=ai$, \n$\\, i=-1,\\ldots, N+2$.\nThe first and last moving particles are in contact with two Nos\\'e-Hoover thermostats, at kinetic temperatures \n$T_{L}$ (on the left) and $T_{R}$ (on the right) and with relaxation times $\\theta_L$ and $\\theta_R$. \nIntroducing the forces\n\\be\n\\label{F1F2}\nF_{1}(r)= \\frac{\\partial V_{1}}{\\partial r }(r) ,\\quad F_{2}(s)= \\frac{\\partial V_{2}}{\\partial s }(s) \\, ,\n\\ee \nthe equations of motion are given by:\n\\bea\n&&m \\ddot{x}_{1} = F_{1}(x_{1}) - F_{1}(x_{2}-x_{1})-\\xi_{1} \\dot{x}_{1}, \\\\\n&&m \\ddot{x}_{i} = F_{1}(x_{i}-x_{i-1}) - F_{1}(x_{i+1}-x_{i}), \\, i=2,...,N-1, \\;\\;\\;\\;\\; \\\\\n&&m \\ddot{x}_{N} = F_{1}(x_{N}-x_{N-1}) - F_{1}(x_{N+1}-x_{N})-\\xi_{N} \\dot{x}_{N},\n\\label{eqsmot}\n\\eea\nwith\n\\be\n\\dot{\\xi}_{1} = \\frac{1}{\\theta_{L}^{2}}\\left ( \\frac{m \\dot{x}_{1}^{2} }{T_{L}}-1\\right ),\\quad \\dot{\\xi}_{N} = \\frac{1}{\\theta_{R}^{2}}\\left ( \\frac{m \\dot{x}_{N}^{2} }{T_{R}}-1\\right ),\n\\ee\nin the case of nearest-neighbor interactions. For first and second neighbors interactions, we have:\n\\begin{widetext}\n\\bea\n&& \\hskip -18pt m \\ddot{x}_{1} = F_{1}(x_{1}) - F_{1}(x_{2}-x_{1}) + F_{2}(x_{1}+a) - F_{2}(x_{3}-x_{1}) -\\xi_{1}\\dot{x}_{1}, \\nonumber \\\\\n&& \\hskip -18pt m \\ddot{x}_{2} = F_{1}(x_{2}-x_{1}) - F_{1}(x_{3}-x_{2}) + F_{2}(x_{2}) - F_{2}(x_{4}-x_{2})-\\xi_{2}\\dot{x}_{2},\\nonumber \\\\\n&& \\hskip -18pt m \\ddot{x}_{i} = F_{1}(x_{i}-x_{i-1}) - F_{1}(x_{i+1}-x_{i}) + F_{2}(x_{i}-x_{i-2}) - F_{2}(x_{i+2}-x_{i}), \\qquad i=3,\\ldots, N-2, \\\\\n&& \\hskip -18pt m \\ddot{x}_{N-1} = F_{1}(x_{N-1}-x_{N-2}) - F_{1}(x_{N}-x_{N-1}) + F_{2}(x_{N-1}-x_{N-3}) - F_{2}(x_{N+1}-x_{N-1})-\\xi_{N-1}\\dot{x}_{N-1},\\nonumber \\\\\n&& \\hskip -18pt m \\ddot{x}_{N} = F_{1}(x_{N}-x_{N-1}) - F_{1}(x_{N+1}-x_{N}) + F_{2}(x_{N}-x_{N-2}) - F_{2}(x_{N+2}-x_{N})-\\xi_{N}\\dot{x}_{N},\\nonumber \n\\label{eqsmot2}\n\\eea\n\\end{widetext}\nwith\n\\be\n\\begin{split}\n\\dot{\\xi}_{l} &= \\frac{1}{\\theta_{L}^{2}}\\left ( \\frac{m \\dot{x}_{l}^{2} }{T_{L}}-1\\right ),\\, \nl=1,2,\\\\\n \\dot{\\xi}_{l} &= \\frac{1}{\\theta_{R}^{2}}\\left ( \\frac{m \\dot{x}_{l}^{2} }{T_{R}}-1\\right ),\\, l=N-1,N.\n\\end{split}\n\\ee\nThe hard-core nature of the LJ potentials preserves the order of particles:\n$0< x_{1} < x_{2} < \\cdots < x_{N} < x_{N+1}$.\n\n\\section{Fluid-like behaviour}\n\\label{fluid}\n\n\\begin{figure}\n\\caption{\\label{pag2June} Displacement of the mean particle positions from their mechanical equilibrium values, together with a linear fit for $N>400$. The label \nof the particle corresponding to the maximum lattice distortion is fitted by \n$i_{\\mbox{\\small mld}} = 0.6063 N - 6.804$ with $R^2=0.9997$.\n}\n\\end{figure}\nIn the lower panel of Fig.\\ref{page67June} and in Fig.\\ref{page89June}, square root fits and linear \nfits are compared for $N$ ranging from 64 to 6000. \nThe square root fits are appropriate for small $N$, while at large $N$ the linear \nfit takes over.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{Fig2a.eps}\n\\includegraphics[width=0.5\\textwidth]{Fig2b.eps}\n\\caption{\\label{page67June} Upper panel: standard deviations of the particles vibrations \nabout their average position, in lattice vectors units, for the case of Fig.\\ref{pag2June}.\nLower panel: dependence on $N$ (ranging from 64 to 6000) of the maximum standard deviation together with a linear\nfit for $N>400$ (continuous blue line) and a square root fit for lattices with $N<2100$ (dashed red line). \nGrowing linearly with $N$, collective vibrations look like convective motions. \nThe label of the particle corresponding to the maximum fluctuation amplitude is fitted by \n$i_{\\mbox{\\small mfa}} = 0.7398 N - 6.75$ with $R^2=0.9993$.\n}\n\\end{figure}
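\n\nFor completeness, a minimal sketch of this type of simulation is given below, for the nearest-neighbor chain and with a simple explicit integrator (the time step and parameter values are illustrative; production runs would employ a more accurate scheme):\n\\begin{verbatim}\nimport numpy as np\n\nN, a, m, eps = 64, 1.0, 1.0, 1.0     # chain size and LJ parameters\nTL, TR = 1.0, 4.0                    # bath kinetic temperatures\nthL = thR = 1.0                      # thermostat relaxation times\ndt, steps = 1.0e-4, 10**6\n\ndef dV1(r):\n    # derivative dV_1\/dr of the LJ potential V_1\n    return 12.0 * eps * (a**6 \/ r**7 - a**12 \/ r**13)\n\nx = a * np.arange(N + 2, dtype=float)  # x_0 = 0, x_{N+1} = (N+1)a fixed\nv = np.zeros(N + 2)\nxiL = xiR = 0.0                        # Nose-Hoover variables\n\nfor _ in range(steps):\n    r = np.diff(x)                     # bond lengths x_{i+1} - x_i\n    fb = dV1(r)                        # bond tensions\n    acc = np.zeros_like(x)\n    # -dV1(left bond) + dV1(right bond) on each moving particle\n    acc[1:-1] = (-fb[:-1] + fb[1:]) \/ m\n    acc[1] -= xiL * v[1] \/ m           # thermostatted end particles\n    acc[-2] -= xiR * v[-2] \/ m\n    v[1:-1] += dt * acc[1:-1]\n    x[1:-1] += dt * v[1:-1]\n    xiL += dt * (m * v[1]**2 \/ TL - 1.0) \/ thL**2\n    xiR += dt * (m * v[-2]**2 \/ TR - 1.0) \/ thR**2\n\\end{verbatim}\nSampling $x$ and $v$ along such a run provides the statistics of the mean positions and of the fluctuations discussed here.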
\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{Fig3a.eps}\n\\includegraphics[width=0.5\\textwidth]{Fig3b.eps}\n\\caption{\\label{page89June} Upper panel: dependence on $N$ of the standard deviations of the\nvibrations of particles at 1\/3 of the chain. Lower panel: dependence on $N$ of the standard \ndeviations of the vibrations at 2\/3 of the chain. In both cases, a square root and a linear \nfit are drawn. The square root fit holds at small $N$. At large $N$ the linear fit takes over. \nIn both panels $N$ ranges from 64 to 6000.\nParticle motions look more like some kind of convection than like microscopic lattice\nvibrations.}\n\\end{figure}\nThe size of these vibrations appears even more striking when one observes that, in order to displace one particle by a large amount, \na whole collection of particles must be correspondingly displaced. \nIndeed, the \nrepulsive part of the LJ potential does not allow the particle order to be modified, as noted also in Ref.\\cite{Politi}. \nTherefore, the motion of particles about their average positions is not an irregular motion about fixed \npositions. In accord with the observations on \npersistent correlations, this motion looks like a kind of convection, \nalthough LTE and standard hydrodynamics do not hold \\cite{GibRon,NaRam02,MaNa06,Hurtado,Politi}.\nIt follows that, in these cases, energy transport cannot be directly related to ``heat'' flows. \n\\begin{figure}\n\\hskip -20pt\n\\centering\n\\includegraphics[width=0.5\\textwidth]{Fig4.eps} \n\\caption{\\label{MeanDispNoGrad} Equilibrium simulations. Plot of the displacement\nof the mean position of particle $i$ from its mechanical equilibrium position, $\\langle x_i \\rangle - i a$, \nfor various values of $N$, for first and second neighbors interactions when $T_L=T_R=5$.\nThe deviations from the mechanical equilibrium are negligible.}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{Fig5a.eps}\n\\includegraphics[width=0.5\\textwidth]{Fig5b.eps}\n\\caption{\\label{equilibrio} Equilibrium simulations $(T_L=T_R=5)$ for $N$ ranging from 512 to \\cla{5000}. Upper panel: standard deviations of the particles vibrations \nabout their average position $(x_i- \\langle x_i \\rangle)$, in lattice vectors units.\nLower panel: dependence on $N$ of the maximum standard deviation, together with linear and square root fits. \nThis dependence on $N$ should not be confused with the $O(\\sqrt{i})$ dependence on $i$ of Ref.\\cite{Peierls}.}\n\\end{figure}\n\nThe situation is different for $T_L=T_R$. Figure \\ref{MeanDispNoGrad} shows \nthat the lattice deformations are much smaller than the lattice spacing $a$,\nand can be neglected. The computed values of \n$(\\langle x_i \\rangle -ia)$ practically vanish and do not depend on $N$.
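\n\nThe scaling analysis behind these figures can be sketched as follows, assuming that position snapshots of shape (samples, $N+2$) have been stored during runs like the one above:\n\\begin{verbatim}\nimport numpy as np\n\ndef fluctuation_profile(x_traj):\n    # standard deviation of each moving particle about its mean position\n    return x_traj[:, 1:-1].std(axis=0)\n\ndef scaling_fits(runs):\n    # runs: dict mapping N -> trajectory array of shape (samples, N+2)\n    Ns = np.array(sorted(runs))\n    smax = np.array([fluctuation_profile(runs[n]).max() for n in Ns])\n    lin = np.polyfit(Ns, smax, 1)           # linear fit (large N)\n    sqr = np.polyfit(np.sqrt(Ns), smax, 1)  # square root fit (small N)\n    return lin, sqr\n\\end{verbatim}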
\nThe standard deviation of \nthe vibrations about the mean position is represented in the upper panel of \nFig.\\ref{equilibrio} and it appears to be closer to $O(\\sqrt{N})$ than to $O(N)$, as can be seen in the lower panel of Fig.\\ref{equilibrio}.\nIn this case, in which there is no net energy transport, the system also behaves more like a fluid than\nlike a solid, in a sense closer to that of Ref.\\cite{Peierls}, although our results refer to a different situation.\n\n\n\\section{Heat flux}\n\n\\label{sec:heatflux}\nIn order to understand the effect of $O(N)$ fluctuations and lattice distortions on the\nbehaviour of usual microscopic quantities, let us consider the ``heat flux'' given\nby Eq.(23) of Ref.\\cite{LLP-PhyRep}. For the case of first and second nearest \nneighbors interactions, that expression must be modified as follows:\n\\be\n\\begin{split}\nJ_i=&\\frac 1 2 (x_{i+1}-x_i)F_1(x_{i+1}-x_i)(\\dot{x}_{i+1} + \\dot{x}_i) \\\\\n& + (x_{i+2}-x_i)F_2(x_{i+2}-x_i)(\\dot{x}_{i+2} + \\dot{x}_i)+ \\dot{x}_i h_i\\, ,\n\\label{exactJ}\n\\end{split}\n\\ee\nwhere $F_1$ and $F_2$ are defined by Eq.(\\ref{F1F2}) and $h_i$ is the energy of the $i$-th particle.\n\nThe quantity $J_i$ is only apparently ``local'' because it quantifies a flow through the position of particle $i$, \nand not through a fixed position in space. Moreover, it implicitly\nrequires small position fluctuations and small lattice deformations, because Eq.(\\ref{exactJ})\nis obtained through Fourier analysis for spatially homogeneous systems, in the limit of small \nwave vectors, \\cite{LLP-PhyRep,Dhar}. For instance, denoting by $k$ the wave-vector, Eq.(23) of \nRef.\\cite{LLP-PhyRep} follows from Eq.(21) only if $k(x_{n+1}-x_n)$ is small. \nOn the contrary, in our cases, this quantity strongly varies in space and time, and average lattice\ndistortions are of order $O(N)$, cf.\\ Section \\ref{fluid}. Therefore, one \nexpects $J_i$ to fail, and it is interesting to investigate how this failure \nis realized as the relevant model parameters are varied.\n\nFor chains with nearest neighbors Lennard-Jones interactions ($F_2\\equiv 0$ \nin Eq.(\\ref{exactJ})),\nwe find that, while the steady state heat flow should not depend on position, the time \naverage of $J_i$ substantially changes with $i$, cf.\\ Fig.\\ref{flussi}. \n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{Fig6a.eps} \n\\includegraphics[width=0.5\\textwidth]{Fig6b.eps} \n\\caption{\\label{flussi} \nChains with nearest neighbors Lennard-Jones interactions.\nUpper Panel: flux $\\langle J_i \\rangle$ computed according to (\\ref{exactJ}), for $N=64$, $T_L=1$, $T_R=4$. \nLower Panel: $\\langle J_i \\rangle$ for $N=64$, $T_L=1$, $T_R=64$.}\n\\end{figure}\nTo quantify this phenomenon, we introduce the relative variation of $\\langle {J}_i \\rangle$, \n$$\n\\delta= \\left|{\\max_i \\langle {J}_{i}\\rangle-\\min_i \\langle {J}_{i}\\rangle \\over \\bar{J}} \\right| ~, ~~\\mbox{where } ~\n\\bar{J}={1 \\over N} \\sum_{i}\\langle {J}_{i}\\rangle ~ .\n$$\nIn Tables \\ref{tab2bis} and \\ref{tab4}, for average temperature gradients similar to those commonly found in the \nliterature \\cite{LLP,Kabu}, we observe that $\\delta$ tends to grow with the temperature gradient, \nat fixed $N$.
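\n\nIn practice, the time average of Eq.(\\ref{exactJ}) and the relative variation $\\delta$ can be evaluated from stored snapshots as in the following sketch (nearest-neighbor case, $F_2\\equiv 0$; we take $h_i$ as the kinetic energy plus half of the two adjacent bond energies, one common convention):\n\\begin{verbatim}\nimport numpy as np\n\na, m, eps = 1.0, 1.0, 1.0\n\ndef V1(r):\n    return eps * ((a \/ r)**12 - 2.0 * (a \/ r)**6)\n\ndef dV1(r):\n    return 12.0 * eps * (a**6 \/ r**7 - a**12 \/ r**13)\n\ndef flux_profile(x, v):\n    # x, v: snapshots of shape (T, N+2); returns time-averaged J_i\n    r = x[:, 1:] - x[:, :-1]          # r[:, j] = x_{j+1} - x_j\n    F = dV1(r)\n    h = 0.5 * m * v**2                # per-particle energy h_i\n    h[:, 1:-1] += 0.5 * (V1(r[:, :-1]) + V1(r[:, 1:]))\n    J = (0.5 * r[:, 1:-1] * F[:, 1:-1] * (v[:, 2:-1] + v[:, 1:-2])\n         + v[:, 1:-2] * h[:, 1:-2])\n    return J.mean(axis=0)\n\ndef delta(J_mean):\n    # relative variation of the averaged flux along the chain\n    return abs((J_mean.max() - J_mean.min()) \/ J_mean.mean())\n\\end{verbatim}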
In general, however, reducing the average gradient by increasing the system size does not lead to smaller \n$\\delta$ \\cite{observ}.\n\n\\begin{table}[htp]\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\n $T_{R}$ & $\\delta_1$ & $\\delta_2$ \\\\\n\\hline\n 1.1 & 0.0240 & 0.0199 \\\\\n\\hline\n 1.5 & 0.0091 & 0.0077 \\\\\n\\hline\n 2 & 0.0142 & 0.0145 \\\\\n\\hline\n 4 & 0.0480 & 0.0481 \\\\\n\\hline\n 8 & 0.0831 & 0.0829 \\\\\n\\hline\n16 & 0.1060 & 0.1062 \\\\\n\\hline\n32 & 0.1199 & 0.1201 \\\\\n\\hline\n64 & 0.1229 & 0.1232\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{tab2bis} Relative variation $\\delta$ \nof the flux $J_i$ \\cla{for} $N=64$ particles with first and second nearest neighbors\ninteractions. $T_L=1$ while $T_R$ takes eight different values. $\\delta_1$ is \ncomputed averaging over $2\\cdot 10^9$ time steps, $\\delta_2$ over $4 \\cdot 10^9$ time steps. }\n\\end{table}\n\n\\begin{table}[htp]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n $T_{R}$ & $N=64$ & $N=128$ & $N=256$ \\\\\n\\hline\n 1.1 & 0.0240 & 0.0117 & 0.0110656 \\\\\n\\hline\n 1.5 & 0.0091 & 0.0297 & 0.0317283 \\\\\n\\hline\n 2 & 0.0142 & 0.0534 & 0.0555437 \\\\\n\\hline\n 4 & 0.0480 & 0.0817 & 0.104345 \\\\\n\\hline\n 8 & 0.0831 & 0.0659 & 0.0907829 \\\\\n\\hline\n16 & 0.1060 & 0.0683 & 0.0485491 \\\\\n\\hline\n32 & 0.1199 & 0.1560 & 0.0643797 \\\\\n\\hline\n64 & 0.1229 & 0.2306 & 0.195046 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{tab4} Relative variation $\\delta$ of the average fluxes $\\langle J_i\\rangle $ defined by Eq.(\\ref{exactJ}).\nChains with $N=64$, $N=128$, and $N=256$ particles with nearest neighbors interactions are considered. \nAverages are computed over $2\\cdot 10^9$ time steps. $T_L=1$, while $T_R$ takes eight different values.}\n\\end{table}\nWe conclude that, under our conditions, the quantity $J_i$ represents neither a heat nor an energy flow,\nand that this is not a consequence of the size of the temperature gradients, but of the size of the fluctuations.\nThese increase with growing $N$, thus preventing LTE and standard hydrodynamics in the large $N$ limit \\cite{Hurtado,GibRon,Politi}.\nIn the next section we propose a modification of $J_i$ that, by taking into account \nthe deformations of the lattice, is more stable than $J_i$ along the chain.\n\n\n\\section{Concluding remarks}\nIn this work we have presented numerical results concerning several kinds of 1D\nsystems of nonlinear oscillators, in contact with two Nos\\'e-Hoover thermostats.\nScrutinizing the behaviour of mechanical quantities that are commonly considered \nin the specialized literature, we have investigated the fluctuations and lattice \ndistortions, which are expected to prevent the establishment of ``thermodynamic'' \nregimes \\cite{GibRon,JeppsR,JouBook,Politi}. \n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{Fig7.eps}\n\\caption{\\label{normJ}\nNormalized energy flux $J^n$ (o) and the flux $J$ (+) defined by \\eqref{exactJ} for chains of different lengths \n($N=128$, $512$, $1000$, $2000$) and with $T_L=1$ and $T_R=10$. \nAlthough $J^n$ \nis not exactly constant, at large $N$ it enjoys small fluctuations about a given average value.}\n\\end{figure}\n\nThermodynamic properties emerge indeed from\nthe collective behavior of very large assemblies of interacting particles, if correlations \ndecay rapidly compared to observation time scales, and if boundary effects are negligible.
While this is often the case for 3D mesoscopic cells containing large numbers of properly
interacting particles, it is not obvious in 1D systems.

In particular, we have observed that temperature differences at the boundaries produce
$O(N)$ fluctuations and deformations of the lattice, which result in strongly inhomogeneous systems.
This should be taken into account when defining the heat conductivity.
Furthermore, these $O(N)$ effects imply that increasing $N$ does not bring our systems closer
to thermodynamic systems; consequently, standard hydrodynamics does not apply \cite{Hurtado,GibRon,Politi}.

In the light of the above observations, the definition of energy transport (\ref{exactJ}) may be
modified in order to take into account the lattice deformations. One possibility that may be considered
is to normalize the energy flow by the average distance between particles, introducing
\be
J^n_i = {J_i \over \langle x_{i+1} - x_i \rangle } ~, \qquad i = 2, \ldots , N-2\ .
\ee
The result, shown in Fig.\ref{normJ},
indicates that this avenue deserves further investigation, and will be the subject of future works.

\centerline{\bf ACKNOWLEDGMENTS}
The authors are grateful to Carlos Mejia-Monasterio for extensive discussions
and enlightening remarks, and to Antonio Politi for very useful suggestions;
in particular, Antonio Politi suggested the normalization of the energy flux.
This work is partially supported by Gruppo Nazionale per la
Fisica Matematica (GNFM-INdAM). C.G. and C.V. acknowledge financial support
from ``Fondo di Ateneo per la Ricerca 2016'' and ``Fondo di Ateneo per la Ricerca 2017''-
Universit\`a\ di Modena e Reggio Emilia.

\section{Introduction}
\Ac{cs} is a signal processing technique aiming at recovering a high-dimensional sparse vector from a (noisy) system of linear equations \cite{donoho2006compressed,CandesTao}. Joint sparsity refers to multiple vectors having the same support set\footnote{The support set of a vector consists of the indices of the vector's nonzero entries.}, whose cardinality is typically much lower than the signal dimension. There are two prominent \ac{cs} scenarios \cite{cotter2005sparse,duarte2005joint} in the context of joint sparsity: (i) the \ac{mmv} problem, where the measurement matrices are identical, and (ii) the \ac{dcs} problem, where the measurement matrices are independent. Joint sparsity arises in a number of real-world scenarios, e.g., when multiple sensors or antennas observe the same signal corrupted by different channels and noise (e.g., \cite{cotter2005sparse,duarte2005joint}).
A prime example is radio frequency identification, where the observed vectors are the received signals at different antennas (of the same receiver) \cite{mayer2016exploiting}. Further typical applications are magnetic resonance imaging \cite{liang2009parallel}, distributed networks \cite{wimalajeewa2014omp}, wireless communications \cite{mayer2016exploiting}, and direction-of-arrival estimation \cite{tzagkarakis2010multiple}.

In this work, we investigate an \ac{amp} solution for joint sparse recovery when there is possible correlation between the signals (and the noise). We then evaluate this algorithm in the context of single-pixel color imaging \cite{duarte2008sp}.
In particular, we show the potential of joint recovery that exploits the correlation between the red, green, and blue (RGB) color intensity channels.

\subsection{Related Work}
Several methods for joint sparse recovery have been proposed in the literature
\cite{cotter2005sparse,tropp2006algorithms1,tropp2006algorithms2,wipf2007empirical,davies2012rank,schniter2010turbo,wimalajeewa2014omp,ziniel2013efficient,mayer2015bayesian,zhao2015joint,lu2016independent,kim2011belief}. \Ac{amp} was introduced in \cite{donoho2009message,DonohoITWmessagepassingI,DonohoITWmessagepassingII} as a large-system relaxation of loopy belief propagation for solving a random linear system with a sparsity constraint. Scalar \ac{bamp}, its Bayesian version \cite{maleki2010approximate,montanari2012graphical}, uses the signal prior explicitly and is an efficient approximate \ac{mmse} estimator. The turbo \ac{bamp} methods in \cite{schniter2010turbo,ziniel2013efficient,mayer2015bayesian}, and their generalization in \cite{rangan2017} for clustered sparse signals, improve the recovery performance by exchanging extrinsic information about the current support estimate in each message-passing iteration.
In \cite{zhao2015joint,lu2016independent,zhu2016performance}, joint sparsity is directly enforced by an appropriate vector estimator (denoiser) function for the \ac{bg} prior.

The \ac{se} formalism developed in \cite{DonohoITWmessagepassingI, DonohoITWmessagepassingII, bayati2011dynamics} analytically predicts the recovery performance of (B)\acs{amp} algorithms. \ac{se} was employed to analyze \ac{bamp} for joint sparsity with a vector estimator and to point out the difference between the \ac{dcs} and \ac{mmv} scenarios in \cite{lu2016independent}. Recent works rigorously prove the \ac{se} for non-separable non-linearities \cite{berthier2017} and for a class of sliding-window denoisers \cite{ma2017} with Gaussian i.i.d. measurement matrices. Furthermore, the \ac{se} of the Vector AMP has been derived for a large class of right orthogonally invariant random sensing matrices \cite{rangan2017vamp}. (We highlight that the acronym Vector AMP should not be confused with the vector-prior version of BAMP, considered in this paper for the MMV/DCS problems.)

In \cite{zhu2016performance}, the replica method (a statistical physics tool for large disordered systems) is used to calculate the \ac{mmse} of the \ac{cs} measurement (note that \cite{zhu2016performance} refers to \ac{mmv} and \ac{dcs} as MMV-2 and MMV-1, respectively).
The replica trick non-rigorously simplifies the high-dimensional integral for the \ac{mmse} of the Bayesian estimator of the \ac{cs} channel, thereby leading to the free energy as a function of the \ac{mse}. The replica analysis in \cite{zhu2016performance} is performed for the \ac{bg} signal prior with uncorrelated isotropic unit-variance signal and uncorrelated isotropic Gaussian noise distribution, i.e., with a single noise parameter.

\subsection{Contributions}
We consider the vector-prior \ac{bamp} algorithm for the \ac{dcs} and \ac{mmv} problems, which uses an appropriate vector \ac{mmse} estimator function and Onsager correction term to exploit the joint sparsity structure, the signal distribution, and the noise covariance.
We provide an analytical performance prediction for the \ac{bamp} algorithm with a \ac{bg} signal prior with arbitrary signal and noise correlation by (i) incorporating a linear joint decorrelation of the measurements, (ii) showing the equivariance of \ac{Vbamp} w.r.t.\ invertible linear transformations, and (iii) extending the replica analysis from \cite{zhu2016performance} to arbitrary diagonal noise covariance matrices.

In particular, the joint decorrelation yields a simpler equivalent measurement model with diagonal signal and noise covariance matrices (under mild conditions, one of the covariance matrices can be made the identity matrix). The simplified model naturally provides the measurement \acp{snr} of each signal vector and substantially reduces the complexity of the \ac{Vbamp} iterations. We further show that the \ac{Vbamp} algorithm is equivariant w.r.t.\ invertible linear transformations; thus, it preserves its properties across iterations in the transformed domain and delivers a result equivalent to that obtained with the original measurements and covariance parameters. For the widely used \ac{bg} prior, we prove that the \ac{Vbamp} iterations (and the corresponding \ac{se}) preserve the diagonal structure of the (effective) noise covariance, thus implying that a $B$-dimensional state (instead of $B(B+1)/2$ dimensions) is sufficient and that every \ac{mmv} problem can be transformed into an equivalent \ac{dcs} problem. Finally, we extend the replica analysis in \cite{zhu2016performance} to the case of anisotropic noise (i.e., $B$ noise parameters instead of just one).
The replica analysis yields the $B$ measurement-wise \acp{mse} of the \ac{Vbamp} estimate in its fixed points.
We use both real-world and synthetic images to compare MMV-BAMP to state-of-the-art scalar recovery algorithms and to joint sparsity-aware algorithms in the context of single-pixel color imaging.

\subsection{Outline}
The remainder of this paper is organized as follows. In Section \ref{sec:vbamp}, we discuss the \ac{Vbamp} algorithm, the estimator function for the multivariate \ac{bg} signal prior, and the multivariate state evolution of \ac{Vbamp}. In Section \ref{sec:sec3}, the joint decorrelation of the signal and the noise vectors is investigated in the context of \ac{Vbamp} and state evolution; the multivariate \ac{bg} signal prior is studied as a special case. In Section \ref{sec:replicanalysis}, we present the multivariate free energy formula for arbitrary diagonal noise covariance matrices (the details of the replica analysis are relegated to the appendix). Section \ref{sec:anisodyn} provides a qualitative discussion and open questions regarding the effects of signal correlation and of the increasing number of jointly sparse vectors on the dynamics of \ac{Vbamp}. Section \ref{sec:sin_pix} evaluates the MMV-BAMP algorithm on a simplified single-pixel imaging problem, highlighting the benefits of exploiting signal correlation across channels. We close with conclusions in Section \ref{sec:concl}.

\subsection{Notation}\label{subs:not}
Uppercase (lowercase) boldface letters denote matrices (vectors), and serif letters denote random quantities.
For a matrix $\\mbf{A}$ (vector $\\mbf{a}$), $\\mbf{A}_{i}$ ($a_i$) denotes its $i$th row ($i$th entry) and $\\mbf{a}_{i}$ its $i$th column.\nThe all zero matrix and the identity matrix of dimension $M\\times N$ are denoted by $\\mbf{0}_{M\\times N}$ and $\\mathbf{I}_{M\\times N}$, respectively (we omit the subscript if the dimensions are clear from the context). The Dirac delta (generalized) function is $\\delta(\\mathbf{x})$. The normal distribution with mean $\\boldsymbol\\mu$ and covariance matrix $\\boldsymbol{\\Sigma}$ is denoted by $\\mcal{N}(\\boldsymbol\\mu,\\boldsymbol{\\Sigma})$ and $\\mcal{N}(\\mbf{x};\\boldsymbol\\mu,\\boldsymbol{\\Sigma})$ denotes the value of this normal \\ac{pdf} at $\\mbf{x}$. The outer product of a column vector $\\mathbf{x}$ with itself is denoted by $\\boldsymbol{\\langle} \\mathbf{x} \\boldsymbol{\\rangle} = \\mathbf{x} \\mathbf{x}^T$. For a vector $\\mathbf{x}=(x_1,\\ldots,x_B)^T$, $\\diag(\\mathbf{x})$ and $\\diag(x_1,\\ldots,x_B)$ denote the diagonal matrix whose $i$th diagonal element equals $x_i$. For a matrix $\\mathbf{X}$, $D(\\mathbf{X})$ is the diagonal matrix whose diagonal is identical to that of $\\mathbf{X}$, i.e., $D(\\cdot)$ is the orthogonal projection that zeros the off-diagonal elements. The Kronecker product of two matrices is denoted by $\\otimes$.\n\n\n\\section{BAMP with Vector Denoiser}\n\\label{sec:vbamp}\n\n\\subsection{Measurement Model}\n\nWe consider the measurement model\n\\begin{equation}\n \\mathbf{y}(b) = \\mbf{A}(b)\\mathbf{x}(b) + \\mathbf{w}(b)\\,, \\label{eq:matrixmeasurement}\n\\end{equation}\nwith $\\mathbf{y}(b)\\in\\mbb{R}^M$, $\\mathbf{x}(b)\\in\\mbb{R}^N$, $\\mathbf{w}(b)\\in\\mbb{R}^M$, and $\\mbf{A}(b)\\in\\mbb{R}^{M\\times N}$, for $b=1,\\ldots,B$.\nWe denote the measurement rate by $R = {M}\/{N}$.\nWe assume that the measurement matrices $\\mbf{A}(b)$ are realizations of Gaussian or Rademacher random matrices \\cite{foucart2013mathematical} with normalized columns.\nIf the measurement matrices $\\mbf{A}(b)$ are identical (i.e., $\\mbf{A}(b) = \\mbf{A}$, $b=1,\\dots,B$) we have an \\ac{mmv} scenario; if they are mutually independent then we have a \\ac{dcs} scenario.\nWe define the length-$B$ column vectors\n\\begin{align}\n \\vec{\\xv}_n &= (x_n(1),\\ldots,x_n(B))^T, \\nonumber\\\\\n \\vec{\\yv}_m &= (y_m(1),\\ldots,y_m(B))^T, \\\\\n \\vec{\\wv}_m &= (w_m(1),\\ldots,w_m(B))^T \\nonumber\n\\end{align}\n(similar notation will be used throughout the paper).\nJoint sparsity (cf.\\ JSM-2 in \\cite{duarte2005joint}) with sparsity (or nonzero probability) $\\epsilon$ requires that $\\vec{\\xv}_n=\\mathbf{0}$ with probability $1-\\epsilon$ and $\\vec{\\xv}_n\\ne\\mathbf{0}$\nwith probability $\\epsilon$.\nIn this work, we focus on signals with multivariate \\ac{bg} \\ac{pdf}, i.e., \n\\begin{equation}\nf_{\\vec{\\xvs}_n}(\\vec{\\xv}_n) = f_{\\vec{\\xvs}}(\\vec{\\xv}_n) =\n(1-\\epsilon) \\,\n\\delta(\\vec{\\xv}_n)\n+ \\epsilon\\, \\mcal{N}(\\vec{\\xv}_n;\\mathbf{0},\\boldsymbol{\\Sigma}_{\\xvecs}) , \\label{eq:signal_prior}\n\\end{equation}\n\\ac{iid} over $n$; here, $\\boldsymbol{\\Sigma}_{\\xvecs}$ is the covariance matrix of $\\vec{\\xv}_n$ given that it is non-zero vector.\nThe additive noise in \\eqref{eq:matrixmeasurement} is assumed to be \\ac{iid}\\ Gaussian over $m$ with zero mean and covariance $\\boldsymbol{\\Sigma}_{\\wvecs}$,\n\\begin{equation}\n\t\\vec{\\wvs}_m \\sim \\mcal{N}(\\mathbf{0}, \\boldsymbol{\\Sigma}_{\\wvecs}) \\,.\n\\end{equation}\n\n\n\\subsection{Vector-prior BAMP for MMV\/DCS} 
\label{subsec:vbamp}

\begin{algorithm}[t]
	\caption{\acs{Vbamp} for MMV/DCS}
	\label{alg:vbamp}
	\begin{algorithmic}[1]
		\State {\bf input:} $\mathbf{y}(b)$, $\mbf{A}(b)$, $\boldsymbol{\Sigma}_{\xvecs}$, $\epsilon$, $t_{\max}$, $\varepsilon_{\mathrm{tol}}$
		\State $t=0$, $\hat{\vec{\xv}}_n^t=\mathbf{0}_{B\times 1}$, $\vec{\rv}_m^t=\vec{\yv}_m$, $\forall m, n$
		\Do
		\State $t \leftarrow t+1$
		\State $\mathbf{u}^{t-1}(b)= \hat{\xv}^{t-1}(b) + \mbf{A}(b)^T \mathbf{r}^{t-1}(b)$, $\forall b$
		\State $\boldsymbol{\Sigma}_{\vvecs}^{t-1} = \begin{cases} {\boldsymbol{\Sigma}_{\vec{r}}^{t-1}}
 &\mbox{ for MMV} \\ D\big( {\boldsymbol{\Sigma}_{\vec{r}}^{t-1}}
 \big) &\mbox{ for DCS} \end{cases}$
		\State $\hat{\vec{\xv}}_n^t = F(\vec{\uv}_n^{t-1} ; \boldsymbol{\Sigma}_{\vvecs}^{t-1})$, $\forall n$
		\State $\vec{\rv}_m^{t} = \vec{\yv}_m - \big(\mbf{A}(1) \hat{\xv}^t(1),\ldots,\mbf{A}(B) \hat{\xv}^t(B)\big)_m \hspace*{15mm}$
		\hspace*{10mm} $\phantom{=}+ \frac{1}{M} \sum_{n=1}^{N} F'(\vec{\uv}_n^{t-1} ; \boldsymbol{\Sigma}_{\vvecs}^{t-1}) \vec{\rv}_m^{t-1}$, $\forall m$
		\doWhile{$\sum_{b=1}^{B}\left\|\hat{\xv}^{t}(b)\!-\!\hat{\xv}^{t-1}(b) \right\|^2_2 \!> \! \varepsilon_{\mathrm{tol}} \sum_{b=1}^{B}\left\|\hat{\xv}^{t-1}(b)\right\|^2_2$ {\bf and} $t < t_{\max}$}
		\State
		\Return $\hat{\xv}(b)=\hat{\xv}^t(b)$,
		$\forall b$
	\end{algorithmic}
\end{algorithm}

The \ac{Vbamp} method for joint sparse recovery of $\mathbf{x}(b)$, $b=1,\dots,B$,
\cite{kim2011belief,zhao2015joint} is summarized in Algorithm \ref{alg:vbamp}
(the superscript $t$ indicates the iteration index).
Note that scalar \ac{bamp} (i.e., \ac{Vbamp} with $B=1$) is a special case of Algorithm \ref{alg:vbamp} for which \ac{mmv} and \ac{dcs} are equivalent.
Vector-prior BAMP follows steps similar to those of ordinary scalar \ac{bamp} \cite{donoho2009message,maleki2010approximate,montanari2012graphical,DonohoITWmessagepassingI,DonohoITWmessagepassingII,bayati2011dynamics}. According to the decoupling principle \cite{montanari2012graphical}, which holds in the asymptotic regime where $M,N\rightarrow \infty$ while $\frac{M}{N} = R$, the \ac{Vbamp} algorithm decouples the \ac{cs} measurements \equref{eq:matrixmeasurement} according to
\begin{equation}
\vec{\uv}_n^t=\vec{\xv}_n + \vec{\vv}_n^t ,
\end{equation}
where the effective noise vector is distributed as ${\vec{\vvs}_n^t \sim \mcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_{\vvecs}^t)}$.
The effective noise covariance is estimated via the empirical covariance $\boldsymbol{\Sigma}_{\vec{r}}^{t-1}=\cov\!\left\{ \vec{\rv}_m^{t-1} \right\}$ of the vectors $\vec{\rv}_m^{t-1}$ in line 6 of Algorithm \ref{alg:vbamp}. It has been shown in \cite{lu2016independent} that in the \ac{dcs} scenario only the diagonal entries of the covariance matrix are retained, due to the mixing effected by the $B$ mutually independent measurement matrices.
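To make the structure of Algorithm \ref{alg:vbamp} concrete, the following minimal NumPy sketch (our own illustration, not a reference implementation) performs one \ac{Vbamp} iteration; for simplicity a common matrix $\mbf{A}$ is used, as in \ac{mmv}, and the \texttt{dcs} flag only illustrates the diagonal restriction in line 6. The denoiser $F$ and its Jacobian $F'$ are passed as callables (e.g., the \ac{bg} denoiser derived in the next subsection):
\begin{verbatim}
import numpy as np

def vbamp_iteration(Y, A, X_hat, R_res, F, Fprime, dcs=False):
    """One V-BAMP iteration (Algorithm 1) for Y = A X + W.
    Y, X_hat, R_res: M x B, N x B, M x B arrays; F(u, Sigma) and
    Fprime(u, Sigma) return the denoised vector and its Jacobian."""
    M, N = A.shape
    U = X_hat + A.T @ R_res                   # line 5: decoupled measurements
    Sigma_v = np.atleast_2d(np.cov(R_res, rowvar=False))  # line 6
    if dcs:
        Sigma_v = np.diag(np.diag(Sigma_v))   # DCS: keep diagonal only
    X_new = np.vstack([F(U[n], Sigma_v) for n in range(N)])   # line 7
    onsager = sum(Fprime(U[n], Sigma_v) for n in range(N)) / M
    R_new = Y - A @ X_new + R_res @ onsager.T  # line 8: residual + Onsager
    return X_new, R_new, Sigma_v
\end{verbatim}
The stopping rule of Algorithm \ref{alg:vbamp} is then implemented by iterating this function until the relative change of \texttt{X\_hat} falls below $\varepsilon_{\mathrm{tol}}$ or $t_{\max}$ iterations are reached.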
In the following, we simplify notation by occasionally dropping the indices $t$ and $n$.

The vector denoiser in \ac{Vbamp} (line 7 of Algorithm \ref{alg:vbamp}) amounts to a vector \ac{mmse} estimator of $\vec{\xv}_n$ given the decoupled measurements $\vec{\uv}_n$.
Using Bayes' theorem, the denoiser can be written as
\begin{equation}
\begin{aligned}
F(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs}) &= \Exp_{\vec{\xvs}} \left\{\vec{\xvs} \mid \vec{\uvs} = \vec{\uv} ; \boldsymbol{\Sigma}_{\vvecs} \right\}\\
&= \frac{\int_{\mathbb{R}^{B}} \vec{\zv} \,\mathcal{N}(\vec{\uv};\vec{\zv},\boldsymbol{\Sigma}_{\vvecs}) f_{\vec{\xv}}(\vec{\zv}) \,d \vec{\zv}}{\int_{{\mathbb{R}^{B}}} \mathcal{N}(\vec{\uv};\vec{\zv},\boldsymbol{\Sigma}_{\vvecs}) f_{\vec{\xv}}(\vec{\zv}) \,d \vec{\zv}} \,,
\end{aligned}
\label{eq:F}
\end{equation}
where the covariance of the effective noise is $\boldsymbol{\Sigma}_{\vvecs} = \boldsymbol{\Sigma}_{\vec{r}}$ (\ac{mmv}) or $\boldsymbol{\Sigma}_{\vvecs} = D(\boldsymbol{\Sigma}_{\vec{r}})$ (\ac{dcs}).
For the multivariate \ac{bg} prior \equref{eq:signal_prior}, the vector denoiser becomes
\begin{align}
F(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs}) = \mathbf{W}\vec{\uv} \qquad\text{with }\;
\mathbf{W} = \frac{F_N(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs})}{F_D(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs})}\boldsymbol{\Sigma}_{\xvecs} \boldsymbol{\Sigma}_{\uvecs}^{-1} .
\label{eq:2_Fvec_bernoulli_gaussian}
\end{align}
Here, $\boldsymbol{\Sigma}_{\uvecs} = \boldsymbol{\Sigma}_{\xvecs} + \boldsymbol{\Sigma}_{\vvecs}$ and
\begin{align}\label{eq:BGprior}
F_N(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs}) & = \epsilon\, \mcal{N}(\vec{\uv};\mathbf{0},\boldsymbol{\Sigma}_{\uvecs}),\\
F_D(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs}) & = (1-\epsilon)\, \mcal{N}(\vec{\uv};\mathbf{0},\boldsymbol{\Sigma}_{\vvecs}) + \epsilon\, \mcal{N}(\vec{\uv};\mathbf{0},\boldsymbol{\Sigma}_{\uvecs}) . \nonumber
\end{align}
The denoiser \equref{eq:2_Fvec_bernoulli_gaussian} consists of a multivariate Gaussian Wiener estimator followed by a joint shrinkage operation.

The \ac{Vbamp} residual is computed in line 8 of Algorithm \ref{alg:vbamp}.
As in the original \ac{amp} derivation \cite{maleki2010approximate}, the Onsager correction term for the residual $\vec{\yv}_m - \big(\mbf{A}(1) \hat{\xv}(1),\ldots,\mbf{A}(B) \hat{\xv}(B)\big)_m$ is computed via the derivative of the estimator.
In the asymptotic regime, the Onsager term
\begin{equation}\label{eq:onsager}
	\frac{1}{M} \sum_{n=1}^{N} F'(\vec{\uv}_n ; \boldsymbol{\Sigma}_{\vvecs}) \,\vec{\rv}_m
\end{equation}
renders the decoupled measurement vectors $\vec{\uv}_n$ Gaussian with mean $\vec{\xv}_n$ and covariance $\boldsymbol{\Sigma}_{\vvecs}$ \cite{kim2011belief,zhu2016performance}.
Here, the Jacobian matrix $F'(\vec{\uv} ; \boldsymbol{\Sigma}_{\vvecs}) = {dF(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs})}/{d\vec{\uv}^T}$ of the estimator $F(\vec{\uv} ; \boldsymbol{\Sigma}_{\vvecs})$ is given by
\begin{equation}
F'(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs}) = \mathbf{W} - \left(1 - \frac{F_N(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs})}{F_D(\vec{\uv}; \boldsymbol{\Sigma}_{\vvecs})}\right) \mathbf{W} \vec{\uv} \vec{\uv}^T (\boldsymbol{\Sigma}_{\uvecs}^{-1}-\boldsymbol{\Sigma}_{\vvecs}^{-1})\,.
\end{equation}

The algorithm runs until the relative change in the estimated signal falls below the threshold $\varepsilon_{\mathrm{tol}}$ or the maximum number of iterations $t_{\max}$ is reached.
Compared to scalar \ac{bamp}, \ac{Vbamp} involves the following crucial modifications:
\begin{itemize}
\item a multivariate prior (possibly with joint sparsity structure and correlation);
\item the estimator \equref{eq:F} acts on vectors rather than scalars, and both correlated signal and correlated additive noise are taken into consideration (more precisely, the full signal and noise vector \ac{pdf} is taken into account);
\item an Onsager term obtained as the sum of Jacobian matrices (cf.\ \equref{eq:onsager}).
\end{itemize}

\subsection{State Evolution}

\ac{se} was originally proposed in \cite{donoho2009message} for scalar (B)AMP and extended to the \ac{mmv} and \ac{dcs} scenarios (e.g., in \cite{lu2016independent}); it allows one to characterize analytically the expected behavior of \ac{Vbamp}
(note that the Onsager term in \cite{lu2016independent} is flawed
even though the multivariate \ac{se} is correct).
In particular, the \ac{se} equation predicts the evolution of the effective noise covariance (the state)
for any signal prior $f_{\vec{\xvs}}(\vec{\xv}_n)$ as
\begin{align}
\boldsymbol{\Sigma}_{\vvecs}^{t+1}
= \begin{cases}
\boldsymbol{\Sigma}_{\wvecs} + \frac{1}{R} \Exp_{\vec{\xvs},\vec{\vvs}} \left\{ \boldsymbol{\langle} \mathbf{e}(\vec{\xvs},\vec{\vvs}) \boldsymbol{\rangle} \right\} & \mbox{for MMV},\\
D\!\left(\boldsymbol{\Sigma}_{\wvecs} + \frac{1}{R} \Exp_{\vec{\xvs},\vec{\vvs}} \left\{ \boldsymbol{\langle} \mathbf{e}(\vec{\xvs},\vec{\vvs})\boldsymbol{\rangle}\right\}\right) & \mbox{for DCS} ,
\end{cases} \label{eq:stateevolution}
\end{align}
where $\mathbf{e}(\vec{\xvs},\vec{\vvs})=F(\vec{\xvs}+\vec{\vvs};\boldsymbol{\Sigma}_{\vvecs}^{t}) - \vec{\xvs}$ is the error achieved by the
\ac{mmse} estimator $F(\vec{\uv};\boldsymbol{\Sigma}_{\vvecs}^t)$ and $\vec{\vvs} \sim \mcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{\vvecs}^t)$.
The state in the \ac{mmv} scenario is in general $B(B+1)/2$-dimensional (since the covariance matrix is symmetric).
From \equref{eq:stateevolution}, the \ac{mse} prediction directly follows as
\begin{align*}
\cov \{\vec{\uv}^t_n - \vec{\xv}_n\} &= \boldsymbol{\Sigma}_{\vvecs}^t , \\
\widehat{\mathrm{MSE}}{}^t(b) &= R\,(\boldsymbol{\Sigma}_{\vvecs}^t - \boldsymbol{\Sigma}_{\wvecs})_{b,b} ,
\end{align*}
with the \ac{mse} per channel being defined as
\begin{equation*}
\mathrm{MSE}^t(b) = \frac{1}{N}\|\hat{\xv}^t(b) - \mathbf{x}(b)\|_2^2 .
\end{equation*}


\section{Diagonalized Vector-prior BAMP} \label{sec:sec3}

\subsection{Joint Diagonalization for MMV} \label{sec:gevd}

\begin{algorithm}[!t]

	\caption{joint diagonalization transformation}
	\label{alg:gevd}
	\begin{algorithmic}[1]
		\State Given $\boldsymbol{\Sigma}_{\xvecs},\boldsymbol{\Sigma}_{\wvecs}$
		\State \hspace*{5mm} find $\mathbf{P}$ such that $\mathbf{P}\Pv^T=\boldsymbol{\Sigma}_{\wvecs}$
		\State
\\hspace*{5mm} $\\mathbf{G} = \\mathbf{P}^{-1} \\boldsymbol{\\Sigma}_{\\xvecs} \\mathbf{P}^{-T}$\n\t\t\\State \\hspace*{5mm} find eigendecomposition $\\mathbf{Q} \\boldsymbol{\\Lambda} \\mathbf{Q}^T=\\mathbf{G}$\n\t\t\\State \\hspace*{5mm} $\\mathbf{T} = \\boldsymbol{\\Lambda}^{-1\/2}\\mathbf{Q}^T\\mathbf{P}^{-1}$\n\t\\end{algorithmic}\n\\end{algorithm}\n\nThe \\ac{Vbamp} algorithm in Section \\ref{subsec:vbamp} can deal with arbitrary signal and noise correlations $\\boldsymbol{\\Sigma}_{\\xvecs}$ and $\\boldsymbol{\\Sigma}_{\\wvecs}$, which in general results in a nondiagonal $\\boldsymbol{\\Sigma}_{\\vvecs}^t$ in the \\ac{mmv} scenario.\nIn the decoupled measurements $\\vec{\\uv}=\\vec{\\xv} + \\vec{\\vv}$, it means that there are $\\mcal{O}(B^2)$ \\ac{snr} relations and $B(B+1)\/2$ states: each $\\mathsf{x}_n(b)$ correlates with all $\\mathsf{x}_n(b')$, $b'\\in\\{1,\\ldots,B\\}\\setminus\\{b\\}$, and it is influenced simultaneously by all effective noise components $\\mathsf{v}(b')$, $b'\\in\\{1,\\ldots,B\\}$.\n\nUnder the assumption that the covariance matrices $\\boldsymbol{\\Sigma}_{\\xvecs}$ and $\\boldsymbol{\\Sigma}_{\\wvecs}$ are full rank and using the fact that covariance matrices are symmetric and positive definite and \\cite[Thm.~7.6.1.]{horn2012matrix}, there exists a nonsingular (but generally non-orthogonal) matrix $\\mathbf{T}$ that simultaneously diagonalizes the covariance matrices of the signal $\\vec{\\xv}$ and the noise $\\vec{\\wv}$.\nThe computation of $\\mathbf{T}$ is described in Algorithm \\ref{alg:gevd}. \nIn the transformed model\n\\begin{align}\n\\tilde{\\vec{\\yv}}_m = \\mathbf{T} \\vec{\\yv}_m , \\quad \\tilde{\\vec{\\xv}}_n &= \\mathbf{T} \\vec{\\xv}_n , \\quad \\tilde{\\vec{\\wv}}_m = \\mathbf{T} \\vec{\\wv}_m \\label{eq:transformedmodel2} ,\n\\end{align}\nwe thus have \n\\begin{align*}\n\\boldsymbol{\\Sigma}_{\\tilde{\\xvecs}} & = \\mathbf{T} \\boldsymbol{\\Sigma}_{\\xvecs} \\mathbf{T}^T = \\mathbf{I}_{B\\times B} , \\\\\n\\boldsymbol{\\Sigma}_{\\tilde{\\wvecs}} & = \\mathbf{T} \\boldsymbol{\\Sigma}_{\\wvecs} \\mathbf{T}^T = \\boldsymbol{\\Lambda}^{-1}=\\epsilon\\diag \\left(\\frac{1}{\\snr(1)}, \\ldots, \\frac{1}{\\snr(B)}\\right) .\n\\end{align*}\nHere, the per-channel SNRs are defined as\n\\begin{equation*}\n\t\\mathrm{SNR}(b) = \\frac{\\Exp_{\\mbsf{x}}\\! 
\\left\\{\\|\\mbf{A}(b)\\tilde{\\mbsf{x}}(b)\\|^2_2\\right\\}}{\\Exp_{\\mbsf{w}} \\!\\left\\{\\|\\tilde{\\mbsf{w}}(b)\\|^2_2\\right\\}} = \\epsilon\\,\\boldsymbol{\\Lambda}_{b,b}.\n\\end{equation*}\nNote that the decorrelation can be applied also in the \\ac{dcs} scenario, given that only the noise covariance $\\boldsymbol{\\Sigma}_{\\wvecs}$ is nondiagonal and the signal covariance $\\boldsymbol{\\Sigma}_{\\xvecs}$ is diagonal.\nWe emphasize that in case \\ac{Vbamp} operates on the transformed measurements, \nthe change in the prior distribution has to be accounted for in a nontrivial manner.\nThat is, the \\ac{mmse} estimator \\equref{eq:F} and its derivative will have a different form.\nConsider the \\ac{se} equation \\equref{eq:stateevolution} that describes the expected evolution of the effective noise covariance over the \\ac{Vbamp} iterations.\nIn the \\ac{mmv} scenario,\neven if $\\boldsymbol{\\Sigma}_{\\wvecs}$ and $\\boldsymbol{\\Sigma}_{\\vvecs}^t$ are diagonal, $\\boldsymbol{\\Sigma}_{\\vvecs}^{t+1}$ in general will not be diagonal because the estimator $F(\\vec{\\uv}^t_n,\\boldsymbol{\\Sigma}_{\\vvecs}^t)$ operates on the overall vector $\\vec{\\uv}^t_n$ ({nonetheless} the diagonalization described in Algorithm \\ref{alg:gevd} could be performed repeatedly in each iteration).\nHowever, we will see shortly that in the particular case of the \\ac{bg} prior this is no longer the case.\nA direct calculation reveals that\n\\begin{equation*}\n\\cov\\{ \\vec{\\yvs}_m \\} =\n\\begin{cases}\n\\boldsymbol{\\Sigma}_{\\wvecs} + \\frac{1}{R}\\cov\\{\\vec{\\xvs}_n\\} &\\mbox{for MMV}, \\\\\n\\boldsymbol{\\Sigma}_{\\wvecs} + \\frac{1}{R} D\\left( \\cov\\{\\vec{\\xvs}_n\\} \\right) &\\mbox{for DCS}.\n\\end{cases} \n\\end{equation*}\nThus, if we know either the noise or signal covariance then the other can be estimated directly through the measurement covariance $\\cov\\{\\vec{\\yvs}_m \\}$. Alternatively, when both covariances are unknown and the signal is drawn from a BG prior we can use the \\ac{em} AMP approach introduced in \\cite{SchniterEMAMP} to estimate both sets of parameters within the iterations.\n \n\\subsection{Equivariance of \\ac{Vbamp} for MMV}\nWe next establish the fact that for MMV both \\ac{Vbamp} and its \\ac{se} are equivariant w.r.t.\\ invertible linear transformations of the input. The proof of this result is provided in Appendix \\ref{app:vbamp_equivariance}.\n\n\\vspace*{2mm}\n\\begin{theorem}\\label{thm:equivar}\n\tAlgorithm \\ref{alg:vbamp} for \\ac{mmv} and its \\ac{se} are equivariant w.r.t.~invertible linear transformations. \n\n\tDenote one \\ac{Vbamp} iteration by $(\\hat{\\vec{\\xv}}_n^{t+1}, \\vec{\\rv}_m^{t+1}, \\boldsymbol{\\Sigma}_{\\vvecs}^{t+1})=\\text{V}(\\vec{\\yv}_m, \\hat{\\vec{\\xv}}_n^t, \\vec{\\rv}_m^t, \\boldsymbol{\\Sigma}_{\\vvecs}^t)$. 
For any nonsingular $\\mathbf{T}$, we have for all $m$ and $n$\n\\[\n\\text{V}(\\mathbf{T}\\vec{\\yv}_m, \\mathbf{T}\\hat{\\vec{\\xv}}_n^t, \\mathbf{T}\\vec{\\rv}_m^t, \\mathbf{T}\\boldsymbol{\\Sigma}_{\\vvecs}^t\\mathbf{T}^T) = (\\mathbf{T}\\hat{\\vec{\\xv}}_n^{t+1}, \\mathbf{T}\\vec{\\rv}_m^{t+1}, \\mathbf{T}\\boldsymbol{\\Sigma}_{\\vvecs}^{t+1}\\mathbf{T}^T).\n\\]\nFurthermore, the SE equation \\eqref{eq:stateevolution} translates to the transformed domain as\n\\begin{align}\n&\\mathbf{T}\\boldsymbol{\\Sigma}_{\\vvecs}^{t+1}\\mathbf{T}^T = \\mathbf{T}\\boldsymbol{\\Sigma}_{\\wvecs}\\mathbf{T}^T \\nonumber\\\\\n& \\hspace*{10mm}+ \\frac{1}{R} \\Exp_{\\vec{\\xvs},\\vec{\\vvs}} \\left\\{ \\boldsymbol{\\langle} F(\\mathbf{T}(\\vec{\\xvs}+\\vec{\\vvs});\\mathbf{T}{\\boldsymbol{\\Sigma}_{\\vvecs}}^{t}\\mathbf{T}^T) - \\mathbf{T}\\vec{\\xvs}\\boldsymbol{\\rangle} \\right\\} .\\label{eq:transformedSE1}\n\\end{align}\n\\end{theorem}\n\\vspace*{1mm}\n\nNote that \\equref{eq:transformedSE1} holds for any signal prior in the Bayesian setting, i.e., when the estimator is the \\ac{mmse} estimator.\nAssume that \\ac{Vbamp} converges to $\\hat{\\vec{\\xv}}_n$ with inputs $\\vec{\\yv}_n$, $\\boldsymbol{\\Sigma}_{\\xvecs}$, and $\\boldsymbol{\\Sigma}_{\\wvecs}$;\nthen, Theorem \\ref{thm:equivar} implies that \\ac{Vbamp} with inputs $\\mathbf{T}\\vec{\\yv}_n$, $\\mathbf{T}\\boldsymbol{\\Sigma}_{\\xvecs}\\mathbf{T}^T$, and $\\mathbf{T}\\boldsymbol{\\Sigma}_{\\wvecs}\\mathbf{T}^T$ \nconverges to the solution $\\mathbf{T}\\hat{\\vec{\\xv}}_n$. \n\n\\subsection{Bernoulli-Gauss Prior} \\label{sec:bg}\n\nFor the \\ac{bg} prior, after applying the transformation $\\mathbf{T}$, the equivalent measurement model becomes\n\\begin{equation}\n\\tilde{\\mathbf{y}}(b) = \\mbf{A}(b) \\tilde{\\mathbf{x}}(b) + \\tilde{\\mathbf{w}}(b) \\,, \\quad \\forall b \\label{eq:equivalentmodel}\n\\end{equation}\nwith signal and noise \\acp{pdf}\n\\begin{align}\nf_{\\tilde{\\vec{\\xvs}}}(\\tilde{\\vec{\\xv}}_n) &=\n(1-\\epsilon) \\,\\delta(\\tilde{\\vec{\\xv}}_n)\n+ \\epsilon\\, \\mcal{N}(\\tilde{\\vec{\\xv}}_n;\\mathbf{0},\\mathbf{I}) , \\label{eq:diagonalbgpdf} \\\\\nf_{\\tilde{\\vec{\\wvs}}}(\\tilde{\\vec{\\wv}}_m) &=\n\\mcal{N}(\\tilde{\\vec{\\wv}}_m;\\mathbf{0},\\boldsymbol{\\Lambda}^{-1}) \\label{eq:diagonalnoisepdf} .\n\\end{align}\nThat is, we retain a \\ac{bg} prior in the transformed domain, only with uncorrelated components. 
Retaining the \ac{bg} form under the transformation is a distinctive feature of the \ac{bg} prior and in general does not hold for other types of distributions.

In Appendix \ref{app:bg_diagonality} we demonstrate that
for the decorrelated model \equref{eq:equivalentmodel} with \ac{bg} prior \equref{eq:diagonalbgpdf}--\equref{eq:diagonalnoisepdf}, the \ac{Vbamp} iterations under the \ac{mmv} model preserve the diagonal structure of $\boldsymbol{\Sigma}_{\tilde{\vvecs}}^t$.
It follows that for \ac{cs} measurements with a multivariate \ac{bg} signal prior, the decorrelation transformation has to be performed only once before recovery;
determining $\mathbf{T}$ itself is of negligible computational effort unless $B$ is very large.
These observations have the following implications:
\begin{itemize}
	\item The computation of \equref{eq:2_Fvec_bernoulli_gaussian} and \equref{eq:onsager} is significantly simplified,
	leading to complexity reductions by a factor of $B$.
\item The dimension of the \ac{se} equations is $B$ instead of $B(B+1)/2$.
	In other words, the $B(B+1)/2$ effective noise covariance parameters in $\boldsymbol{\Sigma}_{\vvecs}$ are reduced to $B$ effective noise variances, which
	explicitly characterize the \ac{mse} for each signal vector estimate as
	\begin{equation*}
	\widehat{\mse}{}^t(b) = R\, (\boldsymbol{\Sigma}_{\tilde{\vvecs}}^{t} - \boldsymbol{\Sigma}_{\tilde{\wvecs}})_{b,b}.
	\end{equation*}
	\item Every \ac{mmv} problem has an equivalent \ac{dcs} problem with possibly rescaled \acp{snr}.
	Furthermore, the analysis of \ac{dcs} also covers that of \ac{mmv}.
\end{itemize}



\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./VBAMP-figure0.pdf}
\vspace*{2mm}
	\caption{Free energy function at different rates $R$ for $B=1$, $\sigma_{\ws}^2=-35\dB$, and sparsity $\epsilon=0.1$.
Red squares and black triangles indicate local maxima and minima, respectively.} \label{fig:freeenergycurves_noisy}
\end{figure}

\section{Replica Analysis}\label{sec:replicanalysis}

In \cite{zhu2016performance}, the replica method was used to determine the \ac{mse} performance of \ac{Vbamp} for the measurement model \equref{eq:matrixmeasurement} and the \ac{bg} prior \equref{eq:signal_prior}, assuming $\boldsymbol{\Sigma}_{\xvecs} = \mathbf{I}$ and isotropic uncorrelated noise, i.e., $\boldsymbol{\Sigma}_{\wvecs}=\sigma_{\mathsf{w}}^2 \mathbf{I}$.
In this special case, \ac{mmv} and \ac{dcs} (referred to as MMV-2 and MMV-1, respectively, in \cite{zhu2016performance}) are equivalent.
The analysis is quite sophisticated, and the generalization to arbitrary signal and noise correlations seems infeasible.
However, owing to the joint diagonalization approach from Section \ref{sec:sec3}, it suffices to extend the replica analysis to the case with $\boldsymbol{\Sigma}_{\xvecs} = \mathbf{I}$ and $\boldsymbol{\Sigma}_{\wvecs}=\diag( \sigma_{\ws}^2(1),\ldots,\sigma_{\ws}^2(B) )$.
In particular, the replica method is capable of predicting the fixed points of \ac{Vbamp} in the asymptotic regime ($N,M\rightarrow\infty$, $R=M/N=\mathrm{const.}$) as a function of the set of $B$ \acp{mse} \cite{krzakala2012probabilistic,krzakala2012statistical}.
We note that rigorous equivalence between the replica method and \ac{se} is not always guaranteed and requires additional technicalities \cite{reeves2016replica}.
Assuming $\boldsymbol{\Sigma}_{\xvecs} = \mathbf{I}$ and $\boldsymbol{\Sigma}_{\wvecs} = \diag\big(\sigma_{\ws}^2(1),\ldots, \sigma_{\ws}^2(B)\big)$, we compute in
Appendix \ref{app:replica}, following the derivation in \cite{zhu2016performance}, the free energy $\mcal{F}(\vec{\Ev})$
as a function of the \ac{mse} vector $\vec{\Ev} = (E(1),\ldots,E(B))^T$ with $E(b)= \mathrm{MSE}(b)$, resulting in
\begin{equation}\label{eq:freeenergy}
\begin{split}
& \mcal{F}(\vec{\Ev}) =
		(1-\epsilon)\,\zeta(\gamma) + \epsilon\,\zeta\Big(\frac{\gamma}{1+\gamma}\Big) \\
& {}-\frac{R}{2} \sum_{b=1}^{B}\left(\log \frac{2\pi R}{\gamma(b)}
		+ \gamma(b) \sigma_{\ws}^2(b)
		- \frac{1-\epsilon}{R} \frac{\gamma(b)}{1+\gamma(b)}
		\right) \!.
\end{split}
\end{equation}
In this expression we used
\begin{align*}
\zeta(\eta)
& = \int \log \left( \epsilon \prod_{b=1}^{B}(1+\gamma(b))^{-\frac12} \right. \\
&\qquad	+ \left. (1-\epsilon) \exp\!\bigg(\!-\frac12 \sum_{b=1}^{B} \eta(b)\,h^2(b)\bigg) \right)\mcal{D}\mathbf{h}
\end{align*}
with
\begin{equation*}
	\gamma(b) = \frac{R}{E(b)+R\sigma_{\ws}^2(b)};
\end{equation*}
furthermore, $\mcal{D}\mathbf{h} = \mcal{N}(\mathbf{h};\mathbf{0},\mathbf{I})\,dh_1\dots dh_B$ denotes the multivariate standard Gaussian measure.


The stationary points of $\mcal{F}(\vec{\Ev})$ correspond to fixed points of belief propagation \cite{heskes2003stable}, and hence to those of \ac{Vbamp} in the asymptotic regime \cite{zhu2016performance}. Thus, we can determine the component-wise \acp{mse} of \ac{Vbamp} by evaluating \equref{eq:freeenergy} and finding the largest components of $\vec{\Ev}$ that correspond to a local maximum of $\mcal{F}(\vec{\Ev})$ \cite{yedidia2005constructing,yedidia2001bethe}.
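To illustrate how such fixed points can be located numerically, the following sketch (our own; it uses a simple Monte Carlo approximation of the Gaussian integral in $\zeta$, whereas the figures below are based on numerical integration) evaluates $\mcal{F}(E)$ on a grid of \ac{mse} values for $B=1$ and reports its local maxima:
\begin{verbatim}
import numpy as np

def free_energy(E, R, sw2, eps, n_mc=100000, seed=0):
    """Monte Carlo evaluation of the free energy (eq:freeenergy)
    for an MSE vector E; sw2 holds the noise variances sigma_w^2(b)."""
    E, sw2 = np.atleast_1d(np.float64(E)), np.atleast_1d(np.float64(sw2))
    gamma = R / (E + R * sw2)
    # fixed seed: the same Gaussian samples are reused for every E,
    # so that F(E) is smooth along the grid (common random numbers)
    h2 = np.random.default_rng(seed).standard_normal((n_mc, len(E))) ** 2

    def zeta(eta):
        inner = (eps * np.prod(1.0 + gamma) ** -0.5
                 + (1.0 - eps) * np.exp(-0.5 * (h2 @ eta)))
        return np.mean(np.log(inner))

    bracket = (np.log(2.0 * np.pi * R / gamma) + gamma * sw2
               - (1.0 - eps) / R * gamma / (1.0 + gamma))
    return ((1.0 - eps) * zeta(gamma) + eps * zeta(gamma / (1.0 + gamma))
            - 0.5 * R * np.sum(bracket))

# scan F(E) over a grid of MSE values (B = 1), noise at -35 dB
E_grid = np.logspace(-4, 0, 200)
F_vals = np.array([free_energy(E, R=0.2, sw2=10 ** (-35 / 10), eps=0.1)
                   for E in E_grid])
is_peak = (F_vals[1:-1] > F_vals[:-2]) & (F_vals[1:-1] > F_vals[2:])
print("local maxima near MSE =", E_grid[1:-1][is_peak])
\end{verbatim}
The largest \ac{mse} among the reported maxima is the one \ac{Vbamp} is typically attracted to, while the global maximizer yields the \ac{mmse}, as discussed next.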
Note that for isotropic noise ($\sigma_{\ws}^2(b) = \sigma_{\ws}^2$ $\forall b$), the free energy in \equref{eq:freeenergy} simplifies to the result obtained in \cite{zhu2016performance} with one-dimensional argument $E=E(1)=\ldots=E(B)$. Replica curves for the isotropic case with $B=1$ and $B=10$ are shown in Figures~\ref{fig:freeenergycurves_noisy} and \ref{fig:freeenergy_B10}, respectively. It is important to point out here that all the plots are the result of numerical integrations (and not of Monte Carlo simulations). In the free energy function, local maxima correspond to stable fixed points and local minima to unstable fixed points, whereas the global maximum of $\mathcal{F}(E)$ corresponds to the \ac{mmse}. \ac{Vbamp} typically achieves the largest \ac{mse} associated with a local maximum.


\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./VBAMP-figure1.pdf}
\vspace*{2mm}
	\caption{One-dimensional free energy function for the isotropic case with $B=10$ jointly sparse \ac{bg} vectors at rates around the phase transition rate ($\boldsymbol{\Sigma}_{\wvecs} = -35\dB\, \mathbf{I}_B$).}\label{fig:freeenergy_B10}
\end{figure}


\subsection{MMSE Gap}

In the \ac{cs} regime of small $\epsilon$ and nonzero noise variance, the \ac{mmse} estimate $\hat{\xv}$ for a single measurement features a first-order phase transition (PT) characterized by an abrupt change of the \ac{mse} at a certain rate $R_{\PT}$: for rates less than $R_{\PT}$, the \ac{mse} tends to be large, whereas for rates larger than $R_{\PT}$ the \ac{mse} tends to be small and plateaus at a fixed nonzero value. This phenomenon can be seen in Figure \ref{fig:freeenergycurves_noisy}: for rates below $R\approx 0.16$, the free energy has a single maximum at an \ac{mse} of about $-12\dB$, whereas for rates larger than $R\approx 0.17$ a second local maximum at \acp{mse} of less than about $-37\dB$ appears.

A similarly abrupt phase transition does not appear to occur when the number $B$ of jointly sparse vectors is sufficiently large. Figure \ref{fig:SEcurves_B} shows the \ac{se} curves for various $B$ with $\epsilon=0.1$ and $\boldsymbol{\Sigma}_{\xvecs}=\mathbf{I}_B$. Observe that the ``bump'' in the \ac{se} curve for small $B$ and large \ac{mse}, which corresponds to the first
fixed point, flattens out with increasing $B$. For large enough $B$, the \ac{se} curve ceases to exhibit a first-order PT, so that the \ac{mse} changes smoothly with increasing rate $R$.

\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./VBAMP-figure2.pdf}
\vspace*{2mm}
	\caption{Noisy \ac{se} curves for different numbers of jointly sparse \ac{bg} signals ($\epsilon=0.1$, $\boldsymbol{\Sigma}_{\xvecs}=\mathbf{I}_B$, $\boldsymbol{\Sigma}_{\wvecs}=-35\dB\, \mathbf{I}_B$, $R=0.25$).} \label{fig:SEcurves_B}
\end{figure}

The same conclusion can be obtained by investigating the behavior of the free energy functions. \ac{Vbamp} typically achieves the largest \ac{mse} that corresponds to a local maximum in the free energy, whereas the \ac{mse} at the global maximum of the free energy is the \ac{mmse}. As pointed out in \cite{zhu2016performance}, whenever the free energy function has a second local maximum at a larger \ac{mse} than the global maximum, \ac{Vbamp} is not Bayesian-optimal (i.e., does not reach the \ac{mmse}).
For $B=1$, in Figure~\ref{fig:freeenergycurves_noisy}, a second local (non-global) maximum appears, and thus BAMP is not MMSE-optimal in the rate region $0.19