\section{Introduction}

It was found \cite{Mancini96} (see also \cite{IbortPhysScr}) that quantum states can
be described by fair probability distributions, which contain the same complete
information about quantum states as the density operator \cite{Landau1927}
and its different representations, such as the Wigner function \cite{Wigner32},
the Husimi Q-function \cite{Husimi40}, and the diagonal P-function representation
of Glauber \cite{Glauber63} and Sudarshan \cite{Sudarshan63}.
Such probability distributions depend on homodyne variables and reference-frame parameters
and are called optical tomograms \cite{BerBer,VogRis} and symplectic tomograms
\cite{Mancini95}.

It is obvious that not every function of these variables can be a tomogram
of a physical state of a quantum or classical system. In Ref.~\cite{OConnell}
two sets of conditions, which are necessary and sufficient for a function
defined on phase space to be a Wigner function, were discussed.
In Ref.~\cite{Simoni2010} conditions for a symplectic tomogram-like function to be
a tomogram were formulated, based on the association of symplectic tomograms
with a unitary representation of the Weyl-Heisenberg group.

The evolution equation of symplectic tomograms of quantum systems was obtained in \cite{ManciniFoundPhys97}.
For optical tomograms such a dynamical equation was first found in \cite{Korarticle2}.
We also obtained tomographic evolution equations for relativistic
systems in Ref.~\cite{Korarticle1}.

The aim of our article is to explore the conditions that must be satisfied by
optical tomograms of both quantum and classical states.
We focus on the conditions for optical tomograms because these
tomograms are directly measured by homodyne detectors in quantum optical
experiments (see \cite{Raymer93,LvovRayRevModPhys}).

We also study the conditions under which the tomographic evolution equations
retain the normalization of their solutions.
It turns out that these conditions play a decisive role in the numerical solution
of such equations, because the equations conserve normalization only
for a narrow class of functions obeying special relations,
and the limited accuracy of numerical calculations can lead to non-normalized solutions.

The paper is organized as follows.
In Sec.\\ref{Section2} we review the definitions and main properties of optical\nand symplectic tomograms for both classical and quantum states.\nIn Sec.\\ref{Section3} conditions for optical and symplectic tomogram-like functions to be \ntomograms of a physical states are given.\nIn Sec.\\ref{Section4} conditions for optical tomogram-like functions to retain normalization\nduring evolution are obtained.\nIn Sec.\\ref{Section5} conditions for conservation of \nnormalization of the symplectic tomograms during evolution are found.\nIn Sec.\\ref{Section6} we show that Liuoville equation, dynamical equations for Wigner function and Husimi function\ndo not demand of any additional conditions to conserve\nnormalizations of their solutions.\nIn Sec.\\ref{Section7} conclusion and prospectives are presented.\n\n\n\\section{\\label{Section2}Definitions and properties of optical and symplectic tomograms}\n\nFor spinless $N$ dimensional quantum systems with the density matrix $\\hat \\rho(t)$, which can be \ndependent on time, the corresponding \noptical $w(\\mathbf X,\\bm\\theta,t)$ and symplectic $M(\\mathbf X,\\bm\\mu,\\bm\\nu,t)$ tomograms are defined as follows\n(see \\cite{Korarticle5,IbortPhysScr})\n\\be\t\t\\label{eq_43}\nw(\\mathbf X,\\bm\\theta,t)=\\mathrm{Tr}\\left\\{\n\\hat\\rho(t) \\prod_{\\sigma=1}^{N}\n\\delta \\left(X_\\sigma-\\hat q_\\sigma\\cos\\theta_\\sigma\n-\\hat p_\\sigma\\frac{\\sin\\theta_\\sigma}{m_\\sigma\\omega_\\sigma}\\right)\n\\right\\},\n\\ee\n\\be\t\t\\label{eq_44}\nM(\\mathbf X,\\bm\\mu,\\bm\\nu,t)=\\mathrm{Tr}\\left\\{\n\\hat\\rho(t) \\prod_{\\sigma=1}^{N}\n\\delta \\left(X_\\sigma-\\mu_\\sigma \\hat q_\\sigma-\\nu_\\sigma \\hat p_\\sigma\\right)\n\\right\\},\n\\ee\nwhere frequency $\\omega_{\\sigma}$ and mass $m_\\sigma$ dimensional constants \nfor $\\sigma-$th degree of freedom are chosen from convenience considerations for particular Hamiltonian.\nTo simplify the formulas hereafter we choose the system of units\nsuch that $m_\\sigma=\\omega_\\sigma=\\hbar=1$.\nGiven the Wigner function of the state of quantum system or the classical\ndistribution function on phase space $W(\\mathbf q,\\mathbf p,t)$, for optical and symplectic tomograms\nwe can write\n\\be\t\t\\label{optfromWig}\nw(\\mathbf X,\\bm\\theta,t)=\n\\int~W(\\mathbf q,\\mathbf p,t)\\prod_{\\sigma=1}^{N}\n\\delta \\left(X_\\sigma-q_\\sigma\\cos\\theta_\\sigma\n-p_\\sigma\\sin\\theta_\\sigma\\right)\nd^Nq~d^Np,\n\\ee\n\\be\t\t\\label{sympfromWig}\nM(\\mathbf X,\\bm\\mu,\\bm\\nu,t)=\n\\int~W(\\mathbf q,\\mathbf p,t)\\prod_{\\sigma=1}^{N}\n\\delta (X_\\sigma-\\mu_\\sigma q_\\sigma-\\nu_\\sigma p_\\sigma)d^Nq~d^Np.\n\\ee\nWe suppose that the normalization of the Wigner function is unity.\n\nThe inverse transforms of maps (\\ref{optfromWig}) and (\\ref{sympfromWig})\nare given by the Fourier integrals: \n\\be\t\t\\label{eq_52}\nW(\\mathbf q,\\mathbf p,t)=\\int\\limits_{0}^{\\pi}d^N\\theta\n\\int\\limits_{-\\infty}^{+\\infty}\\frac{d^N\\eta~d^NX}{(2\\pi)^{2N}}\nw(\\mathbf X,\\bm\\theta,t)\\prod_{\\sigma=1}^{N}\n|\\eta_\\sigma|\\exp\\left\\{i\\eta_\\sigma\\left(X_\\sigma\n-q_\\sigma\\cos\\theta_\\sigma-p_\\sigma\\sin\\theta_\\sigma\\right)\\right\\} ,\n\\ee\n\\be\t\t\\label{eq_53}\nW(\\mathbf q,\\mathbf p,t)=\\frac{1}{(2\\pi)^{2N}}\\int M(\\mathbf X,\\bm\\mu,\\bm\\nu,t)\n\\prod_{\\sigma=1}^{N}\\exp\\left\\{ i \\left(X_\\sigma-\\mu_\\sigma q_\\sigma\n-\\nu_\\sigma p_\\sigma\\right)\\right\\}d^NX~d^N\\mu~d^N\\nu.\n\\ee\n\nTomograms are nonnegative and normalized by the 
conditions\n\\be\t\t\t\t\\label{eqnormOpt}\n\\int w(\\mathbf X,\\bm\\theta,t)d^NX=1,\n\\ee\n\\be\t\t\t\t\\label{eqnormSymp}\n\\int M(\\mathbf X,\\bm\\mu,\\bm\\nu,t)d^NX=1.\n\\ee\n\n\nSince the Dirac delta-function in Eq.(\\ref{eq_44}) is homogeneous function,\ni.e. $\\delta(\\lambda y)=|\\lambda|^{-1}\\delta(y)$, the symplectic tomogram is also the homogeneous function\n\\be \\label{eq_3}\nM(\\lambda_\\sigma X_\\sigma,\\lambda_\\sigma\\mu_\\sigma,\\lambda_\\sigma\\nu_\\sigma,t)=\nM(\\mathbf X,\\bm\\mu,\\bm\\nu,t)\\prod_{\\sigma=1}^N|\\lambda_\\sigma |^{-1}.\n\\ee\nThe optical tomogram is even function. It means that the optical tomogram has the property\n\\be\t\t\\label{eq_14}\nw(-X_\\sigma,\\theta_\\sigma+\\pi,t)=w(\\mathbf X,\\bm\\theta,t).\n\\ee\nIt is obvious, that \n\\be\t\t\\label{OptSym}\nw(\\mathbf X,\\bm\\theta,t)=M(\\mathbf X,\\mu_\\sigma=\\cos\\theta_\\sigma,\\nu_\\sigma=\\sin\\theta_\\sigma,t).\n\\ee\nThe homogeneity condition (\\ref{eq_3}) provides the relation of the optical and symplectic tomograms opposite to (\\ref{OptSym}).\nOne can transform the integral (\\ref{sympfromWig}) using polar coordinates \n$\\mu_\\sigma=r_\\sigma\\cos\\theta_\\sigma$, $\\nu_\\sigma=r_\\sigma\\sin\\theta_\\sigma$. After some calculations\nfor $N-$dinentional case we can write\n\\be \\label{eq_6}\nM(\\mathbf X,\\bm\\mu,\\bm\\nu)=w\\left(\\frac{X_\\sigma\\mathrm{sgn}(\\nu_\\sigma)}{\\sqrt{\\mu_\\sigma^2+\\nu_\\sigma^2}},\n\\cot^{-1}\\frac{\\mu_\\sigma}{\\nu_\\sigma}\\right)\n\\prod_{\\sigma=1}^N\\frac{1}{\\sqrt{\\mu_\\sigma^2+\\nu_\\sigma^2}}.\n\\ee\nAlso the symplectic tomogram satisfy an extra homogeneous\ndifferential relation. Making differentiation of (\\ref{eq_3}) by $\\lambda_i$\nand substituting $\\lambda_i=1$ we obtain\n$$\nM+X_i\\partial_{X_i}M+\\mu_i \\partial_{\\mu_i}M\n+\\nu_i\\partial_{\\nu_i}M=0.\n$$\nIn addition, optical tomograms must satisfy the entropic uncertainty relation\n(see \\cite{MAMankoAIP2011}) (Hirshman criterion)\n\\be\t\t\t\\label{Hirschman}\n-\\int w(\\mathbf X,\\bm\\theta,t)\\ln w(\\mathbf X,\\bm\\theta,t)d^NX\n-\\int w(\\mathbf X,\\bm\\theta+\\bm{\\pi}\/2,t)\\ln w(\\mathbf X,\\bm\\theta+\\bm{\\pi}\/2,t)d^NX \\geq N\\ln(\\pi e).\n\\ee\nHere $S(\\bm\\theta,t)=-\\int w(\\mathbf X,\\bm\\theta,t)\\ln w(\\mathbf X,\\bm\\theta,t)d^NX$\nis the tomographic Shannon entropy associated with optical tomogram.\n\nIn Ref. \\cite{Simoni2010} it was mentioned that non-negativity, normalization and homogeneity\nproperties are not sufficient for the symplectic tomogram-like function $f(\\mathbf X,\\bm\\mu,\\bm\\nu)$\nto be a tomogram. Also non-negativity, normalization (\\ref{eqnormOpt}), parity (\\ref{eq_14}),\nand satisfaction of Hirshman criterion (\\ref{Hirschman}) are not sufficient for optical \ntomogram-like function $f(\\mathbf X,\\bm\\theta)$ to be a tomogram of any quantum or\nclassical state.\nFor example, consider\n\\be\t\t\\label{example1}\nf(X,\\theta)=\\frac{1}{\\sqrt\\pi}e^{-(X-\\cos^3\\theta)^2}.\n\\ee\nIn spite of this function is positive and satisfy conditions (\\ref{eqnormOpt}),\n(\\ref{eq_14}), and (\\ref{Hirschman}), it is not an optical tomogram. More over, as it will\nbe seen from Section \\ref{Section4}, the normalization of this function will not be conserved \nduring time evolution.\n\n\\section{\\label{Section3}Conditions for tomogram-like functions to be \ntomograms of physical states}\n\nIn Ref. \\cite{OConnell} necessary and sufficient conditions for a phase-space function \nto be a Wigner distribution of physical system are studied. 
\section{\label{Section3}Conditions for tomogram-like functions to be
tomograms of physical states}

In Ref. \cite{OConnell} necessary and sufficient conditions for a phase-space function
to be a Wigner distribution of a physical system were studied. Such a function is a Wigner function
if and only if it defines a kernel (density matrix) of a positive trace-class operator
with trace equal to one. The authors consider two sets of necessary and
sufficient conditions and show that these sets are formally equivalent.

\textbf{The first}, more familiar, \textbf{set} is that the function $W(\mathbf q,\mathbf p)$ must be
square integrable, that for every Wigner function $W_\psi(\mathbf q,\mathbf p)$
of a pure state $\psi$ the following inequality must be valid,
\be		\label{conditionW1}
\int W(\mathbf q,\mathbf p)W_\psi(\mathbf q,\mathbf p)d^Nq~d^Np\geq 0,
\ee
and that the function $W(\mathbf q,\mathbf p)$ must be normalized to unity
over all phase space,
\be		\label{normWig}
\int W(\mathbf q,\mathbf p)d^Nq~d^Np=1.
\ee
One may find the proof that these conditions are necessary and
sufficient in \cite{PhisRep1984}.

Let us note that, from the physical point of view, the integral in inequality
(\ref{conditionW1}) is a transition probability from the state with Wigner function
$W(\mathbf q,\mathbf p)$ to the state $\psi$ and, obviously, it must be non-negative.

\textbf{The second set} of conditions is called the KLM conditions in \cite{OConnell},
after Kastler \cite{Kastler}, Loupias, and Miracle-Sole \cite{Loupias1,Loupias2},
and applies to the symplectic Fourier transform of the
phase space function $W(\mathbf q,\mathbf p)$,
\be		\label{sympFour}
\widetilde W(\mathbf{u},\mathbf{v})=\int W(\mathbf{q},\mathbf{p})
e^{i(\mathbf q\mathbf v-\mathbf u\mathbf p)}d^Nq~d^Np.
\ee
The first KLM condition, which is equivalent to (\ref{conditionW1}), is that the function
$\widetilde W(\mathbf{u},\mathbf{v})$ must be continuous and of $\hbar$-positive type,
i.e. for every choice of points
$(\mathbf u_1,\mathbf v_1)$, $(\mathbf u_2,\mathbf v_2)$, ..., $(\mathbf u_n,\mathbf v_n)$,
the $n\times n$ matrix with entries
\be		\label{matrixnonneg1}
Z_{jk}=e^{i(\mathbf u_j\mathbf v_k-\mathbf u_k\mathbf v_j)/2}
\widetilde W(\mathbf u_j-\mathbf u_k,\mathbf v_j-\mathbf v_k)
\ee
is non-negative (in our system of units $\hbar=1$).
The second KLM condition is equivalent to (\ref{normWig}),
\be		\label{normWig2}
\widetilde W(\mathbf u=0,\mathbf v=0)=1.
\ee

Using the formulas mapping the Wigner function to the tomogram and vice versa,
one can find the necessary and sufficient conditions for tomographic functions corresponding to (\ref{conditionW1}).
Functions $w(\mathbf X,\bm\theta)$ and $M(\mathbf X,\bm\mu,\bm\nu)$, normalized according to (\ref{eqnormOpt}) and (\ref{eqnormSymp})
and obeying relations (\ref{eq_14}) or (\ref{eq_3}), are tomograms of
states of quantum systems if and only if, for any quantum pure state $\psi$ with
the tomogram $w_\psi(\mathbf X,\bm\theta)$ (or $M_\psi(\mathbf X,\bm\mu,\bm\nu)$),
the following inequalities are valid:
\be			\label{conditwpos}
\int w(\mathbf X,\bm\theta)w_\psi(\mathbf X',\bm\theta)
e^{i\bm\eta(\mathbf X-\mathbf X')}|\eta_1||\eta_2|\dots|\eta_N|
d^NX~d^NX'~d^N\eta~d^N\theta\geq 0,
\ee
\be			\label{conditMpos}
\int M(\mathbf X,\bm\mu,\bm\nu)M_\psi(\mathbf X',\bm\mu,\bm\nu)
e^{i(\mathbf X-\mathbf X')}
d^NX~d^NX'~d^N\mu~d^N\nu\geq 0.
\ee
\n\\ee\nAlso with the map between functions $W(\\mathbf q,\\mathbf p,t)$ and $M(\\mathbf X,\\bm\\mu,\\bm\\nu,t)$\n the symplectic Fourier transform (\\ref{sympFour}) is expressed as follows\n\\be\t\t\\label{symFsym}\n\\widetilde W(\\mathbf{u},\\mathbf{v})=\\int M(\\mathbf X,\\mathbf{v},-\\mathbf{u})\ne^{i\\mathbf X}d^NX.\n\\ee\nSubstituting (\\ref{symFsym}) to the first KLM condition with the set of points\n$\\{(\\bm\\mu_j,\\bm\\nu_j):\\mathbf{v}_j=\\bm\\mu_j,\\mathbf{u}_j=-\\bm\\nu_j \\}$,\nwhich will also be arbitrary, we obtain the expression for non-negative matrix $Z_{jk}$\nfrom symplectic tomogram\n\\be\t\t\\label{matrixnonneg2}\nZ_{jk}=e^{i(\\bm\\mu_j\\bm\\nu_k-\\bm\\mu_k\\bm\\nu_j)\/2}\n\\int M(\\mathbf X,\\bm\\mu_j-\\bm\\mu_k,\\bm\\nu_j-\\bm\\nu_k)e^{i\\mathbf X}d^NX.\n\\ee\nWith the help of relation (\\ref{eq_6}) between $M(\\mathbf X,\\bm\\mu,\\bm\\nu)$ and\n$w(\\mathbf X,\\bm\\theta)$ we immediately obtain the expression for the matrix $Z_{jk}$\nin terms of the optical tomogram\n\\bea\t\t\nZ_{jk}&=&e^{i(\\bm\\mu_j\\bm\\nu_k-\\bm\\mu_k\\bm\\nu_j)\/2} \\nonumber \\\\[3mm]\n&&\\times\\int w\\left(X_\\sigma,\n\\cot^{-1}\\frac{\\mu_{\\sigma j}-\\mu_{\\sigma k}}{\\nu_{\\sigma j}-\\nu_{\\sigma k}}\\right)\n\\nonumber \\\\[3mm]\n&&\\times\\exp\\left[i\\sum_{\\sigma=1}^N X_\\sigma\\mathrm{sgn}(\\nu_{\\sigma j}-\\nu_{\\sigma k})\n\\sqrt{(\\mu_{\\sigma j}-\\mu_{\\sigma k})^2+(\\nu_{\\sigma j}-\\nu_{\\sigma k})^2}\\right]d^NX.\n\\label{matrixnonneg3}\n\\eea\nAs for condition (\\ref{normWig2}), it will be valid as the limit case \n\\be\t\t\\label{normWig3}\n\\widetilde W(0,0)=\\int M(\\mathbf X,0,0)e^{i\\mathbf X}d^NX=\n\\int w\\left(\\mathbf X ,\\cot^{-1}\\frac{0}{0}\\right)d^NX=1,\n\\ee\nbecause\nthe symplectic tomogram transforms to the delta-function\n$M(\\mathbf X,0,0)=\\delta(\\mathbf X)$, and the function $\\cot^{-1}(x)$ is confined at any argument.\nThus, instead of (\\ref{normWig3}) it is more preferably to use conditions (\\ref{eqnormOpt}) and (\\ref{eqnormSymp}),\nwhich must be valid for any $(\\bm\\mu,\\bm\\nu)\\neq0$, or phase vector $\\bm\\theta$ $\\{\\theta_j\\in[0,\\pi]\\}$.\n\nIn Ref. \\cite{Simoni2010} the same expression (\\ref{matrixnonneg2}) \nfor non-negative matrix $Z_{jk}$ was obtained from considerations of group theory\nand called as $\\omega-$positivity condition. It was shown \\cite{Simoni2009} that symplectic tomograms \nare associated with nontrivial unitary irreducible representations\n(we generalize here the results of \\cite{Simoni2010,Simoni2009} to $N-$dimensional case\nand to optical tomograms)\n\\be\t\t\\label{Weilrep}\n\\hat U_g(\\bm\\mu,\\bm\\nu,\\xi)=\\exp[i(\\bm\\mu\\hat{\\mathbf q}+\\bm\\nu\\hat{\\mathbf p}\\,)]e^{i\\xi}\n\\ee\nof the Weyl-Heisenberg group WH(2N). 
So, according to Naimark's theorem
\cite{Naimark}, the function
\bea
\varphi(\bm\mu,\bm\nu,\xi)&=&\mathrm{Tr}\left\{\hat\rho
\exp\left[i\left(\bm\mu\hat{\mathbf q}+\bm\nu\hat{\mathbf p}\,\right)\right]\right\}e^{i\xi}
=e^{i\xi}\int M(\mathbf X,\bm\mu,\bm\nu)e^{i\mathbf X}d^NX \nonumber \\[3mm]
&=&e^{i\xi}\int w\left(\mathbf X,\cot^{-1}(\mu_\sigma/\nu_\sigma)\right)
\exp\left[i\sum_{\sigma=1}^N X_\sigma\mathrm{sgn}(\nu_{\sigma})
\sqrt{\mu_{\sigma}^2+\nu_{\sigma}^2}\right]d^NX
\label{funcongr}
\eea
is a positive definite function on the Weyl-Heisenberg group, i.e.
for any $n$-tuple of group elements $(g_1,g_2,...,g_n)\in \mathrm{WH}(2N)$
the $n\times n$ matrix with elements
\be		\label{matrixnonneg4}
Z_{jk}=\varphi(g_jg_k^{-1})
\ee
is non-negative. Substituting the representation of the
group element $g_jg_k^{-1}$,
\be		\label{prod1}
\hat U_{g_j}(\bm\mu_j,\bm\nu_j,\xi_j)\hat U_{g_k}^{-1}(\bm\mu_k,\bm\nu_k,\xi_k)=
\hat U_{g_jg_k^{-1}}\Big[\bm\mu_j-\bm\mu_k,\bm\nu_j-\bm\nu_k,\xi_j-\xi_k+
(\bm\mu_j\bm\nu_k-\bm\mu_k\bm\nu_j)/2\Big],
\ee
into definition (\ref{funcongr}) of the function $\varphi$ and omitting the
inessential factors $e^{i\xi_j}$ and $e^{i\xi_k}$ leads to the expressions of the
matrix $Z_{jk}$ for the symplectic (\ref{matrixnonneg2}) and optical (\ref{matrixnonneg3})
tomograms.

For functions $w_\mathrm{cl}(\mathbf X,\bm\theta)$ or $M_\mathrm{cl}(\mathbf X,\bm\mu,\bm\nu)$
to be tomograms of classical systems, in addition to their smoothness, normalization in accordance with
conditions (\ref{eqnormOpt}, \ref{eqnormSymp}), and parity (\ref{eq_14}) or homogeneity (\ref{eq_3}) respectively,
the necessary and sufficient condition is that they define, through formula (\ref{eq_52}) or (\ref{eq_53}),
a function $W_\mathrm{cl}(\mathbf q,\mathbf p)$ belonging to the class of distribution functions,
i.e., positive normalized functions.

In terms of group theory, the function $\phi(\bm\mu,\bm\nu)$
with the definition
\bea
\phi(\bm\mu,\bm\nu)&=&\int W_\mathrm{cl}(\mathbf q,\mathbf p)
\exp\left[i\left(\bm\mu\mathbf{q}+\bm\nu\mathbf{p}\,\right)\right]d^Nq~d^Np
=\int M_\mathrm{cl}(\mathbf X,\bm\mu,\bm\nu)e^{i\mathbf X}d^NX \nonumber \\[3mm]
&=&\int w_\mathrm{cl}\left(\mathbf X,\cot^{-1}(\mu_\sigma/\nu_\sigma)\right)
\exp\left[i\sum_{\sigma=1}^N X_\sigma\mathrm{sgn}(\nu_{\sigma})
\sqrt{\mu_{\sigma}^2+\nu_{\sigma}^2}\right]d^NX
\label{funcongrcl}
\eea
on the translation group with group law
\be			\label{transgrlaw}
(\bm\mu,\bm\nu)\circ(\bm\mu\,',\bm\nu\,')=(\bm\mu+\bm\mu\,',\bm\nu+\bm\nu\,')
\ee
must be a positive definite function on this group (see \cite{Simoni2010}),
i.e. for every choice of points
$(\bm\mu_1,\bm\nu_1)$, $(\bm\mu_2,\bm\nu_2)$, ..., $(\bm\mu_n,\bm\nu_n)$,
the $n\times n$ matrix with entries $\phi_{jk}=\phi(\bm\mu_j-\bm\mu_k,\bm\nu_j-\bm\nu_k)$
must be non-negative.
Then, by the Bochner theorem, the function $\phi(\bm\mu,\bm\nu)$ is the Fourier transform
of a probability measure on the phase space.
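The following numerical illustration (our sketch) builds the matrix $Z_{jk}$ of (\ref{matrixnonneg2}) at random points for the oscillator ground state, for which the integral $\int M(\mathbf X,\mu,\nu)e^{iX}dX=e^{-(\mu^2+\nu^2)/4}$ is known in closed form, and checks its positive semidefiniteness; dropping the phase factor gives the classical (Bochner) test on the translation group:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 2))                 # random points (mu_j, nu_j)

def char_fun(mu, nu):
    # Tr{rho exp[i(mu q + nu p)]} for the oscillator ground state (hbar = 1)
    return np.exp(-(mu**2 + nu**2) / 4.0)

n = len(pts)
Z = np.zeros((n, n), dtype=complex)
for j in range(n):
    for k in range(n):
        (mj, nj), (mk, nk) = pts[j], pts[k]
        phase = np.exp(1j * (mj * nk - mk * nj) / 2.0)  # hbar-positivity phase
        Z[j, k] = phase * char_fun(mj - mk, nj - nk)

print(np.linalg.eigvalsh(Z).min() >= -1e-12)  # True: Z is non-negative
\end{verbatim}

For this Gaussian state both the quantum ($\hbar$-positive) and the classical (Bochner) tests are satisfied, in line with the remark below on tomograms admitting both interpretations.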
Note \cite{Simoni2010} that the function $\phi(\bm\mu,\bm\nu)=e^{-i\xi}\varphi(\bm\mu,\bm\nu,\xi)$
may simultaneously be of $\hbar$-positive type and of positive type on the translation group
with group law (\ref{transgrlaw}).
In this case the corresponding tomogram-like function
$w(\mathbf X,\bm\theta)$ or $M(\mathbf X,\bm\mu,\bm\nu)$ may be interpreted both as a quantum and as a classical tomogram.
Such an example is the tomogram of the ground state of the harmonic oscillator.

Let us additionally note that the conditions considered are always necessary, but they are sufficient if and only if
the Radon integral of the tomogram-like function, of type (\ref{optfromWig})
or (\ref{sympfromWig}), with $W(\mathbf q,\mathbf p,t)$
defined by (\ref{eq_52}) or (\ref{eq_53}), exists and is equal to the function itself.
As an example, examine the function
\be			\label{exampfunc1}
f_1(X,\mu,\nu)=\frac{e^{1/4}}{\sqrt\pi}e^{-X^2-\mu^2/4-\nu^2/4}.
\ee
Evidently, this function is neither a quantum nor a classical tomogram.
Nevertheless, it is easy to see that both the first and the second KLM conditions for this
function are satisfied:
\bdm
\widetilde f_1(\mu,\nu)=\int f_1(X,\mu,\nu)e^{iX}dX=
e^{-\mu^2/4-\nu^2/4},~~~~\widetilde f_1(X,0,0)=1,
\edm
and $\widetilde f_1(\mu,\nu)$ is an $\hbar$-positive function.
The point is that, if we apply transformation (\ref{eq_53}) to this function,
we obtain the classical distribution $W_1(q,p)=\pi^{-1/2}e^{-q^2-p^2}$
(which in this special example can also be interpreted as
the Wigner function of the ground state of the harmonic oscillator),
but if we apply to $W_1(q,p)$ the map (\ref{sympfromWig}), which is inverse to (\ref{eq_53}), we obtain
a function that is not equal to $f_1(X,\mu,\nu)$.
So, the Radon integral (\ref{eq_53}) of the function $f_1(X,\mu,\nu)$ exists but is not
equal to the function $f_1(X,\mu,\nu)$, and the KLM conditions are not sufficient
for the tomogram-like function $f_1(X,\mu,\nu)$ to be a tomogram of the state of a physical quantum
or classical system.

\section{\label{Section4}Conditions for conservation of normalization \\
of optical tomogram-like functions during evolution}
For Hamiltonians
\be \label{r18_2}
\hat H=\sum_\sigma\frac{\hat p_\sigma^2}{2m_\sigma}+V({\mathbf q},t),
\ee
the evolution equation
of the optical tomogram $w({\mathbf X},{\bm\theta},t)$
was first found in \cite{Korarticle2}:
\begin{eqnarray}
&&
\frac{\partial}{\partial t}w({\mathbf X},{\bm\theta},t)=
\sum_{\sigma=1}^N\omega_{0\sigma}
\left[\cos^2\theta_\sigma\frac{\partial}{\partial\theta_\sigma}
-\frac{1}{2}\sin2\theta_\sigma\left\{1+X_\sigma\frac{\partial}{\partial X_\sigma}\right\}
\right]
w({\mathbf X},{\bm\theta},t)\nonumber \\[3mm]
&&+
\frac{2}{\hbar}\left[\mathrm{Im}~V\left\{
\sin\theta_\sigma\frac{\partial}{\partial\theta_\sigma}
\left[\frac{\partial}{\partial X_\sigma}\right]^{-1}
+X_\sigma\cos\theta_\sigma+i\frac{\hbar\sin\theta_\sigma}
{2m_\sigma\omega_{0\sigma}}
\frac{\partial}{\partial X_\sigma}\right\}\right]
w({\mathbf X},{\bm\theta},t),
\label{r20_2}
\end{eqnarray}
X_\\sigma}\\right]^{-1}\n+X_\\sigma\\cos\\theta_\\sigma+i\\frac{\\hbar\\sin\\theta_\\sigma}\n{2m_\\sigma\\omega_{0\\sigma}}\n\\frac{\\partial}{\\partial X_\\sigma}\\right\\}\\right]\nw({\\mathbf X},{\\bm\\theta},t), \n\\label{r20_2}\n\\end{eqnarray}\nwhere we introduced the designation \\cite{Korarticle5}\n\\be\t\t\t\\label{invder}\n\\left[\\frac{\\partial}{\\partial X_\\sigma}\\right]^{-n}f(X_\\sigma)=\\frac{1}{(n-1)!}\n\\int(X_\\sigma-X_\\sigma')^{n-1}\\Theta(X_\\sigma-X_\\sigma')f(X_\\sigma')dX_\\sigma',\n\\ee\nwhere $\\Theta(X_\\sigma-X_\\sigma')$ is a Heaviside step function.\nOptical tomographic representation of the classical non-relativistic Liouville equation\n\\cite{Korarticle2} is the limit case of (\\ref{r20_2}) when $\\hbar\\to 0$\n\\begin{eqnarray} \n&&\\ds{\\frac{\\partial}{\\partial t}w_\\mathrm{cl}(\\mathbf X,\\bm\\theta,t)=\n\\sum_{\\sigma=1}^{N}\\omega_\\sigma \\left[\\cos^2\\theta_\\sigma\\frac{\\partial}{\\partial\\theta_\\sigma}\n-\\frac{1}{2}\\sin2\\theta_\\sigma\\left\\{1+X_\\sigma\\frac{\\partial}{\\partial X_\\sigma}\\right\\}\n\\right]w_\\mathrm{cl}(\\mathbf X,\\bm\\theta,t)} \\nonumber \\\\[3mm]\n&&\\ds{+\\left[\\sum_{\\sigma=1}^{n}\\frac{\\partial}{\\partial q_\\sigma}~V\\left\\{\nq_\\sigma\\rightarrow\\sin\\theta_\\sigma\\frac{\\partial}{\\partial\\theta_\\sigma}\n\\left[\\frac{\\partial}{\\partial X_\\sigma}\\right]^{-1}\n+X_\\sigma\\cos\\theta_\\sigma\\right\\}\n\\frac{\\sin\\theta_\\sigma}{m_\\sigma\\omega_\\sigma}\\frac{\\partial}{\\partial X_\\sigma}\\right]\nw_\\mathrm{cl}(\\mathbf X,\\bm\\theta,t).}\n\t\t\\label{eq_54}\n\\end{eqnarray}\nEvolution equation of the tomogram for arbitrary spinless\nquantum Hamiltonian was found in \\cite{Korarticle1}.\n\nEquations (\\ref{r20_2}), (\\ref{eq_54}) can be presented in the following compact form\n\\be \t\t\t\\label{eqcompFF}\n\\partial_t w({\\mathbf X},{\\bm\\theta},t)=\\hat F_\\mathrm{k}({\\mathbf X},{\\bm\\theta})~w({\\mathbf X},{\\bm\\theta},t)\n+\\hat F_\\mathrm{p}({\\mathbf X},{\\bm\\theta},t)~w({\\mathbf X},{\\bm\\theta},t),\n\\ee\nwhere $\\hat F_\\mathrm{k}({\\mathbf X},{\\bm\\theta})$ is a time independent operator\ncorresponding to the kinetic part of the Hamiltonian, and \n$\\hat F_\\mathrm{p}({\\mathbf X},{\\bm\\theta},t)$ is an operator\ncorresponding to the potential energy of the system. \nGenerally speaking, $\\hat F_\\mathrm{p}({\\mathbf X},{\\bm\\theta},t)$ can be dependent on time\nif the potential $V(\\mathbf q,t)$ depend on time. More over, the operator \n$\\hat F_\\mathrm{k}$ for equations (\\ref{r20_2}) and (\\ref{eq_54}) is the same,\nbut the operators $\\hat F_\\mathrm{p}$ are different for the classical and quantum cases.\n\nIf we know the initial condition $w_0({\\mathbf X},{\\bm\\theta})$\nfor the equation with the form (\\ref{eqcompFF}),\nwe can write its formal general solution with the help of chronological exponential operator \n\\be\t\t\t\\label{solexp}\nw({\\mathbf X},{\\bm\\theta},t)=\\mathrm{T}\\exp\\left\\{\n\\int\\limits_0^t\\left[\\hat F_\\mathrm{k}({\\mathbf X},{\\bm\\theta})+ \n\\hat F_\\mathrm{p}({\\mathbf X},{\\bm\\theta},t)\\right]dt\\right\\}w_0({\\mathbf X},{\\bm\\theta}).\n\\ee \n\nWhich of the properties the tomograms must satisfy for conservation\nof the normalization during evolution?\nLet's integrate left and right sides of equation (\\ref{eqcompFF})\nover the hyperspace $X^N$. 
Which properties must the tomograms satisfy for conservation
of the normalization during evolution?
Let us integrate the left and right sides of equation (\ref{eqcompFF})
over the hyperspace $X^N$. If $w({\mathbf X},{\bm\theta},t)$ is a continuous
function normalized to unity
that tends to zero faster than any finite negative power of $X_i$ when $X_i$
tends to infinity, then the following relation must be valid:
\be 			\label{intzero}
\partial_t \int w({\mathbf X},{\bm\theta},t)dX^N=0=
\int\hat F_\mathrm{k}({\mathbf X},{\bm\theta})~w({\mathbf X},{\bm\theta},t)dX^N
+\int\hat F_\mathrm{p}({\mathbf X},{\bm\theta},t)~w({\mathbf X},{\bm\theta},t)dX^N.
\ee
Using integration by parts and taking into account condition (\ref{eqnormOpt}),
it is easy to show that
\bdm
\int\hat F_\mathrm{k}({\mathbf X},{\bm\theta})~w({\mathbf X},{\bm\theta},t)d^NX=0.
\edm
For free motion $\hat F_\mathrm{p}\equiv 0$, and equation (\ref{intzero}) is valid
without any additional properties of the tomogram. For a linear potential
$V(\mathbf q)$, equation (\ref{intzero}) is valid too.
For a quadratic potential, equation (\ref{eqcompFF}) transforms to
\be			\label{eqossil}
\frac{\partial}{\partial t} w({\mathbf X},{\bm\theta},t)=\sum_{\sigma=1}^{N} \omega_\sigma
\frac{\partial}{\partial\theta_\sigma}w({\mathbf X},{\bm\theta},t),
\ee
and we see that only the normalization property (\ref{eqnormOpt})
must be satisfied for tomograms to remain normalized during evolution, i.e. for the validity of equation (\ref{intzero}).

Further, to simplify the formulas, we consider one-dimensional motion and choose a system
of physical units in which $\hbar=m=\omega_0=1$. The generalization to the multidimensional case is straightforward.

Since we discuss bounded motion, without loss of physical
generality it suffices to consider the case when $V(q)$ is
a polynomial, because according to the Chebyshev theorem every continuous function $V(q)$
on a finite interval can be approximated by a polynomial with any accuracy.
Each term of the $n$-th power of $q$ of the polynomial potential contributes
to (\ref{intzero}) a sum of terms proportional to
$\cos^k\theta\sin^l\theta,$ where $k+l=n$.

Since equation (\ref{intzero}) must be satisfied for any fixed
$\theta$ and $t$, the sum of the coefficients of each term of the form
$\cos^k\theta\sin^l\theta$ must be zero. Hence, we obtain
a system of equations for the tomogram, which must be satisfied
for any fixed $\theta$ and $t$.

Suppose that the potential $V(q)$ contains a term with $q^3$.
Then, after some calculations, we can show that in the quantum case
equation (\ref{intzero}) takes the form
\begin{eqnarray}
0&=&-\frac{\sin^3\theta}{4}\int\partial_Xw(X,\theta,t)dX
\nonumber \\[3mm]
&&+3\sin\theta\cos^2\theta\left(2\int Xw(X,\theta,t)dX
+\int X^2\partial_Xw(X,\theta,t)dX
\right)
\nonumber \\[3mm]
&&+6\sin^2\theta\cos\theta~\partial_\theta\left(
\int \partial^{-1}_Xw(X,\theta,t)dX
+\int Xw(X,\theta,t)dX
\right)
\nonumber \\[3mm]
&&+3\sin^3\theta\left(
\partial^2_\theta \int\partial^{-1}_Xw(X,\theta,t)dX
-\int Xw(X,\theta,t)dX
\right).
\label{Vq3}
\end{eqnarray}
The same equation, but without the term in the first row, is obtained for classical motion.
Using integration by parts, it is easy to show that
each of the first three rows in equation (\ref{Vq3}) is equal to zero,
and the fourth row leads us to the equation
\be			\label{ostatok}
\partial^2_\theta \int Xw(X,\theta,t)dX
+\int Xw(X,\theta,t)dX=0.
\ee
It is easy to see that this is a classical harmonic oscillator equation in the variable $\theta$,
and it is satisfied only if
\be			\label{garmsol}
\int Xw(X,\theta,t)dX=A(t)\cos\theta+B(t)\sin\theta.
\ee
Thus, we have an integral condition which must be satisfied
by the optical tomogram, both in the classical and in the quantum case,
in order for its normalization to be preserved during evolution in a third-degree polynomial potential.
Note that from the definition of the tomogram (\ref{eq_43})
we can see that the left side of equality (\ref{garmsol}) is the mean value of the
homodyne variable $X$,
\be			\label{garmsolQP}
\int Xw(X,\theta,t)dX=\langle\hat q\rangle\cos\theta
+\langle\hat p\rangle\sin\theta,
\ee
where $\langle\hat q\rangle$ and $\langle\hat p\rangle$ are the average quantum (classical)
values of the position and momentum, respectively.
So, we have $A(t)=\langle q\rangle$ and $B(t)=\langle p\rangle$.

Suppose that the potential $V(q)$ contains a term with $q^4$.
After similar calculations, taking into account (\ref{eqnormOpt}) and (\ref{ostatok}),
we obtain a differential equation for the second moment
of the variable $X$, which is valid both in the quantum and in the classical case:
\be			\label{ostatok2}
\partial^3_\theta \int X^2w(X,\theta,t)dX
+4\partial_\theta \int X^2w(X,\theta,t)dX=0.
\ee
The solution of (\ref{ostatok2}) contains three constants, independent of $\theta$ but dependent on time:
\be			\label{garmsol2}
\int X^2w(X,\theta,t)dX=A_1(t)\cos2\theta+B_1(t)\sin2\theta +C_1(t).
\ee
From the definition of the tomogram (\ref{eq_43}) and the expression for the
mean value of $X^2$,
\be			\label{garmsolQP2}
\int X^2w(X,\theta,t)dX=\langle\hat q^2\rangle\cos^2\theta
+\langle\hat p^2\rangle\sin^2\theta
+\langle\hat q\hat p+\hat p\hat q\rangle\sin\theta\cos\theta,
\ee
we can find the constants $A_1(t)$, $B_1(t)$, $C_1(t)$:
\bdm
A_1(t)=\frac{\langle\hat q^2\rangle-\langle\hat p^2\rangle}{2},~~~
B_1(t)=\frac{\langle\hat q\hat p+\hat p\hat q\rangle}{2},
~~~ C_1(t)=\frac{\langle\hat q^2\rangle+\langle\hat p^2\rangle}{2}.
\edm
Expressing conditions (\ref{garmsol2})--(\ref{garmsolQP2}) through the Wigner function, we obtain
\bea
\int X^2w(X,\theta,t)dX&=&
\left(\int q^2W(q,p,t)dq~dp\right)\cos^2\theta
+\left(\int p^2W(q,p,t)dq~dp\right)\sin^2\theta \nonumber \\[3mm]
&&+2\left(\int qp\,W(q,p,t)dq~dp\right)\sin\theta\cos\theta.
\label{garmsolQP2W}
\eea
Arguing similarly, after some calculations we can show that if $V(q)$ contains a term with $q^n$,
then all average values $\langle X^m\rangle$ of the variable $X$, where $m=1,~2,...,n-2$,
must satisfy one of the following differential equations:
\be			\label{Gendifeven}
\left\{\frac{\partial}{\partial\theta}\prod_{k=1}^{m/2}
\left[\frac{\partial^2}{\partial\theta^2}+(2k)^2\right]\right\}
\int X^{m}\,w(X,\theta,t)dX=0,
\ee
if $m$ is an even number, or
\be			\label{Gendifodd}
\left\{\prod_{k=0}^{(m-1)/2}
\left[\frac{\partial^2}{\partial\theta^2}+(2k+1)^2\right]\right\}
\int X^{m}\,w(X,\theta,t)dX=0,
\ee
if $m$ is an odd number.
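These moment conditions are easy to test numerically. In the following sketch (ours) the first moment of a coherent-state tomogram contains only the first harmonics in $\theta$, as required by (\ref{garmsol}), while the counterexample (\ref{example1}) has first moment $\cos^3\theta=(3\cos\theta+\cos3\theta)/4$, whose $3\theta$ harmonic violates the condition:

\begin{verbatim}
import numpy as np

thetas = np.linspace(0, 2*np.pi, 64, endpoint=False)
X = np.linspace(-10, 10, 2001)
dX = X[1] - X[0]

def first_moment(w):                         # Int X w(X,theta) dX
    return np.array([np.sum(X * w(X, th)) * dX for th in thetas])

coherent = lambda X, th: np.exp(-(X - (1.0*np.cos(th) + 0.5*np.sin(th)))**2) / np.sqrt(np.pi)
counter  = lambda X, th: np.exp(-(X - np.cos(th)**3)**2) / np.sqrt(np.pi)

for name, w in [("coherent", coherent), ("counterexample", counter)]:
    c = np.fft.rfft(first_moment(w)) / len(thetas)   # harmonics in theta
    print(name, "-> |3rd harmonic| =", f"{abs(c[3]):.4f}")
# coherent -> ~0 ; counterexample -> 0.125 (the forbidden cos(3 theta) term)
\end{verbatim}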
General solutions of these equations are, respectively,
\be			\label{Gensoleven}
\int X^{m}\,w(X,\theta,t)dX=A_0(t)+\sum_{k=1}^{m/2}[A_k(t)\cos(2k\theta)
+B_k(t)\sin(2k\theta)],~~~~m~~\mathrm{is~~even},
\ee
\be			\label{Gensolodd}
\int X^{m}\,w(X,\theta,t)dX=\sum_{k=0}^{(m-1)/2}\{A_k(t)\cos[(2k+1)\theta]
+B_k(t)\sin[(2k+1)\theta]\},~~~~m~~\mathrm{is~~odd},
\ee
where $A_k(t)$, $B_k(t)$ are independent of $\theta$ but dependent on time $t$.

Consistently with (\ref{Gensoleven}) or (\ref{Gensolodd}), from formula (\ref{optfromWig}) we have that
all average values $\langle X^m\rangle$
are sums of terms of the form $A_{k\,m}(t)\cos^k\theta\sin^{m-k}\theta$,
where $k=0,~1,...,m$ and the $A_{k\,m}(t)$ are constants that depend only on time.
In other words, the following conditions must be satisfied:
\be			\label{garmsolQPnW}
\int X^{m}\,w(X,\theta,t)dX=
\sum_{k=0}^{m}A_{km}(t)
\cos^k\theta\sin^{m-k}\theta,~~~m=1,~2,...,~n-2,
\ee
\be			\label{coefAkm}
A_{km}(t)=
C_{m}^k\int q^kp^{m-k}\,
W(q,p,t)dq~dp,
\ee
where $C_{m}^k$ is a binomial coefficient.

If the potential $V(q)$ is an analytic function represented by an infinite
series of powers of $q$,
then the tomogram $w(X,\theta,t)$ must satisfy an infinite number of conditions
of the form (\ref{garmsolQPnW}), with $n=\infty$.

Obviously, if we consider the classical case,
then in formula (\ref{coefAkm}) the Wigner function $W(q,p,t)$ must be replaced
by the classical distribution function $W_\mathrm{cl}(q,p,t)$.

One can show that, for all these conditions to be satisfied, the function
$w(X,\theta,t)$ must admit an expansion in Hermite polynomials of the following form:
\be			\label{razlozhH}
w(X,\theta,t)=\sum_{n,\,m}\rho_{nm}(t)
\frac{e^{i\theta(m-n)}e^{-X^2}}{\sqrt{\pi2^{n+m}n!m!}}
H_n(X)H_m(X),
\ee
where for the quantum case the matrix $\rho_{nm}(t)$ is obviously the density matrix
of the quantum state in the Fock basis representation, and for the classical case the set of quantities
$\rho_{nm}^\mathrm{cl}(t)$ can be expressed in terms of the classical distribution function
$W_\mathrm{cl}(q,p,t)$ as follows:
\bea			\label{rhoclas}
\rho_{nm}^\mathrm{cl}(t)&=&\frac{1}{\sqrt{\pi2^{n+m}n!m!}}
\int W_\mathrm{cl}\left(\frac{q+q'}{2},p,t\right) \nonumber\\[3mm]
&&\times\exp\left(ip\,(q-q')
-\frac{q^2}{2}-\frac{q'^2}{2}\right)
H_n(q)H_m(q')\,dq~dq'~dp.
\eea
Thus, for the normalization of the optical tomogram to be preserved during evolution
in accordance with equation (\ref{r20_2}) or (\ref{eq_54}),
it is necessary and sufficient that the tomogram be a sum of the form
(\ref{razlozhH}), which for an $N$-dimensional system can be written as
\bea			\label{razlozhHN}
w(\mathbf X,\bm\theta,t)=\sum_{{n_1...n_N}\atop{m_1...m_N}}
\rho_{n_1...n_Nm_1...m_N}(t)\frac{1}{\sqrt{\pi^N}}
\prod_{\sigma=1}^N
\frac{e^{i\theta_\sigma(m_\sigma-n_\sigma)}e^{-X_\sigma^2}}
{\sqrt{2^{n_\sigma+m_\sigma}n_\sigma!m_\sigma!}}
H_{n_\sigma}(X_\sigma)H_{m_\sigma}(X_\sigma).
\eea
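The expansion (\ref{razlozhH}) is straightforward to evaluate; the following sketch (ours) builds $w(X,\theta)$ from a density matrix $\rho_{nm}$ in the Fock basis and checks that the normalization is the same for every $\theta$, as it must be for any tomogram of this class:

\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def tomogram_from_rho(rho, X, theta):
    N = rho.shape[0]
    hs = [hermval(X, [0]*n + [1]) for n in range(N)]   # physicists' H_n(X)
    w = np.zeros_like(X, dtype=complex)
    for n in range(N):
        for m in range(N):
            norm = sqrt(pi * 2.0**(n + m) * factorial(n) * factorial(m))
            w += rho[n, m] * np.exp(1j*theta*(m - n)) * np.exp(-X**2) \
                 * hs[n] * hs[m] / norm
    return w.real                                      # Hermitian rho -> real w

X = np.linspace(-6, 6, 1201)
rho = np.zeros((4, 4)); rho[0, 0] = 0.5; rho[1, 1] = 0.5   # a mixed state
for th in (0.0, 0.7, 2.1):
    w = tomogram_from_rho(rho, X, th)
    print(f"theta={th}: norm = {w.sum() * (X[1]-X[0]):.6f}")  # 1.000000
\end{verbatim}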
Note that the property of conservation of normalization in itself,
in addition to non-negativity, parity, and satisfaction of the Hirschman criterion,
is also not sufficient for a function to be a tomogram. Consider the example
\be			\label{example3}
w_1(X,\theta)=\frac{e^{-X^2}}{\sqrt\pi}\left(X^4+\frac{X^2}{4}+\frac{1}{8}\right).
\ee
Despite the fact that the function $w_1(X,\theta)$ has all the properties listed above,
it is not a tomogram of any quantum or classical state,
because the ``probability'' of finding the quantum system in the state $|0\rangle\langle0|$
is negative (equal to $-1/8$); equivalently, if we apply transformation (\ref{eq_52}) to $w_1(X,\theta)$,
we obtain the function
\be			\label{example4}
W_1(q,p)=\frac{e^{-q^2-p^2}}{\pi}\left[(q^2+p^2)^2-\frac{3}{4}(q^2+p^2)-\frac{1}{4}\right],
\ee
which is not a classical distribution function.

\section{\label{Section5}Conditions for conservation of normalization \\
of symplectic tomogram-like functions during evolution}
For Hamiltonians (\ref{r18_2}),
the evolution equation
of the symplectic tomogram $M({\mathbf X},{\bm\mu,\bm\nu},t)$
of a quantum system has the form \cite{ManciniFoundPhys97}
\begin{eqnarray}
\frac{\partial}{\partial t}M({\mathbf X},{\bm\mu},{\bm\nu},t)&=&
\left[\sum_{\sigma=1}^N\frac{\mu_\sigma}{m_\sigma}\frac{\partial}{\partial\nu_\sigma}\right]
M({\mathbf X},{\bm\mu},{\bm\nu},t) \nonumber \\[3mm]
&+&
\frac{2}{\hbar}
\left[\mathrm{Im}~
V\left\{-\left[\frac{\partial}{\partial X_\sigma}\right]^{-1}
\frac{\partial}{\partial\mu_\sigma}+\frac{i\nu_\sigma\hbar}{2}
\frac{\partial}{\partial X_\sigma}\right\}\right]
M({\mathbf X},{\bm\mu},{\bm\nu},t),
\label{eq_46}
\end{eqnarray}
and for a classical system in the potential field $V(\mathbf q,t)$
we have \cite{OlgaJRLR97}
\begin{eqnarray}
\frac{\partial}{\partial t}M_\mathrm{cl}(\mathbf X,\bm\mu,\bm\nu,t)&=&
\left[\sum_{\sigma=1}^N\frac{\mu_\sigma}{m_\sigma}\frac{\partial}{\partial\nu_\sigma}\right]
M_\mathrm{cl}(\mathbf X,\bm\mu,\bm\nu,t) \nonumber \\[3mm]
&+&
\left[\sum_{\sigma=1}^{N}\frac{\partial}{\partial q_\sigma}
V\left\{q_\sigma\rightarrow-\left[\frac{\partial}{\partial X_\sigma}\right]^{-1}
\frac{\partial}{\partial\mu_\sigma}\right\}
\nu_\sigma\frac{\partial}{\partial X_\sigma}\right]
M_\mathrm{cl}(\mathbf X,\bm\mu,\bm\nu,t).
		\label{eq_55}
\end{eqnarray}

Performing calculations similar to those of the previous section for the term
with $q^3$ in the potential $V(q)$, we find the equation
for the first moment of the variable $X$,
\be 			\label{condit1}
\partial^2_\mu\int XM(X,\mu,\nu,t)dX=0.
\ee
The solution of this differential equation is linear in the variable $\mu$,
\be 			\label{solutionM1}
\int XM(X,\mu,\nu,t)dX=A_1(\nu,t)+\mu B_1(\nu,t),
\ee
and taking into account the homogeneity condition (\ref{eq_3})
we find that $B_1(\nu,t)=B_1(t)$ is independent of $\nu$
and $A_1(\nu,t)=\nu A'_1(t)$, where $A'_1(t)$ is independent of $\nu$:
\be 			\label{solutionM1cor}
\int XM(X,\mu,\nu,t)dX=\mu B_1(t)+\nu A'_1(t);
\ee
from definition (\ref{eq_44}),
this relation can be written as
\be			\label{garmsolQPM}
\int XM(X,\mu,\nu,t)dX=\langle\hat q\rangle\mu
+\langle\hat p\rangle\nu.
\ee
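The linearity condition (\ref{garmsolQPM}) can be checked directly; in the sketch below (ours) we use the symplectic tomogram of a coherent state with $\langle q\rangle=q_0$, $\langle p\rangle=p_0$, which is a Gaussian in $X$ with mean $q_0\mu+p_0\nu$ and variance $(\mu^2+\nu^2)/2$:

\begin{verbatim}
import numpy as np

q0, p0 = 1.0, -0.5
X = np.linspace(-20, 20, 4001)
dX = X[1] - X[0]

def M(X, mu, nu):                       # coherent-state symplectic tomogram
    s2 = mu**2 + nu**2
    return np.exp(-(X - q0*mu - p0*nu)**2 / s2) / np.sqrt(np.pi * s2)

rng = np.random.default_rng(1)
for mu, nu in rng.normal(size=(5, 2)):
    m1 = np.sum(X * M(X, mu, nu)) * dX
    print(f"mu={mu:+.2f} nu={nu:+.2f}: moment={m1:+.4f} "
          f"q0*mu+p0*nu={q0*mu + p0*nu:+.4f}")   # the two columns agree
\end{verbatim}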
X^2M(X,\\mu,\\nu,t)dX&=&\n\\left(\\int q^2W(q,p,t)dq~dp\\right)\\mu^2\n+\\left(\\int p^2W(q,p,t)dq~dp\\right)\\nu^2 \\nonumber \\\\[3mm]\n&&+2\\left(\\int qp\\,W(q,p,t)dq~dp\\right)\\mu\\nu.\n\\label{garmsolQPMW}\n\\eea\nFor the polynomial potential of the power of $q^n$ as in the cases of (\\ref{condit1}) and (\\ref{condit2})\nwe can write \n\\be \t\t\t\\label{conditn}\n\\partial^{m-1}_\\mu\\int X^{m-2}M(X,\\mu,\\nu,t)dX=0,~~~m=2,~3,...,~n,\n\\ee\nand as in the case of (\\ref{garmsolQPnW})\nwe have\n\\be\t\t\t\\label{garmsolQPnWM}\n\\int X^{m}\\,M(X,\\mu,\\nu,t)dX=\n\\sum_{k=0}^{m}A_{km}(t)\n\\mu^k\\nu^{m-k},~~~m=1,~2,...,~n-2,\n\\ee\nwhere $A_{km}(t)$ are defined by formula (\\ref{coefAkm}),\nand so, the symplectic tomogram must be represented in the form of the sum \\cite{OlgaJRLR97}\n\\bea\nM(X,\\mu,\\nu,t)&=&\\sum_{n,m}\\frac{\\rho_{nm}(t)}\n{\\sqrt{\\pi(\\mu^2+\\nu^2)2^{(n+m)}n!m!}}\n\\frac{(\\nu+i\\mu)^m(\\nu-i\\mu)^n}{(\\mu^2+\\nu^2)^{(n+m)\/2}}\n\\nonumber \\\\[3mm]\n&&\\times\\exp\\left(-\\frac{X^2}{\\mu^2+\\nu^2}\\right)\nH_n\\left(\\frac{X}{\\sqrt{\\mu^2+\\nu^2}}\\right)\nH_m\\left(\\frac{X}{\\sqrt{\\mu^2+\\nu^2}}\\right),\n\\label{razlMH}\n\\eea\nwhich is the necessary and sufficient condition for preservation\nof the normalization during evolution of the symplectic tomogram.\nFor $N-$dimensional systems the latter formula has the form\n\\bea\t\t\t\nM(\\mathbf X,\\bm\\mu,\\bm\\nu,t)&=&\\sum_{{n_1...n_N}\\atop{m_1...m_N}}\n\\frac{\\rho_{n_1...n_N\\,m_1...m_N}(t)}{\\sqrt{\\pi^N}}\n\\prod_\\sigma^N\n\\frac{(\\mu_\\sigma+i\\nu_\\sigma)^{m_\\sigma}(\\mu_\\sigma-i\\nu_\\sigma)^{n_\\sigma}}\n{\\sqrt{2^{n_\\sigma+m_\\sigma}(\\mu_\\sigma^2+\\nu_\\sigma^2)^{n_\\sigma+m_\\sigma+1}\nn_\\sigma!m_\\sigma!}} \\nonumber \\\\[3mm]\n&&\\times\n\\exp\\left(-\\frac{X_\\sigma^2}{\\mu_\\sigma^2+\\nu_\\sigma^2}\\right)\nH_{n_\\sigma}\\left(\\frac{X_\\sigma}{\\sqrt{\\mu_\\sigma^2+\\nu_\\sigma^2}}\\right)\nH_{m_\\sigma}\\left(\\frac{X_\\sigma}{\\sqrt{\\mu_\\sigma^2+\\nu_\\sigma^2}}\\right).\n\\label{HermExpanN}\n\\eea\n\nAs in the case with the optical tomograms, conditions of conservation of normalization of type \n(\\ref{conditn}), or (\\ref{garmsolQPnWM}), or (\\ref{razlMH}) are not sufficient for the non-negative\ntomogram-like function $f(\\mathbf X,\\bm\\mu,\\bm\\nu)$ obeying to (\\ref{eqnormSymp}) and (\\ref{eq_3})\nto be a tomogram of any quantum or classical state.\nProving example it is easy to obtain from expression (\\ref{example3}) applying the formula\n(\\ref{eq_6}):\n\\be\t\t\t\\label{example5}\nM_1(X,\\mu,\\nu)=\\frac{1}{\\sqrt\\pi\\sqrt{\\mu^2+\\nu^2}}\n\\left(\\frac{X^4}{(\\mu^2+\\nu^2)^2}+\\frac{X^2}{4(\\mu^2+\\nu^2)}+\\frac{1}{8}\\right)\n\\exp\\left(-\\frac{X^2}{\\mu^2+\\nu^2}\\right).\n\\ee\nThe function $M_1(X,\\mu,\\nu)$ is non-negative, normalized, homogeneous, and it conserves\nthe normalization during time evolution, but it is not a tomogram.\n\n\n\n\\section{\\label{Section6}Conservation of normalizations of the Wigner function,\nclassical distribution function, and Husimi function\nduring evoluton}\n\nIn previous sections we have found that in order to as classical as quantum\ndynamical equations for optical and symplectic\ntomograms retain normalizations, the tomograms \nhave to satisfy a set of conditions.\nAnd only normalization of the tomogram is insufficient for \nthis condition remain valid during time evolution.\n\nOn the contrary, Moyal equation for the Wigner function\n\\cite{Moyal1949}\n\\be\t\t\t\\label{Moyal}\n\\frac{\\partial }{\\partial _t} W({\\bf q},{\\bf 
\section{\label{Section6}Conservation of normalizations of the Wigner function,
classical distribution function, and Husimi function
during evolution}

In the previous sections we have found that, in order for both the classical and the quantum
dynamical equations for optical and symplectic
tomograms to retain normalization, the tomograms
have to satisfy a set of conditions;
the normalization of the tomogram alone is insufficient for
this condition to remain valid during time evolution.

On the contrary, the Moyal equation for the Wigner function
\cite{Moyal1949},
\be			\label{Moyal}
\frac{\partial }{\partial t} W({\bf q},{\bf p},t)=
-\sum_{\sigma=1}^{N} \frac{p_\sigma}{m_\sigma}
\frac{\partial}{\partial q_\sigma }W({\bf q},{\bf p},t)
-2\left[\mathrm{Im}~V\left\{
{\bf q}+\frac{i}{2} \frac{\partial}{\partial {\bf p}}
\right\}\right]W({\bf q},{\bf p},t),
\ee
the Liouville equation for the distribution function,
\be			\label{Vlasov}
\frac{\partial }{\partial t} W_\mathrm{cl}({\bf q},{\bf p},t)=
-\sum_{\sigma=1}^{N} \frac{p_\sigma}{m_\sigma}
\frac{\partial}{\partial q_\sigma }W_\mathrm{cl}({\bf q},{\bf p},t)
-\left[\frac{\partial }{\partial {\bf q}}~V({\bf q})\right]\frac{\partial}{\partial {\bf p}}
W_\mathrm{cl}({\bf q},{\bf p},t),
\ee
and the dynamical equation for the Husimi
function \cite{Mizrahi1986},
\bea			\label{HusimiEQ}
\frac{\partial }{\partial t} Q({\bf q},{\bf p},t)&=&
-\sum_{\sigma=1 }^{N} \frac{1}{m_\sigma}\left[p_\sigma
\frac{\partial}{\partial q_\sigma }
+\frac{1}{2}\frac{\partial}{\partial p_\sigma }
\frac{\partial}{\partial q_\sigma }\right]
Q({\bf q},{\bf p},t) \nonumber\\[3mm]
&&-2\left[\mathrm{Im}~V\left\{
{\bf q}+\frac{1}{2}\frac{\partial }{\partial {\bf q}}
+\frac{i}{2} \frac{\partial}{\partial {\bf p}}
\right\}\right]Q({\bf q},{\bf p},t),
\eea
retain normalization for any normalized functions decaying quickly as $q_i$, $p_j$
tend to infinity.
As in the previous sections, let us approximate the potential $V(\bf q)$
with a polynomial and integrate both sides of the dynamical equations
(\ref{Moyal})--(\ref{HusimiEQ}) over the hyperspace ${\bf q}\times{\bf p}$.
It is easy to see that for any functions $W({\bf q},{\bf p},t)$,
$W_\mathrm{cl}({\bf q},{\bf p},t)$ and $Q({\bf q},{\bf p},t)$ from the class considered,
the right-hand sides of equations (\ref{Moyal})--(\ref{HusimiEQ})
become identically equal to zero.

Thus, the Moyal equation, the Liouville equation and the dynamical equation for the Husimi function
retain the normalization of their solutions without any additional conditions
like (\ref{Gendifeven}, \ref{Gendifodd}) for optical
or (\ref{conditn}) for symplectic tomograms, and that is a principal
difference between equations (\ref{Moyal})--(\ref{HusimiEQ}) and the corresponding
equations (\ref{r20_2}, \ref{eq_54}, \ref{eq_46}, \ref{eq_55}) for tomograms.
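This can be verified numerically; in the sketch below (ours) the right-hand side of the Liouville equation (\ref{Vlasov}) integrates to zero over phase space for an arbitrary smooth, decaying test distribution, with no condition beyond normalization and decay:

\begin{verbatim}
import numpy as np

q = p = np.linspace(-8, 8, 257)
dq = dp = q[1] - q[0]
Q, P = np.meshgrid(q, p, indexing="ij")
Wcl = np.exp(-(Q - 1)**2 - (P + 0.5)**2) / np.pi   # any normalized test function
Vprime = 3 * Q**2 + Q                              # e.g. V(q) = q^3 + q^2/2

# Right-hand side of the Liouville equation:
rhs = -P * np.gradient(Wcl, dq, axis=0) - Vprime * np.gradient(Wcl, dp, axis=1)
print(abs(rhs.sum() * dq * dp) < 1e-10)            # ~0: normalization conserved
\end{verbatim}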
\section{\label{Section7}Conclusion}

In summary, we point out the main results of our paper.
We discussed the properties of optical and symplectic tomograms.

It was demonstrated that optical tomograms, as well as symplectic
tomograms, can be associated with a
unitary representation of the Weyl-Heisenberg group. This fact has allowed us to formulate an autonomous criterion for an optical
tomogram-like function to be a tomogram of a quantum or classical state,
based on positivity properties of a function on the group
determined with the help of the tomogram-like function.

We have shown that for an arbitrary potential the tomograms must satisfy a set
of relations in order to remain normalized during evolution.
We found that all moments of the homodyne variable $X$,
for both optical and symplectic tomography,
must satisfy a set of specific linear differential equations.
These equations were obtained explicitly and their general
solutions were presented.

We also illustrated that the Moyal equation, the Liouville equation, and the dynamical equation
for the Husimi function retain the normalization for any normalized initial conditions decreasing
rapidly at infinity, unlike the tomographic dynamical equations.

The relations that allow the normalization of the tomogram to be conserved
during time evolution determine a narrow class of functions, which can
evolve without dramatic growth, maintaining their physical sense.


\section{Introduction}

The Herschel Space Observatory was launched from Kourou in May 2009 aboard an Ariane 5 rocket. Two of the three scientific instruments on the focal plane (PACS and SPIRE) are capable of observing the infrared sky with unprecedented angular
resolution and sensitivity, providing photometric observations in 6
different bands \citep[$70$$\mu$m, $100$$\mu$m, $160$$\mu$m, $250$$\mu$m,
$350$$\mu$m\ and $500$$\mu$m;][and references therein]{Pilbratt10}.

The PACS photometer is composed of two bolometer arrays: a $64 \times 32$ pixel matrix arranged from 8 monolithic subarrays of $16 \times 16$
pixels each, centered on the $70\mu$m and $100\mu$m wavelengths (blue and green bands), and a $32 \times 16$ pixel matrix organized in two subarrays for the band centered on $160\mu$m (red band); see \citet{Poglitsch10}.

SPIRE comprises a three-band photometer, operating in spectral bands centered on $250\mu$m, $350\mu$m and $500\mu$m. Each band uses a matrix of germanium bolometers (139, 88 and 43, respectively) coupled to hexagonally packed conical feed horns \citep{Griffin10}.

In order to handle the science data provided by the Herschel instruments, including data retrieval from the Herschel Science Archive, data reduction through the standard pipelines, and scientific analysis, an official software environment called HIPE
\citep[Herschel Interactive Processing Environment,][]{HIPE} is available from ESA.

The raw data provided by the satellite are reduced in HIPE to generate scientific data (so-called Level 2) and
intermediate products of the data reduction process (Level 0 and Level
1 data).

In this paper, we describe the dedicated pipeline created to obtain maps for Hi-GAL \citep[Herschel Infrared Galactic Plane
Survey,][]{Molinari10_PASP}. Hi-GAL aims to homogeneously cover, with observations in 5 contiguous IR bands between 70$\mu$m and 500$\mu$m, a 2-degree-wide stripe of the Galactic plane between $l=-70^{\circ}$ and $l=70^{\circ}$.

The Galactic plane shows emission varying from point-like sources to large-scale structures, with intensity varying over a wide dynamic range.
In this work we show that the existing standard reduction strategy (based on HIPE version 4.4.0, released on November 11th, 2010) is not optimized for reducing Hi-GAL data, and that a dedicated pipeline can enhance the quality of the Level 2 products.

After Herschel successfully passed the Performance
Verification Phase (PV Phase), two fields of the Hi-GAL survey were acquired
during the Science Demonstration Phase (SDP): $2\times2$ square degree areas
of the Galactic plane centered on 30$^{\circ}$ of longitude
(hereafter $\textit{l}=30^{\circ}$) and on 59$^{\circ}$ ($\textit{l}=59^{\circ}$).

We describe the data reduction tools used to obtain high quality maps from the SDP data, with the aim of providing a reliable environment for the
Routine Phase (RP) data. The maps provided by our pipeline have been successfully used in several works, e.g., \citet{Molinari10_special}, \citet{Martin10}, \citet{Peretto10}.

The paper is organized as follows: in Section \ref{sec:higal} we
describe the acquisition strategy for Hi-GAL data; in Section \ref{sec:preprocessing} we
describe the pre-processing steps of the data reduction pipeline, necessary to prepare data for the map making,
and the tools that we have developed for that
purpose. In Section \ref{sec:mapmaking} we describe the ROMAGAL map
making algorithm used to estimate the maps. ROMAGAL is used
in place of the MadMap code, which is the map making algorithm offered
in HIPE. The quality of the ROMAGAL maps for both the PACS and SPIRE instruments, related to the SDP observations, is analyzed in Section
\ref{sec:SDP_results}; in Section \ref{sec:Conclusions} we draw our
conclusions.

\section{The Hi-GAL\ acquisition strategy}\label{sec:higal}

Hi-GAL data are acquired in the PACS/SPIRE Parallel mode\footnote{http://herschel.esac.esa.int/Docs/PMODE/html/parallel\_om.html}, in which the same sky
region is observed by moving the satellite at a constant speed of 60$\arcsec$/sec while
acquiring images simultaneously in five photometric bands: $70$$\mu$m\ and $160$$\mu$m\ for PACS, and $250$$\mu$m, $350$$\mu$m\ and $500\mu$m for SPIRE.

The whole data acquisition is subdivided into $2^\circ\times2^\circ$ fields of sky
centered on the Galactic plane. Every
Hi-GAL field is composed of the superposition of two orthogonal AORs (Astronomical Observation Requests),
each of them based on a series of consecutive, parallel and partly overlapping scan legs covering $2^\circ\times2^\circ$. The scanning strategy adopted for Hi-GAL is fully described in \citet{Molinari10_PASP}. The superposition is performed in order to obtain suitable data redundancy and to better sample instrumental effects such as the high-frequency detector response \citep{Molinari10_PASP}.

The acquisition rate for the parallel
mode is 40 Hz for PACS and 10 Hz for SPIRE, although the PACS
data are averaged on-board for an effective rate of 5\,Hz and 10\,Hz
for the $70$$\mu$m\ and $160$$\mu$m\ arrays, respectively. The implications of the PACS data compression are detailed in Section \ref{sec:signal_striping}.

An example of the scanning strategy of the Hi-GAL survey is shown
in Figure \ref{fig:coverage}: the coverage map of the PACS blue array is shown in the left panel, and in the right panel we highlight the
superposition of one scan leg with the following ones by enlarging the
bottom right corner of the left image. Two calibration blocks for each
AOR, during which the instrument observes two internal calibration
sources located on the focal plane, were provided during the SDP
observations. They appear as areas of higher-than-mean coverage and are marked
by black and green circles for the 2 AORs in Figure
\ref{fig:coverage}. Higher coverage zones are also clearly visible in
the slewing region at the end of each scan leg, where the telescope
decelerates and then accelerates before initiating the next scan leg. A toy model of how such a coverage map builds up is sketched after the figure.

\begin{figure}
\centering
\includegraphics[width=4cm]{figure1.eps}
\includegraphics[width=4cm]{figure2.eps}
\caption{Left: the coverage map of the PACS blue array. Right: a zoom of the bottom right corner, where the
 effect of the superposition of one scan leg with the next is clearly visible. Black and
 green circles highlight the calibration blocks, during which the internal calibration sources are observed; during the PV and SDP phases two such blocks were acquired per AOR.}
\label{fig:coverage}
\end{figure}
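The following illustration (ours, with made-up geometry, not the actual Hi-GAL pointing code) shows how a coverage map like Figure \ref{fig:coverage} builds up from two orthogonal sets of partly overlapping scan legs:

\begin{verbatim}
import numpy as np

npix = 240                              # 2 deg x 2 deg at 30 arcsec/pixel
cov = np.zeros((npix, npix))
leg_sep = 8                             # scan-leg separation in pixels (assumed)
array_w = 12                            # projected array width in pixels (assumed)

def scan(cov, vertical):
    for leg0 in range(0, npix, leg_sep):      # partly overlapping legs
        lo, hi = leg0, min(leg0 + array_w, npix)
        if vertical:
            cov[:, lo:hi] += 1                # one leg sweeps a full strip
        else:
            cov[lo:hi, :] += 1
    return cov

cov = scan(cov, vertical=False)         # first AOR
cov = scan(cov, vertical=True)          # orthogonal AOR
print("min/max coverage:", cov.min(), cov.max())
\end{verbatim}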
\section{Map making pre-processing}\label{sec:preprocessing}

The aim of the pre-processing is to prepare Hi-GAL data for
mapmaking.

While the map making is performed
using a Fortran parallel code borrowed from cosmological observations
(see Section \ref{sec:mapmaking}), the preprocessing is done through a series of
IDL and jython tools to be run on the data one after the other. After having
tried to map Hi-GAL data using the standard tools provided within HIPE,
we decided to develop our own routines, tailored
specifically for the reduction of data affected by a bright and irregular
background, as in the Galactic plane. In fact, the high-pass filtering
used in HIPE to cure the long-time drift also removes a large part of
the diffuse Galactic emission. Furthermore, the standard deglitching
embedded in HIPE, the MMT \citep[Multiresolution Median
 Transform,][]{Starck95}, generates false detections in correspondence
with very bright sources when we apply this task to the PACS data. On the other hand, the deglitching procedure based on wavelet analysis used by SPIRE does not affect the data, given also the lower spatial resolution compared to the PACS one. We therefore use the HIPE task for SPIRE data only.

Herschel data are stored in subsequent snapshots of the sky acquired
by the entire detector array, called frames. In a first processing
step, the standard modules within HIPE are used to generate Level 1 data for both PACS (except the deglitching task) and SPIRE. The data are then rearranged into one time series per detector pixel, called Time
Ordered Data (TOD), which includes the calibrated flux (Jy\,beam$^{-1}$ for SPIRE and Jy\,sr$^{-1}$ for PACS) and the celestial coordinates.
At the end of this step, TODs are exported outside HIPE in FITS format. In the subsequent processing steps, TODs are managed by a series of IDL tools, in order to produce final TODs free
of systematic effects due to the electronics, and of glitches and corrupted chunks of data due to cosmic
rays. To each TOD a flag file is attached to keep track
of any flagging done during the data reduction steps; a minimal sketch of such a container is given below.

Preprocessing includes the identification of corrupted TODs
(or parts of them), drift removal and deglitching. The following steps are the Noise Constraint Realization (NCR) and the ROMAGAL mapmaking; they will both be described in detail in the next Sections. A summary of the entire pipeline is shown in the diagram in Figure \ref{fig:higalpipe}.

\begin{figure}
\centering
\includegraphics[width=4cm]{figure3.eps}
\caption{Schematic representation of the Hi-GAL data reduction pipeline}
\label{fig:higalpipe}
\end{figure}
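The sketch below (ours; the actual pipeline uses IDL and FITS flag files) illustrates the bookkeeping idea of a TOD with an attached flag array, where every masking decision is recorded as a bit:

\begin{verbatim}
import numpy as np
from dataclasses import dataclass, field

@dataclass
class TOD:
    flux: np.ndarray                   # calibrated signal, one sample per frame
    lon:  np.ndarray                   # celestial coordinates per sample
    lat:  np.ndarray
    flags: np.ndarray = field(default=None)

    def __post_init__(self):
        if self.flags is None:
            self.flags = np.zeros(self.flux.size, dtype=np.uint8)  # 0 = good

    def mask(self, sl, bit):
        self.flags[sl] |= bit          # e.g. bit 1 = glitch, bit 2 = drift tail

    def good(self):
        return self.flux[self.flags == 0]

tod = TOD(np.random.randn(1000), np.zeros(1000), np.zeros(1000))
tod.mask(slice(400, 480), 1)           # flag a corrupted chunk
print(tod.good().size)                 # 920 samples survive
\end{verbatim}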
\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=4cm]{figure3.eps}\n\\caption{Schematic representation of the Hi-GAL\\ data reduction pipeline} \n\\label{fig:higalpipe}\n\\end{figure}\n\n\\subsection{Corrupted TODs}\n\nA TOD can be partially corrupted by the random hiting of\ncharged particles (cosmic rays) which produce strong and spiky signal\nvariations called glitches. Two\ndifferent effects on the TODs can be identified: the glitch corrupts a single or few consecutive samples, generating\nspiky changes along the TOD. This is the most common effect and in\nSection \\ref{sec:deglitching} we describe how to detect and mask the\ndata for PACS, as well as to mask any possible residual glitches for SPIRE. Very powerful glitches, on the other hand,\nmake the detector signal unstable for a considerable\ninterval of time \\citep[see, e.g.,][]{Billot10}. This effect depends on the glitch amplitude and on the\nbolometers time response. These events affect a much larger part of\nthe TOD that cannot be used for mapmaking.\n\nTheir impact results in a bias on the bolometer \ntimeline with a low-frequency drift which can involve a considerable\nnumber of samples, as shown in\nFigure~\\ref{fig:bad_bolometer}. \n\nIn that Figure, blue crosses represent the\nobserved timeline of one bolometer of the blue\narray. \n\nAutomatic identification of the (partially) corrupted TODs exploits\nthe first derivative of the TOD to detect extraordinary ``jumps'' in\nthe signal. In order to determine what portion of the TOD is to be\nflagged, the flagging tool exploits the fact that the detector pixels response that have\nbeen hit by a cosmic ray is mostly exponential. Data samples ranging from the\njump to the time sample at which an exponential fit reaches 98\\% of\nthe signal level before the event are identified as bad data and\nstored as such in the corresponding flag file. In case the exponential\nfit does not reach 98\\% of the starting value before the end of the\nTOD, then all data starting from the hit will be flagged as is the case\nin Figure~\\ref{fig:bad_bolometer}. This procedure is applied both in the cases of a changing in responsivity or a detector offset alteration. In the latter, we estimate the fit with an exponent equal to 0. The described procedure was adopted to process both PACS and SPIRE data. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure4.eps}\n\\caption{Timeline of a PACS blue bolometer. The exponential decay illustrates the change in responsivity after frame 40000 due to the impact of a powerful glitch.}\n\\label{fig:bad_bolometer}\n\\end{figure}\n\n\\subsection{Drift removal}\n\nAfter having identified corrupted data we proceed to the elimination of changes in responsivity over time. The procedures are in principle\nidentical for PACS and SPIRE data, the only differences account for\ndifferent composition of the detector arrays and the data acquisition\nof the two instruments. \n\nThe signal in PACS TODs that exit HIPE does not represent the true sky\nbut is dominated by the telescope background and the (unknown) zero level of the electronics. The\nelectronics further introduce significant pixel-to-pixel\noffsets. For each TOD, we mitigate the effect of pixel-to-pixel offset by calculating and then subtracting the median level for each pixel from each readout. This ensures that all pixels have median level equal to 0. 
The median is preferred over the mean because it is a much better representation of the sky+telescope background flux, being much less sensitive to the signal from astrophysical sources.\n\nMoreover, the subtraction of this offset from each TOD does not alter the signal in the final map: it only introduces a global offset, constant over the entire area covered by the observation. However, it should be kept in mind that bolometers are inherently differential detectors, which bear no knowledge of an absolute signal value; besides, any optimized map making method like the one we employ (see Section \\ref{sec:mapmaking}) produces maps with an unknown offset which needs to be calibrated independently. It is therefore important to reduce all the bolometers to the same median value, regardless of its amount. The pixel-to-pixel median subtraction has the effect seen in Figure~\\ref{fig_drift_2}: diffuse emission and compact sources are clearly visible in the frame. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure5.eps}\n\\caption{Blue PACS frame after the median subtraction on each pixel. Diffuse emission and compact sources are visible in the frame.}\n\\label{fig_drift_2}\n\\end{figure}\n\nStill, when plotting a detector pixel timeline, we see that the signal decreases systematically from the start to the end of the observation. This trend is likely due to a combination of small changes in the thermal bath of the array and of small drifts in the electronics. The former affects the entire array, while the latter affects subunits of the detector array (in fact, the PACS blue array is divided into 8, and the PACS red array into 2, electronically independent units \\citep{Poglitsch08}). The overall drift is then a combination of these two effects: a drift of the entire array and a drift of each single subunit. These effects are dominant with respect to the $1\/f$ noise pattern, which will be described in the next Sections. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure6.eps}\n\\caption{Median behavior computed on the whole array for each frame. The slow drift is due to the electronics and the thermal bath.} \n\\label{fig_drift_4}\n\\end{figure}\n\nIt is in principle not a trivial task to decide which drift has to be subtracted first: the drift from the thermal bath (affecting the entire detector array) or the drift from the readout electronics (affecting the sub-arrays differently)? Ideally both should be subtracted, if only each component could be separated accurately, since the net effect in the data is the sum of the two.\n\nOur methodology for removing the correlated signal drifts (at both the bolometer module\/unit level and the array level) is based on tracing the low-signal envelope of the unit or array median levels. In Figure \\ref{fig_drift_4}, this envelope is the curve defined by the lowest signal values. It is estimated as follows: \n\n\\begin{itemize}\n\\item[i] We compute the median value of the entire bolometer array\/unit for each bolometer readout. Figure \\ref{fig_drift_4} shows one example for the entire array. \n\\item[ii] The median values thus obtained are segmented and grouped by scan legs.
Each scan leg is composed of $\\sim1000$ frames, and we observed 54 scan legs for each $2^\\circ\\times2^\\circ$ Hi-GAL field.\n\\item[iii] For each scan leg we compute the minimum value of the array\/unit medians.\n\\item[iv] The resulting set of minimum median values for all the scan legs is fitted with a polynomial.\n\\end{itemize}\n\nThe median value of each array\/unit readout is chosen because it is closest to the actual sky+telescope background. However, as clearly seen in Figure \\ref{fig_drift_4}, in the presence of strong astrophysical sources the median value is biased for our purposes: the strong sources appear as signal spikes in Figure \\ref{fig_drift_4}. Hence, we take the further step of selecting the minimum value from the set of medians belonging to a single scan leg. The idea is that at some point during the scan leg the median was indeed a true representation of the local sky+telescope background, relatively free of source emission. This step allows us to reject the sources at the expense of degrading the time resolution to the scan-leg duration ($\\sim240$ sec). The polynomial fit then allows us to estimate the drift behavior at the same time resolution as the PACS signal (5 Hz and 10 Hz for the 70$\\mu$m and 160$\\mu$m bands, respectively). We further note that the correlated signal drift is relatively flat over a single scan leg; hence, the minimum value is not significantly affected by the presence of the monotonic signal drift itself in the first place.\n\nThe minimum median method discussed above removes the background level and the spatial emission structures of the order of, or larger than, a scan leg, yet preserves the spatial structures smaller than a scan leg. In essence, information about the absolute calibration zero-point \\citep{Miville05} is lost, but all the spatial structures within our map boundaries are preserved.\n\nIn Figure \\ref{fig_drift_6} we report the minimum median values of each subarray. The common downward trend is due to the (common) thermal bath drift, while the different readout electronics are responsible for the differences in the subarray slopes.\n\nWe therefore subtract the drifts at the subarray level, so as to account for both the thermal bath and the readout electronics behaviors, separately for each subunit.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure7.eps}\n\\caption{Interpolation of the minima of the medians evaluated on every scan leg. Each curve refers to a PACS subarray. The curves follow the same overall behavior, with different slopes due to the different subarray electronics.} \n\\label{fig_drift_6}\n\\end{figure}\n\nOnce the individual subarray drift is removed, the remaining dispersion on the whole array is only a residual scatter due to the finite accuracy of the removal tool, as shown in Figure \\ref{fig_drift_7}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure8.eps}\n\\caption{Minima of the medians evaluated on the whole PACS blue array after the subarray drift subtraction. The dispersion is due to the finite accuracy of the drift subtraction tool. No residual trend is present, and the scatter is one order of magnitude below the value of the initial drift.} \n\\label{fig_drift_7}\n\\end{figure}\n\nThe SPIRE array detectors are not divided into subarrays, so every procedure that has to be run 8 or 2 times for PACS data is performed only once per SPIRE band.
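\n\nFor reference, steps (i)-(iv) above can be condensed into the following minimal sketch (Python; the per-readout array\/unit medians of step (i) are assumed to be precomputed):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef drift_estimate(medians, legs, t, deg=2):\n    # medians: median over the array\/unit per readout (step i)\n    # legs: (start, end) sample ranges of the scan legs (step ii)\n    # t: sample times at the full acquisition rate\n    tmin, mmin = [], []\n    for s, e in legs:\n        i = s + int(np.argmin(medians[s:e]))   # step iii\n        tmin.append(t[i])\n        mmin.append(medians[i])\n    coef = np.polyfit(tmin, mmin, deg)         # step iv\n    return np.polyval(coef, t)   # drift at full time resolution\n\\end{verbatim}\n\nThe returned drift estimate is then subtracted from every TOD of the corresponding array\/unit.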
\nSPIRE uses blind bolometers (5 in total) as thermistors to evaluate the most relevant correlated noise component: the bath temperature fluctuations. A standard pipeline module uses this information to perform an effective removal of the common drift present along the scan observation. HIPE also corrects for the delay between the detector data and the telescope position along the scan, using an electrical low-pass filter response correction. Despite these initial (and very effective) corrections, experience with the data shows that a residual long-term drift is often present in the SPIRE data. We therefore apply the drift removal tool to the SPIRE data in the same way as for the PACS data: we fit a polynomial to the minima of the medians of each scan leg (calculated over the entire detector array), which we then subtract from all the TODs. \n\nFinally, when removing drifts it is important to know how the observational scans are oriented. In fact, as the Galactic plane is very bright, scans across the plane will give rise to an increase of signal on top of the general drift. On the other hand, when the scan is almost parallel to the plane of the Galaxy, the signal can be dominated by its bright emission, which also biases the evaluation of the minima of the medians. In this case, the curve to fit is estimated only on the scan legs where the median is not affected by the signal.\n\nSince the procedure is not automatic, care has to be taken when choosing which polynomial to fit and subtract from the data, in order not to eliminate genuine signal from the sky. In our experience the best choice is a first or a second degree polynomial, depending on the observed signal behavior. A higher polynomial degree can be necessary when part of the drift has to be extrapolated in order to avoid signal contamination. An example for one subarray is shown in Figure~\\ref{fig:drift_galassia}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure9.eps}\n\\caption{Blue curve: interpolation of the minima of the medians for one subarray of the PACS blue channel, when the scan direction is almost parallel to the Galactic plane. Red line: the fit that we choose to evaluate the drift, excluding the minima affected by the Galactic emission (central bump).} \n\\label{fig:drift_galassia}\n\\end{figure}\n\nFor the scanning strategy adopted by Hi-GAL, each scan leg is longer than the overlapping region of the two scan directions (see Figure \\ref{fig:coverage}). Since the Hi-GAL fields are square regions, the slowly-traversed direction of the AOR within the overlapping region has a length comparable with the scan leg. Thus, we assume that even if there is a signal gradient along the slowly-traversed direction of the AOR, it is not filtered out by the subtraction of the array medians.\n\n\\subsection{Deglitching}\\label{sec:deglitching}\n\nTo remove outliers in the time series of the bolometers, we exploit the spatial redundancy provided by the telescope movement, which ensures that each sky pixel of the final map is observed with different bolometers. Outlier detection is done with the standard sigma-clipping algorithm: given a sample of $N$ values, estimates for the mean and for the standard deviation are first derived; then all the values that differ from the mean by more than $n$ standard deviations are considered outliers and removed.
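\n\nA minimal sketch of this iterative clipping, applied to the samples falling in one sky pixel, is the following (the choice of the threshold $n$ is discussed next):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef sigma_clip(values, n):\n    # values: samples of one sky pixel seen by different bolometers\n    good = np.ones(values.size, dtype=bool)\n    while True:\n        m = values[good].mean()\n        s = values[good].std()\n        keep = np.abs(values - m) <= n*s\n        if np.array_equal(keep, good):\n            return ~keep             # True where outliers\n        good = keep\n\\end{verbatim}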
\n\nFor this algorithm the choice of $n$, the parameter that defines the threshold above which a value is considered an outlier, is usually arbitrary: a certain $n$ is chosen, very often equal to 3, without any statistical justification. Recently, \\citet{Pezzuto11} derived a formula that, starting from the properties of the error function for a Gaussian distribution and exploiting the finite number of available measurements, relates $n$ to the size of the sample:\n\n\\begin{equation}\nn=-0.569+\\sqrt{-0.072+4.99\\log(N)}\\label{nfromN}\n\\end{equation}\n\nwhere $\\log$ is the decimal logarithm. As a consequence, in the central region of the map, where the coverage (and so $N$) is high, the number of standard deviations is larger than in the outskirts of the map, where the coverage is low. For instance, if a sky pixel has been observed with 40 bolometers, the above formula gives $n=2.25$; so, once we have estimated the mean $m$ and the standard deviation $\\sigma$, all the values $x_i$ such that $|x_i-m|>2.25\\sigma$ are flagged as outliers. If a pixel has been observed with 20 bolometers, the threshold lowers to 1.96$\\sigma$.\n\nThis procedure is automatically iterated until outliers are no longer found. In practice, the procedure converges within one iteration in $\\sim$98\\% of the cases in which we have applied the analysis.\n\nOutlier detection is done in this way for both instruments; for SPIRE, as explained before, we also make use of the standard wavelet-based deglitching algorithm implemented in the official pipeline. Since some weak glitches were still left in the SPIRE TODs, we decided to run our deglitching algorithm on the SPIRE data as well.\n\nThe fraction of samples flagged as glitches is on average about 15\\%, a value which is likely larger than the real percentage. For PACS, we are now working on a different way to associate each bolometer with the sky pixels, taking into account the finite size of the sky pixels. For the first test cases we ran, the percentage of detected glitches drops to around 5-6\\%.\n\n\\section{The ROMAGAL Map Making algorithm}\\label{sec:mapmaking}\n\nThe ROMAGAL algorithm is based on a Generalized Least Squares (GLS) approach \\citep{Lupton93}. Since the TOD is a linear combination of signal and noise, we can model our dataset $\\textbf{d}_{k}$ for each detector $k$ as \\citep{Wright96}: \n\n\\begin{equation}\\label{eq:tod}\n\\textbf{d}_{k}=P\\textbf{m}+\\textbf{n}_{k}\n\\end{equation}\n\n\\noindent where $P$ is the pointing matrix, which associates to every sample of the timeline a direction in the sky, $\\textbf{m}$ is our map estimator of the ``true\" sky and $\\textbf{n}_{k}$ is the noise vector. \n\nThe observed sky, $P\\textbf{m}$, is the ``true\" sky convolved with the instrumental transfer function and the optical beam. In the case of a circularly symmetric beam profile, $\\mathbf{m}$ is a beam-smeared, pixelised image of the sky.\n\nIn this case the pointing matrix has only one non-zero entry per row, corresponding to the sky pixel observed at a given time. Since the beam profiles for PACS \\citep{Poglitsch08} and SPIRE \\citep{Griffin09} are only weakly asymmetric, we can place ourselves in this simple case. Note that the transpose of the $P$ operator performs a data binning (without averaging) into the sky pixels. \n\nEquation~\\ref{eq:tod} holds only if the noise vector of each detector $\\textbf{n}_{k}$ is composed of statistical random noise, with Gaussian distribution and null average.
All the relevant systematic effects (offsets, glitches) have then to be removed with an accurate data preprocessing before map production, as explained in Section \\ref{sec:preprocessing}. \n\nThe formalism can be easily extended to the multidetector case: the vector $\\textbf{d}$ then contains the data of all the detectors, and care has to be taken to update the noise vector $\\textbf{n}$ accordingly, with the correct noise properties for each detector.\n\nThe GLS algorithm produces minimum noise variance sky maps. The noise properties of each detector have to be estimated beforehand and provided as input to the algorithm, as described in Section \\ref{sec:noise_filters}. \n\nThe GLS estimate for the sky, $\\tilde\\textbf{m}$, is \\citep{Natoli01}\n\\begin{equation}\\label{eq:GLS}\n\\tilde\\textbf{m}=(P^{T}\\textbf{N}^{-1}P)^{-1}P^{T}\\textbf{N}^{-1}\\textbf{d}\n\\end{equation}\n\\noindent where $\\textbf{N}=\\langle \\textbf{nn}^{T}\\rangle$ is the noise covariance matrix, which takes into account the noise time correlation between different samples. Such correlation is particularly high at low frequencies because of the $1\/f$ (or long memory) noise. In the case of uncorrelated (white) noise, the $\\textbf{N}$ matrix becomes diagonal and the problem is greatly simplified. If we further assume stationary uncorrelated noise, Equation \\ref{eq:GLS} reduces to: \n\\begin{equation}\\label{eq:naive}\n\\tilde\\textbf{m}=(P^{T}P)^{-1}P^{T}\\textbf{d} .\n\\end{equation}\n$P^{T}P$ counts the number of observations of each pixel of the map, so we are simply averaging the different TOD values falling into each pixel, assigning the same weight to every sample. We will refer to this map estimate as ``naive\" or ``binned\" in the following. \n\nWhen non-negligible noise correlation is present, as in the case of PACS \\citep{Poglitsch08} and SPIRE \\citep{Schulz08}, Equation~\\ref{eq:GLS} must be solved. This is a challenging computational task since it requires, in principle, the inversion of the large (of the order of the number of pixels in the map) matrix $P^{T}\\textbf{N}^{-1}P$, which is the covariance matrix of the GLS estimator \\citep{Lupton93}. One key simplifying assumption is that the noise is stationary. In this case, the $\\textbf{N}$ matrix has a Toeplitz form which can be approximately treated as circulant, ignoring boundary effects \\citep{Natoli01}. A circulant matrix is diagonal in Fourier space and its inverse is also circulant, so the product between $\\textbf{N}^{-1}$ and a vector is a convolution between the same vector and a filter given by any of the rows of the matrix. In the following we will refer to any of these rows as a noise filter. Its Fourier transform is the inverse of the noise frequency power spectrum. \n\nUnder the conditions listed above, the GLS map making algorithm starts by rewriting Equation \\ref{eq:GLS} in the form \n\n\\begin{equation}\\label{eq:gls_rewrite}\n(P^{T}\\textbf{N}^{-1}P)\\textbf{m}_{0}-P^{T}\\textbf{N}^{-1}\\textbf{d}=\\textbf{r}\n\\end{equation}\n\nwhere $\\textbf{m}_{0}$ is the starting map used at the first iteration, generally the naive map. The first term is evaluated in three steps: the product $P\\textbf{m}_{0}$ projects the map onto a timeline; the application of $\\textbf{N}^{-1}$ is a convolution, which can be performed in Fourier space; the application of $P^{T}$ projects the convolved timeline back into a map.
\n\nThe second term is computed in a similar way: the convolution with the filter (applying $\\textbf{N}^{-1}$ to the data vector $\\textbf{d}$ in Fourier space) is followed by the projection of the convolved timeline into a map (applying $P^T$ to the product $\\textbf{N}^{-1}\\textbf{d}$). \n\nThen the residual $\\textbf{r}$ is evaluated. If it is higher than a fixed threshold, it is used to produce a new map, $\\textbf{m}_{1}$, as described in \\citet{CG}, which replaces $\\textbf{m}_{0}$ in the evaluation of Equation \\ref{eq:gls_rewrite}. This is a Conjugate Gradient algorithm, an iterative method for the numerical solution of linear systems \\citep{CG}, which is run until convergence is reached, i.e. until the residual falls below the threshold.\n\nThe algorithm outlined above is the same as the one described in \\citep[``unroll, convolve and bin\":][]{Natoli01} and is implemented in the ROMAGAL code. The next section explains the strategy employed to estimate the noise filters used by ROMAGAL directly from the in-flight data. \n\n\\subsection{Noise estimation}\\label{sec:noise_filters}\n\nIn order to estimate the noise filters for ROMAGAL, we need to investigate the statistical properties of the noise in the timelines. The data are mostly affected by two kinds of statistical noise: $1\/f$ noise, due both to the electronics and to the thermal background radiation from the telescope or the instruments, and photon noise (see \\citealt{Poglitsch08, Schulz08}). \n\nThe detector $1\/f$ noise arises in the electronic chain and particularly impacts regions with low signal-to-noise ratio (SNR), where only diffuse emission is present. In those regions it can be of the same order of magnitude as the signal, or even higher. In these cases the GLS treatment is particularly effective. \n\nPhoton noise is due to statistical fluctuations in the photon counts. This process follows Poissonian statistics, so the SNR is proportional to the square root of the counts. Since the Poisson distribution tends to a Gaussian for large numbers, we can approximate the photon noise as Gaussian on the map if the number of counts is large enough.\n \nSince the bolometers are organized in matrices and sub-matrices, the signal of a bolometer can be correlated with the signal of another, generally adjacent, bolometer. These effects can be both statistical and deterministic. We already described how to remove the deterministic common mode from the TODs (like the thermal bath variations, see Section \\ref{sec:preprocessing}).\n\nOne possible source of statistical cross-correlated noise is the crosstalk between bolometers: the signal of one pixel may contaminate the signal of its neighbors through capacitive or inductive coupling, generating a common mode called ``electrical crosstalk''. ``Optical crosstalk'', instead, is due to diffraction or aberrations in the optical system, which can cause the signal of an astronomical source to fall on the wrong detectors \\citep{Griffin09_pipeline}. \n\nWe then analyzed the residual contribution of the statistical component of the correlated noise. We found that the residual correlated noise level in each pixel is negligible with respect to the intrinsic detector noise level for both the PACS and SPIRE instruments, as described in the following. \n\nIn principle, the noise properties vary significantly across the array, and we had to estimate the noise power spectrum for each bolometer. To do that, we have processed ``blank\" sky mode (i.e.
filled with a negligible contribution from the sky signal) data acquired during the PV Phase. \n \nIn Figure \\ref{fig:PACS_blue_noise_spectrum} we show a typical noise spectrum estimated for a pixel of the $160\\mu$m PACS band (black) and the cross-spectrum between two adjacent bolometers (red). The cross-spectrum evaluates, in the frequency domain, the impact of the cross-correlated noise between two different bolometers. The level of the cross-correlated noise is at least 4 orders of magnitude below the auto-correlated noise power spectrum of each pixel. This means that we do not see any relevant cross-correlated noise, even though crosstalk can be present in the timelines. \n\nIn Figure \\ref{fig:SPIRE_psw_noise_spectrum} we show the noise power spectra of the 250$\\mu$m SPIRE band bolometers. Also in this case the cross-spectrum is negligible. \n \nThe noise spectra of both PACS and SPIRE display a low-frequency noise excess ($1\/f$). In the case of the SPIRE spectra (Figure \\ref{fig:SPIRE_psw_noise_spectrum}) a high-frequency rise is also evident, which is due to the deconvolution of the bolometric response function. The PACS spectra do not show this behavior because the bolometer transfer function is not deconvolved by the standard pipeline. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure10.ps}\n\\caption{Black line: typical noise spectrum of a PACS $160\\mu$m detector, estimated on blank sky data. Red line: cross-spectrum between two detectors of the same subarray. The level of the cross-correlated noise is significantly below the noise level of each single bolometer, so we can reasonably neglect it.} \n\\label{fig:PACS_blue_noise_spectrum}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure11.ps}\n\\caption{Same as Figure \\ref{fig:PACS_blue_noise_spectrum} for a SPIRE $250\\mu$m bolometer. For SPIRE, too, the cross-spectrum is reasonably negligible with respect to the auto-spectrum level.} \n\\label{fig:SPIRE_psw_noise_spectrum}\n\\end{figure}\n\n\\subsection{From ROMA to ROMAGAL}\\label{sec:roma_romagal}\n \nThe ROMAGAL GLS code has been optimized to recover the features of the Hi-GAL\\ fields with high accuracy. \n \nHi-GAL observes the Galactic plane, where the dynamic range of the sources spans several orders of magnitude. This poses strong constraints on the map making algorithm: both the weak diffuse emission and the bright compact sources in, e.g., star forming regions have to be recovered with high accuracy. The signal often exhibits steep gradients that are hard to follow for the GLS solver, which relies on the assumption that the sky signal does not vary significantly within a resolution element (see Section \\ref{sec:signal_striping} below). At the same time, several systematics affect the dataset. As explained above, many of them are cured at the preprocessing level. However, their removal generates a conspicuous amount of transient flagging, which must be correctly handled by the GLS code. \n \nThe core of ROMAGAL is the same as that of the ROMA code \\citep{deGasperis05}, whose input-output routines have been deeply modified to adapt to the HIPE-generated dataset. The ROMAGAL inputs are the TODs generated by HIPE, the pointing and the transient flags. These have the same format for both PACS and SPIRE. The ROMAGAL outputs are FITS files containing the optimal map, arranged in a standard gnomonic projection.
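\n\nIn compact form, the operator $P^{T}\\textbf{N}^{-1}P$ applied at each iteration of the solver can be sketched as follows (a schematic Python transcription with hypothetical names: \\texttt{m} is the flattened map, \\texttt{pointing} the sample-to-pixel index array and \\texttt{ifilt} the noise filter, i.e. the inverse noise power spectrum sampled at the FFT frequencies):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef apply_A(m, pointing, ifilt):\n    # 'unroll, convolve and bin': P m, then N^-1 (a convolution\n    # performed in Fourier space), then P^T (binning into pixels)\n    tod = m[pointing]\n    tod = np.fft.irfft(np.fft.rfft(tod)*ifilt, len(tod))\n    out = np.zeros_like(m)\n    np.add.at(out, pointing, tod)\n    return out\n\n# The right-hand side P^T N^-1 d is built once in the same way;\n# the GLS map then solves apply_A(m) = rhs by Conjugate Gradient.\n\\end{verbatim}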
\nThe code is written in FORTRAN 95 and relies on the MPI library for parallel calls. It runs on the Hi-GAL\\ dedicated machine, called BLADE, a cluster of 104 processors at 2.5 GHz each, with 208 GB of RAM in total, whose nodes are interconnected via InfiniBand. The machine is located at IFSI-Rome. \n\nAs explained in the previous Section, the direct computation of Equation~\\ref{eq:GLS} is unfeasible due to the size of the map covariance matrix. However, assuming the noise of each Hi-GAL\\ field to be stationary, we set up an FFT-based solver built upon a conjugate gradient iterative algorithm (see Section \\ref{sec:mapmaking}). Such a scheme can estimate the final maps with a precision of order $\\epsilon=10^{-8}$ in $\\sim$150 iterations for Hi-GAL. The ROMAGAL computational time scales linearly with the size of the dataset and only weakly with the number of pixels in the output maps. The scaling with the number of processors is close to optimal in the range of cores required for the Hi-GAL\\ analysis ($< 50$). For the largest channel (the PACS blue band), a final GLS map of a $2^\\circ \\times 2^\\circ$ field requires about 16 GB of RAM and 1400 sec on 8 cores. Due to the high number of array pixels (2048), this channel provides the largest dataset to analyze, as well as the most demanding in terms of computational resources. Further information on resource consumption can be found in Table \\ref{tab:computing_resources}. \n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline \\textbf{Band} & \\textbf{Total Time (sec)} & \\textbf{RAM (GB)}\\\\ \n\\hline $70\\mu$m & $\\sim$ 1400 & 16\\\\ \n\\hline $160\\mu$m & $\\sim$ 1000 & 8\\\\ \n\\hline $250\\mu$m & $\\sim$ 180 & 4\\\\ \n\\hline $350\\mu$m & $\\sim$ 130 & 1\\\\ \n\\hline $500\\mu$m & $\\sim$ 100 & 1\\\\ \n\\hline \n\\end{tabular} \n\\end{center}\n\\caption{Time and minimum amount of RAM required by ROMAGAL for each PACS and SPIRE band, using 8 BLADE processors.} \n\\label{tab:computing_resources}\n\\end{table}\n\n\\subsection{Optimal treatment of transient flagging}\n\nAs mentioned above, the timelines are inspected for bad data samples that must be excluded from map making as part of the preprocessing pipeline. Bad data can arise for a variety of reasons. They are generally caused by transient events, either unexpected (e.g., glitches, anomalous hardware performance) or expected (e.g., detectors saturating because of a bright source, observation of a calibrator). Once identified, a flag record is generated and stored for these anomalous events, so that their contribution can be safely excluded from map making. Flags in the TOD pose a potential problem for ROMAGAL, because its solver is based on the FFT, as discussed in the previous section. The FFT requires the timeline to be continuous and uniformly sampled. Since the noise in the PACS and SPIRE data is correlated, simply excising the flagged samples to fictitiously create a continuous timeline would interfere with the noise deconvolution, and is thus not a safe option. Instead, we advocate using a suitable gap-filling technique. The rest of this section is mostly devoted to defining which, among the various options for gap filling, is best suited for the Hi-GAL\\ maps. \n\nIt is important to realize that the output map will depend on the content of the flagged sections of the TOD even if these values are not associated with any (real) map pixel.
This is due to the convolution performed within the solver, which ``drags'' data out of the flagged sections of the timeline, even if, when the data are summed back into a map, the $P$ operator is applied only to the unflagged samples. Since one is not in control of the content of the flagged sections of the timeline, some kind of gap-filling must be performed in any case. We have tested different recipes for gap-filling, making extensive use of simulations. We have treated the signal and noise components of the timelines separately, running noise-dominated and signal-dominated cases, because the two components behave differently with respect to the flags, as will be shown below. \n\nThe simplest form of gap filling is to remove the content of the flagged sections altogether, replacing them with a nominal constant (usually null) value. This works very well on a signal-only simulation of the Hi-GAL field. However, it fails dramatically when noise is present, as evident from Figure \\ref{fig:zeroflags_noiseonly_noCNR} (left panel), where a markedly striped pattern in the reconstructed map is seen (in this simulation, the Hi-GAL\\ noise has been amplified by a factor 100 to make its pattern more evident). The reason for this failure is readily understood: the GLS map making employed requires noise stationarity (see Section \\ref{sec:roma_romagal} above), which is obviously not preserved by zeroing the gaps. A less obvious effect is that even if the gaps are filled with a noise realization with the correct statistical properties, but unconstrained, the GLS map making is bound to fail as well, as shown in the middle panel of Figure \\ref{fig:zeroflags_noiseonly_noCNR}. A noise realization is said to be constrained when it respects certain boundary conditions \\citep{1991ApJ...380L...5H}, which in our case are represented by the noise behavior outside the gap. Unconstrained noise inside the gap, despite having the correct statistical properties, creates a border discontinuity that causes the GLS map maker to behave sub-optimally \\citep{2002PhRvD..65b2003S}. We have employed a code to create constrained Gaussian noise realizations (NCR), based on the algorithm set forth by \\citet{1991ApJ...380L...5H}. The code takes as input the noise pattern and statistical properties, as measured from the timelines. The results on noisy simulations are excellent, as shown by the third (rightmost) panel in Figure \\ref{fig:zeroflags_noiseonly_noCNR}. Note, however, that Figure \\ref{fig:zeroflags_noiseonly_noCNR} refers to a noise-dominated simulation. We now turn to discuss the effect of a non-negligible signal contribution (a far more realistic case for Hi-GAL). \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=10cm] {figure12.eps}\n\\caption{Results obtained in a noise-dominated regime (the normal Hi-GAL\\ noise is amplified by a factor 100). The left panel shows the map obtained by replacing the flagged data samples with null values (clearly, it does not work), the middle panel the map obtained by replacing them with an unconstrained noise realization (it does not work either), and the right panel the map obtained using our NCR code (it does work). \n\\label{fig:zeroflags_noiseonly_noCNR}}\n\\end{figure*}\n\nWe have verified that the presence of a non-negligible signal in the timelines does not affect the results, provided that the NCR is performed using the underlying noise field as a baseline.
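\n\nFor reference, a minimal dense-matrix sketch of such a constrained realization, following the conditional-mean construction of \\citet{1991ApJ...380L...5H}, is given below (illustrative only: a production code works in Fourier space on long timelines, and the covariance $C$ is built from the measured noise power spectrum):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef constrained_fill(d, flagged, C):\n    # d: timeline providing the constraints outside the gaps\n    # flagged: boolean gap mask; C: stationary noise covariance\n    o, g = ~flagged, flagged\n    A = C[np.ix_(g, o)] @ np.linalg.inv(C[np.ix_(o, o)])\n    r = np.random.multivariate_normal(np.zeros(d.size), C)\n    out = d.copy()\n    out[g] = A @ d[o] + (r[g] - A @ r[o])  # mean + fluctuation\n    return out\n\\end{verbatim}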
Measuring the underlying noise field in the presence of signal is however impractical. It would be significantly simpler if the NCR could be run directly on the timelines themselves, thus constraining the (fake) noise realization within the gap to the true (noise plus signal) values outside it. Unfortunately, this poses a problem for Hi-GAL\\ data: large signal jumps are present in the field, and the resulting gap-filling realizations are affected in a non-negligible manner by the boundary conditions, at least with the present version of ROMAGAL. This behavior is different from what happens for experiments aimed at the Cosmic Microwave Background (see e.g. \\citet{Masi06}), where NCR codes are routinely run on the timelines as they are. In order to find a workaround that would spare us the inconvenience of estimating the underlying noise field to serve as an NCR input, we have modified the flag treatment of the ROMAGAL code as explained in the following. \n\nThe original version of ROMAGAL makes use of a single extra pixel (dubbed ``virtual pixel'') serving as a junk bin where the contents of the gaps are sent when applying the $P^T$ operator within the ROMAGAL solver. This approach, as stated above, works excellently in the presence of both signal and noise, irrespective of their relative amplitude, provided the NCR code assumes the underlying noise field as a baseline to perform the realization. In order to relax this assumption, we have modified ROMAGAL to take into account not a single virtual pixel but an entire virtual map. In other words, we introduce a virtual companion for each pixel of the map, and use it as a junk bin to collect the output from the corresponding gaps. The aim is to redistribute the content of the flagged sections more evenly, preventing artifacts. This approach achieves satisfactory results when the NCR code is run on the original (signal plus noise) timelines, as shown in Figure \\ref{fig:reldiff_realcnr_nvirtcnr_zeroflags_vs_input}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm] {figure13a.eps}\n\\includegraphics[width=8cm] {figure13b.eps}\n\\caption{Top row: for a signal-dominated case, shown are the relative differences between the input simulated map and the output obtained with ROMAGAL (in the ``virtual map'' mode) with the NCR performed on the underlying noise (left), on the original signal plus noise TOD (middle), and without NCR after replacing the gaps with null values (right). The latter case is clearly striped, but no signal-related artifacts are present. Bottom row: for a noise-dominated case, shown again are the relative differences with respect to the input map obtained by ROMAGAL with ``virtual map'', assuming NCR on the underlying noise only (left), without NCR (middle) and with NCR on the signal plus noise TOD (right). As expected, the left panel shows the best residuals, but the right one appears as a good compromise (see also text). \n\\label{fig:reldiff_realcnr_nvirtcnr_zeroflags_vs_input}}\n\\end{figure}\nTo summarize our findings:\n\\begin{itemize}\n\n\\item Using an NCR code is {\\it always} necessary in order to avoid artificial striping effects in the map.\n\n\\item If the NCR code is run on the underlying noise-only timelines (which are cumbersome to estimate), we obtain the best quality output map, with no signal-related artifacts and a noise standard deviation which is lower (by a factor $\\sim 2$) with respect to the case in which no NCR is performed.
\n\\item Running the NCR on the original timelines is possible with the ``virtual map'' approach. No signal artifacts are detected in the difference maps, and the advantage in terms of noise standard deviation with respect to the no-NCR case is still present, of order 10\\% to 20\\% on average. \n\\end{itemize}\nWe have therefore chosen this latter approach as the baseline for the Hi-GAL\\ pipeline. \n\n\\section{ROMAGAL maps}\\label{sec:SDP_results} \n\nIn this section we analyze the final maps obtained by running our dedicated pipeline. We analyze the Point Spread Function (PSF) in the five bands, in order to fix the resolution of the ROMAGAL maps. We compare the final GLS maps with the naive maps, pointing out the differences and the capability of the GLS maps to recover the diffuse emission. Finally, we discuss the noise residuals in the maps. \n\n\\subsection{Point Spread Function and pixel size}\\label{sec:PSF}\n\nThe pixel size of the final map is a free parameter. Its choice is a compromise between competing requirements. A small pixel size ensures a correct sampling of the PSF: indeed, assuming a Gaussian profile for the PSF (which is reasonable, as discussed in the following), Nyquist sampling of a 2-d image requires a pixel size of at most one third of the FWHM. On the other hand, a pixel size that is too small causes a loss of redundancy, which is useful to reduce the white noise level, and can (even) leave some non-observed pixels in the final map. \n\nThe diffraction-limited beam of PACS at $70\\mu$m is $5.2\\arcsec$. Thus, we should build the map with a pixel size of at most $1.8\\arcsec$. However, due to the limited bandwidth available for transmission, especially in the PACS\/SPIRE Parallel mode, PACS frames are coadded on board the satellite before being transmitted. For the $70\\mu$m channel, a coaddition of 8 consecutive frames is applied by the on-board software. Since the acquisition rate is 40 Hz and the scanning speed for Hi-GAL\\ is set to $60\\arcsec\/s$, two consecutive frames are snapshots of the sky acquired $1.5\\arcsec$ apart. Due to the coaddition, the satellite provides one averaged frame every $12\\arcsec$; in spatial coordinates, this is about twice the beam width of the PACS blue channel. The measured PSF is then not the diffraction-limited beam: it is elongated along the scan direction, owing to the convolution of the beam with the coaddition window. As shown in \\citet{Lutz09}, the observations of Vesta and $\\alpha$ Tau with the blue channel evidenced a FWHM equal to $5.86\\arcsec \\times12.16\\arcsec$, as a result of a 2-d Gaussian fit, elongated in the scan direction. \n\nPACS $160\\mu$m is also affected by the on-board averaging, but only 4 frames are coadded together. The nominal instrumental beam is $12.0\\arcsec$, while the measured one is $11.64\\arcsec\\times15.65\\arcsec$ \\citep{Lutz09}, elongated along the scan direction. However, in this case we can sample the beam without issues, and the effect of the coaddition on the final map is negligible.\n \nFor the Hi-GAL\\ fields the scanning strategy consists of two orthogonal AORs; the redundancy therefore regularizes the PSF, resulting in an approximately symmetric 2-d Gaussian profile, as shown in Table \\ref{tab:time_beam_pixel}.
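\n\nAs a consistency check, the elongation induced by the on-board coaddition can be reproduced with a short numerical estimate (a crude sketch that approximates the coaddition as a $12\\arcsec$ boxcar along the scan and the beam as a Gaussian):\n\n\\begin{verbatim}\nimport numpy as np\n\nx = np.linspace(-30.0, 30.0, 6001)  # arcsec, along the scan\nsig = 5.2 \/ (2.0*np.sqrt(2.0*np.log(2.0)))  # 5.2'' FWHM beam\nbeam = np.exp(-0.5*(x\/sig)**2)\nbox = (np.abs(x) <= 6.0).astype(float)  # 12'' coaddition window\npsf = np.convolve(beam, box, mode='same')\nhm = x[psf >= 0.5*psf.max()]\nprint('FWHM along scan: %.1f arcsec' % (hm[-1] - hm[0]))\n\\end{verbatim}\n\nwhich gives $\\sim12\\arcsec$ along the scan direction, consistent with the measured $12.16\\arcsec$ quoted above.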
\n\nAccording to the values reported in Table \\ref{tab:time_beam_pixel}, we observe quasi-symmetric beams, with an average ellipticity of less than $15\\%$ for both the blue and the red channels and with the axes oriented randomly with respect to the scan direction.\n\nWe choose a pixel size of $3.2\\arcsec$ for the PACS $70\\mu$m band, which samples the observed beam at the Nyquist frequency. Below this threshold, the diffuse emission areas become too noisy due to the low SNR. Similarly, we can choose a pixel size of $4.5\\arcsec$ for the red band without losing redundancy. \n\nSPIRE does not suffer from on-board coaddition, and the detectors were built to reach the diffraction limit of each band. In-flight data show that the SPIRE beam is well approximated by an asymmetric 2-d Gaussian curve with the major axis orientation independent of the scan direction, and with an ellipticity not larger than $10\\%$ \\citep{Sibthorpe10}. We set the pixel size for each SPIRE band equal to one third of the nominal beam. In Table \\ref{tab:time_beam_pixel} we also report the beam measured in the SPIRE maps. \n\nThe average ellipticity we observe agrees with that found by \\citet{Sibthorpe10}. On the other hand, while the FWHM values for the two axes found by \\citet{Sibthorpe10} are in agreement with the nominal ones (within the errors), our measured beams have a FWHM $\\sim25\\%$ larger than the nominal one. \n\n\\begin{table*}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline \\textbf{Band} & \\textbf{Nominal Beam (arcsec)} & \\textbf{Measured Beam (arcsec)} & \\textbf{Ellipticity} & \\textbf{Pixel Size (arcsec)}\\\\ \n\\hline $70\\mu$m & $5.2\\times 5.2$ & $\\sim 9.7\\times \\sim 10.7$ & 14.6\\% & 3.2 \\\\ \n\\hline $160\\mu$m & $12.0\\times 12.0$ & $\\sim 13.2\\times \\sim 13.9$ & 14.7\\%& 4.5\\\\ \n\\hline $250\\mu$m & $18\\times 18$ & $\\sim 22.8\\times \\sim 23.9$ & 8.3\\%& 6.0\\\\ \n\\hline $350\\mu$m & $24\\times 24$ & $\\sim 29.3\\times 31.3$ & 8.8\\%& 8.0\\\\ \n\\hline $500\\mu$m & $34.5\\times 34.5$ & $\\sim 41.1\\times \\sim 43.8$ & 9.7\\%& 11.5\\\\ \n\\hline \n\\end{tabular} \n\\end{center}\n\\caption{Nominal beam (2nd column), beam measured in the maps (two AORs, 3rd column), ellipticity (4th column) and adopted pixel size (5th column) for each band.} \n\\label{tab:time_beam_pixel}\n\\end{table*}\n\n\\subsection{Hi-GAL SDP results}\n\nThe quality of the outcome can be assessed by comparing the iterative GLS (igls) maps with the naive maps. In fact, the naive map is the simple average of the signal recorded in every spatial pixel, and it represents ``the least compromised view of the sky\". \n\nSince the TODs are created at the end of the preprocessing steps, when the data are a combination of signal and $1\/f$ noise only, we expect $1\/f$ residuals in the naive map, and a ``pure\" sky map produced by ROMAGAL. \n\nIn Figure \\ref{fig:PACS_l30_igls_vs_bin} a comparison between the naive map and the ROMAGAL map of the $\\textit{l}=30^{\\circ}$ field at $160\\mu$m is shown. The GLS code is capable of removing the $1\/f$ residuals without losing any signal, both on bright sources and on diffuse emission. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{figure17.eps}\n\\caption{Left: detail of the ROMAGAL map of the red PACS array, $\\textit{l}=30^{\\circ}$ field. Right: detail of the naive map, same band and field.
The $1\/f$ noise is evident in the naive map, and its minimization in the GLS map is achieved without losing the signal of the diffuse component.} \n\\label{fig:PACS_l30_igls_vs_bin}\n\\end{figure}\n\nIn particular, we adopt three main proxies:\n\n\\begin{itemize}\n\\item the difference between the naive and igls maps should show only a pattern due to the $1\/f$ noise residuals in the binned map. The pattern of this low-frequency noise is recognizable as stripes superimposed on the sky signal in the naive map; the stripes are a consequence of the $1\/f$ noise combined with the adopted scanning strategy.\\\\\nIn Figure \\ref{fig:diff_PACS_l30_red_igls_bin} we show the difference between the igls map and the naive map. The $1\/f$ noise is removed in the igls map but not in the naive one, and the residual stripes due to the low-frequency noise are clearly visible. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=5.25cm]{figure18.eps}\n\\caption{Detail of the difference between the PACS $160\\mu$m igls and naive maps, for the same region as in the previous Figure. The stripes due to the $1\/f$ noise removed in the igls map are evident.} \n\\label{fig:diff_PACS_l30_red_igls_bin}\n\\end{figure}\n\n\\item the source fluxes should be the same in the igls and naive maps.\\\\\nThis item is quantified by the map difference, where the pattern is due only to the noise, without any residual signal, except across the very bright sources. This last effect is discussed in the next Section.\n\n\\item a statistical analysis of the background noise level should show a decrease of the rms value in the igls map with respect to the naive map.\\\\\nIn Tables \\ref{tab:rms_PACS} and \\ref{tab:rms_SPIRE} we report the rms residuals of the PACS and SPIRE maps, respectively, calculated in a diffuse emission area of each map. Since the flux is conserved between the maps, a decrease of the rms noise level implies an increase of the S\/N ratio in the ROMAGAL maps.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n\n\\hline\n\\multicolumn{4}{|c|}{\\textbf{PACS $\\textit{l}=30^{\\circ}$ field}}\\\\\n\\hline \\textbf{Band}& \\textbf{rms igls (MJy\/pixel)}& \\textbf{rms naive (MJy\/pixel)}& \\textbf{ratio}\\\\ \n\\hline $70\\mu$m & 0.0085& 0.026& $\\sim$ 3.1 \\\\ \n\\hline $160\\mu$m & 0.047& 0.102& $\\sim$ 2.2\\\\ \n\\hline \n\\hline\n\\multicolumn{4}{|c|}{\\textbf{PACS $\\textit{l}=59^{\\circ}$ field}}\\\\\n\\hline \\textbf{Band}& \\textbf{rms igls (MJy\/pixel)}& \\textbf{rms naive (MJy\/pixel)}& \\textbf{ratio}\\\\ \n\\hline $70\\mu$m & 0.004545& 0.02208& $\\sim$ 4.9 \\\\ \n\\hline $160\\mu$m & 0.01899& 0.03586& $\\sim$ 1.9\\\\ \n\\hline \n\\end{tabular} \n\\end{center}\n\\caption{rms of the GLS and naive maps for both SDP observations, for the PACS bands, measured on a background region of $50\\times50$ pixels. The last column reports the ratio between the naive and GLS rms.} \n\\label{tab:rms_PACS}\n\\end{table}\n\nThe ratio between the naive and igls rms shows an improvement of a factor $\\sim 2-5$ in the PACS ROMAGAL maps, and of a factor $\\sim 1-2$ in the SPIRE case.
The difference is mostly due to an intrinsically different $1\/f$ noise level.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{\\textbf{SPIRE $\\textit{l}=30^{\\circ}$ field}}\\\\\n\\hline \\textbf{Band}& \\textbf{rms igls (MJy\/beam)}& \\textbf{rms naive (MJy\/beam)}& \\textbf{ratio}\\\\ \n\\hline $250\\mu$m & 0.1749& 0.2868& $\\sim$ 1.6 \\\\ \n\\hline $350\\mu$m & 0.1569& 0.2302& $\\sim$ 1.5\\\\ \n\\hline $500\\mu$m & 0.2659& 0.4065& $\\sim$ 1.5\\\\ \n\\hline \n\\hline\n\\multicolumn{4}{|c|}{\\textbf{SPIRE $\\textit{l}=59^{\\circ}$ field}}\\\\ \n\\hline \\textbf{Band}& \\textbf{rms igls (MJy\/beam)}& \\textbf{rms naive (MJy\/beam)}& \\textbf{ratio}\\\\ \n\\hline $250\\mu$m & 0.09857& 0.1123& $\\sim$ 1.1 \\\\ \n\\hline $350\\mu$m & 0.0734& 0.08164& $\\sim$ 1.1\\\\ \n\\hline $500\\mu$m & 0.1073& 0.2101& $\\sim$ 1.9\\\\ \n\\hline \n\\end{tabular} \n\\end{center}\n\\caption{rms of the GLS and naive maps for both SDP observations, for the SPIRE bands, measured on a background region of $50\\times50$ pixels. The last column reports the ratio between the naive and GLS rms. The $1\/f$ noise is less pronounced in the SPIRE bolometers than in the PACS ones, but its effect is still noticeable.}\n\\label{tab:rms_SPIRE}\n\\end{table}\n\n\\end{itemize}\n\n\\subsection{Consistency of data analysis assumptions}\\label{sec:signal_striping}\n\nOne of the assumptions of ROMAGAL, as well as of all Fourier-based GLS map making codes, is that the underlying sky signal is constant within a chosen resolution element. If this is not the case, artifacts (stripes) will be generated in the final map, contributing to the so-called pixelization noise \\citep{Poutanen06}. In the case of Hi-GAL the situation is complicated by several effects: \n\n\\begin{itemize}\n\n\\item On-board coaddition of samples: each PACS 70$\\mu$m (160$\\mu$m) frame is the result of an on-board average of eight (four) consecutive frames, reducing the effective sampling frequency of the instrument (see Section \\ref{sec:PSF}). Thus, the sky signal is low-pass filtered by an only partially effective data-windowing, rather than by the telescope response, leaving room for signal aliasing. The map making code is quite sensitive to aliasing, since it works in Fourier space. The situation is worsened by the large dynamic range of the Hi-GAL fields, especially when scanning across bright sources.\n\n\\item The bolometer time response induces signal distortions along the scan. While within HIPE the SPIRE detector response is deconvolved from the data \\citep{Griffin09_pipeline}, the same is not true for PACS. Redundancy in each pixel is obtained from scans coming from different directions; thus, this effect contributes further signal mismatch at the pixel level.\n\n\\item Pointing errors: as analyzed in detail in \\citet{Herschel}, the pointing performance of Herschel, i.e. the capability of assigning coordinates to each sample in a given reference frame, can be affected by several pointing error effects; the main contribution is the angular separation between the desired direction and the actual instantaneous direction, caused by the position-dependent bias within the star-trackers. \n\nThe Herschel AOCS goal is that the mismatch between the real and the assigned coordinates along the scan-leg be smaller than $1.2\\arcsec$ at 1$\\sigma$ \\citep{Herschel}.
So that a 2$\\sigma$ event becomes significant compared to the PSF of the PACS blue band.\n\n\\end{itemize} \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=6cm]{figure16.eps}\n\\caption{Zoom around a compact sources for the PACS blue optimal\n map. The striping dragged along the scan directions are\n remarkable.} \n\\label{fig:crosses_blue}\n\\end{figure}\n\nAll of the above effects challenge the basic assumptions (no signal aliasing and no sub-pixel effect) under which ROMAGAL works. Our simulations suggest that signal aliasing contribute significantly more than the other two. The net result on maps is the striping of bright sources in the GLS maps. An example is shown in Figure \\ref{fig:crosses_blue} for the PACS blue band.\n\nIt is important to notice that the stripes are present only around the point-like sources (where - of course - the signal aliasing is more evident), regardless of their magnitude. However, magnitude influences the amplitude of the stripes. For a source peak which is only 10 times higher that the rms background, the intensity of the stripes are within $1\\sigma$ from the background dispersion. For the more intense sources, the stripes magnitude in the pixels surrounding the PSF shape can be $100\\sigma$ times away from the background value.\n\nSince these stripes are produced within the GLS solver, which performs repetitive convolutions along the scan directions, but do not affect the naive map the obvious workaround is to implement dedicated map making which considers a different analysis around the bright sources.\n \n However, the detailed accounting of the above effects and the enhanced map making strategies to address them will be the subject of a forthcoming paper \\cite{Piazzo11}.\n\n\\section{Summary and conclusions} \\label{sec:Conclusions}\n\nThis paper describes in detail all steps of the processing of Herschel data, from originally downloaded frames to high-quality maps, used in the framework of Hi-GAL survey. Hi-GAL data are taken in fast scan mode ($60\\arcsec$\/sec ) and simultaneously by PACS and SPIRE (Parallel mode). We test our pipeline reducing data from the Science Demonstration Phase and present results, taking as proxy for the quality of the final images their comparison with naive maps. \n\nWe divided data processing into two distinct phases: preprocessing and mapmaking. Pre-processing aims to accurately remove systematics and random effects in the data, in order to prepare them for the ROMAGAL map making algorithm, which implements the minimum variance GLS approach in order to minimize the noise into the data. It turns out that NCR is a fundamental step in the pre-processing because ROMAGAL, as an FFT mapmaking code needs continuous and regular data time series as input.\n\nNoise residuals in the diffuse emission of the two test fields (SDP Hi-GAL data, two 2$^{\\circ}\\times2^{\\circ}$ tiles centered on the Galactic plane at l = 30$^{\\circ}$ and l = 59$^{\\circ}$) show that we obtain optimal maps, getting rid of glitches and systematic drifts, as well as minimizing the 1\/ f and white noise component. The remaining effects, which do not affect the overall quality of the maps except across the bright source on the PACS 70$\\mu$m maps, are under investigation that will appear in a dedicated publication to be available shortly. \n\n\\section*{Acknowledgements}\nSpecial thanks to G\\\"oran Pilbratt for allowing the using of PV Phase\nblank data. 
\n\n\\bsp\n\n\\bibliographystyle{mn2e}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nElectromagnetic periodic structures have received a lot of attention due to their peculiar properties and their relatively easy integration in complex designs. Among the applications are the design of Frequency-Selective Surfaces (FSS), reflect- and transmit-arrays, metasurfaces, metamaterials, etc. The efficient study of these materials requires suitable numerical tools.\n\nStructures periodic in two directions can be simulated using the Method of Moments (MoM), a spectral method based on surface integral equations \\cite{ref1}. If the excitation is quasi-periodic, \\textit{i.e.} it shares the periodicity of the structure within a linear phase-shift, the response of the structure can be obtained from the study of a single unit cell. The periodicity is accounted for through the use of a modified Green's function. First, the impedance matrix of the structure is computed. Then, the resulting system of equations is solved to obtain the response of the structure to a given excitation. For unit cells of small or moderate size, most of the time is devoted to the computation of the periodic impedance matrix. Non-periodic sources can be handled using the Array Scanning Method (ASM), which consists of decomposing the non-periodic source into a superposition of quasi-periodic sources and solving each subproblem separately \\cite{ref6}.\n\nOne drawback of the periodic Method of Moments is that the response of the structure must be computed for each frequency and relative phase-shift between consecutive unit cells. This problem becomes critical when using the ASM, for which a large number of phase shifts may be required to get accurate results \\cite{ref9}. One way to counter this drawback is to use interpolation techniques \\cite{ref12, ref10, ref11}. The value of the impedance matrix is computed for a few reference frequencies and\/or phase shifts. Its value at other frequencies and\/or phase shifts is then obtained by interpolation. Brute-force polynomial interpolation is inefficient due to the rapidly oscillating behaviour of the Green's function and the singularity arising from plane-waves with grazing incidence. To improve the convergence of the method, it was proposed to use a Model-Based Parameter Estimation (MBPE) technique \\cite{ref12, ref11}. Based on the physics, a model is developed to describe the generic behaviour of the periodic impedance matrix entries. Then, the evolution of each entry of the impedance matrix is obtained by adjusting a few coefficients involved in the model. In \\cite{ref10}, the periodic impedance matrix for normally incident plane-waves is divided into two contributions: the resonant modes of the substrate and the rest. Then, the relative contribution of the different terms is estimated by fitting the actual value of the periodic impedance matrix at a few sampling frequencies. In \\cite{ref11}, it is proposed to extend the technique to oblique incidence by removing an additional phase term. In this way, frequency interpolation of the periodic impedance matrix can be carried out for non-vanishing phase shifts between consecutive unit cells. However, these studies concentrate on the interpolation of the periodic impedance matrix vs. frequency.
It seems that the interpolation with respect to the phase shift has received little attention so far.\n\nIn this paper, we propose a method to interpolate the periodic impedance matrix for different phase shifts between consecutive unit cells. First, the contribution of the leading Floquet modes is removed from the impedance matrix. Then, an additional phase term is extracted. It is shown that the resulting function is smooth with respect to the phase shift between consecutive unit cells. The contribution of the dominant Floquet modes is evaluated numerically instead of being fitted, so that very few sampling points are required for the interpolation. This is in contrast with MBPE techniques, for which at least one sampling point per coefficient is required. It is also shown that removing the contribution of the main Floquet modes breaks the periodicity of the impedance matrix, so that a single impedance matrix can be used to obtain several sampling points.\n\nThe paper is organized as follows. In Section II, the working principle of the periodic MoM is briefly described. Then, in Section III, the behaviour of the periodic impedance matrix is studied and the interpolation technique is described. Last, in Section IV, the interpolation technique is validated through a few examples. \n\n\\section{The Periodic Method of Moments}\nSurface Integral Equation based numerical methods rely on the surface equivalence principle. The response of a structure to incident fields is modeled using equivalent currents. The fields generated by these currents correspond to the fields scattered by the structure. The amplitude of the equivalent currents for a given excitation can be determined by enforcing boundary conditions across each interface separating different media. Using the Method of Moments, the unknown currents are discretized using a predefined set of basis functions (BF) and the boundary conditions are imposed on average along a predefined set of testing functions (TF) \\cite{ref8}. First, the impact of each BF on the boundary conditions along each TF is computed and stored in the so-called impedance matrix. Then, the amplitude of the currents is found by solving the resulting system of equations:\n\\newcommand{\\mat}[1]{\\underline{\\underline{#1}}}\n\\newcommand{\\vect}[1]{\\mathbf{#1}}\n\\begin{equation}\n\\mat{Z} \\, \\vect{x} = - \\vect{b},\n\\end{equation}\nwith $\\mat{Z}$ the impedance matrix, $\\vect{x}$ a vector containing the unknown coefficients used to expand the equivalent currents using the BFs, and $\\vect{b}$ the impact of the incident fields on the boundary conditions.\nOne natural set of boundary conditions is the continuity of the tangential electric and magnetic fields, leading to the Poggio-Miller-Chang-Harrington-Wu-Tsai (PMCHWT) formulation. Using this formulation, the impedance matrix corresponds to the sum of the fields generated by the BF along the TF through the media on both sides of the interface.\n\nThis method can be extended to periodic structures provided that the fields radiated by the replicas of the BFs are included when computing the impedance matrix \\cite{ref1}.
In that case, one obtains the following system of equations
\begin{equation}
\mat{\tilde{Z}}(\boldsymbol{\varphi}) \, \vect{x} = -\vect{b}(\boldsymbol{\varphi})
\end{equation}
with $\mat{\tilde{Z}}$ the periodic impedance matrix and $\boldsymbol{\varphi}$ the phase-shift between consecutive unit cells in the two directions of periodicity.

One way to compute the periodic impedance matrix is to first compute the periodic Green's function, convolve it spatially with the BFs, and integrate the result along the TFs. In that case, several methods can be used to accelerate the computation of the periodic Green's function (see \textit{e.g.} \cite{ref2, ref3, ref4, ref5}). Another possibility is to compute the impedance matrix directly in the spectral domain. In that case, each entry of the impedance matrix can be expressed as an infinite series of spectral terms, each term corresponding to one Floquet mode of the structure \cite{ref14}.


\section{Interpolation scheme}
\newcommand{\dir}[1]{\hat{\mathbf{#1}}}
We consider a homogeneous medium of permittivity and permeability $\varepsilon$ and $\mu$. In this medium, a quasi-periodic set of BFs with periods $d_x$ and $d_y$ and phase-shifts $\varphi_x$ and $\varphi_y$ in the $\dir{x}$ and $\dir{y}$ directions emits electric ($E$) and magnetic ($H$) fields, which are tested with the TFs. We define the direction $\dir{z} = \dir{x}\times \dir{y}$. We consider that the TF is located either above or below the BF, \textit{i.e.} the sign of $\dir{z} \cdot (\vect{r}' - \vect{r})$ is identical for any pair of points $\vect{r}'$ and $\vect{r}$ on the BF and TF, respectively. This condition is always met in planar geometries, so that we will restrict ourselves to this particular case for the rest of the paper.
Then, using the notations of \cite{ref9}, the corresponding entry of the periodic impedance matrix reads
\newcommand{\sumInf}[1]{\sum_{#1=-\infty}^\infty}
\begin{align}
\tilde{Z}^{EJ}&(\boldsymbol{\varphi}) = 
\dfrac{\eta k}{2 d_x d_y}
\sumInf{p} \sumInf{q} \dfrac{1}{\gamma_{pq}}
\label{eq:01}
\\
& \hspace{-1cm} \times
\Big( \tilde{f}_{B,e} (\mathbf{k}_{pq}) \tilde{f}_{T,e} (-\mathbf{k}_{pq}) + \tilde{f}_{B,m} (\mathbf{k}_{pq}) \tilde{f}_{T,m} (-\mathbf{k}_{pq}) \Big)
\nonumber
\\
\tilde{Z}^{EM}&(\boldsymbol{\varphi}) = 
\dfrac{ k}{2 d_x d_y}
\sumInf{p} \sumInf{q}
\dfrac{1}{\gamma_{pq}}
\label{eq:02} 
\\
& \hspace{-1cm} \times
\Big( \tilde{f}_{B,e} (\mathbf{k}_{pq}) \tilde{f}_{T,m} (-\mathbf{k}_{pq}) - \tilde{f}_{B,m} (\mathbf{k}_{pq}) \tilde{f}_{T,e} (-\mathbf{k}_{pq}) \Big) \nonumber
\end{align}
with $\boldsymbol{\varphi} = (\varphi_x, \varphi_y)$, $\eta = \sqrt{\mu/\varepsilon}$ the impedance of the medium through which the interactions are computed, $k = \omega \sqrt{\varepsilon \mu}$ the wavenumber of the medium, $\omega$ the angular frequency, $\mathbf{k}_{pq} = (k_{x,p}, k_{y,q}, \pm \gamma_{pq})$ the wave-vector associated with Floquet harmonic $(p,q)$, $\tilde{f}_{B/T,e/m}$ the Fourier transform of the $\dir{e}$ or $\dir{m}$ component of the BF ($B$) and TF ($T$), with:
\begin{align}
\dir{e}(\mathbf{k}_{pq}) 
&= 
\dfrac{1}{k_{t,pq}} \big(-k_{y,q}, k_{x,p}, 0 \big) 
\\
\dir{m}(\mathbf{k}_{pq}) 
&=
- \dfrac{\gamma_{pq} }{k k_{t,pq}} \bigg(\pm k_{x,p}, \pm k_{y,q}, - \dfrac{k_{t,pq}^2}{\gamma_{pq}} \bigg)
\\
\mathbf{k}_{t,pq} &= \big(k_{x,p}, k_{y,q}, 0 \big) \\
\gamma_{pq} &= \sqrt{k^2-k_{t,pq}^2} \\
k_{t,pq} &= \sqrt{\mathbf{k}_{t,pq} \cdot \mathbf{k}_{t,pq}} \\
k_{x,p} &= \dfrac{2 \pi p + \varphi_x}{d_x} \label{eq:08}\\
k_{y,q} &= \dfrac{2 \pi q + \varphi_y}{d_y} \label{eq:09}
\end{align}
Note that where $\pm$ is indicated, the $+$ sign is used if the TF is located above the BF, and the $-$ sign otherwise. Similarly, note that $b = \sqrt{a}$ is defined such that its imaginary part is negative and, if $b$ is real, it is positive.

From \eqref{eq:01} and \eqref{eq:02}, the dependence of the impedance matrix on $\boldsymbol{\varphi}$ can be analyzed.
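For concreteness, the following minimal Python sketch evaluates a truncated version of series \eqref{eq:01} for a single entry. It is only an illustration of the structure of the series: the functions \texttt{fB} and \texttt{fT}, which return the $\dir{e}$ and $\dir{m}$ spectral components of the BF and TF, are assumed to be available in closed form and are not part of the formulation above.
\begin{verbatim}
import numpy as np

def gamma_pq(k, kt2):
    # Square root of k^2 - kt^2 with the branch used above:
    # positive when real, negative imaginary part otherwise.
    g = np.sqrt(complex(k**2 - kt2))
    return -g if g.imag > 0 else g

def z_ej_entry(phi_x, phi_y, dx, dy, k, eta, fB, fT, P=30):
    # Truncated spectral (Floquet) series for one Z^{EJ} entry.
    # fB(kx, ky) and fT(kx, ky) return the (e, m) components.
    total = 0j
    for p in range(-P, P + 1):
        for q in range(-P, P + 1):
            kx = (2 * np.pi * p + phi_x) / dx
            ky = (2 * np.pi * q + phi_y) / dy
            g = gamma_pq(k, kx**2 + ky**2)
            fBe, fBm = fB(kx, ky)
            fTe, fTm = fT(-kx, -ky)
            total += (fBe * fTe + fBm * fTm) / g
    return eta * k / (2 * dx * dy) * total
\end{verbatim}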
Looking at each term of the sum separately and factoring out the phase term due to the spatial shift between the BF and TF, one can highlight four different factors:
\begin{equation}
\label{eq:03}
\dfrac{1}{\gamma_{pq}}
\end{equation}
\begin{equation}
\label{eq:04}
\tilde{f}_{B,e/m}^0 (\mathbf{k}_{pq}) \tilde{f}_{T,e/m}^0 (-\mathbf{k}_{pq}) 
\end{equation}
\begin{equation}
\label{eq:05}
\exp\bigg(j \mathbf{k}_{t,pq} \cdot (\mathbf{r}_B^0-\mathbf{r}_T^0)\bigg)
\end{equation}
\begin{equation}
\label{eq:06}
\exp\bigg(\pm j \gamma_{pq} \dir{z} \cdot (\mathbf{r}_B^0-\mathbf{r}_T^0)\bigg)
\end{equation}
with $\tilde{f}_{B,e/m}^0 (\mathbf{k}_{pq})$ and $\tilde{f}_{T,e/m}^0 (-\mathbf{k}_{pq})$ the Fourier transforms of the BF and TF when they are centered at the origin and $\mathbf{r}_B^0$ and $\mathbf{r}_T^0$ the positions of the BF's and TF's actual centers with respect to the origin.

We now look at each factor separately.
\subsubsection*{Factor \eqref{eq:03}}
For $k_{t,pq} \simeq k$, this factor varies rapidly with $\varphi_x$ and $\varphi_y$, which may make it hard to interpolate. However, for $k_{t,pq} \gg k$, the variation is slow with respect to $\boldsymbol{\varphi}$ and thus the interpolation is expected to work well.
\subsubsection*{Factor \eqref{eq:04}}
The BF and TF are generally well-behaved functions, so their Fourier transforms do not exhibit any sharp feature. Moreover, since the BF and TF are usually much smaller than the unit cell, their Fourier transforms are expected to vary very slowly with $\boldsymbol{\varphi}$.
\subsubsection*{Factor \eqref{eq:05}}
The variation of this factor with respect to $\boldsymbol{\varphi}$ does not depend on the indices $(p,q)$ of the Floquet mode considered and can be factored out as 
\begin{equation}
\label{eq:07}
\exp\bigg(j \Big(\dfrac{\varphi_x}{d_x} \dir{x} + \dfrac{\varphi_y}{d_y} \dir{y}\Big) \cdot (\mathbf{r}_B^0-\mathbf{r}_T^0)\bigg).
\end{equation}
\subsubsection*{Factor \eqref{eq:06}}
Four different cases can be highlighted, depending on the distance $\Delta z$ between the BF and the TF in the $\dir{z}$ direction and depending on $k_{t,pq}$, the amplitude of the transverse wave-vector. On the one hand, if $\Delta z \gg 0$, variations of the factor are rapid if $k_{t,pq} \lesssim k$. However, if $k_{t,pq} \gg k$, \eqref{eq:06} becomes negligible and the contribution of the corresponding terms to series \eqref{eq:01} and \eqref{eq:02} is negligible. On the other hand, if $\Delta z \simeq 0$, the variation with respect to $\gamma_{pq}$, and thus $k_{t,pq}$, is slow. It should be noticed that for $k_{t,pq} = k$, despite the relatively slow variation, the function is not analytic. Thus, interpolation is not expected to provide accurate results when crossing this limit.

Summarizing all these observations, the high-order $(p,q)$ terms, and thus their sum, are expected to be easy to interpolate.
Thus the interpolation method is the following (a short numerical sketch of the smoothing steps is given in Section IV):
\begin{enumerate}
\item Compute the periodic impedance matrix for a few phase shifts.
\item Remove the contribution of the dominant Floquet modes ($|p|, |q| \leq N$).
\item Remove the phase term of \eqref{eq:07}.
\item Interpolate the remaining function for the phase shift of interest using a polynomial interpolation technique.
\item Re-introduce the phase term of \eqref{eq:07}.
\item Re-add the contribution of the dominant Floquet modes.
\end{enumerate}

One interesting feature of the technique is that $\mat{\tilde{Z}}$ is periodic with respect to $\varphi_x$ and $\varphi_y$, so that 
\begin{equation}
\mat{\tilde{Z}}(\varphi_x, \varphi_y) = \mat{\tilde{Z}}(\varphi_x + 2\pi, \varphi_y) = \mat{\tilde{Z}}(\varphi_x, \varphi_y +2\pi).
\end{equation}
However, the Floquet modes that are removed do depend on the phase shift considered since, according to \eqref{eq:08} and \eqref{eq:09}, we have
\begin{equation}
k_{x,p}(\varphi_x +2\pi) = k_{x,p+1}(\varphi_x)
\end{equation}
\begin{equation}
k_{y,q}(\varphi_y + 2\pi) = k_{y,q+1}(\varphi_y)
\end{equation}
This means that from a single periodic impedance matrix, one can evaluate the value of the interpolant at several different locations. Let us take a simplified 1D example, as illustrated in Fig. \ref{fig:02}. On the top graph, one can see the evolution of the Floquet modes of the series \eqref{eq:01} and \eqref{eq:02} with the phase-shift $\varphi$. Clearly, if $\varphi$ is increased by $2\pi$, the Floquet modes are translated by a single period. Hence, when summing the contribution of the Floquet modes, one obtains the same value for $\varphi$ and $\varphi +2\pi$. However, after removing the contribution of the $2N+1$ dominant Floquet modes, the remaining terms in series \eqref{eq:01} and \eqref{eq:02} will be different for different phase shifts $\varphi$ (cf. bottom graphs). The resulting interpolant $Z_\text{rem}(\varphi)$ will be non-periodic. Thus, from a single periodic impedance matrix, by properly choosing the Floquet modes that are being removed, one can determine the value of the interpolant at several different locations.

\begin{figure}
\center
\includegraphics[width = 6 cm]{Illustration_Zper} \\
\includegraphics[width = 7cm, trim={1cm 0 1.2cm 0},clip]{Illustration_Zrem}
\caption{Top graph: Qualitative illustration of the evolution of the Floquet modes of \eqref{eq:01} and \eqref{eq:02} with respect to the phase shift of the periodic impedance matrix. Different colors correspond to different phase shifts. Bottom graphs: Floquet modes included in the interpolant $Z_\text{rem}(\varphi)$ for $\varphi = -2\pi, 0, 2\pi$. The black dashed Floquet modes are not accounted for, in order to smooth the interpolant (Floquet mode extraction). Through the extraction of these modes, the periodicity of the series of \eqref{eq:01} and \eqref{eq:02} is broken.}
\label{fig:02}
\end{figure}

For example, considering the original 2D periodic scenario, the periodic impedance matrix $\mat{\tilde{Z}}(\boldsymbol{\varphi} = (0,0))$ can be used to estimate the value of the interpolant at points $(-2\pi, -2\pi)$, $(-2\pi, 0)$, $(-2\pi, 2\pi)$, $(0, -2\pi)$, $(0, 0)$, $(0, 2\pi)$, $(2\pi, -2\pi)$, $(2\pi, 0)$ and $(2\pi, 2\pi)$.
For each of these sampling points, a different set of Floquet harmonics will be extracted from the periodic impedance matrix and thus the value of the interpolant will be different.

\section{Numerical results}
In order to illustrate the improvement of the interpolant with each successive transformation, we consider the interaction between two identical co-planar rooftop basis functions aligned in the $\dir{x}$ direction. Each half of the rooftop corresponds to a square of size $\lambda/18$, $\lambda$ being the wavelength. The periodicity in the $\dir{x}$ and $\dir{y}$ directions is $\lambda/1.8$. The TF corresponds to the replica of the BF translated by half a unit cell in the $\dir{x}$ and $\dir{y}$ directions. The evolution of the periodic interaction with phase shift is illustrated in the top graphs of Fig. \ref{fig:01}. Graphs on the left-hand side illustrate the real part and graphs on the right-hand side illustrate the imaginary part of the periodic interaction. The middle graphs illustrate the value of the periodic impedance matrix after removing the contribution of the 9 dominant Floquet modes ($N=1$). The bottom graphs illustrate the value of the interpolant after removing both the contribution of the 9 dominant Floquet modes and the phase term of \eqref{eq:07}. It can be seen that, at each step, the function becomes much smoother and easier to interpolate. Similar results are found for different pairs of BF and TF. It is worth noting that the value of the interpolant is illustrated for phase-shifts varying from $-2\pi$ to $2\pi$. While one is generally interested in estimating the periodic impedance matrix for phase shifts ranging from $-\pi$ to $\pi$, this extended zone can be used to add sampling points that may improve the polynomial interpolation. 

\begin{figure}
\center
\includegraphics[width = 8cm]{fig1.pdf}
\caption{Evolution of the periodic interaction with the contribution of all the Floquet modes (top graphs), without the contribution of the 9 dominant Floquet modes (middle graphs) and after removing the remaining phase term (bottom graphs). Both the real (LHS) and imaginary (RHS) parts of the functions are displayed. The red square delineates the zone of phase-shifts going from $-\pi$ to $\pi$.}
\label{fig:01}
\end{figure}

In order to check the accuracy of the interpolation procedure, we meshed a whole plane of the unit cell using 200 rooftop TFs oriented along the $\dir{x}$ and $\dir{y}$ directions. The periodic interaction between a single BF and the 200 TFs was computed for several different heights $\Delta z$ between the BF and the TF. For the interpolation, we used four different periodic impedance matrices: $\mat{\tilde{Z}}(0,0)$, $\mat{\tilde{Z}}(0, \pi)$, $\mat{\tilde{Z}}(\pi,0)$ and $\mat{\tilde{Z}}(\pi, \pi)$. From these impedance matrices, the interpolant has been evaluated at 25 sampling points using the described procedure with $N=1$ (9 Floquet modes extracted), the coordinates of the sampling points corresponding to $\varphi_x, \varphi_y \in \{-2\pi, -\pi, 0, \pi, 2\pi\}$. Then, from these sampling points, the interpolant was approximated using a polynomial of order 4. Due to the high number of sampling points, the coefficients of the polynomial were obtained by solving an overconstrained system of equations, the relative weight attributed to the most distant points being a hundred times smaller than the weight attributed to the nine central points.
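For illustration, steps 2--3 and 5--6 of the procedure of Section III can be sketched in a few lines of Python, reusing the function \texttt{z\_ej\_entry} of the previous sketch (with \texttt{P}$\,=N$ it returns exactly the contribution of the $(2N+1)^2$ dominant Floquet modes). The dictionary \texttt{geom} gathering the unit-cell parameters and the BF and TF centers is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def linear_phase(phi_x, phi_y, geom):
    # Linear phase due to the shift between BF and TF centers.
    drx = geom["rB0"][0] - geom["rT0"][0]
    dry = geom["rB0"][1] - geom["rT0"][1]
    return np.exp(1j * (phi_x / geom["dx"] * drx
                        + phi_y / geom["dy"] * dry))

def to_interpolant(z_per, phi_x, phi_y, geom, fB, fT, N=1):
    # Steps 2-3: subtract dominant modes, strip the phase term.
    z_dom = z_ej_entry(phi_x, phi_y, geom["dx"], geom["dy"],
                       geom["k"], geom["eta"], fB, fT, P=N)
    return (z_per - z_dom) / linear_phase(phi_x, phi_y, geom)

def from_interpolant(z_rem, phi_x, phi_y, geom, fB, fT, N=1):
    # Steps 5-6: re-apply the phase, re-add the dominant modes.
    z_dom = z_ej_entry(phi_x, phi_y, geom["dx"], geom["dy"],
                       geom["k"], geom["eta"], fB, fT, P=N)
    return z_rem * linear_phase(phi_x, phi_y, geom) + z_dom
\end{verbatim}
Note that the same functions exploit the property discussed in Section III: evaluating \texttt{to\_interpolant} at $\varphi_x + 2\pi$ extracts a shifted set of dominant modes, yielding an additional sampling point from the same periodic impedance matrix.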
Periodic impedance matrices were computed using the method of \cite{ref15}. Since only one BF is used, the resulting impedance ``matrices'' correspond to vectors. The error is estimated as the norm of the difference between the interpolated impedance matrix and the computed one, relative to the norm of the computed one:
\begin{equation}
e(\boldsymbol{\varphi}) = \dfrac{\|\mat{\tilde{Z}}_{interp}-\mat{\tilde{Z}}\|_2}{\|\mat{\tilde{Z}}\|_2}
\end{equation}
with $\mat{\tilde{Z}}_{interp}$ the periodic impedance matrix obtained using the interpolation technique and $\|\cdot\|_2$ the $\ell_2$ norm. For any distance and phase-shift, the maximum error was found to be approximately $0.2\%$.

We compared the times required to prepare the interpolation procedure and to apply it. To do so, we considered the electric and magnetic fields generated by the plane on itself ($200 \times 400$ interactions). In \cite{ref15}, the periodic Green's function is first tabulated and then integrated over the BF and TF. For each phase-shift, the tabulation of the periodic Green's function required 20~s and the computation of the impedance matrix required 6~s. Thus, computing the four periodic impedance matrices required about 105~s. Then, as explained earlier, the four periodic impedance matrices have been used to estimate the value of the interpolant at 25 different points. The estimation of the interpolant took about 10~s. Last, interpolating the value of the matrix for a given phase-shift took about 0.5~s. The duration of the different steps is summarized in Table \ref{tab:01}. It should be emphasized that the computation of the reference impedance matrices and the extraction of the dominant Floquet modes only need to be performed once.

\begin{table}
\center
\caption{Computation times involved in the interpolation technique (in seconds).}
\begin{tabular}{|c|c|c|}
\hline
Computation of four
&
Extraction of the dominant 
&
Interpolation of
\\
periodic impedance
&
Floquet modes and
&
one matrix
\\
matrices $\mat{\tilde{Z}}$
&
linear phase shift
&
~
\\
\hline
105 & 10 & 0.5
\\
\hline
\end{tabular}
\label{tab:01}
\end{table}

Last, the interpolation technique has been used to compute the radiation pattern of the leaky-wave antenna proposed in \cite{sup1} and used in \cite{ref9} to validate the periodic MoM solver. The leaky-wave antenna is made of a ground plane and a periodic array of patches located 19 mm above the ground plane. The patches are rectangular, with dimensions of 12.5 mm and 1 mm in the $\dir{x}$ and $\dir{y}$ directions. The periodicity of the patches is 13.5 mm in the $\dir{x}$ direction and 3 mm in the $\dir{y}$ direction. The structure is excited using a $\dir{x}$-directed dipole located 9.5 mm below the center of one patch. The radiation pattern of the structure in the $E$ and $H$ planes ($x$-$z$ and $y$-$z$ planes, respectively) is studied at 9.5 GHz.

The radiation pattern was computed twice, the results being displayed in Fig. \ref{fig:03}. In both cases, the code of \cite{ref9} was used to compute the periodic impedance matrices. The first radiation pattern was obtained by explicitly computing the periodic impedance matrix for each different phase shift. The second radiation pattern was obtained by computing the periodic impedance matrix for four different phase shifts and then interpolating the value of the periodic impedance matrix for the other phase shifts.
The parameters used for the interpolation were the same as those used in the previous example. The radiation patterns obtained using both techniques and their difference are displayed in Fig. \ref{fig:03}. It can be seen that both radiation patterns are in excellent agreement and that the relative error introduced by the interpolation technique is about 0.1\%.

\begin{figure}
\center
\includegraphics[width = 7cm]{final_graph_1.pdf}
\caption{Radiation pattern of the leaky-wave antenna described in \cite{sup1} and simulated in \cite{ref9}. Blue and red curves correspond to the radiation pattern in the E and H-planes, respectively. Continuous lines: reference radiation patterns for which each periodic impedance matrix was computed explicitly. Dashed lines: radiation patterns obtained using the interpolation technique. Dash-dotted lines: error due to the interpolation technique.}
\label{fig:03}
\end{figure}

\section{Conclusion}
To conclude, we proposed a technique to efficiently interpolate the Method of Moments periodic impedance matrix vs. the phase shift between consecutive unit cells. To improve accuracy, the interpolant is smoothed by extracting the contribution of the dominant Floquet modes and a linear phase term. The efficiency of the method partly relies on the fact that a single periodic impedance matrix can be used to sample the interpolant at many different points. The accuracy of the method has been validated on numerical examples.

\section*{Acknowledgment}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No. 842184.

\newcommand{\myAcknowledgments}[1]{\section*{Acknowledgments}{#1}}
\newcommand{\mykeywords}[1]{}
\bibliographystyle{alpha}

\newenvironment{proofidea}{\noindent{\textit{Proof idea.}}}{\hfill$\square$\medskip}

\newcommand{\bigdotcup}{\ensuremath{\mathaccent\cdot{\bigcup}}}
\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\card}[1]{\lvert#1\rvert}
\renewcommand{\dim}{{n}}

\newcommand{\RR}{\ensuremath{\mathbb{R}}}
\newcommand{\ones}{\ensuremath{\mathbbm{1}}}
\newcommand{\expectation}{\operatorname{\mathbb{E}}}
\newcommand{\e}{\expectation}
\newcommand{\conv}{\operatorname{conv}}
\newcommand{\suchthat}{\mathrel{:}}
\newcommand{\norm}[1]{{\lVert#1\rVert}}
\newcommand{\diag}{\operatorname{diag}}
\newcommand{\eps}{\epsilon}

\newcommand{\poly}{\operatorname{poly}}
\newcommand{\polylog}{\operatorname{polylog}}
\newcommand{\abs}[1]{\lvert#1\rvert}
\newcommand{\vol}{\operatorname{vol}}
\newcommand{\expdist}[1]{\mathrm{Exp({#1})}}
\newcommand{\noname}[1]{}

\newcommand{\dimension}{n}

\newtheorem{problem}{Problem}

\title{Efficient Learning of Simplices}

\date{}

\begin{document}
\maketitle
\begin{abstract}
We show an efficient algorithm for the following problem: Given uniformly random points from an arbitrary $\dim$-dimen\-sional simplex, estimate the simplex.
The size of the sample and the number of arithmetic operations of our algorithm are polynomial in $\dim$.
This answers a question of Frieze, Jerrum and Kannan~\cite{FJK}.
Our result can also be interpreted as efficiently learning the intersection of $\dim+1$ half-spaces in $\RR^\dim$ in the model where the intersection is bounded and we are given polynomially many uniform samples from it.
Our proof uses the local search technique from Independent Component Analysis (ICA), also used by \cite{FJK}. Unlike these previous algorithms, which were based on analyzing the fourth moment, ours is based on the third moment.

We also show a direct connection between the problem of learning a simplex and ICA: a simple randomized reduction to ICA from the problem of learning a simplex. The connection is based on a known representation of the uniform measure on a simplex. Similar representations lead to a reduction from the problem of learning an affine transformation of an $n$-dimensional $\ell_p$ ball to ICA. 
\end{abstract}

\mykeywords{independent component analysis, randomized reductions, learning convex bodies, method of moments}

\section{Introduction}
Given uniformly random samples from an unknown convex body in $\RR^\dim$, how many samples are needed to approximately reconstruct the body? It seems intuitively clear, at least for $\dim = 2, 3$, that if we are given sufficiently many such samples then we can reconstruct (or learn) the body with very little error. For general $\dim$, reconstruction is known to require $2^{\Omega(\sqrt{\dim})}$ samples~\cite{GR09} (see also \cite{KOR} for a similar lower bound in a different but related model of learning).
This is an information-theoretic lower bound and no computational considerations are involved. As mentioned in \cite{GR09}, it turns out that if the body has few facets (e.g. polynomial in $\dim$), then polynomially many (in $\dim$) samples are sufficient for approximate reconstruction.
This is an information-theoretic upper bound, and no efficient algorithms (i.e., with running time poly$(n)$) are known.
(We remark that to our knowledge the same situation holds for polytopes with poly$(n)$ vertices.) In this paper we study the reconstruction problem for the special case when the input bodies are restricted to be (full-dimensional) simplices. We show that in this case one can in fact learn the body efficiently. More precisely, the algorithm knows that the input body is a simplex but only up to an affine transformation, and the problem is to recover this affine transformation.
This answers a question of \noname{Frieze~et~al.~}\cite[Section 6]{FJK}.


The problem of learning a simplex is also closely related to the well-studied problem of learning intersections of half-spaces.
Suppose that the intersection of $\dim+1$ half-spaces in $\RR^\dim$ is bounded, and we are given poly$(\dim)$ uniformly random samples from it. Then our learning simplices result directly implies that we can learn the $\dim+1$ half-spaces. This also has the advantage of being a proper learning algorithm, meaning that the output of the algorithm is a set of $\dim+1$ half-spaces, unlike many of the previous algorithms.


\paragraph{Previous work.}

Perhaps the first approach to learning simplices that comes to mind is to find a minimum volume simplex containing the samples. This can be shown to be a good approximation to the original simplex.
(Such minimum volume estimators have been studied in the machine learning literature; see e.g. \cite{ScholkopfPSSW01} for the problem of estimating the support of a probability distribution. We are not aware of any technique that applies to our situation and provides theoretical guarantees.)
However, the problem of finding a minimum volume simplex is in general NP-hard~\cite{Packer}. This hardness is not directly applicable to our problem because our input is a random sample and not a general point set. Nevertheless, we do not have an algorithm for directly finding a minimum volume simplex; instead we use ideas similar to those used in Independent Component Analysis (ICA).
ICA studies the following problem:
Given a sample from an affine transformation of a random vector with independently distributed coordinates, recover the affine transformation (up to some unavoidable ambiguities).
\noname{Frieze~et~al.~}\cite{FJK} gave an efficient algorithm for this problem
(with some restrictions on the allowed distributions, but also with some weaker requirements than full independence)
along with most of the details of a rigorous analysis (a complete analysis of a special case can be found in \noname{Arora et al.~}\cite{Arora2012}; see also \noname{Vempala and Xiao~}\cite{VempalaX11} for a generalization of ICA to subspaces along with a rigorous analysis). 
The problem of learning parallelepipeds from uniformly random samples is a special case of this problem. 
\cite{FJK} asked if one could learn other convex bodies, and in particular simplices, efficiently from uniformly random samples.
\noname{Nguyen and Regev~}\cite{NguyenR09} gave a simpler and rigorous algorithm and analysis for the case of learning parallelepipeds, with similarities to the popular FastICA algorithm of \noname{Hyv\"{a}rinen~}\cite{Hyvarinen99}.
The algorithm in \cite{NguyenR09} is a first-order algorithm, unlike Frieze~et~al.'s second-order algorithm.

The algorithms in both \cite{FJK, NguyenR09} make use of the fourth moment function of the probability distribution. Briefly, the fourth moment in direction $u \in \RR^n$ is $\mathbb{E}(u\cdot X)^4$, where $X \in \RR^n$ is the random variable distributed according to the input distribution. The moment function can be estimated from the samples. The independent components of the distribution correspond to local maxima or minima of the moment function, and can be approximately found by finding the local maxima/minima of the moment function estimated from the sample.


More information on ICA, including historical remarks, can be found in \cite{ICAbook, BlindSS}.
Ideas similar to ICA have been used in statistics in the context of projection pursuit since the mid-seventies.
It is not clear how to apply ICA to the simplex learning problem directly, as there is no clear independence among the components. Let us note that \noname{Frieze et al.~}\cite{FJK} allow certain kinds of dependencies among the components; however, this does not appear to be useful for learning simplices.


Learning intersections of half-spaces is a well-studied problem in learning theory.
The problem of PAC-learning intersections of even two half-spaces is open, and there is evidence that it is hard at least for a sufficiently large number of half-spaces: E.g., \noname{Klivans and Sherstov~}\cite{KlivansS09} prove that learning intersections of $n^\epsilon$ half-spaces in $\RR^\dim$ (for constant $\epsilon>0$) is hard under standard cryptographic assumptions (PAC-learning is possible, however, if one also has access to a membership oracle in addition to random samples \cite{KwekP98}). Because of this, much effort has been expended on learning when the distribution of random samples is some simple distribution, see e.g. \cite{KlivansS07, Vempala10, VempalaFocs10} and references therein. This line of work makes substantial progress towards the goal of learning intersections of $k$ half-spaces efficiently; however, it falls short of being able to do this in time polynomial in $k$ and $n$; in particular, these algorithms do not seem to be able to learn simplices. The distribution of samples in these works is either the Gaussian distribution or the uniform distribution over a ball.
\noname{Frieze et al.~}\cite{FJK} and \noname{Goyal and Rademacher~}\cite{GR09} consider the uniform distribution over the intersection. Note that this requires that the intersection be bounded. Note also that one only gets positive samples in this case, unlike other work on learning intersections of half-spaces. The problem of learning convex bodies can also be thought of as a distribution learning or density estimation problem for a special class of distributions.


\noname{Gravin et al.~}\cite{Gravin12} show how to reconstruct a polytope with $N$ vertices in $\RR^n$, given its first $O(nN)$ moments in $(n+1)$ random directions. In our setting, where we have access to only a polynomial number of random samples, it is not clear how to compute moments of such high orders to the accuracy required for the algorithm of \cite{Gravin12}, even for simplices.

A recent and parallel work of \noname{Anandkumar et al.~}\cite{AnandTensor} is closely related to ours. They show that tensor decomposition methods can be applied to low-order moments of various latent variable models to estimate their parameters. The latent variable models considered by them include Gaussian mixture models, hidden Markov models and latent Dirichlet allocation. The tensor methods used by them and the local optima technique we use seem closely related.
One could view our work, as well as theirs, as showing that the method of moments along with existing algorithmic techniques can be applied to certain unsupervised learning problems.

\paragraph{Our results}

For clarity of presentation, we use the following machine model for the running time: a random access machine that allows the following exact arithmetic operations over real numbers in constant time: addition, subtraction, multiplication, division and square root.

The estimation error is measured using total variation distance, denoted $d_{TV}$ (see Section \ref{sec:preliminaries}).

\begin{theorem} \label{thm:main}
There is an algorithm (Algorithm~\ref{alg:simplex} below) such that given access to random samples from a simplex
$S_{INPUT} \subseteq \RR^\dim $, with probability at least $1-\delta$ over the sample and the randomness of the algorithm, it outputs $n+1$ vectors that are the vertices of a simplex $S$ so that $d_{TV}(S, S_{INPUT}) \leq \eps$.
The algorithm runs in time polynomial in $\dim$, $1/\eps$ and $1/\delta$.
\end{theorem}

As mentioned earlier, our algorithm uses ideas from ICA. 
Our algorithm uses the third moment instead of the fourth moment used in certain versions of ICA. The third moment is not useful for learning symmetric bodies such as the cube, as it is identically $0$. 
It is however useful for learning a simplex, where it provides useful information, and is easier to handle than the fourth moment.
One of the main contributions of our work is the understanding of the third moment of a simplex and the structure of its local maxima. This is more involved than in previous work, as the simplex has no obvious independence structure, and the moment polynomial one gets has no obvious structure, unlike for ICA.

The probability of success of the algorithm can be ``boosted'' so that the dependence of the running time on $\delta$ is only linear in $\log(1/\delta)$, as follows: The following discussion uses the space of simplices with total variation distance as the underlying metric space. Let $\eps$ be the target distance. Take an algorithm that succeeds with probability $5/6$ and error parameter $\eps'$, to be fixed later (such as Algorithm \ref{alg:simplex} with $\delta = 1/6$). Run the algorithm $t = O(\log 1/\delta)$ times to get $t$ simplices. By a Chernoff-type argument, at least $2t/3$ simplices are within $\eps'$ of the input simplex with probability at least $1- \delta/2$.

By sampling, we can estimate the distances between all pairs of simplices with additive error less than $\eps'/10$ in time polynomial in $t, 1/\epsilon'$ and $\log{1/\delta}$, so that all estimates are correct with probability at least $1-\delta/2$.
For every output simplex, compute the number of output simplices within estimated distance $(2+1/10)\eps'$. With probability at least $1-\delta$ both of the desirable events happen, and then necessarily there is at least one output simplex, call it $S$, that has $2t/3$ output simplices within estimated distance $(2+1/10)\eps'$. Any such $S$ must be within $(3+2/10) \eps'$ of the input simplex. Thus, set $\eps'=\eps/(3+2/10)$.

While our algorithm for learning simplices uses techniques for ICA, we have to do substantial work to make those techniques work for the simplex problem. 
We also show a more direct connection between the problem of learning a simplex and ICA: a randomized reduction from the problem of learning a simplex to ICA.
The connection is based on a known representation of the uniform measure on a simplex as a normalization of a vector having independent coordinates. Similar representations are known for the uniform measure in an $n$-dimensional $\ell_p$ ball (denoted $\ell_p^n$) \cite{barthe2005probabilistic} and the \emph{cone measure} on the boundary of an $\ell_p^n$ ball \cite{schechtman1990volume, rachev1991approximate, MR1396997} (see Section \ref{sec:preliminaries} for the definition of the cone measure). 
These representations lead to a reduction from the problem of learning an affine transformation of an $\ell_p^n$ ball to ICA. These reductions show connections between estimation problems with no obvious independence structure and ICA. 
They also make possible the use of any off-the-shelf implementation of ICA. 
However, the results here do not supersede our result for learning simplices because to our knowledge no rigorous analysis is available for the ICA problem when the distributions are the ones in the above reductions. 

\paragraph{Idea of the algorithm.}

The new idea for the algorithm is that after putting the samples in a suitable position (see below), the third moment of the sample can be used to recover the simplex using a simple FastICA-like algorithm. We outline our algorithm next.

As any full-dimensional simplex can be mapped to any other full-dimensional simplex by an invertible affine transformation, it is enough to determine the translation and linear transformation that would take the given simplex to some canonical simplex.
As is well-known for ICA-like problems (see, e.g., \cite{FJK}), this transformation can be determined \emph{up to a rotation} from the mean and the covariance matrix of the uniform distribution on the given simplex.
The mean and the covariance matrix can be estimated efficiently from a sample.
A convenient choice of an $n$-dimensional simplex is the convex hull of the canonical vectors in $\RR^{\dim+1}$.
We denote this simplex $\Delta_n$ and call it the \emph{standard simplex}.
So, the algorithm begins by picking an arbitrary invertible affine transformation $T$ that maps $\RR^\dim$ onto the hyperplane $\{x \in \RR^{\dim+1} \suchthat \ones \cdot x = 1 \}$. We use a $T$ so that $T^{-1}(\Delta_n)$ is an isotropic\footnote{See Section \ref{sec:preliminaries}.} simplex. In this case, the algorithm brings the sample set into isotropic position and embeds it in $\RR^{\dim+1}$ using $T$.
After applying these transformations we may assume (at the cost of small errors in the final result) that our sample set is obtained by sampling from an unknown rotation of the standard simplex that leaves the all-ones vector (denoted $\ones$ from now on) invariant (thus this rotation keeps the center of mass of the standard simplex fixed), and the problem is to recover this rotation.


To find the rotation, the algorithm will find the vertices of the rotated simplex approximately. This can be done efficiently because of the following characterization of the vertices: Project the vertices of the simplex onto the hyperplane through the origin orthogonal to $\ones$ and normalize the resulting vectors. Let $V$ denote this set of $n+1$ points. Consider the problem of maximizing the third moment of the uniform distribution in the simplex along unit vectors orthogonal to $\ones$. Then $V$ is the complete set of local maxima and the complete set of global maxima (Theorem~\ref{thm:maxima}).
A fixed point-like iteration (inspired by the analysis of FastICA~\cite{Hyvarinen99} and of gradient descent in \cite{NguyenR09}) starting from a random point in the unit sphere finds a local maximum efficiently with high probability. By the analysis of the coupon collector's problem, $O(\dim \log \dim)$ repetitions are highly likely to find all local maxima.

\paragraph{Idea of the analysis.}

In the analysis, we first argue that after putting the sample in isotropic position and mapping it through $T$, it is enough to analyze the algorithm in the case where the sample comes from a simplex $S$ that is close to a simplex $S'$ that is the result of applying a rotation leaving $\ones$ invariant to the standard simplex. The closeness here depends on the accuracy of the sample covariance and mean as an estimate of the input simplex's covariance matrix and mean. A sample of size $O(n)$ guarantees
(\cite[Theorem 4.1]{MR2601042}, \cite[Corollary 1.2]{1106.2775}) that the covariance and mean are close enough so that the uniform distributions on $S$ and $S'$ are close in total variation. We show that the subroutine that finds the vertices (Subroutine \ref{alg:vertex}) succeeds with some probability when given a sample from $S'$. By definition of total variation distance, Subroutine \ref{alg:vertex} succeeds with almost as large probability when given a sample from $S$ (an argument already used in \cite{NguyenR09}). As an additional simplifying assumption, it is enough to analyze the algorithm (Algorithm \ref{alg:simplex}) in the case where the input is isotropic, as the output distribution of the algorithm is equivariant with respect to affine invertible transformations as a function of the input distribution.

\paragraph{Organization of the paper.} 
Starting with some preliminaries in Sec.~\ref{sec:preliminaries}, we state some results on the third moment of simplices in Sec.~\ref{sec:moment}. 
In Sec.~\ref{sec:standard} we give an algorithm that estimates individual vertices of simplices in a special position; using this algorithm as a subroutine in Sec.~\ref{sec:general} we give the algorithm for the general case. 
Sec.~\ref{sec:maxima} characterizes the set of local maxima of the third moment.
Sec.~\ref{sec:prob} gives the probabilistic results underlying the reductions from learning simplices and $\ell_p^n$ balls to ICA.
Sec.~\ref{sec:reduction} explains those reductions.


\section{Preliminaries} \label{sec:preliminaries}
An $\dim$-simplex is the convex hull of $\dim+1$ points in $\RR^{\dim}$ that do not lie on an $(\dim-1)$-dimensional affine hyperplane.
It will be convenient to work with the standard $\dim$-simplex $\Delta^{\dim}$ living in $\RR^{\dim+1}$, defined as the convex hull of the $\dim+1$ canonical unit vectors $e_1, \ldots, e_{\dim+1}$; that is,
\begin{align*}
\Delta^{\dim} = \{(x_0, \ldots, x_{\dim}) \in \RR^{\dim+1} &\suchthat x_0 + \dotsb + x_{\dim} = 1 \optionalbreak 
\text{ and } x_i \geq 0 \text{ for all } i\}.
\end{align*}
The canonical simplex $\Omega^\dim$ living in $\RR^\dim$ is given by
\begin{align*}
\{(x_0, \dotsc, x_{\dim-1}) \in \RR^{\dim} &\suchthat x_0 + \dotsb + x_{\dim-1} \leq 1 \optionalbreak
\text{ and } x_i \geq 0 \text{ for all } i\}.
\end{align*}
Note that $\Delta^\dim$ is the facet of $\Omega^{\dim+1}$ opposite to the origin.

Let $B_n$ denote the $n$-dimensional Euclidean ball.

The complete homogeneous symmetric polynomial of degree $d$ in variables $u_0, \ldots, u_{\dim}$, denoted $h_d(u_0, \ldots, u_{\dim})$, is the sum of all monomials of degree $d$ in the variables:
\begin{align*}
h_d(u_0, \ldots, u_\dim)
&= \sum_{k_0 + \dotsb + k_\dim = d} u_0^{k_0} \dotsm u_\dim^{k_\dim} \optionalbreak
= \sum_{0 \leq i_1 \leq i_2 \leq \dotsb \leq i_d \leq \dim} u_{i_1} u_{i_2} \dotsm u_{i_d}.
\end{align*}

Also define the $d$-th power sum as
$$
p_d(u_0, \ldots, u_\dim) = u_0^d + \ldots + u_\dim^d.
$$

For a vector $u = (u_0, u_1, \ldots, u_\dim)$, we define
\[
u^{(2)} = (u_0^2, u_1^2, \ldots, u_\dim^2).
\]
Vector $\ones$ denotes the all ones vector (the dimension of the vector will be clear from the context).

A random vector $X \in \RR^\dim$ is \emph{isotropic} if $\e(X)=0$ and $\e(X X^T) = I$. A compact set in $\RR^\dim$ is isotropic if a uniformly distributed random vector in it is isotropic. The inradius of an isotropic $n$-simplex is $\sqrt{(n+2)/n}$; the circumradius is $\sqrt{n(n+2)}$.

The total variation distance between two probability measures is $d_{TV}(\mu, \nu) = \sup_A \abs{\mu(A) - \nu(A)}$ for measurable $A$. For two compact sets $K, L \subseteq \RR^\dim$, we define the total variation distance $d_{TV}(K, L)$ as the total variation distance between the corresponding uniform distributions on each set. It can be expressed as
\[
 d_{TV}(K,L) =
 \begin{cases}
 \frac{\vol{K \setminus L}}{\vol K} & \text{if $\vol K \geq \vol L$}, \\
 \frac{\vol{L \setminus K}}{\vol L} & \text{if $\vol L > \vol K$.}
 \end{cases}
\]
This identity implies the following elementary estimate:
\begin{lemma}\label{lem:bmtv}
Let $K, L$ be two compact sets in $\RR^\dim$. Let $0 < \alpha \leq 1 \leq \beta$ such that $\alpha K \subseteq L \subseteq \beta K$. Then $d_{TV}(K,L) \leq 2\left(1-(\alpha/\beta)^\dim\right)$.
\end{lemma}
\begin{proof}
We have $d_{TV}(\alpha K, \beta K) = 1-(\alpha/\beta)^\dim$. The triangle inequality implies the desired inequality.
\end{proof}

\begin{lemma}\label{lem:coupons}
Consider the coupon collector's problem with $n$ coupons, where every coupon occurs with probability at least $\alpha$. Let $\delta >0$.
Then with probability at least $1-\delta$, all coupons are collected after $\alpha^{-1} (\log n + \log 1/\delta)$ trials.
\end{lemma}
\begin{proof}
The probability that a particular coupon is not collected after that many trials is at most
\[
(1-\alpha)^{\alpha^{-1} (\log n + \log 1/\delta)} \leq e^{-\log n - \log 1/\delta} = \delta/n.
\]
The union bound over all coupons implies the claim.
\end{proof}

For a point $x \in \RR^{\dimension}$, $\norm{x}_p = (\sum_{i = 1}^{\dimension} |x_i|^p)^{1/p}$ is the standard $\ell_p$ norm. The unit $\ell_p^n$ ball is defined by 
\[
B_p^{\dimension} = \{x \in \RR^{\dimension}: \norm{x}_p \leq 1 \}.
\]

The Gamma distribution is denoted $\mathrm{Gamma}(\alpha, \beta)$ and has density $f(x; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{- \beta x } 1_{x \geq 0}$, with shape parameter $\alpha > 0$ and rate parameter $\beta > 0$.
$\mathrm{Gamma}(1, \lambda)$ is the exponential distribution, denoted $\expdist{\lambda}$. 
The Gamma distribution satisfies the following additivity property: If $X$ is distributed as $\mathrm{Gamma}(\alpha, \beta)$ and $Y$ is distributed as $\mathrm{Gamma}(\alpha', \beta)$, then $X+Y$ is distributed as $\mathrm{Gamma}(\alpha + \alpha', \beta)$.

The cone measure on the surface $\partial K$ of a centrally symmetric convex body $K$ in $\RR^{\dimension}$ \cite{barthe2005probabilistic, schechtman1990volume, rachev1991approximate, MR1396997} is defined by
\[
\mu_{K}(A) = \frac{\vol(\{ta \suchthat a \in A,\ 0 \leq t \leq 1\})}{\vol(K)}.
\] 
It is easy to see that $\mu_{B_p^{\dimension}}$ is uniform on $\partial B_p^{\dimension}$ for $p \in \{1, 2, \infty\}$.

From \cite{schechtman1990volume} and \cite{rachev1991approximate} we have the following representation of the cone measure on $\partial B_p^n$:
\begin{theorem}\label{thm:cone}
Let $G_1, G_2, \dotsc, G_n$ be iid random variables with density proportional to $\exp(-\abs{t}^p)$. Then the random vector $X = G/\norm{G}_p$ is independent of $\norm{G}_p$. Moreover, $X$ is distributed according to $\mu_{B_p^{\dimension}}$.
\end{theorem}

From \cite{barthe2005probabilistic}, we also have the following variation, a representation of the uniform distribution in $B_p^n$:
\begin{theorem} \label{theorem:fullnaor} 
Let $G = (G_1, \dotsc, G_{\dimension})$ be iid random variables with density proportional to $\exp(-|t|^p)$. Let $Z$ be a random variable distributed as $\expdist{1}$, independent of $G$. Then the random vector
\[
V = \frac{G}{\bigl( \sum_{i=1}^{\dimension} \abs{G_i}^p + Z\bigr)^{1/p}}
\]
is uniformly distributed in $B_p^n$.
\end{theorem} 

See e.g. \cite[Section 20]{MR1324786} for the change of variable formula in probability.
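As an aside, Theorem \ref{theorem:fullnaor} immediately yields a sampler for the uniform distribution in $B_p^{\dimension}$. The following short Python sketch (included only as an illustration; it is not used in our proofs) draws $\abs{G_i}^p$ as $\mathrm{Gamma}(1/p,1)$ variables, which gives $G_i$ the density proportional to $\exp(-\abs{t}^p)$ after attaching random signs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_lp_ball(n, p, size=1):
    # G_i iid with density prop. to exp(-|t|^p): draw |G_i|^p
    # as Gamma(1/p, 1), take p-th roots, attach random signs.
    g = rng.gamma(1.0 / p, 1.0, size=(size, n)) ** (1.0 / p)
    g *= rng.choice([-1.0, 1.0], size=(size, n))
    # Z ~ Exp(1), independent of G, as in the theorem above.
    z = rng.exponential(1.0, size=(size, 1))
    denom = (np.sum(np.abs(g) ** p, axis=1, keepdims=True)
             + z) ** (1.0 / p)
    return g / denom
\end{verbatim}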
\section{Computing the moments of a simplex}\label{sec:moment}
The $k$-th moment $m_k(u)$ over $\Delta^\dim$ is the function
\[
u \mapsto \mathbb{E}_{X \in \Delta^\dim}((u\cdot X)^k).
\]
In this section we present a formula for the moment over $\Delta^\dim$. Similar, more general formulas appear in \cite{LasserreA}.
We will use the following result from \cite{GrundmannM78} for $\alpha_i\geq 0$:
\[
\int_{\Omega^{\dim+1}} x_0^{\alpha_0} \dotsm x_\dim^{\alpha_\dim} dx = \frac{\alpha_0! \dotsm \alpha_\dim!}{(\dim+1+ \sum_i \alpha_i)!}.
\]

From the above we can easily derive a formula for integration over $\Delta^\dim$:
$$
\int_{\Delta^\dim} x_0^{\alpha_0} \dotsm x_\dim^{\alpha_\dim} dx = \sqrt{n+1} \cdot \frac{\alpha_0! \dotsm \alpha_\dim!}{(\dim + \sum_i \alpha_i)!}.
$$

Now
\begin{align*}
\int_{\Delta^\dim} &(x_0 u_0 + \ldots + x_\dim u_\dim)^k dx \\
&= \sum_{k_0 + \dotsb + k_\dim = k} \binom{k}{k_0, \ldots, k_\dim} u_0^{k_0} \ldots u_\dim^{k_\dim} \int_{\Delta^\dim} x_0^{k_0} \ldots x_\dim^{k_\dim} dx \\
&= \sum_{k_0 + \dotsb + k_\dim = k} \binom{k}{k_0, \ldots, k_\dim} u_0^{k_0} \ldots u_\dim^{k_\dim}
\frac{ \sqrt{\dim+1} \cdot k_0! \ldots k_\dim!}{(\dim + \sum_i k_i)!} \\
&= \frac{k! \sqrt{\dim+1}}{(\dim+k)!} \sum_{k_0 + \dotsb + k_\dim = k} u_0^{k_0} \ldots u_\dim^{k_\dim} \\
&= \frac{k! \sqrt{\dim+1}}{(\dim+k)!} \; h_k(u).
\end{align*}

The variant of Newton's identities for the complete homogeneous symmetric polynomials gives the following relations, which can also be verified easily by direct computation:
$$ 3 h_3(u) = h_2(u) p_1(u) + h_1(u) p_2(u) + p_3(u),$$
$$ 2 h_2(u) = h_1(u) p_1(u) + p_2(u) = p_1(u)^2 + p_2(u).$$

Dividing the above integral by the volume of the standard simplex, $|\Delta_n|=\sqrt{\dim+1}/\dim!$, gives the moment:
\begin{eqnarray*}
m_3(u) & = & \frac{3! \sqrt{\dim+1}}{(\dim+3)!} h_3(u) / |\Delta_n| \\
 & = & \frac{2 (h_2(u) p_1(u) + h_1(u) p_2(u) + p_3(u))}{(\dim+1)(\dim+2)(\dim+3)} \\
 & = & \frac{p_1(u)^3 + 3 p_1(u) p_2(u) + 2 p_3(u)}{(\dim+1)(\dim+2)(\dim+3)} .
\end{eqnarray*}
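The closed form for $m_3$ can be checked numerically. A uniformly random point of $\Delta^\dim$ is $\mathrm{Dirichlet}(1, \dotsc, 1)$-distributed, so the following small Python sketch (an illustration only) compares the formula above with a Monte Carlo estimate:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def m3_closed_form(u):
    n = len(u) - 1
    p1, p2, p3 = u.sum(), (u**2).sum(), (u**3).sum()
    return (p1**3 + 3*p1*p2 + 2*p3) / ((n+1)*(n+2)*(n+3))

n = 4
u = rng.standard_normal(n + 1)
x = rng.dirichlet(np.ones(n + 1), size=200_000)  # uniform on simplex
print(m3_closed_form(u), np.mean((x @ u) ** 3))  # should agree
\end{verbatim}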
\section{Subroutine for finding the vertices of a rotated standard simplex} \label{sec:standard}
In this section we solve the following simpler problem: Suppose we have $\text{poly}(\dim)$ samples from a rotated copy $S$ of the standard simplex, where the rotation is such that it leaves $\ones$ invariant. The problem is to approximately estimate the vertices of the rotated simplex from the samples.

We will analyze our algorithm in the coordinate system in which the input simplex is the standard simplex. This is only for convenience in the analysis, and the algorithm itself does not know this coordinate system.

As we noted in the introduction, our algorithm is inspired by the algorithm of \noname{Nguyen and Regev~}\cite{NguyenR09} for the related problem of learning hypercubes and also by the FastICA algorithm in \cite{Hyvarinen99}. New ideas are needed for our algorithm for learning simplices; in particular, our update rule is different. With the right update rule in hand, the analysis turns out to be quite similar to the one in \cite{NguyenR09}.

We want to find local maxima of the sample third moment. A natural approach to do this would be to use gradient descent or Newton's method (this was done in \cite{FJK}). Our algorithm, which only uses first order information, can be thought of as a fixed point algorithm leading to a particularly simple analysis and fast convergence. Before stating our algorithm we describe the update rule we use.

We will use the abbreviation $C_\dim = (\dim+1)(\dim+2)(\dim+3)/6$. Then, from the expression for $m_3(u)$ we get
\[
\nabla m_3(u) = \frac{1}{6C_\dim} \left(3 p_1(u)^2 \ones + 3 p_2(u) \ones + 6 p_1(u) u + 6 u^{(2)}\right).
\]
Solving for $u^{(2)}$ we get
\begin{align}
u^{(2)} &= C_\dim \nabla m_3(u) - \frac{1}{2} p_1(u)^2 \ones - \frac{1}{2} p_2(u) \ones - p_1(u) u \nonumber \\
 &= C_\dim \nabla m_3(u) - \frac{1}{2} (u \cdot \ones)^2 \ones - \frac{1}{2} (u \cdot u) \ones - (u \cdot \ones) u.\label{equ:squaring}
\end{align}

While the above expressions are in the coordinate system where the input simplex is the canonical simplex, the important point is that all terms in the last expression can be computed in any coordinate system that is obtained by a rotation leaving $\ones$ invariant. Thus, we can compute $u^{(2)}$ independently of what coordinate system we are working in. This immediately gives us the algorithm below.
We denote by $\hat{m}_3(u)$ the sample third moment, i.e., $\hat{m}_3(u)=\frac{1}{t}\sum_{i=1}^{t}(u \cdot r_i)^3$ for $t$ samples.
This is a polynomial in $u$, and the gradient is computed in the obvious way. Moreover, the gradient of the sample moment is clearly an unbiased estimator of the gradient of the moment; a bound on the deviation is given in the analysis (Lemma \ref{lem:gradient}).
For each evaluation of the gradient of the sample moment, we use a fresh sample.

It may seem a bit alarming that the fixed point-like iteration is squaring the coordinates of $u$, leading to extremely fast growth (see Equation \ref{equ:squaring} and Subroutine 1). But, as in other algorithms having quadratic convergence, like certain versions of Newton's method, the convergence is very fast and the number of iterations is small. We show below that it is $O(\log(n/\delta))$, leading to a growth of $u$ that is polynomial in $n$ and $1/\delta$. The boosting argument described in the introduction makes the final overall dependence on $\delta$ only linear in $\log (1/\delta)$.

We state the following subroutine for $\mathbb{R}^n$ instead of $\mathbb{R}^{n+1}$ (thus it is learning a rotated copy of $\Delta^{n-1}$ instead of $\Delta^n$). This is for notational convenience, so that we work with $n$ instead of $n+1$.

\begin{subroutine}
\caption{Find one vertex of a rotation of the standard simplex $\Delta^{n-1}$ via a fixed point iteration-like algorithm}\label{alg:vertex}
\begin{algorithmic}
\State Input: Samples from a rotated copy of the $n$-dimensional standard simplex (for a rotation that leaves $\ones$ invariant).
\State Output: An approximation to a uniformly random vertex of the input simplex.
\vspace{.2em}
\hrule
\vspace{.2em}
\State Pick $u(1) \in S^{\dim-1}$, uniformly at random.
\myFor{$i=1$ to $r$}
\begin{align*}
u(i+1) := &C_{\dim-1} \nabla \hat{m}_3(u(i)) - \frac{1}{2} (u(i) \cdot \ones)^2 \ones \optionalbreak
- \frac{1}{2} (u(i) \cdot u(i)) \ones - (u(i) \cdot \ones) u(i).
\end{align*}
\State Normalize $u(i+1)$ by dividing by $\norm{u(i+1)}_2$.
\myEndFor
\State Output $u(r+1)$.
\end{algorithmic}
\end{subroutine}
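For illustration, Subroutine \ref{alg:vertex} admits a direct transcription into Python. The sketch below assumes the samples are given as the rows of an array and, unlike the subroutine (which uses a fresh sample for each gradient evaluation), reuses a single sample set for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def find_vertex(samples, r):
    # samples: (t, n) array of points from the rotated simplex.
    t, n = samples.shape
    C = n * (n + 1) * (n + 2) / 6.0   # C_{n-1}
    ones = np.ones(n)
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    for _ in range(r):
        # Gradient of the sample third moment:
        # (3/t) * sum_i (u . r_i)^2 r_i
        grad = 3.0 / t * (samples.T @ (samples @ u) ** 2)
        u = (C * grad - 0.5 * (u @ ones) ** 2 * ones
             - 0.5 * (u @ u) * ones - (u @ ones) * u)
        u /= np.linalg.norm(u)
    return u
\end{verbatim}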
\begin{lemma}\label{lem:gradient}
Let $c>0$ be a constant, $\dim > 20$, and $0<\delta<1$. Suppose that Subroutine 1 uses a sample of size $t = 2^{17}\dim^{2c+22}(\frac{1}{\delta})^2\ln\frac{2\dim^5 r}{\delta}$ for each evaluation of the gradient and runs for $r =\log \frac{4(c+3) n^2\ln{n}}{\delta}$ iterations. Then with probability at least $1-\delta$, Subroutine 1 outputs a vector within distance $1/n^c$ from a vertex of the input simplex. With respect to the process of picking a sample and running the algorithm, each vertex is equally likely to be the nearest.
\end{lemma}

Note that if we condition on the sample, different vertices are not equally likely over the randomness of the algorithm. That is, if we try to find all vertices by running the algorithm multiple times on a fixed sample, different vertices will be found with different likelihoods.
\begin{proof}
Our analysis has the same outline as that of \noname{Nguyen and Regev~}\cite{NguyenR09}. This is because the iteration that we get is the same as that of \cite{NguyenR09}, except that cubing is replaced by squaring (see below); however, some details in our proof are different. In the proof below, several of the inequalities are quite loose and are so chosen to make the computations simpler.

We first prove the lemma assuming that the gradient computations are exact and then show how to handle samples. We will carry out the analysis in the coordinate system where the given simplex is the standard simplex. This is only for the purpose of the analysis, and this coordinate system is not known to the algorithm.
Clearly, $u(i+1) = (u(i)^2_1, \ldots, u(i)_{\dim}^2)$.
It follows that $$u(i+1) = (u(1)_1^{2^i}, \ldots, u(1)_{\dim}^{2^i}).$$
Now, since we choose $u(1)$ randomly, with probability at least $(1-(n^2-n)\delta')$ one of the coordinates of $u(1)$ is greater than all the other coordinates in absolute value by a factor of at least $(1+ \delta')$, where $0 < \delta' < 1$.
(A similar argument is made in \cite{NguyenR09} with different parameters. We briefly indicate the proof for our case:
The probability that the event in question does not happen is less than the probability that there are two coordinates $u(1)_a$ and $u(1)_b$ such that their absolute values are within a factor $1+\delta'$ of each other, i.e.
$1/(1+\delta') \leq |u(1)_a|/|u(1)_b| < 1+\delta'$. The probability that this event happens for given $a, b$ can be seen as the Gaussian area of the four sectors (corresponding to the four choices of signs of $u(1)_a, u(1)_b$) in the plane, each with angle less than $2\delta'$. By symmetry, the Gaussian volume of these sectors is $2\delta'/(\pi/2) < 2\delta'$.
The probability that such a pair $(a,b)$ exists is less than $2 \binom{n}{2} \delta'$.)
Assuming this happens, after $r$ iterations the ratio between the largest coordinate (in absolute value) and the absolute value of any other coordinate is at least $(1+\delta')^{2^r}$.
Thus, one of the coordinates is very close to $1$ and the others are very close to $0$, and so $u(r+1)$ is very close to a vertex of the input simplex.

Now we drop the assumption that the gradient is known exactly. For each evaluation of the gradient we use a fresh subset of samples of $t$ points. Here $t$ is chosen so that each evaluation of the gradient is within $\ell_2$-distance $1/\dim^{c_1}$ from its true value with probability at least $1-\delta''$, where $c_1$ will be set at the end of the proof.
An application of the Chernoff bound yields that we can take $t = 200\dim^{2c_1+4}\ln\frac{2\dim^3}{\delta''}$; we omit the details.
Thus all the $r$ evaluations of the gradient are\nwithin distance $1\/\\dim^{c_1}$ from their true values with probability at least $1-r\\delta''$.\n\nWe assumed that our starting vector $u(1)$ has a coordinate greater than every other coordinate by a factor\nof\n$(1+\\delta')$ in absolute value; let us assume without loss of generality that this is the first coordinate.\nHence $|u(1)_1| \\geq 1\/\\sqrt{n}$.\nWhen expressing $u^{(2)}$ in terms of the gradient, the gradient gets multiplied by $C_{n-1} < n^3$ (we are assuming $n>20$).\nKeeping this in mind and letting $c_2 = c_1-3$, we get for $j \\neq 1$\n\\begin{align*}\n\\frac{|u(i+1)_1|}{|u(i+1)_j|} \\geq \\frac{u(i)_1^2-1\/n^{c_2}}{u(i)_j^2+1\/n^{c_2}} \\geq \\frac{u(i)_1^2 (1-n^{-(c_2-1)})}{u(i)_j^2+1\/n^{c_2}}.\n\\end{align*}\n\nIf $u(i)_j^2 > 1\/\\dim^{c_2-c_3}$, where $1\\leq c_3 \\leq c_2 -2$ will be determined later, then we get\n\\begin{align} \\label{eq:if}\n|u(i+1)_1|\/|u(i+1)_j|\n&> \\frac{1-1\/n^{c_2-1}}{1+1\/n^{c_3}} \\cdot \\left(\\frac{u(i)_1}{u(i)_j}\\right)^2 \\nonumber\\\\\n&> (1-1\/n^{c_3})^2 \\left(\\frac{u(i)_1}{u(i)_j}\\right)^2.\n\\end{align}\n\nOtherwise,\n\\begin{align*}\n|u(i+1)_1|\/|u(i+1)_j|\n&> \\frac{1\/n-1\/n^{c_2}}{1\/n^{c_2-c_3}+1\/n^{c_2}} \\\\\n&> \\left(1-\\frac{1}{n^{c_3}}\\right)^2 \\cdot n^{c_2-c_3 -1} \\\\\n&> \\frac{1}{2} n^{c_2-c_3-1},\n\\end{align*}\nwhere we used $c_3 \\geq 1$ and $n>20$ in the last inequality.\n\nWe choose $c_3$ so that\n\\begin{align}\\label{eq:c3}\n\\left(1-\\frac{1}{n^{c_3}}\\right)^2(1+\\delta') > (1+\\delta'\/2).\n\\end{align}\nFor this, $\\delta' \\geq 32\/n^{c_3}$, or equivalently $c_3 \\geq (\\ln{(32\/\\delta')})\/\\ln{n}$, suffices.\n\nFor $c_3$ satisfying \\eqref{eq:c3} we have $(1-\\frac{1}{n^{c_3}})^2(1+\\delta')^2 > (1+\\delta')$.\nIt then follows from \\eqref{eq:if} that\nthe first coordinate remains the largest in absolute value, by a factor of at least $(1+\\delta')$, after each iteration.\nAlso, once we have $|u(i)_1|\/|u(i)_j| > \\frac{1}{2} n^{c_2-c_3-1}$, we\nhave $|u(i')_1|\/|u(i')_j| > \\frac{1}{2} n^{c_2-c_3-1}$ for all $i'>i$.\n\nInequality \\eqref{eq:if} gives that after $r$ iterations we have\n\\begin{align*}\n\\frac{|u(r+1)_1|}{|u(r+1)_j|}\n&> (1-1\/n^{c_3})^{2+2^2+\\ldots+2^r} \\left(\\frac{u(1)_1}{u(1)_j}\\right)^{2^r} \\\\\n&\\geq (1-1\/n^{c_3})^{2^{r+1}-2} (1+\\delta')^{2^r}.\n\\end{align*}\n\nNow if $r$ is such that $(1-1\/n^{c_3})^{2^{r+1}-2} (1+\\delta')^{2^r} > \\frac{1}{2} n^{c_2-c_3-1}$, we will be guaranteed that\n$|u(r+1)_1|\/|u(r+1)_j| > \\frac{1}{2} n^{c_2-c_3-1}$. This condition is satisfied if we have\n$(1-1\/n^{c_3})^{2^{r+1}} (1+\\delta')^{2^r} > \\frac{1}{2} n^{c_2-c_3-1}$, or equivalently\n$((1-1\/n^{c_3})^2 (1+\\delta'))^{2^r} \\geq \\frac{1}{2} n^{c_2-c_3-1}$.\nNow using \\eqref{eq:c3} it suffices to choose $r$ so that\n$(1+\\delta'\/2)^{2^r} \\geq \\frac{1}{2} n^{c_2-c_3-1}$.
Thus we can take $r = \\log(4(c_2-c_3)(\\ln{n})\/\\delta')$.\n\nHence we get $|u(r+1)_1|\/|u(r+1)_j| > \\frac{1}{2} n^{c_2-c_3-1}$.\nIt follows that for\n$u(r+1)$, the $\\ell_2$-distance from the vertex $(1, 0, \\ldots, 0)$ is at most $8\/n^{c_2-c_3-2} < 1\/n^{c_2-c_3-3}$ for $\\dim > 20$; we omit\nthe easy details.\n\nNow we set our parameters: $c_3 = 1+(\\ln(32\/\\delta')\/\\ln{n})$ and $c_2 - c_3 - 3 = c$; the choice\n$c_1 = c_2 + 3 = 7 + c + \\ln(32\/\\delta')\/\\ln{n}$ satisfies all the constraints we imposed on $c_1, c_2, c_3$.\nChoosing $\\delta'' = \\delta'\/r$, we get that the procedure succeeds with probability at least\n$1-(\\dim^2-\\dim)\\delta' - r\\delta'' > 1-\\dim^2\\delta'$. Now setting $\\delta'=\\delta\/n^2$ gives the overall probability of error $\\delta$,\nand the number of samples and iterations as claimed in the lemma.\n\\end{proof}\n\n\n\n\\section{Learning simplices} \\label{sec:general}\n\nIn this section we give our algorithm for learning general simplices, which uses the subroutine from the previous section.\nThe learning algorithm uses an affine map $T:\\RR^{\\dim} \\to \\RR^{\\dim+1}$ that maps some isotropic simplex to the standard simplex.\nWe describe now a way of constructing such a map: Let $A$ be a matrix having as columns an orthonormal basis of $\\ones^\\perp$ in $\\RR^{\\dim+1}$.\nTo compute one such $A$, one can start with the $(n+1)$-by-$(n+1)$ matrix $B$ that has ones on the diagonal and in the first column, and zeroes everywhere else.\nLet $QR=B$ be a QR-decomposition of $B$. By definition, the first column of $Q$ is parallel to $\\ones$ and the rest of the columns span $\\ones^\\perp$.\nGiven this, let $A$ be the matrix formed by all columns of $Q$ except the first.\nWe have that the set $\\{A^T e_i\\}$ is the set of vertices of a regular $n$-simplex.\nEach vertex is at distance\n\\[\n\\sqrt{\\left(1-\\frac{1}{n+1}\\right)^2 + \\frac{n}{(n+1)^2}} = \\sqrt{\\frac{n}{n+1}}\n\\]\nfrom the origin, while an isotropic simplex has vertices at distance $\\sqrt{n(n+2)}$ from the origin. So an affine transformation that maps an isotropic simplex in $\\RR^\\dim$ to the standard simplex in $\\RR^{\\dim+1}$ is $T(x) = \\frac{1}{\\sqrt{(n+1)(n+2)}} A x + \\frac{1}{n+1} \\ones_{\\dim+1}$.
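\nThis construction is immediate to implement. A minimal Python\/NumPy sketch of the construction of $T$ (the function name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef simplex_affine_map(n):\n    # B has ones on the diagonal and in the first column\n    B = np.eye(n + 1)\n    B[:, 0] = 1.0\n    Q, _ = np.linalg.qr(B)   # first column of Q is parallel to ones\n    A = Q[:, 1:]             # columns: an orthonormal basis of ones-perp\n    T = lambda x: A @ x \/ np.sqrt((n + 1) * (n + 2)) + np.ones(n + 1) \/ (n + 1)\n    return A, T\n\\end{verbatim}\n\n\\begin{algorithm}\n\\caption{Learning a simplex.}\\label{alg:simplex}\n\\begin{algorithmic}\n\\State Input: Error parameter $\\eps>0$. Probability of failure parameter $\\delta>0$.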
Oracle access to random points from some $n$-dimensional simplex $S_{INPUT}$.\n\n\\State Output: $V =\\{v(1), \\dotsc, v(\\dim+1)\\} \\subseteq \\RR^{\\dim}$ (approximations to the vertices of the simplex).\n\\vspace{.2em}\n\\hrule\n\\vspace{.2em}\n\\State Estimate the mean and covariance using $t_1=\\poly(n,1\/\\eps, 1\/\\delta)$ samples $p(1), \\dotsc, p(t_1)$:\n\\[\n\\mu = \\frac{1}{t_1} \\sum_i p(i),\n\\]\n\\[\n\\Sigma = \\frac{1}{t_1} \\sum_i (p(i)-\\mu) (p(i)-\\mu)^T.\n\\]\n\\State Compute a matrix $B$ so that $\\Sigma = BB^T$ (say, via a Cholesky decomposition).\n\n\\State Let $U = \\emptyset$.\n\\myFor{$i=1$ to $m$ (with $m = \\poly (n, \\log 1\/\\delta)$)}\n\n\\State Get $t_3=\\poly(n, 1\/\\eps, \\log 1\/\\delta)$ samples $r(1),\\dotsc, r(t_3)$ and use $\\mu, B$ to map them to samples $s(i)$ from a nearly-isotropic simplex: $s(i) = B^{-1}(r(i)-\\mu)$.\n\n\\State Embed the resulting samples in $\\RR^{\\dim+1}$ as a sample from an approximately rotated standard simplex:\nLet $l(i) = T(s(i))$.\n\n\\State Invoke Subroutine \\ref{alg:vertex} with samples $l(1), \\dotsc, l(t_3)$ to get $u \\in \\RR^{\\dim+1}$.\n\\State Let $\\tilde u$ be the nearest point to $u$ in the affine hyperplane $\\{x \\suchthat x \\cdot \\ones = 1\\}$. If $\\tilde u$ is not within $1\/\\sqrt{2}$ of a point in $U$, add $\\tilde u$ to $U$. (Here $1\/\\sqrt{2}$ is half of the edge length of the standard simplex.)\n\\myEndFor\n\\State Let\n\\begin{align*}\nV &= B T^{-1}(U) + \\mu \\optionalbreak\n= \\sqrt{(n+1)(n+2)} B A^T\\left(U-\\frac{1}{n+1}\\ones \\right) + \\mu.\n\\end{align*}\n\\end{algorithmic}\n\\end{algorithm}\n\nTo simplify the analysis, we pick a new sample $r(1), \\dotsc, r(t_3)$ to find every vertex, as this makes every vertex equally likely to be found when given a sample from an isotropic simplex. (The core of the analysis is done for an isotropic simplex; this is enough, as the algorithm's first step is to find an affine transformation that puts the input simplex in approximately isotropic position. The fact that this approximation is close in total variation distance implies that it is enough to analyze the algorithm for the case of exact isotropic position; the analysis carries over to the approximate case with a small loss in the probability of success. See the proof below for the details.) A practical implementation may prefer to select one such sample outside of the for loop, and find all the vertices with just that sample---an analysis of this version would involve bounding the probability that each vertex is found (given the sample, over the choice of the starting point of gradient descent) and a variation of the coupon collector's problem with coupons that are not equally likely.
\\begin{proof}[\\myproof of Theorem~\\ref{thm:main}]\nAs a function of the input simplex, the distribution of the output of the algorithm is equivariant under invertible affine transformations.\nNamely, if we apply an affine transformation to the input simplex, the distribution of the output is equally transformed.\\footnote{To see this: the equivariance of the algorithm as a map between distributions is implied by the equivariance of the algorithm on any given input sample. Now, given the input sample, if we apply an affine transformation to it, this transformation is undone except possibly for a rotation by the step $s(i) = B^{-1}(r(i)-\\mu)$. A rotation may remain because of the ambiguity in the characterization of $B$. But the steps of the algorithm that follow the definition of $s(i)$ are equivariant under rotation, and the ambiguous rotation will be removed at the end when $B$ is applied again in the last step.}\nThe notion of error, total variation distance, is also invariant under invertible affine transformations.\nTherefore, it is enough to analyze the algorithm when the input simplex is in isotropic position.\nIn this case $\\norm{p(i)} \\leq n+1$ (see Section \\ref{sec:preliminaries}) and we can set $t_1 \\leq \\poly(n,1\/\\eps', \\log(1\/\\delta))$ so that $\\norm{\\mu} \\leq \\eps'$ with probability at least $1-\\delta\/10$ (by an easy application of Chernoff's bound), for some $\\eps'$ to be fixed later.\nSimilarly, using results from\n\\cite[Theorem 4.1]{MR2601042}, a choice of $t_1 \\leq n {\\eps'}^{-2} \\polylog(1\/\\eps') \\polylog(1\/\\delta)$ implies that the empirical second moment matrix \\[\\bar \\Sigma = \\frac{1}{t_1} \\sum_i p(i) p(i)^T\\] satisfies $\\norm{\\bar \\Sigma - I} \\leq \\eps'$ with probability at least $1-\\delta\/10$. We have $\\Sigma = \\bar \\Sigma - \\mu \\mu^T$, and this implies $\\norm{\\Sigma - I} \\leq \\norm{\\bar \\Sigma - I} + \\norm{\\mu \\mu^T} \\leq 2\\eps'$.\nNow, $s(1), \\dotsc, s(t_3)$ is an iid sample from a simplex $S'=B^{-1}(S_{INPUT}-\\mu)$. Simplex $S'$ is close in total variation distance to some isotropic simplex\\footnote{The isotropic simplex $S_{ISO}$ will typically be far from the (isotropic) input simplex, because of the ambiguity up to orthogonal transformations in the characterization of $B$.} $S_{ISO}$. More precisely, Lemma~\\ref{lem:tv} below shows that\n\\begin{equation}\\label{equ:tv}\nd_{TV}(S', S_{ISO}) \\leq 12 \\dim \\eps',\n\\end{equation}\nwith probability at least $1-\\delta\/5$.\n\nAssume for a moment that $s(1), \\dotsc, s(t_3)$ are from $S_{ISO}$. The analysis of Subroutine \\ref{alg:vertex} (the fixed point-like iteration) given in Lemma~\\ref{lem:gradient} would then guarantee the following: successive invocations of Subroutine~\\ref{alg:vertex} find approximations to vertices of $T(S_{ISO})$ within Euclidean distance $\\eps''$, for some $\\eps''$ to be determined later and $t_3 = \\poly(\\dim,1\/\\eps'', \\log 1\/\\delta)$. We ask for each invocation to succeed with probability at least $1-\\delta\/(20m)$ with $m = n (\\log n + \\log 20\/\\delta)$. Note that each vertex is equally likely to be found. The choice of $m$ is such that, if all $m$ invocations succeed (which happens with probability at least $1-\\delta\/20$), then the analysis of the coupon collector's problem, Lemma~\\ref{lem:coupons}, implies that we fail to find a vertex with probability at most $\\delta\/20$.
Overall, we find all vertices with probability at least $1-\\delta\/10$.\n\nBut in reality samples $s(1), \\dotsc, s(t_3)$ are from $S'$, which is only \\emph{close} to $S_{ISO}$. The estimate from \\eqref{equ:tv} with appropriate $\\eps' = \\poly(1\/n, \\eps'', \\delta)$ gives\n\\[\nd_{TV}(S', S_{ISO}) \\leq \\frac{\\delta}{10} \\frac{1}{ t_3 m},\n\\]\nwhich implies that the total variation distance between the joint distribution of all $t_3 m$ samples used in the loop and the joint distribution of actual samples from the isotropic simplex $S_{ISO}$ is at most $\\delta\/10$, and this implies that the loop finds approximations to all vertices of $T(S_{ISO})$ when given samples from $S'$ with probability at least $1-\\delta\/5$. The points in $U$ are still within Euclidean distance $\n\\eps''$ of corresponding vertices of $T(S_{ISO})$.\n\nTo conclude, we turn our estimate of distances between estimated and true vertices into a total variation estimate, and map it back to the input simplex. Let $S''=\\conv T^{-1} U$. As $T$ maps an isotropic simplex to a standard simplex, we have that $\\sqrt{(n+1)(n+2)} T$ is an isometry, and therefore the vertices of $S''$ are within distance $\\eps''\/\\sqrt{(n+1)(n+2)}$ of the corresponding vertices of $S_{ISO}$. Thus, the corresponding support functions are uniformly within \\[\\eps''' = \\eps''\/\\sqrt{(n+1)(n+2)}\\] of each other on the unit sphere. This and the fact that $S_{ISO} \\supseteq B_n$ imply\n\\[\n(1 - \\eps''') S_{ISO}\\subseteq S'' \\subseteq (1 + \\eps''')S_{ISO}.\n\\]\nThus, by Lemma \\ref{lem:bmtv}, $d_{TV}(S'', S_{ISO}) \\leq\n1 - (\\frac{1-\\eps'''}{1+\\eps'''})^\\dim \\leq 1-(1-\\eps''')^{2n}\\leq 2n \\eps''' \\leq 2\\eps''$\nand this implies that the total variation distance between the uniform distributions on $\\conv V$ and the input simplex is at most $2 \\eps''$. Over all random choices, this happens with probability at least $1-2\\delta\/5$. We set $\\eps'' = \\eps\/2$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:tv}\nLet $S_{INPUT}$ be an $\\dim$-dimensional isotropic simplex. Let $\\Sigma$ be an $\\dim$-by-$\\dim$ positive definite matrix such that $\\norm{\\Sigma - I} \\leq \\eps < 1\/2$. Let $\\mu$ be an $n$-dimensional vector such that $\\norm{\\mu} \\leq \\eps$. Let $B$ be an $n$-by-$n$ matrix such that $\\Sigma = BB^T$. Let $S$ be the simplex $B^{-1}(S_{INPUT}-\\mu)$. Then there exists an isotropic simplex $S_{ISO}$ such that $d_{TV}(S, S_{ISO}) \\leq 6 \\dim \\eps$.\n\\end{lemma}\n\\begin{proof}\nWe use an argument along the lines of the orthogonal Procrustes problem (nearest orthogonal matrix to $B^{-1}$, already in \\cite[Proof of Theorem 4]{NguyenR09}): Let $U D V^T$ be the singular value decomposition of $B^{-1}$. Let $R = U V^T$ be an orthogonal matrix (that approximates $B^{-1}$). Let $S_{ISO} = R S_{INPUT}$.\n\nWe have $S = UDV^T (S_{INPUT} - \\mu)$. Let $\\sigma_{min}$, $\\sigma_{max}$ be the minimum and maximum singular values of $D$, respectively. 
This implies:\n\\begin{align}\n\\sigma_{min} UV^T (S_{INPUT} - \\mu) &\\subseteq S \\subseteq \\sigma_{max}UV^T (S_{INPUT} - \\mu), \\nonumber \\\\\n\\sigma_{min} (S_{ISO} - R\\mu) &\\subseteq S \\subseteq \\sigma_{max}( S_{ISO} - R\\mu).\\label{equ:isotropy}\n\\end{align}\nAs $S_{ISO} \\supseteq B_n$, $\\norm{\\mu} \\leq 1$, $R$ is orthogonal and $S_{ISO}$ is convex, we have\n\\[\nS_{ISO}-R\\mu \\supseteq (1-\\norm{\\mu}) S_{ISO}.\n\\]\nAlso,\n\\begin{align*}\nS_{ISO} - R\\mu &\\subseteq S_{ISO} + \\norm{\\mu} B_n \\\\\n&\\subseteq S_{ISO} (1+ \\norm{\\mu}).\n\\end{align*}\nThis in \\eqref{equ:isotropy} gives\n\\[\n\\sigma_{min} (1-\\norm{\\mu}) S_{ISO} \\subseteq S \\subseteq \\sigma_{max}(1+\\norm{\\mu}) S_{ISO}.\n\\]\nThis and Lemma \\ref{lem:bmtv} imply\n\\[\nd_{TV}(S, S_{ISO}) \\leq 2\\left(1- \\left(\\frac{\\sigma_{min} (1-\\norm{\\mu}) }{ \\sigma_{max}(1+\\norm{\\mu})}\\right)^\\dim\\right).\n\\]\nThe estimate on $\\Sigma$ gives $\\sigma_{min} \\geq \\sqrt{1-\\eps}$, $\\sigma_{max} \\leq \\sqrt{1+\\eps}$. Thus\n\\begin{align*}\nd_{TV}(S, S_{ISO}) &\\leq 2\\left(1- \\left(\\frac{1-\\eps}{1+\\eps} \\right)^{3\\dim\/2}\\right) \\\\\n& \\leq 2\\left(1- \\left(1-\\eps\\right)^{3\\dim}\\right) \\\\\n& \\leq 6\\dim \\eps.\n\\end{align*}\n\\end{proof}\n\n\\section{The local and global maxima of the 3rd moment of the standard simplex and the isotropic simplex} \\label{sec:maxima}\n\n\nIn this section we study the structure of the set of local maxima of the third moment as a function of the direction (which happens to be essentially $u \\mapsto \\sum u_i^3$, as discussed in Section \\ref{sec:moment}). This is not necessary for our algorithmic result; however, it gives insight into the geometry of the third moment (the location of local maxima\/minima and stationary points) and suggests that more direct optimization algorithms, like gradient descent and Newton's method, will also work, although we will not prove that.\n\n\\begin{theorem}\\label{thm:maxima}\nLet $K \\subseteq \\RR^\\dim$ be an isotropic simplex. Let $X$ be random in $K$. Let $V = \\{x_i\\}_{i=1}^{\\dim+1} \\subseteq \\RR^\\dim$ be the set of normalized vertices of $K$. Then $V$ is a complete set of local maxima and a complete set of global maxima of $F:S^{\\dim-1} \\to \\RR$ given by $F(u) = \\e ((u \\cdot X)^3)$.\n\\end{theorem}\n\\noindent\\emph{Proof idea:} Embed the simplex in $\\RR^{\\dim+1}$. Show that the third moment is proportional to the complete homogeneous symmetric polynomial of degree 3, which for the relevant directions is proportional to the sum of cubes. To conclude, use first and second order optimality conditions to characterize the set of local maxima.\n\\begin{proof}\nConsider the standard simplex\n\\[\n\\Delta^n = \\conv \\{e_1, \\dotsc, e_{\\dim+1} \\} \\subseteq \\RR^{\\dim+1}\n \\]\nand identify it with $V$ via a linear map $A :\\RR^{\\dim+1} \\to \\RR^{\\dim}$ so that $A(\\Delta^n) = V$. Let $Y$ be random in $\\Delta^n$. Consider $G: S^{\\dim} \\to \\RR$ given by $G(v) = m_3(v) = \\e ((v \\cdot Y)^3)$. Let $U = \\{ v \\in \\RR^{\\dim+1} \\suchthat v \\cdot \\ones =0, \\norm{v}=1 \\}$ be the equivalent feasible set for the embedded problem.\nWe have $G(v) = cF(A v)$ for any $v \\in U$ and some constant $c > 0$ independent of $v$.\nTo get the theorem, it is enough to show that the local maxima of $G$ in $U$ are precisely the normalized versions of the projections of the canonical vectors onto the hyperplane orthogonal to $\\ones = (1, \\dotsc, 1)$.
According to Section \\ref{sec:moment}, for $v \\in U$ we have\n\\[\nG(v) \\propto p_3(v).\n\\]\nUsing a more convenient but equivalent constant, we want to enumerate the local maxima of the problem\n\\begin{equation}\\label{equ:opt}\n\\begin{aligned}\n\\max \\frac{1}{3} p_3(v)& \\\\\n\\text{s.t.} \\quad v \\cdot v &= 1 \\\\\nv \\cdot \\ones &= 0 \\\\\nv &\\in \\RR^{\\dim+1}.\n\\end{aligned}\n\\end{equation}\nThe Lagrangian function is\n\\[\nL(v, \\lambda_1, \\lambda_2) = \\frac{1}{3} \\sum_i v_i^3 - \\lambda_1 \\sum_i v_i - \\lambda_2\\frac{1}{2} \\left(\\biggl(\\sum_i v_i^2 \\biggr) -1 \\right).\n\\]\nThe first order condition is $\\nabla_v L = 0$, that is,\n\\begin{equation}\\label{equ:foc}\nv_i^2 = \\lambda_1 + \\lambda_2 v_i \\quad \\text{for $i= 1, \\dotsc, \\dim+1$.}\n\\end{equation}\nConsider this system of equations on $v$ for any fixed $\\lambda_1$, $\\lambda_2$. Let $f(x) = x^2$, $g(x)= \\lambda_1 + \\lambda_2 x$. The first order condition says $f(v_i) = g(v_i)$, where $f$ is convex and $g$ is affine. That is, the $v_i$s can take at most two different values. As our optimization problem \\eqref{equ:opt} is symmetric under permutation of the coordinates, we conclude that, after putting the coordinates of a point $v$ in non-increasing order, if $v$ is a local maximum of \\eqref{equ:opt}, then $v$ must be of the form\n\\[\nv = (a, \\dotsc, a, b, \\dotsc, b),\n\\]\nwhere $a>0>b$ and there are exactly $\\alpha$ $a$s and $\\beta$ $b$s, for $\\alpha, \\beta \\in \\{1, \\dotsc, \\dim\\}$.\n\nWe will now study the second order necessary condition (SONC) to eliminate from the list of candidates all vectors with $\\alpha >1$. It is easy to see that the surviving vectors are exactly the promised scaled projections of the canonical vectors. These vectors must all be local and global maxima: at least one of them must be a global maximum, as we are maximizing a continuous function over a compact set, and since all of them have the same objective value, all of them are local and global maxima.\n\nThe SONC at $v$ asks for the Hessian of the Lagrangian to be negative semidefinite when restricted to the tangent space to the constraint set at $v$ \\cite[Section 11.5]{MR2423726}.\nWe compute the Hessian (recall that $v^{(2)}$ is the vector of the squared coordinates of $v$):\n\\[\n\\nabla_v L = v^{(2)} - \\lambda_1 \\ones - \\lambda_2 v\n\\]\n\\[\n\\nabla^2_v L = 2 \\diag(v) - \\lambda_2 I\n\\]\nwhere $\\diag(v)$ is the $(\\dim + 1)$-by-$(\\dim+1)$ matrix having the entries of $v$ on the diagonal and 0 elsewhere.\n\nA vector in the tangent space is any $z \\in \\RR^{\\dim+1}$ such that $z \\cdot \\ones = 0$, $v \\cdot z = 0$, and definiteness of the Hessian is determined by the sign of $z^T \\nabla^2_v L z$ for any such $z$, where\n\\[\nz^T \\nabla^2_v L z = \\sum_{i=1}^{\\dim+1} z_i^2 (2 v_i - \\lambda_2).\n\\]\nSuppose $v$ is a critical point with $\\alpha \\geq 2$. To see that such a $v$ cannot be a local maximum, it is enough to show $2a > \\lambda_2$, as in that case we can take $z = (1, -1, 0,\\dotsc, 0)$ to make the second derivative of $L$ positive in the direction $z$.\n\nIn terms of $\\alpha, \\beta, a, b$, the constraints of \\eqref{equ:opt} are $\\alpha a + \\beta b = 0$, $\\alpha a^2 + \\beta b^2 = 1$, and this implies $a = \\sqrt{\\frac{\\beta}{\\alpha (\\dim+1) }}$, $b = - \\sqrt{\\frac{\\alpha}{\\beta (\\dim+1) }}$. The inner product between the first order condition \\eqref{equ:foc} and $v$ implies $\\lambda_2 = \\sum v_i^3 = \\alpha a^3 + \\beta b^3$.
It is convenient to consider the change of variable $\\gamma = \\alpha\/(\\dim+1)$, as now candidate critical points are parameterized by certain discrete values of $\\gamma$ in $(0,1)$. This gives $\\beta = (1-\\gamma)(\\dim+1)$, $ a = \\sqrt{(1-\\gamma)\/(\\gamma(\\dim+1))}$ and\n\\begin{align*}\n\\lambda_2 &= (\\dim+1)\\biggl[\\gamma \\left(\\frac{1-\\gamma}{\\gamma (\\dim+1)}\\right)^{3\/2} \\\\\n&\\qquad - (1-\\gamma)\\left(\\frac{\\gamma}{(1-\\gamma) (\\dim+1)}\\right)^{3\/2}\\biggr] \\\\\n&= \\frac{1}{\\sqrt{(\\dim+1) \\gamma (1-\\gamma)}} \\left[(1-\\gamma)^2 - \\gamma^2\\right] \\\\\n&= \\frac{1}{\\sqrt{(\\dim+1) \\gamma (1-\\gamma)}} [1- 2\\gamma].\n\\end{align*}\nThis implies\n\\begin{align*}\n2 a - \\lambda_2 &= \\frac{1}{\\sqrt{(\\dim+1) \\gamma (1-\\gamma)}} [2(1-\\gamma) -1 + 2\\gamma] \\\\\n &= \\frac{1}{\\sqrt{(\\dim+1) \\gamma (1-\\gamma)}}.\n\\end{align*}\nIn $(0,1)$, the function given by $\\gamma \\mapsto 2a-\\lambda_2 = \\frac{1}{\\sqrt{(\\dim+1) \\gamma (1-\\gamma)}}$ is convex and symmetric around $1\/2$, where it attains its global minimum value, $2\/ \\sqrt{\\dim+1}$, which is positive.\n\\end{proof}\n\n\\section{Probabilistic Results} \\label{sec:prob} \n\nIn this section we show the probabilistic results underlying the reductions from learning simplices and $\\ell_p^n$ balls to ICA. The results are Theorems \\ref{thm:simplexscaling} and \\ref{thm:lpscaling}. They each show a simple non-linear rescaling of the respective uniform distributions that gives a distribution with independent components (Definition \\ref{def:ic}).\n\nTheorem \\ref{thm:simplexscaling} below gives us, in a sense, a ``reversal'' of the representation of the cone measure on $\\partial B_p^n$ seen in Theorem \\ref{thm:cone}: given any random point in the standard simplex, we can apply a simple non-linear scaling and recover a distribution with independent components. \n\n\\begin{definition}\\label{def:ic}\nWe say that a random vector $X$ has \\emph{independent components} if it is an affine transformation of a random vector having independent coordinates.\n\\end{definition}\n\n\\begin{theorem}\\label{thm:simplexscaling}\nLet $X$ be a uniformly random vector in the $(\\dimension-1)$-dimensional standard simplex $\\Delta_{n-1}$. Let $T$ be a random scalar distributed as $\\mathrm{Gamma}(n, 1)$. Then the coordinates of $T X$ are iid as $\\expdist{1}$.\n\nMoreover, if $A:\\mathbb{R}^{\\dimension} \\rightarrow \\mathbb{R}^{\\dimension}$ is an invertible linear transformation, then the random vector $TA(X)$ has independent components.\n\\end{theorem}\n\\begin{proof}\nIn the case $p = 1$, Theorem \\ref{thm:cone} restricted to the positive orthant implies that for a random vector $G = (G_1, \\dotsc, G_{\\dimension})$ with iid $\\expdist{1}$ coordinates, the pair $( G\/\\norm{G}_1, \\norm{G}_1)$ has the same (joint) distribution as $(X,T)$. Given the measurable function $f(x,t) = xt$, $f(X,T)$ has the same distribution as $f( G\/\\norm{G}_1, \\norm{G}_1)$. That is, $XT$ and $G$ have the same distribution\\footnote{See \\cite[Theorem 1.1]{MR1456629} for a similar argument in this context.}.\n\nFor the second part, we know $T A(X) = A(TX)$ by linearity. By the previous argument the coordinates of $TX$ are independent. This implies that $A(TX)$ has independent components.\n\\end{proof}
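\nTheorem \\ref{thm:simplexscaling} is easy to check numerically. In the following Python\/NumPy sketch we generate the uniform point in the simplex by normalizing iid exponentials (the standard Dirichlet construction, which is the $p=1$ case of the representation used in the proof), rescale by an independent $\\mathrm{Gamma}(n,1)$ variable, and compare empirical moments with those of $\\expdist{1}$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, t = 4, 200000\nE = rng.exponential(1.0, size=(t, n))\nX = E \/ E.sum(axis=1, keepdims=True)   # uniform in the simplex\nT = rng.gamma(n, 1.0, size=(t, 1))     # independent Gamma(n,1) scalar\nZ = T * X\n# each coordinate of Z should be Exp(1): mean and variance close to 1\nprint(Z.mean(axis=0), Z.var(axis=0))\n\\end{verbatim}\n\nThe next lemma complements the main result in \\cite{barthe2005probabilistic}, Theorem 1 (Theorem \\ref{theorem:fullnaor} elsewhere here).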
They show a representation of the uniform distribution in $B_p^n$, but they do not state the independence that we need for our reduction to ICA.\n\\begin{lemma} \\label{lemma:independence}\nLet $p \\in [1, \\infty)$. Let $G=(G_1, \\dotsc, G_n)$ have iid coordinates with density proportional to $\\exp(-\\abs{t}^p)$. Let $W$ be a nonnegative random variable with distribution $\\expdist{1}$ and independent of $G$. Then the random vector\n\\[\n\\frac{G}{(\\norm{G}_p^p + W)^{1\/p}}\n\\]\nis independent of $(\\norm{G}_p^p + W)^{1\/p}$.\n\\end{lemma}\n\n\\begin{proofidea}\nWe aim to compute the joint density, showing that it is a product of individual densities. To avoid complication, we raise everything to the $p$th power, which eliminates extensive use of the chain rule involved in the change of variables. \n\\end{proofidea}\n\n\\begin{proof}\nIt is enough to show the claim conditioning on the orthant in which $G$ falls, and by symmetry it is enough to prove it for the positive orthant. Let $H$ be the random vector $(G_1^p, G_2^p, \\dotsc, G_n^p)$.\nSince raising (strictly) positive numbers to the $p$th power is injective, it suffices to show that the random vector \n\\[\nX = \\frac{H}{\\sum_{i=1}^n{H}_i + W}\n\\]\nis independent of the random variable $Y = \\sum_{i=1}^n{H}_i + W$. \n \nFirst, let $U$ be the interior of the support of $(X,Y)$, that is $U = \\{ x \\in \\RR^{n} : x_i > 0, \\sum_i x_i < 1 \\}\\times \\{y \\in \\RR :y>0\\}$, and consider $h: U \\rightarrow \\RR^{\\dimension}$ and $w: U \\rightarrow \\RR$ where\n\\[\nh(x,y) = x y\n\\] and \n\\[\nw(x,y) = y - \\sum_{i=1}^n h(x,y)_i = y - \\sum_{i=1}^n x_i \\cdot y = y\\left(1 - \\sum_{i=1}^n x_i\\right).\n\\]\nThe random vector $(H,W)$ has a density $f_{H,W}$ supported on $V = \\operatorname{int} \\RR^{n+1}_+$ and \n$$(x,y) \\mapsto (h(x,y), w(x,y))$$\nis one-to-one from $U$ onto $V$. \nLet $J(x,y)$ be the determinant of its Jacobian. This Jacobian is\n\\[\n\\begin{pmatrix}\ny & 0 & \\cdots & 0 & x_1 \\\\\n0 & y & \\cdots & 0 & x_2 \\\\\n\\vdots & & & & \\vdots \\\\\n0 & 0 & \\cdots & y & x_n \\\\\n-y & -y & \\cdots & -y & 1 - \\sum_{i=1}^n x_i \\\\\n\\end{pmatrix}\n\\]\nwhich, by adding each of the first $n$ rows to the last row, reduces to\n\\[\n\\begin{pmatrix}\ny & 0 & \\cdots & 0 & x_1 \\\\\n0 & y & \\cdots & 0 & x_2 \\\\\n\\vdots & & & & \\vdots \\\\\n0 & 0 & \\cdots & y & x_n \\\\\n0 & 0 & \\cdots & 0 & 1 \\\\\n\\end{pmatrix},\n\\]\nthe determinant of which is trivially $J(x,y) = y^n$. \n\nWe have that $J(x,y)$ is nonzero in $U$. Thus, $(X,Y)$ has density $f_{X,Y}$ supported on $U$ given by\n\\begin{align*}\nf_{X,Y}(x,y) &= f_{H,W}\\bigl(h(x,y),w(x,y)\\bigr) \\cdot \\abs{J(x,y)}.\n\\end{align*}\n\nIt is easy to see\\footnote{See for example \\cite[proof of Theorem 3]{barthe2005probabilistic}.} that each $H_i = G_i^p$ is distributed as $\\mathrm{Gamma}(1\/p, 1)$ and thus $\\sum_{i=1}^n H_i$ is distributed as $\\mathrm{Gamma}(n\/p,1)$ by the additivity of the Gamma distribution.
We then compute the joint density\n\\begin{align*}\nf_{X,Y}(x,y) &= f_{H,W}\\bigl(h(x,y),w(x,y)\\bigr) \\cdot y^n \\\\\n&= f_{H,W}\\Bigl(xy, y(1 - \\sum\\limits_{i=1}^n x_i)\\Bigr) \\cdot y^n.\n\\end{align*}\nSince $W$ is independent of $H$,\n\\begin{align*}\nf_{X,Y}(x,y) &= f_{W}\\bigg(y\\Big(1 - \\sum\\limits_{i=1}^n x_i\\Big)\\bigg) \\cdot y^n \\prod_{i=1}^n f_{H_i}(x_iy) \n\\end{align*}\nwhere\n\\begin{align*}\n\\prod_{i=1}^{\\dimension}\\Big(f_{H_i}(x_iy)\\Big) \\cdot f_{W}\\bigg(y\\Big(1 - \\sum\\limits_{i=1}^n x_i\\Big)\\bigg) \\cdot y^n \n&\\propto \\prod_{i=1}^n \\left[e^{-x_iy}(x_iy)^{\\frac{1}{p} - 1}\\right] \\exp\\left(-y(1-\\sum\\limits_{i=1}^n x_i)\\right) y^n \\\\\n&\\propto \\biggl(\\prod_{i=1}^n x_i^{\\frac{1}{p} - 1}\\biggr) y^{n\/p} e^{-y}.\n\\end{align*}\nThe joint density thus factors into a function of $x$ times a function of $y$, and the result follows.\n\\end{proof}\n\nWith this in mind, we show now our analog of Theorem \\ref{thm:simplexscaling} for $B_p^{\\dimension}$.\n\n\\begin{theorem}\\label{thm:lpscaling}\nLet $X$ be a uniformly random vector in $B_p^{\\dimension}$. Let $T$ be a random scalar distributed as $\\mathrm{Gamma}((n\/p)+1,1)$. Then the coordinates of $T^{1\/p} X$ are iid, each with density proportional to $\\exp(-\\abs{t}^p)$.\nMoreover, if $A: \\RR^{\\dimension} \\rightarrow \\RR^{\\dimension}$ is an invertible linear transformation, then the random vector given by $T^{1\/p}A(X)$ has independent components.\n\\end{theorem}\n\n\\begin{proof}\nLet $G = (G_1, \\dotsc, G_{\\dimension})$, where the $G_i$ are iid with density proportional to $\\exp(-\\abs{t}^p)$, so that each $|G_i|^p$ is distributed as $\\mathrm{Gamma}(1\/p, 1)$. Also, let $W$ be an independent random variable distributed as $\\expdist{1}$. Let $S = \\bigl(\\sum_{i=1}^n |G_i|^p + W \\bigr)^{1\/p}$. \n\nBy Lemma \\ref{lemma:independence} and Theorem \\ref{theorem:fullnaor} we know that $(G\/S,S)$ has the same joint distribution as $(X,T^{1\/p})$. Then for the measurable function $f(x,t) = xt$, we immediately have that $f(X,T^{1\/p})$ has the same distribution as $f(G\/S, S)$, and thus $XT^{1\/p}$ has the same distribution as $G$. \n\nFor the second part, since $T$ is a scalar, we have $T^{1\/p}A(X) = A(T^{1\/p}X)$. By the previous argument we have that the coordinates of $T^{1\/p}X$ are independent. Thus, $A(T^{1\/p}X)$ has independent components. \n\\end{proof}\n\n\nThis result shows that one can obtain a vector with independent components from a sample in a linearly transformed $\\ell_p$ ball.\nIn Section \\ref{sec:reduction} we show that they are related in such a way that one can recover the linear transformation from the independent components via ICA.\n\n\n\\section{Learning problems that reduce to ICA}\\label{sec:reduction}\nIndependent component analysis is a certain computational problem and an associated family of algorithms. Suppose that $X$ is a random $\\dimension$-dimensional vector whose coordinates are independently distributed. The coordinates' distributions are unknown and not necessarily identical. The ICA problem can be stated as follows: given samples from an affine transformation $Y=AX+b$ of $X$, estimate $A$ and $b$ (up to a certain intrinsic indeterminacy: permutation and scaling of the columns of $A$).
We will state more precisely below what is expected of an ICA algorithm.\n\n\nWe show randomized reductions from the following two natural statistical estimation problems to ICA:\n\n\\begin{problem}[simplex] \nGiven uniformly random points from an $n$-dimensional simplex, estimate the simplex.\n\\end{problem} \n\nThis is the same problem of learning a simplex as in the rest of the paper; we just restate it here for clarity.\n\nTo simplify the presentation for the second problem, we ignore the estimation of the mean of an affinely transformed distribution. That is, we assume that the $\\ell_p^n$ ball to be learned has only been \\emph{linearly} transformed.\n\n\\begin{problem}[linearly transformed $\\ell_p^n$ balls]\nGiven uniformly random points from a linear transformation of the $\\ell_p^n$-ball, estimate the linear transformation.\n\\end{problem}\n\nThese problems do not have an obvious independence structure. Nevertheless, known representations of the uniform measure in an $\\ell_p^n$ ball and the \\emph{cone measure} (defined in Section \\ref{sec:preliminaries}) on the surface of an $\\ell_p^n$ ball can be slightly extended to map a sample from those distributions into a sample with independent components by a non-linear scaling step.\nThe use of a non-linear scaling step to turn a distribution into one having independent components has been done before \\cite{MR2756189, DBLP:conf\/nips\/SinzB08}, but there it is applied \\emph{after} finding a transformation that makes the distribution axis-aligned. This alignment is attempted using ICA (or variations of PCA) on the original distribution \\cite{MR2756189, DBLP:conf\/nips\/SinzB08}, without independent components, and therefore the use of ICA there is somewhat heuristic. One of the contributions of our reduction is that the rescaling we apply is ``blind'', namely, it can be applied to the original distribution. In fact, the distribution does not even need to be isotropic (``whitened''). The distribution resulting from the reduction has independent components and therefore the use of ICA on it is well justified.\n\nThe reductions are given in Algorithms \\ref{alg:simplexreduction} and \\ref{alg:lpreduction}. To state the reductions, we denote by $ICA(s(1), s(2), \\ldots)$ the invocation of an ICA routine. \nIt takes samples $s(1), s(2), \\ldots$ of a random vector $Y=AX + \\mu$, where the coordinates of $X$ are independent, and returns an approximation to a square matrix $M$ such that $M(Y-\\e(Y))$ is isotropic and has independent coordinates. \nThe theory of ICA \\cite[Theorem 11]{comon1994independent} implies that if $X$ is isotropic and at most one coordinate is distributed as a Gaussian, then such an $M$ exists and it satisfies $M A = DP$, where $P$ is a permutation matrix and $D$ is a diagonal matrix with diagonal entries in $\\{-1, 1\\}$. We thus need the following definition to state our reduction: Let $c_{p,n} = (\\e_{X \\in B_p^n}(X_1^2))^{1\/2}$.
That is, the uniform distribution in $B_p^n\/c_{p,n}$ is isotropic.\n\nAs we do not state a full analysis of any particular ICA routine, we do not state explicit approximation guarantees.\n\n\\begin{algorithm}\n\\caption{Reduction from Problem 1 to ICA}\\label{alg:simplexreduction}\n\n\\begin{algorithmic}\n\\State Input: A uniformly random sample $p(1), \\dotsc, p(t)$ from an $n$-dimensional simplex $S$.\n\\State Output: Vectors $\\tilde v(1), \\dotsc, \\tilde v(n+1)$ such that their convex hull is close to $S$.\n\\vspace{.2em}\n\\hrule\n\\vspace{.2em}\n\\State Embed the sample in $\\RR^{n+1}$: Let $p'(i) = (p(i), 1)$.\n\n\\State For every $i = 1, \\dotsc, t$, generate a random scalar $T(i)$ distributed as $\\mathrm{Gamma}(n+1, 1)$. Let $q(i) = p'(i) T(i)$.\n\\State Invoke $ICA(q(1), \\dotsc, q(t))$ to obtain an approximately separating matrix $\\tilde M$.\n\\State Compute the inverse of $\\tilde M$ and multiply every column by the sign of its last entry to get a matrix $\\tilde A$.\n\\State Remove the last row of $\\tilde A$ and return the columns of the resulting matrix as $\\tilde v(1), \\dotsc, \\tilde v(n+1)$.\n\\end{algorithmic}\n\\end{algorithm}\nAlgorithm \\ref{alg:simplexreduction} works as follows: Let $X$ be an $(n+1)$-dimensional random vector with iid coordinates distributed as $\\expdist{1}$. Let $V$ be the matrix having columns $(v(i),1)$ for $i=1, \\dotsc, n+1$. Let $Y$ be random according to the distribution that results from the scaling in the algorithm. Theorem \\ref{thm:simplexscaling} implies that $Y$ and $VX$ have the same distribution. Also, $X-\\ones$ is isotropic, and $Y$ and $V(X-\\ones) + V \\ones$ have the same distribution. Thus, the discussion about ICA earlier in this section gives that the only separating matrices $M$ are such that $M V = D P$, where $P$ is a permutation matrix and $D$ is a diagonal matrix with diagonal entries in $\\{-1, 1\\}$. That is, $V P^T = M^{-1} D$. As the last row of $V$ is all ones, the sign change step in Algorithm \\ref{alg:simplexreduction} undoes the effect of $D$ and recovers the correct orientation.
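\nThe following Python sketch makes the reduction concrete. It uses scikit-learn's \\texttt{FastICA} as the ICA black box; this is an assumption of the sketch (any routine returning an approximately separating matrix would do), and the final normalization of the last row is a practical step not required by the analysis:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.decomposition import FastICA  # assumed ICA routine\n\ndef learn_simplex_via_ica(P, seed=0):\n    # P: t x n array of uniform samples from a simplex\n    t, n = P.shape\n    rng = np.random.default_rng(seed)\n    Q = np.hstack([P, np.ones((t, 1))])         # embed: p'(i) = (p(i), 1)\n    Q = Q * rng.gamma(n + 1, 1.0, size=(t, 1))  # scale by T(i)\n    ica = FastICA(n_components=n + 1, whiten='unit-variance').fit(Q)\n    A = np.linalg.inv(ica.components_)          # approximate mixing matrix\n    A = A * np.sign(A[-1, :])                   # fix signs using the last row\n    A = A \/ A[-1, :]                            # normalize (practical step)\n    return A[:-1, :]                            # columns: approximate vertices\n\\end{verbatim}\n\n\\begin{algorithm}\n\\caption{Reduction from Problem 2 to ICA}\\label{alg:lpreduction}\n\\begin{algorithmic}\n\\State Input: A uniformly random sample $p(1), \\dotsc, p(t)$ from $A(B_p^n)$ for a known parameter $p \\in [1, \\infty)$, where $A:\\RR^n \\to \\RR^n$ is an unknown invertible linear transformation.\n\\State Output: A matrix $\\tilde A$ such that $\\tilde A B_p^n$ is close to $A(B_p^n)$.\n\\vspace{.2em}\n\\hrule\n\\vspace{.2em}\n\n\\State For every $i = 1, \\dotsc, t$, generate a random scalar $T(i)$ distributed as $\\mathrm{Gamma}((n\/p)+1,1)$. Let $q(i) = p(i) T(i)^{1\/p}$.\n\\State Invoke $ICA(q(1), \\dotsc, q(t))$ to obtain an approximately separating matrix $\\tilde M$. \n\\State Output $\\tilde A = c_{p,n}^{-1} \\tilde M^{-1}$.\n\\end{algorithmic}\n\\end{algorithm}\n\nSimilarly, Algorithm \\ref{alg:lpreduction} works as follows: Let $X$ be a random vector with iid coordinates, each with density proportional to $\\exp(-\\abs{t}^p)$. Let $Y$ be random according to the distribution that results from the scaling in the algorithm. Theorem \\ref{thm:lpscaling} implies that $Y$ and $AX$ have the same distribution. Also, $X\/c_{p,n}$ is isotropic, and $Y$ and $Ac_{p,n}(X\/c_{p,n})$ have the same distribution.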
Thus, the discussion about ICA earlier in this section gives that the only separating matrices $M$ are such that $M A c_{p,n} = D P$, where $P$ is a permutation matrix and $D$ is a diagonal matrix with diagonal entries in $\\{-1, 1\\}$. That is, $A P^{T} D^{-1} = c_{p,n}^{-1} M^{-1}$. The fact that $B_p^n$ is symmetric with respect to coordinate permutations and sign changes implies that $A P^{T} D^{-1} B_p^n = A B_p^n$, which is therefore the same as $c_{p,n}^{-1} M^{-1} B_p^n$. When $p \\neq 2$, the assumptions in the discussion above about ICA are satisfied and Algorithm \\ref{alg:lpreduction} is correct. When $p = 2$, the distribution of the scaled sample is Gaussian and this introduces an ambiguity with respect to rotations in the definition of $M$, but this ambiguity is no problem, as it is counteracted by the fact that the $l_2$ ball is symmetric with respect to rotations.\n\n\n\n\n\\section{Conclusion}\nWe showed, in two different ways, that the problem of learning simplices can be solved efficiently using techniques for ICA. We also showed that when the sample may not satisfy the requirement of independent components, we can efficiently obtain from it a sample that guarantees this property and from which the original distribution can be estimated. \nMany questions remain: \nCan we do this for other polytopes? Can we do this when the points come from the Gaussian distribution with labels, instead\nof the uniform distribution in the polytope? In particular, does any one of the two techniques that we used \nin this paper for learning simplices extend to learning polytopes or to \nlatent variable models?\n\n\\acks{We thank Santosh Vempala for telling us about the polytope learning problem, the approach of using higher-order moments, and for helpful discussions.\nWe also thank Keith Ball, Alexander Barvinok, Franck Barthe, Mikhail Belkin, Adam Kalai, Assaf Naor, Aaditya Ramdas, Roman Vershynin and James Voss for helpful discussions.}
\n\\subsection{Acknowledgment}\nWe thank an anonymous referee for numerous helpful suggestions that\nconsiderably improved the paper.\n\n\\section{\\titlename}\\label{sec134}\n\n\n\\begin{sectionauthors}\n\\sectionauthor{Jean-Guillaume Dumas}{Universit{\\'e} de Grenoble}\n\\sectionauthor{Cl{\\'e}ment Pernet}{Universit{\\'e} de Grenoble}\n\\end{sectionauthors}\n\nWe present here algorithms for the efficient computation of\nlinear algebra problems over finite fields. \nImplementations\\footnote{\\url{http:\/\/magma.maths.usyd.edu.au},\n \\url{http:\/\/www.maplesoft.com}, \\url{http:\/\/sagemath.org},\n \\url{http:\/\/www.shoup.net\/ntl},\n \\url{http:\/\/www.flintlib.org}, \n \\url{http:\/\/www.cs.uwaterloo.ca\/~astorjoh\/iml.html},\n \\url{http:\/\/m4ri.sagemath.org},\n \\url{http:\/\/linalg.org}}\nof the proposed algorithms are available through the {\\sc Magma}, {\\sc\n Maple} (within the \\verb!LinearAlgebra[Modular]! subpackage) and\n{\\sc Sage} systems; some parts can also be found within the C\/C++ libraries\nNTL,\nFLINT, IML, M4RI and the special purpose {\\sc LinBox} template library\nfor exact, high-performance linear algebra computation with dense,\nsparse, and structured matrices over the integers and over finite\nfields \\cite{DumGauGieGioHovKalSauTurVil02}. \n\n\\subsection{Dense matrix multiplication}\\label{ssec:blas}\n\n\\begin{definition} \\label{def:mm}\nFor $A \\in {\\mathbb F}_q^{m\\times k}$ and $B \\in {\\mathbb F}_q^{k\\times n}$ with elements $A_{i,j}$ and $B_{i,j}$,\nthe matrix $C=A\\times B$ has $C_{i,j} = \\sum_{l=1}^{k} A_{i,l}\nB_{l,j}$. We denote by $\\MatrixMul(m,k,n)$ a time complexity bound on the\nnumber of field operations necessary to compute $C$.\n\\end{definition}\nThe classical triple loop implementation of matrix multiplication gives $\\MatrixMul(m,k,n)\\leq2mkn$. The best published estimates to date\ngive $\\MatrixMul(n,n,n) \\leq \\BigO{n^\\omega}$ with $\\omega \\approx\n2.3755$ \\cite{MR1056627}, though improvements to $2.3737$ and $2.3727$\nare now claimed \\cite{Sto10,Vas11}. For very rectangular matrices one\nalso has astonishing results such as $\\MatrixMul(n,n,n^\\alpha) \\leq\n\\BigO{n^{2+\\epsilon}}$ for a constant $\\alpha > 0.294$ and any\n$\\epsilon>0$ \\cite{MR1449760}.\nNowadays practical implementations mostly use Strassen-Winograd's\nalgorithm, see Section \\ref{ssec:wino}, with the intermediate complexity\nexponent $\\omega \\approx 2.8074$.
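\nAs a baseline for everything that follows, a direct Python implementation of Definition \\ref{def:mm}, with one modular reduction per coefficient of the result (the function name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef gemm_classical(A, B, p):\n    # classical O(mkn) product over F_p; entries of A, B lie in [0, p)\n    m, k = A.shape\n    k2, n = B.shape\n    assert k == k2\n    C = np.zeros((m, n), dtype=np.int64)\n    for i in range(m):\n        for j in range(n):\n            # one dot product, reduced once at the end\n            C[i, j] = int(A[i, :] @ B[:, j]) % p\n    return C\n\\end{verbatim}\nThe algorithms of this section improve on this baseline by compressing several field elements per machine word, by delaying the modular reductions, and by using subcubic matrix multiplication.\n\n\\subsubsection{Tiny finite fields}\\label{sssec:tiny}\nThe practical efficiency of matrix multiplication depends highly on\nthe representation of field elements. \nWe thus present three kinds of compact representations for elements of a\nfinite field with\nvery small cardinality: bitpacking (for ${\\mathbb F}_2$), bit-slicing (for say\n${\\mathbb F}_3, {\\mathbb F}_5, {\\mathbb F}_7, {\\mathbb F}_{2^3}$, or ${\\mathbb F}_{3^2}$) and Kronecker\nsubstitution\\index{Kronecker!substitution}.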
These representations are\ndesigned to allow\nefficient linear algebra operations, including matrix multiplication.\n\\index{Four Russians (Method!of)}\n\\index{bit-packing}\n\n\n\\begin{algorithm}{[Greasing]}\\label{alg:Greasing}\\index{Greasing}\nOver ${\\mathbb F}_2$, the method of \\textit{the four Russians}~\\cite{MR0269546}, also called\n\\textit{Greasing}, can be used as follows:\n\\begin{itemize}\n\\item A 64-bit machine word can be used to represent a row vector of dimension~64.\n\\item Matrix multiplication of an $m\\times k$ matrix $A$ by a $k\\times n$\nmatrix $B$ can be done by first storing all $2^k$ $k$-dimensional linear\ncombinations of rows of $B$ in a table. Then the $i$-th row of the\nproduct is copied from the row of the table indexed by the $i$-th row of $A$.\n\\item By ordering the indices of the table according to a binary Gray code, each\nrow of the table can be deduced from the previous one using only one row\naddition. This brings the bit operation count to build the table from $k2^kn$ down to $2^kn$.\n\\item Choosing $k=\\log_2n$ in the above method implies $\\MatrixMul(n)=\\BigO{n^3\/\\log n}$\nover~${\\mathbb F}_2$.\n\\end{itemize}\n\\end{algorithm}\n\n\\index{bit-slicing}\n\\begin{definition}\\cite{BooBra09}\nBit-slicing consists in representing an $n$-dimensional vector of $k$-bit sized\ncoefficients using $k$ binary vectors of dimension $n$. In particular, one can\nuse boolean word instructions to perform arithmetic on 64-dimensional vectors.\n\\begin{itemize}\n\\item Over ${\\mathbb F}_3$, the binary representation $0 \\equiv [0,0], 1\\equiv\n[1,0], -1 \\equiv [1,1]$ allows one to add and subtract two elements in 6 boolean operations:\n$$\\begin{array}{ll}\n\\text{Add}([x_0,x_1],[y_0,y_1]) : & s \\leftarrow x_0\\oplus y_1 , t \\leftarrow x_1 \\oplus y_0\\\\\n & \\text{Return} (s\\wedge t, (s\\oplus x_1) \\vee (t\\oplus y_1))\\\\\n\\text{Sub}([x_0,x_1],[y_0,y_1]) : & t\\leftarrow x_0\\oplus y_0 \\\\\n & \\text{Return}(t\\vee (x_1\\oplus y_1), (t\\oplus\n y_1)\\wedge(y_0\\oplus x_1))\n\\end{array}\n$$\n\\item Over ${\\mathbb F}_5$ (resp. ${\\mathbb F}_7$), a redundant representation $x=x_0+2x_1+4x_2 \\equiv\n[x_0,x_1,x_2]$ allows one to add two elements in 20 (resp. 17) boolean operations,\nto negate in 5 (resp. 3) boolean operations, and to double in 5 (resp. 0) boolean operations, as summarized in the table below.\n\\end{itemize}\n\\end{definition}\n\n\\begin{table}[htbp]\n\\begin{tabular}{l|ccc}\n & ${\\mathbb F}_3$ & ${\\mathbb F}_5$ & ${\\mathbb F}_7$\\\\\n\\hline\nAddition & 6 & 20 & 17\\\\\nNegation & 1 & 5 & 3\\\\\nDouble & & 5 & 0\\\\\n\\end{tabular}\n\\caption{Boolean operation counts for basic arithmetic using bit-slicing}\n\n\\end{table}
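\nThe ${\\mathbb F}_3$ formulas are easy to exercise on 64-bit words, using Python integers as bit vectors. In the following demo the decoding rule $[b_0,b_1] \\mapsto b_0+b_1 \\bmod 3$ is our inference from the representation above (the formulas may produce the redundant pattern $[0,1]$, which still decodes correctly):\n\\begin{verbatim}\ndef f3_add(x, y):\n    # 6 boolean word operations on the two bit-planes\n    x0, x1 = x\n    y0, y1 = y\n    s, t = x0 ^ y1, x1 ^ y0\n    return (s & t, (s ^ x1) | (t ^ y1))\n\ndef f3_decode(x, i):\n    return (((x[0] >> i) & 1) + ((x[1] >> i) & 1)) % 3\n\n# pack u = (1, -1, 0) and v = (1, 1, -1), entry i in bit i of each plane\nu = (0b011, 0b010)\nv = (0b111, 0b100)\nw = f3_add(u, v)\nprint([f3_decode(w, i) for i in range(3)])   # [2, 0, 2], i.e. (-1, 0, -1)\n\\end{verbatim}\n\n\\index{bit-packing}\n\\begin{definition}\nBitpacking consists in representing a vector of field elements as an integer\nfitting in a single machine word using a $2^k$-adic\nrepresentation: $$(x_0,\\dots,x_{n-1})\\in {\\mathbb F}_q^n \\equiv X=x_0+2^kx_1+\\dots\n+(2^k)^{n-1}x_{n-1} \\in {\\mathbb Z}_{2^{64}}$$ \nElements of extension fields are viewed as\npolynomials and stored as the evaluation of this polynomial at the\ncharacteristic of the field.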
The latter evaluation is called {\\em\n Kronecker substitution}\\index{Kronecker!substitution}.\n\\end{definition}\nWe first need a way to simultaneously reduce the coefficients modulo the\ncharacteristic; see \\cite{MR2500374}.\n\\begin{algorithm}{[REDQ: Q-adic REDuction]}\\label{alg:REDQ}\\index{REDQ}\n\\begin{algorithmic}[1]\n\\REQUIRE Three integers $p$, $q$ and $\\tilde{r} = \\sum_{i=0}^d \\widetilde{\\mu_i} q^i \\in {\\mathbb Z}$.\n\\ENSURE $\\rho \\in {\\mathbb Z}$, with $\\rho = \\sum_{i=0}^d \\mu_i q^i$\nwhere $\\mu_i = \\widetilde{\\mu_i} \\bmod p$.\n\\UNDERL{REDQ COMPRESSION\\index{REDQ!Compression}}\n\\STATE $s = \\left\\lfloor \\frac{\\tilde{r}}{p} \\right\\rfloor$;\n\\FOR{$i=0$ to $d$}\n\\STATE $u_i = \\left\\lfloor \\frac{\\tilde{r}}{q^i} \\right\\rfloor - p \\left\\lfloor \\frac{s}{q^i} \\right\\rfloor$;\n\\ENDFOR\n\\UNDERL{REDQ CORRECTION\\index{REDQ!Correction}} \\hfill\\COMMENT{only when $p\\nmid q$, otherwise $\\mu_i=u_i$ is correct}\n\\STATE $\\mu_{d}=u_{d}$;\n\\FOR{$i=0$ to $d-1$}\n\\STATE $\\mu_i = u_i-qu_{i+1} \\bmod p$;\n\\ENDFOR\n\\\\[5pt]\\STATE Return $\\rho = \\sum_{i=0}^d \\mu_i q^i$;\n\\end{algorithmic}\n\\end{algorithm}
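\nA direct transcription of REDQ with Python integers (the function name is ours), together with a small check of the example values in the comment:\n\\begin{verbatim}\ndef redq(p, q, r, d):\n    # compression: u_i = floor(r\/q^i) - p*floor(s\/q^i), s = floor(r\/p)\n    s = r \/\/ p\n    u = [r \/\/ q**i - p * (s \/\/ q**i) for i in range(d + 1)]\n    # correction, needed when p does not divide q\n    mu = [0] * (d + 1)\n    mu[d] = u[d]\n    for i in range(d - 1, -1, -1):\n        mu[i] = (u[i] - q * u[i + 1]) % p\n    return sum(m * q**i for i, m in enumerate(mu))\n\n# digits 23, 45, 67 in base q=100, reduced mod p=7 -> 2, 3, 4\nassert redq(7, 100, 23 + 45*100 + 67*10000, 2) == 2 + 3*100 + 4*10000\n\\end{verbatim}\n\nOnce we can pack and simultaneously reduce the coefficients of a finite\nfield in a single machine word, the obtained parallelism can be used\nfor matrix multiplication. Depending on the respective sizes of the\nmatrices in the multiplication, one can pack only the left operand, only\nthe right one, or both \\cite{DumFouSal11}. We give here only a generic\nalgorithm for packed matrices, which uses multiplication of a right\npacked matrix by a non-packed left matrix.\n\\begin{algorithm}{[Right packed matrix\n multiplication\\index{packed!matrix multiplication}]}\\label{alg:rightcomp}\n\\begin{algorithmic}[1]\n\\REQUIRE A prime $p$ and $A_c \\in {\\mathbb F}_p^{m\\times k}$ and $B_c \\in {\\mathbb F}_p^{k\\times n}$,\nstored with several field elements per machine word.\n\\ENSURE $C_c=A_c\\times B_c\\in {\\mathbb F}_p^{m\\times n}$\n\\STATE $A=\\operatorname{Uncompress}(A_c)$; \\hfill\\COMMENT{extract the\n coefficients}\n\\STATE $C_c=A\\times B_c$; \\hfill\\COMMENT{Using e.g., algorithm\n \\ref{alg:fgemm}}\n\\STATE Return $\\operatorname{REDQ}(C_c)$;\n\\end{algorithmic}\n\\end{algorithm}\n\nThen, over extensions, fast floating point operations can be used on\nthe Kronecker substitution\\index{Kronecker!substitution} of the\nelements. \nIndeed, it is very often desirable to use floating point\narithmetic, {\\em exactly}: floating point\nroutines can more easily use large hardware registers, they can more easily\noptimize the memory hierarchy usage \\cite{MR2501869,WhaPetDon01}, and\nportable implementations are more widely available. \nWe present next the dot product; the matrix\nmultiplication is then straightforward\n\\cite{MR2035233,MR2500374,DumFouSal11}.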
\n\\begin{algorithm}{[Compressed Dot product over extension fields]\\label{alg:FGDP}}\n\\begin{algorithmic}[1]\n\\REQUIRE A field ${\\mathbb F}_{p^k}$ with elements represented as exponents of\na generator of the field;\n\\REQUIRE two vectors $v_1$ and $v_2$ of elements of ${\\mathbb F}_{p^k}$;\n\\REQUIRE a sufficiently large integer $q$.\n\\ENSURE $R \\in {\\mathbb F}_{p^k}$, with $R = v_1^T\\cdot v_2$.\n\\newline{\\COMMENT{Tabulated conversion: uses tables from exponent to floating point evaluation}}\n\\STATE Set $\\widetilde{v_1}$ and $\\widetilde{v_2}$ to the floating\npoint Kronecker substitution\\index{Kronecker!substitution} of the elements of $v_1$ and $v_2$.\n\\STATE Compute $\\tilde{r} = \\widetilde{v_1}^T \\cdot\\widetilde{v_2}$;\n\\hfill\\COMMENT{The floating point computation}\n\\STATE $r = REDQ\\_COMPRESSION(\\tilde{r},p,q)$;\\index{REDQ!Compression}\n\\hfill\\COMMENT{Computing a radix\n decomposition}\n\\newline{\\COMMENT{Variant of REDQ\\_CORRECTION\\index{REDQ!Correction}:\n $\\mu_i = \\widetilde{\\mu_i} \\bmod p$ for\n$\\tilde{r} = \\sum_{i=0}^{2k-2} \\widetilde{\\mu_i} q^i$}}\n\\STATE Set $L = representation( \\sum_{i=0}^{k-2} \\mu_i X^i )$;\n\\STATE Set $H = representation( X^{k-1} \\times \\sum_{i=k-1}^{2k-2} \\mu_i X^{i-k+1} )$;\n\\STATE Return $R = H + L \\in {\\mathbb F}_{p^k}$;\n\\hfill\\COMMENT{Reduction in the field}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsubsection{Word size prime fields}\n\\label{sec:fflas}\n\\index{BLAS}\\index{FFLAS}\nOver word-size prime fields one can also use the reduction to floating\npoint routines of algorithm \\ref{alg:FGDP}.\nThe main point is to be able to perform efficiently the\nmatrix multiplication of blocks of the initial matrices without\nmodular reduction. Thus delaying the reduction as much as possible,\ndepending on the algorithm and internal representations, in\norder to amortize its cost. We present next such a delaying with the\nclassical matrix multiplication algorithm and a centered\nrepresentation \\cite{MR2738206}.\n\n\\begin{algorithm}{[{\\texttt fgemm}: Finite Field GEneric Matrix Multiplication]\\label{alg:fgemm}}\n\\begin{algorithmic}[1]\n\\REQUIRE An odd prime $p$ of size smaller than the floating point mantissa $\\beta$\nand $F_p$ elements stored by values between\n$\\frac{1-p}{2}$ and \n$\\frac{p-1}{2}$\n\\REQUIRE $A \\in {\\mathbb F}_p^{m\\times k}$ and $B \\in {\\mathbb F}_p^{k\\times n}$\n\\ENSURE $C=A\\times B\\in {\\mathbb F}_p^{m\\times n}$\n\\IF {$n(p-1)^2<2^{\\beta+1}$}\n\\STATE Convert $A$ and $B$ to floating point matrices $A_f$ and $B_f$;\n\\STATE Use floating point routines to compute $C_f=A_f \\times B_f$;\n\\STATE $C = C_f \\mod p$;\n\\ELSE\n\\STATE Cut $A$ and $B$ into smaller blocks;\n\\STATE Call the algorithm recursively for the block multiplications;\n\\STATE Perform the block additions modulo $p$;\n\\ENDIF\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsubsection{Large finite fields}\\label{sssec:large}\nIf the field is too large for the strategy \\ref{alg:fgemm} over\nmachine words, then two main approaches would have to be considered:\n\\begin{itemize}\n\\item Use extended arithmetic, either arbitrary of fixed\n precision, if the characteristic is large, and a polynomial\n representation for extension fields. 
\n The difficulty here is to preserve an optimized memory\n management and to have an almost linear time extended precision polynomial\n arithmetic.\n\\item Use a residue number system and an evaluation\/interpolation\n scheme: one can use algorithm \\ref{alg:fgemm} for each prime in the\n RNS and each evaluation point. For ${\\mathbb F}_{p^k}$, the number of needed\n primes is roughly $2\\log_{2^\\beta}(p)$ and the number of evaluations\n points is $2k-1$.\n\\end{itemize}\n\n\\subsubsection{Large matrices: subcubic time complexity}\\label{ssec:wino}\n\nWith matrices of large dimension, sub-cubic time complexity algorithms, such as\nStrassen-Winograd's~\\cite{MR0297115} can be used to decrease the number of\noperations. Algorithm~\\ref{alg:strwin} describes how to compute one recursive\nlevel of the algorithm, using seven recursive calls and 15 block additions.\n\n\\begin{algorithm}{[Strassen-Winograd]}\n\\label{alg:strwin}\n$$A=\\begin{bmatrix}\n A_{11} & A_{12}\\\\\n A_{21} & A_{22}\\\\\n\\end{bmatrix};\n B=\n\\begin{bmatrix}\n B_{11} & B_{12}\\\\\n B_{21} & B_{22}\\\\\n\\end{bmatrix};\nC=\\begin{bmatrix}\n C_{11} & C_{12}\\\\\n C_{21} & C_{22}\\\\\n\\end{bmatrix};$$\n\n$$\n\\begin{array}{lllll}\n S_1 \\leftarrow A_{21} + A_{22}; & T_1 \\leftarrow B_{12} - B_{11}; & P_1 \\leftarrow A_{11} \\times B_{11}; & P_2 \\leftarrow A_{12} \\times B_{21};\\\\\n S_2 \\leftarrow S_1 - A_{11}; & T_2 \\leftarrow B_{22} - T_1; & P_3 \\leftarrow S_4 \\times B_{22}; &P_4 \\leftarrow A_{22} \\times T_4;\\\\\n S_3 \\leftarrow A_{11} - A_{21}; & T_3 \\leftarrow B_{22} - B_{12}; & P_5 \\leftarrow S_1 \\times T_1; & P_6\\leftarrow S_2 \\times T_2; \\\\\n S_4 \\leftarrow A_{12} - S_2;\t& T_4 \\leftarrow T_2 - B_{21}; & P_7 \\leftarrow S_3 \\times T_3; \\\\\n\\end{array}$$\n$$\\begin{array}{llll}\nC_{11} \\leftarrow P_1 + P_2; & U_2 \\leftarrow P_1 + P_6; & U_3 \\leftarrow U_2 +\nP_7; & U_4 \\leftarrow U_2 + P_5;\\\\\nC_{12} \\leftarrow U_4 + P_3; & C_{21} \\leftarrow U_3 - P_4;& C_{22} \\leftarrow U_3 + P_5; \\\\\n\\end{array}$$\n\\end{algorithm}\n\n\nIn practice, one uses a threshold in the matrix dimension to switch to a\nbase case algorithm, that can be any of the one previously described.\nFollowing section~\\ref{sec:fflas}, one can again delay the\nmodular reductions, \nbut the intermediate computations of Strassen-Winograd's algorithm impose a\ntighter bound:\n\n\\begin{theorem}\\cite{MR2738206}\nLet $A \\in {\\mathbb Z}^{m \\times k}$, $B \\in {\\mathbb Z}^{k \\times n}$ $C \\in {\\mathbb Z}^{m\n \\times n}$ and $\\beta \\in {\\mathbb Z}$ with\n\n$a_{i,j},b_{i,j},c_{i,j},\\beta \\in \\{0\\dots p-1\\}$. \nThen \nevery intermediate value $z$ involved in the computation of $A \\times B +\n\\beta C$ with $l$ ($l\\geq 1$) recursive levels of algorithm~\\ref{alg:strwin} satisfy: \n\\[ \\left|z \\right| \\leq \\left(\\frac{1+3^l}{2}\\right)^2 \\left\\lfloor{\n \\frac{k}{2^l}}\\right\\rfloor (p-1)^2 \\]\n\nMoreover, this bound is tight.\n\\end{theorem}\nFor instance, on a single Xeon 2.8GHz core with gcc-4.6.3, Strassen-Winograd's\nvariant implemented with LinBox-1.2.1 and GotoBLAS2-1.13 \ncan be 37\\% faster for the multiplication of $10\\,000\\times 10\\,000$\nmatrices over ${\\mathbb F}_{2^{19}-1}$, in less than $1'49\"$.\n\\subsection{Dense Gaussian elimination and echelon\n forms}\\label{ssec:echelon}\nIn this section, we present algorithms computing the determinant and\ninverse of square matrices; the rank, rank profile, nullspace, and\nsystem solving for arbitrary shape and rank matrices. 
\\subsection{Dense Gaussian elimination and echelon\n forms}\\label{ssec:echelon}\nIn this section, we present algorithms computing the determinant and\ninverse of square matrices; the rank, rank profile, nullspace, and\nsystem solving for arbitrary shape and rank matrices. All these\nproblems are solved \\`a la Gaussian elimination, but recursively in\norder to effectively incorporate matrix multiplication. The latter is\ndenoted generically \\texttt{gemm} and, depending on the underlying\nfield, can be implemented using any of the techniques of sections\n\\ref{sssec:tiny}, \\ref{sec:fflas} or \\ref{sssec:large}.\n\n Special care\nis given to the asymptotic time complexities: the exponent is reduced to that of matrix\nmultiplication using block recursive algorithms, and the constants are also\ncarefully compared. Meanwhile, this approach is also effective for\nimplementations: grouping arithmetic operations into matrix-matrix products\nallows for better optimization of cache accesses.\n\n\\subsubsection{Building blocks}\nAlgorithms~\\ref{alg:trsmrec}, \\ref{alg:trmm}, \\ref{alg:trtri} and~\\ref{alg:trtrm} show how to reduce the\ncomputation of triangular matrix systems, triangular matrix multiplications, and\ntriangular matrix inversions to matrix-matrix multiplication. Note that they do\nnot require any temporary storage other than the input and output arguments.\n\n\\begin{algorithm}{[\\texttt{trsm}: Triangular System Solve with Matrix\n right hand side]\\label{alg:trsmrec}}\n\\begin{algorithmic}[1]\n\\begin{minipage}{0.58\\columnwidth}\n\\REQUIRE{$A \\in {\\mathbb F}_q^{m \\times m}$ non-singular upper triangular, $B \\in {\\mathbb F}_q^{m \\times n}$}\n\\ENSURE{$X \\in {\\mathbb F}_q^{m \\times n}$ s.t. $AX= B$}\n\\IFTHEN{ m=1 }{{\\algorithmicreturn} $X= A_{1,1}^{-1} \\times B$}\n \\STATE $X_2=${\\tt trsm($A_3,B_2$)};\n \\STATE $B_1= B_1 - A_2X_2$; \\COMMENT{using\n \\hypnamref[alg:fgemm]{gemm}, e.g., via alg. \\ref{alg:fgemm}}\n \\STATE $X_1=${\\tt trsm($A_1,B_1$)}; \n\\RETURN $X=\n\\begin{bmatrix}\nX_1 \\\\ X_2\\end{bmatrix};\n$\n\\end{minipage}\\hfill\n\\begin{minipage}{0.36\\columnwidth}\nUsing the conformal block \\mbox{decomposition}:\\\\ \n$\n \\begin{bmatrix}\n A_1&A_2\\\\&A_3\n \\end{bmatrix}\n \\begin{bmatrix}\n X_1\\\\X_2\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n B_1\\\\B_2\n \\end{bmatrix}\n $\n\\end{minipage}\n\\end{algorithmic}\n\\end{algorithm}\n\\index{TRSM}\n
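\nA direct Python transcription of \\hypnamref[alg:trsmrec]{trsm} may help fix\nthe block pattern; this is our toy code, with matrices as lists of rows and\nmodular inverses computed with Python's three-argument \\texttt{pow}\n(available since Python 3.8).\n\\begin{verbatim}\n# Solve A X = B mod p, with A upper triangular (toy recursive trsm).\ndef trsm_upper(A, B, p):\n    m = len(A)\n    if m == 1:\n        inv = pow(A[0][0], -1, p)            # A_{1,1}^{-1} mod p\n        return [[(inv * b) % p for b in B[0]]]\n    h = m // 2\n    A1 = [row[:h] for row in A[:h]]          # conformal blocks\n    A2 = [row[h:] for row in A[:h]]\n    A3 = [row[h:] for row in A[h:]]\n    B1, B2 = B[:h], B[h:]\n    X2 = trsm_upper(A3, B2, p)\n    n = len(B[0])\n    B1 = [[(B1[i][j] - sum(A2[i][l] * X2[l][j] for l in range(m - h))) % p\n           for j in range(n)] for i in range(h)]    # the gemm step\n    X1 = trsm_upper(A1, B1, p)\n    return X1 + X2\n\\end{verbatim}\n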
\n\n\\begin{algorithm}{[\\texttt{trmm}: Triangular Matrix Multiplication]\\label{alg:trmm}}\n\\begin{algorithmic}[1]\n\\begin{minipage}{0.58\\columnwidth}\n\\REQUIRE{ $A \\in {\\mathbb F}_q^{m \\times m}$ upper triangular, $B \\in {\\mathbb F}_q^{m \\times n}$}\n\\ENSURE{$C \\in {\\mathbb F}_q^{m \\times n}$ s.t. $AB= C$}\n\\IFTHEN{ m=1 }{{\\algorithmicreturn} $C= A_{1,1} \\times B$}\n \\STATE $C_1=${\\tt trmm($A_1,B_1$)}; \n \\STATE $C_1= C_1 + A_2B_2$; \\COMMENT{using \\hypnamref[alg:fgemm]{gemm}}\n \\STATE $C_2=${\\tt trmm($A_3,B_2$)}; \n\\RETURN $C=\n\\begin{bmatrix}\nC_1 \\\\ C_2\\end{bmatrix};\n$\n\\end{minipage}\\hfill\n\\begin{minipage}{0.36\\columnwidth}\nUsing the conformal block \\mbox{decomposition}:\\\\ \n$\n \\begin{bmatrix}\n A_1&A_2\\\\&A_3\n \\end{bmatrix}\n \\begin{bmatrix}\n B_1\\\\B_2\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n C_1\\\\C_2\n \\end{bmatrix}\n $\n\\end{minipage}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}{[\\texttt{trtri}: Triangular Matrix Inversion]\\label{alg:trtri}}\n\\begin{algorithmic}[1]\n\\begin{minipage}{0.58\\columnwidth}\n\\REQUIRE{ $A \\in {\\mathbb F}_q^{n \\times n}$ upper triangular and non-singular}\n\\ENSURE{$ C=A^{-1}$}\n\\IFTHEN{ n=1 }{{\\algorithmicreturn} $ C= A_{1,1}^{-1}$}\n \\STATE $C_1=A_1^{-1}$; \\COMMENT{using \\hypnamref[alg:trtri]{trtri} recursively}\n \\STATE $C_3=A_3^{-1}$; \\COMMENT{using \\hypnamref[alg:trtri]{trtri} recursively}\n \\STATE $C_2=A_2C_3$; \\COMMENT{using \\hypnamref[alg:trmm]{trmm} }\n \\STATE $C_2=-C_1C_2$; \\COMMENT{using \\hypnamref[alg:trmm]{trmm} }\n\\RETURN $C=\n\\begin{bmatrix}\nC_1 & C_2\\\\&C_3\\end{bmatrix};\n$\n\\end{minipage}\\hfill\n\\begin{minipage}{0.36\\columnwidth}\nUsing the conformal block \\mbox{decomposition}:\\\\\n$\n \\begin{bmatrix}\n A_1&A_2\\\\&A_3\n \\end{bmatrix},\n \\begin{bmatrix}\n C_1&C_2\\\\&C_3\n \\end{bmatrix}\n $\n\\end{minipage}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}{[\\texttt{trtrm}: Upper-Lower Triangular Matrix Multiplication]\\label{alg:trtrm}}\n\\begin{algorithmic}[1]\n\\begin{minipage}{0.58\\columnwidth}\n\\REQUIRE{ $L \\in {\\mathbb F}_q^{n \\times n}$ lower triangular}\n\\REQUIRE{ $U \\in {\\mathbb F}_q^{n \\times n}$ upper triangular}\n\\ENSURE{$ A=UL$}\n\\IFTHEN{ n=1 }{{\\algorithmicreturn} $ A= U_{1,1}L_{1,1}$}\n \\STATE $A_1=U_1L_1$; \\COMMENT{using \\hypnamref[alg:trtrm]{trtrm} recursively}\n \\STATE $A_1=A_1+U_2L_2$; \\COMMENT{using \\hypnamref[alg:fgemm]{gemm}}\n \\STATE $A_2=U_2L_3$; \\COMMENT{using \\hypnamref[alg:trmm]{trmm} }\n \\STATE $A_3=U_3L_2$; \\COMMENT{using \\hypnamref[alg:trmm]{trmm} }\n \\STATE $A_4=U_3L_3$; \\COMMENT{using \\hypnamref[alg:trtrm]{trtrm} recursively}\n\\RETURN $A=\n\\begin{bmatrix}\nA_1 & A_2\\\\A_3&A_4\\end{bmatrix};\n$\n\\end{minipage}\\hfill\n\\begin{minipage}{0.36\\columnwidth}\nUsing the conformal block \\mbox{decomposition}:\\\\ \n$\\begin{bmatrix}\n L_1\\\\L_2&L_3\n \\end{bmatrix},\\begin{bmatrix}\n U_1&U_2\\\\&U_3\n \\end{bmatrix},\\begin{bmatrix}\n A_{1}&A_{2}\\\\A_{3}&A_{4}\n \\end{bmatrix}$\n\\end{minipage}\n\\end{algorithmic}\n\\end{algorithm}\n
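\nIn a toy Python version of \\hypnamref[alg:trtri]{trtri} (our code), the\nblocks read: $C_1=A_1^{-1}$, $C_3=A_3^{-1}$ and $C_2=-C_1(A_2C_3)$, the two\nproducts being the \\texttt{trmm} calls of the pseudocode.\n\\begin{verbatim}\n# Invert an upper triangular matrix mod p (toy recursive trtri).\ndef trtri_upper(A, p):\n    n = len(A)\n    if n == 1:\n        return [[pow(A[0][0], -1, p)]]\n    h = n // 2\n    A1 = [row[:h] for row in A[:h]]\n    A2 = [row[h:] for row in A[:h]]\n    A3 = [row[h:] for row in A[h:]]\n    C1 = trtri_upper(A1, p)                  # recursive call\n    C3 = trtri_upper(A3, p)                  # recursive call\n    A2C3 = [[sum(A2[i][l] * C3[l][j] for l in range(n - h)) % p\n             for j in range(n - h)] for i in range(h)]     # trmm\n    C2 = [[-sum(C1[i][l] * A2C3[l][j] for l in range(h)) % p\n           for j in range(n - h)] for i in range(h)]       # trmm\n    return ([C1[i] + C2[i] for i in range(h)] +\n            [[0] * h + C3[i] for i in range(n - h)])\n\\end{verbatim}\n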
\n\n\\subsubsection{PLE decomposition}\\index{PLE decomposition}\n\nDense Gaussian elimination over finite fields can be reduced to matrix\nmultiplication, using the usual techniques for the LU decomposition of\nnumerical linear algebra~\\cite{MR0331751}. \nHowever, in applications over a finite field, the input matrix often\nhas non-generic rank profile and special care needs to be taken about\nlinear dependencies and rank deficiencies.\nThe PLE decomposition is thus a generalization of the PLU\ndecomposition for matrices with any rank\\index{Rank} profile. \n\\begin{definition}\nA matrix is in row-echelon form if all its zero rows occupy the last row\npositions and the leading coefficient of any non-zero row except the first one is strictly to the right of the\nleading coefficient of the previous row.\nMoreover, it is said to be in reduced row-echelon form, if all coefficients\nabove a leading coefficient are zeros.\n\\end{definition}\n\n\\begin{definition}\nFor any matrix $A\\in {\\mathbb F}_q^{m\\times n}$ of rank\\index{Rank} $r$, there is a PLE decomposition $A=PLE$\nwhere $P$ is a permutation matrix, $L$ is an $m\\times r$ lower triangular matrix\nand $E$ is an $r\\times n$ matrix in row-echelon form, with unit leading coefficients.\n\\end{definition}\n\nAlgorithm~\\ref{alg:pledec} shows how to compute such a decomposition by a block\nrecursive algorithm, thus reducing the complexity to that of matrix\nmultiplication.\n\n\\begin{algorithm}{[PLE decomposition]\\label{alg:pledec}}\n\\begin{algorithmic}[1]\n\\REQUIRE{ $A \\in {\\mathbb F}_q^{m \\times n}$}\n\\ENSURE{$(P,L,E)$ a PLE decomposition of $A$}\n\\IF{$n=1$}\n \\IFTHEN{$A=0_{m\\times 1}$}{{\\algorithmicreturn} $(I_m,I_0, A)$;}\n \\STATE Let $j$ be the row index of the first non-zero entry of $A$\n and $P=T_{1,j}$ the transposition between indices $1$ and $j$;\n \\RETURN $(P,PA, [1])$;\n\\ELSE \n\\begin{minipage}{\\columnwidth}\n\\begin{minipage}{0.58\\columnwidth}\n \\STATE $(P_1,L_1,E_1) = \\texttt{PLE}(A_1)$; \\COMMENT{recursively}\n \\STATE $A_2 = P_1A_2$;\n \\STATE $A_3 = L_{1,1}^{-1}A_3$; \\COMMENT{using \\hypnamref[alg:trsmrec]{trsm}}\n \\STATE $A_4 = A_4 - L_{1,2}A_3$; \\COMMENT{using \\hypnamref[alg:fgemm]{gemm}}\n \\STATE $(P_2,L_2,E_2) = \\texttt{PLE}(A_4)$; \\COMMENT{recursively}\n\\end{minipage}\n\\begin{minipage}{0.36\\columnwidth}\nSplit $A$ columnwise in halves: \n$A=\\begin{bmatrix}A_1&A_2\\end{bmatrix}$\\\\\nSplit $A_2=\\begin{bmatrix}A_{3}\\\\A_4\\end{bmatrix}$,\n$L_1=\\begin{bmatrix}L_{1,1}\\\\L_{1,2}\\end{bmatrix}$ \nwhere $A_3$ and $L_{1,1}$ have $r_1$ rows.\n\\end{minipage}\n\\end{minipage}\n \\RETURN ($\n P_1 \\begin{bmatrix} I_{r_1}\\\\&P_2 \\end{bmatrix}, \n \\begin{bmatrix}\n L_{1,1}\\\\P_2L_{1,2}&L_2\n \\end{bmatrix}, \n \\begin{bmatrix}\n E_1&A_3\\\\&E_2\n \\end{bmatrix}\n$);\n\\ENDIF\n\\end{algorithmic}\n\n\\end{algorithm}\n
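\nA blocked implementation of algorithm~\\ref{alg:pledec} is easy to get subtly\nwrong (permutations, rank deficiencies), so a naive quadratic elimination is\nhandy as a testing oracle. The following Python sketch (ours) directly\ncomputes the rank and an echelon form with unit leading coefficients, against\nwhich the factor $E$ of a PLE routine can be checked on random matrices.\n\\begin{verbatim}\n# Naive row-echelon form over F_p: returns (E, rank).\ndef row_echelon(A, p):\n    M = [[x % p for x in row] for row in A]\n    m, n = len(M), len(M[0])\n    r = 0                                  # rank found so far\n    for j in range(n):\n        piv = next((i for i in range(r, m) if M[i][j]), None)\n        if piv is None:\n            continue                       # no pivot in this column\n        M[r], M[piv] = M[piv], M[r]        # row swap (the permutation P)\n        inv = pow(M[r][j], -1, p)\n        M[r] = [(inv * x) % p for x in M[r]]   # unit leading coefficient\n        for i in range(r + 1, m):          # eliminate below the pivot\n            c = M[i][j]\n            if c:\n                M[i] = [(a - c * b) % p for a, b in zip(M[i], M[r])]\n        r += 1\n    return M[:r], r\n\\end{verbatim}\n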
\n\\subsubsection{Echelon forms}\n\nThe row-echelon and reduced row-echelon forms can be obtained from the PLE\ndecomposition, using additional operations: \\texttt{trsm}, \\texttt{trtri}\nand \\texttt{trtrm}, as shown in algorithms~\\ref{alg:rowech} and~\\ref{alg:redrowech}.\n\n\\begin{algorithm}{[\\texttt{RowEchelon}\\label{alg:rowech}]}\n\\begin{algorithmic}[1]\n\\label{alg:echelon}\n\\REQUIRE{ $A \\in {\\mathbb F}_q^{m \\times n}$ }\n\\ENSURE{$(X,E)$ such that $XA=E$, $X$ is non-singular and $E$ is in row-echelon\nform}\n\\STATE $(P,L,E) = \\texttt{PLE}(A)$; \\\\\n\\begin{minipage}{\\columnwidth}\n\\begin{minipage}{0.58\\columnwidth}\n\\STATE $X_1=L_1^{-1}$; \\COMMENT{using \\hypnamref[alg:trtri]{trtri}}\n\\STATE $X_2=-L_2X_1$; \\COMMENT{using \\hypnamref[alg:trmm]{trmm}}\n\\end{minipage}\\hfill\n\\begin{minipage}{0.36\\columnwidth}\nSplit $L = \n\\begin{bmatrix}\nL_1\\\\L_2\n\\end{bmatrix}$, $L_1: r\\times r$.\n\\end{minipage}\n\\end{minipage}\n\\RETURN $\\left(X = \n\\begin{bmatrix}\nX_1\\\\X_2&I_{m-r}\\end{bmatrix} P^T, E\\right);$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}{[\\texttt{ReducedRowEchelon}\\label{alg:redrowech}]}\n\\begin{algorithmic}[1]\n\\label{alg:redechelon}\n\\REQUIRE{ $A \\in {\\mathbb F}_q^{m \\times n}$ }\n\\ENSURE{$(Y,R)$ such that $YA=R$, $Y$ is non-singular and $R$ is in reduced row-echelon\nform}\n\\STATE $(X,E) = \\texttt{RowEchelon}(A)$; \\COMMENT{with $X=\\begin{bmatrix}X_1\\\\X_2&I_{m-r}\\end{bmatrix}P^T$ as in algorithm~\\ref{alg:rowech}}\n\\STATE Let $Q$ be the permutation matrix that brings the leading row\ncoefficients of $E$ to the diagonal;\n\\STATE Set $EQ=\n\\begin{bmatrix}\nU_1&U_2\\end{bmatrix}$; \\COMMENT{where $U_1$ is $r\\times r$ upper triangular}\n\\STATE $Y_1=U_1^{-1}$; \\COMMENT{using \\hypnamref[alg:trtri]{trtri}}\n\\STATE $Y_1=Y_1X_1$; \\COMMENT{using \\hypnamref[alg:trtrm]{trtrm}}\n\\STATE $R = \n\\begin{bmatrix}\nI_r&U_1^{-1}U_2\n\\end{bmatrix} Q^T$; \\COMMENT{using \\hypnamref[alg:trsmrec]{trsm}}\n\\RETURN $\\left(Y=\n\\begin{bmatrix}\nY_1\\\\X_2&I_{m-r}\n\\end{bmatrix} P^T, R\\right) ;$\n\\end{algorithmic}\n\\end{algorithm}\n\nFigure~\\ref{fig:reductions} shows the various steps between the classical\nGaussian elimination (LU decomposition), the computation of the echelon form and\nof the reduced echelon form, together with the various problems that each of\nthem solves. Table~\\ref{tab:gaussconstants} shows the leading constant $K_\\omega$ in the\nasymptotic time complexity of these algorithms, assuming that two $n\\times n $\nmatrices can be multiplied in $C_\\omega n^\\omega + o(n^\\omega)$.\n\\begin{figure}\n\\begin{center}\n\\includereduc\n\\end{center}\n\\caption{Reductions from PLE decomposition to Reduced echelon form}\\label{fig:reductions}\n\\end{figure}\n\\begin{table}\n \\begin{tabular}{llll}\n Algorithm & Constant $K_\\omega$ & $K_3$ & $K_{\\log_2 7}$\\\\\n \\hline\n \\texttt{gemm} & $C_\\omega$& 2 & 6 \\\\\n \\texttt{trsm} & $\\frac{C_\\omega}{2^{\\omega-1}-2}$ & $1$ & $4$\\\\\n \\texttt{trtri}& \n $\\frac{C_\\omega}{(2^{\\omega-1}-2)(2^{\\omega-1}-1)}$&$\\frac{1}{3}\\approx 0.33$&$\\frac{8}{5}=1.6$\\\\\n \\texttt{trtrm}, \\texttt{PLE}\n &\n $\\frac{C_\\omega}{2^{\\omega-1}-2}-\\frac{C_\\omega}{2^{\\omega}-2}$ &\n $\\frac{2}{3}\\approx 0.66$ & $\\frac{14}{5} =2.8$ \\\\\n \\texttt{Echelon}& $\\frac{C_\\omega}{2^{\\omega-2}-1}-\\frac{3C_\\omega}{2^{\\omega}-2}$ & 1 &\n $\\frac{22}{5}\\approx 4.4$\\\\\n \\texttt{RedEchelon} &$\\frac{C_\\omega(2^{\\omega-1}+2)}{(2^{\\omega-1}-2)(2^{\\omega-1}-1)}$& 2 &\n $\\frac{44}{5}= 8.8$\\\\\n\\end{tabular}\n\\caption{Complexity of elimination algorithms}\\label{tab:gaussconstants}\n\\end{table}\n\n\\begin{remark}\nNote that, if the rank $r$ is very small compared to the dimensions $m\\times n$ of\nthe matrix, a system $A x = b$ can be solved in time bounded by\n$\\BigO{(m+n)r^2}$ \\cite[Theorem~1]{MR1805128}. \n\\end{remark}\n\n\\subsection{Minimal and characteristic polynomial of a dense matrix}\n\\index{Characteristic polynomial}\n\\index{Minimal polynomial}\n\n\\begin{definition}\n\\begin{enumerate}\n\\item A {\\em Las-Vegas algorithm} is a randomized algorithm which is always\n correct. Its expected running time is always finite.\n\\item A {\\em Monte-Carlo algorithm} is a randomized algorithm which is correct with a\n certain probability. Its running time is deterministic.\n\\end{enumerate}\n\\end{definition}\n\nThe computation of the minimal and characteristic polynomials is closely related\nto that of the Frobenius normal form.\n\\begin{definition} Any matrix $A \\in {\\mathbb F}_q^{n\\times n}$ is similar to a\n unique block diagonal matrix $F=P^{-1} A P =\n \\operatorname{diag}(C_{f_1},\\ldots,C_{f_t})$ where the blocks $C_{f_i}$ are\n companion matrices of the polynomials $f_i$, which satisfy\n $f_{i+1}|f_i$. 
The $f_i$ are the {\\em invariant factors} of\n $A$ and $F$ is the {\\em Frobenius normal form} of $A$. \n\\end{definition}\nMost algorithms computing the minimal and characteristic polynomial or the\nFrobenius normal form rely on Krylov basis computations.\n\\begin{definition}\n\\begin{enumerate}\n\\item The Krylov matrix of order $d$ for a vector $v$ w.r.t.\\ a matrix $A$ is the\n matrix \n$K_{A,v,d}=\n\\begin{bmatrix}\n v&Av&\\dots&A^{d-1}v\n \\end{bmatrix} \\in {\\mathbb F}_q^{n\\times d}$.\n\\item The minimal polynomial $P_\\text{min}^{A,v}$ of $A$ and $v$\n is the least degree monic polynomial $P$ such that $P(A)v=0$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{theorem}\n\\begin{enumerate}\n\\item $AK_{A,v,d}=K_{A,v,d}C_{P_\\text{min}^{A,v}}$, where\n $d=\\deg(P_\\text{min}^{A,v})$.\n\\item For linearly independent vectors $(v_1,\\dots,v_k)$, if $K=\\begin{bmatrix}K_{A,v_1,d_1}&\\dots& K_{A,v_k,d_k}\\end{bmatrix}$ is non-singular, then $AK=\nK\\begin{bmatrix}\nC_{P_\\text{min}^{A,v_1}}&B_{1,2}&\\dots&B_{1,k}\\\\\nB_{2,1}&C_{P_\\text{min}^{A,v_2}}&\\dots&B_{2,k}\\\\\n\\vdots&\\vdots&\\ddots&\\vdots\\\\\nB_{k,1}&B_{k,2}&&C_{P_\\text{min}^{A,v_k}}\n\\end{bmatrix}$, where the blocks $B_{i,j}$ are zero except on the last column.\n\\item For linearly independent vectors $(v_1,\\dots, v_k)$, let $(d_1,\\dots\n d_k)$ be the lexicographically largest sequence of degrees such that\n $K=\\begin{bmatrix}K_{A,v_1,d_1}&\\dots& K_{A,v_k,d_k}\\end{bmatrix}$ is non-singular. Then \n\\begin{equation}\nK^{-1}AK=\n\\begin{bmatrix}\nC_{P_\\text{min}^{A,v_1}}&B_{1,2}&\\dots&B_{1,k}\\\\\n&C_{P_\\text{min}^{A,v_2}}&\\dots&B_{2,k}\\\\\n&&\\ddots&\\vdots\\\\\n&&&C_{P_\\text{min}^{A,v_k}}\n\\end{bmatrix}=H\n\\label{eq:hessenberg}\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{remark}\n\\begin{enumerate}\n\\item Some choices of vectors $v_1,\\dots, v_k$ lead to a block diagonal matrix $H$:\n this is the Frobenius normal form~\\cite{MR1657129}.\n\\item The matrix obtained at equation (\\ref{eq:hessenberg}) is called a Hessenberg\n form. The characteristic polynomial can then be computed from its diagonal blocks.\n\\end{enumerate}\n\\end{remark}\n\n\\begin{theorem}\n The Frobenius normal form can be computed:\n\\begin{enumerate}\n\\item\\label{alg:charp:det} by a deterministic algorithm~\\cite{StoVil00} in $6n^3+\\BigO{n^2\\log^2n}$ field\n operations (only $(2+\\frac{2}{3})n^3+\\BigO{n^2}$ for the characteristic polynomial~\\cite{MR2280540})\n\\item by a deterministic algorithm~\\cite{MR1948725} in $\\BigO{n^\\omega\\log n \\log\\log\n n}$, together with a transformation matrix (only $\\BigO{n^\\omega\\log n}$ for the characteristic polynomial~\\cite{MR796306})\n\\item by a Las-Vegas algorithm~\\cite{MR1834824} in $\\BigO{n^\\omega\\log n}$\n field operations for any field, together with a transformation matrix\n\\item \\label{alg:charp:arith} by a Las-Vegas algorithm~\\cite{MR2402276} in $\\BigO{n^\\omega}$ for\n $q>2n^2$, without transformation matrix.\n\\end{enumerate} \n\n The minimal\\index{Minimal polynomial} and characteristic\\index{Characteristic polynomial} polynomials, obtained\n as the first invariant factor and the product of all invariant factors, can be\n computed with the same complexities.\n\\end{theorem}\n\n\\begin{remark}\nThese algorithms are all based on Krylov bases. The algorithms of\nitem~(\\ref{alg:charp:det}) iteratively compute the Krylov iterates one after the other. 
Their cubic time\ncomplexity with a small leading constant makes them comparable to Gaussian\nelimination.\nA fast exponentiation scheme by Keller-Gehrig~\\cite{MR796306} achieves a\nsub-cubic time complexity for the characteristic polynomial, off from matrix\nmultiplication by a logarithmic factor in $n$. \nThe choice of the appropriate vectors that will generate the Frobenius normal\nform can be made either probabilistically (Las-Vegas) or deterministically, with a\n$\\log\\log n$ factor.\nThe algorithm of item~(\\ref{alg:charp:arith}) uses a\ndifferent iteration, where the size of the Krylov basis increases according to an\narithmetic progression rather than a geometric one (as in all the others), and the transformation matrix is not\ncomputed. This allows it to match the complexity of matrix\nmultiplication. This reduction is practical and is implemented in {\\sc\n LinBox}.\n\\end{remark}\n\n\\begin{remark}\\label{rem:extensions}\nThese probabilistic algorithms depend on the ability to sample uniformly from a\nlarge set of coefficients from the field.\nOver small fields, it is always possible to embed the problem into an\nextension field, in order to make the random sampling set\nsufficiently large. In the worst case, this could add a\n$\\BigO{\\log(n)}$ factor to the arithmetic cost and prevent most of the\nbit-packing techniques.\nInstead, the effort of~\\cite{MR1834824} is to handle the small finite\nfield case cleanly.\n\\end{remark}\n\n\n\n\\subsection{Blackbox iterative methods}\\label{ssec:blackbox}\nWe consider now the case where the input matrix is sparse, i.e., has\nmany zero elements, or has a structure which enables fast\nmatrix-vector products. Gaussian elimination would fill in the sparse\nmatrix or modify the interesting structure. Therefore one can instead use\niterative methods, which only use matrix-vector products\n({\\em blackbox methods} \\cite{MR1056629}). \nThere are two major differences with numerical iterative \nroutines: over finite fields there exist isotropic vectors and there is no\nnotion of convergence, hence the iteration must proceed until exactness of the result~\\cite{Lam96}.\nProbabilistic early termination can nonetheless be applied when the\ndegree of the minimal polynomial is smaller than the dimension of the\nmatrix~\\cite{MR1687279,DumVil02,Ebe03}. More generally the probabilistic nature\nof the algorithms presented in this section is subtle: e.g., the computation of\nthe minimal polynomial is Monte-Carlo, but that of system solving, using the\nminimal polynomial, is Las-Vegas (by checking consistency of the produced\nsolution with the system). Making some of the Monte-Carlo solutions Las-Vegas is\na key open problem in this area.\n \n\\subsubsection{Minimal\\index{Minimal polynomial} Polynomial and the Wiedemann algorithm\n }\\index{Wiedemann}\nThe first iterative algorithm and its analysis are due to D. Wiedemann\n\\cite{MR831560}. 
The algorithm computes the minimal\\index{Minimal polynomial}\npolynomial in a Monte-Carlo probabilistic fashion.\n\\begin{definition} For a linearly recurring sequence $S=(S_i)$, its\n minimal\\index{Minimal polynomial} polynomial is denoted by $\\Pi_S$.\n\\begin{itemize}\n\\item The minimal\\index{Minimal polynomial} polynomial of a matrix is denoted $\\Pi_A = \\Pi_{(A^i)}$.\n\\item For a matrix $A$ and a vector $b$, we write\n $\\Pi_{A,b}=\\Pi_{(A^i\\cdot b)}$.\n\\item With another vector $u$, we write\n $\\Pi_{u,A,b}=\\Pi_{(u^T\\cdot A^i \\cdot b)}$.\n\\end{itemize}\n\\end{definition}\n\n\\begin{algorithm}{[Wiedemann minimal\\index{Minimal polynomial} polynomial]\\label{alg:wiedemann}}\n\\begin{algorithmic}[1]\n\\REQUIRE $A \\in {\\mathbb F}_q^{n\\times n}$, $u, b \\in {\\mathbb F}_q^n$.\n\\ENSURE $\\Pi_{u,A,b}$.\n\\STATE Compute $S=(u^T A^i b)$ for $i \\leq 2n$;\n\\STATE Use the Berlekamp-Massey algorithm to compute the minimal\\index{Minimal polynomial}\npolynomial of the scalar sequence $S$;\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{definition}\\seedefTotient\nWe extend Euler's totient function by \n$\\Phi_{q,k}(f)=\\prod (1-q^{-kd_i}),$ where the $d_i$ are the degrees of the distinct monic\nirreducible factors of the polynomial~$f$.\n\\index{function!Euler's $\\Phi$}\n\\end{definition}\n\n\\begin{theorem} For vectors $u_1, \\ldots, u_k$ selected uniformly at random,\n the probability that $\\operatorname{lcm}_{1\\leq j\\leq k}(\\Pi_{u_j,A,b})=\\Pi_{A,b}$ is at\n least $\\Phi_{q,k}(\\Pi_{A,b})$.\n\\end{theorem}\n\n\\begin{theorem} For vectors $b_1, \\ldots, b_k$ selected\n uniformly at random, the probability that\n $\\operatorname{lcm}_{1\\leq i\\leq k}(\\Pi_{A,b_i})=\\Pi_A$ is at least\n $\\Phi_{q,k}(\\Pi_A)$.\n\\end{theorem}\n\\subsubsection{Rank\\index{Rank}, Determinant and\n Characteristic\\index{Characteristic polynomials} Polynomial}\nIt is possible to compute the rank\\index{Rank}, determinant, and\ncharacteristic\\index{Characteristic polynomials} polynomial of a\nmatrix from its minimal\\index{Minimal polynomial} polynomial. All\nthese reductions require preconditioning the matrix so that the\nminimal\\index{Minimal polynomial} polynomial of the obtained matrix\nwill reveal the information sought, while keeping a low cost for the\nmatrix-vector product\n\\cite{MR1809985,MR1229306,DumVil02,Tur02,MR1849765,Vil03,MR1878939}. \n\\begin{theorem}\\cite{MR1809985}\\label{thm:EK} \n Let $S$ be a finite subset of a field ${\\mathbb F}$ that does not include~$0$. \n Let $A \\in {\\mathbb F}^{m \\times n}$ be of rank\\index{Rank} $r$. \n Let $D_1 \\in S^{n \\times n}$ and $D_2 \\in S^{m \\times m}$ be two\n random diagonal matrices; then \n $\\deg(\\operatorname{minpoly}( D_1 A^T D_2 A D_1 ))=r$, \n with probability at least $1 - \\frac{11n^2-n}{2|S|}$.\n\\end{theorem}\n\n\\begin{theorem}\\cite{Tur02}\\label{thm:det} \n Let $S$ be a finite subset of a field ${\\mathbb F}$ that does not include~$0$. \n Let $U\\in S^{n \\times n}$ be a unit upper bi-diagonal matrix whose\n superdiagonal elements $u_1,\\ldots,u_{n-1}$ are randomly\n selected in $S$.\n For $A \\in {\\mathbb F}^{n \\times n}$, the term of degree $0$ of the\n minimal\\index{Minimal polynomial} polynomial of $UA$ is the\n determinant of $A$, with probability at least $1-\\frac{n^2-n}{2|S|}$.\n\\end{theorem}\n
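\nAs a toy illustration of theorem~\\ref{thm:det} combined with\nalgorithm~\\ref{alg:wiedemann}, the following Python sketch (our code, not a\nlibrary routine) recovers the determinant from the degree-$0$ coefficient of\nthe computed minimal polynomial. With the monic normalization used below,\nthat coefficient equals $(-1)^n\\det(UA)$ when the degree reaches $n$, hence\nthe sign adjustment; unlucky projections simply return \\texttt{None} so that\nthe caller can retry.\n\\begin{verbatim}\nimport random\n\ndef berlekamp_massey(S, p):\n    # Connection polynomial [1, c_1, ..., c_L] of S mod p, i.e.\n    # S[i] + c_1 S[i-1] + ... + c_L S[i-L] = 0 for all i >= L.\n    C, B, L, m, b = [1], [1], 0, 1, 1\n    for i, s in enumerate(S):\n        d = s % p                          # discrepancy\n        for j in range(1, L + 1):\n            d = (d + C[j] * S[i - j]) % p\n        if d == 0:\n            m += 1\n            continue\n        T = list(C)\n        coef = d * pow(b, -1, p) % p\n        if len(B) + m > len(C):\n            C = C + [0] * (len(B) + m - len(C))\n        for j in range(len(B)):\n            C[j + m] = (C[j + m] - coef * B[j]) % p\n        if 2 * L <= i:\n            L, B, b, m = i + 1 - L, T, d, 1\n        else:\n            m += 1\n    return C[:L + 1]\n\ndef wiedemann_det(A, p):\n    # Monte-Carlo determinant: minimal polynomial of U*A with U unit\n    # upper bidiagonal (det U = 1), projected on random u and x.\n    n = len(A)\n    sup = [random.randrange(1, p) for _ in range(n - 1)]\n    u = [random.randrange(p) for _ in range(n)]\n    x = [random.randrange(p) for _ in range(n)]\n    S = []\n    for _ in range(2 * n):\n        S.append(sum(ui * xi for ui, xi in zip(u, x)) % p)\n        w = [sum(A[i][j] * x[j] for j in range(n)) % p for i in range(n)]\n        x = [(w[i] + (sup[i] * w[i + 1] if i < n - 1 else 0)) % p\n             for i in range(n)]\n    mp = berlekamp_massey(S, p)\n    if len(mp) - 1 < n:\n        return None                        # unlucky choices: retry\n    return (-1) ** n * mp[-1] % p\n\\end{verbatim}\n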
\\begin{remark}\nIf $A$ is known to be non-singular, the algorithm can be repeated with\ndifferent matrices $U$ until the obtained minimal polynomial is of\ndegree $n$. Then it is the characteristic polynomial of $UA$ and the\ndeterminant is certified. \nAlternatively, if the matrix is singular, then $X$ divides the minimal\npolynomial. As Wiedemann's algorithm always returns a factor of the\ntrue minimal polynomial, and $U$ is invertible,\nthe algorithm can be repeated on $UA$ until either the obtained\npolynomial is of degree $n$ or it is divisible by $X$.\nOverall,\nthe determinant has a Las-Vegas blackbox solution.\n\\end{remark}\n\n\\begin{theorem}\\cite{MR1849765,Vil03}\\label{thm:rankupdate}\n Let $S$ be a finite subset of a field ${\\mathbb F}$\n that does not include~$0$ and $A\\in {\\mathbb F}^{n\\times n}$ with $s_1,\n \\ldots, s_t$ as invariant factors. Let $U \\in S^{n\\times k}$ and $V\n \\in S^{k\\times n}$ be randomly chosen rank\\index{Rank} $k$ matrices\n over ${\\mathbb F}$. Then $\\operatorname{gcd}(\\Pi_A, \\Pi_{A+UV})=s_{k+1}$ with\n probability at least $1 -\\frac{nk + n + 1}{|S|}$.\n\\end{theorem}\n\\begin{remark}\nUsing the divisibility of the invariant factors and the fact that\ntheir product is of degree $n$, one can see that the number of degree\nchanges between successive invariant factors is of order\n$\\BigO{\\sqrt{n}}$~\\cite{MR1849765}. Thus by a binary search over\nsuccessive applications of theorem \\ref{thm:rankupdate} one can\nrecover all of the invariant factors and thus the\ncharacteristic\\index{Characteristic polynomials} polynomial of the\nmatrix in a Monte-Carlo fashion. \n\\end{remark}\n\\subsubsection{System solving and the Lanczos algorithm}\\index{Lanczos}\nFor the solution of a linear system $Ax=b$, one could compute the\nminimal\\index{Minimal polynomial} polynomial $\\Pi_{A,b}$ and then derive a solution of the\nsystem as a linear combination of the $A^ib$: indeed, if\n$\\Pi_{A,b}(x)=\\sum_{i=0}^{d}c_ix^i$ with $c_0\\neq 0$, then\n$x=-c_0^{-1}\\sum_{i=1}^{d}c_iA^{i-1}b$ satisfies $Ax=b$.\nThe following Lanczos approach is more efficient for system solving as\nit avoids recomputing (or storing) those Krylov vectors\n\\cite{MR1809985,MR1805174}.\n\\begin{algorithm}{[Lanczos system solving]\\label{alg:lanczos}}\n\\begin{algorithmic}[1]\n\\REQUIRE $A \\in {\\mathbb F}^{m\\times n}$, $b \\in {\\mathbb F}^m$.\n\\ENSURE $x\\in {\\mathbb F}^n$ such that $Ax=b$ or {\\em failure}.\n\\STATE\\label{lin:datdad} Let $\\tilde{A}=D_1 A^T D_2 A D_1$ and $\\tilde{b} = D_1 A^T D_2\nb+\\tilde{A} v$ with $D_1$ and $D_2$ random diagonal matrices and $v$ a\nrandom vector;\n\\STATE $w_0=\\tilde{b}$; $v_1=\\tilde{A}w_0$; $t_0=v_1^T w_0$;\n$\\gamma=\\tilde{b}^Tw_0 t_0^{-1}$; $x_0 = \\gamma w_0$;\n\\REPEAT\n\\STATE $\\alpha=v^T_{i+1}v_{i+1} t_i^{-1}$; $\\beta=v^T_{i+1}v_{i}\nt_{i-1}^{-1}$; $w_{i+1} = v_{i+1} -\\alpha w_i -\\beta w_{i-1}$;\n\\STATE $v_{i+2}= \\tilde{A} w_{i+1}$; $t_{i+1}=w_{i+1}^T v_{i+2}$;\n\\STATE $\\gamma=\\tilde{b}^Tw_{i+1}t_{i+1}^{-1}$; $x_{i+1}=x_i + \\gamma w_{i+1}$;\n\\UNTIL{$w_{i+1}=0$ or $t_{i+1}=0$;}\n\\STATE Return $x=D_1(x_{i+1}-v)$;\n\\end{algorithmic}\n\\end{algorithm}\n\nThe probability of success of algorithm \\ref{alg:lanczos} also follows\nfrom theorem \\ref{thm:EK}. \n\\begin{remark}\nOver small fields, if the rank of the matrix is known, the\ndiagonal matrices of line \\ref{lin:datdad} can be replaced by sparse\npreconditioners with $\\BigO{n\\log(n)}$ non-zero coefficients to avoid the\nneed for field extensions~\\cite[corollary 7.3]{MR1878939}. \n\\end{remark}\n\\begin{remark}\nIf the system with $A$ and $b$ is known to have a solution, then the\nalgorithm can be turned Las-Vegas by checking that the output $x$\nindeed satisfies $Ax=b$. 
In general, we do not know if this algorithm\nreturns failure because of bad random choices or because the system is\ninconsistent. However, Giesbrecht, Lobo and Saunders have shown that\nwhen the system is inconsistent, it is possible to produce a\ncertificate vector $u$ such that $u^T A=0$ together with \n$u^T b \\neq 0$ within the same complexity \n\\cite[Theorem 2.4]{MR1805174}. Overall, system solving can be\nperformed by blackbox algorithms in a Las-Vegas fashion.\n\\end{remark}\n\\subsection{Sparse and structured methods}\\index{Sparse matrix}\n\\index{Structured matrix}\\index{Gaussian elimination}\\index{Fill-in}\nAnother approach to sparse linear systems is to use Gaussian\nelimination with pivoting, taking into account the zero coefficients. \nThis algorithm modifies the structure of the matrix and might suffer\nfrom fill-in. Consequently the available memory is usually the\nbottleneck. From a triangularization one can naturally derive the\nrank\\index{Rank}, determinant, system solving and nullspace. \nComparisons with the blackbox approaches above can be found, e.g., in\n\\cite{DumVil02}.\n\\subsubsection{Reordering}\\index{Reordering}\n\\begin{algorithm}{[Gaussian elimination with linear pivoting]\\label{alg:reord}}\n\\begin{algorithmic}[1]\n\\REQUIRE a matrix $A \\in {\\mathbb F}^{m \\times n}$;\n\\ENSURE An upper triangular matrix $U$ such that there exist a\nunit lower-triangular matrix $L$ and permutation matrices $P$ and\n$Q$ over ${\\mathbb F}$, with $A=P\\cdot L \\cdot U \\cdot Q$;\n\\FORALL{elimination steps}\n\\STATE Choose as pivot row the sparsest remaining row;\n\\STATE In this row choose the non-zero pivot with the lowest number of\nnon-zero elements in its column;\n\\STATE Eliminate using this pivot;\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{remark} \nYannakakis showed that finding the minimal fill-in (or\nequivalently the best pivots) during\nGaussian elimination is an NP-complete task\n\\cite{MR604513}. \nIn numerical algorithms, heuristics have been developed; they comprise\nminimal degree ordering, cost functions or nested dissection\n(see,\ne.g.,\n\\cite{MR1135327,MR2124398,MR1642639}).\nThese heuristics for reducing fill-in in the numerical setting often\nassume symmetric and invertible matrices, and do not take into account\nthat new zeros may be produced by elimination operations ($a_{ij} =\na_{ij} + \\delta_i \\cdot a_{kj}$), as is the case with matrices over finite\nfields. Algorithm \\ref{alg:reord} was thus proposed in \\cite{DumVil02} to\ntake those new zeros into account, using a local optimization of a\ncost function at each elimination step.\n\\end{remark}\n\n\\subsubsection{Structured matrices and displacement\n rank\\index{Displacement rank}}\nOriginating from the seminal paper \\cite{MR537629}, most of the\nalgorithms dealing with structured matrices use the displacement rank\\index{Displacement rank}\napproach \\cite{MR1843842}.\n\\begin{definition}\nFor $A\\in {\\mathbb F}^{m\\times m}$ and $B\\in{\\mathbb F}^{n\\times n}$, the {\\em Sylvester\n (resp. Stein)\nlinear displacement operator $\\bigtriangledown_{A,B}$\n(resp. $\\bigtriangleup_{A,B}$)} satisfies, for $M\\in{\\mathbb F}^{m\\times n}$:\n\\begin{align*}\n\\bigtriangledown_{A,B}(M) = AM - MB\\\\\n\\bigtriangleup_{A,B}(M) = M - AMB\n\\end{align*}\nA pair of matrices $(Y,Z)\\in{\\mathbb F}^{m\\times \\alpha} \\times {\\mathbb F}^{n\\times\n \\alpha}$ is an {\\em $A,B$-Sylvester-generator\\index{Sylvester\n generator} of length $\\alpha$} (resp. 
Stein\\index{Stein\n generator}) for $M$ if $\\bigtriangledown_{A,B}(M) = Y Z^T$\n(resp. $\\bigtriangleup_{A,B}(M) = Y Z^T$). \n\\end{definition}\nThe main idea behind algorithms for structured matrices is to use such\ngenerators as a compact data structure, in cases where the\ndisplacement has low rank\\index{Rank}. \n\nUsual choices of matrices $A$ and $B$ are diagonal matrices and\ncyclic down shift matrices:\n\\begin{definition}\n$\\mathbb{D}_x, x\\in{\\mathbb F}^n$ is the diagonal matrix whose $(i,i)$\n entry is $x_i$.\\\\\n$\\mathbb{Z}_{n,\\varphi},\\varphi\\in{\\mathbb F}$ is the $n\\times n$ unit\n circulant matrix having $\\varphi$ at position $(1,n)$, ones in the\n subdiagonal $(i+1,i)$ and zeros elsewhere.\n\\end{definition}\n\n\\begin{table}[htbp]\n\\begin{tabular}{|c|c|c|c|c|}\n\\multicolumn{2}{|c|}{operator matrices}& class of structured & rank\\index{Rank} of\n& number of flops \\\\\nA & B & matrices $M$ & $\\bigtriangledown_{A,B}(M)$ & for computing $M\n\\cdot v$\\\\\n\\hline\n$\\mathbb{Z}_{n,1}$& $\\mathbb{Z}_{n,0}$& Toeplitz\\index{Toeplitz matrix} and its inverse& $\\leq\n2$& $\\BigO{(m+n)\\log(m+n)}$\\\\\n$\\mathbb{Z}_{n,1}$& $\\mathbb{Z}_{n,0}^T$& Hankel\\index{Hankel matrix} and its inverse& $\\leq 2$& $\\BigO{(m+n)\\log(m+n)}$\\\\\n$\\mathbb{Z}_{n,0}+\\mathbb{Z}_{n,0}^T$&$\\mathbb{Z}_{n,0}+\\mathbb{Z}_{n,0}^T$ & Toeplitz\\index{Toeplitz matrix}+Hankel\\index{Hankel matrix}& $\\leq 4$& $\\BigO{(m+n)\\log(m+n)}$\\\\\n$\\mathbb{D}_{x}$& $\\mathbb{Z}_{n,0}$& Vandermonde\\index{Vandermonde matrix}& $\\leq 1$& $\\BigO{(m+n)\\log^2(m+n)}$\\\\\n$\\mathbb{Z}_{n,0}$& $\\mathbb{D}_{x}$ & inverse of Vandermonde\\index{Vandermonde matrix}& $\\leq 1$& $\\BigO{(m+n)\\log^2(m+n)}$\\\\\n$\\mathbb{Z}_{n,0}^T$& $\\mathbb{D}_{x}$ & transpose of Vandermonde\\index{Vandermonde matrix}& $\\leq 1$& $\\BigO{(m+n)\\log^2(m+n)}$\\\\\n$\\mathbb{D}_{x}$& $\\mathbb{D}_{y}$ & Cauchy\\index{Cauchy matrix} and its inverse& $\\leq 1$& $\\BigO{(m+n)\\log^2(m+n)}$\\\\\n\\end{tabular}\n\\caption{Complexity of the matrix-vector product for some structured matrices}\\label{tab:struct}\n\\end{table}\nAs matrix-vector products with such structured matrices are closely related\nto computations with polynomials and rational functions, these matrices can\nbe multiplied by vectors fast, in nearly linear time, as shown in table\n\\ref{tab:struct} (a toy illustration for the Toeplitz case is sketched at\nthe end of this subsection). \nTherefore the algorithms of section \\ref{ssec:blackbox} can naturally\nbe applied to structured matrices, to yield almost $\\BigO{n^2}$ time\nlinear algebra.\n\nNow, if the displacement rank\\index{Displacement rank} is small, there exist \nalgorithms quasi-linear in $n$, the dimension of the matrices, which\nover finite fields are essentially variations or extensions of the\nMorf\/Bitmead-Anderson divide-and-conquer \\cite{Mor80,MR591427} or\nCardinal's \\cite{MR1696212} approaches.\nThe method is based on dividing the original problem repeatedly into\ntwo subproblems with one leading principal submatrix and the related\nSchur complement. This leads to $\\BigO{\\alpha^2n^{1+o(1)}}$ system solvers,\nwhose complexity bound has recently been reduced to\n$\\BigO{\\alpha^{\\omega-1}n^{1+o(1)}}$ \\cite{MR2463004,JeaMou10}.\nWith few exceptions, all these algorithms need matrices in generic rank\nprofile. Over finite fields this can be achieved using Kaltofen and\nSaunders' unit upper-triangular Toeplitz preconditioners\n\\cite{MR1229306} and by controlling the displacement rank growth and\nnon-singularity issues \\cite{Kal94}. 
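\nAs announced above, here is our toy illustration of the first line of\ntable~\\ref{tab:struct}: an $m\\times n$ Toeplitz matrix-vector product is a\nslice of a polynomial product, so any fast polynomial multiplication yields a\nfast product. The sketch below uses a quadratic convolution for clarity; an\nFFT-based convolution would give the quasi-linear time of the table.\n\\begin{verbatim}\n# T*v mod p, where T is the m x n Toeplitz matrix with first column col\n# and first row row (row[0] == col[0]); T[i][j] = t[i - j + n - 1].\ndef toeplitz_matvec(col, row, v, p):\n    m, n = len(col), len(row)\n    t = list(reversed(row)) + col[1:]      # the m + n - 1 diagonals\n    conv = [0] * (len(t) + n - 1)\n    for a, x in enumerate(t):              # polynomial multiplication\n        for b, y in enumerate(v):\n            conv[a + b] = (conv[a + b] + x * y) % p\n    return conv[n - 1:n - 1 + m]           # middle coefficients = T*v\n\\end{verbatim}\n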
\n\n\\subsection{Hybrid methods}\n\\subsubsection{Hybrid sparse-dense methods}\nOverall, as long as the matrix fits into memory, Gaussian elimination\nmethods are usually faster than iterative methods, over finite fields\n\\cite{DumVil02}.\nThere are then heuristics trying to take the best of both\nstrategies. Among those we briefly mention the most widely used:\n\\begin{itemize}\n\\item Perform the Gaussian elimination with reordering\n (algorithm \\ref{alg:reord}) until the matrix is almost filled up. If the\n remaining non-eliminated part fits in memory as a dense matrix, switch to\n the dense methods of section \\ref{ssec:echelon}.\n\\item Maintain two sets of rows (or columns), sparse and\n dense. Favor elimination on the sparse set. This is\n particularly adapted to index calculus \\cite{LamOdl91}. \n\\item Perform a preliminary reordering in order to cut the matrix into\n four quadrants, the upper left one being triangular. This, together\n with the above strategies, has proven effective on matrices which are\n already quasi-triangular, e.g., Gr{\\\"o}bner bases\n computations in finite fields \\cite{FauLac10}.\n\\item If the rank\\index{Rank} is very small compared to the dimension of the\n matrix, one can use left and right highly rectangular projections to\n manipulate smaller structures \\cite{MR2402272}.\n\\item The arithmetic cost, and thus timing predictions, are easier to\n obtain for iterative methods than for elimination methods. On the other\n hand, the number of non-zero elements usually increases during an\n elimination, thus providing a lower\n bound on the remaining time to triangularize. Thus a heuristic is to\n perform one matrix-vector product with the original matrix and then\n eliminate using Gaussian elimination. If at one point the lower\n bound for the elimination time surpasses the predicted iterative one, or\n if the algorithm runs out of memory, stop the elimination and\n switch to the iterative methods \\cite{DurSauWan03}.\n\\end{itemize}\n\n\\subsubsection{Block-iterative methods}\n\nIterative methods based on one-dimensional projections, such as the Wiedemann and\nLanczos algorithms, can be generalized with block projections.\nVia efficient preconditioning \\cite{MR1878939}, these extensions to the\nscalar iterative methods can present enhanced properties:\n\\begin{itemize}\n\\item Usage of dense sub-blocks, after multiplications of blocks of\n vectors with the sparse matrix or the blackboxes, allows for\n better locality and optimization of memory accesses, via the\n application of the methods of section~\\ref{ssec:blas}.\n\\item Applying the matrix to several vectors simultaneously introduces more\n parallelism \\cite{MR1236735,MR1192970,MR1687279}.\n\\item Also, their probability of success increases with the size of the\n considered blocks, especially over small fields \\cite{MR1270621,Vil97}.\n\\end{itemize}\n\n\\begin{definition}\nLet $X\\in {\\mathbb F}_q^{k\\times n}$, $Y\\in {\\mathbb F}_q^{n\\times k}$ and $H_i =XA^iY$ for\n$i=0\\dots n\/k$.\nThe matrix minimal\\index{Minimal polynomial} polynomial of the sequence $H_i$ is the matrix polynomial\n$F_{X,A,Y} \\in {\\mathbb F}_q[x]^{k\\times k}$ of least degree, with its leading degree\nmatrix column-reduced, that annihilates the sequence $(H_i)$.\n\\end{definition}\n\n\\begin{theorem}\nThe degree $d$ matrix minimal\\index{Minimal polynomial} polynomial of a block sequence $(H_i) \\in ({\\mathbb F}_q^{k\\times\nk})^{\\mathbb Z}$ can be computed in $\\BigO{k^3d^2}$ using block versions of 
Hermite-Pad\\'e\napproximation and the extended Euclidean algorithm~\\cite{MR1779720}, or of the Berlekamp-Massey\nalgorithm~\\cite{MR1192970,MR1270621,Vil97}. Further improvements\nby~\\cite{MR1779720,MR2049765,MR2035204,MR2120701} bring this complexity down\nto $\\BigO{(k^\\omega d)^{1+o(1)}}$, using a matrix extended Euclidean algorithm.\n\\end{theorem}\n\n\\begin{algorithm}{[Nullspace vector]\\label{alg:nullspace}}\n\\begin{algorithmic}\n\\REQUIRE{ $A\\in {\\mathbb F}_q^{n\\times n}$}\n\\ENSURE {$\\omega \\in {\\mathbb F}_q^n$ a vector in the nullspace of $A$}\n\\STATE Pick $X\\in {\\mathbb F}_q^{k\\times n},Y\\in {\\mathbb F}_q^{n\\times k}$ uniformly at random;\n\\STATE Compute the sequence $H_i=XA^iY$;\n\\STATE Compute $F_{X,A,Y}$ the matrix minimal\\index{Minimal polynomial} polynomial;\n\\STATE Let $f=f_rx^r+\\dots+f_dx^d$, with $f_r\\neq 0$, be a column of $F_{X,A,Y}$;\n\\STATE Return $\\omega=Yf_r +AYf_{r+1}+ \\dots +A^{d-r}Yf_{d}$;\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{remark}\nThese block-Krylov techniques are used to achieve the best known time\ncomplexities for several computations with black-box matrices over a finite\nfield or the ring of integers: computing the\ndeterminant, the characteristic\\index{Characteristic polynomials} polynomial~\\cite{MR2120701} and the solution of a linear system\nof equations~\\cite{MR2396196}.\n\\end{remark}\n\n\\AllRefCited{MR2124398,MR0269546,MR1779720,MR591427,BooBra09,MR2463004,MR0331751,MR1696212,MR1878939,MR1236735,MR1192970,MR1449760,MR1056627,MR2500374,DumFouSal11,DumGauGieGioHovKalSauTurVil02,MR2035233,MR2738206,MR2280540,DumVil02,DurSauWan03,MR1834824,Ebe03,MR2396196,MR1809985,FauLac10,MR1657129,MR1805174,MR2035204,MR2501869,MR1642639,JeaMou10,MR537629,MR1687279,Kal94,MR1270621,MR1229306,MR1056629,MR2120701,MR796306,LamOdl91,Lam96,MR2402272,Mor80,MR1805128,MR1843842,MR2402276,MR1948725,StoVil00,Sto10,MR2049765,Tur02,Vas11,Vil97,MR1849765,Vil03,WhaPetDon01,MR831560,MR0297115,MR604513,MR1135327}\n