diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfjpy" "b/data_all_eng_slimpj/shuffled/split2/finalzzfjpy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfjpy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $(v_{t})_{t\\in\\mathbb Z}$ be a complex circularly symmetric Gaussian stationary process with zero mean and covariance function $(r_k)_{k\\in\\mathbb Z}$ with $r_k=\\mathbb{E}[v_{t+k}v^*_{t}]$ and $r_k\\to0$ as $k\\to\\infty$. We observe $N$ independent copies of $(v_t)_{t\\in\\mathbb Z}$ over the time window $t\\in\\{0,\\ldots,T-1\\}$, and stack the observations in a matrix $V_T=[ v_{n,t} ]_{n,t = 0}^{N-1, T-1}$. This matrix can be written as $V_T=W_TR_T^{1\/2}$, where $W_T\\in\\mathbb{C}^{N\\times T}$ has independent $\\mathcal{CN}(0,1)$ (standard circularly symmetric complex Gaussian) entries and $R_T^{1\/2}$ is any square root of the Hermitian nonnegative definite Toeplitz $T\\times T$ matrix\n\\begin{equation*}\nR_T \\triangleq \\left[ r_{i-j} \\right]_{0\\leq i,j\\leq T-1} = \\begin{bmatrix}\nr_0 & r_{1} & \\ldots & r_{T-1} \\\\ \nr_{-1} & \\ddots & \\ddots & \\vdots \\\\ \n\\vdots & \\ddots & \\ddots & r_{1}\\\\ \nr_{1-T} & \\ldots & r_{-1} & r_0\n\\end{bmatrix}.\n\\end{equation*}\nA classical problem in signal processing is to estimate $R_T$ from the observation of $V_T$. \nWith the growing importance of multi-antenna array processing, there has recently been a renewed interest for this estimation problem in the regime of large system dimensions, {\\it i.e.} for both $N$ and $T$ large. \n\nAt the core of the various estimation methods for $R_T$ are the biased and unbiased estimates $\\hat{r}_{k,T}^b$ and $\\hat{r}_{k,T}^u$ for $r_k$, respectively, defined by\n\\begin{align*}\n\t\\hat{r}_{k,T}^b &= \\frac{1}{NT}\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n\t\\hat{r}_{k,T}^u &= \\frac{1}{N(T-|k|)} \\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1}\n\\end{align*}\nwhere $\\mathbbm{1}_A$ is the indicator function on the set $A$.\nDepending on the relative rate of growth of $N$ and $T$, the matrices $\\widehat{R}_{T}^b = [\\hat{r}_{i-j,T}^b]_{0 \\leq i,j \\leq T-1}$ and $\\widehat{R}_{T}^u = [\\hat{r}_{i-j,T}^u]_{0 \\leq i,j \\leq T-1}$ may not satisfy $\\Vert R_T - \\widehat{R}_{T}^b \\Vert \\overset{\\rm a.s.}{\\longrightarrow} 0$ or $\\Vert R_T - \\widehat{R}_{T}^u \\Vert \\overset{\\rm a.s.}{\\longrightarrow} 0$. An important drawback of the biased entry-wise estimate lies in its inducing a general asymptotic bias in $\\widehat{R}_{T}^b$; as for the unbiased entry-wise estimate, it may induce too much inaccuracy in the top-right and bottom-left entries of $\\widehat{R}_{T}^u$. The estimation paradigm followed in the recent literature generally consists instead in building banded or tapered versions of $\\widehat{R}_{T}^b$ or $\\widehat{R}_{T}^u$ ({\\it i.e.} by weighting down or discarding a certain number of entries away from the diagonal), exploiting there the rate of decrease of $r_k$ as $k\\to\\infty$ \\cite{WuPour'09,BickLev'08a,LamFan'09,XiaoWu'12,CaiZhangZhou'10,CaiRenZhou'13}.\nSuch estimates use the fact that $\\Vert R_T-R_{\\gamma(T),T}\\Vert \\to 0$ with $R_{\\gamma,T} = [ [R_T]_{i,j} \\mathbbm{1}_{|i-j| \\leq \\gamma}]$ for some well-chosen functions $\\gamma(T)$ (usually satisfying $\\gamma(T)\\to \\infty$ and $\\gamma(T)\/T\\to 0$) and restrict the study to the consistent estimation of $R_{\\gamma(T),T}$. 
The aforementioned articles concentrate in particular on choices of functions $\gamma(T)$ that ensure optimal rates of convergence of $\Vert R_T-\widehat{R}_{\gamma(T),T}\Vert$ for the banded or tapered estimate $\widehat{R}_{\gamma(T),T}$.
These procedures, although theoretically optimal, suffer from several practical limitations. First, they assume {\it a priori} knowledge of the rate of decrease of $r_k$ (and restrict these rates to specific classes). Moreover, even if this rate were known in practice, the results, being asymptotic in nature, do not provide explicit rules for selecting $\gamma(T)$ for practical finite values of $N$ and $T$. Finally, the operations of banding and tapering do not guarantee the positive definiteness of the resulting covariance estimate.

In the present article, we consider instead that the only constraint on $r_k$ is $\sum_{k=-\infty}^\infty |r_k|<\infty$ and estimate $R_T$ from the standard (non-banded and non-tapered) estimates $\widehat{R}_T^b$ and $\widehat{R}_T^u$. The consistency of these estimates, which fails in general, is enforced here by the regime $N,T\to\infty$ with $N/T\to c\in(0,\infty)$. This setting is well suited to applications in which the finite values of $N$ and $T$ are sufficiently large and of similar order of magnitude.
Another context where a non-banded Toeplitz rectification of the estimated 
covariance matrix leads to a consistent estimate in the spectral norm is 
studied in \cite{val-loub-icassp14}. 

Our specific contribution lies in the establishment of concentration inequalities for the random variables $\Vert R_T-\widehat{R}_T^b\Vert$ and $\Vert R_T-\widehat{R}_T^u\Vert$. It is shown specifically that, for all $x>0$, $\mathbb{P}[\Vert R_T-\widehat{R}_T^b \Vert> x ]$ decays at least at the exponential rate $e^{-CT}$ and $\mathbb{P}[\Vert R_T-\widehat{R}^u_T\Vert > x ]$ at least at the rate $e^{-CT/\log T}$, for some $C>0$ depending on $x$. Aside from the consistency in norm, this implies as a corollary that, as long as $\limsup_T\Vert R_T^{-1}\Vert<\infty$, for $T$ large enough, $\widehat{R}^u_T$ is positive definite with overwhelming probability ($\widehat{R}^b_T$ is nonnegative definite by construction).

For application purposes, the results are then extended to the case where $V_T$ is changed into $V_T+P_T$ for a rank-one matrix $P_T$. Under some conditions on the right singular space of $P_T$, we show that the concentration inequalities hold identically. The application is that of single-source detection (modeled through $P_T$) by an array of $N$ sensors embedded in a temporally correlated noise (modeled by $V_T$). To perform detection, $R_T$ is estimated from $V_T+P_T$ as $\widehat{R}_T^b$ or $\widehat{R}_T^u$, which is used as a whitening matrix, before applying a generalized likelihood ratio test (GLRT) procedure on the whitened observation. Simulations corroborate the theoretical consistency of the test. 

The remainder of the article is organized as follows. The concentration inequalities for both the biased and unbiased estimates are established in Section~\ref{unperturbed}. The generalization to the rank-one perturbation model is presented in Section~\ref{sig+noise} and applied in the practical context of source detection in Section~\ref{detect}.

{\it Notations:} The superscript $(\cdot)^{\sf H}$ denotes Hermitian transpose, $\left\| X \right\|$ stands for the spectral norm of a matrix and the Euclidean norm of a vector, and $\| \cdot \|_\infty$ is the $\sup$ norm of a function.
The notations ${\\cal N}(a,\\sigma^2)$ and ${\\cal CN}(a,\\sigma^2)$ represent the real and complex circular Gaussian distributions with mean $a$ and variance $\\sigma^2$. For $x \\in \\mathbb{C}^{m}$, $D_x=\\diag (x)=\\diag (x_0, \\ldots, x_{m-1} )$ is the diagonal matrix having on its diagonal the elements of the vector $x$.\nFor $x=[x_{-(m-1)},\\ldots,x_{m-1}]^{\\sf T} \\in \\mathbb{C}^{2m+1}$, the matrix ${\\cal T}(x) \\in \\mathbb{C}^{m \\times m}$ is the Toeplitz matrix built from $x$ with entries $[{\\cal T}(x)]_{i,j}=x_{j-i}$.\nThe notations $\\Re(\\cdot)$ and $\\Im(\\cdot)$ stand for the real and the imaginary parts respectively.\n\n\n\\section{Performance of the covariance matrix estimators} \n\\label{unperturbed} \n\n\\subsection{Model, assumptions, and results}\n\\label{subsec-model} \n\nLet $(r_k)_{k\\in\\mathbb Z}$ be a doubly \ninfinite sequence of covariance coefficients. For any $T \\in \\mathbb N$, let \n$R_T = \\mathcal T( r_{-(T-1)},\\ldots, r_{T-1})$, a Hermitian nonnegative definite matrix. \nGiven $N = N(T) > 0$, consider the \nmatrix model \n\\begin{equation}\nV_T = [ v_{n,t} ]_{n,t = 0}^{N-1, T-1} = W_T R_T^{1\/2}\n\\label{model1}\n\\end{equation}\nwhere $W_T = [ w_{n,t} ]_{n,t = 0}^{N-1, T-1}$ has independent ${\\cal CN}(0,1)$ entries. \nIt is clear that $r_k = \\mathbb{E} [v_{n,t+k} v_{n,t}^*]$ for any $t$, $k$, and $n \\in \\{0,\\ldots, N-1\\}$.\n\nIn the following, we shall make the two assumptions below.\n\\begin{assumption} \n\\label{ass-rk} \nThe covariance coefficients $r_k$ are absolutely summable and $r_0 \\neq 0$. \n\\end{assumption} \nWith this assumption, the covariance function \n\\[\n{\\boldsymbol\\Upsilon}(\\lambda) \\triangleq \n\\sum_{k=-\\infty}^\\infty r_k e^{-\\imath k\\lambda}, \\quad \n\\lambda \\in [0, 2\\pi) \n\\] \nis continuous on the interval $[0, 2\\pi]$. Since $\\| R_T \\| \\leq \\| \\boldsymbol\\Upsilon \\|_\\infty$ (see \\emph{e.g.} \\cite[Lemma 4.1]{Gray'06}), Assumption \\ref{ass-rk} \nimplies that $\\sup_T \\| R_T \\| < \\infty$. \n\nWe assume the following asymptotic regime which will be simply denoted as ``$T\\rightarrow\\infty$'':\n\\begin{assumption} \n\\label{ass-regime} \n$T \\rightarrow \\infty$ and $N\/T \\rightarrow c > 0$.\n\\end{assumption} \n\n\nOur objective is to study the performance of two estimators of the \ncovariance function frequently considered in the literature. These estimators are defined as \n\\begin{align}\n\\label{est-b} \n\\hat{r}_{k,T}^b&= \n\\frac{1}{NT} \\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}\nv_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n\\label{est-u} \n\\hat{r}_{k,T}^u&=\\frac{1}{N(T-|k|)}\n\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \n\\mathbbm{1}_{0 \\leq t+k \\leq T-1}.\n\\end{align}\nSince $\\mathbb{E} \\hat{r}_{k,T}^b = ( 1 - |k|\/T ) r_k$ and $\\mathbb{E} \\hat{r}_{k,T}^u = r_k$,\nthe estimate $\\hat{r}_{k,T}^b$ is biased while $\\hat{r}_{k,T}^u$ is unbiased. 
Let also 
\begin{align}
	\label{est-Rb}
\widehat R^b_T &\triangleq {\cal T}\left( \hat{r}_{-(T-1),T}^b, \ldots, 
\hat{r}_{(T-1),T}^b \right) \\
	\label{est-Ru}
\widehat R^u_T &\triangleq {\cal T} \left( \hat{r}_{-(T-1),T}^u, \ldots, 
\hat{r}_{(T-1),T}^u \right).
\end{align}
A well-known advantage of $\widehat R^b_T$ over $\widehat R^u_T$ as an estimate of $R_T$ is its structural nonnegative definiteness.
In this section, results on the spectral behavior of these matrices are provided in the form of concentration inequalities on $\| \widehat R^b_T - R_T \|$ and $\| \widehat R^u_T - R_T \|$: 
\begin{theorem}
\label{th-biased} 
Let Assumptions~\ref{ass-rk} and \ref{ass-regime} hold true and let $\widehat R^b_T$ be defined as in \eqref{est-Rb}.
Then, for any $x>0$,
\begin{equation*}
\mathbb{P} \left[\norme{\widehat R_T^b - R_T} > x \right] \leq
\exp \left( -cT \left( \frac{x}{\| \boldsymbol\Upsilon \|_\infty} - 
 \log \left( 1 + \frac{x}{\| \boldsymbol\Upsilon \|_\infty} \right) + o(1) \right)
\right)
\end{equation*}
where $o(1)$ is with respect to $T$ and depends on $x$.
\end{theorem}
\begin{theorem}
\label{th-unbiased} 
Let Assumptions~\ref{ass-rk} and \ref{ass-regime} hold true and let $\widehat R^u_T$ be defined as in \eqref{est-Ru}.
Then, for any $x>0$,
\begin{equation*}
\mathbb{P} \left[\norme{\widehat R_T^u - R_T} > x \right] 
\leq 
\exp \left(- \frac{cTx^2}{4\norme{\boldsymbol\Upsilon}_{\infty}^2\log T} (1 + o(1)) \right) 
\end{equation*}
where $o(1)$ is with respect to $T$ and depends on $x$.
\end{theorem}

A consequence of these theorems, obtained by the Borel--Cantelli lemma, is that $\| \widehat R^b_T - R_T \| \to 0$ and 
$\| \widehat R^u_T - R_T \| \to 0$ almost surely as $T \to \infty$. 

The slower rate $T/\log T$ in the exponent for the unbiased estimator may be attributed to the increased inaccuracy of the estimates of $r_k$ for values of $|k|$ close to $T-1$. 

We now turn to the proofs of Theorems \ref{th-biased} and \ref{th-unbiased},
starting with some basic mathematical results that will be needed throughout 
the proofs. 

\subsection{Some basic mathematical facts} 

\begin{lemma}
\label{lm-fq} 
For $x,y \in \mathbb{C}^{m}$ and $A \in \mathbb{C}^{m\times m}$,
\[
\left| x^{\sf H} A x - y^{\sf H} A y \right| \leq 
\norme{A} (\norme{x}+\norme{y})\norme{x-y}.
\]
\end{lemma}
\begin{proof}
\begin{align*} 
\left| x^{\sf H} A x - y^{\sf H} A y \right| &= 
\left| x^{\sf H} A x - y^{\sf H} A x + y^{\sf H} A x - y^{\sf H} A y \right| \\
&\leq \left| (x - y)^{\sf H} A x \right| + \left| y^{\sf H} A (x - y) \right| \\
&\leq \norme{A}(\norme{x} + \norme{y}) \norme{x-y}.
\end{align*}
\end{proof}

\begin{lemma} 
\label{chernoff} 
Let $X_0, \ldots, X_{M-1}$ be independent $\mathcal{CN}(0,1)$ random 
variables. Then, for any $x > 0$, 
\[
\mathbb{P} \left[ \frac1M \sum_{m=0}^{M-1} (|X_m|^2 - 1) > x \right] 
\leq \exp \left( -M ( x - \log(1+x) ) \right) . 
\] 
\end{lemma}
 
\begin{proof} 
This is a classical Chernoff bound.
Indeed, given $\\xi \\in (0,1)$, we have\nby the Markov inequality \n\\begin{align*}\n\\mathbb{P} \\Bigl[ M^{-1} \\sum_{m=0}^{M-1} (|X_m|^2 - 1) > x \\Bigr] &= \\mathbb{P}\\left[ \\exp \\left( \\xi \\sum_{m=0}^{M-1} |X_m|^2 \\right) \n> \\exp \\xi M (x+1) \\right] \\\\\n&\\leq \\exp(-\\xi M (x+1)) \n\\mathbb{E} \\left[ \\exp \\left( \\xi \\sum_{m=0}^{M-1} |X_m|^2 \\right) \\right] \\\\\n&= \\exp \\left(- M \\left( \\xi(x+1) + \\log(1-\\xi) \\right) \\right) \n\\end{align*} \nsince $\\mathbb{E} \\left[\\exp (\\xi |X_m|^2) \\right] = 1\/(1-\\xi)$. The result follows upon \nminimizing this expression with respect to $\\xi$. \n\\end{proof}\n\n\\subsection{Biased estimator: proof of Theorem \\ref{th-biased}} \\label{biased}\nDefine\n\\begin{align*}\n\\widehat \\Upsilon^b_T(\\lambda) &\\triangleq \n\\sum_{k=-(T-1)}^{T-1} \\hat{r}_{k,T}^b e^{\\imath k \\lambda} \n\\\\\n\\Upsilon_T(\\lambda) &\\triangleq \n\\sum_{k=-(T-1)}^{T-1} r_k e^{\\imath k \\lambda}.\n\\end{align*} \nSince $\\widehat{R}_T^b-R_T$ is a Toeplitz matrix, from \\cite[Lemma 4.1]{Gray'06}, \n\\begin{equation*}\n\\norme{\\widehat{R}_T^b-R_T} \\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda) \\right| \n\\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) \\right| +\n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda) \\right|. \n\\end{equation*}\nBy Kronecker's lemma (\\cite[Lemma 3.21]{Kallenberg'97}), the rightmost term at the right-hand side satisfies \n\\begin{equation}\n\\label{determ-biased} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda)\\right| \n\\leq \\sum_{k=-(T-1)}^{T-1} \\frac{|k r_k|}{T} \n\\xrightarrow[T\\to\\infty]{} 0. \n\\end{equation} \nIn order to deal with the term \n$\\sup_{\\lambda \\in [0,2\\pi)} | \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) |$, \ntwo ingredients will be used. The first one is the \nfollowing lemma (proven in Appendix \\ref{anx-lm-qf}):\n\n\\begin{lemma}\n\\label{lemma_d_quad}\nThe following facts hold:\n\\begin{align*} \n\\widehat \\Upsilon_T^b(\\lambda) &= d_T(\\lambda)^{\\sf H}\\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) \\\\\n\\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) &= d_T(\\lambda)^{\\sf H} R_T d_T(\\lambda)\n\\end{align*} \nwhere $d_T(\\lambda)=1\/\\sqrt{T}\\left[1, e^{- \\imath\\lambda}, \\ldots, \ne^{-\\imath(T-1)\\lambda} \\right]^{\\sf T}$.\n\\end{lemma}\n\nThe second ingredient is a Lipschitz property of the function \n$\\| d_T(\\lambda) - d_T(\\lambda') \\|$ seen as a function of $\\lambda$. \nFrom the inequality $|e^{-\\imath t\\lambda}-e^{-\\imath t\\lambda'}| \n\\leq t|\\lambda-\\lambda'|$, we indeed have\n\\begin{equation} \n\\label{lipschitz} \n\\| d_T(\\lambda) - d_T(\\lambda') \\| = \n\\sqrt{\\frac{1}{T} \\sum_{t=0}^{T-1} |e^{-\\imath t\\lambda}-\ne^{-\\imath t\\lambda'}|^2} \\leq \\frac{T|\\lambda-\\lambda'|}{\\sqrt{3}} . \n\\end{equation}\n\nNow, denoting by $\\lfloor \\cdot \\rfloor$ the floor function and choosing \n$\\beta > 2$, define ${\\cal I}=\\left\\{0, \\ldots, \\lfloor T^{\\beta} \\rfloor - 1 \\right\\}$.\nLet $\\lambda_i=2 \\pi \\frac{i}{\\lfloor T^{\\beta} \\rfloor }$, $i \\in {\\cal I}$, be a regular discretization of the \ninterval $[0, 2\\pi]$. 
We write 
\begin{align*}
&\underset{\lambda \in [0, 2\pi)}{\text{sup}} \left| \widehat \Upsilon_T^b(\lambda) - \mathbb{E} \widehat \Upsilon_T^b(\lambda) \right| 
 \\&\leq 
\underset{i \in {\cal I}}{\text{max}} 
\underset{\lambda \in [\lambda_i,\lambda_{i+1}]}{\text{sup}} \Bigl(\left| \widehat \Upsilon_T^b(\lambda) 
- \widehat \Upsilon_T^b(\lambda_i) \right| + \left| \widehat \Upsilon_T^b(\lambda_i) 
- \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right| + \left| \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) 
- \mathbb{E} \widehat \Upsilon_T^b(\lambda) \right|\Bigr)\\ &\leq 
\underset{i \in {\cal I}}{\text{max}} 
\underset{\lambda \in [\lambda_i,\lambda_{i+1}]}{\text{sup}} 
\left| \widehat \Upsilon_T^b(\lambda) - \widehat \Upsilon_T^b(\lambda_i) \right| + \underset{i \in {\cal I}}{\text{max}} \left| \widehat \Upsilon_T^b(\lambda_i) 
- \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right| + \underset{i \in {\cal I}}{\text{max}} 
\underset{\lambda \in [\lambda_i,\lambda_{i+1}]}{\text{sup}} 
\left| \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) - \mathbb{E} \widehat \Upsilon_T^b(\lambda) \right| \\ &\triangleq \chi_1 + \chi_2 + \chi_3 . 
\end{align*} 
With the help of Lemma \ref{lemma_d_quad} and \eqref{lipschitz}, 
we shall provide concentration inequalities on the random terms $\chi_1$ 
and $\chi_2$ and a bound on the deterministic term $\chi_3$. 
This is the purpose of the three following lemmas. Here and in the 
remainder, $C$ denotes a positive constant independent of $T$; this constant 
may change from one expression to the next. 

\begin{lemma}
\label{chi1} 
There exists a constant $C > 0$ such that for any $x > 0$ and any $T$ large
enough, 
\begin{equation*}
\mathbb{P} \left[ \chi_1 > x \right]
\leq \exp \Biggl( - cT^2\Biggl( \frac{x T^{\beta-2}}{C \| \boldsymbol\Upsilon\|_\infty} 
- \log \frac{x T^{\beta-2}}{C \| \boldsymbol\Upsilon\|_\infty} - 1 \Biggr) \Biggr) . 
\end{equation*}
\end{lemma} 

\begin{proof}
Using Lemmas \ref{lemma_d_quad} and \ref{lm-fq} along with 
\eqref{lipschitz}, we have 
\begin{align*} 
\left| \widehat \Upsilon_T^b(\lambda) - \widehat \Upsilon_T^b(\lambda_i) \right|
&= \left| d_T(\lambda)^{\sf H}\frac{V_T^{\sf H} V_T}{N} d_T(\lambda) 
 - d_T(\lambda_i)^{\sf H} \frac{V_T^{\sf H} V_T}{N}d_T(\lambda_i) \right| \\
&\leq 2 N^{-1} \norme{d_T(\lambda) - d_T(\lambda_i)} \| R_T\| \norme{W_T^{\sf H} W_T} \\
&\leq C | \lambda - \lambda_i | \| \boldsymbol\Upsilon\|_\infty \norme{W_T^{\sf H} W_T} 
\end{align*} 
where the bounded ratio $T/N$ has been absorbed into $C$. 
From $\| W_T^{\sf H} W_T \| \leq \tr(W_T^{\sf H} W_T)$ and 
Lemma~\ref{chernoff}, assuming $T$ large enough so that $f(x,T) \triangleq x T^{\beta-1} / (C N \| \boldsymbol\Upsilon\|_\infty)$ satisfies 
$f(x,T) \geq 1$, we then obtain 
\begin{align*}
\mathbb{P} \left[ \chi_1 > x \right] &\leq 
\mathbb{P} \left[ 
C \| \boldsymbol\Upsilon\|_\infty T^{-\beta} 
\sum_{t=0}^{T-1}\sum_{n=0}^{N-1} |w_{n,t}|^2 > x \right] \\
&= \mathbb{P} \left[ 
\frac{1}{NT} \sum_{n,t} (| w_{n,t} |^2 -1 ) > f(x,T) - 1 \right] \\
&\leq \exp( - NT( f(x,T) - \log f(x,T) - 1 ) ) . 
\end{align*} 
\end{proof}

\begin{lemma} 
\label{chi2} 
The following inequality holds:
\begin{equation*}
\mathbb{P} \left[ \chi_2 > x \right] \leq
2T^{\beta} \exp \Biggl( - c T \Biggl( \frac{x}{\|\boldsymbol\Upsilon\|_\infty} - 
\log \Bigl( 1 + \frac{x}{\|\boldsymbol\Upsilon\|_\infty} \Bigr) \Biggr) \Biggr). 
\end{equation*}
\end{lemma} 

\begin{proof}
From the union bound we obtain
\begin{align*} 
\mathbb{P} \left[ \chi_2 > x \right]
&\leq \sum_{i=0}^{\lfloor T^{\beta} \rfloor - 1} 
\mathbb{P} \left[ \left| \widehat \Upsilon_T^b(\lambda_i) 
 - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right| > x \right].
\end{align*} 
We shall bound each term of the sum separately. Since 
\begin{equation*} 
\mathbb{P} \left[ \left| \widehat \Upsilon_T^b(\lambda_i) 
 - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right| > x \right] 
= \mathbb{P} \left[ \widehat \Upsilon_T^b(\lambda_i) 
 - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) > x \right] +
\mathbb{P} \left[ - \left( \widehat \Upsilon_T^b(\lambda_i) - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right) > x \right] 
\end{equation*}
it will be enough to deal with the first right-hand side term, the second 
one being treated similarly.
Let $\eta_T(\lambda_i) \triangleq W_T q_T(\lambda_i) = 
\left[ \eta_{0,T}(\lambda_i), \ldots, \eta_{N-1,T}(\lambda_i) \right]^{\sf T}$ 
where $q_T(\lambda_i) \triangleq R_T^{1/2} d_T(\lambda_i)$. Observe that 
$\eta_T(\lambda_i) \sim \mathcal{CN}(0,\| q_T(\lambda_i) \|^2 I_N)$. We know from Lemma~\ref{lemma_d_quad} that 
\begin{equation}
\widehat \Upsilon_T^b(\lambda_i) - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) = 
\frac 1N \left( \| \eta_T(\lambda_i) \|^2 - \mathbb{E} \| \eta_T(\lambda_i) \|^2 \right).
\label{Epsilon_eta}
\end{equation}
From (\ref{Epsilon_eta}) and Lemma~\ref{chernoff}, we therefore get 
\begin{equation*} 
\mathbb{P}\left[ \widehat \Upsilon_T^b(\lambda_i) - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) > x \right]
\leq \exp \Biggl( -N \Biggl( \frac{x}{\| q_T(\lambda_i) \|^2} - 
 \log\Bigl( 1 + \frac{x}{\| q_T(\lambda_i) \|^2} \Bigr) \Biggr) \Biggr). 
\end{equation*} 
Noticing that $\| q_T(\lambda_i) \|^2 \leq \|\boldsymbol\Upsilon\|_\infty$ and that the function $f(x) = x - \log(1+x)$ is increasing for $x>0$, we get the result.
\end{proof} 

Finally, the bound on the deterministic term $\chi_3$ is provided by the following lemma:
\begin{lemma}
\label{chi3} 
$\displaystyle{ 
\chi_3 \leq C \| \boldsymbol\Upsilon\|_\infty T^{-\beta + 1}
}$. 
\end{lemma} 
\begin{proof} 
From Lemmas \ref{lemma_d_quad} and \ref{lm-fq} along with 
\eqref{lipschitz}, we obtain
\begin{align*} 
\left| \mathbb{E} \widehat \Upsilon_T^b(\lambda) - \mathbb{E} \widehat \Upsilon_T^b(\lambda_i) \right|
&= \left| d_T(\lambda)^{\sf H} R_T d_T(\lambda) - d_T(\lambda_i)^{\sf H} R_T d_T(\lambda_i) \right| \\
&\leq 2 \norme{R_T} \norme{d_T(\lambda) - d_T(\lambda_i)} \\
&\leq C \| \boldsymbol\Upsilon\|_\infty | \lambda - \lambda_i | T. 
\end{align*} 
From $\underset{i \in {\cal I}}{\text{max}} \underset{\lambda \in [\lambda_i,\lambda_{i+1}]}{\text{sup}} 
| \lambda - \lambda_i | = \lambda_{i+1} - \lambda_i = 2\pi / \lfloor T^{\beta} \rfloor \leq C T^{-\beta}$ we get the result.
\end{proof}

We now complete the proof of Theorem \ref{th-biased}.
From \n\\eqref{determ-biased} and Lemma~\\ref{chi3}, we get\n\\begin{equation*}\n\\mathbb{P} \\left[\\norme{\\widehat R_T^b - R_T} > x \\right] = \n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right]. \n\\end{equation*} \nGiven a parameter $\\epsilon_T \\in [0,1]$, we can write (with some slight notation abuse)\n\\begin{equation*}\n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right] \\leq\n\\mathbb{P}\\left[ \\chi_1 > x \\epsilon_T \\right] + \\mathbb{P}\\left[\\chi_2 > x (1 - \\epsilon_T) + o(1) \\right].\n\\end{equation*}\nWith the results of Lemmas \\ref{chi1} and \\ref{chi2}, setting $\\epsilon_T=1\/T$, we get\n\\begin{align*}\n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right] &\\leq\n\\mathbb{P}\\left[ \\chi_1 > \\frac{x}{T} \\right] + \\mathbb{P}\\left[ \\chi_2 > x (1 - \\frac{x}{T}) + o(1) \\right] \\\\\n&\\leq\n\\exp \\Bigl( - cT^2 \\Bigl( \\frac{x T^{\\beta-3}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} \n- \\log \\frac{x T^{\\beta-3}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} - 1 \\Bigr) \\Bigr) \\\\\n&+ \\exp \\Bigl( - cT \\Bigl( \\frac{x\\left( 1 - \\frac{1}{T} \\right)}{\\|\\boldsymbol\\Upsilon\\|_\\infty} - \n\\log \\Bigl( 1 + \\frac{x\\left( 1 - \\frac{1}{T} \\right)}{\\|\\boldsymbol\\Upsilon\\|_\\infty} \\Bigr) + o(1) \\Bigr) \\Bigr) \\\\\n&=\n\\exp \\Bigl( - cT \\Bigl( \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} - \n\\log \\Bigl( 1 + \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} \\Bigr) + o(1) \\Bigr) \\Bigr) \n\\end{align*}\nsince $\\beta>2$.\n\n\\subsection{Unbiased estimator: proof of Theorem \\ref{th-unbiased}}\\label{unbiased}\nThe proof follows basically the same main steps as for Theorem~\\ref{th-biased} \nwith an additional difficulty due to the scaling terms $1\/(T-|k|)$.\n\nDefining the function \n\\[ \n\\widehat \\Upsilon^u_T(\\lambda) \\triangleq \n\\sum_{k=-(T-1)}^{T-1} \\hat{r}_{k,T}^u e^{ik \\lambda}\n\\]\nwe have\n\\begin{equation*} \n\\norme{\\widehat{R}_T^u-R_T} \\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\Upsilon_T(\\lambda) \\right| = \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \n \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right|\n\\end{equation*} \nsince $\\Upsilon_T(\\lambda)=\\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda)$, the estimates\n$\\hat{r}_{k,T}^u$ being unbiased. \n\nIn order to deal with the right-hand side of this expression, we need the \nfollowing analogue of Lemma~\\ref{lemma_d_quad}, borrowed from \n\\cite{val-loub-icassp14} and proven here in Appendix~\\ref{anx-lm-qf2}.\n\\begin{lemma}\n\\label{lemma_d_quad2}\nThe following fact holds:\n\\begin{align*}\n\\widehat \\Upsilon_T^u(\\lambda) &= d_T(\\lambda)^{\\sf H} \n\\left( \\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right) d_T(\\lambda)\n\\end{align*} \nwhere $\\odot$ is the Hadamard product of matrices and where \n\\[\n\tB_T \\triangleq \\left[ \\frac{T}{T-|i-j|} \\right]_{0\\leq i,j\\leq T-1}.\n\\] \n\\end{lemma}\n\nIn order to make $\\widehat \\Upsilon_T^u(\\lambda)$ more tractable, we rely on the \nfollowing lemma which can be proven by direct calculation.\n\\begin{lemma}\n\\label{lemma_hadamard}\nLet $x$, $y \\in \\mathbb{C}^{m}$ and $A, B \\in C^{m \\times m}$. 
Then \n\\begin{equation*}\nx^{\\sf H}( A \\odot B ) y = \\tr (D_x^{\\sf H} A D_y B^{\\sf T}) \n\\end{equation*}\nwhere we recall $D_x = \\diag(x)$ and $D_y = \\diag(y)$.\n\\end{lemma}\n\nDenoting\n\\begin{align*}\nD_T(\\lambda) &\\triangleq \\diag (d_T(\\lambda)) = \\frac{1}{\\sqrt{T}} \\diag(1, e^{i\\lambda}, \\ldots, e^{i(T-1)\\lambda} ) \\\\\nQ_T(\\lambda) &\\triangleq R_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H}\n\\end{align*}\nwe get from Lemmas~\\ref{lemma_d_quad2} and \\ref{lemma_hadamard} \n\\begin{align}\n\\label{Upsilon_sum} \n\\widehat \\Upsilon_T^u(\\lambda) &= \\frac1N\n\\tr(D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} W_T^{\\sf H} W_T R_T^{1\/2} D_T(\\lambda) B_T)\n\\nonumber\\\\ &= \\frac1N \\tr (W_T Q_T(\\lambda) W_T^{\\sf H}) \\nonumber \\\\\n&= \\frac1N\n\\sum_{n=0}^{N-1} w_n^{\\sf H} Q_T(\\lambda) w_n \n\\end{align} \nwhere $w_i^{\\sf H}$ is such that $W_T=[w_0^{\\sf H},\\ldots,w_{N-1}^{\\sf H}]$. \n\nCompared to the biased case, the main difficulty lies here in the fact \nthat the matrices $B_T\/T$ and $Q_T(\\lambda)$ have unbounded spectral norm as $T\\to\\infty$. \nThe following lemma, proven in Appendix~\\ref{prf-lm-B-Q}, provides some information on the spectral behavior of these\nmatrices that will be used subsequently. \n\\begin{lemma}\n\\label{lm-B-Q} \nThe matrix $B_T$ satisfies \n\\begin{equation}\n\\label{norme_B} \n\\norme{B_T} \\leq \\sqrt{2} T( \\sqrt{\\log T} + C). \n\\end{equation} \nFor any $\\lambda \\in[0, 2\\pi)$, the eigenvalues \n$\\sigma_0, \\ldots, \\sigma_{T-1}$ of the matrix $Q(\\lambda)$ satisfy the \nfollowing inequalities: \n\\begin{eqnarray}\n\\sum_{t=0}^{T-1} \\sigma_t^2 &\\leq& 2 \\norme{\\boldsymbol\\Upsilon}_{\\infty}^2 \\log T + C \\label{sum_sigma2}\\\\ \n\\underset{t}{\\max} |\\sigma_t| &\\leq& \\sqrt{2} \\| \\boldsymbol \\Upsilon \\|_\\infty \n( \\log T )^{1\/2} + C \\label{sig_max} \\\\\n\\sum_{t=0}^{T-1} |\\sigma_t|^3 &\\leq& C ((\\log T)^{3\/2} +1)\\label{sum_sigma3}\n\\end{eqnarray}\nwhere the constant $C$ is independent of $\\lambda$.\n\\end{lemma} \n\nWe shall also need the following easily shown Lipschitz property of the \nfunction $\\norme{D_T(\\lambda) - D_T(\\lambda')}$: \n\\begin{equation} \n\\label{lipschitz_D} \n\\| D_T(\\lambda) - D_T(\\lambda') \\| \\leq \\sqrt{T}|\\lambda - \\lambda'|. \n\\end{equation}\n\nWe now enter the core of the proof of Theorem~\\ref{th-unbiased}. \nChoosing $\\beta>2$, let $\\lambda_i=2 \\pi \\frac{i}{\\lfloor T^{\\beta}\\rfloor}$, \n$i \\in {\\cal I}$, be a regular discretization of the interval $[0, 2\\pi]$ with \n${\\cal I}=\\left\\{0, \\ldots, \\lfloor T^{\\beta} \\rfloor - 1 \\right\\}$. We write \n\\begin{align*}\n\\underset{\\lambda \\in [0, 2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \n&\\leq \\underset{i \\in {\\cal I}}{\\text{max}} \n\\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda)-\\widehat \\Upsilon_T^u(\\lambda_i) \\right| + \\underset{i \\in {\\cal I}}{\\text{max}} \\left| \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right|\\\\& + \\underset{i \\in {\\cal I}}{\\text{max}} \\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \\\\ \n&\\triangleq \\chi_1 + \\chi_2 + \\chi_3 . 
\n\\end{align*} \n\nOur task is now to provide concentration inequalities on the random terms \n$\\chi_1$ and $\\chi_2$ and a bound on the deterministic term $\\chi_3$.\n\n\\begin{lemma}\n\\label{chi1-u} \nThere exists a constant $C > 0$ such that, if $T$ is large enough, the following \ninequality holds:\n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_1 > x \\right]\n\\leq \\exp \\left( - cT^2 \\left( \\frac{x T^{\\beta-2}}{C \\sqrt{\\log T}} \n- \\log \\frac{x T^{\\beta-2}}{C \\sqrt{\\log T}} - 1 \\right) \\right). \n\\end{equation*} \n\\end{lemma} \n\n\\begin{proof}\nFrom Equation~\\eqref{Upsilon_sum}, we have\n\\begin{align} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\widehat \\Upsilon_T^u(\\lambda_i) \\right|\n& = \\frac1N \\left| \\sum_{n=0}^{N-1} w_n^{\\sf H}\\left( Q_T(\\lambda) - Q_T(\\lambda_i) \\right) w_n \\right| \\nonumber \\\\\n& \\leq \\frac1N \\sum_{n=0}^{N-1} \\left| w_n^{\\sf H} \\left( Q_T(\\lambda) - Q_T(\\lambda_i) \\right) w_n \\right| \\nonumber\\\\\n& \\leq \\frac1N \\norme{Q_T(\\lambda) - Q_T(\\lambda_i)} \\sum_{n=0}^{N-1} \\norme{w_n}^2 \\nonumber.\n\\end{align}\nThe norm above further develops as\n\\begin{align*} \n&\\norme{Q_T(\\lambda) - Q_T(\\lambda_i)}\\\\\n& \\leq \\norme{R_T} \\Vert D_T(\\lambda)B_TD_T(\\lambda)^{\\sf H}- \nD_T(\\lambda_i)B_TD_T(\\lambda)^{\\sf H} + D_T(\\lambda_i)B_TD_T(\\lambda)^{\\sf H} - \nD_T(\\lambda_i)B_TD_T(\\lambda_i)^{\\sf H} \\Vert \\\\\n&\\leq 2 \\norme{D_T(\\lambda)} \\norme{R_T} \\norme{B_T} \n\\norme{D_T(\\lambda) - D_T(\\lambda_i)} \\leq C T ( \\sqrt{\\log T} + 1) \\left| \\lambda - \\lambda_i \\right| \n\\end{align*}\nwhere we used \\eqref{norme_B}, \\eqref{lipschitz_D}, and $\\norme{D_T(\\lambda)}=1\/\\sqrt{T}$.\nUp to a change in $C$, we can finally write $\\norme{Q_T(\\lambda) - Q_T(\\lambda_i)} \\leq \nC T^{1-\\beta} \\sqrt{\\log T}$. Assume that $f(x,T) \\triangleq xT^{\\beta-2}\/\n\\left( C \\sqrt{\\log T} \\right)$ \nsatisfies $f(x,T) > 1$ (always possible for every fixed $x$ by taking $T$ large). 
Then we get by Lemma~\\ref{chernoff} \n\\begin{align*} \n\\mathbb{P} \\left[ \\chi_1 > x \\right] & \\leq \\mathbb{P} \\left( C N^{-1} T^{1-\\beta} \n\\sqrt{\\log T} \\sum_{n,t} \\left|w_{n,t}\\right|^2 > x \\right) \\\\ \n&= \\mathbb{P} \\left( \\frac{1}{NT} \\sum_{n,t} (\\left|w_{n,t}\\right|^2 - 1) \n> f(x,T) - 1 \\right) \\\\\n&\\leq\n\\exp \\left( - NT \\left( f(x,T) - \\log \\left(f(x,T)\\right) - 1 \\right) \\right).\n\\end{align*}\n\\end{proof}\n\nThe most technical part of the proof is to control the term $\\chi_2$, which we handle hereafter.\n\n\\begin{lemma}\n\\label{chi2-u} \nThe following inequality holds:\n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_2 > x \\right]\n\\leq \\exp \\left(- \\frac{cx^2T}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \\right).\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nFrom the union bound we obtain:\n\\begin{equation} \\label{union_bound}\n\\mathbb{P} \\left[ \\chi_2 > x \\right]\n\\leq \\sum_{i=0}^{\\lfloor T^{\\beta} \\rfloor - 1} \n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^u(\\lambda_i) \n - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right| > x \\right].\n\\end{equation} \nEach term of the sum can be written\n\\begin{equation*}\n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right| > x \\right] = \\mathbb{P} \\left[\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) > x \\right] + \\mathbb{P} \\left[- \\left(\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right) > x \\right].\n\\end{equation*}\nWe will deal with the term $\\psi_i = \\mathbb{P} \\left[\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) > x \\right]$, the term $\\mathbb{P} \\left[- \\left(\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right) > x \\right]$ being treated similarly.\nLet $Q_T(\\lambda_i) = U_T \\Sigma_T U_T^{\\sf H}$ be a spectral \nfactorization of the Hermitian matrix $Q_T(\\lambda_i)$ with \n$\\Sigma_T =\\diag (\\sigma_{0},\\ldots,\\sigma_{T-1})$. \nSince $U_T$ is unitary and $W_T$ has independent ${\\cal CN}(0,1)$ elements,\nwe get from Equation \\eqref{Upsilon_sum} \n\\begin{equation}\n\\label{Ups_u}\n\\widehat \\Upsilon_T^u(\\lambda_i) \\stackrel{\\cal L}{=} \n\\frac{1}{N}\\sum_{n=0}^{N-1} w_n^{\\sf H} \\Sigma_T(\\lambda_i) w_n \n= \\frac{1}{N}\\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} |w_{n,t}|^2 \\sigma_t\n\\end{equation}\nwhere $\\stackrel{\\cal L}{=}$ denotes equality in law. 
\nSince $\\mathbb{E} \\left[ e^{a|X|^2} \\right]= 1\/(1-a)$ when $X \\sim {\\cal CN}(0, 1)$ \nand $0 x \\right) \\nonumber \\\\ \n&\\leq \\mathbb{E}\\left[\\text{exp}\\Bigl( \n\\frac{\\tau}{N}\\sum_{n,t} |w_{n,t}|^2\\sigma_t \\Bigr) \\right]\n\\exp \\Bigl(-\\tau\\Bigl(x + \\sum_{t=0}^{T-1}\\sigma_t \\Bigr)\\Bigr) \\nonumber \\\\ \n&= \n\\exp \\Bigl( -\\tau \\Bigl(x+ \\sum_{t=0}^{T-1} \\sigma_t \\Bigr) \\Bigr) \n\\prod_{t=0}^{T-1} \\Bigl( 1- \\frac{\\sigma_t\\tau}{N} \\Bigr)^{-N} \n\\label{pro} \\\\\n&= \\exp\\Bigl(-\\tau \\Bigl(x+ \\sum_{t=0}^{T-1} \\sigma_t \\Bigr) \n - N \\sum_{t=0}^{T-1} \\log \\Bigl(1-\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr) \n\\nonumber\n\\end{align}\nfor any $\\tau$ such that $0 \\leq \\tau < \\underset{0\\leq t\\leq T-1}{\\text{min}}\\frac{N}{\\sigma_t}$.\nWriting $\\log(1-x) = - x - \\frac{x^2}{2} + R_3(x)$ with $\\left| R_3(x) \\right| \\leq \\frac{|x|^3}{3(1-\\epsilon)^3}$ when $|x|<\\epsilon<1$, we get\n\\begin{align}\n\\psi_i &\\leq\n\\exp \\Bigl(-\\tau x + N \\sum_{t=0}^{T-1}\\Bigl( \\frac{\\sigma_t^2\\tau^2}{2N^2} \n+ R_3 \\Bigl(\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr) \\Bigr) \\nonumber \\\\\n&\\leq \\exp \\Bigl(-N \\Bigl( \\frac{\\tau x}{N} - \\frac{\\tau^2}{2N^2} \\sum_{t=0}^{T-1} \\sigma_t^2 \\Bigr) \\Bigr) \\exp \\Bigl( N \\sum_{t=0}^{T-1} \\Bigl| R_3 \\Bigl(\\frac{\\sigma_t \\tau}{N} \\Bigr) \\Bigr| \\Bigr) \\label{expr}.\n\\end{align} \nWe shall manage this expression by using Lemma~\\ref{lm-B-Q}. In order to \ncontrol the term $\\exp(N \\sum |R_3(\\cdot)|)$, we make the choice\n\\begin{equation*}\n\\tau = \\frac{axT}{\\log T}\n\\end{equation*}\nwhere $a$ is a parameter of order one to be optimized later.\nFrom \\eqref{sig_max} we get \n$\\max_t\\frac{\\sigma_t \\tau}{N} = O \\left( (\\log T)^{-1\/2} \\right)$. Hence, \nfor all $T$ large, $\\tau < \\min_t \\frac{N}{\\sigma_t}$. Therefore, \\eqref{pro} \nis valid for this choice of $\\tau$ and for $T$ large. Moreover, for $\\epsilon$ \nfixed and $T$ large, $\\frac{\\sigma_t \\tau}{N} < \\epsilon <1$ so that for \nthese $T$\n\\begin{equation*}\nN \\sum_{t=0}^{T-1} \\Bigl| R_3 \\Bigl(\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr| \n\\leq \\frac{a^3T^3x^3}{3N^2(1-\\epsilon)^3 (\\log T)^3} \\sum_{t=0}^{T-1} |\\sigma_t|^3 \n= O \\left( {T}{(\\log T)^{-3\/2}}\\right) \n\\end{equation*}\nfrom (\\ref{sum_sigma3}).\nPlugging the expression of $\\tau$ in (\\ref{expr}), we get \n\\begin{equation*}\n\\psi_i \\leq \\exp \\Bigl(-N \\Bigl( \\frac{a T x^2}{(\\log T)N} - \\frac{a^2T^2x^2}{2N^2(\\log T)^2} \\sum_{t=0}^{T-1} \\sigma_t^2 \\Bigr) \\Bigr) \\exp \\Bigl( C \\Bigl( {T}{(\\log T)^{-3\/2}} \\Bigr) \\Bigr) . \n\\end{equation*}\nUsing (\\ref{sum_sigma2}), we have\n\\begin{equation*}\n\\psi_i \\leq \\exp \\Bigl(-\\frac{x^2T}{\\log T} \n\\Bigl(a - \\frac{\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2a^2T}{N} \\Bigr) \\Bigr) \n\\exp\\Bigl( \\frac{CT}{(\\log T)^{3\/2}} \\Bigr). \n\\end{equation*}\nThe right hand side term is minimized for $a=\\frac{N}{2T\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2}$ which finally gives\n\\begin{equation*}\n\\psi_i \n\\leq \\exp \\Bigl(- \\frac{Nx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \\Bigr).\n\\end{equation*}\nCombining the above inequality with (\\ref{union_bound}) (which induces additional $o(1)$ terms in the argument of the exponential) concludes the lemma.\n\\end{proof}\n\n\\begin{lemma}\n\\label{chi3-u} \n$\\displaystyle{ \n\\chi_3 \\leq C T^{-\\beta+2} \\sqrt{\\log T}\n}$. 
\n\\end{lemma}\n\n\\begin{proof}\nFrom Lemma~\\ref{lemma_d_quad2}, $\\norme{R_T \\odot B_T} \\leq \\norme{R_T}\\norme{B_T}$ (see \\cite[Theorem 5.5.1]{HornJoh'91}), and \\eqref{lipschitz}, we get:\n\\begin{equation*}\n\\left| \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \n\\leq 2 \\norme{d_T(\\lambda) - d_T(\\lambda_i)} \\norme{R_T} \\norme{B_T}\n\\leq C T^2 \\left| \\lambda - \\lambda_{i} \\right| \\| \\boldsymbol\\Upsilon\\|_\\infty \\sqrt{\\log T}.\n\\end{equation*}\n\\end{proof}\nLemmas \\ref{chi1-u}--\\ref{chi3-u} show that $\\mathbb{P}[\\chi_2 > x]$ dominates the\nterm $\\mathbb{P}[\\chi_1 > x]$ and that the term $\\chi_3$ is vanishing. Mimicking the\nend of the proof of Theorem \\ref{th-biased}, we obtain Theorem \n\\ref{th-unbiased}. \n\nWe conclude this section by an empirical evaluation by Monte Carlo simulations of $\\mathbb{P}[\\Vert {\\widehat R_T - R_T}\\Vert > x]$ (curves labeled Biased and Unbiased), with $\\widehat R_T\\in\\{\\widehat R_T^b,\\widehat R_T^u\\}$, $T=2N$, $x=2$. This is shown in Figure~\\ref{det} against the theoretical exponential bounds of Theorems~\\ref{th-biased} and \\ref{th-unbiased} (curves labeled Biased theory and Unbiased theory). We observe that the rates obtained in Theorems~\\ref{th-biased} and \\ref{th-unbiased} are asymptotically close to optimal.\n\n\\begin{figure}[H]\n\\center\n\t\\begin{tikzpicture}[font=\\footnotesize]\n\t\t\\renewcommand{\\axisdefaulttryminticks}{2} \n\t\t\\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n\t\t\\tikzstyle{every major grid}+=[style=densely dashed] \n\t\t\t \n\t\t\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.99,0.02)},anchor=south east}} \n\t\t\\tikzstyle{every axis y label}+=[yshift=-10pt] \n\t\t\\tikzstyle{every axis x label}+=[yshift=5pt]\n\t\t\\begin{axis}[\n\t\t\t\tgrid=major,\n\t\t\t\n\t\t\t\txlabel={$N$},\n\t\t\t\tylabel={$T^{-1} \\log \\left( \\mathbb P \\left[ \\norme {\\widehat R_T - R_T} > x \\right] \\right)$},\n\t\t\t ytick={-0.2,-0.15,-0.1,-0.05,0},\n yticklabels = {$-0.2$,$-0.15$,$-0.1$,$-0.05$,$0$},\n\t\t\t\n \n\t\t\t\n\t\t\t\txmin=10,\n\t\t\t\txmax=40, \n\t\t\t\tymin=-0.2, \n\t\t\t\tymax=0,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n\t\t\t]\n\n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,-0.052766)(12.000000,-0.051276)(14.000000,-0.050322)(16.000000,-0.049673)(18.000000,-0.049212)(20.000000,-0.048872)(22.000000,-0.048614)(24.000000,-0.048414)(26.000000,-0.048255)(28.000000,-0.048127)(30.000000,-0.048023)(32.000000,-0.047936)(34.000000,-0.047864)(36.000000,-0.047803)(38.000000,-0.047750)(40.000000,-0.047705)\n\n\n\n };\n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,-0.178687)(12.000000,-0.154996)(14.000000,-0.137000)(16.000000,-0.122875)(18.000000,-0.112712)(20.000000,-0.1042589)(22.000000,-0.096377)(24.000000,-0.091108)(26.000000,-0.085960)(28.000000,-0.082679)(30.000000,-0.080274)(32.000000,-0.077053)(34.000000,-0.074633)(36.000000,-0.071135)(38.000000,-0.069311)(40.000000,-0.067182)\n\n\n\n\n };\n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=o] plot 
coordinates{\n(10.000000,-0.011823)(12.000000,-0.010787)(14.000000,-0.010070)(16.000000,-0.009540)(18.000000,-0.009129)(20.000000,-0.008799)(22.000000,-0.008526)(24.000000,-0.008295)(26.000000,-0.008097)(28.000000,-0.007924)(30.000000,-0.007771)(32.000000,-0.007635)(34.000000,-0.007512)(36.000000,-0.007401)(38.000000,-0.007300)(40.000000,-0.007206)\n\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(10.000000,-0.063852)(12.000000,-0.050732)(14.000000,-0.041205)(16.000000,-0.034622)(18.000000,-0.029335)(20.000000,-0.025196)(22.000000,-0.021541)(24.000000,-0.018643)(26.000000,-0.015748)(28.000000,-0.013602)(30.000000,-0.012069)(32.000000,-0.010916)(34.000000,-0.010038)(36.000000,-0.007765)(38.000000,-0.007908)(40.000000,-0.007151)\n\n\n\n };\n\n \\legend{ {Biased theory},{Biased},{Unbiased theory},{Unbiased}}\n \\end{axis}\n\t\\end{tikzpicture}\n\\caption{Error probability of the spectral norm for $x=2$, $c=0.5$, $[R_T]_{k,l}=a^{|k-l|}$ with $a=0.6$.}\n\\label{det}\n\\end{figure}\n\n\\section{Covariance matrix estimators for the \\\\ ``Signal plus Noise'' model} \n\\label{sig+noise} \n\n\\subsection{Model, assumptions, and results}\n\\label{subsec-model-perturbed} \n\nConsider now the following model:\n\\begin{equation}\\label{model2}\n\tY_T = [y_{n,t}]_{\\substack{0\\leq n\\leq N-1 \\\\0 \\leq t\\leq T-1}} =P_T+V_T\n\\end{equation}\nwhere the $N\\times T$ matrix $V_T$ is defined in \\eqref{model1} and where $P_T$ satisfies the \nfollowing assumption: \n\\begin{assumption} \n\\label{ass-signal} \n$P_T \\triangleq \\boldsymbol h_T \\boldsymbol s_T^{\\sf H} \\Gamma_T^{1\/2}$ where $\\boldsymbol h_T\\in \\mathbb{C}^N$ is a \ndeterministic vector such that\n$\\sup_T \\| \\boldsymbol h_T \\| < \\infty$, the vector \n$\\boldsymbol s_T = (s_0, \\ldots, s_{T-1})^{\\sf T} \\in \\mathbb{C}^T$ is a random vector\nindependent of $W_T$ with the distribution ${\\cal CN}(0, I_T)$, and \n$\\Gamma_T = [\\gamma_{ij} ]_{i,j=0}^{T-1}$ is Hermitian nonnegative such that $\\sup_T \\| \\Gamma_T \\| < \\infty$. \n\\end{assumption}\n\nWe have here a model for a rank-one signal corrupted with a Gaussian \nspatially white and temporally correlated noise with stationary temporal\ncorrelations. Observe that the signal can also be temporally correlated. \nOur purpose is still to estimate the noise correlation matrix\n$R_T$. To that end, we use one of the estimators \\eqref{est-b} or \\eqref{est-u} \nwith the difference that the samples $v_{n,t}$ are simply replaced with the \nsamples $y_{n,t}$. It turns out that these estimators are still consistent in \nspectral norm. Intuitively, $P_T$ does not break the \nconsistence of these estimators as it can be seen as a rank-one \nperturbation of the noise term $V_T$ in which the subspace spanned by \n$(\\Gamma^{1\/2})^{\\sf H} \\boldsymbol s_T$ is ``delocalized'' enough so as not to perturb \nmuch the estimators of $R_T$. In fact, we even have the following strong result.\n\\begin{theorem}\n\\label{th-perturb} \nLet $Y_T$ be defined as in \\eqref{model2} and let Assumptions~\\ref{ass-rk}--\\ref{ass-signal} hold. 
Define the
estimates 
\begin{align*}
\hat{r}_{k,T}^{bp}&= 
\frac{1}{NT} \sum_{n=0}^{N-1}\sum_{t=0}^{T-1}
y_{n,t+k} y_{n,t}^* \mathbbm{1}_{0 \leq t+k \leq T-1} \\
\hat{r}_{k,T}^{up}&= 
\frac{1}{N(T-|k|)} \sum_{n=0}^{N-1}\sum_{t=0}^{T-1}
y_{n,t+k} y_{n,t}^* \mathbbm{1}_{0 \leq t+k \leq T-1}
\end{align*}
and let 
\begin{align*}
\widehat R_T^{bp} &= {\cal T}(\hat{r}_{-(T-1),T}^{bp},\ldots,\hat{r}_{(T-1),T}^{bp}) \\
\widehat R_T^{up} &= {\cal T}(\hat{r}_{-(T-1),T}^{up},\ldots,\hat{r}_{(T-1),T}^{up}).
\end{align*}
Then for any $x > 0$, 
\begin{equation*} 
\mathbb{P} \left[\norme{\widehat R_T^{bp} - R_T} > x \right]
\leq 
\exp \Bigl( -cT \Bigl( \frac{x}{\| \boldsymbol\Upsilon \|_\infty} - 
 \log \Bigl( 1 + \frac{x}{\| \boldsymbol\Upsilon \|_\infty} \Bigr) + o(1) \Bigr) 
\Bigr) 
\end{equation*} 
and 
\begin{equation*} 
\mathbb{P} \left[\norme{\widehat R_T^{up} - R_T} > x \right]
\leq 
\exp \Bigl(- \frac{cTx^2}{4\norme{\boldsymbol\Upsilon}_{\infty}^2\log T} (1 + o(1)) 
\Bigr). 
\end{equation*} 
\end{theorem}

Before proving this theorem, some remarks are in order.
\begin{remark}
Theorem \ref{th-perturb} generalizes without difficulty to the case where $P_T$ has a fixed rank $K>1$. This captures the situation of $K\ll \min(N,T)$ sources.
\end{remark} 
\begin{remark}
	Similarly to the proofs of Theorems \ref{th-biased} and \ref{th-unbiased}, the proof of Theorem~\ref{th-perturb} uses concentration inequalities for functionals of Gaussian random variables based on the moment generating function and the Chernoff bound. Exploiting instead McDiarmid's concentration inequality \cite{ledoux}, it is possible to adapt Theorem~\ref{th-perturb} to $\boldsymbol s_T$ with bounded (instead of Gaussian) entries. This adaptation may account for the discrete sources encountered in digital communication signals. 
\end{remark} 

\subsection{Main elements of the proof of Theorem \ref{th-perturb}} 

We restrict the proof to the more technical part, which concerns $\widehat R^{up}_T$. 
Defining
\begin{eqnarray*}
\widehat \Upsilon_T^{up}(\lambda) \triangleq \sum_{k=-(T-1)}^{T-1} \hat{r}_{k,T}^{up} e^{\imath k \lambda}
\end{eqnarray*}
and recalling that $\Upsilon_T(\lambda) = \sum_{k=-(T-1)}^{T-1} r_{k} e^{\imath k \lambda}$, 
we need to establish a concentration inequality on \linebreak
$\mathbb{P} \left[ \sup_{\lambda\in[0,2\pi)} | \widehat \Upsilon_T^{up}(\lambda) - 
\Upsilon_T(\lambda) | > x \right]$.
For any $\\lambda\\in[0,2\\pi)$, the term \n$\\widehat \\Upsilon_T^{up}(\\lambda)$ can be written as \n(see Lemma~\\ref{lemma_d_quad2}) \n\\begin{align*}\n\\widehat \\Upsilon_T^{up}(\\lambda)&= \nd_T(\\lambda)^{\\sf H} \\left(\\frac{Y_T^{\\sf H}Y_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&= \nd_T(\\lambda)^{\\sf H} \\left(\\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&+ \nd_T(\\lambda)^{\\sf H} \\left(\\frac{P_T^{\\sf H}V_T+V_T^{\\sf H}P_T}{N} \\odot B_T \n\\right) d_T(\\lambda) \\\\\n&+ d_T(\\lambda)^{\\sf H} \\left(\\frac{P_T^{\\sf H}P_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&\\triangleq \\widehat \\Upsilon_T^{u}(\\lambda) + \n\\widehat \\Upsilon_T^{cross}(\\lambda) + \\widehat \\Upsilon_T^{sig}(\\lambda) \n\\end{align*} \nwhere $B_T$ is the matrix defined in the statement of \nLemma~\\ref{lemma_d_quad2}.\nWe know from the proof of Theorem~\\ref{th-unbiased} that \n\\begin{equation} \n\\label{noise-term} \n\\mathbb{P} \\left[\\sup_{\\lambda\\in[0,2\\pi)} \n| \\widehat \\Upsilon_T^{u}(\\lambda) - \\Upsilon_T(\\lambda) | > x \\right]\n\\leq \n\\exp \\left(- \\frac{cTx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \n\\right) . \n\\end{equation} \nWe then need only handle the terms $\\widehat \\Upsilon_T^{cross}(\\lambda)$ and \n$\\widehat \\Upsilon_T^{sig}(\\lambda)$. \n\nWe start with a simple lemma. \n\\begin{lemma}\n\\label{lm-prod-gauss}\nLet $X$ and $Y$ be two independent ${\\cal N}(0,1)$ random variables. Then for \nany $\\tau \\in(-1,1)$, \n\\[\n\\mathbb{E}[\\exp(\\tau XY)] = (1 - \\tau^2)^{-1\/2}.\n\\]\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n\\mathbb{E}[\\exp(\\tau XY)] &= \\frac{1}{2\\pi} \\int_{\\mathbb R^2} \ne^{\\tau xy} e^{-x^2\/2} e^{-y^2\/2} \\, dx\\, dy \\\\\n&= \\frac{1}{2\\pi} \\int_{\\mathbb R^2} \ne^{-(x-\\tau y)^2\/2} e^{-(1 - \\tau^2)y^2\/2} \\, dx\\, dy \\\\\n&= (1 - \\tau^2)^{-1\/2} .\n\\end{align*}\n\\end{proof} \nWith this result, we now have\n\\begin{lemma} \n\\label{cross-term}\nThere exists a constant $a > 0$ such that \n\\[\n\\mathbb{P}\\left[ \\sup_{\\lambda \\in[0,2\\pi)} | \\widehat \\Upsilon_T^{cross}(\\lambda) | \n> x \\right] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}}(1 + o(1)) \\Bigr) . \n\\]\n\\end{lemma}\n\\begin{proof}\nWe only sketch the proof of this lemma. We show that for any \n$\\lambda \\in [0, 2\\pi]$, \n\\[\n\\mathbb{P}[ | \\widehat \\Upsilon_T^{cross}(\\lambda) | \n> x ] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}} + C \\Bigr) \n\\]\nwhere $C$ does not depend on $\\lambda \\in [0,2\\pi]$. The lemma is then proven by a discretization\nargument of the interval $[0, 2\\pi]$ analogous to what was done in the \nproofs of Section~\\ref{unperturbed}.\nWe shall bound $\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x ]$, the term\n$\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) < - x ]$ being bounded similarly. 
\nFrom Lemma~\\ref{lemma_hadamard}, we get \n\\begin{align*}\n\\widehat \\Upsilon_T^{cross}(\\lambda) &= \n\\tr \\Bigl( D_T(\\lambda)^{\\sf H} \\frac{P^{\\sf H}_T V_T + V_T^{\\sf H} P_T}{N} \nD_T(\\lambda) B_T \\Bigr) \\\\\n&= \\tr \\frac{D_T(\\lambda)^{\\sf H} (\\Gamma_T^{1\/2})^{\\sf H} {\\boldsymbol s}_T \n{\\boldsymbol h}_T^{\\sf H} W_T R_T^{1\/2} D_T(\\lambda) B_T}{N} \\\\\n&\\phantom{=} + \n\\tr \\frac{D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} W_T^{\\sf H} {\\boldsymbol h}_T \n{\\boldsymbol s}_T^{\\sf H} \\Gamma_T^{1\/2} D_T(\\lambda) B_T}{N} \\\\\n&= \\frac 2N \\Re ( \\boldsymbol h_T^{\\sf H} W_T G_T(\\lambda) \\boldsymbol s_T ) \n\\end{align*} \nwhere $G_T(\\lambda) = R_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} \n(\\Gamma_T^{1\/2})^{\\sf H}$. Let $G_T(\\lambda) = U_T \\Omega_T \n\\widetilde U_T^{\\sf H}$ be a singular value decomposition of $G_T(\\lambda)$ where \n$\\Omega = \\diag(\\omega_0, \\ldots, \\omega_{T-1})$. Observe that the vector \n$\\boldsymbol x_T \\triangleq W_T^{\\sf H} \\boldsymbol h_T = (x_0,\\ldots, x_{T-1})^{\\sf T}$ has \nthe distribution ${\\cal CN}(0, \\| \\boldsymbol h_T \\|^2 I_T)$. We can then write \n\\[\n\\widehat \\Upsilon_T^{cross}(\\lambda) \\stackrel{{\\cal L}}{=} \n\\frac 2N \\Re\\left( \\boldsymbol x_T^{\\sf H} \\Omega_T \\boldsymbol s_T \\right)\n= \\frac 2N \\sum_{t=0}^{T-1} \\omega_t ( \\Re x_t \\Re s_t + \\Im x_t \\Im s_t). \n\\]\nNotice that $\\{ \\Re x_t, \\Im x_t, \\Re s_t, \\Im s_t \\}_{t=0}^{T-1}$ are\nindependent with $\\Re x_t, \\Im x_t \\sim {\\cal N}(0, \\| \\boldsymbol h_T \\|^2\/2)$ and \n$\\Re s_t, \\Im s_t \\sim {\\cal N}(0, 1\/2)$. Letting $0 < \\tau < (\\sup_T \\| \\boldsymbol h_T \\|)^{-1}(\\sup_{\\lambda} \\| G_T(\\lambda) \\|)^{-1}$ and using Markov's inequality and Lemma~\\ref{lm-prod-gauss}, we get \n\\begin{align*} \n\\mathbb{P} \\left[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x \\right] &= \n \\mathbb{P} \\left[ e^{N \\tau \\widehat \\Upsilon_T^{cross}(\\lambda)} > e^{N \\tau x} \\right] \n\\leq e^{-N \\tau x} \\mathbb{E} \\left[ e^{2\\tau \\sum_t \n\\omega_t ( \\Re x_t \\Re s_t + \\Im x_t \\Im s_t)} \\right] \\\\\n&= e^{-N\\tau x} \\prod_{t=0}^{T-1} \\left( 1 - \\tau^2 \\omega_t^2 \\| \\boldsymbol h_T \\|^2 \\right)^{-1} \n= \\exp \\left( -N \\tau x - \\sum_{t=0}^{T-1} \\log( 1 - \\tau^2 \\omega_t^2 \\| \\boldsymbol h_T\\|^2 ) \\right). \n\\end{align*}\nMimicking the proof of Lemma~\\ref{lm-B-Q}, we can establish that \n$\\sum_t \\omega_t^2 = O(\\log T)$ and $\\max_t \\omega_t = O(\\sqrt{\\log T})$ \nuniformly in $\\lambda \\in [0, 2\\pi]$. Set $\\tau = b \/ \\sqrt{\\log T}$ where\n$b > 0$ is small enough so that \n$\\sup_{T,\\lambda} (\\tau \\| \\boldsymbol h_T \\| \\, \\| G_T(\\lambda) \\|) < 1$. \nObserving that $\\log(1-x) = O(x)$ for $x$ small enough, we get \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x ] \\leq \n\\exp \\bigl( -N bx\/\\sqrt{\\log T} + {\\cal E}(\\lambda, T) \\bigr) \n\\]\nwhere $| {\\cal E}(\\lambda, T) | \\leq (C \/ \\log T) \\sum_t \\omega_t^2 \\leq C$. \nThis establishes Lemma~\\ref{cross-term}. \n\\end{proof}\n\n\\begin{lemma} \n\\label{signal-term}\nThere exists a constant $a > 0$ such that \n\\[\n\\mathbb{P}\\left[ \\sup_{\\lambda \\in[0,2\\pi)} | \\widehat \\Upsilon_T^{sig}(\\lambda) | \n> x \\right] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}}(1 + o(1)) \\Bigr) . 
\n\\]\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{lemma_hadamard},\n\\begin{align*} \n\\widehat \\Upsilon_T^{sig}(\\lambda) &= \nN^{-1} \\tr ( D_T^{\\sf H} P_T^{\\sf H} P_T D_T B_T ) \\\\\n&= \\frac{\\| \\boldsymbol h_T \\|^2}{N} \\boldsymbol s_T^{\\sf H} G_T(\\lambda) \\boldsymbol s_T \n\\end{align*} \nwhere $G_T(\\lambda) = \\Gamma_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} \n(\\Gamma_T^{1\/2})^{\\sf H}$. By the spectral factorization \n$G_T(\\lambda) = U_T \\Sigma_T U_T^{\\sf H}$ with \n$\\Sigma_T = \\diag(\\sigma_0, \\ldots, \\sigma_{T-1})$, we get\n\\[\n\\widehat \\Upsilon_T^{sig}(\\lambda) \\stackrel{{\\cal L}}{=} \n\\frac{\\| \\boldsymbol h_T \\|^2}{N} \\sum_{t=0}^{T-1} \\sigma_t | s_t |^2 \n\\] \nand\n\\begin{align*} \n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] &\\leq \ne^{-N\\tau x} \\mathbb{E} \\Bigl[ e^{\\tau \\|\\boldsymbol h_T\\|^2 \\sum_t \\sigma_t | s_t|^2}\\Bigr] \\\\\n&= \\exp\\Bigl( -N \\tau x - \\sum_{t=0}^{T-1} \n\\log(1 - \\sigma_t \\tau \\|\\boldsymbol h_T\\|^2) \\Bigr) \n\\end{align*} \nfor any $\\tau \\in (0, 1 \/ (\\|\\boldsymbol h_T\\|^2 \\sup_\\lambda \\| G_T(\\lambda)\\|))$. \nLet us show that \n\\[\n| \\tr G_T(\\lambda) | \\leq C \\sqrt{\\frac{\\log T + 1}{T}} . \n\\] \nIndeed, we have \n\\begin{align*}\n| \\tr G_T(\\lambda) | &= N^{-1} | \\tr D_T B_T D_T^{\\sf H} \\Gamma_T | = \\frac 1N \\left| \\sum_{k,\\ell=0}^{T-1} \\frac{e^{-\\imath (k-\\ell)\\lambda} \n\\gamma_{\\ell,k}}{T-|k-\\ell|} \\right| \\\\ \n&\\leq \\Bigl( \\frac 1N \\sum_{k,\\ell=0}^{T-1} |\\gamma_{k,\\ell}|^2 \\Bigr)^{1\/2} \n\\Bigl( \\frac 1N \\sum_{k,\\ell=0}^{T-1} \\frac{1}{(T-|k-\\ell|)^2} \\Bigr)^{1\/2} \\\\\n&= \\Bigl(\\frac{\\tr \\Gamma_T\\Gamma_T^{\\sf H}}{N} \\Bigr)^{1\/2} \n\\Bigl(\\frac 2N (\\log T + C) \\Bigr)^{1\/2} \\leq C \\sqrt{\\frac{\\log T + 1}{T}} .\n\\end{align*} \nMoreover, similar to the proof of Lemma~\\ref{lm-B-Q}, we can show that \n$\\sum_t\\sigma_t^2 = O(\\log T)$ and $\\max_t |\\sigma_t| = O(\\sqrt{\\log T})$ \nuniformly in $\\lambda$. Taking $\\tau = b \/ \\sqrt{\\log T}$ for \n$b > 0$ small enough, and recalling that $\\log(1-x) = 1 - x + O(x^2)$\nfor $x$ small enough, we get that \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] \\leq \n\\exp\\Bigl( - \\frac{N bx}{\\sqrt{\\log T}} + \n\\frac{b \\| \\boldsymbol h_T \\|^2}{\\sqrt{\\log T}} \\tr G_T(\\lambda) + \n{\\cal E}(T,\\lambda) \\Bigr) \n\\]\nwhere $| {\\cal E}(T,\\lambda) | \\leq (C \/ \\log T) \\sum_t \\sigma_t^2 \\leq C$. \nWe therefore get \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] \\leq \n\\exp\\Bigl( - \\frac{N bx}{\\sqrt{\\log T}} + C \\Bigr) \n\\]\nwhere $C$ is independent of $\\lambda$. Lemma~\\ref{signal-term} is then obtained\nby the discretization argument of the interval $[0,2\\pi]$.\n\\end{proof}\n\nGathering Inequality~\\eqref{noise-term} with Lemmas~\\ref{cross-term} and \n\\ref{signal-term}, we get the second inequality of the statement of \nTheorem~\\ref{th-perturb}. \n\n\n\\section{Application to source detection} \\label{detect}\n\nConsider a sensor network composed of $N$ sensors impinged by zero (hypothesis $H_0$) or one (hypothesis $H_1$) source signal. 
The stacked signal matrix $Y_T=[y_0,\ldots,y_{T-1}]\in\mathbb{C}^{N\times T}$ from time $t=0$ to $t=T-1$ is modeled as
\begin{eqnarray}\label{model_det}
Y_T = \left\{
 \begin{array}{ll}
 V_T & \mbox{, $H_0$} \\
 \boldsymbol h_T \boldsymbol s_T^{\sf H} + V_T& \mbox{, $H_1$}
 \end{array}
\right.
\end{eqnarray}
where $\boldsymbol s_T^{\sf H}=[s_0^*,\ldots,s_{T-1}^*]$ gathers (hypothetical) independent $\mathcal{CN}(0,1)$ signals transmitted through the constant channel $\boldsymbol h_T \in \mathbb{C}^{N}$, and $V_T=W_T R_T^{1/2}\in\mathbb{C}^{N\times T}$ models a stationary noise matrix as in \eqref{model1}.

As opposed to standard procedures where preliminary pure noise data are available, we shall perform here an online signal detection test based solely on $Y_T$, by exploiting the consistency established in Theorem~\ref{th-perturb}.
The approach consists precisely in estimating $R_T$ by $\widehat{R}_T\in\{\widehat{R}_T^{bp},\widehat{R}_T^{up}\}$, which is then used as a whitening matrix for $Y_T$. The binary hypothesis test \eqref{model_det} can then be equivalently written
\begin{eqnarray}\label{model_w}
Y_T \widehat{R}_T^{-1/2}= \left\{
 \begin{array}{ll}
 W_T {R}_T^{1/2} \widehat{R}_T^{-1/2}& \mbox{, $H_0$} \\
 \boldsymbol h_T \boldsymbol s_T^{\sf H} \widehat{R}_T^{-1/2} + W_T {R}_T^{1/2} \widehat{R}_T^{-1/2}& \mbox{, $H_1$}.
 \end{array}
\right.
\end{eqnarray}
Since $\Vert R_T\widehat{R}_T^{-1}-I_T\Vert\to 0$ almost surely (by Theorem~\ref{th-perturb}, as long as $\inf_{\lambda\in[0,2\pi)} {\boldsymbol \Upsilon}(\lambda)>0$), for $T$ large, the decision on the hypotheses \eqref{model_w} can be handled by the generalized likelihood ratio test (GLRT) \cite{BianDebMai'11} by approximating $W_T {R}_T^{1/2} \widehat{R}_T^{-1/2}$ as a purely white noise. We then have the following result.
\begin{theorem}\label{th_detection}
	Let $\widehat{R}_T$ be either of $\widehat{R}_T^{bp}$ or $\widehat{R}_T^{up}$, as defined in Theorem~\ref{th-perturb}, for $Y_T$ now following model \eqref{model_det}. Further assume $\inf_{\lambda\in[0,2\pi)} {\boldsymbol \Upsilon}(\lambda)>0$ and define the test
\begin{equation}\label{glrt_est}
\alpha = \frac{N\norme{Y_T \widehat{R}_T^{-1}Y_T^{\sf H}}}{\tr \left( Y_T \widehat{R}_T^{-1} Y_T^{\sf H} \right)} ~ \overset{H_0}{\underset{H_1}{\lessgtr}} ~ \gamma
\end{equation}
where $\gamma\in\mathbb R^+$ satisfies $\gamma>(1+\sqrt c)^2$. Then, as $T \to \infty$,
\begin{align*}
	\mathbb{P} \left[ \alpha \geq \gamma \right] \to \left\{ \begin{array}{ll} 0 &,~H_0 \\ 1 &,~H_1. \end{array}\right.
\end{align*}
\end{theorem}

Recall from \cite{BianDebMai'11} that the decision threshold $(1+\sqrt{c})^2$ corresponds to the almost sure limiting largest eigenvalue of $\frac1T W_T W_T^{\sf H}$, that is, the right edge of the support of the Mar\v{c}enko--Pastur law. 

Simulations are performed hereafter to assess the performance of the test \eqref{glrt_est} under several system settings. We take ${\boldsymbol h}_T$ to be the steering vector ${\boldsymbol h}_T=\sqrt{p/N}\left[1, \ldots , e^{2\imath\pi \theta (N-1)}\right]^{\sf T}$ with $\theta = 10^\circ$ and $p$ a power parameter. The matrix $R_T$ models an autoregressive process of order 1 with parameter $a$, {\it i.e.} $[R_T]_{k,l}=a^{|k-l|}$.
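As an illustration, here is a minimal numerical sketch of the whole detection chain \eqref{glrt_est} (Python/NumPy with SciPy's \texttt{toeplitz}; the function name and code organization are ours):
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def glrt_detect(Y, gamma):
    # Y: N x T observation matrix; gamma: threshold above (1 + sqrt(N/T))**2.
    N, T = Y.shape
    # unbiased Toeplitz estimate of R_T computed from Y itself
    acf = np.array([np.vdot(Y[:, :T - k], Y[:, k:]) for k in range(T)])
    r_u = acf / (N * (T - np.arange(T)))
    R_hat = toeplitz(r_u.conj(), r_u)
    # test statistic alpha = N ||Y R_hat^{-1} Y^H|| / tr(Y R_hat^{-1} Y^H)
    M = Y @ np.linalg.solve(R_hat, Y.conj().T)
    alpha = N * np.linalg.norm(M, 2) / np.trace(M).real
    return alpha > gamma   # True: decide H_1 (source present)
\end{verbatim}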
\n\nIn Figure~\\ref{det1}, the detection error $1-\\mathbb{P} [ \\alpha \\geq \\gamma|H_1]$ of the test \\eqref{glrt_est} for a false alarm rate (FAR) $\\mathbb{P} [ \\alpha \\geq \\gamma|H_0 ]=0.05$ under $\\widehat{R}_T=\\widehat{R}_T^{up}$ (Unbiased) or $\\widehat{R}_T=\\widehat{R}_T^{bp}$ (Biased) is compared against the estimator that assumes $R_T$ perfectly known (Oracle), {\\it i.e.} that sets $\\widehat{R}_T=R_T$ in \\eqref{glrt_est}, and against the GLRT test that wrongly assumes temporally white noise (White), {\\it i.e.} that sets $\\widehat{R}_T=I_T$ in \\eqref{glrt_est}. The source signal power is set to $p=1$, that is, a signal-to-noise ratio (SNR) of $0$~dB; $N$ is varied from $10$ to $50$, and $T=N\/c$ for $c=0.5$ fixed. In the same setting as Figure~\\ref{det1}, the number of sensors is now fixed to $N=20$, $T=N\/c=40$, and the SNR (hence $p$) is varied from $-10$~dB to $4$~dB. The powers of the various tests are displayed in Figure~\\ref{det2} and compared to the detection methods, called Biased PN (pure noise) and Unbiased PN, which estimate $R_T$ from a pure noise sequence. The results of the proposed online method are close to those of Biased\/Unbiased PN, although the latter methods have the disadvantage of requiring a pure noise sequence to be available at the receiver. \n\nBoth figures suggest a close match in performance between Oracle and Biased, while Unbiased shows weaker performance. The gap between Biased and Unbiased confirms the theoretical conclusions. \n\n\\begin{figure}[H]\n\\center\n \\begin{tikzpicture}[font=\\footnotesize]\n \\renewcommand{\\axisdefaulttryminticks}{2} \n \\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n \\tikzstyle{every major grid}+=[style=densely dashed] \n\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.01,0.01)},anchor=south west}} \n \\tikzstyle{every axis y label}+=[yshift=-10pt] \n \\tikzstyle{every axis x label}+=[yshift=5pt]\n \\begin{semilogyaxis}[\n grid=major,\n \n xlabel={$N$},\n ylabel={$1-\\mathbb{P}[\\alpha>\\gamma|H_1]$},\n \n \n \n xmin=10,\n xmax=50, \n ymin=1e-4, \n ymax=1,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n ]\n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,0.318300)(15.000000,0.164000)(20.000000,0.069100)(25.000000,0.028400)(30.000000,0.013300)(35.000000,0.006100)(40.000000,0.002900)(45.000000,0.001200)(50.000000,0.0005000)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(10.000000,0.848100)(15.000000,0.385600)(20.000000,0.168900)(25.000000,0.063900)(30.000000,0.028900)(35.000000,0.012800)(40.000000,0.005200)(45.000000,0.002100)(50.000000,0.000900)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=triangle] plot coordinates{\n(10.000000,0.973000)(15.000000,0.953000)(20.000000,0.954000)(25.000000,0.957000)(30.000000,0.958000)(35.000000,0.958000)(40.000000,0.940000)(45.000000,0.970000)(50.000000,0.949000)\n\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=x] plot coordinates{\n(10.000000,0.200200)(15.000000,0.104000)(20.000000,0.043600)(25.000000,0.018700)(30.000000,0.008500)(35.000000,0.003300)(40.000000,0.0014500)(45.000000,0.000600)(50.000000,0.000200)\n\n\n };\n\n\t\\legend{{Biased},{Unbiased},{White},{Oracle}}\n \\end{semilogyaxis}\n \\end{tikzpicture}\n \\caption{Detection error versus $N$ with FAR$=0.05$, $p=1$, SNR$=0$ dB, $c=0.5$, and $a=0.6$.}\n\\label{det1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\center\n \\begin{tikzpicture}[font=\\footnotesize]\n 
\\renewcommand{\\axisdefaulttryminticks}{2} \n \\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n \\tikzstyle{every major grid}+=[style=densely dashed] \n\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.01,0.99)},anchor=north west}} \n \\tikzstyle{every axis y label}+=[yshift=-10pt] \n \\tikzstyle{every axis x label}+=[yshift=5pt]\n \\begin{axis}[\n grid=major,\n \n xlabel={SNR (dB)},\n ylabel={$\\mathbb{P}[\\alpha>\\gamma|H_1]$},\n \n \n \t \n xmin=-10,\n xmax=4, \n ymin=0, \n ymax=1,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n ]\n \n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(-15.000000,0.051200)(-14.000000,0.048100)(-13.000000,0.049100)(-12.000000,0.053100)(-11.000000,0.049800)(-10.000000,0.053200)(-9.000000,0.056100)(-8.000000,0.0593000)(-7.000000,0.067500)(-6.000000,0.073900)(-5.000000,0.110200)(-4.000000,0.190800)(-3.000000,0.330600)(-2.000000,0.538500)(-1.000000,0.766600)(0.000000,0.922000)(1.000000,0.985800)(2.000000,0.998600)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(-15.000000,0.053900)(-14.000000,0.049300)(-13.000000,0.049500)(-12.000000,0.055100)(-11.000000,0.050800)(-10.000000,0.054000)(-9.000000,0.057800)(-8.000000,0.060100)(-7.000000,0.068400)(-6.000000,0.076300)(-5.000000,0.100400)(-4.000000,0.159500)(-3.000000,0.248400)(-2.000000,0.427000)(-1.000000,0.652600)(0.000000,0.854200)(1.000000,0.964900)(2.000000,0.995100)(3.000000,0.999500)(4.000000,1.000000)(5.000000,1.000000)\n\n\n };\n \n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=star] plot coordinates{\n(-15.000000,0.047700)(-14.000000,0.047400)(-13.000000,0.044600)(-12.000000,0.049700)(-11.000000,0.049700)(-10.000000,0.051900)(-9.000000,0.059400)(-8.000000,0.063100)(-7.000000,0.073300)(-6.000000,0.099100)(-5.000000,0.149500)(-4.000000,0.240100)(-3.000000,0.399900)(-2.000000,0.613300)(-1.000000,0.816900)(0.000000,0.944300)(1.000000,0.991500)(2.000000,0.999200)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n };\n \n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=o] plot coordinates{\n(-15.000000,0.048300)(-14.000000,0.051100)(-13.000000,0.047100)(-12.000000,0.050300)(-11.000000,0.051800)(-10.000000,0.052900)(-9.000000,0.056300)(-8.000000,0.062600)(-7.000000,0.068100)(-6.000000,0.086500)(-5.000000,0.116600)(-4.000000,0.178600)(-3.000000,0.299600)(-2.000000,0.505700)(-1.000000,0.721500)(0.000000,0.897600)(1.000000,0.976000)(2.000000,0.996900)(3.000000,0.999900)(4.000000,0.999900)(5.000000,1.000000)\n\n };\n \n \n \\addplot[smooth,black,line width=0.5pt,mark=x] plot coordinates{\n(-15.000000,0.048800)(-14.000000,0.050800)(-13.000000,0.048300)(-12.000000,0.048900)(-11.000000,0.055600)(-10.000000,0.058300)(-9.000000,0.063100)(-8.000000,0.066000)(-7.000000,0.084300)(-6.000000,0.110800)(-5.000000,0.169400)(-4.000000,0.280300)(-3.000000,0.446500)(-2.000000,0.669800)(-1.000000,0.858100)(0.000000,0.963000)(1.000000,0.994300)(2.000000,0.999400)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n\n\n };\n\n\t\\legend{{Biased},{Unbiased},{Biased PN},{Unbiased PN},{Oracle}}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Power of detection tests versus SNR (dB) with FAR$=0.05$, $N=20$, $c=0.5$, and $a=0.6$.}\n\\label{det2}\n\\end{figure}\n\n\n\\begin{appendix}\n\\subsection{Proofs for Theorem~\\ref{th-biased}} \n\\subsubsection{Proof of Lemma \\ref{lemma_d_quad}} \n\\label{anx-lm-qf} 
\n\nExpanding the quadratic forms given in the statement of the lemma, we get \n\\begin{align*} \nd_T(\\lambda)^{\\sf H}\\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) &= \n\\frac{1}{NT}\\sum_{l,l'=0}^{T-1} e^{-\\imath(l'-l)\\lambda} [V_T^{\\sf H}V_T]_{l,l'} \\\\\n&= \\frac{1}{NT}\\sum_{l,l'=0}^{T-1} e^{-\\imath(l'-l)\\lambda} \n\\sum_{n=0}^{N-1} v^*_{n,l} v_{n,l'} \\\\ \n&= \\sum_{k=-(T-1)}^{T-1} e^{-\\imath k \\lambda} \n\\frac{1}{NT} \\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} v_{n,t}^* v_{n,t+k} \n\\mathbbm{1}_{0 \\leq t+k \\leq T-1}\\\\ \n&= \\sum_{k=-(T-1)}^{T-1} \\hat{r}_k^b e^{-\\imath k \\lambda}=\n\\widehat\\Upsilon_T^b(\\lambda), \n\\end{align*} \nand \n\\begin{align*} \n\\mathbb{E} \\left[ d_T(\\lambda)^{\\sf H} \\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) \\right] \n&= d_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} \n\\frac{\\mathbb{E}[ W_T^{\\sf H} W_T]}{N} R_T^{1\/2} d_T(\\lambda) \\\\ \n&= d_T(\\lambda)^{\\sf H} R_T d_T(\\lambda) .\n\\end{align*} \n\n\\subsection{Proofs for Theorem~\\ref{th-unbiased}}\n\\subsubsection{Proof of Lemma \\ref{lemma_d_quad2}} \n\\label{anx-lm-qf2} \nWe have \n\\begin{align*}\nd_T(\\lambda)^{\\sf H} \\left( \\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right) d_T(\\lambda) &=\n\\frac{1}{NT}\\sum_{l,l'=0}^{T-1}e^{\\imath(l-l') \\lambda} [V_T^{\\sf H}V_T]_{l,l'}\\frac{T}{T-|l-l'|} \\nonumber \\\\\n&= \\sum_{k=-(T-1)}^{T-1}e^{\\imath k\\lambda}\\frac{1}{N(T-|k|)}\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}v_{n,t}^*v_{n,t+k}\\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n&=\\sum_{k=-(T-1)}^{T-1} \\hat{r}_k^u e^{\\imath k \\lambda} =\\widehat \\Upsilon_T^u(\\lambda).\n\\end{align*}\n\n\\subsubsection{Proof of Lemma \\ref{lm-B-Q}} \n\\label{prf-lm-B-Q} \nWe start by observing that \n\\begin{align*}\n\\tr B_T^2 &= \\sum_{i,j=0}^{T-1} \\left[ B_T \\right]_{i,j}^2 \n= \\sum_{i,j=0}^{T-1} \\left( \\frac{T}{T-|i-j|} \\right)^2 \n= 2\\sum_{i>j}^{T-1} \\left( \\frac{T}{T-|i-j|} \\right)^2 + T \\\\\n&= 2 \\sum_{k=1}^{T-1} \\left( \\frac{T}{T-k} \\right)^2 \\left( T - k \\right) + T \n= 2 T^2 \\sum_{k=1}^{T-1} \\frac{1}{T-k} + T \n\\leq 2 T^2 \\left(\\log T + C \\right).\n\\end{align*}\nInequality \\eqref{norme_B} is then obtained upon noticing that \n$\\norme{B_T} \\leq \\sqrt{\\tr B_T^2}$. \n\nWe now show (\\ref{sum_sigma2}). \nUsing twice the inequality $\\tr (FG) \\leq \\norme{F} \\tr(G)$ when \n$F,G \\in \\mathbb{C}^{m \\times m}$ and $G$ is nonnegative definite \\cite{HornJoh'91}, \nwe get\n\\begin{align*}\n\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) &= \\tr Q_T(\\lambda_i)^2 \n= \\tr R_T D_T(\\lambda_i)B_T D_T(\\lambda_i)^{\\sf H} R_T D_T(\\lambda_i) \nB_T D_T(\\lambda_i)^{\\sf H} \\\\\n&\\leq \\norme{R_T} \\tr R_T (D_T(\\lambda_i) B_T D_T(\\lambda_i)^{\\sf H})^2 \\\\\n&\\leq T^{-2} \\norme{R_T}^2 \\tr (B_T^2) \\leq 2\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2 (\\log T + C). 
\n\\end{align*}\n\nInequality \\eqref{sig_max} is immediate since $\\norme{Q_T}^2 \\leq \\tr Q_T^2$.\n\nAs regards \\eqref{sum_sigma3}, by the Cauchy--Schwarz inequality,\n\\begin{align*}\n\\sum_{t=0}^{T-1} |\\sigma_t^3(\\lambda_i)| &= \n\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) |\\sigma_t(\\lambda_i)| \n\\leq \\sqrt{\\sum_{t=0}^{T-1} \\sigma_t^4(\\lambda_i) \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i)} \\\\ \n&\\leq \\sqrt{\\left(\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) \\right)^2 \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i)} = \\left( \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) \\right)^{3\/2}\\\\\n&\\leq C ( (\\log T)^{3\/2} +1 ).\n\\end{align*}\n\n\n\n\\end{appendix}\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{Intro:level1}Introduction}\nCoherent radiation can polarize the angular momentum distribution of an ensemble of atoms in various ways, creating different \npolarization moments, which modifies the way these atoms will interact with radiation. Carefully prepared spin-polarized \natoms can make the absorption highly dependent on frequency \n(electromagnetically induced transparency~\\cite{Harris:1990}), causing large values of the dispersion, which, in turn, \nare useful for such interesting effects as slow light~\\cite{Hau:1999} and optical information storage~\\cite{Liu:2001}. \nElectric and magnetic fields, external or inherent in the radiation fields, may also influence the time evolution of \nthe spin polarization and cause measurable changes in absorption or fluorescence intensity and\/or polarization. \nThese effects are the basis of many magnetometry schemes~\\cite{Scully:1992,Budker:2007}, \nand must be taken into account in atomic clocks~\\cite{Knappe:2005} and when searching for \nfundamental symmetry violations~\\cite{Budker:2002} or exotic physics such as \nan electric dipole moment of the electron~\\cite{Regan:2002}. \nSufficiently strong laser radiation creates atomic polarization in excited as well as in the ground state~\\cite{Auzinsh:OptPolAt}.\nThe polarization is destroyed when the Zeeman sublevel degeneracy is removed by a magnetic field. \nSince the ground state has a much longer lifetime, very narrow magneto-optical resonances can be created, which \nare related to the ground-state Hanle effect (see~\\cite{Arimondo:1996} for a review). Such resonances were first \nobserved in cadmium in 1964~\\cite{Lehmann:1964}. \n\nThe theory of magneto-optical resonances has been understood for some time (see \\cite{Budker:2002,Alexandrov:2005,Auzinsh:OptPolAt} for a review), \nand bright (opposite sign) resonances have also been observed and explained~\\cite{Kazantsev:1984,Renzoni:2001a,Alnis:2001}; the challenge in describing experiments \nlies in choosing the effects to be included in the numerical calculations so as to find a balance between computation time and accuracy.\nThe optical Bloch equations (OBEs) for the density matrix have been used as early as 1978 to model magneto-optical \nresonances~\\cite{Picque:1978}. In order to achieve greater accuracy, later efforts to model signals took into account \neffects such as Doppler broadening, the coherent properties of the laser radiation, \nand the mixing of magnetic sublevels in an external magnetic field to produce more and more \naccurate descriptions of experimental signals~\\cite{Auzinsh:2008}. 
Analytical models can also achieve excellent descriptions of \nexperimental signals at low laser powers in the regime of linear excitation where optical pumping plays a negligible \nrole~\\cite{Castagna:2011,Breschi:2012}. In recent \nyears, numerical calculations have achieved very good agreement even when optical pumping plays a role. \nHowever, as soon as the laser radiation begins to saturate the \nabsorption transition, the model's accuracy suffers. The explanation has been that at high radiation intensities, \nit is no longer possible to model the \nrelaxation of atoms moving in and out of the beam with a single rate constant~\\cite{Auzinsh:1983, Auzinsh:2008}. \nNevertheless, accurate numerical models for intense laser fields are very desirable, because such \nconditions arise in a number of experimental situations. Therefore, we have set out to model magneto-optical effects in the presence of \nintense laser radiation by taking better account of the fact that an atom experiences different laser intensity values as it \npasses through a beam. In practice, we solve the rate \nequations for the Zeeman coherences for different regions of the laser beam with a value of the Rabi frequency that more \nclosely approximates the real situation in that part of the beam.\nTo save computing time, stationary solutions to the rate equations for Zeeman sublevels and coherences are sought for each region~\\cite{Blushs:2004}. \nWith this simplification to take into account the motion of atoms through the beam, \nwe could now obtain accurate descriptions of experimental signals up to much higher intensities for reasonable computing times. \nMoreover, the model can be used to study the spatial distribution of the laser induced \nfluorescence within the laser beam. We performed such a study theoretically and experimentally using two overlapping lasers: \none spatially broad, intense pump laser, and a weaker, tightly focused, spatially narrow probe laser. \nThe qualitative agreement between experimental and theoretical fluorescence intensity profiles indicates \nthat the model is a useful tool for studying fluorescence dynamics\nas well as for modelling magneto-optical signals at high laser intensities. \n\n\n\\section{\\label{Theory:level1}Theory}\nThe theoretical model used here is a further development of previous efforts~\\cite{Auzinsh_crossing:2013}, which has been subjected \nto some initial testing in the specialized context of an extremely thin cell~\\cite{Auzinsh:2015}.\nThe description of coherent processes starts with the optical Bloch equation (OBE):\n\\begin{equation}\ni \\hbar \\frac{\\partial \\rho}{\\partial t} = \\left[\\hat{H},\\rho \\right]+ i \\hbar \\hat{R}\\rho,\n\\end{equation}\nwhere $\\rho$ is the density matrix describing the atomic state, $\\hat{H}$ is the Hamiltonian of the system, \nand $\\hat{R}$ is an operator that describes relaxation. These equations are transformed into rate equations \nthat are solved under stationary conditions in order to obtain the Zeeman coherences in the \nground ($\\rho_{g_ig_j}$) and excited ($\\rho_{e_ie_j}$) states~\\cite{Blushs:2004}. However, when the intensity distribution in the beam is not \nhomogeneous, more accurate results can be achieved by dividing the laser beam into concentric regions and solving the \nOBEs for each region separately while accounting for atoms that move into and out of each region as they fly through the \nbeam. Figure~\\ref{fig:dal1} illustrates the idea. 
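As a rough illustration of this assignment (a sketch, assuming the Gaussian intensity profile introduced below and the square-root scaling of the Rabi frequency with intensity), a region centered at radial distance $r_n$ from the axis of a beam with full width at half maximum $w$ would be attributed\n\\begin{equation*}\n\\Omega_R^{(n)}=\\Omega_{peak}\\,e^{-2\\ln 2 \\, r_n^2\/w^2},\n\\end{equation*}\nwhere $\\Omega_{peak}$ is the Rabi frequency at the beam center; the precise normalization used in our calculations is given at the end of this section. 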
\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{beam_division.pdf}\n\t\\caption{\\label{fig:dal1} Laser beam profile split into a number of concentric regions.}\n\\end{figure}\nThe top part of the figure shows the intensity profile of the laser beam, while \nthe bottom part of the figure shows a cross-section of the laser beam indicating the concentric regions. \n\nIn order to account for particles that leave one region and enter the next, an extra term must be added to the OBE:\n\\begin{equation}\n\\label{eq:transit}\n-i \\hbar\\hat{\\gamma_t} \\rho+i \\hbar\\hat{\\gamma_t} \\rho'.\n\\end{equation}\nIn this term, $\\rho'$ is the density matrix of the particles entering the region (identical to the density matrix of the \nprevious region), and $\\hat{\\gamma_t}$ is an operator that accounts for transit relaxation. This operator is \nessentially a diagonal matrix with elements $\\hat{\\gamma}_{t_{ij}}=(\\nicefrac{v_{yz}}{s_n})\\delta_{ij}$, where $v_{yz}$ \ncharacterizes the particle speed in the plane perpendicular to the beam and $s_n$ is the linear dimension of the region. \nTo simplify matters, we treat particle motion in only one direction and later average the results with those for particles \nmoving in the opposite direction. In that case, $\\rho'=\\rho^{n-1}$. \nThus, the rate equations for the density matrix $\\rho^n$ of the $n$\\textsuperscript{th} region become \n\\begin{align}\n\\label{eq:rate_region}\ni~\\hbar \\frac{\\partial\\rho^n}{\\partial t} &= \\left[ \\hat{H},\\rho^n\\right]+i ~\\hbar \\hat{R} \\rho^n -i~\\hbar\\hat{\\gamma_t}^n\\rho^n \\notag \\\\ \n& +i~\\hbar\\hat{\\gamma_t}^n\\rho^{n-1}-i~\\hbar\\hat{\\gamma_c}\\rho^n+i~\\hbar\\hat{\\gamma_c}\\rho^0.\n\\end{align}\nIn this equation, the relaxation operator $\\hat{R}$ describes spontaneous relaxation only and $\\hat{\\gamma_c}$ is the collisional relaxation \nrate, which, however, becomes significant only at higher gas densities. \n\nNext, the rotating wave approximation~\\cite{Allen:1975} is applied to the OBEs, which yield \nstochastic differential equations that can be simplified by means of the decorrelation \napproach~\\cite{Kampen:1976}. Since the measurable quantity is merely light intensity, \na formal statistical average is performed over the fluctuating phases of these stochastic equations, \nmaking use of the decorrelation approximation~\\cite{Blushs:2004}. 
As a result, the density matrix \nelements that correspond to optical coherences are eliminated and one is left with rate equations for the \nZeeman coherences:\n\\begin{align}\n\\label{eq:ground}\n\\frac{\\partial \\rho_{g_ig_j}^n}{\\partial t} =& \\sum_{e_k,e_m}\\left(\\Xi_{g_ie_m}^n + (\\Xi_{e_kg_j}^n)^*\\right) d_{g_ie_k}^*d_{e_mg_j}\\rho_{e_ke_m}^n \\notag \\\\\n& - \\sum_{e_k,g_m}(\\Xi_{e_kg_j}^n)^*d_{g_ie_k}^*d_{e_kg_m}\\rho_{g_mg_j}^n \\notag \\\\ \n& - \\sum_{e_k,g_m}\\Xi_{g_ie_k}^n d_{g_me_k}^*d_{e_kg_j}\\rho_{g_ig_m}^n \\\\\n& - i\\omega_{g_ig_j}\\rho_{g_ig_j}^n+\\sum_{e_ke_l}\\Gamma_{g_ig_j}^{e_ke_l}\\rho_{e_ke_l}^n-\\gamma^{n}_{t}\\rho_{g_ig_j}^n \\notag \\\\ \n& + \\gamma^{n}_{t}\\rho_{g_ig_j}^{n-1}-\\gamma_{c}\\rho_{g_ig_j}^n+\\gamma_c\\rho_{g_ig_j}^0\\notag\\\\\n\\label{eq:excited}\n\\frac{\\partial \\rho_{e_ie_j}^n}{\\partial t} =& \\sum_{g_k,g_m}\\left((\\Xi_{e_ig_m}^n)^* + \\Xi_{g_ke_j}^n\\right) d_{e_ig_k}^*d_{g_me_j}\\rho_{g_kg_m}^n \\notag\\\\\n& - \\sum_{g_k,e_m}\\Xi_{g_ke_j}^nd_{e_ig_k}d_{g_ke_m}^*\\rho_{e_me_j}^n \\notag \\\\ \n& - \\sum_{g_k,e_m}(\\Xi_{e_ig_k}^n)^*d_{e_mg_k}d_{g_ke_j}^*\\rho_{e_ie_m}^n \\\\\n& - i\\omega_{e_ie_j}\\rho_{e_ie_j}^n-\\Gamma\\rho_{e_ie_j}^n-\\gamma^{n}_{t}\\rho_{e_ie_j}^n \\notag \\\\\n& +\\gamma^{n}_{t}\\rho_{e_ie_j}^{n-1}-\\gamma_{c}\\rho_{e_ie_j}^n \\notag.\n\\end{align}\nIn both equations, the first term describes population increase and creation of coherence due to induced \ntransitions, the second and third terms describe population loss due to induced transitions, the fourth \nterm describes the destruction of Zeeman coherences due to the splitting $\\omega_{g_ig_j}$ \n(respectively, $\\omega_{e_ie_j}$) of the Zeeman sublevels in an external magnetic field, \nand the fifth term in Eq.~\\ref{eq:excited} describes spontaneous decay, with $\\Gamma$ denoting the \nspontaneous decay rate of the excited state. At the same time the fifth term in Eq.~\\ref{eq:ground} \ndescribes the transfer of population and coherences from the excited state matrix element $\\rho_{e_k e_l}$ to \nthe ground state density matrix element $\\rho_{g_i g_j}$ with rate $\\Gamma^{e_k e_l}_{g_i g_j}$. \nThese transfer rates are related to the rate of spontaneous decay $\\Gamma$ for the excited state. \nExplicit expressions for these $\\Gamma^{e_k e_l}_{g_i g_j}$ can be calculated from quantum angular \nmomentum theory and are given in~\\cite{Auzinsh:OptPolAt}. \nThe remaining terms have been described previously in the context of \nEqns.~\\ref{eq:transit} and~\\ref{eq:rate_region}. \nThe laser beam interaction is represented by the term\n\\begin{align}\n\\Xi_{g_ie_j}= \\frac{|\\bm\\varepsilon^n|^2}{\\frac{\\Gamma+\\Delta\\omega}{2}+i \\left(\\bar{\\omega}-\\mathbf{k}\\cdot \\mathbf{v}+\\omega_{g_ie_j}\\right)},\n\\end{align} \nwhere $|\\bm\\varepsilon^n|^2$ is the squared amplitude of the laser field's electric field in the $n$\\textsuperscript{th} region, $\\Gamma$ is the spontaneous \ndecay rate, $\\Delta\\omega$ is the laser beam's spectral width, $\\bar{\\omega}$ is the laser frequency, \n$\\mathbf{k}\\cdot \\mathbf{v}$ gives the Doppler shift, and $\\omega_{g_ie_j}$ is the difference in energy between levels \n$g_i$ and $e_j$. The system of linear equations can be solved for stationary conditions to \nobtain the density matrix $\\rho$. 
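To sketch the numerical procedure explicitly: under stationary conditions, Eq.~\\ref{eq:rate_region} reduces to\n\\begin{equation*}\n0 = \\left[ \\hat{H},\\rho^n\\right]+i~\\hbar \\hat{R} \\rho^n -i~\\hbar\\hat{\\gamma_t}^n\\rho^n +i~\\hbar\\hat{\\gamma_t}^n\\rho^{n-1}-i~\\hbar\\hat{\\gamma_c}\\rho^n+i~\\hbar\\hat{\\gamma_c}\\rho^0,\n\\end{equation*}\nwhich, for each region $n$, is a linear algebraic system in the elements of $\\rho^n$ in which the density matrix $\\rho^{n-1}$ of the preceding region enters only as a known source term. The regions can therefore be solved sequentially, starting from the first region encountered along the trajectory, whose incoming atoms may be taken to be described by the equilibrium density matrix $\\rho^0$. 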
\n\nFrom the density matrix one can obtain the fluorescence intensity from each region for each velocity group $v$ and given \npolarization $\\bm\\varepsilon_f$ up to a constant factor of $\\tilde{I}_0$~\\cite{AuzFerb:OptPolMol, Barrat:1961, Dyakonov:1965}:\n\\begin{equation}\\label{eq:fluorescence}\n\tI_{n}(v,\\bm\\varepsilon_f) = \\tilde{I}_0\\sum\\limits_{g_i,e_j,e_k} d_{g_ie_j}^{\\ast(ob)}d_{e_kg_i}^{(ob)}\\rho_{e_je_k}.\n\\end{equation}\nFrom these quantities one can calculate the total fluorescence intensity for a given polarization $\\bm\\varepsilon_f$: \n\\begin{align}\n\tI(\\bm\\varepsilon_f) = \\sum_{n} \\sum_{v} f(v)\\Delta v \\frac{A_{n}}{A} I_{n}(v,\\bm\\varepsilon_f).\n\\end{align}\nHere the sum over $n$ represents the sum over the different beam regions of relative area \n$\\nicefrac{A_{n}}{A}$ as they are traversed by the \nparticle, $v$ is the particle velocity along the laser beam, and $f(v)\\Delta v$ gives the number of \natoms with velocity $v\\pm \\nicefrac{\\Delta v}{2}$. \n\nIn practice, we do not measure the electric field strength of the laser field, but the intensity $I=P\/A$, where $P$ is the \nlaser power and $A$ is the cross-sectional area of the beam. In the theoretical model it is more convenient to use the \nRabi frequency $\\Omega_R$, here defined as follows:\n\\begin{align}\n\\label{eq:Rabi}\n\\Omega_R &= k_R \\frac{\\vert\\vert d \\vert\\vert \\cdot \\vert\\vert \\bm\\varepsilon \\vert\\vert}{\\hbar} \\\\\n\t &= k_R \\frac{\\vert\\vert d \\vert\\vert}{\\hbar} \\sqrt{\\frac{2 I}{\\epsilon_0 n c}},\n\\end{align}\nwhere $\\vert\\vert d \\vert\\vert$ is the reduced dipole matrix element for the transition in question, $\\vert\\vert \\bm\\varepsilon \\vert\\vert$ is the \namplitude of the electric field, $\\epsilon_0$ is the \nvacuum permittivity, $n$ is the index of refraction of the medium, $c$ is the speed of light, and $k_R$ is a factor that \nwould be unity in an ideal case, but is adjusted to achieve the best fit between theory and experiment since the \nexperimental situation will always deviate from the ideal case in some way. \nWe assume that the laser beam's intensity distribution follows a Gaussian distribution. We define the average value of \n$\\Omega_R$ for the whole beam by taking the weighted average of a Gaussian distribution on the range [0,FWHM\/2], where \nFWHM is the full width at half maximum. The beam-averaged Rabi frequency is thus related to its value at the peak of the intensity distribution \n(see Fig.~\\ref{fig:dal1}) by $\\Omega_R=0.721\\,\\Omega_{peak}$. From there the Rabi frequency of each region can be obtained \nby scaling by the value of the Gaussian distribution function. \n\n\n\n\n\n\\section{\\label{exp:level1}Experimental setup}\nThe theoretical model was tested with two experiments. The first experiment measured magneto-optical resonances on the $D_1$ line of \n$^{87}$Rb and is shown schematically in Fig.~\\ref{fig:exp_Rb87}. The experiment has been described elsewhere along with \ncomparison to an earlier version of the theoretical model that did not divide the laser beam into separate regions~\\cite{Auzinsh:2009}.\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{exp_rb87.pdf}\n\t\\caption{\\label{fig:exp_Rb87} (Color online) Basic experimental setup for measuring magneto-optical resonances. The inset \n\ton the left shows the level diagram of $^{87}$Rb~\\cite{Steck:rubidium87}. The other inset shows the geometrical orientation of the electric \n\tfield vector \\boldmath{E}, the magnetic field vector \\boldmath{B}, and laser propagation direction (Exc.) 
and \n\tobservation direction (Obs.)}.\n\\end{figure}\nThe laser was an extended cavity diode laser, whose frequency could be scanned by applying a voltage to a piezo crystal attached to the grating. \nNeutral density (ND) filters were used to regulate the laser intensity, and linear polarization was obtained using a \nGlan-Thomson polarizer. A set of three orthogonal Helmholtz coils scanned the magnetic field along the $z$ axis \nwhile compensating the ambient field in the other directions. A Pyrex cell with a natural isotopic mixture \nof rubidium at room temperature was located at the center of the coils. The total laser-induced fluorescence (LIF) \nin a selected direction (without frequency or polarization selection) was detected with \na photodiode (Thorlabs FDS-100) and data were acquired with a data acquisition card (National Instruments 6024E)\nor a digital oscilloscope (Agilent DSO5014). To generate the magnetic field scan with a rate of about 1~Hz, \na computer-controlled analog signal was applied to a bipolar power supply (Kepco BOP-50-8M). The laser \nfrequency was simultaneously scanned at a rate of about 10--20~MHz\/s, and it was measured by \na wavemeter (HighFinesse WS-7). \nThe laser beam was characterized using a beam profiler (Thorlabs BP104-VIS). \n\nA second experimental setup was used to study the spatial profile of the fluorescence generated by atoms in a \nlaser beam at resonance. It is shown in Fig.~\\ref{fig:exp_setup}. Here two lasers were used to excite the \n$D_1$ and $D_2$ transitions of cesium. Both lasers were based on distributed feedback diodes from Toptica \n(DL100-DFB). One of the lasers (Cs $D_2$) served as a pump laser with a spatially broad and intense beam, \nwhile the other (Cs $D_1$) provided a \nspatially narrower beam that probed the fluorescence dynamics within the pump beam. \nFigure~\\ref{fig:levels} shows the level scheme of the excited transitions. \nBoth lasers were stabilized with saturated absorption signals from cells shielded by three layers of mu-metal. \nMu-metal shields were used to avoid frequency drifts due to the magnetic field scan performed in the experiment and other \nmagnetic field fluctuations in the laboratory.\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{divu_staru_exp_sh_v3.pdf}\n\t\\caption{\\label{fig:exp_setup} (Color online) Experimental setup for the two-laser experiment. The lasers were stabilized by two Toptica Digilok modules \n\tlocked to error signals generated from saturated absorption spectroscopy measurements made in a separate, magnetically shielded cell.}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{Cs_limenu_shema_DiviStari.pdf}\n\t\\caption{\\label{fig:levels} Level scheme for the two-laser experiment. The bold, solid arrow represents the pump laser transition, \n\twhereas the arrows with dashed lines represent the scanning laser transitions. Other transitions are given as thin, solid lines.}\n\\end{figure}\n\nA bandpass filter (890~nm $\\pm$ 10~nm) was placed before the photodiode. \nTo reduce noise from the intense pump beam, the probe beam was modulated by placing a mechanical \nchopper near its focus, and the fluorescence signal was passed through a lock-in amplifier and recorded \non a digital oscilloscope (Yokogawa DL-6154). \nThe probe laser was scanned through the pump laser beam profile using \na mirror mounted on a moving platform (Nanomax MAX301 from Thorlabs) with a scan range of 8 mm in one dimension. 
\nThe probing beam itself had a full width at half maximum (FWHM) \ndiameter of~\\SI{200}{\\micro\\metre} with a typical laser power of~\\SI{100}{\\micro\\watt}. \nThe pump beam width was~\\SI{1.3}{\\milli\\metre} (FWHM) and its power was~\\SI{40}{\\milli\\watt}. This laser beam diameter was achieved by letting the \nlaser beam slowly diverge after passing the \nfocal point of a lens with a focal length of~\\SI{1}{\\metre}. The pump laser beam diverged slowly enough that its diameter was effectively \nconstant within the vapor cell.\nThe probe beam was also focused by the same lens to reach its focal point inside the cell. \n\n\n\n\n\n\n\\section{\\label{1laser:level1}Application of the model to magneto-optical signals obtained for high laser power densities}\nAs a first test for the numerical model with multiple regions inside the laser beam, we used the model to calculate the \nshapes of magneto-optical resonances for $^{87}$Rb in an optical cell. The experimental setup was described earlier \n(see Fig.~\\ref{fig:exp_Rb87}). Figures~\\ref{fig:one_laser_exp}(a)--(c) \nshow experimental signals (markers) and theoretical calculations (curves) of magneto-optical signals in the \n$F_g=2\\longrightarrow F_e=1$ transition of the $D_1$ line of $^{87}$Rb. Three theoretical curves are shown: \ncurve N1 was calculated assuming a laser beam with a single average \nintensity; curve N20 was calculated using a laser beam divided into 20 concentric regions; curve N20MT was calculated \nin the same way as curve N20, but the results were additionally averaged over trajectories that did not pass through \nthe center of the beam. At the relatively low Rabi frequency of $\\Omega_R = 2.5$~MHz [Fig.~\\ref{fig:one_laser_exp}(a)] \nall calculated curves practically coincided and described the experimental signals well. The single-region model \ntreats the beam as a cylindrical beam with an intensity of 2~mW\/cm$^2$, which is below the saturation intensity for \nthat transition of 4.5~mW\/cm$^2$~\\cite{Steck:rubidium87}. When the laser intensity was 20~mW\/cm$^2$ ($\\Omega_R = 8.0$~MHz), \nwell above the saturation intensity, model N1 is no longer adequate for describing the experimental signals \nand model N20MT works slightly better [Fig.~\\ref{fig:one_laser_exp}(b)]. \nIn particular, the resonance becomes sharper and sharper as the intensity increases, and models \nN20 and N20MT reproduce this sharpness. Even at an intensity around 200~mW\/cm$^2$ ($\\Omega_R = 25$~MHz), \nthe models with 20 regions describe the shape of the experimental curve quite well, \nwhile model N1 describes the experimental results poorly in terms of width and overall shape [Fig.~\\ref{fig:one_laser_exp}(c)]. \n\n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{rb87_a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{rb87_b.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{rb87_c.pdf}\n\t\\caption{(Color online) Magneto-optical resonances for the $F_g=2\\longrightarrow F_e=1$ transition of the $D_1$ line of $^{87}$Rb. \n\tFilled circles represent experimental measurements for (a) 28 $\\mu$W ($\\Omega_R$=2.5 MHz), \n\t(b) 280 $\\mu$W ($\\Omega_R$=8.0 MHz), and \n\t(c) 2800 $\\mu$W ($\\Omega_R$=25 MHz). \n\tCurve N1 (dashed) shows the results of a theoretical model that uses one Rabi frequency to model the entire beam profile. \n\tCurve N20 (dash-dotted) shows the result of the calculation when the laser beam profile is divided into 20 concentric circles, \n\tand the optical Bloch equations are solved separately for each circle. 
\n\tCurve N20MT (solid) shows the results for a calculation with 20 concentric regions when trajectories are taken into account \n\tthat do not pass through the center of the beam. \n\t}\n\t\\label{fig:one_laser_exp}\n\\end{figure}\n\n\n\\section{\\label{distribution:level1}Investigation of the spatial distribution of fluorescence in an intense laser beam}\n\\subsection{Theoretical investigation of the spatial dynamics of fluorescence in an extended beam}\nIn order to describe the magneto-optical signals in the previous sections, the fluorescence from all concentric beam regions \nin models N20 and N20MT was summed, since usually experiments measure only total fluorescence (or absorption), especially if \nthe beams are narrow. \nHowever, by solving the optical Bloch equations separately for different concentric regions of the laser beam, it is possible \nto calculate the strength of the fluorescence as a function of distance from the center of the beam. With an appropriate \nexperimental technique, the distribution of fluorescence within a laser beam could also be measured. \n\nFigure~\\ref{fig:dynamics} shows the calculated fluorescence distribution as a function of position in the laser beam. \nAs atoms move through the beam in one direction, the intense laser radiation optically pumps the ground state. In a very \nintense beam, the ground-state levels that can absorb light have been emptied even before the atoms reach the center \n(solid, green curve). Since atoms are actually traversing the beam from all directions, the result is a fluorescence profile with a \nreduction in intensity near the center of the beam (dash-dotted, red curve). \n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{f3.pdf}\n\t\\caption{(Color online) Fluorescence distribution as a function of position in the laser beam.\n\tDotted (blue) line---laser beam profile, solid (green) line---fluorescence from atoms moving in one direction;\n\tdash-dotted (red) line---the overall fluorescence as a function of position that results \n\tfrom averaging all beam trajectories. Results from theoretical calculations.}\n\t\\label{fig:dynamics}\n\\end{figure}\nThe effect of increasing the laser beam intensity (or Rabi frequency) can be seen in Fig.~\\ref{fig:dynamics_rabi}.\nAt a Rabi frequency of $\\Omega_R=0.6$ MHz, the fluorescence profile tracks the intensity profile of the laser beam \nexactly. When the Rabi frequency is increased tenfold ($\\Omega_R=6.0$ MHz), which corresponds to a hundredfold intensity \nincrease, the fluorescence profile already appears somewhat deformed and wider than the actual laser beam profile. \nAt Rabi frequencies of $\\Omega_R=48.0$ MHz and greater, the fluorescence intensity at the center of the intense laser beam \nis weaker than towards the edges as a result of the ground state being depleted by the intense radiation \nbefore the atoms reach the center of the laser beam.\n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{f4.pdf}\n\t\\caption{(Color online) Fluorescence distribution as a function of position in the laser beam for various values of the Rabi frequency. \n\tResults from theoretical calculations. 
As the Rabi frequency increases, the distribution becomes broader.}\n\t\\label{fig:dynamics_rabi}\n\\end{figure}\n \n\\subsection{Experimental study of the spatial dynamics of excitation and fluorescence in an intense, extended beam}\n\nIn order to test our theoretical model of the spatial distribution of fluorescence from atoms in an intense, extended pumping beam, \nwe recorded magneto-optical resonances \nat various positions within the pump beam. The experimental setup is shown in \nFig.~\\ref{fig:exp_setup}. To visualize these data, surface plots were generated where one horizontal \naxis represented the magnetic field and the other, the position of the probe beam relative to the pump beam axis. The \nheight of the surface represented the fluorescence intensity. In essence, the surface consists of a series of \nmagneto-optical resonances, each recorded at a different position of the probe beam axis relative to the pump beam axis. \nFig.~\\ref{fig:p44_s43} shows the results for experiments [(a)] and calculations [(b)] for which the pump beam was tuned to the \n$F_g=4\\longrightarrow F_e=4$ transition of the Cs $D_2$ line and the probe beam was tuned to the\n$F_g=4\\longrightarrow F_e=3$ transition of the Cs $D_1$ line. \n\\begin{figure*}\n\t\\includegraphics[width=0.45\\textwidth]{f7a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{f7b.pdf}\n\t\\caption{\\label{fig:p44_s43} (Color online) Magneto-optical resonances produced for various positions of the probing laser beam \n\t($F_g=4\\longrightarrow F_e=3$ transition of the $D_1$ line of cesium) with respect to the pump laser beam \n\t($F_g=4\\longrightarrow F_e=4$ transition of the $D_2$ line of cesium): \n\t(a) experimental results and (b) theoretical calculations. }\n\\end{figure*}\nOne can see that the theoretical plot reproduces qualitatively all the features of the experimental measurement. \nSimilar agreement can be observed when the probe beam was tuned to the $F_g=3\\longrightarrow F_e=4$ transition \nof the Cs $D_1$ line, as \nshown in Fig.~\\ref{fig:p44_s34}. \n\\vfill\n\\break\n\\begin{figure*}\n\t\\includegraphics[width=0.45\\textwidth]{f8a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{f8b.pdf}\n\t\\caption{\\label{fig:p44_s34} (Color online) Magneto-optical resonances produced for various positions of the probe laser beam \n\t($F_g=3\\longrightarrow F_e=4$ transition of the $D_1$ line of cesium) with respect to the pump laser beam \n\t($F_g=4\\longrightarrow F_e=4$ transition of the $D_2$ line of cesium): \n\t(a) experimental results and (b) theoretical calculations. }\n\\end{figure*}\n\n\n\n\\section{\\label{Conclusions:level1}Conclusions}\nWe have set out to model magneto-optical signals more accurately at laser intensities significantly higher than the saturation \nintensity by dividing the laser beam into concentric circular regions and solving the rate equations for Zeeman coherences in each region while \ntaking into account the actual laser intensity in that region and the transport of atoms between regions. This approach was used to \nmodel magneto-optical resonances for the $F_g=2 \\longrightarrow F_e=1$ transition of the $D_1$ line of $^{87}$Rb, comparing the \ncalculated curves to measured signals. 
\nWe have demonstrated that good agreement between theory and experiment can be achieved up to Rabi frequencies of at least 25~MHz, \nwhich corresponds to a laser intensity of 200 mW\/cm$^2$, or more than 40 times the saturation intensity of the transition.\nAs an additional check on the model, we have studied the spatial distribution of the fluorescence intensity within a laser beam theoretically \nand experimentally. The results indicated that at high laser power densities, the maximum fluorescence intensity is not produced \nin the center of the beam, because the atoms have been pumped free of absorbing levels prior to reaching the center. We compared\nexperimental and theoretical magneto-optical resonance signals obtained by exciting cesium atoms with a \nnarrow, weak probe beam tuned to the $D_1$ transition at various locations inside a region illuminated by an intense pump beam \ntuned to the $D_2$ transition, and obtained good qualitative agreement. \n\n \n\\begin{acknowledgments}\nWe gratefully acknowledge support from the Latvian Science Council Grant Nr. 119\/2012 and \nfrom the University of Latvia Academic Development Project Nr. AAP2015\/B013.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nPrincipal component analysis (PCA) is \nthe ``workhorse'' method\n for dimensionality reduction and feature extraction. It finds well-documented applications in statistics, bioinformatics, genomics, quantitative finance, and engineering, to name just a few. The goal of PCA is to obtain low-dimensional representations for high-dimensional data, while preserving most of the high-dimensional data variance \\cite{1901pca}. \n\n\nYet, various practical scenarios involve \\emph{multiple} datasets, in which one is tasked with extracting the most discriminative information of one target dataset relative to others. \nFor instance, consider two gene-expression measurement datasets of volunteers from across different geographical areas and genders: the first dataset collects gene-expression levels of cancer patients, considered here as the \n \\emph{target data}, \nwhile the second contains levels from healthy individuals, corresponding here to our \n \\emph{background data}. The goal is to identify molecular subtypes of cancer within cancer patients.\nPerforming PCA on either the target data or the target together with background data is likely to yield principal components (PCs) that correspond to\nthe background information common to both datasets (e.g., the demographic patterns and genders) \\cite{1998background}, rather than the PCs uniquely describing the subtypes of cancer.\nAlbeit simple to comprehend and practically relevant, such discriminative data analytics has not been thoroughly addressed. \n\n\nGeneralizations of PCA include kernel (K) PCA \\cite{kpca,2017kpca}, graph PCA \\cite{proc2018gsk}, \n$\\ell_1$-PCA \\cite{2018l1pca}, \n robust PCA \\cite{jstsp2016shahid},\nmulti-dimensional scaling \\cite{mds}, \nlocally linear embedding \\cite{lle}, \nIsomap \\cite{2000isomap}, and Laplacian eigenmaps \\cite{2003eigenmap}. Linear discriminant analysis (LDA) is a\n\\emph{supervised} classifier of linearly projected, reduced-dimensionality data vectors. It is designed so that linearly projected training vectors (meaning \\emph{labeled} data) of the same class stay as close as possible, while projected data of different classes are positioned as far apart as possible \\cite{1933lda}. 
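For later comparison with our ratio-based criterion, recall the standard (binary) Fisher formulation of LDA: with between-class and within-class scatter matrices $\\mathbf{S}_b$ and $\\mathbf{S}_w$ built from labeled data, LDA solves\n\\begin{equation*}\n\\hat{\\mathbf{u}}:=\\arg\\underset{\\mathbf{u}\\in\\mathbb{R}^D}{\\max}\\;\\frac{\\mathbf{u}^\\top\\mathbf{S}_b\\mathbf{u}}{\\mathbf{u}^\\top\\mathbf{S}_w\\mathbf{u}}\n\\end{equation*}\nwhose solution is a principal generalized eigenvector of the matrix pair $(\\mathbf{S}_b,\\,\\mathbf{S}_w)$. 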
\nOther discriminative methods include \n\treconstructive and discriminative subspaces \\cite{fidler2006combining},\n\t discriminative vanishing component analysis \\cite{hou2016discriminative}, \n\t and kernel LDA \\cite{mika1999fisher},\n\t which, similar to LDA, rely on labeled data.\nSupervised PCA looks for orthogonal projection vectors so that the dependence of projected vectors from \n\tone dataset on the other dataset is maximized \\cite{barshan2011supervised}.\n\nMultiple-factor analysis, an extension of PCA to deal with multiple datasets, is implemented in two steps: S1) normalize each dataset by the largest eigenvalue of \n\tits sample covariance matrix; and S2) perform PCA on the combined dataset of all normalized ones \\cite{abdi2013multiple}.\nOn the other hand, canonical correlation analysis is widely employed for analyzing multiple datasets \\cite{1936cca,2018cwsggcca,2018gmcca}, but its goal is to extract the shared low-dimensional structure. \nThe recent proposal called contrastive (c) PCA aims at extracting contrastive information between two datasets \\cite{2017cpca}, by searching for directions along which the target data variance is large while that of \n the background data is small. Carried out using the singular value decomposition (SVD), cPCA can reveal dataset-specific information often missed by standard PCA if the involved hyper-parameter is properly selected. \nThough it is possible \n to automatically choose\nthe best hyper-parameter from a list of candidate values, performing the SVD multiple\ntimes can be computationally cumbersome in large-scale feature extraction settings.\n\n\n \n\n\nBuilding on but going beyond cPCA, this paper starts by developing a novel approach, termed discriminative (d) PCA, for discriminative analytics of \\emph{two} datasets. dPCA looks for linear projections (as in LDA) but of \\emph{unlabeled}\n data vectors, by \\textcolor{black}{maximizing the variance of projected target data while minimizing that of background data. This leads to a \\emph{ratio trace} maximization formulation,}\n and also justifies our chosen term \\emph{discriminative PCA}. Under certain conditions, dPCA is proved to be least-squares (LS) optimal in the sense that it reveals PCs specific to the target data relative to background data. Different from \n cPCA, dPCA is parameter free, and it requires a single generalized eigendecomposition, lending itself favorably to large-scale discriminative data analytics. \n However, real data vectors \n often exhibit nonlinear correlations, rendering dPCA inadequate for complex practical setups. To this end, nonlinear dPCA is developed via kernel-based learning. Similarly, the solution of KdPCA can be provided analytically in terms of generalized eigenvalue decompositions. As the complexity of KdPCA grows only linearly with the dimensionality of data vectors, KdPCA is preferable over dPCA for discriminative analytics of high-dimensional data.\n\n\n\ndPCA is further extended to handle multiple (more than two) background datasets. 
This becomes possible by maximizing \\textcolor{black}{the variance of projected \n\t target data while minimizing the sum of variances of all projected \n\t background data.}\nFinally, kernel (K) MdPCA is put forth to account for nonlinear data correlations.\n\n\n\\emph{Notation}: Bold uppercase (lowercase) letters denote matrices (column vectors).\nOperators $(\\cdot)^{\\top}$,\n$(\\cdot)^{-1}$, and ${\\rm Tr}(\\cdot)$ denote matrix transposition, \ninverse, and trace, respectively; \n$\\|\\mathbf{a}\\|_2 $ is the $\\ell_2$-norm of vector $\\mathbf{a}$;\n ${\\rm diag}(\\{a_i\\}_{i=1}^m)$ is a diagonal matrix holding elements $\\{a_i\\}_{i=1}^m$ on its main diagonal; \n $\\mathbf{0}$ denotes all-zero vectors or matrices;\n and $\\mathbf{I}$ represents identity matrices of suitable dimensions.\n\n\n\\section{Preliminaries and Prior Art}\\label{sec:preli}\nConsider two datasets, namely a target dataset $\\{\\mathbf{x}_i\\in\\mathbb{R}^D\\}_{i=1}^m$ that we are interested in analyzing, and a background dataset $\\{\\mathbf{y}_j\\in\\mathbb{R}^D\\}_{j=1}^n$ that contains latent background-related vectors also present in the target data. Generalization to multiple background datasets will be presented in Sec. \\ref{sec:mdpca}. \nAssume without loss of generality that both datasets are centered; in other words, \n\\textcolor{black}{the sample mean $m^{-1}\\sum_{i=1}^m \\mathbf{x}_i$ $(n^{-1}\\sum_{j=1}^n \\mathbf{y}_j)$ has been subtracted from each $\\mathbf{x}_i$ $(\\mathbf{y}_j)$.}\nTo motivate our novel approaches \nin subsequent sections, some basics of PCA and cPCA are outlined next.\n\n\nStandard PCA handles a single dataset at a time. \nIt looks for low-dimensional representations $\\{\\boldsymbol{\\chi}_i\\in\\mathbb{R}^d \\}_{i=1}^m$ of $\\{\\mathbf{x}_i\\}_{i=1}^m$ with $d<D$. For $d=1$, the first PC is obtained by maximizing the variance of the projected data, namely $\\hat{\\mathbf{u}}:=\\arg\\max_{\\mathbf{u}^\\top\\mathbf{u}=1}\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}$, where $\\mathbf{C}_{xx}:=(1\/m)\\sum_{i=1}^m\\mathbf{x}_i\\mathbf{x}_i^\\top\\in\\mathbb{R}^{D\\times D}$ denotes the sample covariance matrix of the target data; the solution is the principal eigenvector of $\\mathbf{C}_{xx}$, and the corresponding representations are $\\chi_i=\\hat{\\mathbf{u}}^\\top\\mathbf{x}_i$. When $d>1$, PCA looks for $\\{\\mathbf{u}_i\\in\\mathbb{R}^D \\}_{i=1}^d$, \nobtained from the \\textcolor{black}{$d$ eigenvectors of $\\mathbf{C}_{xx}$ associated with the $d$ largest eigenvalues sorted in decreasing order}. As alluded to in Sec. \\ref{sec:intro}, PCA applied on $\\{\\mathbf{x}_i \\}_{i=1}^m$ only, or on the combined datasets\n $\\{\\{\\mathbf{x}_i\\}_{i=1}^m,\\,\\{\\mathbf{y}_j\\}_{j=1}^n \\}$ can generally not uncover the discriminative patterns or features of the target data relative to the background data.\n\n\n\n\nOn the other hand, the recent cPCA seeks a vector $\\mathbf{u}\\in\\mathbb{R}^D$ along which the target data exhibit large variations while the background\n data exhibit small variations, via solving \\cite{2017cpca}\n\t\\begin{subequations}\n\t\t\\label{eq:cpca}\n\t\t\\begin{align}\n\t\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^D}{\\max}\n\t\t\t\\quad&\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}-\\alpha \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}\\label{eq:cpcaobj}\\\\\n\t\t\t{\\rm s.\\,to}\\quad &\\mathbf{u}^\\top\\mathbf{u}=1\n\t\t\\end{align}\t\t\n\t\\end{subequations}\nwhere $\\mathbf{C}_{yy}:=(1\/n)\\sum_{j=1}^n\\mathbf{y}_j\\mathbf{y}_j^\\top\\in\\mathbb{R}^{D\\times D}$ denotes the sample covariance matrix of $\\{\\mathbf{y}_j \\}_{j=1}^n$, and the hyper-parameter $\\alpha\\ge 0$ trades off maximizing the target data variance (the first term in \\eqref{eq:cpcaobj}) for minimizing the background data variance (the second term). 
For a given $\\alpha$, the solution of \\eqref{eq:cpca} is given by the eigenvector of $\\mathbf{C}_{xx}-\\alpha\\mathbf{C}_{yy}$ associated with its largest eigenvalue, along which the obtained data projections constitute the first contrastive (c) \\textcolor{black}{PC}. Nonetheless, there is no rule of thumb for choosing $\\alpha$.\n\\textcolor{black}{A spectral-clustering based algorithm was devised to automatically select $\\alpha$ from a list of candidate values \\cite{2017cpca}, but its brute-force search is computationally expensive to use in large-scale datasets.}\n\n\n\n\n\n\\section{Discriminative Principal Component Analysis} \\label{sec:dpca}\nUnlike PCA, LDA is a supervised classification method of linearly projected data at reduced dimensionality. \nIt finds those linear projections that reduce the variation within each class and increase the separation between classes \\cite{1933lda}. This is accomplished by maximizing the ratio of the labeled data variance between classes to that within the classes. \n\nIn a related but unsupervised setup, \nconsider that we are given a target dataset and a background dataset, and we are tasked with \\textcolor{black}{extracting vectors that are meaningful in representing\n\t $\\{\\mathbf{x}_i\\}_{i=1}^m$, but not $\\{\\mathbf{y}_j\\}_{j=1}^n$.}\n A meaningful \n approach would then be to maximize the ratio of the projected target data variance over that of the background data. Our \\emph{discriminative (d) PCA} approach finds\n\t\t\\begin{equation}\t\\label{eq:dpca}\n\t\t\\textcolor{black}{\n\t\t\t\\hat{\\mathbf{u}}:=\\arg\n\t\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^{D}}{\\max}\n\t\t\t\\quad\\frac{\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}}{ \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}}}\n\t\t\\end{equation}\nWe will term the solution in \\eqref{eq:dpca} the discriminant subspace vector,\nand the projections $\\{\\hat{\\mathbf{u}}^\\top\\mathbf{x}_i\\}_{i=1}^m$ the first discriminative (d) \\textcolor{black}{PC}. \nNext, we discuss the solution in \\eqref{eq:dpca}.\n\n\nUsing Lagrangian duality theory, the solution in \\eqref{eq:dpca} corresponds to the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ associated with the largest eigenvalue. \nTo establish this, note that \\eqref{eq:dpca} can be equivalently rewritten as\n\t\\begin{subequations}\n\\label{eq:dpcafm2}\n\\begin{align}\n\t\\hat{\\mathbf{u}}:=\\arg\n\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^{D}}{\\max}\\quad& \\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}\\label{eq:dpcafm2cos}\\\\\n\t{\\rm s.\\,to}\\quad& \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}=1.\\label{eq:dpcafm2con}\n\\end{align}\n\t\\end{subequations}\n\tLetting $\\lambda$ denote the dual variable associated with the constraint \\eqref{eq:dpcafm2con}, the Lagrangian of \\eqref{eq:dpcafm2} becomes\n\\begin{equation}\\label{eq:lag}\n\\mathcal{L}(\\mathbf{u};\\,\\lambda)=\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}+\\lambda\\left(1-\\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}\\right).\n\\end{equation}\nAt the optimum $(\\hat{\\mathbf{u}};\\,\\hat{\\lambda})$, \nthe KKT conditions confirm that\n\\begin{equation}\\label{eq:gep}\n\t\\mathbf{C}_{xx}\\hat{{\\mathbf{u}}}=\\hat{\\lambda}\\mathbf{C}_{yy}\\hat{\\mathbf{u}}.\n\\end{equation}\nThis is a generalized eigen-equation, whose solution $\\hat{\\mathbf{u}}$ is the generalized eigenvector of $(\\mathbf{C}_{xx},\\,\\mathbf{C}_{yy})$ corresponding to the generalized eigenvalue $\\hat{\\lambda}$. 
\nLeft-multiplying\n \\eqref{eq:gep} by $\\hat{\\mathbf{u}}^\\top$ yields\n$\t\\hat{\\mathbf{u}}^\\top\\mathbf{C}_{xx}\\hat{\\mathbf{u}}=\\hat{\\lambda} \\hat{\\mathbf{u}}^\\top\\mathbf{C}_{yy}\\hat{\\mathbf{u}}\n$, corroborating that the optimal objective value of \\eqref{eq:dpcafm2cos} is attained when $\\hat{\\lambda}:=\\lambda_1$ is the largest generalized eigenvalue. Furthermore, \\eqref{eq:gep} can be solved efficiently using well-documented solvers that rely on, e.g., Cholesky factorization \\cite{saad1}. \n \n \n Supposing further that $\\mathbf{C}_{yy}$ is nonsingular, \n \\eqref{eq:gep} yields \n \\begin{equation}\n \\label{eq:dpcasol}\n \\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}\\hat{\\mathbf{u}}=\\hat{\\lambda}\\hat{\\mathbf{u}}\n \\end{equation}\nimplying that $\\hat{\\mathbf{u}}$ in \\eqref{eq:dpcafm2} is the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ corresponding to the largest eigenvalue $\\hat{\\lambda}=\\lambda_1$.\n\n\n\nTo find multiple ($d\\ge 2$) subspace vectors, namely $\\{\\mathbf{u}_i\\in\\mathbb{R}^D\\}_{i=1}^d$ that form $\\mathbf{U}:=[\\mathbf{u}_1 \\, \\cdots \\, \\mathbf{u}_d]\\in\\mathbb{R}^{D\\times d}$, \\textcolor{black}{the criterion in \\eqref{eq:dpca} with $\\mathbf{C}_{yy}$ being nonsingular} can be generalized as follows (cf. \\eqref{eq:dpca})\n\\vspace{4pt}\n\\textcolor{black}{\n\t\\begin{equation}\n\t\t\\label{eq:dpcam}\t\n\t\t\\hat{\\mathbf{U}}:=\\arg\t\\underset{\\mathbf{U}\\in\\mathbb{R}^{D\\times d}}{\\max}~\n\t\t{\\rm Tr}\\left[\\left(\\mathbf{U}^\\top\\mathbf{C}_{yy}\\mathbf{U}\\right )^{-1}\\mathbf{U}^\\top\\mathbf{C}_{xx}\\mathbf{U}\\right].\n\t\\end{equation}\n\t}\n\n\nClearly, \\eqref{eq:dpcam} is a \\emph{ratio trace} maximization problem (see e.g., \\cite{2014mati}), whose solution\nis given in Thm. \\ref{the:dpca} (see a proof in \\cite[p. 448]{2013fukunaga}).\n\n\n\n\\begin{theorem}\n\t\\label{the:dpca}\n\tGiven centered data $\\{{\\mathbf{x}}_i\\in\\mathbb{R}^{D}\\}_{i=1}^m$ and $\\{{\\mathbf{y}}_j\\in\\mathbb{R}^{D}\\}_{j=1}^n$ with sample covariance matrices $\\mathbf{C}_{xx}:=(1\/m)\\sum_{i=1}^m\\mathbf{x}_i\\mathbf{x}_i^\\top$ and $\\mathbf{C}_{yy}:=(1\/n)\\sum_{j=1}^n\\mathbf{y}_j\\mathbf{y}_j^\\top\\succ\\mathbf{0}$, the $i$-th column of the dPCA optimal solution $\\hat{\\mathbf{U}}\\in\\mathbb{R}^{D\\times d}$ in \\eqref{eq:dpcam} is given by the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ associated with the $i$-th largest eigenvalue, where $i=1,\\,\\ldots,\\,d$.\n\\end{theorem}\n\n\n\nOur dPCA for \ndiscriminative analytics of two datasets is summarized in Alg. \\ref{alg:dpca}.\n\\textcolor{black}{Four} remarks are now in order.\n\n
\n\\end{remark}\n\n\n\\begin{remark}\n\tSeveral possible combinations of target and background datasets include:\n\t i) measurements from a healthy group $\\{\\mathbf{y}_j\\}$ and a diseased group $\\{\\mathbf{x}_i\\}$, where the former shares similar population-level variation \n\t with the latter, but lacks the distinct variation \n\t due to subtypes of diseases; ii) before-treatment $\\{\\mathbf{y}_j\\}$ and after-treatment $\\{\\mathbf{x}_i\\}$ datasets, in which the former contains additive \n\t measurement noise rather than the variation caused by treatment; and iii) signal-free $\\{\\mathbf{y}_j\\}$ and signal recordings $\\{\\mathbf{x}_i\\}$, where the former consists of only noise.\n\\end{remark}\n\n\\begin{remark}\\label{re:twoeq}\nConsider the eigenvalue decomposition \n$\\mathbf{C}_{yy}=\\mathbf{U}_y\\mathbf{\\Sigma}_{yy}\\mathbf{U}_y^\\top$. \nWith $\\mathbf{C}_{yy}^{1\/2}:=\\mathbf{\\Sigma}_{yy}^{1\/2}\\mathbf{U}_{y}^\\top$, and the definition \n $\\mathbf{v}:=\\mathbf{C}_{yy}^{\\top\/2}\\mathbf{u}\\in\\mathbb{R}^D$, \\eqref{eq:dpcafm2} can be expressed as\t\n\t\\begin{subequations}\n\t\t\\label{eq:v}\n\t\\begin{align}\n\t\t\\hat{\\mathbf{v}}:=\\arg\n\t\\max_{\\mathbf{v}\\in\\mathbb{R}^D}\\quad&\n\t\t\\mathbf{v}^\\top\\mathbf{C}^{-1\/2}_{yy}\\mathbf{C}_{xx}\\mathbf{C}^{-\\top\/2}_{yy}\\mathbf{v}\\\\\n\t\t{\\rm s.\\,to}\\quad &\\mathbf{v}^\\top\\mathbf{v}=1\t\n\t\t\\end{align}\n\t\t\\end{subequations}\nwhere $\\hat{\\mathbf{v}}$ corresponds to the leading eigenvector of $\\mathbf{C}_{yy}^{-1\/2}\\mathbf{C}_{xx} \\mathbf{C}_{yy}^{-\\top\/2}$. Subsequently, $\\hat{\\mathbf{u}}$ in \\eqref{eq:dpcafm2} is recovered as $\\hat{\\mathbf{u}}=\\mathbf{C}_{yy}^{-\\top\/2}\\hat{\\mathbf{v}}$.\nThis indeed suggests that discriminative analytics of $\\{\\mathbf{x}_i\\}_{i=1}^m$ and $\\{\\mathbf{y}_j\\}_{j=1}^n$ using dPCA \ncan be viewed as PCA of the `denoised' or `background-removed' data $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$,\nfollowed by an `inverse' transformation to \nmap the subspace vector obtained for the $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$ data to that of the target \ndata $\\{\\mathbf{x}_i\\}$. \nIn this sense, $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$ can be seen as the data obtained after removing the dominant `background' subspace vectors from the target data. \n\n\n\n\t\n\n\n\n\\end{remark}\n\\begin{remark}\nInexpensive power or Lanczos iterations \\cite{saad1} can be employed to compute the principal eigenvectors in \\eqref{eq:dpcasol}.\n\\end{remark}\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{Discriminative PCA.}\n\t\\label{alg:dpca}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE {\\bfseries Input:}\n\t\tNonzero-mean target and background data $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^m$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}_{j=1}^n$; number of dPCs $d$.\n\t\t\\STATE {\\bfseries Exclude} the means from $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}$ to obtain centered data $\\{\\mathbf{x}_i\\}$ and $\\{\\mathbf{y}_j\\}$. 
\begin{algorithm}[t]\n\t\caption{Discriminative PCA.}\n\t\label{alg:dpca}\n\t\begin{algorithmic}[1]\n\t\t\STATE {\bfseries Input:}\n\t\tNonzero-mean target and background data $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=1}^m$ and $\{\accentset{\circ}{\mathbf{y}}_j\}_{j=1}^n$; number of dPCs $d$.\n\t\t\STATE {\bfseries Exclude} the means from $\{\accentset{\circ}{\mathbf{x}}_i\}$ and $\{\accentset{\circ}{\mathbf{y}}_j\}$ to obtain centered data $\{\mathbf{x}_i\}$ and $\{\mathbf{y}_j\}$. Construct $\mathbf{C}_{xx}$ and $\mathbf{C}_{yy}$.\n\t\t\STATE {\bfseries Perform} \label{step:4} eigendecomposition on $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$ to obtain the $d$ right eigenvectors $\{\hat{\mathbf{u}}_i\}_{i=1}^d$ associated with the $d$ largest eigenvalues.\n\t\t\STATE {\bfseries Output} $\hat{\mathbf{U}}=[\hat{\mathbf{u}}_1\,\cdots \, \hat{\mathbf{u}}_d]$.\n\t\end{algorithmic}\n\end{algorithm}\n\n\textcolor{black}{Consider again \eqref{eq:dpcafm2}. Based on Lagrange duality, when selecting $\alpha=\hat{\lambda}$ in \eqref{eq:cpca}, where $\hat{\lambda}$ is the largest eigenvalue of $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$, cPCA maximizing $\textcolor{black}{{\mathbf{u}}^\top}(\mathbf{C}_{xx}-\hat{\lambda}\mathbf{C}_{yy}){\mathbf{u}}$ is equivalent to $\max_{\mathbf{u}\in\mathbb{R}^{D}}\mathcal{L}(\mathbf{u};\hat{\lambda})=\mathbf{u}^\top(\mathbf{C}_{xx}-\hat{\lambda}\mathbf{C}_{yy})\mathbf{u}+\hat{\lambda}$, which coincides with \eqref{eq:lag} when $\lambda=\hat{\lambda}$ at the optimum. This suggests that the optimizers of cPCA and dPCA share the same direction when $\alpha$ in cPCA is chosen to be the optimal dual variable $\hat{\lambda}$ of our dPCA in \eqref{eq:dpcafm2}. This equivalence between dPCA and cPCA with a proper $\alpha$ can also be seen from the following.\n\begin{theorem}{\cite[Theorem 2]{guo2003generalized}}\n\label{the:cvsd}\nFor real symmetric matrices $\mathbf{C}_{xx}\succeq \mathbf{0}$ and $\mathbf{C}_{yy}\succ\mathbf{0}$, the following holds\n\begin{equation*}\n\check{\lambda}=\frac{\check{\mathbf{u}}^\top\mathbf{C}_{xx}\check{\mathbf{u}}}{\check{\mathbf{u}}^\top\mathbf{C}_{yy}\check{\mathbf{u}}}=\underset{\|\mathbf{u}\|_2=1}{\max}\frac{\mathbf{u}^\top\mathbf{C}_{xx}\mathbf{u}}{\mathbf{u}^\top\mathbf{C}_{yy}\mathbf{u}}\n\end{equation*}\nif and only if\n\begin{equation*}\n\check{\mathbf{u}}^\top(\mathbf{C}_{xx}-\check{\lambda}\mathbf{C}_{yy})\check{\mathbf{u}}=\underset{\|\mathbf{u}\|_2=1}{\max}\mathbf{u}^\top(\mathbf{C}_{xx}-\check{\lambda}\mathbf{C}_{yy})\mathbf{u}.\n\end{equation*}\n\end{theorem}\n}\n\nTo gain further insight into the relationship between dPCA and cPCA, suppose that $\mathbf{C}_{xx}$ and $\mathbf{C}_{yy}$ are simultaneously diagonalizable; that is, there exists a unitary matrix $\mathbf{U}\in\mathbb{R}^{D\times D}$ such that\n\begin{equation*}\n\mathbf{C}_{xx}:=\mathbf{U}\mathbf{\Sigma}_{xx}\mathbf{U}^\top,\quad {\rm and}\quad \mathbf{C}_{yy}:=\mathbf{U}\mathbf{\Sigma}_{yy}\mathbf{U}^\top\n\end{equation*}\nwhere the diagonal matrices $\mathbf{\Sigma}_{xx},\,\mathbf{\Sigma}_{yy}\succ \mathbf{0}$ hold the eigenvalues $\{\lambda_x^i\}_{i=1}^D$ of $\mathbf{C}_{xx}$ and $\{\lambda_y^i\}_{i=1}^D$ of $\mathbf{C}_{yy}$, respectively, on their main diagonals. \textcolor{black}{Even though the two datasets may share some subspace vectors, $\{\lambda_x^i\}_{i=1}^D$ and $\{\lambda_y^i\}_{i=1}^D$ are in general not the same.} It is easy to check that $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}=\mathbf{U}\mathbf{\Sigma}_{yy}^{-1}\mathbf{\Sigma}_{xx}\mathbf{U}^\top=\mathbf{U} {\rm diag}\big(\{\frac{\lambda_x^i}{\lambda_y^i}\}_{i=1}^D\big)\mathbf{U}^\top$; a small numerical sanity check of this identity is sketched next.
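\n\nThe following snippet (our illustration; it is not part of the development) verifies the eigenvalue-ratio identity on random simultaneously diagonalizable matrices:\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nD = 5\nQ, _ = np.linalg.qr(rng.standard_normal((D, D)))  # orthonormal U\nlam_x = rng.uniform(0.5, 5.0, D)\nlam_y = rng.uniform(0.5, 5.0, D)\nCxx = Q @ np.diag(lam_x) @ Q.T\nCyy = Q @ np.diag(lam_y) @ Q.T\nratios = np.sort(lam_x \/ lam_y)\neigs = np.sort(np.linalg.eigvals(np.linalg.solve(Cyy, Cxx)).real)\nassert np.allclose(ratios, eigs)\n\end{verbatim}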
Seeking the first $d$ latent subspace vectors is tantamount to taking the $d$ columns of $\mathbf{U}$ that correspond to the $d$ largest values among $\{\frac{\lambda_x^i}{\lambda_y^i}\}_{i=1}^D$. On the other hand, cPCA for a fixed $\alpha$ looks for the first $d$ latent subspace vectors of $\mathbf{C}_{xx}-\alpha\mathbf{C}_{yy}=\mathbf{U}(\mathbf{\Sigma}_{xx}-\alpha{\bm\Sigma}_{yy})\mathbf{U}^\top=\mathbf{U}{\rm diag}\big(\{\lambda_x^i-\alpha\lambda_y^i\}_{i=1}^D\big)\mathbf{U}^\top$, which amounts to taking the $d$ columns of $\mathbf{U}$ associated with the $d$ largest values in $\{\lambda_x^i-\alpha\lambda_y^i\}_{i=1}^D$. \textcolor{black}{This further confirms that when $\alpha$ is sufficiently large (small), cPCA returns the $d$ columns of $\mathbf{U}$ associated with the $d$ largest $\lambda_{y}^{i}$'s ($\lambda_x^{i}$'s). When $\alpha$ is not properly chosen, cPCA may fail to extract the most contrastive information from target data relative to background data. In contrast, this issue is not present in dPCA, simply because dPCA has no tunable parameter.}\n\n\section{Optimality of {d}PCA}\n\label{sec:optim}\n\nIn this section, we show that dPCA is optimal when data obey a certain affine model. In a similar vein to the factor analysis model underlying PCA, the non-centered background data $\{\accentset{\circ}{\mathbf{y}}_j\in\mathbb{R}^D\}_{j=1}^n$ are expressed as\n\begin{equation}\n\accentset{\circ}{\mathbf{y}}_j=\mathbf{m}_y+\n\mathbf{U}_b\bm{\psi}_j+\mathbf{e}_{y,j},\t\quad j=1,\,\ldots,\,n\label{eq:y}\n\end{equation}\nwhere $\mathbf{m}_y\in\mathbb{R}^D$ denotes the unknown location (mean) vector; $\mathbf{U}_b\in\mathbb{R}^{D\times k}$ has orthonormal columns with $k<D$, spanning the subspace common to background and target data; $\{\bm{\psi}_j\in\mathbb{R}^k\}$ collect the corresponding projection coefficients; and $\{\mathbf{e}_{y,j}\}$ account for zero-mean modeling errors. In the same spirit, the non-centered target data are postulated to obey\n\begin{equation}\n\accentset{\circ}{\mathbf{x}}_i=\mathbf{m}_x+\mathbf{U}_b\bm{\chi}_i+\mathbf{u}_s\varsigma_i+\mathbf{e}_{x,i},\quad i=1,\,\ldots,\,m\label{eq:x}\n\end{equation}\nwhere the unit-norm vector $\mathbf{u}_s\in\mathbb{R}^D$, orthogonal to the columns of $\mathbf{U}_b$, is the subspace vector specific to the target data.\n\begin{assumption}\n\label{asmp:model}\nBackground and target data are generated according to \eqref{eq:y} and \eqref{eq:x}, respectively.\n\end{assumption}\n\begin{assumption}\n\label{asmp:unique}\nWith $\lambda_s:=({\mathbf{u}_s^\top\mathbf{C}_{xx}\mathbf{u}_s})\/({\mathbf{u}_s^\top\mathbf{C}_{yy}\mathbf{u}_s})$, and with $\lambda_{x,i}$ ($\lambda_{y,i}$) denoting the variance of the target (background) data along the $i$-th common subspace direction, it holds for $i=1,\,\ldots,\,k$ that $\lambda_s>{\lambda_{x,i}}\/{\lambda_{y,i}}$.\n\end{assumption}\nAssumption \ref{asmp:unique} essentially requires that $\mathbf{u}_s$ is discriminative enough in the target data relative to the background data.\n\textcolor{black}{Combining Assumption \ref{asmp:unique} with the fact that $\mathbf{u}_s$ is an eigenvector of $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$, it follows readily that the eigenvector of $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$ associated with the largest eigenvalue is $\mathbf{u}_s$.}\nUnder these two assumptions, we establish the optimality of dPCA next.\n\begin{theorem}\n\t\label{the:optm}\n\tUnder Assumptions \ref{asmp:model} and \ref{asmp:unique} with $d=1$, as $m,\,n\to\infty$, the solution of \eqref{eq:dpca} recovers the subspace vector specific to target data relative to background data, namely $\mathbf{u}_s$.\n\end{theorem}\n\n\section{Kernel dPCA} \label{sec:kdpca}\n\nWith advances in data acquisition and storage technologies, a sheer volume of possibly high-dimensional data is collected daily, and such data generally lie on a nonlinear manifold. This goes beyond the ability of the (linear) dPCA in Sec. \ref{sec:dpca}, due mainly to a couple of reasons: i) dPCA presumes a linear low-dimensional hyperplane onto which the target data vectors are projected; and ii) dPCA incurs computational complexity $\mathcal{O}(\max (m,\,n) D^2)$ that grows quadratically with the dimensionality of the data vectors. To address these challenges, this section generalizes dPCA to account for nonlinear data relationships via kernel-based learning, and puts forth kernel (K) dPCA for nonlinear discriminative analytics.
Specifically, KdPCA starts by \textcolor{black}{mapping} both the target and background data vectors from the original data space to a higher-dimensional (possibly infinite-dimensional) feature space using a common nonlinear function, and then performs linear dPCA on the \textcolor{black}{transformed} data.\n\nConsider first the dual version of dPCA, starting with the $N:=m+n$ augmented data $\{\mathbf{z}_i\in\mathbb{R}^D\}_{i=1}^N$ given by\n\begin{equation*}\n\label{eq:z}\n\mathbf{z}_i:=\left\{\n\begin{array}{ll}\n\mathbf{x}_i, & 1\le i \le m\\\n\mathbf{y}_{i-m}, & m< i \le N\n\end{array}\n\right.\n\end{equation*}\nand express the desired subspace vector $\mathbf{u}\in\mathbb{R}^D$ in terms of $\mathbf{Z}:=[\mathbf{z}_1\,\cdots \, \mathbf{z}_N]\in\mathbb{R}^{D\times N}$, yielding $\mathbf{u}:=\mathbf{Z}\mathbf{a}$, where $\mathbf{a}\in\mathbb{R}^N$ denotes the dual vector. \textcolor{black}{When ${\rm min}(m\,,n)\gg D$, matrix $\mathbf{Z}$ has full row rank in general, so there always exists a vector $\mathbf{a}$ such that $\mathbf{u}=\mathbf{Z}\mathbf{a}$. Similar steps have also been used in obtaining dual versions of PCA and CCA \cite{kpca,2004kernel}.} Substituting $\mathbf{u}=\mathbf{Z}\mathbf{a}$ into \eqref{eq:dpca} leads to our dual dPCA\n\begin{equation}\n\textcolor{black}{\t\label{eq:ddpca}\n\t\underset{\mathbf{a}\in\mathbb{R}^N}{\max}\n\t\quad \frac{\mathbf{a}^\top \mathbf{Z}^\top \mathbf{C}_{xx}\mathbf{Z}\mathbf{a}}{ \mathbf{a}^\top \mathbf{Z}^\top\mathbf{C}_{yy}\mathbf{Z}\mathbf{a}}\n}\n\end{equation}\nbased on which we will develop our KdPCA in the sequel.\n\nSimilar to deriving KPCA from dual PCA \cite{kpca}, our approach is first to transform $\{\mathbf{z}_i\}_{i=1}^N$ from $\mathbb{R}^D$ to a high-dimensional space $\mathbb{R}^L$ (possibly with $L=\infty$) by some nonlinear mapping function $\bm{\phi}(\cdot)$, followed by removing the sample means of $\{\bm{\phi}(\mathbf{x}_i)\}$ and $\{\bm{\phi}(\mathbf{y}_j)\}$ from the corresponding transformed data; and subsequently, implementing dPCA on the centered transformed datasets to obtain the low-dimensional \textcolor{black}{kernel} dPCs.
Specifically, the sample covariance matrices of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}_{i=1}^m$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)\\}_{j=1}^n$ can be expressed as \n\\begin{align*}\t\n\t\\mathbf{C}_{xx}^{\\phi }&:=\\frac{1}{m}\\sum_{i=1}^m \\left(\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_{ x}\\right)\\left(\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_{ x}\\right)^\\top\\in\\mathbb{R}^{L\\times L}\\\\\n\t\\mathbf{C}_{yy}^{\\phi }&:=\\frac{1}{n}\\sum_{j=1}^n \\left(\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_{ y}\\right)\\left(\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_{ y}\\right)^\\top\\in\\mathbb{R}^{L\\times L}\n\\end{align*}\nwhere the $L$-dimensional vectors $\\bm{\\mu}_{x}:=(1\/m)\\sum_{i=1}^m\\bm{\\phi}(\\mathbf{x}_i)\n$ and $\\bm{\\mu}_{y}:=(1\/n)\\sum_{j=1}^n\\bm{\\phi}(\\mathbf{y}_j)\n$ are accordingly the sample means of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)\\}$.\nFor convenience, let\n $\\bm{\\Phi}(\\mathbf{Z}):=[\\bm{\\phi}(\\mathbf{x}_1)-\\bm{\\mu}_x,\\,\\cdots,\\,\\bm{\\phi}(\\mathbf{x}_m)-\\bm{\\mu}_x,\\,\\bm{\\phi}(\\mathbf{y}_1)-\\bm{\\mu}_y,\\,\\cdots,\\,\\bm{\\phi}(\\textcolor{black}{\\mathbf{y}_n})-\\bm{\\mu}_y]\\in\\mathbb{R}^{L\\times N}$.\n Upon replacing $\\{\\mathbf{x}_i\\}$ and $\\{\\mathbf{y}_j\\}$ in \\eqref{eq:ddpca} with $\\{\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_x\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_y\\}$, respectively, \\textcolor{black}{the kernel version of \\eqref{eq:ddpca} boils down to}\n\\textcolor{black}{\n\\begin{equation}\t\n\t\\label{eq:kdpca}\n\t\\underset{\\mathbf{a}\\in\\mathbb{R}^N}{\\max}\n\t\\quad\\frac{\\mathbf{a}^\\top \\bm{\\Phi}^\\top(\\mathbf{Z}) \\mathbf{C}_{xx}^{\\phi}\\bm{\\Phi}(\\mathbf{Z})\\mathbf{a}}{ \\mathbf{a}^\\top \\bm{\\Phi}^\\top(\\mathbf{Z})\\mathbf{C}_{y y}^{\\phi}\\bm{\\Phi}(\\mathbf{Z})\\mathbf{a}}.\n\\end{equation}}\n\n\nIn the sequel, \\eqref{eq:kdpca} will be further simplified by leveraging the so-termed `kernel trick' \\cite{RKHS}. \n\nTo start, define a kernel matrix $\\mathbf{K}_{xx}\\in\\mathbb{R}^{m\\times m}$ of $\\{\\mathbf{x}_i\\}$ whose $(i,\\,j)$-th entry is $\\kappa(\\mathbf{x}_i,\\,\\mathbf{x}_j):=\\left<\\bm{\\phi}(\\mathbf{x}_i),\\,\\bm{\\phi}(\\mathbf{x}_j)\\right>$ for $i,\\,j=1,\\,\\ldots,\\,m$, where $\\kappa(\\cdot)$ represents some kernel function. Matrix $\\mathbf{K}_{yy}\\in\\mathbb{R}^{n\\times n}$ of $\\{\\mathbf{y}_j\\}$ is defined likewise. Further, the $(i,\\,j)$-th entry of matrix $\\mathbf{K}_{xy}\\in\\mathbb{R}^{m\\times n}$ is $\\kappa(\\mathbf{x}_i,\\,\\mathbf{y}_j):=\\left<\\bm{\\phi}(\\mathbf{x}_i),\\,\\bm{\\phi}(\\mathbf{y}_j)\\right>$. Centering $\\mathbf{K}_{xx}$, $\\mathbf{K}_{yy}$, and $\\mathbf{K}_{xy}$ produces \n\\begin{align*}\n\\mathbf{K}_{xx}^c&:=\\mathbf{K}_{xx}-\\tfrac{1}{m}\\mathbf{1}_{m }\\mathbf{K}_{xx}-\\tfrac{1}{m}\\mathbf{K}_{xx}\\mathbf{1}_{m}+\\tfrac{1}{m^2}\\mathbf{1}_{m}\\mathbf{K}_{xx}\\mathbf{1}_{m}\\\\\n\\mathbf{K}_{yy}^c&:=\\mathbf{K}_{yy}-\\!\\tfrac{1}{n}\\mathbf{1}_{ n}\\mathbf{K}_{yy}-\\!\\tfrac{1}{n}\\mathbf{K}_{yy}\\mathbf{1}_{n}+\\!\\tfrac{1}{n^2}\\mathbf{1}_{n}\\mathbf{K}_{yy}\\mathbf{1}_{n}\\\\\n\\mathbf{K}_{xy}^c&:=\\mathbf{K}_{xy}-\\tfrac{1}{m}\\mathbf{1}_{ m}\\mathbf{K}_{xy}-\\tfrac{1}{n}\\mathbf{K}_{xy}\\mathbf{1}_{ n}+\\tfrac{1}{mn}\\mathbf{1}_{m}\\mathbf{K}_{xy}\\mathbf{1}_{n}\n\\end{align*}\nwith matrices $\\mathbf{1}_{m}\\in\\mathbb{R}^{m\\times m}$ and $\\mathbf{1}_n\\in\\mathbb{R}^{n\\times n}$ having all entries $1$. 
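\n\nIn passing, these centering operations take one line each in code; a minimal sketch (our illustration, assuming the uncentered Gram matrices have been formed):\n\begin{verbatim}\nimport numpy as np\n\ndef center_kernels(Kxx, Kyy, Kxy):\n    # Mirrors the displayed formulas; np.ones((m, m)) plays the\n    # role of the all-one matrix 1_m.\n    m, n = Kxy.shape\n    Om, On = np.ones((m, m)), np.ones((n, n))\n    Kxx_c = Kxx - Om @ Kxx \/ m - Kxx @ Om \/ m + Om @ Kxx @ Om \/ m**2\n    Kyy_c = Kyy - On @ Kyy \/ n - Kyy @ On \/ n + On @ Kyy @ On \/ n**2\n    Kxy_c = Kxy - Om @ Kxy \/ m - Kxy @ On \/ n + Om @ Kxy @ On \/ (m*n)\n    return Kxx_c, Kyy_c, Kxy_c\n\end{verbatim}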
Based on those centered matrices, let\n\begin{equation}\n\label{eq:k}\n\mathbf{K}:=\left[\begin{array}\n{cc}\n\mathbf{K}_{xx}^c &\mathbf{K}_{xy}^c\\\n(\mathbf{K}_{xy}^c)^\top &\mathbf{K}_{yy}^c\n\end{array}\n\right]\in\mathbb{R}^{N\times N}.\n\end{equation}\nDefine further $\mathbf{K}^x\in\mathbb{R}^{N\times N}$ and $\mathbf{K}^y\in\mathbb{R}^{N\times N}$ with $(i,\,j)$-th entries\n\begin{subequations}\n\t\label{eq:kxky}\n\t\begin{align}\n\tK^x_{i,j}\,&:=\left\{\n\t\begin{array}{ll}\n\tK_{i,j}\/m \!&1\le i \le m\\\n\t~~~0 &m<i\le N\n\t\end{array}\n\t\right.\label{eq:kx}\\\n\tK^y_{i,j}\,&:=\left\{\n\t\begin{array}{ll}\n\t~~~0 &1\le i \le m\\\n\tK_{i,j}\/n \!&m<i\le N\n\t\end{array}\n\t\right.\label{eq:ky}\n\t\end{align}\n\end{subequations}\nOne can then verify that $\bm{\Phi}^\top(\mathbf{Z})\mathbf{C}_{xx}^{\phi}\bm{\Phi}(\mathbf{Z})=\mathbf{K}\mathbf{K}^x$ and $\bm{\Phi}^\top(\mathbf{Z})\mathbf{C}_{yy}^{\phi}\bm{\Phi}(\mathbf{Z})=\mathbf{K}\mathbf{K}^y$, so \eqref{eq:kdpca} depends on the data only through inner products. \textcolor{black}{Since the centered kernel matrix $\mathbf{K}$ is rank deficient, $\mathbf{K}\mathbf{K}^y$ is singular; to sidestep this singularity, a small $\epsilon>0$ is added to the diagonal entries of $\mathbf{K}\mathbf{K}^y$. Hence, our KdPCA formulation for $d=1$ is given by}\n\t\begin{equation}\t\label{eq:kdpcafm2}\n\textcolor{black}{\t\t\hat{\mathbf{a}}:=\arg\t\n\t\underset{\mathbf{a}\in\mathbb{R}^N}{\max}\n\t\quad\frac{\mathbf{a}^\top \mathbf{K}\mathbf{K}^x\mathbf{a}}{ \mathbf{a}^\top \left(\mathbf{K}\mathbf{K}^y+\epsilon \mathbf{I}\right)\mathbf{a}}.}\n\t\end{equation}\nAlong the lines of dPCA, the solution of KdPCA in \eqref{eq:kdpcafm2} is provided by\n\begin{equation}\n\label{eq:kdpcasol}\n\textcolor{black}{\n\left(\mathbf{K}\mathbf{K}^y+\epsilon\mathbf{I}\right)^{-1}\mathbf{K}\mathbf{K}^x\hat{\mathbf{a}}=\hat{\lambda}\n\hat{\mathbf{a}}.}\n\end{equation}\nThe optimizer $\hat{\mathbf{a}}$ coincides with the right eigenvector of $\left(\mathbf{K}\mathbf{K}^y+\epsilon\mathbf{I}\right)^{-1}\mathbf{K}\mathbf{K}^x$ corresponding to the largest eigenvalue $\hat{\lambda}=\lambda_1$.\n\nWhen looking for $d\ge 2$ dPCs, with $\{\mathbf{a}_i\}_{i=1}^d$ collected as columns in $\mathbf{A}:=[\mathbf{a}_1\,\cdots \,\mathbf{a}_d]\in\mathbb{R}^{N\times d}$, the KdPCA in \eqref{eq:kdpcafm2} generalizes to\n\t\begin{equation*}\n\t\t\t\hat{\mathbf{A}}:=\arg\n\t\t\underset{\mathbf{A}\in\mathbb{R}^{N\times d}}{\max}\n\t\t\quad {\rm Tr} \left[ \left(\mathbf{A}^\top \left(\mathbf{K}\mathbf{K}^y+\epsilon \mathbf{I}\right)\mathbf{A}\right)^{-1} \mathbf{A}^\top \mathbf{K}\mathbf{K}^x\mathbf{A}\right]\n\t\end{equation*}\nwhose columns correspond to the $d$ right eigenvectors of $\left(\mathbf{K}\mathbf{K}^y+\epsilon\mathbf{I}\right)^{-1}\mathbf{K}\mathbf{K}^x$ associated with the $d$ largest eigenvalues. Having found $\hat{\mathbf{A}}$, one can project the data $\bm{\Phi}(\mathbf{Z})$ onto the obtained $d$ subspace vectors through $\mathbf{K}\hat{\mathbf{A}}$. It is worth remarking that KdPCA operates in the high-dimensional feature space without explicitly forming or evaluating the nonlinear transformations; this becomes possible by the `kernel trick' \cite{RKHS}. The main steps of KdPCA are listed in Alg. \ref{alg:kdpca}.\n\n\begin{algorithm}[t]\n\t\caption{Kernel dPCA.}\n\t\label{alg:kdpca}\n\t\begin{algorithmic}[1]\n\t\t\STATE {\bfseries Input:}\n\t\tTarget data $\{\mathbf{x}_i\}_{i=1}^m$ and background data $\{\mathbf{y}_j\}_{j=1}^n$; number of dPCs $d$; kernel function $\kappa(\cdot)$; constant $\epsilon$.\n\t\t\STATE {\bfseries Construct} $\mathbf{K}_{xx}$, $\mathbf{K}_{yy}$, and $\mathbf{K}_{xy}$, and center them to form $\mathbf{K}$ in \eqref{eq:k}. Build $\mathbf{K}^x$ and $\mathbf{K}^y$ via \eqref{eq:kxky}.\n\t\t\STATE {\bfseries Solve} \eqref{eq:kdpcasol} to obtain the first $d$ eigenvectors $\{\hat{\mathbf{a}}_i\}_{i=1}^d$.\n\t\t\STATE {\bfseries Output} $\hat{\mathbf{A}}:=[\hat{\mathbf{a}}_1\,\cdots\,\hat{\mathbf{a}}_d]$.\n\t\end{algorithmic}\n\end{algorithm}
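\n\nA compact numerical sketch of these steps (our illustration; names are ours, and a centered $\mathbf{K}$ is assumed available, e.g., from the centering routine above):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eig\n\ndef kdpca(K, m, n, d, eps=1e-3):\n    # K: centered N x N kernel matrix; first m rows\/columns index\n    # the target data. Returns A with d columns.\n    N = m + n\n    Dx = np.zeros(N); Dx[:m] = 1.0 \/ m   # K^x = diag(Dx) @ K\n    Dy = np.zeros(N); Dy[m:] = 1.0 \/ n   # K^y = diag(Dy) @ K\n    M = np.linalg.solve(K @ (Dy[:, None] * K) + eps * np.eye(N),\n                        K @ (Dx[:, None] * K))\n    w, V = eig(M)\n    idx = np.argsort(w.real)[::-1][:d]\n    return V[:, idx].real\n\end{verbatim}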
Two remarks are worth making at this point.\n\begin{remark}\nWhen the kernel function required to form $\mathbf{K}_{xx}$, $\mathbf{K}_{yy}$, and $\mathbf{K}_{xy}$ is not given, one may use multi-kernel learning to automatically choose the right kernel function(s); see, for example, \cite{mkl2004,zhang2017going,tsp2017zwrg}. Specifically, one can presume $\mathbf{K}_{xx}:=\sum_{i=1}^P\delta_i\mathbf{K}_{xx}^i$, $\mathbf{K}_{yy}:=\sum_{i=1}^P\delta_i\mathbf{K}_{yy}^i$, and $\mathbf{K}_{xy}:=\sum_{i=1}^P\delta_i\mathbf{K}_{xy}^i$ in \eqref{eq:kdpcafm2}, where $\mathbf{K}_{xx}^i\in\mathbb{R}^{m\times m}$, $\mathbf{K}_{yy}^i\in\mathbb{R}^{n\times n}$, and $\mathbf{K}_{xy}^i\in\mathbb{R}^{m\times n}$ are formed using the kernel function $\kappa_i(\cdot)$; here, $\{\kappa_i(\cdot)\}_{i=1}^P$ form a preselected dictionary of known kernels, while $\{\delta_i\}_{i=1}^P$ are treated as unknowns to be learned along with $\mathbf{A}$ in \eqref{eq:kdpcafm2}.\n\end{remark}\n\begin{remark}\nIn the absence of background data, upon setting $\{\bm{\phi}(\mathbf{y}_j)=\mathbf{0}\}$ and $\epsilon=1$ in \eqref{eq:kdpcafm2}, matrix $\left(\mathbf{K}\mathbf{K}^y+\epsilon\mathbf{I}\right)^{-1}\mathbf{K}\mathbf{K}^x$ reduces to\n\begin{equation*}\n\mathbf{M}:=\left[\n\begin{array}\n{cc}\n(\mathbf{K}_{xx}^c)^2 &\mathbf{0}\\\n\mathbf{0} &\mathbf{0}\n\end{array}\n\right].\n\end{equation*}\nAfter collecting the first $m$ entries of $\hat{\mathbf{a}}_i$ into $\mathbf{w}_i\in\mathbb{R}^{m}$, \eqref{eq:kdpcasol} suggests that $(\mathbf{K}_{xx}^c)^2\mathbf{w}_i=\lambda_i\mathbf{w}_i$, where $\lambda_i$ denotes the $i$-th largest eigenvalue of $\mathbf{M}$. Clearly, $\{\mathbf{w}_i\}_{i=1}^d$ can be viewed as the $d$ eigenvectors of $(\mathbf{K}_{xx}^c)^2$ associated with their $d$ largest eigenvalues. Recall that KPCA finds the first $d$ principal eigenvectors of $\mathbf{K}_{xx}^c$ \cite{kpca}. Thus, KPCA is a special case of KdPCA when no background data are employed.\n\end{remark}\n\n\section{Discriminative Analytics with\\ Multiple Background Datasets} \label{sec:mdpca}\n\nSo far, we have presented discriminative analytics methods for two datasets. This section generalizes them to cope with multiple (specifically, one target plus more than one background) datasets. Suppose that, in addition to the zero-mean target dataset $\{\mathbf{x}_i\in\mathbb{R}^D\}_{i=1}^m$, we are also given $M\ge 2$ centered background datasets $\{\mathbf{y}_j^k\}_{j=1}^{n_k}$ for $k=1,\,\ldots,\,M$. The $M$ sets of background data $\{\mathbf{y}_j^k\}_{k=1}^M$ contain latent background subspace vectors that are also present in $\{\mathbf{x}_i\}$.\n\nLet $\mathbf{C}_{xx}:=m^{-1}\sum_{i=1}^{m}\mathbf{x}_i\mathbf{x}_i^\top$ and $\mathbf{C}_{yy}^k:=n_k^{-1}\sum_{j=1}^{n_k}\mathbf{y}_{j}^k(\mathbf{y}^k_{j})^\top$ be the corresponding sample covariance matrices. The goal here is to unveil the latent subspace vectors \textcolor{black}{that are significant in representing the target data, but not any of the background data.} Building on the dPCA in \eqref{eq:dpcam} for a single background dataset, it is meaningful to seek directions that maximize the variance of the target data, while minimizing those of all background data.
Formally, we pursue the following optimization, which we term multi-background (M) dPCA, for discriminative analytics of multiple datasets\n\begin{equation}\label{eq:gdpca}\n\underset{\mathbf{U}\in\mathbb{R}^{D\times d}}{\max}\t\quad {\rm Tr}\bigg[\bigg(\sum_{k=1}^M\omega_k\mathbf{U}^\top\mathbf{C}_{yy}^{k}\mathbf{U}\bigg)^{-1}\mathbf{U}^\top\mathbf{C}_{xx}\mathbf{U}\bigg]\n\end{equation}\nwhere $\{\omega_k\ge 0\}_{k=1}^M$ with $\sum_{k=1}^M \omega_k=1$ weight the variances of the $M$ projected background datasets.\n\nUpon defining $\mathbf{C}_{yy}:=\sum_{k=1}^{M}\omega_k\mathbf{C}_{yy}^k$, it is straightforward to see that \eqref{eq:gdpca} reduces to \eqref{eq:dpcam}. Therefore, one readily deduces that the optimal ${\mathbf{U}}$ in \eqref{eq:gdpca} can be obtained by taking the $d$ right eigenvectors of $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$ that are associated with the $d$ largest eigenvalues. For implementation, the steps of MdPCA are presented in Alg. \ref{alg:mdpca}, and a numerical sketch is given below.\n\n\begin{algorithm}[t]\n\t\caption{Multi-background dPCA.}\n\t\label{alg:mdpca}\n\t\begin{algorithmic}[1]\n\t\t\STATE {\bfseries Input:}\n\t\tTarget data $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=1}^m$ and background data $\{\accentset{\circ}{\mathbf{y}}_{j}^k\}_{j=1}^{n_k}$ for $k=1,\,\ldots,\,M$; weight hyper-parameters $\{\omega_k\}_{k=1}^M$; number of dPCs $d$.\n\t\t\STATE {\bfseries Remove} the means from $\{\accentset{\circ}{\mathbf{x}}_i\}$ and $\{\accentset{\circ}{\mathbf{y}}_{j}^k\}_{k=1}^{M}$ to obtain $\{\mathbf{x}_i\}$ and $\{\mathbf{y}_{j}^k\}_{k=1}^{M}$. Form $\mathbf{C}_{xx}$, $\{\mathbf{C}_{yy}^k\}_{k=1}^M$, and $\mathbf{C}_{yy}:=\sum_{k=1}^{M}\omega_k\mathbf{C}^k_{yy}$.\n\t\t\STATE {\bfseries Perform} eigendecomposition on $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$ to obtain the first $d$ right eigenvectors $\{\hat{\mathbf{u}}_i\}_{i=1}^d$.\n\t\t\STATE {\bfseries Output} $\hat{\mathbf{U}}:=[\hat{\mathbf{u}}_1\,\cdots\,\hat{\mathbf{u}}_d]$.\n\t\end{algorithmic}\n\end{algorithm}\n\n\begin{remark}\n\t\label{rmk:alpha}\t\n\tThe parameters $\{\omega_k\}_{k=1}^M$ can be chosen in at least two ways: i) via spectral clustering \cite{ng2002spectral}, to select a few sets of $\{\omega_k\}$ \textcolor{black}{yielding the most representative subspaces for projecting the target data;} or ii) by optimizing $\{\omega_k\}_{k=1}^M$ jointly with $\mathbf{U}$ in \eqref{eq:gdpca}.\n\end{remark}
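\n\nThe promised sketch (our illustration; the function name is ours):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef mdpca(X, Ys, weights, d):\n    # X: m x D centered target data; Ys: list of centered background\n    # datasets; weights: nonnegative, summing to one.\n    Cxx = X.T @ X \/ X.shape[0]\n    Cyy = sum(w * (Y.T @ Y) \/ Y.shape[0] for w, Y in zip(weights, Ys))\n    vals, vecs = eigh(Cxx, Cyy)  # generalized eigendecomposition\n    return vecs[:, ::-1][:, :d]\n\end{verbatim}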
For data belonging to nonlinear manifolds, kernel (K) MdPCA is developed next. With some nonlinear function $\bm{\phi}(\cdot)$, we obtain the transformed target data $\{\bm{\phi}(\mathbf{x}_i)\in\mathbb{R}^L\}$ as well as background data $\{\bm{\phi}(\mathbf{y}_{j}^k)\in\mathbb{R}^L\}$. Letting $\bm{\mu}_x\in\mathbb{R}^L$ and $\bm{\mu}_{y}^k:=(1\/n_k)\sum_{j=1}^{n_k}\bm{\phi}(\mathbf{y}_{j}^k)\in\mathbb{R}^L$ denote the means of $\{\bm{\phi}(\mathbf{x}_i)\}$ and $\{\bm{\phi}(\mathbf{y}_{j}^k)\}$, respectively, one can form the corresponding covariance matrices $\mathbf{C}_{xx}^{\phi}\in\mathbb{R}^{L\times L}$, and\n\begin{equation*}\n\t\mathbf{C}_{yy}^{\phi,k}:=\frac{1}{n_k}\sum_{j=1}^{n_k}\left(\bm{\phi}(\mathbf{y}_{j}^k)-\bm{\mu}_{y}^k\right)\left(\bm{\phi}(\mathbf{y}_{j}^k)-\bm{\mu}_{y}^k\right)^\top\in\mathbb{R}^{L\times L}\n\t\end{equation*}\nfor $k=1,\,\ldots,\, M$. Define the aggregate vectors $\mathbf{b}_i\in\mathbb{R}^L$, $i=1,\,\ldots,\,N$, with $N:=m+\sum_{k=1}^M n_k$, as\n\begin{equation*}\n\label{eq:b}\n\mathbf{b}_i:=\left\{\n\begin{array}{ll}\n\bm{\phi}(\mathbf{x}_i)-\bm{\mu}_x, & 1\le i \le m\\\n\bm{\phi}(\mathbf{y}_{i-m}^1)-\bm{\mu}_{y}^1, & m< i \le m+n_1\\\n~~~\vdots & \\\n\bm{\phi}(\mathbf{y}_{i-\textcolor{black}{(N-n_M)}}^M)-\bm{\mu}_y^M,& N-n_{M}< i\le N\n\end{array}\n\right.\n\end{equation*}\nand collect them as columns to form $\mathbf{B}:=[\mathbf{b}_1\,\cdots\,\mathbf{b}_N]\in\mathbb{R}^{L\times N}$. Upon assembling dual vectors $\{\mathbf{a}_i\in\mathbb{R}^N\}_{i=1}^d$ to form $\mathbf{A}:=[\mathbf{a}_1\,\cdots \, \mathbf{a}_d]\in\mathbb{R}^{N\times d}$, the kernel version of \eqref{eq:gdpca} can be obtained as\n\t\begin{equation*}\n\underset{\mathbf{A}\in\mathbb{R}^{N\times d}}{\max} {\rm Tr}\bigg[\bigg(\mathbf{A}^\top\mathbf{B}^\top\sum_{k=1}^M\omega_k\mathbf{C}^{\phi,k}_{yy}\mathbf{B}\mathbf{A}\hspace{-0.1cm}\bigg)^{-1}\mathbf{A}^\top\mathbf{B}^\top\mathbf{C}^{\phi} _{xx}\mathbf{B}\mathbf{A}\bigg].\n\t\end{equation*}\n\nConsider now kernel matrices $\mathbf{K}_{xx}\in\mathbb{R}^{m\times m}$ and $\mathbf{K}_{kk}\in\mathbb{R}^{n_k\times n_k}$, whose $(i,\,j)$-th entries are $\kappa(\mathbf{x}_i,\,\mathbf{x}_j)$ and $\kappa(\mathbf{y}_{i}^k,\,\mathbf{y}_{j}^k)$, respectively, for $k=1,\,\ldots,\,M$. Furthermore, matrices $\mathbf{K}_{xk}\in\mathbb{R}^{m\times n_k}$ and $\mathbf{K}_{lk}\in\mathbb{R}^{n_l\times n_k}$ are defined with their corresponding $(i,\,j)$-th elements $\kappa(\mathbf{x}_{i},\,\mathbf{y}_{j}^k)$ and $\kappa(\mathbf{y}_{i}^{l},\,\mathbf{y}_{j}^k)$, for $l=1,\,\ldots,\, k-1$ and $k=1,\,\ldots,\,M$. We subsequently center those matrices to obtain $\mathbf{K}_{xx}^c$ and\n\begin{align*}\n\mathbf{K}_{kk}^c&:=\mathbf{K}_{kk}-\tfrac{1}{n_k}\mathbf{1}_{n_k}\mathbf{K}_{kk}-\tfrac{1}{n_k}\mathbf{K}_{kk}\mathbf{1}_{n_k}+\tfrac{1}{n_k^2}\mathbf{1}_{n_k}\mathbf{K}_{kk}\mathbf{1}_{n_k}\\\n\mathbf{K}_{xk}^c&:=\mathbf{K}_{xk}-\tfrac{1}{m}\mathbf{1}_{m }\mathbf{K}_{xk}-\tfrac{1}{n_k}\mathbf{K}_{xk}\mathbf{1}_{n_k}+\tfrac{1}{mn_k}\mathbf{1}_{m}\mathbf{K}_{xk}\mathbf{1}_{n_k}\\\n\mathbf{K}_{lk}^c&:=\mathbf{K}_{lk}-\!\tfrac{1}{n_l}\mathbf{1}_{n_l}\mathbf{K}_{lk}-\!\tfrac{1}{n_{k}}\mathbf{K}_{lk}\mathbf{1}_{n_k}+\!\tfrac{1}{n_ln_k}\mathbf{1}_{n_l}\mathbf{K}_{lk}\mathbf{1}_{n_k}\n\end{align*}\nwhere $\mathbf{1}_{n_k}\in\mathbb{R}^{n_k\times n_k}$ and $\mathbf{1}_{n_{l}}\in\mathbb{R}^{n_{l}\times n_{l}}$ are all-one matrices.
With $\mathbf{K}^x$ as in \eqref{eq:kx}, consider the $N\times N$ matrix\n\begin{equation}\n\label{eq:km}\n\mathbf{K}:=\left[\begin{array}\n{llll}\n\mathbf{K}_{xx}^c & \mathbf{K}_{x1}^c & \cdots & \mathbf{K}_{xM}^c \\\n(\mathbf{K}_{x1}^c)^\top & \mathbf{K}_{11}^c & \cdots & \mathbf{K}_{1M}^c \\\n\quad\vdots & \quad\vdots & \ddots & \quad\vdots \\\n(\mathbf{K}_{xM}^c)^\top & (\mathbf{K}_{1M}^c)^\top & \cdots & \mathbf{K}_{MM}^c\n\end{array}\n\right]\n\end{equation}\nand $\mathbf{K}^k\in\mathbb{R}^{N\times N}$ with $(i,\,j)$-th entry\n\begin{equation}\nK^k_{i,j}:=\left\{\!\!\begin{array}{cl}\nK_{i,j}\/n_k, \!&\text{if}~ m+\sum_{\ell=1}^{k-1}n_{\ell}< i \le m+\sum_{\ell=1}^{k}n_{\ell}\\\n0, &\mbox{otherwise}\n\end{array}\n\right.\label{eq:kk}\n\end{equation}\nfor $k=1,\,\ldots,\,M$. Adopting the regularization in \eqref{eq:kdpcafm2}, our KMdPCA finds\n\t\begin{equation*}\n\hat{\mathbf{A}}:=\arg\underset{\mathbf{A}\in\mathbb{R}^{N\times d}}{\max}{\rm Tr}\bigg[\bigg(\mathbf{A}^\top\Big(\mathbf{K}\sum_{k=1}^M\omega_k\mathbf{K}^k+\epsilon\mathbf{I}\Big)\mathbf{A}\bigg)^{-1}\!\!\mathbf{A}^\top\mathbf{K}\mathbf{K}^x\mathbf{A}\bigg]\n\t\end{equation*}\nsimilar to (K)dPCA, whose solution comprises the right eigenvectors associated with the \textcolor{black}{first $d$} largest eigenvalues in\n\t\begin{equation}\n\t\t\label{eq:kmdpcasol}\n\t\t\bigg(\mathbf{K}\sum_{k=1}^M\omega_k\mathbf{K}^k+\epsilon\mathbf{I}\bigg)^{-1}\mathbf{K}\mathbf{K}^x\hat{\mathbf{a}}_i=\hat{\lambda}_i\hat{\mathbf{a}}_i.\n\t\end{equation}\n\nFor implementation, KMdPCA is presented in Alg. \ref{alg:mkdpca}.\n\begin{algorithm}[t]\n\t\caption{Kernel multi-background dPCA.}\n\t\label{alg:mkdpca}\n\t\begin{algorithmic}[1]\n\t\t\STATE {\bfseries Input:}\n\t\tTarget data $\{\mathbf{x}_i\}_{i=1}^m$ and background data $\{\mathbf{y}_{j}^k\}_{j=1}^{n_k}$ for $k=1,\,\ldots,\,M$; number of dPCs $d$; kernel function $\kappa(\cdot)$; weight coefficients $\{\omega_k\}_{k=1}^M$; constant $\epsilon$.\n\t\t\STATE {\bfseries Construct} $\mathbf{K}$ using \eqref{eq:km}. Build $\mathbf{K}^x$ and $\{\mathbf{K}^k\}_{k=1}^M$ via \eqref{eq:kx} and \eqref{eq:kk}.\n\t\t\STATE {\bfseries Solve} \eqref{eq:kmdpcasol} to obtain the first $d$ eigenvectors $\{\hat{\mathbf{a}}_i\}_{i=1}^d$.\n\t\t\STATE {\bfseries Output} $\hat{\mathbf{A}}:=[\hat{\mathbf{a}}_1\,\cdots\,\hat{\mathbf{a}}_d]$.\n\t\end{algorithmic}\n\end{algorithm}
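\n\nAs with KdPCA, these steps admit a compact numerical sketch (our illustration; a centered $\mathbf{K}$ is assumed):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eig\n\ndef kmdpca(K, m, ns, weights, d, eps=1e-4):\n    # K: centered N x N kernel matrix; ns: background sample sizes\n    # n_1, ..., n_M; weights: the omega_k coefficients.\n    N = m + sum(ns)\n    dx = np.zeros(N); dx[:m] = 1.0 \/ m\n    Kx = dx[:, None] * K                 # K^x\n    Ky = np.zeros_like(K)\n    start = m\n    for nk, wk in zip(ns, weights):      # accumulate sum_k w_k K^k\n        dk = np.zeros(N); dk[start:start + nk] = 1.0 \/ nk\n        Ky += wk * (dk[:, None] * K)\n        start += nk\n    M = np.linalg.solve(K @ Ky + eps * np.eye(N), K @ Kx)\n    w, V = eig(M)\n    idx = np.argsort(w.real)[::-1][:d]\n    return V[:, idx].real\n\end{verbatim}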
\begin{remark}\nWe can verify that PCA, KPCA, dPCA, KdPCA, MdPCA, and KMdPCA incur computational complexities $\mathcal{O}(mD^2)$, $\mathcal{O}(m^2D)$, $\mathcal{O}(\max(m,n)D^2)$, $\mathcal{O}(\max(m^2,n^2)D)$, $\mathcal{O}(\max(m,\bar{n} )D^2)$, and $\mathcal{O}(\max(m^2,\bar{n}^2 )D)$, respectively, where $\bar{n}:=\max_k~\{n_k\}_{k=1}^M$. It is also not difficult to check that the computational complexity of forming $\mathbf{C}_{xx}$, $\mathbf{C}_{yy}$, $\mathbf{C}_{yy}^{-1}$, and of performing the eigendecomposition on $\mathbf{C}_{yy}^{-1}\mathbf{C}_{xx}$ is $\mathcal{O}(mD^2)$, $\mathcal{O}(nD^2)$, $\mathcal{O}(D^3)$, and $\mathcal{O}(D^3)$, respectively. As the number of data vectors ($m,\,n$) is much larger than their dimensionality $D$, it follows readily that dPCA performed in the primal domain incurs complexity $\mathcal{O}(\max(m,n)D^2)$. Similarly, the computational complexities of the other algorithms can be checked. Evidently, when $\min(m,n)\gg D $ or $\min(m,\underline{n})\gg D$ with $\underline{n}:=\min_k\,\{n_k\}_{k=1}^M$, dPCA and MdPCA are computationally more attractive than KdPCA and KMdPCA. On the other hand, KdPCA and KMdPCA become more appealing when $D\gg \max(m,n)$ or $D\gg \max(m,\bar{n})$. Moreover, the computational complexity of cPCA is $\mathcal{O}(\max (m,n)D^2L)$, where $L$ here denotes the number of candidate $\alpha$ values (not to be confused with the feature space dimension). Clearly, relative to dPCA, cPCA is computationally more expensive by roughly the factor $L$.\n\end{remark}\n\n\section{Numerical Tests}\label{sec:simul}\n\nTo evaluate the performance of our proposed approaches for discriminative analytics, we carried out a number of numerical tests using several synthetic and real-world datasets, a sample of which is reported in this section.\n\n\subsection{dPCA tests}\n\nSemi-synthetic target \textcolor{black}{$\{\accentset{\circ}{\mathbf{x}}_i\in\mathbb{R}^{784}\}_{i=1}^{2,000}$} and background images \textcolor{black}{$\{\accentset{\circ}{\mathbf{y}}_j\in\mathbb{R}^{784}\}_{j=1}^{3,000}$} were obtained by superimposing images from the MNIST\footnote{Downloaded from http:\/\/yann.lecun.com\/exdb\/mnist\/.} and CIFAR-10 \cite{cifar10} datasets. Specifically, the target data $\{\mathbf{x}_i\in\mathbb{R}^{784}\}_{i=1}^{2,000}$ were generated using $2,000$ handwritten digits 6 and 9 ($1,000$ for each) of size $28\times 28$, superimposed with $2,000$ frog images from the CIFAR-10 database \cite{cifar10}, followed by removing the sample mean from each data point; see Fig. \ref{fig:targ}. The raw $32\times 32$ frog images were converted into grayscale, and randomly cropped to $28\times 28$. The zero-mean background data $\{\mathbf{y}_j\in\mathbb{R}^{784}\}_{j=1}^{3,000}$ were constructed using $3,000$ cropped frog images, which were randomly chosen from the remaining frog images in the CIFAR-10 database.\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.64]{targ.pdf} \n\t\caption{\small{Superimposed images.}}\n\t\label{fig:targ}\n\end{figure}\n\nThe dPCA Alg. \ref{alg:dpca} was performed on $\{\accentset{\circ}{\mathbf{x}}_i\}$ and $\{\accentset{\circ}{\mathbf{y}}_j\}$ with $d=2$, while PCA was implemented on $\{\accentset{\circ}{\mathbf{x}}_i\}$ only. The first two PCs and dPCs are presented in the left and right panels of Fig. \ref{fig:digits}, respectively. Clearly, dPCA reveals the discriminative information of the target data describing digits $6$ and $9$ relative to the background data, enabling successful discovery of the digit $6$ and $9$ subgroups. On the contrary, PCA captures only the patterns that correspond to the generic background rather than those associated with the digits $6$ and $9$. \textcolor{black}{To further assess the performance of dPCA and PCA, K-means is carried out using the resulting low-dimensional representations of the target data. The clustering performance is evaluated in terms of two metrics: clustering error and scatter ratio.
The clustering error is defined as the ratio of the number of incorrectly clustered data vectors over $m$. The scatter ratio, which quantifies cluster separation, is defined as $S_t\/\sum_{i=1}^2S_i$, where $S_t$ and $\{S_i\}_{i=1}^2$ denote the total scatter value and the within-cluster scatter values, given by $S_t:=\sum_{j=1}^{2,000}\|\hat{\mathbf{U}}^\top\mathbf{x}_j\|_2^2$ and $\{S_i:=\sum_{j\in \mathcal{C}_i}\|\hat{\mathbf{U}}^\top\mathbf{x}_j-\tfrac{1}{|\mathcal{C}_i|}\hat{\mathbf{U}}^\top\sum_{k\in{\mathcal{C}}_i}\mathbf{x}_k\|_2^2\}_{i=1}^2$, respectively, with $\mathcal{C}_i$ representing the set of data vectors belonging to cluster $i$. Table \ref{tab:cluster} reports the clustering errors and scatter ratios of dPCA and PCA under different $d$ values. Clearly, dPCA exhibits lower clustering error and higher scatter ratio.}
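\n\nBoth metrics are straightforward to compute; a minimal sketch (our illustration, with hypothetical helper names; labels are assumed to lie in $\{0,\ldots,K-1\}$):\n\begin{verbatim}\nimport numpy as np\nfrom itertools import permutations\n\ndef clustering_error(labels, truth):\n    # Fraction of misclustered points, minimized over label\n    # permutations (cluster indices are arbitrary).\n    k = max(labels) + 1\n    best = min(sum(p[l] != t for l, t in zip(labels, truth))\n               for p in permutations(range(k)))\n    return best \/ len(truth)\n\ndef scatter_ratio(Z, labels):\n    # Z holds the low-dimensional representations X @ U_hat (centered).\n    St = np.sum(Z ** 2)  # total scatter\n    Sw = sum(np.sum((Z[labels == i] - Z[labels == i].mean(0)) ** 2)\n             for i in np.unique(labels))\n    return St \/ Sw\n\end{verbatim}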
\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.53]{digits.pdf} \t\n\t\caption{\small{dPCA versus PCA on semi-synthetic images.}}\n\t\label{fig:digits}\n\end{figure}\n\n\renewcommand{\arraystretch}{1.5} \n\begin{table}[tp]\t\n\centering\n\t\fontsize{8.5}{8}\selectfont\n\t\t\textcolor{black}{\caption{\textcolor{black}{Performance comparison between dPCA and PCA.}}\n\t\label{tab:cluster}\n\t\vspace{.8em}\n\t\begin{tabular}{|c|c|c|c|c|}\n\t\t\hline\n\t\t\multirow{2}{*}{$d$}&\n\t\t\multicolumn{2}{c|}{Clustering error}&\multicolumn{2}{c|}{ Scatter ratio}\cr\cline{2-5}\n\t\t&dPCA&PCA&dPCA&PCA\cr\n\t\t\hline\n\t\t\hline\n\t\t1&0.1660&0.4900&2.0368&1.0247\cr\hline\n\t\t2&0.1650&0.4905&1.8233&1.0209\cr\hline\n\t\t3&0.1660&0.4895&1.6719&1.1327\cr\hline\n\t\t4&0.1685&0.4885&1.4557&1.1190\cr\hline\n\t\t5&0.1660&0.4890&1.4182&1.1085\cr\hline\n\t\t10&0.1680&0.4885&1.2696&1.0865\cr\hline\n\t\t50&0.1700&0.4880&1.0730&1.0568\cr\hline\n\t 100&0.1655&0.4905&1.0411&1.0508\cr\n\t\t\hline\n\t\end{tabular}}\n\end{table}\n\nReal protein expression data \cite{mice} were also used to evaluate the ability of dPCA to discover subgroups under real-world conditions. Target data $\{\accentset{\circ}{\mathbf{x}}_i\in\mathbb{R}^{77}\}_{i=1}^{267}$ contained $267$ data vectors, each collecting $77$ protein expression measurements of a mouse with Down Syndrome \cite{mice}. In particular, the first $135$ data points $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=1}^{135}$ recorded protein expression measurements of $135$ mice with drug-memantine treatment, while the remaining $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=136}^{267}$ collected measurements of $134$ mice without such treatment. Background data $\{\accentset{\circ}{\mathbf{y}}_j\in\mathbb{R}^{77}\}_{j=1}^{135}$, on the other hand, comprised such measurements from $135$ healthy mice, which likely exhibited natural variations (due to, e.g., age and sex) similar to the target mice, but without the differences that result from Down Syndrome.\n\nWhen performing cPCA on $\{\accentset{\circ}{\mathbf{x}}_i\}$ and $\{\accentset{\circ}{\mathbf{y}}_j\}$, four $\alpha$'s were selected from $15$ logarithmically-spaced values between $10^{-3}$ and $10^{3}$ via the spectral clustering method presented in \cite{2017cpca}.\n\nExperimental results are reported in Fig. \ref{fig:mice}, with red circles and black diamonds representing sick mice with and without treatment, respectively. Evidently, when PCA is applied, the low-dimensional representations of the protein measurements from mice with and without treatment are distributed similarly. In contrast, when dPCA is employed, the low-dimensional representations cluster the two groups of mice successfully. At the price of runtime (about $15$ times that of dPCA), cPCA with well-tuned parameters ($\alpha=3.5938$ and $27.8256$) can also separate the two groups.\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.735]{mice.pdf} \n\t\vspace{-4pt}\n\t\caption{\small{Discovering subgroups in mice protein expression data.}}\n\t\label{fig:mice}\n\t\vspace{-4pt}\n\end{figure}\n\n\subsection{KdPCA tests}\label{sec:simu2}\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.61]{kdpca_target.pdf} \n\t\caption{\small{Target data dimension distributions with $x_{i,j}$ representing the $j$-th entry of $\mathbf{x}_i$ for $j=1,\, \ldots,\,4$ and $i=1,\, \ldots,\,300$.}}\n\t\label{fig:kdpca_target}\n\end{figure}\n\nIn this subsection, our KdPCA is evaluated using synthetic and real data. By adopting the procedure described in \cite[p. 546]{hastie2009elements}, we generated target data $\{{\mathbf{x}}_i:=[x_{i,1}\, x_{i,2}\,x_{i,3}\,x_{i,4}]^\top\}_{i=1}^{300}$ and background data $\{{\mathbf{y}}_j\in\mathbb{R}^4\}_{j=1}^{150}$. In detail, $\{[x_{i,1}\,x_{i,2}]^{\top}\}_{i=1}^{300}$ were sampled uniformly from two circular concentric clusters with corresponding radii $1$ and $6$, shown in the left panel of Fig. \ref{fig:kdpca_target}, while $\{[x_{i,3}\,x_{i,4}]^{\top}\}_{i=1}^{300}$ were uniformly drawn from a circle with radius $10$; see Fig. \ref{fig:kdpca_target} (right panel) for illustration. The first two and the last two dimensions of $\{{\mathbf{y}}_j\}_{j=1}^{150}$ were uniformly sampled from two concentric circles with corresponding radii of $4$ and $10$. All data points in $\{{\mathbf{x}}_i\}$ and $\{{\mathbf{y}}_j\}$ were corrupted with additive noise sampled independently from $\mathcal{N}(\mathbf{0},\,0.1\mathbf{I})$. To unveil the specific cluster structure of the target data relative to the background data, Alg. \ref{alg:kdpca} was run with $\epsilon=10^{-3}$ and the degree-$2$ polynomial kernel $\kappa(\mathbf{z}_i,\mathbf{z}_j)=(\mathbf{z}_i^\top\mathbf{z}_j )^2$. Competing alternatives, including PCA, KPCA, cPCA, kernel (K) cPCA \cite{2017cpca}, and dPCA, were also implemented. Further, KPCA and KcPCA shared the kernel function with KdPCA. Three different values of $\alpha$ were automatically chosen for cPCA \cite{2017cpca}, and the parameter $\alpha$ of KcPCA was set to $1$, $10$, and $100$.\n\nFigure \ref{fig:kdpca_syn} depicts the first two dPCs, cPCs, and PCs of the aforementioned dimensionality reduction algorithms. Clearly, only KdPCA successfully reveals the two unique clusters of $\{{\mathbf{x}}_i\}$ relative to $\{{\mathbf{y}}_j\}$.
\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.78]{kdpca_syn.pdf} \n\t\t\t\vspace{5pt}\n\t\caption{\small{Discovering subgroups in nonlinear synthetic data.}}\n\t\label{fig:kdpca_syn}\n\end{figure}\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.74]{kdpca_real.pdf} \n\t\caption{\small{Discovering subgroups in MHealth data.}}\n\t\label{fig:kdpca_real}\n\end{figure}\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.71]{kdpca_real_test2.pdf} \n\t\caption{\small{Distinguishing between waist bends forward and cycling.}}\n\t\label{fig:kdpca_real_test2} \n\end{figure}\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.7]{kdpca_real_add.pdf} \n\t\caption{\small{Face recognition by performing KdPCA.}}\n\t\label{fig:kdpca_real_add} \n\end{figure}\n\nKdPCA was further tested in realistic settings using the real Mobile (M) Health data \cite{mhealth}. This dataset consists of sensor (e.g., gyroscope, accelerometer, and EKG) measurements from volunteers conducting a series of physical activities. In the first experiment, $200$ target data $\{{\mathbf{x}}_i\in\mathbb{R}^{23}\}_{i=1}^{200}$ were used, each of which recorded $23$ sensor measurements from one volunteer performing two different physical activities, namely lying down and frontal elevation of arms ($100$ data points correspond to each activity). Sensor measurements from the same volunteer standing still were utilized for the $100$ background data points $\{{\mathbf{y}}_j\in\mathbb{R}^{23}\}_{j=1}^{100}$. For the KdPCA, KPCA, and KcPCA algorithms, the Gaussian kernel with bandwidth $5$ was used. Three different values for the parameter $\alpha$ in cPCA were automatically selected from a list of $40$ logarithmically-spaced values between $10^{-3}$ and $10^{3}$, whereas $\alpha$ in KcPCA was set to $1$ \cite{2017cpca}.\n\nThe first two dPCs, cPCs, and PCs of KdPCA, dPCA, KcPCA, cPCA, KPCA, and PCA are reported in Fig. \ref{fig:kdpca_real}. It is self-evident that the two activities form two separate clusters in the plots of KdPCA and KcPCA. On the contrary, due to the nonlinear data correlations, the other alternatives fail to distinguish the two activities.\n\nIn the second experiment, the target data were formed with sensor measurements of one volunteer executing waist bends forward and cycling. The background data were collected from the same volunteer standing still. The Gaussian kernel with bandwidth $40$ was used for KdPCA and KPCA, while the second-order polynomial kernel $\kappa(\mathbf{z}_i,\,\mathbf{z}_j)=(\mathbf{z}_i^\top\mathbf{z}_j+3)^2$ was employed for KcPCA. The first two dPCs, cPCs, and PCs of the simulated schemes are depicted in Fig. \ref{fig:kdpca_real_test2}. Evidently, KdPCA outperforms its competing alternatives in discovering the two physical activities of the target data.\n\n\textcolor{black}{To test the scalability of our developed schemes, the Extended Yale-B (EYB) face image dataset \cite{yaleb} was adopted to assess the clustering performance of KdPCA, KcPCA, and KPCA. The EYB database contains frontal face images of $38$ individuals, each having around $65$ color images of $192\times 168$ ($32,256$) pixels. The color images of three individuals ($60$ images per individual) were converted into grayscale and vectorized to obtain $180$ vectors of size $32,256\times 1$.
The $120$ vectors from two individuals (clusters) comprised the target data, and the remaining $60$ vectors formed the background data. A Gaussian kernel with bandwidth $150$ was used for KdPCA, KcPCA, and KPCA. Figure \ref{fig:kdpca_real_add} reports the first two dPCs, cPCs, and PCs of KdPCA, KcPCA (with 4 different values of $\alpha$), and KPCA, with black circles and red stars representing the two different individuals from the target data. K-means was then carried out using the resulting $2$-dimensional representations of the target data. The clustering errors of KdPCA, KcPCA with $\alpha=1$, KcPCA with $\alpha=10$, KcPCA with $\alpha=50$, KcPCA with $\alpha=100$, and KPCA are $0.1417$, $0.7$, $0.525$, $0.275$, $0.2833$, and $0.4167$, respectively. Evidently, the face images of the two individuals can be better recognized with KdPCA than with the other methods.}\n\n\subsection{MdPCA tests}\n\nThe ability of the MdPCA Alg. \ref{alg:mdpca} to perform discriminative dimensionality reduction is examined here with two background datasets. For simplicity, the involved weights were set to $\omega_1=\omega_2=0.5$.\n\nIn the first experiment, two clusters of $15$-dimensional data points were generated for the target data $\{\accentset{\circ}{\mathbf{x}}_i\in\mathbb{R}^{15}\}_{i=1}^{300}$ ($150$ for each). Specifically, the first $5$ dimensions of $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=1}^{150}$ and $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=151}^{300}$ were sampled from $\mathcal{N}(\mathbf{0},\,\mathbf{I})$ and $\mathcal{N}(8\mathbf{1},\,2\mathbf{I})$, respectively. The second and last $5$ dimensions of $\{\accentset{\circ}{\mathbf{x}}_i\}_{i=1}^{300}$ were drawn from the normal distributions $\mathcal{N}(\mathbf{1},\,10\mathbf{I})$ and $\mathcal{N}(\mathbf{1},\,20\mathbf{I})$, respectively. The top right plot of Fig. \ref{fig:mdpca_syn} shows that performing PCA cannot resolve the two clusters. The first, second, and last $5$ dimensions of the first background dataset $\{\accentset{\circ}{\mathbf{y}}_{j}^1\in\mathbb{R}^{15}\}_{j=1}^{150}$ were sampled from $\mathcal{N}(\mathbf{1},\,2\mathbf{I})$, $\mathcal{N}(\mathbf{1},\,10\mathbf{I})$, and $ \mathcal{N}(\mathbf{1},\,2\mathbf{I})$, respectively, while those of the second background dataset $\{\accentset{\circ}{\mathbf{y}}_{j}^2\in\mathbb{R}^{15}\}_{j=1}^{150}$ were drawn from $\mathcal{N}(\mathbf{1},\,2\mathbf{I})$, $\mathcal{N}(\mathbf{1},\,2\mathbf{I})$, and $\mathcal{N}(\mathbf{1},\,20\mathbf{I})$. The two plots at the bottom of Fig. \ref{fig:mdpca_syn} depict the first two dPCs of dPCA implemented with a single background dataset. Evidently, MdPCA can discover the two clusters in the target data by leveraging the two background datasets.
\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.66]{mdpca_syn.pdf} \n\t\vspace{-5pt}\n\t\caption{\small{Clustering structure by MdPCA using synthetic data.}}\n\t\label{fig:mdpca_syn}\n\t\vspace{-5pt}\n\end{figure}\n\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.69]{mdpca_semisyn.pdf} \n\t\vspace{-8pt}\n\t\caption{\small{Clustering structure by MdPCA using semi-synthetic data.}}\n\t\label{fig:mdpca_semisyn}\n\t\vspace{-10pt}\n\end{figure}\n\nIn the second experiment, the target data $\{\accentset{\circ}{\mathbf{x}}_{i}\in\mathbb{R}^{784}\}_{i=1}^{400}$ were obtained using $400$ handwritten digits $6$ and $9$ ($200$ for each) of size $28\times 28$ from the MNIST dataset, superimposed with $400$ resized `girl' images from the CIFAR-100 dataset \cite{cifar10}. The first $392$ dimensions of the first background dataset $\{\accentset{\circ}{\mathbf{y}}_{j}^1\in\mathbb{R}^{784}\}_{j=1}^{200}$ and the last $392$ dimensions of the other background dataset $\{\accentset{\circ}{\mathbf{y}}_{j}^2\in\mathbb{R}^{784}\}_{j=1}^{200}$ correspond to the first and last $392$ features of $200$ cropped girl images, respectively. The remaining dimensions of both background datasets were set to zero. Figure \ref{fig:mdpca_semisyn} presents the obtained (d)PCs of MdPCA, dPCA, and PCA, with red stars and black diamonds depicting digits $6$ and $9$, respectively. PCA and dPCA based on a single background dataset (the bottom two plots in Fig. \ref{fig:mdpca_semisyn}) reveal that the two clusters of data follow a similar distribution in the space spanned by the first two PCs. The separation between the two clusters becomes clear when MdPCA is employed.\n\n\subsection{KMdPCA tests}\n\begin{figure}[t]\n\t\centering \n\t\includegraphics[scale=0.7]{kmdpca_syn.pdf} \n\t\vspace{-10pt}\n\t\caption{\small{The first two dPCs obtained by Alg. \ref{alg:mkdpca}.}}\n\t\label{fig:kmdpca}\n\t\vspace{-13pt}\n\end{figure}\n\nAlgorithm \ref{alg:mkdpca} \textcolor{black}{with $\epsilon=10^{-4}$} is examined for dimensionality reduction using simulated data, and compared against MdPCA, KdPCA, dPCA, and PCA. The first two dimensions of the target data $\{\mathbf{x}_i\in \mathbb{R}^6\}_{i=1}^{150}$ and $\{\mathbf{x}_i\}_{i=151}^{300}$ were generated from two circular concentric clusters with respective radii of $1$ and $6$. The remaining four dimensions of the target data $\{\mathbf{x}_i\}_{i=1}^{300}$ were sampled from two concentric circles with radii of $20$ and $12$, respectively. Data $\{\mathbf{x}_i\}_{i=1}^{150}$ and $\{\mathbf{x}_i\}_{i=151}^{300}$ corresponded to two different clusters. The first, second, and last two dimensions of one background dataset $\{\mathbf{y}_{j}^1\in\mathbb{R}^6\}_{j=1}^{150}$ were sampled from three concentric circles with corresponding radii of $3$, $3$, and $12$. Similarly, three concentric circles with radii $3$, $20$, and $3$ were used for generating the other background dataset $\{\mathbf{y}_{j}^2\in\mathbb{R}^6\}_{j=1}^{150}$. Each datum in $\{\mathbf{x}_i\}$, $\{\mathbf{y}_{j}^1\}$, and $\{\mathbf{y}_{j}^2\}$ was corrupted by additive noise $ \mathcal{N}(\mathbf{0}, \,0.1\mathbf{I})$. When running KMdPCA, the degree-$2$ polynomial kernel used in Sec. 
\ref{sec:simu2} was adopted, and the weights were set as $\omega_1=\omega_2=0.5$.\n\nFigure \ref{fig:kmdpca} depicts the first two dPCs of KMdPCA, MdPCA, KdPCA, and dPCA, as well as the first two PCs of (K)PCA. It is evident that only KMdPCA is able to discover the two clusters in the target data.\n\n\section{Concluding Summary}\label{sec:concl}\nIn diverse practical setups, one is interested in extracting, visualizing, and leveraging the unique low-dimensional features of one dataset relative to a few others. This paper put forward a novel framework, termed discriminative (d)PCA, for performing discriminative analytics of multiple datasets. Linear, kernel, and multi-background models were all pursued. In contrast with existing alternatives, dPCA is demonstrated to be optimal under certain assumptions. Furthermore, dPCA is \textcolor{black}{parameter free}, and requires only one generalized eigenvalue decomposition. Extensive tests using both synthetic and real data corroborated the efficacy of our proposed approaches relative to relevant prior works.\n\nSeveral directions open up for future research: i) distributed and privacy-aware (MK)dPCA implementations to cope with large amounts of high-dimensional data; ii) robustifying (MK)dPCA to outliers; and iii) graph-aware (MK)dPCA generalizations exploiting additional priors of the data.\n\n\section*{Acknowledgements}\nThe authors would like to thank Professor Mati Wax for pointing out an error in an early draft of this paper.\n\n\bibliographystyle{IEEEtranS}\n\n\section{Introduction}\n\nMost of our experience (outside perturbation theory) with quantum mechanics concerns nonrelativistic quantum systems. This may be due to the fact that, as yet, no results on specific models of relativistic quantum field theory (rqft) in (physical) four space-time dimensions exist \cite{GlJa}. In spite of that, it is still widely believed, and there are good reasons for that \cite{Haag}, that rqft is the most fundamental physical theory. One of its most basic principles is \textbf{microcausality} (\cite{Haag}, \cite{StrWight}), which is the local (i.e., in terms of local quantum fields) formulation of Einstein causality, the limitation of the velocity of propagation of signals by $c$, the velocity of light in the vacuum:\n$$\n[\Phi(f),\Phi(g)] = 0\n\eqno{(1)}\n$$\nwhen the supports of $f$ and $g$ are space-like to one another, where the fields $\Phi$ are regarded as (space-time) operator-valued distributions; see section 3.\n\nUnfortunately, in spite of its enormous success, nonrelativistic quantum mechanics (nrqm) is well-known to violate Einstein causality, which is not surprising, since nrqm is supposed to derive from the non-relativistic limit $c \to \infty$ of rqft, mathematically speaking a rather singular limit, called a group contraction, from the Poincar\'{e} to the Galilei group, first analysed by In\"{o}n\"{u} and Wigner \cite{InWi}. For some systems, such as quantum spin systems, finite group velocity follows from the Lieb-Robinson bounds \cite{LiR}, \cite{NS}: these systems are, however, approximations to nonrelativistic many-body systems.
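\n\nFor orientation, in one standard formulation (quoted here for the reader's convenience; constants and notation vary across references), these bounds state that, for observables $A_{X}$, $B_{Y}$ supported in finite regions $X$, $Y$ of the lattice,\n$$\n\Vert [A_{X}(t),B_{Y}] \Vert \leq C \Vert A_{X}\Vert \Vert B_{Y}\Vert \exp(-\mu({\rm dist}(X,Y)-v_{LR}|t|))\n$$\nwhere $A_{X}(t)$ denotes the Heisenberg evolution of $A_{X}$ and $v_{LR}$ is the Lieb-Robinson velocity: outside the effective ''light cone'' ${\rm dist}(X,Y)\leq v_{LR}|t|$, commutators are exponentially small, though, in contrast with (1), not exactly zero.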
Due to the crucial importance of Einstein causality for the foundations of physics (see \cite{HMN}), it is important to understand in which precise sense nrqm is an \textbf{approximation} of a causal theory, viz., rqft.\n\nIn classical physics, acausal behavior is well-known, e.g., in connection with the diffusion equation or heat conduction problems. In both cases, the equations may be viewed as approximations of the telegraphy equation (see \cite{Bar}, pg. 185, or \cite{MorFes}, section 7.4), and the approximations are under mathematical control. What happens in quantum theory?\n\nFor imaginary times, the heat diffusion equation becomes the Schr\"{o}dinger equation for a free particle of mass $m$ in infinite space ($\hbar = 1$):\n$$\ni\frac{\partial}{\partial t} \Psi_{t} = - \frac{1}{2m} \triangle \Psi_{t} \mbox{ with } \Psi \in {\cal H}=L^{2}(\mathbf{R}^{3})\n\eqno{(2.1)}\n$$\nThe Laplacean $\triangle$ is a multiplication operator in momentum space, and the solution of (2.1) is (with $\tilde{f}$ denoting the Fourier transform of $f$)\n$$\n\tilde{\Psi_{t}}(\vec{p}) = \exp(-it\frac{\vec{p}^{2}}{2m}) \tilde{\Psi_{0}}(\vec{p})\n\eqno{(2.2)}\n$$\nAssuming that $\Psi_{0}$ is a $C_{0}^{\infty}(\mathbf{R}^{3})$ function, i.e., smooth with compact support, it follows from (2.2) and the ''only if'' part of the Paley-Wiener theorem (see, e.g., \cite{RSII}, Theorem IX-11) that, for any $t \ne 0$, $\Psi_{t}$ cannot be of compact support and is thus infinitely extended: one speaks of ''instantaneous spreading'' \cite{GCH}. Of course, spreading is a general phenomenon in quantum physics, and this feature is demonstrated in a varied number of situations and in several possible ways, including an exact formula for the free propagator (see, e.g., \cite{MWB}, chapter 2).
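\n\nIn slightly more detail, the argument may be sketched as follows (our paraphrase of the standard reasoning). If ${\rm supp}\, \Psi_{0} \subset \{|\vec{x}|\leq R\}$, the Paley-Wiener theorem asserts that $\tilde{\Psi_{0}}$ extends to an entire function on $\mathbf{C}^{3}$ satisfying, for every $N$,\n$$\n|\tilde{\Psi_{0}}(\vec{p}+i\vec{q})| \leq C_{N}(1+|\vec{p}+i\vec{q}|)^{-N}\exp(R|\vec{q}|)\n$$\nIf $\Psi_{t}$, $t\neq0$, were also of compact support, $\tilde{\Psi_{t}}$ would obey a bound of the same form, i.e., be of order one as an entire function. By (2.2) and analytic continuation, however, $\tilde{\Psi_{t}}(\vec{z})=\exp(-it\vec{z}^{\,2}\/2m)\tilde{\Psi_{0}}(\vec{z})$ for all $\vec{z}=\vec{p}+i\vec{q}\in\mathbf{C}^{3}$, and the Gaussian factor has modulus $\exp(t\,\vec{p}\cdot\vec{q}\/m)$, which is of order two; since an entire function of order one such as $\tilde{\Psi_{0}}$ cannot (by the minimum modulus theorem) decay fast enough along complex directions to compensate this growth, a contradiction results unless $\Psi_{0}=0$.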
The fact that the violation of Einstein causality is ''maximal'' was sharpened and made precise by Requardt \cite{Req}, who showed that, for a class of one- and n-body non-relativistic systems, states localized at time zero in an arbitrarily small open set of $\mathbf{R}^{n}$ are total after an arbitrarily small time.\n\nA very nice recent review of related questions was given by Yngvason in \cite{Yng}. His theorem 1 transcribes a result proved by Perez and Wilde \cite{PeWil} (see also \cite{Yng} for additional related references), which shows that localization in terms of position operators is incompatible with causality in relativistic quantum physics.\n\nA different approach, which generalizes the argument after (2.2) by making a different use of analyticity, and also introduces the dichotomy mentioned in the abstract, was proposed by Hegerfeldt \cite{GCH1}:\n\n\textbf{Theorem 1} Let $H$ be a self-adjoint operator, bounded below, on a Hilbert space ${\cal H}$. For given $\Psi_{0} \in {\cal H}$, let $\Psi_{t}$, $t \in \mathbf{R}$, be defined as\n$$\n\Psi_{t} = \exp(-iHt) \Psi_{0}\n\eqno{(3)}\n$$\nLet $A$ be a positive operator on ${\cal H}$, $A \ge 0$, and let $p_{A}$ be defined by\n$$\np_{A}(t) \equiv (\Psi_{t}, A \Psi_{t})\n\eqno{(4)}\n$$\nThen, either\n$$\np_{A}(t) \ne 0\n\eqno{(5)}\n$$\nfor almost all $t$, and the set of such $t$ is dense and open, or\n$$\np_{A}(t) \equiv 0 \mbox{ for all } t\n\eqno{(6)}\n$$\n\nIf, now, the probability to find a particle inside a bounded region $V$ is given by the expectation value of an operator $N(V)$ such that\n$$\n0 \le N(V) \le \mathbf{1}\n\eqno{(7)}\n$$\n(e.g., $$N(V) = |\chi_{V})(\chi_{V}| \eqno{(8)}$$ where $\chi_{V}$ is the characteristic function of $V$, and $\mathbf{1}$ is the identity operator), it follows from theorem 1, with the choice\n$$\nA = \mathbf{1} - N(V)\n\eqno{(9)}\n$$\nthat, if at $t=0$ a particle is strictly localized in a bounded region $V_{0}$, then, unless it remains in $V_{0}$ for all times, it cannot be strictly localized in a bounded region $V$, however large, for any finite time interval thereafter, implying a violation of Einstein causality (see also \cite{GCH1}, pg. 24, for further comments).\n\nOur main purpose in this review is to analyse the dichotomy (5)-(6) for nonrelativistic quantum systems (in case (6) we include some relativistic systems in the final discussion in the conclusion). We start with the option given by equation (6).\n\n\section{Systems confined to a bounded region of space in quantum theory}\n\nOption (6) is found - with $A$ defined by (7)-(9) - in all systems restricted to lie in a finite region $V$ by \textbf{boundaries}, with a Hamiltonian $H_{V}$ in theorem 1 self-adjoint and bounded below. This includes the electromagnetic field (Casimir effect, see the conclusion), but we now concentrate on nonrelativistic quantum systems. The simplest prototype of such a system is the free Hamiltonian $H_{V} = -\frac{d^{2}}{dx^{2}}$, with $V=[0,L]$. The forthcoming theorem summarizes (and slightly extends) the rather detailed analysis in \cite{GarKar}, using the results in \cite{Robinson} (see the appendix of \cite{GarKar} and references given there for the standard concepts used below).\n\n\textbf{Theorem 2.1} In the following three cases, $H_{V}$ is self-adjoint and semi-bounded:\n\na1) $H_{V}^{\sigma}$ on the domain\n\begin{eqnarray*}\nD(H_{V}^{\sigma}) = \{\mbox{ set of absolutely continuous (a.c.) functions } \Psi \mbox{ over } [0,L]\\\n\mbox{ with a.c. first derivative } \Psi^{'} \mbox{ such that } \Psi^{''} \in L^{2}(0,L)\}\n\end{eqnarray*}\nand satisfying the boundary condition (b.c.)\n$$\n\Psi^{'}(0)= \sigma_{0} \Psi(0) \mbox{ and } \Psi^{'}(L)= -\sigma_{L} \Psi(L)\n\eqno{(10)}\n$$\nwhere $(\sigma_{0},\sigma_{L}) \in (\mathbf{R} \times \mathbf{R})$;\n\na2) $H_{V}^{\infty}$ on $D(H_{V}^{\infty})$, same as inside the brackets in a1), but with (10) replaced by\n$$\n\Psi(0) = \Psi(L) = 0\n\eqno{(11)}\n$$\n\na3) $H_{V}^{\theta}$, on $D(H_{V}^{\theta})$, same as inside the brackets in a1), but with (10) replaced by\n$$\n\Psi(0)= \exp(i\theta) \Psi(L)\n\eqno{(12)}\n$$\nwith $\theta \in \mathbf{R}$. The case $\sigma_{0}=0$ in a1) corresponds to Neumann b.c., $\sigma_{0}>0$ to repulsive, and $\sigma_{0}<0$ to attractive boundaries (see \cite{Robinson}, pg. 17), with analogous statements for $\sigma_{L}$.
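\n\nBy way of illustration, the spectra in the last two cases are explicit (a standard computation, stated here in the above notation): in case a2), $H_{V}^{\infty}$ has the complete orthonormal set of eigenfunctions\n$$\n\Psi_{n}(x) = \sqrt{2\/L}\, \sin(n\pi x\/L) \mbox{ with } H_{V}^{\infty}\Psi_{n} = (n\pi\/L)^{2}\Psi_{n}, \quad n=1,2,\ldots\n$$\nwhile in case a3) the momentum (see theorem 2.2 below) has eigenfunctions $L^{-1\/2}\exp(ik_{n}x)$ with $k_{n}=(2\pi n-\theta)\/L$, $n \in \mathbf{Z}$, and $H_{V}^{\theta}$ has the eigenvalues $k_{n}^{2}$. In all cases the spectrum is manifestly bounded below.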
The case a2) corresponds to setting $\\sigma_{0} = -\\sigma_{L} = \\infty$ in (10), and is the case of impenetrable boundaries (Dirichlet b.c.). a3) is a generalization of periodic b.c. We also have:\n\n\\textbf{Theorem 2.2} \n\na) In case a1), the momentum $p=-i\\frac{d}{dx}$ is not a symmetric operator;\n\nb) In case a2), $p$ defines a closed symmetric operator $p_{\\infty}$, and in case a3) it is a self-adjoint operator $p_{\\theta}$, which is a self-adjoint extension of $p_{\\infty}$; but for no $\\theta \\in \\mathbf{R}$ does the domain $D(p_{\\theta})$ of $p_{\\theta}$ contain functions satisfying the Dirichlet b.c. (11). Furthermore, in case a3),\n$$\nH_{V}^{\\theta} = p_{\\theta}^{2} = p_{\\theta}^{*} p_{\\theta}\n\\eqno{(13)}\n$$\n\nAn explicit proof of b) may be found in \\cite{GarKar}, and a) is straightforward. We see that in cases a1) and a2) the momentum is not well-defined (as a self-adjoint operator), while it is so in case a3), in which case the expected property (13) holds.\n\nWhat do we conclude from theorems 2.1 and 2.2 (and their natural extensions to partial differential operators in higher dimensions, see \\cite{Robinson}, pg. 34)? As remarked by Robinson (\\cite{Robinson}, page 22), defining the probability current density $j(x)$ associated with the particle,\n$$\nj(x) = i(\\frac{d\\bar{\\Psi}}{dx} \\Psi(x)-\\bar{\\Psi}(x)\\frac{d\\Psi(x)}{dx})\n\\eqno{(14)}\n$$\nwe see that for a1), a2), $j(0)=0=j(L)$, while, in case a3), only $j(0)=j(L)$ holds. Thus, only in cases a1), a2) is the particle flux both into and out of the system zero, corresponding to an \\textbf{isolated} system, while a3) only means that all that flows in at $x=0$ flows out at $x=L$. This is the case with \\textbf{periodic} b.c. (a restriction of a3)), which require for each $\\Psi$ that $\\Psi(x+L)=\\Psi(x)$ for all $x \\in [0,L]$. We thus call a3) generalised periodic b.c.: they allow a finite system to have a momentum operator \\cite{MaRo} because, at the same time, they render the system ``infinite'' in a peculiar way, making it into a torus. \n \nWe see, therefore, that the attempt to confine a quantum system in a bounded region of space by imposing on it b.c. originating from classical physics (a1), a2)) leads to physical inconsistencies, since the momentum is expected to exist as a local generator of space-translations (theorem 2.2). Generalized periodic b.c. (a3)) sometimes save the situation, for instance regarding thermodynamic quantities in statistical mechanics, which are expected (and often proven) not to depend on the boundary conditions \\cite{Ru}. For expectation values and correlation functions this need not be the case. In addition, there are situations in rqft, such as the Casimir effect, for which periodic b.c. are definitely not adequate, as we shall comment in the conclusion.\n\n\\section{The problem of instantaneous spreading}\n\nWe now come to the option given by equation (5). Using theorem 1, Hegerfeldt (see \\cite{GCH} and references given there) proposed to analyse a two-atom model suggested by Fermi \\cite{Fermi} to check finite propagation speed in quantum electrodynamics, with $H$ the Hamiltonian, $A=A_{e_{B}}$, the probability that atom $B$, initially in the ground state, is excited by a photon resulting from the decay of atom $A$, initially in an excited state, and $\\Psi_{0}$ denoting the initial physical state of the system $A-B$. The conclusion is that $B$ is either immediately excited with nonzero probability or never.
This conclusion was challenged by Buchholz and Yngvason \\cite{BYng} in a beautiful and subtle analysis, in which they concluded that there are no causality problems for the Fermi system in a full description of the system by rqft. One important point raised in \\cite{BYng} is that (4),(5) with $A$ positive is not an adequate criterion for investigating causality in rqft, as shown by the simple counterexample of the state $(\\Psi_{0}, \\cdot \\Psi_{0})$ equal to the vacuum state, for which $p_{A}(t)$ is \\textbf{always} nonzero for $A$ positive and space-localized, by the Reeh-Schlieder theorem (see, e.g., \\cite{Ar}, pg. 101). It is perhaps worth noting that a non-perturbative rqft description of the two-atom system is not known, but the authors of \\cite{BYng} relied on the general principles of rqft (\\cite{Haag}, \\cite{Ar}).\n\nIt follows from the above that instantaneous spreading would cease to be an obstacle to the physical consistency of nonrelativistic quantum mechanics if it could be shown that the latter is an approximation, in a suitable precise sense, to rqft. In this review we expand on a discussion by C. J\\\"{a}kel and one of us \\cite{JaWre} on this matter. In a not yet precise fashion (but see Lemma 2), one might propose the following approximation criterion:\n\n\\textbf{Proposal C} Nonrelativistic ground state expectation values are ``close'' to the corresponding relativistic vacuum expectation values when certain physical parameters are ``small''.\n\nFor an atom, e.g. hydrogen, in interaction with the electromagnetic field in its ground state, one such parameter is the ratio between the mean velocity of the electron in the ground state and $c$, which is of the order of the fine structure constant. It is clear that the Dirac atom, with a potential, is not a fully relativistic system, and therefore not a candidate to solve the Einstein causality problems in the manner proposed in \\cite{BYng}: thus, the well-known relativistic corrections \\cite{JJS} do not solve the causality issue as sketched above. Perturbative quantum electrodynamics, in spite of its great success, does not offer a solution either: for instance, the relativistic Lamb shift relies strongly on Bethe's nonrelativistic treatment (see, e.g., \\cite{JJS}, pg. 292). \n\nWe now attempt to make proposal C precise, and, at the same time, show some results relating relativistic and nonrelativistic systems which are not found in this form in the textbook literature. As the nonrelativistic framework we take the symmetric Fock space for Bosons ${\\cal F}_{s}({\\cal H})$, which we simply denote by ${\\cal F}$ (see \\cite{MaRo} for a nice textbook presentation), and there the state $\\omega_{\\Psi_{0}}=(\\Psi_{0}, \\cdot \\Psi_{0})$.
The observables will be functionals of the nonrelativistic free quantum fields at time zero:\n$$\n\\Phi(\\vec{x}) = \\phi(\\vec{x}) + \\phi^{*}(\\vec{x})\n\\eqno{(15.1)}\n$$\nwhere $*$ denotes the Hermitian conjugate and\n$$\n\\phi(\\vec{x}) = \\frac{1}{(2\\pi)^{3\/2}(2m_{0})^{1\/2}}\\int d\\vec{k} a(\\vec{k})\\exp(-i\\vec{k}\\cdot \\vec{x})\n\\eqno{(15.2)}\n$$\nand the canonically conjugate momenta\n$$\n\\Pi(\\vec{x}) = \\pi(\\vec{x}) + \\pi^{*}(\\vec{x})\n\\eqno{(16.1)}\n$$\nwhere\n$$\n\\pi(\\vec{x}) = -\\frac{i(2m_{0})^{1\/2}}{(2\\pi)^{3\/2}}\\int d\\vec{k} a(\\vec{k})\\exp(i\\vec{k}\\cdot \\vec{x}) \n\\eqno{(16.2)}\n$$\nAbove, $a, a^{*}$ are annihilation-creation operators satisfying\n$$\n[a(\\vec{k}),a^{*}(\\vec{l})] = \\delta(\\vec{k}-\\vec{l})\n\\eqno{(17)}\n$$\nIt is more adequate, both mathematically and physically (\\cite{MaRo}, \\cite{RSII}), to use the smeared fields\n$$\n\\Phi(f) = \\int d\\vec{x} f(\\vec{x}) \\Phi(\\vec{x})\n\\eqno{(18)}\n$$\nand\n$$\n\\Pi(g) = \\int d\\vec{x} g(\\vec{x}) \\Pi(\\vec{x})\n\\eqno{(19)}\n$$\ni.e., to consider $\\Phi,\\Pi$ as operator-valued distributions, satisfying the canonical commutation relations (CCR)\n$$\n[\\Phi(f),\\Pi(g)] = i(f,g)\n\\eqno{(20)}\n$$\non a suitable dense set (\\cite{RSII}, pg. 232), with\n$$\n(f,g) = \\int d\\vec{x} \\bar{f}(\\vec{x}) g(\\vec{x})\n\\eqno{(21)}\n$$\nfor $f,g \\in {\\cal S}(\\mathbf{R}^{3})$, the Schwartz space \\cite{RSII}. For the free relativistic quantum system, the corresponding state is again the no-particle state $\\omega_{\\Psi_{0}}$, the observables (functionals of) the relativistic free quantum fields\n$$\n\\Phi_{r}(\\vec{x}) = \\phi_{r}(\\vec{x}) + \\phi_{r}^{*}(\\vec{x})\n\\eqno{(22.1)}\n$$\nwhere\n$$ \n\\phi_{r}(\\vec{x}) = \\frac{c}{(2\\pi)^{3\/2}}\\int d\\vec{k}\\frac{1}{(2\\omega_{\\vec{k}}^{c})^{1\/2}} a(\\vec{k})\\exp(-i\\vec{k}\\cdot \\vec{x})\n\\eqno{(22.2)}\n$$\nand the canonically conjugate momentum\n$$\n\\Pi_{r}(\\vec{x}) = \\pi_{r}(\\vec{x}) + \\pi_{r}^{*}(\\vec{x})\n\\eqno{(23.1)}\n$$\nwith\n$$\n\\pi_{r}(\\vec{x}) = -\\frac{i}{(2\\pi)^{3\/2}c}\\int d\\vec{k}(2\\omega_{\\vec{k}}^{c})^{1\/2} a(\\vec{k})\\exp(i\\vec{k}\\cdot \\vec{x})\n\\eqno{(23.2)}\n$$\nIt is convenient to consider the CCR in the Weyl form\n$$\n\\exp(i\\Pi(f))\\exp(i\\Phi(g))=\\exp(i\\Phi(g))\\exp(i\\Pi(f))\\exp(-i(f,g))\n\\eqno{(24)}\n$$\nfor $f,g \\in {\\cal S}_{\\mathbf{R}}(\\mathbf{R}^{3})$, the Schwartz space of real-valued functions on $\\mathbf{R}^{3}$. Above,\n$$\n\\omega_{\\vec{k}}^{c} \\equiv (c^{2}\\vec{k}^{2}+m_{0}^{2}c^{4})^{1\/2}\n\\eqno{(25)}\n$$\nwith $m_{0}$ the ``bare mass'' of the particles. We write\n$$\na(f) = (2m_{0})^{1\/2}[\\Phi(f)+i \\Pi(f)]\n\\eqno{(26)}\n$$\nand similarly for $a^{*}(f),a_{r}(f),a_{r}^{*}(f)$. The zero-particle vector $\\Psi_{0} \\in {\\cal F}$ is such that\n$$\na(f)\\Psi_{0} = 0 \\mbox{ for all } f \\in {\\cal S}(\\mathbf{R}^{3})\n\\eqno{(27)}\n$$\nand similarly for $a_{r}(f)$.
We assume that there exists a continuous unitary representation $U(\\vec{a},R)$ of the Euclidean group $\\vec{x} \\to R\\vec{x}+\\vec{a}$ on ${\\cal F}$, with $R$ a rotation and $\\vec{a}$ a translation, s.t.\n$$\nU(\\vec{a},R) a(f) U(\\vec{a},R)^{-1} = a(f_{\\vec{a},R})\n\\eqno{(28)}\n$$\nwith\n$$\nf_{\\vec{a},R}(\\vec{x})= f(R^{-1}(\\vec{x}-\\vec{a})) \n\\eqno{(29)}\n$$\nThe following lemma is fundamental:\n\n\\textbf{Lemma 1} The no-particle state $\\Psi_{0}$ is the unique state invariant under $U(\\vec{a},R)$.\n\n\\textbf{Proof} By (27),(28),\n$$\n0 = U(\\vec{a},R) a(f) \\Psi_{0} = a(f_{\\vec{a},R})U(\\vec{a},R) \\Psi_{0}\n\\eqno{(30)}\n$$\nSince, for all $f \\in {\\cal S}(\\mathbf{R}^{3})$, $U(\\vec{a},R) \\Psi_{0}$ is also a no-particle state by (30), it follows that\n$$\nU(\\vec{a},R) \\Psi_{0} = \\lambda(\\vec{a},R) \\Psi_{0}\n\\eqno{(31)}\n$$\nwith $|\\lambda|=1$, and the $\\lambda$ form a one-dimensional representation of the Euclidean group. Since the Euclidean group possesses only the trivial one-dimensional representation, we conclude that\n$$\nU(\\vec{a},R) \\Psi_{0} = \\Psi_{0}\n$$\ni.e., $\\Psi_{0}$ is necessarily a Euclidean invariant state (as Wightman observes \\cite{Wight}, this is \\textbf{not} assumed when one writes (28)!). In the case of the free (relativistic or nonrelativistic) field, the cluster property of the two-point function (a corollary of the Riemann-Lebesgue lemma, see, e.g., \\cite{MWB}, Lemma 3.8) implies, together with von Neumann's ergodic theorem (see, again, e.g., \\cite{MWB}, Theorem A.2), that $\\Psi_{0}$ is the unique state invariant under all space translations and, thus, the unique state invariant under all $U(\\vec{a},R)$. q.e.d.\n\nLemma 1 is the main ingredient of\n\n\\textbf{Theorem 3} The representations of the Weyl CCR (24) $(\\Phi,\\Pi)$ and $(\\Phi_{r},\\Pi_{r})$ are unitarily inequivalent.\n\n\\textbf{Proof} The proof of this theorem follows from (\\cite{RSII}, Theorem X.46), the inequivalence of the Weyl CCR for different masses $m_{1}$ and $m_{2}$, by identifying $\\Phi_{m_{1}}$ with $\\Phi$ and $\\Phi_{m_{2}}$ with $\\Phi_{r}$ (and similarly for the $\\Pi$). Let $G(R,\\vec{a})$ (resp. $G_{r}(R,\\vec{a})$) be the representatives of the Euclidean group leaving $(\\Phi,\\Pi)$ (resp. $(\\Phi_{r},\\Pi_{r})$) invariant. We assume that there exists a unitary map $T$ on ${\\cal F}$ which satisfies $$T \\exp(i\\Phi(f))T^{-1} = \\exp(i\\Phi_{r}(f)) \\eqno{(32)}$$ and $$T \\exp(i\\Pi(f))T^{-1} = \\exp(i\\Pi_{r}(f)) \\eqno{(33)}$$ Exactly as in \\cite{RSII}, Theorem X.46, pg. 234, this leads to \n$$\nTG(R,\\vec{a})T^{-1} = G_{r}(R,\\vec{a})\n\\eqno{(34)}\n$$\nfor all $(R,\\vec{a})$ in the Euclidean group. Applying (34) to $\\Psi_{0}$, we find\n$$\nT \\Psi_{0} = G_{r}(R,\\vec{a}) T \\Psi_{0}\n\\eqno{(35)}\n$$\nand, since, by lemma 1, $\\Psi_{0}$ is the unique vector in ${\\cal F}$ invariant under both $G(R,\\vec{a})$ and $G_{r}(R,\\vec{a})$, (35) yields\n$$\nT \\Psi_{0} = \\alpha \\Psi_{0}\n\\eqno{(36)}\n$$\nwhere $\\alpha$ is a phase. From (32), (33), and (36),\n\\begin{eqnarray*}\n(\\Psi_{0}, \\exp(i\\Phi(f)) \\exp(i\\Phi(g)) \\Psi_{0}) = (\\Psi_{0}, T \\exp(i\\Phi(f))T^{-1} T \\exp(i\\Phi(g))T^{-1}\\Psi_{0})\\\\\n= (\\Psi_{0},\\exp(i\\Phi_{r}(f))\\exp(i\\Phi_{r}(g))\\Psi_{0})\n\\end{eqnarray*}$$\\eqno{(37)}$$\nwhich implies, upon differentiating twice with respect to real parameters multiplying $f$ and $g$, that $(\\Psi_{0}, \\Phi(f) \\Phi(g) \\Psi_{0})$ and $(\\Psi_{0},\\Phi_{r}(f) \\Phi_{r}(g)\\Psi_{0})$ are equal as tempered distributions on\n${\\cal S}(\\mathbf{R}^{3}) \\times {\\cal S}(\\mathbf{R}^{3})$.
We have, from (15), (22),\n\\begin{eqnarray*}\n(\\Psi_{0}, \\Phi(\\vec{x})\\Phi(\\vec{y})\\Psi_{0}) = \\frac{1}{2m_{0}} \\delta(\\vec{x}-\\vec{y})=\\\\\n= \\frac{1}{(2m_{0})(2\\pi)^{3}} \\int d\\vec{k} \\exp(i\\vec{k} \\cdot (\\vec{x}-\\vec{y}))\n\\end{eqnarray*} $$\\eqno{(38)}$$\nwhile\n\\begin{eqnarray*}\n(\\Psi_{0}, \\Phi_{r}(\\vec{x}) \\Phi_{r}(\\vec{y}) \\Psi_{0}) = \\frac{1}{i}\\Delta_{+}(\\vec{x}-\\vec{y},m_{0}^{2})=\\\\\n=\\frac{1}{2(2\\pi)^{3}} \\int d\\vec{k} \\exp(i\\vec{k}\\cdot(\\vec{x}-\\vec{y}))\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}}\n\\end{eqnarray*}$$\\eqno{(39)}$$\nand, by (25),\n$$\n\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}}= \\frac{1}{m_{0}(1+\\frac{\\vec{k}^{2}}{m_{0}^{2}c^{2}})^{1\/2}}\n\\eqno{(40)}\n$$\nfrom which\n$$ \n\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}} \\to \\frac{1}{m_{0}} \\mbox{ as } c \\to \\infty\n\\eqno{(41)}\n$$\nFor finite $c$, (38) and (39) do not, however, satisfy (37), leading to a contradiction. q.e.d.\n\nAlthough the relativistic and nonrelativistic zero-time fields lead to inequivalent representations of the CCR, the corresponding two-point functions being different for finite $c$, (41) shows that (39) tends to (38) as $c \\to \\infty$ and suggests that proposal C might be correct. This is the content of the forthcoming lemma 2. We assume that we are given two normalized ``wave-functions'' $f_{1},f_{2}$ such that\n$$\nf_{1}, f_{2} \\in {\\cal S}(\\mathbf{R}^{3})\n\\eqno{(42)}\n$$\n\\textbf{Lemma 2} Let (42) hold and $\\epsilon $, $\\delta $ be chosen such that, for $i=1,2$,\n\\begin{eqnarray*}\n\\int_{\\frac{|\\vec{k}|}{m_{0}c}>\\delta} d\\vec{k}|\\tilde{f_{i}}(\\vec{k})|^{2} < \\epsilon\n\\end{eqnarray*}\n$$\\eqno{(43)}$$\nThen\n\\begin{eqnarray*}\n2m_{0} \\Delta C \\equiv 2m_{0} |(\\Psi_{0}, \\Phi(f_{1})\\exp(-i(t_{1}-t_{2})H) \\Phi(f_{2})\\Psi_{0})-\\\\\n- (\\Psi_{0}, \\Phi_{r}(f_{1}) \\exp(-i(t_{1}-t_{2})H_{r}) \\Phi_{r}(f_{2})\\Psi_{0})| \\le\\\\\n\\le (2\\epsilon + \\delta^{2}\/2 + |t_{1}-t_{2}|\\frac{m_{0}c^{2}}{\\hbar} \\frac{\\delta^{4}}{8})\n\\end{eqnarray*} $$\\eqno{(44)}$$\nAbove,\n$$\nH \\equiv \\int d\\vec{k} \\frac{\\vec{k}^{2}}{2m_{0}} a^{*}(\\vec{k})a(\\vec{k})\n\\eqno{(45)}\n$$\n$$\nH_{r} \\equiv \\int d\\vec{k} (\\omega_{\\vec{k}}^{c}-m_{0}c^{2}) a^{*}(\\vec{k})a(\\vec{k}) \n\\eqno{(46)}\n$$\nWe also define the number operator\n$$\nN \\equiv \\int d\\vec{k} a^{*}(\\vec{k})a(\\vec{k})\n\\eqno{(47)}\n$$\n\n\\textbf{Remark} It is supposed that $\\delta$ is sufficiently small and is coupled to $\\epsilon$, so that both are small: a fine tuning is required in (43) and depends on the specific problem, but the requirement (43) is very natural and corresponds to the previously mentioned condition that the wave-functions are ``small'' beyond a certain critical momentum (in the ``relativistic'' region of momenta). In addition, the time interval $|t_{1}-t_{2}|$ should be small in comparison with the characteristic time $\\frac{\\hbar}{m_{0}c^{2}}$ associated with the rest energy (here we reinserted $\\hbar$ for clarity). (45)-(47) may be understood as quadratic forms (see \\cite{RSII}, pg. 220). The quantity subtracted in (46) is the ``Zitterbewegungsterm'' \\cite{JJS}. Notice that the $2m_{0}$ factor in (44) cancels the product of the two factors $(2m_{0})^{-1\/2}$ coming from $\\Phi(f_{1})$ and $\\Phi(f_{2})$ in (15), or the corresponding relativistic factors in the limit $c \\to \\infty$ by (41).
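Before giving the proof, and for the reader's orientation (this side calculation is ours, added here to quantify (41) and the bounds used below), note that, setting $x \\equiv \\frac{\\vec{k}^{2}}{m_{0}^{2}c^{2}}$, (25) and (40) give\n$$\n\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}} = \\frac{1}{m_{0}}(1+x)^{-1\/2} = \\frac{1}{m_{0}}\\left(1 - \\frac{x}{2} + O(x^{2})\\right)\n$$\nand\n$$\n\\omega_{\\vec{k}}^{c} - m_{0}c^{2} = m_{0}c^{2}\\left[(1+x)^{1\/2}-1\\right] = \\frac{\\vec{k}^{2}}{2m_{0}} + m_{0}c^{2}\\, O(x^{2})\n$$\nwhere, for $0 \\le x \\le 1$, one has the explicit bound $|(1+x)^{1\/2}-1-\\frac{x}{2}| \\le \\frac{x^{2}}{8}$, since the binomial series of $(1+x)^{1\/2}$ is alternating with terms of decreasing modulus beyond first order. For $\\frac{|\\vec{k}|}{m_{0}c} \\le \\delta \\le 1$, i.e., $x \\le \\delta^{2}$, these expansions produce precisely the factors $\\frac{\\delta^{2}}{4m_{0}}$ and $\\frac{m_{0}c^{2}|\\tau|\\delta^{4}}{8\\hbar}$ appearing in the elementary inequalities of the proof below.\n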
\\textbf{Proof} We write, by (45), (46), (15) and (22), and setting $\\tau \\equiv t_{1}-t_{2}$,\n\\begin{eqnarray*}\n\\Delta C = | \\int d\\vec{k} \\overline{\\tilde{f}_{1}(\\vec{k})}\\tilde{f}_{2}(\\vec{k}) \\beta(\\vec{k},\\tau,c)|\\\\\n\\beta(\\vec{k},\\tau,c) \\equiv \\frac{1}{2m_{0}} \\exp(-i\\tau \\frac{\\vec{k}^{2}}{2m_{0}})-\\\\\n- \\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} \\exp(-i\\tau(\\omega_{\\vec{k}}^{c}-m_{0}c^{2}))\n\\end{eqnarray*} $$\\eqno{(48)}$$\nWe split the integral defining $\\Delta C$ in (48) into one over $ I_{\\delta} \\equiv \\{\\vec{k} ;\\frac{|\\vec{k}|}{m_{0}c}>\\delta \\}$, and the other over the complementary region. We now insert into (48) the elementary inequalities valid in the complement of $I_{\\delta}$ (they follow from the expansions recorded before the proof):\n\\begin{eqnarray*}\n\\frac{1}{2m_{0}} - \\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} \\le \\frac{\\delta^{2}}{4m_{0}}\\\\\n|\\exp(-i\\tau \\frac{\\vec{k}^{2}}{2m_{0}}) -\\exp(-i\\tau(\\omega_{\\vec{k}}^{c}-m_{0}c^{2}))|\\\\\n\\le \\frac{m_{0}c^{2}|\\tau|\\delta^{4}}{8\\hbar}\\\\\n\\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} \\le \\frac{1}{2m_{0}}\n\\end{eqnarray*}\nas well as, inside $I_{\\delta}$, the last of these bounds together with assumption (43) and the Cauchy-Schwarz inequality (the $\\tilde{f}_{i}$ having unit $L^{2}$ norm), to obtain (44). q.e.d.\n\nLemma 2 shows that in the free field case, in spite of the nonequivalence of the relativistic and nonrelativistic representations shown in theorem 3, Einstein causality is saved, at least in an approximative sense. The real trouble starts with interactions. In that case, (37) implies, taking now for $\\Phi$ the interacting field, and for $\\Psi_{0}$ the interacting vacuum $\\Omega_{0}$, that the two-point function of the interacting field must equal that of the free field of mass $m_{0}$ in the case of equivalence of representations. For a Hermitian local scalar field for which the vacuum is cyclic, (37) (with $m_{0} > 0$) implies that $\\Phi$ is a free field of mass $m_{0}$ (Theorem 4.15 of \\cite{StrWight}). We know, however, that interacting fields exist, at least for space dimensions less than or equal to 2; the first one historically \\cite{GlJa} was constructed in one space dimension from the free scalar Boson field of mass $m_{0} >0$. Its Hamiltonian is\n$$\nH(g) = H_{0}+H_{I}(g)= \\int_{\\mathbf{R}}dk \\omega_{k}a^{*}(k)a(k)+\\int_{\\mathbf{R}}dx g(x):\\Phi_{r}(x)^{4}:\n\\eqno{(49)}\n$$\nwith $g \\in L^{2}(\\mathbf{R})$ a real valued function. $H(g)$ is a well-defined symmetric operator on a dense set in Fock space (see the proposition on pg. 227 of \\cite{RSII}; for self-adjointness see further in the same reference). The dots in (49) denote the so-called Wick product, which means that all creation operators in $\\Phi_{r}(x)^{4}$ are to be placed to the left of all annihilation operators (for further elementary discussion see \\cite{MaRo}, and for a complete treatment \\cite{RSII}). In (49), the limit $g \\to \\lambda$ (with $\\lambda > 0$ a constant, interpreted as the coupling constant) exists in a well-defined sense \\cite{GliJa}. In the present case, the vacua $\\Omega_{0}$ and the no-particle state, which also belong to inequivalent representations, differ greatly.
This may already be expected on the level of (49), because the ground state $\\Omega_{g}$ of $H(g)$ (whose existence was proved in \\cite{GliJa}) cannot be $\\Psi_{0}$ because of the vacuum polarizing term $H_{I}^{P}(g)$ in (49):\n$$\nH_{I}^{P}(g) = \\int dx g(x) \\phi_{r}^{*}(x)^{4}\n\\eqno{(50)}\n$$\nThose terms in (49) which commute with the number operator $N$ given by (47) add up to\n$$\nH_{I}^{C}(g) = 6 \\int dx g(x) \\phi_{r}^{*}(x)^{2} \\phi_{r}(x)^{2} \n\\eqno{(51)}\n$$\nThe formal limit as $c \\to \\infty$ of the operator $H(g)-H_{I}^{C}(g)$ is not ``small''; for instance, for (50) we get from (41)\n$$\nH_{I,\\infty}^{P}(g) = \\int dk_{1} \\cdots dk_{4} \\tilde{g}(k_{1}+\\cdots+k_{4})a^{*}(k_{1}) \\cdots a^{*}(k_{4})\n\\eqno{(52)}\n$$\nin the sense of quadratic forms. In the formal limit $c \\to \\infty$, $g \\to \\lambda$, (51) yields\n$$\nH_{I} = \\frac{3\\lambda}{2m_{0}^{2}} \\int dxdy a^{*}(x)a^{*}(y)\\delta(x-y)a(x)a(y)\n\\eqno{(53)}\n$$\nwith $a^{*}(x),a(x)$ defined by (26): together with $H_{0}$ in (49), this defines the Hamiltonian of a nonrelativistic system of Bosons with delta-function interactions (see \\cite{Do} for the precise definition in a segment with periodic b.c.).\n\nThe limit $g \\to \\lambda$, followed by $c \\to \\infty$, was controlled by Dimock \\cite{Dim} in a remarkable tour-de-force. He showed that the two-particle scattering amplitude of model (49) converges to that of model (53) (with the free Hamiltonian (45)). The proof in \\cite{Dim} does not, however, offer any hint as to how the contribution of all the terms in $H(g)-H_{I}^{C}(g)$ becomes irrelevant in that limit (W.F.W. thanks Prof. Dimock for a discussion about this topic). \n\nThe above-mentioned point is crucial, for the following reason. For quantum systems in general, it is essential to arrive at many-body systems with \\textbf{nonzero} density $\\rho$ in the thermodynamic limit, i.e., $N \\to \\infty$, $V \\to \\infty$, with $\\frac{N}{V}=\\rho >0$ (see \\cite{MaRo} for an overview of applications). The corresponding non-relativistic system has, in contrast to the situation considered in \\cite{Dim}, also an infinite number of degrees of freedom ($N \\to \\infty$). The situation is analogous to the classical limit of quantum mechanical correlation functions considered by Hepp \\cite{HepC}, where two possible limits may be envisaged, one of them yielding quantum mechanical $N$-particle systems, the other one classical field theory. For free systems with nonzero density, non-Fock representations arise, both in the non-relativistic and in the relativistic cases \\cite{AW}, but it may be checked that lemma 2 continues to hold (for zero temperature). For interacting systems, however, $N$ is not a good quantum number, and, upon fixing it (at a large value proportional to the volume $V$), the relativistic system can only be close to the nonrelativistic one if the contribution of the terms $H(g)-H_{I}^{C}(g)$ becomes indeed irrelevant in the joint limit $g \\to \\lambda$ followed by $c \\to \\infty$. \n\nAs an example, we expect that the ground state energy per unit volume of the relativistic system (with a Hamiltonian for volume $V$ defined as in \\cite{GliJa}) tends, as $V \\to \\infty$ and $c \\to \\infty$, to the thermodynamic limit $e$ of the same quantity in the Lieb-Liniger model \\cite{LLi}, which is known to be $e(\\rho)= \\rho^{3}f(\\frac{\\lambda}{\\rho})$, with $f$ determined as the unique solution of a Fredholm integral equation.
Since $\\rho$ is not a parameter in the relativistic system, it is only when the above-mentioned terms do not contribute (in the limit $g \\to \\lambda$, $c \\to \\infty$) that a similar fixing of the density becomes possible also for the relativistic system. This seems to be a deep mystery, whatever the way the problem is regarded.\n\nIn order to explain the last issue more completely, consider the l.h.s. of (44) in lemma 2. In the first term thereof, $\\Psi_{0}$ should be replaced by the ground state of the Hamiltonian $H+H_{I}$, where $H$ is given by (45) and $H_{I}$ by (53), and $\\Phi(f)$ replaced by a bounded function of the zero time nonrelativistic fields as in (24). Properly speaking, instead of smearing with a function $g$ one should consider the Hamiltonian restricted to a bounded region, e.g. a segment with, say, periodic b.c., and take the thermodynamic limit, but we shall continue with the previous description for brevity. The second term on the left hand side of (44) should be replaced by\n$$\n\\lim_{g \\to \\lambda} (\\Omega_{g}, A_{r}(f_{1}) \\exp(-i(t_{1}-t_{2})H(g)) B_{r}(f_{2})\\Omega_{g})\n\\eqno{(54)}\n$$\nwhere $\\Omega_{g}$ is the ground state of $H(g)$ (shown to exist in \\cite{GliJa}) and $A_{r}(f_{1}),B_{r}(f_{2})$ are bounded local functions of the fields (22), (23), i.e., with $f_{1},f_{2}$ of \\textbf{compact} support in the space variable. It was shown in \\cite{GliJa} that the limit\n$$\n\\lim_{g \\to \\lambda} \\exp(i \\tau H(g)) A(f) \\exp(-i \\tau H(g))\n\\eqno{(55)}\n$$\nexists in the sense of the norm (in the C*-algebraic sense) for bounded local $A(f)$ (for these concepts, see \\cite{BRo2} or \\cite{Hug}). This limit, for both operators $A(f_{1}),B(f_{2})$ in (54), determines the observable content of (54), but it is clear that the whole of $H(g)$ will contribute to (55); in particular, for a certain choice of observables in (55), terms in $H(g)-H_{I}^{C}(g)$ such as\n\\begin{eqnarray*}\nH_{I,c}^{P}(g) = \\int dk_{1} \\cdots dk_{4} \\prod_{i=1}^{4} \\frac{c}{(2\\omega_{k_{i}}^{c})^{1\/2}}\\\\\n\\tilde{g}(k_{1}+\\cdots+k_{4}) a^{*}(k_{1}) \\cdots a^{*}(k_{4})\n\\end{eqnarray*} $$\\eqno{(56)}$$ \nwill contribute. Given that their formal limit as $c \\to \\infty$ does not vanish (see (52)), it seems very unlikely that the limit, as $c \\to \\infty$, of (55) is independent of $H(g)-H_{I}^{C}(g)$. In this connection, one may recall that the $S$-matrix, considered in \\cite{Dim} as an observable, is of a special kind, because it commutes with the free Hamiltonian $H_{0}$ in (49).\n\nThe basic ingredient of the proof of (55) \\cite{GliJa} is the fundamental property of microcausality (1) \\cite{Haag}, \\cite{StrWight}. On the other hand, the form (49) of the Hamiltonian is dictated by the property of Lorentz covariance \\cite{GlJa}, proved for this model in \\cite{HeOs}.\n\n\\section{Conclusion}\n\nIn this review we discussed two aspects of the dynamics of non-relativistic quantum systems, unified by a dichotomy in Hegerfeldt's theorem 1. According to this theorem, there are exactly two options, (5) and (6), for such systems.\n\nThe first aspect was related to option (6) in that theorem, viz. the attempt to isolate a quantum system from its surroundings by a set of boundary conditions, including those of Dirichlet and Neumann type (cases a1) and a2)). In general, this leads to physical inconsistencies, as reviewed in theorem 2.2 for non-relativistic systems (see also \\cite{GarKar}).
We view these inconsistencies as consequences of trying to impose conditions deriving from classical physics on quantum systems, be they non-relativistic or relativistic. The latter case is well illustrated in Milonni's famous paper \\cite{Mil}, where he proved that, near a perfectly reflecting slab, the transverse vector potential and the electric field satisfy a set of equal-time CCR different from those holding for free fields. In \\cite{BohrRos}, Bohr and Rosenfeld showed, under the natural assumption that the fields are measured by observing the motion of quantum massive objects with which the fields interact, that the above-mentioned equal-time CCR follow. They are, therefore, very fundamental. \n\nThis suggests that such idealized b.c. are unphysical: this fact was explicitly shown for Dirichlet or Neumann b.c. in the case of the Casimir effect \\cite{KNW}. The reason is that there are wild fluctuations of quantum fields over sharp surfaces \\cite{DeCa}. One promising direction to study this (as yet open) problem is to look at the electromagnetic field in the presence of dielectrics, instead of the ``infinitely thin'' conductor plates (see \\cite{Bim} and references given there). The Casimir problem (for conductors or dielectrics) is an example for which the adoption of generalized periodic b.c. (a3)) in theorems 2.1 and 2.2 is not a physically reasonable option: it is not compatible with the theory's classical limit.\n\nOur second topic concerned option (5) in theorem 1, the issue of instantaneous spreading for non-relativistic quantum systems. We concluded that there are serious obstacles in the way of rescuing Einstein causality in the (natural) approximative sense of lemma 2, due to terms such as the vacuum polarizing term (56) in the interaction Hamiltonian (49), which are not ``small'' in the formal limit $c \\to \\infty$ by (52). Since the presence of such terms is dictated by such fundamental principles as Lorentz covariance and microcausality, the solution may not be simple.\n\nAlthough we used a special model for the sake of argument, any physical theory with vacuum polarization, such as quantum electrodynamics, is expected to be subject to analogous considerations. Notice that the remarks on the use of approximate theories in \\cite{Yng} do not apply here, because the problems we pose are not due to the approximative character of the theories, such as the various cutoffs in quantum electrodynamics, but, as remarked in the previous paragraph, are due to an intrinsic property of relativistic quantum field theory, viz., vacuum polarization (or, more precisely, having a non-persistent vacuum).\n\nThe arguments we presented, however, are clearly no mathematical proof of a no-go theorem. One reason is that the limits $g \\to \\lambda$ and $c \\to \\infty$ do not necessarily commute. The problem is therefore open. It is hoped that a complete change of point of view may clarify the problem, but we conjecture that $H(g)-H_{I}^{C}(g)$ will play a central role in the final solution. \n\nProgress in both topics above would obviously be of great relevance for the foundations of quantum theory. \n\n\\textbf{Acknowledgement} The idea of a part of this review arose at the meeting on operator algebras and quantum physics, satellite conference to the XVIII international congress of mathematical physics. We thank the organizers for making the participation of one of us (W. F. W.) possible, and Prof. J. Dimock for discussions there on matters related to section 3.
We also thank Christian J\\\"{a}kel for critical remarks concerning possible changes of viewpoint, and for recalling some relevant references. W.F.W. also thanks J. Froehlich for calling his attention to the reference \\cite{Yng}.\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBiological systems, such as humans, use electrical signals as the medium of communication between their control centers (brains) and motor organs (arms, legs). While this is taken for granted by most people, those with severe physical impairments, such as quadriplegia, experience the breakdown of this communication system, rendering them unable to perform the most basic physical movements. Modern technologies, such as BCIs, have attempted to ameliorate this through the use of brain signals as commands for assistive systems \\cite{perdikis2018cybathlon}. MI, a common paradigm for BCI control, requires the subject to simulate or imagine movement of the limbs, on account of there being discernible differences in brain signals when moving different limbs \\cite{abiri2019comprehensive}. Due to it being non-invasive and cost-effective, EEG is the method of choice for collecting data for such systems \\cite{abiri2019comprehensive}.\n\nOne of the many recent developments in the application of EEG-driven BCIs is the Cybathlon competition held every four years under the auspices of Eidgenössische Technische Hochschule Zürich (ETH Zurich) \\cite{riener2014cybathlon}. The competition involves physically challenged individuals completing routine tasks via assistive systems. One such task -- the BCI race -- has the participants (called pilots) control a virtual game character via brain signals only. Competing teams, who may hail from either academia or industry, are responsible for creating BCI systems and training their respective pilots. The goal of the Cybathlon is to push the state-of-the-art in BCI assistive systems, and accelerate their adoption in the everyday lives of those who need them most.\n\nFor the 2020 edition of Cybathlon, a team from the Technische Universität München (TUM) called ``CyberTUM'' is amongst the competitors in the BCI race challenge. In order to achieve high scores in the competition, a major part of BCI development is the focus on robustness of the system, i.e., minimizing the variability of the system across different sessions and environments. Lack of robustness, in fact, is an established concern in almost all BCI systems. Possible causes of the problem include nonstationarity of EEG signals (variance for the same subject) \\cite{wolpaw2002brain} \\cite{vidaurre2010towards}. An additional cause, as noted by participating teams in Cybathlon 2016, is a change in the subject's emotional state. During the race, the pilots' stress levels increased; this is to be expected, as a public event such as the BCI race can heighten stress. This change in the pilots' emotional state caused their respective BCI systems to perform sub-optimally. \n\nThe objective of this work is to mitigate this concern and develop MI systems that are robust to perturbations in the subject's emotional state, specifically to emotional arousal. In order to achieve this, we develop VR environments to induce high and low arousal in the subject before recording MI data. VR environments have been previously used along with EEG to prompt changes in emotional arousal \\cite{baumgartner2006neural}.
Additionally, they have been used together with MI for treating Parkinson's disease \\cite{mirelman2013virtual}. To our knowledge, this is the first work where VR environments are used to increase the robustness of MI-BCI systems. Subsequently, learning algorithms are trained, not only for MI but also for different arousal states. The idea is that during the BCI race, we first detect the pilot's emotional state of arousal, and choose the appropriate MI classifier. Due to COVID-19, many steps in the above-mentioned outline had to be modified; the details are presented in what follows. \n\n\\section{Related Work}\n\n\\subsection{Cybathlon 2016}\nThe inaugural Cybathlon competition was held in 2016. After the competition, the competing teams published their methods for training the participants, amongst which were Brain Tweakers (EPFL) \\cite{perdikis2018cybathlon} and Mirage91 (Graz University of Technology). One of the pilots of the former performed well in the qualifiers but poorly in the final, prompting the authors to cite psychological factors such as stress as the possible cause for the drop. A similar course of events was observed for the pilot of Mirage91, who, after achieving an average runtime of 120 s in the days leading up to the Cybathlon, dropped to 196 s during the competition. The authors indicated that the pilot was showing signs of nervousness on competition day, with a heart rate of 132 beats per minute (bpm) prior to the race \\cite{statthaler2017cybathlon}. \n\nThe authors' hypothesis regarding the drop in their pilots' performances is supported by existing BCI literature \\cite{chaudhary2016brain} \\cite{lotte2013flaws} \\cite{hammer2012psychological} \\cite{jeunet2016standard}. Further support comes from evidence in affective science: it has been theorized that any event that causes an increase in emotional arousal can affect perception and memory in a manner which causes the retention of high-priority information and the disregard of low-priority information \\cite{mather2012selective}.\n\n\\subsection{Emotional Valence and Arousal}\nEmotions are defined as complex psychological states with three constituents: subjective experience, physiological response and behavioral response \\cite{hockenbury2000discovering}. Following early attempts \\cite{wundt1897outlines}, more rigorous descriptions of emotions were made, the most widely accepted of which is the `circumplex model' \\cite{russell1980circumplex}. It proposes that all emotions can be described as a combination of two properties: valence and arousal. These can be thought of as orthogonal axes in two dimensions. Neurologically, this entails that any emotional state is the result of two distinct and independent neural sub-systems \\cite{posner2005circumplex}. Figure \\ref{fig:circumplex} provides a visual representation of the circumplex model. As can be seen, emotions such as `excited' are high on both the arousal and valence axes, while `gloomy' is low in both arousal and valence.\n\n\\begin{figure}[tpb]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/valence-arousal.png}\n \\caption{The circumplex model of emotional classification. Figure courtesy of \\cite{Kim2016ImageRW}.} \\label{fig:circumplex}\n\\end{figure}\n\nAlternative descriptions, such as the `vector model' \\cite{Bradley1992RememberingPP}, do not veer off sharply from the circumplex model; they too base emotional classification on both valence and arousal.
Hence the circumplex model was used as the paradigm of emotional analysis for the duration of the project.\n\n\\subsection{Arousal, EEG and Motor Imagery}\\label{arousal-eeg-background}\nStates of high and low arousal can be inferred from EEG signals \\cite{pizzagalli2007electroencephalography}. This has been previously used to train learning systems for distinguishing between various arousal states \\cite{nagy2014predicting}. EEG bands pertinent to different states of arousal are alpha (8-14 Hz) -- related to a relaxed yet awakened state -- and gamma (36-44 Hz) -- a pattern associated with increased arousal and attention. The theta pattern (4-8 Hz), correlated with lethargy and sleepiness, is also useful for differentiating arousal. \n\nWith regards to motor imagery (MI), the most relevant EEG bands have been shown to be alpha (8-14 Hz) and beta (14-30 Hz) \\cite{graimann2010brain}, the latter of which is associated with high degrees of cognitive activity \\cite{pizzagalli2007electroencephalography}. \n\nMotor imagery data refers to data produced when the subject simulates limb movement. As movement of different limbs is sufficiently distinguishable, this can be used to perform control for various other tasks \\cite{padfield2019eeg}. To record EEG data for motor imagery, the 10-20 international system of electrode placement is used (figure \\ref{fig:eeg-map}). Due to the contralateral nature of limb control in the human brain, movement of the right arm is recorded most faithfully by C3 and that of the left arm by C4 \\cite{graimann2010brain}.\n\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figures\/eeg-map.jpg}\n \\caption{10-20 International system of EEG electrode placement. Electrodes C3 and C4 are most relevant for MI activity. Figure courtesy of \\cite{rojas2018study}.} \n \\label{fig:eeg-map}\n\\end{figure}\n\n\\section{Methodology}\n\n\\subsection{Virtual Reality Environments}\nTraditional methods of inducing stress include the Sing-a-song stress test (SSST) \\cite{brouwer2014new} and the Trier social stress test (TSST) \\cite{kirschbaum1993trier}, while meditation has been shown to induce relaxation \\cite{sedlmeier2012psychological} \\cite{lumma2015meditation}. Emulating such settings faithfully in VR is quite challenging, and may not be the most productive way to use VR to induce high\/low emotional arousal.\n\nPreviously, VR exposure therapy has been explored to alleviate various psychological disorders \\cite{krijn2004virtual}. One such example is using a VR height challenge -- placing the subject on higher ground in a virtual environment \\cite{diemer2016fear}. Not only does the challenge induce high emotional arousal in test subjects, but the control subjects -- the ones who are not acrophobic -- also exhibit the same physiological responses as the test group, i.e., increased heart rate and skin conductance level \\cite{diemer2016fear}. Similarly, VR environments, particularly those with natural scenery, e.g., a forest, have shown efficacy in reducing stress \\cite{anderson2017relaxation} \\cite{annerstedt2013inducing}. We thus developed two VR environments: one, called `Height', where the subject is placed on top of a skyscraper, and a second, called `Relaxation', which places the subject in a relaxing forest.
The environments were created using Unity 3D\\footnote{\\href{https:\/\/unity.com\/}{https:\/\/unity.com\/}}.\n\n\\begin{figure}[tbp]\n\\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/aroused.png}\n \\label{fig:f1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/relaxation.jpg}\n \\label{fig:f2}\n \\end{subfigure}\n \\caption{Virtual reality environments for inducing arousal in subjects. On top is `Height', designed to induce high arousal by placing the subject on the edge of a skyscraper. Below is `Relaxation', intended to lower arousal via a natural, calming setting.}\n \\label{fig:vr}\n\\end{figure}\n\n\n\\subsection{Dataset}\nAs this project was part of the CyberTUM team's participation in Cybathlon 2020, the original idea was to collect real data with the actual pilots who would be competing in the event proper. At the beginning of this work, however, no ethics approval had been acquired to run any experiments on the pilots. This was not detrimental to the project, as a proof-of-concept could still be arrived at by collecting EEG data from volunteers within the CyberTUM team. The COVID-19 pandemic, however, obstructed our means of collecting such data.\n\nIn the absence of our own motor-imagery and arousal data, we opted for the Graz 2b data set \\cite{leeb2008bci}. It belongs to a family of BCI datasets collected by the BCI Lab at Graz University of Technology, and has been used previously in the BCI Competition IV \\cite{tangermann2012review}. EEG data is collected for 9 subjects doing a binary motor-imagery task (moving the right or left hand on cue). The data is sampled at a frequency of 250 Hz with 3 EEG and 3 EOG channels. For our experiments, we use data from two subjects, B05 and B04, whom we henceforth refer to as subjects 1 and 2, respectively.\n\n\\subsection{Subject Classification as Proxy for Arousal Classification}\\label{subject-classification}\nAs mentioned, we were unable to obtain our own EEG arousal data. To train the classifiers, we therefore modified the experiment. Instead of using data with high\/low arousal emotional states as labels, we used different subjects as proxies for such states, making it a cross-subject classification task \\cite{del2014electroencephalogram} \\cite{riyad2019cross}. As EEG signals demonstrate significant variance between subjects, we can consider the data coming from subject A as belonging to the emotional state of high arousal, and the data from subject B as belonging to low arousal. With this approach, we can still train a classifier that would approximate the performance of one trained on actual arousal data, assuming the emotional states in this actual data are informative.\n\n\\subsection{Experimental Design}\nThe original scheme was to:\n\\begin{enumerate}\n \\item Develop VR environments in line with existing literature that are known to induce stress (high arousal) and relaxation (low arousal) in subjects.\n \\item Use electrodermal activity (EDA) to validate the efficacy of the VR environments. EDA is a widely-used measure of emotional arousal, as skin conductance rises with a rise in arousal \\cite{critchley2002electrodermal}.\n \\item Record MI data alternating between states of low and high arousal for each session. Start with 60 s of inducing high arousal via the `Height' environment, then record MI data for 45 s. Repeat the same with the `Relaxation' environment for the relaxed state.
Repeat this process for each trial. The MI data was to be recorded using the common paradigm of showing the participant a cue on screen (typically a left or right arrow) prompting them to imagine moving their left or right hand \\cite{ramoser2000optimal} \\cite{pfurtscheller1997eeg} \\cite{liu2017motor}.\n \\item Train an arousal classifier. The aim of this classifier is to indicate the emotional state (high or low arousal) of the subject.\n \\item Train separate MI classifiers for each emotional state. The goal is to optimize for accuracy, even if different types of pre-processing and classifier types were required for each state, unlike the arousal classifier, which necessitates the same pre-processing steps.\n \\item During deployment, first classify the emotional state using the arousal classifier, and based on its result, choose the appropriate MI classifier.\n\n\\end{enumerate}\nAs mentioned previously, due to numerous factors, several steps in the above formulation had to be either abandoned (steps 2 and 3) or modified (steps 4-6). The revised scheme replaced steps 4-6 with the following:\n\n\\begin{enumerate}\n \\item Train a cross-subject classifier replacing the arousal classifier. The task of this classifier is to take EEG input from either of the two subjects, and classify the input as belonging to either subject 1 or 2. As the classifier is agnostic to the subject, the same pre-processing had to be done for each subject's data.\n \\item Train separate MI classifiers for each subject instead of for each emotional state.\n \\item At test time, sample a run of a few data points (5 in our experiments), feeding them to the cross-subject classifier. Based on its mode (most frequent classification), select the appropriate MI classifier.\n\\end{enumerate}\n\n\\subsection{Learning algorithms}\\label{algos}\nWe experimented with a multitude of machine learning algorithms, which are briefly described as follows.\n\n\\paragraph{Logistic regression}\nLogistic regression is a modification of linear regression for a binary classification task \\cite{kleinbaum2002logistic}. It predicts the probability of a class given the input, by first learning a weighted linear combination of the input features and applying a logistic function to the result, e.g., for two features $x_1, x_2$:\n\\begin{equation}\n y = \\frac{1}{1+e^{-a}} \\quad \\mbox{where} \\quad a = \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2\n\\end{equation}\n\n\\paragraph{Linear discriminant analysis}\nLDA attempts to maximize inter-class variance while minimizing intra-class variance \\cite{balakrishnama1998linear} in the data. This results in a clustering of the data where it is easily separable. It is widely used in MI BCI \\cite{wang2006common}.\n\n\\paragraph{Naive Bayes}\nA probabilistic classifier, Naive Bayes uses Bayes' law to calculate the posterior probability of an event (class) given the prior and the likelihood \\cite{murphy2006naive}. The posterior can then be updated with new evidence. It assumes that the features are independent, hence the term `naive' in its name.\n\\begin{equation}\n P(y|x) = \\frac{P(y)\\,P(x|y)}{P(x)}\n\\end{equation}\n\n\\paragraph{Ensemble model}\nThis is implemented as a voting classifier in gumpy. It uses a mix of classifiers such as nearest-neighbor, LDA and support vector machines (SVM) and uses the majority vote as the classification output. As it includes Naive Bayes and LDA amongst its voters, it can be expected to perform at least on par with them, although majority voting offers no strict guarantee of outperforming every individual component.
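To make this concrete, the following minimal sketch (our own illustration with scikit-learn, not the actual gumpy implementation; the exact component classifiers and the feature matrix X with labels y are assumptions) shows how such a majority-voting ensemble can be assembled:\n\\begin{verbatim}\n# Sketch of a majority-vote ensemble akin to gumpy's voting\n# classifier (illustrative; component choices are assumptions).\nimport numpy as np\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.ensemble import VotingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\n\ndef make_voting_ensemble():\n    # voting='hard' means the output is the majority vote\n    # over the component predictions.\n    return VotingClassifier(\n        estimators=[\n            ('lr', LogisticRegression(max_iter=1000)),\n            ('lda', LinearDiscriminantAnalysis()),\n            ('nb', GaussianNB()),\n            ('knn', KNeighborsClassifier(n_neighbors=5)),\n            ('svm', SVC(kernel='rbf')),\n        ],\n        voting='hard',\n    )\n\n# Usage with a random placeholder feature matrix:\nX = np.random.randn(200, 16)          # 200 trials, 16 features\ny = np.random.randint(0, 2, 200)      # binary labels\nX_tr, X_te, y_tr, y_te = train_test_split(\n    X, y, test_size=0.2, random_state=0)\nclf = make_voting_ensemble().fit(X_tr, y_tr)\nprint('test accuracy:', clf.score(X_te, y_te))\n\\end{verbatim}\nWith voting set to `hard', the ensemble simply outputs the most frequent class amongst its components, mirroring the majority-vote rule described above.\n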
\\section{Results}\n\n\\subsection{Artifact Removal}\n\\begin{figure}[!t]\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/subject-5-1.png}\n \\caption{Plotting ICA with EOG channels. A visual depiction of the first component (in red) of ICA being correlated with EOG.}\n \\label{fig:ica1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/subject-5-2.png}\n \\caption{Plotting ICA components against each other. The peak in the first component (blue) is evidently due to an eye-blink.}\n \\label{fig:ica2}\n \\end{subfigure}\n \\caption{Artifact analysis using ICA for subject 1.}\n \\label{ica}\n\\end{figure}\nThe data for subjects 1 and 2 contained 324 and 399 trials (cued imaginations of right or left hand movement), respectively. The standard approach to training MI classifiers is to analyze the data and remove existing artifacts before extracting features \\cite{uriguen2015eeg}. We first applied a Butterworth bandpass filter \\cite{daud2015butterworth} to extract frequencies within the range 2-60 Hz. We then analyze the data for artifacts. A common source of artifacts in MI data is ocular activity, such as eye blinks: it is recorded by the electrooculography (EOG) channels, whose electrodes sit in the forehead's proximity, and it may also leak into the EEG channels used for MI. Such noise can be detected by first performing independent component analysis (ICA) -- widely used in EEG preprocessing \\cite{ica} -- which tries to decompose a signal into constituent components under the assumption of statistical independence. We then see which of the resultant components correlates most with the EOG channels, and filter it out \\cite{ica-eog}. An example of ICA on subject 1 can be seen in figure \\ref{ica}. We filter out the first component, which seems to be picking up an eye blink. ICA on subject 2 did not improve the results.\n\n\n\\subsection{Feature Extraction}\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}[b]{0.49\\linewidth}\n\\centering\n\\includegraphics[height=3cm]{figures\/pca-1.png}\n\\caption{PCA visualization of subject 1's feature vector.}\n\\end{subfigure} \n\\begin{subfigure}[b]{0.49\\linewidth}\n\\centering\n\\includegraphics[height=3cm]{figures\/pca-2.png}\n\\caption{PCA visualization of subject 2's feature vector.} \n\\end{subfigure}\n\\caption{Dimensionality reduction using PCA for feature space visualization of both subjects. Subject 2's features are more informative for the motor-imagery task compared to subject 1, which is also reflected in the training accuracy. Right hand movements are labeled red while left hand movements are blue.}\n\\label{fig:pca}\n\\end{figure*}\nSeveral methods were attempted to extract features. In principle, feature extraction in BCI takes two forms: frequency band selection and channel selection (also known as spatial filtering). In regards to the former, we previously mentioned in section \\ref{arousal-eeg-background} that the alpha and beta bands have been shown to be most related to MI activity. Accordingly, we use these frequency bands as our features. In the same section we observed that channels C3 and C4 are the most relevant for MI, which we can use directly without any spatial filtering. For this, instead of using raw alpha and beta patterns, we opt for logarithmic sub-band powers of said patterns (see the gumpy documentation\\footnote{\\href{http:\/\/gumpy.org\/}{http:\/\/gumpy.org\/}}). Each spectrum is divided into four sub-bands. A sketch of this preprocessing and feature-extraction pipeline is given below.
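The following sketch illustrates the pipeline just described -- Butterworth bandpass filtering, removal of the ICA component most correlated with the EOG reference, and logarithmic sub-band powers of the alpha and beta bands -- using scipy and scikit-learn rather than the exact gumpy calls; array shapes and helper names are our assumptions:\n\\begin{verbatim}\n# Illustrative preprocessing\/feature pipeline (not the exact\n# gumpy code). eeg: (n_samples, n_channels), eog: (n_samples,),\n# both sampled at fs = 250 Hz.\nimport numpy as np\nfrom scipy.signal import butter, filtfilt, welch\nfrom sklearn.decomposition import FastICA\n\nfs = 250.0\n\ndef bandpass(x, lo, hi, fs, order=4):\n    b, a = butter(order, [lo \/ (fs \/ 2), hi \/ (fs \/ 2)],\n                  btype='band')\n    return filtfilt(b, a, x, axis=0)\n\ndef remove_eog_component(eeg, eog):\n    # Decompose into independent components and zero out the\n    # one most correlated (in absolute value) with the EOG.\n    ica = FastICA(n_components=eeg.shape[1], random_state=0)\n    sources = ica.fit_transform(eeg)\n    corr = [abs(np.corrcoef(s, eog)[0, 1]) for s in sources.T]\n    sources[:, int(np.argmax(corr))] = 0.0\n    return ica.inverse_transform(sources)\n\ndef log_subband_powers(trial, bands=((8, 14), (14, 30)),\n                       n_sub=4):\n    # Log power in n_sub equal sub-bands of each band, for\n    # every channel of the trial (n_samples, n_channels).\n    feats = []\n    for ch in range(trial.shape[1]):\n        f, pxx = welch(trial[:, ch], fs=fs, nperseg=256)\n        for lo, hi in bands:\n            edges = np.linspace(lo, hi, n_sub + 1)\n            for i in range(n_sub):\n                m = (f >= edges[i]) & (f < edges[i + 1])\n                feats.append(np.log(pxx[m].mean() + 1e-12))\n    return np.array(feats)\n\\end{verbatim}\nUnder these assumptions, a trial restricted to channels C3 and C4 is reduced to $2 \\times 2 \\times 4 = 16$ log sub-band powers.\n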
An alternative approach for feature extraction in MI classification has been the use of the ``common spatial pattern (CSP)'' algorithm \\cite{Koles2005SpatialPU}. It tries to find spatial filters that make the variances of subcomponents of a signal maximally discriminative \\cite{csp} with respect to a given task. In our experiments, however, CSP performed poorly compared to the logarithmic sub-band power of the alpha and beta bands. The results when CSP was applied have thus been omitted from the report, but can be reproduced in the notebook (see section \\ref{documentation}). A visualization of the features using PCA for both subjects can be seen in figure \\ref{fig:pca}. As can be observed, the features for subject 2 are more conducive to discrimination of MI. This is also verified in the training results, where every classification algorithm achieved higher accuracy for subject 2 compared to subject 1.\n\n\n\\subsection{Training}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/training.pdf}\n \\caption{Training scheme for both classifiers. MI classifiers are trained separately for each subject (labels corresponding to right and left hand), while the cross-subject classifier is trained on the features of both subjects.} \\label{fig:training}\n\\end{figure}\nAs mentioned previously, we train two types of classifiers: a per-subject MI classifier and a cross-subject classifier. The entire training procedure is visually depicted in figure \\ref{fig:training}. After feature extraction, we first train an MI classifier for each subject with labels 0 and 1 (left and right hand movement, respectively). Subsequently, we combine the data of both subjects, labelling it 0 and 1 (subject 1 and subject 2, respectively), and train the cross-subject classifier. All classifiers described in section \\ref{algos} are trained in each case; the results can be seen in table \\ref{results}.\n\n\\paragraph{MI classification}\nThe data for each subject was divided into an 80-20 split (training-test). The features were also standardized by rescaling to zero mean and unit standard deviation. Results for both subjects were satisfactory, although subject 1's data was harder to train on compared to subject 2's. This can be observed by looking at the ranges of accuracy for the two subjects (55.84-70.12\\% vs. 91.25-95\\%). Subject 2's classifiers achieved both a higher average accuracy and a lower variance. LDA performed best for subject 1, while logistic regression achieved the best results for subject 2. \n\n\\paragraph{Cross-subject classification}\nTraining for the cross-subject classifiers followed the same feature extraction procedure, with the only difference being a re-labeling of the samples from limb movements to source subject. Once again, we split the data into 80-20 (train-test) portions, though this time the data is the combined samples from both subjects. For testing the classifiers, we split the test set further into sections containing five samples (trials) each. For each section, we take the mode (most frequent prediction) of the classifier, which is considered the final result. For example, if our test data has 50 samples from each subject, we portion it into 20 sections (each subject with 10 sections). We then feed each section to the classifier and take the majority vote for that section as the classifier's prediction; a sketch of this mode-based selection is given below. As can be seen in table \\ref{results}, the ensemble model outperforms the rest of the algorithms by a considerable margin.
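The test-time logic -- feed a small calibration block to the cross-subject classifier and pick the MI classifier via the mode of its predictions -- can be sketched as follows (our own illustration; the classifier variables are assumed to be already-trained scikit-learn-style estimators):\n\\begin{verbatim}\n# Sketch of the section-wise mode rule and classifier selection\n# (illustrative; variable names are assumptions).\nimport numpy as np\n\ndef majority(labels):\n    # Most frequent integer label in a 1-D array.\n    return int(np.bincount(np.asarray(labels, int)).argmax())\n\ndef select_mi_classifier(cross_clf, mi_clfs, calib_feats):\n    # calib_feats: (5, n_features) block of consecutive trials;\n    # mi_clfs[s] is the MI classifier trained for subject s.\n    subject = majority(cross_clf.predict(calib_feats))\n    return mi_clfs[subject]\n\ndef sectionwise_accuracy(cross_clf, X_te, y_te, size=5):\n    # Accuracy of the mode rule used for cross-subject testing.\n    n_sections = len(X_te) \/\/ size\n    correct = 0\n    for i in range(n_sections):\n        sl = slice(i * size, (i + 1) * size)\n        pred = majority(cross_clf.predict(X_te[sl]))\n        correct += int(pred == majority(y_te[sl]))\n    return correct \/ n_sections\n\\end{verbatim}\nAt deployment time, select_mi_classifier would be called once at the start of a session, after which the returned MI classifier is used to decode left- versus right-hand imagery.\n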
In addition to this, we also created t-SNE embeddings of the features with 2 and 3 dimensions \\cite{tsne}. The results were not up to par and have thus been left out here (they can be reproduced via the notebook discussed in section \\ref{documentation}). More details can be found in section \\ref{discussion}.\n\n\\begin{table}[h]\n\\caption{Summary of results. Accuracy scores for MI (both subjects) as well as cross-subject (X-sub) classification using various classifiers. Best results in bold.}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\textbf{Task} & \\multicolumn{4}{|c|}{\\textbf{Classifier}} \\\\\n\\cline{2-5} \n\\textbf{} & \\textbf{Logistic Regression} & \\textbf{LDA} & \\textbf{Naive Bayes} & \\textbf{Ensemble}\\\\\n\\hline\nMI-sub 1 & 67.53\\% & \\textbf{70.12\\%} & 55.84\\% & 70.12\\% \\\\\n\\hline\nMI-sub 2 & \\textbf{95\\%} & 91.25\\% & 91.25\\% & 93.75\\% \\\\\n\\hline\nX-sub & 58.65\\% & 59.38\\% & 59.38\\% & \\textbf{68.75\\%} \\\\\n\\hline\n\\end{tabular}\n\\label{results}\n\\end{center}\n\\end{table}\n\n\\section{Discussion}\\label{discussion}\n\nThe results indicate that, assuming different emotional states impart sufficient differences to the EEG data, we can train classifiers that perform well above chance. Significant differences in the EEG signals between the two subjects were observed during feature extraction and classification. This is not an uncommon phenomenon and has been documented in the literature \\cite{pizzagalli2007electroencephalography}. Blankertz et al. show that, after testing on 80 subjects, the average classifier accuracy on a binary task was 74.4\\% with a spread of 16.5\\% \\cite{BLANKERTZ20101303}. Our findings buttress this, as the best models for subjects 1 and 2 achieved 70.12\\% and 95\\% accuracy, respectively. This variability is generally chalked up to differences in the subjects' abilities for implicit learning \\cite{kotchoubey2000learning}, performance in early neurofeedback sessions \\cite{Neumann1117} and attention spans \\cite{Daum94}. \n\nAccording to Tangermann et al., the best results on data set 2b during Competition IV were achieved using filter-bank CSP as a pre-processing step followed by a Naive Bayes classifier \\cite{tangermann2012review}. In our testing, however, vanilla CSP for feature extraction was sub-optimal. Naive Bayes was also found to trail behind the other classifiers, as seen in table \\ref{results}. We thus observe that vanilla CSP is not as performant as log band-power in our experiments, while we did not perform any experiments with filter-bank CSP.\n\nIn regards to cross-subject classification, appreciable results have been achieved by using ICA for feature extraction \\cite{tangkraingkij2009selecting} combined with a nearest-neighbor (NN) classifier. We verify the efficacy of ICA as a pre-processing step for feature extraction. Other approaches have shown PCA to be an effective step for dimensionality reduction \\cite{palaniappan2005energy}. While we could not confirm this with PCA, the more modern dimensionality reduction technique of t-SNE performed poorly in our experiments (tested using target dimensions 2 and 3).
There is, however, recent evidence that using t-SNE in tandem with common dictionary learning may yield good results \cite{nishimoto2020eeg}.

\subsection{Limitations and Future Outlook}

A primary limitation of this work is the lack of testing on actual subjects. While the system achieves acceptable performance on an existing dataset, we cannot conclude much about its usefulness in the real world. To make such assertions with any degree of confidence, we would need to evaluate how quickly the system can switch between the MI classifiers based on the predictions of the emotion (cross-subject) classifier. The same holds for the calibration time at the start of each session; while we use five trials during testing and obtain well-above-chance results, a comprehensive and systematic verification of the system is in order if it is to be of any practical use.

In addition to alpha patterns, gamma bands are correlated with increased arousal \cite{pizzagalli2007electroencephalography} and could therefore carry a strong supervision signal for the classifier. Had we acquired EEG data for aroused and relaxed states of a subject, an emphasis on gamma bands would have been warranted. In the present case, however, since we did not have data corresponding to high and low arousal, gamma patterns were assumed not to be informative.

Future work may also look at training classifiers for more than two subjects. While two subjects suffice for the purposes of this study, since the original task was the discrimination between two emotional states of arousal, it may be worth exploring how the cross-subject classifier scales to additional classes. This may be interpreted as classifying not only emotional arousal but also valence (positive or negative), which may have important ethical implications.

Most of the classifiers used in this project are classic algorithms, chosen for their continued prevalence in MI-BCI. Future work may, however, incorporate modern approaches such as deep neural networks for MI classification \cite{Tabar_2016}. Deep learning could also be used to formulate our problem as one of multi-task learning over both arousal and MI classification \cite{multitask}. In this manner, the multiple classifiers could be replaced by a single model that classifies both emotional arousal and motor imagery.


\section{Interdisciplinary Work}
The nature of this project necessitated a multi-disciplinary approach, from understanding and systematizing human emotional arousal to developing algorithms for distinguishing both emotional states and motor function via EEG. This work thus borrows from, incorporates, and synthesizes elements of a number of disciplines, including psychology (emotional arousal), neuroscience (EEG and motor imagery), computer graphics (virtual-reality environments), and artificial intelligence (machine learning for classification). Broadly, we can categorize psychology and neuroscience as brain sciences, and computer graphics and artificial intelligence under the umbrella of informatics. Each of these two fields contributed unique methods and insights without which the project may not have come to fruition. The most valuable insight was the difficulty of training accurate machine learning algorithms on EEG data.
Although machine learning has become the dominant paradigm for classification tasks, this project demonstrates that the pre-processing of the data (via techniques such as ICA and log band-power) is at least as important to the success of the system as the classifier itself (the results for other feature extractors can be reproduced in the provided notebook); even after pre-processing, there is no guarantee of robust performance. Another key insight was the extent to which EEG patterns vary between people, pointing to the difficulty of transfer learning in this domain.


\section{Conclusion}
A major hurdle in the widespread and practical use of assistive systems based on MI-BCI is their lack of reliability. While this can have many origins, an important source, as identified by two Cybathlon teams in 2016, is shifts in the subject's state of emotional arousal. In this work, we present an end-to-end framework for inducing high/low arousal in subjects, collecting EEG data, and training learning algorithms for robust MI classification. While COVID-19 enforced certain constraints on data acquisition, we were still able to develop a proof of concept for how emotion-robust MI-BCI systems could be trained. Our results indicate that if the training signal contains sufficient information, i.e., each emotional state has a sufficiently distinct EEG signature, we can successfully train systems that are robust to variance in emotional arousal. A thorough study, however, needs to be conducted to determine the practicality of such a system with respect to variables such as classifier switching times and calibration periods.


\section{Acknowledgments}
This project would not have been possible without the aid of Nicholas Berberich, who provided constant and high-quality guidance on the overall methodology, feature extraction, and algorithms. We also thank Matthijs Pals for his support with MI data preprocessing and Svea Meyer for her help in the initial phase of the project and with EEG terminology.


\bibliographystyle{plain}