diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpslu" "b/data_all_eng_slimpj/shuffled/split2/finalzzpslu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpslu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA classical result discovered by Carath\\'{e}odory and Fej\\'{e}r in 1911 \\cite{caratheodory1911zusammenhang} states that, if an $N\\times N$ Hermitian Toeplitz matrix $\\m{T}$ is positive semidefinite (PSD) and has rank $r\\leq N$, then it can be factorized as\n\\equ{\\m{T} = \\m{A}\\m{P}\\m{A}^H, \\label{eq:vanderdecp1}}\nwhere $\\m{P}$ is an $r\\times r$ positive definite diagonal matrix and $\\m{A}$ is an $N\\times r$ Vandermonde matrix whose columns are discrete sinusoidal waves with distinct frequencies. Moreover, such a decomposition is unique if $r0$, if and only if $\\m{T} \\geq \\m{0}$. Moreover, the decomposition is unique if $\\m{T}$ is rank-deficient. \\label{thm:VD}\n\\end{thm}\n\\begin{proof} Suppose that $\\m{T}$ can be written as in \\eqref{eq:VD}, where $p_k>0$, it is evident that $\\m{T}$ is PSD. This completes the `only if' part. We next show the `if' part. To do so, we start with the case of $r=\\rank\\sbra{\\m{T}}\\leq N-1$. Since $\\m{T}\\geq \\m{0}$, there exists $\\m{V} = \\mbra{\\m{v}_1^T,\\dots,\\m{v}_{N}^T}^T\\in\\mathbb{C}^{N\\times r}$ satisfying $\\m{T} = \\m{V}\\m{V}^H$, where $\\m{v}_j\\in\\mathbb{C}^{1\\times r}$, $j=1,\\dots,N$. Let $\\m{V}_U = \\mbra{\\m{v}_1^T,\\dots, \\m{v}_{N-1}^T}^T$ and $\\m{V}_L = \\mbra{\\m{v}_2^T,\\dots, \\m{v}_{N}^T}^T$. By the structure of $\\m{T}$, we have that $\\m{V}_U\\m{V}_U^H = \\m{V}_L\\m{V}_L^H$. By \\cite[Theorem 7.3.11]{horn2012matrix}, there exists an $r\\times r$ unitary matrix $\\m{U}$ satisfying $\\m{V}_L = \\m{V}_U\\m{U}$. It follows that $\\m{v}_j = \\m{v}_1\\m{U}^{j-1}$, $j=2,\\dots,N$ and therefore,\n\\equ{t_j = \\m{v}_1 \\m{U}^{-j}\\m{v}_1^H, \\quad j=1-N,\\dots,N-1. \\label{eq:tjinvU}}\nNote that $\\m{U}$ has the following eigen-decomposition:\n\\equ{\\m{U} = \\widetilde{\\m{U}} \\diag\\sbra{z_1,\\dots,z_r} \\widetilde{\\m{U}}^H, \\label{eqw:UinUz}}\nwhere $\\widetilde{\\m{U}}$ is also an $r\\times r$ unitary matrix and $z_k = e^{i2\\pi f_k}$ with $f_k\\in\\mathbb{T}$, $k=1,\\dots,r$. Insert \\eqref{eqw:UinUz} into \\eqref{eq:tjinvU} and let $p_k = \\abs{\\m{v}_1\\widetilde{\\m{u}}_k}^2>0$, $k=1,\\dots,r$, where $\\widetilde{\\m{u}}_k$ denotes the $k$th column of $\\widetilde{\\m{U}}$. Then we have that\n\\equ{t_j = \\sum_{k=1}^r p_k e^{-i2\\pi j f_k}.}\nUsing the identity above, $\\m{T}$ can be written as in \\eqref{eq:VD}. It is evident that $f_k$, $k=1,\\dots,r$ are distinct since otherwise, $\\rank\\sbra{\\m{T}} < r$, which cannot be true.\n\nWe now consider the case of $r=N$, in which $\\m{T}$ is positive definite. To obtain a decomposition as in \\eqref{eq:VD}, we choose arbitrarily $f_N \\in\\mathbb{T}$ and let $p_N = \\sbra{\\m{a}^H\\sbra{f_N} \\m{T}^{-1} \\m{a}\\sbra{f_N}}^{-1}>0$. After that, we define a new sequence $\\m{t}'=\\mbra{t'_j}, \\;\\abs{j}\\leq N-1$ as:\n\\equ{t'_j = t_j - p_N e^{-i2\\pi j f_N}. \\label{eq:t1j}}\nIt follows that\n\\equ{\\m{T}\\sbra{\\m{t}'}\n= \\m{T} - p_N\\m{a}\\sbra{f_N}\\m{a}^H\\sbra{f_N}. 
By the choice of $p_N$, the matrix\n\\equ{\\begin{bmatrix} p_N^{-1} & \\m{a}^H\\sbra{f_N} \\\\ \\m{a}\\sbra{f_N} & \\m{T} \\end{bmatrix} = \\begin{bmatrix} \\m{a}^H\\sbra{f_N}\\m{T}^{-\\frac{1}{2}} \\\\ \\m{T}^{\\frac{1}{2}} \\end{bmatrix}\\begin{bmatrix} \\m{a}^H\\sbra{f_N}\\m{T}^{-\\frac{1}{2}} \\\\ \\m{T}^{\\frac{1}{2}} \\end{bmatrix}^H \\notag}\nis PSD and rank-deficient.\nNotice that $\\m{T}\\sbra{\\m{t}'}$ is the Schur complement of the entry $p_N^{-1}$ in the above matrix, and therefore\n\\equ{\\m{T}\\sbra{\\m{t}'}\\geq \\m{0}. \\label{eq:Tu1psd}}\nMoreover, it holds that\n\\equ{\\rank\\sbra{\\m{T}\\sbra{\\m{t}'}} < N. \\label{eq:Tu1rank}}\nApplying the case of $r\\leq N-1$ shown above to $\\m{T}\\sbra{\\m{t}'}$ and then adding $p_N\\m{a}\\sbra{f_N}\\m{a}^H\\sbra{f_N}$ back via \\eqref{eq:Tu1}, we obtain an $N$-atomic decomposition of $\\m{T}$ as in \\eqref{eq:VD}.\n\nWe finally show the uniqueness in the case of $r\\leq N-1$. Suppose that $\\m{T}$ admits two decompositions as in \\eqref{eq:VD}, specified by $\\sbra{\\m{f}, \\m{P}}$ and $\\sbra{\\m{f}', \\m{P}'}$ with $p_k, p'_k>0$. It is evident that\n\\equ{\\m{A}\\sbra{\\m{f}'} \\m{P}'\\m{A}^H\\sbra{\\m{f}'}=\\m{A}\\sbra{\\m{f}} \\m{P}\\m{A}^H\\sbra{\\m{f}}.}\nTherefore, there exists an $r\\times r$ unitary matrix $\\m{U}'$ satisfying $\\m{A}\\sbra{\\m{f}'}\\m{P}'^{\\frac{1}{2}} = \\m{A}\\sbra{\\m{f}} \\m{P}^{\\frac{1}{2}} \\m{U}'$. It follows that\n\\equ{\\m{A}\\sbra{\\m{f}'} = \\m{A}\\sbra{\\m{f}} \\m{P}^{\\frac{1}{2}} \\m{U}' \\m{P}'^{-\\frac{1}{2}}.}\nThis means that for every $j = 1,\\dots,r$, $\\m{a}\\sbra{f'_j}$ lies in the subspace spanned by $\\lbra{\\m{a}\\sbra{f_k}}_{k=1}^r$. By the fact that $r\\leq N-1$ and that any $N$ atoms $\\m{a}\\sbra{f_k}$ with distinct $f_k$'s are linearly independent, we have that $f'_j\\in\\lbra{f_k}_{k=1}^r$ and thus the two sets $\\lbra{f'_j}_{j=1}^r$ and $\\lbra{f_k}_{k=1}^r$ are identical. It follows that the two decompositions of $\\m{T}$ are identical.\n\\end{proof}\n\nWe next discuss how to obtain the Vandermonde decomposition, to be specific, how to solve for $f_k$ and $p_k$ in \\eqref{eq:VD}. In fact, a computational approach can be provided based on the proof of Theorem \\ref{thm:VD}. In the case of $r\\leq N-1$, using the Cholesky decomposition, we can compute $\\m{V}\\in\\mathbb{C}^{N\\times r}$ satisfying $\\m{T} = \\m{V}\\m{V}^H$. By the arguments of the proof, it is easy to show the following equation:\n\\equ{\\sbra{\\m{V}_U^H\\m{V}_L - z_k \\m{V}_U^H\\m{V}_U}\\widetilde{\\m{u}}_k = 0,}\nfrom which $z_k$ and $\\widetilde{\\m{u}}_k$, $k=1,\\dots,r$ can be computed as the eigenvalues and eigenvectors of the matrix pencil $\\sbra{\\m{V}_U^H\\m{V}_L, \\m{V}_U^H\\m{V}_U}$. Finally, the parameters are obtained as: $f_k = \\frac{1}{2\\pi}\\Im \\ln z_k\\in\\mathbb{T}$ and $p_k = \\abs{\\m{v}_1\\widetilde{\\m{u}}_k}^2$, $k=1,\\dots,r$, where $\\m{v}_1$ is the first row of $\\m{V}$. In the case of $r= N$, $f_N\\in\\mathbb{T}$ can be chosen arbitrarily first, and the rest can be done following the proof.
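As a concrete illustration of this procedure, the following is a minimal numerical sketch of the matrix pencil computation in the case of $r\\leq N-1$ (the function name is ours, and an eigenvalue factorization is used in place of the Cholesky decomposition so that rank-deficient $\\m{T}$ is handled directly):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eig\n\ndef vandermonde_decomposition(T, tol=1e-10):\n    # T: N x N PSD Toeplitz matrix with rank r <= N-1.\n    # Returns frequencies f_k in [0, 1) and powers p_k > 0.\n    w, E = np.linalg.eigh(T)\n    keep = w > tol * w.max()\n    V = E[:, keep] * np.sqrt(w[keep])            # T = V V^H\n    VU, VL = V[:-1, :], V[1:, :]\n    # Eigenpairs of the matrix pencil (V_U^H V_L, V_U^H V_U):\n    z, U = eig(VU.conj().T @ VL, VU.conj().T @ VU)\n    f = np.mod(np.angle(z) / (2 * np.pi), 1.0)   # f_k = Im(ln z_k) / (2 pi)\n    p = np.abs(V[0, :] @ U) ** 2                 # p_k = |v_1 u_k|^2\n    return f, p\n\\end{verbatim}\nIn exact arithmetic the outputs satisfy $\\m{T}=\\sum_k p_k\\m{a}\\sbra{f_k}\\m{a}^H\\sbra{f_k}$; in floating point the reconstruction error is at the level of the truncation tolerance.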
\\section{FS Vandermonde Decomposition of Toeplitz Matrices} \\label{sec:VDint}\n\nWe present the FS Vandermonde decomposition result in this section. To encode the interval information into the Vandermonde decomposition, we first construct a trigonometric polynomial that is nonnegative on the interval $\\mathcal{I}$ and negative on its complement. Let us first clarify some notation. For $f_L\\neq f_H\\in\\mathbb{T}$, if $f_L< f_H$, then $\\mathcal{I}=\\mbra{f_L, f_H}$ denotes a closed interval as usual. Otherwise, we define $\\mathcal{I}=\\mbra{f_L, f_H}\\coloneqq \\mathbb{T}\\backslash \\sbra{f_H, f_L}$. By this definition, we can conveniently deal with the case in which $0$ (or $1$) is an interior point of $\\mathcal{I}$. The trigonometric polynomial $g$ is defined as:\n\\equ{g(z) = \\frac{1}{z\\sqrt{z_Lz_H}}\\sbra{z-z_L}\\sbra{z-z_H}\\sgn\\sbra{f_H-f_L}, \\label{eq:g1}}\nwhere $z_L \\coloneqq e^{i2\\pi f_L}$, $z_H \\coloneqq e^{i2\\pi f_H}$ and $\\sgn\\sbra{\\cdot}$ is the sign function. With simple derivations, we have\n\\equ{g(z)= r_1 z^{-1} + r_0 + \\overline{r}_1 z, \\label{eq:gz1}}\nwhere\n{\\setlength\\arraycolsep{2pt}\\equa{r_0\n&=& - \\frac{z_L+z_H}{\\sqrt{z_Lz_H}} \\sgn\\sbra{f_H-f_L} \\notag \\\\\n&=& -2\\cos\\mbra{\\pi\\sbra{f_H-f_L}}\\sgn\\sbra{f_H-f_L}, \\label{eq:r0}\\\\ r_1\n&=& \\sqrt{z_Lz_H}\\sgn\\sbra{f_H-f_L} \\notag\\\\\n&=& e^{i\\pi\\sbra{f_L+f_H}} \\sgn\\sbra{f_H-f_L}.\\label{eq:r1}\n}}It is evident that $g(z)$ is a Hermitian trigonometric polynomial that is real-valued on $\\mathbb{T}$. By the way that $g(z)$ is constructed, we know that $g(z)$ has two simple roots $z_L$ and $z_H$, and equivalently, $g(f)$ has two simple roots $f_L$ and $f_H$. Therefore, $g(f)$ flips its sign around $f_L$ and $f_H$. Two possibilities are: $g(f)$ is positive on $\\sbra{f_L, f_H}$ and negative on $\\sbra{f_H, f_L}$, or negative on $\\sbra{f_L, f_H}$ and positive on $\\sbra{f_H, f_L}$. To determine which one is true, we check the value at $f=\\frac{1}{2}\\sbra{f_L+f_H}$:\n\\equ{\\begin{split}\n&g\\sbra{\\frac{1}{2}\\sbra{f_L+f_H}} \\\\\n&= r_0 + 2\\Re \\sbra{r_1 e^{-i\\pi \\sbra{f_L+f_H} }} \\\\\n&= \\lbra{2-2\\cos\\mbra{\\pi\\sbra{f_L-f_H}}} \\sgn\\sbra{f_H-f_L}. \\end{split}}\nConsequently, the sign of $g$ at $f=\\frac{1}{2}\\sbra{f_L+f_H}$ is identical to that of $f_H-f_L$, meaning that $g(f)$ is always positive on $\\sbra{f_L, f_H}$ and negative on $\\sbra{f_H, f_L}$, whether $f_L<f_H$ or $f_L>f_H$.
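This sign pattern is straightforward to verify numerically; the following is a small sanity check (with arbitrarily chosen endpoints, for illustration only):\n\\begin{verbatim}\nimport numpy as np\n\ndef g(f, fL, fH):\n    # g(f) = r_1 e^{-i 2 pi f} + r_0 + conj(r_1) e^{i 2 pi f},\n    # with r_0 and r_1 as in (eq:r0) and (eq:r1).\n    s = np.sign(fH - fL)\n    r0 = -2 * np.cos(np.pi * (fH - fL)) * s\n    r1 = np.exp(1j * np.pi * (fL + fH)) * s\n    w = np.exp(-2j * np.pi * f)\n    return (r1 * w + r0 + np.conj(r1) * np.conj(w)).real\n\nfL, fH = 0.2, 0.3                 # the interval I = [0.2, 0.3]\nf = np.linspace(0.0, 1.0, 1000, endpoint=False)\ninside = (f >= fL) & (f <= fH)\nassert np.all(g(f[inside], fL, fH) >= -1e-12)   # nonnegative on I\nassert np.all(g(f[~inside], fL, fH) <= 1e-12)   # nonpositive outside I\n\\end{verbatim}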
Now we are ready to present the FS Vandermonde decomposition result, which is summarized in the following theorem.\\footnote{Part of the FS Vandermonde decomposition result was extended to a general form in the recent preprint \\cite{chao2016semidefinite}, which appeared online after our conference paper \\cite{yang2016vandermonde_ccc} was accepted.}\n\n\\begin{thm} Given $\\mathcal{I}\\subset\\mathbb{T}$, a Toeplitz matrix $\\m{T}\\in\\mathbb{C}^{N\\times N}$ admits an FS Vandermonde decomposition, as in \\eqref{eq:VD}, with $f_k\\in\\mathcal{I}$, if and only if\n{\\setlength\\arraycolsep{2pt}\\equa{\\m{T}\n&\\geq& \\m{0}, \\label{eq:Tupsd}\\\\ \\m{T}_g\n&\\geq& \\m{0}, \\label{eq:Tunew}\n}}where $g$ is defined by \\eqref{eq:gz1}-\\eqref{eq:r1} and $\\m{T}_g$ by \\eqref{eq:Tg}.\nMoreover, the decomposition is unique if either $\\m{T}$ or $\\m{T}_g$ is rank-deficient. \\label{thm:BCVD}\n\\end{thm}\n\n\\begin{proof} We first show the ``if'' part. Consider the case of $r\\leq N-1$. It then follows from \\eqref{eq:Tupsd} and Theorem \\ref{thm:VD} that $\\m{T}$ admits a unique Vandermonde decomposition as in \\eqref{eq:VD}. So, it suffices to show $f_k\\in \\mathcal{I}$, $k=1,\\dots,r$ under the additional condition \\eqref{eq:Tunew}. To do so, note by \\eqref{eq:VD} that\n\\equ{t_{n-m} = T_{mn} = \\sum_{k=1}^r p_k e^{i2\\pi (m-n) f_k}. \\label{eq:tlk}}\nIt immediately follows that\n\\equ{\\begin{split}\\mbra{T_g}_{mn}\n&= \\sum_{j=-1}^1 r_j t_{n-m+j} \\\\\n&= \\sum_{j=-1}^1 r_j \\sum_{k=1}^r p_k e^{i2\\pi (m-n-j) f_k}\\\\\n&= \\sum_{k=1}^r p_k e^{i2\\pi (m-n) f_k} \\sum_{j=-1}^1 r_j e^{-i2\\pi j f_k}\\\\\n&= \\sum_{k=1}^r p_k g\\sbra{f_k} e^{i2\\pi (m-n) f_k}, \\end{split} \\label{eq:Tgkl}}\nand hence\n\\equ{\\begin{split}\\m{T}_g\n&= \\sum_{k=1}^r p_k g\\sbra{f_k} \\m{a}\\sbra{N-1,f_k}\\m{a}^H\\sbra{N-1,f_k}\\\\\n&= \\m{A}\\sbra{N-1,\\m{f}}\\diag\\sbra{p_1g\\sbra{f_1},\\dots, p_rg\\sbra{f_r}} \\m{A}^H\\sbra{N-1,\\m{f}}, \\end{split} \\label{eq:T1ueq}}\nwhere $\\m{A}\\sbra{N-1,\\m{f}} \\coloneqq \\mbra{\\m{a}\\sbra{N-1,f_1},\\dots,\\m{a}\\sbra{N-1,f_r}}$ is an $\\sbra{N-1}\\times r$ Vandermonde matrix and $\\diag\\sbra{p_1g\\sbra{f_1},\\dots, p_rg\\sbra{f_r}}$ denotes a diagonal matrix with $p_kg\\sbra{f_k}$, $k=1,\\dots,r$ on the diagonal. Note that $\\m{A}\\sbra{N-1,\\m{f}}$ has full column rank since $r\\leq N-1$. Using \\eqref{eq:T1ueq} and \\eqref{eq:Tunew}, we have that\n\\equ{\\begin{split}\n&\\diag\\sbra{p_1g\\sbra{f_1},\\dots, p_r g\\sbra{f_r}} = \\m{A}^{\\dag}\\sbra{N-1,\\m{f}} \\m{T}_g \\m{A}^{\\dag H}\\sbra{N-1,\\m{f}} \\geq \\m{0}, \\end{split}}\nwhere $\\cdot^{\\dag}$ denotes the matrix pseudo-inverse operator.\nThis means that $p_kg\\sbra{f_k}\\geq 0$, and since $p_k>0$, we have $g\\sbra{f_k}\\geq 0$, $k=1,\\dots,r$. By the property of $g(f)$, finally, we have $f_k\\in\\mathcal{I}$, $k=1,\\dots,r$.\n\nWe next consider the case of $r=N$ in which $\\m{T}$ is positive definite. Let $f_N = f_L$ and $p_N = \\sbra{\\m{a}^H\\sbra{f_N} \\m{T}^{-1} \\m{a}\\sbra{f_N}}^{-1}>0$. Similar to that in the proof of Theorem \\ref{thm:VD}, we define a new sequence $\\m{t}'=\\mbra{t'_j}, \\;\\abs{j}\\leq N-1$ as in \\eqref{eq:t1j}, which therefore satisfies \\eqref{eq:Tu1}, \\eqref{eq:Tu1psd} and \\eqref{eq:Tu1rank}. Moreover, we have\n\\equ{\\begin{split}\\mbra{T_g\\sbra{\\m{t}'}}_{mn}\n&= \\sum_{j=-1}^1 r_j t'_{n-m+j} \\\\\n&= \\mbra{T_g}_{mn} - p_N g(f_N) e^{i2\\pi (m-n) f_N}, \\end{split}}\nand hence\n\\equ{\\m{T}_g\\sbra{\\m{t}'} = \\m{T}_g - p_Ng\\sbra{f_N} \\m{a}\\sbra{N-1,f_N}\\m{a}^H\\sbra{N-1,f_N}. \\label{eq:Tgt1}}\nBy \\eqref{eq:Tunew} and the fact that $g\\sbra{f_N}=g\\sbra{f_L}=0$, we have\n\\equ{\\m{T}_g\\sbra{\\m{t}'} = \\m{T}_g \\geq \\m{0}. \\label{eq:Tu1newpsd}}\nNow consider $\\m{T}\\sbra{\\m{t}'}$ that satisfies \\eqref{eq:Tu1psd}, \\eqref{eq:Tu1rank} and \\eqref{eq:Tu1newpsd}. Following from the ``if'' part of Theorem \\ref{thm:BCVD} in the case of $r\\leq N-1$ that we just proved, $\\m{T}\\sbra{\\m{t}'}$ admits a unique decomposition as in \\eqref{eq:VD}, with $f_k\\in\\mathcal{I}$, $k=1,\\dots,r=N-1$. Therefore, it follows from \\eqref{eq:Tu1} that\n\\equ{\\m{T} = \\m{T}\\sbra{\\m{t}'} + p_N\\m{a}\\sbra{f_N}\\m{a}^H\\sbra{f_N}}\nhas a decomposition as in \\eqref{eq:VD}, with $f_k\\in\\mathcal{I}$, $k=1,\\dots,r=N$. So we complete the ``if'' part.\n\nThe ``only if'' part can be shown by similar arguments. In particular, given $\\m{T}$ as in \\eqref{eq:VD}, it is evident that \\eqref{eq:Tupsd} holds. Moreover, \\eqref{eq:Tunew} also holds, since we still have \\eqref{eq:T1ueq}, in which $g\\sbra{f_k}\\geq 0$, $k=1,\\dots,r$ by the property of $g$.\n\nWe finally show the uniqueness under the additional condition that $\\m{T}$ or $\\m{T}_g$ is rank-deficient. When $\\m{T}$ is rank-deficient, this is a direct consequence of Theorem \\ref{thm:VD}.
In the other case when $\\m{T}$ has full rank and $\\m{T}_g$ is rank-deficient, note first that there are at least $N$ distinct $f_k$'s in the FS Vandermonde decomposition of $\\m{T}$, since, otherwise, $\\m{T}$ loses rank. We now recall \\eqref{eq:T1ueq}, in which $\\m{A}\\sbra{N-1,\\m{f}}$ has full row rank and $g(f_k)\\geq 0$. To guarantee that $\\m{T}_g$ is rank-deficient, $g(f_k)\\neq 0$ can hold for at most $N-2$ of the $f_k$'s, and the remaining $f_k$'s must be either $f_L$ or $f_H$. This means that the decomposition consists of exactly $N$ atoms and two of them are located at $f_L$ and $f_H$. Therefore, the other $N-2$ frequencies are fixed as well, and the FS Vandermonde decomposition is unique.\n\\end{proof}\n\nThe FS Vandermonde decomposition can be computed similarly to the standard Vandermonde decomposition provided that the conditions of Theorem \\ref{thm:BCVD} are satisfied. More concretely, in the case when $\\m{T}$ is rank-deficient, it admits a unique Vandermonde decomposition that can be computed as in Section \\ref{sec:standardVD}. In the case when $\\m{T}$ has full rank, an $N$-atomic decomposition can be computed following from the proof of Theorem \\ref{thm:BCVD}, to be specific, by fixing $f_N=f_L$ first and computing the other parameters following the proof.
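A minimal sketch of the full-rank case is as follows (an illustration under our naming; it reuses the function vandermonde_decomposition from the sketch in Section \\ref{sec:standardVD}):\n\\begin{verbatim}\nimport numpy as np\n\ndef fs_vandermonde_full_rank(T, fL, fH):\n    # Full-rank case: fix f_N = f_L, deflate as in (eq:Tu1), then\n    # decompose the rank-deficient remainder by the pencil method.\n    N = T.shape[0]\n    a = np.exp(2j * np.pi * fL * np.arange(N))            # a(f_L)\n    pN = 1.0 / (a.conj() @ np.linalg.solve(T, a)).real    # p_N\n    Tp = T - pN * np.outer(a, a.conj())                   # T(t'), rank < N\n    f, p = vandermonde_decomposition(Tp)\n    return np.append(f, fL % 1.0), np.append(p, pN)\n\\end{verbatim}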
Finally, note that the FS Vandermonde decomposition result can be extended straightforwardly to the multiple frequency band case. Let $K=\\bigcup_{l=1}^J \\mbra{f_{Ll}, f_{Hl}}$, where $\\mbra{f_{Ll}, f_{Hl}}\\subset\\mathbb{T}$, $l=1,\\dots, J\\geq 2$ are disjoint. We have the following corollary of Theorem \\ref{thm:BCVD}, the proof of which is straightforward and thus is omitted.\n\\begin{cor} Given $K = \\bigcup_{l=1}^J \\mbra{f_{Ll}, f_{Hl}}$, a Toeplitz matrix $\\m{T}\\in\\mathbb{C}^{N\\times N}$ admits an FS Vandermonde decomposition, as in \\eqref{eq:VD}, with $f_k\\in K$, if and only if there exist sequences $\\m{t}_l$, $l=1,\\dots,J$ satisfying\n{\\setlength\\arraycolsep{2pt}\\equa{\\sum_{l=1}^J \\m{t}_l\n&=& \\m{t}, \\label{eq:sumTleqT}\\\\ \\m{T}\\sbra{\\m{t}_l}\n&\\geq& \\m{0}, \\label{eq:Tupsdl}\\\\ \\m{T}_{g_l}\\sbra{\\m{t}_l}\n&\\geq& \\m{0}, \\quad l=1,\\dots,J, \\label{eq:Tunewl}\n}}where $g_l$, $l=1,\\dots,J$ are $g$ defined with respect to $\\mbra{f_{Ll}, f_{Hl}}$, respectively. \\label{cor:BCVDM}\n\\end{cor}\n\n\n\\section{Duality} \\label{sec:duality}\nUsing the FS Vandermonde decomposition result presented in the previous section, we can explicitly characterize the cone of Toeplitz matrices admitting such decompositions. Due to the interest in optimization problems, we naturally look at the dual cone, which, as we will see, enables us to link the FS Vandermonde decomposition to the theory of trigonometric polynomials, to be specific, the positive real lemma (PRL) given in \\cite{davidson2002linear,alkire2002convex} (see also \\cite{dumitrescu2007positive}).\n\nFor a sequence $\\m{t}=\\mbra{t_j}$, $\\abs{j}\\leq N-1$ with $t_{-j} = \\overline{t}_j$, let $\\m{t}_R = \\mbra{\\Re t_{N-1}, \\dots, \\Re t_{1}, \\frac{\\sqrt{2}}{2}t_0, \\Im t_1, \\dots, \\Im t_{N-1}}^T\\in\\mathbb{R}^{2N-1}$ be a representation of $\\m{t}$ in the real domain, where the coefficient $\\frac{\\sqrt{2}}{2}$ for $t_0$ is chosen for convenience. It is obvious that all $N\\times N$ Toeplitz matrices admitting an FS Vandermonde decomposition on a given interval $\\mathcal{I}\\subset\\mathbb{T}$ form a cone that can be identified with\n\\equ{\\begin{split}\n&\\mathcal{K}_{\\text{VDF}} \\coloneqq \\lbra{\\m{t}_R:\\; \\m{T} = \\sum_{k} p_k\\m{a}\\sbra{f_k}\\m{a}^H\\sbra{f_k},\\; p_k\\geq 0, f_k\\in \\mathcal{I}}.\\end{split} \\label{eq:KVDF}}\nDefine\n\\equ{\\begin{split}\\mathcal{K}_{\\text{VDM}}\n&\\coloneqq \\lbra{\\m{t}_R:\\; \\m{T}\\geq\\m{0}, \\; \\m{T}_{g}\\geq \\m{0}}, \\end{split} \\label{eq:KVDM}}\nwhere $g$ is defined in Theorem \\ref{thm:BCVD}. A direct consequence of Theorem \\ref{thm:BCVD} is that\n\\equ{\\mathcal{K}_{\\text{VDF}} = \\mathcal{K}_{\\text{VDM}}. \\label{eq:VDconeeq} }\n\nWe next consider the dual cone of $\\mathcal{K}_{\\text{VDF}}$ defined as \\cite{boyd2004convex}\n\\equ{\\mathcal{K}_{\\text{VDF}}^*\\coloneqq \\lbra{\\m{\\alpha}\\in\\mathbb{R}^{2N-1}:\\; \\m{t}_R^T\\m{\\alpha}\\geq 0 \\text{ for any } \\m{t}_R\\in\\mathcal{K}_{\\text{VDF}}}. \\label{eq:defdualcone}}\nBefore proceeding to the main result of this section, we first introduce some notation. Let\n\\equ{\\mathcal{K}_{\\text{PolF}}\\coloneqq \\lbra{\\m{\\gamma}_R:\\; \\sum_{j=1-N} ^{N-1} \\gamma_j e^{i2\\pi j f} \\geq 0,\\; f\\in \\mathcal{I}} \\label{KPolF}}\ndenote the cone of trigonometric polynomials that are of order $N-1$ and nonnegative on $\\mathcal{I}$, where $\\m{\\gamma}_R$ is defined similarly to $\\m{t}_R$. Let also $\\m{\\Theta}_j$, $\\abs{j}\\leq N-1$ be an $N\\times N$ elementary Toeplitz matrix with ones on its $j$th diagonal and zeros elsewhere. With respect to $\\m{\\Theta}_j$ and the trigonometric polynomial $g$ defined by \\eqref{eq:gz1}-\\eqref{eq:r1}, we define the $(N-1)\\times (N-1)$ Toeplitz matrix $\\m{\\Theta}_{gj}$ in the same way that $\\m{T}_g$ is defined with respect to $\\m{T}$. By definition, it is easy to verify that\n\\setlength\\arraycolsep{2pt}{\\equa{\\m{T}\n&=& \\sum_{j=1-N}^{N-1} \\m{\\Theta}_j t_j, \\label{eq:TinTheta}\\\\ \\m{T}_g\n&=& \\sum_{j=1-N}^{N-1} \\m{\\Theta}_{gj} t_j. \\label{eq:TginThetag}\n}}We also define the cone\n\\equ{\\begin{split} \\mathcal{K}_{\\text{PolM}}\\coloneqq \\Big\\{\\m{\\gamma}_R:\\;\n& \\gamma_{-j} = \\tr\\sbra{\\m{\\Theta}_j \\m{Q}_0} + \\tr\\mbra{ \\m{\\Theta}_{gj} \\m{Q}_1 }, \\\\\n& \\abs{j}\\leq N-1, \\\\\n& \\m{Q}_0\\in\\mathbb{C}^{N\\times N}, \\m{Q}_1\\in\\mathbb{C}^{(N-1)\\times (N-1)},\\\\\n& \\m{Q}_0\\geq\\m{0},\\m{Q}_1\\geq\\m{0} \\Big\\}. \\end{split} \\label{KPolM}}\nThe main result of this section is given in the following theorem.\n\n\\begin{thm} We have the following identities:\n {\\setlength\\arraycolsep{2pt}\\equa{\\mathcal{K}_{\\text{VDF}}^*\n &=& \\mathcal{K}_{\\text{PolF}}, \\label{eq:dualeq1} \\\\ \\mathcal{K}_{\\text{PolM}}^*\n &=& \\mathcal{K}_{\\text{VDM}}. \\label{eq:dualeq2}\n }}Therefore, provided that $\\mathcal{K}_{\\text{VDF}} = \\mathcal{K}_{\\text{VDM}}$, we can conclude that $\\mathcal{K}_{\\text{PolF}} = \\mathcal{K}_{\\text{PolM}}$, and vice versa.\n\\label{thm:dualcone}\n\\end{thm}\n\n\\begin{proof} We first show \\eqref{eq:dualeq1}. Note that $\\m{t}_R\\in\\mathcal{K}_{\\text{VDF}}$ if and only if\n\\equ{t_j = \\sum_{k} p_k e^{-i2\\pi j f_k}, \\quad j = 1-N,\\dots,N-1, \\label{eq:tj_KVDF}}\nwhere $p_k\\geq 0$ and $f_k\\in\\mathcal{I}$.
For any $\\m{\\alpha}=\\mbra{\\alpha_{1-N},\\dots,\\alpha_{N-1}}^T\\in\\mathbb{R}^{2N-1}$, we define $\\m{\\gamma}\\in\\mathbb{C}^{2N-1}$ such that $\\gamma_0 = \\sqrt{2}\\alpha_0$, $\\gamma_j = \\alpha_{-j}+i\\alpha_{j}$ and $\\gamma_{-j} = \\alpha_{-j}-i\\alpha_{j}$, $j=1,\\dots,N-1$. It follows that $\\m{\\alpha} = \\m{\\gamma}_R$ and\n\\equ{\\begin{split}\\m{t}_R^T\\m{\\alpha}\n&= \\frac{\\sqrt{2}}{2}t_0\\cdot \\frac{\\sqrt{2}}{2}\\gamma_0 + \\Re \\sum_{j=1}^{N-1} \\overline{t}_j \\gamma _j\\\\\n&= \\frac{1}{2} \\sum_{j=1-N}^{N-1} \\overline{t}_j\\gamma_j. \\end{split} \\label{eq:realtHgamma1}}\nInserting \\eqref{eq:tj_KVDF} into \\eqref{eq:realtHgamma1}, we have that\n\\equ{\\m{t}_R^T\\m{\\alpha} = \\frac{1}{2} \\sum_{k} p_k \\sum_{j=1-N}^{N-1} \\gamma_j e^{i2\\pi j f_k}. \\label{eq:realtHgamma}}\nBy \\eqref{eq:realtHgamma} and the definition of the dual cone, $\\m{\\alpha} = \\m{\\gamma}_R \\in\\mathcal{K}_{\\text{VDF}}^*$ if and only if the right-hand side of \\eqref{eq:realtHgamma} is nonnegative for any $p_k\\geq 0$ and any $f_k\\in\\mathcal{I}$. The above condition holds if and only if $h(f) \\coloneqq \\sum_{j=1-N}^{N-1} \\gamma_j e^{i2\\pi j f}$ is nonnegative on $\\mathcal{I}$, or equivalently, $\\m{\\alpha} \\in \\mathcal{K}_{\\text{PolF}}$ by \\eqref{KPolF}.\n\nTo show \\eqref{eq:dualeq2}, we can similarly define $\\m{t}$ for $\\m{\\alpha}\\in\\mathbb{R}^{2N-1}$ such that $\\m{\\alpha} = \\m{t}_R$. It follows that $\\m{T}$ and $\\m{T}_g$ are Hermitian. For any $\\m{\\gamma}_R\\in \\mathcal{K}_{\\text{PolM}}$, which can be expressed as in \\eqref{KPolM}, we have that\n\\equ{\\begin{split}\\m{\\gamma}_R^T\\m{\\alpha}\n&= \\frac{1}{2} \\sum_{j=1-N}^{N-1} \\overline{\\gamma}_j t_j\\\\\n&= \\frac{1}{2} \\sum_{j=1-N}^{N-1} \\gamma_{-j} t_j \\\\\n&= \\frac{1}{2} \\sum_{j=1-N}^{N-1} t_j \\lbra{\\tr\\sbra{\\m{\\Theta}_j \\m{Q}_0} + \\tr\\mbra{ \\m{\\Theta}_{gj} \\m{Q}_1 }}. \\end{split}}\nUsing the identities in \\eqref{eq:TinTheta} and \\eqref{eq:TginThetag}, we have that\n\\equ{\\m{\\gamma}_R^T\\m{\\alpha} = \\frac{1}{2} \\tr\\sbra{\\m{T}\\m{Q}_0} + \\frac{1}{2} \\tr\\sbra{\\m{T}_g \\m{Q}_1}. \\label{eq:equ_dualcone}}\nBy the definition of the dual cone, $\\m{\\alpha} \\in \\mathcal{K}_{\\text{PolM}}^*$ if and only if $\\m{\\gamma}_R^T\\m{\\alpha}\\geq 0$ for any $\\m{\\gamma}_R\\in \\mathcal{K}_{\\text{PolM}}$. Using \\eqref{KPolM} and \\eqref{eq:equ_dualcone}, the above condition holds if and only if $\\tr\\sbra{\\m{T}\\m{Q}_0} + \\tr\\sbra{\\m{T}_g \\m{Q}_1}\\geq 0$ for any $\\m{Q}_0\\geq \\m{0}$ and $\\m{Q}_1\\geq \\m{0}$, which holds if and only if $\\tr\\sbra{\\m{T}\\m{Q}_0}\\geq 0$ for any $\\m{Q}_0\\geq \\m{0}$ and $\\tr\\sbra{\\m{T}_g \\m{Q}_1}\\geq 0$ for any $\\m{Q}_1\\geq \\m{0}$, and is further equivalent to the condition $\\m{T}\\geq \\m{0}$ and $\\m{T}_g\\geq \\m{0}$. The last condition is equivalent to $\\m{\\alpha} = \\m{t}_R\\in \\mathcal{K}_{\\text{VDM}}$ by \\eqref{eq:KVDM}.\n\nFinally, provided that $\\mathcal{K}_{\\text{VDF}} = \\mathcal{K}_{\\text{VDM}}$ and using \\eqref{eq:dualeq1} and \\eqref{eq:dualeq2}, we have that\n\\equ{\\mathcal{K}_{\\text{PolF}} = \\mathcal{K}_{\\text{VDF}}^* = \\mathcal{K}_{\\text{VDM}}^* = \\mathcal{K}_{\\text{PolM}}^{**}.}
Using the identity that $\\mathcal{K}_{\\text{PolM}}^{**} = \\mathcal{K}_{\\text{PolM}}$, which follows from the fact that $\\mathcal{K}_{\\text{PolM}}$ is convex and closed \\cite{boyd2004convex}, we conclude that $\\mathcal{K}_{\\text{PolF}} = \\mathcal{K}_{\\text{PolM}}$.\nBy similar arguments we can also show that $\\mathcal{K}_{\\text{VDF}} = \\mathcal{K}_{\\text{VDM}}$ provided that $\\mathcal{K}_{\\text{PolF}} = \\mathcal{K}_{\\text{PolM}}$.\n\\end{proof}\n\nBy Theorem \\ref{thm:dualcone}, the FS Vandermonde decomposition on $\\mathcal{I}$ is linked via duality to the trigonometric polynomials that are nonnegative on the same interval. Moreover, the identity $\\mathcal{K}_{\\text{PolF}} = \\mathcal{K}_{\\text{PolM}}$ provides a matrix-form parametrization of the coefficients of these polynomials. In fact, this is exactly the Gram matrix parametrization given by the PRL in \\cite{davidson2002linear,alkire2002convex} (see also \\cite{dumitrescu2007positive}). This means that the PRL in \\cite{davidson2002linear,alkire2002convex} can be obtained from the FS Vandermonde decomposition; conversely, the PRL also provides an alternative way to characterize the set of Toeplitz matrices admitting an FS Vandermonde decomposition.\\footnote{Note that Theorem \\ref{thm:BCVD} is stronger in the sense that it concludes that all such Toeplitz matrices always admit a decomposition containing $N$ atoms or fewer.} Therefore, it will not be surprising that, as we will see, for certain convex optimization problems the two techniques can be applied to give the primal and the dual problems, respectively. But note that there are indeed scenarios in which one technique can be applied while the other cannot. Examples will be provided in the ensuing sections to demonstrate the usefulness of the FS Vandermonde decomposition.\n\n\\begin{rem} The trigonometric polynomial $g(z) = r_{-1}z + r_0 + r_1z^{-1}$ that is nonnegative on $\\mathcal{I}$ and negative on its complement plays an important role in both the FS Vandermonde decomposition of Toeplitz matrices and the Gram matrix parametrization of trigonometric polynomials. It is worth noting that the polynomial defined in the present paper (recall \\eqref{eq:gz1}-\\eqref{eq:r1}) is different from those in \\cite{dumitrescu2007positive,davidson2002linear,alkire2002convex}. As a matter of fact, while the polynomial we define applies uniformly to all intervals $\\mathcal{I}\\subset\\mathbb{T}$, certain modifications to the polynomial or additional operations such as sliding the interval have to be made in \\cite{dumitrescu2007positive, davidson2002linear,alkire2002convex} when $\\mathcal{I}$ contains certain critical points such as $0$ (or $1$) and $\\frac{1}{2}$.\n\\end{rem}
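To illustrate how the parametrization \\eqref{KPolM} can be used in computations, the following is a small CVXPY feasibility sketch that tests whether a given coefficient vector $\\m{\\gamma}$ belongs to $\\mathcal{K}_{\\text{PolM}}$ (the function name and the data layout, with complex $\\gamma_j$ stored directly rather than $\\m{\\gamma}_R$, are ours):\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef in_K_PolM(gamma, fL, fH):\n    # gamma: length 2N-1, with gamma_j stored at gamma[j + N - 1].\n    N = (len(gamma) + 1) // 2\n    s = np.sign(fH - fL)\n    r0 = -2 * np.cos(np.pi * (fH - fL)) * s\n    r1 = np.exp(1j * np.pi * (fL + fH)) * s\n    Q0 = cp.Variable((N, N), hermitian=True)\n    Q1 = cp.Variable((N - 1, N - 1), hermitian=True)\n    cons = [Q0 >> 0, Q1 >> 0]\n    for j in range(1 - N, N):\n        Tj = np.eye(N, k=j)                     # Theta_j\n        Tgj = (r1 * np.eye(N - 1, k=j - 1) + r0 * np.eye(N - 1, k=j)\n               + np.conj(r1) * np.eye(N - 1, k=j + 1))  # Theta_{gj}\n        cons.append(cp.trace(Tj @ Q0) + cp.trace(Tgj @ Q1)\n                    == gamma[-j + N - 1])       # gamma_{-j} = tr(.) + tr(.)\n    prob = cp.Problem(cp.Minimize(0), cons)\n    prob.solve()\n    return prob.status == 'optimal'\n\\end{verbatim}\nBy Theorem \\ref{thm:dualcone}, such a test is equivalent to checking that the polynomial with coefficients $\\m{\\gamma}$ is nonnegative on $\\mathcal{I}$.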
\\section{Application in the Theory of Moments} \\label{sec:moment}\n\n\\subsection{Problem Statement}\nFor a given sequence $t_j$, $\\abs{j}\\leq N-1$ and a given domain $F$, a truncated moment problem entails determining whether there exists a positive Borel measure $\\mu$ on $F$ such that \\cite{akhiezer1965classical}\n\\equ{t_j = \\int_F z^j \\text{d} \\mu\\sbra{z}, \\quad \\abs{j}\\leq N-1. \\label{eq:tj_moment}}\nThe problem is further referred to as a truncated $K$-moment problem if $\\mu$ is constrained to be supported on a semialgebraic set $K\\subset F$, i.e., \\cite{schmudgen1991thek}\n\\equ{\\supp\\sbra{\\mu} \\subset K. \\label{eq:supp_moment}}\nA measure $\\mu$ satisfying \\eqref{eq:tj_moment} is a representing measure for $\\m{t}$; $\\mu$ is a $K$-representing measure if it satisfies \\eqref{eq:tj_moment} and \\eqref{eq:supp_moment}.\n\nThe truncated moment and $K$-moment problems have been solved when $F$ is the real or the complex domain (note that the complex moment problem is defined slightly differently from \\eqref{eq:tj_moment}) \\cite{curto1991recursiveness,curto2000truncated,lasserre2009moments}. The truncated moment problem is also solved when $F$ is the unit circle, known as the truncated trigonometric moment problem \\cite{grenander1958toeplitz,curto1991recursiveness}. In fact, the solution is given by invoking the Vandermonde decomposition of Toeplitz matrices: A representing measure $\\mu$ exists if and only if the Toeplitz matrix $\\m{T}$ formed using $\\m{t}$ admits a Vandermonde decomposition, or equivalently, $\\m{T}\\geq \\m{0}$ by Theorem \\ref{thm:VD}. To the best of our knowledge, however, the truncated trigonometric $K$-moment problem is still open. This section is devoted to a solution to this problem by applying the FS Vandermonde decomposition.\n\nNote that a semialgebraic set $K$ on the unit circle $\\mathbb{T}$ can be identified with the union of finitely many disjoint subintervals $\\mbra{f_{Ll}, f_{Hl}} \\subset \\mathbb{T}$, $l=1,\\dots,J$. Therefore, the moment problem of interest can be restated as follows. For a given sequence $t_j$, $\\abs{j}\\leq N-1$, the truncated trigonometric $K$-moment problem entails determining whether there exists a $K$-representing measure $\\mu$ on $\\mathbb{T}$ satisfying\n{\\setlength\\arraycolsep{2pt}\\equa{t_j\n&=& \\int_{\\mathbb{T}} e^{-i2\\pi jf} \\text{d} \\mu\\sbra{f}, \\quad \\abs{j}\\leq N-1, \\label{eq:tj_moment1} \\\\ \\supp\\sbra{\\mu}\n&\\subset& K=\\bigcup_{l=1}^J \\mbra{f_{Ll}, f_{Hl}}\\subset \\mathbb{T}. \\label{eq:supp_moment1}} }\n\n\\subsection{Proposed Solution}\nLet $\\m{T}$ be the $N\\times N$ Toeplitz matrix formed using the moment sequence $t_j$, $\\abs{j}\\leq N-1$. Suppose that an $r$-atomic $K$-representing measure $\\mu$ for $\\m{t}$ exists that satisfies \\eqref{eq:tj_moment1} and \\eqref{eq:supp_moment1}. It follows from \\eqref{eq:supp_moment1} that\n\\equ{\\mu\\sbra{f} = \\sum_{k=1}^r p_k \\delta_{f_k}, \\quad f_k \\in K, \\label{eq:mut}}\nwhere $\\delta_{f}$ is the Dirac delta function and $p_k>0$ denotes the density at $f_k$. Inserting \\eqref{eq:mut} into \\eqref{eq:tj_moment1}, we have that\n\\equ{t_j= \\sum_{k=1}^r p_k e^{-i2\\pi jf_k}, \\quad \\abs{j}\\leq N-1,\\; f_k\\in K.}\nIt follows that\n\\equ{\\m{T} = \\sum_{k=1}^r p_k \\m{a}\\sbra{f_k}\\m{a}^H\\sbra{f_k}, \\quad f_k\\in K.}\nThis means that $\\m{T}$ admits an $r$-atomic FS Vandermonde decomposition on $K$. It is easy to show that the above arguments also hold conversely. So we conclude the following result.\n\\begin{lem} An $r$-atomic $K$-representing measure $\\mu$ for $\\m{t}$ exists if and only if $\\m{T}$ admits an $r$-atomic FS Vandermonde decomposition on $K$. \\label{lem:moment_VD}\n\\end{lem}\n\nWe next provide explicit conditions on $\\m{T}$ by applying Theorem \\ref{thm:BCVD}.
In the case when $K$ is a single interval, the following theorem is a direct consequence of combining Lemma \\ref{lem:moment_VD} and Theorem \\ref{thm:BCVD}.\n\n\\begin{thm} Given $K=\\mbra{f_L, f_H}$, an $r$-atomic $K$-representing measure $\\mu$ for $\\m{t}$ exists if and only if \\eqref{eq:Tupsd} and \\eqref{eq:Tunew} hold, where $r=\\rank\\sbra{\\m{T}}$, and $g$ is defined by \\eqref{eq:gz1}-\\eqref{eq:r1}. Moreover, $\\mu$ can be found by computing the FS Vandermonde decomposition of $\\m{T}$ on $K$, and it is unique if $\\m{T}$ or $\\m{T}_g$ is rank-deficient. \\label{thm:momentsolution1}\n\\end{thm}\n\nIn the multiple frequency band case in which $K=\\bigcup_{l=1}^J \\mbra{f_{Ll}, f_{Hl}}$, corresponding to Corollary \\ref{cor:BCVDM}, we have the following corollary of Theorem \\ref{thm:momentsolution1}. The proof is trivial and is omitted.\n\\begin{cor} Given $K = \\bigcup_{l=1}^J \\mbra{f_{Ll}, f_{Hl}}$, a $K$-representing measure $\\mu$ for $\\m{t}$ exists if and only if\nthere exist sequences $\\m{t}_l$, $l=1,\\dots,J$ satisfying \\eqref{eq:sumTleqT}-\\eqref{eq:Tunewl}. \\label{cor:momentsolutionM}\n\\end{cor}\n\nCorollary \\ref{cor:momentsolutionM} provides a numerical approach to finding a $K$-representing measure, if it exists, by solving the following feasibility problem, which is an SDP:\n\\equ{\\begin{split}\n&\\text{Find}\\; \\m{t}_l, \\quad l=1,\\dots,J,\\\\\n&\\text{subject to \\eqref{eq:sumTleqT}-\\eqref{eq:Tunewl}}. \\end{split} \\label{eq:feasprob}}\nIf a solution, denoted by $\\m{t}_l^*$, $l=1,\\dots,J$, can be found, then we can find representing measures for $\\m{t}_l^*$ on each corresponding interval by Theorem \\ref{thm:momentsolution1}, whose sum finally forms a $K$-representing measure for $\\m{t}$. If \\eqref{eq:feasprob} is infeasible, then no $K$-representing measure for $\\m{t}$ exists.
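For concreteness, \\eqref{eq:feasprob} can be prototyped in a few lines with CVXPY; the sketch below uses our own naming and represents each $\\m{t}_l$ by a complex vector with $t_{l,j}$ stored at index $j+N-1$:\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef psd_toeplitz_constraints(t, N, filt=None):\n    # Constrain a Hermitian PSD variable to equal T(t), or T_g(t)\n    # when the filter coefficients (r1, r0) of g are given.\n    if filt is None:\n        M, coef = N, {0: 1.0}\n    else:\n        M, coef = N - 1, {1: filt[0], 0: filt[1], -1: np.conj(filt[0])}\n    H = cp.Variable((M, M), hermitian=True)\n    cons = [H >> 0]\n    for m in range(M):\n        for n in range(M):\n            cons.append(H[m, n] == sum(c * t[n - m + j + N - 1]\n                                       for j, c in coef.items()))\n    return cons\n\ndef find_measure_split(t, bands):\n    # Feasibility problem (eq:feasprob): split t into per-band t_l.\n    N = (len(t) + 1) // 2\n    ts, cons = [], []\n    for fL, fH in bands:\n        s = np.sign(fH - fL)\n        r1 = np.exp(1j * np.pi * (fL + fH)) * s\n        r0 = -2.0 * np.cos(np.pi * (fH - fL)) * s\n        tl = cp.Variable(2 * N - 1, complex=True)\n        cons += psd_toeplitz_constraints(tl, N)\n        cons += psd_toeplitz_constraints(tl, N, (r1, r0))\n        ts.append(tl)\n    cons.append(sum(ts) == t)\n    prob = cp.Problem(cp.Minimize(0), cons)\n    prob.solve()\n    return [tl.value for tl in ts] if prob.status == 'optimal' else None\n\\end{verbatim}\nEach recovered $\\m{t}_l^*$ is then decomposed on its own interval as in Theorem \\ref{thm:momentsolution1} to produce the atoms of the measure.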
\\begin{rem} In the case when $\\m{T}$ has full rank, the representing measure $\\mu$ might not be unique, if it exists. By solving \\eqref{eq:feasprob}, we actually find one among them. In this case the obtained measure $\\mu$ may consist of as many as $NJ$ atoms. To possibly reduce the number of atoms (i.e., to simplify the obtained measure), we can find the solution minimizing a certain convex function of $\\m{t}_l$, $l=1,\\dots,J$, e.g., $\\pm\\tr\\sbra{\\m{T}\\sbra{\\m{t}_1}}$. By doing so, it is expected that certain $\\m{T}\\sbra{\\m{t}_l}$'s are rank-deficient and thus result in a small number of atoms. \\label{rem:simplifymeasure}\n\\end{rem}\n\nFinally, it is interesting to note that the dual problem of \\eqref{eq:feasprob} can be easily obtained using the result in Section \\ref{sec:duality}. Using the cone notation, \\eqref{eq:feasprob} can be written as:\n\\equ{\\begin{split}\n&\\text{Find}\\; \\m{t}_{R,l}\\in \\mathcal{K}_{\\text{VDM},l}, \\quad l=1,\\dots,J,\\\\\n&\\text{subject to } \\sum_{l=1}^J \\m{t}_{R,l} = \\m{t}_R, \\end{split} \\label{eq:feasprob2}}\nwhere $\\m{t}_{R,l}\\coloneqq \\mbra{\\m{t}_l}_R$, and $\\mathcal{K}_{\\text{VDM},l}$ denotes $\\mathcal{K}_{\\text{VDM}}$ in \\eqref{eq:KVDM} with $g$ being $g_l$. The Lagrangian function is given by:\n\\equ{\\begin{split}\\mathcal{L}\\sbra{\\m{t}_{R,1},\\dots,\\m{t}_{R,J}, \\m{\\alpha}}\n&= \\sbra{\\sum_{l=1}^J \\m{t}_{R,l} - \\m{t}_R}^T\\m{\\alpha} \\\\\n&= \\sum_{l=1}^J \\m{t}_{R,l}^T\\m{\\alpha} - \\m{t}_R^T\\m{\\alpha}, \\end{split}}\nwhere $\\m{t}_{R,l}\\in \\mathcal{K}_{\\text{VDM},l}$, $l=1,\\dots,J$, and $\\m{\\alpha}$ is the Lagrange multiplier. Using the knowledge of the dual cone, we have that\n\\equ{\\min_{\\m{t}_{R,l} \\in \\mathcal{K}_{\\text{VDM},l}} \\mathcal{L} = \\left\\{ \\begin{array}{ll} -\\m{t}_R^T\\m{\\alpha}, & \\text{if } \\m{\\alpha}\\in \\mathcal{K}_{\\text{VDM},l}^*, l=1,\\dots,J;\\\\ -\\infty, & \\text{otherwise.} \\end{array} \\right. }\nTherefore, the dual problem is given by:\n\\equ{\\max_{\\m{\\alpha}} \\m{t}_R^T\\m{\\alpha}, \\text{ subject to } \\m{\\alpha}\\in \\bigcap_{l=1}^J \\mathcal{K}_{\\text{PolM},l}, \\label{eq:dualprob}}\nwhere we have used the identity $\\mathcal{K}_{\\text{VDM},l}^*=\\mathcal{K}_{\\text{PolM},l}$ given by Theorem \\ref{thm:dualcone}. Note that \\eqref{eq:dualprob} can be cast as an SDP following from \\eqref{KPolM}.\n\n\\begin{exa} Suppose that the moment sequence $t_j$, $\\abs{j}\\leq N-1$ is generated from its $3$-atomic representing measure\n\\equ{\\mu_1 = 0.7\\delta_{0.1} + 2\\delta_{0.25} + \\delta_{0.7},}\nwhich is plotted in Fig. \\ref{Fig:moment} together with $\\mu_j$, $j=2,\\dots,5$ that will be solved for.\n\\begin{itemize}\n \\item[1)] In the case of $N\\geq 4$, we can form the Toeplitz matrix $\\m{T}$ using $\\m{t}$, for which $\\rank\\sbra{\\m{T}} = 3 < N$. By Theorem \\ref{thm:VD}, $\\mu_1$ is the unique representing measure for $\\m{t}$.\n \\item[2)]Suppose that $N=3$ and $K = \\mbra{0.05, 0.75}$. Since $K$ includes all the frequencies in $\\mu_1$, one representing measure on $K$ has already been given by $\\mu_1$. By the existence of the representing measure, it follows from Theorem \\ref{thm:momentsolution1} that $\\m{T}$ and $\\m{T}_g$ are both PSD. Applying the proposed FS Vandermonde decomposition algorithm to $\\m{T}$, the following $3$-atomic $K$-representing measure is obtained:\n \\equ{\\mu_2 =0.4630\\delta_{0.05} + 2.2485\\delta_{0.2383} + 0.9885\\delta_{0.6927}, \\notag}\n which is somewhat similar to $\\mu_1$. Note that the frequency $0.05$ in $\\mu_2$ is nothing but the starting point of $K$, which has been deliberately chosen in the presented decomposition algorithm.\n \\item[3)]Suppose that $N=3$ and $K = \\mbra{0.05, 0.3}\\cup \\mbra{0.65,0.75}$. One representing measure for $\\m{t}$ is also given by $\\mu_1$. To possibly find another one, we solve \\eqref{eq:feasprob} using SDPT3 \\cite{toh1999sdpt3} in Matlab and a solution is successfully found. Applying the FS Vandermonde decomposition to the solution, a $6$-atomic $K$-representing measure is given by:\n \\equ{\\begin{split}\\mu_3\n &= 0.1825\\delta_{0.05} + 1.2284\\delta_{0.1764}\\\\\n &\\quad+ 1.2713\\delta_{0.2722}+ 0.1546\\delta_{0.65} \\\\\n &\\quad + 0.5088\\delta_{0.6917} + 0.3545\\delta_{0.7436}. \\end{split} \\notag}\n In $\\mu_3$, $0.05$ and $0.65$ are the starting points of the two intervals of $K$. The first three frequencies are located on the first interval and the other three frequencies are on the other interval.\n \\item[4)] Suppose that $N=3$. We want to check whether a representing measure exists on $K = \\mbra{0.2, 0.3} \\cup \\mbra{0.6, 0.8}$. To do so, we also solve \\eqref{eq:feasprob} and a solution is successfully found. This means that a $K$-representing measure exists for $\\m{t}$ by Corollary \\ref{cor:momentsolutionM}. Applying the FS Vandermonde decomposition, a $6$-atomic $K$-representing measure is given by:\n \\equ{\\begin{split}\\mu_4\n &= 1.9614\\delta_{0.2} + 0.1296\\delta_{0.2290}\\\\\n &\\quad+ 0.4456\\delta_{0.2891}+ 0.2437\\delta_{0.6} \\\\\n &\\quad + 0.3637\\delta_{0.6467} + 0.5561\\delta_{0.7962}.
\\end{split} }\n \\item[5)] With the same settings as in 4), instead of solving \\eqref{eq:feasprob}, we find the solution maximizing $\\tr\\sbra{\\m{T}\\sbra{\\m{t}_1}}$ among all feasible ones, following Remark \\ref{rem:simplifymeasure}. The obtained solution $\\sbra{\\m{t}_1^*, \\m{t}_2^*}$ satisfies that $\\rank\\sbra{\\m{T}\\sbra{\\m{t}_1^*}}=\\rank\\sbra{\\m{T}\\sbra{\\m{t}_2^*}}=2<N$, and hence a $4$-atomic $K$-representing measure $\\mu_5$, simpler than $\\mu_4$, is obtained by applying the FS Vandermonde decomposition.\n\\end{itemize}\n\\end{exa}\n\n\\section{Application in Line Spectral Estimation}\n\n\\subsection{Problem Statement and FS Atomic Norm}\nConsider a line spectral signal $\\m{y}\\in\\mathbb{C}^N$ that is a superposition of a few sinusoids, of which the entries on the index set $\\Omega\\subset\\lbra{1,\\dots,N}$, denoted by $\\m{y}_{\\Omega}^o$, are observed. Suppose that the frequencies are known a priori to lie in an interval $\\mathcal{I}\\subset\\mathbb{T}$. Define the set of atoms\n\\equ{\\mathcal{A}\\sbra{\\mathcal{I}} \\coloneqq \\lbra{\\m{a}\\sbra{f,\\phi} = \\m{a}\\sbra{f}e^{i\\phi}:\\; f\\in\\mathcal{I},\\; \\phi\\in\\left[0,2\\pi\\right)}. \\label{eq:A_I}}\nThe FS atomic norm of $\\m{y}$ on $\\mathcal{I}$ is then defined as\n\\equ{\\begin{split}\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}} \\coloneqq\n&\\inf_{c_k>0,\\m{a}_k\\in\\mathcal{A}\\sbra{\\mathcal{I}}}\\lbra{\\sum_k c_k:\\; \\m{y} = \\sum_k c_k \\m{a}_k}\\\\ =\n&\\inf_{f_k\\in\\mathcal{I},s_k}\\lbra{\\sum_k \\abs{s_k}:\\; \\m{y} = \\sum_k \\m{a}\\sbra{f_k}s_k}. \\end{split} \\label{eq:atomn_I}}\nThe following FS atomic norm minimization (FS-ANM) problem was proposed in \\cite{mishra2015spectral}:\n\\equ{\\min_{\\m{y}} \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}, \\text{ subject to } \\m{y}_{\\Omega} = \\m{y}_{\\Omega}^o. \\label{eq:atomnmin_I}}\nThis means that, among all candidates $\\m{y}$ which are consistent with the acquired samples $\\m{y}_{\\Omega}^o$, we find the one $\\m{y}^*$ with the minimum FS atomic norm as the signal estimate, and the frequencies composing $\\m{y}^*$ form the frequency estimates. Note that the noisy case can be dealt with similarly following a standard routine (by replacing the equality constraint in \\eqref{eq:atomnmin_I} by $\\twon{\\m{y}_{\\Omega} - \\m{y}_{\\Omega}^o}\\leq \\eta$ given an upper bound $\\eta$ on the noise energy).\nNote also that \\eqref{eq:A_I}-\\eqref{eq:atomnmin_I} degenerate to the existing standard forms in the case of $\\mathcal{I}=\\mathbb{T}$.\n\nSince the FS atomic norm defined in \\eqref{eq:atomn_I} is inherently a semi-infinite program (SIP), a finite-dimensional formulation of it is required to practically solve \\eqref{eq:atomnmin_I}; this is dealt with in the ensuing subsection by applying the FS Vandermonde decomposition.\n\n\n\\subsection{SDP Formulation of FS Atomic Norm}\n\nBy applying the FS Vandermonde decomposition, the FS atomic norm is cast as an SDP in the following theorem.\n\\begin{thm} It holds that\n\\equ{\\begin{split} \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}\n=& \\min_{x, \\m{t}} \\frac{1}{2}x + \\frac{1}{2}t_0,\\\\\n& \\text{ subject to } \\begin{bmatrix} x & \\m{y}^H \\\\ \\m{y} & \\m{T} \\end{bmatrix}\\geq \\m{0} \\text{ and } \\m{T}_g\\geq \\m{0}, \\end{split} \\label{eq:atomnDsdp}}\nwhere $g$ is as defined previously.\n \\label{thm:atomnD}\n\\end{thm}\n\n\\begin{proof} Let $F^*$ be the optimal objective value of \\eqref{eq:atomnDsdp}. We need to show that $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}} = F^*$.\n\nWe first show that $F^* \\leq \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$. To do so, let $\\m{y} = \\sum_k c_k\\m{a}\\sbra{f_k,\\phi_k}$ be an FS atomic decomposition of $\\m{y}$ on $\\mathcal{I}$. Then let $\\m{t}$ be such that $\\m{T}(\\m{t}) = \\sum_k c_k \\m{a}\\sbra{f_k}\\m{a}^H\\sbra{f_k}$ and $x = \\sum_k c_k$. By Theorem \\ref{thm:BCVD}, we have that $\\m{T}_g\\geq\\m{0}$. Moreover, it holds that\n\\equ{\\begin{bmatrix} x & \\m{y}^H \\\\ \\m{y} & \\m{T} \\end{bmatrix} = \\sum_k c_k\\begin{bmatrix} \\overline{\\phi}_k \\\\ \\m{a}\\sbra{f_k} \\end{bmatrix} \\begin{bmatrix} \\overline{\\phi}_k \\\\ \\m{a}\\sbra{f_k} \\end{bmatrix}^H \\geq \\m{0}.}\nTherefore, $x$ and $\\m{t}$ constructed above form a feasible solution to the problem in \\eqref{eq:atomnDsdp}, at which the objective value equals\n\\equ{\\frac{1}{2}x + \\frac{1}{2}t_0 = \\sum_k c_k.}\nIt follows that $F^* \\leq \\sum_k c_k$.
Since the inequality holds for any FS atomic decomposition of $\\m{y}$ on $\\mathcal{I}$, we have that $F^* \\leq \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$ by the definition of $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$.\n\nOn the other hand, suppose that $\\sbra{x^*, \\m{t}^*}$ is an optimal solution to the problem in \\eqref{eq:atomnDsdp}. By the fact that $\\m{T}(\\m{t}^*)\\geq \\m{0}$ and $\\m{T}_g(\\m{t}^*)\\geq \\m{0}$ and applying Theorem \\ref{thm:BCVD}, we have that $\\m{T}\\sbra{\\m{t}^*}$ has an FS Vandermonde decomposition on $\\mathcal{I}$ as in \\eqref{eq:VD} with $\\sbra{r, p_k, f_k}$ denoted by $\\sbra{r^*, p_k^*, f_k^*}$.\nBy the fact that $\\begin{bmatrix} x^* & \\m{y}^H \\\\ \\m{y} & \\m{T}\\sbra{\\m{t}^*} \\end{bmatrix}\\geq\\m{0}$, we have that $\\m{y}$ lies in the range space of $\\m{T}\\sbra{\\m{t}^*}$ and thus has the following FS atomic decomposition:\n\\equ{\\m{y} = \\sum_{k=1}^{r^*} c_k^*\\m{a}\\sbra{f_k^*,\\phi_k^*}, \\quad f_k^*\\in\\mathcal{I}. \\label{eq:yatomdec}}\nMoreover, it holds that\n{\\setlength\\arraycolsep{2pt}\\equa{x^*\n&\\geq& \\m{y}^H \\mbra{\\m{T}\\sbra{\\m{t}^*}}^{\\dag}\\m{y} = \\sum_{k=1}^{r^*} \\frac{c_k^{*2}}{p_k^*},\\\\ t_0^*\n&=& \\sum_{k=1}^{r^*} p_k^*.\n}}It therefore follows that\n\\equ{\\begin{split}F^*\n&= \\frac{1}{2}x^* + \\frac{1}{2}t_0^* \\\\\n&\\geq \\frac{1}{2}\\sum_k \\frac{c_k^{*2}}{p_k^*} + \\frac{1}{2}\\sum_k p_k^* \\\\\n&\\geq \\sum_{k} c_k^*\\\\\n&\\geq \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}. \\end{split} \\label{eq:Fleqsum} }\nCombining \\eqref{eq:Fleqsum} and the inequality $F^* \\leq \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$ shown previously, we conclude that $F^* = \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$ and complete the proof. Lastly, it is worth noting that by \\eqref{eq:Fleqsum} it must hold that $p_k^* = c_k^*$ and $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}} = \\sum_{k} c_k^*$. Therefore, the FS atomic decomposition in \\eqref{eq:yatomdec} must achieve the FS atomic norm.\n\\end{proof}\n\n\\begin{rem} Note that the SDP formulation of the FS atomic norm presented in Theorem \\ref{thm:atomnD} can be easily extended to the multiple frequency band case by applying Corollary \\ref{cor:BCVDM}, to be specific, by replacing the constraints in \\eqref{eq:atomnDsdp} resulting from \\eqref{eq:Tupsd} and \\eqref{eq:Tunew} by those in \\eqref{eq:sumTleqT}-\\eqref{eq:Tunewl}. The proof of Theorem \\ref{thm:atomnD} can still be applied in this case with minor modifications. \\label{rem:SDP_MB}\n\\end{rem}\n\nIt immediately follows from Theorem \\ref{thm:atomnD} that \\eqref{eq:atomnmin_I} can be written as the following SDP:\n\\equ{\\begin{split}\n& \\min_{\\m{y}, x, \\m{t}} \\frac{1}{2}x + \\frac{1}{2}t_0,\\\\\n& \\text{ subject to } \\begin{bmatrix} x & \\m{y}^H \\\\ \\m{y} & \\m{T} \\end{bmatrix}\\geq \\m{0}, \\m{T}_g\\geq \\m{0} \\text{ and } \\m{y}_{\\Omega} = \\m{y}_{\\Omega}^o. \\end{split} \\label{eq:CANM_sdp}}\nNote that \\eqref{eq:CANM_sdp} can be solved using off-the-shelf SDP solvers such as SDPT3. Given its solution, the frequencies can be retrieved from the FS Vandermonde decomposition of $\\m{T}$. Moreover, as in the standard atomic norm method, the Toeplitz matrix $\\m{T}$ in \\eqref{eq:CANM_sdp} can be interpreted as the ``data covariance matrix'' \\cite{yang2015gridless,yang2016vandermonde}.
By solving \\eqref{eq:CANM_sdp} we actually fit the data covariance matrix $\\m{T}$ by exploiting its structures, e.g., PSDness (the first constraint), Toeplitz structure (explicitly imposed) and low rank ($t_0$ in the objective is proportional to the nuclear or trace norm of $\\m{T}$), and its connection to the acquired data $\\m{y}_{\\Omega}^o$ (the first and the last constraints). But different from the standard atomic norm method, more precise knowledge of $\\m{T}$ is exploited in the FS atomic norm method by additionally including the constraint $\\m{T}_g\\geq \\m{0}$.
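As an illustration, \\eqref{eq:CANM_sdp} can be prototyped with CVXPY as follows (a sketch under our naming, in which the bordered matrix and $\\m{T}$ are held in a single Hermitian PSD variable):\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef fs_anm(y_obs, obs_idx, N, fL, fH):\n    # FS-ANM SDP (eq:CANM_sdp) on the interval I = [fL, fH].\n    s = np.sign(fH - fL)\n    r1 = np.exp(1j * np.pi * (fL + fH)) * s\n    r0 = -2.0 * np.cos(np.pi * (fH - fL)) * s\n    t = cp.Variable(2 * N - 1, complex=True)          # t_j at t[j + N - 1]\n    B = cp.Variable((N + 1, N + 1), hermitian=True)   # [[x, y^H], [y, T]]\n    Tg = cp.Variable((N - 1, N - 1), hermitian=True)  # T_g\n    y = B[1:, 0]\n    cons = [B >> 0, Tg >> 0, y[obs_idx] == y_obs]\n    for m in range(N):\n        for n in range(N):\n            cons.append(B[m + 1, n + 1] == t[n - m + N - 1])  # T Toeplitz\n    for m in range(N - 1):\n        for n in range(N - 1):\n            cons.append(Tg[m, n] == r1 * t[n - m + N]\n                        + r0 * t[n - m + N - 1]\n                        + np.conj(r1) * t[n - m + N - 2])\n    obj = 0.5 * cp.real(B[0, 0]) + 0.5 * cp.real(t[N - 1])\n    prob = cp.Problem(cp.Minimize(obj), cons)\n    prob.solve()\n    return y.value, B.value[1:, 1:]\n\\end{verbatim}\nThe frequencies are then retrieved by computing the FS Vandermonde decomposition of the returned $\\m{T}$, as discussed above.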
Before proceeding to the next subsection, we note that \\eqref{eq:atomnmin_I} was solved by studying its dual in \\cite{mishra2015spectral}. In particular, the dual of \\eqref{eq:atomnmin_I} is given by:\n\\equ{\\max_{\\m{z}} \\Re \\sbra{\\m{z}_{\\Omega}^H\\m{y}_{\\Omega}^o}, \\text{ subject to } \\norm{\\m{z}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}^*\\leq 1 \\text{ and } \\m{z}_{\\Omega^c}=\\m{0}, \\label{eq:dualprb}}\nwhere $\\Omega^c$ denotes the complement of $\\Omega$ and $\\norm{\\m{z}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}^*$ is the dual FS atomic norm. By the fact that\n\\equ{\\norm{\\m{z}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}^* = \\sup_{\\m{a}\\in\\mathcal{A}\\sbra{\\mathcal{I}}} \\Re\\sbra{\\m{a}^H\\m{z}} = \\sup_{f\\in\\mathcal{I}} \\abs{\\m{a}^H\\sbra{f}\\m{z}}, }\nthe constraint $\\norm{\\m{z}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}^*\\leq 1$ can be cast as the following:\n\\equ{\\abs{\\m{a}^H\\sbra{f}\\m{z}}\\leq 1 \\text{ for any } f\\in\\mathcal{I}, \\label{eq:absazleq1}}\nwhere\n\\equ{q(f)\\coloneqq \\m{a}^H\\sbra{f}\\m{z} \\label{eq:qf}}\nis referred to as the dual polynomial \\cite{candes2013towards,mishra2015spectral}.\nIt follows that $1-\\abs{q(f)}^2$ is a Hermitian trigonometric polynomial nonnegative on $\\mathcal{I}$ and, by the PRL, admits a Gram matrix parametrization as in \\eqref{KPolM}. With some further derivations that we omit, it can be shown that \\eqref{eq:absazleq1} holds if and only if the constant polynomial $1$ (the right-hand side of the inequality in \\eqref{eq:absazleq1}) has the following Gram matrix parametrization:\n\\equ{\\tr\\sbra{\\m{\\Theta}_j \\m{Q}_0} + \\tr\\mbra{ \\m{\\Theta}_{gj} \\m{Q}_1} = \\left\\{ \\begin{array}{ll} 1, & \\text{if } j=0,\\\\ 0, & \\text{otherwise}, \\end{array}\\right. \\label{eq:sdpbrl1}}\nwhere $\\m{Q}_0$ and $\\m{Q}_1$ satisfy\n\\equ{\\begin{bmatrix}1 & \\m{z}^H \\\\ \\m{z} & \\m{Q}_0 \\end{bmatrix} \\geq \\m{0} \\text{ and } \\m{Q}_1\\geq \\m{0}. \\label{eq:sdpbrl2}}\nIn fact, the characterization of \\eqref{eq:absazleq1} using \\eqref{eq:sdpbrl1} and \\eqref{eq:sdpbrl2} is nothing but the result of the bounded real lemma (BRL) for trigonometric polynomials \\cite{davidson2002linear,dumitrescu2007positive}, which can be viewed as a sharper counterpart of the PRL when dealing with bounded polynomials as in \\eqref{eq:absazleq1}. Finally, \\eqref{eq:dualprb} is cast as the following SDP:\n\\equ{\\max_{\\m{z}, \\m{Q}_0,\\m{Q}_1} \\Re \\sbra{\\m{z}_{\\Omega}^H\\m{y}_{\\Omega}^o}, \\text{ subject to } \\eqref{eq:sdpbrl1}, \\eqref{eq:sdpbrl2} \\text{ and } \\m{z}_{\\Omega^c}=\\m{0}. \\label{eq:dualSDP}}\nUnsurprisingly, it follows from a standard Lagrangian analysis that \\eqref{eq:dualSDP} is the dual of \\eqref{eq:CANM_sdp}; the analysis uses \\eqref{eq:TinTheta} and \\eqref{eq:TginThetag} and is left to interested readers. Since strong duality holds \\cite{boyd2004convex}, the solution to \\eqref{eq:dualSDP} can be obtained for free when solving \\eqref{eq:CANM_sdp} using a primal-dual algorithm, and vice versa.\n\nIn summary, the FS Vandermonde decomposition can be applied to provide a primal SDP formulation of \\eqref{eq:atomnmin_I}, while the trigonometric polynomial based technique in \\cite{mishra2015spectral} provides a dual SDP formulation. Moreover, the FS Vandermonde decomposition also provides a new method for frequency retrieval. In fact, it is found that the new method results in higher numerical stability compared to the root-finding method in \\cite{candes2013towards,mishra2015spectral}. This can be explained as follows. By using the FS Vandermonde decomposition, we can always determine the number of frequencies first by computing $\\rank\\sbra{\\m{T}}$, which can effectively reduce the problem dimension and improve stability. In contrast, the root-finding method requires finding all of the up to $2N-2$ roots of the polynomial $1-\\abs{q(f)}^2$, among which the appropriate ones (those with unit modulus) are then selected to produce the frequencies.\n\n\n\\subsection{Computational Complexity}\nWe next analyze the computational complexity of the presented FS atomic norm method, to be specific, the complexity of solving the SDP in \\eqref{eq:CANM_sdp}. To do so, we consider the general multiple band case in which, according to Remark \\ref{rem:SDP_MB}, \\eqref{eq:CANM_sdp} becomes:\n\\equ{\\begin{split}\n& \\min_{\\m{y}, x, \\m{t}_l} \\frac{1}{2}x + \\frac{1}{2}\\sum_{l=1}^J t_{l0},\\\\\n& \\text{ subject to } \\begin{bmatrix} x & \\m{y}^H \\\\ \\m{y} & \\sum_{l=1}^J \\m{T}(\\m{t}_l) \\end{bmatrix}\\geq \\m{0}, \\\\\n&\\phantom{\\text{ subject to }} \\m{T}(\\m{t}_l) \\geq \\m{0}, \\; \\m{T}_g(\\m{t}_l)\\geq \\m{0}, l=1,\\dots,J,\\\\\n&\\phantom{\\text{ subject to }} \\m{y}_{\\Omega} = \\m{y}_{\\Omega}^o. \\end{split} \\label{eq:CANM_sdp_MB}}\nEvidently, the SDP in \\eqref{eq:CANM_sdp_MB} has $n=O(JN)$ free variables and $m=2J+1$ LMIs, where the $i$th LMI has size $k_i\\times k_i$ with $k_i=O(N)$. It follows from \\cite{ben2013lectures} that a primal-dual algorithm for \\eqref{eq:CANM_sdp_MB} has a computational complexity on the order of\n\\equ{\\sbra{1+\\sum_{i=1}^m k_i}^{\\frac{1}{2}} n\\sbra{n^2+n\\sum_{i=1}^m k_i^2 + \\sum_{i=1}^m k_i^3} = O\\sbra{J^{3.5}N^{4.5}}. \\label{eq:complexity}}\nBy arguments similar to those above, the standard atomic norm method in the absence of prior knowledge has a computational complexity of $O\\sbra{N^{4.5}}$. Together with \\eqref{eq:complexity}, this indicates that, for a fixed number of intervals $J$, the presented FS atomic norm method has a complexity higher than that of the standard atomic norm method by a constant factor, and that the factor increases with $J$.\n\n\\subsection{Numerical Simulation}\nWe provide a simple illustrative example below to demonstrate the advantage of using the prior knowledge for frequency estimation.\n\n\\begin{exa} Consider a line spectrum composed of $K=3$ frequencies $\\m{f}=[0.22, 0.23, 0.28]^T$ as shown in Fig. \\ref{Fig:LSE}. To estimate\/recover the spectrum, $M=16$ randomly located noiseless samples are acquired among $N=64$ uniform samples. The standard ANM and the FS-ANM methods are implemented using SDPT3 to estimate the line spectrum. In FS-ANM, the prior knowledge that the frequencies lie in $\\mathcal{I}=\\mbra{0.2, 0.3}$ is used. The estimation results are presented in Fig. \\ref{Fig:LSE}.
It can be seen that FS-ANM exactly recovers the spectrum but ANM does not. For both ANM and FS-ANM, the recovered frequencies retrieved using the Vandermonde decomposition match the locations at which the dual polynomials have unit magnitude. For FS-ANM, the frequencies computed using the FS Vandermonde decomposition have recovery errors on the order of $10^{-10}$, while those computed using the root-finding method have errors on the order of $10^{-6}$.\n\\end{exa}\n\n\\begin{figure}\n\\centering\n \\subfigure[ANM]{\n \\label{Fig:ANM}\n \\includegraphics[width=3in]{Fig_AppLSE_ANM.pdf}}%\n \\subfigure[FS-ANM]{\n \\label{Fig:CANM}\n \\includegraphics[width=3in]{Fig_AppLSE_CANM.pdf}}\n\\centering\n\\caption{Line spectral estimation results of (a) ANM and (b) FS-ANM.} \\label{Fig:LSE}\n\\end{figure}\n\nNote that the presented method can deal with noise with minor modifications, as shown in \\cite{mishra2015spectral}. In the noisy case, a simulation has been included in \\cite{mishra2015spectral} to compare the signal recovery errors of the atomic norm method in cases with and without the prior knowledge. It is shown that ``the prior information formulation yields a higher stability in presence of noise.'' Readers are referred to \\cite[Section VIII-B]{mishra2015spectral} for details.\n\n\n\n\n\n\\subsection{Extension to FS Atomic $\\ell_0$ Norm}\nIn this subsection, we provide an example in which the FS Vandermonde decomposition result is applicable but the theory of trigonometric polynomials is not. In particular, we study the FS atomic $\\ell_0$ norm defined by:\n\\equ{\\begin{split}\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0} \\coloneqq\n&\\inf_{c_k>0,\\m{a}_k\\in\\mathcal{A}\\sbra{\\mathcal{I}}}\\lbra{\\mathcal{K}:\\; \\m{y} = \\sum_{k=1}^{\\mathcal{K}} c_k \\m{a}_k}\\\\ =\n&\\inf_{f_k\\in\\mathcal{I},s_k}\\lbra{\\mathcal{K}:\\; \\m{y} = \\sum_{k=1}^{\\mathcal{K}} \\m{a}\\sbra{f_k}s_k}. \\end{split} \\label{eq:atom0n_I}}\n$\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}$ is of interest since it exploits sparsity to the greatest extent possible, while $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$ is in fact its convex relaxation. It has been widely demonstrated in the literature on compressed sensing that improved performance can usually be obtained by solving (or approximately solving) $\\ell_0$ norm based problems (see, e.g., \\cite{candes2008enhancing,andersson2014new,yang2016vandermonde}). More recently, a new trend in frequency estimation is to directly solve the $\\ell_0$ norm based formulations using nonconvex optimization techniques for low-rank matrix recovery \\cite{cho2016fast,cai2016fast}. To do so, the key is to formulate the frequency estimation problem in the continuous setting as a matrix rank minimization problem. In the context of the FS atomic $\\ell_0$ norm, the following result can be obtained by applying the FS Vandermonde decomposition.\n\n\\begin{thm} It holds that\n\\equ{\\begin{split} \\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}\n=& \\min_{x, \\m{t}} \\rank\\sbra{\\m{T}},\\\\\n& \\text{ subject to } \\begin{bmatrix} x & \\m{y}^H \\\\ \\m{y} & \\m{T} \\end{bmatrix}\\geq \\m{0} \\text{ and } \\m{T}_g\\geq \\m{0}, \\end{split} \\label{eq:atom0nDsdp}}\nwhere $g$ is as defined previously.\n \\label{thm:atom0nD}\n\\end{thm}\n\n\\begin{proof} The proof is similar to that of Theorem \\ref{thm:atomnD}.
At the first step, by applying the FS Vandermonde decomposition, we can construct a feasible solution, as in the proof of Theorem \\ref{thm:atomnD}, to the optimization problem in \\eqref{eq:atom0nDsdp}, which concludes that $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}\\leq r^*$, where $r^*$ denotes the optimal objective value of \\eqref{eq:atom0nDsdp}. At the second step, for any optimal solution that achieves the optimal value $r^*$, we can similarly obtain an $r^*$-atomic FS decomposition of $\\m{y}$, which implies that $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}\\geq r^*$. This completes the proof.\n\\end{proof}\n\nIt follows from Theorem \\ref{thm:atom0nD} that $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}$ can be cast as a rank minimization problem, while solving (or approximately solving) the resulting optimization problem is beyond the scope of this paper. It is worth noting that, since $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}},0}$ is nonconvex, a trigonometric polynomial based technique, as used for $\\norm{\\m{y}}_{\\mathcal{A}\\sbra{\\mathcal{I}}}$ in \\cite{mishra2015spectral}, cannot be applied in this case to provide a finite-dimensional formulation.\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nIn this paper, the FS Vandermonde decomposition of Toeplitz matrices on a given interval was studied. The new result generalizes the classical Vandermonde decomposition result. It was shown by duality to be connected to the theory of trigonometric polynomials. It was also applied to provide a solution to the classical truncated trigonometric $K$-moment problem and a primal SDP formulation of the recent FS atomic norm for line spectral estimation with prior knowledge.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nContinuous word representation has been a fundamental ingredient in many NLP tasks with the advent of simple and successful approaches such as Word2Vec \\cite{mikolov2013efficient} and GloVe \\cite{pennington2014glove}.\nAlthough these approaches have been shown to capture semantic and syntactic relationships between words effectively, they have some limitations.\nFirst, they are only available for words in a pre-defined vocabulary and are thus prone to the out-of-vocabulary (OOV) problem.\nSecond, they cannot utilize subword information at all because they regard the word as the basic unit.\nThese problems are magnified when applying word-based methods to agglutinative languages such as Korean, Japanese, Turkish, and Finnish.\nIn this work, we propose a new model that utilizes syllables as the basic components of word representation to alleviate these problems, especially for Korean.\nIn our experiments, we confirm that our model constructs word representations that capture semantic and syntactic relationships between words.\nWe also show that our model can handle the OOV problem and capture morphological information without dedicated analysis.\n\n\n\\section{Related Work}\n\nRecent works that utilize subword information to construct word representations can be largely divided into two families: models that use morphemes as components and models that take advantage of characters.\n\n\\vspace{2mm}\n\\noindent\\textbf{Morpheme-based representation models}\n\nA morpheme is the smallest unit of meaning in linguistics.\nTherefore, many studies consider morphemes when building word representations \\cite{luong2013better, botha2014compositional, 
cotterell2015morphological}.\n\n\\newcite{luong2013better} applies a recursive neural network over morpheme embeddings to obtain word embeddings.\nAlthough morpheme-based models are good at capturing semantics, one major drawback is that most of them require manually annotated data or an explicit morphological analyzer, which could introduce unintended errors.\nOur model does not require such preprocessing.\n\n\\begin{figure*}[!htbp]\n\t\\includegraphics[width=1.0\\textwidth,height=200pt]{model}\n\t\\caption{Overall architecture of our model. Each syllable is a $d$-dimensional vector. For a given word `\\begin{CJK}{UTF8}{mj}\uc548\ub155\ud558\uc138\uc694\\end{CJK}' (hello, \\emph{annyeonghaseyo}), we concatenate the vectors according to the syllable order in the word. After passing through the convolutional layer and the max pooling layer, the word representation is produced. All parameters are jointly trained by the Skip-gram scheme.\\label{fig:model}}\n\\end{figure*}\n\n\\vspace{2mm}\n\\noindent\\textbf{Character-based representation models}\n\nRecently, utilizing information from characters has become an active NLP research topic.\nOne way to extract knowledge from a sequence of characters is using character n-grams \\cite{wieting2016charagram, bojanowski2016enriching}.\n\n\\newcite{bojanowski2016enriching} suggests an approach based on the Skip-gram model \\cite{mikolov2013efficient}, where the model sums character n-gram vectors to represent a word.\nOn the other hand, there are some approaches \\cite{dos2014deep, ling2015finding, santos2015boosting, zhang2015character, kim2016character, jozefowicz2016exploring, chung2016character} in which word representations are composed of character embeddings via deep neural networks such as convolutional neural networks (CNN) or recurrent neural networks (RNN).\n\n\\newcite{kim2016character} introduces a language model that aggregates subword information through a character-level CNN. \nModels based on characters have shown competitive results on many tasks.\nA problem with character-based models is that characters themselves have no semantic meaning, so the models often concentrate only on local syntactic features of words.\nTo avoid this problem, we select syllables, which are as fine-grained as characters but carry their own meaning in Korean, as the basic components of word representations.\n\n\\section{Proposed Model}\n\n\\noindent\\textbf{Characteristics of Korean Words}\n\nMorphologically, unlike many other languages, a Korean word (\\emph{Eojeol}) is not just a concatenation of characters.\nIt is constructed by the following hierarchy: a sequence of syllables (\\emph{Eumjeol}) forms a word, and the composition of 2 or 3 characters (\\emph{Jaso}) forms a syllable \\cite{kang1994syllable}.\n\nIn linguistics, Korean is categorized as an agglutinative language, in which each word is made of a set of morphemes.\nTo complete a Korean word (\\emph{Eojeol}), a root morpheme must be combined with a bound morpheme such as a postposition (\\emph{Josa}) or an ending (\\emph{Eomi}). 
\nThis derivation produces about 60 different forms with similar meanings, which causes an explosion of the vocabulary.\nFor the same reason, the number of occurrences of each word is relatively small even in a large corpus, which prevents the model from learning efficiently.\nThus, most Korean word representation models use morphemes as the embedding unit, though this requires additional preprocessing.\nThe problem is that errors coming from an imperfect morpheme analyzer might be propagated to the word representation model.\nMoreover, a single Korean syllable can possess a semantic meaning.\nFor example, the word `\\begin{CJK}{UTF8}{mj}\ub300\ud559\\end{CJK}'(college, \\emph{daehag}) is a composition of `\\begin{CJK}{UTF8}{mj}\ub300\\end{CJK}'(big, or great, \\emph{dae}) and `\\begin{CJK}{UTF8}{mj}\ud559\\end{CJK}'(learn, or a study, \\emph{hag}).\nTherefore, our model regards syllables as embedding units rather than words or morphemes.\nFor instance, the representations of `\\begin{CJK}{UTF8}{mj}\ub098\ub294\\end{CJK}'(I am, \\emph{naneun}), `\\begin{CJK}{UTF8}{mj}\ub098\uc758\\end{CJK}'(my, \\emph{naui}), or `\\begin{CJK}{UTF8}{mj}\ub098\uc5d0\uac8c\\end{CJK}'(to me, \\emph{na-ege}) are constructed by leveraging the same syllable vector `\\begin{CJK}{UTF8}{mj}\ub098\\end{CJK}'(I, \\emph{na}).\n\n\\vspace{5mm}\n\\noindent\\textbf{Syllable-based Representation}\n\nSimilar to \\cite{kim2016character}, let $\\mathcal{S}$ be the set of all Korean syllables.\nWe embed each syllable into a $d$-dimensional vector space, so that $Q \\in \\mathbb{R}^{d \\times |\\mathcal{S}|}$ becomes a syllable embedding matrix.\nLet $(s_1, s_2 , ..., s_l)$ denote a word $t \\in V$ which consists of $l$ syllables; then $t$ is represented by concatenating the syllable vectors as columns: $(Qs_1, Qs_2, ..., Qs_l) \\in \\mathbb{R}^{d \\times l}$.\nApplying a convolution filter $H \\in \\mathbb{R}^{d \\times w}$ of width $w$, we get a feature map $f^t \\in \\mathbb{R}^{l-w+1}$.\nFilters of width greater than 1 require zero padding when processing words consisting of only a single syllable.\n\nIn detail, for the given filter $H$, the feature map can be calculated as follows:\n\\begin{equation}\nf^t_i = \\tanh (\\langle (Qs_i, ..., Qs_{i+w-1}) ,H \\rangle + b)\n\\end{equation}\n\\noindent where $\\langle A, B \\rangle = \\text{tr}(AB^\\intercal)$ denotes the Frobenius inner product.\nWe then apply a max pooling $y^t = \\max_i f^t_i$ to extract the most important feature.\nBy using multiple filters, namely $H_1, H_2, ..., H_h$, we get a final representation $y^t = (y^t_1, ..., y^t_h)$ for the word $t$.\n\nFor training, we adopt the Skip-gram method \\cite{mikolov2013distributed} with negative sampling, so that for a given center word $y^t$ we maximize the log-probability of predicting the context word $y^c$.\nWe jointly train the syllable embedding matrix and the convolution filters.\nFigure \\ref{fig:model} shows the overall architecture of our model.\n
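\nFor clarity, the forward computation just described (concatenation of syllable vectors, convolution, $\\tanh$, and max pooling) can be written in a few lines of NumPy. The sketch below is illustrative only: the toy romanized syllable inventory, the dimensions, and the random parameters are assumptions, and the Skip-gram training loop is omitted.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nsyllables = ['na', 'neun', 'ui', 'e', 'ge']   # toy inventory (romanized)\nd, h, w = 8, 4, 2                             # embed dim, num filters, width\nQ = rng.normal(size=(d, len(syllables)))      # syllable embedding matrix\nH = rng.normal(size=(h, d, w))                # h convolution filters\nb = np.zeros(h)\n\ndef word_repr(idx):\n    C = Q[:, idx]                             # columns (Q s_1, ..., Q s_l)\n    if C.shape[1] < w:                        # zero-pad one-syllable words\n        C = np.pad(C, ((0, 0), (0, w - C.shape[1])))\n    l = C.shape[1]\n    # f_i = tanh(<(Q s_i, ..., Q s_{i+w-1}), H_k> + b_k), then max-pool\n    f = np.array([[np.tanh(np.sum(C[:, i:i + w] * H[k]) + b[k])\n                   for i in range(l - w + 1)] for k in range(h)])\n    return f.max(axis=1)\n\nprint(word_repr([0, 1]))                      # e.g. 'na' + 'neun'\n\\end{verbatim}\nIn the full model, $Q$, the filters $H_k$, and the biases are trained jointly with the Skip-gram objective rather than drawn at random.\n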
\n\\section{Experiments and Results}\n\n\\noindent\\textbf{Datasets and Baselines}\n\nThe experiments are performed on a randomly sampled subset of a Korean news corpus collected from 2012 to 2014, containing approximately 2.7M tokens, an 11k-word vocabulary, and 1k syllables.\nWe compare our model to the original \\emph{skip-gram model with negative sampling}~\\cite{mikolov2013distributed} as a baseline.\n\n\\vspace{2mm}\n\\noindent\\textbf{Implementation details}\n\nFor all experiments, we use the following common parameters for both our model and the baseline.\nWe use vector representations of dimension 320; the window size is 4 and the negative-sampling parameter is 7.\nWe train over twelve epochs.\nIn our model, the dimension of the syllable embedding is 320.\nEmpirically, using filters of size 1\\textasciitilde4 was enough, since most Korean words are composed of 2\\textasciitilde4 syllables\\footnote[1]{About 95\\% of the words in the training set had a length of less than 5 syllables.}.\n\n\\begin{figure}[!htbp]\n \\includegraphics[width=\\linewidth]{score}\n \\caption{Test results on the translated WordSim353 dataset, which contains similarity and relatedness tests, measured by Pearson correlation. Our model outperformed the baseline on the similarity task.\\label{fig:score}}\n\\end{figure}\n\n\\subsection{Quantitative Evaluation}\n\nWe use the WordSim353 dataset \\cite{finkelstein2001placing, agirre2009study} for the word similarity and relatedness tasks.\nAs WordSim353 is an English dataset, we translated it into Korean.\nThe quality of the word vector representation is evaluated by computing the Pearson correlation coefficient between human judgment scores and the cosine similarity between word vectors.\n\nThe graph in Figure \\ref{fig:score} shows that our model outperforms the baseline on the WS353-Similarity dataset.\nWe attribute this to the fact that many similar words share the same syllable(s) in Korean.\nOn the other hand, on WS353-Relatedness, the performance is not as good as on the similarity task.\nWe presume that leveraging syllables when computing representations can introduce noise among related words that share no common syllables.\n\n\\subsection{Qualitative Evaluation}\n\n\\noindent\\textbf{Out-Of-Vocabulary Test}\n\n\\begin{table}\n \\centering\n \\begin{tabular}{c|c}\n \\toprule\n Original word & Newly coined word \\\\\n \\midrule\n \\makecell{\\begin{CJK}{UTF8}{mj}\uad6c\uae00\\end{CJK}\\\\(Google, \\emph{gugeul})} & \\makecell{\\begin{CJK}{UTF8}{mj}\uad6c\uae00\uc2e0\\end{CJK}\\\\(God google, \\emph{gugeulsin})} \\\\\n \\makecell{\\begin{CJK}{UTF8}{mj}\uc774\ub4dd\\end{CJK}\\\\(Profit, \\emph{ideug})} & \\makecell{\\begin{CJK}{UTF8}{mj}\uac1c\uc774\ub4dd\\end{CJK}\\\\(Real profit, \\emph{gaeideug})} \\\\\n \\makecell{\\begin{CJK}{UTF8}{mj}\ud1f4\uadfc\\end{CJK}\\\\(Leave work,\\\\ \\emph{toegeun})} & \\makecell{\\begin{CJK}{UTF8}{mj}\ud1f4\uadfc\uac01\\end{CJK}\\\\(Time to leave work,\\\\ \\emph{toegeungag})} \\\\\n \\makecell{\\begin{CJK}{UTF8}{mj}\uac24\ub7ed\uc2dc\ub178\ud2b8\\end{CJK}\\\\(Galaxy Note,\\\\ \\emph{gaelleogsinoteu})} & \\makecell{\\begin{CJK}{UTF8}{mj}\uac24\ub178\ud2b8\\end{CJK}\\\\(Gal'Note,\\\\ \\emph{gaelnoteu})} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Four newly coined Korean words which did not appear in the training data. 
The proposed model successfully recognized the stem from each original word and predicted it as the most similar word.\label{tab:oov}}\n\\end{table}\n\nSince our model uses syllable vectors when computing word representations, it is possible to obtain representations of OOV words by combining syllables.\nTo evaluate the representations of OOV words, we manually chose 4 newly coined words that do not appear in the training data (Table \\ref{tab:oov}).\nThese words were derived from original words.\nFor example, `\\begin{CJK}{UTF8}{mj}\uad6c\uae00\uc2e0\\end{CJK}'(God Google, \\emph{gugeulsin}) is derived from `\\begin{CJK}{UTF8}{mj}\uad6c\uae00\\end{CJK}'(Google, \\emph{gugeul}) and `\\begin{CJK}{UTF8}{mj}\uac24\ub178\ud2b8\\end{CJK}'(Gal'Note, \\emph{gaelnoteu}) is an abbreviated form of `\\begin{CJK}{UTF8}{mj}\uac24\ub7ed\uc2dc\ub178\ud2b8\\end{CJK}'(Galaxy Note, \\emph{gaelleogsinoteu}).\nMorphologically, two of them concatenate additional syllables to the original word, and the other two remove some syllables.\n\nWe examined the nearest neighbors of the representations of the OOV words, and confirmed that each original word vector is placed at the nearest distance.\nThis is unsurprising, since almost every newly coined word keeps the syllables of the original word with their positions fixed.\n\n\\vspace{2mm}\n\\noindent\\textbf{Morphological Representation Test}\n\nWe now evaluate our model on language morphology by observing how the word representations leverage morphological characteristics.\nAs mentioned above, the process of forming a Korean sentence is quite different from that of many other languages.\nIn the case of Korean, a word can function in a sentence only if it is combined with a bound morpheme.\nFor example, `\\begin{CJK}{UTF8}{mj}\uc11c\uc6b8\uc744\\end{CJK}'(of Seoul, \\emph{seoul-eul}) is a combination of the full morpheme `\\begin{CJK}{UTF8}{mj}\uc11c\uc6b8\\end{CJK}'(Seoul, \\emph{seoul}) + the bound morpheme `\\begin{CJK}{UTF8}{mj}\uc744\\end{CJK}'(of, \\emph{eul}).\n\n\\begin{figure}[!ht]\n \\adjustbox{minipage=2em,valign=c}{\\subcaption{}\\label{sfig:pca-a}}%\n \\begin{subfigure}[c]{\\dimexpr0.9\\linewidth-1em\\relax}\n \\centering\n \\includegraphics[width=\\textwidth,valign=c]{pca-baseline}\n \\end{subfigure}\n \\adjustbox{minipage=2em,valign=c}{\\subcaption{}\\label{sfig:pca-b}}%\n \\begin{subfigure}[c]{\\dimexpr0.9\\linewidth-1em\\relax}\n \\centering\n \\includegraphics[width=\\textwidth,valign=c]{pca-ours}\n \\end{subfigure}\n \\caption{PCA projections of the vector representations of 100 randomly sampled pairs of words. Each pair is composed of a word and the same word with a postposition. 
In (b), our model shows words forming discriminative parallel clusters against the postposition-combined words.\label{fig:pca}}\n\\end{figure}\n\nTo compare how the models learn morphological characteristics, we randomly sampled one hundred words and the same words combined with a certain postposition (`\\begin{CJK}{UTF8}{mj}\uc744\\end{CJK}', \\emph{eul}) from the training data.\nThe graphs in Figure \\ref{fig:pca} show the result clearly.\nWe can observe that in our model the words form discriminative parallel clusters against the postposition-combined words, while in the baseline they do not.\n\n\\section{Conclusion}\n\nWe present a syllable-based word representation model evaluated on Korean, which is a morphologically rich language.\nOur model keeps the characteristics of Skip-gram models, in which word representations are learned from context words.\nIt also takes the morphological characteristics into account by sharing parameters between words that contain common syllables.\nWe demonstrate that our model is competitive on quantitative evaluations.\nFurthermore, we show that the model can handle OOV words and capture morphological relationships.\nAs future work, we plan to extend our model so that it can utilize the combined information extracted from words, morphemes and characters.\n\n\\section*{Acknowledgments}\nThis work was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2016M3C4A7952587, PF Class Heterogeneous High Performance Computer Development).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMolecular dynamics (MD) is a computationally intensive simulation technique\nwhich is used to gather insights into the structural,\nthermodynamic and kinetic properties of \nsystems modeled at the atomistic level. MD simulations have been extensively used to study, e.g.,\nbiologically-relevant conformational changes of proteins, their\ninteractions with small molecules, and with other proteins. The\nproblem of achieving sufficient sampling of the phase space has been\ntackled by careful software parallelization~\\cite{phillips_scalable_2005,pronk_gromacs_2013} and\naccelerated processors~\\cite{harvey_acemd:_2009}.\nMoreover, the sampling problem can be addressed by\nenhanced sampling\ntechniques, such as umbrella sampling~\\cite{torrie_nonphysical_1977},\nmetadynamics~\\cite{laio_metadynamics:_2008}, steered MD~\\cite{giorgino_high-throughput_2011}, accelerated MD~\\cite{hamelberg_accelerated_2004}, etc.\nThese approaches \nalter the Hamiltonian or the temperature of the system, therefore modifying\nits kinetic properties. However, they allow the reconstruction of the free\nenergy landscape.\n\n\n\n\nMost of the biased sampling methodologies require the modeler to\nchoose a few ``important'' \ncollective variables (CV), along which the important dynamics of the\nsystem is assumed to happen. CVs are arbitrary functions of the internal coordinates of the\nsystem; software such as PLUMED~\\cite{tribello_plumed_2014} and\nColvars~\\cite{fiorin_using_2013} allows their specification \\emph{via}\nconcise symbolic expressions.\nThe choice of CVs is normally done before\nperforming the simulation~\\cite{laio_escaping_2002}. However, it is common practice to choose the CVs\nfor a production run by\n performing an analysis on a preliminary simulation. 
\nA careful choice of CVs\nis also needed in unbiased molecular dynamics simulations, in order to\nobtain insight into the relevant processes that may have occurred during the\ntime evolution. \n\n\n\nIn general, choosing the CVs that best describe a conformational change, even if this change can be observed by visualizing the MD trajectory, can be far from trivial. \nFor these reasons, there is a need\nfor software interfaces that simplify their iterative definition and\nevaluation~\\cite{giorgino_plumed-gui:_2014,biarnes_metagui._2012}.\n\n\n\\section{Features}\n\nWe here present METAGUI 3, a plug-in providing a graphical user interface\n(GUI) to construct thermodynamic and kinetic models of processes\nsimulated by large-scale MD. The GUI is based on the well-known MD visualization\nsoftware VMD~\\cite{humphrey_vmd:_1996}; it extends the\nfeatures of an earlier version~\\cite{biarnes_metagui._2012}, exploiting in particular: (a) its ability to toggle between\ntwo representations, structural and free energy, enabling the quick\ninspection of the configurations associated with specific values of the CVs; (b) a procedure for grouping together similar structures into {\\em microstates}\nbased only on the collective variables; and (c) a simplified and\nguided workflow for the analysis of metadynamics results. METAGUI 3\nwas extensively rewritten to provide the following new features,\nwhich will be presented in detail in the next sections:\n\n\\begin{enumerate}\n\\item an interface for interactively defining and adding new collective variables;\n\\item new algorithms for finding the microstates, namely $k$-medoids~\\cite{kaufman_book_1990} and Daura's algorithm~\\cite{daura_peptide_1999};\n\\item a reorganized graphical interface;\n\\item support for the analysis of unbiased MD trajectories;\n\\item a new clustering method~\\cite{rodriguez_clustering_2014} for identifying kinetic basins.\n\\end{enumerate}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.7\\textwidth]{overview}\n \\caption{Overview of the analysis steps in METAGUI 3. The shaded\n boxes correspond to the three main sections of the GUI. The microstates,\n the free energy landscape, and the clusters can be\n visualized independently after the corresponding step in the\n analysis. }\n \\label{fig:overview}\n\\end{figure}\n\n\n\n\n\n\\subsection{The graphical user interface}\n\nUpon starting the tool, the user is presented with the METAGUI 3\nwindow; the three main tasks are shown as tabs, namely \\emph{Define\n inputs}, \\emph{Analyze} and \\emph{Visualize}. The three sections\ncorrespond to the three logical steps of the analysis workflow shown\nin Figure~\\ref{fig:overview}.\n\nThe \\emph{Define inputs} tab (Figure~\\ref{fig:inputs}) allows the user to\nspecify the input files, including the topology file of the system\n(i.e.\\ a file in PDB, PSF, or GRO format), and one or more trajectory files.\nFor each trajectory, a file containing the values of the collective variables\n(\\emph{COLVAR} files) is required, as well as the temperature of the\nrun; if the simulations are biased by metadynamics, the time-varying\npotentials have to be supplied in matching \\emph{HILLS}\nfiles, produced by PLUMED-patched MD engines during the simulation.\nThe part of the plug-in dedicated to the analysis of biased simulations \nis unchanged with respect to the earlier version of METAGUI~\\cite{biarnes_metagui._2012}. \nThe \\emph{Save} and \\emph{Load configuration} buttons allow storing\nand retrieving the whole set-up at once. 
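\nFor orientation, a \\emph{COLVAR} file is a plain-text table whose first line declares the column names; loading it for the kind of processing described here takes only a few lines of Python. The reader below is a minimal illustrative sketch, assuming the standard PLUMED header convention; it is not part of METAGUI itself.\n\\begin{verbatim}\nimport numpy as np\n\ndef read_colvar(path):\n    # A PLUMED COLVAR file starts with '#! FIELDS time cv1 cv2 ...';\n    # the remaining lines are whitespace-separated numeric records.\n    with open(path) as fh:\n        header = fh.readline().split()\n    assert header[:2] == ['#!', 'FIELDS'], 'unexpected COLVAR header'\n    names = header[2:]\n    data = np.loadtxt(path, comments='#')\n    return names, data     # data[:, i] is the column named names[i]\n\n# names, data = read_colvar('COLVAR')\n# time, cvs = data[:, 0], data[:, 1:]\n\\end{verbatim}\n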
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.9\\textwidth]{inputs_captioned}\n \\caption{The \\emph{Define inputs} tab, which allows setting the trajectory files, the system's topology (A) and the temperatures of the available trajectories (B). The middle section (C) opens the Plumed-GUI collective-variable editor, allowing the run-time definition of unbiased CVs in addition to those originally present in the \\emph{COLVAR} files. Defined collective variables appear in section (D), either over a white background (CVs in the simulation files) or over a green \nbackground (CVs defined at run-time).}\n \\label{fig:inputs}\n\\end{figure}\n\n\nActivating the \\emph{Load all} button loads all the simulation data into\nmemory; this includes the molecular topology, all of the trajectory\nframes, and the values of the collective variables. A summary of the\nCVs appears on the lower-hand panel of the GUI, along with any\nadditional CV defined manually at runtime, as discussed in\nSection~\\ref{sec:run-time-definition}.\n\n\nAfter the input is defined, operations in the \\emph{Analyze} tabs can\ntake place (Figure~\\ref{fig:analyze}). In particular, the first step is\nto group the configurations explored during the simulations into\n\\emph{microstates}, namely sets of similar structures. Three partitioning algorithms are available, namely\n(a) $k$-medoids with $k$-means++ initial seeding~\\cite{kaufman_book_1990,vassilvitskii2006k} with or without\nsieving~\\cite{shao_clustering_2007}; (b) an implementation of the\nalgorithm by Daura et al.~\\cite{daura_peptide_1999}; and (c) the\nsimple grid partitioning in CV space implemented in the earlier versions of\nMETAGUI.\nThe $k$-medoids correspond to a partitioning of\nthe space in $k$ Voronoi cells. The parameter $k$ directly defines\nthe number of microstates that are obtained. Daura's\nclustering algorithm~\\cite{daura_peptide_1999} automatically determines the number \nof microstates by finding groups within a cut-off distance, that should be specified by the user. \nWith respect to the grid partitioning, included also in this version, the use of more \nelaborated partition methods has the advantage of automatically focusing on \nthe populated regions of the CV space. This avoids the exponential \ngrowth with the number of CVs of the microstates with negligible population.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{analyze}\n \\caption{Two steps of the \\emph{Analysis} procedure: (left) compute\n microstates via one of the available partitioning procedures; \n (right) compute the thermodynamics of each microstate via either\n WHAM (biased trajectories) or Boltzmann\n inversion of the populations (unbiased).}\n\\label{fig:analyze}\n\\end{figure}\n\n\nOnce microstates have been computed, the tool allows their visualization.\nThe \\emph{Visualize} tab lists the microstates along with their free\nenergy (Figure~\\ref{fig:visualize-microstates}). Selecting one of them it is possible to visualize the corresponding structures or\nsave them into a PDB file. It is also possible to seamlessly switch between a\ncollective variables space representation of microstates\nand their atomic structure representation (Figure~\\ref{fig:fes});\nthis greatly helps associating structural rearrangements\nwith collective variable changes.\n\n\n\\subsection{Support for unbiased simulations}\n\nA new feature in METAGUI 3.0 enables the analysis of trajectories resulting from\nboth unbiased and metadynamics-based simulations. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{analyze}\n \\caption{Two steps of the \\emph{Analysis} procedure: (left) compute\n microstates via one of the available partitioning procedures; \n (right) compute the thermodynamics of each microstate via either\n WHAM (biased trajectories) or Boltzmann\n inversion of the populations (unbiased).}\n\\label{fig:analyze}\n\\end{figure}\n\n\nOnce microstates have been computed, the tool allows their visualization.\nThe \\emph{Visualize} tab lists the microstates along with their free\nenergy (Figure~\\ref{fig:visualize-microstates}). By selecting one of them, the user can visualize the corresponding structures or\nsave them into a PDB file. It is also possible to seamlessly switch between a\ncollective-variable-space representation of the microstates\nand their atomic structure representation (Figure~\\ref{fig:fes});\nthis greatly helps in associating structural rearrangements\nwith collective variable changes.\n\n\n\\subsection{Support for unbiased simulations}\n\nA new feature in METAGUI 3.0 enables the analysis of trajectories resulting from\nboth unbiased and metadynamics-based simulations. It is possible to\nswitch between the two modes by toggling a \\emph{Biased} check-box on\neach trajectory. \nIn both biased and unbiased simulations the structures are grouped together\ninto a set of microstates, namely sets of structures with similar values of the\ncollective variables. For biased simulations, the free energies \nof the microstates are then\ncomputed by the\nweighted histogram analysis method (WHAM)~\\cite{roux_calculation_1995,Marinelli_2009}. For unbiased simulations, the free energies are\ncomputed by applying the Boltzmann inversion to the populations of the\nmicrostates (visit counts).\nThe tool also allows analyzing simulations in which some of the trajectories are biased and others are not.\n\n\n\\subsection{Run-time definition of collective variables}\\label{sec:run-time-definition}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.9\\textwidth]{plumed-gui-1}\n \\caption{New collective variables can be defined via the Plumed-GUI\n interface \\cite{giorgino_plumed-gui:_2014} and included in the analysis at run-time. In\n this example, the user defined two runtime CVs in PLUMED 2.0\n syntax: the distance between the center of mass (\\texttt{COM}) of\n the heavy atoms of residues 1 and 2; and the angle formed by the\n COM of residues 1, 2 and 3.}\n\\label{fig:plumed-gui}\n\\end{figure}\n\n\nIn METAGUI 3, PLUMED output can be read after the\nsimulation, in the form of \\emph{COLVAR} files as described\nabove, as well as recomputed at \\emph{run-time}. This is an important improvement with respect to the previous version of the interface: the user can now define\ncollective variables in the PLUMED language, and have them evaluated on\nthe loaded trajectories. CVs defined at run-time can either replace or be added to the\nlist of pre-computed CVs. Definition of CVs at run-time is done\nthrough the Plumed-GUI software (Figure~\\ref{fig:plumed-gui}), which\nprovides a guided environment to prototype CV declarations with\ntemplates and symbolic atom selection keywords~\\cite{giorgino_plumed-gui:_2014}.\nPlumed-GUI has been distributed with VMD since version 1.9.2, and can be updated separately\nfrom it.\n\nThe new version of METAGUI described in this work can process the output of the PLUMED 1.3~\\cite{bonomi_plumed:_2009} and 2.x~\\cite{tribello_plumed_2014}\nengines, making it compatible with a number of different molecular\ndynamics packages like AMBER~\\cite{case_amber_2005}, NAMD~\\cite{phillips_scalable_2005},\nGROMACS~\\cite{pronk_gromacs_2013} and several others.\nThe use of CVs computed at run-time only requires\nPLUMED's \\emph{driver}, a stand-alone\nexecutable which is built independently of MD engines.\n\n\n\n\n\n\n\\subsection{Clustering strategies}\n\nMETAGUI 3 allows quickly detecting the presence of metastable states by applying the \nDensity Peak clustering approach~\\cite{rodriguez_clustering_2014}. The approach is based \non the assumption that cluster centers are surrounded by neighbors with lower local \ndensity and that they are at a relatively large distance from any points with a higher \nlocal density. The method proceeds by computing two quantities for each\npoint: the density $\\rho_i$ and the minimum distance to a point with higher density \n$\\delta_i$. In its simplest formulation, the density is estimated by the number of \nneighbors within a cut-off, although more elaborate definitions can be employed. \nOnce computed, both quantities are plotted in the so-called decision plane. \nCluster centers stand out naturally as outliers on the graph and can be picked out manually.\nAfter the cluster centers have been found, each remaining point is assigned to the \nsame cluster as its nearest neighbor of higher density.\n
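\nIn its simplest cut-off form, the whole procedure fits in a short sketch. The code below is illustrative only (METAGUI's version replaces the neighbor-count density by the free-energy-based $\\rho_i$ described next), and it assumes that the densest point is among the centers picked from the decision graph, as in the original algorithm.\n\\begin{verbatim}\nimport numpy as np\n\ndef density_peaks(dist, dc, centers):\n    # dist: (n, n) pairwise distances; dc: density cut-off;\n    # centers: indices picked by eye from the (rho, delta) plane.\n    n = dist.shape[0]\n    rho = (dist < dc).sum(axis=1) - 1        # neighbors within dc\n    order = np.argsort(-rho)                 # decreasing density\n    delta = np.empty(n)\n    nearest_denser = np.full(n, -1)\n    delta[order[0]] = dist[order[0]].max()   # convention for densest point\n    for rank, i in enumerate(order[1:], start=1):\n        higher = order[:rank]                # all points denser than i\n        j = higher[dist[i, higher].argmin()]\n        delta[i], nearest_denser[i] = dist[i, j], j\n    label = np.full(n, -1)\n    for c, i in enumerate(centers):\n        label[i] = c\n    for i in order:                          # assign in order of density\n        if label[i] < 0:\n            label[i] = label[nearest_denser[i]]\n    return rho, delta, label\n\\end{verbatim}\n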
\nIn the context of molecular dynamics, the quantity $\\rho$ has a direct physical interpretation\nthrough the Boltzmann equation, which relates the logarithm of the probability density to the\nfree energy. Therefore, in the version implemented in METAGUI 3, the density of a microstate\nis computed as $\\rho_i=\\exp\\left(-\\Delta G_i \/ k_BT \\right)$, where $k_B$ is the Boltzmann constant and $T$ is the temperature. This definition leads to realistic\ndensity values even in the case of biased simulations, where standard density calculation \nmethods cannot be employed. The distance between microstates is computed as the Euclidean distance \nin the CV space. However, to deal with the different ranges of variation of the CVs, it is necessary to\ninclude some kind of normalization. This is done by dividing the coordinates by the bin size in the grid. \nTherefore, the distance can be interpreted as the number of hypercubes between microstates. \nThe clusters obtained by this procedure are a good approximation of the metastable\nstates normally obtained by Markov State Modeling. Therefore, they can be used \nas a starting point for kinetic analysis.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.9\\textwidth]{visualize_microstates}\n \\caption{List of microstates obtained from the partitioning step, each\n listed with the corresponding population (\\emph{Size}),\n coordinates of the centroid (\\emph{CVs}), free energy\n ($\\Delta G$), and cluster assignment (available after the\n Density Peak clustering step). }\n\\label{fig:visualize-microstates}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.9\\textwidth]{fes_captioned}\n \\caption{\n Free energy surface (FES) plotted with respect to two CVs\n ($x$ and $y$ axis). Each microstate is shown as a dot, whose color\n and $z$ coordinate reflect the corresponding $\\Delta G$. Lines\n join kinetically-communicating microstates.}\n\\label{fig:fes}\n\\end{figure}\n\n\n\\section{A benchmark on a 125-microsecond-long MD trajectory}\n\nWith the aim of illustrating the main features of METAGUI 3, we analyzed an \nunbiased simulation of Villin-headpiece folding~\\cite{Lindorff-Larsen517}. \nThe trajectory is 125 $\\mu$s long, and includes approximately 12 folding events.\nThe collective variables employed were the content of $\\alpha$-helix, \nthe radius of gyration, the backbone dihedral distance to the crystal structure \nand the number of hydrophobic contacts. All of them were computed \nby means of PLUMED~\\cite{tribello_plumed_2014} using the Plumed-GUI\ninterface~\\cite{giorgino_plumed-gui:_2014}. \nThe content of $\\alpha$-helix was computed by means of the \\texttt{ALPHARMSD} keyword~\\cite{pietrucci_collective_2009}, \nusing $r_0=1.0 \\, \\mbox{\\AA}, d_0=0.0 \\, \\mbox{\\AA}, n=6$, and $m=12$. The radius of gyration was \ncomputed by taking into account only the C$\\alpha$ coordinates. For the backbone\ndihedral distance, the reference structure considered was PDB code 2F4K~\\cite{Kubelka2006546}; \nthe dihedrals of terminal residues were ignored. The number of hydrophobic contacts was computed as\nthe combination of the coordination values between all possible pairs of hydrophobic\nresidue side chains. Only the heavy atoms were considered and $r_0$ was set to \n3.5 \r{A}.\n
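\nFor illustration, the simplest of these CVs, the radius of gyration over the C$\\alpha$ coordinates, reduces to a one-line computation. The sketch below assumes uniform weights (a mass-weighted variant would replace the plain means) and coordinates in \r{A}; it is illustrative and not the PLUMED implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef radius_of_gyration(ca_xyz):\n    # ca_xyz: (n_atoms, 3) C-alpha coordinates of one frame.\n    center = ca_xyz.mean(axis=0)\n    return np.sqrt(((ca_xyz - center) ** 2).sum(axis=1).mean())\n\n# frames: (n_frames, n_atoms, 3) -> one CV value per frame\n# rg = np.array([radius_of_gyration(f) for f in frames])\n\\end{verbatim}\n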
\n\nA total of 400 microstates were found by the $k$-medoids method, reducing \nthe computational time by using the sieving option~\\cite{shao_clustering_2007} with a maximum distance \nmatrix dimension of 15,000. \nThen, the free energy of each microstate was computed \nby applying the Boltzmann inversion to its population. Finally, the free energy\nwells were localized by Density Peak clustering~\\cite{rodriguez_clustering_2014},\nallowing the classification of the microstates into 3 different clusters.\n\nFigure \\ref{fig:proj} summarizes the results of the analysis, showing the projections of \nthe free energy surface on different CVs, one using the $\\alpha$-helix content and the \nradius of gyration (panel A) and the other using the dihedral distance to the \ncrystal structure and the number of hydrophobic contacts (panel C),\nthe cluster partitions of the same projections (panels B and D, respectively), \nthe decision plane from which the basin partition originates (panel E), and the \nrepresentative structures for each basin (labeled S1, S2 and S3). \nAs expected, the microstate with minimum free energy (S1) corresponds to\nthe folded state. Moreover, the folding funnel is evident in both projections. \nS2 and S3 are configurations corresponding to intermediates of the folding process. \nRemarkably, the different \nprojections shown in the figure do not correspond to different calculations of\nthe system, but to different visualizations of the same analysis. METAGUI 3 allows \ntoggling between them by changing the checked CVs in the \n\\emph{FES and Basins plot} frame (Figure~\\ref{fig:visualize-microstates}). \nMoreover, by using the \\emph{Pick} function, one can \nvisualize the structures corresponding to a microstate by simply clicking on \na sphere in the free energy representation. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{Figure_Metagui}\n \\caption{ METAGUI 3 output of the analysis of a 125 $\\mu$s-long molecular dynamics trajectory\n of the villin headpiece:\n (A) free energy surface projected on the $\\alpha$-helix and the radius of gyration CVs; \n (B) assignment of the microstates to clusters in the projection of panel A;\n (C) free energy surface projected on the dihedral distance to the\ncrystal structure and the number of maintained hydrophobic contacts;\n (D) assignment of the microstates to clusters in the projection of panel C;\n (E) decision graph that leads to the cluster partition shown in panels B and D.\n S1, S2 and S3 are cartoon representations of the representative structures \nof each cluster.\n }\n\\label{fig:proj}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nThe introduction of graphical tools in the MD analysis workflow can\nspeed up iterations in the search for appropriate collective variables,\na necessary step towards the data-driven rationalization\nof MD results. METAGUI~3 aims at providing a quick way to\ncross-reference structural and thermodynamic information, by enabling\nboth the selection of a set of relevant CVs on existing simulations\nand the rapid definition of new ones. 
\n\nThe source code of\nMETAGUI 3 is available at the URL\n\\url{https:\/\/github.com\/metagui\/metagui3}; it can be freely\nredistributed and\/or modified under the terms of the GNU General\nPublic License (Free Software Foundation) version 3 or later.\n\n\n\n\n\\section{Acknowledgements}\n\nThe code of METAGUI 3.0 includes parts from the earlier METAGUI~2.0\nversion, written by Xevi Biarn\u00e9s, Fabio Pietrucci, Fabrizio Marinelli\nand Alessandro Laio. Alessandro Laio and Alex Rodriguez \nacknowledge financial support from the grant Associazione Italiana per \nla Ricerca sul Cancro 5 per mille, Rif.\\ 12214.\n\n\n\n\\bibliographystyle{model1-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThroughout this paper, we study the \\emph{$(n,k)$-Bernoulli-Laplace model}. In the model, there are two urns, a \\emph{left} urn and a \\emph{right} urn, each of which contains exactly $n$ balls. Of the total $2n$ balls contained in both urns, $n$ are colored red and $n$ are colored white. Starting from a given coloration of $n$ balls in each urn, at each step $k$ balls are selected uniformly at random without replacement from each urn. The selected balls are then swapped and placed in the opposite urn. The process then repeats itself. Letting $X_t$ denote the number of red balls in the left urn after $t$ swaps, the process $(X_t)$ is Markov. Our main goal is to understand how long it takes for the chain to be within $\\epsilon>0$ of its stationary distribution $\\pi$ in total variation. Our focus in this paper is on the case when the number of swaps $k$ is of order $n$ where $n\\gg1$. The main result of this paper partially resolves an open question posed by Eskenazis and Nestoridi in~\\cite{EN}. \n\n\nOur interest in the $(n,k)$-Bernoulli-Laplace model comes from shuffling large decks of cards. Mapping the above model to this setting, the deck of cards has size $2n\\gg1$ and at each step of the shuffle we cut the deck into two equal piles of $n$ cards, shuffle each deck independently and perfectly, reassemble the deck and then move the top $k$ cards to the bottom. This process repeats itself until sufficient mixing is achieved. From this description, it follows that the $(n,k)$-Bernoulli-Laplace model describes this card shuffling algorithm \\emph{without} the separate step of shuffling each of the smaller decks independently and perfectly at each step. See~\\cite{NW_19} for further details. \n\n\\subsection{Preliminaries} Before discussing existing results in the literature and the results of this paper, we first fix some notation and terminology. \n\nThroughout, $\\mathcal X= \\{0,1,2,\\ldots, n\\}$ denotes the state space of the $(n,k)$-Bernoulli-Laplace chain $(X_t)$, and $p_t(x,y)$ denotes the associated probability of transitioning from $x\\in\\mathcal X$ to $y\\in \\mathcal X$ in $t$ steps; that is,\n\\begin{align}\n\\label{eqn:trans}\np_t(x,y):= \\mathbf{P}_x \\{ X_t =y \\}\n\\end{align} \nwhere the subscript in the probability $\\mathbf{P}$ indicates that $X_0=x$. One can write down the specific formulas for the transitions $p_t(x,y)$ (see~\\cite{EN}) but these formulas are not particularly important in our analysis. Similar to $\\mathbf{P}_{X_0}\\{ \\cdots\\}$, we use an analogous notation for the expectation $\\mathbf{E}$; that is, $\\mathbf{E}_{X_0}\\{ \\cdots \\}$ indicates that $(X_t)$ has initial distribution $X_0$. 
For any $A\\subset \\mathcal X$ and $x\\in \\mathcal X$, we let \n\\begin{align}\nP_t(x, A) := \\sum_{y\\in A} p_t(x,y) = \\mathbf{P}_x\\{ X_t \\in A\\}\n\\end{align}\ndenote the probability of transitioning from state $x$ to the set $A$ in $t$ steps. It is known (see~\\cite{Tai_96}) that $(X_t)$ has a unique stationary distribution $\\pi$ which is hypergeometric; specifically, $\\pi$ satisfies\n\\begin{align}\n\\label{eqn:pidist}\n\\pi(\\{x\\})= \\frac{\\binom{n}{x} \\binom{n}{n-x}}{\\binom{2n}{n}}, \\qquad x\\in \\mathcal X. \n\\end{align} \nObserve that each of the quantities above depends on the parameter $n\\in \\mathbf{N}$. Throughout, unless otherwise specified (see, for example, the paragraph below), we will suppress this dependence. \n\nLet \n\\begin{align}\nd^{(n)}(t):= \\max_{x\\in \\mathcal X} \\| P_t(x, \\, \\cdot \\,) - \\pi (\\, \\cdot \\,) \\|_{TV} = \\tfrac{1}{2}\\max_{x\\in \\mathcal X} \\sum_{y\\in \\mathcal X} | p_t(x,y) - \\pi(y)|, \n\\end{align} \nand define for $\\epsilon >0$ the \\emph{mixing time} $t_{\\text{mix}}^{(n)} (\\epsilon)$ by \n\\begin{align}\nt_{\\text{mix}}^{(n)}(\\epsilon) = \\min \\{ t\\in \\mathbf{N} \\, : \\, d^{(n)}(t) \\leq \\epsilon \\}. \n\\end{align}\nWe say that the Markov chain $(X_t)$ exhibits \\emph{cutoff} if \n\\begin{align}\n\\label{eqn:cutoff}\n\\lim_{n\\rightarrow \\infty} \\frac{t_{\\text{mix}}^{(n)}(\\epsilon)}{t_{\\text{mix}}^{(n)}(1-\\epsilon)} =1 \\,\\,\\, \\,\\,\\text{ for all }\\, \\,\\,\\, \\, \\epsilon \\in (0, 1). \n\\end{align}\nIf the Markov chain $(X_t)$ exhibits cutoff and for every $\\epsilon \\in (0,1)$ there exists a constant $c_\\epsilon$ and a sequence $w_n$ satisfying \n\\begin{align}\nw_n=o(t_{\\text{mix}}^{(n)}(1\/2)) \\qquad \\text{ and } \\qquad t_{\\text{mix}}^{(n)}(\\epsilon)- t_{\\text{mix}}^{(n)}(1-\\epsilon) \\leq c_\\epsilon w_n \\quad \\text{ for all }\\quad n, \n\\end{align} \nwe say that $(X_t)$ has \\emph{cutoff window} $w_n$. For other preliminaries concerning mixing times of Markov chains, see~\\cite{LPW_17}. \n\n\n\\subsection{Previous results and statement of the main result}\nExisting results on mixing times for the $(n,k)$-Bernoulli-Laplace model largely focus on the case when the number of swaps $k$ is \\emph{much smaller} than the number of balls $n$ in each urn. The earliest works of Diaconis and Shahshahani~\\cite{DS_81, DS_87} and Donnely, Floyd and Sudbury~\\cite{DLS_94} treat the case when $k=1$ and establish cutoff in total variation and in the \\emph{separation distance}, respectively. Diaconis and Shahshahani proved their results by analyzing random walks on Cayley graphs of the symmetric group where edges correspond to transpostions. These results were extended to random walks on distance regular graphs in~\\cite{B_98}. See also~\\cite{scarabotti1997time} for the Bernoulli-Laplace model with multiple urns in the case when $k=1$. We refer to~\\cite{school_02} which contains a signed generalization of the model. \n\nThe case when the number of swaps $k>1$ in the Bernoulli-Laplace two-urn model was first studied by Nestoridi and White~\\cite{NW_19}, where a number of estimates (not all sharp) were deduced for $t_{\\text{mix}}(\\epsilon)$ for general $k$. 
These estimates were made sharp in the case when the number of swaps $k$ satisfies $k=o(n)$ in a joint paper of Eskenazis and Nestoridi~\\cite{EN}, ultimately yielding the bound \n\\begin{align}\n\\label{eqn:upperbksmall}\n\\frac{n}{4k}\\log n - \\frac{c(\\epsilon) n}{k}\\leq t_{\\text{mix}}(\\epsilon) \\leq \\frac{n}{4k} \\log n + \\frac{3n}{k}\\log \\log n + O\\Big( \\frac{n}{\\epsilon^4 k}\\Big). \n\\end{align}\nThus, the model with $k=o(n)$ exhibits cutoff with cutoff window $\\tfrac{n}{k} \\log \\log n$. Estimates deduced in the case when $k=O(n)$ were not optimal; that is, in~\\cite{NW_19} it was shown that if $k\/n \\rightarrow \\lambda \\in (0,1\/2)$, then \n \\begin{align}\n \\label{eqn:lowb1}\n \\frac{\\log n }{2 |\\log (1-2\\lambda)|} -c(\\epsilon) \\leq t_{\\text{mix}}(\\epsilon) \\leq \\frac{\\log(n\/\\epsilon)}{2\\lambda(1-\\lambda)}. \n \\end{align} \nMoreover, it was conjectured in~\\cite{EN} that the lower bound in~\\eqref{eqn:lowb1} is sharp. In general, it was left there as an open problem to determine the mixing time of the $(n,k)$-Bernoulli-Laplace model when $k\/n\\rightarrow \\lambda \\in (0,1\/2)$. In this paper, we make progress on solving this problem under \\emph{reasonable} assumptions on the convergence rate $k\/n\\rightarrow \\lambda \\in (0,1\/2)$, which we now describe.\n\n\n \n\\begin{Assumption}\n\\label{assump:1}\nThe parameter $k=k(n)$ in the $(n,k)$-Bernoulli-Laplace model satisfies the following conditions:\n\\begin{itemize}\n\\item[\\textbf{(c0)}] $k\/n\\rightarrow \\lambda \\in (0, 1\/2)$ as $n\\rightarrow \\infty.$ \n\\item[\\textbf{(c1)}] There exists $\\delta \\in (0,1\/2)$ for which $k\/n\\in (0, \\delta)$ for all $n$.\n\\item[\\textbf{(c2)}] $\\Delta_n :=\\tfrac{k}{n}- \\lambda$ satisfies the asymptotic condition\n\\begin{align}\n\\Delta_n = o(1\/\\log n) \\,\\,\\,\\, \\text{ as } \\,\\,\\,\\, n\\rightarrow \\infty. \n\\end{align}\n\\end{itemize}\n\n\\begin{Remark}\n\\label{rem:1}\nAlthough \\cref{assump:1} is not explicitly employed in the paper~\\cite{NW_19}, to the best of our knowledge it seems that one needs to impose some condition like the one above to validate the asymptotic formulas previously used in~\\cite{NW_19} to deduce a lower bound on $t_{\\text{mix}}^{(n)}(\\epsilon)$. For further information on this point, we refer the reader to~\\cref{rem:2}. \n\\end{Remark}\n\n\\end{Assumption}\n\n\n\nThroughout, using the notation in \\cref{assump:1}, we define \n\\begin{align}\n\\label{def:times}\ns_n = \\lambda^{-1} \\log \\log n \\qquad \\text{ and } \\qquad t_n = \\frac{\\log n}{2| \\log (1-2\\lambda)|}.\n\\end{align}\nOur main result is the following:\n\\begin{Theorem}\n\\label{thm:main}\nSuppose that in the $(n,k)$-Bernoulli-Laplace chain, the parameter $k$ satisfies \\cref{assump:1}. \n For any $\\epsilon \\in (0,1)$, there exist constants $c(\\epsilon), N(\\epsilon)>0$ depending only on $\\epsilon$ such that \n\\begin{align}\n\\label{eqn:upperb}\nt_n - c(\\epsilon) \\leq t_{\\text{mix}}^{(n)}(\\epsilon) \\leq t_n + 3s_n + 1 \n\\end{align}\nfor all $n\\geq N(\\epsilon)$. \nIn particular, the $(n,k)$-Bernoulli-Laplace chain with parameter $k$ satisfying \\cref{assump:1} has mixing time $t_n$ with cutoff window $s_n.$\n\\end{Theorem}
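\nTo give a sense of scale, $t_n$ and $s_n$ in \\eqref{def:times} are easily evaluated numerically; the short illustrative snippet below computes them for a few values of $n$ at $\\lambda=1\/4$.\n\\begin{verbatim}\nimport numpy as np\n\ndef mixing_scales(n, lam):\n    t_n = np.log(n) \/ (2 * abs(np.log(1 - 2 * lam)))  # leading term\n    s_n = np.log(np.log(n)) \/ lam                     # cutoff window\n    return t_n, s_n\n\nfor n in (10**4, 10**6, 10**8):\n    print(n, mixing_scales(n, 0.25))\n# at lambda = 1\/4, t_n is roughly 7, 10 and 13 steps, respectively\n\\end{verbatim}\n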
\n\n\n\\section{Proof outline }\n\\label{sec:outline}\n\n\n\\subsection{The lower bound}\nThe proof of the lower bound with $k$ satisfying \\cref{assump:1} was first established in \\cite{NW_19}. \nWe present the lower bound argument and fill in some details in \\cref{sec:lowerb} for completeness. \nThe lower bound argument follows the standard idea of finding a set $A\\subseteq \\mathcal X$ where $|P_t (0,A)-\\pi_n(A)|$ is large. In particular, the argument uses the estimate \n\\[\\TV{P_t(0, \\, \\cdot\\,)-\\pi_n(\\, \\cdot \\,)} \\geq \\sup_{A\\subseteq \\mathcal X} |P_t(0,A)-\\pi_n(A)|.\\]\nIt is worth noting that the lower bound argument uses Chebyshev's inequality to bound $P_t(0,A)$ from above and to bound $\\pi_n(A)$ from below, for a clever choice of $A$. \nThis argument is instructive and provides intuitive support for the lower bound being sharp, since the application of Chebyshev's inequality fails to bound $P_t(0,A)$ and $\\pi_n(A)$ away from each other for $t\\geq t_n$.\nOur main contribution in this paper is the upper bound~\\eqref{eqn:upperb}. \n\n\\subsection{The upper bound~\\eqref{eqn:upperb}}\n\nWhile the broad structure of the argument giving the upper bound~\\eqref{eqn:upperb} is similar to that used to obtain the upper bound in~\\eqref{eqn:upperbksmall} in the case when $k=o(n)$~\\cite{EN}, the details giving these results are quite different. This is especially true because the asymptotics are more delicate when $k$ is order $n$ as in~\\cref{assump:1}. Furthermore, we cannot rely on a known result of Diaconis and Freedman~\\cite{DF_1980} in the last step of the proof. This was used crucially in~\\cite{EN} to show that once two copies of the chain are sufficiently close, in this case at distance $o(k)$, then in one step the processes are distance $o(1)$ apart with high probability. This step relies inherently on the binomial approximation of the hypergeometric distribution (see, for example,~\\cite{R_01}), which is not valid in our regime of interest since $k$ is order $n$. The last step, i.e. going from distance $o(k)$ to $o(1)$, is arguably the most involved because we manufacture a \\emph{discrete normal} approximation (in total variation) of the hypergeometric distribution. This same approximation was also used in~\\cite{LCM_07} to establish a Berry-Esseen theorem for the hypergeometric distribution in nonstandard parameter cases. For more on Berry-Esseen theorems, we refer the reader to Feller's book~\\cite{F_08}. All of these steps are carried out in detail in \\cref{sec:osmallrootn} and \\cref{sec:upperbound}. \n\n\n\n\n\\subsubsection{The coupling} \nThe proof of the upper bound~\\eqref{eqn:upperb} employs coupling methods. Specifically, recall that if $\\mu$ and $\\nu$ are probability measures defined on all subsets of $\\mathcal X$, then we can write\n\\begin{align}\n\\| \\mu - \\nu \\|_{TV} = \\inf \\{ \\mathbf{P}\\{ X\\neq Y \\} \\, : \\, (X,Y) \\text{ is a coupling of } \\mu \\text{ and } \\nu \\}. \n\\end{align}\nHere we recall that $(X,Y)$ is a \\emph{coupling} of the measures $\\mu$ and $\\nu$ if $X$ and $Y$ are random variables on a common probability space $(\\Omega, \\mathcal{F}, \\mathbf{P})$ with $X\\sim \\mu$ and $Y\\sim \\nu$. Thus in order to achieve the claimed upper bound~\\eqref{eqn:upperb}, we will utilize a particular coupling of two copies of the chain to bound the total variation distance from above. \n\nRecall that in the $(n,k)$-Bernoulli-Laplace model, $X_t$ denotes the number of red balls in the left urn at time $t$. Thus if $(X_t)$ and $(Y_t)$ are two copies of the $(n,k)$-Bernoulli-Laplace chain, then $X_t$ and $Y_t$ are the number of red balls in two separate left urns at time $t$. 
The coupling we use can be described as follows (see also~\\cite{EN, NW_19}): \n\n\\begin{Coupling}\\label{coupling}\nLet $(X_t)$ and $(Y_t)$ be two copies of the $(n,k)$-Bernoulli-Laplace chain and fix a time $t\\geq 0$. Given the distributions $X_t$ and $Y_t$ at time $t$, we define a coupling of $X_{t+1}$ and $Y_{t+1}$. First, label the balls in the two left urns at time $t$ separately from $1$ to $n$ so that each red ball has a smaller label than each white ball. \nFurthermore, in a similar way label all of the balls in the two right urns from $n+1$ to $2n$ so that each red ball has a smaller label than each white ball. \nUniformly and independently select subsets $A\\subseteq \\{1,\\dots,n\\}$ and $B\\subseteq \\{n+1,\\dots, 2n\\}$ with $|A|=|B|=k$. To obtain $X_{t+1}$ and $Y_{t+1}$, \nswap the balls indexed by the elements of $A$ in each left urn with the balls in the corresponding right urn with index belonging to $B$. \\end{Coupling}\n\n \n\n\n\n\\begin{figure}\n \\centering\n \\begin{tikzpicture}[thick, scale = 1]\n \n \n \\foreach \\x in {0, 3, 7, 10}{\n \\foreach \\y in {0, 2, 4, 6}{\n \\draw (\\x -1, \\y) arc [ start angle = 180, end angle = 360, x radius =1, y radius = .25];\n }\n \\foreach \\y in {2,6}{\n \\draw (\\x + 1, \\y) arc [ start angle = 0, end angle = 180, x radius =1, y radius = .25];}\n \\foreach \\y in {0,4}{\n \\draw[dashed] (\\x + 1, \\y) arc [ start angle = 0, end angle = 180, x radius =1, y radius = .25];\n \\draw (\\x - 1, \\y ) -- (\\x - 1, \\y + 2);\n \\draw (\\x + 1, \\y ) -- (\\x + 1, \\y + 2);\n \\foreach \\z in {.2,.8, 1.4}{\n \\draw[fill=white] (\\x -.4, \\y + \\z) circle (.25);}\n \\foreach \\z in {.5,1.2}{\n \\draw[fill=white] (\\x +.4, \\y + \\z) circle (.25);}\n }\n }\n \n \n \\foreach \\z in {.2, .8, 1.4}{\n \\draw[fill = ourRed] (-.4, 4 + \\z) circle (.25);}\n \\foreach \\z in {1.4, .8}{\n \\draw[fill = ourRed] (2.6, 4 + \\z) circle (.25);}\n \\foreach \\x in {0,7}{\n \\draw[fill = ourRed] (\\x - .4, 1.4) circle (.25);}\n \\draw[fill = ourRed] (6.6, 5.4) circle (.25);\n \\foreach \\x in {3,10}{\n \\foreach \\z in {.2, .8, 1.4}{\n \\draw[fill = ourRed] (\\x - .4, \\z) circle (.25);}\n \\draw[fill = ourRed] (\\x + .4, 1.2) circle (.25);}\n \\foreach \\z in {.2, .8, 1.4}{\n \\draw[fill = ourRed] (9.6, 4 + \\z) circle (.25);}\n \\draw[fill = ourRed] (10.4, 5.2) circle (.25);\n\n \n \n \n \\foreach \\y in {0,4}{\n \\foreach \\z\/\\i in {1.4\/1, .8\/2, .2\/3}{\n \\draw ( - .4, \\y + \\z) node {\\i};}\n \\foreach \\z\/\\i in {1.2\/4, .5\/5}{\n \\draw (.4, \\y + \\z) node {\\i};}\n }\n \\foreach \\y in {0,4}{\n \\foreach \\z\/\\i in {1.4\/6, .8\/7, .2\/8}{\n \\draw ( 2.6, \\y + \\z) node {\\i};}\n \\foreach \\z\/\\i in {1.2\/9, .5\/10}{\n \\draw (3.4, \\y + \\z) node {\\i};}\n }\n \n \n \\draw node [below] at (1.5, 3.5) {$X_t$};\n \\draw node [below] at (1.5, -.5) {$Y_t$};\n \\draw node [below] at (8.5, 3.5) {$X_{t+1}$};\n \\draw node [below] at (8.5, -.5) {$Y_{t+1}$};\n \n \n \\foreach \\y in {0,4}{\n \\foreach \\x in {0, 7}{\n \\draw (\\x, \\y - .5) node(\\x\\y){$L$};\n }\n \\foreach \\x in {3, 10}{\n \\draw (\\x, \\y - .5) node(\\x\\y){$R$};\n }}\n \n \\end{tikzpicture}\n \\caption{An illustration of \\cref{coupling} with $n=5$, $k=2$ and $A=\\{1,3\\}$ and $B=\\{9,10\\}$.}\n \\label{fig:1}\n\\end{figure}\n\n\n \n \n\n\n\n\n\n Refer to \\cref{fig:1} for an illustration of this coupling. Importantly, two chains coupled as in \\cref{coupling} have the property that $|X_{t+1}-Y_{t+1}|\\leq |X_t - Y_t|$~\\cite{NW_19}. 
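\nThe coupling in \\cref{coupling} is straightforward to simulate, which gives a quick numerical sanity check on the contraction property just stated. In the illustrative Python sketch below, red balls always occupy the lowest labels in each urn, so the coupled update only needs the number of selected labels falling below each chain's red-ball threshold.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\n\ndef coupled_step(x, y, n, k):\n    # Shared label sets: A in {1,...,n} (left urns) and B in\n    # {n+1,...,2n} (right urns); red balls carry the smaller labels.\n    A = rng.choice(n, size=k, replace=False) + 1\n    B = rng.choice(n, size=k, replace=False) + n + 1\n    def step(z):\n        red_out = np.sum(A <= z)          # red balls leaving the left urn\n        red_in = np.sum(B <= 2 * n - z)   # red balls entering from the right\n        return z - red_out + red_in\n    return step(x), step(y)\n\nn, k = 1000, 250\nx, y = 0, n                               # maximally separated starting states\nfor _ in range(40):\n    x, y = coupled_step(x, y, n, k)\nprint(abs(x - y))   # |X_t - Y_t| is non-increasing and typically 0 by now\n\\end{verbatim}\n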
\nFurthermore, we have the next proposition (see~\\cite[Proposition 2]{EN}) whose argument is a modification of the \\emph{path coupling theorem} of Bubley and Dyer~\\cite{BD_97}. \n\n\\begin{Proposition}\\label{prop:couple}\nLet $(X_t)$ and $(Y_t)$ be two instances of the $(n,k)$-Bernoulli-Laplace chain coupled at each time $t\\geq 1$ as in \\cref{coupling} starting from $X_0, Y_0 \\in \\mathcal X$. For $r>0$, let \n\\begin{align}\n\\label{eqn:tcouple}\n\\tau_{\\emph{couple}}(r) := \\min\\{t\\,:\\, |X_t-Y_t|\\leq r\\}.\n\\end{align}\nThen, for every $t\\in \\mathbb{N}$, \n\\[\\mathbf{P}(\\tau_{\\emph{couple}}(r)>t)\\leq \\Big( 1-\\frac{2k(n-k)}{n^2}\\Big)^t\\frac{|X_0-Y_0|}{r}.\\]\n\n\\end{Proposition}\n\n\n\\subsubsection{The steps in the upper bound argument}\nIn \\cref{sec:osmallrootn} and~\\cref{sec:upperbound}, we provide the details for the upper bound~\\eqref{eqn:upperb}. Here we lay out the argument giving the upper bound in several steps. \n\n\n\\textbf{Step 1}. Recalling $t_n$ as in~\\eqref{def:times} and fixing $\\kappa_1\\gg 1$, we will first prove that with high probability, there exists $t\\leq t_n$ such that \n\\begin{align*}\nX_{t}, Y_{t} \\in (\\tfrac{n}{2}- \\kappa_1 \\sqrt{n}, \\tfrac{n}{2}+ \\kappa_1 \\sqrt{n}). \n\\end{align*} \nAs a consequence, by $t_n$ steps, with high probability (depending on the choice of $\\kappa_1$) the chains have been at distance order $\\sqrt{n}$ apart. This step will be achieved using the first and second eigenfunctions for the chain, which translates to precise first and second moment estimates for $(X_t)$. \n\n\\textbf{Step 2}. Recalling $s_n$ as in~\\eqref{def:times} and fixing $\\kappa_2 , \\kappa_3\\gg 1$, we will show that if $X_0, Y_0 \\in (\\tfrac{n}{2}-\\kappa_2\\sqrt{n},\\tfrac{n}{2}+ \\kappa_2 \\sqrt{n})$, then with high probability (depending on $\\kappa_2, \\kappa_3$) if $X_t, Y_t$ are coupled according to \\cref{coupling} there exists $t \\leq s_n$ so that \n\\begin{align*}\n|X_t - Y_t| \\leq \\frac{\\sqrt{n}}{\\log \\log n} \\qquad \\text{ and } \\qquad X_t, Y_t \\in (\\tfrac{n}{2}- \\kappa_3 \\sqrt{n} (\\log n)^{|P_\\lambda|\/2}, \\tfrac{n}{2}+ \\kappa_3 \\sqrt{n} (\\log n)^{|P_\\lambda|\/2})\\end{align*}\nwhere $P_\\lambda =\\log(1-2\\lambda) \\tfrac{2}{\\lambda}$. Thus the chains become closer asymptotically but they may deviate slightly from $\\tfrac{n}{2}$ by more than order $\\sqrt{n}$. Note that this deviation becomes particularly pronounced when $\\lambda \\approx 1\/2$ by the definition of $P_\\lambda$. \n\n\\textbf{Step 3}. In this step, we correct for the deviation in \\textbf{Step 2} from $\\tfrac{n}{2}$ in $2 s_n$ steps again making use of~\\cref{coupling}. That is, fix $\\kappa_4 \\gg 1$ and suppose $X_0, Y_0$ satisfy\n\\begin{align*}\n|X_0 - Y_0| \\leq \\frac{\\sqrt{n}}{\\log \\log n} \\qquad \\text{ and } \\qquad X_t, Y_t \\in (\\tfrac{n}{2}- \\kappa_3 \\sqrt{n} (\\log n)^{|P_\\lambda|\/2}, \\tfrac{n}{2}+ \\kappa_3 \\sqrt{n} (\\log n)^{|P_\\lambda|\/2}).\\end{align*}\nSupposing that the chains $(X_t)$ and $(Y_t)$ are coupled according to~\\cref{coupling}, we will show that with high probability (depending on $\\kappa_4 \\gg 1$) there exists $t\\leq 2 s_n$ so that \n\\begin{align}\n\\label{eqn:step3}\n|X_t - Y_t| \\leq \\frac{\\sqrt{n}}{\\log \\log n} \\qquad \\text{ and } \\qquad X_t, Y_t \\in (\\tfrac{n}{2}- \\kappa_4 \\sqrt{n} , \\tfrac{n}{2}+ \\kappa_4 \\sqrt{n} ). \\end{align}\n\n\\textbf{Step 4}. 
At this point, by using the strong Markov property and combining \\textbf{Step 1}, \\textbf{Step 2} and \\textbf{Step 3}, with high probability there exists $t\\leq t_n + 3 s_n$ such that~\\eqref{eqn:step3} is satisfied regardless of $X_0$, $Y_0$. From this point, we will see that in one additional step the processes are within distance $o(1)$ of each other with high probability. This step is more involved than the other steps and constitutes its own section (cf. \\cref{sec:upperbound}). The main reason this step is more involved is that we have to understand the hypergeometric distributions that make up the one-step behavior of the chain within this particular parameter range on the initial data. Here, because $k$ is order $n$ instead of $k=o(n)$, we cannot appeal to an existing result to do so. Instead, we make use of some asymptotic estimates for the hypergeometric distributions in~\\cite{LCM_07} to compare the hypergeometrics with what we call \\emph{discrete normal} distributions. \n\n\n\n\\section{Lower bound}\n\\label{sec:lowerb}\n\n\nHere we establish the lower bound on the mixing time~\\eqref{eqn:upperb}. First, however, we state some auxiliary results to be used throughout the paper. For the reader, it may be more valuable to skip the proof of these results on a first pass and proceed directly to the proof of the lower bound. The proof of the auxiliary results will be given later in~\\cref{sec:aux}. \n\n\n\\subsection{Auxiliary lemmata}\nFor $x\\in \\mathcal X$, define $f_i(x)$, $i=1,2,3$, by \n\\begin{align}\n\\label{eqn:fs}\n f_1(x)= 1-\\frac{2x}{n}, \\qquad f_2(x) = 1-\\frac{2(2n-1)x}{n^2}+\\frac{2(2n-1)x(x-1)}{n^2(n-1)}, \\,\\,\\, \\text{ and } \\,\\,\\, f_3(x)= 1- \\frac{2x(n-x)}{n^2}.\n\\end{align}\nWe remark that while $f_1$ and $f_2$ are eigenfunctions for the Bernoulli-Laplace model with respective eigenvalues $f_1(k)$ and $f_2(k)$~\\cite{EN}, $f_3(k)$ is contained in the righthand side of the bound in \\cref{prop:couple}. For further information on the dual Hahn eigenfunctions for the Bernoulli-Laplace chain, we refer the reader to~\\cite{NW_19}. See also~\\cite{KZ_09} for further reading on polynomial eigenfunctions of Markov chains in general. \n\nOne can check that $|f_1| \\leq 1$ on $\\mathcal X$ and, after a short exercise optimizing the quadratic $f_2(x)$ on $\\mathcal X$, for $n\\geq 2$ we have \n\\begin{align}\n\\label{eqn:f2bound}\n1- \\frac{2n-1}{2n-2} \\leq f_2(x) \\leq 1 \\,\\,\\,\\,\\, \\text{ for all } \\,\\,\\, x\\in \\mathcal X. \n\\end{align} \nWe will also need the following basic structural facts about the functions $f_1$ and $f_2$. \n\\begin{Lemma}\n\\label{lem:1}\nFor all $t\\geq 0$ and all $X_0\\in \\mathcal X$, we have the identities\n\\begin{align}\n\\label{id:1}\n\\mathbf{E}_{X_0} f_1(X_t) &=\\Big(1-\\frac{2k}{n}\\Big)^t f_1(X_0)\\\\\n\\label{id:2} \\mathbf{E}_{X_0} f_1(X_t)^2& = \\frac{1}{2n-1}+ \\frac{2n-2}{2n-1} f_2(k)^t f_2(X_0). \n\\end{align}\nFurthermore, let $q\\geq0$ be a constant and suppose $X_0= \\tfrac{n}{2}+ O( \\sqrt{n} (\\log n)^{q\/2} )$ as $n\\rightarrow \\infty$. Then we have the asymptotic formula \n\\begin{align}\n\\label{eqn:f2as}\nf_2(X_0) = \nO(n^{-1} (\\log n)^q). \n\\end{align}\n\\end{Lemma}\n\n\n\n\\begin{Lemma}\n\\label{lem:2}\nSuppose that \\cref{assump:1} is satisfied. Let $t=t(n)\\in \\mathbf{N}$ satisfy $t\\leq t_n$ for all $n$ where $t_n$ is as in~\\eqref{def:times}. 
Let $h_1, h_2$ be given by $f_2(k)=1-2h_1(k)$ and $(1-2\\lambda)^2= 1- 2h_2(\\lambda)$, and set $\\Delta_n':= h_1(k)-h_2(\\lambda)$ and $\\Delta_n''= k(n-k)\/n^2 - (\\lambda- \\lambda^2)$. Then as $n\\rightarrow \\infty$\n\\begin{align}\n\\label{as:id1} f_1(k)^{t} &=(1-2\\lambda)^{t}\\bigg(1 -\\frac{ 2t \\Delta_n}{1-2\\lambda} + O(t^2 \\Delta_n^2) \\bigg),\\\\\n\\label{as:id2}f_2(k)^t &= (1-2\\lambda)^{2t}\\bigg(1 - \\frac{2t \\Delta_n'}{1- 2\\lambda} +O(t^2( \\Delta_n')^2) \\bigg),\\\\\n\\label{as:id3} f_3(k)^t&= (1-2\\lambda + 2\\lambda^2)^t\\bigg(1- \\frac{2t \\Delta_n''}{1- 2\\lambda} +O(t^2( \\Delta_n'')^2) \\bigg).\n\\end{align}\nFurthermore, \n\\begin{align}\n\\label{eqn:dtil}\n|\\Delta_n'| \\leq 2 |\\Delta_n| + O(1\/n) \\qquad \\text{ and } \\qquad |\\Delta_n''| \\leq |\\Delta_n| \n\\end{align}\nfor all $n$. \n\\end{Lemma}\n\n\\begin{Remark}\n\\label{rem:2}\nNote that a similar formula to~\\eqref{as:id1} is employed on~\\cite[p.444]{NW_19} but without an assumption similar to~\\cref{assump:1}. Such a formula is not valid unless $k\/n\\rightarrow \\lambda \\in (0,1\/2)$ sufficiently fast. \n\\end{Remark}\n\n\\subsection{Proof of the lower bound}\n\nIf we let $f(x)=\\sqrt{n-1}f_1(x)=\\sqrt{n-1}(1-2x\/n)$, then $f$ is tightly concentrated around its mean with respect to the stationary distribution $\\pi_n$. \nThis is true because $\\pi_n$ has mean $n\/2$ and the bulk of its mass is concentrated around its mean with exponentially decaying tails. On the other hand, $\\mathbf{E}_0 f(X_t) \\approx (1-2\\lambda)^{-c}$ for $t=t_n -c$ and large fixed $c$. \nFurthermore, $P_t(0,\\cdot)$ must concentrate values of $f(x)$ tightly around its mean, due to a constant bound on its variance. \nThus, it is intuitive that $P_t(0,\\cdot)$ must assign a significant portion of its mass away from $n\/2$. \nIn short, the bulk of the masses from $\\pi_n$ and $P_t(0,\\cdot)$ do not overlap, providing a lower bound on the total variation distance between these measures. Note that this proof is presented in~\\cite{NW_19}. Below, we give the proof in full detail as it gives credence to the claim that the lower bound in~\\eqref{eqn:lowb1} is sharp. \n\n\\begin{Theorem}\nSuppose that \\cref{assump:1} is satisfied.\nFor all positive $\\epsilon>0$, \n\\[\\TV{P_t(0,\\cdot)-\\pi_n} \\geq 1-\\epsilon\\] \nprovided that $n$ is large enough and \\[t\\leq \\frac{\\log n}{2|\\log(1-2\\lambda)|}-\\frac{\\log(\\sqrt{3} +100)- \\tfrac{1}{2}\\log \\epsilon }{ |\\log (1-2\\lambda)|}.\\]\n\\end{Theorem}\n\n\\begin{proof}\nLet $\\pi$ be the stationary distribution of $(X_t)$ with $n$ suppressed. \nLet $f(x) = \n\\sqrt{n-1}(1-\\frac{2x}{n})$. Since $\\pi$ has hypergeometric distribution as in~\\eqref{eqn:pidist}:\n\\[\\mathbf{E}_{\\pi} f(X_t) = \\sqrt{n-1}\\, \\mathbf{E}_{\\pi} f_1(X_t)=0.\\]\nAlso note that \n\\begin{align*}\n \\text{Var}_{\\pi} f(X_t) &= \\mathbf{E}_{\\pi}f(X_t)^2-(\\mathbf{E}_{\\pi} f(X_t))^2\\\\\n &= (n-1) \\mathbf{E}_{\\pi} f_1(X_t)^2\\\\\n &= \\frac{n-1}{2n-1} +(n-1)\\frac{2n-2}{2n-1} f_2(k)^t\\mathbf{E}_{\\pi}f_2(X_t)\n\\end{align*}\nwhere the last equality follows from \\cref{lem:1}.\nSince $f_2(x)$ is an (right) eigenfunction of the chain with eigenvalue $f_2(k)\\neq 1$, it is orthogonal to $\\pi$.\nIn particular, $\\mathbf{E}_{\\pi} f_2(X_t)=0$. \nThus, $\\text{Var}_{\\pi} f(X_t)\\leq 1\/2$. 
\n\nWe repeat these calculations for the chain started from $0$ for $t$ satisfying \n\\begin{align}\n\\label{eqn:tdef2}\nt= \\frac{\\log n}{2|\\log(1-2\\lambda)|}-c=t_n-c \n\\end{align}\nfor a constant $c$ which we will determine later. Observe by \\cref{lem:1} and \\cref{lem:2} \n\\begin{align}\\mathbf{E}_{0} f(X_t) = \\sqrt{n-1}\\mathbf{E}_0 f_1(X_t) &=\\sqrt{n-1} (1-\\tfrac{2k}{n})^t \\nonumber\\\\ \\label{eqn:310}\n&=\\sqrt{n-1} (1-2\\lambda)^t(1+o(1))\\\\\n&= \\frac{\\sqrt{n-1}}{\\sqrt{n}} (1-2\\lambda)^{-c}(1+o(1))\\nonumber\\\\ \\label{eqn:311}\n&= (1+o(1))(1-2\\lambda)^{-c}.\n\\end{align}\nSimilarly, combining \\eqref{eqn:310} with \\cref{lem:1} and \\cref{lem:2} we have \n\\begin{align*}\n \\text{Var}_{0} f(X_t)&= \\mathbf{E}_0f(X_t)^2- (\\mathbf{E}_0f(X_t))^2\\\\\n &= (n-1) \\left[\\frac{1}{2n-1}+ \\frac{2n-2}{2n-1} f_2(k)^t f_2(0)\\right] - (n-1)(1-2\\lambda)^{2t}(1+o(1))\\\\\n &= (n-1) \\left[\\frac{1}{2n-1}+\\frac{2n-2}{2n-1}(1-2\\lambda)^{2t}( 1+ o(1))\\right] - (n-1)(1-2\\lambda)^{2t}(1+o(1)) \\\\\n &= \\frac{n-1}{2n-1} + (n-1) (1-2\\lambda)^{2t}(1+o(1))-(n-1)(1-2\\lambda)^{2t}(1+o(1))\\\\\n &= \\frac{1}{2}+ o(1) (n-1) (1-2\\lambda)^{2t}= \\frac{1}{2} +o(1) (1-2\\lambda)^{-2c}.\n \\end{align*}\nIn particular, $\\text{Var}_{0}f(X_t) \\leq 3\/2$ for $n$ sufficiently large. \n\n\n\nWe finish the proof by applying Chebychev's inequality. \nLet \n\\begin{align*}\nA_\\alpha = \\{x\\in \\mathcal X \\, : \\, |f(x)|\\leq \\alpha\\} \\qquad \\text{ and } \\qquad B_{r,c}=\\{x\\in \\mathcal X \\, : \\, |f(x) - \\mathbf{E}_0 f(X_t)|\\leq r\\sqrt{3\/2}\\}\n\\end{align*}\nwhere we recall that $t$ is as in~\\eqref{eqn:tdef2}. \nNotice that \\[\\pi(A_\\alpha)=\\mathbf{P}_\\pi (\\{|f(X_t)|\\leq \\alpha\\})=1-\\mathbf{P}_\\pi (\\{|f(X_t)|>\\alpha\\})\\geq 1- \\frac{1}{2\\alpha^2}\\]\nwhere we used $\\mathbf{P}_\\pi(\\{|f(X_t)|>\\alpha\\})\\leq \\alpha^{-2}\\mathbf{E}_\\pi |f(X_t)|^2 \\leq 1\/(2\\alpha^2)$. \nOn the other hand, in a similar way we have \n\\[ {P_t}(0,B_{r,c}) = 1-P_t(0, B_{r,c}^{\\text{c}}) =\n1-\\mathbf{P}_0\\{ |f(X_t)-\\mathbf{E}_0f(X_t)|> r\\sqrt{3\/2}\\}\\geq 1-\\frac{1}{r^2}\\]\nfor $n$ sufficiently large. \nNotice that if $(1-2\\lambda)^{-c}(1+o(1))-r\\sqrt{3\/2}\\geq 100 \\alpha$, then $A_\\alpha$ and $B_{r,c}$ are disjoint.\nTherefore, $\\mu(B_{r,c}^{\\text{c}}) \\geq \\mu(A_\\alpha)$ for any measure $\\mu$ on subsets of $\\mathcal X$.\nIn particular, for appropriate choices of $\\alpha, c, r$ we have that $P_t(0,A_\\alpha)\\leq 1\/r^2$. \n\nNow let $\\epsilon>0$ be small but fixed. Choose $\\alpha, r$ such that $\\frac{1}{2\\alpha^2},\\frac{1}{r^2}\\leq \\epsilon\/2$ and pick $c$ such that $(1-2\\lambda)^{-c}-r\\sqrt{3\/2}\\geq 100\\alpha$. Note that under these constraints, we may choose $$c=\\frac{\\log(\\sqrt{3} +100)- \\tfrac{1}{2}\\log \\epsilon }{ |\\log (1-2\\lambda)|}.$$ \nThen\n\\begin{align*}\n \\TV{P_t(0,\\,\\cdot\\,)-\\pi (\\, \\cdot \\,)}&\\geq |P_t(0,A_\\alpha)-\\pi(A_\\alpha)|\\geq 1-\\frac{1}{2\\alpha^2}- \\frac{1}{r^2}\\geq 1-\\epsilon. \n\\end{align*}\nThis concludes the proof.\n\\end{proof}\n\n\\begin{Remark}\nNotice that if $c\\leq 0$, then $\\mathbf{E}_{0}f(X_t)=(1+o(1))(1-2\\lambda)^{-c}$, which quickly approaches $0$ as $c\\to -\\infty.$ \nIn particular, $\\mathbf{E}_{0}f(X_t)\\to 0$ as $t$ increases beyond $t_n$. \nThis roughly translates to the claim that $P_t(0,\\cdot)$ and $\\pi_n$ both assign almost all of their mass close to $n\/2$. 
\nIf this is true, then the total variation distance between $P_t(0,\\cdot)$ and $\\pi_n$ should be small.\nThis provides evidence for believing that the lower bound on the mixing time is sharp.\n\\end{Remark}\n\n\\section{Getting at distance $o(\\sqrt{n})$}\n\\label{sec:osmallrootn}\n\n\n\nThroughout this section, $(X_t)$ and $(Y_t)$ will denote two copies of the Bernoulli-Laplace chain. The goal of this section is to show that in $3s_n +t_n$ steps (with $s_n, t_n$ as in~\\eqref{def:times}) the chains are distance $o(\\sqrt{n})$ apart with high probability. The argument establishing this fact is broken into a few pieces based on the size of the initial distance $|X_0-Y_0|$. First, we see that in $t_n$ steps, the two copies are within distance $O(\\sqrt{n})$ (see~\\cref{Lemma:Orootn} below) with high probability, regardless of their initial states. Then, starting at distance $O(\\sqrt{n})$ apart, we see that in \\cref{Lemma:orootn} using the path coupling described in \\cref{sec:outline}, the distance is decreased to $o(\\sqrt{n})$ in $s_n$ steps with high probability. Although the two processes become closer in this second step, there is a possibility that they slightly deviate from the mean $n\/2$ of the stationary distribution. Consequently, in \\cref{prop:returntomean} we show that in $2s_n$ steps, the chains return to the $O(\\sqrt{n})$ band about $n\/2$ while maintaining a distance of $o(\\sqrt{n})$ between the two chains. \n\n\\subsection*{Asymptotic notation}\nBelow, we adopt some further asymptotic notation for convenience of mathematical expression. In what follows, if $a_n, b_n $ are sequences of nonnegative real numbers, we write $a_n \\lesssim b_n$ if $a_n =O(b_n)$ as $n\\rightarrow \\infty$ and the constant in the asymptotic estimate does \\emph{not} depend on the parameters $\\kappa_i>0$ below. We will also write $a_n \\cong b_n$ if both $a_n \\lesssim b_n$ and $b_n \\lesssim a_n$. \n\n\n\\subsection{Getting at distance $O(\\sqrt{n})$}\n\nApplying \\cref{lem:1} and~\\cref{lem:2}, we will now show that running the chains $(X_t)$ and $(Y_t)$ for $t=t_n$ steps brings $X_t$ and $Y_t$ within distance $O(\\sqrt n)$ of each other with high probability. \n\n\\begin{Lemma}\\label{Lemma:Orootn}\nSuppose \\cref{assump:1} is satisfied and \nfor a fixed constant $\\kappa_1\\in (0,\\infty)$, let \\[\\tau_1(\\kappa_1):=\\min \\left\\{t\\,:\\, X_t,Y_t\\in (\\tfrac{n}{2}-\\kappa_1\\sqrt{n},\\tfrac{n}{2}+\\kappa_1\\sqrt{n}) \\right\\}.\\]\nThen, \\[\\mathbf{P}(\\tau_1(\\kappa_1)>t_n) \\lesssim \\frac{1}{\\kappa_1^2}.\\]\n \\end{Lemma}\n\\begin{proof}\nBy combining Chebychev's inequality with \\cref{lem:1} and~\\cref{lem:2} we have\n\\begin{align}\n\\nonumber \\mathbf{P}_{X_0}\\left(\\left|X_{t_n}-\\tfrac{n}{2}\\right|>\\kappa_1 \\sqrt{n}\\right)= \\mathbf{P}(f_1(X_{t_n})^2>4\\kappa_1^2\/n)&\\leq \\frac{n}{4 \\kappa_1^2} \\mathbf{E}_{X_0} f_1(X_{t_n})^2\\\\\n\\nonumber &= \\frac{n}{4\\kappa_1^2}[\\tfrac{1}{2n-1} + \\tfrac{2n-2}{2n-1}f_2(k)^{t_n} f_2(X_0)]\\\\\n&\\lesssim \\frac{1}{\\kappa_1^2} + \\frac{n}{\\kappa_1^2}(1-2\\lambda)^{2t_n} \\lesssim \\frac{1}{\\kappa_1^2} \\label{cheby3} \n\\end{align}\nwhere in the asymptotic inequality~\\eqref{cheby3} we used~\\eqref{eqn:f2bound} to bound $f_2(X_0)$. \nThus, \n\\[\\mathbf{P}\\left(\\tau_1(\\kappa_1)> t_n\\right)\\leq \\mathbf{P}_{X_0} \\left(|X_{t_n}-\\tfrac{n}{2}|>\\kappa_1\\sqrt{n}\\right)+\\mathbf{P}_{Y_0} \\left(|Y_{t_n}-\\tfrac{n}{2}|>\\kappa_1\\sqrt{n}\\right)\\lesssim \\frac{1}{\\kappa_1^2}\\]\nby the estimate \\eqref{cheby3}. 
\n\\end{proof}\n\n\nGiven that the chains are sufficiently close to $n\/2$, we want to ensure that they stay near $n\/2$ for a significant window of time. \\cref{prop:step1} below shows that with high probability, the chains will stay close to the mean for $s_n=\\lambda^{-1}\\log\\log n$ steps after falling within the $\\kappa_1\\sqrt{n}$ range of $n\/2$. Later, this provides ample time to move the chains closer without deviating too far from the mean of the stationary distribution.\n\n\\begin{Proposition}\\label{prop:step1}\nSuppose that~\\cref{assump:1} is satisfied and that $|X_{0}-\\tfrac{n}{2}|<\\kappa_1 \\sqrt{n}$. Then, for every $r\\in (0, \\infty)$, $s\\in\\mathbf{N}$ we have \n\\[\\mathbf{P}\\Big(\\sup\\limits_{t\\in I_n(s)} |X_t-\\tfrac{n}{2}|>r\\Big)\\lesssim \\frac{n}{r^2}(\\log n)^{|P_\\lambda|},\\]\nwhere $I_n(s)=[s, s+ s_n]$, $P_\\lambda = \\log(1-2\\lambda)\\frac{2}{\\lambda}$ and $s_n= \\lambda^{-1}\\log \\log n$ is as in~\\eqref{def:times}. \n\\end{Proposition}\n\n\\begin{proof}Let $s \\in \\mathbf{N}$. For $t\\ge 0$, note that $|X_t-\\tfrac{n}{2}|>r$ if and only if $f_1(X_t)=|1-2X_t\/n|>2r\/n$.\nDefine $M_t =f_1(X_t)\/f_1(k)^t$. It follows that $M_t$ is a martingale by~\\cref{lem:1} relation \\eqref{id:1}. Setting $s_n= \\lambda^{-1} \\log \\log n$, we thus obtain\n\\begin{align*}\n \\mathbf{P}\\Big(\\sup\\limits_{t\\in I_n(s)} \\left|X_t-\\tfrac{n}{2}\\right|>r\\Big)=\\mathbf{P}\\Big(\\sup\\limits_{t\\in I_n(s)} |f_1(X_t)|>\\tfrac{2r}{n}\\Big)&=\\mathbf{P}\\Big(\\sup\\limits_{t\\in I_n(s)}f_1(k)^t |M_t|>\\tfrac{2r}{n}\\Big)\\\\\n &\\leq \\mathbf{P}\\bigg(\\sup\\limits_{t\\in I_n(s)} |M_t|>\\frac{2r}{n}\\frac{1}{f_1(k)^s }\\bigg)\\\\\n &\\leq \\frac{n^2f_1(k)^{2s}}{4r^2}\\mathbf{E}|M_{s+s_n} |^2,\n\\end{align*}\nwhere on the last inequality we used the fact that $|M_t|^2$ is a submartingale. Note that\\begin{align*}\n \\frac{n^2f_1(k)^{2s}}{4r^2}\\mathbf{E}|M_{s+s_n} |^2\n &=\\frac{n^2}{4r^2}\\frac{1}{f_1(k)^{2s_n}}\\mathbf{E} f_1(X_{s+s_n})^2\\\\\n &=\\frac{n^2}{4r^2}\\frac{1}{\\left(1-\\frac{2k}{n}\\right)^{2s_n}}\\left(\\frac{1}{2n-1}+\\frac{2n-2}{2n-1}f_2(k)^{s+s_n}f_2(X_0)\\right). \n\\end{align*}\nwhere on the last line we use \\cref{lem:1} relation~\\eqref{id:2}. \n\nNow observe that by \\cref{lem:1}, $f_2(X_0)=O(n^{-1})$ since $|X_0- \\tfrac{n}{2}| \\leq \\kappa_1 \\sqrt{n}$. Also, using~\\cref{lem:2} with \\cref{assump:1} implies $f_2(k)^{s+s_n}= o(1)$ as\n $n\\rightarrow \\infty$. Combining the previous two observations and using \\cref{lem:2} again produces\n\\begin{align*}\n \\frac{n^2}{4r^2}\\frac{1}{\\left(1-\\frac{2k}{n}\\right)^{2s_n}}\\left(\\frac{1}{2n-1}+\\frac{2n-2}{2n-1}f_2(k)^{s+s_n}f_2(X_0)\\right)\n \\cong \\frac{n}{8r^2}\\frac{1}{\\left(1-\\frac{2k}{n}\\right)^{2s_n}} \\cong \\frac{n}{8r^2}(\\log n)^{|P_\\lambda|}. \n \n\\end{align*}\nThus,\n\\[\\mathbf{P}\\bigg(\\sup\\limits_{t\\in I_n(s)} \\left|X_t-\\tfrac{n}{2}\\right|>r\\bigg)\\lesssim \\frac{n}{r^2}(\\log n)^{|P_\\lambda|} ,\\] as claimed. \n\\end{proof}\n\n\n\\subsection{Getting at distance $o(\\sqrt{n})$ from $O(\\sqrt{n})$}\n\nNow, given that $X_0$ and $Y_0$ are within distance $O(\\sqrt{n})$ of $n\/2$, \\cref{Lemma:orootn} below will bring the two chains $X_t$ and $Y_t$ within distance $o(\\sqrt{n})$ of each other with high probability in a negligible number of steps. This result is a modified version of~\\cite[Lemma 12]{EN}. Note, however, that although the distance between the chains decreases, the chains may slightly deviate from the mean. 
See the choice of $r_n=\\sqrt{n} (\\log n)^{|P_\\lambda|\/2}$ below in the statement of~\\cref{Lemma:orootn}. \n\n\n\\begin{Lemma}\\label{Lemma:orootn}\nSuppose that~\\cref{assump:1} is satisfied and that $X_0,Y_0\\in (\\frac{n}{2}-\\kappa_2\\sqrt{n},\\frac{n}{2}+\\kappa_2\\sqrt{n})$ for some $\\kappa_2\\in (0,\\infty)$. Suppose, furthermore, that the chains $(X_t)$,$(Y_t)$ are coupled according to \\cref{coupling}. For $\\kappa_3\\in (0,\\infty)$, consider the stopping time \n\\[\\tau_3(\\kappa_3):= \\min \\left\\{ t:|X_t-Y_t|\\leq \\frac{\\sqrt{n}}{\\log \\log n}\\quad\\text{ and }\\quad X_t,Y_t\\in \\left(\\tfrac{n}{2}-\\kappa_3r_n,\\tfrac{n}{2}+\\kappa_3r_n\\right )\\right\\}\\]\nwhere $r_n =\\sqrt{n}(\\log n)^{|P_\\lambda|\/2}.$\nThen we have \n\\[\\mathbf{P}\\left(\\tau_3(\\kappa_3)>s_n\\right)\\lesssim \\frac{1}{\\kappa_3^2}\\]\nwhere $s_n= \\lambda^{-1} \\log \\log n$ is as in~\\eqref{def:times}. \n\\end{Lemma}\n\n\n\n\\begin{proof}\nRecall by~\\cref{assump:1}, $k<\\tfrac{n}{2}$. Also recall that (see~\\eqref{eqn:tcouple})\n\\[\\tau_{\\emph{couple}}\\left(\\frac{\\sqrt{n}}{\\log \\log n}\\right)=\\min\\left\\{t: |X_t-Y_t|\\leq \\frac{\\sqrt{n}}{\\log\\log n}\\right\\}.\\] \nBy definition of $\\tau_3(\\kappa_3)$,\n\\begin{align*}\n\\left\\{\\tau_3(\\kappa_3)>s_n\\right\\}&\\subseteq\\Big\\{\\tau_{\\emph{couple}}\\left(\\tfrac{\\sqrt n}{\\log\\log n}\\right)> s_n\\Big\\} \\cup \\bigg( \\bigcup_{t\\in [0,s_n]}\\left\\{|X_t-\\tfrac{n}{2}|\\vee|Y_t-\\tfrac{n}{2}|>\\kappa_3 r_n \\right\\}\\bigg) .\\end{align*}\nBy \\cref{prop:couple}, the assumption on $|X_0-Y_0|$ and \\cref{lem:2}, we know that \n\\begin{align*}\n \\mathbf{P}\\Big(\\tau_{\\emph{couple}}\\left(\\tfrac{\\sqrt{n}}{\\log\\log n}\\right)>s_n\\Big)\n &\\lesssim (1-2\\lambda +2\\lambda^2)^{s_n}\\frac{2\\kappa_2\\sqrt{n}}{\\tfrac{\\sqrt{n}}{\\log\\log n}}\\\\\n &= e^{\\log(1-2\\lambda + 2\\lambda^2)\\lambda^{-1}\\log\\log n}2 \\kappa_2\\log\\log n\\\\\n &=\\frac{2\\kappa_2\\log\\log n}{(\\log n)^{|\\log(1-2\\lambda+2\\lambda^2)|\\lambda^{-1}}} =o(1). \n\\end{align*}\nApplying \\cref{prop:step1} we get \n\\[\\mathbf{P}\\bigg(\\bigcup_{t\\in [0,s_n]}\\left\\{|X_t-\\tfrac{n}{2}|\\vee|Y_t-\\tfrac{n}{2}|>\\kappa_3r_n \\right\\}\\bigg)\\lesssim \\frac{n}{\\kappa_3^2 r_n^2}(\\log n)^{|P_\\lambda|}\\lesssim \\frac{1}{\\kappa_3^2}.\\]\nUsing a union bound, we obtain the claimed result. \n\\end{proof}\n\nRecall that although \\cref{Lemma:orootn} brings $X_t$ and $Y_t$ within distance $o(\\sqrt{n})$ of one another in a negligible amount of time, it is possible for $X_t$ and $Y_t$ to deviate from $n\/2$ by more than $O(\\sqrt{n})$. \nThe idea in \\cref{prop:returntomean} is that, even if this does happen, by taking another negligible number of steps we can ensure that with high probability the copies will return to the desired distance apart within the desired $O(\\sqrt{n})$ window about $n\/2$. \n\n\n\n\\begin{Proposition}\\label{prop:returntomean}\nSuppose \\cref{assump:1} is satisfied and that $(X_t)$ and $(Y_t)$ are coupled as in \\cref{coupling} with \n\\begin{align*}\nX_0,Y_0\\in \\left(\\tfrac{n}{2}-\\kappa_3\\sqrt{n}(\\log n)^{\\frac{|P_{\\lambda}|}{2}},\\tfrac{n}{2}+\\kappa_3\\sqrt{n}(\\log n)^{\\frac{|P_{\\lambda}|}{2}}\\right) \\qquad \\text{ and } \\qquad |X_0-Y_0| \\leq \\frac{\\sqrt{n}}{\\log\\log n}\n\\end{align*}\nfor some $\\kappa_3\\in (0,\\infty)$. 
\nFor $\\kappa_4\\in (0,\\infty)$, define the stopping time \n\\[\\tau_4(\\kappa_4):= \\min \\left\\{ t:|X_t-Y_t|\\leq \\frac{\\sqrt{n}}{\\log \\log n}\\,\\,\\,\\text{ and }\\,\\,\\, X_t,Y_t\\in \\left(\\tfrac{n}{2}-\\kappa_4\\sqrt{n},\\tfrac{n}{2}+\\kappa_4\\sqrt{n}\\right )\\right\\}.\\]\nThen we have \n\\[\\mathbf{P}\\left(\\tau_4(\\kappa_4)>2s_n \\right)\\lesssim \\frac{1}{\\kappa_4^2}\\]\nwhere $s_n$ is as in~\\eqref{def:times}. \n\\end{Proposition}\n\n\\begin{proof}\nBy construction of the \\cref{coupling}, $t\\mapsto |X_t -Y_t|$ is monotone decreasing. Hence, by the hypothesis on the initial condition we have \\[|X_t-Y_t|\\leq |X_0-Y_0| \\leq \\frac{\\sqrt{n}}{\\log \\log n}\\]\nfor all $t\\geq 0$. \nIn particular, \\[\\tau_4(\\kappa_4)= \\min \\left\\{ t: X_t,Y_t\\in \\left(\\frac{n}{2}-\\kappa_4\\sqrt{n},\\frac{n}{2}+\\kappa_4\\sqrt{n}\\right )\\right\\}.\\]\nUsing \\cref{lem:1} with~\\cref{lem:2} gives\n\\begin{align}\n \\nonumber \\mathbf{E}_{X_0} f_1(X_{2s_n})^2 &= \\frac{1}{2n-1} + \\frac{2n-2}{2n-1} f_2(k)^{2s_n} f_2(X_0)\\\\\n \\label{eqn:cong1} & \\cong \\frac{1}{2n-1} + \\frac{2n-2}{2n-1} (1-2\\lambda)^{4s_n} f_2(X_0). \n \\end{align}\n By hypothesis on $X_0$, it follows by~\\cref{lem:2} that\n \\begin{align*}\n f_2(X_0) = O\\Big( \\frac{(\\log n)^{|P_\\lambda|}}{n}\\Big). \n \\end{align*}\n Plugging this into~\\eqref{eqn:cong1} implies \n \\begin{align*}\n \\mathbf{E}_{X_0} f_1(X_{2s_n})^2 &\\lesssim \\frac{1}{2n-1} + \\frac{2n-2}{2n-1} O(n^{-1} (\\log n)^{- |P_\\lambda|})\\\\\n & = O(n^{-1}) \n \\end{align*} \nwith the same estimate holding true for $Y_{2s_n}$. \n\nCombining these estimates with the monotonicity of the coupling along with a union bound and Chebychev's inequality then gives \n\\begin{align*}\n\\mathbf{P}_{X_0,Y_0}(\\tau_4(\\kappa_4)>2 s_n)&\\leq \\mathbf{P}_{X_0,Y_0}\\left(|X_{2s_n}-\\tfrac{n}{2}|\\vee |Y_{2s_n}-\\tfrac{n}{2}|> \\kappa_4\\sqrt{n}\\right)\\\\\n&=\\mathbf{P}_{X_0,Y_0}\\left(|f_1(X_{2s_n})|\\vee |f_1(Y_{2s_n}) |> 2\\kappa_4\/\\sqrt{n}\\right) \\lesssim \\frac{1}{\\kappa_4^2}, \n\\end{align*}\nfinishing the proof. \n\\end{proof}\n\n\n\n\\section{The upper bound on the mixing time}\n\\label{sec:upperbound}\n\n\nIf $X$ is a random variable, then we let $\\mu_X$ denote its distribution. The goal of this section is to show that if $X_0-Y_0=o(\\sqrt{n})$ and $X_0=\\tfrac{n}{2}+O(\\sqrt{n})$, $Y_0=\\tfrac{n}{2}+O(\\sqrt{n})$ as $n\\rightarrow \\infty$, then \n\\begin{align}\n\\label{eqn:TVo1}\n\\TV{P_1(X_0, \\, \\cdot \\,) - P_1(Y_0, \\, \\cdot \\,) }=\\TV{\\mu_{X_1}-\\mu_{Y_1}} = o(1) \\,\\, \\text{ as }\\,\\, n\\rightarrow \\infty. \n\\end{align}\nCombining this with the results of the previous section ultimately yields the desired upper bound on the mixing time. \n\nOur method to show~\\eqref{eqn:TVo1} is quite different from the method used in~\\cite{EN}. To see why, \nlet $\\text{Hyper}(j, \\ell, m)$ denote the hypergeometric distribution of the number of type 1 objects among $m$ objects selected without replacement from a total of $j$ objects, $\\ell$ of which are of type 1 and $j-\\ell$ are of type 2. Also, let $\\text{Bin}(j, r)$ denote the binomial distribution of $j$ trials with success probability $r$. Observe that we can write \n\\begin{align}\nX_1-X_0= H_1 - H_0 \\qquad \\text{ and } \\qquad Y_1 - Y_0 = H_3 - H_2 \n\\end{align}\nwhere $H_1\\sim \\text{Hyper}(n,n-X_0, k)$ and $H_0\\sim \\text{Hyper}(n, X_0, k)$ while $H_3 \\sim \\text{Hyper}(n,n-Y_0, k)$ and $H_2\\sim \\text{Hyper}(n, Y_0, k)$. 
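\n\nAs a quick numerical sanity check of this representation (a minimal sketch with hypothetical toy values of $n$, $k$ and $X_0$, given only for illustration and not part of the argument), one can sample the two independent hypergeometrics and compare the empirical mean of $f_1(X_1)$ with the eigenfunction identity~\\eqref{id:1} of \\cref{lem:1}:\n\\begin{verbatim}\n# Sketch: one step of the (n,k)-Bernoulli-Laplace chain as a\n# difference of independent hypergeometrics (toy parameters).\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, k, x0, m = 1000, 300, 420, 200_000\n\nh1 = rng.hypergeometric(n - x0, x0, k, size=m)  # type-1 gained\nh0 = rng.hypergeometric(x0, n - x0, k, size=m)  # type-1 lost\nx1 = x0 + h1 - h0\n\nf1 = lambda x: 1 - 2 * x \/ n\nprint(f1(x1).mean())             # Monte Carlo E f_1(X_1)\nprint((1 - 2 * k \/ n) * f1(x0))  # eigenfunction prediction\n\\end{verbatim}\n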
Thus we wish to estimate\n\\begin{align}\\label{eqn:hyperrep}\n\\TV{\\mu_{X_1}-\\mu_{Y_1}}= \\TV{ \\mu_{X_0+ H_1 - H_0} -\\mu_{Y_0 +H_3- H_2} }. \n\\end{align} \n\nIn order to estimate~\\eqref{eqn:hyperrep}, we employ a large $n$ comparison of the hypergeometric distributions $H_i$, $i=0,1,2,3$, in the total variation distance. Because $k\/n\\rightarrow \\lambda \\in (0,1\/2)$ instead of $k=o(n)$ as $n\\rightarrow \\infty$, we cannot appeal to the result of Diaconis and Freedman~\\cite{DF_80}, which in particular shows that if $k=o(n)$, then \n\\begin{align}\n\\label{eqn:hyperb}\n\\TV{\\mu_{H_i} -\\mu_B}\\lesssim \\frac{k}{n} + \\sqrt{\\frac{k}{n}}\n\\end{align}\nwhere $B\\sim \\text{Bin}(k, 1\/2)$. Using normal approximation of the binomial, the desired result~\\eqref{eqn:TVo1} follows~\\cite[Lemma 16]{EN} in the case when $k=o(n)$ by using the triangle inequality several times in~\\eqref{eqn:hyperrep}. \n\nNote that in our case, the righthand side of~\\eqref{eqn:hyperb} is order $1$. Hence, instead of using a binomial and then normal approximation, we manufacture a discrete version of the normal that approximates the $H_i$'s in total variation. Combining this approximation with the triangle inequality a few times in~\\eqref{eqn:hyperrep}, we will then be able to conclude~\\eqref{eqn:TVo1}. \n\n\\subsection{The discrete normal distribution}\nIn what follows, we will use $\\phi :\\mathbf{R}\\rightarrow \\mathbf{R}$ to denote the probability density function of the standard normal on $\\mathbf{R}$; that is,\n\\begin{align}\n\\phi(x) = \\frac{e^{-\\frac{x^2}{2}}}{\\sqrt{2\\pi}}, \\qquad x\\in \\mathbf{R}. \n\\end{align}\n\n\\begin{Definition}\nWe say that a random variable $Z$ distributed on the integers $\\mathbf{Z}$ has a \\emph{discrete normal distribution} with parameters $x\\in \\mathbf{R}$, $s>0$ and finite set $\\mathcal{S} \\subset \\mathbf{Z}$, denoted by $Z \\sim \\text{dN}(x, s, \\mathcal{S})$, if \n\\begin{align*}\n\\mathbf{P}(Z= j) = \n\\begin{cases}\n\\frac{1}{\\mathcal{N}}\\frac{\\phi \\big( \\frac{j-x}{s}\\big)}{s} & \\text{ if } \\,\\, j \\in \\mathcal{S}\\\\\n0 & \\text{ if } j \\notin \\mathcal{S}\n\\end{cases}\n\\end{align*}\nwhere $\\mathcal{N}:= \\sum_{j\\in \\mathcal{S}} \\frac{\\phi \\big( \\frac{j-x}{s}\\big)}{s}$ is the normalization constant. \\end{Definition}\n\n\\begin{Remark}\nDepending on the choice of parameters $x\\in \\mathbf{R}$, $s>0$ and finite set $\\mathcal{S}$, the name \\emph{discrete normal} above is a bit of a misnomer. Indeed, unless properly shifted and scaled with a large enough set $\\mathcal{S}$, a random variable $Z\\sim \\text{dN}(x,s, \\mathcal{S})$ may have little to do with the usual normal distribution. All of the discrete normals used below, however, will indeed be reminiscent of a normal distribution due to the particular choices of $x,s$ and $\\mathcal{S}$. \n\\end{Remark}\n\nIn order to setup and prove the next result, set $\\mathcal X_k= \\{0,1,2,\\ldots, k\\}$, fix $\\ell \\in \\mathcal X$ and define parameters\n\\begin{align}\n\\label{eqn:not1}\np:=\\frac{\\ell}{n},\\, \\,\\, f:= \\frac{k}{n},\\, \\,\\,q:=1-p, \\,\\,\\, \\sigma:= \\sqrt{k pq (1-f)}.\n\\end{align}\nFurthermore, for $j\\in \\mathcal X_k$ set\n\\begin{align}\n\\label{eqn:not2}\n x_j := \\frac{j - kp}{\\sqrt{k p q}} \\qquad \\text{ and } \\qquad \\tilde{x}_j := \\frac{x_j}{\\sqrt{1-f}}= \\frac{j-kp}{\\sigma}. \n\\end{align}\nWe first need a technical lemma concerning the behavior of the normalization constant for a particular discrete normal. 
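\nAs a quick numerical illustration of the definition above (a minimal sketch with hypothetical toy parameters, not taken from \\cite{LCM_07} or from the rest of the argument), one can tabulate the weights of $\\text{dN}(kp,\\sigma,\\mathcal X_k)$ and inspect the normalization constant that the next lemma controls:\n\\begin{verbatim}\n# Sketch: weights of dN(kp, sigma, {0,...,k}) and the\n# normalization constant N_n (toy parameters).\nimport numpy as np\n\nn, k, ell = 10_000, 3_000, 5_050   # ell = n\/2 + O(sqrt(n))\np = ell \/ n; q = 1 - p; f = k \/ n\nsigma = np.sqrt(k * p * q * (1 - f))\n\nj = np.arange(k + 1)\nw = np.exp(-0.5 * ((j - k * p) \/ sigma) ** 2) \/ (sigma * np.sqrt(2 * np.pi))\nprint(w.sum())       # N_n, expected to be 1 + O(n**-0.5)\npmf = w \/ w.sum()    # the pmf of Z ~ dN(kp, sigma, X_k)\n\\end{verbatim}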
\n\\begin{Lemma}\n\\label{lem:normalization}\nGiven the choice of parameters in~\\eqref{eqn:not1} and~\\eqref{eqn:not2}, suppose \\cref{assump:1} is satisfied and that $\\ell=n\/2+O(\\sqrt{n})$. Then $$\\mathcal{N}_n:=\\sum_{j\\in \\mathcal X_k}\\frac{\\phi(\\tilde{x}_j)}{\\sigma}=1 +O(n^{-1\/2})\\,\\,\\, \\text{ as }\\,\\,\\, n\\rightarrow \\infty.$$ \n\\end{Lemma}\n\n\\begin{proof}\nThe proof of this result follows by integral comparison. Note that \n\\begin{align*}\n\\mathcal{N}_n &\\leq \\frac{1}{\\sqrt{2\\pi \\sigma^2}}+ \\sum_{j=0}^{\\lfloor kp\\rfloor-1}\\frac{ e^{-\\frac{1}{2}\\big(\\frac{j-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}} + \\sum_{j=\\lfloor kp\\rfloor +1}^k \\frac{ e^{-\\frac{1}{2}\\big(\\frac{j-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}}\\\\\n& \\leq \\int_\\mathbf{R} \\frac{ e^{-\\frac{1}{2}\\big(\\frac{x-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}} \\, dx + \\frac{1}{\\sqrt{2\\pi \\sigma^2}}=1 + \\frac{1}{\\sqrt{2\\pi \\sigma^2}}. \n\\end{align*} \nGiven the asymptotic behavior of $\\sigma$, this finishes the proof of the upper bound. To obtain the lower bound, note that \n\\begin{align*}\n\\mathcal{N}_n &\\geq \\sum_{j=1}^{\\lfloor kp\\rfloor }\\frac{ e^{-\\frac{1}{2}\\big(\\frac{j-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}} + \\sum_{j=\\lfloor kp\\rfloor}^k \\frac{ e^{-\\frac{1}{2}\\big(\\frac{j-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}} - \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\\\\n& \\geq \\int_0^k \\frac{ e^{-\\frac{1}{2}\\big(\\frac{x-kp}{\\sigma} \\big)^2}}{\\sqrt{2\\pi \\sigma^2}} \\, dx - \\frac{1}{\\sqrt{2\\pi \\sigma^2}}\\\\\n&= 1- \\int_{-\\infty}^{-kp\/\\sigma} \\phi(x) \\, dx - \\int_{kq\/\\sigma}^\\infty \\phi(x) \\, dx - \\frac{1}{\\sqrt{2\\pi \\sigma^2}}. \\end{align*}\nNow if $\\mathcal{Z}$ is a standard normal random variable on $\\mathbf{R}$, we note that \n\\begin{align*}\n\\mathcal{N}_n &\\geq 1- \\int_{-\\infty}^{-kp\/\\sigma} \\phi(x) \\, dx - \\int_{kq\/\\sigma}^\\infty \\phi(x) \\, dx - \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\\\\n&=1- \\mathbf{P}(\\mathcal{Z}\\leq -k p\/\\sigma) -\\mathbf{P}(\\mathcal{Z} \\geq kq\/\\sigma)- \\frac{1}{\\sqrt{2\\pi \\sigma^2}}. \n\\end{align*}\nBy Chebychev's inequality on the square $\\mathcal{Z}^2$, we see that both $\\mathbf{P}(\\mathcal{Z}\\leq -kp\/\\sigma)$ and $\\mathbf{P}(\\mathcal{Z} \\geq kq\/\\sigma)$ are order $1\/n$ as $\\ell=n\/2 + O(\\sqrt{n})$. We thus conclude the result. \n\\end{proof}\n\n\nBefore stating the main comparison lemma, we will use a result from \\cite{LCM_07}.\nThough the following lemma is not stated explicitly in \\cite{LCM_07}, the proof of it is contained within the proof of \\cite[Theorem~1]{LCM_07}.\nIn particular, we will restate the assumptions that are required for \\cite[Equations~(4.28)-(4.29)]{LCM_07}.\n\n\\begin{Lemma}\\label{StatLemma}\nSuppose $0<p<1$ and, for a small fixed constant $\\delta>0$, set $L=\\lceil kp-\\delta kpq(1-f)\\rceil$ and $R=\\lfloor kp+\\delta kpq(1-f)\\rfloor$, the central range of $\\mathcal X_k$ on which \\cite[Equations~(4.28)-(4.29)]{LCM_07} apply. Then, with $H\\sim \\text{Hyper}(n,\\ell,k)$ and parameters as in~\\eqref{eqn:not1} and~\\eqref{eqn:not2}, there exists a constant $C>0$, independent of $n$, such that\n\\begin{align*}\n\\sum_{j=L}^{\\lfloor kp\\rfloor}\\bigg|\\mathbf{P}(H=j)-\\frac{\\phi(\\tilde{x}_j)}{\\sigma}\\bigg|\\leq \\frac{C}{\\sigma}.\n\\end{align*}\n\\end{Lemma}\n\nWith this in hand, we can state and prove the comparison between the hypergeometric distributions and the discrete normals.\n\n\\begin{Lemma}\\label{hyper}\nSuppose that \\cref{assump:1} is satisfied and that $\\ell=n\/2+O(\\sqrt{n})$. Let $H\\sim \\text{Hyper}(n,\\ell,k)$ and let $\\mu$ denote the law of $Z\\sim \\text{dN}(kp,\\sigma,\\mathcal X_k)$ with parameters as in~\\eqref{eqn:not1} and~\\eqref{eqn:not2}. Then, as $n\\rightarrow \\infty$,\n\\begin{align*}\n\\TV{\\mu_H-\\mu}\\lesssim \\frac{1}{\\sqrt{n}}.\n\\end{align*}\n\\end{Lemma}\n\n\\begin{proof}\nLet $L$ and $R$ be as in \\cref{StatLemma}. Then\n\\begin{align*}\n 2\\TV{\\mu_H-\\mu}= \\sum_{j\\in \\mathcal X_k}|\\mathbf{P}(H=j)-\\mu(j)|\\leq \\sum_{j=L}^{R}|\\mathbf{P}(H=j)-\\mu(j)| + \\sum_{j\\notin [L,R]}\\mu(j) + \\mathbf{P}(H< L \\text{ or } H> R).\n \\end{align*}\n The middle term on the righthand side is exponentially small in $n$ by a standard Gaussian tail bound, since $\\ell=n\/2+O(\\sqrt{n})$. \n Using the well known tail bound in \\cite{Hoeffding_1963}, we arrive at \n \\begin{align*}\n \\mathbf{P}(H< L \\text{ or } H> R) \\leq 2e^{-2k\\left(\\delta pq(1-f)\\right)^2}\\lesssim \\frac{1}{\\sqrt{n}}. \n \\end{align*}\n Furthermore, it follows from symmetry, \\cref{lem:normalization}, and \\cref{StatLemma} that\n \\begin{align*}\n \\sum_{j=L}^{R}|\\mathbf{P}(H=j)-\\mu(j)| \\leq 2\\sum_{j=L}^{\\lfloor kp\\rfloor}\\bigg|\\mathbf{P}(H=j)-\\frac{\\phi((j-kp)\/\\sigma)}{\\sigma}\\bigg| + O(n^{-1\/2})\\leq \\frac{C}{\\sigma} \\lesssim \\frac{1}{\\sqrt{n}}. 
\n \\end{align*}\n This concludes the proof.\n\\end{proof}\n\n\n\\subsection{Using the hypergeometric comparison}\nLet us now combine the previous result with~\\eqref{eqn:hyperrep} in order to see what we have left to estimate. To this end, let $Z_i \\sim \\text{dN}(k p_i, \\sigma_i, \\mathcal X_k), i=0,1,2,3$, with\n\\begin{align}\n\\label{eqn:not3}\n\\ell_0= X_0, \\,\\,\\, \\ell_1=n-X_0, \\,\\,\\, \\ell_2= Y_0, \\,\\,\\, \\ell_3=n-Y_0, \\,\\,\\, p_i=\\ell_i\/n,\n\\end{align} \nand \n\\begin{align}\n\\label{eqn:not4}\n\\sigma_i = \\sqrt{kp_i (1-p_i) (1-k\/n)}, i=0,1,2,3. \n\\end{align}\nNote that if $\\eta= X_0-Y_0$, then by the triangle inequality and properties of the total variation distance of random variables distributed on $\\mathbf{Z}$\n\\begin{align*}\n\\TV{\\mu_{X_1}- \\mu_{Y_1}} &= \\TV{\\mu_{X_0+ H_1-H_0}- \\mu_{Y_0+ H_3-H_2} }\\\\\n & \\leq \\TV{\\mu_{X_0 + H_1-H_0} - \\mu_{X_0+ Z_1-Z_0} } + \\TV{\\mu_{X_0 +Z_1 -Z_0}- \\mu_{Y_0 +Z_3-Z_2}}\\\\\n & \\qquad + \\TV{\\mu_{Y_0 +Z_3 -Z_2}- \\mu_{Y_0 +H_3-H_2}}\\\\\n &= \\TV{\\mu_{H_1-H_0} - \\mu_{Z_1-Z_0} } + \\TV{\\mu_{\\eta+Z_1 -Z_0}- \\mu_{Z_3-Z_2}}\\\\\n & \\qquad + \\TV{\\mu_{Z_3 -Z_2}- \\mu_{H_3-H_2}}\\\\\n & \\leq \\sum_{i=0}^3\\TV{\\mu_{H_i}- \\mu_{Z_i}} + \\| \\mu_{\\eta + Z_1}- \\mu_{Z_3} \\|_{TV} + \\|\\mu_{Z_0}- \\mu_{Z_2} \\|_{TV}. \n \\end{align*}\n By \\cref{hyper}, \n \\begin{align*}\n \\sum_{i=0}^3\\TV{\\mu_{H_i}- \\mu_{Z_i}} = O(n^{-1\/2}) \n \\end{align*}\n assuming $\\eta=X_0-Y_0=o(\\sqrt{n})$ and $X_0= \\tfrac{n}{2}+O(\\sqrt{n}), Y_0= \\tfrac{n}{2}+O(\\sqrt{n})$. \nThus, we must bound \n\\begin{align*}\n\\| \\mu_{\\eta + Z_1}- \\mu_{Z_3} \\|_{TV} + \\|\\mu_{Z_0}- \\mu_{Z_2} \\|_{TV}\n\\end{align*}\nunder the same assumptions. \n\\begin{Lemma}\\label{normal}\nSuppose that \\cref{assump:1} is satisfied and consider the parameters $\\ell_i$, $p_i$, $\\sigma_i$ as in \\eqref{eqn:not3} and the random variables $Z_i \\sim \\text{\\emph{dN}}(kp_i, \\sigma_i, \\mathcal X_k)$. If $\\eta=X_0-Y_0=o(\\sqrt{n})$ and $X_0=n\/2+ O(n^{1\/2})$, $Y_0= n\/2+O(n^{1\/2})$, then as $n\\rightarrow \\infty$ \n\\begin{align}\n\\label{eqn:Zs}\n\\| \\mu_{\\eta + Z_1}- \\mu_{Z_3} \\|_{TV} + \\|\\mu_{Z_0}- \\mu_{Z_2} \\|_{TV} =o(1).\n\\end{align} \n\\end{Lemma}\n\n\\begin{proof}\nNote that it suffices to show that $\\TV{\\mu_{\\eta+Z_1} - \\mu_{Z_3}}\\rightarrow 0$ as $n\\rightarrow \\infty$. Let $\\epsilon >0$, $\\mathcal{Y}_\\eta := \\mathcal X_k \\cap (\\mathcal X_k + \\eta)$ and $\\mathcal{N}_{n,1}$ and $\\mathcal{N}_{n,3}$ denote the respective normalization constants for $Z_1$ and $Z_3$. Fixing a constant $K>0$ to be determined momentarily, let $J_n(K)=[\\tfrac{k}{2}-K \\sqrt{n}, \\tfrac{k}{2}+ K \\sqrt{n}]$. Note we can write\n\\begin{align*}\n2 \\TV{\\mu_{\\eta+Z_1} - \\mu_{Z_3}}&= \\sum_{j\\in \\mathbf{Z}} | \\mathbf{P}(Z_1 = j- \\eta) - \\mathbf{P}(Z_3=j) | \\\\\n&= \\sum_{j \\in \\mathcal{Y}_\\eta\\cap J_n(K)} \\bigg|\\frac{\\phi\\big(\\frac{j-\\eta-kp_1}{\\sigma_1} \\big)}{\\mathcal{N}_{n,1}\\sigma_1 }- \\frac{\\phi\\big(\\frac{j-kp_3}{\\sigma_3} \\big)}{\\mathcal{N}_{n,3}\\sigma_3 }\\bigg|\\\\\n&\\qquad + \\sum_{j \\in \\mathcal{Y}_\\eta\\cap J_n(K)^c} \\bigg|\\frac{\\phi\\big(\\frac{j-\\eta-kp_1}{\\sigma_1} \\big)}{\\mathcal{N}_{n,1}\\sigma_1 }- \\frac{\\phi\\big(\\frac{j-kp_3}{\\sigma_3} \\big)}{\\mathcal{N}_{n,3}\\sigma_3 }\\bigg|\\\\\n&\\qquad + \\sum_{j\\in \\mathcal X_k - (\\mathcal X_k +\\eta)} \\mathbf{P}(Z_3=j) + \\sum_{j\\in (\\mathcal X_k +\\eta)-\\mathcal X_k} \\mathbf{P}(Z_1=j)\\\\\n&=: T_1 + T_2 + T_3 +T_4. 
\n\\end{align*} \nWe next show how to estimate $T_3$. The term $T_4$ can be done analogously, so we omit those details. Observe that if $\\eta >0$ and $j\\in \\mathcal X_k - (\\mathcal X_k + \\eta)$, then $j \\leq \\eta-1$. Also if $\\eta \\leq 0$ and $j \\in \\mathcal X_k -(\\mathcal X_k + \\eta)$, then $j \\geq k+\\eta+1$. Now since $\\eta=o(\\sqrt{k})$\n\\begin{align*}\nT_3 &\\leq \\sum_{j=0}^{|\\eta|} \\mathbf{P}(Z_3=j) + \\sum_{j=k+1-|\\eta|}^k \\mathbf{P}(Z_3 = j)\\\\\n& \\lesssim \\frac{(| \\eta|+ 1)}{\\sigma_3} \\bigg(\\phi\\bigg( \\frac{|\\eta|-kp_3}{\\sigma_3}\\bigg)+ \\phi\\bigg( \\frac{k+1 -|\\eta| - kp_3}{\\sigma_3}\\bigg) \\bigg) \\lesssim e^{-\\epsilon' n}\n\\end{align*} \nfor some constant $ \\epsilon' >0$ independent of $n$. \n\nWe next estimate $T_2$. Note that, by using integral comparison and that $\\ell_i= n\/2 + O(n^{1\/2})$ and $\\eta=o(n^{1\/2})$, it follows that there exists a constant $C>0$ independent of $K,n$ such that for all $K$ and $n$ large enough\n\\begin{align*} \nT_2 \\leq C \\int_{-\\infty}^{-K\/2} \\phi(x) \\, dx + C \\int_{K\/2}^\\infty \\phi(x) \\, dx. \n\\end{align*}\nThus pick $K>0$ large enough so that $T_2<\\epsilon\/2$ for all $n$ large enough. \n\nTurning finally to $T_1$, first note that since $\\eta=X_0-Y_0=o(n^{1\/2})$, $X_0=n\/2+ O(n^{1\/2})$ and $Y_0= n\/2+O(n^{1\/2})$\n\\begin{align*}\n\\frac{1}{\\sigma_1}- \\frac{1}{\\sigma_3}&= \\frac{\\eta + \\frac{Y_0^2-X_0^2}{n}}{\\sqrt{X_0 Y_0(1-X_0\/n)(1-Y_0\/n)(1-\\frac{k}{n})}} \\frac{\\sqrt{n\/k}}{\\sqrt{X_0(1-X_0\/n)}+ \\sqrt{Y_0(1-Y_0\/n)}}\\\\\n&= o(n^{-1}). \n\\end{align*}\nThus combining this with \\cref{lem:normalization} produces \n\\begin{align*}\nT_1 &= \\sum_{j \\in \\mathcal{Y}_\\eta\\cap J_n(K)} \\bigg|\\frac{\\phi\\big(\\frac{j-\\eta-k p_1}{\\sigma_1} \\big)}{\\sigma_1 }- \\frac{\\phi\\big(\\frac{j-k p_3}{\\sigma_3} \\big)}{\\sigma_3 }\\bigg| + O(n^{-1\/2})\\\\\n&= \\sum_{j \\in \\mathcal{Y}_\\eta\\cap J_n(K)} \\bigg|\\frac{\\phi\\big(\\frac{j-\\eta-kp_1}{\\sigma_1} \\big)}{\\sigma_1 }- \\frac{\\phi\\big(\\frac{j-kp_3}{\\sigma_3} \\big)}{\\sigma_1 }\\bigg| + o(1)\\\\\n&=: T_1' + o(1).\\end{align*}\nNote that for $j\\in \\mathcal X_k \\cap J_n(K)$, both $(j-\\eta-kp_1)\/\\sigma_1$ and $(j-kp_3)\/\\sigma_3$ are bounded in $n$. Hence for $T_1'$ we may write for some constant $C>0$ \n\\begin{align*}\nT_1' &= \\sum_{j \\in \\mathcal{Y}_\\eta\\cap J_n(K)} \\bigg|\\frac{\\phi\\big(\\frac{j-\\eta-kp_1}{\\sigma_1} \\big)}{\\sigma_1 }- \\frac{\\phi\\big(\\frac{j-kp_3}{\\sigma_3} \\big)}{\\sigma_1 }\\bigg| \\\\\n&= \\frac{1}{\\sqrt{2\\pi \\sigma_1^2}} \\sum_{j\\in \\mathcal{Y}_\\eta \\cap J_n(K)} e^{-\\frac{(j-\\eta-k p_1)^2}{2\\sigma_1^2}}\\bigg| 1- e^{- \\big(\\frac{(j-kp_3)^2}{2\\sigma_3^2}-\\frac{(j-\\eta-kp_1)^2}{2\\sigma_1^2}\\big) }\\bigg| \\\\\n& \\leq \\frac{C}{\\sqrt{2\\pi \\sigma_1^2}} \\sum_{j\\in \\mathcal{Y}_\\eta \\cap J_n(K)} e^{-\\frac{(j-\\eta-kp_1)^2}{2\\sigma_1^2}}\\bigg| \\frac{j-kp_3}{\\sigma_3}- \\frac{j-\\eta - kp_1}{\\sigma_1}\\bigg|\\\\\n& = \\frac{C}{\\sqrt{2\\pi \\sigma_1^2}} \\sum_{j\\in \\mathcal{Y}_\\eta \\cap J_n(K)} e^{-\\frac{(j-\\eta-k p_1)^2}{2\\sigma_1^2}}\\bigg| \\frac{j-k p_3}{\\sigma_1}- \\frac{j-\\eta - k p_1}{\\sigma_1}\\bigg| + O(n^{-1\/2})\\\\\n&\\leq \\frac{C}{\\sqrt{2\\pi \\sigma_1^2}} \\sum_{j\\in \\mathcal{Y}_\\eta \\cap J_n(K)} e^{-\\frac{(j-\\eta-k p_1)^2}{2\\sigma_1^2}} \\frac{2\\eta}{\\sigma_1} + O(n^{-1\/2})= o(1)\n\\end{align*}\nas $\\eta= o(n^{1\/2})$ and $\\sigma= O(n^{1\/2})$. 
This completes the proof since by choosing $N=N(K)>0$ large enough we have \n\\begin{align} \n\\TV{\\mu_{X_1} - \\mu_{Y_1} } < \\epsilon\n\\end{align}\nfor all $n\\geq N$. \n\n\\end{proof}\n\n\\subsection{Upper bound}\n\nWe now use the previous estimates to conclude the upper bound in~\\eqref{eqn:upperb}.\n\n\\begin{Theorem}\nLet $\\epsilon >0$. For the $(n,k)$-Bernoulli-Laplace model under \\cref{assump:1}, we have \n\\[t_{mix}(\\epsilon)\\leq \\frac{\\log(n)}{2|\\log(1-2\\lambda)|} + 3\\lambda^{-1}\\log\\log n +1\\] for large enough $n$. \n\\end{Theorem}\n\nIn order to setup the proof, we recall the definitions of $\\tau_1(\\kappa_1), \\tau_3(\\kappa_3)$ and $\\tau_4(\\kappa_4)$:\n\\begin{align*}\n\\tau_1(\\kappa_1)&:=\\min \\left\\{t\\,:\\, X_t,Y_t\\in (\\tfrac{n}{2}-\\kappa_1\\sqrt{n},\\tfrac{n}{2}+\\kappa_1\\sqrt{n}) \\right\\},\\\\\n\\tau_3(\\kappa_3)&:= \\min \\left\\{ t:|X_t-Y_t|\\leq \\frac{\\sqrt{n}}{\\log \\log n}\\quad\\text{ and }\\quad X_t,Y_t\\in \\left(\\tfrac{n}{2}-\\kappa_3r_n,\\tfrac{n}{2}+\\kappa_3r_n\\right )\\right\\},\\\\\n\\tau_4(\\kappa_4)&:= \\min \\left\\{ t:|X_t-Y_t|\\leq \\frac{\\sqrt{n}}{\\log \\log n}\\,\\,\\,\\text{ and }\\,\\,\\, X_t,Y_t\\in \\left(\\tfrac{n}{2}-\\kappa_4\\sqrt{n},\\tfrac{n}{2}+\\kappa_4\\sqrt{n}\\right )\\right\\},\n\\end{align*}\nwhere $r_n =\\sqrt{n}(\\log n)^{|P_\\lambda|\/2}$. \n\n\n\\begin{proof}\nBelow, for simplicity of expression, we will suppress the $\\kappa_i$'s in $\\tau_i(\\kappa_i)$. For $t< \\tau_4$, we couple the two chains $(X_t)$ and $(Y_t)$ according to \\cref{coupling}. For $t\\geq \\tau_4$, we pick the optimal coupling; that is, the coupling $(X_t, Y_t)$ of $\\mu_{X_t}$ and $\\mu_{Y_t}$ such that $\\TV{\\mu_{X_t}- \\mu_{Y_t}} = \\mathbf{P}\\{X_t \\neq Y_t\\}=\\mathbf{P}\\{ |X_t - Y_t | \\geq 1 \\}$. 
\nNote by \\cref{Lemma:Orootn} and the Strong Markov Property: \n\\begin{align*}\n \\mathbf{P}(\\tau_4 >t_n+3s_n \\,|\\,X_0,Y_0)&=\\mathbf{P}(\\tau_4 >t_n+3s_n, \\tau_1> t_n \\,|\\,X_0,Y_0)+\\mathbf{P}(\\tau_4 >t_n+3s_n, \\tau_1\\leq t_n \\,|\\,X_0,Y_0)\\\\\n &\\lesssim \\frac{1}{\\kappa_1^2} + \\mathbf{P}(\\tau_4 >t_n+3s_n, \\tau_1\\leq t_n|X_0,Y_0)\\\\\n &= \\frac{1}{\\kappa_1^2} + \\mathbf{E}\\E(\\textbf{1}\\{\\tau_4 >t_n+3s_n\\}\\textbf{1}\\{ \\tau_1\\leq t_n\\}|\\mathcal F_{\\tau_1})\\\\\n &= \\frac{1}{\\kappa_1^2} + \\mathbf{E}(\\textbf{1}\\{\\tau_1\\leq t_n\\}\\mathbf{P} (\\tau_4 >t_n+3s_n|\\mathcal F_{\\tau_1}))\\\\\n &\\leq \\frac{1}{\\kappa_1^2} + \\mathbf{E}\\mathbf{P} (\\tau_4 >3s_n\\,|\\, X_{\\tau_1},Y_{\\tau_1}).\n\\end{align*}\nContinuing in this way, we obtain by \\cref{Lemma:orootn}\n\\begin{align*}\n \\mathbf{E} \\mathbf{P} (\\tau_4 >3s_n\\,|\\,X_{\\tau_1},Y_{\\tau_1})&= \\mathbf{E} \\mathbf{P} (\\tau_4 >3s_n, \\tau_3>s_n \\,|\\,X_{\\tau_1},Y_{\\tau_1})+ \\mathbf{E}\\mathbf{P} (\\tau_4 >3s_n,\\tau_3\\leq s_n\\,|\\,X_{\\tau_1},Y_{\\tau_1})\\\\\n &\\lesssim \\frac{1}{\\kappa_3^2} + \\mathbf{E}\\mathbf{P} (\\tau_4 >3s_n,\\tau_3\\leq s_n\\,|\\,X_{\\tau_1},Y_{\\tau_1})\\\\\n &= \\frac{1}{\\kappa_3^2} + \\mathbf{E}\\E (\\textbf{1}\\{\\tau_4 >3s_n\\}\\textbf{1}\\{\\tau_3\\leq s_n\\}\\,|\\,\\mathcal{F}_{\\tau_3})\\\\\n &= \\frac{1}{\\kappa_3^2} + \\mathbf{E}(\\textbf{1}\\{\\tau_3\\leq s_n\\}\\mathbf{P}(\\tau_4> 3s_n| \\mathcal F_{\\tau_3}))\\\\\n &\\leq \\frac{1}{\\kappa_3^2} + \\mathbf{E}\\mathbf{P}(\\tau_4>2s_n| X_{\\tau_3}, Y_{\\tau_3})\\lesssim \\frac{1}{\\kappa_3^2}+\\frac{1}{\\kappa_4^2}.\n\\end{align*}\nCombining the above with \\cref{normal} and setting $t= t_n+ 3s_n + 1$ gives\n\\begin{align*}\n\\TV{\\mu_{X_t}- \\mu_{Y_t} }& \\leq \\mathbf{P} ( |X_t - Y_t | \\geq 1 ) \\\\\n&= \\mathbf{P} ( |X_t - Y_t | \\geq 1, \\, \\tau_4 > t_n + 3s_n ) + \\mathbf{P} ( |X_t - Y_t | \\geq 1, \\, \\tau_4 \\leq t_n + 3s_n )\\\\\n& \\lesssim \\frac{1}{\\kappa_1^2} + \\frac{1}{\\kappa_3^2}+ \\frac{1}{\\kappa_4^2} + \\mathbf{P} ( |X_t - Y_t | \\geq 1, \\, \\tau_4 \\leq t_n + 3s_n )\\\\\n&\\leq \\frac{1}{\\kappa_1^2} + \\frac{1}{\\kappa_3^2}+ \\frac{1}{\\kappa_4^2} + \\mathbf{E} \\mathbf{P} ( |X_1 - Y_1| \\geq 1 \\, | \\, X_{\\tau_4}, Y_{\\tau_4} )\\\\\n&=\\frac{1}{\\kappa_1^2} + \\frac{1}{\\kappa_3^2}+ \\frac{1}{\\kappa_4^2} + o(1). \\end{align*}\nNote that picking the $\\kappa_i$'s and $n$ large enough finishes the proof. \n\\end{proof}\n\n\n\n\\section{Proof of the auxiliary results}\n\\label{sec:aux}\nIn this section, we prove \\cref{lem:1} and \\cref{lem:2}. \n\n\\begin{proof}[Proof of~\\cref{lem:1}]\nLet $(\\mathcal{F}_t)_{t\\geq 0}$ denote a filtration of $\\sigma$-fields to which $(X_t)$ is adapted. We start with the proof of~\\eqref{id:1} and~\\eqref{id:2}. These identities are contained in the proof of \\cite[Lemma 3]{EN}, but we provide the details for completeness. For~\\eqref{id:1}, since $f_1(x)$ is an eigenfunction with eigenvalue $f_1(k)=1-2k\/n$ (cf.~\\cite{DS_87, EN}), we obtain for $t\\geq 1$\n\\begin{align*}\n\\mathbf{E}_{X_0} f_1(X_t) = \\mathbf{E}_{X_0} \\mathbf{E}_{X_0}[ f_1(X_t) | \\mathcal{F}_{t-1}] = \\mathbf{E}_{X_0} \\mathbf{E}_{X_{t-1}}[ f_1(X_1)] = (1-\\tfrac{2k}{n}) \\mathbf{E}_{X_0} f_1(X_{t-1}). \n\\end{align*} \nRepeating the process above produces~\\eqref{id:1}. In order to obtain~\\eqref{id:2}, by a direct calculation it follows that\n\\begin{align*}\nf_1(x)^2= \\frac{1}{2n-1} + \\frac{2n-2}{2n-1} f_2(x). 
\n\\end{align*}\nThus following the same eigenvalue procedure as above but $f_2$ playing the role of $f_1$ we obtain~\\eqref{id:2}. \n\nIn order to establish~\\eqref{eqn:f2as}, let $q\\geq 0$ and $X_0= \\tfrac{n}{2}+ \\zeta$ where $\\zeta$ is $O(\\sqrt{n} (\\log n)^{q\/2})$. Then\n\\begin{align*}\nf_2(X_0) &= \\bigg(1+ \\frac{2n-1}{2n-2} - \\frac{2n-1}{n}\\bigg) + \\zeta\\bigg(\\frac{2(2n-1)}{n(n-1)}- \\frac{2(2n-1)}{n^2} \\bigg) + \\zeta^2\\frac{2(2n-1)}{n^2(n-1)} - \\frac{2n-1}{n(n-1)}\\\\\n&\\qquad \\qquad - \\zeta \\frac{2(2n-1)}{n^2(n-1)}\\\\\n&= \\zeta^2\\frac{2(2n-1)}{n^2(n-1)} +O(n^{-1}) = O(n^{-1} (\\log n)^q),\n\\end{align*} \nas claimed. \n\\end{proof}\n\n\n\\begin{proof}[Proof of~\\cref{lem:2}]\nWe first prove~\\eqref{as:id1}. Letting $g: (-\\infty, 1\/2)\\rightarrow \\mathbf{R}$ be given by $g(x)= \\log (1-2x)$, observe that \n\\begin{align}\n\\label{eqn:as1}\nf_1(k)^t = (1-2\\lambda)^t \\frac{(1-2k\/n)^t}{(1-2\\lambda)^t} &=(1-2\\lambda)^t \\exp ( t[g(k\/n) - g(\\lambda)])\\\\\n\\nonumber &=: (1-2\\lambda)^t \\exp( t \\Delta(g)). \n\\end{align} \nTaylor's formula then implies\n\\begin{align*}\n\\Delta(g)=g(k\/n) -g(\\lambda) = -\\frac{2\\Delta_n}{1-2\\lambda} + \\Delta_n^2 \\frac{g''(\\xi)}{2} \n\\end{align*} \nfor some $\\xi $ in between $k\/n$ and $\\lambda$. Employing \\cref{assump:1} (c1) then gives\n\\begin{align*}\n \\Delta_n^2\\frac{g''(\\xi)}{2!} = O( \\Delta_n^2) .\n \\end{align*} \n Plugging this into~\\eqref{eqn:as1} using $t\\leq t_n$ and \\cref{assump:1}(c2) then implies\n \\begin{align*}\nf_1(k)^t &= (1-2\\lambda)^t \\sum_{j=0}^\\infty \\frac{(t \\Delta(g))^j}{j!}\\\\\n &= (1-2\\lambda)^t(1- 2t \\Delta_n\/(1-2\\lambda) + O(t^2 \\Delta_n^2)),\n \\end{align*}\n which is~\\eqref{as:id1}. \n \n In order to obtain~\\eqref{as:id2}, we follow a similar reasoning to the one used to arrive at~\\eqref{as:id1} to see that\n \\begin{align*}\n f_2(k)^t = (1-2\\lambda)^{2t} \\frac{f_2(k)^t}{(1-2\\lambda)^{2t}} = (1- 2\\lambda)^{2t} \\exp\\{t (g(h_1(k))- g(h_2(\\lambda )))\\}. \n \\end{align*} \nA short calculation shows that\n \\begin{align*}\n \\Delta_n'= h_1(k)-h_2(\\lambda) =2 \\Delta_n (1-\\tfrac{k}{n}- \\lambda) +O(1\/n), \n \\end{align*} \n yielding the first part of~\\eqref{eqn:dtil}. Following the same reasoning as above yields~\\eqref{as:id2}. \n \n The proof of~\\eqref{as:id3} and the second part of~\\eqref{eqn:dtil} are similar, so we omit the details. \n \n\\end{proof}\n\n\n\n\n\n\\bibliographystyle{plainurl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\adjincludegraphics[Clip=0 0 0 {0.404\\height}, width=0.8\\linewidth]{images\/detector.jpg}\n\t\\caption{The central detector of JUNO. An acrylic sphere with a diameter of 35.4 meters filled with 20 kt of liquid scintillator. The detector contains \\num{17612} large (20 inches) PMTs and \\num{25600} small PMTs (3 inches).}\n\t\\label{fig:detector}\n\\end{figure}\n\nJUNO is a neutrino observatory under construction in southern China. Its physical program covers a wide range of problems~\n\\cite{JUNO}. The main goals are to determine the neutrino mass ordering and to accurately measure the parameters of neutrino oscillations $\\sin^2{\\theta_{12}}, \\Delta m_{21}^2, \\Delta m^{2}_{31}$. JUNO will detect reactor neutrinos from the Yangjiang and Taishan nuclear power plants. 
Simultaneously JUNO will be able to observe neutrinos from supernovae, atmospheric neutrinos, solar neutrinos and geoneutrinos.\n\nFigure~\\ref{fig:detector} shows the detector design. The detector is a transparent acrylic sphere with a diameter of 35.4 meters that is located underground in a cylindrical water pool. The sphere is filled with 20 kt of liquid scintillator. The detector is equipped with a huge number of photo-multiplier tubes (PMTs) of two types: \\num{17612} large PMTs (20 inches) and \\num{25600} small PMTs (3 inches). Neutrinos, which are produced in nuclear reactors, interact with the protons of the scintillator in the detector via the inverse beta-decay (IBD) channel: $\\overline \\nu_{e} + p \\rightarrow e^{+} + n$. The scintillator then produces visible light upon the interaction of the ejected positron with the media. The amount of emitted photons is tightly related to the neutrino energy. The neutron, after some time, is captured by a hydrogen atom of liquid scintillator, producing 2.2 MeV de-excitation gammas. Thus, the time coincidence of signals from the positron and the neutron makes it possible to separate the event from backgrounds. The information collected by PMTs is used for estimation of the neutrino energy.\n\nTo resolve the neutrino mass ordering the energy resolution must be $\\sigma \\leqslant 3\\%$ at 1 MeV, which is very close to the statistical limit corresponding to the light yield in JUNO, about 1300 detected photons (hits) at 1~MeV. The energy nonlinearity uncertainty should be < 1\\%~\\cite{JUNO}.\n\nMachine Learning (ML) methods are very popular in science today, including high energy physics, in particular, neutrino experiments~\\cite{Psihas} and collider experiments~\\cite{Guest}. We use ML approach for energy reconstruction in the JUNO experiment. Our problem is a regression supervised learning problem. The data (time and charge information) collected by PMTs is used as input for supervised training of ML model. Earlier we demonstrated that the ML approach can have the quality required for the JUNO experiment on our data and also has the advantage of speed and ease of application~\\cite{CNNs}.\n\nIn this work we use Boosted Decision Trees (BDT)~\\cite{Friedman} for energy reconstruction in the energy range of 0\u201310 MeV covering the region of interest for IBD events from reactor electron antineutrinos. Compared to~\\cite{CNNs} we designed and studied new features and achieved much better resolution with BDT, which is now comparable to the resolution of more complex models.\n\n\\section{Data description}\n\\label{DataDesc}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=0.49\\textwidth]{images\/event_vis.pdf}\\label{fig:data_vis_ch}}\n\t\\subfloat[]{\\includegraphics[width=0.49\\textwidth]{images\/event_vis_ht.pdf}\\label{fig:data_vis_ht}}\n\t\\caption{Example of an event seen by 17612 large ($20''$) PMTs for a positron event of 5.5 MeV. Only fired PMTs are shown. In fig (a) color represents the accumulated charge in PMTs: yellow points show the channels with more hits, red points \u2014 the channels with fewer hits. In fig (b) color indicates PMT activation time --- darker blue color shows an earlier activation. The primary vertex is shown by the gray sphere.}\n\t\\label{fig:data_vis}\n\\end{figure}\n\nThe dataset is generated by the full detector Monte Carlo method using the official JUNO software \\cite{Huang}. 
The detector simulation is based on the Geant4 framework~\\cite{Allison} with the geometry implemented in detail. The train and test datasets are described as follows:\n\\begin{enumerate}\n\t\\item \\textbf{Training dataset} consists of 5 million events, uniformly distributed in kinetic energy from 0 to 10 MeV and in the volume of the central detector (in liquid scintillator).\n\t\n\t\\item \\textbf{Testing dataset} consists of subsets with discrete kinetic energies of 0 MeV, 0.1 MeV, 0.2 MeV, ..., 1 MeV, 2 MeV, ..., 10 MeV. Each subset contains about 10 thousand events. This dataset is used to estimate performance after the end of training.\n\\end{enumerate}\nOur data have four configurations: 1) without electronics effects; \n2) taking into account the transit time spread (TTS) of PMTs;\n3) taking into account the dark noise (DN) of PMTs; and 4) taking into account both effects.\nTTS occurs due to the stochasticity of the photo-electron path from the photo-cathode to the anode and effectively smears the time information. The DN effect produces spontaneous hits on PMTs. In the following, TTS and DN are always enabled unless specified otherwise.\n\nFigure~\\ref{fig:data_vis} illustrates an example of accumulated charge in the PMT channels (left) and the evolution in time of the same signal in terms of the first hit time distribution (right).\n\n\\section{Boosted Decision Trees}\n\\label{BDT}\n\nBDT is an ensemble model, where a simple and fast-learning Decision Tree (DT) model is used as the base algorithm. DTs in BDT are trained sequentially. Each subsequent DT is trained to correct errors of previous DTs in the ensemble. \nIn this work we use the XGBRegressor implementation of BDT from the XGBoost library~\\cite{XGB}.\n\nA DT is built recursively starting from the root node, splitting the source set into two subsets (left and right) based on the values of the input features. To build a tree, we need a principle based on which we will split the original set of objects into subsets. XGBRegressor uses Gain maximization for splitting the input data into subsets.\n\nIn XGBoost the objective function contains two parts: the training loss and the regularization term: \n\\begin{equation}\\label{eq:obj}\n\\mathcal{L}(\\phi) = \\sum_{i}{l \\left(\\hat y_{i}, y_{i} \\right)} + \\sum_{k} {\\Omega (f_k)}\n\\end{equation}\nA tree is penalized if the sum of the norm of values in its leaves is very large. Therefore, the regularization term is introduced here as follows:\n\\begin{equation}\n\\Omega(f) = \\gamma T + \\frac{1}{2}\\lambda \\sum^{T}_{j=1}{\\omega^2_j},\n\\end{equation}\nwhere $T$ is the number of leaves, $\\omega_j$ are values in the leaves, $\\gamma$ and $\\lambda$ are numerical parameters of the regularization. In~\\cite{XGB} the authors showed that the optimization of the objective function \\eqref{eq:obj} reduces to maximizing the Gain, defined as:\n\\begin{equation}\n{\\rm Gain} = \\frac{1}{2}\\left[ \\frac{G_l^2}{H_l + \\lambda} + \\frac{G_r^2}{H_r + \\lambda} - \\frac{(G_l + G_r)^2}{H_l + H_r + \\lambda}\\right] - \\gamma,\n\\end{equation}\nwhere $G$, $H$ are the corresponding sums of the first and second derivatives of the loss function over the objects in a given partition and the indices $l$ and $r$ denote the left and right partitions.\n\n\\section{Feature Engineering}\n\\label{FeatEng}\n\nThe basic features for the energy reconstruction are the following aggregated features: \n\\begin{enumerate}\n\t\\item Total number of detected photo-electrons (hits): \\texttt{nHits}. 
\n\t\n\tIn the first approximation, the total number of hits is proportional to the event energy.\n\t\\item Coordinate components of the center of charge:\n\t\\begin{equation}\n\t(x_{cc},\\ y_{cc},\\ z_{cc}) = \\mathbf{r}_{\\rm cc} = \\frac{\\sum_i^{N_{\\rm PMTs}} \\mathbf{r}_{{\\rm PMT}_i} n_{{\\rm p.e.}, i}}{\\sum_i^{N_{\\rm PMTs}}n_{{\\rm p.e.}, i}},\n\t\\end{equation}\n\tand its radial component: \n\t\\begin{equation}\n\tR_{cc} = \\sqrt{x_{cc}^2 + y_{cc}^2 + z_{cc}^2}.\n\t\\end{equation}\n\t\n\tCoordinate components of the center of charge are rough approximations of the location of the energy deposition. These features are important for energy reconstruction since the number of hits depends on the location of the energy deposition.\n\t\\item Coordinate components of the center of first hit time:\n\t\\begin{equation}\n\t(x_{cht},\\ y_{cht},\\ z_{cht}) = \\mathbf{r}_{\\rm cht} = \\frac{1}{\\sum_i^{N_{\\rm PMTs}} \\frac{1} {t_{{\\rm ht},i} + c}} \\sum_i^{N_{\\rm PMTs}} \\frac{\\mathbf{r}_{{\\rm PMT}_i}} {t_{{\\rm ht},i} + c},\n\t\\end{equation}\n\tand its radial component: \n\t\\begin{equation}\n\t R_{cht} = \\sqrt{x_{cht}^2 + y_{cht}^2 + z_{cht}^2}.\n\t\\end{equation}\n\tHere the constant $c$ is required to avoid division by zero. \n\tThese features bring extra information on the location of the energy deposition.\n\t\\item Mean and standard deviation of the first hit time distributions: \\texttt{ht\\_mean}, \\texttt{ht\\_std}.\n\\end{enumerate}\nFor ML models including Boosted Decision Trees, it is often useful to engineer new features from the existing features~\\cite{Heaton}. We use the following extra synthetic features:\n\\begin{gather}\n\t\\gamma_{z}^{cc} = \\frac{z_{cc}}{\\sqrt{x_{cc}^2 + y_{cc}^2}},\\\n\t\\gamma_{y}^{cc} = \\frac{y_{cc}}{\\sqrt{x_{cc}^2 + z_{cc}^2}},\\ \n\t\\gamma_{x}^{cc} = \\frac{x_{cc}}{\\sqrt{z_{cc}^2 + y_{cc}^2}}; \\\\ \n\t\\theta_{cc} = \\arctan{\\frac{\\sqrt{x_{cc}^2 + y_{cc}^2}}{z_{cc}}},\\ \n\t\\phi_{cc} = \\arctan{\\frac{y_{cc}}{x_{cc}}}; \\\\\n\tJ_{cc} = R_{cc}^2 \\cdot \\sin{\\theta_{cc}},\\ \n\t\\rho_{cc} = \\sqrt{x_{cc}^2 + y_{cc}^2}.\n\\end{gather}\nAnd some trigonometric functions of angles $\\theta_{cc}$, $\\phi_{cc}$: $ \\sin{\\theta_{cc}},\\ \\cos{\\theta_{cc}},\\ \\sin{\\phi_{cc}},\\ \\cos{\\phi_{cc}}$. We also use 11 similar features for the center of first hit time.\n\nIn addition, we prepare five more features related to the location of the PMT received the maximum number of photo-electrons: \\texttt{x\\_max}, \\texttt{y\\_max}, \\texttt{z\\_max}, \\texttt{theta\\_max}, \\texttt{phi\\_max}, the maximum number of photons on PMT \\texttt{npe\\_max}, and the average number of photons on PMTs \\texttt{npe\\_mean}. \n\nAlso we added the following features: \\texttt{entries1}, \\texttt{entries2}. Here, \\texttt{entries1} is the percentage of PMTs with only 1~hit, \\texttt{entries2} --- with 2~hits. And the one more feature is \\texttt{nPMTs} --- the total number of fired PMTs.\n\nNow let's take a closer look at the first hit time distribution. Consider what fraction of fired PMTs received at least one photon depending on time, which is, in essence, the cumulative distribution function (CDF) of the first hit time distribution. Figure~\\ref{fig:cdf_and_pdf_ht} illustrates an example of a 7~MeV event. The entire event typically lasts for about \\num{1000}~ns, but the majority of photo-electrons are recorded by the PMTs in the first hundred nanoseconds and then mainly dark hits are recorded. 
\n\nIn Figure~\\ref{fig:cdf_and_pdf_ht} one can also see that at the beginning of events there is a short period of time $\\Delta t$ during which the photons have not reached the PMTs and only dark noise is recorded.\n\nFigure~\\ref{fig:cdf_and_pdf_ht_diff_R} shows how the CDFs for the events with the energy of 7~MeV change depending on their location, closer to the edge (large R) or closer to the center of the detector (small R).\nIn the case where R is quite small, it takes more time for the photons to be detected by PMTs and also it takes more time to ``saturate''.\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\subfloat[Example for a specific R.]{\\includegraphics[width=0.49\\textwidth]{images\/cdf_and_pdf_ht.pdf}\\label{fig:cdf_and_pdf_ht}}\n\t\\subfloat[Example for different R.]{\\includegraphics[width=0.49\\textwidth]{images\/cdf_and_pdf_ht_diff_R.pdf}\\label{fig:cdf_and_pdf_ht_diff_R}}\n\t\\caption{Examples of CDFs and PDFs (bottom right) of the first hit time distribution for events with energy equal to 7~MeV.}\n\t\\label{fig:ht_profiles}\n\\end{figure}\n\nFinally, the idea is to simply decompose the entire curve into a set of percentiles, and then select those that are best suited for energy reconstruction. The X\\%-percentile indicates how long it takes to register X\\% of the first PMT hits. We use the following set of percentiles: \\{1\\%, 2\\%, ..., 10\\%, 15\\%, ..., 90\\%, 91\\%, ..., 99\\%\\}.\n\n\\section{Selection of event time window}\n\\label{EventTimeWinSel}\n\nOne can also see in Figure~\\ref{fig:ht_profiles} that the signal hits arrive within the first few hundred nanoseconds, while the dark hits form a quasi-constant pedestal. Therefore, a time window can be selected, based on the data, in a way to contain mainly signal hits. For this purpose we trained Boosted Decision Trees models with different window bounds: \\{75ns, 125ns, 175ns, 250ns, 500ns, 750ns, 1500ns\\}, always starting from $t=0$. Each model was trained on a 200k-event dataset using all new features. \n\nTable~\\ref{tab:bounds_results} shows the Mean Absolute Error (MAE), the Root Mean Squared Error (RMSE), and the Mean Absolute Percentage Error (MAPE) on the test dataset for different window bounds. The best one was found to be 500~ns; however, all the windows longer than 175~ns showed similar performance. \n\n\\begin{table}[!htb]\n\t\\centering\n\t\\caption{MAE, RMSE, MAPE metrics for BDT models for different window bounds.}\n\t\\label{tab:bounds_results}\n\t\\begin{tabular}{llll}\n\t\t\\hline\n\t\tBound, ns& MAE, MeV & RMSE, MeV & MAPE, \\% \\\\\n\t\t\\hline\n\t\t75 & 0.1019 & 0.1664 & 3.705 \\\\\n\t\t125 & 0.0562 & 0.0781 & 1.858 \\\\\n\t\t175 & 0.0505 & 0.0698 & 1.662 \\\\\n\t\t250 & 0.0487 & 0.0676 & 1.594 \\\\\n\t\t\\textbf{500} & \\textbf{0.0477} & \\textbf{0.0660} & \\textbf{1.569} \\\\\n\t\t750 & 0.0480 & 0.0664 & 1.585 \\\\\n\t\t1500 & 0.0479 & 0.0662 & 1.589 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\section{Feature selection}\n\\label{FeatSel}\n\nFinally, we have a large set of features, but many of them are highly correlated, so we expect that a small set of them contains all the information and provides a performance close to the best possible one. Thus, the next task is to get a subset of the most informative features from the available set of all 78 features. For this purpose we use a dataset with 1M events.\n\nTo select the most informative features, we use the following algorithm. First, we train a model on all features and compute the RMSE on a validation dataset with 150k events. 
Then we take an empty list and start populating it with features. At each step we pick the feature which provides the best improvement of the model in terms of RMSE calculated on the validation dataset and append it to the end of the list. We continue while this RMSE value differs from the RMSE value for the model trained on all features by more than $\\varepsilon$, chosen to be 0.0002. This procedure results in the following set of features (sorted by importance): \\[\\texttt{nHits},\\ \\texttt{ht\\_20p},\\ \\texttt{jacob\\_cc},\\ \\texttt{ht\\_2p},\\ \\texttt{ht\\_35p},\\ \\texttt{R\\_cc},\\ \\texttt{ht\\_75p}\\]\nNot surprisingly, for the energy reconstruction, the most informative feature is the total number of hits \\texttt{nHits}, because it is strongly correlated with energy, but at the same time it is hard to interpret the order of the remaining features. The subset of the selected features contains \\texttt{jacob\\_cc} and \\texttt{R\\_cc}, which bring spatial information that allows recovering the non-uniformity of the detector response. \nWe checked simpler combinations of features for the center of charge position. It turned out that the combination of \\texttt{rho\\_cc} and \\texttt{R\\_cc} gives the same result as \\texttt{jacob\\_cc} and \\texttt{R\\_cc}, so we have chosen them as they are more intuitive. Our final set of features is: \\[\\texttt{nHits},\\ \\texttt{ht\\_20p},\\ \\texttt{rho\\_cc},\\ \\texttt{ht\\_2p},\\ \\texttt{ht\\_35p},\\ \\texttt{R\\_cc},\\ \\texttt{ht\\_75p}\\]\nFigure~\\ref{fig:percentiles_vis} illustrates the selected percentiles of the CDFs for an event with energy equal to 7~MeV and for different radial positions. As one can see, \\texttt{ht\\_2p} contains information about the beginning of the event, that is, about the moment when the number of PMT hits begins to grow sharply. The remaining percentiles contain information about the shape of the CDF curve and help us to separate one curve from another. The 75\\% percentile is close to the moment of ``saturation'' of the CDF curve.\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{images\/percentiles_vis.pdf}\n\t\\caption{The selected percentiles of the CDFs of the first hit time distributions for a 7~MeV event for different radial positions: 2\\% (top left), 20\\% (top right), 35\\% (bottom left) and 75\\% (bottom right).}\n\t\\label{fig:percentiles_vis}\n\\end{figure} \n\n\\section{Results}\n\\label{Res}\n\nTo evaluate the quality of the model, we use two metrics: resolution and bias. These metrics are obtained as a result of the Gaussian fit of the $E_{\\rm pred} - E_{\\rm true}$ distribution. The resolution is defined as $\\sigma \/ E_{\\rm true}$ and the bias --- as $\\mu \/ E_{\\rm true}$, where $\\sigma$ and $\\mu$ are the standard deviation and the mean of the Gaussian distribution, respectively. The performance is shown as a function of the so-called visible energy, i.e.\\ the maximal energy that can be converted into light: $E_{\\rm vis} = E_{\\rm kin} + 1.022 \\text{ MeV}$. This procedure is described in more detail in~\\cite{CNNs}.\n\nFigure~\\ref{fig:res_and_bias_diff_sizes_of_datasets} illustrates the results for BDT models trained on datasets that contain different amounts of events: 100k, 1M, 5M. We found that 1M events are enough to reach the best accuracy of the model, with only a small improvement over 100k events. 
This illustrates that fast learning is one of the advantages of the BDT model: one can get an acceptable quality already on a relatively small number of events in the dataset.\n\nFigure~\\ref{fig:res_and_bias_diff_options} shows a comparison for the BDT model trained on the 5M dataset for different options: without TTS \\& DN, with TTS only, with DN only, with TTS \\& DN. One can see that DN worsens the resolution, TTS --- almost does not.\n\nA comparison of the BDT model with other more complex deep learning models (ResNet, VGG, and GNN)~\\cite{CNNs} is shown in Figure~\\ref{fig:res_and_bias_all_models}. All the models are trained on the dataset with 5M events. We can see that the performance of the energy reconstruction with BDT model is practically similar to the complex deep learning models. At the same time the computations required for training and prediction are much faster due to the minimalistic nature of BDT.\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\subfloat[Comparision of different sizes of datasets.]{\\includegraphics[width=0.48\\textwidth]{images\/res_and_bias_diff_sizes_of_datasets}\\label{fig:res_and_bias_diff_sizes_of_datasets}}\n \\hspace{1em\n\t\\subfloat[Comparison of different TTS \\& DN options for a dataset with 5M events.]{\\includegraphics[width=0.48\\textwidth]{images\/res_and_bias_diff_options}\\label{fig:res_and_bias_diff_options}}\n\t\\caption{Results of the energy reconstruction for the BDT model: resolution (upper panel) and bias (lower panel). Note that the first point corresponds to 1.122 MeV.}\n\t\\label{fig:results_bdt}\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=0.7\\textwidth]{images\/res_and_bias_all_models}\n\t\\caption{Energy reconstruction performance: resolution (upper panel) and bias (lower panel) obtained with BDT, ResNet-J, VGG-J and GNN-J models. The plots are offset along X-axis within $\\pm$0.06 MeV for better readability. Note that the first point corresponds to 1.122 MeV.}\n\t\\label{fig:res_and_bias_all_models}\n\\end{figure} \n\n\\section{Summary}\n\\label{Sum}\n\nIn this work we have presented the use of Boosted Decision Trees for energy reconstruction in the JUNO experiment in the relevant energy range. We have designed and investigated a large set of features and have selected a small subset providing the performance nearly equal the one obtained with the full set of features. Using such a minimalistic and fast model as BDT we achieved a performance similar to the one of more complex models like ResNet, VGG, GNN.\n\n\\section*{Acknowledgements}\n\\begin{acknowledgement}\nWe are immensely grateful to Yury Malyshkin for his invaluable contribution to this work. We would like to thank Weidong Li, Jiaheng Zou, Tao Lin, Ziyan Deng, Guofu Cao and Miao Yu for their tremendous contribution to the development of JUNO offline software and to Xiaomei Zhang and Jo\\~ao Pedro Athayde Marcondes de Andr\\'e for production of the MC samples. We are grateful to N.~Kutovskiy, N.~Balashov for providing an extensive IT support and computing resources of JINR cloud services~\\cite{Baranov}.\nFedor Ratnikov is supported by the Russian Science Foundation under grant agreement \\textnumero{17-72-20127}.\n\\end{acknowledgement}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}