diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzoefg" "b/data_all_eng_slimpj/shuffled/split2/finalzzoefg"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzoefg"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\nThe gamma-ray bursts (GRBs) measured with the BATSE instrument on the \nCompton Gamma-Ray Observatory are usually characterized by \n9 observational quantities (2 durations, 4 fluences, 3 peak fluxes) \n\\cite{me96}, \\cite{pac}, \\cite{me00}. \nIn a previous paper \\cite{bag98} \nwe have shown that these 9 quantities \ncan be reduced to only two significant independent \nvariables (principal components). Here we present a new statistical\nanalysis of the correlation between these variables and show that\nthere is a significant difference between the power law exponents \nof long and short bursts. The details of this analysis will be presented\nelsewhere \\cite{bal01}.\n\n\\section{Distributions of Durations and Total Emitted Energies}\n\\label{sec:durations}\n\nWe consider here those GRBs from the current \nBATSE Gamma-Ray Burst Catalog \\cite{me00}\n which have measured $T_{90}$ durations and fluences\n ($F_1, F_2, F_3, F_4$).\nTherefore, we are left with $N=1929$ GRBs, all of which have defined \n$T_{90}$ and $F_{tot}\\, (= F_1 + F_2 + F_3 + F_4)$, as well as \npeak fluxes $P_{256}$. \n\nThe distribution of $\\log T_{90}$ clearly displays \ntwo peaks reflecting the existence of two groups of GRBs\n\\cite{k}. \nThis bimodal distribution can be fitted by two log-normal \ndistributions \\cite{hor}.\nThe fact that the distribution of $T_{90}$ within a subclass is log-normal\nhas important consequences. Let us denote the observed duration of a\nGRB by $T_{90}$ (which may be subject to cosmological \ntime dilatation) and by\n$t_{90}$ the duration measured by a comoving observer (the intrinsic duration). \nThen one has $T_{90} = t_{90} f(z)$, \nwhere $z$ is the redshift, and $f(z)$ measures the time dilatation. \nFor the concrete form of $f(z)$ one can take $f(z) = (1+z)^k$, \nwhere $k=1$ or $k=0.6$, depending on whether energy stretching is included\nor not. \n\nTaking the logarithms of both sides of this equality,\none obtains the logarithmic duration as a sum of two independent\nstochastic variables. According to a theorem of Cram\\'er \\cite{cra37}, if \na variable $\\zeta$, which has a Gaussian distribution, is given by a sum of \ntwo independent variables, i.e. $\\zeta = \\xi + \\eta$, then both \n$\\xi$ and $\\eta$ have Gaussian distributions. Therefore, from this theorem \nit follows that the Gaussian distributions of $\\log T_{90}$, \nconfirmed for the two subclasses separately \\cite{hor},\nimply the same type of distribution for the variables $\\log t_{90}$ and \n$\\log f(z)$. However, unless the space-time geometry has a very \nparticular structure, the distribution of $\\log f(z)$ cannot be Gaussian. \nThis means that the Gaussian nature of the distribution of $\\log T_{90}$ \nmust be dominated by the distribution of $\\log t_{90}$, and the latter \nmust then necessarily have a Gaussian distribution. This holds for both \nduration subgroups separately. \n(Note here that several other authors,\ne.g. \\cite{wp94}, \\cite{nor94}, \\cite{nor95},\nhave already suggested that the distribution of $T_{90}$\nreflects predominantly the distribution of $t_{90}$.)\n\nOne also has $F_{tot} = (1+z) E_{tot}\/(4\\pi d_l^2(z)) = c(z) E_{tot}$,\nwhere $d_l$ is the luminosity distance, and $E_{tot}$ is the total\nemitted energy. 
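In logarithmic form this relation reads\n$$\n \\log F_{tot} = \\log E_{tot} + \\log c(z),\n$$\nso that, assuming (as for the durations) that the intrinsic quantity $E_{tot}$ and the cosmological factor $c(z)$ are independent, $\\log F_{tot}$ is again a sum of two independent stochastic variables. 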
Once there is a log-normal distribution\nfor $F_{tot}$ (for the two subgroups separately), the\nprevious application of the Cram\\'er theorem is also possible here. \nThe existence of this log-normal distribution is not obvious, but may be\nshown as follows.\n\nAssume that both the short and the long groups have\ndistributions of the variables $T_{90}$ and $F_{tot}$ which are\nlog-normal. In this case, it is possible to fit {\\it simultaneously} \nthe values of $\\log F_{tot}$ and $\\log T_{90}$ by a single two-dimensional \n(\"bivariate\") normal distribution. This distribution has five parameters \n(two means, two dispersions, and the angle $\\alpha$ between the \n$\\log T_{90}$ axis and the semi-major axis of the ``dispersion ellipse\").\nIts standard form can be seen in \\cite{tw53} (Chapt. 1.25).\nWhen the $r$-correlation coefficient differs from zero, \nthe semi-major axis of the dispersion ellipse represents a linear relationship \nbetween $\\log T_{90}$ and $\\log F_{tot}$ with a slope of $m=\\tan \\alpha$. This\nlinear relationship between the logarithmic variables implies a power law\nrelation of the form $F_{tot} \\propto (T_{90})^m$ between the fluence and the \nduration, where $m$ may be different for the two subgroups. Then\na similar relation will exist between\n$t_{90}$ and $E_{tot}$.\n\nWe obtain the best fit through a maximum likelihood \nestimation (e.g., \\cite{KS76}, Vol.2., p.57-58).\nFrom this estimation we obtain\nthe dependence of the total emitted energy on the intrinsic duration in the form\n\\begin{equation}\nE_{tot} \\propto \\cases{ (t_{90})^{1}~~;&~~(short bursts); \\cr\n (t_{90})^{2.3}~~;&~~(long bursts). \\cr}\n\\label{eq:ftotpower}\n\\end{equation}\n\nSeveral papers discuss the biases in the BATSE values of\n$F_{tot}$ and $T_{90}$ (cf. \\cite{ep92}, \\cite{lamb93}, \\cite{lp96}, \n\\cite{pl96}, \\cite{lp97}, \\cite{pac},\n\\cite{hak00}, \\cite{meg00}). \nAll types of biases are particularly important for\nfaint GRBs. To assess these effects we performed several different\nadditional calculations (for more details see \\cite{bal01}),\nwhich all give the same results.\n\n\n\\section{Conclusion}\n\nThe exponents in the power laws differ significantly for the two subclasses\nof short ($T_{90} < 2$ s) and long ($T_{90} > 2$ s) bursts.\nThese new results may\nindicate that two different types of central engines are at work, or\nperhaps two different types of progenitor systems are involved. \nWhile the nature of the progenitors remains so far\nindeterminate, our results indicate strongly that the nature of the energy\nrelease process giving rise to the bursts differs between the two\nburst classes. \nIn the short ones the total energy released is proportional to\nthe duration, while in the long ones it is roughly proportional to the\nsquare of the duration. This result is completely model-independent, and\nprovides an interesting constraint on the two types of bursts.\n\n\nThis research was supported in part through\nOTKA grants T024027 (L.G.B.), F029461 (I.H.) and T034549,\nNASA grant NAG5-2857, the Guggenheim Foundation\nand the Sackler Foundation (P.M.) 
and\nResearch Grant J13\/98: 113200004 (A.M.).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\bf Introduction} \n\n\\vskip 5pt\n\n In the recent work \\cite{BES}, Balinski-Evans-Sait\\=o introduced\n an $L^p$-seminorm $\\|(\\alpha\\cdot \\hbox{\\sl p}){\\it f}\\|_{p,\\Omega}$ of\n a ${{\\mathbb C}}^4$-valued function ${\\it f}$ in an open subset $\\Omega$ of\n ${{\\mathbb R}}^3$, relevant to a massless Dirac operator\n\\begin{equation}\\label{1dirac} \n \\alpha \\cdot \\hbox{\\sl p} =\n \\sum_{j=1}^3 \\alpha_j(-i\\partial_j) \\qquad (\\partial_j\n = \\partial\/\\partial{x_j}),\n\\end{equation}\n where $\\hbox{\\sl p} = -i\\nabla$, and\n $\\alpha= (\\alpha_1, \\, \\alpha_2, \\, \\alpha_3)$ is the triple of\n $4 \\times 4$ Dirac matrices\n\\begin{equation}\\label{1alpha} \n \\alpha_j =\n \\begin{pmatrix}\n 0_2 &\\sigma_j \\\\ \\sigma_j & 0_2\n \\end{pmatrix} \\qquad (j = 1, \\, 2, \\, 3)\n\\end{equation}\n with the $2\\times 2$ zero matrix $0_2$ and the triple of $2 \\times 2$\n Pauli matrices\n$$\n \\sigma_1 =\n\\begin{pmatrix}\n 0&1 \\\\ 1& 0\n\\end{pmatrix}, \\,\\,\\,\n \\sigma_2 =\n\\begin{pmatrix}\n 0& -i \\\\ i&0\n\\end{pmatrix}, \\,\\,\\,\n \\sigma_3 =\n\\begin{pmatrix}\n 1&0 \\\\ 0&-1\n\\end{pmatrix}.\n$$\n They used this seminorm to give a group\n of inequalities called {\\it Dirac--Sobolev inequalities}\n in order to obtain $L^p$-estimates of the {\\it zero modes}, i.e. eigenfunctions\n associated with the eigenvalue $\\lambda = 0$,\n of the Dirac operator $(\\alpha\\cdot {\\hbox{\\sl p}})+Q$, where $Q(x)$ is a $4\\times 4$\n Hermitian matrix-valued potential decaying at infinity.\n We believe that our notation ``$\\hbox{\\sl p}\\,$\" for the differential\n operator $-i\\nabla$ will not be confused with the exponent ``$p$\",\n $1\\leq p <\\infty$, of the space $L^p$.\n\n\\par\n Let\n $\\Omega$ be an open subset of ${{\\mathbb R}}^3$ and let the first order Dirac--Sobolev space\n $ \\mathbb{H}_{0}^{1,p}(\\Omega)$, $1\\le p <\\infty,$ be the\n completion of $[C_0^{\\infty}(\\Omega)]^4$ with respect to the norm\n\\begin{equation}\\label{1H1pOmega} \n \\|{\\it f}\\|_{D,1,p,\\Omega} :\n = \\left \\{ \\int_{\\Omega} (|{\\it f}(x)|_p^p +\n |(\\alpha \\cdot \\hbox{\\sl p}){\\it f}(x)|_p^p \\,)\\, dx \\right \\}^{1\/p}\n = \\big(\\|{\\it f}\\|_{p,\\Omega}^p\n +\\|(\\alpha \\cdot \\hbox{\\sl p}){\\it f}\\|_{p,\\Omega}^p\\big)^{1\/p} ,\n\\end{equation}\n where ${\\it f}(x) = {}^t(f_1(x), f_2(x), f_3(x), f_4(x))$, the norm of a vector\n${\\it a} = {}^t(a_1, a_2, a_3, a_4) \\in {\\mathbb C}^4$ being denoted by\n\\begin{equation} \\label{1gpnorm} \n | {\\it a} |_p = \\Big[\\sum_{k=1}^4 |a_k|^p\\, \\Big]^{1\/p}.\n\\end{equation}
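\n\n We note in passing that for $p = 2$ no new norm arises: since the Dirac matrices satisfy $\\alpha_j\\alpha_k + \\alpha_k\\alpha_j = 2\\delta_{jk}I_4$, where $I_4$ is the $4\\times 4$ unit matrix, one has $(\\alpha\\cdot \\hbox{\\sl p})^2 = -\\Delta\\, I_4$, so that for ${\\it f} \\in [C_0^{\\infty}(\\Omega)]^4$\n$$\n \\|(\\alpha \\cdot \\hbox{\\sl p}){\\it f}\\|_{2,\\Omega}^2\n = \\int_{\\Omega} \\langle {\\it f}(x), -\\Delta {\\it f}(x)\\rangle_{{\\mathbb C}^4}\\, dx\n = \\|\\nabla {\\it f}\\|_{2,\\Omega}^2,\n$$\n where $\\langle \\cdot, \\cdot \\rangle_{{\\mathbb C}^4}$ is the standard inner product; hence $\\|{\\it f}\\|_{D,1,2,\\Omega}$ coincides with the usual Sobolev norm, and the case of interest below is $p \\not= 2$.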
\n\n As one of the simplest Dirac--Sobolev inequalities (\\cite{BES}, Corollary 2), they\n showed: If $\\Omega$ is a bounded open\n subset of ${{\\mathbb R}}^3$ and ${\\it f} \\in \\mathbb{H}_{0}^{1,p}(\\Omega)$\n with $1 \\le p < \\infty$, then for $1 \\le k < p(p+3)\/3$\n there exists a positive constant $C$ such that\n\\begin{equation}\\label{1DS} \n \\|{\\it f}\\|_{k,\\Omega}\n \\le C \\|(\\alpha \\cdot \\hbox{\\sl p}){\\it f}\\|_{p, \\Omega}\\,,\n\\end{equation}\n where $\\| {\\it g} \\|_{p,\\Omega}$ stands for the norm of\n ${\\it g} = {}^t(g_1, g_2, g_3, g_4) \\in [L^p(\\Omega)]^4$ given by\n\\begin{equation}\\label{1pOnorm} \n \\|{\\it g}\\|_{p,\\Omega} =\n \\left \\{ \\int_{\\Omega} |{\\it g}(x)|_p^p\\, dx \\right \\}^{1\/p}\n = \\left\\{ \\int_{\\Omega}\\, \\sum_{j=1}^4 |g_j(x)|^p \\,dx \\right\\}^{1\/p}.\n\\end{equation}\n\n Now let $\\beta$ be the fourth Dirac matrix given by\n\\begin{equation}\\label{1betam} \n \\beta =\n \\begin{pmatrix}\n {1}_2 & 0_2 \\\\ 0_2 & -{1}_2\n \\end{pmatrix},\n\\end{equation}\n where $1_2$ is the $2 \\times 2$ unit matrix. It is known that\n the free massless Dirac operator $\\alpha\\cdot \\hbox{\\sl p}$, the free Dirac\n operator $\\alpha\\cdot \\hbox{\\sl p} + m\\beta$ with positive mass $m$, and\n the relativistic Schr\\\"odinger operator $\\sqrt{m^2 - \\Delta}$ may exhibit similar\n properties in $L^2$, but not necessarily in $L^p$ with $p\\not=2$.\n On the other hand, the Sobolev norm $\\|{\\it f}\\|_{S,1,p,\\Omega}$ and the\n Dirac--Sobolev norm $\\|{\\it f}\\|_{D,1,p,\\Omega}$ of (\\ref{1H1pOmega}) turn out to be\n equivalent for $1 < p < \\infty$, though not for $p = 1$; this is the content of\n Theorem 1.3 below. In one direction, by H\\\"older's inequality for $p > 1$ or the triangle\n inequality for $p = 1$, we have\n$$\n\\aligned\n |(\\alpha\\cdot \\hbox{\\sl p}){\\it f}|_p^p &\n = |(\\sum_{l=1}^3 -\\alpha_l i\\partial_l){\\it f}|_p^p\n \\leq \\big(\\sum_{l=1}^3 |\\alpha_l i\\partial_l{\\it f}|_p\\big)^p\n = \\big(\\sum_{l=1}^3 |\\partial_l{\\it f}|_p\\big)^p\\\\\n & \\leq \\big[\\big(\\sum_{l=1}^3 1^q\\big)^{1\/q}\n \\big(\\sum_{l=1}^3 |\\partial_l{\\it f}|_p^p\\,\\big)^{1\/p}\\big]^p\n = 3^{p-1}|\\nabla {\\it f}|_p^p\\,,\n\\endaligned\n$$\n where $p^{-1} + q^{-1} = 1$; see (\\ref{1Sobnorm}) for the definition of\n $|\\nabla {\\it f}|_p$. 
It follows that\n\\begin{equation} \\label{2acpfest} \n \\|(\\alpha\\cdot \\hbox{\\sl p}){\\it f}\\|_{p,\\Omega}\n = \\Big(\\int_{\\Omega} |(\\alpha\\cdot \\hbox{\\sl p}){\\it f}|_p^p \\, dx\\Big)^{1\/p}\n \\leq 3^{\\frac{p-1}{p}}\\Big(\\int_{\\Omega} |\\nabla {\\it f}|_p^p\n \\, dx\\Big)^{1\/p}\n = 3^{\\frac{p-1}{p}}\\|\\nabla {\\it f}\\|_{p,\\Omega}\\,,\n\\end{equation}\n where $\\|(\\alpha\\cdot \\hbox{\\sl p}){\\it f}\\|_{p,\\Omega}$ is the norm\n of $(\\alpha\\cdot \\hbox{\\sl p}){\\it f} \\in [L^p(\\Omega)]^4$ given by (\\ref{1pOnorm}),\n and $\\|\\nabla {\\it f}\\|_{p,\\Omega}$ is given by\n\\begin{equation*}\n \\|\\nabla {\\it f}\\|_{p,\\Omega}\n = \\left\\{ \\int_{\\Omega} |\\nabla {\\it f}|_{p}^p \\, dx \\right\\}^{1\/p}\\,.\n\\end{equation*}\n Then it is easy to see that (\\ref{2acpfest}) implies (\\ref{2est}).\n The map $J$ is one-to-one since, for ${\\it f}_j \\in [H^{1,p}({\\Omega})]^4$,\n $j = 1, 2$, we have\n\\begin{equation}\\label{2oneone} \n J_{\\Omega}{\\it f}_1 = J_{\\Omega}{\\it f}_2\n \\ \\ {\\rm in} \\ \\mathbb{H}^{1,p}(\\Omega)\n \\Longrightarrow {\\it f}_1 = {\\it f}_2 \\ \\ {\\rm in} \\\n [L^p(\\Omega)]^4\n \\Longrightarrow {\\it f}_1 = {\\it f}_2 \\ \\ {\\rm in} \\\n [H^{1,p}(\\Omega)]^4.\n\\end{equation}\n Using (\\ref{2est}) and proceeding as in (\\ref{2oneone}), we see that\n the identity map $J_{0,\\Omega}$ on $[H_0^{1,p}(\\Omega)]^4$ is also continuous and\n one-to-one. Since $[C_0^{\\infty}(\\Omega)]^4$ is dense\n in both $[H_0^{1,p}(\\Omega)]^4$ and $\\mathbb{H}_{0}^{1,p}(\\Omega)$,\n $[H_0^{1,p}(\\Omega)]^4$ is a dense subset of\n$\\mathbb{H}_{0}^{1,p}(\\Omega)$. This completes the proof.\n\\end{proof} \n\n\\vskip 30pt\n\n\\section{{\\bf Range of the map ${\\mathbf J_{0,\\Omega}}$}} \n\n\\vskip 5pt\n\n In this section we are going to prove Theorem 1.3, (ii).\n\n\\begin{prop} \n Let $1 < p < \\infty$. Then the map\n $J_{0, {\\mathbb R}^3}$ is onto $\\mathbb{H}_0^{1,p}({\\mathbb R}^3)$.\n\\end{prop} \n\n The proof will be given after the following two lemmas.\n\n\\begin{lem} \n Let $1 < q < \\infty$. Then $\\Delta(C_0^{\\infty}({\\mathbb R}^3))$ is dense\n in $L^q({\\mathbb R}^3)$.\n\\end{lem} \n\n\n\\begin{proof} [Proof of Lemma 3.2] \n Suppose that $f \\in L^r({\\mathbb R}^3)$\n with $1\/q +1\/r =1$ satisfies\n\\begin{equation*}\n \\langle f,\\Delta \\phi \\rangle = \\int f(x) \\Delta \\phi(x) dx =0\n \\quad \\text{for all} \\,\\, \\phi \\in C_0^{\\infty}({\\mathbb R}^3).\n\\end{equation*}\n Then $\\Delta f = 0$ in the sense of distributions. By elliptic regularity,\n $f$ must be $C^{\\infty}$ and harmonic; being also a tempered distribution,\n $f(x)$ is then a polynomial in $x$.\n Since $f(x)$ should belong to $ L^r({\\mathbb R}^3)$, we have $f = 0$. \nThis proves Lemma 3.2.\n\\end{proof} \n\n\\begin{rem} \n {\\rm Lemma 3.2 does not hold for $q = 1$, since, in this case,\n the Laplacian $\\Delta$ annihilates any constant $C \\ne 0$,\n which is a nonzero element of $L^{\\infty}({\\mathbb R}^3) = L^1({\\mathbb R}^3)^*$.}\n\\end{rem} \n
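\n Indeed, for every $\\phi \\in C_0^{\\infty}({\\mathbb R}^3)$ the divergence theorem gives\n$$\n \\int_{{\\mathbb R}^3} \\Delta \\phi(x)\\, dx = 0,\n$$\n so that the constant function $C$, viewed as a continuous linear functional on $L^1({\\mathbb R}^3)$, vanishes on $\\Delta(C_0^{\\infty}({\\mathbb R}^3))$ and hence on its closure in $L^1({\\mathbb R}^3)$.\n\n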
\\begin{lem} \n Let $1 < q < \\infty$, and let $\\Omega \\subset {\\mathbb R}^3$ be an open set. Then, for each pair\n $(j, k)$, $j, k = 1, 2, 3$, there exists a positive constant $C = C_{jk}$ such that\n\\begin{equation}\\label{3jkdeltain} \n \\|\\partial_j\\partial_k\\phi \\|_{q,\\Omega} \\le C\\|\\Delta \\phi \\|_{q,\\Omega}\n \\qquad (\\phi \\in C_0^{\\infty}(\\Omega)).\n\\end{equation}\n\\end{lem} \n\n\\begin{proof} \n By Stein \\cite{Stein}, p.59, Proposition 3, there exists a positive\nconstant $C = C_{jk}$ such that\n\\begin{equation*}\n \\|\\partial_j\\partial_k\\phi\\|_q \\leq C\\|\\Delta\\phi\\|_q,\n \\qquad \\phi \\in C_0^{\\infty}({{\\mathbb R}}^3).\n\\end{equation*}\n Since every $\\phi \\in C_0^{\\infty}(\\Omega)$ extends by zero to an element of\n $C_0^{\\infty}({{\\mathbb R}}^3)$, the same inequality holds for $\\phi \\in C_0^{\\infty}(\\Omega)$,\n which is (\\ref{3jkdeltain}).\n\\end{proof} \n\n\\begin{proof} [Proof of Proposition 3.1] \n The proof will be divided into four steps.\n\n\\vskip 5pt\n\n (I) Let ${\\it f} = {}^t(f_1,f_2,f_3,f_4) \\in \\mathbb{H}_0^{1,p}({{\\mathbb R}}^3)$.\n Then ${\\it f} \\in [L^p({{\\mathbb R}}^3)]^4$, and\n\\begin{equation*}\n {\\it g}:= (\\alpha\\cdot \\hbox{\\sl p}){\\it f}\n = -i[\\alpha_1\\partial_1 {\\it f} +\\alpha_2\\partial_2 {\\it f}\n +\\alpha_3\\partial_3 {\\it f}]\n\\end{equation*}\n belongs to $[L^p({{\\mathbb R}}^3)]^4$.\n Using the definition (\\ref{1alpha})\n of the Dirac matrices $\\alpha_j$, $j = 1, 2, 3$, we can rewrite this with\n ${\\it g}={}^t(g_1,g_2,g_3,g_4)$ as\n\\begin{equation}\\label{3gf} \n \\begin{cases}\n ig_1 = (\\partial_1-i\\partial_2)f_4+\\partial_3f_3, \\\\\n ig_2 = (\\partial_1+i\\partial_2)f_3-\\partial_3f_4, \\\\\n ig_3 = (\\partial_1-i\\partial_2)f_2+\\partial_3f_1, \\\\\n ig_4 = (\\partial_1+i\\partial_2)f_1-\\partial_3f_2.\n \\end{cases}\n\\end{equation}\n Then from the first and second equations of (\\ref{3gf}) we have\n\\begin{equation*}\n \\aligned\n (\\partial_1+i\\partial_2)ig_1 &= (\\partial_1^2+\\partial_2^2)f_4\n + \\partial_3(\\partial_1+i\\partial_2)f_3 \\\\\n & = (\\partial_1^2+\\partial_2^2)f_4 + \\partial_3(ig_2+ \\partial_3f_4),\n \\endaligned\n\\end{equation*}\n so that\n\\begin{equation*}\n \\Delta f_4 = (\\partial_1^2+\\partial_2^2+\\partial_3^2)f_4\n = (\\partial_1+i\\partial_2)(ig_1)- \\partial_3(ig_2),\n\\end{equation*}\n and hence, by applying $\\partial_j$ to both sides of the above equation,\n we have, for $j = 1, 2, 3$,\n\\begin{equation}\\label{3Del4} \n \\Delta\\partial_j f_4 = (\\partial_j\\partial_1 + i\\partial_j\\partial_2)(ig_1)\n - \\partial_j\\partial_3(ig_2).\n\\end{equation}\n Similarly we have from (\\ref{3gf})\n\\begin{equation}\\label{3Del321} \n \\begin{cases}\n \\Delta\\partial_j f_3 = (\\partial_j\\partial_1 - i\\partial_j\\partial_2)(ig_2)\n - \\partial_j\\partial_3(ig_1), \\\\\n \\Delta\\partial_j f_2 = (\\partial_j\\partial_1 + i\\partial_j\\partial_2)(ig_3)\n - \\partial_j\\partial_3(ig_4), \\\\\n \\Delta\\partial_j f_1 = (\\partial_j\\partial_1 - i\\partial_j\\partial_2)(ig_4) -\n \\partial_j\\partial_3(ig_3).\n \\end{cases}\n\\end{equation}\n The equalities in (\\ref{3Del4}) and (\\ref{3Del321}) should be interpreted\n as equalities in the space ${\\mathcal D}'({{\\mathbb R}}^3)$ of distributions\n on ${{\\mathbb R}}^3$.\n\n\\vskip 5pt\n\n (II) Our first goal is to show that each distribution $\\partial_jf_k$ actually\n belongs to $L^p({{\\mathbb R}}^3)$, where $j = 1, 2, 3$ and $k = 1, 2, 3, 4$, namely,\n that for each $j$ and $k$ there exists $F_{jk} \\in L^p({{\\mathbb R}}^3)$ such that\n\\begin{equation}\\label{3f4} \n \\langle\\partial_jf_k, \\ \\phi\\rangle = \\int_{{{\\mathbb R}}^3} F_{jk}(x)\\phi(x)\\, dx\n\\end{equation}\n for any $\\phi \\in C_0^{\\infty}({{\\mathbb R}}^3)$, where the left-hand side is a bilinear form on\n 
${\\mathcal D}'({{\\mathbb R}}^3) \\times C_0^{\\infty}({{\\mathbb R}}^3)$.\nThis will show that ${\\it f}$ belongs to $[H^{1,p}({{\\mathbb R}}^3)]^4$.\nWe shall prove (\\ref{3f4}) for $k = 4$\n and $j = 1, 2, 3$, since the other cases can be proved in a similar manner.\n After that, we finally show that ${\\it f}$ belongs to\n$[H_0^{1,p}({{\\mathbb R}}^3)]^4$ to complete the proof.\n\n\\vskip 5pt\n\n (III) Let $q$ be the conjugate exponent of $p$, i.e., let $q$ satisfy $p^{-1} + q^{-1} = 1$.\n We see from (\\ref{3Del4}) that for $\\phi \\in C_0^\\infty({{\\mathbb R}}^3)$,\n\\begin{eqnarray*}\n \\langle \\partial_j f_4, \\Delta \\phi \\rangle\n &=& \\langle \\Delta \\partial_j f_4, \\phi \\rangle\n = \\langle (\\partial_j\\partial_1 + i\\partial_j\\partial_2)(ig_1)\n - \\partial_j\\partial_3(ig_2), \\phi \\rangle\\\\\n &=& \\langle ig_1, (\\partial_j\\partial_1 + i\\partial_j\\partial_2)\\phi\\rangle\n - \\langle ig_2, \\partial_j\\partial_3 \\phi \\rangle.\n\\end{eqnarray*}\n Hence by Lemma 3.4 we have\n\\begin{multline}\\label{3ffour} \n |\\langle \\partial_j f_4, \\Delta \\phi \\rangle|\n \\le \\|g_1\\|_p \\|(\\partial_j\\partial_1 + i\\partial_j\\partial_2)\\phi\\|_q\n + \\|g_2\\|_p \\|\\partial_j\\partial_3 \\phi\\|_q \\\\\n \\le (C_{j1}+C_{j2}) \\|g_1\\|_p \\|\\Delta \\phi\\|_q\n + C_{j3} \\|g_2\\|_p \\|\\Delta \\phi\\|_q \\\\\n = [(C_{j1} + C_{j2}) \\|g_1\\|_p + C_{j3} \\|g_2\\|_p] \\|\\Delta \\phi\\|_q.\n\\end{multline}\n Since $\\Delta(C_0^{\\infty}({{\\mathbb R}}^3))$ is dense in $L^q({{\\mathbb R}}^3)$, as has been shown in\n Lemma 3.2, the linear form $\\Delta\\phi \\mapsto \\langle \\partial_j f_4, \\Delta \\phi \\rangle$\n extends uniquely, by (\\ref{3ffour}), to a continuous linear form on $L^q({{\\mathbb R}}^3)$.\n Since $L^p({{\\mathbb R}}^3)$ is the dual space of $L^q({{\\mathbb R}}^3)$, there\n exists a function $F_{j4} \\in L^p({{\\mathbb R}}^3)$ such that\n $\\langle\\partial_j f_4, \\ \\psi\\rangle = \\int_{{{\\mathbb R}}^3} F_{j4}(x)\\psi(x) \\, dx\\,, \\, \\psi \\in L^q({{\\mathbb R}}^3)$,\n which implies (\\ref{3f4}) with $k = 4$ and $j = 1, 2, 3$.\n In particular, we have also shown that\n\\begin{equation}\\label{3.7}\n \\|\\partial_jf_k \\|_p \\leq C_0 \\Big\\{\\sum_{l=1}^4\\|g_l\\|_p^p\\Big\\}^{1\/p}\n = C_0\\|{\\it g}\\|_p,\n\\end{equation}\nwith a positive constant $C_0$, for all $j=1,2,3$ and $k=1,2,3,4$.\n\n\\vskip 5pt\n\n (IV) Finally, since ${\\it f}$ is in $\\mathbb{H}_0^{1,p}({{\\mathbb R}}^3)$,\n by definition there exists a sequence $\\{{\\it f}_n\\}_{n=1}^{\\infty}$\n in $C_0^{\\infty}({{\\mathbb R}}^3)$ with ${\\it f}_n =(f_{n,1},f_{n,2},f_{n,3},f_{n,4})$\n such that, with\n ${\\it g}_n =(g_{n,1},g_{n,2},g_{n,3},g_{n,4}) := (\\alpha\\cdot \\hbox{\\sl p}) {\\it f}_n$,\n\\begin{multline*}\n \\|{\\it f}_n -{\\it f}\\|_{D,1,p,{{\\mathbb R}}^3}^p = \\|{\\it f}_n -{\\it f}\\|_p^p\n + \\|(\\alpha\\cdot \\hbox{\\sl p}) ({\\it f}_n -{\\it f})\\|_p^p \\\\\n = \\|{\\it f}_n -{\\it f}\\|_p^p + \\|{\\it g}_n -{\\it g}\\|_p^p \\rightarrow 0,\n \\quad\\quad n \\rightarrow \\infty.\n\\end{multline*}\n Since by the same argument used to get (\\ref{3.7})\n we have $\\|\\partial_j (f_{n,k}-f_k)\\|_p \\leq C_0\\|{\\it g}_n-{\\it g}\\|_p$\n for all $j=1,2,3$ and $k=1,2,3,4$, it follows that\n $\\|{\\it f}_n -{\\it f}\\|_{S,1,p,{{\\mathbb R}}^3}^p \\rightarrow 0$\n as $n \\rightarrow \\infty$, so that ${\\it f} \\in [H_0^{1,p}({{\\mathbb R}}^3)]^4$.\n This completes the proof of Proposition 3.1.\n\\end{proof} \n\n\n\\begin{prop} \n Let $1 < p < \\infty$. Let $\\Omega \\subset {\\mathbb R}^3$ be an open set and $J_{0,\\Omega}$\n be given in {\\rm (\\ref{1J})}. 
Then the map $J_{0,\\Omega}$ is onto $\\mathbb{H}_0^{1,p}(\\Omega)$.\n Further, the inverse map $J_{0,\\Omega}^{-1}$ is well-defined as a bounded linear\n operator.\n\\end{prop} \n\n\\begin{proof} \n (I) We have seen in Propositions 3.1 that\n $[H_0^{1,p}({\\mathbb R}^3)]^4 = \\mathbb{H}_0^{1,p}({{\\mathbb R}}^3)$ as sets and there\n exist positive constants $C_1$ and $C_2$ such that\n\\begin{equation}\\label{3th12} \n C_1\\|{\\it} %\\def\\mb{\\mathbf f}\\|_{D,1,p} \\le \\|{\\it} %\\def\\mb{\\mathbf f}\\|_{S,1,p}\n \\le C_2\\|{\\it} %\\def\\mb{\\mathbf f}\\|_{D,1,p}\n\\end{equation}\n for ${\\it} %\\def\\mb{\\mathbf f} \\in [H_0^{1,p}({\\mathbb R}^3)]^4 = \\mathbb{H}_0^{1,p}({{\\mathbb R}}^3)$.\n\n\\vskip 5pt\n\n (II) Let ${\\it} %\\def\\mb{\\mathbf f} \\in [H_0^{1,p}(\\Omega)]^4$. Then there exists a sequence\n $\\{{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}_n\\}_{n=1}^{\\infty} \\subset [C_0^\\infty(\\Omega)]^4$ such that\n\\begin{equation}\\label{3fPhiO} \n \\|{\\it} %\\def\\mb{\\mathbf f} - {{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}}_n\\|_{S,1,p,\\Omega} \\to 0 \\qquad (n \\to \\infty).\n\\end{equation}\n Since each $\\phi_n$ can be naturally extended to be an element of\n $[\\Con]^4$ by setting ${{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}}_n(x) = 0$ for\n $x \\in {\\mathbb R}^3 \\setminus \\Omega$, we have\n\\begin{equation}\\label{3fPhiR} \n \\|{\\it} %\\def\\mb{\\mathbf f} - {{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}}_n\\|_{S,1,p} \\to 0 \\qquad (n \\to \\infty),\n\\end{equation}\n where ${\\it} %\\def\\mb{\\mathbf f}$ is also extended to be a function on ${\\mathbb R}^3$ by setting\n $0$ outside $\\Omega$, and hence ${\\it} %\\def\\mb{\\mathbf f} \\in [H^{1,p}({\\mathbb R}^3)]^4$ with\n support in the closure of $\\Omega$. Therefore\n ${\\it} %\\def\\mb{\\mathbf f} \\in \\mathbb{H}^{1,p}({{\\mathbb R}}^3)$ and ${\\it} %\\def\\mb{\\mathbf f}$ satisfies\n (\\ref{3th12}). Then, (\\ref{3fPhiR}) is combined with (\\ref{3th12}) to yield\n\\begin{equation}\\label{3fPhiDR} \n \\|{\\it} %\\def\\mb{\\mathbf f} - {{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}}_n\\|_{D,1,p} \\to 0 \\qquad (n \\to \\infty),\n\\end{equation}\n which implies, together with the fact that and $\\phi_n$ have support in\n $\\Omega$, that\n\\begin{equation}\\label{3fPhiDO} \n \\|{\\it} %\\def\\mb{\\mathbf f} - {{\\it \\phi}} %\\def\\mbP{{\\mathbf \\Phi}}_n\\|_{D,1,p,\\Omega} \\to 0 \\qquad (n \\to \\infty).\n\\end{equation}\n Thus we have ${\\it} %\\def\\mb{\\mathbf f} \\in \\mathbb{H}_{0}^{1,p}(\\Omega)$, and we\n obtain from (\\ref{3th12})\n\\begin{equation}\\label{3thO12} \n C_1\\|{\\it} %\\def\\mb{\\mathbf f}\\|_{D,1,p,\\Omega} \\le \\|{\\it} %\\def\\mb{\\mathbf f}\\|_{S,1,p,\\Omega}\n \\le C_2\\|{\\it} %\\def\\mb{\\mathbf f}\\|_{D,1,p,\\Omega}.\n\\end{equation}\n\n (III) Let ${\\it} %\\def\\mb{\\mathbf f} \\in \\mathbb{H}_{0}^{1,p}(\\Omega)$. 
Then, starting with\n ${\\it f} \\in \\mathbb{H}_{0}^{1,p}(\\Omega)$ and proceeding as in (II), we can show that\n ${\\it f} \\in [H_0^{1,p}(\\Omega)]^4$ and that the estimates $(\\ref{3thO12})$ are\n satisfied, which completes the proof.\n\\end{proof} \n\n\\begin{proof} [Proof of Theorem 1.3, (ii)] \n Theorem 1.3, (ii) follows from Propositions 3.1 and 3.5.\n\\end{proof} \n\n\\vskip 30pt\n\n\\section{\\bf The case ${p = 1}$}\n\n\\vskip 5pt\n\n The goal of this section is to prove Theorem 1.3, (iii), that is, to prove that\n $[H^{1,1}(\\Omega)]^4$ and $[H_0^{1,1}(\\Omega)]^4$ are proper subspaces of\n $\\mathbb{H}^{1,1}(\\Omega)$ and $\\mathbb{H}_{0}^{1,1}(\\Omega)$, respectively.\n First, we are going to show, for $\\Omega = {\\mathbb R}^3$, that\n $[H_0^{1,1}({\\mathbb R}^3)]^4 = [H^{1,1} ({\\mathbb R}^3)]^4$ is a proper\n subspace of ${\\mathbb H}_{0}^{1,1}({\\mathbb R}^3) = {\\mathbb H}^{1,1}({\\mathbb R}^3)$\n (Proposition 4.4). All other statements in Theorem 1.3, (iii) will then follow from\n Proposition 4.4. In the following, when speaking of ${\\mathbb H}_{0}^{1,1}({\\mathbb R}^3)$\n or ${\\mathbb H}^{1,1}({\\mathbb R}^3)$, and $[H_0^{1,1}({\\mathbb R}^3)]^4$ or\n $[H^{1,1}({\\mathbb R}^3)]^4$, we shall use the latter notation, namely,\n ${\\mathbb H}^{1,1}({\\mathbb R}^3)$ and $[H^{1,1}({\\mathbb R}^3)]^4$. As is easily seen,\n ${\\mathbb H}^{1,p}(\\mathbb R^3)$ is also the subspace of $[L^p({\\mathbb R}^3)]^4$\n consisting of all ${\\it f} \\in [L^p({\\mathbb R}^3)]^4$ such that\n $(\\alpha\\cdot \\hbox{\\sl p} + \\beta) {\\it f} \\in [L^p({\\mathbb R}^3)]^4$, instead of\n $(\\alpha\\cdot \\hbox{\\sl p}) {\\it f} \\in [L^p({\\mathbb R}^3)]^4$, where $\\beta$ is the fourth\n Dirac matrix given by (\\ref{1betam}).\n\n\\vskip 5pt\n\n\\begin{lem} \n The map\n\\begin{equation*}\n (\\alpha\\cdot \\hbox{\\sl p}) + \\beta \\,:\\, \\mathbb{H}^{1,1}({\\mathbb R}^3) \\ni {\\it f}\n \\mapsto (\\alpha\\cdot \\hbox{\\sl p} + \\beta){\\it f} \\in [L^1({\\mathbb R}^3)]^4\n\\end{equation*}\n maps $\\mathbb{H}^{1,1}({\\mathbb R}^3)$ one-to-one and onto $[L^1({\\mathbb R}^3)]^4$.\n\\end{lem} \n\n\\begin{proof} \n (I) We define the Dirac operator $H_0 = (\\alpha\\cdot \\hbox{\\sl p}) + \\beta$ as a linear\n operator in $[L^1({\\mathbb R}^3)]^4$ with domain $D(H_0) = \\mathbb{H}^{1,1}({\\mathbb R}^3)$.\n It is easy to see that $H_0$ is a closed operator in $[L^1({\\mathbb R}^3)]^4$. Let\n the operator $H = (\\alpha\\cdot \\hbox{\\sl p}) + \\beta$ be defined as a pseudodifferential\n operator acting on $[{\\mathcal S}'({\\mathbb R}^3)]^4$, the dual space of $[{\\mathcal S}({\\mathbb R}^3)]^4$,\n with $4\\times 4$ matrix symbol\n\\begin{equation} \n \\sigma_H(\\xi) = \\alpha\\cdot\\xi + \\beta\n = \\sum_{j=1}^3 \\xi_j\\alpha_j + \\beta.\n\\end{equation}\n Then the operator $H_0$ can be viewed as the restriction of the operator $H$\n to $\\mathbb{H}^{1,1}({\\mathbb R}^3)$. Let $B$ be a pseudodifferential operator acting\n on $[{\\mathcal S}'({\\mathbb R}^3)]^4$ with symbol\n\\begin{equation*}\n \\sigma_{B}(\\xi) = (1 + |\\xi|^2)^{-1}\\sigma_{H}(\\xi)\n = \\sigma_H(\\xi)\\big[(1 + |\\xi|^2)^{-1}I_4\\big].
\n\\end{equation*}\n By the anti-commutation relation\n\\begin{equation*}\n \\alpha_j\\alpha_k + \\alpha_k\\alpha_j = 2\\delta_{jk}I_4\n \\qquad (j, k = 1, 2, 3, 4, \\ \\alpha_4 = \\beta),\n\\end{equation*}\n where $I_4$ is the $4 \\times 4$ unit matrix, we have\n $\\sigma_H(\\xi)^2 = (1 + |\\xi|^2)I_4$, and therefore\n\\begin{equation*}\n \\sigma_{H}(\\xi)\\sigma_{B}(\\xi)\n = \\sigma_{B}(\\xi)\\sigma_{H}(\\xi) = I_4,\n\\end{equation*}\n which implies that\n\\begin{equation}\\label{4AB} \n HB{\\it f} = BH{\\it f} = {\\it f} \\qquad\n ({\\it f} \\in [{\\mathcal S}'({\\mathbb R}^3)]^4),\n\\end{equation}\n {\\it i.e.,} the operator $B$ is the inverse operator of $H$ on $[{\\mathcal S}'({\\mathbb R}^3)]^4$.\n\n\\vskip 5pt\n\n (II) Note that\n\\begin{equation}\\label{4B0dec} \n B = HB^{(1)},\n\\end{equation}\n where the symbol of $B^{(1)}$ is given by\n $\\sigma_{B^{(1)}}(\\xi) = (|\\xi|^2 + 1)^{-1}I_4$. Let\n $\\sigma_{0}(\\xi) = (|\\xi|^2 + 1)^{-1}$ and let $\\sigma_{0}(\\hbox{\\sl p})$ be the pseudodifferential\n operator on ${\\mathcal S}'({\\mathbb R}^3)$ with symbol $\\sigma_{0}(\\xi)$. The symbol $\\sigma_{0}(\\xi)$\n is a $C^{\\infty}$ function on ${\\mathbb R}_{\\xi}^3$ and bounded together with all its\n derivatives. Then, by noting that the integrand $(|\\xi|^2 + 1)^{-1}({\\mathcal F}\\phi)(\\xi)$\n is a function in ${\\mathcal S}({\\mathbb R}_{\\xi}^3)$ for $\\phi \\in {\\mathcal S}({\\mathbb R}^3)$, we have\n\\begin{equation*}\n (\\sigma_0(\\hbox{\\sl p})\\phi)(x) = \\lim_{R\\to\\infty}\n (2\\pi)^{-3\/2}\\int_{|\\xi| \\le R} e^{ix\\cdot\\xi}\\,\n (|\\xi|^2 + 1)^{-1}({\\mathcal F}\\phi)(\\xi)\\, d\\xi .\n\\end{equation*}\n\\end{proof} \n\n\\begin{rem} \n {\\rm For $p > 1$ there exists $a \\in (0, 1)$ which satisfies (\\ref{4pineq}). For\n $p = 1$, however, there is no $a$ which satisfies (\\ref{4pineq}) since\n both sides of (\\ref{4pineq}) become 3\/2. Indeed, our pseudodifferential operator\n $B$ has symbol $\\sigma_B(\\xi)$ belonging to the H\\\"ormander class\n $S^{-1}_{1-a,0}({\\mathbb R}^3)$. To prove Lemma 4.1, which is the case\n $p = 1$, we have discussed the integral kernel of the Dirac operator. }\n\\end{rem} \n\n To proceed, we need some facts on the local Hardy space $h^1({\\mathbb R}^3)$, which\n was introduced in Goldberg\\,\\cite{G} in connection with the Hardy space $H^1({\\mathbb R}^3)$. The\n Hardy space $H^1({\\mathbb R}^3)$ (see e.g. Fefferman-Stein\\, \\cite{FS}) is the proper subspace of $L^1({{\\mathbb R}}^3)$\n consisting of the functions $f \\in L^1({{\\mathbb R}}^3)$ such that $R_j f \\in L^1({{\\mathbb R}}^3)$ for\n $j=1, 2, 3$, where $R_j :=\\partial_j \\cdot (-\\Delta)^{-1\/2}$ are the Riesz\n transforms, having symbols $i\\xi_j\/|\\xi|$. Let $\\varphi$ be a\n fixed function in the Schwartz space ${{\\mathcal S}({\\mathbb R}^3)}$ such that $\\varphi =1$ in a\n neighborhood of the origin. By definition a distribution $f$ belongs to $h^1({{\\mathbb R}}^3)$\n if and only if $f \\in L^1({{\\mathbb R}}^3)$ and $r_j f \\in L^1({{\\mathbb R}}^3)$ for $j=1,\n 2, 3$, where $r_j,\\, j=1, 2, 3$, are pseudodifferential operators with\n symbol $\\sigma_{r_j}(\\xi) = (1-\\varphi(\\xi))(i\\xi_j\/|\\xi|)$ (\\cite{G}, Theorem 2 (p.33)).\n The definition is independent of the choice of $\\varphi$. It is a Banach\n space with norm $\\|f\\|_{h^1} = \\|f\\|_{L^1} + \\sum_{j=1}^3 \\|r_j f\\|_{L^1}$.\n The space $h^1({\\mathbb R}^3)$ is a proper subspace of $L^1({{\\mathbb R}}^3)$ and is strictly\n larger than the Hardy space $H^1({{\\mathbb R}}^3)$ (see e.g. 
\\cite{G}, p.33, just after\n Theorem 3).\n\n We now introduce the operator\n\\begin{equation}\\label{4r'jdef} \n r'_j = \\partial_j (1-\\Delta)^{-1\/2}\n = \\frac{\\partial_j}{(-\\Delta)^{1\/2}}\n \\frac{(-\\Delta)^{1\/2}}{(1-\\Delta)^{1\/2}}\n =R_j\\cdot \\frac{(-\\Delta)^{1\/2}}{(1-\\Delta)^{1\/2}},\n\\end{equation}\n where we note that the pseudodifferential operator $(-\\Delta)^{1\/2}\/(1-\\Delta)^{1\/2}$ is\n a bounded operator on $L^1({{\\mathbb R}}^3)$ (see Stein\\,\\cite{Stein}, p.133, Eq.(31)).\n\n The proof of the lemma below was inspired by the proof of \\cite{G}, Theorem 2 (p.33).\n\n\\vskip 5pt\n\n\\begin{lem} \n A distribution $f$ in ${\\mathbb R}^3$ belongs to $h^1({{\\mathbb R}}^3)$ if and only if\n $f \\in L^1({{\\mathbb R}}^3)$ and $r'_j f \\in L^1({{\\mathbb R}}^3)$ for $j= 1, 2, 3$.\n\\end{lem} \n\n\\begin{proof} \n (I) It is sufficient to show that $r_j - r'_j$,\n $j = 1, 2, 3$, are bounded linear operators on $L^1({\\mathbb R}^3)$ (or, more precisely, that the\n pseudodifferential operator $r_j - r_j'$ defined on ${\\mathcal S}({\\mathbb R}^3)$ can be uniquely\n extended to a bounded linear operator on $L^1({\\mathbb R}^3)$). Note that the operators\n $r_j$ and $r'_j$ have symbols\n\\begin{equation}\\label{4rjr'j} \n \\sigma_{r_j}(\\xi) = \\frac{(1-\\varphi(\\xi))i\\xi_j}{|\\xi|}, \\quad\n \\sigma_{r'_j}(\\xi) = \\frac{i\\xi_j}{(1+|\\xi|^2)^{1\/2}}\\,,\n\\end{equation}\n that both symbols are $C^{\\infty}$ functions in ${\\mathbb R}^3_{\\xi}$, bounded\n together with all their derivatives, and that\n\\begin{multline*}\n \\sigma_{r_j}(\\xi) - \\sigma_{r'_j}(\\xi)\n = \\frac{(1-\\varphi(\\xi))i\\xi_j}{|\\xi|}\\Big(1-\\frac{|\\xi|}{(1+|\\xi|^2)^{1\/2}}\\Big)\n - \\frac{\\varphi(\\xi)i\\xi_j}{(1+|\\xi|^2)^{1\/2}} \\\\\n =: \\sigma_{1j}(\\xi) + \\sigma_{2j}(\\xi).\n\\end{multline*}\n As in the proof of Lemma 4.1, we are going to show that, for each $j = 1, 2, 3$, the\n pseudodifferential operators with symbols $\\sigma_{1j}$ and $\\sigma_{2j}$ have integral kernels\n belonging to $L^1({\\mathbb R}^3)$, in other words, that the inverse Fourier transforms\n $\\overline{\\mathcal F}\\sigma_{1j}(x)$ and $\\overline{\\mathcal F}\\sigma_{2j}(x)$ belong to $L^1({{\\mathbb R}}^3)$, where $\\overline{\\mathcal F}$ is\n given by\n\\begin{equation*}\n \\overline{\\mathcal F}\\phi(x) = (2\\pi)^{-3\/2}\\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}\\phi(\\xi) \\, d\\xi.\n\\end{equation*}\n\n It is easy to see that $\\overline{\\mathcal F}\\sigma_{2j} \\in L^1({{\\mathbb R}}^3)$, because $\\sigma_{2j}$\n belongs to ${\\mathcal S}({\\mathbb R}^3)$, so that $\\overline{\\mathcal F}\\sigma_{2j}$ belongs to ${\\mathcal S}({\\mathbb R}^3)$ and hence\n to $L^1({\\mathbb R}^3)$. In the rest of the proof we are going to show that\n $\\overline{\\mathcal F}\\sigma_{1j} \\in L^1({{\\mathbb R}}^3)$.\n\n\\vskip 5pt\n\n (II) By definition we have\n$$\n \\sigma_{1j}(\\xi) =\n i(1-\\varphi(\\xi))\\frac{\\xi_j}{|\\xi|(1+|\\xi|^2)^{1\/2}(|\\xi|+(1+|\\xi|^2)^{1\/2})},\n$$\n and hence $\\sigma_{1j}(\\xi) = O(|\\xi|^{-2})$ as $|\\xi| \\to \\infty$. 
Thus, by noting that\n $1-\\varphi(\\xi)$ vanishes around the origin $\\xi = 0$, we see that $\\sigma_{1j} \\in L^2({\\mathbb R}^3)$.\n Therefore $I_j(x) = (\\overline{\\mathcal F}\\sigma_{1j})(x)$ exists as a function in $L^2({\\mathbb R}_x^3)$.\n Let $\\rho(t)$ be a real-valued $C^{\\infty}$ function on $[0, \\infty)$ such that\n\\begin{equation*}\n \\rho(t) = 1 \\quad (0 \\le t \\le 1), \\ \\ \\ = 0 \\quad (t \\ge 2).\n\\end{equation*}\n Then, since $\\sigma_{1j}(\\xi)\\rho(\\epsilon|\\xi|)$, $\\epsilon > 0$, converges to $\\sigma_{1j}(\\xi)$ in\n $L^2({\\mathbb R}_{\\xi}^3)$ as $\\epsilon \\downarrow 0$, the functions\n\\begin{equation*}\n I_j(x, \\epsilon) := \\overline{\\mathcal F}(\\sigma_{1j}(\\xi)\\rho(\\epsilon|\\xi|))(x)\n\\end{equation*}\n converge to $I_j(x)$ in $L^2({\\mathbb R}_x^3)$ as $\\epsilon \\downarrow 0$,\n and hence there exists a decreasing sequence\n\\begin{equation*}\n 1 \\ge \\epsilon_1 > \\epsilon_2 > \\cdots > \\epsilon_m > \\cdots \\to 0\n\\end{equation*}\n such that $I_j(x, \\epsilon_m) \\to I_j(x)$ for a.e. $x$ as $m \\to \\infty$. For\n simplicity of notation, we shall write $\\epsilon \\le 1$ instead of $\\epsilon_m$.\n\n\\vskip 5pt\n\n (III) Let $\\alpha = (\\alpha_1, \\alpha_2, \\alpha_3)$ be a multi-index (the Dirac\n matrices make no further appearance in this proof). Then\n we have\n\\begin{equation}\\label{4ests1j} \n |\\partial_{\\xi}^{\\alpha}\\sigma_{1j}(\\xi)| \\le C_{j,\\alpha}(1 + |\\xi|)^{-2-|\\alpha|}\n \\qquad (\\xi \\in {\\mathbb R}_{\\xi}^3 )\n\\end{equation}\n with a constant $C_{j,\\alpha} > 0$, where we should note that $1 - \\varphi(\\xi)$\n is bounded and\n$$\n \\partial_{\\xi}^{\\alpha}(1 - \\varphi(\\xi)) = - \\partial_{\\xi}^{\\alpha}\\varphi(\\xi)\n \\in {\\mathcal S}({\\mathbb R}_{\\xi}^3)\n$$\n for $\\alpha \\ne 0$. Let $\\ell$ be a positive integer and $k = 1, 2, 3$. Then,\n by integration by parts,\n\\begin{multline}\\label{4Jepxi} \n (2\\pi)^{3\/2}x_k^{\\ell}I_j(x, \\epsilon) = \\int_{{\\mathbb R}^3}\\big\\{(-i\\partial_{\\xi_k})^{\\ell}\n e^{ix\\cdot\\xi}\\big\\}\\sigma_{1j}(\\xi)\\rho(\\epsilon|\\xi|) \\, d\\xi \\\\\n = (-i)^{\\ell}(-1)^{\\ell}\\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}\n \\partial_{\\xi_k}^{\\ell}\\big\\{\\sigma_{1j}(\\xi)\\rho(\\epsilon|\\xi|)\\big\\}\\, d\\xi.\n\\end{multline}\n Here, by the Leibniz formula, we have\n\\begin{multline*}\n \\partial_{\\xi_k}^{\\ell}\\big\\{\\sigma_{1j}(\\xi)\\rho(\\epsilon|\\xi|)\\big\\}\n = \\big(\\partial_{\\xi_k}^{\\ell}\\sigma_{1j}(\\xi)\\big)\\rho(\\epsilon|\\xi|)\n + \\sum_{m=1}^\\ell {}_{\\ell}C_m \\big(\\partial_{\\xi_k}^{\\ell-m}\\sigma_{1j}(\\xi)\\big)\n \\big(\\partial_{\\xi_k}^{m}\\rho(\\epsilon|\\xi|)\\big) \\\\\n =: J_{0}(\\xi, \\epsilon) + J_1(\\xi, \\epsilon)\n\\end{multline*}\n with ${}_{\\ell}C_m = \\ell!\/(m!(\\ell-m)!)$. For $m = 1, 2, \\cdots, \\ell$, we have\n\\begin{equation*}\n \\xi \\in \\ {\\rm supp}(\\partial_{\\xi_k}^{m}\\rho(\\epsilon|\\xi|)) \\Longrightarrow\n 1 \\le \\epsilon|\\xi| \\le 2 \\Longrightarrow \\epsilon \\le \\frac2{|\\xi|}\\,,\n\\end{equation*}\n where ${\\rm supp}(f)$ denotes the support of $f$. Thus we can replace $\\epsilon$ in\n $\\partial_{\\xi_k}^{m}\\rho(\\epsilon|\\xi|)$ by $2|\\xi|^{-1}$ when we evaluate\n $|\\partial_{\\xi_k}^{m}\\rho(\\epsilon|\\xi|)|$. 
Therefore it follows that\n\\begin{equation}\\label{4parho} \n |\\partial_{\\xi_k}^{m}\\rho(\\epsilon|\\xi|)| \\le c(1 + |\\xi|)^{-m}\\chi_{\\epsilon}(\\xi),\n\\end{equation}\n where $c = c_{k,m}$ is a positive constant and $\\chi_{\\epsilon}(\\xi)$ is the characteristic\n function of the set $A_{\\epsilon} = \\{ \\xi \\,:\\, \\epsilon^{-1} \\le |\\xi| \\le 2\\epsilon^{-1} \\}$.\n Since it is supposed that $\\epsilon \\le 1$, we have $A_{\\epsilon} \\subset \\{ \\xi \\,:\\, |\\xi| \\ge 1 \\}$.\n The inequalities (\\ref{4ests1j}) and (\\ref{4parho}) are combined to give\n\\begin{equation*}\n |J_1(\\xi, \\epsilon)| \\le C(1 + |\\xi|)^{-2 - \\ell}\\chi_{\\epsilon}(\\xi) \\qquad (\\xi \\in {\\mathbb R}_{\\xi}^3)\n\\end{equation*}\n with a positive constant $C = C_{j,k,\\ell}$. Let $\\ell \\ge 2$. Then $|J_1(\\xi, \\epsilon)|$\n is dominated by $C(1 + |\\xi|)^{-2 - \\ell}$, which is in $L^1({\\mathbb R}_{\\xi}^3)$,\n and $J_1(\\xi, \\epsilon) \\to 0$ for each $\\xi \\in {\\mathbb R}_{\\xi}^3$ as $\\epsilon \\to 0$; hence, by\n the Lebesgue dominated convergence theorem, we have\n\\begin{equation*}\n \\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}J_1(\\xi, \\epsilon) \\, d\\xi \\to 0 \\quad (\\epsilon \\to 0).\n\\end{equation*}\n Similarly, since $|J_{0}(\\xi, \\epsilon)|$ is dominated by $|\\partial_{\\xi_k}^{\\ell}\\sigma_{1j}(\\xi)|$,\n which is in $L^1({\\mathbb R}_{\\xi}^3)$, and $J_{0}(\\xi, \\epsilon)$ converges to\n $\\partial_{\\xi_k}^{\\ell}\\sigma_{1j}(\\xi)$ for each $\\xi \\in {\\mathbb R}_{\\xi}^3$ as $\\epsilon \\to 0$, we have\n\\begin{equation*}\n \\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}J_0(\\xi, \\epsilon) \\, d\\xi \\to \\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}\n \\partial_{\\xi_k}^{\\ell}\\sigma_{1j}(\\xi) \\, d\\xi \\quad (\\epsilon \\to 0).\n\\end{equation*}\n Therefore, by letting $\\epsilon \\to 0$ in (\\ref{4Jepxi}), we obtain\n\\begin{equation}\\label{4xlequal} \n x_k^{\\ell}I_j(x) = x_k^{\\ell}(\\overline{\\mathcal F}\\sigma_{1j})(x)\n = i^{\\ell}(2\\pi)^{-3\/2}\\int_{{\\mathbb R}^3} e^{ix\\cdot\\xi}\n \\partial_{\\xi_k}^{\\ell}\\sigma_{1j}(\\xi) \\, d\\xi\n\\end{equation}\n for a.e. $x \\in {\\mathbb R}^3$, $\\ell \\ge 2$ and $j, k = 1, 2, 3$. Here the right-hand\n side is uniformly bounded in $x \\in {\\mathbb R}^3$. Thus, by considering the cases $\\ell = 2$\n and $\\ell = 4$, it follows that\n\\begin{equation*}\n |(\\overline{\\mathcal F}\\sigma_{1j})(x)| \\le C_j\\min (|x|^{-2}, |x|^{-4}) \\quad (j = 1, 2, 3)\n\\end{equation*}\n with a positive constant $C_j$, which implies that $\\overline{\\mathcal F}\\sigma_{1j} \\in L^1({\\mathbb R}^3)$.\n This completes the proof of Lemma 4.3.\n\\end{proof} \n\n\\vskip 5pt\n\n Now we are going to prove that $[H^{1,1}({\\mathbb R}^3)]^4$ is a proper subspace of\n $\\mathbb{H}^{1,1}({\\mathbb R}^3)$, which is the most crucial part of Theorem 1.3,\n (iii). The strategy is the following: let ${\\it g} \\in [L^1({\\mathbb R}^3)]^4 \\setminus [h^1({\\mathbb R}^3)]^4$,\n where $h^1({\\mathbb R}^3)$ is the local Hardy space, which can be defined, by Lemma 4.3, as\n the space of all distributions $f$ such that $f \\in L^1({{\\mathbb R}}^3)$ and $r'_j f \\in L^1({{\\mathbb R}}^3)$\n for $j=1, 2, 3$, where $r'_j$ is given by (\\ref{4r'jdef}). Set\n ${\\it f} = (\\alpha\\cdot\\hbox{\\sl p} + \\beta)^{-1}{\\it g}$. Then, by using Lemma 4.1,\n we have ${\\it f} \\in \\mathbb{H}^{1,1}({\\mathbb R}^3)$. 
We shall then show that\n ${\\it f} \\notin [H^{1,1}({\\mathbb R}^3)]^4$.\n\n\\vskip 5pt\n\n\\begin{prop} \n ${\\mathbb H}^{1,1}({\\mathbb R}^3)$ is strictly larger than $[H^{1,1}({\\mathbb R}^3)]^4$.\n\\end{prop} \n\n\\begin{proof} \n (I) It follows from Lemma 4.1 that, for every ${\\it g} \\in [L^1({\\mathbb R}^3)]^4$ there exists\n a unique ${\\it f} \\in {\\mathbb H}^{1,1}({\\mathbb R}^3)$ such that ${\\it g} = (\\alpha\\cdot \\hbox{\\sl p} + \\beta){\\it f}$.\n The equation ${\\it g} = (\\alpha\\cdot\\hbox{\\sl p} + \\beta){\\it f}$ can be written as\n\\begin{equation*}\n \\begin{cases}\n &g_1= -i(\\partial_1-i\\partial_2)f_4 -i\\partial_3f_3 +f_1,\\\\\n &g_2= -i(\\partial_1+i\\partial_2)f_3 +i\\partial_3f_4 +f_2, \\\\\n &g_3= -i(\\partial_1-i\\partial_2)f_2 -i\\partial_3f_1 -f_3,\\\\\n &g_4= -i(\\partial_1+i\\partial_2)f_1 +i\\partial_3f_2 -f_4.\\\\\n \\end{cases}\n\\end{equation*}\n Solving the above equations for $f_1, f_2, f_3$ and $f_4$, we obtain\n\\begin{equation}\\label{4eqf} \n \\begin{cases}\n (1 - \\Delta)f_1 = -i(\\partial_1 - i\\partial_2)g_4 - i\\partial_3g_3 + g_1, \\\\\n (1 - \\Delta)f_2 = -i(\\partial_1 + i\\partial_2)g_3 - i\\partial_3g_4 + g_2, \\\\\n (1 - \\Delta)f_3 = -i(\\partial_1 - i\\partial_2)g_2 - i\\partial_3g_1 - g_3, \\\\\n (1 - \\Delta)f_4 = -i(\\partial_1 + i\\partial_2)g_1 - i\\partial_3g_2 - g_4, \\\\\n \\end{cases}\n\\end{equation}\n where $\\partial_j = \\partial\/\\partial x_j$. Each equation in (\\ref{4eqf}) should be viewed\n as an equation in ${\\mathcal S}'({\\mathbb R}^3)$. As has been shown in the proof of Lemma 4.1,\n the differential operator $1 - \\Delta$ has the inverse $(1 - \\Delta)^{-1}$,\n a pseudodifferential operator with symbol $(1 + |\\xi|^2)^{-1}$, and hence,\n by applying $(1 - \\Delta)^{-1}$ and $\\partial_j$ to each of the equations in (\\ref{4eqf}),\n it follows that\n\\begin{equation}\\label{4eqf2} \n \\begin{cases}\n \\partial_jf_1 = -i(\\partial_j\\partial_1 - i\\partial_j\\partial_2)\n (1-\\Delta)^{-1}g_4 - i\\partial_j\\partial_3(1-\\Delta)^{-1}g_3\n +\\partial_j(1-\\Delta)^{-1}g_1,\\\\\n \\partial_jf_2 = -i(\\partial_j\\partial_1 + i\\partial_j\\partial_2)\n (1-\\Delta)^{-1}g_3 + i\\partial_j\\partial_3(1-\\Delta)^{-1}g_4\n +\\partial_j(1-\\Delta)^{-1}g_2,\\\\\n \\partial_jf_3 = -i(\\partial_j\\partial_1 - i\\partial_j\\partial_2)\n (1-\\Delta)^{-1}g_2 - i\\partial_j\\partial_3(1-\\Delta)^{-1}g_1\n -\\partial_j(1-\\Delta)^{-1}g_3,\\\\\n \\partial_jf_4 = -i(\\partial_j\\partial_1 + i\\partial_j\\partial_2)\n (1-\\Delta)^{-1}g_1 + i\\partial_j\\partial_3(1-\\Delta)^{-1}g_2\n -\\partial_j(1-\\Delta)^{-1}g_4.\n \\end{cases}\n\\end{equation}\n\n\\vskip 5pt\n\n (II) By Lemma 4.3 we can choose $g_0 \\in L^1({\\mathbb R}^3)\\setminus h^1({\\mathbb R}^3)$\n such that $r'_3g_0 = \\partial_3(1 - \\Delta)^{-1\/2}g_0 \\notin L^1({\\mathbb R}^3)$. Then define\n ${\\it g} = {}^t(g_1, g_2, g_3, g_4) \\in [L^1({\\mathbb R}^3)]^4$ by\n\\begin{equation*}\n g_1(x) = g_3(x) = g_4(x) = 0 \\ \\ \\ {\\rm and} \\ \\ \\ g_2(x) = g_0(x).\n\\end{equation*}\n Then we have from (\\ref{4eqf2})\n\\begin{equation}\\label{4pa3} \n \\partial_jf_4 = i\\partial_j\\partial_3(1-\\Delta)^{-1}g_2 \\qquad (j = 1, 2, 3).\n\\end{equation}\n Since $r_3'g_2 \\notin L^1({\\mathbb R}^3)$, we have necessarily $r_3'g_2 \\notin h^1({\\mathbb R}^3)$. Then\n $\\big(r'_1(r'_3g_2), r'_2(r'_3g_2), r'_3(r'_3g_2)\\big)$ does not belong to\n $[L^1({{\\mathbb R}}^3)]^3$. 
It follows from (\\ref{4r'jdef}) that the\n symbol $s_{j3}(\\xi)$ of $r'_jr'_3$ is given by\n\\begin{equation*}\n s_{j3}(\\xi) = \\frac{i\\xi_j}{(1 + |\\xi|^2)^{1\/2}}\\frac{i\\xi_3}{(1 + |\\xi|^2)^{1\/2}}\n = (i\\xi_j)(i\\xi_3)(1 + |\\xi|^2)^{-1},\n\\end{equation*}\n and hence by using (\\ref{4r'jdef}) again, we see that\n\\begin{equation}\\label{4r'jr'3} \n r'_jr'_3g_2 = (\\partial_j(1 - \\Delta)^{-1\/2})(\\partial_3(1 - \\Delta)^{-1\/2})g_2\n = \\partial_j\\partial_3(1 - \\Delta)^{-1}g_2\n\\end{equation}\n for $j = 1, 2, 3$. Thus we have from (\\ref{4pa3}) and (\\ref{4r'jr'3})\n\\begin{equation*}\n (\\partial_1f_4, \\partial_2f_4, \\partial_3f_4) = i(r'_1r'_3g_2, r'_2r'_3g_2, r'_3r'_3g_2)\n \\notin [L^1({\\mathbb R}^3)]^3,\n\\end{equation*}\n which implies that ${\\it f}} %\\def\\mbf{{\\mb f} \\notin [H^{1,1}({\\mathbb R}^3)]^4$. This completes the\n proof of Proposition 4.4.\n\\end{proof} \n\n\\vskip 5pt\n\n\\begin{prop} \n Let $\\Omega$ be an open subset of ${\\mathbb R}^3$.\nThen\n\n {\\rm (i)} $[H_0^{1,1}(\\Omega)]^4$ is a proper subspace of ${\\mathbb H}_0^{1,1}(\\Omega)$.\n\n {\\rm (ii)} $[H^{1,1}(\\Omega)]^4$ is a proper subspace of ${\\mathbb H}^{1,1}(\\Omega)$.\n\\end{prop} \n\n\\begin{proof} \n (I) We are going to show the norms $\\|f\\|_{S,1,1,\\Omega}$\n of $[H_0^{1,1}(\\Omega)]^4$ and $\\|f\\|_{D,1,1,\\Omega}$ of ${\\mathbb H}_{0}^{1,1}(\\Omega)$\n are not equivalent on $[C_0^{\\infty}(\\Omega)]^4$ (see (\\ref{1SOB}) and (\\ref{1S1p}) for\n the definition of these norms). To this end we use Proposition 4.4. Without loss of\n generality, we may assume that $\\Omega$ contains the unit ball $\\{x\\,:\\, |x| \\leq 1\\}$\n with center at the origin. As in (\\ref{1pOnorm}) (with $p = 1$), we denote the norm of\n $[L^1(\\Omega)]^4$ by $\\|{\\it f}} %\\def\\mbf{{\\mb f}\\|_{1,\\Omega}$, {\\it i.e.,}\n\\begin{equation*}\n \\|{\\it f}} %\\def\\mbf{{\\mb f}\\|_{1,\\Omega} = \\int_{\\Omega} \\, \\sum_{j=1}^4 |f_j(x)| \\, dx\n \\qquad ({\\it f}} %\\def\\mbf{{\\mb f}(x) = {}^t(f_1(x), f_2(x), f_3(x), f_4(x)).\n\\end{equation*}\n By Proposition 4.4 and the fact that $[C_0^{\\infty}({{\\mathbb R}}^3)]^4$ is dense in\n both $[H^{1,1}({\\mathbb R}^3)]^4$ and $\\mathbb{H}^{1,1}({\\mathbb R}^3)$ (Theorem 1.3, (i)), the\n norms $\\|{{\\it f}} %\\def\\mbf{{\\mb f}}\\|_{S,1,1,{\\mathbb R}^3}$ and $\\|{{\\it f}} %\\def\\mbf{{\\mb f}}\\|_{D,1,1,{\\mathbb R}^3}$ are not equivalent\n on $[C_0^{\\infty}({{\\mathbb R}}^3)]^4$. Therefore, by taking note of Proposition 2.2,\n (\\ref{2est}) with $\\Omega = {\\mathbf R}^3$, which says that the norm $\\|{{\\it f}} %\\def\\mbf{{\\mb f}}\\|_{D,1,1,{\\mathbb R}^3}$\n is dominated by the norm $\\|{{\\it f}} %\\def\\mbf{{\\mb f}}\\|_{S,1,1,{\\mathbb R}^3}$,\n there exists a sequence $\\{{{\\it f}} %\\def\\mbf{{\\mb f}}_n\\}_{n=1}^{\\infty}$ of functions in\n $[C_0^{\\infty}({{\\mathbb R}}^3)]^4$ such that ${\\it f}} %\\def\\mbf{{\\mb f}_n \\ne 0$ and\n $\\|{{\\it f}} %\\def\\mbf{{\\mb f}_n}\\|_{S,1,1,{\\mathbb R}^3} \\ge (n + 1)\\|{{\\it f}} %\\def\\mbf{{\\mb f}_n}\\|_{D,1,1,{\\mathbb R}^3}$\\,, or\n\\begin{equation}\\label{4ineqr3} \n \\|{\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3} + \\|\\nabla{\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3}\n \\ge (n + 1)[\\|{\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3} + \\|(\\alpha\\cdot{\\hbox{\\sl p}}){\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3}]\n\\end{equation}\n for each $n = 1, 2, \\cdots$. 
Each ${\\it f}} %\\def\\mbf{{\\mb f}_n$ has support in some ball $\\{ x \\,:\\, |x|\\le R_n\\}$\n with radius $R_n > 0$ and center at the origin. We may assume with no loss of generality that\n $R_n \\ge 1$ ($n = 1, 2, \\cdots$). Put ${\\mb g}_n(x) = R_n^3{\\it f}} %\\def\\mbf{{\\mb f}_n(R_nx)$. Then ${\\mb g}_n$ has support\n in the unit ball $\\{ x \\,:\\, |x|\\le 1 \\}$, and hence in $\\Omega$, so that\n $\\{{\\mb g}_n\\} \\subset [C_0^{\\infty}(\\Omega)]^4$ for each $n$. We have\n$$\n \\|{\\mb g}_n\\|_{1,\\Omega} = \\|{\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3} \\ \\ \\ {\\rm and} \\ \\ \\\n \\|\\partial_j {\\mb g}_n\\|_{1,\\Omega}\n = R_n\\|\\partial_j{\\it f}} %\\def\\mbf{{\\mb f}_n\\|_{1,{\\mathbb R}^3}\n$$\n for $j = 1, 2, 3$. Then by (\\ref{4ineqr3}) we have\n$$\n \\|{\\mb g}_n\\|_{1,\\Omega} + \\frac1{R_n}\\|\\nabla{\\mb g}_n\\|_{1,\\Omega}\n \\ge (n + 1)[ \\|{\\mb g}_n\\|_{1,\\Omega} +\n \\frac1{R_n}\\|(\\alpha\\cdot \\hbox{\\sl p}){\\mb g}_n\\|_{1,\\Omega}],\n$$\n and hence, by noting $R_n \\ge 1$\n\\begin{multline*}\n \\hskip 50pt \\frac1{R_n}\\|\\nabla{\\mb g}_n\\|_{1,\\Omega}\n \\ge n\\|{\\mb g}_n\\|_{1,\\Omega}\n + \\frac{n+1}{R_n}\\|(\\alpha\\cdot \\hbox{\\sl p}){\\mb g}_n\\|_{1,\\Omega} \\\\\n \\ge \\frac{n}{R_n} [\\|{\\mb g}_n\\|_{1,\\Omega}\n + \\|(\\alpha\\cdot \\hbox{\\sl p}){\\mb g}_n\\|_{1,\\Omega}\\|]. \\hskip 126pt\n\\end{multline*}\n Therefore\n$$\n \\|\\nabla{\\mb g}_n\\|_{1,\\Omega}\n \\ge n[\\|{\\mb g}_n\\|_{1,\\Omega} + \\|(\\alpha\\cdot \\hbox{\\sl p}){\\mb g}_n\\|_{1,\\Omega}],\n$$\n which implies that $\\|{\\mb g}_n\\|_{S,1,1,\\Omega} \\ge n\\|{\\mb g}_n\\|_{D,1,1,\\Omega}$\n for $n = 1, 2, \\cdots$. This proves that the norms $\\|f\\|_{S,1,1,\\Omega}$ and\n $\\|f\\|_{D,1,1 \\Omega}$ are not equivalent on $[C_0^{\\infty}(\\Omega)]^4$, showing (i).\n\n (II) As we have seen in the proof of (i), the norms of $[H_0^{1,1}(\\Omega)]^4$\n and ${\\mathbb H}_0^{1,1}(\\Omega)$ are not equivalent. In fact, there exists a sequence\n $\\{g_n\\} \\subset [C_0^{\\infty}(\\Omega)]^4$ such that\n\\begin{equation*}\n \\|g_n\\|_{S,1,1,\\Omega} \\geq n \\|g_n\\|_{D,1,1,\\Omega}, \\,\\, n=1,2,\\, \\dots.\n\\end{equation*}\n Since clearly this sequence is also contained in $[H^{1,1}(\\Omega)]^4$,\n this implies that the norms of $[H^{1,1}(\\Omega)]^4$ and ${\\mathbb H}^{1,1} (\\Omega)$ are\n not equivalent. This shows (ii), completing the proof of Proposition 4.5.\n\\end{proof} \n\n\\begin{proof} [Proof of Theorem 1.3, (iii)]\n Theorem 1.3, (iii) follows from Propositions 4.5.\n\\end{proof}\n\n\\bigskip\\bigskip\n {\\bf Acknowledgement.}\\ \nBeing deeply grieved that our respected friend, Professor Tetsuro Miyakawa,\nsuddenly passed away in February 2009, \nwe gratefully remember nice useful discussion with him\nabout the Riesz transform at the early stage of the present work.\nThanks are also due to Professor Shuichi Sato for valuable\n discussion about the Hardy space and local Hardy space. \n The research of T.I. is supported in part by JSPS Grant-in-Aid\n for Sientific Research No. 17654030.\nWe wish to thank the referee for his kindly pointing out\na flaw in an assertion in Theorem 1.3(ii) of our original version of \nthe paper.\n\n\n\\vskip 30pt\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\nPrecise determinations of radial matrix elements of electric dipole (E1) transitions are essential for advancing the study of parity-nonconserving (PNC) weak-force-induced interactions in atoms. 
These matrix elements are important for testing calculations of the PNC transition amplitudes $\\mathcal{E}_{\\rm PNC}$~\\cite{BlundellSJ92, KozlovPT01, PorsevBD10, RobertsDF2013}, as well as for determining the scalar and vector transition polarizabilities~\\cite{BlundellSJ92, DzubaFS97, Derevianko00, SafronovaJD99, VasilyevSSB02}. For PNC studies based on the $6s~^2S_{1\/2} \\rightarrow 7s~^2S_{1\/2}$ transition in cesium, for example, the most essential E1 matrix elements are $\\langle ms~^2S_{1\/2} || r || np~^2P_J\\rangle$, where $m, n = 6$ or 7, and $J = 1\/2$ or $3\/2$. Over the years, most of these quantities have been measured~\\cite{YoungHSPTWL94,RafacT98,RafacTLB99,DereviankoP02a,AminiG03,BouloufaCD07,ZhangMWWXJ13,PattersonSEGBSK15,GregoireHHTC15,TannerLRSKBYK92,SellPEBSK11,antypas7p2013,BouchiatGP84,TohJGQSCWE18,BennettRW99,Borvak14,PhysRevA.99.032504} to a precision of 0.15\\% or better. The least precise moments, prior to the present work, were $\\langle 6s~^2S_{1\/2} || r || 7p~^2P_{J}\\rangle$. Disagreement between three recent experimental results~\\cite{VasilyevSSB02,antypas7p2013,Borvak14} at the $\\sim1$\\% level motivated us to re-examine these transitions. In this paper, we report new measurements of these matrix elements in $^{133}$Cs to a precision of $0.10\\%$ and $0.16\\%$, completing the required set of precise determinations of $E1$ dipole matrix elements between the lowest $ms~^2S_{1\/2}$ and $np~^2P_J$ states.\n\nIn order to determine the reduced matrix element for the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{3\/2}$ transition at $\\lambda = 455.7$ nm, we carry out a set of measurements in which we compare the absorption coefficient on this line to that of the `reference' D$_1$ line at 894.6 nm. See the simplified energy level diagram in Fig.~\\ref{fig:EnergyLevel}(a).\n\\begin{figure} [b!]\n\t \\includegraphics[width=8.5cm]{EnergyLevels_dual.png}\\\\\n\t \\caption{Energy level diagrams of atomic cesium, showing the states relevant to these measurements. In (a), atoms are excited from the $6s\\, ^2S_{1\/2}$ ground state to the $6p\\, ^2P_{1\/2}$ ($\\lambda =$ 894 nm) or the $7p\\, ^2P_{3\/2}$ ($\\lambda =$ 456 nm) excited states, allowing for a comparison of the absorption coefficients of these two lines.\n\t In (b), we compare the absorption coefficients for the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{1\/2}$ line at 459 nm to that of the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{3\/2}$ line at 456 nm. Wavelengths shown in the figure are vacuum wavelengths.}\n\t \\label{fig:EnergyLevel}\n\\end{figure}\nThe matrix element for the latter is well measured~\\cite{YoungHSPTWL94,RafacT98,RafacTLB99,DereviankoP02a,AminiG03,BouloufaCD07,ZhangMWWXJ13,PattersonSEGBSK15,GregoireHHTC15,TannerLRSKBYK92,SellPEBSK11}, with an impressive precision of 0.035\\%. The ratio of absorption coefficients for these two lines therefore allows us to determine the reduced matrix element $\\langle 6s~^2S_{1\/2} || r || 7p~^2P_{3\/2}\\rangle$ precisely. \nA similar comparison to the D$_1$ line strength for the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{1\/2}$ transition at $\\lambda = 459.4$ nm, however, is less fruitful. This is a weaker absorption line, and the difference between the absorption strength at 459 nm and at 894 nm is too great. Therefore, we determine the matrix element at 459 nm through comparison with the 456 nm line strength, which now serves as the reference. 
We show the relevant transitions for this measurement in Fig.~\\ref{fig:EnergyLevel}(b).\n\nIn each case, we use a pair of cw tunable single-mode diode lasers to measure and compare the absorption strengths of two lines in a cesium vapor cell. \nWe direct the two laser beams through a cesium vapor cell along overlapping beam paths. Then we block one laser beam to allow only the other to pass through the cell, scan the laser frequency through the resonance frequency, and record the absorption lineshape for this line. We then block the first laser, and record the absorption lineshape for the second. We alternate measurements between lasers several times in quick succession.\n\nThese measurements differ from a previous measurement~\\cite{antypas7p2013} from our group in several important regards. In that measurement, we used a single blue diode laser which we could tune to either the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{1\/2}$ transition at 459 nm or the $6s~^2S_{1\/2} \\rightarrow 7p~^2P_{3\/2}$ transition at 456 nm, and compared the absorption coefficient of each of these lines directly to the absorption coefficient of the 894~nm line. The precision of the measurement for the 459~nm line suffered from the large difference in absorption strengths, as described previously.\nIn the present measurement, we avoid this difficulty by using two blue diode lasers, and determining the ratio of absorption coefficients for these two lines directly. Two additional improvements that we have made are: (1) In our previous measurement, we fit the absorption curves assuming a Gaussian Doppler-broadened lineshape. We have discovered that, to attain a level of precision of less than 1\\%, one must use a proper Voigt profile, a convolution of the homogeneous natural linewidth and the inhomogeneous Doppler-broadened linewidth, when the natural linewidths and\/or Doppler widths of the two transitions differ from one another. (2) For strongly absorbing lines, such as the D$_1$ line at the higher cell densities, the scan speed of the laser frequency becomes important. Under these strong absorption conditions, the medium changes quickly from fully transmitting to fully absorbing, and then back to fully transmitting again, as we tune the laser through the resonance. If the rise and fall times of the photodetector are too slow, then one cannot obtain good fits to the data, and the measurement of the absorption coefficient is not reliable. We have corrected each of these issues in the current measurements.\n\n\n\\section{Theory}\\label{sec:theory}\n \nWhen a low-intensity, narrow-band laser beam is incident upon a cell containing an absorbing atomic medium, the laser power transmitted through the cell can be written simply as \n\\begin{equation}\nP(\\omega) = P_0 \\exp{ \\{-2 \\alpha(\\omega ) \\ell_{cell} \\} },\n\t \\label{eqn:fitfunction}\n\\end{equation}\nwhere $P_0$ is the transmitted power in the absence of any absorption by the medium, $\\alpha(\\omega )$ is the frequency-dependent electric field attenuation coefficient, and $\\ell_{cell}$ is the cell length. 
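To make Eq.~(\\ref{eqn:fitfunction}) concrete, the short numerical sketch below (an illustration with made-up parameter values, not our analysis code) recovers the attenuation coefficient from a simulated transmission scan by inverting the exponential:\n\\begin{verbatim}\nimport numpy as np\n\n# Invert Eq. (1): alpha(omega) = -ln(P\/P0) \/ (2 * l_cell).\n# All numbers below are illustrative assumptions, not measured values.\nl_cell = 0.10                          # cell length (m), assumed\nP0 = 1.0                               # off-resonance transmitted power, assumed\nomega = np.linspace(-5.0, 5.0, 1001)   # detuning axis, arbitrary units\nalpha_true = 0.8 * np.exp(-omega**2)   # synthetic attenuation profile (1\/m)\nP = P0 * np.exp(-2.0 * alpha_true * l_cell)    # synthetic scan, Eq. (1)\nalpha = -np.log(P \/ P0) \/ (2.0 * l_cell)       # recovered coefficient\nprint(abs(alpha - alpha_true).max())   # ~0, confirming the inversion\n\\end{verbatim}\nIn practice the recovered $\\alpha(\\omega)$ is fit to the Voigt-profile model described below, rather than read off point by point.\n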
The attenuation coefficient $\\alpha(\\omega )$ for linearly-polarized light by a Doppler-broadened atomic gas\nin terms of the reduced $E1$ matrix elements $\\langle J^{\\prime} || \\vec{r} || J \\rangle $, as a sum over the various hyperfine components of the states, is given by Eq.~(14) of Ref.~\\cite{RafacT98} as\n\\begin{eqnarray}\\label{eq:abscoeff}\n \\alpha(\\omega) & = & \\frac{2 \\pi^2 n \\alpha_{\\rm fs} \\omega }{\\left( 2I+1\\right) \\left(2J+1\\right) }\n \\left| \\langle J^{\\prime} || \\vec{r} || J \\rangle \\right|^2 \\\\ & & \\hspace{0.3in}\\times \\sum_{F^{\\prime}, } \\sum_{F} q_{J,F \\rightarrow J^{\\prime},F^{\\prime}} V(\\omega) , \\nonumber\n\\end{eqnarray}\nwhen the transition frequencies \n$\\omega$ are independent of $m$ and $m^{\\prime}$. \n$J$, $I$, and $F$ are quantum numbers for the total electronic, nuclear spin, and total angular momentum, respectively, and $m$ for the projection of $F$ on the $z$ axis. \nWe use unprimed (primed) notation to indicate ground (excited) state quantities.\n$n$ is the number density of the cesium atoms in the beam path, $\\alpha_{\\rm fs}$ is the fine structure constant, and\n$q_{J,F \\rightarrow J^{\\prime},F^{\\prime}}$ are weight factors for the different hyperfine components due to the angular momentum of the states, \n\\begin{eqnarray}\\label{eq:q}\n q_{J,F \\rightarrow J^{\\prime},F^{\\prime}} & = &(-1)^{2(I+J)} \\left( 2F^{\\prime} + 1\\right) \\left( 2F + 1\\right) \\nonumber \\\\\n & & \\hspace{-0.4in} \\times \\sum_{m, m^{\\prime}} \\left( \\begin{array}{ccc} F^{\\prime} & 1 & F \\\\ -m^{\\prime} & 0 & m \\end{array} \\right)^2 \\left\\{ \\begin{array}{ccc} J^{\\prime} & F^{\\prime} & I \\\\ F & J & 1 \\end{array} \\right\\}^2 .\n\\end{eqnarray} \nThe arrays in the parentheses and curly brackets are the Wigner $3j$ (for linear polarization) and $6j$ symbols, respectively. We list the values of $q_{J,F \\rightarrow J^{\\prime},F^{\\prime}}$ for the transitions relevant to this work in Table \\ref{table:q}.\n\\begin{table}[t!]\n\\caption{Numerical values of the weights $q_{J,F \\rightarrow J^{\\prime},F^{\\prime}}$ for each of the hyperfine components of the $6s \\ ^2S_{1\/2} \\rightarrow np \\ ^2P_{j}$ transitions, as given in Eq.~(\\ref{eq:q}). 
\\label{table:q} }\n \\begin{tabular}{|l|c|c|}\n \\hline\n \\multicolumn{1}{|c|} {$F \\rightarrow F^{\\prime}$} & $6s \\ ^2S_{1\/2} \\rightarrow np \\ ^2P_{1\/2}$ & $6s \\ ^2S_{1\/2} \\rightarrow np \\ ^2P_{3\/2}$ \\\\ \\hline \\hline\n \n\\rule{0in}{0.15in} $4 \\rightarrow 3^{\\prime} $ & $ 7\/8$ & $7\/48$ \\\\\n\n\\rule{0in}{0.15in} $4 \\rightarrow 4^{\\prime} $ & $ 5\/8$ & $7\/16$ \\\\\n\n\\rule{0in}{0.15in} $4 \\rightarrow 5^{\\prime} $ & $ - $ & $11\/12$ \\\\\n\n\\rule{0in}{0.3in} $3 \\rightarrow 2^{\\prime} $ & $ -$ & $5\/12$ \\\\\n\n\\rule{0in}{0.15in} $3 \\rightarrow 3^{\\prime} $ & $ 7\/24$ & $7\/16 $ \\\\\n\n\\rule{0in}{0.15in} $3 \\rightarrow 4^{\\prime} $ & $ 7\/8$ & $5\/16$ \\\\\n \\hline\n \\end{tabular}\n \n\\end{table}\n\n\n$V(\\omega)$ is the Voigt lineshape function, \n\\begin{equation}\\label{eq:Voigt}\n V(\\omega) = \\sqrt{\\frac{\\ln 2}{\\pi^3}} \\frac{1}{\\Delta \\omega_D} \\int_{-\\infty}^{\\infty} \\frac{\\Gamma^{\\prime} e^{-4 \\ln2 \\left( \\omega_D \/ \\Delta \\omega_D \\right)^2} d \\omega_D }{\\left[\\omega - \\omega_D - \\omega_{F \\rightarrow F^{\\prime}} \\right]^2 + \\Gamma^{\\prime 2}\/4} ,\n\\end{equation} \nthe convolution of the Lorentzian homogeneous lineshape function (of width $\\Gamma^{\\prime}$) and the Gaussian distribution of width $\\Delta \\omega_D$.\n$\\omega_{F \\rightarrow F^{\\prime}}$ is the resonant frequency of the $F \\rightarrow F^{\\prime}$ hyperfine component of the transition.\nThis lineshape function is normalized such that its integral across the resonance is unity. In this expression, $\\omega_D$ is the Doppler shift, equal to $\\omega v\/c$, where $v$ is the atomic velocity, and $\\Delta \\omega_D$ is the Doppler full-width-at-half-maximum (FWHM) of the transition, equal to $\\omega \\: \\sqrt{8 k_B T \\ln2\/(Mc^2)} \\: $. \nThus, precise measurements of the absorption in a cell would allow us to determine the radial matrix element for that individual transition, provided we also measure the vapor density in the cell and the length of the cell. \n\n\n\nInstead, by alternating transmission measurements between two spectral lines, we can determine the ratio of absorption strengths, and eliminate the need for precise determination of the cell length and vapor density. \nFor example, to determine $\\langle 6s_{1\/2} || r || 7p_{3\/2} \\rangle$ (we abbreviate the state notation $|m \\ell \\ ^2L_J \\rangle $ using only the quantum numbers of the single active electron $|m \\ell_j \\rangle $), we measure the absorption through the cell on the $6s \\ ^2S_{1\/2} \\rightarrow 6p \\ ^2P_{1\/2}$ line at 894 nm, which serves as the reference, and the absorption on the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ line at 456 nm. The ratio of matrix elements is then determined as\n\\begin{equation}\\label{eq:ratioR1}\n R_1 \\equiv \\frac{\\langle 6s_{1\/2} || r || 6p_{1\/2}\\rangle}{\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle} = \\sqrt{\\frac{\\alpha_{3 \\rightarrow 3^{\\prime}}^{894}(\\omega_0)\/(7\/24)}{\\alpha_{F \\rightarrow F^{\\prime}}^{456}(\\omega_0)\/q_{F \\rightarrow F^{\\prime}}}}.\n\\end{equation}\n$\\alpha_{F \\rightarrow F^{\\prime}}^{\\lambda}(\\omega_0)$ is the attenuation coefficient at line center at wavelength $\\lambda$ for the $F \\rightarrow F^{\\prime}$ component, and we have abbreviated the $q_{F \\rightarrow F^{\\prime}}$ factor of Eq.~(\\ref{eq:q}), omitting the $J$ and $J^{\\prime}$ subscripts for brevity. 
\nThis factor $\\alpha_{F \\rightarrow F^{\\prime}}^{\\lambda}(\\omega_0) \/ q_{F \\rightarrow F^{\\prime}}$ is the same for each of the different hyperfine components of the transition, so we define a scaled attenuation term\n\\begin{equation}\n \\Upsilon^{\\lambda} =\\frac{ \\alpha_{F \\rightarrow F^{\\prime}}^{\\lambda}(\\omega_0) \\ell_{cell}}{ q_{F \\rightarrow F^{\\prime}}}\n\\end{equation}\nfor the line. The term $\\Upsilon^{\\lambda}$ is the attenuation of the absorbing vapor on the line at wavelength $\\lambda$, \ndefined in such a way as to make $\\Upsilon^{\\lambda}$ equivalent for each of the hyperfine components of the transition.\nIn terms of $\\Upsilon$, then, the ratio $R_1$ is \n\\begin{equation}\\label{eq:ratioR1Upsilon}\n R_1 = \\frac{\\langle 6s_{1\/2} || r || 6p_{1\/2}\\rangle}{\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle} = \\sqrt{\\frac{\\Upsilon^{894}}{\\Upsilon^{456}}}.\n\\end{equation}\nSimilarly, we define the ratio \n\\begin{equation}\\label{eq:ratioR2}\n R_2 \\equiv \\frac{\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle}{\\langle 6s_{1\/2} || r || 7p_{1\/2}\\rangle} = \\sqrt{\\frac{\\Upsilon^{456}}{\\Upsilon^{459}}},\n\\end{equation}\nwhich we measure by comparing the attenuation coefficient of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ line at 456 nm, which serves as the reference, to that of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{1\/2}$ line at 459 nm. We describe our measurement of $R_2$ in Sec. \\ref{sec:RME459}. \n\nThere is an important subtlety regarding the role of the transition frequency $\\omega$ in the attenuation coefficient. The frequency $\\omega$ appears in the numerator of the expression for the attenuation coefficient, Eq.~(\\ref{eq:abscoeff}). For a Doppler-broadened medium, the Doppler width $\\Delta \\omega_D$, which is proportional to $\\omega$, appears in the denominator of Eq.~(\\ref{eq:Voigt}). Therefore, for a Doppler-broadened transition, these frequency factors cancel, and the attenuation coefficients in Eq.~(\\ref{eq:ratioR1Upsilon}) are independent of the optical frequencies of the two transitions.\nCareful attention, however, must be paid to the proper normalization of the Voigt function.\n\n\n\\section{Measurement of $R_1$}\\label{sec:RME456}\nWe first describe the measurement of the ratio of transition moments $R_1$, as defined in Eq.~(\\ref{eq:ratioR1Upsilon}). \n\nWe show the experimental setup in Fig.~\\ref{fig:ExperimentalSetup894}.\n\\begin{figure}\n\t \\includegraphics[width=8cm]{Exptsetupsmall-894.png}\\\\\n\t \\caption{Experimental setup for the two measurements. The 455.7 nm laser stays the same for both measurements. The 894.6 nm or 459.4 nm laser is used depending on whether the $R_1$ or $R_2$ measurement is being performed. Abbreviations in this figure are: \n(AOM1,2) acousto-optic modulators; (ECDL) external cavity diode laser;\n(PD1-3) photodiodes; (FP1,2) Fabry-P\\'erot cavities; and (W) wedged windows. FP2 (in the dashed box) is used only for the measurement of $R_1$. }\n\t \\label{fig:ExperimentalSetup894}\n\\end{figure}\nWe use two home-made external cavity diode lasers (ECDLs) in Littrow configurations, one at $\\lambda = 894$ nm, the other at 456 nm. The 894 nm laser produces $\\sim$10 mW of output power, while the 456 nm laser produces $\\sim$20 mW. By ramping the laser diode current and the piezoelectric transducer (PZT) voltage concurrently, we are able to achieve mode-hop free scans of $7-10$ GHz, significantly greater than the widths of the spectra. 
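Although not needed for the measurement itself, the entries of Table~\\ref{table:q} can be checked directly from Eq.~(\\ref{eq:q}). The short script below is one way to do so; it is only a sketch using the sympy library, with our own function name and structure, and it relies on the fact that the $3j$ symbol forces $m^{\\prime} = m$ in the double sum:

\\begin{verbatim}
from sympy import S
from sympy.physics.wigner import wigner_3j, wigner_6j

I_NUC = S(7)/2   # 133Cs nuclear spin

def q_factor(J, F, Jp, Fp):
    # Eq. (3) for linear polarization (q = 0 component);
    # the 3j symbol vanishes unless m' = m
    pref = (-1)**int(2*(I_NUC + J)) * (2*Fp + 1) * (2*F + 1)
    sixj = wigner_6j(Jp, Fp, I_NUC, F, J, 1)**2
    mmax = min(F, Fp)
    threej = sum(wigner_3j(Fp, 1, F, -m, 0, m)**2
                 for m in range(-mmax, mmax + 1))
    return pref * threej * sixj

print(q_factor(S(1)/2, 4, S(3)/2, 5))   # 11/12
print(q_factor(S(1)/2, 3, S(1)/2, 3))   # 7/24
\\end{verbatim}

The printed values reproduce the corresponding entries of Table~\\ref{table:q}.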
\n\n\nWe align the laser beams so that they overlap one another in the cesium vapor cell, a sealed glass cell of inside length $\\ell_{cell} = 29.9034 \\: (44)$~cm fitted with flat windows. \nControl of the density of cesium in the cell is achieved using a cold finger enclosed within a copper block, whose temperature we control and stabilize to between $-8$ and $+18^\\circ$C with a thermo-electric cooler and feedback circuit. We use Kapton heaters to keep the vapor cell above room temperature at $\\sim 25^\\circ$C, and enclose the cell in an aluminum shell inside an insulating styrofoam container to help maintain a stable and uniform body temperature. \nTo detect the power of the laser beam transmitted through the vapor cell, we use a linear silicon photodiode, labeled PD3 in Fig.~\\ref{fig:ExperimentalSetup894}. \nThe photodiode current is amplified using a trans\\-impedance amplifier with a gain of $5 \\times 10^7$ V\/A, designed for high-gain, low-noise operation. This amplifier is followed by a second op amp with a gain of 10. \nWe chose a slow scan rate ($\\sim$4 GHz\/s) and wide amplifier bandwidth (60 kHz) to allow fast rise and fall times of the signal.\n\n\\begin{figure*}[t!]\n\t\\includegraphics[width=15cm]{blue_all_graphs_fix.png}\\\\\n\t \\caption{Examples of absorption spectra recorded at a cold finger temperature of $-2^\\circ$C. These figures show the photodiode signal versus laser frequency as we scan (a) the 456 nm laser through $ F=4 \\rightarrow F'=3^{\\prime},4^{\\prime},5^{\\prime}$ components (not resolved) of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2} $ transition, and (c) the 894~nm laser through $F=3 \\rightarrow F'=3^{\\prime},4^{\\prime}$ components of the $6s \\ ^2S_{1\/2} \\rightarrow 6p \\ ^2P_{1\/2} $ transition. In each, the data are shown as the red data points, and the result of a least-squares fit as a black dashed line. We show the residuals (data$-$fit) for each in (b) and (d).}\n\t \\label{fig:datafit894}\n\\end{figure*}\n\nTo improve the precision of the measurements, we stabilize the optical power delivered to the cell. For this purpose, we diffract a fraction of each individual beam in an acousto-optic modulator (AOM1 or AOM2), and measure the relative power of the undiffracted beams using photodiodes (PD1 or PD2). We use the photodiode current to generate an error signal, which controls the r.f.\\ power applied to the AOMs. In this manner, we are able to stabilize the power of each laser, and achieve a relatively flat power level for the scan of each laser, with less than 3\\% variation in the laser power over a typical scan of $4-6$ GHz for the 456 nm laser and 0.5\\% for the 894 nm laser.\nTo minimize saturation effects, the power incident on the cell is about 40 nW in a $\\sim$1 mm diameter beam for the 456 nm laser and 8 nW in a $\\sim$2 mm diameter beam for the 894 nm laser. \nA 15 cm focal length lens after the vapor cell reduces the laser beam size incident on PD3 to less than the photocathode size.\n\nWe calibrate the frequency scans of the two lasers using separate Fabry-P\\'erot (FP) cavities, with free spectral ranges (FSR) of $\\sim1500$ MHz. 
\nWe record the transmission through the cavity concurrently with each absorption spectrum, and fit the frequencies of the transmission peaks to a 3rd-order polynomial in the laser frequency ramp voltage.\n\nBefore each set of measurements, we record the photodetector background offset voltage, the measured signal when no light is incident on the photodiode.\nWe also account for the small amount of laser power in the wings of the laser power spectrum. For this, we insert a second cesium vapor cell, which we heat to $\\sim 120^\\circ$C, into the beam path at the beginning of each data run. This vapor cell is labeled `Hot Cell' in Fig.~\\ref{fig:ExperimentalSetup894}. Absorption in this cell of the on-resonant light is very strong, while off-resonant light is transmitted.\nThis gives us a good measurement of the laser power in the wings, typically $\\sim 0.1\\%$ of the total power for the 456 nm laser and $\\sim 1\\%$ for the 894 nm laser. \nWe then determine the total offset level that comes from the background and laser power in the wings, to deduct from our data before curve fitting.\n\n\nFor each measurement, we block one of the lasers so that only one beam passes through the vapor cell, and record approximately four full absorption curves over a ten second period. We then block that laser, unblock the other, and record the absorption curves for the second laser. \nWe repeat this process for a total of four records of 894 nm and three for 456 nm. In total, there are typically sixteen scans at 894~nm and twelve at 456~nm for each measurement.\nRapid reversals between the two wavelengths help minimize variations in the cesium density between measurements.\nWe perform multiple runs at each temperature. \nThen we change the temperature of the cold finger, wait for the cold finger temperature to stabilize, and collect new spectra. \nAdditionally, we remove the absorption cell from the beam path and verify the absence of any spectral feature in the scans. \n\n\n\nWe show examples of absorption spectra at 456 nm and 894 nm in Fig.~\\ref{fig:datafit894}.\nThe absorption peak at 456 nm, shown as the red data points in Fig.~\\ref{fig:datafit894}(a), is made up of the three hyperfine transitions $F = 4 \\rightarrow F^{\\prime} = 3^{\\prime}, 4^{\\prime}$, and $5^{\\prime}$. These peaks are unresolved since the hyperfine splitting of the $7p \\ ^2P_{3\/2}$ level~\\cite{williamsHH18} is less than the Doppler width $\\Delta \\nu_D^{456} \\sim 700$ MHz. The slope of the unabsorbed signal to either side of the absorption dip is due to etalon effects (the variation of the transmitted power due to the interference between the reflections from the two window surfaces) in the windows of the cell. These windows are 1.2 mm thick, corresponding to a FSR of $\\sim$80 GHz, far greater than the full 6 GHz scan length. \nThe 894 nm absorption line shown in Fig.~\\ref{fig:datafit894}(c) consists of two components, corresponding to $F^{\\prime} = 3^{\\prime}$ on the left and $F^{\\prime} = 4^{\\prime}$ on the right. The frequency separation between these two peaks is the \n1167.7 MHz hyperfine splitting of the $6p \\ ^2P_{1\/2}$ state~\\cite{UdemRHH99,Das2016,RafacTanner1997,GerginovCTMDBH06}, which is resolved in this spectrum since this splitting is greater than the $\\Delta \\nu_D^{894} \\sim 360$ MHz Doppler broadening of the transition. 
Note that the $F^{\\prime} = 3^{\\prime}$ peak is weaker than the $F^{\\prime} = 4^{\\prime}$ peak, consistent with \n$q_{3 \\rightarrow 3^{\\prime}} = 7\/24$ and $q_{3 \\rightarrow 4^{\\prime}} = 7\/8$ for $J = J^{\\prime} = 1\/2$ \nlisted in Table \\ref{table:q}. \nFor each of these lines, we have the choice of exciting from the $F = 3$ or $F = 4$ hyperfine component of the ground state. Since the 894 nm line absorbs much more strongly than the 456 nm line, we used the $F = 3$ ground state (the weaker component) for the former, and $F = 4$ (the stronger component) for the latter. Given the limitations of our setup, the best case would be if both lines had the same strength, as we would then be able to record data over a wider range of temperatures or attenuation coefficients.\n\nFor each of the spectra, we fit the data to an equation of the form shown in Eqs.~(\\ref{eqn:fitfunction})$-$(\\ref{eq:Voigt}). Our fit equation has five adjustable parameters: the level of full transmission, a term to account for the slope in the full-transmission level, the center frequency of one of the hyperfine components of the transition, and terms describing the Doppler-broadened width ($\\Delta \\nu_D^{\\lambda}$) and amplitude ($\\Upsilon^{\\lambda}$) of the absorption dip. The relative heights of the different hyperfine components are determined by the $q_{F \\rightarrow F^{\\prime}}$ factors in Table \\ref{table:q}, and are fixed in our fits. The Voigt lineshape consists of a Lorentzian of width $\\Gamma^{\\prime} = 2 \\pi (\\Delta \\nu_N + 0.2 \\ {\\rm MHz}) $, where $\\Delta \\nu_N = $ 4.6 MHz~\\cite{YoungHSPTWL94,RafacT98,RafacTLB99,DereviankoP02a,AminiG03,BouloufaCD07,ZhangMWWXJ13,PattersonSEGBSK15,GregoireHHTC15,TannerLRSKBYK92,SellPEBSK11} (1.22 MHz~\\cite{PaceA75, MarekN76, GustavssonLS77, DeechLPS77, CampaniDG78, OrtizC81}) is the natural linewidth of the $6p \\ ^2P_{1\/2}$ line ($7p \\ ^2P_{3\/2}$ line) and 0.2 MHz is the intrinsic linewidth of the laser, and a Gaussian of width $\\Delta \\nu_D^{894} \\sim 360$ MHz or $\\Delta \\nu_D^{456} \\sim 700$ MHz. (We will return to this laser bandwidth correction later in this section.) We use measured values for the frequency difference between the hyperfine components~\\cite{RafacTanner1997,UdemRHH99,Das2016,williamsHH18}. We show an example of the best fit profile to the absorption spectra as the black dashed lines in Figs.~\\ref{fig:datafit894}(a) and (c). In Figs.~\\ref{fig:datafit894}(b) and (d), we show the residuals, the difference between our data and the fitted spectra. The small residuals indicate that the fits are very good models of the absorption profile. \n\nAt the higher temperatures used for these measurements ($T > 10^{\\circ}$C), we start to observe some departure of the ratio $\\alpha_{3 \\rightarrow 4^{\\prime}}^{894}(\\omega_0) \/ \\alpha_{3 \\rightarrow 3^{\\prime}}^{894}(\\omega_0)$ from the expected value of three when we fit the two peaks independently. We suspect that this is a result of errors in the measurement of the offset voltage, which become more critical for these strongly absorbing peaks. At the most extreme temperature used ($T = 18^{\\circ}$C), this ratio was as low as 2.946, so we attempted no measurements at higher temperatures. 
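As an illustration of this fitting procedure, the sketch below builds the five-parameter model of Eqs.~(\\ref{eqn:fitfunction})$-$(\\ref{eq:Voigt}) for the 894 nm line from the scipy Faddeeva function and fits synthetic data with it. This is only a minimal sketch: the parameter values, names, and synthetic data are illustrative, and the exact parameterization of our analysis code may differ.

\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import wofz

def voigt(delta, fwhm_g, fwhm_l):
    # Area-normalized Voigt profile, the convolution of Eq. (4)
    sigma = fwhm_g / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gamma = 0.5 * fwhm_l
    z = (delta + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# 894 nm line from F = 3: (q, offset in MHz) for F' = 3, 4
COMPS = [(7.0/24.0, 0.0), (7.0/8.0, 1167.7)]
FWHM_L = 4.6 + 0.2   # natural + intrinsic laser linewidth, MHz

def model(nu, plevel, slope, nu0, fwhm_d, ups):
    # Eq. (1) with the five adjustable parameters listed above;
    # Upsilon sets the peak optical depth of each q-weighted component
    vpk = voigt(0.0, fwhm_d, FWHM_L)
    al = sum(q * voigt(nu - nu0 - d, fwhm_d, FWHM_L)
             for q, d in COMPS) / vpk
    return (plevel + slope * (nu - nu0)) * np.exp(-2.0 * ups * al)

nu = np.linspace(-1500.0, 2500.0, 2000)   # scan axis, MHz
truth = (2.0, 1e-5, 0.0, 360.0, 0.4)
rng = np.random.default_rng(1)
data = model(nu, *truth) + rng.normal(0.0, 2e-3, nu.size)
popt, pcov = curve_fit(model, nu, data, p0=(2.0, 0.0, 50.0, 400.0, 0.3))
print(popt)   # recovers transmission level, slope, center,
              # Doppler FWHM, and Upsilon
\\end{verbatim}

The laser-linewidth corrections discussed below enter a model of this form only through the two width arguments of the Voigt function.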
\n\nTo include the effect of the spectral linewidth of the lasers in our fits to the absorption spectra, we first measured (1) the beat signal between the output of the 894 nm laser and that of a frequency comb laser (FCL); and (2) the beat signal between two similar blue diode lasers, each tuned to 456 nm.\nIn each case, we overlap the two interfering beams on a fast photodiode and observe the photocurrent on an r.f.\\ spectrum analyzer. \nThe long-term bandwidth in both cases was $2-3$~MHz. In addition, \nwe could observe the bandwidth of the signal on a single sweep of the spectrum analyzer. This bandwidth shows considerable variation from sweep to sweep, probably due to acoustic vibrations of elements within the laser cavity, but we could observe lines as narrow as a few hundred kHz. We interpret these observations as a short-term (intrinsic) linewidth of $\\Delta \\nu_{Li} \\sim200$~kHz, with slower fluctuations over a range of $\\Delta \\nu_{Ls} \\sim3$~MHz. \nWe calculate the effect of these laser frequency fluctuations, and determine that these can be included in the fits to the absorption spectra by modifying the Voigt lineshape function in two ways. \nFirst, we increase the homogeneous linewidth $\\Delta \\nu_N$, using the sum of the natural linewidth of the transition and the intrinsic linewidth $\\Delta \\nu_{Li}$ of the laser. This is a small, but not negligible, increase. \nSecond, we increase the inhomogeneous linewidth in the Voigt function calculation to the quadrature sum of the Doppler width, $\\Delta \\nu_D$, and the slow laser frequency fluctuations, $\\Delta \\nu_{Ls}$. \nFor the linewidths of our system, this is a negligible increase.\n\nAfter fitting each of the sixteen (twelve) absorption curves at $\\lambda = $ 894 nm ($\\lambda = $ 456 nm) within a set individually, we compute the mean and standard deviation of the mean for the fitted values of $\\Upsilon^{894}$ ($\\Upsilon^{456}$). \nWe show a plot of $\\Upsilon^{456}$ vs.\\ $\\Upsilon^{894}$ in Fig.~\\ref{fig:fittedresult894}. \n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{456v894f=4.png}\\\\\n \\includegraphics[width=\\columnwidth]{456v894f=4res.png}\\\\\n\t \\caption{\n Plot of $\\Upsilon^{456}$ against $\\Upsilon^{894}$ showing (a) the data points (circles) and the least-squares fit straight line, and (b) the residuals in the ordinate. In (a), the error bars are smaller than the size of the markers. The vertical error bars in (b) represent the combined $1 \\sigma$ uncertainties in $\\Upsilon^{456}$ and $\\Upsilon^{894}$. } \n\t \\label{fig:fittedresult894}\n\\end{figure}\nEach data point represents the average value of $\\Upsilon^{456}$ and $\\Upsilon^{894}$ at a particular cold finger temperature. \nThe data point near the origin was recorded with the vapor cell removed from the beam path. The error bars on the data points are too small to observe in Fig.~\\ref{fig:fittedresult894}(a). The vertical error bars in the residual plot, Fig.~\\ref{fig:fittedresult894}(b), represent the combined $1 \\sigma$ uncertainties in $\\Upsilon^{456}$ and $\\Upsilon^{894}$.\n\nThe total uncertainties in the values of $\\Upsilon^{\\lambda}$ are the statistical, etalon-effect, and offset uncertainties added in quadrature. The statistical uncertainty comes from the standard deviation of the mean of the $\\Upsilon^{\\lambda}$ values from the fits to the sixteen (or twelve) absorption curves. 
\n\nThe etalon effects are our estimate of the uncertainty of the attenuation coefficient resulting from the interference between the reflections at the cell windows. We account for this effect to first order as a linear variation of the laser power with frequency, ignoring any curvature. \nThis simple model is not adequate, however, when the peak or valley in the sinusoidal variation of the unabsorbed laser power is close to the frequency of the absorption feature. In these frequency ranges, we estimate the effect of the curvature of the unabsorbed laser power on the size of the absorption peak, which we assign as the uncertainty due to the etalon effect. In cases of extreme curvature, we also applied a small correction to the absorption height, along with an uncertainty of twice the size of the correction. \nWe estimate this effect for each absorption curve individually. A 1 mV change in the height of the signal changes $\\Upsilon^{894}$ ($\\Upsilon^{456}$) by 0.3\\% (0.1\\%). \n\n\nThe offset uncertainty is our estimate of the error in measuring the signal on PD3 resulting from the power in the wings of the laser spectrum and the background signal, which we subtract from the signal as an offset. The offset uncertainty of the 456 nm signal is $\\sim 0.05\\%$ of $\\Upsilon^{456}$, while for 894 nm it is more significant at $\\sim 0.15\\%$ due to the stronger absorption at 894 nm.\n\n \\begin{figure*}\n \t \\includegraphics[width=15cm]{894_all_graphs.png}\\\\\n\n \t \\caption{Examples of absorption spectra with the cell of length $\\ell_{cell} \\sim 6$ cm at a cold finger temperature of 56$^\\circ$C. These figures show the photodiode signal vs.~laser frequency as we scan (a) the 459 nm laser through $ F=4 \\rightarrow F'=3^{\\prime},4^{\\prime}$ components of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{1\/2} $ transition, and (c) the 456 nm laser through $F=4 \\rightarrow F'=3^{\\prime},4^{\\prime},5^{\\prime}$ components of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2} $ transition. In each, the data are shown as the red data points, and the result of a least-squares fit as a black dashed line. The hyperfine components are not resolved for either transition. We show the residuals for each in (b) and (d). }\n\t \\label{fig:datafit1}\n \\end{figure*}\n\n \nThe solid line in Fig.~\\ref{fig:fittedresult894} is the result of a least-squares fit of a straight line to the data, with two adjustable parameters: the slope and the intercept. We present the results of this fit in Table \\ref{table:fits894}. The intercept is within one standard deviation of zero, as expected, while the slope, i.e.~the ratio of scaled absorption terms, is $\\Upsilon^{456} \/ \\Upsilon^{894} = 0.016239 \\: (21)$. (We show $1 \\sigma$ uncertainties in the least significant digits inside the parentheses following the numerical value.) The reduced $\\chi_r^2$ for these data is 1.29, indicating that deviations of the data from the fitted line are slightly larger than the uncertainties would suggest. We expand the uncertainties of the slope by $\\sqrt{1.29}$ to account for this, and quote a final slope of $0.016239 \\: (23)$.\nUsing Eq.~(\\ref{eq:ratioR1Upsilon}), the inverse of the square root of the slope yields the ratio of matrix elements $R_1 = 7.8474 \\: (56)$. \n\\begin{table}[t!]\n \\caption{Numerical values for the intercept, slope, and reduced $\\chi_r^2$ from the fit to the data in Fig.~\\ref{fig:fittedresult894}. These uncertainties have not yet been expanded by $\\sqrt{\\chi_r^2}$. 
}\n \\begin{tabular}{c|c}\n \\hline\n Parameter \\rule{0.1in}{0in} & Value \\\\ \\hline \\hline\n \\rule{0in}{0.2in} Intercept & \\rule{0.1in}{0in}$3.6\\ (43) \\times 10^{-5}$ \\\\\n \\rule{0in}{0.2in} Slope & $0.016239\\ (21)$ \\\\\n \\rule{0in}{0.2in} $\\chi_r^2$ & $1.29 $\t \\\\\n \\hline\n \\end{tabular}\n \\label{table:fits894} \n\\end{table}\n\nIn Sections \\ref{sec:error} and \\ref{sec:Results}, we will consider a few additional systematic effects that contribute to this measurement, and use $R_1$ to determine the E1 matrix element for the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ transition. Before we do this, we describe parallel measurements of the absorption strength of the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{1\/2}$ transition.\n\n\n\\section{Measurement of $R_2$}\\label{sec:RME459}\n\nWe turn now to the measurement of the ratio $R_2$, as defined in Eq.~(\\ref{eq:ratioR2}). \nWe carry out this measurement in a fashion similar to that of the measurement of $R_1$ discussed in Sec.~\\ref{sec:RME456}, alternating between absorption measurements on \nthe $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ line at 456 nm and the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{1\/2}$ line at 459 nm.\nThe experimental setup is very similar to the one discussed in the previous section, and is shown in Fig.~\\ref{fig:ExperimentalSetup894}. The 894 nm ECDL is replaced with a 459 nm ECDL, and FP2 is removed, since we can use the same Fabry-P\\'{e}rot cavity FP1 for both lasers. \nEach laser generates approximately 20 mW of laser light, and produces mode-hop free scans of $>7$ GHz. \n\n\nOther differences in the apparatus or procedure include: ($i$) We carry out these measurements in three separate data sets, which differ in F, the hyperfine level of the ground state, or the vapor cell used. This is in contrast to our determination of $R_1$, for which we use only one F value and one vapor cell. The first two data sets are performed with a \nshort (of length $\\ell_{cell} \\sim 6$ cm) sealed glass cell mounted with $0.5^{\\circ}$ wedged windows. The shorter cell length requires higher Cs densities for comparable absorption, and the wedged windows reduce the magnitude of the etalon effects. We control the density of cesium in the cell using a cold finger enclosed within an aluminum block, whose temperature we control and stabilize to between $40-65^\\circ$C using a thermo-electric module and feed-back circuit. We use heat tape coiled around the vapor cell to heat the cell body to $\\sim 80^\\circ$C, \nand wrap the cell and heat tape with aluminum foil to help maintain a stable and uniform body temperature. \nFor the third data set on these lines, we used the long ($\\ell_{cell} = 29.9$ cm) vapor cell described in Sec.~\\ref{sec:RME456}.\nObserving similar results in this second cell allows us to rule out background gas in the cell or collisional effects as possible sources of error.\n($ii$) Additionally, the linear silicon photodiode, labeled PD3 in Fig.~\\ref{fig:ExperimentalSetup894}, had a slower rise\/fall time, as both of the absorption curves were similar in depth and we could achieve a better signal-to-noise ratio by filtering out the high-frequency noise. \nWe amplify the photodiode current using a transimpedance amplifier of gain $5 \\times 10^7$ V\/A and bandwidth of 2 kHz, followed by a second amplifier of gain 10, and measure $\\sim 1$ mV of noise on a $\\sim 2$ V signal. The laser power incident on the cell is about 50 nW in a $\\sim$1 mm diameter beam for both lasers. 
\n($iii$) Finally, since the curves were shallower, we were able to scan the laser frequencies through the absorption curves more rapidly. When we investigated the effects of the bandwidth and scan rate as we did for $R_1$, we found that recording eight full absorption curves over a two second period allows good fits to the data. \n\nFor these data, measurements of the background offset voltage several times each day, rather than before each run, were sufficient. \nThe background offset voltage was small ($\\sim 1$ mV), and variations were minimal, falling well within the measurement uncertainty. \nBefore every run we did insert the hot cell to estimate and correct for the small amount of laser power in the wings of the laser power spectrum.\nThis gives us a good measurement of the laser power in the wings, \ntypically $\\sim 0.1\\%$ of the full power incident on the photodiode for the 456 nm and $\\sim 0.3\\%$ for the 459 nm laser. We deduct the total offset in the signal, the power in the laser wings along with the background signal, from the data before fitting. We \nestimate the uncertainty of the attenuation coefficients due to the offset to be 0.05\\% (0.1\\%) for the 459 nm (456 nm) laser.\n\nWe show examples of the measured spectra as the red data points in Fig.~\\ref{fig:datafit1}(a) ($6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{1\/2}$ line at 459 nm) and \\ref{fig:datafit1}(c) ($6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ line at 456 nm). We fit the data to an equation of the form shown in Eqs.~(\\ref{eqn:fitfunction})$- $(\\ref{eq:Voigt}), using the same five adjustable parameters as described in Section \\ref{sec:RME456}.\nThe lineshape of each hyperfine component of the transition is a Voigt profile, with a Lorentzian width $\\Delta \\nu_N$ (1.22 MHz for the $7p ^2P_{3\/2}$ line, or 1.06 MHz for the $7p ^2P_{1\/2}$ line~\\cite{PaceA75, MarekN76, GustavssonLS77, DeechLPS77, CampaniDG78, OrtizC81}) added to the linewidth of the lasers of 0.2 MHz and a Gaussian of width $\\Delta \\nu_D^{456} \\sim \\Delta \\nu_D^{459} \\sim 750$ MHz. We use calculated values for the relative amplitudes ($q_{F \\rightarrow F^{\\prime}}$ of Table \\ref{table:q}) and experimental values~\\citep{williamsHH18} for the frequency difference of the hyperfine components. We show the least-squares fit spectral profiles as the black dashed lines in Figs.~\\ref{fig:datafit1}(a) and (c). The residuals, the difference between the data and the fitted profile, are shown in Figs.~\\ref{fig:datafit1}(b) and (d).\nWe fit each of the twenty-four (thirty-two) scans at 456 nm (459 nm) within a measurement individually, and compute the mean and standard deviation of the mean of the fitted values of $\\Upsilon^{\\lambda}$.\n\nFinally, we plot $\\Upsilon^{459}$ against $\\Upsilon^{456}$, and determine the least-squares fit of a straight line to determine the slope. An example of one such plot (set 2) for the transition from $6s \\: ^2S_{1\/2}, \\ F=4$ is shown in Fig.~\\ref{fig:fittedresult}(a). \n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{456v459f=4.png}\\\\\n \\includegraphics[width=\\columnwidth]{456v459f=4res.png}\\\\ \n \t \\caption{Plot of $\\Upsilon^{459}$ vs.\\ $\\Upsilon^{456}$ for data set 2 showing (a) the datapoints (circles) and line of best fit, (b) residuals in the ordinate. In (a), the error bars are smaller than the size of the markers. 
The vertical error bars in (b) represent the combined $1 \\sigma$ uncertainties in $\\Upsilon^{459}$ and $\\Upsilon^{456}$.}\n \t \\label{fig:fittedresult}\n \\end{figure}\nEach point on the graph corresponds to a different cold finger temperature, with the $y$-coordinate and $x$-coordinate coming respectively from the 459 nm and 456 nm averages. \nWe determined the uncertainties of each $\\Upsilon^{\\lambda}$ as the quadrature sum of\nthe statistical, etalon-effect, and offset uncertainties. The etalon-effect uncertainty is as described for $R_1$. \nWe found that a 1 mV change in the offset resulted in a change to $\\Upsilon^{456}$ and $\\Upsilon^{459}$ of 0.1\\% and 0.05\\%, respectively. \nThe error bars shown on the residual plot, Fig.~\\ref{fig:fittedresult}(b), represent the combined errors of $\\Upsilon^{459}$ and $\\Upsilon^{456}$.\nWe perform separate analyses of the $6s \\: ^2S_{1\/2}, \\ F=3$ and $F=4$ data, since their sensitivity to systematic effects differs.\nWe \npresent the intercepts and slopes from individual straight-line fits for the three data sets in Table \\ref{table:2dayresults}. The intercepts are again all acceptably close to zero. \nWe will derive the ratio $R_2$ from the square root of the inverse of the slopes of these plots, as shown in Eq.~(\\ref{eq:ratioR2}), but first we consider some additional systematic effects, as discussed in the following section.\n\n\n \\begin{table}[t]\n \\caption{Summary of linear fit results from our three data sets of $\\Upsilon^{459}$ against $\\Upsilon^{456}$. The listed uncertainties for the slope and intercept have not yet been expanded by $\\sqrt{\\chi_r^2}$. Data sets 1-2 were collected with the shorter $\\sim 6$ cm cell. Data for set 3 was recorded using the longer $\\sim 30$ cm cell.}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n \t Data Set & Intercept \\rule{0in}{0.15in}& Slope & $\\chi_{r}^2$ \\\\ \n \\hline \\hline \n \n \\rule{0.09in}{0in}Set 1, F=3\\rule{0.09in}{0in} \\rule{0in}{0.15in} & $-5.8\\: (41) \\times 10^{-5}$ &\\rule{0.09in}{0in} $0.23380 \\: (29) $ \\rule{0.09in}{0in}& \\rule{0.09in}{0in}$1.95$\\rule{0.09in}{0in} \\\\\n Set 2, F=4 \\rule{0in}{0.15in} & $-2.4 \\: (42) \\times 10^{-5} $ & $0.23477 \\: (32)$ & $1.95$ \\\\\n\n Set 3, F=3 \\rule{0in}{0.15in} & $-1.4 \\: (51) \\times 10^{-5}$ & $0.23593 \\: (48)$ & $1.52$ \\\\\n\n \\hline\n\n \n \\end{tabular}\n\n \\label{table:2dayresults}\n \\end{table}\n\n\n\\section{Errors}\\label{sec:error}\nWe have investigated several potential sources of systematic effects, listed in Table \\ref{table:sourcesoferror}, to estimate their impact on the measurements. We describe each of these effects in this section. All of these systematic effects are applied to $R_1$ ($R_2$), after fitting $\\Upsilon^{456}$ ($\\Upsilon^{459}$) against $\\Upsilon^{894}$ ($\\Upsilon^{456}$). \n\nWe derive the uncertainties labeled `Fit' from the fitted values of the slope and their uncertainties, listed in Tables \\ref{table:fits894} and \\ref{table:2dayresults}. We have expanded these uncertainties by $\\sqrt{\\chi_r^2}$ to account for excess variation of the data points. \n\\begin{table}[b!]\n \\caption{Sources of error and the percentage uncertainty resulting from each, for the determinations of $R_1$ and $R_2$. \n We add the errors in quadrature to obtain the total uncertainty. 
\n }\n \\begin{tabular}{|l| c| c|}\n \\hline\n\n \\rule{0.1in}{0in}Source & \\rule{0.1in}{0in}$\\sigma_1\/R_1$(\\%)\t\\rule{0.1in}{0in}\t & \\rule{0.1in}{0in} $\\sigma_2\/R_2$(\\%)\\rule{0.1in}{0in} \\\\\n\t\t\\hline \\hline\n Fit\t\t\t\t\t& 0.07 &0.09-0.13 \\\\\n\t\t\n Freq.~scan calibr.\t\t\t& 0.04 & 0.01 \\\\\n Zeeman\t\t\t\t&0.03 \t&0.02 \\\\\n Beam overlap\t\t\t\t\t& 0.01 & 0.01 \\\\\n\n Saturation\t\t& 0.02 \t& 0.02 \\\\\n\t\t\t\t\n Linewidth\t\t& 0.02 \t& 0.02 \\\\\n \t\t\t\n \\hline \\hline\n Total uncertainty\t\t\t\t& 0.09\t&0.09-0.13\\\\\n \\hline\n \\end{tabular}\n\n \\label{table:sourcesoferror}\n\\end{table}\n\n \\begin{table}\n \\caption{Summary of the three data sets of $R_2$ values after correction for the effects of the Zeeman splitting. The uncertainty in $R_2$ includes the errors due to the fit (increased by $\\sqrt{\\chi_r^2}$ of the appropriate data set), frequency calibration, Zeeman splitting, beam overlap, saturation, and linewidth. Data sets 1-2 were collected with the shorter ($\\sim 6$ cm) cell. Data for set 3 was recorded using the longer ($\\sim 30$ cm) cell. The $\\chi_r^2$ of the weighted mean of the three sets is 4.2 and the weighted mean's error has been expanded by $\\sqrt[]{4.2}$. See Fig.~\\ref{fig:R2} for a plot of the results of the individual data sets and the weighted mean. }\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \t Data Set\\rule{0in}{0.15in} & $R_2$ \\\\ \n \\hline \\hline\n Set 1, F=3\\rule{0in}{0.15in} & 2.0684 (19) \\\\\n Set 2, F=4\\rule{0in}{0.15in} & 2.0637 (21)\\\\\n Set 3, F=3\\rule{0in}{0.15in} & 2.0591 (27) \\\\\n \\hline\n \\rule{0.1in}{0in} Weighted Mean\\rule{0in}{0.15in}\t\\rule{0.1in}{0in}\t& \\rule{0.1in}{0in}2.0646 (26)\\rule{0.1in}{0in}\\\\ \n \\hline\n \\end{tabular}\n \n \\label{table:R2results}\n \\end{table}\n\nDuring the course of analyzing the absorption curves, we noted a sensitivity of the fits to the frequency calibration of the laser scans. We calibrate these scans, as discussed earlier, using the transmission peaks of the lasers through the FP cavities. The FSRs of these cavities, however, are not known precisely. \nWe experimentally determined the FSR values of both FP cavities by fitting the absorption data using different values of the FSR. Using the variation in the residuals, we obtain an estimate for the FSR that fits the absorption curves the best. \n(Since the hyperfine splittings of each of these states are well known, the residuals of the absorption curves are sensitive to variations in the frequency calibration of the scans.)\nWe determined the FSR for the FP cavity used with the 456\/459~nm lasers to be $FSR_{FP1} = 1501.6 \\: (10)$ MHz, while for the 894~nm laser, we determine $FSR_{FP2} = 1481.9 \\: (4)$ MHz.\nWe also use these fits to estimate the effect that the uncertainty of the FP FSR has on the measured ratios. \nWe estimate an uncertainty in the ratio $R_1$ due to frequency calibration, to be $0.04\\%$. For the ratio $R_2$, we find the uncertainty to be at most $0.01\\%$. \n\nThe magnetic field at the cell location also affects the measurements of the absorption strength. We measure a static magnetic field of $\\sim 1$ G in the vertical direction (parallel to the laser polarization) at the location of the 6 cm vapor cell, mainly from the optical table. We minimize the magnetic field generated by the heat tape by wrapping the heat tape in alternating directions, ensuring the magnetic field only comes from the surroundings. 
A $\\sim 1$~G field causes a Zeeman splitting of 2 MHz or less on each hyperfine component. \nWe approximate the effect of Zeeman splitting on the effective homogeneous linewidth of the transition by adding the Zeeman broadening of each hyperfine component to the natural linewidth, which we use in the Voigt function for our analysis. \nWe multiply $R_2$ by the Zeeman correction for the appropriate starting $F$ state, 0.9999 for $F=4$ data and 1.0001 for $F=3$ data. We estimate the uncertainty in $R_2$ due to this correction to be about $0.02\\%$.\nThe height of the 30 cm cell above the table was greater than that of the small cell, so the magnetic field for measurements of $R_1$ was smaller, $\\sim$0.5 G. This led to a Zeeman splitting of less than 0.7 MHz. We estimate the uncertainty in $R_1$ due to this Zeeman splitting to be 0.03\\%, and did not apply any correction to these data. \n\n\nSmaller systematic errors in the ratios result from beam overlap errors and saturation effects. We estimate that each of these effects contributes 0.02\\% uncertainty or less, as listed in Table~\\ref{table:sourcesoferror}. \nWe measure that the two laser beams are parallel to one another to within 0.05 mrad, and overlap each other in the cell to within 0.5 mm. Therefore the effective path lengths for these two beams are identical to within 0.02\\%, for an effect on $R_1$ and $R_2$ of 0.01\\%. \nWe minimize saturation effects by reducing the laser intensity of the 456 nm and 459 nm lasers to less than $2~\\times~10^{-4} $ times the saturation intensity for the transition~\\cite{Urvoy11} using a neutral density filter and reflections from several uncoated wedged windows. We attenuated the power of the 894 nm laser more than the 456 nm and 459 nm lasers to similarly avoid saturation.\nWe estimate that saturation effects could have an effect at the $0.02\\%$ level.\n\nLastly, we include an uncertainty for the correction that we apply for the linewidth of the lasers used. For all of the ECDL lasers we estimated a 200 kHz intrinsic linewidth with a conservative uncertainty of 200 kHz. These uncertainties would lead to about a 0.02\\% uncertainty in each of the ratios.\n\nWe add the fit, frequency scan calibration, Zeeman, beam overlap, saturation, and linewidth errors in quadrature to obtain our final uncertainties in $R_1$ and $R_2$,\nand apply the Zeeman correction to $R_2$ to arrive at our final values. \nIn the next section, we discuss the results of these measurements.\n\n\n\\section{Results}\\label{sec:Results}\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{R4564591.png}\\\\\n \\caption{ \n\t Plot of the three sets of $R_2$ results and the weighted mean. We have expanded the error bars of the weighted mean by $\\sqrt{\\chi_r^2}= 2.0$ to account for the variation among the individual results.} \n\t \\label{fig:R2}\n\\end{figure}\n\nAfter adding the uncertainties described in the previous section, the final result for $R_1$ is $R_1 = 7.8474 \\: (72)$. \nFor $R_2$, after applying the corrections and uncertainties described in the previous section to the three individual data sets, we arrive at the results shown in Table \\ref{table:R2results} and plotted in Fig.~\\ref{fig:R2}. The weighted average of these results is $R_2 = 2.0646 \\: (26)$. \nWe compare these results for $R_1$ and $R_2$ with a number of prior experimental and theoretical results in Table \\ref{table:ResultComparison}, and illustrate these in Fig.~\\ref{fig:R1R2}. 
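The arithmetic behind these final numbers is compact enough to summarize in a short script. The sketch below uses our own helper functions and the D$_1$ reference value introduced below; it reproduces the central values and uncertainties quoted in this section, up to rounding of the final digits.

\\begin{verbatim}
import numpy as np

def ratio_from_slope(slope, sig, chi2r=1.0):
    # R = sqrt(Upsilon_ref / Upsilon) = 1/sqrt(slope);
    # the fit uncertainty is expanded by sqrt(chi2r)
    R = 1.0 / np.sqrt(slope)
    return R, 0.5 * R * sig * np.sqrt(chi2r) / slope

def weighted_mean(x, s):
    x, w = np.asarray(x), 1.0 / np.asarray(s)**2
    m = np.sum(w * x) / np.sum(w)
    chi2r = np.sum(w * (x - m)**2) / (len(x) - 1)
    return m, np.sqrt(max(chi2r, 1.0) / np.sum(w)), chi2r

print(ratio_from_slope(0.016239, 2.1e-5, 1.29))  # R1 = 7.8474(56), fit only
print(weighted_mean([2.0684, 2.0637, 2.0591],
                    [0.0019, 0.0021, 0.0027]))   # R2 = 2.0646(26), chi2r = 4.2

# Propagation to the matrix elements (relative errors in quadrature),
# with the D1 reference value <6s||r||6p_1/2> = 4.5057(16) a0:
d1, r1, r2 = (4.5057, 0.0016), (7.8474, 0.0072), (2.0646, 0.0026)
m32 = d1[0] / r1[0]
e32 = m32 * np.hypot(d1[1]/d1[0], r1[1]/r1[0])
m12 = m32 / r2[0]
e12 = m12 * np.hypot(e32/m32, r2[1]/r2[0])
print(m32, e32)   # ~0.57417(57) a0
print(m12, e12)   # ~0.27810(45) a0
\\end{verbatim}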
\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{R1theexp.png}\\\\\n \\includegraphics[width=\\columnwidth]{R2theexp.png}\\\\\n\t \\caption{Plots showing comparisons of various experimental and theoretical determinations of the (a) $R_1 \\equiv \\langle 6s_{1\/2} || r || 6p_{1\/2}\\rangle \/ \\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle$ and (b) $R_2 \\equiv \\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle \/ \\langle 6s_{1\/2} || r || 7p_{1\/2}\\rangle$ ratios. See Table \\ref{table:ResultComparison} for references to these data. For Safronova {\\it et al.} (Refs.~\\cite{SafronovaJD99,SafronovaSC16}), we plot both the scaled (sc) and single-double (SD) values for $R_2$.\n\t } \n\t \\label{fig:R1R2}\n\\end{figure}\nWe note that the previous results for $R_1$ and $R_2$ by Shabanova {\\it et al.} \\cite{shabanovaMK1979} (who reported oscillator strengths, which we converted to matrix elements) and $R_2$ by Borv\\'{a}k \\cite{Borvak14} are in reasonable agreement, to within their error bars, with our results, which are of higher precision. (We derived the ratio value for Borv\\'{a}k using $(R_2)^2 = (21\/22) \\times 4.461 \\: (23)$ from the table on page 126, where the additional factor of $21\/22$ comes from combinations of Clebsch-Gordan coefficients.) $R_1$ from Antypas and Elliott \\cite{antypas7p2013} disagrees with our new results, which we consider to be more reliable due to the use of the Voigt profile and the addition of the faster photodiode amplifier that we discussed earlier. We derived the value of $R_2$ for Vasilyev {\\it et al.}~\\cite{VasilyevSSB02} from their matrix elements; unlike our method, their measurement depended on precise knowledge of the vapor cell path length and the atomic density in the vapor cell. The scaled theoretical results of Refs.~\\cite{BlundellSJ92, SafronovaJD99, SafronovaSC16} appear to be in good agreement with our new results as well. \n\n \nWe use $R_1 = 7.8474 \\: (72)$ to determine the matrix element for the $6s \\ ^2S_{1\/2} \\rightarrow 7p \\ ^2P_{3\/2}$ transition using \n\\begin{equation}\n \\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle = \\frac{\\langle 6s_{1\/2} || r || 6p_{1\/2} \\rangle}{R_1} \n\\end{equation} \nand $\\langle 6s_{1\/2} || r || 6p_{1\/2}\\rangle = 4.5057 \\: (16) \\ a_0$, the weighted average of the transition matrix element for the D$_1$ line from Refs.~\\cite{YoungHSPTWL94,RafacT98,RafacTLB99,DereviankoP02a,AminiG03,BouloufaCD07,ZhangMWWXJ13,PattersonSEGBSK15,GregoireHHTC15,TannerLRSKBYK92,SellPEBSK11}.\nOur result is \n\\begin{equation}\n\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle = 0.57417 \\: (57) \\ a_0.\n\\end{equation}\n\n\\begin{table*}[t]\n \\caption{Experimental and theoretical results for the reduced dipole matrix elements of the cesium $6s\\,^2S_{1\/2} \\rightarrow 7p\\,^2P_{J}$ transitions. The matrix elements are given in units of $a_0$. 
For Ref.~\\cite{SafronovaSC16}, we list both the single double (-SD) and scaled (-sc) values.}\n \\begin{tabular}{|l|l|l|l|l|}\n \\hline\n \\multicolumn{1}{|c|} {Group} & \\multicolumn{1}{|c|}{$R_1$} & \\multicolumn{1}{|c|}{$R_2$} & $\\langle 6s_{1\/2}||r|| 7p_{3\/2} \\rangle$ & $\\langle 6s_{1\/2}||r|| 7p_{1\/2} \\rangle$ \\\\ \\hline \n {\\underline{\\emph{Experimental}}} \t\t& & & &\t\t \\\\\n Shabanova {\\it et al.}, 1979~\\cite{shabanovaMK1979} \t& \\hspace{0.1in} 7.76 (14) & \\hspace{0.1in} 2.052 (38) & \\hspace{0.1in} 0.583 (10) & \\hspace{0.1in} 0.2841 (21) \\\\\n Vasilyev {\\it et al.}, 2002~\\cite{VasilyevSSB02} \t\t& & \\hspace{0.1in} 2.124 (24) & \\hspace{0.1in} 0.5856 (50) & \\hspace{0.1in} 0.2757 (20) \\\\\n Antypas and Elliott, 2013~\\cite{antypas7p2013} \t& \\hspace{0.1in} 7.796 (41) & \\hspace{0.1in} 2.072 (12) & \\hspace{0.1in} 0.5780 (7) & \\hspace{0.1in} 0.2789 (16) \\\\\n Borv\\'{a}k, 2014~\\cite{Borvak14}\t\t\t& & \\hspace{0.1in} 2.0635 (53)~~ & \\hspace{0.1in} 0.5759 (30) & \\hspace{0.1in} 0.2743 (29) \\\\\n This work\t \t\t\t& \\hspace{0.1in} 7.8474 (72)~~ & \\hspace{0.1in} 2.0646 (26) & \\hspace{0.1in} 0.57417 (57) & \\hspace{0.1in} 0.27810 (45) \\\\\n & & & & \\\\\n \\multicolumn{1}{|l|}{\\underline{\\emph{Theoretical}}} \t& & & & \\\\\n Dzuba {\\it et al.}, 1989~\\cite{DzubaFKS89}\t\t\t& \\hspace{0.1in} 7.708 & \\hspace{0.1in} 2.12 & \\hspace{0.1in} 0.583 \t& \\hspace{0.1in} 0.275 \\\\\n Blundell {\\it et al.}, 1992~\\cite{BlundellSJ92}\t & \\hspace{0.1in} 7.83 & \\hspace{0.1in} 2.057 & \\hspace{0.1in} 0.576 \t& \\hspace{0.1in} 0.280 \\\\\n Safronova {\\it et al.}, 1999~\\cite{SafronovaJD99}\t& \\hspace{0.1in} 7.873 & \\hspace{0.1in} 2.065 & \\hspace{0.1in} 0.576 \t& \\hspace{0.1in} 0.279 \\\\\n Derevianko, 2000~\\cite{Derevianko00}\t& & & & \\hspace{0.1in} 0.281 \\\\\n Porsev {\\it et al.}, 2010~\\cite{PorsevBD10} \t\t\t& & & & \\hspace{0.1in} 0.2769 \\\\\n Safronova {\\it et al.}-SD, 2016~\\cite{SafronovaSC16}\t& \\hspace{0.1in} 7.452 & \\hspace{0.1in} 2.016 & \\hspace{0.1in} 0.601 & \\hspace{0.1in} 0.298 \\\\\n Safronova {\\it et al.}-sc, 2016~\\cite{SafronovaSC16} & \\hspace{0.1in} 7.873 & \\hspace{0.1in} 2.065 & \\hspace{0.1in} 0.576 & \\hspace{0.1in} 0.279 \\\\\n \\hline\n \\end{tabular}\n \\label{table:ResultComparison}\n\\end{table*}\n\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{matrixelement7p32.png}\\\\\n \\includegraphics[width=\\columnwidth]{matrixelement7p12.png}\\\\\n\t \\caption{ Plots of the experimental and theoretical values of (a) $\\langle 6s_{1\/2}||r|| 7p_{3\/2} \\rangle$ and (b) $\\langle 6s_{1\/2}||r|| 7p_{1\/2} \\rangle$, as listed in Table ~\\ref{table:ResultComparison}. Experimental values are on the left, while theoretical values are shown on the right. See Table \\ref{table:ResultComparison} for references to these data. For Safronova {\\it et al.} (Refs.~\\cite{SafronovaJD99, SafronovaSC16}), we have plotted only the scaled (sc) values.} \n\t \\label{fig:matrix1232}\n\\end{figure}\n\n\nWe combine $R_2 = 2.0646 \\: (26)$ in Eq.~(\\ref{eq:ratioR2}) and our new determination of $\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle$ to obtain\n\\begin{equation}\n\\langle 6s_{1\/2}|| r ||7p_{1\/2} \\rangle = 0.27810 \\: (45)~a_0.\n\\end{equation}\n\n\nWe present a summary of past experimental and theoretical results of these dipole matrix elements in Table~\\ref{table:ResultComparison}. 
We have also plotted the results for $\\langle 6s_{1\/2} || r || 7p_{3\/2}\\rangle$ and $\\langle 6s_{1\/2} || r || 7p_{1\/2}\\rangle$ in Figs.~\\ref{fig:matrix1232}(a) and (b), respectively. \nFor Shabanova {\\it et al.}~\\cite{shabanovaMK1979} and Borv\\'{a}k \\cite{Borvak14}, our result for $\\langle 6s_{1\/2}|| r || 7p_{3\/2} \\rangle$ is within their uncertainties, but the agreement is poorer for the $\\langle 6s_{1\/2}|| r || 7p_{1\/2} \\rangle$ value. The matrix element values from Borv\\'{a}k come from a direct determination, separate from the ratio measurement discussed above.\n\n\nFor comparison with theory, our result lies within the distribution spanned by the $\\langle 6s_{1\/2}||r|| 7p_{1\/2} \\rangle$ values. In particular, our result lies between the two closest theory values, from Refs.~\\cite{SafronovaJD99, PorsevBD10}. For $\\langle 6s_{1\/2} || r || 7p_{3\/2} \\rangle$, our value is below all of the theoretical results, but within 0.3\\% of Blundell {\\it et al.}~\\cite{BlundellSJ92} and the scaled values of Refs.~\\cite{SafronovaJD99, SafronovaSC16}. \nIn Table~\\ref{table:ResultComparison}, we list two values, calculated using different theoretical methods, from the Supplemental Material of Ref.~\\cite{SafronovaSC16}. \nThe authors of \\cite{SafronovaSC16} recommend the single-double (SD) all-order approach values, which we have listed as `Safronova {\\it et al.}-SD'. In Ref.~\\cite{SafronovaJD99}, the authors note that scaling improved agreement of theoretical determinations of the $\\langle 6s_{1\/2} || r || 7p_{j}\\rangle$ matrix elements with experiment, and recommend using the scaled values. We also observe that scaling improves the agreement of theory with the current measurements, and have included those values from \\cite{SafronovaSC16} as `Safronova {\\it et al.}-sc'.\nIn comparison with Refs.~\\cite{SafronovaJD99, SafronovaSC16}, our measurements of both matrix elements are much closer to the scaled theory values than to the SD theory values.\n\n\n\\section{Conclusion}\\label{sec:Conclusion}\n\nIn conclusion, we present measurements of the ratio of the dipole matrix elements of the cesium $6s\\,^2S_{1\/2} \\rightarrow 6p\\,^2P_{1\/2}$ and $6s\\,^2S_{1\/2} \\rightarrow 7p\\,^2P_{3\/2}$ transitions and of the ratio of the $6s\\,^2S_{1\/2} \\rightarrow 7p\\,^2P_{1\/2}$ and $6s\\,^2S_{1\/2} \\rightarrow 7p\\,^2P_{3\/2}$ transitions. We used a ratio measurement of the two transitions to remove the need for precise knowledge of the path length of the laser within the vapor cell, or of the density of cesium, which helped to eliminate potential systematic errors. From these measurements, we calculate new, higher-precision values of the dipole matrix elements of cesium, with precision $\\le 0.16 \\%$. With our new knowledge of the dipole matrix elements, we are poised to evaluate the scalar and vector polarizabilities of the cesium $6s\\,^2S_{1\/2} \\rightarrow 7s\\,^2S_{1\/2}$ transition. A new value of the vector polarizability has implications for the interpretation of cesium parity nonconservation measurements, and will allow a new determination of the weak charge in cesium.\n\nThis material is based upon work supported by the National Science Foundation under Grant Numbers PHY-1607603 and PHY-1460899. 
Useful conversations with Dan Leaird are also gratefully acknowledged.\n\n\\section{Introduction}\n\nThe past decade has witnessed an explosive growth of data and the need for high-speed data processing. Large-scale data often needs to be sorted to enable higher efficiency. Sorting is a key kernel in many applications such as data mining \\cite{aggarwal2015data}, robotics \\cite{bayindir2016review} and machine learning \\cite{devlin2018bert}. To efficiently sort an array into order, numerous sorting algorithms have been invented in the past, such as merge-sort \\cite{goldstine1947planning} or quick-sort \\cite{quicksort}. These algorithms can be accelerated using CPUs\/GPUs \\cite{chhugani2008efficient,zhang2016high,satish2009designing}, FPGAs \\cite{chen2017computer,chen2019sorting,samardzic2020bonsai,song2016parallel} and ASICs \\cite{norollah2019rths,najafi2018low,lin2017hardware}. However, transferring data between memory and external processing units incurs long latency and degrades energy efficiency. Techniques like memory management \\cite{stehle2017memory} have been developed to minimize the data movement, but such optimizations do not fundamentally solve the problem.\n\nMemristive in-memory sorting \\cite{rram_sort_alam2020,prasad2021memristive} has been proposed recently to tackle this challenge. Memristor-aided logic is developed in \\cite{rram_sort_alam2020} to implement compare-and-select blocks in memory. However, a large number of memristor cells are used to implement logic gates with frequent write operations, resulting in a low memory density and a degraded device lifetime. The latest memristive in-memory sorting \\cite{prasad2021memristive} uses iterative in-memory min computations with the help of a near-memory circuit. The min values are searched for by traversing each bit column using column reads (CRs) on a 1T1R memristive memory. Frequent write operations in \\cite{rram_sort_alam2020} are eliminated; however, the number of CRs is proportional to the number of 1T1R cells in the memristive memory, degrading the latency and energy efficiency.\n\nIn this work, we propose a column-skipping algorithm to minimize the number of CRs for improved sorting speed and hardware efficiency. A near-memory circuit is designed to keep track of the column read conditions and skip those columns that are leading 0's or have been processed previously. A multi-bank management scheme is developed to enhance the scalability when sorting a larger array stored in different memristive memory banks. Implemented in a 40nm CMOS technology with 1T1R memristive memory and evaluated on a variety of sorting datasets, the length-1024 32-bit column-skipping memristive sorter with state recording of 2 demonstrates up to 4.08$\\times$ speedup, 3.14$\\times$ area efficiency and 3.39$\\times$ energy efficiency over the latest memristive in-memory sorting implementation \\cite{prasad2021memristive}. \n\n\\section{Background}\n\n\\subsection{Sorting Applications}\n\nSorting is a known bottleneck for many applications \\cite{aggarwal2015data,bayindir2016review,devlin2018bert}. Here we briefly introduce two representative applications where sorting dominates the execution time: 1) Kruskal's algorithm for minimum spanning tree (MST). In Kruskal's algorithm, all the graph edges need to be sorted from low weight to high weight. The majority of the weights are small numbers with frequent repetitions; 2) MapReduce in distributed systems. 
In MapReduce, maps need to be sorted before being transferred to the reducer stage \\cite{dean2008mapreduce}. These maps are typically clustered in a few groups. We use datasets generated from these two applications for benchmarking in Section~\\ref{section:evaluation}.\n\n\\subsection{Memristive In-Memory Sorting}\n\nIterative in-memory min computation is proposed in \\cite{prasad2021memristive} for memristive in-memory sorting. It uses $N$ iterations to successively search for and exclude the min values in a length-$N$ array. Suppose each memristor cell stores a bit in a 1T1R memristive memory. \\figurename~\\ref{fig:memristive_sorting} shows an example for a length-$N$ ($N = 3$) array of $w$-bit ($w = 4$) numbers, \\{8, 9, 10\\}. \n\nIn each iteration, a $w$-step bit traversal algorithm searches for the min value: at step $j$ ($j = w-1 \\rightarrow 0$), a near-memory circuit reads a bit column corresponding to the $j$-th bits of all array elements, searches for 1's in that bit column, and excludes the rows that have 1's. When a bit column contains all 0's or all 1's, the row exclusion can be skipped. Rows corresponding to non-minimum values are excluded step by step until the min value is reached. The row for the min value is then excluded and marked as sorted before moving to the next min search iteration. The near-memory circuit is designed to support two operations, column read (CR) and row exclusion (RE), and their associated control logic. \\figurename~\\ref{fig:memristive_sorting} shows the steps to sort \\{8,9,10\\} using memristive in-memory sorting \\cite{prasad2021memristive}. Note that the near-memory circuit in \\cite{prasad2021memristive} does not keep track of the number of remaining elements in the array; therefore it takes $N = 3$ iterations of min search, each containing $w = 4$ CRs. The total sorting latency is $N\\times w = 12$ CRs. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.95\\linewidth]{figures\/example-SIM.pdf}\n \\caption{Memristive in-memory sorting \\cite{prasad2021memristive}.}\n \\label{fig:memristive_sorting}\n\\end{figure}\n\n\\section{Column-Skipping Memristive In-Memory Sorting}\n\nWe observe that the memristive in-memory sorting of \\cite{prasad2021memristive} introduces a large number of redundant CRs, which are repeatedly executed on leading 0's or on bit columns that have been processed previously. As shown in \\figurename~\\ref{fig:memristive_sorting}, when searching for the 2nd minimum number 9, the first 3 CRs have been processed in the 1st iteration and are repeated in the 2nd iteration. To efficiently skip these redundant CRs, we propose a low-latency column-skipping algorithm. We use unsigned fixed-point numbers as examples, but the method is easily applicable to signed fixed-point and floating-point number formats with small changes, as described in \\cite{prasad2021memristive}.\n\n\\subsection{Low-Latency Column-Skipping Algorithm}\n\nRedundant CRs can happen in two scenarios: 1) array elements may include leading 0's, and CRs on these leading 0's can be skipped at the beginning of each iteration; 2) some CRs may have been processed previously for REs, i.e., we do not need to exclude any new rows for those bit columns. \n\nTo detect and skip the redundant CRs, we propose to record the $k$ most recent RE states and their corresponding column indexes. The recorded states can be reloaded to skip redundant CRs, as illustrated in the software sketch below. 
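The following Python sketch is a behavioral software model of this idea, not the hardware implementation. For the bookkeeping of the model we assume that each table entry stores the wordline state at the moment a mixed bit column is read, and that a reloaded search resumes at that recorded column; with these assumptions the model reproduces the 7-CR count of the example discussed below.

\\begin{verbatim}
def column_skipping_sort(vals, w, k=2):
    # vals: unsigned w-bit integers, one per memory row.
    # Returns the sorted row order and the number of column reads (CRs).
    n = len(vals)
    sorted_rows, order, crs, records = set(), [], 0, []
    while len(order) < n:
        start, cand = w - 1, None
        while records and cand is None:      # state loading (SL)
            s, rows = records.pop()
            rows -= sorted_rows
            if rows:
                start, cand = s, rows
        recording = cand is None             # record only on MSB-start searches
        if cand is None:
            cand = set(range(n)) - sorted_rows
        for j in range(start, -1, -1):
            crs += 1                         # column read (CR)
            ones = {r for r in cand if (vals[r] >> j) & 1}
            if ones and ones != cand:        # mixed bit column
                if recording:                # state recording (SR)
                    records.append((j, set(cand)))
                    records = records[-k:]   # keep the k most recent states
                cand -= ones                 # row exclusion (RE)
        for r in sorted(cand):               # equal values drain without CRs
            order.append(r)
            sorted_rows.add(r)
    return order, crs

print(column_skipping_sort([8, 9, 10], w=4))   # ([0, 1, 2], 7)
\\end{verbatim}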
\\figurename~\\ref{fig:flow_chart} summarizes the iterative min computation for a length-$N$ array with proposed column skipping algorithm (where $n = 1 \\rightarrow N$): 1) if state records are empty, the $w$-step algorithm \\cite{prasad2021memristive} traverses each bit column from MSB ($i = w-1$) to LSB ($i = 0$). $k$ most recent RE states whose bit columns are not all 0's or 1's and their corresponding column indexes are stored in a state controller; 2) if state records are non-empty, we reload the most recent RE state and the corresponding column index $s$ and start from the next bit column $s-1$. CRs are executed on subsequent bit columns until reaching the min value.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.80\\linewidth]{figures\/flow_chart.pdf}\n \\caption{Iterative min search with proposed column-skipping algorithm}\n \\label{fig:flow_chart}\n\\end{figure}\n\n\\begin{figure}\n\\xdef\\xfigwd{\\textwidth}\n\\centering\n \\includegraphics[width = 0.95\\linewidth]{figures\/example-k=2.pdf}\n \\caption{Column-skipping memristive in-memory sorting with state recording $k=2$.}\n \\label{fig:memristive_sorting_k2}\n\\end{figure}\n\n\\figurename~\\ref{fig:memristive_sorting_k2} illustrates the proposed column-skipping algorithm with state recording $k = 2$ when sorting the 4-bit array $\\{8,9,10\\}$. State recording in the first iteration helps to skip the first 3 CRs in searching the 2nd minimum and the first 2 CRs in searching the 3rd minimum. The total latency is reduced to only 7 CRs. The selection of $k$ affects the performance of the proposed column-skipping algorithm. We study the impacts of $k$ on sorting speedup, silicon area and power consumption in Section~\\ref{section:evaluation}. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.91\\linewidth]{figures\/1T1R.pdf}\n \\caption{Near-memory circuit for column-skipping memristive in-memory sorting.}\n \\label{fig:near_mem_circuit}\n\\end{figure}\n\n\\subsection{Near-Memory Circuit for Column-Skipping}\n\n\\figurename~\\ref{fig:near_mem_circuit} demonstrates the near-memory circuit connected to a 1T1R memristive memory to implement the proposed column-skipping algorithm. The 1T1R memristive memory stores the binary bits of array elements with MSB on the leftmost column. Similar to \\cite{prasad2021memristive}, select lines with sense amplifiers and bitline drivers are used for column reads. The proposed near-memory circuit consists of three modules: 1) a column processor that controls the column states; 2) a row processor that controls wordline (or RE) states; 3) a state controller that stores the RE states and their corresponding column indexes using a $k$-entry table. It also controls signals to execute all the operations. \n\nThe near-memory circuit supports the four operations in \\figurename~\\ref{fig:flow_chart} as following: 1) column read (CR), where the column processor enables the bitline driver of a column and the corresponding bit column is read to the row processor. The column controller generates the next-step column state and the enable signal for column update ($cen$). Sense amplifiers measure the current on each select line to determine if it's 0 or 1; 2) row exclusion (RE), where the row processor checks if the bit column are all 0's or 1's (through row controller) before updating the wordlines (or RE) states. The row controller generates the enable signal for wordline update ($ren$). 
The wordlines that are connected to 1's are excluded and set to 0; 3) state recording (SR), where RE states and their corresponding column indexes are stored in a $k$-entry table. The recording is enabled ($sen$) if an iteration starts from the MSB and the bit column is not all 0's or 1's; and 4) state loading (SL), where the most recent RE state and the corresponding column index are sent to the row processor and column processor, respectively. The load enable signal ($len$) selects the reloaded states when updating the wordline and column registers. A top-level controller is used to schedule the four operations. \n\nWhen multiple rows remain unexcluded at the end of an iteration due to repetitions in the array, the column processor stalls to avoid redundant CRs until all repeated elements are excluded successively in the row processor.\n\n\n\\section{Multi-Bank Management}\n\\label{section:scalability}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.95\\linewidth]{figures\/multi_bank.pdf}\n \\caption{Multi-bank management to synchronize sub-sorter operations and select the sorted output.}\n \\label{fig:scalability}\n\\end{figure}\n\nThe near-memory circuit shown in \\figurename~\\ref{fig:near_mem_circuit} can be scaled up to support a larger array (i.e., larger $N$) or higher precision (i.e., larger $w$). However, a practical array can be too big to fit in a single memristive memory. To solve this problem, we propose a scalable solution that sorts a larger array stored in a multi-bank memristive memory. \n\nSuppose a length-$N$ array is stored in a $C$-bank memristive memory, where each bank stores $N\/C$ elements and has its own near-memory circuit, forming a length-$N\/C$ sub-sorter. To realize length-$N$ sorting using $C$ sub-sorters of length $N\/C$, the sub-sorters' operations need to be synchronized so that they run as a whole. A multi-bank manager is designed to connect the sub-sorters for this synchronization purpose: the judgment of whether a bit column is all 0's or 1's needs to be made globally to synchronize the RE and SR operations, while the CR and SL operations are synchronized through the OR gates. \n\n\\figurename~\\ref{fig:scalability} shows the multi-bank manager that generates the synchronized operation bits $en_{sync}$ based on the local operation bits $en_i$ from sub-sorter $i$, where $i \\in [1,C]$. In each sub-sorter, the synchronized operation bits $en_{sync}$ are used in place of the original signals ($en_i$) to realize the corresponding function. The multi-bank manager monitors the sub-sorters' states and selects the output from one of the $C$ sub-sorters when repetitions exist. The performance of the proposed multi-bank management is evaluated in Section~\\ref{section:evaluation}. \n\n\\section{Evaluation and Benchmarking}\n\\label{section:evaluation}\n\nWe evaluate the proposed techniques using statistically distributed datasets (uniform, normal and clustered) and practical datasets (from Kruskal's algorithm and MapReduce). We use 32-bit precision: the uniform distribution ranges from 0 to $2^{32}-1$, the normal distribution has a mean of $2^{31}$ and a standard deviation of $2^{31}\/3$, and the clustered distribution has 2 clusters centered at $2^{15}$ and $2^{25}$ with an identical standard deviation of $2^{13}$. To estimate the silicon area and power consumption, prototype sorters of length-1024 are implemented with 1T1R memristive memory using a 40nm CMOS technology. The RRAM device has two states and the corresponding resistances are 10M$\\Omega$ and 100k$\\Omega$, respectively. 
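\n\nFor reference, the statistically distributed datasets described above could be generated with a few lines of numpy; the sketch below is our own reconstruction, and the clipping to the 32-bit range and the integer casting are our assumptions rather than details specified by the experiments.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, W = 1024, 32                    # array length and bit width\nlo, hi = 0, 2**W - 1               # representable 32-bit range\n\nuniform = rng.integers(lo, hi, size=N, endpoint=True, dtype=np.uint64)\n\n# Normal: mean 2^31, std 2^31/3, clipped into the 32-bit range.\nnormal = np.clip(rng.normal(2**31, 2**31 / 3, size=N),\n                 lo, hi).astype(np.uint64)\n\n# Clustered: two clusters centered at 2^15 and 2^25, std 2^13 each.\ncenters = rng.choice([2**15, 2**25], size=N)\nclustered = np.clip(rng.normal(centers, 2**13),\n                    lo, hi).astype(np.uint64)\n\\end{verbatim}\n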
The state-of-the-art memristive in-memory sorter \\cite{prasad2021memristive} (baseline) and a conventional digital merge sorter are implemented for comparison. All prototype sorters run at a 500MHz clock frequency.\n\n\\subsection{Sorting Speedup}\n\nThe baseline implementation \\cite{prasad2021memristive} has a fixed sorting speed of 32 cycles per number for any dataset. The merge sorter outperforms the baseline by 3.2$\\times$ in speed. The speed of the column-skipping sorter depends on the parameter $k$ and the dataset distribution. \\figurename~\\ref{fig:speed} shows the normalized speedup over the baseline on the selected datasets with $N = 1024$, $w = 32$ and varying state recording $k$. When $k$ increases, the min search is more likely to start from a recorded RE state; however, the reloaded starting position ($s$ in \\figurename~\\ref{fig:flow_chart}) may be further away from the optimal starting position, degrading the speedup due to fewer skipped CRs. We observe that the speedup saturates when $k$ reaches 2 or 3 and then goes down across the selected datasets. \n\nThe proposed column-skipping algorithm achieves a higher speedup on the clustered dataset (up to 2.22$\\times$ over the baseline) than on the uniformly or normally distributed datasets (up to 1.21$\\times$ and 1.23$\\times$ over the baseline, respectively). This is because clustered elements with small centers signify more leading 0's and redundant CRs. On the Kruskal's and MapReduce datasets, the prevalence of small and repeated elements leads to much better results, with speedups of up to 3.46$\\times$ and 4.16$\\times$ over the baseline, respectively. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.95\\linewidth]{figures\/speed.pdf}\n \\caption{Normalized speedup over the baseline on different datasets with $N$ = 1024, $w$ = 32 and varying state recording $k$.}\n \\label{fig:speed}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.95\\linewidth]{figures\/area_power.pdf}\n \\caption{Normalized area and power over the baseline on the MapReduce dataset with $N$ = 1024, $w$ = 32 and varying state recording $k$.}\n \\label{fig:area_power}\n\\end{figure}\n\n\\subsection{Area and Energy Efficiency}\n\nWith $N = 1024$ and $w = 32$, the baseline sorter occupies 77.8K \\textmu m$^2$ in silicon while the merge sorter occupies 246.1K \\textmu m$^2$. The merge sorter demonstrates 1.01$\\times$ area efficiency (throughput\/area) over the baseline. We further measure the areas of column-skipping sorters with varying state recording $k$. \\figurename~\\ref{fig:area_power} presents the normalized area and area efficiency over the baseline when sorting the MapReduce dataset. With $k = 1$, the column-skipping sorter demonstrates more than 3.2$\\times$ area efficiency over the baseline. When $k$ increases, the sorter area increases due to a larger state controller that stores more RE states; however, the area efficiency goes down, because the speedup starts saturating when $k$ reaches 2 or 3. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\linewidth]{figures\/sca.pdf}\n \\caption{(a) Implementation summary using 40nm CMOS technology and 1T1R memristive memory (K \\textmu m$^2$ for area, Num\/ns\/mm$^2$ for area efficiency, mW for power, Num\/\\textmu J for energy efficiency);(b) Normalized area and power (for MapReduce dataset) with varying sub-sorter length $N_s$ for $N$ = 1024, $w$ = 32 and $k = 2$ ($N_s = 1024$ is the baseline).}\n \\label{fig:performance}\n\\end{figure}\n\nWe measured power using Ansys PowerArtist considering switching activities when sorting MapReduce dataset. The baseline sorter and the merge sorter consume 319.7 mW and 825.9 mW, respectively. The merge sorter demonstrates 1.24$\\times$ energy efficiency (throughput\/power) over the baseline. Column-skipping sorter consumes more power with increasing $k$, but the energy efficiency reaches the peak at $k = 2$ as shown in \\figurename~\\ref{fig:area_power}, outperforming the baseline by 3.39$\\times$. The area and power consumption of 1T1R array are orders of magnitude less than the near-memory circuit. One can select the parameter $k$ based on target dataset for optimized speed, area and energy efficiency. \n\n\\subsection{Multi-Bank Management}\n\nTo evaluate multi-bank management, we build a column-skipping sorter of $N = 1024$ using sub-sorters of length $N_s$ = 64, 256, 512. Multi-bank management does not change the speedup brought by column-skipping when clock frequency remains unchanged. Further reducing the sub-sorter length results in a degraded clock frequency under 500MHz due to more complex multi-bank manager. \\figurename~\\ref{fig:performance}(a) demonstrates the normalized area and power of multi-bank management over the original $N=1024$ sorter. We observe that the area and power of the near-memory circuit in sub-sorters decreases super-linearly when $N_s$ decreases. Even with an extra multi-bank manager, the total area and power for multi-bank management goes down with smaller sub-sorter length. Using 16 sub-sorters of length $N_s = 64$, the area and power reduction can be up to 14\\% and 9\\% compared to the original $N = 1024$ sorter. \\figurename~\\ref{fig:performance}(b) summarizes the implementation results for different sorters.\n\n\\section{Conclusions}\n\nWe present a fast and scalable memristive in-memory sorting that employs a column-skipping algorithm and a multi-bank management. Near-memory circuit with state recording is designed to efficiently skip redundant column reads for improved sorting speed and hardware efficiency. The multi-bank manager enables column-skipping for dataset stored in different banks of memristive memory. Prototype sorters are implemented using 40nm CMOS technology and 1T1R memristive memory. Experimented on a variety of sorting datasets with array length-1024, data precision 32-bit and state recording of 2, the speed, area efficiency and energy efficiency are 4.08$\\times$, 3.14$\\times$ and 3.39$\\times$, respectively, than the state-of-the-art memristive in-memory sorting. \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBeginning in December 2019, coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has become one of the most deadly global pandemic in history. The COVID-19 infections in the US and other nations are still spiking. 
As of October 30, 2020, the World Health Organization (WHO) has reported 44,888,869 confirmed cases of COVID-19 and 1,178,145 confirmed deaths. The virus has spread to Africa, the Americas, the Eastern Mediterranean, Europe, South-East Asia and the Western Pacific \\cite{weeklyReport}. To prevent further damage to our livelihood, we must control its spread through testing, social distancing, tracking the spread, and developing effective vaccines, drugs, diagnostics, and treatments.\n\nSARS-CoV-2 is a positive-sense single-strand RNA virus that belongs to the Nidovirales order, coronaviridae family and betacoronavirus genus \\cite{of2020species}. To effectively track the virus, it is important to test patients with suspected exposure to COVID-19 and to sequence the viral genome via PCR (polymerase chain reaction). From sequencing, we can analyze patterns in mutation and predict transmission pathways. Without understanding such pathways, current efforts to find effective medicines and vaccines could become futile because mutations may change the viral genome or lead to resistance. As of October 30, 2020, there are 89627 available sequences with 23763 unique single nucleotide polymorphisms (SNPs) with respect to the first SARS-CoV-2 sequence collected in December 2019 \\cite{wu2020new}, according to our mutation tracker \\url{https:\/\/users.math.msu.edu\/users\/weig\/SARS-CoV-2_Mutation_Tracker.html}.\n\nA popular method for understanding mutational trends is to perform phylogenetic analysis, where one clusters mutations to find evolution patterns and transmission pathways. Phylogenetic analysis has been done on the Nidovirales family \\cite{alam2020functional, gong2020sars, forster2020phylogenetic, li2020evolutionary, alam2020functional, kasibhatla2020understanding} to understand genetic evolutionary pathways, protein-level changes \\cite{wang2020decoding, wang2020characterizing, chen2020mutations, kasibhatla2020understanding}, large-scale variants \\cite{wang2020characterizing, wang2020decoding, wang2020decoding0,worobey2020emergence} and global trends \\cite{ bai2020comprehensive, toyoshima2020sars, van2020emergence}. Commonly used techniques for phylogenetic analysis include tree-based methods \\cite{page2012space} and $K$-means clustering. Both are unsupervised machine learning techniques, where ground truth is unavailable. These approaches provide valuable information for exploratory research. A main issue with phylogenetic tree analysis is that as the number of samples increases, its computation becomes impractical, making it unsuitable for large genome datasets. In contrast, $K$-means scales well with increasing sample size, but does not perform well when the sample size is too small. The Jaccard distance is commonly used to compare genome sequences \\cite{zhou2008approach} because it offers a phylogenetic or topological difference between samples. However, the tradeoff of the Jaccard distance is that its feature dimension equals the number of samples, so for a large sample size, the number of features is also large. Since $K$-means clustering relies on computing the distance between the center of each cluster and each sample, having a large feature space can result in expensive computation, a large memory requirement, and poor clustering performance. This becomes a significant problem as the number of SARS-CoV-2 genome isolates from patients has reached 150,000 at this point. There is a pressing need for efficient clustering methods for SARS-CoV-2 genome sequences. 
\n\nOne technique to address this challenge is to perform dimensional reduction on the $K$-means input dataset so that the task becomes manageable. Commonly used dimension reduction algorithms focus on two aspects: 1) the pairwise distance structure of all the data samples and 2) the preservation of local distances over global distances. Techniques such as principal component analysis (PCA) \\cite{jolliffe2016principal}, Sammon mapping \\cite{sammon1969nonlinear}, and multidimensional scaling (MDS) \\cite{cox2008multidimensional} aim to preserve the pairwise distance structure of the dataset. In contrast, the t-distributed stochastic neighbor embedding (t-SNE) \\cite{linderman2019fast, maaten2008visualizing}, uniform manifold approximation and projection (UMAP) \\cite{mcinnes2018umap,becht2019dimensionality}, Laplacian eigenmaps \\cite{belkin2001laplacian}, and LargeVis \\cite{tang2016visualizing} focus on the preservation of local distances. Among them, PCA, t-SNE, and UMAP are the most frequently used algorithms in the applications of cell biology, bioinformatics, and visualization \\cite{becht2019dimensionality}. \n \nPCA is a popular method used in exploratory studies, aiming to find the directions of maximum variance in high-dimensional data and to project the data onto a new subspace, obtaining a low-dimensional feature space while preserving most of the variance. The principal components of the new subspace can be interpreted as the directions of maximum variance, and the new feature axes are orthogonal to each other. Although PCA is able to capture the maximum variance among features, it may lose some information if one chooses an inappropriate number of principal components. As a linear algorithm, PCA performs poorly on features with nonlinear relationships. Therefore, in order to represent high-dimensional data on a low-dimensional, nonlinear manifold, nonlinear dimensional reduction algorithms such as t-SNE and UMAP are employed. t-SNE is a nonlinear method that can preserve the local and global structures of data. There are two main steps in t-SNE.\nFirst, it finds a probability distribution over the high-dimensional dataset, where similar data points are assigned higher probability. Second, it finds a similar probability distribution in the lower-dimensional space, and the difference between the two distributions is minimized. However, t-SNE computes pairwise conditional probabilities for each pair of samples and involves hyperparameters that are not always easy to tune, which makes it computationally complex. UMAP is a novel manifold learning technique that also captures nonlinear structure; it is competitive with t-SNE in visualization quality, maintains more of the global structure, and has superior run-time performance \\cite{mcinnes2018umap}. UMAP is built upon the mathematical work of Belkin and Niyogi on Laplacian eigenmaps, aiming to address the importance of uniform data distributions on manifolds via Riemannian geometry, and upon the metric realization of fuzzy simplicial sets by David Spivak \\cite{spivak2012metric}. Similar to t-SNE, UMAP optimizes the embedded low-dimensional representation with respect to the fuzzy set cross-entropy loss function by using stochastic gradient descent. The embedding is found by seeking a low-dimensional projection of the data that closely matches the fuzzy topological structure of the original space. The error between the two topological spaces is minimized by optimizing the spectral layout of the data in the low-dimensional space. 
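\n\nIn practice, all three methods are available as off-the-shelf Python packages. The following minimal sketch shows how they are typically invoked, using scikit-learn and the umap-learn package; the toy data and parameter values here are illustrative assumptions, not the settings used in the experiments below.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\nimport umap  # the umap-learn package\n\nX = np.random.rand(500, 100)   # toy data: 500 samples, 100 features\n\n# Linear projection onto the top-2 principal components.\nX_pca = PCA(n_components=2).fit_transform(X)\n\n# Nonlinear embeddings preserving local neighborhood structure.\nX_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)\nX_umap = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(X)\n\\end{verbatim}\n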
\n\n\nIn this work, we explore efficient computational methods for the phylogenetic analysis of a large volume of SARS-CoV-2 genome sequences. Specifically, we are interested in developing a dimension-reduction-assisted clustering method. To this end, we compare the effectiveness and accuracy of PCA, t-SNE and UMAP for dimension reduction in association with $K$-means clustering. To quantitatively evaluate the performance, we recast supervised classification problems with labels into $K$-means clustering problems so that the accuracy of $K$-means clustering can be evaluated. As a result, the accuracy and performance of PCA-, t-SNE- and UMAP-assisted $K$-means clustering can be compared. By choosing different dimensional reduction ratios, we examine the performance of these methods in $K$-means settings on four standard datasets. We found that UMAP is the most efficient, robust, reliable, and accurate algorithm. Based on this finding, we applied the UMAP-assisted $K$-means technique to large-scale SARS-CoV-2 datasets generated from a Jaccard distance representation and an SNP position-based representation to further analyze its effectiveness, both in terms of speed and scalability. Our results are compared with those in the literature \\cite{wang2020decoding} to shed new light on SARS-CoV-2 phylogenetics. \n\n\n\n\n\\section{Methods}\n\n\\subsection{Sequence and alignment}\nThe SARS-CoV-2 sequences were obtained from the GISAID databank (\\url{www.gisaid.com}). Only complete genome sequences with a collection date, high coverage, and without 'NNNNNN' in the sequences were considered. Each sequence was aligned to the reference sequence \\cite{wu2020new} using the multiple sequence alignment (MSA) package Clustal Omega \\cite{sievers2011fast}. A total of 23763 complete SARS-CoV-2 sequences are analyzed in this work.\n\n\\subsection{SNP position based features}\nLet $N$ be the number of SNP profiles with respect to the SARS-CoV-2 reference genome sequence, and let $M$ be the number of unique mutation sites. Denote by $V_i$ the position-based feature of the $i$th SNP profile,\n\\begin{equation}\nV_i = [v_i^1, v_i^2, ..., v_i^M], \\quad i = 1,2,..., N,\n\\end{equation}\nwhich is a $1\\times M$ vector. Here\n\\begin{equation}\nv_i^j = \\begin{cases}\n1, & \\text{if the $i$th profile is mutated at site $j$}, \\\\\n0, & \\text{otherwise}. \\\\\n\\end{cases}\n\\end{equation}\nWe compile this into an $N\\times M$ position-based feature matrix,\n\\begin{equation}\nS(i,j) = v_i^j,\n\\end{equation}\nwhere each row represents a sample. Note that $S(i,j)$ is a binary representation of the mutation positions and is sparse.\n\n\\subsection{Jaccard based representation}\\label{sec:Jaccard}\n\nThe Jaccard distance measures the dissimilarity between two sets. It is widely used in phylogenetic studies of SNP profiles. In this work, we utilize the Jaccard distance to compare the SNP profiles of SARS-CoV-2 genome isolates.\n\nLet $A$ and $B$ be two sets. The Jaccard index between $A$ and $B$, denoted $J(A,B)$, is the cardinality of the intersection divided by the cardinality of the union\n\\begin{equation}\n\tJ(A,B) = \\frac{\\big| A\\cap B\\big|}{\\big|A \\cup B \\big|} = \\frac{\\big| A\\cap B\\big|}{\\big|A\\big| + \\big|B\\big| - \\big|A \\cap B \\big|}. 
\n\\end{equation}\nThe Jaccard distance between the two sets is defined by subtracting the Jaccard index from 1:\n\\begin{equation}\n\td_J(A,B) = 1- J(A,B) = \\frac{\\big|A \\cup B \\big| - \\big|A \\cap B \\big|}{\\big|A \\cup B \\big|}\n\\end{equation}\n\nWe assume there are $N$ SNP profiles or genome isolates that have been aligned to the reference SARS-CoV-2 genome. Let $S_i$, $i=1,...,N$, be the set with the position of the mutation of the $i$th sample. The Jaccard distance between two sets $S_i$ and $S_j$ is given by $d_J(S_i, S_j)$. Taking the pairwise distance between all the samples, we can construct the Jaccard based representation, resulting in an $N\\times N$ distance matrix $D$\n\\begin{equation}\n\tD(i,j) = d_J(S_i, S_j)\n\\end{equation}\nThis distance defines a metric over the collections of all finite sets \\cite{levandowsky1971distance}.\n\n\n\t%\n\n\n\n\n\n\t%\n\n\\subsection{$K$-means clustering}\n$K$-means clustering is one of the most popular unsupervised learning methods in machine learning, where it aims to cluster or partition a data $\\{x_1, ..., x_N\\}$, $x_i \\in \\mathbb{R}^M$ into $k$ clusters, $\\{C_1, ..., C_k\\}$, $k \\le N$.\n\n$K$-means clustering begins with selecting $k$ points as $k$ cluster centers, or centroids. Then, each point in the dataset is assigned to the nearest centroid. The centroids are then updated by minimizing the within-cluster sum of squares (WCSS), which is defined as \n\\begin{equation}\n\\sum_{i=1}^k \\sum_{x_i \\in C_k} \\|x_i - \\mu_k\\|_2^2.\n\\end{equation}\nHere, $\\|\\cdot\\|_2$ denotes the $l_2$ norm and $\\mu_k$ is the average of the data point in cluster $k$\n\\begin{equation}\n\\displaystyle \\mu_k = \\frac{1}{|C_k|} \\sum_{x_i \\in C_k} x_i.\n\\end{equation}\n\nThis method, however, only finds the optimal centroid, given a fixed number of clusters $k$. In applications, we are interested in finding the optimal number of clusters as well. In order to obtain the best $k$ clusters, elbow method was used. The optimal number of clusters can be determined via the elbow method by plotting the WCSS against the number of clusters, and choosing the inflection point position as the optimal number of clusters.\n\n\\subsection{Principal component analysis}\nPrincipal component analysis (PCA) is one the most commonly used dimensional reduction techniques for the exploratory analysis of high-dimensional data \\cite{jolliffe2016principal}. Unlike other methods, there is no need for any assumptions in the data. Therefore, it is a useful method for new data, such as SARS-CoV-2 SNPs data. PCA is conducted by obtaining one component or vector at a time. The first component, termed the principal component, is the direction that maximizes the variance. The subsequent components are orthogonal to earlier ones.\n\nLet $\\{x_i\\}_{i=1}^N$ be the input dataset, with $N$ being the number of samples or data points. For each $x_i$, let $x_i \\in \\mathbb{R}^M$, where $M$ is the number of features or data dimension. Then, we can cast the data as a matrix $X \\in \\mathbb{R}^{N \\times M}$. PCA seeks to find a linear combination of the columns of $X$ with maximum variance.\n\\begin{equation}\n\\sum_{j=1}^n a_jx_j = Xa,\n\\end{equation}\nwhere $a_1, a_2, ..., a_n$ are constants. The variance of this linear combination is defined as \n\\begin{equation}\n{\\rm var}(Xa) = a^TSa,\n\\end{equation}\nwhere $S$ is the covariance matrix for the dataset. Note that we compute the eigenvalue of the covariance matrix. 
The maximum variance can be computed iteratively using the Rayleigh quotient\n\\begin{equation}\na_{(1)} = \\arg \\max_a \\frac{a^TX^TXa}{a^Ta}.\n\\end{equation}\nThe subsequent components can be computed by maximizing the variance of\n\\begin{equation}\n\\hat{X}_k = X - \\sum_{j=1}^{k-1} Xa_ja_{j}^T,\n\\end{equation}\nwhere $k$ represents the $k$th principal component. Here, the first $k-1$ principal components are subtracted from the original matrix $X$. Therefore, the complexity of the method scales linearly with the number of components one seeks to find. In applications, we hope that the first few components give rise to a good PCA representation of the original data matrix $X$. \n\n\n\\subsection{t-SNE}\nThe t-distributed stochastic neighbor embedding (t-SNE) is a nonlinear dimensional reduction algorithm that is well suited for reducing high-dimensional data into a two- or three-dimensional space. There are two main stages in t-SNE. First, it constructs a probability distribution over pairs of data points such that a pair of nearby points is assigned a high probability, while a pair of faraway points is given a low probability. Second, t-SNE defines a probability distribution in the embedded space that is similar to that in the original high-dimensional space, and aims to minimize the Kullback-Leibler (KL) divergence between them \\cite{linderman2019fast}.\n\nLet $\\{x_1, x_2, ..., x_N | x_i \\in \\mathbb{R}^M \\}$ be a high-dimensional input dataset. Our goal is to find an optimal low-dimensional representation $\\{y_1, ..., y_N | y_i \\in \\mathbb{R}^k\\}$, such that $k << M$.\nThe first step in t-SNE is to compute the pairwise distribution between $x_i$ and $x_j$, denoted $p_{ij}$. To this end, we first find the conditional probability of $x_j$ given $x_i$:\n\\begin{equation}\\label{pdist} \np_{j|i} = \\frac{\\exp(-\\|x_i - x_j\\|^2\/2\\sigma_i^2)}{\\sum_{m\\ne i}\\exp(-\\|x_i - x_m\\|^2\/2\\sigma_i^2)}, \\quad i \\ne j,\n\\end{equation}\nsetting $p_{i|i} = 0$; the denominator normalizes the probability. Here, $\\sigma_i$ is a bandwidth determined by a predefined hyperparameter called the perplexity. A smaller $\\sigma_i$ is used for a denser dataset. Notice that this conditional probability is not symmetric in general, i.e., $p_{i|j} \\ne p_{j|i}$. Therefore, a symmetrized pairwise probability is defined as \n\\begin{equation}\np_{ij} = \\frac{p_{j|i} + p_{i|j}}{2N}.\n\\end{equation}\n\nIn the second step, we learn a $k$-dimensional embedding $\\{y_1, ..., y_N | y_i \\in \\mathbb{R}^k\\}$. t-SNE then calculates a similar probability distribution $q_{ij}$ in the embedded space, defined as\n\\begin{equation}\\label{qdist} \nq_{ij} = \\frac{\\frac{1}{1+\\|y_i - y_j\\|^2}}{\\sum_{m}\\sum_{l\\ne m}\\frac{1}{1+\\|y_m - y_l\\|^2}}, \\quad i \\ne j,\n\\end{equation}\nsetting $q_{ii} = 0$. Finally, the low-dimensional embedding $\\{y_1, ..., y_N | y_i \\in \\mathbb{R}^k\\}$ is found by minimizing the KL-divergence via a standard gradient descent method,\n\\begin{equation}\n{\\rm KL}(P|Q) = \\sum_{i, j} p_{ij}\\log\\frac{p_{ij}}{q_{ij}},\n\\end{equation}\nwhere $P$ and $Q$ are the distributions for $p_{ij}$ and $q_{ij}$, respectively. Note that the probability distributions in Eqs. (\\ref{pdist}) and (\\ref{qdist}) can be replaced by many other delta sequence kernels of positive type \\cite{wei2000wavelets}. 
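\n\nAs a worked illustration of the two distributions and the KL objective above, the following numpy sketch evaluates them for a toy dataset and a candidate embedding. A single fixed bandwidth $\\sigma$ is assumed here for simplicity; actual t-SNE implementations calibrate $\\sigma_i$ per point from the perplexity and minimize the objective by gradient descent.\n\\begin{verbatim}\nimport numpy as np\n\ndef tsne_kl(X, Y, sigma=1.0):\n    """KL divergence between high-dimensional P and embedding Q."""\n    n = X.shape[0]\n    d2x = np.square(X[:, None, :] - X[None, :, :]).sum(-1)\n    P_cond = np.exp(-d2x / (2 * sigma**2))\n    np.fill_diagonal(P_cond, 0.0)                # p_{i|i} = 0\n    P_cond /= P_cond.sum(axis=1, keepdims=True)  # row-normalize\n    P = (P_cond + P_cond.T) / (2 * n)            # symmetrized p_ij\n\n    d2y = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)\n    Q = 1.0 / (1.0 + d2y)                        # Student-t kernel\n    np.fill_diagonal(Q, 0.0)                     # q_{ii} = 0\n    Q /= Q.sum()                                 # normalize over pairs\n\n    mask = P > 0\n    return np.sum(P[mask] * np.log(P[mask] / Q[mask]))\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(50, 10))   # toy high-dimensional data\nY = rng.normal(size=(50, 2))    # candidate 2-D embedding\nprint(tsne_kl(X, Y))            # objective to be minimized\n\\end{verbatim}\n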
\n\n\\subsection{UMAP}\nUniform manifold approximation and projection (UMAP) is a nonlinear dimensional reduction method, utilizing three assumptions: the data is uniformly distributed on Riemannian manifold, Riemannian metric is locally constant, and the manifold if locally connected. Unlike t-SNE which utilizes probabilistic model, UMAP is a graph-based algorithm. Its essentially idea is to create a predefined $k$-dimensional weighted UMAP graph representation of each of the original high-dimensional data point such that the edge-wise cross-entropy between the weighted graph and the original data is minimized. Finally, the $k$-dimensional eigenvectors of the UMAP graph are used to represent each of the original data point. \nIn this section, a computational view of UMAP is presented. For a more theoretical account, the reader is referred to Ref. \\cite{mcinnes2018umap}.\n\nSimilar to t-SNE, UMAP considers the input data $X = \\{x_1, x_2, ..., x_N\\},$ $x_i \\in \\mathbb{R}^M$ and look for an optimal low dimensional representation $\\{y_1, ..., y_N | y_i \\in \\mathbb{R}^k\\}$, such that $k < M$. The first stage is the construction of weighted $k$-neighbor graphs. Let define a metric $d: X\\times X \\to \\mathbb{R}^+$. Let $k << M$ be a hyperparemeter, and compute the $k$-nearest neighbors of each $x_i$ under a given metric $d$. For each $x_i$, let\n\\begin{equation}\n\\rho_i = \\min\\{d(x_i, x_j)| 1 \\le j \\le k, d(x_i, x_j) > 0\\}\n\\end{equation}\nwhere $\\sigma_i$ is defined via\n\\begin{equation}\n\\sum_{j=1}^k \\exp\\left(\\frac{-\\max(0, d(x_i, x_j) - \\rho_i)}{\\sigma_i}\\right) = \\log_2 k.\n\\end{equation}\nOne chooses $\\rho_i$ to ensure at least one data point is connected to $x_i$ and having edge weight of 1, and set $\\sigma_i$ as a length scale parameter.\nOne defines a weighted directed graph $\\bar{G} = (V, E, \\omega)$, where $V$ is the set of vertices (in this case, the data $X$), $E$ is the set of edges $E = \\{ (x_i, x_j)| 1 \\le h \\le k, 1 \\le i \\le N\\}$, and $\\omega$ is the weight for edges\n\\begin{equation}\n\\omega(x_i, x_j) = \\exp\\left( \\frac{-\\max(0, d(x_i, x_j) - \\rho_i)}{\\sigma_i}\\right).\n\\end{equation}\nUMAP tries to define an undirected weighted graph $G$ from directed graph $\\bar{G}$ via symmetrization. Let $A$ be the adjacency matrix of the graph $\\bar{G}$. A symmetric matrix can be obtained\n\\begin{equation}\nB = A + A^T - A\\otimes A^T,\n\\end{equation}\nwhere $T$ is the transpose and $\\otimes$ denotes the Hadamard product. Then, the undirected weighted Laplacian $G$ (the UMAP graph) is defined by its adjacency matrix $B$. \n\nIn its realization, UMAP evolves an equivalent weighted graph $H$ with a set of points $\\{y_i\\}_{i=1,\\cdots,N}$, utilizing attractive and repulsive forces. The attractive and repulsive forces at coordinate $y_i$ and $y_j$ are given by\n\\begin{align}\n& \\frac{-2ab \\|y_i - y_j\\|_2^{2(b-1)}}{1 + \\|y_i - y_j\\|_2^2} w(x_i, x_j) (y_i - y_j), ~~{\\rm and} \\\\\n& \\frac{2b}{(\\epsilon + \\|y_i - y_j\\|_2^2)(1+a\\|y_i - y_j\\|_2^{2b})} (1-w(x_i, x_j)) (y_i - y_j)\n\\end{align}\nwhere $a,b$ are hyperparemeters, and $\\epsilon$ is taken to be a small value such that the denominator does not become 0. The goal is to find the optimal low-dimensional coordinates $\\{y_i\\}_{i=1}^{N}$, $y_i \\in \\mathbb{R}^k$, that minimizes the edge-wise cross entropy with the original data at each point. 
The evolution of the UMAP graph Laplacian $G$ can be regarded as a discrete approximation of the Laplace-Beltrami operator on a manifold defined by the data \\cite{chen2019evolutionary}. The implementation and further details of UMAP can be found in Ref. \\cite{mcinnes2018umap}. \n\nUMAP may not work well if the data points are non-uniformly distributed. If part of the data points have $k$ important neighbors while another part of the data points has $k'>>k$ important neighbors, the $k$-dimensional UMAP will not work efficiently. Currently, there is no algorithm to automatically determine the critical minimal $k_{\\rm min}$ for a given dataset. Additionally, the weights $w(x_i,x_j)$ and the force terms can be replaced by other functions that are easier to evaluate \\cite{wei2000wavelets}. The metric $d$ can be selected as the Euclidean distance, Manhattan distance, Minkowski distance, or Chebyshev distance, depending on the application. \n\n\\section{Validation}\n$K$-means clustering is an unsupervised learning algorithm, so neither the accuracy nor the root-mean-square error can be calculated to evaluate the performance of $K$-means clustering explicitly. Additionally, $K$-means clustering can be problematic for high-dimensional large datasets, and dimension-reduced $K$-means clustering is an efficient alternative. \nTo evaluate its accuracy and performance, we convert supervised classification problems with known solutions into dimension-reduced $K$-means clustering problems. In doing so, we apply $K$-means clustering to the classification dataset by setting the number of clusters equal to the number of real categories. Next, in each cluster, we take the dominant label as the predicted label for all samples in that cluster and then calculate the $K$-means clustering accuracy for the whole dataset.\n \n\\subsection{Validation data}\\label{sec:Validation}\nIn this work, we consider the following classification datasets to test the performance of the clustering methods: Coil 20, Facebook large page-page network, MNIST, and Jaccard distanced-based MNIST. \n\\begin{itemize}\n\t\\item \\textbf{Coil 20}: Coil 20 \\cite{Coil20} is a dataset with 1440 gray-scale images, consisting of 20 different objects, each with 72 orientations. Each image is of size $128\\times 128$ and was treated as a 16384-dimensional vector for dimensional reduction.\n\t\n\t\\item \\textbf{Facebook Network}: The Facebook large page-page network \\cite{rozemberczki2019multiscale} is a page-page webgraph of verified Facebook sites. Each node represents a Facebook page, and the links are the mutual links between sites. This is a binary dataset with 22,470 nodes; hence the sample size and feature size are both 22,470. The Jaccard distance was computed between each pair of nodes for the feature space.\n\t\n\t\\item \\textbf{MNIST}: MNIST \\cite{lecun1998gradient} is a handwritten digit dataset. Each image is a gray-scale image of size $28\\times 28$, which was treated as a 784-dimensional vector for the feature space, each entry with an integer value in [0, 255]. Standard normalization was used before performing dimensional reduction. There are 70,000 samples, with 10 different labels.\n\t\n\t\\item \\textbf{Jaccard distanced-based MNIST}: The above dataset was converted to a Jaccard distance-based dataset. This is to simulate a position-based mutational dataset, where 1 indicates a mutation at a particular position. The Jaccard distance was used to construct the feature space; hence, for each sample, the feature size is 70,000. 
This dataset can be viewed as an additional validation of our Jaccard distance representation. \n\t\n\\end{itemize}\n\n\\subsection{Validation results}\nIn the present work, we implement three popular dimensional reduction methods, PCA, UMAP, and t-SNE, and compare their performance in $K$-means clustering. For a uniform comparison, we reduce the dimensions of the samples by a set of ratios. The minimum of the number of features and the number of samples was taken as the base of the reduction. For the Coil 20 dataset, since the numbers of samples and features were 1440 and 16384, respectively, dimension reductions were based on 1440. \nFor the Facebook Network, since the numbers of samples and features were both 22,470, dimension reductions were based on 22,470.\nFor the MNIST dataset, since the numbers of samples and features were respectively 70,000 and 784, dimension reductions were based on 784. Finally, for the Jaccard distanced-based MNIST dataset, since the numbers of samples and features were both 70,000, dimension reductions were based on 70,000. \nNote that for the Jaccard distanced-based MNIST data, more aggressive ratios were used because the original feature size is huge, i.e., 70,000. The standard ratios of 1\/2, 1\/4, 1\/8, etc., do not sufficiently reduce the dimension for effective $K$-means computation. For the purpose of visualization, each algorithm is also used to reduce the data to two dimensions. \nIn order to validate PCA-, UMAP-, and t-SNE-assisted $K$-means clustering, we observe their performance on labeled datasets. $k$-nearest neighbors ($k$-NN) was used to find the baseline of the reduction, which reveals how much information is preserved in the features after applying a dimensional reduction algorithm. For $k$-NN, 10-fold cross-validation was performed.\n\nNotably, $K$-means clustering is an unsupervised learning algorithm, which does not have labels to evaluate the clustering performance explicitly. However, we can assess the $K$-means clustering accuracy via labeled datasets that have ground truth. In doing so, we set the number of clusters $K$ to the original number of classes. Then, we can compare the $K$-means clustering results with the ground truth. Therefore, the accuracy can reveal the performance of the proposed dimension-reduction-assisted ($K$-means) clustering method. For the classification problem, we assume the training set is $\\{(\\mathbf{x}_i, y_i) | \\mathbf{x}_i\\in \\mathbb{R}^m, y_i \\in\\mathbb{Z} \\}_{i=1}^{n}$ with $|\\{y_i\\}_{i=1}^{n}|=k$. Here $n$, $m$, and $k$ represent the number of samples, the number of features $\\{\\mathbf{x}_i\\}$, and the number of labels $\\{y_i\\}$, respectively. We set the number of clusters equal to the number of labels $k$. After applying the $K$-means clustering algorithm, we get $k$ different clusters $\\{\\mathbf{c}_j\\}^{k}_{j=1}$. In each cluster, we define the predictor of the $K$-means clustering in the cluster $\\mathbf{c}_j$ to be:\n\\begin{equation}\n\\hat{\\mathbf{y}}(\\mathbf{c}_j) = \\arg\\max_{y_l} F_j(y_l),\n\\end{equation}\nwhere $F_j(y_1), \\cdots, F_j(y_k)$ are the appearance frequencies of each label in the cluster $\\mathbf{c}_j$. Then the clustering accuracy can be defined as:\n\\begin{equation}\n\\text{Accuracy} = \\frac{\\sum_{i} 1_{ \\{ y_i = \\hat{y}_i \\} } } {n},\n\\end{equation}\nwhere $\\{\\hat{y}_i\\}$ are the predicted labels. 
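\nA minimal Python sketch of this evaluation protocol may read as follows (function and variable names are ours; integer-coded labels are assumed so that the dominant label can be counted with \\texttt{bincount}):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef clustering_accuracy(features, labels):\n    """Majority-label accuracy of K-means with K = #classes."""\n    k = len(np.unique(labels))\n    clusters = KMeans(n_clusters=k, random_state=0).fit_predict(features)\n    correct = 0\n    for j in range(k):\n        members = labels[clusters == j]  # ground truth in cluster j\n        if members.size:\n            correct += np.bincount(members).max()  # dominant label\n    return correct / labels.size\n\n# e.g., clustering_accuracy(X_umap, y) scores UMAP-assisted K-means\n\\end{verbatim}\n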
\nMoreover, other evaluation metrics such as precision, recall, and the receiver operating characteristic (ROC) can also be defined accordingly.\n\n\\subsubsection{Coil 20}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{Coil20_compressed.pdf}\n\t\\caption{Comparison of different dimensional reduction algorithms on the Coil 20 dataset. There are a total of 20 different labels in the Coil 20 dataset, and we use the ground truth label to color each data point. (a) Feature size is reduced to dimension 2 by PCA. (b) Feature size is reduced to dimension 2 by t-SNE. (c) Feature size is reduced to dimension 2 by UMAP. }\n\t\\label{fig:Coil20}\n\\end{figure}\n\n\n\\autoref{fig:Coil20} shows the performance of PCA-assisted, UMAP-assisted and t-SNE-assisted clustering of the Coil 20 dataset. For each case, the dataset was reduced to dimension 2 using default parameters, and the plots were colored with the ground truth of the Coil 20 dataset. It can be seen that PCA does not present good clustering, whereas UMAP and t-SNE show very good clusters. \n\n\\begin{table}[ht]\n\t\\centering\n\t\\caption{Accuracy of $k$-NN of the Coil 20 dataset without applying any reduction algorithms, as well as the accuracy of $k$-NN assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. The sample size, feature size, and the number of labels of the Coil 20 dataset are 1440, 16384, and 20, respectively. }\n\t\\begin{tabular}{c|cccccc}\\hline \n\t\tDataset & \\tabincell{c}{$k$-NN accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{10}*{\\shortstack{Coil 20 \\\\(1440,16384,20)}} & \\multirow{10}{*}{0.956}\n\t\t& 720 (1\/2) & 0.955 & 0.668 & 0.850 \\\\\n\t\t& & 360 (1\/4) & 0.957 & 0.861 & 0.889 \\\\\n\t\t& & 180 (1\/8) & 0.973 & 0.867 & 0.881 \\\\\n\t\t& & 90 (1\/16) & 0.977 & 0.860 & 0.885 \\\\\n\t\t& & 45 (1\/32) & 0.980 & 0.861 & 0.875 \\\\\n\t\t& & 22 (1\/64) & 0.985 & 0.868 & 0.743 \\\\\n\t\t& & 14 (1\/100) & 0.730 & 0.851 & 0.878 \\\\\n\t\t& & 7 (1\/200) & 0.985 & 0.870 & 0.845 \\\\\n\t\t& & 3 & 0.850 & 0.863 & 0.959 \\\\\n\t\t& & 2 & 0.730 & 0.853 & 0.948 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:Coil20 KNN ACC}\n\\end{table}\n\n\\autoref{tab:Coil20 KNN ACC} shows the accuracy of $k$-NN clustering of the Coil 20 dataset assisted by PCA, t-SNE, and UMAP with different dimensional reduction ratios. The Coil 20 dataset has 1,440 samples, 16,384 features, and 20 different labels. For PCA, the sklearn implementation in Python was used with standard parameters. Note that for all methods, dimensions were also reduced to 3 and 2 for comparison. For t-SNE, Multicore-TSNE \\cite{Ulyanov2016} was used because it supports up to 8 processor cores, which is not available in the sklearn implementation, and it is the fastest-performing t-SNE implementation. For UMAP, we used standard parameters \\cite{mcinnes2018umap}. It can be seen that when we reduce the dimension to 3, t-SNE performs best. Moreover, even at reduction ratios as aggressive as 1\/100 and 1\/200, UMAP and t-SNE maintain accuracies above 0.84, whereas the PCA accuracy fluctuates (e.g., 0.730 at ratio 1\/100). Notably, the $k$-NN accuracy for the data without applying any dimensional reduction algorithm is 0.956, indicating that the UMAP-reduced features do not achieve the best $k$-NN performance on the Coil 20 dataset. 
However, PCA and t-SNE preserve the information of the original data well at dimensional reduction ratios larger than 1\/100, and t-SNE even performs better at dimension three on the Coil 20 dataset. \n\n\\begin{table}[ht]\n\t\\centering\n\t\\caption{Accuracy of $K$-means clustering of the Coil 20 dataset without applying any reduction algorithms, as well as the accuracy of $K$-means assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. The sample size, feature size, and the number of labels of the Coil 20 dataset are 1440, 16384, and 20, respectively. }\n\t\\begin{tabular}{c|cccccc} \\hline\n\t\tDataset & \\tabincell{c}{$K$-means accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{10}*{\\shortstack{Coil 20 \\\\(1440,16384,20)}} & \\multirow{10}{*}{0.626} \n\t\t \n\t\t & 720 (1\/2) & 0.64 & 0.301 & 0.798 \\\\\n\t\t& & 360 (1\/4) & 0.678 & 0.800 & 0.718 \\\\\n\t\t& & 180 (1\/8) & 0.633 & 0.822 & 0.648 \\\\\n\t\t& & 90 (1\/16) & 0.642 & 0.799 & 0.681 \\\\\n\t\t& & 45 (1\/32) & 0.666 & 0.800 & 0.615 \\\\\n\t\t& & 22 (1\/64) & 0.673 & 0.819 & 0.151 \\\\\n\t\t& & 14 (1\/100) & 0.631 & 0.817 & 0.154 \\\\\n\t\t& & 7 (1\/200) & 0.591 & 0.819 & 0.360 \\\\\n\t\t& & 3 & 0.561 & 0.800 & 0.780 \\\\\n\t\t& & 2 & 0.537 & 0.801 & 0.828 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:Coil20 KMean ACC}\n\\end{table}\n\\autoref{tab:Coil20 KMean ACC} describes the accuracy of $K$-means clustering of the Coil 20 dataset assisted by PCA, UMAP, and t-SNE with different dimensional reduction ratios. For consistency, we use the same set of standard parameters as for $k$-NN. For the Coil 20 dataset, the accuracy of $K$-means clustering assisted by UMAP shows the best overall performance. When the reduced dimension is 180 (ratio 1\/8), UMAP results in a relatively high $K$-means accuracy (0.822). Moreover, although PCA performs best in $k$-NN accuracy, it performs poorly in $K$-means accuracy, indicating that PCA is not a suitable dimensional reduction algorithm for the Coil 20 dataset. Furthermore, the highest accuracy of $K$-means clustering is 0.828, which is obtained from the t-SNE-assisted algorithm. However, the t-SNE-assisted accuracy changes dramatically under different reduction ratios. When the ratio is 1\/64, the t-SNE-assisted accuracy is only 0.151, indicating that t-SNE is sensitive to the hyperparameter settings. In contrast, the performance of UMAP is highly stable under all dimension-reduction ratios. \n\nNote that the dimension-reduced $K$-means clustering methods outperform the original $K$-means clustering. Therefore, the proposed dimension-reduced $K$-means clustering methods not only improve the $K$-means clustering efficiency, but also achieve better accuracy. \n\n\n\n\n\n\\subsubsection{Facebook Network}\n\\begin{figure}[ht]\n \n \\centering\n\t\\includegraphics[width = \\textwidth]{Facebook_compressed.pdf}\n\t\\caption{Comparison of different dimensional reduction algorithms on the Facebook Network dataset. There are a total of 4 different labels in the Facebook Network dataset, and we use the ground truth label to color each data point. (a) Feature size is reduced to dimension 2 by PCA. (b) Feature size is reduced to dimension 2 by t-SNE. (c) Feature size is reduced to dimension 2 by UMAP. 
}\n\t\\label{fig:Facebook}\n\\end{figure}\n\\autoref{fig:Facebook} shows the visualization performance of PCA-assisted, UMAP-assisted, and t-SNE-assisted clustering of the Facebook Network. For each case, the dataset was reduced to dimension 2 using default parameters, and the plots were colored with the ground truth of the Facebook Network. \\autoref{fig:Facebook} shows that the PCA-based data is located distributively, while the t-SNE- and UMAP-based data show clusters. \n\n\n\\autoref{tab:Facebook KNN ACC} shows the accuracy of $k$-NN clustering of the Facebook Network assisted by PCA, t-SNE, and UMAP with different dimensional reduction radio. The Facebook Network dataset has 22,470 samples with 4 different labels, and the feature size of the Facebook Network is also 22,470. For each algorithm, we use the same settings as the Coil 20 dataset. Without applying any dimensional reduction method, The Facebook Network has 0.755 $k$-NN accuracy. The reduced feature from PCA has the best $k$-NN performance when the reduction ratio is 1\/2. UMAP has a better performance compared to PCA and t-SNE when the reduction ratio is smaller than 1\/16. \n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Accuracy of $k$-NN of the Facebook Network without applying any reduction algorithms, as well as the accuracy of $k$-NN assisted by PCA, UMAP and t-SNE with different dimensional reduction ratio. The sample size, feature size, and the number of labels of the Facebook Network are 22470, 22470, and 4, respectively. }\n\t\\begin{tabular}{c|cccccc}\\hline \n Dataset & \\tabincell{c}{$k$-NN accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{12}*{\\shortstack{Facebook Network \\\\(22470, 22470, 4)}} & \\multirow{13}{*}{0.755}\n\t\t \n\t\t & 11235 (1\/2) & 0.756 & 0.360 & 0.307 \\\\\n\t\t& & 5617 (1\/4) & 0.755 & 0.669 & 0.316 \\\\\n\t\t& & 2808 (1\/8) & 0.754 & 0.754 & 0.355 \\\\\n\t\t& & 1404 (1\/16) & 0.751 & 0.816 & 0.707 \\\\\n\t\t& & 702 (1\/32) & 0.751 & 0.814 & 0.669 \\\\\n\t\t& & 351 (1\/64) & 0.746 & 0.815 & 0.690 \\\\\n\t\t& & 224 (1\/100) & 0.733 & 0.814 & 0.676 \\\\\n\t\t& & 112 (1\/200) & 0.721 & 0.819 & 0.633 \\\\\n\t\t& & 44 (1\/500) & 0.714 & 0.816 & 0.709 \\\\\n\t\t& & 22 (1\/1000) & 0.690 & 0.815 & 0.643 \\\\\n\t\t& & 3 & 0.552 & 0.801 & 0.741 \\\\\n\t\t& & 2 & 0.501 & 0.786 & 0.732 \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:Facebook KNN ACC}\n\\end{table}\n\n\n\\autoref{tab:Facebook KMean ACC} describes the accuracy of $K$-means clustering of the Facebook Network assisted by PCA, UMAP and t-SNE with different dimensional reduction ratio. PCA, UMAP, and t-SNE all have very poor performance, which may be caused by the smaller number of labels. The highest accuracy 0.427 is observed in the t-SNE-assistant algorithm with dimension 2. \n\n\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Accuracy of $K$-means clustering of the Facebook Network without applying any reduction algorithms, as well as the accuracy of $K$-means assisted by PCA, UMAP and t-SNE with different dimensional reduction ratio. 
The sample size, feature size, and the number of labels of the Facebook Network are 22470, 22470, and 4, respectively.}\n\t\\begin{tabular}{c|cccccc} \\hline\n Dataset & \\tabincell{c}{$K$-means accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{12}*{\\shortstack{Facebook Network \\\\(22470, 22470, 4)}} &\n\t\t\\multirow{12}{*}{0.374}\n\t\t \n\t\t& 11235 (1\/2) & 0.331 & 0.306 & 0.306 \\\\\n\t\t& & 5617 (1\/4) & 0.331 & 0.307 & 0.299 \\\\\n\t\t& & 2808 (1\/8) & 0.331 & 0.411 & 0.314 \\\\\n\t\t& & 1404 (1\/16) & 0.331 & 0.397 & 0.313 \\\\\n\t\t& & 702 (1\/32) & 0.331 & 0.401 & 0.306 \\\\\n\t\t& & 351 (1\/64) & 0.331 & 0.400 & 0.308 \\\\\n\t\t& & 224 (1\/100) & 0.331 & 0.400 & 0.327 \\\\\n\t\t& & 112 (1\/200) & 0.331 & 0.400 & 0.306 \\\\\n\t\t& & 44 (1\/500) & 0.331 & 0.400 & 0.313 \\\\\n\t\t& & 22 (1\/1000) & 0.331 & 0.401 & 0.306 \\\\\n\t\t& & 3 & 0.332 & 0.351 & 0.344 \\\\\n\t\t& & 2 & 0.358 & 0.345 & 0.427 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:Facebook KMean ACC}\n\\end{table}\n\nSimilar to the last case, the UMAP-based and t-SNE-based dimension-reduced $K$-means clustering methods outperform the original $K$-means clustering with the full feature dimension. Therefore, it is useful to carry out dimension reduction before $K$-means clustering for large datasets. \n\n \n\n\\subsubsection{MNIST}\n\n\\begin{figure}[ht]\n \n \\centering\n\t\\includegraphics[width = \\textwidth]{MNIST_compressed.pdf}\n\t\\caption{Comparison of different dimensional reduction algorithms on the MNIST dataset. There are a total of 10 different labels in the MNIST dataset, and we use the ground truth label to color each data point. (a) Feature size is reduced to dimension 2 by PCA. (b) Feature size is reduced to dimension 2 by t-SNE. (c) Feature size is reduced to dimension 2 by UMAP. }\n\t\\label{fig:MNIST}\n\\end{figure}\n\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Accuracy of $k$-NN of the MNIST dataset without applying any reduction algorithms, as well as the accuracy of $k$-NN assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. The sample size, feature size, and the number of labels of the MNIST dataset are 70000, 784, and 10, respectively.}\n\t\\begin{tabular}{c|cccccc}\\hline \n Dataset & \\tabincell{c}{$k$-NN accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{9}*{\\shortstack{MNIST \\\\(70000, 784, 10)}} & \\multirow{9}{*}{0.948} \n\t\t \n\t\t& 392 (1\/2) & 0.951 & 0.937 & 0.696 \\\\\n\t\t& & 196 (1\/4) & 0.956 & 0.938 & 0.846 \\\\\n\t\t& & 98 (1\/8) & 0.960 & 0.937 & 0.893 \\\\\n\t\t& & 49 (1\/16) & 0.961 & 0.937 & 0.886 \\\\\n\t\t& & 24 (1\/32) & 0.953 & 0.937 & 0.842 \\\\\n\t\t& & 12 (1\/64) & 0.926 & 0.937 & 0.676 \\\\\n\t\t& & 7 (1\/100) & 0.846 & 0.936 & 0.940 \\\\\n\t\t& & 3 & 0.513 & 0.929 & 0.938 \\\\\n\t\t& & 2 & 0.323 & 0.919 & 0.928 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:MNIST KNN ACC}\n\\end{table}\n\n\\autoref{fig:MNIST} shows the performance of PCA-assisted, UMAP-assisted and t-SNE-assisted clustering of the MNIST dataset. The MNIST dataset has 70,000 samples, each with 784 features, and 10 different digit labels. 
For each case, the dataset was reduced to dimension 2 using default parameters, and the plots were colored with the ground truth of the MNIST dataset. In \\autoref{fig:MNIST}, clear clusters can be detected for the MNIST dataset after applying the UMAP algorithm. t-SNE offers reasonable clustering at dimension 2 as well. However, PCA does not provide good clustering. \n\n\n\\begin{table}[ht]\n\t\\centering\n\t\\caption{Accuracy of $K$-means clustering of the MNIST dataset without applying any reduction algorithms, as well as the accuracy of $K$-means assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. The sample size, feature size, and the number of labels of the MNIST dataset are 70000, 784, and 10, respectively.}\n\t\\begin{tabular}{c|cccccc} \\hline\n Dataset & \\tabincell{c}{$K$-means accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{9}*{\\shortstack{MNIST \\\\(70000, 784, 10)}} & \\multirow{9}{*}{0.494}\n\t\t \n\t\t& 392 (1\/2) & 0.487 & 0.665 & 0.122 \\\\\n\t\t& & 196 (1\/4) & 0.492 & 0.667 & 0.113 \\\\\n\t\t& & 98 (1\/8) & 0.498 & 0.673 & 0.113 \\\\\n\t\t& & 49 (1\/16) & 0.496 & 0.718 & 0.113 \\\\\n\t\t& & 24 (1\/32) & 0.501 & 0.697 & 0.114 \\\\\n\t\t& & 12 (1\/64) & 0.489 & 0.682 & 0.138 \\\\\n\t\t& & 7 (1\/100) & 0.464 & 0.677 & 0.740 \\\\\n\t\t& & 3 & 0.365 & 0.727 & 0.537 \\\\\n\t\t& & 2 & 0.300 & 0.712 & 0.593 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab:MNIST KMean ACC}\n\\end{table}\n\n\\autoref{tab:MNIST KNN ACC} shows the accuracy of $k$-NN clustering of the MNIST dataset assisted by PCA, t-SNE, and UMAP with different dimensional reduction ratios. For each algorithm, we use the same settings as for the Coil 20 dataset. Without applying any dimensional reduction algorithm, the accuracy of $k$-NN is 0.948. By applying PCA or UMAP with a reduction ratio greater than 1\/64, the accuracy of PCA\/UMAP-assisted $k$-NN is at the same level as without using any dimensional reduction algorithm. However, in contrast with UMAP and t-SNE, PCA performs poorly when the reduced dimension is 2 or 3. This indicates that PCA may not be suitable for reductions to very low dimensions on datasets with a large sample size. \n\n\n\\autoref{tab:MNIST KMean ACC} describes the accuracy of $K$-means clustering of the MNIST dataset assisted by PCA, UMAP, and t-SNE with different dimensional reduction ratios. By applying PCA, the accuracy of $K$-means stays around 0.49 for most reduction ratios and drops to 0.300 at dimension 2. \nThe t-SNE performance is quite unstable, ranging from very poor (0.113) to the best (0.740), and then down to a relatively low value of 0.593 at dimension 2. In contrast, we can see a stable and improved accuracy from using UMAP at various reduction ratios, indicating that the reduced features generated by UMAP can better represent the clustering properties of the MNIST dataset compared to PCA and t-SNE. \n\nAs observed earlier, the present UMAP-assisted (and, at certain ratios, t-SNE-assisted) $K$-means clustering methods also significantly outperform the original $K$-means clustering for this dataset. \n\n\n\\subsubsection{Jaccard distanced-based MNIST}\n\n\\begin{figure}[ht]\n\t\\centering\n\n\t\\includegraphics[width = \\textwidth]{BinaryMNIST_compressed.pdf}\n\t\\caption{Comparison of different dimensional reduction algorithms on the Jaccard distanced-based MNIST dataset. There are a total of 10 different labels in the Jaccard distanced-based MNIST dataset, and we use the ground truth label to color each data point. 
\n\n\\subsubsection{Jaccard distanced-based MNIST}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{BinaryMNIST_compressed.pdf}\n\t\\caption{Comparison of different dimensional reduction algorithms on the Jaccard distanced-based MNIST dataset. There are 10 different labels in the Jaccard distanced-based MNIST dataset, and we use the ground truth labels to color the data points. (a) Feature size is reduced to dimension 2 by PCA. (b) Feature size is reduced to dimension 2 by t-SNE. (c) Feature size is reduced to dimension 2 by UMAP. }\n\t\\label{fig:Jaccard distanced-based MNIST}\n\\end{figure}\n\n\nOur last validation dataset is the Jaccard distanced-based MNIST. This dataset serves as a test of the Jaccard distance-based data representation. \n\\autoref{fig:Jaccard distanced-based MNIST} shows the performance of PCA-assisted, UMAP-assisted, and t-SNE-assisted clustering of the Jaccard distanced-based MNIST dataset. The dataset was reduced to dimension 2 using default parameters for visualization, and the plots were colored with the ground truth labels of the Jaccard distanced-based MNIST dataset. From \\autoref{fig:Jaccard distanced-based MNIST}, we can see that UMAP provides the clearest clusters compared to PCA and t-SNE when the dimension is reduced to 2. The performance of t-SNE is reasonable, while PCA does not give a good clustering. \n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Accuracy of $k$-NN of the Jaccard distanced-based MNIST dataset without applying any reduction algorithms, as well as the accuracy of $k$-NN assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. The sample size, feature size, and the number of labels of the Jaccard distanced-based MNIST dataset are 70000, 70000, and 10, respectively.}\n\t\\begin{tabular}{c|cccccc}\\hline \n Dataset & \\tabincell{c}{$k$-NN accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{13}*{\\shortstack{Jaccard distanced-based MNIST \\\\(70000, 70000, 10)}} & \\multirow{13}{*}{0.958} \n\t\t \n\t\t& 7000 (1\/10) & 0.958 & 0.958 & NA \\\\\n\t\t& & 3500 (1\/20) & 0.958 & 0.966 & NA \\\\\n\t\t& & 1750 (1\/40) & 0.958 & 0.967 & NA \\\\\n\t\t& & 875 (1\/80) & 0.958 & 0.967 & NA \\\\ \n\t\t& & 437 (1\/160) & 0.958 & 0.968 & 0.718 \\\\\n\t\t& & 218 (1\/320) & 0.958 & 0.968 & 0.701 \\\\\n\t\t& & 109 (1\/640) & 0.958 & 0.968 & 0.873 \\\\\n\t\t& & 70 (1\/1000) & 0.958 & 0.968 & 0.915 \\\\\n\t\t& & 35 (1\/2000) & 0.956 & 0.968 & 0.872 \\\\\n\t\t& & 17 (1\/5000) & 0.938 & 0.968 & 0.916 \\\\\n\t\t& & 7 (1\/10000) & 0.867 & 0.967 & 0.942 \\\\ \n\t\t& & 3 & 0.487 & 0.965 & 0.939 \\\\ \n\t & & 2 & 0.313 & 0.960 & 0.924 \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:Jaccard distanced-based MNIST KNN ACC}\n\\end{table}\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Accuracy of $K$-means clustering of the Jaccard distanced-based MNIST dataset without applying any reduction algorithms, as well as the accuracy of $K$-means assisted by PCA, UMAP and t-SNE with different dimensional reduction ratios. 
The sample size, feature size, and the number of labels of the Jaccard distanced-based MNIST dataset are 70000, 70000, and 10, respectively.}\n\t\\begin{tabular}{c|cccccc} \\hline\n Dataset & \\tabincell{c}{$K$-means accuracy \\\\ w\/o reduction} & \\tabincell{c}{Reduced\\\\ dimension} & \\tabincell{c}{PCA\\\\accuracy} & \\tabincell{c}{UMAP\\\\accuracy} & \\tabincell{c}{t-SNE\\\\accuracy} \\\\\\hline\n\t\t\\multirow{13}*{\\shortstack{Jaccard distanced-based MNIST \\\\(70000, 70000, 10)}} & \\multirow{13}{*}{0.555}\n\t\t \n\t\t& 7000 (1\/10) & 0.436 & 0.329 & NA \\\\\n\t\t& & 3500 (1\/20) & 0.436 & 0.693 & NA \\\\\n\t\t& & 1750 (1\/40) & 0.436 & 0.792 & NA \\\\\n\t\t& & 875 (1\/80) & 0.435 & 0.793 & NA \\\\\n\t\t& & 437 (1\/160) & 0.435 & 0.793 & 0.114 \\\\\n\t\t& & 218 (1\/320) & 0.435 & 0.793 & 0.156 \\\\\n\t\t& & 109 (1\/640) & 0.435 & 0.794 & 0.114 \\\\\n\t\t& & 70 (1\/1000) & 0.436 & 0.793 & 0.113 \\\\\n\t\t& & 35 (1\/2000) & 0.435 & 0.794 & 0.116 \\\\\n\t\t& & 17 (1\/5000) & 0.436 & 0.793 & 0.113 \\\\\n\t\t& & 7 (1\/10000) & 0.431 & 0.793 & 0.737 \\\\\n\t\t& & 3 & 0.364 & 0.798 & 0.635 \\\\\n\t\t& & 2 & 0.261 & 0.791 & 0.635 \\\\\\hline\n\t\\end{tabular}\n\t\\label{tab:Jaccard distanced-based MNIST KMean ACC}\n\\end{table}\n\n\\autoref{tab:Jaccard distanced-based MNIST KNN ACC} shows the accuracy of $k$-NN classification of the Jaccard distanced-based MNIST dataset assisted by PCA, t-SNE, and UMAP with different dimensional reduction ratios. For each algorithm, we use the same settings as for the Coil 20 dataset. Notably, the $k$-NN accuracy for the data without applying any dimensional reduction algorithm is 0.958, which is at the same level as the PCA algorithm with a reduction ratio greater than 1\/5000. Moreover, we find that UMAP performs well compared to PCA and t-SNE, indicating that after applying UMAP, the reduced features still preserve most of the valuable information of the Jaccard distanced-based MNIST dataset. The stability and consistency of UMAP at various reduction ratios are its most important features. \n\n\\autoref{tab:Jaccard distanced-based MNIST KMean ACC} describes the accuracy of $K$-means clustering of the Jaccard distanced-based MNIST dataset assisted by PCA, UMAP, and t-SNE with different dimensional reduction ratios. For consistency, we use the same parameter settings as for $k$-NN. Similar to the MNIST dataset, the accuracy of $K$-means clustering assisted by UMAP again has the best performance. When the reduced dimension is 3, UMAP results in the highest $K$-means accuracy of 0.798. Noticeably, although PCA performs well on $k$-NN accuracy, it has the lowest $K$-means accuracy, indicating that PCA is not a suitable dimensional reduction algorithm, especially for datasets with a large number of samples. Note that the t-SNE accuracies at the four largest reduced dimensions are not available due to the extremely long running time. \n\n\n\nIn a nutshell, PCA, UMAP, and t-SNE can all perform well for $k$-NN. However, for the Coil 20 dataset, UMAP performs slightly worse, whereas t-SNE performs well, which may be caused by the limited sample size: UMAP needs a sufficiently large dataset to learn a reliable embedding, and the Coil 20 dataset has 20 labels with only 72 samples each, which may not be enough to train UMAP properly. However, even in this case, UMAP's performance is still very stable at various reduction ratios, and it remains the best method in terms of reliability, which is a major advantage of UMAP.
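\n\nFor reference, the $K$-means ``accuracy\" reported throughout this section requires mapping arbitrary cluster indices to ground-truth labels before scoring. The matching convention sketched below (the Hungarian algorithm on the cluster--label contingency table) is a common choice and our assumption, not a procedure spelled out in the text.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom sklearn.cluster import KMeans\n\ndef kmeans_accuracy(X, y, n_clusters):\n    # Best-match accuracy of K-means labels vs. ground truth.\n    pred = KMeans(n_clusters=n_clusters,\n                  random_state=0).fit_predict(X)\n    classes = np.unique(y)\n    # Contingency table: rows = clusters, cols = true labels.\n    w = np.zeros((n_clusters, classes.size), dtype=int)\n    for i in range(n_clusters):\n        for j, c in enumerate(classes):\n            w[i, j] = np.sum((pred == i) & (y == c))\n    # Maximizing matches = minimizing -w (Hungarian algorithm).\n    rows, cols = linear_sum_assignment(-w)\n    return w[rows, cols].sum() \/ y.size\n\\end{verbatim}\n\n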
Another strength of UMAP comes from its dimension reduction for $K$-means clustering: in most cases, UMAP improves the $K$-means clustering accuracy, especially for the Jaccard distanced-based MNIST dataset. Furthermore, UMAP can generate a very clear and elegant visualization of clusters at a reduced dimension as low as 2. Additionally, UMAP performed better than PCA and t-SNE on the larger datasets (MNIST and Jaccard distanced-based MNIST). Especially for the Jaccard distanced-based MNIST data, where the Jaccard distance was used as the metric, UMAP performed best, which indicates the merit of using UMAP for Jaccard distance-based datasets, such as COVID-19 SNP datasets. Furthermore, the accuracies of both $k$-NN classification and $K$-means clustering are improved on the Jaccard distance-based MNIST dataset compared to the original MNIST dataset, which provides convincing evidence that the Jaccard distance representation helps improve the clustering performance on the SARS-CoV-2 mutation dataset in the following sections. \n\n\n\\subsection{Efficiency comparison}\n \nIt is important to understand the computational time behavior of the various methods. To this end, we compare the computational time of the three dimension-reduction techniques. \\autoref{fig:time} depicts the computational time of the three methods for the four datasets under various reduction ratios. The green, orange, and blue lines represent the computational time of t-SNE, UMAP, and PCA, respectively. Some points in the green line of \\autoref{fig:time}(d) are missing due to the extremely long running time. PCA performed best in most cases, except for the Coil 20 dataset, where UMAP had a comparable computational time. This behavior is expected because PCA is a linear transformation, and its time should scale linearly with the number of components in the lower dimensional space. UMAP and t-SNE were slower than PCA, but it is evident from the MNIST and Jaccard distanced-based MNIST datasets that UMAP scales better with the increase in the number of samples. \nNote that for the Jaccard distanced-based MNIST dataset, the highest reduced dimensions were not computed for t-SNE because the computational time was too long. For the Facebook Network, UMAP outperforms t-SNE at lower dimensions; however, at higher dimensions, t-SNE computes faster. Nonetheless, our baseline test (\\autoref{tab:Facebook KNN ACC}) shows that t-SNE does not perform well there, indicating instability; a faster computation time may reflect premature convergence, which leads to a poor embedding. \n \n\n\\begin{figure}[ht]\n\t\\centering\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{coil_20_time.pdf}\n\t\t\\subcaption{Coil 20 time}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{facebook_large_time.pdf}\n\t\t\\subcaption{Facebook Network time}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{MINST_time.pdf}\n\t\t\\subcaption{MNIST time}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.49\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width = \\textwidth]{MINST_b_time.pdf}\n\t\t\\subcaption{Jaccard distanced-based MNIST time}\n\t\\end{subfigure}\n\t\\caption{Computational time at each reduction ratio. The green, orange, and blue lines represent the computational time of t-SNE, UMAP, and PCA, respectively. Not surprisingly, PCA performs the best in the majority of cases, except for the Coil 20 dataset. UMAP and t-SNE perform worse than PCA, but UMAP scales better when there are more samples, as evident from the MNIST and Jaccard distanced-based MNIST datasets. Note that for the Jaccard distanced-based MNIST dataset, the highest dimensions were not computed for t-SNE because the computational time was too long.}\n\t\\label{fig:time}\n\\end{figure}
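\n\nThe trends in \\autoref{fig:time} can be approximated with a simple timing harness of the following form, again reusing the hypothetical setup of the previous sketches; absolute times depend heavily on hardware.\n\n\\begin{verbatim}\nimport time\nimport umap\nfrom sklearn.decomposition import PCA\nfrom sklearn.manifold import TSNE\n\ndef timed_fit(reducer, X):\n    # Wall-clock time of a single fit_transform call.\n    t0 = time.perf_counter()\n    reducer.fit_transform(X)\n    return time.perf_counter() - t0\n\nfor d in (2, 3, 7, 24, 49, 98, 196, 392):\n    t_pca = timed_fit(PCA(n_components=d), X)\n    t_umap = timed_fit(umap.UMAP(n_components=d), X)\n    # t-SNE must fall back to the exact method above three\n    # components, which can be prohibitively slow.\n    method = 'barnes_hut' if d <= 3 else 'exact'\n    t_tsne = timed_fit(TSNE(n_components=d, method=method), X)\n    print(d, t_pca, t_umap, t_tsne)\n\\end{verbatim}\n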
\n\n\n\\section{SARS-CoV-2 mutation clustering}\n\n\n\\subsection{World SARS-CoV-2 mutation clustering}\n We gather data submitted to GISAID up to October 30, 2020, for a total of 89627 samples. We first extract the SNP information by multiple sequence alignment, which leads to 23763 unique SNPs. Next, we calculate the pairwise Jaccard distances of our dataset in order to generate the Jaccard distance-based features. Here, the number of rows is the number of samples (89627), and the number of columns is the feature size (89627). As we mentioned in Section \\ref{sec:Jaccard}, the Jaccard distance-based feature matrix is a square matrix. However, due to the large number of samples and features, applying $K$-means clustering directly on features of size 89627$\\times$89627 is very time-consuming. Considering that UMAP outperforms the other two dimensional reduction algorithms (PCA and t-SNE) on the Jaccard distance-based MNIST dataset, we employ UMAP to reduce our original features from 89627$\\times$89627 to 89627$\\times$200. Note that UMAP is a reliable and stable algorithm that performs consistently at various reduction ratios; the choice of 200 is therefore not critical, and a different reduced dimension would generate similar results (a sketch of the full pipeline is given at the end of this subsection). \n\nWith the reduced features of size 89627$\\times$200, we split our SARS-CoV-2 dataset into clusters by applying the $K$-means clustering method. After comparing the within-cluster sum of squares (WCSS) for different numbers of clusters, we find, based on the elbow method, that 6 clusters form within the SARS-CoV-2 population. \\autoref{tab: 1030 mut} shows the top 25 single mutations of each cluster. In order to understand their relationship, we also analyzed the co-mutations occurring in each cluster (\\autoref{tab: 1030 co}). From \\autoref{tab: 1030 mut} and \\autoref{tab: 1030 co} we see the following:\n\n\n\n\\begin{table}[ht]\n\t\\centering\n\t\\scriptsize\n\t\\setlength\\tabcolsep{1pt}\n\t\\caption{The frequency and occurrence percentage of SARS-CoV-2 co-mutations from each cluster in the world.}\n\t\\begin{tabular}{clcc}\\hline \n\t\tCluster & Co-mutations & Frequency & Occurrence percentage \\\\\\hline\n\t\tCluster 1 & [241, 1163, 3037, 7540, 14408, 16647, 18555, 22992, 23401, 23403, 28881, 28882, 28883] & 776 & 0.463 \\\\\n\t\tCluster 2 & [241, 3037, 14408, 23403] & 8640 & 0.925\\\\\n\t\tCluster 3 & [241, 1059, 3037, 14408, 23403, 25563] & 8878 & 0.662 \\\\\n\t\tCluster 4 & [241, 3037, 14408, 23403, 28881, 28882, 28883] & 14913 & 0.829 \\\\\n\t\tCluster 5 & [241, 3037, 14408, 23403] & 17412 & 0.969 \\\\\n\t\tCluster 6 & [241, 1163, 3037, 7540, 14408, 16647, 18555, 22992, 23401, 23403, 28881, 28882, 28883] & 1352 & 0.771 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab: 1030 co}\n\\end{table}\n\n\n\\begin{itemize}\n\t\\item Though Clusters 1 and 6 seem similar from the top 25 single mutations, the co-mutations tell a different story. The same co-mutation set has a higher frequency in Cluster 6, indicating that this co-mutation has a higher number of descendants there. 
\n\t\\item Clusters 2 and 5 both show the co-mutation [241, 3037, 14408, 23403] with high frequency, but Cluster 5 has a clear co-mutation descendant with high frequency.\n\t\\item Cluster 3 has a unique combination of mutations that is popular only within this cluster.\n\\end{itemize}\n\n\n\\autoref{tab: 1030Country} shows the cluster distributions of samples from 25 countries. Here, we use the ISO 3166-1 alpha-2 codes as the country codes. The listed countries are the United Kingdom (UK), the United States (US), Australia (AU), India (IN), Switzerland (CH), Netherlands (NL), Canada (CA), France (FR), Belgium (BE), Singapore (SG), Spain (ES), Russia (RU), Portugal (PT), Denmark (DK), Sweden (SE), Austria (AT), Japan (JP), South Africa (ZA), Iceland (IS), Brazil (BR), Saudi Arabia (SA), Norway (NO), China (CN), Italy (IT), and Korea (KR). From \\autoref{tab: 1030Country}, we can see the following:\n\\begin{itemize}\n\t\\item SNP profiles from the UK are found predominantly in Clusters 5 and 4.\n\t\\item The SNP profiles of Clusters 1 and 6 are predominantly found in AU.\n\t\\item SNP profiles from the US are found mostly in Clusters 3 and 5.\n\t\\item Most countries' SNP profiles are found in Clusters 2--5, with some clusters having slightly higher counts, though the differences are not as significant as for the UK, US, and AU.\n\\end{itemize}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{worldMap.png}\n\t\\caption{Cluster distribution of the global SARS-CoV-2 mutation dataset. Using Highchart, the world map was colored according to the dominant cluster. Clusters 1, 2, 3, 4, 5, and 6 were colored with light blue, blue, green, red, purple, and yellow, respectively. For example, the United States has SNP profiles from all clusters, but Cluster 5 (purple) is the dominant type in the US. Only countries with more than 25 sequenced genomes available on GISAID were considered; countries with fewer than 25 samples are colored gray.}\n\\end{figure}\n\n\n Notably, in \\autoref{tab: 1030 co}, Cluster 2 and Cluster 5 have the same co-mutations with a relatively large frequency, while Cluster 1 and Cluster 6 share the same co-mutations with a relatively low frequency, which indicates that Clusters 2 and 5 share the same ``root\" of a large size, while Clusters 1 and 6 share the same ``root\" of a smaller size in the 200-dimensional (200D) space. However, we cannot visualize the distribution of our reduced dataset in the 200D space. Therefore, benefiting from the stable and reliable performance of UMAP at various reduction ratios, we reduce the dimension of our original dataset to 2, which enables us to observe the distribution of the dataset in the two-dimensional (2D) space. \\autoref{fig:1030_World_UMAP} visualizes the distribution of our dataset with 6 distinct clusters with 2D UMAP. It can be seen that two clusters (i.e., Cluster 2' and Cluster 3') share a small ``root\" located in the middle of the figure, and Cluster 4' and Cluster 5' share another large ``root\" that is also located in the middle of the figure.\n \n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{1030_UMAP_World.pdf}\n \\caption{ 2D UMAP visualization of the world SARS-CoV-2 mutation dataset with 6 distinct clusters. Red, orange, yellow, light blue, blue, and dark blue represent Clusters 1', 2', 3', 4', 5', and 6', respectively.} \n \\label{fig:1030_World_UMAP}\n\\end{figure}
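\n\nAs noted above, the whole world-clustering pipeline of this subsection can be summarized in a few lines. The snippet below is an illustration rather than the production script: X_snp is a hypothetical binary matrix of shape (89627, 23763), with X_snp[i, j] = 1 if sample i carries SNP j, as produced by the multiple sequence alignment step, and the pairwise distance computation at this scale is expensive in practice.\n\n\\begin{verbatim}\nimport umap\nfrom scipy.spatial.distance import pdist, squareform\nfrom sklearn.cluster import KMeans\n\n# Pairwise Jaccard distances give the square 89627 x 89627\n# feature matrix (row i is sample i's distance profile).\nD = squareform(pdist(X_snp.astype(bool), metric='jaccard'))\n\n# UMAP reduces the features from 89627 x 89627 to 89627 x 200.\nemb = umap.UMAP(n_components=200).fit_transform(D)\n\n# Elbow method: inspect the WCSS (KMeans inertia) versus k.\nwcss = [KMeans(n_clusters=k, random_state=0).fit(emb).inertia_\n        for k in range(1, 11)]\n\n# Final clustering with the 6 clusters chosen from the elbow.\nlabels = KMeans(n_clusters=6, random_state=0).fit_predict(emb)\n\n# A separate 2-component UMAP gives the 2D visualization.\nemb2d = umap.UMAP(n_components=2).fit_transform(D)\n\\end{verbatim}\n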
\n\n\n\\subsection{United States SARS-CoV-2 mutation clustering}\nIn addition to analyzing the clustering in the world, SNP profiles of SARS-CoV-2 from the United States (US) were considered. In this section, the US dataset has 10279 unique single mutations and 22390 samples. Therefore, the dimension of the Jaccard distance-based dataset is 22390$\\times$22390. After applying UMAP, we reduce the dimension of the original dataset to 22390$\\times$200. Following the same $K$-means clustering process as for the world dataset, we find that 6 predominant clusters form in the United States. \\autoref{fig: USMap} shows the US map with the cluster statistics. Here, Highchart was used to generate the plot with pie charts, and each state was colored based on its dominant cluster.\n\n\\autoref{tab: 1030US mut} shows the top 25 mutations from each cluster in the United States. The states with more than 50 samples are listed. \\autoref{tab: 1030US co} shows the commonly occurring co-mutations, and we can observe the following:\n\n\\begin{itemize}\n\t\\item Cluster F has a high frequency of the co-mutation [241, 3037, 14408, 23403, 28881, 28882, 28883], which matches the common co-mutation of Cluster 4 [241, 3037, 14408, 23403, 28881, 28882, 28883] from \\autoref{tab: 1030US mut}.\n\t\\item Clusters A, B, C, and D have the frequent co-mutation [241, 1059, 3037, 14408, 23403, 25563], which is also a frequent co-mutation of Cluster 3.\n\\end{itemize}\n\n\\begin{table}[ht]\n\t\\centering\n\t\\caption{The frequency and occurrence percentage of SARS-CoV-2 co-mutations from each cluster in the US.}\n\t\\begin{tabular}{c l c c }\\hline \n\t\tCluster & Co-mutations & Frequency & Occurrence percentage \\\\\\hline\n\t\tCluster A & [241, 1059, 3037, 14408, 23403, 25563] & 3116 & 0.465 \\\\\n\t\tCluster B & [241, 1059, 3037, 14408, 23403, 25563] & 5763 & 0.605\\\\\n\t\tCluster C & [241, 1059, 3037, 14408, 23403, 25563] & 8878 & 0.662 \\\\\n\t\tCluster D & [241, 1059, 3037, 14408, 23403, 25563, 27964] & 1225 & 0.864 \\\\\n\t\tCluster E & [8782, 17747, 17858, 18060, 28144] & 1109 & 0.743\\\\\n\t\tCluster F & [241, 3037, 14408, 23403, 28881, 28882, 28883] & 2575 & 0.932 \\\\ \\hline\n\t\\end{tabular}\n\t\\label{tab: 1030US co}\n\\end{table}\n\n\\begin{figure}[ht]\n\t\\centering \n\t\\includegraphics[width = \\textwidth]{USMap.png}\n\t\\caption{Cluster distribution of the United States SARS-CoV-2 mutation dataset. Using Highchart, the US map was colored according to the dominant cluster. Clusters A, B, C, D, E, and F were colored with light blue, blue, green, red, purple, and yellow, respectively. SNP profiles from all clusters are found in the US, with Cluster E (purple) being the dominant type overall. Only states with more than 25 sequenced genomes available on GISAID were considered in the plot. }\n\t\\label{fig: USMap}\n\\end{figure}\n\n\n Notably, in \\autoref{tab: 1030US co}, Clusters A, B, and C have the same high-frequency co-mutations, indicating that these three clusters may share the same ``root\" in the 200D space. However, it is impossible to show the distribution of each cluster in the 200D space. 
Considering the stability and reliability of UMAP at various reduction ratios, we apply UMAP to the original US dataset with reduced dimension 2, aiming to observe the distribution of the dataset in the 2D space. \\autoref{fig:1030_US_UMAP} illustrates the 2D visualization of the US dataset with 6 distinct clusters. We can see that three clusters (Clusters A', B', and F') share the same ``root\" located in the middle of the figure, while the other three clusters (Clusters C', D', and E') do not. This is consistent with our deduction about why Clusters A, B, and C have the same high-frequency co-mutations in \\autoref{tab: 1030US co}.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{1030_UMAP_US.pdf}\n \\caption{ The 2D UMAP visualization of the US SARS-CoV-2 mutation dataset with 6 distinct clusters. Red, orange, yellow, light blue, blue, and dark blue represent Clusters A', B', C', D', E', and F', respectively.} \n \\label{fig:1030_US_UMAP}\n\\end{figure}\n\n\\section{Discussion}\n\n In this section, we compare our past results \\cite{wang2020decoding} with our new method to gain a different perspective on clustering the SNP profiles of COVID-19. In our previous work, a total of 8309 unique single mutations were detected in 15140 SARS-CoV-2 isolates. Here, we also calculate the pairwise distances among the 15140 SNP profiles and set the number of clusters to six. \\autoref{tab:world0601} shows the cluster distribution of samples from the 15 countries \\cite{wang2020decoding}. The listed countries are the United States (US), Canada (CA), Australia (AU), United Kingdom (UK), Germany (DE), France (FR), Italy (IT), Russia (RU), China (CN), Japan (JP), Korea (KR), India (IN), Spain (ES), Saudi Arabia (SA), and Turkey (TR), and we use Clusters I, II, III, IV, V, and VI to represent the six clusters obtained without applying any dimensional reduction algorithm. \\autoref{tab:world0601pca} lists the cluster distribution of samples from the same 15 countries, where we use I$_p$, II$_p$, III$_p$, IV$_p$, V$_p$, and VI$_p$ to represent the six clusters obtained with PCA at a reduction ratio of 1\/160. \\autoref{tab:world0601umap} lists the cluster distribution of samples from the same 15 countries, where we use I$_u$, II$_u$, III$_u$, IV$_u$, V$_u$, and VI$_u$ to represent the six clusters obtained with UMAP at a reduction ratio of 1\/160. Noticeably, the SNP profiles are concentrated in Cluster I$_u$, whereas in the non-reduced version, the samples are more spread out. This may be caused by the large number of features, which makes the computed distances between the centroids and the data points too similar and leads to samples being placed in incorrect clusters.\n \n\nNot surprisingly, PCA and the original method of \\cite{wang2020decoding} give nearly identical results. It has been shown that PCA provides the continuous solution of the cluster indicators in the $K$-means clustering method \\cite{wang2020decoding}. UMAP, on the other hand, shows a slightly different result. In the PCA method, the distribution is more spread out, and the top occurrence for each country is higher for UMAP. Moreover, we see that there are more samples in Cluster I$_u$ for UMAP, which may indicate that the mutations in Cluster I$_u$ form the main strain.\n\n\n Moreover, \\autoref{fig:0601_PCA_UMAP} illustrates the 2D visualizations of the US dataset up to June 01, 2020, with 6 distinct clusters by applying two different dimensional reduction algorithms. 
We can see that the PCA-assisted $K$-means clustering yields a rather disordered distribution with poor cluster separation, while the UMAP-assisted algorithm forms clearer and better-separated clusters, which is consistent with our previous analysis in Section \\ref{sec:Validation}.\n \n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 1\\textwidth]{Discussion.pdf}\n \\caption{ 2D visualizations of the US SARS-CoV-2 mutation dataset up to June 01, 2020 with 6 distinct clusters by applying two different dimensional reduction algorithms. (a) 2D PCA visualization. Red, orange, yellow, light blue, blue, and dark blue represent Clusters I$_p^{\\prime}$, II$_p^{\\prime}$, III$_p^{\\prime}$, IV$_p^{\\prime}$, V$_p^{\\prime}$, and VI$_p^{\\prime}$, respectively. (b) 2D UMAP visualization. Red, orange, yellow, light blue, blue, and dark blue represent Clusters I$_u^{\\prime}$, II$_u^{\\prime}$, III$_u^{\\prime}$, IV$_u^{\\prime}$, V$_u^{\\prime}$, and VI$_u^{\\prime}$, respectively.} \n \\label{fig:0601_PCA_UMAP}\n\\end{figure}\n\n\n\\section{Conclusion} \nThe rapid global spread of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been accompanied by genetic mutations driven by evolution and adaptation. Up to October 30, 2020, 89627 complete SARS-CoV-2 sequences have been collected, and a total of 23763 unique SNPs have been detected. Our previous work traced the COVID-19 transmission pathways and analyzed the distribution of the subtypes of SARS-CoV-2 across the world based on 15140 complete SARS-CoV-2 sequences. The $K$-means clustering separated the sequences into six distinct clusters. However, considering the tremendous increase in the number of available SARS-CoV-2 sequences, an efficient and reliable dimensional reduction method is urgently required. Therefore, the objective of the present work is to identify the best suited dimension reduction algorithm based on performance and effectiveness. Here, a linear algorithm, PCA, and two non-linear algorithms, t-distributed stochastic neighbor embedding (t-SNE) and uniform manifold approximation and projection (UMAP), have been discussed. To evaluate the performance of dimension reduction techniques in clustering, which is an unsupervised problem, we first cast labeled classification problems into clustering problems. Next, by setting different reduction ratios, we test the effectiveness and accuracy of PCA, t-SNE, and UMAP for $k$-NN and $K$-means using four benchmark datasets. The results show that, overall, UMAP outperforms the other two algorithms. \nA major strength of UMAP is that UMAP-assisted $k$-NN classification and UMAP-assisted $K$-means clustering deliver consistent accuracy at various dimension reduction ratios, which shows that UMAP is a stable and reliable dimension reduction algorithm. Moreover, compared to the $K$-means clustering accuracy without any dimensional reduction, UMAP-assisted $K$-means clustering improves the accuracy in most cases. Furthermore, when the dimension is reduced to two, the UMAP clustering visualization is clear and elegant. Additionally, UMAP is a relatively efficient algorithm compared to t-SNE. Although PCA is a faster algorithm, its major limitation is its poor accuracy. 
Notably, UMAP performs better than PCA and t-SNE on datasets with a large number of samples, indicating that it is the best suited dimensional reduction algorithm for our SARS-CoV-2 mutation dataset. Moreover, we apply UMAP-assisted $K$-means clustering to the world SARS-CoV-2 mutation dataset (up to October 30, 2020), which displays six distinct clusters. Correspondingly, the same approach is also applied to the United States SARS-CoV-2 mutation dataset (up to October 30, 2020), resulting in six distinct clusters as well. Furthermore, we provide a new perspective by utilizing UMAP-assisted $K$-means clustering to analyze our previous SARS-CoV-2 mutation datasets, and the 2D visualization of UMAP-assisted $K$-means clustering of our previous world SARS-CoV-2 mutation dataset (up to June 01, 2020) forms clearer clusters than the PCA-assisted $K$-means clustering. Finally, one of our four benchmark datasets was generated by the Jaccard distance representation, which improves both the $k$-NN classification and $K$-means clustering accuracies compared to the original MNIST dataset. \n\n\\section*{Acknowledgment}\nThis work was supported in part by NIH grant GM126189, NSF Grants DMS-1721024, DMS-1761320, and IIS1900473, Michigan Economic Development Corporation, Bristol-Myers Squibb, and Pfizer. The authors thank The IBM TJ Watson Research Center, The COVID-19 High Performance Computing Consortium, and NVIDIA for computational assistance.\n\n\\clearpage\n\\bibliographystyle{abbrv}\n