diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzigce" "b/data_all_eng_slimpj/shuffled/split2/finalzzigce" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzigce" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\tCounting stars is a powerful method for probing galactic structure,\nbut until recently it has been limited to stars $I<19$. At fainter magnitudes\ngalaxies vastly outnumber stars. Although galaxies are typically resolved \neven in ground-based images and therefore can usually be distinguished from \nthe point-like stars, some galaxies with steep surface-brightness profiles\navoid detection and pollute the sample. The problem grows rapidly worse at \nlower flux levels since the galaxies become smaller, fainter, and more \nnumerous. Heretofore, intrinsically faint stars could therefore be studied\nonly when they were found nearby. For the faintest stars, the volume probed \nwas so small that measurements of the luminosity function (LF) were both\nhighly uncertain and highly controversial. One result of this is that most\npeople have assumed that the mass function (MF), which is derived from the LF\nusing a mass-luminosity relation, continued with its Salpeter slope\n\\begin{equation} \n{d N\\over d\\log M}\\propto M^\\alpha\\qquad (\\alpha=-1.35,\\rm Salpeter)\n\\label{eq:salp}\n\\end{equation}\nas measured for relatively massive stars. This then led to the assumption\nthat there was a large quantity of stellar matter which was not observed\nbut must ``certainly'' be there if only our instruments were powerful enough\nto see it. Hence, people would routinely quote high mass-to-light\nratios $(M\/L\\sim 10)$ for the luminous components of galaxies, believing that\n``dark matter'' was needed only to account for the rest. In the case of the \nMilky Way disk, at least, we now have the powerful instrument at our disposal,\nbut we do not see the stars. In the case of the bulge, we are able to see\nmuch fainter than before, although we still do not probe directly the region\nof the MF corresponding to the place where the disk MF turns over. \nNevertheless, we must begin to suspect that the disk and bulge MFs are similar\nand that the large mass which is dynamically determined to be associated with \nthe luminous components of galaxies is not in the form of low-luminosity\nstars.\n\n\\section{The Disk Mass and Luminosity Functions}\n\n\tGould, Bahcall, \\& Flynn~\\cite{gbfI} identified 192 M dwarf stars in \n22 fields\nimaged by the Wide Field Camera (WFC2) on HST to an average limiting\nmagnitude of $I=23.7$, about 100 times fainter than the limit typical of \nground-based surveys. We combined these with a brighter sample of 65 M dwarfs\nidentified in 162 fields imaged with the pre-repair Planetary Camera. We \nfound that the LF clearly peaks at about $M_V\\sim 12$ ($M_I\\sim 9$). The\ntransformation from an LF to an MF requires some care because the \nmass-luminosity relation is non-linear. However, using the empirically \nmeasured relation of Henry \\& McCarthy~\\cite{hm}, we found that the MF peaks\nat about $M\\sim 0.6\\,M_\\odot$. The detailed structure of the faint end of \nthe LF remained poorly determined because there were only a total of 23 stars\nwith $M_V>13.5$. However, we have now analyzed an additional 31 WFC2 fields\nwhich contain a total of 24 stars in this faint region.~\\cite{gbfII} \nWe now find a clear\nbreak in the MF at $M\\sim 0.6\\,M_\\odot$. 
In contrast to Eq.~\\ref{eq:salp},\n\\begin{equation}\n\\alpha\\sim -1.2 \\quad (M>0.6 M_\\odot);\\qquad \\alpha\\sim 0.4\\quad\n(M<0.6\\,M_\\odot)\\label{eq:realmf}\n\\end{equation}\nEven after correcting for binaries (to which HST is almost completely \ninsensitive) the slope at the low-mass end is only $\\alpha\\sim 0.1$.\nThere are perhaps hints of a rise in the MF in the very last bin, but the\nstatistics are too poor to resolve this issue.\n\n\\section{Bulge Luminosity Function and Mass Function}\n\n\tLight, Baum, \\& Holtzman~\\cite{lbh} have used the WFC2 to measure\nthe LF of the galactic bulge in Baade's Window to an apparent magnitude \n$V\\sim 26$.\nThis is not as deep as the images used to measure the disk LF, primarily\nbecause the bulge fields are limited by crowding. Moreover, since the bulge \nis 8 kpc away, while the disk stars can be seen as close as 0.5 kpc \n(corresponding to an additional factor of 250 in apparent brightness), the\nbulge LF measurement is cut off about 10 magnitudes (factor 10,000 in \nluminosity) brighter than the disk LF. Even so, this is a factor $\\sim 100$\nimprovement on pre-HST efforts. The results are noteworthy: to the limit\nto which it can be measured, $M_V\\sim 10$, the bulge LF coincides with the\ndisk LF. Since the heavy-element abundance of bulge stars is similar to that\nof stars in the solar neighborhood, the mass-luminosity relation should be similar.\nHence the MFs of the two populations should also be similar. This suggests\nthat perhaps the MFs are also the same at the low mass end. If so, this\nleads to some rather dramatic conclusions.\n\n\tThe dynamically-measured mass of the bulge is \n$\\sim 2\\times 10^{10}\\,M_\\odot$. Han finds that the stars observed by\nLight et al.\\ account for half of this mass, but can account for no more than\n1\/10 of the observed microlensing events.~\\cite{han} \nIf the bulge LF is extended using the disk LF and similarly converted into an \nMF, this would account for 70\\% of the bulge mass, but less than 1\/2 the\nmicrolensing events and essentially none of the short events. Only when\nHan adds in the remaining 30\\% of the mass in brown dwarfs \n($M\\sim 0.08\\,M_\\odot$) can he account for these short events. In brief,\nstar count work on the luminous populations seems to suggest that much of\nthe mass in these components is composed of brown dwarfs or other dark objects\nof similar mass.\n\n\\section{Hubble Deep Field Search For Halo Stars}\n\n\tThe Hubble Deep Field (HDF), with a total of 10 days of integration,\nprovides a unique opportunity to probe for extreme halo objects. \nFlynn, Gould, \\& Bahcall~\\cite{fgb} found that stars could be separated from\ngalaxies to a limiting magnitude $I=26.3$, about 10 times fainter than\ntypical WFC2 fields used to measure the disk LF. Most known populations of\nstars in the Galaxy will not generate counts near this faint limit simply\nbecause to do so they would have to be so far away that they would be outside\nthe Galaxy! Since the faintest magnitudes reached by HDF are essentially free\nof known populations, it can be used to search for objects that are so \nintrinsically faint that they would have escaped notice in earlier studies.\nThe only ``expected'' candidates of this type are the white dwarfs, for which\nHDF gives us the first meaningful limits:\n\\begin{equation} \nf < 0.31 \\times 10^{0.72[(V-I)-1.8]},\\label{eq:wdlimit}\n\\end{equation}\nwhere $f$ is the halo fraction of $0.5\\,M_\\odot$ white dwarfs and $(V-I)$ is\ntheir color. 
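As a rough numerical reading of Eq.~\\ref{eq:wdlimit} (simply evaluating the formula as quoted): at $(V-I)=1.8$ the limit is $f<0.31$, while at $(V-I)=2.0$ and $(V-I)=2.2$ it relaxes to roughly $f<0.43$ and $f<0.60$, respectively. 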
Thus, HDF tells us white dwarfs in the expected color range\nmake up no more than 1\/2 to 1\/3 of the halo. More generally, HDF constrains\nall classes of objects with absolute magnitude $M_I$, mass $M$, and halo \nfraction $f$ by,\n\\begin{equation}\nM_I > 17.2 + {5\\over 3}\\log\\biggl(f{0.08\\,M_\\odot\\over M}\\biggr)\\qquad\n(V-I>1.8),\\label{eq:alllimit}\n\\end{equation}\nwhere I have scaled the mass to the maximum of the brown dwarf regime. This\nlimit is 10 times fainter than the faintest star ever observed and 100 times\nfainter than the faintest halo star ever observed. In brief, a significant\npopulation (but not the whole halo) of white dwarfs is still permitted, but\nordinary halo stars simply do not contribute to the mass of the Galaxy.\n\n\\section{HDF Limits on Intergalactic Stars}\n\n\tIntergalactic stars are not often regarded as candidates for dark \nmatter, but many cosmological scenarios produce stars at very early times\nand these must be distributed approximately as the dark matter. Thus, it\nis of interest to determine their density. HDF can be used to search\nfor K giant stars over a volume of about 70 cubic kiloparsecs outside the\nGalaxy (but inside the Local Group). The density is at least a factor of\n3000 lower than the local density of giant stars and so more than\n300,000 times below the local dark matter density (assuming a locally \nmeasured MF). Of course, the Local Group dark matter density is\nabout 10,000 times lower than the nearby density, so intergalactic\nstars make up less than 1\/30 of the dark matter in the Local Group.\n\n\\section*{Acknowledgments}\nThis work was supported in part by NSF grant AST 9420746.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction:1}\n\nLow-density parity-check (LDPC) codes offer excellent tradeoffs between performance and complexity for error correction in communication systems. Quasi-cyclic (QC) LDPC codes in particular have proved extremely attractive due to their implementation advantages, both in encoding and decoding \\cite{Li_QC_encoders:06, Dai:08, Mansour:07}. Many analyses of QC-LDPC codes have been carried out based on optimization of parameters such as the minimum Hamming distance of the code or the girth of the Tanner graph. However, it has been shown that an excellent first-order measure of performance over the AWGNC is the minimum \\emph{pseudo-weight} of the code \\cite{Wiberg}. So far, few results exist in the literature on the minimum pseudo-weight of QC-LDPC and related codes. \n\nSpectral graph analysis was used in~\\cite{Tanner:01:1}, and more\nrecently, in~\\cite{Vontobel:Koetter:04:1}, to obtain bounds on the\nminimum Hamming weight, and minimum AWGNC pseudo-weight, respectively, of a length-$n$\n$(c,d)$-regular code $\\mathcal{C}$ over the binary field $\\Ftwo$:\n $$d_{\\mathrm{min}}\\geq w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) \\geq n \\frac{2c -\n \\lambda_2}{\\lambda_1 - \\lambda_2}; \\qquad d_{\\mathrm{min}} \\geq n\\frac{2}{d}\n \\frac{2c +d - 2 - \\lambda_2}{\\lambda_1 - \\lambda_2},$$ with\n $\\lambda_1 > \\cdots > \\lambda_s$ being the distinct ordered\n eigenvalues of $\\matr{H}^{\\mathsf{T}} \\matr{H}\\in \\mathbb{R}^{n\\times n}$ (where $\\matr{H}$ is viewed as\n a matrix in $ \\mathbb{R}^{m\\times n}$). These bounds are, for most codes,\n loose. 
However, in particular cases, like the projective geometry\n codes \\cite{Vontobel:Smarandache:05:1, Smarandache:Vontobel:07,Kou:Lin:Fossorier:01:1}, they are attained.\n A current problem with these bounds is that for most LDPC codes, it is not practical to evaluate the eigenvalues $\\lambda_1,\\lambda_2$ due to the size of the matrix $\\matr{H}^{\\mathsf{T}} \\matr{H}$. \n \n In this paper we show how to compute the AWGN pseudo-weight lower bound for quasi-cyclic\n (QC) and related codes by utilizing the $\\mathcal A$-submodule\n structure of quasi-cyclic codes, ${\\cal A}= \\mathbb{R}[X]\/(X^r-1)$ \\cite{Lally:Fitzpatrick:01, Ling:Sole:01, Ling:Sole:03}. \n In particular, we begin by showing how the\n polynomial parity-check matrix that describes a cyclic code can be used\n to compute the required eigenvalues, and then generalize this approach to compute the required eigenvalues for QC codes. \n We also define the class of ``nested circulant'' matrices, and show that these have eigenvalues\n which are given by evaluating a multivariate associated\n polynomial at points whose coordinates are particular roots of\n unity. Finally, we give a necessary \n condition for the pseudo-weight lower bound to be attained when $\\matr{H}$ is circulant\n and show a few classes of cyclic codes satisfying this criterion.\n\n\n\n\n\\section{Basic Notation and Definitions}\n\\label{sec:notation:1}\n\nAll codes in this paper will be binary linear codes of a certain\nlength $n$ specified through a (scalar) parity-check matrix\n$\\matr{H}=(h_{j,i}) \\in \\GF{2}^{m \\times n}$ as the set of all\nvectors $\\vect{c} \\in \\Ftwo^n$ \\ such that $\\matr{H} \\cdot \\vect{c}^\\mathsf{T} =\n\\vect{0}^\\mathsf{T}$, where ${}^\\mathsf{T}$ denotes transposition. The minimum\nHamming distance of a code $\\code{C}$ will be denoted by $d_{\\mathrm{min}}(\\code{C})$. The\nfundamental cone $\\fch{K}{H}$ of $\\matr{H}$ is the set of all vectors\n$\\boldsymbol{\\omega} \\in \\mathbb{R}^n$ that satisfy\n \\begin{alignat}{2}\n \\omega_i\n &\\geq 0 \n \\ \n &&\\text{for all $i \\in \\set{I}(\\matr{H})$} \\; , \n \\label{eq:fund:cone:def:1} \\\\\n \\omega_i\n &\\leq\n \\sum_{i' \\in \\set{I}_j(\\matr{H}) \\setminus i} \\!\\!\n \\omega_{i'}\n \\ \n &&\\text{for all $j \\in \\set{J}(\\matr{H})$, \\ \n $i \\in \\set{I}_j(\\matr{H})$} \\; ,\n \\label{eq:fund:cone:def:2}\n \\end{alignat}\n where $\\set{J}(\\matr{H})$ and $\\set{I}(\\matr{H})$ denote the sets of row\n and column indices of $\\matr{H}$ respectively, and $\\set{I}_j(\\matr{H}) \\triangleq \\{ i \\in\n \\set{I} \\ | \\ h_{j,i} = 1 \\}$ for each $j \\in \\set{J}(\\matr{H})$. A vector $\\boldsymbol{\\omega} \\in \\fch{K}{H}$ is called a \\emph{pseudo-codeword}. The AWGNC \\emph{pseudo-weight} of a\n pseudo-codeword $\\boldsymbol{\\omega}$ is defined to be $w_{\\mathrm{p}}(\\boldsymbol{\\omega}) =\n w_{\\mathrm{p}}^{\\mathrm{AWGNC}}(\\boldsymbol{\\omega}) \\triangleq \\lVert \\boldsymbol{\\omega} \\rVert_1^2 \/ \\lVert \\boldsymbol{\\omega}\n \\rVert_2^2$. (For a motivation of these definitions,\n see~\\cite{Vontobel:Koetter:05:1:subm,\n Koetter:Li:Vontobel:Walker:07:1}). The minimum of the AWGNC\n pseudo-weight over all nonzero pseudo-codewords is called the minimum\n AWGNC pseudo-weight and is denoted by $w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H})$.\n\nFor any integer $s \\ge 1$, let $R_s = \\{ \\exp(\\imath 2 \\pi r \/ s) \\; : \\; 0 \\le r < s \\}$\ndenote the set of complex $s$-th roots of unity, and let $R_s^{-} =\nR_s\\backslash \\{ 1 \\}$. 
The symbol ${}^*$ denotes complex conjugation. Also, an $r \\times r$ circulant matrix $\\matr{B}$, whose entries are square $L \\times L$ matrices, will be called an \\emph{$L$-block circulant} matrix; we shall denote this by\n\\[\n\\matr{B} = \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\n\\] \nwhere the (square $L \\times L$ matrix) entries in the first column of $\\matr{B}$ are $\\matr{b}_0$, $\\matr{b}_1$, ... , $\\matr{b}_{r-1}$ respectively. \n\n Finally, $\\mathbb{Z}$, $\\mathbb{R}$, ${\\mathbb C}$, and $\\Ftwo$ will be the ring of integers,\n the field of real numbers, the complex field, and the finite field\n of size $2$, respectively. For a positive integer $L$, $[L]$ will\n denote the set of nonnegative integers smaller than $L$:\n $[L]=\\{0,1,\\ldots, L-1\\}$.\n\n \\section{Computing the Eigenvalues of $\\matr{H}^{\\mathsf{T}}\\matr{H}$ \nfor a QC Code}\n\\label{sec:eigenvalues}\n\nIn this section we will show that the polynomial representation of a\nQC code will prove very helpful in computing the eigenvalues of the\nlarge matrix $\\matr{H}^{\\mathsf{T}}\\matr{H}$, easing in this way the computation\nof the lower bound\n\\begin{align}\n\\label{lowerbound} d_{\\mathrm{min}}&\\geqw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) \\geq n \\frac{2c -\n \\lambda_2}{\\lambda_1 - \\lambda_2}.\n\\end{align}\nThis section is organized in three subsections. In Sec.\n\\ref{subsec:circulantmatrix} and \\ref{subsec:QCcodes} we provide\nsome background on circulant matrices and QC codes. Section\n\\ref{subsec:eigenvaluesQC} will contain the main result on the\neigenvalues of $\\matr{H}^{\\mathsf{T}}\\matr{H}$, where $\\matr{H}$ is the\nparity-check matrix of a QC code.\n\\subsection{ Eigenvalues of a Circulant Matrix} \n\\label{subsec:circulantmatrix}\n\nThe eigenvalues of a square circulant matrix are well known\n\\cite{MacWilliams:Sloane:98}. If $\\matr{B}\\in {\\mathbb C}^{n\\times n}$ is a\ncirculant matrix and $w(X)= b_0+b_1X+\\ldots +b_{n-1}X^{n-1}$ its\n(column) associated polynomial, then the eigenvalues of $\\matr{B}$ are\ngiven by this polynomial's evaluation at the complex $n$-th roots of unity, i.e. $w(x)$ for all $x \\in R_n$. \n\nThe following gives a proof of this result based on the polynomial representation of a circulant\nmatrix. It may be seen as a special case of the method we present later for QC codes.\n\nLet $\\lambda$ be an eigenvalue of $\\matr{B}$. Then there exists a nonzero\nvector $\\vect{v}=(v_0, \\ldots,\n v_{n-1})^\\mathsf{T} \\in {\\mathbb C}^{n}$ such that\n\\begin{align*}\n&\\matr{B}\\vect{v}=\\lambda\\vect{v}.\n \\end{align*}\n In polynomial form, this equation is equivalent to (here $v(X) = v_0+v_1X+\\ldots +v_{n-1}X^{n-1}$):\n \\begin{align*}\n &w(X)v(X)=\\lambda v(X) \\mod (X^n-1) {~\\rm iff}\n \\\\& X^n-1~|~w(X)v(X)-\\lambda v(X) {~\\rm in~} {\\mathbb C}{~\\rm iff}\\\\\n &w(x)v(x)=\\lambda v(x), \\forall x \\in R_n{~\\rm iff}\\\\\n &(w(x)-\\lambda)v(x)=0, \\forall x\\in R_n \\; .\\\\\n\\end{align*}\nFor each $x\\in R_n$, $\\lambda= w(x)$ is a solution of the above\nequation, and therefore it is an eigenvalue for the matrix $\\matr{B}$.\nThere are $n$ such solutions, therefore, these are all possible\neigenvalues of $\\matr{B}$.\n\nIn the next theorem we will consider an \\emph{$L$-block circulant} matrix instead of a circulant matrix. 
This theorem may be found in \\cite{Tee:05}; \nwe provide here an alternative proof based on the polynomial representation.\n\\begin{theorem}\\label{theorem2} \n Let $\\matr{B}=\\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ be an\n $L$-block circulant matrix. Let $\\matr{W}(X)= \\matr{b}_0+\\matr{b}_1X+\\ldots\n +\\matr{b}_{r-1}X^{r-1}$ its (column) associated matrix polynomial.\n Then the eigenvalues of $\\matr{B}$ are given by the union of the\n eigenvalues of the $L\\times L$ matrices $\\matr{W}(x)$, for all $x\\in\n R_r$.\n\\end{theorem} \n \n\\IEEEproof The proof follows the reasoning in the theorem above.\n\nLet $\\lambda$ be an eigenvalue of $\\matr{B}$. Then there exists a nonzero\nvector $\\vect{v}\\triangleq(v_0, \\ldots, v_{rL-1})^\\mathsf{T}\\in {\\mathbb C}^{rL}$ such\nthat\n\\begin{align}\\label{eigenvalue}\n &\\matr{B}\\vect{v}=\\lambda\\vect{v}.\n \\end{align}\n Let $\\vect{p}(X)\\in {\\mathbb C}^{L}[X]$ given by $\\vect{p}(X)=(v_0, \\ldots,\n v_{L-1})^\\mathsf{T}+(v_L, \\ldots, v_{2L-1})^\\mathsf{T} X+\\ldots+ (v_{r(L-1)},\n \\ldots, v_{rL-1})^\\mathsf{T} X^{r-1}.$ In polynomial form,\n equation~\\eqref{eigenvalue} is equivalent to:\n \\begin{align*}\n &\\matr{B}(X) \\vect{p}(X)=\\lambda \\vect{p}(X) \\mod (X^r-1) {~\\rm iff}\n \\\\& X^r-1~|~\\matr{B}(X)\\vect{p}(X)-\\lambda \\vect{p}(X) {~\\rm in~} {\\mathbb C} {~\\rm iff}\\\\\n &\\matr{B}(x)\\vect{p}(x)=\\lambda \\vect{p}(x), \\forall x\\in R_r.\n\\end{align*}\nThe last equation is the equation for the eigenvalues of the matrix\n$\\matr{B}(x)$. Each such matrix has $L$ eigenvalues, counting\nmultiplicities, and there are $r$ distinct complex numbers in $R_r$; this accounts for the total number $rL$ of eigenvalues of\n$\\matr{B}$. 
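As a quick numerical sanity check of Theorem~\\ref{theorem2}, the following sketch (an illustration only, assuming the \\texttt{numpy} package and a small random example; the variable names are ours and do not refer to any released code) compares the eigenvalues of a random $L$-block circulant matrix with the union of the eigenvalues of $\\matr{W}(x)$ over $x\\in R_r$:
\\begin{verbatim}
import numpy as np

r, L = 5, 3
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((L, L)) for _ in range(r)]

# Assemble the L-block circulant matrix whose first block
# column is b_0, b_1, ..., b_{r-1}.
B = np.block([[blocks[(i - j) % r] for j in range(r)]
              for i in range(r)])

# Eigenvalues computed directly from the rL x rL matrix.
eig_direct = np.linalg.eigvals(B)

# Eigenvalues via Theorem 2: union over x in R_r of the
# eigenvalues of W(x) = b_0 + b_1 x + ... + b_{r-1} x^{r-1}.
eig_thm = []
for k in range(r):
    x = np.exp(2j * np.pi * k / r)
    Wx = sum(blocks[i] * x**i for i in range(r))
    eig_thm.extend(np.linalg.eigvals(Wx))

# The two multisets agree up to round-off (and ordering).
print(np.sort_complex(eig_direct))
print(np.sort_complex(np.array(eig_thm)))
\\end{verbatim}
The same routine, applied to the polynomial parity-check matrices introduced below, is what makes the evaluation of the lower bound~(\\ref{lowerbound}) practical for long QC codes.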
The eigenvectors can also be deduced from the above.\n\n\n\\subsection{Definition and Properties of QC Codes} \n\\label{subsec:QCcodes}\nA linear QC-LDPC code $\\code{C}_{\\rm QC}\\triangleq\\codeCQC{r}$ of length\n$n = rL$ can be described by an $rJ \\times rL$ (scalar) parity-check\nmatrix $\\matr{\\bar H}_{\\rm QC}^{(r)}\\triangleq\\matr{\\bar H}$ that is formed by a $J\n\\times L$ array of $r \\times r$ circulant matrices.\n\\begin{align}\n\\matr{\\bar H}=\\left[\n\\begin{array}{cccc}\n\\matr{P}_{1,1} & \\matr{P}_{1,2} & \\ldots & \\matr{P}_{1,L} \\\\\n\\matr{P}_{2,1} & \\matr{P}_{2,2} & \\ldots & \\matr{P}_{2,L} \\\\\n\\vdots &\\vdots &\\ldots &\\vdots \\\\\n\\matr{P}_{J,1} & \\matr{P}_{J, 2} & \\ldots & \\matr{P}_{J, L} \n\\end{array}\n \\right], \n\\end{align}\nwhere the entries $\\matr{P}_{i,j}$ are $r\\times r$ circulant matrices.\nClearly, by choosing these circulant matrices to be low-density, the\nparity-check matrix will also be low-density.\n\nWith the help of the well-known isomorphism between the ring of\n$r\\times r$ circulant matrices and the ring of polynomials modulo $X^r\n- 1$, to each matrix $\\matr{P}_{i,j}$ we can associate a polynomial\n$p_{i,j}(X)$, and thus a QC-LDPC code can equivalently be described by\na polynomial parity-check matrix $\\matr{P}(X)$ of size $J \\times L$,\nwith polynomial operations performed modulo $X^r-1$:\n\\begin{align}\n\\matr{P}(X)=\\left[\n\\begin{array}{cccc}\np_{1,1}(X) & p_{1,2}(X) & \\ldots & p_{1,L}(X) \\\\\np_{2,1}(X) & p_{2,2}(X) & \\ldots & p_{2,L}(X) \\\\\n\\vdots &\\vdots &\\ldots &\\vdots \\\\\np_{J,1}(X) & p_{J, 2}(X) & \\ldots & p_{J, L}(X) \n\\end{array}\n \\right].\n\\end{align}\n \nBy permuting the rows and columns of the scalar parity-check matrix\n$\\matr{\\bar H}$,\\footnote{i.e., by taking the first row in the first block\n of $r$ rows, the first row in the second block of $r$ rows, etc.,\n then the second row in the first block, the second row in the second\n block, etc., and similarly for the columns.} we obtain an equivalent\nparity-check matrix representation $\\matr{H}$ for the QC code\n$\\codeCQC{r}$,\n\n\\begin{align}\\matr{H}\n &\\triangleq\n \\begin{bmatrix}\n \\matr{H}_0 & \\matr{H}_{r-1} & \\cdots \n & \\matr{H}_1 \\\\ \n \\matr{H}_1 & \\matr{H}_0 & \\cdots \n & \\matr{H}_2 \\\\ \n \\vdots & \\vdots & \\ddots \n & \\vdots \\\\ \n \\matr{H}_{r-1} & \\matr{H}_{r-2} & \\cdots \n & \\matr{H}_0\n \\end{bmatrix}.\n\\label{eq:matrix_1_bijection}\n\\end{align}\nwhere $\\matr{H}_0, \\matr{H}_1, \\ldots, \\matr{H}_{r-1}$ are scalar $J\n\\times L$ matrices. The connection between the two representations is\n\\begin{align}\n\\matr{H}_0 + \\matr{H}_1 X + \\cdots + \\matr{H}_{r-1} X^{r-1}=\\matr{P}(X).\n\\label{eq:matrix_2_bijection}\n\\end{align}\n\n\\subsection{The Eigenvalues of the Matrix $\\matr{H}^\\mathsf{T}\\cdot \\matr{H}$ of a QC Code}\n\\label{subsec:eigenvaluesQC}\n\nNote that for a fixed value of $r \\ge 1$, (\\ref{eq:matrix_2_bijection}) provides a simple bijective correspondence between the set of polynomial matrices $\\matr{P}(X) \\in (\\mathbb{R}[X]\/(X^r-1))^{J \\times L}$ and the set of parity-check matrices of the form (\\ref{eq:matrix_1_bijection}). Furthermore, the product of two such polynomial matrices, where defined, yields another which corresponds via this bijection with the product of the corresponding parity-check matrices in the form (\\ref{eq:matrix_1_bijection}). 
Also note that transposition of a polynomial matrix in the form (\\ref{eq:matrix_2_bijection}) corresponds to transposition of the corresponding parity-check matrix in the form (\\ref{eq:matrix_1_bijection}), under this bijection.\n\nIt follows that $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ is an $L$-block circulant matrix; applying Theorem~\\ref{theorem2} to this matrix yields the following corollary.\n\n\\begin{corollary}\n \n\n The eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ are given\n by the union of the eigenvalues of the $L\\times L$ matrices\n $\\matr{P}^\\mathsf{T}(x^*)\\cdot \\matr{P}(x),$ for $x\\in R_r$.\n\\label{cor:QC_codes}\n\\end{corollary} \n\\IEEEproof We apply Theorem~\\ref{theorem2} to the $L$-block circulant\nmatrix $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}\\triangleq \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ and form the matrix $\\matr{W}(X)= \\matr{b}_0+\\matr{b}_1X+\\ldots\n+\\matr{b}_{r-1}X^{r-1}$. This is equal to the product of the two\nmatrix polynomials of $\\matr{H}^\\mathsf{T}$ and $\\matr{H}$,\nwhich are $\\matr{H}_0^{\\mathsf{T}} + \\matr{H}_{r-1}^{\\mathsf{T}} X +\n\\cdots + \\matr{H}_{1}^{\\mathsf{T}} X^{r-1} = X^r\\matr{P}^\\mathsf{T}(1\/X)$ and $\\matr{H}_0 + \\matr{H}_1 X +\n\\cdots + \\matr{H}_{r-1} X^{r-1} = \\matr{P}(X)$, respectively. Therefore\n$\\matr{W}(X)=(X^r\\matr{P}^\\mathsf{T}(1\/X))\\cdot\\matr{P}(X)$ and so the eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ are the eigenvalues of $\\matr{P}^\\mathsf{T}(1\/x)\\cdot\n\\matr{P}(x)$, for all $x\\in R_r$; these are then equal to the eigenvalues of $\\matr{P}^\\mathsf{T}(x^*)\\cdot\n\\matr{P}(x)$, for all $x\\in R_r$ (as $x^*=1\/x$ for all such $x$).\n\n\\begin{example}\\label{Tannerexample}\n Let $r=31$ and consider the $(3,5)$-regular QC-LDPC code given by the\n scalar $93 \\times 155$ matrix\\footnote{Here $\\matr{I}_\\ell$ denotes the $31 \\times 31$ identity matrix with rows\nshifted cyclically to the left by $\\ell$ positions.}\n\\begin{align*}\n {\\matr{\\bar H}} &= \\begin{bmatrix}\n \\matr{I}_1 & \\matr{I}_2 & \\matr{I}_4 & \\matr{I}_8 & \\matr{I}_{16}\\\\\n \\matr{I}_5 & \\matr{I}_{10} & \\matr{I}_{20} & \\matr{I}_9 & \\matr{I}_{18}\\\\\n \\matr{I}_{25}& \\matr{I}_{19} & \\matr{I}_7 & \\matr{I}_{14}&\n \\matr{I}_{28}\n \\end{bmatrix}.\n \\end{align*}\n The polynomial parity-check matrix $\\matr{P}(X)\\in\n (\\mathbb{R}[X]\/(X^r-1))^{3 \\times 5}$ is\n \\begin{align*}\n \\matr{P}(X)\n &= \\begin{bmatrix}\n X & X^2 & X^4 & X^8 & X^{16}\\\\\n X^5 & X^{10} & X^{20} & X^9 & X^{18}\\\\\n X^{25} & X^{19} & X^7 & X^{14}& X^{28}\n \\end{bmatrix} \\; .\n \\end{align*}\n This code is the famous $(3,5)$-regular QC-LDPC code of length $155$\n presented in~\\cite{Tanner:Sridhara:Fuja:01:1}. Note that the code\n parameters are $[155, 64, 20]$. The corresponding matrix\n $\\matr{H}$ in the form (\\ref{eq:matrix_1_bijection}) is a $31\\times 31 $ matrix with block entries\n $\\matr{H}_i$, $i\\in [31]$ obtained by decomposing $\\matr{P}(X)$\n according to the powers of $X$:\n \\begin{align}\n\\matr{P}(X)=\\matr{H}_0 + \\matr{H}_1 X + \\cdots + \\matr{H}_{30} X^{30}.\n\\end{align}\nObviously only $15$ matrices among the $\\matr{H}_i$ are nonzero, and all\nof these contain only one $1$, the other entries being zero.\n\nThe matrix $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ is a $5$-block circulant matrix. 
Corollary \\ref{cor:QC_codes} above tells us\nthat in order to compute its eigenvalues, we need to form the matrices\n$\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot \\matr{P}(\\rho^i)$, for all $i\\in [31]$ (here $\\rho$ denotes a primitive complex $31$-th root of unity). We\nhave that\n\\begin{align*}\n \\matr{P}^\\mathsf{T}(1\/x) &= \\begin{bmatrix}\n x^{30} & x^{29} & x^{27} & x^{23} & x^{15}\\\\\n x^{26} & x^{21} & x^{11} & x^{22} & x^{13}\\\\\n x^{6} & x^{12} & x^{24} & x^{17}& x^{3}\n \\end{bmatrix}^\\mathsf{T} \\;\n \\end{align*}\nand\n\\begin{align*}\n & \\matr{P}^\\mathsf{T}(1\/x)\\cdot \\matr{P}(x)=\n \\begin{bmatrix}\n 3 & a & e^* & c & e^* \\\\\n a^* & 3 & b & a^* & d \\\\\n e & b^* & 3 & c & b^* \\\\\n c^* & a & c^* & 3 & d \\\\\n e & d^* & b & d^* & 3 \\end{bmatrix} \\;,\n\\end{align*}\nfor all $x \\in R_{31}$, where \n\\begin{align*}\n a&=x + x^5 + x^{25}; b = x^2 + x^{10} + x^{19};\n c = x^4 + x^7 + x^{20};\\\\\n d& =x^8 + x^9 + x^{14}; e = x^{16} + x^{18} +\n x^{28}.\n\\end{align*}\nObviously for $i\\in [31]$, each matrix $\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot\n\\matr{P}(\\rho^i)$ is Hermitian (in fact nonnegative definite), hence each has $5$ real nonnegative eigenvalues,\ngiving a total of $31\\cdot 5=155$ nonnegative eigenvalues for $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$.\n\nWe obtain that for each $i \\in [31], i\\neq 0$,\nthe associated polynomial of $\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot\n\\matr{P}(\\rho^i)$ may be written as (using $\\rho^{31} = 1$)\n\\begin{eqnarray*}\nu(\\lambda) & = & \\lambda^2(\\lambda^3 - 15 \\lambda^2 + 62 \\lambda - 62) \\\\\n& = & \\lambda^2(\\lambda-\\lambda_2)(\\lambda-\\lambda_3)(\\lambda-\\lambda_4)\n\\end{eqnarray*}\nwhere $\\lambda_2 = 8.6801$, $\\lambda_3 = 4.8459$ and $\\lambda_4 =\n1.4740$. Also, for $i=0$ the associated polynomial of\n$\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot \\matr{P}(\\rho^i)$ may be written as\n$u(\\lambda) = \\lambda^4(\\lambda-\\lambda_1)$ where $\\lambda_1 = 15$.\nThis yields the nonzero eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ as $\\{ \\lambda_1, \\lambda_2, \\lambda_3,\n\\lambda_4 \\}$ with multiplicities $1$, $30$, $30$ and $30$\nrespectively.\n\\end{example} \n \n\n\\section{Eigenvalues of Nested Circulant Matrices}\n\\label{sec:nestedcirculanteigenvalues}\nIn this section we define the class of \\emph{nested circulant} matrices,\nand show that they have eigenvalues which are given by evaluating a\nmultivariate associated polynomial at points whose coordinates are\nparticular roots of unity. \n\n\\begin{theorem}\\label{theorem:nested_2} \n Let $\\matr{B}=\\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ be an $L$-block\n circulant matrix. Suppose that each subblock $\\matr{b}_i$,\n $i \\in [r]$, is also circulant, with associated polynomial\n $p^{(i)}(X) = \\sum_{j=0}^{L-1} b_{i,j} X^j$. Define the\n associated polynomial of $\\matr{B}$ by\n\\[ \nq(X,Y) = \\sum_{i=0}^{r-1} \\sum_{j=0}^{L-1} b_{i,j} X^i Y^j \\; .\n\\] \nThen the set of eigenvalues of $\\matr{B}$ is given by \n\\[\n\\{ q(x,y) \\; : \\; x \\in R_r, y \\in R_L \\} \\; .\n\\]\n\\end{theorem} \n\n\\IEEEproof For each $j \\in [L]$ define $u^{(j)}(X) =\n\\sum_{i=0}^{r-1} b_{i,j} X^i$. 
By Theorem~\\ref{theorem2}, the\neigenvalues of $\\matr{B}$ are equal to those of the matrices given by\n$\\matr{W}(x)$ for $x \\in R_r$; each of these is circulant with\nassociated polynomial (in $Y$) given by\n\\[\n\\sum_{j=0}^{L-1} u^{(j)}(x) Y^j = q(x,Y) \\; .\n\\]\nThus the eigenvalues of each $\\matr{W}(x)$ are equal to $q(x,y)$ for\n$y \\in R_L$, and the result follows.\n\nWe next define what is meant by a \\emph{nested circulant} matrix.\n\\begin{definition}\\label{m_nested_circulant}\nLet $m \\ge 1$ and let $i_t$ be a positive integer for each $t=1,2,\\cdots,m$. Also let $\\matr{B} =\n \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{i_1-1})$ be\n a block-circulant matrix such that for every $t=1,2,\\cdots,m-1$,\n $j_t \\in [i_t]$ \n\\begin{align*} &\\matr{b}_{j_1,j_2,\\cdots,j_{t}}=\\\\ &\n \\mathrm{circ}(\\matr{b}_{j_1,j_2,\\cdots,j_{t},0},\n \\matr{b}_{j_1,j_2,\\cdots,j_{t},1}, \\cdots,\n \\matr{b}_{j_1,j_2,\\cdots,j_{t},i_{t+1}-1})\\end{align*}\nis also block-circulant,\n and that $\\matr{b}_{j_1,j_2,\\cdots,j_{m}} = b_{j_1,j_2,\\cdots,j_{m}}$ are\n scalars. Then $\\matr{B}$ is said to be an $m$-nested circulant\n matrix (with dimension $n = \\prod_{t=1}^{m} i_t$). The associated polynomial of $\\matr{B}$ is defined by\n\\begin{equation}\n q(X_1,X_2,\\cdots,X_{m}) = \\sum_{j_1=0}^{i_1-1} \\sum_{j_2=0}^{i_2-1} \n\\cdots \\sum_{j_{m}=0}^{i_{m}-1} b_{j_1,j_2,\\cdots,j_{m}} \\prod_{t=1}^{m} X_{t}^{j_t} \n\\label{eq:definition_char_poly} \n\\end{equation}\n\\end{definition}\nNote that the $1$-nested circulants are precisely the circulant\nmatrices, and that the $2$-nested circulants are precisely the \n$i_2$-block-circulant matrices with circulant subblocks. Also note that the\nassociated polynomial $q(X_1,X_2,\\cdots,X_{m})$ provides a succinct\ndescription of the matrix $\\matr{B}$.\n\nA straightforward generalization of Theorem \\ref{theorem:nested_2} is\nas follows.\n\\begin{theorem}\\label{theorem:nested_m} \n Let $\\matr{B}$ be an $m$-nested circulant matrix with associated\n polynomial $q(X_1,X_2,\\cdots,X_{m})$ given by\n (\\ref{eq:definition_char_poly}) above. Then the set of eigenvalues\n of $\\matr{B}$ is given by\n\\[\n\\{ q(x_1,x_2,\\cdots,x_{m}) \\; : \\; x_t \\in R_{i_t} \\quad \\forall t = 1,2,\\cdots,m \\}\n\\]\n\\end{theorem} \n\n\\IEEEproof The proof uses induction, and follows the lines of the\nproof of Theorem \\ref{theorem:nested_2} in a rather straightforward\nmanner.\n\n\\begin{example}\\label{fully_nested_circulant}\n Here we take an example of an $3$-nested circulant (i.e. $m=3$),\n where $i_t=2$ for $t=1,2,3$. The eigenvalues of\n\\[\n\\matr{B} = \\begin{bmatrix} \n0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\\\\n1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\\\\n0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\\\\n0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 \\\\\n0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\\\\n1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\\\\n1 & 1 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n1 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\end{bmatrix}\n\\]\nare equal to the eigenvalues of \n\\[\n\\matr{B'} = \\begin{bmatrix} \n0 & 1+x & x & x \\\\\n1+x & 0 & x & x \\\\\nx & x & 0 & 1+x \\\\\nx & x & 1+x & 0\\end{bmatrix}\n\\]\nfor $x \\in \\{-1,1\\}$, which are equal to the eigenvalues of \n\\[\n\\matr{B''} = \\begin{bmatrix} \nxy & 1+x+xy \\\\\n1+x+xy & xy\\end{bmatrix}\n\\]\nfor $x \\in \\{-1,1\\}$ and $y \\in \\{-1,1\\}$. Finally, these are equal to the set \n\\[\n\\{ q(x,y,z) \\; : \\; x,y,z \\in \\{-1,1\\} \\}\n\\]\nwhere the associated polynomial of $\\matr{B}$ is $q(x,y,z) = xy +\nz(1+x+xy)$. 
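This identity is easy to confirm numerically; the following short sketch (illustration only, assuming \\texttt{numpy}) checks that the eigenvalues of the $8\\times 8$ matrix $\\matr{B}$ above coincide with the eight values of $q$ at $x,y,z\\in\\{-1,1\\}$:
\\begin{verbatim}
import numpy as np
from itertools import product

B = np.array([[0,1,0,0,0,1,1,1],
              [1,0,0,0,1,0,1,1],
              [0,0,0,1,1,1,0,1],
              [0,0,1,0,1,1,1,0],
              [0,1,1,1,0,1,0,0],
              [1,0,1,1,1,0,0,0],
              [1,1,0,1,0,0,0,1],
              [1,1,1,0,0,0,1,0]], dtype=float)

# Associated polynomial q(x, y, z) = xy + z(1 + x + xy).
q = lambda x, y, z: x*y + z*(1 + x + x*y)

eig_direct = np.sort(np.linalg.eigvalsh(B))   # B is symmetric here
eig_poly = np.sort([q(x, y, z)
                    for x, y, z in product([1, -1], repeat=3)])

print(eig_direct)   # [-2. -2. -2.  0.  0.  0.  2.  4.]
print(eig_poly)     # [-2 -2 -2  0  0  0  2  4]
\\end{verbatim}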
In this example $b_{0,0,0} = 0$, $b_{0,0,1} = 1$,\n$b_{0,1,0} = 0$, $b_{0,1,1} = 0$, $b_{1,0,0} = 0$, $b_{1,0,1} = 1$,\n$b_{1,1,0} = 1$, $b_{1,1,1} = 1$; these may be easily obtained by\nmatching the elements of the first column of $\\matr{B}$ with the binary\nexpansion of the corresponding row position.\n\nThis example may be generalized to the case where $n = 2^m$ and the\ncirculant is $m$-nested; the eigenvalues are real.\nNote that the choice of the first column in $\\matr{B}$ determines which\nterms in $\\{ 1,x,y,z,xy,yz,zx,xyz \\}$ are included in the\nassociated polynomial, and hence controls the eigenvalues of\n$\\matr{B}$.\n\\end{example}\n\n\\begin{theorem}\\label{theorem:nested_H_nested_B}\n If $\\matr{H}$ is an $m$-nested circulant matrix, then $\\matr{B} =\n \\matr{H}^{\\mathsf{T}} \\matr{H}$ is an $m$-nested circulant matrix. \n\\end{theorem}\n\n\\IEEEproof It is straightforward to prove the stronger result that if $\\matr{A}$ and $\\matr{B}$ are $m$-nested circulants with specified nested dimensions, then $\\matr{A}^{\\mathsf{T}} \\matr{B}$ is also $m$-nested circulant, with the same nested dimensions. The proof proceeds by induction on $m$. The base case $m=1$ is straightforward. Next, let $\\matr{A}$ be block-circulant with block entries in the first column equal to some $(m-1)$-nested circulants $\\matr{A}_i$, and let $\\matr{B}$ be block-circulant with block entries in the first column equal to some $(m-1)$-nested circulants\n$\\matr{B}_j$. The matrix $\\matr{A}^{\\mathsf{T}} \\matr{B}$ is then block-circulant, and each block entry is a sum of matrices of the form $\\matr{A}_i^{\\mathsf{T}} \\matr{B}_j$. By the principle of induction, each\nof these matrices is an $(m-1)$-nested circulant, and it is easy to show that a sum of $t$-nested circulants (of the same nested dimensions) is another $t$-nested circulant (with these nested dimensions).\n\n\\section{Conditions for the Pseudo-Weight Lower Bound to Hold with Equality}\n\\label{sec:conditions} \n\nIt is straightforward to show that a necessary condition for the bound of \\cite{KV-lower-bounds}\nto hold with equality is that the eigenvalues of $\\matr{B} =\n\\matr{H}^{\\mathsf{T}} \\matr{H} \\in \\mathbb{R}^{n\\times n}$ are $\\lambda_1$ with multiplicity $1$ and\n$\\lambda_2 < \\lambda_1$ with multiplicity $n-1$.\n\nIf $\\matr{H}$ is circulant with (row) associated polynomial $w(X)$ of degree $k \\le n$, the eigenvalues of $\\matr{B}$\nare precisely $\\{ \\left| w(x) \\right|^2 \\; : \\; x \\in R_n \\}$;\ntherefore the largest eigenvalue of $\\matr{B}$ is $\\lambda_1 = \\left|\n w(1) \\right|^2 = d^2$ where $d$ is the number of nonzero\ncoefficients in $w(X)$ (noting that $\\left| w(1) \\right|^2 > \\left|\n w(x) \\right|^2$ for all $x \\in R_n^{-}$). Let $\\tilde{w}(X) = X^k w(1\/X)$ denote the \\emph{reciprocal\npolynomial} of $w(X)$ which is obtained by reversing the order of\ncoefficients in $w(X)$. Now assume that the bound of\n\\cite{KV-lower-bounds} holds with equality. 
Then we must have\n\\[\n\\left| w(x) \\right|^2 = w(x)w^{*}(x) = \\lambda_2 \\quad \\forall \\: x \\in R_n^{-}\n\\] \nfor some positive real number $\\lambda_2$, i.e.\n\\[\nw(x)w(1\/x) = \\lambda_2 \\quad \\forall \\: x \\in R_n^{-} \\; .\n\\]\nThis is equivalent to\n\\[\nw(x)\\tilde{w}(x) = \\lambda_2 x^k \\quad \\forall \\: x \\in R_n^{-}\n\\]\nThus $R_n^{-}$ is a subset of the roots of the polynomial\n$w(X)\\tilde{w}(X) - \\lambda_2 X^k$, and so\n\\begin{equation}\nw(X)\\tilde{w}(X) - \\lambda_2 X^k = (1+X+X^2+\\cdots +X^{n-1}) r(X)\n\\label{eq:equality_in_bound_circ}\n\\end{equation}\nwhere $r(X)$ is a polynomial of degree $2k-n+1 \\ge 0$ with integer\ncoefficients. In the following we give details of this condition for\nsome codes which attain the bound of \\cite{KV-lower-bounds} with\nequality.\n\\vspace{-2mm}\n\\begin{example}\\label{EG22example}\n The $\\mathrm{EG}(2,2)$ code with $q=2$, $n=3$, $k=1$, $d=2$ has\n $w(X) = 1+X$. Here $\\lambda_1 = d^2 = 4$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\[\n(1+X)^2 - X = 1+X+X^2\n\\]\nso in this case $\\lambda_2 = 1$ and $r(X) = 1$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 3 = q+1 \\; .\n\\]\n\\end{example}\n\\vspace{-5mm}\n\\begin{example}\\label{PG22example}\n The $\\mathrm{PG}(2,2)$ code with $q=2$, $n=7$, $k=3$, $d=3$ has\n $w(X) = 1+X+X^3$. Here $\\lambda_1 = d^2 = 9$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\[\n(1+X+X^3)(1+X^2+X^3) - 2X^3 = 1+X+\\cdots+X^6\n\\]\nso in this case $\\lambda_2 = 2$ and $r(X) = 1$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 4 = q+2 \\; .\n\\]\n\\end{example}\n\\vspace{-5mm}\n\\begin{example}\\label{PG24example}\n The $\\mathrm{PG}(2,4)$ code with $q=2$, $n=21$, $k=11$, $d=5$ has\n $w(X) = 1+X^2+X^7+X^8+X^{11}$. Here $\\lambda_1 = d^2 = 25$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\begin{eqnarray*}\n& (1+X^2+X^7+X^8+X^{11})(1+X^3+X^4+X^9+X^{11}) \\\\ \n& - 4X^{11} = (1+X+X^2+\\cdots+X^{20})(1-X+X^2)\n\\end{eqnarray*}\nso in this case $\\lambda_2 = 4$ and $r(X) = 1-X+X^2$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H})= n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 6 = q+2 \\; .\n\\]\n\\end{example}\n\\vspace{-2mm}\nNote that for a general $\\mathrm{PG}(2,q)$ code, for the bound to hold\nwith equality we require\n\\begin{eqnarray*}\nw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = q+1 & = & n \\left( \\frac{2d-\\lambda_2}{d^2-\\lambda_2}\n \\right) \\\\ \n & = & (q^2+q+1) \\left( \\frac{2(q+1)-\\lambda_2}{(q+1)^2-\\lambda_2} \\right) \\; .\n\\end{eqnarray*}\nand therefore we must have $\\lambda_2 = q$. Also, for a general\n$\\mathrm{EG}(2,q)$ code, for the bound to hold with equality we\nrequire\n\\begin{eqnarray*}\nw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = q+1 & = & n \\left( \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \n\\right) \\\\\n& = & (q^2-1) \\left( \\frac{2q-\\lambda_2}{q^2-\\lambda_2} \\right) \\; .\n\\end{eqnarray*}\nand therefore we must have $\\lambda_2 = q$ if $q>2$, whereas for\n$q=2$, any $\\lambda_2$ will achieve the bound.\n\n\n\\section{Conclusions and Future Work}\n\\label{sec:conclusions:1}\n\nA method has been presented for evaluation of the eigenvalue-based lower bound on the AWGNC pseudo-weight based on spectral\nanalysis, for QC and related codes. 
It was shown that the relevant eigenvalues may be found by computing the\neigenvalues of a certain number of small matrices. We also presented a \nnecessary condition for the bound to be attained with\nequality and gave a few examples of codes for which this happens.\nFuture work involves optimization of QC code designs based on these bounds. \n\n\\section{Acknowledgment}\nThe first author was supported by NSF Grant DMS-0708033 and TF-0830608.\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec1}\n\nThe success of Deep Neural Networks (DNNs) \\cite{He2016Deep, wen2021zigan, hu2018squeeze}, \\cite{liu2016ssd, 9546634} largely owes to the completeness of training data, which means that the collected data should be carefully annotated, contain little noise, and be sufficiently large. But in many real scenarios that require professional labeling knowledge, such as fine-grained image classification for different kinds of birds, it is generally hard to access a large number of label-rich training samples.\n\nOne solution to alleviate the over-dependence of DNNs on label-rich training data is Few-Shot Learning (FSL)~\\cite{sun2019meta,chen2019closer,ryu2020metaperturb,yang2020restoring,baik2020meta,yang2021free,liu2020negative}, which aims to mine transferable meta-knowledge from the base classes so that DNNs can utilize such knowledge to easily recognize new classes given only a few training examples. However, the main challenge for FSL \\cite{sun2019meta,chen2019closer,ryu2020metaperturb,yang2020restoring,baik2020meta,yang2021free} is that learning to classify from only a few examples, which have limited representative capability, inevitably brings an overfitting issue. Therefore, researchers mainly focus on leveraging meta-learning technology~\\cite{vinyals2016matching, snell2017prototypical,ryu2020metaperturb, sung2018learning, hao2019collect, wu2019parn} to deal with the FSL problem. However, the above-mentioned FSL methods focus on classifying coarse-grained generic object categories, which makes them less suitable for the few-shot fine-grained classification task~\\cite{wu2021object, zhu2020multi, wertheimer2021few, li2020bsnet}, which requires emphasizing local feature variations or subtle feature differences.\n\nInspired by the meta-learning success for generic object classification, some researchers~\\cite{tang2020revisiting, wang2021few, li2020bsnet, zhu2020multi, wu2021object, dong2020learning, huang2020low, li2019revisiting, li2019distribution, garcia2017few, wertheimer2021few, tang2020blockmix, tang2022learning, huang2021toan} start to extend the study of FSL from generic object classes to fine-grained classes, where the main challenge is that fine-grained recognition is more dependent on \\textbf{mining spatially local discriminative parts} of an input image, rather than global features extracted by generic meta-learning models such as prototypical networks~\\cite{snell2017prototypical}. As illustrated in Fig. \\ref{fig1}(a), many few-shot fine-grained works mine discriminative parts of the whole image based on the attention mechanism~\\cite{zhu2020multi}, feature map reconstruction~\\cite{wertheimer2021few}, and feature-level spatial alignment~\\cite{wu2021object}. 
However, these methods fail to leverage cross-image object semantic relations between the training examples (denoting the support images) and test examples (denoting the query images).\n\n\\begin{figure}[t]\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.2cm}\n\\centering\n\\includegraphics[height=4.5cm, width=8.3cm]{ACM_motivation.png}\n\\caption{(a) Previous fine-grained works attempt to learn discriminative local image parts but have no interaction between support-query images, which may cause relation matching confusion between different fine-grained objects and yield misclassification. (b) In contrast, HelixFormer consisting of RMP and REP modules (details will be given in Sec. \\ref{sec3}) can mine image pair-level semantically related parts, \\textit{e.g.} birds' wings, and further learn how to distinguish these subtle feature differences to avoid misclassification.}\n\\label{fig1}\n\\end{figure}\n\nAs a matter of fact, \\textit{to recognize a novel class's samples, humans tend to compare the local semantic parts' differences between the ever-seen and newly-seen objects, so that the subtle feature differences can be identified using only a few examples.} Thus, benefiting from such a fact, we are encouraged to model the cross-image object semantic relation to find the discriminative local regions in the Transformer, as illustrated in Fig. \\ref{fig1}(b). We further give an in-depth analysis on how to design such a cross-attention mechanism under both few-shot and fine-grained scenarios via the following two aspects.\n\nFirstly, under the FSL setting, we consider that the cross-image object semantic relation calculated from the support to query image, should be consistent with that calculated from the query to support image, namely, \\textbf{cross-image object semantic relation consistency}, which motivates us to use a symmetrical structure to model the cross-image object semantic relations. Moreover, similar to humans that need to distinguish from multiple relations between a target object and its various neighbors to determine its belonged category, cross-image object semantic relation should be modeled as \\zb{a parts-to-parts (many-to-many) matching process, that involves several attention heads of a Transformer running in parallel with each head focusing on a certain part.}\n\nSecondly, such cross-image semantic relation learning should not only consider the few-shot scenario, but also consider the fine-grained setting which requires to strengthen subtle local features' discriminability. Different fine-grained objects in real world may present arbitrary poses, scales and appearances, causing different similarity in both global appearance and various local regions, hurting the classification performance. This challenge may be alleviated if the learned cross-image semantic relation can serve to emphasize those discriminative local features by integrating into a well-performed fine-grained baseline model, \\zb{and enhance the representation learning process for those discovered semantically-similar regions.}\n\nIn view of the above, we propose a Transformer-based double-helix model, namely HelixFormer to solve the few-shot fine-grained image classification task. Benefiting from the self-attention mechanism of Transformer, the HelixFormer exploits the multi-head key-query-value outputs to do interaction between different images, and predicts their object-level semantic relation. 
HelixFormer is mainly composed of a cross-image object Relation Mining Process (RMP) across the support-query branches, and a feature Representation Enhancement Process (REP) within each single branch. Specifically, we feed the input image pairs into the support and query branches respectively. First, by treating the input feature patches (extracted by the Convolution Neural Networks (CNNs) based backbone such as Conv-4~\\cite{sung2018learning}) as tokens and considering the cross-branch token information interaction, RMP produces \\textbf{Cross-image Semantic Relation Maps (CSRMs)} for each branch in a bidirectional and symmetrical manner, which ensures the cross-image relation consistency. Meanwhile, we formulate the multi-head cross-attention mechanism as modeling many-to-many inter-object semantic relations, where one object would interact with multiple other objects. Second, the above CSRMs encode image patch-level semantic relation between different fine-grained objects, and further help the subsequent REP \\zb{to enhance the learning of the feature representations within either the support or query feature encoding branch, which boosts the baseline model's ability to distinguish subtle feature differences of fine-grained objects.}\n\nThe main contributions of this work can be summarized as follows:\n\n\\begin{enumerate}[1)]\n\\item We propose a novel HelixFormer architecture, leveraging on the cross-image object semantic relation learning at patch level, to perform the few-shot fine-grained learning. To our knowledge, this is the first work to introduce semantic relation mining in the Transformer model for few-shot fine-grained task.\n\n\\item To ensure the semantic relation consistency between a pair of support-query images in FSL, we design a double-helix RMP structure that can generate consistent patch-level CSRMs in different branches. Furthermore, with the aid of CSRMs, we develop a REP to enhance the feature learning for those semantically-similar regions presenting subtle feature differences.\n\n\\item We have conducted extensive experiments on five few-shot fine-grained benchmarks. Both qualitative and quantitative results validate that the HelixFormer can effectively learn the cross-image object semantic relations, and further utilize such relations to enhance the model's generalization ability.\n\\end{enumerate}\n\n\\section{Related Works}\n\\label{sec2}\n\n\\noindent\\textbf{Few-Shot Fine-Grained Learning (FSFGL).} Recent FSL works \\cite{alet2019neural, yin2019meta, 9729102, ren2018learning, ye2022makes, lee2019learning, kirsch2019improving} can be roughly categorized into three types: 1) Optimization-based methods \\cite{finn2017model,rusu2018meta} that focus on learning good initialization parameters in order to quickly adapt the few-shot model to novel classes; 2) Metric-based methods \\cite{vinyals2016matching, snell2017prototypical, oreshkin2018tadam, sung2018learning} that aim to design a distance metric, so that the few-shot model can learn the semantic relation between different input images; 3) Data augmentation-based methods \\cite{reed2017few, chen2019image, zhang2018metagan, hariharan2017low, wang2018low, li2020adversarial} that produce new samples to enlarge the training set for model training. 
Recently, inspired by the rapid development of meta-learning, researchers~\\cite{tang2020revisiting, wang2021few, li2020bsnet, zhu2020multi, wu2021object, dong2020learning, huang2020low, li2019revisiting, li2019distribution, garcia2017few, wertheimer2021few} start to explore the generalization ability of FSL model on novel fine-grained sub-classes where only a few training examples are given. For example, a multi-attention meta-learning method~\\cite{zhu2020multi} is proposed to learn diverse and informative parts of fine-grained images. Besides, the work~\\cite{wertheimer2021few} tackles the FSFGL problem from the perspective of reconstructing the query image to learn a classifier. More recently, the work~\\cite{wu2021object} tries to increase the fine-grained classification accuracy via long-shot-range spatial alignment between support and query features. Motivated by these works in the FSFGL community, we further extend the study of FSFGL to a Transformer-based structure, and investigate its effectiveness in strengthening the support-query relation matching process only given a few samples.\n\n\\noindent\\textbf{Cross-attention Models.} In this part, we review existing cross-attention works \\cite{chen2021crossvit, lin2021cat, wang2021crossformer, wei2020multi, Yang3690, ke2021prototypical, huang2019ccnet} and find that they are mainly based on the attention modeling of cross-scale features \\cite{chen2021crossvit, lin2021cat, wang2021crossformer}, cross-modality relationships \\cite{wei2020multi, Yang3690}, joint spatio-temporal information \\cite{ke2021prototypical}, and inner-image multi-patches \\cite{huang2019ccnet} to capture the intra-object relations and contextual information. Different from the above methods, our work aims to exploit the cross-image object semantic relations (\\textit{i.e.} finding the discriminative local spatial regions between objects that are helpful for fine-grained recognition) to address the FSFGL issue. On the other hand, there are only two works \\cite{hou2019cross, zhuang2020learning} employing cross-attention mechanism to perform the FSL task. Our work differs from the two works as follows: 1) We study a symmetrical Transformer-based structure, which fully considers the symmetry property between support and query images in FSL, and imposes a cross-image object semantic relation consistency between support-query and query-support matching process; 2) We develop a two-step relation matching process (a two-branch relation mining process and a representation enhancement process), which has the merit of improving the baseline model's ability to distinguish the subtle local feature differences.\n\n\n\\noindent\\textbf{Transformer for Vision Tasks.} Vision Transformer (ViT) \\cite{dosovitskiy2020image} shows promising performance on a variety of tasks including image classification \\cite{dosovitskiy2020image}, object detection \\cite{chu2021twins, wang2021pyramid}, segmentation \\cite{chu2021twins, wang2021pyramid}, and pose estimation \\cite{yuan2021hrformer}. The goal of ViT is to model long-range dependencies across different input sequence elements (or input tokens), by splitting an input image into a sequence of image patches with size of 16$\\times$16 pixels. More recently, to reduce the computational cost, many researchers incorporate the multi-scale branch into Transformer, via inserting depth-wise convolutions into the self-attention \\cite{yuan2021hrformer} or exploiting the multi-resolution parallel structure \\cite{liu2021swin}. 
The above works attempt to model the global self-attention within an input image through the single-branch structure, while our HelixFormer tries to identify the local patch-level cross-attention between different input images. Besides, CrossTransformer \\cite{doersch2020crosstransformers} and CrossViT \\cite{chen2021crossvit} are recently developed Transformer-based dual-branch structures, where the CrossTransformer utilizes the dual-branch structure to achieve a coarse-grained spatial correspondence, and CrossViT feeds the image patches of different sizes into two separate branches to extract multi-scale information. We would like to emphasize that, compared with the above dual-branch network structure, our HelixFormer is actually a double-helix dual-branch structure, which means that the cross-object semantic relations from the support to query branch and vice versa are symmetric and complementary, ensuring the semantic consistency assumption of relation pairs.\n\n\n\\section{The Proposed Method}\n\\label{sec3}\n\nThe overall framework of the proposed HelixFormer is illustrated in Fig. \\ref{fig2}. For easy understanding, we first give the problem formulation and the episodic training strategy for Few-Shot Learning (FSL) task. Then we introduce the proposed HelixFormer and discuss its superiority over several variants of existing cross-attention models. Finally, we give the overall objectives and cross-attention learning strategy of our model.\n\n\\subsection{Preliminaries}\n\n\\noindent\\textbf{FSL Setting.} Given a set of base classes {\\small $D_{base}$} and a CNN-based feature embedding network (or backbone) $F$, the purpose of few-shot learning is to learn a task-agnostic $F$ on {\\small $D_{base}$} via an episodic task sampling strategy, so that the $F$ can be generalized to novel classes {\\small $D_{novel}$}, where {\\small $D_{base} \\cap D_{novel} = \\emptyset$}.\nFor a typical FSL setting, each episodic task represents an $N$-way $K$-shot classification task, where both support set $S$ and query set $Q$ are sampled from the same $N$ classes. During the meta-training stage, each episodic task is sampled from the base classes {\\small $D_{base}$}.\n\n\\begin{figure*}\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.5cm}\n\\centering\n\\includegraphics[height=6.4cm]{ACM_framework.png}\n\\caption{The overview of the proposed HelixFormer, which takes a pair of support-query features {\\small $(f_S, f_Q)$} extracted by a CNN backbone as its input, and outputs a pair of {\\small $(\\hat{f}_S, \\hat{f}_Q)$} for doing the subsequent object classification. Note that for simplicity, we omit the multi-head attention in this figure.}\n\\label{fig2}\n\\end{figure*}\n\n\\noindent\\textbf{Two-branch Baseline Introduction.} Inspired by the two-branch network structure to learn semantic representations in the relation network (RN)~\\cite{sung2018learning}, we employ the RN as our baseline model. As illustrated in Fig. \\ref{fig2}, given an input image pair $(x_S, x_Q)$, the RN first produces the high-level semantic feature pairs $(f_S, f_Q)$ via a convolution-based backbone $F$. Then, a classification network $H$ is used to predict whether the query image $x_Q$ has the same class label with the $n$-th class support image $x_{n,S}$. 
Thus, the loss function of our baseline model $L_{cls}(F, H;x_S)$ can be formulated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\begin{aligned}\n\\label{eq1}\nL_{cls}(F,H;x_S) = &\\sum\\nolimits_{(x_Q,y_Q)\\in Q} log \\ P(y_Q=n|x_Q;x_S) \\\\& = \\frac{exp(H([F(x_{n,S}), F(x_Q)]))}{\\sum_{n^{\\prime} \\in N} exp(H([F(x_{n^{\\prime},S)}, F(x_Q)]))}\n\\end{aligned}\n\\end{equation}\n\\end{small}\n\n\\noindent where $[F(x_{n,S}), F(x_Q)]$ denotes the concat operation between $F(x_{n,S})$ and $F(x_Q)$, and $y_Q$ is the label of query image. During the meta-test or model evaluation stage, each episodic task is sampled from the novel classes {\\small $D_{novel}$}, given the prototype of $K$ labeled support images of the $n$-class $x_{n,S}$. Note that the label $y_Q$ is available for model training on {\\small $D_{base}$}, but it can only be used for model evaluation on {\\small $D_{novel}$}.\n\n\\subsection{HelixFormer}\n\\label{sec3}\n\nThe purpose of this work is to capture the cross-image object semantic relations for improving the generalization ability of fine-grained model. By means of the multi-head attention module in Transformer, a many-to-many matching process of semantically related regions can be established. \\textbf{Note that} \\textit{just a single HelixFormer} is sufficient in capturing the cross-attention, and please refer to our supplementary material for the study of stacking HelixFormer.\n\n\\noindent\\textbf{Bidirectional Relation Mining Process.} Given a pair of images $(x_S, x_Q)$ sampled from the $S$ and $Q$ respectively, the backbone $F$ is first used to extract a pair of high-level features $(f_S, f_Q)$ where $f_S = F(x_S) \\in \\mathbb{R}^{C\\times H\\times W}$, and $C$ denotes the channel number, $H$ and $W$ are the height and width of the features, respectively. Although the feature pairs $(f_S, f_Q)$ contain rich semantics, they lack interaction from each other, and do not consider the cross-object semantic relations between support-query images.\n\nTo fully encode the relations between the two branches, we treat the feature maps $f \\in \\mathbb{R}^{C\\times H\\times W}$ as a sequence of $HW$ tokens, with each token having $C$ channels, which can be formulated as $f = [f^1, f^2, ..., f^{HW}]$, where $f^i \\in \\mathbb{R}^{C}$.\n\nIn detail, given the token embedding with weight parameters $W_S^e$, $W_S^k$, $W_S^v$ for support branch $S$, and parameters $W_Q^e$, $W_Q^k$, $W_Q^v$ for query branch $Q$, the query vector $e^i$, key vector $k^i$, and value vector $v^i$ can be calculated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq2.0}\n\\left\\{ \\begin{array}{l}\ne_S^i = W_S^e \\ f_S^i\\\\\nk_S^i = W_S^k \\ f_S^i\\\\\nv_S^i = W_S^v \\ f_S^i\n\\end{array} \\right.\\quad \\quad \\left\\{ \\begin{array}{l}\ne_Q^i = W_Q^e \\ f_Q^i\\\\\nk_Q^i = W_Q^k \\ f_Q^i\\\\\nv_Q^i = W_Q^v \\ f_Q^i\n\\end{array} \\right.\n\\end{equation}\n\\end{small}\n\n\\noindent where for avoiding the confusion of symbol definition, $e^i$ denotes the query vector in the RMP, and $Q$ represents the query branch for the whole pipeline. Besides, according to the ablation studies in Sec. \\ref{sec4.4}, HelixFormer employs the convolution-based token embedding and removes the position embedding, since local spatial information has been encoded in the convolution feature maps. 
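For concreteness, a minimal PyTorch-style sketch of this token embedding step (Eq.~\\ref{eq2.0}) is given below; it is an illustration rather than the released implementation, and in particular the $1\\times 1$ convolution kernel size, the module name, and the batch layout are assumptions made here:
\\begin{verbatim}
import torch
import torch.nn as nn

class TokenEmbed(nn.Module):
    # Maps a CNN feature map f (B, C, H, W) to HW tokens of
    # dimension C and produces the query/key/value tokens of
    # Eq. (2); the channel number is kept unchanged and no
    # position embedding is added.
    def __init__(self, channels):
        super().__init__()
        self.to_e = nn.Conv2d(channels, channels, 1, bias=False)
        self.to_k = nn.Conv2d(channels, channels, 1, bias=False)
        self.to_v = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, f):
        tokens = lambda t: t.flatten(2).transpose(1, 2)  # (B, HW, C)
        return (tokens(self.to_e(f)),
                tokens(self.to_k(f)),
                tokens(self.to_v(f)))
\\end{verbatim}
The token sequences produced in this way for the support and query branches are then cross-multiplied to form the attention scores of the RMP described next.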
Further, we achieve the RMP by a symmetrical cross-attention from two directions: 1) We obtain support branch-related features using the value vector $v_Q^i$ from another branch $Q$, formulated as Q$\\rightarrow$S; 2) We also obtain query branch-related features using the value vector $v_S^i$ of the support branch $S$, formulated as S$\\rightarrow$Q.\n\n\\textbf{For Q$\\rightarrow$S direction}, let $\\mathbf{A}_{Q,S} \\in \\mathbb{R}^{HW \\times HW}$ denote the matrix of attention scores obtained via the matrix multiplication as follows:\n\n\\begin{equation}\n\\label{eq2}\n\\mathbf{A}_{Q,S} = K_Q \\ E_S^\\mathrm{T}\n\\end{equation}\n\n\\noindent where $K_Q = [k_Q^1, ..., k_Q^{HW}] \\in \\mathbb{R}^{HW \\times C}$ and $E_S = [e_S^1, ..., e_S^{HW}] \\in \\mathbb{R}^{HW \\times C}$, which can be obtained using Eq. \\ref{eq2.0}. Note that the designed token embedding way does not change the channel number for each input token $f^i \\in \\mathbb{R}^C$. Moreover, to perform normalization for attention scores in Transformer and find the semantically related regions according to clues from another branch $Q$, a softmax layer with a scaling factor is employed as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq3}\nR_{Q,S} = Softmax(\\mathbf{A}_{Q,S} \/ \\sqrt{C}) \\ V_Q\n\\end{equation}\n\\end{small}\n\n\\noindent where $V_Q = [v_Q^1, v_Q^2, ..., v_Q^{HW}] \\in \\mathbb{R}^{HW \\times C}$, and $R_{Q,S} \\in \\mathbb{R}^{HW \\times C}$ represents the Cross-image Semantic Relation Maps (CSRMs), encoding the patch-level semantic relations from query to support branch. Then, the CSRMs are reshaped to the same dimension as the backbone features $f_S \\in \\mathbb{R}^{C \\times H \\times W}$, in order to enhance the semantically similar backbone features in the REP operation.\n\n\\textbf{For S$\\rightarrow$Q direction}, the CSRMs $R_{S,Q} \\in \\mathbb{R}^{HW \\times C}$ also can be easily obtained by performing a symmetrical process described in Eqs. \\ref{eq2} and \\ref{eq3}, which can be written as follows:\n\n\\vspace{-0.10cm}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{l}\n\\mathbf{A}_{S,Q} = K_S \\ E_Q^\\mathrm{T}, \\\\\nR_{S,Q} = Softmax(\\mathbf{A}_{S,Q} \/ \\sqrt{C}) \\ V_S.\n\\end{array}\n\\end{equation}\n\\end{small}\n\n\\noindent\\textbf{Representation Enhancement Process.} \\zb{Based on the above RMP, bidirectional semantic relations of support-to-query features and query-to-support features have been symmetrically encoded into the matrix of attention scores, from both directions $\\mathbf{A}_{Q,S}$ and $\\mathbf{A}_{S,Q}$, so that these cross-image object semantically-similar parts can be first found}. In this part, we design a REP that can further guide the classification network to learn how to distinguish these semantically similar features obtained by the RMP.\n\nGiven the high-level features $(f_S, f_Q)$ learned from the CNNs-based backbone, and the CSRMs $(R_{Q,S}, R_{S,Q})$ calculated from the Q$\\rightarrow$S and S$\\rightarrow$Q, the REP can be formulated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq5}\n\\left\\{ \\begin{array}{l}\n\\hat{f}_S = MLP(Norm(f_S \\odot R_{Q,S})) \\\\\n\\hat{f}_Q = MLP(Norm(f_Q \\odot R_{S,Q}))\n\\end{array} \\right.\n\\end{equation}\n\\end{small}\n\n\\noindent where $\\odot$ denotes an element-wise multiplication operation, and CSRMs are defined as a soft relation mask that can strengthen the features $f$ extracted by CNNs-based backbone. Besides, $MLP$ is the Feed Forward module as illustrated in Fig. 
\\ref{fig2}, which allows the backbone to focus on the subtle feature differences of the predicted semantically similar regions. The experimental analyses of the REP are shown in Sec. \\ref{sec4.4}. Overall, \\zb{$\\hat{f}_S$ and $\\hat{f}_Q$ are defined as the output features of the REP, and then will be fed into the classification head as illustrated in Fig. \\ref{fig2}.}\n\n\n\n\n\\begin{figure*}\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.3cm}\n\\centering\n\\includegraphics[height=5.8cm]{ACM_fig3.png}\n\\caption{Other Transformer-based cross-attention model designs. (a) Q$\\rightarrow$S: Cross-attention from query to support; (b) S$\\rightarrow$Q: Cross-attention from support to query; (c) S$\\rightleftharpoons$Q: Bidirectional asymmetric cross-attention, which is a sequential stack of the above S$\\rightarrow$Q and Q$\\rightarrow$S variants; (d) Q$\\rightleftharpoons$S: Bidirectional asymmetric cross-attention by stacking the Q$\\rightarrow$S and S$\\rightarrow$Q variants.}\n\\label{fig3}\n\\end{figure*}\n\n\\noindent\\textbf{Differences among Transformer-based Cross-attention Variants.}\n\\label{sec3.3}\nGiven high-level feature pairs $(f_S, f_Q)$ extracted from CNN-based backbone $F$, there are many Transformer-based alternatives to model support-query patch-level relations of the extracted high-level features pairs $(f_S, f_Q)$. They can be categorized into three classes as follows.\n\n\n\n\n\\textbf{1)} \\zb{Unidirectional cross-attention structure: As shown in Figs. \\ref{fig3}(a) and \\ref{fig3}(b), only the features from a single branch (support or query branch) are enhanced by means of cross-attention from another branch. For such a case, enhanced features $\\hat{f}$ and original backbone features $f$ are used as the input of classification head. Such a unidirectional cross-attention way fails to achieve high classification accuracy, since this way only considers the semantic enhancement of only a single branch. }\n\n\\textbf{2)} \\zb{Bidirectional but asymmetric structure: As illustrated in Figs. \\ref{fig3}(c) and \\ref{fig3}(d), S$\\rightleftharpoons$Q (or Q$\\rightleftharpoons$S) is a sequential stack of the above S$\\rightarrow$Q and Q$\\rightarrow$S variants (or Q$\\rightarrow$S and S$\\rightarrow$Q variants), where both the query features and support features are enhanced. But this approach does not consider support-query and query-support feature matching processes in parallel, which is detrimental to the cross-image object relation consistency assumption.}\n\n\\textbf{3)} \\zb{Bidirectional and symmetrical structure: S$\\leftrightarrow$Q refers to HelixFormer, which is shown in Fig.2. Compared with the above structures, HelixFormer is a more general form of cross-attention models, and thus has much less inductive bias. For unidirectional or bidirectional asymmetric structure, a kind of \\textit{uneven\/biased} cross-attention learning between support and query patch-level features is injected into the whole network. But for HelixFormer, the learned cross-image patch-level attention relations are symmetric and complementary. The detailed experimental and visual analyses are shown in Sec. 
\\ref{sec4.4}}\n\n\n\\vspace{-0.20cm}\n\\subsection{Overall Objectives and Cross-attention Learning Strategy}\n\n\\noindent\\textbf{Overall Objectives.} For finding a many-to-many semantic relation matching between the support and query branches, we utilize the multi-head attention mechanism, which consists of multiple attention layers with different token embedding parameters. The overall loss function of the $n$-th class on base classes {\\small $D_{base}$} of HelixFormer can be written as follows:\n\n\\begin{small}\n\\begin{equation}\n\\begin{aligned}\n\\label{eq6}\nL_{cls}(F, W, \\phi, H; x_S) = &\\sum\\nolimits_{(x_Q,y_Q)\\in Q} log \\ P(y_Q=n|x_Q; x_S) \\\\& = \\frac{exp(H([\\hat{f}_{n,S},\\hat{f}_Q]))}{\\sum_{n^{\\prime} \\in N} exp(H([\\hat{f}_{n^{\\prime},S}, \\hat{f}_Q]))}\n\\end{aligned}\n\\end{equation}\n\\end{small}\n\n\\noindent where $W$ and $\\phi$ denote learnable parameters in RMP and REP, respectively.\n\n\n\\begin{table*}[]\n\\centering\n\\setlength{\\tabcolsep}{1.25mm}{\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Setting} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{Stanford Dogs} & \\multicolumn{2}{c}{Stanford Cars} & \\multicolumn{2}{c}{NABirds} \\\\\n & & & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRelationNet~(CVPR-18)~\\cite{sung2018learning} & In. & Conv-4 & \\, 43.29\u00b10.46$^\\diamond$ & \\, 55.15\u00b10.39$^\\diamond$ & \\, 47.79\u00b10.49$^\\diamond$ & \\, 60.60\u00b10.41$^\\diamond$ & 64.34\u00b10.81* & 77.52\u00b10.60* \\\\\nGNN$^\\dagger$~(ICLR-18)~\\cite{garcia2017few} & In. & Conv-4 & 46.98\u00b10.98 & 62.27\u00b10.95 & 55.85\u00b10.97 & 71.25\u00b10.89 & - & - \\\\\nCovaMNet~(AAAI-19)~\\cite{li2019distribution} & In. & Conv-4 & 49.10\u00b10.76 & 63.04\u00b10.65 & 56.65\u00b10.86 & 71.33\u00b10.62 & 60.03\u00b10.98* & 75.63\u00b10.79* \\\\\nDN4~(CVPR-19)~\\cite{li2019revisiting} & In. & Conv-4 & 45.73\u00b10.76 & 66.33\u00b10.66 & 61.51\u00b10.85 & 89.60\u00b10.44 & 51.81\u00b10.91* & 83.38\u00b10.60* \\\\\nLRPABN~(TMM-20)~\\cite{huang2020low} & In. & Conv-4 & 45.72\u00b10.75 & 60.94\u00b10.66 & 60.28\u00b10.76 & 73.29\u00b10.58 & 67.73\u00b10.81* & 81.62\u00b10.58* \\\\\nMattML~(IJCAI-20)~\\cite{zhu2020multi} & In. & Conv-4 & 54.84\u00b10.53 & 71.34\u00b10.38 & 66.11\u00b10.54 & 82.80\u00b10.28 & - & - \\\\\nATL-Net~(IJCAI-20)~\\cite{dong2020learning} & In. & Conv-4 & 54.49\u00b10.92 & 73.20\u00b10.69 & 67.95\u00b10.84 & 89.16\u00b10.48 & - & - \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & In. & Conv-4 & 49.37\u00b10.20 & 67.13\u00b10.17 & 58.90\u00b10.22 & 79.65\u00b10.15 & - & - \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & In. & Conv-4 & 55.53\u00b10.45 & 71.68\u00b10.36 & 70.13\u00b10.48 & 84.29\u00b10.31 & 75.60\u00b10.49 & 87.21\u00b10.29 \\\\\nOurs & In. & Conv-4 & \\textbf{59.81}\u00b10.50 & \\textbf{73.40}\u00b10.36 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 & \\textbf{78.63}\u00b10.48 & \\textbf{90.06}\u00b10.26 \\\\ \\hline\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & In. & ResNet-12 & 64.15\u00b10.49 & 78.28\u00b10.32 & 77.03\u00b10.46 & 88.85\u00b10.46 & 83.76\u00b10.44 & 92.61\u00b10.23 \\\\\nOurs & In. 
& ResNet-12 & \\textbf{65.92}\u00b10.49 & \\textbf{80.65}\u00b10.36 & \\textbf{79.40}\u00b10.43 & \\textbf{92.26}\u00b10.15 & \\textbf{84.51}\u00b10.41 & \\textbf{93.11}\u00b10.19 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab1}5-way classification accuracy ($\\%$) on the Stanford Dogs, Stanford Cars and NABirds datasets respectively, $^\\diamond$, and * represent that the corresponding results are reported in~\\cite{zhu2020multi}, and~\\cite{huang2020low}, respectively. Other results are reported in their original papers. ``\\;In.\\;'' denotes the inductive few-shot learning.}\n\\vspace{-0.4cm}\n\\end{table*}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{CUB} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nFEAT~(CVPR-20)~\\cite{ye2020few} & Conv-4 & 68.87\u00b10.22 & 82.90\u00b10.15 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & Conv-4 & 69.64 & 87.31 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & Conv-4 & 73.48 & 88.43 \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & Conv-4 & 73.07\u00b10.46 & 86.24\u00b10.29 \\\\\nOurs & Conv-4 & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 \\\\ \\hline\nRelationNet*~(CVPR-18)~\\cite{sung2018learning} & ResNet-34 & 66.20\u00b10.99 & 82.30\u00b10.58 \\\\\nDeepEMD~(CVPR-20)~\\cite{zhang2020deepemd} & ResNet-12 & 75.65\u00b10.83 & 88.69\u00b10.50 \\\\\nICI~(CVPR-20)~\\cite{wang2020instance} & ResNet-12 & 76.16 & 90.32 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & ResNet-12 & 78.47 & 90.90 \\\\\nFRN (Baseline)~\\cite{wertheimer2021few} & ResNet-12 & 80.80\u00b10.20 & - \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & ResNet-12 & \\textbf{83.16} & \\textbf{92.59} \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & ResNet-12 & 77.77\u00b10.44 & 89.87\u00b10.24 \\\\\nOurs (Baseline) & ResNet-12 & 72.61\u00b10.47 & 85.60\u00b10.29 \\\\\nOurs & ResNet-12 & 81.66\u00b10.30 & 91.83\u00b10.17 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab2}5-way classification accuracy ($\\%$) on the CUB (using bounding-box cropped images). 
``\\;FRN (Baseline)\\;'' represents the classification results achieved by \\textbf{their baseline model}.}\n\\vspace{-0.4cm}\n\\end{table}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{1.5mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{Aircraft} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nProtoNet~(NIPS-17)~\\cite{snell2017prototypical} & Conv-4 & 47.72 & 69.42 \\\\\nDSN~(CVPR-20)~\\cite{simon2020adaptive} & Conv-4 & 49.63 & 66.36 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & Conv-4 & 49.67 & 69.06 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & Conv-4 & 53.20 & 71.17 \\\\\nOurs & Conv-4 & \\textbf{70.37}\u00b10.57 & \\textbf{79.80}\u00b10.42 \\\\ \\hline\nProtoNet~(NIPS-17)~\\cite{snell2017prototypical} & ResNet-12 & 66.57 & 82.37 \\\\\nDSN~(CVPR-20)~\\cite{simon2020adaptive} & ResNet-12 & 68.16 & 81.85 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & ResNet-12 & 65.60 & 80.20 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & ResNet-12 & 70.17 & \\textbf{83.81} \\\\\nOurs & ResNet-12 & \\textbf{74.01}\u00b10.54 & 83.11\u00b10.41 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab3}5-way classification accuracy ($\\%$) on the Aircraft dataset.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\\noindent\\textbf{Cross-attention Learning Strategy.} To transfer the semantic relations from base classes {\\small $D_{base}$} to novel classes {\\small $D_{novel}$}, we employ a two-stage training strategy. Firstly, to ensure that the backbone $F$ has sufficient high-level semantic knowledge for subsequent cross-attention matching process, the $F$ is trained on base classes {\\small $D_{base}$} by optimizing Eq. \\ref{eq1}. Secondly, we insert the HelixFormer at the end of backbone $F$, and finetune the entire framework to compare the support and query images by optimizing Eq. \\ref{eq6} on {\\small $D_{base}$}.\n\n\\section{Experiments}\nWe evaluate the proposed method on five FSFGL benchmarks including Stanford Dogs, Stanford Cars, NABirds, CUB, and Aircraft. Additionally, we also perform cross-domain few-shot experiments to further show the transferability and adaptability of the HelixFormer. The following experiments are implemented by Pytorch, and all images are resized to 84$\\times$84 pixels for fair comparison.\n\n\\vspace{-0.10cm}\n\\subsection{Dataset}\n\\label{sec4.1}\n\\vspace{-0.10cm}\n\\noindent\\textbf{Stanford Dogs} \\cite{khosla2011novel} contains a total of 20580 images and 120 sub-classes of dogs. Following~\\cite{zhu2020multi}, we adopt 70, 20, 30 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{Stanford Cars} \\cite{krause20133d} consists of 16,185 images from 196 sub-classes, and we adopt 130, 17, 49 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{NABirds} \\cite{van2015building} provides 555 sub-classes of birds from North American. We use 350, 66, 139 categories for meta-train, meta-validation and meta-test, respectively, which is consistent with~\\cite{huang2020low}. \\textbf{CUB} \\cite{wah2011caltech} has 11,788 bird images containing 200 classes. We follow the commonly used split way \\cite{tang2020revisiting, wertheimer2021few}, which employs 100, 50, and 50 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{Aircraft} \\cite{maji2013fine} includes 100 classes and 10000 aircraft images. 
Following the split way in \\cite{wertheimer2021few}, we use 50, 25, and 25 classes for meta-train, meta-validation and meta-test, respectively.\n\n\\vspace{-0.25cm}\n\\subsection{Experimental Setup}\n\\label{sec4.2}\n\\vspace{-0.10cm}\n\\noindent\\textbf{Stage One: Pre-training Backbone.} Following the setting in \\cite{chen2020new, snell2017prototypical, rusu2018meta, ye2020few}, we insert a fully-connected layer at the end of the selected backbone such as Conv-4 or ResNet-12, and train the backbone on base classes {\\small $D_{base}$}. In this stage, the backbone is trained from scratch using SGD optimizer with a batch size of 128, a momentum of 0.9, a weight decay of 0.0005, and an initial learning rate of 0.1. To keep consistent with the setting in \\cite{wu2021object}, the learning rate decays at 85 and 170 epochs. We remove the fully-connected layer for performing the next meta-training stage.\n\n\\noindent\\textbf{Stage Two: Meta-training HelixFormer.} In this stage, we first insert the proposed HelixFormer at the end of the pre-trained backbone, and then finetune the whole model to perform cross-attention for each input image pair. A learning rate of 0.001 is employed for all modules. SGD with a weight decay of 0.001 and Adam are used to optimize the backbone and HelixFormer, respectively. The whole training process lasts 130 epochs, and the learning rate decays to 0.0001 and 0.00001 at 70 and 110 epochs, respectively. \\zb{The number of multi-head attention in the single-layer HelixFormer is set to $2$, and the corresponding experimental analysis is shown in Sec. \\ref{sec4.4}}. For model evaluation, we report the results with 95$\\%$ confidence intervals over 2000 test episodes, and the best model is chosen according to its accuracy on the validation set.\n\n\n\\vspace{-0.20cm}\n\\subsection{Experimental Results}\n\\label{sec4.3}\n\\vspace{-0.10cm}\n\n\\noindent\\textbf{Few-shot Fine-grained Image Classification Results.}\nIt is generally recognized that spatially local feature relations across different fine-grained objects are particularly important for fine-grained classification. Thus, we first validate the effectiveness of HelixFormer on a wide range of fine-grained benchmark datasets, which is shown in Tables \\ref{tab1}-\\ref{tab3}.\n\n\nFirst, Table \\ref{tab1} reports the classification accuracies on the Standard Dogs, Standard Cars, and NABirds. It can be seen from Table \\ref{tab1} that the proposed HelixFormer can be applied on different backbones such as Conv-4 \\cite{sung2018learning} and ResNet-12 \\cite{chen2019closer}. Moreover, we compare the proposed HelixFormer with state-of-the-art general few-shot learning methods (including DN4 \\cite{li2019revisiting} and FRN \\cite{wertheimer2021few}, \\textit{etc.}) and few-shot fine-grained image classification methods (including MattML \\cite{zhu2020multi}, LSC+SSM \\cite{wu2021object}, \\textit{etc.}). FRN considers the few-shot learning as a feature reconstruction problem by reducing the reconstruction errors of images belonging to the same classes, while LSC+SSM attempts to align spatial distribution of feature maps between support and query images, both of which are state-of-the-art general and fine-grained few-shot classification methods, respectively. 
The experimental results show that the proposed HelixFormer outperforms these methods with a considerable margin, further demonstrating that learning the cross-attention using the proposed double-helix structure can boost the accuracy of fine-grained object recognition.\n\nFurthermore, we also conduct experiments on two more challenging datasets (CUB and Aircraft), as shown in Table \\ref{tab2} and \\ref{tab3}. We also observe a consistent accuracy increase using our method. Besides, we would like to emphasize that the HelixFormer significantly boosts the accuracy of Baseline from $72.61\\%$ to $81.66\\%$ on CUB dataset and has been verified on \\textit{\\textbf{five} commonly used fine-grained datasets}, comprehensively showing the effectiveness of HelixFormer on fine-grained recognition.\n\n\n\\noindent\\textbf{Few-shot Fine-grained Cross-domain Image Classification Results.}\nConsidering that the distribution differences between the training and test data often exist, we conduct few-shot fine-grained recognition experiments under cross-domain setting, to validate the effectiveness of the HelixFormer in alleviating the impact of the domain differences, and the results are reported in Table \\ref{tab4}.\n\nWe carry out the cross-domain adaptation from generic bird categories (widely collected from Internet) to a particular country (America). It can be seen from Table \\ref{tab4} that our method has higher accuracy than both the Baseline model and LSC+SSM model \\cite{wu2021object}, demonstrating that HelixFormer also can improve the transferability and domain adaptability of the existing models.\n\n\n\\begin{table}[t]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{CUB $\\to$ NABirds} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nLSC+SSM~(Baseline)~\\cite{wu2021object} & ResNet-12 & 45.70\u00b10.45 & 63.84\u00b10.40 \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & ResNet-12 & 48.50\u00b10.48 & 66.35\u00b10.41 \\\\ \\hline\nBaseline (RN~\\cite{sung2018learning}) & Conv-4 & 43.55\u00b10.45 & 55.53\u00b10.42 \\\\\nOurs & Conv-4 & \\textbf{47.87}\u00b10.47 & \\textbf{61.47}\u00b10.41 \\\\\\hline\nBaseline (RN~\\cite{sung2018learning}) & ResNet-12 & 46.22\u00b10.45 & 63.23\u00b10.42 \\\\\nOurs & ResNet-12 & \\textbf{50.56}\u00b10.48 & \\textbf{66.13}\u00b10.41 \\\\ \\hline\n\\end{tabular}\n\\caption{\\label{tab4} 5-way few-shot fine-grained classification results by adapting from the CUB-trained model to NABirds dataset using different backbones.}\n}\n\\vspace{-0.6cm}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=4.8cm, width=6.0cm]{acm_fig4.png}\n \\vspace{-6pt}\n \\caption{Visualization results of the backbone features and output features using different cross-attention model designs, respectively.}\n \\label{fig4}\n\\end{figure}\n\n\n\\vspace{-0.10cm}\n\\subsection{Insight Analyses}\n\\label{sec4.4}\n\\vspace{-0.10cm}\n\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.2mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Token} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nB\/L (RN~\\cite{sung2018learning}) & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\leftrightarrow$Q & Fc. & 73.80\u00b10.47 & 89.34\u00b10.27 & 72.94\u00b10.50 & 88.85\u00b10.26 \\\\\nRN with S$\\leftrightarrow$Q & Cv. 
& \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab5} The study of token embedding by employing fully-connected embedding (\\textit{i.e.} Fc.) or convolutional embedding (\\textit{i.e.} Cv.), respectively. B\/L denotes the baseline model~\\cite{sung2018learning}, and S$\\leftrightarrow$Q denotes the proposed HelixFormer.}\n\\end{table}\n\n\n\n\\noindent\\textbf{Results of using Fully-connected or Convolutional-based Token Embedding.}\nThe choice on feature embedding way (in a fully-connected or convolutional way) of the input tokens is essential to guarantee the good performance of the proposed HelixFormer. Table \\ref{tab5} reports the results of using different embedding ways, showing that the accuracy using convolution projection outperforms that using fully-connected token embedding. The reason is that few-shot fine-grained recognition needs to capture more local detail features, which are exactly provided by the local convolution projection.\n\n\\begin{figure*}\n\\vspace{-6pt}\n \\centering\n \\includegraphics[height=4.0cm, width=17.3cm]{ACM_vis_1.png}\n \\vspace{-6pt}\n \\caption{Visualization results of features extracted by \\zb{backbone, the RMP, and the REP, respectively}. Due to the global feature variations (\\textit{e.g.,} the color of an object) and local feature changes (\\textit{e.g.,} headlight translation of a car and beak rotation of a bird), it is hard to find the cross-image patch-level matching of semantic features, as shown in the heatmaps from the backbone. By HelixFormer, the key cross-image object semantic relations, such as birds' wings or cars' headlights, can be effectively matched. \\textit{Please refer to our supplementary material for more visualization results.}}\n \\label{fig5}\n\\end{figure*}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with Q$\\rightarrow$S & 75.37\u00b10.46 & 88.78\u00b10.26 & 72.12\u00b10.50 & 88.23\u00b10.26 \\\\\nRN with S$\\rightarrow$Q & 77.24\u00b10.49 & 90.06\u00b10.27 & 73.34\u00b10.51 & 89.44\u00b10.26 \\\\\nRN with S$\\leftrightarrow$Q & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab6} Results using different cross-attention structures: the unidirectional cross-attention (including Q$\\rightarrow$S and S$\\rightarrow$Q) and the bidirectional structure. Q$\\rightarrow$S denotes that support features are reconstructed only using query images, according to the semantic relations between support-query features, and vice versa. 
The definition of S$\\leftrightarrow$Q follows Table~\\ref{tab5}.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.38mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{SY?} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\rightleftharpoons$Q & \\scriptsize{\\XSolid} & 77.69\u00b10.48 & 89.39\u00b10.27 & 74.56\u00b10.50 & \\textbf{89.89}\u00b10.22 \\\\\nRN with Q$\\rightleftharpoons$S & \\scriptsize{\\XSolid} & 77.46\u00b10.47 & 89.86\u00b10.25 & 74.42\u00b10.50 & 88.63\u00b10.24 \\\\\nRN with S$\\leftrightarrow$Q & \\Checkmark & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & 89.68\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab7} The study of bidirectional cross-attention using asymmetric or symmetric structure. SY is short for symmetric, and S$\\rightleftharpoons$Q denotes the asymmetric but bidirectional structure and the definition of S$\\leftrightarrow$Q follows Table~\\ref{tab5}.}\n\\vspace{-12pt}\n\\end{table}\n\n\n\\begin{table}[]\n\\centering\n\\small\n\\setlength{\\tabcolsep}{0.65mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{REP?} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\leftrightarrow$Q & w\/o & 77.90\u00b10.45 & 89.39\u00b10.27 & 73.52\u00b10.49 & 88.48\u00b10.25 \\\\\nRN with S$\\leftrightarrow$Q & with & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab8} The results of removing the REP in HelixFormer.}\n\\vspace{-12pt}\n\\end{table}\n\n\n\\noindent\\textbf{Unidirectional or Bidirectional RMP.}\nIn this part, we study the impact of using unidirectional or bidirectional cross-attention structure (as introduced in Fig. \\ref{fig3} of Sec. \\ref{sec3.3}) on the classification results. Experimental results in Table \\ref{tab6} show that compared with the unidirectional structure (Q$\\rightarrow$S or S$\\rightarrow$Q), the bidirectional cross-attention structure (S$\\leftrightarrow$Q) can well improve the generalization ability of few-shot learning models.\n\n\\noindent\\textbf{Asymmetric and Symmetric RMP.} For bidirectional attention, we observe from Table \\ref{tab7} that a symmetrical structure has an advantage in improving the model accuracy. We further visualize their heatmaps in Fig. \\ref{fig4}, and find that the symmetrical structure captures relatively more accurate inter-object semantic relations.\n\n\\noindent\\textbf{The Role of REP.} We further show the effectiveness of the REP in the proposed HelixFormer, by directly feeding the CSRMs pairs $(R_{Q,S}, R_{S,Q})$ as the input of the classification head. 
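For clarity, the two settings compared in Table \\ref{tab8} can be sketched as follows. This PyTorch-style sketch is illustrative only: the normalization layer, the hidden width of the MLP, and sharing one REP module across the two branches are simplifying assumptions rather than the exact released configuration, and the tokens are kept in the $(HW, C)$ layout instead of being reshaped back to $C\\times H\\times W$.

\\begin{verbatim}
import torch.nn as nn

class REP(nn.Module):
    # Representation Enhancement Process: f_hat = MLP(Norm(f * R)).
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.GELU(),
                                 nn.Linear(channels, channels))

    def forward(self, f, csrm):              # f, csrm: (B, HW, C)
        return self.mlp(self.norm(f * csrm)) # element-wise product, Norm, MLP

def head_inputs(f_s, f_q, r_qs, r_sq, rep=None):
    if rep is not None:                      # "with REP": enhanced features
        return rep(f_s, r_qs), rep(f_q, r_sq)
    return r_qs, r_sq                        # "w/o REP": CSRMs fed directly
\\end{verbatim}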
We observe the performance deterioration by comparing the last two rows in Table \\ref{tab8}, showing the importance of learning to distinguish subtle feature differences by the REP.\n\n\\noindent\\textbf{Multi-head Attention in HelixFormer.} Table \\ref{tab9} shows the 5-way 1-shot classification accuracy by changing the number of multi-head attention using Conv-4 backbone.\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{1.2mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\nMethod & \\# Multi-head & CUB & Stanford Cars \\\\ \\hline\nRN with S$\\leftrightarrow$Q & 1 & 77.52\u00b10.49 & 74.62\u00b10.50 \\\\\nRN with S$\\leftrightarrow$Q & 2 & \\textbf{79.34}\u00b10.45 & \\textbf{75.46}\u00b10.37 \\\\\nRN with S$\\leftrightarrow$Q & 4 & 78.68\u00b10.47 & 73.62\u00b10.48 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab9} Results of changing the number of multi-head attention.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\n\\begin{table}[]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}{\n\\begin{tabular}{c|c|c|c|c}\n\\hline\nMethod & Backbone & \\#FLOPs. & \\#Params. & CUB \\\\ \\hline\nRN~\\cite{sung2018learning} & ResNet-12 & 2.48G & 9.1M & 72.61\u00b10.47 \\\\\nRN~\\cite{sung2018learning} & ResNet-50 (\\textbf{Deeper}) & 3.69G & 24.6M & 69.00\u00b10.52 \\\\\nRN~\\cite{sung2018learning} & ResNet-101 (\\textbf{Deeper}) & 5.98G & 43.2M & 68.71\u00b10.54 \\\\\nRN with S$\\leftrightarrow$Q & ResNet-12 & 2.53G & 9.5M & \\textbf{81.66}\u00b10.30 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab10} 5-way 1-shot classification accuracy (\\%) for relation network baseline~\\cite{sung2018learning} with different backbones.}\n\\vspace{-0.6cm}\n\\end{table}\n\n\n\\noindent\\textbf{Study of Parameter Size and Feature Visualization.}\nIn the few-shot learning community, models with smaller parameter sizes are usually adopted to avoid over-fitting for the limited number of training samples. In other words, increasing the number of model parameters may not improve its generalization performance. Table \\ref{tab10} shows that the HelixFormer more effectively boosts the model accuracy, compared with other models with more parameters and FLOPs. Furthermore, we visualize the support and query features which are extracted from the backbone, the proposed RMP, and REP of HelixFormer respectively. The visualized results are illustrated in Fig. \\ref{fig5}, and the heatmaps are obtained via a max-pooling operation along the channel dimension of feature maps to preserve the spatial information. Besides, we also visualize the support-query features using asymmetric or symmetric cross-attention structure in Fig. \\ref{fig5}. These visualization results illustrate that the semantically similar regions between support and query images can be well matched via HelixFormer.\n\n\\section{Conclusion}\nIn this work, we proposed a Transformer-based double-helix cross-attention model, namely HelixFormer, which is composed of a Relation Mining Process (RMP) to discover the cross-image object semantic relation and produce patch-level cross-relation maps, and a Representation Enhancement Process (REP) to enhance the identified discriminative local features for final recognition. 
Experimental results demonstrate that such a bidirectional and symmetrical structure has the merit of ensuring the cross-object semantic relation consistency, improving the model generalization ability on few-shot fine-grained learning.\n\n\n\\clearpage\n\\section{Acknowledgement}\nThis work is supported by National Natural Science Foundation of China (No. 62071127 and No. 62101137), Zhejiang Lab Project (No. 2021KH0AB05).\n\n\n\n\n\n{\\small\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \n\n\n Real-time detection of various categories of objects in images is one of the\n key tasks in computer vision.\n This topic has been extensively studied in the past a few years due to its\n important applications in surveillance, intelligent video analysis {\\em etc}. \n %\n \n \n \n %\n Viola and Jones proffered the first real-time face detector\n \\cite{Viola2004Robust,Viola2002Fast}.\n To date, it is still considered one of the state-of-the-art, and their framework \n is the basis of many incremental work afterwards. \n Object detection is a highly asymmetric classification\n problem with the exhaustive scanning-window search being used to locate the target in an\n image. Only a few are true target objects among the millions of scanned patches.\n Cascade classifiers \n \n \n \n have been proposed for efficient detection, which takes the asymmetric\n structure into consideration. \n Under the assumption of each node of the cascade classifier makes independent classification\n errors, the detection rate and false positive rate of the entire cascade are:\n $ F_{\\rm dr} = \\prod_{ t =1}^N d_t $ and\n $ F_{\\rm fp} = \\prod_{ t =1}^N f_t $, respectively.\n As pointed out in \\cite{Viola2004Robust,Wu2005Linear}, these two equations \n suggest a {\\em node learning objective}: \n Each node should have an extremely high detection rate $d_t $ \n ({\\em e.g.}, $99.7\\%$) and \n a moderate false positive rate $ f_t $ ({\\em e.g.}, $50\\%$). \n With the above values of $ d_t $ and $ f_t $, assume that \n the cascade has $ N = 20 $ nodes, then $ F_{\\rm dr} \\approx 94\\%$\n and $ F_{\\rm fp} \\approx 10^ {-6} $, which is usually the design goal. \n \n\n\n A drawback of standard boosting like AdaBoost is that it does not \n take advantage of\n the cascade classifier. AdaBoost only minimizes the overall classification error and does\n not minimize the number of false negatives. \n In this sense, the features selected are not optimal for \n the purpose of rejecting negative examples.\n %\n %\n %\n At the feature selection and classifier training level, Viola and Jones\n leveraged the asymmetry property, to some extend, by\n replacing AdaBoost with AsymBoost \\cite{Viola2002Fast}.\n AsymBoost incurs more loss for misclassifying a positive example by simply\n modifying AdaBoost's exponential loss. \n Better detection rates were observed over the standard AdaBoost. Nevertheless, \n AsymBoost addresses the node learning goal {\\em indirectly}\n and still may not be the optimal solution.\n Wu {\\em et al.} explicitly studied the node learning goal and they proposed \n to use linear asymmetric classifier (LAC) and Fisher linear discriminant analysis (LDA)\n to adjust the linear coefficients of the selected weak classifiers\n \\cite{Wu2005Linear,Wu2008Fast}.\n Their experiments indicated that with this post-processing technique, \n the node learning objective can be better met, which is translated into improved \n detection rates. 
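As a quick numerical check of the node learning objective just mentioned, the following sketch evaluates the whole-cascade rates implied by the per-node targets (plain Python; the per-node values and the number of nodes are the illustrative figures quoted above).

\\begin{verbatim}
# Whole-cascade detection and false positive rates under the independence
# assumption: F_dr = prod(d_t), F_fp = prod(f_t).
def cascade_rates(d_t=0.997, f_t=0.5, n_nodes=20):
    return d_t ** n_nodes, f_t ** n_nodes

F_dr, F_fp = cascade_rates()
print(round(F_dr, 3))   # ~0.942, i.e. an overall detection rate of about 94%
print(F_fp)             # ~9.5e-07, i.e. an overall false positive rate near 1e-6
\\end{verbatim}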
\n In Viola and Jones' framework, boosting is used to select \n features and at the same time to\n train a strong classifier. Wu {\\em et al.}'s work separates these two tasks:\n they still use AdaBoost or AsymBoost to select features; and at the second step, \n they build a strong classifier using LAC or LDA. \n Since there are two steps here, in Wu {\\em et al.}'s work \n \\cite{Wu2005Linear,Wu2008Fast}, \n the node learning objective is only considered at the \n second step. At the first step---feature selection---the node learning objective is\n not explicitly considered. We conjecture that\n {\\em further improvement may be gained\n if the node learning objective is explicitly \n taken into account at both steps}. \n We design new boosting algorithms to implement this idea and verify\n this conjecture. \n %\n %\n Our major contributions are as follows. \n \\begin{enumerate}\n \\item\n We develop new boosting-like algorithms via directly\n minimizing the objective function of \n linear asymmetric classifier, which is termed as LACBoost (and\n FisherBoost from Fisher LDA). Both of them can be used to\n select features that is optimal for achieving the node learning goal in training a \n cascade classifier. To our knowledge, this is the first attempt to design such a \n feature selection method. \n \\item\n LACBoost and FisherBoost share similarities with LPBoost \n \\cite{Demiriz2002LPBoost} in the sense that both use\n column generation---a technique originally proposed\n for large-scale linear programming (LP). \n Typically, the Lagrange dual problem \n is solved at each iteration in column generation. We instead solve\n the primal quadratic programming (QP) problem, which has a special structure\n and entropic gradient (EG)\n can be used to solve the problem very efficiently. \n Compared with general interior-point based QP solvers, EG is much faster. \n Considering one needs to solve QP problems a few thousand times for training\n a complete cascade detector, the efficiency improvement is enormous. \n Compared with training an AdaBoost based cascade detector, the time needed \n for LACBoost (or FisherBoost) is comparable.\n \n \n \n \n \n This is because for both cases, the majority of the time is spent on weak\n classifier training and bootstrapping. \n \\item\n We apply LACBoost and FisherBoost to face detection and better performances are observed over\n the state-of-the-art methods \\cite{Wu2005Linear,Wu2008Fast}. The results\n confirm our conjecture and show the effectiveness of LACBoost and FisherBoost. \n LACBoost can be immediately applied to other asymmetric classification problems.\n \\item\n We also analyze the condition that makes the validity of LAC,\n and show that the multi-exit cascade might be more suitable\n for applying LAC learning of \\cite{Wu2005Linear,Wu2008Fast} (and our LACBoost)\n rather than Viola-Jones standard cascade.\n \\end{enumerate}\n Besides these, the LACBoost\/FisherBoost algorithm differs from traditional boosting algorithms\n in that LACBoost\/FisherBoost does not minimize a loss function. This opens new possibilities\n for designing new boosting algorithms for special purposes. \n We have also extended column generation for optimizing nonlinear \n optimization problems. \n %\n %\n %\n Next we review some related work that is closest to ours. 
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n \\textbf{Related work}\n %\n %\n %\n There is a large body of previous work in object detection\n \\cite{pham07,Wu2003Rare};\n of particular relevance to our work is boosting object detection originated\n from Viola and Jones'\n framework.\n There are three important components that make Viola and Jones' framework\n tremendously successful:\n (1) The cascade classifier that efficiently filters out most negative patches\n in early nodes; and also contributes to enable the final classifier to have\n a very high detection rate;\n (2) AdaBoost that selects informative features and at the same time trains \n a strong classifier;\n (3) The use of integral images, which makes the computation of Haar features \n extremely fast. \n Most of the work later improves one or more of these three components. \n In terms of the cascade classifier, a few different approaches such as \n soft cascade \\cite{Bourdev05SoftCascade}, dynamic cascade \n \\cite{Rong2007}, and multi-exit cascade \\cite{pham08multi}. \n We have used the multi-exit cascade in this work.\n The multi-exit cascade tries to improve the classification performance by \n using all the selected weak classifiers for each node. \n \n \n \n So for the $ n $-th strong classifier (node), it uses all the weak classifiers\n in this node as well as those in the previous $ n - 1 $ nodes. \n We show that the LAC post-processing can enhance the multi-exit cascade.\n More importantly, we show that the multi-exit cascade better meets LAC's \n requirement of data being Gaussian distributions. \n \n The second research topic is the learning algorithm \n for constructing a classifier. \n Wu {\\em et al.} use fast forward feature selection to \n accelerate the training procedure\n \\cite{Wu2003Rare}. They have also proposed LAC to learn a better strong\n classifier \\cite{Wu2005Linear}. \n Pham and Cham recently proposed online asymmetric boosting \n with considerable improvement in training time \\cite{pham07}.\n By exploiting the feature\n statistics, they have also designed a fast method to train weak classifiers\n \\cite{Pham2007Fast}. \n %\n %\n %\n Li {\\em et al.} advocated FloatBoost to discard\n some redundant weak classifiers during AdaBoost's \n greedy selection procedure \\cite{Li2004Float}. \n Liu and Shum proposed KLBoost to select features and train a strong \n classifier \\cite{Liu2003KL}. \n Other variants of boosting have been applied to detection. \n %\n \\comment{\n For example,\n Promising results were reported with\n LogitBoost \\cite{Tuzel2008PAMI} that employs the logistic regression \n loss, and GentleBoost \\cite{Torralba2007} that uses adaptive \n Newton steps to fit the additive model.\n \n\n \n New features have also been designed for improving the detection\n performance. Viola and Jones' \n Haar features are not sufficiently discriminative for detecting\n more complex objects like pedestrians, or multi-view faces. \n Covariance features \\cite{Tuzel2008PAMI} and histogram of oriented gradients \n \\cite{Dalal2005HOG} have been proposed in this context. Both of them are\n possible to the use the idea of integral images\/histograms to reduce the\n computation complexity. \n \n }\n\n \n\n\n\n\n \\textbf{Notation} \n The following notation is used. \n A matrix is denoted by a bold upper-case\n letter ($\\mathbf X$); a column vector is denoted by a bold lower-case\n letter ($ \\mathbf x $). 
\n The $ i$th row of $\\mathbf X $ is denoted by $ \\mathbf X_{i:} $ \n and the $ i $-th column $ \\mathbf X_{:i}$.\n The identity matrix is $ \\bf I $ and its size should be clear\n from the context. $ \\bf 1 $ and \n $ \\bf 0 $ are column vectors of $ 1$'s and $ 0$'s,\n respectively. \n We use $ \\psd, \\nsd $ to denote component-wise inequalities. \n\n\n\n Let $ \\{ (\\mathbf x_i, y_i ) \\}_{i = 1, \\cdots, m}$ be the set of\n training data, where $ \\mathbf x_i \\in \\mathcal X$ and $ y_i \\in \\{-1,+1\\}\n $, $ \\forall i$. \n The training set consists of $ m_1 $ positive training points\n and $ m_2 $ negative ones; $ m_1 + m_2 = m $. \n %\n Let $ h ( \\cdot ) \\in \\mathcal H $ be a weak\n classifier that projects an input vector $ \\mathbf x $ into \n $\\{-1, +1 \\}$. Here we only consider discrete classifier\n outputs.\n We assume that the set $ \\mathcal H $ is finite and we\n have $ n $ possible weak classifiers. Let the matrix $ \\H \\in\n \\mathbb{R}^{ m \\times n }$ where the $ (i,j)$ entry of $ \\H $ is\n $ \\H_{ij} = h_j ( \\mathbf x_i ) $. \n $ \\H_{ij} $ is the label predicted by weak classifier $\n h_j(\\cdot) $ on the training datum $ \\mathbf x_i $. \n \\comment{\n \n Therefore each\n column $ \\H_{ :j } $ of the matrix $ \\H $ consists of the\n output of weak classifier $ h_j(\\cdot) $ on all the training\n data; while each row $ \\H_{ i: } $ contains the outputs of all\n weak classifiers on the training datum $ \\mathbf x_i $. \n \n }\n We define a matrix $ \\mathbf A \\in \\mathbb{R}^{ m \\times n }$ such that its\n $( i, j )$ entry is\n $ \\mathbf A_{ij} = y_i h_j ( \\mathbf x_i ) $.\n \\comment{\n \n Boosting algorithms entirely depend on the matrix $ \\mathbf A $ and\n do not directly interact with the training examples. \n Our following discussion will largely focus on the matrix $\\mathbf A$.\n We write the vector obtained by multiplying a matrix $ \\mathbf A $ \n with a vector \n $ \\mathbf w $\n as $ \\mathbf A \\mathbf w $ and its $i$-th entry as $ (\\mathbf A \\mathbf w)_i$, which is the margin\n of the training datum $ \\mathbf x_i $: $ \\rho_i = \\mathbf A_{ i :} \\mathbf w = (\\mathbf A \\mathbf w)_i $.\n \n }\n\n\n \\comment{\n \n The paper is organized as follows. We briefly review the concept of LAC\n in Section \\ref{sec:LAC} before we propose our LACBoost and FisherBoost\n in Section \\ref{sec:LACBoost}. We present the experiments in Section\n \\ref{sec:exp} and conclude the paper in Section \\ref{sec:con}.\n \n }\n\n\n\n\\section{Linear Asymmetric Classification}\n\\label{sec:LAC}\n\n\n\n\n Before we propose our LACBoost and FisherBoost, we \n briefly overview the concept of LAC. \n %\n %\n Wu {\\em et al.} \\cite{Wu2008Fast} have proposed linear asymmetric classification\n (LAC) as a post-processing step \n for training nodes in the cascade framework. 
\n LAC is guaranteed to get an optimal solution\n under the assumption of Gaussian data distributions.\n\n\n Suppose that we have a linear classifier \n $ f(\\mathbf x) = {\\bf sign}(\\mathbf w^{\\!\\top} \\mathbf x - b)$,\n and we want to find a pair of $\\{ \\mathbf w , b \\}$ with a very\n high accuracy on the positive data $\\mathbf x_1$ and a moderate accuracy on\n the negative data $\\mathbf x_2$. This goal can be expressed as the following problem:\n \\begin{align}\n \\begin{split}\n \\max_{\\mathbf w \\neq {\\bf 0}, b} \\, \\Pr_{\\mathbf x_1 \\sim ( \\mu_1, {\\bf \\Sigma}_1) }\n \\{ \\mathbf w ^{\\!\\top} \\mathbf x_1 \\geq b \\}, \\,\\,\n {\\rm s.t.} \\, \\Pr_{\\mathbf x_2 \\sim (\\mu_2,{\\bf \\Sigma}_2)} \n \\{ \\mathbf w^{\\!\\top} \\mathbf x_2\n \\leq b \\} = \\lambda,\n \\label{EQ:LAC}\n \\end{split}\n \\end{align}\n where $\\mathbf x \\sim (\\mu,{\\bf \\Sigma})$ denotes \n a symmetric distribution with mean $\\mu$ and covariance ${\\bf \\Sigma}$.\n %\n %\n If we prescribe $\\lambda$ to $0.5$ and assume that for any $\\mathbf w$, \n $\\mathbf w^{\\!\\top} \\mathbf x_1$\n is Gaussian and $\\mathbf w^{\\!\\top} \\mathbf x_2$ is symmetric, then \n \\eqref{EQ:LAC} can be approximated by \n \\begin{equation}\n \\label{EQ:LAC1}\n \\max_{\\mathbf w \\neq \\bf 0} \\;\\;\n \\frac{ \\mathbf w^{\\!\\top} ( \\mu_1 - \\mu_2 ) } \n { \\sqrt{ \\mathbf w^{\\!\\top} {\\bf \\Sigma}_1 \\mathbf w } }.\n \\end{equation}\n %\n %\n %\n \\eqref{EQ:LAC1} is similar to LDA's optimization problem\n \\begin{equation}\n \\label{EQ:LDA1}\n \\max_{\\mathbf w \\neq \\bf 0} \\;\\;\n \\frac{ \\mathbf w^{\\!\\top} ( \\mu_1 - \\mu_2 ) } \n { \\sqrt{ \\mathbf w^{\\!\\top} ( {\\bf \\Sigma}_1 + {\\bf \\Sigma}_2 ) \\mathbf w } }.\n \\end{equation}\n \\eqref{EQ:LAC1} can be solved by eigen-decomposition and a closed-form \n solution can be derived:\n %\n %\n %\n\\begin{equation}\n \\mathbf w^\\star = {\\bf \\Sigma}_1^{-1} ( \\mu_1 - \\mu_2 ),\n \\quad \n b^{\\star} = { \\mathbf w^{\\star} } ^{{\\!\\top}} \\mu_2.\n\\label{EQ:LAC_SOL}\n\\end{equation}\nOn the other hand, each node in cascaded boosting classifiers has the following form:\n\\begin{equation}\n \\label{EQ:nodeclassifier}\n f(\\mathbf x) = {\\bf sign}(\\mathbf w^{\\!\\top} \\H (\\mathbf x) - b).\n\\end{equation}\nWe override the symbol $ \\H (\\mathbf x)$ here, \nwhich denotes the output vector of all weak classifiers over the datum $ \\mathbf x $. \nWe can cast each node as a linear classifier over the feature space\nconstructed by the binary outputs of all weak classifiers.\nFor each node in the cascade classifier, we wish to make the detection\nrate as high as possible, and \nmeanwhile keep the false positive rate at a\nmoderate level ({\\em e.g.}, $50.0\\%$). \nThat is to say, the problem\n\\eqref{EQ:LAC} expresses the node learning goal. \nTherefore, we can use boosting algorithms ({\\em e.g.}, AdaBoost) as feature\nselection methods, and then use LAC to learn a linear classifier over\nthose binary features chosen by boosting.\nThe advantage is that LAC considers the asymmetric node learning explicitly.\n\nHowever, there is a precondition for LAC's validity. That \nis, for any $\\mathbf w$, $\\mathbf w^{\\!\\top} \\mathbf x_1$ is Gaussian and $\\mathbf w^{\\!\\top} \\mathbf x_2$\nis symmetric.
\nIn the case of boosting classifiers, $\\mathbf w^{\\!\\top} \\mathbf x_1$ and $\\mathbf w^{\\!\\top} \\mathbf x_2$ can be \nexpressed as the margin of positive data and negative data.\nEmpirically Wu {\\em et al.} \\cite{Wu2008Fast}\nverified that $\\mathbf w^{\\!\\top} \\mathbf x$ is Gaussian approximately for a cascade face detector.\nWe discuss this issue in the experiment part in more detail.\n\n\n\n\\section{Constructing Boosting Algorithms from LDA and LAC}\n\\label{sec:LACBoost} \n \n \n In kernel methods, the original data are non\\-linear\\-ly \n mapped to a feature space and \n usually the mapping\n function $ {\\phi } ( \\cdot ) $ is not explicitly available.\n It works through the inner product of \n $ {\\phi } ( \\mathbf x_i ) ^{\\!\\top} {\\phi } ( \\mathbf x_j ) $. \n In boosting \\cite{Ratsch2002BoostSVM},\n the mapping function can be seen as explicitly known\n through:\n $\n {\\phi } ( \\mathbf x ) : \\mathbf x \\mapsto [ h_1(\\mathbf x),\\dots,h_n(\\mathbf x) ]. \n $\n Let us consider the Fisher LDA case first because the solution to LDA\n will generalize to LAC straightforwardly, by looking at\n the similarity between \\eqref{EQ:LAC1} and \\eqref{EQ:LDA1}.\n\n Fisher LDA \n maximizes the between-class variance and minimizes the within-class\n variance. In the binary-class case, we can equivalently\n rewrite \\eqref{EQ:LDA1} into\n \\begin{equation}\n \\label{EQ:100}\n \\max_\\mathbf w \\;\\; \\frac{ ( \\mu_1 - \\mu_2 ) ^ 2 }\n { \\sigma_1 + \\sigma_2 } \n = \n \\frac{ \\mathbf w ^{\\!\\top} \\mathbf C_b \\mathbf w }\n { \\mathbf w ^{\\!\\top} \\mathbf C_w \\mathbf w },\n \\end{equation}\n where $ \\mathbf C_b $ and $ \\mathbf C_w $ are the between-class and within-class\n scatter matrices; $ \\mu_1 $ and $ \\mu_2 $ are\n the projected centers of the two classes.\n The above problem can be equivalently reformulated as \n \\begin{equation}\n \\label{EQ:101}\n \\min_\\mathbf w \\;\\; \\mathbf w ^{\\!\\top} \\mathbf C_w \\mathbf w - \\theta ( \\mu_1 - \\mu_2 )\n \\end{equation}\n for some certain constant $ \\theta $ and under the assumption that\n $ \\mu_1 - \\mu_2 \\geq 0 $.\\footnote{In our face detection experiment,\n we found that this assumption could always be satisfied.}\n Now in the feature space, our data are \n $ {\\phi }( \\mathbf x_i ) $, $ i=1\\dots m$.\n We have\n \\begin{align}\n \\mu_1\n & = \\frac{ 1 } { m_1 } \\mathbf w^{\\!\\top} \\sum_{y_i = 1} {\\phi }(\\mathbf x_i) \n \n = \\frac{ 1 } { m_1 } \\sum_{y_i = 1} \\mathbf A_{ i: } \\mathbf w\n \n \n \n = \\frac{ 1 } { m_1 } \\sum_{y_i = 1} (\\mathbf A \\mathbf w)_i\n = \\boldsymbol e_1 ^{\\!\\top} \\mathbf A \\mathbf w ,\n \\end{align}\n where $ \\mathbf A_{ i: } $ is the $ i $-th row of $ \\mathbf A$.\n %\n %\n %\n \\begin{align}\n \\mu_2\n & = \n \\frac{ 1 } { m_2 } \\mathbf w^{\\!\\top} \\sum_{y_i = -1} {\\phi }(\\mathbf x_i)\n = \\frac{ 1 } { m_2 } \\sum_{y_i = -1} \\H_{ i: } \\mathbf w\n \n \n = - \\boldsymbol e_2 ^{\\!\\top} \\mathbf A \\mathbf w,\n \\end{align}\n %\n %\n Here the $ i $-th entry of $ \\boldsymbol e_1 $ is defined as \n $ \\boldsymbol e_{1i} = 1\/m_1 $ if $ y_i = +1 $, otherwise \n $ \\boldsymbol e_{1i} = 0$. Similarly \n $ \\boldsymbol e_{2i} = 1\/m_2 $ if $ y_i = -1 $, otherwise \n $ \\boldsymbol e_{2i} = 0$. We also define $ \\boldsymbol e = \\boldsymbol e_1 + \\boldsymbol e_2 $.\n %\n %\n %\n %\n For ease of exposition, we order the training data according to their\n labels. 
So\n the vector $ \\boldsymbol e \\in \\mathbb{R}^{m}$:\n \\begin{equation}\n \\boldsymbol e = [ 1\/m_1,\\cdots, 1\/m_2,\\cdots ]^{\\!\\top}, \n \\label{EQ:e}\n \\end{equation}\n and the first $ m_1$ components of $ \\boldsymbol \\rho $ correspond to the\n positive training data and the remaining ones\n correspond to the $ m_2$\n negative data. \n So we have $ \\mu_1 - \\mu_2 = \\boldsymbol e^{\\!\\top} \\boldsymbol \\rho $, \n $ \\mathbf C_w = {m_1 }\/{ m } \\cdot {\\bf \\Sigma}_1 + {m_2 }\/{ m } \\cdot {\\bf \\Sigma}_2 $\n with\n $ {\\bf \\Sigma}_{1,2} $ the covariance matrices. \n By noticing that\n \\[\n \\mathbf w^{\\!\\top} {\\bf \\Sigma}_{1,2} \\mathbf w = \\frac{1}{m_{1,2} ( m_{1,2} - 1 ) }\n \\sum_{i>k, y_i=y_k = \\pm 1}\n (\\rho_i - \\rho_k )^2,\n \\]\n %\n %\n we can easily rewrite the original problem into:\n \\begin{align}\n \\min_{\\mathbf w,\\boldsymbol \\rho}\n \n \n \n \\tfrac{1}{2} \\boldsymbol \\rho ^{\\!\\top} \\mathbf Q \\boldsymbol \\rho - \\theta \\boldsymbol e^{\\!\\top}\n \\boldsymbol \\rho,\n \n \\quad {\\rm s.t.} ~&\\mathbf w \\psd {\\bf 0},\n {\\bf 1}^{\\!\\top} \\mathbf w = 1,\n \n \n {\\rho}_i = ( \\mathbf A \\mathbf w )_i,\n i = 1,\\cdots, m.\n \\label{EQ:QP1}\n \\end{align}\n Here\n $ \\mathbf Q = \\begin{bmatrix} \\mathbf Q_1 & {\\bf 0} \\\\ {\\bf 0} & \\mathbf Q_2 \\end{bmatrix} $\n is a block matrix with\n \\[\n \\mathbf Q_1 = \n \\begin{bmatrix}\n \\tfrac{1}{m} & -\\tfrac{1}{ m (m_1-1)} & \\ldots & -\\tfrac{1}{m(m_1-1)} \\\\\n -\\tfrac{1}{m(m_1-1)} & \\tfrac{1}{ m } & \\ldots & -\\tfrac{1}{m(m_1-1)} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{m(m_1-1)} & -\\tfrac{1}{m (m_1-1)} & \\ldots &\\tfrac{1}{m } \n \\end{bmatrix},\n \\]\n and $ \\mathbf Q_2 $ is similarly defined by replacing $ m_1$ with $ m_2 $ in $ \\mathbf Q_1$.\n \\comment{\n \n \\[\n \\mathbf Q_2 = \n \\begin{bmatrix}\n \\tfrac{1}{m} & -\\tfrac{1}{m(m_2-1)} & \\ldots &\n -\\tfrac{1}{m(m_2-1)} \\\\\n -\\tfrac{1}{m(m_2-1)} & \\tfrac{1}{m} & \\ldots &\n -\\tfrac{1}{m(m_2-1)} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{m(m_2-1)} & -\\tfrac{1}{m(m_2-1)} & \\ldots\n &\\tfrac{1}{m} \n \\end{bmatrix}. \n \\]\n \n }\n Also note that we have introduced a constant $ \\frac{1}{2} $ before the quadratic term\n for convenience. The normalization \n constraint $ { \\bf 1 } ^{\\!\\top} \\mathbf w = 1$\n removes the scale ambiguity of $ \\mathbf w $. Otherwise the problem is\n ill-posed. \n\n In the case of LAC, the covariance matrix of the negative data is not involved,\n which corresponds to the matrix $ \\mathbf Q_2 $ is zero. So we can simply set \n $ \\mathbf Q = \\begin{bmatrix} \\mathbf Q_1 & {\\bf 0} \\\\ {\\bf 0} & \\bf 0 \\end{bmatrix} $ and\n \\eqref{EQ:QP1} becomes the optimization problem of LAC. \n \n At this stage, it remains unclear about how to solve the problem \\eqref{EQ:QP1}\n because we do not know all the weak classifiers. \n The number of possible weak classifiers\n could be infinite---the dimension of the \n optimization variable $ \\mathbf w $ is infinite.\n So \\eqref{EQ:QP1} is a semi-infinite quadratic program (SIQP).\n We show how column generation can be used to solve this problem.\n To make column generation applicable, we need to derive a\n specific Lagrange dual of the primal\n problem.\n\n\n\n\\textbf{The Lagrange dual problem}\n We now derive the Lagrange dual of the quadratic problem \\eqref{EQ:QP1}. 
\n Although we are only interested in the variable $ \\mathbf w $, we need to\n keep the auxiliary variable $ \\boldsymbol \\rho $ in order to obtain\n a meaningful dual problem. The Lagrangian of \\eqref{EQ:QP1}\n is\n $ L ( \n \\underbrace{ \\mathbf w, \\boldsymbol \\rho}_{\\rm primal}, \\underbrace{ \\u, r }_{\\rm dual}\n ) \n = \\tfrac{1}{2} \\boldsymbol \\rho ^{\\!\\top} \\mathbf Q \\boldsymbol \\rho - \\theta \\boldsymbol e^{\\!\\top} \\boldsymbol \\rho \n + \\u ^{\\!\\top} ( \\boldsymbol \\rho - \\mathbf A \\mathbf w ) - \\mathbf q ^{\\!\\top} \\mathbf w + r ( {\\bf 1} ^{\\!\\top} \\mathbf w - 1 )\n $ with $ \\mathbf q \\psd \\bf 0 $. \n $ \\sup_{\\u, r} \\inf_{ \\mathbf w, \\boldsymbol \\rho } L ( \\mathbf w, {\\boldsymbol \\rho}, \\u, r ) $\n gives the following Lagrange dual:\n \\begin{align}\n \\max_{\\u, r} ~& -r - \\overbrace{\n \\tfrac{1}{2} \n (\\u - \\theta \\boldsymbol e)^{\\!\\top} \\mathbf Q^{-1} (\\u - \\theta \\boldsymbol e)\n }^{\\rm regularization}, \n %\n \n %\n {\\rm \\;\\; s.t.} \n ~\n %\n \n %\n \\sum_{i=1}^m u_i \\mathbf A_{i:} \\nsd r {\\bf 1 } ^{\\!\\top}. \n \\label{EQ:dual}\n \\end{align}\n In our case, $ \\mathbf Q $ is rank-deficient and its inverse does not exist\n (for both LDA and LAC).\n We can simply regularize $ \\mathbf Q $ with $ \\mathbf Q + \\delta {\\bf I} $ with \n $ \\delta $ a very small constant. \n One of the KKT optimality conditions between the dual and primal\n is\n $ \\boldsymbol \\rho^\\star = - \\mathbf Q^{-1} ( \\u ^ \\star - \\theta \\boldsymbol e )$,\n which can be used to establish the connection between the dual optimum and\n the primal optimum. \n This is obtained by the fact that \n the gradient of $ L $ w.r.t. $ \\boldsymbol \\rho $ must vanish at \n the optimum, $ { \\partial L } \/ { \\partial \\rho_i } = 0 $,\n $ \\forall i = 1\\cdots n $.\n\n Problem \\eqref{EQ:dual} can be viewed as a regularized LPBoost problem.\n Compared with the hard-margin LPBoost \\cite{Demiriz2002LPBoost},\n the only difference is the regularization term in the cost function.\n The duality gap between the primal \\eqref{EQ:QP1} and the \n dual \\eqref{EQ:dual} is zero. In other words, the solutions of\n \\eqref{EQ:QP1} and \\eqref{EQ:dual} coincide. \n Instead of solving \\eqref{EQ:QP1} directly, one calculates the\n most violated constraint in \\eqref{EQ:dual} iteratively for\n the current solution and adds this constraint to the\n optimization problem. In theory, any column that violates\n dual feasibility can be added. To speed up the convergence,\n we add the most violated constraint by solving the following\n problem:\n %\n %\n %\n \\begin{equation}\n h' ( \\cdot ) = {\\rm argmax}_{h( \\cdot ) } ~ \n \n \\sum_{i=1}^m u_i y_i h ( \\mathbf x_i).\n \\label{EQ:pickweak}\n \\end{equation}\n %\n %\n %\n This is exactly the same as the one that standard AdaBoost\n and LPBoost use for producing the best weak classifier. That\n is to say, to find the weak classifier that has minimum weighted\n training error. We summarize the LACBoost\/FisherBoost\n algorithm in\n Algorithm~\\ref{alg:QPCG}.\n By simply changing $ \\mathbf Q_2 $, Algorithm~\\ref{alg:QPCG} can be used to\n train either LACBoost or FisherBoost.\n Note that to obtain an actual strong classifier,\n one may need to include an offset $ b $, {\\em i.e.} the final classifier\n is $ \\sum_{j=1}^n h_j (\\mathbf x) - b $ because from the cost function\n of our algorithm \\eqref{EQ:101}, we can see that the cost function itself\n does not minimize any classification error. 
It only finds a projection\n direction in which the data can be maximally separated. A simple line\n search can find an optimal $ b $. \n Moreover, when training a cascade, we need to tune this offset anyway\n as shown in \\eqref{EQ:nodeclassifier}.\n \n %\n %\n \n %\n\n\n The convergence of Algorithm~\\ref{alg:QPCG} is guaranteed by\n general column generation or cutting-plane algorithms, which\n is easy\n to establish. When a new $ h'(\\cdot) $ that violates dual\n feasibility is added, the new optimal value of the dual\n problem (maximization) would decrease. Accordingly, the\n optimal value of its primal problem decreases too because they\n have the same optimal value due to zero duality gap. Moreover\n the primal cost function is convex, therefore in the end it\n converges to the global minimum. \n\n\n\n\n \n\n \n \\linesnumbered\\SetVline\n \\begin{algorithm}[t]\n \\caption{Column generation for QP.} \n %\n %\n %\n \\centering\n \\begin{minipage}[]{0.91\\linewidth}\n %\n \\KwIn{Labeled training data $(\\mathbf x_i, y_i), i = 1\\cdots m$;\n termination threshold $ \\varepsilon > 0$;\n regularization\n parameter $ \\theta $; maximum number of iterations\n $ n_{\\rm max}$.\n }\n %\n %\n %\n { {\\bf Initialization}:\n $ m = 0 $;\n $ \\mathbf w = {\\bf 0} $;\n and $ u_i = \\frac{1}{ m }$, $ i = 1$$\\cdots$$m$. \n }\n\n \\For{ $ \\mathrm{iteration} = 1 : n_\\mathrm{max}$}\n {\n \n \n %\n %\n \\ensuremath{ - \\,} \n Check for the optimality: \\\\\n {\\bf if}{ $ \\mathrm{iteration} > 1 $ \\text{ and } $\n \\sum_{ i=1 }^m u_i y_i h' ( \\mathbf x_i ) \n < r + \\varepsilon $},\n \\\\\n { \\bf then}\n \\\\\n $~ ~ ~$ break; and the problem is solved; \n \n \\ensuremath{ - \\,} \n Add $ h'(\\cdot) $ to the restricted master problem, which\n corresponds to a new constraint in the dual;\n %\n %\n \n \n \n\n \\ensuremath{ - \\,} \n Solve the dual problem \\eqref{EQ:dual}\n (or the primal problem \\eqref{EQ:QP1}) \n and update $ r $ and\n $ u_i$ ($ i = 1\\cdots m$). \n\n\n \\ensuremath{ - \\,} \n Increment the number of weak classifiers\n $n = n + 1$. \n }\n \\KwOut{\n \n \n \n \n %\n The selected features are $ h_1, h_2, \\dots, h_n $.\n The final strong classifier is:\n $ F ( \\mathbf x ) = \\textstyle \\sum_{j=1}^{ n } w_j h_j( \\mathbf x ) - b $.\n Here the offset $ b $ can be learned by a simple search. \n \n }\n \\end{minipage}\n \\label{alg:QPCG}\n \\end{algorithm}\n \n \n\n\n At each iteration of column generation,\n in theory, we can solve either the dual \\eqref{EQ:dual} \n or the primal problem \\eqref{EQ:QP1}. \n However, \n in practice, it could be much faster to solve the primal problem because\n \n \n (i) Generally,\n the primal problem has a smaller size, hence faster to solve.\n The number of variables of \\eqref{EQ:dual} is $ m $ at each iteration,\n while the number of variables is the number of iterations \n for the primal problem. \n For example, in Viola-Jones' face detection framework, \n the number of training data $ m = \n 10,000 $ and $ n_{\\rm max} = 200 $. In other words, the \n primal problem has at most $ 200 $ variables in this case;\n \n (ii)\n The dual problem is a standard QP problem. It has no special structure\n to exploit. As we will show, the primal problem belongs to\n a special class of problems and\n can be efficiently \n solved using entropic\/exponentiated \n gradient descent (EG) \\cite{Beck03Mirror,Globerson07Exp}. \n A fast QP solver is extremely important for training a \n object detector because we need to the solve a few thousand \n QP problems. 
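To make the bookkeeping of Algorithm~\\ref{alg:QPCG} concrete, we give a minimal Python sketch of the column-generation loop below. It is an illustration only, not the implementation used in our experiments: the weak-learner oracle solves \\eqref{EQ:pickweak} by enumerating plain decision stumps on raw features, the restricted primal problem \\eqref{EQ:QP1} is solved with a generic simplex-constrained SciPy routine rather than with EG, and the function names are ours.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize


def stump_response(X, j, t, s):
    # decision stump h(x) = s * sign(x_j - t)
    return s * np.where(X[:, j] > t, 1.0, -1.0)


def best_stump(X, y, u):
    # weak-learner oracle: maximize sum_i u_i y_i h(x_i) over all stumps
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                edge = np.sum(u * y * stump_response(X, j, t, s))
                if best is None or edge > best[0]:
                    best = (edge, (j, t, s))
    return best


def solve_restricted_primal(A, Q, e, theta):
    # min_w 0.5 (A w)' Q (A w) - theta e' (A w)  s.t.  w in the unit simplex
    n = A.shape[1]
    obj = lambda w: 0.5 * (A @ w) @ (Q @ (A @ w)) - theta * e @ (A @ w)
    res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=({"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0},))
    return res.x


def column_generation(X, y, Q, e, theta, n_max=200, eps=1e-4):
    m = X.shape[0]
    u = np.full(m, 1.0 / m)                 # dual variables, uniform start
    r = -np.inf                             # current largest edge
    stumps, cols, w = [], [], None
    for it in range(n_max):
        edge, (j, t, s) = best_stump(X, y, u)
        if it > 0 and edge < r + eps:       # optimality check of Algorithm 1
            break
        stumps.append((j, t, s))
        cols.append(y * stump_response(X, j, t, s))   # new column of A
        A = np.column_stack(cols)
        w = solve_restricted_primal(A, Q, e, theta)
        rho = A @ w
        u = theta * e - Q @ rho             # dual variables from the primal solution
        r = np.max(u @ A)                   # largest edge over the selected columns
    return stumps, w
\\end{verbatim}
The last two assignments in the loop recover the dual variables from the primal solution; the corresponding closed-form relations are given next.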
\n \n \n\n\n We can recover both of the dual variables \n $ \\u^\\star, r^\\star $ easily from \n the primal variable $ \\mathbf w^\\star $:\n \\begin{align}\n \\u^\\star &= - \\mathbf Q\\boldsymbol \\rho^\\star + \\theta \\boldsymbol e; \\label{EQ:KA}\\\\\n r^\\star &= \\max_{ j = 1 \\dots n } \n \\bigl\\{ \\textstyle \\sum_{i=1}^m u_i^\\star \\mathbf A_{ij} \\bigr\\}.\n \\label{EQ:KAb}\n \\end{align}\n The second equation is obtained by the fact that \n in the dual problem's constraints, at optimum,\n there must exist at least one $ u_i^\\star$ \n such that the equality holds. That is to say,\n $ r^\\star $ is the largest {\\em edge}\n over all weak classifiers. \n \n\n\n\n We give a brief introduction to the EG algorithm before we proceed. \n Let us first define the unit simplex \n $ \\Delta_n = \\{ \n \\mathbf w \\in \\mathbb{R}^n : {\\bf 1 } ^ {\\!\\top} \\mathbf w = 1, \\mathbf w \\psd {\\bf 0 }\n \\} $. \n EG efficiently solves the convex optimization problem\n \\begin{equation}\n \\label{EQ:EG1}\n \\min_\\mathbf w \\,\\,\\, f(\\mathbf w), \\,\n {\\rm s.t.} \\,\\, \\mathbf w \\in \\Delta_n, \n \\end{equation}\n under the assumption that the objective function $ f(\\cdot) $\n is a convex Lipschitz continuous function with Lipschitz\n constant $ L_f $ w.r.t. a fixed given norm $ \\lVert \\cdot \\rVert$.\n The mathematical definition of $ L_f $ is that\n $ | f(\\mathbf w) -f (\\mathbf z) | \\leq L_f \\lVert \\mathbf x - \\mathbf z \\rVert$ holds\n for any $ \\mathbf x, \\mathbf z $ in the domain of $ f(\\cdot)$.\n The EG algorithm is very simple:\n %\n %\n \\begin{enumerate}\n \\item\n Initialize with $\\mathbf w^0 \\in \\text{the interior of } \\Delta_n$;\n \\item\n Generate the sequence $ \\{ \\mathbf w^k \\} $, $ k=1,2,\\cdots$\n with:\n \\begin{equation}\n \\label{EQ:EQ2}\n \\mathbf w^k_j = \\frac{ \\mathbf w^{k-1}_j \\exp [ - \\tau_k f'_j ( \\mathbf w^{k-1} ) ] } \n { \\sum_{j=1}^n \\mathbf w^{k-1}_j \\exp [ - \\tau_k f'_j ( \\mathbf w^{k-1} ) ] }. \n \\end{equation}\n Here $ \\tau_k $ is the step-size. \n $ f'( \\mathbf w ) = [ f_1'(\\mathbf w), \\dots, f_n'(\\mathbf w) ] ^{\\!\\top} $\n is the gradient of $ f(\\cdot) $;\n \\item\n Stop if some stopping criteria are met.\n \\end{enumerate}\n The learning step-size can be determined by \n \n $ \\tau_k = \\frac{ \\sqrt{ 2\\log n } } { L_f }\n \\frac{1}{ \\sqrt{ k } },\n $\n \n following \\cite{Beck03Mirror}.\n In \\cite{Globerson07Exp}, the authors have \n used a simpler strategy to set the learning rate. \n\n EG is a very useful tool for solving large-scale \n convex minimization problems over the unit simplex. \n Compared with standard QP solvers like Mosek \n \\cite{Mosek}, EG is much faster. EG makes it possible \n to train a detector using almost the same amount of time\n as using standard AdaBoost as the majority of time is\n spent on weak classifier training and bootstrapping. \n\n\n In the case that $ m_1 \\gg 1 $, \n \\[\n \\mathbf Q_1 =\n \\frac{1}{m}\n \\begin{bmatrix}\n 1 & -\\tfrac{1}{ m_1-1} & \\ldots & -\\tfrac{1}{ m_1-1 } \\\\\n -\\tfrac{1}{ m_1-1 } & 1 & \\ldots & -\\tfrac{1}{ m_1-1} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{ m_1-1 } & -\\tfrac{1}{ m_1-1 } & \\ldots & 1 \n \\end{bmatrix} \\approx \\frac{1}{m} \\bf I.\n \\]\n Similarly, for LDA, $ \\mathbf Q_2 \\approx \\frac{1}{m} \\bf I$\n when $ m_2 \\gg 1 $. 
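As an aside, the three-step EG scheme just described takes only a few lines of code. The sketch below is ours and purely illustrative: it uses the step-size rule of \\cite{Beck03Mirror} quoted above, a crude bound for $ L_f $ in the usage example, and no stopping criterion.
\\begin{verbatim}
import numpy as np


def eg_minimize(grad_f, n, L_f, n_iters=1000, w0=None):
    # entropic/exponentiated gradient descent over the unit simplex;
    # grad_f(w) returns the gradient of the convex objective f at w
    w = np.full(n, 1.0 / n) if w0 is None else np.asarray(w0, dtype=float)
    for k in range(1, n_iters + 1):
        tau = np.sqrt(2.0 * np.log(n)) / (L_f * np.sqrt(k))   # step size
        g = grad_f(w)
        # multiplicative update from the scheme above; subtracting g.max()
        # does not change the normalized result but avoids overflow in exp()
        w = w * np.exp(-tau * (g - g.max()))
        w /= w.sum()
    return w


# usage example: f(w) = 0.5 w' H w - b' w on the simplex
H = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 0.2])
L_f = np.abs(H).sum() + np.abs(b).max()   # rough bound on the sup-norm of f'
w_star = eg_minimize(lambda w: H @ w - b, n=2, L_f=L_f)
\\end{verbatim}
We now return to the simplification of $ \\mathbf Q $ in the large-sample limit.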
Hence,\n \\begin{equation}\n \\label{EQ:Q10} \n \\mathbf Q \\approx \n \\begin{cases}\n \\frac{1}{m} \\bf I; & \\text{for Fisher LDA}, \\\\\n \\frac{1}{m} \\begin{bmatrix}\n {\\bf I} & {\\bf 0} \\\\\n {\\bf 0} & {\\bf 0}\n \\end{bmatrix},\n & \\text{for LAC}.\n \\end{cases}\n \\end{equation}\n Therefore, the problems involved can be simplified when $ m_1 \\gg 1 $ and\n $ m_2 \\gg 1 $ hold.\n The primal problem \\eqref{EQ:QP1} equals\n \\begin{align}\n \\min_{\\mathbf w,\\boldsymbol \\rho} ~& \\tfrac{1}{2} \\mathbf w ^{\\!\\top} ( \\mathbf A^{\\!\\top} \\mathbf Q \\mathbf A) \\mathbf w \n - ( \\theta \\boldsymbol e ^{\\!\\top}\n \\mathbf A ) \\mathbf w,\n %\n \n %\n \\;\\;\n {\\rm s.t.}\n \\;\n %\n \n %\n \\mathbf w \\in \\Delta_n.\n \\label{EQ:QP2}\n \\end{align}\n \n \n \n \n \n We can efficiently solve \\eqref{EQ:QP2} \n using the EG method. \n In EG there is an important parameter $ L_f $, which is\n used to determine the step-size. \n $ L_f $ can be determined by the $\\ell_\\infty $-norm of $ | f' (\\mathbf w) | $.\n In our case $ f' (\\mathbf w) $ is a linear function, which is trivial to compute.\n The convergence of EG is guaranteed; see \\cite{Beck03Mirror} for details.\n \n In summary, when using EG to solve the primal problem, \n Line $ 5 $ of Algorithm~\\ref{alg:QPCG} is: \n \n \\ensuremath{ - \\,} \n {\\em Solve the primal problem \\eqref{EQ:QP2} using EG, and update \n the dual variables $ \\u $ with \\eqref{EQ:KA}, and $ r $ with \\eqref{EQ:KAb}.\n }\n\n\n\n \\comment{\n \n %\n \n FIXME for the journal version\n \n %\n In our case \\eqref{EQ:QP2},\n $ L_f $ can be set to $ | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty + \\theta m \\norm[1]{ \\boldsymbol e ^{\\!\\top} \\mathbf A }$\n with $ | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty $\n the maximum magnitude of the matrix $ \\mathbf A ^{\\!\\top} \\mathbf A $.\n $ L_f $ is the upper bound of the $ \\ell_1$-norm of the cost function's gradient:\n $ \\norm [1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) + \\theta m \\boldsymbol e^{\\!\\top} \\mathbf A } \\leq L_f $. \n %\n %\n With the triangle inequality, we have\n $ \\norm [1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) - \\theta m \\boldsymbol e^{\\!\\top} \\mathbf A } $ $\n \\leq \\norm[1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) } $ $ + \\theta m \\norm[1]{ \\boldsymbol e^{\\!\\top} \\mathbf A } $ $\n \\leq | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty $ $ + \\theta m \\norm[1]{ \\boldsymbol e ^{\\!\\top} \\mathbf A }$.\n\n\n\n The following theorem ensures the convergence of the EG algorithm \n for our problem.\n \\begin{theorem}\n Under the assumption that the learning step-size\n satisfies $ 0 < \\tau_k < \\frac{1} { n | \\mathbf A^{\\!\\top} \\mathbf A |_\\infty } $,\n then we have\n $ f( \\mathbf w^\\star ) \\leq f (\\mathbf w ^{ k } ) \\leq f (\\mathbf w ^\\star ) \n + \\frac{1} { \\tau_k ( k -1 ) } {\\rm KL} [ \\mathbf w^\\star \\Vert \\mathbf w^0 ] $,\n where $ f(\\cdot) $ denotes the objective function in \\eqref{EQ:QP2};\n and $ | \\mathbf X |_\\infty $ denotes the maximum magnitude element of \n the matrix $ \\mathbf X $; $ {\\rm KL} ( \\u \\Vert \\v )$ computes the \n Kullback\u2013Leibler divergence of $ \\u, \\v \\in \\Delta_n $.\n \\end{theorem}\n The proof follows the proof of Theorem 1 in \\cite{Globerson07Exp}. 
\n \n }\n\n\n\n\n\\section{Applications to Face Detection}\n\n\n \n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=.45\\textwidth]{adaboost_toy}\n \\includegraphics[width=.45\\textwidth]{fisherboost_toy}\n \\caption{Decision boundaries of \n AdaBoost (left) and FisherBoost (right) on $2$D artificial data\n (positive data represented by $ \\square $'s and negative data\n by $\\times$'s). Weak classifiers are decision stumps.\n In this case, FisherBoost tends to classify more of the\n positive data correctly. \n }\n \\label{fig:toy}\n\\end{figure}\n\n\n First, let us show a simple example on a synthetic dataset \n (more negative data than positive data)\n to illustrate\n the difference between FisherBoost and AdaBoost.\n Fig. \\ref{fig:toy} demonstrates the subtle difference between the classification \n boundaries obtained by AdaBoost and FisherBoost. \n We can see that\n FisherBoost seems to focus more on correctly classifying positive data points.\n This might be due to the fact that AdaBoost only optimizes the overall \n classification accuracy. \n This finding is consistent with the result in \\cite{Paisitkriangkrai2009CVPR}.\n \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.32\\textwidth]{p1}\n \\includegraphics[width=.32\\textwidth]{p2}\n \\includegraphics[width=.32\\textwidth]{p3}\n \\caption{Normality test (normal probability plot)\n for the face data's margin distribution of nodes $1$, $2$, $3$.\n The $ 3 $ nodes contain $ 7 $, $ 22 $, $ 52 $ weak classifiers, respectively.\n Curves close to a straight line indicate distributions close to a Gaussian.\n }\n \\label{fig:normplot}\n\\end{figure}\n\n\n\n\n\n\\textbf{Face detection}\n In this section, we compare our algorithm with other \n state-of-the-art face detectors.\n We first show some results on the validity \n of LAC (or Fisher LDA) post-processing for improving \n node learning in object detection. \n Fig. \\ref{fig:normplot} illustrates the normal probability plot of \n the margins of the positive training\n data for the first three nodes in the multi-exit cascade with LAC. \n Clearly, the larger\n the number of weak classifiers used,\n the more closely the margin follows a Gaussian distribution.\n In other words, \n LAC may achieve a better performance if a larger number of weak classifiers\n is used. The performance could be poor with too few weak classifiers.\n The same statement applies to Fisher LDA, LACBoost, and FisherBoost. \n Therefore, we do not\n apply LAC\/LDA in the first eight nodes because the margin distribution could be\n far from a Gaussian distribution.\n Because the late nodes of a multi-exit cascade contain more weak classifiers, \n we conjecture that the multi-exit cascade \n might meet the Gaussianity requirement better. We have compared \n multi-exit cascades with LDA\/LAC post-processing against \n standard cascades with LDA\/LAC post-processing in \\cite{Wu2008Fast},\n and slightly improved performances were obtained. \n\n\n\n Six methods are evaluated within the multi-exit cascade framework\n \\cite{pham08multi}:\n AdaBoost with LAC or LDA post-processing,\n AsymBoost with LAC or LDA post-processing \\cite{Wu2008Fast}, and our\n FisherBoost and\n LACBoost.
We have also implemented Viola-Jones'\n face detector as the baseline \\cite{Viola2004Robust}.\n As in \\cite{Viola2004Robust}, five basic types of Haar-like features \n are calculated, which makes up of a $162, 336$ dimensional over-complete\n feature set on an image of $24 \\times 24$ pixels.\n To speed up the weak classifier training, as in \\cite{Wu2008Fast},\n we uniformly sample $10\\%$ of features for\n training weak classifiers (decision stumps). \n The training data are $9,832$ mirrored $24 \\times 24$ face images \n ($5,000$ for training and $4,832$ for validation) and $7,323$ large background images,\n which are the same as in \\cite{Wu2008Fast}.\n\n Multi-exit cascades with $22$ exits and $2,923$ weak classifiers are\n trained with various methods. \n For fair comparisons, we have used the same cascade structure and \n same number of weak classifiers for all the compared learning methods.\n The indexes of exits are pre-set to simplify the training\n procedure.\n For our FisherBoost and LACBoost, we have an important parameter \n $\\theta$, which is chosen from \n $\\{\n \\frac{1}{10}, \n \\frac{1}{12},\n \\frac{1}{15},\n $\n $\n \\frac{1}{20}, \n \\frac{1}{25},\n \\frac{1}{30},\n $\n $\n \\frac{1}{40},\n \\frac{1}{50} \n \\}$.\n We have not carefully tuned this parameter using cross-validation.\n Instead, we train a $10$-node cascade for each candidate $ \\theta$, \n and choose the one with the best {\\em training} \n accuracy.\\footnote{ To train a complete $22$-node cascade\n and choose the best $ \\theta $\n on cross-validation data may give better detection rates.} \n At each exit, negative examples misclassified by current cascade are\n discarded, and new negative examples are bootstrapped from the background\n images pool. \n Totally, billions of negative examples are extracted from the pool.\n The positive training data and validation data keep unchanged during the\n training process.\n\nOur experiments are performed on a workstation with $8$ Intel Xeon\nE$5520$ CPUs and $32$GB RAM.\nIt takes about $3$ hours to train the multi-exit cascade with AdaBoost or AsymBoost.\nFor FisherBoost and LACBoost, it takes less than $ 4 $ hours to train\na complete multi-exit cascade.\\footnote{Our implementation is in C++ and \n only the weak classifier\n training part is parallelized using OpenMP. \n}\nIn other words, \n our EG algorithm takes less than $ 1 $ hour for \n solving the primal QP problem (we need to solve a QP at each iteration).\n A rough estimation of the computational complexity is as follows. \n Suppose that the number of training\n examples is $ m $, number of weak classifiers is $ n $,\n At each iteration of the cascade training,\n the complexity for solving the primal QP using EG is\n $ O( m n + k n^2) $ with $ k $ the iterations \n needed for EQ's convergence.\n The complexity for training the weak classifier is\n $ O( m d ) $ with $d$ the number of all Haar-feature patterns.\n In our experiment, $ m = 10,000 $,\n $ n \\approx 2900 $,\n $d = 160,000$,\n $ k < 500 $.\nSo the majority of the training computation is on the weak classifier training.\n\n\n\n\n We have also experimentally observed the speedup of EG against standard QP solvers. \n We solve the primal QP defined by \\eqref{EQ:QP2} using EG and Mosek \\cite{Mosek}.\n The QP's size is $ 1,000 $ variables. \n With the same accuracy tolerance (Mosek's primal-dual gap is set to $ 10^{-7}$\n and EG's convergence tolerance is also set to $ 10^{-7}$), \n Mosek takes $1.22 $ seconds and EG is\n $ 0.0541 $ seconds. 
So EG is about $ 20 $ times faster. \n Moreover, at iteration $ n + 1 $ of training the cascade,\n EG can take advantage of the previous iteration's solution\n by starting from a small perturbation of that solution. \n Such a warm start gains a $ 5 $ to $ 10\\times $ speedup in our experiment,\n while no off-the-shelf warm-start QP solver is available yet.\n\n\n We evaluate the detection performance on the MIT+CMU frontal\n face test set. \nTwo performance metrics are used here: one for each node and one for the entire cascade.\nThe node metric measures how well the classifiers meet the node learning objective,\nand provides useful information about \nthe capability of each method to achieve the node learning goal.\nThe cascade metric uses the receiver operating characteristic (ROC)\nto compare the performance of the entire cascade.\nMultiple factors have an impact on the cascade's performance: the classifiers,\nthe cascade structure, bootstrapping, {\\em etc}.\n\n\n\nWe show the node comparison results in Fig. \\ref{fig:node1}.\nThe node performances of FisherBoost and LACBoost\nare very similar. From Fig.~\\ref{fig:node1}, as reported in \\cite{Wu2008Fast},\nLDA or LAC post-processing can considerably reduce the\nfalse negative rates. \nAs expected, our proposed FisherBoost and LACBoost can further reduce the false\nnegative rates significantly. \nThis verifies the advantage of selecting features with the node learning goal \ntaken into account. \n\n \nFrom the ROC curves in Fig.~\\ref{fig:ROC1}, we can see that FisherBoost and LACBoost\n outperform all the other methods.\n In contrast to the results of the detection rate for each node, \n LACBoost is slightly worse than FisherBoost in some cases.\n That might be because many factors have an impact on the final\n detection result.\n \n \n \n \n LAC makes the assumption of Gaussianity and symmetry of the data distributions,\n which may not hold well in the early nodes. \n This could explain why\n LACBoost does not always perform the best.\n Wu {\\em et al.} have observed the same phenomenon, namely that LAC post-processing\n does not outperform LDA post-processing in a few cases.\n However, we believe that for harder detection tasks, the \n benefits of LACBoost would be more impressive.\n\n The error reduction results of FisherBoost and LACBoost in \n Fig.~\\ref{fig:ROC1} are not as great as those in Fig. \\ref{fig:node1}.\n This might be explained by the fact that the\n cascade and the negative data bootstrapping remove\n part of the error-reducing effect, to some extent.\n We have also compared our methods with the boosted greedy sparse LDA (BGSLDA) in\n \\cite{Paisitkriangkrai2009CVPR}, which is considered one of the state-of-the-art.\n \n \n \n We provide the ROC curves in the supplementary package.\n Both of our methods outperform \n BGSLDA with AdaBoost\/AsymBoost by about $ 2\\%$ in the detection rate. \n Note that BGSLDA uses the standard cascade. \n So besides the benefits of our FisherBoost\/LACBoost,\n the multi-exit cascade also contributes to the improvement. \n\n\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=.45\\textwidth]{Fisher_node}\n \\includegraphics[width=.45\\textwidth]{LACBoost_node}\n \\includegraphics[width=.45\\textwidth]{FisherBoost_vs_AsymBoost_noderates}\n \\includegraphics[width=.45\\textwidth]{LACBoost_vs_AsymBoost_noderates}\n \\end{center}\n \\caption{Node performances on the validation data.
\n ``Ada'' means that features are selected using AdaBoost;\n ``Asym'' means that features are selected using AsymBoost.\n }\n \label{fig:node1}\n\end{figure}\n\n\n\n\begin{figure}[t]\n \begin{center}\n \includegraphics[width=.45\textwidth]{Fisher_ROC}\n \includegraphics[width=.45\textwidth]{LACBoost_ROC}\n \includegraphics[width=.45\textwidth]{FisherBoost_vs_AsymBoost}\n \includegraphics[width=.45\textwidth]{LACBoost_vs_AsymBoost}\n \end{center}\n \caption{Cascade performances shown as ROC curves \n (number of false positives versus detection rate) on the MIT+CMU test data.\n ``Ada'' means that features are selected using AdaBoost. The Viola-Jones cascade\n is the method of\n \cite{Viola2004Robust}.\n ``Asym'' means that features are selected using AsymBoost.\n }\n \label{fig:ROC1}\n\end{figure}\n\n\n\n\n\n\n\comment{\n\begin{figure}[t]\n \begin{center}\n \includegraphics[width=.45\textwidth]{FisherBoost_vs_BGSLDA}\n \end{center}\n \caption{Cascade performances on the MIT+CMU test data. We compare our methods with BGSLDA in\n \cite{Paisitkriangkrai2009CVPR}.}\n \label{fig:ROC_BGSLDA}\n\end{figure}\n}\n\n\section{Conclusion}\n\n\n By explicitly taking into account the node learning goal in cascade classifiers,\n we have designed \n new boosting algorithms for more effective object detection. \n Experiments validate the superiority of our FisherBoost and LACBoost. \n We have also proposed to use entropic gradient (EG) descent to efficiently \n implement FisherBoost and LACBoost. The proposed algorithms are easy to implement\n and can be applied to other asymmetric classification tasks in computer vision.\n We are also trying to design new asymmetric boosting algorithms\n by drawing on asymmetric kernel classification methods. \n\n\n\bibliographystyle{splncs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Similar corrections are predicted in extra-dimensional models with a\nlow-energy compactification scale \\cite{9,10}. Many experiments of E\\\"{o}tvos- and Cavendish-type have\nbeen performed during the last few years searching for possible corrections to the Newton\nlaw of gravitation \\cite{11}.\n\nRecently it was found \\cite{12,13,14,15} that strong model-independent constraints on the coupling\nconstants of axions with nucleons follow from measurements of the Casimir-Polder and Casimir force.\nSome of these constraints overlap with an axion window and, thus, are complementary to\nastrophysical limits. As to corrections to Newton's law of gravitation, measurements of the van der Waals\nand Casimir forces have long been used to constrain their parameters \\cite{16,17}.\nNew, more precise measurements of the Casimir force allowed significant strengthening of previously\nobtained constraints on non-Newtonian gravity over the region of separations below $1\\,\\mu$m\n\\cite{18,19,20,21,22}.\n\nIn this paper, we review constraints on the coupling constants of an axion to a proton and a neutron,\nand corrections to Newton's law of gravitation which follow from the most precise measurements of the\nCasimir interaction \\cite{23,24}. We compare the obtained constraints on an axion with the alternative\nconstraints following from some other laboratory experiments. The constraints on the coupling constants\nof an axion and on non-Newtonian gravity, following from measurements of the Casimir interaction, are\nmutually compared and some conclusions inherent to both of them are obtained.\n\nThe paper is organized as follows. In Section 2 we consider the types of effective potentials which arise\ndue to one- and two-axion exchange. These are compared with the effective potentials originating from\nthe exchange of massless and massive scalar particles. Section 3 is devoted to the constraints on\naxion-nucleon coupling constants which follow from measurements of the Casimir-Polder force acting\nbetween the condensate of ${}^{87}$Rb atoms and a glass silica plate. In Section 4 the constraints on\naxion to nucleon coupling constants are presented obtained from measurements of the gradient of the\nCasimir force between a microsphere and a plate coated with a nonmagnetic metal Au or a magnetic\nmetal Ni. These experiments were performed by means of a dynamic atomic force microscope (AFM).\nSection 5 contains similar constraints obtained from measurements of the gradient of the Casimir force\nbetween Au-coated surfaces of a sphere and a plate using a micromachined oscillator.\nIn Section 6 the constraints on the coupling constants of an axion are provided which follow from\nmeasurements of the Casimir force between corrugated surfaces. In Section 7 we compare the\nconstraints on an axion found from measurements of the Casimir interaction with those obtained from\nsome other laboratory experiments. Section 8 is devoted to the constraints on non-Newtonian gravity\nderived from the Casimir effect. In Section 9 the reader will find our conclusions and discussion.\n\nThroughout the paper we use units in which $\\hbar=c=1$.\n\n\\section{Types of effective potentials}\n\nBelow we consider effective potentials arising from the interaction of nucleons (protons and neutrons) with\nan axion and other axion-like particles predicted in different variants of the Grand Unification Theories.\nAxions also interact with electrons and photons. 
These interactions are, however, much weaker than\naxion-nucleon interaction \\cite{25} and for our purposes can be neglected. In any case, their account would\nlead to only a minor strengthening of the constraints on axion-nucleon coupling constants obtained from the\nforce measurements between macroscopic bodies.\n\nWe assume that the interaction of axion-like particles $a$ with nucleons $\\psi$ is described by the\nLagrangian \\cite{4}\n\\begin{equation}\n{\\cal L}=-i g_{ak}\\bar{\\psi}\\gamma_5\\psi a,\n\\label{eq1}\n\\end{equation}\n\\noindent\nwhere $g_{ak}$ is the coupling constant of an axion to a proton ($k=p$) or to a neutron ($k=n$).\nIn doing so the pseudoscalar coupling of axions and other axion-like particles to nucleons is assumed\n(note that the pseudovector coupling introduced for the\noriginal QCD axions results in the nonrenormalizable\ntheory \\cite{15}). The exchange of one axion between two nucleons of spins\n$\\mbox{\\boldmath$\\sigma$}_{1,2}\/2$ situated at the points\n$\\mbox{\\boldmath$r$}_1\\neq\\mbox{\\boldmath$r$}_2$ with coupling (\\ref{eq1}) results in the following\neffective potential \\cite{25,26}\n\\begin{eqnarray}\n&&\nV(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2;\\mbox{\\boldmath$\\sigma$}_{1},\\mbox{\\boldmath$\\sigma$}_{2})\n=\\frac{g_{ak}g_{al}}{16\\pi m_km_l}\\left[\n\\vphantom{\\left(\n\\frac{m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)}\n(\\mbox{\\boldmath$\\sigma$}_{1}\\cdot\\mbox{\\boldmath$n$})\n(\\mbox{\\boldmath$\\sigma$}_{2}\\cdot\\mbox{\\boldmath$n$})\\right.\n\\nonumber\\\\\n&&~~~~~~~~~\n\\times\\left(\n\\frac{m_a^2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}+\n\\frac{3m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}+\n\\frac{3}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)\n\\nonumber\\\\\n&&~~~~\n\\left.-\n(\\mbox{\\boldmath$\\sigma$}_{1}\\cdot\\mbox{\\boldmath$\\sigma$}_{2})\n\\left(\n\\frac{m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}+\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)\n\\right]\\,e^{-m_a|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}.\n\\label{eq2}\n\\end{eqnarray}\n\\noindent\nHere, $g_{ak}$ and $g_{al}$ are the axion-proton ($k,l=p$) or \naxion-neutron ($k,l=n$) interaction constants, $m_k,\\,m_l$ are the nucleon masses,\n$m_a$ is the axion mass, and the unit vector\n$\\mbox{\\boldmath$n$}=\n(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2)\n\/|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|$.\n\nAs is seen in (\\ref{eq2}), the effective potential depends on the nucleon spins. Because of this,\nthe resulting interaction between two unpolarized test bodies averages to zero.\nTaking into account that already performed experiments on measuring the Casimir interaction\n\\cite{23,24} deal with unpolarized test bodies, it seems impossible to use them for constraining\nthe axion to nucleon coupling constants basing on the simplest process of one-axion exchange.\n\nThe situation changes when we consider the process of two-axion exchange between the two\nnucleons. 
In this case the Lagrangian (\\ref{eq1}) leads to the following effective\npotential \\cite{25,27,28}\n\\begin{equation}\nV_{kl}(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{g_{ak}^2g_{al}^2}{32\\pi^3m_km_l}\\,\n\\frac{m_a}{(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2)^2}\\,\nK_1(2m_a|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|),\n\\label{eq3}\n\\end{equation}\n\\noindent\nwhere $K_1(z)$ is the modified Bessel function of the second kind. Note that (\\ref{eq3}) is\nderived under the condition\n$|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|\\gg 1\/m_{k,l}$\nwhich is satisfied with a large safety margin in all the experiments considered below.\nEquation (\\ref{eq3}) does not depend on the nucleon spins. Thus, after the integration over the\nvolumes of test bodies, it leads to some additional force of the axionic origin which can be\nconstrained from the measurement results.\n\nNow we address to exchange of massless and light scalar particles between the atoms of two\nmacroscopic bodies. The exchange of one light scalar particle of mass $M$ between two pointlike\nparticles with masses $m_1$ and $m_2$ spaced at the points\n$\\mbox{\\boldmath$r$}_1$ and $\\mbox{\\boldmath$r$}_2$ results in the spin-independent\nYukawa-type effective potential \\cite{8}. It is convenient to parametrize this potential as a\ncorrection to Newton's law of gravitation:\n\\begin{equation}\nV(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{Gm_1m_2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\,\\left(1+\n\\alpha e^{-|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|\/\\lambda}\\right).\n\\label{eq4}\n\\end{equation}\n\\noindent\nHere, $\\alpha$ is a dimensionless constant characterizing the strength of Yukawa interaction,\n$\\lambda=1\/M$\n is the Compton wavelength of light scalar particle characterizing the interaction range, and $G$ is the\nNewtonian gravitational constant. As was noted in Section 1, the effective potential (\\ref{eq4}) arises\nalso in extradimensional models with a low-energy compactification scale \\cite{9,10}.\nIn this case the quantity $\\lambda$ has the meaning of the characteristic size of a multidimensional\ncompact manifold.\n\nThe exchange of one massless scalar particle leads to an effective\n potential which is inversely proportional to the separation\n distance.\nThe exchange of an even number of massless pseudoscalar particles\n(for instance, by the arions) results in the effective potentials\ninversely proportional to higher powers of the separation.\nSimilar potentials arise also due to the exchange of two\nneutrinos,\ntwo goldstinos, or other massless fermions \\cite{29,30}.\nThe power-type effective potentials are also usually\nparametrized as corrections to Newton's law of gravitation\n\\begin{equation}\nV_n(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{Gm_1m_2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\,\\left[1+\n\\Lambda_n\n\\left(\\frac{r_0}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\right)^{n-1}\n\\right].\n\\label{eq5}\n\\end{equation}\n\\noindent\nHere, $\\Lambda_n$ is a dimensionless constant, $n$ is a positive\ninteger, and $r_0=10^{-15}\\,$m is chosen to preserve the correct\ndimension of energy at different $n$. 
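It is sometimes helpful to evaluate the two-axion potential (\\ref{eq3}) numerically, for instance to see over which separations it is non-negligible for a given axion mass. A short sketch (ours, for illustration only; the nucleon mass and the conversion factor $\\hbar c$ are the only external inputs, and the coupling is specified through $g^2\/4\\pi$) is:
\\begin{verbatim}
import numpy as np
from scipy.special import k1

HBARC_EV_M = 1.97327e-7        # hbar*c in eV*m, so r[1/eV] = r[m]/HBARC_EV_M
M_NUCLEON_EV = 938.9e6         # mean nucleon mass in eV (approximate)


def v_two_axion(r_m, m_a_eV, g2_over_4pi):
    # two-axion exchange potential of Eq. (3) between two nucleons, in eV,
    # for a separation r_m given in metres and equal couplings g_ap = g_an
    g2 = 4.0 * np.pi * g2_over_4pi
    r = r_m / HBARC_EV_M
    pref = -g2 ** 2 / (32.0 * np.pi ** 3 * M_NUCLEON_EV ** 2)
    return pref * m_a_eV / r ** 2 * k1(2.0 * m_a_eV * r)


# example: a 1 eV axion with g^2/(4 pi) = 1e-6, nucleons 100 nm apart
print(v_two_axion(100e-9, 1.0, 1.0e-6))
\\end{verbatim}
Because of the Bessel function, the potential is exponentially suppressed at separations larger than about $1\/(2m_a)$, which is why the laboratory constraints discussed below weaken with increasing axion mass.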
Note that the exchange by\ntwo axion-like particles in the limiting case $m_a\\to 0$\nin accordance to (\\ref{eq3}) results in the potential \\cite{31}\n\\begin{equation}\nV_{kl}(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{g_{ak}^2g_{al}^2}{64\\pi^3m_km_l}\\,\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}.\n\\label{eq6}\n\\end{equation}\n\\noindent\nThis can be represented as a correction to Newton's law of\ngravitation in (\\ref{eq5}) with $n=3$ (the same power-type\ninteraction is obtained from the exchange of two arions).\nThe effective potential (\\ref{eq5}) with $n=3$ is also obtained\nfrom extra-dimensional models with noncompact (but warped)\nextra dimensions \\cite{32,33}.\n\n\\section{Constraints on an axion from measurements of the Casimir-Polder force}\n\nThe Casimir-Polder force acting between ${}^{87}$Rb atoms\nbelonging to a Bose-Einstein condensate cloud and a SiO${}_2$\nplate was measured by means of the following dynamic\nexperiment \\cite{34}. The condensate cloud was placed in a\nmagnetic trap with frequencies $\\omega_{0z}=1438.85\\,$rad\/s\nin the perpendicular direction to the plate and\n$\\omega_{0t}=40.21\\,$rad\/s in the lateral direction.\nThe Thomas-Fermi radii of the condensate cloud of ${}^{87}$Rb\natoms in the perpendicular and lateral directions were\n$R_z=2.69\\,\\mu$m and $R_l=97.1\\,\\mu$m, respectively.\nThe dipole oscillations of the condensate in the $z$ direction\nwith a constant amplitude $A_z=2.5\\,\\mu$m were excited.\nThe separation distance $a$ between the center of mass of a\ncondensate and a plate was varied from 6.88 to $11\\,\\mu$m,\ni.e., in the region where the thermal effects in the\nCasimir-Polder force contribute essentially.\nThe temperature of the plate was equal to either $T=310\\,$K\n(as in an environment) or $T=479\\,$K and $T=605\\,$K (which\ncorresponds to out of equilibrium situations). However, for\nconstraining the parameters of an axion, the strongest result\nfollows from the measurements in thermal equilibrium.\n\nUnder the influence of the Casimir-Polder force between\n${}^{87}$Rb atoms and a plate, the oscillation frequency\n$\\omega_{0z}$ slightly shifts to some other value $\\omega_z$.\nThe relative frequency shift is given by\n\\begin{equation}\n\\gamma_z=\\frac{|\\omega_{0z}-\\omega_z|}{\\omega_{0z}}\\approx\n\\frac{|\\omega_{0z}^2-\\omega_z^2|}{2\\omega_{0z}^2}.\n\\label{eq7}\n\\end{equation}\n\\noindent\nThis frequency shift was measured \\cite{34} as a function of\n$a$ with some measurement errors determined at a 67\\% confidence\nlevel. For example, at the shortest separation $a_1=6.88\\,\\mu$m\nthis absolute error was $\\Delta_1\\gamma_z=3.06\\times 10^{-5}$.\nThe quantity $\\gamma_z$ was also calculated using the Lifshitz\ntheory of atom-wall interaction and subsequent averaging over the\ncondensate cloud. Under the assumption that SiO${}_2$ is an ideal\ninsulator, i.e., by disregarding the influence of its dc\nconductivity,\nit was found \\cite{34} that the measurement results are in\nagreement with theory in the limits of the experimental error\n$\\Delta\\gamma_z$ (the importance of this assumption was\ndemonstrated later \\cite{23,24,35}).\n\nDue to the interaction potential (\\ref{eq3}), there may be also\nsome additional force between a condensate cloud and a plate\ncaused by the two-axion exchange between protons and neutrons\nbelonging to them. 
The respective additional frequency shift can\nbe calculated by the additive summation of (\\ref{eq3}) over all\nnucleons of a ${}^{87}$Rb atom and a plate with subsequent\naveraging over the condensate cloud (see \\cite{12} for details).\nUnder an assumption that the plate has an infinitely large area\n(it was shown \\cite{12} that relative corrections to the result\ndue to a finite plate area are of order $10^{-6}$) the additional\nfrequency shift due to two-axion exchange is given by \\cite{12}\n\\begin{equation}\n\\gamma_{z}^{\\rm add}(a)=\n\\frac{15A(g_{ap},g_{an})}{2\\pi A_z m_{\\rm Rb}\\omega_{0z}^2}\n\\Phi(a,m_a),\n\\label{eq8}\n\\end{equation}\n\\noindent\nwhere $m_{\\rm Rb}$ is the mass of ${}^{87}$Rb atom and the\nfunction $\\Phi(a,m_a)$ is defined as\n\\begin{eqnarray}\n&&\n\\Phi(a,m_a)=\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u}\ne^{-2m_aau}\n\\nonumber \\\\\n&&~~~~~~\\times\n\\left(1-e^{-2m_aDu}\\right)\\,\nI_1(2m_aA_zu)\\Theta(2m_aR_zu).\n\\label{eq9}\n\\end{eqnarray}\nHere, $D=7\\,$mm is the thickness of SiO${}_2$ plate and\n\\begin{equation}\n\\Theta(t)\\equiv\\frac{1}{t^3}(t^2\\sinh t-3t\\cosh t+3\\sinh t).\n\\label{eq10}\n\\end{equation}\n\\noindent\nThe constant $A(g_{ap}g_{an})$ in (\\ref{eq8}) depends on the\nmaterial properties as follows \\cite{12}\n\\begin{eqnarray}\n&&\nA(g_{ap},g_{an})=\n\\frac{\\rho_{{\\rm SiO}_2}m_a}{16\\pi^2m^2m_{\\rm H}}\n(37g_{ap}^2+50g_{an}^2)\n\\nonumber\\\\\n&&~~~~~~~\n\\times\\left(\\frac{Z_{{\\rm SiO}_2}}{\\mu_{{\\rm SiO}_2}}g_{ap}^2\n+\\frac{N_{{\\rm SiO}_2}}{\\mu_{{\\rm SiO}_2}}g_{an}^2\\right),\n\\label{eq11}\n\\end{eqnarray}\n\\noindent\nwhere $\\rho_{{\\rm SiO}_2}$ is the plate density,\n$m=(m_p+m_n)\/2$ is the mean nucleon mass,\n$Z_{{\\rm SiO}_2}$ and $N_{{\\rm SiO}_2}$ are the number of protons\nand the mean number of neutrons in a SiO${}_2$ molecule,\nrespectively. The quantity\n$\\mu_{{\\rm SiO}_2}=m_{{\\rm SiO}_2}\/m_{\\rm H}$, where\n$m_{{\\rm SiO}_2}$ is the mean mass of a SiO${}_2$ molecule and\n$m_{\\rm H}$ is the mass of atomic hydrogen.\n\nTaking into account that the observed frequency shift was in\nagreement with that originating from the Casimir-Polder force,\nthe additional frequency shift (\\ref{eq8}) due to two-axion\nexchange should be constrained by the magnitude of the experimental\nerror\n\\begin{equation}\n\\gamma_{z}^{\\rm add}(a_1)\\leq\\Delta_1\\gamma_z.\n\\label{eq12}\n\\end{equation}\n\\noindent\n{}From the numerical analysis of this equation, the constraints\non axion-nucleon coupling constants were obtained \\cite{12} under\ndifferent assumptions about a relationship between $g_{an}$ and\n$g_{ap}$. For example, under a natural assumption that\n$g_{an}=g_{ap}$ \\cite{25}, the resulting constraints are shown\nin Fig.~1, where the region of the plane above the line is\nexcluded and the region below the line is allowed.\nThese constraints cover the wide region of axion masses from\n$m_a=10^{-4}$ to 0.3\\,eV. 
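The numerical analysis behind Fig.~1 is conceptually straightforward. The sketch below (ours, for illustration only) evaluates Eqs.~(\\ref{eq9})--(\\ref{eq11}) and inverts the inequality (\\ref{eq12}) for the common coupling $g=g_{ap}=g_{an}$ at a given axion mass. The SiO$_2$ parameters and the unit-conversion factors are approximate assumptions and not the exact inputs of Ref.~\\cite{12}, so the resulting numbers only indicate the order of magnitude of the bound.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

# natural units (hbar = c = 1, energies in eV); approximate conversions
HBARC_EV_M = 1.97327e-7         # hbar*c in eV*m
HBAR_EV_S = 6.582e-16           # hbar in eV*s
GCM3_TO_EV4 = 4.3e18            # 1 g/cm^3 expressed in eV^4 (approximate)
to_inv_eV = lambda x_m: x_m / HBARC_EV_M

# experimental parameters quoted in the text
a1 = to_inv_eV(6.88e-6)         # shortest separation
A_z = to_inv_eV(2.5e-6)         # oscillation amplitude
R_z = to_inv_eV(2.69e-6)        # Thomas-Fermi radius of the cloud
D = to_inv_eV(7.0e-3)           # plate thickness
omega0 = 1438.85 * HBAR_EV_S    # trap frequency in eV
dgamma = 3.06e-5                # error of the frequency-shift measurement
m_Rb = 86.9 * 931.5e6           # 87Rb mass in eV (approximate)
m_nuc = 938.9e6                 # mean nucleon mass in eV (approximate)
m_H = 938.8e6                   # atomic-hydrogen mass in eV (approximate)

# SiO2 plate parameters (approximate values, our assumption)
rho = 2.2 * GCM3_TO_EV4         # density
Z, N, mu = 30.0, 30.1, 59.6     # protons, mean neutrons, m_SiO2 over m_H


def theta(t):   # Eq. (10)
    return (t * t * np.sinh(t) - 3.0 * t * np.cosh(t) + 3.0 * np.sinh(t)) / t ** 3


def phi(a, m_a):   # Eq. (9)
    f = lambda u: (np.sqrt(u * u - 1.0) / u * np.exp(-2.0 * m_a * a * u)
                   * (1.0 - np.exp(-2.0 * m_a * D * u))
                   * i1(2.0 * m_a * A_z * u) * theta(2.0 * m_a * R_z * u))
    # the integrand decays like exp(-2 m_a (a - A_z - R_z) u): cut off the tail
    u_max = 1.0 + 60.0 / (2.0 * m_a * (a - A_z - R_z))
    return quad(f, 1.0, u_max, limit=200)[0]


def g2_bound(m_a):
    # largest g^2 = g_ap^2 = g_an^2 allowed by Eq. (12) at axion mass m_a (eV);
    # Eq. (11) with g_ap = g_an = g reads A = K * g^4
    K = rho * m_a * 87.0 * (Z + N) / (16.0 * np.pi ** 2 * m_nuc ** 2 * m_H * mu)
    # Eqs. (8) and (12): 15 K g^4 Phi / (2 pi A_z m_Rb omega0^2) <= dgamma
    g4 = dgamma * 2.0 * np.pi * A_z * m_Rb * omega0 ** 2 / (15.0 * K * phi(a1, m_a))
    return np.sqrt(g4)


for m_a in (1e-3, 1e-2, 1e-1):
    print(m_a, g2_bound(m_a) / (4.0 * np.pi))
\\end{verbatim}
The same inversion, with the appropriate geometry factors, underlies the constraints derived from the force-gradient and pressure measurements in the following sections.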
As is seen in Fig.~1, the strength\nof constraints decreases with increasing axion mass.\nIn Section 7 we compare the constraints of Fig.~1 with those\nobtained from other measurements of the Casimir force and\ndifferent laboratory experiments.\n\n\\section{Constraints on an axion from measurements of the gradient of the Casimir force\nby means of AFM}\n\nIn the sequence of three experiments, the gradient of the Casimir\nforce was measured\nbetween the surfaces of a hollow sphere and a plate both coated\nwith Au films \\cite{36,37},\nwith Au and Ni films, respectively \\cite{38},\nand with Ni films \\cite{39,40}.\nFor technological purposes, there were also various material\nlayers\nbelow Au and Ni coatings on both a hollow sphere made of fused\nsilica (SiO${}_2$) and a sapphire (Al${}_2$O${}_3$) plate.\nThe radii of spheres were of about $50\\,\\mu$m and the plates (disks)\nwere of approximately 5\\,mm radius, i.e., by a factor of 100\nlarger than the spheres.\nMeasurements of the gradient of the Casimir force,\n$\\partial F_C(a)\/\\partial a$,\nas a function of separation $a$ between the plate and the sphere,\nwere performed by means of dynamic AFM (see \\cite{36,37} for\ndetails). In all three experiments the measurement results were\nfound in agreement with theoretical predictions of the Lifshitz\ntheory in the limits of the experimental errors\n$\\Delta F_C^{\\prime}(a)$. Calculations of the theoretical force\ngradients were performed with omitted relaxation properties of\nconduction electrons in metals (an account of the\nrelaxation properties of\nconduction electrons in computations using the Lifshitz theory\nleads to disagreement with the measurement data of many\nexperiments \\cite{22,23,24,36,37,39,40}).\n\nThe two-axion exchange between nucleons belonging to a sphere and\na plate leads to some attraction in addition to the Casimir force.\n The gradient of this additional force acting between a spherical\nenvelope (layer) of thickness $\\Delta_s$ and external radius $R$,\nand a plate of thickness $D$ can be calculated by the additive\nsummation of the interaction potentials (\\ref{eq3}) \\cite{13}\n\\begin{eqnarray}\n&&\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}=\n\\frac{\\pi}{m^2m_H^2}C_pC_s\n\\int_{1}^{\\infty}\\!du\\frac{\\sqrt{u^2-1}}{u^2}\n\\left(1-e^{-2m_auD}\\right)\n\\nonumber \\\\\n&&~~~~\n\\times\ne^{-2m_aau}\\,\\left[\\Phi(R,m_au)-\ne^{-2m_au\\Delta_s}\\Phi(R-\\Delta_s,m_au)\\right],\n\\label{eq13}\n\\end{eqnarray}\n\\noindent\nwhere the function $\\Phi(r,z)$ is defined as\n\\begin{equation}\n\\Phi(r,z)=r-\\frac{1}{2z}+e^{-2rz}\\left(r+\n\\frac{1}{2z}\\right),\n\\label{eq14}\n\\end{equation}\nthe coefficients $C_{p(s)}$ for a plate (spherical layer)\nmaterials are given by\n\\begin{equation}\nC_{p(s)}=\\rho_{p(s)}\\left(\\frac{g_{ap}^2}{4\\pi}\\,\n\\frac{Z_{p(s)}}{\\mu_{p(s)}}+\\frac{g_{an}^2}{4\\pi}\\,\n\\frac{N_{p(s)}}{\\mu_{p(s)}}\\right),\n\\label{eq15}\n\\end{equation}\n\\noindent\n$\\rho_{p(s)}$ are the plate (spherical layer)\ndensities, and the quantities $Z_{p(s)}$, $N_{p(s)}$ and\n$\\mu_{p(s)}$ have the same meaning, as explained below\n(\\ref{eq11}), but in application to the molecules (atoms)\nof a plate and a spherical layer, respectively.\n\nNow we concentrate our attention on the experiment using\nAu-coated surfaces of a spherical envelope of thickness\n$\\Delta_s^{\\! g}=5\\,\\mu$m, of radius $R=41.3\\,\\mu$m\nand a plate \\cite{36,37}. 
The thicknesses of the Au coating\non the sphere and the plate were\n$\\Delta_s^{\\!\\rm Au}=\\Delta_p^{\\!\\rm Au}=280\\,$nm.\nThis allows to calculate the Casimir force (but not the\nadditive force due to two-axion exchange) as between entirely\nAu bodies. In calculation of the additional force it should\nbe taken into account that in the experiment \\cite{36,37}\nthe Au layers on both the spherical envelope and the plate\nwere deposited on the layers of Al of equal thicknesses\n$\\Delta_s^{\\!\\rm Al}=\\Delta_p^{\\!\\rm Al}=20\\,$nm.\nNow the gradient of the additional force can be calculated\nby applying (\\ref{eq13}) to each pair of material layers\nforming the spherical envelope and the plate taking into\naccount the separation distances between each pair of\nmaterial layers\n\\begin{eqnarray}\n&&\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}=\n\\frac{\\pi}{m^2m_H^2}\n\\int_{1}^{\\infty}\\!du\\frac{\\sqrt{u^2-1}}{u^2}\ne^{-2m_aau}\n\\nonumber \\\\\n&&~~~~~~~~~~\n\\times\nX_p(m_au)X_s(m_au),\n\\label{eq16}\n\\end{eqnarray}\n\\noindent\nwhere\n\\begin{eqnarray}\n&&\nX_p(z)\\equiv C_{\\rm Au}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Au}}\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Al}e^{-2z\\Delta_p^{\\!\\rm Au}}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Al}}\\right)\n+C_{\\rm sa}e^{-2z(\\Delta_p^{\\!\\rm Au}+\\Delta_p^{\\!\\rm Al})},\n\\nonumber \\\\[-2mm]\n&&\n\\label{eq17} \\\\[-2mm]\n&&\nX_s(z)\\equiv C_{\\rm Au}\\left[\\Phi(R,z)-e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right]\n\\nonumber \\\\\n&&~~~~~~\n+C_{\\rm Al}e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\left[\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~\\left.\n-e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al},z)\\right]\n\\nonumber \\\\\n&&~~~~~~\n+C_{g}e^{-2z(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al})}\n\\left[\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al},z)\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~\\left.\n-\ne^{-2z\\Delta_s^{\\! g}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al}\n-\\Delta_s^{\\!g},z)\\right].\n\\nonumber\n\\end{eqnarray}\n\\noindent\nIn these equations, the thickness of the sapphire plate\nwas put equal to\ninfinity, as it does not influence the result.\nThe coefficients $C_{\\rm Au}$, $C_{\\rm Al}$, $C_{g}$\nand $C_{sa}$ are defined in Eq.~(\\ref{eq15}) which\nshould be applied to the\natoms Au and Al and to the molecules of glass and sapphire\n[the densities of these materials entering (\\ref{eq15}) are\n$\\rho_{\\rm Au}$, $\\rho_{\\rm Al}$, $\\rho_g$ and $\\rho_{sa}$;\nthey can be found in the tables].\n\nTaking into account that no additional force was observed in\nthe experiment \\cite{36,37} within the measurement\nerror, one can write\n\\begin{equation}\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}\\leq\n\\Delta F_C^{\\prime}(a).\n\\label{eq18}\n\\end{equation}\n\\noindent\nNumerical analysis of this equation leads to new constraints\non the interaction constants $g_{ap}$ and $g_{an}$.\nThe strongest constraints are obtained at the shortest\nexperimental separation $a_1=235\\,$nm. 
At this separation\ndistance the experimental error determined at a 67\\% confidence\nlevel is\n$\\Delta F_C^{\\prime}(a_1)\\equiv\\Delta_1F_C^{\\prime}=\n0.5\\,\\mu$N\/m \\cite{36}.\nIn Fig.~2 we show these constraints by the solid line under\nthe assumption $g_{ap}=g_{an}$ (see \\cite{13} for the alternative\nassumptions). The region of the plane above the line is\nexcluded, and the region below the line is allowed.\nThe comparison of the solid line in Fig.~2 with the line in\nFig.~1 shows that the constraints following from measurements\nof the gradient of the Casimir force are stronger than those\nobtained from measurements of the Casimir-Polder force.\nThe largest strengthening by a factor of 170 is achieved for the\naxion mass $m_a=0.3\\,$eV.\n\nSimilar results can be obtained \\cite{13} from the measurement\ndata of experiment with a Au-coated spherical envelope of\n$R=64.1\\,\\mu$m radius and a Ni-coated plate \\cite{38}.\nThe gradient of the additional force due to two-axion exchange\nis again given by (\\ref{eq16}), where $X_s(z)$ is presented in\n(\\ref{eq17}) and $X_p(z)$ takes a more simple form due to the\nabsence of an Al layer below a Ni coating\n\\begin{equation}\nX_p(z)= C_{\\rm Ni}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Ni}}\\right)\n+C_{\\rm Si}e^{-2z\\Delta_p^{\\!\\rm Ni}}.\n\\label{eq19}\n\\end{equation}\nHere, $\\Delta_p^{\\!\\rm Ni}=154\\,$nm and $C_{\\rm Ni}$ can be\ncalculated using (\\ref{eq15}).\n\nThe constraints on the coupling constants of axions to nucleons\ncan be again obtained from (\\ref{eq18}).\nThe strongest constraints follow at the shortest separation\nequal to $a_1=220\\,$nm in this experiment. The respective total\nexperimental error determined at a 67\\% confidence level is\n$\\Delta_1F_C^{\\prime}=0.79\\,\\mu$N\/m \\cite{38}.\nThe constraints obtained under the condition $g_{ap}=g_{an}$\nare shown by the long-dashed line in Fig.~2. As can be seen in\nFig.~2, the constraints following from the experiment with Au-Ni\ntest bodies are up to a factor 1.5 weaker\nthan those obtained\nfrom the experiment with Au-Au test bodies. The main reason is\nthe smaller density of Ni, as compared with Au.\n\nIn the third experiment, a Ni-coated spherical envelope of\n$R=61.71\\,\\mu$m radius and a Ni-coated plate were used \\cite{39,40}.\nThe additional force can be again expressed by (\\ref{eq16}).\nIn this case, however, the functions $X_p(z)$ and $X_s(z)$ are\nmore complicated than in the previously considered experiments\nbecause for technological purposes there were two additional\nlayers (Al and Cr)\nbelow the Ni coating on both a spherical envelope and on a plate\n(see \\cite{13} for explicit expressions).\n\nThe constraints on $g_{ap}=g_{an}$ were again obtained from (\\ref{eq18}).\nThe strongest constraints follow at the shortest separation\ndistance ($a_1=223\\,$nm in this case). The total\nexperimental error determined at a 67\\% confidence level\nat the shortest separation is\n$\\Delta_1F_C^{\\prime}=1.2\\,\\mu$N\/m \\cite{38}.\nThe obtained constraints\nare shown by the short-dashed line in Fig.~2. They are slightly\nweaker than those following from the experiments with Au-Au\nand Au-Ni test bodies. 
This is again explained by\nthe smaller density of Ni in comparison with that of Au (see\nSection 7 for comparison with other laboratory constraints).\n\n\\section{Constraints on an axion from measurements of the Casimir pressure\nby means of micromachined oscillator}\n\nThe Casimir pressure $P_C(a)$ between two parallel Au-coated\nplates\nwas determined from dynamic measurements performed in sphere-plate\n geometry using a micromechanical torsional oscillator\n\\cite{41,42}.\nA sapphire sphere and a Si plate of thickness $D=5\\,\\mu$m were\ncoated with the layers of Cr of equal thickness\n$\\Delta_s^{\\!\\rm Cr}=\\Delta_p^{\\!\\rm Cr}=10\\,$nm.\nThe outer layers of Au were of thicknesses\n$\\Delta_s^{\\!\\rm Au}=180\\,$nm on the sphere and\n$\\Delta_p^{\\!\\rm Au}=210\\,$nm on the plate.\nThe resulting radius of the sphere was measured to be\n$R=151.3\\,\\mu$m. The experimental results for the Casimir pressure\n between two parallel plates spaced $a$ apart were found to be\nin agreement with the predictions of the Lifshitz theory in the\nlimits of the total experimental error in the pressure\nmeasurements $\\Delta P_C(a)$ determined at a 95\\% confidence\nlevel. Here, we recalculate this error to a 67\\% confidence\nlevel in order to obtain constraints comparable with those\nfollowing from other experiments. The theoretical results were\nobtained with omitted contribution of the relaxation properties\nof free electrons (taking these properties into account leads\nto theoretical predictions excluded by the measurement data\n\\cite{23,24,41,42}).\n\nThe additional effective pressure between two parallel plates\ndue to two-axion exchange between nucleons of a sphere and a\nplate can be calculated by the additive summation using the\ninteraction potential (\\ref{eq3}) (see \\cite{14} for details).\nThe result is the following \\cite{14}:\n\\begin{eqnarray}\n&&\nP_{\\rm add}(a)=\n-\\frac{1}{2m^2m_{\\rm H}^2R}\\int_{1}^{\\infty}\\!\\!\\!du\n\\frac{\\sqrt{u^2-1}}{u^2}\n\\nonumber \\\\\n&&~~~~~~~~~\n\\times\ne^{-2m_aau}\\tilde{X}_p(m_au)\\tilde{X}_s(m_au),\n\\label{eq20}\n\\end{eqnarray}\n\\noindent\nwhere\n\\begin{eqnarray}\n&&\n\\tilde{X}_p(z)\\equiv C_{\\rm Au}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Au}}\n\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Cr}e^{-2z\\Delta_p^{\\!\\rm Au}}\n\\left(1-e^{-2z\\Delta_p^{\\!\\rm Cr}}\n\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Si}e^{-2z(\\Delta_p^{\\!\\rm Au}+\\Delta_p^{\\!\\rm Cr})}\n\\left(1-e^{-2zD}\n\\right),\n\\label{eq21} \\\\[1mm]\n&&\n\\tilde{X}_s(z)\\equiv C_{\\rm Au}\\left[\n\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Au}}}\n\\Phi(R,z)\n-e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right]\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Cr}e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\left[\n\\vphantom{e^{-2m_au\\Delta_s^{\\!\\rm Au}}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\n\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~~~\n\\left.\n-e^{-2z\\Delta_s^{\\!\\rm Cr}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Cr},z)\n\\right]\n\\nonumber \\\\\n&&~~~\n+C_{sa}\ne^{-2z(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Cr})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Cr},z).\n\\nonumber\n\\end{eqnarray}\n\\noindent\nThe function $\\Phi(r,z)$ used here is given in (\\ref{eq14}).\nThe coefficients $C_{\\rm Au}$, $C_{\\rm Cr}$, $C_{\\rm Si}$, and\n$C_{sa}$ are the same as used above. 
All of them are expressed\nby (\\ref{eq15}), as applied to respective materials.\n\nThe constraints on the axion-nucleon interaction constants were\nfound from the inequality\n\\begin{equation}\n|P_{\\rm add}(a)|\\leq\\Delta P_C(a).\n\\label{eq22}\n\\end{equation}\n\\noindent\nFor different regions of axion masses the strongest constraints\nfollow from (\\ref{eq22}) at different separation distances.\nThus, within the regions\n$m_a<0.1\\,$eV, $0.1\\,\\mbox{eV}\\leq m_a<0.5\\,$eV and\n$0.5\\,\\mbox{eV}\\leq m_a<15\\,$eV the strongest constraints were\nobtained at $a=300$, 200 and 162\\,nm, respectively.\nAt these separations the total experimental errors in\nmeasurements of the Casimir pressure recalculated to a 67\\%\nconfidence level were equal to 0.22, 0.38, and 0.55\\,mPa,\nrespectively. In Fig.~3 the obtained constraints are shown\nby the solid line under the condition $g_{ap}=g_{an}$.\nThey are stronger than the constraints following from\nmeasurements of the Casimir-Polder force (see Fig.~1) and from\nmeasurements of the gradient of the Casimir force between\nAu-Au surfaces (see the solid line in Fig.~2). Thus, at\n$m_a=1\\,$eV\nthe constraints of Fig.~3 are stronger by a factor of 3.2 than\nthe strongest constraints of Fig.~2 shown by the solid line\n(a more detailed comparison is contained in Section 7).\n\n\\section{Constraints on an axion from measurements of the Casimir force\nbetween corrugated surfaces}\n\nSeveral measurements of the Casimir interaction between a sphere\nand a plate were performed in the case when the surface of at\n least one test body is not smooth, but covered with the\n longitudinal corrugations \\cite{43,44,45,46,47,48,49,50}.\n The shape of the corrugations was either sinusoidal\n \\cite{43,44,47,48,49,50} or rectangular \\cite{45,46}\n(in the latter case the sphere was smooth, and only the plate was\ncorrugated). If both the test bodies are corrugated and some\nnonzero phase shift between corrugations is present, there is\nnot only the normal Casimir force acting perpendicular to the\nsurfaces, but the lateral Casimir force as well\n\\cite{43,44,47,48}.\nHere we consider the constraints on axion-nucleon coupling\nconstants obtained \\cite{15} from measurements of the normal\n\\cite{49,50} and lateral \\cite{47,48} Casimir force between\nsinusoidally corrugated Au-coated surfaces (experiments\n\\cite{43,44} are less precise, and experiments \\cite{45,46}\nuse the rectangular corrugated Si plates and lead to weaker\nconstraints due to a smaller density of Si).\n\nWe begin with an experiment on measuring the lateral Casimir\nforce between\nsinusoidally corrugated surfaces of a sphere and a plate\n\\cite{47,48}. The corrugation axes of the longitudinal\ncorrugations on both bodies were kept parallel, and there was\nsome phase shift $\\varphi_0$ between corrugations.\nThe period of corrugations was $\\Lambda=574.4\\,$nm.\nMeasurements of the lateral Casimir force as a function of\nthe phase shift were performed over the region of separations\nbetween the mean levels of corrugations from 120 to 190\\,nm.\nThe corrugation amplitudes were\n$A_1=85.4\\,$nm and $A_2=13.7\\,$nm on the plate and on the\nsphere, respectively. 
The plate was made of a hard epoxy and\ncoated with a layer of Au of thickness\n$\\Delta_p^{\\!\\rm Au}=300\\,$nm.\nThe sphere was made of polystyrene and coated with a layer of\nCr of $\\Delta_s^{\\!\\rm Cr}=10\\,$nm thickness and then with a layer of\nAu of $\\Delta_s^{\\!\\rm Au}=50\\,$nm thickness.\nThe outer radius of the sphere was measured to be $R=97.0\\,\\mu$m.\nThe measurement results were compared with theoretical predictions\n of the scattering theory (which generalizes the Lifshitz theory\n for the case of arbitrary shaped bodies) and demonstrated good\n agreement in the limits of the experimental error\n $\\Delta F_C^{\\rm lat}(a)$ \\cite{47,48}.\n\n The additional lateral force due to two-axion exchange between\n sinusoidally corrugated surfaces of a sphere and a plate can be\n calculated using (\\ref{eq3}). The maximum amplitude of this\n force, which is obtained at the phase shift $\\varphi_0=\\pi\/2$,\n takes the form \\cite{15}\n\\begin{eqnarray}\n&&\n\\max|F_{\\rm add}^{\\rm lat}(a)|=\n\\frac{\\pi^2 RC_{\\rm Au}}{m_am^2m_{\\rm H}^2}\\,\n\\frac{A_1A_2}{\\Lambda\\sqrt{A_1^2+A_2^2}}\n\\nonumber \\\\[1mm]\n&&~~\n\\times\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u^3}\ne^{-2m_aua} I_1\\left(2m_au\\sqrt{A_1^2+A_2^2}\\right)\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times\n(1-e^{-2m_au\\Delta_p^{\\!\\rm Au}})\n\\left[\n\\vphantom{e^{-2m_au\\Delta_{\\rm Au}^{\\!(1)}}}\nC_{\\rm Au}+(C_{\\rm Cr}-C_{\\rm Au})\n\\right.\n\\nonumber \\\\[1mm]\n&&~~~~\\left.\n\\times e^{-2m_au\\Delta_s^{\\!\\rm Au}}\n-C_{\\rm Cr}\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Cr})}\n\\right].\n\\label{eq23}\n\\end{eqnarray}\n\\noindent\nHere, the hard epoxy and polystyrene would lead to negligibly\nsmall contributions to the force due to two-axion exchange.\nBecause of this, only metallic coatings were taken into account\nin (\\ref{eq23}).\n\nThe constraints on an axion can be obtained from the inequality\n\\begin{equation}\n\\max|F_{\\rm add}^{\\rm lat}(a)|\\leq\n\\Delta F_{C}^{\\rm lat}(a),\n\\label{eq24}\n\\end{equation}\n\\noindent\nwhere the left-hand side is given by (\\ref{eq23}).\nFor axion-like particles with masses $m_a<20\\,$eV, the strongest\nconstraints are obtained from the measure of agreement between\nexperiment and theory at $a=124.7\\,$nm. At this separation the\ntotal experimental error recalculated to a 67\\% confidence level\nfor convenience in comparison with other experiments is\n$\\Delta F_C^{\\rm lat}=2.4\\,$pN (note that according to a\nconservative estimation, the total experimental error\ncalculated in \\cite{47,48} at a 95\\% confidence level is by a\nfactor of 2 larger than the same error found at a 67\\% confidence\nlevel). The constraints on $g_{ap}=g_{an}$ obtained from\n(\\ref{eq24}) at $a=124.7\\,$nm\nare shown by the solid line in Fig.~4, where the region of the\nplane above the line is excluded and the region below the line is\nallowed. Note that this line is slightly different from the\nrespective lines in Fig.~2(a,b) in \\cite{15} because it was\nplotted there at the 95\\% confidence level.\n\nWe now turn our attention to the experiment on measuring the\nnormal Casimir force between a sinusoidally corrugated Au-coated\npolystyrene sphere of $R=99.6\\,\\mu$m radius and a sinusoidally\ncorrugated Au-coated plate made of hard epoxy \\cite{49,50}.\nThis experiment was performed at different angles between the\nlongitudinal corrugations on the sphere and on the plate varying\nfrom 0 to 2.4${}^{\\circ}$. 
We now turn our attention to the experiment on measuring the\nnormal Casimir force between a sinusoidally corrugated Au-coated\npolystyrene sphere of $R=99.6\\,\\mu$m radius and a sinusoidally\ncorrugated Au-coated plate made of hard epoxy \\cite{49,50}.\nThis experiment was performed at different angles between the\nlongitudinal corrugations on the sphere and on the plate, varying\nfrom 0 to 2.4${}^{\\circ}$. There was no phase shift between\nthe corrugations on the two bodies. Below we obtain constraints on the\naxion-nucleon coupling constants from the measurement data for\nthe case of parallel corrugation axes on the sphere and the\nplate. The thicknesses of the Au coatings on the sphere and on the\nplate were $\\Delta_s^{\\!\\rm Au}=110\\,$nm and\n$\\Delta_p^{\\!\\rm Au}=300\\,$nm, respectively.\nFor technological purposes, before depositing the Au coatings,\nthe sphere was first coated with a layer of Cr of thickness\n$\\Delta_s^{\\!\\rm Cr}=10\\,$nm and then\nwith a layer of Al of thickness\n$\\Delta_s^{\\!\\rm Al}=20\\,$nm.\nThe period of the uniaxial sinusoidal corrugations on both bodies\nwas $\\Lambda=570.5\\,$nm, and the corrugation amplitudes were\n$A_1=40.2\\,$nm and $A_2=14.6\\,$nm on the plate and on the sphere,\nrespectively. The measurement results were compared with\ntheoretical predictions of the scattering theory and found to be in\ngood agreement within the limits of the total experimental error.\n\nThe additional normal force acting between the sphere and the plate\ndue to two-axion exchange was again calculated \\cite{15} using\n(\\ref{eq3}), with the result\n\\begin{eqnarray}\n&&\nF_{\\rm add}^{\\rm nor}(a)=-\\frac{\\pi RC_{\\rm Au}}{2m_am^2m_{\\rm H}^2}\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u^3}e^{-2m_aua}\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times I_0\\left(2m_au(A_1-A_2)\\right)(1-e^{-2m_au\\Delta_p^{\\!\\rm Au}})\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times\\left[C_{\\rm Au}+(C_{\\rm Al}-C_{\\rm Au})\ne^{-2m_au\\Delta_s^{\\!\\rm Au}}\\right.\n\\nonumber \\\\[1mm]\n&&~~~~~~~\n+(C_{\\rm Cr}-C_{\\rm Al})\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al})}\n\\nonumber \\\\[1mm]\n&&~~~~~~~\n\\left.\n-C_{\\rm Cr}\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al}\n+\\Delta_s^{\\!\\rm Cr})}\\right].\n\\label{eq25}\n\\end{eqnarray}\n\\noindent\nThe constraints on the axion-nucleon coupling constants\n$g_{an}=g_{ap}$ were found from the inequality\n\\begin{equation}\n|F_{\\rm add}^{\\rm nor}(a)|\\leq\\Delta F_C^{\\rm nor}(a).\n\\label{eq26}\n\\end{equation}\n\\noindent\nThe strongest constraints follow from (\\ref{eq26}) at the\nshortest separation distance $a_1=127\\,$nm, where the total\nexperimental error determined at a 67\\% confidence level is\nequal to $\\Delta F_C^{\\rm nor}(a_1)=0.94\\,$pN \\cite{49,50}.\n\nIn Fig.~4 the obtained constraints under the condition\n$g_{an}=g_{ap}$ are shown by the dashed line. It can be seen\nthat for $m_a<5.3\\,$eV they are stronger than those following\nfrom measurements of the lateral Casimir force (the solid line),\nbut become weaker than the latter for larger axion masses.\n\n\\section{Comparison between different laboratory constraints}\n\nIt is interesting to compare all the constraints discussed above,\nwhich were obtained from measurements of the Casimir interaction,\nwith one another and with other laboratory constraints on\nthe axion-nucleon coupling constants. Such a comparison is\nperformed in Fig.~5 over the wide range of axion masses\nfrom $10^{-10}$ to 20\\,eV. The constraints on $g_{an}$\nobtained \\cite{51} by means of a magnetometer using\nspin-polarized K and ${}^3$He atoms are shown by the solid\nline 1. 
These constraints are applicable in the region of\n$m_a$ from $10^{-10}$ to $6\\times 10^{-6}\\,$eV.\nThe solid line 2 indicates the constraints obtained \\cite{52}\nfrom the recent Cavendish-type experiment \\cite{53} in the\nregion from $m_a=10^{-6}$ to $6\\times 10^{-2}\\,$eV.\nThe weaker constraints found \\cite{25} from the older\nCavendish-type experiments \\cite{54,55} and from the\nE\\\"{o}tv\\\"{o}s-type experiment \\cite{56}, respectively, are\nshown by the dashed lines 3 and 4 (these and the following\nconstraints are obtained under the condition $g_{an}=g_{ap}$).\nThese constraints cover the region of $m_a$ from $10^{-8}\\,$eV\nto $4\\times 10^{-5}\\,$eV (line 3) and to $10^{-5}\\,$eV (line 4).\nThe lines 5--8 are obtained \\cite{12,13,14,15} from measurements\nof the Casimir interaction and are discussed in this paper.\nThe line 5 reproduces the line in Fig.~3 obtained for $m_a$\nfrom $10^{-3}$ to 15\\,eV from measurements of the Casimir\npressure (see Section 5). The dashed lines 6 and 7 reproduce\nthe solid line in Fig.~2 and the line in Fig.~1 found in the\nregion from $3\\times 10^{-5}$ to 1\\,eV from measurements of\nthe gradient of the Casimir force between Au-Au surfaces and\nin the region from $10^{-4}$ to 0.3\\,eV from measurements of\nthe Casimir-Polder force, respectively (see Sections 4 and 3).\nFinally, the line 8 reproduces the solid line in Fig.~4\nfound in the region of $m_a$ from 1 to 20\\,eV.\nIt follows from measurements of the lateral Casimir force\nbetween corrugated surfaces discussed in Section 6\n(measurements of the normal Casimir force between sinusoidally\ncorrugated surfaces lead to weaker constraints than those\nshown in Fig.~5).\n\nThe strength of almost all laboratory constraints shown in\nFig.~5 (with the exception of those shown by line 1)\ndecreases monotonically with increasing axion mass $m_a$.\nIf one introduces the Compton wavelength of an axion,\n$\\lambda_a=1\/m_a$, it is correct to say that the strength\nof almost all constraints (and of all those\nfollowing from measurements\nof the gravitational and Casimir interactions) decreases with\ndecreasing $\\lambda_a$. The same is true for the\nYukawa-type corrections to Newton's law of gravitation\n(\\ref{eq4}), whose strength decreases with decreasing\ninteraction range $\\lambda$ (see the next section).\nThis property makes the interaction potentials (\\ref{eq3})\nand (\\ref{eq4}) alike and specifies the interaction range where\nthe strongest constraints on the respective hypothetical\nforces can be obtained from experiments measuring the\nCasimir interaction.\n\n
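For orientation, in the units $\\hbar=c=1$ used throughout, a given\naxion mass corresponds to the Compton wavelength (the numerical value\nof $\\hbar c\\approx 197.3\\,$eV\\,nm is standard)\n\\[\n\\lambda_a=\\frac{1}{m_a}=\\frac{\\hbar c}{m_ac^2}\\approx\n197.3\\,\\mbox{nm}\\,\\frac{1\\,\\mbox{eV}}{m_ac^2},\n\\]\nso that $m_a=1\\,$eV corresponds to $\\lambda_a\\approx 0.2\\,\\mu$m,\ni.e., to the separation region probed in the Casimir experiments\ndiscussed above, whereas $m_a=10^{-5}\\,$eV corresponds to\n$\\lambda_a\\approx 2\\,$cm.\n\n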
The vertical lines in Fig.~5 indicate the region from\n$m_a=10^{-5}$ to $10^{-2}\\,$eV, which is often called the\naxion window \\cite{57}. As can be seen in Fig.~5,\nexperiments measuring the Casimir interaction\nstrengthen the laboratory constraints on the axion-nucleon\ncoupling constants near the upper border of the\naxion window and also for larger axion masses.\n\n\\section{Constraints on corrections to Newton's law of gravitation}\n\nThe constraints on corrections to Newton's law of gravitation\ndescribed by the potentials (\\ref{eq4}) and (\\ref{eq5}) can be\nobtained from the gravitational experiments of E\\\"{o}tv\\\"{o}s and\nCavendish type and from measurements of the Casimir interaction.\nAs explained in Section 1, measurements of the Casimir force\nhave long been used for constraining hypothetical interactions\nof both Yukawa and power type.\nBecause of this, here we only briefly present the obtained\nresults and indicate the regions where measurements of the Casimir\nforce lead to the strongest constraints, as compared with\ngravitational experiments.\n\nThe Yukawa-type interaction potential between the test bodies\nused in experiments on measuring the Casimir force is obtained\nby the integration of (\\ref{eq4}) over the volumes of the bodies.\nIn doing so, at submicrometer separations the Newtonian\ngravitational force turns out to be negligibly small\ncompared with the error of the force measurements. Similarly to the\ncase of an axion considered above, the constraints on the constants\nof the Yukawa-type interaction, $\\alpha$ and $\\lambda$, are obtained\nfrom the condition that this interaction was not experimentally\nobserved within the limits of the experimental error in measurements\nof the Casimir interaction.\n\n
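As a simple illustration of such a volume integration (a limiting\ncase given here for orientation only, not the exact geometry of the\nexperiments discussed above), assume the standard parametrization in\nwhich the Yukawa part of (\\ref{eq4}) between two point masses is\n$-Gm_1m_2\\alpha e^{-r\/\\lambda}\/r$. Then for two homogeneous thick\nplates (semispaces) of densities $\\rho_1$ and $\\rho_2$ separated by a\ngap $a$ the integration yields the attractive pressure\n\\[\nP_{\\rm Yu}(a)=-2\\pi G\\alpha\\rho_1\\rho_2\\lambda^2 e^{-a\/\\lambda},\n\\]\nwhich is exponentially suppressed for $a\\gg\\lambda$. This makes it\nclear why measurements at separations of the order of $\\lambda$ or\nbelow are needed to constrain a given interaction range.\n\n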
In Fig.~6 we present the strongest constraints on the\nYukawa interaction constant $\\alpha$ in the micrometer and\nsubmicrometer interaction range $\\lambda$ obtained\nfrom measurements of the Casimir interaction.\nThe line 1 in Fig.~6 was obtained \\cite{18} from measurements\nof the lateral Casimir force between sinusoidally corrugated\nsurfaces of a sphere and a plate \\cite{47,48} (see Section 6).\nIt presents the strongest constraints on the Yukawa-type\ncorrections to Newton's law of gravitation within the\ninteraction range from $\\lambda=1.6$ to 11.6\\,nm.\nThe line 2 shows the constraints found \\cite{21} from measuring\nthe normal Casimir force between sinusoidally corrugated\nsurfaces at an angle between the corrugations equal to\n2.4${}^{\\circ}$ \\cite{49,50} (see Section 6). These constraints\nare the strongest ones in the interaction range from 11.6 to\n17.2\\,nm.\nThe constraints obtained from measurements of the Casimir\npressure by means of a micromachined torsional oscillator\n(see Section 5) are\nindicated by the line 3. They are the strongest ones for\n$17.2\\,\\mbox{nm}<\\lambda<89\\,$nm.\nAt larger $\\lambda$ the strongest constraints, shown by\nline 4, follow from the so-called Casimir-less experiment\n\\cite{58}, where the Casimir force was nullified by using the\ndifference force measurement scheme. These constraints are\nthe strongest ones up to $\\lambda=891\\,$nm.\nThe constraints of the line 5 were found \\cite{59} from\nmeasurements of the Casimir force between the Au-coated surfaces\nof a plate and a spherical lens of large radius. They are\nthe strongest ones up to $\\lambda=3.16\\,\\mu$m.\nFor larger $\\lambda$ the strongest constraints on the\nYukawa-type corrections to Newton's gravitational law\nfollow from the Cavendish-type experiments. The first\nconstraints of this kind are indicated by line 6\n\\cite{60,61}. Thus, measurements of the Casimir interaction\nlead to the strongest constraints on non-Newtonian\ngravity over a wide interaction range from 1.6\\,nm to a\nfew micrometers. As can be seen in Fig.~6, the strength of\nall the constraints decreases with decreasing $\\lambda$, i.e.,\nwith increasing mass of the hypothetical particle which\ngives rise to the additional interaction of Yukawa type.\nThis is similar to the case of an axion considered in\nSections 3--6.\n\nConstraints on the power-type corrections to Newton's law\nof gravitation (\\ref{eq5}) follow from the gravitational\nexperiments of E\\\"{o}tv\\\"{o}s and Cavendish type \\cite{8} and\nfrom measurements of the Casimir force \\cite{17,30}.\nAt the present time the strongest constraints follow from\nthe E\\\"{o}tv\\\"{o}s-type experiments\n($|\\Lambda_1|\\leq 1\\times 10^{-9}$ \\cite{62} and\n$|\\Lambda_2|\\leq 4\\times 10^{8}$ \\cite{56}) and from\nthe Cavendish-type experiments\n($|\\Lambda_3|\\leq 1.3\\times 10^{20}$ \\cite{52},\n$|\\Lambda_4|\\leq 4.9\\times 10^{31}$ \\cite{52}, and\n$|\\Lambda_5|\\leq 1.5\\times 10^{43}$ \\cite{52}).\nNote that \\cite{52} uses another parametrization for the\npower-type corrections to Newtonian gravitation.\n\n\\section{Conclusions and discussion}\nIn the foregoing, we have considered the constraints on the\naxion-nucleon couplings following from laboratory experiments on\nmeasuring the Casimir interaction. The obtained constraints\nare quite competitive in the region of axion masses from\n$10^{-3}$ to 20\\,eV. The strongest of them follow from the\ndynamic determination of the Casimir pressure between two\nparallel plates and from measurements of the lateral Casimir\nforce between sinusoidally corrugated surfaces.\nAll these constraints were derived by considering the process\nof two-axion exchange between two nucleons. This is the\nlowest-order process contributing to the force acting between\nunpolarized test bodies. The obtained constraints were\ncompared with those following from other laboratory experiments.\n\nWe have also compared the constraints on an axion with the previously\nobtained constraints on corrections to Newton's law of\ngravitation of Yukawa and power type. The strongest constraints\nof this kind following from measurements of the Casimir\ninteraction are collected in Section 8.\nIn the interaction range below a few micrometers they are\nstronger than the constraints on Yukawa-type corrections to\nNewton's law following from the gravitational experiments of\nE\\\"{o}tv\\\"{o}s and Cavendish type.\n\nIn the future it would be interesting to perform measurements of\nthe Casimir interaction between two polarized test bodies.\nThis would lead to an additional force due to the exchange of one\naxion between protons and neutrons and, as a consequence, to\nmuch stronger constraints on the axion-nucleon coupling\nconstants.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}