\\section{Introduction}\\label{sec:introduction}\nExotic nuclei with extreme neutron-to-proton ratios are crucial for theoretical nuclear structure research as\ntheir properties provide critical information on nuclear interactions, many-body techniques, and astrophysical scenarios. \nHowever, because of their weak binding, their quasiparticle excitations are often affected by the low-lying scattering space (a.k.a.\\ particle continuum), which increases the necessary computational effort. For such nuclei, nucleonic pairing must be handled within the full Hartree-Fock-Bogoliubov (HFB) scheme instead of the simpler Bardeen-Cooper-Schrieffer (BCS) approximation \\cite{Dobaczewski1984,Belyaev1987,Dobaczewski1996,Dobaczewski2013,Bulgac02}. In addition, the associated self-consistent densities are usually very extended in space, which requires large basis sets or large coordinate-space boxes.\nBoth requirements become particularly demanding if one aims at symmetry-unrestricted calculations (i.e., without imposing space reflection or axial or spherical symmetries). \nThis paper proposes a reliable and efficient computational scheme to solve the HFB equations on a three-dimensional (3D) Cartesian coordinate-space grid. \n\nThe nuclear energy density functional (EDF) method is one of the most widely used methods to study medium-mass and heavy nuclei \\cite{Bender2003,Schunck2019}.\nIts main ingredient is an EDF that represents the effective in-medium nuclear interaction.\nAmong many EDFs, the Skyrme functional, originally based on the Skyrme interaction \\cite{Skyrme1958}, is commonly used to study global\n nuclear properties, such as ground-state energies, deformations, and low-lying excitations \\cite{Schunck2019,Erler2011,Erler2012a}.\nThe nuclear EDF method is closely related to density-functional theory (DFT).\nHence, it is often referred to as nuclear DFT.\n\nOver the years, a number of HFB solvers have been developed; see Table 2 of Ref.~\\cite{Bender2020} for a summary. These solvers can be divided into two families.\nThe codes in the first group are based on the expansion of single-particle wave functions in a finite set of basis functions such as the harmonic\noscillator (HO) eigenfunctions. \nExamples of such solvers are: \n {\\sc HFBTHO} \\cite{Stoitsov2005, Perez2017}, which solves the axial (2D) HFB equations in the axial HO or the transformed HO basis; \n{\\sc HFODD} \\cite{Dobaczewski1997_1, Schunck2017,Dobaczewski2021}, which solves the 3D HFB equations in the Cartesian HO basis without assuming any self-consistent symmetries; and {\\sc HFBPTG} \\cite{Stoitsov2008}, which solves the axial (2D) HFB equations in the P\\\"oschl-Teller-Ginocchio basis.\n\nThe basis-expansion method is efficient and has been successfully employed in large-scale calculations \\cite{Erler2012a}.\nHowever, when it is applied to weakly-bound nuclei, the performance of this method deteriorates as huge configuration spaces are required to describe the asymptotic behavior of HFB solutions. \nHere, the approach of choice is the HFB framework formulated in the coordinate-space representation \\cite{Dobaczewski1984,Dobaczewski1996,Dobaczewski2013}.\n\nThe coordinate-space solvers constitute the second family of HFB codes. 
Examples of such solvers are: \n {\\sc HFBRAD} \\cite{Bennaceur2005} solves spherically symmetric HFB problem using finite differences; \n{\\sc HFB-AX} \\cite{Pei2008} is a 2D solver based on B-splines; \n{\\sc SkyAx} \\cite{Reinhard2020} is a highly optimized 2D Hartree-Fock (HF) + BCS code using the fast Fourier transform (FFT) method to compute derivatives;\n{\\sc Sky3D} \\cite{Maruhn2014,Schuetrumpf2018} is a 3D extension of {\\sc SkyAx}; the predecessor of {\\sc SkyAx} and {\\sc Sky3D} is a 1D spherical HF+BCS code using five-point finite differences which was published first in \\cite{Reinhard1991} and has meanwhile been developed into a full spherical HFB code {\\sc Sky1D} \\cite{PGHFBcodes};\nthe HFB extension of {\\sc SkyAx} is {\\sc Sky2D} \\cite{PGHFBcodes};\n{\\sc EV8} solves the Skyrme HF+BCS equations using the imaginary time method on a 3D mesh that is limited to one octant by imposing time-reversal and spatial symmetries \\cite{Bonche2005,Ryssens2015}; {\\sc MOCCa} \\cite{Ryssens2019,Scamps2021} is a Skyrme-HFB extension of {\\sc EV8}; {\\sc MADNESS-HFB} \\cite{Pei2014} is a 3D HFB solver\nbased on multi-resolution analysis and multi-wavelet expansion; {\\sc LISE} is a 3D HFB solver \n\\cite{Jin2021} employing the discrete variable representation (or Lagrange-mesh method)\nand fast Fourier transforms; and there are also 3D HFB solvers based on the contour integral of the Green's function using the shifted Krylov subspace method \\cite{Jin2017,Kashiwaba2020}. \n\nThe major difference between \nbasis-based and mesh-based methods is the treatment of one-quasiparticle continuum space \\cite{Belyaev1987,Michel2008,Pei2011,Dobaczewski2013}.\nIn the case of coordinate-space methods, the discretized continuum strongly depends on the geometry of the spatial box and the grid size. \nFor large 3D boxes and dense grids, the size of the discretized continuum space quickly becomes intractable as the maximum allowed quasiparticle energy increases.\n\n\n\nA promising approach to the coordinate-space HFB problem is the canonical-basis HFB method proposed in Refs.\\ \\cite{Reinhard1997, Tajima2004}.\nThe one-body density matrix is diagonal in the canonical basis (or natural orbits), and its eigenstates are spatially localized if the nucleus is particle-bound. \nBecause of this localization, the single-particle (s.p.) continuum level density is significantly reduced.\n\n\nIn this work, we develop a 3D Skyrme-HFB solver {\\sc HFBFFT} in the coordinate-space representation. This code is based on the published code {\\sc Sky3D} \\cite{Maruhn2014,Schuetrumpf2018}.\n{\\sc Sky3D} has been well optimized for performance and parallelized with OpenMP and MPI \\cite{Afibuzzaman2018}.\nIn {\\sc HFBFFT} we maintain the high-level parallelization, making it scalable on modern supercomputers.\nIn order to overcome the pairing collapse problem mentioned in Ref.\\ \\cite{Tajima2004}, we implement the soft energy cutoff of pairing space and develop the annealing of pairing strengths to avoid pairing deadlock at an early stage.\nFurthermore, we introduce the sub-iteration method in the configuration space to stabilize and speed up the convergence. 
\nWe also resolve the problem of Hermiticity violation in {\\sc Sky3D} brought by the incompatibility between the product rule and the Fourier-transform-based algorithm for derivatives.\nTo benchmark {\\sc HFBFFT} we study several nuclear systems and compare our results against {\\sc HFBTHO} and the coordinate-space HFB codes {\\sc Sky1D} and {\\sc Sky2D}, which solve the HFB problem in 1D (spherical) and 2D (axial) geometries, respectively.\n\n\nThis paper is organized as follows.\nIn Sec.\\ \\ref{sec:basis}, the Skyrme EDF and the HFB theory are briefly introduced.\nThe numerical details and algorithms of {\\sc HFBFFT} are described in section \\ref{sec:algorithm}.\nIn Sec.\\ \\ref{sec:benchmark}, we present test and benchmark results.\nFinally, conclusions and outlooks are presented in Sec.\\ \\ref{sec:conclusion}.\n\n\n\\section{Skyrme Hartree-Fock-Bogoliubov theory}\\label{sec:basis}\n\nIn this section, we briefly summarize the Skyrme EDF and the HFB theory.\n\n\\subsection{The Skyrme energy density functional}\n\nThe HFB theory describes a many-Fermion system in terms\nof an orthonormal set of s.p.\\ wave functions $\\psi_\\alpha$\nwith fractional occupation amplitudes $v_\\alpha$, i.e.,\n\\begin{equation}\n \\left\\{\\psi_\\alpha,v_\\alpha,\\alpha=1,...,\\Omega\\right\\},\n\\label{eq:spbasis}\n\\end{equation}\nwhere $\\Omega$ denotes the size of the active s.p.\\ space. \nThe amplitude $v_\\alpha$ can take values continuously in the interval $[0,1]$. The complementary amplitude is $u_\\alpha=\\sqrt{1-v_\\alpha^2}$. \n\nThe code {\\sc HFBFFT} uses a formulation of HFB theory in the basis of natural orbitals, which are defined as the basis of s.p.\\ states $\\psi_\\alpha$ in which the one-body density matrix $\\hat{\\rho}$ is diagonal, i.e.,\n$\\hat{\\rho}=\\sum_\\alpha|\\psi_\\alpha\\rangle{n}_\\alpha\\langle\\psi_\\alpha|$,\nwhere $n_\\alpha$, an eigenvalue of $\\hat{\\rho}$, represents the canonical-state occupation.\nThe numerical HFB scheme in the canonical basis was presented in \\cite{Reinhard1997} and improved\nin \\cite{Tajima2004}. For the relation between the standard matrix formulation and the canonical formulation of HFB, see Refs.~\\cite{Dobaczewski1996,Ring2004}.\nIn the canonical basis, the HFB\nmean-field state takes the BCS-like form:\n\\begin{equation}\n |\\Phi\\rangle\n =\n \\prod_{\\alpha>0}\\big(\n u_\\alpha^{\\mbox{}}+v_\\alpha^{\\mbox{}}\\hat{a}^+_\\alpha\\hat{a}^{+}_{\\overline{\\alpha}}\n \\big)|0\\rangle\n\\label{eq:BCState}\n\\end{equation}\nwhere $|0\\rangle$ is the vacuum state, $\\hat{a}^+_\\alpha$ is the\ncreation operator of $\\psi_\\alpha$, and $\\overline{\\alpha}$ the\nconjugate partner to state $\\alpha$ that corresponds to the same eigenvalue of $\\hat{\\rho}$. \n\n\nAny self-consistent mean-field theory starts from expressing the\nenergy of the system in terms of s.p.\\ wave functions and occupation\namplitudes (\\ref{eq:spbasis}). 
\nEDFs go for a simpler approach by starting from the energy defined in terms of only a few local densities and currents.\nFor the case of stationary states of even-even\nnuclei, the energy depends only on the local particle density $\\rho_q$, the kinetic-energy density\n$\\tau_q$, and the spin-orbit density $\\vec{J}_q$:\n\\begin{subequations}\\label{eq:densities}\n\\begin{align}\n \\rho_q(\\vec{r})&=\\displaystyle\n \\sum_{\\alpha\\in q}\\sum_{s} \n v_{\\alpha}^2|\\psi_{\\alpha}(\\vec{r},s)|^2,\n \\notag \\\\\n \\tau_q(\\vec{r})&=\\displaystyle\n \\sum_{\\alpha\\in q}\\sum_{s} \n v_{\\alpha}^2|\\nabla\\!\\psi_{\\alpha}(\\vec{r},s)|^2, \\notag \\\\\n \\vec{J}_q(\\vec{r}) &=\\displaystyle\n -\\mathrm{i}\\sum_{\\alpha\\in q}\\sum_{ss'} v_{\\alpha}^2\n \\psi_{\\alpha}^*(\\vec{r},s)\n \\nabla\\! \\times\\! \\vec{\\sigma}_{ss'} \n \\psi^{\\mbox{}}_{\\alpha}(\\vec{r},s'),\n \\label{eq:rtjeven}\n\\end{align}\nwhere $q\\in\\{\\mathrm{p},\\mathrm{n}\\}$ stands for protons or neutrons and $s, s'=\\pm 1\/2$ label the two spinor components of the wave functions.\nPairing EDFs additionally require the pairing density\n\\begin{align}\n \\xi_q(\\vec{r}) \n &=\\displaystyle\n {\\sum^\\mathrm{(cut)}_{\\alpha\\in q}u_{\\alpha}v_{\\alpha} \n \\sum_{s}\\left( -2s \\right) \\psi_{\\overline{\\alpha}}(\\vec{r},-s)\n \\psi_{\\alpha}}(\\vec{r},s),\n \\label{eq:rtjpair}\n\\end{align}\nwhere the first summation includes a cutoff in the pairing space. \nFor a stationary state of an even-even nucleus, the conjugate s.p.\\ state $\\overline{\\alpha}$ can be assumed to be the time-reversed state of $\\alpha$, which leads to\n\\begin{align}\n \\xi_q(\\vec{r}) \n &=\\displaystyle\n {\\sum^\\mathrm{(cut)}_{\\alpha\\in q} \\sum_{s} \n u_{\\alpha}v_{\\alpha} \\left| \\psi_{\\alpha}(\\vec{r},s) \\right|^2}.\n\\end{align}\n\\end{subequations} \n\nThe code {\\sc HFBFFT}, as its predecessor {\\sc Sky3D}, employs the\nwidely used Skyrme EDF. This EDF is well described in all details at\nseveral places \\cite{Bender2003,Erler2011,Schunck2019}.\nThus we give here only a brief account with emphasis on the pairing\npart. The total energy is a functional of the local densities:\n\\begin{subequations}\n\\label{eq:Etot}\n\\begin{equation}\n E_\\mathrm{tot}\n =\n E_\\mathrm{Skyrme}[\\rho,\\tau,\\vec{J}]\n +\n E_\\mathrm{pair}[\\rho,\\xi]\n +\n E_\\mathrm{Coul}[\\rho_p],\n\\label{eq:efundet}\n\\end{equation}\nwhere (ignoring here isospin index for simplicity)\n\\begin{align}\n\tE_\\mathrm{Skyrme} \n\t&= E_\\mathrm{kin} + E_\\mathrm{\\rho \\rho} + E_\\mathrm{\\rho \\tau} + E_{\\rho \\Delta \\rho}\n\t+ E_\\mathrm{ \\nabla \\vec{J}} \n\\notag \\\\\n &= \\int\\! d^3r\\left[ \\frac{\\hbar^2}{2m}\\tau +\n C^{\\rho}\\rho^2 + C^{\\tau}\\rho \\tau+ C^{\\Delta\\rho}\\rho\\Delta\\rho + C^{\\vec{J}}\\rho \\nabla\\cdot\\vec{J}\\right],\n \\\\\n E_\\mathrm{pair}\n &=\n \\frac{1}{4} \\sum_{q\\in\\{\\mathrm{p},\\mathrm{n}\\}}V_{\\mathrm{pair},q}\n \\int d^3r |\\xi_q|^2\n \\left[1 -\\frac{\\rho}{\\rho_{0,\\mathrm{pair}}}\\right],\n \\\\\n E_{\\mathrm{Coul}}\n &=\\frac{e^{2}}{2} \\int\\! \\mathrm{d}^{3} r \\mathrm{~d}^{3} r^{\\prime} \\frac{\\rho_{\\mathrm{p}}(\\vec{r}) \\rho_{\\mathrm{p}}\\left(\\vec{r}^{\\prime}\\right)}{\\left|\\vec{r}-\\vec{r}^{\\prime}\\right|}\n -\\int\\! 
\\mathrm{d}^{3} r \\frac{3 e^{2}}{4}\\left(\\frac{3}{\\pi}\\right)^{\\frac{1}{3}} \\rho_{\\mathrm{p}}^{4 \/ 3}.\n\\label{eq:epair}\n\\end{align} \n\\end{subequations}\n $E_\\mathrm{Skyrme}$ is a functional of $\\rho$, $\\tau$,\nand $\\vec{J}$; $E_\\mathrm{pair}$ is a functional of $\\rho$\nand $\\xi$;\nand the Coulomb energy $E_{\\mathrm{Coul}}$ is a functional of the proton density $\\rho_p$. \nThe pairing functional can be motivated by a density-dependent\n$\\delta$ interaction. \nIt includes two limiting cases. \nThe first case is a pure contact interaction, also called volume pairing, which is recovered when $\\rho_{0,\\mathrm{pair}}\\rightarrow\\infty$. \nThe second case corresponds to a value near matter equilibrium density $\\rho_{0,\\mathrm{pair}}=0.16$ fm$^{-3}$, which localizes pairing around the nuclear surface. \nAdjustment of $\\rho_{0,\\mathrm{pair}}$ as a free parameter delivers a form of the pairing functional which stays in between the extremes of volume and surface pairing \\cite{Dobaczewski2001,Kluepfel2009}.\n\n\n\n\n\n\\subsection{The HFB theory in canonical basis} \\label{sec:HFB_theory}\n\nIn practice, one deals with two types of fermions: protons and neutrons. \nTo keep the notation simple, in the following, we assume that the isospin quantum number is included in the quantum label $\\alpha$ of the canonical state. \nThe HFB equations are derived variationally by minimizing the HFB Routhian\n\\begin{equation}\\label{routhian}\n R=E_{\\mathrm{tot}}\n -\\sum_{q\\in \\{\\mathrm{p},\\mathrm{n}\\}}\\epsilon_{\\mathrm{F},q} \\sum_{\\alpha\\in q} v_{\\alpha}^{2}\n - \\sum_{\\alpha\\beta} \\lambda_{\\alpha\\beta}\\left(\\langle\\psi_{\\beta} | \\psi_{\\alpha}\\rangle-\\delta_{\\alpha\\beta}\\right),\n\\end{equation}\nwith respect to $\\psi_\\alpha$ and $v_\\alpha$.\nIn Eq.~(\\ref{routhian})\n$\\epsilon_\\mathrm{F}$ is the\nFermi energy, which is also the Lagrange parameter for the particle-number constraint, \nand $\\hat\\lambda$ is the matrix of Lagrange\nmultipliers that guarantee the orthonormality of canonical wave functions. \nSince $\\langle\\psi_{\\beta} | \\psi_{\\alpha}\\rangle = \\langle\\psi_{\\alpha} | \\psi_{\\beta}\\rangle^*$,\nit is required that the matrix $\\hat\\lambda$\nis Hermitian so that the number of its independent elements coincides with the total number of independent constraints.\n\nVariation of the Skyrme and Coulomb energies with respect to the s.p.\\ wave function yields the HF Hamiltonian\n$\\hat{h}$:\n\\begin{equation}\n \\frac{\\delta \\left(E_\\mathrm{Skyrme} + E_\\mathrm{Coul} \\right)}{\\delta\\psi_\\alpha^\\dagger}\n =\n v_\\alpha^2\\hat{h}\\psi_\\alpha.\n\\label{eq:mfham}\n\\end{equation}\nBy the chain rule for derivatives, (\\ref{eq:mfham}) can be reduced to the variation with respect to the densities, which delivers explicit expressions for $\\hat{h}$ \\cite{Bender2003,Erler2011}.\nThe HF Hamiltonian $\\hat{h}$ is a functional of local densities (particle density, kinetic-energy density, spin-orbit density) in the standard fashion of nuclear EDFs \\cite{Bender2003}. 
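As a simple illustration of the chain-rule reduction, consider the $C^{\\rho}\\rho^2$ term of $E_\\mathrm{Skyrme}$ and assume for the moment a density-independent coupling $C^{\\rho}$ (the density dependence of the actual Skyrme couplings generates additional rearrangement contributions). Using $\\delta\\rho(\\vec{r}^{\\prime})\/\\delta\\psi_\\alpha^\\dagger(\\vec{r},s)=v_\\alpha^2\\,\\psi_\\alpha(\\vec{r},s)\\,\\delta(\\vec{r}-\\vec{r}^{\\prime})$, one obtains
\\begin{equation*}
  \\frac{\\delta}{\\delta\\psi_\\alpha^\\dagger(\\vec{r},s)}
  \\int\\! d^3r^{\\prime}\\, C^{\\rho}\\rho^2(\\vec{r}^{\\prime})
  =
  v_\\alpha^2\\, 2C^{\\rho}\\rho(\\vec{r})\\,\\psi_\\alpha(\\vec{r},s),
\\end{equation*}
i.e., this term contributes $2C^{\\rho}\\rho$ to the local potential part of $\\hat{h}$; the remaining terms of the functional are treated analogously.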
\n\nVariation of the pairing energy with respect to the s.p.\\ wave function gives\n\\begin{equation}\\label{eq:pair_ham}\n \\frac{\\delta E_\\mathrm{pair}}{\\delta\\psi_\\alpha^\\dagger}\n =\n u_\\alpha v_\\alpha \\hat{\\tilde{h}}\\psi_\\alpha + v_\\alpha^2 \\hat{h}^\\prime \\psi_\\alpha.\n\\end{equation}\nThe first term is related to the variation\nwith respect to the pairing density, which yields the pairing potential \\cite{Ring2004}\n\\begin{equation}\n \\tilde{h}_q(\\vec{r})\n =\n \\frac{1}{2} V_{\\mathrm{pair},q}\n \\xi_q\n \\left[1 -\\frac{\\rho}{\\rho_{0,\\mathrm{pair}}}\\right],\\ q\\in \\{\\mathrm{p},\\mathrm{n}\\}.\n\\label{eq:gappot}\n\\end{equation}\nThe second term is the pairing-rearrangement term, brought by the density dependence of the pairing functional. \nFor simplicity, we treat the rearrangement term $\\hat{h}^\\prime$ as part of the HF Hamiltonian $\\hat{h}$ in the following. \nThe pairing potential $\\tilde{h}_q(\\vec{r})$ is a local potential in most pairing functionals.\nFrom that, we obtain the state-dependent gap\n\\begin{equation}\n \\Delta_{\\alpha\\alpha}\n =\n \\langle\\psi_\\alpha|\n \\tilde{h}_{q_\\alpha}|\\psi_\\alpha\\rangle\n ,\n\\label{eq:gapalpha}\n\\end{equation}\nwhere $q_\\alpha$ is the isospin of state $\\alpha$. \nAnother aspect of the pairing is determined by the gap equation, which is obtained from the variation with respect to $v_\\alpha$: \n\\label{eq:HFBeqs}\n\\begin{equation}\n 0\n =\n 4 v_\\alpha^{\\mbox{}} (h_{\\alpha\\alpha}^{\\mbox{}} - \\epsilon_{\\mathrm{F},q_\\alpha})\n + 2 \\left(\\frac{v_\\alpha^2}{u_\\alpha^{\\mbox{}}}-u_\\alpha^{\\mbox{}}\\right)\n \\Delta_{\\alpha\\alpha},\n\\label{eq:gapeq}\n\\end{equation}\nwhere $h_{\\alpha\\alpha}$ are the diagonal matrix elements of\nthe HF Hamiltonian $\\hat{h}$.\nThe HF Hamiltonian together with the pairing potential constitutes the main ingredients of the HFB equations. \n\nWith the orthonormality of canonical states taken into account, the constrained variation of the total energy with respect to $\\psi_\\alpha^\\dagger$ yields the mean-field equations:\n\\begin{equation}\n \\hat{\\mathcal{H}}_\\alpha^{\\mbox{}} \\psi_\\alpha^{\\mbox{}}\n=\n \\textstyle{\\sum_\\beta}\n \\psi_\\beta\\lambda^{\\mbox{}}_{\\beta\\alpha}, \n\\label{eq:cmfeq}\n\\end{equation}\nwhere\n\\begin{subequations}\\label{eq:genmf_avlambda}\n\\begin{align}\n \\hat{\\mathcal{H}}_\\alpha^{\\mbox{}} \n &=\n v^2_\\alpha \\hat{h} + u_\\alpha^{\\mbox{}}\n v_\\alpha^{\\mbox{}} \\hat{\\tilde{h}},\n\\label{eq:genmf}\n\\\\\n \\lambda^{\\mbox{}}_{\\beta\\alpha}\n &=\n \\frac{1}{2}\n \\langle\\psi_\\beta|\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}+\\hat{\\mathcal{H}}_\\beta|\\psi_\\alpha^{\\mbox{}}\\rangle.\n\\label{eq:avlambda}\n\\end{align}\n\\end{subequations}\nThe mean-field equations (\\ref{eq:cmfeq},\\ref{eq:genmf_avlambda}) and gap equations (\\ref{eq:gapeq}) together constitute the self-consistent HFB equations in the canonical basis. \n\nIn (\\ref{eq:genmf}) $\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}$ is a state-dependent one-body Hamiltonian composed of the HF Hamiltonian and the pairing potential.\nThe full matrix $\\hat\\lambda$ needs to be taken into account because the $\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}$ are state-dependent \\cite{Reinhard1997,Tajima2004}.\nIn contrast, pure HF or HF+BCS calculations only require diagonal matrix elements $\\lambda_{\\alpha\\alpha}$, which are also known as s.p.\\ energies. 
\nThe Hermiticity of $\\hat\\lambda$ is enforced by explicit symmetrization in Eq.\\ (\\ref{eq:avlambda}).\nIt can be shown by multiplying both sides of Eq.\\ (\\ref{eq:cmfeq}) by $\\psi_\\beta^\\dagger$ that the final solution should obey the symmetry conditions\n\\begin{subequations}\n\\label{eq:symmcond_both}\n\\begin{equation}\n 0\n =\\lambda^{-}_{\\beta\\alpha}\\equiv\n \\frac{1}{2}\\left(\\langle\\psi_\\beta^{\\mbox{}}|\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}|\\psi_\\alpha^{\\mbox{}}\\rangle\n -\n \\langle\\psi_\\beta^{\\mbox{}}|\\hat{\\mathcal{H}}_\\beta^{\\mbox{}}|\\psi_\\alpha^{\\mbox{}}\\rangle\\right).\n\\label{eq:symmcond}\n\\end{equation}\nOne can combine these into one condition:\n\\begin{equation}\n 0\n =\n \\Delta\\mathcal{S}^2\n \\equiv\n \\frac{1}{\\Omega^2} \\sum_{\\alpha\\beta} \\left| \\lambda^{-}_{\\beta\\alpha} \\right|^2, \n\\label{eq:symmcondaver}\n\\end{equation}\n\\end{subequations}\nwhich is the average of squared matrix elements of ${\\hat{\\lambda}}^-$. \nThe actual size of $\\Delta\\mathcal{S}^2$ will serve as a check for the convergence of the HFB solution.\n\nIt should be noted that $\\hat\\lambda^-$ vanishes when both s.p.\\ states $\\psi_\\alpha$ and $\\psi_\\beta$ are fully occupied ($v_\\alpha = v_\\beta = 1$) or unoccupied ($v_\\alpha = v_\\beta = 0$) since then $\\langle\\psi_\\alpha|\\hat{h}|\\psi_\\beta\\rangle=\\langle\\psi_\\beta|\\hat{h}|\\psi_\\alpha\\rangle^*$. \nThus, for a pure HF calculation, $\\Delta\\mathcal{S}^2$ measures the overlap between occupied and unoccupied orbits, which should be zero at the self-consistent solution.\n\nWhen the size of active s.p.\\ space equals the number of particles ($\\Omega_n = N$, $\\Omega_p = Z$) and \nall the s.p.\\ orbits are fully occupied (as in the pure HF calculation), \n$\\Delta\\mathcal{S}^2$ is always zero; hence, it is not an appropriate measure for convergence.\nHowever, this quantity can still be utilized to check the Hermiticity of our implementation of $\\hat{h}$ (see Sec.\\ \\ref{sec:hermiticity}). \nFor the pure HF case, a suitable quantity for convergence check could be\n\\begin{equation}\n \\sum_\\alpha \\left|\\langle \\psi_\\alpha | \\hat{h}^2 | \\psi_\\alpha \\rangle - \\langle \\psi_\\alpha | \\hat{h} | \\psi_\\alpha \\rangle^2\\right|,\n\\end{equation}\nwhich is zero at the HF solution.\n\n\\section{Numerical representation} \\label{sec:algorithm}\n\n\\subsection{Numerical realization on a 3D coordinate-space grid} \\label{sec:num_grid}\n\nThe numerical representation is explained in detail in Refs.~\\cite{Maruhn2014,Schuetrumpf2018}. \nHere, we repeat the essentials briefly. For simplicity, our discretization strategy is explained here for one dimension;\nthe generalization to 3D is straightforward.\n\nAll wave functions, densities and fields are defined on a three-dimensional equidistant Cartesian grid. \nThe grid points in the $x$ direction are\n\\begin{equation}\n x_\\nu = \\left(-\\frac{N_x+1}{2}+\\nu\\right)\\delta x,\n \\quad \\nu=1,\\ldots,N_x,\n\\label{eq:xgrid}\n\\end{equation}\nwhere $N_x$ is the (even) number of grid points and $\\delta x$ is the\ngrid spacing. \nSimilar gridding applies to the $y$ and $z$ directions. The action of local\noperators on a coordinate-space grid is a simple multiplication\nof the local operator field and the wave function. 
\nThe action of momentum operators, such as in the kinetic energy, requires first and second derivatives defined in Fourier space.\nThe Fourier technique has been proved to be superior in precision and advantageous for large grids \\cite{Blum1992}.\nIt is noteworthy that the direct Coulomb potential is also solved in Fourier space.\nThe Coulomb solver has to fulfill the condition that the result in the box is the correct solution to Poisson's equation with the boundary condition of zero potential at infinity.\nThe algorithm to solve Poisson's equation for an isolated charged distribution has been implemented in {\\sc Sky3D}.\nIt follows the ideas of \\cite{Hockney1970, Eastwood1979} by doubling the 3D grid, folding the proton density with the $1\/r$ Green's function in momentum space and then restricting the final solution inside the original box.\n\nThe discrete grid points $k_n$ in Fourier space are related to the same number of grid points $x_\\nu$ in coordinate space as:\n\\begin{subequations}\n\\label{eq:FT}\n\\begin{align}\nk_n &=\\left\\{\n\\begin{aligned}\n &(n-1)\\delta k, \\quad n=1,\\ldots, \\frac{N_x}{2} \\\\\n &(n-N_x-1)\\delta k, \\quad n=\n \\frac{N_x}{2}+1,\\ldots, N_x \n\\end{aligned},\n\\right. \\\\\n \\delta k\n &=\n \\frac{2\\pi}{N_x\\delta x}.\n\\end{align}\n\\end{subequations}\nNote that the coordinate-space grid (\\ref{eq:xgrid}) in combination with the conjugate momentum-space grid (\\ref{eq:FT}), imposes no spatial symmetry at all. \nBut the particular examples considered for benchmarking in this study obey reflection symmetry in all three directions.\n\nA wave function $\\psi(x_\\nu)$ in coordinate space is related to a\nwave function $\\widetilde{\\psi}(k_n)$ in Fourier space by the discrete Fourier transform and its inverse\n\\begin{subequations}\n\\begin{align}\n \\widetilde{\\psi}(k_n)\n &=\n \\sum_{\\nu=1}^{N_x}\n \\exp{\\left(-\\mathrm{i} k_nx_\\nu\\right)}\\psi(x_\\nu) \n ,\n\\label{eq:FTforward}\\\\\n \\psi(x_\\nu) \n &=\n \\frac{1}{N_x}\\sum_{n=1}^{N_x}\n \\exp{\\left(\\mathrm{i} k_nx_\\nu\\right)}\\widetilde{\\psi}(k_n).\n\\label{eq:FTbackward}\n\\end{align}\n\\end{subequations}\nBoth can be efficiently computed via the FFT algorithm provided by the FFTW3 library \\cite{Frigo2005}.\nThis complex Fourier representation implies that the function $\\psi$ is\nperiodic, i.e., $\\psi(x+N_x \\cdot \\delta x)=\\psi(x)$. The appropriate\nintegration scheme that complies with the above summations is the trapezoidal rule\n\\begin{equation}\n \\int_{-\\frac{N_x}{2}\\delta x}^{\\frac{N_x}{2}\\delta x} dx\\, f(x) \\approx \\sum_{\\nu = 1}^{N_x} f(x_\\nu) \\delta x, \n\\end{equation}\nwhere all terms are added up with equal weights.\n\nIn Fourier space the $m$-th\nderivative becomes a multiplication by $(\\mathrm{i} k_n)^m$. \nOne proceeds then in the following way: First, a forward transform (\\ref{eq:FTforward}) is performed; then $\\widetilde{\\psi}(k_n)$ is multiplied by\n$(\\mathrm{i} k_n)^m$; and finally \n$(\\mathrm{i}k_n)^m\\widetilde{\\psi}(k_n)$ is transformed back to the coordinate space by Eq.~(\\ref{eq:FTbackward}). 
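To make this procedure concrete, the following minimal one-dimensional Python\/NumPy sketch evaluates the $m$-th derivative on the grids (\\ref{eq:xgrid}) and (\\ref{eq:FT}); it is an illustration only ({\\sc HFBFFT} itself relies on the FFTW3 library), and the grid parameters merely mimic those adopted later for the benchmarks.
\\begin{verbatim}
import numpy as np

def spectral_derivative(psi, dx, m=1):
    # m-th derivative via FFT: forward transform, multiply by (i k_n)^m,
    # transform back; for odd m the ambiguous momentum k_{N/2+1} is zeroed
    # (the natural choice discussed below).
    N = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)  # momenta k_n, delta k = 2 pi/(N dx)
    psit = np.fft.fft(psi)                     # forward transform
    dpsit = (1j * k)**m * psit                 # multiplication in Fourier space
    if m % 2 == 1:
        dpsit[N // 2] = 0.0
    return np.fft.ifft(dpsit)                  # back transform

# example: a 1D cut with the grid spacing used later in the benchmarks
N, dx = 48, 0.8                                # number of points, spacing in fm
x = (np.arange(1, N + 1) - (N + 1) / 2) * dx   # grid points x_nu
psi = np.exp(-(x / 4.0)**2)                    # localized test function
dpsi = spectral_derivative(psi, dx).real       # imaginary part is round-off only
\\end{verbatim}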
\nOne should note that there is an arbitrariness about the choice of momentum $k_{N_x\/2+1}$:\nit can be taken as $\\pm \\frac{N_x}{2} \\delta k$.\nThis arbitrariness does not alter the transforms (\\ref{eq:FTforward}, \\ref{eq:FTbackward}) \nbut gives different results of the $m$-th derivative when $m$ is odd (no impact when $m$ is even).\nA natural choice is to equally split $\\widetilde{\\psi}(k_{N_x\/2+1})$ between the positive and negative momenta,\nmaking them cancel each other in the final result of an odd-order derivative.\nIt is equivalent to setting $\\widetilde{\\psi}(k_{N_x\/2+1})=0$.\nThis choice ensures that the derivative of a real-valued function is still real-valued; \nit also means that the second derivative is not equivalent to two consecutive first derivatives in this framework.\nThe remaining problem is the Hermiticity breaking caused by the product rule;\nwe will discuss it in Sec. \\ref{sec:hermiticity}. \n\n\n\\subsection{Solution by accelerated gradient iteration}\n\\label{sec:grad}\n\nThe solution of the coupled HFB equations (\\ref{eq:HFBeqs}) is obtained by interlaced iterations of the gap equation and the mean-field equation. \nThe gap equation (\\ref{eq:gapeq}) can be solved in a closed form and it yields:\n\\begin{equation}\n \\left\\{\\begin{array}{c} v_\\alpha^{\\mbox{}} \\\\ u_\\alpha^{\\mbox{}} \\end{array}\\right\\}\n =\n \\sqrt{\\frac{1}{2}\\mp \\frac{1}{2}\n \\frac{h_{\\alpha\\alpha}^{\\mbox{}}-\\epsilon_{\\mathrm{F},q_\\alpha}}\n {\\sqrt{(h_{\\alpha\\alpha}-\\epsilon_{\\mathrm{F},q_\\alpha})^2+\\Delta^2_{\\alpha\\alpha}} } }\n .\n\\label{eq:uveq}\n\\end{equation}\nThe Fermi energy $\\epsilon_\\mathrm{F}$ needs to be adjusted to fulfill the particle-number condition\n\\begin{equation}\n \\epsilon_{\\mathrm{F},q}\\;\\longleftrightarrow\\;\n \\sum_{\\alpha\\in q}^{\\mbox{}} v_\\alpha^2=N_q,\n\\end{equation}\nwhere $N_q$ is the required particle number.\nNote that only the diagonal elements of the pairing potential and the HF Hamiltonian in the canonical basis enter (\\ref{eq:uveq}); hence, no information about the non-diagonal elements is needed to determine the occupation amplitudes.\n\n\nThe solution of the mean-field equation (\\ref{eq:cmfeq}) is obtained by the damped gradient iteration \\cite{Maruhn2014,Reinhard1982,Blum1992} interlaced with updating the matrix $\\hat\\lambda$ for the orthonormality constraint. 
\nThe steps are:\n\\begin{enumerate}\n \\item\n For given\n $\\{\\psi_\\alpha^{\\mbox{}},v_\\alpha^{\\mbox{}},u_\\alpha^{\\mbox{}},\\alpha=1,...,\\Omega\\}$\n compute the local densities, the HF Hamiltonian $\\hat{h}$ and the pairing potential $\\hat{\\tilde{h}}$.\n \\item\\label{it:hmf}\n Compute the action of $\\hat{h}$ and $\\hat{\\tilde{h}}$ on all $\\psi_\\alpha^{\\mbox{}}$\n and store the result in work arrays $\\Psi_{\\alpha}$ and $\\widetilde{\\Psi}_{\\alpha}$, i.e.,\n \\begin{subequations}\n \\begin{equation}\n \\hat{h}\\psi_\\alpha^{\\mbox{}}\\longrightarrow\\Psi_\\alpha^{\\mbox{}},\n \\end{equation}\n \\begin{equation}\n \\hat{\\tilde{h}}\\psi_\\alpha^{\\mbox{}}\\longrightarrow\\widetilde{\\Psi}_\\alpha^{\\mbox{}},\n \\end{equation}\n \\end{subequations}\n for $\\alpha=1,\\ldots,\\Omega$.\n \\item\\label{it:h_Delta_diag}\n Use $\\Psi_\\alpha^{\\mbox{}}$ and $\\widetilde{\\Psi}_\\alpha^{\\mbox{}}$ to compute and store the \n s.p.\\ energies and pairing gaps\n \\begin{subequations}\n \\begin{equation}\n h_{\\alpha\\alpha}=\\langle\\psi_\\alpha^{\\mbox{}}|\\Psi_\\alpha^{\\mbox{}}\\rangle,\n \\end{equation}\n \\begin{equation}\n \\Delta_{\\alpha\\alpha} = \\langle\\psi_\\alpha^{\\mbox{}}|\\widetilde{\\Psi}_\\alpha^{\\mbox{}}\\rangle.\n \\end{equation}\n \\end{subequations}\n \\item\n Evaluate and store the action of the generalized mean-field Hamiltonian (overwriting $\\Psi_\\alpha^{\\mbox{}}$)\n \\begin{equation}\n \\mathcal{H}_\\alpha^{\\mbox{}}\\psi_\\alpha^{\\mbox{}}\n =\n v_\\alpha^2\\Psi_\\alpha^{\\mbox{}}+u_\\alpha^{\\mbox{}} v_\\alpha^{\\mbox{}}\\widetilde{\\Psi}_\\alpha^{\\mbox{}}\n \\;\\longrightarrow\\;\n \\Psi_\\alpha^{\\mbox{}}.\n \\end{equation}\n \\item\n Apply the matrix of Lagrange multipliers on all $\\psi_\\alpha$; compute and store (again overwriting $\\Psi_\\alpha^{\\mbox{}}$)\n \\begin{equation}\n \\Psi_\\alpha^{\\mbox{}}-\\sum_\\beta^{\\mbox{}}\\psi_\\beta^{\\mbox{}}\\lambda_{\\beta\\alpha}^{\\mbox{}}\n \\;\\longrightarrow\\;\n \\Psi_\\alpha^{\\mbox{}}\n .\n \\end{equation}\n \\item\n Apply the damping operation $\\hat{\\mathcal{D}}$ and orthonormalization $\\hat{\\mathcal{O}}$\n \\begin{subequations}\n \\begin{equation}\n \\psi_\\alpha^\\mathrm{(new)}\n =\n \\hat{\\mathcal{O}}\\left\\{\\psi_\\alpha^{\\mbox{}}-\\hat{\\mathcal{D}}\\Psi_\\alpha^{\\mbox{}}\\right\\}, \n \\end{equation}\n \n \\begin{equation}\n \\hat{\\mathcal{D}}\n =\n \\frac{x_0}{v_\\alpha^2(\\hat{T}+E_0)+\\frac{1}{2}u_\\alpha^{\\mbox{}} v_\\alpha^{\\mbox{}}\\tilde{h}_0}\n ,\n \\end{equation}\n \\end{subequations}\n where $x_0$, $E_0$, and $\\tilde{h}_0$ are adjustable numerical parameters. 
\n The empirical values $x_0=0.45$, $E_0=100$ MeV and $\\tilde{h}_0=\\mathrm{max}\\left[\\tilde{h}_\\mathrm{n}(\\vec{r}),\\tilde{h}_\\mathrm{p}(\\vec{r})\\right]$ are used in our calculations.\n It is worth noting that the lower bound of $u_\\alpha v_\\alpha$ and $ v_\\alpha^2$ in $ \\hat{\\mathcal{D}}$ is set to be $10^{-1}$ for numerical stability.\n \n \\item\\label{it:pair}\n With the new $h_{\\alpha\\alpha}$ and $\\Delta_{\\alpha\\alpha}$ from step \\ref{it:h_Delta_diag}, compute\n new occupations $v_\\alpha^{\\mbox{}}$ and $u_\\alpha^{\\mbox{}}$ \n using Eq.~(\\ref{eq:uveq}).\n \n \\item Reevaluate the action of the generalized mean-field Hamiltonian on all $\\psi_\\alpha$ and compute the matrix of Lagrange multipliers \n \\begin{equation}\n \\lambda^{\\mbox{}}_{\\beta\\alpha}\n =\n \\frac{\n \\langle\\psi_\\beta|\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}|\\psi_\\alpha^{\\mbox{}}\\rangle \n + \\langle \\psi_\\alpha | \\hat{\\mathcal{H}}_\\beta | \\psi_\\beta \\rangle^*\n }{2}.\n \\end{equation}\n\\end{enumerate}\n\nThe above iteration usually starts from a number of HF+BCS steps, which are done in the same way as in {\\sc Sky3D}. \nThe HF+BCS calculation is initialized by a 3D HO wave function that can be triaxially deformed.\nTo achieve better convergence, in step 1 the new densities are mixed linearly with the old ones:\n\\begin{equation}\n \\kappa^{(n)} = (1-\\gamma)\\kappa^{(n-1)} + \\gamma\\kappa^{(n)}_{\\psi}, \\quad \\kappa = \\rho, \\tau\\ \\mathrm{or}\\ \\xi,\n\\end{equation}\nwhere $n$ is the iteration number, subscript $\\psi$ denotes the density directly computed from the wave functions, and $\\gamma$ is the adjustable mixture ratio with a default value of 0.2.\n\n\\subsection{Sub-iterations in configuration space}\n\nThe damped gradient scheme outlined in Sec.\\ \\ref{sec:grad} converges, but requires more iterations in the HFB scheme as compared to the HF + BCS used in {\\sc Sky3D}.\nIt also involves operations on the full 3D grid which can make computations cumbersome. \nThe pairing part in the iterative steps works predominantly within the given space of canonical states.\nThus one can reduce the total numerical expense by the sub-iteration method: \nswitching between the full 3D step and a fast iterative solver in configuration space. \nTo this end, we map the mean-field equations into configuration space with\nthe expansion\n\\begin{equation}\n \\psi_\\alpha\n =\n \\sum_{n=1}^{\\Omega}\\varphi_n c_{n\\alpha},\n \\label{eq:config_expansion}\n\\end{equation}\nwhere $\\{\\varphi_n\\}$ is a set of s.p.\\ states acting as the expansion\nbasis. 
\nFor simplicity we choose an expansion basis such that $c^{(0)}_{n\\alpha} = \\delta_{n\\alpha}$ at the beginning.\nInserting (\\ref{eq:config_expansion}) into the HFB mean-field equations\n(\\ref{eq:cmfeq}) yields\n\\begin{equation}\n \\lambda^-_{\\beta \\alpha} =\n \\sum_{m n} c_{n\\beta}^*\n \\left\\langle\\varphi_n\\left|\\frac{\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}\n -\\hat{\\mathcal{H}}_\\beta^{\\mbox{}}}{2}\n \\right|\\varphi_m\\right\\rangle \n c_{m\\alpha} = 0.\n\\label{eq:asymm}\n\\end{equation}\nEq.\\ (\\ref{eq:asymm}) is essentially the same as the symmetry condition (\\ref{eq:symmcond}).\nIt is solved by a simple damped gradient iteration:\n\\begin{equation}\n\\begin{split}\n c_{n\\alpha}^\\mathrm{(new)}\n &=\n\\hat{\\mathcal{O}}\\left\\{c_{n\\alpha}^{\\mbox{}}\n -\n \\frac{\\delta}{h_{nn}\\!-\\!h_{11}\\!+\\!E_0}\n \\left[\\sum_{m}\\mathcal{H}_{\\alpha,nm}^{\\mbox{}} c_{m\\alpha}^{\\mbox{}}\n -\\sum_{\\beta}c_{n\\beta}^{\\mbox{}}\\lambda_{\\beta\\alpha}^{\\mbox{}}\n \\right]\n \\right\\}\n \\\\\n &=\\hat{\\mathcal{O}}\\left\\{c_{n\\alpha}^{\\mbox{}}\n -\n \\frac{\\delta}{h_{nn}\\!-\\!h_{11}\\!+\\!E_0}\n \\sum_{\\beta}c_{n\\beta}^{\\mbox{}}\\lambda^-_{\\beta\\alpha}\n \\right\\}\n ,\n\\end{split}\n\\label{eq:symm}\n\\end{equation}\nwhere\n$\\mathcal{H}_{\\alpha,nm}=\\langle\\varphi_n|\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}|\\varphi_m\\rangle$ and\n\\begin{equation}\n\\lambda_{\\beta \\alpha} = \\frac{1}{2} \\sum_{mn} c_{n\\beta}^* \\left( \\mathcal{H}_{\\alpha,nm} + \\mathcal{H}_{\\beta,nm} \\right) c_{m\\alpha}.\n\\end{equation}\nThe (interlaced) solution of the gap equations remains as before, but we do not update the local densities, the HF Hamiltonian $\\hat{h}$ and the pairing potential $\\hat{\\tilde{h}}$ in configuration space. \nThe convergence of the iteration is checked,\nagain, by the symmetry conditions (\\ref{eq:symmcondaver}). \nThe most efficient combination of the full 3D step with the iterations in configuration\nspace is a matter of experience, see Sec.\\ \\ref{sec:benchmark}.\n\n\n\\subsection{Soft cutoff on pairing-active space}\nIt is well known that the HFB equations with local interactions diverge when solved in infinite quasiparticle\/canonical space \\cite{Dobaczewski1984}. \nTo limit the pairing-active space, all local densities (\\ref{eq:densities}) are augmented by the cutoff factor $w_\\alpha$, for instance the particle and pairing densities:\n\\begin{subequations}\n\\label{eq:cutpairdens}\n\\begin{align}\n\t\\rho(\\vec{r}) &= \\displaystyle\n \\sum_{\\alpha} \n w_{\\alpha}v_{\\alpha}^2\\sum_{s}|\\psi_{\\alpha}(\\vec{r},s)|^2,\n \\\\\n \\xi(\\vec{r}) &=\n \\sum_{\\alpha}w_{\\alpha}^{\\mbox{}}u_{\\alpha}^{\\mbox{}}v_{\\alpha}^{\\mbox{}}\n \\sum_{s}|\n \\psi_{\\alpha}^{\\mbox{}}(\\vec{r},s) |^2.\n\\label{eq:pairdens}\n\\end{align}\nThe same augment also applies to the kinetic-energy and the spin-orbit densities.\nA fixed number of states (realized by setting $w_\\alpha=1$ or 0) is dangerous for two reasons. 
\nFirst, it hinders the portability of the pairing functional between codes and nuclei, because the s.p.\\ space depends on the basis representation.\nSecond, level crossings near the hard cutoff can induce jumps of the pairing energy.\nThese problems can be solved by pairing renormalization \\cite{Dobaczewski1996,Borycki2006} which, however, could be impractical in a full 3D treatment that involves huge canonical spaces.\nTherefore, a commonly used remedy is to use a soft pairing cutoff \\cite{Bonche1985,Krieger1990}\n\\begin{equation}\n w_\\alpha^{\\mbox{}}\n =\n \\frac{1}\n {\\displaystyle 1+\n \\exp\\left(\\frac{h_{\\alpha\\alpha}-\\epsilon_\\mathrm{F}-\\Delta\\epsilon_\\mathrm{cut}}\n {\\Delta\\epsilon_\\mathrm{cut}\/10}\n \\right)}\n .\n\\label{eq:softcut}\n\\end{equation}\n\\end{subequations}\nThe cutoff places a fixed band $\\Delta\\epsilon_\\mathrm{cut}$ above\nthe actual Fermi energy $\\epsilon_\\mathrm{F}$. We are going to use here $\\Delta\\epsilon_\\mathrm{cut}=15$ MeV. \nIt is important to note that the soft cutoff modifies the state-dependent Hamiltonian $\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}$:\n\\begin{equation}\n \\hat{\\mathcal{H}}_\\alpha^{\\mbox{}} = w_\\alpha \\left( v^2_\\alpha \\hat{h} + u_\\alpha^{\\mbox{}}v_\\alpha^{\\mbox{}} \\hat{\\tilde{h}} \\right),\n\\end{equation}\nwhich defines all the ingredients entering the canonical HFB equations.\n\n\\subsection{Strategies to avoid premature pairing breakdown}\n\\label{sec:breakdown}\n\nThe pairing comes along with a second-order superfluid-to-normal phase transition. Below the critical pairing strength, the HFB pairing gap remains exactly zero. \nAbove this critical strength, pairing becomes active and the gap starts to grow quickly. \nHowever, the onset of pairing is often delayed in a numerical calculation. \nThe problem is that zero pairing remains a valid solution to the HFB (BCS) equations, but an unstable one.\nIt can then take a very long time before the algorithm overcomes the instability and drives towards a stable solution. \nAs a consequence, an iteration scheme can easily be deadlocked due to a pairing breakdown. \nThis is a well-known problem.\nMost algorithms incorporate recovery strategies, such as occasional kickoffs by giving the pairing gap an artificial value, small enough not to spoil the physics but large enough to revive the pairing mechanism.\n\nThere is a more insidious problem with the state-dependent pairing gap $\\Delta_{\\alpha\\alpha}$: \nIt can happen that one canonical state logs out from the\npairing scenario and gets stuck in its own pairing breakdown\n$\\Delta_{\\alpha\\alpha}\\rightarrow 0$. \nTo understand that, we inspect Eq.\\ (\\ref{eq:cmfeq}) and recall that \n$\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}=v_\\alpha^{\\mbox{}}\\left(\nv_\\alpha^{\\mbox{}}\\hat{h} + u_\\alpha^{\\mbox{}}\\hat{\\tilde{h}}\\right)$.\nFar above the Fermi energy, we encounter states with $u_\\alpha^{\\mbox{}}\\gg v_\\alpha^{\\mbox{}}$ such that $\\hat{\\mathcal{H}}_\\alpha^{\\mbox{}}\\approx\\hat{\\tilde{h}}$ becomes a purely local operator. \nThe solution to the mean field equation is $\\psi\\propto\\delta(\\vec{r}-\\vec{r}_\\mathrm{min})$ where $\\vec{r}_\\mathrm{min}$ is the point $\\hat{\\tilde{h}}$ has a minimum. 
\nIn practice, this will be the representative of a\n$\\delta$-function on the grid, slightly mellowed by orthonormalization to other states.\nAs a consequence, the state acquires a very high kinetic energy and a very high canonical s.p.\\ energy, which drives the solution of the gap equation (\\ref{eq:uveq}) even more toward $v_\\alpha^{\\mbox{}}\\rightarrow 0$. \nThis as such is a valid physical mechanism as long as the iterations curb down the occupations slowly from above.\nIt becomes a\nproblem if some $v_\\alpha^{\\mbox{}}$ gets stuck at zero at the very early stage of the iterative process. \nOnce this has happened, the state $\\alpha$ is locked out of the pairing space.\nIn order to avoid this from happening, we adopt a strategy similar to simulated annealing \\cite{Press1992} and start the iteration scheme with an enhanced effective pairing strength which gradually reduces to the\nphysical strength as\n\\begin{equation}\n V_\\mathrm{pair}^\\mathrm{(eff)}\n = \n V_\\mathrm{pair}^{\\mbox{}}\n \\left(\\eta_\\mathrm{enh}\n \\frac{\\mbox{max}(\\mathcal{N}_\\mathrm{enh}-\\mbox{\\tt iter},0)}\n {\\mathcal{N}_\\mathrm{enh}}\n +1\n \\right),\n\\end{equation}\nwhere {\\tt iter} is the iteration number. \nIn practice, we use an enhancement factor $\\eta_\\mathrm{enh}=2$ and $\\mathcal{N}_\\mathrm{enh}=400$. \nWith this choice, the lock-in problem in the most critical early phases of iterations is avoided.\n\n\n\\subsection{Hermiticity restoration} \\label{sec:hermiticity}\nAccording to Refs.\\ \\cite{Maruhn2014,Schuetrumpf2018},\nthe explicit expression of applying the Skyrme HF Hamiltonian $\\hat{h}$ on a wave function $\\psi$ can be written as:\n\\begin{equation}\\label{eq:h_terms}\n\t\\begin{split}\n\t\\hat{h}\\psi =\\ &U(\\vec{r})\\psi - \\nabla\\cdot \\left[B(\\vec{r})\\nabla\\right]\\psi \\\\\n + & \\frac{\\mathrm{i}}{2}\\left[\\vec{W} \\cdot\\left(\\vec{\\sigma} \\times \\nabla\\right)\\psi + \\vec{\\sigma} \\cdot \\nabla \\times \\left(\\vec{W}\\psi\\right) \\right]. 
\n\t \\end{split}\n\\end{equation}\nThis expression can be directly derived from the Skyrme EDF via Eq.\\ (\\ref{eq:mfham}), without invoking the product rule.\nIn \\cite{Schuetrumpf2018} it was noted that the product rule is not perfectly fulfilled\nwhen derivatives are evaluated via the discrete Fourier transform.\nTherefore, in {\\sc Sky3D} version 1.1 the commonly-adopted form of the spin-orbit term \n\\begin{equation} \\label{eq:old_spin_orbit}\n\\mathrm{i} \\vec{W} \\cdot(\\vec{\\sigma} \\times \\nabla) \\psi\n\\end{equation}\nwas replaced by the one given in Eq.\\ (\\ref{eq:h_terms});\nwith $\\nabla \\times \\vec{W} = 0$, these two forms are connected by the product rule.\nHowever, the second term of $\\hat{h}\\psi$, which involves a position-varying differential operator, is still calculated through the product rule in {\\sc Sky3D}:\n\\begin{equation}\n \\nabla\\cdot \\left[B(\\vec{r})\\nabla\\right]\\psi = \n \\sum_{i=x,y,z} \\frac{\\partial B}{\\partial i}\\frac{\\partial \\psi}{\\partial i} + B\\frac{\\partial^2 \\psi}{\\partial i^2}.\n \\label{eq:prodrule}\n\\end{equation}\nUnfortunately, evaluating Eq.\\ (\\ref{eq:prodrule}) with the FFT-based differentiation breaks the Hermiticity of the operator \\cite{Johnson2011}.\nThis point is confirmed by computed results shown in Sec.\\ \\ref{sec:hermiticityresult}.\n\nInstead of using Eq.\\ (\\ref{eq:prodrule}), the simplest way to restore Hermiticity in the evaluation of $\\nabla \\cdot \\left(B \\nabla \\psi \\right)$ is to compute two consecutive first-order derivatives. \nBut, as discussed in Sec.\\ \\ref{sec:num_grid}, this creates a problem with the second derivative that involves the Fourier component $\\widetilde{\\psi}(k_{N_x\/2+1})$.\nAccording to Ref.\\ \\cite{Johnson2011}, one should keep the term $\\widetilde{\\psi}(k_{N_x\/2+1})$ in the two first derivatives, \nand average the results of $k_{N_x\/2+1}= \\pm \\frac{N_x}{2} \\delta k$ to maintain the symmetry in Fourier space. \nOne can show that this ``average'' algorithm is equivalent to Algorithm \\ref{algorithm} (Algorithm 3 in \\cite{Johnson2011}), \nwhich is simpler to compute and thus implemented in {\\sc HFBFFT}. In Algorithm \\ref{algorithm}, one first computes an FFT-based first derivative, with $\\widetilde{\\psi}(k_{N_x\/2+1})$ saved and then zeroed before the inverse FFT is performed on $\\mathrm{i}k_n\\widetilde{\\psi}(k_n)$ (steps 1 through 3).\nThen one multiplies in coordinate space with the field $B(x)$ involved (step 4). 
Finally, one computes the derivative of the $B\\psi^\\prime$ thus obtained, modifying $\\widetilde{\\phi^\\prime}(k_{N_x\/2+1})$ so that Hermiticity is kept without losing the information contained in $\\widetilde{\\psi}(k_{N_x\/2+1})$ (steps 5 through 7).\nThe position-varying differential operator also appears in many other physics equations, like the heat equation with varying diffusivity and Poisson's equation with changing permittivity;\nhence, Algorithm \\ref{algorithm} has a broad application range.\n\n\\begin{algorithm}\n\\setstretch{1.5}\n\\caption{Compute the one-dimensional position-varying differentiation $\\frac{d}{dx}\\left[B(x)\\frac{d\\psi}{dx}\\right]$.}\n\\label{algorithm}\n\\begin{algorithmic}[1]\n\\State Compute Fourier transform\n$\\widetilde{\\psi}_n=\\mathtt{FFT}[\\psi_\\nu]$\n with $\\psi_\\nu=\\psi(x_\\nu)$.\n\\State Save $\\widetilde{\\psi}_{N_x\/2+1}\\rightarrow\\widetilde{\\Psi}$, build\n $\\widetilde{\\psi^\\prime}_n=ik_n\\widetilde{\\psi}_n$ with $\\widetilde{\\psi^\\prime}_{N_x\/2+1}=0$.\n\\State Compute inverse transform\n $\\psi^\\prime_\\nu=\\mathtt{FFT}^{-1}[\\widetilde{\\psi^\\prime}_n]$.\n\\State Build $\\phi_\\nu=B_\\nu\\psi^\\prime_\\nu$ with $B_\\nu=B(x_\\nu)$.\n\\State Compute Fourier transform $\\widetilde{\\phi}_n=\\mathtt{FFT}[\\phi_\\nu]$.\n\\State Build $\\widetilde{\\phi^\\prime}_n=ik_n\\widetilde{\\phi}_n$ and set\n $\\widetilde{\\phi^\\prime}_{N_x\/2+1} = -\\frac{\\sum_{\\nu=1}^{N_x}B_\\nu}{N_x} \\left(\\frac{N_x}{2} \\delta k \\right)^2\\widetilde{\\Psi}$.\n\\State Compute inverse transform \n $\\frac{d}{dx}\\left[B(x)\\frac{d\\psi}{dx}\\right]_\\nu=\n \\mathtt{FFT}^{-1}[\\widetilde{\\phi^\\prime}_n]$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Numerical realization in harmonic-oscillator basis}\\label{sec:ho_representation}\n\nThe HFB solutions obtained with the {\\sc HFBFFT} code will be compared with the well-established code {\\sc HFBTHO}. This code has been extensively documented in several publications \\cite{Stoitsov2005, Perez2017}. \nThe solver\n{\\sc HFBTHO} uses an expansion of the s.p.\\ wave functions in the basis of axially symmetric HO (or transformed HO) states. The basis is given by the number of oscillator shells that defines the s.p.\\ space size, as well as the oscillator length and deformation that determine the HO wave functions.\nLocal fields in {\\sc HFBTHO} are handled on the Gaussian integration points and the\nGaussian integration rule is used to compute integrals.\n\nA major difference between the two codes lies in the way the HFB\nequations are solved. {\\sc HFBFFT} uses a representation in terms of the canonical basis, see Sec.\\ \\ref{sec:algorithm}, while {\\sc HFBTHO} works in\na quasiparticle space. \nThe results are fully equivalent if the same number of s.p.\\ states is used. \nDifferences appear in connection with the cutoff in pairing space. {\\sc HFBFFT} defines the cutoff in terms of the canonical s.p.\\ energies, whereas {\\sc HFBTHO} does that in terms of the quasiparticle energies.\nThis, taken together with the fact that the pairing strength has to depend on the size of the pairing space, means that the values of $V_{\\mathrm{pair},q}$ are not fully portable. \nThis will play a role in the benchmarking tests presented in Sec.\\ \\ref{sec:benchmark}.\n\n\n\n\\section{Benchmarks}\\label{sec:benchmark}\n\nIn this section, we benchmark {\\sc HFBFFT} against {\\sc HFBTHO}, {\\sc Sky1D}, and {\\sc Sky2D}. \nThese codes have symmetry restrictions. 
\n{\\sc Sky1D} enforces spherical symmetry and can be used for magic nuclei. \n{\\sc HFBTHO} and {\\sc Sky2D} allow for axially symmetric shapes and cover all test cases here. \nThose codes can run with or without imposing reflection symmetry. \n\nFirst, we determine appropriate parameters to use, including the box size and grid spacing.\nBefore making comparisons with other solvers, we quantify the effect brought by the Hermiticity restoration.\nIn the next step, we compare some characteristic nuclei ranging from spherical doubly magic $^{132}$Sn and $^{208}$Pb, to spherical superfluid $^{120}$Sn, to deformed superfluid $^{102,110}$Zr, to superdeformed fission isomer in $^{240}$Pu.\nIn all these calculations, we use the Skyrme functional SLy4 \\cite{Chabanat1998} in the particle-hole channel and the mixed density-dependent\n$\\delta$ interaction ($\\rho_{0,\\mathrm{pair}}=0.32$ fm$^{-3}$ in Eq.\\ (\\ref{eq:epair})) in the particle-particle channel.\n\n\\subsection{Parameter determination}\nTo ensure the correct asymptotic behavior near the box boundary, we use $^{110}$Zr to determine the appropriate box and grid sizes.\nThe nucleus $^{110}$Zr is chosen because it has a significant neutron excess and thus weakly bound canonical states.\nThe calculated proton and neutron densities were inspected for different box lengths and different grids. \nBased on this analysis, we adopted a cubic box with a side length of 37.6 fm and 48 grid points in each dimension (spacing between two neighboring points is 0.8 fm).\nWith the above settings, the proton and neutron densities are below 10$^{-7}$ nucleons\/fm$^3$ at the boundary, which is small enough for our tests.\nFor spherical nuclei such as $^{120}$Sn, a smaller box is usually sufficient.\n\nWe take 176 neutron and 126 proton canonical states ($\\Omega_n = 176,\\ \\Omega_p = 126$), 15 MeV energy cutoff for the pairing window.\nThis number of active states is determined by the tests for spherical $^{120}$Sn and deformed $^{110}$Zr nuclei.\nWhen we increase the number of active states to 200 neutron and 150 proton states, the total energy remains stable within 10 keV.\nIn order to speed up the convergence, we perform 100 sub-iteration steps in the configuration space between two gradient iterations in the coordinate space, initialize with 30 HF+BCS steps, and employ the pairing enhancement factors defined in Sec.\\ \\ref{sec:breakdown}.\n\nFor {\\sc HFBTHO} calculations, we take 25 HO shells for both protons and neutrons unless explicitly stated otherwise.\nAn axially deformed HO basis with $\\beta_2 = 0.2$ is used in deformed nuclei ground state calculations ($^{102,110}$Zr and $^{240}$Pu) and $\\beta_2 = 0.6$ is used to calculate the $^{240}$Pu fission isomer.\nFor the spherical nuclei, we also compare {\\sc HFBFFT} with the results of the 1D spherical HFB code {\\sc Sky1D}, which uses a radial coordinate-space mesh and the five-point finite difference formula for derivatives. The mesh spacing and the number of points we employ in {\\sc Sky1D} are 0.15 fm and 141, respectively.\nFor the deformed nuclei, we compare {\\sc HFBFFT} results with the 2D axial HFB code {\\sc Sky2D}, which uses 31 points in both $r$- and $z$-directions with a mesh spacing of 0.7 fm. 
Since the nuclei considered in this study are all reflection-symmetric, the grid extends from $z=0$\\,fm to $z=21$\\,fm.\n\n\\subsection{Pairing renormalization}\\label{sec:renormalization}\nAs we mentioned in Sec.\\ \\ref{sec:ho_representation}, pairing strengths are not portable between {\\sc HFBFFT} and {\\sc HFBTHO} because of different descriptions of the pairing space and different structures of one-quasiparticle continuum in these two solvers. \nTherefore, we need to renormalize the pairing strengths to compare results for open-shell nuclei in which pairing is essential.\nIntuitively, there are several choices for pairing renormalization.\n \nFor instance, one can tune pairing strengths to reproduce the pairing energies in different solvers.\nHowever, as discussed in \\cite{Papenbrock1999,Borycki2006}, the pairing energy density is divergent with respect to the cutoff energy. A better measure is the quantity\n\\begin{equation}\\label{eq:Ekineff}\n\\tilde{E}_\\mathrm{kin}^q = E_\\mathrm{kin}^q+ E_\\mathrm{pair}^q~~(q={\\rm n\\,or\\,p}),\n\\end{equation}\nwhich is less sensitive to the pairing cutoff energy.\nAs it will be\nshown in Sec.\\ \\ref{sec:doublymagic}, the kinetic energy strongly depends on the basis size in {\\sc HFBTHO}.\nTherefore, in situations when the error related to the choice of the basis, or spatial grid, dominates, $\\tilde{E}_\\mathrm{kin}$ will be a poor renormalization measure.\nAnother pairing measure is the spectral pairing gap \\cite{Dobaczewski1984,Dobaczewski1996,Bender2000}\n\\begin{equation}\\label{eq:Gap}\n\\Delta^q \\equiv \\frac{\\sum_{\\alpha \\in q} w_\\alpha v_\\alpha^2 \\Delta_{\\alpha\\alpha}}{\\sum_{\\alpha \\in q} w_\\alpha v_\\alpha^2}\n~~(q={\\rm n\\,or\\,p}).\n\\end{equation}\nThis quantity has been used in numerous papers to adjust pairing strengths to observed odd-even mass differences and we shall use it in this study to renormalize the pairing channel of different solvers.\n\n\n\\subsection{Energy shift by Hermiticity restoration} \\label{sec:hermiticityresult}\n\nAs we mentioned in Sec.\\ \\ref{sec:hermiticity}, the product rule in the FFT-based differentiation violates the Hermiticity of the position-varying differential operator.\nTo restore the Hermiticity, we implement Algorithm \\ref{algorithm} in {\\sc HFBFFT}.\nThe results are shown in Table \\ref{tab:hermiticity} for several nuclei.\nThe Hermiticity violation is demonstrated by a non-vanishing $\\Delta\\mathcal{S}^2 \\sim 10^{-6}$ MeV$^2$\nin the calculations of spherical nuclei $^{132}$Sn and $^{208}$Pb for which the static pairing vanishes and hence the HFB calculation is reduced to HF.\nAs for other open-shell nuclei with non-vanishing pairing, their $\\Delta\\mathcal{S}^2$ values are similar before and after the Hermiticity restoration.\nThese values of $\\Delta\\mathcal{S}^2$ are characteristic of the accuracy typically achieved in {\\sc HFBFFT} and they are larger than the error due to the Hermiticity breaking.\nIn terms of the total energy, the effect is of the order of a few keV, i.e., insignificant for many practical applications. 
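The effect can also be verified independently of the full 3D code. The following one-dimensional Python\/NumPy sketch contrasts the product-rule form (\\ref{eq:prodrule}) with a one-dimensional transcription of Algorithm \\ref{algorithm}; the function names and the model field $B(x)$ are chosen for illustration only and are not part of {\\sc HFBFFT}.
\\begin{verbatim}
import numpy as np

def d1(f, dx):
    # spectral first derivative with the ambiguous momentum zeroed
    N = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    ft = 1j * k * np.fft.fft(f)
    ft[N // 2] = 0.0
    return np.fft.ifft(ft)

def d2(f, dx):
    # spectral second derivative (even power, no ambiguity)
    N = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    return np.fft.ifft(-(k**2) * np.fft.fft(f))

def divBgrad_product_rule(psi, B, dx):
    # product-rule form: B'(x) psi'(x) + B(x) psi''(x)
    return d1(B, dx) * d1(psi, dx) + B * d2(psi, dx)

def divBgrad_algorithm1(psi, B, dx):
    # one-dimensional transcription of Algorithm 1
    N = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    psit = np.fft.fft(psi)                     # step 1
    saved = psit[N // 2]                       # step 2: save the Nyquist amplitude
    dpsit = 1j * k * psit
    dpsit[N // 2] = 0.0
    dpsi = np.fft.ifft(dpsit)                  # step 3
    phit = np.fft.fft(B * dpsi)                # steps 4 and 5
    out = 1j * k * phit                        # step 6, with the Nyquist correction:
    out[N // 2] = -B.mean() * (np.pi / dx)**2 * saved  # (pi/dx)^2 = (N delta k/2)^2
    return np.fft.ifft(out)                    # step 7

# build both operator matrices column by column and test Hermiticity
N, dx = 32, 0.8
x = (np.arange(1, N + 1) - (N + 1) / 2) * dx
B = 1.0 + 0.3 * np.exp(-(x / 4.0)**2)          # smooth, real model field B(x)
unit = np.eye(N)
M_alg = np.column_stack([divBgrad_algorithm1(unit[:, j], B, dx) for j in range(N)])
M_pr = np.column_stack([divBgrad_product_rule(unit[:, j], B, dx) for j in range(N)])
print(np.abs(M_alg - M_alg.conj().T).max())    # Hermitian up to round-off
print(np.abs(M_pr - M_pr.conj().T).max())      # generally larger: Hermiticity broken
\\end{verbatim}
In such a test the Algorithm-\\ref{algorithm} form is Hermitian up to machine precision, while the size of the product-rule defect depends on the spectral content of $B(x)$ and of the functions it acts on.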
\nEven so, Hermiticity breaking effects can affect some calculations if not remedied.\nFor example, the small error brought by the Hermiticity breaking can accumulate step by step in a time-dependent calculation.\n\n\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{c|rr|rr}\n\\hline\n\\hline\n\\multirow{2}{*}{{\\sc HFBFFT}} & \\multicolumn{2}{c|}{Hermiticity broken} & \\multicolumn{2}{c}{Hermiticity restored} \\\\\n & \\multicolumn{1}{c}{$E_\\mathrm{tot}$} &\\multicolumn{1}{c|}{$\\Delta\\mathcal{S}^2$} & \\multicolumn{1}{c}{$E_\\mathrm{tot}$} &\\multicolumn{1}{c}{$\\Delta\\mathcal{S}^2$} \\\\\n\\hline \n$^{132}$Sn &$-$1103.542\\textbf{9} &3.91E-06 &$-$1103.542\\textbf{3} &4.15E-14 \\\\\n$^{208}$Pb &$-$1635.68\\textbf{17} &6.61E-06 &$-$1635.68\\textbf{07} &6.40E-14\\\\\n$^{120}$Sn &$-$1018.331\\textbf{0} &3.11E-05 &$-$1018.330\\textbf{5} &3.45E-05 \\\\\n$^{110}$Zr &$-$893.857\\textbf{8} &3.09E-05 &$-$893.857\\textbf{4} &4.02E-05 \\\\\n$^{102}$Zr &$-$859.469\\textbf{6} &3.00E-05 &$-$859.469\\textbf{2} &2.19E-05 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Total energies $E_\\mathrm{tot}$ (in MeV) and $\\Delta\\mathcal{S}^2$ (in MeV$^2$) for five nuclei calculated with {\\sc HFBFFT} without and with the Hermiticity restoration. The digits which do not coincide before and after the Hermiticity restoration are marked in bold.}\n\\label{tab:hermiticity}\n\\end{table}\n\n\\subsection{Doubly magic nuclei: $^{132}$Sn and $^{208}$Pb}\\label{sec:doublymagic}\nIn the first step, we calculate two doubly magic unpaired nuclei $^{132}$Sn and $^{208}$Pb.\nFor these nuclei, the results of \\mbox{\\sc HFBFFT} and {\\sc Sky3D} are identical.\nIn Table \\ref{tab:132Sn}, we list the ground-state energies as well as contributions from various functional terms, obtained from four solvers {\\sc HFBFFT}, {\\sc HFBTHO}, {\\sc Sky1D} and {\\sc Sky2D} for $^{132}$Sn. \nTable~\\ref{tab:208Pb} shows similar results for $^{208}$Pb.\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{lrrrr}\n\\hline\n\\hline\n$^{132}$Sn & {\\sc HFBTHO} & {\\sc HFBFFT} & {\\sc Sky1D} & {\\sc Sky2D}\\\\\n\\hline\n$E_{\\mathrm{tot}}$ & $-$1103.4\\textbf{9}\t& $-$1103.5\\textbf{4} & $-$1103.5\\textbf{7}\t& $-$1103.5\\textbf{6}\\\\\n$E_{\\mathrm{kin}}^\\mathrm{n}$ & 1637.\\textbf{71}\t& 1637.9\\textbf{7}\t& 1638.0\\textbf{1} & 1638.0\\textbf{2}\\\\\n$E_{\\mathrm{kin}}^\\mathrm{p}$ & 808.\\textbf{44}\t& 808.5\\textbf{7}\t& 808.5\\textbf{9} &808.5\\textbf{6}\\\\\n$E_{\\mathrm{\\rho \\rho}}$ & $-$4876.\\textbf{26}\t& $-$4877.0\\textbf{2} & $-$4877.0\\textbf{4}\t&$-$4877.0\\textbf{7} \\\\\n$E_{\\mathrm{\\rho \\tau}}$ & 821.\\textbf{49}\t& 821.7\\textbf{0}\t& 821.7\\textbf{3} &821.7\\textbf{2}\\\\\n$E_{\\mathrm{\\rho \\Delta \\rho}}$ & 248.\\textbf{11}\t & 248.2\\textbf{3}\t& 248.2\\textbf{5} &248.2\\textbf{3}\\\\\n$E_{\\mathrm{\\rho \\nabla \\vec{J}}}$ & $-$84.4\\textbf{0}\t& $-$84.4\\textbf{3} & $-$84.4\\textbf{4}\t &$-$84.4\\textbf{3}\\\\\n$E_{\\mathrm{Coul}}$ & 341.4\\textbf{2} & 341.4\\textbf{4} & 341.4\\textbf{4} &341.4\\textbf{3}\t\\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Energy contributions (in MeV) to the binding energy of for $^{132}$Sn computed with {\\sc HFBTHO}, {\\sc HFBFFT}, {\\sc Sky1D}, and {\\sc Sky2D}. The digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:132Sn}\n\\end{table}\nWhen we compare {\\sc HFBFFT} with {\\sc Sky1D} and {\\sc Sky2D} for $^{132}$Sn , we find the energy differences do not usually exceed 40\\,keV. 
\nSuch small differences can be traced back to different box boundary conditions assumed in these codes.\nIn {\\sc HFBFFT}, calculations are performed in a 3D rectangular box while the box is represented by a spherical shell in {\\sc Sky1D} and a cylindrical shape in {\\sc Sky2D}. \nFor a well-bound nucleus and large spatial boxes, the results should be practically independent of the geometry of the box. \nAs seen in Table\\,\\ref{tab:132Sn} this indeed holds for $^{132}$Sn. \nAs we will see below, larger box-related errors are expected in superfluid and\/or weakly bound nuclei. \nFor nuclear matter and time-dependent calculations, the finite-size box errors can be appreciable; they can be greatly reduced by imposing twist-averaged boundary conditions\n\\cite{Schuetrumpf2016}.\n\n\\begin{table*}[htb]\n\\centering\n\\begin{tabular}{lrrrrrr}\n\\hline\n\\hline\n$^{208}$Pb & $N$=15 & $N$=20 & $N$=25 & $N$=30 & {\\sc HFBFFT} & {\\sc Sky1D} \\\\\n\\hline\n$E_{\\mathrm{tot}}$ & $-$1634.\\textbf{25}\t & $-$1635.\\textbf{16}\t & $-$1635.\\textbf{46}\t & $-$1635.6\\textbf{2} &$-$1635.6\\textbf{8}\t& $-$1635.7\\textbf{0} \\\\\n$E_{\\mathrm{kin}}^\\mathrm{n}$ & 2525.\\textbf{13}\t& 2527.\\textbf{80}\t& 2528.\\textbf{42}\t& 2528.\\textbf{83} &2529.1\\textbf{3} & 2529.1\\textbf{6}\t\\\\\n$E_{\\mathrm{kin}}^\\mathrm{p}$ & 133\\textbf{4.56}\t& 133\\textbf{6.34}\t& 1336.\\textbf{71}\t& 1336.\\textbf{91} &1337.0\\textbf{6}\t& 1337.0\\textbf{7} \\\\\n$E_{\\mathrm{\\rho \\rho}}$ & $-$78\\textbf{35.80}\t& $-$784\\textbf{4.07}\t& $-$784\\textbf{5.66}\t & $-$7846.\\textbf{67} & $-$7847.\\textbf{54}& $-$7847.\\textbf{63} \\\\\n$E_{\\mathrm{\\rho \\tau}}$ & 132\\textbf{7.84}\t& 132\\textbf{9.55}\t& 1329.\\textbf{79}\t& 1329.\\textbf{98} &1330.2\\textbf{0}\t& 1330.2\\textbf{2} \\\\\n$E_{\\mathrm{\\rho \\Delta \\rho}}$ & 31\\textbf{4.05} & 315.\\textbf{12}\t& 315.\\textbf{12} \t& 315.\\textbf{17} &315.2\\textbf{9}\t& 315.2\\textbf{9} \\\\\n$E_{\\mathrm{\\rho \\nabla \\vec{J}}}$ & $-$96.\\textbf{30}\t& $-$96.4\\textbf{4}\t& $-$96.4\\textbf{2} \t& $-$96.4\\textbf{3} &$-$96.4\\textbf{5} & $-$96.4\\textbf{5}\t\\\\\n$E_{\\mathrm{Coul}}$ & 796.\\textbf{26} & 796.\\textbf{55}\t& 796.\\textbf{56}\t& 796.6\\textbf{0} &796.6\\textbf{3} & 796.6\\textbf{3}\t \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Energies for $^{208}$Pb from {\\sc HFBTHO} (computed with different number of HO shells $N$), {\\sc HFBFFT}, and {\\sc Sky1D}. All energies are in MeV. The digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:208Pb}\n\\end{table*}\n\nWe find about 50\\,keV energy difference between {\\sc HFBTHO} and {\\sc HFBFFT}; this difference can be primarily traced back to $E_\\mathrm{kin}$ and $E_\\mathrm{\\rho \\rho}$.\nAs discussed in Refs.\\ \\cite{Furnstahl2012, Binder2016}, the kinetic energy converges slowly in the HO basis. 
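\nThe slow convergence with the basis size can be quantified by the exponential infrared extrapolation in the effective box size $L$ introduced below, Eq.\ (\\ref{eq:ene_conv}). As a rough illustration of how such an extrapolation is carried out in practice, the following least-squares fit of the {\\sc HFBTHO} energies of Table~\\ref{tab:208Pb} is a minimal sketch written for this comparison; the oscillator length $b$ is a placeholder value (its actual value is not quoted here), and since $b$ only rescales $a_0$ and $k_{\\infty}$ the extracted $E_{\\infty}$ does not depend on it. The use of \\texttt{scipy.optimize.curve\\_fit} is just one possible choice.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\n# HFBTHO total energies of 208Pb for N = 15, 20, 25, 30 shells (table above)\nN = np.array([15.0, 20.0, 25.0, 30.0])\nE = np.array([-1634.25, -1635.16, -1635.46, -1635.62])    # MeV\n\nb = 1.0                                   # oscillator length (fm), placeholder\nL = np.sqrt(2.0*(N + 3.0\/2.0 + 2.0))*b    # effective box size\n\ndef model(L, E_inf, a0, k_inf):\n    return E_inf + a0*np.exp(-2.0*k_inf*L)\n\npopt, _ = curve_fit(model, L, E, p0=(-1635.8, 500.0, 0.5))\nprint('E_infinity = %.3f MeV' % popt[0])\n\\end{verbatim}\n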
\nTo investigate this effect, we calculate $^{208}$Pb using different numbers of HO\\ shells in {\\sc HFBTHO}.\nWe see in Table \\ref{tab:208Pb} that when we increase the number of HO shells to 30, the {\\sc HFBTHO} energies approach the {\\sc HFBFFT} values.\nIt is also seen that $E_\\mathrm{kin}$ and $E_\\mathrm{\\rho \\rho}$ exhibit the largest variations with $N$.\n\nIn Refs.\\ \\cite{Furnstahl2012, More2013}, the correction to the ground-state energy due to the finite number of HO shells $N$ has been derived:\n\\begin{equation}\n\t\\label{eq:ene_conv}\n\tE_{L} = E_{\\infty} + a_0e^{-2k_{\\infty}L},\n\\end{equation}\nwhere $L \\equiv \\sqrt{2(N+3\/2+2)}b$, $b$ is the oscillator length of our HO basis, and $a_0,k_{\\infty}$ and $E_{\\infty}$ are fit parameters. \nThen $E_{\\infty}$ is the energy in the limit of infinitely large model space. The fit \nof $E_\\mathrm{tot}$ to Eq.\\ (\\ref{eq:ene_conv}) \nis presented in Fig.\\ \\ref{fig:1} and the resulting value of $E_{\\infty} = -1635.786$ MeV agrees fairly well with the {\\sc HFBFFT} and {\\sc Sky1D} values.\nHence, obtaining an accurate kinetic as well as total energies in a HO basis-expansion solver requires a huge number of shells. In this context, the use of the coordinate-space representation is beneficial.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{Fig1.pdf}\n\\caption{$E_\\mathrm{tot}$ as a function of $L$ for $^{208}$Pb. The {\\sc HFBTHO} results are marked by red dots. The blue curve is fitted according to Eq.\\ (\\ref{eq:ene_conv}).}\n\\label{fig:1}\n\\end{figure}\n\n \n\\begin{table*}[htb]\n\\centering\n\\begin{tabular}{l|r|rrr|rrr}\n\\hline\n\\hline\n\\multirow{2}{*}{$^{120}$Sn} & {\\sc HFBTHO} & {\\sc HFBFFT} & {\\sc Sky1D} & {\\sc Sky2D} & {\\sc HFBFFT } & {\\sc Sky1D} & {\\sc Sky2D} \\\\ \n & & \\multicolumn{3}{c}{$\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$-renorm. } &\n \\multicolumn{3}{|c}{$\\Delta^\\mathrm{n}$ -renorm. 
} \\\\\n & & & & & & & \\\\[-8pt]\n\\hline\n$E_{\\mathrm{tot}}$ & $-$1018.\\textbf{77} &$-$1018.3\\textbf{4} &$-$1018.\\textbf{45} &$-$1018.3\\textbf{7} &$-$1018.7\\textbf{8} &$-$1018.\\textbf{92} &$-$1018.7\\textbf{4}\\\\\n$E_{\\mathrm{Coul}}$ & 347.\\textbf{37} & 347.4\\textbf{4} & 347.4\\textbf{5} &347.4\\textbf{1} &347.4\\textbf{7} & 347.4\\textbf{9} &347.4\\textbf{5}\\\\\n$E_{\\mathrm{kin}}^\\mathrm{n}$ & 134\\textbf{0.51} & 1335.4\\textbf{0} &1335.\\textbf{18} &1335.4\\textbf{3} &1339.1\\textbf{7} &1339.1\\textbf{4} &1338.\\textbf{72}\\\\\n$E_{\\mathrm{kin}}^\\mathrm{p}$ & 830.\\textbf{75} & 830.9\\textbf{7} &831.0\\textbf{1} &830.0\\textbf{1} &831.2\\textbf{5} &831.3\\textbf{1} &831.2\\textbf{8}\\\\\n$E_\\mathrm{pair}^\\mathrm{n}$ & $-$\\textbf{12.48} & $-$7.3\\textbf{7} &$-$7.\\textbf{15} &$-$7.4\\textbf{0} & $-$9.\\textbf{29} &$-$9.\\textbf{14} &$-$9.\\textbf{02}\\\\\n$\\tilde{E}_{\\mathrm{kin}}^\\mathrm{n}$ & \\textit{1328.03} & \\textit{1328.03} &\\textit{1328.03}\n&\\textit{1328.03} &1329.88 &1330.01 &1329.70\\\\\n$\\Delta^\\mathrm{n}$ & \\textit{1.25} & 1.08 &1.07 &1.09 &\\textit{1.25} &\\textit{1.25} &\\textit{1.25}\\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{n}}$ & $-$8.0\\textbf{2} &$-$8.0\\textbf{1} &$-$8.0\\textbf{1} &$-$8.0\\textbf{4} &$-$8.0\\textbf{0} &$-$8.0\\textbf{0} &$-$8.0\\textbf{4}\\\\\n$V_\\mathrm{pair,n}$ & $-$284.57 & $-$342.70 &$-$346.50 &$-$354.90 & $-$361.80 & $-$367.30 &$-$372.35\\\\ \n$r_\\mathrm{rms}$ & 4.67 & 4.67 &4.67 &4.67 & 4.67 &4.67 &4.67\\\\\n\n\\hline\n\\hline \n\\end{tabular}\n\\caption{Results of HFB + SLy4 calculations for $^{120}$Sn using {\\sc HFBTHO}, {\\sc HFBFFT}, {\\sc Sky1D}, and {\\sc Sky2D}.\nTwo neutron pairing renormalization variants are considered, by adjusting \nthe neutron pairing strengths in {\\sc HFBFFT}, {\\sc Sky1D}, and {\\sc Sky2D} to reproduce the {\\sc HFBTHO} values of $\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ and $\\Delta^\\mathrm{n}$.\n All energies are in MeV. 
\n The radius $r_{\\mathrm{rms}}$ is in fm.\n The digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:120Sn}\n\\end{table*}\n\n\\subsection{Spherical superfluid nucleus: $^{120}$Sn}\\label{sec:120Sn}\n\nWe now calculate $^{120}$Sn which has a non-vanishing neutron pairing.\nThe neutron pairing strength $V_\\mathrm{pair,n}$ in {\\sc HFBTHO} is adjusted to the average experimental neutron pairing gap $\\Delta_n = 1.25$\\,MeV.\nIn {\\sc HFBFFT}, {\\sc Sky1D} and {\\sc Sky2D}, two pairing renormalizations are used.\nIn the first variant, the neutron pairing strengths are adjusted to reproduce the {\\sc HFBTHO} value of $\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$.\nIn the second variant, the {\\sc HFBTHO} value of $\\Delta^\\mathrm{n}$ is matched.\nThe results for both variants are displayed in Table \\ref{tab:120Sn}.\nThe neutron pairing strengths vary between the solvers, reflecting different structures of their quasiparticle pairing spaces, i.e., different pairing cutoff procedures and different structures of the discretized one-quasiparticle continuum.\n\nAlthough there are large discrepancies in $E_\\mathrm{kin}^\\mathrm{n}$ and $E_\\mathrm{pair}^\\mathrm{n}$ between {\\sc HFBFFT} and {\\sc HFBTHO}, in the first renormalization variant, the difference of the total energy, about 0.4 MeV, is quite reasonable considering the fact that the pairing space is treated differently and the {\\sc HFBTHO} results are affected by the basis truncation error.\nThe difference in $E_{\\mathrm{tot}}$ between the three coordinate-space solvers, less than 150\\,keV, reflects the dependence of the level density of the discretized quasiparticle continuum on the box boundary conditions assumed.\n\nIn the pairing gap renormalization variant, the agreement of $E_{\\mathrm{tot}}$ is even better, with only 10-30\\,keV difference between {\\sc HFBFFT}, {\\sc HFBTHO} and {\\sc Sky2D}. \nIn this variant, the magnitudes of the neutron pairing energy and kinetic energy are considerably larger as compared to the variant in which $\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ is renormalized.\nStill, as seen in Table \\ref{tab:120Sn}, both pairing renormalizations work reasonably well for $^{120}$Sn. \nIt is interesting to note that the total root-mean-square (rms) radii $r_\\mathrm{rms}$ are predicted very robustly in all renormalization variants.\n\n\\subsection{Axially deformed nuclei: $^{102,110}$Zr}\n\nThe neutron-rich nuclei $^{102,110}$Zr are suitable test cases, as they are known\/expected to have large prolate deformations. \nIn addition, $^{110}$Zr is weakly bound, with the neutron chemical potential $\\epsilon_{\\mathrm{F},\\mathrm{n}}\\approx -3.5$\\,MeV. \nThe HFB proton pairing vanishes in this nucleus.\nIn Table \\ref{tab:110Zr}, we show results for $^{110}$Zr with the two pairing renormalization schemes investigated in Sec.~\\ref{sec:120Sn}.\nIt is seen that the {\\sc HFBFFT} results for various observables, i.e., total energy, quadrupole moments, and the rms radii, all agree well with those from {\\sc HFBTHO} in both pairing variants.\n\n \\begin{table*}[htb]\n\\centering\n\\begin{tabular}{l|r|rr|rr}\n\\hline\n\\hline\n\\multirow{2}{*}{$^{110}$Zr } & {\\sc HFBTHO} & {\\sc HFBFFT} & {\\sc Sky2D} & {\\sc HFBFFT } & {\\sc Sky2D} \\\\ \n & & \\multicolumn{2}{c}{$\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$-renorm. } &\n \\multicolumn{2}{|c}{$\\Delta^\\mathrm{n}$ -renorm. 
} \\\\\n \\hline\n$E_\\mathrm{tot}$ & $-$893.\\textbf{97} & $-$894.3\\textbf{3} &$-$894.3\\textbf{2} &$-$894.0\\textbf{1} &$-$894.0\\textbf{1}\\\\\n$E_\\mathrm{Coul}$ & 226.7\\textbf{2} & 226.7\\textbf{2} &226.7\\textbf{1} & 226.7\\textbf{4} &226.7\\textbf{0}\\\\\n$E_\\mathrm{kin}^\\mathrm{n}$ & 136\\textbf{8.08} & 1369.\\textbf{22} &1368.\\textbf{98} & 1367.\\textbf{86} &1367.\\textbf{13}\\\\\n$E_\\mathrm{kin}^\\mathrm{p}$ & 632.0\\textbf{3} & 632.0\\textbf{5} &632.\\textbf{13} & 632.\\textbf{16} &632.\\textbf{05}\\\\\n$E_\\mathrm{pair}^\\mathrm{n}$ & $-$\\textbf{3.18} &$-$4.\\textbf{31} &$-$4.\\textbf{08} & $-$2.\\textbf{30} &$-$2.\\textbf{19}\\\\\n$\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ & \\textit{1364.90} & \\textit{1364.90} &\\textit{1364.90} &1365.56 &1364.94\\\\\n$\\Delta^\\mathrm{n}$ & \\textit{0.64} & 0.93 &0.92 &\\textit{0.64} &\\textit{0.64} \\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{n}}$ & $-$3.5\\textbf{5} & $-$3.5\\textbf{0} &$-$3.5\\textbf{2} &$-$3.5\\textbf{5} &$-$3.5\\textbf{7}\\\\\n$V_\\mathrm{pair,n}$ & $-$284.57 & $-$409.80 &$-$428.00 & $-$371.00 &$-$384.80 \\\\\n$r_\\mathrm{rms}$ & 4.7\\textbf{3} & 4.7\\textbf{3} &4.7\\textbf{4} &4.7\\textbf{3} &4.7\\textbf{4} \\\\\n$Q_{20}^\\mathrm{n}$ & 7\\textbf{89} & 79\\textbf{4} &79\\textbf{5} & 79\\textbf{1} &79\\textbf{6}\\\\\n$Q_{20}^\\mathrm{p}$ & 44\\textbf{4} & 44\\textbf{7} &44\\textbf{7} & 44\\textbf{5} &44\\textbf{7}\\\\\n\n\\hline\n\\hline \n\\end{tabular}\n\\caption{Results of HFB + SLy4 calculations for $^{110}$Zr with {\\sc HFBTHO}, {\\sc HFBFFT} and {\\sc Sky2D}.\nTwo neutron pairing renormalization variants are considered, by adjusting the neutron pairing strengths in {\\sc HFBFFT} and {\\sc Sky2D} to reproduce the {\\sc HFBTHO} values of $\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ and $\\Delta^\\mathrm{n}$.\nAll energies are in MeV.\nThe radius $r_{\\mathrm{rms}}$ is in fm and quadrupole moments $Q_{20}^\\mathrm{p,n}$ are in fm$^2$. The HFB proton pairing vanishes in this nucleus.\nThe digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:110Zr}\n\\end{table*}\n\nIn the case of $^{102}$Zr, one also needs to consider proton pairing. \nIn this case, we renormalize both neutron and proton spectral pairing gaps by reproducing their values obtained from {\\sc HFBTHO}.\nIn the calculation, 25 HO shells for both neutrons and protons are employed in {\\sc HFBTHO}, which means that the s.p.\\ proton and neutron spaces are the same.\nIn {\\sc HFBFFT}, the canonical spaces are different as $\\Omega_n = 176$, $\\Omega_p = 126$. However, the actual pairing space is set by the soft-cutoff factor $w_\\alpha$.\nIt is seen in Table \\ref{tab:102Zr} that the benchmarking results following the pairing renormalization are very satisfactory. 
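\nFor completeness, the spectral gap used in this renormalization, Eq.~(\\ref{eq:Gap}), is a simple weighted average once the canonical occupations $v_\\alpha^2$, the cutoff weights $w_\\alpha$, and the diagonal pairing matrix elements $\\Delta_{\\alpha\\alpha}$ are available. The short sketch below only illustrates the definition: the Fermi-type profile used for $w_\\alpha$ is a generic assumption and not necessarily the soft-cutoff form implemented in {\\sc HFBFFT}, and the input arrays are toy data.\n\\begin{verbatim}\nimport numpy as np\n\ndef soft_cutoff(e_can, e_cut=60.0, width=5.0):\n    # generic Fermi-shaped weight w_alpha; the actual profile may differ\n    return 1.0\/(1.0 + np.exp((e_can - e_cut)\/width))\n\ndef spectral_gap(e_can, v2, delta_diag):\n    w = soft_cutoff(e_can)\n    return np.sum(w*v2*delta_diag)\/np.sum(w*v2)\n\n# toy canonical spectrum (MeV), occupations and diagonal pairing gaps\ne_can      = np.linspace(-40.0, 80.0, 200)\nv2         = 1.0\/(1.0 + np.exp(e_can\/2.0))\ndelta_diag = 1.2*np.exp(-0.5*(e_can\/40.0)**2)\n\nprint('Delta = %.3f MeV' % spectral_gap(e_can, v2, delta_diag))\n\\end{verbatim}\n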
\nIn particular, the results of {\\sc HFBFFT}, {\\sc HFBTHO}, and {\\sc Sky2D } are fairly close for the observables:\n$E_\\mathrm{tot}$, $r_\\mathrm{rms}$ , and quadrupole moments.\n\n\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{lrrr}\n\\hline\n\\hline\n$^{102}$Zr & HFBTHO & HFBFFT & Sky2D \\\\\n\\hline\n$E_\\mathrm{tot}$ & $-$859.6\\textbf{5} & $-$859.6\\textbf{9} &$-$859.6\\textbf{7}\\\\\n$E_\\mathrm{Coul}$ & 231.1\\textbf{1} & 231.1\\textbf{6} &231.1\\textbf{4}\\\\\n$E_\\mathrm{kin}^\\mathrm{n}$ & 120\\textbf{2.02} & 120\\textbf{0.96} & 120\\textbf{1.97}\\\\\n$E_\\mathrm{kin}^\\mathrm{p}$ & 651.2\\textbf{5} & 651.2\\textbf{2} & 651.2\\textbf{7}\\\\\n$E_\\mathrm{pair}^\\mathrm{n}$ & $-$\\textbf{3.39} & $-$2.\\textbf{50} &$-$2.\\textbf{39} \\\\\n$E_\\mathrm{pair}^\\mathrm{p}$ & $-$1.\\textbf{97} & $-$1.4\\textbf{2} &$-$1.3\\textbf{8} \\\\\n$\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ & 119\\textbf{8.63} & 1199.5\\textbf{3} & 1199.5\\textbf{8} \\\\\n$\\tilde{E}_\\mathrm{kin}^\\mathrm{p}$ & 649.\\textbf{28} & 649.\\textbf{79} & 649.\\textbf{89} \\\\\n$\\Delta^\\mathrm{n}$ & \\textit{0.69} & \\textit{0.69} &\\textit{0.69}\\\\\n$\\Delta^\\mathrm{p}$ & \\textit{0.56} & \\textit{0.56} & \\textit{0.56} \\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{n}}$ & $-$5.4\\textbf{3} & $-$5.4\\textbf{2} & $-$5.4\\textbf{4} \\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{p}}$ & $-$12.0\\textbf{9} & $-$12.0\\textbf{9} &$-$12.1\\textbf{0} \\\\\n$V_\\mathrm{pair}^\\mathrm{n}$ & $-$284.57 & $-$367.00 & $-$378.40 \\\\\n$V_\\mathrm{pair}^\\mathrm{p}$\n & $-$284.57 & $-$372.00 & $-$384.70 \\\\\n$r_\\mathrm{rms}$ & 4.58 & 4.58 &4.58\\\\\n$Q_{20}^\\mathrm{n}$ & 63\\textbf{9} & 63\\textbf{9} & 64\\textbf{0} \\\\\n$Q_{20}^\\mathrm{p}$ & 411 & 411 & 411 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Results of HFB + SLy4 calculations for $^{102}$Zr using {\\sc HFBTHO}, {\\sc HFBFFT} and {\\sc Sky2D}. The pairing renormalization is carried out by adjusting the proton and neutron pairing strengths in {\\sc HFBFFT} and {\\sc Sky2D} to reproduce the {\\sc HFBTHO} values of $\\Delta^\\mathrm{n}$ and $\\Delta^\\mathrm{p}$.\nAll energies are in MeV. \nThe radius $r_{\\mathrm{rms}}$ is in fm and quadrupole moments $Q_{20}^\\mathrm{p,n}$ are in fm$^2$.\nThe digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:102Zr}\n\\end{table}\n\n\\subsection{Superdeformed heavy nucleus: $^{240}$Pu}\nCompared with the HO basis, the coordinate-space representation can better capture strongly deformed configurations, such as\nthe superdeformed fission isomer (f.i.) in $^{240}$Pu. \nIndeed, very large configuration spaces are needed to guarantee the convergence of the HO expansion at large deformations \\cite{Nikolov2011,Schunck2013}.\nGiven the large number of nucleons in $^{240}$Pu, one needs to carefully consider the number of canonical states in {\\sc HFBFFT} and {\\sc Sky2D} calculations.\nTo this end, we performed a series of calculations by increasing the canonical space until the convergence had been reached.\nThis has been done separately for the ground state (g.s.) and f.i.\\ of $^{240}$Pu.\nThe final values are: ($\\Omega_n,\\ \\Omega_p) = (300,\\ 200)$ for the g.s.\\ and ($\\Omega_n,\\ \\Omega_p) = (400,\\ 300)$ for the f.i.\\ calculations.\nWe renormalize the pairing strengths for $^{240}$Pu to reproduce the g.s.\\ $\\Delta^{\\mathrm{n}}$ and $\\Delta^{\\mathrm{p}}$ obtained in {\\sc HFBTHO}.\nThe results are displayed in Table \\ref{tab:240Pu}. 
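\nIn all these cases the renormalization itself reduces to a one-dimensional root-finding problem: the pairing strength is varied until the solver reproduces the target value of $\\Delta^q$ (or of $\\tilde{E}_\\mathrm{kin}^q$). A minimal sketch of such a tuning loop is shown below; \\texttt{run\\_hfb} stands for a complete self-consistent calculation returning the spectral gap for a given strength and is treated here as a black box, and the secant update is merely one convenient choice rather than the procedure actually hard-wired in any of the codes.\n\\begin{verbatim}\ndef tune_pairing_strength(run_hfb, target_gap, v0=-300.0, v1=-350.0,\n                          tol=1.0e-3, max_iter=20):\n    # Adjust V_pair until the spectral gap matches target_gap (secant method).\n    # run_hfb(v) -> spectral gap Delta from a converged HFB run with strength v.\n    f0 = run_hfb(v0) - target_gap\n    f1 = run_hfb(v1) - target_gap\n    for _ in range(max_iter):\n        if abs(f1) < tol:\n            return v1\n        v0, v1 = v1, v1 - f1*(v1 - v0)\/(f1 - f0)\n        f0, f1 = f1, run_hfb(v1) - target_gap\n    return v1\n\\end{verbatim}\n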
\n\nIn {\\sc HFBTHO} and {\\sc Sky2D}, the f.i.\\ is found by performing quadrupole-moment constrained calculations.\nThe f.i.\\ configuration in {\\sc HFBFFT} was computed by initializing the code with various HO deformations.\nAs seen in Table \\ref{tab:240Pu}, {\\sc HFBTHO} and {\\sc HFBFFT} results are very similar for g.s.\\ energies, g.s.\\ quadruple deformations, and radii.\n\nTo test the functionality of {\\sc HFBFFT} for the f.i.,\nwe renormalize pairing strengths in {\\sc HFBFFT} and {\\sc Sky2D} to the {\\sc HFBTHO} pairing gaps.\nBoth coordinate-space solvers give very close results for the f.i., and they agree nicely with the {\\sc HFBTHO} results, see Table \\ref{tab:240Pu}.\nOverall, the $^{240}$Pu results obtained with {\\sc HFBFFT} for both the g.s.\\ and f.i.\\ show reasonable agreement with those from {\\sc HFBTHO}.\n\n\\begin{table*}[htp]\n\\centering\n\\begin{tabular}{l|rr|rrr}\n\\hline\n\\hline\n\\multirow{3}{*}{$^{240}$Pu} & \\multicolumn{2}{c|}{ground state} & \\multicolumn{3}{c}{fission isomer} \\\\ \n\n & \\multicolumn{1}{c}{{\\sc HFBTHO}} &\\multicolumn{1}{c|}{{\\sc HFBFFT}} &\\multicolumn{1}{c}{{\\sc HFBTHO}} \n &\\multicolumn{1}{c}{{\\sc HFBFFT}} \n &\\multicolumn{1}{c}{{\\sc Sky2D}}\\\\\n\\hline\n$E_\\mathrm{tot}$ & $-$1802.\\textbf{11} &$-$1802.\\textbf{43} & $-$1797.\\textbf{00} &$-$1797.\\textbf{35} &$-$1797.3\\textbf{5}\\\\\n$E_\\mathrm{Coul}$ & 989.\\textbf{61} &956.\\textbf{98} & 957.0\\textbf{2} & 956.9\\textbf{6} & 956.9\\textbf{0}\\\\\n$E_\\mathrm{kin}^\\mathrm{n}$ & 293\\textbf{8.92} &293\\textbf{9.94} & 292\\textbf{2.56} & 2923.4\\textbf{5} &2923.4\\textbf{3} \\\\\n$E_\\mathrm{kin}^\\mathrm{p}$ & 152\\textbf{0.95} &152\\textbf{1.46} & 1525.\\textbf{25} & 1525.\\textbf{52} & 1525.\\textbf{33}\\\\\n$E_\\mathrm{pair}^\\mathrm{n}$ & $-$\\textbf{3.11} &$-$\\textbf{2.30} & $-$\\textbf{3.52} & $-$2.\\textbf{60} & $-$2.\\textbf{48}\\\\\n$E_\\mathrm{pair}^\\mathrm{p}$ & $-$1.\\textbf{54} &$-$1.\\textbf{22} & $-$2.\\textbf{85} &$-$2.\\textbf{19} &$-$2.\\textbf{07}\\\\\n$\\tilde{E}_\\mathrm{kin}^\\mathrm{n}$ & 293\\textbf{5.81} & 293\\textbf{7.64} & 291\\textbf{9.03} & 2920.\\textbf{85} &2920.\\textbf{55}\\\\\n$\\tilde{E}_\\mathrm{kin}^\\mathrm{p}$ & 151\\textbf{9.40} &152\\textbf{0.25} & 152\\textbf{2.39} &1523.\\textbf{33} &1523.\\textbf{25} \\\\\n$\\Delta^\\mathrm{n}$ &\\textit{0.44} &\\textit{0.44} & \\textit{0.47} & \\textit{0.47} & \\textit{0.47} \\\\\n$\\Delta^\\mathrm{p}$ & \\textit{0.33} &\\textit{0.33} & \\textit{0.46} &\\textit{0.46} &\\textit{0.46} \\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{n}}$ & $-$5.7\\textbf{1} \n& $-$5.7\\textbf{0} &$-$5.6\\textbf{6} & $-$5.6\\textbf{5} &$-$5.6\\textbf{7} \\\\\n$\\epsilon_{\\mathrm{F},\\mathrm{p}}$ & $-$5.6\\textbf{9} & $-$5.7\\textbf{0} & $-$5.7\\textbf{6} & $-$5.7\\textbf{7} &$-$5.7\\textbf{9} \\\\\n$r_\\mathrm{rms}$ & 5.93 &5.93 & 6.40 & 6.40 &6.40 \\\\\n$Q_{20}^\\mathrm{n}$ & 178\\textbf{4} & 178\\textbf{2} & 50\\textbf{63} & 507\\textbf{2} &507\\textbf{1}\\\\\n$Q_{20}^\\mathrm{p}$ & 116\\textbf{6} & 116\\textbf{5} & 33\\textbf{36} & 334\\textbf{4} &334\\textbf{3}\\\\\n$V_\\mathrm{pair}^\\mathrm{n}$ & $-$284.57 &$-$360.00 & $-$284.57 &$-$369.00 &$-$384.60\\\\\n$V_\\mathrm{pair}^\\mathrm{p}$ & $-$284.57 &$-$355.00 & $-$284.57 &$-$360.00 &$-$375.80\\\\\ns.p. 
space & 25 shells & (300,\\ 200) & 25 shells & (400,\\ 300) & (400,\\ 300)\\\\\n\\hline\n\\hline \n\\end{tabular}\n\\caption{Results of HFB + SLy4 calculations for $^{240}$Pu ground state and fission isomer using {\\sc HFBTHO}, {\\sc HFBFFT} and {\\sc Sky2D}. \nThe pairing strengths in {\\sc HFBFFT} and {\\sc Sky2D} were adjusted to reproduce the spectral pairing gaps obtained in {\\sc HFBTHO} for the g.s.\\ and f.i.\\ separately.\nThe s.p.\\ space for {\\sc HFBFFT} is defined by means of ($\\Omega_n$, $\\Omega_p$). \nAll energies are in MeV, $r_{\\mathrm{rms}}$ is in fm, and $Q_{20}^\\mathrm{p,n}$ are in fm$^2$.\nThe digits which do not coincide with {\\sc HFBFFT} are marked in bold.}\n\\label{tab:240Pu}\n\\end{table*}\n\n\n\n\\section{Conclusions}\\label{sec:conclusion}\nWe developed a 3D Skyrme HFB solver {\\sc HFBFFT} in the coordinate-space representation using the canonical basis approach.\nThe code is based on the well-optimized {\\sc Sky3D} solver.\nIn {\\sc HFBFFT} we implemented several new elements to facilitate calculations, namely \n(i) the sub-iteration method in configuration space to accelerate the convergence; \n(ii) the soft pairing cutoff and pairing annealing\nto avoid pairing breakdown; and (iii)\na new algorithm to restore the Hermiticity of the HFB matrix.\n\nThe new solver has been benchmarked\nfor several spherical and deformed nuclei against {\\sc HFBTHO}, {\\sc Sky2D}, and (for spherical systems) {\\sc Sky1D}. The representation of the positive-energy continuum differs between HFB codes: In particular, it depends on the code's geometry (spherical, cylindrical, Cartesian), the size of s.p.\\ configuration space (number of HO shells, box size, grid size), and the effective pairing space. Consequently,\neven if the EDFs employed in two codes are identical, the pairing channel is usually described differently. This creates problems when comparing different HFB solvers as the perfect benchmarking is practically impossible \\cite{Pei2008}. In this work, we carried our inter-code comparisons by renormalizing pairing strengths to the spectral pairing gaps and\/or the effective kinetic energy $\\tilde{E}_\\mathrm{kin}$. While both methods give similar results, spectral pairing gaps are less sensitive to the s.p.\\ space assumed.\n\nBy carrying out calculations with different HFB solvers, we were able to assess the ranges of different uncertainties. \nFor the total energy, the typical errors are: several keV due to the Hermiticity breaking; 10-80\\,keV due to different box boundary conditions assumed; 10-140\\,keV due to different quasiparticle continuum discretizations; and several hundred keV due to the basis truncation in HO basis-expansion solvers.\n\nAs a 3D solver, {\\sc HFBFFT} is the tool of choice to study deformed and weakly bound systems. \nTo make this tool versatile, several enhancements are planned.\nMost importantly, we intend to implement pairing regularization \\cite{Bulgac02,Borycki2006,Pei2011} to get rid of the dependence of pairing strengths on the cutoff energy.\nAnother essential development is to be able to compute potential energy surfaces defined by means of constraining one-body operators. 
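\nOne standard way to realize such constraints, quoted here only as an illustration and not as a description of the future implementation, is a quadratic-penalty (or augmented-Lagrangian) term added to the energy,\n\\[\nE'[\\rho] = E[\\rho] + C\\left(\\langle \\hat{Q}_{20}\\rangle - q_0\\right)^2 ,\n\\qquad\nh' = h + 2C\\left(\\langle \\hat{Q}_{20}\\rangle - q_0\\right)\\hat{Q}_{20},\n\\]\nwhere $q_0$ is the requested quadrupole moment, $C$ is a stiffness parameter, and $h'$ is the resulting modification of the single-particle Hamiltonian; an augmented-Lagrangian variant adds a linear term whose multiplier is updated between the self-consistent iterations.\n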
\n This will enable us to use HFBFFT in the calculations of large-amplitude nuclear collective motions such as fission or fusion, for which the solvers based on the basis-expansion approach require the use of excessively large configuration spaces.\nFinally, the performance of {\\sc HFBFFT} needs to be further optimized for modern supercomputer architectures.\n\n\n\n\\section*{Acknowledgments}\nComments from Kyle Godbey are gratefully appreciated.\nComputational resources were provided by the Institute for Cyber-Enabled Research at Michigan State University. \nThis material is based upon work supported by the U.S.\\ Department of Energy, Office of Science, Office of Nuclear Physics under award numbers DE-SC0013365 and DE-SC0018083 (NUCLEI SciDAC-4 collaboration).\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nInfrared behavior of Greens functions (GFs) of Yang-Mills theories has been intensively studied in the last decade. Nonperturbative information about dynamical symmetry breaking and confinement (e.g free non-propagation of confined degrees of freedom) could be encoded in the low $q^2$ behavior of GFs. Mostly gauge-variant GFs have been obtained by solving Schwinger-Dyson equations (SDEs) and simulated on the lattice as well. In the case of pure gluodynamics, the methods based on truncated set of SDEs offer recently two type of solutions. As these solutions are non-unique and both type slightly modeled truncation dependent they should be tested consequently when use to calculate gauge invariant observable. The so called {\\it scaling} solutions have power law momentum behaviour with well defined exponent in the infrared \\cite{LESM2002,ZW2002,FIPA2007} and lead to infrared vanishing gluon propagator and correspondingly infrared enhanced ghost propagator. Mesons and baryons can be described by SDEs themselves, actually Bethe-Salpeter and relativistic Fadeev equation are simply the parts of SDEs. Successful matching between gauge invariant hadronic observables and those gauge variant solutions has not been achieved until now. \n\n\n\n\nBeside of this, there exist the so-called {\\it decoupling} solutions\nas proposed and showed for instance in \\cite{BOU2007,BLYMPQ2008,AGBIPA2008}. Such decoupling solutions typically result in Pinch Technique (PT) SDEs framework \\cite{COR1982,COR1986,COR1989,BIPA2002,BIPA2004,BIPA2008} called there massive solution since the massless pole of gluon propagator disappear and gluon propagator is finite (but nonzero) in the infrared (for the topical review see \\cite{rewiev2009}).\nRecall, PT rearranges the original gauge scheme dependent GFs in a unique way such that unphysical degrees of freedom are eliminated. In the mean time, as the lattice calculations \\cite{lat1,lat2,lat3} in conventional Landau gauge start to support the decoupling solution reaching recently quite infrared momenta down to 75 MeV \\cite{lat4}, it attracts new attention again \\cite{PENIN,cornwall2009}.\n \n\nPrincipal advantage of the PT is its scheme and gauge invariance.\nIt has been proved to all orders of perturbation theory that \nthe PT GFs satisfy Ward identities \nand that extracted effective charges are process independent. Furthermore, GFs do not depend on a gauge fixing parameters. From this automatically follows that when hadron property are calculated within the QCD PT GFs, an unphysical degrees freedom are eliminated form the beginning. 
\n However, approximations must always be made in order to truncate the infinite tower of SDEs. In the original paper the gauge technique was used to construct the gluon vertex, and to this end the Kallen-Lehmann representation (KLR) was assumed for the gluon propagator. \nIn contrast, other studies suggest that it is not the validity but rather the absence of the KLR which can ensure confinement \\cite{alkofer}. Since the pure Yang-Mills part of QCD could be responsible for confinement, the gauge technique construction, which relies intrinsically on the KLR, may actually be a weak point of the PT. We recall, however, that at a later stage \\cite{COR1986,COR1989} the gauge technique was abandoned and the gluon vertex was constructed independently of the KLR \\cite{COR1989}. In this paper we do not regard the KLR as a reasonable criterion; instead, as a meaningful criterion for the gluon propagator behavior we consider chiral symmetry breaking, which must be triggered by this propagator in the usual sense. Clearly, the appearance of chiral symmetry breaking is one of the main \"musts\" of QCD.\n\nIn the next section we briefly rederive the SDE with the parametrization of the solution made in \n\\cite{cornwall2009}; the basic ingredients are reviewed there for completeness. In the third section we discuss this solution and find the restrictions which the resulting propagator must obey. \n\n\\section{PT SDE with gauge invariant vertex}\n\n\nIn the paper \\cite{cornwall2009} the PT propagator based on the WTI-improved vertex \\cite{COR1989} was considered.\nWith a simple parametrization of the solution, the SDE was then solved analytically.\nWe adopt here the form of the solution proposed in \\cite{cornwall2009} and obtain the running coupling for all $q^2$. \n \nThe product of the gauge coupling $g^2$ and the PT gluon propagator $\\hat{d}$\ndefines a renormalization invariant. It is certainly allowed to rewrite this product as a product of two new functions, where one function represents the invariant running charge while the second, say the function $H$, stays for the rest; let us assume that the function $H$ exhibits a massive pole instead of a massless one. Using the same convention as in \\cite{cornwall2009} we can write:\n\\begin{equation} \\label{prd}\ng^2\\hat{d}(q^2)=\\bar{g}^2(q^2)\\hat{H}(q^2).\n\\end{equation}\nClearly, the functions on the rhs.\\ of Eq.\\ (\\ref{prd}) are not uniquely defined \nunless one says more. This problem is simply avoided if one assumes the form of $H$ explicitly,\nsince then only the function $\\bar{g}^2(q^2)$ needs to be identified. \n\n\n\nThe simplest parametrization we can use is the following hard-mass approximation:\n\\begin{equation} \ng^2\\hat{d}(q^2)=\\bar{g}^2(q^2)\\frac{1}{q^2-m^2+i\\varepsilon}.\n\\end{equation}\n\nIn some approaches \\cite{SHISOL2007} it is assumed that it is the running coupling\nwhich satisfies the KLR.\nIn the paper \\cite{cornwall2009} it is conjectured that both functions $\\bar{g}^2$ and $\\hat{H}$ \nsatisfy the KLR, so that the integral representation for the product has the same analyticity domain, but \nthe absorptive part is not positive semidefinite. It should be negative in places, in accordance with the one-loop ultraviolet asymptotics $g^2\\hat{d}(q^2)\\simeq 1\/[q^2\\ln(q^2)]$.\n\n\nRecall that the KLR for the running charge reads \n\\begin{equation} \\label{KLR}\n\\bar{g}^2_{KLR}(q^2)=\\frac{1}{\\pi}\\int d\\omega \\frac{\\Im \\bar{g}^2_{KLR}(\\omega)}{q^2-\\omega+i\\varepsilon}\\, .
\n\\end{equation}\nSuch a running function is holomorphic in the whole complex plane except for the real positive semi-axis of $q^2$, where the branch points are located.\n \n\nThe PT SDE is represented by a non-linear integral equation derived in \\cite{COR1986} and solved for the first time in \\cite{cornwall2009}; it reads\n\\begin{equation} \\label{ptsde}\n\\left[\\bar{g}^2\\hat{d}(q^2)\\right]^{-1}=q^2bZ-\\frac{ib}{\\pi^2}\\int d^{4}k\\,\\hat{H}(k)\\hat{H}(k+q)\\left[q^2+\\frac{m^2}{11}\\right]+C \\,\\, , \n\\end{equation}\nwhere $C$ is a momentum-independent constant and the hard-mass approximation has been employed. The one-loop beta-function coefficient is \n\\begin{equation}\nb=\\frac{11N_c-2N_f}{48\\pi^2}.\n\\end{equation}\n\nAfter the renormalization, which was performed by an on-shell subtraction (note that the renormalization is not multiplicative here; for details see \\cite{cornwall2009}), we get for Eq. (\\ref{ptsde})\n\\begin{equation} \\label{rptsde}\n\\left[\\bar{g}^2\\hat{d}\n(q^2)\\right]^{-1}=b\\left[J(q)\\left(q^2+\\frac{m^2}{11}\\right)-J(m)\\frac{12m^2}{11}\\right]\\, ,\n\\end{equation}\nwhere the function $J$ is renormalized in accordance with the correct one-loop ultraviolet asymptotics and reads\n\\begin{equation}\nJ(q)=-\\int_{4m^2}^{\\infty}d \\omega\\frac{q^2}{\\omega}\\frac{\\rho(\\omega;m)}{q^2-\\omega+i\\varepsilon}+2+2\\ln{(m\/\\Lambda)},\n\\end{equation}\nwhere $\\Lambda$ is the usual QCD scale, a few hundred MeV for $N_f=2$, and $\\rho(\\omega;m)=\\sqrt{1-\\frac{4m^2}{\\omega}}$. \nThe integral is a textbook scalar one-loop integral and can easily be evaluated as follows:\n\\begin{eqnarray}\nJ(q)&=&\\rho\\ln{\\left|\\frac{1+\\rho}{1-\\rho}\\right|}+2\\ln(m\/\\Lambda) \n\\nonumber \\\\\n&-&i\\pi\\rho\\theta(q^2-4m^2)\\, \\quad\\mbox{for} \\, \\, 1-4m^2\/q^2>0\\, ,\n\\nonumber \\\\\nJ(q)&=&-i2\\rho\\,{\\mbox{arctg}}\\left(\\frac{i}{\\rho}\\right)+2\\ln(m\/\\Lambda)\\, \\quad \\mbox{for} \\, \\, 0 < q^2 < 4m^2\\, ,\n\\end{eqnarray}\nwhere $\\rho=\\rho(q^2;m)$. The imaginary part of $J(q)$ above the threshold, $q^2>4m^2$, is given by the very simple phase-space factor $\\pi\\rho=\\pi\\sqrt{1-4m^2\/q^2}$; thus the imaginary and the real parts of the running coupling read\n\\begin{eqnarray} \\label{icka}\nIm \\, b\\bar{g}^2&=&(1-\\gamma)\\frac{\\pi\\rho(q^2)}{(Re J(q)-\\gamma J(m))^2+(\\pi\\rho(q^2))^2}\\, \\, ;\n\\nonumber \\\\\nRe \\, b\\bar{g}^2&=&(1-\\gamma)\\frac{Re J(q)-\\gamma J(m)}{(Re J(q)-\\gamma J(m))^2+(\\pi\\rho(q^2))^2} \\, ,\n\\end{eqnarray}\nwhere we have used the short notation $\\gamma=\\frac{12m^2}{11q^2+m^2}$. For $q^2 < 4m^2$ the imaginary part vanishes.\n\nAs our approximate PT gluon propagator has a single real pole, it does not reflect confinement. Nevertheless, we expect that it is a good approximation to the true (exact) PT solution for the gluon propagator, which could possess a large enhancement in the vicinity of $q^2\\simeq m^2$ instead of the real pole used here. The singularities of the exact PT propagator are very likely situated away from the real axis and need not be simple poles but rather branch points. Furthermore, we argue that for any $m$ the resulting $\\alpha$ does not satisfy the KLR. Although the high-$s=q^2$ behaviour of the absorptive part corresponds exactly to the known analyticized one-loop QCD coupling \\cite{SHIRKOV,MISO1997,MISO1998}, i.e., for \n$s\\gg m^2,\\Lambda^2$ we get\n\\begin{equation}\nIm \\alpha(s) \\rightarrow \\frac{4\\pi\/b}{\\pi^2+\\ln^2(s\/\\Lambda^2)},\n\\end{equation}\nit is the appearance of the nontrivial threshold which crucially changes the real part of the running coupling when $m$ is nonzero.
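\nFor orientation, Eq.~(\\ref{icka}) is straightforward to evaluate numerically once $J(q)$ is known. The short routine below, written by us purely for illustration (it is not part of any published code), evaluates $\\alpha(q^2)=\\bar{g}^2(q^2)\/(4\\pi)$ for spacelike momenta as well as for timelike momenta below and above the threshold, using the analytic continuation of $J$ quoted above; momenta are measured in units of $\\Lambda$ and $N_c=3$, $N_f=2$ are assumed.\n\\begin{verbatim}\nimport numpy as np\n\nb_coef = (11.0*3 - 2.0*2)\/(48.0*np.pi**2)     # one-loop b for N_c=3, N_f=2\n\ndef J(q2, m, lam=1.0):\n    # Re and Im parts of J(q) from the expressions above (q2 != 0)\n    x = 1.0 - 4.0*m*m\/q2\n    if x > 0.0:                       # spacelike q^2 < 0 or timelike q^2 > 4 m^2\n        rho = np.sqrt(x)\n        re  = rho*np.log(abs((1.0 + rho)\/(1.0 - rho))) + 2.0*np.log(m\/lam)\n        im  = -np.pi*rho if q2 > 4.0*m*m else 0.0\n        return re, im\n    rhot = np.sqrt(-x)                # 0 < q^2 < 4 m^2, continuation below threshold\n    return 2.0*rhot*np.arctan(1.0\/rhot) + 2.0*np.log(m\/lam), 0.0\n\ndef alpha(q2, m, lam=1.0):\n    # real and imaginary parts of alpha(q^2) = g^2\/(4 pi) from the text above\n    gam      = 12.0*m*m\/(11.0*q2 + m*m)\n    reJ, imJ = J(q2, m, lam)\n    jm, _    = J(m*m, m, lam)\n    den      = b_coef*((reJ - gam*jm)**2 + imJ**2)\n    return ((1.0 - gam)*(reJ - gam*jm)\/den\/(4.0*np.pi),\n            -(1.0 - gam)*imJ\/den\/(4.0*np.pi))\n\nfor q2 in (0.5, 2.0, 10.0):           # q^2 in units of Lambda^2, m = 0.7 Lambda\n    print(q2, alpha(q2, 0.7))\n\\end{verbatim}\n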
Before showing this explicitly, we discuss the general features of the solution.\n\n\n\nSince we have basically reproduced Cornwall's solution \\cite{cornwall2009} for a general $m$, we only briefly summarize the results here for completeness. By construction, the correct one-loop perturbative behaviour is reproduced for any $m$, i.e., $\\alpha\\simeq 1\/\\ln(q^2\/\\Lambda^2)$ for $q^2\\gg\\Lambda,m$. There exists a certain critical mass, say $m_c^{I}$, below which the coupling $\\alpha$ is more singular, as it possesses a non-simple pole; this critical mass is approximately given by the ratio $m_c^{I}\/\\Lambda=1.2$ here. Decreasing the mass parameter $m$ further, we find a second critical point, say $m_c^{II}$, where this pole crosses the Minkowski light cone $q^2=0$ and becomes the well-known unphysical spacelike Landau pole. This Landau ghost is really unacceptable, and it places a severe bound on the gluon mass; in our case we get approximately $m_c^{II}=0.4\\Lambda$. Examples of the running coupling are shown in Fig. 1 for various $m$. The singularities move to the left as $m$ decreases. For large enough $m$,\n$m>1.2\\Lambda$, only the standard two-particle branch-point singularity located at $4m^2$ remains and the pole is gone. The singularity can appear only below the threshold, where the zero of $\\alpha^{-1}$ is not protected by a nonzero absorptive part. As an exotic solution, there also exists a \"double pole\" (note, it is not a simple double pole) solution for a specific $m$, i.e., the pole in $H$ is enhanced by the non-simple pole of $\\alpha$ at the same point $q=m$. \n\n\\begin{figure}\n\\centerline{\\epsfig{figure=alfa.eps,width=9truecm,height=9truecm,angle=0}}\n\\caption[caption]{Pinch technique coupling $\\alpha$ for various ratios $m\/\\Lambda$. For better identification, the $Im$ part is displayed only for the regular solution with $m\/\\Lambda=1.2$. } \n\\end{figure}\n\n\nThe gluon propagator vanishes faster than $1\/q^2$, thus it cannot satisfy the KLR. It was suggested in the paper \\cite{cornwall2009} that the PT gluon propagator can be written as the product of the coupling and the newly introduced function $H$, both of them satisfying their own KLR. Based on this reasoning, the author excluded those solutions which lead to a singular coupling in the timelike region as well. As a matter of fact, we argue that the pinch technique running coupling does not satisfy the KLR for any $m$, not even for those values for which $\\alpha$ is regular. We check whether the coupling $\\alpha^{KLR}(q^2)=\\bar{g}^2_{KLR}(q^2)\/(4\\pi)$ is analytic in the usual sense, i.e., whether it satisfies the KLR (\\ref{KLR}). This has been done by substituting the absorptive part of $\\alpha^{PT}$ into the rhs.\\ of (\\ref{KLR}); subsequently, the obtained $\\alpha^{KLR}(q^2)$ is compared with the real part of $\\alpha^{PT}$ already known from the solution of the PT SDE. For the easiest inspection we choose spacelike $q^2$, where the dispersion integral is regular and $\\alpha^{KLR}(q^2)$ can be evaluated with arbitrary accuracy. The comparison of the analyticized coupling with $\\alpha^{PT}$ is shown in Fig. 2. For any $m$, the coupling $\\alpha^{PT}$ never agrees with the \"analyticized\" coupling defined by (\\ref{KLR}). For instance, for $m=1.5$, where the best optimized approximation is roughly achieved, one gets a 30 $\\%$ underestimation in the infrared, while we get complete disagreement for large $q^2$. Clearly, the dispersion relation, which would be very useful in other practical calculations, is not fulfilled here.
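\nThe comparison itself is easy to reproduce numerically. The sketch below (again only an illustration written by us) builds the dispersive coupling from the absorptive part of $\\alpha^{PT}$ and evaluates it at spacelike momenta; it reuses the function \\texttt{alpha} from the previous sketch and writes the integral in the equivalent spacelike form $\\alpha^{KLR}(q^2)=\\frac{1}{\\pi}\\int d\\omega\\, {\\rm Im}\\,\\alpha(\\omega)\/(\\omega-q^2)$, which is the sign convention commonly used for analyticized couplings; the upper limit of the integration is truncated at a large but finite value.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef alpha_klr(q2_spacelike, m, lam=1.0, w_max=1.0e6):\n    # dispersion integral over the absorptive part for spacelike q^2 < 0,\n    # integrated in the logarithmic variable t = ln(omega) for stability;\n    # alpha(...) is the routine defined in the previous sketch\n    integrand = lambda t: np.exp(t)*alpha(np.exp(t), m, lam)[1] \\\n                          \/ (np.exp(t) - q2_spacelike)\n    val, _ = quad(integrand, np.log(4.0*m*m), np.log(w_max), limit=200)\n    return val\/np.pi\n\nfor q2 in (-0.1, -1.0, -10.0):     # spacelike momenta in units of Lambda^2\n    print(q2, alpha_klr(q2, 1.5), alpha(q2, 1.5)[0])\n\\end{verbatim}\n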
We argue that there is no known physical reason to expect the KLR for the running coupling.\n Instead, since $\\alpha$ is the unique form factor of the pinch technique gluon propagator, its absence can be regarded as a consequence of confinement.\n\n\\begin{figure}\n\\centerline{\\epsfig{figure=space.eps,width=9truecm,height=9truecm,angle=0}}\n\\hspace{1cm}\n\\caption[caption]{$b\\bar{g^2}$ plotted for spacelike fourmomenta for various ratios $m\/\\Lambda$. It is compared with the analyticized coupling (AC) as described in the text.} \n\\end{figure}\n\n\n\n Let us mention here that the absence of the KLR for a propagator is perhaps a common phenomenon in strongly coupled theories and is rather independent of the model details.\nActually, the absence of the KLR has already been observed in strong coupling QED and in scalar toy models \\cite{SAULI}, as well as for the quark propagator in Landau gauge QCD \\cite{SAUBI}.\nRoughly speaking, to derive a dispersion relation for the self-energy, i.e., for the inverse of the propagator, and simultaneously expect a spectral representation for the propagator itself is too strong an assumption in a strongly coupled theory like QCD.\n \n \n As we discussed, the mass $m$ is severely constrained from below by the requirement of the absence of the spacelike Landau pole. As an indication of what $m$ should be, we do not choose the criterion of the KLR, which appears to be a rather obscure requirement in a confining theory; instead we require that the pinch technique running coupling must be large enough to trigger chiral symmetry breaking correctly in QCD. In real QCD, dynamical chiral symmetry breaking is the phenomenon responsible for most of the nucleon mass (i.e., for the dynamical mass generation of the u,d quarks), while it simultaneously explains the lightness of the pions. To describe all these observables in a self-contained way, chiral symmetry breaking must be correctly incorporated into the formalism. Such a requirement very naturally gives an upper bound on the pinch technique gluon mass, since for a gluon that is too heavy the pinch technique running coupling is too weak and does not trigger chiral symmetry breaking. \n\nIn principle, employing the formalism of the PT SDEs for gluons and quarks, solved simultaneously with the Bethe-Salpeter equations, one should be able to fit the mass in the PT gluon propagator to meson spectra. Unfortunately, the recent calculations are still far from this stage even in the more conventional gauge-fixed schemes, and the form of the propagators entering such calculations is still dubious. First, we describe the obstacles arising in such a treatment, and then we suggest a simplified way to obtain a reliable estimate of $m$, which is solely based on the solution of the quark gap equation in the ladder approximation. \n\nIn lattice QCD, the static quark-antiquark potential can be computed with the Wilson loop technique. This gives a confining linear potential $V_L$ between infinitely heavy quarks. For a correct description of excited mesons, this should in principle be included covariantly in the quark-antiquark kernel of the BSE. On the other hand, various hadronic observables have been calculated in the framework of Schwinger-Dyson equations during the last two decades, including meson spectra and decays \\cite{meson1997,meson2007,meson2008} and various form factors \\cite{ff2000,ff2008}; more complicated baryonic properties have been studied in this framework as well.
Most of them use the ladder approximation of the quark SDE (and of the meson BSE) in Landau gauge, while first steps beyond the ladder approximation have been considered only quite recently. Recall that the ladder approximation means that only a \"very effective\" one-gluon exchange is considered, while more gluon exchanges and topologically more complicated\ndiagrams must contribute in order to obtain a linear potential in the non-relativistic limit. \nSuch higher-order skeletons quite naturally generate an important scalar part of the quark-antiquark potential \\cite{Bicudo:2003ji}. \nWe conjecture that if the quark-antiquark BSE kernel analogue of $V_L$ is not included, then this is the main source of discrepancy when one compares the GFs used in ladder meson calculations with the GFs actually obtained from the SDEs. It is more than obvious that the effective gluon propagator used in a typical ladder-approximated BSE more or less models the neglected higher-order skeletons. Without a reasonable matching between the gluon propagator obtained from the SDEs and the one entering the kernels of the meson BSE, the infrared behaviour of the gluon propagator is not obvious.\n \n\nOn the other hand, our knowledge of Wilson-loop lattice results combined with the knowledge of typical quark SDE solutions offers an economical way, which we argue is efficient enough, to estimate the PT gluon propagator in the infrared. For this purpose, let us consider the solutions of the quark SDE when one-gluon exchange alone and one-gluon exchange plus the infrared-enhanced effective interaction $V_L$ are used. The difference between these two solutions has been studied in the paper \\cite{BIMACACAOL2009}, and it amounts to an approximate doubling of the quark dynamical mass in the deep-infrared Euclidean region of $Q^2$ when $V_L$ is taken into account. Using these arguments, the main issue is that the infrared quark mass $M(0)$ should already be as large as $M(0)\\simeq \\Lambda=250\\, MeV$ when one uses the ladder quark gap equation alone, but now with the PT running coupling implemented in it. The additional, here unconsidered, term $V_L$ could then be responsible for an additional growth of the quark mass of the same order. This automatically puts a limit on the running coupling: its value must be significantly larger than the critical coupling, below which there is no symmetry breaking at all. As the dynamical quark mass function obtained in the ladder approximation is quite universal, we did not need to perform a detailed numerical analysis of the quark gap equation with the PT running coupling, and we can estimate the solution from the infrared value of the coupling, which must be $\\alpha(0) \\simeq 2.0$ or larger. To get such a value, we can see that we must use the solution with $m\/\\Lambda=0.4-0.7$. Since the required interval lies between $m^I_c$ and $m^{II}_c$, we always have a running coupling singularity in the timelike regime. In this way the pinch technique again offers a possible scenario for Infrared Slavery, albeit with the coupling enhancement in the timelike region arising simply due to the massiveness of the gluon. \n\n\n\n\n\n\n\\section{Conclusion}\n\nUsing the recently obtained pinch technique gluon propagator \\cite{cornwall2009}, the limits on the effective gluon mass have been reconsidered. It is confirmed that, in order to avoid an unphysically singular running coupling, the gluon mass must be bounded from below.
We argue that the requirement of KLR for the running coupling is not a good guide for this purpose and we assume that running coupling can be enhanced or even singular in the timelike region of the momenta.\nIt is suggested that the upper boundary on the gluon mass $m$ stems from chiral symmetry breaking when quarks are considered as well. As the infrared enhancement of the interaction is necessary to get correct triggering of symmetry breaking and since the running coupling crucially depends on mass $m$, the upper boundary stems from the minimal required pinch technique running coupling.\n It gives the acceptable region of the gluon mass $m\\simeq 0.4-0.7 \\Lambda $, or so. This is in reasonable agreement with the recent lattice results \nand simultaneously it does not contradict the existence of chiral symmetry breaking in QCD.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nNeural network architectures that achieve high accuracy at specific tasks (e.g., image classification~\\cite{vgg,resnet,densenet}) have been designed through many trials and errors by humans, which requires a high level of expertise and have been a burden to practitioners and researchers in the machine learning community.\nThus, AutoML has attracted the attention from many of them, which aims at automatically choosing a better design for each part in the machine learning pipeline, e.g., data augmentation~\\cite{aa,fast_aa,pba,adv_aug}, network architecture~\\cite{darts,nas_net}, loss functions~\\cite{am_lfs}, or learning parameters~\\cite{bayse,o_lr}.\nIn particular, several methods proposed in the field of Neural Architecture Search (NAS) started to achieve comparable accuracy to manually designed networks in a few tasks such as image classification~\\cite{nas_net,proxyless_nas,pg_darts}.\nHowever, the early methods of NAS typically have some practical problems mainly due to the large requirements for computing resources such as memories and GPUs~\\cite{rl_nas}.\nThen, recent studies tried to find better architecture with more efficient approaches~\\cite{darts,enas,amoeba_net,ds_nas},\nwhich lead the optimization goal of NAS to become varied.\n\\cite{fb_net} proposed a method to search network architectures that can achieve high accuracy under the limitation of computing resources used for inference,\nand \\cite{amc} proposed to consider memory and power efficiency at inference time by model compression.\nMost of the methods mainly focused on the network architecture as their optimization target because it has large impact on the entire performance,\nbut as the optimization goal becomes diversified, the target of AutoML also becomes varied over different building blocks of the training,\ne.g., data augmentation and learning parameters.\n\n\\label{intro}\n\\begin{figure}[!tbp]\n \\centering\n \\includegraphics[width=8cm,height=5cm]{overview.pdf}\n \\caption{Comparison of search spaces. 
Our method jointly explores data augmentation policies and network architectures by combining differentiable methods for each part.\n \\label{search_space}\n\\end{figure}\n\nIn terms of search spaces, even a single part in the training pipeline has a large one.\nFor example, an efficient NAS method, ENAS~\\cite{enas}, still has a large search space over $1.3 \\times 10^{11}$ possible networks.\nAuto Augment~\\cite{aa} is a method to automatically choose the data augmentation policies during training, and the search space has roughly $2.9 \\times 10^{32}$ possibilities.\nIt means that searching over all possibilities of the combination of these two parts will have about $3.8 \\times 10^{43}$ possibilities, which brings significant difficulty to automatic exploration.\nIn addition, searching for network architectures and data augmentation policies can have different objectives.\nThe former explores mainly to minimize a loss function, while the latter also tries to increase the variety of training data.\nThese difficulties have prevented previous research from searching over those two parts jointly, so that most of the studies have focused on automatic exploration of one part at a time.\nAlthough there are a few studies that tried to explore learning parameters and network architectures jointly~\\cite{auto_has,toward_auto}, few research have attempted joint optimization of data augmentation policies and network architectures to the best of our knowledge.\n\nIn this paper, we propose a joint optimization method for both data augmentation policies and network architectures.\nAs stated above, these two parts can have large search spaces in total and different objectives, which implies that combining existing methods for each part straightforwardly would be intractable.\nAdditionally, during the progress of architecture search, networks in different phases of training can desire different data augmentation policies for better generalization ability.\nMotivated by these intuitions, we propose an end-to-end differentiable approach to optimize both parts in the training pipeline simultaneously.\nFig.~\\ref{search_space} shows the difference of our search space from previous successful methods for each part.\nSpecifically, we jointly optimize the differentiable approaches for augmentation policy search~\\cite{faster_aa} and architecture search~\\cite{darts}.\nWe firstly apply differentiable operations to the input data, and then use the transformation outputs as the inputs for the differentiable NAS method.\nIt enables to optimize the augmentation policies with the gradients come from the upstream NAS method because the entire pipeline is fully differentiable, so that we can train both parts simultaneously in the end-to-end manner.\nWe consider a combination of existing methods each of which is performed independently on either of augmentation policy search or architecture search as the baseline, and compare the performance with ours.\nThe experimental results show that our method achieves competitive or superior performance in common benchmarks of image classification.\n\n\\section{Preliminaries}\n\\label{preliminary}\nAs the differentiable methods for augmentation policy search and architecture search, we adopt Faster Auto Augmentation~\\cite{faster_aa} and DARTS~\\cite{darts}.\nWe briefly summarize them in this section.\n\n\\subsection{Differentiable Data Augmentation}\nData augmentation is a series of transformation applied on the input data.\nTypically, we have to choose which operations should 
be applied with what magnitudes.\nSeveral methods for automatic search of probability distribution on the selection of operators and their magnitudes have been proposed~\\cite{aa,fast_aa,faster_aa}.\nThe Faster-AA considers that a policy consists of $L$ sub-policies each of which has $K$ consecutive operations.\nEach operation out of $\\#\\mathcal{O}$ operations has a probability $p_O \\in [0, 1]$ which represents how likely the operation is adopted in a sub-policy and a magnitude $\\mu_O \\in [0, 1]$ which controls the transformation.\nTherefore, the search space is $(\\#\\mathcal{O} \\times [0, 1] \\times [0, 1])^{KL}$ in total, where $\\mathcal{O}$ is a set of possible operations.\nApplying an operation $O$ to an input data $X$ is formulated as:\n\\begin{eqnarray}\nX \\rightarrow \\left\\{\n\\begin{array}{ll}\n O(X; \\mu_O)& (\\textrm{with the probability }p_O) \\\\\n X & (\\textrm{with the probability of }1 - p_O), \\label{diff_op}\n\\end{array}\n\\right.\n\\end{eqnarray}\nand note that the Gumbel trick~\\cite{gumbel} is used to make the probability differentiable.\nA sub-policy is a series of $K$ operations, and during training, the output of $k$-th operation $X'$ from an input $X$ is calculated as a weighted sum over all possible $\\#\\mathcal{O}$ operations as follows:\n\\begin{eqnarray}\nX' = \\sum_{n=1}^{\\#\\mathcal{O}} [\\sigma_\\eta(\\boldsymbol{z}_k)]_n O_k^{(n)}(X; \\mu_k^{(n)}, p_k^{(n)}), \\label{op1}\\\\\n{\\bf s.t.} \\sum_{n=1}^{\\#\\mathcal{O}} [\\sigma_\\eta(\\boldsymbol{z}_k)]_n = 1, \\label{op2}\n\\end{eqnarray}\nwhere $\\sigma_\\eta$ is a softmax function with\na temperature parameter $\\eta > 0$,\nand $\\boldsymbol{z}_k \\in \\mathbb{R}^{\\# \\mathcal{O}}$ denotes the learnable parameter for the distribution of operator selection.\nDuring inference, the $k$-th operation is sampled from the categorical distribution ${\\rm Cat}(\\sigma_\\eta(\\boldsymbol{z}_k))$, so that we obtain transformed data $X'$ by Eq.~(\\ref{diff_op}).\n\n\\subsection{Differentiable NAS}\nDARTS~\\cite{darts} is a differentiable neural architecture search method which focuses on searching the inside structures of normal cells and reduction cells that are finally stacked up to build a deep network. 
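\nReturning briefly to the augmentation side before describing DARTS in detail, the mixed operation of Eqs.~(\\ref{op1})-(\\ref{op2}) can be made concrete with a short sketch. The snippet below is our own illustration and not code from the Faster Auto Augmentation repository; the candidate operations are assumed to be differentiable image transforms taking a magnitude argument, and the Gumbel-relaxed application probabilities of Eq.~(\\ref{diff_op}) are omitted for brevity.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\nclass MixedOperation(torch.nn.Module):\n    # weighted sum over candidate operations with a temperature softmax\n    def __init__(self, ops, temperature=0.05):\n        super().__init__()\n        self.ops = ops                 # list of callables op(x, magnitude)\n        self.eta = temperature\n        self.z   = torch.nn.Parameter(torch.zeros(len(ops)))      # selection logits\n        self.mu  = torch.nn.Parameter(0.5*torch.ones(len(ops)))   # magnitudes\n\n    def forward(self, x):\n        w = F.softmax(self.z \/ self.eta, dim=0)                   # sigma_eta(z)\n        return sum(w[i]*op(x, self.mu[i].clamp(0.0, 1.0))\n                   for i, op in enumerate(self.ops))\n\\end{verbatim}\nA sub-policy in this picture is simply a chain of $K$ such modules applied in sequence. With this in mind, we now return to the architecture-search part.\n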
\nEach cell is represented as a directed acyclic graph (DAG) consisting of $N$ nodes which represent intermediate features (e.g., feature maps).\nA cell takes two input nodes and one output node, and\nan edge $f^{(i, j)}$ between two nodes $i, j$ represents an operation such as convolution or pooling.\nA node $j$ has the connections with all the previous nodes $i < j$ in topological ordering of the DAG, so that the search space of DARTS is roughly\n\\begin{eqnarray}\n|\\mathcal{F}|^2 \\prod_{k=1}^{N-3}\\frac{k(k+1)}{2},\n\\end{eqnarray}\nwhere $\\mathcal{F}$ is a set of candidate operations.\nTo make the search space continuous, DARTS relaxes the categorical choice of an operation to a softmax over possible operations:\n\\begin{eqnarray}\n\\bar{f}^{(i,j)} = \\sum_{f \\in \\mathcal{F}} \\frac{\\exp(\\alpha_f^{(i,j)})}{\\sum_{f'} \\exp(\\alpha_{f'}^{(i,j)})}f(x).\n\\end{eqnarray}\nAfter the training, a discrete architecture is determined by replacing each mixed operation $\\bar{f}^{(i, j)}$ with the most likely operation $f^{(i, j)} = \\mathop{\\rm arg~max}\\limits_{f \\in \\mathcal{F}} \\alpha_f^{(i, j)}$.\n\nDARTS performs a bilevel optimization on the architecture parameter $\\alpha$ and the model weights $w$ using two sets of training and validation data.\n\\begin{eqnarray}\n \\underset{\\alpha}{\\rm min}\\ \\mathcal{L}_{val} (w^*(\\alpha), \\alpha), \\label{darts1} \\\\\n {\\bf s.t.}\\ w^*(\\alpha) = \\underset{w}{\\rm arg~min}~\\mathcal{L}_{train}(w, \\alpha), \\label{darts2}\n\\end{eqnarray}\nwhere $\\mathcal{L}_{val}$ and $\\mathcal{L}_{train}$ are loss functions calculated with validation data and train data, respectively.\n$w^*(\\alpha)$ denotes the optimal model weights for an architecture $\\alpha$.\nEq. (\\ref{darts1}) has an inner optimization for $w^*({\\alpha})$, so that evaluating the gradient of $\\mathcal{L}_{val}$ w.r.t. $\\alpha$ can be prohibitive.\nTherefore, DARTS approximates $w^*(\\alpha)$ by adapting $w$ only after a single training step on the training data as follows:\n\\begin{eqnarray}\n w^*(\\alpha) = w - \\xi\\nabla_w \\mathcal{L}_{train}(w, \\alpha) \\label{darts3}.\n\\end{eqnarray}\nThen, during a single step in the iterative optimization procedure of DARTS, it solves Eq. (\\ref{darts1}) and Eq. (\\ref{darts2}) alternately.\n\n\\section{Method}\n\\label{method}\nWe sequentially combine Faster-AA and DARTS, then optimize both in the end-to-end manner.\nWe solve another bilevel optimization problem for our entire search space.\nSpecifically, augmentation policies and network architectures are both optimized by minimizing a loss function on the validation dataset, while the network weights are optimized using the training dataset.\nThis bilevel optimization is formulated as:\n\\begin{eqnarray}\n\\underset{\\alpha,\\boldsymbol{z}_k,p_O,\\mu_O}{\\rm min}\\ \\mathcal{L}_{val}(w^*(\\alpha),\\alpha,\\boldsymbol{z}_k,p_O,\\mu_O), \\label{eq1}\\\\\n{\\bf s.t.}\\ w^*(\\alpha) = \\underset{w}{\\rm arg~min}~\\mathcal{L}_{train}(w,\\alpha,\\boldsymbol{z}_k,p_O,\\mu_O), \\label{eq2}\n\\end{eqnarray}\nand we adopt the first-order approximation for the architecture gradient for Eq. (\\ref{darts1}) same as in DARTS to speed-up the optimization, i.e., we set $\\xi$ to $0$ in Eq. (\\ref{darts3}).\nThen, we iteratively solve Eq. (\\ref{eq1}) and Eq. (\\ref{eq2}).\nAs a loss function, we use the cross-entropy loss for both $\\mathcal{L}_{train}$ and $\\mathcal{L}_{val}$.\n\nTo solve Eq. 
(\\ref{eq1}), we first apply a series of differentiable data augmentation operations $O^{(n)} (n=1,...,\\#\\mathcal{O})$, then give the transformed data to the network to solve Eq. (\\ref{eq2}).\nWe outline this algorithm in Alg. \\ref{alg1}.\n\n\\begin{algorithm}\n \\caption{Joint optimization}\\label{alg1}\n\\begin{algorithmic}[1]\n \\WHILE{\\textit{not converged}}\n \\STATE \/\/ solve equation (\\ref{eq1})\n \\STATE sample $X \\sim D_{val}$ \\\\\n \\STATE apply equation (\\ref{op1}) to $X$\n \\STATE calculate $L_{val}(w, \\alpha, \\boldsymbol{z}_k, p_O, \\mu_O)$ with $X'$ \\\\\n \\STATE calculate gradients of $L_{val}$ w.r.t. $\\boldsymbol{z}_k, p_O, \\mu_O, \\alpha$ \\\\\n \\STATE update Faster-AA parameters ($z_k, p_O, \\mu_O$)\n \\STATE update DARTS parameters $\\alpha$ by gradient descent \\\\\n \\STATE \/\/ solve equation (\\ref{eq2})\n \\STATE sample $X \\sim D_{train}$ \\\\\n \\STATE apply equation (\\ref{op1}) to $X$ \\\\\n \\STATE calculate $L_{train}(w, \\alpha, \\boldsymbol{z}_k, p_O, \\mu_O)$ with $X'$ \\\\\n \\STATE calculate gradients of $L_{train}$ w.r.t. $w$ \\\\\n \\STATE update the network weights $w$ by gradient descent \\\\\n \\ENDWHILE\n\\STATE Derive the final policy and architecture\n\\end{algorithmic}\n\\end{algorithm}\n\nThe total computational resources we require for the joint optimization additionally to what DARTS requires are relatively small.\nSpecifically, the additional space complexity is only $KL(\\#\\mathcal{O} \\times 3)$, while the search space of Faster-AA is $(\\#\\mathcal{O} \\times [0, 1] \\times [0, 1])^{KL}$ which is large.\nIn addition to it, our entire system is end-to-end differentiable, and augmentation policy search and network architecture search are jointly performed to minimize the same loss function, so that the gradients for updating policy and architecture parameters are obtained via a single backpropagation.\nThis advantage of our end-to-end differentiable approach enables to conduct joint optimization for policy search and architecture search with few additional space and time complexity compared to the case if we apply Faster-AA and DARTS independently and combine the results.\n\nThe original Faster-AA uses a critic network to consider a classification loss and the WGAN-GP loss~\\cite{wgan_gp} that encourage the distribution of transformed data to be as close to the original data distribution as possible.\nFor the same purpose, we can also exploit the network under searching with DARTS as a critic network to encourage transformed data to remain in the same classes before transformations.\nHowever, DARTS only uses a cross-entropy as the loss for architecture search, while the critic network in Faster-AA also considers WGAN-GP.\nIn this paper, we adopt a single unified loss function for the both of policy search and architecture search and did not introduce any critic network for simplicity of the entire framework and computational efficiency.\n\n\\section{Experiments}\n\\label{experiments}\nWe compare our joint optimization model with the original DARTS and the baseline which combines the results of Faster-AA and DARTS that are optimized independently from each other.\nIn the baseline, the learned policy by Faster-AA is transferred to be used for the training of DARTS.\nWe conducted the comparison on three datasets, CIFAR-10, CIFAR-100, and SVHN.\n\n\\label{results}\n\\begin{figure*}[htbp]\n\\begin{minipage}{0.3\\hsize}\n \\centering\n \\includegraphics[width=5cm,height=4cm]{transition_c10.png}\n \\subcaption{On 
CIFAR-10}\n\\end{minipage}\n\\begin{minipage}{0.3\\hsize}\n \\centering\n \\includegraphics[width=5cm,height=4cm]{transition_c100.png}\n \\subcaption{On CIFAR-100}\n\\end{minipage}\n\\begin{minipage}{0.3\\hsize}\n \\centering\n \\includegraphics[width=6.5cm,height=4cm]{transition_svhn.png}\n \\subcaption{On SVHN}\n \\label{svhn_policy}\n\\end{minipage}\n \\caption{\n Probability distribution of the augmentation policy selection over time.\n }\n \\label{res1}\n\\end{figure*}\n\nWe first carefully re-implemented Faster-AA\\footnote{source code from: \\url{https:\/\/github.com\/moskomule\/dda\/tree\/fasteraa\/faster_autoaugment}} and DARTS\\footnote{source code from: \\url{https:\/\/github.com\/quark0\/darts}} based on their authors' implementations.\nWe then confirmed that our implementation successfully reproduced the scores reported in those papers.\nNote that we adopt the same search space as the original paper for DARTS, while we exclude the cutout operation from the search space of the Faster-AA part in our framework, as the authors do in their implementation.\nAccording to a comment left in their code, the cutout operation makes the optimization unstable.\nTherefore, the target operations in the augmentation policy search are \\textit{shear X, shear Y, translate X, translate Y, rotate, auto contrast, horizontal flip, invert, equalize, solarize, posterize, contrast, color, brightness, sharpness, and sample pairing}.\nWe set the number of sub-policies $L = 10$ and the number of operations in a sub-policy $K = 2$, which are the same as in the original settings.\nAdditionally, as preprocessing in the baseline, random cropping with zero-padding and random horizontal flipping are always applied.\nAfter this preprocessing, we apply the transformations under the policy search with Faster-AA.\nThe cutout operation is always applied after the Faster-AA part in the baseline, although it is not included in the search space.\n\nFor the baseline, we first obtain a Faster-AA policy independently of DARTS by following the same experimental settings used in the original paper, i.e., the baseline uses WideResnet40-2~\\cite{wideresnet} as the architecture to search policies for 20 epochs.\nNext, the network architecture for the baseline is explored by DARTS in the same manner as in the original paper.\nThe policy and architecture obtained by the existing methods are then combined, and we train the model weights for 600 epochs.\nAs for the architecture-search epochs, we use two different totals (25 and 50) for both the baseline and the proposed framework, because we found that the training of DARTS is unstable on CIFAR-100 and SVHN, as mentioned in~\\cite{darts_stab}.\n\\textit{We repeated each experiment three times and report average scores with standard deviations.}\n\n\\subsection{Discussion}\n\\begin{table}[htbp]\n \\begin{center}\n \\caption{\n Comparison in classification accuracy.\n (1) ``DARTS\" only searches the architecture.\n (2) ``Baseline\" separately searches policies and architectures.\n (3) ``Ours\" is the proposed joint search method.\n Our method achieves competitive or superior results compared with the baseline.}\n \\label{tab1}\n \\begin{tabular}{c|c|c|c} \\hline\n method & CIFAR-10 & CIFAR-100 & SVHN \\\\ \\hline\n \\multicolumn{4}{c}{searching epoch is 50} \\\\ \\hline \n \n DARTS & 97.33$\\pm$0.12 & 76.21$\\pm$3.76 & 97.85$\\pm$0.08 \\\\\n Baseline & \\textbf{97.55$\\pm$0.30} & 77.96$\\pm$3.12 & \\textbf{98.02$\\pm$0.03} \\\\\n Ours & 97.40$\\pm$0.03 & 
\\textbf{79.02$\\pm$2.14} & 97.92$\\pm$0.12 \\\\ \\hline\n \\multicolumn{4}{c}{searching epoch is 25} \\\\ \\hline\n \n DARTS & 97.14$\\pm$0.04 & 82.91$\\pm$0.30 & 97.94$\\pm$0.06 \\\\\n Baseline & 97.29$\\pm$0.03 & \\textbf{84.17$\\pm$0.29} & \\textbf{98.03$\\pm$0.05} \\\\\n Ours & \\textbf{97.46$\\pm$0.09} & 83.81$\\pm$0.49 & 97.82$\\pm$0.08 \\\\ \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\nWe present the experimental results in Table \\ref{tab1}.\nOur proposed method achieves competitive or superior results compared to the baseline in both searching epochs.\nAs stated above, the baseline uses WideResnet40-2 during policy search, and it can be different from the architecture found by DARTS for the final stage to learn the model weights for 600 epochs.\nAlthough the policies found with WideResnet40-2 show high accuracy at several results, which might be derived from the suitability of WideResnet40-2 for augmentation policy search.\nIf that is the case, the model selection largely affects on the policy search, so that the human expertise is still required.\nOn the other hand, our proposed method does not require humans to select the network architecture for policy search and achieves competitive or even superior results to Faster-AA.\nWe believe that the joint optimization approach has more potential to obtain high performance and reduces human expertise. \n\nFig. \\ref{res1} shows how the categorical distribution to choose augmentation policies changes over the time.\nWe found that the augmentation policies obtained with our method choose color enhancement operations such as \\textit{color} or \\textit{auto contrast} more often than geometric operations such as \\textit{rotate} from the transition.\nThis trend of the resulting policies is also reported in other papers of automatic augmentation policy search~\\cite{aa}.\nIn Figure \\ref{svhn_policy}, the policy dramatically changes after 45 epochs, and we found that the network architecture under searching also largely changed at the same timing, which may imply that the optimal policy is different depending on the network architecture.\n\n\\section{Conclusion}\n\\label{conclusion}\nIn this paper, we proposed a method to jointly optimize data augmentation policy and network architecture.\nThe proposed method combines differentiable methods for policy search and architecture search to jointly optimize them in the end-to-end manner.\nThe experimental results showed that our method achieves competitive or superior performance to independently searched results in common benchmarks of image classification.\nOur joint optimization approach may be able to include the other parts such as learning rates.\nHence, we will attempt to bring more automation to the design of training pipeline with this end-to-end differentiable approach for the future work.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Embedding}\n \n \n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Future Directions}\nAlthough many efforts have been made to design deep recommender systems from manual to automatic, there remain a few potential opportunities for future research directions. 
\n\n\n\\noindent \\textbf{GNNs-based Recommendations.}\nGraph neural networks (GNNs) have been extensively studied recently due to their impressive capability in representation learning on graph-structured data~\\cite{ma2021deep,xu2021automorphic,ding2021diffmg}. Recent works have been proposed to advance deep recommender systems based on GNN techniques~\\cite{fan2019graph,fan2020graph,fan2022graph}. Thus, exploring the combination of AutoML and GNNs provides great opportunities to further boost the performance of GNNs-based recommendation methods. \nA few works in the research community have studied the combination of AutoML and GNNs. \nFor instance, GraphNAS~\\cite{graphnas} and Auto-GNN~\\cite{autognn} made the very first attempt to enable the automatic design of the best graph neural architecture via reinforcement learning techniques. The main drawback of these two methods is that they are computationally expensive because of the large search space.\nTo tackle this challenge, SANE~\\cite{zhao2021search} proposes a differentiable architecture search algorithm for GNNs, where an advanced one-shot NAS paradigm is adopted to accelerate the search process.\nAs the very first work to apply automatic NAS techniques to GNNs-based deep recommender systems, AutoGSR~\\cite{chen2022autogsr} attempts to search for the optimal GNN architecture for GNNs-based session recommendation through a differentiable architecture search algorithm.\n\n\\noindent \\textbf{Multi-Modality Recommendations.}\nIn addition to historical interactions between users and items, items' auxiliary knowledge from various modalities (\\textit{e.g.}, visual, acoustic, and textual) has been incorporated to learn users' preferences for providing high-quality recommendation services~\\cite{wei2019mmgcn}. \nHence, it is desirable to advance deep multimodal learning via automated machine learning techniques~\\cite{yin2021bmnas,perez2019mfas}, so as to design an optimal algorithm for the targeted task. \nFor example, as the very first work on automated neural architecture search for deep multimodal learning, Multimodal Fusion Architecture Search (MFAS) aims to find accurate fusion architectures for the multi-modal classification problem~\\cite{perez2019mfas}.\nA Bilevel Multimodal Neural Architecture Search framework (BM-NAS) is proposed to learn the architectures of multimodal fusion models via a bilevel searching scheme~\\cite{yin2021bmnas}. \n\n\\noindent \\textbf{Other Recommendation Tasks.}\nIn addition to AutoML for GNNs-based and multimodal recommendations, various important recommendation tasks have rarely been explored with automated machine learning techniques, such as POI recommendations~\\cite{zhao2020go}, sequential recommendations~\\cite{kang2018self}, social recommendations~\\cite{fan2019graph}, \\textit{etc}. \nA few works have applied automated neural architecture search techniques to spatio-temporal prediction, such as AutoST~\\cite{10.1145\/3394486.3403122} and AutoSTG~\\cite{10.1145\/3442381.3449816}, which can help design optimal neural architectures for POI recommendations. \nBesides, despite the success of various deep social recommendations, heavy manual work and domain knowledge are required to inherently combine user-item interactions and social relations~\\cite{fan2020graph}, which can be addressed by AutoML techniques.\n\n\\section{Conclusion}\nDeep recommender systems have attracted increasing attention in both academia and industry. 
\nBesides, automated machine learning (AutoML), as one of the most promising AI techniques, has shown its great capabilities to advance deep architecture designs from manual to automatic. \nIn this survey, we have conducted a comprehensive overview of an emerging research field: automated machine learning for deep recommender systems. \nSpecifically, we discuss the state-of-the-art AutoML approaches that automate the feature selection, feature embeddings, feature interactions, and system design in DRS.\nWe expect this survey can facilitate future research directions in the academic and industry community.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{scriptsize}\n\\bibliographystyle{named}\n\n\\section{Introduction}\n\\label{sec:intro}\nRecent years have witnessed the explosive growth of online service providers~\\cite{ricci2011introduction}, including a range of scenarios like movies, music, news, short videos, e-commerces, \\textit{etc}. This leads to the increasingly serious information overload issue, overwhelming web users.\nRecommender systems are effective mechanisms that mitigate the above issue by intelligently retrieving and suggesting personalized items, \\textit{e.g.}, contents and products, to users in their information-seeking endeavors, so as to match their interests and requirements better.\nWith the development and prevalence of deep learning, deep recommender systems (DRS) have piqued interests from both academia and industrial communities\n~\\cite{zhang2019deep,nguyen2017personalized}, due to their superior capacity of learning feature representations and modeling non-linear interactions between users and items~\\cite{zhang2019deep}.\n\nTo construct DRS architectures, the most common practice is to design and tune the different components in a hand-crafted fashion. However, manual development is fraught with three inherent challenges. First, this requires extensive expertise in deep learning and recommender systems. Second, substantial engineering labor and time cost are required to design task-specific components for various recommendation scenarios. Third, human bias and error can result in suboptimal DRS components, further reducing recommendation performance.\n\nRecently, powered by the advances of both theories and technologies in automated machine learning (AutoML), tremendous interests are emerging for automating the components of DRS. \nBy involving AutoML for the deep recommender systems, different models can be automatically designed according to various data, thus improving the prediction performance and enhancing generalization.\nBesides, it is helpful to eliminate the negative influence for DRS from human bias and error, as well as reduce artificial and temporal costs significantly.\nAs shown in Tabel~\\ref{tab:comp}, we summarize these researches from the perspective of search space and search strategy, which are two critical factors for AutoML. 
\nTypically, these works can be divided into the following categories according to the components in DRS:\n\n\n\\begin{table*}[]\n\\centering\n\\vskip -0.2in\n\\caption{\\small{Automated Machine Learning for Deep Recommender Systems.}}\n\\vspace{-0.3cm}\n\\label{tab:comp}\n\\renewcommand\\arraystretch{1.4} \n\\resizebox{1.0\\textwidth}{!}{\n\\begin{threeparttable}\n\\begin{tabular}{c|c|c|c|c|c}\n\\toprule \\toprule\n\\multirow{2}{*}{\\large\\begin{tabular}[c]{@{}c@{}}\\textbf{DRS} \\\\ \\textbf{Component}\\end{tabular}}&\\multirow{2}{*}{\\textbf{Search Space}}&\\multicolumn{4}{c}{\\textbf{Search Strategy}} \\\\ \\cline{3-6} \n & & \\textbf{Gradient} & \\textbf{Reinforcement Learning} & \\textbf{Evolutionary} & \\textbf{Others\\tnote{*}} \\\\ \\hline\n\\multirow{2}{*}{\\large\\begin{tabular}[c]{@{}c@{}}\\textbf{\\textit{Feature}} \\\\ \\textbf{\\textit{Selection}}\\end{tabular}} & \\textbf{Raw Feature} & AutoField~\\shortcite{wang2022autofield} & FSTD~\\shortcite{fard2013using}, MARLFS~\\shortcite{liu2021automated} & - & - \\\\ \\cline{2-2} \n & \\textbf{Generated Feature} & GLIDER~\\shortcite{glider} & - & - & AutoCross~\\shortcite{autocross} \\\\ \\hline\n\\multirow{2}{*}{\\large\\begin{tabular}[c]{@{}c@{}}\\textbf{\\textit{Feature}} \\\\ \\textbf{\\textit{Embedding}}\\end{tabular}} & \\textbf{Single Embedding} & AMTL~\\shortcite{amtl}, AutoEmb~\\shortcite{autoemb} & ESAPN~\\shortcite{esapn} & - & PEP~\\shortcite{pep} \\\\ \\cline{2-2} \n & \\textbf{Group Embedding} & AutoDim~\\shortcite{autodim}, DNIS~\\shortcite{dnis} & NIS~\\shortcite{nis}, AutoIAS~\\shortcite{autoias} & RULE~\\shortcite{rule} & - \\\\ \\hline\n\\multirow{3}{*}{\\large\\begin{tabular}[c]{@{}c@{}}\\textbf{\\textit{Feature}} \\\\ \\textbf{\\textit{Interaction}}\\end{tabular}} & \\textbf{Feature Interaction} & AutoFIS~\\shortcite{autofis}, FIVES~\\shortcite{fives} & AutoIAS~\\shortcite{autoias} & - & DeepLight~\\shortcite{deeplight},BP-FIS~\\shortcite{chen2019bayesian} \\\\ \\cline{2-2} \n & \\textbf{Interaction Function} & SIF~\\shortcite{sif}, AIM~\\shortcite{aim} & AutoIAS~\\shortcite{autoias} & AutoFeature~\\shortcite{autofeature} & - \\\\ \\cline{2-2} \n & \\textbf{Interaction Block} & AutoPI~\\shortcite{autopi} & - & AutoCTR~\\shortcite{autoctr} & AMEIR~\\shortcite{ameir} \\\\ \\hline\n\\multirow{2}{*}{\\large\\begin{tabular}[c]{@{}c@{}}\\textbf{\\textit{System}} \\\\ \\textbf{\\textit{Design}}\\end{tabular}} & \\textbf{Framework} & - & - & - & DeepRecInfra~\\shortcite{deeprecinfra}, AutoRec~\\shortcite{autorec} \\\\ \\cline{2-2} \n & \\textbf{Optimazation} & AutoLoss~\\shortcite{zhao2021autoloss}, $\\lambda$opt\\shortcite{lambdaopt} & - & - & - \\\\ \\bottomrule \\bottomrule\n\\end{tabular}\n\\begin{tablenotes}\n \\footnotesize\n \\item[*] Others including Bayesian optimization, gird\/random search and regularization.\n \\end{tablenotes}\n \\end{threeparttable}\n}\n\\vskip -0.15in\n\n\n\\end{table*}\n\n\\begin{itemize}[leftmargin=*]\n\\item \\textbf{Feature Selection}: This is the process of selecting a subset of the most predictive and relevant features (or generated features) for subsequent DRS models. \nBy eliminating the redundant or irrelevant features, feature selection can help enhance the recommendation performance and accelerate DRS model training~\\cite{nadler2005prediction}. \n\n\n\\item\\textbf{Feature Embedding}: \nTypically, the features for DRS are high-dimensional and extremely sparse. 
\nMost recommendation models first transform the raw features into one-hot vectors and then embed them as dense representations via the feature embedding layer.\nAutoML techniques are utilized to dynamically search for optimal embedding sizes, improving prediction accuracy, saving storage space, and reducing model capacity. \n\n\\item\\textbf{Feature Interaction}: \nEffectively modeling predictive feature interactions is critical for boosting the recommendation quality of DRS because the interaction of two features can alter their individual effects. For example, users often download food delivery apps at mealtime, so the interaction between app category and time-stamp is a highly predictive signal. \nTherefore, some AutoML-based works are devoted to exploring beneficial feature interactions with proper interaction functions.\n\n\\vspace{-1mm}\n\\item \\textbf{System Design}: \nIn addition to the above components of DRS models, system design also has a crucial impact on DRS performance, including hardware infrastructure, data pipelines, and information transfer, as well as implementation, deployment, optimization, and evaluation. \n\\end{itemize}\n\n\nThis survey provides a literature overview of the advances in AutoML for constructing DRS architectures.\nTo be specific, we first provide an overview of AutoML techniques.\nThen, we discuss the state-of-the-art AutoML approaches that automate feature selection, feature embedding, feature interaction, and system design in DRS models.\nFinally, we discuss appealing directions that can bring this research field to a new frontier.\n\n\n\\section{Preliminary}\n\n\\section{Overview of AutoML}\nGiven the problem description and datasets, the goal of Automated Machine Learning (AutoML) techniques is to construct machine learning solutions automatically for time-consuming and iterative real-world tasks.\nAutoML has shifted the model design mechanism from hand-crafted to automatic, enabling unparalleled prospects for deep learning model construction. AutoML frameworks typically comprise the following three components:\n\\begin{itemize}[leftmargin=*]\n\\vspace{-1mm}\n\\item \\textbf{Search Space}. The search space defines a group of candidate operations and the relationships between them that enable appropriate model designs to be formed. For DRS, different components have diverse search spaces, which typically involve human prior knowledge.\n\n\n\\vspace{-1mm}\n\\item \\textbf{Search Strategy}. The search strategy specifies how to efficiently explore the search space and find the optimal architectures; typical strategies include gradient-based optimization~\\cite{ruder2016overview}, reinforcement learning (RL)~\\cite{kaelbling1996reinforcement}, evolutionary algorithms~\\cite{qin2008differential}, Bayesian optimization~\\cite{snoek2012practical}, random search~\\cite{bergstra2012random}, \\textit{etc}.\n\\vspace{-1mm}\n\\item \\textbf{Performance Estimation Strategy}. Performance estimation is the process of estimating the performance of sampled candidate architectures from the massive search space. 
To reduce the computational cost of training and estimating over these candidates, various strategies for performance estimation have been proposed, such as weight sharing~\\cite{pham2018efficient} and network morphism~\\cite{elsken2017simple}.\n\n\\end{itemize}\n\\section{Feature Selection}\nIn recommender systems, feature selection aims to select a subset of relevant features for constructing recommendation models. \nIn practical online service providers, data is composed of a massive amount of features, including user portraits, item attributes, behavior features, contextual features as well as combinatorial features based on previous feature types. \nHowever, some of these raw features may be irrelevant or redundant in recommendations, which call for effective feature selections that can boost recommendation performance, overcome input dimensionality and overfitting, enhance model generalization and interpretability, as well as accelerate model training.\n\nThe classic feature selection methods are typically presented in three classes: 1) Filter methods, which select features based only on feature correlations regardless of the model~\\cite{hall1999correlation,yu2003feature}; 2) Wrapper methods, which evaluate subsets of features that allows detecting the possible interactions amongst variables~\\cite{maldonado2009wrapper}; and 3) Embedded methods, where a learning algorithm performs feature selection and classification simultaneously, such as LASSO~\\cite{fonti2017feature} and decision trees~\\cite{ke2017lightgbm}.\nThese methods, however, usually fail in deep learning-based recommender systems with both numerical and categorical features. For instance, filter methods neglect the dependencies between feature selection and downstream deep recommendation models; Wrapper methods must explore $2^m$ candidate feature subspaces, \\textit{i.e.}, keep or drop for $m$ feature fields; Embedded methods are sensitive to the recommendation models' strong structural assumptions. To deal with these issues, AutoML-based methods are utilized to adaptively select effective features for recommendations.\nAccording to the feature selection stage, we categorize the research into two groups: \\textit{Selection from Raw Features} and \\textit{Selection from Generated Features}.\n\n\n\\subsection{Selection from Raw Features}\n\nAccording to a survey from Crowdflower, scientists spend 80\\% of time on data and feature preparation. Therefore, introducing AutoML into raw feature selection has the potential to significantly enhance the data scientists' productivity, and frees them up to focus on real business challenges. FSTD \\cite{kroon2009automatic,fard2013using} introduces reinforcement learning (RL) into feature selection with single agent. They have a large search space of $2^m$ ($m$ is the number of feature fields), where each candidate is a possible feature subset.\nTo limit the searching complexity, MARLFS~\\cite{liu2019automating,liu2021automated} reformulates feature selection as a multi-agent reinforcement learning problem. Each feature is assigned an agent, and then all feature agents maintain to select or deselect the corresponding feature simultaneously. \nTo reduce the computations, the following efforts attempt to accelerate the searching efficiency by learning with external knowledge~\\cite{fan2020autofs} and reducing the number of agents~\\cite{zhao2020simplifying,fan2021autogfs}. 
However, due to the intrinsically low sample efficiency, RL-based methods are still difficult to be integrated into real-world recommender systems with large-scale user-item interactions. To this end, AutoField~\\cite{wang2022autofield} is \nproposed for practical recommendations, where the search space is relaxed to be continuous by allocating two variables to control the selection of each feature. Afterward, the selected features can be evaluated on the validation dataset by gradient descent.\n\n\n\n\n\n\n\\subsection{Selection from Generated Features}\nIn addition to selecting informative features from the raw feature set, some works learn to discover and generate beneficial combinatorial features (\\textit{i.e.}, cross features), including categorical and statistical features.\nGLIDER~\\cite{glider} utilizes the gradient-based neural interaction detector to detect generic non-additive and high-order combinatorial features efficiently, whose search space is $2^{2^m}$. The detected features are evaluated by a linear regression model and trained from scratch. Similarly, AutoCross~\\cite{autocross} searches useful high-order cross features by transferring the original space to a tree-structured space, reducing the search space from $2^{2^m}$ to $(\\mathcal{C}_m^2)^k$, where $k$ is the expected number of cross features. Then a greedy-based beam search~\\cite{beam_search} is performed to prune unpromising branches for further improving the efficiency. The feature set evaluation is achieved by field-wise logistic regression approximately and trained from scratch.\n\nTo discover useful statistical features from the raw feature set, AEFE~\\cite{aefe} designs second-order combinatorial features search space with size $2^{\\mathcal{C}_m^2Q}$, where $Q$ is the number of pre-defined construction rules. After the feature generation\nof groupby, aggregating, and paradigm combination, a greedy-based search with feature filtering is deployed and the selected features are evaluated from scratch.\n\n\n\\section{Feature Embedding}\nDifferent from Computer Vision (CV) and Natural Language Processing (NLP), the input features used in recommender systems are extremely sparse and high-dimensional.\nTo tackle this problem, neural network-based models leverage a feature embedding layer to map the high-dimensional features into a low-dimensional latent space.\nSpecifically, for a feature field, we assign each feature $f_i$ with a dense embedding vector $\\mathbf{e}_{i}$ and save all embedding vectors in an embedding table $\\mathbf{E}\\in \\mathbb N^{V\\times d}$, where $V$ and $d$ are are the vocabulary size and pre-defined embedding size, respectively. As shown in Figure~\\ref{fig:embedding}, based on the embedding table $\\mathbf{E}$, we can obtain the embedding vectors through the embedding look-up process.\n\nFeature embedding is the cornerstone of the DRS as the number of parameters in DRS is heavily concentrated in the embeddings and the subsequent components are constructed on the basis of feature embeddings. The feature embedding layer not only directly affects storage capacity and online inference efficiency~\\cite{autodis}, but also has a non-negligible effect on the prediction accuracy.\nTo improve the prediction accuracy, save storage space and reduce model capacity, some AutoML-based solutions are proposed to dynamically search the embedding sizes for different features. 
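\n\nFor reference, the standard fixed-size look-up that these methods build on can be sketched in a few lines of PyTorch-style code; the vocabulary size, embedding size, feature ids, and variable names below are purely illustrative:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nV, d = 10000, 16                 # vocabulary size, fixed embedding size\nembedding_table = nn.Embedding(V, d)\n\n# two samples, each with three categorical feature ids\nfeature_ids = torch.tensor([[3, 17, 42], [7, 17, 99]])\ndense_vectors = embedding_table(feature_ids)   # shape: (2, 3, 16)\n\\end{verbatim}\n\n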
The intuition behind searching embedding sizes is that assigning high-dimensional embeddings to high-frequency features can improve model capacity, while low-dimensional embeddings for low-frequency features help prevent overfitting~\cite{autoemb}. According to whether or not the optimal embedding size is searched for each individual feature value, these solutions can be divided into two categories: \textit{Single Embedding Search} and \textit{Group Embedding Search}, as in Figure~\ref{fig:embedding}.\n\n\n\n\begin{figure}[!t]\n\vskip -0.15in\n\n \centering\n \setlength{\belowcaptionskip}{-0.5cm}\n \hspace*{-6.6mm}\n \includegraphics[width=0.5\textwidth]{Figure\/embedding.pdf}\n \vskip -0.15in\n \caption{\small{AutoML for feature embedding.}}\n \label{fig:embedding}\n\end{figure}\n\n\subsection{Single Embedding Search}\nSingle embedding search-based methods~\cite{amtl,pep} aim to search the optimal embedding dimension for each feature value, hence facing a huge search space due to the large vocabulary size $V$. AMTL~\cite{amtl} leverages a twins-based architecture to avoid the unbalanced parameter-update problem caused by different feature frequencies and designs an embedding search space of size $d^V$, where $d$ is the embedding size. The twins-based architecture acts as a frequency-aware policy network to search the optimal dimension for each feature value, and the learning process is relaxed to a continuous space by a softmax with temperature~\cite{kd} and optimized by gradients.\nPEP~\cite{pep} proposes a pruning-based solution by enforcing column-wise sparsity on the embedding table with $L_0$ normalization, generating a $2^{Vd}$ search space. To save computation cost and avoid setting pruning thresholds manually, PEP utilizes trainable pruning parameters to prune each element automatically, which can be jointly optimized with the model parameters via gradient-based back-propagation. \n\nHowever, the search spaces of AMTL and PEP are strongly tied to the embedding size $d$, which hinders the optimization procedure. To reduce the search space, one solution is to divide the embedding dimension into several \textit{\textbf{column-wise}} sub-dimensions (\textit{e.g.}, slicing the original dimension $d = 64$ into the six candidate sub-dimensions $\{2,4,8,16,32,64\}$).\nAutoEmb~\cite{autoemb} and ESAPN~\cite{esapn} reduce the search space from $d^V$ to $a^V$, where $a$ is the number of candidate sub-dimensions, greatly shrinking the search space compared with AMTL.\nAutoEmb performs a soft-selection strategy by summing over the candidate sub-dimensions with learnable weights.\nIn contrast, ESAPN performs a hard-selection strategy, where a frequency-aware policy network serves as an automated RL agent deciding whether to enlarge the dimensions under the streaming setting. \n\nBesides dynamically searching the embedding dimension for each feature, adaptively learning embeddings via combination is also a trend. ANT~\cite{ant} and AutoDis~\cite{autodis} leverage the combination over a set of anchor embeddings (named meta-embeddings in AutoDis) to represent categorical and numerical features, respectively, building a $2^{km}$ search space, where $k$ is the number of anchor embeddings. ANT uses a sparse transformation operation to hard-select relevant anchor embeddings, while AutoDis designs an automatic discretization network to soft-select informative meta-embeddings. 
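\n\nTo make the soft-selection idea concrete, the following minimal PyTorch-style sketch mixes candidate sub-dimension embeddings with learnable architecture weights; it is our own illustrative simplification rather than the exact AutoEmb or AutoDis implementation, and all sizes and names are assumptions:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass SoftDimEmbedding(nn.Module):\n    # soft selection over candidate embedding sizes (illustrative)\n    def __init__(self, vocab_size, candidate_dims=(2, 4, 8, 16), out_dim=16):\n        super().__init__()\n        # one embedding table per candidate sub-dimension\n        self.tables = nn.ModuleList(\n            [nn.Embedding(vocab_size, d) for d in candidate_dims])\n        # linear maps aligning every candidate to a common output size\n        self.aligns = nn.ModuleList(\n            [nn.Linear(d, out_dim) for d in candidate_dims])\n        # architecture weights, one per candidate dimension\n        self.alpha = nn.Parameter(torch.zeros(len(candidate_dims)))\n\n    def forward(self, ids):\n        weights = F.softmax(self.alpha, dim=0)\n        return sum(w * align(table(ids)) for w, table, align\n                   in zip(weights, self.tables, self.aligns))\n\nemb = SoftDimEmbedding(vocab_size=1000)\nprint(emb(torch.tensor([1, 2, 3])).shape)   # torch.Size([3, 16])\n\\end{verbatim}\nA hard selection in the style of ESAPN would instead keep only the single highest-weighted candidate, for example by replacing the softmax mixture with an argmax choice.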
\n\n\n\\subsection{Group Embedding Search}\nAutoEmb and ESAPN shrink the search space by dividing the embedding dimension into candidate \\textit{\\textbf{column-wise}} sub-dimensions.\nAnother solution is to group the feature values of a field based on some indicators (\\textit{e.g.}, frequencies) and assign a \\textit{\\textbf{row-wise}} group embedding dimension for all the values within the group. \nA special case is setting the number of groups $b=1$ and searching a global embedding dimension for all the feature values of a field, such as AutoDim~\\cite{autodim}. AutoDim pre-defines several candidate sub-dimensions like AutoEmb but has a smaller search space, shrinking from $a^V$ to $a^m$. The optimization procedure is achieved by a bi-level gradient-based algorithm with Gumbel-Softmax trick~\\cite{gumbel_softmax}.\n\nTo balance the search efficiency and performance, some works split the feature into multi-groups (\\textit{i.e.}, $b>1$) based on the feature frequencies or clustering. DNIS~\\cite{dnis} divides the feature values with similar frequencies into $b$ groups, reducing the search space from $2^{Vd}$ into $2^{bd}$. Then, a gradient-based differentiable search with gradient normalization is performed to search optimal group embeddings. Specifically, NIS~\\cite{nis} and RULE~\\cite{rule} reduce the search space significantly from both row-wise and column-wise perspectives. NIS designs single-size embedding search (with $ab$ space) and multi-size embedding search (with $b^a$ ), and uses the RL-based method to find the optimal embedding dimensions.\nSimilarly, RULE divides the embedding table into multi-blocks and builds a $2^{ab}$ search space. Then the evolutionary search algorithm is proposed to search optimal item embeddings under memory constraint for the on-device recommendation. Each sub-structure is evaluated by an accurate performance estimator to balance the prediction confidence and training time.\n\\section{Feature Interaction}\nEffectively modeling feature interactions is one of the most commonly-used approaches for DRS models to improve prediction performance. Recently, plenty of works leverage various operations to capture informative interaction signals explicitly and implicitly, such as inner product (PNN~\\cite{pnn}), outer product (CFM~\\cite{cfm}), convolution (FGCNN~\\cite{fgcnn}) and \\textit{etc}. However, these works utilize identical interaction functions to model all the feature interactions indiscriminately, which may introduce noisy interactive signals and weaken the effectiveness of modeling. To overcome these issues, some AutoML-based methods are designed to search beneficial feature interactions with optimal interaction function adaptively. 
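\n\nAs a concrete illustration of the underlying idea, the following minimal sketch attaches a learnable gate to every enumerated second-order interaction so that uninformative pairs can be switched off during training; it is our own simplification in the spirit of gate-based approaches such as AutoFIS, not a faithful reproduction of any particular published implementation:\n\\begin{verbatim}\nimport itertools\nimport torch\nimport torch.nn as nn\n\nclass GatedPairwiseInteractions(nn.Module):\n    # learnable gates over enumerated second-order interactions (illustrative)\n    def __init__(self, num_fields):\n        super().__init__()\n        self.pairs = list(itertools.combinations(range(num_fields), 2))\n        # one gate per field pair; values near zero switch a pair off\n        self.gates = nn.Parameter(torch.ones(len(self.pairs)))\n\n    def forward(self, field_emb):       # field_emb: (batch, num_fields, dim)\n        scores = []\n        for g, (i, j) in zip(self.gates, self.pairs):\n            inner = (field_emb[:, i] * field_emb[:, j]).sum(dim=1)\n            scores.append(g * inner)\n        return torch.stack(scores, dim=1).sum(dim=1)   # (batch,)\n\nlayer = GatedPairwiseInteractions(num_fields=4)\nprint(layer(torch.randn(2, 4, 8)).shape)   # torch.Size([2])\n\\end{verbatim}\nA sparsity-inducing optimizer or regularizer can then drive many gates towards zero, leaving a compact set of selected interactions.\n\n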
These methods can be categorized into three groups depending on the search space: \\textit{Feature Interaction Search}, \\textit{Interaction Function Search}, and \\textit{Interaction Block Search}, as shown in Figure~\\ref{fig:interaction}.\n\n\\begin{figure}[!t]\n \\vskip -0.15in\n \\centering\n \\setlength{\\belowcaptionskip}{-0.1cm}\n \\hspace*{-6mm}\\includegraphics[width=0.5\\textwidth]{Figure\/interaction.pdf}\n \\vskip -0.15in\n \\caption{\\small{AutoML for feature interaction.}}\n \\vspace{-0.3cm}\n \\label{fig:interaction}\n\\end{figure}\n\n\\subsection{Feature Interaction Search}\nTo search beneficial feature interactions for enriching information, some AutoML-based works design high-order feature interactions search space and leverage search algorithms (mainly gradient-based search algorithm) to derive feature interactions automatically. AutoFIS~\\cite{autofis} identifies and selects important feature interactions by enumerating all the feature interactions and introducing a set of architecture parameters ``gates'' to indicate the importance of individual feature interactions, facing $2^{\\mathcal{C}_m^2}$ search space even in second-order feature interactions. The architecture parameters are optimized by gradient descent with GRDA optimizer~\\cite{grda} to get a sparse solution automatically. However, when searching high-order feature interactions, the search space of AutoFIS is huge, resulting in low search efficiency.\n\nTo solve the \\textit{efficiency-accuracy} dilemma, AutoGroup~\\cite{autogroup} proposes automatic feature grouping, reducing the $p^{th}$-order search space from $2^{\\mathcal{C}_m^p}$ to $2^{gm}$, where $g$ is the number of pre-defined groups. The discrete search space is then relaxed to continuous and the derivative is approximated by introducing the Gumbel-Softmax trick~\\cite{gumbel_softmax}. AutoHash~\\cite{autohash} shares a similar idea with AutoGroup to reduce high-order search space by the hashing function.\n\nAlthough AutoGroup and AutoHash improve the high-order interaction search efficiency via feature grouping and hashing, they ignore the \\textit{order-priority} property ~\\cite{profit}, which reveals that the higher-order feature interactions quality can be relevant to their de-generated low-order ones, and lower-order feature interactions are likely to be more vital compared with higher-order ones. To reduce the architecture parameters and search costs, PROFIT~\\cite{profit} distills the $p^{th}$-order search space from $2^{\\mathcal{C}_m^p}$ to $2^{mp}$ by the composition of low-rank tensors approximately. Then to ensure the order-priority property, a progressive\nsearch algorithm based on the gradient is proposed to search high-order feature interactions order-by-order. Similarly, FIVES regards the original features as a feature graph conceptually and models the high-order feature interactions by a GNN with layer-wise adjacency matrix, so that the $p^{th}$-order search space is reduced from $2^{\\mathcal{C}_m^p}$ to $2^{m^2}$. Then, FIVES parameterizes the adjacency matrix and makes them depend on the previous layer, so that the order-priority property can be kept. \n\nThe above-mentioned works search beneficial feature interactions for all users non-personally, which overlooks the individuality and personality of the user's behavior. To provide personalized selection of second-order feature interaction, BP-FIS~\\cite{chen2019bayesian} designs a personalized search space with size $2^{u\\mathcal{C}_m^2}$, where $u$ is the number of users. 
Specifically, BP-FIS proposes bayesian personalized feature interaction selection mechanism under the Bayesian Variable Selection (BVS)~\\cite{tibshirani1996regression} theory by forming a Bayesian generative model and deriving the Evidence Lower Bound (ELBO), which can be optimized by an efficient Stochastic Gradient Variational Bayes (SGVB) method.\n\n\n\\subsection{Interaction Function Search}\nAs suggested by PIN~\\cite{pin}, different feature interactions are suitable for different interaction functions. Therefore, searching optimal interaction functions contributes to better capturing informative interaction signals. \nSIF~\\cite{sif} automatically devises suitable interaction functions for collaborative filtering (CF) task with two fields, which consists of micro search space referring to element-wise MLP and macro search space including 5 pre-defined operations (\\textit{i.e.}, multiply, plus, min, max, and concat). A bi-level gradient-based search algorithm is utilized to relax the choices among operations in a continuous space. \nAutoFeature~\\cite{autofeature} extends the interaction functions search to multi-field high-order scenarios by utilizing micro-networks with different architectures to model feature interactions. The whole search space expands to $b^{\\mathcal{C}_m^p}$ for the $p^{th}$-order interactions, where $b$ is the number of candidate interaction functions, including add, Hadamard-product, concat, Generalized-product, and null. The search process is implemented by an evolutionary algorithm with the Naive Bayes tree, and each sampled architecture is trained and evaluated from scratch. \n\nHowever, the interaction calculations of SIF and AutoFeature are artificially specified, which requires high dependence on domain knowledge. To overcome this limitation, AOANet~\\cite{aoanet} proposes a generalized interaction paradigm by decomposing commonly-used structures into Projection, Interction and Fusion phase. Therefore, the formula of the $l$-th interaction layer $C_l$ is given as: $C_l = \\{Z_{u,v} | (u,v)\\in pairing(B_0,B_{l-1})\\}$, where $Z_{u,v} = (u\\otimes v) \\odot W_l$ is the interaction map of vector $u$ and $v$, respectively. \nThen architecture parameters are introduced to distinguish the importance of interaction maps, and the optimization procedure is achieved by a gradient-based method like AutoFIS.\n\n\n\\subsection{Interaction Block Search}\nSearching appropriate interaction functions for different feature interactions may bring huge search space and high search overhead. Therefore, one straightforward idea is to take the original features as a whole and modularize representative operations in several blocks to formulate a generalizable search space, which is widely used in CV tasks~\\cite{darts}.\nAutoCTR~\\cite{autoctr} designs a two-level hierarchical search space by abstracting the raw features and operations (\\textit{i.e.}, MLP, FM~\\cite{fm}, and dot-product) into virtual blocks, which are further connected as a directed acyclic graph (DAG). Similar to AutoFeature, AutoCTR utilizes a multi-objective evolutionary algorithm with architectural-level learning-to-rank guidance to search the optimal architecture. The sampled architectures are evaluated from scratch and some tricks (\\textit{e.g.}, data sub-sampling and warm-start) are used to accelerate the evaluation process.\n\nTo further improve computational efficiency, AutoPI~\\cite{autopi} utilizes a gradient-based search strategy for exploration in a more efficient search space. 
AutoPI designs a hierarchical search space with blocks connecting into a DAG where the interaction cell formulates the higher-order feature interactions and the ensemble cell combines lower-order and higher-order interactions. Then, a bi-level optimization approach is applied to discover optimal architecture after the continuous relaxation.\n\n\\subsection{Comprehensive Search}\nIn addition to designing a single search space (\\textit{e.g.}, feature embedding, feature interaction, or interaction block) in DRS, some works design a hybrid search space and perform a comprehensive search. \nBased on AutoFIS, AIM~\\cite{aim} designs a mixed search space to select significant feature interactions, appropriate interaction functions, and optimal embedding dimensions automatically in a unified framework. It is noteworthy that the $p^{th}$-order feature interaction search is achieved by combining raw features with the maintained top-$k$ ${(p-1)}^{th}$-order feature interactions, reducing the search space from $2^{\\mathcal{C}_m^p}$ to $2^{km}$. Besides, the ``gates'' are extended to search embedding dimensions with $2^{md}$ space and interaction functions with $2^{kmb}$ space. AutoIAS~\\cite{autoias} designs an integrated search space for multiple components in DRS, including feature embedding ($a^m$) and projection ($a^{\\mathcal{C}_m^2}$), second-order feature interaction ($2^{m+\\mathcal{C}_m^2}$) and interaction function ($b^{\\mathcal{C}_m^2}$), as well as the MLP structures. An architecture generator network is trained by policy gradient and used to produce better architectures with dependency, where Knowledge Distillation (KD)~\\cite{kd} is performed to enhance consistency among sub-architectures. \nDeepLight~\\cite{deeplight} develops an integrated search space for feature embedding ($2^{Vd}$), interaction ($2^{d^2}$) and MLP structures by pruning redundant parameters with $L_2$ penalty.\nResembling DeepLight, UMEC~\\cite{umec} develops an integrated search space for both embedding ($2^{Vd}$) and MLP structures. Then the sparsity is achieved by $L_2$ norms, which are further reformulated as a minimax optimization problem and optimized via a gradient-based algorithm.\n\nAMEIR~\\cite{ameir} proposes an automatic behavior modeling, feature interaction exploration and\nMLP structure investigation solution for both sequential and non-sequential features. It is worth noting that AMEIR designs an interaction block search space (including CNN, RNN, pooling, and attention layers) for identifying sequential patterns in the user history and high-order feature interaction search space for modeling non-sequential features. The one-shot weight-sharing random\nsearch paradigm widely-used in other works is deployed to boost search efficiency.\n\\section{System Design}\nBesides the aforementioned techniques on automating key components in DRS models, scientists also devise AutoML-based frameworks and training procedures from the perspective of system design. \n\n\\noindent \\textbf{Framework.}\nWorks in this domain mainly optimize the recommender systems from a framework perspective.\nDeepRecInfra~\\cite{deeprecinfra} considers the inference query sizes, arrival patterns, recommendation architectures, and underlying hardware systems (GPU\/CPU) to obtain the optimal infrastructure (\\textit{i.e.}, maximizing the latency-bounded throughput), which could help reduce the latency. 
\nIn terms of models' implementation, AutoRec~\\cite{autorec}, as the first open-source platform, provides a highly-flexible pipeline for various data formation, tasks, and models in deep recommender systems. \n\n\n\\noindent \\textbf{Training.}\nThe training process is also crucial for designing reliable recommender systems. \nIn general cases, the loss function is vital for training. GradNorm~\\cite{chen2018gradnorm} and $\\lambda$Opt~\\cite{lambdaopt} focus on adjusting the coefficients of loss items and optimizing parameters via gradient descent. The difference is that GradNorm aims to balance different losses of multi-task while $\\lambda$Opt only adjusts the regularization level. Later, Zhao et al.~\\cite{zhao2021autoloss} proposed an adaptive loss function search framework, AutoLoss, based on a bi-level gradient-based algorithm with Gumbel-Softmax trick~\\cite{gumbel_softmax}. AutoLoss attributes the most appropriate loss function for each data example by automatically designing various loss functions, rather than adjusting coefficients only like the aforementioned works. In the scenario of knowledge transferring, system designers should figure out which parameters should be frozen to prevent overfitting on the target dataset. With Gumbel-Softmax trick~\\cite{gumbel_softmax}, AutoFT~\\cite{yang2021autoft} automatically decides whether the embedding of a field and parameters of a layer should be fine-tuned.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\vspace{-3mm}\nA successful search relies on the engine accurately interpreting the intent behind a user's query and returning likely relevant results ranked high. There has been much progress allowing search engines to respond effectively even to short keyword queries on rare intents \\cite{RareQuerySuggestion, ClassificationRareQueries, PersonalizedExpansion}. Despite this, recommendation of queries is an integral part of all search experiences -- either in the form of \\textit{query autocomplete} (queries that match the prefix the user has currently typed into the search box) or \\textit{query suggestions} (reformulation options once an initial query has been provided).\nIn this work, we focus on the query suggestion task.\n\nOriginal algorithms for this scenario relied on extracting co-occurrence patterns between query pairs, and their constituent terms, within historical logs \\cite{jones2006generating, huang2003relevant, fonseca2005concept, beeferman2000agglomerative}. Such methods often work well for frequent queries. Recent work utilizing generative approaches common in natural language processing (NLP) scenarios offer generalization in terms of being able to provide suggestions even for rare queries \\cite{mitra2015exploring, cho2014learning}. More specifically, the work by Sordoni et al. \\cite{HRED} focuses on generating query suggestions that are aware of the context of the user's current session. The current paper is most similar to this work in terms of motivation and the core technical component.\n\nThe experiments described here are based on data from a commercial stock image search engine. In this setting, the items in the index are professionally taken high quality images to be used in commercial publishing material. The users of such a system exhibit similar properties to what might be expected on general purpose search engines - i.e., the use of relatively short queries often with multiple reformulations within a session. 
\nThe logged data therefore contains not only the sequence of within-session queries, but also impression logs listing what images were shown in response to a query and which amongst those were clicked. \n\nThe availability of usage data, which provides implicit relevance signals, allows the building of a query reformulation model that includes aspects that have been shown to be useful in related literature:\nsession context capturing information from previous queries in the session, as well as properties of relevant results via a multitask component. Building on state-of-the-art models in this manner, we specialize the solution to our setting by utilizing a novel supervision signal for the reformulation model in the form of linguistically rich captions available for the clicked results (in our case, images) across sessions.\n\n\n\\begin{figure*}[!t]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{Teaser.pdf}\\vspace{-2mm}\n\\caption{\\scriptsize{The basic idea behind our work. We generate query reformulations using \\textit{(a)} subsequent queries within sessions, and \\textit{(b)} the captions of clicked images, as supervision signals. In both the cases, the task of generating reformulations is done while jointly optimizing the ranking of results.}}\n\\label{fig:teaser}\n\\vspace{-5mm}\n\\end{figure*}\n\n\\vspace{-4mm}\n\\section{Related Work}\n\\vspace{-3mm}\nA user of a search system provides an input query, typically a short list of keywords, into the search box and expects content relevant to their need ranked high in the result list. There are many reasons why a single iteration of search may not be successful -- mis-specified queries (including spelling errors), imperfect ranking, ambiguous intent, and many more. As a result, it is useful to think of a search session as a series of interactions -- where the user enters a query, examines and potentially interacts with the returned results, and constructs a refined query that is expected to more accurately represent their intent. Search engines therefore mine historical behavior of users on this query and similar ones in an attempt to optimize the entire search session~\\cite{silvestri2009mining}. \n\nBeing able to effectively extract these signals from historical logs starts with understanding and interpreting user behavior appropriately. For example, Huang et al. ~\\cite{huang2009analyzing} pointed out that successful reformulations, especially those involving changes to words and their order, can be identified as those that retrieve new items which are presented higher in the subsequent results. An automatic reformulation experience involves implementing lessons from such analyses. The first of these is the use of previous queries within the current search sessions to inform the subsequent suggestions \u2013 i.e., modeling the \\textit{session context}. Earlier papers (e.g.~\\cite{cao2008context}) explicitly captured co-occurrence within sessions which, while being an intuitive and simple strategy, had the disadvantage of not being able to account for rarer queries. Newer efforts (e.g.~\\cite{mitra2015exploring}) therefore utilize distributed representations of terms and queries to help generalize to unseen queries.\n\nSuch efforts are part of a wider expansion of techniques originally common within NLP domains to Information Retrieval (IR) scenarios. \nConceptually, a generation-based model for query reformulation is obtained by mapping a query to the subsequent one in the same session. 
Such a model incorporates two signals known to be useful from traditional IR: $(1)$ sequence of terms within a query \\& $(2)$ sequence of queries within a session. \nRecent papers have investigated models anchored in the original generic NLP settings but customized to the characteristics of search queries. For example, Dehghani et al. ~\\cite{Dehghani2017} suggest a `copy' mechanism within the sequence-to-sequence (seq-to-seq) models \\cite{sutskever2014sequence} to allow for terms to be carried over across queries in the session. In the current paper, we consider the work of Sordoni et al. \\cite{HRED} as a reference for the core seq-to-seq model. The model, referred to here as \\textit{H}ierarchical \\textit{R}ecurrent \\textit{E}ncoder \\textit{D}ecoder (\\textit{\\small{HRED}}), is a standard encoder-decoder setup, where word embeddings are aggregated into a query representation, a sequence of which in turn leads to a session representation. A decoder for the hierarchically organized query and session encoders is trained to predict the sequence of query words that compose the subsequent query in the session. Along with being a strong baseline, it serves to illustrate the core components of our work: $(a)$ use of a novel supervision signal in the form of captions of clicked results, and $(b)$ jointly optimizing ranking along with query reformulation. These extensions could similarly be done with other seq-to-seq models used for query suggestion. \n\n\n\nOur motivation for using captions of clicked images as supervision signal stems from the fact that captions are often succinct summaries of the content of the actual images as the creators are incentivized to have their images found. In particular, captions indicate which objects are present in the image, their corresponding attributes, as well as relationships with other objects in the same image -- for example, \\textit{``A beautiful girl \\textbf{wearing} a yellow shirt \\textbf{standing near} a red car\"}. These properties make the captions a good target.\n\n\nMultitask learning~\\cite{caruana1997multitask} has been shown to have success in scenarios where related tasks benefit from common signals. A recent paper ~\\cite{MNSRF} shows benefits of such a pairing in a search setting. Specifically, Ahmad et al. show that coupling with a classifier distinguishing clicked results from those skipped helps improve a query suggestion model. We extend this work by utilizing a pairwise loss function commonly used in learning-to-rank~\\cite{Burges2005}. We show that not only does this provide the expected increase in the effectiveness of the ranker component, but also increases the diversity of suggested reformulations. Such diversity has been shown to be important for the query suggestion user experience~\\cite{ma2010diversifying}. \n\nWe begin by providing details of the mathematical notation in the next section, before describing our models in detail. The subsequent experimental section provides empirical evidence of the benefits that our design choices bring. \n\\vspace{-5mm}\n\\section{Notation and Model Architectures}\n\\vspace{-0.5mm}\n\\label{sec:notations}\n\\subsubsection{3.1 \\quad Notation:}\n\\vspace{-2mm}\nWe define a session as a sequence of queries , $\\mathcal{S} = \\{q_1, \\dots, q_n\\}$. Each query $q_i$ in session $\\mathcal{S}$ has a set of displayed images associated with it, $\\mathcal{I}_i = \\{I_{i}^1, \\dots, I_{i}^m\\}$. 
A subset of images in $\\mathcal{I}_i$ are clicked, we refer to the top-ranked clicked image as $I_{i}^{\\text{ }\\text{clicked}}$. All the images in the set $\\mathcal{I}_i$ have a caption describing them, the entire set of which is represented as $\\mathcal{C}_i = \\{C_{i}^{1}, \\dots, C_{i}^{m} \\}$. It follows that every $I_i^{\\text{ }\\text{clicked}}$ will also have an associated caption with it, given as $C_i^{\\text{ }\\text{clicked}}$. Given this, for every successful query $q_i$ in session $\\mathcal{S}$, we will have an associated clicked image $I_i^{\\text{ }\\text{clicked}}$ and a corresponding caption $C_i^{\\text{ }\\text{clicked}}$. We consider the size of impression $m$ (number of images) to be fixed for all $q_i$.\n\nOur models treat each query $q_i$ in any given session, as a sequence of words, $q_i = \\{w_1, \\dots, w_{l_q} \\}$. Captions are represented similarly - as sequences of words, $C_i^j = \\{w_1, \\dots, w_{l_c}\\}$.\nWe use LSTMs \\cite{LSTM} to model the sequences, owing to their demonstrated capabilities in modeling various natural language tasks, ranging from machine translation \\cite{sutskever2014sequence} to query suggestion \\cite{Dehghani2017}.\n\nThe input to our models is a query $q_i$ in the session $\\mathcal{S}$, and the desired output is a target reformulation $q_{\\text{reform}}$. This target reformulation $q_{\\text{reform}}$ can either be \\textit{(i)} the subsequent query $q_{i+1}$ in the same session $S$, or \\textit{(ii)} the caption $C_i^{\\text{ } \\text{clicked}}$ corresponding to the clicked image $I_i^{\\text{ }\\text{clicked}}$. Note that obtaining contextual query suggestions via a \\textit{translation model} that has learnt a mapping between successive queries within a session (i.e., \\textit{(i)}) has been previously proposed in our reference baseline papers~\\cite{HRED, MNSRF}. In the current paper, we utilize a linguistically richer supervision signal, in the form of captions of clicked images (i.e., \\textit{(ii)}), and analyze the behavior of the different models across three high level axes - relevance, descriptiveness and diversity of generated reformulations. \\vspace{-7mm}\n\\subsubsection{3.2 \\quad Model Architectures: }\n\\vspace{-2mm}\n\\label{secModels}\nIn this paper, we evaluate two base models -- \\textit{\\small{HRED}} and \\textit{\\small{HRED}} with \\textbf{Cap}tions (\\textit{\\small{HREDCap}}), and to study the effect of multitask learning, we add a ranker component to each of these models; giving us two more multitask variants -- \\textit{\\small{HRED + Ranker}} and \\textit{\\small{HREDCap + Ranker}}. The underlying architecture of \\textit{HRED} and \\textit{\\small{HREDCap}} (and the corresponding variants) is essentially the same, but \\textit{\\small{HRED}} has been trained by using $q_{i+1}$ as target and \\textit{\\small{HREDCap}} has been trained using $C_{i}^{clicked}$ as target. \\textit{\\small{HRED}} comprises of a query encoder, a session encoder, and a query decoder; all of which are descried below.\n\n\n\\noindent\\textbf{Query Encoder:} The query encoder generates a query level encoding $\\mathbf{V}_{q_i}$ for every $q_i \\in \\mathcal{S}$. This is done by first representing the query $q_i$ using vector embeddings of corresponding words $\\{\\mathbf{w}_1, \\dots, \\mathbf{w}_{l_q}\\}$, and then sequentially feeding them into a bidirectional LSTM (BiLSTM) \\cite{graves2005framewise}. As shown in Fig. 
\\ref{fig:NotationFigure}(a), the query encoder takes each of these word representations as input to the BiLSTM at every encoding step and updates the hidden states based on the forward and backward pass over the input query. The forward and backward hidden states are concatenated, and after applying attention \\cite{bahdanau2014neural} over the concatenated hidden states, we obtain a fixed size vector representation $\\mathbf{V}_{q_{i}}$ for the query $q_i \\in \\mathcal{S}$. \n\n\\noindent\\textbf{Session Encoder:} The encoded representation $\\mathbf{V}_{q_{i}}$ of query $q_i \\in \\mathcal{S}$ is used by the session encoder, along with encoded representations $\\{\\mathbf{V}_{q_1}, \\dots, \\mathbf{V}_{q_{i-1}}\\}$ of previous queries within the same session, to capture the context of the ongoing session thus far. The session encoder, which is modeled by a unidirectional LSTM \\cite{LSTM}, updates the session context $\\mathbf{V}^{q_{i}}_{\\mathcal{S}}$ after each new $\\mathbf{V}_{q_{i}}$ is presented to it. Fig. \\ref{fig:NotationFigure}(b) illustrates one such update where the session encoding is updated from $\\mathbf{V}^{q_{i-1}}_{\\mathcal{S}}$ to $\\mathbf{V}^{q_{i}}_{\\mathcal{S}}$ after $\\mathbf{V}_{q_{i}}$ is provided as input to the session encoder by the query encoder.\nSince it is unreasonable to assume access to future queries in the session while generating a reformulation for the current query, we use a unidirectional LSTM to model the forward sequence of queries within a session. Accordingly, the session encoder updates its hidden state based on the forward pass over the query sequence. As shown in Fig. \\ref{fig:NotationFigure}(b), max-pooling is applied over each dimension of the hidden state to obtain the session encoding $\\mathbf{V}^{q_{i}}_{\\mathcal{S}}$. \n\n\\begin{figure*}[bt]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{Notations.pdf}\\vspace{-2mm}\n \\caption{\\scriptsize{An illustration of the \\textit{(a)} query encoder, \\textit{(b)} session encoder, and \\textit{(c)} query decoder }}\n \\label{fig:NotationFigure}\n\\vspace{-6mm}\n\\end{figure*}\n\n\\noindent\\textbf{Query Decoder:} The generated session encoding $\\mathbf{V}^{q_{i}}_{\\mathcal{S}}$ is used as input by a query decoder to generate a reformulation $\\hat{q}_{\\text{reform}} = \\{\\hat{w}_1, \\dots, \\hat{w}_{l_r}\\}$ for the query $q_i \\in \\mathcal{S}$. As shown in Fig. \\ref{fig:NotationFigure}(c), the reformulation is generated word by word using a single layer unidirectional LSTM. With each unfolding of the decoder LSTM at step $t \\in \\{1, \\dots, l_r\\}$, a new word $\\hat{w}_t$ is generated as per the following probability:\n\\vspace{-2mm}\\[\n \\hat{w}_t =\n \\argmax_{w^i \\in \\mathcal{V}} P(\\hat{w}_t = w^i \\mid \\hat{w}_{1: t-1}, \\mathbf{V}^{q_i}_{\\mathcal{S}}) \\text{ }\\text{ } \\footnote{For $t=1$, $P(\\hat{w}_t = w^i \\mid \\hat{w}_{1: t-1}, \\mathbf{V}^{q_i}_{\\mathcal{S}})$ reduces to $P(\\hat{w}_t = w^i \\mid \\mathbf{V}^{q_i}_{\\mathcal{S}})$. However, for the sake of readability, this special consideration for $t=1$ has been skipped for the following equations. 
}\n\\] \\vspace{-2mm}\n\n\\vspace{-6mm}\\begin{equation}\n P(\\hat{w}_t = w^i \\mid \\hat{w}_{1: t-1}, \\mathbf{V}^{q_i}_{\\mathcal{S}}) = g(\\phi(h_d^t))\n \\label{eq:condGen}\\vspace{-1mm}\n\\end{equation}\nHere, $h_d^t$ is the hidden state of the decoder at decoding step $t$, $\\hat{w}_{1: t-1}$ denotes the previous words generated by the decoder, and $\\phi(h_d^t)$ is a non-linear operation over $h_d^t$. The softmax function $g(.)$ provides a probability distribution over the entire vocabulary $\\mathcal{V}$. $w^i$ is used to denote the $i$-th word in $\\mathcal{V}$. The joint probability of generating a reformulation $\\hat{q}_{\\text{reform}} = \\{\\hat{w}_1, \\dots, \\hat{w}_{l_r}\\}$ can be decomposed into the ordered conditionals as\n $P(\\hat{q}_{\\text{reform}} \\mid q_i) = \\prod_{t = 1}^{l_r} P(\\hat{w}_t \\mid \\hat{w}_{1:t-1}, \\mathbf{V}^{q_i}_{\\mathcal{S}})$.\n \nDuring training, the decoder compares each word $\\hat{w}_t$ in the generated reformulation $\\hat{q}_{\\text{reform}}$ with the corresponding word $w_t$ in the target reformulation $q_{\\text{reform}}$, and aims to minimize the negative log-likelihood. For a given reformulation by the decoder, the loss is \\vspace{-4mm}\n\\begin{equation}\n \\label{eq:reformLoss}\n \\mathcal{L}_{\\text{reform}} = - \\sum_{t = 1}^{l_r} \\log P(\\hat{w}_t = w_t \\mid \\hat{w}_{1:t-1}, \\mathbf{V}^{q_{i}}_{\\mathcal{S}}) + \\mathcal{L}_{reg}\\vspace{-3mm}\n\\end{equation}\n\nHere, $\\mathcal{L}_{reg} = - \\lambda \\sum_{w^i \\in \\mathcal{V}} P(w^i \\mid \\hat{w}_{1:t-1}, \\mathbf{V}^{q_{i}}_{\\mathcal{S}}) \\cdot \\log P(w^i \\mid \\hat{w}_{1:t-1}, \\mathbf{V}^{q_{i}}_{\\mathcal{S}})$ is a regularization term added to prevent the predicted probability distribution over the words in the vocabulary from being highly skewed. $\\lambda$ is a regularization hyperparameter. The training loss is the sum of $\\mathcal{L}_{\\text{reform}}$ over all query reformulations generated by the decoder during training.\n\nTo summarize, the model encodes the queries, generates session context encodings, and generates the reformulated query using the decoder while updating the model parameters using the gradients of $\\mathcal{L}_{\\text{reform}}$. \n\n\n\\begin{figure*}[tb]\n \\centering\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{GeSeCa.pdf}\n \\end{minipage}\n \n \\begin{minipage}{0.4\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{Ranker.pdf}\\vspace{-1mm}\n \\end{minipage}\n\\caption{\\scriptsize{\nThe proposed architecture of our multitask model: \\textit{HRED + Ranker} (left). For the sake of brevity, we have shown the ranker component separately (right). For \\textit{HREDCap + Ranker}, the supervision signals are obtained from captions of clicked images and not subsequent queries.}}\n\\label{fig:architectures}\\vspace{-5mm}\n\\end{figure*}\n\n\\noindent\\textbf{Ranker Component:}\nThis additional component is responsible for ranking the $m$ retrieved results for $q_i \\in \\mathcal{S}$. As shown in Fig. \\ref{fig:architectures} (right), the ranker takes as input the concatenation of query and session encoding $[\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}]$, for every $q_i \\in \\mathcal{S}$. \nThe concatenated vector representation $[\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}]$ is used to compute the similarity between the query $q_i$ and its candidate results. 
The concatenation of these encodings is done to ensure that both current query information (as captured in $\\mathbf{V}_{q_i}$) and ongoing session context (as captured in $\\mathbf{V}_{\\mathcal{S}}^{q_i}$) is used by the ranker.\nTo obtain a representation of the images, we use their corresponding captions. Formally, for every query $q_i \\in \\mathcal{S}$, each image $I_i^j \\in \\mathcal{I}_i$ is represented by $\\mathbf{C}_i^j$, computed as the average of the vector embeddings of the words $\\{w_1, \\dots, w_{l_c} \\}$ in its caption $C_i^j$. The cosine similarities between $[\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}]$ and the image representations $\\mathbf{C}_i^j \\in \\mathcal{C}_i$ are used to rank-order the retrieved results. The $j$-th element of the similarity vector $\\mathbf{S}_i$ represents the similarity between $[\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}]$ and $\\mathbf{C}_i^j$.\n\\begin{equation}\n\\label{similarityEquation}\n {S}_i^j = sim([\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}], \\mathbf{C}_i^j)\n\\end{equation}\n\n\\noindent During training, the ranker tries to learn model parameters based on one of the following two objectives:\\\\\n\\noindent(i) \\textbf{Cross Entropy Loss}: As described in \\cite{MNSRF}, we utilize the `clicked' versus `not-clicked' boolean event to train a classifier, where the ranker scores the $m$ retrieved results based on the probability of being clicked by the user. In the following equation, $\\mathbf{R}_i$ for query $q_i$ is an $m$-dimensional vector, where each value in the vector indicates whether the corresponding image was clicked or not. That is, $R_i^j = 0$ if $I_i^j$ was not clicked, and $R_i^j = 1$ if $I_i^j$ was clicked. A sigmoid of the scores from Eq.~\\ref{similarityEquation} is taken as the probability of click. Using $\\mathbf{R}_i$ as labels, the ranker can now be trained using a standard cross entropy loss function:\\vspace{-1mm}\n\\begin{equation}\n\\label{eq:rankLoss_BCE}\n \\mathcal{L}_{\\text{rank}} = BCE(\\sigma(\\mathbf{S}_i), \\mathbf{R}_i)\\vspace{-2mm}\n\\end{equation}\n\n\\noindent(ii) \\textbf{Pairwise Ranking Loss}: As described in \\cite{Burges2005}, the original boolean labels in $\\mathbf{R}_i$ can be used to construct an alternate event space with labels $M_{jk} = 1$ when the image at rank $j$ was clicked while the one at rank $k$ was not, and $M_{jk} = 0$ otherwise. The pairwise ranking loss allows us to better model the preference of certain results over others.\\vspace{-1mm}\n\\begin{equation}\n\\label{eq:rankLoss_RO}\n %\n \\mathcal{L}_{\\text{rank}} = - \\frac{1}{m^2} \\sum_{j=1}^m\\sum_{ \\substack{k=1\\\\k\\neq j} }^m \\left[ M_{jk}\\cdot\\log \\hat{M}_{jk} + (1 - M_{jk})\\cdot\\log(1 - \\hat{M}_{jk}) \\right]\\vspace{-1mm}\n\\end{equation}\n\\[\\vspace{-1mm}\n\\text{where } \\hat{M}_{jk} = P(S_i^j > S_i^k \\mid [\\mathbf{V}_{q_i} \\oplus \\mathbf{V}_{\\mathcal{S}}^{q_i}]) = \\sigma(S_i^j - S_i^k)\n\\vspace{-1mm}\\]\n\n\n\\noindent Since \\textit{\\small{HRED + Ranker}} and \\textit{\\small{HREDCap + Ranker}} are multitask models, their training objective is a weighted combination of $\\mathcal{L}_{\\text{reform}}$ and $\\mathcal{L}_{\\text{rank}}$. \\vspace{-2mm}\n\\begin{equation}\n \\mathcal{L}_{\\text{multitask}} = \\alpha \\cdot \\mathcal{L}_{\\text{reform}} + (1 - \\alpha) \\cdot \\mathcal{L}_{\\text{rank}}\\vspace{-1.5mm}\n\\end{equation}\nHere, $\\alpha$ is a hyperparameter used for controlling the relative contribution of the two losses. 
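\nTo make the two training objectives concrete, the listing below gives a minimal PyTorch-style sketch of the pairwise ranking loss of Eq.~\\ref{eq:rankLoss_RO} and of the weighted multitask combination. It is an illustrative sketch rather than our exact implementation: the function and tensor names are our own, \\texttt{clicks} is assumed to be a float tensor holding $\\mathbf{R}_i$, and the small constant is added purely for numerical stability.
\\begin{verbatim}
import torch

def pairwise_ranking_loss(scores, clicks, eps=1e-8):
    # scores: (m,) similarity scores S_i; clicks: (m,) binary labels R_i
    m = scores.size(0)
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)   # diff[j, k] = S_i^j - S_i^k
    m_hat = torch.sigmoid(diff)                        # predicted P(S_i^j > S_i^k)
    m_jk = clicks.unsqueeze(1) * (1.0 - clicks.unsqueeze(0))  # 1 iff j clicked, k not
    bce = -(m_jk * torch.log(m_hat + eps)
            + (1.0 - m_jk) * torch.log(1.0 - m_hat + eps))
    off_diag = 1.0 - torch.eye(m, device=scores.device)  # exclude the k = j terms
    return (bce * off_diag).sum() / (m * m)

def multitask_loss(l_reform, l_rank, alpha):
    # weighted combination of the reformulation and ranking losses
    return alpha * l_reform + (1.0 - alpha) * l_rank
\\end{verbatim}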
\nAs mentioned earlier, either the regular binary cross-entropy loss or the pairwise ranking loss can be used for $\\mathcal{L}_{\\text{rank}}$. We experiment with both and report on the effect of using one over the other. The models that are trained using the cross entropy loss are appended with \\textit{(\\small{CE})}, and the models that are trained using the pairwise ranking objective are denoted as \\textit{(\\small{RO})}. \n\nIt is worth noting that since for a given query $q_i$ there can be more than one clicked image, our ranker component allows $\\mathbf{R}_i$ to take the value $1$ at more than a single place. However, while training the reformulation model, we only consider the caption of the \\textit{highest ranked} clicked image. \n\n\\vspace{-4mm}\n\\section{Experiments}\n\\vspace{-3mm}\n\\label{sec:experimentalSetup}\n\\subsubsection{Dataset:} We use logged impression data from Adobe Stock\\footnote{\\url{https://stock.adobe.com/}}. %\nThe query logs contain information about the queries that were issued by users and the images that were presented in response to those queries. Additionally, they contain information about which of the displayed images were clicked by the user.\nWe consider the top-10 ranked results, i.e., the number of results considered for each query is $m=10$. The queries are segmented into sessions (multiple queries by the same user within a $30$-minute time window), while maintaining the sequence in which they were executed by a user. \nWe retain both multi-query sessions and single-query sessions, leading to a dataset comprising $1,301,888$ sessions, $2,122,079$ queries, and $10,185,979$ unique images. We note that $\\sim24.8\\%$ of the sessions are single-query sessions, while the rest are multi-query sessions, which on average comprise $2.19$ queries.\nAdditionally, we remove all non-alphanumeric characters from the user-entered queries, while keeping spaces, and convert all characters to lowercase.\n\n\nTo obtain the train, test, and validation sets, we first shuffle the sessions and split them in an $80:10:10$ ratio, respectively. While it is possible for a query to be issued by different users in distinct sessions, a given search session occurs in only one of these sets. These sets are kept the same for all experiments, to ensure consistency while comparing the performance of trained models. The validation set is used for hyperparameter tuning. \\vspace{-6.0mm}\n\\subsubsection{Experimental Setup: }\nWe construct a global vocabulary $\\mathcal{V}$ of size $37,648$ comprising the words that make up the queries and the captions of images. Each word in the vocabulary is represented using a $300$-dimensional vector $\\mathbf{w}_i$. Each $\\mathbf{w}_i \\in \\mathcal{V}$ is initialized using pre-trained GloVe vectors \\cite{pennington2014glove}. Words in our vocabulary $\\mathcal{V}$ that do not have a pre-trained embedding available in GloVe ($1,941$ in number) are initialized using samples from a standard normal distribution.\nSince the average number of words in a query, the average number of words in a caption, and the average number of queries within a session are $2.31$, $5.22$, and $1.63$, respectively, we limit their maximum sizes to $5$, $10$, and $5$. Queries and captions that contain fewer than $5$ and $10$ words, respectively, are padded with padding tokens. The number of generated words in $\\hat{q}_{\\text{reform}}$ is limited to $10$, i.e., $l_r = 10$.\n\nDuring training, we use the Adam optimizer~\\cite{ADAM} with a learning rate initialized to $10^{-3}$. Across all the models, the regularization coefficient $\\lambda$ is set to $0.1$. For multitask models, the loss trade-off hyperparameter $\\alpha$ is set to $0.45$. The sizes of the hidden states of the query-level encoder, $\\overrightarrow{h}_q$ and $\\overleftarrow{h}_q$, are set to $256$, and that of the session-level encoder $h_{\\mathcal{S}}$ is set to $512$. The size of the decoder's hidden state is set to $256$. We train all the models for a maximum of $30$ epochs, using batches of size $512$, with early stopping based on the loss over the validation set. The best trained models are evaluated quantitatively and qualitatively, and we discuss the results in the upcoming section. \n\nAt test time, we use a beam search-based decoding approach to generate multiple reformulations \\cite{bahdanau2014neural}. \nFor our experiments, we set the beam width $K=3$. The choice of $K$ is governed by observations that will be discussed later, when analyzing the diversity and relevance of the generated reformulations. These three reformulations are rank-ordered by their generation probability. \n\nWe experiment with a range of hyperparameters and find that the evaluation results are stable with respect to our hyperparameter choices. However, our goal is less to train the most accurate models than to measure the effect of the supervision signal and training objective when used alongside the baseline models. While presenting the results in Tables \\ref{tab:mainResults} \\& \\ref{tab:lengtAnalysis}, we report the average of values over $10$ different runs, as well as the standard deviations.\n\\vspace{-5mm}\n\\section{Evaluation and Results}\n\\vspace{-3mm}\nIn this section, we evaluate the performance of the aforementioned models using multiple metrics for each of the two tasks: query reformulation and ranking. The metrics used here are largely inspired by~\\cite{Dehghani2017}, and we discuss them briefly below. Towards the end of the section we also provide some qualitative results. \\vspace{-3mm}\n\\vspace{-2mm}\n\\subsubsection{5.1 \\quad Evaluation Metrics:}\nEvaluation for query reformulation involves comparing the generated reformulation $\\hat{q}_{\\text{reform}}$ with the target reformulation $q_{\\text{reform}}$. For all the models, irrespective of whether they utilize the next query within the session $q_{i+1}$ as the target reformulation, or the caption $C_i^{\\text{ } \\text{clicked}}$ corresponding to the clicked image, the ground truth reformulation $q_{\\text{reform}}$ is always taken to be $q_{i+1}$\\footnote{For sessions with fewer than $5$ queries, if $q_i$ is the last query of the session, the model is trained to predict the `end of session' token as the first token of $q_{i+1}$; the subsequent predicted tokens are encouraged to be padding tokens.}.
This consistency has been maintained across all models to ensure that their performance is comparable, no matter what signal was used to train the reformulation model. The metrics used here cover three aspects: `Relevance' (BLEU \\& sim$_{emb}$), `Ranking' (MRR), and `Diversity' (analyzed later).\n\n\n\\noindent\\textbf{BLEU score}: This metric~\\cite{papineni2002bleu}, commonly used in machine translation scenarios, quantifies the similarity between a predicted sequence of words and the target sequence of words using n-gram precision.\nA higher BLEU score corresponds to a higher similarity between the predicted and target reformulations.\n\n\\noindent\\textbf{Embedding-based Query Similarity}: This metric takes the semantic similarity of words into account, instead of their exact overlap.\nA phrase-level embedding is calculated using vector extrema~\\cite{vectorExtrema}, for which pre-trained GloVe embeddings are used. The cosine similarity between the phrase-level vectors for the two queries is given by sim$_{emb}$. A higher value of sim$_{emb}$ is taken to signify a greater semantic similarity between the prediction and the ground truth. Unlike BLEU, we expect sim$_{emb}$ to provide a notion of similarity of the generated query to the target that allows for replacement words that are similar to the observed ones.\n\n\\noindent\\textbf{Mean Reciprocal Rank (MRR)}: The ranker's effectiveness is evaluated using MRR \\cite{MRR}, which is given as the reciprocal rank of the first relevant (i.e., clicked) result, averaged over all queries across all sessions. \nA higher value of MRR signifies a better ranker in the proposed multitask models. To have a standard point of reference to compare against, we computed the observed MRR for the queries in the test set and found it to be $0.31$. This means that, on average, for queries in our test set, the first image clicked by the users was at rank $\\sim3.1$. \\vspace{-3mm}\n\\vspace{-3mm}\n\n\\begin{table*}[!t]\n \\centering\n \\scalebox{0.71}{\n \\begin{tabular}{| c | c | c | c | c |}\\hline\n {} & \\multicolumn{3}{c|}{\\textbf{Query Reformulation}} & \\textbf{Ranking}\\\\\n \\textbf{Model} & {\\textbf{BLEU (\\%)} } & {\\textbf{sim$_{\\mathbf{emb}}$ (\\%)} } &\\textbf{Diversity} & \\textbf{MRR} \\\\\n {} & {($\\uparrow$)} & {($\\uparrow$)} & \\textit{Top K = $3$} ($\\uparrow$) & \\textit{Baseline}: $0.31$ ($\\uparrow$)\\\\\\hline\n {HRED} & $6.92 \\pm 0.06$ & $40.7 \\pm 1.3$ & $0.37 \\pm 0.01$ & - \\\\\\hline\n {HRED + Ranker} (CE) & $7.63 \\pm 0.07$ & $43.5 \\pm 1.2$ & $0.42 \\pm 0.02$ & $0.35 \\pm 0.02$ \\\\\n {HRED + Ranker} (RO) & $7.51 \\pm 0.07$ & $40.8 \\pm 1.4$ & $0.43 \\pm 0.02$ & $0.39 \\pm 0.01$ \\\\\\hline\n {HREDCap} & $7.13 \\pm 0.09$ & $37.8 \\pm 1.4$ & $0.39 \\pm 0.04$ & - \\\\\\hline\n {HREDCap + Ranker} (CE) & $7.95 \\pm 0.11$ & $39.4 \\pm 1.2$ & $0.44 \\pm 0.06$ & $0.38 \\pm 0.02$ \\\\\n {HREDCap + Ranker} (RO) & $7.68 \\pm 0.10$ & $37.6 \\pm 1.4$ & $0.45 \\pm 0.05$ & $0.41 \\pm 0.02$ \\\\\\hline\n \\end{tabular}\n }\\vspace{0.5mm}\n \\caption{\\scriptsize{Performance of models based on reformulation and ranking metrics}}\\vspace{-9mm}\n \\label{tab:mainResults}\n\\end{table*}\n\n\\subsubsection{5.2 \\quad Main Results}\nHaving discussed the metrics, we will now present the performance of our models on the two tasks under consideration, namely query reformulation and ranking. 
Table \\ref{tab:mainResults} provides these results, as well as the effect of the two ranking losses, denoted by (\\textit{\\small{CE}}) and (\\textit{\\small{RO}}) respectively.\n\n\\noindent\\textbf{Evaluation based on Reformulation:}\nFor the purpose of this evaluation, we fix the beam width $K=3$ and, for each query in our test set, take the maximum metric value among the candidate reformulations; we report the average of these values across all queries. \n\nWhile comparing \\textit{\\small{HRED}} and \\textit{\\small{HRED + Ranker}} (both \\textit{\\small{CE}} and \\textit{\\small{RO}}), we observe that the multitask version performs better across \\textit{all} metrics. A similar trend can be observed when comparing \\textit{\\small{HREDCap}} with its multitask variants. For all three query reformulation metrics, the best-performing model is a multitask model -- this validates the observations from \\cite{MNSRF} in our context.\n\n\nWhen comparing the two core reformulation models -- \\textit{\\small{HRED}} \\& \\textit{\\small{HREDCap}} -- we find that the richer caption data seen by \\textit{\\small{HREDCap}} aids the model: while \\textit{\\small{HRED}} scores better on sim$_{emb}$, \\textit{\\small{HREDCap}} wins out on BLEU \\& Diversity. The drop in sim$_{emb}$ values can be explained by noting that, on average, captions contain more words than queries ($5.22$ in comparison to $2.31$); due to these additional words, similarity-based measures will not be as high as overlap-based measures (i.e., BLEU). \n\n\\noindent\\textbf{Evaluation based on Ranking:}\nTo evaluate the performance of the ranker component in our proposed multitask models, we use MRR. We use the observed MRR of clicked results in the test set ($0.31$) as the baseline. We also analyze the effect of using the pairwise objective as opposed to the binary cross entropy loss.\n\nLooking at the results presented in Table~\\ref{tab:mainResults}, three trends emerge. Firstly, all the proposed multitask models perform better than the baseline. The best performing model, i.e., \\textit{\\small{HREDCap + Ranker}} with pairwise loss (\\textit{\\small{RO}}), outperforms the baseline by about $32\\%$. Secondly, we observe that using the pairwise loss leads to an increase in MRR for both of the cases under consideration, with only a marginal drop in reformulation metrics -- we revisit this observation in the next section. %\nLastly, the multitask models that use captions perform better than the multitask models that use subsequent queries. \n\\vspace{-3mm}\n\\subsubsection{5.3 \\quad Analysis: }\n\\vspace{-2mm}\nIn this section, we concentrate on the following two aspects of the generated query reformulations: $(a)$ diversity, and $(b)$ descriptiveness.\n\n\\noindent\\textbf{Diverse Query Reformulations due to Multitasking:}\nThe importance of suggesting diverse queries to enhance the user search experience is well established within the IR community. The mechanism we use to obtain a diverse set of reformulation alternatives is beam search-based decoding.\nIn scenarios where a set of top-$K$ candidates is required, we take inspiration from Ma et al. \\cite{ma2010diversifying} to evaluate the predictions of our models for their diversity. For a beam width of $K$, a reformulation model will generate $\\mathcal{R}_{gen} = \\{r_1, r_2, \\dots, r_K\\}$ candidate reformulations for a given original query. 
We quantify the diversity in the candidate reformulations by comparing each candidate reformulation $r_i$ with the other reformulations $r_j \\in \\mathcal{R}_{gen} : i \\neq j$. The diversity of a set of $K$ queries is evaluated as \n\\vspace{-2mm}\\[\n D(\\mathcal{R}_{\\text{gen}}) = 1 - \\frac{1}{K(K-1)}\\left(\\sum_{r_i \\in \\mathcal{R}_{\\text{gen}}}\\sum_{\\substack{r_j \\in \\mathcal{R}_{\\text{gen}}: \\text{ }j \\neq i}} sim_{emb}(r_i, r_j)\\right)\\vspace{-2mm}\n\\]\nIn Table \\ref{tab:mainResults}, it can be observed that the multitask models generate more diverse reformulations than the models trained just for the task of query reformulation. This is particularly evident when comparing the effect of the ranking loss.\n\nFrom Figure \\ref{fig:diversity}, it can be noted that as more candidate reformulations are taken into consideration, i.e., as the beam width $K$ is increased, the average relevance of the reformulations decreases across all the models. However, the diversity of $\\mathcal{R}_{gen}$ flattens after $K=3$. This was the reason for setting the beam width to $3$ while presenting the results in Table \\ref{tab:mainResults}.\n\n\\begin{figure*}[!h]\n\\vspace{-6mm}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{Diversity.pdf}\\vspace{-2mm}\n\\caption{\\scriptsize{The trade-off between relevance (as quantified by sim$_{emb}$) and diversity. As $K$ is increased, the relevance of generated predictions drops across all models.}}\n\\label{fig:diversity}\n\\vspace{-5mm}\n\\end{figure*}\n\\vspace{-2mm}\n\\noindent\\textbf{Descriptive Reformulations using Captions:}\nGenerating more descriptive reformulations is central to our motivation for using image captions. To this end, we analyze the generated reformulations to assess if this is indeed the case. We start by noting (see Table \\ref{tab:lengtAnalysis}) that captions corresponding to clicked images for queries in our test set contain, on average, more words than the queries. Following this, we analyze the reformulations generated by two of our multitask models -- (i) \\textit{\\small{HRED + Ranker (RO)}}, which guides the process of query reformulation using subsequent queries within a session, and (ii) \\textit{\\small{HREDCap + Ranker (RO)}}, which guides the process of query reformulation using captions corresponding to clicked images. For this entire analysis, we removed stop words \\cite{nltk} \nfrom all the queries and captions under consideration.\n\nAs can be noted from Table \\ref{tab:lengtAnalysis}, reformulations using captions tend to contain more words than reformulations without them. However, the number of words in a query is only a crude proxy for its descriptiveness. Acknowledging this, we perform a secondary aggregate analysis on the number of novel words inserted into the reformulation and the number of words dropped from the original query. We identify novel words as words that were not present in the original query $q_i$ but have been generated in the reformulation $\\hat{q}_{\\text{reform}}$, and dropped words as words that were present in the original query but are absent from the generated reformulation. Table \\ref{tab:lengtAnalysis} indicates that, on average, the model trained using captions tends to insert more novel words while reformulating the query, and at the same time drops fewer words from the query. Interestingly, the model trained using subsequent queries inserts almost as many words into the reformulation as it drops from the original query. 
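\nAs a rough illustration of how these insertion and drop statistics can be computed, the sketch below identifies novel and dropped words for a single query--reformulation pair and computes the average GloVe-based cosine similarity between them. The token lists are assumed to be lowercased and stop-word filtered, and the dictionary \\texttt{glove} mapping words to vectors, as well as the function name, are our own illustrative choices.
\\begin{verbatim}
from itertools import product
import numpy as np

def insertion_drop_stats(query_tokens, reform_tokens, glove):
    # novel words: generated in the reformulation but absent from the query
    novel = [w for w in reform_tokens if w not in query_tokens]
    # dropped words: present in the query but absent from the reformulation
    dropped = [w for w in query_tokens if w not in reform_tokens]
    sims = []
    for w_in, w_out in product(novel, dropped):
        if w_in in glove and w_out in glove:
            v_in, v_out = glove[w_in], glove[w_out]
            sims.append(np.dot(v_in, v_out) /
                        (np.linalg.norm(v_in) * np.linalg.norm(v_out)))
    avg_sim = float(np.mean(sims)) if sims else 0.0
    return len(novel), len(dropped), avg_sim
\\end{verbatim}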
\n\nTo analyze this further, we compute the average similarity between the novel words that were inserted and the words that were dropped, by averaging the GloVe vector-based similarity between words across all queries in our test set. For \\textit{\\small{HRED + Ranker (RO)}} this average similarity is $\\mathbf{0.64}$, while for \\textit{\\small{HREDCap + Ranker (RO)}} it is $\\mathbf{0.41}$. A higher similarity value for the former suggests that the model largely \\textit{substitutes} the existing words with words having similar semantic meaning. Using captions, on the other hand, is more likely to generate novel words that bring in additional meaning.\n\n\\begin{table*}[!t]\n \\centering\n \\scalebox{0.68}{\n \\begin{tabular}{| c | c | c |}\\hline\n Avg. \\# of words in queries & \\multicolumn{2}{c |}{$2.31 \\pm 0.92$ word(s)} \\\\\n Avg. \\# of words in captions & \\multicolumn{2}{c |}{$5.22 \\pm 2.37$ word(s)}\\\\\\hline\\hline\n \\textbf{Models $\\rightarrow$} & {HRED + Ranker (RO)}$\\text{ }$ & $\\text{ }${HREDCap + Ranker (RO)}\\\\\\hline\n Avg. \\# generated words & $2.18 \\pm 0.61$ word(s) & $4.91 \\pm 1.16$ word(s) \\\\\n Avg. \\# novel words & $1.04 \\pm 0.13$ word(s) & $2.56 \\pm 0.47$ word(s) \\\\\n Avg. \\# dropped words & $1.14 \\pm 0.15$ word(s) & $0.89 \\pm 0.17$ word(s)\\\\\\hline\n Avg. similarity between insertions and drops & $0.64 \\pm 0.03$ & $0.41 \\pm 0.04$\\\\\\hline\n \\end{tabular}\n }\\vspace{0.5mm}\n \\caption{\\scriptsize{Analyzing the effect of using captions on the length of generated query reformulations, along with the influence on inserting novel words while dropping existing ones. }}\n \\label{tab:lengtAnalysis}\n\\vspace{-10mm}\n\\end{table*}\n\n\\vspace{-2mm}\n\\subsubsection{5.4 \\quad Qualitative Results:}\n\\vspace{-3.5mm}\n\nIn Table \\ref{tab:qualResults}, we present a few examples depicting the descriptive nature of the generated reformulations. The reformulations generated by \\textit{\\small{HRED + Ranker}} are compared against those by \\textit{\\small{HREDCap + Ranker}}. We only present the top-ranked reformulation among the top-$K$ reformulations.\nWe note that using captions as targets generates reformulations that are more descriptive and results in more insertions of novel words, in comparison to using subsequent queries as targets. 
These qualitative observations, along with the quantitative observations discussed earlier, reinforce the efficacy of using captions of clicked images for the task of query reformulation.\n\n\\begin{table*}[!h]\\vspace{-5mm}\n \\centering\n \\scalebox{0.55}{\n \\begin{tabular}{|c | c p{3.4cm} | p{7.0cm} | p{3.4cm} | p{6.4cm} |}\\hline\n {} & \\multicolumn{2}{c|}{\\textbf{Queries}} & \\textbf{Clicked Caption} & \\textit{HRED + Ranker (RO)} & \\textit{HREDCap + Ranker (RO)} \\\\\\hline\n \\multirow{3}{*}{$\\mathbf{Session_1}$} & $\\mathbf{q_1}$ & traffic & rush hour traffic & traffic jam & traffic \\textbf{jam during rush hour} \\\\\n \n & $\\mathbf{q_2}$ & traffic \\textbf{jam} & traffic jams in the city, road, rush hour\n & \\textbf{city} traffic jam & traffic \\textbf{during} \\textbf{rush hour} in \\textbf{city} \\\\\n \n & $\\mathbf{q_3}$& traffic jam pollution & blurred silhouettes of cars by steam of exhaust\n & traffic jam \\textbf{cars} & \\textbf{dirt} and \\textbf{smoke} from \\textbf{cars} in traffic jam\\\\\\hline\n \n \\multirow{3}{*}{$\\mathbf{Session_2}$} & $\\mathbf{q_1}$ & sleeping baby & sleeping one year old baby girl & \\textbf{cute} sleeping baby & \\textbf{little} baby sleeping \\textbf{peacefully} \\\\\n \n {} & $\\mathbf{q_2}$ & sleeping baby cute & baby boy in white sunny bedroom & sleeping baby & baby sleeping in \\textbf{bed peacefully} \\\\\n \n {} & $\\mathbf{q_3}$& white bed sleeping baby & carefree little baby sleeping with white soft toy & baby sleeping in bed & \\textbf{little} baby sleeping in white bed \\textbf{peacefully}\\\\\\hline\n \n \\multirow{3}{*}{$\\mathbf{Session_3}$} & $\\mathbf{q_1}$ & chemistry & three dimensional illustration of molecule model & chemical \\textbf{reaction} & \\textbf{molecules} and \\textbf{structures} in chemistry \\\\\n \n {} & $\\mathbf{q_2}$& molecule reaction & chemical reaction between molecules & reaction molecules & molecules reacting in \\textbf{chemistry}\\\\\n \n {} & $\\mathbf{q_3}$& molecule collision & frozen moment of two particle collision & collision molecules & molecules colliding \\textbf{chemistry} \\textbf{reaction}\\\\\\hline\n \\end{tabular}\n }\\vspace{0.5mm}\n \\caption{\\scriptsize{Qualitative results comparing the generated reformulations of \\textit{HRED + Ranker} and \\textit{HREDCap + Ranker}. The words in \\textbf{bold} are novel insertions.}}\n \\label{tab:qualResults}\\vspace{-9mm}\n\\end{table*}\n\n\n\\vspace{-6mm}\n\\section{Conclusion}\n\\vspace{-4mm}\n\n\nIn this paper, we build upon recent advances in approaches based on sequence-to-sequence models for recommending queries. The core technical component of our paper is the use of a novel supervision signal for training seq-to-seq models for query reformulation -- captions of clicked images instead of subsequent queries within a session -- as well as the use of a pairwise preference-based objective for the secondary ranking task. The effects of these are evaluated alongside baseline model architectures for this setting. Our extensive analysis evaluated combinations of models and training methods for their ability to generate a set of descriptive, relevant, and diverse reformulations.\n\nAlthough the experiments were done on data from an image search engine, we believe that similar improvements can be observed if content properties from textual documents are integrated into the seq-to-seq models. 
\nFuture work will look into the influence of richer representations on the behavior of the ranker, and in turn on the characteristics of the reformulations.\n\n\n\n\\bibliographystyle{splncs04}\n