diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzivvp" "b/data_all_eng_slimpj/shuffled/split2/finalzzivvp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzivvp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn recent years ultracold atomic systems have revealed themselves as quantum\nsimulators for many-body theories \\cite{RevModPhys.80.885}. Especially their\nhigh degree of tunability makes them attractive for this purpose. An example\nof a system that can be simulated in this way is the Fr\\\"{o}hlich polaron\nwhich is well-known from solid state physics where it is used to describe\ncharge carriers in a polar solid (see for example Ref. \\cite{BoekDevreese} for\nan extended overview). In the context of ultracold gases the system of\nimpurities embedded in a Bose-Einstein condensation can be mapped onto the\nFr\\\"{o}hlich polaron Hamiltonian \\cite{PhysRevLett.96.210401,\nPhysRevA.73.063604}. In this case the role of the charge carriers is played by\nthe impurities and the lattice vibrations are replaced by the Bogoliubov\nexcitations. Recently this system has gained much interest both theoretically\n\\cite{PhysRevA.76.011605, 1367-2630-10-3-033015, 0295-5075-82-3-30004,\nPhysRevA.82.063614, 1367-2630-13-10-103029, PhysRevB.80.184504,\nspringerlink:10.1134\/S1054660X11150035, PhysRevA.84.063612,\nPhysRevA.83.033631, springerlink:10.1007\/s10909-010-0286-0}\\ and\nexperimentally \\cite{springerlink:10.1007\/s00340-011-4868-6,\nPhysRevLett.105.133202, PhysRevA.77.011603, PhysRevLett.105.045303,\n2011arXiv1106.0828C}.\n\nFor the present work we focus on a single Fr\\\"{o}hlich polaron for which the\nHamiltonian can not be analytically diagonalized and one has to rely on\napproximation methods. The most advanced theory for the ground state\nproperties is the Jensen-Feynman variational principle \\cite{PhysRev.97.660}\nwhich can be extended through the Feynman-Hellwarth-Iddings-Platzman (FHIP)\napproximation for the response properties \\cite{PhysRev.127.1004,\nPhysRevB.5.2367}. The optical absorption of the Fr\\\"{o}hlich solid state\npolaron was later also obtained through a diagrammatic Monte Carlo calculation\nand a comparison with the FHIP approximation showed a good agreement at weak\nand intermediate polaronic coupling but in the strong coupling regime\ndeviations were revealed \\cite{PhysRevLett.91.236401, PhysRevLett.96.136405}.\nSince there is no known material that exhibits the strong coupling behavior\nonly the weak and intermediate coupling regime could be experimentally probed\nwhich resulted in a good agreement with the theory \\cite{PolaronsAndExcitons,\nPhysRevLett.58.1471}. A better understanding of the strong coupling regime\ncould also shed light on the possible role of polarons and bipolarons in\nunconventional pairing mechanisms for high-temperature superconductivity\n\\cite{BoekAlexandrov, PhysRevB.77.094502}. Recently it was shown that for an\nimpurity in a condensate the use of a Feshbach resonance allows an external\ntuning of the polaronic coupling parameter which makes it a promising system\nto probe the strong polaronic coupling regime for the first time\n\\cite{PhysRevB.80.184504}. 
Recent experiments have shown the feasibility of\nusing Feshbach resonances for the tuning of interparticle interactions between\ndifferent species \\cite{PhysRevA.85.042721, PhysRevA.85.032506,\nPhysRevA.85.051602}.\n\nSince the impurities are considered as not charged it is not possible to\nconduct optical absorption measurements to reveal the polaronic excitation\nstructure as is possible for the Fr\\\"{o}hlich solid state polaron. It was\nshown in \\cite{PhysRevA.83.033631} that Bragg spectroscopy is suited to\nexperimentally probe the polaronic excitation structure of an impurity in a\ncondensate. Bragg scattering is a well established experimental technique in\nthe context of ultracold gases (see for example Refs.\n\\cite{PhysRevLett.88.120407, PhysRevLett.83.2876}). The setup consists of two\nlaser beams with different frequencies $\\omega_{1}$ and $\\omega_{2}$ and\ndifferent momenta $\\vec{k}_{1}$ and $\\vec{k}_{2}$ that are radiated on the\nimpurity. The impurity can then absorb a photon from laser 1 and emit it to\nlaser 2 during which process it has gained an energy $\\omega=\\omega_{1\n-\\omega_{2}$ and a momentum $\\vec{k}=\\vec{k}_{1}-\\vec{k}_{2}$. The response is\nreflected in the number of impurities that have gained a momentum $\\vec{k}$ as\na function of $\\vec{k}$ and $\\omega$. This number is proportional to the\nimaginary part of the density response function $\\chi \\left( \\omega,\\vec\n{k}\\right) $ \\cite{Pitaevskii}\n\n\\begin{equation}\n\\chi \\left( \\omega,\\vec{k}\\right) =\\frac{i}{\\hbar}\\int_{0}^{\\infty\n}dte^{i\\omega t}\\left \\langle \\left[ \\widehat{\\rho}_{\\vec{k}}\\left( t\\right)\n,\\widehat{\\rho}_{\\vec{k}}^{\\dag}\\right] \\right \\rangle , \\label{DensResp\n\\end{equation}\nwith $\\widehat{\\rho}_{\\vec{k}}$ the density operator of the impurity.\n\nAnother powerful tool in the context of ultracold gases is the application of\noptical lattices which can be employed to modify the geometry of the system\n\\cite{citeulike:2749190}. This allows to confine the system in one or two\ndirections such that the confinement length is much smaller than all other\ntypical length scales which results in an effectively low dimensional system.\nFor these systems the interparticle interactions can be described through a\ncontact pseudopotential with an amplitude that is a function of the\nthree-dimensional scattering length and the confinement length. This permits\nto experimentally tune the interactions between the particles by varying the\nstrength of the confinement which results in a resonant behavior. These\nconfinement-induced resonances have been studied both theoretically\n\\cite{PhysRevLett.101.170401, 1367-2630-7-1-192, PhysRevLett.91.163201,\nPhysRevLett.81.938, PhysRevA.64.012706} and experimentally\n\\cite{PhysRevLett.104.153203, PhysRevLett.94.210401, Haller04092009,\nPhysRevLett.104.153202, PhysRevLett.106.105301}.\n\nIn the present work we adapt the calculations of the ground state and response\nproperties of the polaronic system consisting of an impurity in a condensate\nto the case of reduced dimensions. This was done for the Fr\\\"{o}hlich solid\nstate polaron in Refs. \\cite{PhysRevB.33.3926, PhysRevB.31.3420,\nPhysRevB.36.4442}\\ which led to the polaronic scaling relations. These are\napplicable for polaronic systems of which the interaction amplitude\n$V_{\\vec{k}}$ (see later) is a homogeneous function. Unfortunately this is not\nthe case for the polaron consisting of an impurity in a Bose-Einstein\ncondensate. 
We start by showing that also in lower dimensions the Hamiltonian\nof an impurity in a condensate can be mapped onto the Fr\\\"{o}hlich polaron\nHamiltonian. Then the Jensen-Feynman variational principle is applied to\ncalculate an upper bound for the free energy and an estimation of the\neffective mass and the radius of the polaron as was done in Ref.\n\\cite{PhysRevB.80.184504}\\ for the three-dimensional case. Subsequently the\ntreatment of Ref. \\cite{PhysRevA.83.033631} for the response to Bragg\nspectroscopy in 3 dimensions is adapted to reduced dimensions. All results are\napplied to the specific system of a lithium-6 impurity in a sodium condensate.\n\n\\section{Impurity in a condensate in $d$ dimensions}\n\nThe Hamiltonian of an impurity in an interacting bosonic gas is given by\n\\begin{equation}\n\\widetilde{H}=\\frac{\\widehat{p}^{2}}{2m_{I}}+\\sum_{\\vec{k}}E_{\\vec{k}\n\\widehat{a}_{\\vec{k}}^{\\dag}\\widehat{a}_{\\vec{k}}+\\frac{1}{2}\\sum_{\\vec\n{k},\\vec{k}^{\\prime},\\vec{q}}V_{BB}\\left( \\vec{q}\\right) \\widehat{a\n_{\\vec{k}^{\\prime}-\\vec{q}}^{\\dag}\\widehat{a}_{\\vec{k}+\\vec{q}}^{\\dag\n\\widehat{a}_{\\vec{k}}\\widehat{a}_{\\vec{k}^{\\prime}}+\\sum_{\\vec{k},\\vec{q\n}V_{IB}\\left( \\vec{q}\\right) e^{i\\vec{q}.\\widehat{r}}\\widehat{a}_{\\vec\n{k}^{\\prime}-\\vec{q}}^{\\dag}\\widehat{a}_{\\vec{k}^{\\prime}}. \\label{HamOrig\n\\end{equation}\nThe first term in this expression represent the kinetic energy of the impurity\nwith $\\widehat{p}$ ($\\widehat{r}$) the momentum (position) operator of the\nimpurity with mass $m_{I}$. The second term in the right-hand side of\n(\\ref{HamOrig}) describes the kinetic energy of the bosons with creation\n(annihilation) operators $\\left \\{ \\widehat{a}_{\\vec{k}}^{\\dag}\\right \\} $\n($\\left \\{ \\widehat{a}_{\\vec{k}}\\right \\} $) and energy $E_{\\vec{k}\n=\\frac{\\hbar^{2}k^{2}}{2m_{B}}-\\mu$ where $\\mu$ is the chemical potential of\nthe bosons and $m_{B}$ their mass. The last two terms represent the\ninteraction energy with $V_{BB}\\left( \\vec{q}\\right) $ the Fourier transform\nof the boson-boson interaction potential and $V_{IB}\\left( \\vec{q}\\right) $\nof the impurity-boson interaction potential. All vectors in expression\n(\\ref{HamOrig}) are considered as $d$-dimensional.\n\nIn Refs. \\cite{PhysRevLett.85.3745} and \\cite{PhysRevLett.84.2551} it is shown\nthat in one and two dimensions, respectively, at temperatures well below a\ncritical temperature $T_{c}$ a trapped weakly interacting Bose gas is\ncharacterized by the presence of a true condensate while just below $T_{c}$\nthis is a quasicondensate. A quasicondensate exhibits phase fluctuations with\na radius $R_{\\phi}$ that is smaller than the size of the system but greatly\nexceeds the coherence length $\\xi$ \\cite{PhysRevLett.85.3745,\nPhysRevLett.84.2551}. Since the radius of the polaron $R_{pol}$ is typically\nof the order $\\xi$ (see later) we have $R_{pol}\\ll R_{\\phi}$ which shows that\nthe polaronic features are also present in a quasicondensate. In the following\nwe no longer make the distinction and use the name condensate for both\nsituations. The presence of a condensate can be expressed through the\nBogoliubov shift which (within the local density aproximation) transforms the\nHamiltonian (\\ref{HamOrig}) into \\cite{PhysRevB.80.184504}\n\\begin{equation}\n\\widehat{H}=E_{GP}+g_{IB}N_{0}+\\widehat{H}_{pol},\\label{TotHamilt\n\\end{equation}\nwhere use was made of contact interactions, i.e. 
$V_{BB}\\left( \\vec\n{q}\\right) =g_{BB}$ and $V_{IB}\\left( \\vec{q}\\right) =g_{IB}$. In order to\nhave a stable condensate the boson-boson interaction should be repulsive, i.e.\n$g_{BB}>0$. The sign of the impurity-boson interaction strength $g_{IB}$ is in\npriciple arbitrary, however for the Bogoliubov approximation to be valid the\ndepletion of the condensate around the impurity must remain smaller than the\ncondensate density which means the formalism is not valid for a large negative\n$g_{IB}$ \\cite{0295-5075-82-3-30004, PhysRevLett.102.030408}. The first term\nin the right-hand side of (\\ref{TotHamilt}) represents the Gross Pitaevskii\nenergy $E_{GP}$ of the condensate and the second term gives the interaction of\nthe impurity with the condensate (with $N_{0}$ the number of condensed bosons\nin a unit volume). The third term is the polaron Hamiltonian which describes\nthe interaction between the impurity and the Bogoliubov excitations\n\\begin{equation}\n\\widehat{H}_{pol}=\\frac{\\widehat{p}^{2}}{2m_{I}}+\\sum_{\\vec{k}\\neq0\n\\hbar \\omega_{\\vec{k}}\\widehat{\\alpha}_{\\vec{k}}^{\\dag}\\widehat{\\alpha\n_{\\vec{k}}+\\sum_{\\vec{k}\\neq0}V_{\\vec{k}}\\rho_{I}\\left( \\vec{k}\\right)\n\\left( \\widehat{\\alpha}_{\\vec{k}}+\\widehat{\\alpha}_{-\\vec{k}}^{\\dag}\\right)\n,\\label{PolHam\n\\end{equation}\nwhere $\\left \\{ \\widehat{\\alpha}_{\\vec{k}}^{\\dag}\\right \\} $ ($\\left \\{\n\\widehat{\\alpha}_{\\vec{k}}\\right \\} $) are the creation (annihilation)\noperators of the Bogoliubov excitations with dispersion\n\\begin{equation}\n\\hbar \\omega_{\\vec{k}}=\\frac{\\hbar^{2}k}{2m_{B}\\xi}\\sqrt{\\left( \\xi k\\right)\n^{2}+2},\\label{BogSpectr\n\\end{equation}\nwith $\\xi$ the healing length: $\\xi=\\sqrt{\\frac{\\hbar^{2}}{2m_{B}N_{0}g_{BB}\n}$. The interaction amplitude $V_{\\vec{k}}$ is given by\n\\begin{equation}\nV_{\\vec{k}}=\\sqrt{N_{0}}g_{IB}\\left( \\frac{\\left( \\xi k\\right) ^{2\n}{\\left( \\xi k\\right) ^{2}+2}\\right) ^{1\/4}.\\label{IntAmp\n\\end{equation}\n\n\n\\section{Polaronic ground state properties in $d$ dimensions}\n\nIn this section we summarize the main results from standard polaron theory\nregarding the ground state properties with emphasis on the dependency on the\ndimension (see for example Ref. \\cite{2010arXiv1012.4576D} for more details)\nand we apply this to the polaronic system consisting of an impurity in a condensate.\n\n\\subsection{Jensen-Feynman variational principle}\n\nThe most accurate available description of the ground state properties of a\npolaron is based on the Jensen-Feynman inequality which states\n\\cite{PhysRev.97.660, BoekFeynman}\n\\begin{equation}\n\\mathcal{F}\\leq \\mathcal{F}_{0}+\\frac{1}{\\hbar \\beta}\\left \\langle \\mathcal{S-S\n_{0}\\right \\rangle _{\\mathcal{S}_{0}},\\label{JensFey\n\\end{equation}\nwith $\\mathcal{F}$ the free energy of the system, $\\mathcal{F}_{0}$ the free\nenergy of a trial system, $\\beta=\\left( k_{B}T\\right) ^{-1}$ the inverse\ntemperature with $k_{B}$ the Boltzmann constant,$\\ \\mathcal{S}$ the action of\nthe system and $\\mathcal{S}_{0}$ the action of the trial system. It was\nsuggested by Feynman to consider the particle harmonically coupled to a mass\n$M$ with a coupling constant $MW^{2}$ for the trial system\n\\cite{PhysRev.97.660}. 
This leads to the following expression for the\nJensen-Feynman inequality (\\ref{JensFey}) \\cite{PhysRevB.33.3926}\n\\begin{align}\n\\mathcal{F} & \\leq \\frac{d}{\\beta}\\left \\{ \\ln \\left[ 2\\sinh \\left(\n\\frac{\\beta \\hbar \\Omega}{2}\\right) \\right] -\\ln \\left[ 2\\sinh \\left(\n\\frac{\\beta \\hbar \\Omega}{2\\sqrt{1+M\/m}}\\right) \\right] \\right \\} \\nonumber \\\\\n& -\\frac{1}{\\beta}\\ln \\left[ V\\left( \\frac{m+M}{2\\pi \\hbar^{2}\\beta}\\right)\n^{d\/2}\\right] -\\frac{d}{2\\beta}\\frac{M}{m+M}\\left[ \\frac{\\hbar \\beta \\Omega\n}{2}\\coth \\left[ \\frac{\\hbar \\beta \\Omega}{2}\\right] -1\\right] \\nonumber \\\\\n& -\\sum_{\\vec{k}}\\frac{\\left \\vert V_{\\vec{k}}\\right \\vert ^{2}}{\\hbar}\\int\n_{0}^{\\hbar \\beta\/2}du\\mathcal{G}\\left( \\vec{k},u\\right) \\mathcal{M\n_{M,\\Omega}\\left( \\vec{k},u\\right) ,\\label{JensFeyn\n\\end{align}\nwith $d$ the dimension, $V$ the volume, $\\Omega=W\\sqrt{1+M\/m_{I}}$ and\n$\\mathcal{G}\\left( \\vec{k},u\\right) $ the Green function of the Bogoliubov\nexcitations\n\\begin{equation}\n\\mathcal{G}\\left( \\vec{k},u\\right) =\\frac{\\cosh \\left[ \\omega_{\\vec{k\n}\\left( u-\\hbar \\beta\/2\\right) \\right] }{\\sinh \\left[ \\hbar \\beta \\omega\n_{\\vec{k}}\/2\\right] },\n\\end{equation}\nand $\\mathcal{M}_{M,\\Omega}\\left( \\vec{k},u\\right) $ the memory function\n\\begin{equation}\n\\mathcal{M}_{M,\\Omega}\\left( \\vec{k},u\\right) =\\exp \\left[ -\\frac{\\hbar\nk^{2}}{2\\left( m+M\\right) }\\left \\{ u-\\frac{u^{2}}{\\hbar \\beta}-\\frac{M\n{m}\\frac{\\cosh \\left[ \\Omega \\hbar \\beta\/2\\right] -\\cosh \\left[ \\Omega \\left(\n\\hbar \\beta\/2-u\\right) \\right] }{\\Omega \\sinh \\left( \\hbar \\beta \\Omega\n\/2\\right) }\\right \\} \\right] .\n\\end{equation}\nThe parameters $\\Omega$ and $M$ are then determined variationally by\nminimizing the expression (\\ref{JensFeyn}). The present treatment also allows\nan estimation of the radius of the polaron as the root mean square of the\nreduced coordinate $\\vec{r}$ of the model system \\cite{PhysRevB.31.4890}\n\\begin{equation}\n\\left \\langle r^{2}\\right \\rangle =d\\frac{\\hbar}{2\\Omega}\\frac{m_{I}+M}{Mm_{I\n}\\coth \\left( \\frac{\\beta \\hbar \\Omega}{2}\\right) .\\label{radius\n\\end{equation}\nIn \\cite{PhysRev.97.660} Feynman also presented a calculation of the polaronic\neffective mass $m^{\\ast}$ at zero temperature\n\\begin{equation}\nm^{\\ast}=m_{I}+\\frac{1}{d}\\sum_{\\vec{k}}k^{2}\\frac{\\left \\vert V_{\\vec{k\n}\\right \\vert ^{2}}{\\hbar}\\int_{0}^{\\infty}due^{-\\omega_{\\vec{k}}u\n\\mathcal{F}_{M,\\Omega}\\left( \\vec{k},u\\right) u^{2},\\label{EffMassFeyT0\n\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{F}_{M,\\Omega}\\left( \\vec{k},u\\right) =\\lim_{\\beta \\rightarrow \\infty\n}\\mathcal{M}_{M,\\Omega}\\left( \\vec{k},u\\right) =\\exp \\left \\{ -\\frac{\\hbar\nk^{2}}{2\\left( m+M\\right) \\Omega}\\left[ \\frac{M}{m}\\left( 1-e^{-\\Omega\nu}\\right) +\\Omega u\\right] \\right \\} .\n\\end{equation}\nAs far as we know there exists no generalization of equation\n(\\ref{EffMassFeyT0}) to finite temperatures but as a first estimation we use\n(\\ref{EffMassFeyT0}) with the temperature dependent variational parameters $M$\nand $\\Omega$.\n\n\\subsection{Polaron consisting of an impurity in a condensate}\n\nHere we introduce the Bogoliubov spectrum (\\ref{BogSpectr}) and the\ninteraction amplitude (\\ref{IntAmp}) which are specific for the polaronic\nsystem consisting of an impurity in a condensate. 
This allows us to write the\nJensen-Feynman inequality (\\ref{JensFeyn}) as (we also use polaronic units,\ni.e. $\\hbar=\\xi=m_{I}=1$)\n\\begin{align}\n\\mathcal{F} & \\leq \\frac{d}{\\beta}\\left \\{ \\ln \\left[ 2\\sinh \\left(\n\\frac{\\beta \\Omega}{2}\\right) \\right] -\\ln \\left[ 2\\sinh \\left( \\frac\n{\\beta \\Omega}{2\\sqrt{1+M}}\\right) \\right] \\right \\} \\nonumber \\\\\n& -\\frac{1}{\\beta}\\ln \\left[ V\\left( \\frac{1+M}{2\\pi \\beta}\\right)\n^{d\/2}\\right] -\\frac{d}{2\\beta}\\frac{M}{1+M}\\left[ \\frac{\\beta \\Omega\n{2}\\coth \\left[ \\frac{\\beta \\Omega}{2}\\right] -1\\right] \\nonumber \\\\\n& -\\frac{\\alpha^{\\left( d\\right) }}{4\\pi}\\left( \\frac{m_{B}+1}{m_{B\n}\\right) ^{2}\\int_{0}^{\\infty}dk\\frac{k^{d}}{\\sqrt{k^{2}+2}}\\int_{0\n^{\\beta\/2}du\\mathcal{G}\\left( k,u\\right) \\mathcal{M}_{M,\\Omega}\\left(\nk,u\\right) ,\\label{JenFeyDim\n\\end{align}\nwhere we introduced the dimensionless coupling parameter $\\alpha^{\\left(\nd\\right) }$ as follows\n\\begin{equation}\n\\alpha^{\\left( d\\right) }=4\\pi \\frac{2\\pi^{d\/2}}{\\Gamma \\left( \\frac{d\n{2}\\right) }N_{0}g_{IB}^{2}\\left( \\frac{m_{I}\\xi^{2}}{\\hbar^{2}}\\right)\n^{2}\\frac{V}{\\left( 2\\pi \\xi \\right) ^{d}}\\left( \\frac{m_{B}}{m_{B}+m_{I\n}\\right) ^{2},\\label{KopConst\n\\end{equation}\nwith $\\Gamma \\left( x\\right) $ the gamma function. The prefactor was chosen\nto be in agreement with the definition for $\\alpha^{\\left( 3\\right) }$ in\nRef. \\cite{PhysRevB.80.184504}\\ . Note that the coupling parameter depends on\nthe impurity-boson interaction amplitude $g_{IB}$ and also on the boson-boson\ninteraction amplitude $g_{BB}$ through the healing length $\\xi$. As mentioned\nin the introduction these interaction amplitudes and thus also the coupling\nparameter can be externally tuned through a Feshbach resonance or in reduced\ndimensions also with a confinement induced resonance.\n\nFor $d=2$ the $k$-integral in (\\ref{JenFeyDim}) contains an ultraviolet\ndivergence. This is also the case in 3 dimensions and it was shown in Ref.\n\\cite{PhysRevB.80.184504} that this is solved by applying the\nLippmann-Schwinger equation up to second order for the interaction amplitude\nin the second term of the Hamiltonian (\\ref{TotHamilt}). This results in a\nrenormalization factor that is incorporated through the following substitution\n\\cite{PhysRevB.80.184504}\n\\begin{equation}\nN_{0}g_{IB}\\rightarrow N_{0}\\left( T\\left( E\\right) +g_{IB}^{2}\\sum\n_{\\vec{k}}\\frac{1}{\\frac{\\hbar^{2}k^{2}}{2m_{r}}-E}\\right) , \\label{Renorm\n\\end{equation}\nwith $T\\left( E\\right) $ the scattering $T$-matrix. In 2 dimensions the\nlimit $E\\rightarrow0$ in (\\ref{Renorm}) results in an infrared divergence. The\nsecond term in (\\ref{Renorm}) can be written as\n\\begin{equation}\nN_{0}g_{IB}^{2}\\sum_{\\vec{k}}\\frac{1}{\\frac{\\hbar^{2}k^{2}}{2m_{r}}-E\n=\\frac{\\alpha^{\\left( 2\\right) }}{2\\pi}\\frac{\\hbar^{2}}{m_{I}\\xi^{2}\n\\frac{m_{B}+m_{I}}{m_{B}}\\int_{0}^{\\infty}\\frac{k}{k^{2}-2m_{r}E\/\\hbar^{2}}dk,\n\\label{Regul\n\\end{equation}\nwhich lifts the ultraviolet divergence in (\\ref{JenFeyDim}). 
For numerical\nconsiderations a cutoff $K_{c}$ is introduced for the $k$-integral which\nenables us to calculate the integral in (\\ref{Regul})\n\\begin{align}\n\\frac{\\alpha^{\\left( 2\\right) }}{2\\pi}\\frac{\\hbar^{2}}{m_{I}\\xi^{2}\n\\frac{m_{B}+m_{I}}{m_{B}}\\int_{0}^{K_{c}}\\frac{k}{k^{2}-2m_{r}\\xi^{2\nE\/\\hbar^{2}}dk & =\\frac{\\alpha^{\\left( 2\\right) }}{4\\pi}\\frac{\\hbar^{2\n}{m_{I}\\xi^{2}}\\frac{m_{B}+m_{I}}{m_{B}}\\ln \\left( \\frac{\\frac{\\hbar^{2\nK_{c}^{2}}{2m_{r}}-E}{E}\\right) \\nonumber \\\\\n& \\approx \\frac{\\alpha^{\\left( 2\\right) }}{4\\pi}\\frac{\\hbar^{2}}{m_{I\n\\xi^{2}}\\frac{m_{B}+m_{I}}{m_{B}}\\ln \\left( \\frac{\\hbar^{2}K_{c}^{2}}{2m_{r\nE}\\right) , \\label{Regular\n\\end{align}\nwhere in the second step we used the fact that the energy related to the\ncutoff is much larger than the typical energy of the scattering event $E$.\nEquation (\\ref{Regular}) shows that the chosen value of $E$ is not important\nsince it only results in an energy shift and therefore has no influence on the\nphysical properties of the system.\n\n\\subsection{Results}\n\nWe apply the presented treatment to the system of a lithium-6 impurity in a\nsodium condensate ($m_{B}\/m_{I}=3.82207$). All results are presented in\npolaronic units, i.e. $\\hbar=\\xi=m_{I}=1$.\n\nIn figure \\ref{fig: Grond2D} the results for the polaronic ground state\nproperties in 2 dimensions as a function of the coupling parameter\n$\\alpha^{\\left( 2\\right) }$ are presented. In (a) the radius of the polaron\nis shown and in (b) the effective mass at different temperatures. The observed\nbehavior is analogous to the three-dimensional case (see Ref.\n\\cite{PhysRevB.80.184504})\\ and suggests that for growing $\\alpha^{\\left(\n2\\right) }$ the self-induced potential becomes stronger leading to a bound\nstate at high enough $\\alpha^{\\left( 2\\right) }$. However, as compared to\nthe three-dimensional case, the transition is much smoother with a transition\nregion between $\\alpha^{\\left( 2\\right) }\\approx1$ and $\\alpha^{\\left(\n2\\right) }\\approx3$. This behavior is in agreement with the mean-field\nresults of Refs. \\cite{PhysRevB.46.301, 0295-5075-82-3-30004}, where also a\nsmooth transition to the self-trapped state was found for $d=2$. For the\ncutoff $K_{c}$ we used the inverse of the Van der Waals radius for sodium\nwhich results in $K_{c}=200$. To check whether this cutoff is large enough the\nvariational parameter $M$ is plotted in the inset of figure\n(\\ref{fig: Grond2D})(b) for different values of $K_{c}$ which reveals already\na reasonable convergence at $K_{c}\\approx5$\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=8.5361cm,\nwidth=8.7052cm\n\n{Figure1.eps\n\\caption{(Color online) The groundstate properties of the polaron consisting\nof a lithium-6 impurity in a sodium condensate in 2 dimensions. In (a) the\nradius of the polaron (\\ref{radius}) is presented and in (b) the effective\nmass (\\ref{EffMassFeyT0}) as a function of the polaronic coupling parameter\n$\\alpha^{\\left( 2\\right) }$ at different temperatures ($\\beta=\\left(\nk_{B}T\\right) ^{-1}$) and with a cutoff $K_{c}=200$. The inset shows the\nvariational parameter $M$ for different values of the cutoff $K_{c}$ at\n$\\beta=50$. All results are presented in polaronic units ($\\hbar=m_{I}=\\xi\n=1$). \n\\label{fig: Grond2D\n\\end{center}\n\\end{figure}\n\n\nIn figure \\ref{fig: grond1D} the results for the 1-dimensional case are\npresented. 
In (a) the radius of the polaron is plotted and (b) shows the\neffective mass at different temperatures. For growing $\\alpha^{\\left(\n1\\right) }$ the characteristics of the appearance of a bound state in the\nself-induced potential are again observed. The characteristics of the weak\ncoupling regime are however not present and the transition region is between\n$\\alpha^{\\left( 1\\right) }=0$ and $\\alpha^{\\left( 1\\right) }\\approx1$.\nThis is again in agreement with the mean-field results of Refs.\n\\cite{PhysRevB.46.301, 0295-5075-82-3-30004} for $d=1$.\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=8.5361cm,\nwidth=8.703cm\n\n{Figure2.eps\n\\caption{(Color online) The polaronic ground state properties of a lithium-6\nimpurity in a sodium condensate in 1 dimension. The radius (a) and the\neffective mass (b) are presented as a function of the polaronic coupling\nparameter $\\alpha^{\\left( 1\\right) }$ at different temperatures\n($\\beta=\\left( k_{B}T\\right) ^{-1}$). All results are presented in polaronic\nunits ($\\hbar=m_{I}=\\xi=1$).\n\\label{fig: grond1D\n\\end{center}\n\\end{figure}\n\n\n\\section{Response to Bragg scattering in $d$ dimensions}\n\nThe response of a system to Bragg spectroscopy is proportional to the\nimaginary part of the density response function (\\ref{DensResp}). In Ref.\n\\cite{PhysRevA.83.033631} it was shown that the use of the\nFeynman-Hellwarth-Iddings-Platzman approximation (as introduced in Ref.\n\\cite{PhysRev.127.1004}\\ for a calculation of the impedance of the\nFr\\\"{o}hlich solid state polaron and generalized in Ref.\n\\cite{PhysRevB.5.2367} for the optical absorption) leads to the following\nexpression for the Bragg response:\n\\begin{equation}\n\\operatorname{Im}\\left[ \\chi \\left( \\omega,\\vec{k}\\right) \\right]\n=-\\frac{k^{2}}{m_{I}}\\frac{\\operatorname{Im}\\left[ \\Sigma \\left( \\omega\n,\\vec{k}\\right) \\right] }{\\left \\{ \\omega^{2}-\\operatorname{Re}\\left[\n\\Sigma \\left( \\omega,\\vec{k}\\right) \\right] \\right \\} ^{2}+\\left \\{\n\\operatorname{Im}\\left[ \\Sigma \\left( \\omega,\\vec{k}\\right) \\right]\n\\right \\} ^{2}}, \\label{BraggResp\n\\end{equation}\nwith $\\Sigma \\left( \\omega,\\vec{k}\\right) $ the self energy\n\\begin{align}\n\\Sigma \\left( \\omega,\\vec{k}\\right) & =\\frac{2}{m_{I}N\\hbar}\\sum_{\\vec\n{q}\\neq0}\\left \\vert V_{\\vec{q}}\\right \\vert ^{2}\\frac{\\left( \\vec{k}.\\vec\n{q}\\right) ^{2}}{k^{2}}\\label{SelfEnergy}\\\\\n& \\times \\int_{0}^{\\infty}dt\\left( 1-e^{i\\omega t}\\right) \\operatorname{Im\n\\left \\{ \\left[ e^{i\\omega_{\\vec{q}}t}+2\\cos \\left( \\omega_{\\vec{q}}t\\right)\nn\\left( \\omega_{\\vec{q}}\\right) \\right] \\exp \\left[ -\\left( \\vec{k\n+\\vec{q}\\right) ^{2}D\\left( t\\right) \\right] \\right \\} ,\n\\end{align}\n$n\\left( \\omega \\right) =\\left( \\exp \\left[ \\beta \\hbar w\\right] -1\\right)\n^{-1}$ the Bose-Einstein distribution and\n\\begin{equation}\nD\\left( t\\right) =\\frac{t^{2}}{2\\beta \\left( m_{I}+M\\right) }-i\\frac{\\hbar\n}{2\\left( m_{I}+M\\right) }t+\\frac{\\hbar M}{2m_{I}\\Omega \\left(\nm_{I}+M\\right) }\\left[ 1-e^{i\\Omega t}+4\\sin^{2}\\left( \\frac{\\Omega t\n{2}\\right) n\\left( \\Omega \\right) \\right] . 
\\label{Dfun\n\\end{equation}\nFor numerical calculations the representation for the self energy as derived\nin appendix \\ref{App: AndereRep} is used.\n\n\\subsection{Sum rule}\n\nAs was first noted in \\cite{PhysRevB.15.1212} for the Fr\\\"{o}hlich polaron and\ngeneralized in \\cite{PhysRevA.83.033631} for an impurity in a condensate the\nf-sum rule can be written as\n\\begin{equation}\n\\frac{\\pi}{2}\\frac{1}{\\left( 1-R\\left( \\alpha,k\\right) \\right) \n+\\frac{m_{I}}{k^{2}}\\int_{\\varepsilon}^{\\infty}d\\omega \\omega \\operatorname{Im\n\\left[ \\chi \\left( \\omega,\\vec{k}\\right) \\right] =\\frac{\\pi}{2},\n\\label{Somregel\n\\end{equation}\nwith $\\varepsilon$ a small number such that the Drude peak (see later) is not\nincluded in the integral and\n\\begin{equation}\nR\\left( \\alpha,k\\right) =\\lim_{\\omega \\rightarrow0}\\frac{\\operatorname{Re\n\\left[ \\Sigma \\left( \\omega,\\vec{k}\\right) \\right] }{\\omega^{2}}.\n\\label{RFun\n\\end{equation}\nIn the limits $\\beta \\rightarrow \\infty$ and $k\\rightarrow0$ the function\n(\\ref{RFun}) is related to the Feynman effective mass (\\ref{EffMassFeyT0})\n\\cite{PhysRevB.15.1212}\n\\begin{equation}\nm^{\\ast}=m_{I}\\left( 1-\\lim_{\\beta \\rightarrow \\infty}R\\left( \\alpha,0\\right)\n\\right) .\n\\end{equation}\nThis relation provides a powerful experimental tool to determine the effective\nmass from the optical response which was recently applied for the Fr\\\"{o}hlich\nsolid state polaron \\cite{PhysRevB.81.125119, PhysRevLett.100.226403}.\n\n\\subsection{Self energy for an impurity in a condensate}\n\nIntroducing the interaction amplitude (\\ref{IntAmp}) and the coupling\nparameters (\\ref{KopConst}) in expressions (\\ref{ImagZelf}) and (\\ref{ReZelf})\nfor the imaginary and real part of the self energy results in (using polaronic\nunits)\n\\begin{align}\n\\operatorname{Im}\\left[ \\Sigma \\left( \\omega,\\vec{k}\\right) \\right] &\n=\\sqrt{2\\pi \\beta \\left( 1+M\\right) }\\frac{\\alpha^{\\left( d\\right) }}{8\\pi\n}\\frac{\\Gamma \\left( d\/2\\right) }{2\\pi^{d\/2}}\\left( \\frac{m_{B}+1}{m_{B\n}\\right) ^{2}B\\left( \\beta,n,n^{\\prime}\\right) \\nonumber \\\\\n& \\times \\sum_{n,n^{\\prime}=0}^{\\infty}\\int d\\vec{q}\\frac{q}{\\sqrt{q^{2}+2\n}\\frac{\\left( \\vec{k}.\\vec{q}\\right) ^{2}}{k^{2}}\\left \\vert \\vec{k}+\\vec\n{q}\\right \\vert ^{2\\left( n+n^{\\prime}\\right) -1}e^{-a^{2}\\left(\n\\beta \\right) \\left( \\vec{k}+\\vec{q}\\right) ^{2}}\\nonumber \\\\\n& \\times \\left \\{ \\left[ 1+n\\left( \\omega_{\\vec{q}}\\right) \\right] \\left[\ne^{-\\frac{\\beta \\left( 1+M\\right) \\left( A^{+}+\\omega \\right) ^{2}}{2\\left(\n\\vec{k}+\\vec{q}\\right) ^{2}}}-e^{-\\frac{\\beta \\left( 1+M\\right) \\left(\nA^{+}-\\omega \\right) ^{2}}{2\\left( \\vec{k}^{\\prime}+\\vec{q}\\right) ^{2}\n}\\right] \\right. \\nonumber \\\\\n& \\left. 
+n\\left( \\omega_{\\vec{q}}\\right) \\left[ e^{-\\frac{\\beta \\left(\n1+M\\right) \\left( A^{-}+\\omega \\right) ^{2}}{2\\left( \\vec{k}+\\vec\n{q}\\right) ^{2}}}-e^{-\\frac{\\beta \\left( 1+M\\right) \\left( A^{-\n-\\omega \\right) ^{2}}{2\\left( \\vec{k}+\\vec{q}\\right) ^{2}}}\\right]\n\\right \\} ;\\label{ImagSelf}\\\\\n\\operatorname{Re}\\left[ \\Sigma \\left( \\omega,\\vec{k}\\right) \\right] &\n=\\sqrt{2\\beta \\left( 1+M\\right) }\\frac{\\alpha^{\\left( d\\right) }}{4\\pi\n}\\frac{\\Gamma \\left( d\/2\\right) }{2\\pi^{d\/2}}\\left( \\frac{m_{B}+1}{m_{B\n}\\right) ^{2}B\\left( \\beta,n,n^{\\prime}\\right) \\nonumber \\\\\n& \\times \\sum_{n,n^{\\prime}=0}^{\\infty}\\int d\\vec{q}\\frac{q}{\\sqrt{q^{2}+2\n}\\frac{\\left( \\vec{k}.\\vec{q}\\right) ^{2}}{k^{2}}\\left \\vert \\vec{k}+\\vec\n{q}\\right \\vert ^{2\\left( n+n^{\\prime}\\right) -1}e^{-a^{2}\\left(\n\\beta \\right) \\left( \\vec{k}+\\vec{q}\\right) ^{2}}\\nonumber \\\\\n& \\times \\left \\{ \\left[ 1+n\\left( \\omega_{\\vec{q}}\\right) \\right] \\left[\n2F\\left( \\sqrt{\\frac{\\beta \\left( m+M\\right) }{2}}\\frac{A^{+}}{\\left \\vert\n\\vec{k}+\\vec{q}\\right \\vert }\\right) -F\\left( \\sqrt{\\frac{\\beta \\left(\nm+M\\right) }{2}}\\frac{A^{+}+\\omega}{\\left \\vert \\vec{k}+\\vec{q}\\right \\vert\n}\\right) \\right. \\right. \\nonumber \\\\\n& \\left. -F\\left( \\sqrt{\\frac{\\beta \\left( m+M\\right) }{2}}\\frac\n{A^{+}-\\omega}{\\left \\vert \\vec{k}+\\vec{q}\\right \\vert }\\right) \\right]\n+n\\left( \\omega_{\\vec{q}}\\right) \\left[ 2F\\left( \\sqrt{\\frac{\\beta \\left(\nm+M\\right) }{2}}\\frac{A^{-}}{\\left \\vert \\vec{k}+\\vec{q}\\right \\vert }\\right)\n\\right. \\nonumber \\\\\n& \\left. \\left. -F\\left( \\sqrt{\\frac{\\beta \\left( m+M\\right) }{2}\n\\frac{A^{-}+\\omega}{\\left \\vert \\vec{k}+\\vec{q}\\right \\vert }\\right) -F\\left(\n\\sqrt{\\frac{\\beta \\left( m+M\\right) }{2}}\\frac{A^{-}-\\omega}{\\left \\vert\n\\vec{k}+\\vec{q}\\right \\vert }\\right) \\right] \\right \\} .\n\\end{align}\nSee appendix \\ref{App: AndereRep} for the definition of the different\nfunctions. 
These expressions are suited for numerical calculations of the\nBragg response.\n\n\\subsection{Weak coupling limit}\n\nAt weak polaronic coupling the Bragg response (\\ref{BraggResp}) to lowest\norder in $\\alpha$ is given by (in polaronic units)\n\\begin{equation}\n\\operatorname{Im}\\left[ \\chi^{W}\\left( \\omega,\\vec{k}\\right) \\right]\n=-\\frac{k^{2}}{\\omega^{4}}\\operatorname{Im}\\left[ \\Sigma^{W}\\left(\n\\omega,\\vec{k}\\right) \\right] .\n\\end{equation}\nIn the weak coupling limit the variational parameter $M$ tends to zero and the\nimaginary part of the self energy (\\ref{ImagSelf}) becomes\n\\begin{align}\n\\operatorname{Im}\\left[ \\Sigma^{W}\\left( \\omega,\\vec{k}\\right) \\right] &\n=\\frac{\\sqrt{2\\beta \\pi}}{2}\\sum_{\\vec{q}\\neq0}\\left \\vert V_{\\vec{q\n}\\right \\vert ^{2}\\frac{\\left( \\vec{k}.\\vec{q}\\right) ^{2}}{k^{2}}\\left \\vert\n\\vec{k}+\\vec{q}\\right \\vert ^{-1}\\nonumber \\\\\n& \\times \\left(\n\\begin{array}\n[c]{c\n\\left[ 1+n\\left( \\omega_{\\vec{q}}\\right) \\right] \\left \\{ \\exp \\left[\n-\\frac{2\\beta \\left( B^{+}+\\omega \\right) ^{2}}{4\\left( \\vec{k}+\\vec\n{q}\\right) ^{2}}\\right] -\\exp \\left[ -\\frac{2\\beta \\left( B^{+\n-\\omega \\right) ^{2}}{4\\left( \\vec{k}+\\vec{q}\\right) ^{2}}\\right] \\right \\}\n\\\\\n+n\\left( \\omega_{\\vec{q}}\\right) \\left \\{ \\exp \\left[ -\\frac{2\\beta \\left(\nB^{-}+\\omega \\right) ^{2}}{4\\left( \\vec{k}+\\vec{q}\\right) ^{2}}\\right]\n-\\exp \\left[ -\\frac{2\\beta \\left( B^{-}-\\omega \\right) ^{2}}{4\\left( \\vec\n{k}+\\vec{q}\\right) ^{2}}\\right] \\right \\}\n\\end{array}\n\\right) , \\label{ImagZwak\n\\end{align}\nwith\n\\begin{equation}\nB^{\\pm}=\\pm \\omega_{\\vec{q}}+\\frac{\\left( \\vec{k}+\\vec{q}\\right) ^{2}}{2}.\n\\label{Aplusmin\n\\end{equation}\nThese expressions coincide with the weak coupling result obtained in the\nframework of Gurevich, Lang and Firsov \\cite{Gurevich}.\n\n\\subsection{Results}\n\nWe present the Bragg response for a lithium-6 impurity in a sodium condensate\n($m_{B}\/m_{I}=3.82207$). All results are presented in polaronic units, i.e.\n$\\hbar=\\xi=m_{I}=1$.\n\nIn figure \\ref{fig: RespTafh} the Bragg response (\\ref{BraggResp}) is\npresented for different temperatures and for a momentum exchange $k=1$ in 1\nand 2 dimensions at weak polaronic coupling ($\\alpha^{\\left( 1\\right)\n}=0.01$ and $\\alpha^{\\left( 2\\right) }=0.1$). In both cases we observe the\nDrude peak centered at $\\omega=0$ and a peak corresponding to the emission of\nBogoliubov excitations. This is qualitatively the same behavior as in the\nthree-dimensional case \\cite{PhysRevA.83.033631}, quantitatively we observe\nthat the amplitude of the Bogoliubov emission peak increases as the dimension\nis reduced. The Drude peak is a well-known feature in the response spectra of\nthe Fr\\\"{o}hlich polaron (see for example Refs. \\cite{Huybrechts1973163,\nPhysRevLett.100.226403, springerlink:10.1007\/PL00011092}) and is a consequence\nof the incoherent scattering of the polaron with thermal Bogoliubov\nexcitations. The width of the Drude peak scales with the scattering rate for\nabsorption of a Bogoliubov excitation which is proportional to the number of\nthermally excited Bogoliubov excitations \\cite{Mahan}. 
This explains the\ntemperature dependence of the width of the Drude peak in figure\n\\ref{fig: RespTafh}.\n\nIn 1D another sharp peak is observed in figure \\ref{fig: RespTafh} at\n$\\omega=\\omega_{k}$ (with $\\omega_{k}$ the Bogoliubov dispersion\n(\\ref{BogSpectr}) and $k$ the exchanged momentum) which broadens as the\ntemperature is increased and dominates the Bogoliubov emission peak at\nrelatively high temperatures. This extra peak in 1D is associated with the\nweak coupling regime since at intermediate coupling the sharp structure\ndisappears and the peak merges with the Bogoliubov emission peak. The location\nindicates that it corresponds to the process where both the exchanged energy\n$\\hbar \\omega$ and momentum $\\vec{k}$ are transferred to a Bogoliubov\nexcitation. Whether this extra peak is experimentally observable is\nquestionable since it is only visible at relatively high temperatures where in\nreduced dimensions thermal phase fluctuations can become important and destroy\nthe polaronic features.\n\nFigure \\ref{Fig: ZwakKAfh} presents the Bragg response for different momenta\nexchange at a temperature $\\beta=100$ (where the sharp peak at the Bogoliubov\ndispersion in 1D is too narrow to perceive). The insets show the location of\nthe maximum of the Bogoliubov emission peak as a function of the exchanged\nmomentum together with a least square fit to the Bogoliubov spectrum\n(\\ref{BogSpectr}) which results in a good agreement. The optimal fitting\nparameter is determined as $m_{B}=4.3159$ ($4.2216$) in 1D (2D)\n\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=2.527in,\nwidth=5.0721in\n\n{Figure3.eps\n\\caption{(Color online) The Bragg response (\\ref{BraggResp}) at weak polaronic\ncoupling, momentum exchange $k=1$ and for different temperatures\n($\\beta=\\left( k_{B}T\\right) ^{-1}$) in 1D (a) and 2D (c). In both cases a\npeak corresponding to the emission of Bogoliubov excitations is observed\ntogether with the anomalous Drude peak at $\\omega=0$. In 1D another sharp peak\nis present at $\\omega=\\omega_{k}$, with $\\omega_{k}$ the Bogoliubov dispersion\n(\\ref{BogSpectr}). In (b) we have zoomed in on this sharp peak in 1D. All\nquantities are in polaronic units ($\\hbar=m_{I}=\\xi=1$).\n\\label{fig: RespTafh\n\\end{center}\n\\end{figure}\nIn figures \\ref{Fig: RES1D} and \\ref{Fig: RES2D} we have zoomed in on the tail\nof the Bogoliubov emission peak for different values of the coupling parameter\nin 1 and 2 dimensions, respectively. At larger values for the polaronic\ncoupling parameter $\\alpha^{\\left( d\\right) }$ the emergence of a secondary\npeak is observed. This behavior is also observed in the optical absorption of\nthe Fr\\\"{o}hlich solid state polaron where the secondary peak corresponds to a\ntransition to the Relaxed Excited State accompanied by the emission of phonons\n\\cite{PhysRevLett.22.94}. The Relaxed Excited State denotes an excitation of\nthe impurity in the relaxed self-induced potential where relaxed means that\nthe self-induced potential is adapted to the excited state wave function of\nthe impurity. In the inset the location of this secondary peak is plotted as a\nfunction of the exchanged momentum together with a least square fit to the\nfollowing quadratically spectrum:\n\\begin{equation}\n\\omega \\left( k\\right) =\\omega+\\frac{\\hbar^{2}k^{2}}{2m},\\label{KwadSpectr\n\\end{equation}\nwhich shows a good agreement at small $k$. 
This suggests that the state\ncorresponding to the secondary peak is characterized by a transition frequency\n$\\omega$ and an effective mass $m$ (this was also observed for the\n3-dimensional case in Ref. \\cite{PhysRevA.83.033631})\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=6.2077cm,\nwidth=12.8678cm\n\n{Figure4.eps\n\\caption{(Color online) The Bragg response at weak polaronic coupling\n($\\alpha^{\\left( d\\right) }=0.1$) for different exchanged momenta $k$ in 1\nand 2 dimensions and at a temperature $\\beta=100$. The inset shows the\nlocation of the maximum of the peak as a function of the exchanged momentum\n(markers) together with a least square fit to the Bogoliubov spectrum\n(\\ref{BogSpectr}) (full line), this results in $m_{B}=4.3159$ ($4.2216$) for\nthe fitting parameter in 1D (2D). Everything is in polaronic units\n($\\hbar=m_{I}=\\xi=1$).\n\\label{Fig: ZwakKAfh\n\\end{center}\n\\end{figure}\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=5.9485cm,\nwidth=11.6333cm\n\n{Figure5.eps\n\\caption{(Color online) Here we zoomed in on the tail of the Bogoliubov\nemission peak for momentum exchange $k=1$ and temperature $\\beta=100$ in 1\ndimension. It is clear that at larger values for $\\alpha^{\\left( 1\\right) }$\na secondary peak emerges. The inset shows the location of the maximum of this\nsecondary peak at $\\alpha^{\\left( 1\\right) }=3$ as a function of the\nexchanged momentum (markers) together with a least square fit to a quadratic\nspectrum (\\ref{KwadSpectr}) (solid line), this results in $\\omega=1.6386$ and\n$m=2.0107$ for the fitting parameters. Everything is in polaronic units\n($\\hbar=m_{I}=\\xi=1$).\n\\label{Fig: RES1D\n\\end{center}\n\\end{figure}\n\\begin{figure}\n[ptb]\n\\begin{center}\n\\includegraphics[\nheight=5.7705cm,\nwidth=11.5279cm\n\n{Figure6.eps\n\\caption{(Color online) Here we zoomed in on the tail of the Bogoliubov\nemission peak for momentum exchange $k=1$ and temperature $\\beta=100$ in 2\ndimensions. As in the one-dimensional case a secondary peak emerges at larger\nvalues for $\\alpha^{\\left( 2\\right) }$. The inset shows the location of the\nmaximum of this secondary peak at $\\alpha^{\\left( 2\\right) }=4$ as a\nfunction of the exchanged momentum (markers) together with a least square fit\nto a quadratic spectrum (\\ref{KwadSpectr}) (solid line), this results in\n$\\omega=2.0601$ and $m=2.0755$ for the fitting parameters. Everything is in\npolaronic units ($\\hbar=m_{I}=\\xi=1$).\n\\label{Fig: RES2D\n\\end{center}\n\\end{figure}\n\n\nFinally we have checked whether the spectra satisfy the sum rule\n(\\ref{Somregel}). We calculated the sum of the two terms on the left hand side\nof expression (\\ref{Somregel}) which is presented in table\n\\ref{Tab: Somregel1d} for $d=1$ and in table \\ref{Tab: Somregel2d} for $d=2$\nat $\\beta=100$ and at different values for $\\alpha$ and $k$. 
These values\nshould be compared to $\\pi\/2=1.5708$ which results in a fair agreement with\nsmall deviations which are to be expected since numerically we had to\nintroduce a cutoff for the $\\omega$-integral in (\\ref{Somregel}) and the\nchoice of the parameter $\\varepsilon$ in (\\ref{Somregel}) is somewhat\narbitrary resulting in a double counting of part of the weight of the Drude peak\n\n\\begin{table}[tbp] \\centering\n\n\\begin{tabular}\n[c]{l||ll}\n& $\\alpha^{\\left( 1\\right) }=0.1$ & $\\alpha^{\\left( 1\\right) \n=3$\\\\ \\hline \\hline\n$k=1$ & $1.5440$ & $1.5547$\\\\\n$k=3$ & $1.5544$ & $1.5743\n\\end{tabular}\n\\ \\ \\ \n\\caption{Here we show the sum of the two terms at the left hand side of the f-sum rule (\\ref{Somregel}) in 1 dimension at $\\beta = 100$ and at different values for $\\alpha^{(1)}$ and $k$.}\\label{Tab: Somregel1d\n\\end{table\n\n\\begin{table}[tbp] \\centering\n\n\\begin{tabular}\n[c]{l||ll}\n& $\\alpha^{\\left( 2\\right) }=1$ & $\\alpha^{\\left( 2\\right) \n=4$\\\\ \\hline \\hline\n$k=1$ & $1.5678$ & $1.5734$\\\\\n$k=3$ & $1.5669$ & $1.5800\n\\end{tabular}\n\\ \\ \\ \\ \n\\caption{Here we show the sum of the two terms at the left hand side of the f-sum rule (\\ref{Somregel}) in 2 dimensions at $\\beta = 100$ and at different values for $\\alpha^{(2)}$ and $k$.}\\label{Tab: Somregel2d\n\\end{table\n\n\n\\section{Discussion and conclusions}\n\nWe have applied the calculations for the polaronic ground state properties of\nan impurity in a Bose-Einstein condensate and the response of this system to\nBragg spectroscopy to reduced dimensions. For this purpose we introduced a\npolaronic coupling parameter $\\alpha^{\\left( d\\right) }$ (\\ref{KopConst})\nwhich depends on the dimension. For growing $\\alpha^{\\left( d\\right) }$ the\nground state properties suggest that the self-induced potential accomodates a\nbound state. As compared to the three-dimensional case the transition to the\nself-trapped state is much smoother in reduced dimension and for $d=1$ the\ncharacteristics of the weak coupling regime are absent.\n\nThe Bragg response of the system revealed a peak corresponding to the emission\nof Bogoliubov excitations, the Drude peak and the emergence of a secondary\npeak in the strong coupling regime. The amplitude of these polaronic features\ngrows when we go to reduced dimensions. This is important since this indicates\nthat going to reduced dimensions can facilitate an experimental detection of\npolaronic features. In 1D another sharp peak is observed at weak polaronic\ncoupling that corresponds to the full transition of the exchanged energy and\nmomentum to a Bogoliubov excitation.\n\nAnother advantage of considering reduced dimensions is the possibility of\nusing confinement-induced resonances which permits a tuning of the polaronic\ncoupling parameter. These results show that considering an impurity in a\nBose-Einstein condensate in reduced dimensions is a very promising candidate\nto experimentally probe the polaronic strong coupling regime for the first time.\n\n\\begin{acknowledgments}\nThe authors gratefully acknowledge fruitful discussions with M. Wouters and A.\nWidera. This work was supported by FWO-V under Projects No. G.0180.09N, No.\nG.0115.06, No. G.0356.06, No. G.0370.09N and G.0119.12N, and the WOG Belgium\nunder Project No. WO.033.09N. J.T. gratefully acknowledges support of the\nSpecial Research Fund of the University of Antwerp, BOF NOI UA 2004. 
W.C.\nacknowledges financial support from the BOF-UA.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsection{Forward accumulation AD and hyper-dual numbers}\n\nIn forward accumulation, the order for evaluating derivatives corresponds to the\nnatural order in which the expression is evaluated. This just asks to\nbe implemented by extending the operations ($+,-,*,\/,\\dots$) and\nintrinsic functions ($\\sin, \\cos, \\exp,\\dots$) from the domain of the\nreal numbers. \n\nHyper-dual numbers~\\cite{Fike_HD} are represented as 4-components\nvectors of real numbers $\\tilde x = \\hr{x_1}{x_2}{x_3}{x_4}$. They\nform a field with operations\n\\begin{eqnarray}\n \\tilde x \\pm \\tilde y &=& \\hr{x_1 \\pm y_1}{x_2 \\pm y_2}{x_3 \\pm y_3}\n {x_4\\pm y_4}\\,, \\\\\n \\tilde x \\tilde y &=& \\hr{x_1y_1}{x_1y_2+x_2y_1}{x_1y_3+x_3y_1}{x_1y_4 +\n x_4y_1 + x_2y_3 + x_3y_2}\\,, \\\\\n \\tilde{x}^{-1} &=& \\hr{1\/x_1}{-x_2\/x_1^2}{-x_3\/x_1^2}{2x_2x_3\/x_1^3 -\n x_4\/x_1^2}\\,.\n\\end{eqnarray}\nArbitrary real functions $f:\\mathbb R\\to \\mathbb R$ are promoted to\nhyper-dual functions by the relation \n\\begin{equation}\n \\tilde f(\\tilde x) = \\hr{f(x_1)}{f'(x_1)x_2}{f'(x_1)x_3}{x_4f'(x_1) + x_2x_3f''(x_1)}\\,.\n\\end{equation}\n\nNote that the function and its derivatives are only evaluated at the first component of\nthe hyper-dual argument $x_1$. In contrast with the techniques of symbolic\ndifferentiation, AD only gives the values of the derivatives of a\nfunction in one specific point. We also stress that the usual field \nfor real numbers is recovered for hyper-real numbers of the form\n$\\tilde x = \\hr{x_1}{0}{0}{0}$ (i.e. the real numbers are a sub-field\nof the hyper-dual field).\n\nPerforming \\emph{any} series of operations in\nthe hyper-dual field, the derivatives of any expression are\nautomatically determined at the same time as the value of the\nexpression itself. It is instructive to explicitly check the case of a\nsimple function composition\n\\begin{equation}\n y=f(g(x))\\,.\n\\end{equation}\nIf we evaluate this at the hyper-real argument $\\tilde x$, it is\nstraightforward to check that one gets\n\\begin{eqnarray}\n \\tilde z = \\tilde g(\\tilde x) &=& \\hr{g(x_1)}{g'(x_1)x_2}{g'(x_1)x_3}{x_4g'(x_1) + x_2x_3g''(x_1)}\\,.\\\\\n \\tilde y = \\tilde f(\\tilde z) &=& \\hr{f(z_1)}{f'(z_1)z_2}{f'(z_1)z_3}{z_4f'(z_1) + z_2z_3f''(z_1)} \\\\\n &=& \\hrl{f(g(x_1))}{f'(g(x_1))g'(x_1)x_2}{f'(g(x_1))g'(x_1)x_3}\\\\\n && \\hrr{f'(g(x_1))(x_4g'(x_1) + x_2x_3g''(x_1)) + \n g'^2(x_1)f''(g(x_1))x_2x_3}\\,.\n\\end{eqnarray}\nNow if we set $\\tilde x = \\hr{x_1}{1}{1}{0}$, we get $y'$ as the second\/third\ncomponent of $\\tilde y$, and $y''$ as the fourth component of $\\tilde y$. \n\n\\subsection{Functions of several variables}\n\nThe extension to functions of several variables is\nstraightforward. Each of the real arguments of the function is\npromoted to an hyper-dual number. The hyper-dual field allows the\ncomputation of the gradient and the Hessian of arbitrary\nfunctions.\n\nFor a function of several variables $f(\\mathbf{x})$, after\npromoting all its arguments to the hyper-dual field we get the\nhyper-dual function $\\tilde f(\\tilde{\\mathbf{x}})$. 
Partial derivatives of the\noriginal function $\\partial_if(\\mathbf{x})$ and the Hessian\n$\\partial_i\\partial_jf(\\mathbf{x})$ can be obtained using appropriate\nhyper-dual arguments (see table~\\ref{tab:multi} for an explicit\nexample with a function of two variables).\n\n\\begin{table}\n \\centering\n \\begin{tabular}{lll}\n \\toprule\n $\\tilde x$& $\\tilde y$& $\\tilde f(\\tilde x,\\tilde y)$\\\\\n \\midrule\n $\\hr{x_1}{1}{1}{0}$ & $\\hr{y_1}{0}{0}{0}$ &\n $\\hr{f}{\\partial_x\n f}{\\partial_xf}{\\partial_x^2f}$\n \\\\ \n $\\hr{x_1}{1}{0}{0}$ & $\\hr{y_1}{0}{1}{0}$ &\n $\\hr{f}{\\partial_x\n f}{\\partial_yf}{\\partial_x\\partial_yf}$ \\\\\n $\\hr{x_1}{0}{0}{0}$ & $\\hr{y_1}{1}{1}{0}$ &\n $\\hr{f}{\\partial_y\n f}{\\partial_yf}{\\partial_y^2f}$\n \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Example output for a function of two variables $f(x,y)$\n after promoting the arguments to hyper-real numbers. In the third\n column the function and all derivatives are evaluated at $(x_1,y_1)$.}\n \\label{tab:multi}\n\\end{table}\n\n\n\n\n\n\\subsection{Error propagation in the determination of the root of a function}\n\\label{sec:newton}\n\nA simple example is finding a root of a non-linear function of one\nreal variable. \nWe are interested in the case when\nthe function depends on some data $d_a\\, (a=1,\\dots,N_{\\rm\n data})$ that are themselves Monte Carlo observables\\footnote{In the\nterminology of section~\\ref{sec:gamma} $d_a$ are some primary or derived\nobservables. }, $f(x;d_a)$. In this case\nthe error in the MC data $d_a$ propagates into an error of the\nroot of the function. We assume that for the central values of the data $\\bar\nd_a$ the root is located at $\\bar x$\n\\begin{equation}\n f(\\bar x; \\bar d_a) = 0\\,.\n\\end{equation}\nFor error propagation we are interested in how much changes the\nposition of the root when we change the data. This ``derivative'' can\nbe easily computed. When we shift the data around its central value\n$\\bar d_a\\to \\bar d_a+\\delta d_a$, to leading order the function changes to\n\\begin{equation}\n f(x;\\bar d_a+\\delta d_a) = f(x;\\bar d_a) +\n \\partial_a f\\Big|_{(x;\\bar d_a)}\\delta d_a\\qquad (\\partial_a\n \\equiv \\partial\/\\partial d_a)\\,.\n\\end{equation}\nThis function will no longer vanish at $\\bar x$, but at a slightly\nshifted value $x = \\bar x+\\delta x$. Again to leading order\n\\begin{equation}\n f(\\bar x+\\delta x;\\bar d_a+\\delta d_a) = f(\\bar x;\\bar d_a) +\n \\partial_x f\\Big|_{(\\bar x;\\bar d_a)}\\delta x + \n \\partial_a f\\Big|_{(\\bar x;\\bar d_a)}\\delta d_a = 0\\,.\n\\end{equation}\nThis allows to\nobtain the derivative of the position of the root with respect to the\ndata\n\\begin{equation}\n \\frac{\\delta x}{\\delta d_a} = - \\frac{\\partial_a f}{\\partial_x\n f}\\Big|_{(\\bar x;\\bar d_a)}\\,.\n\\end{equation}\nThat is the quantity needed for error propagation. Note that in\npractical applications the\niterative procedure required to find the root (i.e. Newton's method,\nbisection, \\dots)\nis only used once (to find the position of the root). Error\npropagation only needs the derivatives of the target function at\n$(\\bar x,\\bar d_a)$. 
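\n\nAs a minimal illustration of this procedure (it is not part of the\npackage) consider the toy function $f(x;d_{1},d_{2})=d_{1}\\cos x-d_{2}x$\nwith arbitrarily chosen central values and errors for the data $d_{1}$\nand $d_{2}$. The sketch below locates the root once with Newton's\nmethod at the central values of the data, evaluates the derivatives\n$\\delta x\/\\delta d_{a}$ at $(\\bar x;\\bar d_{a})$ and, only for the\nsake of the example, combines the errors in quadrature as if $d_{1}$\nand $d_{2}$ were independent measurements; in a realistic analysis this\nlast step is replaced by the error propagation of\nsection~\\ref{sec:gamma}.\n\\begin{lstlisting}\nprogram root_error\n\n  implicit none\n  integer, parameter :: dp = kind(1.0d0)\n  real (kind=dp) :: x, d1, d2, err_d1, err_d2\n  real (kind=dp) :: fx, dfdx, dfd1, dfd2, dxd1, dxd2, err_x\n  integer :: i\n\n  ! central values and errors of the (toy) data\n  d1 = 1.0_dp; err_d1 = 0.05_dp\n  d2 = 2.0_dp; err_d2 = 0.10_dp\n\n  ! Newton iteration, used only once to locate the root\n  x = 0.5_dp\n  do i = 1, 20\n     fx   =  d1*cos(x) - d2*x\n     dfdx = -d1*sin(x) - d2\n     x = x - fx\/dfdx\n  end do\n\n  ! derivatives of the root with respect to the data at (x;d1,d2)\n  dfdx = -d1*sin(x) - d2\n  dfd1 =  cos(x)\n  dfd2 = -x\n  dxd1 = -dfd1\/dfdx\n  dxd2 = -dfd2\/dfdx\n\n  ! naive quadrature combination (assumes independent d1 and d2)\n  err_x = sqrt((dxd1*err_d1)**2 + (dxd2*err_d2)**2)\n  write(*,'(1A,1F10.6,1A,1F10.6)') 'root: ', x, ' +\/- ', err_x\n\n  stop\nend program root_error\n\\end{lstlisting}\nReplacing the analytic derivatives of $f$ by an evaluation in the\nhyper-dual field gives the same numbers without any differentiation by\nhand.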
\n\n\\subsection{Error propagation in fit parameters}\n\\label{sec:fits}\n\nIn (non-linear) least squares one is usually interested in finding the\nvalues of some parameters $p_i\\, (i=1,\\dots,N_{\\rm parm})$ that minimize\nthe function\n\\begin{equation}\n \\label{eq:chisq}\n \\chi^2(p_i;d_a)\\,,\\qquad p_i\\, (i=1,\\dots,N_{\\rm parm})\\,, \\quad\n d_a\\, (a=1,\\dots,N_{\\rm data})\\,.\n\\end{equation}\nHere $d_a$ are the data that is fitted. In many cases the\nexplicit form of the $\\chi^2$ is\n\\begin{equation}\n \\chi^2 = \\sum_{a=1}^{N_{\\rm data}} \\left( \\frac{f(x_a;p_i) -\n y_a}{\\sigma(y_a)} \\right)^2\\,,\n\\end{equation}\nwhere $f(x_a;p_i)$ is a function that depends on the parameters $p_i$,\nand $y_a$ are the data points (represented by $\\{d_a\\}$\nin eq.~(\\ref{eq:chisq})). The result of the fit is some parameters\n$\\bar p_i$ that make \n$\\chi^2(\\bar p_i;\\bar d_a)$ minimum for some fixed values of the data $\\bar d_a$. \n\nWhen propagating errors in a fit we are interested in how much \nthe parameters change when we change the data. This ``derivative'' is defined\nby the implicit condition that the $\\chi^2$ has to stay always at its\nminimum. If we shift the data $d_a \\to \\bar d_a + \\delta\nd_a$ we have to leading order\n\\begin{equation}\n \\chi^2(p_i;\\bar d_a + \\delta d_a) = \\chi^2(p_i;\\bar d_a) +\n \\partial_a\\chi^2\\Big|_{(p_i;\\bar d_a)}\\, \\delta d_a\\,, \\qquad\n (\\partial_a \\equiv \\partial\/\\partial d_a)\\,.\n\\end{equation}\nthis function will no longer have its\nminimum at $\\bar p_i$ but will be shifted by an amount $\\delta\np_i$. Minimizing with respect to $p_i$ and expanding at $p_i=\\bar p_i$ to leading\norder one obtains the condition\n\\begin{equation}\n \\partial_j \\partial_i \\chi^2\\Big|_{(\\bar p_i;\\bar d_a)}\\delta p_j + \\partial_i\n \\partial_a\\chi^2\\Big|_{(\\bar p_i;\\bar d_a)}\\, \\delta d_a = 0\\,, \\qquad\n (\\partial_i \\equiv \\partial\/\\partial p_i)\\,.\n\\end{equation}\nDefining the Hessian of the $\\chi^2$ at the minimum\n\\begin{equation}\n H_{ij} = \\partial_j \\partial_i \\chi^2\\Big|_{(\\bar p_i;\\bar d_a)}\\,,\n\\end{equation}\nwe can obtain the derivative of the fit parameters with respect to the\ndata~\\footnote{As with the case of the normal equations, the reader is advised to\nimplement the inverse of the Hessian using the \\texttt{SVD}\ndecomposition to detect a possibly ill-conditioned Hessian matrix}\n\\begin{equation}\n \\frac{\\delta p_i}{\\delta d_a} = -\\sum_{j=1}^{N_{\\rm\n parm}}(H^{-1})_{ij}\\partial_j \\partial_a \\chi^2\\Big|_{(\\bar p_i;\\bar d_a)}\\,.\n\\end{equation}\nOnce more the iterative procedure is only used one time\n(to find the central values of the fit parameters), and error\npropagation is performed by just evaluating derivatives of the\n$\\chi^2$ function at $(\\bar p_i, \\bar d_a)$.\n\n\n\\subsection{A complete example}\n\nThis is a complete commented example that shows most of the\nfeatures of the code. This particular snippet is part of\nthe distribution~\\cite{aderrors-mod} (file \\texttt{test\/complete.f90}\n(see~\\ref{sec:code-listing})). It uses the \n\\texttt{module simulator} (\\texttt{test\/simulator.f90}) that generates\nautocorrelated data along the lines of appendix~\\ref{sec:model}. 
Here\nwe give an overview on the package using a full example where we\ncompute the error in the derived quantity\n\\begin{equation}\n \\label{eq:derived}\n z= \\frac{\\sin x}{\\cos y + 1} \\,.\n\\end{equation}\nThe quantities $x$ and $y$ are\ngenerated using the procedure described in appendix~\\ref{sec:model}\nand are assumed to originate from simulations with\ncompletely different parameters. The line numbers correspond to the\nlisting in section~\\ref{sec:code-listing}.\n\n\\begin{description}\n\\item[lines 4-7] modules provided with the\n distribution~\\cite{aderrors-mod}. \n\\item[lines 21-36] Use module \\texttt{simulator} to produce measurements for\n two observables from different ensembles. On the first MC ensemble\n we have $\\tau_k = [1.0, 3.0, 12.0, 75.0]$ and the measurements\n \\texttt{data\\_x(:)} correspond to couplings \n $\\lambda_k = [1.0, 0.70, 0.40, 0.40]$. For the second MC ensemble we\n have $\\tau_k = [2.0, 4.0, 6.0, 8.0]$ and the measurements\n \\texttt{data\\_y(:)} have couplings $\\lambda_k = [2.3, 0.40, 0.20,\n 0.90]$. In the first case we have a sample of length 5000 in 4\n replicas of sizes $1000,30,3070,900$. For the second MC chain we\n have a single replica of size 2500. See \n appendix~\\ref{sec:model} for analytic expressions of the error of\n both observables. \n\n\\item[lines 42-45] Load measurements of the first observable in the\n variable \\texttt{x}, and set the details of the analysis:\n \\begin{description}\n \\item[line 43] Set the ensemble ID to 1.\n \\item[line 44] Set replica vector to $[1000,30,3070,900]$.\n \\item[line 45] Set the exponential autocorrelation time\n ($\\tau_{\\rm exp} = 75$).\n \\end{description}\n\\item[lines 48-49] Load measurements of the second ensemble into\n \\texttt{y} and set ensemble ID to 2. In this case we have only one\n replica (the default), and we choose not to add a tail to the\n autocorrelation function since the number of measurements (2500) is\n much larger than the exponential autocorrelation time $\\tau_{\\rm\n exp}=8$ of the second MC ensemble.\n\n\\item[line 52] Computes the derived observable \\texttt{z}.\n\n\\item[lines 56-58] Details of the analysis for observable \\texttt{z}:\n \\begin{description}\n \\item[line 56] Add the tail to the normalized autocorrelation\n function at the point where the signal $\\rho(t)$ is equal\n to 1.0 times the error and the ensemble ID is 1.\n \\item[line 58] Set the parameter $S_\\tau=3$ for ensemble ID 2 to\n automatically choose the optimal window (see~\\cite{Wolff:2003sm}).\n \\end{description}\n\n\\item[line 60] Performs the error analysis in \\texttt{z}.\n \n\\item[lines 62-68] Prints estimate of the derived observable with\n the error. Also prints $\\tau_{\\rm int}$ for each ensemble and what\n portion of the final error in \\texttt{z} comes from each ensemble\n ID.\n\n\\item[lines 70-72] Prints in file \\texttt{history\\_z.log} the details\n of the analysis: fluctuations per ensemble, normalized\n autocorrelation function and $\\tau_{\\rm int}$ as a function of the\n Window size. 
This allows to produce the plots in Fig.~\\ref{fig:hist_z}.\n\\end{description}\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=0.9\\textwidth]{plots\/history_z_d\/ID1-hist.pdf}\n \\caption{MC history for ID 1}\n \\label{fig:h1}\n \\end{subfigure}\\\\\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{plots\/history_z_d\/ID1-auto.pdf}\n \\caption{Normalized autocorrelation function for ID 1}\n \\label{fig:a1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{plots\/history_z_d\/ID1-taui.pdf}\n \\caption{$\\tau_{\\rm int}$ for ID 1}\n \\label{fig:t1}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=0.9\\textwidth]{plots\/history_z_d\/ID2-hist.pdf}\n \\caption{MC history for ID 2}\n \\label{fig:h2}\n \\end{subfigure}\\\\\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{plots\/history_z_d\/ID2-auto.pdf}\n \\caption{Normalized autocorrelation function for ID 2}\n \\label{fig:a2}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=\\textwidth]{plots\/history_z_d\/ID2-taui.pdf}\n \\caption{$\\tau_{\\rm int}$ for ID 2}\n \\label{fig:t2}\n \\end{subfigure}\n \\caption{Histories, autocorrelation functions and $\\tau_{\\rm int}$\n for the derived observable $z$ (eq.~(\\ref{eq:derived})).}\n \\label{fig:hist_z}\n\\end{figure}\n\n\nRunning the code produces the output\n\\begin{verbatim}\n Observable z: 0.24426076465139626 +\/- 5.8791563778643217E-002\n Contribution to error from ensemble ID 1 86.5\n Contribution to error from ensemble ID 2 13.4\n\\end{verbatim}\n\n\\newpage\n\\subsubsection{Example code listing}\n\\label{sec:code-listing}\n\n\n\n\\begin{lstlisting}\nprogram complete\n\n use ISO_FORTRAN_ENV, Only : error_unit, output_unit\n use numtypes\n use constants\n use aderrors\n use simulator\n \n implicit none\n\n integer, parameter :: nd = 5000, nrep=4\n type (uwreal) :: x, y, z\n integer :: iflog, ivrep(nrep)=(\/1000,30,3070,900\/), i, is, ie\n real (kind=DP) :: data_x(nd), data_y(nd\/2), err, ti, texp\n real (kind=DP) :: tau(4), &\n lam_x(4)=(\/1.0_DP, 0.70_DP, 0.40_DP, 0.40_DP\/), &\n lam_y(4)=(\/2.3_DP, 0.40_DP, 0.20_DP, 0.90_DP\/)\n character (len=200) :: flog='history_z.log'\n\n\n ! Fill arrays data_x(:) with autocorrelated\n ! data from the module simulator. Use nrep replica\n tau = (\/1.0_DP, 3.0_DP, 12.0_DP, 75.0_DP\/)\n texp = maxval(tau)\n\n is = 1\n do i = 1, nrep\n ie = is + ivrep(i) - 1\n call gen_series(data_x(is:ie), err, ti, tau, lam_x, 0.3_DP)\n is = ie + 1\n end do\n\n ! Fill data_y(:) with different values of tau also using\n ! module simulator\n forall (i=1:4) tau(i) = real(2*i,kind=DP)\n call gen_series(data_y, err, ti, tau, lam_y, 1.3_DP)\n\n\n ! Load data_x(:) measurements in variable x.\n ! Set replica vector, exponential autocorrelation time\n ! and ensemble ID.\n x = data_x\n call \n call \n call \n \n ! Load data_y(:) measurements in variable y\n y = data_y\n call \n\n ! Exact, transparent error propagation\n z = sin(x)\/(cos(y) + 1.0_DP)\n\n ! Attach tail in ensemble with ID 1 when signal in the\n ! normalized auto-correlation function equals its error\n call \n ! Set Stau=3 for automatic window in ensemble with ID 2\n call \n ! Perform error analysis (tails, optimal window,...)\n call \n\n ! 
Print results and output details to flog \n write(*,'(1A,1F8.5,1A,1F8.5)')'** Observable z: ', \n do i = 1, \n write(*,'(3X,1A,1I3,3X,1F5.2,\n \n write(*,'(2X,1A,1F0.4,1A,1F8.4,1A)')'(tau int: ', \n end do\n\n open(newunit=iflog, file=trim(flog))\n call \n close(iflog)\n \n stop\nend program complete\n\n\n\\end{lstlisting}\n\n\n\n\n\\subsection{Exact error propagation}\n\\label{sec:exact-error-prop}\n\nIn order to support our claim that AD techniques perform linear\npropagation of errors \\emph{exactly}, it is interesting to\ncompare the result of the procedure\ndescribed in section~\\ref{sec:fits} (based on computing the Hessian at\nthe minimum) with an implementation of the \nfitting procedure where all operations are performed in the hyper-dual\nfield. For this last case we use a simple (and inefficient) minimizer:\na simple gradient descent with a very small and constant damping\nparameter. We point out that our inefficient\nalgorithm needs $\\mathcal O(2000)$ iterations of the loop in\nalgorithm~\\ref{alg:gradient} to converge to the solution. \n\nSince each operation in the algorithm is performed in the hyper-dual\nfield we are exactly evaluating the derivative of the\nfitting routine with respect to the data points that enter in the\nevaluation of the $\\chi^2$. These derivatives are all that is needed to\nperform linear error propagation. All derivatives are computed \nto machine precision.\n\nBeing more explicit, if we define a function that performs one iteration of the\ngradient descent (see algorithm~\\ref{alg:gradient})\n\\begin{equation}\n \\mathbf{\\mathcal I}(\\mathbf{x}) = \\mathbf{x} - \\gamma\\mathbf{\\nabla}\\chi^2\\Big|_{\\mathbf{x}}\\,,\n\\end{equation}\nwe can define the function\n\\begin{equation}\n \\mathcal M = \\underbrace{\\mathcal I \\circ \\mathcal I \\circ \\cdots\n \\circ \\mathcal I}_ {\\text{2000 times!}}\\,,\n\\end{equation}\nthat returns the minima of the $\\chi^2$ for any reasonable input.\nAD is computing the derivative of $\\mathcal M$ with respect to the\ndata that enter the evaluation of the $\\chi^2$\nexactly\\footnote{Incidentally it also computes the derivative with\n respect to the initial guess of the minima, and correctly gives zero.}. \n\nOn the other hand the derivation of section~\\ref{sec:fits} is also\nexact to leading order. Therefore both \nprocedures should give the same errors, even if in one case we are\ncomputing the derivative just by making the derivative of every\noperation of the gradient descent algorithm, and in the other case we\nare exactly computing the Hessian of the $\\chi^2$ function at the\nminima and using it for linear error propagation. \nThe results of this small test confirm our expectations: \n\\begin{eqnarray}\n\\textbf{Gradient:}\\quad n = 0.97959028626531608 &\\pm& 0.193130534135\\,80400\\,.\\\\ \n\\textbf{Hessian:}\\quad n = 0.97959028626531608 &\\pm& 0.193130534135\\,94974\\,.\\\\ \n\\nonumber \\\\\n\\textbf{Gradient:}\\quad m = 0.36501862665927676 &\\pm& 4.4699145709\\,887811\\times 10^{-2}\\,.\\\\\n\\textbf{Hessian:}\\quad m = 0.36501862665927676 &\\pm& 4.4699145709\\,956804\\times 10^{-2}\\,.\n\\end{eqnarray}\nAs the reader can see both errors agree with more than 12 decimal\nplaces. The critical reader can still argue that in fact the errors\nare not exactly the same. One might be tempted to say that the\nsmall difference is due to ``higher orders terms'' in the expansion\nperformed in section~\\ref{sec:fits},\nbut this would be wrong (AD is an \\emph{exact} truncation up to some\norder). 
The reason for the difference is ``lower order\nterms''. In the derivation of section~\\ref{sec:fits} we have assumed\nthat the $\\chi^2$ is at its minimum. In fact, any minimization\nalgorithm returns the minimum only up to some precision. This\nsmall deviation from the true minimum gives a residual gradient of the\n$\\chi^2$ function that contaminates the errors in the fit parameters\n(in the 14$^{\\underline{th}}$ significant digit!). \n\\begin{algorithm}\n \\caption{Gradient descent}\n \\label{alg:gradient}\n \\begin{algorithmic}[1]\n \\State $\\gamma \\gets 0.0001$\n \\State $X_i^{(1)} \\gets 1.0$\n \\State $\\epsilon\\gets 10^{-12}$\n \n \\Repeat\n \\State $X_i^{(0)} \\gets X_i^{(1)}$\n \\State $df_i \\gets {\\partial \\chi^2}\/{\\partial x_i}$\n \\State $X_i^{(1)} \\gets X_i^{(0)} - \\gamma * df_i$\n \\Until ($|df|< \\epsilon)\\, \\MyAnd\\, (|X^{(1)}-X^{(0)}|<\\epsilon)$\n \\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\subsection{Description of the problem}\n\nWe are interested in the general situation where some primary\nobservables $A_i^\\alpha$ are measured in several Monte Carlo\nsimulations. Here the index $\\alpha$ labels the ensemble where the\nobservable is measured, while the index $i=1,\\dots,N_{\\rm obs}^\\alpha$\nruns over the observables measured in the ensemble $\\alpha$.\nDifferent ensembles are assumed to correspond to different\nphysical simulation parameters (i.e. different values of the lattice spacing or \npion mass in the case of lattice QCD, different values of the\ntemperature for simulations of the Ising model, etc\\dots). \n\nThere are two generic situations that are omitted from the discussion\nfor the sake of the presentation (the notation easily becomes\ncumbersome). First there is the case where different ensembles\nsimulate the same physical parameters but different algorithmic\nparameters. This case is easily taken into account with basically the\nsame techniques as described here. Second, there is the case where\nsimulations only differ in the seed of the random number generator\n(i.e. different \\emph{replica}). Replicas can be used to improve the\nstatistical precision and the error determination along the lines\ndescribed in~\\cite{Wolff:2003sm}. We just point out that the available\nimplementation~\\cite{aderrors-mod} supports both cases.\n\nA concrete example in the context of simulations of the Ising model\nwould correspond to the following observables\\footnote{Note that the\n  numbering of observables does not need to be consistent across\n  ensembles. How observables are labeled in each ensemble\n  is entirely a matter of choice.}\n\\begin{eqnarray}\n  A_1^1 = \\langle \\epsilon\\rangle_{T_1};&\\qquad&\n  A_2^1 = \\langle m\\rangle_{T_1}\\\\\n  A_1^2 = \\langle m\\rangle_{T_2}\\,,\n\\end{eqnarray}\nwhere $\\epsilon$ is the energy per spin and $m$ the magnetization per\nspin. The notation $\\langle\\cdot \\rangle_{T}$ means that the\nexpectation value is taken at temperature $T$. \n\nIn data analysis we are interested in determining the uncertainty in\nderived observables $F \\equiv f(A_i^\\alpha)$ (i.e. functions of the\nprimary observables). For the case presented above a simple example of\na derived observable is\n\\begin{equation}\n  \\label{eq:exder}\n  F = \\frac{A_2^1 - A_1^2 }{A_1^1} = \\frac{\\langle m\\rangle_{T_1} - \\langle m\\rangle_{T_2} }{\\langle\n    \\epsilon\\rangle_{T_1}}\\,. \n\\end{equation}\n\nA more realistic example of a derived observable in lattice QCD is the\nvalue of the proton mass. 
This is a function of many measured primary observables\n(pion, kaon and proton masses and possibly decay constants measured in\nlattice units at several values of the lattice spacing and\nvolume). The final result (the physical proton mass) is a function of\nthese measured primary observables. Unlike the case of the example in\neq.~(\\ref{eq:exder}), this function cannot be written explicitly: it\ninvolves several fits to extrapolate the lattice data to the\ncontinuum, infinite volume and the physical point (physical values of\nthe quark masses). \n\nAny analysis technique for lattice QCD must properly deal with the\ncorrelations between the observables measured on the same ensembles\n(i.e. $A_1^1=\\langle \\epsilon\\rangle_{T_1}$ and $A_2^1=\\langle\nm\\rangle_{T_1}$ in our first example), and with the autocorrelations\nof the samples produced by any MC simulation. At the same time the\nmethod has to be generic enough so that the error in complicated derived\nobservables determined by a fit or by any other iterative procedure\n(like the example of the proton mass quoted above) can be properly\nestimated. \n\n\\subsection{The $\\Gamma$-method}\n\nIn numerical applications we only have access to a finite set of MC\nmeasurements for each primary observable $A^\\alpha_i$ \n\\begin{equation}\n \\label{eq:1}\n a_i^\\alpha(t)\\,,\\qquad t = 1, \\dots, N_\\alpha\\,,\n\\end{equation}\nwhere the argument $t$ labels the MC measurement, and the\nnumber of measurements available in ensemble $\\alpha$ is labeled by\n$N_\\alpha$ (notice that it is the same for all observables measured on\nthe ensemble $\\alpha$). As estimates for the values $A_i^\\alpha$ we\nuse the MC averages \n\\begin{equation}\n \\label{eq:2}\n \\bar a_i^\\alpha = \\frac{1}{N_\\alpha}\\sum_{t=1}^{N_\\alpha} a_i^\\alpha(t)\\,.\n\\end{equation}\nFor every observable we define the \nfluctuations over the mean in each ensemble \n\\begin{equation}\n \\delta_i^\\alpha(t) = a_i^\\alpha(t) - \\bar a_i^\\alpha\\,.\n\\end{equation}\n\nWe are interested in determining the uncertainty in any derived\nobservable (i.e. a generic function of the primary observables)\n\\begin{equation}\n F \\equiv f(A_i^\\alpha)\\,.\n\\end{equation}\nThe observable $F$ is estimated by\n\\begin{equation}\n \\bar F = f(\\bar a_i^\\alpha)\\,.\n\\end{equation}\nIn order to compute its error, we use linear propagation of errors\n(i.e. a Taylor approximation of the function $f$ at $A_i^\\alpha$)\n\\begin{equation}\n f(A_i^\\alpha + \\epsilon_i^\\alpha) = F + f_i^\\alpha \\epsilon_i^\\alpha + \\mathcal O(\\epsilon_i^2)\\,. \n\\end{equation}\nwhere\n\\begin{equation}\nf_i^\\alpha = \\partial_i^\\alpha f|_{A_i^\\alpha} = \\frac{\\partial\n f}{\\partial A_i^\\alpha}\\Big|_{A_i^\\alpha}\\,.\n\\end{equation}\nIn practical situations these derivatives are evaluated at $\\bar a_i^\\alpha$\n\\begin{equation}\n \\bar f_i^\\alpha = \\partial_i^\\alpha f\\Big|_{\\bar a_i^\\alpha}\\,.\n\\end{equation}\nWe also need the\nautocorrelation function of the primary observables. 
When estimated\nfrom the Monte Carlo data itself we use \n\\begin{equation}\n  \\label{eq:gamma}\n  \\Gamma_{ij}^{\\alpha\\beta}(t) =\n  \\frac{\\delta_{\\alpha\\beta}}{N_\\alpha-t}\\sum_{t'=1}^{N_\\alpha-t} \n  \\delta_i^\\alpha(t+t')\\delta_j^{\\alpha}(t')\\,.\n\\end{equation}\n\nFinally, the error estimate for $F$ is given in terms of the\nautocorrelation functions\n\\begin{equation}\n  \\label{eq:rho}\n  \\rho_F^{\\alpha}(t) =\n  \\frac{\\Gamma_F^{\\alpha}(t)}{\\Gamma_F^{\\alpha}(0)}\\,,\\qquad\n  \\Gamma_F^\\alpha (t) = \\sum_{ij} \\bar f_i^\\alpha \\bar f_j^\\alpha \\Gamma_{ij}^{\\alpha \\alpha}(t)\\,,\n\\end{equation}\nthat are used to define the (per-ensemble) \\emph{variances}\n$(\\sigma_F^\\alpha)^2$ and integrated autocorrelation times $\\tau_{\\rm\n  int}^\\alpha(F)$ given by\n\\begin{equation}\n  \\label{eq:tauint}\n  (\\sigma_F^\\alpha)^2 = \\Gamma_F^\\alpha(0)\\,, \\qquad\n  \\tau_{\\rm int}^\\alpha(F) = \\frac{1}{2} + \\sum_{t=1}^\\infty \\frac{\\Gamma_F^\\alpha(t)}{\\Gamma_F^\\alpha(0)} \\,.\n\\end{equation}\nSince different ensembles are statistically uncorrelated, the total\nerror estimate comes from a combination in quadrature\n\\begin{equation}\n  (\\delta \\bar F)^2 = \\sum_\\alpha \\frac{(\\sigma_F^\\alpha)^2}{N_\\alpha}\n  2\\tau_{\\rm int}^\\alpha(F)\\,.\n\\end{equation}\nIn the $\\Gamma$-method each ensemble is treated independently, which\nallows one to know how much each ensemble contributes to the error in\n$F$. The quantity \n\\begin{equation}\n  \\label{eq:error_contribution}\n  R_\\alpha(F) = \\frac{(\\sigma_F^\\alpha)^22\\tau_{\\rm\n      int}^\\alpha(F)}{N_\\alpha(\\delta \\bar F)^2} \n\\end{equation}\nrepresents the portion of the total squared error of $F$ coming from\nensemble $\\alpha$.\n\nA crucial step is to perform a truncation of the infinite sum in\neq.~(\\ref{eq:tauint}). In practice we use \n\\begin{equation}\n  \\tau_{\\rm int}^\\alpha(F) = \\frac{1}{2} + \\sum_{t=1}^{W^\\alpha_F} \\frac{\\Gamma_F^\\alpha(t)}{\\Gamma_F^\\alpha(0)}\\,.\n\\end{equation}\n\nIdeally $W^\\alpha_F$ has to be large compared with the exponential\nautocorrelation time ($\\tau_{\\rm exp}^\\alpha$) of the simulation (the slowest\nmode of the Markov operator). In \nthis regime the truncation error is $\\mathcal O(e^{-W^\\alpha\/\\tau_{\\rm\n    exp}^\\alpha})$. The problem is that the error in $\\Gamma_F^\\alpha(t)$\nis approximately constant in $t$. At large\nMC times the signal for $\\Gamma_F^\\alpha(t)$ is small, and too large a\nsummation window will just add noise to the determination of\n$\\tau_{\\rm int}^\\alpha$.\n\nIn~\\cite{Wolff:2003sm} a practical recipe to choose\n$W_F^\\alpha$ is proposed, based on minimizing the sum of the systematic\nerror of the truncation and the statistical error. In this proposal\nit is assumed that $\\tau_{\\rm exp}^\\alpha \\approx S_\\tau\\tau_{\\rm\n  int}^\\alpha$, where $S_\\tau$ is a parameter that can be tuned by\ninspecting the data\\footnote{Values in the range $S_{\\tau}\\approx\n  2-5$ are common.}.\n\n\\begin{figure}\n  \\centering\n  \\includegraphics[width=0.6\\textwidth]{fig\/Wgamm}\n  \\caption{We show the approach to the true error in the\n    $\\Gamma$-method as a function of the summation window $W$ for\n    three values of \n    $\\tau$. In all cases the autocorrelation function is assumed to \n    follow a simple exponential decay $\\Gamma(t) = e^{-|t|\/\\tau}$ (see\n    appendix~\\ref{sec:model}). \n    The asymptotic convergence is exponential, but in many\n    practical situations we are far from this asymptotic\n    behavior. 
As proposed in~\\cite{Schaefer:2010hu} we can account for\n the slow modes in the analysis using eq.~(\\ref{eq:tauint_tauexp})\n (corresponding to the line labeled $\\tau_{\\rm exp}$). In this\n very simple example (where the autocorrelation function has a single\n contribution) this procedure gives the correct error estimation\n for any window size and value of $\\tau$. \n }\n \\label{fig:Wgamm} \n\\end{figure}\n\nAlthough this is a sensible choice, it has been\nnoted~\\cite{Schaefer:2010hu} that in many practical situations in\nlattice computations $\\tau_{\\rm exp}^\\alpha$ and $\\tau_{\\rm\n int}^\\alpha$ are in fact very different. In these cases the\nsummation window $W^\\alpha_F$ cannot be taken much larger than $\\tau^\\alpha_{\\rm\n exp}$, and one risks ending up underestimating the errors (see\nfigure~\\ref{fig:Wgamm}). An improved \nestimate for $\\tau_{\\rm int}^\\alpha$ was proposed for these\nsituations. First the autocorrelation function $\\rho_F^\\alpha(t)$ is\nsummed explicitly up to $W_F^\\alpha$. This value has to be large, but\nsuch that $\\rho_F^\\alpha(W_F^\\alpha)$ is statistically different from\nzero. For $t>W_F^\\alpha$ we assume \nthat the autocorrelation function is basically given by the\nslowest mode $\\rho_F^\\alpha(t)\\sim \\exp(-|t|\/\\tau_{\\rm exp}^\\alpha)$\nand explicitly add the remaining tail to the computation of $\\tau_{\\rm\n int}^\\alpha$. The result of this operation can be summarized in the formula\n\\begin{equation}\n \\label{eq:tauint_tauexp}\n \\tau_{\\rm int}^\\alpha = \\frac{1}{2} + \\sum_{t=1}^{W^\\alpha_F}\n \\rho_F^\\alpha(t) + \\tau_{\\rm exp}^\\alpha\\rho_F^\\alpha(W_F^\\alpha+1)\\,.\n\\end{equation}\n\nThe original proposal~\\cite{Schaefer:2010hu} consists in adding the\ntail to the autocorrelation function at a point where $\\rho_F(t)$ is\nthree standard deviations away from zero, and use the error\nestimate~(\\ref{eq:tauint_tauexp}) as an upper bound on the error. On\nthe other hand recent works of the ALPHA collaboration attach the tail\nwhen the signal in $\\rho_F(t)$ is about to be lost (i.e. $\\rho_F(t)$\nis 1-2 standard deviations away from zero) and use\neq.~(\\ref{eq:tauint_tauexp}) as \\emph{the} error estimate (\\emph{not}\nas an upper bound). This last option seems more appealing to the\nauthor. \n\nAll these procedures require an estimate of $\\tau_{\\rm\n exp}^\\alpha$. This is usually obtained by inspecting large\nstatistics in cheaper simulations\n(pure gauge, coarser lattice spacing, \\dots). The interested reader is\ninvited to consult the original references~\\cite{Schaefer:2010hu,\n Virotta2012Critical} for a full discussion.\n\n\\subsubsection{Notes on the practical implementation of the\n $\\Gamma$-method}\n\\label{sec:notes-pract-impl}\n\nIn practical implementations of the $\\Gamma$-method, as suggested\nin~\\cite{Wolff:2003sm}, it is convenient to \nstore the mean and the \\emph{projected fluctuations per\n ensemble} of an observable\n\\begin{equation}\n \\label{eq:deltas}\n \\bar F = f(\\bar a_i^\\alpha)\\,,\\qquad \\delta_F^\\alpha(t) =\n \\sum_{i}\\bar f_i^\\alpha \\delta_i^\\alpha(t)\\,. \n\\end{equation}\nNote that observables that are functions of\nother derived observables are easily analyzed. 
For\nexample, if we are interested in\n\\begin{equation}\n G = g(f(A_i^\\alpha))\\,,\n\\end{equation}\nwe first compute \n\\begin{equation}\n \\bar F = f(\\bar a_i^\\alpha)\\,,\\qquad \\delta_F^\\alpha(t) = \\sum_{i}\\bar f_i^\\alpha \\delta_i^\\alpha(t)\\,.\n\\end{equation}\nNow to determine $G$ we only need an extra derivative\n$\\bar g_F = \\partial_Fg\\Big|_{\\bar F}$, since \n\\begin{equation}\n \\label{eq:error_propagation}\n \\bar G = g(\\bar F) = g(f(\\bar a_i^\\alpha))\\,,\\qquad \\delta_G^\\alpha(t) =\n\\bar g_F\\delta_F^\\alpha(t) = \\sum_{i}\\bar g_F\\bar f_i^\\alpha\n\\delta_i^\\alpha(t) = \\sum_{i}\\bar g_i^\\alpha\n\\delta_i^\\alpha(t)\\,,\n\\end{equation}\nwith $\\bar g_i^\\alpha = \\partial_i^\\alpha g|_{\\bar a_i^\\alpha}$.\n\nAt this point the difference between a primary and a\nderived observable is just convention: any primary observable can be\nconsidered a derived observable defined with some identity\nfunction. It is also clear that the means and the fluctuations are all\nthat is needed to implement linear propagation of errors in the\n$\\Gamma$-method.\n\nFinally, we emphasize that computing derivatives of arbitrary\nfunctions lies at the core of the\n$\\Gamma$-method. In~\\cite{Wolff:2003sm, Schaefer:2010hu} a numerical\nevaluation of the derivatives is used. Here we propose to use AD\ntechniques for reasons of efficiency and robustness, but before giving\ndetails on AD we will comment on the differences between the\n$\\Gamma$-method and the popular methods based on binning and\nresampling. \n\n\n\\subsection{Binning techniques}\n\\label{sec:comp-with-resampl}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{fig\/bin}\n \\caption{Blocks of data are averaged to produce a new data set with\n less autocorrelations. No matter how large the bins are,\n adjacent bins have always data that are very close in MC time.}\n \\label{fig:bin}\n\\end{figure}\n\nResampling methods (bootstrap, jackknife) usually rely on binning to\nreduce the autocorrelations of the data. One does not resample the\ndata itself, but bins of data. The original measurements\n$a_i^\\alpha(t)$ are first averaged in blocks of size $N_B$ (see\nfigure~\\ref{fig:bin}) \n\\begin{equation}\n b_i^\\alpha(w) = \\frac{1}{N_B}\\sum_{t=1+(w-1)N_B}^{wN_B} a_i^\\alpha(t)\\,,\n \\qquad (w=1,\\dots,N_\\alpha\/N_B)\\,. \n\\end{equation}\nThis blocked data is then treated like independent measurements and\nresampled, either with replacement in bootstrap techniques or by just\nleaving out each observation (jackknife).\n\nHow good is the assumption that blocks are independent?\nA way to measure this is to determine the autocorrelation function of\nthe blocked data (see\nappendix~\\ref{sec:ap_binning}). In~\\cite{Wolff:2003sm} it is shown\nthat the leading \nterm for large $N_B$ in the integrated autocorrelation time of the\nbinned data is $\\mathcal O(1\/N_B)$.\nThe autocorrelation function decays exponentially at large MC\ntime, but the fact that adjacent bins have always data that are very\nclose in MC time (see fig.~\\ref{fig:1ovN}) transforms this expected\nexponential suppression in a power law. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{fig\/Nbin2}\n \\caption{We show the approach to the true error by\n binning as a function of the bin size $N_B$ for three values of\n $\\tau$. In all cases the autocorrelation function is assumed to \n follow a simple exponential decay $\\Gamma(t) = e^{-|t|\/\\tau}$ (see\n appendix~\\ref{sec:model}). 
The asymptotic convergence is slow ($\\mathcal\n    O(1\/N_B)$). Moreover, in many practical situations we are far from this\n    asymptotic behavior.}\n  \\label{fig:1ovN} \n\\end{figure}\n\nIn practice it is difficult to have bins of size much larger than\nthe exponential autocorrelation times. In this situation one is far\nfrom the asymptotic $\\mathcal O(1\/N_B)$ scaling and binning severely\nunderestimates the errors. An instructive example is to consider\ndata with a simple \nautocorrelation function $\\Gamma(t) = e^{-|t|\/\\tau}$ (see\nappendix~\\ref{sec:ap_binning}). \nFigure~\\ref{fig:1ovN} shows the approach to the true error as a\nfunction of the bin size for different values of $\\tau$. \n\nThis example should be understood as a warning, and not as an academic\nexample: in state of the art lattice QCD simulations it is not uncommon\nto simulate at parameter values where $\\tau_{\\rm exp}\\sim 100-200$,\nand it is fairly unusual to have statistics that allow for bins of\nsizes larger than 50-100. We end by noting that the situation in the\n$\\Gamma$-method is better for two reasons. First, the truncation errors\nare \\emph{exponential} instead of power-like when enough data is\navailable. Second (and more important), in the cases where \nthe statistics is not much larger than the exponential autocorrelation\ntimes, an improved estimate like eq.~(\\ref{eq:tauint_tauexp}) is only\navailable for the $\\Gamma$-method.\n\n\n\\subsection{The $\\Gamma$-bootstrap method}\n\\label{sec:resampl-gamma-meth}\n\nConceptually there is no need to use binning techniques with\nresampling. Binning is only used to tame the autocorrelations in the\ndata, and, as we have seen, with the current characteristics of\nlattice QCD simulations it seriously risks underestimating the errors.\n\nResampling is a tool for error propagation that automatically takes\ninto account the \\emph{correlations} among different observables. A\npossible analysis strategy consists of using the $\\Gamma$-method to\ndetermine the errors of the primary observables, and resampling\ntechniques for error propagation.\n\nThe $N_{\\rm p}\\times N_{\\rm p}$ covariance matrix among the primary\nobservables $A_i^\\alpha$ can be estimated by\n\\begin{equation}\n  \\label{eq:cov}\n  {\\rm cov}(A_i^\\alpha, A_j^\\beta) \\approx\n  \\frac{1}{N_\\alpha}\n  \\left[\n    \\Gamma_{ij}^{\\alpha\\beta}(0) +\n    2\\sum_{t=1}^{W^\\alpha}\\Gamma_{ij}^{\\alpha\\beta}(t)\n  \\right]\\,.\n\\end{equation}\nThe effect of large tails in the autocorrelation functions can also be\naccounted for in the determination of this covariance matrix by using \n\\begin{equation}\n  \\label{eq:cov_texp}\n  {\\rm cov}(A_i^\\alpha, A_j^\\beta) \\approx\n  \\frac{1}{N_\\alpha}\n  \\left[\n    \\Gamma_{ij}^{\\alpha\\beta}(0) +\n    2\\sum_{t=1}^{W^\\alpha}\\Gamma_{ij}^{\\alpha\\beta}(t) +\n    2\\tau_{\\rm exp}^\\alpha\\Gamma_{ij}^{\\alpha\\beta}(W^\\alpha+1)\n  \\right]\\,.\n\\end{equation}\nIn the previous formulas the window $W^\\alpha$ can be chosen with\ncriteria similar to those described in section~\\ref{sec:gamma}. Different\ndiagonal entries $i=j$ might give different values for\nthe window $W^\\alpha_i$. 
For the case of eq.~(\\ref{eq:cov}), the\nvalues $W_i^{\\alpha}$ are chosen with the criteria described\nin~\\cite{Wolff:2003sm}, and conservative error estimates are obtained\nby using \n\\begin{equation}\n W^\\alpha = \\max_i\\{W_i^\\alpha\\}\\,.\n\\end{equation}\nOn the other hand for the case of eq.~(\\ref{eq:cov_texp}) we choose\n$W_i^\\alpha$ according to the criteria discussed before\neq.~(\\ref{eq:tauint_tauexp}), and we use\n\\begin{equation}\n W^\\alpha = \\min_i\\{W_i^\\alpha\\}\\,.\n\\end{equation}\n\nOnce the covariance matrix is known, one can generate bootstrap\nsamples following a multivariate Gaussian distribution with the mean\nof the observables $\\bar a_i^\\alpha$ as mean, and the covariance\namong observables as covariance. These bootstrap samples are\nused for the analysis as in any resampling analysis where the samples\ncome from binning. We will refer to this analysis technique as\n$\\Gamma$-bootstrap. \n\nIt is clear that if each primary observable is measured in a different\nensemble the covariance matrix ${\\rm cov}(A_i^\\alpha,A_j^\\beta)$ is diagonal\n(cf. equation~(\\ref{eq:gamma})). In this case the \nbootstrap samples are generated by just generating independent random\nsamples $\\mathcal N(\\bar a_i^\\alpha, \\sigma^2(a_i^\\alpha))$ for each\nobservable. Moreover in this particular case the analysis of the data\nwill give completely equivalent results as using the\n$\\Gamma$-method. This is more clearly seen by looking at\nequation~(\\ref{eq:error_propagation}): any derived observable will\nhave just the same set of autocorrelation functions except for a\ndifferent scaling factor that enters the error\ndetermination in a trivial way. \n\n\\begin{table}\n \\centering\n \\begin{tabular}{lllllll}\n \\toprule\n &&&& \\multicolumn{3}{c}{Binning error\/True error} \\\\\n \\cmidrule(lr){5-7}\n Observable& Value & $\\lambda_k$ & $\\tau_{\\rm int}$ & $N_B=10$ &$N_B=25$ &$N_B=50$ \\\\\n \\cmidrule(lr){1-1}\\cmidrule(lr){2-2}\\cmidrule(lr){3-3}\\cmidrule(lr){4-4}\\cmidrule(lr){5-5}\\cmidrule(lr){6-6}\\cmidrule(lr){7-7}\n $x$ & 2.00(7) & $(1.08, 0.08, 0.05, 0.0)$ &4.5 &0.75 & 0.87 & 0.91 \\\\\n $y$ & 1.86(8) & $(1.00, 0.15, 0.0, 0.05)$ &6.1 &0.65 & 0.76 & 0.82 \\\\\n $z=x\/y$ & 1.075(14) & $(0.0025, -0.044, 0.027, -0.029)$ &56.1 & 0.25 & 0.36 & 0.48\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Large autocorrelation times can show up on derived\n observables even in cases where the primary observables have all\n small autocorrelations. The error in the column labeled ``value''\n shows the exact value of the observable and an error computed\n assuming a sample of length 2000. The values of $\\lambda_k$,\n together with $\\tau_k = (4.0, 100.0, 2.0, 3.0)$ give the\n autocorrelation functions $\\Gamma(t) =\n \\sum_k\\lambda_k^2e^{-|t|\/\\tau_k}$ (see appendix~\\ref{sec:model}). \n }\n \\label{tab:ratio}\n\\end{table}\n\nBut when several observables determined on the same ensembles\nenter in the analysis, results based on this $\\Gamma$-bootstrap\napproach are not equivalent to analysis based on the\n$\\Gamma$-method. In resampling \nmethods we loose all information on autocorrelations when we build the\nbootstrap samples. When combining different observables from the same\nensembles the slow modes of the MC chain can be enhanced. Large\nautocorrelations may show up in derived observables even in cases when\nthe primary observables have all small integrated autocorrelation\ntimes. 
Table~\\ref{tab:ratio} shows an example where the ratio of two\nprimary observables shows large autocorrelation times ($\\tau_{\\rm\n int}=56$) even in the case where the two primary observables have\nrelatively small autocorrelation times ($\\tau_{\\rm int}=4.5$ and\n$\\tau_{\\rm int}=6.1$). Note also that \nthe derived observable is very precise (the error\nis 6 times smaller than those of the primary observables).\n\nOnce more this example should not be considered as an academic\nexample\\footnote{In general terms large autocorrelations tend to be\n seen in very precise data. Uncorrelated noise reduces autocorrelation\n times. A trivial example is to note that adding a white uncorrelated\n noise to any data decreases the integrated autocorrelation time of\n the observable (but of course \\emph{not} the error!).}. Profiting\nfrom the correlations among observables to obtain precise results lies\nat the heart of many state of the art computations. As examples we can \nconsider the ratio of decay constants ($F_K\/F_\\pi$ plays a central\nrole in constraining CKM matrix elements for example) or the\ndetermination of isospin breaking effects. In these \ncases there is always the danger that the precise derived observable\nshows larger autocorrelations than the primary observables. \n\nWhile one loses information on the autocorrelations for\nderived observables (only a full analysis with the $\\Gamma$-method has\naccess to this information), the large truncation errors expected\nfrom binning are avoided by using a combination of the $\\Gamma$-method\nand resampling techniques for error propagation. Therefore this\nanalysis technique should always be preferred over analysis using\nbinning techniques.\n\n\n\n\\subsection{Binning}\n\\label{sec:ap_binning}\n\nBinning can also be studied exactly in this model. Bins of length\n$N_B$ are defined by block averaging\n\\begin{equation}\n b(\\alpha) = \\frac{1}{N_B}\\sum_{i=1}^{N_B} x((\\alpha-1)N_B + i)\\,.\n\\end{equation}\nThe autocorrelation function of the binned data is given by\n\\begin{eqnarray}\n \\Gamma_b(t) &=& \\langle b(\\alpha+t) b(\\alpha) \\rangle =\n \\frac{1}{N_B^2}\\sum_{i,j=1}^{N_B} \\Gamma_x(tN_B + i-j) \\\\\n &=& \\frac{1-\\delta_{t,0}}{N^2_B} \\sum_k \\lambda_k^2 K(N_B, \\tau_k) +\n \\frac{\\delta_{t,0}}{N^2_B} \\sum_k \\lambda_k^2 H(N_B,\\tau_k)\\,,\n\\end{eqnarray}\nwhere the functions $K(N,\\tau)$ and $H(N,\\tau)$ are given by\n\\begin{eqnarray}\n K(N,\\tau) &=& \n \\frac{1-\\cosh(N\/\\tau)}{1-\\cosh(1\/\\tau)}\\,, \\\\\n H(N,\\tau) &=& N\\left[\n 1+2e^{-1\/\\tau}\\frac{1-e^{-(N-1)\/\\tau}}{1-e^{-1\/\\tau}}\n \\right] \\\\\n \\nonumber\n &-& 2e^{-1\/\\tau}\\frac{1-Ne^{-(N-1)\/\\tau}+(N-1)e^{-N\/\\tau}}\n {(1-e^{-1\/\\tau})^2}\\,. \n\\end{eqnarray}\nNow we can determine the normalized autocorrelation function and the\nintegrated autocorrelation time of the binned data\n\\begin{eqnarray}\n \\Delta &=& \\sum_k \\lambda_k^2 H(N_B,\\tau_k)\\,, \\\\\n \\rho_b(t) &=& \\Delta^{-1}\\sum_k\\lambda_k^2\n K(N_B,\\tau_k) e^{-tN_B\/\\tau_k}\\,, \\qquad (t>0)\\,.\\\\\n \\tau_{{\\rm int},b} &=& \\frac{1}{2} + \\Delta^{-1} \\sum_k \\lambda_k^2\n \\frac{K(N_B,\\tau_k)}{e^{N_B\/\\tau_k} - 1}\\,.\n\\end{eqnarray}\nResampling techniques (bootstrap, jackknife) treat the bins as\nindependent variables. 
Therefore the error estimate for a sample of\nlength $N$ is\n\\begin{equation}\n (\\delta_{\\rm binning} x)^2 = \\frac{\\Delta}{NN_B^2}\\,,\n\\end{equation}\nwhile the true error eq.~(\\ref{eq:error}) can conveniently be written as\n\\begin{equation}\n (\\delta x)^2 = \\frac{1}{NN_B^2}\\left( \\Delta + 2\\sum_k \\lambda_k^2\n \\frac{K(N_B,\\tau_k)}{e^{N_B\/\\tau_k} - 1} \\right)\\,,\n\\end{equation}\n\n\n\n\n\\section{Introduction}\n\\input{intro.tex}\n\n\n\\section{Analysis of Monte Carlo data}\n\\label{sec:gamma}\n\\input{gamma.tex}\n\n\n\\section{Automatic differentiation}\n\\label{sec:ad}\n\\input{ad.tex}\n\n\n\\section{Applications of AD to the analysis of MC data}\n\\input{applications.tex}\n\n\n\\section{Worked out example}\n\\label{sec:worked-out-example}\n\\input{example.tex}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\\input{conclusions.tex}\n\n\\section*{Acknowledgments}\n\\addcontentsline{toc}{section}{Acknowledgments}\n\nThis work has a large debt to all the members of the ALPHA\ncollaboration and specially to the many discussions with Stefano\nLottini, Rainer Sommer and Francesco Virotta. The author thanks Rainer\nSommer for a critical reading of an earlier version of the manuscript,\nand Patrick Fritzsch for his comments on the text.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $p$ be a prime number, let $q=p^a$, and let ${\\mathbb F}_q$ be the field of $q$ elements. Let\n\\begin{equation}\nf_\\lambda(x) = \\sum_{j=1}^N \\lambda_j x^{{\\bf a}_j}\\in {\\mathbb F}_q[x_0,\\dots,x_n]\n\\end{equation}\nbe a homogeneous polynomial of degree $d$. We write ${\\bf a}_j = (a_{0j},\\dots,a_{nj})$ with $\\sum_{i=0}^n a_{ij} = d$. Let $X_\\lambda\\subseteq{\\mathbb P}^n$ be the hypersurface defined by the equation $f_\\lambda(x) = 0$ and let $Z(X_\\lambda\/{\\mathbb F}_q,t)$ be its zeta function. We write\n\\begin{equation}\nZ(X_\\lambda\/{\\mathbb F}_q,t) = \\frac{P_\\lambda(t)^{(-1)^n}}{(1-t)(1-qt)\\cdots(1-q^{n-1}t)}\n\\end{equation}\nfor some rational function $P_\\lambda(t)\\in 1+t{\\mathbb Z}[[t]]$. If $d=1$ then $P_\\lambda(t) = 1$, so we shall always assume that $d\\geq 2$.\n\nDefine a nonnegative integer $\\mu$ by the equation\n\\[ \\bigg\\lceil \\frac{n+1}{d}\\bigg\\rceil = \\mu+1, \\]\nwhere $\\lceil r\\rceil$ denotes the least integer greater than or equal to the real number $r$. By a result of Ax\\cite{A} (see also Katz\\cite[Proposition~2.4]{K}) we have\n\\[ P_\\lambda(q^{-\\mu}t) \\in 1+t{\\mathbb Z}[[t]]. \\]\nOur goal in this paper is to give a mod $p$ congruence for $P_\\lambda(q^{-\\mu}t)$. We do this by defining a generalization of the classical Hasse-Witt matrix, which gives such a congruence for $\\mu=0$. Presumably our matrix is the matrix of a ``higher Hasse-Witt'' operation as defined by Katz\\cite[Section 2.3.4]{K2}, but so far we have not been able to prove this.\n\nIt will be convenient to define an augmentation of the vectors ${\\bf a}_j$. Set\n\\[ {\\bf a}_j^+ = (a_{0j},\\dots,a_{nj},1)\\in{\\mathbb N}^{n+2},\\quad j=1,\\dots,N, \\]\nwhere ${\\mathbb N}$ denotes the nonnegative integers. Note that the vectors ${\\bf a}_j^+$ all lie on the hyperplane $\\sum_{i=0}^n u_i=du_{n+1}$ in ${\\mathbb R}^{n+2}$. We shall be interested in the lattice points on this hyperplane that lie in $({\\mathbb R}_{>0})^{n+2}$: set\n\\[ U = \\bigg\\{ u=(u_0,\\dots,u_{n+1})\\in{\\mathbb N}^{n+2}\\mid \\text{$\\sum_{i=0}^n u_i = du_{n+1}$ and $u_i>0$ for all $i$}\\bigg\\}. 
\\]\nNote that $u\\in U$ implies that $u_{n+1}\\geq\\mu+1$. Let\n\\[ U_{\\min} = \\{ u=(u_0,\\dots,u_{n+1})\\in U\\mid u_{n+1} = \\mu+1\\}, \\]\na nonempty set by the definition of $\\mu$. We define a matrix of polynomials with rows and columns indexed by $U_{\\min}$: let $A(\\Lambda) = [A_{uv}(\\Lambda)]_{u,v\\in U_{\\min}}$, where\n\\[ A_{uv}(\\Lambda) = (-1)^{\\mu+1}\\sum_{\\substack{\\nu\\in{\\mathbb N}^N\\\\ \\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = pu-v}} \\frac{\\Lambda_1^{\\nu_1}\\cdots\\Lambda_N^{\\nu_N}}{\\nu_1!\\cdots\\nu_N!}\\in {\\mathbb Q}[\\Lambda_1,\\dots,\\Lambda_N]. \\]\n\nNote that since the $(n+1)$-st coordinate of each ${\\bf a}_j^+$ equals 1, the condition on the summation implies that\n\\[ \\sum_{j=1}^N \\nu_j = (p-1)(\\mu+1). \\]\nWhen $\\mu=0$, it follows that $\\nu_j\\leq p-1$ for all $j$, hence the matrix $A(\\Lambda)$ can be reduced modulo $p$. We denote by $\\bar{A}(\\Lambda)\\in{\\mathbb F}_p[\\Lambda]$ its reduction modulo $p$. Using the algorithm of Katz\\cite[Algorithm 2.3.7.14]{K2}, one then checks that $\\bar{A}(\\lambda)$ is the Hasse-Witt matrix of the hypersurface $f_\\lambda =0$. It is somewhat surprising that even when $\\mu>0$ we still have $\\nu_j\\leq p-1$ for all $j$. \n\n\\begin{lemma}\nIf $u,v\\in U_{\\min}$, $\\nu\\in{\\mathbb N}^N$, and $\\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = pu-v$, then $\\nu_j\\leq p-1$ for all $j$. In particular, $A_{uv}(\\Lambda)\\in({\\mathbb Q}\\cap{\\mathbb Z}_p)[\\Lambda]$, so $A_{uv}(\\Lambda)$ can be reduced modulo $p$.\n\\end{lemma}\n\nThe proof of Lemma 1.3 will be given in Section 2. By the results of \\cite[Theorem~2.7 or Theorem~3.1]{AS}, which will be recalled in Section 2, Lemma~1.3 implies immediately that each $A_{uv}(\\Lambda)$ is a mod $p$ solution of an $A$-hypergeometric system of differential equations. \n\nWrite the rational function $P_\\lambda(t)$ of (1.2) as \n\\[ P_\\lambda(t) = \\frac{Q_\\lambda(t)}{R_\\lambda(t)}, \\]\nwhere $Q_\\lambda(t)$ and $R_\\lambda(t)$ are relatively prime polynomials with \n\\[ Q_\\lambda(q^{-\\mu}t),\\;R_\\lambda(q^{-\\mu}t) \\in 1+t{\\mathbb Z}[t]. \\]\nIf $X_\\lambda$ is smooth, it is known that $P_\\lambda(t)$ is a polynomial, i.~e., $R_\\lambda(t)=1$. \nOur main result is the following, which does not require any smoothness assumption.\n\n\\begin{theorem}\nIf $n$ is not divisible by $d$, then $R_\\lambda(q^{-\\mu}t)\\equiv 1\\pmod{q}$ and \n\\[ Q_\\lambda(q^{-\\mu}t) \\equiv \\det\\big(I-t\\bar{A}(\\lambda^{p^{a-1}})\\bar{A}(\\lambda^{p^{a-2}})\\cdots \\bar{A}(\\lambda)\\big)\\pmod{p}. \\]\n\\end{theorem}\n\nNote that even in the classical case of the Hasse-Witt matrix ($\\mu=0$), this result contains something new, as we do not assume that $X_\\lambda$ is a smooth hypersurface.\n\nThe proof of Theorem~1.4 will occupy Sections~\\mbox{3--5}. To describe the zeta function, we apply the $p$-adic cohomology theory of Dwork, as in Katz\\cite[Sections 4--6]{K3}. Indeed, Equation~(3.5) below is a refined version of \\cite[Equation~(4.5.33)]{K3}. We discuss the case $d\\,|\\,n$ in Section~6. If $d\\,|\\, n$, the conclusion of Theorem 1.4 need not hold, and the rational function $P_\\lambda(q^{-\\mu}t)\\pmod{p}$ is instead described by Theorem 6.2. We prove the generic invertibility of the matrix $\\bar{A}(\\Lambda)$ in Section 7.\n\n\\section{Proof of Lemma 1.3}\n\nIt will be convenient for later applications to prove a more general version of Lemma 1.3. Put $S=\\{0,1,\\dots,n\\}$ and let $I\\subseteq S$. 
Define an integer $\\mu_I$ by the equation\n\\[ \\bigg\\lceil\\frac{|I|}{d}\\bigg\\rceil = \\mu_I + 1. \\]\nNote that $\\mu_I\\geq 0$ if $I\\neq\\emptyset$, $\\mu_\\emptyset = -1$, and, in the notation of the Introduction, $\\mu_S = \\mu$. Set\n\\[ U^I = \\bigg\\{ u=(u_0,\\dots,u_{n+1})\\in{\\mathbb N}^{n+2}\\mid \\text{$\\sum_{i=0}^n u_i = du_{n+1}$ and $u_i>0$ for all $i\\in I$}\\bigg\\}. \\]\nNote that $u\\in U^I$ implies that $u_{n+1}\\geq\\mu_I+1$. Let\n\\[ U^I_{\\min} = \\{ u=(u_0,\\dots,u_{n+1})\\in U^I\\mid u_{n+1} = \\mu_I+1\\}, \\]\na nonempty set by the definition of $\\mu_I$. Lemma 1.3 is the special case $I=S$ of the following result.\n\n\\begin{lemma}\nIf $u,v\\in U^I_{\\min}$, $\\nu\\in{\\mathbb N}^N$, and $\\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = pu-v$, then $\\nu_j\\leq p-1$ for all $j$. \n\\end{lemma}\n\n\\begin{proof}\nThe result is trivial when $I=\\emptyset$ since $U^\\emptyset_{\\min} = \\{(0,\\dots,0)\\}$, so assume $I\\neq\\emptyset$. \nLet $u=(u_0,\\dots,u_n,\\mu_I+1),v=(v_0,\\dots,v_n,\\mu_I+1)\\in U^I_{\\min}$. Fix $k\\in\\{1,\\dots,N\\}$. We claim there exists an index $i_0\\in\\{0,\\dots,n\\}$ such that\n\\begin{equation}\na_{i_0k}\\geq \\begin{cases} u_{i_0} & \\text{if $i_0\\in I$,} \\\\ u_{i_0}+1 & \\text{if $i_0\\not\\in I$.} \\end{cases}\n\\end{equation}\nFor if (2.2) fails for all $i_0\\in\\{0,\\dots,n\\}$, then\n\\[ u-{\\bf a}_k^+ = (u_0-a_{0k},\\dots,u_n-a_{nk},\\mu_I)\\in U^I, \\]\ncontradicting the definition of $\\mu_I$.\n\nIf $\\nu_k\\geq p$, then\n\\[ \\nu_ka_{i_0k}\\geq pa_{i_0k}\\geq \\begin{cases} pu_{i_0} & \\text{if $i_0\\in I$,} \\\\ pu_{i_0}+p & \\text{if $i_0\\not\\in I$,} \\end{cases} \\]\nhence in both cases we have\n\\[ \\nu_ka_{i_0k}>pu_{i_0}-v_{i_0}. \\]\nBut our hypothesis $\\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = pu-v$ implies that\n\\[ \\nu_ka_{i_0k}\\leq pu_{i_0}-v_{i_0}. \\]\nThis contradiction shows that $\\nu_k\\leq p-1$. And since $k$ was arbitrary, the lemma is established.\n\\end{proof}\n\nWe recall the definition of the $A$-hypergeometric system of differential equations associated to the set $A = \\{{\\bf a}_j^+\\}_{j=1}^N$. Let $L\\subseteq {\\mathbb Z}^N$ be the lattice of relations on $A$,\n\\[ L = \\bigg\\{ l=(l_1,\\dots,l_N)\\in{\\mathbb Z}^N\\mid \\sum_{j=1}^N l_j{\\bf a}_j^+ = {\\bf 0}\\bigg\\}, \\]\nand let $\\beta=(\\beta_0,\\dots,\\beta_{n+1})\\in {\\mathbb C}^{n+2}$. The $A$-hypergeometric system with parameter~$\\beta$ is the system of partial differential operators in variables $\\Lambda_1,\\dots,\\Lambda_N$ consisting of the box operators\n\\[ \\Box_l = \\prod_{l_j>0} \\bigg(\\frac{\\partial}{\\partial\\Lambda_j}\\bigg)^{l_j} - \\prod_{l_j<0} \\bigg(\\frac{\\partial}{\\partial\\Lambda_j}\\bigg)^{-l_j} \\quad\\text{for $l\\in L$} \\]\nand the Euler (or homogeneity) operators\n\\[ Z_i = \\sum_{j=1}^N a_{ij}\\Lambda_j\\frac{\\partial}{\\partial\\Lambda_j} - \\beta_i\\quad\\text{for $i=0,\\dots,n$} \\]\nand\n\\[ Z_{n+1} = \\sum_{j=1}^N \\Lambda_j\\frac{\\partial}{\\partial\\Lambda_j} - \\beta_{n+1}. \\]\n\nLet $A^I(\\Lambda) = [A^I_{uv}(\\Lambda)]_{u,v\\in U^I_{\\min}}$, where\n\\begin{equation}\nA^I_{uv}(\\Lambda) = (-1)^{\\mu_I+1}\\sum_{\\substack{\\nu\\in{\\mathbb N}^N\\\\ \\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = pu-v}} \\frac{\\Lambda_1^{\\nu_1}\\cdots\\Lambda_N^{\\nu_N}}{\\nu_1!\\cdots\\nu_N!}\\in {\\mathbb Q}[\\Lambda_1,\\dots,\\Lambda_N]. \n\\end{equation}\nNote that in the notation of the Introduction we have $A^S_{uv}(\\Lambda) = A_{uv}(\\Lambda)$. 
\nBy Lemma 2.1, the polynomials $A^I_{uv}(\\Lambda)$ have $p$-integral coefficients. \nLemma 2.1 also says that $pu-v$ is very good in the sense of \\cite[Section~2]{AS}. We may therefore apply \\cite[Theorem~2.7]{AS} (or \\cite[Theorem 3.1]{AS} since this system is nonconfluent) to conclude that $\\bar{A}^I_{uv}(\\Lambda)$ is a mod $p$ solution of the $A$-hypergeometric system with parameter $\\beta = pu-v$ (or, equivalently, $\\beta=-v$ since we have reduced modulo $p$). \n\n\\section{The zeta function}\n\nTo make a connection between the matrix $A(\\Lambda)$ and the zeta function~(1.2), we apply a consequence of the Dwork trace formula developed in \\cite{AS2} (see Equation~(3.5) below). Let $\\gamma_0$ be a zero of the series $\\sum_{i=0}^\\infty t^{p^i}\/p^i$ having ${\\rm ord}_p\\:\\gamma_0 = 1\/(p-1)$, where ${\\rm ord}_p$ is the $p$-adic valuation normalized by ${\\rm ord}_p\\:p = 1$. Let $L_0$ be the space of series\n\\[ L_0 = \\bigg\\{\\sum_{u \\in{\\mathbb N}^{n+2}} c_u\\gamma_0^{pu_{n+1}}x^u\\mid\\text{$\\sum_{i=0}^n u_i-du_{n+1}=0$, $c_u\\in{\\mathbb C}_p$, and $\\{c_u\\}$ is bounded}\\bigg\\}. \\] \nFor $I\\subseteq\\{0,\\dots,n\\}$, let $L_0^I$ be the subset of $L_0$ defined by\n\\[ L_0^I = \\bigg\\{ \\sum_{u \\in{\\mathbb N}^{n+2}} c_u\\gamma_0^{pu_{n+1}}x^u\\in L_0\\mid \\text{$u_i>0$ for $i\\in I$}\\bigg\\}. \\]\nLet $ {\\rm AH}(t)= \\exp(\\sum_{i=0}^{\\infty}t^{p^i}\/p^i)$ be the Artin-Hasse series, a power series in $t$ that has $p$-integral coefficients, and set \n\\[ \\theta(t) = {\\rm AH}({\\gamma}_0t)=\\sum_{i=0}^{\\infty}\\theta_i t^i. \\]\nWe then have\n\\begin{equation}\n{\\rm ord}_p\\: \\theta_i\\geq \\frac{i}{p-1}.\n\\end{equation}\n\nWe define the Frobenius operator on $L_0$. Put\n\\begin{equation}\n\\theta(\\hat{\\lambda},x) = \\prod_{j=1}^N \\theta(\\hat{\\lambda}_jx^{{\\bf a}^+_j}),\n\\end{equation}\nwhere $\\hat{\\lambda}$ denotes the Teichm\\\"uller lifting of $\\lambda$. \nWe shall also need to consider the series $\\theta_0(\\hat{\\lambda},x)$ defined by\n\\begin{equation}\n\\theta_0(\\hat{\\lambda},x) = \\prod_{i=0}^{a-1} \\prod_{j=1}^N \\theta\\big((\\hat{\\lambda}_jx^{{\\bf a}^+_j})^{p^i}\\big) = \\prod_{i=0}^{a-1}\\theta(\\hat{\\lambda}^{p^i},x^{p^i}).\n\\end{equation}\nDefine an operator $\\psi$ on formal power series by\n\\begin{equation}\n\\psi\\bigg(\\sum_{u\\in{\\mathbb N}^{n+2}} c_ux^u\\bigg) = \\sum_{u\\in{\\mathbb N}^{n+2}} c_{pu}x^u.\n\\end{equation}\nDenote by $\\alpha_{\\hat{\\lambda}}$ the composition\n\\[ \\alpha_{{\\hat{\\lambda}}} := \\psi^a\\circ\\text{``multiplication by $\\theta_0(\\hat{\\lambda},x)$.''} \\]\nThe map $\\alpha_{\\hat{\\lambda}}$ operates on $L_0$ and is stable on each $L_0^I$. The proof of Theorem 1.4 will be based on the following formula for the rational function $P_\\lambda(t)$ defined in~(1.2). By \\cite[Equation 7.12]{AS2} we have\n\\begin{equation}\nP_\\lambda(qt) = \\prod_{I\\subseteq\\{0,1,\\dots,n\\}} \n\\det(I-q^{n+1-|I|}t\\alpha_{\\hat{\\lambda}}\\mid L_0^I)^{(-1)^{n+1+|I|}}.\n\\end{equation}\n\nTo exploit (3.5) we shall need $p$-adic estimates for the action of $\\alpha_{\\hat{\\lambda}}$ on $L^I_0$. \nExpand (3.3) as a series in $x$, say,\n\\begin{equation}\n\\theta_0(\\hat{\\lambda},x) = \\sum_{w\\in{\\mathbb N}A} \\theta_{0,w}(\\hat{\\lambda})x^w.\n\\end{equation}\nNote that from the definitions we have $\\theta_{0,w}(\\hat{\\lambda})\\in{\\mathbb Q}_p(\\zeta_{q-1},\\gamma_0)$. 
\nA direct calculation shows that for $v\\in U^I$,\n\\begin{equation}\n\\alpha_{\\hat{\\lambda}}(x^v) =\\sum_{u\\in U^I} \\theta_{0,qu-v}(\\hat{\\lambda})x^u,\n\\end{equation}\nthus we need $p$-adic estimates for the $\\theta_{0,qu-v}(\\hat{\\lambda})$ with $u,v\\in U^I$.\n\nExpand (3.2) as a series in $x$:\n\\begin{equation}\n\\theta(\\hat{\\lambda},x) = \\sum_{w\\in{\\mathbb N}A} \\theta_w(\\hat{\\lambda})x^w,\n\\end{equation}\nwhere \n\\begin{equation}\n\\theta_w(\\hat{\\lambda}) = \\sum_{\\nu\\in{\\mathbb N}^N} \\theta_\\nu^{(w)}\\hat{\\lambda}^\\nu\n\\end{equation}\nwith\n\\begin{equation}\n\\theta_\\nu^{(w)} = \\begin{cases} \\prod_{j=1}^N \\theta_{\\nu_j} & \\text{if $\\sum_{j=1}^N \\nu_j{\\bf a}_j^+ = w$,} \\\\ 0 & \\text{if $\\sum_{j=1}^N \\nu_j{\\bf a}_j^+ \\neq w$.} \\end{cases}\n\\end{equation}\nFrom (3.1) we have the estimate \n\\begin{equation}\n{\\rm ord}_p\\:\\theta_\\nu^{(w)}\\geq\\frac{\\sum_{j=1}^N\\nu_j}{p-1} = \\frac{w_{n+1}}{p-1}.\n\\end{equation}\nIn particular, this implies the estimate\n\\begin{equation}\n{\\rm ord}_p\\:\\theta_w(\\hat{\\lambda}) \\geq \\frac{w_{n+1}}{p-1}.\n\\end{equation}\n\nBy (3.3) and (3.8) we have\n\\begin{equation}\n\\theta_{0,w}(\\hat{\\lambda}) = \\sum_{\\substack{u^{(0)},\\dots,u^{(a-1)}\\in{\\mathbb N}A\\\\ \\sum_{i=0}^{a-1}\np^iu^{(i)}=w}} \\prod_{i=0}^{a-1} \\theta_{u^{(i)}}(\\hat{\\lambda}^{p^i}).\n\\end{equation}\nIn particular, we get the formula\n\\begin{equation}\n\\theta_{0,qu-v}(\\hat{\\lambda}) = \\sum_{\\substack{w^{(0)},\\dots,w^{(a-1)}\\in{\\mathbb N}A\\\\ \\sum_{i=0}^{a-1}\np^iw^{(i)}=qu-v}} \\prod_{i=0}^{a-1} \\theta_{w^{(i)}}(\\hat{\\lambda}^{p^i}). \n\\end{equation}\nApplying (3.12) to the products on the right-hand side of (3.14) gives\n\\begin{equation}\n{\\rm ord}_p\\:\\bigg(\\prod_{i=1}^{a-1} \\theta_{w^{(i)}}(\\hat{\\lambda}^{p^i})\\bigg)\\geq \\sum_{i=0}^{a-1} \\frac{w^{(i)}_{n+1}}{p-1}.\n\\end{equation}\nThis estimate is not directly helpful for estimating $\\theta_{0,qu-v}(\\hat{\\lambda})$ because we lack information about the $w^{(i)}$. Instead we proceed as follows. \n\nFix $w^{(0)},\\dots,w^{(a-1)}\\in{\\mathbb N}A$ with \n\\begin{equation}\n\\sum_{i=0}^{a-1} p^iw^{(i)} = qu-v.\n\\end{equation} \nWe construct inductively from $\\{w^{(i)}\\}_{i=0}^{a-1}$ a related sequence $\\{\\tilde{w}^{(i)}\\}_{i=0}^a\\subseteq U^I$ such that \n\\begin{equation}\nw^{(i)} = p\\tilde{w}^{(i+1)}-\\tilde{w}^{(i)}\\quad\\text{for $i=0,\\dots,a-1$.}\n\\end{equation}\nFirst of all, take $\\tilde{w}^{(0)} = v$. Eq.~(3.16) shows that $w^{(0)}+\\tilde{w}^{(0)}=p\\tilde{w}^{(1)}$ for some $\\tilde{w}^{(1)}\\in{\\mathbb Z}^{n+2}$; since $w^{(0)}\\in{\\mathbb N}A$ and $\\tilde{w}^{(0)}\\in U^I$ we conclude that $\\tilde{w}^{(1)}\\in U^I$. Suppose that for some $0< k\\leq a-1$ we have defined $\\tilde{w}^{(0)},\\dots,\\tilde{w}^{(k)}\\in U^I$ satisfying (3.17) for $i=0,\\dots,k-1$. Substituting $p\\tilde{w}^{(i+1)}-\\tilde{w}^{(i)}$ for $w^{(i)}$ for $i=0,\\dots,k-1$ in (3.16) gives\n\\begin{equation}\n-\\tilde{w}^{(0)}+ p^k\\tilde{w}^{(k)} + \\sum_{i=k}^{a-1} p^iw^{(i)} = p^au-v.\n\\end{equation}\nSince $\\tilde{w}^{(0)}=v$, we can divide this equation by $p^k$ to get $\\tilde{w}^{(k)} + w^{(k)} = p\\tilde{w}^{(k+1)}$ for some $\\tilde{w}^{(k+1)}\\in{\\mathbb Z}^{n+2}$. Since $w^{(k)}\\in{\\mathbb N}A$ and (by induction) $\\tilde{w}^{(k)}\\in U^I$, we conclude that $\\tilde{w}^{(k+1)}\\in U^I$. This completes the inductive construction. 
Note that in the special case $k=a-1$, this computation gives $\\tilde{w}^{(a)} = u$.\n\nSumming Eq.~(3.17) over $i=0,\\dots,a-1$ and using $\\tilde{w}^{(0)} = v$, $\\tilde{w}^{(a)} = u$, gives\n\\begin{equation}\n\\sum_{i=0}^{a-1} w^{(i)} = pu-v + (p-1)\\sum_{i=1}^{a-1} \\tilde{w}^{(i)},\n\\end{equation}\nhence\n\\begin{equation}\n\\sum_{i=0}^{a-1} \\frac{w^{(i)}_{n+1}}{p-1} = \\frac{pu_{n+1}-v_{n+1}}{p-1} + \\sum_{i=1}^{a-1} \\tilde{w}^{(i)}_{n+1}.\n\\end{equation}\nFor $w^{(0)},\\dots,w^{(a-1)}$ as in (3.16), we thus get from (3.15)\n\\begin{equation}\n{\\rm ord}_p\\:\\bigg(\\prod_{i=0}^{a-1} \\theta_{w^{(i)}}(\\hat{\\lambda}^{p^i})\\bigg)\\geq \\frac{pu_{n+1}-v_{n+1}}{p-1} + \\sum_{i=1}^{a-1} \\tilde{w}^{(i)}_{n+1}.\n\\end{equation}\nSince $\\tilde{w}^{(i)}\\in U^I$, we have\n\\begin{equation}\n\\text{$\\tilde{w}^{(i)}_{n+1} =\\mu_I+1$ if $\\tilde{w}^{(i)}\\in U^I_{\\min}$ and $\\tilde{w}^{(i)}_{n+1} \\geq\\mu_I+2$ if $\\tilde{w}^{(i)}\\not\\in U^I_{\\min}$.} \n\\end{equation}\nFrom (3.21) and (3.22) we get the following result.\n\\begin{lemma}\nFor $u,v\\in U^I$ and $w^{(0)},\\dots,w^{(a-1)}$ as in $(3.16)$, we have\n\\begin{equation}\n{\\rm ord}_p\\:\\bigg(\\prod_{i=0}^{a-1} \\theta_{w^{(i)}}(\\hat{\\lambda}^{p^i})\\bigg)\\geq \\frac{pu_{n+1}-v_{n+1}}{p-1} + (a-1)(\\mu_I+1).\n\\end{equation}\nFurthermore, if any of the terms $\\tilde{w}^{(1)},\\dots,\\tilde{w}^{(a-1)}$ of the associated sequence satisfying $(3.17)$ is not contained in $U^I_{\\min}$, then\n\\begin{equation}\n{\\rm ord}_p\\:\\bigg(\\prod_{i=0}^{a-1} \\theta_{w^{(i)}}(\\hat{\\lambda}^{p^i})\\bigg)\\geq \\frac{pu_{n+1}-v_{n+1}}{p-1} + (a-1)(\\mu_I+1)+1.\n\\end{equation}\n\\end{lemma}\n\nOur desired estimate for $\\theta_{0,qu-v}(\\hat{\\lambda})$ now follows from (3.14).\n\\begin{corollary}\nFor $u,v\\in U^I$ we have\n\\begin{equation}\n{\\rm ord}_p\\: \\theta_{0,qu-v}(\\hat{\\lambda})\\geq \\frac{pu_{n+1}-v_{n+1}}{p-1} + (a-1)(\\mu_I+1). \n\\end{equation}\n\\begin{comment}\nFurthermore, if $u\\not\\in U^I_{\\min}$ or $v\\not\\in U^I_{\\min}$, then\n\\begin{equation}\n{\\rm ord}_p\\: \\theta_{0,qu-v}(\\hat{\\lambda})\\geq \\frac{pu_{n+1}-v_{n+1}}{p-1} + (a-1)(\\mu_I+1) + 1. \n\\end{equation}\n\\end{comment}\n\\end{corollary}\n\n\\section{The action of $\\alpha_{\\hat{\\lambda}}$ on $L_0^I$}\n\nIn this section, we use Corollary 3.26 to study the action of $\\alpha_{\\hat{\\lambda}}$ on~$L_0^I$. \nFrom (3.7) and the formula of Serre\\cite[Proposition~7]{S} we have\n\\begin{equation}\n\\det(I-t\\alpha_{\\hat{\\lambda}}\\mid L_0^I) = \\sum_{m=0}^\\infty a^I_mt^m, \n\\end{equation}\nwhere\n\\begin{equation}\na^I_m = (-1)^m \\sum_{U_m\\subseteq U^I} \\sum_{\\sigma\\in{\\mathcal S}_m} {\\rm sgn}(\\sigma)\\prod_{u\\in U_m} \\theta_{0,qu-\\sigma(u)}(\\hat{\\lambda})\\in {\\mathbb Q}_p(\\zeta_{q-1},\\gamma_0),\n\\end{equation}\nthe outer sum is over all subsets $U_m\\subseteq U^I$ of cardinality $m$, and ${\\mathcal S}_m$ is the group of permutations on $m$ objects. 
\n\\begin{proposition}\nThe coefficient $a^I_m$ is divisible by $q^{m(\\mu_I+1)}$ and satisfies the congruence\n\\begin{equation}\n a^I_m\\equiv (-1)^m \\sum_{U_m\\subseteq U_{\\min}^I} \\sum_{\\sigma\\in{\\mathcal S}_m} {\\rm sgn}(\\sigma)\\prod_{u\\in U_m} \\theta_{0,qu-\\sigma(u)}(\\hat{\\lambda}) \\pmod{pq^{m(\\mu_I+1)}}.\n\\end{equation}\nIn particular, $a^I_m\\equiv 0\\pmod{pq^{m(\\mu_I+1)}}$ if $m>|U^I_{\\min}|$.\n\\end{proposition}\n\n\\begin{proof}\nIf $U_m\\subseteq U^I$ is a subset of cardinality $m$ and $\\sigma$ is a permutation of $U_m$, then by (3.27)\n\\begin{align*}\n{\\rm ord}_p\\bigg(\\prod_{u\\in U_m}\\theta_{0,qu-\\sigma(u)}(\\hat{\\lambda})\\bigg) &\\geq m(a-1)(\\mu_I+1) + \\sum_{u\\in U_m} \\frac{pu_{n+1}-\\sigma(u)_{n+1}}{p-1} \\\\\n &= m(a-1)(\\mu_I+1) + \\sum_{u\\in U_m} u_{n+1} \\\\\n &\\geq ma(\\mu_I+1)\n\\end{align*}\nsince $u\\in U_m$ implies $u_{n+1}\\geq \\mu_I+1$. It follows from (4.2) that $a^I_m$ is divisible by~$q^{m(\\mu_I+1)}$. Furthermore, $u_{n+1}\\geq\\mu_I+2$ if $u\\not\\in U^I_{\\min}$, so \n\\begin{equation*}\n{\\rm ord}_p\\bigg(\\prod_{u\\in U_m}\\theta_{0,qu-\\sigma(u)}(\\hat{\\lambda})\\bigg) \\geq ma(\\mu_I+1)+1\\quad\\text{if $U_m\\not\\subseteq U^I_{\\min}$.}\n\\end{equation*}\nThe congruence (4.4) now follows from (4.2).\n\\end{proof}\n\nAs an immediate corollary of Proposition 4.3, we have the following result.\n\\begin{corollary}\nThe reciprocal roots of $\\det(I-t\\alpha_{\\hat{\\lambda}}\\mid L_0^I)$ are all divisible by $q^{\\mu_I+1}$. \n\\end{corollary}\n\nCorollary 4.5 allows us to analyze the terms on the right-hand side of (3.5).\n\\begin{proposition}\nThe reciprocal roots of $\\det(I-q^{n+1-|I|}t\\alpha_{\\hat{\\lambda}}\\mid L_0^I)$ are divisible by $q^{\\mu+2}$ unless either $|I|=n+1$ or $|I|=n$ and $n$ is divisible by $d$, in which case they are divisible by $q^{\\mu+1}$. \n\\end{proposition}\n\n\\begin{proof}\nCorollary 4.5 and the definition of $\\mu_I$ imply that the reciprocal roots of $\\det(I-q^{n+1-|I|}t\\alpha_{\\hat{\\lambda}}\\mid L_0^I)$ are divisible by $q$ to the power\n\\begin{equation}\nn+1-|I| + \\bigg\\lceil \\frac{|I|}{d}\\bigg\\rceil.\n\\end{equation}\nIf $|I|=n+1$, this reduces to $\\mu+1$. Suppose $|I|=n$. From the definition of $\\mu$ we have $n=\\mu d + r$ with $0\\leq r\\leq d-1$. The expression (4.7) then reduces to $1+\\lceil (\\mu d+r)\/d\\rceil$, which equals $\\mu+2$ if $r>0$ and equals $\\mu+1$ if $r=0$. \n\nIf $|I|=n-1$, expression (4.7) reduces to $2+\\lceil (\\mu d+r-1)\/d\\rceil$, which equals $\\mu+3$ if $r>1$ and equals $\\mu+2$ if $r=0,1$. Finally, note that expression (4.7) cannot decrease when $|I|$ decreases, so expression (4.7) will be $\\geq \\mu+2$ for $|I|0\\}$.\n\\end{lemma}\n\nLet $Z=\\{ {\\bf a}_k^+\\mid \\text{$l^{(u)}_{\\mu_I+k}\\neq 0$ for some $u\\in U^I_{\\min}$}\\}$. If ${\\bf a}_k^+\\not\\in Z$, then $l^{(u)}_{\\mu_I+k}=0$ for all $u\\in U^I_{\\min}$, so if $Z=\\emptyset$ then $l^{(u)}_{\\mu_I+k} = 0$ for all $k$, $1\\leq k\\leq |U^I_{\\min}|$, and all $u\\in U^I_{\\min}$, which establishes the proposition. So suppose $Z\\neq\\emptyset$ and choose $k_0$, $1\\leq k_0\\leq |U^I_{\\min}|$, such that ${\\bf a}_{k_0}^+$ is a vertex of the convex hull of $Z$. If $l^{(u)}\\neq{\\bf 0}$, then Lemma 7.16 implies that $k_0\\neq k_u$. By the definition of $L_u$, we therefore have $l^{(u)}_{\\mu_I+k_0}\\geq 0$ for all $u\\in U^I_{\\min}$. Furthermore, since ${\\bf a}_{k_0}^+\\in Z$, we must have $l^{(u)}_{\\mu_I+k_0}>0$ for some $u\\in U^I_{\\min}$. 
It follows that $\\sum_{u\\in U^I_{\\min}} l^{(u)}_{\\mu_I+k_0}>0$, contradicting (7.8). Thus we must have $Z=\\emptyset$.\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent development~\\cite{Maldacena:1997re} of the dual correspondence\nbetween gauge theories and string theories\nhas given us a powerful tool to investigate\nstrongly coupled gauge theories like Quantum Chromodynamics (QCD) or Technicolor, which are otherwise\nextremely difficult to solve.\nAccording to the gauge\/string duality the low energy QCD becomes a theory of mesons in\nthe large number of color ($N_c$)\nand large 't Hooft coupling ($\\lambda\\equiv g_s^2N_c$) limit, but in a warped five-dimensional spacetime.\nThe theory, known as holographic QCD, is nothing but a 5D flavor gauge theory in the\nwarped background geometry, endowed with a Chern-Simons term,\nnecessary to realize the global anomalies of QCD~\\cite{Sakai:2004cn,Erlich:2005qh}.\n\nBeing the theory of mesons in the large $N_c$ limit, holographic QCD\nshould admit baryons as solition solutions, as conjectured by Skyrme\nlong time ago for the nonlinear sigma model~\\cite{Skyrme:1961vq}.\nIndeed, it was found that the baryons are realized as instanton\nsolitons in holographic QCD \\cite{Son:2003et}\\footnote{There is an alternative realization of baryons\nthrough 5D holographic baryon fields \\cite{deTeramond:2005su}.}.\nThe instanton picture of baryons\nreproduces the success of skyrmions rather well but with much less\nparameters for the spectrum and the static properties of\nbaryons~\\cite{Hong:2007kx,Hata:2007mb,Nawa:2006gv,Hong:2007dq}.\nUnlike skyrmions, however, the instanton solitons are made of not\nonly pions but infinite towers of vector mesons, intertwined\nnontrivially, leading to small size objects without any intrinsic\ncore, which therefore realizes full vector meson dominance for\nbaryons~\\cite{Hong:2007kx,Hong:2007dq}.\n\n\nOne of nice features of skyrmion is that monopole catalysis of\nbaryon decay~\\cite{Rubakov:1981rg,Callan:1982ac} is easily described\nin the skyrmion picture of baryons, which then sets up the practical\nbasis for the calculation of monopole\ncatalysis~\\cite{Callan:1983nx}. The magnetic monopole provides a\ndefect on which the topological charge of skyrmions can unwind,\nallowing skyrmions to decay at the rate of QCD scale without\nsuppression by the monopole scale.\n\n\nIn this paper we describe how monopole catalysis of baryon decay\nrealizes in holographic QCD. The magnetic monopole catalyzes baryon\ndecay, since the 5D baryon number current,\n$B^{M}=(1\/32\\pi^2)\\epsilon^{MNPQR} {\\rm Tr}\\,F_{NP}F_{QR}$ is not\nconserved in the presence of the holographic magnetic monopole\nstring by violating the Bianchi identity for the gauge fields.\nFurthermore, we show that monopole catalysis can be naturally\ndescribed in string theory as the dissolution of $D4$ brane\n(instanton soliton) into $D6$ brane (monopole string), which\nsuggests that monopole catalysis of baryon decay should occur\nwithout any barrier and hold even beyond the large $N_c$ limit. The\ndecay rate of baryons by monopole catalysis is determined by the\nscale of instanton solitons. 
In section 2 we first briefly review the monopole catalysis of skyrmion decay, studied by Callan and Witten~\cite{Callan:1983nx}, and then in sections 3 and 4 we describe how monopole catalysis is realized in holographic QCD. Finally, in section 5 we present the string theory realization of monopole catalysis of baryon decay. Section 6 contains concluding remarks together with future directions.\n\n\n\n\section{Monopole Catalysis of Baryon Decay\label{section2}}\n\nMonopoles can arise in grand unified theories (GUTs) as the unified gauge symmetry\nbreaks down to the Standard Model gauge group. Their typical size is\nabout the unification scale, which is nearly point-like compared to\nthe usual scales of Standard Model physics, especially QCD.\nSince electromagnetism is the only long-ranged interaction in the\nStandard Model, a GUT monopole eventually looks like an\nelectromagnetic Dirac monopole to low energy observers. Near the\ncore of the monopole, there are clouds of heavy GUT gauge fields\nwhich can mediate various GUT interactions, among which are baryon\nnumber violating processes. Despite its small size, the cross\nsections for these monopole-induced processes are not suppressed by\nthe unification scale, because their origin is tied to chiral\nanomalies. Rather, the monopole center provides a baryon number\nviolating vertex of unit strength acting as a catalyst for baryon\ndecay, and the actual cross section is governed by low energy\ndynamics such as QCD.\n\n\nThe physics gets more interesting when the low energy dynamics, such\nas QCD, becomes strongly coupled and is described by a completely\ndifferent-looking effective theory. Because the monopole catalysis is not\nsuppressed by any scale of the UV theory, its presence must persist\neven in the low energy effective theory, and the monopole-induced baryon\ndecay must reappear, in a non-trivial disguise, as a low energy\nphenomenon within the effective theory. As the structure of the monopole core given by the UV\nphysics is not a concern of the low energy effective theory, the\nDirac monopole profile of the unbroken gauge group should be\nsufficient for the low energy description of monopole catalysis.\nThis indicates an intricate theoretical consistency requirement for\nthe physics induced by Dirac monopoles in any low energy effective\ntheory.\n\n\nA consistent low energy effective theory of QCD is the chiral\nLagrangian of $SU(N_F)_L\times SU(N_F)_R$, and the low energy\nbaryons are effectively described by topological solitons, called\nSkyrmions, in the large $N_c$ limit. Assuming that large $N_c$ QCD can\nbe embedded in some GUT theory which contains monopoles capable of\ncatalyzing baryon decay, the above discussion leads to the\nexpectation that an electromagnetic Dirac monopole should be able\nto induce Skyrmion-baryon decay within the low energy chiral\ndynamics. At first sight, this looks puzzling because the baryon\nnumber of the Skyrmions is purely topological and there seems to be\nno way for the Skyrmions to decay within the framework of the\neffective theory. 
This is the problem of monopole catalysis of\nSkyrmion decay analyzed by Callan and Witten long ago~\\cite{Callan:1983nx}.\n\nThe resolution of the puzzle is that the baryon number and its\ncurrent must be modified in the presence of a background\nelectromagnetic field~\\footnote{In general, any background\n$SU(N_F)_L\\times SU(N_F)_R\\times U(1)_B$ gauge fields upon weakly\ngauging it requires modification of the baryon number current.}, to\nbe gauge invariant and conserved at the same time. This requirement\ndetermines the baryon number current uniquely~\\cite{Callan:1983nx}.\nMore explicitly, the standard baryon number current of Skyrmions\nwhich is topologically conserved is given by\n\\begin{equation}\nB^\\mu={1\\over\n24\\pi^2}\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\\left(U^{-1}\\partial_\\nu\nU U^{-1}\\partial_\\alpha U U^{-1}\\partial_\\beta U \\right)\\quad,\\label{skyrmion}\n\\end{equation}\nwhere $U$ is the $SU(2)$ group field of the chiral\nLagrangian~\\footnote{For simplicity, we will confine our discussion\nto the $N_F=2$ massless quarks} which transforms as\n\\begin{equation}\nU \\to g_L U\ng_R^\\dagger\\quad,\n\\end{equation}\nunder the chiral symmetry $SU(2)_L\\times\nSU(2)_R$. Since the electromagnetic $U(1)_{EM}$ acts on $U$ by\n\\begin{equation}\nU\\to e^{iQ}U e^{-iQ}\\quad,\\quad Q=\\left(\\begin{array}{cc}{2\\over 3}\n& 0\\\\0 & -{1\\over 3}\\end{array}\\right)\\quad,\n\\end{equation}\nthe simplest way to\nmake the baryon current gauge invariant would be to replace the\nordinary derivatives $\\partial U$ by the covariant derivatives\n$DU=\\partial U +A^{EM}[Q,U]$, where $A^{EM}$ is the electromagnetic\ngauge potential~\\footnote{We use the convention where the gauge\npotential is {\\it anti}-hermitian, and the covariant derivative is\n$D=\\partial+A$. To compare with Ref.\\cite{Callan:1983nx}, simply\nreplace $A$ by $-ieA$.}. However, the resulting baryon current is no longer\nconserved in general, and we need to add more gauge-invariant\nterms to make it conserved. This has been worked out in\nRef.\\cite{Callan:1983nx}, and we quote the result\n\\begin{eqnarray} B^\\mu &=&\n{1\\over\n24\\pi^2}\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\\left(U^{-1}\\partial_\\nu\nU U^{-1}\\partial_\\alpha U U^{-1}\\partial_\\beta U \\right)\\nonumber\\\\\n&& -{1\\over\n24\\pi^2}\\epsilon^{\\mu\\nu\\alpha\\beta}\\partial_\\nu\\left[3A^{EM}_\\alpha\n{\\rm Tr}\\left(Q(U^{-1}\\partial_\\beta U+\\partial_\\beta U U^{-1})\\right)\\right]\\quad.\\label{newbn}\n\\end{eqnarray}\nIt is clear that the final form of the\nmodification does not affect the conservation of the current, while it can also be checked that\nthe result is $U(1)_{EM}$ gauge invariant.\n\n\nHowever, the conservation of baryon number,\n$\\partial_\\mu B^\\mu=0$, is guaranteed only for a smooth, well-defined\nbackground potential $A^{EM}$. Since a Dirac monopole field is\nsingular and doesn't have a well-defined potential, its presence\nmight invalidate the conservation of the above gauge invariant\nbaryon number. Indeed, this is exactly what\ncauses Skyrmions to decay in the presence of a monopole.\nOne might question that the topological Skyrmion number (\\ref{skyrmion})\ncan never be violated under a smooth time evolution, and an initial\nSkyrmion would never decay to topologically trivial meson states.\nHowever, a caveat is that a Dirac monopole entails the Dirac-string on which\n$A^{EM}$ is singular, and\nwe are allowed to take singular gauge transformations to move this string from\none direction to another. 
Under these singular $U(1)_{EM}$ gauge transformations, which are now allowed\nin a Dirac monopole background,\nthe topological Skyrmion number can actually change. In other words, the Skyrmion number\nis not a well-defined gauge-invariant quantity in the presence of a Dirac monopole, and\na configuration with non-zero Skyrmion number can be\nequivalently described by a topologically trivial configuration under a gauge transformation.\nWhat remains invariant under the gauge transformations is the new gauge-invariant\nbaryon number (\ref{newbn}).\n\nThis gauge-invariant baryon number can be eaten up at the monopole center dynamically,\nwhich is responsible for the baryon decay.\nNear the center of the monopole, only the neutral component of the pion, $\pi^0$,\ncan take non-zero values, while charged pion excitations would cost too much energy, since they have nonzero\nangular momentum proportional to the magnetic charge of the monopole.\nWriting $U(t)={\rm exp}({2i\over F_\pi} \pi^0(t) \sigma^3)$ near the center\nof a Dirac monopole of unit strength\n\begin{equation}\nA^{EM}=-{i\over 2}(1-\cos\theta)d\phi\quad,\n\end{equation}\nand using $\epsilon^{r\theta\phi t}={-1\over \sqrt{g}}={-1\over r^2\sin\theta}$ in polar coordinates,\nthe radial flux of the baryon number out of the monopole is readily calculated to be\n\begin{equation}\nB^r={\partial_t\pi^0\over 4\pi^2 F_\pi r^2}\quad,\n\end{equation}\nwhose integration gives the change of baryon number\n\begin{equation}\n{dB\over dt}={1\over \pi F_\pi }({\partial_t\pi^0})\quad.\label{change}\n\end{equation}\nTherefore, the rate of change of $\pi^0$ at the monopole center\nis proportional to the disappearance of the baryon charge from the effective theory.\nIn the original GUT, this baryon number violation should be accompanied by\nthe creation of leptons, whose details are determined by unknown\ndynamics at the center of the monopole, so that the total fermion number is conserved.\nIn the low energy effective theory,\nthis normally involves imposing a suitable boundary condition\non the leptons at the monopole center, such that the change of baryon number is compensated by\nthe change of lepton number.\nIn the following sections, we will see that all the above features nicely fit into\na simple description in the framework of holographic QCD.\n\n\n\n\section{Holographic Baryon Number Current}\n\n\nTo be specific, we present our analysis in the model by Sakai and Sugimoto (SS) from Type IIA\nstring theory (see Refs.~\cite{Bergman:2007pm,Casero:2007ae,Aharony:2008an} for its quark mass deformation).\nHowever, most of the steps we perform are dictated by symmetry and don't in fact\ndepend on the details of the model, and hence\nour analysis is applicable to any model of holographic QCD.\n\n\nIn the SS model, the world volume $U(N_F)_L$ and $U(N_F)_R$ gauge fields\non $D8$ and $\bar D8$ in the UV region are holographically dual to\nthe corresponding chiral symmetry in QCD. Its spontaneous breaking\nto the diagonal $U(N_F)_I$ is geometrically realized\nby adjoining $D8$ and $\bar D8$ at the tip of the cigar geometry.\nAlternatively, we can view this as having a single $D8$ brane whose\ntwo asymptotic boundaries towards the UV region encode the chiral symmetries\n$U(N_F)_L$ and $U(N_F)_R$ respectively.\nThe latter viewpoint is more practical for the analysis, and\nwe introduce a coordinate $z$ on $D8$ such that $z\to\pm \infty$\nrepresent the two UV boundaries. 
We will call $z$ the radial or the 5th direction.\nAssuming homogeneity along the internal $S^4$ fibration,\nthe world volume theory on $N_F$ $D8$ branes is effectively a 5D $U(N_F)$ gauge\ntheory in a non-trivial $z$ dependent background.\nAccording to AdS\/CFT correspondence, the asymptotic values\nof the 5D gauge potential near the two boundaries, $A_\\mu(x,z\\to\\infty)$\nand $A_\\mu(x,z\\to-\\infty)$\\footnote{We use Greek indices for the Minkowki $R^{1,3}$ directions,\nwhile the full 5D coordinates $(x^\\mu,z)$ will be denoted by capital letters.},\nare non-dynamical background fields coupled to QCD $U(N_F)_L$ and $U(N_F)_R$ currents respectively.\nEquivalently, they are\nprecisely the background gauge potential upon weakly gauging the chiral symmetry.\n\nNote that the above prescription holds true in the gauge where $A_z$\nis kept free (and vanishes at $z\\to\\pm\\infty$), while it is often more convenient\nto work in the gauge where $A_z=0$. After performing a suitable\ngauge transformation from the above to the $A_z=0$ gauge, the boundary behavior of $A_\\mu$\nwill be slightly different from the above. However, any gauge invariant calculations\nare independent of the gauge choice.\n\n\nThe 5D gauge theory on $D8$ also contains a tower of normalizable (axial) vector meson excitations\nin view of 4D observers. Especially, the Wilson line\n\\begin{equation}\nU(x^\\mu)=P \\exp \\left( -\\int_{-\\infty}^{+\\infty}dz \\, A_z (x^\\mu,z)\\right)\\quad,\\label{wilson}\n\\end{equation}\nis identified as the massless Nambu-Goldstone pion for the chiral symmetry breaking\n$U(N_F)_L\\times U(N_F)_R\\to U(N_F)_I$~\\footnote{Note that axial anomaly of $U(1)_A$\nis negligible in the large $N_c$ limit.}, and it is precisely the group field entering\nthe low energy QCD chiral Lagrangian.\nUpon expanding $A_M$ in terms of normalizable (axial) vector mesons as well as\nthe non-normalizable background fields previously mentioned~\\footnote{The modes from $A_z$,\nexcept the Wilson line (\\ref{wilson}),\nare eaten after all\n by massive spin 1 (axial) vector mesons coming from $A_\\mu$. In this sense, the $A_z=0$\ngauge is a kind of unitary gauge where only physical degrees of freedom are present.},\nand performing\n$z$-integration we obtain a 4D effective chiral Lagrangian of $U(x)$ and\nexcited mesons coupled to the background gauge potential. 
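\n\nAs a side remark, the path-ordered exponential in (\ref{wilson}) can be approximated numerically by an ordered product over a discretized $z$-grid. The following minimal sketch (in Python; the convention that larger-$z$ factors multiply from the left is a choice made here, and the function names are ours) only illustrates the definition:\n\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\ndef wilson_line(A_z, z_grid):\n    # A_z(z) returns the (anti-hermitian) N_F x N_F matrix A_z at position z;\n    # the ordered product below approximates P exp(-int dz A_z) on the grid.\n    dim = A_z(z_grid[0]).shape[0]\n    U = np.eye(dim, dtype=complex)\n    for z0, z1 in zip(z_grid[:-1], z_grid[1:]):\n        zm, dz = 0.5 * (z0 + z1), z1 - z0\n        U = expm(-dz * A_z(zm)) @ U   # larger-z factors act from the left\n    return U\n\end{verbatim}\n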
The part that contains $U(x)$\nreproduces the previously known gauged Skyrmion theory with the correct Wess-Zumino-Witten term.\nTherefore, the 5D gauge field compactly summarizes the pions, the excited mesons,\nand the background gauge potential of the chiral symmetry in a single unified framework.\n\n\nAs the 5D gauge theory on $D8$ branes includes the Skyrmion theory,\nthere must exist topological objects similar to Skyrmions that play the role of baryons.\nIndeed, the 5D gauge theory has topological solitons whose field profiles\nin the spatial $(x^\mu, z)$ directions take the form of instantons.\nWe call them instanton-baryons, not to be confused with genuine instantons of the Euclidean field theory.\nThis also agrees with the baryonic objects coming from $S^4$-wrapped $D4$ branes in the string theory\nof this background, since these $D4$ branes can dissolve into $D8$ branes exactly as instanton-solitons~\cite{Douglas:1995bn}.\nThe topology of instanton-baryons is counted by the instanton number\n\begin{equation}\nB={1\over 32 \pi^2}\int dz\, d^3x\, \epsilon^{MNPQ}{\rm Tr}\left(F_{MN} F_{PQ}\right)\n={1\over 8\pi^2}\int_{R^4}{\rm Tr}\left( F\wedge F\right)\quad,\label{number}\n\end{equation}\nwhere $M,N,P,Q$ span only the spatial dimensions and the epsilon tensor is defined in flat space.\nIn our convention, $F=dA+A\wedge A$.\nBeing topological, this number is conserved in any smooth situation.\n\n\nWe can easily find the corresponding conserved current in 5D \cite{Son:2003et}.\nFrom the Bianchi identity $DF\equiv dF+A\wedge F-F\wedge A=0$,\nwe have\n\begin{equation}\nd\,{\rm Tr}\left( F\wedge F\right)={\rm Tr}\left(DF\wedge F\right)+{\rm Tr}\left(F\wedge DF\right)=0\quad,\n\end{equation}\nand the current 1-form defined by $j_B=*_5{\rm Tr}\left(F\wedge F\right)$ is conserved,\n\begin{equation}\nd*_5 j_B=0\quad.\label{conserve}\n\end{equation}\nNote that this is independent of what metric we use in defining $*_5$ since the conservation\nis a consequence of the Bianchi identity. In fact, the results of the following discussions\nwill not be affected by the metric at all,\nand for simplicity\nwe will keep the flat 5D metric in the $(x^\mu,z)$ coordinates whenever we need a metric in\nintermediate steps.\nIn components, (\ref{conserve}) is written as $D_M j_B^M=\partial_M j_B^M=0$,\nwhere $j_B^M=g^{MN} j_{BN}=\eta^{MN} j_{BN}$ and $D_M$\nis the metric covariant derivative.\nThis means that we can define a 4D conserved current $B^\mu$ by simply integrating\n$j_B^\mu$ along the $z$-direction,\n\begin{equation}\nB^\mu\equiv \int_{-\infty}^{+\infty} dz \, j_B^\mu\quad,\n\end{equation}\nwhere the conservation is shown by\n\begin{equation}\n\partial_\mu B^\mu =\int_{-\infty}^{+\infty} dz \, \partial_\mu j_B^\mu\n=-\int_{-\infty}^{+\infty} dz \, \partial_5 j_B^5=j_B^5(-\infty)-j_B^5(+\infty)=0\quad,\label{consv}\n\end{equation}\nas the boundary term\n\begin{equation}\nj_B^5(\pm\infty) \sim \epsilon^{\mu\nu\alpha\beta}{\rm Tr}\n\left(F_{\mu\nu}F_{\alpha\beta}\right)|_{z\to\pm\infty}\label{bdr}\n\end{equation}\nis the chiral sphaleron density of the background gauge potential of the chiral symmetry, and\nwe assume that it vanishes for now. 
This is justified in our consideration of\nstatic monopoles without electric fields, which will be discussed in a moment.\nWe will come back to the implication of these boundary terms later.\nThe explicit form of the 4D conserved baryon current $B^\mu$ is\n\begin{equation}\nB^\mu=\int_{-\infty}^{+\infty}dz \,j_B^\mu\n\sim\int_{-\infty}^{+\infty}dz \,\epsilon^{\mu\nu\alpha\beta}\,\n{\rm Tr}\left(F\wedge F\right)_{\nu\alpha\beta z}=\eta^{\mu\nu}\left[\n*_4 \int_z\,{\rm Tr}\left(F\wedge F\right)\right]_\nu\quad.\n\end{equation}\nThe final form makes it clear that the end result is indeed metric-independent.\nBy comparing with the normalized instanton number (\ref{number}) as $\int d^3x B^0$,\nwe can easily fix the normalization\nto be\n\begin{equation}\nB^\mu={1\over 8\pi^2}\int_{-\infty}^{+\infty} dz\,\epsilon^{\mu\nu\alpha\beta}\,\n{\rm Tr}\left(F_{\nu\alpha}F_{\beta z}\right)\quad.\label{holocurrent}\n\end{equation}\nUpon expanding $A_M$ in terms of the group field $U(x)$ (as well as excited mesons)\nand performing the $z$-integration,\nwe naturally expect\nthat it reduces to the usual Skyrmion number current in (\ref{skyrmion}).\n\nA crucial point is that the above baryon current (\ref{holocurrent})\nremains gauge invariant even in the presence of background gauge potentials\nfor the chiral symmetry, which are encoded as non-normalizable modes of $A_M$.\nIts conservation, relying on the Bianchi identity, is also intact\nin any smooth situation up to the boundary term in (\ref{bdr}). However, this boundary term\ncancels in (\ref{consv}) in the case of a vector-like background field, that is,\na field coupled to $U(N_F)_I$ such that $A_\mu(+\infty)=A_\mu(-\infty)$.\nElectromagnetism belongs to this case, with\n\begin{equation}\nA(+\infty)=A(-\infty)=Q A^{EM}\quad,\quad\nQ=\left(\begin{array}{cc}{2\over 3}\n& 0\\\\0 & -{1\over 3}\end{array}\right)\quad.\label{elec}\n\end{equation}\nBecause these two constraints uniquely fix\nthe baryon current (\ref{newbn}) in the presence of a background electromagnetic potential,\nthe above holographic baryon current must reproduce (\ref{newbn})\nas its lowest component involving $U(x)$.\n\nTo check this, it is convenient to work in the $A_z=0$ gauge\nwith an expansion~\cite{Sakai:2004cn}\n\begin{equation}\nA_\mu(x,z)=A_{L\mu}^{\xi_+}(x)\psi_+(z)+A_{R\mu}^{\xi_-}(x)\psi_-(z)+({\rm excited\,\,modes})\quad,\n\end{equation}\nwhere\n\begin{equation}\nA_{L\mu}^{\xi_+}=\xi_+ (A_L)_\mu\xi_+^{-1} +\xi_+\partial_\mu\xi_+^{-1}\quad,\quad\nA_{R\mu}^{\xi_-}=\xi_- (A_R)_\mu\xi_-^{-1} +\xi_-\partial_\mu\xi_-^{-1}\quad.\n\end{equation}\nThe group field $U(x)$ is contained in the above by $\xi^{-1}_+\xi_-=U$, and we denote the background\ngauge potential by $A_L=A(+\infty)$ and $A_R=A(-\infty)$. 
There still remains a residual gauge symmetry\nto fix, called Hidden Local Symmetry, which is nothing but the gauge transformation at the\ndeepest IR $z=0$ which acts on the partial Wilson lines $\\xi_\\pm$ as $\\xi_\\pm(x)\\to h(x)\\xi_\\pm(x)$.\nFor our purpose, we take the gauge $\\xi_+^{-1}=U$ and $\\xi_-=1$, upon which we have\n\\begin{equation}\nA_\\mu=\\left[\\left(U^{-1}QU\\right)\\psi_+ + Q\\psi_-\\right]A_\\mu^{EM} + \\psi_+U^{-1}\\partial_\\mu U\n+({\\rm excited\\,\\,modes})\\quad,\n\\end{equation}\nwhere we have explicitly used the electromagnetic background potential (\\ref{elec}).\nThe details of the zero mode wavefunctions $\\psi_\\pm(z)$ won't be important later,\nexcept $\\psi_+ + \\psi_-\\equiv 1$ and $\\psi_+(\\infty)=\\psi_-(-\\infty)=1$.\nFrom\n\\begin{eqnarray}\nF_{\\nu\\alpha}&=&\\left(\\left(U^{-1}QU-Q\\right)\\psi_+ +Q\\right) F_{\\nu\\alpha}^{EM}\n-\\psi_+(1-\\psi_+)\\left[U^{-1}\\partial_\\nu U, U^{-1}\\partial_\\alpha U\\right]\\nonumber\\\\\n&& +\\psi_+(1-\\psi_+)\\left(U^{-1}\\partial_\\nu U\\left(Q-U^{-1}QU\\right)+\\left[U^{-1},Q\\right]\n\\partial_\\nu U\\right) A_\\alpha^{EM} -(\\nu\\leftrightarrow\\alpha)\\quad,\\nonumber\\\\\nF_{\\beta z}&=&-\\left(\\partial_z \\psi_+\\right)\\left(\\left(U^{-1}QU-Q\\right) A_\\beta^{EM}\n+U^{-1}\\partial_\\beta U \\right)\\quad,\n\\end{eqnarray}\nand integrating over $z$ in (\\ref{holocurrent}), we can easily check that\nthe result indeed agrees with the previously known 4D result (\\ref{newbn}).\nObserve that the $z$-integration involves only the following integrals\n\\begin{equation}\n\\int_{-\\infty}^{+\\infty}dz\\, (\\partial_z\\psi_+)(\\psi_+)^n ={1\\over n+1}(\\psi_+)^{n+1}\\Big|^{+\\infty}_{-\\infty}\n={1\\over n+1}\\quad,\n\\end{equation}\nwithout regard to a detailed functional form of $\\psi_+$. This can be understood because\nthe final result is dictated by symmetry and it should be true\nuniversally for any holographic model of QCD.\n\n\nIn the presence of a general chiral background potential, we naturally propose\n(\\ref{holocurrent}) to be the right answer for the modified baryon current.\nWith this granted, a violation of the baryon number by the boundary term (\\ref{bdr})\n\\begin{equation}\n\\partial_\\mu B^\\mu \\sim \\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\n\\left(F_{\\mu\\nu}F_{\\alpha\\beta}\\right)\\Big|_R-\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\n\\left(F_{\\mu\\nu}F_{\\alpha\\beta}\\right)\\Big|_L\\quad,\n\\end{equation}\nimplies that baryon number can be generated in\nan environment with non-zero chiral-asymmetric sphaleron density.\nThis can be achieved by sphalerons made of elecro-weak gauge bosons, which\nare indeed known to induce baryon asymmetry via chiral anomaly. 
Our result\ncan be thought of as\n a manifestation of this physics\nin the low energy effective theory.\n\nFor a reference, we obtain from (\\ref{holocurrent}) the baryon\ncurrent in a general chiral background potential $A_L$ and $A_R$,\n\\begin{eqnarray}\nB^\\mu&=&\\frac{1}{24\\pi^2}\\,\\epsilon^{\\mu\\nu\\alpha\\beta}\\,{\\rm Tr}\\,\n\\left(U^{-1}\\partial_{\\nu}UU^{-1}\\partial_{\\alpha}UU^{-1}\\partial_{\\beta}U \\right)\\nonumber\\\\\n& & -\\frac{1}{8\\pi^2}\\,\\epsilon^{\\mu\\nu\\alpha\\beta} {\\rm\nTr}\\,\\partial_{\\nu}\n\\left(U^{-1}A_{L\\alpha}\\partial_{\\beta}U+A_{R\\alpha}U^{-1}\\partial_{\\beta}U-\nU^{-1}A_{L\\alpha}UA_{R\\beta} \\right)\\nonumber\\\\\n& & -\\frac{1}{8\\pi^2}\\,\\epsilon^{\\mu\\nu\\alpha\\beta}\\,{\\rm Tr}\\left(\n\\partial_{\\nu}A_{L\\alpha}\\,A_{L\\beta}+\\frac23A_{L\\nu}A_{L\\alpha}A_{L\\beta}-(L\\leftrightarrow R)\\right)\\,.\n\\end{eqnarray}\nThis is an extension of (\\ref{newbn}), which we find via holographic\nQCD.\n\n\n\\section{Holographic Monopole Catalysis of Baryon Decay}\n\n\nIn this section, we re-analyze the monopole-induced baryon decay we discussed in\nsection \\ref{section2} in the framework of holographic QCD, and find that the physics\nbecomes more transparent in holographic QCD.\n\nA 4D background electromagnetic Dirac monopole enters our holographic model\nas a specific non-normalizable mode in the expansion of 5D gauge field $A_M$.\nIn the gauge where we keep $A_z$ free, which is more convenient than the previous $A_z=0$ gauge\nfor the present purpose,\nthe background potential appears in the expansion as\n\\begin{equation}\nA_\\mu(x,z)={1\\over 2}\\left(A_L +A_R\\right)_\\mu +\n{1\\over 2}\\left(A_L -A_R\\right)_\\mu\\psi_0(z)+({\\rm normalizable\\,\\,modes})\\label{exp}\n\\quad,\n\\end{equation}\nwhere $\\psi_0(z)= {2\\over \\pi}\\tan^{-1}(z)$ with $\\psi_0(+\\infty)=-\\psi_0(-\\infty)=1$.\n$A_z$ contains only normalizable modes. For electromagnetism which is vector-like,\n$A_L=A_R=QA^{EM}$, the second term is absent and we observe that a 4D monopole background\nwould enter the 5D expansion homogeneously along the $z$-direction.\nIn fact, a 5D gauge theory doesn't allow topological monopoles, but instead\ncan have string-like objects (monopole-string) whose 3-dimensional transverse profiles\nresemble those of monopoles. 
Therefore, the natural holographic object corresponding\nto a 4D monopole is a monopole-string extending along the radial direction.\nBehaviors of normalizable modes, including the pions $U(x)$, can be analyzed\nby studying the 5D gauge field fluctuations around the monopole-string background.\n\nA nice thing in this holographic set-up is that the violation of baryon number (\\ref{holocurrent})\nin the presence of a monopole-string has a simple explanation in terms of\na violation of the Bianchi identity due to the magnetic source.\nThe basic reason behind this simplification is that holographic QCD\nunifies dynamical degrees of freedom of the model, such as $U(x)$,\nwith the background potential $A^{EM}$\nin a single 5D gauge theory framework, so that their physics should find its explanations within\nthe 5D gauge theory.\nWe should also point out that in the monopole-string background, the normalizable modes\nin the above expansion (\\ref{exp}) will in general be excited by back-reactions, and the full\nfield configuration is more complicated than the Dirac monopole alone.\nHowever, the amount of violation of the Bianchi identity\ndoesn't depend on these meson clouds due to its topological nature,\nand is localized at the core of the monopole-string.\n\nWe write the 5D gauge field as\n\\begin{equation}\nA=Q A^{EM} + \\tilde A\\quad,\n\\end{equation}\nwhere $A^{EM}$ is a unit monopole-string background homogeneous along $z$ with $A^{EM}_z=0$,\nand $\\tilde A$ encodes any smooth dynamics of normalizable modes including pions $U(x)$, as well\nas additional smooth background potential $A_L$ and $A_R$ coupled to the chiral currents.\nBeing a unit magnetic source, $A^{EM}$ is characterized by\n\\begin{equation}\n-2\\pi i =\\int_{S^2}\\,F^{EM}=\\int_{S^2}\\,dA^{EM}=\\int_{B^3}\\,d^2 A^{EM}\\quad,\n\\end{equation}\nwhere in the last equality we use the Stokes theorem on the 3-ball $B^3$ around the monopole core.\nThis gives us\n\\begin{equation}\nd^2 A^{EM}=-2\\pi i \\,\\delta_3(\\vec 0)\\quad,\n\\end{equation}\nwhere $\\delta_3(\\vec 0)$ is a delta 3-form localized in space $\\vec x$ at the monopole center $\\vec x=\\vec 0$.\nIn components $\\delta_3(\\vec 0)=\\delta^{(3)}(\\vec x)\\,dx^1\\wedge dx^2 \\wedge dx^3$.\nWith this gadget, it is straightforward to find the Bianchi identity violation in 5D,\n\\begin{equation}\nDF\\equiv dF+A\\wedge F-F\\wedge A = Q \\,d^2 A^{EM}=-2\\pi i \\,Q\\,\\delta_3(\\vec 0)\\quad,\n\\end{equation}\nso that ${\\rm Tr}\\left(F\\wedge F\\right)$ is no longer closed,\n\\begin{equation}\nd\\,{\\rm Tr}\\left(F\\wedge F\\right)=2{\\rm Tr}\\left(DF\\wedge F\\right)=\n-4\\pi i \\,{\\rm Tr}\\left(QF\\right)\\wedge \\delta_3(\\vec 0)\\quad.\n\\end{equation}\nIn components this is equivalent to\n\\begin{equation}\n{1\\over 4}\\epsilon^{MNPQR}\\partial_M{\\rm Tr}\\left(F_{NP}F_{QR}\\right)\n=4\\pi i \\delta^{(3)}(\\vec x){\\rm Tr}\\left(Q F_{tz}\\right)\\quad,\n\\end{equation}\nwhich implies that\n\\begin{equation}\n\\partial_\\mu\\left(\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\\left(F_{\\nu\\alpha}F_{\\beta z}\\right)\\right)\n=-{1\\over 4}\\partial_z\\left(\\epsilon^{\\mu\\nu\\alpha\\beta}\n{\\rm Tr}\\left(F_{\\mu\\nu}F_{\\alpha\\beta }\\right)\\right)\n+4\\pi i \\delta^{(3)}(\\vec x){\\rm Tr}\\left(Q F_{tz}\\right)\\quad.\n\\end{equation}\nThis is precisely what we need to find the violation of baryon number (\\ref{holocurrent}),\n\\begin{eqnarray}\n\\partial_\\mu B^\\mu&=&{1\\over 8\\pi^2}\\int_{-\\infty}^{+\\infty} 
dz\\,\n\\partial_\\mu\\left(\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\\left(F_{\\nu\\alpha}F_{\\beta z}\\right)\\right)\\\\\n&=&\n{1\\over 32\\pi^2}\\left(\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\n\\left(F_{\\mu\\nu}F_{\\alpha\\beta}\\right)\\Big|_R-\\epsilon^{\\mu\\nu\\alpha\\beta}{\\rm Tr}\n\\left(F_{\\mu\\nu}F_{\\alpha\\beta}\\right)\\Big|_L\\right)+{i\\delta^{(3)}(\\vec x)\\over 2\\pi}\n\\int_{-\\infty}^{+\\infty}dz\\,{\\rm Tr}\\left(Q F_{tz}\\right),\\nonumber\n\\end{eqnarray}\nwhere we will ignore the first term as we already discuss it in the previous section.\n\nTo study the monopole-induced second term,\nlet us go back to the $A_z=0$ gauge and expand $A_\\mu(x,z)$ more precisely,\n\\begin{equation}\nA_\\mu=\\left[\\left(U^{-1}(QA_\\mu^{EM}+A_{L\\mu} )U\\right)\\psi_+ + (QA_\\mu^{EM}+A_{R\\mu})\\psi_-\\right]\n+\\psi_+U^{-1}\\partial_\\mu U+\\sum_{k\\ge 1} B^{(k)}_\\mu \\psi_k\\quad,\n\\end{equation}\nincluding now the complete spectrum of excited (axial) vector mesons in the expansion.\nSince $A_t^{EM}=0$, and $F_{tz}=-\\partial_z A_t$ in our gauge, the $z$-integral is readily performed\nto give\n\\begin{eqnarray}\n\\partial_\\mu B^\\mu&=&-{i\\delta^{(3)}(\\vec x)\\over 2\\pi}{\\rm Tr}\\left(Q A_t\\right)\\Big|^{+\\infty}_{-\\infty}\n\\nonumber\\\\\n&=&-{i\\delta^{(3)}(\\vec x)\\over 2\\pi}\\left[{\\rm Tr}\\left(Q U^{-1}\\partial_t U\\right)\n+{\\rm Tr}\\left(Q U^{-1}A_{Lt} U\\right)-{\\rm Tr}\\left(Q A_{Rt}\\right)\\right]\\quad.\\label{chemical}\n\\end{eqnarray}\nThis is the main result in this section.\nNote that there is no contribution from excited (axial) vector mesons because\ntheir wavefunctions $\\psi_k(z)$ vanish sufficiently fast near the boundaries.\nIt is easy to see that the first term precisely reproduces to the 4D result (\\ref{change}).\nWriting $U(x)={\\rm exp}({2i\\over F_\\pi} \\pi^0(x) \\sigma^3)$ as before, we have\n\\begin{equation}\n\\partial_\\mu B^\\mu=\n-{i\\delta^{(3)}(\\vec x)\\over 2\\pi}{\\rm\nTr}\\left(Q\\sigma^3\\right){2i(\\partial_t \\pi^0)\\over F_\\pi}=\n{(\\partial_t\\pi^0)\\over \\pi F_\\pi} \\delta^{(3)}(\\vec x)\\quad,\n\\end{equation}\nwhose $\\vec x$-space integration is nothing but (\\ref{change}).\n\nIt is also interesting\nto speculate the implication of the second and the third terms. They seem to\nindicate that in the presence of a chiral asymmetric chemical potential,\nmonopoles can create\/annihilate baryon number. It would be interesting\nto understand this better.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{String Theory Realization}\n\n\nThere is a nice stringy set-up realizing the physics of the previous sections\nin terms of $D$-~branes\nin the Sakai-Sugimoto model.\nLet us parameterize the cigar-shaped part of the gravity background by\na radial coordinate $U\\ge U_{KK}$ and an angle $\\tau\\sim\\tau+2\\pi M_{KK}^{-1}$~\\cite{Sakai:2004cn}.\nDetails of the parameters $U_{KK}$ and $M_{KK}$ are not relevant in our discussion.\nOur $N_F$ probe $D8$ branes are\nspanning a line $\\{\\tau=0\\} \\cup \\{\\tau=\\pi M_{KK}^{-1}\\}$, $U\\ge U_{KK}$ in the cigar part.\nThey also wrap the internal $S^4$ fibration and span the Minkowski space $R^{1,3}$.\nWe then consider a $D6$ brane which wraps the internal $S^4$ fibration\nand spans a half of the cigar $0 \\le \\tau \\le \\pi M_{KK}^{-1}$, $U\\ge U_{KK}$,\nending on one of the $N_F$ $D8$ branes. 
It is point-like in\nthe spatial $\\vec x$ and static along the time.\n\nIgnoring $S^4$ and $U$ directions\nsince they are common to the $D8$ and $D6$ branes, the system\nis similar to a $D1$ brane ending perpendicularly on one of $N_F$ $D3$ branes.\nThe end point of $D1$ on $D3$ looks like a monopole source in view of $D3$ world volume\ngauge field.\nBecause only one end of the $D1$ ($D6$) brane is on the $D3$ ($D8$) brane while the\nother end extends to infinity,\nthe resulting monopole configuration on the $D3$ ($D8$) brane world volume\nis an Abelian Dirac monopole with infinite energy in the spatial $\\vec x$ directions.\nIncluding the radial direction $U$ in the 5D $D8$ brane world volume (we ignore $S^4$),\nor the $z$ direction in the previous sections,\nwhat we have is precisely a 5D holographic, Dirac monopole-string\nin the 5D gauge theory on the $D8$ branes.\nIts $U(1)$ monopole charge direction depends on which $D8$ brane the $D6$ brane ends,\nand we simply call it the charge matrix $Q$.\nFor an example of $N_F=2$ with the $D6$ brane ending on the first $D8$ brane,\nwe have\n\\begin{equation}\nQ=\\left(\\begin{array}{cc} 1& 0 \\\\ 0& 0\\end{array}\\right)\\quad.\n\\end{equation}\nAs the monopole-string is homogeneous along $U$ (or $z$), it represents\na vector-like background gauge potential of the chiral symmetry (or $A_L=A_R=Q A^{Dirac}$)\nwith a monopole charge $Q$ in holographic QCD.\nTherefore, the physics of monopole catalysis of baryon decay\nthat we study in the previous sections must apply to this stringy setting.\n\n\n\nIndeed, we can easily identify the string theory\nphenomenon corresponding to the baryon decay in the presence of a $D6$ monopole-string.\n5D instanton-baryons on the $D8$ branes can be thought of as $S^4$-wrapped\n$D4$ branes dissolved into the $D8$ branes.\nBut, these $S^4$-wrapped $D4$ branes can also dissolve into our $D6$ brane, because\nthey are similar to a $D0\/D2$ system when we ignore the common $S^4$ directions.\nTherefore, $D4$-baryons can be captured by the $D6$ monopole-string and disappear from\nthe $D8$ world volume. This is the string theory correspondent to the monopole catalysis\nof baryon decay.\n\n\nWe define the baryon number of a single $S^4$-wrapped $D4$ brane to\nbe one, as it also has unit instanton number on the $D8$ branes. The\ndissolved $D4$-baryon number into our $D6$ monopole-string is\nmeasured by the 2-form field strength $F^{(2)}=dA^{D6}$ of the $D6$\nworld volume gauge potential on the half cigar that the $D6$ world\nvolume spans, similar to $D0\/D2$ system, \\begin{equation} (\\Delta B)_{D6}={i\\over\n2\\pi}\\int_{(U,\\tau)}\\,F^{(2)} =-{i\\over 2\\pi}\n\\int_{-\\infty}^{+\\infty} dz\\, A_z^{D6}\\quad, \\end{equation} where the\nnormalization can be fixed by that a unit $(-2\\pi i)$ flux\nrepresents a single dissolved $D4$-brane, and we use the Stokes\ntheorem in the last equality since the boundary of the $D6$ half\ncigar is precisely the line along $z$ at the $D6\/D8$ intersection.\nNote that in $\\vec x$ space, this is precisely the position of the\nmonopole-string core. As the $D6$ brane ends on one of the $D8$\nbranes, the $D6$ world volume gauge potential is identical to the\ncorresponding $D8$ world volume gauge potential at the $D6\/D8$\nintersection. 
Therefore, the above $A_z^{D6}$ can be equally\ninterpreted as the $A_z$ at the monopole-string core on the $D8$\nbrane on which the $D6$ brane ends, \\begin{equation} A_z(t,\\vec 0) = Q\nA_z^{D6}(t)\\quad, \\end{equation} where we take the monopole position at the\norigin $\\vec x=\\vec 0$, and $Q$ represents\nthe charge matrix of $D6$ ending on the $D8$ brane, as given before. Using\n${\\rm Tr}(Q^2)=1$, we have \\begin{equation} (\\Delta B)_{D6}=-{i\\over\n2\\pi}\\int_{-\\infty}^{+\\infty} dz\\,{\\rm Tr}\\left(Q\nA_z\\right)\\Big|_{\\vec x=\\vec 0}\\quad, \\end{equation} whose time derivative must\nbe equal to the rate of disappearance of the baryon number from the\n$D8$ branes, \\begin{equation} {dB\\over dt} = -{d \\over dt}(\\Delta B)_{D6}\n=-{i\\over 2\\pi}{d \\over dt}{\\rm Tr}\\left( -Q\\int_{-\\infty}^{+\\infty}\ndz\\,A_z\\right)\\quad. \\end{equation} Noting that the integral inside the trace\nis precisely the Wilson line corresponding to the pions, \\begin{equation}\n-\\int_{-\\infty}^{+\\infty} dz\\,A_z= {2i \\over F_\\pi}\\pi^0 (\\vec\n0)\\sigma^3 \\quad, \\end{equation} we finally have \\begin{equation} {dB\\over dt} ={1\\over \\pi\nF_\\pi}{\\rm Tr}\\left(Q\\sigma^3\\right)\\left(\\partial_t\n\\pi^0\\right)\\quad. \\end{equation} This is precisely what we have in the\nprevious sections.\n\n\n\\section{Conclusion}\n\nFermion number is often not conserved in background fields which\nmodify the spectrum of fermions~\\cite{Rubakov:2002fi}. One\ntantalizing such phenomenon is the monopole catalysis of baryon\ndecay, where baryons disappear (or appear) near the magnetic\nmonopole. Since monopole catalysis of baryon decay may have a\nsignificant effect in monopole search and proton decay experiment,\nboth of which are consequences of unified gauge theories, it is\ndesirable to understand it more clearly. We have investigated\nmonopole catalysis in the context of the gauge\/string duality and\nshowed how it is realized in holographic QCD and also in string\ntheory. In doing so we have demonstrated that the gauge\/string\nduality is indeed a powerful tool to study strong interactions, and\nfound that the baryon number violation under the magnetic monopole\nor by the electroweak sphaleron can be formulated into a single\nequation in holographic QCD.\n\nThere are several phenomenological implications of our study. One of\nthem is the generation of baryons in the presence of magnetic\nmonopole by external chiral chemical potentials, as shown in\nEq.~(\\ref{chemical}), which might be more effective in generating\nbaryon asymmetry at lower temperature where the sphalerons are\nsuppressed. This mechanism might be relevant in early universe or\nheavy ion collision but we leave it for future investigation.\n\n\n\n\n\n\n\n\\subsection*{Acknowledgments}\nThis work is supported in part\nby the Korea Research Foundation Grant funded by the\nKorean Government (MOEHRD, Basic Research Promotion Fund)\n(KRF-2007-314-C00052)(D.~K.~H.),\nby KOSEF\nBasic Research Program with the grant No. R01-2006-000-10912-0 (D.~K.~H. and C.~P.),\nby the KOSEF SRC Program through CQUeST at Sogang University (K.~M.~L.),\nKRF Grants No. KRF-2005-070-C00030 (K.~M.~L.), and\nthe KRF National Scholar program (K.~M.~L.).\nH.U.Y. 
thanks Koji Hashimoto, Takayuki Hirayama and Feng-Li Lin for discussions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMachine learning (ML) is a powerful tool for diagnosing medical images, even achieving success rates surpassing experts~\\cite{Ehteshami_Bejnordi2017-gs,Rajpurkar2017-hk} However, ML is not yet widely used in healthcare, partly due to the lack of annotated data, which is time-consuming and costly to acquire. Recently, crowdsourcing -- annotating images by a crowd of non-experts -- has been proposed as an alternative, with some promising results, for example~\\cite{ONeil2017,Cheplygina2016,maier2014can}. In particular, studies which rely on segmenting structures have been successful~\\cite{orting2019survey}.\n\nThere are two problems with current crowdsourcing studies. One is that crowds are not able to provide diagnostic labels for images when a lot of expertise is required. For example, in \\cite{albarqouni2016aggnet} crowds alone do not provide labels of sufficient quality when asked to detect mitosis in histopathology images. Another problem is that crowd annotations need to be collected for previously unseen images, which is not a desirable situation in practice. \n\nWe propose to address both problems by multi-task learning with crowdsourced visual features. Instead of asking the crowds to provide a diagnostic label for an image, we query visual features, like color and shape. These can be provided more intuitively, but could potentially still be informative to machine learning algorithms. To address the problem of requiring crowd annotation at test time, we propose a multi-task learning setup where the crowdsourced features are only needed at the output side of the network, and thus only required when training the algorithm. In previous work multi-task learning has shown good results when combined with features provided by experts \\cite{Murthy2017-lq,Hussein2017-xs} or automatically extracted by image processing methods \\cite{Dhungel2017-in}. In this study we investigate for the first time whether multi-task learning is also beneficial for crowdsourced visual features. Additionally, we investigate the effects of combining these crowdsourced features in an ensemble. \n\nWe specifically investigate the problem of skin lesion diagnosis from dermoscopy images. We collected crowd features for asymmetry, border irregularity and color of the lesions and added these features to a baseline model with binary classification. We show that these features have limited effect when used individually, but improve performance of the baseline when used in an ensemble. This suggests that crowds can provide relevant, but complementary information to ground truth diagnostic labels.\n\n\\section{Methods} \n\n\\subsection{Data}\nThe dataset was extracted from the open challenge dataset of ISIC 2017 challenge \\cite{Codella2017-rd}, where one of the goals is to classify the lesion as melanoma or not. Examples of the skin lesions are shown in Fig.~\\ref{fig1}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\textwidth]{skin_lesions.png}\n \\caption{Examples of benign (top row) and malignant (bottom row) skin lesions from the ISIC 2017 challenge}\n \\label{fig1}\n\\end{figure}\n\nThe 2000 skin lesion images from the ISIC 2017 training data were used for training, validation and test purposes in our study. 
The dataset contains three classes of skin lesions: melanoma (374 lesions), seborrheic keratosis (254 lesions) and nevus (1372 lesions). The images with melanoma and seborrheic keratosis classes were combined and labeled malignant, and the nevus images were labeled benign. For each malignant skin lesion there are about two benign lesions in the dataset.\n\nCrowd features were collected from undergraduate students following a protocol similar to that in \cite{Cheplygina2018-ee}. The students assessed the ABC features \cite{Abbasi2004-og} used by dermatologists: A for asymmetrical shape, B for border irregularity and C for color of the assessed lesion. In total 745 lesions were annotated, where each image was annotated by 3 to 6 students. Each group assessed a different set of skin lesions. From the full set of 745 skin lesions, 744 have asymmetry features, 745 have border features and 445 have color features. Before proceeding with the experiments, we normalized the measurements per student (resulting in standardized data with mean 0 and standard deviation 1) and then averaged these standardized measurements per lesion. The probability densities for the ABC features, derived from the normalized measurements, are shown in Fig.~\ref{pdf}. The differences between the probability densities for malignant and benign lesions suggest that the annotation score is informative for a lesion's label.\n\begin{figure}\n \centering\n \includegraphics[width=\textwidth]{pdf}\n \caption{Probability densities f(A), f(B) and f(C) derived from the normalized asymmetry A, border B and color C measurements. The blue solid line shows the density for benign lesions, the red broken line for malignant ones. The mean {\textmu} and standard deviation {\textsigma} are -0.002 and 0.766 for A, -0.010 and 0.759 for B and 0.024 and 0.897 for C measurements.}\n \label{pdf}\n\end{figure}\n\nAs preprocessing we applied augmentation, i.e. random rotation with a maximum of 180 degrees, horizontal and vertical flipping, a width and height image adjustment of 10\%, a shear range of 0.2 degrees in counterclockwise direction and a random channel shift range of maximum 20 affecting the image brightness. After that, the resulting images are rescaled to (384,384) pixels. \n\n\subsection{Models}\n\nWe used two models in our experiments: a baseline model and a multi-task model (see Fig.~\ref{fig2}). Both models were built on a convolutional base and extended by adding specific layers. As encoder we used the VGG16 \cite{Simonyan2014-rs} convolutional base. For this base, containing a series of pooling and convolutional layers, we applied fixed pre-trained ImageNet weights. We have trained the baseline and multi-task models in two ways: a) freeze the convolutional base and train the rest of the layers, and b) train all layers including the convolutional base. \n\nFurthermore, we used two different ensembles of the multi-task models with asymmetry, border and color features. The first ensemble used equal weights for each of the three multi-task models, while the second ensemble used a weighted averaging strategy.\n\n\begin{figure}\n \centering\n \includegraphics[width=\textwidth]{base_multi_task_models.eps}\n \caption{Architecture of the baseline and multi-task models. All models are built on top of the VGG16 convolutional base.}\n \label{fig2}\n\end{figure}\n\nThe baseline model extended the convolutional base with two fully-connected layers with a sigmoid activation function. 
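\n\nA minimal Keras sketch of this baseline is given below; the flattening layer and the width of the first fully-connected layer are our assumptions rather than choices stated in the text, and the actual implementation is in the repository linked below.\n\begin{verbatim}\nfrom tensorflow.keras import layers, models, optimizers\nfrom tensorflow.keras.applications import VGG16\n\n# VGG16 convolutional base with fixed pre-trained ImageNet weights,\n# for inputs rescaled to 384x384 pixels as described above\nbase = VGG16(weights='imagenet', include_top=False,\n             input_shape=(384, 384, 3))\nbase.trainable = False  # frozen variant; set to True to also train the base\n\nmodel = models.Sequential([\n    base,\n    layers.Flatten(),\n    layers.Dense(256, activation='sigmoid'),  # first fully-connected layer\n    layers.Dense(1, activation='sigmoid'),    # benign vs. malignant output\n])\nmodel.compile(optimizer=optimizers.RMSprop(learning_rate=2e-5),\n              loss='binary_crossentropy')\n\end{verbatim}\n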
During training, class weights were used to pay more attention to samples from the under-represented class. Cross-entropy is used as the loss function. All choices for the baseline model were made based on only the training and validation set. Fine-tuning the baseline model was done until the performance on the validation dataset was within the range of performances of the ISIC challenge. \n\nThe multi-task model extended the convolutional base with three fully connected layers. The model has two outputs with different network heads: one head is the classification output, the other represents the visual characteristic (Asymmetry, Border or Color). A customized mean squared error loss together with a last-layer linear mapping is used for the annotation-regression task. We used a vector in which we store per lesion whether or not a crowdsourced feature is available. This vector is used in a custom loss function that calculates the mean squared error. In case no feature is available, it has no effect on the weights and the error is not propagated.\nFor the binary classification task, again a cross-entropy loss and sigmoid activation function of the nodes are used. The contributions of the different losses are equal. The resulting loss values are summed and minimised during network training.\n\nThe ensembles are based on the predictions of the three available multi-task models (asymmetry, border and color) and two ensemble strategies: averaging and optimized weighted averaging.\n\n\subsection{Experimental setup}\n\nWe compared six models with two variants for each model: a variant with a frozen convolutional base and fully trained additional layers, and a variant in which all layers were trained. The six models were:\n\begin{itemize}\n \item Baseline model with binary classification\n \item Multi-task model with the Asymmetry feature\n \item Multi-task model with the Border feature\n \item Multi-task model with the Color feature\n \item Ensemble with averaging of the multi-task models' predictions \n \item Ensemble with optimized weighted averaging of the multi-task models' predictions\n\end{itemize}\n\nWe used 5-fold cross-validation, with the dataset split in a stratified fashion, keeping the malignant-to-benign ratio equal over the training, validation and test subsets. More specifically, 70\% of the dataset is used as the train subset (1400 lesions), 17.5\% as the validation subset (350 lesions), leaving 12.5\% as the test subset (250 lesions). The percentage of malignant lesions for each of the three subsets is approximately 32\%.\n\nWe trained both models for 30 epochs with a batch size of 20, using RMSprop~\cite{Tieleman2012-kp} as the optimizer with a learning rate of $2.0\mathrm{e}{-5}$.\n\nWe ensembled the predictions of the multi-task models through two strategies. In the first strategy, called averaging, the lesion's classification was based on the predictions of the three multi-task models, with the prediction of each model having an equal weight (a third). In the second strategy the classification was based on optimized weights for the prediction of each multi-task model. 
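\n\nBefore turning to how these weights are found, the masked regression loss described above can be sketched as follows; encoding a missing annotation as NaN is our choice here and not necessarily that of the actual implementation.\n\begin{verbatim}\nimport tensorflow as tf\n\ndef masked_mse(y_true, y_pred):\n    # y_true holds the normalized crowd score, or NaN when the lesion\n    # has no annotation for this visual characteristic\n    available = tf.math.logical_not(tf.math.is_nan(y_true))\n    mask = tf.cast(available, y_pred.dtype)\n    y_true = tf.where(available, y_true, tf.zeros_like(y_true))\n    squared_error = tf.square(y_true - y_pred) * mask\n    # average only over lesions that carry an annotation, so that\n    # missing features propagate no error to the network weights\n    return tf.reduce_sum(squared_error) \/ tf.maximum(tf.reduce_sum(mask), 1.0)\n\end{verbatim}\n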
We used the differential evolution global optimization algorithm \\cite{Storn1997-mo}, with a tolerance of $1.0\\mathrm{e}{-7}$ and maximum of 1000 iterations, to find the optimized weights.\n\nWe compared the average of the area under the receiver operating characteristic curve (AUC) of the baseline model to the average AUC scores of the different multi-task models and ensembles. The average AUC score was calculated per experiment taking the average of the AUC score of each fold.\n\nWe implemented our deep learning models in Keras using the TensorFlow backend \\cite{Geron2019-rs}. The code is available on Github: \\url{https:\/\/github.com\/raumannsr\/hints_crowd}. \n\n\\section{Results}\n\nThe AUC results obtained with the six models and two variants (frozen and not frozen convolutional layer) are shown in Table~\\ref{tab:auc_results}.\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{c|c|c}\n \\hline\n Model & AUC (mean $\\pm$ std) Frozen & AUC (mean $\\pm$ std) Non Frozen \\\\ [0.5ex]\n \\hline\n \n Baseline & $0.753 \\pm 0.025$ & $0.794 \\pm 0.030$ \\\\\n \n Asymmetry & $0.750 \\pm 0.025$ & $0.799 \\pm 0.027$ \\\\\n Border & $0.755 \\pm 0.029$ & $0.782 \\pm 0.020$ \\\\\n Color & $0.756 \\pm 0.025$ & $0.770 \\pm 0.016$ \\\\\n \n Averaging & $0.752 \\pm 0.027$ & $0.808 \\pm 0.025$ \\\\\n Optimized weighted averaging & $0.753 \\pm 0.025$ & $0.811 \\pm 0.028$ \\\\\n \n \\hline\n \\end{tabular}\n \\caption{AUC of the six different models in two variants: frozen and non frozen convolutional base.}\n \\label{tab:auc_results}\n\\end{table}\n\nAs can be seen in Table~\\ref{tab:auc_results} and Fig.~\\ref{fig3} the AUC values for the frozen variant are close together. We observed that the AUC values of the non frozen variant outperformed the frozen variant. The results of the non frozen ensemble strategies show an improved generalisation. The ensemble with optimized weighted averaging outperformed the averaging ensemble by achieving an AUC of $0.811 \\pm 0.028$. Note that these numbers are somewhat lower than the ISIC 2017 challenge performances, however, we did not focus on optimizing performance but rather illustrating the added value of the multi-task ensembles approach.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{boxplot.eps}\n \\caption{AUC of the six different models: baseline model, multi-task with asymmetry, multi-task with border, multi-task with color, ensemble with averaging and ensemble with optimized weighted averaging}\n \\label{fig3}\n\\end{figure}{}\n\n\\section{Discussion and conclusions}\n\nWe addressed two problems in this paper. One is that crowds are not able to provide diagnostic labels when domain specific expertise is required. Another problem is that crowd features need to be collected for previously unseen images. The aim of this paper was to validate the proposed solution to tackle both problems with a multi-task learning setup using crowdsourced features. We showed that crowdsourced visual features (solely used when training the algorithm) in a multi-task learning set-up can improve the baseline model performance. This result supports the idea that crowdsourced features used at the output side of a multi-task network are a valuable alternative to possibly inaccurate diagnostic crowdsourced labels. Our work provided new insights into the potential of crowd features. 
Individual crowdsourced features have limited effect on the model, but when combined in an ensemble, have a bigger effect on model performance.\n\nWe sought to investigate the effect of adding crowdsourced features in a multi-task learning setting and did not focus on the absolute performance of the model. We first developed the baseline model until the performance on the test set was within the range of performances of the ISIC challenge. We did not try to optimise the model further to achieve the highest performance. For example, we have cropped the images to the same size for convenience, but this could lead to decreased performance. To further validate the usefulness of crowdsourced features and their ensembles, it would be straightforward to extend other types of models with an additional output. \n\nWe hypothesize that pre-training the network with domain specific data (instead of ImageNet) would further increase the AUC. As a next step we will validate two more strategies. The first strategy is to train the network on a segmentation task (using the available segmentations) and then fine-tune the network on the diagnosis and the crowdsourced features. The second strategy is to first train the network on the crowdsourced features and then fine-tune on the diagnosis. \n\nWe averaged the normalized crowdsourced features and gave the crowd output the same weight as the true label of the lesion. This might not be the best choice and other combining strategies could be investigated. In particular, adding each annotator as an individual output could be of interest. Cheplygina et al.~\cite{Cheplygina2018-ee} showed that the disagreement of annotators about the score - as measured by the standard deviation of scores for a particular lesion - was informative for that lesion's label. Exploiting the similarities and differences between the annotators in our study could therefore further increase performance. \n\nThere are some limitations with the visual assessment of the crowd. First of all, the annotators were not blinded to the true labels of the skin lesions due to the public nature of the dataset. However, if the visual assessments were perfectly correlated with the true labels, multi-task learning would not have been helpful. The information that the crowd provides therefore appears to be complementary to what is already present in the data. The next step would be to validate the method with crowdsourced features from other types of crowds, such as workers on Amazon Mechanical Turk or a similar platform.\n\n\subsection{Negative results}\n\nIn an earlier version of the paper (available on arXiv, \url{https:\/\/arxiv.org\/abs\/2004.14745}) we reported that multi-task models with a single feature had a substantial improvement over the baseline model. Upon closer inspection, it turned out that this result was observed due to a bug in Keras. Adding the VGG16 model as the first layer and weight freezing the newly added layer does not work as advertised - the weights do continue to be updated due to the bug. As such, we compared a baseline model with frozen weights, with multi-task models with unfrozen weights, which is an unfair comparison. This issue (\url{https:\/\/github.com\/keras-team\/keras\/issues\/8443}) was partly fixed by Keras, but still did not work for the multi-task models. A workaround used for this paper is setting all inner layers of the convolutional base as non-trainable. When factoring this out, there wasn't a clear gain in performance; however, ensembles still proved to be useful. 
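\n\nFor reference, the workaround amounts to flagging every layer inside the convolutional base as non-trainable, along the lines of the following sketch (where \texttt{base} denotes the VGG16 convolutional base):\n\begin{verbatim}\n# freeze the VGG16 base by marking each of its inner layers individually,\n# rather than relying only on base.trainable = False on the wrapping model\nfor layer in base.layers:\n    layer.trainable = False\n\end{verbatim}\n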
\n\n\\section*{Acknowledgments}\nWe would like to acknowledge all students who have contributed to this project with image annotation and\/or code. We gratefully acknowledge financial support from the Netherlands Organization for Scientific Research (NWO), grant no. 023.014.010.\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}