diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqlld" "b/data_all_eng_slimpj/shuffled/split2/finalzzqlld" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqlld" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe search for the Flavor Changing Neutral Current (FCNC) processes, has\nbeen one of the leading tools to test\nthe Standard Model (SM), in an attempt of either discovering or putting\nstringent limits on new physics\nscenarios. The discovery of the Higgs boson at the LHC, has lead the way to\na comprehensive program of measuring of \nits properties and branching ratios, in order to look for deviations\nfrom the SM predicted Higgs. Within the SM, there are no FCNC transitions at tree level\nmediated by the Higgs boson, due to the the presence of only one \nHiggs doublet and at the one-loop level these FCNC interactions are extremely small. There are however many extensions of the SM\nwhere the suppression of the neutral flavor changing transitions due to the\nGlashow-Iliopoulos-Maiani (GIM) mechanism \ncan be relaxed, with the presence of additional scalar doublets or through\nthe additional contributions of new particles in the loop diagrams. \nIn the presence of two or more scalar doublets, these FCNC interactions will be generated at tree level and can be very large unless some ad-hoc discrete symmetry is imposed.\n \nMotivated by the nature of the standard Yukawa coupling scheme \nthe authors of~\\cite{Cheng:1987rs} observed that the new FCNC couplings in the general two-Higgs doublet model naturally follow the hierarchical structure of the quark masses and therefore \nany $\\bar{q}q' H$ coupling should experience the following structure \n\\begin{equation}\ng_{qq'H} \\sim \\sqrt{m_q m_q'},\n\\end{equation} \nindicating that the larger couplings can be expected in the FCNC interactions of a top-quark with the Higgs field.\nThe large production rate of the top quarks at the LHC allows one to look for a transition of the top quark to a quark of a different flavor but same charge, $t\\rightarrow cH$ (and $t\\rightarrow uH$),\nas no symmetry prohibits this decay. The SM branching ratio of this process is extremely small, of the order BR($t\\rightarrow cH)_{{\\rm SM}} \\approx 10^{-15}$~\\cite{Mele:1998ag,AguilarSaavedra:2004wm}, \nwhich is many orders of magnitude smaller than the value to be measured\nat the LHC at 14 TeV. Therefore an affirmative observation of the process $t\\rightarrow qH$, well above the SM rate,\nwill be a conclusive indication of new physics beyond the SM. \n\nThe probing of FCNC couplings in the quark sector, can be performed either at\na high energy collider or indirect limits can be obtained from neutral meson \noscillations ($K^0-\\bar{K}^0$, $B^0-\\bar{B}^0$ \nand $D^0-\\bar{D}^0$)~\\cite{Bona:2007vi, Blankenburg:2012ex, Aranda:2009cd}. \nThe $tqH$ coupling also affects the $Z\\rightarrow c\\bar{c}$ decay at the loop\nlevel and is therefore constrained by the electroweak precision observables\nof the $Z$ boson~\\cite{Larios:2004mx}. \nThe ATLAS and the CMS collaborations have \nset upper limits on the flavor changing neutral currents in the top sector \nthrough the top pair production, with one top decaying to $Wb$ and the other \ntop assumed to decay to $qH$. 
The leptonic decay mode of the $W$ is considered\nand the different Higgs decay channels are analyzed, \nwith the Higgs decaying either to two photons~\\cite{Aad:2014dya,CMS:2014qxa} \nor to $b\\bar{b}$~\\cite{Aad:2015pja,CMS:2015qhe}.\nCombining the analysis of the different Higgs decay channels,\nbased at $\\sqrt{s}$ = 8 TeV and \nan integrated luminosity of 20.3 (19.7) fb$^{-1}$, the 95\\% CL upper limits\nobtained by ATLAS (CMS)~\\cite{Aad:2015pja,Khachatryan:2016atv} are\nBr$(t\\rightarrow cH) \\leq 4.6 (4.0) \\times 10^{-3}$ \nand Br$(t\\rightarrow uH) \\leq 4.5 (5.5) \\times 10^{-3} $. On the phenomenological \nside the sensitivity of LHC measurements to these non-standard flavor violating couplings in \nthe top sector has been explored in great details, considering ($a$) the \ntop quark pair production~\\cite{AguilarSaavedra:2000aj, Atwood:2013ica, Kobakhidze:2014gqa, Wu:2014dba}, \n($b$) the single top $+$ Higgs production~\\cite{AguilarSaavedra:2004wm, Greljo:2014dka}\n and ($c$) single top $+$ W production~\\cite{Liu:2016dag}. \n\nThe analysis of the $tqH$ coupling has also been carried out in the context \nof the next generation $e^-e^+$ linear colliders, the International Linear Collider (ILC)\nand the Compact Linear Collider (CLIC)~\\cite{Behnke:2013xla, Aicheler:2012bya}. \nThese planned high energy $e^-e^+$ colliders are expected to perform \nprecision measurements of the top-quark and the Higgs boson.\nThey will be able to scrutinize the couplings\nin the top-Higgs sector to the extreme precision, making them suitable for the sensitive tests of physics beyond the SM. \nThe baseline machine design for both colliders allows for up to\n$\\pm 80 \\%$ electron polarization, while provisions have been made to allow positron polarization of $\\pm 30 \\%$ as \nan upgrade option. Both these machines are designed to operate at centre of mass energies of 350, 500 \nand 1000 GeV, with the possibility of CLIC to be also adapted for the 3 TeV operation. Several studies have been carried out in the context of zero beam polarization at the ILC~\\cite{Han:2001ap, Hesari:2015oya} in an attempt\nto constrain the $tqH$ vertex. \n \nThe Higgs boson within the SM couples similarly to \n$\\bar{q}_Lq_R$ and $\\bar{q}_Rq_L$, i.e. $y_{LR}=y_{RL}$. Most of the studies \nin the context of the FCNC in the Higgs sector takes into effect this \nconsideration and assumes the similarity between the chiral \ncouplings. In this work we have focussed on the chiral nature of the FCNC \ncouplings and have shown how the inequality of chiral couplings leads to distinct behaviour \nin the distributions of final states at linear colliders. \nWe work in the context of initial beam polarization for both the electron and the positron, \nusing the advantages of their adjustment for enhancing the sensitivities of the measured branching ratios and \nthe asymmetries on the FCNC parameters. We also present the results in the case of transverse polarized beams. \n\n\nIt is a well known fact that by a detailed study of the top (antitop)\ndecay products one can obtain valuable information about the top spin\nobservables and then use them for the detailed exploration of the top quark\npair production or decay \ndynamics to distinguish among different models of new physics ~(\\cite{KamenikMelic} and references therein). In order to maximize the top spin effects it is advisable to \nchoose a proper spin quantization axis. 
\nAt the Tevatron, where the top quark pair production was\ndominated by quark-antiquark annihilation, a special off-diagonal axis was shown to exist~\\cite{Parke:1996pr},\nmaking the top spins $100\\%$ correlated. On the other hand, at the LHC \nthe top quark pair production is dominated by the gluon-gluon fusion and \nthere is no such optimal axis for this process\\footnote{At \nlow $m_{t\\bar t}$ the top quark pair production via gluon-gluon fusion is \ndominated by like-helicity gluons. Consequently, spin correlations are \nmaximal in the helicity basis~\\cite{Mahlon:2010gw}.}. The $t\\bar{t}$ \nproduction through the electron-positron annihilation at the linear \ncolliders will be similar to the Tevatron production; therefore,\nthe top quark spins will also be maximally correlated in the off-diagonal basis. \nThe $t$, $\\bar{t}$ spin effects can be analyzed in the lepton-lepton \nor lepton+jets final states through a number of angular distributions and \ncorrelations. The spin information is proportional\nto the spin analyzing power of the decay products of the top and will\ntherefore differ from the SM one in the case of the FCNC top-Higgs decay.\nWe therefore also carry out a detailed study of the FCNC $t\\to qH$ decay \nwith different spin observables, and in different top-spin polarization bases, \nusing both unpolarized and longitudinally polarized beams. \n\nThe outline of the paper is as follows. We discuss in Sec.~\\ref{sec:FCNC} \nthe most general FCNC Lagrangian considered for our analysis.\nWe give a brief review of the effects of initial beam polarizations in the $t\\bar{t}$ production at the linear \ncollider in Sec.~\\ref{sec:beam_pol}. The detailed analysis of the $tqH$\nfinal state is performed in Sec.~\\ref{sec:cmframe} and constraints are \nobtained from angular asymmetries. The top spin observables in the context of \ndifferent spin bases are discussed in Sec.~\\ref{sec:spintop}. A thorough numerical\nstudy of the process $e^-e^+\\rightarrow t\\bar{t}\\rightarrow qHW^-\\bar{b}\\rightarrow \nqb\\bar{b}l^-\\bar{\\nu}_l\\bar{b}$ including the top FCNC coupling is performed in \nSec.~\\ref{sec:numericalstudy}, and finally we conclude in Sec.~\\ref{sec:conclusion}.\nThe analytical forms of the different production and decay matrices, along with the expressions for the \ntop spin observables used for our analysis, are listed in Appendices~\\ref{sec:All_appen}\nand \\ref{sec:appen_spin}.\n\n\n\\section{The flavor changing top quark coupling}\\label{sec:FCNC}\n\nWe concentrate on the most general FCNC $tqH$ Lagrangian of the form \n\\begin{eqnarray}\\label{eq:tqhA}\n{\\cal L}^{tqH} &=& g_{tu} \\bar{t}_R u_L H + g_{ut} \\bar{u}_R t_L H + g_{tc} \\bar{t}_R c_L H \n+ g_{ct} \\bar{c}_R t_L H + h.c. \\nonumber \\\\\n&=& \\bar{t} (g_{tq} P_L + g_{qt}^\\ast P_R ) q H + \\bar{q} (g_{qt} P_L + g_{tq}^\\ast P_R ) t H.\n\\end{eqnarray} \nThis Lagrangian gives rise to the tree-level FCNC decays $t \\to H q$ ($q=u,c$), with the partial decay width given as \n\\begin{eqnarray}\\label{eq:top_TDW}\n\\Gamma_{t\\rightarrow q H}&=&\\frac{1}{32 \\pi m_t^3}\\sqrt{m_t^2-(m_q-m_H)^2}\n\\sqrt{m_t^2-(m_q+m_H)^2}\\left[(\\mid g_{tq}\\mid^2 + \\mid g_{qt}\\mid^2)(m_t^2+m_q^2-m_H^2) \\right. \\nonumber \\\\\n&& \\left. 
+4 m_t m_q \n\\left(g_{tq}^\\ast g_{qt} + g_{qt}^\\ast g_{tq}\\right)\\right].\n\\end{eqnarray}\nThe SM top-quark decay is dominated by $t\\rightarrow b W^+$ and it is given by\n\\begin{equation}\\label{eq:tWb}\n \\Gamma_{t\\rightarrow b W^+} = \\frac{G_F}{8 \\sqrt{2} \\pi m_t^3}(m_t^2-m_W^2)^2(m_t^2+2 m_W^2) \\,.\n\\end{equation}\nWe neglect the mass of the emitted quark $m_q$, in our \nanalysis. The branching ratios of the top decaying in the presence of these \nflavor violating Yukawa couplings is then given by\n\\begin{eqnarray}\\label{eq_BR}\n {\\rm BR}(t\\rightarrow q H) &=& \\frac{1}{2\\sqrt{2}G_F}\\frac{(m_t^2-m_H^2)^2}{(m_t^2-m_W^2)^2 (m_t^2+2 m_W^2)}\n (\\mid g_{tq}\\mid^2 + \\mid g_{qt}\\mid^2) \\alpha_{QCD},\n \\end{eqnarray}\nwhere the NLO QCD corrections to the SM decay width \\cite{Li:1990qf} and the $t \\to c H$ \ndecay \\cite{Zhang:2013xya} are included in the factor $\\alpha_{QCD} = 1+0.97 \\alpha_s = 1.10$ \\cite{Greljo:2014dka}. \nThe total decay width of the top in the presence of these FCNC couplings is then \n\\begin{eqnarray}\n \\Gamma_t &=& \\Gamma_t^{SM} + \\Gamma_{t\\rightarrow q_H} \n \\approx \\Gamma_t^{SM} + 0.155 (\\mid g_{tq}\\mid^2 + \\mid g_{qt}\\mid^2) \\,.\n\\end{eqnarray}\nWe have $\\Gamma_{t}^{SM} = \\Gamma_{t\\rightarrow W^+ b}$ = 1.35 GeV \nfor $m_t$ = 173.3 GeV at NLO, while the experimentally observed value of the total \ntop-quark width is, $\\Gamma_t = 1.41^{+0.19}_{-0.15}$ GeV~\\cite{Agashe:2014kda}.\nThe additional FCNC decay processes give positive contributions to $\\Gamma_t$, proportional to \n$(\\mid g_{tq}\\mid^2 + \\mid g_{qt}\\mid^2)$ and from the experimentally observed \n$\\Gamma_t$ an upper bound on $\\sqrt{\\mid g_{tq}\\mid^2 + \\mid g_{qt}\\mid^2}$ \ncan be obtained. \nThese flavor changing couplings can also lead to the \nthree body decay $h\\rightarrow t^\\ast (\\to W^+ b)\\bar{q}$, where top is\nproduced off-shell and $q=u,~c$. Then total width of the Higgs gets\nmodified and the couplings $g_{tq},~g_{qt}$ can be \nindependently constrained from the measurement of the Higgs \ndecay width at the LHC~\\cite{Atwood:2013ica}.\n \n\\section{Polarized beams in $t\\bar{t}$ production at the $e^- e^+$ linear collider}\\label{sec:beam_pol}\n\nThe most general formula for the matrix element square $|T_{e^-e^+}|^2$ for arbitrary polarized $e^-e^+$ beams producing a $t\\bar{t}$ pair \nis given in Refs.~\\cite{Kleiss:1986ct, Hikasa:1985qi}. However for the annihilation process \nwith massless electron and positron, the helicity of the electron has to be opposite to \nthat of the positron, and the final formula is reduced to the form,\n\\begin{eqnarray}\n |T|^2 &=& \\frac{1}{4}\\left\\lbrace(1-P^L_{e^-})(1+P^L_{e^+}) |T_{e^-_L e^+_R}|^2 + \n (1+P^L_{e^-})(1-P^L_{e^+}) |T_{e^-_R e^+_L}|^2 \\right. \\nonumber \\\\\n && \\hspace*{0.5cm}\\left. + P^T_{e^-} P^T_{e^+} {\\rm Re} \\left[ e^{-i(\\alpha_- + \\alpha_+)} \n T_{e^-_R e^+_L} T_{e^-_L e^+_R}^{\\ast}+ e^{i(\\alpha_- + \\alpha_+)}T_{e^-_L e^+_R} T_{e^-_R e^+_L}^{\\ast}\\right ]\n \\right\\rbrace,\n \\label{eq:fin2a}\n\\end{eqnarray}\nwhere $T_{e^-_{\\lambda_1} e^+_{\\lambda_2}}$ is the helicity amplitude for the process under\nconsideration, and $\\lambda_1,~\\lambda_2$ are the helicities of the electron and the positron,\nrespectively. $P^L_{e^\\mp}$ is the degree of the longitudinal polarization and $P^T_{e^\\mp}$ \nis the transversal polarization for the electrons and positrons. \nThe $\\alpha_{\\mp}$ refers to the angle of polarization of the electron and the positron, \nrespectively. 
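\n\nTo make the bookkeeping in Eq.~(\\ref{eq:fin2a}) explicit, the short sketch below evaluates the polarization-weighted $|T|^2$ for a pair of generic helicity amplitudes. The amplitudes used here are placeholders (they are not the expressions derived later), so only the weighting of the $e^-_L e^+_R$ and $e^-_R e^+_L$ configurations and of the transverse interference term is illustrated.\n
\\begin{verbatim}\n
import numpy as np\n
\n
def T2_polarized(T_LR, T_RL, PL_em, PL_ep, PT_em=0.0, PT_ep=0.0,\n
                 alpha_m=0.0, alpha_p=0.0):\n
    # Polarization-weighted |T|^2, following the structure of the equation above:\n
    # T_LR, T_RL are the helicity amplitudes T_{e-_L e+_R}, T_{e-_R e+_L};\n
    # PL (PT) are the longitudinal (transverse) polarization degrees and\n
    # alpha_m, alpha_p the transverse polarization angles of e- and e+.\n
    phase = np.exp(-1j * (alpha_m + alpha_p))\n
    interference = 2.0 * np.real(phase * T_RL * np.conj(T_LR))\n
    return 0.25 * ((1 - PL_em) * (1 + PL_ep) * abs(T_LR)**2\n
                   + (1 + PL_em) * (1 - PL_ep) * abs(T_RL)**2\n
                   + PT_em * PT_ep * interference)\n
\n
# toy amplitudes, only to exhibit the weighting\n
T_LR, T_RL = 1.0 + 0.2j, 0.6 - 0.1j\n
for PL_em, PL_ep in [(0.0, 0.0), (-0.8, 0.3), (0.8, -0.3)]:\n
    print(PL_em, PL_ep, T2_polarized(T_LR, T_RL, PL_em, PL_ep))\n
\\end{verbatim}\n
For $P^L_{e^-}=-0.8$ and $P^L_{e^+}=0.3$ the $e^-_L e^+_R$ contribution is enhanced by the factor $(1-P^L_{e^-})(1+P^L_{e^+})=2.34$, which is the configuration discussed below.\n\n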
The polarizations of the electron and the positron at the linear colliders\nare independent and can be arbitrarily changed. The proposed linear colliders (ILC and CLIC)\nassume that the following polarizations can be achieved\\footnote{It is important to note \nthe role of the beam polarization in the $t\\bar{t}$ production. For the $-80\\%$ of the electron\npolarization and $+30\\%$ of the positron polarization the initial stated will be dominantly \npolarized as $e^-_L e^+_R$, giving in the SM\na constructive interference of the $\\gamma$ and $Z$ amplitudes\nfor the production of $t_L\\bar{t}_R$ pair, and a destructive interference for the \nproduction of $t_R \\bar{t}_L$, which then leads to a large positive forward-backward asymmetry. \n} \n\\begin{eqnarray}\nP^{L,T}_{e^-} = \\pm 80\\% \\;, P^{L,T}_{e^+} = \\pm 30\\% \\,.\n\\end{eqnarray}\nAs it was shown in Ref.~\\cite{Hikasa:1985qi}, if one is interested in the $\\phi_t$ (azimuthal angle of \nthe top quark) dependence of the cross section, instead of discussing $\\phi_t$ dependence directly, \nit is simpler to study $\\alpha_{\\mp}$ dependence, since the latter is explicit in above. It can be shown that \n\\begin{eqnarray}\n | \\langle f(\\phi_t,...) | T | e^-(\\alpha_-) e^+ (\\alpha_+) \\rangle |^2 = | \\langle f(\\phi_t = 0,...) | T |\n e^-(\\alpha_- - \\phi_t) e^+ (\\alpha_+ - \\phi_t) \\rangle |^2 \\,,\n\\end{eqnarray}\nfrom the rotational invariance with respect to the beam direction, i.e. the rotation of the final state \nby $\\phi_t$ is equivalent to the rotation of the initial state by $-\\phi_t$.\nWith this assumption Eq.~(\\ref{eq:fin2a}) becomes \n\\begin{eqnarray}\n |T|^2 &=& \\frac{1}{4}\\left\\lbrace(1-P^L_{e^-})(1+P^L_{e^+}) |T_{e^-_L e^+_R}|^2 + \n (1+P^L_{e^-})(1-P^L_{e^+}) |T_{e^-_R e^+_L}|^2 \\right. \\nonumber \\\\\n && \\hspace*{0.5cm}\\left. -2 P^T_{e^-} P^T_{e^+} \n\\rm{Re}~e^{i(\\eta-2\\phi_t)} T^{\\ast}_{e^-_R e^+_L} T_{e^-_L e^+_R}\n \\right\\rbrace,\n \\label{eq:fin2}\n\\end{eqnarray}\nwhere $\\eta = \\alpha_-+\\alpha_+$.\nThe effects of various beam polarizations in above will be discussed in the following. \n \n\\section{Analysis of the $tqH$ final state at the $e^- e^+$ linear collider}\\label{sec:cmframe}\n\nWe study the $t\\bar{t}$ production in the context of\nthe $e^-e^+$ linear collider, where one of the top decays to $Wb$, and the \nother decays to $q(u,c)H$ and the leptonic decay mode of \nthe $W$ boson is considered:\n\\begin{eqnarray}\n&& e^-(p_1) + e^+(p_2)\\rightarrow t(q_1)+\\bar{t}(q_2), \\nonumber \\\\\n&& \\hspace*{3cm} t(q_1) \\rightarrow q (p_{q})+ H, \\qquad \n\\bar{t}(q_2) \\rightarrow \\bar{b} (p_b)+ l^+(p_l)+ \\nu(p_\\nu).\n\\end{eqnarray}\nWe first consider the leading order spin dependent differential cross-section of the top pair production\nin a generic basis. The total phase space is split into the product of the differential cross-section for \nthe $t\\bar{t}$ production, the three-particle decay of the antitop quark and the two-particle decay \nof the top quark, with the Higgs decaying to $b\\bar{b}$. We first do the analysis considering the decay of $t$ to $qH$ and the inclusive decay of $\\bar{t}$. In an attempt to make a comparative study, we also consider the $t\\bar{t}$ production, with the SM decay of top to $W^+b$,\nand the inclusive decay of $\\bar{t}$. This SM process will be a background for the $tqH$ final state, with the $H$ \nand the $W$ decaying hadronically. 
\nSince the analysis is being similar for both, the considered signal and the SM \nbackground, we only discuss the calculation of the signal in details.\nThe differential cross section in the centre of mass frame becomes\n\\begin{eqnarray}\\label{df_cs1}\nd\\sigma &=& \\frac{1}{2s} \\int \\frac{ds_1}{2\\pi} \\frac{1}{((s_1-m_t^2)^2+\\Gamma_t^2m_t^2)}\n \\times \\mid \\bar{\\mathcal{M}}^2 \\mid \\nonumber \\\\\n&\\times& (2 \\pi)^4 \\delta^4 (q_1+q_2-p_1-p_2) \\frac{d^3q_1}{(2\\pi^3) 2E_1} \\frac{d^3q_2}{(2\\pi^3) 2E_2}~~~~~\n[\\rm{production~of}~ t\\bar{t}] \\nonumber \\\\\n&\\times& (2\\pi)^4 \\delta^4(p_q+p_H-q_1) \\frac{d^3p_q}{(2\\pi^3) 2E_q} \\frac{d^3p_H}{(2\\pi^3) 2E_H} \n~~~~~[\\rm{decay~ of}~ t]\\,,\n\\end{eqnarray}\nwhere $\\sqrt{s}$ is the centre of mass energy and $s_1 = (p_q+p_H)^2$. The energies of the produced top and the antitop are given by \n$E_1,~E_2$, whereas the energies of the decay products are denoted by $E_q$ and $E_H$. \nFor these decays, in the center of mass frame and in the narrow width approximation, we can express the elements of the phase space in (\\ref{df_cs1}) as\n\\begin{eqnarray}\n&&\\int \\frac{ds_1}{2\\pi} \\frac{1}{((s_1-m_t^2)^2+\\Gamma_t^2m_t^2)}\n = \\int \\frac{ds_1}{2\\pi} \\frac{\\pi}{m_t \\Gamma_t} \\delta(s_1-m_t^2)\n =\\frac{1}{2 m_t \\Gamma_t} \\,,\\label{t_prop} \\\\ \n&&\\int \\frac{1}{2s} (2 \\pi)^4 \\delta^4 (q_1+q_2-p_1-p_2) \\frac{d^3q_1}{(2\\pi^3) 2E_1} \\frac{d^3q_2}{(2\\pi^3) 2E_2}\n = \\frac{3\\beta}{64 \\pi^2 s} d\\cos\\theta_t d\\phi_t \\,, \\label{prod_tt} \\\\\n&&\\int (2\\pi)^4 \\delta^4(p_q+p_H-q_1) \\frac{d^3p_q}{(2\\pi^3) 2E_q} \\frac{d^3p_H}{(2\\pi^3) 2E_H}=\\frac{1}{2(2\\pi)^2}\n\\int d\\Omega_q \\frac{\\mid p_q \\mid^2}{(m_t^2-m_H^2)} \\,. \n\\label{t_decay}\n\\end{eqnarray}\nThe total matrix element squared $\\mid \\bar{\\mathcal{M}}^2 \\mid $ in Eq.~(\\ref{df_cs1}), \nis defined as\n\\begin{eqnarray}\n\\mid \\bar{\\mathcal{M}}^2 \\mid&=& \\sum_{L,R}\\sum_{(\\lambda_t\\lambda_t'=\\pm)}\\rho^{P(t\\bar{t})}_{LR,\\lambda_t\\lambda_t'}\\rho^{D(t)}_{\\lambda_t\\lambda_t'} = \\sum_{L,R}\\sum_{(\\lambda_t\\lambda_t'=\\pm)}\n \\mathcal{M}^{L,R}_{\\lambda_t}\\mathcal{M}^{*L,R}_{\\lambda_t'}\n \\rho^{D(t)}_{\\lambda_t\\lambda_t'},\n\\end{eqnarray}\nwhere $ \\mathcal{M}^{L,R}_{\\lambda_t}$ is the production helicity amplitude of the top with a given helicity \n$\\lambda_t$. The helicities of the antitop are summed over. The production helicity amplitudes are listed\nin Eqs.(\\ref{va_hel}) of Appendix \\ref{sec:Appendix1}. \nThe decay matrix of the top quark is defined as $\\rho^{D(t)}_{\\lambda_t\\lambda_t'}=\\mathcal{M}(\\lambda_t)\n\\mathcal{M}^*(\\lambda_t')$ and for $t \\to q H$ the explicit expressions in the rest frame of the top, as well as in the centre of mass frame are given in Appendix \\ref{sec:appen_decaytcH}. For the top decaying to $W^+b$ the spin density matrix \n$\\rho^{D(t)}_{\\lambda_t\\lambda_t'}$, is given in Appendix~\\ref{sec:appen_decaytWb}, \nfor both the top rest frame and the centre of mass frame. \n\nWe have performed our calculations, in the frame where the electron beam direction \nis in the positive $z$ direction, with the top emitted at a polar angle \n$\\theta_t$ and the quark emitted in the top decay makes a polar $\\theta_q$ angle with\nthe electron beam, as \nshown in Fig.~\\ref{fig:eett_2nd_frame}. \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=10cm, height=6.5cm]{eett_3frames.eps}\n\\caption{The coordinate system in the colliding $e^-e^+$ centre of mass \nframe. 
The $y$-axis is chosen along the $p_1 (e^-) \\times q_1 (t)$ direction and is pointing towards\nthe observer. The coordinate systems in the $t$ and $\\bar{t}$ rest frames are obtained from it by rotation \nalong the $x$ axis and then boost along the $y$ axis.}\n\\label{fig:eett_2nd_frame}\n\\end{figure}\nThe four-vector in the rest frame of the top are related to the c.m. frame by the following \nboost and the rotation matrices (the boost matrix is along the $z$ direction, whereas the \nrotation matrix is applied along the $y$ axis):\n\\begin{eqnarray}\n q_1&=&\\begin{pmatrix}\n 1&0&0&0 \\\\ 0&\\cos\\theta_t&0&\\sin\\theta_t \\\\ 0&0&1&0 \\\\ 0&-\\sin\\theta_t &0&\\cos\\theta_t\n \\end{pmatrix} \\begin{pmatrix} \n \\gamma &0&0&\\gamma\\beta \\\\ 0&1&0&0 \\\\0&0&1&0 \\\\ \\gamma\\beta &0&0&\\gamma \n \\end{pmatrix} q_1^{top} \\,, \n\\end{eqnarray}\nwhere $q_1^{top}$ is defined in the rest frame of the top. \nThe momentum four-vectors in the c.m frame are given by\n\\begin{eqnarray}\np_1 &=& \\frac{\\sqrt{s}}{2}(1,0,0,1),~~~~p_2 = \\frac{\\sqrt{s}}{2}(1,0,0,-1) \\nonumber \\\\\nq_1&=& \\frac{\\sqrt{s}}{2}(1,\\beta \\sin\\theta_t,0,\\beta\\cos\\theta_t),~~~~\nq_2= \\frac{\\sqrt{s}}{2}(1,-\\beta \\sin\\theta_t,0,-\\beta\\cos\\theta_t) \\nonumber \\\\\np_{q}&=&(E_{q}, E_{q}\\sin\\theta_{q} \\cos\\phi_{q}, E_{q}\\sin\\theta_{q}\\sin\\phi_{q}, E_{q}\\cos\\theta_{q})\n\\end{eqnarray}\nThe momentum of the emitted light quark $\\mid p_q \\mid$ is equal to its energy \n$E_{q}$\nand in the c.m frame the following relations are obtained: \n\\begin{eqnarray} \\label{eq:cosTHtq}\n\\mid p_q \\mid &=& E_{q} = \\frac{(m_t^2-m_H^2)}{\\sqrt{s} (1-\\beta \\cos\\theta_{tq})}, \\nonumber \\\\\n\\cos \\theta_{tq} &=& \\cos\\theta_t \\cos \\theta_q +\\sin\\theta_t \\sin \\theta_q \\cos \\phi_q \\,. \n\\end{eqnarray}\nwhere $\\cos \\theta_{tq}$ is the angle between the top and the emitted light quark in the c.m. frame.\n\nCombining the production and the density matrices in the narrow width approximation for $t$, we get the \npolar distribution of the emitted quark $q$, in the presence of the beam polarization\nafter integrating over $\\phi_q,\\theta_t$, to be \n\\begin{eqnarray}\n \\frac{d\\sigma}{ds~d\\cos\\theta_q~d\\phi_t} &=& |T|^2\n\\end{eqnarray}\nwhere $|T|^2$ is of the form given in Eq.~(\\ref{eq:fin2}). We compute $|T_{e^-_L e^+_R}|^2$, $|T_{e^-_R e^+_L}|^2$\nfor the considered process, and present them in the most general form:\n\\begin{eqnarray}\n |T_{e^{\\mp}_L e^{\\pm}_R}|^2 &=& (|g_{tq}|^2+|g_{qt}|^2) \\left(a_0 + a_1 \\cos\\theta_q + a_2 \\cos^2\\theta_q \\right) \\nonumber \\\\ \n&& \\pm(|g_{tq}|^2-|g_{qt}|^2) \\left(b_0+ b_1 \\cos\\theta_q + b_2 \\cos^2\\theta_q \\right) \\,.\n\\end{eqnarray}\nThe coefficients $a_{i},b_{i}$ \ncan be deduced from the following expressions: \n\\begin{align}\\label{eq:Long_mp}\n |T_{e^-_L e^+_R}|^2 &= (m_t^2-m_H^2) \\frac{\\pi s}{\\beta} \\left \\lbrace \\frac{|g_{tq}|^2+|g_{qt}|^2}{\\beta^2-1}\n \\left[-4 A_L B_L \\cos\\theta_q \\left ( \\beta +(\\beta^2-1)\\tanh^{-1}\\beta\\right) \\right. \\right. \\nonumber \\\\\n & \\left. \\left.+ (A_L^2+B_L^2) \\cos^2\\theta_q \\left (\\beta(2\\beta^2-3)-3(\\beta^2-1)\\tanh^{-1} \\beta\\right )\n \\right. \\right. \\nonumber \\\\\n & \\left. \\left. +\\left(-\\beta +(\\beta^2-1)\\tanh^{-1}\\beta\\right)(A_L^2+B_L^2) -2\\beta(\\beta^2-1)B_L^2 \\right]\n \\right. \\nonumber \\\\\n & \\left. + 2(|g_{tq}|^2-|g_{qt}|^2)\\left[\\cos\\theta_q \\left((A_L^2+B_L^2)\\tanh^{-1} \\beta -\\beta B_L^2 \\right)\n \\right. \\right. 
\\nonumber \\\\ \n& \\left. \\left. \n+A_L B_L \n \\left(1-3 \\cos^2\\theta_q\\right) (\\beta-\\tanh^{-1} \\beta) \\right] \\right\\rbrace, \\\\\n |T_{e^-_R e^+_L}|^2 &= (m_t^2-m_H^2) \\frac{\\pi s}{\\beta} \\left \\lbrace \\frac{|g_{tq}|^2+|g_{qt}|^2}{\\beta^2-1}\n \\left[-4 A_R B_R \\cos\\theta_q \\left ( \\beta +(\\beta^2-1)\\tanh^{-1}\\beta\\right ) \\right. \\right. \\nonumber \\\\\n & \\left. \\left.+ (A_R^2+B_R^2)\\cos^2\\theta_q \\left ( \n \\beta(2\\beta^2-3)-3(\\beta^2-1)\\tanh^{-1} \\beta\\right )\n \\right. \\right. \\nonumber \\\\\n & \\left. \\left. +\\left(-\\beta +(\\beta^2-1)\\tanh^{-1}\\beta\\right)(A_R^2+B_R^2) -2\\beta(\\beta^2-1)B_R^2 \\right]\n \\right. \\nonumber \\\\\n & \\left. - 2(|g_{tq}|^2-|g_{qt}|^2)\\left[\\cos\\theta_q \\left((A_R^2+B_R^2)\\tanh^{-1} \\beta -\\beta B_R^2 \\right)\n \\right. \\right. \\nonumber \\\\ \n& \\left. \\left. +A_R B_R \n \\left(1-3 \\cos^2\\theta_q\\right) (\\beta-\\tanh^{-1} \\beta) \\right] \\right\\rbrace,\\label{eq:Long_pm}\n\\end{align}\nand \n\\begin{align}\\label{eq:Trans}\n T^{\\ast}_{e^-_R e^+_L} T_{e^-_L e^+_R} &= \\frac{\\pi s}{\\beta}\n(m_t^2-m_H^2) (\\beta-\\tanh^{-1} \\beta) (3\\cos^2\\theta_q-1) \\cos (\\eta-2\\phi_t) \\nonumber \\\\\n&\\left\\lbrace (|g_{tq}|^2+|g_{qt}|^2)(A_L A_R-B_L B_R)\n+(|g_{tq}|^2-|g_{qt}|^2)(A_L B_R -A_R B_L)\\right\\rbrace,\n \\end{align}\nwhere $A_{L,R}$ and $B_{L,R}$ are combinations of the standard SM $\\gamma$ \nand $Z$ couplings with the top and the leptons in the $t\\bar{t}$ production given \nin the Eq.~(\\ref{eq:albl}). The Yukawa chiral couplings, as seen from \nEqs.~(\\ref{eq:Long_mp}),~(\\ref{eq:Long_pm}) are both proportional \nto the polar angle of the emitted light quark, $\\cos\\theta_q, \\cos^2\\theta_q$, but \nhave different dependencies. The coefficients of the coupling $|g_{tq}|^2$, which \nmeasures the coupling strength of $t_L$ with $q_R$ and the Higgs, are summed \nin Eq.~(\\ref{eq:Long_mp}), whereas the coefficients of the other chiral coupling\n$|g_{qt}|^2$ do not add up, but cancel each other partially. This is the case \nwhen the electron beam is left polarized and the positron is right polarized. \nThis behaviour of $|g_{tq}|^2$ and $|g_{qt}|^2$ is reversed \nwith the right polarized electrons and the left polarized positrons, as can be noticed from Eq.~(\\ref{eq:Long_pm}), where the coefficients of $|g_{qt}|^2$ add up. Therefore, \nit will be possible to control the influence of particular chiral couplings with a\nsuitable choice of beam polarization. The case of transverse polarization is also \nconsidered, although both $|g_{tq}|^2,~|g_{qt}|^2$ involve same angular dependencies in \nEq.~(\\ref{eq:Trans}) and therefore cannot be used for the analysis of the chirality of the FCNC couplings. \nIt is clear from Eqs.~(\\ref{eq:Long_mp}),~(\\ref{eq:Long_pm}),~(\\ref{eq:Trans}), that $|g_{tq}|^2$ and $|g_{qt}|^2$ cannot be isolated separately, but \ntheir effects can be individually controlled with suitable choice of beam \npolarization. We next study different distributions in the presence of the \nchiral FCNC couplings and accordingly construct asymmetries to set limits on them.\n\n\\subsection{Constraints on the chiral FCNC couplings by angular asymmetries}\\label{sec:cmf_asymmetries}\n\nNext, we perform a detailed analysis of the signal FCNC process considered, along with the standard SM background ($t\\bar{t}, t\\rightarrow Wb$, W decaying hadronically) and construct different asymmetries for obtaining limits on the couplings. 
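\n\nBefore turning to the cross sections and asymmetries, it is useful to see numerically how the $(|g_{tq}|^2-|g_{qt}|^2)$ piece of the distributions above changes sign between the two helicity configurations. The sketch below is a transcription of Eqs.~(\\ref{eq:Long_mp}) and (\\ref{eq:Long_pm}), with $\\beta$ the top velocity in the centre of mass frame; the SM coupling combinations $A_{L,R}$, $B_{L,R}$ are defined in Appendix~\\ref{sec:All_appen} and are treated here as numerical inputs, so the values used below are illustrative placeholders rather than the actual SM ones.\n
\\begin{verbatim}\n
import numpy as np\n
\n
def T2_helicity(ctq, gtq2, gqt2, A, B, s, mt=173.3, mH=125.0, sign=+1):\n
    # |T_{e-_L e+_R}|^2 (sign=+1, with A=A_L, B=B_L) or\n
    # |T_{e-_R e+_L}|^2 (sign=-1, with A=A_R, B=B_R),\n
    # transcribed from the expressions above; ctq = cos(theta_q).\n
    beta = np.sqrt(1.0 - 4.0 * mt**2 / s)      # top velocity in the c.m. frame\n
    ath = np.arctanh(beta)\n
    sym = (gtq2 + gqt2) / (beta**2 - 1.0) * (\n
        -4.0 * A * B * ctq * (beta + (beta**2 - 1.0) * ath)\n
        + (A**2 + B**2) * ctq**2 * (beta * (2.0 * beta**2 - 3.0)\n
                                    - 3.0 * (beta**2 - 1.0) * ath)\n
        + (-beta + (beta**2 - 1.0) * ath) * (A**2 + B**2)\n
        - 2.0 * beta * (beta**2 - 1.0) * B**2)\n
    asym = 2.0 * (gtq2 - gqt2) * (\n
        ctq * ((A**2 + B**2) * ath - beta * B**2)\n
        + A * B * (1.0 - 3.0 * ctq**2) * (beta - ath))\n
    return (mt**2 - mH**2) * np.pi * s / beta * (sym + sign * asym)\n
\n
# placeholder inputs: pure g_tq coupling vs pure g_qt coupling, e-_L e+_R\n
s, A, B = 500.0**2, 0.6, 0.2      # A, B are illustrative numbers only\n
for ctq in (-0.5, 0.0, 0.5):\n
    print(ctq,\n
          T2_helicity(ctq, 0.16**2, 0.0, A, B, s, sign=+1),\n
          T2_helicity(ctq, 0.0, 0.16**2, A, B, s, sign=+1))\n
\\end{verbatim}\n
Evaluating the same expression with $A_R$, $B_R$ and sign $-1$ reproduces Eq.~(\\ref{eq:Long_pm}), so the roles of $|g_{tq}|^2$ and $|g_{qt}|^2$ in the $\\cos\\theta_q$ dependence are interchanged by the opposite choice of beam polarization, as noted above.\n\n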
\n\nThe total cross section for both the signal and\nthe background, in case of the longitudinal beam polarization is \n\\begin{eqnarray}\n\\sigma_{Signal} &=& \\frac{(m_t^2-m_H^2)^2}{4 s \\Gamma_t m_t}\\frac{1}{1-\\beta^2}(|g_{tq}|^2+|g_{qt}|^2)\\left((1-P^L_{e^-})(1+P^L_{e^+})\n\\left(s \\beta^2 B_L^2 + (2 m_t^2+s) A_L^2\\right) \\right. \\nonumber \\\\\n&& \\left. + (1+P^L_{e^-})(1-P^L_{e^+}) \\left(s \\beta^2 B_R^2 + (2 m_t^2+s) A_R^2 \\right) \\right)\\,, \\\\\n\\sigma_{Bkg} &=&\\frac{g^2 m_t}{2 s^2 \\Gamma_t m_W^2}\\frac{1}{(1-\\beta^2)^2} (m_t^2-m_W^2)^2(m_t^2+2m_W^2)\n\\left((1-P^L_{e^-})(1+P^L_{e^+})\n\\left(s \\beta^2 B_L^2 \\right. \\right. \\nonumber \\\\ \n&& \\left. \\left. + (2 m_t^2+s) A_L^2\\right) \n + (1+P^L_{e^-})(1-P^L_{e^+}) \\left(s \\beta^2 B_R^2 + (2 m_t^2+s) A_R^2 \\right) \\right)\\,, \n\\end{eqnarray}\nwhere again $A_{L,R}$ and $B_{L,R}$ are combinations of the SM $\\gamma$ and $Z$ couplings with\nthe quarks in the $t\\bar{t}$ production given in Appendix~\\ref{sec:All_appen}. \n\n\nWe have performed our analysis considering $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16,\nin accordance with the latest LHC bounds~\\cite{Greljo:2014dka}.\nThe background i.e. the SM $\\bar{t}Wb$ contribution is scaled down, to be \ncompared with the signal. We are currently not applying any cuts on the \nfinal state, but a detailed analysis using all the experimental cuts will be\nperformed in Sec.~\\ref{sec:numericalstudy}.\n\nThe polar angle distribution of the \nemitted quark is plotted in Fig.~\\ref{fig:dist_pol} for both, the signal and \nthe background, for \n($a$) $P^L_{e^-} = P^L_{e^+} = 0$, ($b$) $P^L_{e^-} = -0.8,~P^L_{e^+} = 0.3$ \nand $c$) $P^L_{e^-} = 0.8,~P^L_{e^+} = -0.3$. The polar \nangle distribution will be sensitive to the chirality of the Yukawa couplings and \ntherefore we present our results for three different cases:\n\\begin{itemize}\n \\item Case 1 : $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16\\,,\n \\item Case 2 : $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16, with $|g_{qt}|^2$ = 0 \\,,\n \\item Case 3 : $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16, with $|g_{tq}|^2$ = 0 \\,.\n\\end{itemize}\n\\begin{figure}[htb]\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=6.5cm, height=5cm]{Pol_dist_up.eps}\n\\caption{}\n\\label{fig:upa}\n \\end{subfigure}\n \\hspace{1.0cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=6.5cm, height=5cm]{Pol_dist_eLpR.eps}\n\\caption{}\n\\label{fig:plr}\n \\end{subfigure}\\\\[1ex]\n \\begin{subfigure}{\\linewidth}\n \\hspace{3.5cm}\n \\includegraphics[width=6.5cm, height=5cm]{Pol_dist_eRpL.eps}\n\\caption{}\n\\label{fig:prl}\n \\end{subfigure}\n \\caption{The polar angle distribution of the quark at $\\sqrt{s}= 500$ GeV,\nfor ($a$) $P^L_{e^-} = P^L_{e^+} = 0$, ($b$) $P^L_{e^-} = -0.8,~P^L_{e^+} = 0.3$ and \n($c$) $P^L_{e^-} = 0.8,~P^L_{e^+} = -0.3$. The different Cases\nare discussed in the text.}\n\\label{fig:dist_pol}\n\\end{figure}\nIt can be clearly seen from Fig:~\\ref{fig:dist_pol}, that $|g_{tq}|^2$ \nand $|g_{qt}|^2$ are sensitive to the beam polarization. The different Cases\nbehave similar in the unpolarized case, Fig.~\\ref{fig:upa}.\nCase 2 is most prominent when the electron beam is left polarized and \nthe positron is right polarized, Fig.~\\ref{fig:plr}, whereas Case 3 is distinct for \nthe scenario with right polarized electrons and \nleft polarized positrons, Fig.~\\ref{fig:prl}. 
Therefore the manifestation of the \ndominance of one of the coupling, if present, will be prominent using the \nsuitable initial beam polarization. \n\nUsing the above fact that the couplings are sensitive to the polar \nangle distributions of the quark, we next consider \ndifferent asymmetries to give simultaneous limits to both of the \ncouplings. The $|g_{tq}|^2$ and $|g_{qt}|^2$ terms are accompanied by $\\cos\\theta_q$,~ $\\cos(\\eta-2\\phi)$ and $\\cos(\\eta-2\\phi) \\cos^2\\theta_q$ angular \ndependence. The asymmetries which will isolate these terms are the forward-backward asymmetry \nand the azimuthal asymmetry defined as \n\\begin{eqnarray}\nA_{fb}(\\cos\\theta_0)&=& \\frac{1}{d\\sigma\/ds}\n\\left(\\int^1_{\\cos\\theta_0} d\\cos\\theta_q - \\int^{\\cos\\theta_0}_{-1} d\\cos\\theta_q \\right)\n\\frac{d\\sigma}{ds~d\\cos\\theta_q} \\,,\\label{eq:fb}\\\\\nA_{\\phi}(\\cos\\theta_0) &=& \\frac{1}{d\\sigma\/ds}\\left(\\int^{\\cos\\theta_0}_{-\\cos\\theta_0} d\\cos\\theta_q\n\\int_0^{2\\pi} d \\phi_t~sgn(\\cos(\\eta - 2\\phi_t))\\right) \\frac{d\\sigma}{ds~d\\Omega} \\,,\n\\label{eq:trans_asym} \n\\end{eqnarray}\nwhere $\\theta_0$ is the experimental polar-angle cut \\cite{Grzadkowski:2000nx,Rindani:2003av}\nand $\\Omega = d\\cos\\theta_q~d\\phi_t$. \nThe forward-backward asymmetry will isolate the terms proportional to $\\cos\\theta_q$ in Eqs.(\\ref{eq:Long_mp})\nand (\\ref{eq:Long_pm}). We plot in Fig.~\\ref{fig:FB}, the forward backward asymmetry as a function of \nthe cut-off angle $\\cos\\theta_0$. The dip in the plot is where the value of \n$A_{fb}(\\cos\\theta_0)$ is zero. \nIn the presence of $|g_{tq}|^2~(|g_{qt}|^2 = 0)$, i.e Case 2, with left polarized electrons \nand right polarized positrons, the quarks are emitted in the forward direction with the dip \nof $A_{fb}$ to be greater than zero, Fig.~\\ref{fig:fb_lr}, \nwhereas the other Cases almost follow the SM distribution. \nSimilarly, with the opposite choice of beam polarization, the \n$|g_{qt}|^2~(|g_{tq}|^2 = 0)$ coupling leads to the quarks being emitted in the forward direction, \nresulting in the dip of $A_{fb}$ to be greater than zero for Case 3 in Fig.~\\ref{fig:fb_rl}. \n\\begin{figure}[htb]\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=6.5cm, height=5cm]{FB_unpol.eps}\n\\caption{}\n\\label{fig:fb_up}\n \\end{subfigure}\n \\hspace{1.0cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=6.5cm, height=5cm]{FB_eLpR.eps}\n\\caption{}\n\\label{fig:fb_lr}\n \\end{subfigure}\\\\[1ex]\n \\begin{subfigure}{\\linewidth}\n \\hspace{3.5cm}\n \\includegraphics[width=6.5cm, height=5cm]{FB_eRpL.eps}\n\\caption{}\n\\label{fig:fb_rl}\n \\end{subfigure}\n\\caption{The forward backward asymmetry as a function of the cut-off angle \n$\\cos\\theta_0$ Eq.~(\\ref{eq:fb}) at $\\sqrt{s}= 500$ GeV, for ($a$) $P^L_{e^-} = P^L_{e^+} = 0$,\n($b$)$P^L_{e^-} = -0.8,~P^L_{e^+} = 0.3$ and ($c$) $P^L_{e^-} = 0.8,~P^L_{e^+} = -0.3$. The different Cases are discussed in the text.}\n\\label{fig:FB}\n\\end{figure}\n\nNext, we plot the azimuthal asymmetry $A_{\\phi}(\\cos\\theta_0)$ as a function \nof $\\cos\\theta_0$ in Fig.~\\ref{fig:azimuthal}. The terms proportional \nto $\\cos(\\eta-2\\phi_t)$ in Eq.~(\\ref{eq:Trans}) survive. We have considered\n$\\eta$ = 0 for our analysis and $P^T_{e^-} = 0.8$ and $P^T_{e^+} =$ 0.3. 
The \ndistribution is similar for the signal and the background, therefore this \nwill not be an useful observable\\footnote{ \nHowever once the FCNC coupling is discovered, this asymmetry can\nbe used as an additional observable to give limits to the couplings.}.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=6.9cm, height=4.9cm]{Azimuth.eps}\n\\caption{The azimuthal asymmetry $A_{\\phi}(\\theta_0)$ as a function of\n$\\cos\\theta_0$ Eq.~(\\ref{eq:trans_asym}) at $\\sqrt{s}= 500$ GeV, for the transversal polarizations \n$P^T_{e^-} =0.8$ and $P^T_{e^+} =$ 0.6. The different Cases are discussed in\nthe text.}\n\\label{fig:azimuthal}\n\\end{figure}\n\nWe compute the limits on the FCNC couplings from the measurement of the forward-backward asymmetry, \nof $e^-e^+ \\rightarrow t\\bar{t}, t\\rightarrow b W^+$ in the SM.\nThe statistical fluctuation in the asymmetry ($A$), for a given luminosity $\\mathcal{L}$ \nand fractional systematic error $\\epsilon$, is given as\n\\begin{eqnarray}\n\\Delta A^2 &=&\\frac{1-A^2}{\\sigma \\mathcal{L}}+\\frac{\\epsilon^2}{2}(1-A^2)^2,\n\\end{eqnarray}\nwhere $\\sigma$ and $A$ are the values of the cross section and the asymmetry. The value\nof $\\epsilon$ is set to zero for our analysis. \nWe define the statistical significance of an asymmetry prediction for the new physics,\n$A_{FCNC}$, as the number of standard deviations that it lies away from the SM result $A_{SM}$, \n\\begin{equation}\n s=\\frac{|A_{FCNC}-A_{SM}|}{\\Delta A_{SM}}\\,,\n\\end{equation}\nwhere $A_{FCNC}$ is the asymmetry calculated for the process $e^-e^+ \\rightarrow t(\\to c H) \\bar{t}$. \n\\begin{figure}[h]\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=7cm, height=5cm]{Contours_unpolLR.eps}\n\\caption{}\n\\label{fig:contour1}\n \\end{subfigure}\n \\hspace{1.0cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=7cm, height=5cm]{Contours_LRRL.eps}\n\\caption{}\n\\label{fig:contour2}\n \\end{subfigure}\\\\[1ex]\n\\caption{Contour plots of 3$\\sigma$ and 5$\\sigma$ statistical significance \nin the $|g_{tq}|^2-|g_{qt}|^2|$ region from $A_{fb}$ for $\\theta_0$ = 0, Eq.~(\\ref{eq:fb})\nat $\\sqrt{s}$ = 500 GeV and ${\\cal L}$ = 500 fb$^{-1}$. \nThe solid lines are for the unpolarized case, the dashed lines are for a beam polarization of\n($a$) $P^L_{e^-}$ = -0.8, $P^L_{e^+}$ = 0.3 ($b$) $P^L_{e^-}$ = 0.8, \n$P^L_{e^+}$ = -0.3. Region in blue will be probed at 5$\\sigma$ and the green+blue area will be explored at 3$\\sigma$\nwith unpolarized beams. The inclusion of the beam polarization probes yellow+green+blue area \nat 5$\\sigma$ and pink+yellow+green+blue at 3$\\sigma$. The region which can not be probed\nby ILC with this choice of beam polarization is shown in grey.}\n\\label{fig:contourall}\n\\end{figure}\nWe show in Fig.~\\ref{fig:contourall} the $|g_{tq}|^2-|g_{qt}|^2$ region, which can be probed \nat a statistical significance of 3$\\sigma$ and 5$\\sigma$, with both unpolarized and \npolarized beams. The outside area surrounding solid lines can be probed with unpolarized\nbeams and the outside area surrounding dashed lines can be probed with a beam\npolarization of $P_{e^-}^L = -0.8,~P_{e^+}^L = 0.3$ (Fig.~\\ref{fig:contour1}),\n$P_{e^-}^L = 0.8,~P_{e^+}^L = -0.3$ (Fig.~\\ref{fig:contour2}). Obviously, the inclusion of the beam \npolarization can probe a greater region of the $|g_{tq}|^2-|g_{qt}|^2$\nparameter space. 
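\n\nFor completeness, the numerical procedure behind these contours can be summarized in a few lines: for a given polar-angle distribution one computes $A_{fb}(\\cos\\theta_0)$ from Eq.~(\\ref{eq:fb}), the fluctuation $\\Delta A$ for a luminosity $\\mathcal{L}$ (with $\\epsilon=0$, as assumed above), and the significance $s$. In the sketch below the distribution is parametrized generically as $c_0+c_1\\cos\\theta_q+c_2\\cos^2\\theta_q$; the coefficients and the cross section are placeholders and would in practice be taken from Eqs.~(\\ref{eq:Long_mp}) and (\\ref{eq:Long_pm}).\n
\\begin{verbatim}\n
import numpy as np\n
\n
def afb(c0, c1, c2, cos_theta0=0.0):\n
    # A_fb(cos_theta0) for d(sigma)/d(cos theta_q) = c0 + c1*x + c2*x^2, x in [-1,1]\n
    F = lambda x: c0 * x + 0.5 * c1 * x**2 + c2 * x**3 / 3.0   # antiderivative\n
    total = F(1.0) - F(-1.0)\n
    forward = F(1.0) - F(cos_theta0)\n
    backward = F(cos_theta0) - F(-1.0)\n
    return (forward - backward) / total\n
\n
def significance(A_fcnc, A_sm, sigma_fb, lumi_fb, eps=0.0):\n
    # s = |A_FCNC - A_SM| / Delta A_SM, with\n
    # Delta A^2 = (1 - A^2)/(sigma*L) + (eps^2/2)*(1 - A^2)^2\n
    dA2 = (1.0 - A_sm**2) / (sigma_fb * lumi_fb) + 0.5 * eps**2 * (1.0 - A_sm**2)**2\n
    return abs(A_fcnc - A_sm) / np.sqrt(dA2)\n
\n
# placeholder numbers, only to show the workflow\n
A_sm = afb(1.0, 0.4, 0.3)          # background-like shape\n
A_np = afb(1.0, 0.7, 0.3)          # shape with a modified cos(theta_q) term\n
print(A_sm, A_np, significance(A_np, A_sm, sigma_fb=75.0, lumi_fb=500.0))\n
\\end{verbatim}\n\n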
The $\\cos\\theta_q$ terms \nin Eqs.~(\\ref{eq:Long_mp}-\\ref{eq:Long_pm}) \ncancel each other in case of unpolarized beams.\nThe region in grey is the one, which cannot be explored by ILC with this choice of\nthe beam polarization.\n\nNow we turn to the discussion of different top spin observables which can be used to study the FCNC couplings. \n\n\\section{Top spin observables at the ILC}\\label{sec:spintop}\n\nWe investigate in this section the top spin polarization in the context of the linear collider, \nas the spin information of the decaying top is not diluted by hadronization. In an attempt\nto understand the top spin correlations, we work in the zero momentum frame ($t\\bar{t}$-ZMF)\n\\cite{Bernreuther:2004jv} of the $t\\bar{t}$ quarks, which is \n\\begin{eqnarray}\n (q_1+q_2)^\\mu =\\left(\\sqrt{(q_1+q_2)^2},0,0,0\\right).\n\\end{eqnarray}\nThe $t$ and the $\\bar{t}$ rest frames are then obtained, by boosting (no rotation is involved) into the $t\\bar{t}$-ZMF. This is\ndifferent from the laboratory frame considered before in Sec.~\\ref{sec:cmframe},\nwhere the electron beam is chosen along the $z$ axis, and the $t$ and the $\\bar{t}$ rest\nframes were constructed by boosting from the lab frame along with a suitable Wigner rotation.\n\nThe top quark pair production at $O(\\alpha_{\\rm em})$ is given by\na direct production with the $\\gamma$ and $Z$ exchange: \n\\begin{eqnarray}\ne^-(p_1,\\lambda_1) e^+(p_2,\\lambda_2) \\overset{\\gamma,Z}{\\to} t(q_1,s_t) \\bar{t}(q_2,s_{\\bar{t}})\\,.\n\\label{eq:prod} \n\\end{eqnarray} \nThe spin four-vectors of the top, $s_t$ and the antitop, $s_{\\bar{t}}$ satisfy the usual relations\n\\begin{equation}\ns_t^2 = s_{\\bar{t}}^2 = -1\\,, \\quad\\quad k_1 \\cdot s_t = k_2 \\cdot s_{\\bar{t}} = 0 \\,.\n\\end{equation}\nThe leading order differential cross section for the $t\\bar{t}$ production,\nin the presence of longitudinal polarization Eq.~(\\ref{eq:fin2}), has the phase space factor\nEq.~(\\ref{prod_tt}) and can be written in the spin density matrix representation as\n\\begin{eqnarray}\n&& d\\sigma (\\lambda_1,\\lambda_2,s_t,s_{\\bar{t}}) = \\frac{3 \\beta}{32 \\pi s} |\\mathcal T|^2 \\,,\n\\nonumber \\\\\n&& \n|\\mathcal T|^2 = \\frac{1}{4}{\\rm Tr} \\left [ \\rho \\cdot ({\\bf 1} + {\\bf\\hat{s}}_t \n\\cdot {\\bf\\sigma})\\otimes ({\\bf 1} + {\\bf\\hat{s}}_{\\bar{t}} \\cdot {\\bf\\sigma}) \\right ]\\,.\n\\end{eqnarray}\nIn the above equation, $\\rho = \\rho^{P(t\\bar{t})}$ is the corresponding production spin density matrix \ndescribing the production of (on-shell) top quark pairs in a specific spin configuration,\nwhile $\\mathbf{\\hat{s}}_t$ ($\\mathbf{\\hat{s}}_{\\overline{t}}$) is the unit polarization \nvector of the top (antitop) quark in its rest frame and \n$\\boldsymbol\\sigma = (\\sigma_1,\\sigma_2,\\sigma_3)^T$ is a vector of Pauli matrices.\nConveniently, the most general decomposition of the spin density matrix $\\rho$ for the $t\\bar{t}$ \nproduction is of the form\n\\begin{eqnarray}\n\\rho = A\\, {\\bf 1} \\otimes {\\bf 1} + B_i^{t} \\, \\sigma_i \\otimes {\\bf 1} + B_i^{\\overline{t}} \\, {\\bf 1} \\otimes \\sigma_i + C_{ij} \\, \\sigma_i \\otimes \\sigma_j \\, , \n\\label{eq:rho}\n\\end{eqnarray}\nwhere the functions $A$, ${B}_i^t$ (${B}_i^{\\overline{t}}$) and $C_{ij}$ describe the spin-averaged production\ncross section, polarization of top (antitop) quark and the top-antitop spin-spin correlations, respectively.\nUsing the spin four-vectors defined as\n\\begin{eqnarray}\ns_t^{\\mu} &=& \\left ( \\frac{ \\mathbf{q}_1 
\\cdot \\mathbf {\\hat{s}}_t}{m_t}, \\mathbf {\\hat{s}}_t + \\frac{ \\mathbf{q}_1 (\\mathbf{q}_1 \\cdot \\hat{\\mathbf s}_t)}{ m_t (E_t + m_t)} \\right )\\,,\n\\nonumber \\\\\ns_{\\overline{t}}^{\\mu} &=& \\left ( \\frac{ \\mathbf{q}_2\\cdot \\mathbf{\\hat{s}}_{\\overline{t}}}{m_t}, \\mathbf{\\hat{s}}_{\\overline{t}} + \\frac{ \\mathbf{q}_2 (\\mathbf{q}_2 \\cdot \\mathbf{\\hat{s}}_{\\overline{t}})}{ m_t (E_{\\bar t} + m_t)} \\right )\\,,\n\\end{eqnarray}\nthe decomposition of the squared scattering amplitude $|\\mathcal T|^2$ can be written as\n\\begin{eqnarray}\n|\\mathcal T|^2 = a + b^{t}_\\mu s_t^{\\mu} + b^{\\overline{t}}_\\mu s_{\\overline{t}}^{\\mu} + c_{\\mu\\nu} s_t^{\\mu} s_{\\overline{t}}^{\\nu}\\,,\n\\label{eq:M2}\n\\end{eqnarray}\nand by comparing expressions \\eqref{eq:rho} and \\eqref{eq:M2} one can extract the functions $A$, ${B}_i^t$ (${B}_i^{\\overline{t}}$) and $C_{ij}$.\nThe functions $B_i^t (B_i^{\\bar t})$ and $C_{ij}$ can be further decomposed as \n\\begin{eqnarray}\nB_i^t &=& b_p^t \\hat{p}_i + b_q^t \\hat{q}_i \\,,\\nonumber \\\\\nC_{ij} &=& c_o \\delta_{ij} + c_4 \\hat{p}_i\\hat{p}_j + c_5 \\hat{q}_i\\hat{q}_j + c_6 \n(\\hat{p}_i\\hat{q}_j + \\hat{q}_i\\hat{p}_j)\\,,\n\\end{eqnarray}\nwhere $\\hat{k}$ denotes the unit vector, and we have kept only nonvanishing terms for our case \n\\footnote{In the SM the top-quark spin polarization in the normal direction to the production \nplane only exists if one considers QCD radiative corrections or absorptive part of the \n$Z$-propagator. However, since these contributions for the $t\\bar{t}$ production at \nlinear colliders are extremely small~\\cite{GK1,GK2,Groote:2010zf,KornerNEW}\n(apart from the threshold region) we do not consider them here.}.\n\nThe various top spin observables $\\langle {\\cal O}_i \\rangle$ can then be calculated as\n\\begin{eqnarray}\n \\langle {\\cal O}_i (\\mathbf{{S}}_t, \\mathbf{{S}}_{\\overline{t}})\\rangle\n = \\frac{1}{\\sigma} \\int d\\Phi_{t\\bar t} { {\\rm Tr} [ \\rho \\cdot {\\cal O}_i (\\mathbf{{S}}_t, \n \\mathbf{{S}}_{\\overline{t}} ) ]} \\,,\n \\end{eqnarray}\nwhere $\\sigma = \\int d\\Phi_{t\\bar t} {\\rm Tr}[\\rho]$ is the unpolarized production cross-section,\n$d\\Phi_{t\\bar{t}}$ is the phase space differential and \n$\\mathbf{{S}}_t = \\boldsymbol{\\sigma}\/2 \\otimes {\\bf 1} \\, \n( \\mathbf{{S}}_{\\overline{t}} = {\\bf 1} \\otimes \\boldsymbol{\\sigma}\/2)$ is the top (antitop) spin operator. \nWe consider the following spin observables\n\\begin{eqnarray}\\label{eq:observables}\n{\\cal O}_1 &=& \\frac{4}{3} \\mathbf{{S}}_t \\cdot \\mathbf{S}_{\\overline{t}} \\,,\n\\nonumber \\\\\n{\\cal O}_2 &=& \\mathbf{{S}}_t \\cdot \\mathbf{\\hat{a}},~~~~~\\bar{\\cal O}_2 = \\mathbf{{S}}_{\\bar t} \\cdot \\mathbf{\\hat{b}} \\,,\n\\nonumber \\\\\n{\\cal O}_3 &=& 4 ( \\mathbf{{S}}_t \\cdot \\mathbf{\\hat{a}} ) ( \\mathbf{S}_{\\overline{t}} \\cdot \\mathbf{\\hat{b}} ),\\nonumber \\\\\n{\\cal O}_4 &=& 4 \\left ( (\\mathbf{{S}}_t \\cdot \\mathbf{\\hat{p}} ) \n( \\mathbf{S}_{\\overline{t}} \\cdot \\mathbf{\\hat{q}}) + (\\mathbf{{S}}_t \\cdot \\mathbf{\\hat{q}} )\n( \\mathbf{S}_{\\overline{t}} \\cdot \\mathbf{\\hat{p}} ) \\right ),\n\\end{eqnarray}\ngiving the net spin polarization of the top-antitop system (${\\cal O}_1$), polarization of the top (antitop)\nquark (${\\cal O}_2 (\\bar{{\\cal O}}_2)$), the top-antitop spin correlation (${\\cal O}_3$), with respect to spin quantization axes $\\mathbf{\\hat a}$ \nand $\\mathbf{\\hat b}$. 
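\n\nThe trace structure of these definitions translates directly into a few lines of linear algebra. As an illustration, the sketch below projects a given $4\\times 4$ density matrix in the $t\\otimes\\bar{t}$ spin space onto the basis of Eq.~(\\ref{eq:rho}), returning the coefficients $A$, $B^t_i$, $B^{\\bar{t}}_i$ and $C_{ij}$, and evaluates a correlation of the ${\\cal O}_3$ type for given axes $\\mathbf{\\hat a}$, $\\mathbf{\\hat b}$ at a single phase-space point, i.e.\\ before the phase-space integration; the input matrix is a Hermitian placeholder rather than the actual production matrix $\\rho^{P(t\\bar{t})}$ of Appendix~\\ref{sec:All_appen}.\n
\\begin{verbatim}\n
import numpy as np\n
\n
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),\n
         np.array([[0, -1j], [1j, 0]], dtype=complex),\n
         np.array([[1, 0], [0, -1]], dtype=complex)]\n
one = np.eye(2, dtype=complex)\n
\n
def decompose(rho):\n
    # rho = A 1x1 + B^t_i s_i x 1 + B^tbar_i 1 x s_i + C_ij s_i x s_j,\n
    # extracted via traces with the orthogonal basis (Tr[s_i s_j] = 2 delta_ij)\n
    A = np.trace(rho).real / 4.0\n
    Bt = np.array([np.trace(rho @ np.kron(sigma[i], one)).real / 4.0 for i in range(3)])\n
    Btbar = np.array([np.trace(rho @ np.kron(one, sigma[i])).real / 4.0 for i in range(3)])\n
    C = np.array([[np.trace(rho @ np.kron(sigma[i], sigma[j])).real / 4.0\n
                   for j in range(3)] for i in range(3)])\n
    return A, Bt, Btbar, C\n
\n
def O3(rho, a, b):\n
    # <O_3> at one phase-space point: Tr[rho (sigma.a x sigma.b)] / Tr[rho]\n
    Sa = sum(a[i] * sigma[i] for i in range(3))\n
    Sb = sum(b[i] * sigma[i] for i in range(3))\n
    return (np.trace(rho @ np.kron(Sa, Sb)) / np.trace(rho)).real\n
\n
M = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)\n
rho = M @ M.conj().T               # Hermitian positive placeholder\n
A, Bt, Btbar, C = decompose(rho)\n
print(A, C[2, 2], O3(rho, a=[0, 0, 1], b=[0, 0, -1]))   # a = -b, helicity-like axes\n
\\end{verbatim}\n
In the actual analysis $\\rho$ is the production matrix integrated over the phase space, and the resulting observables are further multiplied by the spin analyzing powers $\\kappa_f$, $\\kappa_{\\bar f}$, as described below.\n\n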
The observable ${\\cal O}_4$ is an additional top-antitop \nspin correlation with respect to the momentum of the incoming and\nthe outgoing particles~\\cite{BrandUwer}. \n\nThe observable ${\\cal O}_1$ can be probed using the opening angle distribution \n($\\varphi$), i.e. the angle between the direction of flight of the two (top and antitop) spin analyzers (which are the final particles produced in the top and antitop decays) \ndefined in the $t$ and $\\bar{t}$ frames, \nrespectively, i.e ${\\bf\\hat{p}}_q \\cdot {\\bf\\hat{p}}_l = \\cos\\varphi$, \n\\begin{eqnarray}\\label{eq:dobservable}\n\\frac{1}{\\sigma} \\frac{d \\sigma}{d \\cos \\varphi} &=& \\frac{1}{2} \\left ( 1 - D \\cos\\varphi \\right ),\n\\end{eqnarray}\nand\n\\begin{equation}\nD = \\braket{\\mathcal O_1} \\kappa_f \\kappa_{\\bar{f}} \n\\end{equation}\nwhere $\\kappa_f(\\kappa_{\\bar{f}})$ are the top, antitop spin analyzers considered here. The spin analyzer \nfor the FCNC top-Higgs decays can be either a direct $t$-quark daughter, i.e. $H$ or $c\/u$-quark, \nor $H$ decay products like $b$ or $\\bar{b}$ in $b\\bar{b}$ decay, or $\\tau^+(\\tau^-)$ in $H \\to \\tau^+\\tau^-$ decay, or jets. On the other hand, \nthe spin analyzer for $\\bar{t}$ are $W^-$ or $\\bar{b}$, or a $W^-$ decay products $l^-, \\bar \\nu$ or jets. \nWe consider the $q=c\/u$ quark from the top and the $l^-$ from the antitop as spin analyzers in this work.\nThe spin analyzers are calculated from the one-particle decay density matrices given as \n\\begin{eqnarray}\n\\rho^{t \\to f(\\bar{t} \\to \\bar{f})}_{\\alpha\\alpha'} &=& \\Gamma^{t \\to f(\\bar{t} \\to \\bar{f})}\n\\left[\\frac{1}{2} \\left({\\bf 1} + \\kappa_{f(\\bar{f})} \n{\\bf\\hat{p}}_{f(\\bar{f})}\\cdot \\bf{\\sigma} \\right)\\right]_{\\alpha\\alpha'}. \n\\end{eqnarray}\nwhere $\\alpha,\\alpha'$ denote the $t$-quark spin orientations, ${\\bf\\hat{p}}_{f}$ and ${\\bf\\hat{p}}_{\\bar{f}}$ are the directions of flight \nof the final particles $f$ and $\\bar {f}$ in the rest frame of the top and the antitop quarks respectively. \nThe values of various $\\kappa_{f(\\bar f)}$ for SM top (antitop) decays are presently known at NLO in QCD\nand can be found in~\\cite{Brandenburg:2002xr}. The top quark polarization matrix can be also written as \n \\begin{equation}\n\\rho^{t \\to f}_{\\alpha\\alpha'} = \\frac{1}{2} \n\\begin{pmatrix}\n1 + \\kappa_f \\cos\\theta^{top}_f & \\kappa_f \\sin \\theta^{top}_f e^{i \\phi^{top}_f} \\\\\n\\kappa_f \\sin \\theta^{top}_f e^{-i \\phi^{top}_f} &1 - \\kappa_f \\cos\\theta^{top}_f \n \\end{pmatrix}_{\\alpha\\alpha'},\n \\end{equation}\n and similarly for the antitop spin matrix $\\rho^{\\bar{t} \\to \\bar{f}}$.\nThe top spin analyzing power of $q$ ($\\kappa_{q}$) from the $t \\to H q$ decay\ncan be calculated from Eq.~(\\ref{eq_me_topch_rest}), in Appendix~\\ref{sec:appen_decaytcH}, \n\\begin{eqnarray}\n\\kappa_q &=& \\frac{|g_{qt}|^2-|g_{tq}|^2}{|g_{qt}|^2+|g_{tq}|^2} \\,.\\label{eq:kappa1}\n\\end{eqnarray}\n Similarly, \nthe spin analyzing power for the $b$ quark ($\\kappa_b$), from the top decay to $W^+b$ can be obtained from \nEqs.~(\\ref{eq_me_topWB_rest}), in Appendix~\\ref{sec:appen_decaytWb}, \n\\begin{eqnarray}\n \\kappa_b &=& \\frac{m_t^2-2m_W^2}{m_t^2+2m_W^2} \\,. 
\\label{eq:kappa2}\n\\end{eqnarray}\nLeptons emitted from the antitop decay, due to the $V-A$ interactions\nare the perfect top spin analyzers (Eqs.~(\\ref{eq_me_top}), in Appendix~\\ref{sec:appen_decaytWb}) with \n\\begin{equation}\n\\kappa_{\\bar{f}} = \\kappa_l =1,\n\\end{equation} \nat LO QCD ($\\alpha_s$ corrections are negligible \\cite{Brandenburg:2002xr}), \nwith their flight directions being 100\\% correlated with the directions of the top spin.\nIt is clear from Eq.~(\\ref{eq:kappa1}) that with $|g_{qt}|^2 \\simeq |g_{tq}|^2$, the spin information of the top is lost ($\\kappa_q \\approx$ 0). However in the presence or dominance of only one of the coupling,\nthe emitted quark acts as a perfect spin analyzer ($\\kappa_q \\approx 1$). \n\nThe top (antitop)-quark polarization and spin-spin correlations can be measured using the double differential \nangular distribution of the top and antitop quark decay products:\n\\begin{eqnarray}\n\\frac{1}{\\sigma} \\frac{d^2 \\sigma}{d \\cos\\theta_f d\\cos\\theta_{\\bar f}} = \\frac{1}{4} \\left ( 1 + B_t \n \\cos\\theta_f + B_{\\bar{t}} \\cos\\theta_{\\bar f} - C \\cos\\theta_f \\cos\\theta_{\\bar f} \\right )\\,,\n\\label{eq:diffsigma}\n\\end{eqnarray}\nwhere $\\theta_f (\\theta_{\\bar f})$ is the angle between the direction of the top (antitop) \nspin analyzer $f,\\,(\\bar f)$ in the $t$ $(\\bar{t})$ rest frame and the \n$\\hat{\\bf a}$ ($\\hat{\\bf b}$) direction in the $t\\bar{t}$-ZMF, c.f.~\\cite{Bernreuther:2004jv}. \nComparing Eq.~(\\ref{eq:diffsigma}), with Eq.~(\\ref{eq:observables}), we have\n\\begin{eqnarray}\nB_t &=& \\braket{\\mathcal O_2} \\kappa_{f} \\,,\\quad \n\\quad B_{\\bar t} = \\braket{\\bar {\\mathcal O}_2} \\kappa_{\\bar f}\\,,\n\\nonumber \\\\\nC &=& \\braket{\\mathcal O_3} \\kappa_{f} \\kappa_{\\bar f}\\,.\n\\end{eqnarray}\nwhere ${\\cal O}_2$ and $\\bar{\\cal O}_2$ are related to the top, antitop spin polarization \ncoefficients $B_t$ and $B_{\\bar t}$. Since there is no CP violation in our case, we consider\n$B \\equiv B_t = \\mp B_{\\bar t}$ for $\\hat{\\bf a} = \\pm \\hat{\\bf b}$ . \nThis limit is a good approximation for the charged leptons \nfrom $W$ decays~\\cite{Brandenburg:2002xr}. \nThe spin observable ${\\cal O}_3$ is also related to the spin correlation function $C_{ij}$ \nin Eq.~(\\ref{eq:rho}), \n\\begin{eqnarray}\n\\braket{ {\\cal O}_3 } = \\frac{ \\sigma_{t \\bar{t}} (\\uparrow \\uparrow) + \\sigma_{t \\bar{t}} (\\downarrow \\downarrow) - \\sigma_{t \\bar{t}} (\\uparrow \\downarrow)- \n\\sigma_{t \\bar{t}} (\\downarrow \\uparrow)}\n{\\sigma_{t \\bar{t}} (\\uparrow \\uparrow) + \\sigma_{t \\bar{t}} (\\downarrow \\downarrow) + \\sigma_{t \\bar{t}} (\\uparrow \\downarrow)+ \\sigma_{t \\bar{t}} (\\downarrow \\uparrow)}\\,,\n\\end{eqnarray}\nwhere the arrows refer to the up and down spin orientations of the top and the antitop \nquark with respect to the $\\mathbf{\\hat a}$ and $\\mathbf{\\hat b}$ quantization axes,\nrespectively. \n\\\\\nAlso ${\\cal O}_4$ gets corrected by $\\kappa_{f} \\kappa_{\\bar f}$ depending on the final particles measured from the $t$ and $\\bar{t}$ decays. \n\nThe arbitrary unit vectors $\\bf{\\hat a}$ and $\\bf{\\hat b}$ specify different\nspin quantization axes which can be chosen to maximize\/minimize the desired polarization \nand the correlation effects. 
We work with the following choices:\n\\begin{align}\n\\hat{\\bf a} &= - \\hat{\\bf b} = \\hat{\\bf q}\\,, && ({\\rm ``helicity\" \\; basis})\\,,&&\n\\nonumber \\\\\n\\hat{\\bf a} &= \\hat{\\bf b} = \\hat{\\bf p}\\,, && ({\\rm ``beamline\" \\; basis})\\,,&&\n\\nonumber \\\\\n\\hat{\\bf a} &= \\hat{\\bf b} = \\hat{\\bf d}_{\\bf X}\\,, && ({\\rm ``off-diagonal\" \\; basis\\, (specific \\, for\\,some \\,model\\, X)})\\,,&&\n\\nonumber \\\\\n\\hat{\\bf a} &= \\hat{\\bf b} = \\hat{\\bf e}_{\\bf X} &&({\\rm ``minimal\" \\; basis\\, (specific \\, for\\,some \\,model\\, X)})\n\\label{eq:axes}\n\\end{align}\nwhere $\\hat{\\bf p}$ is the direction of the incoming beam and $\\hat{\\bf q} = \\hat{\\bf q}_1$ is\nthe direction of the outgoing top quark, both in the $t\\bar t$ center of mass frame. The \noff-diagonal basis \\cite{Parke:1996pr} is the one, where the top spins are $100\\%$ correlated and\nis given by quantizing the spins along the axis $ \\hat {\\bf d}_{\\rm SM}$ determined as\n\\begin{eqnarray}\n\\hat{\\bf d}_{\\rm SM} = \\hat{\\bf d}_{\\rm SM}^{\\rm max} =\n\\frac{ - \\hat{\\bf p} + ( 1 - \\gamma) z \\; \\hat{\\bf q}_1}{\\sqrt{ 1 - (1 - \\gamma^2) z^2}}\\,,\n\\label{eq:dSM}\n\\end{eqnarray}\nwhere $z = \\hat{\\bf p} \\cdot \\hat{\\bf q}_1 = \\cos \\theta$ and \n$\\gamma = E_t\/m_t = 1\/\\sqrt{1 - \\beta^2}$ and which interpolates between the beamline \nbasis at the threshold ($\\gamma \\to 1$) and the helicity basis for\nultrarelativistic energies ($\\gamma \\to \\infty$). We would like to point out here that this\noff-diagonal basis $\\hat{\\bf d}_{\\rm SM}$ is specific to the SM $t\\bar{t}$ production, but a\ngeneral procedure for finding such an off-diagonal basis is given \nin~\\cite{Mahlon:1997uc, Uwer:2004vp}. The idea is to determine the maximal eigenvalue\nof the matrix function $C_{ij}$ in Eq.~(\\ref{eq:rho}) and the corresponding eigenvector,\nwhich provides the off-diagonal quantization axis $\\hat {\\bf d}_X$, for any model X \n\\cite{KamenikMelic}.\n\nHere we introduce the complementary basis to the ``off-diagonal'' one, $\\bf{\\hat{e}}_{\\rm SM}$, \nwhere the eigenvector corresponds to the minimal eigenvalue of $C_{ij}$ in the\nSM quark-antiquark production. The correlation of the top-antitop spins\nin this basis is minimal. This axis could be useful in the \nnew model searches since the minimization of the top-antitop correlations in the SM \ncan, in principle, enhance the non-SM physics. The `minimal basis' is defined by the axis \n\\begin{eqnarray}\n{\\bf\\hat{e}}_{\\rm SM} = \\hat{\\bf e}_{\\rm SM}^{\\rm min} = \n\\frac{ - \\gamma z {\\bf\\hat{p}} + (1 - ( 1 - \\gamma^2) z^2 )\\; \n{\\bf\\hat{q}_1}}{\\sqrt{ (1 - z^2) (1 - (1 - \\gamma^2) z^2)}}.\n\\label{eq:eSM}\n\\end{eqnarray}\n\\begin{figure}\n\\hspace{3.5cm}\n\\includegraphics[width=6cm, height=4.5cm]{spin_top.eps}\n\\caption{The top quark spin vector $s_t$ in the $t\\bar{t}$ production in $t$ rest frame, with\nthe direction of $s_t$ given by an angle $\\xi$. 
The angle $\\xi$ is measured in the \nclockwise direction from the $\\bar{t}$ momenta.}\n\\label{fig:spin}\n\\end{figure}\nThe `off-diagonal' and the `minimal' basis define the angle $\\xi$ between the top-quark\nspin vector and the antitop direction in the top-quark rest frame~\\cite{Mahlon:2010gw}, \nshown in Fig.~\\ref{fig:spin}, \n\\begin{eqnarray}\n\\tan \\xi^{\\rm off(=max)} &=& \\frac{\\tan \\theta_t }{\\gamma} \\,, \\quad\n\\tan \\xi^{\\rm min} = \\frac{\\gamma }{\\tan \\theta_t} \\,,\n\\end{eqnarray}\nor \n\\begin{eqnarray}\n \\cos^2 \\xi^{\\rm min} + \\cos^2 \\xi^{\\rm off(=max)} = 1 \\,,\n\\end{eqnarray} \nas expected. As already stated, such axes which minimize or maximize spin correlations can be constructed for any model. \n\nThe analytical form of the observables defined in Eq.~(\\ref{eq:observables}), \nis listed in appendix~\\ref{sec:appen_spin} for the SM $t\\bar{t}$ production in the presence\nof longitudinal polarization. The observables ${\\cal O}_i$, ($i$ = 1,2,3,4) are then \nmultiplied with the appropriate $\\kappa$ factors. The QCD radiative corrections for all \nthe top spin observables considered here are calculated in \\cite{BrandUwer} and it is shown \nto be small. Also recently it has been shown that the $O(\\alpha_S)$ corrections to the \nmaximal spin-spin correlations in the off-diagonal basis are negligible \\cite{KornerNEW}. \nTherefore we neglect them all in our calculations. \n\nNext, we present the results for spin correlations\nand top (antitop)-quark polarizations in the helicity basis ($C_{\\rm hel}$, $B_{\\rm hel}$),\nbeamline basis ($C_{\\rm beam}$, $B_{\\rm beam}$), off-diagonal ($C_{\\rm off}$, $B_{\\rm off}$)\nand the minimally polarized basis ($C_{\\rm min}$, $B_{\\rm min}$), as \ndefined by Eqs.~\\eqref{eq:axes}, \\eqref{eq:dSM} and~\\eqref{eq:eSM} respectively, and check\nfor their sensitivity to the initial beam polarization. These results are presented in the \nabsence of cuts, realistic cuts severely distort the non-zero coefficients of \nEq.~(\\ref{eq:dobservable}) and Eq.~(\\ref{eq:diffsigma}). The observable ${\\cal O}_1$ as seen\nfrom Eq.~(\\ref{eq:O1}), is equal to 1 and is therefore independent of beam polarization. \nHowever, it is dependent on the value of $\\kappa_f$. \n\nIn Table~\\ref{tab:O1-4} we present the values of the different spin observables\nin the different spin basis considered here, in the presence of beam polarizations. 
We have\nconsidered the case, when the antitop is decaying to lepton ($\\kappa_{\\bar{f}}$ =1), \n$\\kappa_f = \\kappa_q$, Eq.~(\\ref{eq:kappa1}) for the \nFCNC top decay, and $\\kappa_f = \\kappa_b$, Eq.~(\\ref{eq:kappa2}) for the top decaying to $W^+b$.\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|} \\hline\nObservables &Basis &$P^L_{e^-} = 0, P^L_{e^+} = 0$ &$P^L_{e^-} = 0.8, P^L_{e^+} = -0.3$\n&$P^L_{e^-} = -0.8, P^L_{e^+} =0.3$ \\\\ \\hline\n${\\cal O}_1$ & &0.333$\\kappa_f$ &0.333$\\kappa_f$ &0.333$\\kappa_f$ \\\\ \\hline\n &hel &$-$0.076$\\kappa_f$ & 0.247$\\kappa_f$ &$-$0.239$\\kappa_f$ \\\\ \n &beam &$-$0.174$\\kappa_f$ & 0.344$\\kappa_f$ &$-$0.436$\\kappa_f$ \\\\ \n${\\cal O}_2$ &off &0.176$\\kappa_f$ &$-$0.351$\\kappa_f$ & 0.443$\\kappa_f$ \\\\ \n &min & 0.04$\\kappa_f$ &$-$0.131$\\kappa_f$ & 0.127$\\kappa_f$ \\\\ \\hline\n &hel & $-$0.654$\\kappa_f$ & $-$0.666$\\kappa_f$ & $-$0.648$\\kappa_f$ \\\\ \n &beam & 0.881$\\kappa_f$ &0.852 $\\kappa_f$ &0.897$\\kappa_f$ \\\\ \n${\\cal O}_3$ &off &0.911$\\kappa_f$ & 0.886$\\kappa_f$ &0.924$\\kappa_f$ \\\\ \n &min & 0.224$\\kappa_f$ &0.229 $\\kappa_f$ &0.222$\\kappa_f$ \\\\ \\hline \n${\\cal O}_4$ & & 0.546$\\kappa_f$ & 0.612$\\kappa_f$ &0.512$\\kappa_f$ \\\\ \\hline \n\\end{tabular}\n\\caption{The value of the spin observables in different bases, with different choices of initial beam polarization. $\\kappa_f = \\kappa_q$ for FCNC $t$-decays and $\\kappa_f = \\kappa_b$ for $t \\to W^+ b$.}\n\\label{tab:O1-4}\n\\end{center}\n\\end{table} \nWe note that the top (antitop) spin polarizations are quite sensitive to the beam polarization, while \nthis is not the case for the spin-spin correlations ${\\cal O}_3,{\\cal O}_4$ where the influence of the beam \npolarizations gets diluted, see Eqs.~(\\ref{eq:O3h}-\\ref{eq:O4}). \nAlso note that all observables are proportional to $\\kappa_f = \\kappa_q$ and will be equal to zero if $g_{tq}$ and $g_{qt}$ are equal. \n\\section{Numerical analysis of the FCNC $g_{tq},~g_{qt}$ couplings at the ILC}\\label{sec:numericalstudy}\n\nIn this section we perform a detailed numerical simulation of the FCNC interactions in the $t \\to qH$ \ndecay at the ILC. As before, the process we consider is the top pair production, \nwith the top decaying to $qH$, the antitop decaying to $W^-\\bar{b}$ with the $W^-$ decaying\nleptonically and subsequently the Higgs decaying to a $b\\bar{b}$ pair.\nThe main background for the process under study comes from the $t\\bar{t}$ pair production, with \none of the top decaying hadronically and the other decaying to a lepton, $\\nu$ and a $b$ quark.\nWe have performed our calculations, by first generating the Universal Feynrules Output (UFO) model \nfile using FeynRules 2.3~\\cite{Alloul:2013bka}, including\nthe effective interaction, defined in Eq.~(\\ref{eq:tqhA}). The UFO file is then implemented in\nMadGraph 5 v2.4.2~\\cite{Alwall:2011uj}, for Monte Carlo simulation. We also employ\nPythia 8~\\cite{Sjostrand:2014zea} for parton showering and hadronization along with \nFastjet-3.2.0~\\cite{Cacciari:2011ma} for the jet formation.\nThe cross section of the signal and the background, at $\\sqrt{s}$ = 500 GeV, before the\napplication of the event selection criteria is listed in Table~\\ref{tab:allowedcs}. 
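\n\nTo give a feeling for the raw rates involved, the short script below (a minimal sketch, not part of the actual analysis chain) converts the cross sections of Table~\\ref{tab:allowedcs} into expected event yields at ${\\cal L}$ = 500 fb$^{-1}$, before any selection cuts, for an illustrative coupling strength $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16; the numerical prefactors are simply read off the table.\n\\begin{verbatim}\n# Raw signal and background yields at sqrt(s) = 500 GeV, before cuts.\n# Cross sections (fb) are read off the accompanying table; the signal\n# scales with |g_tq|^2 + |g_qt|^2.\nlumi = 500.0                  # integrated luminosity in fb^-1\ng2 = 0.16**2                  # illustrative |g_tq|^2 + |g_qt|^2\nsigma_sig = {'unpol': 73.4, 'eL_pR': 120.5, 'eR_pL': 62.0}   # fb per unit g2\nsigma_bkg = {'unpol': 74.5, 'eL_pR': 124.7, 'eR_pL': 58.9}   # fb\nfor pol in sigma_sig:\n    n_sig = sigma_sig[pol] * g2 * lumi\n    n_bkg = sigma_bkg[pol] * lumi\n    print(pol, round(n_sig), round(n_bkg))\n\\end{verbatim}\nThese raw yields are subsequently reduced by the event selection described below, which is what ultimately determines the quoted sensitivity.\n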
\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|} \\hline\n&$\\sigma$(fb)&$\\sigma$(fb)&$\\sigma$(fb) \\\\\n$e^-e^+\\rightarrow t\\bar{t}$ &$P^L_{e^-} = 0, P^L_{e^+} = 0$ & $P^L_{e^-} = -0.8, P^L_{e^+} = 0.3$ \n& $P^L_{e^-} = 0.8, P^L_{e^+} = -0.3$ \\\\ \n \\hline\n{\\rm signal:}&&& \\\\$t\\rightarrow q b\\bar{b},\n\\bar{t}\\rightarrow l^- \\bar{\\nu}_l \\bar{b}$ &$73.4(|g_{tq}|^2+|g_{qt}|^2)$ \n&$120.5(|g_{tq}|^2+|g_{qt}|^2)$ &$62(|g_{tq}|^2+|g_{qt}|^2)$ \\\\ \\hline\n{\\rm background:}&&&\\\\\n$t\\rightarrow q_1 q_2 b,\n\\bar{t}\\rightarrow l^- \\bar{\\nu}_l \\bar{b}$ &74.5 &124.7 &58.9 \\\\ \\hline\n\\end{tabular}\n\\caption{The production cross sections of the signal and the background at $\\sqrt{s}$ = 500 GeV. The results\nare presented for both the polarized and the unpolarized beams.}\n\\label{tab:allowedcs}\n\\end{center}\n\\end{table}\n\nWe now describe in detail \nthe different cuts and conditions considered for our analysis. Since the antitop in the signal decays to $W\\bar{b}$,\nwith the $W$ decaying leptonically, the lepton tends to be energetic\nand isolated. Therefore, events \nwith one isolated lepton are first selected through a lepton isolation cut. An isolated lepton is identified \nby demanding that the scalar sum of the energy of all the stable\nparticles within the cone of $\\Delta R = \\sqrt{\\Delta \\eta^2 +\\Delta\\phi^2}\\leq 0.2$ about \nthe lepton is less than $\\sqrt{6(E_l-15)}$~\\cite{Yonamine:2011jg}, where\n$E_l$ is the energy of the lepton. Furthermore, the \nleptons are required to have a transverse momentum $p_T>$ 10 GeV. The events with more than one isolated lepton are discarded.\nThe remaining stable visible particles of the event are then clustered into four jets using the \ninbuilt $k_t$ algorithm in FastJet for $e^-e^+$ collisions, which is similar to the Durham algorithm.\nThe reconstructed jets and the isolated lepton are combined to form the intermediate heavy states. \nThe three jets with the highest $b$ tagging probability are considered as the $b$ jets. A jet \nis tagged as a $b$ jet if it has a $b$ parton within a cone of $\\Delta R <$ 0.4 around the jet axis.\nA tagging efficiency of 80\\%~\\cite{Asner:2013psa} is further incorporated. The jets are checked \nfor isolation and are required to have $p_T>$ 20 GeV.\nThe momentum of the neutrino is obtained from momentum conservation, as the negative of the vector sum of all the visible momenta,\nand the energy of the neutrino is assigned the magnitude of its momentum vector. The isolated lepton and the neutrino reconstruct the leptonically decaying $W$ boson. \n\nThere will be three $b$ tagged jets and a non-$b$ jet in the final state and therefore\nthree possible combinations to reconstruct the Higgs mass from the $b$ tagged jets. Additionally,\nthe pair of $b$ jets reconstructing the Higgs mass, together with the non-$b$ jet, should\ngive an invariant mass close to $m_t$. We choose the combination of the jets\nwhich minimizes the quantity $|m_{b_ib_j}-m_H|^2 +|m_{b_ib_jQ}-m_t|^2$, where \n$i,j$ run over the various combinations of the $b$ jets and $Q$ denotes the non-$b$ jet. \nThe reconstructed Higgs mass is given by $m_{b_ib_j}$, and the reconstructed top\nmass is denoted by $m_{b_ib_jQ}$. In order to account for the detector resolution, \nwe have smeared the leptons and the jets using the following parametrization. 
The\njet energies are smeared~\\cite{BrauJames:2007aa} with the different \ncontributions being added in quadrature,\n\\begin{eqnarray}\n \\frac{\\sigma(E_{jet})}{E_{jet}}&=& \\frac{0.4}{\\sqrt{E_{jet}}} \\oplus 2.5\\% \\,.\n\\end{eqnarray}\nThe momentum of the lepton is smeared as a function of the momentum and the angle $\\cos\\theta$ \nof the emitted leptons~\\cite{Li:2010ww}\n\\begin{eqnarray}\n \\frac{\\sigma(P_l)}{P_l^2} =\\left(\\begin{array}{cc}a_1\\oplus\\frac{b_1}{P_l},&|\\cos\\theta_l|< 0.78 \\\\\n \\left(a_2 \\oplus \\frac{b_2}{P_l}\\right) \\left(\\frac{1}{\\sin(1-|\\cos\\theta_l|)}\n \\right) & |\\cos\\theta_l|> 0.78 \n \\end{array}\\right) \\,,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n(a_1,~b_1) &=& 2.08\\times10^{-5}~({\\rm{1\/GeV}}),~~8.86\\times10^{-4}, \\nonumber \\\\\n(a_2,~b_2) &=& 3.16\\times10^{-6}~({\\rm{1\/GeV}}),~~2.45\\times10^{-4}. \n\\end{eqnarray}\n\nWe plot in Fig.~\\ref{fig:m_recons}, the reconstructed Higgs, $t$ and the $\\bar{t}$ masses. The \nHiggs mass is reconstructed as $m_H^2 = (p_b +p_{\\bar{b}})^2$, whereas the top \nthe antitop masses are calculated as \n$m_t^2 = (p_b + p_{\\bar{b}} + p_{non-b})^2$, \n$m_{\\bar{t}}^2 = (p_{l^-}+ p_{\\bar{\\nu}}+ p_{\\bar{b}})^2$. \nThe plots for the signal are constructed \ntaking into account the current stringent LHC constraint on the FCNC couplings, $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16. We have shown the results for Case 1, discussed\nin Sec.~\\ref{sec:cmf_asymmetries}, as the reconstructed mass will be the same for all three cases.\n\\begin{figure}[htb]\n\\vspace{1cm}\n$\\begin{array}{ccc}\n \\includegraphics[width=5 cm, height= 6cm]{mHiggs.eps} &\n \\includegraphics[width=5 cm, height= 6cm]{mass_t.eps} &\n \\includegraphics[width=5 cm, height= 6cm]{mass_tbar.eps}\n \\end{array}$\n \\caption{The reconstructed masses of the Higgs, $t$-quark and $\\bar{t}$, for the signal \nand the $t\\bar{t}$ background, at $\\sqrt{s}$ = 500 GeV, with ${\\cal L}$ = 500 fb$^{-1}$\nand unpolarized beams. For the \nsignal we have considered Case 1 from Sec.~\\ref{sec:cmf_asymmetries} with $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ = 0.16.}\n \\label{fig:m_recons}\n\\end{figure}\nWe note that since we have not done a real detector analysis, the mass reconstruction of the $W$ boson is poor\nin our case, due to the presence of missing energy. Therefore a loose cut on $m_W$ is applied for our analysis.\nIt is clear from Fig.~\\ref{fig:m_recons}, that the cut imposed on the reconstructed $m_t$ and \n$m_{\\bar{t}}$ should be different. The reconstructed mass of $\\bar{t}$ is broad, due\nto the presence of the missing energy from the $W$ decay. We have applied the same kinematic cut to the mass\nof the top and the antitop for the sake of simplicity. The implementation of these cuts, eliminates the \n$Wb\\bar{b}jj$ and $Zb\\bar{b}jj$ backgrounds.\nThe kinematical cuts, which are imposed on the various reconstructed masses are summarized below:\n\\begin{itemize}\n\\item 115 $\\leq m_H$ (GeV) $\\leq$ 135,~~~~160 $\\leq m_t$ (GeV) $\\leq$ 188~~~~30 $\\leq m_W$ (GeV) $\\leq$ 100\n\\end{itemize}\nAdditional cuts can be applied, on the energy of the emitted quark in the top rest frame~\\cite{Han:2001ap}, so as \nto increase the signal to background ratio. The energy of the emitted quark, as a result of the\ntwo body decay of the top is \n\\begin{equation}\n E_{q}^{top}=\\frac{m_t}{2}\\left(1-\\frac{m_H^2}{m_t^2}\\right),\n\\end{equation}\nand is peaked around 42 GeV, for a Higgs mass of 125 GeV. 
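\n\nAs a quick numerical cross-check of the statements above, the following sketch evaluates $E_{q}^{top}$ and applies the mass-window cuts listed before; the value taken for $m_t$ and the field names of the event record are assumptions made purely for illustration.\n\\begin{verbatim}\nm_t, m_H = 173.0, 125.0      # GeV; the m_t value is assumed here\nE_q = 0.5 * m_t * (1.0 - (m_H \/ m_t)**2)\nprint(E_q)                   # roughly 41-42 GeV, as quoted in the text\n\ndef passes_mass_windows(ev):\n    # ev: dict of reconstructed masses in GeV, e.g.\n    # {'mH': 123.4, 'mt': 171.2, 'mW': 78.5}  (keys are hypothetical)\n    return (115.0 <= ev['mH'] <= 135.0 and\n            160.0 <= ev['mt'] <= 188.0 and\n            30.0 <= ev['mW'] <= 100.0)\n\\end{verbatim}\n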
The jet from the background, which will fake the $q$\njet, will have a more spread out energy. We do not apply this cut, as the application of the above cuts \nalready lead to a much reduced background. The energy distribution of both the signal and the background\nare shown in Fig.~\\ref{fig:energy_rf}.\n\\begin{figure}\n\\centering\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=6.5cm, height=6cm]{energy_top_rf.eps}\n\\caption{The energy distribution of the non-$b$ jet ($t\\rightarrow qH$) in the rest frame of the top, \nat $\\sqrt{s}$ = 500 GeV, with unpolarized beams and ${\\cal L}$ = 500 fb$^{-1}$.}\n\\label{fig:energy_rf}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=6.5cm, height=6cm]{lep_jet.eps}\n\\caption{The opening angle distribution, Eq.~(\\ref{eq:dobservable}) between the\ndirection of the lepton (from $\\bar{t}\\rightarrow l^-\\bar{\\nu} \\bar{b}$) and the non-$b$ jet (from $t\\rightarrow qH$), in the $t$ and $\\bar{t}$ rest frame. }\n\\label{fig:lepjet}\n\\end{minipage}\n\\end{figure}\n\nFurther on, we concentrate on the observables which will be sensitive to the chiral nature of the FCNC interactions. \nOne of them is the polar angle distribution of the non-$b$ jet, which was earlier shown in Fig.~\\ref{fig:dist_pol}.\nThe effect of the individual chiral couplings is more evident with a suitable choice of initial \nlongitudinal beam polarization. The various distributions which we consider here \nare all calculated\nin the $t\\bar{t}-$ZMF. The decay products, which act as spin analyzers for our case are \nthe non-$b$ jet ($q$) from the decay $t\\rightarrow q H$ and the lepton ($l^-$)from the decay \n$\\bar{t} \\rightarrow l^-\\bar{\\nu} \\bar{b}$. All the distribution plots are given with the \nnumber of surviving events, for $\\mathcal{L}$ = 500 fb$^{-1}$.\nWe plot the opening angle distribution $1\/\\sigma (d\\sigma\/d\\cos\\varphi)$ (Eq.~(\\ref{eq:dobservable}))\nin Fig.~\\ref{fig:lepjet}, which is\nsensitive to the top and the antitop spin analyzers. The distribution is\nflat for Case 1, when $|g_{tq}|^2 = |g_{qt}|^2$, leading \nto $\\kappa_q = 0$. It peaks in the forward direction in the presence of $|g_{tq}|^2$, and in the backward\ndirection for $|g_{qt}|^2$ (clearly seen in the inset of Fig.~\\ref{fig:lepjet}). \nThe top spin\nis considered\nin the normalized distribution $1\/\\sigma (d\\sigma\/d\\cos\\theta_{qs_t})$, where \n$\\theta_{qs_t}$ is the angle between the direction of the top spin analyzer (non-$b$ jet) \nin the top rest frame and the top spin quantization axis ($s_t$) in the \n$t\\bar{t}$-ZMF. The angle $\\cos\\theta_{qs_t}$ is the angle $\\cos\\theta_f$ defined\nin Eq.~(\\ref{eq:diffsigma}). The spin of the top can be chosen in the direction\nof any of the spin quantization axes as defined in Sec.~\\ref{sec:spintop}. \nThis distribution is sensitive to the polarization of the top and we show in \nFig.~\\ref{fig:basis} the distribution calculated in the different bases. As \nexpected, the `beamline' basis and the `off-diagonal' basis are most sensitive \nto the top polarization and therefore also to the decay dynamics of the top.\nThe chiral nature of the FCNC coupling will be more clearly visible in these \ntwo basis, with a flat distribution in case of the equality of the two chiral \ncoupling. The `helicity' and the `minimal' basis will not be effective in \ndiscriminating the chirality and they are shown just for the illustration. 
\nThe effect is further enhanced with the \nbeam polarizations of $P^L_{e^-} = -0.8$ and $P^L_{e^+} = 0.3$,\nin all the spin bases considered here. We show the distribution in the `off-diagonal' basis \nin Fig.~\\ref{fig:offDB}, as it is most sensitive to the beam polarization.\n\\begin{figure}[htb]\n\\vspace{0.4cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=7cm, height=6cm]{jettop_hb.eps}\n \\caption{}\n \\label{fig:hb}\n \\end{subfigure}\n \\hspace{1.0cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=7cm, height=6cm]{jettop_bb.eps}\n \\caption{}\n \\label{fig:bb}\n \\end{subfigure}\n \\begin{subfigure}{0.45\\linewidth}\n\\centering\n\\vspace{0.6cm}\n \\includegraphics[width=7cm, height=6cm]{jettop_odb.eps}\n \\caption{}\n \\label{fig:odb}\n \\end{subfigure}\n \\hspace{1.3cm}\n \\begin{subfigure}{0.45\\linewidth}\n \\vspace{0.6cm}\n\\centering \n \\includegraphics[width=7cm, height=6cm]{jettop_mb.eps}\n \\caption{}\n \\label{fig:mb}\n \\end{subfigure}\n \\caption{The distribution $1\/\\sigma (d\\sigma\/d\\cos\\theta_{qs_t})$, with unpolarized beams\n at $\\sqrt{s}$ = 500 GeV and $\\cal L$ = 500 fb$^{-1}$, where $\\theta_{qs_t}$ is the \nangle between the direction of the top spin analyzer (non-$b$ jet from $t\\rightarrow q H$) in \n$t$ rest frame and the spin quantization axis of\nthe top ($s_t$) in the $t\\bar{t}$-ZMF. The different spin quantization axes considered are discussed\nin Eq.~(\\ref{eq:axes}).}\n\\label{fig:basis}\n\\end{figure}\n\nThe double differential angular distribution of the top and the antitop\ndefined in Eq.~(\\ref{eq:diffsigma}) provides a measurement of the spin-spin \ncorrelations. It was shown in Ref.~\\cite{Bernreuther:2013aga} that, for the \nexperimental analysis, it is more suitable to use the one-dimensional \ndistribution of the product of the cosines, ${\\cal O}_{s_t,s_{\\bar{t}}} =\n\\cos \\theta_f \\cos \\theta_{\\bar{f}}$, rather than analyzing \nEq.~(\\ref{eq:diffsigma}). We define $\\cos \\theta_f \\cos \\theta_{\\bar{f}}$ \nas $\\cos\\theta_{qs_t} \\cos\\theta_{ls_{\\bar{t}}}$ for our analysis. The \n$1\/\\sigma (d\\sigma\/d{\\cal O}_{s_t,s_{\\bar{t}}})$ distribution is shown \nin Fig.~\\ref{fig:coscos}, using the `off-diagonal' basis and a longitudinal\nbeam polarization of $P^L_{e^-} = -0.8$ and $P^L_{e^+} = 0.3$ . \nThe asymmetry of the plot around $\\cos\\theta_{qs_t} \\cos\\theta_{ls_{\\bar{t}}}$ = 0, \nsignals for the spin-spin correlation. The plot for Case 2 ($|g_{qt}|^2$ = 0) shows more events for \npositive values for $\\cos\\theta_{qs_t} \\cos\\theta_{ls_{\\bar{t}}}$, \nwhereas for Case 3 ($|g_{tq}|^2$ = 0)\none gets more events for negative values of $\\cos\\theta_{qs_t} \\cos\\theta_{ls_{\\bar{t}}}$. 
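\n\nA simple way to quantify this behaviour is the counting asymmetry of ${\\cal O}_{s_t,s_{\\bar{t}}}$ around zero, sketched below; the input is assumed to be a list of per-event values $(\\cos\\theta_{qs_t}, \\cos\\theta_{ls_{\\bar{t}}})$ obtained from the reconstruction described above. A positive (negative) value of this asymmetry then corresponds to the behaviour of Case 2 (Case 3).\n\\begin{verbatim}\ndef spin_spin_asymmetry(pairs):\n    # pairs: iterable of (cos_theta_q, cos_theta_l) for the selected events\n    n_pos = sum(1 for cq, cl in pairs if cq * cl > 0.0)\n    n_neg = sum(1 for cq, cl in pairs if cq * cl < 0.0)\n    return (n_pos - n_neg) \/ float(n_pos + n_neg)\n\\end{verbatim}\n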
\n\\begin{figure}\n\\centering\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=6.5cm, height=6cm]{odb_polarized.eps}\n\\caption{The normalized $1\/\\sigma (d\\sigma\/d\\cos\\theta_{qs_t})$ distribution\n(the definitions are same as in Fig.~\\ref{fig:basis}) at $\\sqrt{s}$ = 500 GeV,\nwith polarized beams and ${\\cal L}$ = 500 fb$^{-1}$.}\n\\label{fig:offDB}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=6.5cm, height=6cm]{cosfcosfbar.eps}\n\\caption{The normalized distribution of the product \n$\\cos\\theta_{qs_t} \\cos\\theta_{ls_{\\bar{t}}}$ , \n($\\theta_{qs_t} = \\angle (\\hat{\\bf p}_q,\\hat{\\bf a}), \n\\theta_{qs_{\\bar{t}}} = \\angle (\\hat{\\bf p}_l,\\hat{\\bf b})$),\nusing the off-diagonal basis, at $\\sqrt{s}$ = 500 GeV, with \npolarized beams and ${\\cal L}$ = 500 fb$^{-1}$. }\n\\label{fig:coscos}\n\\end{minipage}\n\\end{figure}\n\nWe next estimate the sensitivity that can be obtained for the FCNC $tqH$ couplings, \ngiven by the efficient signal identification and the significant background suppression\nwhich can be achieved at the linear collider. We adopt the following formula for \nthe significance measurement~\\cite{Cowan:2010js},\n\\begin{equation}\n S = \\sqrt{2\\left[\\left(N_S+N_B\\right) {\\rm ln}\\left(1+\\frac{N_S}{N_B}\\right)-N_S\\right]},\n\\end{equation}\nwith $N_S$ and $N_B$ being the number of signal and background events. \nIn Fig.~\\ref{fig:sig1} we present the contours of 3$\\sigma$ and 5$\\sigma$ significance\nfor our process in the $|g_{tq}|^2-|g_{qt}|^2$ plane. The sensitivity of the linear \ncollider will increase with the implementation of beam polarization with left polarized \nelectrons and right polarized positrons. Since the total cross section is proportional\nto $|g_{tq}|^2+|g_{qt}|^2$, the contours are symmetric in that plane. \nThe sensitivity to the coupling $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$, as a function of the \nintegrated luminosity for $\\sqrt{s}$ = 500 GeV is shown Fig.~\\ref{fig:sig2}. \nOne can see that at 3$\\sigma$ statistical sensitivity and \n${\\cal L}$ = 500 fb$^{-1}$, $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$\ncan be probed to 0.063 (0.056) with unpolarized (polarized) beams. The limits obtained from the asymmetries, specially \n$A_{fb}$ from Sec.~\\ref{sec:cmf_asymmetries} will be more stronger\nand will not be symmetric in the $|g_{tq}|^2-|g_{qt}|^2$ plane.\n\\begin{figure}\n\\centering\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=7cm, height=6cm]{Significance1.eps}\n\\caption{Contour plots in the $|g_{tq}|^2-|g_{qt}|^2$ plane, for the statistical significance \n$S$, from the production cross section, at $\\sqrt{s}$ = 500 GeV and a luminosity of 500 fb$^{-1}$, with\nunpolarized beams [black] and a beam polarization of $P^L_{e^-}$ = -0.8 and $P^L_{e^+}$ = 0.3 [red-dashed].}\n\\label{fig:sig1}\n\\end{minipage}\n\\hspace{0.2cm}\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=7cm, height=6cm]{Significance2.eps}\n\\caption{The sensitivity of 3$\\sigma$ and 5$\\sigma$ to the FCNC coupling $\\sqrt{|g_{tq}|^2 +|g_{qt}|^2}$\nat $\\sqrt{s}$ = 500 GeV, as a function of integrated luminosity. The black solid line is for unpolarized beams,\nand the red-dashed line is for a beam polarization of $P^L_{e^-}$ = -0.8 and $P^L_{e^+}$ = 0.3. 
}\n\\label{fig:sig2}\n\\end{minipage}\n\\end{figure}\nFrom the total cross section we find the upper bounds listed in Table~\\ref{tab:limits} at the 2$\\sigma$, \n3$\\sigma$ and 5$\\sigma$ level, for both the \npolarized and the unpolarized beams.\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|} \\hline\n &\\multicolumn{2}{|c|}{$P^L_{e^-} = 0, P^L_{e^+} = 0$}&\\multicolumn{2}{|c|}{$P^L_{e^-} = -0.8, P^L_{e^+} = 0.3$} \\\\ \\hline\n &&&& \\\\\nSignificance &$\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ &BR($t\\rightarrow qH$)\n&$\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ &BR($t\\rightarrow qH$) \\\\ \\hline\n2$\\sigma$ &0.052 &7.61$\\times 10^{-4}$ &0.046 &5.96$\\times 10^{-4}$ \\\\ \\hline\n3$\\sigma$ &0.063 &1.19$\\times 10^{-3}$ &0.056 &8.84$\\times 10^{-4}$ \\\\ \\hline\n5$\\sigma$ &0.085 &2.04$\\times 10^{-3}$ &0.074 &1.54$\\times 10^{-3}$ \\\\ \\hline\n\\end{tabular}\n\\caption{Upper bounds on $\\sqrt{|g_{tq}|^2+|g_{qt}|^2}$ and the corresponding branching ratios\nthat can be obtained at the ILC, at $\\sqrt{s}$ = 500 GeV, with a luminosity of 500 fb$^{-1}$.\nThe results are presented for both the polarized and the unpolarized case.}\n\\label{tab:limits}\n\\end{center}\n\\end{table}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nWe have studied the flavor violating top-Higgs interactions \nat the $e^-e^+$ linear colliders using different beam polarizations. There are \nseveral works exploring the prospects of the LHC to constrain or discover \nthese couplings, considering several signatures of the flavor violating interactions.\nThe LHC experiments have also looked into these couplings and have obtained bounds on the branching ratio\nof the process $t\\rightarrow q H$. These flavor violating interactions can have a chiral structure, with the top coupling differently to the left handed and the right handed fermions. \nSince the branching ratio of the top to $qH$, as well as the total \nproduction cross section, is proportional to $|g_{tq}|^2+|g_{qt}|^2$, the chiral\nnature will not be evident from these measurements.\n\nTherefore, we have investigated, in the context of the linear collider, various observables \nwhich highlight this aspect of the couplings. The polar angle distribution\nof the quark emitted in the $t\\rightarrow cH$ decay exhibits a behaviour\nsensitive to the nature of the coupling, and this behaviour changes with\nthe beam polarization. The distribution will be flat for all the \npolarization combinations if $|g_{tq}|^2 = |g_{qt}|^2$. The presence of only\none of the couplings ($|g_{tq}|^2$) leads to a forward peak for $e^-_L e^+_R$ polarization,\nwhile the distribution remains unchanged for the $e^-_R e^+_L$ polarization. The opposite behaviour is observed for \n$|g_{qt}|^2$. Next, the forward-backward asymmetry $A_{fb}$ is used in order\nto constrain the $|g_{tq}|^2-|g_{qt}|^2$ parameter space. \n\nIn the top pair production the spins of the top and the antitop are correlated, and since the \ndecay products are correlated with the spins, the decay products of the top and the antitop\nare correlated with each other. \nThe presence of new physics in the top decay will therefore lead to a change in \nthe correlation coefficient in the angular distribution of the top decay products. The right \nchoice of spin basis for the top quark pair\nis also important in enhancing the correlation. We consider different \nobservables in Sec.~\\ref{sec:spintop}, which are sensitive to the spin analyzing power\n($\\kappa$) of the top decay product. 
The quark emitted\nfrom the top FCNC decay will be a perfect spin analyzer ($\\kappa_q =1$)\nin the presence of a single chiral coupling. The $\\kappa_q$ of the emitted quark \nwill be zero when $|g_{tq}|^2 = |g_{qt}|^2$ and the correlation will be lost. \nWe have performed an analysis applying\nall the cuts at the linear collider in Sec.~\\ref{sec:numericalstudy}, and have studied the spin observables\nin the context of different spin bases. We find that the off-diagonal and \nthe beamline bases are the most sensitive to the chirality of the couplings. The effect is further enhanced by polarizing \nthe initial beams, with left handed electrons and right handed positrons.\n\nFinally, we have obtained a limit on the couplings from the total cross section and find \nthat BR($t\\rightarrow qH$) can be probed down to $1.19 \\times 10^{-3}$ ($8.84 \\times 10^{-4}$)\nat the 3$\\sigma$ level at the ILC, with $\\sqrt{s} =$ 500 GeV, $\\cal L$ = 500 fb$^{-1}$ and \na beam polarization of $P^L_{e^-} = 0 (-0.8), P^L_{e^+} = 0 (0.3)$, as listed in Table~\\ref{tab:limits}. Branching ratios of this size could well be within reach \nof the future linear colliders. \n\n\\section*{Acknowledgments}\nWe would like to thank Juan Antonio Aguilar Saavedra for very useful discussions. \nThis work is supported by the Croatian Science Foundation (HRZZ) project PhySMaB,\n``Physics of Standard Model and Beyond\" as well as by the H2020 Twinning\nproject No. 692194, ``RBI-T-WINNING''.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nThe dynamics of a finite quantum system, i.e., one with a finite number\nof degrees of freedom described by a Hilbert space ${\\mathcal H}$, is given by the \nSchr\\\"odinger equation. The Hamiltonian $H$ is a densely \ndefined self-adjoint operator on ${\\mathcal H}$, and for a vector $\\psi(t)$ in\nthe domain of $H$ the state at time $t$ satisfies\n\\begin{equation}\ni\\partial_t \\psi(t) = H\\psi(t) \\, .\n\\label{se}\\end{equation}\nFor all initial conditions $\\psi(0)\\in{\\mathcal H}$, the unique solution is given by\n$$\n\\psi(t)=e^{-itH}\\psi(0), \\mbox{ for all } t\\in\\bR.\n$$\nDue to Stone's Theorem $e^{-itH}$ is a strongly continuous one-parameter group of\nunitary operators on ${\\mathcal H}$, and the self-adjointness of $H$ is the necessary and sufficient\ncondition for the existence of a unique continuous solution for all times.\n\nAn alternative description of this dynamics is the so-called Heisenberg picture\nin which the time evolution is defined on the algebra of observables instead\nof the Hilbert space of states. The corresponding Heisenberg equation is\n\\begin{equation}\n\\partial_t A(t)=i[H,A(t)]\\, ,\n\\label{he}\\end{equation}\nwhere, for each $t\\in \\bR$, $A(t)\\in{\\mathcal B}({\\mathcal H})$ is a bounded linear operator\non ${\\mathcal H}$. Its solutions are given by a one-parameter group of $*$-automorphisms,\n$\\tau_t$, of ${\\mathcal B}({\\mathcal H})$:\n$$\nA(t)=\\tau_t(A(0)).\n$$\n\nFor the description of physical systems we expect the Hamiltonian, $H$,\nto have some additional properties. E.g., for finite systems such as atoms or\nmolecules, stability of the system requires that $H$ is bounded from below.\nIn this case, the infimum of the spectrum is expected to be an eigenvalue and\nis called the ground state energy. When the model Hamiltonian, $H$, is describing \nbulk matter rather than finite systems, we expect some additional properties. 
\nE.g., the stability of matter requires that the ground state energy has a lower bound\nproportional to $N$, where $N$ is the number of degree of freedom. Much progress\non this stability property has been made in the last several decades \n\\cite{lieb_selecta_stability,lieb-seiringer}.\nWe also expect that the dynamics of local observables of bulk matter, or large\nsystems in general, depends only on the local environment. Mathematically this \nis best expressed by the existence of the dynamics in the thermodynamic limit,\ni.e., in infinite volume. This is the question we address in this paper.\n\nThere are two settings that allow one to prove a rich set of important physical\nproperties of quantum dynamical systems, including infinite ones: the $C^*$ \ndynamical systems and the $W^*$ dynamical systems \\cite{bratteli1987}. In both cases, the \nalgebra of observables can be thought of a norm-closed $*$-subalgebra ${\\mathcal A}$ of some algebra\nof the form ${\\mathcal B}({\\mathcal H})$, but in the case of the $W^*$-dynamical systems we additionally\nrequire that the algebra is closed for the weak operator topology, which makes it a \nvon Neumann algebra. For a $C^*$-dynamical system the group of automorphisms \n$\\tau_t$ is assumed to be strongly continuous, i.e., for all $A\\in{\\mathcal A}$, the map \n$t\\mapsto \\tau_t(A)$ is continuous in $t$ for the operator norm ($C^*-$norm) on ${\\mathcal A}$. \nIn a $W^*$-dynamical system the continuity is with respect to the weak topology.\n\nIn the case of lattice systems with a finite-dimensional Hilbert space\nof states associated with each lattice sites, such as quantum spin-lattice systems \nand lattice fermions, it has been known for a long time that under rather\ngeneral conditions the dynamics can be described by a $C^*$ dynamical\nsystem, including in the thermodynamic limit \\cite{bratteli1997}. When the Hilbert \nspace at each site is infinite-dimensonal and the finite-system Hamiltonians are \nunbounded, this is no longer possible and the {\\em weak continuity} becomes a natural \nassumption.\n\nThe class of systems we will primarily focus on here are lattices of quantum oscillators but\nthe underlying lattice structure is not essential for our method. Systems defined on \nsuitable graphs, such as the systems considered in \\cite{eisert2005,eisert2008} can\nalso be analyzed with the same methods. In a recent preprint \\cite{amour2009}, it was shown \nthat convergence of the dynamics in the thermodynamic limit can be obtained for a modified \ntopology. Here, we follow a somewhat different approach. The main difference is that we study\nthe thermodynamic limit of anharmonic perturbations of an {\\em infinite} harmonic lattice\nsystem described by an explicit $W^*$-dynamical system. The more traditional way\nis to first define the dynamics of anharmonic systems in finite volume (which can be done \nby standard means \\cite{reed-simon}), and then to study the limit in which the volume \ntends to infinity. This is what is done in \\cite{amour2009}, but it appears that controlling \nthe continuity of the limiting dynamics is more straightforward \nin our approach. In fact, we are able to show that the resulting dynamics for the class of \nanharmonic lattices we study is indeed weakly continuous, and we obtain a \n$W^*$-dynamical system for the infinite system. The $W^*$-dynamical setting is \nobtained by considering the GNS representation of a ground state or thermal \nequilibrium state of the harmonic system. 
The ground states and thermal states \nare quasi-free states in the sense of \\cite{robinson1965}, or convex mixtures\nof quasi-free states. In the ground state case the GNS representations are the well-known\nFock reprensentations. For the thermal states the GNS representations have been constructed \nby Araki and Woods \\cite{araki1963}.\n\nCommon to both approaches, ours and the one of \\cite{amour2009}, is the crucial role\nplayed by an estimate of the speed of propagation of perturbations in the system, commonly\nreferred to as Lieb-Robinson bounds \\cite{hast2006,lieb1972,nach12006,nach22006,nach2007}.\nBriefly, if $A$ and $B$ are two \nobservables of a spatially extended system, localized in regions $X$ and $Y$ of our graph,\nrespectively, and $\\tau_t$ denotes the time evolution of the system then, a Lieb-Robinson\nbound is an estimate of the form\n$$\n\\Vert [\\tau_t (A), B]\\Vert\\leq C e^{-a(d(X,Y)-v\\vert t\\vert)}\\, ,\n$$\nwhere $C, a$, and $v$ are positive constants and $d(X,Y)$ denotes the distance\nbetween $X$ and $Y$. Lieb-Robinson bounds for anharmonic \nlattice systems were recently proved in \\cite{nachtergaele2009}, and this work builds on the\nresults obtained there. Our results are mainly limited to short-range interactions that are either \nbounded or unbounded perturbations of the harmonic interaction (linear springs).\n\nTo conclude the introduction, let us mention that the same questions, the existence of the\ndynamics for infinite oscillator lattices, can and has been asked for classical systems. Two\nclassic papers are \\cite{lanford1977,marchioro1978}. Many properties of this classical infinite\nvolume harmonic dynamics have been studied in detail e.g. \\cite{Spohn77, vanhem} and \nsome recent progress on locality estimates for anharmonic systems is reported in \n\\cite{butta2007,raz2009}.\n\nThe paper is organized as follows. We begin with a section discussing bounded interactions.\nIn this case, the existence of the dynamics follows by mimicking the proof valid in the \ncontext of quantum spins systems. Section 3 describes the infinite volume harmonic dynamics on\ngeneral graphs. It is motivated by an explicit example on $\\mathbb{Z}^d$. Next, in Section 4, we \ndiscuss finite volume perturbations of the infinite volume harmonic dynamics and prove that\nsuch systems satisfy a Lieb-Robinson bound. In Section 5 we demonstrate that the existence\nof the dynamics and its continuity follow from the Lieb-Robinson estimates established in \nthe previous section.\n\n\n\n\\section{Bounded Interactions} \\label{sec:bdints}\n\nThe goal of this section is to prove the existence of the dynamics for oscillator systems\nwith bounded interactions. 
Since oscillator systems with bounded interactions can be treated \nas a special case of more general models with bounded interactions, we will use \na slightly more general setup in this section, which we now introduce.\n\nWe will denote by $\\Gamma$ the underlying structure on which our models will be defined.\nHere $\\Gamma$ will be an arbitrary set of sites equipped with a metric $d$.\nFor $\\Gamma$ with countably infinite cardinality, we will need to assume that there exists a\nnon-increasing function $F: [0, \\infty) \\to (0, \\infty)$ for which:\n\n\\noindent i) $F$ is uniformly integrable over $\\Gamma$, i.e.,\n\\begin{equation} \\label{eq:fint}\n\\| \\, F \\, \\| \\, := \\, \\sup_{x \\in \\Gamma} \\sum_{y \\in \\Gamma}\nF(d(x,y)) \\, < \\, \\infty,\n\\end{equation}\n\n\\noindent and\n\n\\vspace{.3cm}\n\n\\noindent ii) $F$ satisfies\n\\begin{equation} \\label{eq:intlat}\nC \\, := \\, \\sup_{x,y \\in \\Gamma} \\sum_{z \\in \\Gamma}\n\\frac{F \\left( d(x,z) \\right) \\, F \\left( d(z,y)\n\\right)}{F \\left( d(x,y) \\right)} \\, < \\, \\infty.\n\\end{equation}\n\nGiven such a set $\\Gamma$ and a function $F$, by the triangle inequality,\nfor any $a \\geq 0$ the function\n\\begin{equation*}\nF_a(x) = e^{-ax} \\, F(x),\n\\end{equation*}\nalso satisfies i) and ii) above with $\\| F_a \\| \\leq \\| F \\|$ and $C_a \\leq C$.\n\nIn typical examples, one has that $\\Gamma \\subset \\mathbb{Z}^{d}$ for\nsome integer $d \\geq 1$, and the metric is\njust given by $d(x,y) = |x - y|=\\sum_{j=1}^{d} |x_j - y_j|$.\nIn this case, the function $F$ can be\nchosen as $F(|x|) = (1 + |x|)^{- d - \\epsilon}$ for any $\\epsilon >0$.\n\n\nTo each $x \\in \\Gamma$, we will associate a Hilbert space $\\mathcal{H}_x$.\nIn many relevant systems, one considers\n$\\mathcal{H}_x = L^2( \\mathbb{R}, {\\rm d} q_x)$, but this is not essential.\nWith any finite subset $\\Lambda \\subset \\Gamma$,\nthe Hilbert space of states over $\\Lambda$ is given by\n\\begin{equation*}\n\\mathcal{H}_{\\Lambda} \\, = \\, \\bigotimes_{x \\in \\Lambda} \\mathcal{H}_x,\n\\end{equation*}\nand the local algebra of observables over $\\Lambda$ is then defined to be\n\\[\n\\mathcal{A}_{\\Lambda} = \\bigotimes_{x \\in \\Lambda} {\\mathcal B} ({\\mathcal H}_x),\n\\]\nwhere ${\\mathcal B} ({\\mathcal H}_x)$ denotes the algebra of bounded linear operators on ${\\mathcal H}_x$.\n\nIf $\\Lambda_1 \\subset \\Lambda_2$, then there is a natural way of identifying\n$\\mathcal{A}_{\\Lambda_1} \\subset \\mathcal{A}_{\\Lambda_2}$, and we may thereby\ndefine the algebra of quasi-local observables by the inductive limit\n\\begin{equation*}\n\\mathcal{A}_{\\Gamma} \\, = \\, \\bigcup_{\\Lambda \\subset \\Gamma} \\mathcal{A}_{\\Lambda},\n\\end{equation*}\nwhere the union is over all finite subsets $\\Lambda \\subset \\Gamma$; see\n\\cite{bratteli1987,bratteli1997} for a discussion of these issues in general.\n\nThe result discussed in this section corresponds to bounded perturbations of\nlocal self-adjoint Hamiltonians. We fix a collection of on-site local operators\n$H^{\\rm loc} = \\{ H_x \\}_{x \\in \\Gamma}$ where each $H_x$ is a self-adjoint\noperator over $\\mathcal{H}_x$. In addition, we will consider a general class of bounded perturbations.\nThese are defined in terms of an interaction $\\Phi$, which is a map from the\nset of subsets of $\\Gamma$ to $\\mathcal{A}_{\\Gamma}$ with the property that\nfor each finite set $X \\subset \\Gamma$, $\\Phi(X) \\in \\mathcal{A}_X$ and\n$\\Phi(X) ^*= \\Phi(X)$. 
As with the Lieb-Robinson bound proven in \\cite{nachtergaele2009}, \nwe will need a growth condition on the set of interactions $\\Phi$ for which we can prove\nthe existence of the dynamics in the thermodynamic limit.\nThis condition is expressed in terms of the following norm. \nFor any $a \\geq 0$, denote by $\\mathcal{B}_a(\\Gamma)$ the set of interactions\nfor which\n\\begin{equation} \\label{eq:defphia}\n\\| \\Phi \\|_a \\, := \\, \\sup_{x,y \\in \\Gamma} \\frac{1}{F_a (d(x,y))} \\,\n\\sum_{X \\ni x,y} \\| \\Phi(X) \\| \\, < \\, \\infty.\n\\end{equation}\n\nNow, for a fixed sequence of local Hamiltonians $H^{\\rm loc} = \\{H_x \\}_{x\\in\\Gamma}$, as\ndescribed above, an interaction $\\Phi \\in \\mathcal{B}_a(\\Gamma)$, and a finite subset\n$\\Lambda \\subset \\Gamma$, we will consider self-adjoint Hamiltonians of the form\n\\begin{equation} \\label{eq:localham}\nH_{\\Lambda} \\, = \\, H^{\\rm loc}_{\\Lambda} \\, + \\, H^{\\Phi}_{\\Lambda} \\, = \\, \\sum_{x \\in \\Lambda} H_x \\, + \\, \\sum_{X \\subset \\Lambda} \\Phi(X),\n\\end{equation}\nacting on $\\mathcal{H}_{\\Lambda}$ (with domain given by $\\bigotimes_{x \\in \\Lambda} D(H_x)$ where $D(H_x) \\subset {\\mathcal H}_x$ denotes the domain of $H_x$). As these operators are self-adjoint, they generate a dynamics, or time evolution, $\\{ \\tau_t^{\\Lambda} \\}$,\nwhich is the one parameter group of automorphisms defined by\n\\begin{equation*}\n\\tau_t^{\\Lambda}(A) \\, = \\, e^{it H_{\\Lambda}} \\, A \\, e^{-itH_{\\Lambda}} \\quad \\mbox{for any} \\quad A \\in \\mathcal{A}_{\\Lambda}.\n\\end{equation*}\n\n\\begin{thm}\nUnder the conditions stated above, for all $t \\in {\\mathbb R}$, $A \\in \\mathcal{A}_{\\Gamma}$, \nthe norm limit \n\\begin{equation}\\label{eq:claim} \n\\lim_{\\Lambda \\to \\Gamma} \\, \\tau_t^{\\Lambda} (A) = \\tau_t(A)\n\\end{equation} exists in the sense of non-decreasing exhaustive sequences of finite\nvolumes $\\Lambda$ and defines a group of $*-$automorphisms $\\tau_t$ on the completion of\n$\\mathcal{A}_\\Gamma$. The convergence is uniform for $t$ in a compact set.\n\\end{thm}\n\n\n\\begin{proof} \nLet $\\Lambda \\subset \\Gamma$ be a finite set. Consider the unitary propagator\n\\begin{equation} \\label{eq:intuni}\n {\\mathcal U}_{\\Lambda} (t,s) = e^{i t H_{\\Lambda}^{\\text{loc}} } \\, e^{-i (t-s) H_{\\Lambda}} \\, e^{-is H_{\\Lambda}^{\\text{loc}}} \n\\end{equation}\nand its associated {\\it interaction-picture} evolution defined by\n\\begin{equation} \\label{eq:intpic}\n\\tau^{\\Lambda}_{t, \\text{int}} (A) = {\\mathcal U}_{\\Lambda} (0,t) \\, A \\; {\\mathcal U}_{\\Lambda} (t,0) \\quad \\mbox{for all } A \\in \\mathcal{A}_{\\Gamma} \\, .\n\\end{equation} \nClearly, $\\mathcal{U}_{\\Lambda}(t,t) = \\idty$ for all $t \\in \\mathbb{R}$, and it is also easy to check \nthat \n\\begin{equation*}\ni \\frac{{\\rm d}}{{\\rm d} t} \\, {\\mathcal U}_{\\Lambda} (t,s) = H_{\\Lambda}^{\\text{int}} (t) \\, {\\mathcal U}_{\\Lambda} (t,s) \\quad \\mbox{and} \\quad\n- i \\frac{{\\rm d}}{{\\rm d} s} \\, {\\mathcal U}_{\\Lambda} (t,s) = {\\mathcal U}_{\\Lambda} (t,s) \\, H_{\\Lambda}^{\\text{int}} (s) \n\\end{equation*}\nwith the time-dependent generator \n\\begin{equation} \\label{eq:gen}\nH^{\\text{int}}_{\\Lambda} (t) = e^{i H_{\\Lambda}^{\\text{loc}} t} H_{\\Lambda}^{\\Phi} e^{-i H^{\\text{loc}}_{\\Lambda} t} = \\sum_{Z \\subset \\Lambda} e^{i H_{\\Lambda}^{\\text{loc}} t} \\, \\Phi (Z) \\, e^{-i H^{\\text{loc}}_{\\Lambda} t} \\, . \n\\end{equation}\n\nFix $T>0$ and $X \\subset \\Gamma$ finite. 
For any $A \\in \\mathcal{A}_X$, we will show that \nfor any non-decreasing, exhausting sequence $\\{ \\Lambda_n \\}$ of $\\Gamma$, the sequence\n$\\{ \\tau_{t, \\text{int}}^{\\Lambda_n}(A) \\}$ is Cauchy in norm, uniformly for $t \\in [-T,T]$.\nMoreover, the bounds establishing the Cauchy property depend on $A$ only through $X$ and $\\|A\\|$.\nSince\n\\begin{equation*}\n\\tau_t^{\\Lambda} (A) = \\tau_{t,\\text{int}}^{\\Lambda} \\left(e^{itH_{\\Lambda}^{\\text{loc}}} \\, A \\, e^{-it H_{\\Lambda}^{\\text{loc}}} \\right) = \n\\tau_{t,\\text{int}}^{\\Lambda} \\left(e^{it \\sum_{x \\in X}H_x} \\, A \\, e^{-i t \\sum_{x \\in X} H_x} \\right) \\, ,\n\\end{equation*}\nan analogous statement then immediately follows for $\\{ \\tau_t^{\\Lambda_n}(A) \\}$, since they\nare all also localized in $X$ and have the same norm as $\\|A\\|$.\n\nTake $n \\leq m$ with $X \\subset \\Lambda_n \\subset \\Lambda_m$ and calculate\n\\begin{equation} \\label{eq:diff}\n\\tau_{t,\\text{int}}^{\\Lambda_m} (A) - \\tau_{t,\\text{int}}^{\\Lambda_n} (A) = \\int_0^t \\frac{{\\rm d}}{{\\rm d} s} \\left\\{ {\\mathcal U}_{\\Lambda_m} (0,s) \\, {\\mathcal U}_{\\Lambda_n} (s,t) \\, A \\, {\\mathcal U}_{\\Lambda_n} (t,s) \\, {\\mathcal U}_{\\Lambda_m} (s,0) \\right\\} \\, ds \\, .\n\\end{equation}\nA short calculation shows that\n\\begin{equation}\n\\begin{split}\n\\frac{{\\rm d}}{{\\rm d} s} {\\mathcal U}_{\\Lambda_m} (0,s) & \\, {\\mathcal U}_{\\Lambda_n} (s,t) \\, A \\, {\\mathcal U}_{\\Lambda_n} (t,s) \\, {\\mathcal U}_{\\Lambda_m} (s,0) \\\\\n& = \\, i \\mathcal{U}_{\\Lambda_m}(0,s) \\left[ \\left( H^{\\text{int}}_{\\Lambda_m}(s) - H^{\\text{int}}_{\\Lambda_n}(s) \\right), \\mathcal{U}_{\\Lambda_n}(s,t) \\, A \\, \\mathcal{U}_{\\Lambda_n}(t,s) \\right] \\mathcal{U}_{\\Lambda_m}(s,0) \\\\\n& = \\, i \\mathcal{U}_{\\Lambda_m}(0,s) e^{is H_{\\Lambda_n}^{\\text{loc}}} \\left[ \\tilde{B}(s), \\tau_{s-t}^{\\Lambda_n} \\left( \\tilde{A}(t) \\right) \\right] e^{-is H_{\\Lambda_n}^{\\text{loc}}} \\mathcal{U}_{\\Lambda_m}(s,0) \\, ,\n\\end{split}\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:tat}\n\\tilde{A}(t) = e^{-it H_{\\Lambda_n}^{\\text{loc}}} A \\, e^{it H_{\\Lambda_n}^{\\text{loc}}} = e^{-it H_{X}^{\\text{loc}}} A \\, e^{it H_{X}^{\\text{loc}}} \n\\end{equation}\nand\n\\begin{eqnarray} \\label{eq:tbs}\n\\tilde{B}(s) & = & e^{-is H_{\\Lambda_n}^{\\text{loc}}}\\left( H^{\\text{int}}_{\\Lambda_m}(s) - H^{\\text{int}}_{\\Lambda_n}(s) \\right) e^{is H_{\\Lambda_n}^{\\text{loc}}} \\nonumber \\\\\n& = & \\sum_{Z \\subset \\Lambda_m} e^{is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} \\Phi(Z) e^{-is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} - \\sum_{Z \\subset \\Lambda_n} \\Phi(Z) \\nonumber \\\\\n& = & \\sum_{\\stackrel{Z \\subset \\Lambda_m:}{ Z \\cap \\Lambda_m \\setminus \\Lambda_n \\neq \\emptyset}} e^{is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} \\Phi(Z) e^{-is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} \n\\end{eqnarray}\nCombining the results of (\\ref{eq:diff}) -(\\ref{eq:tbs}), and using unitarity, we find that\n\\begin{equation}\n\\left\\| \\tau_{t,\\text{int}}^{\\Lambda_m} (A) - \\tau_{t,\\text{int}}^{\\Lambda_n} (A) \\right\\| \\leq \\int_0^t \\left\\| \\left[ \\tau_{s-t}^{\\Lambda_n} \\left( \\tilde{A}(t) \\right), \\tilde{B}(s) \\right] \\right\\| \\, ds \\,\n\\end{equation}\nand by the Lieb-Robinson bound proven in \\cite{nachtergaele2009}, it is clear that\n\\begin{eqnarray}\n&&\\left\\| \\left[ \\tau_{s-t}^{\\Lambda_n} \\left( \\tilde{A}(t) \\right), \\tilde{B}(s) \\right] 
\n\\right\\|\\\\\n& \\leq & \\sum_{\\stackrel{Z \\subset \\Lambda_m:}{ Z \\cap \\Lambda_m \\setminus \n\\Lambda_n \\neq \\emptyset}} \n\\left\\| \\left[ \\tau_{s-t}^{\\Lambda_n} \\left( \\tilde{A}(t) \\right), \ne^{is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} \\Phi(Z) \ne^{-is H_{\\Lambda_m \\setminus \\Lambda_n}^{\\text{loc}}} \\right] \\right\\| \\nonumber \\\\\n& \\leq & \\frac{2 \\| A \\|}{C_a} \\left( e^{2 \\| \\Phi \\|_a C_a |t-s|} - 1 \\right) \\sum_{y \\in \\Lambda_m \\setminus \\Lambda_n} \\sum_{\\stackrel{Z \\subset \\Lambda_m:}{ y \\in Z }} \\| \\Phi(Z) \\| \\sum_{x \\in X} \\sum_{z \\in Z} F_a( d(x,z)) \\nonumber \\\\ & \\leq & \\frac{2 \\| A \\|}{C_a} \\left( e^{2 \\| \\Phi \\|_a C_a |t-s|} - 1 \\right) \\sum_{y \\in \\Lambda_m \\setminus \\Lambda_n} \\sum_{z \\in \\Lambda_m} \\sum_{\\stackrel{Z \\subset \\Lambda_m:}{ y, z \\in Z }} \\| \\Phi(Z) \\| \\sum_{x \\in X} F_a( d(x,z)) \\nonumber \\\\\n& \\leq & \\frac{2 \\| A \\| \\| \\Phi \\|_a}{C_a} \\left( e^{2 \\| \\Phi \\|_a C_a |t-s|} - 1 \\right) \\sum_{y \\in \\Lambda_m \\setminus \\Lambda_n} \\sum_{x \\in X} \\sum_{z \\in \\Lambda_m} F_a( d(x,z)) F_a(d(z,y)) \\nonumber \\\\\n& \\leq & 2 \\| A \\| \\| \\Phi \\|_a \\left( e^{2 \\| \\Phi \\|_a C_a |t-s|} - 1 \\right) \\sum_{y \\in \\Lambda_m \\setminus \\Lambda_n} \\sum_{x \\in X} F_a( d(x,y)) \\, .\\nonumber\n\\end{eqnarray}\nWith the estimate above and the properties of the function $F_a$, it is clear that\n\\begin{equation}\n\\sup_{t \\in [-T,T]} \\left\\| \\tau_{t,\\text{int}}^{\\Lambda_m} (A) - \\tau_{t,\\text{int}}^{\\Lambda_n} (A) \\right\\| \\to 0 \\quad \\mbox{ as } n, m \\to \\infty \\, ,\n\\end{equation}\nand the rate of convergence only depends on the norm $\\| A \\|$ and the set $X$ where $A$ is\nsupported. This proves the claim.\n\\end{proof}\n\nIf all\nlocal Hamiltonians $H_x$ are bounded, $\\{\\tau_t\\}$ is strongly continuous. \nIf the $H_x$ are allowed to be densely defined unbounded self-adjoint operators, \nwe only have weak continuity and the dynamics is more naturally defined on a\nvon Neumann algebra. This can be done when we have a suffiently nice invariant\nstate for the model with only the on-site Hamiltonians. E.g., suppose that\nfor each $x\\in \\Gamma$, we have a normalized eigenvector $\\phi_x$ of $H_x$.\nThen, for all $A\\in\\mathcal{A}_\\Lambda$, for any finite $\\Lambda\\subset \\Gamma$,\ndefine\n\\begin{equation}\n\\rho(A)=\\langle \\bigotimes_{x\\in\\Lambda}\\phi_x, A \\bigotimes_{x\\in\\Lambda}\\phi_x\\rangle\\, .\n\\end{equation}\n$\\rho$ can be regarded as a state of the infinite system defined on the norm completion \nof $\\mathcal{A}_\\Gamma$. The GNS Hilbert space $\\mathcal{H}_\\rho$\nof $\\rho$ can be constructed as the closure of \n$\\mathcal{A}_\\Gamma \\bigotimes_{x\\in\\Gamma}\\phi_x$. \nLet $\\psi\\in \\mathcal{A}_\\Gamma \\bigotimes_{x\\in\\Gamma}\\phi_x$.\nThen \n\\begin{equation}\n\\begin{split}\n\\left\\| \\left( \\tau_t(A) - \\tau_{t_0}(A) \\right) \\psi \\right\\| \\leq & \\left\\| \n\\left( \\tau_t(A) - \\tau_t^{(\\Lambda_n)}(A) \\right) \\psi \\right\\| \\\\\n+ & \\left\\| \\left( \\tau_t^{(\\Lambda_n)}(A) - \\tau_{t_0}^{(\\Lambda_n)}(A) \\right) \n\\psi \\right\\| +\n\\left\\| \\left( \\tau_{t_0}^{(\\Lambda_n)}(A) - \\tau_{t_0}(A) \\right) \\psi \\right\\| \\, ,\n\\end{split}\n\\end{equation}\nFor sufficiently large $\\Lambda_n$, the $\\lim_{t\\to t_0}$ of middle term vanishes \nby Stone's theorem. The two other terms are handled by \\ref{eq:claim}. 
It is clear\nhow to extend the continuity to $\\psi\\in\\mathcal{H}_\\rho$.\n \nWe will discuss this type of situation in more detail in the next three sections\nwhere we consider models that include quadratic (unbounded) interactions as well.\n\n\n\n\\section{The Harmonic Lattice}\\label{sec:harm}\n\nAs noted in the introduction, we will consider anharmonic perturbations of infinite\nharmonic lattices. In this section we discuss the properties\nof the harmonic systems that we need to assume in general in order\nto study the perturbations in the thermodynamic limit. We will also\nshow in detail that a standard harmonic lattice model posesses all the\nrequired properties.\n\n\\subsection{The CCR algebra of observables}\n\nWe begin by introducing the CCR algebra on which the harmonic dynamics will be\ndefined. Following \\cite{manuceau1973}, one can define the CCR algebra over any \nreal linear space\n$\\mathcal{D}$ equipped with a non-degenerate, symplectic bilinear form $\\sigma$, i.e.\n$\\sigma : \\mathcal{D} \\times \\mathcal{D} \\to \\mathbb{R}$ with the property that\nif $\\sigma(f,g) = 0$ for all $f \\in \\mathcal{D}$, then $g = 0$, and \n\\begin{equation} \\label{eq:symp}\n\\sigma(f,g) = - \\sigma(g,f) \\quad \\mbox{for all } f, g \\in \\mathcal{D} .\n\\end{equation}\nIn typical examples, $\\mathcal{D}$ will be a complex inner product space associated \nwith $\\Gamma$, e.g.\n$\\mathcal{D} = \\ell^2( \\Gamma)$ or a subspace thereof such as $\\mathcal{D} = \\ell^1( \\Gamma)$, \nor $\\ell^2(\\Gamma_0)$, with $\\Gamma_0\\subset\\Gamma$, and\n\\begin{equation}\n\\sigma(f,g) = \\mbox{Im} \\left[ \\langle f, g \\rangle \\right] \\, .\n\\end{equation}\nThe Weyl operators over $\\mathcal{D}$ are defined by associating non-zero elements\n$W(f)$ to each $f \\in \\mathcal{D}$ which satisfy\n\\begin{equation} \\label{eq:invo}\nW(f)^* = W(-f) \\quad \\mbox{for each } f \\in \\mathcal{D} \\, ,\n\\end{equation}\nand\n\\begin{equation} \\label{eq:weylrel}\nW(f) W(g) = e^{-i \\sigma(f,g)\/2} W(f+g) \\quad \\mbox{for all } f, g \\in \\mathcal{D} \\, .\n\\end{equation}\nIt is well-known that there is a unique, up to $*$-isomorphism, $C^*$-algebra generated\nby these Weyl operators with the property that $W(0) = \\idty$, $W(f)$ is unitary\nfor all $f \\in \\mathcal{D}$, and $\\| W(f) - \\idty \\| = 2$ for all $ f \\in \\mathcal{D} \\setminus \\{0 \\}$, \nsee e.g. Theorem 5.2.8 \\cite{bratteli1997}. This algebra, commonly known as the CCR algebra, or Weyl algebra, over\n$\\mathcal{D}$, we will denote by $\\mathcal{W} = \\mathcal{W}( \\mathcal{D})$.\n\n\\subsection{Quasi-free dynamics}\nThe anharmonic dynamics we study in this paper will be defined as\nperturbations of harmonic, technically {\\em quasi-free}, dynamics.\nA quasi-free dynamics on $\\mathcal{W}(\\mathcal{D})$ is a one-parameter\ngroup of *-automorphisms $\\tau_t$ of the form\n\\begin{equation}\n\\tau_t(W(f))=W(T_t f), \\quad f\\in \\mathcal{D}\n\\end{equation}\nwhere $T_t:\\mathcal{D}\\to\\mathcal{D}$ is a group of real-linear, symplectic\ntransformations, i.e., \n\\begin{equation} \\label{eq:sympT}\n\\sigma(T_t f, T_t g)= \\sigma(f,g)\\, .\n\\end{equation}\nAs $\\| W(f) - W(g) \\| = 2$ for all $ f\\neq g \\in \\mathcal{D}$, one should not\nexpect $\\tau_t$ to be strongly continuous; only a weaker form of\ncontinuity is present. This means that $\\tau_t$ does {\\em not}\ndefine a $C^*$-dynamical system on $\\mathcal{W}$, and thus we \nlook for a $W^*$-dynamical setting in which the weaker form of\ncontinuity is naturally expressed. 
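\n\nAlthough we will not need it in what follows, it may be useful to spell out the standard argument behind this lack of strong continuity; the short computation below is an elaboration of the statement above, not an additional assumption. Since $\\sigma$ is non-degenerate, for $h \\neq 0$ and any $\\lambda \\in \\mathbb{R}$ one can find $g \\in \\mathcal{D}$ with $\\sigma(g,h) = \\lambda$, and the Weyl relations (\\ref{eq:weylrel}) give\n\\begin{equation*}\nW(g) W(h) W(g)^* = e^{-i \\sigma(g,h)} \\, W(h) \\, ,\n\\end{equation*}\nso that the spectrum of the unitary $W(h)$ is invariant under all rotations of the unit circle and must therefore be the full circle. Consequently $\\| W(h) - z \\idty \\| = 2$ for every $|z| = 1$. Writing $W(T_tf)W(f)^* = e^{i \\sigma(T_tf,f)\/2} \\, W(T_tf - f)$, one finds\n\\begin{equation*}\n\\| \\tau_t(W(f)) - W(f) \\| = \\| W(T_tf - f) - e^{-i \\sigma(T_tf,f)\/2} \\idty \\| = 2\n\\end{equation*}\nwhenever $T_t f \\neq f$.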
\n\nIn the present context, it suffices to regard a {\\it $W^*$-dynamical system} as a pair $\\{ \\mathcal{M}, \\alpha_t \\}$\nwhere $\\mathcal{M}$ is a von Neumann algebra and $\\alpha_t$ is a weakly continuous, \none parameter group of $*$-automorphisms of $\\mathcal{M}$. For the harmonic systems we are \nconsidering, a specific $W^*$-dynamical system arises as follows. Let $\\rho$ be a \nstate on $\\mathcal{W}$ and denote by $\\left( \\mathcal{H}_{\\rho}, \\pi_{\\rho}, \n\\Omega_{\\rho} \\right)$ the corresponding GNS representation. We will assume\nthat $\\rho$ is both regular and $\\tau_t$-invariant. Recall that $\\rho$ is \nregular if and only if $t \\mapsto \\rho( W(tf) )$ is continuous \nfor all $f \\in \\mathcal{D}$, and $\\tau_t$-invariance means\n\\begin{equation}\n\\rho( \\tau_t(A) ) = \\rho(A) \\quad \\mbox{for all } A \\in \\mathcal{W}.\n\\end{equation}\nFor the von Neumann \nalgebra $\\mathcal{M}$, take the weak-closure of $\\pi_{\\rho}( \\mathcal{W})$ in $\\mathcal{L}\n( \\mathcal{H}_{\\rho})$ and let $\\alpha_t$ be the weakly continuous, \none parameter group of $*$-automorphisms of $\\mathcal{M}$ obtained by lifting $\\tau_t$ to \n$\\mathcal{M}$. The latter step is possible since $\\rho$ is $\\tau_t$-invariant, see e.g. Corollary 2.3.17 \\cite{bratteli1987}.\n\n\\subsection{Lieb-Robinson bounds for harmonic lattices}\nTo prove the existence of the dynamics for anharmonic models, we use that the unperturbed harmonic\nsystem satisfies a Lieb-Robinson bound. Such an estimate depends directly on\nproperties of $\\sigma$ and\n$T_t$. In fact, it is easy to calculate that\n\\begin{eqnarray}\n\\left[ \\tau_t(W(f)), W(g) \\right] & = & \\left\\{ W( T_t f) - W(g) W( T_tf) W(-g) \\right\\} W(g) \\nonumber \\\\\n& = & \\left\\{ 1 - e^{i \\sigma( T_tf, g)} \\right\\} W( T_tf) W(g) \\, ,\n\\end{eqnarray}\nusing the Weyl relations (\\ref{eq:weylrel}). For the examples we consider below, one can prove that\nfor every $a>0$, there exists positive numbers $c_a$ and $v_a$ for which \n\\begin{equation} \\label{eq:dynestex}\n\\left| \\sigma( T_t f, g ) \\right| \\leq c_a e^{ v_a |t|} \\sum_{x, y \\in \\mathbb{Z}^d} |f(x)| \\, |g(y)| \\frac{e^{-a|x-y|}}{(1+|x-y|)^{d+1}}\n\\end{equation}\nholds for all $t \\in \\mathbb{R}$ and all $f, g \\in \\ell^2( \\mathbb{Z}^d)$. In general, we will assume that\nthe harmonic dynamics satisfies an estimate of this type. Namely, we suppose that there exists a number\n$a_0 >0$ for which given $00$, take $\\gamma : [- \\pi, \\pi)^d \\to \\mathbb{R}$ as in \n(\\ref{eq:defgamma}), and\nset $U$ and $V$ as in (\\ref{eq:defU+V}) with (\\ref{eq:multg}). If $\\omega >0$, both \n$U$ and $V$ are bounded transformations on $\\ell^2( \\mathbb{Z}^d)$. We will treat \nthe case $\\omega =0$ by a limiting argument. The mapping $T_t$ defined by setting\n\\begin{equation} \\label{eq:deftt}\nT_t = (U+V) \\mathcal{F}^{-1} M_t \\mathcal{F} (U^*-V^*) \\, ,\n\\end{equation}\nis well-defined on $\\ell^2( \\mathbb{Z}^d)$. To define the dynamics on $\\mathcal{W}(\\mathcal{D})$\nwe will need to choose subspaces $\\mathcal{D}$ that are $T_t$ invariant. \nOn such $\\mathcal{D}$, $T_t$ is clearly real-linear. With (\\ref{eq:bog1}) \nand (\\ref{eq:bog2}), one\ncan easily verify the group properties $T_0 = \\idty$, $T_{s+t} = T_s \\circ T_t$, and\n\\begin{equation}\n\\mbox{Im} \\left[ \\langle T_t f, T_t g \\rangle \\right] = \\mbox{Im} \\left[ \\langle f,g \\rangle \\right] \\, ,\n\\end{equation}\ni.e. $T_t$ is sympletic in the sense of (\\ref{eq:sympT}). 
Using Theorem 5.2.8 of \\cite{bratteli1997}, \nthere is a unique \none parameter group of $*$-automorphisms on $\\mathcal{W}( \\mathcal{D})$, which we will denote\nby $\\tau_t$, that satisfies\n\\begin{equation}\n\\tau_t(W(f)) = W(T_tf) \\quad \\mbox{for all } f \\in \\mathcal{D} \\, .\n\\end{equation}\nThis defines the harmonic dynamics on $\\mathcal{W}( \\mathcal{D})$.\n\nHere it is important that $T_t : \\mathcal{D} \\to \\mathcal{D}$. As we demonstrated in \\cite{nachtergaele2009},\nthe mapping $T_t$ can be expressed as a convolution. In fact, \n\\begin{equation} \\label{eq:defft}\nT_tf = f * \\overline{ \\left(H_t^{(0)} + \\frac{i}{2}(H_t^{(-1)} + H_t^{(1)}) \\right)} + \\overline{f}*\\left( \\frac{i}{2}(H_t^{(1)} - H_t^{(-1)}) \\right).\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:h}\n\\begin{split}\nH^{(-1)}_t(x) &= \\frac{1}{(2 \\pi)^d} {\\rm Im} \\left[ \\int \\frac{1}{ \\gamma(k)} e^{i(k \\cdot x-2\\gamma(k)t)} \\, d k \\right],\n\\\\\nH^{(0)}_t(x) &= \\frac{1}{(2 \\pi)^d} {\\rm Re} \\left[ \\int e^{i(k \\cdot x - 2\\gamma(k)t)} \\, dk \\right],\n\\\\\nH^{(1)}_t(x) &= \\frac{1}{(2 \\pi)^d} {\\rm Im} \\left[ \\int \\gamma(k) \\, e^{i(k \\cdot x-2\\gamma(k)t)} \\, dk \\right] .\n\\end{split}\n\\end{equation}\nUsing analysis similar to what is proven in \\cite{nachtergaele2009}, the following result holds.\n\\begin{lem}\\label{lem:htx}\nConsider the functions defined in (\\ref{eq:h}). For $\\omega\\geq 0, \\lambda_1,\n\\ldots,\\lambda_d\\geq 0$, but such that \n$c_{\\omega,\\lambda} = (\\omega^2 + 4 \\sum_{j=1}^d \\lambda_j )^{1\/2} >0$, \nand any $\\mu >0$, the bounds\n\\begin{equation}\n\\begin{split}\n\\left| H_t^{(0)}(x) \\right| &\\leq e^{-\\mu \\left( |x| - c_{\\omega,\\lambda} \\max \\left( \\frac{2}{\\mu} \\, , \\, e^{(\\mu\/2)+1}\\right) |t| \\right)}\n\\\\\n\\left| H_t^{(-1)}(x) \\right| &\\le c^{-1}_{\\omega,\\lambda}e^{-\\mu \\left( |x| - c_{\\omega,\\lambda} \\max \\left( \\frac{2}{\\mu} \\, , \\, e^{(\\mu\/2)+1}\\right) |t| \\right)}\n\\\\\n\\left| H_t^{(1)}(x) \\right| &\\le c_{\\omega,\\lambda}e^{\\mu\/2}e^{-\\mu \\left( |x| - c_{\\omega,\\lambda} \\max \\left( \\frac{2}{\\mu} \\, , \\, e^{(\\mu\/2)+1}\\right) |t| \\right)}\n\\end{split}\n\\end{equation}\nhold for all $t \\in \\mathbb{R}$ and $x \\in \\mathbb{Z}^d$. Here $|x| = \\sum_{j=1}^{d} |x_i|$.\n\\end{lem}\n\nGiven the estimates in Lemma~\\ref{lem:htx}, equation (\\ref{eq:defft}) and \nYoung's inequality imply that $T_t$ can be defined as\na transformation of $\\ell^p(\\mathbb{Z}^d)$, for $p\\geq 1$. However,\nthe symplectic form limits us to consider $\\mathcal{D}=\\ell^p(\\mathbb{Z}^d)$\nwith $1\\leq p\\leq 2$.\n\nThe following bound now readily follows:\n\\begin{equation}\n\\begin{split}\n\\vert \\mbox{Im} \\langle T_t f, g\\rangle\\vert \\leq & \\left(1+2 e^{\\mu\/2}c_{\\omega,\\lambda} + 2 c_{\\omega,\\lambda}^{-1}\\right) \\times \\\\\n& \\quad \\times \\sum_{x,y} \\vert f(x)\\vert\\, \\vert g(y)\\vert \ne^{-\\mu \\left( |x| - c_{\\omega,\\lambda} \\max \\left( \\frac{2}{\\mu} \\, , \\, e^{(\\mu\/2)+1}\\right) |t| \\right)}\n\\end{split}\n\\end{equation}\nThis implies an estimate of the form (\\ref{eq:dynestex}), and hence a Lieb-Robinson bound as in (\\ref{eq:freest}).\n\nA simple corollary of Lemma~\\ref{lem:htx} follows.\n\n\\begin{cor} \\label{cor:hest} Consider the functions defined in (\\ref{eq:h}). 
For $\\omega\\geq 0, \\lambda_1,\n\\ldots,\\lambda_d\\geq 0$, but with \n$c_{\\omega,\\lambda} = (\\omega^2 + 4 \\sum_{j=1}^d \\lambda_j )^{1\/2} >0$.\nTake $\\| \\cdot \\|_1$ to be the $\\ell^1$-norm. One has that \n\\begin{equation}\n\\| H_t^{(0)} - \\delta_0 \\|_1 \\to 0 \\quad \\mbox{as} \\quad t \\to 0,\n\\end{equation}\nand\n\\begin{equation}\n\\| H_t^{(m)} \\|_1 \\to 0 \\quad \\mbox{as} \\quad t \\to 0, \\quad \\mbox{for } m \\in \\{ -1, 1\\}.\n\\end{equation}\n\\end{cor} \n\\begin{proof}\nThe estimates in Lemma~\\ref{lem:htx} imply that the functions $H_t^{(m)}$ are bounded by exponentially decaying functions (in $|x|$).\nThese estimates are uniform for $t$ in compact sets, e.g. $t \\in [-1,1]$, and therefore dominated convergence applies. It is clear that\n$H_0^{(0)}(x) = \\delta_0(x)$ while $H_0^{(m)}(x) = 0$ for $m \\in \\{-1,1\\}$. This proves the corollary.\n\\end{proof}\n\n\\subsubsection{Representing the dynamics} The infinite-volume ground state of the model (\\ref{eq:harham}) is the vacuum\nstate for the $b-$operators, as can be seen from (\\ref{eq:diagham}). This state\nis defined on $\\mathcal{W}(\\mathcal{D})$ by\n\\begin{equation}\\label{eq:state}\n\\rho(W(f))=e^{-\\frac{1}{4}\\Vert (U^*-V^*)f\\Vert^2}\n\\end{equation}\nBy standard arguments this defines a state on $\\mathcal{W}(\\mathcal{D})$\n\\cite{bratteli1997}. Using (\\ref{eq:deftt}), (\\ref{eq:bog1}) and (\\ref{eq:bog2}) one \nreadily verifies that $\\rho$ is $\\tau_t$-invariant. $\\rho$ is regular\nby observation. The weak continuity of the dynamics in the GNS-representation\nof $\\rho$ will follow from the continuity of the functions of the form\n\\begin{equation}\\label{eq:weakcont}\nt\\mapsto \\rho(W(g_1)W(T_t f)W(g_2)), \\mbox{ for}\\ g_1, g_2, f\\in \\mathcal{D}.\n\\end{equation}\nWhen $\\omega>0$, this continuity can be easily observed from the\nfollowing expresion:\n\\begin{equation} \\label{eq:defstate}\n\\begin{split}\n\\rho(W(g_1)W(T_t f)W(g_2)) = & e^{i\\sigma(g_1, g_2)\/2}\ne^{i\\sigma(T_t f, g_2 - g_1)\/2} \\times \\\\\n&\\times e^{-\\Vert(U^*-V^*)(g_1+g_2+T_t f)\\Vert^2\/4}\n\\end{split}\n\\end{equation}\nNote that $T_t$ is differentiable with bounded derivative and that both $U$ and $V$ \nare bounded. This establishes the continuity in the case that $\\omega>0$. \n\nAs discussed in the introduction of the section, the $W^*$-dynamical system\nis now defined by considering the GNS representation $\\pi_\\rho$\nof $\\rho$. This yields a von Neumann algebra \n$\\mathcal{M}=\\overline{\\pi_\\rho(\\mathcal{W}(\\mathcal{D}))}$. The invariance of\n$\\rho$ implies that the dynamics is implementable by unitaries $U_t$, i.e.,\n\\begin{equation}\n\\pi_\\rho(\\tau_t(W(f)))=U_t^* \\pi_\\rho(W(f)) U_t\\, .\n\\end{equation}\nUsing $U_t$, the dynamics can be extended to $\\mathcal{M}$. As a consequence of\n(\\ref{eq:weakcont}), this extended dynamics is weakly continuous.\n\n\\subsubsection{The case of $\\omega=0$} We now discuss the case $\\omega =0$. Here, the maps $T_t$ are defined\nusing the convolution formula (\\ref{eq:defft}). By Lemma \\ref{lem:htx},\n$T_t$ is well-defined as a transformation of $\\ell^p(\\mathbb{Z}^d)$, for\n$1\\leq p\\leq 2$. 
Both the group property of $T_t$ and the invariance of \nthe symplectic form $\\sigma$ follow in the limit $\\omega\\to 0$ by\ndominated convergence, which is justified by Lemma \\ref{lem:htx}.\nThis demonstrates that the dynamics is well defined.\n\nWe represent the dynamics in the state $\\rho$ defined by (\\ref{eq:state}), but with the understanding\nthat $\\Vert (U^*- V^*)f\\Vert$ may take on the value $+\\infty$, in which case\n$\\rho(W(f))=0$. $\\rho$ is still clearly regular. It remains to show that the\ndynamics is weakly continuous.\n\nObserve that\n\\begin{equation}\n\\begin{split}\nT_t f - f = f * \\left(H_t^{(0)} - \\delta_0 \\right) & - f* \\left( \\frac{i}{2}(H_t^{(-1)} + H_t^{(1)}) \\right) \\\\\n& \\quad + \\overline{f}*\\left( \\frac{i}{2}(H_t^{(1)} - H_t^{(-1)}) \\right),\n\\end{split}\n\\end{equation}\nfollows from (\\ref{eq:defft}). Using Young's inequality and Corollary~\\ref{cor:hest}, it is clear that $\\| T_t f - f \\| \\to 0$ as \n$t \\to 0$ for any $f \\in \\ell^p( \\mathbb{Z}^d)$ with $1 \\leq p \\leq 2$. A calculation shows that\n\\begin{equation}\n(U^*-V^*)(T_t f - f) = F_1 * \\left(H_t^{(0)} - \\delta_0 \\right) - F_2 * H_t^{(-1)} - i F_3 *H_t^{(1)} \\, ,\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{split}\n& \\quad \\quad F_1 = \\, \\mathcal{F}^{-1} M_{\\sqrt{ \\gamma}} \\mathcal{F} \\mbox{Im}[f] \n - i \\mathcal{F}^{-1} M_{ \\gamma^{-1\/2}} \\mathcal{F} \\mbox{Re}[f] \\, , \\\\\n & F_2 = \\mathcal{F}^{-1} M_{\\sqrt{ \\gamma}} \\mathcal{F} \\mbox{Re}[f] \\, , \\quad \\mbox{and} \\quad\n F_3 = \\mathcal{F}^{-1} M_{\\gamma^{-1\/2}} \\mathcal{F} \\mbox{Im}[f] \\, . \n\\end{split}\n\\end{equation}\nA similar argument to what is given above now implies that \n$\\| (U^*-V^*)(T_tf-f) \\| \\to 0$ as $t \\to 0$, for any $f \\in \\mathcal{D}_0$, \nwhere\n\\begin{equation}\n\\mathcal{D}_0 = \\left\\{ f \\in \\ell^2( \\mathbb{Z}^d) : \\mathcal{F}^{-1} M_{\\gamma^{-1\/2}} \\mathcal{F} \\mbox{Re}[f] \\in \\ell^2( \\mathbb{Z}^d) \\right\\} \\, .\n\\end{equation}\nNo additional assumption on $\\mbox{Im}[f]$ is necessary since $F_3$ is\nconvolved with $H_t^{(1)}$. Given the form of (\\ref{eq:defstate}), this suffices to prove weak continuity.\nIn fact, one can check that $T_t$ leaves $\\mathcal{D}_0$ invariant and that if $f \\in \\mathcal{D}_0$, then \n$(U^*-V^*)T_t f \\in \\ell^2( \\mathbb{Z}^d)$ for all $t \\in \\mathbb{R}$. \nThis establishes weak continuity of the dynamics, defined on $\\mathcal{W}( \\mathcal{D}_0)$.\n\n\n\\begin{rem} We observe that, when $\\omega=0$, the finite volume Hamiltonian $H_L^h$ (\\ref{eq:harham}) is translation invariant and commutes with the total momentum operator $P_0$ (see (\\ref{eq:Q+Pk})). In fact, $H_L^h$ can be written as \n\\[ \\begin{split} H_L^h &= P_0^2 + \\sum_{k \\in \\Lambda_L^* \\backslash \\{ 0 \\}} P_k^* P_k + \\gamma^2 (k) Q_k^* Q_k \\\\ &= P_0^2 + \\sum_{k \\in \\Lambda_L^* \\backslash \\{ 0 \\}} \\gamma(k) (2 b_k^* b_k + 1) \\end{split} \\] where we used the notation (\\ref{eq:Q+Pk}) and, for $k \\not = 0$, we introduced the operators $b_k, b_k^*$ as in (\\ref{eq:beqns}). \nIn this case, the operator $H_L^h$ does not have eigenvectors; its spectrum is purely continuous. By a unitary transformation, the Hilbert space ${\\mathcal H}_{\\Lambda_L}$ (see (\\ref{eq:hspace})) can be mapped into the space $L^2 ({\\mathbb R}, {\\rm d} P_0 ; {\\mathcal H}_b)$ of square integrable functions of $P_0 \\in {\\mathbb R}$, with values in ${\\mathcal H}_b$. 
Here, ${\\mathcal H}_b$ denotes the Fock space generated by all creation and annihilation operators $b_k^*, b_k$ with $k \\not = 0$. It is then easy to construct vectors which minimize the energy by a given distribution of the total momentum; for an arbitrary (complex valued) $f \\in L^2 ({\\mathbb R})$ with $\\| f \\|=1$, we define $\\psi_f \\in L^2 ({\\mathbb R}, {\\rm d} P_0 ; {\\mathcal H}_b)$ by setting $\\psi_f (P_0) = f (P_0) \\Omega$ (where $\\Omega$ is the Fock vacuum in ${\\mathcal H}_b$). These vectors are not invariant with respect to the time evolution. It is simple to check that the Schr\\\"odinger evolution of $\\psi_f$ is given by $e^{-iH_{L}^h t} \\psi_f = \\psi_{f_t}$ with $f_t (P_0) = e^{-it P_0^2} f (P_0)$ is the free evolution of $f$. In particular, for $\\omega =0$, $H_L^h$ does not have a ground state in the traditional sense of an eigenvector. \nFor this reason, when $\\omega =0$, it is not a priori clear what the natural choice of state should be. As is discussed above, one possibility is to consider first $\\omega \\not = 0$ and then take the limit $\\omega \\to 0$. This yields a ground state for the infinite system \nwith vanishing center of mass momentum of the oscillators. By considering non-zero values\nfor the center of mass momentum, one can also define other states with similar properties.\n\\end{rem}\n\n\\subsubsection{Some final comments} The analysis in the following sections and our main result is not limited\nto the class of examples we discussed above. E.g., harmonic systems defined on \nmore general graphs, such as the ones considered in \\cite{eisert2005,eisert2008} \ncan also be treated. Also note that our choice of time-invariant\nstate, while natural, is by no means the only possible. Instead of the\nvacuum state defined in (\\ref{eq:state}), equilibrium states at positive temperatures could be used in exactly the same way. It would also make\nsense to study the convergence of the equilibrium or ground states\nfor the perturbed dynamics and to consider the dynamics in the representation of the limiting infinite-system state, but we have\nnot studied this situation and will not discuss it in this paper.\n\n\n\\section{Perturbing the Harmonic Dynamics}\n\nIn this section, we will discuss finite volume perturbations of the infinite volume \nharmonic dynamics which we defined in Section~\\ref{sec:harm}. To begin, we recall a\nfundamental result about perturbations of quantum dynamics defined by adding a bounded term\nto the generator. This is a version of what is usually known as the Dyson or Duhamel \nexpansion. The following statement summarizes Proposition 5.4.1 of \\cite{bratteli1997}.\n\\begin{prop} \\label{prop:perdyn} Let $\\{ \\mathcal{M}, \\alpha_t \\}$ be a $W^*$-dynamical system and let $\\delta$ denote the\ninfinitesimal generator of $\\alpha_t$. Given any $P = P^* \\in \\mathcal{M}$, set $\\delta_P$ to be the bounded \nderivation with domain $D( \\delta_P) = \\mathcal{M}$ satisfying $\\delta_P(A) = i [P, A]$ for all $A \\in \\mathcal{M}$.\nIt follows that $\\delta + \\delta_P$ generates a one-parameter group of $*$-automorphisms $\\alpha^P$ of $\\mathcal{M}$\nwhich is the unique solution of the integral equation\n\\begin{equation} \\label{eq:defpdyn}\n\\alpha_t^P(A) = \\alpha_t(A) + i \\int_0^t \\alpha_s^P \\left( \\left[ P, \\alpha_{t-s}(A) \\right] \\right) \\, ds \\, . 
\n\\end{equation}\nIn addition, the estimate\n\\begin{equation} \\label{eq:dynestpert}\n\\left\\| \\alpha_t^P(A) - \\alpha_t(A) \\right\\| \\leq \\left( e^{|t| \\| P \\|} - 1 \\right) \\, \\| A \\| \\, \n\\end{equation}\nholds for all $t \\in \\mathbb{R}$ and $A \\in \\mathcal{M}$.\n\\end{prop}\n\nSince the initial dynamics $\\alpha_t$ is assumed weakly continuous, the norm estimate (\\ref{eq:dynestpert}) can be used to\nshow that the perturbed dynamics is also weakly continuous. Hence, for each $P = P^* \\in \\mathcal{M}$ the pair \n$\\{ \\mathcal{M}, \\alpha_t^P \\}$ is also a $W^*$-dynamical system. Thus, if $P_i=P_i^* \\in \\mathcal{M}$ \nfor $i=1,2$, then one can define $\\alpha_t^{P_1+P_2}$ iteratively.\n\n\n\\subsection{A Lieb-Robinson bound for on-site perturbations} \\label{subsec:onsite}\n\nIn this section we will consider perturbations of the harmonic dynamics \ndefined in Section~\\ref{sec:harm}. Recall that our general assumptions for the harmonic\ndynamics on $\\Gamma$ are as follows.\n\nWe assume that the harmonic dynamics, $\\tau^0_t$, is defined on a Weyl algebra\n$\\mathcal{W}( \\mathcal{D})$ where $\\mathcal{D}$ is a subspace of $\\ell^2(\\Gamma)$.\nIn fact, we assume there exists a group $T_t$ of real-linear\ntransformations which leave $\\mathcal{D}$ invariant and satisfy\n\\begin{equation}\n\\tau_t^0(W(f)) = W(T_tf) \\quad \\mbox{for all } f \\in \\mathcal{D} \\, .\n\\end{equation}\nIn addition, we assume that this harmonic dynamics satisfies a Lieb-Robinson \nbound. Specifically, we suppose that there exists a number $a_0 >0$ for which\ngiven any $00$ and define the function $\\Psi_t : [0,t] \\to \\mathcal{W}(\\mathcal{D})$ by setting\n\\begin{equation} \\label{eq:defpsit}\n\\Psi_t(s) = \\left[ \\tau_s^{(\\Lambda)} \\left( \\tau_{t-s}^0(W(f)) \\right), W(g) \\right] \\, .\n\\end{equation}\nIt is clear that $\\Psi_t$ interpolates between the commutator associated with the original harmonic dynamics, $\\tau_t^0$ at $s=0$, \nand that of the perturbed dynamics, $\\tau_t^{(\\Lambda)}$ at $s=t$. A calculation shows that\n\\begin{equation} \\label{eq:dpsit}\n\\frac{d}{ds} \\Psi_t(s) = i \\sum_{x \\in \\Lambda} \\left[ \\, \\tau_s^{(\\Lambda)} \\left( \\left[ P_x, W ( T_{t-s}f) \\right] \\right) , W(g) \\right] \\, ,\n\\end{equation}\nwhere differentiability is guaranteed by the results of Proposition~\\ref{prop:perdyn}.\nThe inner commutator can be expressed as\n\\begin{eqnarray}\n \\left[ P_x, W ( T_{t-s}f) \\right] & = & \\int_{\\mathbb{C}} \\left[ W(z \\delta_x), W( T_{t-s}f) \\right] \\mu_x( dz) \\nonumber \\\\\n & = & W( T_{t-s}f) \\mathcal{L}_{t-s;x}(f) \\, .\n\\end{eqnarray}\nwhere\n\\begin{equation} \\label{eq:defLx}\n\\mathcal{L}^*_{t-s;x}(f) = \\mathcal{L}_{t-s;x}(f) = \\int_{\\mathbb{C}} W(z \\delta_x) \\left\\{e^{i \\sigma( T_{t-s}f, z \\delta_x)} -1 \\right\\} \\mu_x(dz) \\, \\in \\mathcal{W}(\\mathcal{D}) \\, .\n\\end{equation}\nThus $\\Psi_t$ satisfies\n\\begin{equation}\n\\begin{split}\n\\frac{d}{ds} \\Psi_t(s) = i \\sum_{x \\in \\Lambda} & \\Psi_t(s) \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right) \\\\ \n+ & i \\sum_{x \\in \\Lambda} \\tau_s^{(\\Lambda)} \\left( W( T_{t-s}f) \\right) \\, \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right), W(g) \\right] \\, .\n\\end{split}\n\\end{equation}\nThe first term above is norm preserving. 
In fact, define a unitary evolution $U_t(\\cdot)$ by setting\n\\begin{equation}\n\\frac{d}{ds}U_t(s) = - i \\sum_{x \\in \\Lambda} \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right) U_t(s) \\quad \\mbox{with } U_t(0) = \\idty \\, .\n\\end{equation}\nIt is easy to see that\n\\begin{equation}\n\\frac{d}{ds} \\left( \\Psi_t(s) U_t(s) \\right) = i \\sum_{x \\in \\Lambda} \\tau_s^{(\\Lambda)} \\left( W( T_{t-s}f) \\right) \\, \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right), W(g) \\right] U_t(s) \\, ,\n\\end{equation}\nand therefore,\n\\begin{equation}\n\\Psi_t(t) U_t(t) = \\Psi_t(0) + i \\sum_{x \\in \\Lambda} \\int_0^t \\tau_s^{(\\Lambda)} \\left( W( T_{t-s}f) \\right) \\, \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right), W(g) \\right] U_t(s) \\, ds \\, .\n\\end{equation}\nEstimating in norm, we find that\n\\begin{equation} \\label{eq:norm1}\n\\begin{split}\n\\Big\\| \\Big[ \\tau_t^{(\\Lambda)} \\left(W (f)\\right) , W(g) \\Big] \\Big\\| \\leq & \\Big\\| \\Big[ \\tau_t^0 \\left(W (f)\\right) , W(g) \\Big] \\Big\\| \\\\\n& + \\sum_{x \\in \\Lambda} \\int_0^t \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right) , W(g) \\right] \\Big\\| \\, ds \\, .\n\\end{split}\n\\end{equation}\nMoreover, using (\\ref{eq:defLx}) and the bound (\\ref{eq:prelrb}), it is clear that\n\\begin{equation} \\label{eq:norm2}\n\\begin{split}\n \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;x}(f) \\right) , W(g) \\right] \\Big\\| \\leq & \\, c_a e^{v_a(t-s)} \\sum_{x' \\in \\Gamma} |f(x')| F_a \\left( d(x,x') \\right) \\times \\\\\n& \\quad \\times \\int_{\\mathbb{C}} |z| \\, \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( W(z \\delta_x) \\right) , W(g) \\right] \\Big\\| \\, |\\mu_x|(dz)\n\\end{split}\n\\end{equation}\nholds. 
Combining (\\ref{eq:norm2}), (\\ref{eq:norm1}), and (\\ref{eq:freelrb}), we have proven that\n\\begin{equation}\\label{eq:norm3}\n\\begin{split}\n\\Big\\| \\Big[ \\tau_t^{(\\Lambda)} \\left(W (f)\\right) , W(g) \\Big] \\Big\\| \\leq \\; & c_a e^{v_a t} \\sum_{x, y} |f(x)| \\, |g(y)| \\, F_a \\left( d(x,y) \\right) \\\\ \n&+ c_a \\sum_{x' \\in \\Gamma} |f(x')| \\sum_{x \\in \\Lambda} F_a \\left( d(x,x') \\right) \\int_0^t e^{v_a (t-s)} \\times \\\\\n&\\quad \\times \\int_{\\mathbb{C}} |z| \\, \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( W(z \\delta_x) \\right) , W(g) \\right] \\Big\\| \\, |\\mu_x|(dz) \\, ds \\,.\n\\end{split}\n\\end{equation}\nFollowing the iteration scheme applied in \\cite{nachtergaele2009}, one arrives at (\\ref{eq:anharmbd}) as claimed.\n\\end{proof}\n\n\n\n\\subsection{Multiple Site Anharmonicities} \\label{subsec:anharmms}\n\nIn this section, we will prove that Lieb-Robinson bounds, similar to those in\nTheorem~\\ref{thm:ahlrb}, also hold for perturbations involving short range interactions.\nWe introduce these as follows.\n\nFor each finite subset $X \\subset \\Gamma$, we associate a finite measure $\\mu_X$ on\n$\\mathbb{C}^X$ and an element $P_X \\in \\mathcal{W}(\\mathcal{D})$ with the form\n\\begin{equation} \\label{eq:PX}\nP_X = \\int_{\\mathbb{C}^X} W( z \\cdot \\delta_X) \\, \\mu_X(d z) \\, ,\n\\end{equation}\nwhere, for each $z \\in \\mathbb{C}^X$, the function $z \\cdot \\delta_X : \\Gamma \\to \\mathbb{C}$ is\ngiven by\n\\begin{equation}\n(z \\cdot \\delta_X)(x) = \\sum_{x' \\in X} z_{x'} \\delta_{x'}(x) = \\left\\{ \\begin{array}{cc} z_x & \\mbox{if } x \\in X , \\\\ 0 & \\mbox{otherwise.} \\end{array} \\right.\n\\end{equation}\nWe will again require that $\\mu_X$ is invariant with respect to $z \\mapsto -z$, and hence, $P_X$ is\nself-adjoint. In analogy to (\\ref{eq:defpl}), for any finite subset $\\Lambda \\subset \\Gamma$, we will set\n\\begin{equation} \\label{eq:defpl2}\nP^{\\Lambda} = \\sum_{X \\subset \\Lambda} P_X\\, ,\n\\end{equation}\nwhere the sum is over all subsets of $\\Lambda$.\nHere we will again let $\\tau^{(\\Lambda)}_t$ denote the dynamics resulting from Proposition~\\ref{prop:perdyn}\napplied to the $W^*$-dynamical system $\\{ \\mathcal{M}, \\tau_t^0 \\}$ and the perturbation $P^{\\Lambda}$ defined by (\\ref{eq:defpl2}). \n\nThe main assumption on these multi-site perturbations follows.\nThere exists a number $a_1 >0$ such that for all $0< a \\leq a_1$, there is a number $\\kappa_a >0$ for which\ngiven any pair $x_1, x_2 \\in \\Gamma$, \n\\begin{equation} \\label{eq:pertbd}\n\\sum_{\\stackrel{X \\subset \\Gamma:}{x_1, x_2 \\in X}} \\int_{\\mathbb{C}^X} |z_{x_1}| | z_{x_2}| \\big| \\mu_X \\big|(dz) \\leq \\kappa_a F_a \\left( d(x_1,x_2) \\right) \\, .\n\\end{equation} \n\n\\begin{thm} \\label{thm:ahlrbms} Let $\\tau_t^0$ be a harmonic dynamics defined on $\\Gamma$.\nAssume that (\\ref{eq:pertbd}) holds, and that $\\tau_t^{(\\Lambda)}$ denotes the corresponding\nperturbed dynamics. For every $0< a \\leq \\min(a_0, a_1)$, there exist positive numbers $c_a$ and $v_a$ for which\nthe estimate\n\\begin{equation} \\label{eq:anharmbdms}\n\\left\\| \\left[ \\tau_t^{(\\Lambda)} \\left( W(f) \\right), W(g) \\right] \\right\\| \\leq c_a e^{ (v_a + c_a \\kappa_a C_a^2 ) |t|} \\sum_{x, y} |f(x)| \\, |g(y)| F_a \\left( d(x,y) \\right)\n\\end{equation}\nholds for all $t \\in \\mathbb{R}$ and for any functions $f, g \\in \\mathcal{D}$. 
\n\\end{thm}\nThe proof of this result closely follows that of Theorem~\\ref{thm:ahlrb}, and so we only comment on\nthe differences.\n\\begin{proof}\nFor $f,g \\in \\mathcal{D}$ and $t >0$, define $\\Psi_t :[0,t] \\to \\mathcal{W}(\\mathcal{D})$ as in (\\ref{eq:defpsit}).\nThe derivative calculation beginning with (\\ref{eq:dpsit}) proceeds as before. Here\n\\begin{equation} \\label{eq:deflzms}\n\\mathcal{L}_{t-s;X}(f) = \\int_{\\mathbb{C}^X} W( z \\cdot \\delta_X) \\left\\{ e^{i \\sigma(T_{t-s}f, z \\cdot \\delta_X) } - 1 \\right\\} \\, \\mu_X( d z) \\, ,\n\\end{equation}\nis also self-adjoint. The norm estimate\n\\begin{equation} \\label{eq:norm1ms}\n\\begin{split}\n\\Big\\| \\Big[ \\tau_t^{(\\Lambda)} \\left(W (f)\\right) , W(g) \\Big] \\Big\\| \\leq & \\Big\\| \\Big[ \\tau_t^0 \\left(W (f)\\right) , W(g) \\Big] \\Big\\| \\\\ & + \\sum_{X \\subset \\Lambda} \\int_0^t \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( \\mathcal{L}_{t-s;X}(f) \\right) , W(g) \\right] \\Big\\| \\, ds \\, ,\n\\end{split}\n\\end{equation}\nholds similarly. With (\\ref{eq:deflzms}), it is easy to see that the integrand in (\\ref{eq:norm1ms}) is bounded by\n\\begin{equation} \nc_a e^{v_a(t-s)} \\sum_{x\\in \\Gamma} \\, |f(x)| \\, \\sum_{x' \\in X} F_a \\left( d(x,x') \\right) \\int_{\\mathbb{C}^X} | z_{x'}| \\, \\, \\Big\\| \\left[ \\tau_s^{(\\Lambda)} \\left( W(z \\cdot \\delta_X) \\right) , W(g) \\right] \\Big\\| \\, |\\mu_X|(dz) \\, ,\n\\end{equation}\nthe analogue of (\\ref{eq:norm2}), for $00$ and take $m \\leq n$. Iteratively applying Proposition~\\ref{prop:perdyn}, we have that\n\\begin{equation} \\label{eq:dyn3}\n\\tau_t^{(\\Lambda_n)}(W(f)) = \\tau_t^{(\\Lambda_m)}(W(f)) + i \\int_0^t \\tau_s^{(\\Lambda_n)} \\left( \\left[ P^{\\Lambda_n \\setminus \\Lambda_m}, \\tau_{t-s}^{(\\Lambda_m)}(W(f)) \\right] \\right) \\, ds \\, ,\n\\end{equation}\nfor all $-T \\leq t \\leq T$. The bound\n\\begin{eqnarray}\n&&\\left\\| \\left[ P^{\\Lambda_n \\setminus \\Lambda_m}, \\tau_{t-s}^{(\\Lambda_m)}(W(f)) \\right] \\right\\|\\\\\n & \\leq & \\sum_{\\stackrel{X \\subset \\Lambda_n:}{ X \\cap \\Lambda_n \\setminus \\Lambda_m \\neq \\emptyset}} \\int_{\\mathbb{C}^X}\n\\left\\| \\left[ W(z \\cdot \\delta_X) , \\tau_{t-s}^{(\\Lambda_m)}(W(f)) \\right] \\right\\| \\, | \\mu_X| ( dz) \\nonumber \\\\\n& \\leq & c_a e^{(v_a +c_a \\kappa_a C_a^2)(t-s)} \\sum_{x \\in \\Gamma} |f(x)| \\sum_{\\stackrel{X \\subset \\Lambda_n:}{ X \\cap \\Lambda_n \\setminus \\Lambda_m \\neq \\emptyset}} \\sum_{y \\in X} F_a \\left( d(x,y) \\right) \\int_{\\mathbb{C}^X} |z_y| \\, | \\mu_X| (dz) \\nonumber \\\\ & \\leq & c_a e^{(v_a +c_a \\kappa_a C_a^2)(t-s)} \\sum_{x \\in \\Gamma} |f(x)| \\sum_{y \\in \\Lambda_n \\setminus \\Lambda_m } F_a \\left( d(x,y) \\right) \\sum_{\\stackrel{X \\subset \\Gamma:}{ y \\in X}} \\int_{\\mathbb{C}^X} |z_y| \\, | \\mu_X| (dz) \\nonumber \\\\ & \\leq & M c_a e^{(v_a +c_a \\kappa_a C_a^2)(t-s)} \\sum_{x \\in \\Gamma} |f(x)| \\sum_{y \\in \\Lambda_n \\setminus \\Lambda_m } F_a \\left( d(x,y) \\right) \\nonumber\n\\end{eqnarray}\nfollows readily from Theorem~\\ref{thm:ahlrbms} and assumption (\\ref{eq:1mom}).\nFor $f \\in \\ell^1( \\Gamma)$ and fixed $t$, the upper estimate above goes to zero as\n$n,m \\to \\infty$. In fact, the convergence is uniform for $t \\in [-T,T]$. 
\nThis proves (\\ref{eq:lim}).\n\nBy an $\\epsilon\/3$ argument, similar to what is done at the end of Section~\\ref{sec:bdints}, weak continuity follows since we know it holds for the \nfinite volume dynamics.\nThis completes the proof of Theorem~\\ref{thm:exist}.\n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{KG-agnostic Entity Linkers}\n\\label{sec:appendix}\n\nAGDISTIS~\\cite{DBLP:conf\/ecai\/UsbeckNRGCAB14} is an EL approach expecting already marked entity mentions. It expects a KG dump available in the Turtle format~\\cite{beckett2014rdf}. For candidate generation, first, an index is created which contains all available entities and their labels. They are extracted from the available Turtle dump. The input entity mention is first normalized by reducing plural and genitive forms and removing common affixes. Furthermore, if an entity mention consists of a substring of a preceding entity mention, the succeeding one is directly mapped to the preceding one. Additionally, the space of possible candidates can be limited by configuration. Usually, the candidate space is reduced to organizations, persons and locations. The candidates are then searched for over the index by comparing the reduced entity mention with the labels in the index using trigram similarity. No candidates are included, which contain time information inside the label.\nAfter gathering all candidates of all entity mentions in the utterance, the candidates are ranked by building a temporary graph. Starting with the candidates as the initial nodes, the graph is expanded breadth-first by adding the adjacent nodes and the edges in-between. It is done to some previously set depth. This results in a partly connected graph containing all candidates. \nThen the HITS-algorithm~\\cite{Kleinberg1999} is run and the most authoritative candidate nodes are chosen per entity mention. Thus, the approach is performing a global entity coherence optimization. \nThe approach uses label and alias information for building the index. Type information can be used to restrict the candidate space and the KG structure is utilized during the candidate ranking. \n\n\\paragraph{MAG.} MAG~\\cite{Moussallem2017} is a multilingual extension of AGDISTIS. Again, no ER is performed. The same label index as used in AGDISTIS is employed. Besides that, the following additional indices were created: \n\\begin{itemize}\n \\item A person index, containing the person names and the variations in different languages\n \\item A rare references index containing textual descriptions of entities\n \\item An acronym index based on the commercial STANDS4~\\footnote{\\label{footnote1}\\url{http:\/\/www.abbreviations.com\/}} data\n \\item A context index containing semantic embeddings of Concise Bounded Description~\\footnote{\\url{https:\/\/www.w3.org\/Submission\/CBD\/}}\n\\end{itemize} \nDuring candidate generation, it is first checked if the entity mention corresponds to an acronym. If it is one, no further preprocessing is done. If not, the entity mention is normalized by removing special characters, changing the casing and splitting camel-cased words. After preprocessing, the candidates are searched by first checking for exact matches, then searching via trigram similarity. If this still did not produce any candidates, the entity mention is stemmed and the search is repeated. 
If a mention is an acronym, the candidate list is expanded with the corresponding entities.\nThen, more candidates are searched by taking an entity mention and the set of all entity mentions in the utterance. Those are used to build a tf-idf search query over the context index. All returned candidates are then first filtered by trigram similarity between entity mention and candidate. A second filtering is applied by counting the number of direct connections between the remaining candidates and the candidates of the other entity mentions. The candidates with too few links are pruned away. All the candidates of the entity mention are then sorted by their popularity (calculated via PageRank~\\cite{page1999pagerank}) and the top 100 are returned.\nThen, the entities are disambiguated in nearly the same way as done by AGDISTIS. The only difference is the option to use PageRank instead of HITS to rank the final candidates. \nIn addition to the properties already used by AGDISTIS, item descriptions are incorporated via the context index.\n\nDoSeR~\\cite{DBLP:conf\/esws\/ZwicklbauerSG16} also expects already marked entity mentions. The linker focuses on linking to multiple knowledge graphs simultaneously. It supports RDF-based KGs and entity-annotated document (EAD) KGs (e.g., Wikipedia). The KGs are split into core and optional KGs. Core KGs contain the entities to which one wants to link. Optional KGs complement the core KGs with additional data.\nFirst, an index is created which includes the entities of all core KGs. In the index, the labels or surface forms, a semantic embedding, and each entity's prior popularity are stored. \nThe semantic embeddings are computed using Word2Vec. For EAD-KGs, the different documents are taken and all words that do not point to entities are removed. All remaining words are replaced with the corresponding entity identifier. These sequences are then used to train the embeddings. For RDF-KGs, a Random Walk is performed over the graph and the resulting sequences are used to train the embeddings. The succeeding node is chosen with a probability corresponding to the reciprocal of the number of edges it has. The same probability is used to sometimes jump to another arbitrary node in the graph. \nThe prior probability is calculated by either using the number of incoming\/outgoing edges in the RDF-KG or the number of annotations that point to the entity in the EAD KG. If type information is available, the entity space can be limited here too. \nFirst, candidates are generated by searching for exact matches and then the AGDISTIS candidate generation is used to find more candidates. \nThe candidates are disambiguated similarly to AGDISTIS and MAG. First, a graph is built, though not a complete graph but a $K$-partite graph, where $K$ is the number of all entity mentions. Edges exist only between candidates of different entity mentions. Using the complete graph resulted in a loss of performance. After the graph is created, PageRank is run to score the entities coherently. The edge weights correspond to the (normalized) cosine similarity of the semantic embeddings of the two connected entities. Additionally, at any point during the PageRank computation, it is possible to jump to an arbitrary node with a certain probability. This probability depends on the prior popularity of the entity.\nDoSeR uses label information, the knowledge graph structure and type information (if desired). 
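The graph-based ranking step shared by AGDISTIS, MAG and DoSeR can be made concrete with a short sketch. The snippet below is an illustration only and not one of the original implementations: it assumes that the candidate sets, entity embeddings and prior popularities have already been computed and are available as plain Python dictionaries, builds the $K$-partite candidate graph described above, and scores the candidates with networkx's PageRank; replacing \\texttt{nx.pagerank} by \\texttt{nx.hits} would yield the authority-based ranking used by AGDISTIS and MAG.\n\\begin{verbatim}\nimport itertools\nimport networkx as nx\nimport numpy as np\n\ndef disambiguate(candidates, embedding, prior):\n    # candidates: {mention: [entity, ...]}, embedding: {entity: vector},\n    # prior: {entity: positive popularity score}\n    G = nx.Graph()\n    for ents in candidates.values():\n        G.add_nodes_from(ents)\n    # K-partite structure: edges only between candidates of different mentions,\n    # weighted by the cosine similarity of the entity embeddings\n    for e1s, e2s in itertools.combinations(candidates.values(), 2):\n        for e1, e2 in itertools.product(e1s, e2s):\n            v1, v2 = embedding[e1], embedding[e2]\n            sim = float(np.dot(v1, v2) \/ (np.linalg.norm(v1) * np.linalg.norm(v2)))\n            G.add_edge(e1, e2, weight=max(sim, 0.0))\n    # teleport probability proportional to the prior popularity of each entity\n    total = sum(prior[e] for e in G.nodes)\n    personalization = {e: prior[e] \/ total for e in G.nodes}\n    scores = nx.pagerank(G, alpha=0.85, personalization=personalization, weight='weight')\n    return {m: max(ents, key=scores.get) for m, ents in candidates.items()}\n\\end{verbatim}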
\n\\begin{table*}[tbh!]\n \\centering\n \\begin{tabularx}{\\textwidth}{p{3cm}YYYYYY}\n \\toprule\n \\textbf{Approach}&\\textbf{Labels\/\\allowbreak Aliases} & \\textbf{Descriptions}& \\textbf{Knowledge graph structure} & \\textbf{Hyper-relational structure} & \\textbf{Types} & \\textbf{Additional Information} \\\\ \\midrule\n AGDISTIS~\\cite{DBLP:conf\/ecai\/UsbeckNRGCAB14} & \\cmark & \\xmark & \\cmark & \\xmark & \\cmark & \\\\\n MAG~\\cite{Moussallem2017} & \\cmark & \\cmark & \\cmark & \\xmark & \\cmark & STANDS4~\\ref{footnote1} \\\\\n DoSeR~\\cite{DBLP:conf\/esws\/ZwicklbauerSG16} & \\cmark & \\xmark & \\cmark & \\xmark & \\cmark & \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Comparison between the utilized Wikidata characteristics of each KG-agnostic approach.}\n \\label{tab:comparison_KG_agnostic_approaches_wikidata}\n\\end{table*}\n\n\n\n\\section{EL-only results and discussion}\n\n\\begin{table*}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{cccP{3cm}c}\n \\toprule\n \n &\\rotatebox[origin=c]{-66}{Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190}}\n &\\rotatebox[origin=c]{-66}{LSH-ELMo model~\\cite{perkins2020separating}}\n &\\rotatebox[origin=c]{-66}{\\parbox{3cm}{NED using DL on Graphs~\\cite{DBLP:journals\/corr\/abs-1810-09164}~\\tnotex{tn:elonly1}}} &\\rotatebox[origin=c]{-66}{Botha et al.~\\cite{Botha2020}}\\\\\n \\midrule\n AIDA-CoNLL~\\cite{hoffart-etal-2011-robust}&0.9494~\\cite{DBLP:journals\/corr\/abs-2008-05190}~\\tnotex{tn:elonly6}~\\tnote{,}~\\tnotex{tn:elonly3}&0.73~\\cite{perkins2020separating}&-&-\\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131}&0.9261~\\cite{DBLP:journals\/corr\/abs-2008-05190}~\\tnotex{tn:elonly4}&-&-&-\\\\\n Wikidata-Disamb~\\cite{DBLP:journals\/corr\/abs-1810-09164}&0.9235~\\cite{DBLP:journals\/corr\/abs-2008-05190}~\\tnotex{tn:elonly5}&-&0.916~\\cite{DBLP:journals\/corr\/abs-1810-09164}&-\\\\\n Mewsli-9~\\cite{Botha2020} & - & - & - & 0.91~\\cite{Botha2020}~\\tnotex{tn:elonly7} \\\\\n \n \\bottomrule\n \\end{tabular}\n \\begin{tablenotes}\n \\setlength{\\columnsep}{0.8cm}\n \\setlength{\\multicolsep}{0cm}\n \\begin{multicols}{3}\n \\small\n \\item[1] \\label{tn:elonly1} Model with best result\n \\item[2] \\label{tn:elonly6} Accuracy instead of $F_1$\n \\item[3] \\label{tn:elonly3} DCA-SL used\n \\item[4] \\label{tn:elonly4} XLNet used\n \\item[5] \\label{tn:elonly5} Roberta used\n \\item[6] \\label{tn:elonly7} Recall instead of $F_1$\n \\end{multicols}\n \\end{tablenotes}\n \\end{threeparttable}\n \\caption{Results: EL-only.}\n \\label{tab:dataset_results_el_only}\n\\end{table*}\n\nThe results for EL-only approaches can be found in Table~\\ref{tab:dataset_results_el_only}.\nAIDA-CoNLL results are available for three of the four approaches, but the results for one is the accuracy instead of the $F_1$-measures. \nThe available labels for each item and property make language-model-based approaches possible that perform quite well~\\cite{DBLP:journals\/corr\/abs-2008-05190}. 
No approaches are available to compare to the one by Botha et al.~\\cite{Botha2020}, but the result demonstrates the promising performance of multilingual EL with Wikidata as the target KG.\n\n\n\\begin{table*}[]\n \\centering\n \\begin{tabular}{ll}\n \\toprule\n Dataset & Links \\\\\n \\midrule\n T-REx~\\cite{elsahar2019t} & \\url{https:\/\/hadyelsahar.github.io\/t-rex} \\\\ \n NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19} & not found \\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} & \\url{https:\/\/github.com\/wetneb\/opentapioca\/blob\/master\/data} \\\\\n LC-QuAD 2.0~\\cite{dubey2019lc} & \\url{https:\/\/github.com\/AskNowQA\/LC-QuAD2.0\/tree\/master\/dataset}\\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet} & \\url{https:\/\/github.com\/diffbot\/knowledge-net\/tree\/master\/dataset}\n \\\\\n KORE50DYWC~\\cite{noullet2020kore} & \\url{https:\/\/www.aifb.kit.edu\/web\/KORE_5\n Kensho Derived Wikimedia Dataset~\\cite{KenshoRD} & \\url{https:\/\/www.kaggle.com\/kenshoresearch\/kensho-derived-wikimedia-data} \\\\\n CLEF HIPE 2020~\\cite{Ehrmann2020} & \\url{https:\/\/github.com\/impresso\/CLEF-HIPE-2020\/tree\/master\/data} \\\\\n Mewsli-9~\\cite{Botha2020} & \\url{https:\/\/metatext.io\/datasets\/mewsli-9-} \\\\\n TweekiData~\\cite{tweeki:wnut20} & \\url{https:\/\/github.com\/ucinlp\/tweeki\/tree\/main\/data\/Tweeki_data} \\\\\n TweekiGold~\\cite{tweeki:wnut20}& \\url{https:\/\/github.com\/ucinlp\/tweeki\/tree\/main\/data\/Tweeki_gold}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Links to datasets}\n \\label{tab:dataset_links}\n\\end{table*}\n\n\\end{appendix}\n\\section{Discussion} \\label{sec:discussion}\n\\subsection{Current Approaches, Datasets and their Drawbacks}\n\\paragraph{Approaches.}\nThe number of algorithms using Wikidata is small; the number of algorithms using Wikidata solely is even smaller. Most algorithms employ labels and alias information contained in Wikidata. Some deep learning-based algorithms leverage the underlying graph structure, but the inclusion of that information is often superficial. The same information is also available in other KGs. Additional statement-specific information like qualifiers is used by only one algorithm (OpenTapioca), and even then, it only interprets qualifiers as extra edges to the item. Thus, there is no inclusion of the actual structure of a hyper-relation. Information like the descriptions of items that are providing valuable context information is also rarely used. Wikidata includes type information, but almost none of the existing algorithms utilize it to do more than to filter out entities that are not desired to link in general. An exception is perhaps Tweeki, though it only uses types during ER.\n\nIt seems that most of the authors developed approaches for Wikidata due to it being popular and up-to-date while not specifically utilizing its structure. With small adjustments, many would also work on any other KG. \nBesides the less-dedicated utilization of specific characteristics of Wikidata, it is also notable that there is no clear focus on one of the essential characteristics of Wikidata, continual growth. Many approaches use static graph embeddings, which need to be retrained if the KG changes. EL algorithms working on Wikidata, which are not usable on future versions, seem unintuitive. But there also exist some approaches which can handle change. They often rely on more extensive textual information, which is again challenging due to the limited amount of such data in Wikidata. 
Wikidata descriptions do exist, but only short paragraphs are provided, which are in general insufficient to train a language model. To compensate, Wikipedia is included, which provides this textual information. It seems that Wikidata as the target KG, with its language-agnostic identifiers, and the easily connectable Wikipedia, with its multilingual textual information, are a great pair. Surprisingly, however, most methods use either Wikipedia or Wikidata, but not both. A combination is rare but seems very fruitful, as the performance of the multilingual EL approach by Botha et al.~\\cite{Botha2020} shows, though even this approach still uses Wikidata only sparsely. \n\nNone of the investigated approaches' authors examined the performance across different versions of Wikidata. Since continuous evolution is a central characteristic of Wikidata, a temporal analysis would be reasonable. \nAs we are confronted with a fast-growing ocean of knowledge, taking into account the change of Wikidata and hence developing approaches that are robust against that change will undoubtedly be useful for numerous applications and their users. \n\nThis survey aimed to identify the extent to which the current state of the art in Wikidata EL is utilizing the characteristics of Wikidata. As only a few approaches use more information than is available in other established KGs, there is still much potential for future research. \n\n\\paragraph{Datasets.} Only a limited number of datasets created entirely with Wikidata in mind exist. Many datasets used are still only mapped versions of datasets created for other knowledge graphs. Multilingualism is present only insofar as some datasets contain documents in different languages. However, only different documents for different languages are available. Having the same documents in multiple languages would be more helpful for an evaluation of multilingual Entity Linkers. The fact that Wikidata is ever-changing is also not genuinely considered in any dataset. Always providing the dump version on which the dataset was created is advisable. A big advantage for the community is that datasets from very different domains like news, forums, research and tweets exist. The utterances can also vary from shorter texts with only a few entities to large documents with many entities.\nThe difficulty of the datasets differs significantly in the ambiguity of the entity mentions. \nThe datasets also differ in quality: some were created automatically, others were annotated manually by experts.\nThere are no unanimously agreed-upon datasets used for Wikidata EL.\nOf course, a single dataset cannot suffice, as different domains and text types make different approaches, and hence different datasets, necessary.\n\n\\subsection{Future Research Avenues}\n\\label{subsec:future}\n\nIn general, Wikidata EL could be improved by including the following aspects:\n\n\\paragraph{Hyper-relational statements.}\nThe qualifier and rank information of Wikidata could be suitable for EL on time-sensitive utterances~\\cite{DBLP:conf\/acl\/AgarwalSCHW18}. The problem revolves around utterances that talk about entities from different time points and spans, for which the referred entity can differ significantly.\nThe usefulness of other characteristics of Wikidata, e.g., references, may be limited, but they could make EL more challenging due to the inclusion of contradictory information. 
Therefore, research into the consequences and solutions of conflicting information would be advisable.\nAnother possibility would be to directly include the qualifier information via the KG embeddings. For example, the StarE~\\cite{Galkin2020} embedding includes qualifiers directly during training. It performs superior over regular embeddings on the task of link prediction if enough statements have qualifiers assigned. This is promising but whether this directly applies to EL approaches, which use such embeddings, has to be evaluated.\n\n\\paragraph{More extensive type information.}\nWhile type information is incorporated by some linkers, it is generally done to simply limit the candidate space to the three main types: location, organization and person. But Raiman and Raiman~\\cite{DBLP:conf\/aaai\/RaimanR18} showed that a more extensive system of types proves very effective on the task of EL. If an adequate typing system is chosen and the correct type of an entity mention is available, an entity linker can achieve a near-perfect performance. Especially as Wikidata has a much more fine-grained and noisy type system than other KGs, evaluating the performance of entity linkers, which incorporate types, is of interest. \nWhile most approaches use types directly to limit the candidate space, incorporating them indirectly via type-aware ~\\cite{Zhang2020} or hierarchy-sensitive embeddings~\\cite{Chami2020, Nayyeri2021, DBLP:conf\/nips\/BalazevicAH19} might also prove useful for EL.\nBut note that the incorporation of type information heavily depends on the performance of the type classifier, and the difficulty of the type classification task again depends on the type system. Nevertheless, an improved type classification would directly benefit type-utilizing entity linkers. \n\n\\paragraph{Inductive or efficiently trainable knowledge graph embeddings.}\nTo reiterate, due to the fast rate of change of Wikidata, approaches are necessary, which are more robust to such a dynamic KG. Continuously retraining transductive embeddings is intractable, so more sophisticated methods like inductive or efficiently retrievable graph embeddings are a necessity~\\cite{Wang2019, Wang2019a, Teru2020,Baek2020,Albooyeh2020,Hamaguchi2017,Wu2019a, galkin2021nodepiece, Daruna2021}.\nFor example, the embedding by Albooyeh et al.~\\cite{Albooyeh2020} can be employed, which can handle out-of-sample entities. These are entities, which were not available at training time, but are connected to entities, which were existing.\nTo go even further, NodePiece~\\cite{galkin2021nodepiece}, the KG-embedding counterpart of sub-word embeddings like BERT, works by relying on only a small subset of anchor nodes and all relations in the KG. While it uses a fraction of all nodes, it still is able to achieve performance competitive with transductive embeddings on the task of link prediction. By being independent of most nodes in a KG, one can include new entities (in the form of nodes) without having to retrain. As an alternative, standard continual learning approaches could be employed to learn new data while being robust against catastrophic forgetting. An examination of the performance of popular techniques in the context of KG embeddings can be found in the paper by Daruna et al.~\\cite{Daruna2021}.\n\n\\paragraph{Item label and description information in multiple languages for multilingual EL.}\nMultilingual or cross-lingual EL is already tackled with Wikidata but currently mainly by depending on Wikipedia. 
Using the available multilingual label\/description information in a structured form together with the rich textual information in Wikipedia could move the field forward. The approach by Botha et al.~\\cite{Botha2020}, which could be seen as an extension of BLINK~\\cite{Wu2019}, performs very well on the task of cross- and multilingual EL. \nFor example, the approach by Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190}, which fully relies on label information, could be extended in a similar way as BLINK was extended. Instead of only using labels (of items and properties) in the English language, training the model directly in multiple languages could prove effective. Additionally, multilingual description information might be used too. We are convinced that also investigations into the linking of long-tail entities are needed.\n\nIt seems like there exist no commonly agreed-on Wikidata EL datasets, as shown by a large number of different datasets the approaches were tested on. Such datasets should try to represent the challenges of Wikidata like the time-variance, contradictory triple information, noisy labels, and multilingualism.\n\n\n\\section*{Acknowledgments}\nWe acknowledge the support of the EU project TAILOR (GA 952215), the Federal Ministry for Economic Affairs and Energy (BMWi) project SPEAKER (FKZ 01MK20011A), the German Federal Ministry of Education and Research (BMBF) projects and excellence clusters ML2R (FKZ 01 15 18038 A\/B\/C), MLwin (01S18050 D\/F), ScaDS.AI (01\/S18026A) as well as the Fraunhofer Zukunftsstiftung project JOSEPH. The authors also acknowledge the financial support by the Federal Ministry for Economic Affairs and Energy of Germany in the project CoyPu (project number 01MK21007G).\n\\section{Datasets}\n\\label{sec:datasets}\n\\subsection{Overview}\nThis section is concerned with analyzing the different datasets which are used for Wikidata EL. A comparison can be found in Table~\\ref{tab:datasets}. 
\n\\begin{table*}[ptb!]\n \\small\n \\centering\n \\rotatebox{-90}{\n \\begin{threeparttable}\n \\begin{tabularx}{0.95\\textheight}{p{4cm}p{3cm}XXp{2.8cm}P{1.5cm}p{3cm}}\n \\toprule\n \\textbf{Dataset} & \\textbf{Domain} & \\textbf{Year} & \\textbf{Annotation process} & \\textbf{Purpose} & \\textbf{Spans given} & \\textbf{Identifiers} \\\\ \\midrule\n T-REx~\\cite{elsahar2019t} & Wikipedia abstracts & 2015& automatic & Knowledge Base Population (KBP), Relation Extraction (RE), Natural Language Generation (NLG) &\\cmark& Wikidata \\\\ \n NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19} & News & 2018& manually & EL &\\cmark & Wikidata, DBpedia \\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} & Research articles& 2019 & manually&EL & \\cmark& Wikidata \\\\\n LC-QuAD 2.0~\\cite{dubey2019lc} & General complex questions (Wikidata) & 2019& semi-automatic &Question Answering (QA) &\\xmark& DBpedia, Wikidata \\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet} & Wikipedia abstracts, biographical texts & 2019& manually & KBP &\\cmark& Wikidata\n \\\\\n KORE50DYWC~\\cite{noullet2020kore} & News & 2019& manually & EL & \\cmark& Wikidata, DBpedia, YAGO, Crunchbase \\\\\n Kensho Derived Wikimedia Dataset~\\cite{KenshoRD} & Wikipedia & 2020& automatic & Natural Language Processing (NLP) &\\cmark & Wikidata, Wikipedia \\\\\n CLEF HIPE 2020~\\cite{Ehrmann2020} & Historical newspapers & 2020& manually & ER, EL & \\cmark &Wikidata \\\\\n Mewsli-9~\\cite{Botha2020} & News in multiple languages & 2020& automatic & Multilingual EL & \\cmark & Wikidata \\\\\n TweekiData~\\cite{tweeki:wnut20} & Tweets&2020& automatic&EL&\\cmark & Wikidata \\\\\n TweekiGold~\\cite{tweeki:wnut20}&Tweets&2020& manually&EL&\\cmark& Wikidata\\\\\n \\bottomrule\n \\end{tabularx}\n \\begin{tablenotes}\n \\setlength{\\columnsep}{0.8cm}\n \\setlength{\\multicolsep}{0cm}\n \\begin{multicols}{2}\n \\item[1] \\label{datasets:tnote1} data from 2010\n \\item[2] \\label{datasets:tnote2} Original dataset on Wikipedia\n \\end{multicols}\n \\end{tablenotes}\n \\end{threeparttable}\n }\n \\caption{Comparison of used datasets.}\n \\label{tab:datasets}\n\\end{table*}\nThe majority of datasets on which existing Entity linkers were evaluated, were originally constructed for KGs different from Wikidata. Such a mapping can be problematic as some entities labeled for other KGs could be missing in Wikidata. Or some NIL entities that do not exist in other KGs could exist in Wikidata.\nEleven datasets~\\cite{DBLP:journals\/corr\/abs-1904-09131,DBLP:journals\/pvldb\/LinLXLC20,dubey2019lc, elsahar2019t, noullet2020kore, mesquita2019knowledgenet, KenshoRD, Ehrmann2020, Botha2020, tweeki:wnut20} were found for which Wikidata identifiers were available from the start. \nIn the following the datasets are separated by their domain. A list of all examined datasets - including links where available - can be found in the Appendix in \\Cref{tab:dataset_links}.\n\n\\subsubsection{Encyclopedic datasets}\nLC-QuAD 2.0~\\cite{dubey2019lc} is a semi-automatically created dataset for Questions Answering providing complex natural language questions. For each question, Wikidata and DBpedia identifiers are provided. The questions are generated from subgraphs of the Wikidata KG and then manually checked. The dataset does not provide annotated mentions.\n\nT-REx~\\cite{elsahar2019t} was constructed automatically over Wiki\\-pedia abstracts. Its main purpose is Knowledge Base Population (KBP). 
According to Mulang et al.~\\cite{mulang2020encoding}, this dataset describes the challenges of Wikidata, at least in the form of long, noisy labels, the best. \n\nThe Kensho Derived Wikimedia Dataset~\\cite{KenshoRD} is an automatically created condensed subset of Wikimedia data. It consists of three levels: Wikipedia text, annotations with Wikipedia pages and links to Wikidata items. Thus, mentions in Wikipedia articles are annotated with Wikidata items. However, as some Wikidata items do not have a corresponding Wikipedia page, the annotation is not exhaustive. It was constructed for NLP in general. \n\n\\subsubsection{Research-focused datasets}\nISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} is a research-focused dataset containing 1000 author affiliation strings. It was manually annotated to evaluate the OpenTapioca~\\cite{DBLP:journals\/corr\/abs-1904-09131} entity linker. \n\n\\subsubsection{Biographical datasets}\nKnowledgeNet~\\cite{mesquita2019knowledgenet} is a Knowledge Base Population dataset with 9073 manually annotated sentences. The text was extracted from biographical documents from the web or Wikipedia articles.\n\n\\subsubsection{News datasets}\nNYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19} consists of 30 news documents that were manually annotated on Wikidata and DBpedia. It was constructed for KBPearl~\\cite{DBLP:journals\/pvldb\/LinLXLC20}, so its main focus is also KBP which is a downstream task of EL.\n\nOne dataset, KORE50DYWC~\\cite{noullet2020kore}, was found, which was not used by any of the approach papers. \nIt is an annotated EL dataset based on the KORE50 dataset, a manually annotated subset of the AIDA-CoNLL corpus. The original KORE50 dataset focused on highly ambiguous sentences. All sentences were reannotated with DBpedia, Yago, Wikidata and Crunchbase entities.\n\n\n\n\nCLEF HIPE 2020~\\cite{Ehrmann2020} is a dataset based on historical newspapers in English, French and German. Only the English dataset will be analyzed in the following. This dataset is of great difficulty due to many errors in the text, which originate from the OCR method used to parse the scanned newspapers. For the English language, only a development and test set exist. In the other two languages, a training set is also available. It was manually annotated.\n\nMewsli-9~\\cite{Botha2020} is a multilingual dataset automatically constructed from WikiNews. It includes nine different languages. A high percentage of entity mentions in the dataset do not have corresponding English Wikipedia pages, and thus, cross-lingual linking is necessary. Again, only the English part is included during analysis.\n\n\\subsubsection{Twitter datasets}\nTweekiData and TweekiGold~\\cite{tweeki:wnut20} are an automatically annotated corpus and a manually annotated dataset for EL over tweets. TweekiData was created by using other existing tweet-based datasets and linking them to Wikidata data via the Tweeki EL. TweekiGold was created by an expert, manually annotating tweets from another dataset with Wikidata identifiers and Wikipedia page-titles. 
\n\n\\subsection{Analysis} \n\\begin{table*}[ptb!]\n \\centering\n \\rotatebox{-90}{\n \\begin{threeparttable}\n \\begin{tabularx}{0.95\\textheight}{p{5cm}p{2cm}p{2cm}XXXX}\n \\toprule\n \\textbf{Dataset}& \\# \\textbf{documents} & \\# \\textbf{mentions} & \\textbf{NIL entities} & \\textbf{Wikidata entities} & \\textbf{Unique Wikidata entities} & \\textbf{Mentions per document} \\\\ \\midrule\n T-REx~\\cite{elsahar2019t} & \\num{4650000} & \\num{51297484}&0\\% &100\\% & 9.1\\% & 11.03\\\\\n NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19}~\\tnotex{tn:tabced1} & \\num{30}\n &- &-& - & - & -\\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} (train) & \\num{750}& 2073& 0\\%&100\\%& 53.7\\%& 2.76\\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} (test) & 250 & 670 & 0\\%&100\\%& 65.8\\%&2.68\\\\\n LC-QuAD 2.0~\\cite{dubey2019lc} & \\num{6046} & \\num{44529} & 0\\%& 100\\%& 51.2\\% & 1.47\\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet} (train) & \\num{3977} & \\num{13039}& 0\\%&100\\%& 30\\% &3.28\\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet} (test)~\\tnotex{tn:tabced2} & \\num{1014} & -& -&-& - &-\\\\\n KORE50DYWC~\\cite{noullet2020kore} & \\num{50} & \\num{307}& 0\\%&100\\% & 72.0\\% & 6.14 \\\\\n Kensho Derived Wikimedia Dataset~\\cite{KenshoRD} & \\num{14255258} &\\num{121835453}&0\\% &100\\%&3.7\\%& 8.55\\\\\n CLEF HIPE 2020 (en, dev)~\\cite{Ehrmann2020} & 80& 470& 46.4\\%&53.6\\%& 31.9\\%& 5.88\\\\\n CLEF HIPE 2020 (en, test)~\\cite{Ehrmann2020} & 46&134& 33.6\\%&66.4\\%& 42.5\\% & 2.91\\\\\n Mewsli-9 (en)~\\cite{Botha2020} & \\num{12679} \n &\\num{80242} &0\\% & 100\\% & 48.2\\% &6.33\\\\\n TweekiData~\\cite{tweeki:wnut20}&\\num{5000000}&\\num{5038870}& 61.2\\%& 38.8\\%&5.4\\% &1.01\\\\\n TweekiGold~\\cite{tweeki:wnut20}&500&\\num{958}&11.1\\%&88.9\\%&66.6\\%&1.92\\\\\n \\bottomrule\n \\end{tabularx}\n \\begin{tablenotes}\n \\item[1] \\label{tn:tabced1} Information gathered from accompanying paper as dataset was not available \n \\item[2] \\label{tn:tabced2} Available dataset did not contain mention\/entity information\n \\end{tablenotes}\n \\end{threeparttable}\n }\n \\caption{Comparison of the datasets with focus on the number of documents and Wikidata entities.}\n \\label{tab:comp_entities_datasets}\n\\end{table*}\nTable~\\ref{tab:comp_entities_datasets} shows the number of documents, the number of mentions, NIL entities and unique entities, and the mentioned ratio. What classifies as a document in a dataset depends on the dataset itself. For example, for T-REx, a document is a whole paragraph of a Wikipedia article, while for LC-QuAD 2.0, a document is just a single question. Due to this, the average number of entities in a document also varies, e.g., LC-QuAD 2.0 with $1.47$ entities per document and T-REx with $11.03$. If a dataset was not available, information from the original paper was included. If dataset splits were available, the statistics are also shown separately. The majority of datasets do not contain NIL entities. For the Tweeki datasets, it is not mentioned which Wikidata dump was used to annotate. For a dataset that contains NIL entities, this is problematic. 
On the other hand, the dump is specified for the CLEF HIPE 2020 dataset, making it possible to work on the Wikidata version with the correct entities missing.\n\n\\begin{highlightbox}{\\hyperlink{rq1}{Research Question 1}}{Which Wikidata EL datasets exist, how widely used are they and how are they constructed?}\nThe preceding paragraphs answer the following two aspects of the first research question. First, we provided descriptions and an overview of all datasets created for Wikidata, including statistics on their structure. This answers which datasets exist. Furthermore, for each dataset it is stated how they were constructed, whether automatically, semi-automatically or manually. Thus information on the quality and construction process of the datasets is given.\nTo answer the last part of the question, how widely are the datasets in use, Table~\\ref{tab:dataset_usage} shows how many times each Wikidata dataset was used in Wikidata EL approaches during training or evaluation. As one can see, there exists no single dataset used in all research of EL. This is understandable as different datasets focus on different document types and domains as shown in Table~\\ref{tab:datasets}, what again results in different approaches. \n\\end{highlightbox}\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{lc}\n \\toprule\n \\textbf{Dataset} & \\textbf{Number of usages in Wikidata EL approach papers} \\\\\n \\midrule\n T-REx~\\cite{elsahar2019t} & 2~\\cite{mulang2020encoding, DBLP:journals\/pvldb\/LinLXLC20} \\\\\n NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19} & 1~\\cite{DBLP:journals\/pvldb\/LinLXLC20}\n \\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} & 2~\\cite{DBLP:journals\/corr\/abs-2008-05190, DBLP:journals\/corr\/abs-1904-09131} \\\\\n LC-QuAD 2.0~\\cite{dubey2019lc} & 2~\\cite{DBLP:journals\/corr\/abs-1912-11270, Banerjee2020} \\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet} & 1~\\cite{DBLP:journals\/pvldb\/LinLXLC20}\n \\\\\n KORE50DYWC~\\cite{noullet2020kore} & 0 \\\\\n Kensho Derived Wikimedia Dataset~\\cite{KenshoRD} & 1~\\cite{perkins2020separating} \\\\\n CLEF HIPE 2020~\\cite{Ehrmann2020} & 3~\\cite{borosrobust, Labusch2020, Provatorova2020} \\\\\n Mewsli-9~\\cite{Botha2020} & 1~\\cite{Botha2020} \\\\\n TweekiData~\\cite{tweeki:wnut20} & 1~\\cite{tweeki:wnut20} \\\\\n TweekiGold~\\cite{tweeki:wnut20}& 1~\\cite{tweeki:wnut20}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Usage of datasets for training or evaluation.}\n \\label{tab:dataset_usage}\n \\end{threeparttable}\n\\end{table*}\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabularx}{\\textwidth}{p{5cm}XXXX}\n \\toprule\n \\textbf{Dataset} & \\textbf{Average number of matches} & \\textbf{No match} & \\textbf{Exact match} & \\textbf{More than one match} \\\\ \\midrule\n T-REx & 4.79 & 31.36\\% & 32.98\\% & 35.65\\% \\\\\n ISTEX-1000 (train) & 23.23 & 8.06\\%& 26.34\\% & 65.61\\%\\\\\n ISTEX-1000 (test) & 25.85 & 10.30\\% & 23.88\\% & 65.82\\%\\\\\n Knowledge Net (train) & 21.90 & 10.41\\% & 22.29\\% & 67.3\\% \\\\\n KORE50DYWC & 28.31 & 3.93\\% & 7.49\\% & 88.60\\% \\\\\n Kensho Derived Wikimedia Dataset & 8.16 & 35.18\\% & 30.94\\% & 33.88\\% \\\\\n CLEF HIPE 2020 (en, dev) & 24.02 & 35.71\\% & 11.51\\% & 52.78\\% \\\\\n CLEF HIPE 2020 (en, test) & 17.78 & 43.82\\% & 6.74\\% & 49.44\\% \\\\\n Mewsli-9 (en) &11.09&16.80\\%&34.90\\%&47.30\\% \\\\\n TweekiData &19.61&19.98\\%&12.01\\%&68.01\\% \\\\\n TweekiGold &16.02&7.41\\%&20.25\\%&72.34\\% \\\\\n 
\\bottomrule\n \\end{tabularx}\n \\caption{Ambiguity of mentions (the existence of a match does not imply a correct match); the NYT2018 dataset was not available and LC-QuAD 2.0 is not annotated.}\n \\label{tab:ambiguity_mentions}\n\\end{table*}\n\\begin{table}[htb!]\n \\centering\n \\begin{tabularx}{\\linewidth}{p{4cm}cc}\n \\toprule\n \\textbf{Dataset} & \\textbf{Acc.} & \\textbf{Acc. filtered} \\\\ \\midrule\n ISTEX-1000 (train) & 0.744 &0.716\\\\\n ISTEX-1000 (test) & 0.716&0.678\\\\\n Knowledge Net (train) & 0.371&0.285 \\\\\n KORE50DYWC & 0.225& 0.187\\\\\n CLEF HIPE 2020 (en, dev) & 0.333&0.287 \\\\\n CLEF HIPE 2020 (en, test) & 0.258& 0.241\\\\\n TweekiGold & 0.565 & 0.520\\\\\n Mewsli-9 (en) & 0.602&0.490\\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{EL accuracy; the Kensho Derived Wikimedia Dataset, T-REx and TweekiData are not included due to their size, \\textbf{Acc. filtered} has all exact matches removed, the NYT2018 dataset was not available and LC-QuAD 2.0 is not annotated.}\n \\label{tab:es}\n\\end{table}\nThe difficulty of the different datasets was measured by the accuracy of a simple EL method (Table~\\ref{tab:es}) and the ambiguity of mentions (Table~\\ref{tab:ambiguity_mentions}). \nThe simple EL method searches for entity candidates via an ElasticSearch index containing all English labels and aliases. It then disambiguates by taking the candidate with the largest tf-idf-based BM25 similarity score and, among these, the lowest Q-identifier number, which serves as a rough popularity proxy (a minimal sketch of this baseline is given below). Nothing was done to handle inflections.\\footnote{All source code, plots and results can be found on \\url{https:\/\/github.com\/semantic-systems\/ELEnglishWD}}\nOnly accessible datasets were included. As one can see, the accuracy is positively correlated with the number of exact matches. The more ambiguous the underlying entity mentions are, the more inaccurate a simple similarity measure between label and mention becomes. In this case, more context information is necessary. The simple Entity Linker was only applied to datasets that were feasible to disambiguate in that way. T-REx and the Kensho Derived Wikimedia Dataset were too large in terms of the number of documents to run the linker on commodity hardware. \nAccording to the EL performance, ISTEX-1000 is the easiest dataset. Many of the ambiguous mentions refer to the most popular candidate, and many unique exact matches exist as well.\nT-REx, the Kensho Derived Wikimedia Dataset and the Mewsli-9 training dataset have the largest percentage of exact matches for labels.\nWhile TweekiGold is quite ambiguous, deciding on the most prominent entity appears to produce good EL results.\nThe most ambiguous dataset is KORE50DYWC. Additionally, just choosing the most popular entity among the exact matches results in worse performance than, for example, on TweekiGold, which is also very ambiguous. This is due to the fact that the original KORE50 dataset focuses on difficult, ambiguous entities that are not necessarily popular. The CLEF HIPE 2020 dataset also has a low EL accuracy, but this is not caused by ambiguity; rather, many mentions have no exact match at all, which is due to the noise introduced by OCR. \n\nThe second column of Table~\\ref{tab:es} specifies the accuracy with all unique exact matches removed. This is based on the intuition that exact matches without any competitors are usually correct.
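\n\nA minimal sketch of this baseline is given below. It only illustrates the retrieval and disambiguation rule described above and is not the released evaluation code; the tiny in-memory index and the \\texttt{search\\_candidates} helper merely stand in for the actual ElasticSearch lookup, and the Q-identifiers in the example are illustrative.\n\\begin{verbatim}\n# Sketch of the baseline linker: highest retrieval score wins,\n# ties are broken by the lowest Q-number as a rough popularity proxy.\nLABEL_INDEX = {\n    'Q90': ['Paris', 'City of Light'],   # illustrative entries\n    'Q1000000': ['Paris'],\n}\n\ndef search_candidates(mention):\n    # Stand-in for the BM25-based ElasticSearch query over labels and\n    # aliases: an exact label or alias match simply receives score 1.0.\n    return [(1.0, qid) for qid, labels in LABEL_INDEX.items()\n            if mention in labels]\n\ndef link(mention):\n    candidates = search_candidates(mention)\n    if not candidates:\n        return None  # counts as 'no match'\n    score, qid = min(candidates,\n                     key=lambda c: (-c[0], int(c[1].lstrip('Q'))))\n    return qid\n\nprint(link('Paris'))  # -> 'Q90'\n\\end{verbatim}\n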
\n\nAs seen in \\Cref{tab:datasets,tab:ambiguity_mentions,tab:comp_entities_datasets,tab:es}, there exists a very diverse set of datasets for EL on Wikidata, differing in domain, document type, ambiguity and difficulty. \n\\begin{highlightbox}{\\hyperlink{rq2}{Research Question 2}}{Do the characteristics of Wikidata matter for the design of EL datasets and if so, how?}\nExcept for the Mewsli-9~\\cite{Botha2020} and CLEF HIPE 2020~\\cite{Ehrmann2020} datasets, none of the others takes any specific characteristics of Wikidata into account. \nThe two exceptions focus on multilinguality and therefore rely directly on the language-agnostic nature of Wikidata.\nThe CLEF HIPE 2020 dataset is designed for Wikidata and has documents in English, French and German, but each language has a different corpus of documents. The same is the case for the Mewsli-9 dataset, although here documents in nine languages are available.\nIn the future, a dataset similar to VoxEL~\\cite{rosales2018voxel}, which is defined for Wikipedia, would be helpful. There, each utterance is translated into multiple languages, which eases the comparison of multilingual EL performance. Having the same corpus of documents in different languages would allow a better comparison of a method's performance across languages. Of course, such translations will never be perfectly comparable. \n\\end{highlightbox}\n\n\nBesides that, we identified one additional characteristic that might be of relevance to Wikidata EL datasets: the large rate of change of Wikidata.\nDue to this, it would be advisable for datasets to specify the Wikidata dump they were created on, similar to Petroni et al.~\\cite{petroni2020kilt}. Many of the existing datasets do that, yet not all. Entities that were available while the dataset was created could have been removed from current dumps. It is even more probable that former NIL entities now have a corresponding entity in an updated Wikidata dump. If an EL approach then detects such a mention as a NIL entity, it is evaluated as correct even though it is not, and vice versa. \nOf course, this is not a problem unique to Wikidata. Anytime the dump is not given for an EL dataset, similar uncertainties will occur. But due to the fast growth of Wikidata (see Figure~\\ref{fig:wikidata_items}), this problem is more pronounced.\n\nConcerning \\textit{emerging entities}, another variant of an EL dataset could be useful too. Two Wikidata dumps from different points in time could be used to label the utterances. Such a dataset would be valuable in the context of an EL approach supporting emerging entities (e.g., the approach by Hoffart et al.~\\cite{DBLP:conf\/www\/HoffartAW14}). With the true entities available, one could measure the quality of the created emerging entities. That is, multiple mentions assigned to the same emerging entity should also point to a single entity in the more recent KG.
\nAlso, requiring that a method performs well on both KG dumps would force EL approaches to be less reliant on a fixed graph structure.\n\n\n\n\n\\section{Introduction}\n\\subsection{Motivation}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth, trim={0 2cm 0 0},clip]{Images\/EntityLinkingExample.pdf}\n \\caption{Entity Linking - Mentions in the text are linked to the corresponding entities (color-coded) in a knowledge graph (here: Wikidata).}\n \\label{fig:ELExample}\n\\end{figure}\nEntity Linking (EL) is the task of connecting already marked mentions in an utterance to their corresponding entities in a knowledge graph (KG), see Figure~\\ref{fig:ELExample}. In the past, this task was tackled by using popular knowledge bases such as DBpedia~\\cite{DBLP:journals\/semweb\/LehmannIJJKMHMK15}, Freebase~\\cite{DBLP:conf\/sigmod\/BollackerEPST08} or Wikipedia. While those are still widely popular, another alternative, Wikidata~\\cite{DBLP:journals\/cacm\/VrandecicK14}, has appeared. \n\nWikidata follows a similar philosophy to Wikipedia, as it is curated by a continuously growing community, see Figure~\\ref{fig:editors}. However, Wikidata differs in the way knowledge is stored - information is stored in a structured format via a knowledge graph (KG). \nAn important characteristic of Wikidata is its inherent multilingualism. While Wikipedia articles exist in multiple languages, Wikidata information is stored using language-agnostic identifiers. This is advantageous for multilingual entity linking. \nDBpedia, Freebase and Yago4~\\cite{Tanon2020} are KGs too, but they can become outdated over time~\\cite{Ringler2017}. They rely on information extracted from other sources, in contrast to Wikidata, whose knowledge is inserted by a community. \nGiven an active community, this leads to Wikidata being updated frequently and in a timely manner - another characteristic.\nNote that DBpedia also stays up-to-date but has a delay of a month\\footnote{\\url{https:\/\/release-dashboard.dbpedia.org\/}} while Wikidata dumps are updated multiple times a month.\nThere are up-to-date services to access knowledge for both KGs, Wikidata and DBpedia (cf. DBpedia Live\\footnote{\\url{https:\/\/wiki.dbpedia.org\/online-access\/DBpediaLive}}), but full dumps are preferred as otherwise the FAIR replication~\\cite{wilkinson2016fair} of research results based on the KG is hindered. \nAnother Wikidata characteristic interesting for Entity Linkers is the presence of hyper-relations (see Figure~\\ref{fig:wikidata_subgraph} for an example graph), which might affect their abilities and performance. \n\nTherefore, it is of interest how existing approaches incorporate these characteristics.\nHowever, the existing literature lacks an exhaustive analysis that examines Entity Linking approaches in the context of Wikidata.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/active_editors.pdf}\n \\caption{Active editors in Wikidata~\\cite{Foundation2020}.}\n \\label{fig:editors}\n\\end{figure}\n\nUltimately, this survey strives to expose the benefits and associated challenges that arise from the use of Wikidata as the target KG for EL. \nAdditionally, the survey provides a concise overview of existing EL approaches, which is essential to (1) avoid duplicated research in the future and (2) enable a smoother entry into the field of Wikidata EL.
\nSimilarly, we structure the dataset landscape, which helps researchers find the correct dataset for their EL problem.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/papers_timeline.pdf}\n \\caption{Publishing years of included Wikidata EL papers (Table~\\ref{tab:comparison_approaches_wikidata}).}\n \\label{fig:papers}\n\\end{figure}\n\n\nThe focus of this survey lies on EL approaches that operate on already marked mentions of entities, as the task of Entity Recognition~(ER) is much less dependent on the characteristics of a KG. However, due to the recent uptake of research on EL on Wikidata, there is only a small number of EL-only publications. To broaden the survey's scope, we also consider methods that include the task of ER. We do not restrict ourselves regarding the type of models used by the entity linkers. \n\nThis survey limits itself to EL approaches that support English, the most frequent language, so that a better comparison of the approaches and datasets is possible. We also include approaches that support multiple languages. The existence of such approaches for Wikidata is not surprising, as an important characteristic of Wikidata is its support of a multitude of languages. \n\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/KGExample.pdf}\n \\caption{Wikidata subgraph - The dashed rectangle represents a claim with attached qualifiers.}\n \\label{fig:wikidata_subgraph}\n\\end{figure}\n\n\n\n\\subsection{Research Questions and Contributions}\nFirst, we want to develop an overview of datasets for EL on Wikidata. Our survey analyzes which datasets exist, whether they are designed with Wikidata in mind and, if so, in what way.\nThus, we pose the following two research questions:\n\\begin{quote}\n \\hypertarget{rq1}{\\textbf{RQ 1}: Which Wikidata EL datasets exist, how widely used are they and how are they constructed?} \n\\end{quote}\n\\begin{quote}\n \\hypertarget{rq2}{\\textbf{RQ 2}: Do the characteristics of Wikidata matter for the design of EL datasets and if so, how?}\n\\end{quote}\nTo answer those two research questions, an overview of the structure of Wikidata and the amount of information it contains (see Section~\\ref{sec:wikidata}) is given. All current Wikidata-specific EL datasets are gathered and analyzed with the research questions in mind. Furthermore, we discuss how the characteristics of Wikidata might affect the design of datasets (see Section~\\ref{sec:datasets}).\n\nEL approaches use many kinds of information like labels, popularity measures, graph structures, and more. This multitude of possible signals raises the question of how the characteristics of Wikidata are used by the current state of the art of EL on Wikidata. Thus, the third research question is:\n\\begin{quote}\n \\hypertarget{rq3}{\\textbf{RQ 3}: How do current Entity Linking approaches exploit the specific characteristics of Wikidata?}\n\\end{quote}\nIn particular, which Wikidata-specific characteristics contribute to the solution? Wikidata-specific characteristics are characteristics that are part of Wikidata but do not necessarily occur only in Wikidata.\n\nLastly, we identify which characteristics of Wikidata are of importance for EL but are insufficiently considered.
\nThis raises the last research question:\n\\begin{quote}\n \\hypertarget{rq4}{\\textbf{RQ 4}: Which Wikidata characteristics are unexploited by existing Entity Linking approaches?}\n\\end{quote}\nThe last two questions are answered by gathering all existing approaches working on Wikidata systematically, analyzing them, and discussing the potential and challenges of Wikidata for EL (see Section~\\ref{sec:approaches}).\n\nThis survey makes the following contributions:\n\\begin{itemize}\n \\item An overview of all currently available EL datasets focusing on Wikidata\n \\item An overview of all currently available EL approaches linking on Wikidata \n \\item An analysis of the approaches and datasets with a focus on Wikidata characteristics\n \\item A concise list of future research avenues\n\\end{itemize}\n\n\n\\section{Approaches}\n\\label{sec:approaches}\nCurrently, the number of methods intended to work explicitly on Wikidata is still relatively small, while the amount of the ones utilizing the characteristics of Wikidata is even smaller. \n\nThere exist several KG-agnostic EL approaches~\\cite{Moussallem2017,DBLP:conf\/ecai\/UsbeckNRGCAB14,DBLP:conf\/esws\/ZwicklbauerSG16}. However, they were omitted as their focus is being independent of the KG. While they are able to use Wikidata characteristics like labels or descriptions, there is no explicit usage of those. They are available in most other KGs. None of the found KG-agnostic EL papers even mentioned Wikidata. Though we recognize that KG-agnostic approaches are very useful in the case that a KG becomes obsolete and has to be replaced or a non-public KG needs to be used, such approaches are not included in this section. However, Table~\\ref{tab:comparison_KG_agnostic_approaches_wikidata} in the Appendix provides an overview of the used Wikidata characteristics of the three approaches. \n\nDeepType~\\cite{DBLP:conf\/aaai\/RaimanR18} is an entity linking approach relying on the fine-grained type system of Wikidata and the categories of Wikipedia. As type information is not evolving as fast as novel entities appear, it is relatively robust against a changing knowledge base. While it uses Wikidata, it is not specified in the paper whether it links to Wikipedia or Wikidata. Even the examination of the available code did not result in an answer as it seems that the entity linking component is missing. While DeepType showed that the inclusion of Wikidata type information is very beneficial in entity linking, we did not include it in this survey due to the aforementioned reasons.\nAs Wikidata contains many more types ($\\approx$\\num{2400000}) than other KGs, e.g., DBpedia ($\\approx$\\num{484000})~\\cite{Tanon2020}~\\footnote{if all rdf:type objects are considered, else $\\approx$ 768 (gathered via https:\/\/dbpedia.org\/sparql\/) if only considering types of the DBpedia ontology}), it seems to be more suitable for this fine-grained type classification. Yet, not only the number of types plays a role but also how many types are assigned per entity. In this regard, Wikipedia provides much more type information per entity than Wikidata~\\cite{Weikum2020}. That is probably the reason why both Wikipedia categories and Wikidata types are used together. As Wikidata is growing every minute, it may also be challenging to keep the type system up to date. \n\nTools without accompanying publications are not considered due to the lack of information about the approach and its performance. 
Hence, for instance, the Entity Linker in the \nDeepPavlov~\\cite{burtsev2018deeppavlov} \nframework is not included, although it targets Wikidata and appears to use label and description information successfully to link entities.\n\nWhile the approach by Zhou et al.~\\cite{DBLP:journals\/tacl\/ZhouRWCN20} does utilize Wikidata aliases in the candidate generation process, the target KB is Wikipedia and was therefore excluded.\n\nThe vast majority of methods is using machine learning to solve the EL task~\\cite{DBLP:journals\/corr\/abs-1810-09164,mulang2020encoding,DBLP:conf\/lrec\/KlangN20,DBLP:conf\/starsem\/SorokinG18, Banerjee2020, huangentity, perkins2020separating, DBLP:journals\/corr\/abs-2008-05190, borosrobust, Provatorova2020, Labusch2020, Botha2020, DBLP:journals\/corr\/abs-1904-09131}. Some of those approaches solve the ER and EL jointly as an end-to-end task. Besides that, there exist two rule-based approaches\n~\\cite{DBLP:journals\/corr\/abs-1912-11270, tweeki:wnut20} and two based on graph optimization~\\cite{DBLP:conf\/lrec\/KlangN20, DBLP:journals\/pvldb\/LinLXLC20}.\n\nThe approaches mentioned above solve the EL problem as specified in Section~\\ref{sec:problem}. That is, other EL methods with a different problem definition also exist. For example, Almeida et al.~\\cite{almeida2016streets} try to link street names to entities in Wikidata by using additional location information and limiting the entities only to locations. As it uses additional information about the true entity via the location, it is less comparable to the other approaches and, thus, was excluded from this survey. Thawani et al.~\\cite{DBLP:conf\/semweb\/ThawaniHHZDSQSP19} link entities only over columns of tables. The approach is not comparable since it does not use natural language utterances. \nThe approach by Klie et al.~\\cite{DBLP:conf\/acl\/KlieCG20} is concerned with Human-In-The-Loop EL. While its target KB is Wikidata, the focus on the inclusion of a human in EL process makes it incomparable to the other approaches.\nEL methods exclusively working on languages other than English~\\cite{ElVaigh2020,Ellgren2020, Klang2014, Veen2016, Ehrmann2020a} were not considered but also did not use any novel characteristics of Wikidata.\nIn connection to the CLEF HIPE 2020 challenge~\\cite{Ehrmann2020a}, multiple Entity Linkers working on Wikidata were built. While short descriptions of the approaches are available in the challenge-accompanying paper, only approaches described in an own published paper were included in this survey. The approach by Kristanti and Romary~\\cite{Kristanti2020} was not included as it used pre-existing tools for EL over Wikidata, for which no sufficient documentation was available. \n\nDue to the limited number of methods, we also evaluated methods that are not solely using Wikidata but also additional information from a separate KG or Wikipedia. This is mentioned accordingly. Approaches linking to knowledge graphs different from Wikidata, but for which a mapping between the knowledge graphs and Wikidata exists, are also not included. Such methods would not use the Wikidata characteristics at all, and their performance depends on the quality of the other KG and the mapping.\n\nIn the following, the different approaches are described and examined according to the used characteristics of Wikidata. 
An overview can be found in \\Cref{tab:comparison_approaches_wikidata}.\nWe split the approaches into two categories, the ones doing only EL and the ones doing ER and EL.\nFurthermore, to provide a better overview of the existing approaches, they are categorized by notable differences in their architecture or used features. This categorization focuses on the EL aspect of the approaches. \n\nFor each approach, it is mentioned what datasets were used in the corresponding paper. Only a subset of the datasets was directly annotated with Wikidata identifiers. Hence, datasets are mentioned, which do not occur in \\Cref{sec:datasets}.\n\n\\begin{table*}[tbh!]\n \\centering\n \\begin{threeparttable}\n \\begin{tabularx}{\\textwidth}{p{3cm}YYYYYY}\n \\toprule\n \\textbf{Approach}&\\textbf{Labels\/\\allowbreak Aliases} & \\textbf{Descriptions}& \\textbf{Knowledge graph structure} & \\textbf{Hyper-relational structure} & \\textbf{Types} & \\textbf{Additional Information} \\\\ \\midrule\n OpenTapioca~\\cite{DBLP:journals\/corr\/abs-1904-09131} & \\cmark & \\xmark & \\cmark & \\cmark & \\cmark & \\xmark \\\\\n Falcon 2.0~\\cite{DBLP:journals\/corr\/abs-1912-11270}&\\cmark&\\xmark&\\cmark\\tnotex{tn:tabcwc3}&\\xmark &\\xmark & \\xmark\\\\\n Arjun~\\cite{mulang2020encoding}&\\cmark&\\xmark&\\xmark &\\xmark& \\xmark & \\xmark\\\\\n VCG~\\cite{DBLP:conf\/starsem\/SorokinG18}&\\cmark&\\xmark& \\cmark& \\xmark & \\xmark & \\xmark \\\\\n KBPearl~\\cite{DBLP:journals\/pvldb\/LinLXLC20} & \\cmark & \\xmark& \\cmark & \\xmark & \\xmark & \\xmark \\\\\n PNEL~\\cite{Banerjee2020} & \\cmark & \\cmark& \\cmark & \\xmark & \\xmark & \\xmark\n \\\\\n Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190} & \\cmark & \\cmark~\\tnotex{tn:tabcwc2}& \\cmark & \\xmark & \\xmark & \\xmark\n \\\\\n Perkins~\\cite{perkins2020separating} & \\cmark & \\xmark& \\cmark & \\xmark & \\xmark & \\xmark \\\\\n NED using DL on Graphs~\\cite{DBLP:journals\/corr\/abs-1810-09164}&\\cmark&\\xmark&\\cmark&\\xmark & \\xmark & \\xmark\\\\\n Huang et al.~\\cite{huangentity} & \\cmark & \\cmark& \\cmark & \\xmark & \\xmark & Wikipedia \\\\\n Boros et al.~\\cite{borosrobust} & \\xmark & \\xmark & \\xmark & \\xmark & \\cmark & Wikipedia, DBpedia \\\\\n Provatorov et al.~\\cite{Provatorova2020} & \\cmark & \\cmark & \\xmark & \\xmark & \\xmark & Wikipedia \\\\ \n Labusch and Neudecker~\\cite{Labusch2020} & \\xmark & \\xmark & \\xmark & \\xmark & \\xmark & Wikipedia \\\\\n Botha et al.~\\cite{Botha2020} & \\xmark & \\xmark & \\xmark& \\xmark & \\xmark & Wikipedia \\\\\n Hedwig~\\cite{DBLP:conf\/lrec\/KlangN20}&\\cmark&\\cmark& \\cmark& \\xmark & \\xmark & Wikipedia\\\\\n Tweeki~\\cite{tweeki:wnut20} & \\cmark & \\xmark & \\xmark & \\xmark & \\cmark & Wikipedia \\\\\n \\bottomrule\n \\end{tabularx}\n \\begin{tablenotes}\n \\setlength{\\columnsep}{0.8cm}\n \\setlength{\\multicolsep}{0cm}\n \\begin{multicols}{2}\n \\small\n \\item[2] \\label{tn:tabcwc2} Appears in the set of triples used for disambiguation\n \\item[1] \\label{tn:tabcwc3} Only querying the existence of triples\n \\end{multicols}\n \\end{tablenotes}\n \\end{threeparttable}\n \\caption{Comparison between the utilized Wikidata characteristics of each approach.}\n \\label{tab:comparison_approaches_wikidata}\n\\end{table*}\n\n\\subsection{Entity Linking} \n\\label{subsec:el}\n\\subsubsection{Language model-based approaches}\nThe approach by Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190} is tackling the EL problem with transformer models~\\cite{Vaswani2017}. 
It is assumed that the candidate entities are given. For each entity, the labels of 1-hop and 2-hop triples are extracted. Those are then concatenated together with the utterance and the entity mention. The concatenation is the input of a pre-trained transformer model. With a fully connected layer on top, it is then optimized according to a binary cross-entropy loss. This architecture results in a similarity measure between the entity and the entity mention.\nThe examined models are the transformer models Roberta~\\cite{DBLP:journals\/corr\/abs-1907-11692}, XLNet~\\cite{DBLP:conf\/nips\/YangDYCSL19} and the DCA-SL model~\\cite{DBLP:conf\/emnlp\/YangGLTZWCHR19}.\nThe approach was evaluated on three datasets with no focus on certain documents or domains: ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131}, Wikidata-Disamb~\\cite{DBLP:journals\/corr\/abs-1810-09164} and AIDA-CoNLL~\\cite{hoffart-etal-2011-robust}. AIDA-CoNLL is a popular dataset for evaluating EL but has Wikipedia as the target. ISTEX-1000 focuses on research documents, and Wikidata-Disamb is an open-domain dataset.\nThere is no global coherence technique applied.\nOverall, up to 2-hop triples of any kind are used. For example, labels, aliases, descriptions, or general relations to other entities are all incorporated. It is not mentioned if the hyper-relational structure in the form of qualifiers was used.\nOn the one hand, the purely language-based EL results in less need for retraining if the KG changes as shown by other approaches~\\cite{Botha2020, Wu2019}. This is the case due to the reliance on sub-word embeddings and pre-training via the chosen transformer models. If full word-embeddings were used, the inclusion of new words would make retraining necessary. \nStill, an evaluation of the model on the zero-shot EL task is missing and has to be done in the future.\nThe reliance on the triple information might be problematic for long-tail entities which are rarely referred to and are part of fewer triples. Nevertheless, a lack of available context information is challenging for any EL approach relying on it.\n\n\nThe approach designed by Botha et al.~\\cite{Botha2020} tackles multilingual EL. It is also crosslingual. That means it can link entity mentions to entities in a knowledge graph in a language different from the utterance one. The idea is to train one model to link entities in utterances of 100+ different languages to a KG containing not necessarily textual information in the language of the utterance. While the target KG is Wikidata, they mainly use Wikipedia descriptions as input. This is the case as extensive textual information is not available in Wikidata. \nThe approach resembles the Wikification method by Wu et al.~\\cite{Wu2019} but extends the training process to be multilingual and targets Wikidata.\nCandidate generation is done via a dual-encoder architecture. Here, two BERT-based transformer models~\\cite{Devlin2019} encode both the context-sensitive mentions and the entities to the same vector space.\nThe mentions are encoded using local context, the mention and surrounding words, and global context, the document title. Entities are encoded by using the Wikipedia article description available in different languages. In both cases, the encoded CLS-token are projected to the desired encoding dimension. The goal is to embed mentions and entities in such a way that the embeddings are similar.\nThe model is trained over Wikipedia by using the anchors in the text as entity mentions. 
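\nSchematically, the dual encoder is trained such that the dot product between a mention encoding and the encoding of its gold entity is larger than the dot products with the other entities. The following sketch shows such an objective on precomputed vectors; the in-batch negatives and the plain softmax cross-entropy used here are common choices for dual encoders and assumptions of this illustration, not necessarily the exact training setup of the paper.\n\\begin{verbatim}\nimport numpy as np\n\ndef in_batch_loss(mention_vecs, entity_vecs):\n    # Row i of both matrices belongs to the same (mention, entity) pair;\n    # all other entities in the batch act as negatives.\n    scores = mention_vecs @ entity_vecs.T\n    scores = scores - scores.max(axis=1, keepdims=True)  # stability\n    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))\n    return -np.mean(np.diag(log_probs))\n\\end{verbatim}\n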
There exists no limitation that the used Wikipedia articles have to be available in all supported languages. If an article is missing in the English Wikipedia but available in the German one, it is still included.\nNow, after the model is trained, all entities are embedded. The candidates are generated by embedding the mention and searching for the nearest neighbors.\nA cross-encoder is employed to rank the entity candidates, which cross-encodes entity description and mention text together by concatenating and feeding them into a BERT model. Final scores are obtained, and the entity mention is linked.\nThe model was evaluated on the cross-lingual EL dataset TR2016\\textsuperscript{hard}~\\cite{Tsai2016} and the multilingual EL dataset Mewsli-9~\\cite{Botha2020}. Furthermore, it was tested how well it performs on an English-only dataset called WikiNews-2018~\\cite{Gillick2019}.\nWikidata information is only used to gather all the Wikipedia descriptions in the different languages for all entities. \nThe approach was tested on zero- and few-shot settings showing that the model can handle an evolving knowledge graph with newly added entities that were never seen before. This is also more easily achievable due to its missing reliance on the graph structure of Wikidata or the structure of Wikipedia. It is the case that some Wikidata entities do not appear in Wikipedia and are therefore invisible to the approach. \nBut as the model is trained on descriptions of entities in multiple languages, it has access to many more entities than only the ones available in the English Wikipedia. \n\n\\subsubsection{Language model and graph embeddings-based approaches}\nThe master thesis by Perkins~\\cite{perkins2020separating} is performing candidate generation by using anchor link probability over Wikipedia and locality-sensitive hashing (LSH)~\\cite{Gionis1999} over labels and mention bi-grams. Contextual word embeddings of the utterance (ELMo~\\cite{DBLP:conf\/naacl\/PetersNIGCLZ18}) are used together with KG embeddings (TransE~\\cite{DBLP:conf\/nips\/BordesUGWY13}), calculated over Wikipedia and Wikidata, respectively.\nThe context embeddings are sent through a recurrent neural network. The output is concatenated with the KG embedding and then fed into a feed-forward neural network resulting in a similarity measure between the KG embedding of the entity candidate and the utterance.\nIt was evaluated on the AIDA-CoNLL~\\cite{hoffart-etal-2011-robust} dataset. \nWikidata is used in the form of the calculated TransE embeddings. Hyper-relational structures like qualifiers are not mentioned in the thesis and are not considered by the TransE embedding algorithm and, thus, probably not included. \nThe used KG embeddings make it necessary to retrain when the Wikidata KG changes as they are not dynamic. \n\n\\subsubsection{Word and graph embeddings-based approaches}\nIn 2018, Cetoli et al.~\\cite{DBLP:journals\/corr\/abs-1810-09164} evaluated how different types of basic neural networks perform solely over Wikidata. Notably, they compared the different ways to encode the graph context via neural methods, especially the usefulness of including topological information via GNNs~\\cite{Sperduti1997, wu2020comprehensive} and RNNs~\\cite{Hochreiter1997}. \nThere is no candidate generation as it was assumed that the candidates are available.\nThe process consists of combining text and graph embeddings. The text embedding is calculated by applying a Bi-LSTM over the Glove Embeddings of all words in an utterance. 
The resulting hidden states are then masked by the position of the entity mention in the text and averaged. A graph embedding is calculated in parallel via different methods utilizing GNNs or RNNs. The end score is the output of one feed-forward layer having the concatenation of the graph and text embedding as its input. It represents if the graph embedding is consistent with the text embedding. \nWikidata-Disamb30~\\cite{DBLP:journals\/corr\/abs-1810-09164} was used for evaluating the approach. Each example in the dataset also contains an ambiguous negative entity, which is used during training to be robust against ambiguity.\nOne crucial problem is that those methods only work for a single entity in the text. Thus, it has to be applied multiple times, and there will be no information exchange between the entities. While the examined algorithms do utilize the underlying graph of Wikidata, the hyper-relational structure is not taken into account. The paper is more concerned with comparing how basic neural networks work on the triples of Wikidata. Due to the pure analytical nature of the paper, the usefulness of the designed approaches to a real-world setting is limited. The reliance on graph embeddings makes it susceptible to change in the Wikidata KG.\n\n\\subsection{Entity Recognition and Entity Linking}\n\\label{subsec:erel}\nThe following methods all include ER in their EL process. \n\n\\subsubsection{Language model-based approaches}\nIn connection to the \\emph{CLEF 2020 HIPE challenge}~\\cite{Ehrmann2020a}, multiple approaches~\\cite{borosrobust, Labusch2020, Provatorova2020} for ER and EL of historical newspapers on Wikidata were developed. Documents were available in English, French and German. Three approaches with a focus on the English language are described in the following. Differences in the usage of Wikidata between the languages did not exist. Yet, the approaches were not multilingual as different models were used and\/or retraining was necessary for different languages. \n\n\nBoros et al.~\\cite{borosrobust} tackled ER by using a BERT model with a CRF layer on top, which recognizes the entity mentions and classifies the type. During the training, the regular sentences are enriched with misspelled words to make the model robust against noise.\nFor EL, a knowledge graph is built from Wikipedia, containing Wikipedia titles, page ids, disambiguation pages, redirects and link probabilities between mentions and Wikipedia pages are calculated. The link probability between anchors and Wikipedia pages is used to gather entity candidates for a mention. \nThe disambiguation approach follows an already existing method~\\cite{Kolitsas2018}. Here, the utterance tokens are embedded via a Bi-LSTM. The token embeddings of a single mention are combined. Then similarity scores between the resulting mention embedding and the entity embeddings of the candidates are calculated. The entity embeddings are computed according to Ganea and Hofmann~\\cite{Ganea2017}. These similarity scores are combined with the link probability and long-range context attention, calculated by taking the inner product between an additional context-sensitive mention embedding and an entity candidate embedding. The resulting score is a local ranking measure and is again combined with a global ranking measure considering all other entity mentions in the text.\nIn the end, additional filtering is applied by comparing the DBpedia types of the entities to the ones classified during the ER. 
If the type does not match or other inconsistencies apply, the entity candidate gets a lower rank. Here, they also experimented with Wikidata types, but this resulted in a performance decrease.\nAs can be seen, technically, no Wikidata information besides the unsuccessful type inclusion is used. Thus, the approach resembles more of a Wikification algorithm. Yet, they do link to Wikidata as the HIPE task dictates it, and therefore, the approach was included in the survey. New Wikipedia entity embeddings can be easily added~\\cite{Ganea2017} which is an advantage when Wikipedia changes. Also, its robustness against erroneous texts makes it ideal for real-world use. \nThis approach reached SOTA performance on the CLEF 2020 HIPE challenge.\n\nLabusch and Neudecker~\\cite{Labusch2020} also applied a BERT model for ER. For EL, they used mostly Wikipedia, similar to Boros et al.~\\cite{borosrobust}. They built a knowledge graph containing all person, location and organization entities from the German Wikipedia. Then it was converted to an English knowledge graph by mapping from the German Wikipedia Pages via Wikidata to the English ones. This mapping process resulted in the loss of numerous entities. The candidate generation is done by embedding all Wikipedia page titles in an Approximative Nearest Neighbour index via BERT. Using this index, the neighboring entities to the mention embedding are found and used as candidates. For ranking, anchor-contexts of Wikipedia pages are embedded and fed into a classifier together with the embedded mention-context, which outputs whether both belong to the same entity. This is done for each candidate for around 50 different anchor contexts. Then, multiple statistics on those similarity scores and candidates are calculated, which are used in a Random Forest model to compute the final ranks. \nSimilar to the previous approach, Wikidata was only used as the target knowledge graph, while information from Wikipedia was used for all the EL work. Thus, no special characteristics of Wikidata were used. The approach is less affected by a change of Wikidata due to similar reasons as the previous approach. This approach lacks performance compared to the state of the art in the HIPE task. The knowledge graph creation process produces a disadvantageous loss of entities, but this might be easily changed.\n\nProvatorov et al.~\\cite{Provatorova2020} used an ensemble of fine-tuned BERT models for ER. The ensemble is used to compensate for the noise of the OCR procedure. The candidates were generated by using an ElasticSearch index filled with Wikidata labels. The candidate's final rank is calculated by taking the search score, increasing it if a perfect match applies and finally taking the candidate with the lowest Wikidata identifier number (indicating a high popularity score). They also created three other methods of the EL approach: (1) The ranking was done by calculating cosine similarity between the embedding of the utterance and the embedding of the same utterance with the mention replaced by the Wikidata description. Furthermore, the score is increased by the Levenshtein distance between the entity label and the mention. (2) A variant was used where the candidate generation is enriched with historical spellings of Wikidata entities. (3) The last variant used an existing tool~\\cite{Hulst2020}, which included contextual similarity and co-occurrence probabilities of mentions and Wikipedia articles. 
In the tool, the final disambiguation is based on the ment-norm method by Le and Titov~\\cite{Le2018} .\nThe approach uses Wikidata labels and descriptions in one variant of candidate ranking. Beyond that, no other characteristics specific to Wikidata were considered. Overall, the approach is very basic and uses mostly pre-existing tools to solve the task. The approach is not susceptible to a change of Wikidata as it is mainly based on language and does not need retraining. \n\nThe approach designed by Huang et al.~\\cite{huangentity} is specialized in short texts, mainly questions.\nThe ER is performed via a pre-trained BERT model \\cite{Devlin2019} with a single classification layer on top, determining if a token belongs to an entity mention.\nThe candidate search is done via an ElasticSearch\\footnote{\\url{https:\/\/www.elastic.co\/elasticsearch\/}} index, comparing the entity mention to labels and aliases by exact match and Levenshtein distance.\nThe candidate ranking uses three similarity measures to calculate the final rank. A CNN is used to compute a character-based similarity between entity mention and candidate label. This results in a similarity matrix whose entries are calculated by the cosine similarity between each character embedding of both strings. \nThe context is included in two ways. First, between the utterance and the entity description, by embedding the tokens of each sequence through a BERT model. Again, a similarity matrix is built by calculating the cosine similarity between each token embedding of both utterance and description. The KG is also considered by including the triples containing the candidate as a subject. For each such triple, a similarity matrix is calculated between the label concatenation of the triple and the utterance. The most representative features are then extracted out of the matrices via max-pooling, concatenated and fed into a two-layer perceptron.\nThe approach was evaluated on the WebQSP~\\cite{DBLP:conf\/starsem\/SorokinG18} dataset, which is composed of short questions from web search logs.\nWikidata labels, aliases and descriptions are utilized. Additionally, the KG structure is incorporated through the labels of candidate-related triples. This is similar to the approach by Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190}, but only 1-hop triples are used. There is also no hyper-relational information considered.\nDue to its reliance on text alone and using a pre-trained language model with sub-word embeddings, it is less susceptible to changes of Wikidata. While the approach was not empirically evaluated on the zero-shot EL task, other approaches using language models (LM)~\\cite{Logeswaran2019,Botha2020, Wu2019} were and indicate a good performance. \n\n\\subsubsection{Word embedding-based approaches}\n\\emph{Arjun}~\\cite{mulang2020encoding} tries to tackle specific challenges of Wikidata like long entity labels and implicit entities. Published in 2020, Arjun is an end-to-end approach utilizing the same model for ER and EL. It is based on an Encoder-Decoder-Attention model. First, the entities are detected via feeding Glove~\\cite{Pennington2014} embedded tokens of the utterance into the model and classifying each token as being an entity or not. Afterward, candidates are generated in the same way as in Falcon 2.0~\\cite{DBLP:journals\/corr\/abs-1912-11270} (see \\Cref{subsubsec:rule}). The candidates are then ranked by feeding the mention, the entity label, and its aliases into the model and calculating the score. 
The model resembles a similarity measure between the mention and the entity labels. \nArjun was trained and evaluated on the T-REx~\\cite{elsahar2019t} dataset consisting of extracts out of various Wikipedia articles. \nIt does not use any global ranking. Wikidata information is used in the form of labels and aliases in the candidate generation and candidate ranking. The model was trained and evaluated using GloVe embeddings, for which new words are not easily addable. New entities are therefore not easily supported. However, the authors claim that one can replace them with other embeddings like BERT-based ones. While those proved to perform quite well in zero-shot EL~\\cite{Botha2020, Wu2019}, this was usually done with more context information besides labels. Therefore it remains questionable if using those would adapt the approach for zero-shot EL. \n\n\n\n\\subsubsection{Word and graph embeddings-based approaches}\nIn 2018, Sorokin and Gurevych~\\cite{DBLP:conf\/starsem\/SorokinG18} were doing joint end-to-end ER and EL on short texts. The algorithm tries to incorporate multiple context embeddings into a mention score, signaling if a word is a mention, and a ranking score, signaling the candidate's correctness. First, it generates several different tokenizations of the same utterance. For each token, a search is conducted over all labels in the KG to gather candidate entities. If the token is a substring of a label, the entity is added.\nEach token sequence gets then a score assigned. The scoring is tackled from two sides. On the utterance side, a token-level context embedding and a character-level context embedding (based on the mention) are computed. The calculation is handled via dilated convolutional networks (DCNN) \\cite{Yu2016}. On the KG side, one includes the labels of the candidate entity, the labels of relations connected to a candidate entity, the embedding of the candidate entity itself, and embeddings of the entities and relations related to the candidate entity. This is again done by DCNNs and, additionally, by fully connected layers. The best solution is then found by calculating a ranking and mention score for each token for each possible tokenization of the utterance. All those scores are then summed up into a global score. The global assignment with the highest score is then used to select the entity mentions and entity candidates. \nThe question-based EL datasets WebQSP~\\cite{DBLP:conf\/starsem\/SorokinG18} and GraphQuestions~\\cite{su2016generating} were used for evaluation. GraphQuestions contains multiple paraphrases of the same questions and is used to test the performance on different wordings.\nThe approach uses the underlying graph, label and alias information of Wikidata. Graph information is used via connected entities and relations. They also use TransE embeddings, and therefore no hyper-relational structure. Due to the usage of static graph embeddings, retraining will be necessary if Wikidata changes.\n\n\\emph{PNEL}~\\cite{Banerjee2020} is an end-to-end (E2E) model jointly solving ER and EL focused on short texts. PNEL employs a Pointer network~\\cite{DBLP:conf\/nips\/VinyalsFJ15} working on a set of different features. An utterance is tokenized into multiple different combinations. Each token is extended into the (1) token itself, (2) the token and the predecessor, (3) the token and the successor, and (4) the token with both predecessor and successor. For each token combination, candidates are searched for by using the BM25 similarity measure. 
Fifty candidates are used per tokenization combination. Therefore, 200 candidates (not necessarily 200 distinct candidates) are found per token. For each candidate, features are extracted. Those range from the simple length of a token to the graph embeddings of the candidate entity.\nAll features are concatenated to a large feature vector. \nTherefore, per token, a sequence of 200 such features vectors exists.\nFinally, the concatenation of those sequences of each token in the sentence is then fed into a Pointer network. At each iteration of the Pointer network, it points to one distinct candidate in the network or an \\texttt{END} token marking no choice. Pointing is done by computing a softmax distribution and choosing the candidate with the highest probability. Note that the model points to a distinct candidate, but this distinct candidate can occur multiple times. Thus, the model does not necessarily point to only a single candidate of the 200 ones.\nPNEL was evaluated on several QA datasets, namely WebQSP~\\cite{DBLP:conf\/starsem\/SorokinG18}, SimpleQuestions~\\cite{bordes2015large} and LC-QuAD 2.0~\\cite{dubey2019lc}. SimpleQuestions focuses, as the name implies, on simple questions containing only very few entities. LC-QuAD 2.0, on the other hand, contains both, simple and more complex, longer questions including multiple entities.\nThe entity descriptions, labels and aliases are all used. Additionally, the graph structure is included by TransE graph embeddings, but no hyper-relational information was incorporated.\nE2E models can often improve the performance of the ER. Most EL algorithms employed in the industry often use older ER methods decoupled from the EL process. Thus, such an E2E EL approach can be of use. Nevertheless, due to its reliance on static graph embeddings, complete retraining will be necessary if Wikidata changes. \n\n\\subsubsection{Non-NN ML-based approaches}\n\\emph{OpenTapioca}~\\cite{DBLP:journals\/corr\/abs-1904-09131} is a mainly statistical EL approach published in 2019. \nWhile the paper never mentions ER, the approach was evaluated with it. In the code, one can see that the ER is done by a SolrTextTagger analyzer of the Solr search platform\\footnote{\\url{https:\/\/lucene.apache.org\/solr\/}}. \nThe candidates are generated by looking up if the mention corresponds to an entity label or alias in Wikidata stored in a Solr collection. Entities are filtered out which do not correspond to the type person, location or organization.\nOpenTapioca is based on two main features, which are local compatibility and semantic similarity. First, local compatibility is calculated via a popularity measure and a unigram similarity measure between entity label and mention. The popularity measure is based on the number of sitelinks, PageRank scores, and the number of statements. Second, the semantic similarity strives to include context information in the decision process. All entity candidates are included in a graph and are connected via weighted edges. Those weights are calculated via a statistical similarity measure. This measure includes how likely it is to jump from one entity candidate to another while discounting it by the distance between the corresponding mentions in the utterance. The resulting adjacency matrix is then normalized to a stochastic matrix that defines a Markov Chain. One now propagates the local compatibility using this Markov Chain. Several iterations are then taken, and a final score is inferred via a Support Vector Machine. 
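\nA schematic sketch of the propagation idea is shown below; it only illustrates how local compatibility scores can be spread over a row-normalized candidate graph and is not OpenTapioca's exact update rule, and both the number of iterations and the mixing factor are assumptions of the sketch.\n\\begin{verbatim}\nimport numpy as np\n\ndef propagate(local_compat, similarity, iterations=3, mix=0.85):\n    # local_compat: local compatibility score per entity candidate.\n    # similarity: non-negative candidate-to-candidate edge weights.\n    similarity = np.asarray(similarity, dtype=float)\n    local_compat = np.asarray(local_compat, dtype=float)\n    row_sums = similarity.sum(axis=1, keepdims=True)\n    row_sums[row_sums == 0] = 1.0          # avoid division by zero\n    transition = similarity / row_sums     # row-stochastic Markov matrix\n    scores = local_compat.copy()\n    for _ in range(iterations):\n        # spread context information, keeping part of the local signal\n        scores = mix * (transition.T @ scores) + (1 - mix) * local_compat\n    return scores\n\\end{verbatim}\n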
It supports multiple entities per utterance.\nOpenTapioca is evaluated on AIDA-CoNLL~\\cite{hoffart-etal-2011-robust}, Microposts 2016~\\cite{Rizzo}, ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131} and RSS-500~\\cite{roder2014n3}. RSS-500 consists of news-based examples and Microposts 2016 focuses on shorter documents like tweets. OpenTapioca was therefore evaluated on many different types of documents.\nThe approach is only trained on and evaluated for three types of entities: locations, persons, and organizations. \nIt facilitates Wikidata-specific labels, aliases, and sitelinks information. More importantly, it also uses qualifiers of statements in the calculation of the PageRank scores. But the qualifiers are only seen as additional edges to the entity.\nThe usage in special domains is limited due to its restriction to only three types of entities, but this is just an artificial restriction. It is easily updatable if the Wikidata graph changes as no immediate retraining is necessary.\n\n\\subsubsection{Graph optimization-based approaches}\n\\emph{Hedwig}~\\cite{DBLP:conf\/lrec\/KlangN20} is a multilingual entity linker specialized on the TAC 2017~\\cite{Ji2017} task but published in 2020. Another entity linker~\\cite{DBLP:journals\/corr\/abs-1903-05498}, developed by the same authors, is not included in this survey as Hedwig is partly an evolution of it. The entities to be linked are limited to only a subset of all possible entity classes. Hedwig employs Wikidata and Wikipedia at the same time. The Entity Recognition uses word2vec embeddings~\\cite{Mikolov2013}, character embeddings, and dictionary features where the character embeddings are calculated via a Bi-LSTM. The dictionary features are class-dependent, but this is not defined in more detail. Those embeddings and features are computed and concatenated for each token. Afterward, the whole sequence of token features is fed into a Bi-LSTM with a linear chain Conditional Random Field (CRF) layer at the end to recognize the entities. The candidates for each detected entity mention are then generated by using a mention dictionary. The dictionary is created from Wikidata and Wikipedia information, utilizing labels, aliases, titles or anchor texts. The candidates are disambiguated by constructing a graph consisting of all candidate entities, mentions, and occurring words in the utterance. The edges between entities and other entities, words, or mentions have the normalized pointwise mutual information (NPMI) assigned as their weights. The NPMI specifies how frequently two entities, an entity and a mention or an entity and a word, occur together. Those scores are calculated over a Wikipedia dump. Finally, the PageRank of each node in the graph is calculated via power iteration, and the highest-scoring candidates are chosen. \n \nThe type classification is used to determine the types of entities, not mentions. As this is only relevant for the TAC 2017 task, the classifier can be ignored. \nThe approach was evaluated on the TAC 2017~\\cite{Ji2017} dataset, which focuses on entities of type person, organization, location, geopolitics and facilities. The documents originate from discussion forums and newswire texts.\nLabels and aliases from multiple languages are used. It also uses sitelinks to connect the Wikidata identifiers and Wikipedia articles. The paper also claims to use descriptions but does not describe anywhere in what way. No hyper-relational or graph features are used. 
As it employs class-dependent features, it is limited to the entities of classes specified in the TAC 2017 task. The NPMI weights have to be updated with the addition of new elements in Wikidata and Wikipedia. \n\n\\emph{KBPearl}~\\cite{DBLP:journals\/pvldb\/LinLXLC20}, published in 2020, utilizes EL to populate incomplete KGs using documents. First, a document is preprocessed via Tokenization, POS tagging, NER, noun-phrase chunking, and time tagging. Also, an existing Information Extraction tool is used to extract open triples from the document. They experimented with four different tools (ReVerb~\\cite{Fader2011}, MinIE~\\cite{Gashteovski2017}, ClausIE~\\cite{Corro2013} and Stanford Open IE Tool~\\cite{Angeli2015}), Open triples are non-linked triples extracted via an open information extraction tool. The triples consist of a subject, predicate and object in unstructured text. For example, the open triple \\texttt{} can be extracted from \"Ennio Morricone, known for numerous famous soundtracks of the Spaghetti Western era, composed the soundtrack of the movie The Hateful Eight.\". The triples are processed further by filtering invalid tokens and doing canonicalization. Then, a graph of entities, predicates, noun phrases, and relation phrases is constructed. The candidates are generated by comparing the noun\/relation phrases to the labels and aliases of the entities\/predicates. The edges between the entities\/relations and between entities and relations are weighted by the number of intersecting one-hop statements. The next step is the computation of a maximum dense subgraph. Density is defined by the minimum weighted degree of all nodes \\cite{hoffart-etal-2011-robust}. As this problem is NP-hard, a greedy algorithm is used for optimization. New entities relevant for the task of Knowledge Graph Population are identified by thresholding the weighted sum of an entity's incident edges.\nLike used here, global coherence can perform sub-optimally since not all entities\/relations in a document are related. Thus, two variants of the algorithm are proposed. First, a pipeline version that separates the full document into sentences. Second, a near neighbor mode, limiting the interaction of the nodes in the graph by the distances of the corresponding noun-phrases and relation-phrases.\nKBPearl was evaluated on many different datasets: ReVerb38~\\cite{DBLP:journals\/pvldb\/LinLXLC20}, NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19}, LC-QuAD 2.0~\\cite{dubey2019lc}, QALD-7-WIKI~\\cite{usbeck20177th}, T-REx~\\cite{elsahar2019t}, Knowledge Net~\\cite{mesquita2019knowledgenet} and CC-DBP~\\cite{glass2018dataset}. These datasets encompass news articles, questions, and general open-domain documents.\nThe approach includes label and alias information of entities and predicates. Additionally, one-hop statement information is used, but hyper-relational features are not mentioned. However, the paper does not claim that its focus is entirely on Wikidata. Thus, the weak specialization is understandable. While it utilizes EL, the focus of the approach is still on knowledge base population.\nNo training is necessary, which makes the approach suitable for a dynamic graph like Wikidata.\n\n\\subsubsection{Rule-based approaches}\n\\label{subsubsec:rule}\n\\emph{Falcon 2.0}~\\cite{DBLP:journals\/corr\/abs-1912-11270} is a fully linguistic approach and a transformation of Falcon 1.0~\\cite{DBLP:conf\/naacl\/SakorMSSV0A19} to Wikidata. 
Falcon 2.0 was published in 2019, and its focus lies on short texts, especially questions. It links entities and relations jointly. Falcon 2.0 uses entity and relation labels as well as the triples themselves. The relations and entities are recognized by applying linguistic principles. The candidates are then generated by comparing mentions to the labels using the Levenshtein distance. The ranking of the entities and relations is done by creating triples between the relations and entities and checking if the query is successful. The more successful the queries, the higher the candidate will be ranked. If no query is successful, the algorithm returns to the ER phase and splits some of the recognized entities again. As Falcon 2.0 is an extension of Falcon 1.0 from DBpedia to Wikidata, the usage of specific Wikidata characteristics is limited. Falcon 2.0 is tuned for EL on questions and short texts, as well as the English language and it was evaluated on the two QA datasets LC-QuAD 2.0~\\cite{dubey2019lc} and SimpleQuestions~\\cite{bordes2015large}.\nIt is not generalizable to longer, more noisy, non-question texts. The used rules follow the structure of short questions. Hence, longer texts consisting of multiple sentences or non-questions are not supported. If the text is grammatically incorrect, the linguistic rules used to parse the utterance would fail. For example, linking Tweets would then be infeasible. \nAs it is only based on rules, it is clearly independent of changes in the KG.\n\n\\emph{Tweeki}~\\cite{tweeki:wnut20} is an approach focusing on unsupervised EL over tweets. The ER is done by a pre-existing Entity Recognizer~\\cite{Gardner2018} which also tags the mentions. The candidates are generated by first calculating the link probability between Wikidata aliases over Wikipedia and then searching for the aliases in a dictionary. \nThe ranking is done using the link probabilities while pruning all candidates that do not belong to the type provided by the Entity Recognizer. \nTweeki was evaluated on the accompanied dataset TweekiGold, consisting of random annotated tweets. Additionally, it was tested on the Microposts 2016~\\cite{Rizzo} dataset and the datasets by Derczynski~\\cite{Derczynski2015} which both also focus on shorter, noisy texts like tweets.\nThe approach does not need to be trained, making it very suitable for linking entities in tweets. In this document type, often novel entities with minimal context exist.\nRegarding features of Wikidata, it uses label, alias and type information. Due to it being unsupervised, changes to the KG do not affect it.\n\n\\subsection{Analysis}\n\\label{sec:approaches_evaluation}\n\nMany approaches include some form of language model or word embedding. This is expected as a large factor of entity linking encompasses the comparison of word-based information. And in that regard, language models like BERT~\\cite{Devlin2019} proved very performant in the last years. Furthermore, various language models rely on sub-word or character embeddings which also work on out-of-dictionary words. This is in contrast to regular word-embeddings, which can not cope with words never seen before.\nIf graph information is part of the approach, the approaches either used graph embeddings, included some coherence score as a feature or created a neighborhood graph on the fly and optimized over it. Some approaches like OpenTapioca, Falcon 2.0 or Tweeki utilized more old-fashioned methods. 
They either employ classic machine learning together with some basic features or work entirely rule-based. \n\n\\subsubsection{Performance}\nTable~\\ref{tab:dataset_results_el_er} gives an overview of all available results for the approaches performing ER and EL. While results for the EL-only approaches exist, the measures used vary widely. Thus, it is very difficult to compare these approaches. So as not to withhold the results, they can still be found in the appendix in Table~\\ref{tab:dataset_results_el_only} with an accompanying discussion. We aim to complete this table and also extend Table~\\ref{tab:dataset_results_el_er} in future work.\n\nThe micro $F_1$ scores are reported:\n\\begin{equation*}\n F_1 = 2 \\cdot \\frac{p \\cdot r}{p + r}\n\\end{equation*}\nwhere $p$ is the precision $p=\\frac{\\mathit{tp}}{\\mathit{tp} + \\mathit{fp}}$ and $r$ is the recall $r=\\frac{\\mathit{tp}}{\\mathit{tp} + \\mathit{fn}}$. Here, $\\mathit{tp}$ is the number of true positives, $\\mathit{fp}$ is the number of false positives and $\\mathit{fn}$ is the number of false negatives over a ground truth. Micro $F_1$ means that the scores are calculated over all linked entity mentions and not separately for each document and then averaged. True positives are the correctly linked entity mentions, false positives are incorrectly linked entities that do not occur in the set of valid entities, and false negatives are entities that occur in the set of valid entities but are not linked to~\\cite{Cornolti2013}. \nThe approaches were evaluated on many different datasets, which makes comparison very difficult. Additionally, many approaches are evaluated on datasets designed for knowledge graphs other than Wikidata and then mapped to Wikidata. \nOften, the approaches are evaluated on the same dataset but over different subsets, which complicates a comparison even more.\nThe method by Perkins~\\cite{perkins2020separating} was also evaluated on the Kensho Derived Wikimedia Dataset~\\cite{KenshoRD}, but this evaluation was only used to compare different variants of the designed approach and focused on different amounts of training data. Thus, including it in the evaluation table is not reasonable. 
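\n\nFor illustration, the following minimal Python sketch (our own, hypothetical example; the mention tuples and the placeholder identifiers are not taken from any surveyed benchmark) computes micro precision, recall and $F_1$ over sets of linked mentions:\n\\begin{verbatim}\n# Micro-averaged scores over (document, mention, entity) tuples,\n# pooled over all documents instead of averaging per document.\ndef micro_f1(gold, predicted):\n    tp = len(gold & predicted)      # correctly linked mentions\n    fp = len(predicted - gold)      # links not in the gold annotations\n    fn = len(gold - predicted)      # gold annotations that were missed\n    p = tp \/ (tp + fp) if tp + fp else 0.0\n    r = tp \/ (tp + fn) if tp + fn else 0.0\n    return 2 * p * r \/ (p + r) if p + r else 0.0\n\ngold = {(\"doc1\", \"Ennio Morricone\", \"Q23848\"),\n        (\"doc1\", \"The Hateful Eight\", \"Q_FILM\")}    # Q_FILM: placeholder\npred = {(\"doc1\", \"Ennio Morricone\", \"Q23848\"),\n        (\"doc1\", \"The Hateful Eight\", \"Q_WRONG\")}   # one incorrect link\nprint(micro_f1(gold, pred))  # 0.5\n\\end{verbatim}\nIn this toy example, precision and recall coincide because the number of predicted and gold mentions is equal; with a missed mention, recall would drop below precision.\n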
\n\n\\begin{table*}\n \\centering\n \\rotatebox{-90}{\n \\resizebox{0.95\\textheight}{!}{\n \\begin{threeparttable}\n \\begin{tabular}{ccccccccccccc}\n \\toprule\n &\\rotatebox[origin=c]{-66}{OpenTapioca~\\cite{DBLP:journals\/corr\/abs-1904-09131}} & \n \\rotatebox[origin=c]{-66}{Falcon 2.0~\\cite{DBLP:journals\/corr\/abs-1912-11270}} &\n \\rotatebox[origin=c]{-66}{Arjun~\\cite{mulang2020encoding}} & \n \\rotatebox[origin=c]{-66}{VCG~\\cite{DBLP:conf\/starsem\/SorokinG18}} &\n \\rotatebox[origin=c]{-66}{KBPearl~\\cite{DBLP:journals\/pvldb\/LinLXLC20}~\\tnotex{tn:erel2}} &\n \\rotatebox[origin=c]{-66}{PNEL~\\cite{Banerjee2020}}&\n \\rotatebox[origin=c]{-66}{Huang et al.~\\cite{huangentity}} &\n \\rotatebox[origin=c]{-66}{Boros et al.~\\cite{borosrobust}} &\n \\rotatebox[origin=c]{-66}{Provatorov et al.~\\cite{Provatorova2020}} &\n \\rotatebox[origin=c]{-66}{\\parbox{2cm}{Labusch \\& Neudecker~\\cite{Labusch2020}}} &\n \\rotatebox[origin=c]{-66}{Hedwig~\\cite{DBLP:conf\/lrec\/KlangN20}} &\n \\rotatebox[origin=c]{-66}{Tweeki~\\cite{tweeki:wnut20}}\\\\\n \\toprule \n AIDA-CoNLL~\\cite{hoffart-etal-2011-robust} &0.482~\\cite{DBLP:journals\/corr\/abs-1904-09131}&-&-&-&-&-&-&-&-&-&-&-\\\\\n Microposts 2016~\\cite{Rizzo}&0.087~\\cite{DBLP:journals\/corr\/abs-1904-09131}, 0.148~\\cite{tweeki:wnut20}&-&-&-&-&-&-&-&-&-&-&0.248~\\cite{tweeki:wnut20} \\\\\n ISTEX-1000~\\cite{DBLP:journals\/corr\/abs-1904-09131}&0.87~\\cite{DBLP:journals\/corr\/abs-1904-09131}&-&-&-&-&-&-&-&-&-&-&- \\\\\n RSS-500~\\cite{roder2014n3} & 0.335~\\cite{DBLP:journals\/corr\/abs-1904-09131}&-&-&-&-&-&-&-&-&-&-&-\\\\\n LC-QuAD 2.0~\\cite{dubey2019lc}&0.301~\\cite{Banerjee2020} & 0.445~\\cite{Banerjee2020}&-&\n 0.47~\\cite{Banerjee2020}&-&0.589~\\cite{Banerjee2020}~\\tnotex{tn:erel3}&-&-&-&-&-&-\n \\\\\n LC-QuAD 2.0~\\cite{dubey2019lc}~\\tnotex{tn:erel8}&0.25~\\cite{DBLP:journals\/corr\/abs-1912-11270} & 0.68~\\cite{DBLP:journals\/corr\/abs-1912-11270}&-&\n &-&-&-&-&-&-&-&-\n \\\\\n LC-QuAD 2.0~\\cite{dubey2019lc}~\\tnotex{tn:erel1}&-& 0.320~\\cite{Banerjee2020}&-&-&-&0.629~\\cite{Banerjee2020}~\\tnotex{tn:erel3}&-&-&-&-&-&-\n \\\\\n Simple-Question&0.20~\\cite{Banerjee2020}& 0.41~\\cite{Banerjee2020}&-&-&-&0.68~\\cite{Banerjee2020}~\\tnotex{tn:erel4}&-&-&-&-&-&-\\\\\n Simple-Question~\\cite{bordes2015large}~\\tnotex{tn:erel9}&-& 0.63~\\cite{DBLP:journals\/corr\/abs-1912-11270}&-&-&-&-&-&-&-&-&-&-\\\\\n T-REx~\\cite{elsahar2019t}& 0.579~\\cite{mulang2020encoding}&-&0.713~\\cite{mulang2020encoding}&-&-&-&-&-&-&-&-&-\\\\\n T-REx~\\cite{elsahar2019t}~\\tnotex{tn:erel7}& -&-&-&-&0.421~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n WebQSP~\\cite{yih2016value}&-&-&0.730~\\cite{Banerjee2020,DBLP:conf\/starsem\/SorokinG18}&-&-&0.712~\\cite{Banerjee2020}~\\tnotex{tn:erel5}&0.780~\\cite{huangentity}&-&-&-&-&-\\\\\n CLEF HIPE 2020~\\cite{Ehrmann2020}&-&-&-&-&-&-&-&0.531~\\cite{Ehrmann2020a}~\\tnotex{tn:erel6}&0.300~\\cite{Ehrmann2020a}~\\tnotex{tn:erel6}&0.141~\\cite{Ehrmann2020a}~\\tnotex{tn:erel6}&-&-\\\\\n TAC2017~\\cite{Ji2017}&-&-&-&-&-&-&-&-&-&-&0.582~\\cite{DBLP:conf\/lrec\/KlangN20}\\\\\n Graph-Questions~\\cite{su2016generating}&-&-&0.442~\\cite{DBLP:conf\/starsem\/SorokinG18}&-&-&-&-&-&-&-&-&-\\\\\n QALD-7-WIKI~\\cite{usbeck20177th}&-&-&-&-&0.679~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n NYT2018~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:conf\/icde\/LinC19}&-&-&-&-&0.575~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n 
ReVerb38~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&0.653~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n Knowledge Net~\\cite{mesquita2019knowledgenet}&-&-&-&-&0.384~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n CC-DBP~\\cite{glass2018dataset}&-&-&-&-&0.499~\\cite{DBLP:journals\/pvldb\/LinLXLC20}&-&-&-&-&-&-&-\\\\\n TweekiGold~\\cite{tweeki:wnut20}&0.291~\\cite{tweeki:wnut20}&-&-&-&-&-&-&-&-&-&-&0.65~\\cite{tweeki:wnut20}\\\\\n Derczynski~\\cite{Derczynski2015}&0.14~\\cite{tweeki:wnut20}&-&-&-&-&-&-&-&-&-&-&0.371~\\cite{tweeki:wnut20}\\\\\n \\bottomrule\n \\end{tabular}\n \\begin{tablenotes}\n \\setlength{\\columnsep}{0.8cm}\n \\setlength{\\multicolsep}{0cm}\n \\begin{multicols}{3}\n \\small\n \\item[1] \\label{tn:erel2} NN model\n \\item[2] \\label{tn:erel3} L model\n \\item[3] \\label{tn:erel8} 1000 sampled questions from LC-QuAD 2.0\n \\item[4] \\label{tn:erel1} LC-QuAD 2.0 test set used in KBPearl paper\n \\item[5] \\label{tn:erel4} S model\n \\item[6] \\label{tn:erel9} Probably evaluated on train and test set\n \\item[7] \\label{tn:erel7} Evaluation on subset of T-REx data different to the subset used in Arjun paper\n \\item[8] \\label{tn:erel5} W model\n \\item[9] \\label{tn:erel6} Strict mention matching\n \\end{multicols}\n \\end{tablenotes}\n \\end{threeparttable}\n }}\n \\caption{Results: ER + EL.}\n \\label{tab:dataset_results_el_er}\n\\end{table*}\n\nInferring the utility of a Wikidata characteristic from the different approaches' $F_1$-measures is inconclusive due to the sparsity of results.\nFor ER + EL approaches, most results were available for LC-QuAD 2.0. Yet, no conclusion can be drawn as many approaches were evaluated on different subsets of the dataset. Falcon 2.0 performs well, but it does not substantially rely on Wikidata characteristics. The performance is good as it is designed for simple questions that follow its rules very closely. Arjun performs well on T-REx by mainly using label information, but the number of methods tested on the T-REx dataset is too low to be conclusive. Besides that, PNEL and the approach by Huang et al. also achieve good results; both include a broader scope of Wikidata information in the form of labels, descriptions and graph structure. As HIPE challenge approaches are using Wikidata only marginally and the difference in performance depends more on the robustness against the OCR-introduced noise, comparing them is not providing information on the relevance of Wikidata characteristics.\n\n\\subsubsection{Utilization of Wikidata characteristics}\nWhile some algorithms~\\cite{mulang2020encoding} do try to examine the challenges of Wikidata, like more noisy long entity labels, many fail to use most of the advantages of Wikidata's characteristics. If the approaches are using even more information than just the labels of entities and relations, they mostly only include simple n-hop triple information. Hyper-relational information like qualifiers is only used by OpenTapioca but still in a simple manner. This is surprising, as they can provide valuable additional information. As one can see in Figure~\\ref{fig:qualifiers_bars}, around half of the statements on entities occurring in the LC-QuAD 2.0 dataset have one or more qualifiers. These percentages differ from the ones in all of Wikidata, but when entities are considered, appearing in realistic use cases like QA, qualifiers are much more abundant. Thus, dismissing the qualifier information might be critical. 
The inclusion of hyper-relational graph embeddings could improve the performance of many approaches already using non-hyper-relational ones.\nRank information of statements might be useful to consider, but choosing the best one will probably often suffice. \n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/statement_qualifiers_bar_both.pdf}\n \\caption{Percentage of statements having the specified number of qualifiers for all LC-QuAD 2.0 and Wikidata entities.}\n \\label{fig:qualifiers_bars}\n\\end{figure}\n\nOf all approaches, only two algorithms~\\cite{huangentity, Banerjee2020} use descriptions explicitly. Others incorporate them through triples too, but more on the side~\\cite{DBLP:journals\/corr\/abs-2008-05190}. Descriptions can provide valuable context information and many items do have them; see Figure~\\ref{fig:wikidata_descriptions}. Hedwig~\\cite{DBLP:conf\/lrec\/KlangN20} claims to use descriptions but fails to describe how. \n\nTwo approaches~\\cite{DBLP:conf\/lrec\/KlangN20, Botha2020} demonstrated the usefulness of the inherent multilingualism of Wikidata, notably in combination with Wikipedia. \n\n\\begin{table}[htb!]\n \\centering\n \\begin{tabularx}{\\linewidth}{Xcc}\n \\toprule\n \\textbf{Approach} & \\textbf{Code} & \\textbf{Web API} \\\\ \\midrule\n OpenTapioca~\\cite{DBLP:journals\/corr\/abs-1904-09131} & \\cmark & \\cmark \\\\\n Falcon 2.0~\\cite{DBLP:journals\/corr\/abs-1912-11270}&\\cmark&\\cmark\\\\\n Arjun~\\cite{mulang2020encoding}&\\cmark&\\xmark\\\\\n VCG~\\cite{DBLP:conf\/starsem\/SorokinG18}&\\cmark&\\xmark \\\\\n KBPearl~\\cite{DBLP:journals\/pvldb\/LinLXLC20} & \\xmark & \\xmark \\\\\n PNEL~\\cite{Banerjee2020} & \\cmark & \\xmark \\\\\n Mulang et al.~\\cite{DBLP:journals\/corr\/abs-2008-05190} & \\cmark & \\xmark\\\\\n Perkins~\\cite{perkins2020separating} & \\xmark & \\xmark\\\\\n NED using DL on Graphs~\\cite{DBLP:journals\/corr\/abs-1810-09164}&\\cmark&\\xmark\\\\\n Huang et al.~\\cite{huangentity} & \\xmark & \\xmark\\\\ \n Boros et al.~\\cite{borosrobust} & \\xmark & \\xmark \\\\\n Provatorov et al.~\\cite{Provatorova2020} & \\xmark & \\xmark \\\\\n Labusch and Neudecker~\\cite{Labusch2020} & \\cmark & \\xmark \\\\\n Botha et al.~\\cite{Botha2020} & \\xmark & \\xmark \\\\\n Hedwig~\\cite{DBLP:conf\/lrec\/KlangN20}&\\xmark&\\xmark \\\\\n Tweeki~\\cite{tweeki:wnut20} & \\xmark & \\xmark\\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Availability of approaches.}\n \\label{tab:availability}\n\\end{table}\n\nAs Wikidata is always changing, approaches robust against change are preferred. A reliance on transductive graph embeddings~\\cite{DBLP:conf\/starsem\/SorokinG18,Banerjee2020, DBLP:journals\/corr\/abs-1810-09164, perkins2020separating}, which need to have all entities available during training, makes repeated training necessary. Alternatively, the used embeddings would need to be replaced with graph embeddings, which are efficiently updatable or inductive~\\cite{Wang2019, Wang2019a, Teru2020,Baek2020,Albooyeh2020,Hamaguchi2017,Wu2019a, galkin2021nodepiece, Daruna2021}.\nThe rule-based approach Falcon 2.0~\\cite{DBLP:journals\/corr\/abs-1912-11270} is not affected by a developing knowledge graph but only usable for correctly-stated questions. 
\nMethods only working on text information~\\cite{mulang2020encoding, Provatorova2020, huangentity, Botha2020, DBLP:journals\/corr\/abs-2008-05190} like labels, descriptions or aliases do not need to be updated if Wikidata changes, only if the text type or the language itself does. This is demonstrated by the approach by Botha et al.~\\cite{Botha2020} and the Wikification EL BLINK~\\cite{Wu2019}, which mainly use the BERT model and are able to link to entities never seen during training. If word embeddings instead of sub-word embeddings are used, for example, GloVe~\\cite{Pennington2014} or word2vec~\\cite{Mikolov2013}, this advantage diminishes, as new, never-seen labels cannot be interpreted. Nevertheless, the ability to support totally unseen new entities was only demonstrated for the approach by Botha et al.~\\cite{Botha2020}. The other approaches still need to be evaluated on the zero-shot EL task to be certain. \nFor approaches~\\cite{DBLP:conf\/lrec\/KlangN20, huangentity, tweeki:wnut20} that rely on statistics over Wikipedia, new entities in Wikidata may sometimes not exist in Wikipedia to a satisfactory degree.\nAs a consequence, only a subset of all entities in Wikidata is supported. This also applies to the approaches by Boros et al.~\\cite{borosrobust} and by Labusch and Neudecker~\\cite{Labusch2020}, which mostly use Wikipedia information. Additionally, they are susceptible to changes in Wikipedia, especially to statistics calculated over Wikipedia pages, which have to be updated any time a new entity is added.\nBotha et al.~\\cite{Botha2020} also mainly depend on Wikipedia and thus on the availability of the desired Wikidata entities in Wikipedia itself. Since the approach uses Wikipedia articles in multiple languages, it encompasses many more entities than the previous approaches that focus on Wikipedia. 
As Botha et al.'s~\\cite{Botha2020} approach was designed for the zero- and few-shot setting, it is quite robust against changes in the underlying knowledge graph.\n\n\\begin{table*}[hb!]\n \\centering\n \\begin{tabularx}{0.99\\linewidth}{lcccc}\n \\toprule\n \\textbf{Survey} & \\textbf{\\# Approaches} & \\textbf{\\# Wikidata Approaches} & \\textbf{\\# Datasets} & \\textbf{\\# Wikidata Datasets} \\\\ \\midrule\n Sevgili et al.~\\cite{DBLP:journals\/corr\/abs-2006-00575} & 30 & 0 & 9 & 0 \\\\\n Al-Moslmi et al.~\\cite{al2020named} & 39 & 0 & 17 & 0\\\\ \n Oliveira et al.~\\cite{oliveira2020towards} & 36 & 0 & 32 & 0 \\\\\n This survey & 16 & 16 & 21 & 11 \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Survey comparison.}\n \\label{tab:survey_comparison}\n\\end{table*}\n\nApproaches relying on statistics~\\cite{DBLP:journals\/pvldb\/LinLXLC20, DBLP:journals\/corr\/abs-1904-09131} need to update them regularly, but this might be efficiently doable.\nOverall, the robustness against change might be negatively affected by static\/transductive graph embeddings.\n\n\\begin{highlightbox}{\\hyperlink{rq3}{Research Question 3}}{How do current Entity Linking approaches exploit the specific characteristics of Wikidata?}\nThe preceding summary and evaluation of the existing Wikidata Entity Linkers, together with \\Cref{tab:comparison_approaches_wikidata} and the descriptions in \\Cref{subsec:el,subsec:erel}, provide an overview of all approaches with a focus on the incorporated Wikidata characteristics.\n\\end{highlightbox}\n\\begin{highlightbox}{\\hyperlink{rq4}{Research Question 4}}{Which Wikidata characteristics are unexploited by existing Entity Linking approaches?}\nThe most unexploited characteristics are the descriptions, the hyper-relational structure and the type information, as can be seen in \\Cref{tab:comparison_approaches_wikidata}.\nNearly none of the approaches found exploited hyper-relational information in the form of qualifiers, and the one that did (i.e., OpenTapioca) used them only in a simple way. As benchmarks confirm that including qualifiers can improve the performance of link prediction~\\cite{Galkin2020}, this might also be the case for the task of EL. \nFurthermore, description information is still greatly underutilized. It can be a valuable piece of context information for an entity, although descriptions are often short, especially for long-tail entities. \nA possible way to circumvent this challenge is the recent development of \\textbf{Abstract Wikipedia}~\\cite{DBLP:journals\/cacm\/Vrandecic21}, which will support the multilingual generation of descriptions in the future. \nWhile some approaches utilize type information, most use it to limit the set of valid entity candidates to instances of only a small subset of all types, namely person, organization and location. This is surprising, as the paper by Raiman and Raiman~\\cite{DBLP:conf\/aaai\/RaimanR18} (not included in this survey) shows that a fine-grained type system can heavily improve entity linking performance.\nAs mentioned in \\Cref{sec:wikidata}, rank information might also be used by including not only statements of the best rank but also others. For example, if statements exist that were valid at different points in time, including all of them could prove useful when linking documents of different ages. 
But such a special use case is not considered by any existing Wikidata EL approach.\nFor some more ideas on how to include those characteristics in the future, please refer to \\cref{subsec:future}.\n\\end{highlightbox}\n\n\n\n\n\\subsection{Reproducibility}\nNot all approaches are available as a Web API or even as source code. An overview can be found in Table~\\ref{tab:availability}. \nThe number of approaches for Wikidata having an accessible Web API is meager. While the code for some methods exists, this is the case for only half of them. The effort to set up different approaches also varies significantly due to missing instructions or data. \nThus, we refrained from evaluating and filling the missing results for all the datasets in Tables~\\ref{tab:dataset_results_el_only} and~\\ref{tab:dataset_results_el_er}. However, we seek to extend both tables in future work.\n\n\n\n\\section{Problem Definition}\n\\label{sec:problem}\nEL is the task of linking an entity mention in unstructured or semi-structured data to the correct entity in a KG. The focus of this survey lies in unstructured data, namely, natural language utterances.\n\n\\subsection{General terms}\n\\paragraph{Utterance.}\nAn utterance $u$ is defined as a sequence of $n$ words $w$.\n\\begin{equation*}\n u = (w_0, w_1, ... w_{n-1})\n\\end{equation*}\n\n\\paragraph{Entity.}\nThere exists no universally agreed-on definition of an entity in the context of EL~\\cite{rosales2020fine}. \nAccording to the Oxford Dictionary, an entity is:\n\\begin{quote}\n \"something that exists separately from other things and has its own identity\"~\\cite{oxford_entity}\n\\end{quote}\nWhat elements of a KG correspond to entities depends on the KG itself. \nIn the case of Wikidata, we define it as follows:\n\\begin{quote}\n Any Wikidata item is an entity. \n\\end{quote}\nIn Section~\\ref{sec:wikidata}, we further define Wikidata items.\nMany EL approaches limit the space of valid entities. Usually, named entities like a specific person (e.g. \\texttt{Barack Obama}), an organization (e.g. \\texttt{NASA}) or a movie (e.g. \\texttt{The Hateful Eight}) are desirable to link. In general, any entity which can be denoted with a proper noun is a named entity. But sometimes, also common entities like concepts (e.g. \\texttt{dog} or \\texttt{theater}) are included. What exactly is linked depends on the use case~\\cite{rosales2020fine}. \n\n\n\\paragraph{Knowledge Graph.}\nWhile the term knowledge graph was already used before, the popularity increased drastically after Google introduced the \\texttt{Knowledge Graph} in 2012~\\cite{singhal2012introducing, Ehrlinger2016}. \nHowever, similar to an entity, there exists no unanimous definition of a KG~\\cite{Ehrlinger2016, hogan2020knowledge}. \nFor example, F\u00e4rber et al. define a KG as an RDF graph~\\cite{DBLP:journals\/semweb\/FarberBMR18}.\nHowever, a KG being an RDF graph is a strict assumption. While the Wikidata graph is available in the RDF format, the main output format is JSON. Freebase, often called a KG, did not provide the RDF format until a year after its launch~\\cite{freebase_rdf}. 
\nPaulheim defines it less formally as: \n\\begin{quote}\n \"A knowledge graph (i) mainly describes real world entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other and (iv) covers various topical domains.\"~\\cite{Paulheim2017}\n\\end{quote}\nBut constraint (iv) of Paulheim's definition excludes commercial KGs focusing on a single domain, such as a financial, medical, or geographical one. \nAs no unanimously agreed definition exists, we define a knowledge graph very broadly in the following and the Wikidata KG more concretely in Section~\\ref{sec:wikidata}.\nThe term KG is often used as a synonym for the term knowledge base~(KB), but they are not the same~\\cite{jarke1989kbms}. No single definition for a KB exists either. Jarke et al. define it as \"a representation of heuristic and factual information, often in the form of facts, assertions and deduction rules\"~\\cite{jarke1989kbms}. The term is often loosely used to describe a system that is able to store knowledge in the form of structured or unstructured information. While any KG is a KB, not any KB is a KG. The main difference is that a KB does not have to be graph-structured.\n\nIn this survey, a knowledge graph is defined as a directed graph $G=(V,E, \\mathcal{R})$ consisting of vertices $V$, edges $E$ and relations $\\mathcal{R}$. A subset of the vertices corresponds to entities $\\mathcal{E}$ or literals $\\mathcal{L}$. A literal is a concrete value of information like the height or a name of an entity. A literal vertex has incoming edges but no outgoing ones. Other types of vertices might exist depending on the KG.\n$E$ is a set $\\{e_1, \\dots, e_{|E|}\\}$ of edges with $e_j \\in V \\times \\mathcal{R} \\times V$ where relations $\\mathcal{R}$ assign a certain meaning to the connection between entities. Such edges are also called triples. \nThere are special subtypes of KGs, e.g., hyper-relational graphs such as Wikidata.\n\n\\paragraph{Hyper-Relational Knowledge Graphs. \\label{par:hyper}} In a hyper-relational knowledge graph, statements can be specified by more information than a single relation. Multiple relations can, therefore, be part of a statement.\nIn the case of a hyper-relatio\\-nal graph $\\mathcal{G} = (V, E, \\mathcal{R})$, $E$ is a list $(e_1, \\dots, e_n)$ of edges with $e_j \\in V \\times \\mathcal{R} \\times V \\times \\mathcal{P}(\\mathcal{R} \\times V)$ for $1\\le j \\le n$, where $\\mathcal{P}$ denotes the power set. A hyper-relational fact $e_j \\in E$ is usually written as a tuple $(s,r,o,\\mathcal{Q})$, where $\\mathcal{Q}$ is the set of \\emph{qualifier pairs} $\\{(qr_{i},qv_{i}) \\}$ with \\emph{qualifier relations} $qr_{i} \\in \\mathcal{R}$ and \\emph{qualifier values} $qv_{i} \\in V$. The triple $(s,r,o)$ is referred to as the \\emph{main triple} of the fact. $\\mathcal{Q}_j$ denotes the qualifier pairs of $e_j$~\\cite{Galkin2020}. \nFor example, the \\texttt{nominated for}~edge in Fig.~\\ref{fig:wikidata_subgraph} has two additional qualifier relations and would be represented as \\nohyphens{\\texttt{(Ennio Morricone, nominated for, Academy Award for Best Original Score, \\{(for work, The Hateful Eight), (statement is subject of, 88th Academy Awards)\\})}}. \n\n\\subsection{Tasks}\nSince not only approaches that solely do EL were included in the survey, Entity Recognition will also be defined. 
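\n\nBefore turning to the task definitions, the hyper-relational fact notation from the previous subsection can be made concrete with a minimal, hypothetical sketch (our own illustration in Python, using labels instead of Wikidata identifiers; the field names are not taken from any surveyed approach or from the Wikidata data model):\n\\begin{verbatim}\n# Hypothetical encoding of the hyper-relational fact (s, r, o, Q)\n# from the example above.\nfact = {\n    \"subject\":  \"Ennio Morricone\",                        # s\n    \"relation\": \"nominated for\",                          # r\n    \"object\":   \"Academy Award for Best Original Score\",  # o\n    \"qualifiers\": [                                       # Q = {(qr_i, qv_i)}\n        (\"for work\", \"The Hateful Eight\"),\n        (\"statement is subject of\", \"88th Academy Awards\"),\n    ],\n}\nmain_triple = (fact[\"subject\"], fact[\"relation\"], fact[\"object\"])\n\\end{verbatim}\nThe main triple corresponds to $(s,r,o)$, and the qualifier list corresponds to the set of qualifier pairs $\\mathcal{Q}$.\n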
\n\n\\paragraph{Entity Recognition.} ER is the task of identifying the mention span $$m = (w_i, ..., w_k) | 0 \\leq i \\leq k \\leq n-1$$ of all entities in an utterance $u$. Each such span is called an entity mention $m$. The word or word sequence referring to an entity is also known as the surface form of an entity.\n An utterance can contain more than one entity, and a mention often consists of more than one word. Sometimes, a broad type of an entity is classified too. Usually, those types are \\texttt{person}, \\texttt{location} and \\texttt{organization}. Some of the considered approaches do such a classification task and also use it to improve the EL.\n\n It is also up for debate what an entity mention is. In general, a literal reference to an entity is considered a mention. But whether to include pronouns or how to handle overlapping mentions depends on the use case. \n\n\\paragraph{Entity Linking.} \nThe goal of EL is to find a mapping function that maps all found mentions to the correct KG entities and also to identify whether an entity mention does not exist in the KG. \n\nIn general, EL takes the utterance $u$ and all $k$ identified entity mentions $M=(m_1, ... m_k)$ in the utterance and links each of them to an element of the set $(\\mathcal{E}\\cup \\{\\mathit{NIL}\\})$. The $\\mathit{NIL}$ element is added to the set of vertices to signal that the entity the mention refers to is not known to the KG. Such a $\\mathit{NIL}$ entity is also called an out-of-KG entity. Another way to handle such unknown entities is to create emerging entities~\\cite{DBLP:conf\/www\/HoffartAW14}. In that case, the entity is still unknown to the KG, but after encountering it, it is separately stored using information like the provided entity mentions. Then, no single $\\mathit{NIL}$ entity, but a growing set of emerging entities exists. EL is then done using the entities in the KG and all already encountered emerging entities. While all KG-unknown entities point to the same single $\\mathit{NIL}$ entity, they might point to different emerging entities. \n\nEL is often split into two subtasks. First, potential candidates for an entity are retrieved from a KG. This is necessary as doing EL over the whole set of entities is often intractable. This \\emph{candidate generation} is usually performed via efficient metrics measuring the similarities between mentions in the utterance and entities in the KG. The result is a set of candidates $C=\\{c_0, \\cdots, c_l\\}$ for each entity mention $m$ in the utterance. \nAfter limiting the space of possible entities, one of the available candidates is chosen for each entity. This is done via a \\emph{candidate ranking} algorithm, which assigns a rank to each candidate.\nThe assignment is done by computing a score for each candidate signaling how likely it is the correct entity. The candidate with the highest score is chosen as the correct entity for the mention.\n\nThere are two different categories of ranking methods, called \\emph{local} and \\emph{global}~\\cite{DBLP:conf\/acl\/RatinovRDA11}. Local methods use a scoring function\n\\begin{align*}\n \\mathit{score_{\\mathit{local}}}: C \\times M &\\to \\mathbb{R}\\\\ \n \\text{ given by } (c,m) &\\mapsto \\mathit{score_{\\mathit{local}}}(c, m)\n\\end{align*}\nwhere $\\mathit{score_{\\mathit{local}}}$ assigns a score to a single candidate. 
The goal is then to optimize the objective function:\n\\begin{equation*}\n A^* = \\argmax_A \\sum_{i=1}^k \\mathit{score_{\\mathit{local}}}(a_i, m_i) | a_i \\in C_i\n\\end{equation*}\nwhere $A = \\{a_1, ... , a_k\\} \\in \\mathcal{P}(\\mathcal{E})$ is an assignment of one candidate to each entity mention $m_i$. $\\mathcal{P}(*)$ is the power set operator.\n\nThe rank assignment and score calculation of the candidates of one entity are often not independent of the other entities' candidates. In this case, the ranking will be done by including the whole assignment via a global scoring function:\n\\begin{equation*}\n \\mathit{score_{\\mathit{global}}}: \\mathcal{P}(\\mathcal{E}) \\to \\mathbb{R} \\text{ given by }A \\mapsto \\mathit{score_{\\mathit{global}}}(A)\n\\end{equation*}\nThe objective function is then:\n\\begin{align*}\n A^* &= \\argmax_A \\left[\\sum_{i=1}^k \\mathit{score_{\\mathit{local}}}(a_i, m_i)\\right] \\\\ \n &+ \\mathit{score_{\\mathit{global}}}(A) \\ | \\ a_i \\in C_i\n\\end{align*}\n\nNote that there is also some ambiguity in the objective of linking itself. For example, there exists a Wikidata entity \\texttt{2014 FIFA World Cup} and an entity \\texttt{FIFA World Cup}. There is no unanimous solution on how to link the entity mention in the utterance \\texttt{In 2014, Germany won the \\underline{FIFA World Cup}}.\n\nSometimes EL is also called Entity Disambiguation, which we see more as a part of EL, namely the step in which entities are disambiguated via the candidate ranking.\n\nThere exist multiple special cases of EL.\n\\textit{Multilingual EL} tries to link entity mentions occurring in utterances of different languages to one shared KG, for example, English, Spanish or Chinese utterances to one language-agnostic KG. \nFormally, an entity mention $m$ in some utterance $u$ of some context language $l_c$ has to be linked to a language-agnostic KG which includes information in multiple languages $L_{KG}=\\{l_1,...,l_k\\}$, where $l_c$ may or may not be an element of $L_{KG}$~\\cite{Botha2020}.\n\n\\textit{Cross-lingual EL} tries to link entity mentions in utterances in different languages to a KG in one dedicated language, for example, Spanish and German utterances to an English KG~\\cite{Rijhwani2019}. In that case, the multilingual EL problem gets constrained to $L_{KG}=\\{l_{KG}\\}$ where $l_c \\neq l_{KG}$.\n\nIn \\textit{zero-shot EL}, the entities occurring at test time $\\mathcal{E}_{\\mathit{test}}$ are not available at training time $\\mathcal{E}_{\\mathit{train}}$: \n$$\n\\mathcal{E}_{\\mathit{test}} \\cap \\mathcal{E}_{\\mathit{train}} = \\emptyset~\\text{where}~\\mathcal{E}_{\\mathit{test}} \\subset \\mathcal{E},~\\mathcal{E}_{\\mathit{train}} \\subset \\mathcal{E}\n$$\nThus, the entity linker must be able to handle unseen entities. \nThe term was coined by Logeswaran et al.~\\cite{Logeswaran2019}, but they limited the task to having only descriptions available, while our definition does not include such a limitation. \n\n\\textit{KB\/KG-agnostic EL} approaches are able to support different KBs or KGs, often multiple in parallel. \nSuch approaches usually impose requirements on the knowledge base; for example, that the KG must be available in RDF format. 
We refer the interested reader to central works~\\cite{DBLP:conf\/ecai\/UsbeckNRGCAB14, Moussallem2017, DBLP:conf\/esws\/ZwicklbauerSG16} or our Appendix.\n\\section{Related work}\n\\label{sec:related-work}\n\nWhile there are multiple recent surveys on EL, none of them specializes in analyzing EL on Wikidata.\n\nThe extensive survey by Sevgili et al.~\\cite{DBLP:journals\/corr\/abs-2006-00575} gives an overview of all neural approaches from 2015 to 2020. It compares 30 different approaches on nine different datasets. \nAccording to our criteria, none of the included approaches focuses on Wikidata. The survey also discusses the current state of the art of domain-independent and multi-lingual neural EL approaches. However, the influence of the underlying KG was not of concern to the authors. It is not described in detail how they found the considered approaches.\n\nIn the survey by Al-Moslmi et al.~\\cite{al2020named}, the focus lies on ER and EL approaches over KGs in general. It considers approaches from 2014 to 2019. It gives an overview of the different approaches to ER, Entity Disambiguation, and EL. A distinction between Entity Disambiguation and EL is made, while our survey sees Entity Disambiguation as a part of EL. The roles of different domains, text types, or languages are discussed. The authors considered 89 different approaches and tools. \nMost approaches were designed for DBpedia or Wikipedia, some for Freebase or YAGO, and some to be KG-agnostic. Again, none focused on Wikidata. $F_1$ scores were gathered on 17 different datasets.\nFifteen algorithms, for which an implementation or a Web API was available, were evaluated using GERBIL~\\cite{Roeder2018}.\n\nAnother survey~\\cite{oliveira2020towards} examines recent approaches that employ holistic strategies. Holism in the context of EL is defined as the usage of domain-specific inputs and metadata, joint ER-EL approaches, and collective disambiguation methods. Thirty-six research articles with a holistic aspect were found; none of the designed approaches linked explicitly to Wikidata.\n\nA comparison of the number of approaches and datasets included in the different surveys can be found in Table~\\ref{tab:survey_comparison}. \n\nGoing further into the past, the existing surveys~\\cite{shen2014entity, DBLP:journals\/tacl\/LingSW15} do not consider Wikidata at all or only to a small extent, as it is still a rather recent KG in comparison to established ones like DBpedia, Freebase or YAGO. For an overview of different KGs on the web, we refer the interested reader to the paper by Heist et al.~\\cite{DBLP:series\/ssw\/HeistHRP20}.\n\nNone of the surveys found focused on the differences of EL over different knowledge graphs or on the particularities of EL over Wikidata. \n\\section{Survey Methodology}\nThere exist several different ways in which a survey can contribute to the research field~\\cite{Kitchenham2004}:\n\\begin{enumerate}\n \\item \\label{cont_1} Providing an overview of current prominent areas of research in a field\n \\item \\label{cont_2} Identification of open problems\n \\item \\label{cont_3} Providing a novel approach tackling the extracted open problems (in combination with the identification of open problems)\n\\end{enumerate}\nWe analyse different recent and older surveys on EL and highlight specific areas which are not covered, as well as our survey's novelties (see also Section~\\ref{sec:discussion}). 
While some very recent surveys exist~\\cite{al2020named, oliveira2020towards, DBLP:journals\/corr\/abs-2006-00575}, they do not consider the different underlying Knowledge Graphs as a significant factor affecting the performance of EL approaches. Furthermore, barely any approaches included in other surveys are working on Wikidata and take the particular characteristics of Wikidata into account (see Section~\\ref{sec:related-work}). \nOur survey fills these gaps by contributing according to \\Cref{cont_1,cont_2}.\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabularx}{.78\\textwidth}{Y Y}\n \\toprule\n \\multicolumn{2}{c}{\\textbf{Criteria}} \\\\ \\midrule\n \\textbf{Must satisfy all} & \\textbf{Must not satisfy any} \\\\ \n \\midrule\n \\begin{itemize}\n \\item Approaches that consider the problem of unstructured EL over Knowledge Graphs \n \\item Approaches where the target Knowledge Graph is Wikidata \n \\end{itemize}\n &\n \\begin{itemize}\n \\item Approaches conducting Semi-structured EL\n \\item Approaches not doing EL in the English language\n \\end{itemize} \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Qualifying and disqualifying criteria for approaches. \"Semi-structured\" in this table means that the entity mentions do not occur in natural language utterances but in more structured documents such as tables.}\n \\label{tab:qual_disqual}\n\\end{table*}\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabularx}{.78\\textwidth}{YY}\n \\toprule\n \\multicolumn{2}{c}{\\textbf{Criteria}} \\\\ \\midrule\n \\textbf{Must satisfy all} & \\textbf{Must not satisfy any} \\\\ \\midrule\n \\begin{itemize}\n \\item Datasets that are designed for EL or are used for evaluation of Wikidata EL\n \\item Datasets must include Wikidata identifiers from the start; an existing dataset later mapped to Wikidata is not permitted\n \\end{itemize} &\n \\begin{itemize}\n \\item Datasets without English utterances\n \\end{itemize} \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Qualifying and disqualifying criteria for the dataset search.}\n \\label{tab:qual_disqual_datasets}\n\\end{table*}\n\nUntil December 18, 2020, we continuously searched for existing and newly released scientific work suitable for the survey.\nNote, this survey includes only scientific articles that were accessible to the authors.\\footnote{\\url{https:\/\/www.projekt-deal.de\/about-deal\/}}\n\n\\subsection{Approaches}\nOur selection of approaches stems from a search over the following search engines:\n\\begin{itemize}\n \\item Google Scholar\n \\item Springer Link\n \\item Science Direct \n \\item IEEE Xplore Digital Library\n \\item ACM Digital Library\n\\end{itemize}\n\nTo gather a wide choice of approaches, the following steps were applied.\n\\texttt{Entity Linking}, \\texttt{Entity Disambiguation} or variations of the phrases~\\footnote{Google Scholar search query: \\texttt{(intitle:\"entity\" OR intitle:\"entities\") AND (intitle:\"link\" OR intitle:\"linking\" OR intitle:\"disambiguate\" OR entitle:\"disambiguation\") AND intext:\"Wikidata\"}} had to occur in the title of the paper. The publishing year was not a criterion due to the small number of valid papers and the relatively recent existence of Wikidata. Any approach where \\texttt{Wikidata} was not occurring once in the full text was not considered. \nThe systematic search process resulted in exactly 150 papers and theses (including duplicates). 
\n\nFollowing this search, the resulting papers were filtered again using the qualifying and disqualifying criteria, which can be found in Table~\\ref{tab:qual_disqual}. This resulted in 15 papers and one master's thesis.\n\nThe search resulted in papers from the period 2018 to 2020. While there exist EL approaches from 2016~\\cite{spitz2016so, almeida2016streets} working on Wikidata, they did not qualify according to the criteria above. \n\n\\subsection{Datasets}\n\nThe dataset search was conducted in two ways. \nFirst, a search for potential datasets was performed via the same search engines as used for the approaches.\nSecond, all datasets occurring in the system papers were considered if they fulfilled the criteria.\nThe criteria for the inclusion of a dataset can be found in Table~\\ref{tab:qual_disqual_datasets}. \n\nWe filtered the dataset papers in the following way. First, \\texttt{Entity Linking} or \\texttt{Entity Disambiguation} or variations thereof had to occur in the title, similar to the search for the Entity Linking approaches. Additionally, \\texttt{dataset}, \\texttt{data}, \\texttt{corpus} or \\texttt{benchmark} had to occur in the title~\\footnote{Google Scholar Search Query: \\texttt{intext:\"Wikidata\" AND (intitle:dataset OR intitle:data OR intitle:benchmark OR intitle:corpus) AND (intitle:entity OR intitle:entities) AND (intitle:link OR intitle:linking OR intitle:disambiguate OR intitle:disambiguation)}}, and \\texttt{Wikidata} had to appear at least once in the full text. Due to those keywords, other datasets suitable for EL, but constructed for a different purpose like KG population, were not included. This resulted in 26 papers (including duplicates). Of those, only two included Wikidata identifiers and focused on English.\n\nEighteen datasets accompanied the different approaches. Many of those did not include Wikidata identifiers from the start. This made them less suitable for examining the influence of Wikidata on the design of datasets. They were included in the section about the approaches but not in the section about the Wikidata datasets.\n\nAfter the removal of duplicates, 11 Wikidata datasets were included in the end.\n\n\\section{Wikidata}\n\\label{sec:wikidata}\nWikidata is a community-driven knowledge graph edited by humans and machines. \nThe Wikidata community can enrich the content of Wikidata by, for example, adding\/changing\/removing entities, statements about them, and even the underlying ontology information.\nAs of July 2020, it contained around 87 million items of structured data about various domains. Seventy-three million items can be interpreted as entities due to the existence of an \\texttt{instance of} property. As a comparison, DBpedia contains around 5 million entities~\\cite{Tanon2020}. Note that the \\texttt{instance of} property includes a much broader scope of entities than the ones interpreted as entities for DBpedia.\nIn comparison to other similar KGs, the Wikidata dumps are updated most frequently~(\\Cref{tab:kg_statistics}). 
But note that this only applies to the dumps, if one considers direct access via the Website or a SPARQL endpoint, both, Wikidata~\\footnote{https:\/\/www.wikidata.org\/}\\footnote{https:\/\/query.wikidata.org} and DBpedia~\\footnote{https:\/\/www.dbpedia.org\/resources\/live\/}\\footnote{https:\/\/www.dbpedia.org\/resources\/live\/dbpedia-live-sync\/} provide continuously updated knowledge.\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabular}{l c c c}\n \\toprule\n \\textbf{KG}& \\textbf{\\#Entities in million} & \\textbf{\\#Labels\/Aliases in million} &\\textbf{last updated} \\\\ \\midrule\n Wikidata & 78 & 442 & up to 4 times a month~\\footnote{https:\/\/dumps.wikimedia.org\/wikidatawiki\/entities\/} \\\\\n DBpedia & 5 & 22 & monthly\\\\\n Yago4 & 67 & 371 & November 2019\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{KG statistics by~\\cite{Tanon2020}.}\n \\label{tab:kg_statistics}\n\\end{table*}\n\n\\subsection{Definition}\nWikidata is a collection of \\emph{entities} where each such entity has a page on Wikidata. An entity can be either an \\textit{item} or a \\textit{property}. Note, an entity in the sense of Wikidata is generally not the same as an entity one links to via EL. For example, Wikidata entities are also properties that describe relations between different items. Linking to such relations is closer to Relation Extraction~\\cite{Sorokin2017,Lin2017,Bastos2020}. Furthermore, many items are more abstract classes, which are usually not considered as entities linked to in EL.\nNote that if not mentioned otherwise, if we speak about entities, entities in the context of EL are meant.\n \n\\paragraph{Item.} Topics, classes, or objects are defined as items. An item is enriched with more information using statements about the item itself. In general, items consist of one label, one description, and aliases in different languages. An unique and language-agnostic identifier identifies items in the form \\texttt{Q[0-9]+}. An example of an item can be found in Figure~\\ref{fig:item_example}.\n\nFor example, the item with the identifier \\texttt{Q23848} has the label \\texttt{Ennio~Morricone}, two aliases, \\texttt{Dan Savio} and \\texttt{Leo Nichols}, and \\texttt{Italian composer, orchestrator and conductor \\\\ (1928-2020)} as description at the point of writing. The corresponding Wikidata page can also be seen in Figure~\\ref{fig:item_example}. \n\\begin{figure}[hb!]\n \\includegraphics[width=\\linewidth]{Images\/WD_Ennio.png}\n \\caption{Example of an item in Wikidata.}\n \\label{fig:item_example}\n\\end{figure}\n\n\\paragraph{Property.} A property specifies a relation between items or literals. Each property also has an identifier similar to an item, specified by \\texttt{P[0-9]+}. For instance, a property \\texttt{P19} specifies the place of birth \\texttt{Rome} for \\texttt{Ennio~Morricone}. In NLP, the term \\texttt{relation} is commonly used to refer to a connection between entities. A property in the sense of Wikidata is a type of relation. To not break with the terminology used in the examined papers, when we talk about relations, we always mean Wikidata properties if not mentioned otherwise. \n\n\\paragraph{Statement.} A statement introduces information by giving structure to the data in the graph. It is specified by a \\emph{claim}, and \\emph{references}, \\emph{qualifiers} and \\emph{ranks} related to the claim. Statements are assigned to items in Wikidata.\nA claim is defined as a pair of property and some value. A value can be another item or some literal. 
Multiple values are possible for a property. Even an \\texttt{unknown value} and a \\texttt{no value} exists. \n\n\\emph{References} point to sources making the claims inside the statements verifiable. In general, they consist of the source and date of retrieval of the claim. \n\n\\emph{Qualifiers} define the value of a claim further by contextual information. For example, a qualifier could specify for how long one person was the spouse of another person. Qualifiers enable Wikidata to be hyper-relational (see \\Cref{par:hyper}). Structures similar to qualifiers also exist in some other knowledge graphs, such as the inactive Freebase in the form of Compound Value Types~\\cite{DBLP:conf\/sigmod\/BollackerEPST08}. \n\n\\emph{Ranks} are used if multiple values are valid in a statement. If the population of a country is specified in a statement, it might also be useful to have the populations of past years available. The most up-to-date population information usually has then the highest rank and is thus usually the most desirable claim to use. \n\nStatements can be also seen in Figure~\\ref{fig:item_example} at the bottom. For example, it is defined that \\texttt{Ennio~Morricone} is an \\texttt{instance of} the class \\texttt{human}. This is also an example for the different types of items. While \\texttt{Ennio~Morricone} is an entity in our sense, \\texttt{human} is a class. \n\n\\begin{figure*}[htb!]\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width= \\linewidth]{Images\/items_wikidata.pdf}\n \\caption{Number of items of Wikidata since launch~\\cite{Manske2020}.}\n \\label{fig:wikidata_items}\n \\end{subfigure}\\qquad\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/labels_per_item.pdf} \n \\caption{Average number of labels (+ aliases) per item~\\cite{Manske2020}.}\n \\label{fig:sub_item_labels}\n \\end{subfigure}\n \n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/percentage_items_without_aliases.pdf} \n \\caption{Percentage of items without any aliases~\\cite{Manske2020}.}\n \\label{fig:item_labels_no_aliases}\n \\end{subfigure}\\qquad\n \\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width= \\linewidth]{Images\/percentage_items_without_description.pdf}\n \\caption{Percentage of items without a description~\\cite{Manske2020}.}\n \\label{fig:wikidata_descriptions}\n \\end{subfigure}\n \\caption{Statistics on Wikidata based on~\\cite{Manske2020}.}\n\\end{figure*}\n\n\\paragraph{Other structural elements.}\nThe aforementioned elements are essential for Wikidata, but more do exist. For example, there are entities (in the sense of Wikidata) corresponding to Lexemes, Forms, Senses or Schemas. Lexemes, Forms and Senses are concerned with lexicographical information, hence words, phrases and sentences themselves. This is in contrast to Wikidata items and properties, which are directly concerned with things, concepts and ideas. Schemas formally subscribe to subsets of Wikidata entities. For example, any Wikidata item which has \\texttt{actor} as its \\texttt{occupation} is an \\texttt{instance of} the class \\texttt{human}. Both, lexicographical and schema information, are usually not directly of relevance for EL. 
Therefore, we refrain from introducing them in more detail.\n\nFor more information on Wikidata, see the paper by Denny Vrande{\\v{c}}i{\\'c} and Markus Kr{\\\"{o}}tzsch~\\cite{DBLP:journals\/cacm\/VrandecicK14}.\n\n\\paragraph{Differences in structure to other knowledge graphs.}\nDBpedia extracts its information from Wikipedia and Wikidata. It maps the information to its own ontology.\nDBpedia's statements consist of only single triples (\\texttt{(subject, predicate, object)}) since it follows the RDF specification~\\cite{DBLP:journals\/semweb\/LehmannIJJKMHMK15}. Additional information like qualifiers, references or ranks does not exist, but it can be modeled via additional triples. As this is no inherent feature of DBpedia, it is harder to use, since there are no strict conventions. Entities in DBpedia have human-readable identifiers, and there exist entities per language~\\cite{DBLP:journals\/semweb\/LehmannIJJKMHMK15} with partly differing information. Hence, for a single concept or thing, multiple DBpedia entities might exist. For example, the English entity of the city Munich\\footnote{https:\/\/dbpedia.org\/page\/Munich} has 25 entities as \\texttt{dbo:administrativeDistrict} assigned. The German entity\\footnote{http:\/\/de.dbpedia.org\/page\/M\u00fcnchen} has only a single one. It seems that this originates from a different interpretation of the predicate \\texttt{dbo:administrativeDistrict}. \n\nYago4 extracts all its knowledge from Wikidata but filters out information it deems inadequate. For example, if a property is used too seldom, it is removed. If a Wikidata entity does not have a class that exists in Schema.org\\footnote{https:\/\/schema.org}, it is removed. The RDF specification format is used. Qualifier information is included indirectly via separate triples. Rank information and references of statements do not exist. The identifiers follow either a human-readable form, if available via Wikipedia or Wikidata, or use the Wikidata QID. However, in contrast to DBpedia, only one entity exists per thing or concept~\\cite{Tanon2020}.\n\nFor a thorough comparison of Wikidata and other KGs (with respect to Linked Data Quality~\\cite{zaveri2016quality}), please refer to the paper by F\u00e4rber et al.~\\cite{DBLP:journals\/semweb\/FarberBMR18}.\n\\subsection{Discussion}\n\\paragraph{Novelties.} A useful characteristic of Wikidata is that the community can openly edit it. Another novelty is that there can be a plurality of facts, as contradictory facts based on different sources are allowed. Similarly, time-sensitive data can also be included by qualifiers and ranks. The population of a country, for example, changes from year to year, which can be represented easily in Wikidata. Lastly, due to its language-agnostic identifiers, Wikidata is inherently multilingual. Language only starts playing a role in the labels and descriptions of an item.\n\n\\paragraph{Strengths.} Due to the inclusion of information by the community, recent events will likely be included. The knowledge graph is thus much more up-to-date than most other KGs. Freebase has been unsupported for years now, and DBpedia updates its dumps only every month. Note, the novel DBpedia live 2.0\\footnote{\\url{https:\/\/forum.dbpedia.org\/t\/differences-in-results-on-dbpedia-live-and-http-dbpedia-org-sparql-endpoint\/888\/2}} is updated when changes to a Wikipedia page occur, but, as discussed, this makes research harder to replicate. 
Thus, Wikidata is much more suitable and useful for industry applications such as smart assistants since it is the most complete open-accessible data source to date. In Figure~\\ref{fig:wikidata_items}, one can see that number of items in Wikidata is increasing steadily. The existence of labels and additional aliases (see Figure~\\ref{fig:sub_item_labels}) helps EL as a too-small number of possible surface forms often lead to a failure in the candidate generation. DBpedia does, for example, not include aliases, only a single exact label~\\footnote{There exist some predicates (e.g., \\texttt{foaf:name}, \n\\texttt{dbp:commonName} or \\texttt{dbp:conventionalLongName}) that might point to aliases but they are often either not used or specify already stated aliases.}; to compensate, additional resources like Wikipedia are often used to extract a label dictionary of adequate size~\\cite{Moussallem2017}. Even each property in Wikidata has a label~\\cite{DBLP:journals\/cacm\/VrandecicK14}. Fully language model-based approaches are therefore more naturally usable~\\cite{mulang2020encoding}.\nAlso, nearly all items have a description, see Figure~\\ref{fig:wikidata_descriptions}. This short natural language phrase can be used for context similarity measures with the utterance. \nThe inherent multilingual structure is intuitively useful for multilingual Entity Linking. \nTable~\\ref{tab:statistics_languages} shows information about the use of different languages in Wikidata. As can be seen, item labels\/aliases are available in up to 457 languages. But not all items have labels in all languages. On average, labels, aliases and descriptions are available in 29.04 different languages. However, the median is only 6 languages. Many entities will, therefore, certainly not have information in many languages. The most dominant language is English, but not all elements have label\/alias\/description information in English. For less dominant languages, this is even more severe. German labels exist, for example, only for 14 \\%, and Samoan labels for 0.3 \\%. \nContext information in the form of descriptions is also given in multiple languages. Still, many languages are again not covered for each entity (as can be seen by a median of only 4 descriptions per element). \nWhile the multilingual label and description information of items might be useful for language model-based variants, the same information for properties enables multilingual language models. Because, on average, 21.18 different languages are available per property for labels, one could train multilingual models on the concatenations of the labels of triples to include context information. But of course, there are again many properties with a lower number of languages, as the median is also only 6 languages. 
Cross-lingual EL is therefore certainly necessary to use language model-based EL in multiple languages.\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabularx}{\\linewidth}{p{10cm}XX}\n \\toprule\n & \\textbf{Items} & \\textbf{Properties} \\\\\n \\midrule\n \\textbf{Number of languages} & 457 & 427\\\\\n \\textbf{(average, median) of \\# languages per element (labels + descriptions)}&29.04, 6 &21.24, 13 \\\\\n \\textbf{(average, median) of \\# languages per element (labels)}&5.59 , 4 &21.18, 6 \\\\\n \\textbf{(average, median) of \\# languages per element (descriptions)}&26.10, 4&9.77, 6 \\\\\n \\textbf{\\% elements without English labels}&15.41\\% & 0\\% \\\\\n \\textbf{\\% elements without English descriptions}&26.23\\%&1.08\\% \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Statistics - Languages Wikidata (Extracted from dump~\\cite{Foundation2020a}).}\n \\label{tab:statistics_languages}\n\\end{table*}\nBy using the qualifiers of hyper-relational statements, more detailed information is available, useful not only for Entity Linking but also for other problems like Question Answering. The inclusion of hyper-relational statements is also more challenging. Novel graph embeddings have to be developed and utilized, which can represent the structure of a claim enriched with qualifiers~\\cite{DBLP:conf\/www\/RossoYC20, Galkin2020}. \n\nRanks are of use for EL in the following way. Imagine a person had multiple spouses throughout his\/her life. In Wikidata, all those relationships are assigned to the person via statements of different ranks. If now an utterance is encountered containing information on the person and her\/his spouse, one can utilize the Wikidata statements for comparison. Depending on the time point of the utterance, different statements apply. One could, for example, weigh the relevance of statements according to their rank. If now a KG (for example Yago4~\\cite{Tanon2020}) includes only the most valid statement, the current spouse, utterances containing past spouses are harder to link. \n\nFor references, up to now, no found approach did utilize them for EL. One use case might be to filter statements by reference if one knows the source's credibility, but this is more a measure to cope with the uncertainty of statements in Wikidata and not directly related to EL.\n\n\\begin{table*}[htb!]\n \\centering\n \\begin{tabular}{lccccc}\n \\toprule\n \\textbf{\\# Labels\/aliases}& \\num{70124438} & \\num{2041651} & \\num{828471} & \\num{89210} & \\num{3329} \\\\\n \\textbf{\\# Items per label\/alias}&$1$ & $2$ & $3-10$ & $11 - 100$ & $< 100$ \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Number of English labels\/aliases pointing to a certain number of items in Wikidata (Extracted from dump~\\cite{Foundation2020a}).}\n \\label{tab:mention_items}\n\\end{table*}\n\n\\paragraph{Weaknesses.} However, this community-driven approach also introduces challenges. For example, the list of labels of an item will not be exhaustive, as shown in Figures~\\ref{fig:sub_item_labels} and ~\\ref{fig:item_labels_no_aliases}. The graphs consider labels and aliases of all languages. While the median of labels and aliases is around 4 per element, not all are useful for Entity Linking. \\texttt{Ennio Morricone} does not have an alias solely consisting of \\texttt{Ennio} while he will certainly sometimes be referenced by that. Thus, one can not rely on the exact labels alone. But interestingly, Wikidata has properties for the fore- and surname alone, just not as a label or alias. 
A close examination of what information to use is essential. \n\nThis is also a problem in other KGs. Also, Wikidata often has items with very long, noisy, error-prone labels, which can be a challenge to link to~\\cite{mulang2020encoding}. Nearly 20 percent of labels have a length larger than 100 letters, see Figure~\\ref{fig:wikidata_label_lengths}. Due to the community-driven approach, false statements also occur due to errors or vandalism~\\cite{Heindorf2016}.\n\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=\\linewidth]{Images\/wikidata_length_percentiles.pdf}\n \\caption{Percentiles of English label lengths (Extracted from dump~\\cite{Foundation2020a}).}\n \\label{fig:wikidata_label_lengths}\n\\end{figure}\nAnother problem is that entities lack of facts (here defined as statements not being labels, descriptions, or aliases). According to Tanon et al.~\\cite{Tanon2020}, in March 2020, DBpedia had, on average, $26$ facts per entity while Wikidata had only $12.5$. This is still more than YAGO4 with $5.1$. \nTo tackle such long-tail entities, different approaches are necessary.\nThe lack of descriptions can also be a problem. Currently, around 10\\% of all items do not have a description, as shown in Figure~\\ref{fig:wikidata_descriptions}. Luckily, the situation is increasingly improving. \n\nA general problem of Entity Linking is that a label or alias can reference multiple entities, see Table~\\ref{tab:mention_items}. While around 70 million mentions point each to a unique item, 2.9 million do not. Not all of those are entities by our definition but, e.g., also classes or topics. In addition, longer labels or aliases often correspond to non-entity items. Thus, the percentage of entities with overlapping labels or aliases is certainly larger than for all items. To use Wikidata as a Knowledge Graph, one needs to be cautious of the items one will include as entities. For example, there exist \\texttt{Wikimedia disambiguation page} items that often have the same label as an entity in the classic sense. Both \\texttt{Q76} and \\texttt{Q61909968} have \\texttt{Barack Obama} as the label. Including those will make disambiguation more difficult.\nAlso, the possibility of contradictory facts will make EL over Wikidata harder. \n\nIn Wikification, also known as EL on Wikipedia, large text documents for each entity exist in the knowledge graph, enabling text-heavy methods~\\cite{Wu2019}. Such large textual contexts (besides the descriptions and the labels of triples itself) do not exist in Wikidata, requiring other methods or the inclusion of Wikipedia. However, as Wikidata is closely related to Wikipedia, an inclusion is easily doable. Every Wikipedia article is connected to a Wikidata item. The Wikipedia article belonging to a Wikidata item can be, for example, extracted via a SPARQL~\\footnote{https:\/\/www.w3.org\/TR\/rdf-sparql-query\/} query to the Wikidata Query Service~\\footnote{https:\/\/query.wikidata.org} using the \\texttt{http:\/\/schema.org\/about} predicate. The Wikidata item of a Wikipedia article can be simply found on the article page itself or by using the Wikipedia API~\\footnote{https:\/\/en.wikipedia.org\/w\/api.php}. \n\nOne can conclude that the characteristics of Wikidata, like being up to date, multilingual and hyper-relational, introduce new possibilities. 
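As a concrete illustration of the close coupling between Wikipedia and Wikidata described above, the following minimal sketch (assuming Python with the \\texttt{requests} library; the restriction to the English Wikipedia via \\texttt{schema:isPartOf} and the chosen \\texttt{User-Agent} string are assumptions of the example) asks the Wikidata Query Service for the Wikipedia article about the item \\texttt{Q76} mentioned above:
\\begin{verbatim}
import requests  # any SPARQL or HTTP client can be used instead

# Wikipedia articles are linked to Wikidata items via the schema:about predicate.
query = """
SELECT ?article WHERE {
  ?article schema:about wd:Q76 ;                        # item Q76 (label: Barack Obama)
           schema:isPartOf <https://en.wikipedia.org/> .  # restrict to English Wikipedia
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "el-survey-example/0.1"},  # a descriptive agent is recommended
)

for binding in response.json()["results"]["bindings"]:
    print(binding["article"]["value"])  # URL of the Wikipedia article about Q76
\\end{verbatim}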
At the same time, the existence of long-tail entities, noise or contradictory facts poses a challenge.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe rapid development of experimental techniques in cold-atom systems\\cite{Bloch_2005, giorgini_theory_2008, Strinati_2018} gave rise to a revival of interest in unconventional fermionic superfluids exhibiting pairing at finite center of mass momentum, the so called Fulde-Ferrell-Larkin-Ovchinnikov states.\\cite{fulde_superconductivity_1964, larkin_nonuniform_1965, Agterberg_2020} Such phases were since long discussed in solid-state physics contexts, but more recently were also predicted to arise in Fermi mixtures of ultracold atoms involving a population (and\/or mass) imbalance between the two particle species forming the Cooper pairs. Due to the high level of controllability, this class of systems constitutes an interesting and promising platform for exploiting exotic superfluid phases including those of the FFLO type. \n\n In a cold, two-component Fermi mixture increasing the concentration imbalance may serve to gradually mismatch the Fermi surfaces of the two atomic species, which suppresses pairing and ultimately drives the system to the normal, polarized metallic phase. The modulated FFLO-type superfluid was predicted to occur as an intermediate phase constituting an energetic compromise between the uniform, BCS-type superfluid and the polarized Fermi liquid. Extensive studies spread over years (see e.g. Refs. \\onlinecite{He_2006, Gubbels_2009, Baarsma_2010, radzihovsky_imbalanced_2010, Cai_2011, Baarsma_2013, Rosher_2015, Karmakar_2016, Kinnunen_2018, Pini_2021_2, Rammelmuller_2021}) addressed the energetic aspects of the problem, in particular the competition between the different candidate modulated ground states. The emergent consensus is that (at mean field level) a superfluid pair-density-wave (FFLO) phase is rather robustly stable, albeit in a relatively narrow region of the phase diagram. \n\nSomewhat surprisingly, fluctuation effects occurring in the FFLO phases were addressed much less comprehensively and rather coherently pointed towards instability of these long-range-ordered pair-density-wave states to thermal order-parameter fluctuations.\\cite{Shimahara_1998, Samokhin_2010, Radzihovsky_2009, Radzihovsky_2011, Yin_2014, Jakubczyk_2017, Wang_2018} The mechanism destabilizing the FFLO states is somewhat akin to that prohibiting the long-range order in $XY$ or Heisenberg ferromagnets in dimensionality $d\\leq 2$. However, in contrast to the conventional magnets, the FFLO phases involve not only rotational (superfluid) symmetry breaking, but also breaking of translational symmetry, leading to a significantly softer Goldstone fluctuation spectrum in the (putative) ordered phases. In consequence, the possibilities of realizing such symmetry-breaking states at $T>0$ is severely restricted also in $d=3$. \n\nThe previous studies of Refs. \\onlinecite{Shimahara_1998, Samokhin_2010, Radzihovsky_2009, Radzihovsky_2011, Yin_2014, Jakubczyk_2017, Wang_2018} departed from a (putative) FFLO type state and investigated the low-energy fluctuations around it. In other words, they addressed the question of stability of the modulated phase without recourse to global features of the phase diagram. 
\nIn contrast, our present approach implements a different logical line invoking the properties of the mean-field (MF) phase diagram, which, by necessity involves the presence of both uniform and nonuniform phases leading to the emergence of a thermal Lifshitz point where the normal (Fermi liquid like), the modulated (FFLO), and the uniform superfluid (BCS-like) phases all coexist. We point out that the stability conditions for the Lifshitz point at $T>0$ to fluctuations are significantly stronger as compared to those of the FFLO phase alone. For the isotropic systems we demonstrate the instability of the Lifshitz point to order-parameter fluctuations in $T>0$ at any dimensionality $d<4$, which should in consequence lead to a complete suppression of the pair-density wave phase at any $T>0$. \nOur conclusion is therefore stronger than those reached in the previous studies. Moreover, our reasoning does not require any detailed knowledge concerning the FFLO-type state and the associated excitation spectrum above it. We argue on the other hand that the FFLO-type states of the uniaxial type, such as coupled arrays of atomic tubes considered e.g. in Refs.~\\onlinecite{Lutchyn_2011, Revelle_2016, Sundar_2020} are stable in dimensionality $d=3$ (but not in $d=2$) and present themselves as plausible candidates for hosting the FFLO phases. \n\nWe also point out that the situation at $T=0$ is entirely different.\nWe predict that the FFLO state may well be a stable ground state in a range of values of the imbalance parameters, squashed (at $T=0$) between the BCS-type superfluid and normal metallic phases. The emergent structure of the phase diagram at $T\\geq 0$ implies the generic existence of a point located at $T=0$, where the three phases meet (i.e. the quantum Lifshitz point). Notably, such a quantum Lifshitz point\\cite{Zdybel_2020} should occur as a fluctuation-driven entity without any need for fine-tuning of the system parameters. \n\nThe outline of this paper is as follows: In Sec.~II we summarize the standard model to describe Fermi mixtures with population\/mass imbalance (applicable in both the solid-state and cold-atom contexts) and give an overview of the features of the corresponding mean-field phase diagram. We in particular point out the generic occurrence of a thermal Lifshitz point located at a temperature $T_L>0$. We subsequently elucidate the structure of the effective action to describe the system in the vicinity of the Lifshitz point. In Sec.~III we discuss the stability of the Lifshitz point to order parameter fluctuations at Gaussian level depending on the system dimensionality $d$ and the anisotropy index $m$. In Sec.~IV we provide an estimate of the Lifshitz critical exponents from a truncation of functional renormalization group for general $d$, $m$ and number of order parameter components $N$. We in particular confirm the picture derived in Sec.~III and point out that the anomalous dimension associated with a class of spatial directions is negative. We summarize the paper in Sec.~V. 
\n\n\\section{Summary of the model and mean-field results}\nThe common point of departure for theoretical studies of imbalanced Fermi mixtures (applicable also to electronic systems) is provided by the grand canonical Hamiltonian \n\\begin{equation} \n\\mathcal{H} = \\sum_{\\vec{k}, \\sigma} \\xi_{\\vec{k}, \\sigma} c^\\dagger_{\\vec{k}, \\sigma} c_{\\vec{k}, \\sigma} + \\frac{g}{V}\\sum_{\\vec{k}, \\vec{k}', \\vec{q}} c^\\dagger_{\\vec{k}+\\frac{\\vec{q}}{2}, \\uparrow} c^\\dagger_{-\\vec{k}+\\frac{\\vec{q}}{2}, \\downarrow} c_{\\vec{k}'+\\frac{\\vec{q}}{2}, \\downarrow} c_{-\\vec{k}'+\\frac{\\vec{q}}{2}, \\uparrow} \n\\label{Ham}\n\\end{equation} \ninvolving the kinetic energy term with species-dependent dispersion relation and chemical potential \n\\begin{equation} \n\\xi_{\\vec{k}, \\sigma} =\\epsilon_{\\vec{k}, \\sigma}-\\mu_\\sigma \n\\end{equation} \n as well as an attractive two-body interaction potential driving $s$-wave pairing. For convenience, the latter is taken in the form of a point-like interaction with $g<0$. For isotropic cold-atomic gases the dispersion reads $\\epsilon_{\\vec{k}, \\sigma}=\\frac{\\vec{k}^2}{2 \\mathcal{M}_\\sigma}$, where the masses $\\mathcal{M}_\\sigma$ of the two fermionic species may in general be different. As we argue below, anisotropic kinetic terms allow for stabilizing the FFLO phases. This conclusion actually seems in line with experimental findings, since the most convincing evidence for FFLO-like features was reported for highly anisotropic situations in both the solid-state\\cite{Uji_2012, Uji_2013, Tsuchiya_2015, Koutroulakis_2016, Cho_2021} and ultracold-gas contexts. We therefore do not restrict ourselves to any specific form of the dispersion. We nonetheless have in mind a setup where $\\tilde{m}$ out of the $d$ spatial directions are distinct from the remaining $d-\\tilde{m}$.\n As an experimentally relevant case one may, for example, invoke the following dispersion\n \\begin{equation} \n \\epsilon_{\\vec{k},\\sigma}= \\sum_{i=1}^{\\tilde{m}} \\frac{{k_i}^2}{2 \\mathcal{M}_\\sigma} - 2 t_\\perp \\sum_{i=\\tilde{m}+1}^d \\cos k_i\\;. \n \\label{Anizo_disp}\n \\end{equation} \nIn particular, for $d=2$ and $\\tilde{m}=1$ this dispersion\nwas implemented to describe coupled atomic tubes\\cite{Lutchyn_2011} in the cold-atom context as well as specific organic superconductors.\\cite{Mayaffre_2014, Piazza_2016} Note that the corresponding Fermi surfaces exhibit a considerable degree of nesting, favoring finite-momentum pairing. Clearly, for $\\tilde{m}=d$ we recover from Eq.~(\\ref{Anizo_disp}) the standard continuum gas, while $\\tilde{m}=0$ yields the hypercubic lattice dispersion. One virtue of this parametrization is that both $d$ and $\\tilde{m}$ ($\\tilde{m}\\leq d$) may formally be treated as real parameters, providing a way of continuously interpolating between different physically relevant cases (e.g. $\\tilde{m}=0$ and $\\tilde{m}=1$), see Secs.~III and IV. \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{Phase_diag}\n\\caption{(Color online) Left panel: schematic mean-field phase diagram of a system described by the Hamiltonian of Eq.~(\\ref{Ham}). Increasing the imbalance parameter $h=(\\mu_\\uparrow -\\mu_\\downarrow)\/2$ suppresses pairing. The pair density wave (FFLO) phase is energetically favored in a (typically tiny) region between the uniform superfluid (BCS) and Fermi-liquid (FL) states. 
The Lifshitz point where these three phases coexist, is inevitably present at $T>0$ and constitutes the bottleneck for stability of the phase diagram with respect to fluctuations. Right panel: The anticipated renormalized phase diagram of an isotropic system, where the FFLO phase survives only at $T=0$ and a quantum Lifshitz point occurs (see the main text). }\n\\label{Phase_diag}\n\\end{center} \n\\end{figure} \n\\subsection{Pairing susceptibility}\nWe will now approach the pairing instability from the symmetric (Fermi-liquid) phase (compare Fig.~1) corresponding to sufficiently large imbalance parameter $h=(\\mu_\\uparrow - \\mu_\\downarrow)\/2$ and\/or temperature $T$. \nThe effective (Landau-Ginzburg) order-parameter action for the model given by Eq.~(\\ref{Ham}) can be constructed by a standard procedure described in literature (see e.g. Refs.~\\onlinecite{Strack_2014, Piazza_2016, Zdybel_2018, Zdybel_2019}) analogous to the one developed long ago for magnetic transitions.\\cite{Nagaosa_book} Up to terms quadratic in the pairing field $\\phi$, the effective action reads\n\\begin{equation}\n\\mathcal{S}_{eff}^{(2)} = \\sum_{\\vec{q}, i\\omega_n} \\phi^*_{\\vec{q}, i\\omega_n} \\left[ -1\/g - \\chi_0 (\\vec{q}, i\\omega_n) \\right] \\phi_{\\vec{q}, i\\omega_n}\\;. \n\\label{action}\n\\end{equation} \nHere $\\phi_{\\vec{q}, i\\omega_n}$ is the (scalar) complex $s$-wave pairing field written in the momentum-frequency representation, while $\\chi_0 (\\vec{q}, i\\omega_n)$ involves the particle-particle bubble and may be expressed as\n\\begin{equation}\n\\chi_0 (\\vec{q}, i\\omega_n) = T\\int_{\\vec{k}}\\frac{1-f\\left(\\xi_{\\vec{k}, \\downarrow}\\right)-f\\left(\\xi_{\\vec{k}+\\vec{q}, \\uparrow}\\right)}{\\xi_{\\vec{k}+\\vec{q}, \\uparrow}+\\xi_{\\vec{k}, \\downarrow}-i\\omega_n}\\;,\n\\end{equation}\nwhere $f(X)=(e^{X\/T}+1)^{-1}$ is the Fermi-Dirac distribution and $\\int_{\\vec{k}}=\\int\\frac{d\\vec{k}}{(2\\pi)^d}$.\n\n At mean-field level an instability towards superfluidity occurs once the Landau coefficient $a_2 =\\left[ -1\/g - \\chi_0 (\\vec{q}, 0) \\right] $ in Eq.~(\\ref{action}) becomes negative for some value of $\\vec{q}$ (hereafter denoted as $\\vec{Q}$). A nonzero ordering wavevector $\\vec{Q}$ marks an FFLO-type instability. Note that $\\chi_0 (\\vec{q}, i\\omega_n)$ involves no dependence on $g$. In consequence, once a set of parameters for which $\\chi_0 (\\vec{q}, 0)$ features a maximum at $\\vec{q} = \\vec{Q}\\neq 0$ is identified, the (mean-field) transition between the normal and FFLO phases can be conveniently tuned by modifying $g$ alone.\n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{chi_0_anizot_gray.png}\n\\caption{Evolution of the pairing susceptibility $\\chi_0 (\\vec{q}, 0)$ upon varying $h$ for the dispersion given by Eq.~(\\ref{Anizo_disp}) and $(d,\\tilde{m})=(2,1)$. The sharp peak located for $h=0$ at $\\vec{q}=(0,0)$ broadens and for $h=h_c\\approx 0.03$ continuously splits. At $h=h_c$ and $q$ small, the pairing field propagator is quadratic in momentum in the $q_y$ direction, but quartic along $q_x$. The degenerate maxima of $\\chi_0 (\\vec{q}, 0)$ are located at the $q_x$ axis and remain well separated from zero for $h>h_c$. For a projection on the $q_x$ axis, compare Fig.~\\ref{chi_projection}. 
The plot parameters are $t_\\perp=\\frac{1}{2}$, $\\mu_\\uparrow+\\mu_\\downarrow=6.6$, $\\mathcal{M}_\\downarrow\/\\mathcal{M}_{\\uparrow}=1.0$, $T=10^{-2}$.}\n\\label{chi}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{chi_0_anizot_rzut.png}\n\\caption{Projection of the pairing susceptibility $\\chi_0 (\\vec{q}, 0)$ plotted in Fig.~\\ref{chi} on $\\vec{q}=(q_x,0)$. }\n\\label{chi_projection}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{chi_0_uniform_r403_gray_v2.png}\n\\caption{Evolution of the pairing susceptibility $\\chi_0 (\\vec{q}, 0)$ upon varying $h$ for the dispersion given by Eq.~(\\ref{Anizo_disp}) and $(d,\\tilde{m})=(3,0)$ [i.e. for an isotropic continuum system of atomic particles]. The sharp peak located for $h$ sufficiently small at $\\vec{q}=(0,0)$ broadens upon increasing $h$ and for $h=h_c\\approx 0.62$ becomes degenerate on a two-dimensional sphere. At $h=h_c$ the pairing field propagator is quartic in momentum $\\vec{q}$. We put $q_z=0$ in the plot. A projection on the $q_x$ (or any other) axis, yields a picture qualitatively equivalent to the one presented in Fig.~\\ref{chi_projection} for the anisotropic situation. The plot parameters are: $\\mu_{\\downarrow} +\\mu_{\\uparrow}=1$, $\\mathcal{M}_{\\downarrow}\/\\mathcal{M}_{\\uparrow}=4.03$, $T=10^{-3}$ and were chosen to mimic the experimentally relevant $^{161}Dy$-$^{40}K$ mixture.\\cite{Ravensbergen_2020} }\n\\label{chi_iso}\n\\end{center} \n\\end{figure} \n The point $(\\vec{Q}, 0)$ serves as a reference for the momentum\/frequency expansion of the vertex functions in the Landau-Ginzburg action. In particular, the order parameter mass is given by \n\\begin{equation} \nr = [-1\/g -\\chi_0(\\vec{Q}, 0)]\\;. \n\\end{equation}\nAs already remarked, $\\chi_0 (\\vec{q}, 0)$ does not depend on $g$, which may therefore always be adjusted to obtain $r=0$. In Figs.~\\ref{chi} and \\ref{chi_iso} we plot $\\chi_0 (\\vec{q}, 0)$ for two situations corresponding to the uniaxial and isotropic cases. By varying $h$ or $T$ the position of the maximum (i.e. the ordering wavevector $\\vec{Q}$) may be continuously shifted towards zero, where the superfluid phase becomes uniform [see Fig.~(\\ref{Phase_diag})]. For clarity, in Fig.~\\ref{chi_projection} we also expose the projection of $\\chi_0 (\\vec{q}, 0)$ plotted in Fig.~\\ref{chi} on the $q_x$ axis.\n\nIn addition to the quadratic term given by Eq.~(\\ref{action}) the effective action $\\mathcal{S}_{eff}[\\phi]$ involves order parameter self-interaction terms of order higher than two in the field $\\phi$. \n\nConsider now the structure of the effective action approaching the Lifshitz point along the superfluid phase transition line (given by $r=0$) from above (compare Fig.~1). Upon crossing the Lifshitz point the ordering wavevector $\\vec{Q}$ is continuously shifted away from zero and becomes degenerate. The degeneracy level is determined by the symmetry of the Fermi surface. For the isotropic case (see Fig.~\\ref{chi_iso}) $\\vec{Q}$ picks up any direction in the $(d-1)$-dimensional space. In the immediate vicinity of the Lifshitz point one may expand $\\chi_0$ in momentum $\\vec{q}$ around zero (retaining terms up to quartic order). Right at the Lifshitz point the coefficients of at least some of the $q^2$ terms vanish and become negative below the Lifshitz point. Stability of the system is then retained due to the terms quartic in momentum. 
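The mean-field analysis described above, i.e. locating the maximum of $\\chi_0 (\\vec{q}, 0)$, is straightforward to reproduce numerically. The following minimal sketch (written in Python; the finite momentum grid, the truncation of the continuum direction, and the simple Riemann sum are simplifications of the example rather than the settings used to produce the figures) evaluates $\\chi_0(\\vec{q},0)$ for the dispersion of Eq.~(\\ref{Anizo_disp}) with $(d,\\tilde{m})=(2,1)$, using the parameter values quoted in the caption of Fig.~\\ref{chi}, and locates the maximum along the $q_x$ axis:
\\begin{verbatim}
import numpy as np

# Parameters of Fig. 2: t_perp = 1/2, mu_up + mu_dn = 6.6, M_dn/M_up = 1, T = 1e-2;
# the imbalance h is set slightly above h_c ~ 0.03, where Q is expected to be nonzero.
T, t_perp, M, h = 1.0e-2, 0.5, 1.0, 0.05
mu_up, mu_dn = 3.3 + h, 3.3 - h

def xi(kx, ky, mu):
    # anisotropic dispersion for (d, m~) = (2, 1): quadratic along x, tight binding along y
    return kx**2 / (2.0 * M) - 2.0 * t_perp * np.cos(ky) - mu

def fermi(x):
    return 0.5 * (1.0 - np.tanh(0.5 * x / T))  # numerically stable Fermi-Dirac function

# momentum grid: the continuum (x) direction is truncated at |k_x| <= 6
kx, ky = np.meshgrid(np.linspace(-6.0, 6.0, 601),
                     np.linspace(-np.pi, np.pi, 301), indexing="ij")
dk = (kx[1, 0] - kx[0, 0]) * (ky[0, 1] - ky[0, 0])

def chi0(qx, qy):
    xi_dn = xi(kx, ky, mu_dn)
    xi_up = xi(kx + qx, ky + qy, mu_up)
    num = 1.0 - fermi(xi_dn) - fermi(xi_up)
    den = xi_up + xi_dn
    # numerator and denominator vanish together, so the integrand stays finite;
    # the tiny shift below only protects against an accidental 0/0 on the grid
    return T * np.sum(num / (den + 1.0e-12)) * dk / (2.0 * np.pi)**2

qx_values = np.linspace(0.0, 1.0, 51)
chi_values = [chi0(qx, 0.0) for qx in qx_values]
print("Q_x =", qx_values[int(np.argmax(chi_values))])
\\end{verbatim}
Scanning $h$ in the same way should reproduce the qualitative evolution shown in Fig.~\\ref{chi_projection}, with the maximum moving from $\\vec{q}=0$ to a finite $Q_x$ as $h$ crosses $h_c$.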
\n\n\n\n\\section{The Lifshitz point stability}\nWe now analyze the structure of the effective action for the order parameter $\\phi$. \nFrom the expansion described above in Sec.~II A we obtain at the Lifshitz point an effective Landau-Ginzburg action which, when expressed in position representation, reads: \n\\begin{equation} \n\\mathcal{S}_{eff} =\\int d^d x \\left[U(|\\phi|^2) +\\frac{1}{2}Z_\\perp \\left(\\nabla_\\perp\\phi\\right)^2 +\\frac{1}{2} Z_{||} \\left(\\Delta_{||} \\phi\\right)^2 \\right]\\;, \n\\label{Seff}\n\\end{equation} \nwhere $U(|\\phi|^2)$ denotes the effective potential, which may be expanded to yield a polynomial in $|\\phi|^2$\n\\begin{equation} \nU(|\\phi|^2) = r |\\phi|^2 + u |\\phi|^4 +\\dots \\,.\n\\end{equation}\nThe $\\sim|\\phi|^2$ term coefficient, resulting from Eq~(\\ref{action}) vanishes at the entire phase transition line, including the Lifshitz point. The coefficients of the higher-order terms of $U(|\\phi|^2)$ may be expressed by the fermionic loops evaluated at external momenta $\\vec{q}=0$. The energy cost of creating order-parameter nonuniformities is governed by the laplacian terms in $m$ spatial directions and gradient terms in the remaining $d-m$ directions. Explicitly: \n\\begin{equation}\n \\left(\\Delta_{||} \\phi\\right)^2 = \\sum_{i=1}^2\\sum_{\\alpha,\\,\\beta =1}^m \\frac{\\partial^2 \\phi_i}{\\partial x_\\alpha \\partial x_\\beta}\\frac{\\partial^2 \\phi_i}{\\partial x_\\alpha\\partial x_\\beta}\n\\end{equation} \nand\n \\begin{equation}\n\\left(\\nabla_\\perp\\phi\\right)^2 = \\sum_{i=1}^2\\sum_{\\alpha =m+1}^d \\frac{\\partial \\phi_i}{\\partial x_\\alpha}\\frac{\\partial \\phi_i}{\\partial x_\\alpha}\\;,\n\\end{equation}\nwhere the $i$ summation runs over the two components of the pairing field $\\phi$. For symmetry reasons the number $m$ of 'soft' directions in the action of Eq.~(\\ref{Seff}) must either coincide with the anisotropy index $\\tilde{m}$ of Eq.~\\ref{Anizo_disp} or be equal $d-\\tilde{m}$. \nSince, for the time being, we are interested in the thermal phase transition, we dropped the contributions from quantum fluctuations in Eq.~(\\ref{Seff}).\nIn the above form $\\mathcal{S}_{eff}$ accounts for a generic anisotropic situation, where the dispersion is quartic in $m$ ($m\\leq d$) spatial directions and quadratic in the remaining ones (compare Sec.~II A). The isotropic case corresponds to $m=d$. The above effective action describes the $m$-axial Lifshitz point, analogous to those studied previously in the contexts of anisotropic magnets \n\\cite{Grest_1978, Selke_1988, Diehl_2002, Butera_2008} and liquid crystals.\\cite{Chaikin_book, Singh_2000} One may now adopt the standard Gaussian level arguments \\cite{Goldenfeld_book} to argue for the instability of the Lifshitz point with respect to order parameter fluctuations at sufficiently low dimensionality. 
Considering that the (putative) homogeneous ordered phase supports a massless transverse mode, from the structure of the effective action of Eq.~(\\ref{Seff}) it follows that (in Fourier space) the (transverse) 2-point correlation function in the immediate vicinity of the Lifshitz point reads:\n\\begin{equation} \nG(\\vec{q})=G(\\vec{q}_\\perp, \\vec{q}_{||})=\\frac{1}{Z_\\perp \\vec{q}_\\perp^2 +Z_{||}(\\vec{q}_{||}^2)^2}\\;,\n\\end{equation}\nwhile in real space \n\\begin{equation}\nG(\\vec{x}_\\perp, \\vec{x}_{||}) \\sim \\int d^d q G(\\vec{q})e^{i(\\vec{q}_\\perp \\vec{x}_\\perp + \\vec{q}_{||} \\vec{x}_{||})}=\\int d^d q\\frac{e^{i(\\vec{q}_\\perp \\vec{x}_\\perp + \\vec{q}_{||} \\vec{x}_{||})}}{Z_\\perp \\vec{q}_\\perp^2 +Z_{||}(\\vec{q}_{||}^2)^2}\\,.\n\\end{equation}\nBy substituting $(\\vec{q}_{||}^2)=\\tilde{q}_{||}$, expanding the exponential occurring in the numerator, passing to spherical coordinates in each of the two subspaces and integrating over the angular coordinates, one arrives at the following expression: \n\\begin{equation} \n\\mathcal{C}\\int_0^{\\Lambda_\\perp} dq_\\perp \\int_0^{\\Lambda_{||}^2}d\\tilde{q}_{||} \\frac{\\tilde{q}_{||}^{\\frac{m}{2}-1} q_\\perp^{d-m-1}}{Z_\\perp \\vec{q}_\\perp^2 + Z_{||}\\tilde{q}_{||}^2}\n\\end{equation}\nwith $\\mathcal{C}$ constant and $\\Lambda_\\perp$, $\\Lambda_{||}$ being microscopic (momentum) cutoffs. Transformation to polar coordinates: $\\sqrt{Z_{||}}\\tilde{q}_{||} = r\\cos\\phi$, $\\sqrt{Z_\\perp}q_\\perp = r \\sin{\\phi}$ leads to an integral of the form \n\\begin{equation}\nG(\\vec{x}_\\perp, \\vec{x}_{||})\\sim\\int_0^\\Lambda dr r^{d-\\frac{m}{2}-3}\\;,\n\\end{equation}\ndivergent for $d\\leq 2+\\frac{m}{2}$. This implies instability of the Lifshitz point with respect to order parameter fluctuations for $d$ below $2+\\frac{m}{2}$. The above treatment is analogous to a Gaussian level demonstration of the Mermin-Wagner theorem in the standard isotropic situations.\\cite{Goldenfeld_book} \nThe obtained condition coincides with those previously recognized for Lifshitz points for magnets and liquid crystals and may be cast in the form: \n\\begin{equation}\nd_L = 2+\\frac{m}{2} \\;,\n\\label{dL}\n\\end{equation} \nwhere $d_L$ is the lower critical dimension for occurrence of an $m$-axial Lifshitz point where the normal, homogeneous superfluid and FFLO phases would coexist. Note that for $m=0$ we recover $d_L=2$ in line with the Mermin-Wagner theorem, while for the isotropic case ($m=d$) Eq.~(\\ref{dL}) leads to $d_L=4$, which in dimensionality $d=3$ and $d=2$ prohibits the occurrence of the isotropic Lifshitz point, and in consequence also the FFLO phase squashed between the BCS-like and Fermi liquid phases according to the mean-field predictions. \n\nThe expression of Eq.~(\\ref{dL}) is not new and was first derived in the context of magnetic systems long ago by Grest and Sak\\cite{Grest_1978} within the $2+\\epsilon$ expansion of the nonlinear sigma model. It is expected to be valid for situations characterized by the number of order parameter components $N\\geq 2$. Analysis of the Ginzburg criterion for the Lifshitz point\\cite{Diehl_2002} indicates a similar effect on the upper critical dimension $d_u$, such that $d_u=4+\\frac{m}{2}$.\n\nAs concerns the stability of the FFLO states, we emphasize the difference between the above arguments and those presented in earlier literature. 
While the previous studies addressed the stability of different putative ground states of the FFLO type to Goldstone fluctuations, the present analysis invokes the envisaged presence of the thermal Lifshitz point in the phase diagram and inspects its stability to critical order parameter fluctuations. In the isotropic situation this yields a far more restrictive condition. The absence of a thermodynamically stable long-range ordered FFLO phase certainly does not contradict the presence of regions of the phase diagram exhibiting enhanced FFLO pairing fluctuations (see Ref.~\\onlinecite{Pini_2021} for a recent discussion), which may well be detected in experiments on various systems. On the other hand, our argument offers an explanation of why convincing experimental evidence for FFLO states was reported only for strongly anisotropic situations and demonstrates that the occurrence of a true long-range ordered FFLO thermodynamic phase in isotropic three-dimensional systems is in fact completely excluded. Also note that the FFLO phase may well remain stable at $T=0$ (compare Fig.~\\ref{Phase_diag}), which implies the presence of a quantum Lifshitz point in addition to the FFLO quantum critical point\\cite{Piazza_2016, Pimenov_2018} in the phase diagram. \n\nWe also make the observation that the condition for stability of the Lifshitz point is significantly weaker in the anisotropic case ($m<d$): according to Eq.~(\\ref{dL}), the uniaxial case $m=1$ yields $d_L=2\\frac{1}{2}$, so that a thermal Lifshitz point (and, by extension, the adjacent FFLO phase) may survive order-parameter fluctuations in $d=3$, albeit not in $d=2$.\n\n\\section{Functional renormalization group approach}\n\\subsection{Flow equation}\nTo go beyond the Gaussian considerations of Sec.~III and estimate the Lifshitz critical exponents, we employ the functional renormalization group framework based on the exact flow (Wetterich) equation\n\\begin{equation}\n\\partial_k \\Gamma_k[\\phi] = \\frac{1}{2}\\textrm{Tr}\\left[\\partial_k R_k\\left(\\Gamma_k^{(2)}[\\phi]+R_k\\right)^{-1}\\right]\\;,\n\\label{Wetterich}\n\\end{equation}\nwhich governs the evolution of the scale-dependent effective action $\\Gamma_k[\\phi]$ as fluctuation modes are progressively integrated out. The infrared regulator $R_k(\\vec{q})$ added to the inverse propagator suppresses fluctuations with momenta below the scale $k$ and leaves the modes with momenta above $k$ untouched. The trace in Eq.~(\\ref{Wetterich}) encompasses in the present context summation over momentum as well as components of the order-parameter field $\\phi$, while $\\Gamma_k^{(2)}[\\phi]$ denotes the second functional derivative of $\\Gamma_k[\\phi]$. The framework resting upon Eq.~(\\ref{Wetterich}) was over the last years fruitfully applied in a broad range of contexts (for reviews see for example Refs.~\\onlinecite{Berges_2002, Pawlowski_2007, Kopietz_book, RG_book, Metzner_2012, Dupuis_2021}). \n\nOne successful approximation scheme to integrate the Wetterich equation is recognized as the derivative expansion (DE). It amounts to classifying the symmetry-allowed terms occurring in $\\Gamma_k[\\phi]$ according to the number of field derivatives and truncating terms of order higher than a given value. This projects the functional differential equation Eq.~(\\ref{Wetterich}) onto a finite, numerically manageable set of partial (integro-)differential flow equations. Only very recently was this framework systematically applied\\cite{Polsi_2020} to the case of $O(N)$-symmetric models at order $\\partial^4$ (and $\\partial^6$ for the Ising universality class\\cite{Balog_2019}) in dimensionality $d=3$. These computations led to estimates of the critical exponents of accuracy comparable to those delivered by the best Monte Carlo simulations and perturbation theory calculations. It will become clear that the case of the Lifshitz point constitutes a significantly more demanding challenge for the treatment based on the Wetterich approach. The reason for this is at least two-fold: (i) terms quartic in momentum appear in the inverse propagator even at the bare level and are crucial for capturing the relevant physics; (ii) the anisotropic nature of the problem complicates the loop integrals. Despite these complications, as we demonstrate below, the Wetterich approach captures a substantial amount of physics and delivers estimates of the critical exponents even at the lowest orders of the DE. 
\n\\subsection{Local potential approximation}\nWe now consider the leading (zeroth order) truncation of the derivative expansion, where the effective potential is a flowing (scale dependent) function, but the momentum dependencies in the propagator are not renormalized. This is known commonly as the local potential approximation (LPA) and, for the present problem, amounts to parametrizing $\\Gamma_k[\\phi]$ via the following form \n\\begin{equation} \n \\Gamma_k[\\phi]=\\int d^d x \\left[U_k(\\rho) +\\frac{1}{2}Z_\\perp \\left(\\nabla_\\perp\\phi\\right)^2 +\\frac{1}{2} Z_{||} \\left(\\Delta_{||} \\phi\\right)^2 \\right]\\;, \n\\label{LPA}\n\\end{equation} \nwhere we introduced $\\rho =\\frac{1}{2}|\\phi|^2$. At this approximation level the gradient coefficients $Z_\\perp$ and $ Z_{||}$ are scale independent (in consequence the anomalous dimensions are neglected), while there is no preimposed parametrization of the flowing effective potential $U_k(\\rho)$. Crucially, a term $\\sim \\left(\\nabla_{||}\\phi\\right)^2$ is absent in Eq.~(\\ref{LPA}). In a higher order calculation involving the flow of momentum dependencies of the propagator, this term is present and should scale to zero at the Lifshitz point only for vanishing $k$. \n\n\n By plugging Eq.~(\\ref{LPA}) into Eq.~(\\ref{Wetterich}) we obtain a closed flow equation for $U_k(\\rho)$ of the form \n\\begin{equation}\n\\partial_k U_k(\\rho) = \\frac{1}{2}\\int_q \\partial_k R_k (\\vec{q})\\left[G_\\sigma(\\vec{q},\\rho, m) + (N-1)G_\\pi(\\vec{q},\\rho, m)\\right]\\;,\n\\label{LPA_eq}\n\\end{equation}\nwhere \n\\begin{align}\nG_\\sigma^{-1}(\\vec{q},\\rho, m)&= Z_{||}(\\vec{q}_{||}^2)^2 + Z_\\perp \\vec{q}_\\perp^2 + U_k'(\\rho)+2\\rho U_k''(\\rho)+R_k(\\vec{q}) \\nonumber \\\\\nG_\\pi^{-1} (\\vec{q},\\rho, m)&= Z_{||}(\\vec{q}_{||}^2)^2 + Z_\\perp \\vec{q}_\\perp^2 + U_k'(\\rho)+R_k(\\vec{q}) \n\\label{Gs}\n\\end{align}\nare the regularized inverse propagators for the longitudinal ($\\sigma$) and transverse ($\\pi$) modes. The integral $\\int_q = \\int\\frac{d^m q_{||}}{(2\\pi)^m}\\int \\frac{d^{d-m} q_{\\perp}}{(2\\pi)^{d-m}}$ in Eq.~(\\ref{LPA_eq}) encompasses the two subspaces characterized by distinct behavior of the dispersion. For $m=0$ we recover the standard LPA equation well studied for the $O(N)$-symmetric models, while for $m=d$ the $\\vec{q}_{\\perp}$-space is 0-dimensional which corresponds to the isotropic Lifshitz point. \n\nWe now implement the following rescaling \n\\begin{align}\n\\vec{q}_\\perp = k \\tilde{\\vec{q}}_\\perp\\;,\\; \\vec{q}_{||}=(Z_\\perp\/Z_{\\parallel})^{1\/4} k^{1\/2}\\tilde{\\vec{q}}_{||} \\nonumber \\\\\n\\rho = Z_\\perp^{\\frac{m}{4}-1} Z_{\\parallel}^{-\\frac{m}{4}}k^{d-\\frac{m}{2}-2}\\tilde{\\rho}\\;,\\; U_k(\\tilde{\\rho})=Z_\\perp^{\\frac{m}{4}} Z_{\\parallel}^{-\\frac{m}{4}}k^{d-\\frac{m}{2}}\\tilde{u}_k(\\tilde{\\rho})\n\\end{align} \nand consider the cutoff of the form \n\\begin{equation}\nR_k(\\vec{q})=Z_{\\perp}k^2 r\\left(\\tilde{\\vec{q}}_\\perp^2+(\\tilde{\\vec{q}}_{||}^2)^2\\right)\\;.\n\\end{equation}\nThis allows us to cast the LPA flow equation in a scale invariant form:\n\\begin{align}\n&\\partial_t \\tu_k = - \\left(d-\\frac{m}{2}\\right)\\tilde{u}_k - \\left(2+\\frac{m}{2}-d\\right)\\trho \\tu_k'+\\nonumber \\\\ \n&\\frac{1}{2}\\int_{\\tilde q}\\left[\\frac{1}{y+\\tu_k'+2\\trho\\tu_k''+r(y)} +\\frac{N-1}{y+\\tu_k' +r(y)} \\right]\\left[2r(y)-2yr'(y)\\right]\\;,\n\\end{align}\nwhere we introduced $y=\\tilde q_\\perp^2 +\\tilde q_{||}^4$ and $t=\\log(k\/\\Lambda)$. 
In each of the two subspaces corresponding to $\\tilde q_\\perp$ and $\\tilde q_{||}$ we now pass to the (hyper)spherical coordinates and perform the angular integrations. Subsequently, the change of variables $\\tilde q_\\perp =\\zeta \\cos\\theta$, $\\tilde q_{||}^2=\\zeta\\sin\\theta$ (with $\\zeta=\\sqrt{y}$) and integration over $\\theta$ leads to the following form of the flow equation: \n\\begin{align}\n\\partial_t \\tu_k =& - \\left(d-\\frac{m}{2}\\right)\\tilde{u}_k- \\left(2+\\frac{m}{2}-d\\right)\\trho \\tu_k'+\\nonumber \\\\\n&\\mathcal{V}_{d,m}\\int_0^\\infty dy y^{\\frac{d}{2}-\\frac{m}{4}-1}\\times\\nonumber \\\\ \n&\\left[2r(y)-2yr'(y)\\right] \\left[\\frac{1}{y+\\tu_k'+2\\trho\\tu_k''+r(y)} +\\frac{N-1}{y+\\tu_k' +r(y)} \\right]\\;, \n\\label{LPA_resc}\n\\end{align}\nwith\n\\begin{equation}\n\\mathcal{V}_{d,m}=\\frac{\\mathcal{S}^{d-m-1}\\mathcal{S}^{m-1}}{16(2\\pi)^d}\\mathcal{B}\\left(\\frac{d-m}{2},\\frac{m}{4}\\right)\\;,\n\\end{equation} \nwhere in turn $\\mathcal{S}^{n-1}=\\frac{2\\pi^{n\/2}}{\\Gamma(n\/2)}$ is the surface area of the $(n-1)$-dimensional unit sphere, and $\\mathcal{B}(x,y)$ denotes the Euler beta function. \n\nWe now observe that by substituting $(d-\\frac{m}{2})\\rightarrow d_{eff}$ and $\\mathcal{V}_{d,m}\\rightarrow v_d= [2^{d+1}\\pi^{d\/2}\\Gamma (d\/2)]^{-1} $ in Eq.~(\\ref{LPA_resc}) we recover the LPA equation for the standard $O(N)$-symmetric case in dimensionality $d_{eff}$. It follows that, at the LPA level of approximation, the RG equation for the $m$-axial Lifshitz point in $d$ dimensions differs from the corresponding flow equation for the $O(N)$ model in dimensionality $d_{eff}=d-\\frac{m}{2}$ exclusively by the $m$-dependent constant multiplying the integral. The quantity $\\mathcal{V}_{d,m}$ (and $v_d$ alike) is however redundant, as it can be absorbed by the transformation \n\\begin{equation}\n\\tilde{u}_k = \\mathcal{V}_{d,m} \\tilde{w}_k \\;,\\;\\;\\;\\; \\tilde{\\rho}=\\mathcal{V}_{d,m}\\tilde{\\gamma}\\;,\n\\end{equation}\nwhich casts the LPA equation in the form \n\\begin{align}\n\\partial_t \\tilde{w}_k =& - \\left(d-\\frac{m}{2}\\right)\\tilde{w}_k- \\left(2+\\frac{m}{2}-d\\right)\\tilde{\\gamma} \\tilde{w}_k'+\\nonumber \\\\\n&\\int_0^\\infty dy y^{\\frac{d}{2}-\\frac{m}{4}-1}\\times\\nonumber \\\\ \n&\\left[2r(y)-2yr'(y)\\right] \\left[\\frac{1}{y+\\tilde{w}_k'+2\\tilde{\\gamma}\\tilde{w}_k''+r(y)} +\\frac{N-1}{y+\\tilde{w}_k' +r(y)} \\right]\\;, \n\\label{LPA_resc_2}\n\\end{align}\nwhere $\\tilde{w}_k=\\tilde{w}_k(\\tilde{\\gamma})$ and the prime now denotes differentiation with respect to $\\tilde{\\gamma}$. It follows that the critical behavior at the $m$-axial Lifshitz point is fully equivalent to the one at the $O(N)$-symmetric critical point at dimensionality reduced by $\\frac{m}{2}$. This fact was previously recognized at the mean-field and Gaussian level as well as in the limit $1\/N\\to 0$.\\cite{Diehl_2002, Shpot_2008, Burgsmuller_2010, Shpot_2012, Jakubczyk_2018, Lebek_2020, Lebek_2021} The above reasoning indicates that the correspondence remains valid within the LPA approximation, and in fact should remain correct as long as the anomalous dimensions are neglected. \nIn particular, the lower (as well as the upper) critical dimensions describing these two situations are then shifted by $m\/2$, in agreement with the Gaussian argument presented in Sec.~III. In view of the above, the critical exponents for the Lifshitz point may be extracted using the routines previously developed for the $O(N)$ models. 
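The redundancy of $\\mathcal{V}_{d,m}$ and its $m\\to 0$ limit are easily verified numerically. The following minimal sketch (in Python; the particular values of $d$ and $m$ are arbitrary choices of the example) evaluates $\\mathcal{V}_{d,m}$ from the expression above and checks that it reduces to $v_d$ as $m\\to 0$:
\\begin{verbatim}
import numpy as np
from scipy.special import beta, gamma

def S(n):
    # surface area of the (n-1)-dimensional unit sphere, S^{n-1} = 2 pi^{n/2} / Gamma(n/2)
    return 2.0 * np.pi**(0.5 * n) / gamma(0.5 * n)

def V(d, m):
    # the constant V_{d,m} multiplying the threshold integral of the rescaled LPA equation
    return S(d - m) * S(m) * beta(0.5 * (d - m), 0.25 * m) / (16.0 * (2.0 * np.pi)**d)

def v(d):
    # the analogous constant of the standard O(N) models, v_d = [2^{d+1} pi^{d/2} Gamma(d/2)]^{-1}
    return 1.0 / (2.0**(d + 1) * np.pi**(0.5 * d) * gamma(0.5 * d))

d = 3.0
print("v_d =", v(d))
for m in (1.0, 0.5, 0.1, 0.01, 0.001):
    print("V_{d,m} at m =", m, ":", V(d, m))
# V_{d,m} approaches v_d as m -> 0; for m > 0 the value differs, but, being absorbable
# into the fields, it leaves only the effective dimensionality d_eff = d - m/2 behind.
\\end{verbatim}
Once $\\mathcal{V}_{d,m}$ is absorbed, $m$ enters Eq.~(\\ref{LPA_resc_2}) only through the combination $d-m\/2$, which makes the correspondence stated above explicit.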
Here we focus on the $\\nu_\\perp$ exponent describing the decay of the correlation function $G(\\vec{x}_\\perp, \\vec{x}_{||}=0)$. The analogous exponent $\\nu_{||}$ controlling $G(\\vec{x}_\\perp=0, \\vec{x}_{||})$ is related to $\\nu_\\perp$ via the scaling relation\\cite{Diehl_2002} \n\\begin{equation}\n\\nu_{||}=\\frac{2-\\eta_\\perp}{4-\\eta_{||}}\\nu_\\perp\\;.\n\\end{equation} \nIn the absence of anomalous dimensions we find $\\nu_{||} =\\frac{1}{2}\\nu_\\perp$. Other critical exponents are then also recovered via scaling relations.\\cite{Diehl_2002}\n\n The quantity $\\nu_\\perp^{-1}$ may be identified as the leading eigenvalue of the RG transformation of Eq.~(\\ref{LPA_resc}) [or Eq.~(\\ref{LPA_resc_2})] linearized around the fixed point. Technically, we first discretize Eq.~(\\ref{LPA_resc}) on the $\\trho$-grid (typically involving $\\approx 60$ points) and solve for the fixed point $u^*(\\trho)$. The RG equation Eq.~(\\ref{LPA_resc}) is then linearized around $u^*(\\trho)$ and its diagonalization yields the spectrum, which contains a single positive eigenvalue $\\lambda_\\nu$, which we identify with $\\nu_\\perp^{-1}$. For details on the numerical procedure, see Ref.~\\onlinecite{Chlebicki_2021}. For $m\\to 0$ we obviously recover the value pertinent to the standard $O(N)$ model. In the practical calculation we consider two families of cutoff functions: \n \\begin{align}\n r(y)= (1-y)\\theta (1-y)\\;\\;\\;\\; &\\textrm{(Litim cutoff)} \\\\ \n r(y)= \\alpha \\frac{y}{e^y-1} \\;\\;\\;\\; &\\textrm{(Wetterich cutoff)}\\;, \n \\label{cutoffs}\n \\end{align}\n the latter one involving a variable parameter $\\alpha$. The obtained value of $\\nu_\\perp^{-1}$ carries a weak dependence on $\\alpha$. In accord with the principle of minimal sensitivity\\cite{Canet_2003_2, Balog_2020} (PMS) one chooses $\\alpha$ so that $\\nu_\\perp^{-1}$ is locally stationary with respect to variation of $\\alpha$. Our results for $\\nu_\\perp$ depending on $N$, $d$ and $m$ are presented in Figs.~5-7 and compared with those obtained within the $\\epsilon=4+\\frac{m}{2}-d$ expansion and $\\frac{1}{N}$ expansion. The differences between the values obtained using the different cutoff functions are relatively small and here we present the results obtained using the Litim cutoff. \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{LPA_vs_eps.png}\n\\caption{The correlation length exponent $\\nu_\\perp$ for the uniaxial ($m=1$) Lifshitz point for $N=2$ plotted as a function of dimensionality $d$. The results obtained within the LPA approximation are superimposed with those resulting from the $\\epsilon=4\\frac{1}{2}-d$ expansion in Ref.~(\\onlinecite{Shpot_2001}) up to order $\\epsilon^2$. The two sets of points coincide in the vicinity of the upper critical dimension $d_u=4\\frac{1}{2}$, above which we recover the mean-field result $\\nu_\\perp=\\frac{1}{2}$. An increase of $\\nu_\\perp$ upon lowering $d$, indicating the expected divergence at the lower critical dimension $d_L=2\\frac{1}{2}$ is clearly visible in the LPA data. }\n\\label{LPA_vs_eps}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{d_dep.png}\n\\caption{The correlation length exponent $\\nu_\\perp$ for the $m$-axial Lifshitz point for $N=2$ plotted as a function of dimensionality $d$ for a sequence of values of $m$. The plot demonstrates the shift of the upper critical dimension $d_u$ as well as the growing degree of divergence occurring upon increasing $m$. 
The curves are all related by translations in the horizontal direction (see the main text). }\n\\label{d_dep}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{N_infty.png}\n\\caption{The (inverse) correlation length exponent $\\lambda_{\\nu}=\\nu_{\\perp}^{-1}$ obtained within the LPA approximation plotted as a function of $m$ for $d=3$ and a sequence of values of $N$. The plot demonstrates in particular the convergence of the results towards the (exact) limit $\\nu_{\\perp}^{-1}\\to d-2-\\frac{m}{2}$ for $N\\to\\infty$. Interestingly, our results indicate significantly faster convergence for $m$ large. }\n\\label{N_infty}\n\\end{center} \n\\end{figure} \n\nOur approach correctly reproduces the $1\/N\\to 0$ as well as $\\epsilon\\to 0$ limits and is applicable to a broad range of parameters in the $(d,m,N)$ space. At the present truncation level it is however not sufficient to correctly address the limit of dimensionality $d$ approaching $d_L$, which is dominated by the neglected anomalous dimensions (see Sec.~IVC for an extension in this direction). Nonetheless our LPA data indicates a rapid growth of $\\nu_\\perp$ upon lowering $d$ towards $d_L$ (see e.g. Fig.~\\ref{LPA_vs_eps}). \n\n\\subsection{Constrained LPA' and the anomalous dimensions}\nWe now extend the truncation described in Sec.~IVB to account for the anomalous dimensions in the simplest conceivable way. This amounts to treating the quantities $Z_{\\perp}$ and $Z_{||}$ in Eq.~(\\ref{Gs}) as scale-dependent (but not field-dependent) quantities, while disregarding the term $\\sim (\\nabla_{||}\\phi )^2$ (alike at the LPA level). The latter constitutes here an additional approximation. An analogous procedure for the case of isotropic $O(N)$ models is recognized as ''LPA' '' and yields, for the $d=3$ XY or Heisenberg universality classes a somewhat overestimated value of the anomalous dimension $\\eta$. For the present anisotropic situation (characterized by the effective dimensionality below 3) we may expect only a qualitative estimate of the values of $\\eta_\\perp$ and $\\eta_{||}$. Interestingly, we find nonetheless that the degree of violation of the correspondence discussed in Sec.~IVB upon including the anomalous dimensions is in fact very low in the physically interesting situations.\nAnother interesting point concerns the sign of $\\eta_{||}$. In this respect, for example in $(d,m,N)=(3,1,3)$ the $1\/N$ expansion (up to terms $\\sim 1\/N$) predicts\\cite{Shpot_2005, Shpot_2012} a positive value in contrast to the $\\epsilon$-expansion\\cite{Shpot_2001} as well as the nonperturbative RG study of Ref.~(\\onlinecite{Essafi_2012}). \n\nThe running anomalous dimensions are related to the flowing $Z$-factors via\\cite{Essafi_2012} $\\eta_{\\perp}=-\\frac{1}{Z_\\perp}\\partial_t Z_{\\perp}$ and $\\eta_{||}=-\\frac{1}{\\theta Z_{||}}\\partial_t Z_{||}$ with $\\theta = \\frac{2-\\eta_{\\perp}}{4-\\eta_{||}}$ being the anisotropy exponent. The flow of the effective potential is derived along the line of Sec.~IVB. 
We obtain: \n\\begin{align}\n\\partial_t \\tu_k =& - \\left(d-\\frac{m}{2}-\\frac{m}{4}(\\eta_\\perp-\\eta_{||}) \\right)\\tilde{u}_k \\nonumber \\\\ \n&-\\left(2+\\frac{m}{2}-d-\\eta_\\perp (1-\\frac{m}{4})-\\eta_{||}\\frac{m}{4} \\right)\\trho \\tu_k' \\nonumber \\\\\n&+\\mathcal{V}_{d,m}\\int_0^\\infty dy y^{\\frac{d}{2}-\\frac{m}{4}-1}\\times\\nonumber \\\\ \n&\\left[(2-\\eta_\\perp)r(y)-2yr'(y)\\right] \\left[\\frac{1}{y+\\tu_k'+2\\trho\\tu_k''+r(y)} +\\frac{N-1}{y+\\tu_k' +r(y)} \\right] \\nonumber \\\\\n&+\\mathcal{W}_{d,m}\\int_0^\\infty dy y^{\\frac{d}{2}-\\frac{m}{4}}\\times\\nonumber \\\\ \n&\\left[\\eta_\\perp -\\eta_{||}\\right]r'(y) \\left[\\frac{1}{y+\\tu_k'+2\\trho\\tu_k''+r(y)} +\\frac{N-1}{y+\\tu_k' +r(y)} \\right]\\;,\n\\label{LPA_prime_resc}\n\\end{align}\nwhere \n\\begin{equation}\n\\mathcal{W}_{d,m} =\\frac{\\mathcal{S}^{d-m-1}\\mathcal{S}^{m-1}}{16(2\\pi)^d}\\mathcal{B}\\left(\\frac{d-m}{2},\\frac{m}{4}+1\\right)\\;.\n\\end{equation}\nThe above flow equation for $u_k$ must be supplemented by the expressions for the running anomalous dimensions $\\eta_\\perp$ and $\\eta_{||}$. These are evaluated along the line well described in literature (see e.g. Ref.~\\onlinecite{Dupuis_2021}). By differentiating the Wetterich equation Eq.~(\\ref{Wetterich}) twice, we obtain the flow of the two-point function $\\Gamma^{(2)}$. Subsequently, by taking the second derivative with respect to momentum in the $\\perp$ direction and the fourth derivative with respect to momentum in the $||$ direction evaluated at vanishing momentum, we extract the flow of $Z_\\perp$ and $Z_{||}$, from which $\\eta_\\perp$ and $\\eta_{||}$ follow. The resulting expressions (especially for $\\eta_{||}$) are very lengthy and we refrain from quoting them here. \nThe physical anomalous scaling dimensions correspond to the fixed-point values of $\\eta_\\perp$ and $\\eta_{||}$, which we extract numerically. The data presented below corresponds to results obtained with the PMS-optimized Wetterich cutoff. We note that for a range of $d$ and $m$ corresponding to low effective dimensionalities we were not able ot obtain a PMS value of $\\alpha$, in which case we chose a value of $\\alpha$ corresponding to a global extremum over a range of considered values. As representative results, in Figs.~\\ref{eta_1} and \\ref{eta_2} we plot the obtained dependencies of $\\eta_\\perp$ and $\\eta_{||}$ with fixed $N=1$, varying $d$ and $m$. Our numerical values are larger as compared to those resulting from the $\\epsilon$ expansion which is probably due to both our truncation errors and the low order of the implemented $\\epsilon$ expansion. We point out that the sign of $\\eta_{||}$ is negative in the entire region of parameters considered by us. As concerns the limit $m\\to 0$, we observe convergence of both $\\eta_\\perp$ and $\\nu_\\perp$ to the anticipated values corresponding to the standard $O(N)$ models. The quantity $\\eta_{||}$ becomes a meaningless (redundant) parameter but does not vanish for $m\\to 0$. \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{eta_1_d.png}\n\\caption{The anomalous scaling dimension $\\eta_\\perp$ as function of $d$ for a sequence of values of $m$ and $N=1$. 
}\n\\label{eta_1}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{eta_2_d.png}\n\\caption{ The anomalous scaling dimension $\\eta_{||}$ as function of $d$ for a sequence of values of $m$ and $N=1$.}\n\\label{eta_2}\n\\end{center} \n\\end{figure} \n\nWe now investigate to which extent the relation between the Lifshitz point in $d$ dimensions and the $O(N)$ critical point in dimensionality $d_{eff}$ explored in Sec.~IVB becomes violated in presence of the anomalous dimensions. For this aim we plot the critical exponents as function of the effective dimensionality in Figs.~\\ref{eta_1_coll} and ~\\ref{eta_2_coll}. The collapse of the curves indicates a high level of agreement with the picture demonstrated IVB at the LPA level for all of the critical exponents (including the anomalous dimensions). \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{eta_1_d_collapse.png}\n\\caption{The anomalous scaling dimension $\\eta_\\perp$ as function of $d_{eff}$ for a sequence of values of $m$ and $N=1$. The collapse of the curves indicates approximate fulfillment of the correspondence between the Lifshitz and $O(N)$-symmetric critical behavior also in presence of the anomalous dimensions.}\n\\label{eta_1_coll}\n\\end{center} \n\\end{figure} \n\\begin{figure}[ht] \n\\begin{center}\n\\includegraphics[width=9cm]{eta_2_d_collapse.png}\n\\caption{ The anomalous scaling dimension $\\eta_{||}$ as function of $d_{eff}$ for a sequence of values of $m$ and $N=1$. The collapse of the curves indicates approximate fulfillment of the correspondence between the Lifshitz and $O(N)$-symmetric critical behavior also in presence of the anomalous dimensions.}\n\\label{eta_2_coll}\n\\end{center} \n\\end{figure} \nWe by no means expect this equivalence to remain exact beyond the LPA approximation, however the level of numerical agreement is striking. \n\nWe finally point out that the present LPA' level of approximation is the lowest possible allowing for capturing the anomalous dimensions. It would be very interesting to extend the present analysis by including $\\rho$-dependencies as well as accounting for the neglected $\\sim (\\nabla_{||}\\phi )^2$ term. We relegate this to future studies.\n\n\\section{Summary}\nIn this paper we addressed the restrictions on the stability of the long-range ordered pair density wave (FFLO superfluid) states arising due to the presence of a thermal Lifshitz point as predicted by the mean-field theory. We pointed out that the occurrence of these phases in isotropic systems (such as ultracold atoms in continuum) is in fact completely excluded both in dimensionality $d=2$ and $d=3$, except for zero temperature. In consequence, the corresponding phase diagram should generically host a quantum Lifshitz point. This is no longer the case in systems exhibiting a unidirectional anisotropy, where a Lifshitz point at $T>0$ may be stable in $d=3$ (but not in $d=2$). \n\nThe study of the FFLO superfluid Lifshitz point prompted us to readdress the Lifshitz critical behavior with arbitrary $d$, $m$ and $N$ from the point of view of functional renormalization group. We have found that at the approximation level of the local potential approximation (LPA), which amounts to disregarding the anomalous dimensions, the $m$-axial Lifshitz critical behavior is \\emph{exactly} equivalent to that describing the standard $O(N)$-symmetric critical point in effective dimensionality $d_{eff}=d-m\/2$. 
Our numerical analysis going beyond LPA level and accounting for $\\eta_\\perp$ and $\\eta_{||}$ indicates that this relation is only mildly violated also in this case. In particular, we have found that the anomalous dimension $\\eta_\\perp$ with a high level of accuracy coincides with the value of the $\\eta$ exponent of the corresponding $O(N)$ model in dimensionality reduced by $m\/2$. We obtained negative values of $\\eta_{||}$ for the entire range or scanned values of $(d, m, N)$. \n\nOur work opens avenues for future studies in at least two separate directions. On one hand, it would be interesting to explore thermodynamic and transport properties accompanying the vicinity of the fluctuation induced quantum Lifshitz point approaching it from finite $T$, in particular exploiting the interplay of order parameter and fermionic fluctuations. On the other hand, it might appear very fruitful to employ more sophisticated truncations of the Wetterich equation to further clarify the nature of the thermal Lifshitz points. \n\n\\begin{acknowledgments}\nWe are grateful to Hans Werner Diehl, Dominique Mouhanna, Pierbiagio Pieri, and Mykola Shpot for useful correspondence and remarks on the content of the manuscript. P. J. thanks Hiroyuki Yamase for numerous discussions on closely related topics. We acknowledge support from the Polish National Science Center via 2017\/26\/E\/ST3\/00211. \n\\end{acknowledgments}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:Introduction}\n\n\n\nRecent years have seen a number of efforts to apply photon pixel count statistics to gamma-ray data, in order to characterize populations of point sources (PSs) too faint to be individually detected at high significance (e.g. \\cite{Malyshev:2011zi, Lee:2014mza,Lee:2015fea, Linden:2016rcf, Lisanti:2016jub, Zechlin:2015wdz, Zechlin:2017wsy,Zechlin:2017uzo, Daylan:2016tia, Portillo_2017, Collin:2021ufc}). The general idea of these methods is to exploit the fact that an unmodeled PS population gives rise to non-Poissonian fluctuations in the number of photons per pixel, with ``hot spots'' corresponding to the locations of sources. Even if no individual hot spot is significant enough to be established as a PS with high probability, the distribution of fluctuations can be used to infer the properties of the population. These methods have been applied to characterize contributions to the extragalactic gamma-ray background (e.g.~\\cite{Lisanti:2016jub, Zechlin:2017wsy, Zechlin:2017uzo}) and to study inner Galaxy PS populations (e.g.~\\cite{Lee:2015fea, Linden:2016rcf, Calore:2021bty}); they have also been applied to other datasets, e.g. crowded optical fields \\cite{Portillo_2017} and high-energy neutrinos \\cite{IceCube:2019xiu}. \n\nInitially these methods focused on the case of isotropic PS populations, which is likely to be a good approximation for all-sky background radiation generated by a large ensemble of faint extragalactic sources. However, subsequent studies \\cite{Lee:2015fea, Daylan:2016tia, Collin:2021ufc} extended this approach to the case of source populations with an arbitrary spatial distribution.\n\nIn this work we focus on one such method, {\\it Non-Poissonian Template Fitting} (NPTF) \\cite{Lee:2014mza, Lee:2015fea,Mishra-Sharma:2016gis}, which has been applied in a range of contexts but particularly to study the Galactic Center Excess (GCE) in public data from the {\\it Fermi} Gamma-Ray Space Telescope (hereafter {\\it Fermi}). 
The GCE is an extended and roughly spherical (not disk-like) source of GeV-scale gamma rays filling the region within $1.5 \\text{ kpc}$ of the Galactic Center (GC) \\cite{Goodenough:2009gk,Hooper:2010mq, Hooper:2011ti,Hooper:2013rwa,Daylan:2014rsa,Calore:2014xka,TheFermi-LAT:2015kwa}. \n\nThe origin of the GCE has been the subject of active controversy for the past decade, with two explanations receiving the most attention. One possibility is that the GCE originates from diffuse particle dark matter (DM) undergoing annihilation (e.g. \\cite{Goodenough:2009gk,dan_foreman_mackey_2016_53155,Daylan:2014rsa,Karwin:2016tsw}), as the flux, energy spectrum, and spatial morphology of the GCE appear broadly consistent with a DM origin. If this hypothesis were confirmed, it would be a discovery of profound importance, representing the first evidence of non-gravitational interactions between DM and visible particles. However, the energy spectrum of the GCE also closely resembles that of gamma-ray pulsars observed by {\\it Fermi}, and a number of studies have found that the spatial morphology of the GCE is a closer match to the stellar bulge than to a DM annihilation signal \\cite{Macias:2016nev, Bartels:2017vsx, Macias:2019omb, Pohl:2022nnd}.\\footnote{However, other recent studies \\cite{DiMauro:2021raz,Cholis:2021rpp} have found the opposite preference; the result appears to be sensitive to how the Galactic background emission is modeled.} For these reasons, it seems plausible that the GCE represents the detection of a pulsar population in the Galactic bulge (e.g. \\cite{Abazajian:2012pn,Abazajian:2014fta,Hooper:2013nhl,Mirabal:2013rba,Calore:2014oga,Cholis:2014lta,Yuan:2014yda,OLeary:2015qpx,Ploeg:2017vai,Hooper:2018fih,Bartels:2018xom,Bartels:2018eyb}). If this population includes sources with brightness approaching the {\\it Fermi} sensitivity threshold, then NPTF methods have the potential to characterize at least the bright end of this new population, and provide strong evidence against the DM hypothesis.\n\nPrevious NPTF studies have claimed evidence for a GCE source population comprised of relatively bright and rare PSs \\cite{Lee:2015fea}, but recent studies have found that those claims may have been premature due to unaccounted-for systematic errors \\cite{Leane:2019xiy,Chang:2019ars,Buschmann:2020adf,Leane:2020nmi,Leane:2020pfc}. Other analyses have found a preference for a significant diffuse emission component \\cite{List:2020mzd, Calore:2021bty}, although this does not exclude the pulsar hypothesis, since the sources might simply be too faint to detect with current methods. At the same time, work on modeling the pulsar population in the bulge has suggested that plausible pulsar luminosity functions could generate very few {\\it Fermi} detected sources, while yielding an appreciable number of sources in the flux range potentially detectable by NPTF methods \\cite{Ploeg:2020jeh, Gautam:2021wqn, Dinsmore:2021nip} or related approaches using machine learning \\cite{List:2021aer,Mishra-Sharma:2021oxe}.\n\nGiven this uncertain situation, it is timely to understand how well NPTF can be expected to perform in detecting faint PS populations, and how this performance can be optimized by analysis choices. For example, many previous studies have chosen {\\it Fermi} event selections to optimize angular resolution, at the cost of exposure. While several studies have explored the effect on their results of varying the event selection (e.g. 
\\cite{Lisanti:2016jub, Leane:2020nmi, Leane:2020pfc}), this has not yet been done in a systematic way.\n\nIn this work, we systematically explore the ability of the public \\texttt{NPTFit} algorithm (as described in Ref.~\\cite{Mishra-Sharma:2016gis}) to reconstruct faint sources in simulated data, as a function of the instrument capabilities and analysis choices. We focus primarily on the analysis of the inner Milky Way, as relevant for the GCE, but also provide results for the simpler case where signal and background are both isotropic.\n\n\n\nWe begin in Sec.~\\ref{sec:analyticforms} by discussing how we expect the likelihood ratio in favor of a point-source population to behave, in a simplified approximate context that can be treated analytically, by approximating some or all of the relevant Poisson distributions as Gaussian. This approximation is not expected to hold in detail in the cases of greatest interest to us, but it is helpful for building intuition.\n\nIn Sec.~\\ref{sec:Methodology}, we then move on to our numerical study, starting by discussing the procedure by which we perform fits to the real {\\it Fermi} data to derive reasonable baseline estimates for the properties of the background model and PSs. We use these results to generate simulated data that is similar to the true gamma-ray sky as observed by {\\it Fermi}, using the public code \\texttt{NPTFit-Sim}, a package designed to simulate populations of unresolved PSs.\\footnote{\\url{https:\/\/github.com\/nickrodd\/NPTFit-Sim}} In this section we also discuss our methodology for fitting to simulated data, and the test statistic we will use to describe the sensitivity of NPTF methods to faint sources.\n\nIn Sec.~\\ref{sec:varyingdiffparams} we lay out the parameters we will vary in our simulations: exposure, angular resolution, energy window, pixel size, and source brightness. We describe the procedure for varying each of these parameters using \\texttt{NPTFit} and \\texttt{NPTFit-Sim}, including any associated modifications to the prior ranges.\n\nIn Sec.~\\ref{sec:isotropic} we perform an initial analysis and comparison between simulated data and our analytic approximations, in the simplified scenario where the PS and smooth contributions to the gamma-ray sky are both isotropic.\n\nWe then move on to a full realistic inner Galaxy analysis; conduct variations of the various analysis parameters, singly and in combination; and present the (numerical) results in Sec.~\\ref{sec:results}. In particular, we explore the individual effects of varying the exposure level and the point spread function (PSF), and map out the tradeoff when exposure level is increased (reduced) with the effect of worsening (improving) angular resolution, using the specific examples of {\\it Fermi} event classes sorted by angular resolution. Modifying the energy window varies the effective exposure, the PSF, and also (in real data) the relative amplitude of the various background and signal components; we explore these effects independently. We then demonstrate the effect of varying the brightness of the PSs while keeping the total flux of the population constant (as appropriate for hypothetical source populations that explain the bulk of the GCE). Finally, we examine the question of the optimal pixel size for NPTF analyses, exploring both the sensitivity to faint PSs and accuracy of the parameter reconstruction. 
\n\nIn Sec.~\\ref{sec:conclusion} we summarize our results and discuss some implications for NPTF analyses of {\\it Fermi} gamma-ray data in the inner Galaxy.\n\n\nIn Appendix~\\ref{app:detailedmethodology} we present further details of our simulation parameters and fitting methodology; in Appendix~\\ref{app:sharpSCF} we discuss the degree to which our source count functions model a single-brightness PS population;\nin Appendix~\\ref{app:isotropic} we show additional results for the simpler case where both signal and background are isotropic; and in Appendix~\\ref{app:modela} we show the results of using an alternative Galactic diffuse model as the basis for our simulations.\n\n\n\n\\section{Analytic approximations for non-Poissonian template fitting}\n\\label{sec:analyticforms}\n\nLet us begin by building some intuition for how the detectability of PSs is likely to scale in a NPTF-like setup. We will initially follow the approach of Ref.~\\cite{Leane:2020pfc}, essentially replacing the Poisson distributions with Gaussians; this will be a good approximation when the number of sources per pixel and number of counts\/source are both large, and can more generally provide qualitative insights into how various inputs affect the PS sensitivity.\n\nHere we will compute likelihoods and likelihood ratios as a measure of sensitivity, whereas in the numerical analysis of later sections we will perform a Bayesian analysis and evaluate Bayes factors. The Bayesian evidence is an integral of the likelihood weighted by the priors, and so loosely speaking we expect them to have qualitatively similar properties under variations of the source brightness, exposure, etc. However, our expressions for the likelihood ratios should not a priori be expected to accurately approximate the Bayes factors, since Bayes factors incorporate information from the priors (including the number of free parameters in the model), and the likelihood ratios do not.\\footnote{However, in practice, we will find that for our default choice of priors, the differences between the likelihood ratios and Bayes factors are small compared with other differences between the analytic and numerical results.}\n\\subsection{Pixel likelihood to observe $N$ photons} \\label{subsubsec:settingupanalytic}\n\nLet us first review some relevant results from Ref.~\\cite{Leane:2020pfc}. Consider a simplified scenario where our PS population model predicts $n_0$ sources per pixel, and all sources are identical, with an expected number of photons per source of $s$. For the moment, we will ignore leakage out of the pixel due to the non-trivial angular resolution, but as a first approximation the effect of such leakage would be to reduce $s$. We are interested in calculating the probability to observe $N$ photons in a pixel. \n\nIf we fix the number of observed sources (in a given pixel) to be $n$, then the total number of photons in the pixel will follow a Poisson distribution with mean $n s$. For $n s \\gg 1$, we can approximate this distribution as a Gaussian with a mean and variance of $n s$, via the Central Limit Theorem. Then the probability to observe $N$ photons is given approximately by:\n\\begin{equation}\n P(N|\\{n, s \\}) = \\frac{1}{\\sqrt{2 \\pi n s}} e^{-(N-n s)^{2}\/(2 n s)}\n \\label{eqn:p(n|nchis)}\n\\end{equation}\nAs a note, this expression can be thought of as a continuous probability density function (PDF), but also as a measure of the finite probability to observe $N$ photons by integrating the PDF over a bin of width $dN=1$ (i.e. 
the difference between adjacent values of $N$). Provided the PDF does not vary rapidly over the bin, this integral can simply be approximated by the value of the PDF at the center of the bin. We will use both interpretations of $P(N|\\{n,s\\})$ and similar quantities in the following calculations.\n\n\n\nThis distribution function is convolved with $P(n|n_{0})$, a distribution that describes the probability of drawing $n$ sources given that the expected number of sources is $n_{0}$. The resulting function, which we denote $P(N|{n_0, s})$, describes the likelihood of obtaining $N$ photons given that the number of sources is described by a Poisson distribution with an expectation value of $n_{0}$. If the number of sources is large, we can also approximate the distribution that describes the number of sources with a Gaussian with mean and variance $n_{0}$. Furthermore, the integrand is dominated by the region where $n s \\approx N$, so we can set $n s \\approx N$ except where $N- n s$ appears in an exponent. \n\nThese approximations yield the following equation for the probability to observe $N$ photons given $n_0$ and $s$:\n\n\\begin{align}\n P(N|\\{n_{0},s \\}) &= \\int dn P(N |\\{n, s\\}) P(n|n_{0}) \\nonumber \\\\\n &\\approx \\int dn \\frac{1}{\\sqrt{2 \\pi N}} e^{-(n s -N)^{2}\/(2N)} \\nonumber \\\\ & \\times \\frac{1}{\\sqrt{2 \\pi n_{0}}}e^{-(n_{0}-n)^2\/(2 n_{0})}\n \\label{eqn:convolution}\n\\end{align}\n\nThe integral over $n$ can be performed analytically and takes a simple form, if we assume the peak in the integrand is sufficiently far away from the limits of integration that we can take those limits to $\\pm \\infty$ without affecting the result. Furthermore, around the peak of the probability distribution we have $N \\approx n_{0} s$, so we can approximate $N\\approx n_0 s$ except when $N - n_0 s$ appears in an exponent. These approximations yield:\n\\begin{equation}\n\\begin{split}\n P(N|\\{n_{0},s \\}) &\\approx \\frac{1}{\\sqrt{2 \\pi (N+n_{0} (s)^{2})}} e^{\\frac{-(N-n_{0} s)^{2}}{2(N+n_{0} s^{2})}}\\\\\n &\\approx \\frac{1}{\\sqrt{2 \\pi n_{0} s(1+ s)}} e^{\\frac{-(N-n_{0} s)^{2}}{2 n_{0} s (1+ s)}}\n\\end{split}\n\\label{eqn:integraresult}\n\\end{equation}\n\n\n\nThat is, under these approximations the probability of observing $N$ photons takes a Gaussian form (at least near the peak of the distribution), but with an inflated variance of $n_0 s (1 + s)$, a factor of $(1+s)$ greater than the expectation value of $n_0 s$.\n\n\n\nIf $s \\ll 1$, our model corresponds to a very faint source population that should be indistinguishable from diffuse emission. In this case, we recover the standard Gaussian approximation to the Poisson distribution, with equal mean and variance of $n_0 s$, \n\n\\begin{equation}\n P(N|\\{n_{0}, s \\}) \\approx \\frac{1}{\\sqrt{2 \\pi n_{0} s}} e^{\\frac{-(N-n_{0} s)^{2}}{2 n_{0} s }}\n\\label{eqn:poissondistribution}\n\\end{equation}\n\nThus the characteristic feature of a PS population (within these approximations) is an enhanced variance, by a factor of $1+s$. \n\nIn the event that the number of counts per source satisfies $s \\gg 1$ but the number of sources per pixel $n_0 \\lesssim 1$, a more refined approximation for the distribution may be useful (going beyond the results of Ref.~\\cite{Leane:2020pfc}). 
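Before turning to that refinement, the accuracy of the Gaussian form in Eq.~\\ref{eqn:integraresult} is easy to probe numerically. The short sketch below (a minimal illustration assuming only \\texttt{numpy}; the parameter values are arbitrary rather than taken from any fit) draws pixel counts from the exact compound Poisson process and compares the sample mean and variance with the predicted $n_0 s$ and $n_0 s (1+s)$.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def draw_pixel_counts(n0, s, n_trials=200000):
    # Number of sources in each simulated pixel; given n sources of mean
    # brightness s, the summed counts are Poisson-distributed with mean n*s.
    n_sources = rng.poisson(n0, size=n_trials)
    return rng.poisson(n_sources * s)

n0, s = 5.0, 10.0   # illustrative values only
counts = draw_pixel_counts(n0, s)
print("sample mean    :", counts.mean(), " expected:", n0 * s)
print("sample variance:", counts.var(),  " expected:", n0 * s * (1 + s))
\\end{verbatim}

The sample mean and variance reproduce $n_0 s$ and the inflated value $n_0 s (1+s)$; the Gaussian shape itself is only a good approximation when both the number of sources and the counts per source are reasonably large. We now return to the refined treatment appropriate for $n_0 \\lesssim 1$.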
If $i$ sources are drawn in a given pixel ($i$ being an integer), the number of counts from those sources will be Poisson-distributed with expectation $s i$; for $s\\gg 1$ and $i > 0$, we can approximate each of these individual distributions as a Gaussian with mean and variance $s i$, so overall we have:\n\\begin{equation}\n P(N|\\{n_{0}, s \\}) \\approx p_0(n_0) \\delta_{N,0} + \\sum_{i=1}^\\infty p_i(n_0) \\frac{1}{\\sqrt{2 \\pi s i}} e^{\\frac{-(N- s i)^{2}}{2 s i}},\n\\label{eqn:doublepoissondistribution}\n\\end{equation}\nwhere $p_i(n_0)$ is the Poisson probability of drawing $i$ sources when $n_0$ are expected.\n\n\n\\subsection{Likelihood ratio between models (Gaussian approximation)}\nNow suppose the true underlying model for a given pixel yields a Gaussian distribution for $N$ with mean $X$ and variance $\\sigma^2$. We wish to evaluate the expected log likelihood ratio between the correct model and an alternative model that predicts mean $Y$ and variance $\\tau^2$. We will denote these models respectively as $(X,\\sigma^2)$ and $(Y,\\tau^2)$. This result has been computed previously in Ref.~\\cite{Leane:2020pfc}; we review it here.\n\nFor context, the correct model might represent a linear combination of a PS population and a diffuse signal, while the alternative model allows only for a diffuse signal; the expected log likelihood ratio in this case then gives a measure of how well we will be able to exclude the all-diffuse model and thus detect the PS population. We will work out the case for general $(X,Y,\\sigma^2,\\tau^2)$ first, under the approximation where all the relevant probability distributions are Gaussian, and then apply this general result to several scenarios in which we might wish to detect PS populations. \n\n\n\nFor a single pixel, the probability of finding $N$ photons predicted by the model $(Y,\\tau^2)$ is:\n\\begin{equation}\n \\mathcal{L} = P(N|\\{Y,\\tau^2 \\}) = e^{[-(N-Y)^{2}]\/(2 \\tau^{2})}\/\\sqrt{2 \\pi \\tau^{2}}\n \\label{eqn:likelihoodeqn}\n\\end{equation}\ncorresponding to a log likelihood of $\\ln{\\mathcal{L}} = -\\frac{(N-Y)^2}{2\\tau^2} - \\frac{1}{2} \\ln{2\\pi \\tau^2}$. To get the expected value of the log likelihood, we can integrate against the true distribution of $N$, i.e. $P(N|\\{X,\\sigma^2\\})$. This yields \\cite{Leane:2020pfc}:\n\\begin{equation}\n \\left \\langle \\ln{\\mathcal{L}}(Y,\\tau^2) \\right \\rangle = -\\frac{\\left[(X-Y)^2 + \\sigma^2\\right]}{2\\tau^2} - \\frac{1}{2} \\ln(2\\pi\\tau^2)\n \\end{equation}\n\n\n\nNow we diverge from Ref.~\\cite{Leane:2020pfc}, which focused on determining the best-fit choice for $\\tau^2$ given a discrepancy between $X$ and $Y$. Let us instead simply examine the expected $\\Delta \\ln\\mathcal{L}$ between the fitted model $(Y,\\tau^2)$ and the best-fit model $(X,\\sigma^2)$, which is given by:\n\\begin{align}\n \\left \\langle \\Delta \\ln{\\mathcal{L}} \\right \\rangle & \\equiv\n \\left \\langle \\ln{\\mathcal{L}}(X,\\sigma^2) - \\ln{\\mathcal{L}}(Y,\\tau^2) \\right \\rangle \\nonumber \\\\\n & =\\frac{\\left[(X-Y)^2 + \\sigma^2\\right]}{2\\tau^2} + \\frac{1}{2} \\ln(2\\pi\\tau^2) - \\frac{1}{2} - \\frac{1}{2} \\ln(2\\pi\\sigma^2) \\nonumber \\\\\n & = \\frac{\\left[(X-Y)^2 + \\sigma^2\\right]}{2\\tau^2} - \\frac{1}{2} \\left[1 + \\ln\\left(\\frac{\\sigma^2}{\\tau^2}\\right) \\right]\n \\end{align}\n \n If both models produce a very similar expected number of photons, i.e. 
$X\\approx Y$, and differ only in their variances, then this result can be simplified to:\n \\begin{align}\n \\left \\langle \\Delta \\ln{\\mathcal{L}} \\right \\rangle \n & = \\frac{\\sigma^2}{2\\tau^2} - \\frac{1}{2} \\left[1 + \\ln\\left(\\frac{\\sigma^2}{\\tau^2}\\right) \\right] \\label{eqn:bayesfactor}\n \\end{align}\n \n Note however that if $Y$ and $\\tau^2$ are allowed to vary within certain limits or while satisfying certain conditions, then it is not guaranteed that the best-fit point lies at $Y=X$; if the global likelihood maximum (at $Y=X$, $\\tau^2=\\sigma^2$) cannot be attained, then the best-fit value of $Y$ will depend on the value of $\\tau^2$ (and vice versa). Most simply, this can occur when the model is Poissonian, in which case $\\tau^2$ is fixed to $Y$, but the data has non-Poissonian components and so $\\sigma^2$ differs from $X$ in the true underlying model. A related scenario, studied in Refs.~\\cite{Leane:2020nmi, Leane:2020pfc}, occurs when the model requires the same value of $Y$ in multiple pixels but the true underlying model varies across those pixels; this leads to a best-fit model variance $\\tau^2$ that differs from the true underlying variance $\\sigma^2$ (possibly leading to misattribution of the enhanced variance to a PS population).\n\n\n\n\n\\subsection{Variance between realizations (Gaussian approximation)}\n\nIn addition to working out the expected log likelihood ratio as a measure of sensitivity to incorrect modeling (such as attempting to describe PSs with a Poissonian template), it is helpful to understand the expected variability in this ratio between different realizations. In the limit where the number of pixels is large, the total $\\Delta \\ln\\mathcal{L}$ for the image is the sum of many independent random variables ($\\Delta \\ln\\mathcal{L}$ for each pixel), and so is expected to follow a Gaussian probability distribution by the Central Limit Theorem (even if the probability distribution for $\\Delta \\ln\\mathcal{L}$ in a single pixel is highly non-Gaussian). Consequently, in this limit, we expect the distribution of the total $\\Delta \\ln\\mathcal{L}$ (summed over pixels) to be well-characterized by its expectation value and variance.\n\nAs in the previous subsection, we will work out the result initially for general choices of the PDF parameters for the true and alternative hypotheses, $(X,Y,\\sigma^2,\\tau^2)$. We will then apply these results to specific scenarios, in particular where the true model (described by $(X,\\sigma^2)$) includes a PS component but the alternative model (described by $(Y,\\tau^2)$) does not.\n\n\nWe can estimate the variance of $\\Delta \\ln \\mathcal{L}$ by evaluating $\\text{Var}(\\Delta\\ln \\mathcal{L}) \\equiv \\langle (\\Delta \\ln\\mathcal{L})^2 \\rangle - \\langle \\Delta \\ln\\mathcal{L} \\rangle^2$. Let us first focus on the case where $\\sigma \\gg \\tau$ and $\\left \\langle \\Delta \\ln{\\mathcal{L}} \\right \\rangle \\gg 1$ and so the first term dominates in Eq.~\\ref{eqn:bayesfactor}. 
This can occur, for example, where there is a bright PS population inducing a large variance $\\sigma^2 \\gg X$, which cannot be replicated by an alternative model based solely on diffuse emission with Poissonian statistics; in that sense this is a high-detectability limit.\n\n\nThen using the estimates above and again taking $X\\approx Y$, we find that:\n\\begin{align} \\langle (\\Delta \\ln\\mathcal{L})^2 \\rangle &\\approx \\int dN \\left[-\\frac{(N-X)^2}{2\\tau^2} + \\frac{(N-X)^2}{2\\sigma^2}\\right]^2 \\nonumber \\\\\n& \\times P(N|\\{X,\\sigma^2\\}) \\nonumber \\\\\n& \\approx \\frac{3}{4} \\left(\\frac{\\sigma}{\\tau}\\right)^4,\\end{align}\nand thus:\n\\begin{align} \\text{Var}(\\Delta\\ln \\mathcal{L}) & \\approx \\frac{3}{4} \\left(\\frac{\\sigma}{\\tau}\\right)^4 - \\frac{1}{4} \\left(\\frac{\\sigma}{\\tau}\\right)^4 \\nonumber \\\\\n& = \\frac{1}{2} \\left(\\frac{\\sigma}{\\tau}\\right)^4.\\end{align}\nThus we expect the standard deviation in this regime to be:\n\\begin{align}\\text{std}(\\Delta\\ln \\mathcal{L}) & \\approx \\frac{1}{\\sqrt{2}}\\frac{\\sigma^2}{\\tau^2} \\nonumber \\\\\n&\\approx \\sqrt{2} \\langle \\Delta \\ln \\mathcal{L}\\rangle.\\end{align}\nWe see that we generically expect the scatter in $ \\Delta \\ln\\mathcal{L}$ (from a single pixel) to be of the same order as its expected value. When combining $n_\\text{pix}$ pixels, the expectation value and variance are both enhanced by a factor of $n_\\text{pix}$, so the standard deviation should be suppressed relative to the expectation value by a factor of $1\/\\sqrt{n_\\text{pix}}$.\n\nIn this high-detectability, purely Gaussian case, there is actually a simple analytic expression for the full PDF of $\\Delta \\ln \\mathcal{L}$,\n\\begin{equation} P(\\Delta \\ln \\mathcal{L} = x) = \\frac{1}{\\sqrt{\\pi x \\delta}} e^{-x\/\\delta}, \\, x \\ge 0, \\end{equation}\nwhere $\\delta \\equiv (\\sigma^2\/\\tau^2) - 1$. It can be readily checked that this distribution reproduces the expectation value and variance given above for $\\delta \\gg 1$. Note that this distribution is not at all Gaussian; however, as discussed above, combining a large number of pixels and summing their $\\Delta \\ln \\mathcal{L}$ contributions is expected to give an approximately Gaussian PDF by the Central Limit Theorem.\n\nIf we instead consider the low-detectability case where $\\tau^2 \\approx \\sigma^2$, i.e. $\\delta = (\\sigma^2\/\\tau^2) -1 \\ll 1$, then we obtain:\n\\begin{align} \\langle (\\Delta \\ln\\mathcal{L})^2 \\rangle & \\approx \\int dN \\left[-\\frac{(N-X)^2}{2\\tau^2} + \\frac{(N-X)^2}{2\\sigma^2}\\right.\\nonumber \\\\\n& \\left. - \\frac{1}{2}\\ln \\frac{\\tau^2}{\\sigma^2} \\right]^2 P(N|\\{X,\\sigma^2\\}) \\nonumber \\\\\n& \\approx \\delta^2 \\int dN \\left[- \\frac{(N-X)^2}{2\\sigma^2} + \\frac{1}{2} \\right]^2 P(N|\\{X,\\sigma^2\\}) \\nonumber \\\\\n& \\approx \\delta^2\/2.\\end{align}\nIn the same limit, \n\\begin{align} \\langle \\Delta \\ln\\mathcal{L} \\rangle &\\approx \\delta^2\/4.\\end{align}\nThus for $\\delta \\ll 1$, the first term dominates the variance and we have:\n\\begin{align} \\text{Var}(\\Delta\\ln \\mathcal{L}) & \\approx \\delta^2\/2 \\approx 2 \\langle \\Delta \\ln\\mathcal{L} \\rangle.\\end{align}\nThus in this case the square root of the variance is parametrically enhanced (by a factor of $1\/\\delta$) relative to the expectation value. 
The variance and expectation value are parametrically similar and will both be enhanced by a factor of $n_\\text{pix}$ when multiple pixels are combined, and so in this regime the standard deviation (square root of the variance) should be of the same order as the square root of the expectation value.\n\nNow we will apply these results to estimate the expected log likelihood ratio between a model containing PSs and one that omits them, when a real population of PSs is present in the data. This $\\Delta \\ln \\mathcal{L}$ will tell us the confidence level with which we expect to be able to exclude the model with no PSs, and hence the confidence level for PS detection. It is similar to the metric we will use for sensitivity to a PS population in our numerical studies.\n\n\\subsection{Single component (100 \\% PS emission)} \\label{subsubsec:case1analytic}\n\nLet us begin by assuming that the data is completely described by a PS population (of identical sources, as described above) without any contribution from a smooth background source. The PS emission has a mean and variance approximated by $(X,\\sigma^{2}) = (N, N (1+s))$, where $s$ is the number of photons per source and $N$ the total number of photons. \n\nLet us consider the expected $\\Delta \\ln \\mathcal{L}$ between the correct PS-based model, and a model that includes only smooth emission, but which correctly predicts the expected number of photons $N$. Such a smooth model must have equal mean and variance, so we must have $(Y,\\tau^{2}) = (N,N)$.\n\n\n\nUsing Eq.~\\ref{eqn:bayesfactor}, we plug in these parameters and obtain:\n\n\\begin{equation}\n \\langle \\Delta \\ln{\\mathcal{L}}\\rangle \\approx \\frac{1}{2} \\left[s - \\ln\\left(1+ s\\right) \\right]\n \\label{eqn:case1naturallogbf}\n\\end{equation}\nNote that {\\it all} dependence on the total number of photons $N$ has canceled out; only the number of photons per source is relevant. In particular, this property ensures the likelihood ratio will go to 1 when $s \\ll 1$ as required (since this corresponds to the limit of many very faint sources, at which point the smooth model is perfectly adequate), even if the number of sources is very large. Specifically, at small $s$ we have $\\langle \\Delta \\ln{\\mathcal{L}}\\rangle \\approx (s\/2)^2$.\n\nHowever, this behavior also has the perhaps-surprising implication that having {\\it more} sources (and hence more photons) of fixed brightness in a single pixel neither increases nor decreases the PS sensitivity based on the pixel likelihood, at least once the numbers are large enough that the relevant likelihoods can be approximated as Gaussian. \n\nThe leading order behavior of this function at large $s$ is $\\langle \\Delta \\ln{\\mathcal{L}}\\rangle \\approx s\/2$, i.e. the log likelihood in favor of PSs grows linearly with the brightness of the sources. Since the number of photons seen from a given source is directly proportional to the exposure (i.e. time viewing the source multiplied by the effective area of the instrument), we expect that (at least in this background-free case) $\\langle \\Delta \\ln{\\mathcal{L}}\\rangle$ will also grow linearly with exposure. The normalization factor here is also familiar. 
$2\\Delta \\ln\\mathcal{L} \\approx s$ is often used as a test statistic, whose square root translates to the significance in sigma; thus roughly speaking, we expect the detection significance of the PS population (measured in sigma) from a given pixel to approach $\\sqrt{s}$ for large $s$.\n\n\n\nIf we do not impose the condition that the expected number of photons is $N$, we can maximize the likelihood for this model under the condition $Y=\\tau^2$, obtaining:\n\\begin{equation}Y_\\text{optimal} = \\frac{1}{2} \\left(\\sqrt{4 X^2 + 4\\sigma^2 + 1} -1\\right).\\end{equation}\nFor the case at hand, this yields $Y_\\text{optimal} = \\frac{1}{2} \\left(\\sqrt{4 N^2 + 4 N(1+s) + 1} -1\\right)$. If $N \\gg s$, $N \\gg 1$ (i.e. the number of both sources and photons is large, consistent with our Gaussian approximations), then to a good approximation $Y_\\text{optimal} \\approx N$ and the estimates above should be reasonable. It is also true in practice, in NPTF analyses of the GCE, that the total photon flux associated with the best-fit GCE model is typically very similar when comparing the fits with and without a model for GCE PSs (e.g. \\cite{Lee:2015fea}).\n\n\\subsection{Generalization to arbitrary ratio of PS and smooth emission}\n\nNow let us consider the scenario in which a fraction $k$ of the emission is associated with PSs and the remainder with smooth emission. We seek to evaluate $\\langle \\Delta \\ln{\\mathcal{L}}\\rangle$ between the best-fit model (corresponding to the truth) and the model with only smooth emission.\n\nThe total number of predicted photons is the sum of the predicted photons associated with each component. The sum of two Gaussian-distributed random variables is also Gaussian-distributed, with mean (variance) given by the sum of the means (variances) for the individual distributions. Thus within our approximations, the best-fit model (matching the truth) has a Gaussian probability distribution for the number of photons $N$ with parameters $(X,\\sigma^{2})=(kN+(1-k)N,kN(1+ s)+(1-k)N)$. \n\nThe model with only smooth components that matches the total number of photons has (as in our previous example) $(Y,\\tau^{2}) = (N,N)$. \n\nUsing Eq.~\\ref{eqn:bayesfactor}, we obtain:\n\n\\begin{equation}\n \\langle \\Delta\\ln \\mathcal{L}\\rangle \\approx \\frac{k}{2} s +\\ln{\\left(\\frac{1}{\\sqrt{1+k s}}\\right)}\n \\label{eqn:lnBFcase2}\n\\end{equation}\n\nThus the effect of a non-zero background fraction on the sensitivity is equivalent to rescaling the photon flux of individual sources. In this case, the change in scaling behavior from $\\langle \\Delta\\ln \\mathcal{L}\\rangle \\propto s^2$ to $\\langle \\Delta\\ln \\mathcal{L}\\rangle \\propto s$ will occur parametrically around $k s \\sim 1$. It is worth noting that if there are a large number of pixels, a significant detection may be consistent with $k s \\ll 1$ from every individual pixel, and in this case we should expect a faster-than-linear scaling of the log likelihood ratio with increasing $s$ (or $k$).\n\n\\subsection{A more accurate probability distribution: accounting for rare sources}\n\nWe can also compute the expected value of the likelihood ratio and its variance, between two Gaussian models, if the underlying ``true'' probability distribution is given by Eq.~\\ref{eqn:doublepoissondistribution}, for the case $n_0 \\lesssim 1$ where the Gaussian approximations break down. 
For a Gaussian distribution with mean $X$ and variance $\\sigma^2$, we find:\n\n\\begin{align} \\langle \\ln \\mathcal{L}\\rangle & = \\int dN \\left[-\\frac{(N-X)^2}{2\\sigma^2} - \\frac{1}{2} \\ln(2\\pi\\sigma^2)\\right] P(N|n_0, s) \\nonumber \\\\\n& = p_0(n_0) \\left[-\\frac{X^2}{2\\sigma^2} - \\frac{1}{2} \\ln(2\\pi\\sigma^2)\\right] \\nonumber \\\\\n& + \\sum_{i=1}^\\infty p_i(n_0) \\int dN \\left[-\\frac{(N-X)^2}{2\\sigma^2} - \\frac{1}{2} \\ln(2\\pi\\sigma^2)\\right] \\nonumber \\\\\n&\\times \\frac{1}{\\sqrt{2\\pi i s}} e^{-(N - i s)^2\/2 i s} \\nonumber \\\\\n& = -\\frac{1}{2} \\sum_{i=0}^\\infty p_i(n_0) \\left[\\frac{(i s -X)^2+ i s}{\\sigma^2} + \\ln(2\\pi\\sigma^2)\\right], \\end{align}\nwhere again we have made the approximation of taking the limits of integration to $\\pm \\infty$, relying on $i s \\gg 1$ for $i \\ge 1$ (so that the Gaussians are centered well away from the limits of integration). Now the infinite sums over $i$ can be computed by using the fact that the Poisson probabilities $p_i(n_0) = n_0^i e^{-n_0}\/i!$ satisfy $\\sum_{i=0}^\\infty p_i(n_0) = 1$. In particular, by relabeling dummy indices the following identities can easily be proved:\n\\begin{align} \\sum_{k=0}^\\infty k p_k(n_0) & = n_0,\\nonumber \\\\\n\\sum_{k=0}^\\infty k^2 p_k(n_0) & = n_0 (n_0 + 1). \\nonumber \\\\\n\\sum_{k=0}^\\infty k^3 p_k(n_0) & = n_0^3 +3 n_0^2 + n_0, \\nonumber \\\\\n \\sum_{k=0}^\\infty k^4 p_k(n_0) & = n_0^4 + 6 n_0^3 + 7 n_0^2 + n_0. \\label{eqn:poissonidentities}\n\\end{align}\n\nApplying these results we find:\n\\begin{align} \\langle \\ln \\mathcal{L}\\rangle & = - \\frac{1}{2} \\left[\\ln(2\\pi \\sigma^2) + \\frac{X^2 + s^2 n_0 (n_0 + 1) + s n_0 (1 - 2 X)}{\\sigma^2}\\right] \\end{align}\n\nIn particular, if we hold the variance constant then the likelihood is maximized for $X=n_0 s$, and if we set $X=n_0 s$ (i.e. the model matches the expected total number of photons), then we obtain:\n\\begin{align} \\langle \\ln \\mathcal{L}\\rangle & = - \\frac{1}{2} \\left[\\ln(2\\pi \\sigma^2) + \\frac{n_0 s(1+s)}{\\sigma^2}\\right] \\end{align}\nThe likelihood is then maximized for $\\sigma^2 = n_0 s (1+s)$, which is the same variance we found when we directly approximated the probability distribution for the point-source population as Gaussian.\n\nIf we examine the expected $\\Delta \\ln \\mathcal{L}$ between this best-fit Gaussian model and a Gaussian model with $\\sigma^2 = X = n_0 s$ (representing a purely diffuse signal), we find:\n\\begin{align} \\langle \\Delta \\ln \\mathcal{L}\\rangle & = - \\frac{1}{2} \\left[\\ln(1+s) + \\frac{n_0 s(1+s)}{n_0 s(1+s)} - \\frac{n_0 s(1+s)}{n_0 s}\\right] \\nonumber \\\\\n& = \\frac{1}{2} \\left[s - \\ln(1+s)\\right] \\end{align}\nRemarkably, this is exactly the same result we found under the Gaussian approximation for the underlying probability distribution (Eq.~\\ref{eqn:case1naturallogbf}), suggesting that this result is quite robust even when the assumptions needed to justify the Gaussian approximation break down.\n\nAs previously, we can generalize to the case where PSs constitute a fraction $k$ of the total emission, so the total expected photon count is $n_0 s\/k$ with an expected number of $(1-k) n_0 s\/k$ photons originating from diffuse emission. In this case the probability distribution for $N$ given in Eq.~\\ref{eqn:doublepoissondistribution} must be updated accordingly. 
As previously, we approximate the probability distribution for the number of photons from diffuse emission as a Gaussian with mean and variance $(1-k) n_0 s\/k$; for each choice $i$ for the number of sources drawn, the emission from sources (mean and variance $s i$) can be added to that from diffuse emission by the usual prescription for the sum of normally-distributed random variables (i.e. the means and variances add). Thus the overall distribution becomes:\n\\begin{align} & P(N|n_0, s, k) \\approx \\nonumber \\\\\n& \\sum_{i=0}^\\infty p_i(n_0) \\frac{e^{-(N- s (i + n_0(1\/k- 1)))^2\/\\left[2 s (i + n_0(1\/k- 1))\\right]}}{\\sqrt{2 \\pi s (i + n_0(1\/k- 1))}}, \\label{eqn:doublepoissonmixed}\\end{align}\nwhere as previously $p_i(n_0)$ is the Poisson probability of drawing $i$ sources when $n_0$ are expected.\n\nUnder this distribution, if we compute the expected likelihood of a Gaussian model with mean $X$ and variance $\\sigma^2$, we find (by the same methods as previously):\n\\begin{align} \\langle \\ln \\mathcal{L}\\rangle \n& = - \\frac{1}{2} \\left[ \\frac{1}{\\sigma^2} \\left(X^2 - \\frac{2 X n_0 s}{k} \\right. \\right. \\nonumber \\\\\n& \\left. \\left. + \\frac{n_0 s(k + s (k^2 + n_0))}{k^2} \\right) + \\ln(2\\pi \\sigma^2) \\right] \\end{align}\nFor fixed $\\sigma^2$, this is maximized for $X=n_0 s\/k$ (as expected, when the model matches the total number of counts); if we fix $X=n_0 s\/k$, then we obtain:\n\\begin{align} \\langle \\ln \\mathcal{L}\\rangle = - \\frac{1}{2} \\left[ \\frac{1}{\\sigma^2} \\left(\\frac{n_0 s (ks + 1)}{k} \\right) + \\ln(2\\pi \\sigma^2 ) \\right]. \\end{align}\nThe expected log likelihood difference between the purely diffuse Gaussian model with $\\sigma^2=X= n_0 s\/k$ and the Gaussian model with $\\sigma^2=(n_0 s \/ k) (k s + 1)$ (matching our previous prescription in the case of mixed PS and smooth emission) is then given by,\n\\begin{align} \\langle \\Delta \\ln \\mathcal{L} \\rangle = \\frac{1}{2} \\left[k s + \\ln \\frac{1}{1+k s} \\right], \\end{align}\nexactly as previously.\n\nHowever, while the expected log likelihood is unchanged by shifting to this modified probability distribution, the variance differs. Working in the limit where the log terms in the delta log likelihood can be ignored, let us examine the variance of the delta log likelihood between the Gaussian models with $\\sigma^2= n_0 s\/k$ and $\\sigma^2=(n_0 s \/ k) (k s + 1)$. In both cases we take $X=n_0 s\/k$. Then using the identities in Eq.~\\ref{eqn:poissonidentities}, we obtain:\n\\begin{align}\n\\text{Var}(\\Delta \\ln \\mathcal{L}) & = \\langle (\\Delta \\ln \\mathcal{L})^2 \\rangle - \\langle \\Delta \\ln \\mathcal{L} \\rangle^2 \\nonumber \\\\\n & \\approx k^2 \\frac{s(s+6) + 3}{4 n_0} \\left( \\frac{ks}{k s + 1}\\right)^2 + \\frac{1}{2} (k s)^2 \\nonumber \\\\\n & \\rightarrow \\langle \\Delta \\ln \\mathcal{L}\\rangle^2 \\left(\\frac{1}{n_0} + 2 \\right), \\quad k s \\gg 1 . \\end{align}\n\nIn particular, we observe that there is now an additional term in the variance which scales as $1\/n_0$. Consistent with our previous calculation, this term will be negligible when $n_0 \\gg 1$ and our original Gaussian approximation holds, but it can lead to a significant enhancement to the variance when $n_0\\ll 1$. 
In particular, for $k s \\gg 1$ and $n_0 \\ll 1$, we expect that the variance over the full dataset can be approximated as:\n\\begin{align}\\text{Var}(\\Delta \\ln \\mathcal{L}) & \\approx n_\\text{pix} \\langle \\Delta \\ln \\mathcal{L}\\rangle^2_\\text{per pixel} \/n_0 \\nonumber \\\\\n& \\approx \\langle \\Delta \\ln \\mathcal{L}\\rangle^2_\\text{overall} \/n_0 n_\\text{pix}, \\nonumber \\\\\n\\Rightarrow \\text{std}(\\Delta \\ln \\mathcal{L}) & \\approx \\frac{\\langle \\Delta \\ln \\mathcal{L}\\rangle_\\text{overall}}{\\sqrt{n_0 n_\\text{pix}}}.\\end{align}\nThus we see that in this case, rather than the suppression of $1\/\\sqrt{n_\\text{pix}}$ that we found earlier (for the high-$k s$ case), instead the suppression is only $1\/\\sqrt{n_\\text{tot}}$, where $n_\\text{tot}=n_0 n_\\text{pix}$ is the total number of PSs in the image.\n\nBroadly speaking, the standard deviation in $\\Delta \\ln\\mathcal{L}$ is always related to $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ by a factor of $1\/\\sqrt{A}$, but $A$ can be either the number of pixels, the number of PSs (when the number of sources per pixel is small), or the test statistic $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ itself (when the contribution to the test statistic per pixel is small). In the examples we have checked, it is always the smallest of these three parameters that dominates the variance, which is intuitively sensible. \n\nNote that in particular this means the variance can be much larger than one might naively estimate from the square root of the test statistic; if the number of sources is only $\\mathcal{O}(100)$, then the variance in the test statistic will be consistently at the $\\mathcal{O}(10\\%)$ level even if the sources are bright and the significance of detection is very high. Furthermore, we have so far neglected contributions to the variance from the width of the source count function (which will modify the effective $s$ entering these calculations from realization to realization, and hence increase the variance), the presence of a non-zero point spread function (likewise), and cross-talk and degeneracies with other background components. \n\n\\subsection{Implications for analysis choices}\n\nIf the overall number of photon counts increases, due to increased exposure (i.e. increased observation time or effective area), the signal fraction $k$ remains constant while $s$ varies linearly. Consequently, we expect $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ to depend linearly on exposure for sufficiently large $s$, with the transition from quadratic to linear scaling beginning around $s \\sim 1\/k$.\n\nSuppose a non-zero angular resolution for the instrument causes the expected number of photons from a single source in a pixel to be reduced, due to leakage into neighboring pixels. Then $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ will be reduced by the same factor, in the regime where the delta log likelihood scales linearly with $s$. The signal fraction $k$ should not be affected by this leakage unless the overall distribution of either the signal or background varies rapidly relative to the angular resolution scale; if there is such a rapid variation, there may also be a correction corresponding to the change in $k$. 
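These scalings are simple enough to tabulate directly. The sketch below (assuming only \\texttt{numpy}; the numerical values are placeholders rather than fit results) evaluates the per-pixel expectation of Eq.~\\ref{eqn:lnBFcase2} as the exposure is rescaled by a factor $\\chi$ (so that $s \\to \\chi s$ at fixed $k$), illustrating the transition from quadratic to linear growth around $k s \\sim 1$.

\\begin{verbatim}
import numpy as np

def delta_lnL_per_pixel(s, k):
    # Expected per-pixel log likelihood ratio in favor of point sources,
    # 0.5 * [k*s - ln(1 + k*s)]  (Eq. lnBFcase2).
    return 0.5 * (k * s - np.log1p(k * s))

s0 = 0.5                      # photons per source at nominal exposure (placeholder)
for k in (1.0, 0.3, 0.1):     # point-source fraction of the emission in the pixel
    for chi in (1, 4, 16):    # exposure rescaling factor; s scales linearly with chi
        val = delta_lnL_per_pixel(chi * s0, k)
        print("k =", k, " chi =", chi, " <Delta lnL> per pixel =", round(val, 4))
\\end{verbatim}

Reducing the effective $s$ through PSF leakage, or reducing $k$ through a brighter diffuse background, pushes a pixel back toward the quadratic (low-sensitivity) regime in exactly the same way.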
\n\nThe fact that the sensitivity depends only on $ks$ suggests that it is generally more important to have a low background fraction (high $k$) than a high density of sources (since the latter has no effect on the expected sensitivity in the regimes of validity of our analytic approximations). This suggests that the sensitivity is likely to be dominated by pixels where the expected PS signal is brightest as a fraction of all diffuse backgrounds (which may not be the pixels with the largest number of sources).\n\n\nWe have derived these results for the contribution to $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ from a single pixel, but the overall log likelihood is simply the sum of the results for the individual pixels. We can thus apply these results even to the realistic case where the background and signal models can have quite different spatial distributions: we simply calculate the appropriate $k$-value in each pixel and estimate the contribution to $\\langle \\Delta \\ln \\mathcal{L}\\rangle$ accordingly. Also note that the smooth model can be arbitrarily complicated; the only information we have used is that it has Poissonian statistics.\n\n\n\\section{Inputs and methodology for numerical calculations} \\label{sec:Methodology}\n\\subsection{Data selection} \\label{subsec:DataSelection}\nTo calibrate our simulations to the real gamma-ray sky, we employ eleven years of the Pass 8 public \\textit{Fermi} data. The data were collected over 573 weeks from August 4, 2008 to June 19, 2019. To employ the most stringent cosmic-ray rejection criteria, we restrict our selection to the ULTRACLEANVETO event class. For most tests, except those that varied energy range, we limited the energy range to $2 - 20$ GeV, following the default in \\texttt{NPTFit} and previous NPTF analyses. We restrict ourselves to an analysis of the top three quartiles of the data graded by angular resolution, as this provides enough range in angular resolution to explore the tradeoff with exposure, and the angular resolution degrades significantly in the bottom quartile.\n\n\\subsection{\\texttt{NPTFit} scan setup} \\label{subsec:NPTFit}\n\nWe employ \\texttt{v.0.2} of \\texttt{NPTFit}, together with \\texttt{MultiNest}, a Bayesian inference tool \\cite{Mishra-Sharma:2016gis,Feroz:2008xx}. For fits of the simulated data, the number of live points is described within the individual procedure sections; when not otherwise specified we used nlive=100. (We have checked that the effect on $\\langle \\ln \\text{BF}\\rangle$ of increasing nlive beyond 100 was consistently very small.)\n\n\nWhen fitting to simulated data, our region of interest (ROI) is centered on the GC and has a radius of $15^{\\circ}$. We exclude the band with galactic latitude $|b| < 2^{\\circ}$. This ROI is chosen for computational efficiency and to minimize contamination from background emissions, while preserving sensitivity to the GCE, motivated by a recent study finding that sensitivity to GCE PSs plateaus for ROIs with radii between $15^{\\circ}$ and $20^{\\circ}$ \\cite{Buschmann:2020adf}. 
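For reference, the geometry of this ROI can be reproduced with a few lines of \\texttt{healpy}, shown below purely as an illustration of the pixel selection (it assumes \\texttt{healpy} is installed and the map is stored in Galactic coordinates; it is not the masking utility used internally by \\texttt{NPTFit}).

\\begin{verbatim}
import numpy as np
import healpy as hp

nside = 128
npix = hp.nside2npix(nside)                      # 12 * nside**2 pixels over the sky

# Galactic latitude of every pixel center and its angular distance from the GC.
theta, phi = hp.pix2ang(nside, np.arange(npix))  # colatitude and longitude, radians
b = 90.0 - np.degrees(theta)
vec = np.array(hp.pix2vec(nside, np.arange(npix)))   # unit vectors, shape (3, npix)
vec_gc = hp.ang2vec(np.pi / 2.0, 0.0)                # direction of the Galactic Center
psi = np.degrees(np.arccos(np.clip(vec_gc @ vec, -1.0, 1.0)))

# Keep pixels within 15 deg of the GC, excluding the plane |b| < 2 deg.
in_roi = (psi < 15.0) & (np.abs(b) > 2.0)
print("pixels in ROI:", in_roi.sum())            # roughly 2800 pixels at nside = 128
\\end{verbatim}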
Our expectation is that shifting to a modestly different ROI would not significantly affect the scaling with exposure, angular resolution, etc, that we study in this work, although the overall sensitivity would change and so should not be compared directly between analyses with different ROIs.\n\nDuring \\texttt{NPTFit} scans, the non-Poissonian components of the sky maps must be exposure corrected at each pixel (a computationally-costly process) since the exposure map (a map that reflects the duration of observation and the stringency of data selection) is non-uniform \\cite{Mishra-Sharma:2016gis,Feroz:2008xx}. In order to optimize computational efficiency (and consistent with the recommendations in \\texttt{NPTFit}), we set $\\text{nexp} = 5$ to divide the ROI into $5$ distinct regions within which the exposure is treated as uniform. \n\n\\subsection{Modeling the gamma-ray sky}\n\nWe conducted a Bayesian \\texttt{NPTFit} analysis of the (real data) \\textit{Fermi} sky map at nlive = 500, modeling the sky as a linear combination of spatial templates characterized by parameters and prior distributions that are described in Appendix~\\ref{app:detailedmethodology}. For this analysis, and subsequent fits to real data (which were used only to choose parameters for the subsequent simulations), we extended the radius of the ROI from $15^\\circ$ to $30^\\circ$, but retained the mask of the Galactic plane (consistent with defaults in \\texttt{NPTFit}). Smooth\/diffuse templates were included for the \\textit{Fermi} Bubbles (``Bub\"), smooth isotropic emission (``Iso\"), smooth GCE (``GCE\"), and Galactic diffuse emission (``Dif\"). Templates were included for PS populations associated with the GCE (``GCE PS\"), isotropic \/ extragalactic sources (``Iso PS\"), and the Galactic disk (``Disk PS\"). Smooth\/diffuse templates each have one associated parameter, $A_{\\text{smooth}}$, controlling their overall normalization in the model; PS population templates (hereafter ``PS templates'') are associated with an overall normalization parameter $A_{\\text{PS}}$ which controls the number of sources, and with a source count function (SCF) which describes the number of sources as a function of their flux. We use a singly-broken power law model for the SCF, as is the default in \\texttt{NPTFit}:\n\\begin{equation}\n \\frac{dN}{dS} = A_{\\text{PS}} T_{\\text{PS}}\n \\begin{cases}\n \\left ( \\frac{S}{S_{b}} \\right )^{-n_{1}} & S \\geq S_{b} \\\\\n \\left ( \\frac{S}{S_{b}} \\right )^{-n_{2}} & S \\le S_{b}\n \\end{cases},\n \\label{eqn:scf}\n\\end{equation}\nwhere $A_{\\text{PS}}$ is an overall normalization factor, and $T_{\\text{PS}}$ is the position-dependent template with the fixed normalization given in the \\texttt{NPTFit} code (see Appendix \\ref{app:detailedmethodology} for details). Note the parameter $S_b$ controls the expected number of photon counts per source at the position of the break in the power law. The total number of sources in a given pixel is then set by:\n\\begin{equation} N_\\text{tot} = A_{\\text{PS}} T_{\\text{PS}} S_b \\left(\\frac{1}{n_1 - 1} + \\frac{1}{1 - n_2}\\right)\\label{eq:nsource} \\end{equation}\nwhereas the total number of photons is set by:\n\\begin{equation} S_\\text{tot} = A_{\\text{PS}} T_{\\text{PS}} S_b^2 \\left(\\frac{1}{n_1 - 2} + \\frac{1}{2 - n_2}\\right). 
\\label{eq:nphotons} \\end{equation}\n\nNote that in the main text of the paper we employ the default Galactic diffuse emission model from \\texttt{NPTFit}, constructed from the {\\it Fermi} Collaboration's \\texttt{p6v11} diffuse model. This model is known to have features that can bias the results \\cite{Buschmann:2020adf} when it is used directly to reconstruct PS populations from the real data; however, it should provide a reasonable description of the data when we are only interested in constructing and analyzing simulations (where the model is correct by construction). To check this assertion, in Appendix~\\ref{app:modela} we recalculate our results with a different Galactic diffuse emission model and comment on the differences. Either of these Galactic diffuse emission models reconstruct the GCE as being $100\\%$ PSs, with the smooth GCE component being negligible: consequently, our simulations will generally explore the sensitivity of NPTF methods to a PS population bright enough to explain the full GCE.\n\nAfter performing this fit, we extracted the posterior median parameters associated with each template, which were then used as the baseline inputs to simulations for the rest of the paper. These simulation parameters are displayed in Table \\ref{tab:fitparam} in Appendix~\\ref{app:detailedmethodology}. Note in particular that (consistent with previous NPTF studies) the inferred shape of the SCF for the GCE PS is quite sharply peaked around $S_b$, so we will generally be simulating GCE PS populations where the PSs all have roughly the same brightness as observed at Earth (fixed by $S_b$). This is likely {\\it not} a realistic luminosity function, but serves as a convenient basis for understanding the sensitivity of NPTF methods. We discuss the sharpness of the SCF peak further in Appendix~\\ref{app:sharpSCF}.\n\n\\subsection{Producing simulated sky maps}\n\\label{subsec:skymapsimulations}\n\nFor each template, we generated realizations based on the posterior median parameters from the real data. For the smooth\/diffuse emission components, we performed a Poisson draw from the associated template (with normalization given by the simulation parameters taken from the fit to real data). To obtain realizations of PS populations, we employed \\texttt{NPTFit-Sim}.\n\nTo obtain the full skymaps, the individual components were summed. All skymaps were binned using \\texttt{HEALPix}, a package designed to allow equal-area pixelization of the sky \\cite{Gorski:2004by}. The nside value controls the pixel size, with the sky having a total of $12\\times \\text{nside}^2$ equal-area pixels. By default we set nside to 128, which is also the \\texttt{NPTFit} default, and corresponds to roughly a $~0.5^{\\circ}$ mean spacing between individual pixel centers in the region toward the GC. For nside=128, there are 2808 pixels within our ROI.\n\n\\subsection{Sensitivity figure of merit} \\label{subsec:sensitivity}\n\n\nA \\texttt{NPTFit} analysis returns posterior probability distributions for each of the parameters, and an estimate of the overall Bayesian evidence for the model. Comparing two \\texttt{NPTFit} analyses, with different template choices, allows us to evaluate the Bayes factor (BF) between the two scenarios, as the ratio of their evidences. In particular, we can define the sensitivity to a GCE PS population in terms of the BF in favor of a model that contains the complete set of templates (Dif, Bub, Iso, GCE, GCE PS, Disk PS, Iso PS) compared with a model that excludes the GCE PS template. 
A high value of this BF corresponds to a high-significance detection of the GCE PS template, over and above the smooth GCE template. For convenience, we will generally work with $\\ln{\\text{BF}}$ rather than the BF itself. Where $\\text{BF} \\lesssim 1$ and so $\\ln\\text{BF}$ is negative, there is no detection of GCE PSs. \n\nThe BF directly gives the ratio of Bayesian probabilities that the model with the GCE PS template is correct, compared to the model without that contribution. For those more accustomed to frequentist statistics, it may be helpful to think of the BF as comparable to a likelihood ratio $\\mathcal{L}_1\/\\mathcal{L}_2$, with additional terms that penalize models with more degrees of freedom. In this sense $2 \\ln{\\text{BF}}$ is broadly analogous to the commonly-used test statistic $2 \\Delta \\ln{\\mathcal{L}}$, which for a likelihood that is Gaussian near its maximum ($\\mathcal{L}(x) \\propto e^{-x^2\/(2\\sigma^2)}$) can be written as $2 \\Delta \\ln{\\mathcal{L}} \\approx (x\/\\sigma)^2$, and thus can be thought of as the ``number of sigma'' squared associated with the deviation from the best-fit point.\n\nThe $\\ln \\text{BF}$ in favor of a GCE PS population can vary widely between realizations. For our main figure of merit for sensitivity, we will use the expected value of $\\ln \\text{BF}$ obtained by taking the average across realizations, $\\langle \\ln \\text{BF} \\rangle$, although we will also show the scatter between realizations. \n\n\\section{Procedures for parameter variation}\n\\label{sec:varyingdiffparams}\n\nWithin each subsection below, we describe the general procedure for varying different inputs: exposure, angular resolution, source brightness, and pixel size. We describe the method for adjusting parameters in the simulation of skymaps as well as how to account for these variations through the priors when analyzing the skymaps using \\texttt{NPTFit}. If the test involves combinations of these variations, then the priors must be modified by simultaneously implementing the adjustment factors to the priors for each alteration performed. \n\n\\subsection{Exposure}\n\\label{subsec:varyingexposue}\n\nAlthough \\textit{Fermi} is a space-based telescope, it does not observe every part of the sky simultaneously. As a result, an exposure map is needed to keep track of how long \\textit{Fermi} observed a particular region of the sky and with what effective area. The exposure map provided by the \\textit{Fermi}-LAT Collaboration and implemented in \\texttt{NPTFit} has units of $\\text{cm}^{2}\\text{s}$ \\cite{Mishra-Sharma:2016gis}.\n\nIncreasing the amplitude of the exposure map could describe longer observations with \\textit{Fermi} or less stringent cuts on photons as part of the data selection. We define an exposure rescaling factor $\\chi$, which allows us to vary the intensity of the exposure map through a scalar multiplicative factor, hence rescaling the expected number of photons present in simulated data. In our baseline case, $\\chi=1$. In general, the exposure rescaling could be position-dependent (e.g. corresponding to longer observations of only part of the ROI). However, we expect such position-dependent variations to be modest for the inner Galaxy region, as the size of our ROI is smaller than the field of view of {\\it Fermi}.\n\nWe implemented the variation of exposure in the simulated data by modifying the template parameters as follows. 
For smooth\/diffuse templates, the template normalization $A_{\\text{smooth}}$ is multiplied by the rescaling factor $\\chi$, since $A_{\\text{smooth}}$ determines the mean photon counts within each pixel. For non-Poissonian templates, when the other parameters are held fixed, $A_{\\text{PS}}$ determines how {\\it many} sources are present, which is not a function of exposure. Therefore, we instead multiply the counts break $S_b$ by $\\chi$, as $S_{b}$ controls the expected number of photons emitted by a source lying at the break in the SCF; as discussed previously, for the fits we perform, $S_b$ corresponds to the typical number of photons per source. To ensure that the total number of sources does not change, we divide $A_{\\text{PS}}$ by $\\chi$ following Eq.~\\ref{eq:nphotons}. \n\nWhen performing \\texttt{NPTFit} analyses on these modified skymaps, the input exposure map must be multiplied by $\\chi$. Furthermore, we adjust the range of priors governing $A_{\\text{smooth}}$ for Poissonian sources and $A_{\\text{PS}}$ and $S_{b}$ for non-Poissonian sources, so that they correspond to the same underlying physical emission parameters as in the original $\\chi=1$ analysis. For example, the boundaries of a log prior on $\\log A_{\\text{smooth}}$ are shifted by $+\\log\\chi$; the boundaries of a log prior on $\\log A_{\\text{PS}}$ are shifted by $-\\log\\chi$; and the boundaries of the linear prior on $S_b$ are multiplied by $\\chi$. (The original values of all priors are displayed in Table~\\ref{tab:priors} in Appendix~\\ref{app:detailedmethodology}.) For simulated data, the number of live points we utilized in the scans is nlive=300. We checked that the relative changes in the recovered evidences (under variations to nlive) are negligible for all individual realizations.\n\n\\subsection{Angular resolution}\n\\label{subsec:methodangularresolution}\n\nAngular resolution, characterized by the Point Spread Function (PSF), represents how well a telescope such as \\textit{Fermi} is able to reconstruct the original direction of a detected photon. A non-delta-function PSF represents an uncertainty in the direction of a photon's origin. As a result, the image produced of a photon source is ``smeared\" across one or more pixels. Since the \\texttt{NPTFit} implementation does not account for correlations between neighboring pixels (see e.g. \\cite{Collin:2021ufc} for a discussion), this smearing has the potential to bias the recovered SCF. \n\nModifications to the photon direction reconstruction, or construction of future gamma-ray telescopes, may allow for better angular resolution (equivalently, a narrower PSF) than {\\it Fermi} can currently achieve. However, even within the {\\it Fermi} dataset photons can be separated by the quality of their directional reconstruction, allowing us to improve angular resolution at the cost of exposure. Specifically, {\\it Fermi} photons are divided into four quartiles ranked by angular resolution, and separate PSF estimates are provided for each of the quartiles. Furthermore, lower-energy photons have intrinsically worse angular resolution, so a cut on photon energy has the effect (among others) of modifying the effective PSF. \n\n\\textit{Fermi}'s PSF is modeled by a pair of King functions (defined in Eq.~\\ref{eqn:kingfunction}) and is characterized by a set of several parameters. The PSF is approximately Gaussian near the core, with larger non-Gaussian tails. Eq.~\\ref{eqn:fermipsf} displays the full functional form of \\textit{Fermi}'s PSF. 
\\footnote{\\url{https:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/analysis\/documentation\/Cicerone\/Cicerone_LAT_IRFs\/IRF_PSF.html}} \n\n\\begin{equation}\n P(x,\\vec{\\alpha_{P}})=f_{\\text{core}}K(x,\\sigma_{\\text{core}},\\gamma_{\\text{core}})+(1-f_{\\text{core}})K(x,\\sigma_{\\text{tail}},\\gamma_{\\text{tail}})\n \\label{eqn:fermipsf}\n\\end{equation}\n\\begin{equation}\n K(x,\\sigma,\\gamma) = \\frac{1}{2 \\pi \\sigma ^{2}} \\left (1 - \\frac{1}{\\gamma}\\right) \\left [1+\\frac{x^{2}}{2 \\gamma \\sigma^{2}} \\right ]^{-\\gamma}\n \\label{eqn:kingfunction}\n\\end{equation}\nHere $x$ is a rescaled distance from the center of the source, with an energy-dependent scale factor $S_p(E)$: \n\\begin{align}\n x & = \\frac{\\delta p}{S_{p}(E)} \\nonumber \\\\\n \\delta p & = 2 \\sin^{-1} \\left(\\frac{|\\hat{p}^\\prime - \\hat{p}|}{2}\\right),\n \\label{eqn:xspepsf}\n\\end{align}\nwhere $\\hat{p}$ and $\\hat{p}^\\prime$ are the unit vectors corresponding respectively to the true and reconstructed directions of the photon. The parameters that define the PSF ($S_p(E)$, $f_\\text{core}$, and $\\gamma$ and $\\sigma$ for the two King functions in Eq.~\\ref{eqn:fermipsf}) are provided with the \\textit{Fermi} dataset as functions of energy and event selection. \n\nNote that because of the rather complex form of the {\\it Fermi} PSF, different event selections may have PSFs that are not related simply by an overall shift in scale (e.g. by a modification to $S_p(E)$), but are different in shape. To maximize the practical applicability of our work, rather than simply rescaling the PSF, we test the effect of using the true PSFs for different quartiles of the {\\it Fermi} data ranked by PSF, and for different energy ranges. However, we will show that within the range of event selections we study, the effect on sensitivity of changing the PSF can be quite well described by the variation of a summary parameter such as the $68\\%$ containment angle, suggesting that the detailed form of the {\\it Fermi} PSF is not a crucial ingredient.\n\nWe divide the dataset into 40 log-spaced energy bins spanning the range from 0.2 GeV to 2000 GeV (i.e. 10 bins per decade). For each quartile and energy bin, we re-simulate the data with the same underlying model parameters but different PSF parameters. We stack these simulations together where appropriate (e.g. when testing multiple quartiles simultaneously, or when considering a broad energy range). When we analyze the simulated data, we use the worst PSF for any subset of the simulated data, which is consistent with what has been done in previous studies on the real data \\cite{Lee:2015fea, Leane:2020pfc}. For example, if the simulated data involved photons from the top three PSF quartiles and a range of energies from $E_\\text{min}$ to $E_\\text{max}$, the PSF parameters used will correspond to the third-best PSF quartile and the energy $E_\\text{min}$ (since the angular resolution of {\\it Fermi} improves monotonically with increasing energy). For tests that varied angular resolution, the number of live points we utilized in the scans is nlive=100. To check that this value of nlive was adequate, we randomly selected a subset of the realizations at given angular resolution and performed a cross-check at nlive=500. We found that the differences in the recovered evidences were negligible in all cases. 
Hence, it is reasonable to assume that the scans across all realizations are well-converged with respect to nlive variations.\n\n\\subsection{Energy range}\n\nVarying the energy range of the data selection has multiple effects. Including a wider range of energies effectively increases the exposure; including lower-energy photons worsens the angular resolution. As mentioned above, we use the real PSF of {\\it Fermi} for different energy ranges as one way to probe the effects of varying angular resolution. However, changing the energy range has additional effects that are not reducible to changes to angular resolution and exposure: low-energy photons are more abundant than high-energy ones in general, but also the spectra of the various emission components are different. Consequently, changing the energy range will modify the flux fraction associated with the GCE (and all other components). The choice of energy range thus needs to be optimized depending on the signal of interest.\n\nTo address the specific question of the optimal energy range for GCE studies, we change our energy cut on the real data and then repeat the analysis described in Sec.~\\ref{sec:Methodology}. That is, we re-fit the templates to the real data with the new energy range, simulate the data based on these new template parameters, and determine the sensitivity to PSs as a function of energy range.\n\nFor all simulations involved in the variation of energy range, the number of live points we used to scan the data was set to nlive=100. To check that this value of nlive was adequate, we randomly selected a subset of the realizations at each energy range, and performed a cross-check at nlive=300. We found that the differences in the recovered evidences were negligible in all cases. Hence, it is reasonable to assume that the scans across all realizations are well-converged with respect to nlive variations.\n\n\\subsection{Pixel size}\n\\label{subsec:pixelsizevarmethod}\n\nAnother somewhat ad-hoc choice in the standard NPTF analysis is the choice of spatial binning for the photons, i.e. the size of the equal-area \\texttt{HEALPix} pixels (or equivalently, their number). Following previous studies such as \\cite{Lee:2015fea,Mishra-Sharma:2016gis,Leane:2019xiy}, we utilized $\\text{nside}=128$ for the majority of our analysis. The reason for this choice is the similarity between the $\\text{nside}=128$ pixel radius and \\textit{Fermi}'s angular resolution in the energy range of interest. \n\nIf the pixel size chosen is substantially smaller than the angular resolution, PSs will always occupy multiple pixels, and the fact that \\texttt{NPTFit} does not model correlations between neighboring pixels could lead to a loss of sensitivity. In the extreme limit of small pixel size, where all pixels contain either 0 or 1 photons, all sensitivity to PS populations would be lost. 
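\n\nAs a rough illustration of the relevant scales (a minimal sketch using \\texttt{healpy}, not part of the analysis pipeline; the $\\sim 0.23^{\\circ}$ $68\\%$ containment radius assumed below is representative of the best-quartile PSF at 2 GeV):\n\\begin{verbatim}\n# Compare HEALPix pixel scales to an assumed 68% containment radius.\nimport numpy as np\nimport healpy as hp\n\nr68_deg = 0.23  # assumed containment radius (illustrative)\nfor nside in (128, 256, 512):\n    res_deg = np.degrees(hp.nside2resol(nside))  # ~sqrt(pixel area)\n    # pixels whose centers fall within r68 of a source direction\n    in_disc = hp.query_disc(nside, hp.ang2vec(np.pi \/ 2, 0.0),\n                            np.radians(r68_deg))\n    print(nside, round(res_deg, 3), len(in_disc))\n\\end{verbatim}\nAt $\\text{nside}=512$ a PSF-sized disc covers of order ten pixels, while at $\\text{nside}=128$ it is comparable to a single pixel.\n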
On the other hand, pixels much larger than the angular resolution increase the background from diffuse signals in any given pixel, and again this might be expected to reduce the sensitivity to PSs.\n\nTo examine these effects in detail, we use the same template parameters derived from the data for $\\text{nside}=128$ (without any adjustment of parameters or priors) but simulate skymaps at $\\text{nside} = 512$ using the procedure described in Sec.~\\ref{sec:Methodology}.\nAfter generating simulated skymaps at this higher resolution, we can increase the pixel size for each realization as desired, using a HEALPix package that combines ``children'' pixels to create a superpixel, while accounting for proper normalization of photon counts \\cite{Gorski:2004by}. This preserves the distribution of sources in the individual realizations as we explore different pixel sizes.\n\nFor variations in pixel size, priors do not need to be adjusted. Instead, the templates must be properly normalized within the ROI to ensure accurate scaling during parameter retrieval (see Appendix \\ref{app:detailedmethodology}). \n\nFor all simulations involved in the variation of pixel size, the number of live points we used to scan the data was set to nlive=100. To check that this value of nlive was adequate, we randomly selected a subset of the realizations at each pixel size and performed a cross-check at nlive=500. We found that the differences in the recovered evidences were negligible in all cases. Hence, it is reasonable to assume that the scans across all realizations are well-converged with respect to nlive variations.\n\n\\subsection{Source brightness}\n\nThe previously discussed parameters describe instrumental properties and analysis choices. In this section, we discuss how the template model parameters are adjusted to describe a genuinely different source population. In analyses of sensitivity as a function of source brightness, our goal is to understand the potential of the \\texttt{NPTFit} algorithm to detect faint sub-threshold sources.\n\nThe brightness of PSs can be varied by adjusting the $S_{b}$ and $A_{\\text{PS}}$ parameters of the SCF (Eq.~\\ref{eqn:scf}). We tested the effect of varying the brightness of individual PSs while keeping the total flux in the PS population fixed (as appropriate for a PS population making up most or all of the GCE). Using Eq.~\\ref{eq:nphotons}, this requirement can be satisfied by simultaneously varying $S_b$ and $A_{\\text{PS}}$ as follows:\n\\begin{equation}\n S_{b} \\to n S_{b}, \\qquad A_{\\text{PS}} \\to \\frac{1}{n^{2}} A_{\\text{PS}}\n \\label{eqn:countsbrightness}\n\\end{equation}\nFor example, if $n = 1\/2$, the number of photons each source emits is reduced by half; however, the number of sources increases by a factor of 2 to compensate. This equates to a factor of 4 increase in the template normalization factor $A_{\\text{PS}}$, since the number of sources scales as $A_{\\text{PS}} S_b$.\n\nAs for the exposure tests, when we rescale the simulated parameters we also rescale the priors in the fit to simulated data. For example, if $S_b \\to n S_b$ and there was initially a linear prior on $S_b$ in the range $[0.05,80]$, the new prior is linear with range $[0.05n, 80n]$. If the prior on $\\log_{10} A_{\\text{PS}}$ is initially $[-6, 1]$, it is adjusted to $[-6 - 2\\log_{10}{n}, 1 - 2\\log_{10}{n}]$. 
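\n\nAs a minimal sketch of this bookkeeping (a hypothetical helper following the parameter names used in the text, not any \\texttt{NPTFit} interface):\n\\begin{verbatim}\n# Rescale source brightness by n at fixed total flux, and shift the priors\n# accordingly (S_b: linear prior; A_PS: prior on log10 of the normalization).\nimport numpy as np\n\ndef rescale_brightness(S_b, A_PS, Sb_prior, logA_prior, n):\n    S_b_new  = n * S_b            # photons per source at the SCF break\n    A_PS_new = A_PS \/ n**2        # number of sources scales as A_PS * S_b\n    Sb_prior_new   = (n * Sb_prior[0], n * Sb_prior[1])\n    logA_prior_new = (logA_prior[0] - 2 * np.log10(n),\n                      logA_prior[1] - 2 * np.log10(n))\n    return S_b_new, A_PS_new, Sb_prior_new, logA_prior_new\n\n# n = 1\/2: half the photons per source, four times the normalization A_PS.\nprint(rescale_brightness(17.11, 1.0, (0.05, 80.0), (-6.0, 1.0), 0.5))\n\\end{verbatim}\nHere the input value of $A_{\\text{PS}}$ is a placeholder and the prior ranges are those quoted above; only the rescaling logic is intended to carry over.\n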
We also checked the effect of keeping the priors fixed; except in situations where the true parameters approached the edge of the prior (or fell outside it), the effect was minimal.\n\nFor all simulations involved in the variation of source brightness, the number of live points we used to scan the data was set to nlive=100 for computational efficiency.\n\n\\section{A simplified isotropic scenario}\n\\label{sec:isotropic}\n\nBefore analyzing the results with all templates, we perform a simplified analysis including only isotropic components in our simulations, and approximating the exposure map as uniform. This analysis serves as a test of our analytic predictions. We describe some additional studies under this simplified scenario in Appendix \\ref{app:isotropic}.\n\nIn this case, our normalization convention for the emission templates requires that the templates $T$ and $T_\\text{PS}$ are both 1 in all pixels within the ROI. The normalization of the simulated signals was determined by matching the parameters for the PS component ($A_\\text{PS}$, $n_1$, $n_2$ and $S_b$) to the isotropic PS component extracted from the real {\\it Fermi} data.\nFor our baseline analyses, the smooth component normalization was chosen such that the total flux contributions of the smooth isotropic and PS components are equal. Explicitly, given our normalization convention for the templates $T$ and $T_\\text{PS}$, this means that $A(\\theta)_\\text{smooth}$ is given by:\n\\begin{equation}\n A(\\theta)_\\text{smooth}=A(\\theta)_{\\text{PS}}S_{b}^{2} \\left [\\frac{(n_{1}-n_{2})}{(n_{1}-2)(2-n_{2})} \\right ]\n \\label{eqn:normalization}\n\\end{equation}\nwhere (as previously) $A(\\theta)_{\\text{PS}}$ is the template normalization for the emission associated with the isotropic PS population, $S_b$ is the break of the source-count function, \nand $n_1, n_2$ are the slope of the source count functions defined by a singly-broken power law.\n\nAs previously, we use \\texttt{NPTFit-Sim} to simulate PSs and a Poisson draw to simulate the smooth component. The priors on the various parameters are set as discussed in Sec.~\\ref{sec:varyingdiffparams} and Appendix \\ref{app:detailedmethodology}. (Note that if the simulated value of $A_\\text{smooth}$ was outside the prior range on $A_\\text{iso}$ in the main analysis, we would need to adopt different priors for this isotropic study, but in fact it lies well within the prior range so this is not a problem.)\n\n\\subsection{Variation of exposure (narrow PSF)}\n\\label{subsec:isotropicpsf}\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=0.49\\linewidth]{lowexpisoanalyticpsf.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{highexpisoanalyticpsf.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}} \\rangle $ (magenta) across 10 realizations at varying $\\chi$ values for isotropic emission, simulated and scanned using a narrow Gaussian PSF. The dashed line denotes the modified power law fit defined in Eq.~\\ref{eqn:powerlawshift}. The dotted line denotes the first term of the analytic form Eq.~\\ref{eqn:lnBFcase2}, while the dash-dot line denotes the full analytic solution described in Eq.~\\ref{eqn:lnBFcase2}. 
}\n \\label{fig:analyticcase2comparisonpsf}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=0.49\\linewidth]{lowexpisoanalytic.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{highexpisoanalytic.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}} \\rangle $ (magenta) across 10 realizations at varying $\\chi$ values for isotropic emission, simulated and scanned using a realistic PSF model. The dashed line denotes the modified power law fit defined in Eq.~\\ref{eqn:powerlawshift}. The dotted line denotes the first (linear) term of the analytic form in Eq.~\\ref{eqn:lnBFcase2}, while the blue dash-dot line denotes the full analytic solution from the same equation with $s \\rightarrow S_b$. The green dash-dot line shows the analytic solution with $s \\rightarrow S_b\/3$.}\n \\label{fig:analyticcase2comparison}\n\\end{figure*}\n\nThe analytic solutions we derived in Sec.~\\ref{sec:analyticforms} assumed that the PSs were not smeared by the PSF. Thus as a cross-check of the scaling behavior estimated from our analytic results, we performed an initial set of simulations where the PSF was taken to be extremely narrow (i.e. the angular reconstruction was effectively perfect), covering a range of exposure levels $\\chi$. Specifically, we sampled exposure rescaling factors $\\chi$ between $10^{-2}$ and $10$ (recall $\\chi=1$ corresponds to the baseline exposure), and for each case generated and scanned skymaps that employed a Gaussian PSF with a tiny variance $10^{-20}$. For each choice of $\\chi$ we ran the analysis for 10 simulated realizations, and for each realization evaluated the $\\ln{\\text{BF}}$ between models with and without isotropically distributed PSs (both models allow for an isotropically distributed smooth component).\n\nFig.~\\ref{fig:analyticcase2comparisonpsf} plots the Bayes factor preference in favor of PSs as a function of exposure, together with the analytic solution for the likelihood ratio in the case where PSs and background are equally bright (Eq.~\\ref{eqn:lnBFcase2} with $k=1\/2$ and $s= S_b$). We also show the result of including only the linear $ks\/2$ term in the analytic estimate of Eq.~\\ref{eqn:lnBFcase2}. \n\nFor each choice of exposure, we evaluate $\\langle \\ln \\text{BF}\\rangle(\\chi)$ by taking the average of $\\ln\\text{BF}$ across realizations at each exposure level; we also compute the standard error of the mean across the realizations for this quantity at each $\\chi$ value (indicated by magenta vertical bars in Fig.~\\ref{fig:analyticcase2comparisonpsf}).\n\nWe work by default (here and in the remainder of this work) with the log of the Bayes factor between models, rather than the likelihood ratio; however, in this specific example we also evaluated the log likelihood ratio and found that it was generically quite close to the log Bayes factor (and in particular the difference between the two was not responsible for the difference between the numerical results and the analytic approximation for $\\langle \\Delta \\ln \\mathcal{L}\\rangle$). Thus we treat our analytic approximation as a rough estimate for $\\langle \\ln \\text{BF}\\rangle$.\n\nFor these parameter choices, we observe that the analytic form mildly overestimates the sensitivity, by a factor of roughly 20-30\\% in $\\langle \\ln \\text{BF}\\rangle$ at high exposure, but accurately captures the fall-off of the detection sensitivity at low exposure, and the scaling at high exposure. 
The remaining discrepancy is likely due to the approximations we have made in deriving our analytic results (e.g. relating to the shape of the probability distribution, and assuming we can treat all integrals as having limits $\\pm \\infty$, as well as approximating the SCF as a delta-function).\n\nWe observe a consistent scatter at the $\\mathcal{O}(10\\%)$ level in $\\langle \\ln \\text{BF}\\rangle$ between different realizations, which does not obviously decrease at large $\\chi$. (Note that here we are discussing the standard deviation across realizations, not the standard error of the mean; the latter is smaller.) This can be understood in terms of our variance calculations in Sec.~\\ref{sec:analyticforms}. The parameters we have simulated correspond to $5.61\\chi$ photons\/source, 2808 pixels, and an average of $0.11$ sources\/pixel; thus we expect a total number of sources in the ROI around $280$, and a standard deviation in the log likelihood ratio that is of order $\\langle \\ln \\text{BF}\\rangle\/\\sqrt{280} \\sim 0.06 \\langle \\ln \\text{BF}\\rangle$ or $\\sqrt{\\langle \\ln \\text{BF}\\rangle}$, whichever is larger. This is consistent with the $\\mathcal{O}(10\\%)$ scatter we observe at high exposure.\n\nIn addition to the comparison to the analytic prediction, we can parameterize the scaling of the sensitivity with $\\chi$ as a power law and fit for the parameters (although power-law behavior should be expected to break down at sufficiently small $\\chi$, where $\\ln \\text{BF}$ can attain negative values). The fitting function we use is:\n\\begin{equation} \\langle \\ln \\text{BF}\\rangle(\\chi) = \\alpha \\chi^{\\beta} + \\gamma,\n \\label{eqn:powerlawshift}\\end{equation}\nwhere the offset parameter $\\gamma$ serves to correct the behavior at small $\\chi$ where there is not enough data to detect a significant signal. For each value of $\\chi$ we took the central value of $\\langle \\ln \\text{BF}\\rangle(\\chi)$ to be the average over realizations, with an error bar determined by the standard error of the mean, and performed a least-squares fit. The resulting best-fit model is also plotted in Fig.~\\ref{fig:analyticcase2comparisonpsf}.\n\n\n\n\\subsection{Variation of exposure (realistic PSF)}\n\nIn a realistic scenario, we will always have to deal with a PSF that is not arbitrarily narrow. We repeat the simulation and analysis described above using the full PSF appropriate to the real {\\it Fermi} dataset (for the top PSF quartile), and show results in Fig.~\\ref{fig:analyticcase2comparison}. We compare these results to the same analytic solution (i.e. with no allowance for the PSF) as in the previous analysis, and again perform a least-squares fit to a simple power law fitting function.\n\nWe observe that the analytic solution still describes the shape of $\\langle \\ln \\text{BF}\\rangle$ as a function of $\\chi$ quite well, but now the discrepancy in our sensitivity metric is more pronounced (a factor of a few at high $\\chi$). At least qualitatively, this discrepancy can be largely absorbed by taking $s\/S_b$ to be a constant other than unity; Fig.~\\ref{fig:analyticcase2comparison} shows the effect of using the analytic approximation with $s$ replaced by $ S_b\/3$. 
The variance remains $\\mathcal{O}(10\\%)$ at high $\\chi$, which can be understood as discussed above.\n\nIn this more realistic case, we thus recommend using the analytic estimate only to understand scaling behavior rather than as a quantitative estimate of the expected sensitivity, although a reasonably good description can be obtained by fitting a constant rescaling factor to be applied to $s$.\n\n\\subsection{Variation of relative flux contributions}\n\\label{app:subdominantcomp}\n\nWe can also test the effects of varying the relative flux contributions of the smooth and PS components while allowing the total flux to remain constant. \nFig.~\\ref{fig:isotropicfluxfrac} demonstrates how the sensitivity changes as the PS flux fraction is varied, for both the narrow PSF and realistic PSF (top quartile) cases. For comparison, we also overlay the predictions given by our analytic solutions, Eq.~\\ref{eqn:lnBFcase2}, with $s \\rightarrow S_b$ and $s\\rightarrow S_b\/3$. We find that as the relative contribution of the PS component increases, the sensitivity of \\texttt{NPTFit} to PSs naturally increases with a shape consistent with the analytic prediction provided by Eq.~\\ref{eqn:lnBFcase2}, and in the narrow-PSF case the $s\\rightarrow S_b$ substitution provides quantitatively accurate results. Although the case where the map is simply produced with a smooth component is not shown in the figure due to the log-scaling, the result averages to $-0.54 \\pm 0.067$ in the realistic-PSF case and $-0.56 \\pm 0.14$ in the narrow-PSF case, both of which are small, as expected.\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.98\\linewidth]{isotropicfluxfrac_narrowpsf.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}}\\rangle $, as the relative contributions between a smooth and PS component within an isotropic map are varied, for realistic and narrow PSF prescriptions. Dashed lines indicate the analytic prescription of Eq.~\\ref{eqn:lnBFcase2} with the replacements $s\\rightarrow S_b$ (black) and $s\\rightarrow S_b\/3$ (red).}\n \\label{fig:isotropicfluxfrac}\n\\end{figure}\n\n\\section{Results of simulated parameter variations in the full inner Galaxy analysis}\n\\label{sec:results}\n\nIn this section we now proceed to a numerical analysis using simulated {\\it Fermi} data for the inner Galaxy, employing the complete set of templates discussed in Sec.~\\ref{sec:Methodology}. The results of this section can thus be used directly to optimize \\texttt{NPTFit}-based approaches to studies of the inner Galaxy and GCE.\n\n\\subsection{Varying the exposure}\n\\label{sec:exposuretest}\n\nAs in the isotropic case, we sampled exposure rescaling factors $\\chi$ between $10^{-2}$ and $10$. For each choice of $\\chi$ we ran the analysis for 20 simulated realizations, and for each realization evaluated the $\\ln{\\text{BF}}$ between models with and without the GCE PS template. \n\nFig.~\\ref{fig:exposuretest} shows the resulting values of $\\ln\\text{BF}$ for each exposure level. We evaluate $\\langle \\ln \\text{BF}\\rangle(\\chi)$ by taking the average of $\\ln\\text{BF}$ across realizations at each exposure level (indicated by magenta vertical bars in the figure along with errorbars that denote the $1\\sigma$ standard error of the mean across all the realizations within a particular case). \n\nAs discussed in Sec.~\\ref{sec:isotropic}, we fit a power-law function (Eq.~\\ref{eqn:powerlawshift}) to the data for $\\langle \\ln \\text{BF}\\rangle(\\chi)$. 
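\n\nA minimal sketch of this fit (using \\texttt{scipy}; the data arrays below are placeholders rather than our simulation outputs):\n\\begin{verbatim}\n# Weighted least-squares fit of <lnBF>(chi) = alpha*chi**beta + gamma.\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef shifted_power_law(chi, alpha, beta, gamma):\n    return alpha * chi**beta + gamma\n\nchi       = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0])   # exposure factors\nmean_lnBF = np.array([-0.5, 0.4, 4.0, 11.0, 40.0, 70.0]) # placeholder means\nsem_lnBF  = np.array([0.3, 0.3, 0.5, 1.0, 3.0, 5.0])     # standard errors\n\npopt, pcov = curve_fit(shifted_power_law, chi, mean_lnBF, sigma=sem_lnBF,\n                       absolute_sigma=True, p0=[10.0, 1.0, 0.0])\nalpha, beta, gamma = popt\nperr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties\n\\end{verbatim}\n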
The resulting best-fit parameters are given in Table \\ref{tab:exposurefit}, and the best-fit model is plotted in Fig.~\\ref{fig:exposuretest} (solid blue line). We find approximately that $\\langle \\ln \\text{BF} \\rangle \\propto \\chi^{0.8}$ at large BF. This is broadly consistent with our expectation from the analytic estimate in Sec.~\\ref{sec:analyticforms} that $\\langle \\ln \\text{BF} \\rangle$ should scale $\\sim$linearly in $\\chi$.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.49\\linewidth]{lowexposurevarp6v11.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{highexposurevarp6v11.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}}\\rangle$ (magenta) across 20 realizations sampled at various levels of exposure. \\textit{Left}: realizations with $\\chi<1$. \\textit{Right}: realizations with $\\chi \\geq 1$. The best-fit line is a standard power law with an additive shift defined in Eq.~\\ref{eqn:powerlawshift}.}\n \\label{fig:exposuretest}\n\\end{figure*}\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{r r}\n\\hline\nRecovered Parameters & \\texttt{p6v11}\\\\\n\\hline\n$\\alpha$ (coefficient) & $11.30 \\pm 0.69$ \\\\\n$\\beta$ (power) & $0.76 \\pm 0.04$ \\\\\n$\\gamma$ (shift) & $-0.62 \\pm 0.20$ \\\\\n\\botrule\n\\end{tabular}\n\\caption{Best-fit parameters, with $1\\sigma$ uncertainties, from the least-squares fit of the power law defined in Eq.~\\ref{eqn:powerlawshift} to $\\langle \\ln{\\text{BF}}\\rangle$ as a function of $\\chi$.}\n\\label{tab:exposurefit}\n\\end{table}\n\n\\subsection{Varying the angular resolution}\n\\label{sec:varyingangexp}\n\n\\subsubsection{PSF models for different quartiles} \\label{subsec:varyingpsf}\n\nWe begin by examining how the sensitivity of \\texttt{NPTFit} varies when the top three quartiles of data by PSF are analyzed separately, keeping the energy range fixed at its default value of $2-20 \\text{ GeV}$. Quartiles are labeled in order of decreasing angular resolution (so e.g. ``PSF 01'' represents the best quartile). \n\nWithin each quartile, we simulated and analyzed 20 realizations. All simulations are generated at a rescaling factor of $\\chi = 1$. Fig.~\\ref{fig:psfquartiletest} shows a striking decline in sensitivity in the quartiles with worse angular resolution. As previously, we display both the scatter between realizations and the average $\\langle \\ln \\text{BF}\\rangle$ across realizations for a given quartile.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{psfvarp6v11.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}}\\rangle$ (magenta) values across 20 realizations at each PSF quartile.}\n \\label{fig:psfquartiletest}\n\\end{figure}\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{containmentangle.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\braket{\\ln{\\text{BF}}}$ (magenta) from 20 realizations at each energy range, parameterized by the $68\\%$ containment angle of the PSF. For the baseline $2-20$\\,GeV range, we test each of the top three quartiles individually. The best-fit line (black) is the least-squares power-law fit to $\\langle \\ln{\\text{BF}}\\rangle$. The blue vertical line denotes the baseline case of $2-20$\\,GeV at quartile 1, typically used in previous \\textit{Fermi} analyses. 
}\n \\label{fig:containmentanglep6v11}\n\\end{figure*}\n\\subsubsection{PSF models for different energies} \\label{subsec:varyingpsfenergy}\n\nAnother practical way to vary the angular resolution in {\\it Fermi} data is to modify the energy window. We first examined the (theoretical) case where the angular resolution is varied while keeping all other parameters constant. We re-simulated the data with the original exposure map ($\\chi=1$) but with PSF corresponding to the appropriate {\\it Fermi} PSF for energies between 0.6 GeV and 3.2 GeV (recall that the baseline analysis uses the {\\it Fermi} PSF at 2 GeV), in the top PSF quartile. In each case we performed 20 realizations.\n\nIn Fig.~\\ref{fig:containmentanglep6v11} we plot the resulting values of $\\ln\\text{BF}$, against the value of the $68 \\%$ containment angle associated with each PSF model (in degrees), which we denote $\\eta$. We also include the results for the 2 GeV PSF in all three quartiles (discussed above).\n\nTo summarize the results, we fit the data with a power law, $\\langle \\ln \\text{BF}\\rangle(\\eta) = a \\eta^{b}$, using the same least-squares analysis as described above for the case of $\\langle \\ln \\text{BF}\\rangle(\\chi)$.\nTab.~\\ref{tab:containmentanglefit} displays the resulting best-fit parameters. We find that $\\langle \\ln \\text{BF}\\rangle(\\eta) \\propto \\eta^{-1}$, i.e. the sensitivity appears to scale approximately inversely with the containment radius, at least while holding the pixel size constant at $\\text{nside}=128$.\n\n\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{r r c}\n\\hline\n\nParameter & \\texttt{p6v11} \\\\\n\\hline\n$a$ (coefficient) & $2.5 \\pm 0.6$ \\\\\n$b$ (power) & $-1.0 \\pm 0.2$ & \\\\\n\n\\botrule\n\\end{tabular}\n\\caption{Recovered parameters and the $1\\sigma$ error for the power-law fit to $\\langle \\ln \\text{BF}\\rangle$ as a function of $68\\%$ containment angle.}\n\\vspace{-10pt}\n\\label{tab:containmentanglefit}\n\\end{table}\n\nThus as a rule of thumb, we expect an increase in the exposure by a factor of $n$ to be approximately compensated by an increase in the containment radius (not the containment area) by a factor of $n$; if the exposure can be more than doubled while worsening the containment angle by less than a factor of two, this will generally be a beneficial tradeoff. \n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.49\\linewidth]{lowtradeoffp6v11.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{hightradeoffp6v11.pdf}\n \\caption{$\\langle \\ln{\\text{BF}}\\rangle$ and the $1\n \\sigma$ standard error of the mean across 20 realizations for the top three quartiles graded in angular resolution sampling different values of $\\chi$. \\textit{Left}: realizations with $\\chi<1$. \\textit{Right}: realizations with $\\chi \\geq 1$. The best-fit lines show the Eq.~\\ref{eqn:powerlawshift} fit to the data.}\n \\label{fig:angularexp1sb}\n\\end{figure*}\n\n\nNote that our choice of $\\text{nside}=128$ corresponds to a mean pixel spacing ($0.46^\\circ$) that exceeds or is comparable to the $68\\%$ containment angle for all but the widest energy range (0.6-20 GeV) that we consider. 
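\n\nThe quoted pixel spacing can be checked directly with \\texttt{healpy} (a one-line verification, not part of the analysis pipeline):\n\\begin{verbatim}\n# Mean pixel spacing (square root of the pixel area) for nside = 128.\nimport numpy as np\nimport healpy as hp\n\nprint(np.degrees(hp.nside2resol(128)))  # ~0.458 degrees\n\\end{verbatim}\n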
We will show in Sec.~\\ref{sec:pixelsizevar} that decreasing the pixel size below the PSF does not appear to have large effects on the expected sensitivity (although it can increase the variance), but studies focusing on a broader energy range might still wish to test smaller nside values to reduce leakage of PSs into neighboring pixels.\n\n\n\n\\subsubsection{Simultaneous variation of exposure and angular resolution} \n\\label{subsec:tradeoffs}\nTo check the stability of the scaling rules we have found so far and the validity of this simple estimate, we now explicitly test the effect of simultaneously varying the angular resolution and the exposure. In many realistic situations, and in particular for \\textit{Fermi} data, relaxing cuts on photon quality will simultaneously increase the effective exposure and worsen angular resolution.\n\n\nWe repeat the analysis described in Sec.~\\ref{sec:exposuretest} for simulated data using the appropriate PSF model for PSF quartiles 2 and 3, with 20 realizations for each combination of $\\chi$ and quartile. We scanned the realizations at nlive = 300. Our results for $\\langle \\ln \\text{BF} \\rangle(\\chi)$ for each quartile are summarized in Fig.~\\ref{fig:angularexp1sb}. As in Eq.~\\ref{eqn:powerlawshift}, we fit the data for each quartile with a power law with a constant offset, and provide the best-fit parameters in Tab.~\\ref{tab:tradeoffp6v11}.\n\nIn general we observe that the slope appears to become steeper (more rapid increase in sensitivity with exposure) in quartiles with worse angular resolution. This reflects that significant detection of PSs requires a higher $\\chi$ value when the angular resolution is worse, but for sufficiently large $\\chi$ factors, the significance becomes almost independent of angular resolution. This may be related to the pixels surrounding a PS becoming bright enough to be individually detected as significant PSs.\n\n\nWe can also test the effect of stacking together the simulated data corresponding to the different quartiles, which has the effect of increasing the effective exposure $\\chi_{\\textrm{eff}} >\\chi$ relative to the one-quartile case. The sum of the first and second quartile has $\\chi_{\\text{eff}} = 2$, and the combined top three quartiles have $\\chi_{\\textrm{eff}} = 3$. \n\n\n Fig.~\\ref{fig:multiquartilesuperposed} shows the sensitivity based on 20 realizations for each of these three cases scanned at nlive=300. We find that to quite a good approximation, the increased number of photons simply cancels out the effects of worsening the angular resolution on average. As a simple estimate we calculated the combined effects of the predictions of varying the exposure and angular resolution. To do so, we define a rescaling factor $r=r_{\\text{exposure}}(\\chi)r_{\\text{PSF}}(\\text{Quartile})$, where $r_{\\text{exposure}}(\\chi)$ is the ratio of the expected log BF at exposure $\\chi$, denoted $\\langle\\ln{\\text{BF}}\\rangle(\\chi)$, to the baseline expected log BF $\\langle\\ln{\\text{BF}}\\rangle(\\chi=1)$, as obtained from Eq.~\\ref{eqn:powerlawshift} and Table~\\ref{tab:exposurefit}. Thus $r_\\text{exposure}(\\chi)$ characterizes the increase in sensitivity with enhanced exposure. $r_{\\text{PSF}}(\\text{Quartile})$ is the ratio of the expected log BF for a specified single quartile, denoted $\\langle\\ln{\\text{BF}}\\rangle(\\text{Quartile})$, to the baseline expected log BF $\\langle\\ln{\\text{BF}}\\rangle(\\text{Quartile}=1)$, obtained from Fig.~\\ref{fig:psfquartiletest}. 
Thus $r_\\text{PSF}(\\text{Quartile})$ characterizes the decline of sensitivity with worsening angular resolution. $\\langle\\ln{\\text{BF}}\\rangle(\\chi)$ and $\\langle\\ln{\\text{BF}}\\rangle(\\text{Quartile})$ are denoted as blue stars and orange pentagons on Fig.~\\ref{fig:multiquartilesuperposed}, respectively. To obtain the combined estimate denoted by the black filled ``X'', we multiplied the calculated $r$ factor by the baseline value $\\langle \\ln{\\text{BF}}\\rangle(\\text{Quartile 1})$ obtained from the realizations for the baseline case in Fig.~\\ref{fig:multiquartilesuperposed}. We find that this estimate agrees with the simulation results: on average, adding quartiles with worse angular resolution to gain exposure does not yield large increases (or decreases) in the average sensitivity to PSs.\n \n Quantitatively, Q3 has a containment angle slightly more than twice that of Q1 (see Fig.~\\ref{fig:containmentanglep6v11}), while including Q2 and Q3 triples the exposure. The scaling of the sensitivity with exposure is slightly sublinear whereas for containment angle it is linear to a good approximation, and in practice we find that these two effects almost completely cancel out. Thus the overall sensitivity is (perhaps surprisingly, and somewhat coincidentally) insensitive to the inclusion of additional quartiles.\n \n We might wonder if by approximating the PSF in the stacked dataset as the PSF of the worst quartile, we introduce biases in the recovered parameters. We checked this explicitly over our sample of 20 realizations. On average, we find that the median parameter deviations were rather small, mostly at the $<1\\sigma$ level, with some exceptions for components such as the isotropic emission. However, we also checked cases where we stacked different realizations of the same quartile, so that the PSF was the same between different subpopulations and was thus modeled in the same way for the simulated data and the fit. We found, on average, that the biases were similar in these cases; they were not obviously worsened by stacking maps with different PSFs, while modeling with the worst PSF. Thus, the mis-reconstruction cannot be attributed to mismodeling of the PSF in a subset of the data. \n \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{multqmapp6v11.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\langle \\ln{\\text{BF}}\\rangle$ (magenta) across 10 realizations in which skymaps generated with different angular resolutions (PSF quartiles) were stacked. The scans assumed the worst included angular resolution. The blue stars indicate the increased sensitivity as predicted by varying the exposure, while the orange pentagons indicate the worsening of sensitivity due to angular resolution degradation. The black filled ``X'' symbols display the combined estimate obtained by varying the two parameters.}\n \\label{fig:multiquartilesuperposed}\n\\end{figure}\n\n\\subsubsection{Varying the energy range} \\label{subsec:energyrangeevenquartile}\n\nWhile we have previously explored the effect of changing the PSF to one appropriate for other energy ranges, we now explore the effect of changing the energy range itself. We kept the upper limit of the energy range fixed at 20 GeV, since high-energy photons are rare and their inclusion\/exclusion is unlikely to qualitatively change the results. We varied the low-energy limit of the energy range between 0.6-3.2 GeV, spanning the peak of the GCE, by including or excluding low-energy bins. 
As discussed previously, the bin boundaries are log-spaced in energy, with 10 bins per decade, starting at 0.2 GeV.\n\n\nAs a first test, we sought to understand how the sensitivity could be expected to vary just as a result of the modified angular resolution combined with the larger number of photons in low-energy bins. To explore this question, we held the underlying physical model fixed, and treated the enhanced number of photons as an effective exposure factor $\\chi_{\\text{eff}}$, while using the appropriate PSF for the lowest-energy photons in the analysis. Specifically, we took $\\chi_\\text{eff}$ to be the ratio of the total number of photons in the real data (over the whole sky) in the modified energy range, to the total number of photons in the original energy range.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{energyrangetype1.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\braket{\\ln{\\text{BF}}}$ (magenta) resulting from simulations that varied the energy range while holding the template parameters constant. The increase in photon counts is folded through the effective exposure factor $\\chi_{\\text{eff}}$. The red stars display the predictions resulting from the tests that varied exposure, while the orange diamonds are the predictions provided by the containment-angle scaling. As an estimate, the black squares represent the combined effects of varying exposure and containment angle.}\n \\label{fig:energyrangevaryparamconst}\n\\end{figure}\n\nFig.~\\ref{fig:energyrangevaryparamconst} shows the results of this test. The results indicate that due to the worse angular resolution obtained by including data from lower energy ranges, we expect at best a mild increase in the (expected) sensitivity, compared with a substantial increase in the case where only the exposure is varied. As a first-order comparison to our simulated results, we analyzed the combination of the effects of varying the exposure and PSF as discussed in Secs.~\\ref{subsec:varyingexposue} and \\ref{subsec:varyingpsf}. Similar to Sec.~\\ref{subsec:tradeoffs}, we define a rescaling factor $r=r_{\\text{exposure}}(\\chi)r_{\\text{PSF}}(\\eta)$, where $r_{\\text{exposure}}(\\chi)$ is the ratio of $\\langle\\ln{\\text{BF}}\\rangle(\\chi)$ to $\\langle\\ln{\\text{BF}}\\rangle(\\chi=1)$, obtained from Eq.~\\ref{eqn:powerlawshift} and Table~\\ref{tab:exposurefit}. $r_{\\text{PSF}}(\\eta)$ is the ratio of $\\langle\\ln{\\text{BF}}\\rangle(\\eta)$ to $\\langle\\ln{\\text{BF}}\\rangle(\\eta[2-20\\,\\text{GeV}])$ (baseline case), obtained from Fig.~\\ref{fig:containmentanglep6v11}. $\\langle\\ln{\\text{BF}}\\rangle(\\chi)$ and $\\langle\\ln{\\text{BF}}\\rangle(\\eta)$ are denoted as red stars and orange diamonds on Fig.~\\ref{fig:energyrangevaryparamconst}, respectively. To obtain the combined estimate denoted by the black filled ``X'', we multiplied the calculated $r$ by the $\\langle \\ln{\\text{BF}}\\rangle [2-20\\,\\text{GeV}]$ obtained from the simulations in Fig.~\\ref{fig:energyrangevaryparamconst}. The estimates indicate a fairly flat scaling behavior of sensitivity across different energy ranges. This suggests that the gain in sensitivity from the larger number of photons is canceled out by the worsening of angular resolution at lower energies. \n\nAs an example, consider varying the minimum energy of the event selection from 2.0 to 1.0 GeV. The $68 \\%$ containment angle of the PSF increases from the baseline $\\sim 0.23^{\\circ}$ to $\\sim 0.40^{\\circ}$. 
As shown in Fig.~\\ref{fig:containmentanglep6v11}, this change in PSF induces a decrease in $\\braket{\\ln{\\text{BF}}}$ by a factor of $0.58$. However, the larger number of photon counts with a minimum energy of 1.0 GeV corresponds to an effective rescaling factor of $\\chi_{\\text{eff}} =2.75$ relative to the case with minimum energy 2.0 GeV (ignoring differences in the spectrum between the different components). Therefore, based on Table~\\ref{tab:exposurefit}, the value of $\\ln{\\text{BF}}$ should increase by a factor of $\\sim 2.10$, if this exposure change were the only factor. The combined effect would correspond to only a $\\sim 22\\%$ increase in $\\ln{\\text{BF}}$. Thus in this case, we would expect the increase in sensitivity from additional photons to come close to offsetting the loss of angular resolution, leading to very little net change in sensitivity (with perhaps a slight advantage for a 1.0 GeV minimum energy). This resembles the roughly flat behavior with energy we actually observe in Fig.~\\ref{fig:energyrangevaryparamconst}.\n\nThis is how the sensitivity would behave if the signal and backgrounds had identical spectra, but of course this is not the case for the GCE. For a specific signal, such as the GCE in this case, we need to either input a theoretical spectrum for each component, or re-fit the model parameters from the real data in each energy band. We take the latter approach here, and then repeat the sensitivity analysis on data simulated using these updated, energy-dependent parameters.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{energyrangetype2.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\braket{\\ln{\\text{BF}}}$ (magenta) at different energy ranges, with the template parameters in each range set to the posterior median parameters from fits to the real data.} \n \\label{fig:energyvariationrefit}\n\\end{figure}\n\nFig.~\\ref{fig:energyvariationrefit} shows the result of these simulations and analyses. If only photon number and angular resolution were relevant, there would be a strong argument for extending the energy range for the analysis all the way down to 0.6 GeV (or lower), but for the actual GCE spectrum we observe that the highest expected sensitivity is obtained for a minimum energy of 1.0 or 1.6 GeV. This energy scale roughly coincides with the peak of the GCE spectrum. \n\nOne might ask if features in Fig.~\\ref{fig:energyvariationrefit} simply reflect fluctuations in the total GCE flux inferred from the real data in different energy ranges (used to fix the simulation parameters). We checked this explicitly and found no evidence of such an association; the parameters controlling the simulated GCE PS flux vary smoothly over the relevant range of threshold energies, and the fluctuations in Fig.~\\ref{fig:energyvariationrefit} are thus likely to be statistical.\n\n\n\\subsection{Pixel size variation} \\label{sec:pixelsizevar}\nWe examined a wide range of pixel sizes to determine an optimal value for analysis. We started at an nside value of 512 and downgraded to nside values of 256, 128, 64, and 32. At each pixel level, we computed $\\ln{\\text{BF}}$ across 20 realizations.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{nsidep6v11.pdf}\n \\caption{$\\ln{\\text{BF}}$ and $\\braket{\\ln{\\text{BF}}}$ (magenta) at each realization across five pixel sizes, where the pixel size decreases as nside correspondingly increases.}\n \\label{fig:pixelvar}\n\\end{figure}\n\nFig. 
\\ref{fig:pixelvar} shows the recovered sensitivity as a function of nside. \nWe find that there is a slight increase in the sensitivity to a population of PSs as resolution is improved. However, the scatter between individual realizations is also increased. It is plausible that this occurs because with small pixel sizes there is a greater risk that a relatively bright source happens to land near a pixel boundary and consequently loses significance. In realizations where this behavior happens to be rare, the significance is naturally higher than for smaller nside (as the likelihood contributions from a larger number of pixels are summed), but in other realizations this effective dimming of the sources will markedly decrease the inferred significance of the population. (An alternative way to think about this is that pixels much smaller than the PSF are not independent data points, and so by treating them as independent we may artificially enhance the apparent significance of the result \\cite{Collin:2021ufc},\\footnote{We thank Nicholas Rodd for pointing out this effect.} but may also miss correlations that could reveal a signal.) In the regime where the pixel size is significantly larger than the PSF, we expect the significance to be reduced because we have reduced the number of independent data points (pixels), discarding information in the process.\n\nIt appears that nside 128 and 256 are likely the optimal values: nside 256 has a slightly higher average sensitivity but with considerably more scatter between realizations. The relative insensitivity of NPTF methods to pixel size, in a simpler context, was previously studied in Ref.~\\cite{Malyshev:2011zi}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.49\\linewidth]{normalfluxfracp6v11.pdf}\n \\includegraphics[width=0.49\\linewidth]{fluxfracdegeneracyp6v11.pdf}\n \\caption{\\textit{Left}: Recovered flux fractions of several emission components (\\textit{Fermi} bubbles, purple; diffuse emission, red; GCE PS, blue; disk PS, orange) at the pixel size commonly employed in previous analyses (nside = 128). \\textit{Right}: Flux fractions demonstrating mis-reconstruction of the diffuse (red) and disk PS (orange) components at a low nside value (nside = 32). The recovered flux fractions are obtained from 20 simulations. The vertical dashed lines denote the flux fractions injected into the simulations.}\n \\label{fig:pixelvarintensityp6v11}\n\\end{figure*}\n\nOn the other end of the spectrum, low nside values cause severe discrepancies in the recovered flux fraction for Galactic diffuse emission and PS populations associated with the Galactic disk (Disk PS). At an nside value of 32, for example, a significant portion of the injected Galactic diffuse emission is absorbed into the Disk PS template. In contrast, at the standard choices of pixel size (such as nside = 128), the recovered flux fractions are fairly consistent with the injection values. The left panel of Fig.~\\ref{fig:pixelvarintensityp6v11} shows the recovered flux fractions and the injected values (dashed lines) for nside = 128 for the \\textit{Fermi} bubbles, diffuse, GCE PS, and disk PS contributions. The right panel of Fig.~\\ref{fig:pixelvarintensityp6v11} displays the mis-reconstruction of the diffuse and disk PS emissions due to a large pixel size (nside = 32). The recovered flux fractions are systematically biased across all realizations, even in this case where the model is correct by construction. 
Thus low-nside analyses should be treated with considerable caution, in addition to their lack of sensitivity.\n\n\n\\subsection{Source brightness} \\label{subsec:nptfitdetectionlimit}\n\\begin{table*}\n\\centering\n\\begin{tabular}{l r r r r }\n\\hline\n & $n=1\/4$ & $n = 1\/2$ & $n=1$ & \\\\\n\\hline\n\\textit{Quartile} 1\\\\\n\\hline\n$\\alpha$ (coefficient) & $3.18 \\pm 0.70$ & $7.35 \\pm 0.49$ & $11.30 \\pm 0.69$ \\\\\n$\\beta$ (power) & $1.43 \\pm 0.13$ & $0.97 \\pm 0.05$ & $0.76 \\pm 0.04$ \\\\\n$\\gamma$ (shift) & $0.0003 \\pm 0.31$ & $-0.40 \\pm 0.14$& $-0.62 \\pm 0.20$\\\\\n\\hline\n\\textit{Quartile} 2\\\\\n\\hline\n$\\alpha$ (coefficient) & $0.48 \\pm 0.18$ & $3.97 \\pm 0.48$& $6.14 \\pm 0.78$ \\\\\n$\\beta$ (power) & $2.24 \\pm 0.20$ & $1.22 \\pm 0.07$& $1.01 \\pm 0.08$ \\\\\n$\\gamma$ (shift) & $0.11 \\pm 0.26$ & $-0.34\\pm 0.18$& $-0.002 \\pm 0.37$ \\\\\n\\hline\n\\textit{Quartile} 3\\\\\n\\hline\n$\\alpha$ (coefficient) & $0.02 \\pm 0.02$ & $0.88 \\pm 0.39$ & $2.44 \\pm 0.96$ \\\\\n$\\beta$ (power) & $3.67 \\pm 0.40$ & $ 2.01 \\pm 0.22$& $1.50 \\pm 0.21$ \\\\\n$\\gamma$ (shift) & $0.18 \\pm 0.30$ & $-0.23 \\pm 0.19 $& $0.30 \\pm 0.37$ \\\\\n\\hline\n\\botrule\n\\end{tabular}\n\\caption{Best-fit parameters obtained using least-squares regression method for the power law fit to the $\\ln{\\text{BF}}$ values as we varied the source brightness of the GCE PS component in \\texttt{p6v11}. Note that $n=1$ is the result from averaging across 20 realizations, while $n=1\/2,1\/4$ result from 20 realizations.}\n\\label{tab:tradeoffp6v11}\n\\end{table*}\n\nFinally, we would like to understand how sensitivity scales with the brightness of the sources. This is important both for understanding the prospects for future detection of a bulge PS population with {\\it Fermi}, and in understanding the likely sensitivity of future telescopes with different exposure and angular resolution.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.98\\linewidth]{sensitivitydiffbrightness.pdf}\n \\caption{$\\langle \\ln{\\text{BF}}\\rangle$ and the $1\\sigma$ standard error of the mean (obtained from 20 realizations at each point) across the top three quartiles at varying brightness level for the full \\textit{Fermi} case, with the baseline exposure. The best-fit $S_b$ in the real data is 17.11; we test the effect of reducing $S_b$ by a factor of 2 or 4. The baseline case was scanned using nlive=300, while the other cases were performed using nlive=100 for computational efficiency.\n }\n \\label{fig:sensitivitychi1}\n\\end{figure}\n\nWe perform the same test as described in Sec.~\\ref{subsec:tradeoffs}, where now the source brightness parameter $S_b$ is multiplied by a factor of $n=1\/2$ or $1\/4$ (and the number of sources is modified to keep the total flux constant). 
Fig.~\\ref{fig:angexp0.5sb} shows the results, while Table~\\ref{tab:tradeoffp6v11} shows the corresponding best-fit parameters and uncertainties obtained from the power-law fit in Eq.~\\ref{eqn:powerlawshift}, using the least-squares method.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.49\\linewidth]{lowtradeoffp6v110.5s.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{hightradeoffp6v110.5s.pdf} \\\\\n \\includegraphics[width=0.49\\linewidth]{lowtradeoffp6v110.25s.pdf}\n \\hspace{0.4em}\n \\includegraphics[width=0.49\\linewidth]{hightradeoffp6v110.25s.pdf}\n \\caption{$\\langle \\ln{\\text{BF}}\\rangle$ and the $1\\sigma$ standard error of the mean across 20 realizations at different $\\chi$ values for the top three quartiles graded by angular resolution. \\textit{Left}: realizations with $\\chi<1$. \\textit{Right}: realizations with $\\chi \\geq 1$. In the \\textit{upper panels} we simulated the sources at half of the baseline brightness ($S_{b}=17.11$ from Table~\\ref{tab:fitparam}), but doubled the number of sources, in accordance with Eq.~\\ref{eq:nphotons}; in the \\textit{lower panels} we simulated the sources at $1\/4$ of the baseline brightness but quadrupled the number of sources. The dashed, dotted, and dash-dotted lines correspond to the shifted power law (Eq.~\\ref{eqn:powerlawshift}) for quartiles 1, 2, and 3, respectively.}\n \\label{fig:angexp0.5sb}\n\\end{figure*}\n\n\nOur results indicate that the power-law fit parameter $\\alpha$ increases at higher values of $n$. Since $\\alpha$ corresponds to the sensitivity at $\\chi=1$, an increase is expected as the sources become brighter. On the other hand, $\\beta$ decreases modestly as $n$ increases. Assuming that $\\braket{\\ln{\\text{BF}}}$ has the same functional behavior as $\\braket{\\Delta\\ln{\\mathcal{L}}}$, our expectation from the analytic results is $\\beta\\approx 1$, at least if the fit is dominated by the region where the expected number of counts\/source and hence $\\braket{\\ln{\\text{BF}}}$ is large. However, at low brightness levels, as predicted by the analytic equations, we expect a quadratic scaling, $\\beta\\approx 2$. Thus it is reasonable to see a stronger scaling for smaller values of $n$, where the quadratic behavior is relevant for a larger range of $\\chi$. In other words, increased exposure is more important for fainter sources.\n\nTo clearly demonstrate the effect of varying the source brightness on the overall sensitivity, we plotted $\\langle \\ln{\\text{BF}}\\rangle(\\chi=1)$ across 20 realizations as a function of $S_{b}$, for the top three PSF quartiles, in Fig.~\\ref{fig:sensitivitychi1}. The baseline case corresponds to $S_{b}=17.11$. As expected, we find that, on average, sensitivity increases as the PS population brightens.\n\nApproximating the sensitivity-exposure relation by the power law with an additive shift as in Eq.~\\ref{eqn:powerlawshift}, we can estimate the $\\chi$ value that corresponds to $\\ln{\\text{BF}} \\geq 1$ for Quartile 1, as a function of the brightness of the source population (indicating some sensitivity to the sources; we could increase this threshold to require a more significant detection). Using the best-fit parameters in Table~\\ref{tab:tradeoffp6v11}, we find that for $n=1$, this threshold corresponds to $\\chi \\geq 0.08$; for $n=1\/2$, to $\\chi \\geq 0.18$; and for $n=1\/4$, to $\\chi \\geq 0.45$. As expected, a higher level of exposure is required to detect fainter populations of PSs. 
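\n\nThese thresholds follow from inverting the shifted power law of Eq.~\\ref{eqn:powerlawshift}; a short check using the Quartile 1 parameters from Table~\\ref{tab:tradeoffp6v11} (a sketch, independent of the analysis code):\n\\begin{verbatim}\n# chi at which <lnBF> = 1, given <lnBF> = alpha*chi**beta + gamma.\nfits = {1.0:  (11.30, 0.76, -0.62),   # n = 1:   (alpha, beta, gamma)\n        0.5:  (7.35,  0.97, -0.40),   # n = 1\/2\n        0.25: (3.18,  1.43,  0.0003)} # n = 1\/4\nfor n, (alpha, beta, gamma) in fits.items():\n    chi_thresh = ((1.0 - gamma) \/ alpha) ** (1.0 \/ beta)\n    print(n, round(chi_thresh, 2))    # 0.08, 0.18, 0.45\n\\end{verbatim}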
\n\n\\section{Conclusions} \\label{sec:conclusion}\n\nWe have investigated the statistical behavior of non-Poissonian template fitting, as implemented in the \\texttt{NPTFit} public code, when characterizing unresolved point sources. In particular we have explored the sensitivity to point sources both analytically and numerically, in a simplified isotropic-emission scenario and a realistic scenario relevant to gamma rays from the inner Galaxy. We define the sensitivity to point sources (or detectability of point sources) as the ratio of the maximum likelihood (analytic case) or Bayesian evidence (numerical case) between the true underlying model versus a model that excludes point sources associated with the signal component. \nWe first derived analytic estimates of the sensitivity for point sources with a delta-function source count function, where all sources have the same expected number of photons per source $s$. We found that the expected contribution to the log likelihood ratio from a given pixel is a function only of $ks$, where $k$ is the fraction of the emission that is attributed to point sources; the scaling of the log likelihood with $k s$ is linear for $k s \\gg 1$ and quadratic for $k s \\ll 1$. We also examined the variance in this sensitivity, reflecting the expected scatter between realizations. By exploring a range of scenarios, we found that the standard deviation of our sensitivity metric was generically smaller than the expectation value by a factor of $\\sqrt{A}$, where $A$ can be the total number of sources in the ROI, the total number of pixels in the ROI, or the log likelihood ratio itself; in general, whichever parameter is smallest dominates the variance. This behavior can lead to a relatively large scatter between the sensitivity inferred from different simulations, which we indeed observe in the numerical data. We tested our analytic predictions using numerical simulations in a simplified case where both point sources and smooth emission are isotropic (detailed in Sec.~\\ref{sec:isotropic} and Appendix \\ref{app:isotropic}), and found that the analytic results were quantitatively quite accurate in the case where the PSF is very narrow, and provide a good description of various scaling relations even with a more realistic PSF. The analytic and isotropic results may be relevant to other analyses employing non-Poissonian template fitting, e.g. the analysis of the neutrino background presented in \\cite{Aartsen:2019mbc}. \n\nWe then numerically investigated the role of several key parameters in the \\texttt{NPTFit} analysis of a population of point sources associated with the \\textit{Fermi} gamma-ray skymap. The parameters we tested included exposure, angular resolution, source brightness and pixel size. This analysis was performed within the full \\textit{Fermi} scenario using both \\texttt{p6v11} templates and Model A templates for the Galactic diffuse emission (detailed in Appendix \\ref{app:modela}). The results we quote below are based on simulations with the default \\texttt{p6v11} template from the public \\texttt{NPTFit} code, but we found consistent results using an alternative background model denoted Model A. \n\nFor the cases we tested, we found the following general relationships between exposure, angular resolution, and sensitivity: \n\\begin{itemize}\n\\item Gaining exposure alone induces an increase in sensitivity that is roughly linear for the exposure range and analysis choices we focused on. 
The analytic approximation predicts a scaling between linear and quadratic; our results are broadly consistent with this expectation, but in complex\/realistic sky models involving multiple templates with different morphologies, we have observed both slightly sub-linear scaling and stronger-than-quadratic scaling, the latter in quartiles with poor angular resolution. \n\\item Worse angular resolution results in lower sensitivity (as expected), such that the third-highest-quality angular resolution quartile has an average sensitivity approximately $60\\%$ lower than that of the highest-quality quartile (although the degree of degradation can vary depending on the exposure level and the brightness of the sources, with the loss of sensitivity being more pronounced for fainter sources and lower exposure).\n\\item As a simplified parametrization of the angular resolution (which varies according to quartile selection and included energy range), we examined how the sensitivity varies according to the $68\\%$ containment radius $\\eta$, and found that the sensitivity is approximately inversely proportional to $\\eta$. \n\\item With these results, we tested the effects of increasing exposure at the expense of angular resolution in practical situations relevant to {\\it Fermi} data, by varying the quartile selection and energy range. We find, on average, that the increase in sensitivity from a larger exposure is offset by the degraded angular resolution to a good approximation. \n\\item Hence, this tradeoff only produces a small net change in sensitivity; there is a broad range of possible analysis choices that yield similar expected sensitivity. The large scatter in sensitivity between realizations means that some caution is needed when interpreting changes in the Bayes factor in real data upon addition of extra quartiles or energy bins, changes to the ROI, etc.; it appears possible for changes in the analysis choice that modify the dataset to have a large apparent effect on the Bayes factor purely due to this scatter.\n\\end{itemize}\n\nWe also examined the role of source brightness in the sensitivity of \\texttt{NPTFit} to point sources, in order to understand how the sensitivity falls off as the source brightness declines. For sufficiently faint point sources, as expected, \\texttt{NPTFit} is unable to distinguish point sources from background smooth emission. More specifically, in the top quartile graded by angular resolution, the minimum exposure (in relation to the baseline exposure) required to achieve any hint of detection (defined somewhat arbitrarily as an average Bayes factor $\\ln{\\text{BF}}\\geq 1$) scales as $\\mathcal{O}(S_{b}^{-1.28})$, where $S_{b}$ is the peak number of photons per Galactic Center Excess point source.\n\nWe explored simulations with different pixel sizes, and found that pixel size was not a crucial factor in determining the sensitivity of \\texttt{NPTFit}, except in the case of extremely large or small pixel sizes. Very large pixel sizes induced inaccurate recovery of the various physical emission components; small pixel sizes led to higher variance across realizations.\n\nThese results serve as a systematic demonstration of the behavior of \\texttt{NPTFit} under a range of conditions relevant to analyses of {\\it Fermi} data from the inner Galaxy (and potentially more broadly).\n\n\\section*{Acknowledgements}\n\nThe authors thank Siddharth Mishra-Sharma and Nicholas Rodd for helpful feedback and suggestions. TRS was supported by the U.S. 
Department of Energy, Office of Science, Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567 through the Center for Theoretical Physics at MIT, and the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, \\url{http:\/\/iaifi.org\/}). The work of LGCB was funded by the Massachusetts Institute of Technology Undergraduate Research Opportunities Program (UROP) office. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}