\section{Quark-meson model}
In order to map out the phase diagrams, we employ the quark-meson model as
an effective low-energy model of QCD. The Minkowski Lagrangian including
$\mu_u$ and $\mu_d$ reads
\begin{eqnarray} \nonumber
{\cal L}&=&
{1\over2}\left[(\partial_{\mu}\sigma)(\partial^{\mu}\sigma)
+(\partial_{\mu} \pi_3)(\partial^{\mu} \pi_3)
\right]
+(\partial_{\mu}+2i\mu_I\delta_{\mu}^0)\pi^+(\partial^{\mu}-2i\mu_I\delta_{0}^{\mu})
\pi^-
\nonumber-{1\over2}m^2(\sigma^2+\pi_3^2
\\ &&
+2\pi^+\pi^-)
-{\lambda\over24}(\sigma^2+\pi_3^2+2\pi^+\pi^-)^2
+h\sigma+\bar{\psi}\left[
i/\!\!\!\!\partial
+\mu_f
\gamma^0
-g(\sigma+i\gamma^5{\boldsymbol\tau}\cdot{\boldsymbol\pi})\right]\psi\;.
\label{lag}
\end{eqnarray}
In the following we allow for an inhomogeneous chiral condensate;
several different ans\"atze have been discussed in the literature, for
example a chiral-density wave (CDW) and a chiral soliton lattice.
We choose the simplest ansatz, namely that of a CDW,
\begin{eqnarray}\nonumber
\sigma=\phi_0\cos(qz)\;,\hspace{0.5cm}
\pi_1=\pi_0
\;,\hspace{0.5cm}
\pi_2=0
\;,\hspace{0.5cm}
\pi_3=\phi_0\sin(qz)\;,
\end{eqnarray}
where $\phi_0$ is the magnitude of the chiral condensate, $q$ is a wavevector,
and $\pi_0$ is a homogeneous pion condensate.
Below we express the effective potential in terms of
the variables $\Delta=g\phi_0$ and $\rho=g\pi_0$.

There are two technical details I would like to briefly mention, namely
parameter fixing and regulator artefacts.
In mean-field calculations it is common to determine the parameters of the
Lagrangian at tree level. However, this is inconsistent:
one must determine the parameters to the same accuracy as one calculates
the effective potential. We calculate the effective potential
to one loop in the large-$N_c$ limit, which means that we integrate
over the fermions but treat the mesons at tree level.
Consequently, we must determine the parameters in Eq. (\ref{lag})
in the same approximation. If one determines the
parameters at tree level and the effective potential in the one-loop
large-$N_c$ approximation, the onset of BEC will not be at
$\mu_I={1\over2}m_{\pi}$; in fact, in some cases there are substantial
deviations from this exact result. This point has been ignored in most
calculations to date.
Secondly, in calculations using the NJL model, one typically uses a hard
momentum cutoff. In cases with inhomogeneous phases, this leads to an
asymmetry in the states included, due to the sign of $q$ in the
different quark dispersion relations. This leads to a $q$-dependent
effective potential even in the limit $\Delta\rightarrow0$.
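To make the cutoff artefact concrete, the following one-dimensional toy integral (a sketch, not the full model; all values are illustrative and in arbitrary units) shows that a hard momentum cutoff produces a spurious $q$-dependence of the vacuum energy even at $\Delta=0$, because the shifted branches $p\pm q/2$ are cut off asymmetrically:

```python
import math

def toy_vacuum_energy(q, Delta, cutoff=1.0, n=200_000):
    """1D stand-in for an NJL-type vacuum energy with a hard cutoff |p| < cutoff:
    integrates the single-particle energy sqrt((p + q/2)^2 + Delta^2) over p
    by a midpoint Riemann sum.  The cutoff acts on p, so the +q/2 shift is
    treated asymmetrically with respect to the integration domain."""
    dp = 2.0 * cutoff / n
    total = 0.0
    for i in range(n):
        p = -cutoff + (i + 0.5) * dp
        total += math.sqrt((p + 0.5 * q) ** 2 + Delta ** 2) * dp
    return total / (2.0 * math.pi)

# Even in the limit Delta -> 0 the result depends on q
# (analytically it grows by q^2/4 / (2 pi) in this toy model):
e0 = toy_vacuum_energy(0.0, 0.0)
e1 = toy_vacuum_energy(0.4, 0.0)
```

With dimensional regularization this residual $q$-dependence is absent, which is the consistency check mentioned in the text.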
Such a residual $q$-dependence is inconsistent, and one must typically
subtract a $q$-dependent term to remedy it.
In the present calculations in the QM model,
we use dimensional regularization.
We have explicitly checked that the effective potential is consistent, i.e.
it is independent of the wavevector $q$ in the limit
$\Delta\rightarrow0$.

Let us return to the QM Lagrangian (\ref{lag}).
The quark energies are
\begin{eqnarray}\nonumber
 E_u^{\pm}&=&E(\pm q,-\mu_I)\;,
\hspace{0.5cm}
E_d^{\pm}=E(\pm q,\mu_I)\;,
\hspace{0.5cm}
E_{\bar{u}}^{\pm}=E(\pm q,\mu_I)\;,
\hspace{0.5cm}E_{\bar{d}}^{\pm}=E(\pm q,-\mu_I)\;,
\end{eqnarray}
where
\begin{eqnarray}
E(q,\mu_I)&=&
\left[
\left(\sqrt{p_{\perp}^2+
\left(\sqrt{p_{\parallel}^2+\Delta^2}+{q\over2}\right)^2}+
{\mu_I}\right)^2
+\rho^2\right]^{1\over2}\;.
\end{eqnarray}
Note in particular that the quark energies depend on the isospin chemical
potential.
After regularization and renormalization, the zero-temperature part of the
effective potential
can be expressed in terms of the physical meson masses and the pion-decay
constant as~\footnote{We have been combining the on-shell and
 $\overline{\rm MS}$
schemes~\cite{patrick}.}
\begin{eqnarray}\nonumber
V_{\rm 1-loop}&=&
{1\over2}f_{\pi}^2q^2
\left\{1-\dfrac{4 m_q^2N_c}{(4\pi)^2f_\pi^2}
\left[\log\mbox{$\Delta^2+\rho^2\over m_q^2$}
 +F(m_\pi^2)+m_\pi^2F^{\prime}(m_\pi^2)
 \right]
\right\}{\Delta^2\over m_q^2}
\\ && \nonumber
+\dfrac{3}{4}m_\pi^2 f_\pi^2
\left\{1-\dfrac{4 m_q^2N_c}{(4\pi)^2f_\pi^2}m_\pi^2F^{\prime}(m_\pi^2)
\right\}\dfrac{\Delta^2+\rho^2}{m_q^2}
\\ \nonumber &&
 -\dfrac{1}{4}m_\sigma^2 f_\pi^2
\left\{
1 +\dfrac{4 m_q^2N_c}{(4\pi)^2f_\pi^2}
\left[ \left(1-\mbox{$4m_q^2\over m_\sigma^2$}
\right)F(m_\sigma^2)
+\dfrac{4m_q^2}{m_\sigma^2}
-F(m_\pi^2)-m_\pi^2F^{\prime}(m_\pi^2)
\right]\right\}\dfrac{\Delta^2+\rho^2}{m_q^2}
\\ \nonumber &&
-2\mu_I^2f_\pi^2
\left\{1-\dfrac{4 m_q^2N_c}{(4\pi)^2f_\pi^2}
\left[\log\mbox{$\Delta^2+\rho^2\over m_q^2$}
+F(m_\pi^2)+m_\pi^2F^{\prime}(m_\pi^2)\right]
\right\}{\rho^2\over m_q^2}
\\ \nonumber
 & & + \dfrac{1}{8}m_\sigma^2 f_\pi^2
\Bigg\{ 1 -\dfrac{4 m_q^2 N_c}{(4\pi)^2f_\pi^2}\Bigg[
\dfrac{4m_q^2}{m_\sigma^2}
\left(
\log\mbox{$\Delta^2+\rho^2\over m_q^2$}
-\mbox{$3\over2$}
\right) -\left( 1 -\mbox{$4m_q^2\over m_\sigma^2$}\right)F(m_\sigma^2)
\\ &&\nonumber
+F(m_\pi^2)+m_\pi^2F^{\prime}(m_\pi^2)\Bigg]\Bigg\}
\dfrac{(\Delta^2+\rho^2)^2}{m_q^4}
- \dfrac{1}{8}m_\pi^2 f_\pi^2
\left[1-\dfrac{4 m_q^2N_c}{(4\pi)^2f_\pi^2}m_\pi^2F^{\prime}(m_\pi^2)\right]
\dfrac{(\Delta^2+\rho^2)^2}{m_q^4}
\\ &&
-m_\pi^2f_\pi^2\left[
1-\dfrac{4 m_q^2 N_c}{(4\pi)^2f_\pi^2}m_\pi^2F^{\prime}(m_\pi^2)
\right]\dfrac{\Delta}{m_q}\delta_{q,0}
+V_{\rm fin}\;,
\label{fullb}
\end{eqnarray}
where $V_{\rm fin}$ is a finite term that must be evaluated numerically.
The linear term is responsible for explicit
chiral symmetry breaking. It only contributes when $q=0$, i.e. in the
homogeneous case; for nonzero $q$ this term averages to zero over
a sufficiently large spatial volume of the system.
The finite-temperature part of the one-loop effective potential is
\begin{eqnarray}
V_{T}&=&-2N_cT\int_p\bigg\{
\log\Big[1+e^{-\beta(E_{u}-\mu)}\Big]
+\log\Big[1+e^{-\beta(E_{\bar{u}}+\mu)}\Big]
\bigg\}
+{u\leftrightarrow d}\;.
\label{fd}
\end{eqnarray}
\section{Coupling to the Polyakov loop}
The Wilson line which wraps all the way around in imaginary time
is defined as
\begin{eqnarray}
L({\bf x})&=&{\cal P}\exp\left[
i\int_0^{\beta}d\tau A_4({\bf x},\tau)
\right]\;,
\label{ldef}
\end{eqnarray}
where ${\cal P}$ denotes time ordering.
The Wilson line is not gauge invariant, but taking its trace
gives a gauge-invariant object, namely the Polyakov loop ${\rm Tr}\,L$.
The Polyakov loop is, however, not invariant under the so-called center
symmetry of the (pure glue) QCD Lagrangian, but transforms as
${\rm Tr}\,L\rightarrow e^{2\pi i{n\over N_c}}{\rm Tr}\,L$, where
$n=0,1,2,...,N_c-1$ and $e^{2\pi i{n\over N_c}}$ is one of
the $N_c$ roots of unity.
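As a numerical cross-check of the expressions above, the following sketch implements the dispersion $E(q,\mu_I)$ and the thermal integral in Eq. (\ref{fd}) for a homogeneous, spherically symmetric configuration ($q=0$). All numerical values are illustrative (MeV units), and the integral is evaluated with plain Riemann-sum accuracy only:

```python
import math

def dispersion(p_perp, p_par, Delta, rho, q, mu_I):
    """E(q, mu_I) = sqrt[(sqrt(p_perp^2 + (sqrt(p_par^2 + Delta^2) + q/2)^2)
    + mu_I)^2 + rho^2], as in the text; all quantities in MeV."""
    inner = math.sqrt(p_par ** 2 + Delta ** 2) + 0.5 * q
    return math.sqrt((math.sqrt(p_perp ** 2 + inner ** 2) + mu_I) ** 2 + rho ** 2)

def thermal_potential(T, mu, Delta, rho, mu_I, Nc=3, p_max=4000.0, n=2000):
    """One-flavour thermal contribution
    -2 Nc T ∫ d^3p/(2π)^3 { ln[1 + e^-(E_u - mu)/T] + ln[1 + e^-(E_ubar + mu)/T] },
    with q = 0 so the momentum integral reduces to a radial one.  Result in MeV^4."""
    dp = p_max / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        E_u = dispersion(p, 0.0, Delta, rho, 0.0, -mu_I)      # E_u = E(q, -mu_I)
        E_ubar = dispersion(p, 0.0, Delta, rho, 0.0, mu_I)    # E_ubar = E(q, +mu_I)
        total += p * p * (math.log1p(math.exp(-(E_u - mu) / T))
                          + math.log1p(math.exp(-(E_ubar + mu) / T))) * dp
    return -2.0 * Nc * T * total / (2.0 * math.pi ** 2)
```

In the free massless limit ($\Delta=\rho=\mu_I=\mu=0$) this reproduces the Stefan-Boltzmann value $V_T=-7\pi^2T^4/60$ per flavour, which provides a quick sanity check of the normalization.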
Since the Polyakov loop transforms nontrivially under the center symmetry,
a nonzero expectation value of the Polyakov loop signals the breaking of
this symmetry.
The expectation value of the correlator of the Polyakov loop and its conjugate
is related to the free energy of a quark-antiquark pair.
Employing cluster decomposition, the expectation value of the Polyakov
loop itself is related to the free energy of a single quark:
a vanishing value of ${\rm Tr}\,L$ corresponds to an infinite free energy
of a single quark, and a vanishing value of
${\rm Tr}\,L^{\dagger}$ corresponds to an infinite free energy
of an antiquark, and therefore signals confinement.
If we denote by $\Phi$ the expectation value of
${\rm Tr}\,L$ and by $\bar{\Phi}$ that of ${\rm Tr}\,L^{\dagger}$,
the medium contribution reads
\begin{eqnarray}\nonumber
V_T&=&-2T\int{d^3p\over(2\pi)^3}
\bigg\{
\log\Big[1+3(\Phi+\bar{\Phi}e^{-\beta(E_u-\mu)})
 e^{-\beta(E_u-\mu)}+e^{-3\beta(E_u-\mu)}\Big]
\\
&&+\log\Big[1+3(\bar{\Phi}+\Phi e^{-\beta(E_{\bar{u}}+\mu)})
 e^{-\beta(E_{\bar{u}}+\mu)}
+e^{-3\beta(E_{\bar{u}}+\mu)}\Big]
\bigg\}
+{u\leftrightarrow d}
\;.
\end{eqnarray}
We note that at zero baryon chemical potential the symmetry between
quarks and antiquarks implies $\Phi=\bar{\Phi}$.
The expression for $V_T$ is also manifestly
real, reflecting that there is no sign problem at finite $\mu_I$ and zero
$\mu_B$. Finally, for $\Phi=\bar{\Phi}=1$, we recover the standard
expression for the fermionic contribution (\ref{fd}) for $N_c=3$.

We also need to introduce a potential ${\cal U}$ from the glue sector.
Its terms are constructed from $\Phi$ and $\bar{\Phi}$ such that
the potential satisfies the symmetries. There are several such potentials
on the market. We choose the following logarithmic potential~\cite{ratti2}
\begin{eqnarray}
{{\cal U}\over {T^4}}&=&-{1\over2}a \Phi\bar{\Phi} + b
\log\left[ 1 - 6\Phi\bar{\Phi} +4(\Phi^3 +\bar{\Phi}^3) -3(\Phi\bar{\Phi})^2
\right]\;,
\label{pglog}
\end{eqnarray}
with
$a = 3.51 -2.47\left({T_0\over T}\right) +15.2\left({T_0\over T}\right)^2$
and $b = -1.75\left({T_0\over T}\right)^3$.
The parameters are determined such that the potential reproduces the
pressure of pure-glue QCD as calculated on the lattice for temperatures
around the critical temperature. Since the critical temperature depends
on the number of flavors $N_f$ and on $\mu_I$, one can refine
the potential by making $T_0$ dependent on these parameters,
$T_0(N_f,\mu_I)=T_{\tau}e^{-1/(\alpha_0b(\mu_I))}$ with
$b(\mu_I)={1\over6\pi}(11N_c-2N_f)-b_{\mu_I}{\mu_I^2\over T_{\tau}^2}$,
where $T_{\tau}=1.77\;\text{GeV}$ and $\alpha_0 = 0.304$
are determined such that the transition temperature for pure glue at $\mu_I=0$
is $T_0=270$ MeV~\cite{pawlow}.
In the numerical work, we use
$m_{\pi}=140$ MeV, $m_{\sigma}=500$ MeV, $\Delta=300$ MeV, and $f_{\pi}=93$ MeV.

\section{Phase diagram in the $\mu$-$\mu_I$ plane}
We first discuss the results for the phase diagram in the $\mu$-$\mu_I$ plane
at $T=0$ in the chiral limit, which
is shown in the left panel of Fig.~\ref{pionphase2}.
Dashed lines indicate first-order transitions, while solid lines indicate
second-order transitions. The black dot indicates the
endpoint of the first-order line.
The vacuum phase is part of the $\mu$-axis (recall that in the chiral limit,
the onset of pion condensation is at $\mu_I=0$), ranging from $\mu=0$
to $\mu=300$ MeV.
In this phase all the thermodynamic functions are independent
of the quark chemical potentials.
In the region to the left of the blue line, there is a nonzero pion condensate
which is independent of $\mu$.
In the wedge-shaped region between the blue and green
lines, there is a phase with a $\mu$-dependent pion condensate and
a vanishing chiral condensate. In this phase the isospin and
quark densities are nonzero. In the region between the green and red lines,
we have an inhomogeneous phase with a nonzero wavevector $q$. In this
phase, the pion condensate is zero; thus there is no coexistence of
an inhomogeneous chiral condensate and a homogeneous pion condensate.
Finally, the region to the right of the red, blue,
and green line segments is the symmetric phase, where
$\Delta=\rho=q=0$. The blue dot marks the Lifshitz
point where the homogeneous, inhomogeneous, and chirally
symmetric phases connect.
In the right panel of Fig.~\ref{pionphase2}, we show
the pion condensate $\rho$ (green line),
the magnitude of the chiral condensate $\Delta$ (blue line), and the
wavevector $q$ (red line) as functions of $\mu$ for fixed $\mu_I=5$ MeV.
This corresponds to a horizontal line in the phase diagram.

\begin{figure}[htb]
\begin{center}
 \includegraphics[width=0.45\textwidth]{phaseinhom.pdf}
\includegraphics[width=0.45\textwidth]{10.pdf}
\caption{Phase diagram in the chiral limit (left) and $\rho$, $\Delta$, and
 $q$ as functions of $\mu$ for $\mu_I=5$ MeV (right). See main text for
 details.}
\label{pionphase2}
\end{center}
\end{figure}

In the left panel of Fig.~\ref{bondary}, we show the
window of inhomogeneous chiral condensate for $\mu_I=0$, i.e. on the
$\mu$-axis, as a function of the pion mass.
The inhomogeneous phase ceases to exist for pion masses larger than
$37.1$ MeV and is therefore not present at the physical point.
This is in contrast to the QM study in Ref.~\cite{nickel}, where an
inhomogeneous phase is found at the physical point.
However, in that paper the parameters were
determined at tree level, which may explain the difference with
the results presented here.
The question of inhomogeneous phases in QCD was also addressed by
Buballa at this meeting using the NJL model, in particular
discussing the role of the quark masses~\cite{bubi}.
Performing a Ginzburg-Landau analysis, they find an inhomogeneous
phase which
shrinks with increasing quark mass, but survives at the physical point.

\begin{figure}[htb]
\begin{center}
 \includegraphics[width=0.44\textwidth]{boundary.pdf}
 \includegraphics[width=0.44\textwidth]{pionphase.pdf}
\end{center}
\caption{Window of inhomogeneous phase as a function of the pion mass
 for $\mu_I=0$ (left) and the homogeneous phase diagram at the physical
point (right).}
\label{bondary}
\end{figure}

In the right panel of
Fig.~\ref{bondary}, we show the phase diagram at the physical point.
The thermodynamic observables are independent
of $\mu$ and $\mu_I$ in the region bounded by the
$\mu_I$ and $\mu$ axes and the straight lines given by
$\mu+\mu_I=gf_{\pi}=m_q$ (blue line) and $\mu_I=\mu_I^c={1\over2}m_{\pi}$
(red line).
We therefore refer to this as the vacuum phase.
The red line shows the phase boundary between a phase with
$\rho=0$ and a pion-condensed phase. The transition is second order
when the red line is solid and first order when it is dashed. The
solid dot indicates the position of the critical end point where the
first-order line ends, located at
$(\mu, \mu_I) = (264,91)$ MeV.
The green line indicates the boundary
between a chirally broken phase and a phase where chiral symmetry
is approximately restored. This line is defined by the inflection point
of the chiral condensate in the $\mu$-direction.
The region bounded by the three lines is a phase
with chiral symmetry breaking but no pion condensate. The
effective potential depends on $\mu$ and $\mu_I$, and therefore
the quark and isospin densities are nonzero.

\section{Phase diagram in the $\mu_I$-$T$ plane}

\begin{figure}[!htb]
\begin{center}
 \includegraphics[width=0.45\textwidth]{phasePQMlog.pdf}
 \includegraphics[width=0.45\textwidth]{PD_final.pdf}
\end{center}
\caption{Phase diagram in the $\mu_I$-$T$ plane from the PQM model
(left) and from the lattice simulations of Refs.~\cite{gergy1,gergy2,gergy3}
(right).}
\label{pionphase}
\end{figure}
In the left panel of Fig.~\ref{pionphase},
we show the phase diagram in the $\mu_I$-$T$ plane
resulting from our PQM model calculation. The green line indicates
the transition to a pion-condensed phase, where the $O(2)$ symmetry associated
with the conservation of the third component of the isospin is broken.
This transition is second order everywhere. At $T=0$ this transition takes
place at $\mu_I={1\over2}m_{\pi}$ by construction.
The chiral transition line is shown in blue, while the transition line for the
deconfinement transition is shown in red.
In the noncondensed phase, these coincide.
When these lines meet the BEC line,
they depart from each other, and the former coincides with the BEC line.
The transition line for deconfinement penetrates into the condensed phase.
Finally, the dashed line indicates the BEC-BCS crossover, defined by
the condition $\Delta<\mu_I$, i.e. when the dispersion relations for the
$u$-quark and $\bar{d}$-quark
no longer have their minima at $p=0$, but rather at
$p=\sqrt{\mu_I^2-\Delta^2}$.
In the right panel of Fig.~\ref{pionphase}, we show the lattice results of
Refs.~\cite{gergy1,gergy2,gergy3}.
The blue band indicates the chiral crossover.
Within the uncertainty it coincides with
the deconfinement transition. The green band indicates
the second-order transition to a Bose-Einstein condensed state. The three
transition lines meet at the yellow point. Their simulations also indicate
that the deconfinement transition temperature
decreases and that the transition line smoothly penetrates into the
pion-condensed phase.

\section{Summary}
I would like to finish this talk by highlighting our main results and
the comparison with lattice results from Refs.~\cite{gergy1,gergy2,gergy3}.

\begin{enumerate}
\item Rich phase diagrams. An inhomogeneous chiral condensate is excluded
for $m_{\pi}>37.1$ MeV.
\item No inhomogeneous chiral condensate coexists with a
 homogeneous pion condensate.
\item Good agreement between lattice simulations and model calculations:
\begin{enumerate}
\item Second-order transition to a BEC state, which is in the
$O(2)$ universality class.
At $T=0$, onset of pion condensation at exactly $\mu_I^c={1\over2}m_{\pi}$.
\item BEC and chiral transition lines merge at large $\mu_I$.
\item The deconfinement transition smoothly penetrates
into the BEC phase.
\end{enumerate}
\end{enumerate}

\section{Introduction}
Since the discovery of the first transiting exoplanet, HD 209458b
\citepads{2000ApJ...529L..41H,2000ApJ...529L..45C}, the {transit method} has
become the most successful detection method, surpassing the combined
detection counts of all other methods (see Fig.~\ref{Fig_methods}) and giving
rise to the most thoroughly characterized exoplanets at present.
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm]{Fig_methods.pdf}\n\n\\caption{The fractions by which various detection methods contributed to the accumulated sample of known planets is shown, for years since 1995. At the end of 1995, only five planets were known, three from {pulsar timing} and two from radial velocities. Between 1996 and 2013, the sample of known planets was dominated by those discovered with radial velocities, while in 2018, 78\\% of all known planets had been discovered by transits. Based on data from the NASA Exoplanet Archive in Feb. 2018, and using its classification by discovery methods. 'Timing' includes planets found by pulsar timing, {eclipse timing}, or {transit timing}. Other detection methods ({astrometry}, orbital brightness variation) generate only a very small contribution that is barely visible at the bottom of the graph, for years following 2010.}\n \\label{Fig_methods}\n\\end{center}\n\\end{figure}\n\nThe detection of planetary transits is among the oldest planet detection methods; together with the {radial velocity} (RV) method it was proposed in \\citeyearads{1952Obs....72..199S} in a brief paper by {Otto Struve}. The early years of exoplanet discoveries were however dominated by planets found by RVs, and prior to the 1999 discovery of transits on {HD 209458}, the transit method was not considered overly promising by the community at large. For example, a \\citeyearads{1996jpl..reptQ....E} (Elachi et al.) 
NASA Road Map for the Exploration of Neighboring Planetary Systems (ExNPS) revises in some detail the potential of RV, astrometry and microlensing detections, with a recommended focusing onto space interferometry, while transits were considered only cursory.\nConsequently, activities to advance transit detections were rather limited; most notable are early proposals for a spaced based transit-search by {Borucki}, Koch and collaborators (\\citeads{1985ApJ...291..852B,1996JGR...101.9297K,1997ASPC..119..153B}) and the {TEP project}, a search for transiting planets around the eclipsing binary {CM Draconis} that had started in 1994 (\\citeads{1998A&A...338..479D,2000ApJ...535..338D}). The discovery of the first transiting planets, with some of them like HD 209458b already known from RV detections, quickly led to intense activity to more deeply characterize them, mainly from multi-color photometry (e.g. \\citeads{2000ApJ...540L..45J,2001NewA....6...51D}) or from spectroscopy during transits (e.g. \\citeads{2000A&A...359L..13Q,2002ApJ...568..377C,2004MNRAS.353L...1S}); to provide the community with efficient transit fitting routines (e.g. \\citeads{2002ApJ...580L.171M}; for more see below), or to extract the most useful set of physical parameters from transit lightcurves \\citepads{2003ApJ...585.1038S}.\n\nThe first detections of transits also provided a strong motivation towards the set-up of dedicated transit searches, which soon led to the first planet discoveries by that method, namely {OGLE-TR-56b} \\citepads{2003Natur.421..507K} and further planets by the OGLE-III survey, followed by {TrES-1} \\citepads{2004ApJ...613L.153A}, which was the first transit-discovery on a bright host star. Transits were therefore established as a valid method to find new planets. \n\nCentral to the method's acceptance was also the fact that planets discovered by transits across bright host stars permit the extraction of a wealth of information from further observations. 
Transiting planets orbiting bright host-stars, such as HD 209458b, HD 189733 \\citepads{2005A&A...444L..15B}, WASP-33b \\citepads{2006MNRAS.372.1117C,2010MNRAS.407..507C}, or the terrestrial planet 55 Cnc e \\citepads{2004ApJ...614L..81M,2011ApJ...737L..18W} are presently the planets about which we have the most detailed knowledge. Besides RV observations for the mass and orbit determinations, further characterization may advance with the following techniques: transit photometry with increased precision or in different wavelengths; transit photometry to derive transit timing variations (TTVs); spectroscopic observations during transits (transmission spectroscopy; line-profile tomography of exoplanet transits; {Rossiter-McLauglin effect}). Furthermore, the presence of transits -- strictly speaking primary transits of a planet in front of its central star -- usually implies the presence of {secondary eclipses} or occultations, when a planet disappears behind its central star. These eclipses, as well as phase curves of a planet's brightness in dependence of its orbital position, might be observable as well.\n\nGiven the modest instrumental requirements to perform such transit searches on bright star samples -- both HD209458b's transits and the planet TReS-1 were found with a telescope of only 10cm diameter -- the first years of the 21st century saw numerous teams attempting to start their own transit surveys. Also, for the two space-based surveys that were launched a few years later, {CoRoT} and {Kepler}, it is unlikely that they would have received the necessary approvals without the prior ground-based discovery of transiting planets. The enthusiasm for transit search projects at that time is well represented by a paper by \\citetads{2003ASPC..294..361H} \n which lists 23 transit surveys that were being prepared or already operating. 
Its title 'Hot Jupiters Galore' also typifies the expectation that significant numbers of transiting exoplanets will be found in the near future: Summing all 23 surveys, Horne predicted a rate of 191 planet detections per month. \n In reality, advances were much slower, with none of these surveys reaching the predicted productivity. By the end of 2007, before the first discoveries from the CoRoT space mission \\citepads{2008A&A...482L..17B,2008A&A...482L..21A}, only 27 planets had been found through transit searches. This slower advance can be traced to two issues that revealed themselves only during the course of the first surveys: The amount of survey time required under real conditions was higher than expected, and the presence of {red noises} decreased sensitivity to transit-like events (see later in this chapter). Once these issues got understood and accounted for, some of these ground-based surveys became very productive, and both WASP and HAT\/HATS have detected over 100 planets to date. The next major advances based on the transit method arrived with the launch of the space missions CoRoT in 2006 and Kepler in 2009. These led to the discoveries of transiting terrestrial-sized planets ({CoRoT-7b} by \\citeads{2009A&A...506..287L}, {Kepler-10b} by \\citeads{2011ApJ...729...27B}); to planets in the temperate regime ({CoRoT-9b}, \\citeads{2010Natur.464..384D}); to transiting {multi-planet systems} \\citepads{2011Natur.470...53L} and to a huge amount of transiting planets that permit a deeper analysis of planet abundances in a very large part of the radius - period (or Teff) parameter space. 
In the following, an introduction is given to the methodology of transit detection and its surveys, as well as an overview of the principal projects that implement these surveys.

\section{Fundamentals of the transit method}

\begin{figure}[t]
\includegraphics[width=\textwidth]{Fig_transito_v2.png}
\caption{Outline of the transit of an exoplanet, with the main quantities used to describe the orbital configuration, from the observables given in the lower solid curve (the observed light curve), to the model representations from the observer's point of view (central panel) or other view points (top panels). See the text for details.}
\label{fig:fig_tran}
\end{figure}

A schematic view of a transit event is given in Figure~\ref{fig:fig_tran}, where the bottom part represents the observed flux of the system. As the planet passes in front of the star, its flux diminishes by a fraction denoted $\Delta F$. Under the assumptions of negligible flux from the planet and of spherical shapes of the star and planet, $\Delta F$ is given by the ratio of the areas of the planet and the star:

\begin{equation}
\Delta F \approx \left(\frac{R_p}{R_s}\right)^2 = k^2
\end{equation}

where $R_p$ is the radius of the planet, $R_s$ the radius of the star, and $k$ is the radius ratio. The total duration of the transit event is denoted $t_T$, and the time of totality, during which the entire planet disk is in front of the stellar disk (the time between \emph{second} and \emph{third} contacts, using eclipse terminology), is given by $t_F$, during which the light curve is relatively flat. Using basic geometry, the work by \citetads{2003ApJ...585.1038S} derived analytic expressions that relate these observables to the orbital parameters.
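The depth relation $\Delta F \approx (R_p/R_s)^2$ is easy to check numerically. A minimal sketch (nominal radii; dark planet and uniform stellar disk assumed, i.e. no limb darkening):

```python
def transit_depth(R_p, R_s):
    """Fractional flux drop dF ~ (R_p/R_s)^2 for a dark planet crossing a
    uniformly bright stellar disk (limb darkening neglected)."""
    return (R_p / R_s) ** 2

# Nominal radii in km:
R_SUN, R_JUP, R_EARTH = 695_700.0, 69_911.0, 6_371.0

depth_hot_jupiter = transit_depth(R_JUP, R_SUN)    # about 1%
depth_earth_sun = transit_depth(R_EARTH, R_SUN)    # roughly 80-90 ppm
```

The two orders of magnitude between these depths are the reason why ground-based surveys found mostly hot Jupiters, while terrestrial-sized planets required space-based photometry.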
In particular, the {impact parameter} $b$, defined as the minimal projected distance to the center of the stellar disc during the transit, can be expressed as:

\begin{equation}
b\equiv \frac{a}{R_s}\cos i=\Big\{\frac{(1-k)^2-[\sin^2(t_F\pi/P)/\sin^2(t_T\pi/P)](1+k)^2
}{\cos^2(t_F\pi/P)/\cos^2(t_T\pi/P)}\Big\}^{1/2}
\end{equation}

where $a$ is the orbital semimajor axis, $i$ the orbital inclination, and $P$ the orbital period. A commonly used quantity that can be obtained from photometric data alone is the so-called scale of the system, i.e. the ratio between the semimajor axis and the radius of the star:

\begin{equation}\label{transit_distance}
\frac{a}{R_s}=\frac{1}{\tan(t_T\pi/P)}\sqrt{(1+k)^2-b^2}
\end{equation}

which, using Kepler's laws of motion and making the reasonable approximations of the mass of the planet being much smaller than the mass of its host star, and of a spherical shape for the star, can be transformed into a measurement of the mean {stellar density}:

\begin{equation}
\rho_s = \frac{3\pi}{GP^2}\left(\frac{a}{R_s}\right)^3
\end{equation}

This measurement, and its comparison with the stellar density estimated by other means (through spectroscopy, mass-radius relations, or asteroseismology), has often been used as a way to prioritise the best transit candidates from a survey (\citealt{2003ApJ...585.1038S,Tingley:2011aa,Kipping:2014ab}). In the previous equations we have assumed circular orbits for simplicity; a derivation of equivalent equations including the eccentricity terms can be found in \citeauthor{Tingley:2011aa} (2011, with a correction in Eq. 15 of \citealt{Parviainen:2013aa}).

While the previous expressions allow quick estimates of the major parameters of an observed transit, more sophisticated derivations using the formalisms of \citetads{2002ApJ...580L.171M} or \citetads{2006A&A...450.1231G} are commonly used for their more precise derivation.
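The scale of the system and the density diagnostic above can be sketched as follows (a simple illustration under the circular-orbit assumptions of the text; the Earth-Sun values used below are round numbers, not fitted data):

```python
import math

def scale_a_over_rs(t_T, P, k, b):
    """a/R_s from Eq. (transit_distance):
    a/R_s = sqrt((1+k)^2 - b^2) / tan(pi t_T / P),
    for a circular orbit; t_T and P in the same time units."""
    return math.sqrt((1.0 + k) ** 2 - b ** 2) / math.tan(math.pi * t_T / P)

def stellar_density(a_over_rs, P):
    """Mean stellar density rho_s = (3 pi / (G P^2)) (a/R_s)^3 in kg/m^3,
    with P in seconds."""
    G = 6.674e-11  # m^3 kg^-1 s^-2
    return 3.0 * math.pi * a_over_rs ** 3 / (G * P ** 2)

# For an Earth-Sun analogue, a/R_s ~ 215 and P = 1 yr, which returns a density
# close to the solar mean density (~1.4 g/cm^3):
rho_sun_like = stellar_density(215.0, 365.25 * 86400.0)
```

A candidate whose photometric $\rho_s$ disagrees strongly with the spectroscopic density of the target star is likely a blend or a grazing binary, which is why this check is useful for ranking candidates.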
These formalisms account, in particular, for a subtle effect visible in Figure~\ref{fig:fig_tran}: the {limb-darkening} of the star, which manifests itself as a non-uniform brightness of the stellar disk. The limb-darkening of the star makes it challenging to determine the moments of \emph{second} and \emph{third} contact of the transit, and to precisely measure $\Delta F$. For more detailed introductions to the parameters that can be measured from transits, we refer to \citetads{2010exop.book...55W} and \citetads{2010trex.book.....H}.

\subsection{Detection probability}

The geometric probability to observe a planet in transit is given by \citepads{2010exop.book...55W}:

\begin{equation}
p_{tra} = \left(\frac{R_s\pm R_p}{a}\right)\left(\frac{1+e\sin \omega}{1-e^2}\right)
\end{equation}

where the $+$ sign is used to include grazing transits, and the $-$ sign refers to the probability of full transits, which have \emph{second} and \emph{third} contacts. For a typical hot Jupiter with a semimajor axis of 0.05~AU this is on the order of 10\%, while for an Earth analog at 1~AU it goes down to 0.5\%. This geometric obstacle is the main handicap of the transit method, since the majority of existing planetary systems will not display transits.

\subsection{False Positives}

A transit-shaped event in a light curve is not always caused by a transiting planet, as there are a number of astrophysical configurations that can lead to similar signatures. These are the so-called false positives of transit searches, which have been a nuisance of transit surveys since their beginning.
One example of a {false positive} would be a stellar eclipsing binary in an apparent sky position so close to a brighter single star that the light of both objects falls within the same photometric aperture of a detector: the deep eclipses of the eclipsing binary are diluted by the flux of the brighter star, and a shallower eclipse is observed in the light curve, with a shape very similar to that of a transiting planet. More complete descriptions of the types of false positives that affect transit searches, and of their expected frequencies, can be found in \cite{Brown:2003aa, Alonso:2004ab, Almenara:2009aa, Santerne:2013aa}.

To detect false positives and to confirm the planetary nature of a list of candidates provided by a transit survey, a series of follow-up observations is required (e.g. \citealt{Latham:2003aa,Alonso:2004ab, Latham:2007aa, Latham:2008aa, Deeg:2009aa, Moutou:2013aa, Gunther:2017aa}), which applies to both ground-based and space-based surveys. Traditionally, the \emph{confirmation} that a transit signal is caused by a planet takes place when its mass is measured with high-precision RV measurements. In some cases, particularly for planets orbiting faint host stars or for the confirmation of the smallest planets, the achievable RV precision is insufficient to measure the planet's mass. As these cases are of high interest (for example, planets with sizes similar to the Earth orbiting inside the habitable zone of their host star), statistical techniques have been developed to estimate the probability that the observed signals are due to planets, relative to every other source of false positive we know of. In this case, the planets are known as \emph{validated}.
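The dilution at work in the blended eclipsing-binary scenario described above is simple to quantify; the numbers below are purely illustrative:

```python
def diluted_depth(eclipse_depth, F_binary, F_contaminant):
    """Observed eclipse depth when an eclipsing binary of flux F_binary is
    blended with a star of flux F_contaminant inside the same photometric
    aperture: the eclipse is diluted by the total flux in the aperture."""
    return eclipse_depth * F_binary / (F_binary + F_contaminant)

# A 25%-deep stellar eclipse diluted by a 50x brighter neighbour
# mimics a shallow, hot-Jupiter-like transit:
mimic = diluted_depth(0.25, 1.0, 50.0)
```

This is why seemingly planetary depths alone are not sufficient evidence, and why follow-up or statistical validation is needed.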
Current {validation} procedures use the fact that astrophysical false positive scenarios have very low probabilities when several transiting signals are seen on the same star \\citep{Lissauer:2012aa}, or they use all the available information (observables and knowledge of the galactic population and stellar evolution) to compare the probabilities of a signal due to a transiting planet vs. anything else. A few examples of validation studies are \\cite{Torres:2011aa,Morton:2012aa,Lissauer:2014aa,2014ApJ...784...45R,Diaz:2014aa,Torres:2015aa,2016ApJ...822...86M,Torres:2017aa}, some of which use one of the current state-of-the-art validation procedures: \\texttt{BLENDER}, \\texttt{VESPA}, and \\texttt{PASTIS}. \n\nFinally, some false positives may be due to artifacts of red noise or other instrumental effects, even in the most precise surveys to date (e.g. \\citealt{Coughlin:2014aa}). In a few cases, planets that were previously validated have been disproved after an independent analysis \\citep{Cabrera:2017aa, Shporer:2017ac}, which should instill some caution about the use of results from validations, which are statistical by design. \n\n\n\\section{Transit Surveys: factors affecting their design}\n\nThe task of surveying a stellar sample for the presence of transiting planets must overcome the inherent inefficiencies of the transit method: The planets need to be aligned correctly (see previous section) and the observations must be made when transits occur. The expected abundances of the desired planet catch must be taken into account and their transits need to be detectable with sufficient {photometric precision}. Furthermore, transit-like events (false positives) may arise from other astrophysical as well as instrumental sources and means to identify them need to be provided.
The success of a transit-detection experiment depends on these factors, which are discussed in the following.\n\n\\runinhead{Sample size}\nThe probability $p_{tr}$ for transits to occur in a given randomly oriented system is between a few percent for Hot Jupiters and less than 0.1\\% for cool giant planets. In order to achieve a reasonable probability that $N$ transiting systems will be found in a given stellar field, the number of surveyed stars (that is, stars for which light curves with sufficient precision for transit detection are obtained) should be at least $N_\\mathrm{survey} \\approx N\/(p_\\mathrm{tra}\\ f) $, where $f$ is the fractional abundance of the detectable planet population in the stellar sample. For surveys of Hot Jupiters, with $f \\approx 1\\% $ of main-sequence (MS) stars \\citepads{2012ApJ...753..160W,2011arXiv1109.2497M}, this leads to minimum samples of 2000 MS stars to expect a single transit discovery. Given that most stars in the bright samples of small-telescope surveys are not on the MS, sample sizes of 5000 - 10000 targets are however more appropriate. Survey fields that provide sufficient numbers of suitable stars, by brightness and by desired stellar type, therefore need to be defined. The size of the sample is then given by the size of the field of view ({\\it fov}) and by the spatial density of suitable target stars, which depends on the precision of the detector (primarily depending on the telescope aperture) and on the location of the stellar fields. Also, in most surveys, sample size is increased through successive observations of different fields.\n\n\\runinhead{Temporal coverage}\nAt any given moment, the probability for the observation of a transit of a correctly aligned system is $p \\approx t_T\/ P$, where $t_T$ is the duration of a transit and $P$ the orbital period. This probability goes from 5-8\\% for {ultra-short period planets} over 2-3\\% for typical {Hot Jupiter} systems to 0.15\\% for an Earth-Sun analogue.
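As a back-of-the-envelope check of the percentages quoted above, both probabilities can be computed in a few lines. This is an illustrative sketch for circular orbits ($e = 0$) around a Sun-like star; the transit duration and period in the last example are assumed values:

```python
# Rough transit-probability estimates for circular orbits (e = 0).
R_SUN_AU = 0.00465  # solar radius in AU

def p_aligned(a_au, r_star_au=R_SUN_AU):
    """Geometric alignment probability: p_tra ~ R_s / a (R_p neglected)."""
    return r_star_au / a_au

def p_in_transit(t_transit_h, period_d):
    """Instantaneous probability that an aligned system is seen in transit:
    p ~ t_T / P."""
    return t_transit_h / (period_d * 24.0)

# Hot Jupiter at 0.05 AU: alignment probability of order 10%
print(f"{p_aligned(0.05):.1%}")
# Earth analogue at 1 AU: of order 0.5%
print(f"{p_aligned(1.0):.2%}")
# Hot Jupiter with a ~3 h transit and P = 3.5 d: a few percent in transit
print(f"{p_in_transit(3.0, 3.5):.1%}")
```

Multiplying both probabilities with the planet abundance $f$ reproduces the expected instantaneous transit yield for a stellar sample.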
For an estimation of the number of transits for a given sample at a given time, we need to multiply this probability and the probability for correct alignment with the abundance of detectable planets. \nTo determine a planet's period, of course, at least two transits need to be observed. The requirement to observe three transit-like events that are periodic has however been habitual in ground-based observations, which are prone to produce transit-like events from meteorological and other non-astronomical causes. Furthermore, for an increased S\/N of transit detections, especially towards the detection of smaller planets, as well as towards a more precise derivation of physical parameters, the rule is 'the more transits, the better'. Continuous observational coverage is the most time-efficient way to achieve the observation of a minimum number of transits (e.g. $N_\\mathrm{tr,min} > 3$) for a given system. However, only space missions are able to observe nearly continuously over timescales of weeks, which is the only way to ascertain that transiting planets above some size threshold and below some maximum period are being detected with near-certainty. Ground-based surveys, with their interruptions from the day\/night cycle and from meteorological incidences, can only seek reasonable {\\it probabilities} (but no certainty) to catch a desired number of transits from a given planet. The principal factor that determines the number of observed transits in a given discontinuous light curve is a planet's orbital phase (at some reference time, such as the beginning of observations) or its epoch (the time when one of its transits occurs); both are of course unknown prior to a planet's discovery. An example of the effect of phase on the number of observed transits in discontinuous data is shown in Fig.~\\ref{Fig_phase_coverage}. As a rough rule, in order to achieve reasonable detection probabilities (e.g.
$\\geq 70\\%$) for typical Hot Jupiters ($P= 3-4 d$) with a requirement of 3 observed transits, surveys should cover a stellar field for at least 300h. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm]{Fig_phase_coverage.pdf}\n\n\\caption{The expected number of transits by a test planet\nwith a period of 5 days that would have been\nobserved in a ground-based lightcurve covering 617 hrs (= 25.7 d). The angular (clockwise) coordinate is the planet's phase at some reference time and the radial\ndistance gives the number of transits that would have been observed at each phase. Ensuring that the large majority of the potential phases would produce at least 3 transits required an observational coverage that was much longer than 3 times the orbital period. Adapted from \\citetads{1998A&A...338..479D}.}\n \\label{Fig_phase_coverage}\n\\end{center}\n\\end{figure}\n\n\n\n\\runinhead{Transit detection precision}\nA basic version of the S\/N of a single transit is given by the ratio \n\\begin{equation}\\label{transit_SN}\n(S\/N)_{tr} \\approx \\Delta F \/ \\sigma_\\mathrm{lc} \\ ,\n\\end{equation}\nwhere $\\Delta F$ is the fractional flux-loss during a transit and $\\sigma_\\mathrm{lc}$ is the fractional noise of the light curve on the timescale of the transit-duration $T_T$. This noise is composed of various sources, most notably photon noise from the target and the surrounding sky background, Cosmic Ray hits, CCD read noise and flat-fielding or {jitter} noises (which arise from variations of the positions or shapes of stellar point spread functions on detectors whose sensitivity is not uniform). For ground-based surveys, we also have to add variations from atmospheric transparency and {scintillation} noise. \nFig.~\\ref{Fig_rms_NGTS} shows the scatter over 1 hour time-scales from the most precise space-based survey, Kepler, and from NGTS, one of the leading ground-based ones.
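This S\/N ratio lends itself to a quick detectability estimate. The sketch below uses assumed, illustrative depth and noise values; the $\sqrt{N}$ gain from stacking $N$ transits holds only in the white-noise limit discussed further below:

```python
import math

def transit_snr(depth, sigma_lc, n_transits=1):
    """(S/N)_tr = depth / noise-on-the-transit-timescale, gaining
    sqrt(N) when N transits are stacked (white-noise assumption)."""
    return depth / sigma_lc * math.sqrt(n_transits)

# Hot Jupiter: ~1% deep transit against 0.3% noise on the transit timescale
print(transit_snr(0.01, 0.003))        # single transit
print(transit_snr(0.01, 0.003, 9))     # 9 stacked transits
# Earth-Sun: depth (R_p/R_s)^2 ~ 84 ppm requires ~20 ppm precision
print(transit_snr(84e-6, 20e-6, 4))
```

The last line illustrates why Earth analogues are out of reach of ground-based photometry and demand space-based precision.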
\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=11.5cm]{Fig_kepler_NGTS.pdf}\n\\caption{Comparison of the precision of Kepler (grey points) and NGTS (violet points). Given is the lightcurves' rms scatter on a 1h time-scale. The magnitudes for NGTS are in the I-band; for Kepler they are in its own system. The Kepler noises are from long-cadence (0.5h cycle time) curves of Q1 targets \\citepads{2010ApJ...713L.120J} and scaled by $\\sqrt{1\/2}$ towards the 1h time-scale. The NGTS data are from 695 hours of monitoring with a 12 s cadence, rebinned to exposure times of 1 h, from \\citetads{Wheatley:2017ab}. The difference between NGTS and Kepler precision would be reduced by a factor of $\\sqrt{950\/200} \\approx 2.2$ if the different aperture sizes are taken into account. The precision for the brightest NGTS targets is limited by scintillation noise, which is independent of the targets' brightness.}\n \\label{Fig_rms_NGTS}\n\\end{center}\n\\end{figure}\n\nEarly estimates for planet detection yields commonly assumed a white-noise scaling from the point-to-point scatter of an observed light curve to the usually much longer duration of a transit. In practice, red or {correlated noises} degrade the precision of nearly all photometric time-series over longer time-scales, as was first shown by \\citetads{2006MNRAS.373..231P},\nbased on data from the OGLE-III transit survey. \nOnly the space-based data from the Kepler mission uphold a white-noise scaling from their acquisition cycle of 30 minutes to a transit-like duration of 6 hrs (\\citeads{2010ApJ...713L.120J}; see \\citeads{2011ApJS..197....6G}\nfor more details on Kepler's noise properties).
The CoRoT mission, which in contrast to Kepler operated in a low Earth orbit, produced light curves that on time-scales of 2h were already about twice as noisy as would be the result of a white-noise scaling from their acquisition cycle of 8.5 minutes \\citepads{2009A&A...506..425A}.\nAt least as strongly affected are ground-based surveys, with the principal culprit being the nightly airmass variation, which is on a similar time-scale as the duration of most transits. Correlated noises have been the principal source of the early overestimations of detection yields. In the case of SuperWASP, recognizing their influence led to a revision of detection yields and to an increase in temporal coverage early in its operational phase \\citepads{2006MNRAS.373.1151S}. For surveys that attempt to detect shallow transits, brightness variations due to the sample stars' activity might also be of concern. The demonstration that this variability does not prevent the detection of terrestrial planets around solar-like stars \\citepads{2002ApJ...575..493J} was an important advance during the development of the Kepler mission. \\citetads{2004A&A...414.1139A}\nthen found that K stars are the most promising targets for transit surveys, while the surveys' performance drops\nsignificantly for stars earlier than G and younger than 2.0 Gyr. \nFor a quantitative discussion of the factors that influence the yield of transit surveys, we refer to \\citet{Beatty:2008aa}. \n\nAlgorithms to dampen red noises and other systematic effects have been developed to either 'clean' a lightcurve directly from their influences or as part of a detection algorithm, thereby increasing its sensitivity for transit-like features.
Examples are the pre-whitening employed in the Kepler pipeline \\citepads{2010SPIE.7740E..0DJ}, the cleaning of CoRoT lightcurves \\citepads{2015sf2a.conf..277G} or the widely used {SYSREM} \\citepads{2005MNRAS.356.1466T} and TFA \\citepads{2005MNRAS.356..557K} algorithms.\n\n\n\n\\runinhead{Brightness of the sample: Rejection of false positives and characterisation of the planet catch}\nAs mentioned, a large number of transit surveys were initiated in the first years of the 21st century, after the discovery of the first transiting planets. These early efforts were aimed about equally at deep surveys of small fields using larger (1m and more) telescopes and at shallow surveys with small instruments having wide fields of view. The surveys with larger telescopes, including early projects with the Hubble Space Telescope (\\citeads{2000ApJ...545L..47G}, on the {47 Tucanae} globular cluster, and the {SWEEPS} survey by \\citeads{2006Natur.443..534S}), were met however with limited success, with the most productive one becoming the OGLE-III \\citepads{2003AcA....53..291U} survey using a dedicated 1m telescope. Besides the difficulties of getting access to the required large facilities to perform a deep survey over a sufficiently long time, a major drawback of such surveys is the faintness of the sample. RV verifications or further observational refinements of their transit detections are either impossible, or if possible at all, they likely require the largest existing telescope facilities.\n\nFor example, from the SWEEPS survey that targeted the Sagittarius I window of the Galactic bulge with the Hubble Space Telescope, \\citetads{2006Natur.443..534S} report the detection of transits on 16 targets. Their faintness of V=18.8 to 26.2 as well as crowding however permitted a confirmation as planets for only two of them (SWEEPS-04 and 11), based on RVs taken with the 8m VLT. All other SWEEPS detections have remained in candidate status until the present.
We also note the comparatively small impact (relative to brighter targets) of the very large number of planets on the fainter end of the Kepler mission's sample (\\citeads{2014ApJ...784...45R}\nwith 815 planets, \\citeads{2016ApJ...822...86M}\nwith 1284 planets). These planets have only been validated probabilistically, and their principal use is for statistical studies of planet abundances across their known parameters (radius, period, central star type, planet multiplicity). The brightness of a target sample is therefore a very valuable parameter for the science return of a transit survey! \n\nThe most common follow-up observations of transit detections are RV measurements, which not only prove (or disprove) a planet's existence beyond reasonable doubt, but also greatly improve our knowledge about them, providing masses, orbital eccentricity and occasionally, also the detection of further non-transiting planets in the same system. In practice, from the RV follow-up of numerous candidates for the Kepler, K2 and CoRoT missions, we found that a magnitude of $\\approx$ 14.5 is a soft limit for their routine follow-up. This is due to that brightness being near the limit for RV measurements at several relatively well-accessible mid-sized telescopes with appropriate instrumentation (e.g. the FIES instrument on the 2.5m Nordic Optical Telescope, or the HARPS-N and HARPS instruments on the 3.6m Telescopio Nazionale Galileo (TNG) and on the ESO 3.6m telescope, respectively). \n \nOn transiting systems of bright central stars, a host of further possibilities to examine these systems opens up -- such as observation of the Rossiter-McLaughlin effect, transit spectroscopy, secondary eclipse measurements or the detection of phase curves (see the Handbook's Section on Exoplanet Characterization).
For this reason -- increased knowledge about the discovered systems -- both of the upcoming space-based transit surveys, TESS and PLATO, will focus on samples that are brighter than those of Kepler and CoRoT, while ground-based surveys continue with their efforts to find transiting planets principally on bright stars, or on special types of target stars.\n\n\\section{Transit detection in light curves}\nEfficient recognition of transit-like features in light curves is a central part of any transit detection experiment. This task is usually performed in two steps. In the first one, statistical detection values are assigned that describe the likelihood that a light curve contains a transit-like event. These might also be expressed as a function of a candidate planet's size, period and further parameters. In the second step, these statistical values are evaluated and those candidates that deserve closer investigation are extracted. We reproduce here a description of this step in the Kepler pipeline, from \\citetads{2010SPIE.7740E..0DJ}:\n\"Light curves whose maximum folded {detection statistic} exceeds 7.1$\\sigma$ are designated Threshold Crossing Events (TCEs) and subjected to a suite of diagnostic tests in Data Validation (DV) to fit a planetary model to the data and to establish or break confidence in the planetary nature of the transit-like events\".
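To make the first step concrete, the toy sketch below assigns a detection statistic to a trial ephemeris by comparing in-transit and out-of-transit flux, in the spirit of box-fitting searches. The light curve, depth and noise level are synthetic assumptions, and a real pipeline would scan a dense grid of trial periods, epochs and durations rather than two hand-picked ephemerides:

```python
import numpy as np

def box_statistic(time, flux, period, t0, duration):
    """Significance of a box-shaped dip at a trial ephemeris:
    in-transit flux deficit divided by its standard error
    (white noise assumed)."""
    phase = (time - t0) % period
    in_tr = phase < duration
    if in_tr.sum() < 3:
        return 0.0
    depth = flux[~in_tr].mean() - flux[in_tr].mean()
    return depth * np.sqrt(in_tr.sum()) / flux[~in_tr].std()

# Synthetic light curve: 25 d of data, 0.5%-deep transits, P = 3 d
rng = np.random.default_rng(0)
t = np.arange(0.0, 25.0, 0.01)
f = 1.0 + 1e-3 * rng.standard_normal(t.size)
f[(t % 3.0) < 0.12] -= 0.005  # inject transits of 0.12 d duration

stat_true = box_statistic(t, f, 3.0, 0.0, 0.12)   # correct ephemeris
stat_wrong = box_statistic(t, f, 3.0, 1.5, 0.12)  # right period, wrong epoch
print(stat_true > 7.1, stat_wrong < 7.1)
```

At the correct ephemeris the statistic far exceeds a 7.1$\sigma$-style threshold, while a mis-phased trial stays near zero, illustrating why the threshold choice discussed next controls the false-alarm rate.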
\nThe threshold value for the extraction of candidates needs to be chosen with care, as it must provide a balance between the number of false positives -- which increases to unmanageable levels if the threshold is too low -- and the risk of missing detections of true planets if the threshold is too high.\n\nAs representative transit detection methods and algorithms we mention here the early work on matched-filter detection algorithms by \\citetads{1996Icar..119..244J}, which provided the basics for the transit detection of the early TEP observing project as well as for the Kepler mission; the widely used box least-squares ({BLS}) algorithm \\citepads{Kovacs:2002xf} with derivatives (e.g. \\citeads{2006MNRAS.373..799C}) or algorithms using wavelets (e.g. \\citeads{2007A&A...467.1345R}).\n\nFor the second step of a detection procedure, the evaluation of a transit candidate as a planet-like event, usually a more detailed modelling (or fitting) of the light curve of the presumed transit is performed. The \\citetads{2002ApJ...580L.171M}\nalgorithm or the analytical eclipsing formulae by \\citetads{2006A&A...450.1231G}\nare widely used basic transit modellers that have also been integrated into several transit fitting packages.\n\n\n\\begin{table*}\n\n\\begin{sideways}\n\\begin{minipage}{20.5cm}\n \n \\caption{Selected transit surveys}\n \\label{table1}\n \\includegraphics[width=\\columnwidth]{table1.pdf}\n \\end{minipage}\n\\end{sideways}\n\\end{table*}\n\n\\section{Transit surveys: past, current and future projects}\nThe {Extrasolar Planets Encyclopedia} (\\url{http:\/\/www.exoplanet.eu\/research\/}) currently lists web-sites of 39 planet search projects that indicate 'transits' as a principal observing method. These projects include finished ones, currently operating ones, projects that are in various preparation stages, as well as projects or proposals that have never moved beyond some design phase.
It also includes some projects that are not dedicated to the discovery, but to the follow-up of transiting planets, such as ESA's CHEOPS space mission. In Table 1 and in the following notes we provide an overview over a selection of well-known transit detection surveys. The columns of Table 1 have the following meaning:\\\\\n\\begin{description}\n\\item[years:] Indicates the years of operation.\n\\item[config:] Instrument configuration, with the aperture diameters of individual optical units (cam = camera).\n\\item[{\\it fov}$_\\mathrm{single}$:] The sky area in deg$^2$ covered by a single optical unit of the detection experiment.\n\\item[{\\it fov}$_\\mathrm{instr}$:] The sky area in deg$^2$ that is covered simultaneously by the experiment at a single site in its usual operating mode. Only given if there are multiple optical units at a site.\n\\item[$N_{pl}$:] Count of planet detections in February 2018, based on the number of planets carrying an instrument's designation in the name. Planets labeled with other designations, such as HD or GJ numbers, are missed.\n\\item[R$_{05}$, R$_{50}$, R$_{95}$:] 5th, 50th (median) and 95th percentiles of the radii of the detected planets. If $N_{pl} \\leq$ 20, the smallest and largest planets are given.\n\\item[m$_{05}$, m$_{50}$, m$_{95}$:] Similar to the previous, but indicating the V-mag brightness of the detected systems.\n\\end{description}\nPlanet counts, radii and magnitudes are from the Encyclopedia of Extrasolar Planets and from the {NASA Exoplanet Archive}. Below, some notes are provided for the transit surveys listed in Table 1. \n\n\\runinhead{{OGLE}-III} The {Optical Gravitational Lensing Experiment} has been implemented in four phases, with the fourth one operational at present.
OGLE is dedicated to the detection of substellar objects from microlensing, except for its third phase (OGLE-III, \\citeads{2003AcA....53..291U}), when the observing procedure of the 1m OGLE telescope at Las Campanas Observatory was modified to enable the detection of transits. OGLE-TR-56 was the first planet discovered in a transit search, with subsequent verification from RV follow-up \\citepads{2003Natur.421..507K}.\n\n\\runinhead{{TrES}} The `{Trans-Atlantic Exoplanet Survey}' was the first project with instruments that were specifically designed and dedicated for transit surveying. Its first telescope, originally named STARE, was used in the 1999 discovery of the transits of HD 209458b during tests at the High Altitude Observatory at Boulder. In 2001, it was relocated to Teide Observatory, Tenerife, where a systematic transit search began. Since 2003, the project operated under the TrES name, after the merger with two other projects using similar instrumentation, namely PSST at Lowell Observatory and the Sleuth Project at Palomar Observatory \\citepads{2008PhDT........70O}. The principal success of TrES was the detection of the first transiting planets orbiting bright stars (TrES-1, \\citeads{2004ApJ...613L.153A}; TrES-2, \\citeads{2006ApJ...651L..61O}) by a dedicated survey. TrES was discontinued in 2010.\n\n\\runinhead{{XO}} This survey started in 2003 at a single site, with a second phase observing from three sites from 2012-2014. The CCDs are read in time-delayed integration (TDI): pixels are read continuously while stars move along columns on the detector, owing to a slewing motion of the telescope.
This setup enlarges the effective field of view and results in stripes of $7^\\circ$ x $43^\\circ$ that are acquired during each single exposure.\n\n\\runinhead{{HAT}} This denomination ({Hungarian-made Automated Telescope}) encompasses two surveys: For one, since 2003 {HATNet} operates seven CCD cameras with 110mm apertures on individual mounts, with five of them at Fred Lawrence Whipple Observatory at Mount Hopkins in Arizona and two at Mauna Kea Observatory in Hawaii. For another, {HATSouth} \\citepads{2013PASP..125..154B} is a network across three sites in the southern hemisphere that is able to track stars continuously over longer time-spans. Since 2009, it operates at the Las Campanas Observatory (Chile), at the High Energy Stereoscopic System site (Namibia), and at the Siding Spring Observatory (Australia). Each of these sites contains two mounts, with each of them holding four Takahashi astrographs with individual apertures of 180 mm. The HAT consortium is also advancing the {HATPI} Project of an all-sky camera consisting of 63 optical units on a single mount.\n\n\\runinhead{{WASP}} ({Wide Angle Search for Planets}, see \\citeads{2006PASP..118.1407P} for an instrument description; Smith et al. \\citeyearads{2014CoSka..43..500S}\nfor a review). This consortium operates two instruments: {SuperWASP}-North, since 2004 at Roque de los Muchachos Observatory on the Canary Island of La Palma, and {WASP-South}, since 2006 at the South African Astronomical Observatory. A predecessor instrument, WASP0, was operated during the year 2000 on La Palma. SuperWASP-North is an array of 8 cameras covering 480 deg$^2$ of sky with each exposure; WASP-South is a close copy of it.
WASP is currently the ground-based search that has detected the most planets, among them several (such as WASP-3b, 12b, 43b) that stand out for their excellent suitability for deeper characterization work, due to their short orbital period and\/or large size.\n\n\\runinhead{{KELT}} The `{Kilodegree Extremely Little Telescope}' has to date been the most successful survey using very wide field detectors (with a {\\it fov} of $26^\\circ$ x $26^\\circ$) with commercial photographic optics of short focal length. KELT-North operates since 2005 from Winer Observatory, Arizona, and KELT-South since 2009 from Sutherland, South Africa. Both instruments use a CCD camera with an 80mm\/f1.8 Mamiya lens.\n\n\n\n\\runinhead{{NGTS}} The {Next-Generation Transit Survey} \\citepads{2013EPJWC..4713002W,Wheatley:2017ab} is operated by a consortium of seven institutions from Chile, Germany, Switzerland, and the United Kingdom. After testing in La Palma and at Geneva Observatory, operations started in 2016 at ESO's Paranal Observatory. NGTS employs an automated array of twelve 20-centimeter f\/2.8 telescopes on independent mounts, sensitive to orange to near-infrared wavelengths (600--900 nm). It is a successor project to WASP that achieves significantly better photometric precision (Fig.~\\ref{Fig_NGTS_WASP}), but with a focus on late-type stars. Its first planet discovery has been the most massive planet known to transit an M-dwarf \\citepads{2017arXiv171011099B}. Simulations for a 4-year survey predict the discovery of about 240 planets, among them about 20 planets of 4 $R_\\mathrm{Earth}$ or less \\citepads{2017MNRAS.465.3379G}.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{Fig_NGTS_WASP.pdf}\n\n\\caption{Single transit observations of the hot Jupiter WASP-4b with one NGTS telescope unit (top) and WASP (bottom).
From \\citetads{Wheatley:2017ab}, reproduced with permission.}\n \\label{Fig_NGTS_WASP}\n\\end{center}\n\\end{figure}\n\n\\runinhead{CoRoT} Named after `{Convection Rotation and Transits}', this was the first space mission dedicated to exoplanets. Launched in December 2006 by the French space agency {CNES} and partners into a low polar orbit for a survey lasting initially 4 years, it surveyed 163 665 targets distributed over 26 stellar fields in two opposite regions in the galactic plane, with survey coverages lasting between 21 and 152 days \\citep{megapaper}. In May 2009, its first data processing unit failed and CoRoT's {\\it fov} was reduced to half, while the failure of the other unit in Nov. 2012 caused the end of the mission. Its most emblematic discovery was CoRoT-7b, the first transiting terrestrial planet \\citepads{2009A&A...506..287L}.\n\n\\runinhead{Kepler} This NASA mission was launched in 2009 into an Earth-trailing orbit, for a mission of 4 years to survey a single field of 170,000 stars, principally for the presence of Earth-sized planets. Kepler has discovered the majority of currently known exoplanets, with discoveries that have revolutionized the field of exoplanets. In contrast to the planets found by any other transit survey, only a small fraction (3\\%) of Kepler planets are Jupiter-sized ($ \\geq 0.9 R_{jup}$), while the vast majority are Earth- or Super-Earth-sized ones. Science operations under the 'Kepler' denomination ended in May 2013 when two of the spacecraft's reaction wheels failed and its pointing became unreliable.\n \n\\runinhead{{K2}} In March 2014, the Kepler spacecraft was returned to service under the K2 name. Its observing mode was adapted to the reduced number of reaction wheels, surveying fields near the ecliptic plane for about 80 days each \\citepads{2014PASP..126..398H}.
Planets found by K2 have a rather similar size-distribution to those from Kepler, albeit with a somewhat larger fraction of giant planets (8\\% are larger than 0.9R$_{jup}$). K2 is expected to end around Oct. 2018, when the spacecraft runs out of fuel. \n \n\\runinhead{{TESS}} The `{Transiting Exoplanets Survey Satellite}' by {NASA} aims to scan about 85\\% of the entire sky for transits across relatively bright stars \\citepads{2015JATIS...1a4003R}. Most areas will be covered by pointings lasting 28 days. The spacecraft harbors 4 wide-field telescopes that cover jointly a stripe of the sky of 24$^\\circ$ by 96$^\\circ$. TESS is expected to launch in spring 2018 into an elliptical orbit with a 13.7-day period in a 2:1 resonance with the Moon's orbit, for a mission of 2 years.\n\n\\runinhead{{PLATO}} This {ESA} mission, named after '{PLAnetary Transits and Oscillations of stars}', is expected to be launched in 2026 into an orbit around the L2 point, to perform during at least 4 years a survey of several large sky-areas \\citep{2014ExA....38..249R}. The mission's core sample comprises 15\\ 000 stars of $ 8 \\leq m_V \\leq 11$ while a secondary `statistical' sample includes 245\\ 000 targets up to $m_V \\approx 16$. PLATO will have four groups of detectors, each with six cameras that all point to the same {\\it fov}. Between the groups there is a partial overlap, so that areas near the center of the common {\\it fov} will be covered by all 24 cameras while outer zones will be covered by 6 or 12 cameras only. Two additional \"fast\" cameras with rapid cycle-times and color-filters will survey the brightest stars of 4--8 $m_V$.\n\n\\subsection{Surveys for planets of low-mass stars}\n\nSeveral surveys, which are not listed in Table 1, have been designed specifically for the detection of planets around {low mass stars}, and in particular, {M-stars}.
Given the difficulties to detect planets in the {habitable zone} of solar-like stars, planet-searches around such stars provide an alternative path for the detection of potentially habitable planets (e.g. \\citeads{2007AsBio...7...85S}).\nTheir small size permits terrestrial planets to produce transits that are deep enough to be observable from moderate ground-based instruments. Also, the habitable zone around these stars corresponds to orbital periods of a few days to weeks, making habitable planets' transits shorter, more frequent, and hence easier to detect than for solar-type stars. Disadvantages of low-mass stars as targets are, however, the flux variability that is exhibited by most of them, and the sparsity of such stars with sufficient apparent brightness. As a consequence, these detection projects are not performed as wide-field surveys, but as searches that point to selected target stars, which are covered sequentially. As such, these projects cover relatively few targets and have only a small planet catch, but may provide discoveries of large impact towards our knowledge of potentially habitable planets.\n\n\\runinhead{{MEarth}} This project operates since 2008 eight 40 cm telescopes at Mount Hopkins, Arizona, and since 2014 a similar setup at Cerro Tololo, Chile \\citepads{2012AJ....144..145B}. MEarth has discovered several small planets, among them LHS1140b, a planet of 1.4 $R_\\mathrm{Earth}$ in the habitable zone of an M dwarf at a distance of 10.5 parsec \\citepads{2017Natur.544..333D}.\n\n\\runinhead{{TRAPPIST} \/ {SPECULOOS}} The `TRAnsiting Planets and PlanetesImals Small Telescope' survey consists of two 60 cm robotic telescopes, one operating since 2010 at ESO's La Silla Observatory, Chile, and one since 2016 at Oukaïmeden Observatory, Morocco. It has the dual objective of transit detection and the study of comets and other small bodies in the Solar System \\citepads{2014acm..conf..240J}.
Its outstanding discovery has been the TRAPPIST-1 system of seven planets, with some of them in the habitable zone, around an ultra-cool M8 dwarf at a distance of 12 parsec \\citepads{2017Natur.542..456G}. TRAPPIST is also a prototype of the SPECULOOS (Search for habitable Planets EClipsing ULtra-cOOl Stars) project, whose first phase will consist of four 1m robotic telescopes at ESO's Paranal Observatory.\n\n\\section{Conclusion}\n\nIn the year 2003, K. Horne predicted the success of transit surveys in a paper entitled 'Hot Jupiters Galore'. It took longer than expected to get to that point, and required the understanding and resolution of several subtle issues affecting these surveys,\nbut today the paper's title has become reality and the discovery of transiting planets is commonplace. This applies not only to Hot Jupiters but also to planets across the entire size regime and has been a consequence of the continued refinement of observing techniques and of the development of new instruments, both ground- and space-based.\n\nAt the time of writing, the transit method is expected to remain the largest contributor towards the discovery of new planets and planet systems, with several ambitious ground- and space-based searches under way. Planet systems found in transit searches will also continue to provide the motivation for the continued development of instruments and observing techniques, which take advantage of the opportunities for deeper insights that transiting systems offer. In that sense, systems found by transit surveys will continue as a basic nutrient of the field of exoplanet science.
\n\nFor further reading about transits as a tool to detect and characterize exoplanets, we refer to the reviews by \\citetads{2010exop.book...55W}\nand by \\citetads{2016ASSL..428...89C}\nand to a book dedicated to transiting exoplanets by \\citetads{2010trex.book.....H}.\n\n\n\n\\begin{acknowledgement}\nFinancial support by the Spanish Secretary of State for R\\&D\\&i (MINECO) is acknowledged by HD under the grant ESP2015-65712-C5-4-R and by RA for the Ram\u00f3n y Cajal program RYC-2010-06519, and the program RETOS ESP2014-57495-C2-1-R and ESP2016-80435-C2-2-R. This contribution has benefited from the use of the NASA Exoplanet Archive and the Extrasolar Planets Encyclopaedia and the authors acknowledge the people behind these tools.\n\\end{acknowledgement}\n\n\n\\bibliographystyle{spbasicHBexo} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgment}\nThis paper is supported by National Natural Science Fund for Distinguished Young Scholars (No. 61525201) and General Program of National Natural Science Foundation of China (61972006).\n\n\\bibliographystyle{model5-names}\n\n\\section{Framework}\n\\label{framework}\n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{pic\/framework.png}\n\\caption{NLI2Code framework}\n\\label{fig-framework}\n\\end{figure}\nAs Figure \\ref{fig-framework} shows, the \\textsc{NLI2Code} framework consists of three components.\nIn the offline part, we construct a natural language interface as pairs of functional features and code patterns.\nIn the online part, the user solves tasks by selecting functional features and our synthesizer completes the corresponding code patterns into well-typed code snippets.\nIn the rest of this section, we will discuss the three components separately.\n\n\\subsection{Functional Feature Extractor}\nExtracting functional features is the first step in our framework.\nA functional feature is a brief description of certain library functionality in verb phrase
form.\nNowadays, libraries typically provide multiple platforms for developers and users to communicate, such as mailing lists, issue tracker systems, and posts from online forums like Stack Overflow.\nThese communication records are a natural corpus from which to extract functional features because they contain rich information about how libraries are used.\nIn our framework, all verb phrases from the discussions are considered as candidate functional features.\nWe face two challenges in obtaining usable functional features:\n\\begin{itemize}\n\\item Noise. As Figure \\ref{fig-so} shows, phrases like \\emph{want to} and \\emph{try many things} are unrelated to library functionalities and carry little semantic information. Such phrases should be pruned away. \n\\item Diversity. Functionalities can be expressed in different ways, \\textit{e.g.} \\emph{set an Excel cell color} and \\emph{set the color of an Excel cell} in Figure \\ref{fig-so}.\nFurthermore, users may use different words, which makes the phrases lexically different.
\\textit{e.g.} \\emph{change the cell color}.\nSuch phrases with the same semantic information need to be clustered and normalized.\nOtherwise, the generated natural language interface will be verbose and repetitive.\n\\end{itemize}\n\nIn this work, we applied a filtering pipeline to remove noise phrases, considering stop words, the structure, and the context of the phrases.\nTo cluster similar phrases, we designed a normal form to extract the core action and objects in verb phrases.\nAfter normalization, phrases with the same content, or that are merely lexically different, are merged.\nHere we define two important properties for the extracted functional features:\n\\begin{itemize}\n\\item Accurate: Each functional feature should clearly correspond to certain library functionality.\n\\item Complete: The set of all functional features should cover the library functionalities as much as possible.\n\\end{itemize}\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{pic\/so.png}\n\\caption{Functional features in Stack Overflow}\n\\label{fig-so}\n\\end{figure}\n\n\\subsection{Code Pattern Miner}\nFunctional features organize library functionalities in a list of verb phrases.\nAlthough many posts from the user forums naturally provide code examples, these examples usually cannot be reused as-is.\nIn fact, most code examples are only intended to describe the main idea of a solution to the original question, which can be difficult for others to understand \\citep{DBLP:conf\/icsm\/TreudeR17}.\nFurthermore, a recent analysis shows that online code examples usually have quality problems such as missing control constructs and incorrect order of API calls \\citep{api-misuse}.\nAnother analysis of 914,974 Java code snippets from Stack Overflow shows that only 3.89\\% of them are parsable \\citep{compilable}.\n\nA practical way to improve the quality of code examples is to detect similar API usage in a larger codebase.\nA code pattern is a code template describing that in a
certain usage scenario, some API elements are frequently called together.\nCompared to a single code example in the original post, a code pattern exploits the commonalities among similar programs, which reduces the risk of unknown consequences.\nMoreover, code patterns naturally hint to users which parts of the code to modify because they leave variations among the programs as unfilled parts.\nCommon variations include hard-coded strings and magic numbers.\nA common procedure for code pattern mining is as follows:\n\\begin{itemize}\n\\item construct a code corpus\n\\item abstract code into a certain data structure (\\textit{e.g.}, call sequence, abstract syntax tree, data flow graph)\n\\item apply the corresponding frequent pattern mining algorithm on the corpus and transform the frequent items back to code\n\\end{itemize}\n\n\\subsection{Synthesizer}\nCode patterns are incomplete because they usually miss local information.\nExisting IDEs (Integrated Development Environments) usually provide a simple code completion feature.\nHowever, such completion typically only considers one step of computation, which means that the recommendation result is a single variable or method.\nIn fact, a missing parameter may require a method chain to obtain the desired result.\nFurthermore, each method in the chain may require new parameters to synthesize.\nThese observations suggest a general direction for the synthesizer in \\textsc{NLI2Code}:\ngiven a programming context $\\Gamma$ and the desired type $\\tau$, synthesize an entire type-correct expression with type $\\tau$ from the context.\nFormally, find an expression $e$ such that $\\Gamma\\vdash e: \\tau$.\n\nWe identify two solutions for the synthesizer.\nThe first one is type-directed search \\citep{PLDI12:completion, insynth}, which enumerates all possible expressions with the desired type.\nSince the search space is usually large, heuristic functions are often used to guide the search process.\nThe second solution
recommends expressions according to the statistical analysis of a large code corpus.\nTo synthesize the desired expression, users benefit from recommendations of expressions frequently used in similar contexts. \n\nTo understand the potential and feasibility of our framework, we instantiated it as a tool, \\textsc{NLI4j}, to reuse Java libraries.\nIn the following three sections, we will separately introduce our implementation of the three components.\n\\subsection{RQ1: Filtering Pipeline}\nAs the first step of our algorithm, \\textsc{NLI4j} extracts all the verb phrases from user discussions and then selects functional features by filtering out unrelated phrases.\nThis subsection discusses the output of our filtering pipeline on the labeled dataset \\textit{$SO_{small}$}.\n\n\\subsubsection{Methodology}\nWe first combine all three filters (\\textit{i.e.} stop word filter, context filter, and structure filter) to filter verb phrases.\nThe results are compared to the manually labeled results on the dataset \\textit{$SO_{small}$}.\nWe automatically compare the results with a script that simply matches the textual contents.\nTo avoid mistakes caused by trivial details in natural language (\\textit{e.g.} tenses of verbs, the plural form), instead of asking the annotator to label the benchmarks from scratch, \nwe provide the extracted verb phrases and let the annotator select the functional features from the phrases.\nIf the provided phrases already miss certain functional features, the sentence is counted as a failure even before comparison.
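The paper describes this comparison only as "a script that simply matches the textual contents". A minimal sketch of such a per-sentence check (the function name and the case/whitespace normalization are our assumptions, not NLI4j's actual implementation) might be:

```python
def sentence_verdict(extracted_features, labeled_features):
    """Return True if the filtered features for a sentence textually
    match the manually labeled ones (case- and whitespace-insensitive)."""
    normalize = lambda phrases: {p.strip().lower() for p in phrases}
    return normalize(extracted_features) == normalize(labeled_features)
```

Because annotators pick features from the extracted candidates, any labeled feature absent from the extraction already makes the sentence fail under this set comparison.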
\n\nFurthermore, to evaluate the importance of each filter, we created three new filtering pipelines by removing one filter at a time.\nThen, we evaluated the three modified pipelines using the same script.\n\n\\subsubsection{Results}\nFrom the 500 sentences, our extractor extracted 1,360 verb phrases using the Stanford NLP toolkit.\nWe fed the phrases to our filtering pipeline and obtained 315 functional features.\nFor 93\\% (465 out of 500) of the sentences, the automatically extracted functional features matched the labeled ones in the benchmark.\nOur tool missed 41 functional features and gave 12 wrong features in the remaining 35 sentences.\nTable \\ref{label-result} summarizes details of the results for each library.\nFor each library, we list 1) the number of verb phrases mined from the sentences, 2) the number of functional features after filtering, and 3) the number of sentences for which the filtered results match the benchmark.\nFrom the result, we did not notice significant variations between different libraries.\nHowever, it is possible that our fixed stop words can cause some false negatives when a new library is specified, since a stop word could be a domain-specific concept or action for the new library.\nIn that case, a customized stop word list is recommended.\n\n\\begin{table}\n \\centering\n \\caption{Filtering results for the five libraries}\n \\vspace{0.2cm}\n \\label{label-result}\n \\begin{tabular}{l r r r}\n \\hline\n \\multirow{2}*{\\textbf{\\footnotesize Library}} & \\multirow{2}*{\\textbf{\\footnotesize \\# Phrases}} & \\multirow{2}*{\\textbf{\\footnotesize \\# Features}} & \\textbf{\\footnotesize \\# Correct} \\\\ \n & & & \\textbf{\\footnotesize Sentences} \\\\\n \\hline\n jsoup & 245 & 55 & 96 \\\\\n apache-poi & 277 & 81 & 92 \\\\\n neo4j & 290 & 75 & 87 \\\\\n deeplearning4j & 284 & 68 & 96 \\\\\n eclipse-jdt & 264 & 36 & 94 \\\\\n \\hline\n \\bfseries all & \\bfseries 1,360 & \\bfseries 315 & \\bfseries 465 \\\\\n \\hline\n 
\\end{tabular}\n\\end{table}\n\nWe checked each of the 35 failed sentences and summarized two main reasons for the mistakes, which resulted in both missing and wrong functional features:\n\\begin{itemize}\n \\item \\emph{Preprocessing of natural language.}\n We found that in more than half of the failed sentences, the NLP toolkit did not produce the correct verb phrase list as expected.\n \\item \\emph{Tangled votes.}\n Some phrases were upvoted and downvoted at the same time, and our current weights for the filters led to a wrong decision for these phrases.\n\\end{itemize}\n\nThe first reason is external to our tool.\nWe found the Stanford NLP toolkit sometimes failed to split sentences correctly when punctuation characters are not used correctly.\nAlso, a common case of failed POS tagging is when verbs appear at the beginning of sentences.\nFor the second reason, as there were both upvotes and downvotes in our filtering pipeline, they sometimes became tangled and caused mistakes in functional feature recognition.\nFor example, the phrase \\emph{``return the node of the highest score''} was missing from the sentence \\emph{``With Cypher, I'm trying to return the node of the highest score.\"} because it was downvoted for using the stop word \\emph{return} and upvoted for the context (the preceding Q\\&A expression \\emph{I'm trying to}).\nMachine learning approaches could help in such a scenario by assigning proper weights to the three filters (all one point in our current implementation).\nHowever, considering the small size of the annotated dataset and the fact that the current algorithm is accurate for most sentences, we did not apply machine learning approaches at present.\n\nTo evaluate the importance of each filter, we created three new filtering pipelines by removing one filter at a time.\nThe results are displayed in the first three rows of Table \\ref{tab-modified}, and the last row combines all three filters.\nWhen the stop word filter is
removed, the number of wrong features rapidly rises to 478, as the first row shows.\nThe context filter upvotes the verb phrases following Q\\&A expressions; as a result, the filtering pipeline tends to give a lower score to each phrase after removing this filter.\nAs the second row shows, the number of missing features without the context filter is the largest.\nThe third row depicts the result of removing the structure filter, which also brings more incorrect features.\n\n\\begin{table}\n \\centering\n \\caption{Results for different combinations of the three filters}\n \\label{tab-modified}\n \\vspace{0.2cm}\n \\begin{tabular}{l r r r}\n \\hline\n \\textbf{\\footnotesize Filter} & \\textbf{\\footnotesize \\#Correct} & \\textbf{\\footnotesize \\#Wrong} & \\textbf{\\footnotesize \\#Missing}\\\\\n \\textbf{\\footnotesize Combination} & \\textbf{\\footnotesize Sentences} & \\textbf{\\footnotesize Features} & \\textbf{\\footnotesize Features}\\\\\n \\hline\n context+structure & 120 & 478 & 21 \\\\\n word+structure & 414 & 10 & 103 \\\\\n word+context & 389 & 97 & 36 \\\\\n \\hline\n all filters & 465 & 12 & 41\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\mybox{\\emph{Answer for RQ1:} \nOn a labeled dataset containing five hundred sentences, our filtering pipeline correctly filters\n unrelated verb phrases for 465 (93\\%) sentences. All three filters contribute to the performance,\n and the stop word filter proves to be the most important one. }\n\n\\subsection{Controlled Experiment}\nWe conducted a controlled experiment on the library \\texttt{apache-poi} to see whether our tool can improve the efficiency of reusing libraries in real-world programming.\nWe also recorded the ranking of the expressions accepted by users to evaluate the performance of our synthesizer.\n\\subsubsection{Methodology}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.95\\textwidth]{pic\/study_ui.png}\n\\caption{UI for our controlled experiment. 
The user is invoking a functional feature.}\n\\label{fig-ui}\n\\end{figure*}\nTo evaluate the utility and effectiveness of programming with \\textsc{NLI4j}, we invited 8 participants to solve real-world programming tasks using the tool.\nAll the participants were familiar with the Java programming language and were divided into two groups.\nThe newcomer group consisted of five participants new to the library \\texttt{apache-poi}.\nThe remaining three participants had previously built client projects with \\texttt{apache-poi}, and they formed the expert group.\nThe logic and usage of \\textsc{NLI4j} were briefly introduced to all the participants in advance.\n\nWe prepared five specific programming tasks for the participants to solve.\nPrototypes of the tasks were randomly picked from an online tutorial website\\footnote{https:\/\/www.tutorialspoint.com\/apache\\_poi}.\nWe concretized the tasks for two reasons.\nFirst, some tasks require specific configurations to be automatically validated. \nFor example, we concretized the task \\emph{``create blank workbook''} by specifying the file name and the path to save it.\nSecond, some tasks in the tutorial are intended to teach users how to manipulate a class through multiple subtasks.
Separating the subtasks makes it easier to compose solutions and validators.\nTable \\ref{tab-tasks} lists the tasks with a brief description and the number of API elements invoked in the code example from the tutorial.\nOn average, one task in our experiment invokes 5.4 API elements.\nFigure \\ref{fig-ui} shows the user interface of our controlled experiment.\nEach task has three components: a task description with a detailed hint, a solution file with some pre-defined variables, and a validator program.\nAll tasks follow a fill-in-the-blanks approach, meaning that the participants needed to complete the solution file by implementing the missing functions.\nA task is considered accomplished if the validator returns the accepted page.\nThe tool for our controlled experiment is available in the published online artifacts.\n\n\\begin{table}\n \\caption{Five tasks for participants to solve with \\textsc{Apache-POI}}\n \\vspace{0.2cm}\n \\begin{tabular}{c|l|c}\n \\hline\n \\bfseries Id & \\bfseries Task description & \\bfseries \\#invoked APIs\\\\\n \\hline\n 1 & create blank workbook & 4\\\\\n 2 & write into a spreadsheet & 3\\\\\n 3 & set cell color & 6\\\\\n 4 & set italic font and font color & 7\\\\\n 5 & create hyperlink to URL & 7\\\\\n \\hline\n \\end{tabular}\n \\label{tab-tasks}\n\\end{table}\n\nWe allowed all participants to visit online resources such as Q\\&A forums and search engines when solving tasks, but we recorded the number of pages they opened in the process.\nTwo settings were configured for the coding environment, one equipped with the \\textsc{NLI4j} plugin and the other without it.\nParticipants were assigned at random to each programming task and each coding environment, so the assignment was not deliberately balanced.\nNo participants were assigned to the same task with different coding environments.\nFor the participants who used \\textsc{NLI4j}, their interaction with the plugin was recorded.\nRecall that our synthesizer would
recommend synthesized expressions to users; we recorded whether the user accepted the recommendation and the ranking of the expression that they used.\nFinally, the overall task duration and the number of websites viewed were recorded to facilitate data analysis.\n\n\\subsubsection{Results}\n\nTable \\ref{tab-user} shows the results of the controlled experiment.\nColumns (1) and (2) display the participant's index and the reported programming expertise (N stands for newcomer, and E for expert).\nColumns (3) and (4) display the task and the coding environment (STD stands for the standard IDE and NLI stands for the IDE equipped with the \\textsc{NLI4j} plugin).\nFinally, Column (5) refers to the overall duration of the task, and Column (6) displays the number of web pages the participant opened for the task.\n\n\\begin{table}[htb]\n \\centering\n \\scriptsize\n \\caption{Summary of experiment results}\n \\centering\n \\begin{tabular}{c|c|c|l|c|c}\n \\hline\n \\bfseries (1) & \\bfseries (2) & \\bfseries (3) & \\bfseries (4) & \\bfseries (5) & \\bfseries (6)\\\\\n \\bfseries Id & \\bfseries Expertise & \\bfseries Task & \\bfseries Environment & \\bfseries Time(s) & \\bfseries \\#Pages\\\\\n \\hline\n \\multirow{5}{*}{1} & \\multirow{5}{*}{N} & 1 & NLI & 144 & 0 \\\\ \\cline{3-6} \n & & 2 & STD & 377 & 4 \\\\ \\cline{3-6}\n & & 3 & NLI & 180 & 2 \\\\ \\cline{3-6} \n & & 4 & STD & 520 & 6 \\\\ \\cline{3-6} \n & & 5 & STD & 1021 & 8 \\\\ \\hline\n \\multirow{5}{*}{2} & \\multirow{5}{*}{N} & 1 & STD & 212 & 2 \\\\ \\cline{3-6} \n & & 2 & STD & 419 & 4 \\\\ \\cline{3-6} \n & & 3 & NLI & 306 & 3 \\\\ \\cline{3-6} \n & & 4 & NLI & 729 & 5 \\\\ \\cline{3-6} \n & & 5 & STD & 741 & 9 \\\\ \\hline\n \\multirow{5}{*}{3} & \\multirow{5}{*}{N} & 1 & NLI & 165 & 0 \\\\ \\cline{3-6} \n & & 2 & NLI & 265 & 2 \\\\ \\cline{3-6} \n & & 3 & STD & 764 & 10 \\\\ \\cline{3-6} \n & & 4 & STD & 1189 & 10 \\\\ \\cline{3-6} \n & & 5 & STD & 812 & 7 \\\\ \\hline\n 
\\multirow{5}{*}{4} & \\multirow{5}{*}{N} & 1 & STD & 315 & 5 \\\\ \\cline{3-6} \n & & 2 & NLI & 197 & 2 \\\\ \\cline{3-6} \n & & 3 & STD & 610 & 6 \\\\ \\cline{3-6} \n & & 4 & NLI & 576 & 3 \\\\ \\cline{3-6} \n & & 5 & NLI & 382 & 3 \\\\ \\hline\n \\multirow{5}{*}{5} & \\multirow{5}{*}{N} & 1 & NLI & 190 & 0 \\\\ \\cline{3-6} \n & & 2 & STD & 598 & 8 \\\\ \\cline{3-6} \n & & 3 & NLI & 247 & 3 \\\\ \\cline{3-6} \n & & 4 & STD & 1186 & 5 \\\\ \\cline{3-6} \n & & 5 & NLI & 431 & 5 \\\\ \\hline\n \\multirow{5}{*}{6} & \\multirow{5}{*}{E} & 1 & NLI & 90 & 0 \\\\ \\cline{3-6} \n & & 2 & STD & 197 & 2 \\\\ \\cline{3-6} \n & & 3 & NLI & 91 & 0 \\\\ \\cline{3-6} \n & & 4 & NLI & 410 & 1 \\\\ \\cline{3-6} \n & & 5 & STD & 547 & 1 \\\\ \\hline\n \\multirow{5}{*}{7} & \\multirow{5}{*}{E} & 1 & STD & 122 & 1 \\\\ \\cline{3-6} \n & & 2 & NLI & 109 & 2 \\\\ \\cline{3-6} \n & & 3 & STD & 169 & 1 \\\\ \\cline{3-6} \n & & 4 & STD & 623 & 4 \\\\ \\cline{3-6} \n & & 5 & NLI & 315 & 0 \\\\ \\hline\n \\multirow{5}{*}{8} & \\multirow{5}{*}{E} & 1 & STD & 176 & 1 \\\\ \\cline{3-6} \n & & 2 & NLI & 138 & 0 \\\\ \\cline{3-6} \n & & 3 & STD & 201 & 1 \\\\ \\cline{3-6} \n & & 4 & NLI & 484 & 5 \\\\ \\cline{3-6} \n & & 5 & NLI & 206 & 0 \\\\ \\hline\n \\end{tabular}\n \\label{tab-user}\n\\end{table}\n\nFigures \\ref{fig-compare1} and \\ref{fig-compare2} summarize the data from Table \\ref{tab-user} for newcomers. 
\nThey compare the average time spent and the number of web pages opened by the newcomers in the two coding environments.\nOn average, newcomers without \\textsc{NLI4j} spent 674 seconds and visited 6.5 web pages per task, significantly more than the participants using the plugin (317.7 seconds and 2.3 pages).\nNotice that when newcomers using \\textsc{NLI4j} encountered the first task (\\emph{create blank workbook}), all of them solved it without referring to any web sources.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.44\\textwidth]{pic\/compare1.png}\n\\caption{Comparison between the average time newcomers spent in two coding environments\\label{fig-compare1}}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.44\\textwidth]{pic\/compare2.png}\n\\caption{Comparison between the average number of web pages newcomers visited in two coding environments} \n\\label{fig-compare2}\n\\end{figure}\n\nHowever, the difference is less pronounced in the expert group.\nOn average, an expert with \\textsc{NLI4j} solved a task in 230.3 seconds, while an expert without \\textsc{NLI4j} solved a task in 290.7 seconds.\nFrom later communication with the three experts, we found they were familiar with how to read and search library documentation, which can explain why the number of web pages they opened was much smaller than in the newcomer group.\nHowever, all three experts confirmed that the plugin is convenient when they forget how to use a certain API.\nMany participants reported that when they used the plugin, they could accomplish most tasks without external information, and the web pages they visited were only to confirm the solution.\n\nFor participants who solved tasks with \\textsc{NLI4j}, we also asked them to record the rankings of the expressions they chose to complete the code patterns.\nWe wanted to know whether our synthesizer recommended useful expressions to the participants.\nTable
\\ref{tab-interaction} shows the result; the second column lists the number of interactions for each task.\nAn interaction means there is a missing variable for users to provide or to select from the list of recommended expressions.\nWe use two metrics, \\textit{i.e.} MRR (Mean Reciprocal Rank) and Hit@1, to evaluate the quality of the recommendation.\nOur synthesizer cannot recommend useful expressions if the missing part must be specified by users.\nFor example, the only interaction in the first task is the name of the workbook, which could be any valid string.\n\\textsc{NLI4j} failed to recommend the desired string and got 0 for both the MRR and Hit@1 metrics.\nIn fact, among all 13 interactions for the tasks, such conditions (arbitrary values of built-in types) occurred 5 times.\nFor all the other interactions, \\textsc{NLI4j} successfully recommended the desired expressions at a top-2 position.\nIn the third task, all four desired variables were recommended as the first choice.\nOn average, each task required 2.6 interactions, and the average MRR value is 0.54 while the value for Hit@1 is 0.46.\n\n\\begin{table}\n\\centering\n\\caption{Recommendation performance of the synthesizer\\label{tab-interaction}}\n\\begin{tabular}{c c c c}\n\\hline\n\\bfseries Task ID & \\bfseries \\#Interactions & \\bfseries MRR & \\bfseries Hit@1\\\\\n\\hline\n1 & 1 & 0 & 0\\\\\n2 & 2 & 0.25 & 0\\\\\n3 & 4 & 1.00 & 1.00\\\\\n4 & 4 & 0.375 & 0.25\\\\\n5 & 2 & 0.50 & 0.50\\\\\n\\hline\n\\bfseries Average & \\bfseries 2.6 & \\bfseries 0.54 & \\bfseries 0.46\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n \n\n\\subsubsection{Discussion}\nAll participants were asked to fill out a short survey after the controlled experiment.\nThe survey form is available in our published artifacts.\nFrom the survey results, we can see that all participants agree that using \\textsc{NLI4j} could improve their coding efficiency.\nWhen asked to compare the input form of functional features with free-form natural language,
most participants (6 out of 8) reported that they preferred functional features.\nHowever, some participants raised the concern that for functions not covered by the functional features, they could only turn to free-form queries.\nBesides, one of our participants mentioned that although free-form queries are flexible, composing such queries from scratch could be difficult for a newcomer.\nHe mentioned that hints such as auto-completion or our functional features would be very helpful when users describe their requirements.\n\nWe also asked the participants to compare the code patterns used in \\textsc{NLI4j} with concrete code examples.\nOverall, most of the participants (5 out of 8) preferred code patterns, for two main reasons. \nFirst, code patterns give a clearer hint about where to modify the code.\nSecond, participants believed that code patterns had higher quality and were more reliable, since they were mined from multiple concrete examples.\n\n\\mybox{\\emph{Answer for RQ4:}\nThe result of the controlled experiment shows that \\textsc{NLI4j} can save half of the coding time for newcomers to a library.\nFor experienced developers, \\textsc{NLI4j} can play the role of a prompter when they forget the usage of certain APIs.\nGiven a programming context, the recommended expressions from our synthesizer can effectively help developers fill in the missing parts.\n}\n\\section{Extracting Functional Features}\n\\label{feature}\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[width=\\textwidth]{pic\/task-workflow.png}\n\\caption{The process of extracting functional features}\n\\label{fig-task}\n\\end{figure*}\nAs the first component of \\textsc{NLI2Code}, we need a list of functional features to summarize frequently-used library functionalities.\nRecall that a functional feature is defined as a brief description of certain functionality in verb phrase form.\nGiven a library, our extractor takes Stack Overflow threads as input and outputs the functional
feature list.\n\nFigure \\ref{fig-task} shows the workflow of our approach to extracting functional features.\nWe first extract verb phrases from Stack Overflow threads by leveraging syntax parsing techniques.\nThen, a set of heuristic rules is used to filter out unlikely phrases.\nConsidering that the same functionality can be expressed in different ways, we propose a normalized functional feature representation grammar to ensure the correct clustering of phrases.\nAt last, the frequent subgraph mining algorithm gSpan \\citep{gSpan} is applied to mine functional features from the clustered phrases.\n\n\n\\subsection{Candidate Functional Features}\n\nOur data source is the Q\\&A threads from Stack Overflow containing the specific tags, such as \\textit{``apache-poi''} for the POI project.\nAccording to the definition, we extract all verb phrases as candidate functional features.\nSimilar to state-of-the-art works, we use the Stanford NLP toolkit \\citep{stanford-nlp} to extract verb phrases from the raw data.\nA big problem in applying NLP tools to software documentation is that there are many code-like terms, which are error-prone in POS (Part-of-Speech) tagging and might cause failures in syntax tree parsing.\nThus, we replace the code-like terms with special placeholders to ensure correct POS tagging.\nFor reproducibility, we briefly explain how we recognize code-like terms here.\nStack Overflow threads usually label code fragments with \\texttt{<pre><code>} tags, or with a \\texttt{<code>} tag for inline code elements.\nFor those code-like terms that are not annotated with HTML tags, we employ a set of regular expressions, provided by \\cite{task1}, to identify them in the natural language content.\n\nAfter the preprocessing, we split the natural language text into sentences and feed each sentence to the Stanford NLP toolkit.\nThe toolkit returns a tree-structured parsing result.\nFigure \\ref{fig-syntaxtree} displays the parsing tree of a long sentence, which contains seven verb phrases (subtrees tagged with VP).\nAll the verb phrases from the parsing tree are extracted and form the initial candidate functional features, which will be filtered by a filtering pipeline.\nThe pipeline filters out phrases not related to library functions; for example, in the sentence from Figure \\ref{fig-syntaxtree}, only one phrase out of the seven subtrees is valuable, namely the seventh phrase \\emph{set up the print area for the excel file}. \n\n\\subsection{Filtering Pipeline}\nThe filtering pipeline consists of three rule-based phrase filters.\nIf a phrase matches the rule of a filter, one piece of evidence is added to the phrase.\nA piece of evidence may count as one vote up, one vote down, or a veto on accepting the phrase.\nThen we collect all the evidence added to a phrase and count the votes.\nIntuitively, we remove a phrase when the upvotes are fewer than the downvotes.\n\nThe first filter is based on a handcrafted stop word list.\nWe downvote three types of verb phrases because they are not likely to appear in a meaningful functional feature: \n\\begin{itemize}\n\\item Special grammatical ingredients such as auxiliary verbs (\\textit{e.g.}, be, do, have), modal verbs, and pronouns, which usually do not carry actual meaning.\n\\item Q\\&A special words. The sentences from Stack Overflow often contain trivial words describing the questioners' requirements (\\textit{e.g.}, ask, try, need).\n\\item Programming special terms.
Some programming terms, keywords in programs, or development-specific words are usually not part of valid functional features (\\textit{e.g.}, extend, return, and stack trace).\n\\end{itemize}\n\nThe second filter judges the phrases based on information from the context.\nThough phrases containing Q\\&A special expressions are considered invalid, the phrases following some special Q\\&A expressions are very likely to refer to library functionalities.\nFor example, in Figure \\ref{fig-syntaxtree}, the 5th verb phrase \\emph{\"need to ...\"} should be filtered, but the 7th verb phrase \\emph{\"set up the print areas for the excel file\"} following the Q\\&A phrase \\emph{\"need to\"} is a functional feature.\nFor each phrase, we analyze its preceding content in the same sentence.\nIf we find a match with a Q\\&A special expression before the phrase, we upvote the phrase.\n\nThe third filter is based on the structure of the phrase in the syntax tree.\nWe use syntactic structure characteristics to filter out invalid verb phrases.\nFor example, the 3rd and the 6th phrases in Figure \\ref{fig-syntaxtree} do not contain any verbs as direct children and will be filtered out by the structural filter.\nBesides, there are usually some complex sub-clauses in the verb phrase.\nWe hope to keep our generated features as concise as possible; therefore, we remove the sub-clauses.\nAnother important purpose of filtering parse tree structures is to get the candidate phrases ready for the later normalization.\nThe structural filter ensures the phrase candidates are compatible with the normal form.\n\n\\subsection{Phrase Normalization}\nTo cluster verb phrases with similar meanings, we define the normal form of feature phrases as Table \\ref{bnf} shows.\nThe symbol ``[]\" denotes that a component is optional.\nGenerally speaking, a functional feature consists of at least an \\emph{Action} and an \\emph{Object}, which can be modified by a \\emph{Condition} (usually a
prepositional phrase).\n\n\begin{table}[htbp]\n\caption{The normal form of feature phrases}\n\label{bnf}\n\centering\n\begin{tabular}{l l l}\n\hline\nFeature & ::= & Action Object [Condition]\\\\\nAction & ::= & verb [particle]\\\\\nObject & ::= & dt adj noun\\\\\nCondition & ::= & prep [verb] Object\\\\\n\hline\n\end{tabular}\n\end{table}\n\nOur pilot study summarizes the common parsing tree types that are compatible with our normal form.\nConcretely, Table \ref{transformrule} lists the six types and their transformation rules to the normal form.\nCase \#1, the most common one, denotes verb phrases that consist of a verb and a noun phrase.\nCase \#2 covers particles for intransitive verbs.\nCase \#3 is another popular case, containing a verb, a noun phrase (NP), and a prepositional phrase (PP).\nCase \#4 denotes verb phrases that do not contain a direct noun phrase.\nCase \#5 is for noun phrases that consist of a word chain headed by a noun.\nCase \#6 is a typical prepositional phrase.\n\n\begin{table*}[htbp]\n    \centering\n    \scriptsize\n    \caption{Transformation rules for common types of verb phrases\label{transformrule}}\n    \begin{tabular}{l l l l}\n    \hline\n    \bfseries ID & \bfseries Grammar pattern & \bfseries Phrase example & \bfseries Transformation rule to normal form\\\\\n    \hline\n    \multirow{2}{*}{1}&\multirow{2}{*}{VP:=VB NP}&get the cached formula value & \multirow{2}{*}{VB:=verb; NP:=Object}\\\\\n    & & VP(VB)(NP(DT)(JJ)(NN)(NN)) &\\\\\n    \hline\n    \multirow{2}{*}{2}&\multirow{2}{*}{VP:=VB PRT NP}&set up the print areas & VB:=verb;PRT:=particle;\\\\\n    & & VP(VB)(PRT(RP))(NP(DT)(NN)(NN)) & NP:=Object\\\\\n    \hline\n    \multirow{2}{*}{3}&\multirow{2}{*}{VP:=VB NP PP}&delete documents from lucene index & VB:=verb;NP:=Object;\\\\\n    & & VP(VB)(NP(NN))(PP(IN)(NP(NN)(NN))) & PP:=Condition\\\\\n    \hline\n    \multirow{2}{*}{4}&\multirow{2}{*}{VP:=VB 
PP}&iterate through the terms in a document & VB:=verb;IN(in PP):=particle;\\\\\n    & & VP(VB)(PP(IN)(NP(NP(DT)(NN))(PP(IN)(NP(DT)(NN))))) & NP(in PP):=Object\\\\\n    \hline\n    5&NP:=word NN&Case \#1-\#3& Map word to dt adj noun in Object\\\\\n    \hline\n    \multirow{2}{*}{6}&\multirow{2}{*}{PP:=IN NP}& \multirow{2}{*}{Case \#3-\#4} & IN ::= prep (in Condition);\\\\\n    & & & NP ::= Object (in Condition)\\\\\n    \hline\n    \end{tabular}\n\end{table*}\n\nAfter normalization, we rebuild the tree representation for each phrase and apply the \emph{gSpan} algorithm to mine frequent subgraphs as our final functional features.\nFurthermore, we merge two phrases if they share the same objects and their action words are synonyms according to WordNet\footnote{We use APIs from nltk.corpus.wordnet}.\nFigure \ref{cluster} explains why normalization is necessary.\nFigure \ref{cluster}.(a) is the parse tree of the verb phrase \emph{set the print area} and Figure \ref{cluster}.(b) depicts another candidate phrase \emph{set up the print areas for the excel file}.\nThe original parsing trees contain many detailed grammatical ingredients, which prevent us from mining valuable common subgraphs.\nThe two largest common subgraphs between tree (a) and tree (b) are (VP (VB set) (NP)) (in red color and bold font) and (NP (DT the) (NN print)) (in blue color and underscored), which are meaningless.\nIn contrast, Figure \ref{cluster}.(c) and Figure \ref{cluster}.(d) are rebuilt from our normalized phrases, which omit unnecessary details like POS tags and unify the structures of the top layers.\nTheir common parts (in red and bold font) show us a reasonable result.\n\n\begin{figure}[htb]\n    \centering\n    \includegraphics[width=0.43\textwidth]{pic\/syntax-tree.png}\n    \caption{The parsing tree of a long sentence. 
The seventh verb phrase is a functional feature and the others need to be filtered.}\n    \label{fig-syntaxtree}\n\end{figure}\n    \n\begin{figure}[htb]\n\centering\n\includegraphics[width=0.43\textwidth]{pic\/cluster.png}\n\caption{Comparison between parse tree and normalized tree in mining frequent subtrees}\n\label{cluster}\n\end{figure}\n    \n\n\n\section{Introduction}\n\label{intro}\n\nTo implement certain functionality, developers often reuse existing libraries through the corresponding APIs.\nYet discovering the correct subset of the APIs is a major obstacle for API users \citep{api-hard2}.\nThe obstacle comes not only from the large number of APIs: a real-world programming task usually requires the cooperation of multiple APIs, and each API invocation must follow strict specifications.\nFor example, for a simple functionality like ``\textit{set color for an Excel cell}\", the desired API usage sequence using \texttt{apache-poi} is as follows:\n\begin{equation}\nonumber\n    \begin{split}\n    &Workbook.createCellStyle();\\\\\n    &CellStyle.setFillBackgroundColor(short);\\\\\n    &CellStyle.setFillForegroundColor(short);\\\\\n    &CellStyle.setFillPattern(FillPatternType);\\\\\n    &Cell.setCellStyle(CellStyle);\n    \end{split}\n\end{equation}\n\nTo address the issue, we proposed the concept of NLI (Natural Language Interface) for library reuse \citep{DBLP:conf\/icsr\/ShenXZZW19}.\nWith NLI, users reuse library functionalities through high-level natural language descriptions instead of directly manipulating the detailed APIs.\nFigure \ref{mps} summarizes the key steps of how Alice, a Java programmer, reuses the library \texttt{apache-poi} with NLI.\nAs Figure \ref{mps}.(a) shows, Alice starts by selecting the desired functionality from a list of natural language descriptions, which is \emph{set cell color} in this case.\nAfter the selection, the functionality is mapped to its corresponding implementation, which is a 
built-in code template in NLI.\nAs Figure \ref{mps}.(b) shows, Alice needs to provide three parameters (\textit{i.e.}, the specific background color, foreground color, and fill pattern) to fill the template.\nEach parameter to be provided is annotated with an example expression in grey font, which is recommended by a synthesizer.\nIn fact, there are more than three missing parameters in the code template; the synthesizer has automatically created the trivial ones from the current context (\textit{e.g.}, creating a \textit{Workbook} object with its constructor).\nAfter Alice fills in the parameters, a well-typed code snippet is synthesized and inserted into the editor (as Figure \ref{mps}.(c) shows), which solves Alice's task.\n\n\begin{figure}[!htb]\n    \centering\n    \subfigure[User types or selects the desired functional feature]{\n    \begin{minipage}{8cm}\n    \centering\n    \includegraphics[width=\textwidth]{pic\/mps-alias.png}\n    \vspace{0.1cm}\n    \end{minipage}\n    }\n    \centering\n    \subfigure[The code pattern exposes three parameters for the user]{\n    \begin{minipage}{8cm}\n    \centering\n    \includegraphics[width=\textwidth]{pic\/mps-param.png}\n    \vspace{0.1cm}\n    \end{minipage}\n    }\n    \centering\n    \subfigure[The synthesized code snippet for the functional feature]{\n    \begin{minipage}{8cm}\n    \vspace{0.2cm}\n    \includegraphics[width=\textwidth]{pic\/mps-code.png}\n    \end{minipage}\n    }\n    \caption{Application of NLI for reusing \texttt{apache-poi}}\n    \label{mps}\n\end{figure}\n\nFor the library reuse problem, we highlight the benefits of NLI from two aspects.\nThe first benefit concerns query composition.\nIf the developer is not familiar with the library, it can be difficult to compose a high-quality query that accurately describes the desired functionality.\nFor example, Table \ref{qa} displays a post\footnote{https:\/\/stackoverflow.com\/questions\/53052931} from Stack Overflow.\nThe 
post title mistakenly mentions the concept ``\textit{background color}'', while the accepted answer shows the user actually desired ``\textit{foreground color}''.\nIn NLI, we summarize the library functionalities into functional features.\nWe conjecture that, compared to composing free-form queries, the mechanism of selecting functional features is easier and can make users more confident about the results.\nThe second benefit is code quality.\nAn illustrative code example can help developers quickly understand how to implement certain functionality.\nHowever, many online code examples are only intended to express the main idea of a solution instead of being reused as-is.\nPrevious studies \citep{DBLP:conf\/icsm\/TreudeR17, api-misuse} show that online code examples are often not self-explanatory and may have quality problems such as an incorrect order of API calls.\nAs Table \ref{qa} shows, the code snippet in the accepted answer contains only one API, which is not a complete solution for the task.\nIn NLI, we mine code patterns by exploring more usage examples of the API.\nOur hypothesis is that unveiling how APIs are used in similar program contexts could improve the code quality.\n\n\begin{table}[t]\n    \centering\n    \caption{An example post from Stack Overflow}\n    \label{qa}\n    \begin{tabular}{p{0.46\textwidth}}\n    \hline\n    \textbf{Title: Apache-POI : How to set background color of a cell when creating spreadsheet?}\\\\\n    \hline\n    \textbf{Question: } In Apache POI 4.0, I want to set an Excel cell background color.\n    But all I get are black cells. 
I've tried many things, but result is always the same.\n    How can I set the background color of an Excel cell in Apache POI 4.0 ?\\\\\n    \\hline\n    \\textbf{Answer: }Try to use below code for background style:\\\\\n    \\small{setFillForegroundColor(IndexedColors.YELLOW.getIndex());}\\\\\n    \\hline\n    \\end{tabular}\n\\end{table}\n\nTo construct and use NLI, we designed an abstract framework \\textsc{NLI2Code}, which consists of three components: a functional feature extractor, a code pattern miner, and a synthesizer.\n\\emph{Functional features} are natural language descriptions of the library functionalities.\nIn this paper, we instantiated the extractor by mining Stack Overflow since a previous survey shows it is the first option for most developers to search for programming solutions \\citep{uclsurvey}.\nIn the second component, we try to match each functional feature with a \\emph{code pattern}, which is a code template mined from multiple implementations of the feature.\nAs code patterns usually lack customized information such as local parameters, a \\emph{synthesizer} is supposed to complete them into compilable snippets.\nThe missing parameters could be synthesized from the current programming context or provided by the user.\nFinally, \\textsc{NLI2Code} combines the three components and generates well-typed code snippets for users.\n\nAround the central concept \\textsc{NLI}, the main contributions of this paper are:\n\\begin{itemize}\n\\item an algorithm to extract verb phrases describing library functionalities from Stack Overflow.\n\\item an approach to mine code patterns, with a self-designed intermediate representation for Java to eliminate coding style differences.\n\\item an instantiation of \\textsc{NLI2Code} to reuse Java libraries, with evaluation on real-world tasks to prove the feasibility of the framework.\n\\end{itemize}\n\nThe remainder of the paper is organized as follows.\nSection \\ref{framework} demonstrates the abstract 
framework \\textsc{NLI2Code}.\nSections \\ref{feature}, \\ref{pattern} and \\ref{synthesizer} explain our implementation of the framework, which is available from our online artifacts \\footnote{https:\/\/github.com\/nli2code\/jss-artifact}.\nIn Section \\ref{evaluation}, we conduct several experiments to check the accuracy of our algorithms and apply a controlled experiment to explain how \\textsc{NLI2Code} works in real-world development.\nSection \\ref{related} introduces the related work.\nSection \\ref{conclusion} briefly summarizes this paper.\n\\section{Synthesizer}\n\\label{synthesizer}\nAs the last component of our framework, the synthesizer completes the skeleton code into a well-typed code snippet under the current programming context.\nWe explain the details in this section for reproducibility, but we do not claim the synthesizer as a contribution.\n\nConsider each hole in the skeleton code is annotated with the corresponding type, the synthesis problem can be stated as: \ngiven a programming context, how to create an expression with the desired type $\\tau$.\nHere are the three strategies we use:\n\\begin{itemize}\n\\item pick a variable of $\\tau$ from the current context\n\\item call the constructor function of $\\tau$ \n\\item invoke a method chain and the return type of the last method is $\\tau$ \n\\end{itemize}\n\nThe last strategy is a search process. 
Figure \\ref{search-tree} displays an example of the search tree.\nEach node in the tree is an API type from the library and an edge connects two nodes if they are separately the caller and the return type of a method.\nThe root of the tree is the type of a declared variable and the leaves are the target type.\nA path from the root to a leaf represents a method chain which returns the desired type.\nAs Figure \\ref{search-tree} shows, there are four method chains to create a variable with type \\textit{``Cell''} from the starting type \\textit{``Workbook''}.\nDuring the search, we also considered type casting between types by analyzing the inheritance between APIs.\n\n\\begin{figure}[htb]\n    \\centering\n    \\includegraphics[width=0.42\\textwidth]{pic\/search-tree.png}\n    \\caption{Type-directed search tree}\n    \\label{search-tree}\n\\end{figure}\n\nTo guide the search process, we define a cost model as the heuristic rule.\nThe model evaluates the goodness of different ways for variable synthesis by mapping them to integers.\nUsing existing variables in context is encouraged, with zero cost.\nIf there are multiple variables with the same type, we choose the one created most recently due to software localness.\nIf a variable is the return value of a certain method, it costs 2 when the method is a constructor and 1 for else.\nThe process for variable synthesis could be recursive, which means in the process of synthesizing the current variable, the invocations require parameters that are not in the context.\nOur cost model adds the costs for synthesizing these parameters to the total cost.\n\n\\begin{equation}\n\\begin{aligned}\ncost(t) = 0,\\quad t\\ in\\ context\\ or\\ t\\ is\\ constant\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\ncost(f(t_1, t_2, ..., t_k)) =\nprice(f) + \\sum_{i=1}^{k}{cost(t_i)}\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\nprice(f) =\n\\begin{cases}\n2& \\text{f is constructor}\\\\\n1& 
\text{otherwise}\n\end{cases}\n\end{aligned}\n\end{equation}\n\nIf two expressions get the same score under the cost model, we break the tie by referring to the code corpus. \nRecall that each skeleton code is mined from a code corpus; we first select instances of the skeleton code from that corpus.\nFor each instance, we extract the variable that fills the hole and locate the definition of the variable by analyzing the \textit{``def-use''} relationship.\nThe process of extraction can be recursive because the definition of a variable may use other undeclared variables.\nThe recursion terminates when we find all definitions of the variables or we meet a variable defined outside the current method body (\textit{i.e.}, parameters of the method, global variables).\nAs a result, from each instance in the corpus, we extract an expression (\textit{i.e.}, a method chain) to fill the hole. \nFor two synthesized expressions with the same score, we calculate their frequency among the expressions extracted from the corpus and recommend them in order of decreasing frequency.\n\subsection{RQ2: Functional Features}\nIn this subsection, we generate functional features for each library from the dataset \textit{$SO_{large}$}.\nOur benchmark is the lists of functionality descriptions from the official tutorials.\nThe evaluation checks both (1) whether each functional feature is accurate, and (2) 
whether each functionality in the tutorial is covered.\n\n\subsubsection{Methodology}\nGiven the Stack Overflow corpus of a library, the output of our functional feature extractor is a list of functional features in verb phrase form.\n\nTo evaluate the accuracy, we ask two annotators to rate the extracted functional features.\nThey are requested to give a score for each feature: two points for an actual library functionality, one point for a likely functionality that requires further information to make it clear, and zero points for a meaningless phrase.\nThe two annotators mark the functional features separately and afterwards discuss their ratings to reach an agreement.\nWe count all the ratings by the annotators and calculate the average score over all the functional features.\n\nTo evaluate the completeness, we ask the annotators to review the functionalities in the benchmark one by one and judge whether each functionality is included in our generated functional features.\nAgain, we ask the annotators to give a score for each functionality.\nIf a functionality is included in our generated features, our result gets two points.\nIf our output includes a similar but imprecise functional feature, our result gets one point.\nOtherwise, our result gets zero points for the functionality.\n\n\subsubsection{Results}\n\nTable \ref{tab-task1} displays the results for the accuracy of the functional features.\nThe first column is the name of the library and the second column is the number of normalized functional features extracted from the \textit{$SO_{large}$} dataset.\nThe third column lists the number of functional features marked with each of the three scores and the last column is the average score over all the functional features in the library.\nAs the last row shows, of a total of 531 functional features, 282 (53.1\%) are annotated with two points, 176 (33.1\%) are annotated with one point, and the remaining 73 (13.7\%) are irrelevant to library functionalities.\nThe average 
score shows that our functional features get approximately 1.39 points out of two.\n\nTable \ref{tab-task2} displays the results for the completeness of the functional features.\nInstead of rating a functional feature, we rate each functionality from the official tutorial in Table \ref{tab-task2}.\nThe result shows that our generated functional features can cover 86.3\% (82 out of 95, 66.3\% with two points, 20.0\% with one point) of the functionalities in the benchmark.\nFor the functionalities that get one point, our annotators reported that most cases are caused by the tutorial summarizing several tasks into one functionality.\nFor example, the last functionality in the tutorial of \texttt{apache-poi} is \textit{``cells with multiple styles''}, which mentions three tasks (setting the color, font, and cell style) at the same time.\nThere is little chance that a user will discuss the three functionalities together in one verb phrase.\nWe carefully analyzed all 13 missing functionalities (rated zero points) by manually searching for them on Stack Overflow.\nAs a result, seven of them are never mentioned on Stack Overflow, and the remaining six are discussed fewer than three times in the whole corpus.\nSince we only keep the frequent normalized syntax trees when normalizing functional features, phrases with low frequency are not included in our final result.\nThe results show some fluctuation among different libraries, especially for completeness.\nIn Table \ref{tab-task2}, the \texttt{eclipse-jdt} library gets the highest score of 2.00, while the \texttt{deeplearning4j} library is rated the lowest (1.09 points).\nThe fluctuation comes from the different numbers of related threads on Stack Overflow.\nIn fact, as Table \ref{so-dataset} shows, the number of threads under the tag \texttt{deeplearning4j} is the smallest in our dataset.\nThe small size of discussions obviously affects the completeness of functional features, which is 
an external threat to our algorithm.\n\n\\begin{table}[t]  \n    \\centering\n    \\caption{Accuracy of the generated functional features}  \n    \\vspace{0.2cm}\n    \\label{tab-task1}\n    \\begin{tabular}{l c p{0.5cm}<{\\centering} p{0.5cm}<{\\centering} p{0.5cm}<{\\centering} c c}\n    \\hline\n    \\multirow{2}*{\\textbf{\\footnotesize Library}} & \\multirow{2}*{\\textbf{\\footnotesize \\# Features}} & \\multicolumn{3}{c}{\\textbf{\\footnotesize Score}} & \\multirow{2}*{\\textbf{\\footnotesize Average}} \\\\ \n    \\cline{3-5}\n    & & 2 & 1 & 0 & \\textbf{\\footnotesize Score} \\\\\n    \\hline \n    jsoup & 86 & 41 & 27 & 18 & 1.26\\\\\n    apache-poi & 190 & 116 & 48 & 26 & 1.47\\\\\n    neo4j & 119 & 61 & 40 & 18 & 1.36\\\\\n    dl4j & 33 & 16 & 14 & 3 & 1.39\\\\\n    eclipse-jdt & 103 & 48 & 47 & 8 & 1.39\\\\\n    \\hline\n    {\\bfseries all} & {\\bfseries 531} & {\\bfseries 282} & {\\bfseries 176} & {\\bfseries 73} & {\\bfseries 1.39} \\\\\n    \\hline\n    \\end{tabular}\n\\end{table}\n\n\\begin{table}[t]  \n\\centering\n\\caption{Completeness of the generated functional features}\n\\vspace{0.2cm}\n\\label{tab-task2}\n\\begin{tabular}{l c p{0.5cm}<{\\centering} p{0.5cm}<{\\centering} p{0.5cm}<{\\centering} c}\n    \\hline\n    \\multirow{2}*{\\textbf{\\footnotesize Library}} & \\multirow{2}*{\\textbf{\\footnotesize \\# Functions}} & \\multicolumn{3}{c}{\\textbf{\\footnotesize Score}} & \\multirow{2}*{\\textbf{\\footnotesize Average}} \\\\ \n    \\cline{3-5}\n    & & 2 & 1 & 0 & \\textbf{\\footnotesize Score} \\\\\n    \\hline\n    jsoup & 13 & 10 & 3 & 0 & 1.77 \\\\\n    apache-poi & 46 & 30 & 12 & 4 & 1.56\\\\\n    neo4j & 9 & 7 & 1 & 1 & 1.67 \\\\\n    dl4j & 21 & 10 & 3 & 8 & 1.09 \\\\\n    eclipse-jdt & 6 & 6 & 0 & 0 & 2.00\\\\\n    \\hline\n    {\\bfseries all} & {\\bfseries 95} & {\\bfseries 63} & {\\bfseries 19} & {\\bfseries 13} & {\\bfseries 1.52} \\\\\n    \\hline\n\\end{tabular}\n\\end{table}\n\n\\mybox{\\emph{Answer for RQ2:} \nBy comparing 
the functional features with the official tutorials,\n we found that 86.2\% (458 out of 531) of the functional features are accurate.\n Furthermore, the features can cover 86.3\% (82 out of 95) of the functionalities listed in the official tutorials.\n}\n\n\n\section{Conclusion}\n\label{conclusion}\nThis paper promotes the concept of NLI (Natural Language Interface) for library reuse.\nTo construct and use NLI, we design a framework with three components (\textit{i.e.}, a functional feature extractor, a code pattern miner, and a synthesizer).\nWe instantiate the three components as a tool \textsc{NLI4j} to reuse Java libraries.\nOur extracted functional features reach an accuracy of 86.2\% and cover 86.3\% of the functionalities provided by the official tutorials.\nCompared with existing code pattern miners, \textsc{NLI4j} can mine more accurate and complete code patterns.\nFinally, a controlled experiment with eight participants on five real-world tasks shows that our tool can save half of the coding time for newcomers to the library.\nFrom the practical perspective, our framework improves the efficiency of reusing libraries.\nFrom the academic perspective, our framework lays out a design space for building natural language interfaces for libraries, which will hopefully inspire research in this area.\n\n\section{Evaluation}\n\label{evaluation}\n\nIn this section, we evaluate \textsc{NLI4j} from three perspectives, corresponding to the three components of the framework.\nFirst, to evaluate the accuracy and completeness of the functional features, we compare the extracted functional features with the library functionalities provided by the official tutorials.\nSecond, to evaluate the quality of code patterns, we use the code examples from the official tutorials as benchmarks and compare our mining algorithm with two existing pattern mining tools \citep{OOPSLA15, api-misuse}.\nThird, we evaluate the synthesizer with a controlled experiment on 
\texttt{apache-poi}.\nIn the study, we implement an IDE plugin by putting all three components together and investigate whether the plugin can save programmers' time in solving real-world tasks.\n\nOur research questions are as follows:\n\begin{itemize}\n    \item \textit{$RQ_1:$ How well does our filtering pipeline perform on selecting functional features from user discussions?}\n    This question aims at assessing whether our filtering pipeline is effective in filtering out unrelated verb phrases.\n    Furthermore, we investigate the importance of each filter in the process.\n    \item \textit{$RQ_2:$ To what extent is \textsc{NLI4j} able to provide accurate and complete functional features?}\n    This question evaluates the accuracy and completeness of the normalized functional features.\n    Here, accuracy means that each functional feature should clearly correspond to a functionality.\n    Completeness refers to the capability of the generated functional features to cover frequently-used library functionalities.\n    \item \textit{$RQ_3:$ How does our code pattern mining algorithm perform compared to existing mining tools?}\n    This research question is related to the quality of the mined code patterns.\n    Given the same codebase, we compare our mined code patterns with those of two existing mining algorithms, which abstract source code into syntax trees and sequences, respectively.\n    \item \textit{$RQ_4:$ To what extent is \textsc{NLI4j} able to improve the efficiency of solving real-world programming tasks?}\n    Finally, this research question directly investigates the usefulness of \textsc{NLI4j} in real-world development.\n\n\end{itemize}\n\nIn the following, we first introduce our datasets and benchmarks.\nThen, for each research question, we detail our evaluation methodology and results in an individual 
subsection.\n\n\input{sections\/6a_datasets}\n\input{sections\/6b_label}\n\input{sections\/6c_feature}\n\input{sections\/6d_pattern}\n\input{sections\/6e_userstudy}\n\input{sections\/6f_threats}\n\n\section{Related Work}\n\label{related}\nThe idea of \textsc{NLI2Code} contributes to the large body of work on API comprehension and software reuse.\nIn addition, each of the three components has benefited from related work in the corresponding domain, which we summarize in this section separately.\n\subsection{Information extraction from software artifacts}\nSeveral researchers have succeeded in extracting high-quality software specifications from software artifacts using NLP techniques.\n\cite{zhong09} proposed an approach for inferring specifications from API documentation by detecting actions and resources through machine learning.\nTheir evaluation showed relatively high precision, recall, and F-scores for five software libraries, and indicated potential uses in bug detection.\n\cite{concept} presented an NLP-based approach to extract and organize concepts from software identifiers in a WordNet-like structure through tokenization, part-of-speech tagging, dependency sorting, and lexical expansion.\n\cite{api-tutorial} introduced an approach to select relevant tutorial fragments for APIs, which combined the topic model and the PageRank algorithm.\nCloser to our goal of summarizing software artifacts with functional explanations, \cite{faq} designed an approach to extract FAQs from mailing lists and forums.\nThe approach applied the LDA algorithm to extract topic models from the data, which are used for the creation of topic-specific FAQs.\n\cite{task1} defined the concept of a task as a specific programming action that has been described in the documentation.\nIndexing long documents with high-level tasks can help users quickly locate the part they care about.\nFurthermore, \cite{task2code} developed a tool that can map tasks to code snippets 
from Stack Overflow answers.\nIn \textsc{NLI2Code}, we normalize the free-form tasks into a set of pre-defined functional features and enhance concrete code examples into abstract code patterns, since the quality of Stack Overflow code examples is controversial \citep{api-misuse}. \n\n\subsection{Code pattern mining}\nCode patterns are abstract code examples with metavariables or other components to be completed by users.\nModern IDEs usually integrate relevant features to define widely-used code patterns, such as the live template feature in IntelliJ IDEA and SnipMatch in Eclipse.\nSeveral studies \citep{api2, api3} applied statistical methods to automatically mine code patterns since source code was shown to be highly repetitive \citep{naturalness}.\nThe common workflow for code pattern mining first abstracts source code into a well-designed data structure and then applies the corresponding frequent pattern mining algorithm.\n\cite{fse14:idiom} presented \textsc{Haggis}, a system for mining code patterns that was built on techniques from statistical natural language processing.\n\textsc{Haggis} transformed source code into abstract syntax trees and applied a Bayesian probabilistic tree substitution technique to get code patterns.\nThe mined patterns proved to be accurate and meaningful, and the authors mentioned that part of the patterns were accepted by the Eclipse SnipMatch project.\nTo detect API misuse in online forums, \cite{api-misuse} developed a tool \textsc{ExampleCheck} to compare API usages in the forum with code patterns mined from large codebases.\nThe authors designed a data structure called the structured call sequence, which enriches API invocations with syntax such as guard conditions and control flow statements.\nSuch enrichment is vital because most API misuses in online code examples suffer from missing guard conditions and exception handling, and \textsc{ExampleCheck} could effectively find these misuses.\n\nCompared with existing works, 
which are designed to solve a particular problem, \textsc{NLI2Code} is designed as an abstract framework, which does not specify the approach to mine code patterns.\nThe code abstraction designed in existing tools may rely on properties of their specific problems and cannot easily be generalized to others.\n\n\subsection{Program synthesis from natural language}\nProgram synthesis is the task of automatically finding a program in the underlying programming language that satisfies the user intent expressed in the form of some specification \citep{synthesis-overview}.\nThis problem has been considered the holy grail of computer science since the inception of AI in the 1950s.\nProgram synthesis works differ in the form of specification, including partial data structures \citep{pbp}, test cases \citep{pbe}, natural language \citep{t2api, keyword}, and their combinations \citep{IJCAI15}.\nDespite its ambiguity, the natural language specification is the most flexible one and requires the least effort to compose.\nExisting synthesis tools with natural language input recommend either related APIs \citep{demomatch, ASE17:bot} or compilable snippets \citep{usage}.\n\cite{OOPSLA15} defined a free-form specification that allowed users to write natural language queries and use the names of local variables.\nGiven a specification, they mapped it to a method and expanded the method with a PCFG model trained on large codebases.\n\cite{codehint} developed a dynamic and interactive program synthesis tool \textsc{CodeHint}, which was integrated into the Eclipse IDE.\n\textsc{CodeHint} allows users to execute the recommended code snippets and refine the snippets iteratively.\n\n\subsection{Datasets and Benchmarks}\nTo answer the research questions, we collect data for five Java libraries:\nan HTML extraction library (\texttt{jsoup}), a source code parser (\texttt{eclipse-jdt}), \na library manipulating Microsoft documents (\texttt{apache-poi}), a deep learning toolkit 
(\\texttt{deeplearning4j}) and a graph database platform (\\texttt{neo4j}).\nIn addition to being widely used, these five libraries cover different domains of programming, from front-end HTML parsing to back-end database manipulation.\n\nTo construct NLI for a given library, our tool requires (1) related threads from Stack Overflow and (2) client code reusing the library APIs.\n\nStack Overflow provides a tag for each of the five libraries (\\textit{e.g.}, the tag \\textit{``jsoup''} for the \\texttt{jsoup} library).\nFor each library, we crawl all the threads containing the tag \\textit{``java''} and the library-specific tag.\nSince our functional feature extractor processes a single sentence at a time, we extract the textual contents of the threads and split the text into sentences using the Stanford NLP toolkit.\nThe sentences form our first dataset \\textit{$SO_{large}$}.\nTable \\ref{so-dataset} lists the number of the threads and the split sentences in \\textit{$SO_{large}$}.\nFurthermore, we extract a smaller dataset \\textit{$SO_{small}$} by randomly sampling 100 sentences for each of the five libraries.\nDuring the sampling, we remove sentences that are shorter than 15 characters, since such sentences are usually mistakenly split and seldom contain functional features.\nAs a result, the dataset \\textit{$SO_{small}$} contains 500 sentences.\nBased on our theoretical definition of the extraction process, the first author manually labels the functional features for each sentence in \\textit{$SO_{small}$}.\n\nFor client code, we build the dataset by downloading all the client repositories using the GitHub API\\footnote{https:\/\/api.github.com\/search\/repositories}.\nGiven a library, the query we use is restricted as follows: the body is the name of the library (\\textit{e.g.}, jsoup), the programming language is specified as Java, and each repository should have at least five stars.\nTable \\ref{client-dataset} lists the number of the client repositories we 
download and the number of the source files from the repositories.\n\nTo evaluate the generated NLI, a list of library functionalities and their implementations is required as benchmarks.\nWe turn to the official tutorial for each of the five libraries.\nThe names of the tutorials vary between libraries (\\textit{e.g.}, cookbook, developers' guide), and we organize each tutorial as a list of functionalities.\nEach functionality is a pair consisting of a concise description and a code example.\nWe filter out the functionalities with overly long code examples (\\textit{i.e.}, more than 20 lines of code after removing the comments) because, instead of discussing a specific feature, such long examples are more likely to describe a topic or a complete procedure to reuse the library.\nAfter the filtering, we treat all the remaining official functionalities as benchmarks in our evaluation.\nFor each library, Table \\ref{benchmarks} lists the number of the functionalities and the average lines of a code example (LoC) in the benchmarks.\n\n\\begin{table}\n    \\centering \\caption{Overview of the Stack Overflow dataset}\n    \\vspace{0.2cm}\n    \\label{so-dataset}\n    \\begin{tabular}{l r r}\n        \\hline\n        \\textbf{\\footnotesize Library} & \\textbf{\\footnotesize \\# Threads} & \\textbf{\\footnotesize \\# Sentences}\\\\\n        \\hline\n        jsoup & 649 & 2,780\\\\\n        apache-poi & 2,496 & 8,046\\\\\n        neo4j & 1,600 & 8,144\\\\\n        deeplearning4j & 290 & 1,310\\\\\n        eclipse-jdt & 805 & 3,461\\\\\n        \\hline\n        \\bfseries all & \\bfseries 5,840 & \\bfseries 23,741\\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\begin{table}\n    \\centering\n    \\caption{Overview of the client code dataset}\n    \\vspace{0.2cm}\n    \\label{client-dataset}\n    \\begin{tabular}{l r r}\n        \\hline\n        \\textbf{\\footnotesize Library} & \\textbf{\\footnotesize \\# Repositories} & \\textbf{\\footnotesize \\# Source Files}\\\\\n       
 \\hline\n        jsoup & 119 & 5,077\\\\\n        apache-poi & 239 & 21,601\\\\\n        neo4j & 291 & 37,428\\\\\n        deeplearning4j & 48 & 7,470\\\\\n        eclipse-jdt & 26 & 34,254\\\\\n        \\hline\n        \\bfseries all & \\bfseries 723 & \\bfseries 105,830\\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\begin{table}[!htb]\n    \\centering\n    \\caption{Benchmarks from the official tutorials}\n    \\vspace{0.2cm}\n    \\label{benchmarks}\n    \\begin{tabular}{l r r}\n        \\hline\n        \\textbf{\\footnotesize Library} & \\textbf{\\footnotesize \\# Functionalities} & \\textbf{\\footnotesize Average LoC}\\\\\n        \\hline\n        jsoup & 13 & 4.1\\\\\n        apache-poi & 46 & 12.3\\\\\n        neo4j & 9 & 2.9 \\\\\n        deeplearning4j & 21 & 10.9\\\\\n        eclipse-jdt & 6 & 18.0\\\\\n        \\hline\n        \\bfseries all & \\bfseries 95 & \\bfseries 10.3\\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\\subsection{RQ3: Code Pattern}\nIn this section, we focus on evaluating the performance of our code pattern mining algorithm.\nWe start the evaluation with some examples and then compare our miner with two existing pattern mining tools.\n\n\\subsubsection{Examples}\n\nFigure \\ref{fig-patternexp} shows five code patterns that \\textsc{NLI4j} mines.\nA symbol with $\\$$ denotes a missing part in the code pattern.\nTo be more specific, a $<\\$HOLE>$ represents a missing variable and a $<\\$BODY>$ represents a missing code block.\nThese code patterns are immediately useful for learning API usage.\n\n\\begin{figure}[t]\n\\begin{lstlisting}[language=Java]\n\/\/ parse text from html\nDocument document_1 = Jsoup.parse(<$HOLE1>);\ndocument_1.select(<$HOLE2>).first().text();\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Java]\n\/\/ create an embedded database\nGraphDatabaseFactory factory_1 = \n    new GraphDatabaseFactory();\nGraphDatabaseService service_1 = \n    
factory_1.newEmbeddedDatabase(<$HOLE1>);\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Java]\n\/\/ configure a network\nMultiLayerConfiguration configuration_1 =\n    new NeuralNetConfiguration.Builder()\n    .seed(<$HOLE1>).iterations(<$HOLE2>)\n    .list().layer(<$HOLE3>).build();\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Java]\n\/\/ merge cells\nCellRangeAddress address_1 = new\n    CellRangeAddress(\n    <$HOLE1>, <$HOLE2>,\n    <$HOLE3>, <$HOLE4>\n);\n<$HOLE5>.addMergedRegion(address_1);\n\\end{lstlisting}\n\n\\begin{lstlisting}[language=Java]\n\/\/ save workbook\nWorkbook wb_1 = new HSSFWorkbook();\ntry {\n    wb_1.write(<$HOLE1>);\n} catch (IOException e) { <$BODY> }\n\\end{lstlisting}\n\n\\caption{Example code patterns for functional features\\label{fig-patternexp}}\n\\end{figure}\n\n\nAs Figure \\ref{fig-patternexp} shows, a code pattern usually describes a frequent combination of API elements.\nThe fourth code pattern (\\textit{i.e., ``merge cells''}) denotes that a cell region is managed by the class $CellRangeAddress$, whose instances are usually passed to the method $addMergedRegion$.\nTo instantiate a $CellRangeAddress$ object, four parameters are required to specify the top-left and the bottom-right corners of the region.\nSome patterns contain control flow statements besides API invocations, such as the last example \\textit{``save workbook''} in Figure \\ref{fig-patternexp}.\nThe code pattern not only summarizes the correct APIs to invoke, but also hints that the method $write$ needs to handle an exception.\nThe specific way to handle the exception is left to the user in a $<\\$BODY>$ block.\n\n\\subsubsection{Methodology}\n\nTo evaluate the performance of our pattern mining algorithm, we use code examples from the official tutorials as the benchmark.\nFrom the total 95 functionalities in the benchmarks, we first remove the 13 functionalities that are not covered by our functional features (\\textit{i.e.} the functionalities with zero 
points in Table \\ref{tab-task2}).\nThen we remove another 13 functionalities because our algorithm failed to match a correct API from the corresponding functional feature.\nTable \\ref{pattern-dataset} shows the 69 remaining functionalities.\n\n\\begin{table}[htb]\n    \\centering\n    \\caption{The number of functionalities to mine code patterns}\n    \\vspace{0.2cm}\n    \\label{pattern-dataset}\n    \\begin{tabular}{l r r}\n        \\hline\n        \\multirow{2}*{\\textbf{\\footnotesize Library}} & \\textbf{\\footnotesize \\# Covered} & \\textbf{\\footnotesize \\# Functions to}\\\\ \n        & \\textbf{\\footnotesize Functions} & \\textbf{\\footnotesize Mine Patterns} \\\\\n        \\hline\n        jsoup & 13 & 12\\\\\n        apache-poi & 42 & 36\\\\\n        neo4j & 8 & 7\\\\\n        deeplearning4j & 13 & 10\\\\\n        eclipse-jdt & 6 & 4\\\\\n        \\hline\n        \\bfseries all & \\bfseries 82 & \\bfseries 69\\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\nAs there is no universal metric to measure the quality of code patterns, we approximate the quality by calculating the \\emph{Jaccard distance}.\nTo be more specific, we build a set of the APIs invoked in the code pattern and another set of the APIs invoked in the official example.\nThe Jaccard distance measures the difference between two sets as follows:\n\\begin{equation}\ndis(X,Y)=1-\\frac{|X\\cap Y|}{|X\\cup Y|}\n\\end{equation}\n\nWe compare our pattern mining algorithm with two existing tools, \\textit{i.e.}, \\textsc{anyCode} and \\textsc{ExampleCheck}.\n\\textsc{anyCode} expands an API element into a Java expression with a pre-trained PCFG (Probabilistic Context Free Grammar) model.\nThe second tool \\textsc{ExampleCheck} is designed to check API misuse on Stack Overflow.\nThe rationale behind it is to compare API examples from Stack Overflow with API usage patterns mined from GitHub.\nWe choose the two tools because their abstractions for source code are 
representative. \n\\textsc{anyCode} abstracts code into a tree-based structure (a PCFG), and \\textsc{ExampleCheck} abstracts code into a sequence structure (the SCS, \\textit{i.e.}, the structured call sequence).\nTo make the comparison meaningful, we configure the settings for all three tools as follows:\n\\begin{itemize}\n    \\item \\textit{The same codebase.}\n    All three tools are provided with the same codebase, which is all the usage examples for a given API. On average, the codebase for each API contains 217 source code files.\n    \\item \\textit{The same threshold.}\n    We set the same threshold (5\\%) for the minimum frequency of a pattern to be mined from the codebase.\n\\end{itemize}\n\n\\subsubsection{Results}\nTable \\ref{pattern-result} shows the results of the experiments.\nGiven a tool and a library, we list the average Jaccard distance between the mined code patterns and the code examples from the benchmarks.\nOverall, \\textsc{NLI4j} achieves the minimum average Jaccard distance (0.29), which shows that the code patterns mined by our tool are more similar to the official code examples.\nIn the experiment, we found that \\textsc{anyCode} can only synthesize quite short patterns.\nSome code examples in our benchmark contain more than ten API invocations; as a result, the performance of \\textsc{anyCode} on these cases is not as good as that of the other two tools.\n\\textsc{ExampleCheck} and \\textsc{NLI4j} can generate more complete and complex code patterns.\nHowever, as we explained before, in such cases, the sequence structure used by \\textsc{ExampleCheck} is too strict for pattern mining.\nFor example, for the color setting task in Apache POI, we found that two APIs (\\textit{i.e.}, \\textit{``setFillForegroundColor''} and \\textit{``setFillPattern''}) could be swapped.\nHowever, swapping the two APIs results in two different subsequences for \\textsc{ExampleCheck}, so it fails to produce the complete pattern.\n\nBesides, we observed fluctuations among different 
libraries.\nFor \\texttt{deeplearning4j} and \\texttt{eclipse-jdt}, we found that the Jaccard distances of all three tools are significantly larger than those for the other three libraries.\nIn fact, the style of API usage varies among libraries.\nFor example, \\texttt{deeplearning4j} often requires a long method chain to configure the network from all aspects.\nHowever, users of \\texttt{deeplearning4j} may skip some aspects in their client code; as a result, the mined patterns are visibly shorter than the official code examples.\nFor \\texttt{eclipse-jdt}, many functionalities of the library apply the visitor pattern (a design pattern). \nAll the code abstractions of the three tools are designed to analyze code snippets inside a method, which cannot represent the visitor pattern well.\n\n\\begin{table}\n    \\centering\n    \\caption{Comparison of three pattern mining tools}\n    \\vspace{0.2cm}\n    \\label{pattern-result}\n    \\begin{tabular}{l  c  c  c}\n        \\hline\n        \\textbf{\\footnotesize Library} & \\textbf{\\footnotesize anyCode} & \\textbf{\\footnotesize ExampleCheck} & \\textbf{\\footnotesize NLI4j} \\\\\n        \\hline\n        jsoup & 0.33 & 0.23 & 0.15 \\\\\n        apache-poi & 0.42 & 0.27 & 0.21 \\\\\n        neo4j & 0.35 & 0.29 & 0.29 \\\\\n        deeplearning4j & 0.79 & 0.68 & 0.56 \\\\\n        eclipse-jdt & 0.85 & 0.81 & 0.81 \\\\\n        \\hline\n        \\bfseries Average & \\bfseries 0.48 & \\bfseries 0.36 & \\bfseries 0.29 \\\\\n        \\hline\n    \\end{tabular}\n\\end{table}\n\n\\mybox{\\emph{Answer for RQ3:} \n    Given the same codebase, our generated code patterns are more complete and accurate than those of two existing pattern mining tools.\n}\n\n\\subsection{Threats to Validity}\n\\textbf{Internal validity: }\nOur four research questions covered the key steps in constructing NLI (\\textit{i.e.}, functional feature extraction, code pattern mining, and the synthesizer).\nHowever, we could not evaluate all the details 
in the implementation because our framework has a rather long workflow.\nFor example, we did not discuss parameter tuning for our frequent pattern mining algorithm.\nIn our current implementation, we set the frequency threshold to 5\\% to mine code patterns and it works well on our datasets.\nHowever, the best threshold may vary across datasets.\n\nFor the case study, although we considered that the help of \\textsc{NLI4j} varies for different users, the total number of participants is relatively small. We plan to put our tool into the daily development of developers and collect more user data in our future work.\n\n\\textbf{External validity: }\nWe selected five libraries from different domains, which cover a front-end parsing tool, a back-end database, and popular toolkits.\nThe evaluation shows that our tool can mine accurate functional features (accuracy of 86.2\\%) and high-quality code patterns.\nHowever, since our tool is feature-oriented, its performance on libraries with clear features is usually better than on libraries designed as frameworks.\nFurthermore, API invocation is not the only way of library reuse. \nSome libraries heavily rely on other designs or syntax, such as design patterns and annotations (\\textit{e.g.}, the OGM mechanism in \\textit{neo4j}).\nThus, the first external threat is the generalization of our framework to other libraries.\n\nWe carefully chose the datasets in our experiment so the findings could be generalized as much as possible.\nWe selected Stack Overflow to extract functional features because it is one of the most popular platforms to search for programming tasks \\citep{uclsurvey}.\nThere are a lot of discussions about API usage on the site, and many previous studies encourage us to select it as the corpus (\\textit{e.g.}, \\citep{treude16, examplestack}). 
\nNonetheless, not all libraries are active on Stack Overflow.\nAlthough most of our design is not specific to Stack Overflow, the performance may differ when other forms of user discussions are used as input.\nRegarding the codebase for mining code patterns, we downloaded all repositories with at least 5 stars from GitHub, and the number of source code files is more than 105K.\nIn our experiment, we found that a code corpus containing one hundred files is good enough to mine high-quality patterns.\nHowever, we only evaluated Java APIs, and the results may not be representative of all languages and libraries.\n\n\\section{Mining Code Patterns}\n\\label{pattern}\nAfter getting the list of functional features, we map the features to their implementations.\nAlthough Stack Overflow often provides direct code snippets along with the descriptions, such examples are usually incomplete (\\textit{i.e.}, they only mention the key APIs instead of the complete solution) and may have quality problems such as an incorrect order of API calls \\citep{api-misuse}.\nTo augment these code examples, our main idea is to uncover what has been done in similar programs.\nTo be specific, we first map each functional feature to a related API and construct a code corpus containing usage examples of the API.\nThen, we abstract each code example in the corpus into a data flow graph and apply an existing frequent subgraph mining algorithm to mine the patterns.\nFinally, we transform the mined patterns (\\textit{i.e.}, frequent subgraphs) back to text-form code.\n\n\\subsection{Code Corpus Construction}\nWe first match each functional feature with a related API and then construct a code corpus by searching for usage examples of the API.\nThe rationale behind this design decision is that although code snippets on Stack Overflow suffer from quality problems, they often mention the correct API to use.\nSuch APIs can serve as a starting point to find the complete solution.\n\nGiven a functional feature, we view 
all the code elements mentioned in the same Stack Overflow thread (\\textit{i.e.}, contents inside the $$ tag) as candidate APIs to match.\nThe metric we use to select the related API is based on lexical similarity.\nFirst, we split the API names according to the camel-case rule and stem the split tokens.\nFor each API, we calculate the number of overlapping tokens between its name and the functional feature (\\textit{e.g.}, the number is 2 for the API \\textit{``setFillForegroundColor''} and the feature \\textit{``set cell color''} since the overlapping tokens are \\textit{``set''} and \\textit{``color''}).\nThe API with the most overlapping tokens is selected as the match, and we break ties by counting the number of occurrences of each candidate API in the thread.\n\nAfter selecting a related API, we further extract usage examples of the API from client repositories downloaded from GitHub in advance.\nIf a source code file from the repositories contains the desired API, we add it to the corpus.\n\n\\subsection{Code Abstraction}\n\n\\begin{figure}[htb]\n\\lstset{language=Java}\n\\begin{lstlisting}\n\/\/ snippet 1\nstyle.setFillForegroundColor(short);\nstyle.setFillPattern(SOLID_FOREGROUND);\n\/\/ snippet 2\nstyle.setFillPattern(SOLID_FOREGROUND);\nstyle.setFillForegroundColor(short);\n\\end{lstlisting}\n\\caption{Example snippets where the sequence model fails}\n\\label{poi-order}\n\\end{figure}\n    \n\\begin{figure}[htb]\n    \\centering\n    \\includegraphics[width=0.45\\textwidth]{pic\/dataflow-graph.png}\n    \\caption{An example data flow graph with type annotations}\n    \\label{fig-dataflow}\n\\end{figure}\n\n\\begin{figure*}[htb]\n    \\centering\n    \\includegraphics[width=0.9\\textwidth]{pic\/bnf.png}\n    \\caption{Grammar of intermediate representation for Java code}\n    \\label{fig-bnf}\n\\end{figure*}\n\nSource code can be viewed as plain text; however, such a simple representation is sensitive to trivial differences (\\textit{e.g.}, variable 
names, indentations) and affects the performance of pattern mining.\nThus, before applying the frequent pattern mining algorithm on the constructed code corpus, we need to abstract the code into a certain data structure.\nCommon abstractions include the AST (abstract syntax tree) and the method call sequence.\nAs the natural representation of source code, the AST is sensitive to coding-style differences (\\textit{e.g.}, using the different keywords \\textit{``for''} and \\textit{``while''} to implement loops).\nRecently, \\citep{swim, api-misuse} applied the \\textit{structured call sequence} as the code abstraction to mine API-centric code patterns.\nThe sequence model allows users to define the parts of code of interest, such as API invocations and guard conditions.\nHowever, sometimes changing the order of certain API calls does not affect the program behavior.\nFor example, the two snippets in Figure \\ref{poi-order} behave in the same way.\nIn this case, the sequence model is too sensitive to capture the complete pattern.\n\nCompared to the AST and the method call sequence, the graph model is more expressive in describing interactions between variables \\citep{nguyen2009graph}.\nIn this paper, we augment the data flow graphs by annotating the data nodes with API types.\nVertices in a data flow graph can be divided into data nodes and operation nodes.\nTo better fit the library reuse problem, we annotate each data node with the corresponding API type name.\nFigure \\ref{fig-dataflow} displays the same data flow graph generated for the two snippets from Figure \\ref{poi-order}.\nThe annotations \\textit{``CellStyle''} and \\textit{``FillPatternType''} are API types from the library \\texttt{apache-poi}.\nAlso, the different order of method invocations does not affect their abstraction because they share the same data flow.\n\nWe follow the common workflow to generate data flow graphs from source code.\nFirst, we generate a self-designed intermediate representation (IR) from Java 
code.\nThe IR is independent of the source language and is designed to be conducive to further processing.\nSecond, we generate control flow graphs from the IR and refine them into static single assignment (SSA) form.\nThird, the control flow graphs are transformed into data flow graphs.\nThe last two steps implement existing algorithms \\citep{DBLP:conf\/cc\/BraunBHLMZ13} and will not be discussed here.\nThe rest of this subsection will discuss our self-designed IR, which is shown in Figure \\ref{fig-bnf}.\n\n\\begin{figure}[htb]\n\\lstset{language=Java}\n\\begin{lstlisting}\n\/\/ for-each style iteration\nfor (String s: lst) {\n    cnt++; foo(cnt, s);\n}\n\/\/ iterator style iteration\nIterator iter = lst.iterator();\nwhile (iter.hasNext()) {\n    cnt += 1; foo(cnt, iter.next());\n}\n\\end{lstlisting}\n\\caption{Two example snippets of different coding styles}\n\\label{fig-codingstyle}\n\\end{figure}\n\nThere are two reasons to design our own intermediate representation.\nFirst, most existing tools that generate Java IR behave poorly on incomplete code snippets.\nFor example, the well-known tool \\textsc{Soot}\\footnote{https:\/\/github.com\/Sable\/soot} requires all dependencies of the current file to generate the corresponding intermediate code.\nIn contrast, our tool only requires that the input snippet can be taken as a compilation unit, which can be a method without the wrapper class, or even just a block containing several method invocations.\nSecond, the syntax of Java is complex: there are multiple ways to write code sharing the same behavior.\nAs Figure \\ref{fig-codingstyle} shows, to increment a variable, one can write \\emph{cnt++} or \\emph{cnt += 1}.\nTo iterate over a list of strings, a for-each loop and an iterator are both correct.\nSuch details are not normalized in existing tools, while our IR can eliminate some common coding-style differences.\nAs a result, the two snippets will result in the same 
representation in our intermediate code.\nTo be specific, both increment operations are represented by $\\langle PstOp\\rangle$`++' defined in Figure \\ref{fig-bnf}.\n\nAfter generating data flow graphs for the corpus, we apply the gSpan algorithm again to mine frequent subgraphs as code patterns.\n\n\\subsection{Skeleton Code}\nAs the last step of pattern mining, we recover the graph-form code patterns into skeleton code.\n\\begin{mydef}\n    Skeleton code is an incomplete syntax tree, which is obtained by removing trees rooted at $v_1, v_2, ..., v_n$ from a complete syntax tree.\n    Each $v_i$ is a node from the complete syntax tree, and we call such nodes holes in the skeleton code.\n\\end{mydef}\nFor example, Figure \\ref{fig-skeletontree} is the skeleton code recovered from the graph in Figure \\ref{fig-dataflow}.\nNodes wrapped in the dotted line are holes in the syntax tree.\nFigure \\ref{fig-skeleton} shows the text form of the skeleton code.\n\n\\begin{figure}[htb]\n    \\centering\n    \\includegraphics[width=0.45\\textwidth]{pic\/skeleton-tree.png}\n    \\caption{Example skeleton code}\n    \\label{fig-skeletontree}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\lstset{language=Java}\n\\begin{lstlisting}\n.setFillForegroundColor();\nFillPatternType fillPatternType1 = \n    FillPatternType.SOLID_FOREGROUND;\n.setFillPattern(fillPatternType1);\n\\end{lstlisting}\n\\caption{Text-form of the example skeleton code}\n\\label{fig-skeleton}\n\\end{figure}\n\nDuring the generation of data flow graphs, we record the corresponding nodes from the syntax tree.\nTo construct the skeleton code from a data flow graph, we first list all the tree nodes included in the graph.\nThen we randomly select a syntax tree of the original source code and search for the lowest common ancestor (LCA) of the nodes in the tree.\nAfter the search, we recover a complete syntax tree containing all the nodes from the graph.\nNaturally, the missing parts (\\textit{i.e.}, nodes not covered by the 
graph) in the recovered syntax tree become holes in the skeleton code.","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nAnalogue systems are used to test theories of predicted phenomena that are hard to observe directly \\cite{georgescu2014quantum_s}. Hawking \\cite{hawking1974black, hawking1975particle} predicted that black-holes thermally radiate from the quantum vacuum. Measuring this radiation is unlikely, since its temperature is inversely related to the black-hole mass. Stellar and heavier black-holes' radiation is much weaker than the cosmic microwave background (CMB) fluctuations \\cite{CMB_pedestrians,HawkingTemp}. Hypothetical microscopic primordial black-holes might emit significant Hawking radiation \\cite{halzen1991gamma}, but it is highly model-dependent, with rates seriously bounded by the lack of its observation \\cite{alexandreas1993new, fichtel1994search, linton2006new}. \n\nVolovik and Unruh \\cite{volovik2003universe, unruh1981experimental} have shown that a transonic fluid flow is analogous to the space-time geometry surrounding a black-hole, and should emit sound waves analogous to Hawking radiation. \n\nWave kinematics control the Hawking process, which appears in the presence of an effective horizon \\cite{visser1998acoustic, visser2003essential}. Black-hole dynamics and the Einstein equations of gravity are not essential requirements. The analogue of the event horizon is the surface separating the subsonic and supersonic flows. The surface gravity determines the strength of Hawking radiation and is analogous to the flow acceleration at the horizon. The horizon should also last long enough for the modes of Hawking radiation to have well-defined frequencies \\cite{visser2003essential}. 
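For orientation, the scale of the effect can be made explicit with the standard expression for the Hawking temperature (a well-known result, quoted here for reference rather than derived):

```latex
% Hawking temperature for a Schwarzschild black hole of mass M,
% with surface gravity \kappa = c^4/(4GM):
T_H = \frac{\hbar \kappa}{2\pi c \, k_B}
    = \frac{\hbar c^3}{8\pi G M k_B}
    \approx 6\times 10^{-8}\,\mathrm{K}\;\frac{M_\odot}{M}.
```

For any stellar-mass or heavier black hole this lies far below the 2.7 K of the CMB, which is why direct observation is considered unlikely.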
\n\nMany analogues have been proposed \\cite{barcelo2011analogue, barcelo2019analogue} with realisations using water waves \\cite{rousseaux2008observation, jannes2011experimental, weinfurtner2011measurement, euve2015wave, euve2016observation, torres2017rotational}, Bose-Einstein condensates (BEC) \\cite{lahav2010realization, steinhauer2014observation, steinhauer2016observation, de2019observation}, and optics \\cite{philbin2008fiber, belgiorno2010hawking, rubino2012negative,elazar2012all,nguyen2015acoustic, bekenstein2015optical, bekenstein2017control, drori2019observation}. The analogues gave new perspectives on both gravity and the analogue systems \\cite{barcelo2011analogue}, extending Hawking's theory to wave dynamics in moving media. The universality of the Hawking effect shows that it persists in real-world scenarios, where complicated dynamics replace simplified and possibly fine-tuned theories. Hawking's original derivation seems questionable, since the radiation originates from infinitely high frequencies, neglecting unknown high energy physics (known as the 'trans-Planckian problem'). The microscopic physics of the analogue systems is well known, allowing tests of the role of high frequencies in Hawking's predictions \\cite{jacobson1991black,unruh1995sonic}.\n\nThis paper discusses the optical analogues that use a refractive index perturbation to establish an artificial black-hole horizon. Extensive reviews of analogue gravity exist \\cite{barcelo2011analogue, barcelo2019analogue}, but they pay little attention to optical analogues, and substantial progress was made after their publication. Additional useful resources include Refs. \\cite{daniele2018hawking,faccio2013analogue, leonhardt2010essential,unruh2007quantum}. This paper aims to present the main experiments that transformed the optical analogues from simple ideas to established reality. It aims to do so in light of the recent developments, but in terms of hands-on physics. 
We believe that experiments have reached the state where their data may lead the way to new questions and discoveries in the physics of Hawking radiation. \n\n\nSection 2 briefly describes early proposals for optical analogues using slow light and why they could not materialise. Transformation optics is also mentioned, where stationary dielectrics are viewed as changing the spatial parts of the metric.\nSection 3 explains the standard theory of the optical analogues discussed in this paper and the first demonstration of optical horizons. Further measurements of the frequency shift at group velocity horizons are briefly discussed, and additional related work is mentioned. \nSection 4 focuses on attempts in bulk optics rather than optical fibres, stressing the roles of group-velocity and phase-velocity \\cite{milonni2004fast} horizons. \nSection 5 discusses the first measurements of negative frequencies in optics, a phenomenon closely related to the Hawking effect. Its theory is also presented and briefly explained.\nSection 6 considers additional interpretations of the optical horizon. Theory and experiments are shown to directly relate the effect to cascaded four-wave mixing, but analysis in the time domain seems to be unavoidable. The use of temporal analogues of reflection and refraction, and numerical solutions are also mentioned. \nSection 7 discusses the first demonstration of stimulated Hawking radiation in an optical analogue, and lessons learnt from it. \nSection 8 concludes the paper with a brief outlook on the future.\n\n\\section{Early attempts}\n\nFresnel's drag \\cite{Fresnel} was an ether-based theory of light propagation in transparent media, 'confirmed' by Fizeau \\cite{fizeau}. The drag effect is just relativistic velocity addition \\cite{rindler2006relativity}, but Fresnel's wrong theory was based on a correct intuition of velocity addition. 
The ether was replaced by the space-time geometry and the quantum vacuum, which continued to have an intimate connection to moving media and produce puzzling phenomena \\cite{hawking1974black,hawking1975particle, fulling1973nonuniqueness,davies1975scalar,unruh1976notes}. Analogue gravity took this connection a step further, but despite theoretical progress in the 1990s, no practical realization of analogue black-holes was suggested at that time \\cite{barcelo2011analogue}.\n\nTechnology drove ideas in the right directions (optics and Bose-Einstein condensates) around the year 2000 \\cite{leonhardt1999optics,leonhardt2000slow_PRL, leonhardt2002slow_nature, garay2000sonic,garay2001sonic}. In optics, Leonhardt and Piwnicki \\cite{leonhardt1999optics,leonhardt2000slow_PRL, leonhardt2002slow_nature} suggested slowing down light such that its medium could be moved at super-luminal velocities and form a horizon. These ideas used the technology of 'slow light' \\cite{hau1999Slow_light_1,kash1999ultraslow}, where incredibly low group velocities are produced using electromagnetically induced transparency (EIT, see Fig.~\\ref{fig3}a). However, the phase velocity of 'slow light' is fast, preventing the crucial formation of a phase velocity horizon (where the medium moves at the phase velocity of light; see below) \\cite{unruh2003slow}. Other problems with realising these ideas are the narrow bandwidth of light that can be slowed \\cite{unruh2003slow}, and the severe absorption around it \\cite{milonni2004fast}. The inevitable conclusion was to move the medium at relativistic velocities.\n\nDespite missing key concepts of Hawking radiation, these ideas have pushed analogue gravity forward and beyond the scope of relativity (or relativists). They showed that an analogue black-hole metric can be made in optics. Similarly, stationary dielectrics mimic spatial geometries. 
This interpretation inspired transformation optics, and the development of meta-materials technology extended the range of practical geometries \\cite{leonhardt2006optical, pendry2006controlling, leonhardt2006general, chen2010transformation, leonhardt2010geometry, xu2015conformal}. \n\n\\section{Changing a reference frame}\n\n\\begin{figure}[t!]\n\t\\centering\\includegraphics[width=0.9\\textwidth]{Fig1.pdf}\n\t\n\t\\caption{Main optical analogue concepts. \\textbf{(a)} A black-hole emits Hawking radiation from its event horizon: positive norm waves escape outside, while negative norm waves drift towards its singularity. The surface gravity $\\kappa$, represented by a falling weight, determines the radiation's strength. \\textbf{(b)} The medium seems to flow in the pulse co-moving frame with its group velocity $u$. A pair of black-hole and white-hole horizons is formed because the nonlinear refractive index is proportional to the pulse's local intensity. Positive frequency probe light is shifted, and negative frequency waves are formed at both horizons according to the dispersion relation (see Fig.~\\ref{fig2}). The magnitude of the Hawking effect is determined by the pulse steepness, which is analogous to the surface gravity at the horizon (see text).}\n\t\\label{fig1}\n\\end{figure}\n\n\nThe groups of Leonhardt and K{\\\"o}nig \\cite{philbin2008fiber} started a new approach that follows a simple idea \\cite{Leonhardt2005Invent}: light itself travels at the speed of light. A light pulse (also denoted pump) travels in a dielectric medium with group velocity $u$. In the reference frame co-moving with the pulse ('the co-moving frame'), the medium seems to flow in the opposite direction with velocity magnitude $\\left|u\\right|$ (Fig.~\\ref{fig1}b). Probe light of a different frequency differs in velocity due to dispersion. 
The nonlinear Kerr effect \\cite{boyd2003nonlinear} slows down the probe upon interaction with the pulse\\footnote{This has a similar effect to the flow acceleration in the fluid analogues, changing the probe velocity relative to the flow.} -- the refractive index changes as $n=n_0+n_2 I$, where $n_0$ is the linear refractive index, $n_2$ is a parameter typically of order $10^{-16}\\,\\text{cm}^2\/\\text{W}$ \\cite{boyd2003nonlinear}, and $I$ is the light intensity. A group velocity horizon forms where the probe group velocity obeys $v_g=\\left|u\\right|$, blocking the probe from further entering the pump. A black-hole horizon is formed at the leading end of the pump, where the flow is directed into the pulse. Its time reversal, a white-hole horizon with outward flow, is formed at the pump's trailing end \\cite{philbin2008fiber, belgiorno2011dielectric}.\n\nIf the pump is slowly varying \\cite{visser2003essential}, the probe co-moving (and Doppler shifted) frequency, \n\\begin{equation}\\label{dopplerEq}\n\\omega'=\\gamma \\left(1-n \\frac{u}{c}\\right) \\omega,\n\\end{equation}\nis conserved. Here $\\gamma=\\left(1-u^2\/c^2\\right)^{-1\/2}$ is the Lorentz factor, $c$ is the vacuum speed of light, $n$ is the refractive index, and $\\omega$ is the probe frequency in the laboratory frame. Photon pairs of Hawking radiation are produced at each horizon: one with positive $\\omega'$, and its negative partner with $-\\omega'$ \\cite{leonhardt2010essential, leonhardt2012analytical}. For positive $\\omega$, $\\omega'$ is positive only if the probe phase velocity, $v_\\phi$, is greater than $|u|$. A phase velocity horizon forms where $v_\\phi=\\left|u\\right|$ so $\\omega'=0$. The Hawking radiation outgoing from the horizon mixes incoming radiation of positive and negative frequencies \\cite{leonhardt2010essential}. 
Quantum mechanically, it is described by a Bogoliubov transformation \n\\begin{equation}\\label{BogoliubovEq}\n\\hat{b}_\\pm=\\alpha \\hat{a}_\\pm + \\beta \\hat{a}^\\dagger_\\mp \\quad;\\quad |\\alpha|^2-|\\beta|^2=1,\n\\end{equation}\nwhere the sign of the operators equals the sign of $\\omega'$. Time-dependent annihilation and creation operators for incoming modes, $\\hat{a}_\\pm,\\hat{a}^\\dagger_\\mp$, mix to form annihilation operators for outgoing modes, $\\hat{b}_\\pm$. This extracts metric energy (pump energy) to amplify the radiation, thus spontaneously creating outgoing radiation from incoming vacuum \\cite{leonhardt2010essential, drori2019observation}. The transformation parameters, $\\alpha$ and $\\beta$, give the flux of Hawking radiation. When neglecting dispersion, the radiation's effective temperature is proportional to the analogue of the surface gravity -- the steepness of the pulse (giving the probe velocity gradient in the co-moving frame \\cite{visser2003essential, leonhardt2010essential}, which is related to the refractive index (and pulse intensity) gradient or rate of change \\cite{philbin2008fiber,leonhardt2010essential,belgiorno2011dielectric}).\n\nIn \\cite{philbin2008fiber}, an optical fibre called a photonic crystal fibre (PCF) provided both the desired dispersion (Fig.~\\ref{fig2}) and nonlinearity. Its unique structure guided light in a very small region -- the fibre's core \\cite{russell2003photonic, Agrawal_NLFO}. This increased the light intensity and the fibre's nonlinear response. Its nonlinear parameter was $\\gamma\\left(\\omega_0\\right) = \\omega_0 n_2 \/(c A_\\text{eff}) = 0.1\\, \\text{W}^{-1}\\text{m}^{-1}$ at $\\omega_0$ corresponding to $\\SI{780}{\\nano\\meter}$ wavelength, where $A_\\text{eff}$ is the fibre's effective mode area \\cite{Agrawal_NLFO}. 
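As a quick sanity check on the quoted value, the nonlinear parameter $\\gamma(\\omega_0)=\\omega_0 n_2\/(c A_\\text{eff})$ can be evaluated numerically. The sketch below assumes a typical fused-silica Kerr coefficient and an effective mode area of about $2\\,\\mu\\text{m}^2$ (consistent with the small PCF core); neither number is quoted in \\cite{philbin2008fiber}.

```python
import math

C = 299_792_458.0   # vacuum speed of light, m/s
lam0 = 780e-9       # wavelength quoted in the text, m
n2 = 2.6e-20        # assumed Kerr coefficient of fused silica, m^2/W
A_eff = 2.0e-12     # assumed effective mode area, m^2 (~2 um^2)

omega0 = 2 * math.pi * C / lam0       # carrier angular frequency, rad/s
gamma_nl = omega0 * n2 / (C * A_eff)  # nonlinear parameter, W^-1 m^-1
print(f"gamma = {gamma_nl:.2f} W^-1 m^-1")  # ~0.1, consistent with the quoted value
```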
The fibre's structure was engineered to change its dispersion relation, $n(\\omega)$, to have two points with matching group velocities \\cite{Agrawal_NLFO}: one in a normal dispersion region, where $n(\\omega)$ is increasing; and another in an anomalous dispersion region, where $n(\\omega)$ is decreasing. The anomalous dispersion region included the pump spectrum, generated by a Ti:Sapphire mode-locked laser. Self-phase modulation (SPM) due to the nonlinear refractive index counteracted the anomalous dispersion and formed stable solitons \\cite{Agrawal_NLFO}.\n\n\n\\begin{figure}[t!]\n\t\\centering\\includegraphics[width=0.9\\textwidth]{Fig2.pdf}\n\t\n\t\\caption{Schematic Doppler curve of the fibre optic analogue with phase matching conditions. Frequency $\\omega'$ in the reference frame co-moving with the pump pulse is plotted against the laboratory frequency $\\omega$ (Eq.~\\ref{dopplerEq}). The pump pulse is at a local minimum, where the dispersion is anomalous. The phase matching condition of dispersive waves (DW) corresponds to conservation of $\\omega'_\\text{pump}$ (upper dotted line). Negative frequency resonant radiation (NRR) conserves $-\\omega'_\\text{pump}$ (lower dotted line; see also Fig.~\\ref{fig3}d,\\ref{fig3}f). Probe light (black diamond) is red-shifted (white diamond) at the black-hole horizon, conserving its co-moving frequency $\\omega'_\\text{probe}$ (upper solid line). At the white-hole horizon the probe (now white diamond) is blue-shifted (black diamond), as seen also in Fig.~\\ref{fig3}b. At the local maximum the group velocity matches that of the pump, $u$. Negative Hawking radiation (NHR) conserves $-\\omega'_\\text{probe}$ (lower solid line; see also Fig.~\\ref{fig3}f). The phase horizon (PH), where the phase velocity equals $u$, corresponds to $\\omega'=0$. \n\t}\n\t\\label{fig2}\n\\end{figure}\n\n\n\nThe solitons in \\cite{philbin2008fiber} created horizons. 
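The role of Eq.~(\\ref{dopplerEq}) can be illustrated numerically: for any dispersion $n(\\omega)$, the co-moving frequency $\\omega'$ changes sign exactly where the phase velocity matches the pump group velocity. The linear dispersion below is a toy assumption for illustration only, not the engineered PCF profile.

```python
import numpy as np

C = 299_792_458.0  # vacuum speed of light, m/s

def n_toy(omega):
    """Toy refractive index with normal dispersion (illustrative only)."""
    return 1.45 + 1e-17 * omega

def comoving_frequency(omega, u):
    """Doppler-shifted co-moving frequency:
    omega' = gamma * (1 - n(omega) * u / c) * omega."""
    gamma = 1.0 / np.sqrt(1.0 - (u / C) ** 2)
    return gamma * (1.0 - n_toy(omega) * u / C) * omega

# Choose the pump group velocity so a phase horizon exists in the band:
u = C / n_toy(2.4e15)          # medium flows at |u| in the co-moving frame
omegas = np.linspace(1e15, 4e15, 1000)
w_prime = comoving_frequency(omegas, u)

# The phase-velocity horizon is where omega' crosses zero (v_phi = |u|);
# modes with omega' > 0 are positive-norm, those below are 'negative'.
i = int(np.argmin(np.abs(w_prime)))
print(f"phase horizon near omega = {omegas[i]:.2e} rad/s")
```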
A continuous-wave (CW) probe was added to the fibre, red-detuned from the frequency of matching group velocity (white diamond in Fig.~\\ref{fig2}). The fast probe mainly interacted with the pump trailing end, gradually slowed down, and blue-shifted (frequency up-conversion, see Fig.~\\ref{fig3}b). This pump-probe interaction is known as cross phase modulation (XPM), and induces chirp that depends on the pump pulse shape \\cite{Agrawal_NLFO}. If $\\delta n=n_2 I$ is large enough to form a horizon, the shifting probe becomes slower than the pulse and they separate. In the co-moving frame the probe is reflected (Fig.~\\ref{fig1}b) while conserving its co-moving frequency $\\omega'_\\text{probe}$, but not $\\omega$ (Fig.~\\ref{fig2}). This reflection demonstrated the existence of optical horizons \\cite{philbin2008fiber}. \n\nPhilbin et al. \\cite{philbin2008fiber} used $\\SI{70}{\\femto\\second}$ duration FWHM (full width at half maximum), $\\SI{800}{\\nano\\meter}$ carrier wavelength pulses with peak power of about $\\SI{50}{\\watt}$. The PCF was $\\SI{1.5}{\\meter}$ long with core diameter smaller than $\\SI{2}{\\micro\\meter}$. The CW probe was tunable around $\\SI{1500}{\\nano\\meter}$ wavelength with \\SIrange{100}{600}{\\micro\\watt} power. Pump power was kept low to maintain a stable single soliton and reduce high-order nonlinearities, such as the Raman effect \\cite{Agrawal_NLFO}. It was hoped that self-steepening would realize high Hawking temperatures. \n\nThis set-up achieved a mere $10^{-3}$ percent efficiency for probe shifting. Only $10^{-4}$ of the CW probe power was expected to interact with the pump over the $\\SI{1.5}{\\meter}$ fibre, and another order of magnitude reduction was attributed to tunnelling of the probe through the narrow pump barrier \\cite{philbin2008fiber}. While clearly showing probe blue-shifting at the white-hole horizon, this set-up could not produce detectable negative frequency partners. 
It was unclear whether they should form around the phase velocity horizon, where $\\omega'=0$, or around the supported frequency $-\\omega'_\\text{in}$ (Fig.~\\ref{fig2}).\n\nChoudhary and K{\\\"o}nig \\cite{choudhary2012efficient} reported probe red-shifting and studied its tunnelling. They used similar experimental parameters to \\cite{philbin2008fiber}, with tuneable $\\SI{50}{\\femto\\second}$ pump pulses and a visible CW probe at $\\SI{532}{\\nano\\meter}$ (to avoid the difficulty of synchronisation). Tunnelling was minimal for probe detuning up to twice the soliton bandwidth (relative to the matching frequency), but limited pump-probe interaction kept the conversion efficiency small. Tartara \\cite{tartara2012frequency} derived both pump and probe from the same $\\SI{105}{\\femto\\second}$ source and propagated them in a $\\SI{1.1}{\\meter}$ fibre, demonstrating conversion efficiencies of tens of percent. An optical parametric amplifier produced the tuneable probe.\n\nThe Raman effect causes the pump to decelerate, and was shown to only slightly change the shifted probe spectrum \\cite{robertson2010frequency}. Shifting of dispersive waves \\cite{wang2015optical,bendahmane2015observation} and trapping of the probe light \\cite{nishizawa2002pulse, nishizawa2002characteristics, gorbach2007light ,hill2009evolution, wang2013soliton} were also related to the Raman effect. An optical 'black-hole laser' that uses both the white and black-hole horizons was predicted and analysed \\cite{corley1999black , leonhardt2007black , faccio2012optical, gaona2017theory, bermudez2018resonant}. The horizon dynamics was also related to the formation of optical rogue waves and champion solitons \\cite{solli2007optical, demircan2012rogue , demircan2014rogue, pickartz2016adiabatic}, and the ability to form all-optical transistors \\cite{demircan2011controlling}. 
Similar 'front-induced transitions' were studied in other areas of photonics \\cite{gaafar2019front}, and the quest for measuring analogue Hawking radiation continued.\n\n\\section{Horizonless emissions}\nFaccio's group realized optical phase velocity horizons, without a group velocity horizon, in bulk optics \\cite{belgiorno2010hawking, rubino2011experimental}. They constructed pulses with super-luminal group velocities by using their three-dimensional nature \\cite{faccio2007conical}. A pulsed Bessel beam is one such example. It is composed of infinitely many plane waves on a cone with a constant angle $\\theta$ with the propagation axis. It can be made with an axicon lens \\cite{faccio2007conical}. At the apex of the cone, the plane waves interfere to form a bright spot that moves with a super-luminal velocity that depends on $\\theta$. Belgiorno et al. \\cite{belgiorno2010hawking} used super-luminal pulses that travelled as narrow and powerful filaments inside fused silica. Looking perpendicularly to their propagation direction, they detected radiation around the phase velocity horizon -- where the radiation phase velocity matched the pulses' group velocity (Fig.~\\ref{fig3}c). They could not measure along the propagation direction, because vast radiation was produced by the powerful pulses there (peak intensity was as high as $10^{13}\\text{W}\/\\text{cm}^2$ centred at $\\SI{1055}{\\nano\\meter}$ wavelength). \n\nSpurious radiation was a key issue even for observation at 90 degrees. Special care was taken to tune the radiation into spectral windows of minimal noise. Correctly identifying all the signals in such a set-up is another major challenge. Concerns were raised \\cite{schutzhold2010comment, unruh2012hawking, liberati2012quantum} and in-depth analysis related the radiation to a horizonless super-luminal perturbation, possibly linked to Hawking radiation \\cite{petev2013blackbody, finazzi2014spontaneous}. 
This work stressed the importance of a blocking group velocity horizon for the Hawking effect. Thermal Hawking radiation was predicted for super-luminal pulses in materials with linear dispersion at low energies (like diamond) \\cite{petev2013blackbody, finazzi2014spontaneous}, but is yet to be measured. \n\nLeonhardt and Rosenberg \\cite{leonhardt2019cherenkov} related the emission in \\cite{belgiorno2010hawking} to the physics of Cherenkov radiation\\footnote{Not to be confused with dispersive waves \\cite{akhmediev1995cherenkov}} in a surprising way. The pulse was modelled as a super-luminal light bullet \\cite{silberberg1990collapse}, and found to behave like a moving magnetic dipole that is predicted to emit Cherenkov radiation with a discontinuous spectrum \\cite{frank1942doppler, frank1984vavilov}. This discontinuity is exactly at the phase horizon -- where the effective dipole velocity equals the phase velocity of light and $\\omega'=0$ (Fig.~\\ref{fig2}). Interferences turn the discontinuity into a peak if the dipole is an extended object, like the light bullet.\n\n\\section{The role of negative frequencies}\nNegative frequencies are an integral part of Hawking radiation and can also appear as dispersive waves. Solitons emit dispersive waves (also known as resonant radiation) at a shifted frequency when disturbed by higher order dispersion. Non-dispersive three-dimensional light bullets emit similar radiation \\cite{faccio2007conical}. Both emissions can be viewed as originating from self-scattering of the pulse by its nonlinear refractive index barrier. Conservation of momentum dictates the emitted frequency through a phase matching condition \\cite{dudley2006, Agrawal_NLFO}. In the reference frame co-moving with the stable pulse, this condition becomes conservation of energy, given by the pulse co-moving frequency, $\\omega'_\\text{pulse}$ (Fig.~\\ref{fig2}). The frequency shifting at optical horizons extends this picture to the scattering of probe light. 
The prediction of negative Hawking modes suggested that negative dispersive waves should similarly appear. Rubino et al. \\cite{rubino2012negative} measured this negative-frequency resonant radiation (NRR) after it had long been neglected and disregarded. This required very rapid (non-adiabatic) temporal changes of the pulse envelope.\n\nIntense pulses created NRR through steep shocks in two ways \\cite{rubino2012negative}: $\\SI{7}{\\femto\\second}$ high order solitons \\cite{Agrawal_NLFO} of about $\\SI{300}{\\pico\\joule}$ energy and $\\SI{800}{\\nano\\meter}$ carrier wavelength were placed in $\\SI{5}{\\milli\\meter}$ PCFs; and pulsed Bessel beams of $\\SI{60}{\\femto\\second}$ duration, about $\\SI{20}{\\micro\\joule}$ energy and $\\SI{800}{\\nano\\meter}$ carrier wavelength were sent through a $\\SI{2}{\\centi\\meter}$ long bulk calcium fluoride ($\\text{CaF}_2$) crystal, as seen in Fig.~\\ref{fig3}d. Although NRR was seen both in optical fibres and in bulk, further work was needed to exclude other possible mechanisms for the effect -- contributions due to high order spatial modes, and possible interactions between the complex pulse and additional radiation.\n\nRubino et al. \\cite{rubino2012soliton} later used a relativistic scattering potential to directly explain NRR formation, and Petev et al. \\cite{petev2013blackbody} further related it to Hawking radiation. Conforti et al. \\cite{conforti2013negative} extended the theory to pulses in materials of normal dispersion (where the pulse experiences dispersive broadening) and dominant second order nonlinearity, which effectively creates a nonlinear refractive index. The same conditions for phase matching and non-adiabatic temporal evolution were found. An analytic derivation of the NRR \\cite{conforti2013interaction} directly related it to interactions between fields and their conjugates (or the mixing of positive and negative frequency modes). 
It further strengthened the connection between NRR and Hawking radiation, which originates from a Bogoliubov transformation that mixes creation and annihilation operators \\cite{leonhardt2010essential, leonhardt2012analytical}. McLenaghan and K{\\\"o}nig \\cite{mclenaghan2014few} studied NRR for different PCF lengths and input chirps, and inferred a UV propagation loss of $\\sim 2\\,\\text{dB\/mm}$. We stress that the negative frequency (and negative norm) of the Hawking modes beyond the horizon is key to the Hawking amplification process \\cite{leonhardt2010essential}, extracting energy from the background metric in accordance with a Bogoliubov transformation.\n\n\\section{'It's just optics'}\nHawking radiation is a universal geometric effect that emerges due to the conversion of modes at a horizon, regardless of the microscopic physics that creates the background space-time geometry \\cite{visser2003essential}. The same (generalized) derivation applies to both astrophysical black-holes and analogue systems \\cite{visser2003essential, philbin2008fiber,finazzi2013quantum, jacquet2015quantum, bermudez2016hawking, linder2016derivation, jacquet2018_Thessis,jacquet2019analytical}. This established universality proved insightful for both gravity and optics (arguably solving the trans-Planckian problem and discovering negative frequencies in optics are two examples \\cite{barcelo2011analogue, barcelo2019analogue}). However, some researchers from both the optics and gravity (or quantum field theory on curved space-times) communities are still not comfortable with analogue Hawking radiation, claiming that the effect is 'just optics' \\footnote{The author has personally encountered such allegations from both communities, and believes that they partially originate from not separating 'gravity', which is predicted to create the black-hole event horizon, from 'kinematics', which is responsible for the Hawking effect once a horizon is formed \\cite{visser2003essential}.}. 
Such claims do not undermine (analogue) Hawking radiation, but complete our understanding of it. It is also possible to directly explain the effect using the underlying microscopic physics -- unknown quantum gravity for real black-holes, and nonlinear optics for the optical analogues.\n\nThe classical dynamics at optical horizons can be captured using numerical solutions that take into account the entire electric field (including negative frequencies), the dispersion relation and all nonlinearities \\cite{Agrawal_NLFO, amiranashvili2016hamiltonian, bermudez2016propagation, Raul2020inpress}. These are extensions of, and alternatives to, solving the usual generalised nonlinear Schr{\\\"o}dinger equation (GNLSE), which is used extensively in standard descriptions of ultrafast nonlinear fibre optics \\cite{Agrawal_NLFO, dudley2006, skryabin2010colloquium}. \n\nAnother approach directly relates the phenomena to discrete photonic interactions. It uses cascaded four-wave mixing of discrete spectra, and takes the continuum limit for comparison \\cite{webb2014nonlinear, erkintalo2012cascaded}. A CW probe and a pair of beating quasi-CWs (of $\\SI{1}{\\nano\\second}$ duration) positioned symmetrically around the pump central frequency were mixed continuously (Fig.~\\ref{fig3}e). At each step of the cascaded process, the probe shifted by the pair's detuning, and all three generated an equidistant frequency comb. A resonant amplification was produced at the frequency corresponding to conservation of $\\omega'_\\text{probe}$. Similar amplification appeared at the dispersive wave resonance, at $\\omega'_\\text{pump}$. The experiments used low cascade orders ($\\text{n}=5-7$ for the probe shift and about $\\text{n}=15$ for the dispersive waves) and were supported by numerical analysis. This picture shows how energy is transferred from the pump to the shifting probe, by effectively absorbing pump photons at one frequency and emitting at another. 
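The numerical approaches mentioned above typically build on split-step Fourier integration. Below is a minimal sketch of such a solver for the basic NLSE only (second-order dispersion plus Kerr nonlinearity, in normalised soliton units); real GNLSE codes add higher-order dispersion, self-steepening and Raman terms.

```python
import numpy as np

def split_step_nlse(A0, dt, dist, dz, beta2, gamma):
    """Minimal symmetric split-step Fourier solver for the basic NLSE,
    dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A.
    A bare-bones sketch, not a full GNLSE code."""
    A = A0.astype(complex)
    w = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)       # angular frequency grid
    half = np.exp(1j * 0.5 * beta2 * w**2 * (dz / 2))  # half dispersion step
    for _ in range(int(round(dist / dz))):
        A = np.fft.ifft(half * np.fft.fft(A))          # D/2
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)    # full nonlinear step
        A = np.fft.ifft(half * np.fft.fft(A))          # D/2
    return A

# Sanity check: a fundamental soliton (N = 1) keeps its shape.
T = np.linspace(-20, 20, 1024)
A0 = 1 / np.cosh(T)                          # sech soliton for beta2 = -1, gamma = 1
A = split_step_nlse(A0, T[1] - T[0], dist=1.0, dz=1e-3, beta2=-1.0, gamma=1.0)
print(np.max(np.abs(np.abs(A) - np.abs(A0))))   # small residual
```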
The cascaded four-wave mixing also details the back reaction mechanism, where the pump undergoes spectral recoil. \n\nThe studies \\cite{webb2014nonlinear, erkintalo2012cascaded} addressed the relevant phase matching conditions, but the efficiency of the cascaded process was found only in \\cite{webb2014efficiency}, for dispersive waves. Remarkably, it showed that the known concepts from horizon physics are crucial ingredients even when the pump is made of beating quasi-CWs that form a frequency comb: the mixing was analysed using (temporal) soliton fission dynamics, being efficient only when the frequency spacing between the CWs effectively generated compressing (non-adiabatic) high-order solitons. The frequency-domain analysis was intractable for high cascade orders. The maximally compressed beat cycle corresponded to about $\\SI{102}{\\femto\\second}$ duration FWHM in the efficient regime, of high cascading orders. Conversion efficiencies of $10^{-4}$ over a $\\SI{100}{\\meter}$ fibre were reported.\n\nThe generation of NRR in media with quadratic nonlinearity was also related to a cascading process \\cite{conforti2013negative}, which should also be the case for cubic materials. However, negative frequencies have not yet been demonstrated using a beating CW pump. It would be interesting to test how the cascaded mixing behaves as it reaches negative frequencies, past the phase velocity horizon. \n\nTemporal analogues of reflection and refraction \\cite{plansinis2015temporal, plansinis2018cross} are also used to explain the frequency shifts at optical horizons. This picture compares the frequency change due to XPM to the wave number change during refraction. The reflection at the horizon is analogous to total internal reflection. The tunnelling through the refractive index barrier is compared to frustrated total internal reflection, where an evanescent wave extends beyond the pulse and tunnels through. 
The optical horizon can be seen as a temporal beam splitter for probe light (acting also as an amplifier when negative frequencies are considered as well). \n\nComparing the mode mixing and squeezing of parametric amplification \\cite{gerryKnight2005QO} to the Hawking effect is also useful \\cite{leonhardt2010essential}. The origin of the two phenomena is completely different, and even in the optical analogue, simple down-conversion obeys different phase matching conditions that conserve the total lab frequency \\cite{boyd2003nonlinear,Agrawal_NLFO} and cannot explain Hawking radiation. However, the two share intuition and the Bogoliubov transformation between incoming and outgoing modes. In both cases a pump creates 'signal' and 'idler' waves: the vacuum spontaneously generates quantum emissions, while stimulating the effect with classical light amplifies it for specific modes.\n\n\\section{Observing stimulated Hawking radiation}\nSpontaneous Hawking radiation originates from the quantum vacuum. Classical laser fields can replace the vacuum and stimulate the effect. Drori et al. \\cite{drori2019observation} observed stimulated negative Hawking radiation in an optical analogue (Fig.~\\ref{fig3}f). The idea of \\cite{philbin2008fiber} was used, but with more suitable pump pulses and a pulsed probe. Analysis of the fibre dispersion (schematically seen in Fig.~\\ref{fig2}) and simulations of the experiment allowed the experimental parameters to be optimised. The pump was a high order few-cycle soliton that compressed and collapsed due to soliton fission \\cite{Agrawal_NLFO}. Its peak power reached a few hundred kilowatts, with $\\SI{800}{\\nano\\meter}$ central wavelength. Very non-adiabatic dynamics generated NRR in the mid UV (Fig.~\\ref{fig3}f), at the negative of the pump co-moving frequency, $-\\omega'_\\text{pump}$. 
This ensured the formation of steep refractive index variations, increasing the analogous surface gravity and reducing the probability of probe tunnelling. The probe was derived from the pump's laser, and was generated through cascaded Raman scattering in a meter-long PCF. Negative prechirp was used to enhance the Raman-induced frequency shift \\cite{Rosenberg2020RIFS}, whose wavelength was continuously tuned up to $\\SI{1650}{\\nano\\meter}$ by varying the input power. Peak powers were around $\\SI{1}{\\kilo\\watt}$. \n\nEfficient and broad-band probe frequency shifts at the horizon accompanied negative Hawking radiation in the mid-UV, conserving the probe co-moving frequency $\\omega'_\\text{probe}$ and its conjugate $-\\omega'_\\text{probe}$, respectively. Since the fibre had strong dispersion, with distinct phase and group velocity horizons, the spectrum was not Planckian \\cite{drori2019observation, leonhardt2012analytical, bermudez2016hawking}. Varying $\\omega'_\\text{probe}$, both signals shifted according to theory, supporting the correct interpretation of the negative Hawking radiation. This also supported the interpretation of the NRR, whose analytic theory \\cite{conforti2013interaction} was generalised to include the Hawking process \\cite{Raul2020inpress}. A linear relation was verified between the probe power and that of the negative Hawking signals, up to a point where the signal saturated. This saturation was related to a back-reaction of the probe on the pulse, which is a prerequisite for Hawking radiation (since its energy is drawn from the pump, or the black-hole that curves the metric). It also slightly reduced the magnitude of the NRR (Fig.~\\ref{fig3}f). \n\nThe rate of spontaneous Hawking emission was estimated based on the magnitude of the stimulated effect \\cite{drori2019observation}. It was found to be too low for measurement in the system used, calling for an improved setup design. 
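The logic of such an estimate follows the Bogoliubov picture of Eq.~(\\ref{BogoliubovEq}): seeding a mode with $N_\\text{in}$ probe photons yields roughly $|\\beta|^2(N_\\text{in}+1)$ negative-frequency photons, so the spontaneous (vacuum) rate is the stimulated one divided by the input photon number. The numbers in the toy sketch below are arbitrary assumptions, not values from \\cite{drori2019observation}.

```python
import math

r = 0.01                           # assumed (small) squeezing parameter of the horizon
alpha, beta = math.cosh(r), math.sinh(r)
assert abs(alpha**2 - beta**2 - 1) < 1e-12   # Bogoliubov normalisation

N_in = 1e12                        # assumed photon number of the stimulating probe
stimulated = beta**2 * (N_in + 1)  # output photons in the seeded negative mode
spontaneous = beta**2              # output photons from vacuum alone, per mode
print(stimulated / spontaneous)    # ~N_in: the quantum signal is vastly weaker
```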
The predicted spontaneous effect is minuscule and is overwhelmed by the noise in the system, which for the negative Hawking radiation is dominated by fluctuations of the overlapping NRR. The multi-mode nature of the PCF in the UV \\cite{Agrawal_NLFO} was found to reduce the observed signal power by up to four orders of magnitude.\n\nThe experiment \\cite{drori2019observation} showed how robust the Hawking effect is: it appeared despite the extreme nonlinear dynamics of the collapsing soliton. Since time in the co-moving frame is related to the propagation distance along the fibre, rapid variations in the pump profile do not violate the required slow evolution of the metric as long as they appear over a length scale much larger than the Hawking radiation wavelength \\cite{visser2003essential}. As such, pump deceleration due to the Raman effect could be accounted for by simply correcting its central frequency. \n\n\n\\begin{figure}[!]\n\t\\centering\\includegraphics[width=0.9\\textwidth]{Fig3.pdf}\n\t\\caption{Central measurements. \\textbf{(a)} Schematic absorption (dotted line) and refractive index (solid line) for generating 'slow light' using electromagnetically induced transparency (EIT), as the inset depicts. The refractive index has a small but rapid variation at the narrow resonance. The slow group velocity is inversely related to this steep change \\cite{hau1999Slow_light_1, kash1999ultraslow}. \\textbf{(b)} Frequency blue-shift of probe light at a fibre-optical white-hole event horizon. \\SI{70}{\\femto\\second} pump solitons were used in a \\SI{1.5}{\\meter} long nonlinear PCF. A small fraction of the CW probe at $\\omega_1$ is converted to $\\omega_2$, conserving its co-moving frequency, $\\omega'_\\text{probe}$. From \\cite{philbin2008fiber}; reprinted with permission from \\href{https:\/\/doi.org\/10.1126\/science.1153625}{\\textcopyright2008 AAAS}. 
\\textbf{(c)} Horizonless emissions from a Bessel filament creating a super-luminal refractive index perturbation in bulk silica \\cite{belgiorno2010hawking, petev2013blackbody}. The radiation was seen around the phase velocity horizon (where $\\omega'=0$) and increased with pump energy (in \\SI{}{\\micro \\joule}). The wavelength shift (black line) and broadening were in agreement with the increased nonlinear refractive index. From \\cite{belgiorno2010hawking}; reprinted with permission from \\href{https:\/\/doi.org\/10.1103\/PhysRevLett.105.203901}{\\textcopyright2010 APS}. \\textbf{(d)} Negative-frequency resonant radiation (NRR) in bulk $\\text{CaF}_2$. A \\SI{16}{\\micro\\joule} Bessel pump beam at \\SI{800}{\\nano\\meter} created dispersive waves (around \\SI{600}{\\nano\\meter}) and NRR (around \\SI{350}{\\nano\\meter}) under highly non-adiabatic temporal evolution. From \\cite{rubino2012negative}; reprinted with permission from \\href{https:\/\/doi.org\/10.1103\/PhysRevLett.108.253901}{\\textcopyright2012 APS}. \\textbf{(e)} Comparison of frequency translation induced by a pump soliton (orange curve) and a pair of quasi-CW fields (purple curve). Probe light is blue-shifted ('Idler') in a similar way in both cases, relating the effect to cascaded four-wave mixing. Similar dispersive waves (DW) are also seen. From \\cite{webb2014nonlinear}; reprinted with permission from \\href{https:\/\/doi.org\/10.1038\/ncomms5969}{\\textcopyright2014 Springer Nature}. \\textbf{(f)} Negative frequency stimulated Hawking radiation. A compressing high-order few-cycle soliton \\cite{drori2019observation} generated horizons and NRR (green dotted-dashed curve). When a pulsed probe at \\SI{1450}{\\nano\\meter} interacted with the black-hole horizon, it red-shifted (not shown) and produced negative Hawking radiation that overlapped the NRR (black solid curve). Subtracting the NRR revealed the UV Hawking signal (blue curve with 2$\\sigma$ error bars). 
From \\cite{drori2019observation}; reprinted with permission from \\href{https:\/\/doi.org\/10.1103\/PhysRevLett.122.010404}{\\textcopyright2019 APS}.\n\t} \n\t\\label{fig3}\n\\end{figure}\n\n\n\\section{Beyond the horizon?}\nAnalogue gravity in optics has come a long way: from erroneous visionary ideas to careful analysis of experimental demonstrations. It drove new research directions and insights in optics. The reality of optical horizons and negative frequencies became clear, and multiple optical interpretations made their physics more tangible. The crucial role of dispersion was stressed, determining the Hawking spectrum (Fig.~\\ref{fig2}). A group velocity horizon blocks the radiation modes and allows their efficient conversion. A phase velocity horizon marks the support of non-trivial negative frequency modes, which allow for the Hawking amplification. The optical and other analogues, especially in water waves and BECs, provide extensive insights for gravity through concrete real-world examples. The universality of the Hawking effect separates it from gravity and high-energy physics, and emphasises the central role of classical fields in the process. This hints at more possibilities in gravity \\cite{barcelo2019analogue} and in optics \\cite{leonhardt2015cosmology}, even before going fully quantum. \n\nThe main challenge for observing the spontaneous (quantum) effect in optics is its low power. Noise from spurious radiation makes matters worse. Using knowledge from the stimulated effect \\cite{drori2019observation} could help overcome these challenges.\n\nThe demonstrated robustness of the Hawking effect calls for new experiments to lead the way. Our increased confidence in the interpretation of the results allows further departure from the traditional scheme of Hawking radiation, developing bolder questions and ideas. Optical technologies make it possible to realise wild ideas and to study them with high precision. 
We call for more cross-fertilisation between research on the different black-hole analogues, which might open new horizons in these fields.\n\n\\enlargethispage{20pt}\n\n\\section{Funding}\nFellowship of the Sustainability and Energy Research Initiative (SAERI) program, the Weizmann Institute of Science; European Research Council; and the Israel Science Foundation.\n\n\\section{Acknowledgements}\nI am grateful for discussions and comments from David Bermudez, Jonathan Drori and Ulf Leonhardt. I acknowledge valuable discussions with the participants of the scientific meeting 'The next generation of analogue gravity experiments' at the Royal Society (London, December 2019), and thank its organisers: Maxime Jacquet, Silke Weinfurtner and Friedrich K{\\\"o}nig.
\\section{Conventions}\n\\label{a:conv}\n\nIn this Appendix we discuss our light-cone conventions and notation. We start by reviewing the Fourier transform, the light-cone basis vectors for the longitudinal Minkowski subspace, and the tensors needed to discuss parton and hadron dynamics in the transverse subspace. We then turn to the conventions for the integration over the suppressed momentum component used to define the TMD fragmentation correlator and the fully inclusive jet correlators, and for the Dirac traces (or projections) needed to define the related TMD functions. For completeness, we also include a short discussion of the parton distribution correlator restricted to the spin-independent case. \n\n\n\\subsection{Fourier transform}\n\nIn order to be consistent with a large share of the literature dealing with TMD parton distribution and fragmentation functions \n(e.g. Refs.~\\cite{Bacchetta:2006tn,Mulders:2016pln,Levelt:1993ac,Mulders:1995dh}), we define the Fourier transform with a $1\/(2\\pi)^4$ factor for space-time four-vector integrations, see {\\it e.g.} the definition of $\\Xi(k;\\omega)$ in Eq.~\\eqref{e:invariant_quark_correlator}. Correspondingly, we do not include such a factor in four-momentum integrations, contrary to, {\\it e.g.}, Refs.~\\cite{Bjorken:1965zz,Roberts:2015lja,Roberts:2007jh,Siringo:2016jrc,Solis:2019fzm}. This, in particular, results in an additional $1\/(2\\pi)^4$ factor in Eqs.~\\eqref{e:Feyn_spec_rep} and~\\eqref{e:SF_mass} with respect to the definitions in Refs.~\\cite{Bjorken:1965zz,Roberts:2015lja,Roberts:2007jh}.\n\n\\subsection{Light-cone coordinates and transverse space}\n\nIn a given reference frame, we collect the space-time components of a four-vector $a^\\mu$ inside round parentheses, $a^\\mu=(a^0,a^1,a^2,a^3)$, with $a^0$ the time coordinate.
\nWe define the light-cone $\\pm$ components of the $a$ vector as\n\\begin{align}\n  a^{\\pm} = \\frac{1}{\\sqrt{2}} (a^0 \\pm a^3) \n\\end{align}\nand collect these inside square brackets: $a^\\mu = [a^-,a^+,\\vect{a}_T]$, with $\\vect{a}_T=(a^1,a^2)$ being the 2-dimensional components in transverse space. \nWe also define the transverse four-vector as $a_T^\\mu = [0,0,\\vect{a}_T]$, such that $a_T^2 = -\\vect{a}_T^2$. \nNamely, the norm of $\\vect{a}_T$ is taken according to the Euclidean metric $\\delta_T^{ij}=\\text{diag}(1,1)$, whereas the norm of $a_T$ is calculated using the Minkowski metric $g^{\\mu\\nu}=\\text{diag}(1,-1,-1,-1)$. \nNote that, in this paper, we consider highly boosted quarks and hadrons with dominant momentum component along the negative 3-axis, namely along the negative light-cone direction. Hence, we grouped the light-cone components inside the square brackets starting with the minus component.\n\nThe light-cone basis vectors are defined as:\n\\begin{equation}\n\\label{e:def_np_nm}\nn_{\\pm} = \\frac{1}{\\sqrt{2}} (1,0,0,\\pm 1) \\ \\ , \n\\end{equation}\nsuch that $n_+^2 = n_-^2 =0$, $n_+^\\mu n_{-\\mu} = 1$, and $a^\\pm = a^\\mu {n_\\mp}_\\mu$. Upon considering a specific process, the basis vectors $n_{\\pm}^\\mu$ can be determined by physical quantities. For example, in inclusive deep-inelastic scattering one can choose the four-momentum of the target and virtual photon to lie in the plus-minus plane; in semi-inclusive processes the tagged hadron's momentum can replace either one, typically the photon's momentum. In semi-inclusive electron-positron annihilation into two hadrons, one typically chooses the four-momenta of both tagged hadrons. In this paper, however, we consider quark propagation and fragmentation independently of any specific process, and will study the Lorentz transformation between the different frames in which the quark hadronization mechanism can be studied (see Appendix~\\ref{a:frame_dep}).
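These conventions lend themselves to a quick numerical sanity check. The following minimal sketch (Python with NumPy; the sample vector $a$ is an arbitrary illustrative choice, not taken from the text) verifies that $n_+^2 = n_-^2 = 0$ and $n_+^\\mu n_{-\\mu} = 1$, that $a^{\\pm} = a^\\mu {n_\\mp}_\\mu$, and that the Minkowski norm decomposes as $a^2 = 2 a^+ a^- - \\vect{a}_T^2$:

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-)
g = np.diag([1.0, -1.0, -1.0, -1.0])

def lc(a):
    """Return the light-cone components (a^-, a^+, aT) of a = (a0, a1, a2, a3)."""
    am = (a[0] - a[3]) / np.sqrt(2.0)
    ap = (a[0] + a[3]) / np.sqrt(2.0)
    return am, ap, np.array(a[1:3])

# light-cone basis vectors n_± = (1, 0, 0, ±1)/sqrt(2)
n_plus  = np.array([1.0, 0.0, 0.0,  1.0]) / np.sqrt(2.0)
n_minus = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2.0)

a = np.array([5.0, 0.3, -1.2, 4.0])   # arbitrary sample four-vector
am, ap, aT = lc(a)

assert np.isclose(n_plus @ g @ n_plus, 0.0)    # n_+^2 = 0
assert np.isclose(n_minus @ g @ n_minus, 0.0)  # n_-^2 = 0
assert np.isclose(n_plus @ g @ n_minus, 1.0)   # n_+ · n_- = 1
assert np.isclose(ap, a @ g @ n_minus)         # a^+ = a · n_-
assert np.isclose(am, a @ g @ n_plus)          # a^- = a · n_+
# Minkowski norm in light-cone components: a^2 = 2 a^+ a^- - |aT|^2
assert np.isclose(a @ g @ a, 2 * ap * am - aT @ aT)
```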
\n\nFollowing Refs.~\\cite{Bacchetta:2006tn,Mulders:2016pln}, the transverse projector, $g_T^{\\mu\\nu}$, and the transverse anti-symmetric tensor, $\\epsilon_T^{\\mu\\nu}$, are defined as:\n\\begin{align}\n\\label{e:gT_def}\ng_T^{\\mu\\nu} & \\equiv g^{\\mu\\nu} - n_+^{ \\{ \\mu} n_-^{\\nu \\} } \\\\ \n\\label{e:epsT_def}\n\\epsilon_T^{\\mu\\nu} & \\equiv \\epsilon^{\\mu \\nu \\rho \\sigma} {n_-}_\\rho {n_+}_\\sigma  \\equiv \\epsilon^{\\mu \\nu + -} \\ ,\n\\end{align}\nwhere $g^{\\mu\\nu}$ is the Minkowski metric and $\\epsilon^{\\mu\\nu\\rho\\sigma}$ is the totally anti-symmetric Levi-Civita tensor (with $\\epsilon^{0123}=1$).\nNote that $g_T^{\\mu\\nu} a_\\nu = a_T^\\mu$ projects a four-vector onto its transverse component, and $\\epsilon_T^{\\mu\\nu} a_\\nu = \\epsilon_T^{\\mu\\nu} a_{T\\nu}$ rotates that component by 90 degrees in the transverse plane.\n\nIn the paper we also make use of the correspondence between symmetric traceless tensors of definite rank and complex numbers, detailed in Ref.~\\cite{Boer:2016xqr}. Its essence is the possibility of trading an uncontracted rank-$m$ tensor for a complex number. For a rank-$m$ tensor $T$ built out of a single transverse vector $\\vect{a}_T$, this reads:\n\\begin{equation}\n  \\label{e:SST_C}\n  T^{i_1 \\cdots i_m}(\\vect{a}_T) \\rightarrow \\frac{1}{2^{m-1}}\\ \n  |\\vect{a}_T|^m\\ e^{\\pm \\i m \\phi} \\ ,\n\\end{equation}\nwhere $\\phi$ is the polar angle associated with $\\vect{a}_T$ in the transverse plane.
The rank $m$ of the tensor is reflected in the power of the modulus and in the phase of the complex number.\n\n\nA useful consequence of this correspondence is that expressions proportional to $T^{i_1 \\cdots i_m}$ vanish upon integration over $\\vect{a}_T$, due to the angular part of the integration measure.\nIn our analysis we apply this correspondence to the following rank-2 tensor, built out of the hadron's transverse momentum $P_T$~\\cite{Boer:2016xqr,vanDaal:2016glj}:\n\\begin{equation}\n\\label{e:STT2_Php}\nP_T^{ij} \\ \\equiv\\ P_T^i P_T^j + \\frac{\\vect{P}_T^2}{2} g_T^{ij} \\ .\n\\end{equation} \n\n\n\n\\subsection{Parton distribution correlator}\nThe TMD parton distribution correlator and its Dirac projections are defined as:\n\\begin{equation*}\n\\label{e:proj_int_Phi}\n\\refstepcounter{equation}\n\\Phi(x,p_T)\\ \\equiv \\int_{-\\infty}^{+\\infty} dp^- \\Phi(p, P)_{\\begin{subarray}{l} p^+=xP^+ \\\\  \\end{subarray}}\\,\n= \\int dp^-\\, \\text{Disc} [ \\Phi(p, P) ]_{\\begin{subarray}{l} p^+=xP^+ \\\\  \\end{subarray}} \n\\ , \\ \\ \\ \\ \\ \\ \\ \n\\Phi^{[\\Gamma]}(x,p_T)\\ \\equiv\\ \\text{Tr} \\bigg[ \\Phi(x,p_T) \\frac{\\Gamma}{2} \\bigg] \\ ,\n\\eqno{(\\theequation \\textit{a,b})}\n\\end{equation*}\nwhere the unintegrated $\\Phi$ quark distribution correlator is defined as~\\cite{Bacchetta:2006tn,Mulders:1995dh,Echevarria:2016scs}:\n\\begin{equation*}\n\\Phi_{ij}(p,P) = \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i p \\cdot \\xi}\\, \\text{Tr}_c \n\\langle P|  \n{\\cal T} \\big[ \\overline{\\psi}_j(0) W_2(0,\\infty) \\big] \\, \n{\\cal \\overline{T}} \\big[ W_1(\\infty,\\xi) \\psi_i(\\xi) \\big] \\, \n|P \\rangle  \\, .\n\\end{equation*}\nIn the previous equations $p$ is the quark momentum, $P$ is the hadronic momentum, $x=p^+\/P^+$ is the parton fractional momentum in the dominant direction. 
In Eq.~\\eqref{e:proj_int_Phi}(b), the ``Tr'' operator without subscripts denotes a Dirac trace, the $1\/2$ factor is explained in Sections 6.7 and 6.8 of Ref.~\\cite{Collins:2011zzd}, and $\\Gamma$ is a generic Dirac matrix; for example, $\\Gamma=\\gamma^+$ is associated with the unpolarized TMD $f_1$, and the matrices associated with the other TMDs can be found in Ref.~\\cite{Bacchetta:2006tn}.\nIn the spirit of Feynman rules, the Dirac trace operator ``$\\text{Tr}$'' corresponds to the {\\it sum} over the polarization states of the quark in the final state. \nThe analogous color trace $\\text{Tr}_c$, corresponding to a sum over the color configurations, is included in the definition of the correlator $\\Phi$. \n\nAssuming that the correlators have the standard analyticity properties of the scattering amplitudes, the integration over the suppressed momentum component used to define the TMD correlators can be performed by complex contour deformation. Depending on the value of $x$, one can then replace the integral of the unintegrated correlator by the integral of its $s$-channel or $u$-channel discontinuity, denoted by ``Disc''~\\cite{Jaffe:1983hp,Boer:1998im,Diehl:2003ny,Gamberg:2010uw}. These correspond, respectively, to a quark distribution ($0 \\leq x \\leq 1$) and to an antiquark distribution ($-1 \\leq x \\leq 0$).
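The mechanics of such Dirac projections can be illustrated with an elementary numerical check. In the sketch below (Python with NumPy, in the Dirac representation; the correlator {\\tt corr} and its coefficient $c$ are illustrative stand-ins rather than the physical $\\Phi$, and the normalization shown is the generic trace identity rather than the precise $f_1$ projection), the identity $\\text{Tr}[\\gamma^\\mu\\gamma^\\nu] = 4 g^{\\mu\\nu}$ implies that tracing against $\\gamma^+$ isolates the coefficient of $\\gamma^-$ in a correlator:

```python
import numpy as np

# Dirac matrices in the Dirac representation
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g3 = np.block([[Z2, sig[2]], [-sig[2], Z2]])

g_plus  = (g0 + g3) / np.sqrt(2)   # gamma^+ = (gamma^0 + gamma^3)/sqrt(2)
g_minus = (g0 - g3) / np.sqrt(2)   # gamma^- = (gamma^0 - gamma^3)/sqrt(2)

assert np.isclose(np.trace(g_plus @ g_minus).real, 4.0)  # Tr[g^+ g^-] = 4 g^{+-} = 4
assert np.isclose(np.trace(g_plus @ g_plus).real, 0.0)   # Tr[g^+ g^+] = 4 g^{++} = 0

# A trace against gamma^+ with the 1/2 normalization picks out (twice)
# the coefficient of gamma^- in a toy "correlator":
c = 0.37                 # illustrative coefficient
corr = c * g_minus       # stand-in for the gamma^- piece of a correlator
assert np.isclose(np.trace(corr @ g_plus / 2).real, 2 * c)
```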
\n\n\n\n\\subsection{Parton fragmentation correlator}\n\nThe TMD fragmentation correlator in the parton frame and its Dirac projections are defined as:\n\n\\begin{equation*}\n\\label{e:proj_int_Delta}\n\\refstepcounter{equation}\n\\Delta(z,P_T) \\equiv  \\int_{-\\infty}^{+\\infty} \\frac{dk^+}{2z}\\, \\Delta(k, P)_{\\begin{subarray}{l} P^-= zk^- \\\\  \\end{subarray}}\n= \\int \\frac{dk^+}{2z}\\, \\text{Disc}\\, [ \\Delta(k, P) ]_{\\begin{subarray}{l} P^-= zk^- \\\\  \\end{subarray}}\n\\ , \\ \\ \\ \\ \\ \\ \\\n\\Delta^{[\\Gamma]}(z,P_T)\\ \\equiv\\ \\text{Tr} \\bigg[ \\Delta(z,P_T) \\frac{\\Gamma}{2} \\bigg] \\ ,\n\\eqno{(\\theequation \\textit{a,b})}\n\\end{equation*}\nwith the unintegrated quark correlator $\\Delta(k,P)$ defined in Eq.~\\eqref{e:1hDelta_corr}. \nHere $k$ is the momentum of the fragmenting quark, $P$ is the momentum of the produced hadron, and $z=P^-\/k^-$ is the hadron's fractional momentum in the dominant momentum direction. \nThe $1\/2$ factor in Eq.~\\eqref{e:proj_int_Delta}(b) arises in the same way as in Eq.~\\eqref{e:proj_int_Phi}(b). \nIn Eq.~\\eqref{e:proj_int_Delta}(a), the $1\/z$ factor comes from the normalization of the hadronic states (see Ref.~\\cite{Collins:1981uw} and Section 12.4 in Ref.~\\cite{Collins:2011zzd}). \nThe trace operator in this case has an additional $1\/2$ factor which appears in Eq.~\\eqref{e:proj_int_Delta}(a), since it corresponds to an {\\it average} over the quark polarizations in the initial state. \nA color trace Tr$_c\/N_c$, that in the same way corresponds to an average over the hadron's color configurations, is already included in the definition of the unintegrated correlator $\\Delta(k,P)$. \n\nNote that in the fragmentation case we integrate over the suppressed partonic plus component even if the correlator has a probabilistic interpretation in terms of the hadronic variables. 
This is because the integration is always performed with respect to the momentum components of the object that would enter the hard interaction in a process, namely the parton.\n\n\n\\subsection{Fully inclusive jet correlator}\nIn analogy with Eqs.~\\eqref{e:proj_int_Delta}, the TMD inclusive jet correlator and its Dirac projections are defined as:\n\\begin{equation*}\n\\label{e:proj_int_J}\n\\refstepcounter{equation}\nJ(k^-,k_T)\\, \\equiv\\ \\frac{1}{2} \\int dk^+  \\, \\Xi(k) \n\\ , \\ \\ \\ \\ \\ \\ \\ \\ \\\nJ^{[\\Gamma]}(k^-,k_T)\\, \\equiv\\  \\text{Tr} \\bigg[ J(k^-,k_T) \\frac{\\Gamma}{2} \\bigg] \\ .\n\\eqno{(\\theequation \\textit{a,b})}\n\\end{equation*}\nFor consistency with Ref.~\\cite{Sterman:1986aj}, the discontinuity has been inserted directly in the definition of the unintegrated jet correlator~\\eqref{e:invariant_quark_correlator}, or equivalently \\eqref{e:invariant_quark_correlator_W}, where we also included the color trace Tr$_c\/N_c$ corresponding to an average over the initial-state color configurations. \nNote that there is no $1\/k^-$ prefactor, at variance with Ref.~\\cite{Accardi:2008ne}.\n\n\n\n\n\n\n\\section{Frame transformations}\n\\label{a:frame_dep}\n\n\nIn this appendix we discuss the dependence of the fragmentation functions and of the associated momentum sum rules on the frame chosen to study the hadronization mechanism.\n\nWe will consider, in particular, two cases: the hadron frame used to define the TMD fragmentation functions \\cite{Collins:2011zzd,Bacchetta:2006tn}, and the parton frame used in the main text to derive the momentum sum rules. Either frame is defined by a specific choice of basis four-vectors $n_{-(f)}$, $n_{+(f)}, n_{1(f)}, n_{2(f)}$, which identify the light-cone plus and minus directions and the two directions orthogonal to these, with an index $f=h,p$ explicitly referring to the hadron and parton frames, respectively.
The basis vectors are not only utilized to define the corresponding coordinate system, but also to decompose the fragmentation correlator in terms of Fragmentation Functions, whose definition, consequently, depends on the choice of frame. In this appendix, therefore, we will consistently use the $h$ and $p$ subscripts to explicitly distinguish between quantities defined in one or the other frame. Note, however, that in the main text we dispensed with this notation.\n\\newline\\indent\nAs discussed in Section~\\ref{ss:1h_inclusive_FF}, the hadron frame is defined such that the momentum of the hadron under consideration has no transverse component ($\\vect{P}_{Th}=0$); and the parton frame is such that the quark's momentum has no transverse component ($\\vect{k}_{Tp}=0$). \nIn the main body of this paper, we connected the fragmentation correlator and the quark propagator through a correlator-level sum rule that integrated over all hadron momenta. Hence only the quark's momentum is available to define the frame, and we could only choose to work in the parton frame. When discussing the fragmentation correlator, however, both frames are possible, with the hadron frame being the conventional choice~\\cite{Collins:2011zzd,Bacchetta:2006tn}. As a consequence, the FFs defined in the TMD literature, and here generically denoted by $X_h$, differ from the parton-frame FFs $X_p$ entering the sum rules summarized in Section~\\ref{ss:1h_inclusive_FF}, where the $p$ index was dropped for simplicity. It is the purpose of this Appendix to derive the rules for transforming one set of FFs, and their corresponding sum rules and Equation of Motion relations, into the other. \n\n\\subsection{Parton and hadron frames}\n\\label{ss:p_h_frames}\n\nWe will first consider the Lorentz transformation from the parton frame to the hadron frame.
The transformation is completely determined by requiring that \n(1) the parton transverse momentum in the parton frame $\\vect{k}_{Tp}$ be zero, \n(2) the minus component be invariant, and \n(3) the norm of any four-vector be invariant. \nThe matrix associated with this transformation reads, in light-cone coordinates~\\cite{Levelt:1993ac,Collins:2011zzd}:\n\\begin{equation}\n  \\label{e:p_to_h_frame}\n  {\\cal M}_{h \\leftarrow p} = \n  \\begin{bmatrix}\n  1 & 0 & 0 \\\\\n  \\frac{\\vect{k}_{Th}^2}{2(k^-)^2} & 1 & \\frac{\\vect{k}_{Th}}{k^-} \\\\\n  \\frac{\\vect{k}_{Th}}{k^-} & 0 & 1\n  \\end{bmatrix}\n  \\,  ,\n\\end{equation}\nwhere $\\vect{k}_{Th}$ is the (Euclidean 2D) transverse momentum of the quark in the hadron frame. \nThe hadron frame components $a_h^\\mu$ of the vector $a$ can then be obtained from the parton frame components by\n\\begin{align}\n   a_h^\\mu = {({\\cal M}_{h \\leftarrow p})^\\mu}_\\nu \\, a_p^\\nu \\ .\n\\end{align}\nAs the minus component is invariant, we will omit the subscript identifying the frame whenever there is little risk of misunderstanding.
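The three defining requirements can be verified numerically. The minimal sketch below (Python with NumPy; the values of $k^-$, $\\vect{k}_{Th}$ and the sample vector are arbitrary) implements the matrix of Eq.~\\eqref{e:p_to_h_frame} on the light-cone components $[a^-,a^+,\\vect{a}_T]$ and checks that it leaves the minus component and the Minkowski norm invariant, and that it maps a parton-frame quark momentum with $\\vect{k}_{Tp}=0$ onto one with transverse momentum $\\vect{k}_{Th}$:

```python
import numpy as np

def M_h_from_p(k_minus, kTh):
    """Matrix of Eq. (e:p_to_h_frame) acting on light-cone components [a^-, a^+, aT]."""
    kTh = np.asarray(kTh, dtype=float)
    M = np.eye(4)
    M[1, 0] = kTh @ kTh / (2 * k_minus**2)   # a_h^+ gains (kTh^2 / 2(k^-)^2) a^-
    M[1, 2:] = kTh / k_minus                 # ... and (kTh / k^-) · aT
    M[2:, 0] = kTh / k_minus                 # aT_h gains (kTh / k^-) a^-
    return M

def norm2(a):
    # Minkowski norm in light-cone components: a^2 = 2 a^+ a^- - |aT|^2
    return 2 * a[0] * a[1] - a[2:] @ a[2:]

k_minus, kTh = 10.0, np.array([0.7, -0.4])   # illustrative values
M = M_h_from_p(k_minus, kTh)

# parton-frame quark momentum with kTp = 0 (taken massless for simplicity)
k_p = np.array([k_minus, 0.0, 0.0, 0.0])
k_h = M @ k_p
assert np.allclose(k_h[2:], kTh)             # quark acquires kTh in the hadron frame
assert np.isclose(k_h[0], k_p[0])            # minus component invariant

# the Minkowski norm of an arbitrary vector is preserved
a = np.array([3.0, 1.5, 0.2, -0.8])
assert np.isclose(norm2(M @ a), norm2(a))
```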
The inverse transformation matrix ${\\cal M}_{p \\leftarrow h}$ from the hadron to the parton frame can be simply obtained by replacing $\\vect{k}_{Th} \\rightarrow - \\vect{k}_{Th}$ in Eq.~\\eqref{e:p_to_h_frame}.\n\nFrom Eq.~\\eqref{e:p_to_h_frame} one can see that the transverse momentum $\\vect{P}_{Tp}$ of the hadron in the parton frame and the transverse momentum $\\vect{k}_{Th}$ of the quark in the hadron frame are related by\n\\begin{equation}\n\\label{e:PTp_kTh_rel}\n\\vect{P}_{Tp} = -z\\, \\vect{k}_{Th} \\, ,\n\\end{equation}\nwhile the hadron's collinear momentum fraction relative to the quark is invariant between the two considered frames because of the invariance of the minus components of the momenta:\n\\begin{align}\n\\label{e:z_inv}\n  z = \\frac{P^-_p}{k^-_p} = \\frac{P^-_h}{k^-_h} \\ .\n\\end{align}\n\nLet us now consider the transformation of the different ingredients in the Dirac decomposition of the TMD fragmentation correlator, which defines the fragmentation functions (see, for example, Eq.~\\eqref{e:1hDelta_TMDcorr_param} for the decomposition in the parton frame).  \nTo start with, the metric tensor $g^{\\mu\\nu}$ is Lorentz-invariant by definition, and the Levi-Civita tensor $\\epsilon^{\\mu\\nu\\rho\\sigma}$ is invariant as well, since the transformation ${\\cal M}_{h \\leftarrow p}$ belongs to the proper orthochronous Lorentz group (det ${\\cal M}_{h \\leftarrow p} = 1$). \n\nThe transformations of the transverse $g_T^{\\mu\\nu}$ and $\\epsilon_T^{\\mu\\nu}$ tensors are, instead, more complex, and we need to first address the relation between the basis vectors defining the two frames under consideration.\nLet us first consider the hadron frame basis vectors, which can be expressed in hadron-frame coordinates as ${n_-}_{(h)}^\\mu = [1,0,\\vect{0}_T]_h$, ${n_+}_{(h)}^\\mu = [0,1,\\vect{0}_T]_h$ and $n_{i(h)}^\\mu = [0,0,\\vect{e}_i]_h$, where $\\vect{e}_1 = (1,0)$ and $\\vect{e}_2=(0,1)$, and $i=1,2$ is a transverse index.
We also collect the transverse basis vectors $n_{i(h)}$ into a 2D transverse vector, $\\vect{n}_{T(h)} \\equiv (n_{1(h)},n_{2(h)})$. The parton frame basis vectors are analogously defined in parton frame coordinates. \nOne can easily show that \n\\begin{subequations}\n\\label{e:n_transf}\n\\begin{numcases}{}\nn_{-(h)}^\\mu = n_{-(p)}^\\mu\n  - \\frac{1}{k^-} \\vect{k}_{Th} \\cdot \\vect{n}_{T(p)}^\\mu \n  + \\frac12 \\frac{\\vect{k}_{Th}^2}{(k^-)^2} n_{+(p)}^\\mu\n  & \\label{e:n_minus_transf} \\\\\nn_{+(h)}^\\mu = n_{+(p)}^\\mu\n  & \\label{e:n_plus_transf} \\\\\n\\vect{n}_{T(h)}^\\mu = \\vect{n}_{T(p)}^\\mu - \\frac{\\vect{k}_{Th}}{k^-} n_{+(p)}^\\mu \\ ,\n  & \\label{e:n_T_transf}     \n\\end{numcases}\n\\end{subequations}\nIt is then not difficult to obtain\n\\begin{align}\n  \\label{e:gT_transf}\ng_{T(h)}^{\\mu\\nu} & = g_{T(p)}^{\\mu\\nu} + \\frac{1}{k^-} \\vect{k}_{Th} \\cdot \\vect{n}_{T(p)}^{\\{\\mu} n_{+(p)}^{\\nu\\}} - \\frac{\\vect{k}_{Th}^2}{(k^-)^2} n_{+(p)}^\\mu n_{+(p)}^\\nu\\\\\n  \\label{e:epsT_trans}\n\\epsilon_{T(h)}^{\\mu\\nu} & = \\epsilon_{T(p)}^{\\mu\\nu} - \\frac{1}{k^-} \\epsilon^{\\mu\\nu\\rho\\sigma} \\vect{k}_{Th} \\cdot \\vect{n}_{T(p)\\rho} \\, n_{+(p)\\sigma} \\ .\n\\end{align}\nWhile neither tensor is actually Lorentz invariant in itself, the breaking terms are at least of $O(1\/k^-)$.\nIn our application to the Lorentz  transformation of the fragmentation correlator $\\Delta$, we only need these tensors contracted with $k_\\nu$, with a much simpler transformation: \n\\begin{align}\n\\label{e:nTdot_transf}\ng_{T(h)}^{\\mu\\nu} k_\\nu & = -\\frac{1}{z} g_{T(p)}^{\\mu\\nu} P_\\nu - \\frac{\\vect{P}_{Tp}^2}{z^2k^-} n_{+(p)}^\\mu  \n\\\\\n\\label{e:epsTdot_transf}\n\\epsilon_{T(h)}^{\\mu\\nu} k_\\nu & = -\\frac{1}{z} \\epsilon_{T(p)}^{\\mu\\nu} P_\\nu \\ .\n\\end{align}\nNote that Eq.~\\eqref{e:nTdot_transf} generalizes Eq.~\\eqref{e:PTp_kTh_rel} to the four vector case. 
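The relation~\\eqref{e:PTp_kTh_rel} can also be checked numerically: boosting an on-shell hadron momentum from the hadron frame (where $\\vect{P}_{Th}=0$) to the parton frame with ${\\cal M}_{p \\leftarrow h}$, i.e. Eq.~\\eqref{e:p_to_h_frame} with $\\vect{k}_{Th} \\rightarrow -\\vect{k}_{Th}$, reproduces $\\vect{P}_{Tp} = -z\\, \\vect{k}_{Th}$. The sketch below is a minimal check with illustrative numerical values:

```python
import numpy as np

def M_lc(k_minus, kTh):
    """Transformation of Eq. (e:p_to_h_frame) on light-cone components [a^-, a^+, aT]."""
    kTh = np.asarray(kTh, dtype=float)
    M = np.eye(4)
    M[1, 0] = kTh @ kTh / (2 * k_minus**2)
    M[1, 2:] = kTh / k_minus
    M[2:, 0] = kTh / k_minus
    return M

# illustrative values for the quark momentum, momentum fraction and hadron mass
k_minus, kTh, z, M_H = 10.0, np.array([0.7, -0.4]), 0.3, 0.14

# hadron momentum in the hadron frame: P_Th = 0, on-shell with mass M_H
P_minus = z * k_minus
P_h = np.array([P_minus, M_H**2 / (2 * P_minus), 0.0, 0.0])

# parton frame = hadron frame boosted with kTh -> -kTh
P_p = M_lc(k_minus, -kTh) @ P_h
assert np.allclose(P_p[2:], -z * kTh)   # Eq. (e:PTp_kTh_rel): P_Tp = -z kTh
assert np.isclose(P_p[0] / k_minus, z)  # z invariant: minus components unchanged
```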
\nFinally, the first four Dirac matrices transform as any other four-vector: \n\\begin{subequations}\n\\label{e:gammamu_transf}\n\\begin{numcases}{}\n\\gamma_h^- = \\gamma_p^- \\equiv \\gamma^- \n& \\label{e:gammaminus_transf} \\\\\n\\gamma_h^+ = \\gamma_p^+ \n+ \\frac{\\vect{k}_{Th}^2}{2(k^-)^2}\\, \\gamma^- \n+ \\frac{\\vect{k}_{Th} \\cdot \\vect{\\gamma}_{Tp}}{k^-}  \n& \\label{e:gammaplus_transf} \\\\\n\\vect{\\gamma}_{Th} = \\vect{\\gamma}_{Tp} + \\frac{\\vect{k}_{Th}}{k^-}\\, \\gamma^- \\ ,\n& \\label{e:gammaT_transf}     \n\\end{numcases}\n\\end{subequations}\nwith $\\gamma_5 = \\varepsilon_{\\mu\\nu\\rho\\sigma} \\gamma^\\mu \\gamma^\\nu\\gamma^\\rho \\gamma^\\sigma$ invariant because such is the Levi-Civita tensor and we are working in 4 dimensions \\cite{Collins:1984xc}.\n\n\nWe now have all the tools to understand what happens to the fragmentation functions, the Equation of Motion relations (EOMs), and the momentum sum rules when changing frames.\n\n\n\n\\subsection{Transformation of the fragmentation functions}\n\\label{ss:FFtransforms}\n\nAs discussed, the TMD fragmentation functions are conventionally defined by decomposing the TMD fragmentation correlator in terms of the light cone basis vectors of the hadron frame~\\cite{Bacchetta:2006tn}: \n\\begin{align}\n\\label{e:1hDelta_TMDcorr_param_hadron_frame}\n{\\Delta}_{(h)}(z,k_{T(h)}) & = \n\\frac{1}{2} \\slashed{n}_{-(h)} D_{1h}(z,k_{Th}^2) + \n\\i \\frac{ \\big[ \\slashed{k}_{T(h)}, \\slashed{n}_{-(h)} \\big]}{4 M} H_{1h}^{\\perp}(z,k_{Th}^2) + \n\\frac{M}{2 P^-} E_h(z,k_{Th}^2) \\\\\n\\nonumber & \n+ \\frac{\\slashed{k}_{T(h)}}{2 P^-} D_h^{\\perp}(z,k_{Th}^2) \n+ \\frac{\\i M}{4 P^-} \\big[ \\slashed{n}_{-(h)}, \\slashed{n}_{+(h)} \\big] H_h(z,k_{Th}^2) \n+ \\frac{1}{2 P^-} \\gamma_5\\, \\epsilon_{T(h)}^{\\rho\\sigma}\\, \\gamma_{\\rho}\\, {k_\\sigma} \\, G_h^{\\perp}(z,k_{Th}^2) \\ .\n\\end{align}\nNote that we omitted the frame subscript for the frame-independent quantities, and that 
$\\slashed{k}_{T(h)} = \\gamma_\\mu\\, g_{T(h)}^{\\mu\\nu} k_\\nu$. We have also dropped the flavor index on the mass $M$ to avoid confusion with the frame subscript $h$, and used $k_{Th}^2=k_{T(h)}^\\mu k\\big._{T(h)\\mu}$ as a shorthand in the argument of the fragmentation functions. \n\nIt is now important to realize that the TMD correlator $\\Delta_{(h)}(z,k_{Th}^2) \\equiv (2z)^{-1} \\int dk_h^+ \\Delta(k,P)$ is invariant under Lorentz transformations, such as the hadron to parton frame transformation under discussion in this appendix, which connect frames with the same light-cone plus axis. Explicitly,\n\\begin{align}\n\\label{e:Delta_h_vs_p}\n    \\Delta_{(p)}(z,P_{Tp}^2) =  \\Delta_{(h)}(z,k_{Th}^2) \\ ,\n\\end{align}\nwhere the TMD correlator $\\Delta_{(p)} = (2z)^{-1} \\int dk_p^+ \\Delta(k,P)$ is decomposed in terms of the parton-frame light-cone basis vectors, see Eq.~\\eqref{e:1hDelta_TMDcorr_param} in the main text. This can be seen in two steps. First, notice that the integration over $dk^+$ is Lorentz invariant because the minus and transverse components are fixed. Then, look at the definition \\eqref{e:1hDelta_ampl} of the unintegrated $\\Delta^h(k,P)$: on the one hand, there are no open Lorentz indices; on the other hand, the light-cone plus vector associated with the $B_i$ functions is the same in the considered frames.\n\n\nSchematically, the Dirac projections that define the FFs take the form\n\\begin{equation}\n\\label{e:inv_traces}\n  X_f \\sim \\text{Tr}\\Big[ \\Delta_{\\text{TMD}}\\, \\Gamma_{(f)} \\Big] \\ ,\n\\end{equation}\nwhere $\\Gamma_{(f)}$ is a suitable contraction of the Dirac matrices and the light-cone basis vectors for a given frame $f$, see for example \nEqs.~\\eqref{e:sumrule_D1},~\\eqref{e:sumrule_E},~\\eqref{e:sumrule_H},~\\eqref{e:sumrule_H1p},~\\eqref{e:sumrule_Dp},~\\eqref{e:sumrule_Gp}.  \nWhen performing a Lorentz transformation, one needs to keep all the involved vectors unchanged.
Under this condition, the traces in Eq.~\\eqref{e:inv_traces} are Lorentz invariant. If one changes the basis vectors, though, one obtains a {\\em different} definition of fragmentation functions, and one can study how these different fragmentation functions transform into one another.\n\nSpecifically, let us consider the transformation between the hadron-frame FFs introduced in this appendix and the parton-frame FFs discussed in the main text. This can be obtained by decomposing, with the help of Eq.~\\eqref{e:n_transf}, the $n_{(h)}$ vectors in Eq.~\\eqref{e:Delta_h_vs_p} on the parton-frame light-cone basis, and transforming the transverse momentum components according to Eq.~\\eqref{e:PTp_kTh_rel}. The parton frame FFs can then be projected out utilizing the $\\Gamma_{(p)}$ functions, or, more simply, obtained by matching the corresponding Dirac structures in Eq.~\\eqref{e:Delta_h_vs_p}. One finds \n\\begin{align}\n\\label{e:Dperp_transf}\n& D_p^\\perp(z,P_{Tp}^2) = D_h^\\perp(z,k_{Th}^2) - z\\, D_{1h}(z,k_{Th}^2) \\equiv {\\widetilde D}_h^\\perp(z,k_{Th}^2) \\, , \\\\\n\\label{e:H_transf} \n& H_p(z,P_{Tp}^2) = H_h(z,k_{Th}^2) -z\\, \\frac{k_{Th}^2}{M^2}\\, H_{1h}^\\perp(z,k_{Th}^2) \\equiv {\\widetilde H}_h(z,k_{Th}^2) \\,  ,\n\\end{align}\nwhile all other fragmentation functions do not mix\\footnote{For the $G^\\perp$ function, this is actually true only when summing over the hadron spins.}:\n\\begin{align}\n  \\label{e:p_to_h_genericFF}\n  X_p (z,P_{Tp}^2) = X_h(z,k_{Th}^2) \\ .\n\\end{align}\nIn practice, the change of basis vectors mixes the twist-2 FFs in the hadron frame ($D_{1h}$ and $H_{1h}^\\perp$) with two other twist-3 FFs ($D_h^\\perp$ and $H_h$) through the off-diagonal terms in Eq.~\\eqref{e:p_to_h_frame} proportional to $1\/k^- \\sim 1\/P_h^-$. The other FFs do not mix under this change of basis.  \n\nThe identification of the r.h.s.
of Eqs.~\\eqref{e:Dperp_transf} and~\\eqref{e:H_transf} with the ${\\widetilde D}_h$ and ${\\widetilde H}_h$ functions requires one to use the hadron frame EOM relations discussed in Ref.~\\cite{Bacchetta:2006tn}. It is important to remark that these tilde-functions are among the functions that parametrize the dynamical twist-3 quark-gluon-quark correlator $\\Delta_A^\\alpha$~\\cite{Bacchetta:2006tn}. Hence, Eqs.~\\eqref{e:Dperp_transf} and~\\eqref{e:H_transf} imply that the distinction between kinematical and dynamical twist-3 is, for certain functions, frame-dependent, and the transformation ${\\cal M}_{h \\leftarrow p}$ actually maps a kinematical twist-3 quantity into a dynamical one. A similar version of the transformation~\\eqref{e:Dperp_transf} for $D^\\perp$ was already discussed in Ref.~\\cite{Levelt:1993ac}, whereas the transformation~\\eqref{e:H_transf} for $H$ is, to our knowledge, new.\n\nBefore moving to collinear functions, it is worthwhile remarking on an important but potentially confusing difference between our notation (derived e.g. from Ref.~\\cite{Bacchetta:2006tn}) and that of, {\\em e.g.}, Refs.~\\cite{Metz:2016swz,Collins:2011zzd}. In the hadron frame, the natural transverse momentum variable for a FF is $k_{Th}$, as we have used in this Appendix. However, the physical interpretation of a FF should be given in the partonic frame. Hence, reading Eq.~\\eqref{e:p_to_h_genericFF} from right to left, and utilizing Eq.~\\eqref{e:PTp_kTh_rel}, we find \n\\begin{align}\n     X_h(z,k_{Th}^2) = X_p (z,z^2 k_{Th}^2)\\ .\n\\end{align}\nIn other words, the hadron frame FFs depend on the $z^2 k_{Th}^2$ combination, rather than on $k_{Th}^2$ alone. This justifies using $z^2 k_{Th}^2$ as the argument of $X_h$, as done in e.g. Refs.~\\cite{Metz:2016swz,Collins:2011zzd}.
An important consequence is that, if one wishes to use a Gaussian approximation for the transverse momentum dependence of the FFs in the hadron frame, this should read $X_h(z,k_{Th}^2) \\approx D(z) \\exp\\big[-z^2 \\vect{k}_{Th}^2\/(\\Delta^2)\\big]$, where $\\Delta^2$ is the variance, and $D$ a function of $z$ alone. \n\nThe collinear FFs are usually defined in the parton frame as integrals of the TMD FFs over the transverse momentum (see Refs.~\\cite{Bacchetta:2006tn,Metz:2016swz}). The definition in the hadron frame follows, if one requires the collinear FFs to be frame independent. \nExplicitly,\n\\begin{equation}\n\\label{e:D1_coll_def}\nX_{p}(z) \\equiv \\int d^2 \\vect{P}_{Tp}\\, X_{p}(z,P_{Tp}^2)   \\, ,\n\\quad \\quad \\quad \nX_{h}(z) \\equiv z^2 \\int d^2 \\vect{k}_{Th}\\, X_{h}(z,k_{Th}^2) \\ ,\n\\end{equation}\nand it is easy to see that \n\\begin{align}\n  \\label{e:coll_FF_lorentz}   \n  X_{p}(z)=X_{h}(z)\n \\end{align}\nfor FFs that do not mix under Lorentz transformations.\nFollowing the standard conventions discussed in Refs.~\\cite{Bacchetta:2006tn,Metz:2016swz}, the first transverse moments are defined in the parton and hadron frames as\n\\begin{equation}\n\\label{e:first_mom} \nX_p^{(1)}(z) \\equiv \\int d^2 \\vect{P}_{Tp}\\, \\frac{\\vect{P}_{Tp}^2}{2 z^2 M^2}\\, X_p(z,P_{Tp}^2) \\, ,\n\\quad \\quad \\quad \nX_h^{(1)}(z) \\equiv z^2\\, \\int d^2 \\vect{k}_{Th}\\, \\frac{\\vect{k}_{Th}^2}{2M^2}\\, X_h(z,k_{Th}^2) \\, .\n\\end{equation} \nThese also do not mix, $X^{(1)}_{p}(z)=X^{(1)}_{h}(z)$, except for the $D^\\perp$ and $H$ functions.\n\n\n\n\n\\subsection{Transformation of the EOMs}\n\nThe EOMs allow one to relate twist-2 fragmentation functions with kinematical and dynamical twist-3 fragmentation functions. \nThey can be obtained by applying the Dirac equation $(i\\slashed{D}(\\xi) - m)\\psi(\\xi)=0$ to the fragmentation correlator and projecting on the good quark components. 
The resulting relation between the twist-2 and twist-3 fragmentation correlators is Lorentz covariant, as seen in Eq. (3.53) of Ref.~\\cite{Bacchetta:2006tn}, which is given in the hadron frame.  \nUsing the transformation~\\eqref{e:p_to_h_frame} it is possible to show that the EOMs are frame invariant up to terms suppressed by powers of $M\/P^-$, which can be kinematically neglected in a frame boosted to high values of $P^-$ such as the one we are considering in this paper~\\cite{Bacchetta:2006tn}. \nThus, in order to derive the EOMs in the parton frame, one simply needs to apply the replacement rule~\\eqref{e:PTp_kTh_rel} for the transverse momenta to the hadron frame EOMs given in Ref.~\\cite{Bacchetta:2006tn}:  \n\\begin{equation}\n  \\label{e:eom_p_frame}\n  \\begin{aligned}\n  & E_p = \\widetilde{E}_p + z \\frac{m_{q0}}{M} D_{1p} \n  &\\quad\\quad& D_p^{\\perp} = \\widetilde{D}_p^{\\perp} + z D_{1p} \\\\\n  & H_p = \\widetilde{H}_p - \\frac{\\vect{P}_{Tp}^2}{z M^2}   H_{1p}^\\perp\n  && G_p^{\\perp} = \\widetilde{G}_p^{\\perp} + z \\frac{m_{q0}}{M}   H_{1p}^{\\perp\\,} \\ .\n  \\end{aligned}\n\\end{equation}\nThese are the EOMs utilized in Section~\\ref{ss:qgq_sumrules}, here written with an explicit $p$ frame subscript. \n\n\\subsection{Transformation of the sum rules}\n\nWe now discuss the frame (in)dependence of the momentum sum rules for the fragmentation functions. \n\nLet us start from the rank $0$ sum rules for the $D_1$ and $E$ fragmentation functions, derived in Section~\\ref{ss:mu_minus} in the parton frame.  \nThe Dirac matrices that project these functions out of the fragmentation correlator are, respectively, the $\\gamma^-$ and ${\\mathbb 1}$ matrices. Since these are invariant under the ${\\cal M}_{h \\leftarrow p}$ transformation, the momentum sum rules for $D_1$ and $E$ are invariant as well.
\n\nThe transformation of the rank $0$ parton frame sum rule for $H$ is less straightforward, because of its mixing with the Collins function $H_1^\\perp$, and it is instructive to look at the latter first. The projection matrix for $H_1^\\perp$ is $\\Gamma = \\i \\sigma^{i-}\\gamma_5$, which renders the RHS of the master sum rule~\\eqref{e:sumrule_transverse} equal to zero. Moreover, this matrix is frame-independent, because the extra term from Eq.~\\eqref{e:gammaT_transf} cancels in the commutator that defines $\\sigma^{i-}$. Accordingly, the rank $1$ sum rule for $H_1^\\perp$ is Lorentz-invariant. As a consequence of Eq.~\\eqref{e:H_transf}, the sum rule for $H$ is also invariant, despite the fact that this FF mixes with the Collins function under Lorentz transformations. \n\nThe other two FFs appearing in the rank $1$ sum rules discussed in Section~\\ref{ss:mu_transverse} are $G^\\perp$ and $D^\\perp$. \nThe projection matrix for $G^\\perp$ is $\\Gamma = \\gamma_T^i\\, \\gamma_5$, such that the RHS of the sum rule is zero as well. Unlike the case of the Collins function, this matrix does get an extra term proportional to $\\gamma^- \\gamma_5$ under a Lorentz transformation, which is however only related to polarized FFs. Since the sum rules can only be obtained after summing over the hadronic polarizations, this extra term does not contribute and the sum rule for $G^\\perp$ is Lorentz invariant. \n\nThe sum rule for $D^\\perp$ in the parton frame involves the hadronic transverse momentum rescaled by the collinear momentum fraction, averaged over the kinematics and summed over the produced hadrons and their spins.
\nIt is therefore useful to introduce the average of a momentum-dependent observable $O_p=O_p(z,P_{Tp}^2)$ in the parton frame as:\n\\begin{equation}\n\\label{e:def_average_pframe}\n\\langle O_p \\rangle = \\sum_{H, S} \\int dz\\, d^2 \\vect{P}_{Tp}\\, O_p(z,P_{Tp}^2)\\, z\\, D_{1}^{H}(z,P_{Tp}^2) \\ ,\n\\end{equation}\nwhere, at variance with the main text, we use an upper-case $H$ for the hadron flavor index to distinguish it from the hadron frame index (this definition can also be extended to a flavor-dependent observable $O^H$, but we suppress that index for clarity).  \nThe average operator is Lorentz invariant if we define it for a hadron frame observable $O_h=O_h(z,k_{Th}^2)$ as\n\\begin{equation}\n\\label{e:def_average_hframe}\n\\langle O_h \\rangle \\equiv \\sum_{H, S} \\int dz\\, d^2 \\vect{k}_{Th}\\, O_h(z,k_{Th}^2)\\, z^3\\, D_{1}^{H}(z,k_{Th}^2) \\, .\n\\end{equation}\nNow, one can calculate the hadron frame sum rule for $D^\\perp$ by applying the ${\\cal M}_{h \\leftarrow p}$ Lorentz transformation and the mixing relation~\\eqref{e:Dperp_transf} to the parton frame sum rule~\\eqref{e:pframe_sumrule_Dp_Dpt}$a$. \nUtilizing the EOMs in the two frames, it is also possible to obtain the hadron frame sum rule for $\\widetilde{D}^\\perp$.
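As a consistency check of the two definitions, assume that the replacement rule~\\eqref{e:PTp_kTh_rel} takes the form $\\vect{P}_{Tp} = -z\\, \\vect{k}_{Th}$ commonly adopted in the TMD literature (this explicit form is quoted here only for illustration). The change of integration variables then produces the Jacobian\n\\begin{equation*}\nd^2 \\vect{P}_{Tp} = z^2\\, d^2 \\vect{k}_{Th} \\ ,\n\\end{equation*}\nwhich accounts for the relative factor $z^2$ between the weights $z$ and $z^3$ in Eqs.~\\eqref{e:def_average_pframe} and~\\eqref{e:def_average_hframe}, with the observable and $D_1^H$ evaluated at the corresponding arguments in the two frames.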
The result of these manipulations is given in Table~\\ref{t:sum_rules_Dperp}, where we collect and compare the $D^\\perp$ and $\\widetilde{D}^\\perp$ sum rules in the two frames, expressed in terms of the averages defined in Eqs.~\\eqref{e:def_average_pframe} and~\\eqref{e:def_average_hframe}.\n\n\\begin{table}\n\\small\n \\centering\n\\begin{tabular}{|c||c|c|}\n  \\hline\n  \\multicolumn{1}{|c||}{\\ frame\\ \\ } & \n  \\multicolumn{1}{c|}{\\ $\\displaystyle 2 \\sum_{H,S} \\int dz\\, M_H^2\\, {D}_f^{H\\perp(1)}(z)$\\ \\ } & \n  \\multicolumn{1}{c|}{\\ $\\displaystyle 2 \\sum_{H,S} \\int dz\\, M_H^2\\, \\widetilde{D}_f^{H\\perp(1)}(z)$\\ \\ } \\\\\n  \\hline\n  \\hline\n          &   &  \\\\ \n$f = p$ & 0 & $\\langle P_{Tp}^2\/z^2 \\rangle$ \\\\              \n          &   &  \\\\ \n$f =  h$ & $\\langle \\vect{k}_{Th}^2 \\rangle$ & 0  \\\\                      \n          &   &  \\\\ \n\\hline\n\\end{tabular}\n\\caption{Sum rules for the $D^\\perp$ and $\\widetilde{D}^\\perp$ twist-3 FFs in the parton frame ($f = p$) and in the hadron frame ($f =  h$). The flavor index is denoted by an uppercase $H$ to distinguish it from the lowercase $h$ frame index.}\n\\label{t:sum_rules_Dperp}\n\\end{table} \n\nOne can notice an interesting symmetry between the results obtained in the parton and in the hadron frames. \nIn the parton frame, the twist-3 $\\widetilde{D}^\\perp$ sum rule measures the average squared hadronic transverse momentum dynamically generated during the hadronization process, rescaled by a factor $1\/z^2$, while the $D^\\perp$ sum rule is trivial. \nIn the hadron frame, instead, it is the twist-2 sum rule for $D^\\perp$ that measures the dynamical generation of transverse momentum. In this case the averaged quantity is formally the squared transverse momentum of the parton as seen by the hadron. \nWhile different in form, the two averages measure the same quantity, as is obvious from Eq.~\\eqref{e:PTp_kTh_rel}.
The formal frame dependence of the $D^\\perp$ and $\\widetilde D^\\perp$ sum rules is a consequence of the fact that these FFs enter the fragmentation correlator with a coefficient proportional to the transverse momentum, and of the Lorentz transformation properties of the latter. \n\n\n\n\n\n\n\n\n\n\n\n\\end{appendix}\n\n\n\\section{The fully inclusive jet correlator and its spectral decomposition}\n\\label{s:jetcor}\n\n\n\\subsection{The inclusive jet correlator}\n\nLet us start by considering the unintegrated \ninclusive quark-to-jet correlator~\\cite{Sterman:1986aj,Chen:2006vd,Collins:2007ph,Accardi:2008ne,Accardi:2017pmi,Accardi:2019luo} \n\\begin{align}\n  \\label{e:invariant_quark_correlator}\n  \\Xi_{ij}(k;w) = \\text{Disc} \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\,\n    \\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \n    \\big[\\, {\\cal T}\\, W_1(\\infty,\\xi;w) \\psi_i(\\xi) \\big]\\, \n    \\big[\\, {\\cal \\overline{T}}\\, {\\overline{\\psi}}_j(0) W_2(0,\\infty;w) \\big] \n    {|\\Omega\\rangle} \\ ,  \n\\end{align}\nwhere ${|\\Omega\\rangle}$ is the interacting vacuum state of QCD, $\\psi$ the quark field, $W_{1,2}$ are Wilson lines that ensure the gauge invariance of the correlator, and $w$ is an external vector that determines the direction of their paths, as discussed in detail later. ${\\cal T}$ represents the time-ordering operator for the fields, whereas ${\\cal \\overline{T}}$ represents the anti-time-ordering operator~\\cite{Collins:2011zzd,Echevarria:2016scs}; for the sake of brevity we omit the flavor index of the quark fields and of $\\Xi$ (but all the results in this paper should be understood to be flavor-dependent). The color trace of the correlator will be discussed in detail shortly.\n\nThe definition~\\eqref{e:invariant_quark_correlator} clarifies and refines the definitions previously advanced in Refs.~\\cite{Accardi:2008ne,Accardi:2017pmi,Accardi:2019luo}.
\nIn particular, with the present definition, the jet correlator naturally emerges on the right-hand side of the master sum rule~\\eqref{e:master_sum_rule}, which we will derive in the next section and which connects the single-inclusive quark fragmentation process to the propagation and inclusive fragmentation of a quark discussed in this section. \nA diagrammatic interpretation of Eq.~\\eqref{e:invariant_quark_correlator} is given in Figure~\\ref{f:cut_diagrams}(a), where the vertical cut line represents the discontinuity, ``Disc'', in Eq.~\\eqref{e:invariant_quark_correlator}, or, in other words, a sum over all quark hadronization products~\\cite{Sterman:1986aj}. In fact, inserting a completeness relation between the square brackets in Eq.~\\eqref{e:invariant_quark_correlator}, one can interpret the jet correlator as the square of the sum of all possible quark-to-hadron transition amplitudes. \n\n\\begin{figure}[tb]\n\\centering\n\\begin{tabular}{ccc}\n\\includegraphics[width=0.32\\linewidth,valign=b]{Xi_cut.png}\n& \\hspace{2cm} &\t\n\\includegraphics[width=0.32\\linewidth,valign=b]{Delta_cut.png}\n\\\\ & & \\\\ \n(a) & & (b)\n\\end{tabular}\n\\caption{\nDiagrammatic interpretations of (a) the fully inclusive jet correlator of Eqs.~\\eqref{e:invariant_quark_correlator} and \\eqref{e:invariant_quark_correlator_W}, and (b) the single-hadron fragmentation correlator \\eqref{e:1hDelta_corr}. The black solid line corresponds to the hadronizing quark with momentum $k$, the black dashed line to the produced hadron with momentum $P_h$ and spin $S_h$. The yellow blob corresponds to the unobserved hadronization products.
The vertical thin dashed line is the cut that puts the (unobserved) particles on the mass shell (see Appendix~\\ref{a:conv}).\n}\n\\label{f:cut_diagrams}\n\\end{figure}\n\n\nThe color-averaging of the initial-state quark, implemented as Tr$_c[\\dots]\/N_c$, has a crucial role, since it mimics the color neutralization that has to take place in order to be able to consider the discontinuity, which corresponds to having on-shell intermediate states (see, {\\it e.g.}, Ref.~\\cite{Roberts:2015lja}, Section~2.4). Furthermore, the color average is essential for the spectral representation of the jet correlator to be developed in Section~\\ref{ss:spectr_dec}. Finally, color averaging is also important in view of the sum rule discussion of Section~\\ref{s:1h_rules} and, more generally, for using the inclusive jet correlator in factorization theorems at large Bjorken $x$~\\cite{Chen:2006vd}. When calculating Dirac traces, we will also average over the quark's polarization states, as detailed in Appendix~\\ref{a:conv}. \n\nOn the physics side, the correlator $\\Xi$\ncaptures the hadronization of a quark including {\\em all} the products of the hadronization process. We call this the ``fully inclusive'' jet correlator\\footnote{Other names used in the literature for the same correlator are ``jet distribution\"~\\cite{Sterman:1986aj} and ``jet factor\"~\\cite{Collins:2007ph}. Its Dirac projections have also been called ``(final state) jet functions\"~\\cite{Accardi:2008ne,Chen:2006vd,Chay:2013zya}.} \nin order to stress that none of the jet's constituents is actually reconstructed -- hence the absence of a definition for a jet axis and radius, contrary to other semi-inclusive definitions of jets.
In the following, when using for simplicity the term ``jet'' (or ``inclusive jet'') correlator, we will always refer to this fully inclusive jet correlator.\nThe inclusiveness of $\\Xi$ will also be evident when relating it to the correlator for the hadronization of a quark into a single hadron by the sum rule we will prove in Section~\\ref{s:1h_rules}. Finally, when inserting $\\Xi$ in a DIS diagram~\\cite{Accardi:2008ne,Accardi:2017pmi}, which can be justified at large values of Bjorken $x$ where 4-momentum conservation limits the amount of transverse momentum available to final state hadrons \\cite{Sterman:1986aj,Chen:2006vd,Accardi:2018gmh}, the jet in question can be identified with the current jet. \n\nIt is also interesting to remark that, taking into account the properties of the color trace and after a specific choice for the path of the Wilson line (to be discussed next), the correlator $\\Xi$ can be expressed as the discontinuity of the gauge-invariant quark propagator, whose spectral decomposition has been studied in Ref.~\\cite{Yamagishi:1986bj} for the case of a straight Wilson line connecting $0$ to $\\xi$. In this paper, we will discuss instead the spectral representation for the case of Wilson lines running along staple-like contours -- which are the natural paths arising in QCD factorization theorems -- and use this in applications involving correlators integrated along one light-cone direction.\n\n\n\n\n\n\n\\subsubsection{Wilson line structure}\n\\label{sss:link_structure}\n\nWe work in a reference frame specified by two light-like vectors $n_+$ and $n_-$ such that $n_+^2 = n_-^2 = 0$ and $n_+\\cdot n_- = 1$. Any other vector $a^\\mu$ can then be specified in light-cone coordinates as $a=[a^-,a^+,\\vect{a}_T]$, with $a^\\pm = a\\cdot n_\\mp$, and $\\vect{a}_T$ the 2-dimensional coordinates of $a$ transverse with respect to the $(n_+,n_-)$ plane, see Appendix~\\ref{a:conv}.
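As a numerical illustration of these light-cone conventions, the following minimal sketch checks the decomposition $a^2 = 2a^+a^- - \\vect{a}_T^2$. The explicit representation $n_\\pm = (1,0,0,\\pm 1)\/\\sqrt{2}$ is our own choice for illustration only (it satisfies $n_\\pm^2=0$ and $n_+\\cdot n_-=1$; the paper fixes its actual conventions in Appendix~\\ref{a:conv}):

```python
import math

def dot(a, b):
    # Minkowski scalar product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

s = math.sqrt(2.0)
# One common realization of the light-like basis: n+^2 = n-^2 = 0, n+.n- = 1
n_plus  = (1/s, 0.0, 0.0,  1/s)
n_minus = (1/s, 0.0, 0.0, -1/s)

a = (3.0, 0.4, -1.2, 2.5)       # arbitrary test vector

a_plus  = dot(a, n_minus)       # a+ = a . n-
a_minus = dot(a, n_plus)        # a- = a . n+
aT2 = a[1]**2 + a[2]**2         # |a_T|^2

assert math.isclose(dot(n_plus, n_plus), 0.0, abs_tol=1e-12)
assert math.isclose(dot(n_plus, n_minus), 1.0)
# light-cone decomposition of the invariant: a^2 = 2 a+ a- - a_T^2
assert math.isclose(dot(a, a), 2*a_plus*a_minus - aT2)
```

Any boost along the $z$ axis rescales $a^+$ and $a^-$ by inverse factors, leaving the combination $2a^+a^- - \\vect{a}_T^2$ invariant.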
We also assume the quark to be highly boosted in the negative $z$ direction, so that the minus component of its momentum is dominant, $k^- \\gg |\\vect{k}_T| \\gg k^+$. \nWhen the jet correlator is included in the calculation of a physical process, $n_+$ and $n_-$ can be determined by the kinematics of that process; for example, in DIS one can choose these to be coplanar with the hadron and virtual photon momenta, with $n_-$ aligned with the latter \\cite{Accardi:2008ne,Accardi:2017pmi}. However, in this paper we study the jet correlator as a theoretical object in its own right. \n\n\\begin{figure}\n\\centering\n\\begin{tabular}{ccccc}\n\\includegraphics[width=0.31\\textwidth]{W2W1_generic.pdf}\n& \\hspace{0.15cm} &\n\\includegraphics[width=0.31\\textwidth]{W_generic.pdf}\n& \\hspace{0.15cm} &\n\\includegraphics[width=0.31\\textwidth]{W_TMD.pdf}\n\\\\\n\\quad\\ (a) & & \\quad\\ (b) & & \\quad\\ (c)  \n\\\\ \n\\end{tabular}\n\\caption{Staple-like Wilson line paths for the jet correlator $\\Xi(k;w=n_+)$: \n(a) $W_2(0,\\infty;w=n_+)$ (red) and $W_1(\\infty,\\xi;w=n_+)$ (blue); \n(b) the combined Wilson line $W(0,\\xi;w=n_+)$; \n(c) the TMD Wilson line $W_{TMD}(\\xi^+,\\vect{\\xi}_T)$.\n}\n\\label{f:W_paths}\n\\end{figure}\n\n\nWe restrict our attention to a class of Wilson lines that reproduces the paths determined by the TMD factorization theorems for quark-initiated hard scattering processes, when $\\Xi$ is integrated over the sub-dominant momentum component $k^+$~\\cite{Collins:2008ht,Collins:2011zzd} (or even in the fully unintegrated factorization proposed in Ref.~\\cite{Collins:2007ph}). \nFor simplicity, we also restrict the discussion to the case $w=n_+$, even though no substantial impediment arises in the treatment of slightly off-the-light-cone Wilson lines. \nTo be specific, \nwe take $W_2$ to run first from $0$ to infinity along the plus light-cone direction, \nthen to infinity in the transverse plane, \nand eventually to infinity again along the minus light-cone direction, see Figure~\\ref{f:W_paths}(a).\nAnalogously, $W_1$ \nruns from infinity backwards in the minus direction until it reaches $[\\xi^-,\\infty^+,\\vect{\\infty}_T]$, \nthen in the transverse direction until $[\\xi^-,\\infty^+,\\vect{\\xi}_T]$, \nand eventually reaches $\\xi = [\\xi^-,\\xi^+,\\vect{\\xi}_T]$ along the plus direction \n(see Figure~\\ref{f:W_paths}(a)).
\nExplicitly, \n\\begin{align}\n\\label{e:W2_antitime}\nW_2(0,\\infty;n_+) & =\\, \n{\\cal U}_{n_+}[0^-, 0^+, \\vect{0}_T; 0^-, \\infty^+, \\vect{0}_T]\\,\\,\n{\\cal U}_{v_T}[0^-, \\infty^+, \\vect{0}_T; 0^-, \\infty^+, \\vect{\\infty}_T]\\, \\, \n{\\cal U}_{n_-}[0^-, \\infty^+, \\vect{\\infty}_T; \\infty^-, \\infty^+, \\vect{\\infty}_T]\\, ,\n\\\\\n\\label{e:W1_time}\nW_1(\\infty,\\xi;n_+) &=\\,\n{\\cal U}_{n_-}[\\infty^-, \\infty^+, \\vect{\\infty}_T; \\xi^-, \\infty^+, \\vect{\\infty}_T]\\,\\, \n{\\cal U}_{v_T}[\\xi^-, \\infty^+, \\vect{\\infty}_T; \\xi^-, \\infty^+, \\vect{\\xi}_T]\\,\\,\n{\\cal U}_{n_+}[\\xi^-, \\infty^+, \\vect{\\xi}_T; \\xi^-, \\xi^+, \\vect{\\xi}_T]\\, , \n\\end{align}\nwith ${\\cal U}_v$ representing the Wilson operator along a half-line starting from $a$ in the $v$ direction, {\\it i.e.},\n\\begin{align}\n  {\\cal U}_v[a;\\infty] = {\\cal P} \\exp \\Big( -\\i g \\int_0^\\infty ds\\,  v^\\mu A_\\mu(a+sv) \\Big) \\, ,\n\\end{align}\nwhere ${\\cal P}$ denotes the path-ordering operator, and the square brackets emphasize the straightness of the path. We will comment upon other possible choices of paths below. \n\n\nFrom the definitions~\\eqref{e:W2_antitime} and \\eqref{e:W1_time}, one can see that $W_2$ is automatically anti-time-ordered and $W_1$ time-ordered. Namely, with our choice of paths, ${\\cal \\overline{T}} {\\cal P} \\equiv {\\cal P}$ and ${\\cal T} {\\cal P} \\equiv {\\cal P}$.
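For reference, the gauge invariance of Eq.~\\eqref{e:invariant_quark_correlator} rests on the standard transformation property of path-ordered exponentials: under a local gauge transformation $U(\\xi)$ (assumed, as customary, to reduce to the identity at infinity),\n\\begin{equation*}\n\\psi(\\xi) \\to U(\\xi)\\, \\psi(\\xi)\\, , \\qquad\n{\\cal U}[a;b] \\to U(a)\\, {\\cal U}[a;b]\\, U^\\dagger(b)\\, ,\n\\end{equation*}\nso that in the products $W_1(\\infty,\\xi;w)\\, \\psi(\\xi)$ and ${\\overline{\\psi}}(0)\\, W_2(0,\\infty;w)$ all transformation matrices cancel except those at infinity, which reduce to the identity. (The assignment of $U(a)$ and $U^\\dagger(b)$ to the endpoints depends on the ordering convention adopted for ${\\cal P}$.)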
\nOwing to this specific choice for the Wilson lines and thanks to the color trace and to the absence of intermediate states, we can also perform a cyclic permutation of the fields in Eq.~\\eqref{e:invariant_quark_correlator} and combine the two Wilson operators $W_{1,2}$ into the single, staple-like operator \n\\begin{align}\n  W(0,\\xi;n_+) = W_2(0,\\infty;n_+) W_1(\\infty,\\xi;n_+) \n\\end{align}\nillustrated in Figure~\\ref{f:W_paths}(b), so that\n\\begin{align}\n  \\label{e:invariant_quark_correlator_W}\n  \\Xi_{ij}(k;n_+) = \\text{Disc} \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\,\n  \\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\psi_i(\\xi) {\\overline{\\psi}}_j(0) W(0,\\xi;n_+) {|\\Omega\\rangle} \\, .\n\\end{align}\nThe jet correlator can thus also be written as the discontinuity of a gauge invariant quark propagator. \n\nUpon setting $\\xi^-=0$, which corresponds to integrating the correlator over the $k^+$ component, we obtain the staple-like $W_{TMD}$ Wilson line \nutilized to define TMD distributions: \n\\begin{align}\n\\label{e:W_np_TMD}\n  W_{TMD}(\\xi^+, \\vect{\\xi}_T) & \\equiv W(0,\\xi;n_+)|_{\\xi^-=0} \\nonumber \\\\\n  & = \n    {\\cal U}_{n_+}[0^-, 0^+, \\vect{0}_T; 0^-, \\infty^+, \\vect{0}_T] \\ \n    {\\cal U}_{v_T}[0^-, \\infty^+, \\vect{0}_T; 0^-, \\infty^+, \\vect{\\xi}_T] \\\n    {\\cal U}_{n_+} [0^-, \\infty^+, \\vect{\\xi}_T; 0^-, \\xi^+, \\vect{\\xi}_T] \\ , \n\\end{align}\nsee Figure~\\ref{f:W_paths}(c).  \nNote that, even without choosing a specific form for $W_{1,2}$, the integration over $k^+$ renders the (anti-)time ordering on the light cone equivalent to the path ordering, since $\\xi^+$ becomes proportional to the time variable.
\nSetting also $\\vect{\\xi}_T=0$, which corresponds to further integrating $\\Xi$ over the transverse momentum $\\vect{k}_T$, one would finally obtain the collinear Wilson line $W_{coll}(\\xi^+) \\equiv W_{TMD}(\\xi^+,\\vect{0}_T)={\\cal U}[0^-, 0^+, \\vect{0}_T; 0^-, \\xi^+, \\vect{0}_T]$, running along the light-like segment from $0^+$ to $\\xi^+$.\n\n\nIt is important to remark that the results discussed in this paper hold true also for a larger class of Wilson lines. In order to drop the ${\\cal T}$ and ${\\cal \\overline{T}}$ time-ordering operators and rewrite the jet correlator as a gauge invariant quark propagator, one only needs to ensure that the time variable increases along the path that goes from $0$ to $\\infty$, and decreases when going from $\\infty$ to $\\xi$. This restricts the class of available Wilson lines, but not excessively. \nFor example, one could \nconsider off-the-light-cone lines, with $w \\neq n_+$ slightly tilted away from the plus light-cone direction, \nor adopt L-shaped lines reaching infinity along $n_+$ and then moving to infinity simultaneously along the minus and transverse directions. \nFurthermore, we will need $W$ in Eq.~\\eqref{e:invariant_quark_correlator_W} to reduce to the identity matrix when $\\xi \\to 0$; hence, in that limit, the path of the Wilson line should not contain loops. The straight line and the staple-like Wilson lines that enter collinear and TMD factorization belong to this category.
\n\n\n\n\n\n\n\n\\subsubsection{Dirac structure}\n\\label{sss:dirac_structure}\n\nThe correlator $\\Xi$ can be decomposed on a basis of Dirac matrices,\n$\n  \\big\\{ {\\mathbb 1}, \\gamma_5, \\gamma^\\mu, \\gamma^\\mu \\gamma_5,\n    \\i\\sigma^{\\mu\\nu} \\gamma_5   \\big\\} ,\n$\nwhere $\\i$ is the imaginary unit, ${\\mathbb 1}$ the identity matrix, \n$\\gamma_5 = \\i \\epsilon^{\\mu\\nu\\rho\\sigma}\\gamma_\\mu\\gamma_\\nu\\gamma_\\rho\\gamma_\\sigma$,  $\\epsilon^{\\mu\\nu\\rho\\sigma}$ the totally antisymmetric tensor of rank 4, \nand $\\sigma^{\\mu\\nu} = (\\i\/2) [ \\gamma^\\mu,\\gamma^\\nu ]$. \nAssuming invariance under Lorentz and parity transformations, we can parametrize $\\Xi$ as:\n\\begin{align}\n  \\Xi(k;n_+) = \\Lambda A_1 {\\mathbb 1} + A_3 \\slashed{k}\n    + \\frac{\\Lambda^2}{k \\cdot n_+} B_1 \\slashed{n}_+\n    + \\frac{\\Lambda}{k \\cdot n_+} B_3 \\sigma_{\\mu\\nu} k^\\mu n_+^\\nu \\ .\n  \\label{e:jet_ampl}\n\\end{align}\nThe amplitudes\n$A_i$ and $B_i$ are, in principle, functions of all the Lorentz scalars that one can build with the available Lorentz vectors, $k \\cdot n_+$ and $k^2$~\\cite{Mulders:1995dh,Bacchetta:2000jk,Boer:2016xqr}. \nThe vector $n_+$ is available as the vector specifying the direction of the Wilson line.\nThe subscripts differ from the conventions discussed in Refs.~\\cite{Accardi:2017pmi,Accardi:2018gmh}, but have been chosen such that they match the customary decomposition of the fragmentation correlator to be discussed in Section~\\ref{ss:1h_inclusive_FF}. \nWe have also introduced a power-counting scale $\\Lambda = O(\\Lambda_{\\text{QCD}})$\nthat defines an ``operational'' twist expansion for the correlator in powers of $\\Lambda\/k^-$, where we assume a large boost in the $k^-$ direction so that $k^+ = (k^2+\\vect{k}_T^2)\/2k^- \\ll |\\vect{k}_T| \\ll k^-$, with $\\vect{k}_T \\sim O(\\Lambda)$ and $k^+,k^2\\sim O(\\Lambda^2)$. 
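The twist expansion of Eq.~\\eqref{e:jet_ampl} follows from the light-cone decomposition of the contracted vectors. With the conventions above,\n\\begin{equation*}\n\\slashed{k} = k^- \\gamma^+ + k^+ \\gamma^- + \\slashed{k}_T \\, , \\qquad\nk \\cdot n_+ = k^- \\, , \\qquad\nk^+ = \\frac{k^2 + \\vect{k}_T^2}{2k^-} \\, ,\n\\end{equation*}\nso that each term of the decomposition can be sorted into powers of $\\Lambda\/k^-$ according to the counting $\\vect{k}_T \\sim O(\\Lambda)$ and $k^+ \\sim O(\\Lambda^2\/k^-)$.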
\nThis expansion is analogous to that used in Refs.~\\cite{Bacchetta:2004zf,Bacchetta:2006tn} for the single-hadron fragmentation correlator, and further discussed in Section~\\ref{ss:1h_inclusive_FF}. Its relation to the rigorous twist expansion of local operators in the Operator Product Expansion formalism is discussed, {\\it e.g.}, in Ref.~\\cite{Jaffe:1996zw}. \n\nThe twist expansion of the inclusive jet correlator can be made explicit by writing Eq.~\\eqref{e:jet_ampl} in terms of the light-cone Dirac matrices $\\gamma^\\pm = \\slashed{n}_\\mp$:\n\\begin{align}\n  \\Xi(k;n_+) = k^-  A_3 \\, \\gamma^+\n    + \\Lambda\\left( A_1\\, {\\mathbb 1}\n      + A_3\\, \\frac{\\slashed{k}_T}{\\Lambda}\n      + \\frac{\\i}{2} B_3\\, [\\gamma^+,\\gamma^-] \\right)\n    + \\frac{\\Lambda^2}{k^-} \\left( B_1\\, \\gamma^-   \n      + A_3 \\frac{k^2+\\vect{k}_T^2}{2\\Lambda^2}\\, \\gamma^-  \n      + B_3 \\frac{\\i}{2\\Lambda}[\\slashed{k}_T, \\gamma^-] \\right)  \\ .\n  \\label{e:Xi_twist_dec}\n\\end{align}\nThe amplitudes $A_{1,3}$ and $B_{1,3}$ can be projected out by tracing $\\Xi$ multiplied by suitable Dirac matrices.\nWe report here only the non-zero traces and group them according to the power counting with which they contribute to Eq.~\\eqref{e:Xi_twist_dec}: \\\\\n\\begin{itemize}\n\n\\item\n  Twist-2 structures - $O(k^-)$:\n  \\begin{align}\n    \\label{e:trace_gm}\n    \\text{Tr}[\\Xi\\, \\gamma^-] & = 4 A_3 k^- \n  \\end{align}\n\n\\item\n  Twist-3 structures - $O(\\Lambda)$:\n  \\begin{align}\n    \\label{e:trace_1}\n    \\text{Tr}[\\Xi\\, {\\mathbb 1}] & = 4 \\Lambda A_1 \\\\\n    \\label{e:trace_gi}\n    \\text{Tr}[\\Xi\\, \\gamma^i] & = 4 A_3 k_T^i \\\\\n    \\label{e:trace_isigmaijg5}\n    \\text{Tr}[\\Xi\\, \\i \\sigma^{ij} \\gamma_5] \n                             & = -4\\Lambda B_3 \\epsilon_T^{ij} \n  \\end{align}\n\n\\item\n  Twist-4 structures - $O(\\Lambda^2\/k^-)$:\n  \\begin{align}\n    
\\label{e:trace_gp}\n    \\text{Tr}[\\Xi\\, \\gamma^+] \n       & = 4 A_3 \\frac{k^2 + \\vect{k}_T^2}{2k^-} + \\frac{4\\Lambda^2}{k^-} B_1 \\\\\n    \\label{e:trace_isigmaipg5}\n    \\text{Tr}[\\Xi\\, \\i \\sigma^{i+} \\gamma_5] \n       & = 4 \\frac{\\Lambda}{k^-} B_3 \\epsilon_T^{ij} {k_T}_j \n  \\end{align}\n\n\\end{itemize}\nIn these formulas, the $\\epsilon_T^{ij}=\\epsilon^{ij\\mu\\nu}n_{-\\mu}n_{+\\nu}$ tensor is the projection of the completely antisymmetric tensor onto the transverse plane, and $i,j=1,2$ are transverse Lorentz indices (see Appendix~\\ref{a:conv}). \n\nWe note that the trace in Eq.~\\eqref{e:trace_gm} corresponds to the inclusive jet function $J$ defined, {\\it e.g.}, in Refs.~\\cite{Procura:2009vm,Jain:2011xz}. \nAs we shall see in Sec.~\\ref{ss:spectr_dec}, the amplitudes $A_3$ and $A_1$ are also directly related to the chiral-even and chiral-odd spectral functions of the quark propagator, respectively.\n\nMoreover, \nnote that the Dirac structure associated with the amplitude $B_3$ is time-reversal odd (T-odd)~\\cite{Mulders:1995dh,Bacchetta:2000jk,Mulders:2016pln,Accardi:2017pmi}. \nIn the correlator that defines parton distribution functions, the T-odd structures are generated by the presence of the gauge link in the transverse plane. \nSince the partonic poles vanish in the correlator that defines fragmentation functions, the T-odd FFs are generated by the interchange of {\\em in-} and {\\em out-}states induced by a time-reversal transformation rather than by the link structure~\\cite{Pijlman:2006vm,Bomhof:2004aw,Boer:2003cm,Meissner:2008yf,Metz:2016swz,Gamberg:2008yt,Gamberg:2010uw}. For this reason the T-odd FFs are universal, contrary to the T-odd TMD PDFs. \nAs shown in Section~\\ref{ss:sum_operator}, the inclusive jet correlator is related to the fragmentation correlator via an on-shell integration over the hadronic momenta and a sum over all the possible hadronic final states.
Since there are no {\\em out-}states in Eq.~\\eqref{e:invariant_quark_correlator}, we conclude that T-odd structures cannot be present in $\\Xi$, namely\n\\begin{align}\n\\label{e:B3_zero}\n  B_3 = 0 \\ .   \n\\end{align}\nThis further simplifies Eq.~\\eqref{e:Xi_twist_dec} and its Dirac projections in Eq.~\\eqref{e:trace_isigmaijg5} and Eq.~\\eqref{e:trace_isigmaipg5}. We will briefly return to this point also in the next subsection.\n\n\n\n\\subsection{Convolution representation and spectral decomposition}\n\\label{ss:spectr_dec}\n\nWe now aim at deriving a spectral representation for the inclusive jet correlator \\eqref{e:invariant_quark_correlator}, or, equivalently, for the gauge invariant quark propagator \\eqref{e:invariant_quark_correlator_W}. \n\nThe first step is to rewrite Eq.~\\eqref{e:invariant_quark_correlator_W} as a convolution of a quark bilinear $\\i \\widetilde S$ and the Fourier transform $\\widetilde W$ of the Wilson line,\n\\begin{align}\n  \\label{eq:convolution_Xi}\n  \\Xi_{ij}(k;n_+) = \\text{Disc} \\int d^4p\\, \\frac{\\text{Tr}_c}{N_c}\\,  {\\langle\\Omega|} \\i \\widetilde S_{ij}(p) \\widetilde W(k-p;n_+) {|\\Omega\\rangle} \\ ,\n\\end{align}\nwhere \n\\begin{align}\n\\label{e:def_tildeS}\n\\i \\widetilde S_{ij}(p) & = \\int \\frac{d^4\\xi}{(2\\pi)^4}\\, e^{\\i\\xi \\cdot p}\\, \\psi_i(\\xi) {\\overline{\\psi}}_j(0)\\, , \\\\\n\\label{e:def_tildeW}\n\\widetilde W(k-p;n_+) & = \\int \\frac{d^4\\xi}{(2\\pi)^4}\\, e^{\\i\\xi \\cdot (k-p)}\\, W(0,\\xi;n_+) \\ .\n\\end{align}\nNote that this convolution representation does not, in itself, depend on the choice of the path for the Wilson line, and it is thus generally valid for the study of gauge invariant quark propagators. A careful choice of Wilson line within the generic class discussed in Section~\\ref{sss:link_structure} is only needed when relating the gauge invariant propagator to the inclusive jet correlator \\eqref{e:invariant_quark_correlator}.
\n\n\n\nThe convolution representation becomes very useful in combination with the spectral decomposition of the quark bilinear. The vacuum expectation value of the operator $\\i \\widetilde S$ is, indeed, the retarded\/advanced (according to the sign of $\\xi^0$) quark propagator, for which there exist spectral representations~\\cite{Bjorken:1965zz}. However, in this paper we are rather interested in the jet correlator integrated over the sub-dominant $k^+$ component of the quark momentum, and we can, in fact, work with the simpler spectral representation of the Feynman quark propagator. To this end, we introduce the following {\\it auxiliary} unintegrated correlator:\n\\begin{align}\n\\label{e:Xiprime}\n\\Xi^\\prime_{ij}(k;n_+) = \\text{Disc} \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\,\n\\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|}\\, {\\cal T} \\big[ \\psi_i(\\xi) {\\overline{\\psi}}_j(0) \\big]\\, W(0,\\xi;n_+) {|\\Omega\\rangle} \\, ,   \n\\end{align}\nwhere\nonly the quark bilinear operator is time ordered.
In Section~\\ref{ss:TMD_J_corr}, we will show that under the Wilson line choice discussed in Section~\\ref{sss:link_structure}, the $\\Xi$ and $\\Xi^\\prime$ correlators integrated over $k^+$ are, in fact, identical, and we can equivalently work with the latter.\n\nThe convolution representation of $\\Xi^\\prime$ is obtained from Eq.~\\eqref{eq:convolution_Xi} by replacing $\\i \\widetilde S$ with $\\i \\widetilde S^\\prime$, defined as \n\\begin{equation}\n\\label{e:def_tildeSF}\n\\i \\widetilde S^\\prime_{ij}(p) = \\int \\frac{d^4\\xi}{(2\\pi)^4}\\, e^{\\i\\xi \\cdot p}\\,\n{\\cal T} \\big[ \\psi_i(\\xi) {\\overline{\\psi}}_j(0) \\big] \\, .\n\\end{equation}\nThis operator can be given a Dirac decomposition assuming invariance under Lorentz and parity transformations~\\cite{Bjorken:1965zz}:\n\\begin{align}\n\\i \\widetilde S^\\prime_{ij}(p) = \\hat s_3(p^2) \\slashed{p}_{ij} + \\sqrt{p^2} \\hat s_1(p^2) {\\mathbb 1}_{ij} \\ ,\n\\label{e:quark_bilinear_decomp}\n\\end{align}\nwhere, for simplicity, we omitted an overall identity matrix in color space, and we call $\\hat{s}_{1,3}$ {\\it spectral operators} for reasons that will become clear shortly. \nThe correlator $\\Xi^\\prime$ can then be written as\n\\begin{align}\n\\label{e:spectral_convolution}\n\\Xi^\\prime_{ij}(k;n_+) = \\text{Disc} \\int d^4p\\, \\frac{\\text{Tr}_c}{N_c}\\, \n{\\langle\\Omega|} \n\\Big[\\hat s_3(p^2) \\slashed{p}_{ij} + \\sqrt{p^2} \\hat s_1(p^2) {\\mathbb 1}_{ij} \\Big]  \n\\widetilde W(k-p;n_+)\\, \n{|\\Omega\\rangle} \\ .\n\\end{align}\nWe can make contact with the K\\\"all\\'en-Lehmann spectral representation of the quark propagator~\\cite{Bjorken:1965zz,Weinberg:1995mt,Accardi:2008ne,Accardi:2019luo} by noticing that the Feynman propagator for the quark in momentum space is given by the expectation value of $\\i \\widetilde S^\\prime$ on the interacting vacuum.
In turn, the Feynman propagator can be written as a superposition of propagators for (multi)particle states of invariant mass $\\mu$~\\cite{Bjorken:1965zz,Weinberg:1995mt}:\n\\begin{equation}\n\\label{e:Feyn_spec_rep}\n\\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\i \\widetilde S^\\prime(p) {|\\Omega\\rangle} = \\frac{1}{(2\\pi)^4} \\int_{-\\infty}^{+\\infty} d\\mu^2 \n\\Big\\{ \\slashed{p}\\, \\rho_3(\\mu^2) + \\sqrt{\\mu^2}\\, \\rho_1(\\mu^2) \\Big\\}\\, \n\\theta(\\mu^2)\\,\n\\frac{\\i}{p^2-\\mu^2+ \\i \\epsilon} \\ ,\n\\end{equation}\nwhere the theta function ensures that the spectral functions $\\rho_{1,3}$ contribute to the integral only at time-like momenta.\\footnote{In Eq.~\\eqref{e:Feyn_spec_rep}, there is an extra $(2\\pi)^{-4}$ factor with respect to Refs.~\\cite{Bjorken:1965zz,Accardi:2008ne,Accardi:2017pmi} because of the normalization of Eq.~\\eqref{e:invariant_quark_correlator}, which is customary in the literature dealing with TMD parton distribution and fragmentation functions and the associated non-local operators (see {\\em e.g.} Ref.~\\cite{Bacchetta:2006tn}).} \nAs a consequence of the canonical commutation relations, the spectral function $\\rho_{3}$ satisfies~\\cite{Zwicky:2016lka,Weinberg:1995mt}\n\\begin{equation}\n  \\int_0^{+\\infty} d\\mu^2 \\rho_3(\\mu^2) = 1 \\ .\n\\label{e:rho13_positivity_rho3sumrule}\n\\end{equation}\nThis function can then be interpreted as the probability distribution for a quark to fragment into a multi-particle state of squared invariant mass $\\mu^2$. The $\\rho_1$ spectral function does not satisfy any normalization condition but, as we will show in Section~\\ref{sss:zeta_calc}, is related to the mass density of the quark hadronization products. \nCare is, however, needed with these interpretations, since the positivity of $\\rho_3$ (along with that of $\\rho_1$) is not guaranteed in a confined theory.
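As an elementary illustration of these spectral functions (a tree-level sketch, not a statement about the interacting theory), for a free quark of mass $m$ the right-hand side of Eq.~\\eqref{e:Feyn_spec_rep} reduces to $(2\\pi)^{-4}\\, \\i(\\slashed{p}+m)\/(p^2-m^2+\\i\\epsilon)$ upon choosing\n\\begin{equation*}\n\\rho_3(\\mu^2) = \\delta(\\mu^2-m^2)\\, , \\qquad\n\\sqrt{\\mu^2}\\, \\rho_1(\\mu^2) = m\\, \\delta(\\mu^2-m^2)\\, ,\n\\end{equation*}\nso that the normalization~\\eqref{e:rho13_positivity_rho3sumrule} is saturated by the single-particle pole; in the interacting theory part of this weight is shifted to the multiparticle continuum.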
\n\nNote that, when working in an axial gauge $v \\cdot A=0$, as we will do in Section~\\ref{s:1h_rules}, we should also add a structure proportional to $\\slashed v$ to the decomposition in Eq.~\\eqref{e:quark_bilinear_decomp} and to the term in curly brackets in Eq.~\\eqref{e:Feyn_spec_rep}, see also Ref.~\\cite{Yamagishi:1986bj}. However, in our explicit calculations we will adopt the light-cone gauge, where $v=n_+$, and the additional, gauge-fixing term would only contribute at the twist-4 level. Since in this work we limit the applications of our formalism to FF sum rules up to the twist-3 level, for the sake of simplicity we have not explicitly written the gauge-fixing term in Eq.~\\eqref{e:Feyn_spec_rep}, but we will briefly return to its role in Section~\\ref{ss:TMDjet_recap}.  \n\n\n\n\n\nThe discontinuity in Eq.~\\eqref{e:spectral_convolution} is completely determined by the discontinuity of Eq.~\\eqref{e:Feyn_spec_rep}. To calculate the latter, we employ the Cutkosky rule~\\cite{Cutkosky:1960sp,Bloch:2015efx,Zwicky:2016lka}, by which one simply needs to replace\n\\begin{equation}\n\\label{e:Cutkosky_rule}\n\\frac{1}{p^2-\\mu^2 + \\i \\epsilon} \\longrightarrow -2\\pi \\i\\, \\delta(p^2 - \\mu^2)\\, \\theta(p^0) \n\\end{equation}\non the right-hand side of that equation. Namely, as the multiparticle state of invariant mass $\\mu$ passes the cut, it can be thought of as a set of on-shell particles with positive energy.
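The replacement~\\eqref{e:Cutkosky_rule} rests on the distributional limit $\\text{Im}\\,(x+\\i\\epsilon)^{-1} \\to -\\pi\\,\\delta(x)$. The following minimal numerical sketch (a toy smearing test with a Gaussian test function, unrelated to any specific correlator) checks this limit by integrating the nascent delta against a smooth function:

```python
import math

def im_prop(x, eps):
    # Im[1/(x + i*eps)] = -eps / (x^2 + eps^2): a nascent -pi*delta(x)
    return -eps / (x * x + eps * eps)

def smeared(f, eps, lo=-6.0, hi=6.0):
    # trapezoidal rule with a step much smaller than the Lorentzian width eps
    n = int((hi - lo) / (eps / 50.0)) + 1
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * f(x) * im_prop(x, eps) * h
    return total

f = lambda x: math.exp(-x * x)  # smooth test function with f(0) = 1

# As eps -> 0+, Im[1/(x + i*eps)] acts as -pi*delta(x) on test functions
for eps in (5e-2, 1e-2):
    assert math.isclose(smeared(f, eps), -math.pi, rel_tol=0.1)
```

The convergence is linear in the width $\\epsilon$, which is why the tolerance is loosened for the coarser value.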
We thus obtain:\n\\begin{align}\n\\nonumber\n\\text{Disc}\\, \\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\i \\widetilde S^\\prime(p) {|\\Omega\\rangle} & = \\frac{1}{(2\\pi)^3} \\int_{-\\infty}^{+\\infty} d\\mu^2 \n\\big\\{ \\slashed{p}\\, \\rho_3(\\mu^2) + \\sqrt{\\mu^2}\\, \\rho_1(\\mu^2) \\big\\}\\, \n\\theta(\\mu^2)\\,\n\\delta(p^2 - \\mu^2)\\, \\theta(p^0) \\\\\n\\label{e:cut_Feyn_spec_rep}\n& = \\frac{1}{(2\\pi)^3}\\, \\big\\{ \\slashed{p}\\, \\rho_3(p^2) + \\sqrt{p^2}\\, \\rho_1(p^2) \\big\\}\\, \n\\underbrace{\\theta(p^2)\\, \\theta(p^0)}_{=\\theta(p^2)\\, \\theta(p^-)} \\ . \n\\end{align}\nFinally, using the operator decomposition for $\\i \\widetilde S^\\prime$ given in Eq.~\\eqref{e:quark_bilinear_decomp}, we obtain the spectral representation for the discontinuity of the expectation values of the operators $\\hat{s}_{1,3}$: \n\\begin{equation}\n\\label{e:spec_rep_s13}\n(2\\pi)^{3}\\, \\text{Disc}\\, \\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\hat{s}_{1,3}(p^2) {|\\Omega\\rangle} =  \\rho_{1,3}(p^2)\\, \\theta(p^2)\\, \\theta(p^-) \\ .\n\\end{equation}\nIt is in this sense that we can refer to $\\hat s_{1,3}$ as spectral operators. \n\nNote that Eqs.~\\eqref{e:cut_Feyn_spec_rep} and~\\eqref{e:spec_rep_s13} provide a spectral representation for the quark propagator ${\\langle\\Omega|} \\i \\widetilde{S}^\\prime {|\\Omega\\rangle}$ without a Wilson line insertion. It is the purpose of the convolution representation in Eq.~\\eqref{e:spectral_convolution}, supplemented by Eq.~\\eqref{e:spec_rep_s13}, to provide a spectral representation for $\\Xi^\\prime(k;n_+)$.
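The Cutkosky replacement used above can also be illustrated numerically: the difference of the propagator across the cut, evaluated at small $\epsilon$, acts on a smooth test function like $-2\pi\i\,\delta(p^2-\mu^2)$. In the minimal sketch below, the test function and all parameter values are arbitrary illustrative choices:

```python
import numpy as np

# Numerical illustration of the Cutkosky rule:
# 1/(x - a + i*eps) - 1/(x - a - i*eps) -> -2*pi*i*delta(x - a) as eps -> 0.
# Fold the discontinuity with a smooth test function g and compare with
# -2*pi*i*g(a).  The function g and the parameters are illustrative.
a, eps = 1.5, 1e-3
x  = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
g  = np.exp(-0.5 * (x - 1.0)**2)          # smooth test function

disc = 1.0 / (x - a + 1j * eps) - 1.0 / (x - a - 1j * eps)
lhs = np.sum(g * disc) * dx
rhs = -2j * np.pi * np.exp(-0.5 * (a - 1.0)**2)
print(abs(lhs - rhs))                     # small, and it shrinks as eps -> 0
```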
Its application to the calculation of the $k^+$-integrated jet correlator is discussed in the next Section.\n\n\n\n\n\n\\subsection{The TMD inclusive jet correlator}\n\\label{ss:TMD_J_corr}\n\nWhen integrating the inclusive jet correlator over the suppressed $k^+$ quark momentum component, one obtains the TMD inclusive  jet correlator $J$~\\cite{Accardi:2019luo},\n\\begin{align}\n\\label{e:J_TMDcorr}\n  J_{ij}(k^-,\\vect{k}_T;n_+)\n    & \\equiv \\frac{1}{2} \\int dk^+\\, \\Xi_{ij}(k;n_+) \\nonumber \\\\\n    & = \\frac{1}{2}\\ \\text{Disc} \\int \\frac{d\\xi^+ d^2 \\vect{\\xi}_T}{(2\\pi)^3}\\,\n      e^{\\i k \\cdot \\xi} \\frac{\\text{Tr}_c}{N_c}\n      {\\langle\\Omega|} \\psi_i(\\xi) {\\overline{\\psi}}_j(0) W(0,\\xi;n_+) \n     \n      {|\\Omega\\rangle}_{|_{\\xi^-=0}} \\ ,\n\\end{align}\nwhere the $1\/2$ normalization factor is justified in Appendix~\\ref{a:conv}, and the integrand is now restricted to $\\xi^-=0$.\n\nThe TMD jet correlator can be decomposed in Dirac structures, with coefficients that can be determined by integrating the projections of $\\Xi$ given in Eqs.~\\eqref{e:trace_gm}-\\eqref{e:trace_isigmaipg5}. 
\nFollowing the arguments discussed in Appendix~\\ref{a:conv} and using Eq.~\\eqref{e:proj_int_J}, we define the projection of $J$ to be:\n\\begin{equation}\n\\label{e:def_proj}\nJ^{[\\Gamma]} \\ \\equiv\\ {\\rm Tr}\\bigg[ J\\, \\frac{\\Gamma}{2} \\bigg] = \n\\frac{1}{2} \\int dk^+ {\\rm Tr} \\bigg[ \\Xi\\, \\frac{\\Gamma}{2} \\bigg] = \n\\frac{1}{4} \\int dk^+ {\\rm Tr} \\big[ \\Xi\\, \\Gamma \\big] \\ .\n\\end{equation}\nFor twist-2 structures we have:\n\\begin{align}\n\\label{e:trace_J_gm}\nJ^{[\\gamma^-]} & =\n\\frac{1}{2} \\int dk^2 A_3(k^2,k^-) \n\\ \\equiv\\ \\alpha(k^-) \\ .\n\\end{align}\nFor twist-3 structures we have:\n\\begin{align}\n\\label{e:trace_J_id}\n& J^{[{\\mathbb 1}]} = \n\\frac{\\Lambda}{2k^-} \\int dk^2 A_1(k^2,k^-) \n\\ \\equiv\\ \\frac{\\Lambda}{k^-} \\zeta(k^-)  \\\\\n\\label{e:trace_J_gi}\n& J^{[\\gamma^i]} = \n\\frac{k_T^i}{2k^-} \\int dk^2 A_3(k^2,k^-) \n= \\frac{\\Lambda}{k^-} \\alpha(k^-) \\frac{k_T^i}{\\Lambda}  \\\\\n\\label{e:trace_J_isigmaijg5}\n& J^{[\\i \\sigma^{ij} \\gamma_5]} =\n- \\frac{\\Lambda}{2k^-} \\epsilon_T^{ij} \\int dk^2 B_3(k^2,k^-) \n\\ \\equiv\\ - \\frac{\\Lambda}{k^-} \\epsilon_T^{ij} \\eta(k^-) \\ .\n\\end{align}\nFor twist-4 structures we have:\n\\begin{align}\n\\label{e:trace_J_gp}\nJ^{[\\gamma^+]} & =\n\\frac{\\Lambda^2}{2(k^-)^2} \\int dk^2 \\bigg[ A_3(k^2,k^-) \\frac{k^2+\\vect{k}_T^2}{2\\Lambda^2} + B_1(k^2,k^-) \\bigg] \n\\ \\equiv\\ \\frac{\\Lambda^2}{(k^-)^2} \\omega(k^-,\\vect{k}_T^2) \n\\end{align}\n\\begin{align}\n\\label{e:trace_J_isigmaipg5}\nJ^{[\\i \\sigma^{i+} \\gamma_5]} & =\n\\frac{\\Lambda^2}{2(k^-)^2} \\epsilon_T^{ij} \\frac{{k_T}_j}{\\Lambda} \\int dk^2 B_3(k^2,k^-)\n= \\frac{\\Lambda^2}{(k^-)^2} \\epsilon_T^{ij} \\frac{{k_T}_j}{\\Lambda} \\eta(k^-) \\ .\n\\end{align}\nBecause of the integration over $k^2$, all the functions defined in the previous equations depend only on $k^-$, apart from $\\omega$ which has an additional dependence on $\\vect{k}_T^2$ that we will discuss in 
Section~\\ref{sss:omega_calc}.\nThe TMD jet correlator can then be given a twist decomposition in Dirac space as follows: \n\\begin{align}\n\\label{e:J_Dirac}\nJ(k^-,\\vect{k}_T;n_+) & = \n\\frac{1}{2} \\alpha(k^-) \\gamma^+ \\\\ \n\\nonumber\n& + \\frac{\\Lambda}{2k^-} \\left[ \\zeta(k^-) {\\mathbb 1} + \\alpha(k^-) \\frac{\\slashed{k}_T}{\\Lambda} + \\eta(k^-) \\sigma_{\\mu\\nu} n_-^\\mu n_+^\\nu \\right] \\\\\n\\nonumber\n& + \\frac{\\Lambda^2}{2(k^-)^2} \\left[ \\omega(k^-,\\vect{k}_T^2) \\gamma^- + \\frac{1}{\\Lambda} \\eta(k^-) \\sigma_{\\mu\\nu} k_T^\\mu n_+^\\nu \\right] \\ . \n\\end{align}\nSince $B_3 = 0$, according to time-reversal symmetry arguments, we obtain $\\eta(k^-) = 0$ and thus the correlator $J$ simplifies to:\n\\begin{equation}\n\\label{e:J_Dirac_noeta}\nJ(k^-,\\vect{k}_T;n_+) = \n\\frac{1}{2} \\alpha(k^-) \\gamma^+ \n+ \\frac{\\Lambda}{2k^-} \\left[ \\zeta(k^-) {\\mathbb 1} + \\alpha(k^-) \\frac{\\slashed{k}_T}{\\Lambda} \\right] \n+ \\frac{\\Lambda^2}{2(k^-)^2} \\left[ \\omega(k^-,\\vect{k}_T^2) \\gamma^- \\right] \\ . \n\\end{equation}\nNote that one could also explicitly factor a $\\theta(k^-)$ function out of $\\alpha$, $\\zeta$, and $\\omega$. \nThe positivity of $k^-$ is indeed guaranteed in any gauge by four-momentum conservation, if one assumes that the particles in the final state all have physical four-momenta. \n\nThe explicit calculation of the coefficients in Eq.~\\eqref{e:J_Dirac_noeta} can be carried out with the aid of the convolutional spectral representation discussed in Section~\\ref{ss:spectr_dec}. Indeed, recall that in Eq.~\\eqref{e:J_TMDcorr} the integrand is restricted to the light front $\\xi^-=0$, so that $\\xi^2 = -\\vect{\\xi}_T^2 < 0$ is space-like.
Under this condition, the fermion fields anticommute and $ {\\cal T} \\big[\\psi_i(\\xi) {\\overline{\\psi}}_j(0)\\big] = \\psi_i(\\xi){\\overline{\\psi}}_j(0)$.\nThus, the integrated versions of the correlators $\\Xi$ and $\\Xi^\\prime$ are equivalent,\n\\begin{equation}\n\\label{e:J_intXi_intXiprime}\nJ_{ij}(k^-,\\vect{k}_T;n_+) = \\frac{1}{2} \\int dk^+\\, \\Xi_{ij}(k;n_+) \n\\equiv \\frac{1}{2} \\int dk^+\\, \\Xi^\\prime_{ij}(k;n_+) \\, ,\n\\end{equation}\nand one can utilize formulas \\eqref{e:spectral_convolution} and \\eqref{e:spec_rep_s13} in the calculation of the jet correlator coefficients. This task is carried out in the light-cone gauge in the next three subsections. \n\n\n\n\n\\subsubsection{Calculation of the twist-2 $\\alpha$ coefficient}\n\\label{sss:alpha_calc}\n\nUsing the definition of $\\alpha$ given in Eq.~\\eqref{e:trace_J_gm}, the equivalence Eq.~\\eqref{e:J_intXi_intXiprime} between the $\\Xi$ and $\\Xi^\\prime$ integrated correlators, and the convolution representation for $\\Xi^\\prime$, we find: \n\\begin{align}\n\\label{e:alpha_calc_1}\n\\alpha(k^-) & \n= \\int dk^+\\, \\text{Disc} \\int_{\\mathbf M} d^4p\\, \\frac{\\text{Tr}_c}{N_c} {\\langle\\Omega|} \\hat{s}_3(p^2) p^- \\widetilde W(k-p) {|\\Omega\\rangle} \\\\\n\\nonumber\n& = \\frac{1}{2}\\, \\text{Disc} \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} dp^- \\frac{\\text{Tr}_c}{N_c} \n{\\langle\\Omega|} \\hat{s}_3(p^2) \\int \\frac{d\\xi^+}{2\\pi} e^{\\i \\xi^+ (k^- - p^-)} \nW_{coll}(\\xi^+) {|\\Omega\\rangle} \\ ,\n\\end{align}\nwhere the integration domain for $p$ is the whole Minkowski space ($\\mathbf M$) and we decompose the integral as $d^4p =  dp^2\\, d^2\\vect{p}_T\\, dp^-\/2p^-$.
\nFrom the first to the second line we used Eq.~\\eqref{e:def_tildeW} and performed the integrations over $k^+$ and $\\vect{p}_T$, fixing $\\xi^-=0$ and $\\vect{\\xi}_T=0$ so that the staple-shaped Wilson line reduces to the straight gauge link in the $n_+$ collinear direction, {\\it i.e.}, $W_{coll}(\\xi^+)$. Next, we choose the light-cone gauge $A^-=0$ so that the collinear Wilson line reduces to the unity matrix in color space. Finally, performing the integration over $\\xi^+$ we obtain:\n\\begin{align}\n\\label{e:alpha_calc_2}\n\\nonumber\n\\alpha(k^-) & \\overset{lcg}{=} \\text{Disc} \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} \\frac{dp^-}{2} \\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\hat{s}_3(p^2) {|\\Omega\\rangle} \\delta(k^- - p^-) \n= \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} \\frac{dp^-}{2} \\delta(k^- - p^-) \\{ (2\\pi)^{-3} \\rho_3(p^2) \\theta(p^2) \\theta(p^-) \\} \\\\\n& = \\frac{1}{2(2\\pi)^3} \\bigg\\{ \\int_0^{+\\infty} dp^2 \\rho_3(p^2) \\bigg\\} \\theta(k^-) = \\frac{\\theta(k^-)}{2(2\\pi)^3} \\ ,\n\\end{align}\nwhere $lcg$ stresses the use of the light-cone gauge. In the second step we used the representation for the spectral operator $\\hat{s}_3$ given in Eq.~\\eqref{e:spec_rep_s13}, and in the last one we used the normalization property for $\\rho_3$ given in Eq.~\\eqref{e:rho13_positivity_rho3sumrule}. We remark that the only dependence on $k^-$ resides in the theta function, and that this result holds, in fact, in any gauge because of the gauge invariance of the jet correlator, and hence of the coefficients of its Dirac decomposition. The theta function, which is due to four-momentum conservation, and the accompanying numerical coefficients determined by the convention used in the definition of the correlators, also appear in the calculation of the higher-twist $\\zeta$ and $\\omega$ coefficients to be discussed next.
\n\n\n\\subsubsection{Calculation of the twist-3 $\\zeta$ coefficient}\n\\label{sss:zeta_calc}\n\nThe $\\zeta$ coefficient defined in Eq.~\\eqref{e:trace_J_id} is proportional to the trace of the gauge invariant TMD jet correlator $J$. \nFactoring out the $\\theta(k^-)$ function, we can thus write\n\\begin{align}\n  \\zeta(k^-) = \\frac{\\theta(k^-)}{2(2\\pi)^3\\Lambda} M_j \\ ,\n\\end{align}\nwhere $M_j$ is a {\\it gauge-invariant} mass term. $M_j$ is in fact independent of $k^-$, and can be interpreted as the inclusive jet's (or the color-averaged dressed quark's) mass, as we will presently show.  \n\nThe calculation of $\\zeta$ in the light-cone gauge follows closely the procedure outlined in the calculation of $\\alpha$. We start from the definition of $\\zeta$ given in Eq.~\\eqref{e:trace_J_id}, use the convolution representation \\eqref{e:spectral_convolution} for the jet correlator, and obtain\n\\begin{align}\n\\label{e:zeta_calc_1}\n\\zeta(k^-) & \n= \\frac{k^-}{4\\Lambda} \\int dk^+\\, \\text{Disc} \\int_{\\mathbf M} d^4p\\, \\frac{\\text{Tr}_c}{N_c} {\\langle\\Omega|} \\sqrt{p^2} \\hat{s}_1(p^2) 4 \\widetilde W(k-p) {|\\Omega\\rangle} \\\\ \n\\nonumber\n& = \\frac{k^-}{2\\Lambda}\\, \\text{Disc} \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} \\frac{dp^-}{p^-} \\frac{\\text{Tr}_c}{N_c} \n{\\langle\\Omega|} \\sqrt{p^2} \\hat{s}_1(p^2) \\int \\frac{d\\xi^+}{2\\pi} e^{\\i \\xi^+ (k^- - p^-)} W_{coll}(\\xi^+) {|\\Omega\\rangle} \\ , \n\\end{align}\nwhere the integrations have been performed as in the case of $\\alpha$. In particular the integration over $\\vect{p}_T$ has projected the Wilson line on the light cone, leaving us once more with $W_{coll}(\\xi^+)$. 
\nImposing the light-cone gauge $A \\cdot n_+ = 0$ and integrating over $\\xi^+$, we obtain:\n\\begin{align}\n\\label{e:zeta_calc_2}\n\\zeta(k^-) & \\overset{lcg}{=} \\frac{k^-}{2\\Lambda} \\text{Disc} \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} \\frac{dp^-}{p^-} \n\\frac{\\text{Tr}_c}{N_c}\\, {\\langle\\Omega|} \\sqrt{p^2} \\hat{s}_1(p^2) {|\\Omega\\rangle} \\delta(k^- - p^-) \\\\\n\\nonumber\n& = \\frac{k^-}{2\\Lambda} \\int_{\\mathbf R} dp^2 \\int_{\\mathbf R} \\frac{dp^-}{p^-} \\delta(k^- - p^-) \\sqrt{p^2} \n\\{ (2\\pi)^{-3} \\rho_1^{}(p^2) \\theta(p^2) \\theta(p^-) \\} \n = \\frac{\\theta(k^-)}{2(2\\pi)^3\\Lambda} \\bigg\\{ \\int_0^{+\\infty} dp^2 \\sqrt{p^2} \\rho_1^{}(p^2) \\bigg\\} \\ ,\n\\end{align}\nwhere, in going from the first to the second line, we used the representation for the spectral operator $\\hat{s}_1$ given in Eq.~\\eqref{e:spec_rep_s13}.\n\nThis calculation shows that the gauge-invariant jet mass $M_j$ has a particularly simple form when choosing the light-cone gauge, being completely determined by the first moment of the ``chiral-odd'' spectral function $\\rho_1^{}$:\n\\begin{equation}\n  \\label{e:Mj_lcg}\n   M_j  \\overset{lcg}{=} \\int_0^{+\\infty} d\\mu^2\\, \\sqrt{\\mu^2}\\, \\rho_1^{}(\\mu^2) \\ .\n\\end{equation}\nThe integral on the right-hand side sums over all the discontinuities of the quark propagator. In this gauge, therefore, $M_j$ can be interpreted as the average mass generated by chirality-flipping processes during the quark's fragmentation, and can therefore be called the ``jet mass'', as proposed in Ref.~\\cite{Accardi:2017pmi}. We will elaborate further on this interpretation in Section~\\ref{ss:TMDjet_recap}.
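As a consistency check of the jet mass formula, in the free-quark limit $\rho_1(\mu^2)=\delta(\mu^2-m^2)$ its first moment must return the current quark mass, $M_j=m$. The toy numerical sketch below confirms this; the quark mass value and the Gaussian regularization of the delta function are illustrative assumptions:

```python
import numpy as np

# Free-quark sanity check of the jet mass M_j = int dmu^2 sqrt(mu^2) rho_1(mu^2):
# with rho_1(mu^2) = delta(mu^2 - m^2) the first moment returns M_j = m.
# The mass and the Gaussian width replacing the delta are illustrative.
m     = 0.3        # hypothetical quark mass (GeV)
width = 1e-4
mu_sq = np.linspace(0.0, 1.0, 1_000_001)
dmu   = mu_sq[1] - mu_sq[0]
rho1  = np.exp(-(mu_sq - m**2)**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

M_j = np.sum(np.sqrt(mu_sq) * rho1) * dmu
print(M_j)         # close to m = 0.3
```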
\nIn closing, it is important to remark that, although the explicit dependence of $M_j$ on $\\rho_1$ may depend on the choice of gauge, its numerical value is in fact gauge invariant and, in particular, independent of $k^-$, as anticipated.\n\n\\subsubsection{Calculation of the twist-4 $\\omega$ coefficient}\n\\label{sss:omega_calc}\n\nThe calculation of the twist-4 $\\omega$ coefficient is more complex than for the $\\alpha$ and $\\zeta$ coefficients, although the main ideas and techniques discussed in the previous two subsections also apply to this case. \nSince this coefficient appears in Eq.~\\eqref{e:J_Dirac_noeta} at twist 4 only, it will not contribute to the fragmentation function sum rules discussed in Section~\\ref{s:1h_rules}, which are for now derived up to twist 3. For this reason, we leave a full study of the $\\omega$ coefficient for future work, and here we outline its general properties.\n\nFrom the definition of $\\omega$ in Eq.~\\eqref{e:trace_J_gp} and using the convolution representation in Eq.~\\eqref{e:spectral_convolution}, the $\\omega$ coefficient reads:\n\\begin{align}\n\\label{e:omega_calc_1}\n\\nonumber \n\\omega(k^-,\\vect{k}_T^2) & = \\bigg(\\frac{k^-}{\\Lambda}\\bigg)^2 \n\\int dk^+\\, \\text{Disc} \\int_{\\mathbf M} d^4p\\, \\frac{\\text{Tr}_c}{N_c} {\\langle\\Omega|} \\hat{s}_3(p^2)\n\\frac{p^2+\\vect{p}_T^2}{2p^-} \\widetilde W(k-p) {|\\Omega\\rangle}\\\\ \n   & \\equiv\\ \\langle\\!\\langle \\frac{p^2}{(p^-)^2} \\rangle\\!\\rangle\n      + \\langle\\!\\langle \\frac{\\vect{p}_T^2}{(p^-)^2} \\rangle\\!\\rangle \\ .\n\\end{align}\n\nThe integral involving $p^2$ in Eq.~\\eqref{e:omega_calc_1} can be calculated following the same procedure used for the $\\zeta$ coefficient.
One obtains\n\\begin{equation}\n\\label{e:omega_avp2}\n  \\langle\\!\\langle \\frac{p^2}{(p^-)^2} \\rangle\\!\\rangle = \\frac{\\theta(k^-)}{4 \\Lambda^2 (2\\pi)^3} \\, \\mu_j^2 \\ ,\n\\end{equation}\nwhere the $\\theta(k^-)$ function arises as for the $\\alpha$ and $\\zeta$ coefficients, and (similarly to $M_j$) $\\mu_j^2$ has a particularly simple form in the light-cone gauge:\n\\begin{equation}\n  \\label{e:Kj2_lcg} \n   \\mu_j^2 \\overset{lcg}{=} \\int_0^{+\\infty} d\\mu^2\\, \\mu^2\\, \\rho_3^{}(\\mu^2) \\ .\n\\end{equation}Unlike $M_j$, however, this is not gauge-invariant.  Given the properties of $\\rho_3$ in Eq.~\\eqref{e:rho13_positivity_rho3sumrule}, $\\mu_j^2$ can be interpreted as the average invariant mass squared directly generated by the quark as it fragments into the final state\\footnote{One has to be careful, though, with this interpretation since the positivity of $\\rho_3$ is not guaranteed in a confined theory.}. \n\nThe calculation of the $\\langle\\!\\langle \\vect{p}_T^2 \/ (p^-)^2 \\rangle\\!\\rangle$ term is more involved because one cannot immediately integrate over $\\vect{p}_T$ and project the integrand on the light cone. 
To achieve that, one first needs to remove the explicit dependence of the integrand on $\\vect{p}_T^2$ using\n\\begin{equation}\n\\label{e:pT2_deriv}\n  \\vect{p}_T^2\\, e^{\\i \\vect{\\xi}_T \\cdot (\\vect{p}_T - \\vect{k}_T)}\n  = \\bigg( -\\frac{\\partial}{\\partial \\vect{\\xi}_T^\\alpha}\n    \\frac{\\partial}{\\partial {\\vect{\\xi}_T}_\\alpha} \n    -2\\i \\vect{k}_T^\\alpha \\frac{\\partial}{\\partial \\vect{\\xi}_T^\\alpha} \n    + \\vect{k}_T^2 \\bigg)\\, e^{\\i \\vect{\\xi}_T \\cdot (\\vect{p}_T - \\vect{k}_T)} \\ .\n\\end{equation}\nOne then obtains\n\\begin{align}\n\\label{e:omega_avpT2}\n  \\langle\\!\\langle \\frac{\\vect{p}_T^2}{(p^-)^2} \\rangle\\!\\rangle \n    = \\frac{\\theta(k^-)}{4 \\Lambda^2 (2\\pi)^3} \\,\n    \\Big( \\vect{k}_T^2 + \\tau_j^2 \\Big) \\ ,\n\\end{align}\nwhere \n\\begin{align}\n\\label{e:Tjdef}\n\\theta(k^-) \\, \\tau_j^2 = (2\\pi)^3 (k^-)^2\\, \\text{Disc} \n  & \\int_{\\mathbf M} d^4 p\\, \\frac{\\text{Tr}_c}{N_c} \\frac{1}{(p^-)^2} {\\langle\\Omega|} \\hat{s}_3(p^2) \n  \\int \\frac{d\\xi^+ d^2\\vect{\\xi}_T}{(2\\pi)^3} e^{\\i \\xi^+ (k^- - p^-)} \\\\\n\\nonumber\n& \\times \\bigg( -\\frac{\\partial}{\\partial \\vect{\\xi}_T^\\alpha} \\frac{\\partial}{\\partial {\\vect{\\xi}_T}_\\alpha} \n-2\\i \\vect{k}_T^\\alpha \\frac{\\partial}{\\partial \\vect{\\xi}_T^\\alpha} \\bigg) \ne^{\\i \\vect{\\xi}_T \\cdot (\\vect{p}_T - \\vect{k}_T)} W_{TMD}(\\xi^+, \\vect{\\xi}_T) {|\\Omega\\rangle} \\ .\n\\end{align}\nThe $\\vect{k}_T^2$ term in Eq.~\\eqref{e:omega_avpT2} is a purely kinematical effect of the initial parton's non-zero transverse momentum. The $\\tau_j^2$ term can be interpreted \nas the average squared transverse momentum of the fragmented hadrons relative to the quark axis. This quantity also characterizes the jet's transverse shape: the larger $\\tau_j^2$, the less aligned the final state is to the initial quark; in this sense, we can call $\\tau_j^2$ the ``jet broadening'' parameter.
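The derivative identity used above to trade $\vect{p}_T^2$ for $\vect{\xi}_T$-derivatives can be verified symbolically by writing out the two transverse components with a Euclidean transverse metric. A minimal check with sympy (the component labels x1, x2, etc. are our own):

```python
import sympy as sp

# Symbolic check of the identity
#   p_T^2 exp(i xi_T.(p_T - k_T))
#     = ( -d^2/dxi^2 - 2i k_T.d/dxi + k_T^2 ) exp(i xi_T.(p_T - k_T)),
# written out for the two transverse components (Euclidean transverse metric).
x1, x2, p1, p2, k1, k2 = sp.symbols('x1 x2 p1 p2 k1 k2', real=True)
phase = sp.exp(sp.I * (x1 * (p1 - k1) + x2 * (p2 - k2)))

lhs = (p1**2 + p2**2) * phase
rhs = (-sp.diff(phase, x1, 2) - sp.diff(phase, x2, 2)
       - 2 * sp.I * (k1 * sp.diff(phase, x1) + k2 * sp.diff(phase, x2))
       + (k1**2 + k2**2) * phase)

print(sp.simplify(lhs - rhs))   # 0
```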
\n\n\n\n\n\n\\subsection{Summary and interpretation of $M_j$}\n\\label{ss:TMDjet_recap}\n\nInserting the expressions of the $\\alpha$, $\\zeta$, $\\omega$ coefficients in Eq.~\\eqref{e:J_Dirac_noeta} we obtain the following decomposition for the TMD jet correlator:\n\\begin{align}\n\\label{e:J_Dirac_explicit}\n  J(k^-,\\vect{k}_T;n_+)\n    = \\frac{\\theta(k^-)}{4(2\\pi)^3\\, k^-} \\, \n    \\bigg\\{ k^-\\, \\gamma^+ + \\slashed{k}_T + M_j {\\mathbb 1} + \\frac{K_j^2 + \\vect{k}_T^2}{2k^-} \\gamma^- \\bigg\\} \\ ,\n\\end{align}\nwith the ``jet virtuality''\n\\begin{align}\n  K_j^2 = {\\mu_j^2 + \\tau_j^2 + \\textit{g.f.t.}}\n\\end{align}\nreceiving contributions from the invariant mass directly produced in the quark fragmentation process ($\\mu_j^2$, Eq.~\\eqref{e:Kj2_lcg}), from the final state jet broadening ($\\tau_j^2$, Eq.~\\eqref{e:Tjdef}), and from a gauge fixing term [$g.f.t.$].
The latter symbolically represents the potential contributions from a structure proportional to $\\slashed{v} = \\slashed{n}_+$ in Eqs.~\\eqref{e:quark_bilinear_decomp} and~\\eqref{e:Feyn_spec_rep}, which, as discussed, we do not consider further in this article. As is the case for the jet mass $M_j$, Eq.~\\eqref{e:Mj_lcg}, the jet virtuality $K_j^2$ is also a gauge-invariant quantity. \n\nThe expression in brackets in Eq.~\\eqref{e:J_Dirac_explicit} generalizes the familiar term appearing in the numerator of the free quark propagator:\n\\begin{align}\n  \\slashed{k} + m = k^-\\, \\gamma^+ + \\slashed{k}_T + m {\\mathbb 1}\n    +  \\frac{m^2 + \\vect{k}_T^2}{2k^-}\\, \\gamma^- \\ . \n\\end{align}\nWe can see that the current quark mass generalizes to the jet mass, $m \\rightsquigarrow M_j$, and the mass shell generalizes to the jet's virtuality, $m^2 \\rightsquigarrow K_j^2$. Conversely, using the non-interacting propagator's spectral functions $\\rho_{1,3} \\propto \\delta(\\mu^2-m^2)$ in Eqs.~\\eqref{e:Mj_lcg} and~\\eqref{e:Kj2_lcg}, we obtain $M_j=m$; furthermore, neglecting the contribution of the Wilson line and the gauge fixing term, we obtain $K_j^2=m^2$. \n\nOverall, the jet correlator~\\eqref{e:J_Dirac_explicit} can be thought of as a propagating particle of mass $M_j$ which is, however, off the mass shell because its virtuality $K_j^2 = \\mu_j^2  + \\tau_j^2 + \\textit{g.f.t.}$ is in general different from $M_j^2$. However, it may be dangerous to push this interpretation beyond the kinematic level, because the jet mass cannot necessarily be interpreted as a pole mass.\nIn fact, let us consider the non-perturbative Feynman quark propagator in momentum space expressed in terms of a renormalization factor $Z(p^2)$ and a mass function $M(p^2)$~\\cite{Roberts:2007jh,Roberts:2015lja,Siringo:2016jrc,Zwicky:2016lka,Solis:2019fzm}\\footnote{As for the spectral representation in Eq.~\\eqref{e:Feyn_spec_rep}, there is an additional $1\/(2\\pi)^4$ factor with respect to the expression given in e.g.
Refs.~\\cite{Roberts:2007jh,Roberts:2015lja,Siringo:2016jrc,Zwicky:2016lka,Solis:2019fzm} in order to match the convention for the Fourier transform used in Eq.~\\eqref{e:invariant_quark_correlator}.}:\n\\begin{equation}\n\\label{e:SF_mass}    \n\\i S_F(p) =  \\frac{\\i Z(p^2)}{(2\\pi)^4 [\\slashed{p} - M(p^2)]} \\, . \n\\end{equation}\nBy comparing this expression with the spectral representation for the Feynman propagator presented in Eq.~\\eqref{e:Feyn_spec_rep} with $p^0>0$ and using the definition~\\eqref{e:Mj_lcg} of $M_j$ in the light-cone gauge, we find that\n\\begin{align}\n\\label{e:relation_Mj_M}\nM_j \\overset{lcg}{=} \\int_{-\\infty}^{+\\infty} dp^2\\, \\theta(p^2)\\, \\sqrt{p^2}\\, \\rho_1(p^2) = \n\\frac{\\i}{2\\pi} \\int_{-\\infty}^{+\\infty} dp^2\\, \\text{Disc}\\, \\frac{Z(p^2) M(p^2)}{p^2 - M^2(p^2)} \\, . \n\\end{align}\nThis equation relates the gauge-invariant and scale-dependent jet mass $M_j$ in the light-cone gauge and the gauge-dependent and scale-invariant mass function $M(p^2)$.  \nThe scale dependence of the jet mass is provided by the (implicit) scale dependence of the renormalized spectral function $\\rho_1$ on the one hand, and on the other hand is accounted for by the $Z(p^2)$ renormalization function~\\cite{Roberts:2015lja,Roberts:2007jh,Solis:2019fzm}. \nIf the propagator $\\i S_F$ in Eq.~\\eqref{e:SF_mass} had a single pole \nat $p^2=M^2(p^2)\\equiv M^2_p$\nand no branch cut, \nby a simple application of Cutkosky's rule one would obtain $M_j = Z(M_p^2)\\, M_p$ -- i.e., the jet mass could be identified with the renormalized pole mass. In general, however, $M_j$ is summing all the discontinuities of the non-perturbative propagator, hence also over the mass spectrum continuum, and can be different from zero even if no pole, in fact, exists. A more universal interpretation of $M_j$ can be obtained by considering the jet correlator as represented in Eq.~\\eqref{e:invariant_quark_correlator}. 
$M_j$ can then be thought of as the sum over the masses of all physical states overlapping with a quark, weighted by the squared amplitude of the corresponding quark to multi-hadron state transition.\n\n\nFrom a heuristic point of view, \none can think \nof $M_j$ as a gauge-invariant mass scale that characterizes the physics of a {\\em color-averaged} (or {\\em color-screened}) dressed quark. \nIn light of this interpretation, it is possible to subtract from $M_j$ the current quark mass component responsible for the explicit breaking of the chiral symmetry, and isolate a dynamical component generated by quark-gluon interactions and responsible for the dynamical breaking of the chiral symmetry. \nThis suggests the decomposition\n\\begin{equation}\n\\label{e:Mj_decomp}\nM_j = m + m^{\\text{corr}} \\, ,\n\\end{equation} \nwhere $m$ is the current quark mass, $m^{\\text{corr}}$ is the dynamical mass, and all terms have an implicit renormalization scale dependence. In perturbation theory $m^{\\text{corr}}_{\\text{pert}} \\propto m$ vanishes in the chiral $m \\rightarrow 0$ limit. Thus the dynamical mass $m^{\\text{corr}} = M_j - m$ can also be thought of as an order parameter for dynamical chiral symmetry breaking. \nAs will be discussed in detail in Section~\\ref{s:1h_rules}, the decomposition~\\eqref{e:Mj_decomp} is also particularly meaningful in light of the equations of motion that relate the twist-2 and twist-3 fragmentation functions, and we will see that this mass is quantitatively related to quark-gluon-quark correlations. For this reason, in the following we will refer to it as the {\\em correlation mass}.
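The single-pole limit mentioned above, $M_j = Z(M_p^2)\,M_p$ for a propagator with no branch cut, can also be checked numerically by building the discontinuity from the $\i\epsilon$ prescription. In this toy sketch the constant values of $Z$ and $M$ are illustrative choices:

```python
import numpy as np

# Numerical illustration of the single-pole limit: for a propagator with
# constant Z and M (a simple pole, no branch cut), the discontinuity integral
# M_j = (i/2pi) int dp^2 Disc[ Z M / (p^2 - M^2) ] returns M_j = Z*M.
# The values of Z, M, eps are illustrative.
Z, M, eps = 0.7, 0.4, 1e-3
p_sq = np.linspace(-2.0, 2.0, 400_001)
dp   = p_sq[1] - p_sq[0]

f = lambda side: Z * M / (p_sq - M**2 + side * 1j * eps)  # side = +1/-1 of the cut
disc = f(+1) - f(-1)
M_j = (1j / (2 * np.pi)) * np.sum(disc) * dp
print(M_j.real)    # close to Z*M = 0.28
```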
\n\nCrucially, all this discussion is not merely of theoretical interest\nbecause, in fact, $M_j$ and $m^{\\text{corr}}$ can couple to the target's transversity PDF in inclusive DIS processes \\cite{Accardi:2008ne},\nand can furthermore be related to the chiral-odd twist-3 fragmentation functions $E^h$ and $\\widetilde{E}^h$ by momentum sum rules measurable in semi-inclusive processes~\\cite{Accardi:2017pmi,Accardi:2019luo}. \nWe are therefore offered the possibility of comparing calculations of the quark spectral functions, nowadays directly possible in Minkowski space~\\cite{Solis:2019fzm,Siringo:2016jrc}, to experimentally measurable quantities: this is a non-trivial feature in the case of particles such as quarks that do not appear in the physical spectrum of the theory because of color confinement. \n\nIn summary, when investigating the transition of a quark propagating in the QCD vacuum into a set of detectable hadrons in terms of the higher-twist components of the jet correlator~\\eqref{e:J_Dirac_explicit}, one is provided with a rather concrete window on color confinement and the dynamical generation of mass. We will revisit these points in Section~\\ref{sss:DCSB}, after an in-depth discussion of the connection of the inclusive jet correlator with the single-inclusive fragmentation correlator, and the ensuing fragmentation function sum rules.\n\n\n\n\n\n\n\\section{Introduction}\n\\label{s:intro}\n\nOne of the crucial properties of the strong force is confinement, namely the fact that color charged partons seemingly cannot exist as free particles outside of hadrons. 
As a consequence, any individual parton struck in a high-energy scattering process and extracted from its parent hadron must transform into at least one hadron -- in technical language, it must ``hadronize''.\nDuring this process, a struck light quark, such as an up, down or strange, initially propagates as a high-energy but nearly massless colored particle, radiating by chromodynamic {\\it bremsstrahlung} a number of other gluons and light quark-antiquark pairs (the radiation of heavy quarks such as the charm and the bottom is suppressed in proportion to their much higher mass, and can be ignored for the purposes of this discussion). \nBefore reaching the experimental detectors, however, this system of colored, nearly massless particles will turn into a number of massive, color neutral hadrons such as pions, kaons and protons (with overall color charge conservation guaranteed, arguably, by soft final state interactions with the remnant of the parton's parent hadron). Hadronization is thus quite clearly and tightly connected to parton propagation, color charge neutralization, and dynamical generation of the mass, spin, and size of hadrons. However, the exact details of this parton-to-hadrons transition are poorly known. It is the purpose of this article to shed new light on this transition.\n\nUnraveling hadronization dynamics is not only of fundamental importance to understand the emergence and nature of massive visible matter, but also an essential tool in hadron tomography studies at current and future facilities, including the 12 GeV program at Jefferson Lab~\\cite{Dudek:2012vr} and a future US-based Electron-Ion Collider~\\cite{Accardi:2012qut,Aidala:2020mzt}. 
For example, in Semi-Inclusive Deep Inelastic Scattering (SIDIS), measuring the transverse momentum of one of the final state hadrons can crucially provide a handle on the transverse motion of its parent quarks and gluons inside the hadron target~\\cite{Angeles-Martinez:2015sea,Rogers:2015sqa,Bacchetta:2016ccz,Scimemi:2019mlf,Bacchetta:2019sam,Grewal:2020hoc,Bacchetta:2017gcc,Scimemi:2019cmh,Signori:2013mda,Anselmino:2013lza,Boglione:2014oea,Collins:2016hqq,Echevarria:2018qyi}. Understanding the hadronization mechanism is therefore critically important to quantitatively connect the initial, short-scale lepton-quark scattering hidden by confinement, with the measurable properties of hadrons as they hit the detectors.\nHadronization and, more generally, hadron structure are also very important for high-energy physics as they are among the biggest sources of uncertainty in the determination of Standard Model parameters~\\cite{Webber:1999ui,Bozzi:2011ww,Quackenbush:2015yra,Bozzi:2015zja,CarloniCalame:2016ouw,Bacchetta:2018lna,Bozzi:2019vnl,Martinez:2019mwt} and the searches for physics beyond the Standard Model at the LHC~\\cite{Gao:2017yyd,Rojo:2015acz}. \nUnderstanding hadronization is also essential for the study of cold and hot nuclear matter properties by means of jet quenching measurements in electron-nucleus and heavy ion collisions \\cite{Accardi:2009qv, Arratia:2019vju}.\n\nIn high-energy collisions with a large four-momentum transfer, factorization theorems in Quantum Chromodynamics (QCD) allow one to separate the short-distance partonic scattering from the long-distance, non-perturbative dynamics that binds the partons inside the target and detected particles~\\cite{Parisi:1979se,Collins:1981uk,Collins:1989gx,Catani:2000vq,Becher:2010tm,GarciaEchevarria:2011rb,Collins:2011zzd,Echevarria:2012js, Chiu:2012ir,Rogers:2015sqa}. 
\nIn this context, hadronization can be mapped -- and then utilized as a tool -- by means of fragmentation functions (FFs) that quantify the transmutation of a parton into one or more hadrons. FFs can be ``collinear'', namely, depending only on the ratio of the longitudinal momenta of the hadron and the parton, or ``transverse-momentum-dependent'' (TMD), meaning they depend on both the longitudinal and transverse hadron momentum components.\n\nThe fragmentation functions can be determined by means of global QCD fits of hard semi-inclusive collisions. Collinear FFs for unpolarized hadrons are relatively well determined~\\cite{Sato:2016wqj,Bertone:2017tyb,deFlorian:2014xna}, but there is currently no fit available for leading-twist polarized collinear FFs, such as the transversity FF $H_1$. The observation of a polarized hyperon in the final state could, however, shed light on the twist-3 collinear sector~\\cite{Gamberg:2018fwy,Kanazawa:2015jxa}. In the transverse momentum sector, some information is available on the Collins TMD FF, among the polarized ones, as this involves polarized quarks but unpolarized hadrons~\\cite{Kang:2015msa,Kang:2017btw,DAlesio:2017bvu,Anselmino:2013vqa,Anselmino:2015sxa,Anselmino:2015fty}. Lastly, while unpolarized TMD FFs are so far poorly known~\\cite{Matevosyan:2011vj,Bentz:2016rav,Boglione:2017jlh}, present and forthcoming data from the {\\tt BELLE} and {\\tt BES-III} collaborations~\\cite{Garzia:2016kqk,Seidl:2019jei,Seidl:2019FFtalk} will soon allow one to perform fits of these FFs, as well~\\cite{Bacchetta:2015ora,Moffat:2019pci}. 
A comprehensive review on the theory and phenomenology of fragmentation functions, including di-hadron FFs and gluon FFs, can be found in Ref.~\\cite{Metz:2016swz}.\n\nThe behavior of the fragmentation functions can be usefully constrained in a global QCD fit utilizing suitable sum rules~\\cite{Jimenez-Delgado:2013sma}.\nA number of sum rules for single-hadron FFs are documented in the literature~\\cite{Collins:1981uw,Jaffe:1993xb,Mulders:1995dh,Schafer:1999kn,Bacchetta:2006tn,Meissner:2010cc,Accardi:2017pmi}, starting from the well known momentum sum rule for the unpolarized $D_1$ fragmentation function originally introduced in Ref.~\\cite{Collins:1981uw}. A few have also been proposed for di-hadron FFs~\\cite{Konishi:1979cb,deFlorian:2003cg,Majumder:2004br,Metz:2016swz}.\nAs we will see, however, the interest of FF sum rules also extends beyond their application to phenomenological fits, since a few of these are also sensitive to aspects of the non-perturbative QCD dynamics, such as the dynamics of mass generation.\n\n\n\nThe aim of this paper is to develop a field-theoretical formalism enabling us to take a fresh look at quark propagation and hadronization in the QCD vacuum.\nOur strategy is to establish an operator-level master sum rule connecting the quark-to-hadron fragmentation correlator, that describes the transition of a quark into a hadron and an unobserved remnant~\\cite{Mulders:1995dh,Bacchetta:2006tn}, with the ``fully inclusive jet correlator'', that describes the fragmentation of a quark into an unobserved jet of particles~\\cite{Sterman:1986aj,Collins:2007ph,Accardi:2008ne,Accardi:2017pmi}. \nWe do this by generalizing the techniques utilized in Ref.~\\cite{Meissner:2010cc}. \nWe will then systematically exploit this correlator-level sum rule, and derive a complete set of sum rules for hadron spin independent fragmentation functions up to the twist-3 level. 
Results for selected Dirac structures have already been presented in Ref.~\\cite{Accardi:2019luo}; in this work, which also provides full details of our approach, we complete the set of twist-3 sum rules (some of which generalize known results) and comment on their theoretical and phenomenological implications.\n\nWe would like to stress already here that, while the fully inclusive jet correlator also finds an application in the QCD factorization of, {\\em e.g.}, inclusive DIS scattering at large values of the Bjorken $x$ variable~\\cite{Sterman:1986aj,Becher:2006nr,Becher:2006mr,Collins:2007ph,Accardi:2008ne,Accardi:2017pmi,Accardi:2018gmh,Manohar:2003vb,Manohar:2005az,Chen:2006vd,Chay:2005rz,Chay:2013zya}, in this paper we consider this correlator as a theoretical object of intrinsic interest, and as a tool to derive the aforementioned sum rules, {\\it independently} of any scattering process in which it may find application. \nIn fact, as we will see, the inclusive jet correlator can be rewritten as the gauge invariant propagator of a color-averaged quark, and the generation of intermediate hadronic states analyzed in terms of the quark's K\\\"allen-Lehmann spectral functions. The dynamics of mass generation in the quark hadronization process can thus be explicitly connected in a gauge invariant way to the propagation of a quark in the QCD vacuum. \n\nIn Section~\\ref{s:jetcor} we perform a spectral analysis of the jet correlator, that will yield a gauge invariant decomposition in terms of the jet's momentum $k$, its mass $M_j$, and its virtuality $K_j^2$ plus terms associated with the Wilson line that renders the correlator gauge invariant. The starting point of this analysis is the convolutional spectral representation of the gauge invariant quark propagator proposed in Ref.~\\cite{Accardi:2019luo}, see Eq.~\\eqref{e:spectral_convolution}. 
\nWe believe that this spectral representation can also find application beyond the present paper, for example \nin the study of the gauge independence of objects such as the virtuality-dependent parton distributions of Ref.~\\cite{Radyushkin:2016hsy}, that are playing an increasingly important role in the direct lattice QCD calculation of PDFs in momentum space~\\cite{Orginos:2017kos,Joo:2019jct}. \n\nAll the coefficients in the inclusive jet correlator's decomposition are gauge invariant. In particular, this allows us to identify $M_j$ with, and propose a gauge invariant definition of, the mass of a dressed quark. This mass can be calculated in the light-cone gauge as an integral involving the chiral-odd spectral function of the quark propagator (see Section~\\ref{ss:spectr_dec} and Section~\\ref{ss:TMD_J_corr}), and can be considered as an order parameter for the dynamical breaking of chiral symmetry (see Section~\\ref{ss:TMDjet_recap}).\n\nIn Section~\\ref{s:1h_rules}, we derive the master sum rule connecting the unintegrated single-hadron fragmentation correlator to the inclusive jet correlator, see Eq.~\\eqref{e:master_sum_rule}, and from this obtain momentum sum rules for FFs up to twist 3 by suitable Dirac projections. These sum rules are summarized in Section~\\ref{ss:sumrules_summary}, where we extensively comment on their theoretical and phenomenological implications. \nIn particular, we find that the jet mass can be expressed as the sum of the current quark mass, $m$, and an interaction-dependent mass, $m^{\\rm corr}$, which enter, respectively, on the right hand side of the sum rules for the collinear twist-3 $E$ and $\\widetilde E$ FFs, see Section~\\ref{ss:qgq_sumrules}. Measurements of these fragmentation functions, therefore, provide one with a concrete way to experimentally probe the mass generation mechanism in QCD, and to study the dynamical breaking of the chiral symmetry. 
Furthermore, the $E$ and $\\widetilde E$ sum rules provide a way to separate the contribution of each hadron flavor to the overall jet mass, giving one even more insight into these processes.\n\nFinally, in Section~\\ref{s:conclusions} we summarize the results and discuss possible extensions of our work, and in the appendices\nwe provide details about our conventions and the Lorentz transformations of the fragmentation and inclusive jet correlators.\n\n\n\n\n\n\n\\section{Momentum sum rules for single-hadron fragmentation functions}\n\\label{s:1h_rules}\n\n\n\n\n\nIn this section, we will establish a sum rule at the correlator level between the single-hadron fragmentation correlator and the inclusive jet correlator, and systematically exploit this to derive explicit sum rules for fragmentation functions up to the twist-3 level. We will recover known sum rules, and derive a number of new ones. As we will discuss, the interest of these sum rules also extends beyond their application to phenomenological fits. \n\n\n\\subsection{The single-hadron fragmentation correlator}\n\\label{ss:1h_inclusive_FF}\n\nThe unintegrated correlator describing the fragmentation of a quark into a single hadron (or ``single-inclusive\" correlator) is defined as~\\cite{Mulders:1995dh,Goeke:2003az,Bacchetta:2006tn,Meissner:2007rx,Mulders:2016pln,Metz:2016swz,Echevarria:2016scs}\n\\begin{align}\n  \\label{e:1hDelta_corr}\n  & \\Delta^h_{ij}(k,P,S) = \\sum_X \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\frac{\\text{Tr}_c}{N_c} {\\langle\\Omega|} \n  {\\cal T} \\big[ W_1(\\infty,\\xi) \\psi_i(\\xi) \\big] \\, \n  |P S X \\rangle \n  \\langle P S X| \\, \n  {\\cal \\overline{T}} \\big[ \\overline{\\psi}_j(0) W_2(0,\\infty) \\big] {|\\Omega\\rangle}\\, ,\n\\end{align}\nwhere $k$ is the quark's four-momentum, $h$ is an identified hadron with four-momentum $P$ and spin $S$, and $X$ represents the quantum numbers of all unobserved hadrons in the final state. 
\nThe vector $S^\\mu$ is the covariant spin vector associated to the Bloch representation for a hadron with spin $1\/2$~\\cite{Bacchetta:2006tn,Mulders:1995dh}. \nThe remarks given in Section~\\ref{s:jetcor} about the importance of the color average for the definition of the inclusive jet correlator also apply to the fragmentation correlator~\\eqref{e:1hDelta_corr}. A diagrammatic interpretation is given in Figure~\\ref{f:cut_diagrams}(b).\n\nIn the following we will deal only with the correlator describing the fragmentation of a quark into an unpolarized or spinless hadron $\\Delta^h(k,P)$, which is defined as a sum of the polarization-dependent correlator over the polarization of the identified hadron:\n\\begin{align}\n  \\label{e:1hDelta_corr_unpol}\n  & \\Delta^h_{ij}(k,P) = \\sum_{S} \\Delta^h_{ij}(k,P,S) \\ .\n\\end{align}Letting the sum over $S$ act on the right hand side of Eq.~\\eqref{e:1hDelta_corr}, one obtains an explicit definition by substituting $|PSX\\rangle \\langle PSX|$ with $|PX\\rangle \\langle PX|$ in that equation.\n\nLet us now focus on the structure of the $| P X \\rangle$ final state. This is composed of one identified hadron $h$ with momentum $P$ and a remnant $X$. Following the approach of Ref.~\\cite{Levelt:1993ac}, we assume that:\n\\begin{equation}\n\\label{e:completeness_X}\n\\sum_X | X \\rangle \\langle X | = {\\mathbb 1} = \\sum_{n=0}^{+\\infty} {\\mathbb 1}_n \\ .\n\\end{equation}\nThe first equality in Eq.~\\eqref{e:completeness_X} is the completeness relation for the {$|X\\rangle$} states, {\\it i.e.}, the resolution of the identity in terms of the projectors $| X \\rangle \\langle X |$. The second equality decomposes the identity as a sum of identity operators ${\\mathbb 1}_n$ acting in the sub-space spanned by $n$--hadron states. 
These can be explicitly represented as \n\\begin{equation}\n\\label{e:def_1n}\n{\\mathbb 1}_n = \\frac{1}{n!} \\int d\\widetilde{K}_1 \\cdots d\\widetilde{K}_n \na^\\dagger(K_1) \\cdots a^\\dagger(K_n) \n{|\\Omega\\rangle} {\\langle\\Omega|}\na(K_1) \\cdots a(K_n) \\ ,\n\\end{equation}\nwhere, for ease of notation we combined the momentum $K_i$ and flavor $h_i$ of the $i$-th unobserved hadron into a single $\\widetilde K_i$ variable, and $a(\\widetilde K_i)$, $a^\\dagger(\\widetilde K_i)$ are the associated annihilation and creation operators. The integration reads $\\int d\\widetilde K_i \\equiv \\sum_{h_i} \\int d^3K_i \/ [(2\\pi)^3 2 E_i]$, and as before a sum over the hadron spin is understood when the corresponding index is not explicitly written.\nUsing Eqs.~\\eqref{e:completeness_X} and \\eqref{e:def_1n}, we can recast the sum over the projectors $|PX\\rangle \\langle PX|$ as:\n\\begin{align}\n\\nonumber\n\\sum_X | P X \\rangle \\langle P X | & = | P \\rangle \\langle P | \n+ \\int d\\widetilde K_1 a^\\dagger(\\widetilde K_1) | P \\rangle \\langle P | a(\\widetilde K_1) \n+ \\frac{1}{2} \\int d\\widetilde K_1 d\\widetilde K_2 a^\\dagger(\\widetilde K_1) a^\\dagger(\\widetilde K_2) | P \\rangle \\langle P | a(\\widetilde K_1) a(\\widetilde K_2) + \\cdots \\\\\n\\label{e:1h_proj_to_op} &  = a_h^\\dagger \\bigg(  \\sum_{n=0}^{+\\infty} {\\mathbb 1}_n  \\bigg) a_h = a_h^\\dagger a_h \\ , \n\\end{align}\nwhere we have used $a^\\dagger(\\widetilde K_i) {|\\Omega\\rangle} = | \\widetilde K_i \\rangle$, and $a_h^\\dagger {|\\Omega\\rangle} = |P \\rangle$ creates the identified hadron $h$ from the vacuum.\nUsing Eq.~\\eqref{e:1h_proj_to_op}, we obtain\n\\begin{equation}\n\\label{e:1hDelta_aa}\n\\Delta^h_{ij}(k,P) = \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\frac{\\text{Tr}_c}{N_c} {\\langle\\Omega|} \n{\\cal T} \\big[ W_1(\\infty,\\xi) \\psi_i(\\xi) \\big] \\, \n(a_h^\\dagger a_h) \\,\n{\\cal \\overline{T}} \\big[ \\overline{\\psi}_j(0) W_2(0,\\infty) 
\\big] \\, \n{|\\Omega\\rangle} \\ , \n\\end{equation}\nwhere it is understood that $a_h = a_h(P,S)$ and the same for $a_h^\\dagger$. \nFor brevity of notation, in the following we work with the same Wilson lines $W_{1,2}$ we have chosen for the inclusive jet correlator $\\Xi$ and drop the (anti)time-ordering operators.\nThe expansion of this correlator on a basis of Dirac structures can be obtained from the one given in Refs.~\\cite{Bacchetta:2004zf,Goeke:2005hb} for the distribution correlator by replacing the target hadron momentum with the produced hadron momentum, the target mass with the produced hadron mass, by interchanging $n_-$ with $n_+$, and neglecting the structures related to the polarization of the produced hadron:  \n\\begin{align}\n\\label{e:1hDelta_ampl}\n\\Delta^h(k, P) \n&= M_h A_1 {\\mathbb 1} + A_2 \\slashed{P} + A_3 \\slashed{k} + \\frac{A_4}{M_h} \\sigma_{\\mu\\nu} P^\\mu k^\\nu + \\\\\n\\nonumber & + \\frac{M_h^2}{P \\cdot n_+} B_1 \\slashed{n}_+ + \n\\frac{M_h}{P \\cdot n_+} B_2 \\sigma_{\\mu\\nu} P^\\mu {n_+^\\nu} + \n\\frac{M_h}{P \\cdot n_+} B_3 \\sigma_{\\mu\\nu} k^\\mu n_+^\\nu +\n{\\frac{1}{P \\cdot n_+}\\, B_4\\, \\epsilon_{\\mu\\nu\\rho\\sigma}\\, \\gamma^\\mu\\, \\gamma_5\\, P^\\nu\\, k^\\rho\\, n_+^\\sigma} \\ .\n\\end{align}The amplitudes $A_i$ and $B_i$ are functions of the Lorentz scalars $k \\cdot n_+$, $k \\cdot P$, $k^2$. The terms proportional to $n_+$ originate from the path defining the Wilson lines $W_{1,2}$, that provides one additional vector beside $k$ and $P$ with which to carry out the decomposition. These terms generate TMD and collinear structures that appear only at subleading twist~\\cite{Mulders:1995dh,Mulders:2016pln,Bacchetta:2006tn}. \nIn keeping with the conventions of \\cite{Bacchetta:2006tn}, we have introduced a power-counting scale $M_h$ equal to the mass of the identified hadron. 
This choice is not mandatory, and only affects the normalization of the above defined amplitudes and of the related fragmentation functions to be introduced below. For example, a flavor-independent choice of scale, such as $\\Lambda$ used for the jet expansion of the inclusive jet correlator in Eq.~\\eqref{e:Xi_twist_dec}, would slightly simplify a number of the sum rules to be discussed later. However, the present choice of $M_h$ not only agrees with most of the literature on TMD FFs, but also suggests interesting physical interpretations for these sum rules. \nIt is also of interest to note that, formally, Eq.~\\eqref{e:jet_ampl} for the decomposition of the inclusive correlator $\\Xi(k;n_+)$ can be obtained from Eq.~\\eqref{e:1hDelta_ampl} by replacing the hadron four-momentum $P$ with the parton four-momentum $k$, and by replacing the hadron mass $M_h$ with the power counting scale $\\Lambda$.\n\nThe fragmentation process can be studied either in the parton or in the hadron frame~\\cite{Collins:2011zzd,Levelt:1993ac}. The Lorentz transformation between these and its consequences are discussed in detail in Appendix~\\ref{a:frame_dep}\\footnote{See also Ref.~\\cite{Mulders:2016pln} and Section~12.4.1 in Ref.~\\cite{Collins:2011zzd}.}. \nIn the parton frame, defined such that the parton's transverse momentum $\\vect{k}_T=0$, one can interpret the fragmentation correlator as the probability density for the quark to fragment into a hadron of a given flavor $h$ and momentum $P$, with $\\vect{P}_T$ generically non-zero \\cite{Collins:2011zzd,Metz:2016swz}. \nThe parton frame, however, turns out not to be convenient in derivations of factorization theorems and calculations of semi-inclusive cross sections: the partonic momenta are integrated over, and the parton frame axes are not fixed (see Chap. 12 in Ref.~\\cite{Collins:2011zzd}). 
In this case, it is preferable to utilize the hadron frame, where the experimentally observable hadron's 3-momentum determines the $z$ direction, so that $\\vect{P}_T=0$. In this frame, it is the quark's transverse momentum that, in general, has a non-zero value. \n\nSince we are not dealing with a specific scattering process and we want to connect the fragmentation correlator to the invariant quark propagator, we choose to work from now on in the parton frame. \nNamely, we consider a basis in Minkowski space composed of the light-cone $n_+$ and $n_-$ vectors such that $k=k^+ n_+ + k^- n_-$, and two transverse four-vectors $n_1$ and $n_2$. \nThis basis not only determines the coordinates of any four-vector under consideration, but will also be used to define a set of parton-frame TMD fragmentation functions, as we will discuss next.\n\n\nIn calculations of semi-inclusive hadron production cross sections one deals with the fragmentation correlator integrated over the subdominant quark momentum component $k^+$. \nAccording to the convention outlined in Appendix~\\ref{a:conv} this is defined as \n\\begin{align}\n\\label{eq:integratedFFcorr}\n\\Delta^h(z,P_T) \\equiv \\frac{1}{2z} \\int dk^+ \\, \\Delta^h(k, P)_{k^- = P^-\/z } \\ ,\n\\end{align} \nwhich corresponds to:\n\\begin{equation}\n\\label{e:1hDelta_TMDcorr}\n\\Delta^h_{ij}(z,P_T) = \\frac{1}{2z} \\int \\frac{d \\xi^+ d^2 \\vect{\\xi}_T}{(2\\pi)^3} e^{\\i k^- \\xi^+} \n\\frac{\\text{Tr}_c}{N_c}\\, \\text{Disc}\\, {\\langle\\Omega|} \nW_1(\\infty,\\xi) \\psi_i(\\xi) (a_h^\\dagger a_h) \\overline{\\psi}_j(0) W_2(0,\\infty)  \n{|\\Omega\\rangle}_{\\begin{subarray}{l} \\xi^-=0 \\\\ k^- = P^-\/z \\end{subarray}}  \\ . 
\n\\end{equation}\nIn the parton frame, this can be expanded in Dirac structures and parametrized in terms of TMD FFs up to twist 3 as: \n\\begin{align}\n\\label{e:1hDelta_TMDcorr_param}\n\\Delta^h(z,P_T) & = \n\\frac{1}{2} \\slashed{n}_- D^h_1(z,P_T^2)  \n- \\i \\frac{ \\big[ \\slashed{P}_T, \\slashed{n}_- \\big]}{4 z M_h} H_1^{\\perp\\, h}(z,P_T^2)\n+ \\frac{M_h}{2 P^-} E^h(z,P_T^2) \n\\\\\n\\nonumber & \n- \\frac{\\slashed{P}_T}{2 z P^-} D^{\\perp\\, h}(z,P_T^2) \n+ \\frac{\\i M_h}{4 P^-} \\big[ \\slashed{n}_-, \\slashed{n}_+ \\big] H^h(z,P_T^2)\n- \\frac{1}{2 z P^-} \\gamma_5 \\epsilon_T^{\\rho\\sigma} \\gamma_\\rho {P_T}_\\sigma G^{\\perp\\, h}(z,P_T^2) \\ ,\n\\end{align}\nwhere the Dirac structures and the TMDs explicitly depend on the hadron momentum $P_T$. It is important to note that this decomposition of the TMD fragmentation correlator depends on the choice of the light cone basis vectors, in our case the parton-frame basis discussed above, even though for simplicity of notation no explicit index is introduced to remind us of this fact.\nIn Refs.~\\cite{Mulders:1995dh,Bacchetta:2006tn,Metz:2016swz} an analogous decomposition is given, instead, in the hadron frame, as is standard procedure in the TMD literature. \nThe relation between the hadron- and parton-frame decomposition, and therefore between the hadron- and parton-frame TMD fragmentation functions, is discussed in detail in Appendix~\\ref{a:frame_dep}. 
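As a consistency check of the operator identity $\sum_X | P X \rangle \langle P X | = a_h^\dagger a_h$ derived in Eq.~\eqref{e:1h_proj_to_op}, which underlies the definition~\eqref{e:1hDelta_aa} of the unpolarized fragmentation correlator, the completeness argument can be verified numerically in a toy, truncated single-mode Fock space. This is only a sketch with matrix ladder operators; the multi-mode, multi-flavor case of Eq.~\eqref{e:def_1n} proceeds in the same way:

```python
import numpy as np
from math import factorial

# Truncated single-mode bosonic Fock space: basis |0>, |1>, ..., |D-1>.
D = 8
a = np.diag(np.sqrt(np.arange(1, D)), k=1)   # annihilation: a|n> = sqrt(n) |n-1>
adag = a.T                                   # creation operator a^dagger

vac = np.zeros(D)
vac[0] = 1.0
proj_vac = np.outer(vac, vac)                # the projector |0><0|

# 1_n = (1/n!) (a^dagger)^n |0><0| a^n projects onto the n-particle subspace;
# summing over n resolves the identity (completeness of the Fock basis).
one = sum(
    np.linalg.matrix_power(adag, n) @ proj_vac @ np.linalg.matrix_power(a, n)
    / factorial(n)
    for n in range(D)
)
assert np.allclose(one, np.eye(D))

# Sandwiching the resolved identity between a^dagger and a reproduces the
# number operator a^dagger a -- the single-mode analogue of a_h^dagger a_h.
assert np.allclose(adag @ one @ a, adag @ a)
print("completeness and a^dagger (sum_n 1_n) a = a^dagger a verified")
```

The second assertion is precisely the mechanism by which the sum over unobserved states $X$ collapses to the single operator insertion $a_h^\dagger a_h$ in Eq.~\eqref{e:1hDelta_aa}.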
\n\nEq.~\\eqref{e:1hDelta_TMDcorr_param} can also be re-arranged as a sum of terms with definite rank in $P_T$~\\cite{Boer:2016xqr}:\n\\begin{equation}\n\\label{e:1hDelta_r0_r1}\n\\Delta^h(z,P_T) = \\Delta_0^h(z,P_T^2) + {P_T}_\\alpha\\, \\Delta_1^{h\\, \\alpha}(z,P_T^2) \\, , \n\\end{equation}\nwhere\n\\begin{align}\n\\label{e:1hDelta_r0}\n& \\Delta_0^h(z,P_T^2) = \n\\frac{1}{2} \\slashed{n}_- D^h_1(z,P_T^2) + \n\\frac{M_h}{2 P^-} E^h(z,P_T^2) + \n\\frac{\\i M_h}{4 P^-} \\big[ \\slashed{n}_-, \\slashed{n}_+ \\big] H^h(z,P_T^2) \\, , \\\\\n\\label{e:1hDelta_r1}\n& \\Delta_1^{h\\, \\alpha}(z,P_T^2) = \n- \\i \\frac{ \\big[ \\gamma_T^\\alpha, \\slashed{n}_- \\big]}{4 z M_h} H_1^{\\perp\\, h}(z,P_T^2)\n- \\frac{\\gamma_T^\\alpha}{2 z P^-} D^{\\perp\\, h}(z,P_T^2) \n- \\frac{1}{2 z P^-} \\gamma_5 \\epsilon_T^{\\rho\\alpha} \\gamma_\\rho G^{\\perp\\, h}(z,P_T^2) \\, .\n\\end{align}\nThe subscript $0,1$ refers to the rank in $P_T$ of the associated structures~\\cite{Boer:2016xqr} and, in the following, we will collectively refer to $D_1^h$, $E^h$, $H^h$ as the rank 0 TMD FFs, and to $H_1^{\\perp\\, h}$, $D^{\\perp\\, h}$, $G^{\\perp\\, h}$ as the rank 1 TMD FFs. \nFrom Eq.~\\eqref{e:1hDelta_r0_r1} one can see that~\\cite{Bacchetta:2006tn}\n\\begin{equation}\n\\label{e:1hDelta_integrated}\n\\Delta^h(z) = \\int d^2 \\vect{P}_T\\, \\Delta^h(z,P_T) = \\int d^2 \\vect{P}_T\\, \\Delta_0^h(z,P_T) \\, , \n\\end{equation}\nwhere the collinear fragmentation correlator $\\Delta^h(z)$ is defined as\n\\begin{equation}\n\\label{e:1hDelta_coll_op}\n\\Delta^h_{ij}(z) = \\frac{z}{2} \\int \\frac{d \\xi^+ }{2\\pi} e^{\\i \\xi^+ P^-\/z} \n\\frac{\\text{Tr}_c}{N_c}\\, \\text{Disc}\\, {\\langle\\Omega|} \nW_1(\\infty,\\xi) \\psi_i(\\xi) (a_h^\\dagger a_h) \\overline{\\psi}_j(0) W_2(0,\\infty)  \n{|\\Omega\\rangle}_{\\begin{subarray}{l} \\xi^-=\\vect{\\xi}_T=0 \\\\ P_T=0 \n\\end{subarray}}  \\ . 
\n\\end{equation}\nMoreover, $\\Delta^h(z)$ can be parametrized as~\\cite{Bacchetta:2006tn}:\n\\begin{equation}\n\\label{e:1hDelta_coll}\n\\Delta^h(z) = \n\\frac{1}{2} \\slashed{n}_- D^h_1(z) + \n\\frac{M_h}{2 P^-} E^h(z) + \n\\frac{\\i M_h}{4 P^-} \\big[ \\slashed{n}_-, \\slashed{n}_+ \\big] H^h(z) \\, .\n\\end{equation}\nThe rank-1 term $\\Delta_1^{h\\, \\alpha}(z,P_T^2)$ in Eq.~\\eqref{e:1hDelta_r0_r1} does not contribute at the collinear level~\\eqref{e:1hDelta_integrated} since the explicit ${P_T}_\\alpha$ factor sets the associated integral over $\\vect{P}_T$ to zero. \nFor this reason, we will only be able to derive constraints on the integral of rank-0 TMD FFs, but not of rank-1 FFs (see Section~\\ref{ss:mu_transverse}). On the contrary, by weighting $\\Delta^h(z,P_T)$ by $P_T^\\alpha$ and integrating over the transverse momentum one can obtain sum rules for the first moment of rank-1 TMD FFs. \nNote that Eq.~\\eqref{e:1hDelta_integrated}, relating the collinear correlator to the TMD correlator integrated over $\\vect{P}_T$, is only valid for bare, {\\it i.e.}, non-renormalized fields in perturbative QCD~\\cite{Collins:2011zzd}.\\footnote{The identification of an integrated TMD FF with the corresponding collinear FF is also valid in certain models of QCD, where the integration over the transverse momentum can be regularized by introducing a phenomenological scale to suppress the large momentum region. An example is the parton model in Gaussian approximation~\\cite{Signori:2013mda,Anselmino:2013lza}. 
Other examples include the spectator diquark model~\\cite{Bacchetta:2008af} and the Nambu--Jona-Lasinio model~\\cite{Matevosyan:2011vj}.} \nThe same is also true when one identifies the integrated TMD FFs $f = D_1^h, E^h, H^h$ with their collinear counterparts in Eq.~\\eqref{e:1hDelta_coll}, \\textit{i.e.}, takes $f(z) \\ \\equiv\\ \\int d^2 \\vect{P}_T f(z,P_T^2)$.\n\n\n\\subsection{Connection between the fragmentation and jet correlators}\n\\label{ss:sum_operator}\n\nWe can now discuss a momentum sum rule connecting the unintegrated single-hadron fragmentation correlator to the inclusive jet correlator.\nWe work in the context of field theory, taking inspiration from, but generalizing, the strategy outlined in Ref.~\\cite{Meissner:2010cc}.\nIn particular, the authors of that reference directly manipulate the $k^+$-integrated TMD correlator, without introducing the notion of the jet correlator, and limit their attention to a restricted number of Dirac structures.\nInstead, we prove the sum rule at the level of unintegrated correlators, then specialize this to the TMD correlators, and from there derive the sum rules for the fragmentation functions. As a result, we are able to extend the formalism to include all twist-2 and twist-3 FFs. Let us also stress from the outset that, as discussed in Ref.~\\cite{Meissner:2010cc}, the proof is only valid for unpolarized correlators and FFs. In the polarized case one would just obtain trivial identities.\nThe methods utilized here have also been used in Ref.~\\cite{Anselmino:2011ss} to prove momentum sum rules for the quark fracture functions and reduce these to parton distribution functions, but without considering Wilson line insertions as we do, instead, in this paper. \n\nOur starting points are the definitions of the unintegrated correlators $\\Xi$ and $\\Delta^h$ in Eq.~\\eqref{e:invariant_quark_correlator} and Eq.~\\eqref{e:1hDelta_corr}, respectively. 
\nLet us then consider the following quantity:\n\\begin{equation}\n\\label{e:average_Ph}\n\\sum_h \\sum_{S} \n\\int \\frac{d^4 P}{(2\\pi)^4}\\, (2\\pi) \\delta(P^2 - M_h^2) \nP^\\mu \\Delta^h(k,P,S)  \\, .\n\\end{equation}\nThe integration is performed in Minkowski space over the on-shell momentum $P$ of the detected hadron with mass $M_h$, and the sum extends to all hadron spin states and species. Eq.~\\eqref{e:average_Ph} can be loosely understood as providing one with the average four-momentum of the produced hadrons, if one considers $\\Delta$ as a probability distribution in hadron momentum and spin. This interpretation becomes explicit in the parton frame for the $\\gamma^+$ projection of the $k^+$-integrated $\\Delta$ correlator \\cite{Mulders:2016pln}. \n\nLet us now introduce the $\\hat {\\vect P}_h^\\mu$ hadronic momentum operator associated to the vector $P^\\mu$ of the identified hadron in the framework of second quantization~\\cite{Weinberg:1995mt,Collins:1981uw}:\n\\begin{equation}\n\\label{e:Ph_op_1}\n\\hat{\\vect{P}}_h^\\mu = \\sum_{S} \\int \\frac{dP^- d^2 \\vect{P}_T}{2P^- (2\\pi)^3} P^\\mu\\, \\hat{a}_h^\\dagger(P,S) \\hat{a}_h(P,S) \n\\end{equation}\nas well as the inclusive $\\hat{\\vect P}^\\mu$ momentum operator, that also appears in Eq.~(4.25) of Ref.~\\cite{Collins:1981uw} and in Ref.~\\cite{Meissner:2010cc}:\n\\begin{equation}\n\\label{e:P_op_1}\n\\hat{\\vect{P}}^\\mu = \\sum_h \\hat{\\vect{P}}_h^\\mu \\ . 
\n\\end{equation}\nUsing Eqs.~\\eqref{e:Ph_op_1} and \\eqref{e:P_op_1}, the average four-momentum defined in Eq.~\\eqref{e:average_Ph} can be further manipulated:\n\\begin{align}\n\\label{e:unintegrated_sum_rule}\n\\nonumber\n\\sum_{h\\, S} \n\\int \\frac{d^4 P}{(2\\pi)^4}\\, (2\\pi) \\delta(P^2 - M_h^2) \nP^\\mu \\Delta^h(k,P,S)  & = \n\\sum_{h\\, S} \n\\int \\frac{dP^- d^2 \\vect{P}_T}{(2\\pi)^3 2P^-} P^\\mu \n\\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \n{\\langle\\Omega|} W_1 \\psi_i(\\xi)(a_h^\\dagger a_h) \\overline{\\psi}_j(0) W_2 {|\\Omega\\rangle} \\\\\n& = \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \n{\\langle\\Omega|} W_1(\\infty,\\xi) \\psi_i(\\xi) \\hat{\\vect{P}}^\\mu\\, \\overline{\\psi}_j(0) W_2(0,\\infty) {|\\Omega\\rangle} \\\\\n\\nonumber\n& = \\int \\frac{d^4 \\xi}{(2\\pi)^4} e^{\\i k \\cdot \\xi} \\,\n\\i \\frac{\\partial}{\\partial \\xi_\\mu}\n\\bigg\\{ {\\langle\\Omega|} W_1(\\infty,\\xi) \\psi_i(\\xi) \\overline{\\psi}_j(0) W_2(0,\\infty) {|\\Omega\\rangle} \\bigg\\} \\ ,\n\\end{align}\nwhere, for brevity, we have omitted the color traces. The last step can be justified as follows:\n\\begin{align}\n\\label{e:comm_prop_P}\n{\\langle\\Omega|} W(\\infty,\\xi) \\psi(\\xi) \\hat{\\vect{P}}^\\mu & = \n{\\langle\\Omega|} \\big[ W(\\infty,\\xi) \\psi(\\xi)\\, , \\hat{\\vect{P}}^\\mu \\big] = \n{\\langle\\Omega|} W(\\infty,\\xi) \\big[ \\psi(\\xi)\\, , \\hat{\\vect{P}}^\\mu \\big] + \n{\\langle\\Omega|} \\big[ W(\\infty,\\xi)\\, , \\hat{\\vect{P}}^\\mu \\big] \\psi(\\xi) \\\\\n\\nonumber\n& = {\\langle\\Omega|} W(\\infty,\\xi) \\bigg( \\i \\frac{\\partial}{\\partial \\xi_\\mu} \\psi(\\xi) \\bigg) + \n{\\langle\\Omega|} \\bigg( \\i \\frac{\\partial}{\\partial \\xi_\\mu} W(\\infty,\\xi) \\bigg) \\psi(\\xi) = \n\\i \\frac{\\partial}{\\partial \\xi_\\mu} \\bigg\\{ {\\langle\\Omega|} W(\\infty,\\xi) \\psi(\\xi) \\bigg\\} \\ . 
\n\\end{align}\nFinally,  \nintegrating by parts, we obtain the master result of this section,\n\\begin{equation}\n\\label{e:master_sum_rule}\n\\sum_h \\sum_{S} \n\\int \\frac{d^4 P}{(2\\pi)^4}\\, (2\\pi) \\delta(P^2 - M_h^2) \nP^\\mu \\Delta^h(k,P,S) = \nk^\\mu\\, \\Xi^{n.c.}(k) \\ , \n\\end{equation}\nwhere the boundary terms have vanished because of the boundary conditions for the fermionic fields, and the {\\it n.c. (no cut)} label on the r.h.s. means that we are not calculating the discontinuity of the inclusive jet correlator. \nIt is important to note that the derivation of Eq.~\\eqref{e:master_sum_rule} holds true even with the (anti)time-ordering operators explicit, i.e., the specified choice for $W_{1,2}$ is not a necessary condition for the sum rule (but it is necessary for the spectral representation of $\\Xi$ discussed in Section~\\ref{ss:spectr_dec}).\n\nThe master sum rule~\\eqref{e:master_sum_rule} encodes the connection between the quark-to-single-hadron fragmentation correlator and the jet correlator without the discontinuity, which coincides with the gauge-invariant color-averaged dressed quark propagator. The Dirac projections of its discontinuity give rise to the sum rules for collinear and TMD fragmentation functions that will be discussed in detail in the remainder of this section.\nNote that, since Eq.~\\eqref{e:master_sum_rule} involves a sum over the hadron spins, all the polarized structures in the $\\Delta^h$ correlator vanish. Thus, we will be able to prove sum rules for unpolarized fragmentation functions only. \nPreliminary results on these FF sum rules have been presented at various conferences~\\cite{Accardi:2018gmh}, and the unpolarized case has been discussed in Ref.~\\cite{Accardi:2019luo}. \n\n\n\n\\subsection{Sum rules for rank 0 fragmentation functions}\n\\label{ss:mu_minus}\n\nWe now specialize the master sum rule~\\eqref{e:master_sum_rule} to the TMD case in the parton frame.
We start with the rank 0 term, defined in Eq.~\\eqref{e:1hDelta_r0_r1}, which can be selected by choosing $\\mu=-$.\nWe then consider the discontinuity of the sum rule, integrate both sides on the suppressed plus component of the partonic momentum, and choose the parton frame ($k_T = 0$).\nWe also exploit the relation\n\\begin{equation}\n\\label{e:measure}\n\\int \\frac{d^4 P}{(2\\pi)^3} \\delta(P^2 - M_h^2) = \n\\int \\frac{dP^- d^2 \\vect{P}_T}{2P^- (2\\pi)^3} = \n\\int \\frac{dz d^2 \\vect{P}_T}{2z (2\\pi)^3} \\ ,\n\\end{equation}\nand obtain: \n\\begin{equation}\n\\label{e:long_sr_operator_1}\n\\sum_{h\\, S} \\int \\frac{dz d^2 \\vect{P}_T}{2z (2\\pi)^3} P^- \\int dk^+ \\,\n\\text{Disc}\\, [\\Delta^h(k,P,S)]_{\\begin{subarray}{l} P^- = z k^- \\\\ k_T = 0 \\end{subarray}} = \nk^- \\int dk^+\\, \\text{Disc}\\, [\\Xi^{n.c.}(k)]_{k_T = 0} \\ .\n\\end{equation}\nThis equation can be rewritten in terms of the collinear fragmentation correlator~\\eqref{e:1hDelta_integrated} and the jet correlator Eq.~\\eqref{e:J_TMDcorr}.  \nThe result is:\n\\begin{align}\n\\label{e:long_sr_operator_2}\n\\sum_{h\\, S} \\int dz\\, z\\, \\Delta^h(z) = \n\\sum_{h\\, S} \\int dz\\, d^2 \\vect{P}_T \\, z\\, \\Delta_0^h(z,P_T) = \n2(2\\pi)^3 J(k^-,\\vect{0}_T) \\ .\n\\end{align}\nNote that only the hadron spin-independent part of $\\Delta^h(z)$ survives in~\\eqref{e:long_sr_operator_2}. \nConsidering now the Dirac projections of the correlators on both sides, we can turn this into momentum sum rules for the collinear FFs in Eq.~\\eqref{e:1hDelta_coll}. \nThere are, in general, 9 Dirac projections $\\Delta^{[\\Gamma]}$ and $J^{[\\Gamma]}$ \ninvolving twist-2 and twist-3 functions, of which only three are relevant for the rank 0 case:\\footnote{The structures $\\Gamma=\\{ \\gamma^-\\gamma_5 ,\\, \\i\\gamma_5 ,\\,  \\i\\sigma^{-+}\\gamma_5 \\}$ project polarized TMD fragmentation functions out of $\\Delta$. 
Since we are summing over the hadron polarization states in Eq.~\\eqref{e:average_Ph}, these contributions vanish; on the contrary, these structures do not appear in $J$ from the very beginning because of parity invariance. The projections for the other 3 Dirac structures $\\Gamma=\\{ \\i\\sigma^{i-}\\gamma_5 ,\\, \\gamma^i ,\\, \\gamma^i\\gamma_5 \\}$ produce the trivial result $0=0$.} \n\\begin{align}\n\\label{e:sumrule_D1}\n[\\, \\Gamma = \\slashed{n}_+ \\, ] \\ \\ \\ \\ \n& \\sum_{h\\, S} \\int dz  z\\,  D_1^{h}(z) = 1 \\ ,\t\\\\\n\\label{e:sumrule_E}\n[\\, \\Gamma = {\\mathbb 1} \\, ] \\ \\ \\ \\\n& \\sum_{h\\, S} \\int dz M_h E^{h}(z) = M_j \\ , \\\\\n\\label{e:sumrule_H}\n[\\, \\Gamma = \\i \\sigma^{\\mu\\nu} {n_i}_\\mu {n_j}_\\nu \\gamma_5  \\, ] \\ \\ \\ \\ \n& \\sum_{h\\, S} \\int dz  M_h H^{h}(z) = 0 \\ .\n\\end{align} \nTo obtain the result for the collinear $D_1$ and $E$ FFs we have used Eqs.~\\eqref{e:alpha_calc_2} and~\\eqref{e:zeta_calc_2} with $\\theta(k^-)=1$ because $k^-$ is positive by four-momentum conservation ($k^-$ is equal to the sum of the minus momenta of all produced hadrons, and these are physical on-shell particles). \n\nRenormalization is known to preserve Eq.~\\eqref{e:sumrule_D1}~\\cite{Collins:1981uw,Collins:2011zzd}.\nThe renormalization of $E^h(z)$ and its moments has been discussed in Refs.~\\cite{Belitsky:1996hg,Belitsky:1997ay} at leading order in the strong coupling and in the large $N_c$ limit, using the light-cone gauge and neglecting current quark mass contributions. \nOne can then infer an approximate evolution equation for $M_j$.\nInstead, the renormalization of $H^h$, which is directly connected to a three-parton correlation function~\\cite{Metz:2016swz}, has not yet been addressed, to our knowledge.\nNevertheless, the derivations of these sum rules are rooted in the conservation of the partonic four-momentum encoded in Eq.~\\eqref{e:master_sum_rule} and in the symmetry properties of the correlators $\\Xi$ and $\\Delta^h$.
Therefore, we expect Eqs.~\\eqref{e:sumrule_D1}-\\eqref{e:sumrule_H} and all the other sum rules discussed in this paper to be valid \\emph{in form} also at the renormalized level in perturbative QCD.\n\n\n\n\nThe normalization~\\eqref{e:rho13_positivity_rho3sumrule} of the spectral function $\\rho_3$ -- which is a direct consequence of the equal-time anticommutation relations for the fermion fields~\\cite{Weinberg:1995mt,Solis:2019fzm} -- is crucial to obtain the well-known {\\it momentum sum rule}~\\eqref{e:sumrule_D1} for the unpolarized fragmentation function $D_1^h$, which was originally proven without reference to the jet correlator~\\cite{Collins:1981uw,Mulders:1995dh}. \nAn experimental verification of this sum rule is therefore also an indirect check of the validity of the K\\\"all\\'en-Lehmann spectral representation. \nIt is also interesting to note that Eq.~\\eqref{e:sumrule_D1} allows one to write the unpolarized ``inclusive jet function'' and ``energy-energy correlation jet function'' introduced in the context of Soft-Collinear Effective Theory as, respectively, the matching coefficients of the fragmenting jet functions onto the collinear FFs~\\cite{Procura:2009vm,Jain:2011xz} and of the TMD FFs onto the collinear FFs~\\cite{Moult:2018jzp,Luo:2019hmp}.\n\n\nThe chiral-odd sum rule~\\eqref{e:sumrule_E}, which generalizes the one discussed in Refs.~\\cite{Jaffe:1993xb,Jaffe:1996zw}, can be called a {\\em ``mass sum rule''} because of its physical interpretation: the non-perturbative jet mass $M_j$ corresponds to the sum of the masses of all possible particles produced in the hadronization of the quark, weighted by the chiral-odd collinear twist-3 fragmentation function $E^h(z)$.
\nIn looser terms, $M_j$ can be interpreted as the average mass of the hadronization products.\n\nFinally, the sum rule~\\eqref{e:sumrule_H} is, to our knowledge, new.\n\n\n\n\n\\subsection{Sum rules for rank 1 fragmentation functions}\n\\label{ss:mu_transverse}\n\nLet us now specify the master sum rule to the case of the rank 1 correlator $\\Delta_{1}^{h\\, \\alpha}(z,P_T)$ defined in Eq.~\\eqref{e:1hDelta_r1}. This can be selected by choosing  $\\mu = \\alpha = 1,2$ in Eq.~\\eqref{e:master_sum_rule}. Since we are working in the parton frame  \nwhere $k_T=0$, this reads:\n\\begin{equation}\n\\label{e:sumrule_transverse}\n\\sum_{h\\, {S}} \\int dz\\ d^2 \\vect{P}_T\\, P_T^\\alpha\\, \\Delta^h(z,P_T) = \n\\sum_{h\\, {S}} \\int dz\\ d^2 \\vect{P}_T\\, P_T^\\alpha\\, {P_T}_\\rho\\, \\Delta_1^{h\\, \\rho}(z,P_T) =\n0 \\, . \n\\end{equation}\nThis result can also be achieved directly from Eq.~\\eqref{e:unintegrated_sum_rule} choosing the parton frame, performing the integration explicitly with $\\mu$ transverse index, and assuming that the fermion fields vanish at the boundary of space~\\cite{Meissner:2010cc}. \n\nUsing the correspondence between symmetric traceless tensors built with the transverse momentum and complex numbers outlined in Appendix~\\ref{a:conv} and Ref.~\\cite{Boer:2016xqr}, and the relation~\\cite{Bacchetta:2006tn,Mulders:2016pln}\n\\begin{equation}\n\\label{e:NT_eps_relation}\nP_T^{[ \\alpha} \\epsilon_T^{\\rho\\sigma ]} {P_T}_\\rho = P_T^2 \\epsilon_T^{\\alpha\\rho} \\ ,\n\\end{equation}\nwe can calculate the Dirac projections of Eq.~\\eqref{e:sumrule_transverse}, based on the parametrization given in Eq.~\\eqref{e:1hDelta_r1}. 
The result reads\n\\begin{align}\n\\label{e:sumrule_H1p}\n[\\, \\Gamma = -\\i \\sigma^{\\mu\\nu} {n_i}_\\mu {n_+}_\\nu \\gamma_5 \\, ] \\ \\ \\ \\ \n& \\sum_{h\\, S} \\int dz z M_h\\, H_1^{\\perp\\, (1)\\, h}(z) = 0 \\ , \\\\\n\\label{e:sumrule_Dp}\n[\\, \\Gamma = -\\slashed{n}_i \\, ] \\ \\ \\ \\ \n& \\sum_{h\\, S} \\int dz M_h^2\\, D^{\\perp\\, (1)\\, h}(z) = 0 \\ , \\\\\n\\label{e:sumrule_Gp}\n[\\, \\Gamma = -\\slashed{n}_i \\gamma_5 \\, ] \\ \\ \\ \\ \n& \\sum_{h\\, S} \\int dz M_h^2\\, G^{\\perp\\, (1)\\, h}(z) = 0 \\ , \n\\end{align}\nwhere we have defined the first $P_T$-moment of a generic fragmentation function $D$ as (see Appendix~\\ref{a:frame_dep}): \n\\begin{equation}\n\\label{e:def_Php_mom}\nD^{(1)}(z) = \\int d^2 \\vect{P}_T \\frac{\\vect{P}_T^2}{2 z^2 M_h^2} D(z,P_T^2) \\ .\n\\end{equation}\nAs in Section~\\ref{ss:mu_minus}, the remaining Dirac projections yield the trivial result $0=0$. The sum rule~\\eqref{e:sumrule_H1p} for $H_1^{\\perp\\, h}$ is also known as the Sch\\\"afer-Teryaev sum rule~\\cite{Schafer:1999kn,Meissner:2010cc}. \nWe already discussed the sum rule for $D^{\\perp\\, h}$ in Ref.~\\cite{Accardi:2019luo}, and that for $G^{\\perp\\, h}$ is new.\nThe QCD evolution of the first moment of $H_1^{\\perp\\, h}$ has been discussed in Ref.~\\cite{Kang:2010xv}, but no statement is available on the validity of the Sch\\\"afer-Teryaev sum rule under renormalization. Nevertheless, the sum rules for the T-odd FFs $H_1^{\\perp\\, h}$, $G^{\\perp\\, h}$, and $H^h$ are a consequence of the absence of T-odd terms in the inclusive jet correlator, a feature that should be preserved under renormalization, too.
Checking this by explicit calculation remains an interesting exercise for the future.\n\n\n\n\\subsection{Sum rules for dynamical twist-3 fragmentation functions}\n\\label{ss:qgq_sumrules}\n\nLet us now consider the equation-of-motion relations (EOMs) which relate twist-2 and twist-3 fragmentation functions in the parton frame:\n\\begin{align}\n  \\label{e:eom_E}\n  & E^h = \\widetilde{E}^h + z \\frac{m}{M_h} D^h_1 \\\\\n  \\label{e:eom_H}\n  & H^h = \\widetilde{H}^h - \\frac{\\vect{P}_T^2}{z M_h^2} H_1^{\\perp\\, h} \\\\\n  \\label{e:eom_Dp}\n  & D^{\\perp\\, h} = \\widetilde{D}^{\\perp\\, h} + z D^h_1 \\\\\n  \\label{e:eom_Gp}\n  & G^{\\perp\\, h} = \\widetilde{G}^{\\perp\\, h} + z \\frac{m}{M_h} H_1^{\\perp\\, h} \\ ,\n\\end{align}\nwhere the functions with a tilde parametrize the twist-3 $\\widetilde{\\Delta}_A^\\alpha$ quark-gluon-quark correlator~\\cite{Bacchetta:2006tn}, and $m$ is the current mass of the specific quark considered. These relations, which are a consequence of the Dirac equation for the quark field, were originally presented in the hadron frame~\\cite{Tangerman:1994bb,Mulders:1995dh} (see also Ref.~\\cite{Bacchetta:2006tn}); in Appendix~\\ref{a:frame_dep} we discuss their transformation to the parton frame.
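\nTo see the mechanism at work in the simplest case, one can take the $\\sum_{h\\, S} \\int dz\\, M_h$ moment of Eq.~\\eqref{e:eom_E}: using the sum rules~\\eqref{e:sumrule_D1} and~\\eqref{e:sumrule_E} for $D_1^h$ and $E^h$, one finds\n\\begin{equation*}\n\\sum_{h\\, S} \\int dz\\, M_h\\, E^{h}(z) = \\sum_{h\\, S} \\int dz\\, M_h\\, \\widetilde{E}^{h}(z) + m \\sum_{h\\, S} \\int dz\\, z\\, D_1^{h}(z)\n\\quad \\Longrightarrow \\quad\nM_j = \\sum_{h\\, S} \\int dz\\, M_h\\, \\widetilde{E}^{h}(z) + m \\ ,\n\\end{equation*}\nwhich isolates the interaction-dependent part of the jet mass. The same manipulation, applied to the remaining EOMs, yields the complete set of twist-3 sum rules.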
\n\nThe Eqs.~\\eqref{e:eom_E}-\\eqref{e:eom_Gp} allow us to investigate the momentum sum rules for the ``dynamical\" twist-3 FFs (those with a tilde) without explicitly working with  the quark-gluon-quark fragmentation correlator $\\widetilde{\\Delta}_A^\\alpha$ and the quark-gluon-quark inclusive jet correlator $\\widetilde{J}_A^\\alpha$ introduced in Ref.~\\cite{Accardi:2017pmi}.\nIndeed, combining the four EOMs with the sum rules discussed in Section~\\ref{ss:mu_minus} and Section~\\ref{ss:mu_transverse} we obtain\n\\begin{align}\n\\label{e:sumrule_Et}\n& \\sum_{h\\, S} \\int dz M_h \\widetilde{E}^{h}(z) = M_j - m = m^{\\text{corr}}\\\\\n\\label{e:sumrule_Ht}\n& \\sum_{h\\, S} \\int dz M_h \\widetilde{H}^{h}(z) = 0 \\\\\n\\label{e:sumrule_Dtp}\n& \\sum_{h\\, S} \\int dz M_h^2 \\widetilde{D}^{\\perp\\, (1)\\, h}(z) = \n- \\sum_{h\\, S} \\int dz z\\, M_h^2 D_1^{(1)\\, h}(z) \\ \\equiv\\\n\\frac{1}{2} \\langle P_{T}^2 \/ z^2 \\rangle \\\\\n\\label{e:sumrule_Gtp}\n& \\sum_{h\\, S} \\int dz M_h^2 \\widetilde{G}^{\\perp\\, (1)\\, h}(z) = 0 \\ ,\n\\end{align}\nwhich provide a complete set of sum rules for the four unpolarized dynamical twist-3 FFs. As with the twist-2 case, no sum rule can be established for polarized FFs.\n\nEq.~\\eqref{e:sumrule_Et} is the generalization of the sum rule $\\int dz \\widetilde{E} = 0$ discussed in Ref.~\\cite{Bacchetta:2006tn}. \nThis generalization is based on the fact that the jet mass $M_j$ differs in general from the current quark mass by an amount $m^{\\text{corr}}$ (the correlation mass introduced in Eq.~\\eqref{e:Mj_decomp}) which we argued is non-perturbatively generated by quark-gluon-quark correlations and the dynamical breaking of the chiral symmetry. In the FF correlator $\\widetilde{\\Delta}_A^\\alpha$, the chiral odd component of these correlations is parametrized by the $\\widetilde E$ function~\\cite{Bacchetta:2006tn}, that also provides a flavor decomposition for $m^{\\text{corr}}$ through Eq.~\\eqref{e:flav_mqcorr}. 
See Section~\\ref{ss:sumrules_summary} for a deeper discussion of this point. \n\nThe sum rule~\\eqref{e:sumrule_Dtp} connects the first moment of the twist-3 $\\widetilde D^\\perp$ FF to the average squared transverse momentum acquired by unpolarized hadrons fragmented from an unpolarized quark~\\cite{Accardi:2019luo} (for the definition of the average operator see Appendix~\\ref{a:frame_dep}). Therefore, this sum rule also probes the nature of the non-perturbative hadronization process, in analogy with the way the sum rule for $\\widetilde E$ probes the nature of the vacuum. \nSimilar relations exist in the literature; see, e.g., Eq.~(76) in Ref.~\\cite{Metz:2016swz}, which connects a three-parton FF, the first moment of the Collins FF, and the average transverse momentum of an unpolarized hadron fragmenting from a transversely polarized quark. Another example is the relation between the average transverse momentum of an unpolarized quark in a transversely polarized hadron and the Qiu-Sterman function~\\cite{Qiu:1991pp,Qiu:2020oqr}. \n\nFinally, it is worth remarking that the sum rules for $D^\\perp$ and $\\widetilde{D}^\\perp$ are frame dependent because the transverse momenta involved are themselves frame dependent. On the contrary, all other sum rules are frame independent.
Appendix~\\ref{a:frame_dep} discusses these features in detail.\n\n\n\n\n\\section{Sum rules compendium and discussion}\n\\label{ss:sumrules_summary}\n\nWe collect here for convenience the complete set of sum rules for twist-2 and twist-3 FFs scattered throughout Section~\\ref{s:1h_rules}, and recall that all fragmentation functions implicitly depend on the quark flavor, omitted for the sake of simplicity.\nAt twist 2,\n\\begin{align}\n\\label{e:pframe_sumrule_D1}\n& \\sum_{h\\, S} \\int dz\\, z\\, D_1^{h}(z) = 1 \\, , \\\\\n\\label{e:pframe_sumrule_H1p}\n& \\sum_{h\\, S} \\int dz\\, z\\, M_h H_1^{\\perp\\, (1)\\, h}(z) = 0 \\, .\n\\end{align}\nAt twist 3,\n\\gdef\\thesubequation{\\theequation \\textit{a,b}}\n\\begin{subeqnarray}\n\\label{e:pframe_sumrule_E_Et}\n& \\ \\ \\displaystyle \\sum_{h\\, S} \\int dz\\, M_h\\, E^{h}(z) = M_j\\, , \\qquad \\ \\\n& \\sum_{h\\, S} \\int dz\\, M_h\\, \\widetilde{E}^{h}(z) = M_j - m = m^{\\text{corr}}\\, , \\\\\n\\refstepcounter{equation}\n\\label{e:pframe_sumrule_H_Ht}\n& \\displaystyle \\sum_{h\\, S} \\int dz\\, M_h\\, H^{h}(z) = 0\\, , \\qquad \\ \\ \n& \\sum_{h\\, S} \\int dz\\, M_h\\, \\widetilde{H}^{h}(z) = 0\\, , \\\\\n\\refstepcounter{equation}\n\\label{e:pframe_sumrule_Dp_Dpt}\n& \\ \\ \\ \\ \\ \\ \\displaystyle\\sum_{h\\, S} \\int dz\\, M_h^2\\,  D^{\\perp\\, (1)\\, h}(z) = 0\\, , \\qquad \\ \\  \n& \\sum_{h\\, S} \\int dz\\, M_h^2\\, \\widetilde{D}^{\\perp\\, (1)\\, h}(z) = \\frac{1}{2} \\langle P_{T}^2 \/ z^2 \\rangle\\, , \\\\\n\\refstepcounter{equation}\n\\label{e:pframe_sumrule_Gp_Gpt}\n& \\ \\ \\ \\ \\ \\ \\displaystyle\\sum_{h\\, S} \\int dz\\, M_h^2\\, G^{\\perp\\, (1)\\, h}(z) = 0\\, , \\qquad \\ \\  \n& \\sum_{h\\, S} \\int dz\\, M_h^2\\, \\widetilde{G}^{\\perp\\, (1)\\, h}(z) = 0 \\, .\n\\end{subeqnarray}\n\nThe sum rules~\\eqref{e:pframe_sumrule_D1},~\\eqref{e:pframe_sumrule_H1p} and~\\eqref{e:pframe_sumrule_H_Ht}$b$ were already known in
the literature~\\cite{Collins:1981uw,Mulders:1995dh,Meissner:2010cc,Schafer:1999kn,Bacchetta:2006tn}, with the latter proven here for the first time at the correlator level. \nIt is interesting to note that the sum rules for the T-odd FFs $H^h$, $H_1^{\\perp\\, h}$, $G^{\\perp\\, h}$ are a consequence of the absence of T-odd terms in the inclusive jet correlator~\\eqref{e:invariant_quark_correlator} (see Eq.~\\eqref{e:B3_zero}). Being the consequence of time-reversal symmetry, this feature is frame-independent (see Appendix~\\ref{a:frame_dep}).\nThe sum rules~\\eqref{e:pframe_sumrule_E_Et} for $E$ and $\\widetilde E$ were originally discussed in Ref.~\\cite{Jaffe:1996zw}, but are here extended to the non-perturbative domain (we will have more to say about these shortly). \nAll others are, to the best of our knowledge, novel results\\footnote{A partial proof was discussed in Ref.~\\cite{Accardi:2019luo} and at various conferences, see for example Ref.~\\cite{Accardi:2018gmh}.}. \\\\ \n\n\nAs we will discuss below, these sum rules are generically useful as constraints in phenomenological fits where experimental data are scarce, and when developing fragmentation models.\nThe non-zero sum rules, however, have a significance that goes well beyond that. \nTo start with, we have shown that the $D_1$ sum rule is theoretically linked to the normalization property~\\eqref{e:rho13_positivity_rho3sumrule} of the chiral-even $\\rho_3$ spectral function, and, thus, to the equal-time anticommutation relations for the fermion fields. Hence its experimental verification also entails an indirect check of the validity of the K\\\"all\\'en-Lehmann spectral representation.
\nThe sum rules~\\eqref{e:pframe_sumrule_E_Et} for $E$ and $\\widetilde E$, and~\\eqref{e:pframe_sumrule_Dp_Dpt}$b$ for $\\widetilde D^\\perp$ are also noteworthy because, unlike the others, they are sensitive to aspects of the non-perturbative dynamics of QCD: respectively, the dynamical mass generation in the QCD vacuum, and the transverse momentum generation in the fragmentation process~\\cite{Accardi:2019luo}. \n\nOur proof has been developed in the parton frame in order to connect to the inclusive jet correlator, which cannot be defined in the hadron frame.\nMost of the sum rules are nonetheless frame independent, as detailed in Appendix~\\ref{a:frame_dep}. The only exceptions are the sum rules~\\eqref{e:pframe_sumrule_Dp_Dpt} for $D^\\perp$ and $\\widetilde{D}^\\perp$, which in the hadron frame exchange the roles of the kinematic and dynamical twist-3 functions. In that frame, it is the first moments of the $D^{\\perp h}$ functions that are sensitive to the transverse momentum of the fragmented hadrons, whereas the first moments of the $\\widetilde{D}^{\\perp h}$ functions sum up to zero. \n\nFinally, note that our proofs are at present valid only for unrenormalized FFs. However, the arguments we utilized are rooted in the conservation of the partonic four-momentum encoded in Eq.~\\eqref{e:master_sum_rule}, and in the symmetry properties of the correlators $\\Xi$ and $\\Delta^h$. For this reason, we expect all momentum sum rules to be valid in form also at the renormalized level.
\nIndeed, renormalization is known to preserve Eq.~\\eqref{e:pframe_sumrule_D1}~\\cite{Collins:2011zzd,Collins:1981uw}; \nthe evolution of $E(z)$ and $M_j$ can be inferred from the results presented in Refs.~\\cite{Belitsky:1996hg,Belitsky:1997ay}; it could also be argued that the sum rules for the T-odd FFs are preserved under renormalization due to the absence of T-odd terms in the inclusive jet correlator, but explicit calculations are needed to corroborate this hypothesis and to understand the behavior of all the other sum rules.\n\n\n\n\n\\subsection{Dynamical chiral symmetry breaking}\n\\label{sss:DCSB}\n\nThe mass sum rules~\\eqref{e:pframe_sumrule_E_Et} are of particular interest, because they shed additional light on the QCD mass generation mechanism already explored in Section~\\ref{s:jetcor} in terms of the jet correlator $J$ and its chiral-odd component. As discussed in Section~\\ref{ss:TMDjet_recap}, the jet mass $M_j=m+m^{\\text{corr}}$ quantifies the dressing of a quark as it propagates in the QCD vacuum. Here we suggest that, whereas the current quark mass $m$ is the component of $M_j$ that explicitly breaks the chiral symmetry, it is the $m^{\\text{corr}}$ correlation mass component that can be considered a theoretically solid order parameter for its dynamical breaking. That this is the case is supported by the following arguments, highlighting the central role played by quark-gluon interactions in generating $m^{\\text{corr}}$, and how this is intrinsically connected to the properties of the QCD vacuum. \n\nOverall, the correlation mass can vanish in two circumstances, where the neglect of quark-gluon-quark correlations is achieved in different ways.
In the first case, one can invoke the ``Wandzura-Wilczek (WW) approximation'', which consists in neglecting the twist-3 ``tilde'' functions, which parametrize the strength of quark-gluon-quark correlations, compared to the twist-2 and twist-3 functions without a tilde, which describe quark-quark correlations. In other words, this approximation consists in neglecting the role of gluons except in the dressing of the quark-quark correlators, and setting the ``tilde'' functions to zero~\\cite{Bastami:2018xqd}.\\footnote{The WW approximation takes its name from the fact that one is in fact utilizing a simplified form of the Wandzura-Wilczek-type relations, originally introduced and discussed in Ref.~\\cite{Wandzura:1977qf}, which relate twist-2 and twist-3 functions. While in some processes the neglect of quark-gluon-quark interactions leads to phenomenologically successful comparisons to experimental data \\cite{Bastami:2018xqd}, this assumption is not a priori justified in all circumstances~\\cite{Accardi:2017pmi}. In particular, one needs to make sure that the dominant quark-quark terms do not cancel in the observable of interest. For example, in our case, a WW approximation applied to Eq.~\\eqref{e:sumrule_Dtp} would amount to predicting no transverse momentum in the fragmentation process, $\\langle P_{T}^2 \/ z^2 \\rangle = 0$, which is clearly not the case.}\nThus, $M_j \\overset{\\,_{WW}}{=} m$ and $m^{\\text{corr}} \\overset{\\,_{WW}}{=} 0$, as can be easily seen by setting $\\widetilde E = 0$ in Eq.~\\eqref{e:eom_E} and using the sum rules~\\eqref{e:sumrule_D1} and~\\eqref{e:sumrule_E}.
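In formulas: setting $\\widetilde E = 0$ in Eq.~\\eqref{e:eom_E} gives $M_h E^h(z) \\overset{\\,_{WW}}{=} z\\, m\\, D_1^h(z)$, so that\n\\begin{equation*}\nM_j = \\sum_{h\\, S} \\int dz\\, M_h\\, E^{h}(z) \\overset{\\,_{WW}}{=} m \\sum_{h\\, S} \\int dz\\, z\\, D_1^{h}(z) = m \\ .\n\\end{equation*}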
(An application of the same WW approximation to the sum rule~\\eqref{e:sumrule_Et} consistently provides one with the identity $0=0$.)\nAnother case in which the dynamical mass $m^{\\text{corr}}$ vanishes is when the non-interacting vacuum $|0 \\rangle$ of the theory is used in place of the interacting one ${|\\Omega\\rangle}$~\\cite{Peskin:1995ev}, so that one cannot fully contract the $\\psi A_T^\\alpha \\bar\\psi$ operator that defines $\\widetilde E$ unless the interaction terms in the Lagrangian are taken into account, effectively causing $\\widetilde E = 0$ as in the WW approximation. \n\nFurthermore, one can decompose the chiral-odd spectral function $\\rho_1$ into a pole part, with an isolated singularity at the (renormalized) current mass value $\\mu^2=m^2$, and a remnant $\\overline\\rho_1$ (see, {\\it e.g.}, Ref.~\\cite{Solis:2019fzm}):\n\\begin{align}\n  \\rho_1(\\mu^2) = \\delta(\\mu^2-m^2) + \\overline\\rho_1(\\mu^2) \\ .\n\\end{align}\nThis singularity, in fact, cannot appear in the full, non-perturbative propagator because quarks are not physical states of the theory; rather, it originates from a perturbative treatment of the propagating quark, which is considered an asymptotic field while in reality it is not. \nNext, combining the $E$ and $\\widetilde E$ sum rules with the EOM relation~\\eqref{e:eom_E} one sees that\n\\begin{align}\n\\label{e:nonpole_rho1_mcorr}\n\tm^{\\text{corr}} \\overset{lcg}{=} \\int d\\mu^2 \\sqrt{\\mu^2}  \\, \\overline\\rho_1(\\mu^2) \\ .\n\\end{align}\nTherefore, the perturbative pole is effectively removed from the sum rule for $\\widetilde E$, and from the spectral decomposition of the correlation mass.
This is all the more interesting because it is the twist-3 $\\widetilde E$ function, rather than $E$, that contributes to hadroproduction DIS processes, whence the sum rules can be experimentally measured~\\cite{Bacchetta:2006tn}.\n\nFinally, the $\\widetilde E$ sum rule also provides one with a hadronic flavor decomposition of the correlation mass: \n\\begin{equation}\n\\label{e:flav_mqcorr}\nm^{\\text{corr}} = \\sum_{h,S} \\int dz M_h\\, \\widetilde{E}^{h}(z) \n  \\equiv \\sum_h \\, m_h^{\\text{corr}} \\ ,\n\\end{equation}\nwhere each $m_h^{\\text{corr}} = \\sum_{S} \\int dz M_h\\, \\widetilde{E}^{h}(z)$ quantifies the contribution to the interaction-dependent part of the jet mass associated with the hadronization into a specific hadron $h$. One can therefore envisage investigating the separate roles of baryons and light mesons in the dynamical chiral symmetry breaking process, with the pions and kaons expected to become massless in the chiral limit due to the Goldstone theorem, and obtain a more fine-grained picture of the spontaneous generation of mass in QCD.\nAs a first step, one could calculate $E$ and $\\widetilde{E}$ in models that incorporate the dynamical breaking of the chiral symmetry, for example in treatments based on the Nambu--Jona-Lasinio model~\\cite{Ito:2009zc,Matevosyan:2011vj,Bentz:2016rav}. \nThe full set of sum rules provided in this article could be used to constrain and refine these calculations, and a comparison with even a limited amount of experimental data on $\\widetilde E$ would provide the model with the dynamical input necessary to explore with confidence the chiral limit via Eq.~\\eqref{e:flav_mqcorr}.
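\n\nAs a purely illustrative aside, the interplay between the momentum sum rule~\\eqref{e:sumrule_D1} and the mass sum rule~\\eqref{e:flav_mqcorr} can be sketched numerically. The following standalone snippet assumes a single ``pion-like'' species, a toy beta-like shape for $D_1^h(z)$ normalized so that the momentum sum rule holds by construction, and an arbitrary toy shape for $\\widetilde E^h(z)$; all shapes and numbers are illustrative assumptions, not QCD results.

```python
import numpy as np

# Purely illustrative toy model, NOT a QCD calculation: one "pion-like"
# species with an assumed beta-shaped D1(z); the normalization is fixed
# so that the momentum sum rule  sum_h int dz z D1(z) = 1  holds, and an
# assumed toy shape for tilde-E(z) is integrated to give a toy
# correlation mass via the mass sum rule m_corr = sum_h int dz M_h Et(z).

def trapezoid(f, x):
    """Simple trapezoidal quadrature on a 1D grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

z = np.linspace(1e-6, 1.0, 200_001)

def D1_unnorm(zz, a=1.0, b=3.0):
    # generic z^a (1-z)^b shape, a common toy parametrization
    return zz**a * (1.0 - zz)**b

# fix the normalization so that int dz z D1(z) = 1 by construction
norm = trapezoid(z * D1_unnorm(z), z)
D1 = D1_unnorm(z) / norm

momentum_fraction = trapezoid(z * D1, z)
print(f"momentum sum rule check: {momentum_fraction:.6f}")  # -> 1.000000

# assumed toy shape for tilde-E; M_h set to the pion mass in GeV
M_h = 0.140
Etilde = 2.0 * (1.0 - z)**4
m_corr = M_h * trapezoid(Etilde, z)
print(f"toy correlation mass m_corr = {m_corr:.4f} GeV")  # -> 0.0560
```

In the same spirit, the toy shapes could be replaced by model outputs (e.g., from an NJL-type calculation) and the full set of sum rules checked in one pass by adding one term per hadron species to the sums.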
\n\n\n\\subsection{Phenomenology}\n\\label{sss:pheno}\n\nThese sum rules can be of phenomenological relevance in the studies of hard scattering process with hadrons in the final states, \nfor example, semi-inclusive deep-inelastic scattering (SIDIS) and electron-positron annihilation into one or two hadrons, \nas well as hadroproduction in hadronic collisions at both fixed target and collider facilities~\\cite{Hadjidakis:2018ifr,Aidala:2019pit}).\n\n\nThe leading-twist TMD FFs $D_1$ and $H_1^\\perp$ can be observed in \nSIDIS procceses, considering specific angular modulations of the cross section at low transverse momentum~\\cite{Bacchetta:2006tn}. \n\nThe dynamical twist-3 FFs ($\\widetilde E$, $\\widetilde H$, $\\widetilde D^\\perp$, $\\widetilde G^\\perp$) appear in the SIDIS cross section at order $1\/Q$, where $Q$ is the hard scale of the process. As it turns out, these are the only twist-3 FFs contributing to the cross section in a frame where the azimuthal angles refer to the axis given by the four-momenta of the target nucleon and the photon, rather than of the target nucleon and the detected hadron~\\cite{Bacchetta:2006tn}.\nIn such a frame, their kinematic twist-3 counterparts (those without a tilde in their symbol) do not contribute to the  cross section, but can be obtained from the former by use of the equation of motion relations \\eqref{e:eom_E}-\\eqref{e:eom_Gp}. \nThe role of twist-3 FFs in other semi-inclusive processes is reviewed in Ref.~\\cite{Metz:2016swz}. 
\nIn general, to access these fragmentation functions one needs to calculate cross sections at least to twist-3 level, and, in the case of the chiral-odd $E$ and $\\widetilde E$ FFs, to combine these with another chiral-odd distribution or fragmentation function.\n\n\nThe possibility to experimentally observe the tilde functions in semi-inclusive processes is particularly interesting for the case of $\\widetilde E$, which contributes to the determination of the interaction-dependent correlation mass $m^{\\text{corr}}$ and its flavor decomposition through the sum rule~\\eqref{e:pframe_sumrule_E_Et}. This is not, however, the only experimental window on $m^{\\text{corr}}$. For example, as discussed in Refs.~\\cite{Accardi:2017pmi,Accardi:2018gmh}, the correlation mass $m^{\\text{corr}}$ also contributes, coupled to the collinear transversity PDFs, to the inclusive DIS $g_2$ structure function at large Bjorken $x_B$. Likewise, the correlation mass couples to the collinear transversity FF $H_1$ in single hadron production of, say, the self-polarizing $\\Lambda$ particle in semi-inclusive $e^+e^-$ collisions. Similarly, it can couple to the dihadron $H_1^\\sphericalangle$ FF in the case of same-hemisphere double hadron production. \n\nAs one can see, the experimental information we are after is scattered among a number of diverse observables and processes. One way to gather it in a consistent fashion is to perform ``universal'' QCD fits of suitable subsets of PDFs and FFs. One possibility is to simultaneously fit $m^{\\text{corr}}$, the collinear transversity PDFs $h_1$, and the collinear dihadron $H_1^\\sphericalangle$.
The needed processes are longitudinal-transverse asymmetries in inclusive DIS ($\\propto m^{\\text{corr}}\\, h_1 $~\\cite{Accardi:2017pmi}), \ndihadron production in SIDIS ($\\propto h_1 H_1^\\sphericalangle$~\\cite{Radici:2015mwa,Radici:2018iag}), the Artru-Collins asymmetry in double dihadron production in electron-positron annihilation ($\\propto H_1^\\sphericalangle H_1^\\sphericalangle$~\\cite{Matevosyan:2018icf}), and semi-inclusive same-side dihadron production ($\\propto m^\\text{corr} H_1^\\sphericalangle$~\\cite{Accardi:2017pmi}). This kind of universal QCD analysis, seeking to fit several non-perturbative functions at once, is numerically very demanding in terms of raw computational power and stability of the fitting algorithms. Nonetheless, its feasibility has been recently demonstrated in a series of works by the JAM collaboration~\\cite{Ethier:2017zbq,Lin:2017stx,Sato:2019yez}. \n\nIn order to properly separate perturbative and non-perturbative contributions, these observables should be addressed in the context of the associated factorization theorems. In this respect, resummed perturbative QCD and Soft-Collinear Effective Theories (SCET) provide the needed tools. Namely, the inclusive jet correlator $\\Xi$ emerges, {\\em e.g.}, in the factorization of the so-called end-point region of DIS processes at large $x$~\\cite{Becher:2006mr,Becher:2006nr,Chen:2006vd,Sterman:1986aj,Chay:2005rz}, where the final state invariant mass $Q(1-x) \\sim \\Lambda_{\\text{QCD}}$, and $Q$ is the hard momentum transfer.
Those analyses should be extended to the chiral-odd components of the jet correlator, and also applied to SIDIS and to $e^+e^-$ annihilation into one or two hadrons.\n\n\n\\section{Summary and outlook}\n\\label{s:conclusions}\n\n\nIn this paper we have studied the properties of the fully inclusive jet correlator~\\eqref{e:invariant_quark_correlator} introduced, {\\em e.g.}, in Refs.~\\cite{Sterman:1986aj,Chen:2006vd,Collins:2007ph,Accardi:2008ne,Accardi:2017pmi,Accardi:2019luo}. \nIn particular, in Section~\\ref{sss:link_structure} we have presented a gauge-invariant definition for this correlator, and discussed a specific class of staple-like Wilson lines that allows one to rewrite it as the gauge-invariant quark propagator~\\eqref{e:invariant_quark_correlator_W}.\nMoreover, in Section~\\ref{sss:dirac_structure} we have decomposed the fully inclusive jet correlator into Dirac structures, and organized the various terms according to their suppression in powers of $\\Lambda\/k^-$, where $\\Lambda$ is a generic hadronic scale and $k^-$ is the dominant light-cone component of the quark momentum. 
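The twist organization described above can be summarized schematically. In the following sketch $A$, $B$, $C$ are hypothetical placeholder amplitudes (the actual scalar functions and normalizations are those given in Section~\\ref{sss:dirac_structure}), and the gamma-matrix assignments follow the light-cone power counting in which the structure conjugate to the large $k^-$ component dominates:

```latex
\\Xi(k) \\;\\sim\\; \\underbrace{A\\, \\gamma^+}_{O(1)}
\\;+\\; \\underbrace{B\\, \\mathbb{1} \\;+\\; \\ldots}_{O(\\Lambda\/k^-)}
\\;+\\; \\underbrace{C\\, \\gamma^- \\;+\\; \\ldots}_{O\\big((\\Lambda\/k^-)^2\\big)} \\,,
```

where the chiral-odd term proportional to $\\mathbb{1}$, suppressed by one power of $\\Lambda\/k^-$, is the structure whose coefficient encodes the jet mass $M_j$.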
\n\nAs a byproduct of the Dirac decomposition of the jet correlator, we have provided a gauge-invariant definition for the inclusive jet mass $M_j$, an object which encodes the physics of the hadronizing color-averaged dressed quark. \nThis mass can be decomposed in terms of the current quark mass and a dynamical component generated by nonperturbative quark-gluon-quark correlations (see Eq.~\\eqref{e:Mj_decomp}). \nNew non-perturbative effects induced by this mass and its dynamical component can emerge at the twist-3 level, for example in inclusive deep-inelastic scattering at the level of the $g_2$ structure function~\\cite{Accardi:2017pmi,Accardi:2018gmh}, and potentially in semi-inclusive DIS, in semi-inclusive annihilation into one or two hadrons, and in hadronic collisions (see Section~\\ref{sss:pheno}).\n\nIn Section~\\ref{ss:spectr_dec}, we have developed a spectral representation for the gauge-invariant quark propagator, and we have connected the jet's mass and virtuality to the chiral-odd and chiral-even spectral functions, respectively. In particular, in the light-cone gauge the jet mass reduces to the first moment of the chiral-odd spectral function, which provides a link to non-perturbative treatments of the quark propagator, and in particular to the properties of the associated mass function~\\cite{Siringo:2016jrc,Solis:2019fzm,Roberts:2007jh,Roberts:2015lja}; see Section~\\ref{ss:TMDjet_recap}. In analogy with the role played by the dressed quark mass and the mass function, the dynamical component of the jet mass can be interpreted as an order parameter for the dynamical breaking of chiral symmetry (see Section~\\ref{ss:TMDjet_recap} and Section~\\ref{sss:DCSB}). \n\nIn Section~\\ref{s:1h_rules}, we have presented a connection at the operator level between the single-hadron fragmentation correlator~\\eqref{e:1hDelta_corr} and the fully inclusive jet correlator~\\eqref{e:invariant_quark_correlator}. 
This connection, encoded in the master sum rule~\\eqref{e:master_sum_rule}, provides an explicit link between the propagation of the quark and the fully inclusive limit of its hadronization. The chosen class of Wilson lines allows one to connect these operators to matrix elements accessible in high-energy scattering experiments.\nFrom this master sum rule we have then derived momentum sum rules for the fragmentation functions of quarks into unpolarized hadrons up to twist 3, confirming sum rules already known in the literature and proposing new ones (see Section~\\ref{ss:sumrules_summary}). \nAmong others, the novel sum rules for the $\\widetilde E$ and the $\\widetilde{D}^\\perp$ FFs have a dynamical interpretation: the RHSs of these sum rules correspond, respectively, to the mass and the average squared transverse momentum generated during the fully inclusive hadronization of a nearly massless quark.\n\nMoreover, we have connected the sum rules for the $D_1$ and the $E$ FFs to the integrals of the quark's chiral-even and chiral-odd spectral functions, which thereby become experimentally measurable quantities (see Eq.~\\eqref{e:sumrule_D1} and Eq.~\\eqref{e:sumrule_E}, respectively). \nAs a result, the sum rule for the unpolarized $D_1$ FF acquires a deeper interpretation, which goes beyond conservation of the collinear quark momentum: the RHS of the momentum sum rule for $D_1$ is precisely the normalization of the chiral-even $\\rho_3$ spectral function, whose value depends only on the equal-time (anti)commutation relations for the fields involved~\\cite{Weinberg:1995mt,Solis:2019fzm}. 
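The statement above can be made explicit. As a sketch, with sums over hadron species $h$ and polarizations $S$, and with the canonical normalization of the fields fixing the integral of $\\rho_3$ to unity (detailed factors follow the conventions of Section~\\ref{ss:sumrules_summary}):

```latex
\\sum_h \\sum_S \\int_0^1 dz\\; z\\, D_1^{h\/q}(z)
\\;=\\; \\int d\\mu^2\\, \\rho_3(\\mu^2) \\;=\\; 1 \\,.
```

In this form, the momentum sum rule is a direct statement about the spectral content of the dressed quark propagator rather than merely a bookkeeping identity for the collinear momentum.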
\nThe mass sum rules for the $E$ and $\\widetilde E$ FFs, instead, provide us with \na way to constrain the chiral-odd $\\rho_1$ spectral function or, equivalently, to measure the color-screened dressed quark mass $M_j$ and its dynamical component $m^{\\rm corr}$, respectively.\nWe believe that the possibility of experimentally accessing quantities connected to the dynamical breaking of chiral symmetry in QCD is one of the most important outcomes of this paper.\n
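Explicitly, the two mass sum rules and the light-cone-gauge spectral relation just quoted take the schematic form below; this is a sketch, with sums over hadron species $h$ and polarizations $S$, and with normalization conventions as assumed in the text:

```latex
\\sum_h \\sum_S \\int_0^1 dz\\; M_h\\, E^{h\/q}(z) \\;=\\; M_j
\\;\\overset{\\text{l.c. gauge}}{=}\\; \\int d\\mu^2\\, \\sqrt{\\mu^2}\\; \\rho_1(\\mu^2) \\,,
\\qquad
\\sum_h \\sum_S \\int_0^1 dz\\; M_h\\, \\widetilde{E}^{h\/q}(z) \\;=\\; m^{\\rm corr} \\,.
```

Measuring the left-hand sides thus constrains $\\rho_1$, and isolating the $\\widetilde E$ contribution isolates the dynamically generated part of the dressed quark mass.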