\\section{Introduction}\n\\label{intro}\n\n\nThe idea of the four-dimensional harmonic oscillator as a tool for a universal description of the Regge spectrum of hadron masses was formulated a long time ago by Feynman, Kislinger, and Ravndal~\\cite{Feynman} and has since re-entered the discussion about quark confinement several times in various ways. In particular, Leutwyler and Stern developed a formalism devoted to the covariant description of bilocal meson-like fields combined with the idea of harmonic confinement~\\cite{Leutwyler:1977vz,Leutwyler:1977vy, Leutwyler:1977pv, Leutwyler:1977cs, Leutwyler:1978uk}. In recent years, approaches to confinement based on the soft wall AdS\/QCD model with a harmonic potential have demonstrated impressive phenomenological success. \n\n\n\n\nThe approach to the QCD vacuum and hadronization presented here has been developed in a series of papers~\\cite{EN, EN1,NK1,NK4,NK6,NV,NV1}. It clearly \nincorporates the idea of harmonic confinement both in terms of the elementary color charged fields and the composite colorless hadron fields. The distinctive feature of the approach is that it links the concept of harmonic confinement and the Regge character of the hadron mass spectrum to a specific class of nonperturbative gluon configurations -- almost everywhere homogeneous Abelian (anti-)self-dual gluon fields. \nA close interrelation of the Abelian (anti-)self-dual fields and the hadronization based on harmonic confinement can be read off from the papers~\\cite{Pagels, Minkowski, Leutwyler1, Leutwyler2, Leutwyler:1977vz,Leutwyler:1977vy, Leutwyler:1977pv, Leutwyler:1977cs, Leutwyler:1978uk}.\nIn brief, the line of arguments is as follows (for a more detailed exposition see~\\cite{NV,NV1}). 
\n\nAn important starting point is the observation of Pagels and Tomboulis~\\cite{Pagels} that Abelian self-dual fields describe a medium that is infinitely stiff to small gauge field fluctuations, i.e., wave solutions of the effective quantum equations of motion are absent. This feature was interpreted as suggestive of confinement of color. \nStrong argumentation in favour of the Abelian (anti-)self-dual homogeneous field as a candidate for the global nontrivial minimum of the effective action originates from the papers \\cite{Minkowski,Leutwyler2,NG2011,Pawlowski,George:2012sb}. Leutwyler has shown that a constant gauge field is stable against small quantum fluctuations only if it is an Abelian covariantly constant (anti-)self-dual field \\cite{Minkowski,Leutwyler2}. A nonperturbative calculation of the effective potential within the functional renormalization group \\cite{Pawlowski} supported the earlier one-loop results on the existence of a nontrivial minimum of the effective action for the Abelian (anti-)self-dual field. \n\n\n\n\\begin{figure}\n\\centering\n\\sidecaption\n\\includegraphics[width=0.35\\textwidth]{effpot_xiomega}\n\\caption{Effective potential as a function of the angle $\\omega$ between the chromomagnetic and chromoelectric fields and the mixing angle $\\xi$ in the Cartan subalgebra. The minima in the dark gray regions correspond to the Abelian (anti-)self-dual configurations and form a periodic structure labelled by integer indices $(kl)$ in Eq.~\\eqref{minima} (for more details see \\cite{NK1,NG2011,NV}).\\label{effpotxiomega}}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering\n\\sidecaption\n\\includegraphics[width=.2\\textwidth]{cube1}\\includegraphics[width=.175\\textwidth]{cube2}\n\\caption{Topological charge density for domain wall networks with different values of the wall width. The left picture is an example of confining almost everywhere homogeneous Abelian (anti-)self-dual fields. 
Red (blue) color corresponds to the self-dual (anti-self-dual) field, and green to the pure chromomagnetic field. The right plot represents the case of a predominantly pure chromomagnetic field, when the topological charge density is nearly zero and color charged quasiparticles can be excited, thus indicating deconfinement (for more details see \\cite{NV}).\\label{cubes}}\n\\end{figure}\n\nThe eigenvalues of the Dirac and Klein-Gordon operators in the presence of an Abelian self-dual field are purely discrete, and the corresponding eigenfunctions of quarks and gluons charged with respect to the background are of the bound state type. This is a consequence of the \nfact that these operators contain the four-dimensional harmonic oscillator. Eigenmodes of the color charged fields have no (quasi-)particle interpretation but describe field fluctuations decaying in space and time. A consequence of this property is that the momentum representation of the translation invariant part of the propagator of a color charged field in the background of the homogeneous (anti-)self-dual Abelian gauge field is an entire analytic function. \nThe absence of a pole in the propagator was interpreted as the absence of a particle interpretation of the charged field~\\cite{Leutwyler1}. \n However, neither the homogeneous Abelian (anti-)self-dual\nfield itself nor the form of the gluon propagator in the presence of this background\nheld the clue to the linear quark-antiquark potential. Nevertheless, the analytic structure of the gluon and quark propagators and the assumption of randomness of the background field ensemble led both to the area law for static quarks and to the Regge spectrum for light hadrons.\n\nThe model of hadronization developed in~\\cite{EN1,EN, NV,NV1} indicated that the spectrum of mesons displays the Regge character both with respect to the total angular momentum and to the radial quantum number of the meson. 
The reason for the confinement of a single quark and for the Regge spectrum of mesons turned out to be the same -- the analytic properties of the quark and gluon propagators. In this formalism any meson looks much more like a complicated collective excitation of a medium (the QCD vacuum) involving quark, antiquark and gluon fields than a nonrelativistic quantum mechanical bound state of charged particles (quark and anti-quark) due to some potential interaction between them. Within this relativistic quantum field description the Regge spectrum of color neutral collective modes appeared as a ``medium effect'', as did the suppression (confinement) of the color charged elementary modes. \n\nThese observations have almost completed the quark confinement picture based on random almost everywhere homogeneous Abelian (anti-)self-dual fields. Self-duality of the fields plays a crucial role in this picture. This random field ensemble represents a medium where the color charged elementary excitations exist as field fluctuations quickly decaying in space and time, while the collective colorless excitations (mesons) can propagate as plane waves (particles). Besides this dynamical color charge confinement, a correct complete picture must include the limit of a static quark-antiquark pair with the area law for the temporal Wilson loop. In order to explore this aspect, an explicit construction of the random domain ensemble was suggested in paper~\\cite{NK1}, and the area law for the Wilson loop was demonstrated by explicit calculation. Randomness of the ensemble (in line with \\cite{Olesen7}) and (anti-)self-duality of the fields are crucial for this result. \n\n\n\n\n\n The character of the meson wave functions in the hadronization approach~\\cite{EN1} is fixed by the form of the gluon propagator in the background of the specific vacuum gluon configurations. These wave functions are very similar to the wave functions of the soft wall AdS\/QCD model with a quadratic dilaton profile and of the Leutwyler-Stern formalism. 
In all three cases one deals with generalized Laguerre polynomials as characteristic of the radial meson excitations. Another interesting observation is that the form of the Euclidean gluon and quark propagators in the presence of the Abelian (anti-)self-dual background is in qualitative agreement with the decoupling scenario of the infra-red behaviour of the propagators in the Dyson-Schwinger equations (DSE) and functional renormalization group (FRG) approaches, and with lattice QCD results for the Landau gauge propagators.\n\nThe next section is devoted to the motivation of the approach based on the domain wall network gluon configurations, and to the context of the spontaneous chiral symmetry breaking by the background field and the four-fermion interaction. \nThe structure of the effective meson action and the results for the masses, transition and decay constants of various mesons are presented in section \\ref{section2}. \n In the last section we outline the relation of the gluon propagator in the model under consideration to the results of FRG and DSE. \n\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=.8]{two-loop}\\\\\n\\includegraphics[scale=.8]{n-loop}\n\\caption{Diagrammatic representation of nonlocal meson vertex functions. Light grey denotes averaging over the background field, dark grey denotes correlation of loop diagrams by the background field.\\label{diagrams}}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n\\sidecaption\n\\includegraphics[width=9cm,clip]{spectra}\n\\caption{The masses of various radially excited mesons. The same values of parameters were used for all\nmesons shown in the figure. }\n\\label{fig-masses} \n\\end{figure}\n\n\n\n\n\\section{Scalar gluon condensate and the effective action of QCD}\n\\label{section1}\n\n\n\nThe phenomenological basis of the present approach is the assumption about the existence of nonzero gluon condensates in QCD, first of all the scalar condensate $\\langle g^2F^2\\rangle$. 
In order to incorporate this condensate into the functional integral approach to the quantization of QCD one has to choose appropriate conditions for the functional space of gluon fields $A_\\mu^a$ to be integrated over (see, e.g., Ref.\\cite{faddeev}). \nBesides the formal mathematical content, these conditions play the role of a substantial physical input which, together with the classical action of QCD, complements the statement of the quantization problem. In other words, starting with the very basic representation of the Euclidean functional integral for QCD, \none has to specify the integration spaces $\\mathcal{F}_A$ for gluon and $\\Psi$ for quark fields. \nBearing in mind a nontrivial QCD vacuum structure encoded in various condensates, one has to define $\\mathcal{F}_A$ permitting gluon fields with nonzero classical action density.\nThe gauge fields $A$ that satisfy this condition have the potential to provide the vacuum with the whole variety of condensates.\n\n\nAn analytical approach to the definition and calculation of the functional integral can be based on the separation of the gluon modes $B_\\mu^a$ responsible for nonzero condensates from the small perturbations $Q_\\mu^a$. This separation must be supplemented with gauge fixing. The background gauge fixing condition $D(B)Q=0$ is the most natural choice. 
To perform the separation, one inserts the identity\n\\begin{equation*}\n1=\\int\\limits_{{\\cal B}}DB \\Phi[A,B]\\int\\limits_{{\\cal Q}} DQ\\int\\limits_{\\Omega}D\\omega \\delta[A^\\omega-Q^\\omega-B^\\omega]\n \\delta[D(B^\\omega)Q^\\omega]\n\\end{equation*}\nin the functional integral and arrives at \n\\begin{eqnarray*}\nZ &=&N'\\int\\limits_{{\\cal B}}DB \\int\\limits_{\\Psi} D\\psi D\\bar\\psi\\int\\limits_{{\\cal Q}} DQ \\det[\\mathcal{D}(B)\\mathcal{D}(B+Q)]\n\\delta[\\mathcal{D}(B)Q]e^{-S_{\\rm QCD}[B+Q,\\psi, \\bar\\psi]}\\\\\n&=&\\int\\limits_\\mathcal{B}DB \\exp\\{-S_\\mathrm{eff}[B]\\}.\n\\end{eqnarray*}\n\n\n\\begin{table}[htbp]\n\\centering\n\\caption{Model parameters fitted to the masses of $\\pi,\\rho,K,K^*, \\eta', J\/\\psi,\\Upsilon$ and used in the calculation of all other meson masses, decay and transition constants.}\n{\\begin{tabular}{@{}ccccccc@{}}\n \\hline\n$m_{u\/d}$, MeV&$m_s$, MeV&$m_c$, MeV&$m_b$, MeV&$\\Lambda$, MeV&$\\alpha_s$&$R$, fm\\\\\n\\hline\n$145$&$376$&$1566$&$4879$&$416$&$3.45$&$1.12$\\\\\n\\hline\n\\end{tabular}}\n\\label{values_of_parameters}\n\\end{table}\n\n\nThe quantum effective action $S_\\text{eff}[B]$ defined in this way has the physical meaning of the free energy of the quantum field system in the presence of the background gluon field $B_\\mu ^a$. In the limit $V\\to \\infty$ the global minima of $S_\\text{eff}[B]$ determine the class of gauge field configurations representing the equilibrium state (vacuum) of the system. Quite reliable argumentation in favour of (almost everywhere) homogeneous Abelian (anti-)self-dual fields as the dominating vacuum configurations in pure gluodynamics was put forward by many authors~\\cite{Pagels,Leutwyler2}. 
\nAs has already been mentioned in the Introduction, a nonperturbative calculation of the QCD quantum effective action within the functional renormalization group approach \\cite{Pawlowski} supported the one-loop result \\cite{Pagels,Minkowski,Leutwyler2} and indicated the existence of a minimum of the effective potential at a nonzero value of the Abelian (anti-)self-dual homogeneous gluon field.\n\n\n\\begin{table}[h]\n\\centering\n\\caption{Decay and transition constants of various pseudoscalar and vector mesons calculated through the diagrams shown in Figs.~\\ref{weak_decay_diagrams} and \\ref{g_rho_gamma_diagrams}. }\n{\\begin{tabular}{@{}cccc|cccc@{}} \\hline\nMeson&$n$&$f_P^{\\rm exp}$&$f_P$&Meson&$n$&$g_{V\\gamma}$ \n\\cite{PDG}&$g_{V\\gamma}$\\\\\n&&(MeV)& (MeV)&&&\\\\\n\\hline\n$\\pi$ &0 &130 \\cite{PDG} &140 & $\\rho$&0&0.2&0.2 \\\\\n$\\pi(1300)$&1 & &29 & $\\rho$&1&&0.053 \\\\\n\\hline\n$K$ &0 &156 \\cite{PDG} &175 & $\\omega$&0&0.059&0.067\\\\\n$K(1460)$ &1 & &27 & $\\omega$&1&&0.018\\\\\n\\hline\n$D$ &0 &205 \\cite{PDG} &212 & $\\phi$&0&0.074&0.071\\\\\n$D$ &1 & &51 & $\\phi$&1&&0.02\\\\\n\\hline\n$D_s$ &0 &258 \\cite{PDG} &274 & $J\/\\psi$&0&0.09&0.06\\\\\n$D_s$ &1 & &57 & $J\/\\psi$&1&&0.015\\\\\n\\hline\n$B$ &0 &191 \\cite{PDG} &187 & $\\Upsilon$&0&0.025&0.014\\\\\n$B$ &1 & &55 & $\\Upsilon$&1&&0.0019\\\\\n\\hline\n$B_s$ &0 &253 \\cite{Chiu:2007bc}&248 & & &\\\\\n$B_s$ &1 & &68 & & &\\\\\n\\hline\n$B_c$ &0 &489 \\cite{Chiu:2007bc}&434 & & &\\\\\n$B_c$ &1 & &135 & &&\\\\\n\\hline\n\\end{tabular}\n\\label{constants}}\n\\end{table}\n\n\n\nThe Ginzburg-Landau (GL) approach to the quantum effective action indicated the possibility of domain wall network formation in the QCD vacuum, resulting in a dominating vacuum gluon configuration seen as an ensemble of densely packed lumps of covariantly constant Abelian (anti-)self-dual field \\cite{NK1,NG2011,NV,George:2012sb}. 
The nonzero scalar gluon condensate $\\langle g^2F^a_{\\mu\\nu}F^a_{\\mu\\nu}\\rangle$ \npostulated by the effective potential\nleads \nto the existence of twelve discrete degenerate global minima of the effective action (see Fig.\\ref{effpotxiomega}), \n\\begin{eqnarray}\n&&\\breve A_{\\mu}\\in\\left\\{\\breve B^{(kl)}_{\\mu}| \\ k=0,1,\\dots,5; \\ l=0,1\\right\\}, \\ \\\n\\breve B^{(kl)}_{\\mu} = -\\frac{1}{2}\\breve n_k B^{(l)}_{\\mu\\nu}x_\\nu, \n\\nonumber\\\\\n && \\tilde B^{(l)}_{\\mu\\nu}=\\frac{1}{2}\\varepsilon_{\\mu\\nu\\alpha\\beta} B^{(l)}_{\\alpha\\beta}=(-1)^l B^{(l)}_{\\mu\\nu},\n\\ \n\\breve n_k = T^3\\ \\cos\\left(\\xi_k\\right) + T^8\\ \\sin\\left(\\xi_k\\right),\n\\ \\\n\\xi_k=\\frac{2k+1}{6}\\pi,\n\\label{minima}\n\\end{eqnarray}\nwhere $l=0$ and $l=1$ correspond to the self-dual and anti-self-dual field, respectively, and the matrix $\\breve{n}_k$ belongs to the Cartan subalgebra of $su(3)$, with the six values of the angle $\\xi_k$ corresponding to the boundaries of the Weyl chambers in the root space of $su(3)$. \nThe minima are connected by parity and Weyl group reflections. \nTheir existence indicates that the system is prone to domain wall formation. To demonstrate the simplest example of a domain wall interpolating between the self-dual and anti-self-dual Abelian configurations, one allows the angle $\\omega$ between the chromomagnetic and chromoelectric fields to vary from point to point in $R^4$ and restricts the other degrees of freedom of the gluon field to their vacuum values.\nIn this case the Ginzburg-Landau Lagrangian leads to the sine-Gordon equation for $\\omega$ with the standard\n kink solution (for details see Refs.~\\cite{NG2011,NV}).\nAway from the kink location the vacuum field is almost self-dual ($\\omega=0$) or anti-self-dual ($\\omega=\\pi$). Exactly at the wall it becomes purely chromomagnetic ($\\omega=\\pi\/2$). A domain wall network is constructed by means of kink superposition. 
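The kink profile can also be checked numerically. The following sketch (Python; the dimensionless normalization $\omega'' = \frac{1}{2}\sin 2\omega$ of the sine-Gordon equation is our illustrative assumption, not taken from the cited references) verifies that $\omega(x) = 2\arctan e^{x}$ solves this equation, interpolates between $\omega=0$ and $\omega=\pi$, and passes through the purely chromomagnetic value $\omega=\pi/2$ at the wall.

```python
import math

def omega(x):
    # Standard sine-Gordon kink: interpolates between omega = 0
    # (self-dual field) and omega = pi (anti-self-dual field),
    # with omega = pi/2 (pure chromomagnetic field) at the wall x = 0.
    return 2.0 * math.atan(math.exp(x))

def residual(x, h=1e-4):
    # Finite-difference residual of omega'' = (1/2) sin(2 omega).
    second = (omega(x + h) - 2.0 * omega(x) + omega(x - h)) / h**2
    return second - 0.5 * math.sin(2.0 * omega(x))

# Check the kink equation on a grid spanning the wall.
max_residual = max(abs(residual(0.1 * k)) for k in range(-50, 51))
```

The vanishing residual confirms that the profile is a static solution; superposing such kinks at different locations yields the domain wall networks discussed above.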
\nThe topological charge density distribution for a network of domain walls with different widths is illustrated in Fig.\\ref{cubes}. \n\n\n\n\n\n\nBased on this construction, the measure of integration over the background field $B_\\mu^a$ can be constructively represented as the infinite dimensional (in the infinite volume) integral over the parameters of the $N\\to\\infty$ domain walls in the network: their positions, orientations and widths, with the weight determined by the effective action. The explicit construction of the domain wall network is the most recent development of the \nformalism that has been studied in the series of papers \\cite{EN,EN1,NK1,NK4,NK6}, in which the domain wall defects in the homogeneous Abelian (anti-)self-dual field were taken into account either implicitly or \nin an explicit but simplified form with spherical domains. The practical calculations in the next sections will be done within the combined implementation of the domain model given in paper~\\cite{NK4}: the propagators in the quark loops are taken in the approximation of the homogeneous background field, the quark loops are averaged over the background field, and the correlators of the background field are calculated in the spherical domain approximation.\n\n\n\n\n\n\\begin{figure}\\centering\n\\sidecaption\n\\includegraphics[scale=.9]{f_P}\n\\hspace*{5mm}\\caption{Diagrams contributing to leptonic decay constants $f_P$. 
\\label{weak_decay_diagrams}}\n\\end{figure}\n\n\\begin{figure}\\centering\n\\sidecaption\n\\includegraphics[scale=.9]{V-gamma}\n\\hspace*{5mm}\\caption{Diagrams contributing to $V\\to\\gamma$ transition constants \n$g_{V\\gamma}$.\\label{g_rho_gamma_diagrams}}\n\\end{figure}\n\n\n\n\n\n\\section{Meson properties}\n\\label{section2}\n\n\n\n The truncated QCD functional integral can be rewritten in terms of the composite colorless meson fields $\\phi_{\\cal Q}$ by means of the standard bosonization procedure: introduce the auxiliary meson fields, integrate out the quark fields, perform the orthogonal transformation of the auxiliary fields that diagonalizes the quadratic part of the action and, finally, rescale the meson fields to provide the correct residue of the meson propagator at the pole corresponding to its physical mass (if any). \nMore details can be found in Refs.~\\cite{EN,EN1,NK4}. \nThe result can be written in the following compact form~\\cite{NV1}:\n\\begin{eqnarray}\n&&Z={\\cal N}\n\\int D\\phi_{\\cal Q}\n\\exp\\left\\{-\\frac{\\Lambda^2}{2}\\frac{h^2_{\\cal Q}}{g^2 C^2_{ \\mathcal Q}}\\int d^4x \n\\phi^2_{\\cal Q}(x)\n-\\sum\\limits_{k=2}^\\infty\\frac{1}{k}W_k[\\phi]\\right\\},\n\\label{meson_pf}\\\\\n&&W_k[\\phi]=\n\\sum\\limits_{{\\cal Q}_1\\dots{\\cal Q}_k}h_{{\\cal Q}_1}\\dots h_{{\\cal Q}_k}\n\\int d^4x_1\\dots\\int d^4x_k\n\\Phi_{{\\cal Q}_1}(x_1)\\dots \\Phi_{{\\cal Q}_k}(x_k)\n\\Gamma^{(k)}_{{\\cal Q}_1\\dots{\\cal Q}_k}(x_1,\\dots,x_k),\n\\label{effective_meson_action}\n\\\\\n &&\\Phi_{{\\cal Q}}(x)=\\int \\frac{d^4p}{(2\\pi)^4}e^{ipx}{\\mathcal O}_{{\\mathcal Q}{\\mathcal Q}'}(p)\\tilde\\phi_{{\\mathcal Q}'}(p),\n \\label{Pphi}\n\\end{eqnarray}\nwhere the condensed index $\\mathcal{Q}$ denotes all relevant meson quantum numbers and indices. 
The integration variables $\\phi_{\\mathcal Q}$ in the functional integral \\eqref{meson_pf} correspond to the physical meson fields that diagonalize the quadratic part of the effective meson action \\eqref{effective_meson_action} in the momentum representation, which is achieved by means of the orthogonal transformation ${\\mathcal O}(p)$.\n\n\n\n\\begin{figure}\n\\begin{centering}\n\\sidecaption\n\\includegraphics[width=0.3\\textwidth]{gluon_prp_dr}\n\\caption{Momentum dependence of the gluon dressing function without (solid line) and with (dashed line) accounting for the running of the strong coupling constant $\\alpha_s(p)$ (dotted line). The dashed line qualitatively reproduces the shape of the Landau gauge dressing function of gluons calculated within the functional renormalization group~\\cite{Jan2015} and Lattice QCD~\\cite{Bowman, Ilgenfritz} as well as a part of the input gluon propagator used in the approaches based on combined Dyson-Schwinger and Bethe-Salpeter equations~\\cite{Fischer:2014xha,Dorkin:2014lxa}. \\label{gluon_fig}}\n\\end{centering}\n\\end{figure}\n\n\n\nInteractions between the physical meson fields $\\phi_{\\mathcal Q}$ are described by the $k$-point nonlocal vertices $\\Gamma^{(k)}_{\\mathcal{Q}_1\\dots \\mathcal{Q}_k}$, subsequently tuned to the physical meson representation by means of the corresponding orthogonal transformations ${\\mathcal O}(p)$. 
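The diagonalization step can be illustrated in a few lines. The sketch below (Python with NumPy; the two-channel vertex matrix is a toy example, not the model's actual $\tilde\Gamma^{(2)}$) obtains an orthogonal transformation ${\mathcal O}(p)$ at a fixed momentum from a symmetric two-point vertex.

```python
import numpy as np

def diagonalize_quadratic_part(gamma2):
    # gamma2: real symmetric two-point vertex at a fixed momentum p,
    # mixing several auxiliary meson channels.  Returns the orthogonal
    # matrix O(p) and the diagonal entries (the diagonalized correlators).
    pi_q, o = np.linalg.eigh(gamma2)
    return o, pi_q

# Toy two-channel example at a single momentum (numbers are illustrative).
gamma2 = np.array([[2.0, 0.3],
                   [0.3, 1.0]])
o, pi_q = diagonalize_quadratic_part(gamma2)
```

Repeating this at every momentum yields the momentum-dependent transformation that defines the physical meson fields.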
\nAs is illustrated in Fig.~\\ref{diagrams}, the vertices $\\Gamma^{(k)}$ are expressed \\textit{via} one-loop diagrams $G^{(k)}_{{\\cal Q}_1\\dots{\\cal Q}_k}$ which include nonlocal quark-meson vertices and quark propagators (for their explicit analytical form see~\\cite{NV1}).\n\nThe mass spectrum $M_\\mathcal{Q}$ of the mesons and the quark-meson coupling constants $h_{\\cal Q}$ are determined by the quadratic part of the effective meson action \\textit{via} the equations\n\\begin{equation*}\n1=\n\\frac{g^2}{\\Lambda^2}C^2_{\\cal Q}\\tilde \\Pi_{\\cal Q}(-M^2_{\\cal Q}|B),\n\\ \\\nh^{-2}_{\\cal Q}=\n\\frac{d}{dp^2}\\tilde\\Pi_{\\cal Q}(p^2)|_{p^2=-M^2_{\\cal Q}}, \n\\end{equation*}\nwhere $\\tilde\\Pi_{\\cal Q}(p^2)$ is the diagonalized two-point correlator $\\tilde\\Gamma^{(2)}_{\\cal QQ'}(p)$ put on the mass shell.\nThe above definition of the meson-quark coupling constant $h_{\\cal Q}$ provides the correct residue at the pole. \n\nThe results of the calculations are shown in Fig.~\\ref{fig-masses} and Table~\\ref{constants}.\nThe overall accuracy of the description is 10--15 percent in the lowest-order calculation,\n achieved with a minimal set of parameters for QCD: the infrared limits of the renormalized strong coupling constant $\\alpha_s$ and of the quark masses $m_f$, the scalar gluon condensate $\\langle g^2F^2\\rangle$ as a fundamental scale of QCD,\n and the topological susceptibility of pure QCD without quarks. The last parameter can be related to the mean size $R$ of the domains. \n \n\\section{Discussion}\n\\label{section3}\n\nIt is interesting to take a look at the properties of the gluon propagator in the present approach in view of the known functional form of the quark and Landau gauge gluon propagators calculated within the functional renormalization group, Lattice QCD and Dyson-Schwinger equations \\cite{Bowman,Ilgenfritz,Jan2015,Fischer:2014xha,Dorkin:2014lxa}. 
\nAs can be seen from Fig.~\\ref{gluon_fig}, the shape of the gluon dressing function in the background field under consideration is in qualitative agreement with the known results of functional RG, DSE and lattice QCD. \nFor a more detailed comparison we refer to the paper~\\cite{NV1}.\n\nIn conclusion, we would like to outline a few problems to be addressed within the domain model of QCD vacuum. The most important conceptual question relates to the identification of a mechanism for the stabilization of a finite mean size of domains. Our preliminary estimates indicated that the competition between the gluon and ghost contributions and the contribution of the lowest quark eigenvalues to the free energy density of a finite domain could lead to the appropriate stabilization. This issue has to be studied carefully. Another problem, important for phenomenological applications, is the accurate description of the $n$-point correlators in the random domain wall network ensemble, which can be achieved by numerical methods. The random spherical domain ensemble is a rather rough approximation, and it has to be improved. \nThe domain model of QCD vacuum offers an appealing way to study the deconfinement transition in terms of the explicit degrees of freedom which are active or suppressed in different regimes (high energy density, high baryon charge density, strong electromagnetic fields). A preliminary consideration of the relevant features of the model can be found in \\cite{NV}.\n\n\n\n\n\\section{Introduction}\n\n Anyons are emergent quasi-particles that exist in two-dimensional condensed matter systems and whose exchange statistics generalize those of bosons and fermions. These particles have spurred much interest due to their potential applications for quantum computation. 
\n In particular, it was found that with certain types of non-Abelian anyons, a universal quantum computation can be performed by braiding and fusing these particles \\cite{freedman2002modular, freedman2002simulation, kitaev2003fault}. \n An intriguing benefit of this paradigm is that, due to their topological nature, computations are intrinsically robust to perturbations at zero temperature. \n At non-zero temperature, however, thermal anyonic excitations can corrupt the computation by performing non-trivial braids with the computational anyons. \n Since systems exhibiting anyonic excitations have a spectral gap $\\Delta$, this source of errors can be suppressed to some extent at temperatures $T \\ll \\Delta\/k_B$ as the density of thermal anyons scales as $e^{-\\Delta\/k_B T}$.\n Alas, this passive protection does not suffice, because the presence of thermal anyons is unavoidable at non-zero temperatures when scaling up the size of computations. \n Therefore, proficient active error correction schemes for non-Abelian models are paramount for the realization of topological quantum computers. \n\n Besides their envisaged use for topological quantum computation, topologically ordered systems (i.e., those that support anyonic excitations on top of their ground space) are also of much interest for quantum error correction. \n In particular, one of the characteristics of such systems is a robust ground space degeneracy, which allows one to use their ground space as the code space of an error correcting code. \n This realization led to the discovery of topological quantum error correcting codes, which encode logical quantum states in topologically ordered states of a system of qudits (typically arranged on a two-dimensional lattice). 
\n %\n Since their discovery in the 90s, most research has focused exclusively on Abelian topological codes such as the surface code and the color code \\cite{bravyi1998quantum, dennis2002topological, kitaev2003fault, wang2009threshold, fowler2012surface, wootton2012high, anwar2014fast, bravyi2014efficient, wootton2015simple, andrist2015error}, which admit an elegant characterization in terms of the stabilizer formalism \\cite{gottesman1997stabilizer}.\n Due to their geometrical locality and high error thresholds, these codes are considered to be promising candidates for protecting quantum information from noise in error-corrected quantum computers.\n One of the drawbacks of Abelian topological codes, however, is that they do not allow one to execute a universal set of logical gates in a protected fashion in two dimensions. Hence, they must be supplemented with additional protocols such as magic state distillation \\cite{bravyi2005universal} or code switching to higher-dimensional codes \\cite{kubica2015universal, bombin2016dimensional}, which introduce a large space-time overhead \\cite{campbell2017roads}. \n %\n Alternatively, there exist non-Abelian topological codes which do not suffer from this inherent limitation, and are able to perform a universal gate set natively within their code space in two dimensions \\cite{freedman2002modular}. The trade-off is that such codes go beyond the stabilizer formalism and are therefore very hard to simulate classically.\n\n While active error correction in Abelian anyon models and Abelian topological codes has been studied extensively, quantum error correction based on non-Abelian anyon models has not enjoyed the same focus. 
Nevertheless, important progress has been made over the last decade, including both analytical proofs and numerical demonstrations of threshold behavior for various non-Abelian topological error correcting codes \\cite{hutter2016continuous, wootton2014error, brell2014thermalization, burton2017classical, schotte2022quantum}. Moreover, syndrome extraction circuits for such non-Abelian string-net codes have been developed in recent years \\cite{Bonesteel:2012fl, schotte2022quantum}. In addition, state preparation for non-Abelian codes based on the Kitaev quantum double models via measurements has also been proposed recently for the experimental implementation on qubit lattices \\cite{verresen2021efficiently, tantivasadakarn2022shortest}, although further development is still needed in the context of fault-tolerant state preparation. Notably, previous studies in this field already include codes based on the Fibonacci anyon model, which is universal for quantum computation \\cite{burton2017classical, schotte2022quantum}. In particular, a quantum memory of qubits supporting doubled Fibonacci anyonic excitations was found to have a threshold that lies remarkably close to that of the surface code under similar assumptions \\cite{schotte2022quantum}.\n\n These results, however, all assume perfect syndrome measurements, which are topological charge measurements in this context. As we aim to model more realistic scenarios, we must take faulty measurements into consideration. \n Again, much is known in the case of Abelian topological codes \\cite{harrington, dennis2002topological, raussendorf2007fault, fowler2009high, watson2015fast, herold2017cellular}. 
\n For their non-Abelian counterparts, one key result stands out: in Ref.~\\cite{dauphinais2017fault} a proof was formulated that topological codes based on non-cyclic anyon models admit a error correction thresholds with faulty topological charge measurements.\n \n While this result is remarkable, non-cyclic anyon models are not universal for quantum computation, and it remains an open question whether similar claims can be made for universal models. \n \t\n\t\n In this work, we take a step towards demonstrating that fault-tolerance is indeed possible for universal non-Abelian topological codes. To this end, we define a quantum memory constructed as a two-dimensional model of Fibonacci anyons on a torus. We study active continuous quantum error correction on this model in the presence of thermal noise represented by pair-creation processes, and with faulty syndrome measurements. \n\t%\n\tThe correction procedure is based on the cellular automaton decoders originating in the works of G\u00e1cs \\cite{gacs1986reliable} and Harrington \\cite{harrington}, and further studied in the context of non-Abelian models in Ref.~\\cite{dauphinais2017fault}. \n\tThrough numerical simulations, we study how the average memory lifetime changes with the error rate. The results indicate that this code is indeed fault-tolerant, which is strong evidence for the existence of fault-tolerant universal non-Abelian codes.\n\n \n\tThe structure of this work is as follows.\n\tIn Sec.~\\ref{sec:code} we introduce the topological Fibonacci code. We then describe the details of the noise model in Sec.~\\ref{sec:noise} and introduce the cellular automaton decoder in Sec.~\\ref{sec:decoder}. \n\tWe proceed by giving an outline of the numerical simulations performed in this work in Sec.~\\ref{sec:simulation}.\n\tFinally, we present the corresponding numerical results in Sec.~\\ref{sec:results} and conclude with a discussion in Sec.~\\ref{sec:discussion}. 
\n\t\n\t\\section{The Fibonacci code} \\label{sec:code}\n\t\n\n\tWe consider a two-dimensional model composed of hexagonal tiles laid out on the surface of a torus. The resulting geometry can be represented as an $ L \\times L $ hexagonal lattice with periodic boundary conditions in both directions (Fig.~\\ref{fig:pair-creation}). Each of these hexagonal tiles can contain an excitation known as a \\emph{Fibonacci anyon}.\n\t\n\n\tAnyons are point-like quasi-particle excitations which can be characterized algebraically in terms of a unitary modular tensor category (UMTC). A thorough description of anyon models using UMTCs goes beyond the scope of this work; however, some details are given in Sec.~\\ref{sec:anyons}. \n\tFor now, it is sufficient to state that an anyon model specifies a set of anyon labels, also referred to as particle types, which can fuse according to a specific set of fusion rules.\n\tThe Fibonacci anyon model considered in this work contains two labels, $ \\mathbf{1} $ and $ \\tau $, which obey the fusion rules\n\t\\begin{equation}\\label{eq:fusion_rules}\n\t\t\\mathbf{1} \\times \\mathbf{1} = \\mathbf{1}\\,, \\quad \\mathbf{1} \\times \\tau = \\tau \\times \\mathbf{1} = \\tau\\,, \\quad \\tau \\times \\tau = \\mathbf{1} + \\tau\\,.\n\t\\end{equation}\n\t\n\tIn general, one can associate a vector space to a given set of anyons, where the basis vectors are labeled by the different ways in which the anyons can fuse. This fusion space has a topological degeneracy, and can therefore be used to robustly encode quantum information. In particular, for the Fibonacci anyon model the anyonic vacuum on a two-dimensional torus has a twofold degeneracy \\cite{pfeifer2012translation}. 
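The growth of the fusion space with the number of anyons can be made concrete with a short sketch (Python; illustrative only): iterating the fusion rules above for $n$ $\tau$ anyons shows that the number of fusion channels with trivial total charge follows the Fibonacci sequence, which gives the model its name.

```python
def fusion_multiplicities(n):
    # Multiplicities (m_1, m_tau) of the total charges 1 and tau
    # obtained by fusing n tau anyons, iterating the rules
    # 1 x tau = tau and tau x tau = 1 + tau.
    m1, mtau = 0, 1  # a single tau anyon
    for _ in range(n - 1):
        m1, mtau = mtau, m1 + mtau
    return m1, mtau

# Dimensions of the total-charge-1 fusion space for n = 2, ..., 7 tau anyons:
dims = [fusion_multiplicities(n)[0] for n in range(2, 8)]
```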
Starting from our two-dimensional model, we can therefore define an error correcting code whose code space is identified with the anyonic vacuum on the torus and which encodes a single logical qubit.\n\tA basis for this code space can be defined using Wilson line operators along the homologically non-trivial cycles $ x $ and $ y $ shown in \\figref{fig:torus}:\n\t\\begin{equation}\\label{eq:basis}\n\t \\begin{array}{l}\n\t W^a_x \\ket{\\mathbf{1}}_x = \\frac{S_{a\\mathbf{1}}}{S_{\\mathbf{1} \\mathbf{1}}} \\ket{\\mathbf{1}}_x\\,,\\\\\n\t\t W^a_x \\ket{\\tau}_x = \\frac{S_{a\\tau}}{S_{\\mathbf{1} \\tau}} \\ket{\\tau}_x\\,, \n\t \\end{array}\n\t\t\\qquad a \\in \\{\\mathbf{1},\\tau\\} \\,.\n\t\\end{equation}\n\tWe note that a different basis, $ \\{\\ket{\\mathbf{1}}_y, \\ket{\\tau}_y\\} $, can be defined analogously by swapping the $ x $ and $ y $ labels above, where the two bases are related through the modular $ S $ matrix $ S_{ab} = \\!\\!\\phantom{a}_y\\!\\braket{a|b}_x$.\n\tFor the Fibonacci anyon model, its numerical values are\n\t\\begin{equation}\\label{eq:S}\n\t\tS = \\dfrac{1}{\\sqrt{1+\\phi^2}} \n\t\t\\begin{pmatrix}\n\t\t\t1 & \\phi \\\\\n\t\t\t\\phi & -1\n\t\t\\end{pmatrix}\\,,\n\t\\end{equation}\n\twhere $ \\phi = \\dfrac{1 + \\sqrt{5}}{2} $.\n\n \n\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.7\\linewidth]{fig\/torus}\n\t\t\\caption{The two homologically non-trivial cycles on a torus.}\n\t\t\\label{fig:torus}\n\t\\end{figure}\n\t\n\tThe action of the mapping class group on the anyonic vacuum then corresponds to unitary operations on the code space. For the Fibonacci category, any logical unitary operator can be realized in this way, up to arbitrary precision \\cite{freedman2002modular}. Therefore, the quantum error correcting code defined above natively supports universal quantum computation. 
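The fusion rules in Eq.~\eqref{eq:fusion_rules} and the $ S $ matrix in Eq.~\eqref{eq:S} are easy to check numerically. The short sketch below (plain Python with hypothetical names; not part of the simulation code used in this work) verifies that $ S $ squares to the identity and that the Wilson-loop eigenvalues $ S_{a\mathbf{1}}/S_{\mathbf{1}\mathbf{1}} $ are the quantum dimensions $ 1 $ and $ \phi $.

```python
import math

# Fibonacci fusion rules: allowed fusion outcomes for each pair of labels
FUSION = {
    ("1", "1"): ("1",),
    ("1", "t"): ("t",),
    ("t", "1"): ("t",),
    ("t", "t"): ("1", "t"),   # tau x tau = 1 + tau
}

phi = (1 + math.sqrt(5)) / 2            # golden ratio
norm = math.sqrt(1 + phi**2)
S = [[1 / norm, phi / norm],
     [phi / norm, -1 / norm]]           # modular S matrix of the Fibonacci category

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# S is real and symmetric, and squares to the identity (hence unitary)
S2 = matmul(S, S)
assert all(abs(S2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Wilson-loop eigenvalues S_{a1}/S_{11} are the quantum dimensions 1 and phi
dims = [S[a][0] / S[0][0] for a in range(2)]
assert abs(dims[0] - 1.0) < 1e-12 and abs(dims[1] - phi) < 1e-12
```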
\n\t\n\tErrors in this code appear as spurious anyonic excitations, which can corrupt the encoded information if their world lines between creation and re-annihilation are topologically non-trivial, i.e., form a non-contractible cycle \\footnote{Note that a pair of Fibonacci anyons can also fuse to a single non-trivial anyon when one member of the pair has been transported along a non-trivial cycle. Since the resulting state is no longer in the code space, this does not constitute a logical operation on the encoded information. However, one can show that the encoded information is irrevocably lost in case of such an event \\cite{wootton2014error}.}. The objective of error correction is then to systematically remove these spurious excitations, without corrupting the quantum memory in the process.\n\tThis correction is performed in an active and continuous manner, and can be broken down into a series of discrete steps. At each step, a suitable recovery operation is performed based on a measured list of positions and types of the excitations, called the error syndrome.\n\t\n\tWe conclude this section by noting that the numerical simulation of the error-correction process requires the introduction of some additional manipulations on fusion states of multiple Fibonacci anyons. As these are technical details that do not contribute to the intuition of the procedure, we defer their definition to Sec.~\\ref{sec:anyons}.\n\t\n\t\n\t\n\t\\section{Noise model and correctability} \\label{sec:noise}\n\t\n\tHaving defined our model and code space, we now turn to the description of the noise model used in our simulations.\n\tWe model continuous active error correction in our Fibonacci code as a sequence of time steps, where each time step itself consists of three parts: the application of pair-creation noise, faulty syndrome measurement, and error correction respectively. 
\n\t\n\tAt each time step, first, for each edge of the hexagonal lattice graph a pair of anyons is created across this edge with a probability $ p $. Immediately after each pair creation event, the resulting charge in the two affected tiles is sampled, effectively collapsing all superpositions of anyonic charge within each tile to either $\\mathbf{1}$ or $\\tau$. After the pair creation noise has been applied, faulty syndrome extraction is simulated by generating a list of the anyon charge in all tiles, and flipping each outcome individually with a probability $ q $. In addition to the charges that are correctly detected, the resulting faulty syndrome can contain both \\enquote{ghost defects} (indicating a non-trivial charge when none is truly present) and \\enquote{missing defects} (failing to report a true non-trivial charge).\n\tFinally, this faulty syndrome is passed to a decoder, introduced in the following section, which then performs a set of local operations based on the current (and past) syndrome information in an attempt to move the system back towards the initial state.\n\t\n\tAfter each time step, the current state of the system is copied and it is checked whether it is still correctable. This is done by passing the copy to a clustering decoder \\cite{brell2014thermalization, burton2017classical, schotte2022quantum} and simulating a decoding procedure with perfect syndrome measurements starting from this given initial state. If this perfect decoding is successful, the memory is considered intact and the simulation is continued. If perfect decoding is unsuccessful, the memory is considered corrupted and the simulation is aborted. 
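As a rough illustration of a single noise-plus-measurement step, the sketch below treats each tile's charge as a classical bit ($1$ for a $ \tau $ charge) and replaces the quantum fusion amplitudes used in the actual simulation by a simple toggle; all names are hypothetical and the sketch is deliberately a caricature of the semi-classical model described here.

```python
import random

def noise_step(charges, edges, p, q, rng=random):
    """One time step: pair-creation noise followed by faulty syndrome extraction."""
    # pair-creation noise: with probability p per edge, create a tau pair
    for (a, b) in edges:
        if rng.random() < p:
            charges[a] ^= 1   # crude stand-in for sampling the fused charge
            charges[b] ^= 1
    # faulty syndrome extraction: each tile's outcome flips with probability q,
    # producing ghost defects (false 1s) and missing defects (false 0s)
    syndrome = {t: c ^ (rng.random() < q) for t, c in charges.items()}
    return syndrome

# toy usage on a three-tile line graph
charges = {0: 0, 1: 0, 2: 0}
edges = [(0, 1), (1, 2)]
syndrome = noise_step(charges, edges, p=0.1, q=0.05)
```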
The memory lifetime is then defined as the number of time steps after which a perfect clustering decoder can no longer successfully restore the initial state.\n\t\n \\begin{figure}\n\t\t\\centering\n\t\t\\begin{subfigure}{.58\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=1\\linewidth]{fig\/hex_lattice_noise}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.4\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=.95\\linewidth]{fig\/dual_lattice_noise}\n\t\t\t\\caption{}\n\t\t\\end{subfigure}\n\t\t\\caption{(a) Pair-creation events creating anyonic excitations in neighboring tiles. The dotted ellipse represents a collapse to the total charge of the anyons it contains. A ghost defect is shown in blue, a missing defect is highlighted in orange with a cross. (b) The outcome of this noise process represented on the decoding graph. Note that the missing defect (highlighted in orange) will (by definition) not be visible in the syndrome.}\n\t\t\\label{fig:pair-creation}\n\t\\end{figure}\n\t\n\tFor a given pair of tiles which share an edge, the process of pair creation across this edge corresponds to the matrix elements\n\t\\begin{equation}\n\t\t\\bra{a'\\!,b'\\!;c'}U_{\\text{pc}}\\ket{a,b;c} = \\delta_{c,c'} F^{a a 1}_{\\tau\\tau a'} F^{a'\\!\\tau a}_{b \\, c \\, b'}\\,,\n\t\\end{equation}\n\twhere we have used the $ F $-symbols of the Fibonacci category, given in \\eqref{eq:Fib_F_symbols_nontriv}. Here, $ \\ket{a,b;c} $ represents the state where the affected tiles have anyon charges $ a $ and $ b $, respectively, with total charge $ c $. This then defines the probability distribution according to which outcomes $ a' $ and $ b' $ are sampled. \n\t%\n\tSince our noise model does not allow any superposition in the anyonic charge of individual tiles, it should be considered semi-classical rather than fully quantum-mechanical. Note, however, that this does not render our model completely classical. 
Indeed, superpositions in the total charge $ c $ of the affected tiles are an inherent part of the state evolution that cannot be captured faithfully by any classical probabilistic process. \n\tFurthermore, while the extreme decoherence assumption for the anyon charge in individual tiles greatly simplifies the numerical simulation outlined in this work, it was argued in Ref.~\\cite{brell2014thermalization} that this decoherence is unlikely to have any tangible influence on the observed memory lifetimes, as the essential topological nature of the noise processes is still captured correctly.\n\t\n\tWe note that this type of noise can be understood as originating from the connection to a thermal bath with inverse temperature $ \\beta = 1\/(k_B T) $ determined by the error rate $ p $ through the relation\n\t\\begin{equation}\\label{key}\n\t\t\\dfrac{p}{1-p} = \\e^{-\\beta \\Delta}\\,.\n\t\\end{equation}\n\tHere, $ \\Delta $ represents the energy required to create a pair of anyonic excitations and place them in neighboring tiles.\n\t\n\tTo conclude this section, we emphasize that in the case of non-Abelian error correction, even decoding with perfect syndrome measurements is still an inherently stochastic procedure due to the indeterminacy of anyonic charge measurements. This means that perfect decoding can be either successful or unsuccessful even when starting from the same initial state. Our definition of the memory lifetime therefore simply corresponds to a statistical estimate of the actual memory lifetime. \n\t%\n\tFurthermore, it is not known which decoder is optimal for the Fibonacci code. Hence, the choice for the clustering decoder to verify the correctability of states is, in a way, an arbitrary one. 
This choice, however, is motivated by the recent discovery that the clustering decoder yields high thresholds for a related error correcting code exhibiting doubled Fibonacci anyonic excitations, and performs significantly better in this context than decoders based on a perfect matching strategy \\cite{schotte2022quantum}.\n\tIn any case, one should keep in mind that the memory lifetime as defined above does not represent the true memory lifetime. Instead, the sub-optimal verification process entails that it merely provides us with a lower bound on the true value.\n\t\n\t\n\t\\section{Harrington's cellular automaton decoder} \\label{sec:decoder}\n\n\tThe model described above is paired with a decoder which is a straightforward adaptation of the cellular automaton decoder introduced in Ref.~\\cite{harrington}.\n\tPreviously, this decoder has also been used for a similar phenomenological model of Ising anyons in Ref.~\\cite{dauphinais2017fault}, where the existence of an error correction threshold was proven analytically.\n\tAt each time step during the error correction simulation, based on the reported measurement outcomes in the faulty syndrome, the decoding algorithm will apply local transition rules to fuse neighboring anyons or to move anyons to neighboring tiles.\n\t\n\n\tIntuitively, these transition rules work as follows. The lattice is divided into square colonies of size $Q \\times Q$. At each time step, the transition rules will attempt to fuse neighboring non-trivial anyons, as observed in the faulty syndrome. If a non-trivial anyon has no neighbors, the transition rules will move it to the center of its colony. \n\tAt larger timescales, higher-level transition rules are applied on a renormalized lattice where anyons located at colony centers will be fused with anyons at neighboring colony centers, or moved toward the center of their respective super-colonies, which consist of $Q \\times Q$ colonies. 
This renormalization scheme is then continued at higher levels until eventually the $ Q^n \\times Q^n $ super-colony covers the entire lattice for some integer $ n $. \n\tTo ensure the latter is possible, we will always assume that the linear lattice size satisfies $L= Q^n$ for some integer $ n $.\n\tAn example of these processes is shown in \\figref{fig:transition_rules}.\n\t\n\tTo describe the action of the decoding algorithm more precisely, we will define its action at different renormalization levels $ k $. \n\tThe level-0 transition rules are those already discussed above and are applied at every time step based on the reported faulty syndrome obtained from the most recent round of faulty measurements. The transition rules are applied to one location at a time and take into consideration only the anyon content of that site and of its eight neighbors. \n\tA detailed definition of these rules is given in App. \\ref{sec:transition_rules}.\n\tWhen an anyon is moved from a site $ l $ to a neighboring site $ l' $, the (true) anyon content of site $ l $ is fused with that of site $ l' $ and the resulting charge is placed on site $ l' $ while the charge of site $ l $ is restored to the vacuum. This happens irrespective of whether or not the syndrome for both sites was correct. \n\tHence, when the decoder attempts to move a ghost defect (a trivial charge misidentified as a non-trivial one) to a neighboring site, this process does not create additional excitations. This does not, however, mean that mistaking a trivial charge for a non-trivial one has no negative consequences. Indeed, these wrong syndromes may cause the decoder to stretch out existing errors. \n\n\t\n\tThe level-1 transition rules are not applied in every time step, but only when $ t $ is a multiple of a parameter called the \\emph{work period}, which we will denote by $ U $. We require that $ U = b^2 $ for some positive integer $ b $. 
\n\tOne should think of $ U $ as the time scale at which a coarse-graining is performed. \n\tLevel-1 transition rules act on a coarse-grained lattice where the sites correspond to the centers of the level-0 colonies, and these are grouped into level-1 colonies of size $ Q^2 \\times Q^2 $. \n\tHence, the actions determined by the level-1 transition rules involve pairs of level-0 colony centers separated by a distance $ Q $. An example of such a move is provided in \\figref{fig:transition_4}.\t\n\tThe transition rules themselves are nearly identical to the level-0 rules, but are based on two sets of level-1 syndromes $ s_{1,c} $ and $ s_{1,n} $ (defined below) rather than one. For a site $ l $ (which is a level-0 colony center), the transition rules use $ s_{1,c}(l) $ as the anyon content of the site, while the anyon content of its neighbors (that is, the neighboring level-0 colony centers) is taken to be $ s_{1,n}(l') $.\t\n\t\n\tThe definitions of the level-1 syndromes $ s_{1,c} $ and $ s_{1,n} $ require a pair of variables $ f_c, f_n \\in [0,1]$. \n\tIntuitively, these variables serve as detection thresholds for the level-1 syndromes by determining the fraction of measurements that must return a non-trivial outcome at a site before it qualifies as a non-trivial level-1 syndrome. \n\tThe proper definition, however, is slightly more complicated and uses a coarse-grained counting method. \n\tBelow, we give the precise definition of $ s_{1,c} $; the definition of $ s_{1,n} $ is entirely analogous (using $ f_n $ instead of $ f_c $). \n\tWe start by dividing the work period $ U = b^2 $ into $ b $ intervals of $ b $ time steps each.\n\tFor each of these intervals, we say a non-trivial syndrome is present at a colony center $ l $ if a non-trivial charge was reported there for at least $ f_c b $ of the $ b $ time steps in the interval. 
\n\tWhen at least $ f_c b $ of the $ b $ intervals have a non-trivial syndrome, $ s_{1,c}(l) $ is set to one.\n\tA visual example of this coarse-grained counting procedure is shown in \\figref{fig:binning}.\n\t\n\t\\begin{figure}[h]\n\t\t\\vspace{.5cm}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{fig\/level-n_syndrome} \n\t\t\\caption{A visual example of the coarse-grained counting procedure to determine the level-1 syndrome for a level-0 colony center. The row of dots represents $ U $ time steps, divided into $ b $ intervals of size $ b $. The time steps during which a non-trivial measurement outcome was reported are indicated by the colored dots. The crosses in the second row indicate in which intervals the fraction of non-trivial measurement outcomes is equal to or higher than $ f_c $. }\n\t\t\\label{fig:binning}\n\t\\end{figure}\n\n\tThe motivation for using two types of syndromes for $ k>0 $ is as follows. Suppose that an error spans across two neighboring colonies, which we will label $ \\rho $ and $ \\rho' $. The level-0 transition rules will transport all resulting anyons to the respective colony centers, where they can now be acted upon by level-1 transition rules at the end of the work period. Imagine that a non-trivial anyon is now present at both colony centers. When considering the level-1 transition rules acting on $ \\rho $, there are four possible scenarios for the syndromes $ s_{1,c}(\\rho) $ and $ s_{1,n}(\\rho') $. In case $ s_{1,c}(\\rho) = 0 $, the transition rules act trivially on $ \\rho $. If both $ s_{1,c}(\\rho) = 1 $ and $ s_{1,n}(\\rho') = 1 $ then the transition rules will be applied correctly and the anyons will be fused. 
\n\tHowever, if $ s_{1,c}(\\rho) = 1 $ but $ s_{1,n}(\\rho') = 0 $, the transition rules may move the anyon in $ \\rho $ away from $ \\rho' $, thereby increasing the weight of the error.\n\tHence, it is desirable to set $ f_c > f_n $ to decrease the odds that, when a level-$k$ syndrome reports a non-trivial anyon at a colony center, the level-$k$ syndromes for its neighbors are false negatives. \n\tWe must be careful not to set $ f_c $ too high or $ f_n $ too low, however. If we choose $ f_c $ too high, low-weight errors could cause $ s_{1,c} $ to never report any non-trivial charges, delaying any necessary corrections. Similarly, setting $ f_n $ too low will result in low-weight errors triggering many false positives for $ s_{1,n} $, which can cause the decoder to make wrong decisions. \n\t\n\tLevel-$ k $ transition rules are applied when $ t \\mod U^k = 0 $. They operate on a renormalized lattice that uses the centers of level-$ (k-1) $ colonies as sites, and groups these into level-$ k $ colonies of size $ Q^k \\times Q^k $. The level-$ k $ syndromes $ s_{k,c} $ and $ s_{k,n} $ are determined by the coarse-grained counting method described above, using $ b^k $ intervals of $ b^k $ time steps each. \n\tFor linear system size $ L $, $ k $ ranges from 0 to $ k_{\\text{max}} = \\log_Q(L) $.\n\t\n\t\n\tIt is important to note that non-Abelian anyons do not allow for instantaneous moves. \n\tIndeed, while one can construct a unitary string operator for Abelian anyons, no such operator can be constructed for the non-Abelian case. \n\tThis discrepancy can be traced back to the fact that fusion outcomes are non-deterministic for non-Abelian anyons, implying it is not possible to move a non-Abelian anyon by annihilating it with one member of a particle-antiparticle pair (as is done in e.g., the surface code).\n\t\n\tTherefore, the actions determined by level-$k$ transition rules, for $k>0$, cannot be applied within a single time step. 
Instead, they will be broken up into a sequence of moves involving only pairs of neighboring sites which will be applied in $Q^k$ consecutive time steps. \n\tWe further limit the model by requiring that the number of recovery operations affecting a single tile in the lattice (or site in the decoding graph) is no greater than one in each time step. \n\tThis allows all recovery operations applied in one time step to be performed in parallel. \n\tHence, we must define a hierarchy determining which actions (moves or fusions between neighboring tiles) get prioritized based on the renormalization level from which they originated. In our case, we opted to always prioritize correction processes from the highest renormalization level \\footnote{Note that if one were to prioritize the level-0 corrections, higher-level corrections could never be completed, as they would be undone immediately after their first action is applied.}\n\t\n\tIt was argued in \\cite{dauphinais2017fault} that the prohibition of instantaneous corrections would likely not influence the threshold behavior other than slightly lowering the memory lifetimes relative to a hypothetical case where this restriction is dropped. 
We explicitly verify this claim for our Fibonacci model below in Sec.~\\ref{sec:results}.\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/transition_rules_example1}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:transition_1}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/transition_rules_example2}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:transition_2}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/transition_rules_example3}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:transition_3}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/transition_rules_example4}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:transition_4}\n\t\t\\end{subfigure}\n\t\t\\caption{Illustration of the transition rules on the decoding graph. The gray disks represent non-trivial syndromes, and the blue arrows represent the actions suggested by the decoder. The blue dotted lines represent the $3\\times3$ colonies. (a-c) show a sequence of level-0 transition rules and possible outcomes of those actions. 
In (d), non-trivial anyons have been transported to two neighboring colony centers; the blue arrow represents a level-1 transition which could be applied at the end of the work period.}\n\t\t\\label{fig:transition_rules}\n\t\\end{figure}\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/dual_lattice_colonies}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:colonies1}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/dual_lattice_colonies_2}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:colonies2}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\begin{subfigure}{.45\\linewidth}\n\t\t\t\\includegraphics[width=\\linewidth]{fig\/dual_lattice_colonies_3}\n\t\t\t\\caption{}\n\t\t\t\\label{fig:colonies3}\n\t\t\\end{subfigure}\n\t\t\\hfill\n\t\t\\caption{(a) Level-0 colonies of size $Q\\times Q$. (b) Level-1 colonies defined as $Q\\times Q$ level-0 colonies. (c) Renormalized lattice used for the level-1 transition rules. }\n\t\t\\label{fig:colonies}\n\t\\end{figure}\n\t\n\t\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\t\\section{Outline of the simulation} \\label{sec:simulation}\n\t\n\tThe goal of this work is to numerically determine a fault-tolerant error threshold for the error correcting code defined in Sec.~\\ref{sec:code} with pair-creation noise and measurement noise as outlined in Sec.~\\ref{sec:noise}, and with the cellular automaton decoder introduced in Sec.~\\ref{sec:decoder}.\n\tThis is achieved by performing Monte-Carlo simulations to determine the average memory lifetime for a range of system sizes and error rates. \n\tThese results then allow one to estimate the value of the error threshold. 
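The lifetime-versus-error-rate data gathered this way can be condensed into a power-law exponent, as is done for the fits $ f(p) \sim p^{-a} $ reported later. A minimal sketch of such a fit (ordinary least squares in log-log space; the function name and all data values below are invented purely for illustration) is:

```python
import math

def fit_power_law(rates, lifetimes):
    """Fit T ~ p^(-a) by linear least squares on (log p, log T); return a."""
    xs = [math.log(p) for p in rates]
    ys = [math.log(t) for t in lifetimes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope          # exponent a in T ~ p^(-a)

# made-up example data: average lifetimes at four error rates
rates     = [1e-4, 2e-4, 5e-4, 1e-3]
lifetimes = [1e8, 2.5e7, 4e6, 1e6]
a = fit_power_law(rates, lifetimes)
```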
\n\t\n\tA single Monte-Carlo sample (with some fixed values for the noise strength $p$ and the measurement error rate $q$) is obtained as follows.\n\tFirst, the state of the system is initialized as a ground state (i.e., containing no anyons).\n\tThen a sequence of time steps is performed consisting of the application of pair-creation noise with rate $p$, a round of faulty syndrome measurements with error probability $q$, and finally a sequence of recovery operations. \n\tAt the end of each time step, it is verified whether or not the state is considered correctable, according to the criteria specified in Sec.~\\ref{sec:noise}. \n\tThis sequence of time steps is continued until one of the following three outcomes occurs: (1) The largest connected group of anyons grows too large, rendering its classical simulation intractable \\footnote{Note that such cases are likely to correspond to configurations in which the initial state cannot be recovered.}; \n\t(2) A noise process or recovery operation induces a logical error by fusing a pair of anyons along a path that forms a non-contractible loop when combined with their fusion tree;\n\t(3) The verification procedure at the end of a time step fails. \n\tThe memory lifetime is then set as the number of time steps that were completed.\n\tThe course of a single Monte-Carlo sample in the simulation is summarized as pseudo-code in Alg. 
\\ref{alg:decoder}.\n\t\n\t\\begin{algorithm}[H]\n\t\t\\caption{Numerical simulation} \\label{alg:decoder}\n\t\t\\begin{algorithmic}\n\t\t\t\\State initialize state\n\t\t\t\\State $t = 0$\n\t\t\t\\While{correctable with clustering decoder $\\And$ no logical errors made} \n\t\t\t\\State $t \\gets t+1$\n\t\t\t\n\t\t\t\\State apply pair-creation noise\n\t\t\t\\State perform faulty measurements\n\t\t\t\\For{$k = 0: k_{\\text{max}} $}\n\t\t\t\\If{$t\\!\\mod U^k = 0$}\n\t\t\t\\State update level-$ k $ syndromes\n\t\t\t\\State apply level-$ k $ transition rules\n\t\t\t\\EndIf\n\t\t\t\\EndFor\t\t\t\t\t\t\t\t\t\t\n\t\t\t\\EndWhile\n\t\t\t\\State memory lifetime = t\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\\section{Numerical results}\\label{sec:results}\n\t\n\tThe Monte Carlo simulations described above were performed for various system sizes with $ p = q $. \n\tThe following parameters were used:\n\t\\begin{align*}\n\t\tQ &= 3\\,,\\\\\n\t\tb &= 7\\,,\\\\\n\t\tf_c &= 0.7\\,,\\\\\n\t\tf_n &= 0.2\\,.\n\t\\end{align*}\n\tThe resulting average memory lifetimes for $ L = 3 $, $ L = 9 $ and $ L = 27 $ are shown below in \\figref{fig:results}.\n\t\n\t\\begin{figure*}[t]\n\t \\centering\n \t\\begin{subfigure}[t]{.49\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[trim={.8cm 7.3cm .5cm 6cm},clip,width=1.0\\linewidth]{results.pdf}\n \t\t\\caption{Average memory lifetime as a function of the error strength, with $ p = q $, for various system sizes. Each data point represents the average over 1000 Monte Carlo samples. The blue line shows the coherence time of a single physical qubit. \n \t\tThe average memory lifetimes for $ p \\leq 10^{-3} $ were fitted to a function of the form $ f(p) \\sim p^{-a} $. The results for $ L = 3 $, $ L = 9 $ and $ L = 27 $ are shown as the green, yellow and red lines respectively. 
}\n \t\t\\label{fig:results}\n \t\\end{subfigure}\n\t \\begin{subfigure}[t]{.49\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[trim={.8cm 7.3cm .5cm 6cm},clip,width=1.0\\linewidth]{results_instant.pdf}\n \t\t\\caption{Average lifetime in function of the error strength with $ p = q $ for various system sizes and (unphysical) \\emph{instantaneous} corrections. Each data point represents the average over 1000 Monte Carlo samples. The blue line shows the coherence time of a single physical qubit. }\n \t\t\\label{fig:results_instant}\n \t\\end{subfigure}\n\t \\caption{}\n\t \\label{fig:results_both}\n\t\\end{figure*}\n\t\n\tThese results clearly indicate that the code presented in this work is indeed fault-tolerant. Furthermore, while the current data is not sufficient to demonstrate a clear-cut fault-tolerant threshold, it still exhibits threshold behavior and is remarkably similar to the results previously obtained for the toric code \\cite{harrington} and the Ising topological code \\cite{dauphinais2017fault}.\n\tWe estimate that the fault-tolerant threshold for the Fibonacci topological with pair-creation noise and measurement noise lies between $p=10^{-4}$ and $p=5\\cdot 10^{-4}$, which corresponds to an inverse temperature between $\\beta = 9.2 \/\\Delta $ and $\\beta = 7.6 \/\\Delta$. \n\tThis is comparable to the threshold found for the Ising topological code \\cite{dauphinais2017fault}, and only one order of magnitude below that for the surface code under similar circumstances \\cite{harrington}.\n\n\tFor physical error rates near $p=q=10^{-4}$, corresponding to a temperature $1\/\\beta$ one order of magnitude below the spectral gap, a code of linear size $L=27$ yields logical error rates of the order $10^{-8}$.\n\t\n\t\n\tA second round of simulations was performed to determine average memory lifetimes with the assumption that all corrections happen instantaneously. 
\n\tWhile this is akin to the Abelian topological codes, where distant anyons can be fused using unitary string-operators, this scenario is unphysical for non-Abelian anyons as they do not admit unitary string-operators. \n\tNevertheless, it is worth studying to what extent the results in \\figref{fig:results} are influenced by the restriction to non-instantaneous recovery operations.\n\tIn Ref.~\\cite{dauphinais2017fault} it was conjectured that allowing instantaneous corrections does not significantly change the qualitative behavior of the average memory lifetimes as a function of the error rate, but mostly just increases the memory lifetimes.\n\tOur results, shown in \\figref{fig:results_instant}, confirm this hypothesis. \n\t\n\t\n\t\\section{Discussion and outlook}\\label{sec:discussion}\n The results presented in this work demonstrate that fault-tolerant error correction is possible for non-Abelian topological quantum error correcting codes supporting a universal logical gate set within their code space. \n For a code consisting of Fibonacci anyons in hexagonal tiles on a two-dimensional torus, subjected to pair creation noise and measurement noise, we demonstrated that the cellular automaton decoder detailed in this work is fault-tolerant. \n In particular, for physical error rates $p\\leq10^{-3}$, it was found that the logical memory lifetime surpasses the physical coherence time for all system sizes.\n When interpreting the pair-creation noise as resulting from a non-zero temperature, this pseudo-threshold corresponds to an inverse temperature $\\beta = 6.9\/\\Delta$, where $\\Delta$ is the energy required to create a pair of Fibonacci anyons.\n Furthermore, our results suggest that this code admits a fault-tolerant quantum error correction threshold around $p = 10^{-4}$, or $\\beta = 9.2 \/ \\Delta$, which is similar to the fault-tolerant threshold found for the Ising topological code \\cite{dauphinais2017fault}. 
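The inverse temperatures quoted in this work follow directly from the thermal-bath relation $ p/(1-p) = e^{-\beta\Delta} $; inverting it gives $ \beta\Delta = \ln\big((1-p)/p\big) $. A quick numerical check (plain Python, function name ours):

```python
import math

# Invert p / (1 - p) = exp(-beta * Delta)  =>  beta * Delta = ln((1 - p) / p)
def beta_times_delta(p):
    return math.log((1 - p) / p)

# The error rates discussed in the text reproduce the quoted temperatures:
assert abs(beta_times_delta(1e-3) - 6.9) < 0.02   # pseudo-threshold, beta ~ 6.9/Delta
assert abs(beta_times_delta(5e-4) - 7.6) < 0.02   # upper estimate,   beta ~ 7.6/Delta
assert abs(beta_times_delta(1e-4) - 9.2) < 0.02   # lower estimate,   beta ~ 9.2/Delta
```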
\n \n \n\tSeveral future research directions present themselves.\n\tFirst, more research on a possible fault-tolerant threshold is necessary.\n\n While the numerical results presented in this work provide a strong indication that a fault-tolerant error correction threshold exists, they do not conclusively prove its existence, nor do they provide a precise estimate of its value. \n\tHence, an important open problem is the formulation of a mathematical proof of its existence. \n\tSuch proofs were previously formulated for the toric code \\cite{harrington} and for non-cyclic non-Abelian anyon models such as in the Ising topological code \\cite{dauphinais2017fault}. \n\tDue to the cyclic nature of Fibonacci anyons (or any universal anyon model), however, the existing proofs are not sufficient.\n\t\n\tSecond, it would be interesting to study different decoders in an identical setting. This includes both different cellular-automaton decoders such as those in Refs.~\\cite{herold2015cellular, herold2017cellular}, as well as new decoders tailored to the Fibonacci topological code. \n\t\n\tThird, while this work demonstrates that the Fibonacci topological code can be operated fault-tolerantly as a quantum memory, results regarding its use for fault-tolerant quantum computing are still lacking. \n\tWe envisage that fault-tolerant topological quantum computing at non-zero temperatures could be achieved by combining the code and decoding procedure presented in this work with the scheme for performing Dehn twists presented in Refs.~\\cite{PhysRevLett.125.050502, PhysRevB.102.075105, Lavasani2019universal}. 
Alternatively, one can also perform transversal logical gates in a folded Fibonacci code \\cite{Zhu:2017tr}.\n\n\t\n\tFinally, it would be of great interest to expand the current results to microscopic models for non-Abelian topological quantum error correction, such as the Fibonacci Turaev-Viro code \\cite{schotte2022quantum}.\n\t\n\t\\section*{Acknowledgments}\n\tThe authors would like to thank Guillaume Dauphinais and Jim Harrington for enlightening discussions on the cellular automaton decoder. \n\tThe computational resources (Stevin Supercomputer Infrastructure) and services used in this work were provided by the Flemish Supercomputer Center (VSC), funded by Ghent University, the Research Foundation Flanders (FWO), and the Flemish Government. \n\tAS was supported by a fellowship of the Belgian American Educational Foundation. LB was supported by a PhD fellowship from the FWO. GZ was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704.\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nData is collected on anything, at any time, and in any location. A study by IBM~\\cite{ibm2020} found that in 2020, 40~trillion gigabytes (40~zettabytes) were generated.\nThe majority of data in the digital realm, however, is unstructured, making it impossible for humans to process it in its entirety and leaving businesses struggling to manage such massive amounts of data. 
As a result, one of today's major issues for businesses is extracting information and value from data contained in information systems.\n\nProcess Mining~(PM)~\\cite{van2016process} is a relatively new area of research that lies between process modeling and analysis, machine learning, and data mining.\nIt is an umbrella term for a family of techniques to facilitate the analysis of operational processes by extracting knowledge from event logs in symbolic form, typically in the form of process models.\nAn event log serves as the input to PM algorithms, which is essentially a database of events where each event has\n(1) a case id: a unique identifier for a particular process instance,\n(2) an activity: description of the event that is occurring, and\n(3) a timestamp: timestamp of the activity execution.\nResources, expenses, and other event-related attributes may also be integrated by some techniques.\nPM techniques mostly address three tasks:\n(1) process discovery: transform the event log into a process model which describes the underlying process generating such data: common techniques include the alpha algorithm, heuristic miner, or inductive miner~\\cite{van2004},\n(2) conformance checking: analyse discrepancies between an event log and an existing process model to detect deviations, for quality checking or risk management~\\cite{Munoz},\n(3) process enhancement: improve the existing model's performance in terms of specific process performance metrics \\cite{enhance}.\n\nThe initial focus of PM was the analysis of \\textit{historical} event data to generate a process model.\nThese monitoring systems are reactive in nature, allowing users to detect a violation only after it has occurred.\nIn contrast, \nPredictive Process Mining~(PPM)\nor Predictive Process Monitoring~\\cite{maggi2013}\naims for forward-looking forms of PM with predictive qualities.\nIt is a field that combines machine learning with process mining, aiming for specific tasks such as\npredicting 
\n\\textit{next activity}, \\textit{activity suffix}, \\textit{next timestamp}, \\textit{remaining time}, \\textit{attributes}, \\textit{attribute suffix}, and \\textit{outcome} of running cases.\nMost tasks can be modeled as classification problems, except \\textit{next timestamp} and \\textit{remaining time} prediction tasks, which are regression problems.\nSome approaches which have been used in this realm are based on\nrecurrent neural networks~(RNNs) or Long-short term memory~(LSTMs)\n\\cite{EVERMANN2017129,tax2017predictive,nguyen2020time}.\nAlternatives are based on \nAutoencoders~\\cite{mehdiyev2020novel} and Convolutional neural networks~\\cite{pasquadibisceglie2019using}.\nMore recently, Transformer~\\cite{vaswani2017} models have gained a lot attention due to their overwhelming success in computer vision~\\cite{dosovitskiy2020image} and natural language processing~\\cite{devlin2018bert}.\nInspired by their success, recent approaches proposed a Transformer-based model for process data \\cite{bukhsh2021}.\nA more detailed overview on different models for the aforementioned tasks is covered in the survey~\\cite{rama2021deep}.\n\nPrevious work in PPM faces a few challenges:\nunclear training\/test set splits and pre-processing schemes, which leads to noticeably different findings and challenges w.r.t\\ reproducibility~\\cite{weytjens2021}.\nSince datasets often contain duplicates which are not respected by the evaluation schemes, results are often confounded.\nIn this work, we investigate the capability of important classical ML technologies as well as modern Transformers on a variety of benchmark sets with clear train\/test splits, duplicates respected, and different types of representation for classical schemes.\nMore importantly, we do not treat the methods\nas opaque schemes, but rather rely on feature relevance determination and attention mechanisms to highlight the most important factors for each model.\nThis is in line with previous approaches such as 
\n\\cite{harl2020explainable}, which \nvisualizes the feature relevances of a gated graph neural network\nor\n\\cite{hsieh2021dice4el} which enhances next activity prediction using LSTMs with counterfactual explanations for wrongly classified samples.\nYet both approaches evaluate the behavior on a single\nand specific dataset only.\nIn our approach, we systematically investigate relevant ML models for different approaches and representations under the umbrella of explainability. In this way, we demonstrate how, in many cases, the presence of\nspecific events can easily fool the system into relying on `trivial' signals. When these are removed, ML models reveal not only high quality behavior but also intriguing insights into the relevance of more subtle signals or sequences. \n\\paragraph*{Contributions}\nOur contribution is twofold:\na systematic study of the behavior of diverse ML models with \ntrain-test split respecting the peculiarities of datasets in PM and appropriate pre-processing,\nwhich prevent troubles due to data leakage.\nSecond, we utilize eXplainable Artificial Intelligence~(XAI) techniques to compute and visualize the most crucial features.\nOur code is open source and available at~\\url{https:\/\/github.com\/rizavelioglu\/ml4prom}.\n\nThe remainder of this work is structured as follows:\nIn Section~\\ref{methodology}, we present the peculiarities of the datasets and our methodology.\nIn Section~\\ref{experiments}, we present experiments and results.\nIn Section~\\ref{conclusion}, we conclude our contribution and discuss potential directions for future work.\n\n\n\\section{Methodology}\\label{methodology}\nIn this section we describe the datasets and how they are pre-processed to be utilized for binary classification, and highlight a few domain-specific peculiarities which have to be taken into account.\nThen we introduce both classical ML models as well as the state-of-the-art Transformer model used subsequently.\nLastly, we present the XAI 
techniques applied.\n\n\subsection{Data}\nWe focus on five widely used datasets which come from a variety of domains including loan applications, road traffic management, and healthcare. These datasets are benchmarks from PM and are also accessible within the well-known ProM tools~\cite{VanDongen2005}, with the exception of the healthcare dataset, which contains personal data. Each dataset has been labeled in order to assess explainability and to put it in the ML context. The variety of datasets enables us to check the robustness of our methodology across a wide range of domains. To apply ML classifiers to event data, we first transform the data into a format that resembles a binary classification problem. We present five real-life event logs, each with its own evaluation criterion for a prediction task (referred to as the positive\/negative subpart of the logs), and thereafter explain how they are transformed for the task.\n\paragraph*{BPIC 2017\_application+offer~\cite{vanDongen2017}:} a loan application process of a Dutch financial institute that covers \num{31509}~cases. Each case represents a loan application and every application has an outcome: positive if the offer is accepted and negative if rejected.\n\paragraph*{BPIC 2018\_application~\cite{vanDongen2018}:} an agricultural grant-application process of the European Union that contains \num{43809}~applications over a period of three years together with their duration.\n\paragraph*{Traffic Fine Management~\cite{deLeoni2015}:} an event log of an information system managing road traffic fines by the Italian government, with fines being paid or not.\n\paragraph*{COVID~\cite{pegoraro2022analyzing}:} a dataset of COVID-19 patients hospitalized at an intensive care unit\nconsisting of 216 cases, of which 196 are complete cases~(patients have been released either dead or alive) and 20 ongoing cases~(partial process traces) that were being treated in the COVID unit at the time the data was exported. 
\n\\paragraph*{Hospital Billing~\\cite{Mannhardt2017}:} an event log of the billing of medical services that have been provided by a regional hospital. Each trace in the event log keeps track of the actions taken to bill a group of medical services. A randomly selected sample of \\num{100000}~process instances make up the event log.\n\n\n\\subsection{Encoding}\n\nTable~\\ref{table:dataset} shows the rules used to label the samples as well as important statistics. One \ncharacteristic of typical PM datasets is that \nthey can contain a large number of duplicates, \\textit{i}.\\textit{e}.\\ observations of the same process.\nFor example, \\texttt{Hospital} dataset contains \\num{41343} and \\num{58653} traces in positive and negative classes, respectively. Out of those traces only \\num{306} and \\num{884} are unique.\nIn addition, the top-\\num{10} most frequent traces make up the \\num{95.3}\\% and \\num{89.7}\\% of the whole traces.\nHence such data are challenging for ML due to a narrow variety. 
In addition, duplicates need to be accounted for in evaluation to avoid information leakage.\n\n\\begin{table}\n \\centering\n \\caption{Datasets statistics and the rules used to generate binary classification task~(L\\textsuperscript{+}\/L\\textsuperscript{-} represent positive\/negative logs, respectively).\n The statistics are the number of traces, the number of unique traces, the percentage of unique traces, and lastly the cumulative percentage of the top-10 most frequent traces.}\n \\label{table:dataset}\n \\footnotesize\n\\begin{tabular}{c@{\\hskip 4mm}l@{\\hskip 4mm}l@{\\hskip 4mm}S[table-format = 5.0]@{\\hskip 4mm}S[table-format = 4.0]@{\\hskip 2mm}S[table-format = 2.1]@{\\hskip 2mm}S[table-format = 2.1]} \\toprule\n{Dataset} & {Class} & \\multicolumn{1}{c}{{Classification Criteria}} & {traces} & {uniq.} & {uniq.\/\\%} & {top-10\\%} \\\\ \n\\toprule\n\\multirow{2}{*}{BPIC17} & L\\textsuperscript{+} & without activity `A\\_incomplete' & 16506 & 529 & 3.2 & 83.5 \\\\\n & L\\textsuperscript{-} & with activity `A\\_incomplete' & 15003 & 2101 & 14.0 & 51.5 \\\\\n\\midrule\n\\multirow{2}{*}{Traffic} & L\\textsuperscript{+} & end activity `Payment' & 67201 & 122 & 0.2 & 99.1 \\\\\n & L\\textsuperscript{-} & end activity not `Payment' & 83169 & 109 & 0.1 & 99.2 \\\\\n\\midrule\n\\multirow{2}{*}{COVID} & L\\textsuperscript{+} & with activity `Discharge alive' & 136 & 33 & 24.0 & 74.3 \\\\\n & L\\textsuperscript{-} & with activity `Discharge dead' & 60 & 20 & 33.0 & 83.3 \\\\\n\\midrule\n\\multirow{2}{*}{BPIC18} & L\\textsuperscript{+} & duration less than 9 months & 27966 & 1081 & 3.9 & 57.7 \\\\\n & L\\textsuperscript{-} & duration more than 9 months & 15843 & 477 & 3.0 & 82.1 \\\\\n\\midrule\n\\multirow{2}{*}{Hospital} & L\\textsuperscript{+} & duration less than 3 months & 41343 & 306 & 0.7 & 95.3 \\\\\n & L\\textsuperscript{-} & duration more than 3 months & 58653 & 884 & 1.5 & 89.7 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\sloppy \nData consist of 
sequences of symbolic events of different length. For each of these datasets we compute \\(n\\)-grams for \\(n \\in \\{1, 2, 3\\}\\) to encode traces in vectorial form for classic ML models. Unigrams~(n=1) encode occurrence of events, bigrams~(n=2) encode two subsequent events and trigrams~(n=3) encode three subsequent events. \nTransformer models can directly deal with sequences. The network learns a vector embedding for each event within a trace.\nThis has the advantage of avoiding problems caused by the high-dimensionality of one-hot encoding method, for example.\n\n\\subsection{Models}\nUsing these representations, we train a variety of models, including Logistic Regression~(LR), Decision Tree~(DT), Random Forest~(RF), Gradient Boosting~(GB), and Transformer models. We only present the LR, DT and Transformer models because the findings do not vary significantly. We selected LR and DT due to their widespread use and overall high performance for classification problems \\cite{fernandezdelgado2014hundred}, and we selected Transformer models due to their propensity for learning complex patterns.\nThe type of encoding used in this case limits the amount of information that is accessible because only a portion of the sequential structure is represented. Conversely, transformer models can easily handle sequential data.\nWe use models as proposed in the works~\\cite{vaswani2017,bukhsh2021}.\n\n\\subsection{XAI}\nSince our primary concern is not the classification performance of our trained models, but rather whether ML models can capture underlying regularities, we employ XAI techniques to gain insight into which relationships between events in the data are used by the models \\cite{molnar2022}.\nUnlike many explanation approaches such as LIME~\\cite{lime}, LRP~\\cite{lrp}, we are interested in global explanations rather than explanations of single decisions. 
This is because our goal is not to comprehend the ML model, but rather to gain understanding of the key components of the PM as a whole, which can enable users to enhance processes. As an example, one may anticipate altered process behavior if they come across a specific activity, such as `Discharge dead' in COVID dataset, which was identified as crucial for the ML model.\nAs a result, we strive for global explanations and, more specifically, we employ a number of well-established feature selection methods for both linear and non-linear settings, including LR with LASSO as a linear model with strong mathematical guarantees~\\cite{lasso}, DT as a non-linear model with efficient Mean Decrease in Impurity~(MDI) and permutation importance~\\cite{understandingrf} that takes into account non-linear relations between features to determine the relevances, and Transformer model equipped with an attention mechanism that highlights complex relationships.\nAttention mechanisms are technically local explanations. We utilize them to see if highly flexible non-linear explanations provide more complex relations than global feature importance measures.\n\nAs a first result of the analysis, feature selection methods immediately reveal a potential source of information leakage:\nLeaving the datasets as they are, \nthe algorithms rely on attributes that directly encode the class label but provide no additional insight into the process.\nTable~\\ref{table:dataset_biased_feats} shows the detected features that are removed from the logs to avoid such trivial outcomes.\nAll of the listed events exist only in their respective class, \\textit{e}.\\textit{g}.\\ `A\\_Incomplete' is present only in L\\textsuperscript{-} but not in L\\textsuperscript{+}, except `Payment' where the event is present in both classes. 
Therefore, we removed the event only if it is the last event in a trace from L\\textsuperscript{+}, as that is the source of leakage~(see Table~\\ref{table:dataset}).\n\n\\begin{table}[tb]\n \\centering\n \\caption{Datasets with features that leak label information.}\n \\label{table:dataset_biased_feats}\n \\footnotesize\n\\begin{tabular}{l@{\\hskip 4mm}l@{\\hskip 4mm}l@{\\hskip 4mm}} \\toprule\nDataset & Class & \\multicolumn{1}{c}{{Biased feature}} \\\\\n\\toprule\nBPIC17 & L\\textsuperscript{-} & `A\\_Incomplete' \\\\\n\\midrule\n\\multirow{2}{*}{Traffic}& L\\textsuperscript{-} & `Send for Credit Collection' \\\\\n & L\\textsuperscript{+} & `Payment' \\\\\n\\midrule\n\\multirow{2}{*}{COVID} & L\\textsuperscript{-} & `Discharge dead' \\\\\n & L\\textsuperscript{+} & `Discharge alive' \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\section{Experimental Results}\\label{experiments}\nIn this section we present the data pre-processing pipeline used to transform all five datasets for training. Then, we present the evaluation metric used as well as the model design and training on each dataset. Finally, we present the scores of different approaches on the binary classification task and the resulting feature relevances. To save space, we limit ourselves to the \\texttt{BPIC17} dataset; results for the other datasets can be found in the GitHub repository.\n\n\\subsection{Data Split and Pre-Processing}\nFor simplicity, we only consider event names to encode traces and no other attributes, \\textit{e}.\\textit{g}.\\ timestamp, or resource of an event. After computing unique traces we randomly sample from them to construct train\/test sets with a ratio of \\num{70}\\%\/\\num{30}\\%, respectively. As the datasets are highly imbalanced, we sample class-wise from data--preserving the proportion of classes both in train and test sets. To account for frequency of traces, we keep the duplicate traces in train set while removing the ones in test set. 
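This duplicate-aware split can be sketched as follows (plain Python; the toy traces are illustrative, and the class-wise stratification described above is omitted for brevity):

```python
import random
from collections import Counter

def split_log(traces, test_ratio=0.3, seed=0):
    """Duplicate-aware split: unique traces are partitioned into train
    and test, so no trace occurs on both sides; duplicates are kept only
    in the train set to preserve the original trace frequencies there."""
    counts = Counter(traces)
    unique = list(counts)
    random.Random(seed).shuffle(unique)
    n_test = int(len(unique) * test_ratio)
    test = unique[:n_test]                     # deduplicated test set
    train = []
    for trace in unique[n_test:]:
        train.extend([trace] * counts[trace])  # duplicates stay in train
    return train, test

log = [("A", "B")] * 5 + [("A", "C")] * 3 + [("B", "C"), ("C",)]
train, test = split_log(log)
assert not set(train) & set(test)  # no trace leaks across the split
```

In the setting above this is applied per class, which preserves the class proportions in both sets.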
\n\nWe apply minimal pre-processing to the data at hand. First, we remove the biased features from the traces. Then we add \texttt{} and \texttt{} tokens to the input to explicitly define the beginning and the end of traces. To have a fixed-length input, the traces that are shorter than the longest trace in an event log are padded with the \texttt{} token. We then build a token dictionary consisting of the unique event names in an event log and the aforementioned special tokens. Finally, the tokens in the input are replaced by their unique integer values stored in the dictionary.\n\n\subsection{Evaluation Metric} \nBecause the datasets are highly imbalanced, we measure performance using the area under the receiver operating characteristic curve~(AUROC)~\cite{Bradley1997auc}. ROC analysis does not favor models that perform well on the majority class while under-performing on the minority class, which is desirable when working with imbalanced data~\cite[p.~27]{mehdiyev2020novel}.\nThe area under the curve of a binary classifier\nis the probability that the classifier ranks a randomly chosen positive instance higher than a randomly chosen negative one, which is given by the following equation:\n \begin{equation}\n \texttt{AUROC} = \int_{T=\infty}^{-\infty} \texttt{TPR}(T)\, \texttt{FPR}^{'}(T)\, dT\n \end{equation}\n\texttt{AUROC} ranges from \num{0} to \num{1}, with an uninformative~(random) classifier producing a value of \num{0.5}.\n\n\subsection{Model Design and Training}\nFor the ML models we utilize scikit-learn~\cite{pedregosa2011scikit} and initialize the models with default parameters.\nFor logistic regression, we employ regularization by an L1 penalty term~(\texttt{penalty} hyper-parameter) with regularization strength~(\texttt{C} hyper-parameter). We use repeated stratified \textit{k}-fold cross-validation~(CV) \nto evaluate and train a model across a number of iterations. 
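The AUROC integral above is equivalent to the rank interpretation just stated: the fraction of positive\/negative pairs in which the positive instance receives the higher score, with ties counting one half. A minimal stdlib sketch of this formulation:

```python
def auroc(scores_pos, scores_neg):
    """AUROC via its rank interpretation: the probability that a random
    positive instance is scored above a random negative one."""
    pairs = len(scores_pos) * len(scores_neg)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / pairs

assert auroc([0.9, 0.8], [0.2, 0.1]) == 1.0  # perfect ranking
assert auroc([0.5, 0.5], [0.5, 0.5]) == 0.5  # uninformative classifier
```

This pairwise version is quadratic in the number of instances and is meant only for illustration; library implementations such as scikit-learn's \texttt{roc\_auc\_score} use a sorting-based computation instead.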
\nThe number of repeats is \num{50}, and the number of splits $\textit{k}=5$, yielding \num{250} models being trained. The reported metric is then the average of all computed scores.\n\nFor the Transformer model, following~\cite{bukhsh2021}, we chose the embedding dimension as \num{36}, \textit{i}.\textit{e}.\ each event in a trace is represented by a point in \num{36}-dimensional space. Since the Transformer disregards the positional information of events in traces, we add a positional encoding to the token embeddings, which has the same dimension. During training, the model learns to pay attention to the input embeddings as well as the positional encodings in an end-to-end fashion. The embedding outputs are then fed to a multi-head attention block with $h=6$ heads. On the final layer of the attention block, we aggregate features using global average pooling followed by dropout at a rate of \num{0.1}. Then, we employ a dense layer with \num{64} hidden units and ReLU activation, and dropout at a rate of \num{0.1}. Finally, we use a dense layer with sigmoid activation that outputs a value between \num{0} and \num{1}, which is interpreted as the ``\textit{probability}'' of a trace belonging to the positive~(desirable) class. We train the model for \num{50} epochs with the ADAM optimizer~\cite{adam2014}, a learning rate of \num{1e-3}, and a batch size of \num{16}.\n\n\subsection{Results}\n\n\subsubsection{Predictive Accuracy}\n\nWe report the experimental results for the binary classification task. Table~\ref{table:results} reports the \texttt{AUROC} scores of the models on the \texttt{BPIC17, Traffic}, and \texttt{COVID} datasets, which contain features leaking the label information. Here the perfect score is achieved as the models correctly discover the biased features, as expected. However, the Transformer model and the ML models with \num{1}-gram encoding on the \texttt{Traffic} dataset do not achieve the perfect score on the test set. 
This is due to the fact that the biased feature--`Payment'--appears in both of the classes. Therefore, \\num{1}-gram encoding is not capable of leaking the information. The Transformer model also fails to discover the bias. On the other hand, models achieve the expected results with the \\num{2}-gram and \\num{3}-gram encoding because those encoding types explicitly define the biased feature: as \\texttt{} token added to traces, the feature \\texttt{(Payment, )} in \\num{2}-gram and \\texttt{(Payment, , )} in \\num{3}-gram would leak the label information~(positive class if last event is `Payment', negative class otherwise).\n\n\\begin{table}\n \\centering\n \\caption{AUROC scores on datasets \\textit{with} and \\textit{without} biased features using different encoding methods and models. The values inside the parenthesis represent the score on the hold-out test set, whereas others represent the score on the training set.}\n \\label{table:results}\n \\footnotesize\n \\resizebox{\\linewidth}{!}{\n\\begin{tabular}{ll lll lll} \\toprule\n & & \\multicolumn{3}{c}{\\textbf{with biased features}} & \\multicolumn{3}{c}{\\textbf{without biased features}} \\\\ \\cmidrule(lr){3-5} \\cmidrule(lr){6-8}\n\\textbf{Dataset} & \\textbf{Encoding} \\hspace{2mm} & \\textbf{LR} & \\textbf{DT} & \\textbf{Transformer} \\hspace{2mm} & \\textbf{LR} & \\textbf{DT} & \\textbf{Transformer} \\\\ \n\\toprule\n\\multirow{4}{*}{BPIC17} & integer & - & - & 100.(100.) & - & - & 97.3(97.3) \\\\\n & 1-gram & 100.(100.) & 100.(100.) & - & 89.0(81.3) & 89.5(76.0) & - \\\\\n & 2-gram & 100.(100.) & 100.(99.9) & - & \\textbf{97.4(97.9)} & 97.3(93.5) & - \\\\\n & 3-gram & 100.(100.) & 99.9(99.8) & - & 97.4(97.8) & 97.3(91.7) & - \\\\ \n\\midrule\n\\multirow{4}{*}{Traffic} & integer & - & - & 100.(98.4) & - & - & \\textbf{63.9(53.5)} \\\\\n & 1-gram & 100.(93.3) & 100.(92.2) & - & 61.5(36.8) & 61.5(52.7) & - \\\\\n & 2-gram & 100.(100.) & 100.(100.) 
& - & 63.8(45.5) & 63.9(49.8) & - \\\n & 3-gram & 100.(100.) & 100.(100.) & - & 63.8(48.1) & 63.9(52.3) & - \\ \n\midrule\n\multirow{4}{*}{COVID} & integer & - & - & 90.9(100.) & - & - & \textbf{74.8(94.2)} \\\n & 1-gram & 100.(100.) & 100.(100.) & - & 67.3(75.0) & 69.6(68.3) & - \\\n & 2-gram & 100.(100.) & 99.9(100.) & - & 89.3(61.7) & 85.4(85.0) & - \\\n & 3-gram & 99.9(96.7) & 98.1(66.7) & - & 90.4(48.3) & 85.5(76.7) & - \\ \n\midrule\n\multirow{4}{*}{BPIC18} & integer & - & - & - & - & - & 98.6(81.2) \\\n & 1-gram & - & - & - & 97.5(77.9) & 97.5(78.5) & - \\\n & 2-gram & - & - & - & \textbf{98.4(87.3)} & 98.2(84.5) & - \\\n & 3-gram & - & - & - & \textbf{98.4(87.3)} & 98.2(81.5) & - \\ \n\midrule\n\multirow{4}{*}{Hospital} & integer & - & - & - & - & - & \textbf{92.4(78.7)} \\\n & 1-gram & - & - & - & 91.8(73.9) & 92.3(56.2) & - \\\n & 2-gram & - & - & - & 92.5(70.5) & 92.7(52.5) & - \\\n & 3-gram & - & - & - & 92.6(66.0) & 92.6(48.5) & - \\\n\bottomrule\n\end{tabular}}\n\end{table}\n\nTable~\ref{table:results} also reports the \texttt{AUROC} scores of the models on all five real-life event logs, where the biased features are removed from the \texttt{BPIC17, Traffic}, and \texttt{COVID} datasets. For the \texttt{BPIC17} and \texttt{COVID} datasets the scores worsen but the results are still promising, hinting that the processes, \textit{i}.\textit{e}.\ the sequences of events, retain valuable information for the task even though the distinct\/biased features are removed. \nThe different amount of information in the different encodings is mirrored by the results: \num{1}-gram achieves the worst results compared to \num{2}-gram and \num{3}-gram, as it only incorporates single events, whereas \num{2}-gram and \num{3}-gram incorporate pairs and triples of subsequent events. None of those methods integrates the order of the whole trace, while the Transformer model learns this information during training. 
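The difference between the encodings is easy to make concrete. The sketch below extracts the n-grams of a single trace; the token names such as `<start>`\/`<end>` are placeholders for the special tokens introduced during pre-processing:

```python
def ngrams(trace, n):
    """All contiguous n-grams of a trace: 1-grams record only which events
    occur, while higher n additionally capture local event order."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

trace = ["<start>", "Payment", "Payment", "<end>"]  # hypothetical tokens
print(ngrams(trace, 2))
# [('<start>', 'Payment'), ('Payment', 'Payment'), ('Payment', '<end>')]
```

The final bigram also illustrates how an end-of-trace token can re-introduce label leakage for labels based on the last activity, as observed for the \texttt{Traffic} dataset above.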
We observe that the Transformer model successfully captures the relations between events as well as the order information, and outperforms the other models on the \texttt{Traffic} and \texttt{COVID} datasets, whereas on \texttt{BPIC17} it achieves a comparable result. On the other hand, we observe that the scores for the \texttt{BPIC17} and \texttt{COVID} datasets do not fluctuate as much as they do for the \texttt{Traffic} dataset when compared to the biased scores reported in Table~\ref{table:results}. This is due to the fact that after removing the biased feature in the \texttt{Traffic} dataset, some traces in both classes become identical. \n\n\n\n\n\subsubsection{Feature Relevances}\n\n\begin{figure}\n \centering\n \includegraphics[width=.55\textwidth]{figures\/BPIC17-LR-unbiased.pdf}\n \vspace{-2mm}\n \caption{Relevance of features as considered by the logistic regression model. \n A high positive\/negative value indicates a considerable contribution towards predicting the positive\/negative label, \textit{i}.\textit{e}. desirable\/undesirable event trace.}\n \label{fig:relevances-unbiased-logistic-regression}\n\end{figure}\n\nWe present the relevances of the features taken into account by our trained logistic regression model in Figure~\ref{fig:relevances-unbiased-logistic-regression}. We find that `A\_Validating' contributes the most towards predicting an undesirable trace, whereas `O\_Refused' contributes the most to predicting a desirable trace. Other events have a significantly lower impact on the predictions. Interestingly, the special tokens, \textit{i}.\textit{e}. \texttt{}, \texttt{}, and \texttt{}, affect the predictions when they should not, although their influence is negligible. 
In addition, some features have no effect on the predictions, \\textit{e}.\\textit{g}.\\ `A\\_Denied', `A\\_Cancelled'.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/BPIC17-DT-unbiased.pdf}\n \\vspace{-6mm}\n \\caption{Relevance of features as considered by decision tree model.\n \\textbf{Left}: Relevances according to MDI:\n A high value indicates a considerable contribution towards deciding between the positive and negative label.\n \\textbf{Right}: Relevances according to the permutation importance calculated on the test set, \\textit{i}.\\textit{e}.\n how much shuffling a given feature negatively impacts the performance of the decision tree.\n }\n \\label{fig:relevances-unbiased-decision-tree}\n\\end{figure}\n\n\nIn Figure~\\ref{fig:relevances-unbiased-decision-tree} we show the feature relevances that our trained decision tree model considers. Based on the MDI value~(left figure) the most relevant feature is `A\\_Validating', followed by `O\\_Refused', `A\\_Submitted', and `O\\_Cancelled' with significantly lower impact. Based on the permutation feature importance~(right figure) the most relevant features are `A\\_Validating', and `O\\_Refused', where the rest of the features have no effect on the prediction performance, which aligns well with the relevances of LR model.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/bpic17-attention.png}\n \n \\label{fig:y equals x}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/bpic17-attention_avg.pdf}\n \n \\label{fig:three sin x}\n \\end{subfigure}\n \\vspace{-5mm}\n \\caption{\n \\textbf{Left}: Relevance of features as considered by transformer model for one randomly selected event trace. 
The thickness of a line connecting two features indicates the intensity of attention in between, whereas the color represents one of the six different attention heads. \n \textbf{Right}: Normalized attention scores, averaged over all attention heads and all traces in the test set.}\n \label{fig:relevances-unbiased-transformer}\n\end{figure}\n\nFigure~\ref{fig:relevances-unbiased-transformer} visualizes the attention scores of the six attention heads for a given trace, as well as the normalized attention scores over all attention heads and test samples. We observe that, regardless of where they appear in the trace, the events mostly attend to the `A\_Validating' event for the aforementioned trace. In addition, some events focus on `O\_Sent' in some heads, which differs from the other models. The normalized attention scores, however, demonstrate that not all traces exhibit this behavior. The plot also highlights features whose importances overlap with the other models' results. In summary, all models agree on the most relevant features.\n\n\n\section{Conclusion}\label{conclusion}\nWe have demonstrated how to prepare process data in such a way that it can be used to train classic and modern -- state-of-the-art -- ML classifiers.\nAll our trained models exhibit high classification performance, \textit{i}.\textit{e}.\ they are capable of learning the underlying regularity of the observed processes, whereby Transformers benefit from the fact that the full sequence information is available, unlike \textit{e}.\textit{g}.\ 1-gram representations. 
XAI technologies prevent pitfalls such as information leakage by explicit encoding of the predicted event, and reveal insights into the relevance of events or sequences of events, respectively.\nThese insights enable a further exploration of crucial aspects of the processes, which is useful \\textit{e}.\\textit{g}.\\ for the improvement or correction of undesired process outcomes.\n\nFuture research could investigate the effects of various model architectures and encoding schemes on the outcomes of feature relevances. Another potential research direction might be to study the impact of adding further event attributes on the learnt representations.\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}\\setcounter{equation}{0}}\n\\renewcommand{\\baselinestretch}{1.2}\n\n\n\\newcommand{\\langle 0|}{\\langle 0|}\n\\newcommand{|0\\rangle }{|0\\rangle }\n\\newcommand{\\frac{1}{2}\\,(1+\\!\\!\\not{\\!v})}{\\frac{1}{2}\\,(1+\\!\\!\\not{\\!v})}\n\\newcommand{\\langle aFF\\rangle}{\\langle aFF\\rangle}\n\\newcommand{e^{\\,gf^{abc}z^\\tau \\int_0^1 dt A^c_\\tau(x+tz)}}{e^{\\,gf^{abc}z^\\tau \\int_0^1 dt A^c_\\tau(x+tz)}}\n\n\\begin{document}\n\n\n\n\\date{\\small (December 1998)}\n\n\\author{\n{\\normalsize\\bf H.~G.~Dosch, M.~Eidem\\\"uller and M.~Jamin\\footnote{Heisenberg\n fellow} } \\\\\n\\ \\\\\n{\\small\\sl Institut f\\\"ur Theoretische Physik, Universit\\\"at Heidelberg,} \\\\\n{\\small\\sl Philosophenweg 16, D-69120 Heidelberg, Germany}\\\\\n}\n\n\\title{\n{\\small\\sf\n\\rightline{HD-THEP-98-51}\n\\rightline{hep-ph\/9812417}\n}\n\\bigskip\n\\bigskip\n{\\Huge\\bf QCD sum rule analysis of \\\\\n the field strength correlator \\\\}\n}\n\n\\maketitle\n\\thispagestyle{empty}\n\n\n\\begin{abstract}\n\\noindent\nThe gauge invariant two-point correlator for the gluon field strength tensor\nis analysed by means of the QCD sum rule method. 
To this end, we make use of\na relation of this correlator to a two-point function for a quark-gluon\nhybrid in the limit of the quark mass going to infinity. From the sum rules\na relation between the gluon correlation length and the gluon condensate is\nobtained. We briefly compare our results to recent determinations of the\nfield strength correlator on the lattice.\n\\end{abstract}\n\n\\vspace{1cm}\nPACS numbers: 14.70.Dj, 12.38.Bx, 11.10.Gh, 12.40.Ee\n\nKeywords: Gluons, perturbation theory, renormalisation, QCD vacuum\n\n\n\\newpage\n\\setcounter{page}{1}\n\n\n\n\n\\newsection{Introduction}\n\nThe gauge invariant non-local gluon field strength correlator plays an\nimportant r\\^ole in non-perturbative approaches to QCD \\cite{svz:79,vol:79,\nleu:81,sim:88,dos:94}. It is the basic ingredient in the model of the\nstochastic vacuum (MSV) \\cite{dos:87,ds:88} and in the description of high\nenergy hadron-hadron scattering \\cite{nr:84,ln:87,kd:90,dfk:94}. In the\nspectrum of heavy quark bound states it governs the effect of the gluon\ncondensate on the level splittings \\cite{gro:82,cgo:86,kdb:92,sty:95} and\nit is useful for the determination of the spin dependent parts in the\nheavy quark potential \\cite{sd:88,sim:89}.\n\nIts next-to-leading order correction in perturbative QCD has been calculated\nrecently by two of the authors \\cite{ej:98}. The correlator has also been\nmeasured on the lattice in pure gauge theory and full QCD using the cooling\nmethod \\cite{gmp:97,egm:97} and by making the assumptions of the MSV from\nlattice calculations of the heavy quark potentials \\cite{bbv:98}. The lattice\nanalyses found that for distances $z$ of the gluon field strength larger than\nroughly $0.4\\,{\\rm fm}$ an exponential decaying term dominates yielding a\ncorrelation length of approximately $0.2\\,{\\rm fm}$. On the other hand the\nshort distances are dominated by the perturbative $1\/z^4$ behaviour. 
Recently,\nthe field strength correlator has also been calculated in the framework of\nexact renormalisation group equations \\cite{elw:98}.\n\nThe gauge invariant gluon field strength correlator can be related to\na correlator of a colour singlet current composed of a (fictitious)\ninfinitely heavy octet quark and the gluon field strength tensor. This\nfact has already been employed in ref. \\cite{ej:98} in order to apply the\nmachinery developed in the Heavy Quark Effective Theory\\footnote{For a\nreview on HQET as well as original references the reader is referred to\n\\cite{neu:94}.} (HQET) for calculating the perturbative corrections. In this\npaper we again use this relation in order to apply QCD sum rule techniques \\cite{svz:79}\nto the correlator in question. The sum rule analysis can be used to estimate\nthe correlation length of the field strength correlator using as ingredients\nthe value of the gluon condensate and the results for the perturbative\ncalculation.\n\nOur paper is organised as follows. In the next section we discuss again the\nrelation of the field strength correlator and the corresponding heavy quark\ncurrent correlator. In section~3 we set up the different contributions\nneeded for the sum rule analysis and in section~4 we present our results\ntogether with a comparison with recent lattice determinations of the\nfield strength correlator. 
Finally, in section~5, we end with some conclusions\nand an outlook.\n\n\n\n\\newsection{The field strength correlator}\n\nThe gauge invariant two-point correlation function of the QCD field strength\ntensor $F^a_{\\mu\\nu}(x)$ in the adjoint representation can be defined as\n\\begin{equation}\n\\label{eq:2.1}\n{\\cal D}_{\\mu\\nu\\rho\\sigma}(z) \\; \\equiv \\; \\langle 0|T\\{g_s^2 F^a_{\\mu\\nu}(y)\n{\\cal P}e^{\\,gf^{abc}z^\\tau \\int_0^1 dt A^c_\\tau(x+tz)} F^b_{\\rho\\sigma}(x)\\}|0\\rangle \\,,\n\\end{equation}\nwhere the field strength $F^a_{\\mu\\nu}=\\partial_\\mu A^a_\\nu-\\partial_\\nu\nA^a_\\mu+gf^{abc}A^b_\\mu A^c_\\nu$, $z=y-x$ and ${\\cal P}$ denotes path\nordering of the exponential. In general, the gauge invariant field strength\ncorrelator could be defined with an arbitrary gauge string connecting the\nend points $x$ and $y$, but in this work we shall restrict ourselves to\na straight line. Only in that case is the relation to HQET possible.\nFrom the Lorentz structure of the field strength correlator it follows\nthat the correlator can be parametrised in terms of two scalar functions\n${\\cal D}(z^2)$ and ${\\cal D}_1(z^2)$ \\cite{ds:88}:\n\\begin{eqnarray}\n\\label{eq:2.2}\n{\\cal D}_{\\mu\\nu\\rho\\sigma}(z) & = & \\Big[\\,g_{\\mu\\rho}g_{\\nu\\sigma}-\ng_{\\mu\\sigma}g_{\\nu\\rho}\\,\\Big]\\Big(\\,{\\cal D}(z^2)+{\\cal D}_1(z^2)\\,\\Big) \\nonumber \\\\\n\\vbox{\\vskip 8mm}\n& & \\hspace{-3.9mm} +\\,\\Big[\\,g_{\\mu\\rho}z_\\nu z_\\sigma-g_{\\mu\\sigma}\nz_\\nu z_\\rho-g_{\\nu\\rho} z_\\mu z_\\sigma+g_{\\nu\\sigma}z_\\mu z_\\rho\n\\,\\Big]\\,\\frac{\\partial{\\cal D}_1(z^2)}{\\partial z^2} \\,.\n\\end{eqnarray}\nThe invariant function ${\\cal D}(z^2)$ can only occur in a non-abelian gauge\ntheory or an abelian one with monopoles. 
In the MSV it is responsible for\nconfinement and the formation of a string.\n\nThe correlator ${\\cal D}_{\\mu\\nu\\rho\\sigma}(z)$ can be related to the correlator\nof a local, gauge invariant current composed of an infinitely heavy quark\nfield in the octet representation, $h^a(x)$, and the gluon field strength\ntensor \\cite{eid:97,ej:98}. The current in question takes the form\n$(g_s h^a F^a_{\\mu\\nu})(x)$. Analogously to HQET the heavy octet-quark field\nis constructed from the field $Q^a$ with a finite mass $m_Q$ in the limit\n\\begin{equation}\n\\label{eq:2.3}\nh^a(x) \\; = \\; \\lim_{m_Q\\rightarrow\\infty}\\,\\frac{1}{2}\\,(1+\\!\\!\\not{\\!v})\\,e^{im_Qvx}Q^a(x) \\,,\n\\end{equation}\nwith $v$ being the four-velocity of the heavy quark. \nThe propagator of the free heavy quark field $h^a_0(x)$ in coordinate\nspace is given by\n\\begin{equation}\n\\label{eq:2.4}\nS(z) \\; = \\; \\langle 0| T\\{h^a_0(y)\\bar{h}^b_0(x)\\} |0\\rangle \\; = \\; \\delta^{ab}\n\\frac{1}{v^0}\\,\\theta(z^0)\\,\\delta\\Big({\\bf z}-\\frac{z^0}{v^0}{\\bf v}\\Big) \\,,\n\\end{equation}\nwhere $v^0$ is the zero-component of the velocity.\nThe correlator of the full field can be obtained by integrating out only\nthe heavy quark and leaving the expectation value with respect to the\ngauge field:\n\\begin{equation}\n\\label{eq:2.5}\n\\langle 0| T\\{h^a(y)\\bar{h}^b(x)\\} |0\\rangle \\; = \\;\n S(z)\\,\\langle 0| {\\cal P}e^{\\,gf^{abc}z^\\tau \\int_0^1 dt A^c_\\tau(x+tz)} |0\\rangle \\,.\n\\end{equation}\nThe gauge string is left after the elimination of the heavy quarks from the\ninteraction term of adjoint quarks with the colour potential\n\\begin{equation}\n\\label{eq:2.6}\n{\\cal L}_{int} \\; = \\; -\\,ig_s f_{abc} v^\\mu \\bar{h}^a(x) A_\\mu^c(x) h^b(x) \\,.\n\\end{equation}\nThe physical picture of this result is a heavy quark moving from point\n$x$ to $y$ with a four-velocity $v$, acquiring a phase proportional to the\npath-ordered exponential. 
\nThe limit of $m_Q\\rightarrow\\infty$ is necessary in order to constrain the\nheavy quark to a straight line and in order to decouple the spin interactions.\nThe same relation also holds for quarks in the fundamental representation\nwith the appropriate replacements in the exponential.\n\nEquation \\eqn{eq:2.5} allows us to establish a relation between the field\nstrength correlator \\eqn{eq:2.1} and the correlator for the colourless\nheavy quark current.\nBy integrating out the heavy degrees of freedom and using \\eqn{eq:2.5} we\narrive at\n\\begin{eqnarray}\n\\label{eq:2.8}\n\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(z)\n& \\equiv & \\langle 0| T\\{g_s^2 F^a_{\\mu\\nu}(y)h^a(y)F^b_{\\rho\\sigma}(x)\\bar{h}^b(x)\n\\}|0\\rangle \\nonumber \\\\\n& = & S(z) \\, {\\cal D}_{\\mu\\nu\\rho\\sigma}(z) \\,.\n\\end{eqnarray}\nWe may view the composite operator $(g_s h^a F^a_{\\mu\\nu})(x)$ as an\ninterpolating field of colourless quark gluon hybrids and evaluate\n$\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(z)$ by introducing these as intermediate states\nin the absorption part of $\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(z)$. The lowest lying\nstate will govern the long-range behaviour and hence the inverse of its\nenergy is the correlation length.\n\nOur next aim is to evaluate this correlator in the framework of QCD sum rules\n\\cite{svz:79} and in that way obtain information on the correlation length\nof the gluon field strength correlator.\nFor the sum rule analysis it is preferable to work with the correlator in\nmomentum space. Thus we define\n\\begin{equation}\n\\label{eq:2.9}\n\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(w) \\; = \\; i \\int dz \\, e^{iqz} \\langle 0|\nT\\{g_s^2 F^a_{\\mu\\nu}(y)h^a(y) F^b_{\\rho\\sigma}(x)\\bar{h}^b(x)\\}|0\\rangle \\,,\n\\end{equation}\nwhere the residual heavy quark momentum is $w=vq$. 
Similarly to the Lorentz\ndecomposition of the coordinate space correlator ${\\cal D}_{\\mu\\nu\\rho\\sigma}(z)$\ninto scalar functions ${\\cal D}(z^2)$ and ${\\cal D}_1(z^2)$, eq.~\\eqn{eq:2.2},\nwe can write the momentum space correlator as follows: \n\\begin{eqnarray}\n\\label{eq:2.10}\n\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(w) & = & \\Big[\\,g_{\\mu\\rho}g_{\\nu\\sigma}-\ng_{\\mu\\sigma}g_{\\nu\\rho}\\,\\Big]\\Big(\\,\\widetilde{\\cal D}(w)+\\widetilde{\\cal D}_1(w)\\,\\Big) \\nonumber \\\\\n\\vbox{\\vskip 8mm}\n& & \\hspace{-3.9mm} +\\,\\Big[\\,g_{\\mu\\rho}v_\\nu v_\\sigma-g_{\\mu\\sigma}\nv_\\nu v_\\rho-g_{\\nu\\rho} v_\\mu v_\\sigma+g_{\\nu\\sigma}v_\\mu v_\\rho\n\\,\\Big]\\,\\widetilde{\\cal D}_*(w) \\,.\n\\end{eqnarray}\nThe functions $\\widetilde{\\cal D}(w)$ and $\\widetilde{\\cal D}_1(w)$ are the Fourier transforms of\n$S(z)\\,{\\cal D}(z^2)$ and $S(z)\\,{\\cal D}_1(z^2)$ respectively; the function\n$\\widetilde{\\cal D}_*(w)$ is the Fourier transform of\n$S(z)z^2\\partial {\\cal D}_1(z^2)\/\\partial z^2$.\n\nFor our purpose of isolating intermediate states of the correlator\n\\eqn{eq:2.9} a decomposition according to an $O_3$ classification is more\nappropriate than the decomposition of eq.~\\eqn{eq:2.10}. Since the spin of\nthe heavy quark decouples, we only have to consider the gluon spin. The\nsix-component field can be decomposed into tensor structures depending on the\nonly two external vectors in the game: the four-velocity $v_\\mu$ and the\npolarisation vector of the gluon $e_\\mu$. 
This leads to the two Lorentz\nstructures for the hadronic matrix elements\n\\begin{eqnarray}\n\\label{eq:2.11}\n\\langle 0| (g_s F^a_{\\mu\\nu} h^a)(0)| H^{-}\\rangle & = &\nf^-\\,(v_\\mu e_\\nu-v_\\nu e_\\mu) \\,, \\\\\n\\vbox{\\vskip 6mm}\n\\langle 0| (g_s F^a_{\\mu\\nu} h^a)(0)| H^{+}\\rangle & = &\nf^+\\,\\varepsilon_{\\mu\\nu\\lambda\\kappa}v^\\lambda e^\\kappa \\,,\n\\end{eqnarray}\nwhere $H^\\mp$ are hadronic states with the same quantum numbers as the\ncomposite current. In the rest frame ${\\bf v}=0$, the first structure\ntransforms as a 3-vector and thus $H^-$ corresponds to a $1^-$ state, whereas\nthe second structure transforms as an axial vector and $H^+$ corresponds to a\n$1^+$ state.\n\nThrough appropriate projections the two quantum numbers can be singled\nout from the correlator $\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(w)$. Hence, we define\n\\begin{eqnarray}\n\\label{eq:2.13}\n\\widetilde{\\cal D}^-(w) & \\equiv & g^{\\mu\\rho}v^\\nu v^\\sigma \\, \\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(w)\n\\; = \\; 3\\,\\Big(\\widetilde{\\cal D}(w)+\\widetilde{\\cal D}_1(w)+\\widetilde{\\cal D}_*(w)\\Big) \\,, \\\\\n\\vbox{\\vskip 6mm}\n\\label{eq:2.14}\n\\widetilde{\\cal D}^+(w) & \\equiv & (g^{\\mu\\rho}g^{\\nu\\sigma}-2\\,g^{\\mu\\rho}v^\\nu v^\\sigma) \\,\n\\widetilde{\\cal D}_{\\mu\\nu\\rho\\sigma}(w) \\; = \\; 6\\,\\Big(\\widetilde{\\cal D}(w)+\\widetilde{\\cal D}_1(w)\\Big) \\,.\n\\end{eqnarray}\nThe Fourier transforms of the functions $\\widetilde{\\cal D}^-(w)$ and $\\widetilde{\\cal D}^+(w)$ are,\nup to the factor $S(z)$, the invariant functions ${\\cal D}_\\parallel(z^2)$ and\n${\\cal D}_\\perp(z^2)$, respectively, which have been used in the lattice\ncalculations of refs.~\\cite{gmp:97,egm:97}.\n\nUnder the assumption of quark-hadron duality, which is usually made for\nsum rule analyses \\cite{svz:79}, we model the correlators by\na contribution from the lowest lying resonances plus the perturbative\ncontinuum above a threshold $s_0$. 
Inserting the matrix elements and\nperforming the heavy quark phase space integrals one obtains\n\\begin{equation}\n\\label{eq:2.15}\n\\widetilde{\\cal D}^\\mp(w) \\; = \\; \\frac{\\kappa^\\mp\\,|f^\\mp|^2}{w-E^\\mp+i\\epsilon} +\n\\int\\limits_{s_0^\\mp}^\\infty d\\lambda \\, \\frac{\\rho^\\mp(\\lambda)}\n{\\lambda-w-i\\epsilon} \\,,\n\\end{equation}\nwhere $\\kappa^-\\!=1$, $\\kappa^+\\!=-2$ and $E$ represents the energy\nof the glue around the heavy quark. The spectral densities are defined by\n$\\rho^\\mp(\\lambda)\\equiv 1\/\\pi\\,{\\rm Im}\\,\\widetilde{\\cal D}^\\mp(\\lambda+i\\epsilon)$ and are\nknown at the next-to-leading order \\cite{ej:98}. Explicit expressions will\nbe given in the next section.\n\nAfter Fourier transformation to coordinate space the above representation\nreads:\n\\begin{eqnarray}\n\\label{eq:2.16}\n\\widetilde{\\cal D}(z) & = & -i \\int \\frac{d^4 q}{(2 \\pi)^4} \\, e^{-iqz}\\,\\widetilde{\\cal D}(w) \\nonumber \\\\\n& = & \\biggl\\{-\\kappa\\,|f|^2 e^{-iE|z|} + \\int\\limits_{s_0}^\\infty d\\lambda \\,\n\\rho(\\lambda) \\, e^{-i\\lambda |z|} \\biggr\\} \\, S(z) \\,,\n\\end{eqnarray}\nwhere the factorisation of the heavy quark propagator can be seen explicitly.\nThe inverse correlation length is found to be given by $E$.\n\n\n\n\\newsection{The sum rules}\n\nThe {\\em phenomenological side} of the sum rules has already been given by\neq.~\\eqn{eq:2.15}. In this section, we shall present the {\\em theoretical side}\nof the sum rules which arises from calculating the correlator of\neq.~\\eqn{eq:2.9} in the framework of the operator product expansion\n\\cite{svz:79,wil:69}.\n\nIn coordinate space the purely perturbative contribution up to the\nnext-to-leading order in the strong coupling constant has been calculated\nin ref.~\\cite{ej:98}. 
Here we give the corresponding results in momentum\nspace for $\\widetilde{\\cal D}^\\mp(w)$:\n\\begin{equation}\n\\label{eq:3.1}\n\\widetilde{\\cal D}^\\mp_{PT}(w) \\; = \\; (-w)^3\\, a\\,\\Big[\\, p_{10}^\\mp+p_{11}^\\mp L +\na\\, (p_{20}^\\mp+p_{21}^\\mp L+p_{22}^\\mp L^2) \\,\\Big] \\,,\n\\end{equation}\nwhere $a\\equiv\\alpha_s\/\\pi$, $L=\\ln(-2w\/\\mu)$ and the coefficients\n$p_{ij}^\\mp$ are given explicitly in the appendix. From this result one can\nimmediately calculate the corresponding spectral functions:\n\\begin{equation}\n\\label{eq:3.2}\n\\rho^\\mp(\\lambda) \\; = \\; \\lambda^3\\,a\\, \\biggl[\\, p_{11}^\\mp +\na \\,\\biggl(p_{21}^\\mp + 2\\,p_{22}^\\mp\\ln \\frac{2 \\lambda}{\\mu}\\biggr)\n\\,\\biggr] \\,,\n\\end{equation}\nwhere $\\lambda$ has to be greater than zero. Essential for the sum rule analysis\nare the contributions coming from\nthe condensates. The correlation function is expanded in powers of $1\/w$\ncorresponding to higher and higher dimensional condensates. In our case\nthe dimension three condensate $\\langle\\bar{h}h\\rangle$ vanishes since\nthe quark mass is infinite. The lowest nonvanishing term is the gluon\ncondensate of dimension four:\n\\begin{equation}\n\\label{eq:3.3}\n\\widetilde{\\cal D}^-_{FF}(w) \\; = \\; -\\,\\frac{\\pi^2}{w}\\langle aFF\\rangle \\,, \\qquad\n\\widetilde{\\cal D}^+_{FF}(w) \\; = \\; -\\,\\frac{2\\pi^2}{w}\\langle aFF\\rangle \\,.\n\\end{equation}\nThe next condensate contribution would be of dimension six, but we shall\nneglect all higher condensate contributions in this work and restrict\nourselves to the gluon condensate.\n\nIn order to suppress contributions in the dispersion integral coming from\nhigher excited states and from higher dimensional condensates, it is convenient\nto apply a Borel transformation $\\widehat{B}_T$ with $T$ being the Borel variable\n\\cite{svz:79}. Some useful formulae for the Borel transformation are also\ncollected in the appendix. 
For the phenomenological side of the sum rules,\neq.~\\eqn{eq:2.15}, we then find\n\\begin{equation}\n\\label{eq:3.4}\n\\widehat{{\\cal D}}^\\mp(T) \\; = \\; -\\,\\kappa^\\mp\\,|f^\\mp|^2 \\,e^{-E^\\mp\/T} +\n\\int\\limits_{s_0^\\mp}^\\infty d\\lambda\\,\\rho^\\mp(\\lambda)\\,e^{-\\lambda\/T} \\,.\n\\end{equation}\nFor the perturbative contribution it is convenient to apply the following\nidentity:\n\\begin{equation}\n\\label{eq:3.5}\n\\widehat{B}_T \\,\\widetilde{\\cal D}(w) \\; = \\; T^4 \\,\\widehat{B}_T \\left(\\frac{d}{dw}\\right)^{\\!4}\n\\widetilde{\\cal D}(w) \\,,\n\\end{equation}\nfrom which we obtain\n\\begin{equation}\n\\label{eq:3.6}\n\\widehat{{\\cal D}}^\\mp_{PT}(T) \\; = \\; 6\\, T^4\\, a\\,\\biggl[\\, p_{11}^\\mp + a \\,\\biggl(\np_{21}^\\mp+ \\frac{1}{3}\\,\\Gamma'(4)\\, p_{22}^\\mp + 2\\,p_{22}^\\mp \\ln\n\\frac{2T}{\\mu}\\biggr) \\,\\biggr] \\,,\n\\end{equation}\nwhere $\\gamma_E$ is Euler's constant and $\\Gamma'(4)=11-6\\gamma_E$.\nThe Borel transformed expression for the gluon condensate contribution\nis found to be:\n\\begin{equation}\n\\label{eq:3.7}\n\\widehat{{\\cal D}}^-_{FF} \\; = \\; \\pi^2 \\langle aFF\\rangle \\,, \\qquad\n\\widehat{{\\cal D}}^+_{FF} \\; = \\; 2 \\pi^2 \\langle aFF\\rangle \\,.\n\\end{equation}\nAfter Borel transformation, the correlators satisfy homogeneous\nrenormalisation group equations. Thus we can improve the perturbative\nexpressions by resumming the logarithmic contributions. The perturbative\ncontribution is then expressed in terms of the running coupling\n$a(2T)$:\n\\begin{equation}\n\\label{eq:3.8}\n\\widehat{{\\cal D}}^\\mp_{PT}(T) \\; = \\; 6\\, T^4 \\left(\\frac{a(2T)}{a(\\mu)}\\right)^\n{-\\gamma_1^\\mp\/\\beta_1} \\!a(2T)\\,\\left[\\, p_{11}^\\mp + a\\,\\left(\\,p_{21}^\\mp\n+\\frac{1}{3}\\,\\Gamma'(4)\\,p_{22}^\\mp\\right) \\,\\right] \\,,\n\\end{equation}\nwhere $\\beta_1=11\/2-n_f\/3$ is the first coefficient of the QCD\n$\\beta$-function. 
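As a quick cross-check of the identity $\Gamma'(4)=11-6\gamma_E$ quoted above, one can evaluate $\Gamma'(4)=\psi(4)\,\Gamma(4)$ numerically; a minimal sketch (the numeric value of $\gamma_E$ is assumed standard input, not taken from the text):

```python
import math

# Numeric cross-check of Gamma'(4) = 11 - 6*gamma_E. Uses Gamma'(x) =
# psi(x) * Gamma(x), with the digamma psi obtained from a central difference
# of lgamma = ln Gamma; the Euler-Mascheroni constant is assumed input.
gamma_E = 0.5772156649015329
h = 1e-6
psi_4 = (math.lgamma(4 + h) - math.lgamma(4 - h)) / (2 * h)  # psi(4) = 11/6 - gamma_E
gamma_prime_4 = psi_4 * math.gamma(4)
print(gamma_prime_4, 11 - 6 * gamma_E)  # both approximately 7.5367
```
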
Reexpanding and comparing with eq.~\\eqn{eq:3.6},\nthe anomalous dimensions $\\gamma_1^\\mp$ are found to be\n$\\gamma_1^\\mp=2\\,p_{22}^\\mp\/p_{11}^\\mp+\\beta_1$, or explicitly\n\\begin{equation}\n\\label{eq:3.9}\n\\gamma_1^- \\; = \\; 0 \\,, \\qquad\n\\gamma_1^+ \\; = \\; 3 \\,.\n\\end{equation}\nLet us note that the correlator $\\widehat{{\\cal D}}^-(T)$, which corresponds to the\nvector intermediate state, does not depend on the renormalisation scale $\\mu$\nat this order.\n\nFor the continuum contribution we first evaluate the integral with the\ngeneral formula \\cite{jm:95} which makes the numerical analysis easier:\n\\begin{equation}\n\\label{eq:3.10}\n\\int\\limits_{s_0}^\\infty d\\lambda\\,\\lambda^{\\alpha-1}\\ln^n \\frac{2\\lambda}{\\mu}\ne^{-\\lambda\/T} \\; = \\; T^\\alpha \\sum_{k=0}^{n} {n\\choose k} \\ln^k\\frac{2T}{\\mu}\n\\left[\\frac{\\partial^{n-k}}{\\partial\\alpha^{n-k}}\\Gamma\\left(\\alpha,\\frac{s_0}\n{T}\\right)\\right] \\,;\n\\end{equation}\nsome formulae for the incomplete $\\Gamma$-function $\\Gamma(\\alpha,x)$ are\ngiven in the appendix. 
We then obtain\n\\begin{eqnarray}\n\\label{eq:3.11}\n\\chi^\\mp(T,s_0) & = & \\int\\limits_{s_0}^\\infty d\\lambda\\,\\rho^\\mp(\\lambda) \\,\ne^{-\\lambda\/T} \\; = \\; T^4\\,a\\,\\Biggl\\{\\,p_{11}^\\mp \\,\\Gamma\\left(4,\n\\frac{s_0}{T}\\right) \\\\\n\\vbox{\\vskip 8mm}\n& & +\\,a\\left[\\,\\left(\\,p_{21}^\\mp+2\\,p_{22}^\\mp\\ln\\frac{2T}\n{\\mu}\\right)\\Gamma\\left(4,\\frac{s_0}{T}\\right) + 2\\,p_{22}^\\mp\\Gamma'\n\\left(4,\\frac{s_0}{T}\\right)\\,\\right]\\,\\Biggr\\} \\,, \\nonumber\n\\end{eqnarray}\nand after renormalisation group improvement\n\\begin{eqnarray}\n\\label{eq:3.12}\n\\chi^\\mp(T,s_0) & = & T^4 \\left(\\frac{a(2T)}{a(\\mu)}\\right)^{-\\gamma_1^\\mp\/\n\\beta_1}\\!a(2T)\\,\\Biggl\\{\\,p_{11}^\\mp \\,\\Gamma\\left(4,\\frac{s_0}{T}\\right)\n\\hspace{30mm} \\nonumber \\\\\n\\vbox{\\vskip 8mm}\n& & +\\;a \\left[\\,p_{21}^\\mp\\,\\Gamma\\left(4,\\frac{s_0}{T}\\right)+2\\,p_{22}^\\mp\n\\,\\Gamma'\\left(4,\\frac{s_0}{T}\\right)\\,\\right]\\,\\Biggr\\} \\,.\n\\end{eqnarray}\nIn the limit $s_0\\rightarrow 0$, eq.~\\eqn{eq:3.12} agrees with eq.\n\\eqn{eq:3.8} as it should.\n\n\n\n\n\\newsection{Numerical analysis}\n\nAfter equating the phenomenological and the theoretical part we end up with\nthe sum rule\n\\begin{equation}\n\\label{eq:4.1}\nK^\\mp(T) \\; \\equiv \\; -\\,\\kappa^\\mp |f^\\mp|^2 e^{-E^\\mp\/T} \\; = \\;\n\\widehat{\\cal D}_{FF}^\\mp + \\widehat{\\cal D}_{PT}^\\mp(T)-\\chi^\\mp(T,s_0) \\,,\n\\end{equation}\nwhere $\\kappa^-= 1$ and $\\kappa^+=-2$. 
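It may help to see why the logarithmic-derivative estimator $E=-\partial\ln K\/\partial(1\/T)$ used below isolates the energy: for a pure single-resonance model $K(T)=c\,e^{-E\/T}$ it returns $E$ for any Borel parameter $T$. A toy sketch (the numbers $c$ and $E$ are illustrative, not the paper's fitted values):

```python
import math

# Toy illustration: for a pure single-resonance model K(T) = c * exp(-E/T),
# the estimator E = -d(ln K)/d(1/T) recovers E independently of the Borel
# parameter T. E_true and c are illustrative numbers only.
E_true, c = 1.5, 0.3  # GeV; arbitrary positive constants

def K(T):
    return c * math.exp(-E_true / T)

def E_est(T, h=1e-6):
    # central difference in the variable x = 1/T
    x = 1.0 / T
    return -(math.log(K(1.0 / (x + h))) - math.log(K(1.0 / (x - h)))) / (2 * h)

print(E_est(0.7), E_est(1.4))  # both approximately 1.5, independent of T
```

In the actual analysis the continuum and condensate terms spoil this exact $T$-independence, which is why a stability region in $T$ is sought.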
In order to estimate the binding\nenergy we derive, as an immediate consequence of \\eqn{eq:4.1}:\n\\begin{equation}\n\\label{eq:4.2}\nE^\\mp \\; = \\; -\\,\\frac{\\partial}{\\partial(1\/T)}\\,\\ln K^\\mp \\; = \\;\n-\\,\\frac{\\frac{\\partial}{\\partial(1\/T)}\\Big(\\widehat{\\cal D}_{PT}^\\mp(T)-\\chi^\\mp(T,s_0)\n\\Big)}{\\Big(\\widehat{\\cal D}_{FF}^\\mp + \\widehat{\\cal D}_{PT}^\\mp(T)-\\chi^\\mp(T,s_0)\\Big)} \\,.\n\\end{equation}\nThe derivative can also be given analytically if we first differentiate with respect\nto $T$ and then perform the resummation of the logarithms. We thus find\n\\begin{eqnarray}\n\\label{eq:4.3}\n\\lefteqn{\\frac{\\partial}{\\partial (1\/T)} \\left(\\widehat{\\cal D}^\\mp_{PT}(T) -\n\\chi^\\mp(T,s_0)\\right) \\; =} \\nonumber \\\\\n& & -\\,T^5 \\left(\\frac{a(2T)}{a(\\mu)}\\right)^{-\\gamma_1^\\mp\/\\beta_1}\\!\\!\na(2T)\\,\\Bigg\\{\\, p_{11}^\\mp \\left(\\Gamma(5)-\\Gamma\\left(5,\\frac{s_0}{T}\\right)\n\\right) \\nonumber \\\\\n& & +\\,a\\,\\Bigg[\\,p_{21}^\\mp \\left(\\Gamma(5)-\\Gamma\\left(5,\\frac{s_0}{T}\\right)\n\\right) + 2\\,p_{22}^\\mp \\left(\\Gamma'(5)-\\Gamma'\\left(5,\\frac{s_0}{T}\\right)\n\\right)\\Bigg]\\Bigg\\} \\,.\n\\end{eqnarray}\nWe note that the different signs of the perturbative and non-perturbative\nterms in the $1^-$ state lead to a stabilisation for the energy sum rule,\nwhereas the equal sign in the $1^+$ state destabilises the sum rule.\n\nLet us begin our numerical analysis with the case for three light quark\nflavours. As our input parameters we use $\\langle aFF\\rangle = 0.024 \\pm 0.012 \\,\\mbox{\\rm GeV}^4$\nand $\\Lambda_{3fl} = 325 \\,\\mbox{\\rm MeV}$. In principle, the coupling constant at\nnext-to-leading order could be evaluated at any scale $\\mu$. 
As our central\nvalue in the numerical analysis we have chosen $\\mu = 2 \\,\\mbox{\\rm GeV}$.\nFor the energy $E^-$ of the $1^-$ state we obtain the\nbest stability for a continuum threshold $s_0 = 1.7 \\,\\mbox{\\rm GeV}$ in the range\n$T \\geq 0.7 \\,\\mbox{\\rm GeV}$ with an energy $E^- \\approx 1.4 \\,\\mbox{\\rm GeV}$. To estimate\nthe errors we have varied the scale $\\mu$ as well as the continuum\nthreshold $s_0$. In figure~1 we have displayed the energy $E^-$ as a\nfunction of the Borel parameter $T$ for $\\mu=1\\,\\mbox{\\rm GeV}$ (dashed lines),\n$2\\,\\mbox{\\rm GeV}$ (solid lines) and $4\\,\\mbox{\\rm GeV}$ (dotted lines). The corresponding\nvalues of the continuum threshold are $s_0=1.5\\pm0.2\\,\\mbox{\\rm GeV}$, $1.7\\pm0.2\\,\\mbox{\\rm GeV}$\nand $1.9\\pm0.2\\,\\mbox{\\rm GeV}$ respectively. The central values have been chosen\nin order to obtain maximal stability for the sum rule.\n\n\\begin{figure}[thb]\n\\vspace{0.2cm}\n\\centerline{\n\\rotate[r]{\n\\epsfysize=13cm\n\\epsffile{Em.eps} }}\n\\caption[]{The energy $E^-$ as a function of the Borel-parameter $T$\nfor three different renormalisation scales $\\mu$ and continuum\nthresholds $s_0$.\nDashed curves $\\mu=1\\,\\mbox{\\rm GeV}$: lowest $s_0=1.3\\,\\mbox{\\rm GeV}$, middle $s_0=1.5\\,\\mbox{\\rm GeV}$,\nupper $s_0=1.7\\,\\mbox{\\rm GeV}$.\nSolid curves $\\mu=2\\,\\mbox{\\rm GeV}$: lowest $s_0=1.5\\,\\mbox{\\rm GeV}$, middle $s_0=1.7\\,\\mbox{\\rm GeV}$,\nupper $s_0=1.9\\,\\mbox{\\rm GeV}$.\nDotted curves $\\mu=4\\,\\mbox{\\rm GeV}$: lowest $s_0=1.7\\,\\mbox{\\rm GeV}$, middle $s_0=1.9\\,\\mbox{\\rm GeV}$,\nupper $s_0=2.1\\,\\mbox{\\rm GeV}$.\n\\label{fig:1}}\n\\end{figure}\n\nLarger values of $s_0$ always increase $E$ but at the same time the stability\nregion shrinks and goes to smaller values of $T$. 
However, even at\n$T = 0.7 \\,\\mbox{\\rm GeV}$ the influence of the higher resonances expressed through\nthe continuum model $\\chi^-(T)$ is very large:\n$\\chi^-(0.7)\/{\\cal D}^-_{PT}(0.7) \\approx 0.75$. For $s_0 = 2.1 \\,\\mbox{\\rm GeV}$, we have\na small stability region around $T = 0.65 \\,\\mbox{\\rm GeV}$\nyielding $E^- = 1.6 \\,\\mbox{\\rm GeV}$. Here the influence of the continuum model is\naround a tolerable 50\\%; however, the perturbative corrections and the choice of the\nrenormalisation scale become more important there.\n\nAnother source of uncertainty is the value of the gluon condensate. For the\nvalue $\\langle aFF\\rangle = 0.012 \\,\\mbox{\\rm GeV}^4$, originally obtained by \\cite{svz:79}, we\nfind $E^- = 1.2 \\,\\mbox{\\rm GeV}$ at $s_0 = 1.5 \\,\\mbox{\\rm GeV}$, whereas for\n$\\langle aFF\\rangle = 0.036 \\,\\mbox{\\rm GeV}^4$ we obtain $E^- = 1.8 \\,\\mbox{\\rm GeV}$ at $s_0 = 2.4 \\,\\mbox{\\rm GeV}$. \nWe therefore conclude from the sum rules for the above-mentioned parameters\nan energy $E^-$ and a correlation length $a^-$ of:\n\\begin{equation}\nE^-_{3fl} \\; = \\; 1.5 \\pm 0.4 \\;\\mbox{\\rm GeV} \\qquad {\\rm and} \\qquad\na^-_{3fl} \\; = \\; 0.13^{+0.05}_{-0.02} \\;\\mbox{\\rm fm} \\,.\n\\end{equation}\n\nThe main sources of uncertainty are the value of the gluon condensate and the\ncontinuum contribution. Though the perturbative two-loop contributions to\nthe sum rule are very large, their influence on the value of $E^-$ is not so\ndramatic. The corrections tend to cancel in the ratio of eq. \\eqn{eq:4.2}.\nIf one determines the energy from the sum rule just containing the lowest\norder perturbation theory and chooses as the scale for $\\alpha_s$ the\napproximate value of the energy, one finds for $\\langle aFF\\rangle = 0.024 \\,\\mbox{\\rm GeV}^4$ the\nvalue $E^- = 1.9 \\,\\mbox{\\rm GeV}$.\n\nIn a world without light quarks, i.e. 
$n_f = 0$, the main influence on the\nsum rule is the expected change of the gluon condensate, which might increase\nby a factor two to three \\cite{nsvz:84}. If we perform an analysis as above,\nwe get for $\\Lambda_{0fl} = 250 \\,\\mbox{\\rm MeV}$ \\cite{lue:98},\n$\\langle aFF\\rangle = 0.048 \\pm 0.024 \\,\\mbox{\\rm GeV}^4$ and $s_0 = 2.3 \\,\\mbox{\\rm GeV}$ an energy and\ncorrelation length of \n\\begin{equation}\nE^-_{0fl} \\; = \\; 1.9 \\pm 0.5 \\;\\mbox{\\rm GeV} \\qquad {\\rm and} \\qquad\na^-_{0fl} \\; = \\; 0.11^{+0.04}_{-0.02} \\;\\mbox{\\rm fm} \\,.\n\\end{equation}\n\nFor $E^+$, the energy of the axial vector state, we obtain no stable sum rule.\nAlthough the expressions for $E^-$ and $E^+$ are equal in lowest order\nperturbation theory, higher order perturbative contributions and the gluon\ncondensate lead to a splitting in such a way that for the same values of\n$s_0$ and $T$ the resulting value for $E^-$ is higher than that for $E^+$.\n\n\n\n\\newsection{Summary and conclusions}\n\nThe analysis of the gauge invariant gluon field strength correlator\nby QCD sum rule methods allows us to establish a relation between the gluon \ncondensate and the correlation length. In order to apply the sum rule\ntechnique, which consists in the comparison of a phenomenological Ansatz\nwith a theoretical expression obtained from the operator product expansion,\nwe interpret the gluon correlator as the correlator of two colour \nneutral hybrid states composed of a (fictitious) heavy quark transforming\nunder the adjoint representation and the gluon field. The former serves as\nthe source for the gauge string in the correlator.\n\nIn this approach the decomposition into two invariant functions $D^+$ and \n$D^-$ is more appropriate than the decomposition of eq. \\eqn{eq:2.2},\nsince $D^-$ receives only contributions from $1^-$ and $D^+$ from\n$1^+$ intermediate states (ignoring the decoupled spin of the heavy\noctet quark). 
Therefore it is these functions, and {\\em not} $D$ and $D_1$, that show simple\nexponential behaviour at large distances. The perturbative expressions\nfor $D^+$ and $D^-$ are nearly degenerate, but the gluon condensate\ncontributes with different sign. It stabilises the sum rule for $D^-$ and\ndestabilises it for $D^+$.\n\nThe value of the binding energy for the lowest intermediate $1^-$ state\n(the inverse correlation length of the correlator) with three flavours is\ndetermined to be $E^-_{3fl} = 1\/a^-_{3fl} \\approx 1.5 \\pm 0.4 \\,\\mbox{\\rm GeV}$ and\nwith zero flavours to be $E^-_{0fl} = 1\/a^-_{0fl} \\approx 1.9 \\pm 0.5 \\,\\mbox{\\rm GeV}$.\nThe main sources of uncertainty are the choice of the continuum threshold\n$s_0$ and the value of the gluon condensate. \n\nThough we find no stable sum rule for the axial vector state, we have from\nthe difference of the expressions for the $1^-$ and $1^+$ state strong\nevidence for the counterintuitive result that the $1^+$ state is lighter\nthan the vector state. \n\nThe gauge invariant gluon correlator has been calculated on the lattice using\nthe cooling technique \\cite{gmp:97,egm:97}. There, the analysis has been made\nby assuming at large distances an exponential behaviour for the invariant\nfunctions $D$ and $D_1$, which in light of the present investigation seems\nless justified than the same Ansatz for the functions $D^-$ and $D^+$. The\nresults of the lattice calculation are in qualitative, but not quantitative,\nagreement with the sum rule results. The lattice researchers find correlation\nlengths for $D$ and $D_1$, $a$ and $a_1$, which are degenerate within the\nerrors. The computations have been done in quenched QCD and with four dynamical\nflavours of staggered fermions at a bare quark mass of $d\\cdot m_q = 0.01$,\nwhere $d$ denotes the lattice spacing. 
They found \\cite{egm:97}:\n\\begin{eqnarray}\n& & E^- \\; = \\; E^+ \\; = \\; \\frac{1}{a} \\; = \\; 0.90\\pm 0.14 \\;\\mbox{\\rm GeV}\n\\qquad \\mbox{for 0 flavours and} \\nonumber \\\\\n\\vbox{\\vskip 8mm}\n& & E^- \\; = \\; E^+ \\; = \\; \\frac{1}{a} \\; = \\; 0.58\\pm 0.10 \\;\\mbox{\\rm GeV}\n\\qquad \\mbox{for 4 flavours.}\n\\end{eqnarray}\n\nA preliminary analysis of the lattice data based on an exponential behaviour\nfor $D^+$ and $D^-$ \\cite{meg:98} leaves the values essentially unchanged but\nindicates a splitting of $E^+$ and $E^-$ in the same direction as proposed\nby the sum rules! The reader should also note the increase of the correlation\nlength from zero to four flavours which is predicted by the sum rules as well\nwhere it is mainly due to the decrease of the gluon condensate.\n\nIn another approach \\cite{bbv:98} the exponential behaviour of the functions\n$D^+$ and $D^* = z^2\\partial\/\\partial z^2 D_1$ for quenched QCD could be\nextracted by analysing field insertions into a Wilson loop and assuming\nfactorisation as in the model of the stochastic vacuum \\cite{dos:87,ds:88}.\nThe resulting values for the correlation lengths are smaller than those of\nthe direct lattice calculations \\cite{bbv:98} and thus compare more\nfavourably with our results:\n\\begin{equation}\nE^+ \\; = \\; \\frac{1}{a^+} \\; = \\; 1.64 \\,\\mbox{\\rm GeV}\n\\quad \\mbox{and} \\quad\nE^* \\; = \\; \\frac{1}{a^*} \\; = \\; 1.04 \\,\\mbox{\\rm GeV}\n\\quad \\mbox{for 0 flavours.} \n\\end{equation}\n\nThe sum rule analysis shows that the state investigated here namely a gluon\nconfined by an octet source has a much higher energy than the corresponding\nstate in HQET. A similar analysis of a light quark bound by a source in the\nfundamental representation \\cite{bbbd:92} yielded an energy which is by a\nfactor 2 to 4 smaller. 
This is to be expected on general grounds \\cite{nsvz:84}\nsince the case treated here is nearer to a glueball than to a heavy meson.\n\n\n\\newpage \\noindent\n{\\Large\\bf Acknowledgements}\n\n\\vspace{3mm} \\noindent\nThe authors would like to thank N. Brambilla, D. Gromes, E. Meggiolaro\nand A. Vairo for interesting discussions. M. Eidem\\\"uller thanks the\nLandesgraduiertenf\\\"orderung at the University of Heidelberg for support\nand M. Jamin would like to thank the Deutsche Forschungsgemeinschaft for\ntheir support.\n\n\n \n\\vspace{6mm} \\noindent\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\\label{sec:intro} \n\nTwisted light is a light beam with intrinsic orbital angular momentum (OAM) in the direction of motion; for reviews, see~\\cite{2011AdOP....3..161Y,Bliokh:2015doa}. The components of the vector potential that are transverse to the overall propagation direction of the beam can be given by~\\cite{2019PhRvA..99b3845Q}\n\\begin{equation}\n\\vec A_\\perp(\\vec r, t) = A_0 \\hat \\epsilon_\\Lambda \\, e^{i(k_z z-\\omega t)}\\, e^{i \\ell \\phi} \\,J_\\ell(\\kappa \\rho) \\,.\n\\end{equation}\n$A_0$ is a normalization constant, $\\hat\\epsilon$ is a polarization vector which can be made more specific as $\\hat\\epsilon_\\Lambda$ for circular polarization $\\Lambda = \\pm 1$. In wave number space, or momentum space quantum mechanically, it is a superposition of plane wave states all with the same longitudinal momentum $k_z$ and the same magnitude transverse momentum $\\kappa$ but varying azimuthal directions. The phase $e^{i \\ell \\phi}$ makes the orbital angular momentum in the direction of motion be $\\ell$ (or $\\ell \\hbar$), and the total angular momentum is $m_\\gamma = \\ell + \\Lambda$.\nThe swirling wavefront, in addition to longitudinal momentum, has transverse momentum components that can twist objects in its path or give them sideways kicks. 
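As a sanity check on the statement that the phase $e^{i\ell\phi}$ carries orbital angular momentum $\ell$ along the propagation direction, one can verify numerically that it is an eigenfunction of $L_z = -i\,\partial\/\partial\phi$ (in units of $\hbar$); a sketch with illustrative values:

```python
import cmath

# Illustrative check (not a derivation from this text): the vortex phase
# factor exp(i*l*phi) is an eigenfunction of the orbital angular momentum
# operator L_z = -i d/dphi with eigenvalue l (in units of hbar).
l, phi, h = 2, 0.7, 1e-6  # arbitrary test values

def f(p):
    return cmath.exp(1j * l * p)

Lz_f = -1j * (f(phi + h) - f(phi - h)) / (2 * h)  # central-difference derivative
eigenvalue = Lz_f / f(phi)
print(abs(eigenvalue - l) < 1e-4)  # True: the eigenvalue is l
```
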
\n\nThe expressions that give the momentum and angular momentum densities are obtained from the energy-momentum tensor~\\cite{Jauch:1976ava,Bjorken:1965zz}.\nUsing the electromagnetic Lagrangian \n$L = -\nF_{\\alpha\\beta} F^{\\alpha\\beta}\/(4\\mu_0)$ and the canonical or Noether procedure gives the canonical energy-momentum tensor\n\\begin{equation}\n T^{\\mu\\nu} =\t- \\frac{1}{\\mu_0} F^{\\mu\\alpha} \\partial^\\nu \\! A_\\alpha - g^{\\mu\\nu} L \\,.\n\\end{equation}\nIt is not symmetric in its two indices. Aside from aesthetics, the field equation in General Relativity requires a symmetric energy-momentum tensor. It can be symmetrized by adding a total derivative~\\cite{1940Phy.....7..449B,rosenfeld1940energy}, whereby spatial integrals over the energy-momentum tensor remain unchanged. Doing so gives the Belinfante tensor\n\\begin{equation}\n \\theta^{\\mu\\nu} = - \\frac{1}{\\mu_0} F^{\\mu\\alpha} F^\\nu_{\\ \\alpha} -\tg^{\\mu\\nu} L \\,.\n\\end{equation}\nThe momentum densities $\\vec {\\mathcal P}$ are obtained from $T^{0i}\\!\\,\/c$ or $\\theta^{0i}\\!\\,\/c$, and are\n\\begin{equation}\n \\vec {\\mathcal P} = \n\t\t\t \\epsilon_0 \\vec E \\cdot (\\vec\\nabla) \\vec A\t\\,,\t\t\\quad \\text{canonical}\\ \n\\end{equation}\nor \n\\begin{equation}\n \\vec {\\mathcal P} = \n\t\t\t \\epsilon_0 \\vec E \\times \\vec B\t\t\\,,\t\t\\quad \\ \t\\text{Belinfante}.\n\\end{equation}\nIn general, they are numerically distinct.\nThe related results for angular momentum density $\\vec {\\mathcal J}$ are\n\\begin{align}\n\\vec {\\mathcal J} = \\left\\{\n\t\t\t\\begin{array}{l}\n\t\\epsilon_0 \\vec E \\cdot (\\vec r \\times \\vec\\nabla) \\vec A\t+ \\epsilon_0 \\vec E \\times \\vec A\n\t\t\\,,\t\t\\quad\\, \\text{canonical},\t\\\\\n\t\t\t \\epsilon_0 \\, \\vec r \\times ( \\vec E \\times \\vec B )\t\t\\,,\t\t\n\t\t\\qquad\\qquad\\qquad\t\\text{Belinfante}.\t\t\n\t\t\t\\end{array}\n\\right.\n\\end{align}\nAgain, the two expressions differ by just a total derivative. 
But they do differ locally, and so they do not lead to the same torque upon small test objects.

Interestingly, an alternative way to obtain the momentum density of the electromagnetic field is to infer it from forces on small dielectrics calculated from the Lorentz force law~\cite{2009PhRvL.102k3602A}. The momentum density obtained this way agrees with the canonical result. Further discussions of momentum density definitions can be found in~\cite{Bliokh:2015doa,2013EJPh...34.1337B,1978OptCo..24..185H,2019PhRvA..99f3832W,2014OExpr..22.6586O,2002PhRvL..88e3601O,2003PhRvL..91i3602G}.

\section{Angular and transverse momentum density tests}

We will discuss some predictions for making objects spin by shining twisted light upon them, thinking in one case in terms of the torque generated by absorbing the electromagnetic angular momentum, and in another case in terms of the sideways kick, sometimes called the superkick~\cite{2013JOpt...15l5701B}, given to individual particles in a system.

One specific result for the $z$-component (averaged over time) of the angular momentum density, plotted versus the distance $\rho$ out from the vortex axis, is shown in Fig.~\ref{fig:example}. In this figure, the projected total angular momentum of the field is $m_\gamma=2$ and the circular polarization is $\Lambda=\sigma_z = 1$. One sees significant differences between the canonical and Belinfante results at most radii.


 \begin{figure} [b]
 \begin{center}
 \begin{tabular}{c}
 \includegraphics[height=5cm]{ang_mom_dens_i21}
 \end{tabular}
 \end{center}
 \caption[example] 
 { \label{fig:example} 
Angular momentum density on a ring of radius $\rho$ for a twisted light beam of total angular momentum $m_\gamma =2$ and circular polarization $\sigma_z =1$, with angular frequency $\omega$ and $A_0$ normalizing the strength of the beam's electric field.
The pitch angle $\theta_k$ is $\arctan(\kappa/k_z)$.
The Belinfante case has regions where the angular momentum density swirls in a direction opposite to the overall angular momentum.}
 \end{figure} 


\paragraph{One test: twisted light on a cylindrical shell.}

Given the plot just mentioned, we will consider shining a twisted beam on a cylindrical shell, or hollow cylinder. (Note that the torque on a filled cylinder would be an integrated torque, which is about the same for the two cases, depending on the radius of the cylinder.)


Let the cylinder and beam axes be coincident, as in Fig.~\ref{fig:hollow}; the torque on the shell is then proportional to the angular momentum absorbed from the beam at the radius of the cylinder.


 \begin{figure} [ht]
 \begin{center}
 \begin{tabular}{c} 
 \includegraphics[height=4.5cm]{TwLIght_Cylind_SGi}
	\end{tabular}
	\end{center}
 \caption[example] 
 { \label{fig:hollow} 
Twisted light hitting a hollow cylinder, with axes coincident.}
 \end{figure} 
 

For radius $\rho = 2 \,\mu\text{m} \approx 2.74 \,\lambda$, with the other parameters selected below and the cylindrical shell sitting in a kerosene bath (the viscous drag is calculable~\cite{1995PhRvL..75..826H,1996PhRvA..54.1593F}), the terminal rotation frequency is
\begin{equation}
 f \approx
0.55 \text{\,Hz} ,	
\quad \text{canonical} , 
\end{equation}
or
\begin{equation}
 f \approx 
0.23 \text{\,Hz} ,	
\quad \text{Belinfante}.
\end{equation}
The other parameters used in this example are: $\lambda = 0.729 \,\mu$m, a beam power of 4 mW, the beam placed inside a Gaussian envelope of width $10\,\lambda$, a shell thickness of 0.5 $\mu$m, a cylinder length of 2 $\mu$m, and a shell material density twice that of water.


\paragraph{Another test: a two-ion rotor.}

We will study this case in terms of the effect of transverse momentum kicks, or superkicks, upon atoms constrained to move in a circle.

Consider a rotor with two $^{40}$Ca ions constrained to revolve in a circular path in a plane. The atoms are well constrained to stay in the plane, and their radius from the center of the circle is well fixed, but they are free to revolve about the center. Such a rotor exists, and is described in~\cite{2019PhRvL.123m3202U}, but has not been used for the present purpose. 


 \begin{figure} [ht]
 \begin{center}
 \begin{tabular}{c} 
 \includegraphics[height=4cm]{CaRotori}
	\end{tabular}
	\end{center}
 \caption[example] 
 { \label{fig:carotor} 
A two-ion Calcium rotor.}
 \end{figure} 
 

Shine the beam perpendicular to the plane of the rotor, with the vortex line passing through its center. The beam can excite either ground-state ion, and the ion receives one transverse momentum kick per lifetime of the excited state. This gives a force $dp_\perp/dt$, hence a torque and an angular acceleration.
The transverse momentum kick per absorption depends on the ratio of the momentum density to the photon number density, and the numerator and denominator of this ratio give a simple result for the canonical case,
\begin{equation}
 p_{\perp} = 
		 \frac{ \ell \, \hbar} {\rho}
		\qquad \qquad \quad \, \text{canonical},
\end{equation}
and
\begin{equation}
 p_{\perp} = 
			\hbar \kappa 
			\displaystyle \frac{ J_{\ell+\Lambda}(\kappa \rho) }{ J_{\ell}(\kappa \rho) }
			 \qquad \text{Belinfante}	.
\end{equation}
Again, $\Lambda$ is the photon helicity; a plot of the anticipated angular acceleration for $\Lambda = 1$ and $\ell = 1$ is shown in Fig.~\ref{fig:accel}.
The plot was made using the $4s_{1/2} \to 4p_{3/2}$ or $4p_{1/2}$ transitions, where the excited state is not metastable but has a fast spontaneous decay.
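To make the contrast concrete, the two superkick expressions above can be evaluated numerically, together with the angular acceleration they imply for the two-ion rotor (torque from one kick per lifetime on each ion, divided by the moment of inertia of two point masses). This is our own order-of-magnitude sketch: the pitch angle, orbit radius, and excited-state lifetime are assumed values, and one kick per lifetime presumes saturated excitation; none of these numbers are quoted from the text.

```python
import numpy as np
from scipy.special import jv  # Bessel J_ell

hbar = 1.054571817e-34        # J s
ell, Lambda = 1, 1            # OAM and helicity, as in the figure
# Assumed beam and rotor parameters (illustrative only)
wavelength = 0.729e-6                          # m
theta_k = np.radians(10)                       # pitch angle (assumption)
kappa = (2 * np.pi / wavelength) * np.sin(theta_k)
rho = 2.0e-6                                   # m, ion orbit radius (assumption)
tau = 7.0e-9                                   # s, nominal 4p lifetime (assumption)
m_ion = 40 * 1.66053907e-27                    # kg, one 40Ca ion

# Superkick per absorption, canonical vs. Belinfante
p_can = ell * hbar / rho
p_bel = hbar * kappa * jv(ell + Lambda, kappa * rho) / jv(ell, kappa * rho)

# One kick per lifetime -> force -> torque on two ions -> angular acceleration
def alpha(p_perp):
    force = p_perp / tau              # per ion
    torque = 2 * rho * force          # two ions at radius rho
    inertia = 2 * m_ion * rho**2      # point masses on a rigid rotor
    return torque / inertia           # = p_perp / (m_ion * rho * tau)

print(p_can, p_bel, alpha(p_can), alpha(p_bel))
```

The canonical kick depends only on $\ell$ and $\rho$, while the Belinfante kick inherits the local Bessel structure of the beam, so the two estimates can differ by large factors and even in sign.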
The situation is analogous to laser cooling~\cite{1975OptCo..13...68H}: the spontaneous decay is isotropic, so statistically there is no net momentum kick from the decay, but the excitation always involves a momentum kick in the same azimuthal direction. 
The predicted angular accelerations are wildly different, as Fig.~\ref{fig:accel} shows.



 \begin{figure} [t]
 \begin{center}
 \begin{tabular}{c} 
 \includegraphics[height=5cm]{rotortwistedacceli.pdf}
	\end{tabular}
	\end{center}
 \caption[example] 
 { \label{fig:accel} 
Calculated angular acceleration for a two-ion Calcium rotor of varying radii, with further description in the text.}
 \end{figure} 
 



\section{Closing remarks}
\label{sec:closing}



More details and additional situations contrasting the canonical and Belinfante momentum and angular momentum density expressions can be found in~\cite{Afanasev:2022vgl}. 

The canonical and the Belinfante versions of the electromagnetic energy-momentum tensor are by design the same for integrated quantities such as the total momentum or total angular momentum of the field. However, point by point in space they are different, and on small test objects they give different results in calculations of the force or torque from electromagnetic waves striking them. The differences are only apparent if the light has a structured wave front, and we have worked out examples using twisted photons. In certain regions the differences are especially dramatic. Not discussed here are tractor-beam effects in the Belinfante case, which are treated in~\cite{Afanasev:2022vgl} and were also noticed in~\cite{Novitsky:07}. However, the dramatic tractor-beam effects lie in limited spatial regions and are sensitive to details of the beam preparation.
On the other hand, the dramatic torque differences on which we have focused in this talk are robust, exist over broad spatial regions, and could well be confirmed or denied experimentally using ringlike or end-weighted rotors.

\section*{Addendum}

\paragraph{Single particle off-axis in vortex beam.} At the conference it was also possible to show a third test of the possible angular momentum densities. This was to put a small test particle off-axis in a twisted photon beam, whence the transverse momentum absorption, or superkick, would cause the particle to revolve about the vortex axis.

Such an experiment was actually performed nearly two decades ago~\cite{2003PhRvL..91i3602G}, but measurements were made only at the maxima of the ringlike intensity profile of the twisted beam wavefront. It happens that the difference between the predictions of the canonical and Belinfante angular momentum is proportional to the derivative of the intensity with respect to distance from the axis~\cite{Afanasev:2022vgl}. Hence the predictions of the two possibilities are the same at the points where the measurements were made. 


 \begin{figure} [t]
 \begin{center}
 \begin{tabular}{c} 
 \includegraphics[height=6cm]{Garces2breali.pdf}
	\end{tabular}
	\end{center}
 \caption[example] 
 { \label{fig:revolution} 
Predicted revolution frequency of a small off-axis test particle in a twisted beam, plotted vs.\ the cube of its distance from the vortex axis, with further description in the text.}
 \end{figure} 
 

Fig.~\ref{fig:revolution} shows the predictions of the Belinfante and canonical angular momentum densities for the off-axis test particle, given as its revolution frequency (for a certain laser power and for $\ell=2$ and $\Lambda=1$) plotted vs.~the distance from the vortex axis cubed, as in~\cite{2003PhRvL..91i3602G}. The dots show the radii where measurements have been made.
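The coincidence at the ring maxima can be checked with the Bessel recurrence $J_\ell'(x) = (\ell/x)\,J_\ell(x) - J_{\ell+1}(x)$: at an extremum of the intensity $J_\ell(x)^2$ one has $J_{\ell+1}(x)/J_\ell(x) = \ell/x$, so for $\Lambda = 1$ the Belinfante kick $\hbar\kappa\,J_{\ell+\Lambda}/J_\ell$ collapses to the canonical $\ell\hbar/\rho$. A short numerical confirmation (our sketch, for the $\ell = 2$, $\Lambda = 1$ case of the figure):

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import minimize_scalar

ell = 2  # as in the addendum example, with Lambda = 1

# Locate the first maximum of the ring intensity J_ell(x)^2, where x = kappa*rho
res = minimize_scalar(lambda x: -jv(ell, x)**2, bounds=(1.0, 5.0), method='bounded')
x_max = res.x

# At the maximum, the Belinfante ratio J_{ell+1}/J_ell equals ell/x, so
# hbar*kappa*J_{ell+1}/J_ell reduces to ell*hbar/rho: the predictions agree
ratio_belinfante = jv(ell + 1, x_max) / jv(ell, x_max)
ratio_canonical = ell / x_max
print(x_max, ratio_belinfante, ratio_canonical)
```

Away from the maxima the two ratios separate again, which is why measurements at radii between the intensity rings would be decisive.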
Clearly the predictions are significantly different, and measurements with the test particle held at different radii could be quite decisive. 

\acknowledgments
 
We thank Elliot Leader for stimulating conversations. A.A. thanks the US Army Research Office Grant W911NF-19-1-0022 for support, C.E.C. thanks the National Science Foundation (USA) for support under grant PHY-1812326, and A.M. thanks the SERB-POWER Fellowship, Department of Science and Technology, Govt. of India for support.