diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznimj" "b/data_all_eng_slimpj/shuffled/split2/finalzznimj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznimj" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nFinding numerical approximations of fractional integrals or fractional derivatives\nof a given function is one of the most important problems in theory of numerical\nfractional calculus. The operators of fractional integration and fractional\ndifferentiation are more complicated than the classical ones, so their evaluation \nis also more difficult than the integer order case. Li et al. use spectral \napproximations for computing the fractional integral and the Liouville--Caputo \nderivative \\cite{sal2}. They also developed\nnumerical algorithms to compute fractional integrals and Liouville--Caputo derivatives\nand for solving fractional differential equations based on piecewise polynomial\ninterpolation \\cite{sal3}. In \\cite{sal4}, Pooseh et al. presented two approximations\nderived from continuous expansions of Riemann--Liouville fractional derivatives\ninto series involving integer order derivatives and they present application\nof such approximations to fractional differential equations and fractional problems\nof the calculus of variations. Some other computational algorithms\nare also introduced in \\cite{sal7,sal5,sal6}.\nFor increasing the accuracy of the calculation, using the Gauss--Jacobi quadrature rule\nis appropriate for removing the singularity of the integrand. So considering\nthe nodes and weights of the quadrature rule is an important problem.\nThere are many good papers in the literature addressing the question of how to\nfind the nodes and weights of the Gauss quadrature rule---see \\cite{sal10,sal9,sal8}\nand references therein. The more applicable and developed method\nis the Golub--Welsch (GW) algorithm \\cite{sal12,sal11}, that is used by many\nof the mathematicians who work in numerical analysis. This method takes $O(n^2)$\noperations to solve the problem of finding the nodes and weights.\nHere we use a new method introduced by Hale and Townsend \\cite{sal13},\nwhich is based on the Glasier--Liu--Rokhlin (GLR) algorithm \\cite{sal14}.\nIt computes all the nodes and weights of the $n$-point quadrature rule\nin a total of $O(n)$ operations.\n\nThe structure of the paper is as follows.\nIn Section~\\ref{sec2}, we introduce the definitions of fractional operators\nand some relations between them. Section~\\ref{sec3} discusses the Gauss--Jacobi\nquadrature rule of fractional order and its application to approximate\nthe fractional operators. In Section~\\ref{sec4} we present two methods\nfor finding the nodes and weights of Gauss--Jacobi and discuss their advantages\nand disadvantages. Two illustrative examples are solved. In Section~\\ref{sec5}\napplications to ordinary fractional differential equations are presented.\nIn Section~\\ref{sec6} we investigate problems of the calculus of variations\nof fractional order and present a new algorithm for solving boundary\nvalue problems of fractional order. 
We end with Section~\\ref{sec7},\npresenting conclusions and possible directions of future work.\n\n\n\\section{Preliminaries and notations about fractional calculus}\n\\label{sec2}\n\nIn this section we give some necessary preliminaries of the\nfractional calculus theory \\cite{MR2218073,sal15},\nwhich will be used throughout the paper.\n\n\\begin{definition}\nThe left and right Riemann--Liouville fractional integrals\nof order $\\alpha$ of a given function $f$ are defined by\n\\begin{equation*}\n_aI_x^\\alpha f(x)=\\frac{1}{\\Gamma(\\alpha)}\n\\int_a^x(x-t)^{\\alpha-1}f(t)dt\n\\end{equation*}\nand\n\\begin{equation*}\n_xI_b^{\\alpha}f(x)\n=\\frac{1}{\\Gamma(\\alpha)}\\int_x^b(t-x)^{\\alpha-1}f(t)dt,\n\\end{equation*}\nrespectively, where $\\Gamma$ is Euler's gamma function, that is,\n\\begin{equation*}\n\\Gamma(x)=\\int_0^\\infty t^{x-1}e^{-t}dt,\n\\end{equation*}\n$\\alpha>0$ with $n-1<\\alpha\\leq n$, $n\\in\\mathbb{N}$, and $a<b$. For example, for power functions one has\n\\begin{equation*}\n_aI_x^{\\alpha}(x-a)^{\\beta}=\\frac{\\Gamma(\\beta+1)}{\\Gamma(\\beta+\\alpha+1)}(x-a)^{\\beta+\\alpha},\n\\quad \\beta>-1.\n\\end{equation*}\nSimilar relations hold for the right\nRiemann--Liouville fractional operator. On the other hand, we have the left\nand right Riemann--Liouville fractional derivatives of order $\\alpha>0$ \nthat are defined by\n\\begin{equation*}\n_aD_x^{\\alpha}f(x)=\\frac{1}{\\Gamma(n-\\alpha)}\\frac{d^n}{dx^n}\\int_a^x(x-t)^{n-\\alpha-1}f(t)dt\n\\end{equation*}\nand\n\\begin{equation*}\n_xD_b^\\alpha f(x)=\\frac{(-1)^n}{\\Gamma(n-\\alpha)}\\frac{d^n}{dx^n}\\int_x^b(t-x)^{n-\\alpha-1}f(t)dt,\n\\end{equation*}\nrespectively.\n\\end{definition}\n\nThere are some disadvantages when trying to model real world phenomena with fractional\ndifferential equations, when fractional derivatives are taken in the Riemann--Liouville sense.\nOne of them is that the Riemann--Liouville derivative of a constant function \nis not zero. Therefore, a modified definition of the fractional differential operator,\nwhich was first considered by Liouville and many decades later\nproposed by Caputo \\cite{sal16}, is considered.\n\n\\begin{definition}\nThe left and right fractional differential operators in the Liouville--Caputo \nsense are given by\n\\begin{equation*}\n_a^CD_x^\\alpha f(x)\n=\\frac{1}{\\Gamma(n-\\alpha)}\\int_a^x(x-t)^{n-\\alpha-1}f^{(n)}(t)dt\n\\end{equation*}\nand\n\\begin{equation*}\n_x^CD_b^\\alpha f(x)\n=\\frac{(-1)^n}{\\Gamma(n-\\alpha)}\\int_x^b(t-x)^{n-\\alpha-1}f^{(n)}(t)dt,\n\\end{equation*}\nrespectively.\n\\end{definition}\nThe Liouville--Caputo derivative has the following two properties\nfor $n-1<\\alpha\\leq n$ and $f\\in L_1[a,b]$:\n\\begin{equation*}\n({_a^CD_x^\\alpha} {_aI_x^\\alpha} f)(x)=f(x)\n\\end{equation*}\nand\n\\begin{equation*}\n({_aI_x^\\alpha} {_a^CD_x^\\alpha} f)(x)\n=f(x)-\\sum_{k=0}^{n-1}f^{(k)}(a^{+})\\frac{(x-a)^k}{k!},\\quad x>a.\n\\end{equation*}\n\n\\begin{remark}\nUsing the linearity property of the ordinary integral operator, one deduces\nthat the left and right Riemann--Liouville integrals, the left and right\nRiemann--Liouville derivatives and the left and right\nLiouville--Caputo derivatives are linear operators.\n\\end{remark}\n
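The definitions above are straightforward to check numerically. The following \\texttt{Python} sketch (ours, for illustration only; it assumes the \\texttt{scipy} library, whose adaptive routine \\texttt{quad} supports algebraic endpoint weights) evaluates the left Riemann--Liouville integral and the left Liouville--Caputo derivative of $f(t)=t^2$ with $a=0$, and compares them with the classical power rules $_0I_x^\\alpha t^2=\\frac{\\Gamma(3)}{\\Gamma(3+\\alpha)}x^{2+\\alpha}$ and $_0^CD_x^\\alpha t^2=\\frac{\\Gamma(3)}{\\Gamma(3-\\alpha)}x^{2-\\alpha}$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import gamma\n\nalpha = 0.5                 # fractional order, here 0 < alpha <= 1 (n = 1)\nf  = lambda t: t**2         # test function\ndf = lambda t: 2.0*t        # its first derivative\n\ndef rl_integral(x):\n    # left Riemann-Liouville integral; QAWS handles the (x-t)^(alpha-1) weight\n    val, _ = quad(f, 0.0, x, weight='alg', wvar=(0.0, alpha - 1.0))\n    return val/gamma(alpha)\n\ndef caputo(x):\n    # left Liouville-Caputo derivative: weight (x-t)^(-alpha) applied to f'\n    val, _ = quad(df, 0.0, x, weight='alg', wvar=(0.0, -alpha))\n    return val/gamma(1.0 - alpha)\n\nx = 1.3\nprint(rl_integral(x), gamma(3)/gamma(3 + alpha)*x**(2 + alpha))\nprint(caputo(x),      gamma(3)/gamma(3 - alpha)*x**(2 - alpha))\n\\end{verbatim}\n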
\nAnother definition of a fractional differential operator, that is useful for numerical\napproximations, is the Gr\\\"{u}nwald--Letnikov derivative, which is a generalization \nof the ordinary derivative. It is defined as follows:\n\\begin{equation*}\nD_{GL}^{\\alpha}f(t)=\\lim_{n\\to\\infty}\\frac{(\\frac{t}{n})^{-\\alpha}}{\\Gamma(-\\alpha)}\n\\sum_{j=0}^{n-1}\\frac{\\Gamma(j-\\alpha)}{\\Gamma(j+1)}f\\left(t-\\frac{tj}{n}\\right).\n\\end{equation*}\n\n\n\\section{Fractional Gauss--Jacobi quadrature rule}\n\\label{sec3}\n\nIt is well known that the Jacobi polynomials\n$\\{P_n^{(\\lambda,\\nu)}(x)\\}_{n=0}^{\\infty}$, $\\lambda, \\nu >-1$, $x\\in [-1,1]$,\nare the orthogonal system of polynomials\nwith respect to the weight function\n$$\n(1-x)^\\lambda (1+x)^\\nu,\n\\quad \\lambda, \\nu >-1,\n$$\non the segment $[-1,1]$:\n\\begin{equation}\n\\label{eq3.1}\n\\int_{-1}^{1}(1-x)^\\lambda (1+x)^\\nu P_n^{(\\lambda,\\nu)}(x)\nP_m^{(\\lambda,\\nu)}(x) dx=\\partial_n^{\\lambda,\\nu}\\delta_{mn},\n\\end{equation}\nwhere\n\\begin{equation*}\n\\partial_n^{\\lambda,\\nu}=\\Vert P_n^{(\\lambda,\\nu)}\\Vert_{w^{\\lambda,\\nu}}^2,\n\\quad w^{\\lambda,\\nu}(x)=(1-x)^\\lambda (1+x)^\\nu,\n\\end{equation*}\n$$\n\\delta_{mn}=\\left\\{\n\\begin{array}{ll}\n1, & m=n,\\\\\n0, & m\\neq n\n\\end{array} \\right.\n$$\n(see, e.g., \\cite{sal17}). These polynomials satisfy\nthe three-term recurrence relation\n\\begin{equation}\n\\label{eq3.4}\n\\begin{gathered}\nP_0^{(\\lambda,\\nu)}(x)=1,\n\\quad P_1^{(\\lambda,\\nu)}(x)=\\frac{1}{2} (\\lambda+\\nu+2) x +\\frac{1}{2} (\\lambda-\\nu),\\\\\nP_{n+1}^{(\\lambda,\\nu)}(x)=\\left( a_n^{\\lambda,\\nu} x- b_n^{\\lambda,\\nu}\\right)\nP_n^{(\\lambda,\\nu)}(x) - c_n^{\\lambda,\\nu} P_{n-1}^{(\\lambda,\\nu)}(x), \n\\quad n\\geq 1,\n\\end{gathered}\n\\end{equation}\nwhere\n\\begin{align*}\na_n^{\\lambda,\\nu}\n&=\\dfrac{(2n+\\lambda+\\nu+1) (2n+\\lambda+\\nu+2)}{2(n+1) (n+\\lambda+\\nu+1)},\\\\\n&\\quad\\\\\nb_n^{\\lambda,\\nu}\n&=\\dfrac{(\\nu^2-\\lambda^2) (2n+\\lambda+\\nu+1)}{2(n+1) \n(n+\\lambda+\\nu+1) (2n+\\lambda+\\nu)},\\\\\n&\\quad\\\\\nc_n^{\\lambda,\\nu}\n&=\\dfrac{(n+\\lambda) (n+\\nu) (2n+\\lambda+\\nu+2)}{(n+1) \n(n+\\lambda +\\nu +1) (2n+\\lambda +\\nu)}.\n\\end{align*}\nThe explicit form of the Jacobi polynomials is\n\\begin{equation}\n\\label{eq3.5}\nP_n^{(\\lambda,\\nu)}(x)=\\sum_{k=0}^{n}\n\\frac{(n+\\lambda+\\nu+1)_{k}\\,(\\lambda+k+1)_{n-k}}{k!\\,(n-k)!}\\left(\\frac{x-1}{2}\\right)^{k},\n\\end{equation}\nwhere we use Pochhammer's notation:\n$$\n(a)_l=a(a+1)(a+2)\\cdots (a+l-1)\n$$\n(see \\cite{sal25}). Furthermore, the Jacobi polynomials\nsatisfy the following relations:\n\\begin{equation*}\nP_n^{(\\lambda,\\nu)}(-x)=(-1)^n P_n^{(\\nu,\\lambda)}(x),\n\\end{equation*}\n\\begin{equation*}\n\\frac{d}{dx}P_n^{(\\lambda,\\nu)} (-x)=(-1)^{n+1} \\frac{d}{dx}P_n^{(\\nu,\\lambda)}(x),\n\\end{equation*}\n\\begin{multline}\n\\label{equ3.4}\n(2n+\\lambda+\\nu) (1-x^2) \\frac{d}{dx} P_n^{(\\lambda,\\nu)}(x)\\\\\n=n\\left( \\lambda-\\nu-(2n+\\lambda+\\nu)x\\right) P_n^{(\\lambda,\\nu)}(x)\n+2(n+\\lambda) (n+\\nu)P_{n-1}^{(\\lambda,\\nu)}(x),\n\\end{multline}\n\\begin{equation*}\nP_n^{(\\lambda,\\nu)}(1)=\\binom{n+\\lambda}{n},\n\\quad P_n^{(\\lambda,\\nu)}(-1)\n=(-1)^n \\,\\dfrac{\\Gamma (n+\\nu+1)}{n!\\Gamma(\\nu+1)}.\n\\end{equation*}\nOn the other hand, the Gauss--Jacobi quadrature\nformula with remainder term is given by\n\\begin{equation}\n\\label{eq:GJqf}\n\\int_{-1}^1 (1-x)^\\lambda (1+x)^\\nu f(x) dx\n= \\sum_{k=1}^n A_k f(x_k)+R_n(f),\n\\end{equation}\nwhere $x_k$, $k=1,\\ldots, n$, are the zeros of\n$P_n^{(\\lambda,\\nu)}$, the weights $A_k$ are given by\n\\begin{equation*}\nA_k = \\dfrac{2^{\\lambda+\\nu+1} \\Gamma(\\lambda+n+1)\n\\Gamma(\\nu+n+1)}{n!\\, \\Gamma (\\lambda+\\nu+n+1)\n\\left(\\frac{d}{dx} P_n^{(\\lambda,\\nu)}(x_k)\\right)^2 (1-x_k^2)}\n\\end{equation*}
\\Gamma (\\lambda+\\nu+n+1)\n\\left(\\frac{d}{dx} P_n^{(\\lambda,\\nu)}(x_k)\\right)^2 (1-x_k^2)}\n\\end{equation*}\nand the error is given by\n\\begin{equation*}\nR_n(f) = \\dfrac{f^{(2n)}(\\eta)}{(2n)!}\n\\,\\dfrac{2^{\\lambda+\\nu+2n+1} n! \\Gamma(\\lambda+\\eta+1)\n\\Gamma(\\nu +\\eta +1)\\Gamma(\\nu+\\lambda+\\eta\n+1)}{(\\lambda+\\nu+2n+1)\\left(\\Gamma(\\lambda+\\nu+2n+1)\\right)^2}\n\\end{equation*}\n(see \\cite{sal17}). We know that formula \\eqref{eq:GJqf}\nis exact for all polynomials of degree up to $2n-1$\nand is valid if $f(x)$ possesses no singularity in $[-1,1]$\nexcept at points $\\pm 1$.\n\nNow, as a special case, we introduce the fractional Jacobi polynomials\n$f_n^{\\alpha-1}(x)$, that is, $\\lambda=\\alpha-1$ and $\\nu=0$ in \\eqref{eq3.1}.\nThe set of fractional Jacobi polynomials\n$\\{f_n^{\\alpha-1}\\}_{n=0}^{\\infty}$\nis an orthogonal system of polynomials that are orthogonal\nwith respect to the weight function $(1-x)^{\\alpha-1}$,\n$\\alpha>0$, on the segment $[-1,1]$. It means that\n\\begin{equation*}\n\\int_{-1}^1w^{\\alpha-1}(x) f_n^{\\alpha-1}(x) f_m^{\\alpha-1}(x) dx\n= \\zeta_n^{\\alpha-1} \\delta_{mn},\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\zeta_n^{\\alpha-1}\n=\\Vert f_n^{\\alpha-1}\\Vert_{w^{\\alpha-1}}^2,\n\\quad w^{\\alpha-1}(x)= (1-x)^{\\alpha-1}.\n\\end{equation*}\nThe three-term recurrence relation for fractional Jacobi polynomial is given by\n\\begin{gather*}\nf_0^{\\alpha-1}(x)=1,\n\\quad f_1^{\\alpha-1}(x)=\\frac{\\alpha-1}{2} (x+1),\\\\\nf_{n+1}^{\\alpha-1} (x)= \\left(A_n^\\alpha x+ B_n^\\alpha \\right)\nf_n^{\\alpha-1}-C_n^\\alpha f_{n-1}^{\\alpha-1}(x),\\quad n\\geq 2,\n\\end{gather*}\nwhere\n\\begin{align*}\nA_n^\\alpha\n&= \\dfrac{(2n+\\alpha) (2n+\\alpha+1)}{2(n+1) (n+\\alpha)},\\\\\n&\\quad \\\\\nB_n^\\alpha\n&= \\dfrac{(\\alpha-1)^2 (2n+\\alpha)}{2(n+1)(2n+\\alpha-1) (n+\\alpha)},\\\\\n&\\quad \\\\\nC_n^\\alpha\n&=\\dfrac{n(n+\\alpha-1) (2n +\\alpha +1)}{(n+1) (n+\\alpha) (2n+\\alpha-1)}.\n\\end{align*}\nUsing \\eqref{equ3.4}, we have that\n\\begin{multline}\n\\label{equ3.5}\n(2n+1-\\alpha) (1-x^2) \\frac{d}{dx} f_n^{\\alpha -1} (x)\\\\\n=n\\left(1-\\alpha-(2n+1-\\alpha)x\\right) f_n^{\\alpha -1}(x)\n+2n(n+1-\\alpha) f_{n-1}^{\\alpha -1}(x).\n\\end{multline}\nFurthermore, the fractional Gauss--Jacobi quadrature rule is\n\\begin{equation}\n\\label{eq3.19}\n\\int_{-1}^1 (1-x)^{\\alpha-1} f(x) dx = \\sum_{k=1}^n l_k f(x_k) + E_n^\\alpha (f),\n\\end{equation}\nwhere $x_k$, $k=1,2,\\ldots, n$, are the zeros\nof $f_n^{\\alpha-1}$, the weights $l_k$ are given by\n\\begin{equation*}\nl_k=\\dfrac{2^\\alpha}{(1-x_k^2)[\\frac{d}{dx}f_n^{\\alpha-1} (x_k)]^2}\n\\end{equation*}\nand\n\\begin{equation*}\nE_n^\\alpha (f)=\\dfrac{2^n f^{(2n)}(\\eta)}{(2n)! (2n+\\alpha)}\n\\, \\left(\\dfrac{2^n n! 
\\Gamma (n+\\alpha)}{\\Gamma (2n+\\alpha)}\\right)^2.\n\\end{equation*}\nConsider the left Riemann--Liouville fractional integral\nof order $\\alpha>0$ for $x>0$:\n\\begin{equation}\n\\label{eq3.22}\n_0I_x^\\alpha f(x)=\\dfrac{1}{\\Gamma (\\alpha)}\n\\int_0^x (x-t)^{\\alpha-1} f(t) dt.\n\\end{equation}\nFor calculating the above integral at the collocation nodes,\nfirst we transform the interval $[0,x]$ into the segment\n$[-1,1]$ using the changing of variable\n$$\ns=2\\left(\\frac{t}{x}\\right)-1,\n\\quad ds=\\frac{2}{x} dt.\n$$\nTherefore, we can rewrite \\eqref{eq3.22} as\n\\begin{equation*}\n\\dfrac{1}{\\Gamma (\\alpha)} \\int_0^x (x-t)^{\\alpha-1} f(t) dt\n=\\left(\\frac{x}{2}\\right)^\\alpha \\dfrac{1}{\\Gamma(\\alpha)}\n\\int_{-1}^1 \\dfrac{\\rho (s) ds }{(1-s)^{1-\\alpha}},\n\\end{equation*}\nwhere\n$$\n\\rho(s)=f\\left(\\frac{x}{2} (s+1)\\right).\n$$\nSo, we have\n\\begin{equation}\n\\label{eq3.24}\n_0I_x^\\alpha f(x)=\\dfrac{1}{\\Gamma (\\alpha)}\\left(\\frac{x}{2}\\right)^\\alpha\n\\int_{-1}^1 (1-s)^{\\alpha-1} \\rho(s) ds,\n\\end{equation}\nand using the $n$ point fractional Gauss--Jacobi quadrature rule\n\\eqref{eq3.19} for \\eqref{eq3.24} we get\n\\begin{equation}\n\\label{eq3.25}\n_0I_{s_i}^\\alpha f (x) = \\dfrac{(\\frac{s_i}{2})^\\alpha}{\\Gamma (\\alpha)}\n\\sum_{k=1}^n l_k f \\left( \\frac{s_i}{2} (x_k+1)\\right)+ E_n^\\alpha (f),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq3.26}\nl_k=\\dfrac{2^\\alpha}{(1-x_k)^2 \\left(\\frac{d}{dx} f_n^{\\alpha-1} (x_k)^2\\right)},\n\\end{equation}\n\\begin{equation*}\nE_n^\\alpha(f) = \\left(\\frac{s_i}{2}\\right)^\\alpha \\frac{1}{\\Gamma (\\alpha)}\n\\, \\dfrac{f^{(2n)} \\left(\\frac{s_i}{2}(\\eta+1)\\right) }{(2n)!}\n\\, \\dfrac{2^{2n} (n!)^2}{(2n+\\alpha)} \\, \\left(\n\\dfrac{\\Gamma (n+\\alpha)}{\\Gamma (2n+\\alpha)}\\right)^2\n\\end{equation*}\nand $x_k$, $k=1, \\ldots, n$, are quadrature nodes. In this way\nwe just need to compute the nodes and weights of the\nfractional Gauss--Jacobi quadrature rule using a fast and accurate algorithm.\nSimilarly, an approximation formula for computing the left Liouville--Caputo \nfractional derivative\nof a smooth function $f$, using the suggested method, is given by\n\\begin{equation}\n\\label{eq3.28}\n{_0^CD_{s_i}^\\alpha} f(x)=\\dfrac{(\\frac{s_i}{2})^{1-\\alpha}}{\\Gamma(1-\\alpha)}\n\\sum_{k=1}^n l_k f' \\left( \\frac{s_i}{2} (x_k+1)\\right)+ E_n^\\alpha (f),\n\\quad \\alpha \\in (0,1),\n\\end{equation}\nwhere $x_k$, $k=1,2,\\ldots,n$, are the zeros of $f_n^{-\\alpha}$,\n$$\nl_k=\\dfrac{2^{1-\\alpha}}{(1-x_k)^2 \\left(\\frac{d}{dx} f_n^{-\\alpha} (x_k)^2\\right)}\n$$\nand\n$$\nE_n^\\alpha (f) = \\dfrac{(\\frac{s_i}{2})^{1-\\alpha}}{\\Gamma(1-\\alpha)}\n\\dfrac{f^{(2n+1)}\\left(\\frac{s_i}{2} (\\eta +1)\\right)}{(2n+1)!}\n\\, \\dfrac{2^{2n} (n!)^2}{(2n+1-\\alpha)}\n\\, \\left(\\dfrac{\\Gamma (n+1-\\alpha)}{\\Gamma (2n+1-\\alpha)}\\right)^2.\n$$\nAnalogous formulas hold for the right Liouville--Caputo fractional derivative.\nNext we present two methods for computing the nodes and weights \nof the fractional Gauss--Jacobi quadrature rule \\eqref{eq3.19}.\nThe first one is based on Golub--Welsch algorithm, while the second\nis a recent method introduced by Hale and Townsend \\cite{sal13},\nwhich is based on Newton's iteration for finding roots\nand a good asymptotic formula for the weights.\n\n\n\\section{Two methods for calculating nodes and weights}\n\\label{sec4}\n\nPang et al. 
Next we present two methods for computing the nodes and weights \nof the fractional Gauss--Jacobi quadrature rule \\eqref{eq3.19}.\nThe first one is based on the Golub--Welsch algorithm, while the second\nis a recent method introduced by Hale and Townsend \\cite{sal13},\nwhich is based on Newton's iteration for finding roots\nand a good asymptotic formula for the weights.\n\n\n\\section{Two methods for calculating nodes and weights}\n\\label{sec4}\n\nPang et al. \\cite{sal1} use Gauss--Jacobi and Gauss--Jacobi--Lobatto quadrature\nrules for computing fractional directional integrals, together with \nthe Golub--Welsch (GW) algorithm for computing the nodes and weights \nof the quadrature rules. The GW algorithm exploits the three-term recurrence \nrelation \\eqref{eq3.4} satisfied by all real orthogonal polynomials.\nThis relation gives rise to a symmetric tridiagonal matrix,\n\\begin{equation*}\nx\\left[\n\\begin{array}{c}\n{J}_0^{\\lambda,\\nu} (x) \\\\\n{J}_1^{\\lambda,\\nu} (x) \\\\\n\\vdots \\\\\n{J}_{N-2}^{\\lambda,\\nu} (x)\\\\\n{J}_{N-1}^{\\lambda,\\nu} (x)\n\\end{array}\\right]\n=\\left[ \\begin{array}{ccccc}\nA_1 & B_1 & 0 & \\cdots & 0\\\\\nB_1 & A_2 & B_2 & \\cdots & 0 \\\\\n\\vdots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n0 & \\cdots & \\ddots & A_{N-1} & B_{N-1}\\\\\n0 & \\cdots & 0 & B_{N-1} & A_N\\\\\n\\end{array}\n\\right] \\,\n\\left[ \\begin{array}{c}\n{J}_0^{\\lambda,\\nu} (x)\\\\\n{J}_1^{\\lambda,\\nu} (x)\\\\\n\\vdots \\\\\n{J}_{N-2}^{\\lambda,\\nu} (x)\\\\\n{J}_{N-1}^{\\lambda,\\nu} (x)\n\\end{array}\\right]+\n\\left[ \\begin{array}{c}\n0\\\\\n0\\\\\n\\vdots\\\\\n0\\\\\nB_N{J}_{N}^{\\lambda,\\nu} (x)\n\\end{array}\\right] ,\n\\end{equation*}\nwhere\n\\begin{equation*}\nA_n=\\frac{-b_n}{a_n},\\quad B_n=\\sqrt{\\frac{c_{n+1}}{a_n a_{n+1}}},\n\\quad {J}_{n}^{\\lambda,\\nu}(x)=\\dfrac{P_n^{(\\lambda,\\nu)}(x)}{\\sqrt{c_{n,\\lambda,\\nu}}},\n\\end{equation*}\nin which\n\\begin{equation*}\na_n = \\left\\{\n\\begin{array}{ll}\n\\dfrac{(2n+\\lambda+\\nu-1) (2n+\\lambda+\\nu)}{2n (n+\\lambda+\\nu)},\n& \\lambda+\\nu \\neq -1,\\\\\n\\dfrac{2n-1}{n}, & n\\geq 2, \\lambda+\\nu =-1,\\\\\n\\dfrac{1}{2}, & n=1, \\lambda+\\nu=-1,\n\\end{array} \\right.\n\\end{equation*}\n\\begin{equation*}\nb_n = \\left\\{\n\\begin{array}{ll}\n-\\dfrac{(2n+\\lambda+\\nu+1) (\\lambda^2-\\nu^2)}{2n (n+\\lambda+\\nu) (2n+\\lambda+\\nu -2)},\n& \\lambda+\\nu \\neq -1,\\\\\n\\,&\\,\\\\\n\\dfrac{\\nu^2-\\lambda^2}{n(2n-3)}, & n\\geq 2, \\lambda+\\nu=-1,\\\\\n\\,&\\,\\\\\n\\dfrac{\\nu^2-\\lambda^2}{2}, & n=1, \\lambda+\\nu=-1,\n\\end{array} \\right.\n\\end{equation*}\n\\begin{equation*}\nc_n=\\dfrac{(n+\\lambda-1) (n+\\nu-1) (2n+\\lambda+\\nu)}{n(n+\\lambda+\\nu) \n(2n+\\lambda+\\nu-2)}, \\, n\\geq2,\n\\end{equation*}\n$$\nc_{n,\\lambda,\\nu}=\\partial_n^{\\lambda,\\nu}.\n$$\nIt is easy to prove that $x_i$ is a zero of\n$P_n^{(\\lambda,\\nu)}$ if and only if $x_i$\nis an eigenvalue of the tridiagonal matrix \\cite{sal1}.\nMoreover, Golub and Welsch proved that the weights of the quadrature rule\nare obtained from the first components of the corresponding normalized eigenvectors \\cite{sal11}.\nThis algorithm takes $O(n^2)$ operations to solve the eigenvalue problem\nby taking advantage of the structure of the matrix and noting that only\nthe first component of each normalized eigenvector needs to be computed.\nAn alternative approach for computing the nodes and weights of the Gauss--Jacobi rule\nis to use the same three-term recurrence relation in order to compute Newton's iterates,\nwhich converge to the zeros of the orthogonal polynomial \\cite{sal18}.\nSince the recurrence requires $O(n)$ operations for each evaluation of the polynomial\nand its derivative, the total complexity is of order $O(n^2)$ \nfor all nodes and weights.\nFurthermore, the relative maximum error in the weights is of order $O(n)$,\nwhile for the nodes it is independent of $n$. A minimal version of this recurrence-based Newton approach is sketched below.\n
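The following \\texttt{Python} sketch (ours, purely illustrative; the initial guesses and the stopping criterion are deliberately simplistic) evaluates $P_n^{(\\lambda,\\nu)}$ through the recurrence \\eqref{eq3.4}, obtains the derivative from \\eqref{equ3.4}, and runs the Newton iteration; the fractional case corresponds to $\\lambda=\\alpha-1$, $\\nu=0$:\n\\begin{verbatim}\nimport numpy as np\n\ndef jacobi_and_deriv(n, lam, nu, x):\n    # P_n^{(lam,nu)}(x) and P_{n-1} via the three-term recurrence (3.4)\n    p0 = np.ones_like(x)\n    p1 = 0.5*(lam + nu + 2)*x + 0.5*(lam - nu)\n    for k in range(1, n):\n        a = (2*k+lam+nu+1)*(2*k+lam+nu+2)/(2*(k+1)*(k+lam+nu+1))\n        b = (nu**2-lam**2)*(2*k+lam+nu+1)/(2*(k+1)*(k+lam+nu+1)*(2*k+lam+nu))\n        c = (k+lam)*(k+nu)*(2*k+lam+nu+2)/((k+1)*(k+lam+nu+1)*(2*k+lam+nu))\n        p0, p1 = p1, (a*x - b)*p1 - c*p0\n    # derivative from the differential relation (equ3.4)\n    dp = (n*(lam-nu-(2*n+lam+nu)*x)*p1\n          + 2*(n+lam)*(n+nu)*p0)/((2*n+lam+nu)*(1 - x**2))\n    return p1, dp\n\ndef gj_nodes(n, lam, nu, tol=1e-14):\n    # crude Chebyshev-like initial guesses; bisection (as in the text)\n    # would be more robust for extreme parameters\n    x = np.cos(np.pi*(np.arange(n) + 0.75)/(n + 0.5))\n    for _ in range(50):\n        p, dp = jacobi_and_deriv(n, lam, nu, x)\n        dx = p/dp\n        x -= dx\n        if np.max(np.abs(dx)) < tol:\n            break\n    # the weights then follow from the formula for A_k below (eq:GJqf)\n    return np.sort(x)\n\\end{verbatim}\n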
Here we develop the new technique\nintroduced by Hale and Townsend, which utilizes asymptotic formulas\nfor both accurate initial guesses of the roots and efficient evaluation\nof the fractional Jacobi polynomial $f_n^{\\alpha-1}$ inside Newton's method \\cite{sal13}.\nWith this method it is possible to compute the nodes and weights\nof the fractional Gauss--Jacobi rule in just $O(n)$ operations,\nwith absolute errors of order $O(1)$ in units of the machine precision,\ni.e., almost full double precision. To implement the method,\none first sets $\\theta_k=\\cos^{-1}x_k$: working in the variable $\\theta$ avoids the clustering\nof the nodes near $\\pm 1$ and the cancellation errors caused by the term $(1-x_k^2)$ in the denominator of\n\\eqref{eq3.26} when $x_k \\approx \\pm 1$. \nThen, using a simple method, like bisection, \nto find $\\theta_k^{[0]}$, the initial guess of the $k$th root,\nwe construct the Newton iterates for finding the nodes as follows:\n\\begin{equation*}\n\\theta_k^{[j+1]}=\\theta_k^{[j]}-\\frac{f_n^{\\alpha-1}\\left(\n\\cos\\theta_k^{[j]}\\right)}{\\left[-\\sin \\theta_k^{[j]}\n\\frac{d}{dx}f_n^{\\alpha-1} (\\cos \\theta_k^{[j]}) \\right]},\n\\quad j=0,1,2,\\ldots\n\\end{equation*}\nOnce the iterates have converged, the nodes are given by\n$x_k=\\cos\\theta_k$ and, using \\eqref{eq3.26}, the weights are given by\n\\begin{equation*}\nl_k=2^{\\alpha}\\left( \\frac{d}{d\\theta} f_n^{\\alpha-1} (\\cos \\theta_k)\\right)^{-2}.\n\\end{equation*}\nSince the zeros of the orthogonal fractional Jacobi polynomial are simple \\cite{sal17},\nwe conclude that the Newton iterates converge quadratically \\cite{sal22}. Furthermore,\nsince the fractional Jacobi polynomials satisfy the reflection relations above,\nwe just need to consider all calculations for\n$x\\in [0,1]$, i.e., $\\theta \\in [0,\\frac{\\pi}{2}]$. There are three asymptotic\nformulas for finding the nodes.\n\\begin{enumerate}\n\\item[1)] One is given by Gatteschi and Pittaluga \\cite{sal19}\nfor Jacobi polynomials, adapted here to the case of fractional Jacobi polynomials:\n\\begin{equation*}\n\\tilde{x}_k=\\cos\\left[f_k+\\frac{1}{4\\rho^2} \\left(\\left( \\frac{1}{4}\n- (\\alpha-1)^2\\right) \\cot\\left(\\frac{f_k}{2}\\right)\n-\\frac{1}{4}\\tan\\left(\\frac{f_k}{2}\\right)\\right)\\right]+O(n^{-4}),\n\\end{equation*}\n$|\\alpha-1|\\leq \\frac{1}{2}$, where $\\rho=n+\\frac{\\alpha}{2}$ and\n$f_k=(k+\\frac{\\alpha}{2}-\\frac{3}{4})\\frac{\\pi}{\\rho}$.\n\n\\item[2)] Let $j_{\\alpha-1,k}$ denote the $k$th root of Bessel's function\n$J_{\\alpha-1}(z)$. Then the approximation given by Gatteschi and Pittaluga \\cite{sal19}\nfor the nodes near $x=1$ becomes\n\\begin{gather*}\n\\tilde{x}_k=\\cos \\left[\\dfrac{j_{\\alpha-1,k}}{\\gamma}\\left(1\n-\\dfrac{4-(\\alpha-1)^2}{720 \\gamma^4}\\left(\\frac{ j_{\\alpha-1, k}^2}{2}\n+\\alpha^2 -2\\alpha\\right)\\right)\\right]\n+j_{\\alpha-1,k}^5\\, O(n^{-7}),\n\\end{gather*}\n$|\\alpha-1|\\leq\\frac{1}{2}$, where\n$\\gamma=\\sqrt{\\rho^2+(2-\\alpha^2-2\\alpha)/12}$.\n\n\\item[3)] Other asymptotic formulas for the nodes near\n$x=1$ are given by Olver et al. \\cite{sal20}:\n\\begin{gather*}\n\\tilde{x}_k=\\cos \\left[\\psi_k+ (\\alpha^2-2\\alpha+3/4)\\,\n\\dfrac{\\psi_k \\cot (\\psi_k)-1}{2\\rho^2 \\psi_k}\n- (\\alpha-1)^2 \\dfrac{\\tan (\\frac{\\psi_k}{2})}{4\\rho^2}\\right]\\\\\n+ j_{\\alpha-1 ,k}^2\\, O (n^{-5}),\n\\end{gather*}\n$\\alpha-1>-\\frac{1}{2}$, where $\\psi_k=\\dfrac{j_{\\alpha-1 ,k}}{\\rho}$.\n\\end{enumerate}\nFor computing the weights $l_k$ we just need to evaluate\n$\\dfrac{d}{d\\theta} f_n^{\\alpha-1} (\\cos \\theta)$\nat the points $\\theta_k$. 
To this end, we use relation \\eqref{equ3.5}\nbetween $f_n^{\\alpha-1}$ and $(f_n^{\\alpha-1})'$.\nSo we just need to compute the value of $ f_n^{\\alpha-1}$ at\n$x = \\cos\\theta$. This is done by using an asymptotic formula introduced in\n\\cite{sal13}, which takes the following form for fractional Jacobi polynomials:\n\\begin{multline*}\n\\sin^{\\alpha-1/2}(\\theta/2)\\cos^{1/2}(\\theta/2)f_n^{\\alpha-1} (\\cos \\theta)\\\\\n=\\dfrac{2^{2\\rho} B (n+\\alpha,n+1)}{\\pi}\\sum_{m=0}^{M-1}\n\\dfrac{f_m(\\theta)}{2^m (2\\rho+1)_m}+V_{M,n}^{\\alpha-1}(\\theta),\n\\end{multline*}\nwhere $\\rho=n+\\alpha/2$, $B(\\cdot,\\cdot)$ is the Beta function,\n\\begin{equation*}\nf_m(\\theta)=\\sum_{l=0}^m \\dfrac{c_{m,l}^{\\alpha-1}}{l! (m-l)!}\n\\,\\dfrac{\\cos (\\theta_{n,m,l})}{\\sin^l (\\theta/2) \\cos ^{m-l} (\\theta/2)},\n\\end{equation*}\n\\begin{equation*}\n\\theta_{n,m,l}=\\frac{1}{2} (2\\rho+m)\\theta -\\frac{1}{2}\\left(\\alpha+l-\\frac{1}{2}\\right)\\pi\n\\end{equation*}\nand\n\\begin{equation*}\nc_{m,l}^{\\alpha-1}= \\left(\\alpha-\\frac{1}{2}\\right)_l \\left(\\frac{3}{2}\n-\\alpha\\right)_l \\left(\\frac{1}{4}\\right)_{m-l},\n\\end{equation*}\nwhere for $\\alpha-1\\in(-1/2,1/2)$ the error term $V_{M,n}^{\\alpha-1}$\nis less than twice the magnitude of the first neglected term.\nWe give two examples illustrating the usefulness of the method.\n\n\\begin{example}\n\\label{exa4.1}\nThe fractional integral of the function $f(t)=\\sin(t)$ is defined as\n\\begin{equation*}\n{_0I_t}^\\alpha \\sin(t)=\\frac{1}{\\Gamma(\\alpha)}\\int_{0}^{t}(t-x)^{\\alpha-1} \\sin(x)dx,\n\\quad t\\in[0,2\\pi],\\quad \\alpha\\in(0,1).\n\\end{equation*}\nThe explicit expression of the integral, presented in Table 9.1 of \\cite{sal21}, is\n\\begin{equation*}\n{_0I_t}^\\alpha \\sin(t)=\\frac{t^\\alpha}{2i\\Gamma(\\alpha+1)}[_1F_1(1;\\alpha+1;it)\n- {_1F_1}(1;\\alpha+1;-it)],\n\\end{equation*}\nwhere $i=\\sqrt{-1}$ and $_1F_1$ is the confluent hypergeometric function defined by\n\\begin{equation*}\n_1F_1(p;q;z)=\\sum_{k=0}^{\\infty}\\frac{(p)_k}{(q)_k}\\frac{z^k}{k!}.\n\\end{equation*}\nChoosing the test points in the set $\\{s_k=\\frac{k\\pi}{8},\\; k=0,1,\\ldots,16\\}$,\nthe error is given by\n\\begin{equation}\n\\label{eq5.4}\n\\sqrt{\\frac{\\sum_{k=0}^{16} (E_k-A_k)^2}{\\sum_{k=0}^{16} E_k^2}},\n\\end{equation}\nwhere $E_k$ is the exact result at $s_k$ and $A_k$ is the approximation obtained by our method.\nThe values of \\eqref{eq5.4} for $\\alpha=0.25$, $0.5$ and $0.75$ are listed in Table~\\ref{Table 1}.\nWe used the command \\texttt{jacpts} of the \\texttt{chebfun} software \\cite{sal13}\nfor finding the nodes and weights of the fractional Gauss--Jacobi quadrature (\\texttt{fgjq}) rule.\nThe errors show that the method is more accurate than the method introduced in \\cite{sal1}.\n\\begin{table}[ht]\n\\caption{The errors obtained for Example \\ref{exa4.1} from \\eqref{eq5.4},\nwith $\\alpha= 0.25, 0.5, 0.75$ and $n=5,6,7,8,16$ in \\eqref{eq3.25}.}\n\\label{Table 1}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n$\\alpha$ & $n=5$&$n=6$&$n=7$&$n=8$&$n=16$\\\\\\hline\n$0.25$ & $3.22\\times10^{-6}$&$5.14\\times10^{-8}$\n&$6.1\\times10^{-10}$& $5.58\\times10^{-12}$&$2.81\\times10^{-15}$\\\\\n$0.5$ & $4.85\\times10^{-6}$ & $7.75\\times10^{-8}$ & $9.18\\times10^{-10}$\n& $8.37\\times10^{-12}$ & $7.12\\times10^{-16}$ \\\\\n$0.75$ & $5.35\\times10^{-6}$ & $8.35\\times10^{-8}$\n& $9.65\\times10^{-10}$ & $8.6\\times10^{-12}$ & $1.39\\times10^{-15}$ \\\\\\hline\n\\end{tabular}\n\\end{table}\n\\end{example}\n
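The entries of Table~\\ref{Table 1} can be reproduced, up to implementation details, with a few lines of \\texttt{Python} (a sketch assuming the \\texttt{scipy} and \\texttt{mpmath} libraries; \\texttt{mpmath} provides ${}_1F_1$ with complex argument):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import roots_jacobi, gamma\nfrom mpmath import hyp1f1\n\nalpha, n = 0.5, 8\ns, w = roots_jacobi(n, alpha - 1.0, 0.0)\n\ndef approx(t):   # eq. (3.25) applied to f = sin\n    return (t/2.0)**alpha/gamma(alpha)*np.dot(w, np.sin(t/2.0*(s + 1.0)))\n\ndef exact(t):    # closed form in terms of 1F1\n    if t == 0.0:\n        return 0.0\n    val = t**alpha/(2j*gamma(alpha + 1))*(hyp1f1(1, alpha + 1, 1j*t)\n                                          - hyp1f1(1, alpha + 1, -1j*t))\n    return float(val.real)\n\nts = [k*np.pi/8 for k in range(17)]\nE = np.array([exact(t) for t in ts])\nA = np.array([approx(t) for t in ts])\nprint(np.sqrt(np.sum((E - A)**2)/np.sum(E**2)))   # error measure (5.4)\n\\end{verbatim}\n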
\\begin{example}\n\\label{exam:5.2.}\nIn this example we consider the function $f(t)=t^4$. The Liouville--Caputo\nfractional derivative of $f$ of order $0<\\alpha<1$ is given by\n\\begin{equation*}\n_0^CD_t ^{\\alpha}t^4=\\frac{1}{\\Gamma(1-\\alpha)}\n\\int_{0}^{t}4(t-x)^{-\\alpha}x^3dx,\\quad \\alpha\\in(0,1).\n\\end{equation*}\nThe explicit form of the result is\n\\begin{equation*}\n_0^CD_t ^{\\alpha} t^4=\\frac{\\Gamma(5)}{\\Gamma(5-\\alpha)}t^{4-\\alpha},\n\\end{equation*}\nwhich for $\\alpha=0.5$ gives $\\frac{\\Gamma(5)}{\\Gamma(4.5)}t^{3.5}$\n(see \\cite{sal4}). Using \\eqref{eq3.28}, we can write\n\\begin{equation*}\n_0^CD_{s_i} ^{\\alpha}t^4\\simeq \\dfrac{(\\frac{s_i}{2})^{1-\\alpha}}{\\Gamma (1-\\alpha)}\n\\sum_{k=1}^n 4 l_k \\left(\\frac{s_i}{2} (x_k+1)\\right)^3.\n\\end{equation*}\nHere the nodes and weights coincide with the nodes and weights of Example~\\ref{exa4.1}\nfor $\\alpha=0.5$. With the test point set $\\{s_k=\\frac{k}{10},\\; k=0,1,2,\\ldots,10\\}$,\nthe error defined by \\eqref{eq5.4} is listed in Table~\\ref{Table 2}. Looking at Table~\\ref{Table 2},\none can see that machine-precision accuracy is already obtained for $n=2$\nand that increasing $n$ does not improve the results: since the integrand is a cubic polynomial,\nthe $2$-point rule is already exact, and the residual errors are pure rounding.\n\\begin{table}[ht]\n\\caption{The errors obtained for Example~\\ref{exam:5.2.} from \\eqref{eq5.4}\nwith $n=2,3,\\ldots,7$ in \\eqref{eq3.28} for approximating the Liouville--Caputo \nfractional derivative of order $\\alpha= 0.5$ of the function $f(t)=t^4$.}\n\\label{Table 2}\n\\begin{tabular}{|c|c|c|c|c|c|} \\hline\n$n=2$&$n=3$&$n=4$&$n=5$&$n=6$&$n=7$\\\\\\hline\n$4.0\\times10^{-17}$ & $8.8\\times10^{-16}$ & $1.9\\times10^{-16}$\n& $6.4\\times10^{-16}$ & $3.8\\times10^{-16}$ & $3.8\\times10^{-16}$\\\\\\hline\n\\end{tabular}\n\\end{table}\n\\end{example}\n\n\n\\section{Application to fractional ordinary differential equations}\n\\label{sec5}\n\nConsider the fractional initial value problem\n\\begin{equation}\n\\label{eq6.1}\n\\left\\{\n\\begin{array}{ll}\n_0^CD_t^\\alpha y(t)=g(t,y(t)),\n& n-1<\\alpha\\leq n,\\\\\ny^{(k)}(0)=b_k, & k=0,1,\\ldots,n-1.\n\\end{array}\\right.\n\\end{equation}\nThe following result guarantees existence and uniqueness of solutions.\n\n\\begin{theorem}\n\\label{thm:6.1}\nLet $K>0$, $h^*>0$, $b_0,b_1,\\ldots,b_{n-1}\\in\\mathbb{R}$,\n$G:=[0,h^*]\\times[b_0-K,b_0+K]$,\nand function $g:G\\to\\mathbb{R}$ be continuous.\nThen, there exists $h > 0$ and a function\n$y\\in C[0, h]$ solving the Liouville--Caputo fractional initial\nvalue problem \\eqref{eq6.1}. If $0<\\alpha<1$,\nthen the parameter $h$ is given by\n$$\nh:=\\min\\left\\{h^*,\\left(\\frac{K\\Gamma(\\alpha+1)}{M}\\right)^{1/\\alpha}\\right\\},\n\\quad M:=\\sup_{(t,z)\\in G}|g(t,z)|.\n$$\nFurthermore, if $g$ fulfils a Lipschitz condition with respect to the second variable,\nthat is,\n$$\n|g(t,y_1)-g(t,y_2)|\\leq L|y_1-y_2|\n$$\nfor some constant $L > 0$ independent of $t$, $y_1$ and $y_2$,\nthen the function $y\\in C[0, h]$ is unique.\n\\end{theorem}\n\nIn order to solve problem \\eqref{eq6.1} using our method, we need the following theorem.\n\n\\begin{theorem}\n\\label{thm:6.2}\nUnder the assumptions of Theorem~\\ref{thm:6.1},\na function $y\\in C[0, h]$ is a solution to the Liouville--Caputo\nfractional differential equation \\eqref{eq6.1}\nif and only if it is a solution to the Volterra\nintegral equation of the second kind\n\\begin{equation*}\ny(t)=\\sum_{k=0}^{n-1}\\frac{t^k}{k!}b_k\n+\\frac{1}{\\Gamma(\\alpha)}\\int_0^t(t-x)^{\\alpha-1}g(x,y(x))dx.\n\\end{equation*}\n\\end{theorem}\n\n\\begin{proof}\nUsing the Laplace transform formula\nfor the Liouville--Caputo fractional derivative,\n\\begin{equation*}\n\\mathcal{L}\\{{_0^CD_t^\\alpha} y(t)\\}\n= s^\\alpha Y(s)-\\sum_{k=0}^{n-1} s^{\\alpha-k-1}y^{(k)}(0)\n\\end{equation*}\n(see \\cite{MR2218073,sal15}). 
Thus, using \\eqref{eq6.1}, we can write that\n\\begin{equation*}\ns^\\alpha Y(s)-\\sum_{k=0}^{n-1}s^{\\alpha-k-1}b_k=G(s)\n\\end{equation*}\nor\n\\begin{equation*}\nY(s)=s^{-\\alpha}G(s)+\\sum_{k=0}^{n-1}s^{-k-1}b_k,\n\\end{equation*}\nwhere $G(s)= \\mathcal{L}\\{g(t,y(t))\\}$. \nApplying the inverse Laplace transform gives\n$$\ny(t)=\\sum_{k=0}^{n-1}\\frac{t^k}{k!}b_k\n+\\frac{1}{\\Gamma(\\alpha)}\\int_0^t(t-x)^{\\alpha-1}g(x,y(x))dx,\n$$\nwhere we used the relations\n\\begin{equation*}\n\\mathcal{L}\\{_0I_t^\\alpha y(t)\\} = \\mathcal{L}\\left\\{\\frac{1}{\\Gamma(\\alpha)}\n\\int_0^t(t-x)^{\\alpha-1}y(x)dx\\right\\}\n=\\mathcal{L}\\left\\{\\frac{t^{\\alpha-1}}{\\Gamma(\\alpha)}\n*y(t)\\right\\}=\\frac{Y(s)}{s^\\alpha}\n\\end{equation*}\nand\n$$\n\\mathcal{L}\\{t^{\\alpha-1}\\}=\\frac{\\Gamma(\\alpha)}{s^\\alpha}.\n$$\nThe proof is complete.\n\\end{proof}\n
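The Laplace-transform relation used in the last step can be confirmed symbolically, e.g. with the \\texttt{Python} library \\texttt{sympy} (a sketch, for a concrete value of $\\alpha$):\n\\begin{verbatim}\nimport sympy as sp\n\nt, s = sp.symbols('t s', positive=True)\nalpha = sp.Rational(1, 2)      # any alpha > 0; fixed here for concreteness\nL = sp.laplace_transform(t**(alpha - 1), t, s, noconds=True)\nprint(sp.simplify(L - sp.gamma(alpha)/s**alpha))   # -> 0\n\\end{verbatim}\n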
in \\cite{sal26}, but where for the prediction part we use\ninterpolation at equispaced nodes instead of piecewise linear interpolation.\nUsing first \\eqref{eq6.7}, we know that\n\\begin{equation}\n\\label{eq10}\ny(t_{n+1})\\simeq\\sum_{k=0}^{n-1}\\frac{t_{n+1}^k}{k!}b_k\n+\\frac{1}{\\Gamma(\\alpha)}\\left(\\frac{t_{n+1}}{2}\\right)^\\alpha\\sum_{k=1}^N\nl_k g\\left(\\frac{t_{n+1}}{2}(1+x_k),y\\left(\\frac{t_{n+1}}{2}(1+x_k)\\right)\\right),\n\\end{equation}\nwhere $l_k$ and $x_k$, $k=1,2,\\ldots,N$, are the weights and nodes of\nthe fractional Gauss--Jacobi quadrature rule introduced in Section~\\ref{sec3}.\nLooking to \\eqref{eq10} we see that for calculating the value of $y(t_{n+1})$\nwe need to know the value of $g$ at point\n$\\left(\\frac{t_{n+1}}{2}(1+x_k),y\\left(\\frac{t_{n+1}}{2}(1+x_k)\\right)\\right)$.\nFor obtaining these values we act as follows.\nFirst, using the predictor-corrector method introduced in \\cite{sal28},\nwe calculate the values of $y_1(t_i)$, $i=0,1,\\ldots,n+1$,\nusing the following formulas:\n\\begin{equation*}\ny_1(t_{n+1})= \\left\\{\n\\begin{array}{ll}\ny_0+\\frac{k^\\alpha}{\\Gamma(\\alpha+2)}\\left(g(t_1,y^{pr}(t_1))\n+\\alpha g(t_0,y_0)\\right), & n=0,\\\\\ny_0+\\frac{k^\\alpha}{\\Gamma(\\alpha+2)}\\left(g(t_{n+1},\ny^{pr}(t_{n+1}))+(2^{\\alpha+1}-2) g(t_n,y(t_n))\\right)\\\\\n+\\frac{k^\\alpha}{\\Gamma(\\alpha+2)}\n\\sum_{j=0}^{n-1} a_j g(t_j,y^{pr}(t_j)), & n\\geq 1,\n\\end{array} \\right.\n\\end{equation*}\nwhere\n\\begin{equation*}\na_j= \\left\\{\n\\begin{array}{ll}\nn^{\\alpha+1}-(n-\\alpha)(n+1)^\\alpha, & j=0,\\\\\n(n-j+2)^{\\alpha+1}+(n-j)^{\\alpha+1}-2(n-j+1)^{\\alpha+1},\n& 1\\leq j \\leq n,\\\\\n1, & j=n+1\n\\end{array} \\right.\n\\end{equation*}\nand\n\\begin{equation*}\ny^{pr}(t_{n+1})= \\left\\{\n\\begin{array}{ll}\ny_0+\\frac{k^\\alpha}{\\Gamma(\\alpha+1)}g(t_0,y_0), & n=0,\\\\\ny_0+\\frac{k^\\alpha}{\\Gamma(\\alpha+2)} \\cdot (2^{\\alpha h+1}-1) \\cdot g(t_n,y(t_n))\\\\\n+\\frac{k^\\alpha}{\\Gamma(\\alpha+2)}\\sum_{j=0}^{n-1}a_j g(t_j,y(t_j)), & n\\geq1\n\\end{array} \\right.\n\\end{equation*}\nand $k$ is the step length, i.e., $k=t_{j+1}-t_j$ (for more details, see \\cite{sal28}).\nFinally, if $P_n$ is the interpolating polynomial passing through $\\{(t_i,y1(t_i)\\}_{i=0}^{n+1}$,\nthen we have the following formula for calculating $y(t_{n+1})$:\n\\begin{equation}\n\\label{eq6.12}\ny(t_{n+1})\\simeq\\sum_{k=0}^{n-1}\\frac{t_{n+1}^k}{k!}b_k\n+\\frac{1}{\\Gamma(\\alpha)}\\left(\\frac{t_{n+1}}{2}\\right)^\\alpha\n\\sum_{k=1}^N l_k g\\left(\\frac{t_{n+1}}{2}(1+x_k),\nP_n\\left(\\frac{t_{n+1}}{2}(1+x_k)\\right)\\right).\n\\end{equation}\n\nWe solve two examples illustrating the applicability of our method.\n\n\\begin{example}\n\\label{exam5.3}\nConsider the nonlinear ordinary differential equation\n\\begin{equation*}\n{_0^CD_t ^{\\alpha}}y(t)+y^2(t)=f(t)\n\\end{equation*}\nsubject to $y(0)=0$ and $y'(0)=0$, where\n$$\nf(t)=\\frac{120t^{5-\\alpha}}{\\Gamma(6-\\alpha)}-\\frac{72t^{4-\\alpha}}{\\Gamma(5-\\alpha)}\n+\\frac{12t^{3-\\alpha}}{\\Gamma(4-\\alpha)}+k(t^5-3t^4+2t^3)^2.\n$$\nFollowing \\cite{sal29}, we take $k=1$\nand $\\alpha=\\frac{3}{2}$. 
\\begin{example}\n\\label{exam5.3}\nConsider the nonlinear fractional differential equation\n\\begin{equation*}\n{_0^CD_t ^{\\alpha}}y(t)+y^2(t)=f(t)\n\\end{equation*}\nsubject to $y(0)=0$ and $y'(0)=0$, where\n$$\nf(t)=\\frac{120t^{5-\\alpha}}{\\Gamma(6-\\alpha)}-\\frac{72t^{4-\\alpha}}{\\Gamma(5-\\alpha)}\n+\\frac{12t^{3-\\alpha}}{\\Gamma(4-\\alpha)}+k(t^5-3t^4+2t^3)^2.\n$$\nFollowing \\cite{sal29}, we take $k=1$\nand $\\alpha=\\frac{3}{2}$. The exact solution is then given by\n$$\ny(t)=t^5-3t^4+2t^3.\n$$\nFigure~\\ref{Fig:6.1} plots the results obtained for $N=16$ in \\eqref{eq6.12}.\n\\begin{figure}\n\\includegraphics[scale=.75]{Figure1.eps}\n\\caption{The exact solution (the solid line) versus the approximated solution \n(the dashed line) of the fractional initial value problem of Example~\\ref{exam5.3}.}\n\\label{Fig:6.1}\n\\end{figure}\n\\end{example}\n\n\\begin{example}\n\\label{exam5.4}\nConsider the fractional oscillation equation\n\\begin{equation*}\n{_0^CD_t ^{\\alpha}}y(t)+y(t)=t e ^{-t}\n\\end{equation*}\nsubject to initial conditions $y(0)=0$ and $y'(0)=0$. The exact solution is\n$$\ny(t)=\\int_0^t G(t-x)xe^{-x}dx,\n\\quad G(t)=t^{\\alpha-1} E_{\\alpha,\\alpha}(-t^{\\alpha}),\n$$\nwhere $E_{\\alpha,\\beta}(t)$ is the generalized Mittag--Leffler \nfunction \\cite{sal29}. Results are shown in Figure~\\ref{Fig:6.2} \nfor $N=16$ in \\eqref{eq6.12} and $\\alpha=\\frac{3}{2}$.\n\\begin{figure}\n\\includegraphics[scale=.75]{Figure2.eps}\n\\caption{The exact solution (the solid line) versus the approximated solution \n(the dashed line) of the fractional initial value problem of Example~\\ref{exam5.4}.}\n\\label{Fig:6.2}\n\\end{figure}\n\\end{example}\n\n\n\\section{Application to fractional variational problems}\n\\label{sec6}\n\nIn this section we apply our method to solve, numerically, \nsome problems of the calculus of variations of fractional order.\nThe calculus of variations is a rich branch of classical mathematics \ndealing with the optimization of physical quantities \n(such as time, area, or distance). It has applications in many\ndiverse fields, including aeronautics (maximizing the lift of an aircraft wing), \nsporting equipment design (minimizing air resistance on a bicycle helmet \nor optimizing the shape of a ski), mechanical engineering \n(maximizing the strength of a column, a dam, or an arch), \nboat design (optimizing the shape of a boat hull) and physics (calculating trajectories\nand geodesics, in both classical mechanics and general relativity) \\cite{sal32}.\nThe problem of the calculus of variations of fractional order is more recent\n(see \\cite{sal34,sal35,sal33} and references therein) and consists, typically,\nin finding a function $y$ that is a minimizer of a functional\n\\begin{equation}\n\\label{eq7.1}\nJ[y]=\\int_{a}^{b}L\\left(t,y(t),{_a^CD_t ^{\\alpha}}y(t)\\right)dt,\n\\quad \\alpha\\in(n-1,n),\n\\quad n\\in \\mathbb{N},\n\\end{equation}\nsubject to boundary conditions\n$$\ny(a)=u_a,\\quad y(b)=u_b.\n$$\nOne can deduce fractional necessary optimality conditions for problem \\eqref{eq7.1}\nof Euler--Lagrange type: if a function $y$ is a solution to problem \\eqref{eq7.1},\nthen it satisfies the fractional Euler--Lagrange differential equation\n\\begin{equation}\n\\label{Euler}\n\\frac{\\partial L}{\\partial y} + {_t D_b^\\alpha}\n\\frac{\\partial L}{\\partial \\left({_a^CD_t ^{\\alpha}}y\\right) }=0.\n\\end{equation}\nIt is often hard to find the analytic solution to problems of the calculus of variations\nof fractional order by solving \\eqref{Euler}. Therefore, \nit is natural to use some numerical method to find approximations \nof the solution of \\eqref{Euler}. Here we are going to apply our method\nfor solving some examples of such fractional variational problems. 
To this end,\nwe first find a class of fractional Lagrangians\n\\begin{equation*}\nL\\left(t,y(t),{_a^CD_t ^{\\alpha}}y(t)\\right),\n\\quad 1<\\alpha <2,\n\\end{equation*}\nsuch that the corresponding Euler--Lagrange equation \\eqref{Euler} takes the form\n\\begin{equation}\n\\label{eq7.2}\n\\frac{\\partial L}{\\partial y}+ {_t D_b^\\alpha} \\frac{\\partial L}{\\partial \\left({_a^CD_t ^{\\alpha}}y\\right)}\n={_t D_b^\\alpha}\\left({_a^CD_t ^{\\alpha}}y(t)\\right)+g(t,y(t)) = 0.\n\\end{equation}\nWe look for a solution of the form\n\\begin{equation}\n\\label{eq7.3}\nL\\left(t,y(t),{_a^CD_t ^{\\alpha}}y(t)\\right)\n=\\frac{1}{2}\\left({_a^CD_t ^{\\alpha}}y(t)\\right)^2+f(t,y(t)).\n\\end{equation}\nThen, we evaluate the left-hand side of \\eqref{eq7.2} for \\eqref{eq7.3} and we obtain\n\\begin{equation*}\n{_t D_b^\\alpha}\\left({_a^CD_t ^{\\alpha}}y(t)\\right)\n+\\frac{\\partial f(t,y(t))}{\\partial y} = 0.\n\\end{equation*}\nSo, if the Lagrangian of the variational problem \\eqref{eq7.1} is of the form \\eqref{eq7.3},\nthen we just need to solve the following boundary value problem of fractional order:\n\\begin{equation}\n\\label{eq7.4}\n{_t D_b^\\alpha}\\left({_a^CD_t ^{\\alpha}}y(t)\\right)+g(t,y(t))=0\n\\end{equation}\nsubject to the boundary conditions $y(a)=u_a$ and $ y(b)=u_b$, where\n\\begin{equation}\n\\label{eq:m:g}\ng(t,y(t))=\\frac{\\partial f(t,y(t))}{\\partial y}.\n\\end{equation}\nNow, we apply the operator ${_tI_b^\\alpha}$ to both sides of equation \\eqref{eq7.4}.\nWe get the following boundary value problem of fractional order:\n\\begin{equation}\n\\label{eq7.5}\n\\begin{cases}\n{_a^CD_t^\\alpha}\\, y(t)=-{_tI_b^\\alpha}\\,g(t,y(t)), & 1<\\alpha<2, \\quad a<t<b,\\\\\ny(a)=u_a, \\quad y(b)=u_b.\n\\end{cases}\n\\end{equation}"}
+{"text":"\\begin{equation}\n\\label{eq:OPEfusion}\n\\mathcal{B}_1\\times\\mathcal{B}_1 \\simeq \\mathcal{I}+\\mathcal{B}_2+\\sum_{\\Delta>1} \\mathcal{L}_{0,[0,0]}^{\\Delta} \\;,\n\\end{equation}\nwhere $\\mathcal{I}$ denotes the identity multiplet, and ${\\cal L}_{0,[0,0]}^\\Delta$ are the non-protected multiplets transforming as singlets under the global ${\\rm SO}(5)\\times{\\rm SO}(3)$ symmetry. There are infinitely many multiplets with such quantum numbers, all with unprotected scaling dimensions. As shown in (\\ref{eq:OPEfusion}), the whole infinite set of such multiplets can appear in the fusion of two operators in the displacement multiplet, and this is the basis of the bootstrap problem discussed in section \\ref{sec:bootstrapsetup}. \n\n\n\\subsubsection{Spectrum}\n\n\n\nThe QSC method was shown to be applicable to compute the spectrum of such neutral operators in~\\cite{Grabner:2020nis}, where the non-perturbative scaling dimension of the lightest state was obtained. \nThe QSC equations relevant to this case descend from the ones written down for the cusped Wilson line in~\\cite{Gromov:2015dfa}. In fact, as first noticed in a special limit in \\cite{Cavaglia:2018lxi}, the cusp QSC equations admit infinitely many solutions corresponding to operator insertions at the cusp with the same quantum numbers as the vacuum. Then, in~\\cite{Grabner:2020nis} it was found how to take the non-trivial straight-line limit to describe states of the defect CFT. \n\nIt is currently understood that solutions of the QSC equations in \\cite{Grabner:2020nis} are in one-to-one correspondence with the multiplets $\\mathcal{L}_{0,[0,0]}^{\\Delta}$. 
\nA systematic way to find solutions with $\\Delta > 1$, by first solving the equations at weak coupling, was developed in~\\cite{Julius:2021uka}, giving access to an infinite family of states, and a generalisation of these techniques to the entire singlet sector of the defect CFT was developed shortly after~\\cite{spec1DCFT,Cavaglia:2021bnz}. A plot of 35 states in the spectrum of the singlet sector was presented in~\\cite{Cavaglia:2021bnz}. Details of the setup and of the nontrivial techniques to find excited state solutions of the QSC will be provided elsewhere, along with a generalisation to long operators which carry a non-zero $R$-charge~\\cite{spec1DCFT}.\n\nIn the analysis of this paper, we will need the values of the 10 lowest-lying states in the singlet sector,~{\\it cf.}~figure~\\ref{fig:spec}. Perturbative data for these states are given in appendix~\\ref{apd:anylSpec}. Their numerical values, with at least 12 digits of precision, are listed for several values of the coupling constant in appendix~\\ref{apd:numSpec}. \nBoth the perturbative and numerical data are also shared in a~\\texttt{Mathematica} notebook attached to this paper.\nInclusion of the other states does not seem to lead to a significant improvement of the bounds obtained in this paper.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=1.5]{spec.png}\n \\caption{The 10 lowest-lying states of the spectrum computed with the QSC. These levels will be the input of the bootstrap algorithms of this paper. For a plot including 35 states, see \\cite{Cavaglia:2021bnz}. In this paper, we label these states, for any given value of the coupling, as $\\left\\{\\Delta_n\\right\\}$, ordered as $\\Delta_{n}<\\Delta_{n+1}$. }\n \\label{fig:spec}\n\\end{figure}\n\n\n\\subsubsection{Conformal bootstrap setup}\\label{sec:bootstrapsetup}\nFollowing \\cite{Giombi:2017cqn,Liendo:2018ukf}, we study the 4-point function of four identical scalars polarised in the same direction, which we take for definiteness to be $\\Phi_{\\perp}^1$. Due to conformal symmetry, their correlator can be written in terms of a function of a single variable\\footnote{See appendix \\ref{app:covariant} for a covariant expression in the R-symmetry indices.}\n\\begin{equation}\n\\langle \\langle \\Phi_{\\perp}^1(x_1) \\Phi_{\\perp}^1(x_2) \\Phi_{\\perp}^1(x_3) \\Phi_{\\perp}^1(x_4)\\rangle\\rangle = {G}(x)\\; \\left( \\langle \\langle \\Phi_{\\perp}^1(x_1) \\Phi_{\\perp}^1(x_2) \\rangle \\rangle\\, \\langle \\langle \\Phi_{\\perp}^1(x_3) \\Phi_{\\perp}^1(x_4) \\rangle \\rangle \\right)\\;\\;, \n\\end{equation}\nwhere we normalised by the 2-point functions $\\langle \\langle \\Phi_{\\perp}^i(x_i) \\Phi_{\\perp}^j(x_j) \\rangle \\rangle \\propto x_{ij}^{-2}\\,\\delta_{ij}$, and introduced the cross ratio:\n\\begin{equation}\n x\\equiv \\frac{x_{12} x_{34}}{x_{13} x_{24}}, \\;\\;\\;\\;x_{ij}\\equiv x_i - x_j .\n\\end{equation}\nThe invariance under the cyclic relabeling $(1234)\\rightarrow (2341)$ gives the crossing equation:\n\\begin{equation}\nx^2 {G}(1 - x) - (1-x)^2 {G}(x) = 0.\n\\end{equation}\nTo set up a bootstrap problem we decompose the correlator using the OPE. To take into account supersymmetry, it is best to use superconformal blocks. We follow the results of \\cite{Liendo:2018ukf}. 
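As an elementary sanity check, the crossing equation can be verified numerically for the zero-coupling correlator $G_{\\text{weak}}^{(0)}$ given below in (\\ref{fandGtree}); a minimal \\texttt{Python} test (ours, assuming \\texttt{numpy}) reads:\n\\begin{verbatim}\nimport numpy as np\n\n# tree-level correlator, cf. (fandGtree)\nG0 = lambda x: (2.0*(x - 1.0)*x + 1.0)/(x - 1.0)**2\nxs = np.linspace(0.05, 0.95, 19)\n# crossing: x^2 G(1-x) - (1-x)^2 G(x) = 0\nprint(np.max(np.abs(xs**2*G0(1.0 - xs) - (1.0 - xs)**2*G0(xs))))\n\\end{verbatim}\n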
\nTo write down the superconformal OPE, it is convenient to parametrise the 4-point function as\n\\begin{equation}\\la{pt4}\n{G}(x) = \\mathbb{F} \\;x^2 + (2 x^{-1} - 1)f(x) -\\left(x^2 - x +1\\right)f'(x)\\; ,\n\\end{equation}\nwhere the function $f(x)$ satisfies crossing in the form\n\\begin{equation}\\label{eq:fcrossing}\nx^2 f(1 - x) + (1-x)^2 f(x) = 0\\; .\n\\end{equation}\nIn the following it is convenient to notice that the crossing-invariant combination $G(x)\/x^2$ is a total derivative:\n\\begin{equation}\\label{eq:totalder}\n\\frac{G(x)}{x^2} = \\partial_{x}\\left( \\mathbb{F} x - \\( 1 - \\frac{1}{x} +\\frac{1}{x^2} \\) f(x) \\right) \\;.\n\\end{equation}\nIn this paper, we use the following notation for the spectrum of non-protected operators $\\left\\{\\Delta_n \\right\\}_{n=1}^{\\infty}$ -- each level denoting the lowest scaling dimension in a multiplet $\\mathcal{L}_{0,[0,0]}^{\\Delta_n}$. We will order the states according to $\\Delta_n < \\Delta_{n+1}$.\\footnote{Notice that there are some level crossings in the spectrum, see Figure \\ref{fig:spec}. We keep our naming convention separately for each value of $g$.} We denote the corresponding OPE coefficients as \\begin{equation}\nC_n \\equiv C_{\\Phi_{\\perp}^1 ,\\; \\Phi_{\\perp}^1 ,\\;\\mathcal{L}_{0,[0,0]}^{\\Delta_n}}.\\end{equation}\nThe OPE decomposition is:\n\\begin{equation}\\label{eq:OPEf}\nf(x) = F_{\\mathcal{I}}(x) + { {C^2_{\\rm BPS} \\, {F}_{\\mathcal{B}_2}(x)}} + \\sum_{n } { {C^2_{n} \\, {F}_{{\\Delta_n}}(x)}} \\; ,\n\\end{equation}\nwhere the superconformal blocks are \n\\beqa\\la{superblocks}\nF_{\\mathcal{I}}(x) &=& x\\;,\\\\\nF_{\\mathcal{B}_2}(x) &=& x - x\\, _2F_1(1,2,4;x ) \\;,\\\\\nF_{{\\Delta}}(x) &=& \\frac{x^{\\Delta+1}}{1-\\Delta}\\, _2F_1(\\Delta+1,\\Delta+2,2 \\Delta+4;x )\\; ,\n\\eeqa\nand $C_n$ are OPE coefficients for the non-protected states, which are nontrivial functions of the coupling and the main objective of our work. Finally, the constant $\\mathbb{F}$ and the OPE coefficients corresponding to the $\\mathcal{B}_2$ block are related as $\\mathbb{F}(g) = 1 + C^2_{BPS}(g)$, and this OPE coefficient was fixed by comparison with a topological observable computable with localisation~\\cite{Giombi:2017cqn,Liendo:2018ukf}. The same relation can also be derived using integrability as shown in Appendix \\ref{app:CBPS}. 
The result reads\n\\begin{equation}\n\\mathbb{F}(g) = 1 + C^2_{BPS}(g) = \\frac{3 I_1(4 g \\pi ) \\left(\\left(2 \\pi ^2 g^2+1\\right) I_1(4 g \\pi )-2 g \\pi I_0(4 g \\pi )\\right)}{2 g^2 \\pi ^2 I_2(4 g \\pi ){}^2} \\;\\;, \\end{equation}\nwhich can also be recast in terms of the Bremsstrahlung function:\n\\begin{equation}\\label{eq:F0}\n\\mathbb{F}(g) = \\frac{3 (g^2 -\\mathbb{B}(g))}{\\pi^2 (\\mathbb{B}(g))^2}\\;\\;.\n\\end{equation}\nThe conformal bootstrap constraint, arising from the compatibility of the OPE decomposition with crossing, takes the form:\n\\begin{framed}\n\\begin{equation}\\label{eq:bootstrapeq}\n\\sum_{n} C^2_n \\;\\mathcal{G}_{\\Delta_n}(x) + \\mathcal{G}_{\\text{simple}}(g, x) = 0\\;,\n\\end{equation}\n\\end{framed}\n\\noindent\nwhere we introduced the \\emph{crossed superconformal blocks}\n\\begin{equation}\\label{eq:defGD}\n\\mathcal{G}_{\\bullet}(x) \\equiv (1 - x)^2 F_{\\bullet}(x) + x^2 F_{\\bullet}(1 - x) , \\qquad \\bullet \\in \\left\\{ \\mathcal{I},\\mathcal{B}_2, \\Delta\\right\\}\\;\\;,\n\\end{equation}\nand $\\mathcal{G}_{\\text{simple}}(g, x)$ is an explicitly known function:\n\\begin{equation}\\label{eq:defGsimple}\n\\mathcal{G}_{\\text{simple}}(g, x) \\equiv \\mathcal{G}_{\\mathcal{I}}(x) + C^2_{BPS}(g)\\; \\mathcal{G}_{\\mathcal{B}_2}(x)\\;.\n\\end{equation}\nThe bootstrap constraint (\\ref{eq:bootstrapeq}) contains two sets of nontrivial quantities: the scaling dimensions and OPE coefficients $\\left\\{C_n\\right\\}$ for the nontrivial operators. In our approach, we take advantage of the fact that integrability effectively solves the problem of computing the spectrum, and focus on determining the OPE coefficients. In the following section we present two new exact relations involving this correlation function. \n\n\\section{Integrated\ncorrelators}\\label{sec:integrated}\nThe main new ingredient of this paper is the inclusion of the constraints on the 4-point function $G(x)$ arising from integrable deformations of the straight line (in addition to the spectral data coming from the QSC which were already used in~\\cite{Cavaglia:2021bnz}). \nIn this section we describe in detail these constraints, which take the form of integrals over the cross ratio for the amplitude $G(x)$ or, equivalently, $f(x)$. \n\\subsection{New integral constraints}\nWe claim that the 4-point correlator introduced in section \\ref{sec:bootstrapsetup} satisfies two integral identities involving the Bremsstrahlung and Curvature functions~\\cite{upcomingAJMNderivation}:\n\\begin{framed}\n\\begin{align}\n &\\text{ Constraint 1: }&& \\int_0^1\n \\delta G(x)\\frac{1+\\log x}{x^2}dx=\\frac{3\\mathbb{C}-\\mathbb{B}}{8 \\;\\mathbb{B}^2}\\; , \\la{eq:constr1}\\\\\n & \\text{ Constraint 2: }&& \\int_{0}^1 dx \\frac{\\delta f(x)}{x} = \n \\frac{\\mathbb{C}}{4\\;\\mathbb{B}^2} + \\mathbb{F}-3 \\; ,\\label{eq:constr2}\n\\end{align}\n\\end{framed}\n\\noindent\nwhere $\n\\delta G(x) \\equiv G(x) - G_{\\text{weak}}^{(0)}(x)$, $ \\delta f(x) \\equiv f(x) - f_{\\text{weak}}^{(0)}(x)$, and $G_{\\text{weak}}^{(0)}$, $f_{\\text{weak}}^{(0)}$ are the zero-coupling values:\n\\begin{equation}\\label{fandGtree}\nf_{\\text{weak}}^{(0)}(x)=2 x+\\frac{x}{x-1}\\;, \\;\\;\\; G_{\\text{weak}}^{(0)}(x) = \\frac{2 (x-1) x+1}{(x-1)^2}\n\\; ,\n\\end{equation}\nwhich can be easily deduced from free field theory, i.e., \n\\begin{equation}\n\\frac{G_{\\text{weak}}^{(0)}(x)}{x_{12}^2 x_{34}^2 } = \\frac{1}{x_{12}^2 x_{34}^2 } + \\frac{1}{x_{14}^2 x_{23}^2 }\\;. 
\n\\end{equation}\nWe notice that both integrals are convergent at finite $g$, as \\beqa\\la{limits}\n\\delta f(x)\\simeq \\frac{(3-\\mathbb{F} )}{2} \\, x^2 \n\\;\\;&,&\\;\\;\n\\delta G(x)\\simeq \\frac{(4 \\,\\mathbb{F}-1 )}{5} \\, x^2 \n\\;\\;,\\;\\;x\\to 0\\; \n,\\\\\n\\delta f(x)\\simeq \\frac{(\\mathbb{F}-3)}{2} \n\\;\\;&,&\\;\\;\n\\delta G(x)\\simeq \\frac{3(\\mathbb{F}-1 )}{2}\n\\;\\;,\\;\\;x\\to 1 \\; ,\\label{eq:limits2}\n\\eeqa\nwhere corrections are $O(x^{\\Delta_1+1})$ for $x\\sim0$, $O((1-x)^{\\Delta_1+1})$ for $x\\sim 1$, scaling with the power $\\Delta_1+1>2$, as follows from the OPE \\eq{eq:OPEf} and crossing symmetry.\n Using (\\ref{eq:totalder}) and integrating by parts, (\\ref{eq:constr1}) can also be rewritten in terms of $f(x)$ as \n\\beqa\\la{eq:constr1b}\n \\int_{\\delta_x}^1\n \\delta f(x)\\( \n \\frac{1}{x}+\\frac{1}{x^3}\n \\)dx\n -\\frac{1}{2} (\\mathbb{F}-3) \\log {\\delta_x}-\\mathbb{F}+3\n &=&\\frac{3\\mathbb{C}-\\mathbb{B}}{8 \\;\\mathbb{B}^2}\\;,\n\\eeqa\nin the limit of $\\delta_x\\rightarrow 0^+$. \nWe see that the divergence in \\eq{eq:constr1b} cancels as a consequence of \\eq{limits}.\n\n\n\n\\paragraph{Heuristic explanation of the integral relations.} \nWhile a detailed proof will be presented in \\cite{upcomingAJMNderivation}, let us discuss here the main steps. The existence of the constraints is based on a general principle~\\cite{Cooke:2017qgm}:\n\n \\emph{Every deformation of a Wilson line can be parametrised in terms of integrated correlators on the original line.}\n \nThis follows from the fact that operator insertions represent infinitesimal, localised deformations. Any general deformation (both in space-time as in R-space) can be approached through a series of integrated correlators (see \\cite{Cooke:2017qgm} for a systematic treatment of a spacetime deformation in perturbation theory). An example of this principle in action and an anticipation of the idea outlined below is presented in Appendix \\ref{app:CBPS}. \n\nTo deduce the constraints (\\ref{eq:constr1}) and (\\ref{eq:constr2}), \nwe analyse two special types of deformations in R-space, which, at leading order, are related to integrated values of $\\langle\\langle \\Phi_{\\perp}^i \\Phi_{\\perp}^i \\rangle\\rangle$. This was shown in \\cite{Correa:2012at}, leading to the first determination of the Bremsstrahlung function $\\mathbb{B}$. The constraints (\\ref{eq:constr1}) and (\\ref{eq:constr2}) arise from the extension of this analysis to the next order in the deformation parameters, as we describe in more detail below. \n\nFirst, we consider the deformation obtained by forming a cusp in R-space: i.e., we switch on the internal angle $\\theta$ on half of the line, as discussed in section \\ref{sec:linecusp}, while keeping $\\phi = 0$. As shown in \\cite{Correa:2012at}, at order ${\\cal O}(\\theta^2)$ this connects the integrated values of $\\langle\\langle \\Phi_{\\perp}^i \\Phi_{\\perp}^i \\rangle\\rangle$ to $\\mathbb{B}$ arising from the expansion of the cusp -- fixing in this way the normalisation the 2-point function. \n We extended the analysis to the next order ${\\cal O}(\\theta^4)$, finding a relation between the Curvature function \\eq{curvaturedef} and integrated 4-point functions of $\\Phi_{\\perp}^i$. \n It will be shown in \\cite{upcomingAJMNderivation} how this leads to a linear combination of the two constraints (\\ref{eq:constr1}) and (\\ref{eq:constr2}). 
\n\nA second important deformation of the line is the $\\frac{1}{4}$-BPS deformation introduced in \\cite{Drukker:2006ga}. It is defined by an explicit parameter $\\vartheta$ in the gauge connection. On a circular contour, the expectation value is known explicitly for any value of $g$ and of the deformation parameter~\\cite{Drukker:2006ga,Pestun:2009nn}: it is equivalent to the vev on a $\\frac{1}{2}$-BPS MWL, with a redefinition of the coupling $g\\rightarrow g' = g \\cos\\vartheta$. The perturbative expansion in $\\vartheta$ generates a series of identities for correlators of $\\Phi_{\\perp}^i$, now integrated on the circular $\\frac{1}{2}$-BPS line. \nAt order $O(\\vartheta^2)$, one again finds integrated 2-point functions, and this observation allowed the authors of \\cite{Correa:2012at} to compute $ \\mathbb{B}$. Extending the analysis to the next order in $\\vartheta$ leads to a second independent linear combination of the constraints (\\ref{eq:constr1}), (\\ref{eq:constr2}). \n\nWhile the above discussion sketches the main physical intuition, the full derivation of the constraints at finite coupling requires careful treatment. In fact, the integrated correlators generated in these expansions produce UV divergences, which should be removed through a consistent regularisation scheme while preserving the symmetries of the setup. The derivation at finite coupling requires the use of next-to-leading order conformal perturbation theory, and will be presented in \\cite{upcomingAJMNderivation}. \n\n\n\n\\subsection{Tests of the relations at strong coupling}\nStrong coupling results for the 4-point function were recently obtained using the functional analytic bootstrap in \\cite{Ferrero:2021bsb}, where results for $f(x)$ and $G(x)$ were obtained to the first five orders at large $g$. These functions expand as \n\\begin{equation}\\label{fstrongexp}\nG(x) = \\sum_{\\ell=0}^{\\infty}\\frac{G_{\\text{strong}}^{(\\ell)}(x)}{(4\\pi g)^\\ell}\n\\qquad\n\\text{and}\n\\qquad\nf(x) = \\sum_{\\ell=0}^{\\infty}\\frac{f_{\\text{strong}}^{(\\ell)}(x)}{(4\\pi g)^\\ell},\n\\qquad g\\rightarrow\\infty\\;.\n\\end{equation}\nThe first terms for the four-point function $G$ are\n\\begin{equation}\\begin{split}\nG_{\\text{strong}}^{(0)}(x)&=\\frac{((x-1) x+1)^2}{(x-1)^2}\\;, \\\\\nG_{\\text{strong}}^{(1)}(x)&=\\frac{x^2 ((x-1) x (x (2 x-5)+4)+2) \\log (x)-2 (x-1) ((x-1) x+1)^2}{(x-1)^3} \\\\\n&\\quad+\\frac{\\left(-2 x^4+x^3+x-2\\right) \\log (1-x)}{x}\\;,\n\\end{split}\\end{equation}\nand for the reduced correlator $f$ they are\n\\beqa\\begin{split}\\label{fstron01}\nf_{\\text{strong}}^{(0)}(x)&\\equiv \\frac{x ^2}{x - 1 } + x \\;,\\\\ f_{\\text{strong}}^{(1)}(x) &\\equiv \\frac{2 (x +1) \\log (1-x ) (x -1)^3-2x (2 x -1) (x -1)-2 (x -2) x ^3 \\log (x )}{ (x -1)^2} \\;.\n\\end{split}\\eeqa\nThe remaining orders up to $\\ell=4$ can be found in the supplementary material of \\cite{Ferrero:2021bsb}. Plugging these results into the l.h.s. of the constraints and performing the integrations, one finds, from (\\ref{eq:constr1}), \n\\begin{equation}\n-\\frac{3}{16 \\pi g}-\\frac{3 (24 \\zeta_3+7)}{256 \\pi ^2 g^2}-\\frac{3 (144 \\zeta_3+7)}{2048 \\pi ^3 g^3}-\\frac{9 (352 \\zeta_3-49)}{32768 \\pi ^4 g^4}+\\dots\n= \\frac{3\\mathbb{C}-\\mathbb{B}}{8 \\;\\mathbb{B}^2} \\;,\n\\end{equation}\nwhile (\\ref{eq:constr2}) yields\n\\begin{equation}\n\\frac{2 \\pi ^2-21}{24 \\pi g}+\\frac{4 \\pi ^2-7-24 \\zeta_3}{128 \\pi ^2 g^2}+\\frac{10 \\pi ^2+83-144 \\zeta_3}{1024 \\pi ^3 g^3}+\\frac{40 \\pi ^2+867-1056 \\zeta_3}{16384 \\pi ^4 g^4}+\\dots= \\frac{\\mathbb{C}}{4\\;\\mathbb{B}^2} + \\mathbb{F}-3 \\;.\n\\end{equation}\nThese relations can be easily verified using the strong coupling expansions of (\\ref{eq:B0}), (\\ref{eq:F0}) and (\\ref{eq:strongC}). \n
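These checks are easily automated. For instance, for the first constraint, the $g^0$ term of $\\delta G$ is $G_{\\text{strong}}^{(0)}(x)-G_{\\text{weak}}^{(0)}(x)=x^2$, whose integral against $(1+\\log x)/x^2$ vanishes, while the coefficient of $\\frac{1}{4\\pi g}$ must integrate to $-\\frac{3}{4}$ in order to reproduce the leading term above. A short \\texttt{Python} sketch (ours, assuming \\texttt{scipy}; the adaptive quadrature should be trusted only away from the mild endpoint cancellations) confirms this:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef G1(x):   # G_strong^(1), the first 1/(4 pi g) correction\n    return ((x**2*((x - 1.0)*x*(x*(2.0*x - 5.0) + 4.0) + 2.0)*np.log(x)\n             - 2.0*(x - 1.0)*((x - 1.0)*x + 1.0)**2)/(x - 1.0)**3\n            + (-2.0*x**4 + x**3 + x - 2.0)*np.log(1.0 - x)/x)\n\nval, _ = quad(lambda x: G1(x)*(1.0 + np.log(x))/x**2, 0.0, 1.0)\nprint(val)   # expected: -3/4 = -0.75\n\\end{verbatim}\n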
of the constraints and performing the integrations, one finds, from (\ref{eq:constr1}),
\begin{equation}
-\frac{3}{16 \pi g}-\frac{3 (24 \zeta_3+7)}{256 \pi ^2 g^2}-\frac{3 (144 \zeta_3+7)}{2048 \pi ^3 g^3}-\frac{9 (352 \zeta_3-49)}{32768 \pi ^4 g^4}+\dots
= \frac{3\mathbb{C}-\mathbb{B}}{8 \;\mathbb{B}^2} \;,
\end{equation}
while (\ref{eq:constr2}) yields
\begin{equation}
\frac{2 \pi ^2-21}{24 \pi g}+\frac{4 \pi ^2-7-24 \zeta_3}{128 \pi ^2 g^2}+\frac{10 \pi ^2+83-144 \zeta_3}{1024 \pi ^3 g^3}+\frac{40 \pi ^2+867-1056 \zeta_3}{16384 \pi ^4 g^4}+\dots= \frac{\mathbb{C}}{4\;\mathbb{B}^2} + \mathbb{F}-3 \;.
\end{equation}
These relations can be easily verified using the strong coupling expansions of (\ref{eq:B0}), (\ref{eq:F0}) and (\ref{eq:strongC}).

\subsection{Tests at weak coupling}\label{sec:weakcouplingintegrals}
The functions $G(x)$ and $f(x)$ also have a regular expansion at weak coupling,
\begin{equation}\label{fandG}
G(x) = \sum_{\ell=0}^\infty G^{(\ell)}_{\text{weak}}(x)\,g^{2\ell}\qquad\text{and}\qquad
f(x) = \sum_{\ell=0}^\infty
f^{(\ell)}_{\text{weak}}(x)\,g^{2\ell},
\qquad g\rightarrow 0 \;,
\end{equation}
which furthermore should have a finite radius of convergence. Indeed, the general expectation for observables in planar $\mathcal{N}$=4 SYM is that the radius of convergence is $|g|<\frac{1}{4}$, see \cite{Marboe:2014gma,Volin:2008kd,ShotaTalk}. The leading order of this expansion is given in (\ref{fandGtree}), while the next-to-leading order is~\cite{Kiryu:2018phb}
\beqa\label{f1weak0}
f_{\text{weak}}^{(1)}(x) &=&
\frac{2 x}{3 (1-x)} \left(6 \text{Li}_2(x) +3 \log (1-x) \log (x)-\pi ^2 x\right)\;,\\
 G_{\text{weak}}^{(1)}(x) &=& \left(-2 x-\frac{2}{x-1}\right) \log (1-x)+\left(\frac{2 x ((x-1) x+1)}{(x-1)^2}+\frac{(2-4 x) \log (1-x)}{(x-1)^2}\right) \log (x)\nonumber\\
 &&+\frac{2 \left(\pi ^2 x^2+(6-12 x) \text{Li}_2(x)\right)}{3 (x-1)^2} \;. \label{eq:G1weak}
\eeqa
Plugging (\ref{fandGtree}), (\ref{f1weak0}) into the second integral relation (\ref{eq:constr2}) and computing the integrals, we get a perfect match with the expansion of the r.h.s.,
\begin{equation}
 0 \times g^0+ \left(\frac{2 \pi ^2}{3}-6 \zeta_3\right)g^2+ O(g^4)\simeq\frac{\mathbb{C}}{4\;\mathbb{B}^2} + \mathbb{F}-3 \;, \qquad \quad g\rightarrow 0.
\end{equation}
Verifying the first integral relation (\ref{eq:constr1}) in this regime, however, is more subtle.
 In fact, plugging the weak coupling expansion of $G(x)$ into the l.h.s. of (\ref{eq:constr1}) leads, order by order for $\ell>0$, to integrals of the form
 \begin{equation}\label{eq:pertintegr}
 \int_0^1 G_{\text{weak}}^{(\ell)}(x) \frac{1+\log x}{x^2} dx \;,
 \end{equation}
 which are log-divergent.
 The reason is that at weak coupling $\Delta_1\to 1$, which creates additional divergences at each given order in $g\to 0$ due to the correction terms in (\ref{limits}).
 On the other hand, the integral $\int_0^1 \delta G(x) \frac{1+\log x}{x^2} dx$ is perfectly convergent at finite coupling, and -- \emph{after computing the integral at the non-perturbative level} -- the result can be expanded, giving rise to a well-behaved weak coupling expansion (which, however, also contains one negative power of the coupling).
We will explain this point in detail in section~\ref{sec:numerology}, where we will also use the integral relations for the analytic evaluation of the structure constants.

\section{Numerical Bootstrability}\label{sec:numerical}

The bootstrap constraint (\ref{eq:bootstrapeq}) presents us with a functional equation for the OPE coefficients, depending parametrically on the spectrum.

The equation is linear in the OPE coefficients -- which are our only unknowns, since we know the spectrum from the QSC. Nevertheless, we are still faced with two challenges. First, while integrability allows us \emph{in principle} to compute the scaling dimension of any state, in practice we can only focus on a finite number of them.\footnote{It is an interesting challenge to understand how to use the QSC formalism to deduce global properties of a spectrum (e.g., the asymptotic density of states or expansions around large quantum numbers).} This means we need a controlled way to truncate the bootstrap constraint to a finite number of levels.
Secondly, we need to deal with the functional nature of the equation. The Numerical Conformal Bootstrap (NCB) approach allows us to solve these challenges and obtain rigorous bounds. Here we describe a number of possible NCB algorithms which incorporate knowledge of the spectrum, in increasing degree of complexity. We will then describe how to include the new integral relations in this setup, which will lead us to the main numerical results of the paper.
\subsection{Basics}\label{sec:algorithms0}
Let us start by reviewing well known aspects of the modern NCB. Excellent reviews are available~\cite{Rattazzi:2008pe,Chester:2019wfx,Poland:2018epd}, so we keep the discussion short.
\paragraph{The linear functionals approach. }
The 1D CFT we are dealing with is unitary (which descends from the ambient theory $\mathcal{N}$=4 SYM), which implies that $C_n^2\geq 0$. In the NCB approach for unitary theories, one exploits this fact to turn the bootstrap equation into a set of inequalities, which bound the conformal data.

A convenient way to deduce such constraints is to act on the bootstrap equation (\ref{eq:bootstrapeq}) with a \emph{linear functional}, which transforms functions of the cross ratio $x$ into numbers. One typically considers functionals obtained as linear combinations of derivatives acting at a specific point,
for example\footnote{We consider only even derivatives, since we will act with these functionals on crossed conformal blocks $\mathcal{G}_{\Delta}$ at $x=\frac{1}{2}$. For an odd number of derivatives the action is trivial: $\partial_x^{2 n+1} \left.\mathcal{G}_{\Delta}(x) \right|_{x=\frac{1}{2}} = 0$. }
\begin{equation}\label{eq:derspace}
\alpha \left[ F(x) \right]\equiv \sum_{n = 0}^{N_{\text{der}}/2 } {A}_n \;\partial_x^{2 n} \left. F(x) \right|_{x=\frac{1}{2}}\;,
\end{equation}
for some coefficients $\vec{A}$, where the point $x=\frac{1}{2}$ is chosen because it guarantees maximal convergence of the OPE. The value of $N_{\text{der}}\in 2 \mathbb{N}$ gives a truncation on the space of functionals and will be a parameter in the approach.
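For concreteness, the following minimal Python sketch evaluates a functional of the form (\ref{eq:derspace}) numerically. As a stand-in for the blocks we use the standard $\mathfrak{sl}(2)$ block $x^{\Delta}\,{}_2F_1(\Delta,\Delta;2\Delta;x)$ -- an assumption made purely for illustration, since the actual crossed blocks $\mathcal{G}_{\Delta}$ of our setup involve additional kinematic factors -- together with toy coefficients $A_n$.
\begin{verbatim}
# Minimal sketch of a linear functional alpha[F] = sum_n A_n F^(2n)(1/2),
# cf. (eq:derspace). The sl(2) block below is a stand-in for the paper's
# crossed superconformal blocks (an assumption, for illustration only).
from mpmath import mp, hyp2f1, diff

mp.dps = 30  # high precision: high-order numerical derivatives are delicate

def sl2_block(Delta):
    return lambda x: x**Delta * hyp2f1(Delta, Delta, 2*Delta, x)

def functional(A, F):
    # alpha[F] = sum_n A[n] * d^{2n}F/dx^{2n}, evaluated at x = 1/2
    return sum(a * diff(F, mp.mpf(1)/2, 2*n) for n, a in enumerate(A))

A = [1, 0.1, 0.01]                  # toy coefficients (A_0, A_1, A_2)
print(functional(A, sl2_block(2)))  # action on a Delta = 2 stand-in block
\end{verbatim}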
Acting with $\alpha$ on the bootstrap constraint (\ref{eq:bootstrapeq}), we get a linear equation for the OPE coefficients
\begin{equation}\label{eq:bootstraplinear}
\sum_{n} C^2_n \; \alpha\left[ \mathcal{G}_{\Delta_n}\right] +\alpha\left[ \mathcal{G}_{\text{simple}} \right]= 0 \; ,
\end{equation}
where the cross-ratio dependence is removed,
and $\mathcal{G}_{\Delta}$, $\mathcal{G}_{\text{simple}}$ are defined in (\ref{eq:defGD}), (\ref{eq:defGsimple}). Given a functional of the form (\ref{eq:derspace}), this is concretely rewritten as
\begin{equation}\label{eq:vectorform}
\sum_{n} C^2_n \; \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right) + \left(\vec{A}\cdot \vec{V}_{\text{simple}}\right)= 0 \;,
\end{equation}
which is true for arbitrary coefficients $\vec{A} = (A_0,A_1,\dots, A_{N_{\text{der}}/2})$, where
\beqa
\vec{V}_{\Delta}&\equiv& \left. \left( \mathcal{G}_{\Delta}(x)\,,\, \partial_x^2\mathcal{G}_{\Delta}(x)\,,\, \dots\,,\, \partial_x^{2 n}\mathcal{G}_{\Delta}(x)\,,\, \dots\,,\, \partial_x^{N_{\text{der}}}\mathcal{G}_{\Delta}(x) \right)\right|_{x = \frac{1}{2}}\;,\label{eq:VDelta}\\
\vec{V}_{\text{simple}} &\equiv& \left. \left( \mathcal{G}_{\text{simple}}(x)\,,\, \partial_x^2\mathcal{G}_{\text{simple}}(x)\,,\, \dots\,, \, \partial_x^{2 n}\mathcal{G}_{\text{simple}}(x)\,, \,\dots\,,\, \partial_x^{N_{\text{der}}}\mathcal{G}_{\text{simple}}(x) \right)\right|_{x = \frac{1}{2}} \;.\label{eq:Vsimple}
\eeqa
Starting from section \ref{sec:incorporating}, we will get other systems of equations of the form (\ref{eq:vectorform}), but with more general definitions of $\vec{V}_{\Delta}$, $\vec{V}_{\text{simple}}$.

The main principle of the NCB is to find appropriate functionals that allow us to extract maximal information from the equation. In particular, we will look for $\alpha$ which is positive semi-definite above a threshold $\Delta_{\ast}$:
\begin{equation}\label{eq:posi}
\texttt{Positivity condition: } \;\;\; \vec{A}\cdot \vec{V}_{\Delta} \geq 0 , \;\;\;\; \forall \Delta \geq \Delta_{\ast} \;,
\end{equation}
which allows us to deduce an inequality:
\begin{equation}\label{eq:inequa}
\sum_{\left\{\Delta_n\right\}\,:\; \Delta_n < \Delta_{\ast} } \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right) \; C^2_n + \left(\vec{A}\cdot \vec{V}_{\text{simple}}\right) \leq 0 \;.
\end{equation}
This relation involves the sum over a finite number of states in the spectrum and gives a rigorous truncation of the bootstrap.

\paragraph{OPE bounds algorithm. }
By further specifying the properties of the functional, one can deduce constraints on the OPE coefficients. Here we start by reviewing the algorithm used in our previous paper \cite{Cavaglia:2021bnz}. It is an adaptation of a well-known NCB algorithm (see e.g.~\cite{Chester:2019wfx}) to our situation, where we know the spectrum exactly.
\begin{framed}
\texttt{Algorithm 1 (Upper Bound). }
We start from the bootstrap constraint written in the form (\ref{eq:vectorform}).
We will scan among the functionals satisfying the \emph{positivity condition} (\ref{eq:posi}), with the threshold coinciding with the second state in the spectrum, $\Delta_{\ast} \equiv \Delta_2$.
Among such functionals, we search for $\vec{A}^{\text{up}}$ such that:
\begin{enumerate}[1)]
\item $(\vec{A}^{\text{up}}\cdot \vec{V}_{\Delta_1}) = 1 $,
\item $(\vec{A}^{\text{up}}\cdot \vec{V}_{\text{simple}}) $ is maximal.
\end{enumerate}
Then, (\ref{eq:inequa}) gives the optimal bound
\begin{equation}
C_1^2 \leq - (\vec{A}^{\text{up}}\cdot \vec{V}_{\text{simple}})\;.
\end{equation}
\end{framed}
\noindent
A small tweak of the algorithm gives us a lower bound.
\begin{framed}
\texttt{Algorithm 1 (Lower Bound). }
Again, choose the positivity threshold as the second state in the spectrum, $\Delta_{\ast} \equiv \Delta_2$. Among the functionals satisfying the \emph{Positivity Condition} (\ref{eq:posi}), search for $\vec{A}^{\text{low}} $ such that:
\begin{enumerate}[1)]
\item $ (\vec{A}^{\text{low}} \cdot \vec{V}_{\Delta_1} ) = -1 $,
\item $(\vec{A}^{\text{low}} \cdot \vec{V}_{\text{simple}} )$ is maximal.
\end{enumerate}
Then, (\ref{eq:inequa}) gives the optimal bound
\begin{equation}
C_1^2 \geq (\vec{A}^{\text{low}} \cdot \vec{V}_{\text{simple}} ) \;.
\end{equation}
\end{framed}
\noindent
In short, we can obtain bounds on the OPE coefficient of the lowest lying state,
\begin{equation}
 (\vec{A}^{\text{low}} \cdot \vec{V}_{\text{simple}} )\leq C_1^2\leq - (\vec{A}^{\text{up}} \cdot \vec{V}_{\text{simple}} )\;,
\end{equation}
using as input only the values of the scaling dimensions of the first two states, $\Delta_1$ and $\Delta_2$.

This method was employed in \cite{Cavaglia:2021bnz} to obtain a very narrow allowed region for $C_1$ as a function of the coupling. In the following sections we will see how this result can be dramatically improved by including the new integral relations, and how bounds for excited states can be obtained. Before moving to these new results, we discuss how the mathematical optimisation problem defined above can be solved efficiently using \texttt{SDPB}~\cite{Simmons-Duffin:2015qma}.

\paragraph{Implementation using SDPB. }
The space of functionals satisfying positivity conditions such as (\ref{eq:posi}) can be navigated efficiently using semi-definite programming algorithms. These methods were introduced in the NCB context in \cite{Simmons-Duffin:2015qma} and implemented numerically in the software package \texttt{SDPB}~\cite{Simmons-Duffin:2015qma,Landry:2019qug}, which we used in our work. A full description of the internal algorithms can be found in the above references. Here, we briefly describe the main approximations involved and how they impact the results.

First, in order to apply linear programming algorithms to impose the positivity conditions, we need to employ an approximation for the conformal blocks in terms of polynomials in $\Delta$. Similarly to the cases in higher dimensions~\cite{Chester:2019wfx}, this is obtained using a truncated expansion of the form:
 \begin{equation}\label{eq:expandblocks}
f_{\Delta}(x) \sim \sum_{k = 0}^{\text{N}_{\text{blocks}}} a_k(\Delta) \; \left( r(x)\right)^k , \;\;\;\; r(x)\equiv \frac{x}{\left(\sqrt{1-x}+1\right)^2} \; .
 \end{equation}
Under this approximation, the action of the elementary functionals on the blocks takes the form:
\begin{equation}\label{eq:polyapprox}
\left.
\\frac{\\partial^{2 n} }{\\partial x^{2 n} } \\mathcal{G}_{\\Delta}(x)\\right|_{x=\\frac{1}{2}} \\equiv \\texttt{Pos}(\\Delta) \\times \\texttt{Poly}_n(\\Delta) \\;,\n\\end{equation}\nwhere $\\texttt{Pos}(\\Delta)$ is a strictly positive function\\footnote{We take $\\texttt{Pos}(\\Delta) = \\left(4 r(\\frac{1}{2}) \\right)^{\\Delta} \\;\\left((\\Delta^2 - 1)\\Delta (\\Delta - 1) \\prod_{n=1}^{\\frac{N_{\\text{blocks}}}{2} - 1}(\\Delta + \\frac{(3 + 2 n)}{2} )\\right)^{-1}$. The same positive prefactor allows to write a polynomial approximation both for the action of derivatives, and for the action of integral operators introduced in section \\ref{sec:incorporating}. Notice that this is different from the case of the integrated correlator constraints in \\cite{Chester:2021aun}, which required to use different methods (linear programming rather than semidefinite programming). } of $\\Delta$ and $\\texttt{Poly}_n(\\Delta)$ are polynomial in $\\Delta$. This polynomial approximation is used to impose positivity conditions such as (\\ref{eq:posi}) by treating $\\Delta$ as a continuous parameter. \n\nIn our work, we typically take $N_{\\text{blocks}}\\sim 30$. While this approximation introduces an error, in practice its impact is invisible on the scale of our bounds. We have verified on selected points that $N_{\\text{blocks}} = 50$ and $N_{\\text{blocks}} = 100$ give the same bounds to the relevant digits. \n\nThe most important parameter of the method is the integer $N_{\\text{der}}$ which characterises the dimension of the vector $\\vec{A}$ -- namely, it restricts the space of functionals we explore. The choice of $N_{\\text{der}}$ does affect significantly the bounds. However, importantly the bounds associated to the Numerical Conformal Bootstrap are rigorous, in the sense that they are true for any value of $N_{\\text{der}}$. Moreover, they can only get sharper and sharper by taking this value larger and larger. \n\nWhile in \\cite{Cavaglia:2021bnz} we presented an extrapolation to $N_{\\text{der}}\\rightarrow \\infty$, here we simply take this number to be $N_{\\text{der}} = 140$ for our main results (see section \\ref{sec:results}). Even at this fixed value, thanks to the new integral relations, we will improve significantly our previous results. \n\n\\paragraph{Precision considerations related to the spectrum. } There is still one potential source of errors we have not discussed, namely the finite precision on the spectral data coming from the numerical solution of the QSC, which we are using as input. We checked that all methods discussed in this paper are quite stable with respect to this error: an error on the spectrum propagates into a shift roughly of the same magnitude for the best estimate for the OPE coefficients. \nThis means that, in order not to contaminate the bounds for $C_i^2$, it is sufficient to use spectral data with an error a couple of orders smaller than the width of the bounds.\n\nWe also noticed that the numerical bootstrap algorithms are quite sensitive to the injection of wrong spectral data, and often they cease to converge if one inputs a spectrum which deviates too much from the QSC answer. For instance, at $g=3$, using the method discussed in section \\ref{sec:incorporating} with spectral input from 10 states, it is enough to introduce an error on the spectrum on a scale of $5\\times10^{-7}$ and the method to bound $C_1^2$ would no longer converge. 
To generate the results published in this paper, we used spectral data with at least 12 digits of precision (we expect that the precision is actually higher for most points, and exceeds 20 digits at weak coupling).
We estimate that, with such precision, errors on the spectrum do not have a significant effect on the bounds for OPE coefficients, over the full range of the coupling we consider.

\subsection{Including more information on the spectrum}\label{sec:algorithms1}
The method of \texttt{Algorithm 1} uses as input only the scaling dimensions of the first two low-lying states. A natural way to improve the bounds is to include more information on the exact spectrum.

For instance, if we know the values of the next few states $\Delta_3,\Delta_4,\dots, \Delta_{N}$, we can impose a relaxed positivity condition, where the functional should satisfy
\begin{equation}\label{eq:posigap}
(\vec{A}\cdot \vec{V}_{\Delta})\geq 0 \,, \;\;\;\forall \Delta \geq \Delta_{N}\; ,
\end{equation}
together with a discrete set of additional constraints
\begin{equation}\label{eq:discreteposi}
(\vec{A}\cdot \vec{V}_{\Delta_n})\geq 0 , \;\;\; n = 2,3,\dots, N -1 \;.
\end{equation}
Such generalised conditions can be easily implemented in \texttt{SDPB}\footnote{We are grateful to Petr Kravchuk for discussing this point.}.
The functionals satisfying these relations now span a larger space, since they are allowed to assume negative values when $\Delta$ lies in the gaps between the first $N$ states. This is illustrated in figures \ref{fig:figfunc1} and \ref{fig:figfunc2}. If we now rerun the previous algorithms, the bounds will become sharper. This is simply because the bounds come from maximising a certain quantity -- and we are now searching a larger space for the functional that maximises it.
 Moreover, this extension of the method also allows us to easily generate bounds for the OPE coefficients of states other than the ground state (in fact, for any of the first $N-1$ states). We summarise concisely how this works below.
\begin{framed}
\texttt{Algorithm 2 -- Bounds including more states. }
We now use the knowledge of the first $N$ states. We describe how to obtain bounds for $C_m^2$, with $m \leq N - 1$.
We restrict to functionals satisfying the \emph{positivity conditions with gaps}
\beqa
&&(\vec{A}\cdot \vec{V}_{\Delta} )\geq 0,\;\;\;\forall \Delta \geq \Delta_{N},\\
&& (\vec{A}\cdot \vec{V}_{\Delta_n} ) \geq 0,\;\;\; n \in \left\{1,2,\dots,N-1\right\} , \; n\neq m \; .
\eeqa
Under these conditions, we search for $\vec{A}^{\text{up},m}$ and $\vec{A}^{\text{low},m}$ such that:
\begin{enumerate}[1)]
\item $(\vec{A}^{\text{up},m}\cdot \vec{V}_{\Delta_m} ) = 1 $,
\item $(\vec{A}^{\text{up},m}\cdot \vec{V}_{\text{simple}} ) $ is maximal,
\item $(\vec{A}^{\text{low},m}\cdot \vec{V}_{\Delta_m} )= -1 $,
\item $(\vec{A}^{\text{low},m}\cdot \vec{V}_{\text{simple}} ) $ is maximal.
\end{enumerate}
Then, (\ref{eq:inequa}) gives the bounds
\begin{equation}
(\vec{A}^{\text{low},m}\cdot \vec{V}_{\text{simple}} )\leq C_m^2 \leq - (\vec{A}^{\text{up},m}\cdot \vec{V}_{\text{simple}} )\; .
\end{equation}
\end{framed}
\begin{figure}[t]
\centering
 \includegraphics[width=0.85\linewidth]{Functionalplot3.png}
 \captionof{figure}{A depiction of $\vec{A}^{\text{up}} \cdot \vec{V}_{\Delta}$ as a function of $\Delta$, where $\vec{A}^{\text{up}}$ is calculated using \texttt{Algorithm 1} with $N_{\text{der}} = 60$, for $g = \frac{1}{4}$. The red points mark the position of the first 10 spectral levels (the levels $\Delta_7$ and $\Delta_8$ are very close and not distinguishable by eye). The value of the functional is positive for all levels and takes the value $\vec{A}^{\text{up}} \cdot \vec{V}_{\Delta_1} = 1$ on the ground state (outside the scale of the figure). }
 \label{fig:figfunc1}
\centering
 \includegraphics[width=0.85\linewidth]{Functionalplot4.png}
 \captionof{figure}{Here, we depict again $\vec{A}^{\text{up}} \cdot \vec{V}_{\Delta}$ as a function of $\Delta$, but obtained with the more general \texttt{Algorithm 2}, which allows it to take negative values between the exact values of the first 10 levels. As the optimal functional is drawn from a larger set, this leads to a stronger bound. As in Fig. \ref{fig:figfunc1}, here $N_{\text{der}} = 60$ and $g = \frac{1}{4}$. }
 \label{fig:figfunc2}
\end{figure}
For illustration, let us compare the bounds for $C_1$ obtained with \texttt{Algorithm 1} (i.e., including only $\Delta_1$ and $\Delta_2$ as input) and with \texttt{Algorithm 2} with the input of the first $N=10$ states. The functional corresponding to the upper bound is shown in Figs. \ref{fig:figfunc1} and \ref{fig:figfunc2} for a specific choice of parameters. In the latter case, the functional is allowed to become negative in between the states. Notice that it assumes negative values between $\Delta_3$ and $\Delta_4$, while being essentially zero at these two points. In general, in the case illustrated in Fig. \ref{fig:figfunc2} the functional takes \emph{smaller} values for all levels $\Delta_n$, $2 \leq n \leq 10$. The consequence is that the truncation of the bootstrap equation is more efficient and the resulting upper bound becomes stronger. In the case illustrated, we used $N_{\text{der}} = 60$ and $g = \frac{1}{4}$. While the lower and upper bounds obtained with the simpler algorithm are $C_1^2 \in [0.0908377,0.0967516]$, with \texttt{Algorithm 2} the bounds are $C_1^2 \in [0.0908945,0.0949486 ]$, and the width of the bound is reduced by 31 percent.
Over the full range of values of the coupling, the gain in precision is at least $\sim$ 16 percent up to $g \sim 0.8$, and becomes significant at strong coupling: e.g., for $g>3$ the error decreases by more than 75 percent of its original value. A comparison of the error with different methods will be presented in Figures \ref{fig:ErrorLogScale}, \ref{fig:RelativeErrorLogScale} at the end of the section.

Bounds for the OPE coefficients of the three lowest states obtained with \texttt{Algorithm 2} are shown in Fig. \ref{fig:bigplot1}, with the inclusion of 10 states as input and $N_{\text{der}} = 60$. One immediately sees that the bounds for excited states are much less precise than that for the ground state. At low values of the coupling, in particular, the algorithm produces negative values for the lower bounds for $C_2^2$ and $C_3^2$, which are worse than the obvious estimate $C_i^2>0$, so they are not shown. In section \ref{sec:incorporating}, we will obtain significant improvements of these bounds by including the new integral relations.
\begin{figure}[t]
\centering
 \includegraphics[width=0.8\linewidth]{image1.png}
 \captionof{figure}{Bounds for the first three OPE coefficients squared,
 obtained with \texttt{Algorithm 2} with input of the first 10 states, and with $N_{\text{der}} = 60$. The allowed region is very narrow for the ground state (lower and upper bounds form a thin region which almost looks like a line), but the precision for excited states is much lower with this method. The gain in precision achieved by including the new integral relations can be seen in Figures \ref{fig:plotintcompare}, \ref{fig:plotintboth2}.}
 \label{fig:bigplot1}
\end{figure}

\paragraph{Phase transitions in the optimal functionals. }
An interesting feature of Fig. \ref{fig:bigplot1} is the corner in the upper bound for $C_2^2$.
Such a point of non-analyticity is associated with a ``phase transition'' in the shape of the optimal functional corresponding to the bound. We plot the functional for $g = 4/5$ and $g = 17/20$ in Figs. \ref{fig:plotphase1} and \ref{fig:plotphase2}, respectively. These two points are very close to each other, on different sides of the phase transition. Correspondingly, the shape of the functional changes sharply. In particular, in the case of Figure \ref{fig:plotphase2} the functional starts exploiting the gap between $\Delta_3$ and $\Delta_4$, and this leads to an abrupt improvement of the bound.
\begin{figure}[t]
\centering
 \includegraphics[width=0.9\linewidth]{Functionalplot7.png}
 \captionof{figure}{Values of $(\vec{A}^{\text{up}} \cdot \vec{V}_{\Delta} )$, as a function of $\Delta$, for the optimal functional giving the upper bound for $C_2^2$, using \texttt{Algorithm 2} with $N_{\text{der}} = 60$ and $N_{\text{states}} = 10$. Here, $g = 4/5$. Notice the functional is positive between $\Delta_3$ and $\Delta_4$. }
 \label{fig:plotphase1}
\centering
 \includegraphics[width=0.9\linewidth]{Functionalplot8.png}
 \captionof{figure}{The optimal functional giving the upper bound for $C_2^2$, for the same parameters as in \ref{fig:plotphase1}, but with $g = 17/20$. Notice the abrupt change in the shape of the functional, with the opening up of the gap between $\Delta_3$ and $\Delta_4$, associated with the phase transition.
}
 \label{fig:plotphase2}
\end{figure}

We have observed that this type of phase transition occurs not only across different values of $g$, but also as $N_{\text{der}}$ is increased. Recall that this parameter controls the dimensionality of the linear space to which the functionals belong. As this value is increased, the linear functionals acquire more degrees of freedom, allowing them to penetrate the gaps between the states in the spectrum. We observed that the opening of a new gap is associated with a jump in the error.

One may wonder if the error would shrink to zero provided we used some kind of limiting procedure, incorporating more and more levels and consistently using more and more derivatives. We suspect this is not the case, and that a single correlator does not contain all the information on the OPE coefficients, even if we were able to input the full spectrum. The shrinking of the bounds to zero very likely requires the use of multiple correlators.

In any case, since some pairs of levels in the spectrum are very close to each other, it is in practice not possible to exploit all the gaps within computationally reasonable values of $N_{\text{der}}$. The presence of phase transitions as $N_{\text{der}}$ is increased also means that the dependence of the bounds on this cutoff cannot be considered smooth. This prevents us from performing an extrapolation to $N_{\text{der}}\rightarrow \infty$, as we did in the analysis of \cite{Cavaglia:2021bnz}.\footnote{ In that case, we believe the procedure to be justified, since we were using \texttt{Algorithm 1}. We were not allowing the functional to be negative between excited states, and therefore we expect no phase transitions. } Here, therefore, we will content ourselves with working at fixed large values of $N_{\text{der}}$.

In the rest of this section we discuss some further generalisations of the algorithms. These, however, were not used to obtain our main results, and at this point the reader can safely choose to jump to section \ref{sec:incorporating}, where we discuss the inclusion of the integrated correlator constraints.

\paragraph{Incorporating existing bounds in the algorithm.}
Suppose we know some bounds for the first OPE coefficient, $C^2_1\in\left[ C_{1,-}^2 , C_{1,+}^2 \right]$ (for instance obtained by running one of the previous algorithms), and we know the first $N$ states in the spectrum.
Then, we can split $C_1^2 = C_{1,-}^2 + \delta_+ C^2_1$, where $\delta_+ C^2_1$ should be a positive quantity, and rewrite the bootstrap constraint (\ref{eq:vectorform}) as:
\beqa
0&=&\sum_{n\geq 1} C^2_n \; \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right) + \left(\vec{A}\cdot \vec{{V}}_{\text{simple}}\right) \\
&=&\delta_+ C^2_1 \left(\vec{A}\cdot \vec{V}_{\Delta_1}\right) + \sum_{n\geq 2} C^2_n \; \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right) + \left(\vec{A}\cdot \vec{\widetilde{V}}_{\text{simple}}\right) \;,
\eeqa
where we redefined
\begin{equation} \label{eq:simpletilde}
\vec{\widetilde{V}}_{\text{simple}} \equiv \vec{V}_{\text{simple}} + C_{1,-}^2\;\vec{V}_{\Delta_1} \;.
\end{equation}
Now, when computing an upper or lower bound for $C_m^2$ with $1 < m \leq N-1$, the sign-definite quantity $\delta_+ C_1^2$ can be treated on the same footing as the other unknowns, and the same trick can be applied to any state for which bounds are already known. We summarise the resulting procedure below.
\begin{framed}
\texttt{Algorithm 3 -- Incorporating known bounds. }
We use as input the first $N$ states of the spectrum, together with previously obtained bounds $C_n^2 \in \left[ C_{n,-}^2 , C_{n,+}^2 \right]$ for the states with $n\leq N$, $n \neq m$, where $m$ labels the state we want to bound. For each such state we choose a sign $\sigma_n = \pm 1$ and introduce the sign-definite combinations $\delta_{+1} C_n^2 \equiv C_n^2 - C_{n,-}^2 > 0$ and $\delta_{-1} C_n^2 \equiv C_n^2 - C_{n,+}^2 < 0$, in terms of which the bootstrap constraint (\ref{eq:vectorform}) reads
\beqa\label{eq:bootstraprewrite2}
C^2_m \left(\vec{A}\cdot \vec{V}_{\Delta_m}\right) + \sum_{n\neq m, \; n\leq N} \delta_{\sigma_n} C^2_n \; \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right)+ \sum_{n > N} C^2_n \; \left(\vec{A}\cdot \vec{V}_{\Delta_n}\right) + \left(\vec{A}\cdot \vec{\widetilde{V}}_{\text{simple}}\right) = 0\;,
\eeqa
with
\begin{equation}
\vec{\widetilde{V}}_{\text{simple}} \equiv \vec{V}_{\text{simple}} + \sum_{n\neq m, \; n\leq N} C_{n, \; - \sigma_n}^2 \;\vec{V}_{\Delta_n} \; .
\end{equation}
\noindent We work in the space of functionals satisfying the conditions:
\beqa
&&(\vec{A}\cdot \vec{V}_{\Delta} )\geq 0,\;\;\;\forall \Delta \geq \Delta_{N},\\
&& \sigma_n\; (\vec{A}\cdot \vec{V}_{\Delta_n} ) \geq 0,\;\;\; n \in \left\{1,2,\dots,N-1\right\} , \; n\neq m\; ,
\eeqa
so that (\ref{eq:bootstraprewrite2}) becomes
\begin{equation}\label{eq:bootstraprewrite3}
 C^2_m \left(\vec{A}\cdot \vec{V}_{\Delta_m}\right) + \left(\vec{A}\cdot \vec{\widetilde{V}}_{\text{simple}}\right) \leq 0 \; .
\end{equation}
Proceeding as in the previous cases, we now search for $\vec{A}^{\text{up},m}$ and $\vec{A}^{\text{low},m}$ such that:
\begin{enumerate}[1)]
\item $(\vec{A}^{\text{up},m}\cdot \vec{V}_{\Delta_m} ) = 1 $,
\item $(\vec{A}^{\text{up},m}\cdot \vec{\widetilde{V}}_{\text{simple}}) $ is maximal,
\item $(\vec{A}^{\text{low},m}\cdot \vec{V}_{\Delta_m} )= -1 $,
\item $(\vec{A}^{\text{low},m}\cdot \vec{\widetilde{V}}_{\text{simple}}) $ is maximal.
\end{enumerate}
Then, (\ref{eq:bootstraprewrite3}) gives the bounds
\begin{equation}
(\vec{A}^{\text{low},m}\cdot \vec{\widetilde{V}}_{\text{simple}} )\leq C_m^2 \leq - (\vec{A}^{\text{up},m}\cdot \vec{\widetilde{V}}_{\text{simple}} )\;.
\end{equation}
\end{framed}
We found that in some cases these tricks are effective. In particular, one can sometimes improve significantly the bounds for the excited states, provided a rather precise bound for $C_1^2$ is used as input. As an example, for $g = 1/4$, using \texttt{Algorithm 2} with $N_{\text{states}}= 10$, $N_{\text{der}} = 60$, one finds $C_3^2\in[0,0.23779]$.
 Incorporating the bounds $C_1^2 \in [0.09313678, 0.09313996]$ as in \texttt{Algorithm 3}, one can recalculate the allowed region for $C_3^2$, finding now $C_3^2 \in [0.0214, 0.1845]$, i.e. the allowed interval shrinks by 31 percent. The magnitude of this improvement is, however, very sensitive to the precision of the injected bounds for $C_1^2$. In this example, in particular, we used as input a bound for $C_1^2$ which is more precise than what could be achieved by the algorithms presented so far, and was obtained by including the integral relations into the game.
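To make the structure of these optimisation problems concrete, here is a minimal toy implementation of the upper-bound problem of \texttt{Algorithm 1} in Python with \texttt{scipy}. It replaces the semidefinite treatment of the continuous positivity condition by sampling on a grid of $\Delta$, uses a stand-in $\mathfrak{sl}(2)$ block, and builds $\vec{V}_{\text{simple}}$ from fabricated, self-consistent toy data -- all assumptions for illustration only, not the actual setup solved with \texttt{SDPB}.
\begin{verbatim}
# Toy linear-programming version of Algorithm 1 (Upper Bound). Positivity is
# only sampled on a grid of Delta, and all data below are fabricated: the
# real computation uses SDPB with polynomial positivity in Delta.
import numpy as np
from scipy.optimize import linprog
from mpmath import mp, hyp2f1, diff

mp.dps = 30
N_DER = 10  # toy truncation of the functional space

def V(Delta):
    # vector of even derivatives at x = 1/2 of a stand-in sl(2) block
    F = lambda x: x**Delta * hyp2f1(Delta, Delta, 2*Delta, x)
    return np.array([float(diff(F, mp.mpf(1)/2, 2*n))
                     for n in range(N_DER//2 + 1)])

# fake "true" data (Delta_n, C_n^2); V_simple is built so that the toy
# bootstrap equation sum_n C_n^2 (A.V_Delta_n) + (A.V_simple) = 0 holds
spectrum = [(1.5, 0.3), (2.5, 0.2), (3.8, 0.1), (5.2, 0.05)]
V_simple = -sum(c2 * V(D) for D, c2 in spectrum)

Delta1, Delta2 = spectrum[0][0], spectrum[1][0]
grid = np.concatenate([np.linspace(Delta2, 25.0, 150),
                       [D for D, _ in spectrum[1:]]])

res = linprog(
    c=-V_simple,                           # maximise A.V_simple
    A_ub=np.array([-V(d) for d in grid]),  # positivity: A.V_Delta >= 0
    b_ub=np.zeros(len(grid)),
    A_eq=V(Delta1)[None, :], b_eq=[1.0],   # normalisation A.V_Delta1 = 1
    bounds=[(None, None)] * (N_DER//2 + 1),
)
if res.success:
    # valid upper bound by construction; here it must come out >= 0.3
    print("upper bound on C_1^2:", -res.x @ V_simple)
\end{verbatim}
The same skeleton, with the extra sign conditions on the discrete levels and the shifted $\vec{\widetilde{V}}_{\text{simple}}$, reproduces the structure of \texttt{Algorithms 2} and \texttt{3}.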
\n\nWhile \\texttt{Algorithm 3} could be useful in some contexts, we found that, once we include uniformly the new integral constraints as explained in the next section, the precision is not significantly affected by using this upgrade of the method. Our main results in section \\ref{sec:results} were obtained simply with \\texttt{Algorithm 2}. \n\n\\paragraph{Other approaches. }\nAlthough we do not explore them in this paper, it is worth mentioning that there are other techniques to study OPE coefficients within the Numerical Bootstrap (for some modern developments see e.g. \\cite{Chester:2019ifh}). \n\nOne possible approach, for which preliminary results were presented in \\cite{Cavaglia:2021bnz}, is to treat some of the OPE coefficients as variational parameters in order to find their allowed region. For example, in the case presented in \\cite{Cavaglia:2021bnz}, we rewrote the bootstrap equation as\n\\begin{equation}\n C^2_1 \\left(\\vec{A}\\cdot \\vec{V}_{\\Delta_1}\\right) + \\sum_{n\\geq 4} C^2_n \\; \\left(\\vec{A}\\cdot \\vec{V}_{\\Delta_n}\\right) + \\left(\\vec{A}\\cdot \\vec{\\widetilde{\\widetilde{V}}}_{\\text{simple}}\\right) = 0 \\;,\n\\end{equation}\nwith \n$\\vec{\\widetilde{\\widetilde{V}}}_{\\text{simple}} \\equiv \\vec{V}_{\\text{simple}} + C^2_2 \\left(\\vec{A}\\cdot \\vec{V}_{\\Delta_2}\\right) + C^2_3 \\left(\\vec{A}\\cdot \\vec{V}_{\\Delta_3}\\right)$. One can then treat $C_2^2$ and $C_3^2$ as parameters, and use one of the algorithms presented above to obtain bounds on $C_1^2$, $C_1^2 \\in \\left[ C_{1,-}^2(C_2^2, C_3^2) , \\; C_{1,+}^2(C_2^2, C_3^2)\\right] $, which depend parametrically on $C_2^2$ and $C_3^2 $. Such bounds indirectly define an allowed region for $C_2^2$ and $C_3^2$, which is given by the condition that $C_{1,-}^2(C_2^2, C_3^2) < C_{1,+}^2(C_2^2, C_3^2)$. \nOne can also use other types of bootstrap algorithm, where one does not impose optimisation conditions, but rather looks for a functional with positivity conditions that exclude a certain set of conformal data.\n\nIt is definitely worth investigating if these techniques can lead to an improvement of our results. Exploring the higher dimensional parametric space of OPE coefficients, while computationally expensive, might reveal a finer structure than treating them individually as we do in this paper (for instance, in the case of \\cite{Cavaglia:2021bnz} we observed that there is a linear combination of $C_2$ and $C_3$ for which the bound is much narrower than for each of them individually). \n\n\n\\subsection{Incorporating the integral relations}\\label{sec:incorporating}\nHere we explain how the new integral relations can be embedded in the NCB framework, similarly to what was done in \\cite{Chester:2021aun} in 4D.\n\nFirst of all, we rewrite the constraints using the OPE decomposition of the 4-point function, as new linear relations for the OPE coefficients. 
As shown in appendix \ref{app:rewriteconstr}, using crossing symmetry and the OPE, the two relations (\ref{eq:constr1}), (\ref{eq:constr2}) can be rewritten as
\beqa
\text{Constraint 1: }&&\;\; \sum_{\Delta_n} C^2_n \; \texttt{Int}_1\left[\, f_{\Delta_n} \,\right] + \texttt{RHS}_1 = 0 \;, \label{eq:newconstr1}\\
\text{Constraint 2: }&&\;\; \sum_{\Delta_n} C^2_n \; \texttt{Int}_2\left[\, f_{\Delta_n} \,\right] + \texttt{RHS}_2 = 0 \;,\label{eq:newconstr2}
\eeqa
where we introduced the integral operators
\beqa\la{ints}
&&\texttt{Int}_1\left[ F(x) \right] \equiv - \int_0^{\frac{1}{2}} (x-1-x^2)\frac{F(x)}{x^2} \partial_x\log\left( x (1-x)\right)\, dx \;,\\
&&\texttt{Int}_2\left[ F(x)\right]\equiv \int_{0}^{\frac{1}{2}} dx \frac{ F(x) \; ( 2 x - 1) }{x^2} \;,
\eeqa
and the explicit functions of the coupling constant:
\beqa
\texttt{RHS}_1&=&\frac{\mathbb{B}-3
 \mathbb{C}}{8\mathbb{B}^2}+\left(7 \log(2) -\frac{41}{8}\right) (\mathbb{F}-1)+ \log
 (2)\; , \\
 \texttt{RHS}_2 &=& \frac{1-\mathbb{F}}{6}+(2-\mathbb{F}
 ) \log (2)+1 -\frac{\mathbb{C}}{4\;\mathbb{B}^2}\; .\label{eq:RHS2}
\eeqa
In order to obtain \eq{ints}, we used the crossing equation \eq{eq:fcrossing} to rewrite the integrals over the half range $x\in [0,1/2]$. Within this interval the OPE expansion \eq{eq:OPEf}
converges rapidly, which also implies that the action of the integral operators on the blocks $f_{\Delta}$ decreases rapidly with $\Delta$. Further details on the derivation of (\ref{ints})-(\ref{eq:RHS2}) are contained in Appendix \ref{app:rewriteconstr}.

 To incorporate these equations in the bootstrap setup, we consider a generic linear combination of derivatives acting on the bootstrap equation, together with the new constraints (\ref{eq:newconstr1}), (\ref{eq:newconstr2}):
 \beqa\label{eq:newbootstrap}
&& \sum_{n} C^2_n \; \left[ \sum_{k = 0}^{N_{\text{der}}/2} b_k \left. \partial_x^{2 k} \mathcal{G}_{\Delta_n} \right|_{x = \frac{1}{2}} + b_{-1} \texttt{Int}_1[f_{\Delta_n}] + b_{-2} \texttt{Int}_2[f_{\Delta_n}]\right]\\ &&+\sum_{k = 0}^{N_{\text{der}}/2} b_k \left. \partial_x^{2 k} \mathcal{G}_{\text{simple}} \right|_{x = \frac{1}{2}}+ b_{-1} \texttt{RHS}_1 + b_{-2} \texttt{RHS}_2= 0 \;.
\eeqa
This equation is true for any choice of the coefficients $\left\{b_{-1},b_{-2}, b_0, \dots, b_{N_{\text{der}}/2}\right\}$. It is now apparent that this more general equation takes the same form as (\ref{eq:vectorform}), which was the starting point of our bootstrap algorithms, where we redefine $\vec{A}$, $\vec{V}_{\text{simple}}$, $\vec{V}_{\Delta}$ as
\beqa
\vec{A} &\equiv& ( b_{-1},b_{-2})\oplus\left( b_0,b_1, \dots, b_{N_{\text{der}}/2}\right), \label{eq:speacialvectors1}\\
\vec{V}_{\Delta} &\equiv& (\texttt{Int}_1[f_{\Delta}], \texttt{Int}_2[ f_{\Delta}] )\oplus \left. \left( \mathcal{G}_{\Delta}(x),\, \partial_x^2\mathcal{G}_{\Delta}(x), \dots, \partial_x^{N_{\text{der}}}\mathcal{G}_{\Delta}(x) \right)\right|_{x = \frac{1}{2}} \; ,\\
\vec{V}_{\text{simple}} &\equiv & (\texttt{RHS}_1, \texttt{RHS}_2 )\oplus \left. \left( \mathcal{G}_{\text{simple}}(x),\, \partial_x^2\mathcal{G}_{\text{simple}}(x), \dots, \partial_x^{N_{\text{der}}}\mathcal{G}_{\text{simple}}(x) \right)\right|_{x = \frac{1}{2}} \; .
\label{eq:speacialvectors3}
\eeqa
After these redefinitions, we can run the same algorithms described in sections \ref{sec:algorithms0} and \ref{sec:algorithms1}.

At the level of implementation, using the expansion (\ref{eq:expandblocks}) we can approximate our integral operators as
\begin{equation}
\texttt{Int}_1[ f_{\Delta}] = \texttt{Pos}(\Delta) \times \texttt{Poly}_{(1)}(\Delta) \;, \;\;\;\; \texttt{Int}_2[ f_{\Delta}] =\texttt{Pos}(\Delta) \times \texttt{Poly}_{(2)}(\Delta) \; ,
\end{equation}
with polynomials $\texttt{Poly}_{(i)}(\Delta)$ and, crucially, the same positive prefactor as in (\ref{eq:polyapprox}). This differs from what was observed in \cite{Chester:2021aun}, and allows us to include the integral constraints into the \texttt{SDPB} setup.

\begin{figure}
\centering
 \includegraphics[width=0.8\linewidth]{Detail_1vs2.png}
 \captionof{figure}{Bounds for the first three OPE coefficients squared. We use the same parameters as in Fig. \ref{fig:bigplot1}, but including either the first or the second integral relation (solid vs dashed lines).
 The use of the constraints leads to a visible improvement.
 The two relations lead to comparable results, with the first integral relation being slightly more effective.}
 \label{fig:plotintcompare}
\end{figure}

\begin{figure}
\centering
 \includegraphics[width=0.8\linewidth]{BothPlotsSuperimposed.png}
 \captionof{figure}{Bounds for the first three OPE coefficients obtained using the two integral relations simultaneously, with $N_{\text{der}} = 60$ and $N_{\text{der}}=140$ (darker). This is our main result. The data for the bounds are reported in Appendix \ref{app:boundsC}. Also shown in the figures are the weak and strong coupling predictions (dotted lines), obtained in section \ref{sec:analytical}. }
 \label{fig:plotintboth2}
\end{figure}

\begin{figure}
\centering
 \includegraphics[width=0.78\linewidth]{ErrorComparePlot0New.png}
 \captionof{figure}{The value of the error for the first OPE coefficient in logarithmic scale, $\log_{10}\left( \frac{1}{2} (C_{1,+}^2 - C_{1,-}^2 ) \right)$, as a function of the coupling, resulting from various methods. We show two results from \cite{Cavaglia:2021bnz} (obtained with input from two states): the bounds at $N_{\text{der}}= 60$ (\fcolorbox{black}{color1}{\rule{0pt}{3pt}\rule{3pt}{0pt}}), and the best result of \cite{Cavaglia:2021bnz}, obtained with the extrapolation $N_{\text{der}} \rightarrow \infty$ (\fcolorbox{black}{color2}{\rule{0pt}{3pt}\rule{3pt}{0pt}}).
 These are compared to the new results, at $N_{\text{der}}= 60$,
 obtained without using integral relations (\fcolorbox{black}{color3}{\rule{0pt}{3pt}\rule{3pt}{0pt}}), using either the second (\fcolorbox{black}{color4}{\rule{0pt}{3pt}\rule{3pt}{0pt}}) or the first integral relation (\fcolorbox{black}{color5}{\rule{0pt}{3pt}\rule{3pt}{0pt}}), or using both of them (\fcolorbox{black}{color6}{\rule{0pt}{3pt}\rule{3pt}{0pt}}). Our best result is obtained using both integral relations and $N_{\text{der}} = 140$ (\fcolorbox{black}{color7}{\rule{0pt}{3pt}\rule{3pt}{0pt}}). In all these new results we use \texttt{Algorithm 2} with input from $N = 10$ states.
}
 \label{fig:ErrorLogScale}
\centering
 \includegraphics[width=0.78\linewidth]{ErrorComparePlot1New.png}
 \captionof{figure}{With the same colour scheme as in Figure \ref{fig:ErrorLogScale}, we compare various methods for the value of the relative error on a logarithmic scale, plotting $\log_{10}\left( \frac{ C_{1,+}^2 - C_{1,-}^2 }{ C_{1,+}^2 + C_{1,-}^2 } \right)$. }
 \label{fig:RelativeErrorLogScale}
\end{figure}

\subsection{Results}\label{sec:results}
Including the integral relations leads to a dramatic improvement of the bounds.
To quantify this effect, let us first describe some experiments where we add one relation at a time. This can be easily done by just dropping one component from (\ref{eq:speacialvectors1})-(\ref{eq:speacialvectors3}).
 For instance, with the same parameters as in Figure \ref{fig:bigplot1}, but now including the second integral constraint (\ref{eq:constr2}), we find that the width of the bound for $C_1^2$ decreases by at least a factor of $10$ over the whole range of the coupling (for $g>1.5$, the gain is a factor of 50). Adding the first integral constraint (\ref{eq:constr1}) on its own has an even stronger effect, with the bound decreasing by at least a factor of $30$, which becomes a factor of $80$ for $g>1.9$. The improvement is also marked at weak coupling. A comparison of the error with various methods can be found in Figures \ref{fig:ErrorLogScale} and \ref{fig:RelativeErrorLogScale}.
The gain in precision for the excited states is clearly visible in Figure \ref{fig:plotintcompare}. With either integral relation the bounds shrink approximately by a factor of $2$ starting from $g \sim 0.3$, and by a factor $\sim 9$ at strong coupling.
As can be seen in the figure, the two integral relations, taken separately, lead to very similar new bounds for the excited-state coefficients, with the first relation (\ref{eq:constr1}) being slightly more constraining. One might even suspect that the two relations are not independent, but one easily sees that this is not the case, as combining them reduces the error much further.

 Our best results, obtained using both integral relations together in the algorithm, are shown in Figure \ref{fig:plotintboth2}. One can immediately see a significant improvement for the excited states, with the upper and lower bounds indistinguishable by eye for a wide range of values of the coupling. Keeping fixed the value of $N_{\text{der}} = 60$, remarkably, for $C_1^2$ the bound shrinks by at least a factor of $10^3$ for all values of the coupling. For $C_2^2$ and $C_3^2$, the improvement grows monotonically with the coupling -- for $g = 0.3$ the bound shrinks by at least a factor of $10$, which becomes $10^2$ for $g\sim 1.5$. At strong coupling, the gain is almost a factor of 200.
We ran the algorithm with $N_{\text{der}} = 140$ to obtain our best results, which are reported in Appendix \ref{app:boundsC}.

The bounds produced by the same algorithm for $C_i^2$, with $i>3$, are less precise. In particular, for these OPE coefficients the current setup only produces a nontrivial upper bound. This is compatible in magnitude with the strong coupling results of \cite{Ferrero:2021bsb}. In particular, for the four states with $\Delta_{\text{strong}}^{(0)} = 6$, they found the strong coupling limit $\langle a_{\text{strong}}^{(0)} \rangle_3 = C_4^2 + C_5^2 + C_6^2 + C_7^2 = 10/429\simeq 0.023$, and we found the upper bounds e.g. $C_4^2 < 0.0079$ and $C_5^2 < 0.0123$ at $g = 4$.
We reserve further study of these excited states for the future. We expect that the precision can be improved by including more states in the algorithm and especially by considering more general bootstrap setups, as we discuss in section \ref{sec:discussion}.

\section{Analytic Bootstrability}\label{sec:analytical}
In this section we develop a functional analytic bootstrap approach at weak coupling, using input from the QSC solution of the spectrum at weak coupling (collected in Appendix \ref{app:spectrum}). Additionally, we use the two new integral constraints, one of which -- relation (\ref{eq:constr1}) -- is particularly powerful for extracting weak coupling data for the first OPE coefficient.

We start by discussing the subtleties in the weak coupling treatment of (\ref{eq:constr1}), and then proceed to discuss the weak coupling Bootstrability method. Our main results are summarised at the end of this section.

\subsection{Weak coupling expansion of the first integral relation }
 We now explain how to interpret the constraint (\ref{eq:constr1}) at weak coupling.
 In particular, as anticipated in section \ref{sec:integrated}, we need to understand how to
 regularise correctly the integrals
 $\int_0^1 dx\, G^{(\ell)}(x) \frac{\log{x} +1}{x^2}$, $\ell\geq 1$, arising from the perturbative expansion of the integrand.
These integrals have $\log$-divergences, so we adopt a scheme where each integral is defined by its finite part, dropping $\log^n\epsilon$ terms.

 We will now show that
 \begin{equation}\begin{split}\label{int1weak}
 \left.\int_0^1 \!\delta G(x)\, \frac{1 \!+\! \log(x)}{x^2}\;dx \right|_{\text{small }g } \!\sim& 1 \!+\! \underbrace{\int_{\epsilon}^{\frac{1}{2}}dx \left(\sum_{\ell=1}^{M} g^{2\ell} G_{\text{weak}}^{(\ell)}(x)\right)\frac{\log(x (1-x))}{x^2} }_{\texttt{regularised, }\log(\epsilon)\rightarrow 0}
 \\
&- \underbrace{\left[\frac{C_1^2}{(\Delta _1-1)^2}\right] }_{\texttt{``anomaly''}} + O(g^{2 M + 2})\;,
\end{split}\end{equation}
for any order $M=1,2,\dots$.
The meaning of this equation is the following. To reproduce the weak coupling expansion of the integral on the l.h.s., we should: i) expand the integrand up to the desired order and integrate term by term; ii) since this produces log-divergences, introduce a cutoff $\epsilon$ in the integration range and regularise the result by the prescription $\log^n\epsilon\rightarrow 0$; iii) finally, add the ``anomalous term'' in the second line of (\ref{int1weak}), which contains the OPE coefficient and scaling dimension of the ground state, expanded to the relevant order.

In the following subsection we make a digression to prove (\ref{int1weak}). Next, we will show that (\ref{eq:constr1}) can be used to deduce several orders of the weak coupling expansion of $C_1^2$ analytically.

\subsubsection{The weak coupling ``anomaly'' }
To understand the regularisation (\ref{int1weak}), it is convenient to break the l.h.s. into the two pieces $\int_0^1\frac{\delta G(x)}{x^2}dx + \int_0^1\frac{\delta G(x)}{x^2}\,\log(x)\,dx $.

\paragraph{The first piece. }
We start by analysing the first of these terms, which can in fact be evaluated exactly for any $g>0$:
\begin{framed}
\begin{equation}\label{eq:specialide}
\int_0^1\frac{\delta G(x)}{x^2}dx = 1 .
\end{equation}
\end{framed}
\noindent
To prove this identity, notice that the integrand is in fact a total derivative (see (\ref{eq:totalder})),
\begin{equation}
\frac{\delta G(x)}{x^2}=\partial_x\[\left(-\frac{1}{x^2}+\frac{1}{x}-1\right) \delta f(x)\]+1+C^2_{\rm BPS}(g) -2 .
\end{equation}
This combination is regular both at $x=0$ and $x=1$, so we can apply the integration formula
\begin{equation}
\int_0^1\frac{\delta G(x)}{x^2}dx=
C_{BPS}^2(g)-1-
2\[\left(-\frac{1}{x^2}+\frac{1}{x}-1\right) \delta f(x)\]_{x\to 0} \label{eq:firstli}\;\; ,
\end{equation}
where we used the fact that, due to crossing (\ref{eq:fcrossing}),
\begin{equation}
\[\left(-\frac{1}{x^2}+\frac{1}{x}-1\right) \delta f(x)\]_{x\to 1} = -\[\left(-\frac{1}{x^2}+\frac{1}{x}-1\right)\delta f(x)\]_{x\to 0} .
\end{equation}
 Using the OPE decomposition (\ref{eq:OPEf}), which converges around $x\sim 0$, one can easily evaluate the r.h.s. of (\ref{eq:firstli}), obtaining $C^2_{BPS}-1 - (C^2_{BPS}-2) = 1$, which proves (\ref{eq:specialide}).

Note, however, that at $g=0$ the integrand on the l.h.s. of (\ref{eq:specialide}) is strictly zero, as we defined $\delta G(x)$ as the difference between $G(x)$ and its tree-level value. The discrepancy with the value $1$ taken by the integral is a clear manifestation of the weak coupling anomaly. We will now explain how to cure this mismatch.

To understand the origin of the anomaly, let us look again at the original integral at finite coupling, which can be rewritten, using crossing, as $\int_0^{1}
 \frac{\delta G(x)}{x^2}dx =2\int_0^{\frac{1}{2}}
 \frac{\delta G(x)}{x^2}dx $. This allows us to use the OPE decomposition, which is rapidly convergent on $[0,\frac{1}{2}]$. The contribution of a single conformal block $\mathcal{L}_{0,[0,0]}^{\Delta_n}$ to the integrand is
 \begin{equation}
 \frac{ G(x) }{x^2} =
 ... - C_n^2(g) \; \partial_x \left[ \left(1-\frac{1}{x} + \frac{1}{x^2} \right)f_{\Delta_n}(x) \right] + ... \,,
 \end{equation}
 which behaves, close to the limit of integration $x \sim 0$, as
 \begin{equation}\label{eq:relevantsingular}
 \frac{ G(x) }{x^2} = ... + C_n^2(g) x^{\Delta_n(g)-2} \left( 1 + O(x) \right) + ...\;.
 \end{equation}
 Under the weak coupling expansion $\Delta_n = \Delta_{n}^{(0)} + \gamma_n(g)$, with $\gamma_n(g) = O(g^2)$, we find
\begin{equation}\label{eq:smallx}
 \left. \frac{ G(x) }{x^2} \right|_{\text{small $g$, $x$}}\!= ...+ C_n^2(g) \! \left[ x^{\Delta_n^{(0)} -2 } ( 1 + g^2 \gamma_n^{(0)} \log(x) + O(g^4) ) \!+\! O(x^{\Delta_n^{(0)}-1} ) \right] \!+...\;.
 \end{equation}
As long as $\Delta_n^{(0)} > 1 $, the weak coupling expansion thus produces terms that can be safely integrated. The only exception is the block corresponding to the ground state $\Delta_1$: in this case, since $\Delta_1^{(0)} = 1$, we encounter a log-divergence at $x \sim 0$. We can therefore zoom in on this term to understand the correct perturbative regularisation of the integral.

Focusing on the relevant singular behaviour (\ref{eq:relevantsingular}) for $n=1$, we have the integral
\begin{equation}
I=2 \int_\epsilon^{1/2} x^{\Delta_1-2}dx ,
\end{equation}
where $\Delta_1\to 1$ at $g\to 0$.
Let us compare its non-perturbative treatment to a na\"ive finite-part regularisation of the weak coupling expansion. At finite coupling, $\Delta_1>1$ and the integral is convergent, giving, in the limit $\epsilon\to 0$,
\begin{equation}\label{eq:Ifinite}
I_{\rm finite}=\frac{2^{2-\Delta_1 }}{\Delta_1 -1}\;.
\end{equation}
At weak coupling, $\Delta_1 =1 + \gamma_1$, we expand first at small $\gamma_1$ and then integrate. Choosing the prescription $\log^n\epsilon\to 0$, and then resumming the result order by order in $\gamma_1$, yields
\begin{equation}
I_{\rm weak}=\frac{2^{2-\Delta_1 }-2}{\Delta_1 -1} .
\end{equation}
The discrepancy is
\begin{equation}
\delta I =
I_{\rm finite}-I_{\rm weak} = \frac{2}{\Delta_1-1}\;.
\end{equation}
Taking into account that the integral (\ref{eq:Ifinite}) comes from the OPE expansion and is multiplied by $C_1^2$, we have established the weak-coupling identity
\begin{equation}\label{fint1}
 \left. \int_0^1 \!\frac{\delta G(x) }{x^2}\;dx \right|_{ \text{small }g } \!\sim \underbrace{2 \int_{\epsilon}^{\frac{1}{2}}dx \frac{ \sum_{\ell=1}^{M} g^{2 \ell} G_{\text{weak}}^{(\ell)}(x)}{x^2} }_{\texttt{regularised, }\log(\epsilon)\rightarrow 0} + \underbrace{\left[\frac{2 C_1^2}{\Delta _1-1}\right] }_{\texttt{``anomaly''}} + O(g^{2 M + 2}) ,
\end{equation}
which is to be interpreted as explained above.
Let us check that this identity is compatible with the exact result (\ref{eq:specialide}). At leading order $O(1)$, the integral term drops out. This means that the anomaly term, on its own, should match the value $1$ of the integral on the l.h.s. We know that at leading order the anomalous dimension is $\Delta_1 -1 = 4 g^2 + O(g^4)$, which implies that we must have
\begin{equation}\label{eq:firstprediction}
C_1^2(g) \simeq 2 g^2 + O(g^4),
\end{equation}
which is indeed confirmed by the numerical bootstrap data~\cite{Cavaglia:2021bnz} and the analytic bootstrap computation of section \ref{sec:numerology}. At the next-to-leading order, we can plug in (\ref{eq:G1weak}) and fix one more term in the OPE coefficient. Before doing this, however, we conclude the proof of (\ref{int1weak}), which will allow us to use the full constraint (\ref{eq:constr1}), and will prove more powerful.

\paragraph{The second piece and its anomaly. }
The other integral term on the l.h.s. of (\ref{eq:constr1}) can be analysed by repeating the argument above.
Now, using crossing, we rewrite $\int_0^1 \frac{\delta G(x)}{x^2} \log(x)\;dx = \int_0^{\frac{1}{2}} \frac{\delta G(x)}{x^2} \log\left(x (1-x)\right)\;dx $. The troublesome term, again coming from the singular contribution of the block $f_{\Delta_1}$, is the integral
\begin{equation}
I'=\int_\epsilon^{1/2} x^{\Delta_1-2} \log\left(x (1-x)\right)\, dx ,
\end{equation}
which evaluates at finite coupling to
\begin{equation}
I_{\rm finite}' = -\frac{2^{1-\Delta_1 } \left(\Delta_1 +(\Delta_1 -1) (\Delta_1 \log (4)-\, _2F_1(1,1;\Delta_1 +1;-1))\right)}{(\Delta_1 -1)^2 \Delta_1 }.
\end{equation}
Comparing this with the weak-coupling regularisation,
\begin{equation}
I_{\rm weak}' = \sum_{n=0}^{\infty} \underbrace{ \int_{\epsilon}^{\frac{1}{2} } \frac{ (\log x )^{n} (\Delta_1-1)^n}{(n!)\, x} \log\left(x (1-x)\right) dx }_{\texttt{regularised},\;\log\epsilon\rightarrow 0} ,
\end{equation}
we find, order by order,
\begin{equation}
I_{\rm finite}' -I_{\rm weak}' = -\frac{1}{(\Delta_1 - 1)^2} ,
\end{equation}
which proves the relation (\ref{int1weak}).

\subsubsection{Analytic results for $C_1^2$}\label{sec:predictC1}
We are now ready to study the first integral constraint (\ref{eq:constr1}) at weak coupling.
Using (\ref{int1weak}), it becomes
\begin{framed}
\beqa\label{int1weak2}
 \left. \frac{3\mathbb{C}-\mathbb{B}}{8 \;\mathbb{B}^2} \right|_{\text{small }g} &=& 1 \!+\! \underbrace{\int_{\epsilon}^{\frac{1}{2}}dx \left(\sum_{\ell=1}^{M} g^{2\ell} G_{\text{weak}}^{(\ell)}(x)\right)\frac{\log(x (1-x))}{x^2} }_{\texttt{regularised, }\log(\epsilon)\rightarrow 0}
- \left. \left[\frac{C_1^2}{(\Delta _1-1)^2}\right] \right|_{\text{small } g}\nonumber \\ &&+ O(g^{2 M + 2})\;.
\eeqa
\end{framed}
Due to the presence of the anomalous term in this equation, the knowledge of $G(x)$ at a fixed order at weak coupling allows us to produce nontrivial predictions for the leading OPE coefficient.
Using (\ref{eq:B0}), (\ref{eq:weakC0}), the term on the l.h.s. of (\ref{int1weak2}) has the expansion
\begin{equation}\begin{split}\label{eq:lhsconstr1}
\frac{3\mathbb{C}-\mathbb{B}}{8 \;\mathbb{B}^2}
=
&-\frac{1}{8 g^2}+\left(\frac{3}{2}-\frac{\pi ^2}{12}\right)+\left(\frac{\pi ^4}{36}-9 \zeta_3\right) g^2+\left(-\frac{2 \pi ^6}{135}-4 \pi ^2 \zeta_3+135 \zeta_5\right) g^4\\
&+\left(\frac{7 \pi ^8}{810}+\frac{34 \pi ^4 \zeta_3}{15}+78 \pi ^2 \zeta_5-1806 \zeta_7\right) g^6+O\left(g^8\right).
\end{split}\end{equation}
On the other hand, for general OPE coefficients and scaling dimensions, we expect the weak coupling expansion
\begin{equation}
C_i^2(g) = \sum_{\ell=0}^{\infty} a_i^{(\ell)} \, g^{2 \ell} ,\qquad
\Delta_i(g) = \sum_{\ell=0}^{\infty} \Delta_i^{(\ell)} \, g^{2 \ell} ,
\end{equation}
 which implies that the ``anomaly term'' on the r.h.s. of (\ref{int1weak2}) starts as $-a_1^{(0)}/(g^2\Delta_1^{(1)})^2$. Since this is not matched on the l.h.s., we must have $a_1^{(0)} = 0$, and the anomaly term expands as
\begin{equation}\label{eq:anomalyweak}
- \left[\frac{C_1^2}{(\Delta _1-1)^2}\right] = -\frac{a_1^{(1)}}{ (\Delta_1^{(1)})^2g^2} + \left(\frac{2 a_1^{(1)} \Delta_1^{(2)} }{(\Delta_1^{(1)})^3} -\frac{a_1^{(2)} }{(\Delta_1^{(1)})^2} \right) + \dots
\end{equation}
Up to order $O(1)$, the integral on the r.h.s.
\n\n\\subsubsection{Analytic results for $C_1^2$}\\label{sec:predictC1}\nWe are now ready to study the first integral constraint (\\ref{eq:constr1}) at weak coupling. \nUsing (\\ref{int1weak}), it becomes\n\\begin{framed}\n\\beqa\\label{int1weak2}\n \\left. \\frac{3\\mathbb{C}-\\mathbb{B}}{8 \\;\\mathbb{B}^2} \\right|_{\\text{small }g} &=& 1 \\!+\\! \\underbrace{\\int_{\\epsilon}^{\\frac{1}{2}}dx \\left(\\sum_{\\ell=1}^{M} g^{2\\ell} G_{\\text{weak}}^{(\\ell)}(x)\\right)\\frac{\\log(x (1-x))}{x^2} }_{\\texttt{regularised, }\\log(\\epsilon)\\rightarrow 0} \n- \\left. \\left[\\frac{C_1^2}{(\\Delta _1-1)^2}\\right] \\right|_{\\text{small } g}\\nonumber \\\\ &&+ O(g^{2 M + 2})\\;.\n\\eeqa\n\\end{framed}\nDue to the presence of the anomalous term in this equation, the knowledge of $G(x)$ at a fixed order at weak coupling allows one to produce nontrivial predictions for the leading OPE coefficient.\nUsing (\\ref{eq:B0}), (\\ref{eq:weakC0}), the term on the l.h.s. of (\\ref{int1weak2}) has the expansion\n\\begin{equation}\\begin{split}\\label{eq:lhsconstr1}\n\\frac{3\\mathbb{C}-\\mathbb{B}}{8 \\;\\mathbb{B}^2} \n=\n&-\\frac{1}{8 g^2}+\\left(\\frac{3}{2}-\\frac{\\pi ^2}{12}\\right)+\\left(\\frac{\\pi ^4}{36}-9 \\zeta_3\\right) g^2+\\left(-\\frac{2 \\pi ^6}{135}-4 \\pi ^2 \\zeta_3+135 \\zeta_5\\right) g^4\\\\\n&+\\left(\\frac{7 \\pi ^8}{810}+\\frac{34 \\pi ^4 \\zeta_3}{15}+78 \\pi ^2 \\zeta_5-1806 \\zeta_7\\right) g^6+O\\left(g^8\\right).\n\\end{split}\\end{equation}\nOn the other hand, for general OPE coefficients and scaling dimensions, we expect the weak coupling expansion\n\\begin{equation}\nC_i^2(g) = \\sum_{\\ell=0}^{\\infty} a_i^{(\\ell)} \\, g^{2 \\ell} ,\\qquad\n\\Delta_i(g) = \\sum_{\\ell=0}^{\\infty} \\Delta_i^{(\\ell)} \\, g^{2 \\ell} ,\n\\end{equation}\n which implies that the ``anomaly term'' on the r.h.s. of (\\ref{int1weak2}) starts as $-a_1^{(0)}\/(g^2\\Delta_1^{(1)})^2$. Since this is not matched on the l.h.s., we must have $a_1^{(0)} = 0$, and the anomaly term expands as\n\\begin{equation}\\label{eq:anomalyweak}\n- \\left[\\frac{C_1^2}{(\\Delta _1-1)^2}\\right] = -\\frac{a_1^{(1)}}{ (\\Delta_1^{(1)})^2g^2} + \\left(\\frac{2 a_1^{(1)} \\Delta_1^{(2)} }{(\\Delta_1^{(1)})^3} -\\frac{a_1^{(2)} }{(\\Delta_1^{(1)})^2} \\right) + \\dots\n\\end{equation}\nUp to order $O(1)$, the integral on the r.h.s. drops out from (\\ref{int1weak2}), and we just need to match (\\ref{eq:lhsconstr1}) and (\\ref{eq:anomalyweak}). From the QSC, we know (see appendix \\ref{apd:anylSpec}) $\\Delta_1^{(1)} = 4$, $\\Delta_1^{(2)} = -16$. The term $O(g^{-2})$ then fixes $a_1^{(1)} = 2$, consistent with (\\ref{eq:firstprediction}), while at order $O(1)$ we fix\n\\begin{equation}\\label{C1w2}\na_1^{(2)} = \\frac{4 \\pi ^2}{3}-24\\,.\n\\end{equation}\nWe can also study the next order $O(g^2)$, which involves the integral over $G_{\\text{weak}}^{(1)}$ given in (\\ref{eq:G1weak}). Computing the integral and using the next terms in the expansion of (\\ref{eq:anomalyweak}), we can now extract \n\\begin{equation}\\label{C1w3}\na_1^{(3)} = 48 \\zeta_3+320-16 \\pi\n ^2-\\frac{76 \\pi ^4}{45}\\,.\n\\end{equation}\nIn the next section, we will see how, using a functional analytic bootstrap approach, one can push this analysis to one more loop, all in all determining a 4-loop prediction for the OPE coefficient. Our full result is reported below in equation (\\ref{eq:C1finalresult}). \n\n\\subsection{Functional bootstrap at weak coupling}\\label{sec:numerology}\n\nIn this section we use an analytic functional bootstrap approach, combined with the integrability data, to obtain structure constants in perturbation theory at weak coupling. \n\n\\subsubsection{General strategy}\n\\paragraph{Warm up at the first two orders. }\nThe first two orders of the weak coupling expansion of the reduced correlator $f(x)$ defined in \\eqref{fandG} are known. The tree level \\eqref{fandGtree} is obtained from free field theory, while the one loop \\eqref{f1weak0} was computed in \\cite{Kiryu:2018phb}. \n\nAn alternative way to present the perturbative expansion of the reduced correlator is in terms of Harmonic Polylogarithms~\\cite{Remiddi:1999ew} (HPL), which are natural functions appearing in the evaluation of Feynman integrals and are implemented in the Mathematica package \\texttt{HPL} \\cite{Maitre:2005uu,Maitre:2007kp}. \nTo do this, it is useful to introduce the new object\n\\begin{equation}\\label{defh}\nh(x)=\\frac{1 - x}{x} f(x) ,\n\\end{equation}\nfor which we have the following HPL representation for the first two orders at weak coupling\n\\beqa\nh_{\\text{weak}}^{(0)}(x)&=&1-2x , \\label{eq:h0}\\\\\nh_{\\text{weak}}^{(1)}(x)&=&-2 {H}_{1,0}+2 {H}_2-\\frac{2 \\pi ^2}{3}x\\label{h1},\\label{eq:h1}\n\\eeqa\nwith\n\\begin{equation}\n{H}_{1,0}=-\\text{Li}_2(x)-\\log(1-x)\\log(x)\\,,\\qquad\n{H}_2={H}_{0,1}=\\text{Li}_2(x) ,\n\\end{equation}\nwhere we used the HPL property\n\\begin{equation}\n{H}_{n_1,...,\\underbrace{0,...,0}_{k \\text{-times}},n_i,...}=\n{H}_{n_1,...,n_i+k,...}\\;.\n\\end{equation}\nBefore discussing how to infer an ansatz for the general perturbative order, let us discuss the implications of these results for the conformal data.
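\n\nBefore doing so, we note that the representation \\eqref{eq:h0}--\\eqref{eq:h1} can be checked directly against the crossing property $h(x)+h(1-x)=0$ used below (cf.~\\eqref{hcrossing}). A minimal numerical sketch in Python with \\texttt{mpmath} (the sample points are arbitrary):\n\\begin{verbatim}\nfrom mpmath import mp, mpf, polylog, log, pi\nmp.dps = 30\ndef h1(x):\n    H10 = -polylog(2, x) - log(1 - x)*log(x)   # H_{1,0}\n    H2 = polylog(2, x)                         # H_2 = Li_2(x)\n    return -2*H10 + 2*H2 - 2*pi**2\/3*x\nfor x in [mpf('0.13'), mpf('0.5'), mpf('0.81')]:\n    print(h1(x) + h1(1 - x))   # vanishes to working precision\n\\end{verbatim}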
\n\nFirst, one can compare the leading order $h_{\\text{weak}}^{(0)}(x)$ with the OPE expansion \\eqref{eq:OPEf}, using the leading order for the scaling dimensions, which become degenerate at tree level to the values $\\Delta^{(0)} \\equiv J = 1,2,3,\\dots.$ Comparing order by order the small-$x$ expansion of \\eqref{eq:OPEf} and $h_{\\text{weak}}^{(0)}(x) $, one can fix the following constraints on the structure constants\n\\begin{equation}\\label{CLOavg}\n\\langle\\; a^{(0)}\\;\\rangle_J=\\frac{4^{-J-1} \\sqrt{\\pi } (J-1) \\Gamma (J+3)}{\\Gamma \\left(J+\\frac{3}{2}\\right)} ,\n\\end{equation}\nwhere $\\langle...\\rangle_J$ represents the sum over the state multiplicity at weak coupling for a given $\\Delta^{(0)} = J$. \nSuch averages are very common in analytic bootstrap approaches, which typically expand around points of degeneracy of the spectrum. For us, however, a big advantage will be the knowledge of the spectrum, which will give us more conditions and allow us to resolve some of these averages much more easily. In particular, at this stage we know the degeneracies, so that for instance $J=1$ counts only one state, and then (\\ref{CLOavg}) means $\\langle\\; a^{(0)}\\;\\rangle_1=a_1^{(0)}=0$, which is the same result we found in the previous section. \nAt the next level $J=2$ we have two states implying $\\langle\\; a^{(0)}\\;\\rangle_2=a_2^{(0)}+a_3^{(0)}=1\/5$, and so on. \n\nThe case of the ground state $\\Delta_1$, with $\\Delta_1^{(0)} = 1$, is special. In fact, due to the factor $1\/(\\Delta - 1)$ in the definition of the superconformal blocks (\\ref{superblocks}), the scaling dimension and OPE coefficient \\emph{at one loop} enter the OPE expansion of the correlator at tree level. In particular, the leading behaviour at small $x$ is determined by the block $C_1^2 f_{\\Delta_1}(x) \\sim x^{\\Delta_1 + 1}\/(\\Delta_1 - 1)\\sim a_1^{(1)} x^{2} \/\\Delta_1^{(1)} + O(g^2)$. Matching this with the small-$x$ behaviour of $h_{\\text{weak}}^{(0)}(x)$, and using $\\Delta_1^{(1)} = 4$ coming from the QSC, we can read off $a^{(1)}_1=2$, in agreement with the independent derivation in \\eqref{eq:firstprediction}. This obviously extends to higher loops: the small-$x$ behaviour of $h_{\\text{weak}}^{(\\ell)}(x)$ is determined by conformal data of the ground state up to $\\ell + 1$ loops. \n\nRepeating the same procedure for $h_{\\text{weak}}^{(1)}(x)$, and including the results of the previous order, we can read $a_1^{(2)} = 4 \\pi ^2\/3-24$, matching the result obtained by the integral relation in \\eqref{C1w2}. Expanding the conformal blocks at higher order in $x$, it is possible to disentangle the average appearing in \\eqref{CLOavg} obtaining \n\\begin{equation}\\begin{split}\\label{cleading}\na_2^{(0)}&=a_3^{(0)}=\\frac{1}{10},\\qquad\na_4^{(0)}=a_5^{(0)}=a_7^{(0)}=a_9^{(0)}=0,\\\\\na_6^{(0)}&=\\frac{1}{14}+\\frac{2}{7\\sqrt{37}},\\qquad\\;\\;\na_8^{(0)}=\\frac{1}{14}-\\frac{2}{7\\sqrt{37}} ,\n\\end{split}\\end{equation}\ntogether with the following constraints for the sub-leading orders\n\\begin{equation}\\label{csubleading}\n\\langle\\,a^{(1)}\\,\\rangle_2=\\frac{2\\pi^2}{15}-1,\\qquad\n\\langle\\,a^{(1)}\\,\\rangle_3=\\frac{2\\pi^2}{21}-\\frac{1159}{882},\\qquad\n\\langle\\,a^{(1)}\\,\\rangle_4=-\\frac{29}{36}+\\frac{\\pi ^2}{21}.\n\\end{equation}\n\\paragraph{General ansatz and strategy. }\nThe reduced correlator at order $g^4$ is unknown. 
To proceed, we will formulate an ansatz for the generic term based on the form of the first two orders (and also inspired by the functional forms observed at strong coupling~\\cite{Ferrero:2021bsb}). Consistent with (\\ref{eq:h0}), (\\ref{eq:h1}), we assume that, at $\\ell$ loops, $h$ is given in terms of a complete basis of HPL's with transcendentality\\footnote{For an HPL function $H_{n_1,n_2,\\dots, n_{\\tau}}(x)$, with $n_i\\in\\left\\{0,1\\right\\}$, the transcendentality is the number of indices.} up to $\\tau = 2 \\ell$. Besides, we assume the same total transcendentality for all terms, i.e., \n\\begin{equation}\\label{basis}\nh_{\\text{weak}}^{(\\ell)}(x)=\\beta_0^{(2\\ell)}+\\beta_1^{(2\\ell)} x+\\sum_{\\tau=1}^{2\\ell}\\beta_{n_1,...,n_\\tau}^{(2\\ell-\\tau)}\\,H_{n_1,...,n_\\tau},\n\\end{equation}\nwhere the coefficients $\\beta^{(m)}$ are transcendental numbers of weight $m$ and the indices $n_i=0,1$. For example, the basis used for $\\ell=1$ according to this ansatz contains eight terms:\n\\begin{equation}\\label{eq:hansatz}\nh_{\\text{weak}}^{(1)}(x)=\\beta_0^{(2)}+\\beta_1^{(2)} x+\\beta_{0}^{(1)}H_{0}+\\beta_{1}^{(1)}H_{1}\n+\\beta_{0,0}^{(0)}H_{0,0}\n+\\beta_{1,0}^{(0)}H_{1,0}\n+\\beta_{0,1}^{(0)}H_{0,1}\n+\\beta_{1,1}^{(0)}H_{1,1} .\n\\end{equation}\nComparing it with \\eqref{f1weak0}, one can conclude that the only non-vanishing coefficients are the following\n\\begin{equation}\\label{eq:1loopfixing}\n\\beta_1^{(2)}=-\\frac{2}{3}\\pi^2\\,,\\qquad\n\\beta_{1,0}^{(0)}=-2\\,,\\qquad\n\\beta_{0,1}^{(0)}=2\\,,\n\\end{equation}\nleading to \\eqref{h1}. \n\nIn order to fix the coefficients of the ansatz \\eqref{basis} for $\\ell>1$, we apply the following strategy\n\\begin{itemize}\n \\item \\texttt{Crossing equation.} In terms of $h(x)$, crossing symmetry translates to\n\\begin{equation}\\label{hcrossing}\nh(1-x)+h(x)=0 .\n\\end{equation} \nTo impose this equation, in practice it is enough to study some terms of its expansion around $x \\sim 0$. Once these terms are set to zero by fixing some of the $\\beta$ coefficients, the equation is satisfied for any $x$. \n \\item \\texttt{Cancelling logarithms.} In order to arise from an OPE expansion (\\ref{eq:OPEf}) at weak coupling, the function $h(x)$ has to satisfy certain constraints on its $x\\sim 0$ behaviour. At $\\ell$ loops, $h^{(\\ell)}(x)$ can only contain terms $\\log^{m}(x)\\, x^n$, with $n\\in \\mathbb{N}$ and $m = 1,\\dots, \\ell$. We compare this with the expansion of the ansatz, and impose the cancellation of the logarithms with higher powers. \n \\item \\texttt{Conformal data matching.} The OPE expansion gives infinitely many relations between terms in a small $x$ expansion of $h^{(\\ell)}(x)$ and the conformal data. Equating these predictions with the expansion of the ansatz (\\ref{eq:hansatz}), we find relations between the $\\beta^{(\\ell)}$ coefficients and the conformal data $a_i^{(m)}$, $\\Delta_i^{(m)}$, with $m\\leq \\ell$ for $i>1$ and $m\\leq \\ell+1$ for $i = 1$. Using the \\emph{knowledge of the spectrum} from integrability, these relations allow us to fix some $\\beta$'s as well as some OPE coefficients. \n \\item \\texttt{Integral constraints. }\n We impose the two integral relations described in section \\ref{sec:integrated}. At a given order, the constraint (\\ref{eq:constr2}) fixes further information on the coefficients of the ansatz. 
The constraint (\\ref{eq:constr1}), due to the ``weak coupling anomaly'' effect described in section \\ref{sec:weakcouplingintegrals}, can be used to extract the structure constant $a_1^{(\\ell+2)}$ in terms of the coefficients $\\beta^{(\\ell)}$ of the ansatz. \n \\item \\texttt{Transcendentality. } Our assumption is that the coefficients $\\beta^{(2 \\ell - \\tau)}_{n_1,\\dots, n_{\\tau}}$ are combinations with rational coefficients\\footnote{For many of them it turns out the coefficients are integers.}\n of numbers of uniform transcendentality $2 \\ell -\\tau$. \n In particular, up to the perturbative order we considered, we did not encounter Multiple Zeta values, but only numbers which are products of elements of the basis\n \\begin{equation}\\label{eq:basis}\n \\left\\{\n\\pi^2 , \\zeta_3, \\zeta_5, \\zeta_7, \\dots, \\zeta_{2n+1} , \\dots \n \\right\\}.\n \\end{equation}\n The elements of this basis have transcendental weight $2$, $3$, $5$, $7,\\dots,2n+1,\\dots$, and their products have weight equal to the sum of the weights of the factors. \n There are no linear relations with rational coefficients between the numbers in the basis (\\ref{eq:basis}). \n This implies that some linear relations generated by the other ``axioms'' listed above split into more constraints, as terms of different transcendentality must vanish individually. \n\\end{itemize}\n\\subsubsection{Higher loops}\n\\paragraph{Fixing the correlator at 2 loops. }\nFor $\\ell=2$, the basis \\eqref{basis} with maximal transcendentality 4 counts $32$ elements. \nImposing the crossing equation \\eqref{hcrossing}\none can fix $16$ of the coefficients $\\beta^{(2)}$. Furthermore, requiring that in the small $x$ expansion there are no $\\log^3x$ and $\\log^4x$ terms, we obtain $3$ more coefficients.\nThe remaining constraints can be found by injecting the integrability data into the analytical bootstrap method, namely the spectrum obtained with the QSC, the structure constants fixed at previous orders using $h_{\\text{weak}}^{(0)}$ and $h_{\\text{weak}}^{(1)}$, as well as the integrated correlator \\eqref{eq:constr1} at weak coupling,~{\\it cf.}~\\eqref{int1weak}. Let us describe these steps in more detail.\n\nFirst, we expand at small $x$ and compare the terms $x^n\\log^m x$ for $n,m=0,1,2$ with the same expansion of \\eqref{eq:OPEf}. At this order, only the $3$ lowest non-trivial states (i.e., the ground state with $J=1$\nand the two states with $J=2$) contribute to these coefficients. Then, using the spectral data\\footnote{Because of the pole in the conformal block at $\\Delta=1$, we actually need to use all terms up to $O(g^6)$ for $\\Delta_1$.} for these states reported in \\eqref{D1w}-\\eqref{D3w}, \ntogether with their structure constants at leading~\\eqref{cleading} and subleading order~\\eqref{csubleading}, we fix $9$ more parameters.\\footnote{Of these, 3 parameters are reduced by using the assumption of fixed transcendentality of the $\\beta$ coefficients.} At this stage, we are left with 4 unfixed coefficients $\\beta^{(2)}$. 
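\n\nThe counting of the free coefficients in \\eqref{basis} used above is a simple enumeration of binary index strings, which can be reproduced in a few lines (a Python sketch for illustration):\n\\begin{verbatim}\n# 2 polynomial terms (the constant and x) plus one HPL H_{n_1,...,n_tau}\n# for every binary string (n_i in {0,1}) of length 1 <= tau <= 2*l\ndef basis_size(l):\n    return 2 + sum(2**tau for tau in range(1, 2*l + 1))\nprint([basis_size(l) for l in (1, 2, 3)])   # [8, 32, 128]\n\\end{verbatim}\nconsistent with the $8$ terms quoted at one loop and the $32$ elements at two loops.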
\n\nAs a by-product of the procedure explained above, and also including data for states with $J>2$, we generate new constraints for the structure constants. One of these constraints is particularly useful, since it expresses the $O(g^6)$ term in the expansion of $C_1^2$ in terms of the 4 remaining parameters $\\beta$ as follows\n\\begin{equation}\\begin{split}\\label{a13coeff}\na_{1}^{(3)}&=-\\frac{28 \\pi ^4}{15}-16 \\pi ^2+320+64 \\zeta_3+ \\left(\\frac{2 \\pi ^2}{3}+8 \\zeta_3-\\frac{7 \\pi ^4}{45}\\right)\\beta^{(0)}_{0,1,0,1}\\\\\n&-\\!\\left(\\!4\\!+\\!4 \\zeta_3-\\frac{4 \\pi ^4}{45}\\right) \\!\\beta_{0,0,0,1}^{(0)}\n- \\left(\\frac{2 \\pi ^2}{3}+4 \\zeta_3\\!-\\!\\frac{\\pi ^4}{9}\\right)\\!\\beta_{0,1,1,0}^{(0)}\\!-\\!\\left(\\!4\\!+\\!\\frac{2 \\pi ^2}{3}-8 \\zeta_3\\right)\\!\\beta_{0,0,1}^{(1)} . \n\\end{split}\\end{equation}\nAs shown in section \\ref{sec:predictC1}, using the integral relation \\eqref{eq:constr1} in its weak-coupling form, one can generate terms for $C_1^2$ at higher orders by exploiting the anomaly. We have already used this method to give a prediction for $a_1^{(3)}$ from the knowledge of the correlator at 1 loop. Comparing this result~\\eqref{C1w3} with \\eqref{a13coeff}, and using that the $\\beta$ coefficients are rational, we conclude that \n\\begin{equation}\n\\beta_{0,0,1}^{(1)}=\\beta_{0,0,0,1}^{(0)}=0\\,,\\qquad\\text{and}\\qquad\n\\beta_{0,1,1,0}^{(0)}=\\beta_{0,1,0,1}^{(0)}=-4 ,\n\\end{equation}\nleading to the following expression for $h(x)$ at order $g^4$:\n\\begin{equation}\\begin{split}\\label{h2}\nh^{(2)}_{\\text{weak}}(x)&=4 H_{1,3}-4 H_{2,2}+8 H_{3,0}+8 H_{3,1}-8 H_{1,1,2}+4 H_{1,2,0}+8 H_{1,2,1}-8 H_{2,0,0}\\\\\n&-4 H_{2,1,0}-8 H_{1,1,0,0}+\\frac{4}{3} \\pi ^2 H_2-\\frac{4}{3} \\pi ^2 H_{1,0}+\\frac{4}{3} \\pi ^2 H_{1,1}-12 \\zeta_3 H_{1}+\\frac{8 \\pi ^4}{15}x .\n\\end{split}\\end{equation}\nAs a cross-check, this formula can be tested using the integral relation \\eqref{eq:constr2}. Indeed, plugging \\eqref{h2} into \\eqref{defh} and then performing the integral, we obtain\n\\begin{equation}\n g^4\\int_{0}^1 dx \\frac{h_{\\text{weak}}^{(2)}(x)}{1-x} = \n g^4\\left(-\\frac{8 \\pi ^4}{15}-\\frac{8 \\pi ^2 \\zeta_3}{3}+90 \\zeta_5\\right)\n = \\left.\\frac{\\mathbb{C}}{4\\;\\mathbb{B}^2} + \\mathbb{F}-3\\right|_{\\text{order }g^4} ,\n\\end{equation}\nin agreement with the expansion of the r.h.s. of \\eqref{eq:constr2}. \n\n\\paragraph{Structure constants from $h_{\\text{weak}}^{(2)}$. }\n\nHaving fixed the reduced correlator at 2 loops, we can mine new data for the structure constants. Indeed, comparing this answer to the OPE, and using the knowledge of the spectrum at one loop, one can disentangle the first relation of \\eqref{csubleading}, obtaining the two separate subleading terms as follows\n\\begin{equation}\na_{2}^{(1)}=\\frac{1}{150} \\left(10 \\pi ^2-75-9 \\sqrt{5}\\right)\\,,\\qquad\na_{3}^{(1)}=\\frac{1}{150} \\left(10 \\pi ^2-75+9 \\sqrt{5}\\right).\n\\end{equation}\nSimilarly, one can also resolve the 3-point functions associated with the leading twist states\\footnote{Leading twist refers to states with oscillator content $[2,2|3,3,2,2|2,2]$, in the classification introduced in \\cite{Cavaglia:2021bnz}. Notice that in this case the indices $11$, $12$ and $13$ do not represent the order in which they appear at weak coupling, and they differ from the notation used in \\cite{Cavaglia:2021bnz}. 
The scaling dimensions of those states are computed with the QSC and read\n\\begin{equation}\n\\Delta_{11}=4+w_1 g^2+z_3 g^4+O(g^6),\\qquad\n\\Delta_{12}=4+w_2 g^2+z_2 g^4+O(g^6),\\qquad\n\\Delta_{13}=4+w_3 g^2+z_1 g^4+O(g^6),\\nonumber\n\\end{equation}\nwith $w_{1,2,3}$ and $z_{1,2,3}$ the solutions of algebraic equations; in particular,\n\\beqa\n&& 3 w_i^3-73 w_i^2+553 w_i-1274=0\\qquad\\text{with}\\qquad w_1<w_2<w_3\\;.\n\\eeqa}\n\n\\paragraph{Fixing the correlator at 3 loops. }\nRepeating the same strategy at $\\ell=3$, from the cancellation of the $\\log^m x$ terms with $m>3$ in the small-$x$ expansion, we get 5 additional $\\beta$'s. Then, using the crossing equation \\eqref{hcrossing}, we further constrain the system obtaining 50 additional coefficients.\n\nThe next step is to include integrability data in the derivation.\nExpanding $h_{\\text{weak}}^{(3)}$ at small $x$, we compare terms proportional to $x^n\\log^m x$ with the same terms appearing in the expansion of \\eqref{eq:OPEf}, where we have injected spectral data from the QSC (see appendix \\ref{apd:anylSpec}) and structure constants from the previous orders. Inspecting the contribution of the $J=1$ state fixes 15 coefficients, while the contributions of higher states are less constraining, since they are more and more degenerate. Indeed, the $J=2$ states give 3 constraints, and $J=3$ and $J=4$ only 1 each. As before, this procedure generates several constraints on the structure constants, and in particular we obtain $a_1^{(4)}$ in terms of the $\\beta^{(3)}$ coefficients. Since this quantity was previously computed in \\eqref{a14} exploiting the integral relation \\eqref{eq:constr1}, one can use it to fix 8 additional coefficients. Finally, plugging $h_{\\text{weak}}^{(3)}$ in the second integral relation \\eqref{eq:constr2}, performing the integrals and comparing with the $O(g^6)$ term of the r.h.s., we get 3 more constraints\\footnote{A partial result for the $4$-point function at $3$ loops is available upon request.}.\n\nThe remaining 14 coefficients are unconstrained. \nIt is possible that more sophisticated analytical bootstrap techniques, such as the inversion formula developed in 1D in \\cite{Mazac:2018qmi}, might be helpful in this context. In any case, our findings seem to suggest that using analytical bootstrap equations for a single correlator may not be enough to pinpoint the solution completely, even with a full knowledge of the spectrum. One expects that studying multiple correlators would have the biggest impact on fixing the solution at higher orders, as also observed at strong coupling~\\cite{Ferrero:2021bsb}. \n\n\n\n\n\n\n\n\\subsection{Results} To summarise, we collect here the analytic results for structure constants obtained with the above approach. 
For the ground state:\n\\begin{equation}\\begin{split}\\label{eq:C1finalresult}\nC_1^2(g) = &2 g^2 - \\left(24-\\frac{4 \\pi ^2}{3}\\right)g^4+\n\\left(320-16 \\pi\n ^2+48 \\zeta_3-\\frac{76 \\pi ^4}{45}\\right)g^6\\\\\n &-\\left(4480-\\frac{832 \\pi ^2}{3}+256 \\zeta_3-\\frac{224 \\pi ^4}{15}+880 \\zeta_5-\\frac{64 \\pi ^6}{45}\\right)g^8+\nO(g^{10}),\n\\end{split}\\end{equation}\nand for the excited states at $J=2$\n\\beqa\nC_2^2&=&\\frac{1}{10}+\\frac{1}{150} \\left(10 \\pi ^2-75-9 \\sqrt{5}\\right)g^2+\nO(g^{4}),\\\\\nC_3^2&=&\\frac{1}{10}+\\frac{1}{150} \\left(10 \\pi ^2-75+9 \\sqrt{5}\\right)g^2+\nO(g^{4}),\n\\eeqa\nand for the excited states non-vanishing at leading order at $J=3,4$\n\\beqa\nC_6^2=\\frac{1}{14}+\\frac{2}{7\\sqrt{37}}+\nO(g^{2})\\qquad\nC_8^2=\\frac{1}{14}-\\frac{2}{7\\sqrt{37}}+\nO(g^{2}),\n\\eeqa\n\\beqa\nC_{11}^2=s_1+\nO(g^{2})\\qquad\nC_{12}^2=s_2+\nO(g^{2})\\qquad\nC_{13}^2=s_3+\nO(g^{2}),\n\\eeqa\nwhere the constants $s_i$ are the roots of (\\ref{eq:defpi}). Our 2-loop result for the 4-point function is given in (\\ref{eq:G2loop}). Weak coupling data for scaling dimensions, coming from the QSC, are collected in Appendix \\ref{app:spectrum}.\n\n\n\n\n\\section{Discussion}\\label{sec:discussion}\nIn this paper we have continued experimenting with a combination of integrability and conformal bootstrap methods to study observables in $\\mathcal{N}$=4 SYM. This has led to the most accurate results to date for a non-supersymmetric OPE coefficient of short operators at finite coupling. For instance, as highlighted in fig. \\ref{fig:zoom}, with the methods of this paper we determine one such structure constant with error $10^{-8}$ for 't Hooft coupling $\\lambda \\sim 24 \\pi^2 $. Presently, this would not be achievable with either integrability or conformal bootstrap methods on their own. \n\n In this work, we have introduced new constraints on integrated correlators in the 1D defect CFT, connecting them to another quantity available from integrability, the cusp anomalous dimension. \nThe addition of these constraints was shown to greatly enhance the precision of the numerical bootstrap algorithm, allowing us to reach at least 7 digits of precision for $C_1^2$ over a wide range of the coupling for $g>1$ (becoming 9 digits for $g\\gtrsim 3$), and at least 2 digits of precision for the next two OPE coefficients for $g>1$ (see fig.~\\ref{fig:plotintboth2}). \n\nThese new constraints were also very powerful in an analytic functional bootstrap approach, which allowed us to fix the form of the 4-point function at 2 loops, $C_1^2$ at 4 loops, and the OPE coefficients of the next two excited states at 2 loops.\n\nThis development resonates nicely with the recent discovery of integrated-correlator constraints for the bulk 4D theory, in that case arising from localisation~\\cite{Binder:2019jwn} (see also \\cite{Dorigoni:2021bvj,Dorigoni:2021guq}), which were also shown to have a great impact on the bootstrap~\\cite{Chester:2021aun}. \n\nIt would be interesting to see if more general deformations of the MWL can lead to even more constraints of the type studied here. At the very least, \n there should be generalisations of the present identities involving integrated $n$-point functions with $n>4$. Those would be related to higher orders in the near-BPS expansion of the cusp anomalous dimension, which are also in principle accessible with integrability. Such generalised constraints might be useful in the bootstrap. 
\n\nAn important question is whether using Bootstrability it is possible to compute the OPE coefficients with (ideally) arbitrary precision, or if there is a fundamental limit. One way to improve the precision is certainly to include input from more states in the spectrum in the Numerical Bootstrap algorithms (here we used only 10 states). Another possibility is to use analytical bootstrap techniques such as the ones developed for 1D CFTs in \\cite{Mazac:2016qev,Mazac:2018mdx,Mazac:2018qmi,Mazac:2018ycv,Paulos:2019fkw,Ferrero:2019luz,Bianchi:2021piu}, which might prove to be advantageous. \n\nHowever, we believe it is very unlikely that the study of a single correlator will be enough to fix all OPE coefficients, even from a completely known spectrum. This is in fact what we observe analytically at weak coupling, where the functional bootstrap approach did not completely fix the 4-point function at 3 loops. One can argue that this is in fact to be expected, as the CFT is defined not by one, but by all its correlation functions. Also in the Numerical Conformal Bootstrap, it is only the study of multiple correlators that allows one to find small islands for allowed conformal data~\\cite{Kos:2016ysd}. Thus, we believe that multi-correlator Bootstrability is a very promising direction for the future. \n\nOur present setup is the 1D defect CFT, which presents some simplifications. In particular, it is a consistent CFT at the planar level. \nThis is not the case for the bulk 4D CFT, where there is intermingling of single and double traces contributing to planar 4-point functions. This presents a challenge, since the QSC does not know about the anomalous dimensions of double traces. However, we expect that analytic conformal bootstrap techniques such as the ones in \\cite{Caron-Huot:2017vep,Alday:2017vkk,Caron-Huot:2020adz,Bissi:2022mrs,Bissi:2021spj} might help to resolve this problem. In particular, it is inspiring that the double discontinuity of a 4-point function, which parametrises the full correlator thanks to the Lorentzian inversion formula~\\cite{Caron-Huot:2017vep}, is determined only by single trace operators in a large $N$ theory~\\cite{Alday:2017vkk}. \nFor instance, at strong coupling $\\lambda \\sim \\infty$, the correlator was reconstructed, at several orders in $1\/N$, starting from information on single-trace protected operators, which are the only ones contributing in this regime~\\cite{Aharony:2016dwx,Alday:2017xua}. \nOur hope is that Bootstrability could be the way to extend these beautiful results to the full finite $\\lambda$ region. \n\nIt is also an interesting direction for the future to extend these techniques to other integrable gauge theories. A natural setup would be the one of the Wilson line defect CFTs living in ABJM theory defined in \\cite{Bianchi:2017ozk,Bianchi:2018scb}, recently studied from the Bootstrap approach in \\cite{Bianchi:2020hsz}. For this setup, the cusp anomalous dimension has also been intensively studied in \\cite{Griguolo:2012iq,Bonini:2016fnc} and the Bremsstrahlung function is known exactly \\cite{Bianchi:2014laa,Correa:2014aga,Bianchi:2017svd,Bianchi:2018scb} (see also \\cite{Drukker:2019bev}), although an integrability formulation is still lacking. The QSC for the spectrum of local operators was found for this theory in \\cite{Cavaglia:2014exa,Bombardelli:2017vhk} (for its numerical solution see \\cite{Bombardelli:2018bqz}). Finding its deformation capturing the defect CFT would be very interesting and would open the way to applying the methods presented here. 
\nThe AdS$_3$\/CFT$_2$ duality, for which a QSC was recently proposed in \\cite{Cavaglia:2021eqr,Ekhammar:2021pys}, is also a fascinating laboratory to develop Bootstrability in the context of 2D CFTs. \n\n Finally, the fishnet limits of $\\mathcal{N}$=4 SYM~\\cite{Gurdogan:2015csr} and ABJM \\cite{Caetano:2016ydc} theories are also very interesting playgrounds for combining integrability and bootstrap techniques, which present additional challenges due to the non-unitarity. We should mention that the Fishnet theories are a very promising setting to understand correlation functions analytically, as shown in many recent works \n \\cite{Grabner:2017pgm,Basso:2017jwq,Gromov:2018hut,Kazakov:2018gcy,Pittelli:2019ceq,Derkachov:2019tzo,Derkachov:2020zvv,Shahpo:2021xax}. The QSC is also under control in this limit \\cite{Gromov:2017cja,Gromov:2019jfh,Cavaglia:2020hdb,Levkovich-Maslyuk:2020rlp} (for the open spin chain case, relevant for the fishnet limit of the present setup, see \\cite{Gromov:2021ahm}), and thanks to their simplicity the Fishnet theories appear to be the \n perfect laboratory to understand the expected connection between QSC and correlators. \n There has been progress in this direction~\\cite{Cavaglia:2018lxi,Cavaglia:2021mft} (for other limits see e.g. \\cite{Jiang:2015lda,Giombi:2018qox,Giombi:2022anm}) but a full solution is still missing. Having non-perturbative data obtained with the help of bootstrap methods could be important to inform these efforts. Moreover, perhaps the study of these limits will reveal new ways in which integrability and bootstrap should be fused at a more fundamental level to study AdS\/CFT-related theories. \n\n\\acknowledgments\nWe thank Simon Caron-Huot, Nadav Drukker, Pietro Ferrero, Alessandro Georgoudis, Gregory Korchemsky, Petr Kravchuk, Andrea Manenti, Carlo Meneghelli, Amit Sever, Evgeny Sobko, Nika Sokolova, Andreas Stergiou, Roberto Tateo, Emilio Trevisani and Pedro Vieira for inspiring discussions. \nThe work of AC, NG and MP is\nsupported by the European Research Council (ERC) under\nthe European Union's Horizon 2020 research and innovation programme (grant agreement No. 865075) EXACTC. NG is also partially supported by the STFC grant (ST\/P000258\/1).\n\n\\section{Experimental results}\nEuFe$_2$(As$_{1-x}$P$_{x}$)$_2$ single crystals used in this\nstudy were synthesized by a Bridgman method and characterized as\npreviously described.\\cite{Jeevan11} Optical investigations on crystals with $x = 0$ and $x = 0.18$ have already been reported.\\cite{Wu09,Wu11} Here we describe the magnetic behavior of EuFe$_2$As$_2$ and\nEuFe$_2$\\-(As$_{0.88}$P$_{0.12}$)$_2$, measured with a\nQuantum Design MPMS-XL superconducting quantum interference device\n(SQUID). We provide a complete characterization as a function of temperature ($T$) and magnetic field ($H$) for the main crystallographic directions. EuFe$_2$As$_2$ has orthorhombic\nsymmetry with virtually identical $a$ and $b$ axes. Even though twinning of the crystals did not allow a characterization within the $ab$-plane, neutron scattering data indicate that the Eu$^{2+}$ moments align along the $a$-direction. 
\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{Fig1.eps}\n \\caption{Zero-field cooled (ZFC) and field-cooled (FC) magnetization curves (solid red circles and open black squares, respectively) for EuFe$_2$(As$_{0.88}$P$_{0.12}$)$_2$, measured in $H=20$~Oe parallel (a) and perpendicular (b) to the $ab$-plane.}\n \\label{G_T}\n\\end{figure}\nIn Fig.~\\ref{G_T} the temperature-dependent magnetization of\nEuFe$_2$(As$_{0.88}$P$_{0.12}$)$_2$ is plotted for $H=20$~Oe, measured parallel and perpendicular to the $ab$-plane.\n\n\\begin{prop}\\label{prop:conf}\nSuppose the teacher's smoothed prediction on the ground-truth class admits the decomposition $\\tilde{{p}}_t = \\tilde{{q}}_t + \\tilde{{c}}_t + \\eta$, where $\\tilde{{c}}_t > 0$ is teacher's relative prediction confidence on the ground-truth $t\\in [K]$ and $\\eta$ is a zero-mean random noise. Then the gradient rescaling factor over all classes when applying Knowledge Distillation is given by:\n$$\\mathbb{E}_\\eta\\left[\\sum_{i\\in[K]}|\\partial^{KD}_i|\\Big\/\\sum_{i\\in[K]}|\\partial_i|\\right] = (1 - \\lambda) + \\frac{\\lambda}{T}\\left(\\frac{\\tilde{{c}}_t}{1 - {q}_t}\\right).$$\n\\end{prop}\n\n\\begin{proof}\nWe first consider the ground-truth class $t\\in[K]$. Using $\\tilde{{p}}_t = \\tilde{{q}}_t + \\tilde{{c}}_t + \\eta$ and $\\mathbb{E}[\\eta] = 0$ in \\eqref{eq:grad_scale}, we get:\n\\begin{align*}\n\\mathbb{E}_{\\eta}[|\\omega_t|] &= (1-\\lambda) + \\frac{\\lambda}{T}\\left(\\frac{\\tilde{{c}}_t}{1 - {q}_t}\\right)\n\\end{align*}\n\nNow, the sum of the incorrect class gradients is given by\\footnote{Under the assumption of having a better quality teacher model, we assume $p_t > q_t$, and correspondingly $q_i \\ge p_i,~\\forall i \\in [K]\\backslash t$.}:\n\n\\begin{align*}\n\\sum_{i \\neq t} |\\partial^{\\mathrm{KD}}_i| &= \\sum_{i\\neq t}\\big[ (1-\\lambda){{q}}_i + \\frac{\\lambda}{T}(\\tilde{{q}}_i - \\tilde{{p}}_i)\\big]\\\\\n& = (1-\\lambda)(1-{{q}}_t) + \\frac{\\lambda}{T}(\\tilde{{p}}_t - \\tilde{{q}}_t) = |\\partial^{\\mathrm{KD}}_t|\n\\end{align*}\n\nThe penultimate equality follows from ${\\bm{q}},~{\\bm{p}}$ and $\\tilde{{\\bm{q}}}$ being probability masses. The same decomposition applies to $\\partial_i$, which completes the proof.\n\\end{proof}\nAt a given snapshot during training, we could assume ${\\tilde{{c}}_t}$ to be a constant for all examples. Then for any pair of examples $({\\bm{x}}, {\\bm{y}})$ and $({\\bm{x}}', {\\bm{y}}') \\in {\\mathcal{X}} \\times {\\mathcal{Y}}$, if the teacher is more confident on one of them, i.e., ${p}_t > {p}_t'$, then the average $\\omega$ over all classes will be greater than $\\omega'$ according to Proposition~\\ref{prop:conf}.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/p_t-w_t.png}\n \\caption{When applying $\\mathrm{KD}$ for ResNet-20 student model with ResNet-56 as the teacher on CIFAR-100, we plot $\\tilde{{p}}_t$ vs. $\\omega_t$ (in log scale) with 10K samples at the end of training. Clearly, we can see that the two are positively correlated.}\n \\label{fig:p_t-w_t}\n\n\\end{figure}\n\n\n\nIn Figure~\\ref{fig:p_t-w_t}, we plot the relationship between $\\omega_t$ and ${p}_t$ at the end of training a ResNet-20~\\citep{he2016deep} student model with ResNet-56 as the teacher on the CIFAR-100~\\citep{krizhevsky2009learning} dataset (more details of this dataset can be found in Suppl. Section~\\hyperref[sec:si-detail]{A}). The plot shows a clear positive correlation between the two. Also, we found the correlation to be stronger at the beginning of training.
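\n\nTo make the magnitude of this effect concrete, consider a toy evaluation of the rescaling factor from Proposition~\\ref{prop:conf} (all numbers below are arbitrary illustration values):\n\\begin{verbatim}\n# expected rescaling factor on the ground-truth class\nlam, T = 0.5, 4.0\ndef w_t(c_tilde, q_t):\n    return (1 - lam) + lam\/T*c_tilde\/(1 - q_t)\nprint(w_t(0.30, 0.6))   # confident teacher:      0.59375\nprint(w_t(0.05, 0.6))   # less confident teacher: 0.515625\n\\end{verbatim}\nHolding the student fixed, a larger teacher confidence $\\tilde{{c}}_t$ directly translates into a larger example weight.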
\n\nIn~\\citep{furlanello2018born}, the authors conjecture that example weight is associated with the largest value in ${\\bm{p}}$. However, in the above proof, we show that once the teacher makes a wrong prediction, using the largest value gives a contradictory result. It is also trivial to prove that, with only two classes (in which case $\\omega_{i \\neq t} = \\omega_t$), the only effect of using $\\mathrm{KD}$ is example re-weighting. So we can regard the use of $\\mathrm{KD}$ on binary classification~\\citep{Anil2018Large} as taking the binary log-loss and multiplying it by the weight $\\omega_t$.\n\nIn summary, we think incorporating teacher's supervision in knowledge distillation has an effect of example re-weighting, and the weight is associated with teacher's prediction confidence ${p}_t$ on the ground-truth. The weight is higher when ${p}_t$ is larger. Alternatively, this suggests that $\\mathrm{KD}$ assigns larger weights to training examples that are considered easier from teacher's perspective, and vice versa, which has a similar flavor to Curriculum Learning. \\citet{bengio2009curriculum} suggests this may not only speed up training convergence, but also help to reach a better local minimum. Also, according to~\\cite{roux2016tighter}, re-weighting examples during training with model prediction confidence results in a tighter bound on the classification error, leading to better generalization.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.22\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-k=100_t=3.png}\n \\subcaption{}\n \\label{fig:heatmap-k=100_t=3}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.22\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-k=100_t=10.png}\n \\subcaption{}\n \\label{fig:heatmap-k=100_t=10}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.22\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-k=10-t=10.png}\n \\subcaption{}\n \\label{fig:heatmap-k=10-t=10}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.26\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-cosine-sims.png}\n \\subcaption{}\n \\label{fig:heatmap-cosine-sims}\n \\end{subfigure}\n \\caption{Using 10K samples from CIFAR-100 for ResNet-56, we plot Pearson correlations of output probability ${\\bm{p}}$ with varying $\\mathrm{softmax}$ temperature (a) $T=3$, (b) $T=10$, and (c) $T=10$ where only top-10 largest values in ${\\bm{p}}$ are preserved. In (d), we show cosine similarities computed from the weights of the final logits layer. Since every 5 classes are within the same super-class, class correlations can be interpreted as the color `squares' on the block-diagonal.}\n\\end{figure*}\n\n\\subsection{Prior of Optimal Geometry in Logit Layer}\\label{sec:analysis_class_rel}\nFor binary classification, we showed in Section~\\ref{sec:analysis_re-weight} that $\\mathrm{KD}$ is essentially performing example re-weighting. However, for multi-class classification, we argue that $\\mathrm{KD}$ also leverages the relationships between classes as captured by teacher's probability mass distribution ${\\bm{p}}$ over the incorrect classes. As argued by~\\citet{hinton2015distilling}, on the MNIST dataset, the model assigns a relatively high probability to class `7' when the ground-truth class is `2'. In this section, we first confirm their hypothesis using empirical studies. Then, we provide new insights to interpret the class relationships as a prior on the optimal geometry of the student's last logit layer.\n\nTo illustrate that the teacher's distribution ${\\bm{p}}$ captures class relationships, we train ResNet-56 on the CIFAR-100 dataset. 
CIFAR-100 contains 100 classes over 20 super-classes, with each super-class containing 5 sub-classes. Figures~\\ref{fig:heatmap-k=100_t=3} and~\\ref{fig:heatmap-k=100_t=10} show the heatmaps of the Pearson correlation coefficients of teacher's distribution ${\\bm{p}}$ for different temperatures. We sort the class indexes to ensure that the 5 classes from the same super-class appear next to each other. We observe in Figure~\\ref{fig:heatmap-k=100_t=3} that with lower temperature, the heatmap shows no pattern of class relationships. But as we increase the temperature in Figure~\\ref{fig:heatmap-k=100_t=10}, classes within the same super-class clearly have high correlations to each other, as seen in the block diagonal structure. This observation verifies that teacher's distribution ${\\bm{p}}$ indeed reveals class relationships, with proper tuning on the softmax temperature. %\n\nBefore diving into the details of the geometric interpretation, we recall the case of label smoothing~\\citep{szegedy2016rethinking}.\n\\begin{itemize}\n \\item From an optimization point of view,~\\citet{he2019bag} showed that there is an optimal constant margin $\\log(K(1-\\epsilon)\/\\epsilon + 1)$ between the logit of the ground-truth ${z}_t$ and all other logits ${z}_{-t}$, when using a label smoothing factor of $\\epsilon$. For a fixed number of classes $K$, the margin is a monotonically decreasing function of $\\epsilon$.\n \\item From the geometry perspective, \\citet{muller2019does} showed the logit ${z}_k = {\\bm{h}}^\\top {\\bm{w}}_k$ for any class $k$ is a measure of the squared Euclidean distance $\\|{\\bm{h}} - {\\bm{w}}_k\\|^2$ between the activations of the penultimate layer ${\\bm{h}}$\\footnote{Here ${\\bm{h}}$ can be concatenated with a ``1'' to account for the bias.}, and weights ${\\bm{w}}_k$ for class $k$ in the last logit layer. \n\\end{itemize}\nThe above results suggest that label smoothing encourages $\\|{\\bm{h}} - {\\bm{w}}_t\\|^2 \\le \\|{\\bm{h}} - {\\bm{w}}_{-t}\\|^2$, and pushes all the other incorrect classes equally apart.\nFollowing a similar proof technique, we extend the above results to knowledge distillation:\n\\begin{prop}\\label{prop:distance}\nGiven $({\\bm{x}}, {\\bm{y}}) \\in {\\mathcal{X}} \\times {\\mathcal{Y}}$, at the optimal solution ${\\bm{w}}^*_k,~\\forall k\\in[K]$ of the student's final logits layer at $T=1$, Knowledge Distillation enforces different inter-class distances based on teacher's probability distribution ${\\bm{p}}$: \n$$\\|{\\bm{h}} - {\\bm{w}}^*_i\\|^2 < \\|{\\bm{h}} - {\\bm{w}}^*_j\\|^2~~~\\text{iff}~~~{p}_i > {p}_j,~\\forall i,j \\in [K]\\backslash t$$\n\\end{prop}\n\n\\begin{proof}\nAt the optimal solution of the student, equating the gradient in~\\Eqref{eq:dist_grad} to $0$, for $T=1$, we get:\n\\begin{equation}\n {q}^*_k=\n \\begin{cases}\n \\lambda{p}_k + (1-\\lambda) & \\text{if}\\ k=t, \\\\\n \\lambda{p}_k & \\text{otherwise}.\n \\end{cases}\n\\label{eq:optimal_stud}\n\\end{equation}\n\nUsing a similar proof technique as~\\citet{muller2019does}, $\\|{\\bm{h}} - {\\bm{w}}^*_k\\|^2 = \\|{\\bm{h}}\\|^2 + \\|{\\bm{w}}^*_k\\|^2 - 2 {\\bm{h}}^\\top{\\bm{w}}^*_k$, where ${\\bm{h}}$ denotes the penultimate layer activations, and ${\\bm{w}}^*_k$ are the weights of the last logits layer for class $k \\in [K]$. 
Note that $\\|{\\bm{h}}\\|^2$ is factored out when computing the $\\mathrm{softmax}$, and $\\|{\\bm{w}}^*_k\\|^2$ is usually a (regularized) constant across all classes.\n\nEquating ${z}^*_k={\\bm{h}}^\\top{\\bm{w}}^*_k$, and using the property $\\mathrm{softmax}({\\bm{z}}) = \\mathrm{softmax}({\\bm{z}} + c),~\\forall c\\in{\\mathbb{R}}$, we get:\n\\begin{align*}\n{q}^*_k &= \\mathrm{softmax}({z}^*_k) = \\mathrm{softmax}({\\bm{h}}^\\top{\\bm{w}}^*_k)\\\\ &=\\mathrm{softmax}\\Big(-\\frac{1}{2}\\|{\\bm{h}} - {\\bm{w}}^*_k\\|^2\\Big)\n= \\lambda{p}_k,~\\forall k \\in [K]\\backslash t\n\\end{align*}\nThe last equality follows from \\eqref{eq:optimal_stud}, and thus proves the claimed geometry prior at optimality.\n\\end{proof}\nFrom Figure~\\ref{fig:heatmap-k=100_t=10}, we know that the teacher assigns higher probability to the classes within the same super-class, and hence $\\mathrm{KD}$ encourages hierarchical clustering of the logits layer weights based on the class relationships. \n\n\n\\subsection{Summarizing Effects of Knowledge Distillation}\\label{sec:summary_effects}\nPrimarily, $\\mathrm{KD}$ brings a regularization\/calibration effect by introducing a smoothed teacher distribution, although this effect is not considered knowledge.\nBesides, we believe there are two types of knowledge the teacher model distills to its student -- the real teacher's probability distribution ${\\bm{p}}$ benefits the student not only via its \\emph{confidence on the ground-truth class}, which re-weights examples, but also via its probability mass on the \\emph{incorrect classes}.\nIntuitively, these values reflect class relationships, and therefore provide the student with more guidance.\nWe further interpret these values as applying a different label smoothing factor $\\epsilon$ to each of the incorrect classes.\nThe differences in $\\epsilon$ encourage the student's optimal inter-class distances to be different for different classes.\nIn other words, the distillation loss gets lower as more of the desired geometric inequalities on the output logit layer hold.\nAs a result, the two sources of knowledge are complementary to each other, and could potentially facilitate the student model's training process and further improve model generalization.
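\n\nA toy computation, following \\eqref{eq:optimal_stud}, makes this geometric picture explicit (a sketch with arbitrary numbers; the common additive constant from $\\|{\\bm{h}}\\|^2$ and the regularized $\\|{\\bm{w}}^*_k\\|^2$ is dropped):\n\\begin{verbatim}\nimport numpy as np\n# optimal student outputs at T = 1\nlam, t = 0.5, 0\np = np.array([0.6, 0.25, 0.1, 0.05])   # teacher distribution, class 0 is true\nq_star = lam*p\nq_star[t] += 1 - lam\n# squared distances up to a constant: ||h - w_k||^2 = -2*log(q*_k) + const\nd2 = -2*np.log(q_star)\nprint(np.argsort(d2))   # [0 1 2 3]: larger p_i means smaller distance\n\\end{verbatim}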
\n\n\\subsection*{A. Experimental details}\\label{sec:si-detail}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/synthetic_vis-sim=00-m=0.png}\n \\subcaption{}\n \\label{fig:synthetic-sim=0.0-m=0}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/synthetic_vis-sim=09-m=0.png}\n \\subcaption{}\n \\label{fig:synthetic-sim=0.9-m=0}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/synthetic_vis-sim=09-m=2.png}\n \\subcaption{}\n \\label{fig:synthetic-sim=0.9-m=2}\n \\end{subfigure}\n \n \\caption{Visualization of 5K synthetic data points (with input dimensionality $d=2$) on the 2-D plane. We use $K=4$, $C=2$, meaning there are two super-classes, one associated with labels \\{0,1\\} and the other with labels \\{2,3\\}. We vary $\\tau$ and $M$ and produce 3 plots: (a) $\\tau=0.0$, no sine function is used; (b) $\\tau=0.9$, no sine function is used and (c) $\\tau=0.9$, $M=2$.}\n \\label{fig:synthetic-sim-m}\n\\end{figure*}\n\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[scale=0.515]{figures\/histogram-pt.pdf}\n\\caption{On CIFAR-100, we plot the histograms of teacher's confidence on the ground-truth ${p}_t$, where the teacher model is ResNet-56 trained with different levels $\\epsilon$ of label smoothing. From left to right, we use: (a) $\\epsilon=0.0$ and $T=1$; (b) $\\epsilon=0.0$ and $T=5$; (c) $\\epsilon=0.1$ and $T=1$ and (d) $\\epsilon=0.1$ and $T=3$. The distribution of ${p}_t$ becomes skewed after enabling label smoothing.}\n\\label{fig:histogram-pt}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-k=100-t=10-LS=true.png}\n \\subcaption{}\n \\end{subfigure}\n ~~~~~~~~~~~~~~~~\n \\begin{subfigure}[b]{0.355\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/heatmap-cosine-sims-LS=true.png}\n \\subcaption{}\n \\end{subfigure}\n \n \\caption{Using 10K samples from CIFAR-100 for ResNet-56 trained with smoothed labels ($\\epsilon=0.1$), we plot (a) Pearson correlations of ${\\bm{p}}$ when $T=10$ and (b) Cosine similarities computed from the weights of the final logits layer. Since every 5 classes are within the same super-class, class correlations can be interpreted as the color `squares' on the diagonal.}\n \\label{fig:heatmap-label-smooth-failure}\n\\end{figure*}\n\n\\paragraph{Implementation of $\\mathrm{KD}$.} \nIn practice, the gradients from the RHS of~\\Eqref{eq:dist_grad} are much smaller compared to the gradients from the LHS when the temperature $T$ is large. This makes tuning the balancing hyper-parameter $\\lambda$ non-trivial. To mitigate this and bring the gradients from the two parts to a similar scale, we multiply the RHS of~\\Eqref{eq:dist_grad} by $T^2$, as suggested in~\\citep{hinton2015distilling}.\n\n\\paragraph{Implementation of $\\mathrm{KD}\\text{-sim}$.}\nWhen synthesizing the teacher distribution for $\\mathrm{KD}\\text{-sim}$, we use ${\\bm{\\rho}}^{\\text{sim}} = \\mathrm{softmax}(\\hat{{\\bm{w}}}_t \\Hat{{\\bm{W}}}^\\top)$, where $\\Hat{{\\bm{W}}}$ is the $l_2$-normalized logit layer weights and $\\hat{{\\bm{w}}}_t$ is the $t$-th row of $\\Hat{{\\bm{W}}}$. However, the cosine similarities computed for the $\\mathrm{softmax}$ are limited to the range $[0,1]$, so the resulting distribution is highly likely to be uniform. To mitigate this and bring more resolution to the cosine similarities, we use the following:\n$${\\bm{\\rho}}^{\\text{sim}} = \\mathrm{softmax}((\\hat{{\\bm{w}}}_t \\Hat{{\\bm{W}}}^\\top)^{\\alpha}\/\\beta).$$\nHere $\\alpha<1$ is a hyper-parameter to amplify the resolution of the cosine similarities, and $\\beta$ is another hyper-parameter indicating the temperature for the $\\mathrm{softmax}$.
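\n\nFor concreteness, a minimal NumPy sketch of this construction (the helper name and default values are ours, not from any official implementation):\n\\begin{verbatim}\nimport numpy as np\n# synthesize the KD-sim target for one example with ground-truth class t;\n# W is the teacher's last logits-layer weight matrix of shape [K, d]\ndef kd_sim_target(W, t, alpha=0.5, beta=0.5):\n    W_hat = W \/ np.linalg.norm(W, axis=1, keepdims=True)  # l2-normalize rows\n    sims = W_hat @ W_hat[t]             # cosine similarities to class t\n    sims = np.clip(sims, 0.0, 1.0)      # keep the [0, 1] range assumed above\n    logits = sims**alpha \/ beta         # amplify resolution, apply temperature\n    e = np.exp(logits - logits.max())   # numerically stable softmax\n    return e \/ e.sum()\n\\end{verbatim}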
\n\n\\begin{table}[t!]\n \\begin{subtable}[h]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{l | l}\n {Method} & {Hyper-parameter setting}\\\\\n \\hline\n LS & $\\epsilon=0.3$ for any $\\tau$.\\\\\n \\hline\n \\multirow{6}{*}{KD} & $\\lambda=0.7, T=3$ when $\\tau=0.0$\\\\\n & $\\lambda=0.7, T=5$ when $\\tau=0.1$\\\\\n & $\\lambda=0.7, T=2$ when $\\tau=0.2$\\\\\n & $\\lambda=0.7, T=3$ when $\\tau=0.3$\\\\\n & $\\lambda=0.7, T=10$ when $\\tau=0.4$\\\\\n & $\\lambda=0.7, T=5$ when $\\tau=0.5$\\\\\n \\hline\n \\multirow{6}{*}{KD-pt} & $\\lambda=0.7, T=3$ when $\\tau=0.0$\\\\\n & $\\lambda=0.7, T=5$ when $\\tau=0.1$\\\\\n & $\\lambda=0.7, T=2$ when $\\tau=0.2$\\\\\n & $\\lambda=0.7, T=3$ when $\\tau=0.3$\\\\\n & $\\lambda=0.7, T=10$ when $\\tau=0.4$\\\\\n & $\\lambda=0.7, T=5$ when $\\tau=0.5$\\\\\n \\hline\n KD-sim & $\\lambda=0.7,\\alpha=0.5,\\beta=0.5$ for any $\\tau$\\\\\n\n \\end{tabular}\n \\caption{Synthetic}\n \\label{tb:hparam_synthetic}\n \\end{subtable}\n \\hfill\n \\begin{subtable}[h]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{l | l}\n\n {Method} & {Hyper-parameter setting}\\\\\n \\hline\n LS & $\\epsilon=0.1$\\\\\n KD & $\\lambda=0.3,T=5$\\\\\n KD-pt & $\\lambda=0.3,T=5$\\\\\n KD-sim & $\\lambda=0.3,\\alpha=0.3,\\beta=0.3$\\\\\n KD-topk & $k=25,\\lambda=0.5,T=5$\\\\\n\n \\end{tabular}\n \\caption{CIFAR-100}\n \\label{tb:hparam_cifar}\n \\end{subtable}\n \\hfill\n \\begin{subtable}[h]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{l | l}\n\n {Method} & {Hyper-parameter setting}\\\\\n \\hline\n LS & $\\epsilon=0.1$\\\\\n KD & $\\lambda=0.7,T=20$\\\\\n KD-pt & $\\lambda=0.2,T=25$\\\\\n KD-sim & $\\lambda=0.3,\\alpha=0.5,\\beta=0.3$\\\\\n KD-topk & $k=500,\\lambda=0.5,T=3$\\\\\n\n \\end{tabular}\n \\caption{ImageNet}\n \\label{tb:hparam_imagenet}\n \\end{subtable}\n \\hfill\n \\begin{subtable}[h]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{l | l}\n\n {Method} & {Hyper-parameter setting}\\\\\n \\hline\n KD & $\\lambda=0.1,T=50$\\\\\n KD-topk & $k=100,\\lambda=0.1,T=50$\\\\\n\n \\end{tabular}\n \\caption{Penn Tree Bank (PTB)}\n \\label{tb:hparam_ptb}\n \\end{subtable}\n \\caption{Hyper-parameter settings for different methods on different datasets.}\n \\label{tab:temps}\n\\end{table}\n\n\\paragraph{Synthetic dataset.} \nFollowing the data generation procedure described in Section~\\ref{sec:exp_synthetic}, we first construct a toy synthetic dataset with input dimensionality $d=2$, $K=4$ classes and $C=2$ super-classes. Figure~\\ref{fig:synthetic-sim-m} shows a series of scatter plots with different settings of class similarity $\\tau$ and task difficulty $M$. This visualization gives a better understanding of the synthetic dataset and helps us imagine what it will look like in the high-dimensional setting used in our experiments. The models used in our experiments are 2-layer networks activated by $\\tanh$; in addition, we use residual connections~\\citep{he2016deep} and batch normalization~\\citep{ioffe2015batch} for each layer. 
Following~\\citep{ranjan2017l2, zhang2018heated}, we found that using the $l_2$-normalized logits layer weights $\\Hat{{\\bm{W}}}$ and penultimate layer activations $\\hat{{\\bm{h}}}$ provides more stable results. The model is trained by the ADAM optimizer~\\citep{kingma2014adam} for a total of 3 million steps without weight decay, and we report the best accuracy.\nPlease refer to Table~\\ref{tb:hparam_synthetic} for the best setting of hyper-parameters.\n\n\\paragraph{CIFAR-100 dataset.} \nCIFAR-100 is a relatively small dataset with low-resolution ($32\\times32$) images, containing $50k$ training images and $10k$ validation images, covering $100$ classes and $20$ super-classes. It is a perfectly balanced dataset -- we have the same number of images per class (\\emph{i.e.,} each class contains $500$ training set images) and $5$ classes per super-class.\nTo process the CIFAR-100 dataset, we use the official split from Tensorflow Dataset\\footnote{\\url{https:\/\/www.tensorflow.org\/datasets\/catalog\/cifar100}}. Both the data augmentation\\footnote{\\url{https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/resnet\/cifar_input.py}}\\footnote{We turn on the random brightness\/saturation\/contrast for better model performance.} for CIFAR-100 and the ResNet model\\footnote{\\url{https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/resnet\/resnet_model.py}} are based on Tensorflow official implementations. Also, following the conventions, we train all models from scratch using Stochastic Gradient Descent (SGD) with a weight decay of 1e-3 and a Nesterov momentum of 0.9 for a total of 100K steps. The initial learning rate is 1e-1, decayed to 1e-2 after 40K steps and to 1e-3 after 60K steps. We report the best accuracy for each model.\nPlease refer to Table~\\ref{tb:hparam_cifar} for the best setting of hyper-parameters.\n\n\\paragraph{ImageNet dataset.} \nImageNet contains about $1.3$M training images and $50k$ test images, all of which are high-resolution ($224\\times224$), covering $1000$ classes. The distribution over the classes is approximately uniform in the training set, and strictly uniform in the test set.\nOur data preprocessing and model on the ImageNet dataset follow the Tensorflow TPU official implementations\\footnote{\\url{https:\/\/github.com\/tensorflow\/tpu\/tree\/master\/models\/official\/resnet}}. Stochastic Gradient Descent (SGD) with a weight decay of 1e-4 and a Nesterov momentum of 0.9 is used. We train each model for 120 epochs, the mini-batch size is fixed to be 1024, and low precision (FP16) of model parameters is adopted. We did not change the learning rate schedule scheme from the original implementation.\nPlease refer to Table~\\ref{tb:hparam_imagenet} for the best setting of hyper-parameters.\n\n\\paragraph{Penn Tree Bank dataset.}\nWe use the Penn Tree Bank (PTB) dataset for the word-level language modeling task, using the standard train\/validation\/test split by~\\citep{mikolov2010recurrent}. The vocabulary is\ncapped at 10K unique words. We consider the state-of-the-art LSTM model called AWD-LSTM proposed by~\\citet{merity2017regularizing}. The model uses several regularization tricks on top of a 3-layer LSTM, including DropConnect, embedding dropout, tied weights, etc. We use different capacities (indicated by the hidden size and embedding size) for our Teacher and Student. To be specific, the Teacher has a hidden size of 1150 and an embedding size of 400, while the Student has a smaller hidden size of 600 and a smaller embedding size of 300. 
We follow the official implementation\\footnote{\\url{https:\/\/github.com\/salesforce\/awd-lstm-lm}} with simple changes for $\\mathrm{KD}\\text{-topk}$. Besides capacity, we keep the default hyper-parameters as in the official implementation to train our Teacher. However, when training the smaller Student model, we follow another implementation\\footnote{\\url{https:\/\/github.com\/zihangdai\/mos}} to: (1) lower the learning rate to 0.2, (2) increase training epochs to 1000, (3) use 0.4 for the embedding dropout rate and (4) use 0.225 for the RNN layer dropout rate.\n\n\\subsection*{B. Additional Experiments}\\label{sec:si-exp}\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{l|ccc}\n\\toprule\n\\textbf{Method} & \\textbf{\\% top-1 accuracy}\\\\\n\\hline\nStudent & 72.51\\\\\nKD & 75.94\\\\\n\\midrule\nKD-rel & 74.14\\\\\nKD-sim & 74.30\\\\\n\\midrule\nKD-pt+rel & 75.07\\\\\nKD-pt+sim & 75.24\\\\\n\\bottomrule\n\n\\end{tabular}\n\\caption{Performance of $\\mathrm{KD}\\text{-rel}$ on CIFAR-100. We report the mean result of 4 individual runs with different initializations. We use $\\beta_1=0.6,\\beta_2=\\frac{0.1}{4},\\beta_3=\\frac{0.3}{95}$.}\n\\label{tb:exp_kdrel}\n\\end{table}\n\\paragraph{Examine optimal geometry prior effect with class hierarchy.}\nIn Section~\\ref{sec:pseudo_kd}, we mentioned that the optimal geometry prior effect of $\\mathrm{KD}$ can also be examined using an existing class hierarchy. Suppose our data has a pre-defined class hierarchy (e.g., CIFAR-100); then we can use it directly for this purpose.\nTo be specific, let ${\\mathbb{S}}_t \\subset [K]\\backslash t$ denote the other classes that share the same parent as $t$.\nWe simply assign different probability masses to the different types of classes:\n\\begin{equation}\\label{eq:kdrel}\n\\rho_i^{\\textrm{rel}} = \\begin{cases}\n \\beta_1 & \\text{if } i = t,\\\\\n \\beta_2 & \\text{if } i \\in {\\mathbb{S}}_t,\\\\\n \\beta_3 & \\text{otherwise},\n \\end{cases}\n\\end{equation}\nwhere $\\beta_1 > \\beta_2 > \\beta_3$ are hyper-parameters we can search and optimize, and we name this method $\\mathrm{KD}\\text{-rel}$. \nAs shown in Table~\\ref{tb:exp_kdrel}, we found that $\\mathrm{KD}\\text{-rel}$ performs slightly worse than $\\mathrm{KD}\\text{-sim}$ on CIFAR-100. The trend still holds when we compound each effect with $\\mathrm{KD}\\text{-pt}$.\n\n\\subsection{Empirical Improvements for Distillation Quality}\\label{sec:improv_effective}\nThough $\\mathrm{KD}\\text{-sim}$ is able to capture class relationships using different probability masses over the incorrect classes, there is a major drawback -- all the examples from the same class will have the same relationships to the other classes. However, this is not a realistic assumption for most real-world datasets. For instance, on the MNIST dataset, only some versions of `2' look similar to `7'. 
To overcome this drawback, we propose $\\mathrm{KD}\\text{-topk}$, which simply borrows the top-$k$ largest values of teacher's probability ${\\bm{p}}$, and uniformly distributes the rest of the probability mass to the other classes.\nFigure~\\ref{fig:heatmap-k=10-t=10} shows that preserving only the top-$10$ largest values closely approximates the class correlations of the full teacher's distribution ${\\bm{p}}$, and is also less noisy.\nThe above finding shows that only a few incorrect classes that are strongly correlated with the ground-truth class are useful for $\\mathrm{KD}$, while the probability mass on the other classes is random noise (which is not negligible at high temperature $T$) and only has the effect of label smoothing in expectation.\n\nUsing the above intuition, we test $\\mathrm{KD}\\text{-topk}$ for the image classification task on CIFAR-100 and ImageNet. Beyond computer vision, we also test $\\mathrm{KD}\\text{-topk}$ for the language modeling task on the Penn Tree Bank (PTB) dataset. We apply the state-of-the-art LSTM model~\\citep{merity2017regularizing} with different capacities for the teacher and student. Details of the PTB dataset and model specifications are in Section~\\hyperref[sec:si-detail]{A} of the Suppl. For image classification, the performance of $\\mathrm{KD}\\text{-topk}$ is shown in the last row of Table~\\ref{tb:exp_real_data}. We see that $\\mathrm{KD}\\text{-topk}$ outperforms $\\mathrm{KD}$ on both datasets. For language modeling, the results are shown in Table~\\ref{tb:exp_real_data_lm}, which shows a similar trend for $\\mathrm{KD}\\text{-topk}$.\nWe plot the performance uplift of $\\mathrm{KD}\\text{-topk}$ as a function of $k$ in Figure~\\ref{fig:exp_topk}, which suggests that the best performance is achieved with proper tuning of $k < K$, which captures class relationships and also reduces noise. Note that the improvement of $\\mathrm{KD}\\text{-topk}$ over $\\mathrm{KD}$ comes essentially for free, requiring only simple modifications. This is also the reason we omit a more sophisticated comparison with other advanced distillation techniques, such as \\citep{romero2014fitnets, yim2017gift}.\n\n\n\\subsection{Potential Improvements for Training Efficiency}\nThe vanilla $\\mathrm{KD}$ method requires loading a pre-trained teacher model in memory, and computing a forward pass to get the full output distribution. In contrast, all of our proposed partial-$\\mathrm{KD}$ methods and $\\mathrm{KD}\\text{-topk}$ can be realized with better computational efficiency when training the student model. In terms of computation cost, we can pre-compute the $K\\times K$ class similarity matrix before student training, and directly use it for $\\mathrm{KD}\\text{-sim}$. For $\\mathrm{KD}\\text{-topk}$, we only need the top-$k$ predictions from the teacher, which could be efficiently (approximately) computed using SVD-softmax~\\citep{svdsoftmax} when the output space is large.\nAlternatively, for vanilla $\\mathrm{KD}$, one could also save computation by storing teacher's predictions on disk, which could again be optimized using our proposed methods. We only need to store a single value, i.e., teacher's confidence on the ground-truth class, for $\\mathrm{KD}\\text{-pt}$, and the top-$k$ predictions for $\\mathrm{KD}\\text{-topk}$. As shown in our experiments in Figure~\\ref{fig:exp_topk}, using $k=10\\%K$, we can achieve performance comparable to vanilla $\\mathrm{KD}$. 
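\n\nIn code, constructing the $\\mathrm{KD}\\text{-topk}$ target is only a few lines (a sketch; the function name and interface are ours):\n\\begin{verbatim}\nimport numpy as np\n# keep the k largest teacher probabilities and spread the remaining\n# mass uniformly over the other K - k classes (assumes k < K)\ndef kd_topk_target(p, k):\n    K = p.shape[0]\n    keep = np.argsort(p)[-k:]   # indices of the top-k probabilities\n    rho = np.full(K, (1.0 - p[keep].sum()) \/ (K - k))\n    rho[keep] = p[keep]\n    return rho\n\\end{verbatim}\n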
%\n\n\\subsection{Diagnosis of Failure Cases}\nA good understanding of $\\mathrm{KD}$ enables us to diagnose failure cases.\n\\citet{muller2019does} observed that although label smoothing improves the teacher model's quality, it results in a worse student model when applying $\\mathrm{KD}$.\nWe verified this on CIFAR-100 (see Table~\\ref{tb:label-smooth-failure}) and found that the unfavorable distillation performance can be attributed to two factors. Firstly, as argued by~\\citet{muller2019does} and illustrated in Figure~\\ref{fig:heatmap-label-smooth-failure} in Suppl., $\\mathrm{LS}$ destroys class relationships. Secondly, we found that the skewed distribution of the teacher's predictions on the ground truth (see Figure~\\ref{fig:histogram-pt} in Suppl.) also hinders the effectiveness of $\\mathrm{KD}$, because the example re-weighting effect becomes less useful. The results of $\\mathrm{KD}\\text{-sim}$ and $\\mathrm{KD}\\text{-pt}$ in the last two columns of Table~\\ref{tb:label-smooth-failure} verify these findings.\nFor another failure case, \\citet{mirzadeh2019improved} showed that the `distilled' student model's quality gets worse as we continue to increase the teacher model's capacity. The larger-capacity teacher might overfit, and predict high confidence on the ground truth uniformly across all the examples, thereby hindering the effectiveness of example re-weighting in knowledge distillation. Another explanation could be that there exists an optimal model capacity gap between the teacher and student, beyond which the teacher's confidence on the ground truth becomes inconsistent with the desired example difficulty for the student.\nPerhaps an `easy' example for a larger-capacity teacher is overly difficult for the student, due to the large difference in model expressiveness.\n\\section{Introduction} \\input{introduction}\n\n\\section{Related Work} \\input{related_work}\n\n\\section{Analyzing Mechanisms of Knowledge Distillation} \\label{sec:analysis} \\input{analysis}\n\n\\section{Isolating Effects by Partial Knowledge Distillation Methods} \\label{sec:pseudo_kd} \\input{pseudo_kd}\n\n\\section{Improving and Diagnosing Knowledge Distillation} \\label{sec:application} \\input{experiments}\n\n\n\\section{Conclusion} \\label{sec:conclusions} \\input{conclusions}\n\\newpage\n\n\n\\subsection{Proposed Partial KD Methods}\n\\textbf{Examine Example Re-weighting Effect by KD-pt.}\nLabel smoothing has neither the example re-weighting effect nor the class relationships effect, due to its uniform teacher distribution. However, if we borrow ${p}_t$ (the prediction on the ground-truth class $t\\in[K]$) from the real teacher's probability distribution ${\\bm{p}}$, we can synthesize a partial-teacher distribution that incorporates the example re-weighting effect.
More specifically, we craft the teacher probability distribution as follows:\n\\begin{equation}\\label{eq:kdpt}\n\\rho_i^{\\textrm{pt}} = \\begin{cases}\n {p}_t & \\text{if } i = t,\\\\\n (1 - {p}_t)/(K-1) & \\text{otherwise}.\n \\end{cases}\n\\end{equation}\nFrom Proposition~\\ref{prop:conf}, it is easy to see that $\\mathrm{KD}\\text{-pt}$ is capable of differentiating weights for different examples. However, it does not capture class relationships, since every class that is not the ground truth has the same probability mass. %\n\n\n\n\\textbf{Examine Optimal Prior Geometry Effect by KD-sim.}\nFollowing the same methodology, we synthesize a teacher distribution that only captures class relationships, and assigns the same weight to each example.\nTo achieve this, we use the weights of the last logit layer ${\\bm{W}} \\in {\\mathbb{R}}^{K \\times d}$ from the teacher model to obtain class relationships. We believe the teacher, due to its larger capacity, is able to encode class semantics in the weights of the last logit layer.\nThus, we create a distribution ${\\bm{\\rho}}^{\\text{sim}}$ as the $\\mathrm{softmax}$ over cosine similarity\\footnote{In practice, \nbesides tuning the temperature of the $\\mathrm{softmax}$, one could also raise the similarities to a power $<1$ to amplify the resolution of the cosine similarities. Please refer to Section~\\hyperref[sec:si-detail]{A} in Suppl. for more details of our implementation.} of the weights: ${\\bm{\\rho}}^{\\text{sim}} = \\mathrm{softmax}(\\hat{{\\bm{w}}}_t \\Hat{{\\bm{W}}}^\\top)$, where $\\Hat{{\\bm{W}}}$ is the $l_2$-normalized logit layer weights, and $\\hat{{\\bm{w}}}_t$ is the $t$-th row of $\\Hat{{\\bm{W}}}$ corresponding to the ground-truth class $t\\in[K]$. \nThough other distance metrics could also be used as a measure of class similarity, we leave the analysis of the different choices as future work.\nTo verify our assumption, we check the heatmap of cosine similarities in Figure~\\ref{fig:heatmap-cosine-sims}, which clearly shows a pattern similar to the Pearson correlation of the teacher distribution ${\\bm{p}}$ in Figure~\\ref{fig:heatmap-k=100_t=10}.\nWe call this method $\\mathrm{KD}\\text{-sim}$. \nBy Propositions~\\ref{prop:conf} and \\ref{prop:distance}, our proposed method, though simple and straightforward, achieves our purpose of factoring out the class relationships effect.\n\nNote that $\\mathrm{KD}\\text{-sim}$ does not require knowledge of a class hierarchy. However, if the hierarchy is available (as in CIFAR-100), we could also synthesize a teacher distribution a priori.\nIn Suppl. Section~\\hyperref[sec:si-exp]{B}, we synthesize ${\\bm{\\rho}}$ by setting different values for (1) the ground-truth class $t$, (2) classes within the same super-class as $t$, and (3) the other incorrect classes. The quality of the resulting method is slightly worse than $\\mathrm{KD}\\text{-sim}$, but it can still improve the student model's quality.\n\n\\textbf{Compounded Effects.}\nTo enjoy the benefits of multiple effects and approximate the functionality of $\\mathrm{KD}$, we can combine the two partial-KD methods introduced above. We explore a simple linear combination of the synthetic teacher distributions -- $(1-\\alpha) {\\bm{\\rho}}^{\\text{pt}} + \\alpha {\\bm{\\rho}}^{\\text{sim}}$ -- and name the method $\\mathrm{KD}\\text{-pt+sim}$.
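The three synthetic targets above are easy to prototype. Below is a minimal NumPy sketch of ${\\bm{\\rho}}^{\\text{pt}}$, ${\\bm{\\rho}}^{\\text{sim}}$, and their linear combination, under the assumption that the teacher exposes its probability vector \\texttt{p} and its $K\\times d$ logit-layer weight matrix \\texttt{W}; all function names are illustrative, not taken from a released implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef rho_pt(p, t):\n    # KD-pt: keep the teacher's ground-truth confidence p[t];\n    # spread the rest uniformly over the K - 1 incorrect classes.\n    K = p.shape[0]\n    rho = np.full(K, (1.0 - p[t]) / (K - 1))\n    rho[t] = p[t]\n    return rho\n\ndef rho_sim(W, t, T=1.0):\n    # KD-sim: softmax over cosine similarities between the ground-truth\n    # row of the l2-normalized logit-layer weights and all K rows.\n    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)\n    return softmax(W_hat[t] @ W_hat.T / T)\n\ndef rho_pt_sim(p, W, t, alpha=0.5):\n    # Compounded target: (1 - alpha) * rho_pt + alpha * rho_sim.\n    return (1.0 - alpha) * rho_pt(p, t) + alpha * rho_sim(W, t)\n\\end{verbatim}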
%\nIt is easy to verify that this compounded method performs example re-weighting and injects an optimal prior geometry through class relationships.\n\nIn the next section, we evaluate our proposed partial-distillation methods on synthetic and real-world datasets to better understand how much each of these effects benefits the student. Based on our understanding, we propose a novel distillation method that adopts only the top-$k$ largest values from the teacher distribution ${\\bm{p}}$. In Section~\\ref{sec:improv_effective}, we illustrate how this method reduces noise in ${\\bm{p}}$ (see Figure~\\ref{fig:heatmap-k=10-t=10}) and also results in a better-quality student model.\n\n\\subsection{Empirical Studies}\n\n\n\\subsubsection{Synthetic Dataset}\\label{sec:exp_synthetic}\nThe performance of $\\mathrm{KD}$ depends on the dataset properties. A natural question is -- \\emph{Does $\\mathrm{KD}$ perform only example re-weighting when all the classes are uncorrelated with each other?} We proved this to be true for binary classification (Section~\\ref{sec:analysis_re-weight}). To answer the same question for the multi-class classification task, we generate a synthetic dataset in which we can control the class similarities within the same super-class.\n\n\\textbf{Setup.}\nInspired by~\\citep{ma2018modeling}, we synthesize a classification dataset with $K$ classes and $C$ super-classes. Each super-class has an equal number, $K/C$, of classes, and each class is assigned a basis vector. These basis vectors are carefully generated so that we can control the class correlations within the same super-class. More specifically, we generate a single data-point as follows (a code sketch of this generator is given below):\n\\begin{enumerate}\n \\item Randomly sample $C$ orthonormal basis vectors, denoted by ${\\bm{u}}_i \\in{\\mathbb{R}}^d~\\forall i\\in[C]$.\n \\item For each orthonormal basis vector ${\\bm{u}}_i$, sample $(K/C-1)$ unit vectors ${\\bm{u}}_j\\in{\\mathbb{R}}^d$ whose cosine similarity with ${\\bm{u}}_i$ equals $\\tau$.\n \\item Randomly sample an input data point in the $d$-dimensional feature space, ${\\bm{x}}\\sim\\mathcal{N}_d(\\mathbf{0}, \\mathbf{I})$.\n \\item Generate the one-hot encoded label ${\\bm{y}}\\in{\\mathcal{Y}}$ with target: $t=\\argmax_{k\\in[K]} \\big({\\bm{u}}^\\top_k \\hat{{\\bm{x}}} + \\sum_{m=1}^{M} \\sin({a}_m {\\bm{u}}^\\top_k \\hat{{\\bm{x}}} + {b}_m) \\big)$, where $\\hat{{\\bm{x}}}$ is the $l_2$-normalized ${\\bm{x}}$; ${\\bm{a}},{\\bm{b}}\\in{\\mathbb{R}}^M$ are arbitrary constants; and we refer to the controlled $\\sin$ complexity term $M \\in {\\mathbb{Z}}^+$ as the \\emph{task difficulty}.\n\\end{enumerate}\n\nAfter producing the basis vectors with procedures (1) and (2), we run procedures (3) and (4) $|\\mathcal{D}|$ times with fixed basis vectors to generate a synthetic dataset $\\mathcal{D}=\\{({\\bm{x}},{\\bm{y}})\\}$. By tuning the cosine similarity parameter $\\tau$, we can control the class correlations within the same super-class. Setting the task difficulty $M=0$ generates a linearly separable dataset; for $M>0$, more non-linearity is introduced by the $\\sin$ terms (see Figure~\\ref{fig:synthetic-sim-m} in Suppl. for a visualization on a toy example).\nIn the following experiments, we set the input dimension $d=500$ with $K=50$ classes and $C=10$ super-classes. We use $|\\mathcal{D}|=500k$ data-points for training, and $|\\mathcal{D_{\\mathrm{valid}}}|=50k$ for validation.
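The following is a minimal NumPy sketch of the generator described above. It uses the standard construction for drawing a unit vector at a prescribed cosine similarity $\\tau$ to a given direction (a $\\tau$-weighted combination with an orthogonal unit vector); the helper names are ours.\n\\begin{verbatim}\nimport numpy as np\n\ndef make_bases(C, K, d, tau, rng):\n    # Steps (1)-(2): C orthonormal super-class bases, plus K/C - 1 unit\n    # vectors per super-class with cosine similarity tau to their basis.\n    Q, _ = np.linalg.qr(rng.standard_normal((d, C)))  # orthonormal columns\n    U = []\n    for i in range(C):\n        u = Q[:, i]\n        U.append(u)\n        for _ in range(K // C - 1):\n            v = rng.standard_normal(d)\n            v -= (v @ u) * u               # project out the u component\n            v /= np.linalg.norm(v)\n            U.append(tau * u + np.sqrt(1.0 - tau**2) * v)\n    return np.stack(U)                     # (K, d) class basis matrix\n\ndef make_example(U, a, b, rng):\n    # Steps (3)-(4): x ~ N(0, I); label is the argmax of the scored\n    # responses u_k^T x_hat + sum_m sin(a_m u_k^T x_hat + b_m).\n    x = rng.standard_normal(U.shape[1])\n    z = U @ (x / np.linalg.norm(x))        # (K,) inner products\n    score = z + np.sin(np.outer(z, a) + b).sum(axis=1)\n    return x, int(np.argmax(score))\n\\end{verbatim}\nHere \\texttt{a} and \\texttt{b} are length-$M$ arrays of arbitrary constants; passing empty arrays recovers the linearly separable case $M=0$.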
We use a simple 2-layer fully-connected neural network with $\\tanh$ activation, and hidden layer dimensions of $64$ for the student and $128$ for the teacher. Finally, we set $M=10$, hoping that this is the right task-difficulty trade-off (i.e., not too easy, but hard enough to leave a large margin between the two models for $\\mathrm{KD}$).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figures/synthetic_results_new.png}\n \\caption{Best performance over 4 runs on a synthetic dataset with different levels of controlled class similarities.}\n \\label{fig:exp_synthetic_data}\n\\end{figure}\n\n\n\\textbf{Results and Analysis.}\nFigure~\\ref{fig:exp_synthetic_data} shows the classification accuracy on the validation set for all the methods, when varying $\\tau$ from $0.0 \\rightarrow 0.4$.\nWe notice a large margin between the teacher and the student.\nLabel Smoothing ($\\mathrm{LS}$) helps the student marginally, while Knowledge Distillation ($\\mathrm{KD}$) benefits it significantly.\nInterestingly, regardless of the class similarity $\\tau$, $\\mathrm{KD}\\text{-pt}$ has performance comparable with $\\mathrm{KD}$, suggesting that the example re-weighting effect plays a major role in distillation. Thus, even when the classes are uncorrelated, $\\mathrm{KD}$ can still benefit the student through example re-weighting.\nNote that for this task, the data-points which are close to the decision boundary are harder to classify, and can be regarded as difficult examples.\nFurthermore, when increasing $\\tau$, we see a significant improvement in the performance of $\\mathrm{KD}\\text{-sim}$, suggesting that the injected prior knowledge of class relationships boosts the student model's quality. %\n\n\\subsubsection{Real-world Datasets}\nWe use two popular image classification datasets -- CIFAR-100~\\citep{krizhevsky2009learning} and ImageNet~\\citep{russakovsky2015imagenet} -- to analyze the quality of our proposed partial-distillation methods, and also to verify whether we can approximate the performance of $\\mathrm{KD}$ by compounding effects.\n\n\\textbf{Setup.}\nCIFAR-100 is a relatively small dataset with 100 classes. We use ResNet-20 as the student, and ResNet-56 as the teacher.\nImageNet, on the other hand, is a large-scale dataset covering 1000 classes. We use ResNet-50 as the student, and ResNet-152 as the teacher.\nFor more details, please refer to Section~\\hyperref[sec:si-detail]{A} in Suppl.\nNote that instead of using different model families as in~\\citep{furlanello2018born,yuan2019revisit}, we use the same model architecture (i.e., ResNet) with different depths for the student and the teacher, to avoid unknown effects introduced by model-family discrepancy.\n\n\\begin{table}[t!]\n\\centering\n\\begin{tabular}{l|cc}\n\\toprule\n\\textbf{Method} & \\textbf{CIFAR-100} & \\textbf{ImageNet}\\\\\n\\hline\n\\hline\nTeacher & 75.68 & 77.98 \\\\\nStudent & 72.51 & 76.32\\\\\n\\midrule\nLS & 73.87 & 76.83\\\\\nKD & 75.94 & 77.49\\\\\n\\midrule\nKD-pt & 75.08 & 77.00\\\\\nKD-sim & 74.30 & 76.95\\\\\nKD-pt+sim & 75.24 & 77.17\\\\\n\\midrule\nKD-topk & \\textbf{76.17} & \\textbf{77.75}\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{We report the mean Top-1 accuracy (\\%) over 4 individual runs with different initializations.
The best $k$ for $\\mathrm{KD}\\text{-topk}$ is 25 and 500 for CIFAR-100 and ImageNet, respectively.}\n\\label{tb:exp_real_data}\n\\end{table}\n\n\\begin{table}[t!]\n\\centering\n\\begin{tabular}{l|ccc}\n\\toprule\n \\textbf{Method} & \\textbf{\\#Params} & \\textbf{Validation} & \\textbf{Test}\\\\\n\\hline\n\\hline\nTeacher & 24.2M & 60.90 & 58.58\\\\\nStudent & 9.1M & 64.17 & 61.55\\\\\n\\midrule\nKD & 9.1M & 64.04 & 61.33\\\\\nKD-topk & 9.1M & \\textbf{63.59} & \\textbf{60.85}\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Validation and test perplexity (lower is better) of the compared methods on PTB language modeling. We report the best result over 4 individual runs with different initializations. The best $k$ value for $\\mathrm{KD}\\text{-topk}$ is 100.}\n\\label{tb:exp_real_data_lm}\n\\end{table}\n\n\\textbf{Results and Analysis.}\nTable~\\ref{tb:exp_real_data} shows the overall performance when using the best hyper-parameters for each of the methods. On both datasets, the teacher model is much better than the student, and $\\mathrm{LS}$ improves the student model's generalization. $\\mathrm{KD}$ can further boost the student model's quality by a large margin, especially on CIFAR-100, where $\\mathrm{KD}$ even outperforms the teacher. We try to uncover the different benefits of distillation using the partial-$\\mathrm{KD}$ methods. Both $\\mathrm{KD}\\text{-pt}$ and $\\mathrm{KD}\\text{-sim}$ outperform $\\mathrm{LS}$, especially $\\mathrm{KD}\\text{-pt}$ on CIFAR-100. This suggests that the different effects of distillation benefit the student in different ways. \nFurthermore, by combining the two effects together in $\\mathrm{KD}\\text{-pt+sim}$ (using $\\alpha=0.5$), we see a further improvement in quality. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently a new class of cosmological models based on the string field\ntheory (SFT)~\\cite{review-sft} and the $p$-adic string theory has emerged\nand attracted a lot of attention \\cite{IA1}--\\cite{GK}. It is known that\nthe SFT and the $p$-adic string theory are UV-complete theories. Thus, one\ncan expect that the resulting (effective) models are free of\npathologies. These models exhibit one general non-standard property,\nnamely, their actions contain terms with infinitely many derivatives, i.e.\nnonlocal terms. Higher derivative terms usually produce phantom\nfields \\cite{Ostrogradski:1850,PaisU} (see also~\\cite{AV-NEC}). Models\nthat include phantoms violate the null energy condition (NEC) and are,\ntherefore, unstable. Models with higher derivative terms also produce\nthe well-known problem of quantum instability~\\cite{AV-NEC}.\n\nTo obtain a stable model with NEC violation (the state parameter\n$w_{\\mathrm{DE}}<-1$) one should construct it as an effective\nmodel connected with a fundamental theory that is stable and\nadmits quantization. In the absence of quantum gravity, we can either\ntrust string theory or deal with an effective theory admitting a UV\ncompletion.\n\nThe purpose of this paper is to study $f(R)$ gravity models with a\nnonlocal scalar field.
We consider a general form of the nonlocal action\nfor the scalar field with a quadratic potential, keeping the main\ningredient, the analytic function $\\mathcal{F}(\\Box_g)$, which in fact produces\nthe nonlocality, almost unrestricted.\n\n\n\n\n\\section{Nonlocal gravitation models}\n\nThe SFT inspired nonlocal gravitation models~\\cite{IA1} are introduced\nas a sum of the SFT action of the tachyon field $\\phi$ plus the gravity\npart of the action. One cannot deduce this form of the action from the\nSFT. In this paper we study $f(R)$ gravity, which is a\nstraightforward modification of general relativity. We consider the\nfollowing action:\n\\begin{equation}\nS_f=\\int d^4x \\sqrt{-g}\\left(\\frac{f(L^2R)}{16\\pi\nG_NL^2}+\\frac{1}{\\alpha^{\\prime}g_o^2}\\left(\\frac{1}{2}\\phi\\,\\mathcal{F}\\left(\\alpha^{\\prime}\\Box_g\\right)\\phi\n-V(\\phi) \\right)-\\Lambda\\right),\n\\end{equation}\nwhere $f(L^2R)$ is an arbitrary differentiable function. We use the\nsignature $(-,+,+,+)$; $g_{\\mu\\nu}$ is the metric tensor and $G_N$ is the\nNewtonian constant. The potential $V(\\phi)$ is a quadratic polynomial\n$V(\\phi)=C_2\\phi^2+C_1\\phi+C_0$, where $C_2$, $C_1$, and $C_0$ are\narbitrary real constants.\n\nThe function $\\mathcal{F}$ is assumed to be analytic at all finite points of\nthe complex plane, in other words, to be an entire function. The\nfunction $\\mathcal{F}$ can be represented by the convergent series expansion\n$\\mathcal{F}(\\Box_g)=\\sum\\limits_{n=0}^{\\infty}f_n\\Box_g^{\\;n}$. The\nWeierstrass factorization theorem asserts that the function $\\mathcal{F}$ can\nbe represented by a product involving its zeroes $J_k$:\n\\begin{equation}\n\\mathcal{F}(J)=J^me^{Y(J)}\\prod_{k=1}^\\infty\\left(1-\\frac{J}{J_k}\\right)e^{\\frac{J}{J_k}+\\frac{J^2}{2J_k^2}\n+\\dots+\\frac{1}{p_k}\\left(\\frac{J}{J_k}\\right)^{p_k}},\n\\end{equation}\nwhere $m$ is the order of the root $J=0$ ($m$ can be equal to zero),\n$Y(J)$ is an entire function, and the natural numbers $p_k$ are chosen such\nthat the series\n$\\sum\\limits_{k=1}^\\infty\\left(\\frac{J}{J_k}\\right)^{p_k+1}$\n is absolutely and uniformly convergent.\n\nThe scalar field $\\phi$ (associated with the open string tachyon) is\ndimensionless, while $[\\alpha^{\\prime}]=\\mbox{length}^2$, $[L]=\\mbox{length}$ and\n$[g_o]=\\mbox{length}$. Let us introduce the dimensionless coordinates\n$\\bar{x}_\\mu=x_\\mu/\\sqrt{\\alpha'}$, the dimensionless Newtonian\nconstant $\\bar{G}_N=G_N/\\alpha'$, the dimensionless parameter $\\bar\nL=L/\\sqrt{\\alpha'}$, and the dimensionless open string coupling\nconstant $\\bar g_o=g_o/\\sqrt{\\alpha^{\\prime}}$. The dimensionless cosmological\nconstant is $\\bar\\Lambda=\\Lambda{\\alpha^{\\prime}}^2$, and $\\bar{R}$ is the curvature\nscalar in the coordinates $\\bar{x}_\\mu$:\n\\begin{equation}\nS_f=\\int d^4 \\bar{x} \\sqrt{-g}\\left(\\frac{f(\\bar{L}^2 \\bar{R})}{16\\pi\n\\bar{G}_N\\bar{L}^2}+\\frac{1}{\\bar{g}_o^2}\\left(\\frac{1}{2}\\phi\\,\\mathcal{F}\\left(\\bar{\\Box}_g\\right)\\phi\n-V(\\phi) \\right)-\\bar{\\Lambda}\\right). \\label{action_model2}\n\\end{equation}\nIn the following formulae we omit the bars, using only dimensionless\ncoordinates and parameters.\n\nIt is well known~\\cite{Mukhanov1} that for $f'(R)>0$ any $f(R)$ gravity\nmodel in the metric variational approach is equivalent to the\nEinstein gravity with a scalar field\\footnote{There are two types of\n$f(R)$ gravity: the metric variational approach and the Palatini\nformalism.
In the first case the equations of motion are obtained by\nvariation with respect to the metric; the connections are functions of the\nmetric in this formalism. In the Palatini formalism one varies the\naction independently with respect to the metric and the connections.}. In\nthe metric variational approach the equations of gravity are as\nfollows:\n\\begin{equation}\n\\label{fr_equ} G_{\\mu\\nu}\\equiv f'(R)R_{\\mu\\nu}-\n\\frac{f(R)}{2}g_{\\mu\\nu}-D_\\mu\n\\partial_\\nu f'(R)+g_{\\mu\\nu}\\Box_g f'(R)=8\\pi\nG_N T_{\\mu\\nu}, \\quad\n \\mathcal{F}(\\Box_g)\\phi=\\frac{dV}{d\\phi},\n\\end{equation}\nwhere the energy--momentum (stress) tensor $T_{\\mu\\nu}$ is:\n\\begin{equation}\n \\label{TEV}\n T_{\\mu\\nu}\\equiv{}-\\frac{2}{\\sqrt{-g}}\\frac{\\delta{S}}{\\delta\n g^{\\mu\\nu}}\n =\\frac{1}{g_o^2}\\Bigl(E_{\\mu\\nu}+E_{\\nu\\mu}-g_{\\mu\\nu}\\left(g^{\\rho\\sigma}\n E_{\\rho\\sigma}+W\\right)\\Bigr),\n\\end{equation}\n\\begin{equation}\n E_{\\mu\\nu}\\equiv\\frac{1}{2}\\sum_{n=1}^\\infty\n f_n\\sum_{l=0}^{n-1}\\partial_\\mu\\Box_g^l\\phi\\partial_\\nu\\Box_g^{n-1-l}\\phi,\\quad\n W\\equiv\\frac{1}{2}\\sum_{n=2}^\\infty\n f_n\\sum_{l=1}^{n-1}\\Box_g^l\\phi\\Box_g^{n-l}\\phi-\\frac{f_0}{2}\\phi^2+C_1\\phi.\n\\end{equation}\n\n\\section{Localization of nonlocal gravitational actions}\n\nThe Ostrogradski representation has been proposed for polynomial\n$\\mathcal{F}(\\Box)$ in the Minkowski space-time~\\cite{Ostrogradski:1850,PaisU}.\nOur goal is to generalize this result to gravitational models with an\narbitrary analytic function $\\mathcal{F}(\\Box)$ with simple and double roots.\nWe also generalize the Ostrogradski representation to models with a\nlinear potential. Nonlocal cosmological models with quadratic\npotentials have been studied\nin~\\cite{Koshelev07,AJV0701,AJV0711,MN,KV,Vernov2010,VernovSQS}.\n\n\nLet us start with the case $C_1=0$.
We consider a function $\\mathcal{F}(J)$,\nwhich has simple roots $J_i$ and double roots $\\tilde{J}_k$, and the\nfunction\n\\begin{equation}\n \\label{phi0}\n \\phi_0=\\sum\\limits_{i=1}^{N_1}\\phi_i+\\sum\\limits_{k=1}^{N_2}\\tilde\\phi_k,\n\\end{equation}\nwhere\n\\begin{equation}\n (\\Box_g-J_i)\\phi_i=0 \\quad\\mbox{and}\\quad (\\Box_g-\\tilde{J}_k)^2\\tilde\\phi_k=0\\quad\\Leftrightarrow\\quad\n(\\Box_g-\\tilde{J_k})\\tilde\\phi_k=\\varphi_k,\\quad\n (\\Box_g-\\tilde{J_k})\\varphi_k=0.\n \\label{equphi}\n\\end{equation}\nWithout loss of generality we assume that for any $i_1$ and $i_2\\neq\ni_1$ the conditions $J_{i_1}\\neq J_{i_2}$ and\n${\\tilde{J}}_{i_1}\\neq{\\tilde{J}}_{i_2}$ are satisfied.\n\n\nThe energy--momentum tensor corresponding to $\\phi_0$ has the\nfollowing form:\n\\begin{equation}\n T_{\\mu\\nu}\\left(\\phi_0\\right)=\n T_{\\mu\\nu}\\left(\\sum\\limits_{i=1}^{N_1}\\phi_i+\\sum\\limits_{k=1}^{N_2}\\tilde\\phi_k\\right)=\n \\sum\\limits_{i=1}^{N_1}T_{\\mu\\nu}(\\phi_i)+\\sum\\limits_{k=1}^{N_2}T_{\\mu\\nu}(\\tilde\\phi_k),\n \\label{Tmunugen}\n\\end{equation}\nwhere all the $T_{\\mu\\nu}$ are given by (\\ref{TEV}) and\n\\begin{equation}\n E_{\\mu\\nu}(\\phi_i)=\\frac{{\n \\mathcal{F}'(J_i)}}{2}\\partial_{\\mu}\\phi_i\\partial_{\\nu}\\phi_i,\\quad\n E_{\\mu\\nu}(\\tilde\\phi_k)= \\frac{{\n \\mathcal{F}''(\\tilde{J}_k)}}{4}\\left(\\partial_\\mu\\tilde\\phi_k\\partial_\\nu\\varphi_k\n +\\partial_\\nu\\tilde\\phi_k\\partial_\\mu\\varphi_k\\right)+\n \\frac{\\mathcal{F}'''(\\tilde{J}_k)}{12}\\partial_\\mu\\varphi_k\\partial_\\nu\\varphi_k,\n\\end{equation}\n\\begin{equation}\n \\label{Vdr} W(\\phi_i)=\\frac{J_i \\mathcal{F}'(J_i)}{2}\\phi_i^2,\\quad W(\\tilde{\\phi}_k)=\\frac{\\tilde{J}_k\n \\mathcal{F}''(\\tilde{J}_k)}{2}\\tilde\\phi_k\\varphi_k+ \\left(\\frac{{\\tilde{J}_k\n \\mathcal{F}'''(\\tilde{J}_k)}}{12}+\\frac{{\n \\mathcal{F}''(\\tilde{J}_k)}}{4}\\right)\\varphi_k^2,\n\\end{equation}\nwhere a prime denotes a derivative with respect to $J$: $\\mathcal{F}'\\equiv\n\\frac{d\\mathcal{F}}{dJ}$, \\ $\\mathcal{F}''\\equiv \\frac{d^2\\mathcal{F}}{dJ^2}$ and $\\mathcal{F}'''\\equiv\n\\frac{d^3 \\mathcal{F}}{dJ^3}$.\n\nConsidering the following local action\n\\begin{equation}\n S_{loc}=\\int d^4x\\sqrt{-g}\\left(\\frac{f(R)}{16\\pi\n G_N}-\\Lambda\\right)+\\sum_{i=1}^{N_1}S_i+\\sum_{k=1}^{N_2}\\tilde{S}_k,\n \\label{Sloc}\n\\end{equation}\nwhere\n\\begin{equation}\n S_i=\\!{}-\\frac{1}{g_o^2}\\int d^4x\\sqrt{-g}\n \\frac{\\mathcal{F}'(J_i)}{2}\\left(g^{\\mu\\nu}\\partial_\\mu\\phi_i\\partial_\\nu\\phi_i\n +J_i\\phi_i^2\\right),\n\\end{equation}\n\\begin{equation}\n \\begin{array}{l}\n \\!\\displaystyle\\tilde{S}_k=\\!\\displaystyle\\! {}-\\frac{1}{g_o^2}\\int\n d^4x\\sqrt{-g}\\left(g^{\\mu\\nu}\\left(\\frac{{\n \\mathcal{F}''(\\tilde{J}_k)}}{4}\\left(\\partial_\\mu\n \\tilde{\\phi}_k\\partial_\\nu\\varphi_k+\\partial_\\nu\n \\tilde{\\phi}_k\\partial_\\mu\\varphi_k\\right)+{}\\right.\\right.\\\\[2.7mm]\n \\displaystyle + \\left.\\frac{\n \\mathcal{F}'''(\\tilde{J}_k)}{12}\\partial_\\mu\\varphi_k\\partial_\\nu\\varphi_k\\right)+\n \\left.
\\frac{\\tilde{J}_k \\mathcal{F}''(\\tilde{J}_k)}{2}\\tilde\\phi_k\\varphi_k\n +\\left(\\frac{{\\tilde{J}_k \\mathcal{F}'''(\\tilde{J}_k)}}{12}+\\frac{{\n \\mathcal{F}''(\\tilde{J}_k)}}{4}\\right)\\varphi_k^2\\right), \\label{Slocdr}\n \\end{array}\n\\end{equation}\nwe can see that the solutions of the Einstein equations and of the equations in\n$\\phi_i$, $\\tilde{\\phi}_k$ and $\\varphi_k$, obtained from this action,\nsolve the initial nonlocal equations (\\ref{fr_equ}). Thus, we\nobtain that special solutions to the nonlocal equations can be found as\nsolutions to a system of local (differential) equations. If $ \\mathcal{F}(J)$ has\nan infinite number of roots, then one nonlocal model corresponds to an\ninfinite number of different local models, and the initial nonlocal\naction (\\ref{action_model2}) generates an infinite number of local actions\n(\\ref{Sloc}).\n\nWe should prove that this way of localization is self-consistent. To\nconstruct the local action (\\ref{Sloc}) we assume that equations\n(\\ref{equphi}) are satisfied. Therefore, the method of localization is\ncorrect only if these equations can be obtained from the local action\n$S_{loc}$. Straightforward calculations show that the way of\nlocalization is self-consistent, because:\n\\begin{equation}\n \\frac{\\delta{S_{loc}}}{\\delta \\phi_i}=0 \\, \\Leftrightarrow \\,\n \\Box_g\\phi_i=J_i\\phi_i; \\, \\frac{\\delta{S_{loc}}}{\\delta\n \\tilde{\\phi}_k}=0 \\, \\Leftrightarrow \\,\n \\Box_g\\varphi_k=\\tilde{J}_k\\varphi_k; \\,\n \\frac{\\delta{S_{loc}}}{\\delta \\varphi_k}=0 \\, \\Leftrightarrow \\,\n \\Box_g\\tilde{\\phi}_k=\\tilde{J}_k\\tilde{\\phi}_k+\\varphi_k.\n\\end{equation}\n\n\n\n\nIn addition to the above-mentioned equations, we obtain from $S_{loc}$ the\nequations:\n\\begin{equation}\n G_{\\mu\\nu}=8\\pi G_N\\left(T_{\\mu\\nu}(\\phi_0)-\\Lambda g_{\\mu\\nu}\\right),\n\\end{equation}\nwhere $\\phi_0$ is given by (\\ref{phi0}) and $T_{\\mu\\nu}(\\phi_0)$ can be\ncalculated by (\\ref{Tmunugen}). So, we get systems of\ndifferential equations such that any solution of these systems is a\nparticular solution of the initial nonlocal equations (\\ref{fr_equ}).\n\nLet us consider functions $\\mathcal{F}(J)$ with two and only two simple roots.\nIf $\\mathcal{F}(J)$ has two real simple roots, then $\\mathcal{F}'(J)>0$ at one root and\n$\\mathcal{F}'(J)<0$ at the other root, so we get a quintom\nmodel~\\cite{Quinmodrev1}, in other words, a local model with one standard\nscalar field and one phantom scalar field. In the case of two complex\nconjugate simple roots $J_j$ and $J_j^*$ one gets the following\naction:\n\\begin{equation}\nS_c=\\!\\int\\!\\! d^4x\\frac{\\sqrt{-g}}{2g_o^2}\\left(\n \\mathcal{F}'(J_j)\\left(g^{\\mu\\nu}\\partial_\\mu\\phi_j\\partial_\\nu\\phi_j\n +J_j\\phi_j^2\\right)+{\\mathcal{F}'}^*(J_j)\\left(g^{\\mu\\nu}\\partial_\\mu\\phi^*_j\\partial_\\nu\\phi^*_j\n +J^*_j{\\phi_j^*}^2\\right)\\right).\n\\end{equation}\nWe introduce real fields $\\xi$ and $\\eta$ such that $\\phi_j=\\xi+i\\eta$,\n\\ $\\phi_j^*=\\xi-i\\eta$, denote $d_r\\equiv\\Re e(\\mathcal{F}'(J_j))$, \\\n$d_i\\equiv\\Im m(\\mathcal{F}'(J_j))$, and obtain:\n\\begin{equation}\nS_c=\\int d^4x\\frac{\\sqrt{-g}}{2g_o^2}\\Bigl(d_r\ng^{\\mu\\nu}\\left(\\partial_\\mu\\xi\\partial_\\nu\\xi-\n\\partial_\\mu\\eta\\partial_\\nu\\eta\\right)+\nd_ig^{\\mu\\nu}(\\partial_\\mu\\xi\\partial_\\nu\\eta-\\partial_\\mu\\eta\\partial_\\nu\\xi)+V_1\\Bigr),\n\\end{equation}\nwhere $V_1$ is a potential term.
In the case $d_i=0$ we get a quintom\nmodel; in the opposite case the kinetic term in $S_c$ has a nondiagonal\nform. To diagonalize the kinetic term we make the transformation:\n$\\chi=\\upsilon+\\tilde{C}\\sigma$, $\\eta={}-\\tilde{C}\\upsilon+\\sigma$,\nwhere $\\tilde{C}\\equiv\\left(d_r+\\sqrt{d_r^2+d_i^2}\\right)/d_i$, and get\na quintom model:\n\\begin{equation}\nS_c=\\int\nd^4x\\frac{\\sqrt{-g}}{2g_o^2}\\left(\\frac{2\\left(d_r^2+d_i^2\\right)}{d_i^2}\\left(d_r+\\sqrt{d_r^2+d_i^2}\\right)\n\\left(\\partial_\\mu\\upsilon\\partial_\\nu\\upsilon\n-\\partial_\\mu\\sigma\\partial_\\nu\\sigma\\right)+V_1\\right).\n\\end{equation}\n\nIn the case of a real double root $\\tilde{J}_k$ we express\n$\\tilde{\\phi}_k$ and $\\varphi_k$ in terms of the new fields $\\xi_k$ and\n$\\chi_k$:\n\\begin{eqnarray}\n \\tilde{\\phi}_k&=&\\frac{1}{2\\mathcal{F}''(\\tilde{J}_k)}\\left(\\left(\\mathcal{F}''(\\tilde{J}_k)-\\frac{2}{3}\\mathcal{F}'''(\\tilde{J}_k)\\right)\n \\xi_k-\\left(\\mathcal{F}''(\\tilde{J}_k)+\\frac{2}{3}\\mathcal{F}'''(\\tilde{J}_k)\\right)\\chi_k\\right),\n\\quad \\varphi_k=\\xi_k+\\chi_k,\\nonumber\n\\end{eqnarray}\nand obtain the corresponding $\\tilde{S}_k$ in the following form:\n\\begin{eqnarray}\n\\tilde{S}_k&=&\\frac{{}-1}{2g_o^2}\\!\\int\\!\nd^4\\!x\\sqrt{-g}\\left(g^{\\mu\\nu}\\frac{\\mathcal{F}''(\\tilde{J}_k)}{4}(\\partial_\\mu\n\\xi_k\\partial_\\nu\\xi_k-\\partial_\\nu\n\\chi_k\\partial_\\mu\\chi_k)+\\left[\\frac{{\\tilde{J}_k\\mathcal{F}'''(\\tilde{J}_k)}}{12}+\\frac{{\\mathcal{F}''(\\tilde{J}_k)}}{4}\\right]\n(\\xi_k+\\chi_k)^2+\\right.\\nonumber\\\\\n &+&\\left.\\frac{\\tilde{J}_k}{4}\\left[(\\mathcal{F}''(\\tilde{J}_k)-\\frac{2}{3}\\mathcal{F}'''(\\tilde{J}_k))\n \\xi_k-(\\mathcal{F}''(\\tilde{J}_k)+\\frac{2}{3}\\mathcal{F}'''(\\tilde{J}_k))\\chi_k\\right](\\xi_k+\\chi_k)\\right).\\nonumber\n\\end{eqnarray}\nIt is easy to see that each $\\tilde{S}_k$ includes one phantom scalar\nfield and one standard scalar field. So, in the case of one double root\nwe obtain a quintom model. In the Minkowski space the appearance of phantom\nfields in models where $\\mathcal{F}(J)$ has a double root was established\nin~\\cite{PaisU}. So, we come to the conclusion that both two simple roots\nand one double root of $\\mathcal{F}(J)$ generate quintom models.\n\n\nThe model with action (\\ref{action_model2}) in the case $C_1\\neq 0$ has\nbeen considered in detail in~\\cite{VernovSQS}. Here we present only the\nresulting algorithm of localization for an arbitrary quadratic potential\n$V(\\phi)=C_2\\phi^2+C_1\\phi+C_0$:\n\n\\begin{itemize}\n\\item Change the values of $f_0$ and $\\Lambda$ such that the potential\n takes the form $V(\\phi)=C_1\\phi$.\n\\item Find the roots of the function $ \\mathcal{F}(J)$ and calculate their\n orders. Select a finite number of simple and double roots.\n\\item Construct the corresponding local action. In the case $C_1=0$\n one should use formula (\\ref{Sloc}). In the case $C_1\\neq 0$ and\n $f_0\\neq 0$ one should use (\\ref{Sloc}) with the replacement of the\n scalar field $\\phi$ by $\\chi$ and the corresponding modification of\n the cosmological constant. In the case $C_1\\neq 0$ and $f_0=0$ the\n local action is the sum of (\\ref{Sloc}) and either\n \\begin{equation}\n S_{\\psi}={}-\\frac{1}{2g_o^2}\\int\\! d^4x\\sqrt{-g}\\left(\n f_1g^{\\mu\\nu}\\partial_\\mu\\psi\\partial_\\nu\\psi+2C_1\\psi+\\frac{f_2C_1^2}{f_1^2}\\right),\n \\end{equation}\n in the case of a simple root $J=0$, or\n \\begin{eqnarray}\n S_{\\tilde{\\psi}}&=&{}-\\!\\int\\!
d^4x\\frac{\\sqrt{-g}}{2g_o^2}\\left[\n g^{\\mu\\nu}\\left(f_2(\\partial_\\mu\\tilde{\\psi}\\partial_\\nu\\tau\n +\\partial_\\nu\\tilde{\\psi}\\partial_\\mu\\tau)+f_3\\partial_\\mu\\tau\\partial_\\nu\\tau\\right)\n +f_2\\tau^2+2C_1\\tilde{\\psi}+\\frac{f_3C_1}{2f_2}\\tau\\right]\\nonumber\n \\end{eqnarray}\n in the case of a double root $J=0$. Note that in the case $C_1\\neq 0$\n and $f_0=0$ the local action (\\ref{Sloc}) has no term corresponding\n to the root $J=0$.\n\\item Vary the obtained local action to get a system of the Einstein\n equations and equations of motion. The obtained system is a finite\n order system of differential equations, \\textit{i.e.} we get a local\n system. Seek solutions of the obtained local system.\n\n\\end{itemize}\n\n\n\n\\section{Conclusion}\n\nThe main result of this paper is the generalization of the algorithm of\nlocalization to $f(R)$ gravity models with a nonlocal scalar\nfield. The algorithm of localization is proposed for an arbitrary\nanalytic function $\\mathcal{F}(\\Box_g)$ with both simple and double\nroots. We have proved that the same functions solve the initial\nnonlocal Einstein equations and the obtained local Einstein equations.\nWe have found the corresponding local actions and proved the\nself-consistency of our approach. In the case of two simple roots, as\nwell as in the case of one double root, we get a quintom\nmodel~\\cite{Quinmodrev1}. The algorithm of localization does not depend\non the metric, so it can be used to find solutions for any metric.\n\nThe author wishes to express his thanks to I.~Ya.~Aref'eva for useful\nand stimulating discussions. The research\n has been supported in part by RFBR grant 08-01-00798, grant of the Russian\n Ministry of Education and Science NSh-4142.2010.2, and by the Federal\n Agency for Science and Innovation under state contract\n 02.740.11.0244.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}