diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjhcd" "b/data_all_eng_slimpj/shuffled/split2/finalzzjhcd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjhcd" @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\nMechanical systems subject to rolling kinetic constraints are one\nof the most studied argument of Classical Mechanics, especially\nfor its wideness of applicability in several branches of\nMechanical Sciences: Contact Mechanics, Tribology, Wear, Robotics,\nBall Bearing Theory and Control Theory applied to moving engines\nand vehicles are only some of the important fields where the\nresults about pure rolling constraint can be fruitfully used.\n\nIt is well known that, when a mechanical system moves in contact\nwith an assigned rough surface, the effective fulfilment of the\nkinetic conditions determined by the rolling without sliding\nrequirement of the system on the surface depends on the behavior,\nwith respect to the considered law of friction, of the reaction\nforces acting on the system in the contact points. For example,\nthe roll of a disk on a rough straight line, considering the\nCoulomb's law of friction, can happen only if the contact force\nlie inside the friction cone (see Example 1 below).\n\nHowever, even in the simplest case of a mechanical system formed\nby a single rigid body, in the case of multiple contact points\nbetween the rigid body and the rough surface, it could be an hard\ntask to obtain sufficient information about the contact reactions\nin order to determine if the laws of friction are satisfied or not\nduring the motion. In fact the most common methods to determine\ninformation about the reactions, starting from the simple\napplication of linear and angular momenta equations (see e.g.\n\\cite{LeviCivita,Goldstein}) to most refined techniques such as\nlagrangian multipliers in lagrangian mechanics (see e.g.\n\\cite{Huston1999}) or deep analyses of the contact between the\nsystem and the surface (see e.g. \\cite{DeMoerlooze2011}), have a\nglobal character. Then these methods, for their very nature, can\ndetermine only a reactive force system equivalent to the real one\nbut, in the general case, these methods cannot determine the\nsingle reactive forces in the contact points. The problem becomes\neven more complicated in case of multibody system, due to the\npresence of the internal reactions in the link between the parts\nof the system.\n\nIn this paper we consider the motion of a mechanical system having\ntwo or more distinct contact points with one or more assigned\nrough surfaces, and we determine necessary conditions for which in\nall the contact points the pure rolling kinetic constraint can\nhold. We also analyze the sufficiency of these conditions by\ngeneralizing to this case a well known and usually accepted\nassumption on the behavior of pure rolling constraint. Moreover,\nwe briefly discuss the possible behaviors of the system when the\nnecessary conditions are not fulfilled.\n\nThe procedure to determine if the rolling condition can be\nfulfilled can be applied both to systems formed by a single rigid\nbody and to multibody systems. It is essentially based on the\napplication of linear and angular momenta equations to the (parts\nforming the) mechanical system, and therefore it gives an\nunderdetermined system in the unknown single contact reactions.\nNevertheless, we show that the lack of complete knowledge of the\nsingle contact reactions is not an obstacle to determine the\nfeasibility of the rolling conditions.\n\nIt is however important to remark that, although the procedure has\na very simple and unassailable theoretic foundation, its effective\napplication to general systems could present insurmountable\ndifficulties. This is essentially due to the fact that the general\nprocedure explicitly requires the knowledge of the motion law of\nthe system, and in the general case the explicit time--dependent\nexpression of the motion cannot be obtained because of\ncomplications determined by the geometry of the system itself\nand\/or by the integrability of the equations of motion.\nNevertheless there are several significative cases where the\nprocedure can be explicitly performed. In the paper, we illustrate\nthree examples with rising complication: the well known case of a\ndisk falling in contact with an inclined plane (that is presented\nonly to point out some key points of the general procedure); the\ncase of a system formed by a non--coupled pair of disks connected\nwith a bar and moving on the horizontal plane; the case of a heavy\nsphere falling in contact with a guide having the form of a\nV--groove non symmetric with respect to the vertical axis and\ninclined with respect to the horizontal.\n\nThe main content of this paper can be approached starting from a\nvery standard background knowledge, essentially focused to the\nlinear and angular equations of motion for a mechanical system,\nthe so called Cardinal Equations, and the basic theory of pure\nrolling conditions and kinetic constraints. On the other hand, the\nlist of possible references involving theory and application of\npure rolling constraint is almost endless. Therefore we chose to\ncite only a very limited list of references sufficient to make the\npaper self--consistent: the classical book of Levi--Civita and\nAmaldi \\cite{LeviCivita} and the book of Goldstein\n\\cite{Goldstein} for the Cardinal Equations and the basic concepts\nabout pure rolling conditions; the book of Neimark and Fufaev\n\\cite{NeimarkF} and the paper of Massa and Pagani\n\\cite{MassPaga1991} for the behavior of systems subject to kinetic\nconstraints. The interested reader can find in the wide but not\nexhaustive lists of references of \\cite{Brogliato,Johnson} as a\nuseful starting point to delve in the expanse of the material\nrelated to this argument.\n\nThe paper is divided in four sections. Section 1 contains a very\nbrief preliminary description of the well known analysis of the\nrolling condition for a disk in contact with an inclined plane.\nThis remind is motivated by some useful affinities with the\ngeneral procedure for generic systems. Section 2 contains the\ndiscussion of the general case, and the determination of the\nnecessary conditions for pure rolling conditions simultaneously\nhold. Section 3 presents the example of the system formed by the\nnon--coupled disks and the example of the heavy sphere falling in\nthe V--groove. Section 4 is devoted to open problems, remarks and\nconclusions.\n\n\n\n\\section{Preliminaries}\n\n\\subsection*{Example 1.}\nAn homogeneous disk of mass $m$ and radius $R$ moves in the\nvertical plane being in contact with a rough guide inclined with\nslope angle $\\alpha$.\n\\begin{figure}[h] \\label{disco}\n\\centering \\includegraphics[width=0.65\\textwidth]{Fig1.eps}\n\\caption{Rolling disk on an inclined plane}\n\\end{figure}\nConsidering the system subject to the Coulomb's law of friction,\nwith obvious notation clarified by Fig. \\ref{disco}, the\nfeasibility of pure rolling condition of the disk can be\ndetermined with the following procedure:\n\n\\begin{itemize}\n\n\\item[0)] we determine the relative velocity $\\underline{{\\bf\nv}}_T(t_0)$ of the contact point $T$ of the disk at the instant\n$t_0$ with respect to the inclined plane as function of the\ninitial data of the motion. The pure rolling condition requires of\ncourse that $\\underline{{\\bf v}}_T(t_0)=0$. If so\n\n\\item[1)] we assume that the disk rolls without sliding on the\ninclined plane. Then the system has a single degree of freedom\n(for example the coordinate $s$ of $T$ along the inclined plane)\nand we can determine the equation of motion\n\\begin{equation*}\nmg \\sin \\alpha \\= \\dfrac32 m \\ddot{s} \\, ;\n\\end{equation*}\n\n\\item[2)] we determine the corresponding reaction $\\underline{{\\bf\n\\Phi}}_T$ as a (in this case constant) function of time\n\\begin{equation*}\n\\underline{{\\bf \\Phi}}_T \\= m \\underline{{\\bf a}}_C - m\n\\underline{{\\bf g}} \\= \\left( - \\dfrac13 m g \\sin \\alpha \\right)\n\\underline{{\\bf i}} + \\left( mg \\cos \\alpha \\right)\n\\underline{{\\bf j}} \\, ;\n\\end{equation*}\n\n\\item[3)] we test the Coulomb's law of friction condition\n\\begin{equation*}\\label{condizioni_CM_es0}\n\\| \\underline{{\\bf \\Phi}}_T^{\\|} \\| \\le \\mu \\, \\| \\underline{{\\bf\n\\Phi}}_T^{\\perp} \\| \\quad \\Leftrightarrow \\quad \\mu \\, \\ge\n\\dfrac13 \\tan \\alpha\n\\end{equation*}\nwhere $\\underline{{\\bf \\Phi}}_T^{\\|}, \\underline{{\\bf\n\\Phi}}_T^{\\perp}$ are the parallel and orthogonal component of\n$\\underline{{\\bf \\Phi}}_T$ with respect to the inclined plane;\n\n\\item[4)] we assume that, if and until the Coulomb's condition is\nverified, the disk moves rolling on the plane and that if and\nwhen the Coulomb's condition is not verified, the disk changes its\ndynamic evolution beginning to slide on the plane (until the first\ntime $t_1\n> t_0$ such that $\\underline{{\\bf v}}_T(t_1)=0$).\n\n\\end{itemize}\n\nSome remarks are in order to focus the possibility to generalize\nthe procedure to more complicated systems. Step 2 consists in the\ndetermination of the reaction acting on the disk as function of\ntime. The utmost simplicity of the specific problem can hide the\nfact that in a more general situation the information about the\nreaction sufficient to analyze the rolling condition could require\nan explicit determination of the motion of the system as function\nof time.\n\nStep 3 tests the compatibility of the reaction evaluated in Step 2\nwith the Coulomb's law of friction assumed as the constitutive\ncharacterization of the rough surface in contact with the disk. Of\ncourse the feasibility of the rolling condition can be tested with\nany other significative constitutive law.\n\nIn Step 4 we assume that, roughly speaking, if the disk can roll\nthen it does. This is of course an arbitrary assumption, but the\nhypothesis is well confirmed by experimental results. In the next\nsection, we will confirm this assumption in the more general\nsituation of generic system.\n\nTo conclude the section, let us note that, in this very simple\ncase, both the behaviors of the disk when the Coulomb's friction\ncondition is or is not verified are determinable. In the general\ncase, when the constitutive law is not verified, the behavior of\nthe system turns out to be not so straight to determine, although\nsome reasonable assumptions can be done. We will go back on this\narguments in Section 4.\n\n\n\\section{The general case}\n\nIn this section, following a line of though similar to the one\napplied in the previous section, we discuss the possibility that a\nmechanical system $\\S$ having two points $T_1,T_2$ in contact with\na fixed surface $\\Sigma$ moves such that in both the contact\npoints the rolling conditions can subsist respecting the Coulomb's\nlaw of friction. The arguments of the discussion can be easily\nextended to cases with more (but a finite number) than two contact\npoints and possibly to different friction constitutive laws.\n\nThe discussion is based on the fact that, along the motion, the\nreactive forces acting on the system must validate the linear and\nangular momenta equations\n\\begin{eqnarray}\\label{sistema_generale}\n\\left\\{\n\\begin{array}{l}\n\\underline{{\\bf R}}^{act} + \\underline{{\\bf R}}^{react} \\= M\n\\underline{{\\bf a}}_G \\\\ \\\\\n\n\\underline{{\\bf M}}_G^{act} + \\underline{{\\bf M}}_G^{react} \\=\n\\dfrac{d \\, \\underline{\\Gamma}_G}{d t}\n\\end{array}\n\\right.\n\\end{eqnarray}\nwhere $M$ is the total mass of the system, $G$ is the center of\nmass of the system and $\\underline{{\\bf R}}^{act}, \\underline{{\\bf\nR}}^{react},\\underline{{\\bf M}}_G^{act},\\underline{{\\bf\nM}}_G^{react}$ are respectively the sum of the active and reactive\nforces and active and reactive momenta acting on the whole system.\nIn this specific situation we have that:\n\\begin{eqnarray}\\label{determinazione_reazioni}\n\\left\\{\n\\begin{array}{l}\n\\underline{{\\bf R}}^{react} \\= \\underline{{\\bf \\Phi}}_{T_1} +\n\\underline{{\\bf \\Phi}}_{T_2}\n\\\\ \\\\\n\\underline{{\\bf M}}_G^{react} \\= \\overrightarrow{G {T_1}} \\times\n\\underline{{\\bf \\Phi}}_{T_1} + \\overrightarrow{G {T_1}} \\times\n\\underline{{\\bf \\Phi}}_{T_2}\n\\end{array}\n\\right. .\n\\end{eqnarray}\nIt is however well known \\cite{LeviCivita} that Eqs.\n(\\ref{sistema_generale}) are not sufficient to determine the\nmotion of the mechanical system and the single reactions\n$\\underline{{\\bf \\Phi}}_{T_1}, \\underline{{\\bf \\Phi}}_{T_2}$ along\nthe motion, since the system\n\\begin{eqnarray}\\label{sistema_sottodeterminato}\n\\left\\{\n\\begin{array}{l}\n\\underline{{\\bf R}}^{act} + \\underline{{\\bf \\Phi}}_{T_1} +\n\\underline{{\\bf \\Phi}}_{T_2} \\= M\n\\underline{{\\bf a}}_G \\\\ \\\\\n\n\\underline{{\\bf M}}_G^{act} + \\overrightarrow{G{T_1}} \\times\n\\underline{{\\bf \\Phi}}_{T_1} + \\overrightarrow{G{T_2}} \\times\n\\underline{{\\bf \\Phi}}_{T_2} \\= \\dfrac{d \\,\n\\underline{\\Gamma}_G}{d t}\n\\end{array}\n\\right.\n\\end{eqnarray}\nis by its very nature under--determined. In fact the projection of\nthe angular momenta equation of (\\ref{sistema_sottodeterminato})\nin the direction of $\\overrightarrow{{T_1}{T_2}}$ is a pure\nequation of motion of the system where no reactions appear. Then\n(\\ref{sistema_sottodeterminato}) can give no more than 5 relations\non the components of $\\underline{{\\bf \\Phi}}_{T_1}$ and\n$\\underline{{\\bf \\Phi}}_{T_2}$. Unfortunately, due to the\nroughness of the contacts, no preliminary conditions can be\nimposed on the components of the reactions, so that, even when the\nmotion of the mechanical system is known,\n(\\ref{sistema_sottodeterminato}) is a linear system with $6$\nunknowns that is not of maximum rank.\n\nNevertheless the parametric solution of the system\n(\\ref{determinazione_reazioni}), and an assumption parallelizing\nthe one of Step 4 of the case of Section 1, give us the\npossibility of determining if the rolling conditions in $T_1$ and\n$T_2$ are or not verified.\n\nThe procedure to test the feasibility of pure rolling condition of\nthe disk can be then based on the following steps:\n\\begin{itemize}\n\n\\item[0)] we test if the initial relative velocities of the\ncontact points $T_1,T_2$ with respect to the surface are null or\nnot. If they are null\n\n\\item[1)] we suppose that the system rolls without sliding in both\nthe contact points. This assumption fixes the dynamics (for\nexample the number of degrees of freedom...) of the system and\nconsequently allows the determination of the motion of the system;\n\n\\item[2)] we write the linear and angular momenta equations for\nthe whole system, for example in the form:\n\\begin{eqnarray}\\label{sistema_reazioni}\n\\left\\{\n\\begin{array}{l}\n\\underline{{\\bf \\Phi}}_{T_1} + \\underline{{\\bf \\Phi}}_{T_2} \\= M\n\\underline{{\\bf a}}_G - \\underline{{\\bf R}}^{act} \\\\ \\\\\n\n\\overrightarrow{G{T_1}} \\times \\underline{{\\bf \\Phi}}_{T_1} +\n\\overrightarrow{G{T_2}} \\times \\underline{{\\bf \\Phi}}_{T_2} \\=\n\\dfrac{d \\, \\underline{\\Gamma}_G}{d t} - \\underline{{\\bf\nM}}_G^{act}\n\\end{array}\n\\right. .\n\\end{eqnarray}\nSince the motion of the system is known, both the right hand sides\nof the equations, together with the position vectors\n$\\overrightarrow{G{T_1}}$ and $\\overrightarrow{G{T_2}}$, are known\nas function of time. Therefore Eqs. (\\ref{sistema_reazioni}) turn\nout to be a time--dependent under--determined linear system in the\nsix scalar unknowns given by the components of the vectors\n$\\underline{{\\bf \\Phi}}_{T_1},\\underline{{\\bf \\Phi}}_{T_2}$;\n\n\\item[3)] we solve the linear under--determined system\n(\\ref{sistema_reazioni}), obtaining the expression of the reaction\n$\\underline{{\\bf \\Phi}}_{T_1}$ and $\\underline{{\\bf \\Phi}}_{T_2}$\nas function of time and parameters $\\lambda_1,\\dots,\\lambda_r$,\nwhere of course the integer $r$ is related to the rank of\n(\\ref{sistema_reazioni}). Then we can determine the tangent and\northogonal components $\\underline{{\\bf \\Phi}}_{T_1}^{\\|},\n\\underline{{\\bf \\Phi}}_{T_1}^{\\perp}, \\underline{{\\bf\n\\Phi}}_{T_2}^{\\|}, \\underline{{\\bf \\Phi}}_{T_2}^{\\perp}$ of the\nreactions with respect to the surface $\\Sigma$ as functions of\n$(t,\\lambda_1,\\dots,\\lambda_r)$. The pure rolling conditions then\ncan subsist in both the contact points only in the time interval\n$[t_0, t_1]$ such that for every $t\\in [t_0, t_1]$ there exists at\nleast one admissible $r$--uple\n$(\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r)$ such that the\nsystem\n\\begin{eqnarray}\\label{condizioni_CM}\n\\left\\{\n\\begin{array}{l}\n\\| \\underline{{\\bf \\Phi}}_{T_1}^{\\|}(t,\n\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r) \\| \\le \\mu_1 \\,\n\\| \\underline{{\\bf \\Phi}}_{T_1}^{\\perp}(t,\n\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r)\\|\n\\\\ \\\\\n\\| \\underline{{\\bf\n\\Phi}}_{T_2}^{\\|}(t,\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r)\n\\| \\le \\mu_2 \\, \\|\\underline{{\\bf\n\\Phi}}_{T_2}^{\\perp}(t,\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r)\\|\n\\end{array}\n\\right.\n\\end{eqnarray}\nholds;\n\n\\item[4)] we assume that, if for every $t\\in [t_0, t_1]$ there\nexists at least one admissible $r$--uple\n$(\\overline{\\lambda}_1,\\dots,\\overline{\\lambda}_r)$ such that\n(\\ref{condizioni_CM}) are verified, the system moves rolling\nwithout sliding in both points $T_1$ and $T_2$ during the time\ninterval $[t_0, t_1]$.\n\n\\end{itemize}\n\nIt is clear that the general procedure described above\nparallelizes as possible and generalizes the one of the disk on\nthe inclined plane. The most significant differences consist in\nthe explicit determination of the motion of the system (since\notherwise Eqs. (\\ref{sistema_reazioni}) could not admit a simple\nparametric solution for the reactions $\\underline{{\\bf\n\\Phi}}_{T_1}$ and $\\underline{{\\bf \\Phi}}_{T_2}$) and in the fact\nthat, when the Coulomb conditions (\\ref{condizioni_CM}) are NOT\nverified, being understood that the system does not roll in both\ncontact points, the determination of the behavior of the system\ncould require a more subtle analysis. We will go back on these\narguments in Section 4. We also remark that not all the $r$--uple\n$\\lambda_1,\\dots,\\lambda_r$ could be admissible in the discussion\nof the inequalities (\\ref{condizioni_CM}). For example, if the\nsystem is leaned on the surface, we have to restrict our attention\nto the $r$--uple such that\n\\begin{eqnarray}\\label{condizioni_appoggio}\n\\left\\{\n\\begin{array}{l}\n\\underline{{\\bf \\Phi}}_{T_1}^{\\perp}(t, \\lambda_1,\\dots,\\lambda_r)\n\\cdot \\underline{{\\nu}}_1 \\ge 0\n\\\\ \\\\\n{\\underline{{\\bf\n\\Phi}}_{T_2}^{\\perp}(t,\\lambda_1,\\dots,\\lambda_r)} \\cdot\n\\underline{{\\nu}}_2 \\ge 0\n\\end{array}\n\\right.\n\\end{eqnarray}\n(where $\\underline{{\\nu}}_i$ is the unit normal vector to the\nsurface $\\Sigma$ in the point $T_i$ and orientated toward the side\nof the system) since otherwise the system detaches from the\nsurface.\n\n\n\n\\section{Examples}\n\n\\begin{figure}[h] \\label{ruote}\n\\centering\n\\includegraphics[width=0.85\\textwidth]{Fig2.eps}\n\\caption{Rolling system on an horizontal plane}\n\\end{figure}\n\nA mechanical system is formed by two equal disks of mass $m$ and\nradius $R$ and a rod, of mass $M$ and length $L$. The rod is\nconstrained to remain orthogonal to the two planes of the disks\nwith its endpoints coinciding with the two centers of the disks\n(see Fig. \\ref{ruote}) so that the disks remain vertical. The\nwhole system is leaned on a rough horizontal plane. The system has\nthen $5$ degrees of freedom: the coordinates $x,y$ of the center\nof mass $G$ of the rod, the angle $\\vth$ formed by the plane of\nthe disks with the $xz$ plane and the two rotation angles $\\vph_1,\n\\vph_2$ of the disks. The rolling conditions in the contact points\n$T_1$ and $T_2$ are equivalently expressed by:\n\\begin{eqnarray}\\label{condizioni_PRes1}\n\\hskip -1truecm \\left\\{\n\\begin{array}{l}\n\\dot{x} + \\dfrac12 L \\dot{\\vth} \\cos \\vth - R \\dot{\\vph_1} \\cos\n\\vph_1 \\=0\n\\\\ \\\\\n\\dot{y} + \\dfrac12 L \\dot{\\vth} \\sin \\vth - R \\dot{\\vph_1} \\sin\n\\vph_1 \\=0\n\\\\ \\\\\n\\dot{\\vth} - \\dfrac{R}{L} \\left(\\dot{\\vph_1} - \\dot{\\vph_2}\n\\right) \\=0\n\\end{array}\n\\right. \\quad \\Leftrightarrow \\quad \\left\\{\n\\begin{array}{l}\n\\dot{x} - \\dfrac12 L \\dot{\\vth} \\cos \\vth - R \\dot{\\vph_2} \\cos\n\\vph_2 \\=0\n\\\\ \\\\\n\\dot{y} - \\dfrac12 L \\dot{\\vth} \\sin \\vth - R \\dot{\\vph_2} \\sin\n\\vph_2 \\=0\n\\\\ \\\\\n\\dot{\\vth} - \\dfrac{R}{L} \\left(\\dot{\\vph_1} - \\dot{\\vph_2}\n\\right) \\=0\n\\end{array}\n\\right.\n\\end{eqnarray}\nTedious but straightforward computations (see\n\\cite{NeimarkF,MassPaga1991}) give the equations of motion of the\nsystem\n\\begin{eqnarray}\\label{eq_moto_es1}\n\\left\\{\n\\begin{array}{lcl}\n\\ddot{x} &=& \\dfrac{1}{2} L \\dot{\\vth}^2 \\sin \\vth - R\n\\dot{\\vth}\\dot{\\vph_1} \\sin \\vth\n\\\\ \\\\\n\\ddot{y} &=& - \\dfrac{1}{2} L \\dot{\\vth}^2 \\cos \\vth + R\n\\dot{\\vth}\\dot{\\vph_1} \\cos \\vth\n\\\\\n\\ddot{\\vth} &=& 0\n\\\\\n\\ddot{\\vph_1} &=& 0\n\\\\\n\\ddot{\\vph_2} &=& 0\n\\end{array}\n\\right.\n\\end{eqnarray}\nIf we suppose assigned the almost generic initial data\n\\begin{eqnarray}\\label{dati_iniziali_es1}\n\\left\\{\n\\begin{array}{lcl}\nx(0) &=& x_0\n\\\\\ny(0) &=& y_0\n\\\\\n\\vth(0) &=& \\vth_0\n\\\\\n\\vph_1(0) &=& 0\n\\\\\n\\vph_2(0) &=& 0\n\\end{array}\n\\right. \\quad \\left\\{\n\\begin{array}{lcl}\n\\dot{x}(0) &=& \\dfrac{1}{2} R \\cos \\vth_0(\\dot{\\vph_1}_0 +\n\\dot{\\vph_2}_0)\n\\\\ \\\\\n\\dot{y}(0) &=& \\dfrac{1}{2} R \\sin \\vth_0(\\dot{\\vph_1}_0 +\n\\dot{\\vph_2}_0)\n\\\\ \\\\\n\\dot{\\vth}(0) &=& \\dfrac{R}{L} \\left(\\dot{\\vph_1}_0 -\n\\dot{\\vph_2}_0\\right)\n\\\\ \\\\\n\\dot{\\vph_1}(0) &=& \\dot{\\vph_1}_0\n\\\\\n\\dot{\\vph_2}(0) &=& \\dot{\\vph_2}_0\n\\end{array}\n\\right.\n\\end{eqnarray}\n with the only condition $\\dot{\\vph_1}_0 \\ne\n\\dot{\\vph_2}_0$, the motion of the system is given by\n\\begin{eqnarray}\\label{moto_es1}\n\\left\\{\n\\begin{array}{lcl}\n{x}(t) &=& \\dfrac12 L \\dfrac{\\dot{\\vph_1}_0+\n\\dot{\\vph_2}_0}{\\dot{\\vph_1}_0 -\n\\dot{\\vph_2}_0}\\left[\\sin\\left(\\dfrac{R}{L} \\left(\\dot{\\vph_1}_0 -\n\\dot{\\vph_2}_0\\right)t + \\vth_0 \\right) - \\sin \\vth_0 \\right] +\nx_0\n\\\\ \\\\\n{y}(t) &=& - \\dfrac12 L \\dfrac{\\dot{\\vph_1}_0+\n\\dot{\\vph_2}_0}{\\dot{\\vph_1}_0 -\n\\dot{\\vph_2}_0}\\left[\\cos\\left(\\dfrac{R}{L} \\left(\\dot{\\vph_1}_0 -\n\\dot{\\vph_2}_0\\right)t + \\vth_0 \\right) - \\cos \\vth_0 \\right] +\ny_0\n\\\\ \\\\\n\\vth (t) &=& \\dfrac{R}{L} (\\dot{\\vph_1}_0 - \\dot{\\vph_2}_0)t\n\\\\ \\\\\n{\\vph_1}(t) &=& \\dot{\\vph_1}_0 t\n\\\\ \\\\\n{\\vph_2}(t) &=& \\dot{\\vph_2}_0 t\n\\end{array}\n\\right.\n\\end{eqnarray}\nThe linear and angular momenta equations for the system can be\nwritten as\n\\begin{eqnarray}\\label{sistema_generale_es1}\n\\left\\{\n\\begin{array}{l}\n(2m+M)\\underline{{g}} + \\underline{{\\bf \\Phi}}_{T_1} +\n\\underline{{\\bf \\Phi}}_{T_2} \\= (2m+M) \\underline{{\\bf a}}_G\n\\\\ \\\\\n\\overrightarrow{G{C_1}} \\times m\\underline{{g}} +\n\\overrightarrow{G{C_2}} \\times m\\underline{{g}} +\n\\overrightarrow{G{T_1}} \\times \\underline{{\\bf \\Phi}}_{T_1} +\n\\overrightarrow{G{T_2}} \\times \\underline{{\\bf \\Phi}}_{T_2}\n\\\\ \\\\\n\\quad\\quad\\quad \\= {\\bf I}_{C_1}(\\underline{\\dot{\\omega}}_1) +\n\\underline{{\\omega}}_1 \\times {\\bf\nI}_{C_1}({\\underline{\\omega}}_1) + m \\overrightarrow{G{C_1}}\n\\times \\underline{{\\bf a}}_{C_1}\n\\\\ \\\\\n\\quad\\quad\\quad\\quad\\quad + {\\bf\nI}_{G}(\\dot{\\underline{\\omega}}_{rod}) +\n{\\underline{\\omega}}_{rod} \\times {\\bf\nI}_{G}({\\underline{\\omega}}_{rod})\n\\\\ \\\\\n\\quad\\quad\\quad\\quad\\quad \\quad\\quad + {\\bf\nI}_{C_2}(\\underline{\\dot{\\omega}}_2) + {\\underline{\\omega}}_2\n\\times {\\bf I}_{C_2}({\\underline{\\omega}}_2) + m\n\\overrightarrow{G{C_2}} \\times \\underline{{\\bf a}}_{C_2}\n\\end{array}\n\\right.\n\\end{eqnarray}\nTaking into account the motion of the system (\\ref{moto_es1}) and\nintroducing the orthonormal base $\\{\\underline{{\\bf u}},\n\\underline{{\\bf v}}, \\underline{{\\bf z}} \\}$ with $\\underline{{\\bf\nu}} = \\dfrac{\\overrightarrow{C_2C_1}}{L}, \\underline{{\\bf z}} =\n\\dfrac{\\overrightarrow{T_1C_1}}{R}, \\underline{{\\bf v}} =\n\\underline{{\\bf z}} \\times \\underline{\\bf u}$, with obvious\nnotation we obtain\n\\begin{eqnarray}\\label{reazioni_es1}\n\\left\\{\n\\begin{array}{ccl}\n\\Phi_{1_z} &=& \\dfrac12 \\left[(2m+M)g - (3m+M)\n\\dfrac{R^3}{L^2}(\\dot{\\vph_2}^2_0 - \\dot{\\vph_1}^2_0) \\right]\n\\\\ \\\\\n\\Phi_{2_z} &=& \\dfrac12 \\left[(2m+M)g - (3m+M)\n\\dfrac{R^3}{L^2}(\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0) \\right]\n\\\\ \\\\\n\\Phi_{1_v} &=& 0\n\\\\ \\\\\n\\Phi_{2_v} &=& 0\n\\\\ \\\\\n\\Phi_{1_u} + \\Phi_{2_u} &=& - \\dfrac12 (2m+M) \\dfrac{R^2}{L}\n(\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0)\n\\end{array}\n\\right.\n\\end{eqnarray}\nNote that, if the system leans on the horizontal plane, we must\nadd the requirement\n\\begin{eqnarray}\\label{condizioni_esistenza_es1}\n|\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0| \\le \\dfrac{(2m+M)}{(3m+M)}\n\\dfrac{L^2}{R^2} \\dfrac{g}{R}\n\\end{eqnarray}\nsince otherwise one between $\\Phi_{1_z}$ and $\\Phi_{2_z}$ becomes\nnegative (and this is not acceptable, because the system lifts\nfrom the horizontal plane, and the initial assumptions of five\ndegrees of freedom is violated).\n\nIf (\\ref{condizioni_esistenza_es1}) is fulfilled, then we can\nchose for example $\\Phi_{1_u} = \\lambda$ and we find the reactions\n$\\underline{{\\bf \\Phi}}_{T_1}, \\underline{{\\bf \\Phi}}_{T_2}$ as\nfunctions of $\\lambda$: Coulomb conditions (\\ref{condizioni_CM})\nthen takes the form:\n\\begin{eqnarray}\\label{condizioni_CM_es1}\n\\left\\{\n\\begin{array}{l}\n|\\lambda| \\le \\mu_1 \\, \\dfrac12 \\left[(2m+M)g - (3m+M)\n\\dfrac{R^3}{L^2}(\\dot{\\vph_2}^2_0 - \\dot{\\vph_1}^2_0) \\right]\n\\\\ \\\\\n\\left| \\dfrac12 (2m+M) \\dfrac{R^2}{L} (\\dot{\\vph_1}^2_0 -\n\\dot{\\vph_2}^2_0) + \\lambda \\right| \\le \\mu_2 \\, \\dfrac12\n\\left[(2m+M)g - (3m+M) \\dfrac{R^3}{L^2}(\\dot{\\vph_1}^2_0 -\n\\dot{\\vph_2}^2_0) \\right]\n\\end{array}\n\\right.\n\\end{eqnarray}\nIn conclusion, the pure rolling of the disks can subsist if and\nonly if (\\ref{condizioni_esistenza_es1}) holds and there is a\n$\\lambda$ such that\n\\begin{eqnarray*}\n\\begin{array}{l}\n\\max \\left\\{- \\dfrac12 \\mu_1 \\left[(2m+M)g - (3m+M)\n\\dfrac{R^3}{L^2}(\\dot{\\vph_2}^2_0 - \\dot{\\vph_1}^2_0) \\right],\n\\right.\n\\\\\n\\qquad\\qquad\\left. - \\dfrac12 (2m+M) \\dfrac{R^2}{L}\n(\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0) \\right.\n\\\\\n\\qquad\\qquad\\qquad\\qquad\\left. - \\dfrac12 \\mu_2 \\left[(2m+M)g -\n(3m+M) \\dfrac{R^3}{L^2}(\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0)\n\\right] \\right\\}\n\\\\ \\\\\n\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\le \\lambda \\le\n\\\\ \\\\\n\\qquad\\qquad \\min \\left\\{ \\dfrac12 \\mu_1 \\left[(2m+M)g - (3m+M)\n\\dfrac{R^3}{L^2}(\\dot{\\vph_2}^2_0 - \\dot{\\vph_1}^2_0) \\right],\n\\right.\n\\\\\n\\left. \\qquad\\qquad \\qquad\\qquad \\dfrac12 \\mu_2 \\left[(2m+M)g -\n(3m+M) \\dfrac{R^3}{L^2}(\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0)\n\\right] \\right.\n\\\\\n\\qquad\\qquad \\qquad\\qquad\\qquad\\qquad \\left. - \\dfrac12 (2m+M)\n\\dfrac{R^2}{L} (\\dot{\\vph_1}^2_0 - \\dot{\\vph_2}^2_0)\n \\right\\}.\n\\end{array}\n\\end{eqnarray*}\n\\subsection{Example 3.}\n\n\\begin{figure}[h] \\label{sfera}\n\\centering\n\\includegraphics[width=0.85\\textwidth]{Fig3.eps}\n\\caption{Sphere on a V--groove}\n\\end{figure}\n\nA mechanical system is formed by a sphere of mass $m$ and radius\n$R$ leaned in an inclined V--groove whose walls are described by\nthe equations\n\\begin{eqnarray*}\n\\pi_1: 2x + y + z \\=0 \\, ; \\quad \\pi_2: - x + y + z \\=0\n\\end{eqnarray*}\nWe introduce an orthonormal base $\\{ {\\underline{\\bf\nk}}^{\\perp}_{1}, {\\underline{\\bf k}}^{\\perp}_{2}, {\\underline{\\bf\nk}}^{\\|} \\}$ where ${\\underline{\\bf k}}^{\\perp}_{1},\n{\\underline{\\bf k}}^{\\perp}_{2}$ are orthogonal to $\\pi_1, \\pi_2$\nrespectively and ${\\underline{\\bf k}}^{\\|} = {\\underline{\\bf\nk}}^{\\perp}_{1}\\times {\\underline{\\bf k}}^{\\perp}_{2}$. The center\n$C$ of the sphere is then determined by the vector $R\n{\\underline{\\bf k}}^{\\perp}_{1} + R {\\underline{\\bf\nk}}^{\\perp}_{2} - s {\\underline{\\bf k}}^{\\|}$, where $s$ is the\ndistance of $C$ from a fixed plane orthogonal to ${\\underline{\\bf\nk}}^{\\|}$ (see Fig. \\ref{sfera}). The rolling conditions in the\ncontact points $T_1, T_2$ determines the angular velocity of the\nsphere in the form ${\\underline{\\omega}} = -\n\\dfrac{\\dot{s}}{R}({\\underline{\\bf k}}^{\\perp}_{1} -\n{\\underline{\\bf k}}^{\\perp}_{2})$ and the system has one degree of\nfreedom: the coordinate $s$.\n\nThe linear and angular momenta equations for the system can be\nwritten as\n\\begin{eqnarray}\\label{sistema_generale_es2}\n\\left\\{\n\\begin{array}{l}\nm\\underline{{g}} + \\underline{{\\bf \\Phi}}_{T_1} + \\underline{{\\bf\n\\Phi}}_{T_2} \\= m \\underline{{\\bf a}}_C\n\\\\ \\\\\n\\overrightarrow{T_1C} \\times m\\underline{{g}} +\n\\overrightarrow{T_1T_2} \\times \\underline{{\\bf \\Phi}}_{T_2} \\=\n{\\bf I}_C(\\underline{\\dot{\\omega}}) + m \\overrightarrow{T_1C}\n\\times \\underline{{\\bf a}}_C\n\\end{array}\n\\right.\n\\end{eqnarray}\nThe projection of the angular momenta equation in the direction of\n$\\overrightarrow{T_1T_2}$ gives the equation of motion of the\nsphere, that is $\\ddot{s} = \\dfrac{5\\sqrt{2}}{18}g$. This relation\nsuffices to obtain from (\\ref{sistema_generale_es2}) the\nunder--determined system of the reactions: if we decompose the\nreactions along the basis introduced above\n\\begin{eqnarray*}\n\\underline{{\\bf \\Phi}}_{T_1} = \\Phi_{1_N}{\\underline{\\bf\nk}}^{\\perp}_{1} + \\Phi_{1_u}{\\underline{\\bf k}}^{\\perp}_{2} +\n\\Phi_{1_v}{\\underline{\\bf k}}^{\\|}\n\\\\ \\\\\n\\underline{{\\bf \\Phi}}_{T_2}= \\Phi_{2_u}{\\underline{\\bf\nk}}^{\\perp}_{1} + \\Phi_{2_N}{\\underline{\\bf k}}^{\\perp}_{2} +\n\\Phi_{2_v}{\\underline{\\bf k}}^{\\|}\n\\end{eqnarray*}\nthe system takes the form\n\\begin{eqnarray}\\label{reazioni_es2}\n\\left\\{\n\\begin{array}{ccl}\n\\Phi_{1_v} &=& \\dfrac{\\sqrt{2}}{9} mg\n\\\\ \\\\\n\\Phi_{2_v} &=& \\dfrac{\\sqrt{2}}{9} mg\n\\\\ \\\\\n\\Phi_{1_N} + \\Phi_{2_u} &=& \\dfrac{1}{\\sqrt{6}} mg\n\\\\ \\\\\n\\Phi_{1_u} + \\Phi_{2_N} &=& \\dfrac{1}{\\sqrt{3}} mg\n\\\\ \\\\\n\\Phi_{2_N} + \\Phi_{2_u} &=& \\dfrac{1}{\\sqrt{3}} mg\n\\end{array}\n\\right.\n\\end{eqnarray}\nTo analyze the parametric solution of the system we chose\n$\\Phi_{1_u} = \\lambda mg$. In this case, and once again supposing\nthe sphere leaned on the groove, we must require the condition\n$\\lambda < \\frac{1}{\\sqrt{6}}$ since otherwise $\\Phi_{1_N} < 0$\nand the sphere comes off the groove. Conditions\n(\\ref{condizioni_CM}) take in this case the form\n\\begin{eqnarray}\\label{condizioni_CM_es2}\n\\left\\{\n\\begin{array}{l}\n\\lambda^2 + \\dfrac{2}{81} \\le \\mu_1^2 \\,\n\\left(\\dfrac{1}{\\sqrt{6}} - \\lambda \\right)^2\n\\\\ \\\\\n\\lambda^2 + \\dfrac{2}{81} \\le \\mu_2^2 \\, \\left(\\dfrac{1}{\\sqrt{3}}\n- \\lambda \\right)^2\n\\end{array}\n\\right.\n\\end{eqnarray}\nwith $\\lambda < \\dfrac{1}{\\sqrt{6}}$. A straightforward minimum\ncomputation for the functions on the left-hand side of\n(\\ref{condizioni_CM_es2}) shows then that the system can roll on\nboth the contact points if and only if\n\\begin{eqnarray}\\label{condizioni_CM_es2_bis}\n\\left\\{\n\\begin{array}{l}\n \\mu_1 \\ge \\dfrac{2}{\\sqrt{31}}\n\\\\ \\\\\n\\mu_2 \\ge \\sqrt{\\dfrac{2}{29}}\n\\end{array}\n\\right.\n\\end{eqnarray}\n\n\\section{Conclusions}\nThe procedure described in Sec. 2 in the case of two contact\npoints can be generalized to (multibody) systems with three or\nmore contact points (think for example of a ''steering tricycle''\nformed by three vertical disks connected with three rods leaned on\nthe horizontal plane). Of course, an increase of the number of\ncontact points implies in general an increase of the technical\ndifficulties in practical applications. This is principally due to\nthe fact that Step 1 of the general procedure is not a\nstraightforward passage. The effective knowledge of the motion of\nthe system can be achieved only in some particular cases.\nInsurmountable technical difficulties can arise both for\ngeometrical reasons (think of a convex rigid body moving in\ncontact with a surface, both having generic shapes with the only\nrequirement that the contact between rigid body and groove happens\nin two points. For a more detailed discussion on the argument,\nsee, e.g. \\cite{Hermans1995,BoriMama2002}), and\/or for\ncomputational reasons (even when the equations of motion of the\nsystem are explicitly obtained, it could be hard to integrate them\nto obtain the motion of the system). Nevertheless note that, as\npointed out by the examples in Sec. 3, not for all the systems the\nexplicit integration of the equations of motion is required.\n\nA second remark is that the general procedure gives necessary\nconditions such that the pure rolling subsists in all contact\npoints (conditions that become sufficient if we take into account\nStep 4 of the procedure) but it does not give any information on\nthe behavior of the system if the pure rolling is not possible\neven in a single contact point. In fact, analogously to what\nhappens in the simple case of Ex. 1, in the instant when\n(\\ref{condizioni_CM}) stops to hold, the dynamics of the system\n(for example, the number of degrees of freedom) changes abruptly.\n\n\nTo clarify this fact, suppose that, at the instant $t_1$ of the\nstudy of the system of Ex. 2, a sudden variation of the friction\ncoefficient $\\mu_2$ in the point $T_2$ (an oil spot on the plane?)\ncauses the invalidity of the second relation of\n(\\ref{condizioni_CM_es1}), while the first relation still holds.\nOf course, even if we chose the assumption of Step 4 of the\nprocedure as a fixed point of our argument, we cannot suppose that\nthe system continues to roll in $T_1$ (and begins to slide in\n$T_2$) since the beginning of sliding in $T_2$ can affect the pure\nrolling behavior of the system in the point $T_1$. We must perform\na new analysis of the behavior of the system, possibly supposing\nthe system rolling in $T_1$ and sliding in $T_2$, we must\ndetermine (if possible) the new equations of motion of the system\n(with the additional difficulties of different friction laws in\nthe point $T_1$ and $T_2$ and possibly increased number of degrees\nof freedom), the motion of the system, the new (parametric) system\nof reactions acting on the system and then we can test the Coulomb\ncondition in the point $T_1$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSome of the theoretically most fascinating aspects of crack\npropagation in amorphous materials are the\ninstabilities that are observed in well controlled laboratory\nexperiments \\cite{99FM}. Besides some exceptions, (see for example\n\\cite{93YS, 95ABP, 03BHP} and also \\cite{07LBDF, 07BP}), it would be\nfair to say that the observed instabilities are still poorly\nunderstood. It is the opinion of the present authors that the reason\nfor the relative lack of understanding is that the theory of crack\npropagation did not treat cracks as moving free boundaries whose\ninstabilities stem from the dynamics of the free boundary itself.\nInstead, ``crack tip dynamics\" were replaced by energy balance within the theory of Linear\nElastic Fracture Mechanics \\cite{Freund}, together with an ad-hoc ``law'' of\none nature or another as to where a crack is supposed to move.\n\nIn principle this undesirable state of affairs can be greatly\nimproved within the Shear-Transformation-Zones (STZ) theory of\namorphous materials \\cite{79Arg, 79AK,98FL, 07BLanP}. This theory treats developing cracks or growing cavities as free boundaries of a material in which both elasticity\nand plasticity are taken into account, preserving all the symmetries\nand conservation laws that promise a possibly correct theory of\namorphous materials driven out of mechanical equilibrium. This theory in its various\nappearances was compared to a number of experiments and simulations (see below),\nwith a growing confidence that although not final, STZ theory is\ndeveloping in the right direction. Indeed, the application of a\nhighly simplified version of STZ theory to crack propagation\nresulted in physically interesting predictions, explaining how\nplasticity can intervene in blunting a crack tip and resulting in\nvelocity selection \\cite{06BPP}. The application of the full\nfledged theory of STZ to crack propagation is still daunting\n(although not impossible) due to the tensorial nature of the theory\nand the need to deal with an extremely stiff set of partial\ndifferential equations with a wide range of time-scales and\nlength-scales involved. For that reason it seemed advantageous to\napply the full theory to a situation in which the symmetries reduce\nthe problem to an effectively scalar theory; this is the problem of\na circular cavity developing under circular symmetric stress\nboundary conditions \\cite{07BLLP, 07BLP}. While this problem does not reach the extreme\nconditions of stress concentration that characterizes a running\nslender crack, it still raises many physical issues that appear also\nin cracks, in particular the give-and-take between elasticity and\nplasticity, the way stresses are transmitted to moving boundaries\n(in apparent excess of the material yield stress) and most\nimportantly for this paper, the possible existence of dynamical\ninstabilities of the moving free boundary. This last issue might\nalso be connected to the difference between ductile\nand brittle behaviors. In the former, a growing cavity is\nlikely to remain rather smooth, whereas in the latter, one may expect an\ninstability resulting in the growth of ``fingers'', possibly ending\nup being cracks. It is one of the challenges of the present paper to\nexamine whether the theory may predict a transition, as a function\nof material parameters or a constitutive relation, between these two\ntypes of behavior.\n\nNote that we have chosen to study the problem in a purely 2-dimensional geometry; recently\nquasi 2-dimensional systems exhibited interesting failure dynamics in laboratory experiments, where\nthe 3'rd dimension appears irrelevant for the observed phenomena \\cite{07LBDF,04SVC}. Our motivation here is however theoretical, to reduce the unnecessary analytic and numerical complications to a minimum and to gain insight as to the main physical effects under the assumption that the thin\nthird dimension in real systems does not induce a catastrophic change in behavior. When this assumption fails, as it does in some examples c.f. \\cite{99FM}, the analysis must be extended\nto include the third dimension. This is beyond the scope of this paper.\n\nIn Sec. \\ref{EBC} we present the equations that describe the problem at hand and specify their boundary conditions. Particular attention is paid to distinguish between the general Eulerian formulation which is model-independent (Subsec. \\ref{general}) and the constitutive relations involving plasticity where the STZ model is explained (Subsec. \\ref{STZ}). This section finishes with the presentation of the unperturbed problem, preparing the stage for the linear stability analysis which is discussed in Sec.\n\\ref{LSA}. In this section we present a general analysis where inertia and elastic compressibility effects are taken into account. In Appendix \\ref{QS} we complement the analysis by considering the ``quasistatic\" (when the velocity of the boundary is sufficiently small) and incompressible case (when the bulk modulus is sufficiently large) and show that both formulations agree with one another in the relevant range.\nThe results of the stability analysis are described in detail in Sec. \\ref{results} and a few concluding remarks are offered in Sec. \\ref{discussion}.\n\n\\section{Equations and Boundary Conditions}\n\\label{EBC}\n\n\\subsection{General formulation}\n\\label{general}\n\nWe start by writing down the full set of equations for a general\ntwo-dimensional elasto-viscoplastic material. A basic assertion of this theory is that\nplastic strain tensors in such materials are not state variables since their values depend on the entire history of deformation. Thus, one begins by introducing the total rate of deformation tensor\n\\begin{equation}\n\\B D^{\\rm tot} \\equiv \\frac{1}{2}\\Big[\\B \\nabla \\B v + \\left(\\B \\nabla \\B v\\right)^T\\Big] \\ , \\label{D_tot}\n\\end{equation}\nwhere $\\B v(\\B r, t)$ is the material velocity at the location $\\B\nr$ at time $t$ and $T$ denotes here the transpose of a tensor. This type of Eulerian formulation has the enormous\nadvantage that it disposes of any reference state, allowing free\ndiscussion of small or large deformations. As is required in an Eulerian frame we\nemploy the full material time derivative for a tensor $\\B T$,\n\\begin{equation}\n\\frac{{\\cal D} \\B T}{{\\cal D} t} = \\frac{\\partial \\B T}{\\partial t}\n+ \\B v\\cdot \\B\\nabla \\B T +\\B T \\cdot \\B \\omega - \\B \\omega\\cdot \\B\nT \\ , \\label{material}\n\\end{equation}\nwhere $ \\B\\omega$ is the spin tensor\n\\begin{equation}\n\\B \\omega \\equiv \\frac{1}{2}\\Big[\\B \\nabla \\B v - \\left(\\B \\nabla \\B v\\right)^T\\Big] \\ . \\label{omega}\n\\end{equation}\nFor a scalar or vector quantity $\\B V$ the commutation with the spin\ntensor vanishes identically. The Eulerian approach allows a natural\nformulation of moving free boundary problems; this will be shown to\nlead to a significant advance compared to more conventional\ntreatments.\n\nThe plastic rate of deformation tensor $\\B D^{pl}$ is introduced by assuming that the total\nrate of deformation tensor $\\B D^{\\rm tot}$ can be written as a sum of a linear\nelastic and plastic contributions\n\\begin{equation}\n\\B D^{\\rm tot} = \\frac{{\\cal D} \\B \\epsilon^{el}}{{\\cal D} t}\n+\\B D^{pl} \\label{el_pl} \\ .\n\\end{equation}\nWe further assume that $\\B D^{pl}$ is a traceless tensor, corresponding to incompressible plasticity. All possible\nmaterial compressibility effects in our theory are carried by the elastic component of the deformation.\nThe components of the linear elastic strain tensor $\\B \\epsilon^{el}$ are related to\nthe components of stress tensor, whose general form is\n\\begin{equation}\n\\sigma_{ij} = -p\\delta_{ij} + s_{ij} \\ , \\quad p=-\\frac{1}{2}\n\\sigma_{kk} \\ , \\label{sig}\n\\end{equation}\naccording to\n\\begin{equation}\n\\epsilon^{el}_{ij} = -\\frac{p}{2K}\\delta_{ij} + \\frac{s_{ij}}{2\\mu}\n\\ , \\label{linear}\n\\end{equation}\nwhere $K$ and $\\mu$ are the two dimensional bulk and shear moduli\nrespectively. The tensor ${\\B s}$ is referred to hereafter as the\n``deviatoric stress tensor\" and $p$ as the pressure. The equations\nof motion for the velocity and density are\n\\begin{eqnarray}\n\\label{eqmot1} \\rho \\frac{{\\cal D} \\B v }{{\\cal D} t} &=& \\B \\nabla\\!\\cdot\\!\\B\n\\sigma = -\\B \\nabla p+ \\B \\nabla\\!\\cdot\\! \\B s \\ , \\\\\n\\quad \\frac{{\\cal D} \\rho}{{\\cal D} t} &=& -\\rho \\B \\nabla\\!\n\\cdot\\! \\B v \\ . \\label{eqmot2}\n\\end{eqnarray}\n\nIn order to prepare the general set of equations for the analysis of\na circular cavity we rewrite the equations in polar\ncoordinates. For that aim we write\n\\begin{equation}\n\\label{polarO}\n\\B \\nabla = \\B e_r \\partial_r +\\frac{\\B e_\\theta}{r} \\partial_\\theta ,\\quad \\B v = v_r \\B e_r + v_\\theta \\B e_\\theta \\ ,\n\\end{equation}\nwhere $\\B e_r$ and $\\B e_\\theta$ are unit vectors in the radial and azimuthal directions respectively. These expressions enable us to represent the divergence operator $\\B \\nabla \\cdot$ in the equations of motion and the covariant derivative $\\B v \\!\\cdot\\! \\B \\nabla$ in the material time derivative of vectors and tensors. Some care should be taken in evaluating these differential operators in polar coordinates since the unit vectors themselves vary under differentiation according to\n\\begin{equation}\n\\label{unit_vectors}\n\\partial_r \\B e_r=0,\\quad \\partial_r\\B e_\\theta=0,\\quad \\partial_\\theta\\B e_r=\\B e_\\theta,\\quad \\partial_\\theta \\B e_\\theta=-\\B e_r \\ .\n\\end{equation}\nWe then denote $s_{rr}\\equiv -s$, $s_{\\theta \\theta} \\equiv s$,\n$s_{r\\theta}=s_{\\theta r} \\equiv \\tau$ and using Eqs. (\\ref{sig}) we obtain\n\\begin{eqnarray}\n\\label{sig_p_s}\n\\sigma_{rr} &=& -s -p \\ ,\\nonumber\\\\\n\\sigma_{\\theta \\theta} &=& s-p \\ ,\\nonumber\\\\\n\\sigma_{r \\theta} &=& \\sigma_{ \\theta r} =\\tau \\ .\n\\end{eqnarray}\nIn this notation the equations of motion\n(\\ref{eqmot1}) can be rewritten explicitly as\n\\begin{widetext}\n\\begin{eqnarray}\n\\rho \\left(\\frac{\\partial v_r}{\\partial t} \\!+\\! v_r \\frac{\\partial\nv_r}{\\partial r}\\!+\\! \\frac{v_\\theta}{r} \\frac{\\partial v_r}{\\partial\n\\theta}-\\frac{v_\\theta^2}{r}\\right)\\!&=&\\! \\frac{1}{r}\\frac{\\partial \\tau}{\\partial \\theta}\n-\\frac{1}{r^2} \\frac{\\partial }{\\partial r} \\left ( r^2 s \\right)\\! -\\!\n\\frac{\\partial\np}{\\partial r} \\nonumber \\ , \\\\\n\\rho \\left(\\frac{\\partial v_\\theta}{\\partial t} \\!+\\! v_r\n\\frac{\\partial v_\\theta}{\\partial r}\\!+\\! \\frac{v_\\theta}{r}\n\\frac{\\partial v_\\theta}{\\partial \\theta} +\\frac{v_\\theta v_r}{r}\\right)\\!&=&\\!\\frac{\\partial\n\\tau}{\\partial r}\\! +\\! \\frac{1}{r} \\frac{\n\\partial s}{\\partial \\theta}\\! -\\! \\frac{1}{r}\n\\frac{ \\partial p} {\\partial \\theta} \\!+\\!\\frac{2 \\tau}{r} \\ , \\nonumber\\\\\n\\label{EOM}\n\\end{eqnarray}\n\\end{widetext}\nwhere $\\B \\nabla\\!\\cdot\\!\\B \\sigma$ is calculated explicitly in Appendix \\ref{polar}.\n\nEquations (\\ref{el_pl}) can be rewritten in components form as\n\\begin{eqnarray}\n\\label{eq:DA_ij}\nD^{\\rm tot}_{ij}&=&\n\\frac{\\partial \\epsilon^{el}_{ij}}{\\partial t} + \\left(\\B v \\cdot \\B \\nabla \\B \\epsilon^{el} \\right)_{ij}\\\\\n&+&\\epsilon^{el}_{ir}\\omega_{rj}+\\epsilon^{el}_{i\\theta} \\omega_{\\theta j}-\n\\omega_{ir}\\epsilon^{el}_{rj}-\\omega_{i \\theta}\\epsilon^{el}_{\\theta j}+ D^{pl}_{ij}\\ .\\nonumber\n\\end{eqnarray}\nHere the components of the total rate of deformation tensor are related to the\nvelocity according to Eqs. (\\ref{D_tot}) as\n\\begin{eqnarray}\nD_{rr}^{\\rm tot} &\\equiv& \\frac{\\partial v_r}{\\partial r},\\quad\n D_{\\theta\\theta}^{\\rm tot} \\equiv \\frac{\\partial_\\theta v_\\theta +\n v_r}{r} \\ ,\\nonumber\\\\\nD_{r \\theta}^{\\rm tot} &\\equiv& \\frac{1}{2} \\left[ \\partial_r\nv_\\theta + \\frac{\\partial_\\theta v_r - v_\\theta}{r} \\right] \\ ,\n\\label{eq:totalrate}\n\\end{eqnarray}\nwhere the components of the spin tensor $\\B \\omega$ in Eq. (\\ref{omega}) are given by\n\\begin{eqnarray}\n\\omega_{rr}&=&\\omega_{\\theta \\theta}=0 \\ ,\\nonumber\\\\\n\\omega_{r \\theta}&=& - \\omega_{\\theta r} = \\frac{1}{2} \\left[\n\\frac{\\partial_{\\theta}v_r -v_\\theta}{r} - \\partial_r v_\\theta\n\\right] \\ .\n\\end{eqnarray}\nThe calculation of the tensor $\\B v\\! \\cdot\\! \\B \\nabla \\B \\epsilon^{el}$ is presented in Appendix \\ref{polar}; the linear elastic strain components of Eqs. (\\ref{linear}) are given by\n\\begin{eqnarray}\n\\epsilon_{rr}^{el} &=& - \\frac{p}{2K} -\n\\frac{s}{2\\mu}\\ , \\nonumber\\\\\n\\epsilon_{\\theta \\theta}^{el} &=& - \\frac{p}{2K}\n+ \\frac{s}{2\\mu}\\ , \\nonumber\\\\\n\\epsilon_{r \\theta}^{el} &=& \\epsilon_{\\theta r}^{el}=\\frac{\\tau}{2 \\mu} \\ .\n\\label{eq:stress-strain}\n\\end{eqnarray}\n\nSince most of the materials of interest have a large bulk modulus\n$K$, i.e. they are almost incompressible, we assume that the density\nis constant in space and time\n\\begin{equation}\n\\rho(\\B r,t) \\simeq \\rho \\label{density} \\ .\n\\end{equation}\nTherefore, Eq. (\\ref{eqmot2}) is omitted. Finally, the existence of a free boundary is introduced as the following boundary conditions\n\\begin{equation}\n\\sigma_{ij}n_j=0 \\ , \\label{stressBC}\n\\end{equation}\nwhere $\\B n$ is the unit normal vector at the free boundary.\n\n\\subsection{Viscoplastic constitutive equations:\\\\ The athermal STZ theory}\n\\label{STZ}\n\nUp to now we have considered mainly symmetries and conservation\nlaws. A general theoretical framework for the elasto-viscoplastic\ndeformation dynamics of amorphous solids should be supplemented with\nconstitutive equations relating the plastic rate of deformation\ntensor $\\B D^{pl}$ to the stress and possibly to other internal\nstate fields. We use the constitutive equations of the recently proposed athermal Shear Transformation Zones (STZ) theory \\cite{07BLanP}. This theory is based on identifying the\ninternal state fields that control plastic deformation. The basic\nobservation is that stressing a disordered solid results in localized reorganizations\nof groups of particles. These reorganizations occur\nupon surpassing a local shear threshold, and when they involve a finite irreversible shear in\na given direction, we refer to them as an ``STZ transition\". Once transformed, due to a local redistribution\nof stresses, the same local region resists further deformation in that direction,\nbut is particulary sensitive to shearing transformation if the local\napplied stress reverses its direction. Thus an STZ transition is conceived\nas a deformation unit that can undergo configurational\nrearrangements in response to driving forces. Furthermore, the\nstress redistribution that accompanies an STZ transition can induce the creation and annihilation of other local particle arrangements that can undergo further localized transitions; these\narrangements are formed or annihilated at a rate\nproportional to the local energy dissipation (recall\nthat thermal fluctuations are assumed to be absent or negligible). In this sense the interesting localized\nevents need not depend on ``pre-existing\" defects in the material, but can appear and disappear\ndynamically in a manner that we describe mathematically next.\n\nThis picture is cast into a mathematical form in terms of a scalar field $\\Lambda$\nthat represents the normalized density of regions that can undergo STZ transitions, a tensor $\\B m$ that represents the difference between the density of regions that can undergo a transition under\na given stress and the reversed one, and an effective\ndisorder temperature $\\chi$ that characterizes the state of\nconfigurational disorder of the solid \\cite{04Lan}. The present state of the theory relates\nthese internal state fields,\nalong with the deviatoric stress tensor ${\\B s}$, to the\nplastic rate of deformation tensor $\\B D^{pl}$ according to\n\\begin{eqnarray}\n\\label{eq:Dpl}\n\\tau_0 D^{pl}_{ij} \\!=\\! \\epsilon_0 \\Lambda \\C C(\\bar{s})\\left(\\frac{s_{ij}}{\\bar{s}}-m_{ij}\\right),\\quad\\bar{s} \\equiv \\sqrt{\\frac{s_{ij}s_{ij}}{2}}\\ .\n\\end{eqnarray}\nThis equation represents the dependence of the plastic rate of deformation on the current\nstress $s_{ij}$ and the recent history encoded by the internal state tensorial field $\\B m$.\nThis field acts as a back-stress, effectively reducing the local driving force for STZ transitions, up\nto the possible state of jamming when the whole parentheses vanishes. The parentheses\nprovides information about the orientation of the plastic deformation. The function $ \\C C(\\bar{s})$\ndetermines the magnitude of the effect, and is re-discussed below. The field $\\Lambda$ appears\nmultiplicatively since the rate of plastic deformation must be proportional to the density of STZ.\nThe second equation describes the dynamics of the internal back stress field\n\\begin{eqnarray}\n\\label{eq:m}&&\\tau_0\\frac{{\\cal D} m_{ij} }{{\\cal D} t} =\n2\\frac{\\tau_0 D^{pl}_{ij}}{\\epsilon_0 \\Lambda\n}- \\Gamma(s_{ij}, m_{ij})m_{ij}\\frac{e^{-1\/\\chi}}{\\Lambda} \\ ,\\nonumber\\\\\n&&\\hbox{with}\\quad\\Gamma(s_{ij}, m_{ij}) = \\frac{\\tau_0\ns_{ij}D^{pl}_{ij}}{\\epsilon_0 \\Lambda} \\ .\\label{Gamma}\n\\end{eqnarray}\nThis equation captures the dynamical exchange of stability when the material yields to the applied\nstress. The equation has a jammed fixed point when the plastic deformation vanishes, in agreement\nwith the state of STZ being all in one orientation, without the production of a sufficient number\nof new ones in the other orientation. The jammed state is realized when the applied stress\nis below the yield stress. When the stress exceeds the threshold value the stable fixed point\nof this equation corresponds to a solution with non-vanishing plastic rate of deformation. This\nstate corresponds to a situation where enough STZ are being created per unit time to allow\na persistent plastic flow. The quantity $\\Gamma$ represents the rate of STZ production in response to the flow $\\B D^{pl}$.\nThe next equation, for the density of STZ $\\Lambda$, is an elementary fixed point equation reading\n\\begin{equation}\n\\label{eq:Lambda} \\tau_0 \\frac{{\\cal D} \\Lambda }{{\\cal D} t} =\n\\Gamma(s_{ij}, m_{ij})\\left(e^{-1\/\\chi}- \\Lambda\\right) \\ .\n\\end{equation}\nThe unique fixed point of this equation is the equilibrium solution $\\Lambda=e^{-1\/\\chi}$ where\n$\\chi$ is a normalized temperature-like field which is not necessarily the bath temperature\nwhen the system is out of thermal and\/or mechanical equilibrium. The last equation is\nfor this variable, reading\n\\begin{eqnarray}\n\\label{eq:chi} \\tau_0 c_0 \\frac{{\\cal D} \\chi }{{\\cal D} t}\n&=& \\epsilon_0 \\Lambda \\Gamma(s_{ij},\nm_{ij})\\left[\\chi_\\infty\\left(\\tau_0\\bar{D}^{pl}\\right)-\\chi\\right],\\nonumber\\\\\n\\hbox{with}\\quad \\bar{D}^{pl}&\\equiv & \\sqrt{\\frac{D^{pl}_{ij}D^{pl}_{ij}}{2}} \\ .\n\\end{eqnarray}\nThis is a heat-like equation for the configurational degrees of freedom; it is discussed in detail below.\nHere and elsewhere we assume that quantities of stress dimension\nare always normalized by the yield stress $s_y$; this is\njustified as the STZ equations exhibit an exchange of dynamic\nstability from jamming to flow at $s\\!=\\!1$, i.e. at a stress that\nequals to $s_y$ \\cite{07BLanP}. The set of Eqs. (\\ref{eq:Dpl})-(\\ref{eq:chi}) is a tensorial\ngeneralization of the effectively scalar equations derived in\n\\cite{07BLanP}; such a generalization can be obtained by following the\nprocedure described in Ref. \\cite{05Pech}. In these equations,\n$\\tau_0$ is the elementary time scale of plasticity, $\\epsilon_0$ is\na dimensionless constant and\n$c_0$ is a specific heat in units of $k_B$ per particle.\n\nA weak point of the theory is the lack of a first-principle derivation that determines the\nfunction ${\\cal C}(s)$ in Eq. (\\ref{eq:Dpl}), which lumps together\nmuch of the microscopic physics that controls the stress-dependent\nrate of STZ transitions. Our theory constrains it to\nbe a symmetric function of $s$ that vanishes with vanishing\nderivatives at $s\\!=\\!0$, due to the athermal condition that states\nthat no transitions can occur in a direction opposite to the\ndirection of $s$ \\cite{07BLanP}. This constraint is not sufficient, however, to\ndetermine ${\\cal C}(s)$. To appreciate the uncertainties, recall that\nSTZ transitions are relaxation events, where energy and stress are\nexpected to re-distribute. Even without external mechanical forcing, aging in\nglassy systems involves relaxation events that are poorly\nunderstood \\cite{01LN}. The situation is even more uncertain\nwhen we deal with dynamics far from mechanical equilibrium. The best one can do at present is to choose the function ${\\C C}(s)$ by examining its influence on the resulting\nmacroscopic behaviors \\cite{07BL}.\nThus in this paper we will examine the sensitivity of the stability\nof the expanding cavity to two different choices of ${\\cal C}(s)$. At\npresent we use the one-parameter family of functions, ${\\cal\nC}(\\bar{s})=\\C F(\\bar{s}; \\zeta)$, proposed in \\cite{07BLanP}\n\\begin{equation}\n\\label{C_s} \\C F(\\bar{s};\\zeta)\\equiv\n\\,\\frac{\\zeta^{\\zeta+1}}{\\zeta!}\\int_0^{|\\bar\ns|}(|\\bar s|-s_{\\alpha})\\,s_{\\alpha}^{\\zeta}\\,\\exp (-\\zeta \\,\ns_{\\alpha})\\,d s_{\\alpha}\\ .\n\\end{equation}\nThe integral is over a distribution of transition thresholds whose\nwidth is controlled by a parameter $\\zeta$ (and see \\cite{07BLanP} for\ndetails). For finite values of $\\zeta$ there can be nonzero\nsub-yield plastic deformation for $|s|\\!<\\!1$. This behavior is well\ndocumented in the literature cf. \\cite{Lubliner} in the context of experimental stress-strain\nrelations and plastic deformations. We note that for $s$\nvery small or very large,\n\\begin{eqnarray}\n\\label{limits}\n{\\cal C}(s) &\\sim& s^{\\zeta+2} \\quad\\hbox{for}\\quad s \\to 0^+ \\ ,\\nonumber\\\\\n{\\cal C}(s) &\\simeq& s-1 \\quad\\hbox{for}\\quad s \\gg 1\n\\ .\n\\end{eqnarray}\nIn Sec. \\ref{rate_function} we propose a different one-parameter family of functions $\\C G(\\bar{s}; \\lambda)$ and study in detail the implications of this different choice on the stability of the expanding cavity.\n\nEq. (\\ref{eq:chi}) deserves special attention. It is a heat-like equation\nfor the effective disorder temperature $\\chi$ with a fixed-point\n$\\chi_\\infty$ which is attained under steady state\ndeformation. This reflects the observations of Ref. \\cite{Ono}, where the effective temperature $\\chi$\nwas shown to attain a unique value in the limit\n$t_0\\bar{D}^{pl}\\!\\to\\! 0$, where $t_0$ was the particles\nvibrational time scale. Indeed, in most applications, realistically {\\em\nimposed} inverse strain rates are much larger than the elementary time\nscale $t_0$, i.e. $t_0\\bar{D}^{pl}\\!\\ll\\! 1$. If we identify our $\\tau_0$\nwith the vibrational time scale $t_0$ (see for example \\cite{07BLanPb}), we\nconclude that\n$\\chi_\\infty$ can be taken as a constant, independent of the plastic\nrate of deformation. This assumption was adopted in all\nprevious versions of STZ theory. Note also that a low plastic rate\nof deformation is associated with $s\\!\\to\\! 1^+$, i.e. a deviatoric stress\nthat approaches the yield stress from above. However, the situation\nmight be very different in free boundary evolution problems, where\nhigh stresses concentrate near the boundary, reaching levels of a few times\nthe yield stress. Estimating $\\chi$ in the typical range of $0.1-0.15$ \\cite{07BLanPb, 07SKLF},\n$e^{-1\/\\chi}$ is in the range $10^{-4}\\!-\\!10^{-3}$. Therefore, estimating\nthe other factors in Eq. (\\ref{eq:Dpl}), for the high stresses near\nthe free boundary, in the range $1\\!-\\!10$, we conclude that\n$\\tau_0\\bar{D}^{pl}$ can reach values in the range\n$10^{-4}\\!-\\!10^{-2}$. Very recent simulations \\cite{07HL} demonstrated convincingly that in this range\nof normalized plastic rates of deformation, $\\chi_\\infty$ shows a\nconsiderable dependence on this rate, see Fig. \\ref{HL}. Since $\\chi$\naffects plastic deformation through an exponential Boltzmann-like\nfactor, even small changes of $\\chi_\\infty$ in Eq. (\\ref{eq:chi})\ncan generate significant effects \\cite{comment0}. This issue is of particular\nimportance for the question of stability (and localization) under\nstudy since the strain rate sensitivity of $\\chi_\\infty$ might\nincorporate an instability mechanism; fluctuations in the plastic\nrate of deformation, caused for example by fluctuations in $\\chi$,\ncan induce, through $\\chi_\\infty$, a further localized increase in\nplastic deformation and so on. This intuitive idea will be studied\nin the analysis to follow.\n\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig1.eps}\n\\caption{(Color online) A typical relation between $\\chi_\\infty$ and\n$\\log_{10}\\!\\!\\left(\\tau_0\\bar{D}^{pl}\\right)$ for a temperature\nsignificantly smaller than the glass transition temperature. Data courtesy of T. Haxton and A. Liu. Note that the data were scaled properly.\n}\\label{HL}\n\\end{figure}\n\nThe set of Eqs. (\\ref{eq:Dpl})-(\\ref{eq:chi}) \\cite{comment} (and slight variants)\nwas shown to capture viscoelastic behavior in a variety of examples. These include small stress and finite plasticity at intermediate stresses \\cite{00FL}, a transition to flow at the\nyield stress (as discussed above) \\cite{07BLanP}, the deformation dynamics of simulated amorphous silicon \\cite{07BLanPb}, the necking\ninstability \\cite{03ELP}, the deformation dynamics near stress\nconcentrations \\cite{07BLLP}, the cavitation instability \\cite{07BLP} and\nstrain localization \\cite{07MLC}. In this work we focus on the\nimplications of these constitutive equations on the stability of\npropagating free boundaries in relation to the failure modes of\namorphous solids.\n\\subsection{The unperturbed problem}\n\\label{zeroth}\n\nIn this subsection we adapt the general theory to the circular\nsymmetry of the unperturbed expanding cavity problem. We consider an\ninfinite medium with a circular cavity of radius $R^{(0)}(t)$,\nloaded by a radially symmetric stress $\\sigma^{\\infty}$ at infinity.\nThe superscript $(0)$ in all the quantities denotes the fact that\nthey correspond to the perfectly symmetric case that is going to be\nperturbed later on. For the perfect circular symmetry the velocity\nfield $\\B v^{(0)}(\\B r,\\theta)$ is purely radial and independent of the azimuthal\nangle $\\theta$, i.e.\n\\begin{equation}\nv_r^{(0)}(\\B r,t)=v_r^{(0)}(r,t),\\quad v_\\theta^{(0)}(\\B r,t) = 0 \\\n.\n\\end{equation}\nThis symmetry also implies that\n\\begin{equation}\n\\tau^{(0)}(\\B r,t) = 0, \\quad {D^{pl}_{r \\theta}}^{(0)}(\\B r,\nt)=0,\\quad m^{(0)}_{r \\theta}(\\B r,t)=0\n\\end{equation}\nand all the diagonal components are independent of $\\theta$.\nEqs. (\\ref{el_pl}), after a simple manipulation, can be rewritten as\n\\begin{eqnarray}\n\\frac{v_r^{(0)}}{r}+\\frac{\\partial v_r^{(0)}}{\\partial r}\\! &=&\\!\n-\\frac{1}{K}\\left(\\frac{\\partial p^{(0)}}{\\partial t}+v_r^{(0)}\n\\frac{\\partial p^{(0)}}{\\partial r}\\right)\\ , \\label{kinematic1a} \\\\\n\\frac{v_r^{(0)}}{r}-\\frac{\\partial v_r^{(0)}}{\\partial r}\\!&=&\\!\n\\frac{1}{\\mu}\\left(\\frac{\\partial s^{(0)}}{\\partial t}+v_r^{(0)}\n\\frac{\\partial s^{(0)}}{\\partial r}\\right)\\!+\\!2{D^{pl}}^{(0)} \\ .\n\\nonumber\\\\ \\label{kinematic2a}\n\\end{eqnarray}\nwhere we have defined\n\\begin{equation}\n{D^{pl}_{\\theta \\theta}}^{(0)}=-{D^{pl}_{rr}}^{(0)} \\equiv\n{D^{pl}}^{(0)} \\ . \\label{D}\n\\end{equation}\nThe equations of motion (\\ref{EOM}) reduce to\n\\begin{equation}\n\\rho \\left(\\frac{\\partial v_r^{(0)}}{\\partial t} + v_r^{(0)}\n\\frac{\\partial v_r^{(0)}}{\\partial r} \\right)= -\\frac{1}{r^2}\n\\frac{\\partial }{\\partial r} \\left (r^2 s^{(0)} \\right) -\n\\frac{\\partial p^{(0)}}{\\partial r} \\ .\\label{EOM0}\n\\end{equation}\nThe boundary conditions are given by\n\\begin{eqnarray}\n\\sigma^{(0)}_{rr}(R^{(0)},t)=-p^{(0)}(R^{(0)},t)-s^{(0)}(R^{(0)},t)=0 \\ ,\\nonumber\\\\\n\\sigma^{(0)}_{rr}(\\infty,t)=-p^{(0)}(\\infty,t)-s^{(0)}(\\infty,t)=\\sigma^{\\infty}\n\\label{BC0}.\n\\end{eqnarray}\nThe initial conditions are chosen to agree with the solution of the static linear-elastic problem, i.e.\n\\begin{eqnarray}\np^{(0)}(r,t=0)&=&-\\sigma^{\\infty}\\ , \\nonumber\\\\\ns^{(0)}(r,t=0)&=& \\sigma^{\\infty}\\frac{\\left(R^{(0)}(t=0)\\right)^2}{r^2} \\ , \\nonumber\\\\\nv_r^{(0)}(r,t=0)&=& 0 \\label{initial0} \\ .\n\\end{eqnarray}\nThis choice reflects the separation of time scales between elastic and plastic responses. This\nseparation of time scales can be written explicitly in terms of the typical elastic wave speed, the\nradius of the cavity and the time scale of plasticity:\n\\begin{equation}\nR^{(0)}(t=0)\\sqrt{\\frac{\\rho}{\\mu}} \\ll \\tau_0e^{1\/\\chi} \\ .\n\\end{equation}\n Finally, the rate of the cavity growth is simply determined by\n\\begin{equation}\n\\dot{R}^{(0)}(t)=v_r^{(0)}(R^{(0)},t) \\ . \\label{edge_velocity0}\n\\end{equation}\n\nFor the circular symmetry, the STZ equations\n(\\ref{eq:Dpl})-(\\ref{Gamma}) reduce to\n\\begin{eqnarray}\n\\label{eq:Dpl0}\n&\\tau_0&\\!\\!\\! {D^{pl}}^{(0)} = \\epsilon_0 \\Lambda^{(0)} \\C C(s^{(0)})\\left(\\frac{s^{(0)}}{|s^{(0)}|}-m^{(0)}\\right) \\ , \\\\\n\\label{eq:m0} &\\tau_0&\\!\\!\\! \\left(\\frac{\\partial m^{(0)}}{\\partial\nt}+v_r^{(0)}\\frac{\\partial m^{(0)}}{\\partial r}\\right)=\\nonumber\\\\\n&2&\\!\\!\\!\\frac{\\tau_0 {D^{pl}}^{(0)}}{\\epsilon_0 \\Lambda^{(0)}\n}- \\Gamma^{(0)}(s^{(0)}, m^{(0)})m^{(0)}\\frac{e^{-1\/\\chi^{(0)}}}{\\Lambda^{(0)}} \\ ,\\\\\n\\label{eq:Lambda0} &\\tau_0&\\!\\!\\! \\left(\\frac{\\partial\n\\Lambda^{(0)}}{\\partial t}+v_r^{(0)}\\frac{\\partial\n\\Lambda^{(0)}}{\\partial r}\\right) =\\nonumber\\\\\n&\\Gamma&\\!\\!\\!\\!^{(0)}(s^{(0)}, m^{(0)})\\left(e^{-1\/\\chi^{(0)}}- \\Lambda^{(0)}\\right) \\ ,\\\\\n\\label{eq:chi0} &\\tau_0&\\!\\!\\! c_0 \\left(\\frac{\\partial\n\\chi^{(0)}}{\\partial t}+v_r^{(0)}\\frac{\\partial \\chi^{(0)}}{\\partial\nr}\\right) =\\\\\n&\\epsilon_0&\\!\\!\\! \\Lambda^{(0)} \\Gamma^{(0)}(s^{(0)},\nm^{(0)})\\left[\\chi_\\infty\\left(\\tau_0{D^{pl}}^{(0)}\\right)\\! -\\!\n\\chi^{(0)}\\right] \\ . \\nonumber\n\\end{eqnarray}\n\nNote that the $\\chi$ and $D^{pl}$ equations contain a factor of the small STZ density $\\epsilon_0 \\Lambda^{(0)}$, which\nimplies they are much stiffer than the $m$ and $\\Lambda$ equations.\nTherefore, whenever the advection terms can be neglected this\nseparation of time scales \\cite{07BLLP} allows us to replace the equations for\n$m^{(0)}$ and $\\Lambda^{(0)}$ by their stationary solutions\n\\begin{equation}\n\\label{m0_fix} m^{(0)}=\\cases{ \\frac{s^{(0)}}{|s^{(0)}|} &if $|\ns^{(0)}|\\le 1$\\cr \\frac{1}{ s^{(0)}} & if $| s^{(0)}| >1$}\n\\end{equation}\nand\n\\begin{equation}\n\\Lambda^{(0)} = e^{-1\/\\chi^{(0)}} \\ . \\label{Lam0_fix}\n\\end{equation}\nNote that Eq. (\\ref{eq:m0}) has two stable fixed-point solutions given by\nEq. (\\ref{m0_fix}), where we used Eq. (\\ref{Lam0_fix}) and omitted the advection term. The transition between these two solutions corresponds to a transition between a jammed\nand a plastically flowing state for a deviatoric stress below and above the yield stress respectively \\cite{07BLanP}. Eq. (\\ref{eq:Dpl0}) exhibits the corresponding solutions in terms of the plastic rate of deformation, zero and finite, below and above the yield stress respectively.\n\nThe unperturbed problem was studied in detail in Ref. \\cite{07BLP}.\nIt was shown that for stresses $\\sigma^\\infty$ smaller than\na threshold value $\\sigma^{th}\\!\\simeq\\!5$ the cavity exhibits transient\ndynamics in which its radius approaches a finite value in a finite\ntime. When this happens the material is jammed. On\nthe other hand, for $\\sigma^\\infty\\!>\\!\\sigma^{th}$ the cavity grows without bound, leading to a catastrophic failure of the material, accompanied by large scale plastic\ndeformations. We stress that to our knowledge this mode of failure by propagating a plastic solution\nis new, apparently not related to other recently discovered failure fronts \\cite{06GSW}.\nOne major goal of the present study is to analyze the\nstability of the unbounded growth modes that result from this\ncavitation. However, we are also interested in the range\n$\\sigma^\\infty\\!<\\!\\sigma^{th}$ where the unperturbed theory\npredicts no catastrophic failure. In this range, a failure can still\noccur if the cavity, prior to jamming, loses its perfect circular\nsymmetry in favor of relatively slender propagating ``fingers''. In\nthat case, stress localization near the tips of the propagating\n``fingers'' can lead to failure via fracture. Such a scenario is typical of brittle fracture where the stress\nlocalization due to the geometry of the defect drives crack\npropagation that might lead to macroscopic failure.\n\n\\section{Linear Stability Analysis}\n\\label{LSA}\n\nWe derive here a set of equations for the linear\nperturbations of the perfect circular symmetry where both inertia and\nelastic compressibility effects are taken into account. In Appendix \\ref{QS} we complement the analysis by considering the quasi-static and incompressible case. This case is mathematically more involved as it contains no explicit time evolution equation for the velocity and the pressure fields. By comparing the results of the two\nformulations we test for consistency and obtain some degree of\nconfidence in the derivation and the numerical implementation of the\nequations presented in this section.\n\n\n\\subsection{Equations of motion and kinematics}\n\\label{inertial}\n\nThe quantities involved in the problem are the tensors\n\\begin{equation}\n\\B s=\\left(\\begin{array}{cc}-s&\\tau\\\\\\tau&s\\end{array}\\right)\\\n,\\quad\n{\\B\nD}^{pl}=\\left(\\begin{array}{cc}-D^{pl}&D^{pl}_{r\\theta}\\\\D^{pl}_{r\\theta}&D^{pl}\\end{array}\\right)\n\\ ,\n\\end{equation}\nas well as the pressure $p(\\B r,t)$, the velocity $\\B v(\\B r,t)$ and\nthe location of the free boundary $R(\\theta,t)$. We start by\nexpanding all these quantities as follows\n\\begin{eqnarray}\nR(\\theta, t)&=& R^{(0)}(t) + e^{i n \\theta} R^{(1)}(t)\\ ,\\nonumber\\\\\ns(r,\\theta, t)&=&s^{(0)}(r,t) + e^{i n \\theta} s^{(1)}(r,t)\\ ,\\nonumber\\\\\n\\tau (r,\\theta, t)&=&i e^{i n \\theta} \\tau^{(1)}(r,t)\\nonumber\\\\\np(r,\\theta, t)&=&p^{(0)}(r,t) + e^{i n \\theta} p^{(1)}(r,t)\\ ,\\nonumber\\\\\nv_\\theta(r,\\theta, t)&=&i e^{i n \\theta} v_\\theta^{(1)}(r,t)\\ ,\\nonumber\\\\\nv_r(r,\\theta, t)&=&v_r^{(0)}(r,t) + e^{i n \\theta} v_r^{(1)}(r,t)\\ ,\\nonumber\\\\\nD^{pl}(r,\\theta, t)&=& {D^{pl}}^{(0)}(r,t)+\ne^{in\\theta}{D^{pl}}^{(1)}(r,t)\\ ,\\nonumber\\\\\nD^{pl}_{r\\theta}(r,\\theta, t) &=& i\ne^{in\\theta}{D^{pl}_{r\\theta}}^{(1)}(r,t) \\ .\n\\label{basic_perturbations}\n\\end{eqnarray}\nHere all the quantities with the superscript $(1)$ are assumed to be\nmuch smaller than their $(0)$ counterparts and $n$ is the discrete\nazimuthal wave-number of the perturbations. The small perturbation hypothesis results in a formal linear decomposition in which each linear mode of wave-number $n$ is decoupled from all the other modes. When nonlinear contributions are non-negligible, all the modes become coupled and the formal linear decomposition is invalid.\n\nWe expand then the equations of motion\n(\\ref{EOM}) to first order to obtain\n\\begin{eqnarray}\n\\label{EOM1}\n&&\\rho \\left(\\frac{\\partial v_r^{(1)}}{\\partial t} +v_r^{(0)}\n\\frac{\\partial v_r^{(1)}}{\\partial r}+v_r^{(1)} \\frac{\\partial\nv_r^{(0)}}{\\partial r} \\right)= \\nonumber\\\\\n&&-\\frac{n\\tau^{(1)}}{r} -\\frac{1}{r^2} \\frac{\\partial }{\\partial r}\n\\left ( r^2 s^{(1)} \\right) - \\frac{\\partial p^{(1)}}{\\partial r} \\\n, \\\\\\label{EOM2} &&\\rho \\left(\\frac{\\partial\nv_\\theta^{(1)}}{\\partial t} +v_r^{(0)}\n\\frac{\\partial v_\\theta^{(1)}}{\\partial r}+\n\\frac{v_r^{(0)} v_\\theta^{(1)}}{r}\\right)=\\nonumber\\\\\n&&\\frac{\\partial \\tau^{(1)}}{\\partial r} + \\frac{n s^{(1)}}{r}\n - \\frac{n p^{(1)}}{r}\n +\\frac{2 \\tau^{(1)}}{r} \\ .\n\\end{eqnarray}\nWe proceed by expanding Eqs. (\\ref{el_pl}) to first order, which after a simple manipulation yields\n\\begin{eqnarray}\n\\label{first_eq}&&\\frac{\\partial v^{(1)}_r}{\\partial r}+\\frac{-n v^{(1)}_\\theta +\nv^{(1)}_r}{r}=\\\\\n&& -\\frac{1}{K}\\left(\\frac{\\partial p^{(1)}}{\\partial t}+v_r^{(0)}\n\\frac{\\partial p^{(1)}}{\\partial r}+v_r^{(1)}\n\\frac{\\partial p^{(0)}}{\\partial r}\\right)\\ ,\\nonumber\\\\\n&&\\frac{-n v^{(1)}_\\theta + v^{(1)}_r}{r}-\\frac{\\partial\nv^{(1)}_r}{\\partial r} =\\\\\n&&\\frac{1}{ \\mu} \\left[ \\frac{\\partial s^{(1)}}{\\partial t} +\nv_r^{(0)} \\frac{\\partial s^{(1)}}{\\partial r} + v^{(1)}_r\n\\frac{\\partial s^{(0)}}{\\partial r}\\right] + 2{D^{pl}}^{(1)} \\ ,\n\\nonumber\\\\\n&&\\frac{1}{2} \\left[\\frac {\\partial v^{(1)}_\\theta}{\\partial r} +\n\\frac{n v^{(1)}_r - v^{(1)}_\\theta}{r} \\right] = \\\\\n&&\\frac{1}{2 \\mu} \\left[ \\frac{\\partial \\tau^{(1)} }{\\partial t} +\nv^{(0)}_r \\frac{\\partial \\tau^{(1)}}{\\partial r} -\\frac{2 s^{(0)} v_\\theta^{(1)}}{r}\\right]\n + {D^{pl}_{r \\theta}}^{(1)}\\nonumber \\ . \\label{last_eq}\n\\end{eqnarray}\n\nAt this point we derive an\nevolution equation for the dimensionless amplitude of the shape\nperturbation $R^{(1)}\/R^{(0)}$. To that aim we note that\n\\begin{equation}\n\\dot{R}=v_r(R)+{\\cal O}\\left[\\left(\\frac{R^{(1)}}{R^{(0)}} \\right)^2\n\\right] \\ .\n\\end{equation}\nExpanding this relation using Eqs. (\\ref{basic_perturbations}), we\nobtain to zeroth order Eq. (\\ref{edge_velocity0}) and to first order\n\\begin{equation}\n\\dot{R}^{(1)}(t)=v_r^{(1)}(R^{(0)})+R^{(1)} \\frac{\\partial\nv_r^{(0)}(R^{(0)})}{\\partial r} \\ . \\label{edge_velocity1}\n\\end{equation}\nTherefore, we obtain\n\\begin{eqnarray}\n\\label{smallness}\n&&\\frac{d}{dt}\\left(\\frac{R^{(1)}}{R^{(0)}}\n\\right)=\\\\\n&&\\frac{R^{(1)}}{R^{(0)}}\\!\\left[\\!\\frac{v_r^{(1)}(R^{(0)})}{R^{(1)}}\\!+\\!\\frac{\\partial\nv_r^{(0)}(R^{(0)})}{\\partial r}\\!-\\!\n\\frac{v_r^{(0)}(R^{(0)})}{R^{(0)}}\\!\\right] \\ .\\nonumber\n\\end{eqnarray}\nThis is an important equation since a linear instability manifests itself\nas a significant increase in $R^{(1)}\/R^{(0)}$ such that\nnonlinear terms become non-negligible. Note that the two last terms\nin the square brackets are always negative, therefore an instability\ncan occur only if the first term in the square brackets is positive\nwith absolute value larger than the sum of the two negative terms.\nMoreover, recall that the problem is non-stationary, implying that\nall the zeroth order quantities depend on time.\n\nIn order to derive the boundary conditions for the components of the\nstress tensor field we expand to linear order the normal unit vector\n$\\B n$ (not to be confused with the discrete wave-number $n$) and\ntangential unit vector $\\B t$ at the free boundary, obtaining\n\\begin{equation}\n\\label{unit_n} \\B n = \\left( 1, -i \\frac{R^{(1)}}{R^{(0)}}ne^{in\n\\theta} \\right)\\ ,\\quad \\label{unit_t} \\B t = \\left(i\n\\frac{R^{(1)}}{R^{(0)}}ne^{in \\theta} , 1 \\right) \\ .\n\\end{equation}\nEqs. (\\ref{stressBC}), expanded to first order, translate to\n\\begin{eqnarray}\n\\label{spb}\ns^{(1)}(R^{(0)})\\!\\!&+&\\!\\!p^{(1)}(R^{(0)})=\\nonumber\\\\\n\\!\\!&-&\\!\\!R^{(1)}\\left[\\frac{\\partial\ns^{(0)}(R^{(0)})}{\\partial r}+ \\frac{\\partial p^{(0)}(R^{(0)})}{\\partial r}\\right]\\ , \\\\\n\\label{taub} \\tau^1(R^{(0)})\\!\\!&=&\\!\\! n\n\\left[s^{(0)}(R^{(0)})-p^{(0)}(R^{(0)})\\right]\n\\frac{R^{(1)}}{R^{(0)}}.\n\\end{eqnarray}\nIn addition, all the first\norder fields decay as $r\\!\\to\\!\\infty$. The initial conditions are determined by the\nperturbation scheme that is being studied.\n\nTo avoid dealing with an infinite and time-dependent domain we applied\nthe following time-dependent coordinate transformation\n\\begin{equation}\n\\xi=R(t)\/r \\ . \\label{trans}\n\\end{equation}\nThis transformation allows us to integrate the equations in the\ntime-independent finite domain $\\xi\\! \\in\\! [0,1]$, with the price\nof introducing new terms in the equations. Controlling the equations at\nsmall distances required the introduction of an artificial viscosity\non the right-hand-side (RHS) of Eq. (\\ref{eqmot1}). The term\nintroduced is $\\rho\\eta\\! \\nabla^2\\! \\B v$, with $\\eta$ chosen of\nthe order of the square of space discretization over the time\ndiscretization. This introduces zeroth order contributions on the\nRHS of Eq. (\\ref{EOM0}) and first order contributions on the RHS of\nEqs. (\\ref{EOM1})-(\\ref{EOM2}).\n\n\\subsection{Linear perturbation analysis of the STZ equations}\n\\label{perturbSTZ}\n\nThe only missing piece in our formulation is the perturbation of\nthe tensorial STZ equations. In addition to the fields considered up\nto now, the analysis of the STZ equations includes also the\ninternal state fields\n\\begin{equation}\n{\\bf m}=\\left(\\begin{array}{cc}-m&m_{r\\theta}\\\\m_{r\\theta}&m\n\\end{array}\\right),\\quad\\Lambda\\quad\\hbox{and}\\quad\\chi \\ .\n\\end{equation}\nTherefore, in addition to Eqs. (\\ref{basic_perturbations}) we have\n\\begin{eqnarray}\nm(r,\\theta, t)&=& m^{(0)}(r,t)+\ne^{in\\theta}m^{(1)}(r,t)\\ ,\\nonumber\\\\\nm_{r\\theta}(r,\\theta, t)&=&i e^{i n \\theta} m_{r\\theta}^{(1)}(r,t)\\ ,\\nonumber\\\\\n\\Lambda(r,\\theta, t)&=& \\Lambda^{(0)}(r,t)+e^{in\\theta}\\Lambda^{(1)}(r,t)\\ ,\\nonumber\\\\\n\\chi(r,\\theta,t) &=& \\chi^{(0)}(r,t) + e^{in\\theta}\\chi^{(1)}(r,t)\n\\ . \\label{STZ_perturbations}\n\\end{eqnarray}\n\nWe then expand systematically Eqs. (\\ref{eq:Dpl})-(\\ref{C_s}).\nFirst, we have\n\\begin{eqnarray}\n\\label{sbar_exp}\n\\bar{s}&=&\\sqrt{\\frac{2(s^{(0)}+e^{in\\theta}s^{(1)})^2 +\n2(\\tau^{(1)} e^{in\\theta})^2}{2}} \\\\\n&\\simeq& |s^{(0)} +\ne^{in\\theta}s^{(1)}|=|s^{(0)}|+e^{in\\theta}s^{(1)}{\\rm\nsgn}\\left(s^{(0)}\\right) \\ .\\nonumber\n\\end{eqnarray}\nAccordingly we expand $\\C C(\\bar s)$ (assuming $s^{(0)}>0$) in the\nform\n\\begin{equation}\n\\C C(\\bar s) = \\C C(s^{(0)} + e^{in\\theta}s^{(1)}) \\simeq \\C\nC(s^{(0)}) + \\frac{d\\C\nC}{ds}\\left(s^{(0)}\\right)e^{in\\theta}s^{(1)},\n\\end{equation}\nwhere\n\\begin{equation}\n\\frac{d\\C C}{ds}\\left(s^{(0)}\\right) =\n\\frac{\\zeta^{\\zeta+1}}{\\zeta!}\\int_0^{|s^{(0)}|}s_\\alpha^\\zeta\nexp(-\\zeta s_\\alpha)ds_\\alpha \\ .\n\\end{equation}\nSubstituting the last three equations into (\\ref{eq:Dpl}) and\nexpanding to first order, we obtain\n\\begin{widetext}\n\\begin{eqnarray}\n\\label{Dpl1} \\tau_0{D^{pl}}^{(1)} &=&\n\\epsilon_0\\Lambda^{(0)}\\left[\\left(\\frac{\\Lambda^{(1)}}{\\Lambda^{(0)}}\\C\nC\\left(s^{(0)}\\right) + s^{(1)}\\frac{d\\C\nC\\left(s^{(0)}\\right)}{ds}\\right)\\left(\n{\\rm\nsgn}\\left(s^{(0)}\\right)-m^{(0)}\\right)\n- \\C C\\left(s^{(0)}\\right)m^{(1)}\\right] \\ , \\\\\n\\label{Dploff1} \\tau_0 {D^{pl}_{r\\theta}}^{(1)} &=& \\epsilon_0\n\\Lambda^{(0)}\\C\nC\\left(s^{(0)}\\right)\\left(\\frac{\\tau^{(1)}}{|s^{(0)}|}-m_{r\\theta}^{(1)}\\right)\n\\ .\n\\end{eqnarray}\n\\end{widetext}\nWe then expand $\\Gamma$ in the form\n\\begin{eqnarray}\n\\Gamma \\!&=&\\! \\Gamma^{(0)} +\ne^{in\\theta}\\Gamma^{(1)}\\quad\\hbox{with}\\quad \\Gamma^{(0)} =\n\\frac{2 \\tau_0\ns^{(0)}{D^{pl}}^{(0)}}{\\epsilon_0 \\Lambda^{(0)}} \\ ,\\nonumber\\\\\n\\Gamma^{(1)}\\! &=&\\! \\frac{2 \\tau_0 }{\\epsilon_0 \\Lambda^{(0)}}\n\\left[s^{(0)}{D^{pl}}^{(1)}\\! +\\! s^{(1)}{D^{pl}}^{(0)}\n\\!-\\!\\frac{s^{(0)}{D^{pl}}^{(0)}\\Lambda^{(1)}}{\\Lambda^{(0)}}\\right]\n\\\n.\\nonumber\\\\\n\\end{eqnarray}\nEq. (\\ref{eq:m}) is now used to obtain\n\\begin{eqnarray}\n\\label{m1} &&\\tau_0\\left(\\frac{\\partial m^{(1)}}{\\partial t}\n+v_r^{(0)} \\frac{\\partial m^{(1)}}{\\partial r}+v_r^{(1)}\n\\frac{\\partial\nm^{(0)}}{\\partial r} \\right) = \\nonumber\\\\\n&&\\frac{2\\tau_0}{\\epsilon_0 \\Lambda^{(0)}}\\left( {D^{pl}}^{(1)} -\n{D^{pl}}^{(0)}\\frac{\\Lambda^{(1)}}{\\Lambda^{(0)}}\\right)-\\frac{e^{-1\/\\chi^{(0)}}}{\\Lambda^{(0)}}\\times\\\\\n&&\\left[\\Gamma^{(0)}\nm^{(1)} +\\Gamma^{(1)}\nm^{(0)}+ \\Gamma^{(0)}\nm^{(0)}\\left(\\frac{\\chi^{(1)}}{\\left[\\chi^{(0)}\\right]^2}-\n\\frac{\\Lambda^{(1)}}{\\Lambda^{(0)}}\\right) \\right] \\ ,\\nonumber\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\label{moff1} &&\\tau_0\\left(\\frac{\\partial\nm_{r\\theta}^{(1)}}{\\partial t} +v_r^{(0)} \\frac{\\partial\nm_{r\\theta}^{(1)}}{\\partial r}-\\frac{m^{(0)} v_\\theta^{(1)}}{r} \\right) =\\nonumber\\\\\n&& \\frac{2\\tau_0\n{D^{pl}_{r\\theta}}^{(1)}}{\\epsilon_0 \\Lambda^{(0)}} -\\Gamma^{(0)}\nm_{r\\theta}^{(1)}\\frac{e^{-1\/\\chi^{(0)}}}{\\Lambda^{(0)}} \\ .\n\\end{eqnarray}\nUsing Eq. (\\ref{eq:Lambda}) we obtain\n\\begin{eqnarray}\n\\label{Lambda1} &\\tau_0&\\!\\!\\!\\!\\! \\left(\\frac{\\partial\n\\Lambda^{(1)}}{\\partial t} +v_r^{(0)} \\frac{\\partial\n\\Lambda^{(1)}}{\\partial r}+v_r^{(1)} \\frac{\\partial\n\\Lambda^{(0)}}{\\partial r} \\right)= \\\\\n&\\Gamma^{(0)}&\\!\\!\\!\\left(e^{-1\/\\chi^{(0)}}\\frac{\\chi^{(1)}}{\\left[\\chi^{(0)}\\right]^2}\\!-\\!\\Lambda^{(1)}\n\\right)\\!+\\!\\Gamma^{(1)}\\left(e^{-1\/\\chi^{(0)}}\\!-\\!\\Lambda^{(0)}\n\\right) \\ . \\nonumber\n\\end{eqnarray}\nExpanding $\\bar{D}^{pl}$, similarly to Eq. (\\ref{sbar_exp}), we\nobtain\n\\begin{equation}\n\\bar{D}^{pl} \\simeq\n|{D^{pl}}^{(0)}|+e^{in\\theta}{D^{pl}}^{(1)}{\\rm\nsgn}\\left({D^{pl}}^{(0)}\\right) \\ .\n\\end{equation}\nAccordingly we expand $\\chi_\\infty\\left(\\tau_0 \\bar{D}^{pl}\n\\right)$ (with ${D^{pl}}^{(0)}\\!>\\!0$) in the form\n\\begin{eqnarray}\n&&\\chi_\\infty\\left(\\tau_0 \\bar{D}^{pl} \\right) =\n\\chi_\\infty\\left(\\tau_0 {D^{pl}}^{(0)} + e^{in\\theta}\\tau_0\n{D^{pl}}^{(1)}\\right) \\\\\n&&= \\chi_\\infty\\left(\\tau_0 {D^{pl}}^{(0)}\\right) +\n\\frac{d\\chi_\\infty}{d\\bar{D}^{pl}}\\left(\\tau_0\n{D^{pl}}^{(0)}\\right)e^{in\\theta} {D^{pl}}^{(1)} \\\n.\\nonumber\n\\end{eqnarray}\nThen, using Eq. (\\ref{eq:chi}) we obtain\n\\begin{widetext}\n\\begin{eqnarray}\n\\label{chi1} &&\\tau_0 c_0 \\left(\\frac{\\partial\n\\chi^{(1)}}{\\partial t} +v_r^{(0)} \\frac{\\partial\n\\chi^{(1)}}{\\partial r}+v_r^{(1)} \\frac{\\partial\n\\chi^{(0)}}{\\partial r} \\right)= \\epsilon_0\\left(\\Lambda^{(0)}\\Gamma^{(1)}+\\Gamma^{(0)}\\Lambda^{(1)}\\right)\n\\left(\\chi_\\infty\\left(\\tau_0{D^{pl}}^{(0)}\\right)-\\chi^{(0)}\\right)+\\nonumber\\\\\n&&\\epsilon_0\\Lambda^{(0)}\\Gamma^{(0)}\\left(\\frac{d\\chi_\\infty}{d\\bar{D}^{pl}}\\left(\\tau_0\n{D^{pl}}^{(0)}\\right){D^{pl}}^{(1)}-\\chi^{(1)}\\right)\\ .\n\\end{eqnarray}\n\\end{widetext}\nThus, Eqs. (\\ref{Dpl1})-(\\ref{Dploff1}), (\\ref{m1})-(\\ref{moff1}),\n(\\ref{Lambda1}) and (\\ref{chi1}) constitute our equations for the\ndynamics of the first order STZ quantities.\n\nThese equations already reveal some interesting features. First note\nthat the coupling between ${D^{pl}}^{(1)}$ (which is the quantity\nthat is expected to be of major importance in determining\n$v_r^{(1)}$ in Eq. (\\ref{smallness}) through Eqs.\n(\\ref{first_eq})-(\\ref{last_eq})) and $\\chi^{(1)}$, $m^{(1)}$\ndepends on $\\C C\\left(s^{(0)}\\right)$. This means that the strength\nof the coupling depends on $\\zeta$. Similarly, the coupling between\n${D^{pl}}^{(1)}$ and $s^{(1)}$ depends on $d\\C\nC\\left(s^{(0)}\\right)\/ds$ which is also a function of $\\zeta$. These observations demonstrate the importance of the precise form of the function $\\C C(s)$. This issue is further discussed in Sec. \\ref{rate_function}. Finally, note that whenever\nthe advection terms can be neglected, the known separation of time\nscales \\cite{07BLLP} allows us to use Eqs. (\\ref{m0_fix})-(\\ref{Lam0_fix}) and to\nreplace the equations for $m^{(1)}$, $m_{r\\theta}^{(1)}$ and\n$\\Lambda^{(1)}$ by their stationary solutions\n\\begin{equation}\n\\label{m1_fix} m^{(1)}=\\cases{ 0 &if $s^{(0)}\\le 1$\\cr\n-\\frac{s^{(1)}}{\\left[s^{(0)}\\right]^2} & if $ s^{(0)} >1$} \\ ,\n\\end{equation}\n\\begin{equation}\n\\label{m_rth_fix} m_{r\\theta}^{(1)}=\\cases{\n\\frac{\\tau^{(1)}}{s^{(0)}} &if $s^{(0)}\\le 1$\\cr\n\\frac{\\tau^{(1)}}{\\left[s^{(0)}\\right]^2} & if $s^{(0)} >1$}\n\\end{equation}\nand\n\\begin{equation}\n\\Lambda^{(1)} =\n\\frac{\\chi^{(1)}}{\\left[\\chi^{(0)}\\right]^2}e^{-1\/\\chi^{(0)}} \\ .\n\\label{Lam1_fix}\n\\end{equation}\n\nIn the next section we summarize the results of our analysis of the\nequations derived in Sec. \\ref{zeroth}, \\ref{inertial} and\n\\ref{perturbSTZ}.\n\n\\section{Results}\n\\label{results}\n\nWe are now ready to present and discuss the results of the stability analysis of\nthe expanding circular cavity. The full set of equations was solved numerically as\ndiscussed above. Time and length are measured in units of $\\tau_0$\nand $R^{(0)}(t\\!=\\!0)$ respectively. $\\Lambda$ and $m$ are set\ninitially to their respective fixed-points. The material-specific\nparameters used are $\\epsilon_0\\!=\\!1$, $c_0\\!=\\!1$,\n$\\mu\/s_y\\!=\\!50$, $K\/s_y\\!=\\!100$, $\\rho=1$, $\\chi^{(0)}\\!=\\!0.11$,\n$\\chi_\\infty\\!=\\!0.13$ and $\\zeta\\!=\\!7$, unless otherwise stated. In Subsec.\n\\ref{pert_shape} we study perturbations of the shape of the cavity and of the effective temperature $\\chi$.\nIn Subsec. \\ref{strain_rate} we study the effect of the rate dependence of $\\chi_\\infty$ on the stability analysis and in Subsec. \\ref{rate_function}\nwe analyze the effect of the stress-dependent rate function $\\C C(s)$.\n\n\\subsection{Perturbing the shape and $\\chi$}\n\\label{pert_shape}\n\nStudying the linear stability of the expanding cavity can be done by selecting\nwhich fields are perturbed and which are left alone. In practice\neach of the fields involved in\nthe problem may experience simultaneous fluctuations, including the\nradius of the cavity itself. Therefore, one of our tasks is to determine which of the possible\nperturbation leads to a linear instability. To start, we\nperturb the radius of the expanding cavity at $t\\!=\\!0$ while all the other fields\nare left alone. In Fig. \\ref{pertR} we\nshow the ratio $R^{(1)}\/R^{(0)}$ as a function of time for various\nloading levels $\\sigma^\\infty$ (both below and above the cavitation\nthreshold) and wave-numbers $n$. The initial amplitude of the\nperturbation was set to $R^{(1)}\/R^{(0)}\\!=\\!10^{-3}$. The observation is that\nthe ratio $R^{(1)}\/R^{(0)}$ does not grow in time in all the considered cases were the radius was perturbed, implying that here the circular\ncavity is stable against shape perturbations. Note\nthat $R^{(1)}\/R^{(0)}$ decays faster for larger n and for larger\n$\\sigma^\\infty$. Also note that for $\\sigma^\\infty\\!=\\!6.1$. i.e.\nfor unbounded zeroth order expansion, the ratio $R^{(1)}\/R^{(0)}$\ndecays to zero while below the cavitation threshold this ratio\napproaches a finite value. The latter observations means that when the material\napproaches jamming (with $R^{(0)}$ attaining a finite value in a\nfinite time) the perturbation have not yet disappeared\nentirely.\n\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig2.eps}\n\\caption{(Color online) Upper panel: The ratio $R^{(1)}\/R^{(0)}$ as\na function of time for $n\\!=\\!4$ and $\\sigma^\\infty\\!=\\!2.1,4.1$ and 6.1. Note\nthat the last value is above the cavitation threshold. Lower panel:\nThe ratio $R^{(1)}\/R^{(0)}$ as a function of time for $\\sigma^\\infty\\!=\\!4.1$ and $n\\!=\\!2,4$\nand 8.}\\label{pertR}\n\\end{figure}\n\nWe stress at this point the non-stationary nature of\nthe problem in which $R^{(0)}(t)$ is an increasing function of time.\nThus, even if the absolute magnitude of the amplitude of\nthe shape perturbation $R^{(1)}(t)$ increases with time, an\ninstability is not automatically implied; $R^{(1)}(t)$ should increase\nsufficiently faster than $R^{(0)}(t)$ in order to imply an\ninstability. To exemplify this feature of the problem, we present in\nFig. \\ref{onlyR1} $R^{(1)}(t)$ for $\\sigma^\\infty\\!=\\!2.1$ and $n\\!=\\!4$. It is observed that even though\n$R^{(1)}$ increases, the smallness parameter\n$R^{(1)}\/R^{(0)}$ decreases, see Fig. \\ref{pertR}. Note also that\n$R^{(1)}$ does not increase exponentially as expected in stationary\nlinear stability analysis, but rather tends to asymptote to a\nconstant.\n\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig3.eps}\n\\caption{(Color online) $R^{(1)}$ as a function of time for\n$\\sigma^\\infty\\!=\\!2.1$ and $n\\!=\\!4$.}\\label{onlyR1}\n\\end{figure}\nNext we have tested the stability of the expanding cavity against initial\nperturbations in the velocity field or in the stress field. The results were quantitatively similar to those for\nthe shape perturbations summarized in Figs. \\ref{pertR} and\n\\ref{onlyR1}, all implying linear stability.\n\nIn light of these\nresults, we concentrated then on the effect of perturbations in the STZ\ninternal state fields. Since the dynamics of the tensor $\\B m$ are\nmainly determined by the deviatoric stress field $s$, we focus on\nfluctuations in the effective disorder temperature $\\chi$. This may be the\nmost liable field to cause an instability. Indeed,\nin Ref. \\cite{07MLC} it was shown that $\\chi$ perturbations control strain localization in\na shear banding instability. Qualitatively, an instability in the\nform of growing ``fingers'' involves strain localization as well;\nplastic deformations are localized near the leading edges of the\npropagating ``fingers''. In Ref. \\cite{07MLC}, based on the data of Ref.\n\\cite{07SKLF}, it was suggested that the typical spatial fluctuations in\n$\\chi$ have an amplitude reaching about $30\\%$ of the homogeneous\nbackground $\\chi$. Obviously we cannot treat such large\nperturbations in a linear analysis and must limit ourselves to\nsmaller perturbations.\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig4.eps}\n\\caption{(Color online) The ratio $R^{(1)}\/R^{(0)}$ as a function of\ntime for a perturbation of size $\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$ and\n$n\\!=\\!4$, introduced at time $t\\!=\\!0$. The solid line corresponds\nto $\\sigma^\\infty\\!=\\!4.1$ (below the cavitation threshold) and the\ndashed line corresponds to $\\sigma^\\infty\\!=\\!6.1$ (above the\ncavitation threshold).}\\label{pertChi}\n\\end{figure}\nIn Fig. \\ref{pertChi} we show the ratio $R^{(1)}\/R^{(0)}$ as a function of time for a perturbation of\nsize $\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$, introduced at time $t\\!=\\!0$.\nThe wave-number was set to $n\\!=\\!4$ and $\\sigma^\\infty$ was set\nboth below and above the cavitation threshold. First, note that for\nboth loading conditions $R^{(1)}\/R^{(0)}$ increases on a short time\nscale of about $1000\\tau_0$, a qualitatively different behavior\ncompared to the system's response to shape perturbations. Second,\nnote the qualitatively different response below and above the\ncavitation threshold. In the former case, $R^{(1)}\/R^{(0)}$\nincreases monotonically, approaching a constant value when $R^{(0)}$\nattains a finite value (i.e. jamming). In the latter case,\n$R^{(1)}\/R^{(0)}$ increases more rapidly initially, reaches a\nmaximum and then decays to 0 in the large $t$ limit. Therefore, in\nspite of the initial growth of $R^{(1)}\/R^{(0)}$, for this magnitude\nof $\\chi$ perturbations, the expanding circular cavity is linearly\nstable; below the cavitation threshold the relative magnitude of the\ndeviation from a perfect circular symmetry $R^{(1)}\/R^{(0)}$ tends\nto a finite constant, i.e. a shape perturbation is ``locked in'' the\nmaterial, while above the threshold the cavity retains its perfect\ncircular symmetry in the large $t$ limit. Nevertheless, in light of\nthe significant short time increase in $R^{(1)}\/R^{(0)}$ (here up to\n$0.6\\%$), we increased the initial ($t\\!=\\!0$) $\\chi$ perturbation\nto the range $\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.05-0.06$, in addition to shape\nperturbations of a typical size of $R^{(1)}\/R^{(0)}\\!=\\!0.02-0.03$.\nIn these cases $R^{(1)}\/R^{(0)}$ grows above $5\\%$; even more\nimportantly, the field $\\chi^{(1)}(\\B r, \\theta)$ (as well as other\nfields in the problem) becomes larger than $0.1\\chi^{(0)}(\\B r,\n\\theta)$ near the boundary of the cavity, {\\em invalidating} the\nsmall perturbation hypothesis behind the perturbative expansion and\nsignaling a linear instability. Naturally, this breakdown of the\nlinearity condition takes place firstly near a peak of the ratio\n$R^{(1)}\/R^{(0)}$, similar to the one observed in Fig.\n\\ref{pertChi}.\n\nWe thus propose that sufficiently large\nperturbations in the shape of the cavity and the effective disorder\ntemperature $\\chi$, but still of formal linear order, may lead\nto an instability. This dependence on the magnitude of the\nperturbations in a linear analysis is a result of the\nnon-stationarity of the growth. Another manifestation of the\nnon-stationarity is that even in cases where we\ndetect an instability, it was not of the usual simple exponential\ntype where an eigenvalue changes sign as a function of some\nparameter (or group of parameters). Combined with the evidence for\nthe existence of large fluctuations in $\\chi$ \\cite{07SKLF, 07MLC}, the present results\nindicate that it will be worthwhile to study the problem by direct boundary tracking\ntechniques where the magnitude of the perturbation is not limited.\n\nWe conclude that the issue of the stability of\nthe expanding cavity can be subtle. Sufficiently small\nperturbations are stable, though there is a qualitative difference\nin the response to perturbations in the effective disorder\ntemperature $\\chi$, where the ratio $R^{(1)}\/R^{(0)}$ increases (at\nleast temporarily), and other perturbations, where $R^{(1)}\/R^{(0)}$\ndecays. We have found that for large enough $\\chi$ perturbations\ncombined with initial shape perturbations, but still within the\nformal linear regime, the growth of $R^{(1)}\/R^{(0)}$ takes the system\nbeyond the linear regime, making nonlinear effects non-negligible\nand signaling an instability. This observation is further supported by the\nexistence of large $\\chi$ fluctuations discussed in \\cite{07SKLF, 07MLC}.\nNote that none of these conclusions depend significantly on variations in\n$\\epsilon_0$ and $c_0$. Moreover, perturbing the expanding\ncavity at times different than $t\\!=\\!0$ or introducing a pressure\ninside the cavity instead of a tension at infinity did not change any of the results.\n\n\n\\subsection{The effect of the rate dependence of $\\chi_\\infty$}\n\\label{strain_rate}\n\nThe analysis of Sec. \\ref{pert_shape} indicates the existence of a linear\ninstability as a result of varying the magnitude of the\nperturbations, mainly in $\\chi$, and not as a result of varying\nmaterial parameters. Here, and in Sec. \\ref{rate_function}, we aim\nat studying the effect of material-specific properties on the stability\nof the expanding cavity. Up to now we considered $\\chi_\\infty$ as a constant parameter. However, as discussed in detail in Sec.\n\\ref{STZ}, the plastic rate of deformation near the free boundary\ncan reach values in the range where changes in $\\chi_\\infty$ were\nobserved. Therefore, we repeated the calculations using the function\n$\\chi_\\infty(\\tau_0 \\bar{D}^{pl})$ plotted in Fig. \\ref{HL}. In Fig.\n\\ref{StrainRate} we compare $R^{(1)}\/R^{(0)}$ as a function of time\nwith and without a plastic rate of deformation dependence of\n$\\chi_\\infty$, both above and below the cavitation threshold. The\ninitial perturbation has $\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$ and\n$n\\!=\\!4$.\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig5.eps}\n\\caption{(Color online) Upper panel: $R^{(1)}\/R^{(0)}$ as a function\nof time for $\\sigma^\\infty\\!=\\!4.1$ (below the cavitation\nthreshold). The solid line corresponds to a constant $\\chi_\\infty$\nand the dashed line corresponds to the plastic rate of deformation\ndependent $\\chi_\\infty(\\tau_0 \\bar{D}^{pl})$ of Fig. \\ref{HL}. The\ninitial perturbation has $\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$ and\n$n\\!=\\!4$. Lower panel: The same with $\\sigma^\\infty\\!=\\!6.1$ (above\nthe cavitation threshold).}\\label{StrainRate}\n\\end{figure}\nBoth below and above the cavitation threshold the plastic rate\nof deformation dependent $\\chi_\\infty(\\tau_0 \\bar{D}^{pl})$ induces\na stronger growth of $R^{(1)}\/R^{(0)}$, though the effect is\nmuch more significant above the threshold. This is understood as\nsignificantly higher rate of deformation is developed above the\ncavitation threshold, where unbounded growth takes place \\cite{07BLP},\ncompared to below the threshold where the rate of deformation\nvanishes at a finite time. We note that the dependence of $\\chi_\\infty$\non $ \\bar{D}^{pl}$ affects both the zeroth and\nfirst order solutions such that $R^{(0)}$ and $R^{(1)}$ increase.\nOur results show that $R^{(1)}$ is more sensitive to this effect\nthan $R^{(0)}$, resulting in a tendency to lose stability at yet\nsmaller perturbations. We conclude that the tendency of $\\chi_\\infty$\nto increase with the rate of deformation plays an important role in\nthe stability of the expanding cavity and might be crucial for other\nstrain localization phenomena as the shear banding instability\n\\cite{07MLC}. Moreover, this material-specific dependence of $\\chi_\\infty$, that was absent in previous\nformulations of STZ theory, might distinguish between materials that\nexperience catastrophic failure and those that do not, and between\nmaterials that fail through a cavitation instability \\cite{07BLP} and\nthose who fail via the propagation of ``fingers'' that may evolve\ninto cracks. This new aspect of the theory certainly deserves more\nattention in future work. We note in passing that recently an alternative equation to Eq. (\\ref{eq:chi}) for the time evolution of the effective temperature $\\chi$ was proposed in light of some available experimental and simulational data \\cite{08Bouch}. Preliminary analysis of the new equation in relation to the stability analysis performed in this paper indicates that the circular cavity {\\em does} become linearly unstable \\cite{unpublished}. A more systematic study of this effect may be a promising line of future investigation.\n\n\n\\subsection{The effect of changing the stress-dependent rate function $\\C C(s)$}\n\\label{rate_function}\n\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig6.eps}\n\\caption{(Color online) The function $\\C C(s)$ of Eq. (\\ref{C_s})\nwith $\\zeta\\!=\\!7$ (dashed line) and of Eq. (\\ref{C_s1}) with\n$\\lambda\\!=\\!30$ (solid line).}\\label{changingC}\n\\end{figure}\n\nHere we further study the possible effects of details of the\nconstitutive behavior on the macroscopic behavior of the expanding\ncavity. In this subsection we focus on the material function $\\C C(s)$.\nThis phenomenological function, as discussed in Sec. \\ref{STZ},\ndescribes the stress-dependent STZ transition rates. It is expected\nto be symmetric and to vanish smoothly at $s\\!=\\!0$ in\nathermal conditions \\cite{07BLanP}. The plastic rate of deformation\nfor $s\\!>\\!1$ can be measured in a steady state stress-controlled\nsimple shear experiment. For such a configuration the deviatoric\nstress tensor is diagonal and the stable fixed-points of Eqs.\n(\\ref{eq:m})-(\\ref{eq:chi}) imply that the steady state plastic rate\nof deformation of Eq. (\\ref{eq:Dpl}) reads\n\\begin{equation}\n\\label{steadyDpl} \\tau_0 D^{pl} = \\epsilon_0 e^{-1\/\\chi_\\infty} \\C\nC(s)\\left(1-\\frac{1}{s}\\right) \\ .\n\\end{equation}\nTherefore, if the steady state relation $\\chi_\\infty(s)$ is known, $\\C C(s)$ can be determined from measuring the steady state value of $D^{pl}$ for various $s\\!>\\!1$, see for example \\cite{07HL}. The idea then is to interpolate\nthe $s\\!\\to\\!0^+$ behavior to the $s\\!>\\!1$ behavior with a single\nparameter that controls the amount of sub-yield deformation in the\nintermediate range. In fact, a procedure to measure $\\C C(s)$ at\nintermediate stresses was proposed in Ref. \\cite{07BL}. Up to now\nwe used the one-parameter family of functions $\\C F(s;\\zeta)$\nof Eq. (\\ref{C_s}), where $\\zeta$ controls the sub-yield\ndeformation.\n\nWe now aim at studying the effect of choosing another function\n${\\cal C}(s)$. Here we specialize for ${\\cal\nC}(\\bar{s})=\\C G(\\bar{s}; \\lambda)$, with\n\\begin{equation}\n\\label{C_s1} \\C G(\\bar{s};\\lambda)\\equiv\n\\frac{|\\bar{s}|^{1+\\lambda}}{1+|\\bar{s}|^\\lambda} \\ .\n\\end{equation}\nIn Fig. \\ref{changingC} we show $\\C C(s)$ according to the previous choice\nof Eq. (\\ref{C_s}) with $\\zeta\\!=\\!7$ and also $\\C C(s)$ according\nto the present choice of Eq. (\\ref{C_s1}) with $\\lambda\\!=\\!30$. The\ndifferent behaviors of $\\C C(s)$ and $d\\C C(s)\/ds$ near $s\\!=\\!1$\nmight affect differently $R^{(0)}$ and $R^{(1)}$, thus influencing\nthe stability of the expanding cavity.\n\n\\begin{figure}\n\\centering \\epsfig{width=.47\\textwidth,file=circstabilFig7.eps}\n\\caption{(Color online) $R^{(1)}\/R^{(0)}$ as a function of time for\nan effective temperature perturbation with\n$\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$ and $n\\!=\\!4$ for $\\sigma^\\infty\\!=\\!4.1$. The solid line\ncorresponds to $\\C C(\\bar s)$ of Eq. (\\ref{C_s}) with $\\zeta\\!=\\!7$\nand the dotted line corresponds to $\\C C(\\bar s)$ of Eq.\n(\\ref{C_s1}) with $\\lambda\\!=\\!30$, see Fig. \\ref{changingC}. In\nboth cases a constant $\\chi_\\infty$ was used. The dotted-dashed line\ncorresponds to $\\C C(\\bar s)$ of Eq. (\\ref{C_s1}) with\n$\\lambda\\!=\\!30$ and the rate dependent\n$\\chi_\\infty(\\tau_0\\bar{D}^{pl})$ presented in Fig.\n\\ref{HL}.}\\label{changingC1}\n\\end{figure}\n\nIn Fig. \\ref{changingC1} we compare $R^{(1)}\/R^{(0)}$ as a function\nof time for $\\C C(s)$ of Eq. (\\ref{C_s}) (previous choice) with\n$\\zeta\\!=\\!7$ and $\\C C(s)$ of Eq. (\\ref{C_s1}) (present choice)\nwith $\\lambda\\!=\\!30$, both for a constant $\\chi_\\infty$. An effective temperature perturbation with\n$\\chi^{(1)}\/\\chi^{(0)}\\!=\\!0.03$ and $n\\!=\\!4$ was introduced at\n$t\\!=\\!0$ for $\\sigma^\\infty\\!=\\!4.1$. We observe that $R^{(1)}\/R^{(0)}$ grows faster for the\npresent choice compared to the previous one. For the sake of\nillustration we added the result of a calculation with the plastic\nrate of deformation dependent $\\chi_\\infty$ as discussed in Sec.\n\\ref{strain_rate}. As expected, the effect is magnified. We conclude\nthat the material-specific function of the stress dependence of the\nSTZ transition rates $\\C C(s)$ can affect the stability of the\nexpanding cavity, possibly making it unstable for smaller\nperturbations. Again, the relations between this constitutive\nproperty and the macroscopic behavior should be further explored in\nfuture work. Bringing into consideration explicit macroscopic\nmeasurements, one can constrain the various phenomenological\nfeatures of the theory of amorphous plasticity. This philosophy\nprovides a complementary approach to obtaining a better microscopic\nunderstanding of the physical processes involved.\n\n\\section{Concluding Remarks}\n\\label{discussion}\n\nWe presented in this paper a detailed analysis of the linear stability of expanding\ncavity modes in amorphous elasto-viscoplastic solids. The stability analysis is somewhat delicate due\nto the non-stationarity of the problem, thus a perturbation may grow leaving the problem stable if this growth is slower than the growth of the radius of the cavity. The radial symmetry of the expanding cavity\nmakes it surprisingly resilient to perturbations in shape, velocity, external strains and pressure. On the other hand the radial symmetry may be lost due to perturbations in the internal state fields, especially $\\chi$, and is also sensitive to details of the constitutive relations that are employed in the STZ theory. In this respect we highlight the role of the plastic rate of deformation dependent $\\chi_\\infty(\\tau_0 \\bar{D}^{pl})$ and of the stress-dependent rates of STZ transitions $\\C C(s)$.\nIt is difficult to reach conclusive statements, since growth of perturbations beyond the linear order invalidate the approach taken here, calling for new algorithms involving surface tracking, where the size of perturbations is not limited. Nevertheless the results point out that instabilities are likely, motivating further research into the nonlinear regime. Of particular interest is the possibility to select particular forms of constitutive relations by comparing the predictions of the theory to macroscopic experiments. This appears as a promising approach in advancing the STZ theory towards a final form.\n\n{\\bf Acknowledgements} We thank T. Haxton and A. Liu for generously sharing with us their numerical data, and Chris Rycroft for pointing out an error in an early version of the manuscript. This work had been supported in part by the German Israeli Foundation and the Minerva Foundation, Munich, Germany. E. Bouchbinder acknowledges support from the Center for Complexity Science and the Lady Davis Trust.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe development of task-oriented dialog systems has gained much attention in both the academic and industrial communities over the past decade. Task-oriented (also referred to as goal-oriented) dialog systems help customers accomplish a task in one or multiple domains \\citep{chen2017survey}, compared with open-domain dialog systems aimed at maximizing user engagement \\citep{huang2020challenges}. A typical pipeline system architecture is divided into several components, including a natural language understanding (NLU) module. This module is responsible for classifying the first user request into potential \\textit{intents}, performing a decisive step that is required to drive the subsequent conversation with the virtual assistant in the right direction.\n\nGoal-oriented dialog systems often fail to recognize the intent of natural language requests due to system errors, incomplete service coverage, or insufficient training \\citep{grudin2019chatbots, kvale2019improving}.\nIn practice, these cases are normally identified using intent classifier uncertainty. Here, user utterances that are predicted to have a level of confidence below a certain threshold to any of the predefined intents, are identified and reported as unrecognized or \\textit{unhandled}. Figure~\\ref{fig:nlu-module} presents the NLU module from a typical task-oriented dialog system: the user utterance is either transformed into an intent with an appropriate flow of subsequent actions, or labelled as unrecognized and stored in the \\textit{unhandled pool}.\n\nUnhandled utterances often carry over various aspects of potential importance, including novel examples of existing intents, novel topics that may introduce a new intent, or seasonal topical peaks that should be monitored but not necessarily modeled within the system. In large deployments, the amount of unhandled utterances can reach tens of thousands each day. Despite their evident importance for continuous bot improvement, tools for gaining effective insights into unhandled utterances have not been developed sufficiently, leaving a vast body of knowledge, as well as a range of potential actionable items, unexploited.\n\n\\begin{figure}\n\\centering\n\\resizebox{\\columnwidth}{!}{\n\\includegraphics{figures\/nlu-module.png}\n}\n\\caption{Natural language understanding (NLU) module. Based to the intent classifier's confidence level, first user utterances are `recognized' and associated with an execution flow, or stored in an unhandled pool.}\n\\label{fig:nlu-module}\n\\end{figure}\n\nGaining insights into the topical distribution of user utterances can be achieved using unsupervised text analysis tools, such as clustering or topic modeling. Indeed, identifying clusters of semantically similar utterances can help surface topics of interest to a conversation analyst. We show that traditional clustering algorithms result in sub-optimal performance due to the unique traits of unhandled utterances in dialog systems: an unknown number of expected clusters and a very long tail of outliers. Consequently, we propose and evaluate a simple radius-based variant of the k-means clustering algorithm \\cite{lloyd1982least}, that does not require a fixed number of clusters and tolerates outliers gracefully. We demonstrate that it outperforms its out-of-the-box counterparts on a range of datasets.\n\nWe further propose an end-to-end process for surfacing topical clusters in unhandled user requests, including utterance cleanup, a designated clustering procedure and its extensive evaluation, a novel approach to cluster representatives extraction, and cluster naming. We demonstrate the benefits of the suggested clustering approach on multiple publicly available, as well as proprietary datasets for real-world task-oriented chatbots.\nThe rest of the paper is structured as follows. We survey related work in Section~\\ref{sec:rel-work} and detail our clustering procedure and its evaluation in Section~\\ref{sec:clustering}. Cluster representatives selection is presented in Section~\\ref{sec:representatives}, and the process used to assign clusters with names is described in Section~\\ref{sec:naming}. Finally, we conclude in Section~\\ref{sec:conclusions}.\n\\subsection{Evaluation of Clustering}\n\\label{sec:clustering-evaluation}\n\nWe performed a comparative evaluation of the proposed clustering algorithm and HDBSCAN\\footnote{DBSCAN resulted in outcome systematically inferior to HDBSCAN; hence, it was excluded from further experiments.}, using common clustering evaluation metrics. The nature of topical distribution of unrecognized utterances is probably most closely resembled by \\textit{intent classification} datasets, where semantically similar training examples are grouped into classes, based on their underlying intent. We used these classes to simulate cluster partitioning for the purpose of evaluation.\nWe make use of three publicly available intent classification datasets (\\citet{liu2019benchmarking}, \\citet{larson2019evaluation} and \\citet{tepper2020balancing}), as well as three datasets from real-world task-oriented chatbots in the domains of telecom, finance and retail. Table \\ref{tbl:datasets} presents the datasets details.\n\n\\begin{table}[hbt]\n\\centering\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{l|rrrr}\ndataset & intents & examples & mean & STD \\\\ \\hline\n\\citet{liu2019benchmarking} & 46 & 20849 & 453.23 & 896.34 \\\\\n\\citet{larson2019evaluation} & 150 & 22500 & 150.00 & 0.00 \\\\\n\\citet{tepper2020balancing} & 57 & 844 & 14.80 & 14.16 \\\\ \\hline\ntelecom & 167 & 6364 & 38.10 & 26.74 \\\\\nfinance & 142 & 2301 & 16.20 & 25.28 \\\\\nretail & 103 & 1714 & 16.64 & 11.42 \\\\\n\\end{tabular}\n}\n\\vspace{-0.05in}\n\\caption{Datasets details: the number of intents, total training examples, mean and STD of the num of examples. We excluded out-of-scope examples from the \\citet{larson2019evaluation} dataset for the sake of evaluation.}\n\\label{tbl:datasets}\n\\end{table}\n\n\\input{chapters\/3.3-clustering-evaluation-table.tex}\n\n\\subsubsection{Evaluation Approach}\nThe main approaches to clustering evaluation include extrinsic methods, which assume a ground truth, and intrinsic methods, which work in the absence of ground truth. Extrinsic techniques compare the clustering outcome to a human-generated \\textit{gold standard} partitioning. Intrinsic techniques assess the resulting clusters by measuring characteristics such as cohesion, separation, distortion, and likelihood \\citep{pfitzner2009characterization}. We employ two popular extrinsic and intrinsic evaluation metrics: adjusted random index (ARI, \\citep{hubert1985comparing}) and Silhouette Score \\citep{rousseeuw1987silhouettes}. We vary the parameters of the RBC algorithm: merge type with none vs. semantic vs. keyword-based, see Section~\\ref{sec:cluster-merging}); the encoder used for distance matrix construction using ST vs. USE; min similarity threshold used as a cluster ``radius'' (see Algorithm 1 for details). Both ARI and Silhouette yield values in the [-1, 1] range, where -1, 0 and 1 mean incorrect, arbitrary, and perfect assignment, respectively.\n\nThe unique nature of our clustering requirements introduces a challenge to standard extrinsic evaluation techniques. Specifically, the min cluster size attribute controls the amount of outliers, by considering only clusters that exceed the minimal number of members (see Figure~\\ref{fig:clustering-space}). As such, a high \\textit{min\\_size} value will yield a large amount of left-out utterances, while a \\textit{min\\_size}{=}1 will partition the entire data, including single-member clusters. Aiming to mimic the ground truth partition (i.e, the intent classification datasets), we set the \\textit{min\\_size} attribute according to the minimal class size in the dataset, subject to evaluation. For example, this attribute was set to 150 for the \\citet{larson2019evaluation} dataset, but to 2 for the finance dataset.\n\nBoth evaluation techniques assume partitioning of the input space. Therefore, for our evaluation, we exclude the set outliers generated by our clustering algorithm altogether: only the subset of instances constructing the outcome clusters (e.g., instances depicted in color in Figure~\\ref{fig:clustering-space}) was used to compute both ARI and Silhouette. For completeness, we also report the ratio of a dataset utterances covered by the generated partition (`\\% clustered' in Table~\\ref{tbl:evaluation-results}), where the higher, the better.\n\n\n\\subsubsection{Evaluation Results}\n\\label{sec:evaluation-results}\n\nTable~\\ref{tbl:evaluation-results} presents the results of our evaluation.\nClearly, the RBC algorithm outperforms HDBSCAN across the board for both ARI and Silhuette scores, with the exception of the retail dataset, where the second best ARI score ($0.37$) is obtained by RBC along with over $80$\\% of clustered utterances (compared to only $49.79$\\% by HDBSCAN). HDBSCAN also outperforms RBC in terms of the ratio of clustered utterances for~\\citet{liu2019benchmarking} and the telecom dataset. However, these results are achieved by a nearly arbitrary partition of the input data, as mirrored by the extremely low ARI and Silhuette scores. We conclude that RBC outperforms its out-of-the-box counterpart on virtually all datasets in this work.\n\nThe ratio of clustered examples (\\% clustered) exhibits considerable variance among the datasets; this result is indicative of the varying levels of semantic coherence of the underlying intent classes, which are typically constructed manually by a bot designer. As such, over $87$\\% of all training examples were covered by the clustering procedure for the retail dataset, but only $33.90$\\% for \\citet{larson2019evaluation}. Although it generated different final outcome, the merging step does not affect the ratio of clustered utterances, which is determined by the first clustering round. For example, $87.18$\\% of the utterances are clustered for all three merge types when using the ST encoder for the retail dataset.\n\nVarious merging strategies, encoders, and similarity thresholds show the benefits for different datasets, with no single parameter configuration outperforming others systematically. This result implies that the decision regarding the precise clustering configuration is dependent on the specific dataset, and should be made per qualitative or quantitative evaluation, where possible.\n\n\\section{Conclusions and Future Work}\n\\label{sec:conclusions}\n\nAnalyzing unrecognized user requests is a fundamental step towards improving task-oriented dialog systems. We present an end-to-end pipeline for cleanup, clustering, representatives selection, and cluster naming -- procedures that facilitate the effective and efficient exploration of utterances unrecognized by the NLU module. We propose a simple clustering variant of the popular k-means algorithm, and show that outperforms its out-of-the-box counterparts on a range of metrics. We also suggest a novel approach to extracting representative utterances from a cluster while simultaneously optimizing their centrality and diversity.\n\nOur future work includes evaluation of our clustering approach with additional datasets, exploration of additional approaches to representative set selection, and advanced techniques for cluster naming. Leveraging clustering results to automatically identify actionable recommendations for conversation analyst is another venue of significant practical importance, we plan to pursue.\n\n\\section{Related Work}\n\\label{sec:rel-work}\n\nIn the context of the pipeline approach to building goal-oriented dialog systems, our work is related to the task of intent detection, performed by the NLU component. Intent detection is normally formulated as a standalone classification task \\citep{xu2013convolutional, guo2014joint, chen2019bert}, which is loosely interlaced with the successive tasks in the pipeline. Out-of-domain utterance detection, which is the task of accurately discriminating between requests that are within- and outside the scope of a system, has gained much attention recently \\citep{schuster2019cross, larson2019evaluation, gangal2020likelihood, cavalin2020improving}. Contrary to these works, we assume a set of utterances already labeled by a system's NLU module as unrecognized; these are user requests that the system failed to attribute to an existing intent. We demonstrate an end-to-end approach for extracting potentially actionable insights from these utterances, by making them easily accessible to a conversation analyst. \n\nClustering is one of the most useful techniques for extracting insights from data in an unsupervised manner. In the context of text, clustering typically refers to the task of grouping together units (e.g., sentences, paragraphs, documents) carrying similar semantics, such that units in the same cluster are more semantically similar to each other than those in different clusters.\nThe unique nature of our setting imposes two constraints on the clustering algorithm: (1) unknown number of partitions, and (2) tolerating \\textit{outliers} that lie isolated in low-density regions. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) \\citep{ester1996density} and its hierarchical version (HDBSCAN) \\cite{mcinnes2017hdbscan} are two common choices that satisfy these requirements. We evaluate our clustering approach against (H)DBSCAN and show its benefits across multiple datasets. \n\nAnother popular clustering algorithm that determines the number of partitions is MeanShift \\citep{cheng1995mean}, a non-parametric method for locating the maxima of a density function. Outlier detection can further be achieved with MeanShift by considering only clusters that exceed a predefined minimal size, where the rest are assigned to the outlier pool. MeanShift yielded inferior performance in all our experiments; we, therefore, exclude its results from this work.\n\n\n\n\\section{Naming Clusters}\n\\label{sec:naming}\n\nAssigning cluster with names, or labels, is an essential step towards their consumability. Common approaches to this task resort to simple but reliable techniques based on keyword extraction, such as \\textit{tf-idf}; many of these techniques made their way into the first large-scale information retrieval (IR) systems \\citep{ramos2003using, aizawa2003information}. Contemporary large pretrained LMs can also be used for the task of keyword extraction. Here we make use of BERT \\citep{devlin2018bert} for identifying key phrases, and evaluate the outcome against tf-idf.\n\n\\paragraph{Assigning Cluster Names using tf-idf} \nWe treat all utterances in individual clusters from a set $C{=}(c_1, c_2, ..., c_k)$ as distinct documents. We first applied lemmatization to these documents using the spacy toolkit\\footnote{\\url{https:\/\/spacy.io\/}} \\citep{spacy2}, excluded stopwords, and further ranked all ngram token sequences (for $N{\\in}(1,2,3)$) by their tf-idf score: term-frequency boosts ngrams typical of a cluster, and inverted-document-frequency down-weights the importance of ngrams, common across clusters. The ngram with the highest score was assigned as the cluster name.\n\n\n\\begin{comment}\nFavoring long names (e.g., a trigram) over short ones (e.g., a unigram), we defined a tf-idf score threshold for each ngram with more permissive, lower scores for trigrams and higher ones for unigrams. Score thresholds were optimized by qualitative evaluation over the $[0.10, 0.75]$ range, and were set to $0.650$, $0.400$ and $0.150$ for unigrams, bigrams and trigrams, respectively. We further sorted the candidate key-phrases by their length in a primary sort, and by score for a secondary sort. The first ngram to exceed its pre-defined corresponding threshold was selected as the cluster name.\nTable~\\ref{tbl:clustering-example} presents names automatically assigned to the two sample clusters identified in the Covid-19 dataset: `difference covid flu' and `covid pregnancy'.\n\\end{comment}\n\n\n\\paragraph{Assigning Cluster Names using BERT} \nTreating each cluster as a document, we first extract document-level representation using a pretrained BERT language model\\footnote{We use `all-MiniLM-L6-v2' model in our experiments.}. We further extract ngram representations for all unique word ngrams in the document, and compute semantic similarity between each ngram's embedding and that of the document. Ngram with the highest cosine similarity to the document is selected as cluster name.\\footnote{We make use of the KeyBERT package available at \\url{https:\/\/github.com\/MaartenGr\/KeyBERT}.}\n\n\\paragraph{Evaluation}\nAdhering to the same evaluation paradigm as Section~\\ref{sec:clustering-evaluation}, we use the six intent classification datasets for assessing the quality of cluster naming techniques. A common practice for building intent training dataset involves assigning each class in the training set with a meaningful name, typically mirroring the semantics of the class. As such, an intent class grouping example requests about Covid-19 testing information in \\citet{tepper2020balancing}, is named `testing information'. For each class in the intent trainig set, we compare the automatically extracted class name to that assigned to the class by the dataset creator, where the similarity is obtained by encoding the two phrases -- the original class name and the candidate one -- and computing their cosine similarity.\n\nTable~\\ref{tbl:cluster-naming} presents the results for the two methods. Neither approach systematically outperforms the other, and the only significant difference in favor of the tf-idf approach is found for \\citet{liu2019benchmarking}. We, therefore, conclude that the two approaches are roughly comparable and adhere to the faster td-idf method in our pipeline solution.\n\n\\begin{table}[hbt]\n\\small\n\\centering\n\\begin{tabular}{l|lc}\ndataset & td-idf & KeyBERT \\\\ \\hline\n\\citet{liu2019benchmarking} & \\hspace{0.3em}\\textbf{0.718}* & 0.626 \\\\\n\\citet{larson2019evaluation} & \\hspace{0.3em}\\textbf{0.555} & 0.489 \\\\\n\\citet{tepper2020balancing} & \\hspace{0.3em}\\textbf{0.481} & 0.460 \\\\ \\hline\ntelecom & \\hspace{0.3em}0.437 & \\textbf{0.470} \\\\\nfinance & \\hspace{0.3em}\\textbf{0.438} & 0.426 \\\\\nretail & \\hspace{0.3em}0.375 & \\textbf{0.393} \\\\\n\\end{tabular}\n\\vspace{-0.05in}\n\\caption{Cluster naming evaluation: for each dataset, the mean pairwise similarity between the predefined intent name and the assigned keyphrase is presented. `*' denotes significant difference at p-val{\\textless}0.01 using the Wilcoxon (Mann\u2013Whitney) ranksums test. }\n\\label{tbl:cluster-naming}\n\\end{table}\n\n\n\\section{Clustering of Unrecognized Requests}\n\\label{sec:clustering}\n\nConsider a virtual assistant aimed to attend to public questions about Covid-19. The rapidly evolving situation with the pandemic means that novel requests are likely to be introduced to the bot on a daily basis. For example, changes in international travel regulations would entail requests related to PCR test availability, and the decision to offer booster shots for seniors might cause a spike in questions about vaccine appointments for elderly citizens. Monitoring and promptly detecting these topics is fundamental for continuous bot improvement. We next describe the pipeline we apply, including utterance cleanup, clustering, cluster representative extraction, and cluster naming.\n\n\\begin{comment}\n\\begin{figure*}\n\\centering\n\\resizebox{12cm}{!}{\n\\includegraphics{figures\/clustering-flow.png}\n}\n\\caption{...}\n\n\\label{fig:pipeline}\n\\end{figure*}\n\\end{comment}\n\n\n\\subsection{Cleaning and Filtering Utterances}\n\nClustering unrecognized utterances aims at gaining topical insights into client needs that are currently poorly covered by the automatic reply system. In some cases, these utterances include easily identifiable, yet practically useless clusters, such as greetings (`hello, how are you?'), acknowledgements (`thank you'), or other statements of little practical importance (`would you please check that for me?'). Generally treated as dialog \\textit{fluff}, these statements and their semantic equivalents can be filtered out from the subsequent processing.\n\nWe address this issue by manually collecting a sample set of fluff utterances: a set of domain-independent ones and a (small) set of domain-specific ones, where both are treated as anchors for data cleanup. Specifically, given a predefined anchor set of fluff utterances $F$, and a set of utterances subject for clustering $U$, we encode utterances in both $F$ and $U$ into their semantic representations using the SentenceTransformer (ST) encoder \\citep{reimers2019sentence}\\footnote{Using the Universal Sentence Encoder (USE) \\citep{cer2018universal} yielded similar results, see Section \\ref{sec:evaluation-results} for details.}, and filter out each utterance $u{\\in}U$ that exceeds a minimal cosine similarity threshold to any fluff utterance $f{\\in}F$. We set the similarity threshold to $0.7$ using qualitative evaluation over the $[0.5, 0.8]$ range. Requests such as `hi, how are you doing today', and `thanks for your help' would be filtered out prior to the clustering procedure since they closely resemble utterances from the anchor fluff set.\n\n\n\\subsection{Clustering Utterances}\n\nHere we describe the main clustering procedure followed by an optional single merging step.\n\n\\subsubsection{Main Clustering Procedure}\n\\paragraph{Clustering requirements} Multiple traits make up an effective clustering procedure in our scenario. First, the number of clusters is unknown, and has to be discovered by the clustering algorithm. Second, the nature of data typically implies several large and coherent clusters, where users repeatedly introduce very similar requests, and a very long tail of unique utterances that do not have similar counterparts in the dataset. While the latter are of somewhat limited importance, they can amount to a significant ratio of the input data. There is an evident trade-off between the size of the generated clusters, their density or sparsity, and the amount of outliers: smaller and denser clusters entail larger amounts of outliers. The decision regarding the precise outcome granularity may vary according to domain and bot maturity. Growing deployments, with a high volume of unrecognized requests could benefit from surfacing large and coarse topics that are subject to automation. That said, mature deployments are likely to focus on fine-grained coherent clusters of utterances, introducing enhancements into the existing solution. Our third requirement is, therefore, a configurable density of the outcome clusters, which can be set up prior to the clustering procedure.\nFigure~\\ref{fig:clustering-space} illustrates a typical outcome of the clustering process; identified clusters are depicted in color, while the outliers, which are the majority of instances in this case, appear in grey.\n\n\\begin{figure}\n\\centering\n\\resizebox{0.95\\columnwidth}{!}{\n\\includegraphics{figures\/clusters-unhandled.png}\n}\n\\caption{t-SNE projection of a sample of unrecognized user requests in a production task-oriented dialog system. Identified clusters are in color, outliers -- in grey.}\n\\label{fig:clustering-space}\n\\end{figure}\n\nExisting clustering solutions can be roughly categorized across two major dimensions in terms of functional requirements: those requiring a fixed number of output clusters (1.a) and those that do not (1.b); those forcing cluster assignment on the entire dataset (2.a) and those tolerating outliers (2.b). Our clustering solution should accommodate (1.b) and (2.b): the number of clusters is determined by the clustering procedure, allowing for outliers. DBSCAN \\citep{ester1996density} and its descendant variants constitute a popular family of clustering solutions that satisfies these requirements; we, therefore, evaluate our algorithm against implementations of DBSCAN and its hierarchical version HDBSCAN \\citep{mcinnes2017hdbscan}.\n\n\\paragraph{Data representation} Given a set of $m$ unhandled utterances $U${=}($u_1$, $u_2$, ..., $u_m$), we compute their vector representations $E${=}($e_1$, $e_2$, ..., $e_m$) using a sentence encoder. A distance matrix $D$ of size $m{\\times}m$ is then computed, where $D[i,j]{=}1.0{-}cos(e_i, e_j)$. The matrix $D$ is further used as an input to the core clustering algorithm.\n\n\\paragraph{Radius-based clustering (RBC)} We introduce a variant of the popular k-means clustering algorithm. This variant complies with our clustering requirements by (1) imposing a strict cluster assignment criterion and (2) eventually omitting points that do not constitute clusters exceeding a predefined size. Specifically, we iterate over randomly-ordered vectors in $E$, where each utterance vector can be assigned to an existing cluster if certain conditions are satisfied; otherwise, it initiates a new cluster. To join an existing cluster, the utterance is required to surpass a predefined similarity threshold $min\\_sim$ for the cluster's centroid\\footnote{Following the k-means notation, we compute a cluster's centroid as the arithmetic mean of its member vectors.}, implying its placement within a certain \\textit{radius} from the centroid. If multiple clusters satisfy the similarity requirement, the utterance is assigned to the cluster with the highest proximity i.e., the cluster with the highest semantic similarity to its centroid. Additional iterations over the utterances are further performed, re-assigning them to different clusters if needed, until convergence or until a pre-defined number of iterations is exhausted. The amount of clusters generated by the final partition is controlled by the predefined $min\\_size$ value: elements that constitute clusters of small size (in particular, those with a single member) are considered outliers.\nAlgorithm \\ref{alg:clustering} presents the Radius-based Clustering (RBC) pseudo-code.\n\n\\SetKwInOut{Input}{input}\n\\SetKwInOut{Output}{output}\n\\SetKwComment{Comment}{\/*}{*\/}\n\n\\begin{algorithm}\n\\caption{Radius-based Clustering}\n\\label{alg:clustering}\n\\SetInd{0.75em}{0.25em}\n\\small\n\n\\textbf{input}: {E (e1, e2, ... en)} \\Comment{\\hspace{0.05cm} elements}\n\\textbf{input}: {D (n{$\\times$}n)} \\Comment{\\hspace{0.05cm} dist matrix}\n\\textbf{input}: {min\\_sim} \\Comment{\\hspace{0.05cm} min similarity}\n\\textbf{input}: {min\\_size} \\Comment{\\hspace{0.05cm} min cluster size}\n\\BlankLine\n\n$C \\gets \\emptyset$\n\n\\While{convergence criteria are not met}{\n \\BlankLine\n \\For{each element $e_i{\\in}E$} {\n \\BlankLine\n \\eIf{the highest similarity of $e_i$ to any existing cluster exceeds min\\_sim}{\n \\BlankLine\n assign $e_i$ to its most similar cluster $c$ \\\\\n re-calculate the centroid of $c$\n \\BlankLine\n }{\n \\BlankLine\n create a new cluster $c^\\prime$ and assign $e_i$ to it \\\\\n set the centroid of $c^\\prime$ to be $e_i$ \\\\\n add $c^\\prime$ to $C$\n \\BlankLine\n \n }\n }\n}\n\\BlankLine\n\\Comment{clusters with fewer elements than the predefined $min\\_size$ are considered outliers}\n\\BlankLine\n\\textbf{return}: each $c{\\in}C$ of size exceeding $min\\_size$\n\n\\end{algorithm}\n\n\n\n\n\\subsubsection{Merging Clusters}\n\\label{sec:cluster-merging}\n\nCluster merging has been extensively used as a means to determine the optimal clustering outcome in the scenario where the `true' number of partitions is unknown \\citep{krishnapuram1994generation, kaymak2002fuzzy, xiong2004similarity}. These start with a large number of clusters and iteratively merge compatible partitions until the optimization criteria is satisfied. Beginning with a fine-grained partitioning, we perform a single step of cluster merging, combining similar clusters into larger groups. A similar outcome could potentially be obtained by relaxing the \\textit{min\\_sim} similarity threshold and thereby, generating more heterogeneous flat clusters in the first place. However, a single step of cluster merging yielded results that outperform flat clustering on a range of datasets (see Table~\\ref{tbl:evaluation-results} and Section~\\ref{sec:evaluation-results} for details). \n\nClassical agglomerative hierarchical clustering (AHC) algorithms merge pairs of lower-level clusters by minimizing the agglomerative criterion: a similarity requirement that has to be satisfied for a pair of clusters to be merged. Similar to AHC, we seek to merge clusters exhibiting high mutual similarity. In contrast to AHC, our approach is not pair-wise, rather it constitutes a subsequent invocation of Algorithm \\ref{alg:clustering} that takes inter-cluster (and not inter-utterance) distance matrix $D_c$ as its input. We next describe two approaches for building this distance matrix towards a single merging step.\n\n\\paragraph{Semantic Merging}\nFormally, given a set of clusters $C$ of size $k{=}|C|$, identified by Algorithm \\ref{alg:clustering}, we compute the set of cluster centroid vectors ($cn_1$, $cn_2$, ..., $cn_k$); these vectors are assumed to reliably represent the semantics of their corresponding clusters. A distance matrix $D_c$ is then computed by calculating the pairwise semantic distance between all pairs of centroids in the set $C$. $D_c$ is further used as an input to subsequent invocation of the RBC algorithm, where the \\textit{min\\_sim} parameter can possibly differ from the previous invocation.\n\n\n\\begin{table*}[hbt]\n\\centering\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|l}\ncluster name: \\textbf{difference covid flu} (28) & cluster name: \\textbf{covid pregnancy} (17) \\\\ \\hline\nis covid the same as the flu? (4) & covid 19 and pregnancy (10) \\\\ \nhow is covid different from the flu? (3) & covid risks for a pregnant woman (4) \\\\\nwhat is the difference between covid 19 and flu? & what is the risk of covid for pregnant women? \\\\\nwhat's the difference between covid and flu & is covid-19 dangerous when pregnant? \\\\\nis the covid the same as cold? & 7 months pregnant and tested positive for covid, any risks? \\\\\ncovid vs flu vs sars & covid 19 during pregnancy \\\\\n\\end{tabular}\n}\n\\caption{Example clusters of user requests generated by the RBC algorithm when applied on the Covid-19 dataset. Only a partial list of cluster members is presented in the table; the number in parenthesis denotes a cluster size.}\n\\label{tbl:clustering-example}\n\\end{table*}\n\n\n\\paragraph{Keyword-based Merging}\nUser requests to a goal-oriented dialog system are likely to be characterized by the extensive use of a domain-specific lexicon. For example, in the domain of banking, we are likely to encounter terms related to `accounts', `transactions', and `balance', while in the context of a Covid-19 Q\\&A bot, the lexicon is likely to contain extensive use of words related to `vaccine', `boosters', `appointments', and so on. Although impressive at capturing meaning, semantic representations do not necessarily capture the domain-specific notion of similar requests. For example, the two utterances `covid 19 and pregnancy' and `7 months pregnant and tested positive for covid, any risks?' do not exhibit exceptional semantic similarity, while practically they should be clustered together. The intuition that stems from the fact that both sentences contain `pregnant'\/`pregnancy', and `covid' -- words typical of the underlying domain. We therefore suggest the additional, keyword-based merging approach, as detailed below.\n\nA common way to extract lexical characteristics of a corpus is using a \\textit{log-odds ratio with informative Dirichlet prior} \\citep{monroe2008fightin} -- a method that discovers markers with excessive frequency in one dataset compared to another. We used the collection of unhandled utterances as our target corpus and a random sample of $100$K sentences from a Wikipedia dump\\footnote{We used the Wikipedia 2006 dump available at \\url{https:\/\/nlp.lsi.upc.edu\/wikicorpus\/}.} as our background neutral dataset. Setting the strict log-odds score of \\text{-}$5$, markers identified for the dataset of Covid-19 requests included \\{`quarantine', `measures', `emergency', `pregnant', `sick', `leave', `risk'\\}.\n\nGiven a set of markers, we now define cluster similarity as follows: we denote the set of domain-specific markers discovered by the log-odds ratio procedure by $M$ and the set of top-k most frequent words\\footnote{k=$10$ by qualitative evaluation over the $[3, 15]$ range.} in two clusters $c_1$ and $c_2$, by $W_1$ and $W_2$, respectively. The similarity of $c_1$ and $c_2$ is then defined to be proportional to the number of markers from $M$ that can be found in both $W_1$ and $W_2$: $sim(c_1, c_2) \\propto |M \\cap W_1 \\cap W_2|$, where $|M|$ amounts to the maximal possible similarity. Pairwise cluster distances are further computed by normalizing the similarity values to the $[0, 1]$ range, and subtracting them from $1$. A distance matrix $D_c$ is constructed by calculating pairwise distance on the set of clusters in $C$, and is further used as an input to subsequent invocation of the RBC algorithm, with an adjusted \\textit{min\\_sim} threshold.\n\nFollowing this definition and assuming a sample set of domain specific markers \\{`covid', `risk', `quarantine', `pregnant', `appointment', `test', `positive'\\}, the two utterances `covid 19 and pregnancy' and `7 months pregnant and tested positive for covid, any risks?' will exhibit considerable keyword-based similarity (intersection size{=}2), despite only moderate semantic proximity.\n\n\\paragraph{Example Clustering Result} Table~\\ref{tbl:clustering-example} presents two example clusters generated from user request to the Covid-19 bot. We applied the main RBC clustering procedure and a subsequent keyword-based merge step. As can be observed, semantically related utterances are grouped together, where the number beside an utterance reflects its frequency in the cluster. As an example, `is covid the same as the flu?' was asked four times by different users.\n\n\\input{chapters\/3.3-evaluation-results.tex}\n\n\n\\section{Selecting Cluster Representatives}\n\\label{sec:representatives}\n\nContemporary large-scale deployments of virtual assistants must cope with increasingly high volumes of incoming user requests. A typical large task-oriented system can accept over $100$K requests (i.e., user utterances) per day, where the amount of conversations that pass the initial step of intent identification can vary between 40\\% and 80\\%. Consequently, tens of thousands of requests can be identified as unrecognized on a daily basis. Clustering these utterances would result in large clusters that are often impractical for manual processing. Providing conversation analysts with a limited set of \\textit{cluster representatives} can help extract value from the unrecognized data.\n\n\\subsection{Representative Characteristics}\nA plausible set of representative cluster utterances would have to satisfy two desirable properties: utterance \\textit{centrality} and \\textit{diversity}. We define an utterance centrality to be proportional to its frequency in a cluster: requests with higher frequency should be boosted, since they are typical of the way people express their need to the bot. The diversity of the utterance set mirrors the subtle differences in the phrasing and meaning of utterances; these reflect the various ways people can express the same need.\n\nSampling randomly from a cluster may result in a sub-optimal set of representatives, in terms of both centrality and diversity. Consider the example where no `covid 19 and pregnancy' requests (Table \\ref{tbl:clustering-example}, right) are selected as representatives (low centrality), or both `what is the difference between covid 19 and flu?' and `what's the difference between covid and flu' (Table \\ref{tbl:clustering-example}, left) are selected (low diversity). Contrary to these examples, the set \\{`is covid the same as the flu?', `is the covid the same as cold?', `covid vs flue vs sars'\\} contains utterance of high centrality (the first utterance), and comprehensive coverage of the entire cluster semantics.\n\n\\subsection{Selecting Representatives}\nGiven a set of utterance vectors represented in a $k$-dimensional Euclidean space, the volume enclosed by the vectors is influenced by two factors -- the angle made by the vectors with respect to each other and their length. More orthogonal vectors span higher volume in the semantic space. Similarly, the higher is the length of the vectors, the higher is the volume they encompass. Intuitively, the angle made by the vectors is indicative of how similar the corresponding utterances are. Moreover, if the length of the vectors is equated to the centrality of the corresponding utterances, we reduce the problem of selecting $k$ diverse utterances with high centrality to that of maximizing the volume encompassed by $k$ corresponding vectors.\n\n\n\\paragraph{Selection Approach} Given a cluster $c$ of size $n$, we first project the encodings of the $n$ utterances onto a unit sphere. We further take into consideration the factor of centrality by scaling the vectors' length based on their frequency in a cluster. The volume enclosed by any subset of vector representations is now affected by both angles and the vectors' length, thereby simultaneously satisfying the two objectives for representative set selection: centrality and diversity. Figure \\ref{fig:diversity-centrality} illustrates the idea of selecting cluster representatives; we use a 2D space for the sake of interpretability.\n\nAssuming $n$ vectors in a vector space, the square of the $k$-dimensional volume enclosed by the vectors is proportional to the Gram-determinant of the vectors. Given $n$ utterances, we select $k$ diverse and central utterances by computing vectors' similarity matrix, and finding a square sub-matrix of size $k$ that has a high determinant; this can be achieved by using a determinant's point process to sample such a sub-matrix \\citep{gong2014diverse, celis2018fair}. We make use of the freely available DPPy Python package\\footnote{\\url{https:\/\/github.com\/guilgautier\/DPPy}} for this purpose. \n\nAs a concrete example, for the two clusters in Table~\\ref{tbl:clustering-example} and $k{=}3$, two represenative sets were selected: \\{`is covid the same as the flu?', `is the covid the same as cold?', `covid vs flue vs sars'\\} and \\{`covid 19 and pregnancy', `covid risks for a pregnant woman', `7 months pregnant and tested positive for covid, any risks?'\\}.\n\n\\begin{figure}\n\\centering\n\\resizebox{6.0cm}{!}{\n\\includegraphics{figures\/diversity-centrality-small.png}\n}\n\\caption{Simplified illustration of cluster representatives selection.\nWhile taking into consideration only diversity, the widest angle is $\\angle$BCD, meaning vectors $u$ and $w$ are the most diverse out of the three visualized vectors. Assuming vector length that is proportional to the vectors' centrality in a cluster, the chart shows a larger enclosed area between the vectors $u$ and $v$, out of all enclosed areas between pairs of vectors; these vectors will be selected as cluster representatives for $k{=}2$.}\n\\label{fig:diversity-centrality}\n\\end{figure}\n\\section*{Acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Kerr solution was discovered in 1963~\\cite{Kerr,Kerr-Texas}, and quickly became a mainstay of general relativity, though it took the wider astrophysics community somewhat longer to appreciate its full significance. Understanding how the Kerr solution was first discovered is tricky~\\cite{Kerr-history}, and even to this day no really clean pedagogical first-principles derivation exists. \nTypically one tells the students: ``Here is the answer, feed it into your favourite computer algebra system [{\\sf Maple}, {\\sf Mathematica}, {\\sf Wolfram Alpha}, \\emph{whatever}] and check that the Ricci tensor is zero.''\nInterest in the Kerr spacetime is both intense and ongoing, with many review articles~\\cite{kerr-intro, Teukolsky:2014, Adamo:2014, Hehl:2014, Bambi:2011, Reynolds:2013, Johannsen:2015, Bambi:2017, Reynolds:2019}, at least two dedicated books~\\cite{kerr-book, kerr-book-2}, and many textbook discussions~\\cite{Weinberg, MTW, Adler-Bazin-Schiffer, Wald, D'Inverno, Hartle, Carroll, Hobson, Poisson, Padmanabhan}.\n\nThe closest one has to a pedagogical first-principles derivation of the Kerr spacetime is via the Newman--Janis trick~\\cite{Newman-Janis,NJ-wiki}, developed in 1965, which was immediately used in then deriving the electro-magnetically charged Kerr--Newman spacetime~\\cite{Kerr-Newman}. \nDespite many valiant efforts~\\cite{Newman:1977, Giampieri:1990, Drake:1997, Drake:1998, Viaggiu:2006, Hansen:2013, Ferraro:2013, Keane:2014, Erbin:2014, Erbin:2016, Rajan:2016-NJ} it is still fair to say that no fully convincing explanation of why the Newman--Janis trick works has been forthcoming.\\footnote{A somewhat different ansatz, based on rather strong assumptions regarding the geodesics, has been explored in references~\\cite{Dadhich:2013,Dadhich:2022}.} \n\n\\enlargethispage{49pt}\nHerein we shall try a different approach: \n\\begin{itemize}\n\\item First, since we know that for a Newtonian rotating fluid body the Maclaurin spheroid is a good first approximation~\\cite{Spheroid, Maclaurin, Chandrasekhar, Poisson-Will, Lyttleton}, and that this is an example of an oblate spheroid in flat 3-space, one strongly suspects that oblate spheroidal coordinates might be useful when it comes to investigating rotating black holes. (And for that matter, other rotating bodies in general relativity.)\n\\item\nSecond, we know that the tetrad formalism is extremely useful~\\cite{Rajan:2016-global}, both in purely classical general relativity and especially when working with elementary particles with spin.\n\\item\nThird, in a rather different context, the use of \\emph{non-ortho-normal} tetrads has recently proved to be extremely useful\n\\cite{Visser:2021}.\n\\end{itemize}\nThese observations \\emph{suggest} that it might be useful to write the spacetime metric in the form\n\\begin{equation}\ng_{ab} = g_{AB} \\;\\; e^A{}_a\\;\\; e^B{}_b,\n\\end{equation}\nwhere we shall seek to push all of the mass dependence into the tetrad-component metric $g_{AB}$, while pushing as much as possible of the rotational aspects of the problem into the \\emph{non-ortho-normal} co-tetrad $e^A{}_a$.\nSpecifically we shall show that we can choose the \\emph{non-ortho-normal} co-tetrad to represent flat Minkowski space in oblate spheroidal coordinates\n\\begin{equation}\n(g_\\mathrm{Minkowski})_{ab} = \n\\eta_{AB} \\;\\; e^A{}_a\\;\\; e^B{}_b;\n\\qquad \n\\eta_{AB} = \\mathrm{diag}\\{-1,1,1,1\\}.\n\\end{equation}\nThis procedure separates out, to the greatest extent possible, the mass dependence from the rotational dependence, and makes the Kerr solution perhaps a little less mysterious. \n\n\\clearpage\n\\section{Preliminaries}\n\\enlargethispage{40pt}\nIn any spacetime manifold one can always (at least locally) set up a flat metric $(g_\\mathrm{flat})_{ab}$ and from that flat metric extract a (non-unique) co-tetrad \n\\begin{equation}\n(g_\\mathrm{flat})_{ab} = \n\\eta_{AB} \\;\\: e^A{}_a\\;\\: e^B{}_b;\n\\qquad \n\\eta_{AB} = \\mathrm{diag}\\{-1,1,1,1\\}.\n\\end{equation}\nGiven such a co-tetrad, one can construct the associated tetrad $e_A{}^a$, (which is just the matrix inverse of the co-tetrad) and then for any arbitrary (non-flat) metric $g_{ab}$ one can always write: \n\\begin{equation}\ng_{ab}=g_{AB}\\;\\; e^A{}_a\\; \\;e^B{}_b \\,;\n\\qquad\ng_{AB} = g_{ab} \\;\\; e_A{}^a\\;\\; e_B{}^b. \n\\end{equation}\nWe wish to derive the Kerr solution by finding a suitable flat-space tetrad, then make a natural and simple ansatz for $g_{AB}$, and check that $g_{ab}$ satisfies the vacuum Einstein equations $R_{ab}=0$. \n\n\\paragraph{Example:} Consider Schwarzschild spacetime. \nOrder the coordinates as $\\{t,r,\\theta,\\phi\\}$. Take the co-tetrad for flat space written in spherical polar coordinates to be:\n\\begin{equation}\ne^A{}_a=\\left[\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & r & 0 \\\\\n0 & 0 & 0 & r\\sin\\theta \\\\\n\\end{array}\\right]\\,.\n\\end{equation}\nThen, given the symmetries of the spacetime, a natural and simple ansatz for the tetrad-component metric $g_{AB}$ would be \n\\begin{equation}\ng_{AB}=\\left[\\begin{array}{cccc}\n-f(r) & 0 & 0 & 0 \\\\\n0 & \\frac{1}{f(r)} &\\;\\; 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,.\n\\end{equation}\nIn this situation the Einstein equations yield\n\\begin{equation}\nf(r)=1-\\frac{2m}{r}\\,.\n\\end{equation}\nSimilarly, for the Reissner--Nordstr\\\"om spacetime one simply has\n\\begin{equation}\nf(r)=1-\\frac{2m}{r} + {Q^2\\over r^2}\\,.\n\\end{equation}\nWe shall now seek to do something similar for the Kerr spacetime. \n\n\n\n\\clearpage\n\\section{Oblate spheroidal coordinates} \nThe Cartesian metric for flat spacetime can be rewritten in terms of oblate spheroidal coordinates by defining\n\\begin{equation}\n\\begin{split}\nx & =\\sqrt{r^2+a^2}\\;\\sin\\theta\\;\\cos\\phi \\,;\\\\\ny & =\\sqrt{r^2+a^2}\\;\\sin\\theta\\;\\sin\\phi \\,;\\\\\nz & =r\\cos\\theta\\,.\n\\end{split}\n\\end{equation}\nThen, ordering the coordinates as $\\{t,r,\\theta,\\phi\\}$, the metric is \n\\begin{equation}\n\\label{E:oblate-minkowski}\ng_{ab}=\\left[\\begin{array}{cccc}\n-1 & 0 & 0 & 0 \\\\\n0 & \\frac{r^2+a^2\\cos^2\\theta}{r^2+a^2} & 0 & 0 \\\\\n0 & 0 & r^2+a^2\\cos^2\\theta & 0 \\\\\n0 & 0 & 0 & (r^2+a^2)\\sin^2\\theta \\\\\n\\end{array}\\right]\\,.\n\\end{equation}\nSetting $A\\in\\{0,1,2,3\\}$, an obvious (but naive) co-tetrad for this metric is \n\\begin{equation}\n\\label{ob_tet_flat}\n(e_\\mathrm{naive})^A{}_a=\\left[\\begin{array}{cccc}\n1 & 0 & 0 & 0 \\\\\n0 & \\sqrt{\\frac{r^2+a^2\\cos^2\\theta}{r^2+a^2}} & 0 & 0 \\\\\n0 & 0 & \\sqrt{r^2+a^2\\cos^2\\theta} & 0 \\\\\n0 & 0 & 0 & \\sqrt{r^2+a^2}\\sin\\theta \\\\\n\\end{array}\\right]\\,.\n\\end{equation}\nHowever, trying to use this co-tetrad would not be ideal for deriving the Kerr spacetime since the Kerr spacetime is \\emph{stationary} not \\emph{static}, meaning that the Kerr metric must have non-zero, off-diagonal components. \n\\enlargethispage{20pt}\n\nWe could introduce non-diagonal components in our ansatz for the tetrad-component metric $g_{AB}$, however this vastly complicates the computations. In order to simplify the derivation, we will instead find a non-diagonal co-tetrad $e^A{}_a$ which will allow us to make $g_{AB}$ diagonal. More specifically, we will make an ansatz of the form \n\\begin{equation}\n\\label{Kerr_Guess}\ng_{AB}=\\left[\\begin{array}{cccc}\n-f(r) & 0 & 0 & 0 \\\\\n0 & \\frac{1}{f(r)} & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,,\n\\end{equation}\nas we did for the Schwarzschild and Reissner--Nordstr\\\"om spacetimes.\n\n\n\\clearpage\n\\section{An improved co-tetrad }\n\nNote that there exist an infinite number of co-tetrads for any given spacetime metric, related via local Lorentz transformations. (That is, if $e^A{}_a$ is a co-tetrad, then so is $L^A{}_B\\; e^B{}_a$, where $L^A{}_B$ is a tangent-space Lorentz transformation). We wish to transform the naive tetrad given in equation \\eqref{ob_tet_flat} via a Lorentz transformation into a more useful form. \n\nSince we are using an ansatz for $g_{AB}$ of the form given in equation \\eqref{Kerr_Guess}, we wish the $e^0{}_t$ component of our new tetrad to be the reciprocal of the $e^1{}_r$ component. \nFurthermore, we will ask that the $g_{t\\phi}$ component be the only non-zero off-diagonal component of our final spacetime metric $g_{ab}$. This then constrains the relevant local Lorentz transformation to be of the form\n\\begin{equation} \nL^A{}_B=\\left[\\begin{array}{cccc}\n\\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} &\\; 0 & 0 & \\;L^0{}_3 \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\nL^3{}_0 & 0 & 0 & L^3{}_3 \\\\\n\\end{array}\\right]\\,.\n\\end{equation} \nHowever, since $L^A{}_B$ is a Lorentz transformation, it must satisfy\n\\begin{equation} \nL^C{}_A\\;\\; \\eta_{CD}\\;\\; L^D{}_B = \\eta_{AB}\\,.\n\\end{equation} \nThis tightly constrains the components of $L^A{}_B$; in fact this requirement can be used to solve for the remaining 3 components. They are given by\n\\begin{equation} \n\\begin{split}\nL^3{}_3 & = L^1{}_1 = \\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} \\,;\\\\\nL^3{}_0 & = L^0{}_3= -\\frac{a\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\,.\n\\end{split}\n\\end{equation} \nHere $a$ can be either positive or negative depending on the sense of rotation.\n\n\nHence, explicitly, we have\n\\begin{equation} \nL^A{}_B=\\left[\\begin{array}{cccc}\n\\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} & 0 & 0 & -\\frac{a\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n-\\frac{a\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} & 0 & 0 & \\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} \\\\\n\\end{array}\\right]\\,.\n\\end{equation} \nNote that $\\det( L^A{}_B)=1$ and that this local Lorentz transformation corresponds to the velocity $\\beta = \n{a\\sin\\theta\\over\\sqrt{r^2+a^2}} \\in (-1,+1)$.\n \n\\clearpage\n\\null\n\\vspace{-75pt} \nOur new improved co-tetrad is now given by\n\\enlargethispage{20pt}\n\\begin{equation} \n\\begin{split}\n{e}^A{}_a & = L^A{}_B\\;\\; (e_\\mathrm{naive})^B{}_a \\\\[7pt]\n& =\\left[\\begin{array}{cccc}\n\\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} & 0 & 0 & -\\frac{\\sqrt{r^2+a^2}a\\sin^2\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\\\\n0 & \\sqrt{\\frac{r^2+a^2\\cos^2\\theta}{r^2+a^2}} & 0 & 0 \\\\\n0 & 0 & \\sqrt{r^2+a^2\\cos^2\\theta} & 0 \\\\\n-\\frac{a\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} & 0 & 0 & \\frac{(r^2+a^2)\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\\\\n\\end{array}\\right]\\,.\n\\end{split}\n\\end{equation} \nUsing the ansatz for $g_{AB}$, as given in equation \\eqref{Kerr_Guess}, we now find\n\\begin{equation} \n\\begin{split}\ng_{ab} & =g_{AB}\\;\\; {e}^A{}_a\\;\\; {e}^B{}_b \\\\[7pt]\n& =\\left[\\begin{array}{cccc}\n\\frac{-f(r)(r^2+a^2)+a^2\\sin^2\\theta}{r^2+a^2\\cos^2\\theta} & 0 & 0 & g_{t\\phi} \\\\\n0 & \\frac{r^2+a^2\\cos^2\\theta}{f(r)(r^2+a^2)} & 0 & 0 \\\\\n0 & 0 & r^2+a^2\\cos^2\\theta & 0 \\\\\ng_{t\\phi} & 0 & 0 & g_{\\phi\\phi} \\\\\n\\end{array}\\right]\\,.\n\\end{split}\n\\end{equation} \nHere\n\\begin{equation} \ng_{t\\phi}=\\frac{(r^2+a^2)\\;a\\sin^2\\theta\\;(f(r)-1)}{r^2+a^2\\cos^2\\theta}\\,,\n\\end{equation} \nand\n\\begin{equation} \ng_{\\phi\\phi}=\\frac{(r^2+a^2)\\;\\sin^2\\theta\\;(r^2+a^2-f(r)a^2\\sin^2\\theta)}{r^2+a^2\\cos^2\\theta}\\,.\n\\end{equation} \n\n\\section{Final step: Einstein equations }\n\n\nWe now apply the Einstein equations to the ansatz developed above.\nThe vacuum Einstein equations then give a system of (partial) differential equations for the metric components $g_{ab}$. \nExplicitly finding the set of PDEs is still best done with a computer algebra system, but one now has a well-defined and relatively simple problem to solve, and the PDEs reduce to ODEs for the function $f(r)$. \nThe simplest of these ODEs is \n\\begin{equation} \nR_{\\theta\\theta}=\\frac{df(r)}{dr}(r^3+ra^2)+f(r)(r^2-a^2)-r^2+a^2=0\\,.\n\\end{equation} \nThis is a first-order linear ODE which has the solution\n\\begin{equation} \nf(r)=1-\\frac{2mr}{r^2+a^2}\\,.\n\\end{equation} \nThis finally results in the fully explicit metric\n\\begin{equation} \n\\label{Kerr}\ng_{ab}=\\left[\\begin{array}{cccc}\n-\\left(1-\\frac{2mr}{\\rho^2}\\right) & \\;0 & 0 & \\;-\\frac{2mra\\sin^2\\theta}{\\rho^2} \\\\\n0 & \\frac{\\rho^2}{\\Delta} & 0 & 0 \\\\\n0 & 0 & \\rho^2 & 0 \\\\\n-\\frac{2mra\\sin^2\\theta}{\\rho^2} & 0 & 0 & \\Sigma\\sin^2\\theta \\\\\n\\end{array}\\right]\\,.\n\\end{equation} \nHere (as usual) we have $\\rho=\\sqrt{r^2+a^2\\cos^2\\theta}$,\\; while $\\Delta=r^2+a^2-2mr$, and in turn $\\Sigma=r^2+a^2+2mra^2\\sin^2\\theta\/\\rho^2$. Notice that equation \\eqref{Kerr} is just the Kerr metric written in the usual Boyer--Lindquist coordinates, which hence concludes the derivation. \n\n\\section{Summary} \n\nThe Kerr metric (and the Minkowski metric) can be related to the mass-independent co-tetrad \n\\begin{equation} \n{e}^A{}_a \n =\\left[\\begin{array}{cccc}\n\\sqrt{\\frac{r^2+a^2}{r^2+a^2\\cos^2\\theta}} & 0 & 0 & -\\frac{\\sqrt{r^2+a^2}a\\sin^2\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\\\\n0 & \\sqrt{\\frac{r^2+a^2\\cos^2\\theta}{r^2+a^2}} & 0 & 0 \\\\\n0 & 0 & \\sqrt{r^2+a^2\\cos^2\\theta} & 0 \\\\\n-\\frac{a\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} & 0 & 0 & \\frac{(r^2+a^2)\\sin\\theta}{\\sqrt{r^2+a^2\\cos^2\\theta}} \\\\\n\\end{array}\\right]\\,.\n\\end{equation} \nby the very simple relations\n\\begin{equation}\n(g_\\mathrm{Kerr})_{ab} = g_{AB} \\;\\; e^A{}_a\\;\\; e^B{}_b;\n\\qquad\n(g_\\mathrm{Minkowski})_{ab} = \\eta_{AB} \\;\\; e^A{}_a\\;\\; e^B{}_b;\n\\end{equation}\nwhere the tetrad-component metric is particularly simple\n\\begin{equation}\n\\label{E:basic}\ng_{AB}=\\left[\\begin{array}{cccc}\n-f(r) & 0 & 0 & 0 \\\\\n0 & \\frac{1}{f(r)} & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,; \\qquad\nf(r)=1-\\frac{2mr}{r^2+a^2}\\,.\n\\end{equation}\nThis manifestly has the appropriate limit as $a\\to 0$ and \ncleanly separates out the angular and radial behaviour. Furthermore the only change needed to accomodate the electromagnetically charged Kerr--Newman solution is to replace $2mr \\to 2mr -Q^2$ and so to set\n\\begin{equation}\nf(r)=1-\\frac{2mr- Q^2}{r^2+a^2}\\,.\n\\end{equation}\nWe have been relatively slow and careful in developing and presenting the analysis, trying to provide physical motivations for our choices at each step of the process. Of course, once you see the answer, the reason it works is obvious in hindsight --- simply take the usual Boyer--Lindquist \\emph{ortho-normal} co-tetrad for Kerr, set $m\\to 0$, and then set $\\eta_{AB}\\to g_{AB}$ to compensate. \nGiven this, can we now generalize the ansatz to deal with other coordinate representations~\\cite{Baines:unit-lapse, Baines:Darboux} of the Kerr spacetime? \n(Or its slow-rotation Lense--Thirring~\\cite{Lense-Thirring, Pfister, PGLT1, PGLT2, PGLT3, PGLT4} approximation?)\n\n\n\\clearpage\n\n\n\\section{Extensions of the basic ansatz} \n\nWe now develop several extensions and generalizations of the basic ansatz (\\ref{E:basic}) presented above.\n\n\\subsection{Eddington--Finkelstein (Kerr--Schild) form}\n\nTake\n\\begin{equation}\n\\label{E:EF-KS}\ng_{AB}=\\left[\\begin{array}{cccc}\n-1+\\Phi & \\Phi & 0 & 0 \\\\\n\\Phi & 1+\\Phi & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,; \\qquad\n\\Phi = {2mr\\over r^2+a^2}\\,.\n\\end{equation}\nKeep exactly the same non-ortho-normal co-tetrad $e^A{}_a$ as above. \nThen the metric $g_{ab} = g_{AB} \\; e^A{}_a\\; e^B{}_b$ is still Ricci flat --- so it is the Kerr solution in disguise. \nThis modified ansatz was inspired by inspecting and generalizing the Eddington--Finkelstein (Kerr--Schild) form of Schwarzschild. \nDefining $\\ell_A = (1,1,0,0)$ we note that\n\\begin{equation}\ng_{AB} = \\eta_{AB} + \\Phi \\;\\ell_A\\; \\ell_B\n\\end{equation}\nwhich is of Kerr--Schild form. Contracting with the co-tetrad and defining $\\ell_a = \\ell_A \\; e^A{}_a$ we have\n\\begin{equation}\ng_{ab} = (g_\\mathrm{Minkowski})_{ab} + \\Phi \\;\\ell_a \\; \\ell_b,\n\\end{equation}\nwhere the Minkowski space metric is written in oblate spheroidal coordinates as per (\\ref{E:oblate-minkowski}) and \n\\begin{equation}\n\\ell_a = \\left(\\sqrt{r^2+a^2\\over r^2+a^2 \\cos^2\\theta} ,\n\\sqrt{r^2+a^2\\cos^2\\theta\\over r^2+a^2} , 0,\n- a \\sin^2\\theta \\sqrt{r^2+a^2\\over r^2+a^2 \\cos^2\\theta}\n \\right).\n\\end{equation}\nSince $\\ell_a$ is easily checked to be a null vector, this is manifestly seen to be Kerr spacetime in Kerr--Schild form~\\cite{kerr-intro, Kerr-history, kerr-book}.\n\n\\subsection{Quasi-Painlev\\'e--Gullstrand form}\n\nTake\n\\begin{equation}\n\\label{E:qPG}\ng_{AB}=\\left[\\begin{array}{cccc}\n-1+\\Phi &\\; \\sqrt{\\Phi} & 0 & 0 \\\\\n\\sqrt{\\Phi} & 1 & \\; 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,; \\qquad\n\\Phi = {2mr\\over r^2+a^2}\\,.\n\\end{equation}\nKeep exactly the same non-ortho-normal co-tetrad $e^A{}_a$ as above.\nThen the metric $g_{ab} = g_{AB} \\; e^A{}_a\\; e^B{}_b$ is still Ricci flat --- so it is the Kerr solution in disguise. \n\n\\clearpage\nThis modified ansatz was \\emph{inspired} by looking at and generalizing the Painlev\\'e-Gullstrand form of Schwarzschild; though it was not \\emph{derived} therefrom --- more on this point later.\nThe tetrad metric $g_{AB}$ is of Painlev\\'e--Gullstrand form,\nbut the coordinate metric $g_{ab}$ is not, (and, in view of the analysis by Valiente-Kroon~\\cite{Valiente-Kroon:2003,Valiente-Kroon:2004}, cannot possibly be), of Painlev\\'e--Gullstrand form.\n\nIt is convenient to introduce two vectors, $T_A=(1,0,0,0)$ and $S_A=(0,1,0,0)$, since then \n\\begin{equation}\ng_{AB} = \\eta_{AB} + \\Phi \\;T_A\\; T_B \n+ \\sqrt{\\Phi} \\;(T_A \\;S_B + S_A \\; T_B).\n\\end{equation}\nWe can furthermore factorize this as follows\n\\begin{equation}\ng_{AB} = \\eta_{CD} \\; \n\\left(\\delta^C{}_A + \\sqrt{\\Phi} \\;S^C \\;T_A \\right) \\;\n\\left(\\delta^D{}_B + \\sqrt{\\Phi} \\;S^D \\;T_B \\right),\n\\end{equation}\nimplying the existence of a factorizable \\emph{ortho-normal} co-tetrad\n\\begin{equation}\n(e_{ortho})^A{}_ a = \n\\left(\\delta^A{}_B + \\sqrt{\\Phi} \\; S^A \\; T_B \\right) \\; e^B{}_ a.\n\\end{equation}\nNote that all the mass-dependence is concentrated in $\\Phi$, whereas all the angular dependence is still concentrated in the usual mass-independent \n\\emph{non-ortho-normal} tetrad $e^B{}_ a$.\n\n\nLet us see what happens in the coordinate basis: Setting \n\\begin{equation}\nT_a = T_A\\; e^A{}_a = \\left(\\sqrt{r^2+a^2\\over r^2+a^2 \\cos^2\\theta} ,\n0 , 0,\n- a \\sin^2\\theta \\sqrt{r^2+a^2\\over r^2+a^2 \\cos^2\\theta}\n\\right),\n\\end{equation}\nand \n\\begin{equation}\nS_a = S_A\\; e^A{}_a = \\left(0, \\sqrt{r^2+a^2\\cos^2\\theta\\over r^2+a^2} ,0,0\n\\right),\n\\end{equation}\nwe see\n\\begin{equation}\n\\label{E:QPG-full}\ng_{ab} = (g_\\mathrm{Minkowski})_{ab} + \\Phi \\;T_a \\; T_b\n+ \n\\sqrt{\\Phi} \\;(T_a \\;S_b + S_a \\; T_b),\n\\end{equation}\nwhere the Minkowski space metric is written in oblate spheroidal coordinates as per (\\ref{E:oblate-minkowski}).\nThe vectors $T_a$ and $S_a$ are orthonormal timelike and spacelike vectors with respect to both the Minkowski metric\n(\\ref{E:oblate-minkowski}) and the full metric (\\ref{E:QPG-full}). In the language of the Hamilton--Lisle ``river model''~\\cite{Hamilton:2004} these are easily identified as what they call the ``twist'' and ``flow'' vectors, and so this quasi-Painlev\\'e--Gullstrand version of the Kerr metric is equivalent to the Doran form~\\cite{Doran:1999} of the Kerr metric.\nThis is the closest we can get to putting the Kerr metric into Painlev\\'e--Gullstrand form --- partial success at the tetrad level, but failure at the coordinate level. \nFinally, we observe that explicit computation reveals that $g^{tt}=-1$, so this version of the metric is definitely unit lapse~\\cite{Baines:unit-lapse}. \n\n\\subsection{1-free-function form}\n\nLet $h(r)$ be an arbitrary differentiable function and take\n\\begin{equation}\n\\label{E:1-free}\ng_{AB}=\\left[\\begin{array}{cccc}\n-f(r)& -f(r) h(r) & 0 & 0 \\\\\n\\; -f(r) h(r) & \\;{1\\over f(r)} - f(r) h(r)^2 \\;& 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,; \\qquad\nf(r) = 1-{2mr\\over r^2+a^2}\\,.\n\\end{equation}\nKeep exactly the same non-ortho-normal co-tetrad $e^A{}_a$ as above. \nThen the metric $g_{ab} = g_{AB} \\; e^A{}_a\\; e^B{}_b$ is still Ricci flat --- so it is the Kerr solution in disguise. \n\nThis ansatz was \\emph{inspired} by looking at and generalizing the Boyer--Lindquist, Kerr--Schild, and quasi-Painlev\\'e--Gullstrand forms of Kerr discussed above; not \\emph{derived} therefrom. With hindsight, one strongly suspects an underlying coordinate transformation is responsible for this behaviour. Indeed after a little ``reverse engineering'' one is lead to consider the not particularly obvious coordinate transformation\n\\begin{equation}\n\\label{E:specific}\nt \\to t + \\int h(r) \\; {\\mathrm{d}} r; \\qquad \\phi \\to \\phi + \\int {a\\, h(r)\\over r^2+a^2} \\; {\\mathrm{d}} r. \n\\end{equation}\nWriting the new coordinates as $\\bar x^a$ the relevant Jacobi matrix is\n\\begin{equation}\nJ^a{}_b = (\\bar x^a)_{,b} \n= {\\partial\\bar x^a\\over \\partial x^b} =\n\\left[ \\begin{array}{cccc}\n1 & h(r) &0&0\\\\0&1&0&0\\\\0&0&1&0\\\\\n0 &{a\\, h(r)\\over r^2+a^2} &0&1\n\\end{array} \\right].\n\\end{equation}\nGoing to the tetrad basis an easy computation yields\n\\begin{equation}\nJ^A{}_B = J^a{}_b \\;\\; e_a{}^A\\;\\; e^b{}_B =\n\\left[ \\begin{array}{cccc}\n1 & h(r) &0&0\\\\0&1&0&0\\\\0&0&1&0\\\\\n0 &0&0&1\n\\end{array} \\right].\n\\end{equation}\nBut then it is easy to check that\n\\begin{eqnarray}\n\\left[ \\begin{array}{cccc}\n1 & h(r) &0&0\\\\0&1&0&0\\\\0&0&1&0\\\\\n0 &0&0&1\n\\end{array} \\right]^T\n\\left[\\begin{array}{cccc}\n-f(r) & 0 & 0 & 0 \\\\\n0 & \\frac{1}{f(r)} & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\n\\left[ \\begin{array}{cccc}\n1 & h(r) &0&0\\\\0&1&0&0\\\\0&0&1&0\\\\\n0 &0&0&1\n\\end{array} \\right] \n\\nonumber\\\\\n=\n\\left[\\begin{array}{cccc}\n-f(r)& -f(r) h(r) & 0 & 0 \\\\\n\\; -f(r) h(r) & \\;{1\\over f(r)} - f(r) h(r)^2 \\;& 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right].\n\\end{eqnarray}\nThat is, the 1-free-function ansatz (\\ref{E:1-free}) can be obtained from the basic ansatz (\\ref{E:basic}) by the very specific coordinate transformation (\\ref{E:specific}); with the \nspecific coordinate transformation being carefully ``reverse engineered'' to do minimal violence to the original basic ansatz.\n\n\\subsection{Lense--Thirring limit}\n\nConsider the slow rotation limit $a\\to 0$, explicitly keeping the first two terms, while keeping $h(r)$ arbitrary, then\n\\begin{equation} \n{e}^A{}_a \n =\\left[\\begin{array}{cccc}\n 1 & 0 & \\;0 & -a\\sin^2\\theta \\\\\n0 & 1 & 0 & 0 \\\\\n0 & 0 & r & 0 \\\\\n-\\frac{a\\sin\\theta}{r} & 0 & 0 & r\\sin\\theta \\\\\n\\end{array}\\right] + \\O(a^3)\\,.\n\\end{equation} \nand \n\\begin{equation}\ng_{AB}=\\left[\\begin{array}{cccc}\n-f(r)& -f(r) h(r) & 0 & 0 \\\\\n\\; -f(r) h(r) & \\;{1\\over f(r)} - f(r) h(r)^2 \\;& 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{array}\\right]\\,; \\qquad\nf(r)=1-\\frac{2m}{r} + {2m a^2\\over r^3}+ \\O(a^4)\\,.\n\\end{equation}\nThus our 1-free-function ansatz (\\ref{E:1-free}) leads to an entire class of tetrad metrics $g_{AB}$,\n(and implicitly, the corresponding coordinate basis metrics $g_{ab}$), that are appropriate for describing the exterior spacetime of slowly rotating objects. \nThis Lense--Thirring slow rotation limit~\\cite{Lense-Thirring,Pfister}, in its various incarnations~\\cite{PGLT1,PGLT2,PGLT3,PGLT4} is of significant importance in observational astrophysics. \n\n\n\n\\clearpage\n\\subsection{Summary}\n\nIn this section we have seen how our basic ansatz (\\ref{E:basic}), which we originally developed for physically motivating and then deriving the Kerr solution with a minimum of fuss, can be extended and modified to deal with other coordinate representations of the Kerr metric --- such as the Eddington--Finkelstein (Kerr--Schild) coordinates (\\ref{E:EF-KS}), the quasi-Painlev\\'e--Gullstrand (Doran) coordinates (\\ref{E:qPG}), and an entire 1-free-function class of coordinate systems (\\ref{E:1-free}) that still respect most of the fundamental symmetries of the original ansatz.\n\n\\section{Discussion} \n\nWe have physically motivated an ansatz for the Kerr spacetime metric, partially based on Newtonian physics, (the fact that Maclaurin's oblate spheroids already became of interest for rotating bodies some 280 years ago), and partially based on the fact that tetrad methods are known to be useful in general relativity.\nSpecifically, the key step is to write the coordinate metric as $g_{ab} = g_{AB}\\; e^A{}_a\\; e^B{}_b$, while allowing the use of \\emph{non-ortho-normal} tetrads. \n\nWe have seen that doing so permits one to force all of the non-trivial angular dependence into a mass-independent co-tetrad $e^A{}_a$ that is compatible with flat spacetime in oblate spheroidal coordinates, while forcing all of the mass-dependence (and none of the angular dependence) into the tetrad-basis metric $g_{AB}$. This clean separation between angular dependence and mass dependence greatly simplifies the computational complexity of the problem. \nWe expect these ideas to have further applications and implications. \n\n\\bigskip\n\\hrule\\hrule\\hrule\n\n\\section*{Acknowledgements}\n\nJB was supported by a Victoria University of Wellington PhD Doctoral Scholarship\nand was also indirectly supported by the Marsden Fund, \nvia a grant administered by the Royal Society of New Zealand.\n\\\\\nMV was directly supported by the Marsden Fund, \nvia a grant administered by the Royal Society of New Zealand.\n\n\\bigskip\n\\hrule\\hrule\\hrule\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\nThis paper focuses on the numerical approximation of integrals of the form\n$$\nI(f) = \\int f(\\b{x}) p(\\b{x}) \\mathrm{d} \\b{x} ,\n$$\nwhere $f$ is a function of interest and $p$ is a positive and continuously differentiable probability density on $\\mathbb{R}^d$, under the restriction that $p$ and its gradient can only be evaluated pointwise up to an intractable normalisation constant.\nThe standard approach to computing $I(f)$ in this context is to simulate the first $n$ steps of a $p$-invariant Markov chain $(\\b{x}^{(i)})_{i=1}^\\infty$, possibly after an initial burn-in period, and to take the average along the sample path as an approximation to the integral:\n\\begin{equation}\\label{eqn:MC}\nI(f) \\approx I_{\\text{MC}}(f) = \\frac{1}{n}\\sum_{i=1}^n f(\\b{x}^{(i)}).\n\\end{equation}\nSee Chapters 6--10 of \\cite{Robert2013} for background.\nIn this paper $\\mathbb{E}$, $\\mathbb{V}$ and $\\mathbb{C}$ respectively denote expectation, variance and covariance with respect to the law $\\mathbb{P}$ of the Markov chain.\nUnder regularity conditions on $p$ that ensure the Markov chain $(\\b{x}^{(i)})_{i=1}^\\infty$ is aperiodic, irreducible and reversible, the convergence of $I_{\\text{MC}}(f)$ to $I(f)$ as $n \\rightarrow \\infty$ is described by a central limit theorem \n\\begin{equation}\n\\sqrt{n} (I_{\\text{MC}}(f) - I(f)) \\rightarrow \\mathcal{N}(0,\\sigma(f)^2) \\label{eqn: clt}\n\\end{equation}\nwhere convergence occurs in distribution and, if the chain starts in stationarity,\n\\begin{equation*}\n\\sigma(f)^2 = \\mathbb{V}[f(\\b{x}^{(1)})] + 2 \\sum_{i=2}^\\infty \\mathbb{C}[f(\\b{x}^{(1)}), f(\\b{x}^{(i)})]\n\\end{equation*}\nis the asymptotic variance of $f$ along the sample path.\nSee Theorem 4.7.7 of \\cite{Robert2013} and more generally \\cite{meyn2012markov} for theoretical background.\nNote that for all but the most trivial function $f$ we have $\\sigma(f)^2 > 0$ and hence, to achieve an approximation error of $O_P(\\epsilon)$, a potentially large number $O(\\epsilon^{-2})$ of calls to $f$ and $p$ are required. \n\nOne approach to reduce the computational cost is to employ control variates \\citep{Hammersley1964,Ripley1987}, which involves finding an approximation $f_n$ to $f$ that can be exactly integrated under $p$, such that $\\sigma(f - f_n)^2 \\ll \\sigma(f)^2$. \nGiven a choice of $f_n$, the standard estimator \\eqref{eqn:MC} is replaced with \n\\begin{equation}\\label{eqn:CV}\nI_{\\text{CV}}(f) = \\frac{1}{n} \\sum_{i=1}^n [f(\\b{x}^{(i)}) - f_n(\\b{x}^{(i)})] + \\underbrace{ \\int f_n(\\b{x})p(\\b{x}) \\mathrm{d}\\b{x} }_{(*)} ,\n\\end{equation}\nwhere $(*)$ is exactly computed.\nThis last requirement makes it challenging to develop control variates for general use, particularly in Bayesian statistics where often the density $p$ can only be accessed in a form that is un-normalised.\nIn the Bayesian context, \\citet{Assaraf1999,Mira2013} and \\citet{Oates2017} addressed this challenge by using $f_n = c_n + \\mathcal{L}g_n$ where $c_n \\in \\mathbb{R}$,\n$g_n$ is a user-chosen parametric or non-parametric function and $\\mathcal{L}$ is an operator, for example the Langevin Stein operator \\citep{Stein1972,Gorham2015}, that depends on $p$ through its gradient and satisfies $\\int (\\mathcal{L} g_n)(\\b{x})p(\\b{x}) \\mathrm{d}\\b{x} = 0$ under regularity conditions (see Lemma \\ref{lemma:assum}). \nConvergence of $I_{\\text{CV}}(f)$ to $I(f)$ has been studied under (strong) regularity conditions and, in particular (i) if $g_n$ is chosen parametrically, then in general $\\lim\\inf \\sigma(f - f_n)^2 > 0$ so that, even if asymptotic variance is reduced, convergence rates are unaffected; (ii) if $g_n$ is chosen in an appropriate non-parametric manner then $\\lim \\sup \\sigma(f - f_n)^2 = 0$ and a smaller number $O(\\epsilon^{-2 + \\delta})$, $0 < \\delta < 2$, of calls to $f$, $p$ and its gradient are required to achieve an approximation error of $O_P(\\epsilon)$ for the integral \\citep[see][]{Oates2019,Mijatovic2018,Barp2018,Belomestny2017,Belomestny2019a,Belomestny2019}.\nIn the parametric case $\\mathcal{L} g_n$ is called a \\emph{control variate} while in the non-parametric case it is called a \\emph{control functional}.\n\n\nPractical parametric approaches to the choice of $g_n$ have been well-studied in the Bayesian context, typically based on polynomial regression models \\citep{Assaraf1999,Mira2013,Papamarkou2014,Oates2016,Brosse2019}, but neural networks have also been proposed recently \\citep{Zhu2018,Si2020}.\nIn particular, existing control variates based on polynomial regression have the attractive property of being \\emph{semi-exact}, meaning that there is a well-characterized set of functions $f \\in \\mathcal{F}$ for which $f_n$ can be shown to exactly equal $f$ after a finite number of samples $n$ have been obtained.\nFor the control variates of \\cite{Assaraf1999} and \\cite{Mira2013} the set $\\mathcal{F}$ contains certain low order polynomials when $p$ is a Gaussian distribution on $\\mathbb{R}^d$.\nThose authors term their control variates ``zero variance'', but we prefer the term ``semi-exact'' since a general integrand $f$ will not be an element of $\\mathcal{F}$. \nRegardless of terminology, semi-exactness of the control variate is an appealing property because it implies that the approximation $I_{\\text{CV}}(f)$ to $I(f)$ is exact on $\\mathcal{F}$.\nIntuitively, the performance of the control variate method is related to the richness of the set $\\mathcal{F}$ on which it is exact.\nFor example, polynomial exactness of cubature rules is used to establish their high order convergence rates using a Taylor expansion argument \\citep[e.g.][Chapter~8]{Hildebrand1987}.\n\nThe development of non-parametric approaches to the choice of $g_n$ has to-date focused on kernel methods \\citep{Oates2017,Barp2018}, piecewise constant approximations \\citep{Mijatovic2018} and non-linear approximations based on selecting basis functions from a dictionary \\citep{Belomestny2017,South2019}. \nTheoretical analysis of non-parametric control variates was provided in the papers cited above, but compared to parametric methods, practical implementations of non-parametric methods are less well-developed.\n\nIn this paper we propose a semi-exact control functional method.\nThis constitutes the ``best of both worlds'', where at small $n$ the semi-exactness property promotes stability and robustness of the estimator $I_{\\text{CV}}(f)$, while at large $n$ the non-parametric regression component can be used to accelerate the convergence of $I_{\\text{CV}}(f)$ to $I(f)$. \nIn particular we argue that, in the Bernstein-von-Mises limit, the set $\\mathcal{F}$ on which our method is exact is precisely the set of low order polynomials, so that our method can be considered as an approximately polynomially-exact cubature rule developed for the Bayesian context.\nFurthermore, we establish a bias-correcting property, which guarantees the approximations produced using our method are consistent in certain settings where the Markov chain is not $p$-invariant. \n\nOur motivation comes from the approach to numerical integration due to \\cite{Sard1949}. \nMany numerical integration methods are based on constructing an approximation $f_n$ to the integrand $f$ that can be exactly integrated.\nIn this case the integral $I(f)$ is approximated using $(*)$ in \\eqref{eqn:CV}. \nIn Gaussian and related cubatures, the function $f_n$ is chosen in such a way that polynomial exactness is guaranteed \\citep[Section~1.4]{Gautschi2004}.\nOn the other hand, in kernel cubature and related approaches, $f_n$ is an element of a reproducing kernel Hilbert space chosen such that an error criterion is minimised \\citep{Larkin1970}.\nThe contribution of Sard was to combine these two concepts in numerical integration by choosing $f_n$ to enforce exactness on a low-dimensional space $\\mathcal{F}$ of functions and use the remaining degrees of freedom to find a minimum-norm interpolant to the integrand.\n\nThe remainder of the paper is structured as follows:\nSection~\\ref{subsec: saard's method} recalls Sard's approach to integration and Section~\\ref{subsec: stein operators} how Stein operators can be used to construct a control functional.\nThe proposed semi-exact control functional estimator $I_{\\text{SECF}}$ is presented in Section~\\ref{ssec:proposedBSS} and its polynomial exactness in the Bernstein-von-Mises limit is discussed in Section~\\ref{subsec: PE in BvM}.\nA closed-form expression for the resulting estimator $I_{\\text{CV}}$ is provided in Section~\\ref{subsec: computation}. \nThe statistical and computational efficiency of the proposed semi-exact control functional method is compared with that of existing control variates and control functionals using several simulation studies in Section~\\ref{sec: Empirical}. \nPractical diagnostics for the proposed method are established in Section~\\ref{sec: theory}.\nThe paper concludes with a discussion in Section~\\ref{sec: Discussion}.\n\n\n\n\\section{Methods} \\label{sec: methods}\n\nIn this section we provide background details on Sard's method and Stein operators before describing the semi-exact control functional method. \n\n\\subsection{Sard's Method} \\label{subsec: saard's method}\n\nMany popular methods for numerical integration are based on either (i) enforcing \\emph{exactness} of the integral estimator on a finite-dimensional set of functions $\\mathcal{F}$, typically a linear space of polynomials, or on (ii) integration of a \\emph{minimum-norm interpolant} selected from an infinite-dimensional set of functions $\\mathcal{H}$. \nIn each case, the result is a cubature method of the form\n\\begin{equation} \\label{eq:quadrature-generic}\n I_{\\text{NI}}(f) = \\sum_{i=1}^n w_i f(\\b{x}^{(i)}) \n\\end{equation}\nfor weights $\\{w_i\\}_{i=1}^n \\subset \\mathbb{R}$ and points $\\{\\b{x}^{(i)}\\}_{i=1}^n \\subset \\mathbb{R}^d$. Classical examples of methods in the former category are the univariate Gaussian quadrature rules~\\citep[Section~1.4]{Gautschi2004}, which are determined by the unique $\\{(w_i, \\b{x}^{(i)})\\}_{i=1}^n \\subset \\mathbb{R} \\times \\mathbb{R}^d$ such that $I_{\\text{NI}}(f) = I(f)$ whenever $f$ is a polynomial of order at most $2n-1$, and Clenshaw--Curtis rules~\\citep{ClenshawCurtis1960}. Methods of the latter category specify a suitable normed space $(\\mathcal{H}, \\|\\cdot\\|_{\\mathcal{H}})$ of functions, construct an interpolant $f_n \\in \\mathcal{H}$ such that \n\\begin{equation} \\label{eq:minimum-norm}\n f_n \\in \\argmin_{ h \\in \\mathcal{H}} \\big\\{ \\norm[0]{h}_{\\mathcal{H}} \\, \\colon \\, h(\\b{x}^{(i)}) = f(\\b{x}^{(i)}) \\text{ for } i = 1, \\ldots, n \\big\\}\n\\end{equation}\nand use the integral of $f_n$ to approximate the true integral. \nSpecific examples include splines~\\citep{Wahba1990} and kernel or Gaussian process based methods~\\citep{Larkin1970,OHagan1991,Briol2019}.\n\nIf the set of points $\\{\\b{x}^{(i)}\\}_{i=1}^n$ is fixed, the cubature method in \\eqref{eq:quadrature-generic} has $n$ degrees of freedom corresponding to the choice of the weights $\\{w_i\\}_{i=1}^n$.\nThe approach proposed by \\citet{Sard1949} is a hybrid of the two classical approaches just described, calling for $m \\leq n$ of these degrees of freedom to be used to ensure that $I_{\\text{NI}}(f)$ is exact for $f$ in a given $m$-dimensional linear function space $\\mathcal{F}$ and, if $m < n$, allocating the remaining $n-m$ degrees of freedom to select a minimal norm interpolant from a large class of functions $\\mathcal{H}$. \nThe approach of Sard is therefore exact for functions in the finite-dimensional set $\\mathcal{F}$ and, at the same time, suitable for the integration of functions in the infinite-dimensional set $\\mathcal{H}$.\nFurther background on Sard's method can be found in \\cite{Larkin1974} and \\cite{Karvonen2018}.\n\nHowever, it is difficult to implement Sard's method, or indeed any of the classical approaches just discussed, in the Bayesian context, since\n\\begin{enumerate}\n \\item the density $p$ can be evaluated pointwise only up to an intractable normalization constant;\n \\item to construct weights one needs to evaluate the integrals of basis functions of $\\mathcal{F}$ and of the interpolant $f_n$, which can be as difficult as evaluating the original integral.\n\\end{enumerate}\nTo circumvent these issues, in this paper we propose to combine Sard's approach to integration with Stein operators \\citep{Stein1972,Gorham2015}, thus eliminating the need to access normalization constants and to exactly evaluate integrals.\nA brief background on Stein operators is provided next.\n\n\n\n\n\\subsection{Stein Operators} \\label{subsec: stein operators}\n\nLet $\\cdot$ denote the dot product $\\b{a} \\cdot \\b{b}$ = $\\b{a}^\\top \\b{b}$, $\\nabla_{\\b{x}}$ denote the gradient $\\nabla_{\\b{x}} = [\\partial_{x_1},\\dots,\\partial_{x_d}]^\\top$ and $\\Delta_{\\b{x}}$ denote the Laplacian $\\Delta_{\\b{x}} = \\nabla_{\\b{x}} \\cdot \\nabla_{\\b{x}}$.\nLet $\\|\\b{x}\\| = (\\b{x} \\cdot \\b{x})^{1\/2}$ denote the Euclidean norm on~$\\mathbb{R}^d$.\nThe construction that enables us to realize Sard's method in the Bayesian context is the Langevin Stein operator $\\mathcal{L}$ \\citep{Gorham2015} on $\\mathbb{R}^d$, defined for sufficiently regular $g$ and $p$ as\n\\begin{align} \\label{eq:stein-operator}\n(\\mathcal{L} g)(\\b{x}) &= \\Delta_{\\b{x}} g(\\b{x}) + \\nabla_{\\b{x}} g(\\b{x}) \\cdot \\nabla_{\\b{x}} \\log{p(\\b{x})}.\n\\end{align}\nWe refer to $\\mathcal{L}$ as a Stein operator due to the use of equations of the form \\eqref{eq:stein-operator} (up to a simple substitution) in the method of \\citet{Stein1972} for assessing convergence in distribution and due to its property of producing functions whose integrals with respect to $p$ are zero under suitable conditions such as those described in Lemma~\\ref{lemma:assum}.\n\n\\begin{lemma}\\label{lemma:assum}\nIf $g \\colon \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is twice continuously differentiable, $\\log p : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is continuously differentiable and $\\| \\nabla_{\\b{x}} g(\\b{x}) \\| \\leq C \\| \\b{x} \\|^{-\\delta} p(\\b{x})^{-1}$ is satisfied for some $C \\in \\mathbb{R}$ and $\\delta > d -1$, then\n\\begin{equation*}\n\\int (\\mathcal{L} g)(\\b{x}) p(\\b{x}) \\mathrm{d} \\b{x}=0,\n\\end{equation*}\nwhere $\\mathcal{L}$ is the Stein operator in \\eqref{eq:stein-operator}. \n\\end{lemma}\n\n\\noindent\nThe proof is provided in Appendix \\ref{app:ProofZero}.\nAlthough our attention is limited to \\eqref{eq:stein-operator}, the choice of Stein operator is not unique and other Stein operators can be derived using the generator method of \\cite{Barbour1988} or using Schr\\\"odinger Hamiltonians \\citep{Assaraf1999}. Contrary to the standard requirements for a Stein operator, the operator $\\mathcal{L}$ in control functionals does not need to fully characterize convergence and, as a consequence, a broader class of functions $g$ can be considered than in more traditional applications of Stein's method \\citep{Stein1972}. \n\nIt follows that, if the conditions of Lemma \\ref{lemma:assum} are satisfied by $g_n : \\mathbb{R}^d \\rightarrow \\mathbb{R}$, the integral of a function of the form $f_n = c_n + \\mathcal{L} g_n$ is simply $c_n$, the constant. \nThe main challenge in developing control variates, or functionals, based on Stein operators is therefore to find a function $g_n$ such that the asymptotic variance $\\sigma(f - f_n)^2$ is small. \nTo explicitly minimize asymptotic variance, \\cite{Mijatovic2018,Belomestny2019} and \\cite{Brosse2019} restricted attention to particular Metropolis--Hastings or Langevin samplers for which asymptotic variance can be explicitly characterized.\nThe minimization of empirical variance has also been proposed and studied in the case where samples are independent \\citep{Belomestny2017} and dependent \\citep{Belomestny2019,Belomestny2019a}. \nFor an approach that is not tied to a particular Markov kernel, authors such as \\cite{Assaraf1999} and \\cite{Mira2013} proposed to minimize mean squared error along the sample path, which corresponds to the case of an independent sampling method.\nIn a similar spirit, the constructions in \\cite{Oates2017,Oates2019} and \\cite{Barp2018} were based on a minimum-norm interpolant, where the choice of norm is decoupled from the mechanism from where the points are sampled.\n\nIn this paper we combine Sard's approach to integration with a minimum-norm interpolant construction in the spirit of \\cite{Oates2017} and related work; this is described next.\n\n\n\n\\subsection{The Proposed Method}\\label{ssec:proposedBSS}\n\nIn this section we first construct an infinite-dimensional space $\\mathcal{H}$ and a finite-dimensional space $\\mathcal{F}$ of functions; these will underpin the proposed semi-exact control functional method.\n\nFor the infinite-dimensional component, let $k \\colon \\mathbb{R}^d \\times \\mathbb{R}^d \\to \\mathbb{R}$ be a positive-definite \\emph{kernel}, meaning that (i) $k$ is symmetric, with $k(\\b{x},\\b{y}) = k(\\b{y},\\b{x})$ for all $\\b{x},\\b{y} \\in \\mathbb{R}^d$, and (ii) the \\emph{kernel matrix} $[\\b{K}]_{i,j} = k(\\b{x}^{(i)}, \\b{x}^{(j)})$ is positive-definite for any distinct points $\\{ \\b{x}^{(i)} \\}_{i=1}^n \\subset \\mathbb{R}^d$ and any $n \\in \\mathbb{N}$.\nRecall that such a $k$ induces a unique \\emph{reproducing kernel Hilbert space} $\\mathcal{H}(k)$. \nThis is a Hilbert space that consists of functions $g \\colon \\mathbb{R}^d \\to \\mathbb{R}$ and is equipped with an inner product $\\langle \\cdot , \\cdot \\rangle_{\\mathcal{H}(k)}$. \nThe kernel $k$ is such that $k(\\cdot,\\b{x}) \\in \\mathcal{H}(k)$ for all $\\b{x} \\in \\mathbb{R}^d$ and it is \\emph{reproducing} in the sense that $\\langle g , k(\\cdot, \\b{x}) \\rangle_{\\mathcal{H}(k)} = g(\\b{x})$ for any $g \\in \\mathcal{H}(k)$ and $\\b{x} \\in \\mathbb{R}^d$.\nFor $\\b{\\alpha} \\in \\mathbb{N}_0^d$ the multi-index notation $\\b{x}^{\\b{\\alpha}} := x_1^{\\alpha_1} \\cdots x_d^{\\alpha_d}$ and $|\\bm{\\alpha}| = \\alpha_1 + \\dots + \\alpha_d$ will be used.\nIf $k$ is twice continuously differentiable in the sense of \\citet[][Definition~4.35]{Steinwart2008}, meaning that the derivatives \n\\begin{equation*}\n \\partial_{\\b{x}}^{\\b{\\alpha}} \\partial_{\\b{y}}^{\\b{\\alpha}} k(\\b{x}, \\b{y}) = \\frac{\\partial^{2\\abs[0]{\\b{\\alpha}}}}{\\partial \\b{x}^{\\b{\\alpha}} \\partial \\b{y}^{\\b{\\alpha}}} k(\\b{x}, \\b{y})\n\\end{equation*}\nexist and are continuous for every multi-index $\\b{\\alpha} \\in \\mathbb{N}_0^d$ with $\\abs[0]{ \\b{\\alpha}} \\leq 2$, then \n\\begin{equation} \\label{eq:stein-kernel}\nk_0(\\b{x}, \\b{y}) = \\mathcal{L}_{\\b{x}} \\mathcal{L}_{\\b{y}} k(\\b{x}, \\b{y}),\n\\end{equation}\nwhere $\\mathcal{L}_{\\b{x}}$ stands for application of the Stein operator defined in~\\eqref{eq:stein-operator} with respect to variable $\\b{x}$, is a well-defined and positive-definite kernel~\\citep[][Lemma 4.34]{Steinwart2008}. \nThe kernel in \\eqref{eq:stein-kernel} can be written as\n\\begin{equation} \\label{eq:stein-kernel2}\n\\begin{split}\nk_0(\\b{x}, \\b{y}) ={}& \\Delta_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) + \\b{u}(\\b{x})^\\top \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y}) \\\\\n&+ \\b{u}(\\b{y})^\\top \\nabla_{\\b{y}} \\Delta_{\\b{x}} k(\\b{x}, \\b{y}) + \\b{u}(\\b{x})^\\top \\big[ \\nabla_{\\b{x}} \\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y}) \\big] \\b{u}(\\b{y}),\n\\end{split}\n\\end{equation}\nwhere $\\nabla_{\\b{x}} \\nabla_{\\b{y}}^\\top k(\\b{x}, \\b{y})$ is the $d \\times d$ matrix with entries $[\\nabla_{\\b{x}} \\nabla_{\\b{y}}^\\top k(\\b{x}, \\b{y})]_{i,j} = \\partial_{x_i} \\partial_{y_j} k(\\b{x}, \\b{y})$ and $\\b{u}(\\b{x}) = \\nabla_{\\b{x}} \\log p(\\b{x})$. \nIf $k$ is radial then \\eqref{eq:stein-kernel2} can be simplified; see Appendix~\\ref{appendix:kernels}.\nLemma~\\ref{lem:boundary-kernel} establishes conditions under which the functions $\\b{x} \\mapsto k_0(\\b{x},\\b{y})$, $\\b{y} \\in \\mathbb{R}^d$, and hence elements of the Hilbert space $\\mathcal{H}(k_0)$ reproduced by $k_0$, have zero integral.\nLet $\\|\\b{M}\\|_{\\text{OP}} = \\sup_{\\|\\b{x}\\| = 1} \\|\\b{M} \\b{x}\\|$ denote the operator norm of a matrix $\\b{M} \\in \\mathbb{R}^{d \\times d}$.\n\n\\begin{lemma} \\label{lem:boundary-kernel} \nIf $k \\colon \\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is twice continuously differentiable in each argument, $\\log p : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is continuously differentiable, $\\| \\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y}) \\|_{\\textsc{OP}} \\leq C(\\b{y}) \\| \\b{x} \\|^{-\\delta} p(\\b{x})^{-1}$ and $\\| \\nabla_{\\b{x}}\\Delta_{\\b{y}}k(\\b{x},\\b{y}) \\| \\leq C(\\b{y}) \\| \\b{x} \\|^{-\\delta} p(\\b{x})^{-1}$ are satisfied for some $C: \\mathbb{R}^d \\rightarrow (0,\\infty)$, and $\\delta > d -1$, then \n\\begin{equation} \\label{eq: kernel ints to 0}\n \\int k_0(\\b{x}, \\b{y}) p(\\b{x}) \\dif \\b{x} = 0\n\\end{equation}\nfor every $\\b{y} \\in \\mathbb{R}^d$, where $k_0$ is defined in \\eqref{eq:stein-kernel}.\n\\end{lemma}\n\n\\noindent The proof is provided in Appendix \\ref{app:ProofZeroKernel}.\nThe infinite-dimensional space $\\mathcal{H}$ used in this work is exactly the reproducing kernel Hilbert space $\\mathcal{H}(k_0)$.\nThe basic mathematical properties of $k_0$ and the Hilbert space it reproduces are contained in Appendix \\ref{app: basic results on H} and these can be used to inform the selection of an appropriate kernel.\n\nFor the finite-dimensional component, let $\\Phi$ be a linear space of twice-continuously differentiable functions with dimension $m-1$, $m \\in \\mathbb{N}$, and a basis $\\{\\phi_i\\}_{i=1}^{m-1}$. \nDefine then the space obtained by applying the differential operator~\\eqref{eq:stein-operator} to $\\Phi$ as $\\mathcal{L} \\Phi = \\mathrm{span}\\{ \\mathcal{L} \\phi_1, \\ldots, \\mathcal{L} \\phi_{m-1} \\}$.\nIf the pre-conditions of Lemma \\ref{lemma:assum} are satisfied for each basis function $g = \\phi_i$ then linearity of the Stein operator implies that $ \\int (\\mathcal{L}\\phi) \\mathrm{d}p = 0$ for every $\\phi \\in \\Phi$.\nTypically we will select $\\Phi = \\mathcal{P}^r$ as the polynomial space $\\mathcal{P}^r = \\mathrm{span} \\{ \\b{x}^{\\b{\\alpha}} : \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, 0 < \\abs[0]{\\b{\\alpha}} \\leq r \\}$ for some non-negative integer $r$. \nNote that constant functions are excluded from $\\mathcal{P}^r$ since they are in the null space of $\\mathcal{L}$; when required we let $\\mathcal{P}_0^r = \\text{span}\\{1\\} \\oplus \\mathcal{P}^r$ denote the larger space with the constant functions included. \nThe finite-dimensional space $\\mathcal{F}$ is then taken to be $ \\mathcal{F} = \\mathrm{span} \\{1\\} \\oplus \\mathcal{L} \\Phi = \\mathrm{span} \\{1, \\mathcal{L} \\phi_1, \\ldots \\mathcal{L} \\phi_{m-1} \\}$.\n\nIt is now possible to state the proposed method.\nFollowing Sard, we approximate the integrand $f$ with a function $f_n$ that interpolates $f$ at the locations $\\b{x}^{(i)}$, is exact on the $m$-dimensional linear space $\\mathcal{F}$, and minimises a particular (semi-)norm subject to the first two constraints. \nIt will occasionally be useful to emphasise the dependence of $f_n$ on $f$ using the notation $f_n(\\cdot) = f_n(\\cdot; f)$. \nThe proposed interpolant takes the form\n\\begin{equation} \\label{eq:interpolant}\n f_n(\\b{x}) = b_1 + \\sum_{i=1}^{m-1} b_{i+1} (\\mathcal{L} \\phi_i) (\\b{x}) + \\sum_{i=1}^n a_i k_0(\\b{x}, \\b{x}^{(i)}), \n\\end{equation}\nwhere the coefficients $\\b{a} = (a_1,\\ldots,a_n) \\in \\mathbb{R}^n$ and $\\b{b} = (b_1,\\ldots,b_m) \\in \\mathbb{R}^m$ are selected such that the following two conditions hold:\n\\begin{enumerate}\n \\item $f_n(\\b{x}^{(i)} ; f) = f(\\b{x}^{(i)})$ for $i = 1, \\ldots, n$ (interpolation);\n \\item $f_n(\\cdot;f) = f(\\cdot)$ whenever $f \\in \\mathcal{F}$ (semi-exactness).\n\\end{enumerate}\nSince $\\mathcal{F}$ is $m$-dimensional, these requirements correspond to the total of $n+m$ constraints.\nUnder weak conditions, discussed in Section \\ref{subsec: computation}, the total number of degrees of freedom due to selection of $\\b{a}$ and $\\b{b}$ is equal to $n+m$ and the above constraints can be satisfied. \nFurthermore, the corresponding function $f_n$ can be shown to minimise a particular (semi-)norm on a larger space of functions, subject to the interpolation and exactness constraints \\citep[to limit scope, we do not discuss this characterisation further but the semi-norm is defined in \\eqref{eq: semi norm mt} and the reader can find full details in][Theorem 13.1]{Wendland2004}. \nFigure~\\ref{fig:example-interpolation} illustrates one such interpolant.\nThe proposed estimator of the integral is then\n\\begin{equation} \\label{eq:BSS-def}\n I_{\\textsc{SECF}}(f) = \\int f_n(\\b{x}) p(\\b{x}) \\dif \\b{x} ,\n\\end{equation}\na special case of \\eqref{eqn:CV} (the interpolation condition causes the first term in \\eqref{eqn:CV} to vanish) that we call a \\emph{semi-exact control functional}.\nThe following is immediate from \\eqref{eq:interpolant} and \\eqref{eq:BSS-def}:\n\n\\begin{corollary} \\label{cor: well defined}\nUnder the hypotheses of Lemma \\ref{lemma:assum} for each $g = \\phi_i$, $i = 1,\\dots,m-1$, and Lemma \\ref{lem:boundary-kernel}, it holds that, whenever the estimator $I_{\\textsc{SECF}}(f)$ is well-defined, $I_{\\textsc{SECF}}(f) = b_1$, where $b_1$ is the constant term in \\eqref{eq:interpolant}.\n\\end{corollary}\n\n\\noindent The earlier work of \\cite{Assaraf1999} and \\cite{Mira2013} corresponds to $\\b{a} = \\b{0}$ and $\\b{b} \\neq \\b{0}$, while setting $\\b{b} = \\b{0}$ in~\\eqref{eq:interpolant} and ignoring the semi-exactness requirement recovers the unique minimum-norm interpolant in the Hilbert space $\\mathcal{H}(k_0)$ where $k_0$ is reproducing, in the sense of~\\eqref{eq:minimum-norm}.\nThe work of \\cite{Oates2017} corresponds to $b_i = 0$ for $i = 2,\\dots,m$.\nIt is therefore clear that the proposed approach is a strict generalization of existing work and can be seen as a compromise between semi-exactness and minimum-norm interpolation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/example-fig.pdf}\n \\caption{\\emph{Left}: The interpolant $f_n$ from \\eqref{eq:interpolant} at $n=5$ points to the function $f(x) = \\sin(0.5\\pi (x-1)) + \\exp(-(x-0.5)^2)$ for the Gaussian density $p(x) = \\mathcal{N}(x; 0 ,1)$. The interpolant uses the Gaussian kernel $k(x,y) = \\exp(-(x-y)^2)$ and a polynomial parametric basis with $r=2$. \\emph{Center} \\& \\emph{right}: Two translates $k_0(\\cdot,y)$, $y \\in \\{0,1\\}$, of the kernel~\\eqref{eq:stein-kernel}.\n }\n \\label{fig:example-interpolation}\n\\end{figure}\n\n\n\n\n\\subsection{Polynomial Exactness in the Bernstein-von-Mises Limit} \\label{subsec: PE in BvM}\n\nA central motivation for our approach is the prototypical case where $p$ is the density of a posterior distribution $P_{\\b{x} \\mid y_1,\\dots,y_N}$ for a latent variable $\\b{x}$ given independent and identically distributed data $y_1,\\dots,y_N \\sim P_{y_1,\\dots,y_N \\mid \\b{x}}$.\nUnder regularity conditions discussed in Section 10.2 of \\cite{VanDerVaart1998}, the Bernstein-von-Mises theorem states that\n\\begin{equation*}\n\\Big\\| P_{\\b{x} \\mid y_1,\\dots,y_N} - \\mathcal{N}\\big(\\hat{\\b{x}}_N , N^{-1} I(\\hat{\\b{x}}_N)^{-1}\\big) \\Big\\|_{\\text{TV}} \\rightarrow 0\n\\end{equation*}\nwhere $\\hat{\\b{x}}_N$ is a maximum likelihood estimate for $\\b{x}$, $I(\\b{x})$ is the Fisher information matrix evaluated at $\\b{x}$, $\\|\\cdot\\|_{\\text{TV}}$ is the total variation norm and convergence is in probability as $N \\rightarrow \\infty$ with respect to the law $P_{y_1,\\dots,y_N \\mid \\b{x}}$ of the dataset.\nIn this limit, polynomial exactness of the proposed method can be established.\nIndeed, for a Gaussian density $p$ with mean $\\hat{\\b{x}}_N \\in \\mathbb{R}^d$ and precision $N I(\\hat{\\b{x}}_N)$, if $\\phi(\\b{x}) = \\b{x}^{\\b{\\alpha}}$ for a multi-index $\\b{\\alpha} \\in \\mathbb{N}_0^d$, then\n\\begin{equation*}\n (\\mathcal{L} \\phi)(\\b{x}) = \\sum_{i=1}^d \\alpha_i\\left[ (\\alpha_i-1) x_i^{\\alpha_i-2} -\\frac{N}{2} P_i(\\b{x}) x_i^{\\alpha_i-1} \\right] \\prod_{j \\neq i} x_j^{\\alpha_j},\n\\end{equation*}\nwhere $P_i(\\b{x}) = 2\\b{e}_i^\\top I(\\hat{\\b{x}}_N) (\\b{x} - \\hat{\\b{x}}_N)$ and $\\b{e}_i$ is the $i$th coordinate vector in $\\mathbb{R}^d$. \nThis allows us to obtain the following result, whose proof is provided in Appendix~\\ref{app: proof of polyexact}:\n\\begin{lemma}\\label{lem: polyexact}\nConsider the Bernstein-von-Mises limit and suppose that the Fisher information matrix $I(\\hat{\\b{x}}_N)$ is non-singular.\nThen, for the choice $\\Phi = \\mathcal{P}^r$, $r \\in \\mathbb{N}$, the estimator $I_\\textsc{SECF}$ is exact on $\\mathcal{F} = \\mathcal{P}_0^r$.\n\\end{lemma}\nThus the proposed estimator is polynomially exact up to order $r$ in the Bernstein-von-Mises limit.\nAt finite $N$, when the limit has not been reached, the above argument can only be expected to approximately hold.\n\n\n\n\n\n\n\n\\subsection{Computation for the Proposed Method} \\label{subsec: computation}\n\nThe purpose of this section is to discuss when the proposed estimator is well-defined and how it can be computed.\nDefine the $n \\times m$ matrix\n\\begin{equation}\n \\b{P} = \\begin{bmatrix} 1 & \\mathcal{L} \\phi_1(\\b{x}^{(1)}) & \\cdots & \\mathcal{L} \\phi_{m-1}(\\b{x}^{(1)}) \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 1 & \\mathcal{L} \\phi_1 (\\b{x}^{(n)}) & \\cdots & \\mathcal{L} \\phi_{m-1}( \\b{x}^{(n)}) \\end{bmatrix}, \\label{eq: def for P}\n\\end{equation}\nwhich is sometimes called a \\emph{Vandermonde} (or \\emph{alternant}) matrix corresponding to the linear space $\\mathcal{F}$.\nLet $\\b{K}_0$ be the $n \\times n$ matrix with entries $[\\b{K}_0]_{i,j} = k_0(\\b{x}^{(i)}, \\b{x}^{(j)})$ and let $\\b{f}$ be the $n$-dimensional column vector with entries $[\\b{f}]_i = f(\\b{x}^{(i)})$. \n\n\\begin{lemma} \\label{lem: comput etc}\nLet the $n \\geq m$ points $\\b{x}^{(i)}$ be distinct and $\\mathcal{F}$-\\emph{unisolvent}, meaning that the matrix $\\b{P}$ in \\eqref{eq: def for P} has full rank.\nLet $k_0$ be a positive-definite kernel for which \\eqref{eq: kernel ints to 0} is satisfied.\nThen $I_{\\textsc{SECF}}(f)$ is well-defined and the coefficients $\\b{a}$ and $\\b{b}$ are given by the solution of the linear system\n\\begin{equation} \\label{eq:block-system}\n \\begin{bmatrix} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{bmatrix} \\begin{bmatrix} \\b{a} \\\\ \\b{b} \\end{bmatrix} = \\begin{bmatrix} \\b{f} \\\\ \\b{0} \\end{bmatrix}.\n\\end{equation}\nIn particular,\n\\begin{equation}\\label{eqn:BSS}\n I_{\\textsc{SECF}}(f) = \\b{e}_1^\\top ( \\b{P}^\\top \\b{K}_0^{-1} \\b{P} )^{-1} \\b{P}^\\top \\b{K}_0^{-1} \\b{f}.\n\\end{equation}\n\\end{lemma}\n\n\\noindent The proof is provided in Appendix \\ref{app:Computation proof}.\nNotice that \\eqref{eqn:BSS} is a linear combination of the values in~$\\b{f}$ and therefore the proposed estimator is recognized as a cubature method of the form \\eqref{eq:quadrature-generic} with weights \n\\begin{equation} \\label{eq: weights}\n\\b{w} = \\b{K}_0^{-1} \\b{P} ( \\b{P}^\\top \\b{K}_0^{-1} \\b{P} )^{-1} \\b{e}_1.\n\\end{equation}\n\nThe requirement in Lemma \\ref{lem: comput etc} for the $\\b{x}^{(i)}$ to be distinct precludes, for example, the direct use of Metropolis--Hastings output. \nHowever, as emphasized in \\cite{Oates2017} for control functionals and studied further in \\cite{liu2017black,Hodgkinson2020}, the consistency of $I_{\\text{SECF}}$ does \\emph{not} require that the Markov chain is $p$-invariant.\nIt is therefore trivial to, for example, filter out duplicate states from Metropolis--Hastings output.\n\nThe solution of linear systems of equations defined by an $n \\times n$ matrix $\\b{K}_0$ and an $m \\times m$ matrix $\\b{P}^\\top \\b{K}_0^{-1} \\b{P}$ entails a computational cost of $O(n^3 + m^3)$.\nIn some situations this cost may yet be smaller than the cost associated with evaluation of $f$ and $p$, but in general this computational requirement limits the applicability of the method just described.\nIn Appendix \\ref{app: Nystrom} we therefore propose a computationally efficient approximation, $I_{\\text{ASECF}}$, to the full method, based on a combination of the Nystr\\\"{o}m approximation \\citep{williams2001using} and the well-known conjugate gradient method, inspired by the recent work of \\cite{rudi2017falkon}.\nAll proposed methods are implemented in the \\verb+R+ package \\verb+ZVCV+ \\citep{rZVCV}.\n\n\n\n\n\n\\section{Empirical Assessment}\\label{sec: Empirical}\n\nA detailed comparison of existing and proposed control variate and control functional techniques was performed.\nThree examples were considered; Section \\ref{sec: gaussian assessment} considers a Gaussian target, representing the Bernstein-von-Mises limit; Section \\ref{subsec: capture} considers a setting where non-parametric control functional methods perform well; Section \\ref{subsec: sonar} considers a setting where parametric control variate methods are known to be successful. \nIn each case we determine whether or not the proposed semi-exact control functional method is competitive with the state-of-the-art.\n\nSpecifically, we compared the following estimators, which are all instances of $I_{\\text{CV}}$ in \\eqref{eqn:CV} for a particular choice of $f_n$, which may or may not be an interpolant:\n\\begin{itemize}\n \\item Standard Monte Carlo integration, \\eqref{eqn:MC}, based on Markov chain output.\n \\item The control functional estimator recommended in \\cite{Oates2017}, $I_{\\text{CF}}(f) = (\\b{1}^\\top\\b{K}_0^{-1}\\b{1})^{-1} \\b{1}^{\\top} \\b{K}_0^{-1}\\b{f}$.\n \\item The ``zero variance'' polynomial control variate method of \\cite{Assaraf1999} and \\cite{Mira2013}, $I_{\\text{ZV}}(f) = \\b{e}_1^\\top (\\b{P}^\\top \\b{P})^{-1}\\b{P}^\\top \\b{f}$.\n \\item The ``auto zero variance'' approach of \\citet{South2019}, which uses 5-fold cross validation to automatically select (a) between the ordinary least squares solution $I_{\\text{ZV}}$ and an $\\ell_1$-penalised alternative (where the penalisation strength is itself selected using 10-fold cross-validation within the test dataset), and (b) the polynomial order.\n \\item The proposed semi-exact control functional estimator, \\eqref{eqn:BSS}.\n \\item An approximation, $I_{\\text{ASECF}}$, of \\eqref{eqn:BSS} based on the Nystr\\\"{o}m approximation and the conjugate gradient method, described in Appendix \\ref{app: Nystrom}.\n\\end{itemize}\nOpen-source software for implementing all of the above methods is available in the \\texttt{R} package \\texttt{ZVCV} \\citep{rZVCV}. \nThe same sets of $n$ samples were used for all estimators, in both the construction of $f_n$ and the evaluation of $I_{\\text{CV}}$. \nFor methods where there is a fixed polynomial basis we considered only orders $r=1$ and $r=2$, following the recommendation of \\cite{Mira2013}.\nFor kernel-based methods, duplicate values of $\\b{x}_i$ were removed (as discussed in Section~\\ref{subsec: computation}) and Frobenius regularization was employed whenever the condition number of the kernel matrix $\\b{K}_0$ was close to machine precision \\citep{Higham1988}.\nSeveral choices of kernel were considered, but for brevity in the main text we focus on the rational quadratic kernel $k(\\b{x},\\b{y}; \\lambda) = (1+\\lambda^{-2} \\|\\b{x}-\\b{y}\\|^2)^{-1}$. \nThis kernel was found to provide the best performance across a range of experiments; a comparison to the Mat\\'{e}rn and Gaussian kernels is provided in Appendix~\\ref{app: effect of the kernel}.\nThe parameter $\\lambda$ was selected using $5$-fold cross-validation, based again on performance across a spectrum of experiments; a comparison to the median heuristic \\citep{Garreau2017} is presented in Appendix \\ref{app: effect of the kernel}.\n\nTo ensure that our assessment is practically relevant, the estimators were compared on the basis of both statistical and computational efficiency relative to the standard Monte Carlo estimator. \nStatistical efficiency $\\mathcal{E}(I_\\text{CV})$ and computational efficiency $\\mathcal{C}(I_\\text{CV})$ of an estimator $I_\\text{CV}$ of the integral $I$ are defined as\n\\begin{eqnarray*}\n\\mathcal{E}(I_\\text{CV}) = \\frac{\\mathbb{E}\\left[ (I_{\\text{MC}} - I)^2 \\right]}{\\mathbb{E}\\left[ (I_\\text{CV} - I)^2 \\right]}, \\qquad\n\\mathcal{C}(I_\\text{CV}) = \\mathcal{E}(I_\\text{CV}) \\frac{T_{\\text{MC}}}{T_\\text{CV}}\n\\end{eqnarray*}\nwhere $T_\\text{CV}$ denotes the combined wall time for sampling the $\\b{x}^{(i)}$ and computing the estimator $I_\\text{CV}$.\nFor the results reported below, $\\mathcal{E}$ and $\\mathcal{C}$ were approximated using averages $\\hat{\\mathcal{E}}$ and $\\hat{\\mathcal{C}}$ over 100 realizations of the Markov chain output.\n\n\\subsection{Gaussian Illustration} \\label{sec: gaussian assessment}\n\nHere we consider a Gaussian integral that serves as an analytically tractable caricature of a posterior near to the Bernstein-von-Mises limit.\nThis enables us to assess the effect of the sample size $n$ and dimension $d$ on each estimator, in a setting that is not confounded by the idiosyncrasies of any particular MCMC method. \nSpecifically, we set $p(\\b{x}) = (2\\pi)^{-d\/2} \\exp(-\\|\\b{x}\\|^2 \/ 2)$ where $\\b{x} \\in \\mathbb{R}^d$. \nFor the parametric component we set $\\Phi = \\mathcal{P}^r$, so that (from Lemma \\ref{lem: polyexact}) $I_{\\text{SECF}}$ is exact on polynomials of order at most $r$; this holds also for $I_{\\text{ZV}}$.\nFor the integrand $f : \\mathbb{R}^d \\rightarrow \\mathbb{R}$, $d \\geq 3$, we took\n\\begin{equation}\\label{eqn:Gaussian_combin}\nf(\\b{x}) = 1 + x_2 + 0.1 x_1 x_2 x_3 + \\sin(x_1) \\exp[-(x_2 x_3)^2]\n\\end{equation}\nin order that the integral is analytically tractable ($I(f) = 1$) and that no method will be exact. \n\nFigure \\ref{fig:Gaussian} displays the statistical efficiency of each estimator for $10 \\leq n \\leq 1000$ and $3 \\leq d \\leq 100$. \nComputational efficiency is not shown since exact sampling from $p$ in this example is trivial. \nThe proposed semi-exact control functional method performs consistently well compared to its competitors for this non-polynomial integrand. \nUnsurprisingly, the best improvements are for high $n$ and small $d$, where the proposed method results in a statistical efficiency over 100 times better than the baseline estimator and up to 5 times better than the next best method. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/Gaussian_both_kernelrq2.pdf}\n \\caption{Gaussian example (a) estimated \\textit{statistical efficiency} with $d=4$ and (b) estimated \\textit{statistical efficiency} with $n=1000$ for integrand \\eqref{eqn:Gaussian_combin}.\n }\n \\label{fig:Gaussian}\n\\end{figure}\n\n\n\\subsection{Capture-Recapture Example} \\label{subsec: capture}\n\nThe two remaining examples, here and in Section \\ref{subsec: sonar}, are applications of Bayesian statistics described in \\cite{South2019}.\nIn each case the aim is to estimate expectations with respect to a posterior distribution $P_{\\b{x} \\mid \\b{y}}$ of the parameters $\\b{x}$ of a statistical model based on $\\b{y}$, an observed dataset. \nSamples $\\b{x}^{(i)}$ were obtained using the Metropolis-adjusted Langevin algorithm \\citep{Roberts1996}, which is a Metropolis-Hastings algorithm with proposal $\\mathcal{N} ( \\b{x}^{(i-1)} + h^2 \\frac{1}{2}\\b{\\Sigma}\\nabla_{\\b{x}} \\log P_{\\b{x} \\mid \\b{y}}(\\b{x}^{(i-1)} \\mid \\b{y}) , h^2 \\b{\\Sigma})$. \nStep sizes of $h=0.72$ for the capture-recapture example and $h=0.3$ for the sonar example (see Section \\ref{subsec: sonar}) were selected and an empirical approximation of the posterior covariance matrix was used as the pre-conditioner $\\b{\\Sigma} \\in \\mathbb{R}^{d\\times d}$. \nSince the proposed method does not rely on the Markov chain being $P_{\\b{x} \\mid \\b{y}}$-invariant we also repeated these experiments using the unadjusted Langevin algorithm \\citep{Parisi1981,Ermak1975}, with similar results reported in Appendix~\\ref{app: ULA results}.\n\n\n\nIn this first example, a Cormack--Jolly--Seber capture-recapture model \\citep{Lebreton1992} is used to model data on the capture and recapture of the bird species \\textit{Cinclus Cinclus} \\citep{Marzolin1988}. The integrands of interest are the marginal posterior means $f_i(\\b{x}) = x_i$ for $i=1,\\ldots,11$, where $\\b{x}=(\\phi_1,\\ldots,\\phi_5,p_2,\\ldots,p_6,\\phi_6 p_7)$, $\\phi_j$ is the probability of survival from year $j$ to $j+1$ and $p_j$ is the probability of being captured in year $j$. \nThe likelihood is\n\\begin{align*}\n\\ell(\\b{y}|\\b{x}) \\propto \\prod_{i=1}^{6}\\chi_i^{d_i} \\prod_{k=i+1}^{7} \\left[ \\phi_i p_k \\prod_{m=i+1}^{k-1} \\phi_m (1-p_m) \\right]^{y_{ik}},\n\\end{align*}\nwhere $d_i=D_i-\\sum_{k=i+1}^{7}y_{ik}$, $\\chi_i=1-\\sum_{k=i+1}^{7} \\phi_i p_k \\prod_{m=i+1}^{k-1} \\phi_m (1-p_m)$ and the data $\\b{y}$ consists of $D_i$, the number of birds released in year $i$, and $y_{ik}$, the number of animals caught in year $k$ out of the number released in year $i$, for $i=1,\\ldots,6$ and $k=2,\\ldots,7$. Following \\citet{South2019}, parameters are transformed to the real line using $\\tilde{x}_j=\\log(x_j\/(1-x_j))$ and the adjusted prior density for $\\tilde{x}_j$ is $\\exp(\\tilde{x}_j)\/(1+\\exp(\\tilde{x}_j))^2$, for $j=1,\\ldots,11$.\n\n\n\\citet{South2019} found that non-parametric methods outperform standard parametric methods for this 11-dimensional example. \nThe estimator $I_{\\text{SECF}}$ combines elements of both approaches, so there is interest in determining how the method performs.\nIt is clear from Figure~\\ref{fig:Recapture} that all variance reduction approaches are helpful in improving upon the vanilla Monte Carlo estimator in this example. The best improvement in terms of statistical and computational efficiency is offered by $I_{\\text{SECF}}$, which also has similar performance to $I_{\\text{CF}}$. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/Recapture_both_combined_kernelrq2_MALA.pdf}\n \\caption{Capture-recapture example (a) estimated \\textit{statistical efficiency} and (b) estimated \\textit{computational efficiency}. Efficiency here is reported as an average over the 11 expectations of interest.\n }\n \\label{fig:Recapture}\n\\end{figure}\n\n\\subsection{Sonar Example} \\label{subsec: sonar}\n\nOur final application is a 61-dimensional logistic regression example using data from \\citet{Gorman1988} and \\citet{Dheeru2017}. To use standard regression notation, the parameters are denoted $\\b{\\beta} \\in \\mathbb{R}^{61}$, the matrix of covariates in the logistic regression model is denoted $\\b{X} \\in \\mathbb{R}^{208 \\times 61}$ where the first column is all 1's to fit an intercept and the response is denoted $\\b{y} \\in \\mathbb{R}^{208}$. In this application, $\\b{X}$ contains information related to energy frequencies reflected from either a metal cylinder ($y=1$) or a rock ($y=0$). \nThe log likelihood for this model is\n\\begin{equation*}\\label{eqn:logistic}\n\\log \\ell(\\b{y},\\b{X}|\\b{\\beta}) = \\sum_{i=1}^{208} \\left(y_i \\b{X}_{i,\\cdot}\\b{\\beta} - \\log (1+\\exp(\\b{X}_{i,\\cdot}\\b{\\beta}) )\\right).\n\\end{equation*}\nWe use a $\\mathcal{N}(0,5^2)$ prior for the predictors (after standardising to have standard deviation of 0.5) and $\\mathcal{N}(0,20^2)$ prior for the intercept, following \\citet{South2019,Chopin2017}, \nbut we focus on estimating the more challenging integrand $f(\\b{\\beta}) = ( 1+\\exp(-\\tilde{\\b{X}}\\b{\\beta}) )^{-1}$, which can be interpreted as the probability that observed covariates $\\tilde{\\b{X}}$ emanate from a metal cylinder. \nThe gold standard of $I \\approx 0.4971$ was obtained from a 10 million iteration Metropolis-Hastings \\citep{Hastings1970} run with multivariate normal random walk proposal.\n\nFigure \\ref{fig:Sonar} illustrates the statistical and computational efficiency of estimators for various $n$ in this example. \nIt is interesting to note that $I_{\\text{SECF}}$ and $I_{\\text{ASECF}}$ offer similar statistical efficiency to $I_{\\text{ZV}}$, especially given the poor relative performance of $I_{\\text{CF}}$. Since it is inexpensive to obtain the $m$ samples using the Metropolis-adjusted Langevin algorithm in this example, $I_{\\text{ZV}}$ and $I_{\\text{ASECF}}$ are the only approaches which offer improvements in computational efficiency over the baseline estimator for the majority of $n$ values considered, and even in these instances the improvements are marginal.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/Sonar_both_kernelrq2_MALA.pdf}\n \\caption{Sonar example (a) estimated \\textit{statistical efficiency} and (b) estimated \\textit{computational efficiency}.\n }\n \\label{fig:Sonar}\n\\end{figure}\n\n\n\\section{Theoretical Properties and Convergence Assessment} \\label{sec: theory}\n\nIn this section we provide a convergence result and discuss diagnostics that can be used to monitor the performance of the proposed method.\nTo this end, we introduce the semi-norm\n\\begin{align}\n|f|_{k_0,\\mathcal{F}} = \\inf_{\\substack{f = h + g \\\\ h \\in \\mathcal{F}, \\, g \\in \\mathcal{H}(k_0) }} \\|g\\|_{\\mathcal{H}(k_0)} , \\label{eq: semi norm mt}\n\\end{align}\nwhich is well-defined when the infimum is taken over a non-empty set, otherwise we define $|f|_{k_0,\\mathcal{F}} = \\infty$.\n\n\n\\subsection{Finite Sample Error and a Practical Diagnostic}\n\nThe following proposition provides a finite sample error bound:\n\\begin{proposition} \\label{lem: KSD bound}\nLet the hypotheses of Corollary \\ref{cor: well defined} hold.\nThen the integration error satisfies the bound\n\\begin{equation}\n \\lvert I(f) - I_{\\textsc{SECF}}(f) \\rvert \\leq |f|_{k_0,\\mathcal{F}} \\, (\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2} \\label{eq: error bound KSD}\n\\end{equation}\nwhere the weights $\\b{w}$, defined in \\eqref{eq: weights}, satisfy\n\\begin{equation*}\n \\b{w} = \\argmin_{ \\b{v} \\in \\mathbb{R}^n} (\\b{v}^\\top \\b{K}_0 \\b{v})^{1\/2} \\quad \\text{ s.t. } \\quad \\sum_{i=1}^n v_i h(\\b{x}^{(i)}) = \\int h(\\b{x}) p(\\b{x}) \\dif \\b{x} \\quad \\text{ for every } h \\in \\mathcal{F}.\n\\end{equation*}\n\\end{proposition}\n\nThe proof is provided in Appendix \\ref{app: finite bound}.\nThe first quantity $|f|_{k_0,\\mathcal{F}}$ in \\eqref{eq: error bound KSD} can be approximated by $|f_n|_{k_0,\\mathcal{F}}$ when $f_n$ is a reasonable approximation for $f$ and this can in turn can be bounded as $|f_n|_{k_0,\\mathcal{F}} \\leq (\\b{a}^\\top \\b{K}_0 \\b{a})^{1\/2}$. \nThe finiteness of $|f|_{k_0,\\mathcal{F}}$ ensures the existence of a solution to the Stein equation, sufficient conditions for which are discussed in \\cite{Mackey2016,Si2020}. \nThe second quantity $(\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2}$ in \\eqref{eq: error bound KSD} is computable and is recognized as a \\emph{kernel Stein discrepancy} between the empirical measure $\\sum_{i=1}^n w_i \\delta(\\b{x}^{(i)})$ and the distribution whose density is $p$, based on the Stein operator $\\mathcal{L}$ \\citep{Chwialkowski2016,Liu2016}.\nNote that our choice of Stein operator differs to that in \\cite{Chwialkowski2016} and \\citet{Liu2016}.\nThere has been substantial recent research into the use of kernel Stein discrepancies for assessing algorithm performance in the Bayesian computational context \\citep{Gorham2017,Chen2018,Chen2019,singhal2019kernelized,Hodgkinson2020} and one can also exploit this discrepancy as a diagnostic for the performance of the semi-exact control functional.\nThe diagnostic that we propose to monitor is the product $(\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2} (\\b{a}^\\top \\b{K}_0 \\b{a})^{1\/2}$. This approach to error estimation was also suggested (outside the Bayesian context) in Section~5.1 of \\cite{Fasshauer2011}.\n\nEmpirical results in Figure \\ref{fig:KSD} suggest that this diagnostic provides a conservative approximation of the actual error. \nFurther work is required to establish whether this diagnostic detects convergence and non-convergence in general.\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/BOUND_and_MAE_N_Sonar_MALA.pdf}\n\\caption{The mean absolute error and mean of the approximate upper bound $(\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2} (\\b{a}^\\top \\b{K}_0 \\b{a})^{1\/2}$, for different values of $n$ in the sonar example of Section~\\ref{subsec: sonar}. Both are based on the semi-exact control functional method with $\\Phi = \\mathcal{P}^1$.\n}\n\\label{fig:KSD}\n\\end{figure}\n\n\n\\subsection{Consistency of the Estimator}\n\nIn this section we establish that, even in the biased sampling setting, the proposed estimator is consistent.\nIn what follows we consider an increasing number $n$ of samples $\\bm{x}^{(i)}$ whilst the finite-dimensional space $\\Phi$, with basis $\\{\\phi_1,\\dots,\\phi_{m-1}\\}$, is held fixed. \nThe samples $\\bm{x}^{(i)}$ will be assumed to arise from a $V$-uniformly ergodic Markov chain; the reader is referred to Chapter 16 of \\cite{meyn2012markov} for the relevant background.\nRecall that the points $(\\b{x}^{(i)})_{i=1}^n$ are called \\emph{$\\mathcal{F}$-unisolvent} if the matrix in \\eqref{eq: def for P} has full rank.\nIt will be convenient to introduce an inner product $\\langle \\bm{u} , \\bm{v} \\rangle_n = \\bm{u}^\\top \\bm{K}_0^{-1} \\bm{v}$ and associated norm $\\|\\bm{u}\\|_n = \\langle \\bm{u}, \\bm{u} \\rangle_n^{1\/2}$.\nLet $\\Pi$ be the matrix that projects orthogonally onto the columns of $[\\bm{\\Psi}]_{i,j} := \\mathcal{L} \\phi_j(\\bm{x}^{(i)})$ with respect to the $\\langle \\cdot , \\cdot \\rangle_n$ inner product. \n\n\\begin{theorem} \\label{thm: consistency}\nLet the hypotheses of Corollary \\ref{cor: well defined} hold and let $f$ be any function for which $|f|_{k_0,\\mathcal{F}} < \\infty$. \nLet $q$ be a probability density with $p\/q > 0$ on $\\mathbb{R}^d$ and consider a $q$-invariant Markov chain $(\\bm{x}^{(i)})_{i=1}^n$, assumed to be $V$-uniformly ergodic for some $V : \\mathbb{R}^d \\rightarrow [1,\\infty)$, such that \n\\begin{enumerate}\n \\item[A1.] $\\sup_{\\b{x} \\in \\mathbb{R}^d} \\; V(\\b{x})^{-r} \\; (p(\\bm{x}) \/ q(\\bm{x}))^4 \\; k_0(\\b{x},\\b{x})^2 < \\infty $ for some $0 < r < 1$;\n \\item[A2.] the points $(\\b{x}^{(i)})_{i=1}^n$ are almost surely distinct and $\\mathcal{F}$-unisolvent;\n \\item[A3.] $\\lim\\sup_{n \\rightarrow \\infty} \\| \\Pi \\bm{1} \\|_n \/ \\| \\bm{1} \\|_n < 1$ almost surely.\n\\end{enumerate}\nThen $| I_{\\textsc{SECF}}(f) - I(f)| = O_P(n^{1\/2})$.\n\\end{theorem}\n\n\n\nThe proof is provided in Appendix \\ref{ap: consistency proof} and exploits a recent theoretical contribution in \\cite{Hodgkinson2020}. \nAssumption A1 serves to ensure that $q$ is similar enough to $p$ that a $q$-invariant Markov chain will also explore the high probability regions of $p$, as discussed in \\cite{Hodgkinson2020}.\nSufficient conditions for $V$-uniform ergodicity are necessarily Markov chain dependent. \nThe case of the Metropolis-adjusted Langevin algorithm is discussed in \\cite{Roberts1996,Chen2019} and, in particular, Theorem 9 of \\cite{Chen2019} provides sufficient conditions for $V$-uniform ergodicity with $V(\\bm{x}) = \\exp(s \\|\\bm{x}\\|)$ for all $s > 0$. \nUnder these conditions, and with the rational quadratic kernel $k$ considered in Section \\ref{sec: Empirical}, we have $k_0(\\bm{x},\\bm{x}) = O(\\|\\nabla_{\\bm{x}} \\log p(\\bm{x})\\|^2)$ and therefore A1 is satisfied whenever $(p(\\bm{x}) \/ q(\\bm{x}) ) \\|\\nabla_{\\bm{x}} \\log p(\\bm{x})\\| = O(\\exp( t \\|\\bm{x}\\|))$ for some $t > 0$; a weak requirement. \nAssumption A2 ensures that the finite sample size bound \\eqref{eq: error bound KSD} is almost surely well-defined. \nAssumption A3 ensures the points in the sequence $(\\bm{x}^{(i)})_{i=1}^n$ distinguish (asymptotically) the constant function from the functions $\\{\\phi_i\\}_{i=1}^{m-1}$, which is a weak technical requirement.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion}\\label{sec: Discussion}\n\nThe problem of approximating posterior expectations is well-studied and powerful control variate and control functional methods exist to improve the accuracy of Monte Carlo integration.\nHowever, it is \\textit{a priori} unclear which of these methods is most suitable for any given task.\nThis paper demonstrates how both parametric and non-parametric approaches can be combined into a single estimator that remains competitive with the state-of-the-art in all regimes we considered.\nMoreover, we highlighted polynomial exactness in the Bernstein-von-Mises limit as a useful property that we believe can confer robustness of the estimator in a broad range of applied contexts.\nThe multitude of applications for these methods, and their availability in the \\verb+ZVCV+ package \\citep{rZVCV}, suggests they are well-placed to have a practical impact.\n\nSeveral possible extensions of the proposed method can be considered.\nFor example, the parametric component $\\Phi$ could be adapted to the particular $f$ and $p$ using a dimensionality reduction method.\nLikewise, extending cross-validation to encompass the choice of kernel and even the choice of control variate or control functional estimator may be useful. The potential for alternatives to the Nystr\\\"{o}m approximation to further improve scalability of the method can also be explored. \nIn terms of the points $\\b{x}^{(i)}$ on which the estimator is defined, these could be optimally selected to minimize the error bound in \\eqref{eq: error bound KSD}, for example following the approaches of \\cite{Chen2018,Chen2019}.\nFinally, we highlight a possible extension to the case where only stochastic gradient information is available, following \\cite{Friel2016} in the parametric context.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of Lemma \\ref{lemma:assum}}\\label{app:ProofZero}\n\nLemmas \\ref{lemma:assum} and \\ref{lem:boundary-kernel} are stylised versions of similar results that can be found in earlier work, such as \\cite{Chwialkowski2016,Liu2016,Oates2017}.\nOur presentation differs in that we provide a convenient explicit sufficient condition, on the tails of $\\|\\nabla g\\|$ for Lemma \\ref{lemma:assum}, and on the tails of $\\|\\nabla_{\\bm{x}} \\nabla_{\\bm{y}}^\\top k(\\b{x},\\b{y})\\|$ and $\\|\\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y})\\|$ for Lemma \\ref{lem:boundary-kernel}, for their conclusions to hold.\n\n\\begin{proof}\nThe stated assumptions on the differentiability of $p$ and $g$ imply that the vector field $p(\\b{x}) \\nabla_{\\b{x}} g(\\b{x})$ is continuously differentiable on $\\mathbb{R}^d$. \nThe divergence theorem can therefore be applied, over any compact set $D \\subset \\mathbb{R}^d$ with piecewise smooth boundary $\\partial D$, to reveal that\n\\begin{align*}\n\\int_{D} (\\mathcal{L} g)(\\b{x}) p(\\b{x}) \\mathrm{d} \\b{x} &= \\int_D [ \\Delta_{\\b{x}} g(\\b{x}) + \\nabla_{\\b{x}} g(\\b{x}) \\cdot \\nabla_{\\b{x}} \\log p(\\b{x}) ] p(\\b{x}) \\mathrm{d}\\b{x} \\\\\n&= \\int_{D} \\left[ \\frac{1}{p(\\b{x})} \\nabla_{\\b{x}} \\cdot (p(\\b{x}) \\nabla_{\\b{x}} g(\\b{x})) \\right] p(\\b{x}) \\mathrm{d} \\b{x} \\\\\n&= \\int_{D} \\nabla_{\\b{x}} \\cdot (p(\\b{x}) \\nabla_{\\b{x}} g(\\b{x}))\\mathrm{d} \\b{x} \\\\\n&=\\oint_{\\partial D} p(\\b{x})\\nabla_{\\b{x}} g(\\b{x})\\cdot \\b{n}(\\b{x}) \\sigma(\\mathrm{d}\\b{x}),\n\\end{align*}\nwhere $\\b{n}(\\b{x})$ is the unit normal vector at $\\b{x} \\in \\partial D$ and $\\sigma(\\mathrm{d}\\b{x})$ is the surface element at $\\b{x} \\in \\partial D$. \nNext, we let $D = D_R = \\{\\b{x} : \\|\\b{x}\\| \\leq R\\}$ be the ball in $\\mathbb{R}^d$ with radius $R$, so that $\\partial D_R$ is the sphere $S_R = \\{\\b{x} : \\|\\b{x}\\| = R\\}$.\nThe assumption $\\|\\nabla_{\\b{x}} g(\\b{x})\\| \\leq C \\|\\b{x}\\|^{-\\delta} p(\\b{x})^{-1}$ in the statement of the lemma allows us to establish the bound\n\\begin{align*}\n\\left| \\oint_{S_R} p(\\b{x})\\nabla_{\\b{x}} g(\\b{x})\\cdot \\b{n}(\\b{x}) \\sigma(\\mathrm{d}\\b{x}) \\right| \\leq \\oint_{S_R} \\left| p(\\b{x})\\nabla_{\\b{x}} g(\\b{x})\\cdot \\b{n}(\\b{x}) \\right| \\sigma(\\mathrm{d}\\b{x}) &\\leq \\oint_{S_R} p(\\b{x})\\left\\| \\nabla_{\\b{x}} g(\\b{x})\\right\\| \\sigma(\\mathrm{d}\\b{x})\\\\\n&\\leq \\oint_{S_R} C\\left\\| \\b{x}\\right\\|^{-\\delta} \\sigma(\\mathrm{d}\\b{x}) \\\\\n&= C R^{-\\delta} \\oint_{S_R} \\sigma(\\mathrm{d}\\b{x}) \\\\\n&= C R^{-\\delta} \\frac{2 \\pi^{d\/2}}{\\Gamma(d\/2)} R^{d-1},\n\\end{align*}\nwhere in the first and second inequalities we used Jensen's inequality and Cauchy--Schwarz, respectively, and in the final equality we have made use of the surface area of $S_R$.\nThe assumption that $\\delta > d - 1$ is then sufficient to obtain the result:\n\\begin{equation*}\n\\left| \\int (\\mathcal{L} g)(\\b{x}) p(\\b{x}) \\mathrm{d}\\b{x} \\right| = \\lim_{R \\rightarrow \\infty} \\left| \\oint_{S_R} p(\\b{x})\\nabla_{\\b{x}} g(\\b{x})\\cdot \\b{n}(\\b{x}) \\sigma(\\mathrm{d}\\b{x}) \\right| \\; \\leq \\; \\lim_{R \\rightarrow \\infty} C \\frac{2 \\pi^{d\/2}}{\\Gamma(d\/2)} R^{d-1 - \\delta} \\; = \\; 0.\n\\end{equation*}\nThis completes the argument.\n\\end{proof}\n\n\n\\section{Differentiating the Kernel} \\label{appendix:kernels}\n\nThis appendix provides explicit forms of~\\eqref{eq:stein-kernel2} for kernels $k$ that are radial. \nFirst we present a generic result in Lemma \\ref{lem: k0 for radial} before specialising to the cases of the rational quadratic (Section \\ref{subsec: RQK}), Gaussian (Section \\ref{subsec: GK}) and Mat\\'{e}rn (Section \\ref{sec: matern}) kernels.\n\n\\begin{lemma} \\label{lem: k0 for radial}\nConsider a radial kernel $k$, meaning that $k$ has the form\n\\begin{equation*}\nk(\\b{x}, \\b{y}) = \\Psi(z), \\qquad z = \\| \\b{x} - \\b{y} \\|^2,\n\\end{equation*}\nwhere the function $\\Psi \\colon [0, \\infty) \\to \\mathbb{R}$ is four times differentiable and $\\b{x},\\b{y} \\in \\mathbb{R}^d$.\nThen \\eqref{eq:stein-kernel2} simplifies to \n\\begin{equation} \\label{eq:stein-kernel-radial}\n\\begin{split}\nk_0(\\b{x},\\b{y}) ={}& 16 z^2 \\Psi^{(4)}(z) + 16 (2+d) z \\Psi^{(3)}(z) + 4(2+d) d \\Psi^{(2)}(z) \\\\\n& + 4[2z \\Psi^{(3)}(z) + (2+d) \\Psi^{(2)}(z)] [\\b{u}(\\b{x}) - \\b{u}(\\b{y})]^\\top (\\b{x}-\\b{y}) \\\\\n& - 4 \\Psi^{(2)}(z) \\b{u}(\\b{x})^\\top (\\b{x}-\\b{y})(\\b{x}-\\b{y})^\\top \\b{u}(\\b{y}) - 2 \\Psi^{(1)}(z) \\b{u}(\\b{x})^\\top \\b{u}(\\b{y}),\n\\end{split}\n\\end{equation}\nwhere $\\b{u}(\\b{x}) = \\nabla_{\\b{x}} \\log p(\\b{x})$.\n\\end{lemma}\n\\begin{proof}\nThe proof is direct and based on have the following applications of the chain rule:\n\\begin{align*}\n\\nabla_{\\b{x}} k(\\b{x},\\b{y}) & = 2 \\Psi^{(1)}(z) (\\b{x}-\\b{y}), \\\\\n\\nabla_{\\b{y}} k(\\b{x},\\b{y}) & = - 2 \\Psi^{(1)}(z) (\\b{x}-\\b{y}), \\\\\n\\Delta_{\\b{x}} k(\\b{x},\\b{y}) & = 4 z \\Psi^{(2)}(z) + 2 d \\Psi^{(1)}(z), \\\\\n\\Delta_{\\b{y}} k(\\b{x},\\b{y}) & = 4 z \\Psi^{(2)}(z) + 2 d \\Psi^{(1)}(z), \\\\\n\\partial_{x_i} \\partial_{y_j} k(\\b{x}, \\b{y}) &= -4\\Psi^{(2)}(z) (x_i - y_i) (x_j - y_j) - 2\\Psi^{(1)}(z) \\delta_{ij}, \\\\\n\\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y}) & = 8 z \\Psi^{(3)}(z) (\\b{x}-\\b{y}) + 4(2 + d) \\Psi^{(2)}(z) (\\b{x}-\\b{y}), \\\\\n\\nabla_{\\b{y}} \\Delta_{\\b{x}} k(\\b{x},\\b{y}) & = -8 z \\Psi^{(3)}(z) (\\b{x}-\\b{y}) - 4(2 + d) \\Psi^{(2)}(z) (\\b{x}-\\b{y}), \\\\\n\\Delta_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y}) & = 16z^2 \\Psi^{(4)}(z) + 16(2+d) z \\Psi^{(3)}(z) + 4(2+d) d \\Psi^{(2)}(z).\n\\end{align*}\nUpon insertion of these formulae into~\\eqref{eq:stein-kernel2}, the desired result is obtained.\n\\end{proof}\n\nThus for kernels that are radial, it is sufficient to compute just the derivatives $\\Psi^{(j)}$ of the radial part.\n\n\n\\subsection{Rational Quadratic Kernel} \\label{subsec: RQK}\n\nThe rational quadratic kernel, \n\\begin{equation*}\n \\Psi(z) = (1+\\lambda^{-2} z)^{-1},\n\\end{equation*}\nhas derivatives $\\Psi^{(j)}(z) = (-1)^j \\lambda^{-2j} j! (1+\\lambda^{-2} z)^{-j - 1}$ for $j\\geq1$.\n\n\\subsection{Gaussian Kernel} \\label{subsec: GK}\n\nFor the Gaussian kernel we have $\\Psi(z) = \\exp(-z\/\\lambda^2)$. Consequently,\n\\begin{align*}\n \\Psi^{(j)}(z) &= (-1)^j\\lambda^{-2j} \\exp(-z\/\\lambda^2),\n\\end{align*}\nfor $j\\geq 1$.\n\n\\subsection{Mat\\'ern Kernels} \\label{sec: matern}\n\nFor a Mat\\'ern kernel of smoothness $\\nu > 0$ we have\n\\begin{equation*}\n \\Psi(z) = b c^\\nu z^{\\nu\/2} \\, \\mathrm{K}_\\nu( c \\sqrt{z}\\,), \\quad b = \\frac{2^{1-\\nu}}{\\Gamma(\\nu)}, \\quad c = \\frac{\\sqrt{2\\nu}}{\\lambda},\n\\end{equation*}\nwhere $\\Gamma$ the Gamma function and $\\mathrm{K}_\\nu$ the modified Bessel function of the second kind of order $\\nu$.\nBy the use of the formula $\\partial_z K_\\nu(z) = - \\mathrm{K}_{\\nu-1}(z) -\\frac{\\nu}{z} \\mathrm{K}_\\nu(z)$ we obtain\n\\begin{align*}\n \\Psi^{(j)}(z) &= (-1)^j\\frac{ b c^{\\nu+j} }{2^j} z^{(\\nu-j)\/2} \\, \\mathrm{K}_{\\nu-j}(c \\sqrt{z}\\,),\n\\end{align*}\nfor $j = 1,\\ldots,4$.\nIn order to guarantee that the kernel is twice continously differentiable, so that $k_0$ in~\\eqref{eq:stein-kernel} is well-defined, we require that $\\lceil \\nu \\rceil > 2$. As a Mat\\'ern kernel induces a reproducing kernel Hilbert space that is norm-equivalent to the standard Sobolev space of order $\\nu + \\frac{d}{2}$ \\citep[][Example 5.7]{fasshauer2011reproducing}, the condition $\\lceil \\nu \\rceil > 2$ implies, by the Sobolev imbedding theorem \\citep[][Theorem 4.12]{adams2003sobolev}, that the functions in $\\mathcal{H}(k)$ are twice continuously differentiable. Notice that $\\Psi^{(3)}(z)$ and $\\Psi^{(4)}(z)$ may not be defined at $z=0$, in which case the terms $16z^2 \\Psi^{(4)}(z)$, $16(2+d)z\\Psi^{(3)}(z)$ and $8z\\Psi^{(3)}(z)$ in~\\eqref{eq:stein-kernel-radial} must be interpreted as limits as $z \\to 0$ from the right.\n\n\n\\section{Properties of $\\mathcal{H}(k_0)$} \\label{app: basic results on H}\n\nThe purpose of this appendix is to establish basic properties of the reproducing kernel Hilbert space $\\mathcal{H}(k_0)$ of the kernel $k_0$ in~\\eqref{eq:stein-kernel}.\nFor convenience, in this appendix we abbreviate $\\mathcal{H}(k_0)$ to $\\mathcal{H}$.\nIn Lemma~\\ref{lem: linear transform rkhs} we clarify the reproducing kernel Hilbert space structure of~$\\mathcal{H}$.\nThen in Lemma~\\ref{lem: sq integ} we establish square integrability of the elements of $\\mathcal{H}$ and in Lemma~\\ref{lem: contained in Sobolev} we establish the local smoothness of the elements of $\\mathcal{H}$. \n\nTo state these results we require several items of notation:\nThe notation $C^s(\\mathbb{R}^d)$ denotes the set of $s$-times continuously differentiable functions on $\\mathbb{R}^d$; i.e. $\\partial^{\\b{\\alpha}} f \\in C^0(\\mathbb{R}^d)$ for all $|\\b{\\alpha}| \\leq s$ where $C^0(\\mathbb{R}^d)$ denotes the set of continuous functions on $\\mathbb{R}^d$.\nFor two normed spaces $V$ and $W$, let $V \\hookrightarrow W$ denote that $V$ is continuously embedded in $W$, meaning that $\\|v\\|_W \\leq C \\|v\\|_V$ for all $v \\in V$ and some constant $C \\geq 0$.\nIn particular, we write $V \\simeq W$ if and only if $V$ and $W$ are equal as sets and both $V \\hookrightarrow W$ and $W \\hookrightarrow V$.\nLet $\\mathcal{L}^2(p)$ denote the vector space of square integrable functions with respect to $p$ and equip this with the norm $\\|h\\|_{\\mathcal{L}^2(p)} = ( \\int h(\\b{x})^2 p(\\b{x}) \\mathrm{d}\\b{x} )^{1\/2}$.\nFor $h : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ and $D \\subset \\mathbb{R}^d$ we let $h|_D : D \\rightarrow \\mathbb{R}$ denote the restriction of $h$ to $D$.\n\n\n\nFirst we clarify the reproducing kernel Hilbert space structure of $\\mathcal{H}$:\n\n\\begin{lemma}[Reproducing kernel Hilbert space structure of $\\mathcal{H}$] \\label{lem: linear transform rkhs}\nLet $k : \\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ be a positive-definite kernel such that the regularity assumptions of Lemma \\ref{lem:boundary-kernel} are satisfied.\nLet $\\mathcal{H}$ denote the normed space of real-valued functions on $\\mathbb{R}^d$ with norm\n$$\n\\|h\\|_{\\mathcal{H}} = \\inf_{\\substack{h = \\mathcal{L} g \\\\ g \\in \\mathcal{H}(k)}} \\|g\\|_{\\mathcal{H}(k)} .\n$$\nThen $\\mathcal{H}$ admits the structure of a reproducing kernel Hilbert space with kernel $\\kappa : \\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ given by $\\kappa(\\b{x},\\b{y}) = k_0(\\b{x},\\b{y})$.\nThat is, $\\mathcal{H} = \\mathcal{H}(k_0)$.\nMoreover, for $D \\neq \\emptyset$, let $\\mathcal{H} |_D$ denote the normed space of real-valued functions on $D$ with norm\n$$\n\\|h'\\|_{\\mathcal{H} |_D} = \\inf_{\\substack{h|_D = h' \\\\ h \\in \\mathcal{H}}} \\|h\\|_{\\mathcal{H}} .\n$$\nThen $\\mathcal{H} |_D$ is a reproducing kernel Hilbert space with kernel $\\kappa |_D : D \\times D \\rightarrow \\mathbb{R}$ given by $\\kappa |_D(\\b{x},\\b{y}) = k_0(\\b{x},\\b{y})$.\nThat is, $\\mathcal{H}|_D = \\mathcal{H}(\\kappa|_D)$.\n\\end{lemma}\n\\begin{proof}\nThe first statement is an immediate consequence of Theorem 5 in Section 4.1 of \\cite{Berlinet2011}.\nThe second statement is an immediate consequence of Theorem 6 in Section 4.2 of \\cite{Berlinet2011}.\n\\end{proof}\n\nNext we establish when the elements of $\\mathcal{H}$ are square-integrable functions with respect to $p$.\n\n\\begin{lemma}[Square integrability of $\\mathcal{H}$] \\label{lem: sq integ}\nLet $\\kappa$ be a radial kernel satisfying the pre-conditions of Lemma~\\ref{lem: k0 for radial}.\nIf $u_i = \\nabla_{x_i} \\log p(\\bm{x}) \\in \\mathcal{L}^2(p)$ for each $i = 1,\\dots,d$, then $\\mathcal{H} \\hookrightarrow \\mathcal{L}^2(p)$.\n\\end{lemma}\n\\begin{proof}\nFrom the representer theorem and Cauchy--Schwarz we have\n\\begin{eqnarray}\n\\int h(\\b{x})^2 p(\\b{x}) \\mathrm{d}\\b{x} \\; = \\; \\int \\langle h , \\kappa(\\cdot,\\b{x}) \\rangle_{\\mathcal{H}}^2 \\, p(\\b{x}) \\mathrm{d}\\b{x} \\; \\leq \\; \\|h\\|_{\\mathcal{H}}^2 \\int \\kappa(\\b{x},\\b{x}) p(\\b{x}) \\mathrm{d}\\b{x} . \\label{eqn: L2 bound}\n\\end{eqnarray}\nNow, in the special case $k(\\b{x},\\b{y}) = \\Psi(z)$, $z = \\|\\b{x}-\\b{y}\\|^2$, the conclusion of Lemma \\ref{lem: k0 for radial} gives that $\\kappa(\\b{x},\\b{x}) = 4 (2+d) d \\Psi^{(2)}(0) - 2 \\Psi^{(1)}(0) \\|\\b{u}(\\b{x})\\|^2$, from which it follows that \n\\begin{eqnarray}\n0 \\leq \\int \\kappa(\\b{x},\\b{x}) p(\\b{x}) \\mathrm{d}\\b{x} \\; = \\; 4 (2+d) d \\Psi^{(2)}(0) - 2 \\Psi^{(1)}(0) \\int \\|\\b{u}(\\b{x})\\|^2 p(\\b{x}) \\mathrm{d}\\b{x} \\; = \\; C^2 . \\label{eq: kappa diag bound}\n\\end{eqnarray}\nThe combination of \\eqref{eqn: L2 bound} and \\eqref{eq: kappa diag bound} establishes that $\\|h\\|_{\\mathcal{L}^2(p)} \\leq C \\|h\\|_{\\mathcal{H}}$, which is the claimed result.\n\\end{proof}\n\nFinally we turn to the regularity of the elements of $\\mathcal{H}$, as quantified by their smoothness over suitable bounded sets $D \\subset \\mathbb{R}^d$.\nIn what follows we will let $\\mathcal{H}(k)$ be a reproducing kernel Hilbert space of functions in $\\mathcal{L}^2(\\mathbb{R}^d)$, the space of square Lebesgue integrable functions on $\\mathbb{R}^d$, such that the norms\n$$\n\\|h\\|_{\\mathcal{H}(k)} \\simeq \\|h\\|_{W_2^r(\\mathbb{R}^d)} = \\Big( \\textstyle \\sum_{|\\b{\\alpha}| \\leq r} \\|\\partial^{\\b{\\alpha}} h \\|_{\\mathcal{L}^2(\\mathbb{R}^d)}^2 \\Big)^{\\frac{1}{2}}\n$$\nare equivalent.\nThe latter is recognized as the standard Sobolev norm; this space is denoted $W_2^r(\\mathbb{R}^d)$.\nFor example, the Mat\\'{e}rn kernel in Section~\\ref{sec: matern} corresponds to $\\mathcal{H}(k)$ with $r = \\nu + \\frac{d}{2}$.\nThe Sobolev embedding theorem implies that $W_2^r(\\mathbb{R}^d) \\subset C^0(\\mathbb{R}^d)$ whenever $r > \\frac{d}{2}$.\n\nThe following result establishes the smoothness of $\\mathcal{H}$ in terms of the differentiability of its elements.\nIf the smoothness of $f$ is known then $k$ should be selected so that the smoothness of $\\mathcal{H}$ matches it.\n\n\\begin{lemma}[Smoothness of $\\mathcal{H}$] \\label{lem: contained in Sobolev}\nLet $r,s \\in \\mathbb{N}$ be such that $r > s + 2 + \\frac{d}{2}$. If \\sloppy{${\\mathcal{H}(k) \\simeq W_2^r(\\mathbb{R}^d)}$} and $\\log p \\in C^{s+1}(\\mathbb{R}^d)$, then, for any open and bounded set $D \\subset \\mathbb{R}^d$, we have $\\mathcal{H}|_D \\hookrightarrow W_2^s(D)$.\n\\end{lemma}\n\\begin{proof}\nUnder our assumptions, the kernel $\\kappa|_D : D \\times D \\rightarrow \\mathbb{R}$ from Lemma~\\ref{lem: linear transform rkhs} is $s$-times continuously differentiable in the sense of Definition~4.35 of \\cite{Steinwart2008}.\nIt follows from~Lemma~4.34 of \\cite{Steinwart2008} that $\\partial_{\\b{x}}^{\\b{\\alpha}} \\kappa|_D(\\cdot,\\b{x}) \\in \\mathcal{H}|_D$ for all $\\b{x} \\in D$ and $|\\b{\\alpha}| \\leq s$.\nFrom the reproducing property in $\\mathcal{H}|_D$ and the Cauchy--Schwarz inequality we have, for $|\\b{\\alpha}| \\leq s$,\n\\begin{align*}\n|\\partial^{\\b{\\alpha}} f(\\b{x})| = \\big| \\langle f , \\partial^{\\b{\\alpha}} \\kappa|_D(\\cdot, \\b{x}) \\rangle_{\\mathcal{H}|_D} \\big| \\leq \\|f\\|_{\\mathcal{H}|_D} \\big\\| \\partial^{\\b{\\alpha}} \\kappa|_D(\\cdot, \\b{x}) \\big\\|_{\\mathcal{H}|_D} = \\|f\\|_{\\mathcal{H}|_D} \\left( \\partial_{\\b{x}}^{\\b{\\alpha}} \\partial_{\\b{y}}^{\\b{\\alpha}} \\kappa|_D(\\b{x},\\b{y})|_{\\b{y} = \\b{x}} \\right)^{1\/2} .\n\\end{align*}\nSee also Corollary 4.36 of \\cite{Steinwart2008}.\nThus it follows from the definition of $W_2^s(D)$ and the reproducing property that\n\\begin{align*}\n\\|f\\|_{W_2^s(D)}^2 = \\sum_{|\\b{\\alpha}| \\leq s} \\| \\partial^{\\b{\\alpha}} f \\|_{L_2(D)}^2 & \\leq \\| f \\|_{\\mathcal{H}|_D}^2 \\sum_{|\\b{\\alpha}| \\leq s} \\big\\| \\b{x} \\mapsto \\partial_{\\b{x}}^{\\b{\\alpha}} \\partial_{\\b{y}}^{\\b{\\alpha}} \\kappa|_D(\\b{x},\\b{y})|_{\\b{y} = \\b{x}} \\big\\|_{L^2(D)}^2 \\\\\n& = \\| f \\|_{\\mathcal{H}|_D}^2 \\big\\| \\b{x} \\mapsto \\kappa|_D(\\b{x},\\b{x}) \\big\\|_{W_2^s(D)}^2 .\n\\end{align*}\nNow, from the definition of $\\kappa$ and using the fact that $k$ is symmetric, we have\n\\begin{align*}\n\\kappa(\\b{x},\\b{x}) = \\Delta_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}} + 2 \\b{u}(\\b{x})^\\top \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}} + \\b{u}(\\b{x})^\\top \\big[ \\nabla_{\\b{x}} \\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}} \\big] \\b{u}(\\b{x}).\n\\end{align*}\nOur assumption that $\\mathcal{H}(k) \\simeq W_2^r(\\mathbb{R}^d)$ with $r > s + 2 + \\frac{d}{2}$ implies that each of the functions $\\b{x} \\mapsto \\Delta_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}}$, $\\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}}$ and $\\nabla_{\\b{x}} \\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})|_{\\b{y} = \\b{x}}$, are $C^s(\\mathbb{R}^d)$.\nIn addition, our assumption that $\\log p \\in C^{s+1}(\\mathbb{R}^d)$ implies that $\\b{x} \\mapsto \\b{u}(\\b{x}) \\in C^s(\\mathbb{R}^d)$.\nThus $\\b{x} \\mapsto \\kappa(\\b{x},\\b{x})$ is $C^s(\\mathbb{R}^d)$ and in particular the boundedness of $D$ implies that $\\|\\b{x} \\mapsto \\kappa|_D(\\b{x},\\b{x})\\|_{W_2^s(D)} < \\infty$ as required.\n\\end{proof}\n\n\n\n\n\\section{Proof of Lemma \\ref{lem:boundary-kernel}}\\label{app:ProofZeroKernel}\n\n\\begin{proof}\n\nIn what follows $C$ is a generic positive constant, independent of $\\b{x}$ but possibly dependant on $\\b{y}$, whose value can differ each time it is instantiated.\nThe aim of this proof is to apply Lemma \\ref{lemma:assum} to the function $g(\\b{x})=\\mathcal{L}_{\\b{y}} k(\\b{x}, \\b{y})$. \nOur task is to verify the pre-condition $\\| \\nabla_{\\b{x}} g(\\b{x}) \\| \\leq C \\| \\b{x} \\|^{-\\delta} p(\\b{x})^{-1}$ for some \\sloppy{${\\delta > d-1}$}.\nIt will then follow from the conclusion of Lemma \\ref{lemma:assum} that $\\int k_0(\\b{x}, \\b{y}) p(\\b{x}) \\dif \\b{x} = 0$ as required.\nTo this end, expanding the term $\\| \\nabla_{\\b{x}} g(\\b{x}) \\|^2$, we have \n\\begin{align}\n \\| \\nabla_{\\b{x}} g(\\b{x}) \\|^2 ={}& \\| \\nabla_{\\b{x}} \\mathcal{L}_{\\b{y}} k(\\b{x}, \\b{y}) \\|^2 \\nonumber \\\\\n ={}& \\big\\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) + \\nabla_{\\b{x}}[\\nabla_{\\b{y}}\\log p(\\b{y})\\cdot\\nabla_{\\b{y}}k(\\b{x},\\b{y})] \\big\\|^2 \\nonumber \\\\\n ={}& \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\|^2 + 2\\nabla_{\\b{x}} \\big[ \\nabla_{\\b{y}}\\log p(\\b{y})\\cdot\\nabla_{\\b{y}}k(\\b{x},\\b{y}) \\big]^\\top \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\nonumber \\\\\n &+ \\big\\| \\nabla_{\\b{x}} \\big[ \\nabla_{\\b{y}}\\log p(\\b{y})\\cdot\\nabla_{\\b{y}}k(\\b{x},\\b{y}) \\big] \\big\\|^2 \\nonumber \\\\\n ={}& \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\|^2 + 2 \\big\\{ [\\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})]^\\top\\nabla_{\\b{y}}\\log p(\\b{y}) \\big\\}^\\top \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\nonumber \\\\\n &+ \\big\\| [\\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})]\\nabla_{\\b{y}}\\log p(\\b{y}) \\big\\|^2 \\nonumber \\\\\n \\begin{split}\n \\leq{}& \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\|^2 + 2 \\big \\|[\\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})]^\\top\\nabla_{\\b{y}}\\log p(\\b{y}) \\big\\| \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y})\\| \\\\\n &+ \\big\\| [\\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})]\\nabla_{\\b{y}}\\log p(\\b{y}) \\big\\|^2 \\label{eq: various bounds applied} \n \\end{split}\n \\\\\n \\begin{split}\n \\leq{}& \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y}) \\|^2 + 2\\|\\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})\\|_{\\text{OP}}\\|\\nabla_{\\b{y}}\\log p(\\b{y})\\| \\| \\nabla_{\\b{x}} \\Delta_{\\b{y}} k(\\b{x}, \\b{y})\\| \\\\\n &+ \\| \\nabla_{\\b{x}}\\nabla_{\\b{y}}^\\top k(\\b{x},\\b{y})\\|_{\\text{OP}}^2 \\|\\nabla_{\\b{y}}\\log p(\\b{y})\\|^2 \\label{eq: various bounds applied 2} \n \\end{split}\n \\\\\n \\begin{split}\n \\leq{}& \\big[ C \\| \\b{x} \\|^{-\\delta}p(\\b{x})^{-1} \\big]^2 + 2 \\big[ C \\| \\b{x} \\|^{-\\delta}p(\\b{x})^{-1} \\big] \\| \\nabla_{\\b{y}} \\log p(\\b{y}) \\| \\big[ C \\| \\b{x} \\|^{-\\delta}p(\\b{x})^{-1} \\big] \\\\\n &+ \\big[ C \\| \\b{x} \\|^{-\\delta}p(\\b{x})^{-1} \\big]^2 \\|\\nabla_{\\b{y}} \\log p(\\b{y}) \\|^2 \\label{eqn: use bounds} \n \\end{split}\n \\\\\n \\leq{}& C \\| \\b{x} \\|^{-2\\delta} p(\\b{x})^{-2} \\nonumber \n\\end{align}\nas required.\nHere \\eqref{eq: various bounds applied} follows from the Cauchy--Schwarz inequality applied to the second term, \\eqref{eq: various bounds applied 2} follows from the definition of the operator norm $\\|\\cdot\\|_{\\text{OP}}$ and \\eqref{eqn: use bounds} employs the pre-conditions that we have assumed.\n\\end{proof}\n\n\n\n\n\n\\section{Proof of Lemma \\ref{lem: polyexact}} \\label{app: proof of polyexact}\n\n\\begin{proof}\n\nOur first task is to establish that it is sufficient to prove the result in just the particular case $\\hat{\\b{x}}_N = \\b{0}$ and $N^{-1} I(\\hat{\\b{x}}_N)^{-1} = \\b{I}$, where $\\b{I}$ is the $d$-dimensional identity matrix.\nIndeed, if $\\hat{\\b{x}}_N\\neq \\b{0}$ or $N^{-1} I(\\hat{\\b{x}}_N)^{-1} \\neq \\b{I}$, then let $\\b{t}(\\b{x}) = \\b{W}(\\b{x}-\\hat{\\b{x}}_N)$ where $\\b{W}$ is a non-singular matrix satisfying $\\b{W}^\\top \\b{W} = N I(\\hat{\\b{x}}_N)$ so that \\sloppy{${\\b{t}(\\b{x}) \\sim \\mathcal{N}(\\b{0},\\b{I})}$}. \nUnder the same co-ordinate transformation the polynomial subspace \n$$\nA=\\mathcal{P}_0^r=\\mathrm{span} \\{ \\b{x}^{\\b{\\alpha}} : \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, 0 \\leq \\abs[0]{\\b{\\alpha}} \\leq r \\}\n$$\nbecomes $B=\\mathrm{span} \\{ \\b{t}(\\b{x})^{\\b{\\alpha}} : \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, 0 \\leq \\abs[0]{\\b{\\alpha}} \\leq r \\}$.\nExact integration of functions in $A$ with respect to $\\mathcal{N}(\\hat{\\b{x}}_N , \\b{I})$ corresponds to exact integration of functions in $B$ with respect to $\\mathcal{N}(\\b{0},\\b{I})$.\nThus our first task is to establish that $B = A$.\nClearly $B$ is a linear subspace of $A$, since elements of $B$ can be expanded out into monomials and monomials generate $A$, so it remains to argue that $B$ is all of $A$. \nIn what follows we will show that $\\text{dim}(B) = \\text{dim}(A)$ and this will complete the first part of the argument.\n\nThe co-ordinate transform $\\b{t}$ is an invertible affine map on $\\mathbb{R}^d$. \nThe action of such a map $\\b{t}$ on a set $S$ of functions on $\\mathbb{R}^d$ can be defined as $\\b{t}(S) = \\{ \\b{x} \\rightarrow s(\\b{t}(\\b{x})) : s \\in S \\}$. Thus $B = \\b{t}(A)$.\nLet \\sloppy{${\\b{t}^*(\\b{x}) = \\b{W}^{-1}\\b{x} + \\hat{\\b{x}}_n}$} and notice that this is also an invertible affine map on $\\mathbb{R}^d$ with $\\b{t}^*(\\b{t}(\\b{x})) = \\b{x}$ being the identity map on $\\mathbb{R}^d$. \nThe composition of invertible affine maps on $\\mathbb{R}^d$ is again an invertible affine map and thus $\\b{t}^*\\b{t}$ is also an invertible affine map on $\\mathbb{R}^d$ and its action on a set is well-defined.\nConsidering the action of $\\b{t}^*\\b{t}$ on the set $A$ gives that $\\b{t}^*(\\b{t}(A)) = A$ and therefore $\\b{t}(A)$ must have the same dimension as $A$. \nThus $\\text{dim}(A) = \\text{dim}(t(A)) = \\text{dim}(B)$ as claimed. \n\n\nOur second task is to show that, in the case where $p$ is the density of $\\mathcal{N}(\\b{0},\\b{I})$ and thus \\sloppy{${\\nabla_{\\b{x}} \\log p(\\b{x}) = - \\b{x}}$}, the set $\\mathcal{F} = \\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^r$ on which $I_{\\text{CV}}$ is exact is equal to $\\mathcal{P}_0^r$. \nOur proof proceeds by induction on the maximal degree $r$ of the polynomial.\nFor the base case we take $r = 1$:\n\\begin{align*}\n \\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^1 &= \\text{span}\\{1\\} \\oplus \\text{span} \\big\\{ \\mathcal{L} x_j : j = 1,\\ldots,d \\big\\} \\\\\n &= \\text{span}\\{1\\} \\oplus \\text{span} \\big\\{ \\Delta_{\\b{x}} x_j + \\nabla_{\\b{x}} \\log p(\\b{x}) \\cdot \\nabla_{\\b{x}}(x_j) : j = 1,\\ldots,d \\big\\} \\\\\n &= \\text{span}\\{1\\} \\oplus \\text{span} \\big\\{ 0-\\b{x} \\cdot \\b{e}_j : j = 1,\\ldots,d \\big\\} \\\\ \n &= \\text{span}\\{1\\} \\oplus \\text{span} \\big\\{ -x_j : j = 1,\\ldots,d \\big\\} \\\\ \n &= \\mathcal{P}_0^1.\n\\end{align*}\nFor the inductive step we assume that $\\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^{r-1} = \\mathcal{P}_0^{r-1}$ holds for a given $r \\geq 2$ and aim to show that $\\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^r = \\mathcal{P}_0^r$.\nNote that the action of $\\mathcal{L}$ on a polynomial of order $r$ will return a polynomial of order at most $r$, so that $\\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^r \\subseteq \\mathcal{P}_0^r$ and thus we need to show that $\\mathcal{P}_0^r \\subseteq \\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^{r}$. \nUnder the inductive assumption we have \n\\begin{align*}\n\\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^{r} &= \\text{span}\\{1\\} \\oplus \\left( \\mathcal{L} \\mathcal{P}^{r-1} \\oplus \\text{span}\\big\\{ \\mathcal{L}\\b{x}^{\\b{\\alpha}}: \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\big\\} \\right) \\nonumber \\\\\n&= \\left( \\text{span}\\{1\\} \\oplus \\mathcal{L} \\mathcal{P}^{r-1} \\right) \\oplus \\text{span}\\big\\{ \\mathcal{L}\\b{x}^{\\b{\\alpha}}: \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\big\\} \\nonumber \\\\\n & = \\mathcal{P}_0^{r-1} \\oplus \\text{span}\\big\\{ \\mathcal{L}\\b{x}^{\\b{\\alpha}}: \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\big\\} \\nonumber \\\\\n & = \\mathcal{P}_0^{r-1} \\oplus \\text{span} \\big\\{ \\Delta_{\\b{x}}\\b{x}^{\\b{\\alpha}} + \\nabla_{\\b{x}}\\b{x}^{\\b{\\alpha}} \\cdot \\nabla_{\\b{x}}\\log p(\\b{x}): \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\big\\} \\nonumber \\\\\n & = \\mathcal{P}_0^{r-1} \\oplus \\underbrace{ \\text{span} \\Bigg\\{ \\sum_{j=1}^d\n \\alpha_j(\\alpha_j-1)x_j^{\\alpha_j-2}\\prod_{k\\neq j} x_k^{\\alpha_k}\n - \\sum_{j=1}^d\\alpha_j\\b{x}^{\\b{\\alpha}}: \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\Bigg\\} }_{=: \\mathcal{Q}^r}\n\\end{align*}\nTo complete the inductive step we must therefore show that, for each $\\b{\\alpha} \\in \\mathbb{N}_0^d$ with $|\\b{\\alpha}| = r$, we have $\\b{x}^{\\b{\\alpha}} \\in \\text{span}\\{1\\} \\oplus \\mathcal{L} \\mathcal{P}^r$.\nFix any $\\b{\\alpha} \\in \\mathbb{N}_0^d$ such that $\\abs[0]{\\b{\\alpha}} = r$.\nThen\n$$\n\\phi(\\b{x}) = \\sum_{j=1}^d \\alpha_j(\\alpha_j-1)x_j^{\\alpha_j-2}\\prod_{k\\neq j} x_k^{\\alpha_k} - \\sum_{j=1}^d\\alpha_j \\b{x}^{\\b{\\alpha}} \\in \\mathcal{Q}^r.\n$$\nand\n$$\n\\varphi(\\b{x}) = \\frac{1}{\\b{1}^\\top\\b{\\alpha}}\\sum_{j=1}^d \\alpha_j(\\alpha_j-1)x_j^{\\alpha_j-2}\\prod_{k\\neq j}x_k^{\\alpha_k} \\in \\mathcal{P}_0^{r-1}\n$$\nbecause this polynomial is of order less than $r$.\nSince $\\varphi - (\\b{1}^\\top \\b{\\alpha})^{-1} \\phi \\in \\mathcal{P}_0^{r-1} \\oplus \\mathcal{Q}^r = \\text{span}\\{1\\} \\oplus \\mathcal{L} \\mathcal{P}^r$ and\n\\begin{equation*}\n \\varphi(\\b{x}) - \\frac{1}{\\b{1}^\\top \\b{\\alpha}} \\phi(\\b{x}) = \\frac{\\sum_{j=1}^d\\alpha_j}{\\b{1}^\\top\\b{\\alpha}} \\b{x}^{\\b{\\alpha}} = \\b{x}^{\\b{\\alpha}},\n\\end{equation*}\nwe conclude that $\\b{x}^{\\b{\\alpha}} \\in \\text{span}\\{1\\} \\oplus \\mathcal{L} \\mathcal{P}^r$.\nThus we have shown that $\\{\\b{x}^{\\b{\\alpha}} : \\, \\b{\\alpha} \\in \\mathbb{N}_0^d, \\, \\abs[0]{\\b{\\alpha}} = r \\} \\subset \\text{span}\\{1\\} \\oplus \\mathcal{L}\\mathcal{P}^{r}$ and this completes the argument.\n\\end{proof}\n\n\n\\section{Proof of Lemma \\ref{lem: comput etc}}\n\\label{app:Computation proof}\n\n\\begin{proof}\n\nThe assumptions that the $\\b{x}^{(i)}$ are distinct and that $k_0$ is a positive-definite kernel imply that the matrix $\\b{K}_0$ is positive-definite and thus non-singular.\nLikewise, the assumption that the $\\b{x}^{(i)}$ are $\\mathcal{F}$-unisolvent implies that the matrix $\\b{P}$ has full rank.\nIt follows that the block matrix in~\\eqref{eq:block-system} is non-singular.\nThe interpolation and semi-exactness conditions in Section~\\ref{ssec:proposedBSS} can be written in matrix form as\n\\begin{enumerate}\n \\item $\\b{K}_0 \\, \\b{a} + \\b{P} \\b{b} = \\b{f}$ (interpolation);\n \\item $\\b{P}^\\top \\b{a} = \\b{0}$ (semi-exact).\n\\end{enumerate}\nThe first of these is merely~\\eqref{eq:interpolant} in matrix form. \nTo see how $\\b{P}^\\top \\b{a} = \\b{0}$ is related to the semi-exactness requirement ($f_n = f$ whenever $f \\in \\mathcal{F}$), observe that for $f \\in \\mathcal{F}$ we have $\\b{f} = \\b{P} \\b{c}$ for some $\\b{c} \\in \\mathbb{R}^m$. Consequently, the interpolation condition should yield $\\b{b} = \\b{c}$ and $\\b{a} = \\b{0}$. The condition $\\b{P}^\\top \\b{a} = \\b{0}$ enforces that $\\b{a} = \\b{0}$ in this case: multiplication of the interpolation equation with $\\b{a}^\\top$ yields $\\b{a}^\\top \\b{K}_0 \\b{a} + \\b{a}^\\top \\b{P} \\b{b} = \\b{a}^\\top \\b{P} \\b{c}$, which is then equivalent to $\\b{a}^\\top \\b{K}_0 \\b{a} = \\b{0}$. Because $\\b{K}_0$ is positive-definite, the only possible $\\b{a} \\in \\mathbb{R}^n$ is $\\b{a} = \\b{0}$ and $\\b{P}$ having full rank implies that $\\b{b} = \\b{c}$.\nThus the coefficients $\\b{a}$ and $\\b{b}$ can be cast as the solution to the linear system\n\\begin{equation*}\n \\begin{bmatrix} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{bmatrix} \\begin{bmatrix} \\b{a} \\\\ \\b{b} \\end{bmatrix} = \\begin{bmatrix} \\b{f} \\\\ \\b{0} \\end{bmatrix}.\n\\end{equation*}\nFrom~\\eqref{eq:block-system} we get\n\\begin{equation*}\n \\b{b} = ( \\b{P}^\\top \\b{K}_0^{-1} \\b{P} )^{-1} \\b{P}^\\top \\b{K}_0^{-1} \\b{f},\n\\end{equation*}\nwhere $\\b{P}^\\top \\b{K}_0^{-1} \\b{P}$ is non-singular because $\\b{K}_0$ is non-singular and $\\b{P}$ has full rank.\nRecognising that \\sloppy{${b_1 = \\b{e}_1^\\top \\b{b}}$} for $\\b{e}_1 = (1, 0, \\ldots, 0) \\in \\mathbb{R}^m$ completes the argument.\n\\end{proof}\n\n\n\\section{Nystr\\\"{o}m Approximation and Conjugate Gradient} \\label{app: Nystrom}\n\nIn this appendix we describe how a Nystr\\\"{o}m approximation and the conjugate gradient method can be used to provide an approximation to the proposed method with reduced computational cost.\nTo this end we consider a function of the form\n\\begin{equation} \n \\tilde{f}_{n_0}(\\b{x}) = \\tilde{b}_1 + \\sum_{i=1}^{m-1} \\tilde{b}_{i+1} \\mathcal{L} \\phi_i (\\b{x}) + \\sum_{i=1}^{n_0} \\tilde{a}_i k_0(\\b{x}, \\b{x}^{(i)}), \\label{eq: Nystrom regression}\n\\end{equation}\nwhere $n_0 \\ll n$ represents a small subset of the $n$ points in the dataset. \nStrategies for selection of a suitable subset are numerous \\citep[e.g.,][]{Alaoui2015,rudi2015less} but for simplicity in this work a uniform random subset was selected. \nWithout loss of generality we denote this subset by the first $n_0$ indices in the dataset.\nThe coefficients $\\b{a}$ and $\\b{b}$ in the proposed method \\eqref{eq:interpolant} can be characterized as the solution to a kernel least-squares problem, the details of which are reserved for Appendix \\ref{app: kernel least squares}.\nFrom this perspective it is natural to define the reduced coefficients $\\tilde{\\b{a}}$ and $\\tilde{\\b{b}}$ in \\eqref{eq: Nystrom regression} also as the solution to a kernel least-squares problem, the details of which are reserved for Appendix \\ref{app: Nystrom least squares}.\nIn taking this approach, the $(n+m)$-dimensional linear system in \\eqref{eq:block-system} becomes the $(n_0+m)$-dimensional linear system\n\\begin{equation} \\label{eq: Nystrom linear system}\n\\left[ \\begin{array}{cc} \\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0} + \\b{P}_{n_0}\\b{P}_{n_0}^\\top & \\b{K}_{0,n_0,n}\\b{P} \\\\ \\b{P}^\\top\\b{K}_{0,n,n_0} & \\b{P}^\\top\\b{P}\\end{array} \\right] \\left[ \\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K}_{0,n_0,n} \\b{f}\\\\ \\b{P}^\\top \\b{f} \\end{array} \\right].\n\\end{equation}\nHere $\\b{K}_{0,r,s}$ denotes the matrix formed by the first $r$ rows and the first $s$ columns of $\\b{K}_0$.\nSimilarly $\\b{P}_r$ denotes the first $r$ rows of $\\b{P}$.\nIt can be verified that there is no approximation error when $n_0 = n$, with $\\tilde{\\b{a}} = \\b{a}$ and $\\tilde{\\b{b}} = \\b{b}$.\nThis is a simple instance of a Nystr\\\"{o}m approximation and it can be viewed as a random projection method \\citep{smola2000sparse,williams2001using}.\n\nThe computational complexity of computing this approximation to the proposed method is \n$$\nO(n n_0^2 + n m^2 + n_0^3 + m^3),\n$$\nwhich could still be quite high. \nFor this reason, we now consider iterative, as opposed to direct, linear solvers for~\\eqref{eq: Nystrom linear system}.\nIn particular, we employ the conjugate gradient method to approximately solve this linear system.\nThe performance of the conjugate gradient method is determined by the condition number of the linear system, and for this reason a preconditioner should be employed\\footnote{A linear system $\\b{A} \\b{x} = \\b{b}$ can be \\emph{preconditioned} by an invertible matrix $\\b{P}$ to produce $\\b{P}^\\top \\b{A} \\b{P} \\b{z} = \\b{P}^\\top \\b{b}$. The solution $\\b{z}$ is related to $\\b{x}$ via $\\b{x} = \\b{P} \\b{z}$.}.\nIn this work we considered the preconditioner\n\\begin{equation*}\n\\left[ \\begin{array}{cc} \\b{B}_1 & \\b{0} \\\\ \\b{0} & \\b{B}_2 \\end{array} \\right].\n\\end{equation*}\nFollowing \\cite{rudi2017falkon}, $\\b{B}_1$ is the lower-triangular matrix resulting from a Cholesky decomposition\n\\begin{equation*}\n\\b{B}_1 \\b{B}_1^\\top = \\left( \\frac{n}{n_0} \\b{K}_{0,n_0,n_0}^2 + \\b{P}_{n_0} \\b{P}_{n_0}^\\top \\right)^{-1} ,\n\\end{equation*}\nthe latter being an approximation to the inverse of $\\b{K}_{0,n_0,n} \\b{K}_{0,n,n_0} + \\b{P}_{n_0} \\b{P}_{n_0}^\\top$ and obtained at \\sloppy{${O(n_0^3 + m n_0^2)}$} cost. The matrix $\\b{B}_2$ is\n\\begin{equation*}\n\\b{B}_2 \\b{B}_2^\\top = \\big( \\b{P}^\\top\\b{P}\\big)^{-1} ,\n\\end{equation*}\nwhich uses the pre-computed matrix $\\b{P}^\\top\\b{P}$ and is of $O(m^3)$ complexity. \nThus we obtain a preconditioned linear system\n$$\n\\left[ \\begin{array}{cc} \\b{B}_1^\\top ( \\b{K}_{0,n_0,n} \\b{K}_{0,n,n_0} + \\b{P}_{n_0} \\b{P}_{n_0}^\\top ) \\b{B}_1 & \\b{B}_1^\\top \\b{K}_{0,n_0,n} \\b{P}\\b{B}_2 \\\\ \\b{B}_2^\\top \\b{P}^\\top \\b{K}_{0,n,n_0} \\b{B}_1 & \\b{I} \\end{array} \\right] \\left[ \\begin{array}{c} \\tilde{\\tilde{\\b{a}}} \\\\ \\tilde{\\tilde{\\b{b}}} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{B}_1^\\top \\b{K}_{0,n_0,n} \\b{f} \\\\ \\b{B}_2^\\top \\b{P}^\\top \\b{f} \\end{array} \\right] .\n$$\nThe coefficients $\\tilde{\\b{a}}$ and $\\tilde{\\b{b}}$ of $\\tilde{f}_{n_0}$ are related to the solution $(\\tilde{\\tilde{\\b{a}}}, \\tilde{\\tilde{\\b{b}}})$ of this preconditioner linear system via $\\tilde{\\tilde{\\b{a}}} = \\b{B}_1^{-1} \\tilde{\\b{a}}$ and $\\tilde{\\tilde{\\b{b}}} =\\b{B}_2^{-1} \\tilde{\\b{b}}$, which is an upper-triangular linear system solved at quadratic cost.\n\nThe above procedure leads to a more computationally (time and space) efficient procedure, and we denote the resulting estimator as $I_{\\text{ASECF}}(f) = \\tilde{b}_1$.\nFurther extensions could be considered; for example non-uniform sampling for the random projection via leverage scores \\citep{rudi2015less}.\n\nFor the examples in Section~\\ref{sec: Empirical}, we consider $n_0 = \\lceil\\sqrt{n}\\,\\rceil$ where $\\lceil \\cdot \\rceil$ denotes the ceiling function. We use the \\verb+R+ package \\verb+Rlinsolve+ to perform conjugate gradient, where we specify the tolerance to be~$10^{-5}$. \nThe initial value for the conjugate gradient procedure was the choice of $\\tilde{\\tilde{\\b{a}}}$ and $\\tilde{\\tilde{\\b{b}}}$ that leads to the Monte Carlo estimate, $\\tilde{\\tilde{\\b{a}}} = \\b{0}$ and $\\tilde{\\tilde{\\b{b}}} = \\b{B}_2^{-1} \\b{e}_{1}\\frac{1}{n}\\sum_{i=1}^n f(\\b{x}^{(i)})$. In our examples, we did not see a computational speed up from the use of conjugate gradient, likely due to the relatively small values of $n$ involved.\n\n\n\n\\subsection{Kernel Least-Squares Characterization} \\label{app: kernel least squares}\n\nHere we explain how the interpolant $f_n$ in \\eqref{eq:interpolant} can be characterized as the solution to the constrained kernel least-squares problem\n\\begin{equation*}\n\\argmin_{\\b{a},\\b{b}} \\frac{1}{n}\\sum_{i=1}^n \\big[ f(\\b{x}^{(i)}) - f_n(\\b{x}^{(i)}) \\big]^2 \\quad \\text{ s.t. } \\quad f_n = f \\quad \\text{ for all } \\quad f \\in \\mathcal{F} .\n\\end{equation*}\nTo see this, note that similar reasoning to that in Appendix \\ref{app:Computation proof} allows us to formulate the problem using matrices as\n\\begin{eqnarray}\n\\argmin_{\\b{a},\\b{b}} \\|\\b{f} - \\b{K}_0 \\b{a} - \\b{P} \\b{b} \\|^2 \\quad \\text{ s.t. } \\quad \\b{P}^\\top \\b{a} = \\b{0} . \\label{eq: least squares formulation of original}\n\\end{eqnarray}\nThis is a quadratic minimization problem subject to the constraint $\\b{P}^\\top \\b{a} = \\b{0}$ and therefore the solution is given by the Karush--Kuhn--Tucker matrix equation\n\\begin{eqnarray}\n\\left[ \\begin{array}{ccc} \\b{K}_0^2 & \\b{K}_0 \\b{P} & \\b{P} \\\\ \\b{P}^\\top \\b{K}_0 & \\b{P}^\\top \\b{P} & \\b{0} \\\\ \\b{P}^\\top & \\b{0} & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{c} \\b{a} \\\\ \\b{b} \\\\ \\b{c} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K} \\b{f} \\\\ \\b{P}^\\top \\b{f} \\\\ \\b{0} \\end{array} \\right] . \\label{eq: KKT first}\n\\end{eqnarray}\nNow, we are free to add a multiple, $\\b{P}$, of the third row to the first row, which produces\n\\begin{eqnarray*}\n\\left[ \\begin{array}{ccc} \\b{K}_0^2 + \\b{P} \\b{P}^\\top & \\b{K}_0 \\b{P} & \\b{P} \\\\ \\b{P}^\\top \\b{K}_0 & \\b{P}^\\top \\b{P} & \\b{0} \\\\ \\b{P}^\\top & \\b{0} & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{c} \\b{a} \\\\ \\b{b} \\\\ \\b{c} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K} \\b{f} \\\\ \\b{P}^\\top \\b{f} \\\\ \\b{0} \\end{array} \\right] .\n\\end{eqnarray*}\nNext, we make the \\emph{ansatz} that $\\b{c} = \\b{0}$ and seek a solution to the reduced linear system\n\\begin{eqnarray*}\n\\left[ \\begin{array}{cc} \\b{K}_0^2 + \\b{P} \\b{P}^\\top & \\b{K}_0 \\b{P} \\\\ \\b{P}^\\top \\b{K}_0 & \\b{P}^\\top \\b{P} \\end{array} \\right] \\left[ \\begin{array}{c} \\b{a} \\\\ \\b{b} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K} \\b{f} \\\\ \\b{P}^\\top \\b{f} \\end{array} \\right] .\n\\end{eqnarray*}\nThis is the same as\n\\begin{eqnarray*}\n\\left[ \\begin{array}{ccc} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{ccc} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{array} \\right] = \\left[ \\begin{array}{ccc} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{c} \\b{f} \\\\ \\b{0} \\end{array} \\right]\n\\end{eqnarray*}\nand thus, if the block matrix can be inverted, we have \n\\begin{eqnarray}\n\\left[ \\begin{array}{ccc} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{f} \\\\ \\b{0} \\end{array} \\right] \\label{eq: KKT final}\n\\end{eqnarray}\nas claimed.\nExistence of a solution to \\eqref{eq: KKT final} establishes a solution to the original system \\eqref{eq: KKT first} and justifies the \\emph{ansatz}.\nMoreover, the fact that a solution to \\eqref{eq: KKT final} exists was established in Lemma \\ref{lem: comput etc}.\n\n\n\n\\subsection{Nystr\\\"{o}m Approximation} \\label{app: Nystrom least squares}\n\nTo develop a Nystr\\\"{o}m approximation, our starting point is the kernel least-squares characterization of the proposed estimator in~\\eqref{eq: least squares formulation of original}.\nIn particular, the same least-squares problem can be considered for the Nystr\\\"{o}m approximation in~\\eqref{eq: Nystrom regression}:\n\\begin{eqnarray*}\n\\argmin_{\\tilde{\\b{a}},\\tilde{\\b{b}}} \\|\\b{f} - \\b{K}_{0,n,n_0} \\tilde{\\b{a}} - \\b{P} \\tilde{\\b{b}} \\|_2^2 \\quad \\text{ s.t. } \\quad \\b{P}_{n_0}^\\top \\tilde{\\b{a}} = \\b{0} .\n\\end{eqnarray*}\nThis least-squares problem can be formulated as\n\\begin{align*}\n&\\argmin_{\\tilde{\\b{a}},\\tilde{\\b{b}}} (\\b{f} - \\b{K}_{0,n,n_0} \\tilde{\\b{a}} - \\b{P} \\tilde{\\b{b}})^\\top (\\b{f} - \\b{K}_{0,n,n_0} \\tilde{\\b{a}} - \\b{P} \\tilde{\\b{b}}) \\\\\n&=\\argmin_{\\tilde{\\b{a}},\\tilde{\\b{b}}}\n\\left[ \\b{f}^{\\top}\\b{f} \n- \\b{f}^{\\top}\\b{K}_{0,n,n_0}\\tilde{\\b{a}}\n- \\b{f}^{\\top}\\b{P}\\tilde{\\b{b}}\n- \\tilde{\\b{a}}^{\\top}\\b{K}_{0,n_0,n}\\b{f}\n+ \\tilde{\\b{a}}^{\\top}\\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0}\\tilde{\\b{a}} \\right. \\\\\n&\\phantom{=\\argmin_{\\tilde{\\b{a}},\\tilde{\\b{b}}}[ }\\left.\n+ \\tilde{\\b{a}}^{\\top}\\b{K}_{0,n_0,n}\\b{P}\\tilde{\\b{b}}\n- \\tilde{\\b{b}}^{\\top}\\b{P}^{\\top}\\b{f}\n- \\tilde{\\b{b}}^{\\top}\\b{P}^{\\top}\\b{K}_{0,n,n_0}\\tilde{\\b{a}}\n- \\tilde{\\b{b}}^{\\top}\\b{P}^{\\top}\\b{P}\\tilde{\\b{b}} \\right]\\\\\n&=\\argmin_{\\tilde{\\b{a}},\\tilde{\\b{b}}}\n\\left[\\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\end{array}\\right]^{\\top} \n\\left[ \\begin{array}{cc} \\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0} & \\b{K}_{0,n_0,n}\\b{P} \\\\ \\b{P}^\\top\\b{K}_{0,n,n_0} & \\b{P}^\\top\\b{P} \\end{array} \\right]\n\\left[\\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\end{array}\\right]\n- 2 \\left[ \\begin{array}{c} \\b{K}_{0,n_0,n} \\b{f} \\\\ \\b{P}^\\top \\b{f} \\end{array} \\right]\n\\left[\\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\end{array}\\right]\n+ \\b{f}^{\\top}\\b{f}\n\\end{align*}\nThis is a quadratic minimization problem subject to the constraint $\\b{P}_{n_0}^\\top \\tilde{\\b{a}} = \\b{0}$ and so the solution is given by the Karush--Kuhn--Tucker matrix equation\n\\begin{eqnarray}\n\\left[ \\begin{array}{ccc} \\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0} & \\b{K}_{0,n_0,n}\\b{P} & \\b{P}_{n_0} \\\\ \\b{P}^\\top\\b{K}_{0,n,n_0} & \\b{P}^\\top\\b{P} & \\b{0} \\\\ \\b{P}_{n_0}^\\top & \\b{0} & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\\\ \\tilde{\\b{c}} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K}_{0,n_0,n} \\b{f}\\\\ \\b{P}^\\top \\b{f} \\\\ \\b{0} \\end{array} \\right] . \\label{eq: KKT first Nystrom}\n\\end{eqnarray}\nFollowing an identical argument to that in Appendix \\ref{app: kernel least squares}, we first add $\\b{P}_{n_0}$ times the third row to the first row to obtain\n\\begin{eqnarray*}\n\\left[ \\begin{array}{ccc} \\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0} + \\b{P}_{n_0}\\b{P}_{n_0}^\\top & \\b{K}_{0,n_0,n}\\b{P} & \\b{P}_{n_0} \\\\ \\b{P}^\\top\\b{K}_{0,n,n_0} & \\b{P}^\\top\\b{P} & \\b{0} \\\\ \\b{P}_{n_0}^\\top & \\b{0} & \\b{0} \\end{array} \\right] \\left[ \\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\\\ \\tilde{\\b{c}} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K}_{0,n_0,n} \\b{f}\\\\ \\b{P}^\\top \\b{f} \\\\ \\b{0} \\end{array} \\right] .\n\\end{eqnarray*}\nTaking again the \\emph{ansatz} that $\\tilde{\\b{c}} = \\b{0}$ requires us to solve the reduced linear system\n\\begin{eqnarray}\n\\left[ \\begin{array}{cc} \\b{K}_{0,n_0,n}\\b{K}_{0,n,n_0} + \\b{P}_{n_0}\\b{P}_{n_0}^\\top & \\b{K}_{0,n_0,n}\\b{P} \\\\ \\b{P}^\\top\\b{K}_{0,n,n_0} & \\b{P}^\\top\\b{P}\\end{array} \\right] \\left[ \\begin{array}{c} \\tilde{\\b{a}} \\\\ \\tilde{\\b{b}} \\end{array} \\right] = \\left[ \\begin{array}{c} \\b{K}_{0,n_0,n} \\b{f}\\\\ \\b{P}^\\top \\b{f} \\end{array} \\right] . \\label{eq: KTT Nystrom}\n\\end{eqnarray}\nAs in Appendix \\ref{app: kernel least squares}, the existence of a solution to \\eqref{eq: KTT Nystrom} implies a solution to \\eqref{eq: KKT first Nystrom} and justifies the \\emph{ansatz}.\n\n\n\n\n\n\\section{Sensitivity to the Choice of Kernel} \\label{app: effect of the kernel}\n\nIn this appendix we investigate the sensitivity of kernel-based methods ($I_{\\text{CF}}$, $I_{\\text{SECF}}$ and $I_{\\text{ASECF}}$) to the kernel and its parameter using the Gaussian example of Section \\ref{sec: gaussian assessment}. Specifically we compare the three kernels described in Appendix \\ref{appendix:kernels}, the Gaussian, Mat\\'{e}rn and rational quadratic kernels, when the parameter,~$\\lambda$, is chosen using either cross-validation or the median heuristic \\citep{Garreau2017}. For the Mat\\'{e}rn kernel, we fix the smoothness parameter at $\\nu = 4.5$.\n\nIn the cross-validation approach,\n\\begin{eqnarray}\n\\lambda_\\text{CV} \\in \\argmin \\sum_{i=1}^5 \\sum_{j=1}^{ n_5 } \\big[ f( \\b{x}^{(i,j)}) - f_{i,\\lambda}( \\b{x}^{(i,j)}) \\big]^2, \\label{eq: CV error}\n\\end{eqnarray}\nwhere $n_5 := \\lfloor n \/ 5 \\rfloor$, $f_{i,\\lambda}$ denotes an interpolant of the form \\eqref{eq:interpolant} to $f$ at the points $\\{\\b{x}^{(i,j)} : j = 1,\\dots, n_5 \\big\\}$ with kernel parameter $\\lambda$, and $\\b{x}^{(i,j)}$ is the $j$th point in the $i$th fold. \nIn general~\\eqref{eq: CV error} is an intractable optimization problem and we therefore perform a grid-based search. Here we consider $\\lambda \\in 10^{\\{-1.5,-1,-0.5,0,0.5,1\\}}$.\n\nThe median heuristic described in \\citet{Garreau2017} is the choice of the bandwidth\n\\begin{equation*}\n\\tilde{\\lambda} = \\sqrt{ \\frac{1}{2} \\text{Med}\\Big\\{ \\| \\b{x}^{(i)} - \\b{x}^{(j)} \\|^2 \\: : \\: 1\\leq i < j \\leq n \\Big\\} }\n\\end{equation*}\nfor functions of the form $k(\\b{x},\\b{y}) = \\varphi(\\| \\b{x} - \\b{y} \\|\/\\lambda)$, where $\\text{Med}$ is the empirical median. This heuristic can be used for the Gaussian, Mat\\'{e}rn and rational quadratic kernels, which all fit into this framework. \n\nFigures~\\ref{fig:Gaussian_compare_N1000} and~\\ref{fig:Gaussian_compare_d4} show the statistical efficiency of each combination of kernel and tuning approach for $n=1000$ and $d=4$, respectively. The outcome that the performance of $I_{\\text{SECF}}$ and $I_{\\text{ASECF}}$ are less sensitive to the kernel choice than $I_{\\text{CF}}$ is intuitive when considering the fact that semi-exact control functionals enforce exactness on $f \\in \\mathcal{F}$.\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Gaussian_compare_N1000.pdf}\n\\caption{Gaussian example, estimated statistical efficiency for $N=1000$ using different kernels and tuning approaches. The estimators are (a) $I_{\\text{CF}}$, (b) $I_{\\text{SECF}}$ with polynomial order $r=1$, (c) $I_{\\text{SECF}}$ with $r=2$, (d) $I_{\\text{ASECF}}$ with $r=1$ and (e) $I_{\\text{ASECF}}$ with $r=2$.}\n\\label{fig:Gaussian_compare_N1000}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Gaussian_compare_d4.pdf}\n\\caption{Gaussian example, estimated statistical efficiency for $d=4$ using different kernels and tuning approaches. The estimators are (a) $I_{\\text{CF}}$, (b) $I_{\\text{SECF}}$ with polynomial order $r=1$, (c) $I_{\\text{SECF}}$ with $r=2$, (d) $I_{\\text{ASECF}}$ with $r=1$ and (e) $I_{\\text{ASECF}}$ with $r=2$.\n}\n\\label{fig:Gaussian_compare_d4}\n\\end{figure}\n\n\n\n\n\\FloatBarrier\n\n\\section{Results for the Unadjusted Langevin Algorithm}\\label{app: ULA results}\n\nRecall that the proposed method does not require that the $\\b{x}^{(i)}$ form an empirical approximation to $p$.\nIt is therefore interesting to investigate the behaviour of the method when the $(\\b{x}^{(i)})_{i=1}^\\infty$ arise as a Markov chain that does not leave $p$ invariant.\nFigures \\ref{fig:Recapture_ULA} and \\ref{fig:Sonar_ULA} show results when the unadjusted Langevin algorithm is used rather than the Metropolis-adjusted Langevin algorithm which is behind Figures \\ref{fig:Recapture} and \\ref{fig:Sonar} of the main text. The benefit of the proposed method for samplers that do not leave $p$ invariant is evident through its reduced bias compared to $I_{\\text{ZV}}$ and $I_{\\text{MC}}$ in Figure \\ref{fig:Recapture_marginal1}.\nRecall that the unadjusted Langevin algorithm \\citep{Parisi1981,Ermak1975} is defined by\n\\begin{equation*}\n\\b{x}^{(i+1)} = \\b{x}^{(i)} + \\frac{h^2}{2}\\b{\\Sigma}\\nabla_{\\b{x}} \\log P_{\\b{x} \\mid \\b{y}}(\\b{x}^{(i)} \\mid \\b{y}) + \\epsilon_{i+1},\n\\end{equation*}\nfor $i=1,\\ldots,n-1$ where $\\b{x}^{(1)}$ is a fixed point with high posterior support and $\\epsilon_{i+1}\\sim \\mathcal{N}(\\b{0},h^2\\b{\\Sigma})$. Step sizes of $h=0.9$ for the sonar example and $h=1.1$ for the capture-recapture example were selected.\n\n\n\\begin{figure}[h!]\n \\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Recapture_both_combined_kernelrq2.pdf}\n \\caption{Recapture example (a) estimated \\textit{statistical efficiency} and (b) estimated \\textit{computational efficiency} when the unadjusted Langevin algorithm is used in place of the Metropolis-adjusted Langevin algorithm. Efficiency here is reported as an average over the 11 expectations of interest.\n }\n \\label{fig:Recapture_ULA}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Sonar_both_kernelrq2.pdf}\n\\caption{Sonar example (a) estimated \\textit{statistical efficiency} and (b) estimated \\textit{computational efficiency} when the unadjusted Langevin algorithm is used in place of the Metropolis-adjusted Langevin algorithm.\n}\n\\label{fig:Sonar_ULA}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/Recapture_marginal1_kernelrq2_ULAandMALA.pdf}\n \\caption{Recapture example (a) boxplots of 100 estimates of $\\int x_1 P_{\\b{x} \\mid \\b{y}} \\mathrm{d} \\b{x}$ when the Metropolis-adjusted Langevin algorithm is used for sampling and (b) boxplots of 100 estimates of $\\int x_1 P_{\\b{x} \\mid \\b{y}} \\mathrm{d} \\b{x}$ when the unadjusted Langevin algorithm is used for sampling. The black horizontal line represents the gold standard of approximation.\n }\n \\label{fig:Recapture_marginal1}\n\\end{figure}\n\n\n\\FloatBarrier\n\n\\section{Reproducing Kernels and Worst-Case Error} \\label{app: wce}\n\n\nThe purpose of this section is to review some basic results about worst-case error analysis in a reproducing kernel Hilbert space context.\nIn Appendices~\\ref{app: finite bound} and~\\ref{ap: consistency proof} these results are used to prove Proposition~\\ref{lem: KSD bound} and Theorem~\\ref{thm: consistency}.\n\nLet $k \\colon \\mathbb{R}^d \\times \\mathbb{R}^d \\to \\mathbb{R}$ be a positive-definite kernel such that $\\int \\lvert k(\\b{x}, \\b{y}) \\rvert p(\\b{x}) \\dif \\b{x} < \\infty$ for every $\\b{y} \\in \\mathbb{R}^d$ and $\\mathcal{H}(k)$ the reproducing kernel Hilbert space of $k$.\nThe \\emph{worst-case error} in $\\mathcal{H}(k)$ of any weights $\\b{v} = (v_1, \\ldots, v_n) \\in \\mathbb{R}^n$ and any distinct points $\\{\\b{x}^{(i)}\\}_{i=1}^n \\subset \\mathbb{R}^d$ is defined as\n\\begin{equation} \\label{eq:wce-decomposition}\n e_{\\mathcal{H}(k)}(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) := \\sup_{ \\norm{h}_{\\mathcal{H}(k)} \\leq 1 } \\, \\bigg\\rvert \\int h(\\b{x}) p(\\b{x}) \\dif \\b{x} - \\sum_{i=1}^n v_i h(\\b{x}^{(i)}) \\bigg\\lvert.\n\\end{equation}\nIn this appendix we consider a fixed set of points $\\{\\b{x}^{(i)}\\}_{i=1}^n$ and employ the shorthand $e_{\\mathcal{H}(k)}(\\b{v})$ for $e_{\\mathcal{H}(k)}(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n)$.\nThen a standard result \\citep[see, for example, Section~10.2 in][]{NovakWozniakowski2010} is that the worst-case error admits a closed form\n\\begin{equation} \\label{eq:wce}\ne_{\\mathcal{H}(k)}(\\b{v}) = \\bigg( \\int \\int k(\\b{x}, \\b{y}) p(\\b{x}) p(\\b{y}) \\dif \\b{x} \\dif \\b{y} - 2 \\sum_{i=1}^n v_i \\int k(\\b{x}, \\b{x}^{(i)}) p(\\b{x}) \\dif \\b{x} + \\b{v}^\\top \\b{K} \\b{v} \\bigg)^{1\/2},\n\\end{equation}\nwhere $\\b{K}$ is the $n \\times n$ matrix with entries $[\\b{K}]_{i,j} = k(\\b{x}^{(i)}, \\b{x}^{(j)})$, and\n\\begin{equation} \\label{eq:wce-decomposition}\n \\bigg\\lvert \\int h(\\b{x}) p(\\b{x}) \\dif \\b{x} - \\sum_{i=1}^n v_i h(\\b{x}^{(i)}) \\bigg\\rvert \\leq \\norm{h}_{\\mathcal{H}(k)} e_{\\mathcal{H}(k)}(\\b{v})\n\\end{equation}\nfor any $h \\in \\mathcal{H}(k)$.\nBecause the worst-case error in~\\eqref{eq:wce} can be written as the quadratic form\n\\begin{equation*}\n e_{\\mathcal{H}(k)}(\\b{v}) = ( k_{pp} - 2\\b{v}^\\top \\b{k}_p + \\b{v}^\\top \\b{K} \\b{v} )^{1\/2},\n\\end{equation*}\nwhere $k_{pp} = \\int \\int k(\\b{x}, \\b{y}) p(\\b{x}) p(\\b{y}) \\dif \\b{x} \\dif \\b{y}$ and $[\\b{k}_p]_i = \\int k(\\b{x}, \\b{x}^{(i)}) p(\\b{x}) \\dif \\b{x}$, the weights $\\b{v}$ which minimise it take an explicit closed form:\n\\begin{equation*}\n \\b{v}_\\text{opt} = \\argmin_{\\b{v} \\in \\mathbb{R}^n} e_{\\mathcal{H}(k)}(\\b{v}) = \\b{K}^{-1} \\b{k}_p\n\\end{equation*}\n\nLet $\\Psi = \\{ \\psi_0, \\ldots, \\psi_{m-1} \\}$ be a collection of $m \\leq n$ basis functions for which the generalised Vandermonde matrix\n\\begin{equation*}\n \\b{P}_\\Psi = \\begin{bmatrix} \\psi_0(\\b{x}^{(1)}) & \\cdots & \\psi_{m-1}(\\b{x}^{(1)}) \\\\ \\vdots & \\ddots & \\vdots \\\\ \\psi_0(\\b{x}^{(n)}) & \\cdots & \\psi_{m-1}(\\b{x}^{(n)}) \\end{bmatrix},\n\\end{equation*}\nhas full rank. In this paper we are interested in weights which satisfy the semi-exactness conditions $\\sum_{i=1}^n v_i \\psi(\\b{x}^{(i)}) = \\int \\psi(\\b{x}) p(\\b{x}) \\dif \\b{x}$ for every $\\psi \\in \\Psi$. \nMinimising the worst-case error under these constraints gives rise to the weights\n\\begin{equation} \\label{eq:sard-weight-def}\n \\b{v}_\\text{opt}^\\Psi = \\argmin_{ \\b{v} \\in \\mathbb{R}^n } e_{\\mathcal{H}(k)}(\\b{v}) \\quad \\text{ s.t. } \\quad \\sum_{i=1}^n v_i \\psi(\\b{x}^{(i)}) = \\int \\psi(\\b{x}) p(\\b{x}) \\dif \\b{x} \\quad \\text{ for every } \\psi \\in \\Psi.\n\\end{equation}\nThese weights can be solved from the linear system~\\citep[Theorem~2.7 and Remark~D.1]{Karvonen2018}\n\\begin{equation*}\n \\begin{bmatrix} \\b{K} & \\b{P}_\\Psi \\\\ \\b{P}_\\Psi^\\top & \\b{0} \\end{bmatrix} \\begin{bmatrix} \\b{v}_\\text{opt}^\\Psi \\\\ \\b{a} \\end{bmatrix} = \\begin{bmatrix} \\b{k}_p \\\\ \\b{\\psi}_p \\end{bmatrix},\n\\end{equation*}\nwhere $\\b{a} \\in \\mathbb{R}^q$ is a nuisance vector and the $i$th element of $\\b{\\psi}_p$ is $\\int \\psi_{i-1}(\\b{x}) p(\\b{x}) \\dif \\b{x}$. \nNote that~\\eqref{eq:sard-weight-def} is merely a quadratic programming problem under the linear equality constraint $\\b{P}_\\Psi^\\top \\b{v} = \\b{\\psi}_p$.\n\nThese facts will be used in Appendices~\\ref{app: finite bound} and~\\ref{ap: consistency proof} to prove Proposition~\\ref{lem: KSD bound} and Theorem~\\ref{thm: consistency}.\nTheir relevance derives from the fact that $e_{\\mathcal{H}(k_0)}(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n)$ coincides with the kernel Stein discrepancy between $p$ and the discrete measure $\\sum_{i=1}^n v_i \\delta(\\b{x}^{(i)})$.\n\n\n\\section{Proof of Proposition \\ref{lem: KSD bound}} \\label{app: finite bound}\n\nThe following proof relies on the results reviewed in Appendix~\\ref{app: wce}.\n\n\\begin{proof}[of Proposition \\ref{lem: KSD bound}]\nApplying the results reviewed in Appendix~\\ref{app: wce} with $k = k_0$ and $\\psi_j = \\mathcal{L} \\phi_j$, for which $\\b{k}_p = \\b{0}$ and $\\b{\\psi}_p = \\b{e}_1$ from~\\eqref{eq: kernel ints to 0}, we see that the solution to the optimisation problem\n\\begin{equation*}\n \\b{v}_\\text{opt}^\\mathcal{F} = \\argmin_{ \\b{v} \\in \\mathbb{R}^n } e_{\\mathcal{H}(k_0)}(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) \\quad \\text{ s.t. } \\quad \\sum_{i=1}^n v_i h(\\b{x}^{(i)}) = \\int h(\\b{x}) p(\\b{x}) \\dif \\b{x} \\quad \\text{ for every } h \\in \\mathcal{F}\n\\end{equation*}\ncan be obtained by solving the linear system\n\\begin{equation*}\n \\begin{bmatrix} \\b{K}_0 & \\b{P} \\\\ \\b{P}^\\top & \\b{0} \\end{bmatrix} \\begin{bmatrix} \\b{v}_\\text{opt}^\\mathcal{F} \\\\ \\b{a} \\end{bmatrix} = \\begin{bmatrix} \\b{0} \\\\ \\b{e}_1 \\end{bmatrix}.\n\\end{equation*}\nA straightforward application of the block matrix inversion formula then gives\n\\begin{equation*}\n \\b{v}_\\text{opt}^\\mathcal{F} = \\b{K}_0^{-1} \\b{P} ( \\b{P}^\\top \\b{K}_0^{-1} \\b{P} )^{-1} \\b{e}_1 = \\b{w} ,\n\\end{equation*}\nwhere in the final equality we have recognised this expression as being identical to the weights $\\b{w}$ used in our semi-exact control functional method, i.e. $I_\\textup{SECF}(f) = \\sum_{i=1}^n w_i f(\\b{x}^{(i)})$ by~\\eqref{eqn:BSS} and~\\eqref{eq: weights}.\nBy~\\eqref{eq: kernel ints to 0} the only non-zero element on the right-hand side of~\\eqref{eq:wce} is $\\b{v}^\\top \\b{K}_0 \\b{v}$.\nThus we have characterised the weights $\\bm{w}$ in the semi-exact control functional method as the solution to the problem\n\\begin{equation} \\label{eq:integral-semi-exactness}\n \\b{w} = \\argmin_{ \\b{v} \\in \\mathbb{R}^n } (\\b{v}^\\top \\b{K}_0 \\b{v})^{1\/2} \\quad \\text{ s.t. } \\quad \\sum_{i=1}^n v_i h(\\b{x}^{(i)}) = \\int h(\\b{x}) p(\\b{x}) \\dif \\b{x} \\quad \\text{ for every } h \\in \\mathcal{F}.\n\\end{equation}\nIf $f = h + g$ with $h \\in \\mathcal{F}$ and $g \\in \\mathcal{H}(k_0)$, then it follows from the integral semi-exactness property~\\eqref{eq:integral-semi-exactness} that\n\\begin{equation*}\n \\lvert I(f) - I_{\\text{SECF}}(f) \\rvert = \\lvert I(g) - I_{\\text{SECF}}(g) + I(h) - I_{\\text{SECF}}(h) \\rvert = \\lvert\n I(g) - I_{\\text{SECF}}(g) \\rvert.\n\\end{equation*}\nApplying~\\eqref{eq:wce-decomposition} and~\\eqref{eq:integral-semi-exactness} yields\n\\begin{equation*}\n \\lvert I(f) - I_{\\text{SECF}}(f) \\leq \\norm{g}_{\\mathcal{H}(k_0)} e_{\\mathcal{H}(k_0)} (\\b{w} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) = \\norm{g}_{\\mathcal{H}(k_0)} (\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2}.\n\\end{equation*}\nSince this bound is valid for any decomposition $f = h + g$ with $h \\in \\mathcal{F}$ and $g \\in \\mathcal{H}(k_0)$ we have\n\\begin{align*}\n\\lvert I(f) - I_{\\text{SECF}}(f) \\rvert \\; \\leq \\; \\inf_{\\substack{f = h + g \\\\ h \\in \\mathcal{F}, \\, g \\in \\mathcal{H}(k_0)}} \\|g\\|_{\\mathcal{H}(k_0)} (\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2} \\; = \\; |f|_{k_0,\\mathcal{F}} \\, (\\b{w}^\\top \\b{K}_0 \\b{w})^{1\/2}\n\\end{align*}\nas claimed.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm: consistency}} \\label{ap: consistency proof}\n\nThe following proof relies on the worst-case error results reviewed in Appendix~\\ref{app: wce}, together with the following result, due to \\citet{Hodgkinson2020}, which studies the convergence of the worst-case error (i.e. the kernel Stein discrepancy) of a weighted combination of the states $\\{\\b{x}^{(i)}\\}_{i=1}^n$, where the weights $\\tilde{\\b{w}}$ are obtained by minimising the worst-case error subject to a non-negativity constraint:\n\n\\begin{theorem} \\label{thm: hodgkinson}\nLet $p$ be a probability density on $\\mathbb{R}^d$ and $k_0 : \\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ a reproducing kernel which satisfies\n\\begin{equation*}\n \\int k_0(\\b{x}, \\b{y}) p(\\b{x}) \\dif \\b{x} = 0\n\\end{equation*}\nfor every $\\b{y} \\in \\mathbb{R}^d$.\nLet $q$ be a probability density with $p\/q > 0$ on $\\mathbb{R}^d$ and consider a $q$-invariant Markov chain $(\\bm{x}^{(i)})_{i=1}^n$, assumed to be $V$-uniformly ergodic for some $V : \\mathbb{R}^d \\rightarrow [1,\\infty)$, such that \n$$\n\\sup_{\\b{x} \\in \\mathbb{R}^d} \\; V(\\b{x})^{-r} \\; \\left( \\frac{p(\\bm{x})}{q(\\bm{x})} \\right)^4 \\; k_0(\\b{x},\\b{x})^2 < \\infty \n$$ \nfor some $0 < r < 1$.\nLet\n\\begin{equation} \\label{eq: hodgkinson-weights}\n \\tilde{\\b{w}} = \\argmin_{ \\b{v} \\in \\mathbb{R}^n} e_{\\mathcal{H}(k_0) }(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) \\quad \\text{s.t.} \\quad \\sum_{i=1}^n v_i = 1 \\quad \\text{and} \\quad \\b{v} \\geq \\b{0}.\n\\end{equation}\nThen $e_{\\mathcal{H}(k_0) }(\\tilde{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) = O_P(n^{-1\/2})$.\n\\end{theorem}\n\\begin{proof}\nA special case of Theorem~1 in \\citet{Hodgkinson2020}.\n\\end{proof}\n\n\\noindent The sense in which Theorem \\ref{thm: hodgkinson} will be used is captured in the following corollary, which follows from the observation that removal of the non-negativity constraint in \\eqref{eq: hodgkinson-weights} does not increase the worst-case error:\n\n\\begin{corollary} \\label{cor: apply Hodge}\nUnder the same hypotheses as Theorem \\ref{thm: hodgkinson}, let \n\\begin{equation} \n \\bar{\\b{w}} = \\argmin_{ \\b{v} \\in \\mathbb{R}^n} e_{\\mathcal{H}(k_0) }(\\b{v} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) \\quad \\text{s.t.} \\quad \\sum_{i=1}^n v_i = 1 .\n\\end{equation}\nThen $e_{\\mathcal{H}(k_0) }(\\bar{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) \\leq e_{\\mathcal{H}(k_0) }(\\tilde{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) = O_P(n^{-1\/2})$.\n\\end{corollary}\n\n\nNow the proof of Theorem \\ref{thm: consistency} can be presented:\n\n\\begin{proof}[Proof of Theorem \\ref{thm: consistency}]\n\nFrom Assumption A2 and Lemma \\ref{lem: comput etc} we have $(\\bm{P}^\\top \\bm{K}_0^{-1} \\bm{P})^{-1}$ is almost surely well-defined.\nIn the proof of Proposition~\\ref{lem: KSD bound} we saw that\n\\begin{equation*}\n e_{\\mathcal{H}(k_0)}( \\b{w} ; \\{\\b{x}^{(i)}\\}_{i=1}^n )^2 = \\b{w}^\\top \\b{K}_0 \\b{w}\n\\end{equation*}\nand\n\\begin{align*}\n ( I(f) - I_{\\textsc{SECF}}(f) )^2 & \\leq |f|_{k_0,\\mathcal{F}}^2 \\; \\b{w}^\\top \\b{K}_0 \\b{w}\n\\end{align*}\nPlugging in the expression for $\\b{w} = \\b{K}_0^{-1} \\b{P} ( \\b{P}^\\top \\b{K}_0^{-1} \\b{P} )^{-1} \\b{e}_1$ in \\eqref{eq: weights}, we obtain\n\\begin{equation} \\label{eq: wce-11-element}\n e_{\\mathcal{H}(k_0)}( \\b{w} ; \\{\\b{x}^{(i)}\\}_{i=1}^n )^2 = [(\\b{P}^\\top \\b{K}_0^{-1} \\b{P})^{-1}]_{1,1}\n\\end{equation}\nand\n\\begin{align*}\n ( I(f) - I_{\\textsc{SECF}}(f) )^2 & \\leq |f|_{k_0,\\mathcal{F}}^2 [(\\b{P}^\\top \\b{K}_0^{-1} \\b{P})^{-1}]_{1,1} .\n\\end{align*}\nIt therefore suffices to consider the stochastic fluctuations of $[(\\b{P}^\\top \\b{K}_0^{-1} \\b{P})^{-1}]_{11}$ as $n \\rightarrow \\infty$.\nTo this end, let $[\\bm{\\Psi}]_{i,j} := \\mathcal{L} \\phi_j(\\bm{x}^{(i)})$ and consider the block matrix\n\\begin{align}\n\\frac{\\bm{P}^\\top \\bm{K}_0^{-1} \\bm{P}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} = \\left[ \\begin{array}{cc} 1 & \\frac{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{\\Psi}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} \\\\\n\\frac{\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{1}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} & \\frac{\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} . \\end{array} \\right] \\label{eq: troublesome matrix}\n\\end{align}\nFrom the block matrix inversion formula we have\n\\begin{align}\n\\left[ \\left( \\frac{\\bm{P}^\\top \\bm{K}_0^{-1} \\bm{P}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} \\right)^{-1} \\right]_{1,1} & = \\left[ 1 - \\frac{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{\\Psi} ( \\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi} )^{-1} \\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{1}}{\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1}} \\right]^{-1} \\nonumber \\\\\n& = \\left[ 1 - \\frac{\\langle \\bm{1} , \\Pi \\bm{1} \\rangle_n }{\\langle \\bm{1} , \\bm{1} \\rangle_n } \\right]^{-1} . \\label{eq: big 11}\n\\end{align}\nSince $\\Pi = \\bm{\\Psi} (\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi})^{-1} \\bm{\\Psi}^\\top \\bm{K}_0^{-1}$ and\n\\begin{align*}\n \\| \\Pi \\bm{1} \\|_n^2 & = \\bm{1}^\\top \\Pi^\\top \\bm{K}_0^{-1} \\Pi \\bm{1} \\\\\n & = \\bm{1}^\\top \\bm{K}_0^{-1} \\bm{\\Psi} (\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi})^{-1} \\bm{\\Psi}^\\top \\bm{K}_{0}^{-1} \\bm{\\Psi} (\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi})^{-1} \\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{1} \\\\\n & = \\bm{1}^\\top \\bm{K}_0^{-1} \\bm{\\Psi} (\\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{\\Psi})^{-1} \\bm{\\Psi}^\\top \\bm{K}_0^{-1} \\bm{1} \\\\\n & = \\bm{1}^\\top \\bm{K}_0^{-1} \\Pi \\bm{1} \\\\\n & = \\langle \\bm{1} , \\Pi \\bm{1} \\rangle_n ,\n\\end{align*}\nour Assumption A3 implies that \\eqref{eq: big 11} is almost surely asymptotically bounded, say by a constant $C \\in [0,\\infty)$.\nIn other words, it almost surely holds that\n$$\n[(\\b{P}^\\top \\b{K}_0^{-1} \\b{P})^{-1}]_{1,1} \\; \\leq \\; C (\\bm{1}^\\top \\bm{K}_0^{-1} \\bm{1})^{-1} \n$$\nfor all sufficiently large $n$.\n\nTo complete the proof we evoke Corollary~\\ref{cor: apply Hodge}, noting that the weights $\\bar{\\b{w}}$ defined in Corollary~\\ref{cor: apply Hodge} satisfy $e_{\\mathcal{H}(k_0) }(\\bar{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) = (\\b{1}^\\top \\b{K}_0^{-1} \\b{1})^{-1\/2}$, which follows from \\eqref{eq: wce-11-element} with $\\b{P}= \\b{1}$.\nThus from Corollary~\\ref{cor: apply Hodge} we conclude\n\\begin{equation*}\n [(\\b{1}^\\top \\b{K}_0^{-1} \\b{1})^{-1}]_{1,1}^{1\/2} = e_{\\mathcal{H}(k_0)}(\\bar{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) \\leq e_{\\mathcal{H}(k_0)}(\\tilde{\\b{w}} ; \\{\\b{x}^{(i)}\\}_{i=1}^n) = O_P(n^{-1\/2}) ,\n\\end{equation*}\nas required.\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Supplementary Material}\n\\input{body_supplement}\n\n\\end{document}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}