diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzizhf" "b/data_all_eng_slimpj/shuffled/split2/finalzzizhf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzizhf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{introduction}\n\n\\cite{P2005} presented a study of time dependent, relativistic, force-free, ideal MHD in the absence of matter by imposing a self-similar form for the solutions of the problem. In his pioneering work he assumed that time-dependence appears only through a dimensionless variable which contained in addition to the time, the speed of light and the radial distance from the centre. Then, he found a relativistic form of the Grad-Shafranov differential equation \\citep{GR1958, S1958, S1966}. However, he did not take into account any pressure due to the surrounding plasma. In this paper we built up on Prendergast's work by taking into account the effect of pressure while still aiming for equilibrium solutions. The presence of pressure leads to extra forces and the electromagnetic field is no more force-free, but now it is the net force due to the pressure and the electromagnetic field that has to be zero, thus the electromagnetic field interacts with the plasma. This interaction leads, in the most general case, to a set of non-linear partial differential equations. In this work we study forms of these equations permitting analytical solutions, so that we can have a general picture of such systems. \n\nApart from Prendergast, problems of relativistic MHD have been studied by other authors. \\cite{CLB1991} studied the asymptotic behaviour of steady, fully relativistic, axisymmetric, hydromagnetic winds and found that the flux surfaces take the form of cylinders and parabolas around the rotation axis, \\cite{CLB1992} studied self-similar solutions for relativistic winds driven by rotating magnetic fields. \\cite{C1994,C1995} studied the full ideal MHD problem for steady state cold outflows and he found that the form of the solution depends on the amount and the distribution of the electric current, he also presented self-similar solutions for the same problem. \\cite{F1997} and \\cite{FG2001} studied force-free magnetospheres near rotating black holes and found strong evidence for a hollow jet structure and applied these solutions to galactic superluminar sources. \\cite{D2005} simulated the problem of magnetic accretion in a rotating black hole taking into account the Kerr metric. \\cite{HN2003} found stationary solutions for axisymmetric, polytropic, unconfined, ideal MHD wind using the WKB method. The general motivating for these studies is relativistic outflows in the form of jets and winds related either to AGN or to stellar mass black holes. They are mainly numerical and as such they have limitations on the parameters chosen and also on the trial functions employed to solve the systems of the partial differential equations. Our treatment leads to an analytical solution, where the system of partial differential equations simplifies by the use of self-similarity and finally we only solve an ordinary differential equation numerically. Analytical solutions allow an easier study of the parameter space and a better insight on the physical behaviour, however, they require simplifications and special boundary conditions. \n\nA way to introduce time dependence in the problem is by imposing a temporally self-similar solution. 
This type of self-similarity leads to solutions which are functions of a new variable $\\tilde{x}=r^{\\lambda}v(r,t)$, which is a product of a power of the spatial coordinate and a combination of time and the spatial coordinate. This method has been used in similar problems, of which the best known is the blast wave solution by \\cite{S1946} and \\cite{T1950}. In this case a relation between the expansion radius and time is found by dimensional analysis of the physical quantities involved in the system. Then the equations are solved and provide the details of the explosion. Examples of this type of self-similarity in force-free relativistic MHD can be found in \\cite{P2005,GL2008,G2009}.\n\nThe other form of self-similarity we are going to use is related to the separation of variables. In problems depending on two spatial variables, one can seek solutions which are products of a function of the angular coordinate and a function which depends on the distance from the origin. If a suitable form is imposed for the angular function and then the equation is solved numerically for the other function we find the meridionally self-similar solutions. In the case of radially self-similar solutions a suitable form is imposed for the radial function, and then the equation is solved for the angular part of the problem. Examples of MHD problems solved by this form of self-similarity can be found in \\cite{BP1982,LB1994,ST1994,VT1998}.\n \nIn the problem we are solving in this paper, we use the self-similarity technique in two steps. In the initial formalism of the problem we demand that the time evolution of the fields will only appear through the dimensionless combination of $v=r\/(ct)$. Then, we rewrite the system of the partial differential equations using this new variable. We observe that the system separates by imposing meridionally self-similar solutions. Thus, by choosing a class of those solutions the problem reduces to the solution of an ordinary differential equation, which we integrate numerically. These numerical solutions depend on the boundary conditions and the parameters chosen. We explore the parameter space and discuss the significance of the parameters chosen and their implications for the nature of the system. \\cite{TS2007} studied a similar problem of an expanding magnetized fluid in the non-relativistic limit while taking into account Newtonian gravity. \n\n\\section{Formulation of the problem}\n\\label{formulation}\n\nWe consider a system containing an electromagnetic field ${\\bf E}$, ${\\bf B}$ as measured by an observer stationary relative to the centre of the system and a fluid of rest density $\\rho_0$ and pressure $p$. The system expands uniformly with scaled velocity \n\\begin{eqnarray}\n{\\bf v} =\\frac{r}{ct}{\\bf \\hat{e}}_r\\,.\n\\label{velocity}\n\\end{eqnarray}\nAs a result of the assumed uniform expansion, the Lagrangian derivate of the velocity vanishes and thus, each element of the system moves with constant velocity. This is a requirement for an equilibrium expansion. Had the Lagrangian derivative not been zero then the same fluid element would have suffered some acceleration or deceleration and the net force would have not been zero. The second assumption is that of axial symmetry, thus the physical quantities do not depend on the $\\phi$ coordinate. 
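For the velocity field~(\\ref{velocity}) the vanishing of the Lagrangian derivative invoked above can be verified directly: acting with the convective operator on the only non-zero component $v=r\/(ct)$ gives\n\\begin{eqnarray}\n\\Big(\\frac{\\partial}{\\partial t}+c{\\bf v}\\cdot\\nabla\\Big)\\frac{r}{ct}=-\\frac{r}{ct^{2}}+c\\,\\frac{r}{ct}\\,\\frac{1}{ct}=0\\,.\n\\end{eqnarray}\n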
The third assumption is the ideal MHD approximation, therefore in the frame of the fluid the electric field vanishes, thus \n\\begin{eqnarray}\n{\\bf E}=-{\\bf v}\\times {\\bf B}\\,.\n\\label{ohm}\n\\end{eqnarray}\nThe electromagnetic field has to satisfy Maxwell's equations\n\\begin{eqnarray}\n\\nabla \\cdot {\\bf B}=0\\,,\n\\label{divB}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\nabla \\times {\\bf E}=-\\frac{1}{c}\\frac{\\partial {\\bf B}}{\\partial t}\\,,\n\\label{faraday}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\nabla \\cdot {\\bf E}=\\frac{4 \\pi}{c}j^0\\,,\n\\label{gauss}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\nabla \\times {\\bf B}=\\frac{1}{c}\\frac{\\partial {\\bf E}}{\\partial t}+\\frac{4 \\pi}{c} {\\bf j}\\,.\n\\label{ampere}\n\\end{eqnarray}\nEquations~(\\ref{gauss}) and (\\ref{ampere}) which contain charge and current densities, allow us to determine these densities. The fluid has to satisfy the baryon mass conservation which is\n\\begin{eqnarray}\n\\Big(\\frac{\\partial}{\\partial t}+c{\\bf v}\\cdot\\nabla\\Big)(\\gamma \\rho_0)+c\\gamma \\rho_0 \\nabla \\cdot {\\bf v}=0\\,,\n\\label{continuity}\n\\end{eqnarray}\nwhere $\\gamma=(1-v^2)^{-1\/2}$ is the Lorentz factor and $\\rho_{0}$ is the rest mass density.\n\nThe momentum equation is (see, e.g., \\citealp{VK03a}) \n\\begin{eqnarray}\n-\\gamma \\rho_0\\Big(\\frac{\\partial}{\\partial t} +c{\\bf v}\\cdot \\nabla\\Big)(\\xi \\gamma c{\\bf v})-\\nabla p \n+\\frac{j^0{\\bf E}+{\\bf j} \\times {\\bf B}}{c} =0\\,,\n\\label{momentum}\n\\end{eqnarray}\nwhere the relativistic specific enthalpy (over $c^2$)\nfor a polytrope with $\\Gamma=4\/3$ is \n\\begin{eqnarray}\n\\xi=1+4\\frac{p}{\\rho_0c^{2}}\\,.\n\\end{eqnarray}\nWe have chosen $\\Gamma=4\/3$ to allow self-similar solutions \\citep{L1982}.\nFinally the entropy equation is\n\\begin{eqnarray}\n\\Big(\\frac{\\partial}{\\partial t}+c{\\bf v}\\cdot\\nabla\\Big)\\left(\\frac{p}{\\rho_0^{4\/3}}\\right) =0\\,.\n\\label{entropy}\n\\end{eqnarray}\n\nThe above system of equations has to be solved in order to determine the density and the fields. \n\nWe express the magnetic field in terms of two quantities, $P$ and $T$. The flux function $P$ depends on $v$ and $\\theta$, and is the magnetic flux that passes through a cap of semi-opening angle $\\theta$ and lies in distance $v$ from the origin in the velocity space. The function $T$ is related to the toroidal component of the magnetic field. 
The expression of the magnetic field that by construction satisfies~(\\ref{divB}) is\n\\begin{eqnarray}\n{\\bf B}=\\frac{1}{2\\pi r^{2} \\sin\\theta } \\Big(\\frac{\\partial P}{\\partial \\theta}{\\bf \\hat{e}}_r-v\\frac{\\partial P}{\\partial v}{{\\bf \\hat{e}_\\theta}}+T{{\\bf \\hat{e}_\\phi}} \\Big)\\,.\n\\label{magnetic}\n\\end{eqnarray}\nEquation~(\\ref{ohm}) gives the electric field\n\\begin{eqnarray}\n{\\bf E}=\\frac{1}{2 \\pi r^{2}\\sin\\theta}\\Big(vT{{\\bf \\hat{e}_\\theta}}+v^{2}\\frac{\\partial P}{\\partial v}{{\\bf \\hat{e}_\\phi}}\\Big).\n\\label{electric}\n\\end{eqnarray}\nThe velocity of the field lines is ${\\bf v}_{F}=c{\\bf E} \\times {\\bf B}\/|{\\bf B}^{2}|$ and it has a $\\phi$ component, this component is a geometrical effect of the expansion and the toroidal component of the magnetic field and there is no rotation of the central dipole.\n\nFor axially symmetric radial flows, the ${{\\bf \\hat{e}_\\phi}}$ component of the momentum equation~(\\ref{momentum}) yields that $\\left(j^0 {\\bf E}+{\\bf j}\\times{\\bf B}\\right)\\cdot {{\\bf \\hat{e}_\\phi}} = 0$, which leads to a differential equation for $P$ and $T$, \n\\begin{eqnarray}\n\\frac{\\partial T}{\\partial v} \\frac{\\partial P}{\\partial \\theta} - \\frac{\\partial T}{\\partial \\theta} \\frac{\\partial P}{\\partial v}-\\frac{v^{2}+1}{v(1-v^2)}T\\frac{\\partial P}{\\partial \\theta}=0\\,,\n\\label{Teq0}\n\\end{eqnarray}\nor, by multiplying~(\\ref{Teq0}) with $(1-v^{2})\/v$,\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial v}\\Big(\\frac{1-v^{2}}{v}T\\Big)\\frac{\\partial P}{\\partial \\theta}-\\frac{\\partial}{\\partial \\theta}\\Big(\\frac{1-v^{2}}{v}T\\Big)\\frac{\\partial P}{\\partial v}=0\\,.\n\\label{Jacobian}\n\\end{eqnarray}\nThis is the Jacobian of $P$ and $[(1-v^{2})\/v]T$ with respect to $v$ and $\\theta$, thus\n\\begin{eqnarray}\nT=\\gamma^2 v \\beta(P)\\,,\n\\label{Teq}\n\\end{eqnarray}\nwhere $\\beta(P)$ is an arbitrary function of $P$.\n\n\\section{Solution}\n\\label{solution}\n\n\\subsection{The entropy equation}\n\nEquation~(\\ref{entropy}) yields a relation between density and pressure $p=Q\\rho_0^{4\/3}$, where $Q$ is a function of $v$ and $\\theta$. We can use this equation to find the density as a function of the pressure $\\rho_0 = p^{3\/4}\/Q^{3\/4}$.\n\n\\subsection{The baryon mass conservation equation}\n \nSubstituting the above expression of the density in~(\\ref{continuity}) we find that the pressure has a form that can be conveniently written as\n\\begin{eqnarray}\np= p_0 \\frac{\\gamma^4 v^4 }{r^4}\\,,\n\\label{pressure}\n\\end{eqnarray}\nwhere $p_0$ is a function of $v$ and $\\theta$.\n\nThen, the density is given by\n\\begin{eqnarray}\n\\rho_0= \\left(\\frac{p_0}{Q}\\right)^{3\/4} \\frac{\\gamma^3 v^3}{r^3} \\,.\n\\label{density}\n\\end{eqnarray}\n\n\\subsection{Maxwell's equations}\n\nBy construction, the form of the magnetic field chosen satisfies~(\\ref{divB}). The induction equation~(\\ref{faraday}) is also satisfied for the adopted forms of the magnetic and electric fields. The other two equations have the current and charge densities that are not determined yet. By solving equations~(\\ref{gauss}) and (\\ref{ampere}) for $j^{0}$ and ${\\bf j}$ respectively, we express these quantities in terms of the functions $P$ and $\\beta$. 
The resulting expressions for the charge and current densities are\n\\begin{eqnarray}\n\\frac{j^{0}}{c}&=&\\frac{\\gamma^2 v^{2}}{8 \\pi^{2}r^3}\\frac{{\\rm d}\\beta}{{\\rm d}P}\\frac{\\partial P}{\\partial \\theta}\\,,\n\\label{charge}\n\\\\\n{\\bf j}&=&\\frac{c}{8 \\pi^{2} r^{3} \\sin \\theta}\\Big\\{v\\frac{{\\rm d}\\beta}{{\\rm d}P}\\Big[\\gamma^2\\frac{\\partial P}{\\partial \\theta}{\\bf \\hat{e}}_r-v\\frac{\\partial P}{\\partial v}{{\\bf \\hat{e}_\\theta}}\\Big]+ \\Big[-\\frac{v^{2}}{\\gamma^2}\\frac{\\partial^{2}P}{\\partial v^{2}}+2v^{3}\\frac{\\partial P}{\\partial v}-\\sin^2 \\theta \\frac{\\partial^2 P}{\\partial \\left(\\cos\\theta\\right)^2}\\Big]{{\\bf \\hat{e}_\\phi}}\\Big\\} \\,.\n\\label{current}\n\\end{eqnarray}\nNext we use these results to solve the momentum equation. \n\n\n\\subsection{The momentum equation}\n\\label{mom_section}\n\nThe momentum equation~(\\ref{momentum}) contains three terms. The first one is the inertia term ${\\bf f}_{\\rm I}$, which is proportional to the derivative of the relativistic specific enthalpy $\\xi$. The second term, ${\\bf f}_{p}$, is due to the pressure gradient. The third term, ${\\bf f}_{\\rm em}$ is due to the electromagnetic forces. We are going to evaluate each term of this equation and seek analytical and semi-analytical solutions. The first term is\n\\begin{eqnarray}\n{\\bf f}_{\\rm I}=4\\gamma^2v^2\\frac{p}{r} {\\bf \\hat{e}}_r\\,.\n\\label{inertia_term}\n\\end{eqnarray}\nThe second term, using~(\\ref{pressure}), is\n\\begin{eqnarray}\n{\\bf f}_{p}=-\\frac{\\gamma^4 v^4}{r^4} \\nabla p_0\n- 4\\gamma^2v^2\\frac{p}{r} {\\bf \\hat{e}}_r \\,.\n\\label{pressure_term}\n\\end{eqnarray}\nFrom the sum ${\\bf f}_{\\rm I}$ and ${\\bf f}_{p}$ only the first term of ${\\bf f}_{p}$ survives and the effect of inertia is cancelled by the second term of the pressure force. This is because we have chosen a configuration that expands uniformly and as such there is no acceleration on the fluid. The ${\\bf f}_{\\rm I}$ is a pseudo-force that appears because of the choice of the frame of reference.\n \nThe third term is\n\\begin{eqnarray}\n{\\bf f}_{\\rm em}=-\\frac{{\\cal F}}{16 \\pi^3 r^4\\sin^2\\theta} \\nabla P \\,,\n\\label{em_term}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n{\\cal F} =\n\\frac{v^2}{\\gamma^2} \\frac{\\partial^{2} P}{\\partial v^{2}}\n-2v^{3}\\frac{\\partial P}{\\partial v}\n+\\sin^2 \\theta \\frac{\\partial^2 P}{\\partial (\\cos \\theta)^2}\n+\\gamma^2 v^2 \\beta \\frac{d \\beta}{dP}\n\\,.\n\\label{GS}\n\\end{eqnarray}\n${\\cal F}=0$ is the relativistic form of the Grad-Shafranov equation for uniform expansion in the force-free limit, as it was formulated by \\cite{P2005}. Indeed our study in the case of negligible pressure reduces to this equation, whose detailed study can be found in \\cite{GL2008}. In the present paper we study cases where the fluid pressure is no longer negligible. The total force on a volume element due to gas pressure and electromagnetic interaction is zero. If ${\\cal F} \\neq 0$ the electromagnetic force is nonzero; it is normal to the magnetic field (since $\\nabla P \\ \\bot \\ {\\bf B}$) and, depending on the sign of ${\\cal F}$, points towards the axis or in the opposite direction.\n\nNote that if gravity is non-negligible, a fourth term should be added on the left-hand side of the momentum equation~(\\ref{momentum}) and time and space coordinates have to be modified according to the metric. 
This gravitational term is ${\\bf f}_{G}=-\\gamma^2 \\rho_{0}c^{2}\\xi \\nabla \\ln h$ (see e.g., \\citealp{ML1986,Meliani2006}), where $h=(1-r_{\\rm S}\/r)^{1\/2}$ is the redshift factor with $r_{\\rm S}=2GM_{*}\/c^{2}$ the Schwarzschild radius for a central mass $M_{*}$. The appearance of the redshift factor, which is solely a function of $r$, makes the separation of variables ($v\\,, \\theta$) impossible. For $r\\gg r_{\\rm S}$ this factor can be approximated as $h \\approx 1$ and the gravitational term simplifies to\n\\begin{eqnarray}\n\\label{gravity}\n{\\bf f}_{G}&=&- \\frac{\\gamma^2 \\xi \\rho_0 G M_{*}}{r^2} {\\bf \\hat{e}}_r = -\\frac{p_{0}^{3\/4}\\gamma^{5}v^{3}GM_{*}}{Q^{3\/4}r^{5}}\\Big(1+\\frac{4 p_{0}^{1\/4}Q^{3\/4}\\gamma v}{c^{2}r}\\Big){\\bf \\hat{e}}_r\\,.\n\\end{eqnarray}\nIt consists of two terms, of which the first one is proportional to $r^{-5}$ and the second one is proportional to $r^{-6}$. This combination again does not permit self-similar solutions, as all the other terms appearing in the momentum equation are proportional to $r^{-5}$. A possible way of taking partially into account gravity is by assuming a plasma at distances $r\\gg r_{\\rm S}$, with non-relativistic temperatures ($p\\ll \\rho_0 c^2$) and setting $\\xi \\approx 1$, so that in the inertia term the derivative of $\\xi$ will be taken into account, but in the gravitational term it will be set to $\\xi=1$. This allows separation of variables, but also adds an extra constraint on $Q$. In our solutions we decided not take into account gravity. This is not an absurd assumption, as we are interested in late stages of expanding systems where the fluid has reached relativistic velocities and has already expanded a lot so that it is not close enough to the central mass for gravity to have an important effect. \nWe remark that while preparing this paper for publication, a similar study by \\cite{TAM2009} appeared. These authors also attempted to include gravity; in fact the gravitational term is very important in their approach, since its magnitude is such as to balance the other forces for a {\\emph{given}} poloidal magnetic field. 
However, they omitted a factor $\\gamma \\xi$ in the gravitational term of the momentum equation, compare our equation~(\\ref{gravity}) with the last term of their equation~(2).\nOur approach is different: we {\\emph{find}} the poloidal magnetic field that corresponds to flows in which gravity is unimportant.\n\nWe now substitute the force densities found in equations~(\\ref{inertia_term}) -- (\\ref{em_term}) in the momentum equation to find \n\\begin{eqnarray}\n{\\cal F}\\nabla P + 16 \\pi^3 \\sin^2\\theta \\gamma^4 v^4 \\nabla p_0 = 0 \\,,\n\\label{mom1}\n\\end{eqnarray}\nA direct consequence is that $\\nabla p_0 \\parallel \\nabla P$, or,\n\\begin{eqnarray}\np_0=p_0(P) \\,.\n\\end{eqnarray}\nEquation~(\\ref{mom1}) then becomes\n(after substituting ${\\cal F}$ from~\\ref{GS})\n\\begin{eqnarray}\nv^2 \\frac{\\partial^{2} P}{\\partial v^{2}}\n-\\frac{2v^3}{1-v^2}\\frac{\\partial P}{\\partial v}\n+\\frac{\\sin \\theta}{1-v^2} \\frac{\\partial}{\\partial \\theta} \\left(\\frac{1}{\\sin \\theta} \\frac{\\partial P}{\\partial \\theta} \\right) +\\frac{v^2}{(1-v^2)^2} \\beta \\frac{{\\rm d} \\beta}{{\\rm d} P}\n+ 16 \\pi^3 \\sin^2\\theta \\frac{v^4}{(1-v^2)^3} \\frac{{\\rm d} p_0}{{\\rm d}P}=0 \\,.\n\\label{MOM2}\n\\end{eqnarray}\nThis is the necessary condition that the $r$ and $\\theta$ components of the momentum equation (\\ref{momentum}) are both zero, the $\\phi$ component of the momentum equation is zero as shown by~(\\ref{Teq0}). All pressure and inertia effects are included through the last term of the previous equation, which we shall call pressure-inertia term. In the case $p_0= $ const the ${\\bf f}_{\\rm I}$ and ${\\bf f}_{p}$ forces cancel each other, and we are back in the force-free case ${\\cal F}=0$.\n\nAs explained in Appendix~\\ref{appA}, the only nontrivial semi-analytic solution\nof the previous equation corresponds to\n\\begin{eqnarray}\nP=g(v) \\sin^2 \\theta \\,, \n\\quad\n\\beta \\frac{{\\rm d} \\beta}{{\\rm d}P}= c_0 P \\,, \n\\quad\n\\frac{{\\rm d} p_0}{{\\rm d}P} = \\frac{c_1}{16 \\pi^3} \\,,\n\\label{ss}\n\\end{eqnarray}\nwhere $c_0$ and $c_1$ are constants. \n\n\nEquations~(\\ref{magnetic}), (\\ref{electric}), (\\ref{pressure}) and (\\ref{density}) give the expressions of the physical quantities for the self-similar solution\n\\begin{eqnarray}\n{\\bf B}=\n\\frac{g}{\\pi r^2} \\cos \\theta {\\bf \\hat{e}}_r \n-\\frac{v g'}{2 \\pi r^2} \\sin \\theta {{\\bf \\hat{e}_\\theta}}\n+\\frac{g}{2 \\pi r^2} \\frac{\\beta}{P} \\gamma^2 v \\sin \\theta {{\\bf \\hat{e}_\\phi}} \\,,\n\\label{newB}\n\\end{eqnarray}\n\\begin{eqnarray}\n{\\bf E}=\n\\frac{g}{2\\pi r^2} \\frac{\\beta}{P} \\gamma^2 v^2 \\sin \\theta {{\\bf \\hat{e}_\\theta}}\n+\\frac{v^2 g'}{2 \\pi r^2} \\sin \\theta {{\\bf \\hat{e}_\\phi}} \\,,\n\\end{eqnarray}\nthe density and pressure are given by equations~(\\ref{pressure}) and (\\ref{density}) and we can substitute for $\\beta$ and $p_{0}$\n\\begin{eqnarray}\n\\beta = \\pm \\left(c_0 P^2 + \\beta_{00}\\right)^{1\/2} \n\\,, \\quad \np_0=p_{00}+\\frac{c_1}{16 \\pi^3} g \\sin^2 \\theta \n\\,,\n\\label{bp}\n\\end{eqnarray}\nand a prime denotes derivative with respect to $v$.\nHere $\\beta_{00}$ and $p_{00}$ are constants, and $Q$ is a free function of $v$ and $\\theta$.\nThe $\\beta_{00}$ should vanish so that the azimuthal component of the magnetic field remains finite on the axis. 
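Explicitly, with $P=g\\sin^{2}\\theta$ the azimuthal component in~(\\ref{newB}) is $B_{\\phi}=\\gamma^{2}v\\beta\/(2\\pi r^{2}\\sin\\theta)$, and since $P\\to 0$ on the axis,\n\\begin{eqnarray}\nB_{\\phi}\\to \\pm\\frac{\\gamma^{2}v\\,\\beta_{00}^{1\/2}}{2\\pi r^{2}\\sin\\theta} \\quad {\\rm as} \\quad \\theta\\to 0\\,,\n\\end{eqnarray}\nwhich diverges unless $\\beta_{00}=0$.\n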
Thus, $\\beta = \\pm c_0^{1\/2} g \\sin^2 \\theta $ and $c_0$ must not be negative.\nOn the other hand $p_{00}$ should be a non-negative constant so that the pressure does not vanish on the axis. The positive (negative) sign of $c_1$ corresponds to increasing (decreasing) pressure as we move away from the axis. In addition, the sign of $c_1$ controls the sign of ${\\cal F}$: they are opposite since equation~(\\ref{mom1}) yields ${\\cal F}+ c_1 \\sin^2 \\theta \\gamma^4 v^4=0$. For $c_1>0$ the electromagnetic force points away from the axis and the sum of inertia and pressure forces points toward the axis; for $c_1<0$ the opposite.\nNote that the freedom of an additive constant in the expression of the pressure means that the physical behaviour of the system does not change if we assume a background pressure which is proportional to $\\gamma^{4}\/(ct)^{4}$. The reason is that this part of the pressure introduces additional terms in the inertia and pressure gradient forces which cancel each other, see equations~(\\ref{inertia_term})~and~(\\ref{pressure_term}).\n\nThen by substituting (\\ref{ss}) into~(\\ref{MOM2}), the latter becomes the following ordinary differential equation for $g(v)$ \n\\begin{eqnarray}\nv^2 g'' -\\frac{2v^3 g'}{1-v^2}-\\frac{2g}{1-v^2}\n+\\frac{c_0v^2 g}{(1-v^2)^2}\n+\\frac{c_1 v^{4}}{(1-v^{2})^3}=0\\,,\n\\label{ode}\n\\end{eqnarray}\nwhere the first three terms depend on the poloidal magnetic flux, the fourth is related to the toroidal magnetic field, and the fifth comes from the sum of inertia and pressure terms of the momentum equation.\n\nEquation~(\\ref{ode}) can be solved numerically for various values of the parameters $c_0$ and $c_1$. These parameters are related to the relative importance of the toroidal field and the fluid pressure and inertia compared with the poloidal field, see equations~(\\ref{ss}). We present the results of the numerical solution in the next section. \n\n\\section{Results}\n\nWe have solved numerically equation~(\\ref{ode}), which is a second order ordinary differential equation. Our motivation is to describe a physical system where some magnetic flux emerges from a surface $v=v_{0}$ and expands uniformly within a sphere in the velocity space extending to $v_{\\rm max}\\approx 1$. We normalize $g(v)$ to the dimensionless function $\\tilde{g}=g\/g(v_{0})$, therefore the first boundary condition is $\\tilde{g}(v_{0})=1$. This choice of normalization affects $c_{1}$ which is now replaced by the dimensionless $\\tilde{c}_{1}=c_{1}\/g(v_{0})$. The other boundary condition comes from the fact that the flux does not go further than $v_{\\rm max}$, thus $\\tilde{g}(v_{\\rm max})=0$. Subject to these boundary conditions we are going to solve the equation for various combinations of the parameters appearing. However, before moving to these numerical solutions we investigate its asymptotic behaviour for $v\\ll 1$.\n\n\\subsection{Asymptotic solutions for $v\\ll 1$}\n\nWhen we focus on the non-relativistic limit $v\\ll 1$, the differential equation simplifies. The factor $\\gamma=(1-v^{2})^{-1\/2}$ is close to unity, thus the equation initially reduces to\n\\begin{eqnarray}\nv^{2}\\tilde{g}''-2v^{3}\\tilde{g}'+(c_{0}v^{2}-2)\\tilde{g}+\\tilde{c}_{1}v^{4}=0.\n\\label{asym}\n\\end{eqnarray}\nThe Frobenius expansion at small $v$ with the first term $\\tilde{g} \\propto v^F$ gives the indicial equation $(F+1)(F-2)=0$ for $F<4$.\\footnote{There is also the possibility $F=4$, with the solution $\\tilde{g}\\approx -(c_1\/10)v^4$. 
However, this case corresponds to overcollimated field lines and for that reason is rejected.} The solution with $F=2$ corresponds to cylindrical field lines, while the $F=-1$ to a dipolar magnetic field. Since we are interested in a physical system where the flux is generated by a central source and then expands we study the dipolar solution with $F=-1$.\n\n\n\\subsection{Parameter study}\n\nThe relative importance of the physical quantities determining the behaviour of the fluid is parametrized by the two constants appearing on~(\\ref{ode}). The ratio of $\\beta$ over $P$ is $\\pm c_0^{1\/2}$, therefore a larger value of $c_{0}$ corresponds to a stronger toroidal component of the field. The definition of $\\tilde{c}_{1}$, similarly, relates the plasma pressure-inertia to the magnetic field. The asymptotic behaviour of~(\\ref{ode}) demonstrates that these terms are not important if $v$ is small and it is the poloidal flux emerging from the central dipole that dominates. As these terms are multiplied by powers of $\\gamma$ the solution will depend strongly on both the choice of the parameters and on the choice of $v_{\\rm max}$. The pressure-inertia term is multiplied by $\\gamma^{6}$, thus at very high velocities it is this term that determines the solution, whereas in intermediate velocities it is the combination of the parameters chosen. Therefore the solution depends on three parameters $c_{0}$, $\\tilde{c}_{1}$ and $v_{\\rm max}$. In our parameter study we examine the problem for various combinations of the parameters, the cases studied are for $c_{0}=0, 0.5, 1$, for $\\tilde{c}_{1}=0, 0.01, 0.1$ and for $v_{\\rm max}=0.95, 0.97, 0.99$. We have plotted the expansion factor $F={\\rm d} \\ln \\tilde{g}\/{\\rm d} \\ln v$ for $\\tilde{c}_{1}=0.1$ and $c_{0}=1$ (figure~\\ref{fig:g_c0_1}). The results are similar for $c_{0}=0, 0.5$. The value of $F$ demonstrates the behaviour of the field lines; $F=-1$ corresponds to a dipole, and $F=2$ to a cylindrical field, $F=0$ with $F'>0$ gives the x-type points and $F=0$ with $F'<0$ gives the focal points of the field lines. \n\nWe have chosen relatively small values for $c_{0}$ and $\\tilde{c}_{1}$ so that $v_{\\rm max}$ can be close to unity. Had we chosen larger values for these parameters, the flux would have been zero before reaching relativistic velocities. If we continue the integration further than $v_{\\rm max}$, even for small values of $c_{0}$ and $\\tilde{c}_{1}$, then $\\tilde{g}$ will oscillate around zero, these oscillations correspond to closed loops causally disconnected from the base \\citep{TS2007}.\n\nWe also study the problem for negative values of $\\tilde{c}_{1}$, (figure~\\ref{fig:g_c1<0}) this corresponds to systems where the pressure decreases as we move away from the axis. We find from~(\\ref{bp}) that it is essential to include a background pressure $p_{00}$, so that none of the gas pressure or the density are negative.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Figure1.pdf}\n \\caption{The expansion factor $F={\\rm d} \\ln \\tilde{g}\/{\\rm d} \\ln v$ according to the numerical solution of equation~(\\ref{ode}) for $c_{0}=1$ and $\\tilde{c}_{1}=0.1$. The solid line is for $v_{\\rm max}=0.95$, the dashed one for $v_{\\rm max}=0.97$ and the dotted one for $v_{\\rm max}=0.99$. All solutions converge to $-1$ for $v$ small so they have a dipolar behaviour. When $v \\approx 1$ the expansion factor becomes very small and negative, so that the field lines close within the light sphere. 
In intermediate distances the ones for $v_{\\rm max}=0.97$ and $v_{\\rm max}=0.99$ have a positive $F$, so the field lines have x-type points and then focal points.}\n\\label{fig:g_c0_1}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Figure2.pdf}\n \\caption{The expansion factor $F={\\rm d} \\ln \\tilde{g}\/{\\rm d} \\ln v$ according to the numerical solution of equation~(\\ref{ode}) for negative $\\tilde{c}_{1}$. Depending on the choice of the parameters the solution either diverges to infinity or becomes zero for $v<1$. The parameters chosen are $c_{0}=0.5$, $\\tilde{g}(0.1)=1$ and $\\tilde{g}'(0.1)=-9.97$ which are the same for all curves. Then the equation was integrated for $\\tilde{c}_{1}=-0.01$ (solid line), $\\tilde{c}_{1}=-0.1$ (dashed line) and $\\tilde{c}_{1}=-0.5$ (dotted line). It is evident that when the pressure dominates over the other forces it requires more flux to achieve equilibrium.}\n\\label{fig:g_c1<0}\n\\end{figure}\n\nThe numerical solution for any combination of parameters verifies the fact that the field behaves like a dipole near the origin. Then, when it approaches the upper limit $v_{\\rm max}$ the field deviates from the dipolar structure. When $v_{\\rm max}$ comes closer to unity or $\\tilde{c}_{1}$ is relatively large and positive there is more flux generated which forms closed loops. These loops are contained within a separatrix surface corresponding to the root of the expansion factor $F$ with $F'>0$; and have focal points which form a circle on the equatorial plane whose radius is determined by the position of the root of $F$ with $F'<0$. The physical explanation for the formation of these loops is that there is more electromagnetic pressure needed to force the flux to reach a higher velocity. A higher velocity leads to a greater Lorentz factor multiplying the rest mass of the plasma. Thus a small increase in the $v_{\\rm max}$ leads to a dramatic increase in the pressure-inertia term and since the flux emerging from the spherical surface is limited, more flux has to be generated somehow in order to balance the forces. This extra flux appears in the form of these closed loops. The pressure-inertia term has also an effect on the collimation of the field: as it becomes more important the field lines become parallel to the axis, leading to a collimated magnetic field. The collimation and the closed loops are evident in figure~\\ref{fig:P_c0_1} where the poloidal magnetic field lines (sections of surfaces of constant flux with a meridional plane) are plotted. There is also a very strong field near the $v_{\\rm max}$. In this area the field has a very weak radial component whereas its $\\theta$ component is large marking the turn over of the field lines, as they are not permitted to exceed $v_{\\rm max}$. By comparison to the inertia-free case we have found that the addition of inertia has a more important effect, as we expected, because the pressure-inertia forces, parametrized by $\\tilde{c}_{1}$ are multiplied in~(\\ref{ode}) by a factor of $\\gamma^{6}$, thus they are very sensitive to $v_{\\rm max}$. \n\nWhen $\\tilde{c}_{1}<0$ the system behaves differently. The direction of the pressure-inertia force term is opposite to the direction of the electromagnetic force arising from $\\nabla P$, therefore it is their relative intensity that determines the fate of $\\tilde{g}$. The details depend on the initial conditions and the parameters but there are two families of solutions. 
For relatively large $|\\tilde{c}_{1}|$ the pressure-inertia term becomes strong compared to the electromagnetic terms of the momentum equation early enough; then $\\tilde{g}$ increases fast and diverges to infinity at $v=1$. This generates infinite pressure and flux, making this configuration unacceptable. On the other hand for relatively small $|\\tilde{c}_{1}|$ the electromagnetic term dominates over the pressure-inertia term in~(\\ref{ode}) and $\\tilde{g}$ becomes zero for $v<1$, therefore the field is confined within a sphere and there are no infinite fields or pressure (figure~\\ref{fig:Nc1}). These fields are acceptable.\n\nIn non-relativistic force-free MHD the field lines coincide with the current lines, whereas now with nonzero displacement current the picture is not so simple, as there are also forces between the charge and the electric field, forces due to the gradient of pressure and inertia forces. However, in our uniformly expanding model the electromagnetic force is normal to the lines of constant $P$, since it is proportional to $\\nabla P$, and as $\\nabla P \\parallel \\nabla p_{0}$ so do the pressure-inertia forces. The relative importance of the various forces appears in the momentum equation~(\\ref{ode}), in particular $v^2 g'' -[2v^3\/(1-v^2)]g'-[2\/(1-v^2)] g$ expresses the electromagnetic forces due to the poloidal magnetic field, $[c_0v^2\/(1-v^2)^2]g$ expresses the electromagnetic forces due to the toroidal magnetic field, and $c_1 v^{4}\/(1-v^{2})^3$ expresses the forces due to pressure and inertia. We plot these terms for the case of $c_{0}=1$, $\\tilde{c}_{1}=0.1$ and $v_{\\rm max}=0.99$ in figure~\\ref{fig:Forces}. The results for the relative intensity of forces are similar for other combinations of the parameters. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{Figure3.pdf}\n \\caption{Plot of the poloidal field lines for $c_{0}=1.0$, and combinations of $\\tilde{c}_{1}$ and $v_{\\rm max}$. The first row (a, b, c) corresponds to $\\tilde{c}_{1}=0$, the second (d, e, f) to $\\tilde{c}_{1}=0.01$ and the third (g, h, i) to $\\tilde{c}_{1}=0.1$. The first column (a, d, g) corresponds to $v_{\\rm max}=0.95$, the second (b, e, h) to $v_{\\rm max}=0.97$ and the third (c, f, i) to $v_{\\rm max}=0.99$. The field lines near the origin have a dipole structure, but as they approach $v=v_{\\rm max}$ they close, whereas in an ideal dipole there would not be such a boundary. The external circle marks the light sphere where the expansion velocity formally reaches the speed of light. The dotted line in (h) and (i) is the separatrix which corresponds to the local minimum of the flux and encircles the closed lobes that appear; the lobes have a central focus which lies at the equatorial plane.}\n\\label{fig:P_c0_1}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Figure4.pdf}\n \\caption{Left: Plot of the poloidal field lines for $c_{0}=0.5$, $\\tilde{c}_{1}=-0.01$ and $\\tilde{g}'(v_{0})=-9.97$. The pressure-inertia force is weak and causes little modification to the solution compared to the previous ones. Right: Plot of the poloidal field lines for $c_{0}=0.5$, $\\tilde{c}_{1}=-0.5$ and $\\tilde{g}'(v_{0})=-9.97$. The separatrix field line that corresponds to expansion factor $F=0$ is the dotted line. The field lines enclosed by the separatrix have the usual dipolar structure; however, the ones that are not enclosed emerge from a monopole at $v=1$ and $\\theta=-\\pi$ and finish at a second monopole at $v=1$ and $\\theta=\\pi$. 
Fields containing monopoles are unphysical and thus unacceptable. These monopoles appear because we are trying to construct a field that is impossible, as we request uniform expansion and the total force to be zero. Thus we need more magnetic flux to balance the very strong force due to the inertia-pressure term. Since we allow only limited flux to emerge from the central sphere the extra flux emerges from the poles of the light sphere. This is the reason $\\tilde{g}$ goes to infinity at $v=1$, depicting the need for extra flux.}\n\\label{fig:Nc1}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Figure5.pdf}\n \\caption{The terms of equation~(\\ref{ode}) for $c_{0}=1$, $\\tilde{c}_{1}=0.1$, $\\tilde{g}(0.1)=1$ and $v_{\\rm max}=0.99$. The solid line corresponds to $|v^2 g'' -[2v^3\/(1-v^2)]g'-[2\/(1-v^2)] g|$, the absolute value of the force due to the poloidal field, the dashed line is the force due to the toroidal field $[c_0v^2\/(1-v^2)^2]g$ and the dotted line is $c_1 v^{4}\/(1-v^{2})^3$. When $v$ is small the forces of the toroidal field balance the poloidal; however, near $v_{\\rm max}$ it is the forces due to the pressure that become important.}\n\\label{fig:Forces}\n\\end{figure}\n\n\\section{Physical quantities}\n\nIn this section we study the relation of the parameters appearing in the equations to physical quantities. The physical quantities we are interested in are the energy and the twist of the field lines. \n\n\\subsection{Energy}\n\nThe magnetised fluid contains energy in four forms: electromagnetic, kinetic, rest mass and thermal energy. The energy equation written in conservative form is $\\partial T^{00} \/ \\partial t + \\nabla \\cdot \\left(c T^{0j} \\hat x_j \\right) = 0$ where $T^{\\mu \\nu}$ is the energy-momentum tensor (see e.g., \\citealp{VK03a}), which gives\n\\begin{eqnarray}\n&& \\frac{\\partial}{\\partial t}\\Big(\\xi \\gamma^2 \\rho_0 c^2 - p+\\frac{B^2+E^2}{8\\pi}\\Big)+ \\nabla \\cdot \\Big(\\xi\\rho_0c^2 \\gamma^2 c{\\bf v} +\\frac{c}{4\\pi} {\\bf E} \\times {\\bf B} \\Big) =0 \\,.\n\\label{energy_eq}\n\\end{eqnarray}\nThe first two terms inside the time derivative of the above equation represent the energy density of the fluid. It consists of the rest $\\gamma\\rho_0 c^2$, kinetic $\\left(\\gamma-1\\right)\\gamma\\rho_0 c^2$, and thermal $\\left(4\\gamma^2-1\\right)p$ energy densities. The last term inside the time derivative represents the energy density of the electromagnetic field. \nSimilarly, the terms inside the space derivative correspond to the fluid energy flux and the Poynting flux. We now apply the Poynting theorem\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial t}\\Big(\\frac{B^2+E^2}{8\\pi}\\Big)+\\nabla \\cdot \\Big(\\frac{c}{4\\pi} {\\bf E} \\times {\\bf B} \\Big) =-\\bm j \\cdot {\\bm E} \\,,\n\\label{poynting_theorem}\n\\end{eqnarray}\nso that equation~(\\ref{energy_eq}) yields\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial t}(\\xi \\gamma^2 \\rho_0 c^2 - p)+\\nabla \\cdot (\\xi\\rho_0c^2 \\gamma^2 c{\\bf v}) ={\\bf j} \\cdot {\\bf E} \\,.\n\\end{eqnarray}\nThus the term ${\\bf j} \\cdot {\\bf E}$ measures the energy transfer between the fluid and the electromagnetic field. 
In our model this term equals\n\\begin{eqnarray}\n{\\bf j} \\cdot {\\bf E} = {\\bf j} \\cdot ({\\bf B} \\times {\\bf v}) = \n( {\\bf j} \\times {\\bf B} ) \\cdot {\\bf v} = c {\\bf v} \\cdot {\\bf f}_{\\rm em}\n=\\frac{c \\gamma^4 v^6 g' \\sin^2 \\theta}{r^5} \\frac{{\\rm d}p_0}{{\\rm d}P}\n=\\frac{c_1 c \\gamma^4 v^6 g' \\sin^2 \\theta}{16 \\pi^3 r^5} \\,.\n\\end{eqnarray}\nThe sign of this quantity shows the flow of the energy, when it is positive energy flows from the field to the fluid and vice versa. This sign depends only the product $c_{1}g'$, since all the other terms are positive.\n %\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{Figure6.pdf}\n \\caption{Plot of the energy density of the electromagnetic field for $c_{0}=1$. The first line (a, b, c) has $c_{1}=0$, the second (d, e, f) $c_{1}=0.01$ and the third (g, h, i) $c_{1}=1$. The first column (a, d, g) reaches $v_{\\rm max}=0.95$, the second column (b, e, h) $v_{\\rm max}=0.97$ and the third (c, f, i) $v_{\\rm max}=0.99$. }\n\\label{fig:e_c0_1}\n\\end{figure}\nThe energy density of the electromagnetic field is plotted in figure~\\ref{fig:e_c0_1}, the energy density of the fluid depends on the choice of $p_{00}$ and although it is related to the energy density of the electromagnetic field it cannot be defined in an rigorous way. \n\n\\subsection{Twist}\n\nWe now have an expression for the fields everywhere in space. We can evaluate the twist of the magnetic field lines. The lines of force of the magnetic field are determined by the relation:\n\\begin{eqnarray}\n\\frac{{\\rm d}r}{B_{r}}=\\frac{r {\\rm d} \\theta}{B_{\\theta}}=\\frac{r \\sin \\theta {\\rm d} \\phi}{B_{\\phi}}\\,.\n\\label{lines}\n\\end{eqnarray}\nWe substitute the expressions for the magnetic field~(\\ref{newB}) in the equation for the field lines~(\\ref{lines}). From the first equality we take $g\\sin^{2} \\theta=P_{i}$, where the constant $P_{i}$ is the value of $P$ for this particular field line. Then we equate the first and the third part of equation~(\\ref{lines})\n\\begin{eqnarray}\n{\\rm d} \\phi =\\pm \\frac{\\gamma^{2}c_{0}^{1\/2}{\\rm d}r}{2ct(1-P_{i}\/g)^{1\/2}}\\,.\n\\label{lines2}\n\\end{eqnarray}\nIn this expression (\\ref{lines2}) $\\gamma$ and $g$ depend on $v$ whereas the integral is to be done in $r$, however we can change the variable of integration from $r$ to $v$, by including the $ct$ of the denominator in the differential. The limits of integration are $v=v_{0}$, the surface the field line emerges from, and the maximum distance $v=v_{i}$ it reaches in velocity space. This upper limit of integration is the solution of the equation $g(v_{i})=P_{i}$. The twist for a given field line between two points in velocity space is constant with time as it only depends on $v$. The physical meaning of this result is that the field lines have already been twisted before the expansion starts and then they merely expand radially. By symmetry the total twist for the outgoing part (from $v_{0}$ to $v_{i}$) and the incoming part (from $v_{i}$ to $v_{0}$) is twice the integral from $v_{0}$ to $v_{i}$. 
The total twist is\n\\begin{eqnarray}\n\\Phi=\\pm c_{0}^{1\/2}\\int_{v_{0}}^{v_{i}}\\frac{\\gamma^{2} {\\rm d}v}{(1-P_{i}\/g)^{1\/2}}\\,.\n\\label{PHI}\n\\end{eqnarray}\n\nThe twist explicitly depends on $c_{0}$ as this determines the ratio of the toroidal to the poloidal field; in addition, the form of $g$ that appears in the integral depends on all the parameters and the boundary conditions, so apart from $c_{0}$ the choice of both $c_{1}$ and $v_{\\rm max}$ has an effect on the value of the twist. The presence of $P_{i}$ in the integral demonstrates that the twist depends also on the individual field line we choose to study. The field line that emerges from the pole corresponds to $P_{i}=0$ and the twist only depends on $c_{0}$ and $v_{\\rm max}$, as the dependence on $g$ drops out, since it is multiplied by zero.\n\n\n\\section{Discussion}\n\nIn this paper we have studied the problem of the uniform expansion of a magnetised fluid. The configuration consists of a polytrope with $\\Gamma=4\/3$ corresponding to a completely degenerate electron gas in the extreme relativistic limit and an electromagnetic field that satisfies ideal MHD. This value is the only one allowing separation of variables; we remark that the same type of polytrope was chosen by \\cite{L1982} and \\cite{TS2007} who studied non-relativistic MHD flows. In this paper we study solutions that are connected to the origin, whereas \\cite{TS2007} study fields that form magnetic islands disconnected from the origin. Their study is more appropriate for isolated magnetic plasmoids that have been twisted and lost causal connection with the parent object. \n\nThe constraint of uniform expansion is introduced through the use of the dimensionless variable $v=r\/ct$ which characterises each element of the magnetised fluid, so that it moves with constant velocity and suffers no acceleration; thus, the net force is zero everywhere but not the distinct forces due to the fluid pressure and inertia and the electromagnetic fields. In our model we take into account the electromagnetic interaction, the force due to the pressure gradient and the inertia force. These forces produce work so energy is exchanged between the electromagnetic field and the fluid. This combination of assumptions leads to separation of the partial differential equations of the problem and to semi-analytical solutions. We do not include gravity in this model. In the presence of gravity the equations do not separate unless we constrain our study to non-relativistic temperatures ($p \\ll \\rho c^{2}$) and distances $r \\gg r_{S}$. \n\nThis structure is powered by an increasing magnetic dipole at the origin which releases new flux; thus a small loop of current lies at $r=0$ and the current increases with time. A mechanism that may provide an increasing magnetic field is the Poynting-Robertson battery; this mechanism has been proposed to operate in AGNs by \\cite{CC2009}. This solution has the basic properties of a collimated jet; however, in nature the exact form of the jet will depend on the interaction with the external medium and on the radiation mechanisms responsible for the observed emission, which are not investigated in this paper. \n\nComparing our results to those of \\cite{TAM2009} we find that they are similar for small $v$, where the dipolar form of the solution dominates. However, when $v$ becomes large they differ significantly. 
This is mainly because of the different approach: we solve the problem making the assumption that the magnetic flux emerges from the origin and reaches a maximum expansion velocity and then we solve for the flux function, whereas they impose a poloidal field and then solve for the other components of the equation. \n\nThis physical system covers the late stage of the relativistic expansion of a magnetised fluid that has already been accelerated, so that each shell expands with constant velocity. As the velocity is proportional to the distance from the origin the inner shells do not overtake the outer ones, and there are no collisions. In this model the inlet boundary ($v_{0}$) is also moving, so it cannot be identified with a surface remaining still in real space; however, $v_{0}$ can be chosen to be small and to move slowly compared to the rest of the configuration. This study may be applied to systems that occur after the explosion of an object which contained a strong magnetic field. \n\nAn interesting property of this model is that for some sets of parameters $g'$ becomes positive (leading to a positive expansion factor $F=vg'\/g$) for intermediate velocities, meaning that the field is collimated there. At larger velocities $g'$ becomes negative again so that the lines are closed inside the light sphere.\n\nThe demand for semi-analytical and separable solutions sets constraints on the range of physical configurations we can describe. If we are to study an accelerating or decelerating expansion it is essential to give up the uniform expansion parameter $v=r\/ct$. In appendix B we show that the force equation does not admit self-similar solutions for an arbitrary combination of $r$ and $t$. \n\nIn this paper we have found analytical solutions for a complicated relativistic MHD problem. We are aware that there is a gap to bridge between the observed radiation from a relativistically expanding magnetised fluid and our idealised model. We suggest that our work can be used as a stepping stone for future studies and as a check of the validity of MHD simulations. The results of the simulations can be compared with our solutions for consistency. We also remark that these structures allow the exchange of energy between the field and the fluid. This provides a cooling\/heating mechanism for the plasma and a collimation\/decollimation mechanism for the magnetic field, respectively.\n\n \n\n\\section*{Acknowledgements} \nThe authors are grateful to Professors Kanaris Tsinganos and Donald Lynden-Bell for illuminating discussions and insightful comments. KNG is grateful to the section of Astrophysics, Astronomy and Mechanics of the Department of Physics of the University of Athens, where he was a research visitor during autumn 2008 and part of this research was done. 
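\n\nAs an illustration of how equation~(\\ref{ode}) can be integrated subject to the boundary conditions $\\tilde{g}(v_{0})=1$ and $\\tilde{g}(v_{\\rm max})=0$, a minimal shooting-method sketch is given below. It is only indicative and is not the code used for the figures: it assumes Python with the SciPy library, the parameter values correspond to the case $c_{0}=1$, $\\tilde{c}_{1}=0.1$, $v_{\\rm max}=0.95$ of figure~\\ref{fig:g_c0_1}, and the bracket for the trial slope $\\tilde{g}'(v_{0})$ may need adjusting for other parameters.\n\\begin{verbatim}\nfrom scipy.integrate import solve_ivp\nfrom scipy.optimize import brentq\n\n# Illustrative parameters: c0, normalized pressure-inertia constant,\n# inner boundary v0 and maximum expansion velocity vmax.\nc0, c1t = 1.0, 0.1\nv0, vmax = 0.1, 0.95\n\ndef rhs(v, y):\n    # y = (g, dg\/dv); the ordinary differential equation solved for g''.\n    g, dg = y\n    w = 1.0 - v**2\n    d2g = (2.0*v**3*dg\/w + 2.0*g\/w\n           - c0*v**2*g\/w**2 - c1t*v**4\/w**3) \/ v**2\n    return [dg, d2g]\n\ndef g_end(dg0):\n    # Integrate from v0 to vmax with g(v0)=1 and trial slope dg0.\n    sol = solve_ivp(rhs, (v0, vmax), [1.0, dg0], rtol=1e-10, atol=1e-12)\n    return sol.y[0, -1]\n\n# Shooting: choose g'(v0) such that g(vmax)=0.  The bracket is a guess\n# around the dipolar slope -1\/v0 and may need adjusting so that the\n# residual changes sign across it.\ndg0 = brentq(g_end, -30.0, -1.0)\nprint('slope at v0:', dg0)\n\\end{verbatim}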
\n\n\\begin{appendices}\n\\section{~Solutions of equation~(\\ref{MOM2})}\n\\label{appA}\n\nWe seek separable solutions of equation~(\\ref{MOM2}), of the self-similar form $P=P(\\alpha)$, where $\\alpha=g(v) \\sin^2 \\theta$.\nBy using $\\alpha$ instead of $\\theta$ we may transform from the pair of independent variables $(v\\,,\\theta)$ to the $(v\\,,\\alpha)$.\nWith the following elementary relations valid for any function $\\Phi$,\n\\begin{eqnarray}\n&&\\frac{\\partial \\Phi(v\\,,\\theta)}{\\partial v}= \n\\frac{\\partial \\Phi(v\\,,\\alpha)}{\\partial v} +\n\\frac{g'}{g} \\alpha \\frac{\\partial \\Phi(v\\,,\\alpha)}{\\partial \\alpha} \\,,\n\\nonumber \\\\\n&&\\frac{\\partial \\Phi(v\\,,\\theta)}{\\partial \\theta}= \n2 \\frac{\\cos \\theta}{\\sin \\theta}\n\\alpha \\frac{\\partial \\Phi(v\\,,\\alpha)}{\\partial \\alpha} \\,,\n\\end{eqnarray}\nwe may rewrite~(\\ref{MOM2}), after dividing with $\\alpha P'$, as \n\\begin{eqnarray}\nv^2 g'' -\\frac{2v^3}{1-v^2}g' -\\frac{2}{1-v^2}g =-\\frac{v^2 g}{(1-v^2)^2} \\frac{\\beta}{\\alpha P'} \\frac{{\\rm d} \\beta}{{\\rm d}P} -\\frac{v^{4}}{(1-v^{2})^3} 16 \\pi^3 \\frac{1}{P'}\\frac{{\\rm d} p_0}{{\\rm d}P}\n\\nonumber \\\\\n-\\frac{4g^2}{1-v^2} \\frac{P''}{P'} +\\left[\\frac{4g}{1-v^2}-\\frac{(vg')^2}{g}\\right] \\frac{\\alpha P''}{P'}\n\\,,\n\\label{sum_products}\n\\end{eqnarray}\nwhere primes denote derivative with respect to $v$ or $\\alpha$. Note that the division with $\\alpha P'$ is possible because this expression cannot be zero (this would mean that $P=$ const., or equivalently that the poloidal magnetic field vanishes).\n\nWe first examine two trivial cases.\n\nThe first trivial case corresponds to $g=\\lambda = $ const. In this case~(\\ref{sum_products}) yields $\\beta=$ const, $p_0=$ const, and $P=C(1-\\cos \\theta)$, corresponding to a pure poloidal monopolar, force-free magnetic field.\n\nA second trivial case corresponds to\n\\begin{eqnarray}\ng=\\lambda \\frac{v^2}{1-v^2}\\,,\n\\nonumber\n\\end{eqnarray}\nwhere $\\lambda=$ const. 
In this case~(\\ref{sum_products}) yields\n\\begin{eqnarray} \n\\frac{dp_0}{d\\alpha}=\\frac{-\\lambda}{16 \\pi^3}\\left[\n\\frac{\\beta}{\\alpha} \\frac{{\\rm d}\\beta}{{\\rm d}\\alpha}\n+4 \\left(P'\\right)^2 + 4 \\lambda P' P'' + 4 \\alpha P' P''\n\\right]\\,.\n\\nonumber\n\\end{eqnarray}\nThis solution corresponds to cylindrical poloidal magnetic field in the $v\\ll 1$ regime, however this solution contains magnetic monopoles at $v=1$ and $\\theta=0, \\pi$ and it is unphysical.\n\nIn the general case, equation~(\\ref{sum_products}) yields that there are constants $c_0$, $c_1$, $c_2$, $c_3$, such that (see Appendix~B in \\citealp{VT97})\n\\begin{eqnarray}\nv^2 g'' -\\frac{2v^3}{1-v^2}g' -\\frac{2}{1-v^2}g =\n-\\frac{c_0v^2 g}{(1-v^2)^2}\n-\\frac{c_1 v^{4}}{(1-v^{2})^3} \n-\\frac{4 c_2 g^2}{1-v^2}\n+c_3\\left[\\frac{4g}{1-v^2}-\\frac{(vg')^2}{g}\\right]\n\\,.\n\\label{veq1}\n\\end{eqnarray} \nEquation~(\\ref{sum_products}) then becomes\n\\begin{eqnarray}\n\\left[\\frac{(vg')^2}{g}-\\frac{4g}{1-v^2}\\right]\n\\left(\\frac{\\alpha P''}{P'} -c_3 \\right) =\n\\frac{v^2 g}{(1-v^2)^2} \n\\left(c_0 -\\frac{\\beta}{\\alpha P'} \\frac{{\\rm d} \\beta}{{\\rm d}P} \\right)\n+\\frac{v^{4}}{(1-v^{2})^3} \n\\left(c_1- 16 \\pi^3 \\frac{1}{P'}\\frac{{\\rm d} p_0}{{\\rm d}P} \\right)\\nonumber \\\\\n+\\frac{4g^2}{1-v^2} \n\\left(c_2-\\frac{P''}{P'} \\right)\n\\,,\n\\label{sum_productsA}\n\\end{eqnarray}\ni.e., it becomes a sum of four products of a function of $v$ with a \nfunction of $\\alpha$.\n\nWe distinguish two possibilities. The first is that $\\alpha P'' \/ P' -c_3 \\neq 0$. In that case we can divide~(A.4) by $\\alpha P'' \/ P' -c_3$, and find that there are constants $c_4$, $c_5$ and $c_6$ such that $(vg')^2\/g-4g\/(1-v^2)=c_4v^2 g\/(1-v^2)^2 +c_5 v^{4}\/(1-v^{2})^3 + 4 c_6 g^2\/(1-v^2)$. However, it is highly unlikely that the last equation has solutions that satisfy also~(\\ref{veq1}), except for the trivial cases $g=\\lambda$ and $g=\\lambda v^2\/(1-v^2)$ considered above. So we are left with the second possibility, which is to have $\\alpha P'' \/ P' -c_3 = 0$, with solution\\footnote{The solution of $\\alpha P'' \/ P' -c_3 = 0$ is that $P'$ is proportional to $\\alpha^{c_3}$. However, without loss of generality we can assume that the constant of proportionality is unity.} $P'=\\alpha^{c_3}$. Equation~(\\ref{sum_productsA}) then becomes\n\\begin{eqnarray}\n\\left(g\\frac{1-v^2}{v^2}\\right)^2 \\left(4\\frac{c_3}{\\alpha}-4c_2 \\right)=\n\\left(g\\frac{1-v^2}{v^2}\\right)\n\\left(c_0 -\\frac{\\beta}{\\alpha^{c_3+1}} \\frac{{\\rm d} \\beta}{{\\rm d}P} \\right)+\n\\left(c_1- \\frac{16 \\pi^3}{\\alpha^{c_3}}\\frac{{\\rm d} p_0}{{\\rm d}P} \\right)\n\\,.\n\\label{sum_productsB}\n\\end{eqnarray}\n\nWe again distinguish the following two possibilities. \nThe first corresponds to the case where $c_2$ or $c_3$ is nonzero. In this case, by dividing~(\\ref{sum_productsB}) by $4c_3\/\\alpha -4c_2$ we find that $[g(1-v^2)\/v^2]^2$ equals a linear combination of $g(1-v^2)\/v^2$ and a constant. This means that $g(1-v^2)\/v^2 = $ const., a case that we already considered above. \nThe second possibility is to have $c_2=c_3=0$, in which case $P=\\alpha$. From equation~(\\ref{sum_productsB}) we find $\\beta {\\rm d} \\beta\/{\\rm d}P = c_0 \\alpha$ and $16 \\pi^3 {\\rm d} p_0\/{\\rm d}P = c_1$, while~(\\ref{veq1}) gives~(\\ref{ode}).\n\nIn this appendix we have chosen that the angular part of the solution is proportional to $\\sin^{2}\\theta$, this corresponds to a dipole field. 
The differential operator $\\sin \\theta \\partial\/\\partial \\theta \\left(\\frac{1}{\\sin \\theta} \\partial\/\\partial \\theta \\right)$ admits in general eigenfunctions in the form of $\\sin\\theta{\\rm d}P_{l}(\\cos \\theta)\/{\\rm d} \\theta$, where $P_{l}$ is the Legendre Polynomial of order $l$. The case studied above is for $l=1$. Assuming a linear form on $\\beta(P) {\\rm d}\\beta\/{\\rm d}P= c_{0}P$ and $16\\pi^{3} {\\rm d}p_{0}\/{\\rm d}P=c_{1}$ we find that we are constrained only to the dipole solution. This is because~(\\ref{MOM2}) reduces to the following form:\n\\begin{eqnarray}\n\\lambda(v)\\sin\\theta \\frac{{\\rm d}P_{l}(\\cos \\theta)}{{\\rm d} \\theta} + c_{1} \\sin^2\\theta \\frac{v^4}{(1-v^2)^3}=0 \\,.\n\\end{eqnarray}\n\nAs the pressure-inertia term is multiplied by $\\sin^{2} \\theta$ the only acceptable solution is this of a dipole for $l=1$. In the absence of pressure and inertia there are acceptable solutions of higher order multipoles, see appendix B of \\cite{GL2008} for more details. \n\n\\section{~Solutions for arbitrary combination of $r$ and $t$}\n\\label{appB}\n \nIn this appendix we study whether it is possible to have separation of variables for a dimensionless parameter other than $v=r\/(ct)$. Let us assume that we are looking for self-similar solutions of the form $v=r\/R(t)$ where $R(t)$ is an arbitrary function of $t$ which has dimensions of length. Let us assume a magnetic field of the form of equation~(\\ref{magnetic}). We shall follow step by step the process described in section 2, the electric field that satisfies the induction equation is ${\\bf E}=-[\\dot{R}\/(cR)] {\\bf \\hat{e}}_{r} \\times {\\bf B}$. The gas pressure and inertia forces contribute only to the $r$ and the $\\theta$ components of the momentum equation, whereas it is only the electromagnetic forces that contribute to the $\\phi$ component of the momentum equation. We equate the $\\phi$ component to zero, $(j^{0} {\\bf E}+ {\\bf j} \\times {\\bf B})\\cdot {\\bf \\hat{e}_{\\phi}}=0$. Unlike equation~(\\ref{Teq0}) the equation we take for it is more complicated\n\\begin{eqnarray}\n(v^{3}\\dot{R}-v)\\frac{\\partial T}{\\partial \\theta}\\frac{\\partial P}{\\partial v}+(v-v^{3}\\dot{R}^{2}) \\frac{\\partial T}{\\partial v} \\frac{\\partial P}{\\partial \\theta}+(v^{2}R\\ddot{R}-v^{2}\\dot{R}-1)T\\frac{\\partial P}{\\partial \\theta}=0\\,,\n\\label{Rarbitrary}\n\\end{eqnarray}\nwhere $\\dot{R}={\\rm d}R\/{\\rm d}t$. On division by $(v^{3}\\dot{R}^{2}-v)$, the first and the second term of the above equation only depend on $v$ and $\\theta$, therefore we expect the third term to have no dependence on $t$. This clearly happens when $R$ is linear function of $t$ and this is the condition we imposed in the first section of the paper, and indeed when substitute that in equation~(\\ref{Rarbitrary}) we return to equation~(\\ref{Teq0}).\n\\end{appendices}\n\n\\bibliographystyle{gGAF}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nPhase transitions in QCD are currently very actively\nresearched. Although most of the attention is focused on the phase\ntransition at finite baryon density and temperature and the expected\ncritical point in the phase diagram~\\cite{Fodor:2001pe,Allton:2003vx},\nthere are still challenges even at vanishing density, such as a\ndetermination of the order of the phase transition or the flavor\ndependence of the chiral phase transtion temperature\n\\cite{Karsch:2000kv}. 
Since lattice QCD simulations have not given\nfinal answers to all these questions so far, complementary\nnon-perturbative approaches are indispensable to gain a better\nunderstanding of the non-perturbative phenomena.\n\nFor our non-perturbative study\\footnote{Talk given at the 2006 ECT*\nSchool \"Renormalization Group and Effective Field Theory Approaches to\nMany-Body Systems\", Trento, Italy.} of the chiral phase boundary of\nQCD at finite temperature, we use the functional renormalization\ngroup~(RG)~\\cite{Wegner:1972ih,Wilson:1973jj,Polchinski:1983gv,Wetterich:1992yh,Morris:1993qb},\napplied to a formulation of QCD in terms of microscopic degrees of\nfreedom given by quarks and gluons. At low temperature and momentum\nscales, QCD can be described well by effective field theories in terms\nof ordinary hadronic states. But a hadronic picture is eventually\nbound to fail at higher temperature and momentum scales, owing to\nasymptotic freedom. The recent discussion of a strongly interacting\nhigh-temperature phase of QCD suggests that a simple description of\nQCD around and above the phase transition temperature does not exist\n\\cite{Shuryak:2003xe}. In this case, a first-principles description\nin terms of quarks and gluons is most promising to bridge wide ranges\nin parameter space.\n\n\\addtolength{\\textheight}{-1.5cm}\n\nHere we discuss particular problems \nwhich are accessible in a formulation in terms of quark and \ngluons~\\cite{Braun:2005uj,Braun:2006jd}: \nfirst, we present our results for the running of the gauge coupling at finite temperature.\nSecond, we discuss the interplay\nbetween gluodynamics and (induced) quark dynamics. \nFinally, we give results for the chiral phase boundary in the plane of \ntemperature and number of quark flavors, obtained from such an investigation of the quark-gluon dynamics at finite temperature.\n\nThe functional RG yields a flow equation for the effective average\naction~$\\Gamma_k$~\\cite{Wetterich:1992yh},\n\\begin{equation}\n\\partial_t\\Gamma_k=\\frac{1}{2}\\, \\text{STr}\\, \\partial_t R_k\\,\n(\\Gamma_k^{(2)}+R_k)^{-1}, \\quad t=\\ln \\frac{k}{\\Lambda},\n\\label{eq:flow_eq1}\n\\end{equation}\nwhere $\\Gamma_k$ interpolates between the bare action\n$\\Gamma_{k=\\Lambda}= S$ and the full quantum effective action\n$\\Gamma=\\Gamma_{k=0}$; $\\Gamma_{k}^{(2)}$ denotes the second\nfunctional derivative with respect to the fluctuating field. The\nregulator function $R_k$ specifies the details of the Wilsonian\nmomentum-shell integrations, such that the flow of $\\Gamma_k$ is dominated \nby fluctuations with momenta $p^2\\simeq k^2$. \n\nIt is impossible to study the flow of the most general effective action, \nconsisting of all operators that are compatible with the symmetries of \nthe theory. Therefore we have to truncate the action to a subset of operators, \nwhich is not necessarily finite.\nNevertheless, such an approximation of the full theory can also describe reliably\nnon-perturbative physics, provided the relevant degrees of freedom in the\nform of RG relevant operators are kept in the ansatz for the effective\naction. This is obviously the most problematic part since \nit requires a lot of physical insight to make the correct physical \nchoice. 
A first but highly nontrivial check\nof a solution to the flow equation is provided by \na stability analysis of its RG flow, since\ninsufficient truncations generically exhibit IR instabilities \nof Landau-pole type.\n\nThe IR stability of RG flows can be improved by adjusting the\nregulator to the spectral flow of $\\Gamma^{(2)} _k$ \\cite{Gies:2002af,Gies:2003ic,Litim:2002xm}. \nDoing this, we integrate over shells of eigenvalues of $\\Gamma^{(2)} _k$ rather than\nordinary canonical momentum shells. \nIn a perturbative language, the use of the spectrally adjusted regulator \nallows for a resummation of a larger class of diagrams. Such an improvement has been \nsuccessfully applied to the study of QCD at zero and \nfinite temperature~\\cite{Gies:2002af,Gies:2003ic,Braun:2006jd} and is the underlying technical \ningredient for the results presented below.\n\nIn the subsequent sections, we use the exponential regulator~\\cite{Wetterich:1992yh} of the\nform $R_k(\\Gamma^{(2)}_k)=\\Gamma^{(2)}_k \/[\\exp (\\Gamma^{(2}_k\/\\mathcal Z_k \nk^2)-1]$, where $\\mathcal Z_k$ denotes the wave function\nrenormalization of the fields. \n\n\\section{Running gauge coupling at finite temperature}\n\nIn this section, we present our study of the running strong coupling \nat finite temperature. While an investigation of the running of the coupling \nis interesting in its own right, we will also show in the subsequent sections \nthat it represents a key ingredient of our study of the chiral phase boundary.\n\\begin{figure}[t]\n\\begin{center}\n\\subfigure[]{\\label{betafct}\\includegraphics[%\n clip,\n scale=0.76]{.\/su2flow.ps}}\\hfill\n\\subfigure[]{\\label{coupl_pure_glue}\\includegraphics[%\n clip,\n scale=0.46]{.\/alpha_k_1loop.eps}}\n\\caption{\n(a): Anomalous dimension $\\eta$ \nas a function of $\\alpha\/8\\pi = g^2\/32\\pi ^2$ for $SU(2)$ Yang-Mills \ntheory at vanishing temperature. \nThe anomalous dimension is defined via the background-field wave-function renormalization \nby $\\eta=-\\ln\\partial _t W_k ^{(1)}$. It is related to the flow of the \ncoupling as $\\partial_t g^2 _k=\\eta g^2 _k$. \nThe arrow indicates the RG flow from the initial condition at the UV scale $\\Lambda$ (left dot)\nto the IR fixed point (right dot). Owing to asymtotic freedom, there exists \nalso a fixed point for $\\alpha =0$.\n(b):~Running $\\mathrm{SU}(3)$ Yang-Mills coupling\n$\\alpha _{\\text{YM}}(k,T)$ as a function of $k$ for temperatures \n$T=0,100,300\\,\\mathrm{MeV}$, compared to the one-loop \nrunning for vanishing temperature.}\n\\label{fig:runcoup}\n\\end{center}\n\\end{figure}\n\nOwing to strong coupling, we cannot expect that low-energy gluodynamics are \nreliably described by a small number of gluonic operators. On the contrary, \ninfinitely many operators become RG relevant in the low-momentum regime and drive \nthe running of the coupling. We truncate the space of possible action\nfunctionals to an {\\em infinite} but still tractable set of operators. \nFor gauge invariance, we apply the background-field formalism as developed in\n\\cite{Reuter:1993kw,Freire:2000bq} and follow the strategy of\n\\cite{Reuter:1997gx,Gies:2002af} for an approximate resolution of the\ngauge constraints \\cite{Ellwanger:1994iz}. For an introduction to the background-field \nformalism and to background-field flow equations for gauge theories, \nwe refer the reader to the lecture notes of H. 
Gies \\cite{LectureHGies}.\nApart from standard gauge-fixing and ghost terms, our truncation of the \ngauge sector consists of an infinite set of operators given by powers of the\nYang-Mills Lagrangian,\n\\begin{equation}\n\\Gamma_{k}[A,\\bar{\\psi},\\psi]=\\int_x \\mathcal W_k(\\theta)+\\Gamma _k ^{\\psi}[A,\\bar{\\psi},\\psi], \\quad\n\\theta=\\frac{1}{4} F_{\\mu\\nu}^a F_{\\mu\\nu}^a,\\label{Wtrunc}\n\\end{equation}\nwhereas the quark contributions \nare summarized in $\\Gamma _k ^{\\psi}$ and will be discussed in Sec. \\ref{sec:cpbQCD}. \nFor the moment, we restrict our study to the gluonic subspace.\nExpanding the function $\\mathcal W_k(\\theta)=W_k ^{(1)} \\theta+ \\frac{1}{2}\nW_k ^{(2)} \\theta^2+\\frac{1}{3!} W_k ^{(3)} \\theta^3\\dots$, the expansion coefficients\nspan an infinite set of generalized couplings. The background-field method \\cite{Abbott:1980hw}\nprovides us with a non-perturbative definition of the running coupling $g_k$ in terms of \nthe background-field wave function renormalization $W_k ^{(1)}$: $g_k ^2 W_k ^{(1)}=\\bar{g} ^2$.\n\nOur truncation represents a gradient expansion in the field strength,\nwhich includes arbitrarily\nhigh gluonic correlators projected onto their small-momentum limit and\nonto the color and Lorentz structure arising from powers of $F^2$.\nA drawback of such gradient expansions is the appearance of\nan IR unstable Nielsen-Olesen mode in the spectrum \n\\cite{Nielsen:1978rm}. At finite temperature $T$, such a mode will be \npopulated by thermal fluctuations, typically \nspoiling perturbative computations, see e. g. Ref.~\\cite{Dittrich:1980nh}. \nOur RG approach allows us to resolve this problem with the \naid of a temperature-dependent regulator which removes the \nunphysical thermal population of the unstable mode. \nThus, we obtain a strictly positive thermal fluctuation spectrum.\nFor details of the implementation of such a regulator, we refer to \\cite{Braun:2006jd}.\n\nInserting the truncation~\\eqref{Wtrunc} in the flow equation~\\eqref{eq:flow_eq1}, we find \nthat the running of the coupling is successively driven by all generalized\ncouplings $W_k ^{(i)}$. Keeping track of all contributions from the flows of\nthe $W_k ^{(i)}$, we obtain a nonperturbative $\\beta_{g^2}$ function in\nterms of an infinite asymptotic but resummable series in powers of $g^2$,\n\\begin{equation}\n\\beta_{g^2} \\equiv \\partial _t g^2 _k= \\sum_{m=1}^\\infty a_m({\\textstyle{\\frac{T}{k}}},N_c)\n\\frac{(g^2)^{m+1}}{[2(4\\pi)^{2}]^m}. \\label{betares}\n\\end{equation}\nThe coefficients $a_m$ depend on the temperature $T$ and the rank $N_c$ of \nthe gauge group\\footnote{Odd powers in the coupling $g$ which arise\nin high-temperature small-coupling expansions \\cite{Blaizot:1999ap,Blaizot:2000fc} \nare not reproduced in our calculation. This is due to the fact that we have not yet included \nnon-local operators in our truncation. In any case, we do not expect that \nthese operators are quantitatively important for the determination of the \nchiral phase boundary since, as we will discuss below, chiral symmetry breaking sets in \nat scales $k0$, in agreement with the results of \nRef.~\\cite{Gies:2002af}, see Fig.~\\ref{betafct}. \nFor an explicit representation of the $a_m$'s and\ndetails of the computation, we refer the reader to Ref.~\\cite{Braun:2006jd}. 
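The qualitative behaviour encoded in Eq.~(\\ref{betares}), asymptotic freedom in the ultraviolet together with an attractive fixed point in the infrared, can be illustrated with a toy truncation of the series. The coefficients in the following minimal Python sketch are illustrative numbers only and are not the $a_m(T\/k,N_{\\text{c}})$ computed here.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy beta function dg^2/dt = -b*(g^2)^2*(1 - g^2/g2_star), t = ln(k/Lambda):
# b > 0 mimics asymptotic freedom, g2_star plays the role of the infrared
# fixed-point value of the coupling.
b, g2_star = 0.7, 30.0

def beta(t, g2):
    return -b * g2**2 * (1.0 - g2 / g2_star)

# Start with a perturbative coupling at the UV scale t = 0 and flow
# towards the infrared (decreasing t, i.e. decreasing k).
sol = solve_ivp(beta, t_span=(0.0, -15.0), y0=[1.0],
                dense_output=True, max_step=0.05)

for t in (0.0, -2.0, -5.0, -10.0, -15.0):
    print(f"t = {t:6.1f}   g^2 = {sol.sol(t)[0]:8.3f}")
# g^2 grows towards the infrared and saturates at g2_star.
\\end{verbatim}
The coupling grows monotonically towards the infrared and saturates at the fixed-point value, mimicking the flow indicated by the arrow in Fig.~\\ref{betafct}.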
Note that the\nappearance of an IR fixed point in Yang-Mills theories is a\nwell-investigated phenomenon in the Landau gauge~\\cite{vonSmekal:1997is,Pawlowski:2003hq,Fischer:2004uk}, \nin accordance with the Kugo-Ojima and\nGribov-Zwanziger confinement scenarios~\\cite{Kugo:1979gm,Gribov:1977wm}. \nMoreover, it is also\ncompatible with the existence of a mass gap~\\cite{Gies:2002af} in Yang-Mills theory.\n\nAs initial condition for the coupling flow,\nwe use the value of the coupling measured at the\n$\\tau$ mass scale \\cite{Bethke:2004uy}, $\\alpha_{\\mathrm{s}}=0.322$. \nFor scales $k\\gg T$, we find agreement with the perturbative running\ncoupling at zero temperature, as one would naively expect. \nIn the IR, the running is strongly modified: The coupling increases towards lower scales until\nit develops a maximum near $k\\sim T$. Below, the coupling decreases\naccording to a power law $g^2 \\sim k\/T$, see Fig.~\\ref{coupl_pure_glue}. \nThe reason for this behavior can be understood within the \nRG framework: first, the hard gluonic modes decouple from the RG flow at the scale $k\\sim T$. \nAt this point, the wavelength of fluctuations with momenta $p^20$ (middle\/blue solid curve) \n and for gauge couplings larger than the critical coupling $g>g_{\\text{cr}}$\n (lower\/green solid curve). The effect of finite ratios $T\/k$ \n is illustrated for vanishing gauge coupling by the dashed (red) line.}\n\\end{center}\n\\end{figure}\n\nIntroducing the dimensionless couplings $\\lambda_i =k^2 \\bar\\lambda _i$, the\n$\\beta$ functions for the four-fermion couplings can be written in the form\n\\begin{equation}\n\\partial_t\\lambda _i =2\\lambda _i - \\lambda _j A_{jk} \\lambda _k - b_j \\lambda _j g^2 - c_i g^4. \n\\label{eqlambda}\n\\end{equation}\nHere we use a straightforward finite-temperature\ngeneralization of the flow equations for the $\\bar\\lambda_i$, which have been \nderived and analyzed in \\cite{Gies:2003dp,Gies:2005as}. \nThe quantities $A$, $b$ and $c$ are dependent on the ratio $T\/k$, where \n$A$ is a matrix, and $b$ and $c$ are vectors in the space of $\\lambda$ couplings;\nsee Ref. \\cite{Braun:2006jd} for an explicit representation of the flow equations.\nThe various terms arising on the RHS of the flow equation of the $\\bar{\\lambda}_i$s \ncan be related straightforwardly to one-particle irreducible Feynman-graphs, which are depicted in \nFig.~\\ref{fig:graphs}.\nAs initial conditions for the four-fermion couplings, we use $\\bar\\lambda_i\\to 0$ for\n$k\\to\\Lambda\\to\\infty$. This choice ensures that the $\\bar{\\lambda}_i$ \nare generated solely by quark-gluon dynamics from first principles.\nThis point is very important, as it is contrary to, e.g., the Nambu--Jona-Lasinio model, where the\nfour-fermion couplings serve as independent input parameters.\n\nOur truncation provides a simple picture for the chiral dynamics illustrated \nby Fig.~\\ref{fig:parab}: At vanishing gauge coupling, we find fixed points \nfor the four-fermion couplings at $\\lambda _i \\neq 0$ and $\\lambda _i =0$, \nwhere the latter ones are IR attractive. By increasing the gauge coupling $g$, \nthe RG flow generates quark self-interactions of order $\\lambda\\sim g^4$, \nresulting in a shift of the fixed points. 
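This mechanism is already visible in a single-channel caricature of Eq.~(\\ref{eqlambda}), $\\partial_t\\lambda = 2\\lambda - a\\lambda^2 - b\\lambda g^2 - c g^4$ with constant coefficients. The numbers in the following minimal Python sketch are illustrative placeholders and not the temperature-dependent $A$, $b$ and $c$ of our truncation.
\\begin{verbatim}
import numpy as np

# Toy single-channel version of the four-fermion flow,
#   d(lam)/dt = 2*lam - a*lam**2 - b*lam*g2 - c*g2**2 ,   with g2 = g^2.
a, b, c = 1.0, 0.5, 0.1

def fixed_points(g2):
    """Real zeros of the right-hand side: a*lam**2 - (2-b*g2)*lam + c*g2**2 = 0."""
    disc = (2.0 - b * g2)**2 - 4.0 * a * c * g2**2
    if disc < 0.0:
        return None                     # the two fixed points have annihilated
    root = np.sqrt(disc)
    lam_minus = ((2.0 - b * g2) - root) / (2.0 * a)
    lam_plus = ((2.0 - b * g2) + root) / (2.0 * a)
    return lam_minus, lam_plus

for g2 in (0.0, 1.0, 2.0, 3.0, 4.0):
    fp = fixed_points(g2)
    print(f"g^2 = {g2:4.1f} :",
          "no fixed points -> lambda runs away (chiral symmetry breaking)"
          if fp is None else f"lambda_* = {fp[0]:.3f}, {fp[1]:.3f}")
\\end{verbatim}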
\nMoreover, we observe that the four-fermion couplings \napproach fixed points $\\lambda_\\ast$ in the IR, if the gauge coupling \nin the IR remains smaller than a critical value $g_{\\text{cr}}$.\nAt these fixed points, QCD remains in the chirally invariant phase.\nIncreasing the gauge coupling beyond the critical coupling $g>g_{\\text{cr}}$, \nthe gauge-fluctuation induced $\\lambda$'s become strong enough to \ncontribute as relevant operators to the RG flow. This is reflected \nin a destabilization of the IR fixed points \\cite{Gies:2003dp,Gies:2005as}.\nIn this case, the four-fermion couplings increase rapidly and approach a divergence \nat a finite scale~$k=k_{\\text{$\\chi$SB}}$. Indeed, this strong increase indicates the \nformation of chiral quark condensates and therefore the onset of chiral symmetry breaking.\nWe recall the NJL model as an illustration: There, the mass parameter $m^2$ of the bosonic fields \nin the partially bosonized action is inversely proportional to the \nfour-fermion coupling, $\\bar{\\lambda}\\sim 1\/m^2$. In addition, we know from such NJL-type models \nthat their effective action in the bosonized form corresponds to a Ginzburg-Landau effective potential for the order parameter. In its simplest form, the order parameter is given by the expectation value of a scalar field. \nSymmetry breaking is then reflected in a non-trivial minimum of the this potential.\nConsequently, the scale $k_{\\text{$\\chi$SB}}$ at which the four-fermion couplings \ndiverge is a good measure for the chiral symmetry breaking scale. At this scale, \nthe effective potential for the order parameter becomes flat and starts to develop a \nnonzero vacuum expectation value.\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[%\n clip,\n scale=0.85]{.\/alpha_crit.ps}\n\\caption{\\label{alpha_alpha_c} The figures show the \nrunning QCD coupling\n$\\alpha_{\\mathrm{s}} (k,T)$ for $N_{\\text{f}}=3$ massless quark flavors and\n$N_{\\text{c}}=3$ colors and the critical value of the running coupling\n$\\alpha_{\\mathrm{cr}} (k,T)$ as a function of $k$ for two values \nof the temperature, $T=130\\,\\mathrm{MeV}$ (left panel) and $T=220\\,\\mathrm{MeV}$ (right\npanel). The existence of the\n$(\\alpha_{\\mathrm{s}},\\alpha_{\\mathrm{cr}})$ intersection point at the scale $k_{\\text{cr}}$ \nin the left panel indicates that the quark dynamics can become critical.}\n\\end{center}\n\\end{figure}\n\nAt this point, we have traced the question of the onset of chiral symmetry \nbreaking back to the strength of the coupling $g$ relative to the critical coupling $g_{\\text{cr}}$.\nAt zero temperature, the critical coupling for three colors and not \ntoo many (massless) quark flavors is much smaller than the IR fixed point \nvalue of the coupling. Therefore QCD exhibits broken chiral symmetry \nat zero temperature.\n\nAt finite temperature, the running of the gauge coupling is\nstrongly modified in the IR. Moreover, the critical coupling becomes larger \nfor increasing ratios~$T\/k$. This reflects that the formation of \nquark condensates require stronger interactions at finite temperatures, since \nthe quarks become stiffer due to the thermal masses.\n\nIn Fig. \\ref{alpha_alpha_c}, we show the running coupling\n$\\alpha_{\\mathrm{s}}$ and its critical value $\\alpha_{\\mathrm{cr}}$\nfor $T=130\\,\\mathrm{MeV}$ and $T=220\\,\\mathrm{MeV}$ as a function of\nthe regulator scale $k$. The intersection point $k_{\\text{cr}}$\nbetween both gives the scale where the chiral quark dynamics become\ncritical. 
Above this scale, the system is in the chiral symmetric regime, below \nit quickly runs into the broken regime.\nThe critical temperature can now be estimated using the lowest temperature \nfor which no intersection point between \n$\\alpha_{\\mathrm{s}}$ and $\\alpha_{\\mathrm{cr}}$ exists.\nWe find $T_{\\mathrm{cr}}\\approx\n186\\,\\mathrm{MeV}$ for $N_{\\text{f}}=2$ and $T_{\\mathrm{cr}}\\approx\n161\\,\\mathrm{MeV}$ for $N_{\\text{f}}=3$ massless quark flavors in good\nagreement with lattice simulations \\cite{Karsch:2000kv}.\nWe stress that no other parameter except for the running coupling at the \n$\\tau$ mass scale has been used as input for the calculation of the \ncritical temperatures.\n\nFinally, we discuss the chiral phase boundary in the plane of the temperature \nand number $N_{\\text{f}}$ of massless quark flavors computed within our approach, \nsee Fig. \\ref{tc_nf}. For small $N_{\\text{f}}$, we observe an almost linear decrease of the critical temperature \nfor increasing $N_{\\text{f}}$ with a slope of $\\Delta T_{\\mathrm{cr}}=T(N_{\\text{f}})-T(N_{\\text{f}}+1)\\approx\n25\\,\\mathrm{MeV}$. Additionally we find the existence of \na critical number of quark flavors, $N_{\\text{f}} ^{\\mathrm{cr}}=12$, above which no chiral phase\ntransition occurs. Since $N_{\\text{f}} ^{\\mathrm{cr}} r ]$ the set with probability density level higher (greater) than $r\\in \\mathbb{R}$, that is,\n\\begin{equation}\n[\\psi> r ]=\\{ {\\bf z}\\in \\mathbb{R}^n :\\quad \\psi({\\bf z}) > r \\} = \\psi^{-1}((r,+\\infty)). \n\\end{equation}\n\nLet us define $R:=\\max_{\\bf z}\\psi({\\bf z})$. In some practical applications a threshold value $r \\in (0,R)$ is determined and the region $[\\psi> r]\\subset \\mathbb{R}^n$ is refereed to as \\emph{hot spot area} (and in case of being the union of disjoint small regions these regions are know as hot spots), see \\cite{Chainey_2008} and references therein. In fact, note that if $r$ is close to $R$ the set $[\\psi> r]\\subset \\mathbb{R}^n$ will contain the most probable occurrences of $Y$.\n\n\nIt is common in applications to use the hot spots of the random variable $Y$ to predict the hot spots of another a random variable $X$. A common way to measure the performance of such predictions is by using the so-called\n\\emph{Prediction Accuracy Index} (PAI) defined next (see \\cite{Chainey_2008}).\nIn some cases, when several possible predictions of $X$ are available, it is used the PAI measure to decide which model to use to report hots spots predictions of the target $X$. See \\cite{Chainey_2008,joshi2021considerations}.\n\nIn order to fix ideas and simplify the presentation we assume that $\\phi$ and $\\psi$ are continuous functions but we can as well consider discretized version up to some given spatial resolution as it is most common in applications.\n\n\\subsection{PAI of a subregion}\nAssume that we want to analyze a measurable study region $A$ with positive volume measure. Region $A$ corresponds to the study region where \nthe random variable $X$ occur. \nThe region $B\\subset A$ is a possible hot spot region for $X$ in $A$. 
Define the hit rate by (\\cite{Chainey_2008}) \n\\[\nH_{\\phi}(B)=\\frac{\\displaystyle \\int_{B} \\phi}{\\displaystyle \\int_A \\phi}.\n\\]\nIf $|B|>0$ define the average of $\\phi$ on $B$ by \n\\[\n \\fint_{B} \\phi=\\frac{1}{|B|} \\int_{B} \\phi.\n\\]\nNote that in our notation for $H_\\phi$ we do not make explicit reference to the region $A$ since we assume $A$ is fixed.\nA commonly used indicator (\\cite{Chainey_2008}) of the quality of using $B$ to approximate the hot spots of $X$ in $A$ is given by the ratio of the hit rate with the percentage of volume $\\frac{|B|}{|A|}$, that is, \n\\[\nPAI_\\phi(B)= \\frac{H_\\phi(B)}{\\frac{|B|}{|A|}}\n=\\frac{|A|}{\\displaystyle \\int_{A}\\phi }\\frac{\\displaystyle \\int_{B}\\phi }{|B|}\n=\\frac{\\displaystyle \\fint_{B} \\phi}{\\displaystyle \\fint_{A} \\phi}.\\]\nNote that the $PAI_\\phi(B)$ can also be written as the ratio between the averages of $\\phi$ on \n$B$ and $A$ which gives another interpretation of the $PAI$ that readily reveals some possible issues with this indicator such as the preferences for small subregions with high-value of $\\phi$.\\\\\n\n\nIn order to better understand this indicator we present some simple examples.\nNote that if $B=A$ we get $PAI_\\phi(B)=1$. If for instance we select $B$ with volume percentage\n\\[\n\\frac{|B|}{|A|}=0.5\n\\]\nand hit rate \n\\[\nH(B)=0.5\n\\]\nwe still have $PAI_\\phi(B)=1$ since we can predict half of occurrences of $X$ in half of the study area.\nAssume now that we have \n\\[\n\\frac{|B|}{|A|}=0.2\n\\mbox{ and }\nH(B)=0.8.\n\\]\nWe then have $PAI_\\phi(B)=4$ since we can predict 80\\% of occurrences of $X$ in 20\\% of the study area.\n\nIn practical applications we are interested in finding regions with a high value of PAI for a given density \n$\\phi$. We make the following observations:\n\n\\begin{itemize}\n\\item Since we always have \n$ \\fint_{B} \\phi\\leq R=\\max_{{\\bf z}\\in\\mathbb{R}^n} \\phi({\\bf z})$ we see that\n\\[\nPAI_\\phi(B)\n=\\frac{\\displaystyle \\fint_{B} \\phi}{\\displaystyle \\fint_{A} \\phi}\n\\leq \\frac{R}{\\displaystyle \\fint_{A} \\phi}.\n\\]\n\n\\item Let $\\phi^{-1}(R)=\\{ {\\bf z}\\in \\mathbb{R}^n \\, : \\, \n\\phi({\\bf z})=R\\}=\\arg\\max_{\\bf z}\\psi({\\bf z})$. If $B^*=\\phi^{-1}(R)\\cap A$ is such that $|B^*|>0$ then $B^*$ \n and all its possitive measure subsets maximizes the PAI indicator. In fact, \n for all $\\widetilde{B}\\subset B^*$ such that $|\\widetilde{B}|>0$ we have \n\\[\nPAI_\\phi(\\widetilde{B})\n= \\frac{R}{\\displaystyle \\fint_{A} \\phi}.\n\\]\nWe then see that, in practical applications the PAI indicator will favor small area regions around the maximum values of the density $\\phi$. Here small will depend on the spatial resolution at which subregions are computed. This might not be convenient in some applications as pointed out in \\cite{joshi2021considerations}.\n\n\\item If $B^*$ defined above is such that \n$|B^*|=0$ then, given a subregion $\\widetilde{B}\\subset A$ with \n$|\\widetilde{B}|>0$ and \n$B^*\\subsetneq \\widetilde B$, there always exists \n$\\widehat{B}$ with $|\\widehat{B}|>0$ and $ B^*\\subsetneq \\widehat B \\subsetneq \\widetilde B$ such that \n\\[\nPAI_\\phi(\\widetilde B)\n< PAI_\\phi(\\widehat B) =\\frac{\\displaystyle \\fint_{\\widehat B} \\phi}{\\displaystyle \\fint_{A} \\phi}\n<\\frac{R}{\\displaystyle \\fint_{A} \\phi}.\n\\]\n\\end{itemize}\n\n\nTherefore we see that searching for Borel subregions \nwith as high PAI as possible is not a well posed problem under general considerations. 
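This behaviour is easy to reproduce numerically. The following minimal Python sketch, which uses an arbitrarily chosen Gaussian density on a discretised unit square rather than an example taken from the literature cited here, evaluates $PAI_\\phi(B)$ for squares $B$ shrinking onto $\\arg\\max\\phi$ and shows the indicator increasing, up to the resolution of the grid.
\\begin{verbatim}
import numpy as np

# Discretise the study region A = [0,1]^2 and an (unnormalised) density phi
# peaked at (0.5, 0.5); all grid cells have equal area.
n = 400
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / (2 * 0.05**2))

mean_A = phi.mean()   # average of phi over A

# PAI of a square B of half-width h centred at the mode:
#   PAI_phi(B) = (average of phi over B) / (average of phi over A).
for h in (0.4, 0.2, 0.1, 0.05, 0.02):
    B = (np.abs(X - 0.5) <= h) & (np.abs(Y - 0.5) <= h)
    pai = phi[B].mean() / mean_A
    print(f"|B|/|A| = {(2*h)**2:6.4f}   PAI = {pai:8.2f}")
# The index keeps growing as B shrinks onto arg max phi, as argued above.
\\end{verbatim}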
Other possible optimization problems may be needed for practical applications. See \\cite{joshi2021considerations, drawve2016metric,mohler2020learning} for more details.\\\\\n\nDue to the fact that the PAI indicator prefers small area regions \nsome modifications have been introduced. Among them, in \n\\cite{joshi2021considerations} was introduced a penalized PAI where \nthe area ratio is penalized, that is, \n\\[\nPPAI_\\phi(B)= \\frac{H_\\phi(B)}{\\left(\\frac{|B|}{|A|}\\right)^\\alpha}\n= \n\\left(\\frac{|B|}{|A|}\\right)^{1-\\alpha}\nPAI_\\phi(B)=\\lambda(B)PAI(B).\n\\]\nHere there was introduced the penalization \n$\\lambda(B)$. The exponent $\\alpha$ may depend on $B$, e.g., $\\alpha=H_\\phi(B)$ (and in this case, for \nsmall hit rate, the indicator is penalized multiplicatively by the volume proportion of the region $B$ while for large hit rate close to 1 we do not have that penalization, \\cite{joshi2021considerations}).\nMany other penalization alternatives can also be consider at the light of practical applications. For instance, a penalization of the form \n\\[\n\\lambda(B)=\\frac{|B|}{|\\partial B|}\n\\]\nwhere $|\\partial B|$ is the surface volume of $\\partial B$ with $B$ regular enough. See \\eqref{weight} below. \\\\\n\nOne additional observation is the following. \nOne can think to compute an average of several PAI values over subregions. \nLet \\[B_{N}\\subseteq B_{N-1}\\subseteq \\cdots \\subseteq B_{1}\\]\nand consider \n\\[\n\\mbox{av} P=\\frac{1}{N} \\sum_{i=1}^{N} \nPPAI_\\phi(B_i).\n\\]\nDefine the piecewise constant layered function\n\\[\nK({\\bf x}) = \n\\frac{1}{N} \\sum_{i=1}^k \\lambda(B_i)\\frac{1}{|B_i|\\ \\ } \n\\mbox{ for } {\\bf x}\\in B_k \\setminus B_{k+1},\n\\]\n$k=1,\\dots,N$, where $B_{N+1}= \\emptyset$. We have that \n\\begin{equation}\\label{layeredcake}\n\\mbox{av} P=\\frac{|A|}{\\displaystyle \\int_{A}\\phi }\\ \\ \\int_{\\mathbb{R}^n}\\phi ({\\bf x})K({\\bf x})d{\\bf x}.\n\\end{equation}\nNote that the function $K$ depends on the sets \n$B_i$ and the weight $\\lambda$ but we not make this dependence explicit in our notation. \nWe conclude that the average value (of PAI indicators) over some regions corresponds to the inner product between $\\phi$ and a piecewise constant function that weights the regions by the inverse their areas penalized by $\\lambda$.\nThe layered function $K$ can be taught as an approximation of a kernel that has singularities in the region\n$B_N$ where it takes maximum value.\n\n\n\n\n\n\\subsection{PAI of a random variable}\nRecall that $\\psi$ is the density of the random variable $Y$ that we want to use to predict the hot spots of the random variable $X$. See \\cite{Chainey_2008}. Assume that $\\psi$ is continuous and that \n$\\int_A \\psi >0$. For any $s\\in[0,1]$ define \n$r=r(s)$ by \n\n\\begin{equation}\\label{eq:def:r}\nr(s)= \\inf \\Big \\{r\\geq 0 : \\quad \\int_{[\\psi >r]\\cap A}\\psi=(1-s) \\int_A \\psi\n\\Big \\}\n\\end{equation}\nand the subregion $B_s$ by \n\\begin{equation}\\label{eq:def:Bs}\nB_s=B_s(\\psi)=[\\psi>r(s)]\\cap A.\n\\end{equation}\nDefine the prediction accuracy index at level $s$ as the PPAI of the subregion $B_s$.\n\\begin{equation}\n p(s)= p(s;\\psi,\\phi)=PPAI_\\phi(B_s)= \\frac{ |A|}{\\displaystyle \\int_A \\phi} \\lambda(B_s)\\frac{\\displaystyle \\int_{B_s} \\phi }{\n |B_s| } .\n\\end{equation}\nAs before, the $s$ level\n prediction accuracy index is computed by \n dividing the hit rate by the volume percentage using the region \n $B=[\\psi>r(s)]$ and multiplying by a penalization. 
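In discretised form this construction reads as follows; the minimal Python sketch below uses $\\lambda\\equiv 1$ and two arbitrarily chosen densities $\\psi$ and $\\phi$ on the unit square, and serves only to illustrate the definitions~(\\ref{eq:def:r}) and (\\ref{eq:def:Bs}).
\\begin{verbatim}
import numpy as np

n = 300
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Illustrative densities on A = [0,1]^2: psi is the predictor, phi the target.
psi = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / (2 * 0.10**2))
phi = np.exp(-((X - 0.6)**2 + (Y - 0.4)**2) / (2 * 0.15**2))

def r_of_s(s):
    """Smallest threshold r with  int_{[psi>r] cap A} psi = (1-s) int_A psi."""
    vals = np.sort(psi.ravel())[::-1]        # psi values, largest first
    mass = np.cumsum(vals) / vals.sum()      # mass fraction above each value
    idx = np.searchsorted(mass, 1.0 - s)     # first index reaching (1-s)
    return vals[min(idx, vals.size - 1)]

def p_of_s(s):
    """Prediction accuracy index of B_s = [psi > r(s)] cap A, with lambda = 1."""
    B = psi >= r_of_s(s)   # >= so that the discrete set reaches the mass (1-s)
    return phi[B].mean() / phi.mean()

for s in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"s = {s:.1f}   p(s) = {p_of_s(s):6.2f}")
\\end{verbatim}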
\\\\\n\n\n\n\n\n\nFrom the comments on the previous subsection we have for the case $\\lambda=1$ (no penalization), \n\\begin{itemize}\n \\item $ p(s;\\psi,\\phi)\\leq \\frac{R}{ \\fint_A\\phi}$ \n \\item $\\frac{r(s)}{ \\fint_A\\phi} \\leq p(s;\\phi,\\phi)\\leq \\frac{R}{ \\fint_A\\phi}$ \n \\item If $B^*=\\phi^{-1}(R)\\cap A$ is such that $|B^*|>0$ then $p(1;\\phi,\\phi)=\\frac{R}{\\fint_A\\phi}$. \n \\item If $B^*$ defined above is such that $|B^*|=0$ and there exists $s \\in (0,1)$ such that $B_s \\subset A$ with $|B_s|>0$ and $B^*\\subsetneq B_s$, then for $\\epsilon>0$ small enough we have $B_s\\subsetneq B_{s-\\epsilon}$ and\n \\[\np(s-\\epsilon;\\phi,\\phi)0]$ (with finite volume) we have\n $p(0;\\phi,\\phi)=1$.\n\n\n\\end{itemize}\n\n\nAs mentioned before, in order to have an overall quantity, the performance may be measured by an average of the prediction accuracy index at different levels. More precisely, chose an integer $N$ and define \\begin{equation}\n P_N(\\psi,\\phi)=\\frac{1}{N}\\sum_{i=1}^Np\\left(\\frac{i}{N}; \\psi,\\phi\\right).\n\\end{equation}\n\nNote that, under appropriate assumptions we shall have a limiting value when $N\\to \\infty$. We define\n\\begin{equation}\\label{eq:def:Ppsiphi}\n P(\\psi,\\phi) = \\lim_{N\\to \\infty} P_N(\\psi,\\phi)= \\int_0^{1}\n p(s;\\psi,\\phi)ds.\n\\end{equation}\n\n\n\\subsection{Average PAI and kernels} \nNote that, by using Fubini's theorem (see also \\eqref{layeredcake}), \n\\begin{eqnarray*}\n P(\\psi,\\phi) &=& \\frac{1}{ \\fint_A\\phi} \\int_0^{1} \n \\frac{\\lambda(B_s)}{\n |B_s| } \\int_{B_s} \\phi({\\bf y})d{\\bf y} \\, ds \\\\&=& \n \\frac{1}{ \\fint_A\\phi} \\int_0^{1} \\displaystyle \\int_{\\mathbb{R}^n} \\phi({\\bf y})\n \\frac{\\lambda(B_s) }{\n |B_s| }{ 1}_{B_s}({\\bf y})d{\\bf y}\\, ds\\\\\n &=&\n \\frac{1}{ \\fint_A\\phi} \\int_{\\mathbb{R}^n} \\phi({\\bf y})\\left(\\int_0^{1}\n \\frac{\\lambda(B_s) }{\n |B_s| }{ 1}_{B_s}({\\bf y})ds \\right)d{\\bf y}=\n \\frac{1}{ \\fint_A\\phi} \\int_{\\mathbb{R}^n} \\phi K_\\psi d{\\bf y} .\n \\end{eqnarray*}\nHere we have introduced the ``layered'' function \n\\[\nK_\\psi({\\bf y})=\\int_0^{1}\n \\frac{\\lambda(B_s) }{\n |B_s| }{ 1}_{B_s}({\\bf y})\\, ds .\n\\]\nIn case $\\psi$ is continuous then $K_\\psi$ and $\\psi$ have the same level sets. Indeed, if we put \n$t(\\bf{y})$ defined such that $r(t(\\bf{y}))=\\phi({\\bf y})$ where\n$r(s)$ is defined in \\eqref{eq:def:r} we have that \n\\[\n{\\bf 1}_{B_s}({\\bf y})=\\begin{cases}\n1, & s\\leq t(\\bf{y})\\\\\n0, & s>t(\\bf{y}).\n\\end{cases}\n\\] Then, \n\\[\nK_\\psi({\\bf y}) =\n \\int_0^{t(\\bf{y})}\n \\frac{\\lambda([\\psi>r(s)]) }{\n |[\\psi>r(s)]| }ds = \\int_0^{t(\\bf{y})}\n \\frac{\\lambda(B_s) }{\n |B_s| }ds.\n\\]\nSummarizing, \n\\begin{equation} \\label{eq:PAIandInnerProduct}\nP(\\psi,\\phi)=\n \\frac{1}{\\fint_A\\phi} \\int_{\\mathbb{R}^n} \\phi({\\bf y}) K_\\psi({\\bf y}) d{\\bf y} .\n\\end{equation}\nThe $K_\\psi$ is a positive function with the same level curves as $\\psi$ with maximum value (and possible singularities) at \n$B^*=\\arg\\max_{\\mathbb{R}^n} \\psi\\cap A$. Appearance of singularities depends on the integrability of the function \n\\[\ns\\mapsto \\frac{\\lambda(B_s)}{|B_s|}=\\frac{\\lambda([\\psi>r(s)])}{|[\\psi > r(s)]|} \\mbox{ in }\n[0,1].\n\\]\nTherefore, the value $P(\\psi,\\phi)=\n \\frac{1}{\\fint_A\\phi} \\int_{\\mathbb{R}^n} \\phi K_\\psi d{\\bf x}$ will give higher weight to the regions containing $B^*=\\arg\\max_{\\mathbb{R}^n} \\psi \\,\\cap A$. 
To further clarify our point we present the next example. \n \\begin{figure}\\label{fig:ejemplo1}\n\\includegraphics[scale=0.15]{ejemplo.png}\n\\includegraphics[scale=0.172]{ejemplo2.png}\n\\caption{Illustration of example 1 for $p=3$ and $p=1.5$. The function \n$\\psi$ corresponds to the red dashed line and the function \n$K_\\psi$ to the solid green line.\nhttps:\/\/www.desmos.com\/calculator\/xcoi32o8bs .}\n\\end{figure} \n\\begin{example}\\normalfont\nConsider $n=1$, $p>0$ and $\\phi$ defined by \n\\[\n\\psi(x) = \\begin{cases}\n0, & x<-1,\\\\\n\\frac{1}{2}p(1-|x|)^{p-1} & -1\\leq x <1,\\\\\n0, &1\\leq x.\n\\end{cases}\n\\]\nWe have $\\int_{-1}^1 \\psi(x)=1$ and $\\psi\\geq 0$ and $\\max_{x} \\psi(x)=\\frac{1}{2}p$. Additionally if we take \n$A=[-1,1]$ we have $|A|=2$ and it holds that the mass of the region \n$[\\psi>r]$ is given by\n\\[\n\\int_{[\\psi > r]} \\psi =2\\int_0^{1-(\\frac{2r}{p})^\\frac{1}{p-1}} \\psi = 1-\\left(\\frac{2r}{p}\\right)^\\frac{p}{p-1}.\n\\]\nTherefore, given $s$, we can find $r(s)$ such that $\\int_{[\\psi > r]} \\psi =1-s$\n to obtain,\n \\[\nr=r(s)=\\frac{1}{2}ps^{\\frac{p-1}{p}}.\n\\]\nThen, the measure of the region $B_s= [\\psi> r(s)]\\cap A$ is the lengh of the interval \n$\n\\left[-1+\\left(\\frac{2r}{p}\\right)^\\frac{1}{p-1},1-\\left(\\frac{2r}{p}\\right)^\\frac{1}{p-1}\\right], \n$ \nthat is,\n\\[\n|B_s|=|[\\psi\\geq r(s)]|=2 \\left(1-\\left(\\frac{2r}{p}\\right)^\\frac{1}{p-1} \\right)=\n2 \\left(1-(s^\\frac{p-1}{p})^\\frac{1}{p-1} \\right)=2(1-s^\\frac{1}{p}).\n\\]\nGiven $y$, we can find $t(y)$ by\n\\[\nr(t(y))=\\frac{1}{2}pt^{\\frac{p-1}{p}}=\\frac{1}{2}p(1-x)^{p-1}\n\\]\nwhich gives \n\\[\nt=(1-x)^p.\n\\]\nThus we obtain for the case $\\lambda=1$ and for $y>0$\n\\begin{eqnarray}\nK_\\psi(y)&=& \\int_0^{t({y})}\n \\frac{1 }{\n |B_s| }ds\\\\ \n&=&\\int_0^{(1-y)^p} \\frac{1}{2(1-s^\\frac{1}{p})}ds\\\\\n&=& \\frac{1}{2} p \\left(\n-\\log(y)+\\sum_{\\ell=1}^{p-1} { {p-1}\\choose \\ell } \\frac{(-1)^\\ell}{\\ell} x^\\ell\n\\right). \\label{eq:Kejemplo}\n\\end{eqnarray}\nSee Figure \\ref{fig:ejemplo1} for an illustration.\n\n\n\nFrom \\eqref{eq:PAIandInnerProduct} we have $\\displaystyle P(\\phi,\\psi)= 2 \\int_{-1}^1 \\phi K_\\psi d{y}$, that is, an inner product with the function $K_\\psi$ in \\eqref{eq:Kejemplo}. Note that $K_\\psi$ puts a very high weight around the value $\\phi(0)$ and therefore ignoring other possible regions with hot spots. For instance if $\\{-0.5,0,0.5\\}=\\arg\\max_{z\\in \\mathbb{R}} \\phi $ then the possible hot spots of $\\phi$ at $0.5$ and $-0.5$ wont be detected but the value \nof $P(\\phi,\\psi)$ will be very high (as far as we select $p$ high enough). \\hfill \\Box \\vskip .5cm{}\n\\end{example}\n\n\\begin{remark} \\normalfont \\label{remarkPAI}\nWe then conclude that not the PAI at a particular level, not the average of several PAIs at different levels, are good measures or indicators in order to select one among different possible predictions of the random variable \n$X$, in particular, hots spots reported after selecting among several models using PAI as a main indicator are not adequate. See \\cite{joshi2021considerations,Chainey_2008}.\n\nIn order to make better hot spots predictions in practice it is recommended, before computing hots spots using PAI as indicator, that the selected model shall be decided using other more suitable measures. 
See for instance \\cite{cha2007comprehensive,gibbs2002choosing}.\n\\end{remark}\n\n\n\n\\section{The integral average transform }\\label{sec:IAT}\nIn the previous section we presented an application where integral of averages of a function on subregions are computed. We summarize this procedure as follows: consider a function $f: \\mathbb{R}^n\\to \\mathbb{R}$ and family of regions \nneighboring ${\\bf x}$, say $\\{B_{s,\\bf x} \\subset \\mathbb{R}^n\\}_{s>0,{\\bf x}\\in\\mathbb{R}^n}$ such that: \n$\\{{\\bf x}\\} \\subset B_{s,\\bf x}\\subset B_{s+\\epsilon,\\bf x}$ for all $s$ and $\\epsilon>0$.\nDefine the following integral average transform of $f$ by\n\\begin{equation}\\label{eq:defIntegralAverageTransform}\nu({\\bf x})=\\int_\\mathbb{R} \\frac{\\lambda(s,{\\bf x})}{|B_{s,{\\bf x}}|}\\int_{B_{s,{\\bf x}}} f(y)dy=\n\\int_\\mathbb{R} \\lambda(s,{\\bf x})\\fint_{B_{s,{\\bf x}}} f(y)dy.\n\\end{equation}\nThat is, a weighed integral of {\\it plain} average values of $f$ over the regions $B_{s,{\\bf x}}$, $s\\in\\mathbb{R}$. Here $\\lambda(s,{\\bf x})$ is a non-negative weight function. \n\n\n\n\nThis procedure is equivalent to integrating against a possible singular kernel. We formalize this interpretation in the next statement. \n\\begin{theorem}\nDefine\n\\[K({\\bf y} , {\\bf x})= \n \\int_\\mathbb{R}\n \\frac{\\lambda(s,{\\bf x}) }{|B_{s,{\\bf x}}| } {\\bf 1}_{B_{s, {\\bf x}}}({\\bf y} )ds\n\\]\nand consider $u$ defined in \\eqref{eq:defIntegralAverageTransform}.\nThen, \n$u({\\bf x})=\\int_{\\mathbb{R}^n} f({\\bf y})K({\\bf y},{\\bf x})dy$.\n\\end{theorem}\n\nAs in the previous subsections we can choose for instance, subsets related to levels curves of another functions $\\psi_{\\bf x}:\\mathbb{R}^n\\to \\mathbb{R}$, say, \n\\begin{equation}\\label{eq:def:B_sGeneralCase}\nB_{s,{\\bf x}} = \\psi_{\\bf x}^{-1}((-\\infty,s)).\n\\end{equation}\n\n\n\\begin{remark}\\normalfont \\label{remarkPAIandIAT}\nIn Section \\ref{sec:motivation} the average PPAI, $P(\\phi,\\psi)$, defined in \n\\eqref{eq:def:Ppsiphi}, was an integral average transform of \n$\\phi$ where the family of sets $B_s$ were given by levels sets of $\\psi$. In particular, in Section \\ref{sec:motivation} $\\psi$ is continuous and has a maximum value ${\\bf x}_0$, then $P$ corresponds to the integral average transform of $\\phi$ evaluated at ${\\bf x}_0$. The definition of subregions \n$B_s$ in \\eqref{eq:def:Bs} are related to superlevel sets of \n$\\psi$ while for this section, and the rest of the paper, we use $B_s$ as sublevel sets; See (\\ref{eq:def:B_sGeneralCase}). In the case of super level set the possible singularity of the associated kernel \n$K(\\cdot, {\\bf x} )$ will be related to the minimum value of \n$\\psi_x$ that, in order to fix ideas we are assuming to be a singleton. Recall that we are assuming \nthat \n$\\{{\\bf x}\\} \\subset B_{s,\\bf x}\\subset B_{s+\\epsilon,\\bf x}$ for all $s$ and $\\epsilon>0$.\n\\end{remark}\n\nLet us now consider a non-negative kernel given by $K:\\mathbb{R}^n\\times \\mathbb{R}^n\\to \\mathbb{R}$ a with a possible singularity at ${\\bf x}={\\bf y}$ and smooth for \n ${\\bf x}\\not ={\\bf y}$. We can then take, for instance, \n $\\psi_{\\bf x}({\\bf y})= 1\/K({\\bf x}, {\\bf y})^q$ for \n some $q>0$. 
Then\n\\begin{equation} \nB_{s,{\\bf x}} =\\left[\\frac{1}{s} < K(\\cdot , {\\bf x} )^q\\right]=\n\\{ { \\bf z} \\in \\mathbb{R}^n : \ns^{-1\/q}0$ and $||{\\bf x}-{\\bf y}||0$, the truncated integral, \n\\begin{eqnarray}\nu_R({\\bf x})&=&\\int_{0}^{R} \\frac{s}{n} \\fint_{||{\\bf x}-{\\bf y}||0$. For all $R$ large enough we have\n\\[\nu_R({\\bf z})=\nC_n(R)\\int_{\\mathbb{R}^n }f({\\bf y})d{\\bf y}+ \n\\int_{\\mathbb{R}^n}f({\\bf y}) G({\\bf z},{\\bf y})d{\\bf y}.\n\\]\nfor all ${\\bf z}\\in B_\\epsilon({\\bf x})$. \nIt is enough to take $R=R_0+||x||+\\epsilon$ where $B_{R_0}({\\bf 0})$ contains the support of $f$.\nWe then see that \n$-\\Delta u_R({\\bf x})= f({\\bf x})$.\n\\end{remark}\n\n\nWe have also the following result concerning a generalization of the mean value property. See \n\\cite{delaurentis1990monte}.\n\n\n\\begin{theorem}\nAssume that $u$ and $f$ are sufficiently smooth and satisfy\n\\[\n-\\Delta u = f \\mbox{ in } B_R({\\bf x}_0) .\n\\]\nThen the following \\it{mean value property} holds,\n\\begin{equation}\\label{eq:newmeanvalue}\nu({\\bf x}_0)= \\fint_{\\partial B_R({\\bf x}_0)} u({\\bf y})d{\\bf y}+\n\\int_{0}^R \\frac{s}{n} \\fint_{||{\\bf x}_0-{\\bf y}||0\\} \n\\end{equation}\nwith $u=0$ on $\\partial \\mathbb{R}^n_+.$\n\n\n\\begin{theorem} \nAssume that $n\\geq 3$, $f$ is regular enough and with compact support contained in $\\mathbb{R}^n_+$ and considered the integral average transform defined by\n\\begin{equation} \\label{eq:halfspaceAIT}\nu({\\bf x})= \\int_{0}^{\\infty} \\frac{s}{n} \\fint_{B_s({\\bf x})} f(y) { 1}_{B_s({\\bf x})\\setminus B_s({\\bf x}-2x_n{\\bf e}_n) }dyds.\n\\end{equation} Then \n\\[\n-\\Delta u = f\\mbox{ in } \\mathbb{R}^n_+\n\\]\nand $u=0$ on $\\partial \\mathbb{R}^n_+.$\n\\end{theorem}\n\\begin{proof}\nWe have (see \\cite{evans1998partial,gilbarg2015elliptic})\n\\begin{eqnarray}\nu({\\bf x})&=& \n\\int_{\\mathbb{R}^n}f({\\bf y}) \\left( G({\\bf x},{\\bf y})-\nG({\\bf x}-2x_n{\\bf e}_n,{\\bf y})d{\\bf y} \\right) \\label{Greendiff}\\\\\n&=& \\int_{0}^{\\infty} \\frac{s}{n}\\left( \\fint_{B_s({\\bf x})} f(y) dy\n- \\fint_{B_s({\\bf x}-2x_n{\\bf e}_n)} f(y)dy\\right)\nds\n\\end{eqnarray}\nHere we have used \\eqref{eq:GreenRnSolution}.\nThe result follows from recalling that the support of $f$ is contained in \n$\\mathbb{R}^n_+$ and by noting that \n\\[\n\\fint_{B_s({\\bf x})} f(y) dy\n- \\fint_{B_s({\\bf x}-2x_n{\\bf e}_n)} f(y)dy=\n\\fint_{B_s({\\bf x})} f(y) { 1}_{B_s({\\bf x})\\setminus B_s({\\bf x}-2x_n{\\bf e}_n) }dy.\n\\]\n\\end{proof}\n\n\n\n\n\n\\begin{remark} \\normalfont\nFor $n=2$ we have, for \n$R$ large enough \n\\begin{eqnarray*}\nG({\\bf x},{\\bf y})-\nG({\\bf x}-2x_n{\\bf e}_n,{\\bf y})d{\\bf y} &=&\nK_R({\\bf x},{\\bf y})-\nK_R({\\bf x}-2x_n{\\bf e}_n,{\\bf y})\n\\end{eqnarray*}\nand therefore\n\\begin{eqnarray}\n\\int_{\\mathbb{R}^n}f({\\bf y}) \\left( G({\\bf x},{\\bf y})-\nG({\\bf x}-2x_n{\\bf e}_n,{\\bf y})d{\\bf y} \\right)=\\\\\n\\int_{\\mathbb{R}^n}f({\\bf y}) \\left( K_R({\\bf x},{\\bf y})-\nK_R({\\bf x}-2x_n{\\bf e}_n,{\\bf y}) \\right)d{\\bf y}\\\\\n\\end{eqnarray}\nWe conclude the same result as in\n\\eqref{eq:halfspaceAIT} by taking \n$R\\to\\infty$.\n\\end{remark}\n\n\n\\begin{remark} \\normalfont \\label{remarkGreenIAT}\nIn the previous results we write the solution of the Poisson equation in the half space as an integral of averages of the forcing term over balls centered at \n${\\bf x}$. 
In the case the ball \n$B_s({\\bf x})\\not \\subset \\mathbb{R}^n_+ $, the function $f$ is cut to \n$f { 1}_{B_s({\\bf x})\\setminus B_s({\\bf x}-2x_n{\\bf e}_n) }$. See Figure \\ref{fig:halfplane}.\n\n\\end{remark}\n\n\\begin{figure}\n\\begin{tikzpicture}[scale=1.2]\n\n\\filldraw[color=red!60, fill=blue!5, very thick](0,1) circle (1.5); \n\\filldraw[black] (0,1) circle (2pt) node[anchor=west]{${\\bf x}$};\n\\filldraw[color=red!60, fill=red!0, very thick](0,-1) circle (1.5); \n\\filldraw[black] (0,-1) circle (2pt) node[anchor=west]{${\\bf x}-2x_n{\\bf e}_n$};\n \\draw[gray, thick] (-4,0) -- (4,0);\n\\end{tikzpicture}\n\\caption{Illustration of $B_s({\\bf x})\\setminus B_s({\\bf x}-2x_n{\\bf e}_n)$. Formula \\eqref{eq:halfspaceAIT} } illustrate that for the average of $f$ on $B_s({\\bf x})$ only values of the shaded region, $B_s({\\bf x})\\setminus B_s({\\bf x}-2x_n{\\bf e}_n)$, are considered in the integration.\n\\label{fig:halfplane}\n\\end{figure}\n\n\nFinally we mention that formula \\eqref{eq:GreenRnSolution} give us additional interpretation of the solution of the Poisson equations in domain with boundaries. Instead of subtracting two solution as in formula \\eqref{Greendiff}, we could extend the forcing term in such a way that averages centered at the boundary vanish. For instance, \nlet us consider the domain $\\mathbb{R}^n_+$ and ${\\bf x}\\in \\partial \\mathbb{R}^n_+$. In order to use \\eqref{eq:GreenRnSolution} to obtain the solution of \nthe Poisson equation in $\\mathbb{R}^n_+$ we could extend $f$ to \nthe whole $\\mathbb{R}^n$, say $E(f):\\mathbb{R}^n\\to \\mathbb{R}$ such that $E(f)|_{\\mathbb{R}^n_+} = f$ and with \n$\\int_{B_s({\\bf x})}f({\\bf y})d{\\bf y}=0$ for all \n ${\\bf x}\\in \\partial \\mathbb{R}^n_+$. We then have the following \nresults as an alternative the Theorem 3.4.\n\\begin{theorem} \\label{greeextension}\nAssume that $n\\geq 3$, $f$ is regular enough and with compact support contained in $\\mathbb{R}^n_+$ and considered \n\\[\nEf({\\bf y})=\n\\begin{cases}\n f({\\bf y}), & {\\bf y}\\in \\mathbb{R}^n_+,\\\\\n 0, &{\\bf y}\\in \\partial\\mathbb{R}^n_+,\\\\\n -f({\\bf y}-2y_n{\\bf e}_n), & \\mbox{elsewhere},\\\\\n\\end{cases} \n\\]\nand $u$ defined by\n\\begin{equation} \\label{eq:halfspaceAITextension}\nu({\\bf x})= \\int_{0}^{\\infty} \\frac{s}{n} \\fint_{B_s({\\bf x})} Ef(y) dyds.\n\\end{equation} Then \n\\[\n-\\Delta u = f\\mbox{ in } \\mathbb{R}^n_+\n\\]\nand $u=0$ on $\\partial \\mathbb{R}^n_+.$\n\\end{theorem}\n\\begin{proof}\nObserve that formula \\eqref{eq:halfspaceAITextension}\ncoincides with \\eqref{eq:halfspaceAIT}.\n\\end{proof}\nSimilar argument can be applied to other domains with simple boundaries such as strips and cubes. \n\n\\section{Final comments}\\label{sec:conclusions}\nIn this short note we introduced the integral average transform. Our motivation comes from the need of a better mathematical understanding of some practical measures such as the prediction accuracy index that is popular in problems related to predictive security. In this paper we have explained the mathematical and practical context of this application in order to motivate\nour main definition. The integral average transform is defined for a given function \n$f({\\bf y})$ and it is the result of integrating plain averages of $f$ over an family of sets (containing a point ${\\bf x}$) indexed by the integration argument. See \n\\eqref{eq:defIntegralAverageTransform} for a precise definition. 
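As a numerical sanity check of this ball-average point of view, the following short Python sketch verifies the mean value property~(\\ref{eq:newmeanvalue}) for a radial example in $\\mathbb{R}^3$; the test function is an arbitrary choice made for illustration and does not come from the sections above.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Radial test case in R^3:  u(x) = exp(-|x|^2),
# so that f = -Laplacian(u) = (6 - 4 r^2) exp(-r^2).
n = 3
R = 1.5
u = lambda r: np.exp(-r**2)
f = lambda r: (6.0 - 4.0 * r**2) * np.exp(-r**2)

# Spherical average of u over the sphere of radius R centred at 0
# (u is radial, so it equals u(R)).
sphere_avg = u(R)

# Ball average of f over B_s(0):  (3/s^3) * int_0^s f(r) r^2 dr.
def ball_avg_f(s):
    val, _ = quad(lambda r: f(r) * r**2, 0.0, s)
    return 3.0 * val / s**3

# Right-hand side of the mean value property:
#   u(0) = avg_{boundary of B_R} u + int_0^R (s/n) avg_{B_s} f ds.
correction, _ = quad(lambda s: (s / n) * ball_avg_f(s), 0.0, R)
print("u(0)                        =", u(0.0))
print("sphere average + correction =", sphere_avg + correction)
# The two printed numbers agree to quadrature accuracy.
\\end{verbatim}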
We show that this procedure is equivalent to computing integrals against a two variable kernel $K({\\bf y},{\\bf x})$ with a possible singularity at ${\\bf x}$. We also show that \nany kernel integral of the form $\\int f({\\bf y})K({\\bf y},{\\bf x})d{\\bf y}$ can also be interpreted as an integral average transform. Given the ubiquity of kernels $K({\\bf y},{\\bf x})$ in the solution of many problems, we believe that this novel interpretation may be worth of further investigation. For instance, other kernels can be considered such as Poisson's kernels. We can also associate some dynamics to the parameter $s$.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA seizure is the occurrence of an abnormal, intense discharge of a batch of cortical neurons. Epilepsy is a condition of the central nervous system characterized by repeated seizures. The distinction between epilepsy and seizure is important because of the fact their treatments are different \\cite{bromfield2006}. For epilepsy diagnosis, at least two unprovoked seizures should happen in quick succession. The risk of premature death in people with epilepsy is about three times that of the general population \\cite{WHO2019}. The EEG or electroencephalogram is an electrophysiological-based technique for measuring the electrical activity emerging from the human brain normally recorded at the brain scalp.\n\nIt is widely accepted that EEG originates from cortical pyramidal neurons \\cite{avitan2009eeg} that are positioned perpendicular to the brain's surface. The EEG records the neural activity, which is the superimposition of inhibitory and excitatory postsynaptic potentials of a sizeable group of neurons discharging synchronously. Cortical neurons are the electrically excitable cells present in the central nervous system. Through a special type of connection known as synapses, the information is communicated and processed in these neurons by electrochemical signaling.\n\nScalp Electroencephalogram (EEG) is one of the main diagnostic tests in neurology for detecting epileptic seizures. Most patients demonstrate some characteristic aberrations during an epileptic seizure\\cite{hopp2016}. A drawback in using EEG signals for clinical diagnosis is that the cerebral potential may change quite drastically by muscle movements or even the environment. The resultant EEG signal contains artifacts that make it harder to provide a diagnosis to the affected patients. Most of the medical data available publicly are unlabeled \\cite{lecun_2019}. Many approaches like semi-supervised learning and self-supervised learning have used the unlabeled data for training.\n\n\n\\section{Self-supervision in the medical domain}\nThere is an abundance of medical data on the internet, most of them being unlabeled. Traditional machine learning (ML) algorithms use labeled data to train supervised models. These unlabeled data are of no use to supervised models.\n\nSelf-supervised learning (SSL) methods, as the name suggests, use some form of supervision. Using self-supervision, the labels are extracted from the data by using a fixed pretext task. The task makes use of the innate structure of the input data \\cite{doersch2015unsupervised}. Some of the basic pretext tasks are given in figure \\ref{abs}. This strategy works well on images, as was proved in the ImageNet challenge \\cite{russakovsky2015imagenet}. 
Self-supervision models such as SimCLR \\cite{chen2020simple} and BYOL \\cite{grill2020bootstrap} perform well in image classification tasks even when they use only $1\\%$ of the ground-truth labels, reaching performance comparable to fully supervised training. Self-supervision also works well on time series data such as natural language \\cite{lan2019albert} and audio and speech understanding \\cite{bai2020representation}.\n\nThe works of \\cite{arora2019theoretical, wei2020theoretical} analyze why self-supervision is useful and what makes a reliable self-supervised pretext task. In predictive learning, the representations of unlabeled data are learned by predicting masked time series data. Recent work \\cite{saunshi2020mathematical} on time series data has shown how a contrastive learning technique can be reduced to a masking problem. However, most of these works focus on computer vision or text-related applications, and very few of them use EEG data.\n\nIn this work, we use different self-supervised strategies that exploit unlabelled data to increase the performance of supervised models. These strategies help the model learn natural features of the dataset, which in turn helps it distinguish diseased Electroencephalogram (EEG) signals from healthy ones.\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[width=10cm]{Images\/basic.PNG}\n\t\\caption{Basic pretext tasks for time series data}\n\t\\label{abs}\n\\end{figure}\n\n\n\\section{Related works}\nThere has been a shift from using convolutional neural networks (CNNs) alone to combining CNNs with graph neural networks. Many studies in this domain \\cite{rasheed2020machine, raghu2020eeg,asif2020seizurenet,ahmedt2020identification} assume a Euclidean structure in EEG signals. Applying a 3D-CNN to the EEG spectrogram violates the natural geometry of the EEG electrodes and the connectivity of the brain signals. EEG electrodes are placed on a person's scalp and hence their layout is non-Euclidean; the natural geometry of EEG electrodes is best represented on a graph \\cite{bronstein2017geometric,chami2020machine}. In \\cite{wang2020sequential}, the authors use a GNN to extract feature vectors and then apply convolutions to the intermediate results, whereas in \\cite{wagh2020eeg} features extracted with the Power Spectral Density (PSD) are used as node vectors for spectral graph convolutions \\cite{kipf2016semi}. Graph representation learning has been used heavily in modeling brain networks \\cite{bullmore2009complex}. Recent models such as \\cite{tang2021self} outperform previous CNN-based methods \\cite{saab2020weak} and CNN-LSTM-based methods \\cite{ahmedt2020identification}. In \\cite{tang2021self}, self-supervision is used to increase the performance of the supervised models: the model is pre-trained on $12\/60s$ EEG clips to predict the next $T^{'}$ seconds of EEG, and the learned weights are transferred to the model's encoder. Some studies \\cite{xu2020anomaly, banville2021uncovering, kostas2021bendr, martini2021deep} used a different pretraining strategy but did not use any graph-based methods to model the EEG signals.\n\n\n\\section{Experimental setup}\n\n\\subsection{Dataset}\nFor our testing, we used the publicly available Temple University Hospital EEG Seizure Corpus (TUSZ) v1.5.2.
\\cite{shah2018temple}, which is the largest publicly available seizure corpus to date with $5,612$ EDF files, $3,050$ annotated seizures from clinical recordings, and eight seizure types. Five subjects were present both in the TUSZ train and test subjects, thus not considered in our self-supervision strategy. The validation split was created from the train split by randomly selecting $10\\%$ of the subjects from the train split. This was done to ensure the test and validation performance reflects the performance of unseen subjects as seen in real-world scenarios. \n\n\\iffalse\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=5cm]{Images\/Channels.png}\n \\caption{Channels present in the TUSZ dataset}\n \\label{fig:I1}\n\\end{figure}\n\\subsection{Preprocessing}\n\\fi\nSince epileptic seizures have a specific frequency range in the EEG signal. \\cite{tzallas2009epileptic}, and hence it is intuitive to convert the raw EDF files into the frequency domain. The log amplitudes are obtained after applying the Fast Fourier transformation to all the non-overlapping $12s$ windows. This is done for all the windows for faster pre-processing. \n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{||c|cccc||}\n\\hline\n &\n \\multicolumn{1}{l}{\\textbf{\\begin{tabular}[c]{@{}l@{}}EEG files \\\\ (\\% Seizure)\\end{tabular}}} &\n \\multicolumn{1}{l}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Patients \\\\ (\\% Seizure)\\end{tabular}}} &\n \\multicolumn{1}{l}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Total \\\\ Duration \\\\ (Sec)\\end{tabular}}} &\n \\multicolumn{1}{l||}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Seizure \\\\ Duration\\\\ (Sec)\\end{tabular}}} \\\\ \\hline\n\\textbf{Train files} &\n 4599 (18.9\\%) &\n 4599 (34.1\\%) &\n 2710483 &\n 169793 (6.6\\%) \\\\ \\hline\n\\textbf{Test files} &\n 900 (25.6\\%) &\n 45 (77.8\\%) &\n 541895 &\n 53105 (9.8\\%) \\\\ \\hline\n\\end{tabular}\n\\caption{Detailed summary of TUSZ v1.5.2}\n\\label{T-1}\n\\end{table}\n\n\n\\section{Graph neural network for EEG signals}\n\\subsection{Representing EEG's as graphs}\nMultivariate time-series of EEG signals can be modeled as a graph $G$, where $G = {V, E, W}$. Here $V$ denotes the electrodes\/channels, $E$ denotes the set of edges, and $W$ is the adjacency matrix. Two types of graphs were formed according to \\cite{tang2021self} to validate our self-supervision strategies.\n\n\\textbf{Distance Graph:} To capture the natural geometry of the brain, edge weights are computed between two electrodes $v_i$ and $v_j$ is the distance between them, i.e. \n\\begin{equation}\n W_{ij} = exp(-\\frac{dist(v_i, v_j)^2}{{\\sigma}^2}) \\hspace{15pt} if \\hspace{5pt}dist(v_i, v_j) \\leq \\kappa\n\\end{equation}\n\nWhere $\\kappa$ is the threshold produced by a Gaussian kernel \\cite{shuman2013emerging}, which generates a sparse adjacency matrix, and $\\sigma$ is the standard deviation of the distances. This results in a weighted undirected graph for the EEG windows. The $\\kappa$ is set at $0.9$ as it resembles the EEG montage widely used in the medical domain \\cite{acharya2016american}.\n\n\\textbf{Correlation graph:} The edge weight $W_{ij}$ is computed as the absolute value of the cross-correlation between the electrode signals $v_{i}$ and $v_{j}$, normalized across the graph. 
To keep the more influential edges and introduce sparsity, the top $3$ neighbors for each node are kept, excluding self-edges.\n\n\n\\subsection{DCRNN for EEG signals}\nA Diffusion Convolutional Recurrent neural network (DCRNN)\\cite{li2017diffusion} models the \\textit{spatiotemporal dependencies} in EEG signals. DCRNN was initially proposed for traffic forecasting on road networks, where it is modeled as a diffusion process. The objective is to learn a function that predicts the future traffic speeds of a sensor network given the historic traffic speeds.\n\nThe spatial dependency of the EEG signals is similar to the traffic forecasting problem and is modeled as a diffusion process on a directed graph. This is because an electrode has an influence over other electrodes given by the edge weights. DCRNN captures spatial and temporal dependencies among time series using\ndiffusion convolution, sequence to sequence learning framework, and scheduled sampling.\n\n\n\\textbf{Diffusion Convolution} \nThe diffusion process is characterized by a bidirectional random walk on a directed graph signal $G$. Whereas the diffusion convolution operation over a graph signal $X \\in R ^{N\\times P}$ and a filter $f_\\theta$ is defined as:\n\\begin{equation}\n X_{:,p\\ \\star G \\ f_\\theta} = \\sum_{k=0}^{K-1} (\\theta_{k, 1}{(D_O^{-1} W})^k + \\theta_{k, 2}{(D_I^{-1} W^{T}})^k)X_{:,p} \\ \\ \\ for \\ p \\in \\{1, \\cdots, P\\}\n\\end{equation}\nwhere $X \\in R^{N \\times P}$ is the pre-processed EEG signal at a specific time instance $t \\in \\{1,\\cdots\\,T\\}$ with N nodes and P features. Here $\\theta \\in R^{K \\times 2}$ are the parameters of the filter and $D_O^{-1}W$, $D_I^{-1} W^T$ represent the transition matrices of the diffusion matrices of the inwards and outwards diffusion processes respectively\\cite{li2017diffusion}.\n\nFor modeling the temporal dependency in EEG signals, Gated Recurrent units \\cite{cho-etal-2014-properties}, a variant of Recurrent neural network (RNN) having a gating mechanism are used. To incorporate diffusion convolution, the matrix multiplications are replaced with diffusion convolutions \\cite{li2017diffusion}. For seizure detection, the model consists of several DCGRUs, stacked together and accompanied by a fully connected layer.\n\n\\section{Self-Supervision Methods}\nWe propose five pretraining strategies for time series data for model pretraining. These methods are tested on a T = $12s$ window, having $19$ channels and $200$ time points. We denote the matrix corresponding to the EEG signal as $S \\in R ^ {19 \\times 200}$. The objective function is to predict the original signal given the noisy\/masked signal. We use the mean absolute error between the predicted and the original EEG window as our loss function.\n\n\\subsection{Jitter}\n\\label{1}\nThe modified signal, $S^{'}$ was created by superimposing the main signal, $S$ with a noise signal. The noise has the same mean, $\\mu(S)$ and a fraction $L$ of the variance of the original signal. We set $L = 5\\%$ as experimented by \\cite{jayarathne2020person} for data augmentation. The random noise signal $M$ from a normal distribution, is created as $M=N(\\mu(S),0.05 \\cdot Var(S))$ . Then, \n\\begin{equation}\n S^{'}=S+M\n \\label{eq1}\n\\end{equation}\nSince the magnitude of the noise is much lower compared to the EEG signal, the resultant signal had a small shift along the y-axis, as shown in Figure \\ref{fig:I2}. 
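A minimal NumPy sketch of this operation is shown below; the random array standing in for a pre-processed window and the reading of the second argument of $N(\\cdot,\\cdot)$ as a variance are assumptions made only for this illustration of Eq.~(\\ref{eq1}).
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-processed 12 s EEG window S with 19 channels and
# 200 time points.
S = rng.standard_normal((19, 200))

L = 0.05   # fraction of the signal variance used for the noise

# Noise M ~ N(mean(S), L*Var(S)); reading the second argument as a
# variance, the standard deviation is sqrt(L*Var(S)).
M = rng.normal(loc=S.mean(), scale=np.sqrt(L * S.var()), size=S.shape)

S_jitter = S + M    # the pretext input of the jitter task
# Pretext training pair: predict the clean S from S_jitter, e.g. with a
# mean absolute error objective.
print(S_jitter.shape, float(np.abs(S_jitter - S).mean()))
\\end{verbatim}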
\n\n\\graphicspath{ {Images\/} }\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{jittering.png}\n    \\caption{EEG signal before and after introducing jitter}\n    \\label{fig:I2}\n\\end{figure}\n\n\\subsection{Random sample}\n\\label{2}\nThe noisy signal is created by randomly choosing a fraction of the time points, i.e., $t_{1},t_{2}, \\cdots t_{k}$, and replacing them with the average of their neighbors \\cite{jayarathne2020person}. For example, if the neighbors of $t_i$ are $t_{i-1}$ and $t_{i+1}$, then we replace $t_i$ with $\\frac{t_{i-1}+t_{i+1}}{2}$. The first and last points are not considered, as they do not have both neighbors. We chose $20\\%$ of the points in each channel and replaced them in this way, as shown in Figure \\ref{fig:I3}. This has a smoothing effect on the signal, as data points which were local maxima\/minima now lie on the straight line connecting their neighbors.\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{Images\/Random_Sampling.png}\n    \\caption{EEG signal before and after introducing random sampling}\n    \\label{fig:I3}\n\\end{figure}\n\n\\subsection{Remove a specific channel} \\label{3}\nTrying to predict a channel from the data of the other channels opens up several possibilities: it tests to what extent a particular channel can be expressed as a function of the other channels and therefore carries little new information. \n\n\nWe use this task for self-supervision. It can be applied either to a randomly picked channel from each lobe or to all the channels in turn. We select channel F3 from the frontal lobe for this task. Mathematically, we set $S_{K, i}=0$ for all $i$, where $K$ is the index of the removed channel. For illustration purposes, Figures \\ref{fig:I4} and \\ref{fig:I5} only show the first five channels, before and after channel F3 is replaced with zeros. \n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{Images\/Remove_F3_1.png}\n    \\caption{Original signal with five channels}\n    \\label{fig:I4}\n\\end{figure}\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{Images\/Remove_F3_2.png}\n    \\caption{Resultant signals after channel F3 is replaced with zeros}\n    \\label{fig:I5}\n\\end{figure}\n\n\\subsection{Predicting windows} \\label{4}\nInstead of removing all the points of a single channel, we select a small window in each channel. The noisy signal is created by replacing the selected time points of every channel with dummy values.\n\nIn other words, for each channel $i$ we choose a starting position $j$ at random and set the entries from $j+1$ to $j+l$ of that channel to zero, i.e., $S_{i,j+1}, \\cdots ,S_{i,j+l}= 0$. The window length $l$ is $20\\%$ of the total number of time points. This is shown in Figure \\ref{fig:I6}.\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{Images\/Zeroing_Window.png}\n    \\caption{Resultant signal after a window is zeroed, shown for a single channel}\n    \\label{fig:I6}\n\\end{figure}
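\n\nThe three masking strategies above can be summarized by the following NumPy sketch for a window $S$ of shape (channels, timepoints). It reflects our reading of the text rather than the exact implementation; in particular, the randomly chosen positions in \\texttt{random\\_sample} are shared across channels for brevity.\n\\begin{verbatim}\nimport numpy as np\n\ndef random_sample(S, frac=0.2, seed=0):\n    # replace a random fraction of interior points by the mean of their neighbors\n    rng = np.random.default_rng(seed)\n    S = S.copy()\n    idx = rng.choice(np.arange(1, S.shape[1] - 1),\n                     size=int(frac * S.shape[1]), replace=False)\n    S[:, idx] = 0.5 * (S[:, idx - 1] + S[:, idx + 1])\n    return S\n\ndef remove_channel(S, k):\n    # zero out one channel, e.g. k = index of F3\n    S = S.copy()\n    S[k, :] = 0.0\n    return S\n\ndef zero_window(S, frac=0.2, seed=0):\n    # zero a random contiguous window of length l = frac * T in every channel\n    rng = np.random.default_rng(seed)\n    S = S.copy()\n    l = int(frac * S.shape[1])\n    for i in range(S.shape[0]):\n        j = rng.integers(0, S.shape[1] - l)\n        S[i, j:j + l] = 0.0\n    return S\n\\end{verbatim}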
\n\\subsection{Jittering in random window}\nThis method is a combination of methods \\ref{1} and \\ref{4}. Noise with the same characteristics as in \\ref{1} is introduced in a small window of the same length as in \\ref{4}. This combines the advantages of both: no data is deleted, and the signal is only modified within a selected window.\nIn other words, we choose a window of length $l$ at random in each channel $i$. We generate a random noise $M$ with the same strategy as in \\ref{1} and add it inside the window selected as in \\ref{4}, in analogy with equation \\eqref{eq1}. Mathematically,\n\\begin{equation}\n    S^{'}_{i,j+k}=S_{i,j+k}+ M_{i,j+k}\n    \\quad \\textrm{for}\\ k=1\\ \\textrm{to}\\ l,\n\\end{equation}\nwhere $M=N(\\mu(S),0.05 \\cdot Var(S))$ as in \\ref{1}.\n\n\\begin{figure}[htp]\n    \\centering\n    \\includegraphics[width=12cm]{Images\/Jitterg_Window.png}\n    \\caption{A window is selected and noise is introduced}\n    \\label{fig:I7}\n\\end{figure}\n\\begin{table}[t]\n\t\\caption{Comparison of our proposed techniques with recent work}\n\t\\centering\n\t\\begin{tabular}{|l|l|} \n\t\t\\toprule\n\t\t\\textbf{Seizure detection model} & \\textbf{Seizure detection AUROC} \\\\ \n\t\t\\hline\n\t\t\\textbf{Traditional algorithms \\cite{tang2021self}} & \\\\ \n\t\t\\hline\n\t\tDense-CNN & 0.812 $\\pm$ 0.014 \\\\\n\t\tLSTM & 0.786 $\\pm$ 0.014 \\\\\n\t\tCNN-LSTM & 0.749 $\\pm$ 0.006 \\\\\n\n\t\t\\hline\n\t\t\\textbf{State of the art \\cite{tang2021self}} & \\\\ \n\t\t\\hline\n\t\tCorr-DCRNN w\/o Pre-training & 0.812 $\\pm$ 0.012 \\\\\n\t\tDist-DCRNN w\/o Pre-training & 0.824 $\\pm$ 0.020 \\\\\n\t\tCorr-DCRNN w\/ Pre-training & 0.861 $\\pm$ 0.005 \\\\\n\t\tDist-DCRNN w\/ Pre-training & 0.866 $\\pm$ 0.016 \\\\\n\t\t\n\t\t\\hline\n\t\t\\multicolumn{1}{l}{\\textbf{Our SSL strategies}} & \\\\ \n\t\t\\hline\n\t\t\n\t\tCorr-Jittering w\/ Pre-training & \\textbf{0.8709 $\\pm$ 0.0047} \\\\\n\t\tCorr-Jittering in a random window w\/ Pre-training & 0.8658 $\\pm$ 0.0271 \\\\\n\t\tCorr-Remove channel (F3) w\/ Pre-training & 0.8655 $\\pm$ 0.0035 \\\\\n\t\tCorr-Predicting windows w\/ Pre-training & 0.8691 $\\pm$ 0.0040 \\\\\n\n\t\tCorr-Random sample w\/ Pre-training & 0.8692 $\\pm$ 0.0042 \\\\\n\t\t& \\\\\n\t\t\\hline \n\t\tDist-Jittering w\/ Pre-training & 0.8749 $\\pm$ 0.000053 \\\\\n\t\tDist-Jittering in a random window w\/ Pre-training & 0.8759 $\\pm$ 0.000045 \\\\\n\t\tDist-Remove channel (F3) w\/ Pre-training & 0.8774 $\\pm$ 0.000035 \\\\\n\t\tDist-Predicting windows w\/ Pre-training & 0.8802 $\\pm$ 0.000012 \\\\\n\n\t\tDist-Random sample w\/ Pre-training & \\textbf{0.8816 $\\pm$ 0.000006} \\\\\n\t\t\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:Result}\n\\end{table}\n\nIn Figure \\ref{fig:I7}, only the selected part of the signal is modified, while the rest is left unchanged.\n\n\\section{Experimental results and analysis}\n\n\\subsection{Proposed self-supervision techniques improve performance}\nSelf-supervision is typically used to exploit unlabeled data when training a model; in contrast, the TUSZ dataset used in our study is fully labeled. The works of \\cite{arora2019theoretical, wei2020theoretical} analyze why self-supervision helps and what the qualities of a reliable self-supervision pretext task are, and the study we extend \\cite{tang2021self} reaches the same conclusion. We were able to widen the improvement over the non-pretrained baselines by using different pretraining strategies, especially on the distance-based graphs.\n\n\n\\subsection{Model training}\n\nThe area under the receiver operating characteristic curve (AUROC) is the standard metric for evaluating binary classifiers in the medical domain \\cite{ahmedt2020identification, o2020neonatal, asif2020seizurenet}. Therefore, we use the AUROC score as the evaluation metric for our self-supervision strategies. 
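\n\nAs a minimal sketch of this evaluation protocol, the window-level AUROC can be computed per run and then aggregated across runs, for example with scikit-learn; the function and variable names below are ours and are not taken from \\cite{tang2021self}.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ndef auroc_mean_std(run_labels, run_probs):\n    # one entry per training run: window-level labels (0 or 1) and the\n    # predicted seizure probabilities on the test split\n    scores = [roc_auc_score(y, p) for y, p in zip(run_labels, run_probs)]\n    return float(np.mean(scores)), float(np.std(scores))\n\\end{verbatim}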
\n\nThe pretext task of \\cite{tang2021self}, predicting the next $T = 12s$ window given the current $T = 12s$ window, has been replaced with these strategies. The batch size has been set to $1500$, while the other hyperparameters have been kept at the default values provided by \\cite{tang2021self}. Our pretraining and training were performed on a single NVIDIA RTX 5000 GPU. The self-supervision model parameters were randomly initialized for the pretraining phase. For the downstream model, the DCGRU decoder parameters were randomly initialized, while the encoder weights were set to those of the pretrained self-supervision encoder.\n\n\\subsection{Outperforming the state of the art}\nWe compare our five self-supervision models with the previous state of the art in Table \\ref{table:Result}. For each strategy and type of graph, we ran the pretraining twice and performed five training runs to obtain a reliable estimate. The mean and standard deviation are computed across all the resulting models.\n\nFor each run, the self-supervision model weights are transferred to the encoder of the seizure classification model, which is then trained on the labeled windows. With this simple modification, we surpass the state-of-the-art results by $1.56\\%$ in absolute AUROC.\n\n\\subsection{Lower deviation for distance graphs compared to correlation graphs}\nThe correlation-based graphs either produced a very marginal improvement or scored lower than the state of the art. We set the number of epochs for pretraining and training to $350$ and $100$, respectively. The loss for the distance-based graphs converged in both the training and testing phases over all the epochs. In contrast, the loss for the correlation-based graphs converged faster, at a higher value than for their distance-based counterparts, so these models were stopped early since we did not see any further improvement; this happened during both the training and testing phases. This resulted in a higher standard deviation and a lower AUROC score than for the respective distance-based counterparts.\n\nWe note that \\cite{song2018eeg, tang2021self, wang2018eeg, varatharajah2017eeg} use both spatial connectivity and functional connectivity. For spatial connectivity, they use a distance-based graph, with either the Euclidean distance or the actual distance measured between two points on a sphere. For functional connectivity, they use a binary relationship between the electrodes; the purpose of using functional connectivity is to exploit the dynamic nature of time-series data. To validate our results, we randomly re-ran two of these five strategies for both types of graphs, and the results were within the margin of error.
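\n\nFor completeness, the transfer of the pretrained encoder weights into the seizure-detection model described above can be sketched in PyTorch as follows. The class names and the plain GRU encoder are placeholders chosen so that the snippet is self-contained; the actual model uses stacked DCGRU layers \\cite{tang2021self}.\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass SSLModel(nn.Module):\n    # pretrained on one of the five pretext tasks (reconstruction head)\n    def __init__(self, in_dim=19, hid=64):\n        super().__init__()\n        self.encoder = nn.GRU(in_dim, hid, batch_first=True)\n        self.decoder = nn.Linear(hid, in_dim)\n\nclass DetectionModel(nn.Module):\n    # same encoder architecture, window-level classification head\n    def __init__(self, in_dim=19, hid=64):\n        super().__init__()\n        self.encoder = nn.GRU(in_dim, hid, batch_first=True)\n        self.classifier = nn.Linear(hid, 1)\n\nssl_model = SSLModel()        # assumed already pretrained\ndet_model = DetectionModel()  # classifier head stays randomly initialized\n\n# transfer only the encoder weights, then fine-tune on labeled windows\ndet_model.encoder.load_state_dict(ssl_model.encoder.state_dict())\n\\end{verbatim}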
\n\n\\section{Conclusion and future work}\nA notable point in our results is the difference in the magnitude of the standard deviation. For the correlation-based models it is of the same order as for the previous state-of-the-art models, whereas for the distance-based models it is considerably smaller. The distance graphs consistently outperformed the correlation-based graphs and benefitted the most from the five SSL strategies. The distance-based random sampling model performed best among all the models, and the correlation-based random sampling model came in a close second among the correlation-based ones. We outperformed the previous state of the art by introducing the self-supervision strategies alone, while the training phase was kept unchanged. Our future work will focus on applying the proposed self-supervision strategies to other time-series data.\n\n\\section*{Acknowledgement}\nWe thank the Science and Engineering Research Board (SERB) and PlayPower Labs for supporting the Prime Minister's Research Fellowship (PMRF) awarded to Pankaj Pandey. We thank the Federation of Indian Chambers of Commerce \\& Industry (FICCI) for facilitating this PMRF.\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe gauge mediated supersymmetry breaking (GMSB) was proposed as a way of removing the SUSY flavor problem \\cite{DineNeslson}. However, no satisfactory GMSB model satisfying all phenomenological constraints has yet emerged from superstring compactification.\n\nThe GMSB relies on dynamical supersymmetry breaking \\cite{Shirman}. The well-known GMSB models are an SO(10)$'$ model with $\\bf 16'$ or $\\bf 16'+10'$ \\cite{SO10}, and an SU(5)$'$ model with $\\bf 10'+\\overline{5}'$ \\cite{ADS}. If we also consider metastable vacua, a SUSY QCD type is possible in SU(5)$'$ with six or seven flavors, satisfying $N_c+1\\le N_f<\\frac32 N_c$ \\cite{ISS}. Three family standard models (SMs) with this kind of hidden sector are rare. In this regard, we note that the flipped SU(5) model of Ref. \\cite{KimKyaegut} has one {\\bf 16}$'$ and one {\\bf 10}$'$ of SO(10)$'$, which can therefore lead to a GMSB model. But as it stands, the confining scale of SO(10)$'$ is near the GUT scale and one has to break the group SO(10)$'$ by vacuum expectation values of {\\bf 10}$'$ and\/or {\\bf 16}$'$. Then, we do not obtain the spectrum needed for a GMSB scenario and go back to the gaugino condensation idea. If the hidden sector gauge group is smaller than SU(5)$'$, then it is not known which representation necessarily leads to SUSY breaking. The main problem in realizing a GMSB model is the difficulty of obtaining a supersymmetry (SUSY) breaking confining group with appropriate representations in the hidden sector while obtaining a supersymmetric standard model (SSM) with at least three families of the SM in the observable sector.\n\nIn this paper, we would like to address the GMSB in the orbifold compactification of the E$_8\\times$E$_8^\\prime$\\ heterotic string with three families at low energy. A typical recent example for the GMSB is\n $$W= m\\overline{Q}Q+\\frac{\\lambda}{M_{Pl}}Q\\overline{Q}f\n \\bar f+Mf\\bar f$$\nwhere $Q$ is a hidden sector quark and $f$ is a messenger. Before Intriligator, Seiberg and Shih (ISS) \\cite{ISS}, the GMSB problem had been studied in string models \\cite{GMSBst}. After \\cite{ISS}, owing to the new possibilities it opened up, the study of the GMSB has expanded considerably, and it is known that the above idea is easily implementable in ISS type models \\cite{Kitano}. Here, we will pay attention to the SUSY breaking sector, not discussing the messenger sector explicitly. The messenger sector $\\{f,\\cdots\\}$ can usually be incorporated using some recent ideas of \\cite{Kitano}, since many heavy charged particles appear at the GUT scale in string compactifications. 
The three family condition works as a\nstrong constraint in the search of the hidden sector\nrepresentations.\n\nIn addition, the GUT scale problem that the GUT scale is somewhat\nlower than the string scale is analyzed in connection with the GMSB.\nToward the GUT scale problem, we attempt to introduce {\\it two\nscales of compactification in the orbifold geometry}. In this setup,\nwe discuss physics related to the hidden sector, in particular the\nhidden sector confining scale related to the GMSB. If the GMSB scale\nis of order $ 10^{13}$ GeV, then the SUSY breaking contributions\nfrom the gravity mediation and gauge mediation are of the same order\nand the SUSY flavor problem remains unsolved. To solve the SUSY\nflavor problem by the GMSB, we require two conditions: one is the\n{\\it relatively low hidden sector confining scale} ($<10^{12}$ GeV)\nand the other is the {\\it matter spectrum allowing SUSY breaking}.\n\n\nToward this kind of GMSB, at the GUT scale we naively expect a {\\it\nsmaller coupling constant for} a relatively big {\\it hidden sector\nnonabelian gauge group} (such as SU(5)$'$ or SO(10)$'$) than the\ncoupling constant of the observable sector. But this may not be\nneeded always.\n\nThe radii of three two tori can be different in principle as\ndepicted in Fig. \\ref{fig:Thtorii}.\n\\begin{figure}[t]\n\\begin{center}\n\\begin{picture}(400,60)(0,0)\n\\SetWidth{0.9}\n \\Oval(70,35)(10,20)(0) \\Curve{(59,37)(70,33)(81,37)}\n \\Curve{(63,35)(70,37)(77,35)}\n \\Oval(280,35)(15,30)(0)\\Curve{(262,38)(280,33)(298,38)}\n \\Curve{(265,36)(280,39)(295,36)}\n \\Oval(170,35)(25,50)(0) \\Curve{(140,40)(170,30)(200,40)}\n \\Curve{(148,36)(170,42)(192,36)}\n\n\\Text(70,-5)[c]{(12)}\\Text(170,-5)[c]{(34)}\\Text(280,-5)[c]{(56)}\n\\end{picture}\n\\caption{Radii of three tori can be different.}\\label{fig:Thtorii}\n\\end{center}\n\\end{figure}\nFor simplicity, we assume the same radius $r$ for (12)- and (56)-\ntori. A much larger radius $R$ is assumed for the second (34)-torus.\nFor the scale much larger than $R$, we have a 4D theory. In this\ncase, we have four distance scales, $ R, r, \\alpha'=M_s^{-2},\\ {\\rm\nand}\\ \\kappa=M_P^{-1}, $ where $\\alpha'$ is the string tension and\n$M_P$ is the reduced Planck mass. The Planck mass is related to the\ncompactification scales by $M_P^2\\propto M_s^8 r^4 R^2$. Assuming\nthat strings are placed in the compactified volume, we have a\nhierarchy $\\frac1R<\\frac1r