\section{Introduction\label{S:Intro}}
Duffing equations describe the dynamics of systems with cubic nonlinearity, which can have either a single-well or a double-well potential. As distinct from a harmonic oscillator, described by a linear second-order differential equation, the Duffing oscillator in its original form has essentially only one extra nonlinear stiffness term. Despite its deceptive simplicity, the Duffing oscillator has been successfully used to model various physical processes such as stiffening strings, beam buckling, superconducting Josephson parametric amplifiers, and ionization waves in plasmas, as well as biomedical processes (see~\cite{lakshmanan1996chaos} and references therein). Duffing equations are easily implemented in electronic circuits. Several physical realizations of the Duffing equation in laboratory-based experimental models are described in Virgin's book~\cite{virgin2000introduction} and in other reviews (see, e.g.,~\cite{virgin2007vibration,kovacic2011duffing} and references therein). In particular, the double-well Duffing oscillator has been widely used to model a large variety of nonlinear systems, including slender aerostructures that may buckle under loads~\cite{virgin2007vibration}, microelectromechanical switches~\cite{qiu2004curved}, vibration-based energy harvesters~\cite{kazmierski2014energy,harne2013review}, electrical circuits~\cite{debnath1989remarks}, and optical systems~\cite{dykman1991stochastic}.

Coupled Duffing oscillators have attracted special attention due to their intriguing synchronization behavior~\cite{boccaletti2018synchronization}, including two-state intermittency~\cite{jaimes2004}, transition to hyperchaos~\cite{kapitaniak1993}, intermittent lag synchronization~\cite{pisarchik2005}, attractor annihilation in stochastic resonance~\cite{pisarchik2014control}, etc. Coupled Duffing oscillators have been extensively studied with respect to their parameters, such as the coupling strength, the nonlinear stiffness term, the external force, and the modulation of parameters accessible to the system. In particular, a ring of unidirectionally coupled Duffing oscillators exhibits especially interesting dynamics, such as a transition from periodic to chaotic and hyperchaotic behavior and a so-called rotating wave~\cite{perlikowski2010routes, borkowski2015experimental, borkowski2020stability}. The stability of this system was estimated using Lyapunov exponents~\cite{dabrowski2012estimation,balcerzak2018fastest}. Nevertheless, little attention has been paid to the effect of the damping term, although ring-coupled overdamped Duffing oscillators have been investigated in the presence of a delay in the coupling~\cite{tchakui2016dynamics} and of multistability~\cite{meena2020resilience,jaimesself}, and have even been proposed for spectrum-sensing applications~\cite{tang2016rf}.

It is well known that when motion takes place in an environment, the system experiences dissipation, which is modeled by a damping term proportional to the velocity~\cite{Landau}. In this regard, several papers have been devoted to the study of Duffing oscillators with linear and nonlinear damping terms (see~\cite{kovacic2011duffing} and references therein). The knowledge of the potential can be useful not only for conservative quantum systems, but also for understanding the dynamics of dissipative systems.
Although dissipative systems cannot, in general, be described by a proper potential~\cite{graham1984existence}, in some cases a potential can still be found. For example, in the case of a linear time-dependent damping term, the system can be viewed as an undamped oscillator with a variable mass, and therefore the corresponding potential can be obtained~\cite{cieslinski2010direct,barba2020lagrangians}. In addition, analytical studies of the transitions between three possible dynamical states (cluster synchronization, complete synchronization, and instability) were performed for a ring of $N$ diffusively coupled Duffing oscillators~\cite{kouomou2003transitions, yolong2006synchronization}.

In this Letter, we study the dynamics of three double-well Duffing oscillators coupled in a cyclic ring. In contrast to previous studies of the same coupling configuration, we focus on the effects of the damping term. We consider three different cases of the damping coefficient: fixed damping, damping proportional to time (overdamped case), and damping inversely proportional to time (quasi-conservative case). To characterize the system dynamics we use time series, power spectra, Poincar\'e sections, bifurcation diagrams, and Lyapunov exponents. The third case is the most interesting, because we observe, for the first time to the best of our knowledge, \emph{transient toroidal hyperchaos}.

The rest of the Letter is organized as follows. First, we describe the model of three unidirectionally ring-coupled damped Duffing oscillators. Then, we consider the case of a fixed damping term and demonstrate the existence of a rotating wave in the ring. After that, we study the case when the damping coefficient is proportional to time and demonstrate overdamped dynamics leading to a stable fixed point. Finally, we analyze the system dynamics when the damping term is inversely proportional to time and demonstrate transient toroidal hyperchaos.
At the end of the Letter, we summarize the main results.

\section{Model}
Let us consider the simplest form of the undamped Duffing oscillator without a driving force,
\begin{equation}
\label{DO1}
\ddot{x} + \omega_0^2 x +\delta x^3 = 0,
\end{equation}
where the dot denotes the time derivative, and $\omega_0^2$ and $\delta$ are nonzero real constants. It is easy to find the potential associated with the equation of motion~(\ref{DO1}), which has the form
\begin{equation}
\label{POT}
V(x)=\frac{1}{2} \omega_0^2 x^2 + \frac{1}{4} \delta x^4.
\end{equation}
The shape of the potential function~(\ref{POT}) depends on the values of the parameters $\omega_0^2$ and $\delta$. Basically, there are four different cases:
\begin{itemize}
	\item If $\omega_0^2 > 0$ and $\delta < 0$, the potential has a double-hump well with a local minimum at $x=0$ and two maxima at $\pm \sqrt{\omega_0^2/\left|\delta\right|}$.
	\item If $\omega_0^2 > 0$ and $\delta > 0$, the potential has a single well with a local minimum at $x=0$.
	\item If $\omega_0^2 < 0$ and $\delta < 0$, the potential has a single hump with a local maximum at $x=0$.
	\item If $\omega_0^2 < 0$ and $\delta > 0$, the potential has a double well with two minima at $\pm \sqrt{\left|\omega_0^2\right|/\delta}$ and a local maximum at $x=0$.
\end{itemize}
Here, we are interested in the last case, i.e., bistability. When a damping term is added to eq.~(\ref{DO1}), the motion is still governed by the potential~(\ref{POT}), but with dissipation, until the trajectory is attracted to a stable fixed point. The question is whether it is possible to define a proper potential for this kind of damped system.

It is well known that damped systems do not, in general, have a well-defined potential~\cite{graham1984existence}.
However, it is possible to find potentials for dissipative systems if the damping coefficient can be written as the logarithmic derivative of a certain function (see, e.g.,~\cite{barba2020lagrangians,cieslinski2010direct}). More explicitly, an equation of motion of the type
\begin{equation}
 \label{ODE2}
 \ddot{x}+\alpha(t)\dot{x}+\beta(x,t)=0
\end{equation}
can be described by a Lagrangian of the form
\begin{equation}
 \label{L4}
 L=\frac{1}{2}m(t)\dot{x}^2-V(x,t),
\end{equation}
where 
\begin{equation}
 \alpha(t)=\dot{m}/m, \qquad V(x,t)=m(t)\int^x \beta(z,t)dz,
\end{equation}
and $\alpha(t)$ is a time-dependent damping coefficient.

In other words, eq.~(\ref{ODE2}) can be viewed as describing undamped motion with a variable mass $m(t)$. 
Therefore, adding the damping term $\alpha(t)\dot{x}$ to eq.~(\ref{DO1}) we get
\begin{equation}
\label{DO2}
\ddot{x} + \alpha(t) \dot{x} + \omega_0^2 x +\delta x^3 = 0,
\end{equation}
and the corresponding potential is given by
\begin{equation}
\label{POT1}
V(x,t) = m(t)\left[ \frac{1}{2} \omega_0^2 x^2 + \frac{1}{4} \delta x^4 \right],
\end{equation}
where $\alpha(t)=\dot{m}/m$. In other words, this is the same potential~(\ref{POT}), with the same fixed points, but with a scaling factor $m(t)$.
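This variable-mass correspondence is straightforward to verify symbolically. The sketch below (an illustrative check of ours, not part of the original derivation) uses sympy to confirm that the Euler--Lagrange equation of the Lagrangian above, with $V$ from eq.~(\ref{POT1}), reproduces eq.~(\ref{DO2}) with $\alpha=\dot{m}/m$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
w0sq, delta = sp.symbols('omega0_sq delta')   # stand-ins for omega_0^2 and delta
x = sp.Function('x')(t)
m = sp.Function('m', positive=True)(t)        # arbitrary positive variable mass

# V(x,t) from eq. (POT1) and the Lagrangian L = (1/2) m(t) xdot^2 - V(x,t)
V = m * (sp.Rational(1, 2) * w0sq * x**2 + sp.Rational(1, 4) * delta * x**4)
L = sp.Rational(1, 2) * m * x.diff(t)**2 - V

# Euler-Lagrange equation: dL/dx - d/dt (dL/dxdot) = 0
el_lhs = euler_equations(L, x, t)[0].lhs

# Dividing by -m(t) should recover  xddot + (mdot/m) xdot + w0^2 x + delta x^3
recovered = sp.simplify(-el_lhs / m)
target = x.diff(t, 2) + (m.diff(t) / m) * x.diff(t) + w0sq * x + delta * x**3
assert sp.simplify(recovered - target) == 0
```

The check holds for an arbitrary positive $m(t)$, which is exactly the statement that eq.~(\ref{DO2}) is an undamped motion with variable mass.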
However, we will show in the following sections that different types of time dependence of the damping coefficient can change the dynamics of the ring-coupled Duffing oscillator system.

The ring of three unidirectionally coupled Duffing oscillators is described as follows~\cite{barba2020lagrangians,jaimesself}:
\begin{equation}
\label{ODES1}
 \begin{aligned}
 \ddot{x}_1+\alpha (t)\dot{x}_1+\omega_0^2 x_1+\delta x_1^3+\sigma(x_1-x_3) & = 0, \\
 \ddot{x}_2+\alpha (t)\dot{x}_2+\omega_0^2 x_2+\delta x_2^3+\sigma(x_2-x_1) & = 0, \\
 \ddot{x}_3+\alpha (t)\dot{x}_3+\omega_0^2 x_3+\delta x_3^3+\sigma(x_3-x_2) & = 0,
 \end{aligned}
\end{equation}
where $\sigma$ is the coupling strength.

In the rest of the paper, we set $\omega_0^2=-0.25$ and $\delta=0.5$, which correspond to a bistable Duffing oscillator with three fixed points of the potentials~(\ref{POT}) and~(\ref{POT1}), given by
\begin{equation}
\label{FP}
	\begin{aligned}
	x_{u,d} & = \pm \sqrt{\left|\omega_0^2\right|/\delta} = \pm \sqrt{1/2}, \\
	x_0 & = 0,
	\end{aligned}
\end{equation}
where the fixed points $x_{u,d}$ are stable (subscripts $u$ and $d$ correspond to positive and negative values, respectively) and $x_0$ is unstable.

By converting the second-order eqs.~(\ref{ODES1}) into first-order equations via the change of variables $\dot{x}_j=y_j$, the dynamics of the $j$th oscillator in the ring can be described by the following pair of first-order dimensionless ordinary differential equations:
\begin{equation}
\label{ODES9}
 \begin{aligned}
 \dot{x}_j&=y_{j},\\
 \dot{y}_j&=-\alpha (t) y_j-\omega_0^2 x_j-\delta x_j^3+\sigma(x_{j-1}-x_j),
 \end{aligned}
\end{equation}
for each oscillator $j=1, 2, 3$, where the index $j-1$ is taken cyclically, i.e., $x_{0}\equiv x_{3}$.
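For reference, eqs.~(\ref{ODES9}) are easy to integrate numerically. The following sketch is an illustrative implementation of ours (not the code used to produce the figures below), with the parameter values of the fixed-damping case considered in the next section:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters used throughout the paper; sigma = 0.5 lies in the quasiperiodic regime
OMEGA0_SQ, DELTA, ALPHA, SIGMA = -0.25, 0.5, 0.4, 0.5

def ring_rhs(t, s):
    """Right-hand side of eqs. (ODES9); s = (x1, x2, x3, y1, y2, y3)."""
    x, y = s[:3], s[3:]
    x_prev = np.roll(x, 1)                 # x_{j-1}, with x_0 = x_3 (cyclic ring)
    dx = y
    dy = -ALPHA * y - OMEGA0_SQ * x - DELTA * x**3 + SIGMA * (x_prev - x)
    return np.concatenate([dx, dy])

sol = solve_ivp(ring_rhs, (0.0, 300.0), [0.6, 0.8, -0.7, 0.0, 0.0, 0.0],
                max_step=0.05, rtol=1e-8, atol=1e-10)
```

Sweeping `SIGMA`, discarding transients, and collecting the local maxima of `sol.y[0]` would produce a bifurcation diagram of the kind discussed below.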

In the next sections, we consider three cases of the time-dependent damping coefficient: $\alpha=0.4$ (constant damping), $\alpha=t/4$ (damping linearly increasing in time), and $\alpha=1/t$ (damping inversely proportional to time, i.e., decaying to zero).

\begin{figure}[th!]
	\centering
 \includegraphics[width=0.45\textwidth]{Fig1a.eps} 
 \includegraphics[width=0.45\textwidth]{Fig1b.eps} 
 \caption{(a) Bifurcation diagram of the local maxima of $x_1$ and (b) four largest Lyapunov exponents $\lambda$ versus coupling strength $\sigma$ for $\omega_0^2=-0.25$, $\delta=0.5$, and $\alpha=0.4$.
 \label{fig1}}
\end{figure}

\section{Dynamics of the system with fixed damping}
First, we consider the case of a fixed damping coefficient, $\alpha(t)=0.4$. Due to the symmetrical coupling of the three Duffing oscillators in eqs.~(\ref{ODES1}), the analysis is carried out using the bifurcation diagram of the local maxima of the amplitude of one of the oscillators (e.g., $x_{1}$) and the four largest Lyapunov exponents $\lambda$ as functions of the coupling strength $\sigma$. These diagrams are shown in fig.~\ref{fig1}.
The observed bifurcation scenario from the equilibrium point to chaos and hyperchaos via subsequent Hopf bifurcations is in good agreement with the Landau--Hopf transition to turbulence~\cite{landau1944problem, hopf1948mathematical} and with the Newhouse--Ruelle--Takens theorem~\cite{newhouse1978occurrence}, which states that after successive Hopf bifurcations a torus decays into a strange chaotic attractor.

\begin{figure*}[th!]
 \centerline{%
 \begin{tabular}{c}
 (a) \\
 \includegraphics[width=0.27\textwidth]{Fig2a_i.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2a_ii.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2a_iii.eps} \\
 (b) \\
 \includegraphics[width=0.27\textwidth]{Fig2b_i.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2b_ii.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2b_iii.eps} \\
 (c) \\
 \includegraphics[width=0.27\textwidth]{Fig2c_i.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2c_ii.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2c_iii.eps} \\
 (d) \\
 \includegraphics[width=0.27\textwidth]{Fig2d_i.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2d_ii.eps} 
 \includegraphics[width=0.27\textwidth]{Fig2d_iii.eps} \\
 (e) \\
 \includegraphics[width=0.25\textwidth]{Fig2e_i.eps} 
 \includegraphics[width=0.25\textwidth]{Fig2e_ii.eps} 
 \includegraphics[width=0.25\textwidth]{Fig2e_iii.eps} \\
 \end{tabular}}
 \caption{(i) Time series, (ii) Poincar\'e sections, and (iii) power spectra for different values of the coupling strength: (a) $\sigma=0.354$, (b) $\sigma=0.5$, (c) $\sigma=1.078$, (d) $\sigma=1.98$, and (e) $\sigma=3.25$. $\omega_0^2=-0.25$, $\delta=0.5$, $\alpha =0.4$.
 \label{fig2}}
\end{figure*}

In fig.~\ref{fig1} we observe the well-known scenario from a stable steady state to hyperchaos as the coupling strength $\sigma$ is increased.
The details of the dynamical regimes along this route are presented in fig.~\ref{fig2} with time series, Poincar\'e sections, and power spectra. Starting from $\sigma=0$, a stable fixed point first transforms into a limit cycle through a Hopf bifurcation when the coupling reaches $\sigma_{1} \approx 0.35$, where the largest Lyapunov exponent reaches zero (black line in fig.~\ref{fig1}(b)). This regime is illustrated in fig.~\ref{fig2}(a) and is maintained within a relatively small region, $0.35<\sigma<0.5$. Then, at $\sigma_2=0.5$, the limit cycle transforms into a quasiperiodic regime (a 2D torus), when a second frequency appears in the power spectrum (fig.~\ref{fig2}(b)); this quasiperiodic regime appears when the second largest Lyapunov exponent becomes zero and the third largest Lyapunov exponent approaches zero. When $\sigma$ is further increased, a 3D torus arises at $\sigma_3=1.0$, when the third largest Lyapunov exponent approaches zero. This regime is observed for $1.0<\sigma<1.75$ and is characterized by a large number of frequencies in the power spectrum (fig.~\ref{fig2}(c)). At $\sigma_4=1.75$ the system becomes chaotic, as the largest Lyapunov exponent becomes positive (fig.~\ref{fig1}(b)) and the power spectrum becomes broad (fig.~\ref{fig2}(d)). A further increase in the coupling strength leads to hyperchaos at $\sigma_5$, when the two largest Lyapunov exponents become positive; this regime is illustrated in fig.~\ref{fig2}(e).

Another interesting dynamical feature of the ring-coupled oscillators is the existence of a rotating wave in the quasiperiodic and chaotic regimes, which consists in a constant phase difference between the envelopes (second frequency) of the quasiperiodic and chaotic oscillations of the individual oscillators.
This fascinating synchronization state was first discovered in ring-coupled Chua oscillators~\cite{matias1997,marino1999} and later found in ring-coupled Lorenz~\cite{matias1998,deng2002,sanchez2006} and Duffing oscillators~\cite{perlikowski2010routes,borkowski2015experimental,borkowski2020stability}. 
In fig.~\ref{fig3}(a) we plot the maximum spectral component $S_{0}$ at the dominant frequency $\Omega_{0}$ (the main oscillation frequency) and the rotating-wave spectral power $S_{W}$ at the wave frequency $\Omega_{W}$ (the envelope frequency) as functions of the coupling strength. The latter is obtained from the power spectrum of the Hilbert transform; the corresponding frequencies are shown in fig.~\ref{fig3}(b). One can see from fig.~\ref{fig3}(a) that both spectral components increase as $\sigma$ is increased up to $\sigma=3$, and then the powers begin to decrease. This decrease happens because in the hyperchaotic regime the spectral energy is distributed over a wide frequency range, as seen in fig.~\ref{fig2}(e)(iii). The wave frequency $\Omega_{W}$ is almost independent of $\sigma$ in the chaotic and hyperchaotic regions, i.e., for $\sigma>1.75$, although for very large couplings ($\sigma>3$) it is not well determined.

\begin{figure}[th!]
 \centerline{%
 \begin{tabular}{c} 
 \includegraphics[width=0.25\textwidth]{Fig3a.eps}
 \includegraphics[width=0.25\textwidth]{Fig3b.eps}
 \end{tabular}}
 \caption{(a) Maximum ($S_0$) and rotating wave ($S_W$) spectral components and (b) their frequencies ($\Omega_0$ and $\Omega_W$) of $x_1$ versus coupling strength $\sigma$. $\omega_0^2=-0.25$, $\delta=0.5$, $\alpha=0.4$. 
 \label{fig3}}
\end{figure}

The underlying mechanism of the rotating wave stability can be explained by the effect of an additional rotational degree of freedom and the symmetric structure of the coupling.
Previous results associate the rotating wave with the space-time symmetry of a ring of identical oscillators, which is invariant under a cyclic group~\cite{collins1994group,pazo2001transition}. Our results, obtained with double-well Duffing oscillators, confirm these previous findings.

\section{Damping coefficient proportional to time ($\alpha(t)=t/4$)}
Let us now consider the second case, when the damping coefficient is proportional to time, specifically, $\alpha(t)=t/4$. In this case, the system is extremely dissipative and therefore the rotating wave does not appear. Instead, starting from different initial conditions, the system tends to one of the two stable equilibrium points ($x_{1}=\pm 0.71$, $y_1=0$) of the potential $V(x_1,t)$ (see eqs.~(\ref{POT1}) and (\ref{FP})), as illustrated by the time series and phase portraits in figs.~\ref{fig4}(a) and (b), respectively.

\begin{figure}[th!]
	\centering
 \includegraphics[width=0.24\textwidth]{Fig4a.eps}
 \includegraphics[width=0.24\textwidth]{Fig4b.eps} 
 \caption{(a) Time series and (b) phase portraits on the ($x_1$, $y_1$) plane for $\omega_0^2=-0.25$, $\delta=0.5$, and $\alpha(t)=t/4$.
 \label{fig4}}
\end{figure}

Figures~\ref{fig5}(a) and (b) show, respectively, the bifurcation diagram of oscillator $x_{1}$ and the three largest Lyapunov exponents as functions of the coupling strength $\sigma$. One can see in fig.~\ref{fig5}(a) that the system switches to another coexisting fixed point even though the initial conditions are fixed. This happens because, for these values of the coupling parameter, the initial conditions fall into the basin of attraction of another equilibrium. As seen from fig.~\ref{fig5}(b), all Lyapunov exponents are negative for any value of the coupling strength, and the system becomes more stable as $\sigma$ is increased. Thus, the high damping coefficient does not allow the rotating wave to appear.
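The signs of the Lyapunov exponents reported in this Letter can be cross-checked with the standard two-trajectory (Benettin-type) renormalization scheme. The sketch below is a generic, simplified illustration of that method, not the algorithm of~\cite{dabrowski2012estimation,balcerzak2018fastest} used for the figures:

```python
import numpy as np

def largest_lyapunov(rhs, s0, dt=0.01, n_steps=100000, renorm=100, d0=1e-8):
    """Benettin-type estimate of the largest Lyapunov exponent of ds/dt = rhs(t, s).

    A reference trajectory and a companion displaced by d0 are integrated with RK4;
    every `renorm` steps the log of the separation growth is accumulated and the
    companion is rescaled back to distance d0.
    """
    def rk4_step(t, s):
        k1 = rhs(t, s)
        k2 = rhs(t + dt / 2, s + dt / 2 * k1)
        k3 = rhs(t + dt / 2, s + dt / 2 * k2)
        k4 = rhs(t + dt, s + dt * k3)
        return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    s = np.asarray(s0, dtype=float)
    p = s + d0 * np.random.default_rng(0).standard_normal(s.size)
    log_sum, t = 0.0, 0.0
    for i in range(1, n_steps + 1):
        s, p = rk4_step(t, s), rk4_step(t, p)
        t += dt
        if i % renorm == 0:
            d = np.linalg.norm(p - s)
            log_sum += np.log(d / d0)
            p = s + (d0 / d) * (p - s)   # pull the companion back to distance d0
    return log_sum / t
```

Applied to a strongly damped system such as the present $\alpha(t)=t/4$ case, this estimator should return a negative value, consistent with fig.~\ref{fig5}(b).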

\begin{figure}[th!]
 \centering
 \includegraphics[width=0.24\textwidth]{Fig5a.eps}
 \includegraphics[width=0.24\textwidth]{Fig5b.eps} 
 \caption{(a) Bifurcation diagram of $x_1$ and (b) three largest Lyapunov exponents versus coupling strength for $\omega_0^2=-0.25$, $\delta=0.5$, and $\alpha(t)=t/4$.
 \label{fig5}}
\end{figure}

\section{Damping coefficient inversely proportional to time ($\alpha(t)=1/t$)}
Finally, we consider the case when the damping coefficient is inversely proportional to time, specifically, $\alpha(t)=1/t$, i.e., the damping tends to zero over time. The time series and the corresponding instantaneous frequency of $x_1$, calculated using the Hilbert transform, are shown in figs.~\ref{fig6}(a) and (b), respectively. One can see that such a damping function results in transient behavior that persists as $t\to\infty$. Moreover, in fig.~\ref{fig6}(b) one can observe continuous changes of the instantaneous frequency during the whole time evolution. This evidences the existence of a persistent hyperchaotic transient.

\begin{figure}[th!]
	 \centering
 \includegraphics[width=0.24\textwidth]{Fig6a.eps}
 \includegraphics[width=0.24\textwidth]{Fig6b.eps} 
 \caption{(a) Time series of $x_1$ and (b) the corresponding instantaneous frequency, representing transient hyperchaos. $\omega_0^2=-0.25$, $\delta=0.5$, $\sigma=3.25$, and $\alpha(t)=1/t$.
 \label{fig6}}
\end{figure}

The transient hyperchaos manifests itself as follows. The dissipative system~(\ref{ODES1}) becomes conservative over time, causing hyperchaotic transient behavior, because a non-attracting chaotic set coexists with a chaotic attractor, i.e., there are two distinct forms of chaotic behavior.
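The instantaneous frequency used here is a standard construction based on the analytic signal. A minimal recipe (an illustrative sketch using scipy, not the authors' processing code) might look as follows:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, dt):
    """Instantaneous frequency (rad per unit time) from the analytic signal of x."""
    analytic = hilbert(x - np.mean(x))      # remove any DC offset first
    phase = np.unwrap(np.angle(analytic))
    return np.gradient(phase, dt)

# Sanity check on a pure tone x(t) = cos(2t): the result should hover near 2
dt = 0.01
tt = np.arange(0.0, 100.0, dt)
freq = instantaneous_frequency(np.cos(2.0 * tt), dt)
```

For the ring dynamics, `x` would be a simulated time series of $x_1$; the pure tone here only verifies the recipe (apart from edge effects of the finite-length Hilbert transform).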
The trajectory proceeding from randomly chosen initial conditions looks chaotic for a sufficiently long period of time, during which it visits various chaotic and quasiperiodic states rather abruptly, as seen from the time series in fig.~\ref{fig7}, which correspond to different time windows.

\begin{figure}[th!]
	 \centering
 \includegraphics[width=0.24\textwidth]{Fig7a.eps} 
 \includegraphics[width=0.24\textwidth]{Fig7b.eps}
 \caption{Time series of $x_1$ in different time windows, representing (a) chaotic and (b) quasiperiodic behavior. $\omega_0^2=-0.25$, $\delta=0.5$, $\sigma=3.25$, and $\alpha(t)=1/t$.
 \label{fig7}}
\end{figure}

Transient chaotic behavior appears in a conservative system, where the phase-space volume is constant under time evolution~\cite{lai2011transient}. In the Poincar\'e section on the ($x_1$, $y_1$) plane (fig.~\ref{fig8}) one can distinguish two regions. The region of many scattered points represents a non-attracting chaotic set, called a chaotic saddle~\cite{sabarathinam2015transient}, at the beginning of the transient (large-amplitude oscillations at $t<2500$), whereas the central bagel-shaped region of high-density points corresponds to the remaining part of the transient and represents toroidal hyperchaos.

\begin{figure}[th!]
	 \centering
 \includegraphics[width=0.45\textwidth]{Fig8.eps} 
 \caption{Poincar\'e section on the ($x_1$, $y_1$) plane representing transient toroidal hyperchaos for $\omega_0^2=-0.25$, $\delta=0.5$, $\sigma=3.25$, and $\alpha(t)=1/t$.
 \label{fig8}}
\end{figure}

Figures~\ref{fig9}(a) and (b) show the bifurcation diagram of the local maxima of $x_1$ and the four largest Lyapunov exponents versus $\sigma$, respectively. One can see that, as the coupling strength is increased, the system exhibits a fast transition from a fixed point to hyperchaos via a crisis bifurcation at a very small coupling strength, $\sigma=0.056$.
For larger $\sigma$, the two largest Lyapunov exponents are always positive and the third exponent is zero. This means that the system is hyperchaotic.

\begin{figure}[th!]
	 \centering
 \includegraphics[width=0.24\textwidth]{Fig9a.eps}
 \includegraphics[width=0.24\textwidth]{Fig9b.eps}
 \caption{(a) Bifurcation diagram of $x_1$ and (b) largest Lyapunov exponents versus coupling strength for $\omega_0^2=-0.25$, $\delta=0.5$, and $\alpha(t)=1/t$.
 \label{fig9}}
 \end{figure}
 
 
\section{Conclusion}
In this Letter, we have studied the dynamics of a system of three double-well Duffing oscillators unidirectionally coupled in a ring. We have considered three cases of the damping coefficient: constant damping, damping proportional to time ($\alpha(t)=t/4$), and damping inversely proportional to time ($\alpha(t)=1/t$). The system dynamics have been analyzed using time series, Fourier and Hilbert transforms, Poincar\'e sections, and Lyapunov exponents. In the first case, we observed the route from a steady state to hyperchaos through a series of torus bifurcations as the coupling strength is increased, as well as the existence of a rotating wave with a fixed phase difference between neighboring oscillators, which persists in the quasiperiodic, chaotic, and hyperchaotic regimes. These results are in good agreement with previous studies of single-well Duffing oscillators in the same coupling configuration. This similarity is explained by the fact that in the quasiperiodic and chaotic regimes the system becomes monostable.

In the second case, the system becomes highly dissipative and therefore does not generate a rotating wave. As a result, the system evolves to one of the stable steady states, depending on the initial conditions. During the transient behavior, the system can occasionally switch between the two coexisting states.

Finally, in the third case, the system becomes conservative as time $t$ grows to large values, since the damping vanishes. Transient toroidal hyperchaotic behavior was observed; during the transients, the system can visit different unstable periodic orbits.

Although this work was devoted to the study of a simple network motif of only three coupled oscillators, we expect that similar dynamics may occur in larger oscillatory networks (see, e.g., \cite{milo2002network}), especially in a larger ring of unidirectionally coupled Duffing oscillators. This is a promising topic for future research.

\acknowledgements
J. J. B. F. thanks the National Council for Science and Technology of Mexico (CONACYT) for the financial support granted through the scholarship number 924190. S. A. G. and A. N. P. acknowledge support from the Lobachevsky University Competitiveness Program in the frame of the 5-100 Russian Academic Excellence Project.


\section{Introduction} \label{S:intro}

\noindent
Lattice models have been extensively used in the physical sciences over the past decades to describe a wide variety of condensed-matter equilibrium and nonequilibrium phenomena; see, e.g., the reviews in~\cite{baxter, wu, alea}. Magnetization was the original application, but the list has grown to include structural transitions in DNA \cite{DNA, DNA1, DNA2}, polymer coiling \cite{polymer, polymer1}, cellular automata \cite{cellauto, cellauto1}, and gene regulation \cite{gene, gene1, gene2}, to name a few. The resulting models are certainly simplified, but what they lack in detail is compensated by their amenability to analytical and computational treatment -- and, occasionally, to exact solution.
Moreover, at least for the behavior in the vicinity of a continuous transition, the simplifications inherent in these approximate models may be presumed to be inconsequential. In short, lattice models have proved extremely useful in the context of the physical, biological and even chemical sciences. In more recent years, lattice models have also been applied to study social phenomena \cite{castellano, stauffer1, stauffer}, such as racial segregation \cite{race1, race2}, voter preferences \cite{voter, voter1, voter2}, opinion formation in financial markets \cite{opinion, opinion1, opinion2}, and language changes in society \cite{language1, language2, language3}, offering insight into socioeconomic dynamics and equilibria. In this paper we consider the problem of gang aggregation via graffiti in what is -- to the best of our knowledge -- the first application of lattice model results to the emergence of gang territoriality.

Scratching words or painting images on visible surfaces is certainly not a new phenomenon. Wall scribblings have survived from ancient times and have been used to reconstruct historical events and to understand societal attitudes and values. Today, graffiti (from the Italian \emph{graffiare}, to scratch) is a pervasive characteristic of all metropolitan areas \cite{Alonso2}. Several types of graffiti exist. Some are political in nature, expressing activist views against the current establishment; others are expressive or offensive manifestations on love, sex or race. At times, the graffiti is a mark of one's passage through a certain area, with prestige being attributed to the most prolific or creative tagger or to one who is able to reach inaccessible locations. The mark can be anything from a simple signature to a more elaborate decorative aerosol painting \cite{Phillips, Alonso}.
All of these types of graffiti are usually scattered around the urban landscape and do not appear to follow any predetermined spatio--temporal pattern of evolution. They affect the quality of life simply as random defacement of property, although sometimes they are considered art \cite{Knox}.

On the other hand, \textit{gang} graffiti represents a much more serious threat to the public, since it is usually a sign of the presence of criminal gangs engaged in illegal or underground activities such as drug trafficking or extortion \cite{Smith, Fagan}. Street gangs are extremely territorial, and aim to preserve economic interests and spheres of influence within the neighborhoods they control. A gang's ``turf'' is usually marked in a characteristic style, recognizable to members and antagonists \cite{Brown, LC1974}, with incursions by enemies often resulting in violent acts. The established boundaries between different gang factions are sometimes respected peacefully, but more often become contested locations where it is not uncommon for murders and assaults to occur \cite{BB1993}. It is here, on the boundaries between gang turfs, that the most intense graffiti activity is usually concentrated.

Several criminological and geographical studies have been presented connecting gang graffiti and territoriality in American cities \cite{Knox, Alonso, LC1974}. In particular, it is now considered well established that the spatial extent of a gang's area of influence is strongly correlated with the spatial extent of that particular gang's graffiti style or language. Furthermore, it is known that the incidence of gang graffiti may change in time, reflecting specific occurrences or neighborhood changes. For example, rival gangs may alternate between periods of truce and hostility, the latter being triggered by arrests or shootings.
Similarly, boundaries may shift location when the racial or socio--economic makeup of a neighborhood changes, creating new tensions, or when gang members migrate to new communities \cite{Alonso2}. In all these cases, periods of more intense gang hostility are usually accompanied by intense graffiti marking and erasing by rival factions in contested or newly settled boundary zones \cite{LC1974}.

The purpose of this paper is to present a mathematical model that includes relevant sociological and geographical information relating gang graffiti to gang activity. In particular, we study the segregation of individuals into well-defined gang clusters as driven by gang graffiti, and the creation of boundaries between rival gangs. We use a spin system akin to a 2D lattice Ising model to formulate our problem in the language of statistical mechanics. In this context, the site variables $s_{i}$ have two constituents, which represent `gang' and `graffiti' types, respectively, and \textit{phase separation} is assumed to be the proxy for gang clustering. For the purpose of simplicity, we consider only two gangs, hereafter referred to as the red and blue gangs, whose members we refer to as \textit{agents}. Lattice sites may be occupied by agents of either color or be void. Since gang members are assumed to tag their territory with graffiti of their same color, we also assign a graffiti index to each site representing the preponderance of red or blue markings.

In particular, agents are attracted to sites with graffiti of their same color, and avoid locations marked by their opponents. We deliberately avoid including direct interactions between gang members, so that ``ferromagnetic''-type gang--gang attractions exist only insofar as they are mediated by the graffiti.
On the one hand, this is mathematically interesting: in the broader
context of physical systems, interactions are often mediated, but
rarely are indirect interactions the subject of mathematical
analysis. On the other hand, by excluding direct gang interactions,
we can specifically focus on the role of graffiti in gang dynamics
and segregation. Furthermore, as will be discussed later, under
certain conditions gang--gang couplings may be unimportant, and one
of the primary conclusions of this work is that they appear to be
unnecessary to account for the observed phenomena of gang
segregation. In any case, we state, without proof, that all the
results of this work also hold if explicit agent--agent interactions
are included.

We thus write $s_i = (\eta_i, g_i)$, representing the agent and
graffiti configuration at site $i$, respectively. The former
component, $\eta_i$, is discrete, allowing, for simplicity, at most
one agent on each site. The latter, $g_i$, is continuous and, in
principle, unbounded. We let $\mathbf{s}$ denote a spin configuration
on the entire lattice and, in Section \ref{S:Hamiltonian}, propose a
Hamiltonian, $\mathscr{H}(\mathbf{s})$, to embody all relevant
sociological information. Once $\mathscr H(\mathbf{s})$ has been
determined, the probability for the occurrence of a spin
configuration $\mathbf{s}$ on a finite connected lattice
$\Lambda\subset \mathbb Z^{2}$ is determined by the corresponding
Gibbs distribution $\mathbb{F}(\mathbf{s})$. Note that, due to the
choices made on the range of the $\eta_i, g_i$ values,
$\mathbb{F}(\mathbf{s})$ is discrete in the $\eta$ variables and
continuous in the $g$ ones.
It is given by

\begin{equation*}
\mathbb{F}(\mathbf{s}) = \frac{1}{\mathcal{Z}}
\exp(-\mathscr{H}(\mathbf{s})),
\end{equation*}

\noindent
where $\mathcal{Z}$ is the partition function for the finite lattice
$\Lambda$, formally provided by the expression

\begin{equation*}
\mathcal{Z} = \sum_{\mathbf{s} \in \mathbb{S}}
\exp(-\mathscr{H}(\mathbf{s})).
\end{equation*}

\noindent
Here, $\mathbb{S}$ denotes the set of all possible configurations on
$\Lambda$, and the summation symbol is understood to be a summation
over the discrete components and an integration over the continuous
ones. As usual, we begin with a finite lattice and its associated
boundary conditions, and obtain infinite volume results by taking the
appropriate limits. Using techniques from statistical mechanics, we
prove that our system undergoes a phase transition as the coupling
parameters are varied. In the unconstrained ensemble, certain
parameter choices lead to predominance of either the red or blue
gang, indicating that, for configurations where the red to blue gang
ratio is fixed at unity, a phase separation will occur. Conversely,
in other regions of parameter space there is no dominance of either
gang type, indicating that the two are well mixed and/or dilute. In
this work we investigate under which conditions phase separation or
gang dilution is to be expected.

Our paper is organized as follows: in Section \ref{S:Hamiltonian}, we
give the details of the model and in Section
\ref{S:nearestNeighPhaseTransition}, we prove that a phase transition
exists as a function of the relevant parameters. Since information on
the location of \textit{all} transition points is, by necessity,
incomplete, we consider an approximation in the form of a simplified
mean field version of our Hamiltonian and derive the corresponding
mean field equations in Section \ref{S:MFHamiltonian}.
Here, we show that the mean field Hamiltonian also exhibits a phase
transition, and we further prove that the latter is continuous in one
specified region of parameter space and first order in another.
Finally, in Section \ref{S:discussion} we end with a discussion of
potential sociological and ecological implications of our results.


\section{The Hamiltonian} \label{S:Hamiltonian}

Let us define a spin system on a finite lattice $\Lambda \subset
\mathbb Z^{2}$. Here, the spin at each site $i \in \Lambda$ is
denoted by $s_i = (\eta_i, g_i)$ and, we reiterate, $\eta_i$ denotes
the \emph{agent spin} and $g_i$ represents the \emph{graffiti field}.
We allow the agent spin to be in the set $\{0, \pm 1\}$: $\eta_i =
-1$ if the agent at site $i$ belongs to the blue gang, $\eta_i = 0$
if there is no agent, and $\eta_i = +1$ if the agent is a red gang
member. The graffiti field is real-valued: $g_i > 0$ indicates an
excess of red graffiti, $g_i < 0$ an excess of blue graffiti, and, in
either case, $|g_{i}|$ indicates the magnitude of the excess. We now
introduce the formal Hamiltonian $\mathscr{H}(\mathbf{s})$,

\begin{equation} \label{E:Hamiltonian}
 - \mathscr{H}(\mathbf{s}) = J \sum_{\langle i,j\rangle} \eta_i g_j
 + K \sum_{i} \eta_i g_i + \alpha \sum_{i} \eta_i^2 - \lambda
 \sum_{i} g_i^2,
\end{equation}

\noindent
where $\mathbf{s}$ is a given configuration on the full $\Lambda$
lattice, $i$ and $j$ index its sites, and $\sum_{\langle i,j\rangle}$
is the sum taken over every bond between nearest neighbor sites
belonging to $\Lambda$. We discuss the role of spins on the lattice
boundary $\Lambda^c$ in Proposition \ref{3point4} and the following
sections. The expression in Eq.~(\ref{E:Hamiltonian}) will be
referred to as the GI--Hamiltonian (graffiti interaction Hamiltonian)
and its corresponding partition function will be denoted by an
unadorned ${\mathcal Z}$.
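Although it plays no role in the proofs, the GI--Hamiltonian is easy
to evaluate numerically. The sketch below (in Python; the lattice
size and coupling values are illustrative assumptions, not taken from
the analysis) computes $-\mathscr H(\mathbf{s})$ on a small periodic
lattice, counting each nearest-neighbor bond in both orientations so
that every graffiti variable couples to its four neighboring agent
spins:

```python
# Illustrative sketch: evaluate -H(s) of the GI-Hamiltonian on a small
# periodic lattice.  Parameter values are arbitrary demonstration
# choices, not taken from the paper.
import numpy as np

def neg_hamiltonian(eta, g, J, K, alpha, lam):
    """-H(s) = J * sum over ordered nearest-neighbor pairs of eta_i g_j
               + K sum_i eta_i g_i + alpha sum_i eta_i^2
               - lam sum_i g_i^2, with periodic boundary conditions."""
    nn = 0.0
    for axis in (0, 1):
        # eta_i g_{i-e} + eta_{i-e} g_i over both lattice directions
        nn += np.sum(eta * np.roll(g, 1, axis=axis)
                     + np.roll(eta, 1, axis=axis) * g)
    return (J * nn + K * np.sum(eta * g)
            + alpha * np.sum(eta ** 2) - lam * np.sum(g ** 2))

rng = np.random.default_rng(0)
eta = rng.integers(-1, 2, size=(8, 8))   # agent spins in {-1, 0, +1}
g = rng.normal(size=(8, 8))              # real-valued graffiti field
value = neg_hamiltonian(eta, g, J=1.0, K=0.5, alpha=0.2, lam=1.0)
```

By construction, $-\mathscr H$ is invariant under the simultaneous
flip $\eta \to -\eta$, $g \to -g$, the red--blue symmetry of the
model.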
Note that since $\eta_i$ is either $0$ or $\pm 1$, $\eta_{i}^{2} =
|\eta_{i}|$; however, we choose to display the above form to leave
open the possibility of $\eta_{i} \in \mathbb{Z}$. As discussed
earlier, there are no \textit{explicit} agent--agent interactions in
this model; indeed, the structure of the Hamiltonian assumes that
gang members interact with each other only via the graffiti tagging.
As a result, occupation of site $i$ by a gang member is
``energetically'' favored only if nearest-neighbor and on-site
graffiti are predominantly of its same color. The two coupling
constants, $J$ for nearest-neighbor interactions and $K$ for on-site
occupation, reflect this trend. The $\alpha \eta_i^2$ term represents
the proclivity of a given site to be occupied by agents regardless of
color, implying that gang members carry a strong tendency to occupy
unclaimed turf if $\alpha \gg 1$, while $\alpha \ll - 1$ represents a
natural paucity of gangs altogether. Finally, we assume graffiti
imbalance of either color to be energetically unfavorable via the
$-\lambda g_i^2$ term. This can be interpreted as natural decay of
graffiti due to the elements, or to police or community intervention.
For purposes of stability, $\lambda$ must be positive. Although the
interactions $J$, $K$ are tacitly assumed to be positive,
generalizations to negative values may be possible, and a
corresponding analysis may be undertaken given the proper
sociological interpretations.

\section{Phase transition in the GI--system}
\label{S:nearestNeighPhaseTransition}
\subsection{Low temperature phase}

\noindent
The basic strategy we follow to demonstrate an ordered, ``low
temperature'' phase is a \textit{contour} argument, illustrated here:
suppose that $\eta_i$, the agent spin at site $i$, differs from the
agent spin $\eta_j$ at a different site $j$.
The two agent spins can differ either by color, representing two
different gang affiliations, or by occupation, where one site is
occupied and the other is void. At the scale of nearest neighbors,
each edge in the lattice can be classified as either a
\emph{coherent} bond, where the adjoining lattice sites are occupied
and their agent spins are identical, or as an \emph{incoherent} bond
if this condition does not hold. Thus, explicitly, $(\eta_i, \eta_j)
= (1,1)$ or $(-1,-1)$ are coherent, and all the other types are not.

Let us now consider any path on the lattice that joins sites $i$ and
$j$. Since $i$ and $j$ have agent spins which are not identical, it
must be the case that on any path between $i$ and $j$ there is an
incoherent bond. Furthermore, these incoherent bonds must form a
closed contour on the dual lattice that separates $i$ from $j$. In
the following subsections, we derive a bound on the probability of
any such incoherent bonds and their aggregation into contours. When
these probabilities are small enough -- which happens in certain
regions of the parameter space -- we can establish a low temperature
phase. For example, the presence of a red agent at the origin will
imply that, with significant probability, the majority of the other
sites will also be occupied by red agents, showing the existence of a
red phase. Similarly, a blue phase can be shown to exist.

To achieve all of these ends, we will employ the methods of
\textit{reflection positivity} described in \cite{Biskup} and
\cite{SST}, which contain a detailed account of useful techniques
along with relevant classic references. In this paper, we will be
working on the $L\times L$ \textit{diagonal} 2D torus -- the SST --
which we denote by $\mathbb T_{L}$.
We will often refer to the Gibbsian probability measure on
$\mathbb T_{L}$ associated with the Hamiltonian in
Eq.~(\ref{E:Hamiltonian}), which we denote by $\mathbb P_{L}(\cdot)$.

\subsubsection{Reflection positivity}
By means of the reflection positivity of the Gibbs distribution, we
can easily bound the expectation of an observable which depends only
on the spins at any two neighboring lattice points. This result will
be used to build the contour argument that will lead us to prove the
existence of a low temperature phase. We thus briefly introduce the
concept of reflection positivity, referring the interested reader to
\cite{Biskup} for a more detailed discussion of these topics.

Consider a plane of reflection $p$ which intersects the torus in a
path running through next nearest (diagonal) pairs of sites. Let
$\vartheta_p$ be the reflection operator through $p$. On the SST,
this plane $p$ divides the lattice into two halves, identified as
$\mathbb{T}^{+}_L$ and $\mathbb{T}^{-}_L$, such that
$\mathbb{T}^{+}_L\cap \mathbb{T}^{-}_L = p$. Let $\mathscr U^{+}_{p}$
denote the set of functions which depend only on the spin variables
in $\mathbb{T}^{+}_L$, and similarly for $\mathscr U^{-}_{p}$.
The \textit{reflection map}, $\vartheta_{p}$, which in a natural
fashion identifies sites in $\mathbb{T}^{+}_L$ with those in
$\mathbb{T}^{-}_L$ via a reflection through $p$, can also be used to
define maps between $\mathscr U^{+}_{p}$ and $\mathscr U^{-}_{p}$:
specifically, if $f\in \mathscr U^{+}_{p}$, we define $\vartheta_{p}
f \in \mathscr U^{-}_{p}$ to be the function $f$ evaluated on the
configuration reflected from $\mathbb{T}^{-}_L$.

A measure $\mu$ is \emph{reflection positive} with respect to
$\vartheta_p$ if, for every $f,g \in \mathscr{U}^{+}_{p}$ or
$\mathscr U^{-}_{p}$, the following two properties hold:

\begin{enumerate}
 \item $\mathbb{E}_{\mu} (f \vartheta_p f ) \geq 0$,
 \item $\mathbb{E}_{\mu} (f \vartheta_p g) =
\mathbb{E}_{\mu}( g \vartheta_p f )$.
\end{enumerate}

\noindent
It is known (see, e.g., \cite{Biskup}) that $\mathbb P_L$ is
reflection positive with respect to $\vartheta_p$ for every $p$ of
the above described type. We next use reflection positivity to find
an upper bound on the expectation of observables defined on bonds. In
doing so, we use the following lemmas:

\begin{lemma} \label{L:tilingBound}
Let $\langle i,j \rangle$ denote a bond of $\mathbb T_{L}$ and let
$\alpha_{i}$ and $\gamma_{j}$ denote site events at the respective
endpoints of the bond. Let $\mathcal Z^{(\alpha,\gamma)}_{\mathbb
 T_{L}}$ denote the partition function (on $\mathbb T_{L}$) which
has been constrained so that at each site with the parity of $i$ the
translation of the event $\alpha_{i}$ occurs, and similarly for
$\gamma$.
Then, for $L = 2^{k}$ for some integer $k$,
$$
\mathbb P_{L}(\alpha_{i}\cap\gamma_{j}) \leq
\left [
\frac{\mathcal Z^{(\alpha,\gamma)}_{\mathbb T_{L}}}{\mathcal Z_{\mathbb T_{L}}}
\right ]^{\frac{1}{2V}},
$$
where $V = L^{2}$ is the volume of the torus.
\end{lemma}
\begin{proof}
The result of this lemma dates back to the original papers on the
subject. In particular, the use of bond events on the SST was
highlighted in \cite{SST}. A modern and complete derivation is
contained in \cite{Biskup}, Section 5.3.
\end{proof}

\noindent
For a slightly more general scenario, let us consider the bond
$\langle i,j\rangle$ and various events $\alpha_{i}^{1},
\gamma_{j}^{1}$, \dots, $\alpha_{i}^{n}, \gamma_{j}^{n}$, and let us
denote by $b_{1} = \alpha_{i}^{1} \cap \gamma_{j}^{1}$, \dots,
$b_{n} = \alpha_{i}^{n} \cap \gamma_{j}^{n}$ the corresponding bond
events as described. Letting $b = \cup_{j=1}^{n}b_j$, we find
$$
\mathbb P_{L}(b) \leq
\sum_{j=1}^{n}
\left [
\frac{\mathcal Z^{(\alpha_j,\gamma_{j})}_{\mathbb T_{L}}}{\mathcal Z_{\mathbb T_{L}}}
\right ]^{\frac{1}{2V}}
:=
\sum_{j=1}^{n}
\left [
\frac{\mathcal Z^{(b_{j})}_{\mathbb T_{L}}}{\mathcal Z_{\mathbb T_{L}}}
\right ]^{\frac{1}{2V}}.
$$

\noindent
Finally, we have
\begin{lemma}
\label{XDS}
Let $r_{1}, \dots, r_{m}$ denote translations of the bond $\langle
i,j\rangle$ and $b_{r_{j}}$ the translation of the bond event(s) $b$
described above.
Then
$$
\mathbb P_{L}(\cap_{j = 1}^{m}b_{r_{j}})
\leq
\left[
\sum_{j=1}^{n}
\left [
\frac{\mathcal Z^{(b_{j})}_{\mathbb T_{L}}}{\mathcal Z_{\mathbb T_{L}}}
\right ]^{\frac{1}{2V}}
\right ]^{m}.
$$
\end{lemma}
\begin{proof}
Again, we refer the reader to \cite{Biskup}, Section 5.3.
\end{proof}


\subsubsection{A bound on the incoherent bond probabilities}
\label{SSS:boundingTheProb}

\noindent
In order to prove a phase transition by a contour argument, we must
place an upper bound on the probability for the occurrence of any
type of incoherent bond, where the agent spins of neighboring sites
are different. There are four types of incoherent bonds, namely
$(\eta_i, \eta_j) = (-1,1)$, $(-1,0)$, $(1,0)$, and $(0,0)$,
regardless of order. Let us introduce the following notation:
consider undirected bonds between two particular neighboring lattice
sites $\langle i,j\rangle$, and let $(\cdot,\cdot)$ denote the event
of any of the nine coherent or incoherent bonds, so that

$$
(\cdot,\cdot) \in \{ (+,+), (-,-), (+,-), (-,+), (+,0), (0,+),
(-,0), (0,-), (0,0)\}.
$$

\noindent
Similarly, let $\mathcal Z^{(\cdot, \cdot)}_{\mathbb T_{L}}$ denote
the partition function restricted to configurations where all agent
spins are frozen in accord with the above described (chessboard)
pattern and the rest of the statistical mechanics is provided by the
graffiti field against this background \cite{Biskup, chessboard}.
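Since the agent spins are frozen in these restricted ensembles, each
graffiti variable decouples and contributes a one-dimensional
Gaussian integral of the form $\int_{\mathbb R}\text{e}^{cg - \lambda
g^{2}}\,dg = \sqrt{\pi/\lambda}\,\text{e}^{c^{2}/4\lambda}$, where
$c$ collects the couplings of $g$ to the surrounding frozen agents.
As an illustrative aside (parameter values are arbitrary), this
closed form can be checked by brute-force quadrature:

```python
# Numerical check of the single-site Gaussian integral
#   int_R exp(c*g - lam*g^2) dg = sqrt(pi/lam) * exp(c^2/(4*lam)),
# which generates the agent-constrained partition functions.
import numpy as np

def site_integral(c, lam, half_width=30.0, n=200_001):
    """Trapezoidal quadrature of exp(c*g - lam*g^2) on [-w, w]."""
    g = np.linspace(-half_width, half_width, n)
    f = np.exp(c * g - lam * g ** 2)
    dg = g[1] - g[0]
    return dg * (f.sum() - 0.5 * (f[0] + f[-1]))

def closed_form(c, lam):
    return np.sqrt(np.pi / lam) * np.exp(c ** 2 / (4.0 * lam))

# e.g. the all-red pattern with J = 1, K = 0.5 gives c = 4J + K = 4.5
lhs_val = site_integral(4.5, 2.0)
rhs_val = closed_form(4.5, 2.0)
```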
The following is readily obtained:
\begin{proposition}
The above described (agent--constrained) partition functions are
given by

\begin{eqnarray}
\nonumber
\mathcal Z_{\mathbb T_{L}}^{(0,0)} &=&
\left[\frac{\sqrt\pi}{\sqrt\lambda}\right]^{V}, \\
\nonumber
\mathcal Z_{\mathbb
 T_{L}}^{(-,-)} = \mathcal Z_{\mathbb T_{L}}^{(+,+)} &=&
\left[\frac{\text{e}^{\alpha}\sqrt\pi}{\sqrt\lambda}
 \text{e}^{\frac{1}{4\lambda}[4J + K]^{2}} \right]^{V}, \\
\nonumber
\mathcal
Z_{\mathbb T_{L}}^{(+,-)} = \mathcal Z_{\mathbb T_{L}}^{(-,+)} &=&
\left[\frac{\text{e}^{\alpha}\sqrt\pi}{\sqrt\lambda}
 \text{e}^{\frac{1}{4\lambda}[-4J + K]^{2}} \right]^{V}, \\
\nonumber
\mathcal
Z_{\mathbb T_{L}}^{(0,+)} = \dots = \mathcal Z_{\mathbb T_{L}}^{(-,0)}
&=& \left[\frac{\text{e}^{\frac{1}{2}\alpha}\sqrt\pi}{\sqrt\lambda}
 \text{e}^{\frac{1}{8\lambda}K^{2}} \right]^{V}.
\end{eqnarray}
\end{proposition}

\noindent
\begin{proof}
Since the agent variables are frozen, the $g_i$ Gaussian variables
are independent, and the above amount to straightforward Gaussian
integrations.
\end{proof}

\noindent
Using Lemma \ref{L:tilingBound} and the fact that the full partition
function satisfies
$\mathcal Z_{\mathbb T_{L}} \geq \mathcal Z_{\mathbb T_{L}}^{(+,+)}$,
we can write

\begin{eqnarray}
\nonumber
\mathbb P_{L}(0,0) &\leq&
\text{e}^{-\frac{1}{2}\alpha}\text{e}^{-\frac{1}{8\lambda}[4J +
 K]^{2}}, \\
\mathbb P_{L}(+,-) = \mathbb P_{L}(-,+) &\leq&
\text{e}^{-\frac{2JK}{\lambda}}, \\
\nonumber \mathbb P_{L}(+,0) =
\dots = \mathbb P_{L}(0,-) &\leq&
\text{e}^{-\frac{1}{4}\alpha}\text{e}^{-\frac{2J^{2} + JK +
 \frac{1}{16}K^{2}}{\lambda}}.
\end{eqnarray}

\noindent
We denote by $\varepsilon = \varepsilon (J,K,\lambda,\alpha)$ the sum
of the estimates for the probabilities provided by the right hand
sides of the preceding
display. For fixed $\alpha$ and $K > 0$, note that as
$J\lambda^{-1/2}\to\infty$ (or, better yet, as $J\lambda^{-1/2}$ and
$K\lambda^{-1/2}$ both tend to infinity) the quantity $\varepsilon$
tends to zero. This implies the suppression of all incoherent bonds,
so that the lattice must be almost fully tiled with coherent ones.
In particular, the lattice is nearly filled with agents which, at
least locally, are mostly of the same type. As will be demonstrated
below, this implies the existence of distinctive red and blue phases,
i.e., in the language of statistical mechanics, of a ``low
temperature'' regime. We formalize this result in the next
subsection.


\subsubsection{The contour argument}

We have now established all the tools we need to complete the contour
argument. Accordingly, we now show that two well--separated lattice
sites must, with probability tending to one, have identical agent
spins in the limit $\varepsilon \ll 1$. This in turn will imply the
existence of a low temperature phase.

\begin{theorem}
Consider the GI--system on $\mathbb Z^{2}$ and let
$\varepsilon(J,K,\lambda,\alpha)$ denote the quantity described in
the last paragraph of the previous subsection. Then, if the
parameters are such that $\varepsilon$ is sufficiently small, there
are at least two distinct limiting Gibbs states characterized,
respectively, by the abundance of red agents and the abundance of
blue agents. Moreover, this property holds in any limiting shift
invariant Gibbs state.
\end{theorem}
\begin{proof}
Let us start on $\mathbb T_{L}$ with $L = 2^{k}$. For $i,j \in
\mathbb T_{L}$, where $i$ and $j$ are well separated, let us consider
the event $v_{B} :=\{\eta_{i}\neq\eta_{j}\}\cup \{\eta_{i} = 0\}$.
We will show that, under the stated conditions and uniformly in $L$,
the probability of this event vanishes as $\varepsilon \to 0$. As
discussed previously, in order for this event to occur, the sites $i$
and $j$ must be separated by a closed contour consisting of bonds
dual to incoherent bonds. For $\ell = 4, 6, \dots$, let $\mathfrak
N_\ell = \mathfrak N_\ell(i-j, L)$ denote the \textit{number} of such
contours of length $\ell$ on $\mathbb T_{L}$. Then we claim that,
uniformly in $L$ and $i-j$,

\begin{equation}
\nonumber
\mathfrak N_\ell \leq 2\ell^{2}\lambda_{2}^{\ell},
\end{equation}
\noindent
where $\lambda_{2}$ (with $\lambda_{2} \approx 2.638 < 3$) is the
connectivity constant for $\mathbb Z^{2}$ \cite{connectivity}. A
word of explanation may be in order. The $\lambda_{2}^{\ell}$
generously accounts for walks of length $\ell$ in the vicinity of
site $i$, and the factor of two for walks in the vicinity of site
$j$. Finally, the factor of $\ell^{2}$ accounts for the origin of
the walk. Note this is an over-counting; e.g., contours which wind
the torus but do not necessarily ``enclose'' $i$ or $j$ are counted
twice. Using Lemma \ref{XDS}, we may now write
\begin{equation}
\nonumber
\mathbb P_{L}(v_{B})
\leq \sum_{\ell} \mathfrak N_\ell \varepsilon^{\ell}
\leq 2\sum_{\ell:\mathfrak N_\ell\neq 0}
\ell^{2}[\lambda_{2}\varepsilon]^{\ell}.
\end{equation}
The above obviously tends to zero as $\varepsilon \to 0$,
demonstrating that in finite volume the lattice is either populated
with mostly red agents \textit{or} mostly blue agents, depending --
with high probability -- on what is seen at the origin. The
implication of this result is that, for $\varepsilon$ sufficiently
small, there are at least two infinite volume Gibbs states -- which
can be realized as the limits of the appropriately conditioned
$\mathbb T_{L}$'s.
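(Parenthetically, the smallness of this contour sum for small
$\varepsilon$ is easy to gauge numerically; the following sketch
evaluates $2\sum_{\ell \geq 4,\ \ell\ \text{even}}
\ell^{2}[\lambda_{2}\varepsilon]^{\ell}$ with the quoted value
$\lambda_{2} \approx 2.638$. This is an illustrative computation
only, and forms no part of the proof.)

```python
# Illustrative evaluation of the contour-sum bound
#   2 * sum over even l >= 4 of l^2 * (lambda2 * eps)^l,
# which controls P_L(v_B) in the contour argument.
LAMBDA2 = 2.638  # approximate connectivity constant for Z^2

def contour_bound(eps, max_len=10_000):
    x = LAMBDA2 * eps
    return 2.0 * sum(l ** 2 * x ** l for l in range(4, max_len, 2))

small = contour_bound(1e-3)    # already far below 1
smaller = contour_bound(1e-4)  # shrinks rapidly as eps -> 0
```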
These states have one of two mutually exclusive characteristics: a
preponderance of red agents or a preponderance of blue agents. The
fact that the above must also hold in any shift--invariant Gibbs
state is the subject of Theorem 2.5 and its Corollary in \cite{BbKk},
with a slight extension provided by Corollary 5.8 in \cite{C2}.
\end{proof}


\subsection{High temperature phase}
As is sometimes (e.g., historically) the case in statistical
mechanics, it can be an intricate job to establish a high temperature
phase -- a region of parameters where the limiting Gibbs measure is
unique and correlations decay rapidly. Typically, one calls upon the
Dobrushin uniqueness criterion \cite{D}. However, for us this route
is interdicted by the unbounded nature of the $g_i$ graffiti field.
The strategy here will be percolation based: first, we establish the
so--called FKG property for all the associated Gibbs measures. A
relatively standard argument then shows that the necessary and
sufficient condition for uniqueness is that the average of $\eta_{i}$
-- akin to a magnetization -- vanishes in the state designed to
optimize this quantity. Finally, we develop a random cluster--type
expansion demonstrating that, under the expected high--temperature
conditions on the couplings, e.g., $\lambda \gg 1$, the stated
condition on this magnetization is satisfied. In addition,
high--temperature behavior should also be achieved under the
condition that agents are sparse. This requires an alternative
percolation criterion used in conjunction with the above mentioned
expansion.
In both scenarios, the rapid decay of correlations arises as an
automatic byproduct.

\subsubsection{FKG properties}
\label{FKGprop}
In this subsection we will demonstrate that the \textit{FKG Lattice
 Condition} (see, e.g., \cite{Li}, Page 78) is satisfied by any
finite volume Gibbs measure associated with the GI--Hamiltonian.
Let us start by noting that we can define a natural partial ordering
on the pair of states $s_{i}$ and $s_{i}^{\prime}$ via the notation
\begin{equation}
\nonumber
s_{i}
\succeq
s_{i}^{\prime},
\hspace{.6 cm}
\text{if}
\hspace{.3 cm} \eta_{i} \geq \eta_{i}^{\prime}
\hspace{.4 cm}
\text{and}
\hspace{.4 cm}
g_{i} \geq g_{i}^{\prime}.
\end{equation}
Further, we introduce the notation
$\mathbf{s} \succeq \mathbf{s}^{\prime}$ to signify that the above
holds for all the $s_{i}$, $s_{i}^{\prime}$ at each $i\in\Lambda$.
For individual spins $s_{i}$ and $s_{i}^{\prime}$, we also denote
$s_{i}\vee s_{i}^{\prime}:= (\text{max}\{ \eta_{i},
\eta_{i}^{\prime} \}, \text{max}\{ g_{i}, g_{i}^{\prime} \} )$, and
similarly for the ``minimum'' $s_{i}\wedge s_{i}^{\prime}$.
Finally, for spin configurations $\mathbf{s}$ and
$\mathbf{s}^{\prime}$, the configurations $\mathbf{s} \vee
\mathbf{s}^{\prime}$ and $\mathbf{s} \wedge \mathbf{s}^{\prime}$ are
defined as the sitewise maximum and minimum, respectively. The FKG
lattice condition -- conveniently stated for finite volume measures
-- is that for all $\mathbf{s}$, $\mathbf{s}^{\prime}$, the following
inequality holds:

\begin{equation}
\label{CZW}
\mathbb F_{\Lambda}(\mathbf{s}\vee \mathbf{s}^{\prime})\mathbb
F_{\Lambda}(\mathbf{s}\wedge\mathbf{s}^{\prime}) \geq\mathbb
F_{\Lambda}(\mathbf{s})\mathbb F_{\Lambda}(\mathbf{s}^{\prime}).
\end{equation}

\noindent
The well known consequence of the above is that any pair of random
variables that are both increasing with respect to the partial order
described above are positively correlated.

\begin{proposition}
\label{3point4}
The finite volume Gibbs measures associated with the GI--Hamiltonian
satisfy the FKG lattice condition.
\end{proposition}
\begin{proof}
We consider an arbitrary graph and, as will be made evident, the
proof automatically accounts for any fixed boundary conditions. Now,
as is well known, it is sufficient to establish that the lattice
condition in Eq.~(\ref{CZW}) holds when differences between
configurations are exhibited only on a pair of spin variables. The
fixed boundary spins may thus be regarded as part of the background,
which is common to all four possible agent--graffiti spin
configurations in question. Let us thus assume that the differences
between two configurations occur at sites $a$ and $b$ in the graph,
where certain specified variables have been ``raised'' above a base
configuration level $\mathbf{s}$. We denote the single raise
configurations by $\mathbf{s}_{a}$ and $\mathbf{s}_{b}$ and the
double raise by $\mathbf{s}_{ab}$.
Thus, it is sufficient to show that $\mathbb
F(\mathbf{s}_{ab})\mathbb F(\mathbf{s}) \geq \mathbb
F(\mathbf{s}_{a})\mathbb F(\mathbf{s}_{b})$. All told, there are
three possibilities to consider: graffiti--graffiti, gang--graffiti
and gang--gang raises on the $a$ and $b$ sites. For the mixed
gang--graffiti case we must also consider the $a = b$ possibility,
where the gang and graffiti spins have been ``raised'' at the same
site. We need not consider the normalization constant in any of
these cases, since it appears in identical roles on both sides of the
purported inequality; consideration of the Boltzmann factors is
sufficient. Let us introduce, in the setting of our general graph,
the interaction
\begin{equation}
\nonumber
 -\mathscr H(\mathbf{s}) = \sum_{\langle i,j \rangle}J_{i,j}
\eta_{i}g_{j} + \sum_{i}[\alpha_{i}\eta_{i}^{2} -
 \lambda_{i}g_{i}^{2}],
\end{equation}
where the first sum now extends over all edges considered to be part
of the graph and our only stipulation is that $J_{i,j} > 0$. Also,
we may formally include $i = j$ in this sum. Let us denote the
``raised'' graffiti variables via the positive increments $\delta
g_a$ and $\delta g_b$, so that, in the graffiti--graffiti case,
$g_{a} \to g_{a} + \delta g_{a}$ and $g_{b} \to g_{b} + \delta
g_{b}$ at sites $a$ and $b$. It is straightforward to see that
$\mathscr{H}(\mathbf{s}_{ab}) + \mathscr{H}(\mathbf{s}) =
\mathscr{H}(\mathbf{s}_{a}) + \mathscr{H}(\mathbf{s}_{b})$, and the
desired inequality holds as an identity. Similarly for the
gang--gang case. We can now consider the mixed case where, without
loss of generality, $g_{a} \to g_{a} + \delta g_{a}$ and $\eta_{b}
\to \eta_{b} + \delta\eta_{b}$, and for us, $\delta\eta_{b} \equiv
1$.
Here
$$ -(\mathscr{H}(\mathbf{s}_{a}) - \mathscr{H}(\mathbf{s})) =
\sum_{i\neq b}J_{i,a}\eta_{i}\delta g_{a} +J_{a,b}\eta_{b}\delta
g_{a} - \lambda \delta {g_{a}}^2,
$$
while
$$
-(\mathscr{H}(\mathbf{s}_{b}) - \mathscr{H}(\mathbf{s})) =
\sum_{j\neq a}J_{b,j}g_{j}\delta \eta_{b}
+J_{a,b}g_{a}\delta\eta_{b} + \alpha \delta {\eta_{b}}^2.
$$
However,
\begin{eqnarray}
\nonumber
-(\mathscr{H}(\mathbf{s}_{ab}) - \mathscr{H}(\mathbf{s}))
&=& [\sum_{i\neq b}J_{i,a}\eta_{i}\delta g_{a} +J_{a,b}\eta_{b}\delta
g_{a} - \lambda \delta {g_{a}}^2] +
 [\sum_{j\neq a}J_{b,j}g_{j}\delta \eta_{b}
+J_{a,b}g_{a}\delta\eta_{b} + \alpha \delta{\eta_{b}}^2]
+J_{a,b}\delta g_{a}\delta\eta_{b} \\
\nonumber
&\geq & 2 \mathscr{H}(\mathbf{s}) -
 \mathscr{H}(\mathbf{s}_a) - \mathscr{H}(\mathbf{s}_b).
\end{eqnarray}
\noindent
Combining the above results, we find that indeed
\begin{eqnarray}
\nonumber \mathscr{H}(\mathbf{s}_{ab}) + \mathscr{H}(\mathbf{s})
\leq \mathscr{H}(\mathbf{s}_a) + \mathscr{H}(\mathbf{s}_b).
\end{eqnarray}
\noindent
The same inequality can easily be shown in the mixed gang--graffiti
case with $a=b$ by assuming $g_a \to g_a + \delta g_a$ and $\eta_a
\to \eta_a + \delta \eta_a$, and by following the same steps as
above. This completes the proof.
\end{proof}

\noindent
As an immediate consequence, we can identify the boundary conditions
on $\Lambda$ which most favor the dominance of the red gang. Indeed,
it is now seen -- as was anyway clear heuristically -- that we must
make the boundary spins ``as red as possible'' in order for a
predominance of $\Lambda$ sites to be occupied by red agents. This
amounts, somewhat informally, to setting $g_{i} \equiv +\infty$ and
$\eta_{i} \equiv 1$ (which is anyway automatic if $K\neq 0$) all
along the boundary. This ``specification'', which seems a bit
arduous to work with, is not nearly as drastic as it sounds.
Let us start with some notation: for $\Lambda$ a finite subset of
$\mathbb Z^{2}$, let us define $\partial \Lambda$ as those sites in
$\Lambda^{c}$ with a neighbor in $\Lambda$, and $\textsc{d} \Lambda$
as those sites in $\Lambda$ with a neighbor in $\Lambda^{c}$.
Clearly, the only immediate consequence of the ``drastic'' boundary
condition is to force $\eta_{i} \equiv 1$ for $i\in\textsc{d}\Lambda$
and to bias, by at most $(3J +K)g_{i}$, the \textit{a priori}
Gaussian distribution of the $g$'s. We shall do that -- and a bit
more -- on $\partial \Lambda$, arguing that this, at most, is the
result of the ``drastic'' boundary condition on $\partial(\Lambda
\cup \partial \Lambda)$. Precisely, we define the \textit{red}
boundary condition on $\Lambda$ as $\eta_{i} \equiv 1$ and $g_{i}$
independently distributed as normal random variables with variance
$1/(2\lambda)$ and mean $(4J + K)/(2\lambda)$ for each $i\in\partial
\Lambda$. By the established monotonicity properties, these are
exactly the boundary conditions imposed on the slightly larger
lattice that will optimize the average of $\eta_{i}$ and $g_{i}$ for
any $i\in \Lambda$.

\subsubsection{A uniqueness criterion}

\noindent It is not hard to show, by monotonicity, that a limiting
\textit{red measure} exists along any thermodynamic sequence of
volumes, and that the limit is independent of the sequence and
therefore translation invariant. We shall denote this measure by
$\mu_{\textsc{R}}(\cdot)$ and by $\mathbb E_{\text{R}}(\cdot)$ the
corresponding expectations.
Similarly, for the blue measure we introduce
$\mu_{\textsc{B}}(\cdot)$ and $\mathbb E_{\text{B}}(\cdot)$. We can
thus state

\begin{proposition}
\label{REH}
The necessary and sufficient condition for uniqueness among the
limiting Gibbs states for the GI--system is that $\mathbb
E_{\text{R}}(\eta_{0}) = 0$, where $\eta_0$ is the spin at the
lattice origin.
\end{proposition}
\begin{proof}
For two measures $\mu_{1}$ and $\mu_{2}$, e.g., on $\{-1, 0,
1\}^{\mathbb Z^{2}}$, we use the notation $\mu_{1} \geq \mu_{2}$ to
indicate that, for any random variable $X$ which is increasing in all
coordinates, the expected value calculated via the $\mu_1$ measure is
always greater than that obtained via $\mu_2$:
$$
\mathbb E_{1}(X) \geq \mathbb E_{2}(X).
$$
\noindent
This is known as \textit{stochastic dominance}. Consider
$\mu_{\text{R}}(\cdot)$ which, by a slight abuse of notation, we
temporarily take to be the restriction of $\mu_{\text{R}}$ to agent
events. Suppose that $\mathbb E_{\text{R}}(\eta_{0}) = 0$. Then, by
translation invariance, we have $\mathbb E_{\text{R}}(\eta_{i}) = 0$
for all $i$. Similar considerations apply to the corresponding
$\mu_{\text{B}}(\cdot)$. It is immediately clear -- by symmetry or
stochastic dominance -- that $\mathbb P_{\text{R}}(\eta_{0} = 0) =
\mathbb P_{\text{B}}(\eta_{0} = 0)$, and thus the single site
distributions are identical. By the corollary to the Strassen
theorem \cite{St,LP}, since $\mu_{\text{R}} \geq \mu_{\text{B}}$
\textit{and} these measures have identical single site distributions,
they must be identical probability measures. Similar considerations
apply to the full measures, since the distribution of the $g_{i}$ is
determined by their conditional distributions given the local
configuration of the $\eta_i$'s.
Uniqueness is established since,\nif $\\mu_{\\odot}(\\cdot)$ denotes any other infinite volume measure\nassociated to the GI--Hamiltonian, we have $\\mu_{\\text{R}} \\geq\n\\mu_{\\odot} \\geq \\mu_{\\text{B}}$ which implies equality in light of\n$\\mu_{\\text{R}} = \\mu_{\\text{B}}$.\n\\end{proof} \n\n\\subsubsection{Proof of a high--temperature phase} \nWe shall develop a graphical representation for the GI--system akin to\nthe FK representation for the Potts model \\cite{FK} that, for all\nintents and purposes, is the same as the one used in \\cite{BCG},\nwhere only the case of bounded fields is explicitly analyzed.\nLet us then consider the GI--Hamiltonian in finite volume with \nall notation pertaining to boundary conditions temporarily suppressed. For\nfixed $\\mathbf{s}$, we may decompose the graffiti fields and agents\naccording to affiliation:\n\\begin{eqnarray}\n\\nonumber\ng_{i} &=& q_{i}\\vartheta_{i}; \\hspace{.25 cm} \\vartheta_{i} = \\pm 1, \\hspace{.15 cm} q_{i} = |g_{i}|, \\\\\n\\nonumber\n\\eta_{i} &=& r_{i}\\sigma_{i}; \\hspace{.25 cm} \\sigma_{i} = \\pm 1, \\hspace{.15 cm} r_{i} = |\\eta_{i}|,\n\\end{eqnarray}\nwhere the $\\sigma$'s and $\\vartheta$'s have the definitive character \nof \\textit{Ising} variables. We can now write\n$$\n\\text{e}^{J_{i,j}g_{i}\\eta_{j}} = \n\\text{e}^{-J_{i,j}q_{i}r_{j}}(R_{i,j}\\delta_{\\vartheta_{i},\\sigma_{j}} + 1),\n$$ where $R_{i,j} = R(J_{i,j},q_{i},r_{j}) :=\n\\text{e}^{2J_{i,j}q_{i}r_{j}} -1$. In our case, we have $J_{i,j} = J$\nif $i$ and $j$ are a neighboring pair and $J_{i,i} = K$; for the moment we\ndo not distinguish these notationally and consider a general $J_{i,j}$\nlabel. Thus\n\n$$\n\\text{e}^{-\\mathscr H(\\mathbf{s})} =\n\\prod_{(i,j)}\\text{e}^{-J_{i,j}q_{i}r_{j}}(R_{i,j}\\delta_{\\vartheta_{i},\\sigma_{j}}+1).\n$$\n\n\\noindent\nOpening the product, we select one term for each ``edge'': If the\n$R_{i,j}$ term is selected, we declare the edge to be\n\\textit{occupied}, otherwise it is \\textit{vacant}. 
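The edge--factor identity above can be verified case by case: when $\vartheta_{i} = \sigma_{j}$ the right side is $\text{e}^{-J_{i,j}q_{i}r_{j}}\text{e}^{2J_{i,j}q_{i}r_{j}} = \text{e}^{J_{i,j}q_{i}r_{j}}$, and otherwise it is $\text{e}^{-J_{i,j}q_{i}r_{j}}$, matching $\text{e}^{J_{i,j}g_{i}\eta_{j}}$ in both cases. A brief numerical confirmation (the value of $J$ is an arbitrary illustration):

```python
import math

# Verify e^{J g eta} = e^{-J q r} (R delta + 1), where
#   g = q * theta, eta = r * sigma, R = e^{2 J q r} - 1,
#   delta = 1 if theta == sigma else 0.
J = 0.8  # illustrative coupling
for q in (0.0, 0.5, 1.7):                 # q = |g|
    for r in (0, 1):                      # r = |eta|, eta in {-1, 0, 1}
        for theta in (-1, 1):
            for sigma in (-1, 1):
                g, eta = q * theta, r * sigma
                lhs = math.exp(J * g * eta)
                R = math.exp(2 * J * q * r) - 1
                delta = 1 if theta == sigma else 0
                rhs = math.exp(-J * q * r) * (R * delta + 1)
                assert abs(lhs - rhs) < 1e-12
```

Note that for $r = 0$ (a vacant site) the factor $R$ vanishes, so the edge can only be vacant, as it should be.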
It is noted here\nthat the edges should be interpreted as \\textit{directed}:\nall edges appear twice and we must regard $\\langle\ni,j\\rangle$ as distinct from $\\langle j,i\\rangle$; moreover, for $K\n\\neq 0$, the above is understood to include $i = j$.\nThe configurations of occupied edges will, generically, be denoted by\n$\\omega$. Summing over the Ising variables, we acquire the weights\n$$\nW(\\omega) = 2^{C(\\omega)}\\sum_{\\mathbf{q},\\mathbf{r}}\\prod_{(i,j)\\in\\omega}\nR(J_{i,j},q_{i},r_{j}),\n$$\n\\noindent\nwhere, as before, the summation notation also indicates integration\nover the continuous variables. In the above, $C(\\omega)$ denotes the\nnumber of connected components of $\\omega$; here connectivity is\ndeduced according to the \\textit{directed} nature of the edges or via\na double covering of the lattice.\nNormalizing these weights by the partition function, we obtain a\nprobability measure on the bond configurations $\\omega$. As will be\nmade explicit below, this probability measure on bond configurations\nis well defined in finite volume. Let us denote it\nby $\\mathbb P_{\\Lambda}^{\\odot}(\\cdot)$, where the $\\odot$\nnow denotes boundary conditions accounted for in a routine fashion.\nThen, for each $\\omega$ consisting of appropriate edges, $\\mathbb\nP_{\\Lambda}^{\\odot}(\\omega) \\in (0,1)$. We shall not discuss the\nproblem of infinite volume limits, which would take us too far afield,\nbut be content with statements that are uniform in volume. With\nregard to the latter, and of crucial importance for our purposes, is\nthe connection back to the spin--measure inherent in this\nrepresentation. For the Potts models, this was first elucidated in \\cite{ACCN} with the\ncomplete picture emerging in \\cite{ES}. 
In particular, for any site,\nthe contribution to the magnetization vanishes if the site\nbelongs to a cluster that is isolated from the boundary.\nThe principal objective for this representation is the following claim:\n\n\\begin{proposition}\n\\label{PIY}\nLet $\\Lambda \\subset \\mathbb Z^{2}$ be a finite connected set and\nconsider the above described representation in $\\Lambda$ with boundary\ncondition $\\odot$ on $\\partial \\Lambda$. Let $\\langle a,b\\rangle$\ndenote an edge with $a\\neq b$ and both $a$ and $b$ not belonging to\n$\\partial \\Lambda$. Let $\\mathbf{e}_{ab}$ denote the event that this\nedge is occupied and let $\\omega$ denote a configuration on the\ncomplement of $\\langle a,b\\rangle$. Then, for fixed $\\alpha$ and $K$,\nthere is an $\\varepsilon(J,\\lambda)$ with $\\varepsilon \\to 0$ as\n$J^{2}\/\\lambda\\to 0$ such that uniformly in $\\Lambda$, $\\omega$ and\n$\\odot$ -- as well as $K$ and $\\alpha$,\n$$\n\\mathbb P_{\\Lambda}^{\\odot}(\\mathbf{e}_{ab}) < \\frac{\\varepsilon}{1 + \\varepsilon}.\n$$\n\\end{proposition}\n\\begin{proof} \nLet $W_\\Lambda^{\\odot}(\\cdot)$ denote the configurational weights with\nassociated boundary conditions as described above. Then it is seen\nthat\n$$ \n\\frac{\\mathbb P_{\\Lambda}^{\\odot}(\\omega \\vee \\mathbf{e}_{ab})}{1 -\n \\mathbb P_{\\Lambda}^{\\odot}(\\omega \\vee \\mathbf{e}_{ab})} =\n\\frac{W_\\Lambda^{\\odot}(\\omega \\vee\n \\mathbf{e}_{ab})}{W_\\Lambda^{\\odot}(\\omega)}.\n$$ \nOur goal is to estimate the right hand side of the above,\nwhich thereby generates the quantity\n$\\varepsilon$ featured in the statement of this proposition. Noting\nthe positivity and product structure of the numerator and\ndenominator, we may regard the object on the right as the expectation\nwith respect to a weighted measure of the quantity $R_{ab}$ and we\nshall denote this by $\\mathbb E_{\\omega}(R_{ab})$. 
The latter will\nbe estimated via conditional expectation: Let $Q_{\\hat{q}_{a}}$ denote\na specification of the $q$--fields and agent occupation variables\nexcept for $q_{a}$ and let\n\\begin{eqnarray}\n\\nonumber\n\\varepsilon := \\sup_{\\omega, Q_{\\hat{q}_{a}}}\\mathbb E_{\\omega}(R_{ab}\\mid Q_{\\hat{q}_{a}}).\n\\end{eqnarray}\nObviously, $\\varepsilon \\geq \\sup_{\\omega}\\mathbb E_{\\omega}(R_{ab})$.\nAs for the complementary fields, there is not a great deal of\ndependence: In particular, all that is needed is that $r_{c} = 1$ for\nall $c$ such that $\\langle a, c\\rangle \\in \\omega$. Concerning the\noptimizing $\\omega$, non--local considerations dictate simply that\n$\\omega$ be such that $\\langle a,b\\rangle$ does not \\textit{reduce}\nthe number of components. Locally, as can be explicitly checked, or\nderived from monotonicity principles, the optimal scenario is when all\nbonds emanating from $a$ are present in the configuration. Thus we\nhave\n\\begin{eqnarray}\n\\nonumber\n\\varepsilon = \n\\frac{\\int\\text{e}^{-(4J+ K)q}R^{4}(J)R(K)\\text{e}^{-\\lambda q^{2}}dq}\n{\\int\\text{e}^{-(3J+K)q}R^{3}(J)R(K)\\text{e}^{-\\lambda q^{2}}dq}\n=\n2\n\\frac{\\int\\text{e}^{-\\lambda q^{2}} \\sinh^{4}(Jq)\\sinh (Kq) dq}\n{\\int\\text{e}^{-\\lambda q^{2}} \\sinh^{3}(Jq) \\sinh (Kq)dq},\n\\end{eqnarray}\nwhere in the first line $R(J) := R(J, q, 1)$. We claim that the\nfinal ratio is bounded by $J\/\\lambda^{1\/2}$ multiplied by a constant\nthat may be proportional to the ratio $K\/\\lambda^{1\/2}$. Indeed, let\nus substitute $\\omega = \\lambda^{1\/2}q$ and $\\kappa :=\nK\/\\lambda^{1\/2}$. 
The above quantity can thus be rewritten as\n\n\\begin{eqnarray}\n\\nonumber\n\\varepsilon = 2\\frac\n{\\int\\text{e}^{-\\omega^{2}}\\sinh \\kappa\\omega\\,\\sinh^{4} (J\\omega\/\\lambda^{1\/2})d\\omega}\n{\\int\\text{e}^{-\\omega^{2}}\\sinh \\kappa\\omega\\,\\sinh^{3} (J\\omega\/\\lambda^{1\/2})d\\omega}.\n\\end{eqnarray}\n\n\\noindent\nOur claim is obvious if $\\kappa\\to 0$, but we may wish to consider\ncases where $\\kappa$ stays bounded away from zero. In general, the\nintegrands are not dominated by large $\\omega$ and we may expand the\nfactors $\\sinh (J\\omega\/\\lambda^{1\/2})$ with the result\n\\begin{eqnarray}\n\\nonumber\n\\varepsilon \\to \\frac {2 J} {\\lambda^{1\/2}}\n\\frac{\\int\\text{e}^{-\\omega^{2}}\\sinh \\kappa\\omega\\cdot\n \\omega^{4}d\\omega}{\\int\\text{e}^{-\\omega^{2}}\\sinh \\kappa\\omega\\cdot\n \\omega^{3}d\\omega}.\n\\end{eqnarray}\nWe finally claim that the right side is bounded by a linear\nfunction of $\\kappa$:\n\\begin{eqnarray}\n\\nonumber\n\\frac{1}{1 + \\kappa} \\frac{\\int\\text{e}^{-\\omega^{2}}\\sinh\n \\kappa\\omega\\cdot \\omega^{4}d\\omega}{\\int\\text{e}^{-\\omega^{2}}\\sinh\n \\kappa\\omega\\cdot \\omega^{3}d\\omega} < B,\n\\end{eqnarray}\nfor some $B < \\infty$. This is indeed true as $\\kappa \\to 0$. We only\nneed to show that the inequality holds in the case \n$\\kappa\\to \\infty$. But here the factor\n$\\text{e}^{-\\omega^{2}}\\sinh \\kappa\\omega$ is, essentially, a Gaussian\nin the variable $\\omega - \\kappa\/2$ and the desired result follows.\n\\end{proof}\n\\begin{theorem}\n\\label{PKS}\nConsider the GI--system and let $\\varepsilon$ denote the quantity\ndescribed in Proposition \\ref{PIY}. 
Then for $\\varepsilon <\n\\varepsilon_{0}$, given by\n$$\n\\varepsilon_{0} + \\frac{1}{2}\\varepsilon_{0}^{2} = \\frac{1}{2},\n$$\nthere is a unique limiting Gibbs state featuring rapid decay of correlations.\n\\end{theorem}\n\\begin{proof}\nUsing the result of Proposition \\ref{PIY}, we shall compare the\ndescribed graphical representation with independent bond percolation\non $\\mathbb Z^{2}$. We start with a well known -- and readily\nderivable -- result: Let $Y_{1}, \\dots, Y_{N}$ denote an array of\nBernoulli random variables with collective behavior described by the\nmeasure $\\mu_{\\mathbf{Y}}$ and let $\\mathbf{Y}_{\\hat{Y}_{j}}$ \ndenote a configuration on the complement of $Y_{j}$. Let us now introduce\n\\begin{eqnarray}\n\\nonumber\n p_{j} = \\max_{\\hspace{3 pt}\\mathbf{Y}_{\\hat{Y}_{j}}} \\mathbb\nP_{\\mathbf{Y}}(Y_{j} = 1\\mid \\mathbf{Y}_{\\hat{Y}_{j}})\n\\end{eqnarray}\n\\noindent\nto denote the maximal conditional probability of\nobserving $\\{Y_{j} = 1\\}$. Finally, let $X_{1}, \\dots, X_{N}$ denote a\ncollection of independent Bernoulli random variables with parameters\n$p_{1}, \\dots , p_{N}$. Then, denoting the independent measure by\n$\\mu_{\\mathbf{X}}$, we have\n\\begin{equation}\n\\nonumber\n\\mu_{\\mathbf{X}} \\geq \\mu_{\\mathbf{Y}}.\n\\end{equation}\n\\noindent\nThus we may bound the probabilities of increasing events in the\ngraphical representation by the corresponding probabilities from\nindependent percolation on $\\mathbb Z^{2}$ with bond occupation\nprobabilities determined by the $\\varepsilon$ from Proposition\n\\ref{PIY}. 
However, we must note that the relevant percolation problem\nhas multiple types of edges.\nThe $\\varepsilon_{0}$ in the statement of\nthis theorem bounds the probability of the event\n$\\mathbf{e}_{ab}\\cup\\mathbf{e}_{ba}$ by $\\frac{1}{2}$.\nIf the\nclusters of the featured representation fail to percolate, then, as\n$\\Lambda \\nearrow \\mathbb Z^{2}$, the origin is disconnected from the\nboundary with a probability tending to one. As discussed just prior\nto Proposition \\ref{PIY}, this implies $\\mathbb E_{\\text{R}}(\\eta_{0})\n= 0$ and, by Proposition \\ref{REH}, uniqueness is established.\nUnder the condition $\\varepsilon < \\varepsilon_{0}$, exponential decay\nof correlations can also be established. We will be content with the\ndecay of the two point function. The problem of general correlations\nunder these conditions has been treated elsewhere\n\\cite{MC1,MC2}. In particular, for $i,j\\in\\mathbb Z^{2}$,\n$\\mathbb E(\\eta_{i}\\eta_{j})$ in the unique infinite volume measure is\nbounded, in finite volume approximations, by the probability that $i$\nand $j$ reside in the same cluster. For $\\varepsilon <\n\\varepsilon_{0}$, this decays exponentially in $|i-j|$ uniformly in\n$\\Lambda$ for $|\\Lambda|$ sufficiently large.\n\\end{proof}\n\n\\noindent\nWe now turn our attention to an alternative criterion for high\ntemperature behavior which may also be of relevance in a sociological\ncontext: Sparsity of agents. Mathematically, this pertains to the\nsituation where $\\alpha$ is large and negative ($-\\alpha \\gg 1$), which\n\\textit{a priori} suppresses the fraction of agent occupied sites.\nOur arguments will initially be based on more primitive notions of\npercolation and, following the methods of \\cite{CNPR} (see also\n\\cite{C,CMW}), could, perhaps, be completed along these lines.\nHowever, it turns out to be far simpler to appeal to the graphical\nrepresentation just employed for the final stage of the argument. 
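Before proceeding, we note that the threshold $\varepsilon_{0}$ of the preceding theorem is explicit: the condition $\varepsilon_{0} + \frac{1}{2}\varepsilon_{0}^{2} = \frac{1}{2}$ is equivalent to $\varepsilon_{0}^{2} + 2\varepsilon_{0} - 1 = 0$, whose positive root is $\sqrt{2} - 1 \approx 0.4142$. A two--line check:

```python
import math

# eps_0 solves eps + eps^2/2 = 1/2, i.e. eps^2 + 2*eps - 1 = 0;
# the positive root is sqrt(2) - 1.
eps0 = math.sqrt(2) - 1
assert abs(eps0 + 0.5 * eps0 ** 2 - 0.5) < 1e-12
```

Consistently, independent edges each occupied with probability $\varepsilon_{0}/(1+\varepsilon_{0})$ give $\mathbb P(\mathbf{e}_{ab}\cup\mathbf{e}_{ba}) = (2\varepsilon_{0}+\varepsilon_{0}^{2})/(1+\varepsilon_{0})^{2} = \frac{1}{2}$ exactly.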
We\nstart with the relevant notion of percolation and connection. In the\ncontext of site percolation on $\\mathbb Z^{2}$, we may define various\nnotions of connectivity \\cite{Grimmett}. \nHere we define $\\diamond$--connectivity to\nindicate connection between sites that are no more than two lattice\nsites away. This is not to be confused with $\\ast$--connectivity which\ndoes not consider a pair of sites to be connected if they are\nseparated by two units in the vertical or horizontal direction. We\ndenote by $p_{c}^{\\diamond}$ the threshold for $\\diamond$--percolation\non $\\mathbb Z^{2}$. Standard arguments dating to the beginning of the\nsubject show that $p_{c}^{\\diamond} \\in (0,1)$; in particular,\n$p_{c}^{\\diamond}$ is less than the threshold for ordinary, or even\n$\\ast$--connected, percolation and mean--field type bounds readily\ndemonstrate that $p_{c}^{\\diamond} > \\frac{1}{12}$.\n\nThe next proposition concerns the relative abundance of, e.g., red sites under the condition $-\\alpha \\gg 1$ with the other parameters fixed. \n\n\\begin{proposition}\n\\label{JUO}\nConsider the GI--system with parameters $\\lambda$, $K$ and $J$ fixed.\nThen there is a $\\delta_{\\alpha} = \\delta_{\\alpha}(J,K,\\lambda)$ with\n$\\delta_{\\alpha} \\to 0$ as $\\alpha\\to-\\infty$ such that uniformly in\nvolume and boundary conditions, for any site $i$ that is away from the\nboundary\n$$\n\\mathbb P^{\\odot}_{\\Lambda}(\\eta_{i} = 1) < \\delta_{\\alpha}.\n$$\n\\end{proposition}\n\\begin{proof}\nHere we employ the preliminary (red $\\succeq$ blue) FKG properties\nthat were established earlier, in \\ref{FKGprop}. \nWe start with a $\\gamma > 0$ (and\nsomewhat ``large'') and, for $j\\in \\Lambda$ not too near the boundary,\nwe consider $\\mathbb P^{\\odot}_{\\Lambda}(g_{j} > \\gamma)$. By the FKG\nproperty, this probability is less than the corresponding conditional\none given that $\\eta_{j} = 1$ and that $\\eta_{k} = 1$ for all $k$ that are\nneighbors of $j$. 
This conditional probability is given by a\ndefinitive expression:\n\n\\begin{equation}\n\\nonumber\n\\mathbb P^{\\odot}_{\\Lambda}(g_{j} > \\gamma) \\leq \n\\frac{\n\\int_{g > \\gamma}\\text{e}^{+(4J + K)g}\\text{e}^{-\\lambda g^{2}}dg}\n{\\int_{g}\\text{e}^{+(4J + K)g}\\text{e}^{-\\lambda g^{2}}dg}\n:= \\delta_{\\gamma}\n\\end{equation}\n\n\\noindent\nThe above can be expressed directly via the error function but in\nany case, as is not hard to show,\n\\begin{equation}\n\\nonumber\n\\delta_{\\gamma} \\leq \\frac{1}{2}\\text{e}^{-\\lambda[ \\gamma -\\frac{4J + K}{2\\lambda}]^{2}},\n\\end{equation}\n\\noindent\nas long as $\\gamma \\geq (4J + K)\/2\\lambda$, which also quantifies\nhow large $\\gamma$ must be. Provided $i$ is a few spaces away from the\nboundary, we note that $1 - 5\\delta_{\\gamma}$ is a valid\nestimate of the probability that \nboth $g_{i}$ and the $g$--values at the neighbors of $i$\ndo not exceed $\\gamma$. Let us denote this (good) non--high field\nevent by $G_{i}$. 
Then we may write\n\\begin{align}\n\\mathbb P_{\\Lambda}^{\\odot}(\\eta_{i} = 1) & = \\mathbb\n P_{\\Lambda}^{\\odot}(G_{i})\\mathbb P_{\\Lambda}^{\\odot}(\\eta_{i} =\n 1\\mid G_{i}) + \\mathbb P_{\\Lambda}^{\\odot}(G_{i}^{c})\\mathbb\n P_{\\Lambda}^{\\odot}(\\eta_{i} = 1\\mid G_{i}^{c}) \\notag \\\\ \n&\\leq\n 5\\delta_{\\gamma} + \\frac{\\text{e}^{(4J +\n K)\\gamma}\\text{e}^{\\alpha}} {1 + \\text{e}^{(4J +\n K)\\gamma}\\text{e}^{\\alpha} + \\text{e}^{-(4J + K)\\gamma}\n \\text{e}^{\\alpha}} := \\delta_{\\alpha},\n\\end{align}\nwhere, in various stages, we have employed worst case scenarios.\nClearly, for fixed $(J, K, \\lambda)$ we may choose $\\gamma$ large so\nthat $\\delta_{\\gamma}$ is small, and $\\alpha$ negative and\nlarge in magnitude, so that $\\delta_{\\alpha}$ is small.\n\\end{proof}\n\n\\begin{theorem}\nConsider the GI--system and suppose that $-\\alpha$ is large enough so\nthat $\\delta_{\\alpha} < p_{c}^{\\diamond}$ as described just prior to\nthe statement of Proposition \\ref{JUO}. Then there is a unique\nlimiting Gibbs state featuring rapid decay of correlations.\n\\end{theorem}\n\n\\begin{proof}\nBy the dominance principle stated at the beginning of the proof of\nTheorem \\ref{PKS}, if $\\delta_{\\alpha} < p_{c}^{\\diamond}$, the red\nagents fail to exhibit $\\diamond$--percolation regardless of boundary\nconditions. Now consider, in the context of the bond--representation, the event\nthat the origin is connected to $\\partial \\Lambda$ in the red boundary\nconditions, which represents the sole non--vanishing contribution to\n$\\mathbb E^{\\text{R}}_{\\Lambda}(\\eta_{0})$. The bonds of any path\nconnecting the origin to $\\partial \\Lambda$ within this cluster may be\nenvisioned as alternating connections between agents and fields; the\nconnection to the red boundary ensures that both types of entities\ntake on the red color. 
In particular, all the agents in the cluster\nare red so that these agents must (at least) form a\n$\\diamond$--connected cluster. Hence, in finite volume, we may bound\n\n\\begin{equation}\n\\nonumber\n\\mathbb E_{\\Lambda}^{\\text{R}}(\\eta_{0}) \\leq \\mathbb\nP_{\\Lambda}^{\\text{R}}(0 \\underset{\\diamond, \\text{R}}{\\leadsto}\n\\partial \\Lambda)\n\\end{equation}\n\n\\noindent\nwhere $\\{0 \\underset{\\diamond,\\text{R}}{\\leadsto} \\partial \\Lambda\\}$\nis the event of a red $\\diamond$--connection between the origin and\nthe boundary.\n\nWhen the red agent occupation probabilities are dominated by\nindependent sites with parameter $\\delta_{\\alpha} < p_{c}^{\\diamond}$,\nsuch probabilities decay exponentially. Evidently, in the limiting\nstate, the ``magnetization'' vanishes which, by Proposition \\ref{REH},\nimplies a unique state. Similarly, exponential decay of correlations\nis implied by exponential decay of $\\diamond$--connectivities.\n\\end{proof}\n\n\\section{The Mean Field Rendition} \\label{S:MFHamiltonian}\n\\noindent\nIn the previous section, we showed that a phase transition between\nwell--mixed and clustering configurations exists for the general\nHamiltonian in Eq.\\,\\eqref{E:Hamiltonian}. However, finding the exact\nor even approximate values of the $J,K,\\alpha,\\lambda$ parameters for\nwhich the well--mixed to clustering transition occurs is in general a\ndifficult task. Moreover, the nature of the transition is not\nelucidated by the techniques of the preceding section. On \nthe basis of informal simulations described in the Appendix \nand certain other\nconsiderations, it appears that the transition may be discontinuous or\nsecond order depending on where the phase boundary is crossed. This\ncannot be proved in the context of the present model. 
We thus\nintroduce a \\textit{mean--field Hamiltonian}, where instead of\nnearest--neighbor interactions we consider an all--to--all\n(interaction) coupling that is rescaled by the number of sites.\nModels of this sort are often referred to as \\textit{complete--graph}\nsystems. The mean--field Hamiltonian allows us to define, in the\nthermodynamic limit, a simple mean field free energy per particle.\nThis free energy can be subjected to exact mathematical analysis,\nwhich provides a quantification of the phase transition. In\nparticular, we have found that \nthe phase boundary between the diffuse states and the\ngang--symmetry broken phase can indeed be of either type.\n\nLet us thus consider a lattice of $N$ sites -- where the detailed\ngeometry is no longer of relevance. At each site $i$, there is the\nsame $s_i = (\\eta_i, g_i)$ featured in the previous section. However\nnow, the Hamiltonian reads\n\\begin{equation}\n\\label{MFH}\n - \\mathscr{H}^{\\textsc{MF}}_{N}(\\mathbf{s}) = \\frac{1}{N} \\sum_{i,j} J \\eta_i g_j \n + \\sum_{i} (\\alpha\n \\eta_i^2- \\lambda g_i^2).\n \\end{equation}\nIt is observed that the couplings $J$ and $K$ need no longer be\ndistinguished. Indeed, for large $N$, neither the $g_{i}\\eta_{i}$\ninteraction nor any other \\textit{particular} interaction is of\npertinence. We now introduce the relevant collective quantities, $n$,\n$G$, and $b$, obtained via $\\eta_i$ and $g_i$, which will allow for a\nmore convenient analysis. In particular, if $N^+$ and $N^-$ designate\nthe number of red and blue lattice agents, respectively, we define\n$$\nb := \\frac{N^+ + N^{-}}{N}\n\\hspace{1 cm}\n\\text{and}\n\\hspace{1 cm}\nn :=\n\\frac{N^{+} - N^-}{N},\n$$ as the fraction of the lattice covered by agents of any type and\nthe excess -- positive or negative -- of this fraction that is of the\nred type. Moreover, we introduce $G = \\frac{1}{N} \\sum_{i} g_i$ \nto be the average graffiti imbalance. 
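These collective quantities are computed directly from a configuration; a minimal sketch using a hypothetical random configuration (purely illustrative, not a simulation of the model):

```python
import random

# b = (N+ + N-)/N, n = (N+ - N-)/N, G = (1/N) * sum_i g_i.
random.seed(1)
N = 10_000
eta = [random.choice((-1, 0, 1)) for _ in range(N)]  # blue / vacant / red
g = [random.gauss(0.0, 1.0) for _ in range(N)]       # graffiti fields

n_plus = sum(1 for e in eta if e == 1)
n_minus = sum(1 for e in eta if e == -1)
b = (n_plus + n_minus) / N
n = (n_plus - n_minus) / N
G = sum(g) / N

# b coincides with the mean of eta^2, and |n| can never exceed b.
assert abs(b - sum(e * e for e in eta) / N) < 1e-12
assert -b <= n <= b <= 1
```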
In this\ncontext, $n$ and $G$ are akin to magnetizations in a standard\none--component spin model, with $n$ corresponding to magnetization in\nthe agent variables and $G$ in the graffiti field. For occasional use, we\nalso define $n^{\\pm} = {N^{\\pm}}\/{N} \\leq 1$. We remark that in these\ndefinitions there is an implicit $N$ dependence which is notationally\nsuppressed.\n\n\\subsection{The partition function}\n\\noindent\nIn the forthcoming, we will evaluate, asymptotically, the mean field\npartition function $\\mathcal{Z}^{\\text{MF}}$ defined in accord with the\nprevious section as the partition sum $\n\\mathcal{Z}^{\\text{MF}}_{N} = \\sum_{\\mathbf{s}}\ne^{-\\mathscr{H}_{N}^{\\text{MF}}(\\mathbf{s})}. $ Here, for reasons which\nwill soon become clear, we will treat the graffiti field variables \nslightly differently. We define\n$$\nd\\mu_{g_{i}} := \\sqrt{\\frac{\\lambda}{\\pi}}\\text{e}^{-\\lambda g_{i}^{2}}dg_{i},\n$$\nas the normalized Gaussian measure for the individual field variables. \nLetting $\\mathbf{g}$ denote the array of these random variables,\nwe may write\n\\begin{equation}\n\\nonumber\n\\mathcal{Z}^{\\text{MF}}_{N} := \\mathbb E_{\\mathbf{g}}\n\\left(\n\\sum_{\\mathbf{\\eta}}\\text{e}^{\\frac{J}{N}\\sum_{i,j}g_i\\eta_j +\\alpha\\sum_{i}\\eta_{i}^{2}}\n\\right),\n\\end{equation}\n\\noindent \nwhere $\\mathbb E_{\\mathbf{g}}(\\cdot)$ denotes expectation with\nrespect to the free (independent) ensemble of Gaussian random\nvariables and $\\sum_{\\mathbf{\\eta}}$ denotes the rest of the partition\nsum, i.e., over the agent configurations. 
It is acknowledged that this\ndiffers from the prior definitions by a multiplicative factor of\n$[\\lambda\/\\pi]^{N\/2}$ which, of course, is inconsequential.\n\nIt is at this point, with the current formulation, that the\nadvantage of the all--to--all coupling is manifest: For any\n$\\mathbf{s}$ (and any $N$) the quantity in the exponent depends only\non $n$, $b$ and $G$: $\\mathcal{Z}^{\\text{MF}}_{N} = \\mathbb E_{\\mathbf\n g}(\\sum_{\\mathbf{\\eta}}\\text{e}^{N(JnG + \\alpha b)})$. Concerning\nthe agent configurations, to perform the summation, we must multiply\nthe integrand by the number of ways of arranging $N^{+}$ red sites and\n$N^{-}$ blue sites among $N$ possible positions. We denote this\nobject by $W_{N}(b,n)$ which is given, explicitly, by the trinomial\nfactor\n\\begin{equation}\n\\nonumber\nW_{N}(b,n) = \\binom{N}{N^{+},N^{-}} =\n\\binom{N}{\\frac{1}{2}N(b+n),\\frac{1}{2}N(b-n)}.\n\\end{equation}\n\\noindent\nAs for the graffiti field configurations, it is noted that since $G$ is\nproportional to a sum of Gaussian random variables, it is itself a\nGaussian. Indeed the mean of $NG$ is zero and the variance is\n$N[2\\lambda]^{-1}$. 
Thus the expectation over $\\mathbf{g}$ can be\nreplaced with the expectation over $NG$, leading to\n\\begin{equation}\n\\nonumber\n\\mathcal{Z}^{\\text{MF}}_{N} =\\sum_{n,b}\\mathbb\nE_{NG}[W_{N}(b,n)\\text{e}^{N[JnG + \\alpha b] }]\n\\propto\n\\sum_{n,b}\\int W_{N}(b,n)\\text{e}^{N[JnG + \\alpha b - \\lambda G^{2}]} dG\n\\end{equation}\nwith a constant of proportionality that grows only subexponentially in $N$.\nNow, on the basis of the Stirling approximation, \n\\begin{equation}\n\\nonumber\nW_{N}(b, n)\n\\approx \n\\left[\\left(\\frac{b+n}{2} \\right)^{\\frac{b+n}{2}}\n\\left(\\frac{b-n}{2}\\right)^{\\frac{b-n}{2}} (1- b)^{1-b} \\right]^{-N}.\n\\end{equation}\nThus, modulo lower order terms, we have\n$\\mathcal{Z}^{\\text{MF}}_{N} \\approx\\sum_{n,b,G}\\text{e}^{-N\\Phi(b,n,G)}$\nwhere $\\Phi$, the free energy function, is given by \n\\begin{equation}\n\\label{OIQ}\ne^{- \\Phi(b,n,G)} := e^{(JnG + \\alpha b -\\lambda G^2)} \\left[ \n\\left(\\frac{b+n}{2}\\right)^{\\frac{b+n}{2}} \\left(\\frac{b-n}{2} \\right)\n^{\\frac{b-n}{2}} (1- b)^{1-b} \\right]^{-1}.\n\\end{equation}\n\n\\noindent\nIn accordance with standard asymptotic analysis,\n\\begin{equation}\n\\nonumber\n\\lim_{N\\to\\infty}-\\frac{1}{N}\\log \\mathcal{Z}^{\\text{MF}}_{N} =\n\\min_{b,n,G}\\Phi (b,n,G) := F_{\\text{MF}}\n\\end{equation}\n\\noindent\nwhere $F_{\\text{MF}} = F_{\\text{MF}}(J,\\alpha,\\lambda )$ is the\n(actual) limiting free energy per site. While various aspects of the\nabove scenario for all--to--all coupling models have been long known\nand certain cases explicitly proven \\cite{E}, there is a general\ntheorem to this effect that is sufficient for our purposes,\npresented in Section 5 of \\cite{BC}.\nThus the efforts of a mean--field analysis may be summarized as\nfollows: we are to minimize $\\Phi(b,n,G)$ and the values of $b$, $n$ and $G$\nat the minima -- as a function of the couplings -- will determine the\nvarious \\textit{phases} of the system. 
Even in this simplified\ncontext, as will be seen, the phase transitions can be dramatic.\n\n\\subsection{The mean--field equations}\nThe free energy function is obviously well behaved except at the\nextreme values of the variables. In particular, we would like to\nassume that $0 < b < 1$ and $-b < n < +b$, where the strict\ninequalities imply that the function is smooth. Now a direct\ncalculation of the asymptotics makes it clear that no minimum could\npossibly occur near the $b=0,1$ and $b= \\pm n$ boundaries. Thus, we can\nconfine attention to the interior of the above $b$ and $n$ intervals and\nproceed by differentiation of $\\Phi(b,n,G)$ as defined in Eq.(\\ref{OIQ}).\nThus we arrive at the \\textit{mean--field equations}:\n\\begin{align}\n-\\frac{\\partial \\Phi}{\\partial G} &= J n - 2 \\lambda G = 0,\n\\label{VSR} \\\\ \n-\\frac{\\partial\n\\Phi}{\\partial b} &= \\alpha + \\log(1-b) - \\frac{1}{2} \\log \\left(\n\\frac{b^2 - n^2}{4} \\right) = 0,\n\\label{VSQ}\\\\ \n-\\frac{\\partial \\Phi}{\\partial n} &= J G\n- \\frac{1}{2} \\log \\left( \\frac{b+n}{b-n} \\right) = 0.\n\\label{VDP}\n\\end{align}\nFree energy minimization only occurs for values of $(n,b,G)$ that\nsatisfy the above system. However, other stationary points for $\\Phi$\ncan -- and, e.g., in the case of discontinuous transitions generically will\n-- occur so we must proceed with some caution. It is noted that\nEq.(\\ref{VSR}) allows us to eliminate $G$ altogether. Defining $\\mu =\n\\frac{J^2}{2\\lambda}$, we rewrite Eqs.\\,\\eqref{VSQ} and \\eqref{VDP} as\n\\begin{align}\n4e^{2 \\alpha} &= \\frac{b^2-n^2}{(1-b)^2} \\label{E:mf_b2},\\\\\n\\mu n &= \\frac{1}{2}\\log \\left( \\frac{b+n}{b-n} \\right) \\label{E:mf_n2}.\n\\end{align}\nThe analysis of this system, along with the minimization it is\nsupposed to imply, will constitute the bulk of the remainder of this\nwork. 
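Before turning to the analysis, the minimization can be explored numerically. The following crude grid search over the free energy of Eq.(\ref{OIQ}), with $G$ eliminated via Eq.(\ref{VSR}), is only a hedged sketch with illustrative parameter values; it is no substitute for the analysis that follows:

```python
import math

def Phi(b, n, G, J, alpha, lam):
    """Free energy function of Eq. (OIQ); requires 0 < b < 1 and |n| < b."""
    entropy = ((b + n) / 2 * math.log((b + n) / 2)
               + (b - n) / 2 * math.log((b - n) / 2)
               + (1 - b) * math.log(1 - b))
    return -(J * n * G + alpha * b - lam * G * G) + entropy

def minimize(J, alpha, lam, steps=120):
    """Grid-search minimizer over 0 < b < 1, 0 <= n < b (n >= 0 by symmetry)."""
    best = None
    for i in range(1, steps):
        b = i / steps
        for j in range(steps):
            n = b * j / steps
            G = J * n / (2 * lam)   # optimal G from Jn - 2*lam*G = 0
            val = Phi(b, n, G, J, alpha, lam)
            if best is None or val < best[0]:
                best = (val, b, n)
    return best

# Illustrative values: for weak coupling (mu = J^2/2lam small) the
# minimizer has n = 0; for strong coupling it is symmetry broken, n > 0.
_, b_weak, n_weak = minimize(J=0.3, alpha=0.0, lam=1.0)
_, b_strong, n_strong = minimize(J=3.0, alpha=0.0, lam=1.0)
assert n_weak < 0.05
assert n_strong > 0.1
```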
Foremost, it is noted that the presentation in\nEqs.(\\ref{E:mf_b2}) and (\\ref{E:mf_n2}) is, for all intents and\npurposes, the same as would have been obtained from the mean--field version\nof the so--called BEG model \\cite{BEG}. As such, some aspects of the\ncurrent problem have been treated in \\cite{EOT}. However, the\nspecifics in \\cite{EOT} are not readily translated into the setting of the\ncurrent work and, moreover, our conclusions are achieved by\nstraightforward methods of analysis.\n\n\nOur investigation will proceed as follows: It is evident from physical\nconsiderations, and the subject of an elementary mathematical theorem\nproved at the end of this subsection, that as the parameters sweep\nthrough their allowed values, a phase transition occurs from the\ncircumstances where $\\Phi$ is minimized by $n = 0$ to those where\n$n\\neq 0$ is required. First, we will follow the consequences of the\n\\textit{assumption} that this happens continuously: i.e., that the\nminimizing $n$ goes to zero continuously through small values. In the\nleading order, this provides a purported phase boundary which we\ndenote by the LSP--curve.\nConsiderations of higher order terms in the vicinity of the LSP--curve\nyield that for certain portions of the curve, the stipulation is\nself--consistent and for the rest, it is not. Detailed analysis will\nshow that the former is completely consistent. In particular these\ncalculations correspond to the true minima of the free energy\nfunction. By contrast, the latter (non--self--consistent) portion is\na consequence of a discontinuous transition which has ``already''\noccurred at prior values of the parameters. In particular, the\nperturbative analysis is highlighting a local extremum and not the\ntrue minimum.\n\nWe conclude this subsection with the derivation of the LSP--curve --\nas well as the introduction of notation that will be used throughout\nthe remainder of the analysis. 
Assuming $n = 0$, Eq.(\\ref{E:mf_n2}) is\ntrivially satisfied and Eq.(\\ref{E:mf_b2}) defines the ``ambient''\nvalue of $b$ which we denote by $b_{R}$:\n\\begin{equation}\n\\nonumber\nb_R := \\frac{2 e^\\alpha}{1 + 2 e^{\\alpha}}.\n\\end{equation}\n\\noindent\nNote that $(b = b_{R}, n = G = 0)$ is \\textit{always} a solution to\nthe mean--field system. For simplicity we consider $b_R$ and $\\mu$ as\nthe relevant parameters for our system for the remainder of this\npaper. Let us now consider slight perturbations of $b$ about $b_R$\nand of $n$ about zero. We thus write $b = b_R(1 + \\Delta)$ with\n$\\Delta \\ll b_R$ and $|n| > 0$ with $n \\ll 1$ and obtain the following\napproximations by expanding Eq.(\\ref{E:mf_b2}) to lowest order\n\\begin{equation}\n\\label{E:relNandDelta}\nn^2 \\approx 2\\Delta \\frac{b_R^{2}}{1- b_R},\n\\end{equation}\nwhile Eq.(\\ref{E:mf_n2}), written to a higher approximation than will\nbe immediately necessary, gives us\n\\begin{equation}\n\\label{KIT}\n\\mu n \\approx \\frac{n}{b_R} - \\frac{n\\Delta}{{b_R}} + \\frac{n^3}{3{b_R}^3}.\n\\end{equation}\nWe pause to observe that Eq.(\\ref{KIT}) and, in general,\nEq.(\\ref{E:mf_n2}), have the symmetry property that with all other\nquantities fixed, if $n$ is a solution then so is $-n$. Thus, we\nmight as well assume that $n \\geq 0$. Indeed, we shall adhere to this\nconvention throughout.\nAssuming now that $\\mu$ is variable while $b_{R}$ is fixed, the\n$n\\to 0$ limit of Eq.(\\ref{KIT}) and Eq.(\\ref{E:relNandDelta}) yields\nthe tentative phase boundary\n\\begin{equation}\n\\label{RDA}\n\\mu_{S}(b_{R}) = \\frac{1}{b_{R}}.\n\\end{equation}\nThis \\textit{defines} the LSP--curve; the correspondingly tentative\nconclusion is that $n > 0$ and $b > b_{R}$ occurs for $\\mu > \\mu_{S}$\nwhile for $\\mu \\leq \\mu_{S}$, $n = b - b_{R} = 0$. However, the\nviability of these tentative conclusions depends, in a definitive\nfashion, on the value of $b_{R}$. 
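As a quick consistency check, $(b, n, G) = (b_{R}, 0, 0)$ satisfies the mean--field system for every $\alpha$, and the tentative boundary $\mu_{S} = 1/b_{R}$ can be evaluated directly. A short sketch (the $\alpha$ values are illustrative):

```python
import math

def b_R(alpha):
    """Ambient density: the n = 0 solution of 4 e^{2 alpha} = b^2/(1-b)^2."""
    return 2 * math.exp(alpha) / (1 + 2 * math.exp(alpha))

for alpha in (-2.0, 0.0, 1.5):
    b = b_R(alpha)
    # Eq. (E:mf_n2) is trivial at n = 0; check Eq. (E:mf_b2):
    assert abs(4 * math.exp(2 * alpha) - b * b / (1 - b) ** 2) < 1e-9
    mu_S = 1 / b                 # tentative LSP phase boundary
    assert mu_S > 1              # since b_R < 1, the boundary lies above mu = 1

assert abs(b_R(0.0) - 2 / 3) < 1e-12   # e.g. alpha = 0 gives b_R = 2/3, mu_S = 3/2
```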
In particular an analysis of the\nhigher order terms in Eq.(\\ref{KIT}) testifies that this picture\ncannot possibly be correct for $b_{R} < \\frac{1}{3}$; this is the subject of\nour next subsection. However, a more difficult analysis shows that\nthis picture is indeed correct for $b_{R} \\geq \\frac{1}{3}$ which is\nthe subject matter of the final subsection. First, we must attend to\nsome necessary details.\n\n\\subsubsection{Preliminary analysis}\nIn this subsection we will establish some basic properties of the\nmodel such as the existence of high-- and low--temperature phases\nalong with various monotonicity properties. In particular we show\nthat at fixed $b_{R}$, the quantity $n$, assumed to be non--negative,\nis non--decreasing with $\\mu$ and strictly increasing whenever it is\nnon--zero. For the benefit of our physics readership, such\ncontentions might typically be \\textit{assumed} and consequently, the\nentire subsection could be skipped on a preliminary reading. However,\nit is remarked that in the normal (physics) course of events, such\nquestions are most often settled by direct perturbative calculation.\nEven for continuous transitions, on some occasions, additional\njustification is actually required.\nSometimes, as in the present work, when the transition is\ndiscontinuous, the relevant calculations simply cannot be done\nanalytically and then, indeed, one must rely more heavily on\nabstract methods.\n\nIn what follows, we shall work with the free energy function given by\nEq.(\\ref{OIQ}) with $G$ eliminated in favor of $n$ according to\nEq.(\\ref{VSR}) and working with the parameters $\\mu$ and $b_{R}$. \nFor simplicity, this will be denoted by $\\Phi_{b_{R},\\mu}(b,n)$ but\nwith subscripts omitted unless absolutely necessary. 
Thus $\\Phi(b,n)$\nis now notation for the function\n\\begin{align}\n\\Phi_{b_{R},\\mu}(b,n) :=\n-\\frac{1}{2}\\mu n^{2} -\\alpha(b_{R}) b&\n\\notag\n\\\\\n+\\left(\n\\frac{b+n}{2} \n\\right)\n\\log& \\left(\n\\frac{b+n}{2}\n\\right)\n+\\left(\n\\frac{b-n}{2} \n\\right)\n\\log \\left(\n\\frac{b-n}{2} \n\\right)\n+(1-b)\\log(1-b).\n\\end{align}\nIt is clear that the minimum of $\\Phi(b,n)$ corresponds to the minimum\nof the original three variable free energy function $\\Phi(n,b,G)$. \nIn the following, we will use the notation $n(\\mu)$ (with $n(\\mu)\n\\geq 0$) as though this defines an unambiguous function. Of course in\nthe case of phase coexistence, this will not be true. In general,\nthen, $n(\\mu)$ will stand for a representative from the set of\nminimizers at parameter value $\\mu$ and all of the results in this\nsubsection hold.\nWe start with some elementary properties of the phase diagram \ngenerated by the corresponding minimization problem.\n\\begin{proposition}\n\\label{YTZ}\nConsider $\\Phi(b,n)$ with $b_{R}$ fixed and $\\mu$ ranging in\n$[0,\\infty)$. Then for all $\\mu$ sufficiently large, $\\Phi(b,n)$ is\n minimized by a non--zero $n$ and for all $\\mu$ sufficiently small,\n $\\Phi$ is minimized by $(b_{R},0)$.\n\\end{proposition}\n\\begin{proof}\nWe begin with the assertion, gleaned from Eq.(\\ref{VSQ}), that along\nthe curve $n = 0$, $\\Phi$ is minimized by $b = b_{R}$. Thus we may\npick any fixed, nontrivial $n_{0}$, with $0 < n_{0} < b_{R}$, and it\nis sufficient to establish that $\\Phi(n_{0}, b_{R}) < \\Phi(0, b_{R})$\nonce $\\mu$ is sufficiently large. 
However, the desired inequality is\nmanifest for large $\\mu$ since the only $\\mu$ dependence in $\\Phi$ is\nin the term $-\\frac{1}{2}\\mu n_{0}^{2}$ which is, eventually, in\nexcess of the difference between the $\\mu$--independent terms and\n$\\Phi(0, b_{R})$.\nThe second statement is proved as follows: since $b = 0$ -- which\nnecessarily implies $n = 0$ -- does not minimize the free\nenergy function, we may use the variable $\\theta := n\/b$ so that\nEq.(\\ref{E:mf_n2}) now reads\n\\begin{equation}\n\\nonumber\nb\\mu \\theta = \n\\frac{1}{2}\\log\n\\left(\\frac{1 + \\theta}{1 - \\theta}\\right ).\n\\end{equation}\nAs is well known from the analysis of mean--field Ising systems\n(and can be established, e.g., by further differentiation) the above\nequation has only the trivial solution if $b\\mu \\leq 1$. Since $b$\ncannot be greater than one, the second statement has been proved -- in\nfact whenever $\\mu \\leq 1$.\n \\end{proof}\n\n\\noindent\nThe above result establishes, in a limited sense, the existence of a\nphase transition. Here we will sharpen this result by proving that\nalong the lines of fixed $b_{R}$, there is a single transition from $n\n\\equiv 0$ to $n > 0$. This is an immediate corollary to the following\nlemma which we state separately for future purposes.\n\\begin{lemma}\n\\label{Theorem Y}\nLet $\\Phi_{\\mu}(b,n)$ denote the free energy function with $b_{R}$\nfixed and $\\mu$ (displayed) in $[0,\\infty)$. Then the minimizing\n $n(\\mu)$, if unique, is a non--decreasing function of $\\mu$. More\n generally, if at various values of $\\mu$, $\\Phi_{\\mu}$ has a\n minimizing set of $n$'s then, if $\\mu^{\\prime} > \\mu$, the minimum\n of the minimizers at $\\mu^{\\prime}$ is greater than or equal to the\n maximum of the minimizers at $\\mu$. 
Thus, in general any possible\n ``choice'' of $n(\\mu)$ is non--decreasing.\n\\end{lemma}\n\\begin{proof}\nLet $\\mu, \\mu^{\\prime} \\in [0, \\infty)$ with $\\mu^{\\prime} > \\mu$\nand let us denote by $(b^{\\prime}, n^{\\prime})$ a minimizing pair\nfor $\\Phi_{\\mu^{\\prime}}$ and similarly for $(b,n)$ at $\\mu$. The\nkey observation is the meager $\\mu$--dependence of the function\n$\\Phi_{\\mu}$. Indeed, $\\Phi_{\\mu^{\\prime}}(x,y) = \\Phi_{\\mu}(x,y) -\n\\frac{1}{2}(\\mu^{\\prime} - \\mu)y^{2}$. We do this twice:\n\\begin{align}\n\\Phi_{\\mu^{\\prime}}(b^{\\prime}, n^{\\prime}) &= \n\\Phi_{\\mu}(b^{\\prime},n^{\\prime}) - \\frac{1}{2}(\\mu^{\\prime} - \\mu)[n^{\\prime}]^{2}\n\\notag\n\\\\\n&\\geq\n\\Phi_{\\mu}(b,n) - \\frac{1}{2}(\\mu^{\\prime} - \\mu)[n^{\\prime}]^{2}\n\\notag\n\\\\\n&=\n\\Phi_{\\mu^{\\prime}}(b,n) - \\frac{1}{2}(\\mu - \\mu^{\\prime})n^{2}\n- \\frac{1}{2}(\\mu^{\\prime} - \\mu)[n^{\\prime}]^{2}\n\\end{align}\nleading to $\\Phi_{\\mu^{\\prime}}(b^{\\prime}, n^{\\prime}) \\geq\n\\Phi_{\\mu^{\\prime}}(b,n) + \\frac{1}{2}(\\mu^{\\prime} - \\mu)(n^{2} -\n [n^{\\prime}]^{2})$. \nThis necessarily implies that\n $[n^{\\prime}]^{2} \\geq n^{2}$ since otherwise, the previous\n inequality would be strict implying that $(b,n)$ would have been a\n ``better minimizer'' for $\\Phi_{\\mu^{\\prime}}$ than $(b^{\\prime},\n n^{\\prime})$.\n\\end{proof}\n\n\\noindent\nUsing this result we may now show the following\n\n\\begin{corollary}\n\\label{UUY}\nConsider the mean--field model defined by the free energy function\ngiven in Eq.(\\ref{OIQ}). 
Then for each fixed $b_{R} \\in (0,1]$, there\nis a transitional value of $\\mu$, denoted by $\\mu_{T}(b_{R})$, such\nthat $n\\equiv 0$ for $\\mu < \\mu_{T}$ and $n > 0$ for $\\mu > \\mu_{T}$.\n\\end{corollary}\n\\begin{proof}\nThis follows immediately from Proposition \\ref{YTZ} and Lemma\n\\ref{Theorem Y} above.\n\\end{proof}\n\\noindent\nAlso of interest is the following:\n\\begin{corollary}\n\\label{JDD}\nConsider the mean--field model defined by the free energy function\ngiven in Eq.(\\ref{OIQ}). Let $n(\\mu)$ denote any non--negative\nfunction corresponding to a minimizing $n$ at parameter value $\\mu$\n(usually uniquely determined). Then for $\\mu \\geq \\mu_{T}$, the\nfunction $n(\\mu)$ is strictly increasing.\n\\end{corollary}\n\\begin{proof}\nIt is seen that \\textit{if} $n(\\mu_{T}) = 0$ then the statement of\nthis corollary is self-evident at $\\mu =\\mu_{T}$. For the rest of this\nproof, we may simply assume that $\\mu$ is such that $n(\\mu) > 0$.\nSuppose then that $\\mu^{\\prime} > \\mu$ and that $n = n(\\mu)$ is part\nof the minimizing pair $(b(\\mu), n(\\mu))$ at parameter value $\\mu$.\nSuppose further that at $\\mu^{\\prime}$ the same $n$ is also part of a\nminimizing pair. Then we claim that the $b(\\mu)$ is \\textit{not} the\npartner at $\\mu^{\\prime}$ since given $n^{\\prime}$ -- purportedly\nequal to $n$ -- then $b^{\\prime}$ is uniquely determined by\nEq.(\\ref{E:mf_n2}). Upon performing some algebraic manipulations the\nlatter reads $b^{\\prime} = n^{\\prime}\/\\tanh \\mu^{\\prime}n^{\\prime}$.\nThus, the equality $n=n'$ would lead to\n\\begin{equation}\n\\nonumber\nb^{\\prime} = \\frac{n^{\\prime}}{\\tanh \\mu^{\\prime} n^{\\prime}}\n= \n\\frac{n}{\\tanh \\mu^{\\prime} n}\n\\neq\n\\frac{n}{\\tanh \\mu n} = b\n\\end{equation}\nso that explicitly $(b,n)$ cannot be a minimizer at parameter value\n$\\mu^{\\prime}$. 
Using the appropriate $b^{\\prime} \\neq b$, we would\nhave\n\\begin{equation}\n\\nonumber\nF_{\\text{MF}}(\\mu^{\\prime}) =\n\\Phi_{\\mu^{\\prime}}(n, b^{\\prime})\n= \\Phi_{\\mu}(n, b^{\\prime}) - \\frac{1}{2}(\\mu^{\\prime} - \\mu)n^{2} \n\\geq \n \\Phi_{\\mu}(n, b) - \\frac{1}{2}(\\mu^{\\prime} - \\mu)n^{2} = \\Phi_{\\mu^{\\prime}}(n, b)\n\\end{equation}\nin contradiction with the fact that $(b,n)$ is not a minimizer at\nparameter value $\\mu^{\\prime}$.\n\\end{proof}\n\n\\subsection{A discontinuous transition for $b_{R} < \\frac{1}{3}$} \nThe dividing point of $b_{R} = \\frac{1}{3}$ along the LSP--curve $\\mu = 1\/b_{R}$ is apparent from the higher order terms in Eq.(\\ref{KIT}). Indeed, supposing $\\mu = 1\/b_{R} + \\varepsilon$ we obtain, with the additional aid of Eq.(\\ref{E:relNandDelta}),\n\\begin{equation}\n\\label{HUH}\n\\varepsilon n = \\frac{n^{3}}{2b_{R}^{3}}\\left(b_{R} - \\frac{1}{3}\\right) + \\dots\n\\end{equation}\nFor $b_{R} \\geq \\frac{1}{3}$, Eq.(\\ref{HUH}) is consistent (and, as it\nturns out, correct) but in the case of $b_{R} < \\frac{1}{3}$\nthis equation alone precludes the possibility of a continuous\ntransition. Indeed, since we cannot have $n^{2} < 0$, the only logical\nconsequence of Eq.\\,(\\ref{HUH}) is $n \\equiv 0$ for $\\mu \\gtrsim\n\\mu_{S}(b_{R})$, i.e., the transition occurs later. But the lower\norder term \\textit{insisted} that $\\mu = \\mu_{S}(b_{R})$ was the only\nviable candidate for a continuous transition. Thus: the transition\ncannot be continuous and, at least for $b_{R} < \\frac{1}{3}$, the\npreliminary assumption that $n$ goes to zero continuously can no\nlonger be sustained. In particular, for $b_{R} < \\frac{1}{3}$,\nperturbative analysis will never be valid because the relevant\nquantities will \\textit{never} be small.\n\nThis leaves open the possibility of a transition at some\n$\\mu_{T}(b_{R})$ that is different from $\\mu_{S}$. 
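The dichotomy between $b_{R} < \frac{1}{3}$ and $b_{R} \geq \frac{1}{3}$ can also be checked by brute--force numerical minimization of the free energy $\Phi_{b_{R},\mu}(b,n)$ written above. The following sketch is purely illustrative (the function names and the crude grid search are ours, not part of the argument); the identity $\alpha(b_{R}) = \log[b_{R}/(2(1-b_{R}))]$ simply inverts the definition of $b_{R}$.

```python
import math

def phi(b, n, b_R, mu):
    # free energy Phi_{b_R, mu}(b, n); x*log(x) is continued by 0 at x = 0
    alpha = math.log(b_R / (2.0 * (1.0 - b_R)))  # inverts b_R = 2e^a / (1 + 2e^a)
    xlogx = lambda x: x * math.log(x) if x > 0.0 else 0.0
    return (-0.5 * mu * n * n - alpha * b
            + xlogx((b + n) / 2.0) + xlogx((b - n) / 2.0) + xlogx(1.0 - b))

def minimizing_n(b_R, mu, grid=300):
    # crude grid search for the global minimizer of Phi over 0 <= n <= b < 1
    best_val, best_n = float("inf"), 0.0
    for i in range(1, grid):
        b = i / grid
        for j in range(i + 1):  # n runs over [0, b]
            v = phi(b, j / grid, b_R, mu)
            if v < best_val:
                best_val, best_n = v, j / grid
    return best_n
```

For $b_{R} = 0.5$ the minimizing $n$ is zero below $\mu_{S} = 2$ and grows continuously above it, whereas for $b_{R} = 0.2$ the minimizer already carries an $n$ of order one at $\mu = \mu_{S} = 5$, in line with the discontinuous scenario.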
We\nshall show that $\\mu_{T} < \\mu_{S}$ as a direct consequence of the\nfollowing:\n\\begin{proposition}\n\\label{TEF}\nConsider the mean--field model defined via the free energy function\ngiven by Eq.(\\ref{OIQ}). Then, if $b_{R} < \\frac{1}{3}$ at $\\mu =\n\\mu_{S}(b_{R})$ the quantity $n$ is strictly positive.\n\\end{proposition}\n\\begin{proof}\nWe expand the free energy function $\\Phi(n,b)$ -- with $G$ eliminated\nvia Eq.\\,\\ref{VSR} -- about $b = b_{R}$, $n = 0$ and along the curve\n$\\mu = \\mu_{S}(b_{R})$. The convenient variables are now chosen as $b\n= b_{R}(1 + \\Delta)$ and $n = b_{R}m$. We first note that all odd\nterms in $m$ must vanish. In addition, the term linear in $\\Delta$\nvanishes due to the stationarity of $\\Phi$ along the curve $\\mu =\n\\mu_{S}(b_{R})$ and, as it turns out, so does the term which is\nquadratic in $m$. This leaves us with\n\\begin{eqnarray}\n\\nonumber\n\\Phi(b, n) = \\Phi(b_{R}, 0) + \\frac{1}{2}b_{R}\n\\left[\n\\frac{1}{6}m^{4} + \\frac{1}{1-b_{R}}\\Delta^{2} - m^{2}\\Delta\n\\right]\n+ \\dots\n\\end{eqnarray}\nExamining the quadratic form in the variables $m^{2}$ and $\\Delta$,\nthe condition for a local minimum is that\n\\begin{equation}\n\\nonumber\n\\frac{1}{6}\\frac{1}{1-b_{R}} > \\frac{1}{4}, \n\\end{equation}\n\\noindent\ni.e., $b_{R} > \\frac{1}{3}$. \nWe return to $b_{R}\\geq\\frac{1}{3}$ in the next subsection.\nOf current relevance is the fact that for $b_{R} < \\frac{1}{3}$, the\ncurve $\\mu = b_{R}^{-1}$ is of a saddle point nature. This implies\nthat there \\textit{is} a direction of decrease which, as is easily\nseen, is optimized, in the physical direction, when $m^{2} = 3\\Delta$. \nIt is concluded that under the stated conditions, we can produce a\npair $(b, n)$ with $n^{2} > 0$ (and $b > b_{R}$) such that the free\nenergy for the non--trivial pair is lower; we just make the\ncorresponding objects small enough to withstand the higher order\ncorrections. 
Thus the actual minimum also must occur for non--trivial\nvalues of the $n$ observable.\n\\end{proof}\n\n\\noindent\nWe now have\n\\begin{theorem}\n\\label{UKW}\nConsider the mean--field system defined by the free energy function as\ngiven in Eq.(\\ref{OIQ}). Then for $b_{R} < \\frac{1}{3}$, there is a\ndiscontinuous transition at some positive $\\mu_{T}(b_{R}) <\n\\mu_{S}(b_{R})$.\n\\end{theorem}\n\\begin{proof}\n\\noindent\nThat a transition occurs at \\textit{some} $\\mu_{T} > 0$ is the\nstatement of Corollary \\ref{UUY}. Moreover, Lemma \\ref{Theorem Y} and\nthe above analysis implies $\\mu_{T} \\leq \\mu_{S}$. The discussion\nprior to Proposition \\ref{TEF} demonstrates that at $\\mu = \\mu_{T}$,\nthe quantity $n$ is already positive. It only remains to show that\nthe inequality relating $\\mu_{S}$ and $\\mu_{T}$ is strict.\nTo this end, let us reimplement the heretofore unnecessary notation\nfor the full dependence of the free energies on parameters. We have\nlearned that for $b_{R} < \\frac{1}{3}$, there is an $n_{\\star} > 0$\nand a $b_{\\star}$ (with $b_{\\star} > b_{R}$) such that\n\\begin{equation}\n\\nonumber\nF_{\\text{MF}}(b_{R},\\mu_{S}) = \n\\Phi_{b_{R},\\mu_{S}}(b_{\\star}, n_{\\star}) < \n\\Phi_{b_{R},\\mu_{S}}(b_{R}, 0).\n\\end{equation}\nInvoking Lemma \\ref{Theorem Y}, it is now\nsufficient to show that there is a $\\delta\\mu > 0$ such that for some\nnonzero $\\tilde{n}$, and some $\\tilde{b}$, the inequality\n$\\Phi_{b_{R},\\mu_{S}-\\delta\\mu}(\\tilde{b}, \\tilde{n}) <\n\\Phi_{b_{R},\\mu_{S} - \\delta\\mu}(b_{R}, 0)$ can be shown to hold.\nOnce again, the key is the simple dependence of the free energy\nfunctions on the parameter $\\mu$. 
Indeed, using $n_{\\star}$ and\n$b_{\\star}$ as trials, we obtain\n$$\n\\Phi_{b_{R},\\mu_{S}-\\delta\\mu}(n_{\\star},b_{\\star}) = F^{\\text{MF}}_{b_{R},\\mu_{S}}\n + \\frac{1}{2}[\\delta\\mu ]n_{\\star}^{2}\n$$ while $\\Phi_{b_{R},\\mu_{S} - \\delta\\mu}(b_{R}, 0) \\equiv\n \\Phi_{b_{R},\\mu_{S}}(b_{R}, 0) > F^{\\text{MF}}_{b_{R},\\mu_{S}}$.\n Thus, the desired inequality will indeed hold for all $\\delta\\mu$\n sufficiently small.\n\\end{proof}\n\n\n\\subsection{A continuous transition for $b_{R} \\geq \\frac{1}{3}$}\nThe starting point in our analysis is to show that at the purported\ncritical curve, the quantity $n$ actually vanishes.\n\\begin{proposition}\n\\label{UYV}\nFor $b_{R} \\geq \\frac{1}{3}$ and $\\mu = b_{R}^{-1} =: \\mu_{S}$, the\nunique solution to the mean--field equations is $n = 0$ with $b =\nb_{R}$. In particular, $\\Phi_{b_{R},\\mu_{S}}(b_{R}, 0) <\n\\Phi_{b_{R},\\mu_{S}}(b, n)$ for any $(b,n) \\neq (b_{R}, 0)$.\n\\end{proposition}\n\\begin{proof}\nAssuming $n > 0$ the agent fraction $b$ can be eliminated in favor of the ratio\n$$\n\\theta := \\frac{n}{b}.\n$$ \nNote that while this is the same substitution as before, here it is\n$b$ rather than $n$ that is being eliminated. Notwithstanding,\n$\\theta$ still satisfies $0 < \\theta \\leq 1$. In these variables,\nthe mean--field equations, Eq.(\\ref{E:mf_b2}) and Eq.(\\ref{E:mf_n2})\nrespectively become\n\\begin{equation}\n\\label{ASA}\nn = \\frac{R\\theta}{R + \\sqrt{1-\\theta^{2}}}\n\\end{equation}\n\\begin{equation}\n\\label{BSB}\nn = b_{R}\\text{Arctanh}\\hspace{.05 cm} \\theta\n\\end{equation}\n\n\\noindent\nwhere in the above, $R := b_{R}\/(1 - b_{R})$. 
Let us now\ndefine $\\ell(\\theta)$ as\n\\begin{equation}\n\\nonumber\n\\ell(\\theta) := \\frac{1}{b_{R}}\n\\hspace{.1 cm} \\frac{R\\theta}{R + \\sqrt{1-\\theta^{2}}} = \\frac{(1 +\n R)\\theta}{R + \\sqrt{1-\\theta^{2}}} = \\frac{(1 + R)\\theta}{R +Q},\n\\end{equation}\nwhere $Q = Q(\\theta) :=\\sqrt{1 - \\theta^{2}}$.\nTo prove the current proposition we need to show that for all $\\theta > 0$,\n\\begin{equation}\n\\nonumber\n\\text{Arctanh}\\hspace{.05 cm} \\theta > \\ell(\\theta) \n\\end{equation}\ndemonstrating that there cannot be a non--trivial solution to the\nmean--field equations under the conditions stated. Note that for $0 <\n\\theta \\ll 1$ the desired inequality can be explicitly demonstrated.\nIn general, it is sufficient to show, for $0 < \\theta \\leq1$, that\n$\\ell^{\\prime}(\\theta) < 1\/(1 - \\theta^{2})$, i.e., in the $Q$\nvariable that\n\\begin{equation}\n\\nonumber\n\\frac{1}{Q^{2}} > \\frac{(1+R)(R + Q + (1-Q^{2})\/Q)}{(R+Q)^{2}}.\n\\end{equation}\n\\noindent\nAlthough both sides diverge as $Q\\to 0$ the divergence on the left\nhand side is clearly stronger, so we actually only need to consider $Q >\n0$, limiting us to $Q \\in (0,1)$. After some manipulation, the\ninequality we need to prove is equivalent to\n\\begin{equation}\n\\nonumber\n(R+Q)^{2} > (1 + R)(RQ^{2} + Q) = (1 + R)(RQ^{2} + Q^{2}) + (1+R)(Q - Q^{2}).\n\\end{equation}\nThat is, we now wish to show\n\\begin{equation}\n\\nonumber\nR (R + 2Q + RQ)(1 - Q) > (1+R)Q(1 - Q).\n\\end{equation}\nSince $Q \\neq 1$, the above is equivalent to \n\\begin{equation}\n\\nonumber\nR^{2} + QR + R^{2}Q > Q.\n\\end{equation}\nFinally, since also $Q < 1$ it is enough to show that $2R^{2}Q +RQ \\geq\nQ$, i.e., that $2R^{2} + R \\geq 1$ which occurs for $R \\geq\n\\frac{1}{2}$. This corresponds to $b_{R} \\geq \\frac{1}{3}$.\n\\end{proof}\n\\noindent\nWe can finally show\n\\begin{theorem}\nConsider the mean--field GI--system defined by the free energy\nfunction given in Eq.(\\ref{OIQ}). 
Then, for $b_{R} \\geq \\frac{1}{3}$,\nas a function of $\\mu$ with $b_{R}$ fixed, there is a continuous\ntransition at $\\mu = \\mu_{S} = 1\/b_{R}$. I.e., $n(\\mu) \\equiv 0$ for\n$\\mu < \\mu_{S}$ while, for $\\mu > \\mu_{S}$, the $n$--component of any\nminimizing pair $(n(\\mu), b(\\mu))$ satisfies $n(\\mu) > 0$ and, if $\\mu \\downarrow \\mu_{S}$,\nit is found that $n(\\mu)\\downarrow 0$.\n\\end{theorem}\n\\begin{proof}\nWe will marshal the facts at our disposal and then proceed in a more\nabstract vein than has been the case in the more recent of our\narguments. In what is to follow, $n(\\mu)$ and the corresponding\n$b(\\mu)$ are, once again, notation for a minimizing pair without any\nclaims to uniqueness.\nBy the preceding proposition, we know that at $\\mu = \\mu_{S}$, the\nquantity $n(\\mu)$ is unambiguous and vanishes for $\\mu < \\mu_{S}$ by\nLemma \\ref{Theorem Y}. Conversely, for $\\mu > \\mu_{S}$ we may write,\nadhering to the notation in the proof of Theorem \\ref{UKW}, our usual\nexpression:\n\\begin{equation}\n\\nonumber\n\\Phi_{b_{R},\\mu}(b,n) = \\Phi_{b_{R},\\mu_{S}}(b,n) - \\frac{1}{2}(\\mu - \\mu_{S})n^{2}.\n\\end{equation}\n\\noindent\nFor $n^{2} \\propto b - b_{R} \\ll1$ from Proposition \\ref{TEF}, we know\nthat the quantity $\\Phi_{b_{R}, \\mu_{S}}(b, n)$ agrees with\n$\\Phi_{b_{R}, \\mu_{S}}(b_{R}, 0)$ up to quartic order in $n$. Thus\nallowing $n^{2} \\ll 1$ with $n^{2}(\\mu - \\mu_{S}) \\gg n^{4},\n(b-b_{R})^{2}$ we find a non--zero $n$ corresponding to a free energy\nlower than that of $\\Phi_{b_{R}, \\mu_{S}}(b_{R}, 0)$. Therefore,\nagain by Lemma \\ref{Theorem Y}, we have $n(\\mu) > 0$ for all $\\mu >\n\\mu_{S}$. It remains to establish that $n\\downarrow 0$ as $\\mu\n\\downarrow \\mu_{S}$. Note that along any decreasing sequence of\n$\\mu$'s the corresponding possible $n$'s must be monotone by Corollary\n\\ref{JDD} -- or even Lemma \\ref{Theorem Y} -- and hence\nconverge to a limit as $\\mu \\downarrow \\mu_{S}$. 
\nNow let us suppose otherwise: that for some\nsequence of $\\mu$'s decreasing to $\\mu_{S}$ there is an associated\nsequence of minimizers, $(b(\\mu), n(\\mu))$ that has $n(\\mu) \\downarrow\nn_{\\star} > 0$. Let $b_{\\star}$ denote the associated limit for the\n$b(\\mu)$ along a further subsequence if necessary. Since\n\\begin{equation}\n\\nonumber\n\\Phi_{b_{R}, \\mu}(b(\\mu), n(\\mu)) < \\Phi_{b_{R}, \\mu}(b_{R}, 0) \\equiv \n\\Phi_{b_{R}, \\mu_{S}}(b_{R}, 0)\n\\end{equation} \nwe would have, by continuity, $\\Phi_{b_{R}, \\mu_{S}}(b_{\\star},\nn_{\\star}) \\leq \\Phi_{b_{R}, \\mu_{S}}(b_{R}, 0)$ indicating that\n\\textit{at} $\\mu = \\mu_{S}$, there is a minimizer with positive\nmagnetization in contradiction with Proposition \\ref{UYV} above. It\nfollows that, under the stated condition $b_{R} \\geq \\frac{1}{3}$, the\nlimit of $n(\\mu)$ is zero as $\\mu \\to \\mu_{S}$ while it vanishes below\nand is positive above. By this (and any other) criterion, the\ntransition at $\\mu_{S}$ is continuous. This completes the proof.\n\\end{proof}\n\n\\section{Discussion} \\label{S:discussion}\n\nIn this work, we have formulated a lattice model for gang\nterritoriality where red and blue gang agents interact solely through\ngraffiti markings. Using a contour argument, we showed that a phase\ntransition occurs between a well mixed, ``high-temperature'' phase and\nan ordered, ``low-temperature'' one as the coupling parameter $J$\nbetween gang members and graffiti becomes stronger while the graffiti\nevaporation parameter $\\lambda$ decreases. In the mean field limit of\nall--to--all lattice site couplings, we can also identify the\ntricritical point in phase space that distinguishes the occurrence of\na continuous phase transition from a first order one. We find this\npoint to be located at $b_R = 1\/3$ which corresponds, in terms of the\noriginal variables of the problem, to the gang proclivity \nterm $\\alpha\n= - 2 \\log 2 $. 
In particular, for $b_R \\geq 1\/3$ the phase\ntransition is continuous and occurs at $\\mu = 1 \/ b_R$. Thus, in the\nmean-field limit, for fixed $\\alpha \\geq -2 \\log 2$ the ordered ``low\ntemperature'' phase arises for $J^2 > \\lambda \/ (e^{-\\alpha} +2)$, and\nthe ``high temperature'' one is attained on the other side of this\ninequality. The transition between the two occurs in a continuous\nmanner across the $J^2 = \\lambda\/ (e^{-\\alpha}+ 2)$ locus. In the\nopposite case of $b_R < 1\/3$ (or $\\alpha < -2 \\log 2$) the phase\ntransition is discontinuous. Here, we are also able to prove that the\ntransition between high and low temperature phases occurs not at $\\mu\n= 1\/ b_R $, but rather along the $\\mu = \\mu_T < 1 \/b_R$ curve, so that\nthe phase change occurs earlier in $J$ and along a separatrix $J^2 =\nJ_c^2 < \\lambda \/ (e^{-\\alpha} + 2)$.\n\nIn the context of gang--graffiti interactions, we may identify the low\ntemperature, clustered phase as pertaining to a high level of\nantagonism between rival gangs, where segregation leads to\nconflict along boundaries. Vice versa, the high temperature, well mixed\nconfiguration can be interpreted as a peaceful state,\nwhere despite different affiliations, gang members share the same\nturf. Our mean field results indicate that the confrontational state\nis surely attained, whether in a continuous or first order manner, for\n$J^2 > \\lambda \/ (e^{-\\alpha} + 2)$, which represents high\ngang-graffiti territoriality $J$, low external intervention in\ngraffiti removal $\\lambda$ and high proclivity $\\alpha$ for\nindividuals to become gang members. 
Gang clustering can be avoided\nby intervening in all three directions: by externally eliminating\ngraffiti ($\\lambda$), but also, from a deeper sociological point of\nview, by decreasing the lure of graffiti tags\nor of joining gangs in the\nfirst place ($J,\\alpha$).\nThe emergence of a (continuous or discontinuous) phase transition\nshows that it is possible to obtain segregation in a lattice model\nwithout invoking direct agent--to--agent coupling; it is certain that adding\nsuch coupling terms to the Hamiltonian would allow for even more\nfavorable segregation conditions.\n\nAlthough our work was conceived within the context of gang\ninteractions, the proposed model Hamiltonian and the tools used are\ngeneral enough that our fundamental results may be applicable to\nseveral other contexts where territoriality is played out through\nmarkings and not through direct contact between players. Many\nanimals, among which wolves, foxes and coyotes, are known to\nscent--mark their territories as a way of warning intruders of their\npresence and to exchange internal communication \\cite{Levin}. At\ntimes, buffer zones can originate between distinct animal clusters\nwhere prey species, such as deer or moose, may thrive \\cite{White}.\nInsects, such as beetles and bees, are also known to avoid previously\nmarked locations as a way to optimize foraging patterns. Similarly to\nthe role of gang graffiti markings, foreign scents lead ``others'' to\nretreat from already occupied turf or visited patches. Our work also\napplies to these contexts. Although some stochastic treatments have\nbeen recently presented \\cite{Giuggioli}, classical ecological studies\nof territoriality are usually carried out via reaction--diffusion\nequations where focal points such as dens, burrows or nests are often\nincluded \\cite{Lewis, Murray}, leading to segregation. 
Within this\nwork on the other hand -- whether first order or continuous -- agent\nclustering is a natural consequence of a probabilistic treatment\nwithout the need to include any anchoring sites. Finally, we are able\nto connect local microscopic parameters -- $J, K, \\lambda, \\alpha$ --\nto the emergence of large scale territorial patterns, be they gang\nclusters or animal groupings.\n\n\n\n\n\n\\vspace{.5cm}\n\n\n\\noindent \\textbf{Acknowledgments: }\nThis work was supported by NSF grants DMS--0968309 (A.B. and L.C.), DMS--0805486 (L.C.), DMS--0719642 and DMS--1021850 (M.R.D.) and by ARO grants W911NF--11--1--0332 (A.B. and M.R.D.) and W911NF--10--1--0472 (A.B.).\n\n\n\n\n\\vspace{.5cm}\n\n\\section{Introduction}\\label{sec:introduction}\n\n\nReplication plays a key role in building confidence in the scientific\nmerit of published results and the so-called replication crisis has\nled to increased interest in replication studies over the last decade\n\\citep{knaw2018,NAS2019}. These developments eventually culminated in\nlarge-scale replication projects that were conducted in various fields\n\\citep{OSC2015,Camerer2016,Camerer2018,Errington2021}. Deciding\nwhether a replication is successful is, however, not a straightforward\ntask, and different statistical methods are currently being used. For\nexample, the Reproducibility Project: Cancer Biology\n\\citep{Errington2021}, an 8-year effort to replicate experiments from\nhigh-impact cancer biology papers, has used no fewer than seven different methods\nto assess replicability, including significance of both the original and\nreplication study, compatibility of the original and replication\neffect estimates, and computation of a\nmeta-analytic combined effect estimate with confidence interval. 
For\nexample, a meta-analytic combined effect estimate treats original and\nreplication study as exchangeable, which is often questionable if the\noriginal study has not been conducted to the same standards as the\nreplication study. The $Q$-test from meta-analysis investigates\nwhether there is evidence for heterogeneity between original and\nreplication studies, but does not take into account the direction nor\nthe significance of the effect estimates. Finally,\na bright-line threshold for significance\nmakes it pointless to replicate original experiments that are just\nbeyond that threshold, even if they constitute scientifically\ninteresting and relevant claims of new discoveries.\n\nThe problems in the application of standard statistical methods to assess\nreplicability have led to a new proposal for the statistical assessment\nof replication studies \\citep{held2020}. The method combines a\nreverse-Bayes approach \\citep[see][for a recent review]{held_etal2022}\nwith a prior-predictive check for conflict and gives rise to a\nquantitative measure of replication success, the skeptical $p$-value.\nThe skeptical $p$-value depends on the two study-specific $p$-values,\nbut also on the variance ratio $c$ of the squared standard errors of the\noriginal and replication effect estimates. \nThe method therefore treats\nthe original and replication study not as exchangeable and specifically \npenalizes\nshrinkage of the replication effect estimate, compared to the original\none. The effect size perspective has been explored further \n\\citep{held_etal2020} to propose a modification that allows\noriginal studies with a ``trend to significance'' to be successful at\nreplication, but only if the effect estimate at replication is larger\nthan at original. 
An alternative reverse-Bayes formulation based on\nBayesian hypothesis testing has recently also been developed \\citep{PawelHeld2022}.\n\n\nIn this paper we study the skeptical $p$-value \\citep{held2020} from a frequentist perspective, aiming to achieve exact overall \nType-I error (T1E) control for any value of $c>0$, not necessarily linked to the variance ratio.\nDeclaring a replication as successful if\nboth the original and replication study are significant at level\n$\\alpha$ is known as the two-trials rule in drug development\n\\citep{senn:2007} and has an overall T1E rate of $\\alpha^2$. \nThe two-trials rule suggests distinguishing\nbetween linear\nand squared T1E control\nand is identified as a limiting case of the proposed \nframework for $c \\rightarrow 0$.\nThe case $c=1$ corresponds to the harmonic mean\n$\\chi^2$-test \\citep{held2020b} where exact T1E control is also\npossible.\nWe use this insight to refine\nthe skeptical $p$-value \nto obtain exact overall T1E control at\nlevel $\\alpha^2$ for any value of $c>0$. 
This is achieved by deriving the required\nnull distribution\nwhich gives rise to a new family of combination tests\nfor two studies with larger project power than the two-trials rule.\nThe approach can be used to compute confidence\nregions based on the corresponding $p$-value function\nand is particularly attractive in the reverse-Bayes setting where the variance ratio \n$c$ usually reduces to the relative sample size of\nthe replication study compared to the original one.\nThis perspective is further explored with additional power calculations, \na study of the required \nminimum relative effect size for replication success,\nand an application to data from the \nExperimental Economics\nReplication Project.\n\n\n\n\n\n\n\n\n\\section*{A novel criterion for replicability}\\label{sec:framework}\n\nLet $\\hat \\theta_i$ denote the estimate of the unknown effect size\n$\\theta$ and $\\sigma_i$ the corresponding standard error from the\noriginal and replication study, $i \\in \\{o, r\\}$. The squared standard\nerrors are usually inversely proportional to the sample sizes $n_o$\nand $n_r$, \\ie $\\sigma_o^2 = \\kappa^2\/n_o$ and\n$\\sigma_r^2 = \\kappa^2\/n_r$ for some unit variance $\\kappa^2$.\nAs in standard meta-analysis we assume that the $\\hat \\theta_i$'s are\nindependent and follow a normal distribution with mean $\\theta$ and\nvariance $\\sigma_i^2$. Let\n$z_i = {\\hat \\theta_i}\/{\\sigma_i}$\ndenote the test statistic for the null hypothesis $H_0$: $\\theta=0$ and $p_i = 1-\\Phi(z_i)$ the \ncorresponding one-sided $p$-value for the alternative $H_1$: $\\theta>0$, so $z_i = \\Phi^{-1}(1-p_i)$, here $\\Phi(.)$ denotes the standard normal cumulative distribution function. \n\nThe general criterion for replication success is defined as follows. 
\nReplicability is achieved if \n\\begin{equation}\\label{eq:general}\n\\left({z_o^2}\/{z_{u}^2}-1\\right)_{+} \n \\left({z_r^2}\/{z_{u}^2}-1\\right)_{+} \\geq c \n\\end{equation}\nholds, here $x_{+} = \\max\\{0, x\\}$, $c > 0$ is a fixed constant and $z_u>0$ is a \nsuitably chosen threshold. \nNote that a necessary but not sufficient condition for \\eqref{eq:general} to hold is \n$\\min\\{\\abs{z_o}, \\abs{z_r}\\} > z_u$, as otherwise the left-hand side of\n\\eqref{eq:general} is zero. \n\nIn the two-sided formulation \\eqref{eq:general} is\nsufficient for replication success, irrespectively of the signs of the\nestimates $\\hat \\theta_o$ and $\\hat \\theta_r$. The one-sided\nformulation has the additional requirement that the two estimates are\nboth in the same pre-specified (positive) direction. The necessary requirement\n$\\min\\{\\abs{z_o}, \\abs{z_r}\\} > z_u = \\Phi^{-1}(1-\\alpha_u)$ then translates to\n$\\max\\{p_o, p_r\\} < \\alpha_u$, so $\\alpha_u$ serves as an upper threshold for\nthe one-sided study-specific $p$-values $p_o$ and $p_r$.\n\nThe requirement \\eqref{eq:general} can be motivated from a recent\nproposal to define replication success with a two-step procedure\n\\citep{held2020}: First, a significant original study at level\n$\\alpha$ is challenged with a skeptical normal prior with mean zero and variance \nchosen such that\nthe resulting posterior is no longer significant\n\\citep{matthews2018}. Secondly, the conflict between the replication\nstudy result and the skeptical prior is quantified with a\nprior-predictive tail probability $p_{\\mbox{\\scriptsize Box}}$\n\\citep{box:1980}. Replication success at level $\\alpha$ is achieved if\n$p_{\\mbox{\\scriptsize Box}} \\leq \\alpha$. This definition turns out to\nbe equivalent to the requirement \\eqref{eq:general} with\n$z_u=z_\\alpha = \\Phi^{-1}(1-\\alpha)$\nand $c=\\sigma_o^2\/\\sigma_r^2$, the variance ratio original to\nreplication. 
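In code, the one-sided version of this criterion takes only a few lines. The sketch below is ours for illustration; the function name and the default threshold $\alpha_u = 0.025$ (so that $z_u \approx 1.96$) are arbitrary choices, not prescriptions from the text.

```python
from statistics import NormalDist

def replication_success(zo, zr, c, alpha_u=0.025):
    # one-sided check of (zo^2/zu^2 - 1)_+ (zr^2/zu^2 - 1)_+ >= c
    # with zu = Phi^{-1}(1 - alpha_u); both estimates must point in the
    # pre-specified (positive) direction
    if min(zo, zr) <= 0:
        return False
    zu = NormalDist().inv_cdf(1 - alpha_u)
    term = lambda z: max(0.0, (z / zu) ** 2 - 1.0)
    return term(zo) * term(zr) >= c
```

Both $z$-values must clear $z_u$ by a sufficient margin: for instance $z_o = z_r = 3$ succeeds at $c = 1$, whereas $z_o = 2$ with $z_r = 5$ fails, because the original study contributes almost nothing beyond the threshold.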
\nWe will use this specific choice of $c$ in \nthe replication setting\nbut treat $c$ for the moment as a free\nparameter not necessarily related to the standard errors $\\sigma_o$\nand $\\sigma_r$ of the two studies.\n\nFor fixed $z_o$, $z_r$ and $c$ we are often interested in the smallest\npossible value of $z_{u}^2$ where \\eqref{eq:general} holds and denote\nthis value as $z_{S}^2 \\in (0, \\min\\{z_o^2, z_r^2\\})$, defined as the smallest\npositive root of\n\\begin{equation}\\label{eq:equation}\n\\left({z_o^2}\/{z_{S}^2}-1 \\right) \\left({z_r^2}\/{z_{S}^2} \n- 1 \\right) = c. \n\\end{equation}\nAny $\\abs{z_S} \\geq z_u$ will hence lead to replication success, so \nthe threshold $z_u$ in \\eqref{eq:general} can now also be interpreted as a critical value for the \ntest statistic $z_S = + \\sqrt{z_S^2}$. \n\nIf both effect estimates go in the same direction, the transformation\n$p_S = 1-\\Phi(\\abs{z_S})$ defines the (one-sided) \\emph{skeptical\n $p$-value} in its original formulation, otherwise it is $p_S=\\Phi(\\abs{z_S})$\n\\citep{held2020}. A two-sided $p$-value $\\tilde p_S=2 \\{1-\\Phi(\\abs{z_S})\\}$\ncan also be considered, but is subject to the ``replication paradox'' \\citep{Ly_etal2019}\nwhere replication success can occur even if the effect\nestimates $\\hat \\theta_o$ and $\\hat \\theta_r$ are in opposite directions.\n\n\\subsection*{Overall Type-I error control}\\label{sec:pvalues}\n\nLet $\\alpha \\in (0, 1)$ be fixed. \nWe say a\n$p$-value $p$ has \\emph{linear T1E control} if\n\\begin{equation}\\label{eq:T1Elinear}\n \\Pr(p \\leq \\alpha \\given H_0) \\leq \\alpha \\mbox{ for all } \\alpha \\in (0, 1).\n\\end{equation}\nA $p$-value that fulfills\n\\eqref{eq:T1Elinear} is called \\emph{valid} \\citep{CasellaBerger2002}. If \\eqref{eq:T1Elinear} holds with\nequality for all $\\alpha$ then $p$ has \\emph{exact linear T1E\n control}.\nA $p$-value with {exact linear T1E control} has a uniform\ndistribution under the null. 
\n\nThe standard replication setting involves two studies, where it is useful to introduce $p$-values with\nsquared T1E control. If\n\\begin{equation}\\label{eq:T1Esquared}\n \\Pr(p_S \\leq \\alpha \\given H_0) \\leq \\alpha^2 \\mbox{ holds for all } \\alpha \\in (0, 1),\n\\end{equation}\nthen we say $p_S$ has \\emph{squared T1E control}. \nA $p$-value $p_S$ with squared T1E control also has\nlinear T1E control because $\\alpha^2 < \\alpha$ holds for all\n$\\alpha \\in (0, 1)$.\n\nIf \\eqref{eq:T1Esquared} holds with equality for all\n$\\alpha$ then $p_S$ has \\emph{exact squared T1E control}. A $p$-value\nwith {exact squared T1E control} has - by definition - a triangular null distribution (\\ie a beta\n$\\Be(2,1)$ distribution) on the unit interval. With a change-of-variables it can be shown that $p_S=\\sqrt{p}$ has exact squared T1E control, if $p$ has exact linear\nT1E control. On the other hand, if $p_S$ has\nexact squared T1E control then $p=p_S^2$ has exact linear T1E control.\n\nThe 'nominal' skeptical $p$-value $p_S$\nhas the property\n$p_S > \\max\\{p_o, p_r\\}$ \\citep[Sec.~3.1]{held2020}, so\n$p_S$ has squared T1E control for every value of $c>0$. This does, however, \nnot hold exactly and the nominal skeptical $p$-value has a much\nsmaller T1E rate than the two-trials rule. \nA perspective on the relative effect size \\citep{held_etal2020} leads to the modification \n$p_S = 1-\\Phi(\\sqrt{\\varphi}\\abs{z_S})$\nusing the golden ratio $\\varphi = (\\sqrt{5}+1)\/2$. This is less stringent than the \noriginal formulation but still ensures\nthat borderline significant original studies ($p_o \\approx \\alpha$) can only\nlead to replication success if the replication effect estimate is\nlarger than the original one. 
The T1E rate of the 'golden' skeptical\n$p$-value is always smaller than\n$\\alpha_g^2$ where \n\\begin{equation}\\label{eq:alphaS}\n\\alpha_g= 1-\\Phi({z_\\alpha}\/\\sqrt{\\varphi})\n\\end{equation}\nand still has squared\nT1E control if \n$c \\geq 1$ and $\\alpha \\leq 0.058$ \\citep[Section 3.2]{held_etal2020}.\n\n\nIn the following we will consider both one- and two-sided $p$-values\n$p$ and $\\tilde p$, respectively $p_S$ and $\\tilde p_S$. If they are based on two studies,\nthen $p=\\tilde p\/4$, if both effect estimates are in the same (pre-defined) direction. \nThis translates to $p_S=\\tilde p_S\/2$ and it is natural to set\n$p_S=1-\\tilde p_S\/2$ if the effect estimates disagree. \n\n\\subsection*{The two-trials rule}\\label{sec:2TR}\n\nThe two-trials rule\nrequires significance of both studies at level $\\alpha$\nso corresponds to $z_o^2 \\geq z_\\alpha^2$ and\n$z_r^2 \\geq z_\\alpha^2$, where $z_\\alpha = \\Phi^{-1}(1-\\alpha)$ is the\ncritical value. This can be identified\nas a limiting case of \\eqref{eq:general} with $z_u = z_\\alpha$ \nand $c \\downarrow 0$, where\n$z_S^2$ in \\eqref{eq:equation} converges to \n$\\min\\{z_o^2, z_r^2\\}$ \\citep[eq.~(11)]{held2020}.\n\nThe\nlargest value of $\\alpha$ (respectively the smallest value of $z_\\alpha$) such that the two-trials rule \nis fulfilled \nis\n$p_{S} = \\max\\{p_o, p_r\\}$ and the two-trials rule then translates to\n$p_{S} \\leq \\alpha$. \nUnder the null hypothesis both $p_o$\nand $p_r$ are uniformly distributed and it is straightforward to show that \nthe maximum of two independent uniform\nrandom variables follows a triangular $\\Be(2,1)$ distribution. The\nvariable $p_S$ is therefore a $p$-value with exact squared T1E control, so \n$p=p_S^2=\\max\\{p_o^2, p_r^2\\}$\nhas exact linear T1E control.\n\nWe may also consider the distribution of $Y=z_{S}^2 = \\min\\{z_o^2, z_r^2\\}$\nunder the null, where $X_o=z_o^2$ and $X_r=z_r^2$ are independent $\\chi^2(1)$\nrandom variables. 
The variable $Y$ has cumulative distribution function (cdf) \n\\begin{eqnarray}\nF_{0}(y) & = & 1 - 4 \\, \\left(1-\\Phi(\\sqrt{y}) \\right)^2 \\mbox{ for } y \\geq 0, \\label{eq:mincdf} \n\\end{eqnarray}\nwhich can be shown using\nthe fact that the cdf of the minimum $Y = \\min\\{X_1, X_2\\}$ of two iid\nrandom variables $X_1$ and $X_2$ with cdf $F_X(x)$ has cdf\n$F_Y(y) = 1-(1-F_X(y))^2$, here\n $ F_X(x) = \n 2 \\, \\Phi(\\sqrt{x}) - 1$\nis the cdf of the $\\chi^2(1)$ distribution for $x \\geq 0$. Furthermore, the expectation\nof $Y$ can be shown to be $\\E(Y)=1-2\/\\pi \\approx 0.36$, see\nSupporting Information (SI).\nNow $z_S^2$ doesn't take into account the direction of the effect estimates, so \nthe $p$-value\n\\[\n\\tilde p = 1- F_0(y=z_S^2) = 4\\left(1 - \\Phi(\\min\\{\\abs{z_o}, \\abs{z_r}\\})\\right)^2, \n\\]\n is two-sided with exact linear T1E control. The one-sided $p$-value is therefore \n$p=\\tilde p\/4 = \\left(\\max\\{p_o, p_r\\}\\right)^2= \\max\\{p_o^2, p_r^2\\}$, if\nboth effect estimates are in the correct direction and the relationship to the one-sided $p$-value \n$p_S=\\max\\{p_o, p_r\\}$ \nwith exact squared T1E control is simply $p_S = \\sqrt{p}$.\n\nThis perspective suggests a strategy for exact T1E control for any $c>0$: If we are able to derive the null distribution\nfunction $F_c(.)$ of $z_S^2$ in \\eqref{eq:equation} then we can\nuse the transformation $p=(1-F_c(z_S^2))\/4$ to \nobtain a one-sided $p$-value with exact linear T1E\ncontrol. The ``controlled'' skeptical $p$-value then is $p_S = \\sqrt{p}$.\nWe first consider the special case $c=1$, where the null distribution of $z_S^2$\nis particularly simple.\n\n\\subsection*{The harmonic mean ${\\chi^2}$-test}\\label{sec:harmonic}\nThe harmonic mean $\\chi^2$-test \\citep{held2020b} arises as a special\ncase of \\eqref{eq:general} for $c=1$. 
The solution of\n\\eqref{eq:equation} then is \n\\begin{equation}\\label{eq:solutionH}\nz_S^2 = {z_H^2}\/{2}\n\\end{equation}\nwhere\n$z^2_H=2\/(1\/z_o^2+ 1\/z_r^2)$ is the harmonic mean of the squared test\nstatistics $z_o^2$ and $z_r^2$. \nIt follows that\n$z^2_S$\nhas a gamma $\\Ga(1\/2, 2)$ distribution if $z_o^2$ and $z_r^2$ are independent\n$\\chi^2(1)$ \\citep[eq.~(2.3)]{PillaiMeng2016}. \nThe cdf $F_{c=1}(y)$ of $Y=z^2_S$ is thus readily\navailable and a two-sided $p$-value with exact linear T1E control can\nbe calculated: \n\\begin{equation}\\label{eq:p1}\n \\tilde p = 1-F_{c=1}(y=z_S^2= z_H^2\/2).\n\\end{equation}\nDivision by 4 gives the corresponding one-sided $p$-value $p=\\tilde p\/4$ and the square root\n$p_S=\\sqrt{p}$ is a one-sided $p$-value with exact squared T1E control. This implies that \nthresholding $p_S$ at $\\alpha$ will have a T1E rate of $\\alpha^2$. The corresponding value of\n$z_u$ in \\eqref{eq:general} is \\citep[Section 2.1]{held2020b}\n\\[\n z_u = \\Phi^{-1}(1-2 \\alpha^2)\/2.\n\\]\nFor $\\alpha=0.025$ we obtain\n$z_u=1.51$, which corresponds to\n$\\alpha_u=0.065$. \nThe threshold $\\alpha_u$ has two useful\ninterpretations. First, it serves as a threshold for the\nnominal skeptical $p$-value to achieve exact T1E control at level\n$\\alpha^2$. Secondly, a necessary but not sufficient requirement for\nreplication success is that both $p_o$ and $p_r$ are smaller than\n$\\alpha_u$. 
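The calculation of \eqref{eq:p1} and of the threshold $z_u$ can be sketched in a few lines (our code; note that scipy parametrizes the gamma distribution by shape and scale, so rate $2$ corresponds to scale $1/2$):

```python
import numpy as np
from scipy.stats import gamma, norm

def harmonic_mean_p(zo, zr):
    """Two-sided p-value (eq:p1) for c = 1:
    z_S^2 = z_H^2/2 has a Ga(1/2, rate 2) null distribution."""
    zS2 = 1.0 / (1.0/zo**2 + 1.0/zr**2)      # = z_H^2 / 2
    return gamma.sf(zS2, a=0.5, scale=0.5)   # scipy's scale = 1/rate

alpha = 0.025
zu = norm.ppf(1 - 2*alpha**2) / 2
print(round(float(zu), 2))               # 1.51
print(round(float(norm.sf(zu)), 3))      # 0.065, the upper threshold alpha_u
```

This reproduces the values $z_u=1.51$ and $\alpha_u=0.065$ quoted above for $\alpha=0.025$.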
The two interpretations also hold for $c \\neq 1$.\n\nWe note that the harmonic mean $\\chi^2$-test can be extended to combine the results from $n$ studies and can also include weights\n\\citep{held2020b}.\n\n\\subsection*{Exact Type-I error control for $\\mathbf{c \\neq 1}$}\\label{sec:exactT1Econtrol}\nFor $c > 0$ and $c \\neq 1$\nthere is a unique solution of \\eqref{eq:equation} that fulfills the \nrequirement $0 \\leq z_S^2 \\leq \\min\\{z_o^2, z_r^2\\}$:\n\\begin{eqnarray}\nz_S^2 \n&=& \n\\frac{z_A^2}{c-1} \n \\left\\{ \\sqrt{1 + (c-1) z^2_H\/z_A^2} - 1 \\right\\},\n \\label{eq:solution}\n\\end{eqnarray}\nhere $z^2_H$ is the harmonic and $z^2_A$\nthe arithmetic \nmean of the squared test statistics $z_o^2$ and $z_r^2$. Note that \\eqref{eq:solution} is always non-negative\nand\nalso works for $c=0$ where we obtain $z_S^2 = \\min\\{z_o^2, z_r^2\\}$. \n\nIn what follows we derive a new ``controlled'' version of the\nskeptical $p$-value with exact squared T1E control for any $c \\neq 1$, \ndefined as the square root of a\n$p$-value that is a function of \\eqref{eq:solution} \nand has exact linear T1E control. To obtain the required transformation of $z_S^2$, \nconsider the probabilistic version of equation \\eqref{eq:equation}, where\nthe random variable $Y=z_S^2$ depends on the two\nindependent random variables $z_o^2$ and $z_r^2$.\nUnder the null hypothesis of no effect, $z_o^2$ and $z_r^2$ are independent\n$\\chi^2(1)$-distributed. 
Then $z_A^2$ and $z_H^2\/z_A^2$ in \\eqref{eq:solution} \nare also independent \\citep[Section 4.7]{GrimmettStirzaker2001} \nwhich facilitates the computation of the \ncdf $F_c(y) = \\Pr(Y \\leq y \\given c)$ of $Y$ for every\nvalue of $c>0$, $c \\neq 1$:\n\\begin{equation}\\label{eq:Fc}\n F_c(y) = 1 - \\frac{1}{\\pi} \\int_0^{1} \n \\frac{\\exp\\{-g(y, t, c)\\}}{\\sqrt{t(1-t)}} \\, dt \n\\end{equation} \nwhere\n\\[\n g(y, t, c) = \n \\frac{{(c-1)y}}{{\\sqrt{1 + (c-1) t}-1}}.\n\\]\nDetails of the derivation of \\eqref{eq:Fc} are given in SI. Computation of $F_c(y)$ is straightforward with standard numerical\nintegration techniques and so\na two-sided $p$-value with exact linear T1E control can be calculated for every $c \\neq 1$: \n\\begin{equation}\\label{eq:p2}\n \\tilde p = 1-F_c(z_S^2)\n\\end{equation}\nwhere $z_S^2$ is defined in \\eqref{eq:solution}. Division by 4\ngives the corresponding one-sided $p$-value $p=\\tilde p\/4$ with exact linear T1E control \nand $p_S=\\sqrt{p}$\ngives the one-sided ``controlled'' skeptical\n$p$-value with exact squared T1E\ncontrol. \nThe skeptical $p$-value $p_S$ needs to be thresholded at\n$\\alpha$ to achieve overall T1E control at $\\alpha^2$. 
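These expressions are straightforward to implement; the following sketch (our code, not from the SI) evaluates \eqref{eq:solution} and \eqref{eq:Fc}, using the substitution $t=\sin^2\vartheta$ to absorb the $\Be(1/2,1/2)$ weight and its integrable endpoint singularities:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def z_S2(zo, zr, c):
    """Closed-form solution (eq:solution) of eq:equation; works for c >= 0, c != 1."""
    zA2 = (zo**2 + zr**2) / 2.0              # arithmetic mean of squared statistics
    zH2 = 2.0 / (1.0/zo**2 + 1.0/zr**2)      # harmonic mean of squared statistics
    return zA2/(c - 1.0) * (np.sqrt(1.0 + (c - 1.0)*zH2/zA2) - 1.0)

def F_c(y, c):
    """Null cdf (eq:Fc) of z_S^2 for c != 1; the substitution t = sin(theta)^2
    turns the weight 1/(pi sqrt(t(1-t))) dt into (2/pi) d(theta)."""
    def integrand(theta):
        t = np.sin(theta)**2
        g = (c - 1.0)*y / (np.sqrt(1.0 + (c - 1.0)*t) - 1.0)
        return np.exp(-g)
    val, _ = quad(integrand, 0.0, np.pi/2)
    return 1.0 - (2.0/np.pi)*val

def controlled_pS(zo, zr, c):
    p_two = 1.0 - F_c(z_S2(zo, zr, c), c)   # two-sided (eq:p2), exact linear control
    return np.sqrt(p_two / 4.0)             # one-sided controlled skeptical p-value
```

With $c=\sigma_o^2/\sigma_r^2$, thresholding \texttt{controlled\_pS} at $\alpha$ gives an overall T1E rate of $\alpha^2$.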
Equivalently, \nwe can use the \\emph{adaptive} threshold $\\alpha_u$ (as it depends on $c$) for \nthe nominal skeptical $p$-value to ensure T1E control at level $\\alpha^2$.\n\n\\subsection*{The case $\\mathbf{c \\rightarrow \\infty}$}\nFor $c \\rightarrow \\infty$ the two-sided\n$p$-value \\eqref{eq:p2} converges to \n\\begin{equation}\\label{eq:limp}\n\\tilde p_\\infty = \\lim_{c \\rightarrow \\infty} \\tilde p = \\frac{1}{\\pi} \\int_0^{1} \n \\frac{\\exp\\left(- z_G^2 \/ \\sqrt{t}\\right)}{\\sqrt{t(1-t)}} \\, dt ,\n\\end{equation}\nwhere $z_G^2 = \\sqrt{z_o^2 z_r^2} = \\abs{z_o z_r}$ is the geometric mean of the squared test\nstatistics $z_o^2$ and $z_r^2$ (proof to be found in SI).\nNote that $\\tilde p_\\infty = 1$ if either $z_o=0$ or $z_r=0$, as\n$f(x)=1\/\\{\\pi \\sqrt{x(1-x)}\\}$ is the density of a $X \\sim \\Be(1\/2,1\/2)$ random variable and thus integrates to $1$.\nFurthermore, \\eqref{eq:limp} is a valid (two-sided) $p$-value\nwith exact linear T1E control, \\ie $\\tilde p_\\infty$\nis uniformly distributed if $z_o^2$ and $z_r^2$ are i.i.d.~$\\chi^2(1)$. \n\n\nEquation \\eqref{eq:limp} has some similarities to Fisher's and\nStouffer's method for combining $p$-values from two studies: Fisher's\nmethod is based on the product of the $p$-values, Stouffer's method is\nbased on the sum of the $z$-values, whereas \\eqref{eq:limp} is based\non the product of the $z$-values. \n\n\n\n\\section*{Statistical applications}\n\\subsection*{A new family of combination tests}\\label{sec:combTest}\nThe framework \\eqref{eq:general} (in the one-sided formulation) can\nnow be used to define a family of combination tests for two studies with exact T1E\ncontrol, indexed by the parameter $c$. \nSpecifically, we can compute for fixed $c$ the required threshold\n$\\alpha_u \\geq \\alpha$ for the nominal skeptical $p$-value\nto achieve exact overall T1E control at level\n$\\alpha^2$. 
The threshold $\\alpha_u$ is adaptive, as it depends on $c$, and\nis a generalization of the two-trials rule with T1E rate $\\alpha^2$\nwhere $\\alpha_u=\\alpha$. In contrast, the ``nominal'' and ``golden''\nskeptical $p$-value are based on the (non-adaptive) threshold $\\alpha$ respectively \n$\\alpha_g$ from \\eqref{eq:alphaS}.\nWe can also fix $\\alpha$ and the threshold $\\alpha_u$ and compute the \nvalue of $c$ that achieves T1E rate $\\alpha^2$.\n\nThe special case of the harmonic mean $\\chi^2$-test ($c=1$)\ngives $\\alpha_u=0.065$ for $\\alpha=0.025$. \nIn\npractice one might want to have smaller values of $\\alpha_u$, for example\n$\\alpha_u=0.035$ where $c=0.1$. Importantly, $c$ respectively $\\alpha_u$ must be chosen \\emph{before} the\ntwo $p$-values are observed. Choosing $\\alpha_u$ \\emph{after} the first $p$-value\n$p_o$ has been observed (ensuring $\\alpha_u > p_o$) may not control the T1E anymore. \n\n\n \nComputation of the constant $\\alpha_u$ is done with numerical\ntechniques. Briefly, the overall T1E rate for a given value of $c$ and $\\alpha_u$ can be\ncomputed with numerical integration \\citep[Section\n3.2]{held_etal2020}. Root-finding methods are then used to find the\nvalue of $\\alpha_u$ that gives a T1E rate of $\\alpha^2$, see the inset\nplot in Figure \\ref{fig:fig3b}. One could also fix the threshold\n$\\alpha_u$ and compute the corresponding value of $c$.\n\nFigure \\ref{fig:fig3b} compares the success region of the proposed\ntest for different values of $c$. \nAll methods control the\nT1E rate at\n$0.025^2 = 0.000625$, so the\narea under each curve is equal to this value. The case\n$c = \\infty$ is based on the one-sided $p$-value $\\tilde p_\\infty\/4$\nfrom \\eqref{eq:limp}.\nThe two-trials rule success\nregion is the squared gray area below the black line and corresponds\nto $\\alpha_u=0.025$ where $c=0$. 
The inset plot gives\nthe upper threshold $\\alpha_u$ as a function of $c$.\nFor $c = \\infty$, the upper threshold is $0.5$, \nindicating that both effect estimates still\nhave to be in the pre-specified direction.\nIn contrast, both Fisher's and\nStouffer's method can flag success even if one effect estimate is in the \nwrong direction \\citep{held2020b}. \n\nAn important operating characteristic in the replication setting is the \nproject power \\citep{Maca2002}, \\ie the success probability over both studies under the alternative hypothesis. \nThe two studies are assumed to have the same sample size, so \nthe distribution of both $z_o$ and $z_r$ is \n\\begin{equation}\\label{eq:distAlt}\n\\Nor(z_\\alpha + z_\\beta, 1),\n\\end{equation}\nwhere $1 - \\beta$ is \nthe individual power of each study to detect the true effect size at level $\\alpha$ with a standard $z$-test \\citep[Section 3.3]{mat2006}. The project power of the \nproposed test can then be calculated numerically \\citep[Section 3.3]{held_etal2020}.\nTable~\\ref{tab:tabPP} reports this \nquantity for different values of $c$ and the individual power $1-\\beta$\nat $\\alpha = 0.025$.\nThe project power is simply $(1-\\beta)^2$ for $c = 0$ \n(corresponding to the two-trials rule) and increases with increasing $c$. \n\n\n\\begin{figure}\n\\begin{knitrout}\n\\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\\color{fgcolor}\n\\includegraphics[width=\\maxwidth]{fig1-1} \n\\end{knitrout}\n\\vspace{-0.2cm}\n\\caption{\\label{fig:fig3b} Comparison of the combination test \n for different values of $c$. \n Below each line is the success region\n depending on the $p$-values\n $p_o$ ($x$-axis) and $p_r$ ($y$-axis). All methods control the T1E rate at\n $\\alpha^2=0.025^2 = 0.000625$, so the area under each\n curve is equal to this value. The $y$-axis indicates the different\n values for the upper threshold $\\alpha_u$ in color. 
The thresholds $\\alpha_u$ for the cases\n$c=1$ and $c=10$ are $0.065$ and $0.14$, see the inset plot. \n The two-trials rule success\n region is the squared gray area below the black line with $\\alpha_u = \\alpha = 0.025$ \n where $c=0$.}\n\\end{figure}\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{lrrrrrr}\n Individual &\\multicolumn{6}{c}{Project power (\\%)} \\\\\npower (\\%) & $c = 0$ & $c = 0.01$ & $c = 0.1$ & $c = 1$ & $c = 10$ & $c = \\infty$ \\\\ \n \\hline\n80 & 64 & 65 & 67 & 71 & 74 & 75 \\\\ \n 90 & 81 & 82 & 84 & 87 & 89 & 90 \\\\ \n 95 & 90 & 91 & 92 & 94 & 96 & 96 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Project power of the combination test at $\\alpha=0.025$ for two equally sized studies with individual power given in the first column.} \n\\label{tab:tabPP}\n\\end{table}\n\n\n\\subsection*{{\\textit P}-value function and confidence regions}\\label{sec:ci}\n\nUp to now we have considered the test statistic $z_o$ and $z_r$ for\nthe null hypothesis $H_0$: $\\theta=0$, $i \\in \\{o, r\\}$. We now extend\nthis and consider the generalized test statistic\n\\begin{equation}\\label{eq:zMu}\nz_i(\\mu) = \\frac{\\hat \\theta_i - \\mu}{\\sigma_i}\n\\end{equation}\nfor the null hypothesis $H_0$: $\\theta=\\mu$, $i \\in \\{o, r\\}$. The\nvalues $z_o(\\mu)$ and $z_r(\\mu)$ are then used to compute $z_A^2(\\mu)$, \n$z_H^2(\\mu)$ and finally $z_S^2(\\mu)$\nin \\eqref{eq:solutionH} resp.~\\eqref{eq:solution} and so we can calculate a (two-sided) $p$-value function\n\\citep{Fraser2019}\n\\begin{equation}\\label{eq:pvalueFunction}\n \\tilde p(\\mu) = 1 - F_c( z_S^2(\\mu))\n\\end{equation}\nas a function of the null value $\\mu$ for every value of $c$. A \nconfidence region at any level $\\gamma$ can then be defined as the\nset of $\\mu$ values where $\\tilde p(\\mu) \\geq 1-\\gamma$.\nTwo examples are given in Figure \\ref{fig:Rprojects2}.\n\nThe confidence region can be computed with numerical root-finding\ntechniques. 
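For the special case $c=1$, where $F_{c=1}$ is the $\Ga(1/2,2)$ cdf, the region can alternatively be traced on a grid of $\mu$ values; a sketch with hypothetical effect estimates (for $c\neq 1$ one would replace the gamma cdf by $F_c$):

```python
import numpy as np
from scipy.stats import gamma

def p_mu(mu, th_o, th_r, se):
    """Two-sided p-value function (eq:pvalueFunction) for c = 1:
    z_S^2(mu) = z_H^2(mu)/2 with Ga(1/2, rate 2) null distribution."""
    with np.errstate(divide='ignore'):       # mu = th_o or th_r gives z_i = 0
        zo = np.float64(th_o - mu) / se
        zr = np.float64(th_r - mu) / se
        zS2 = 1.0 / (1.0/zo**2 + 1.0/zr**2)
    return gamma.sf(zS2, a=0.5, scale=0.5)

# hypothetical conflicting estimates with equal standard errors
th_o, th_r, se, gam = 0.2, 0.6, 0.1, 0.95
grid = np.linspace(-0.5, 1.5, 4001)
inside = np.array([p_mu(m, th_o, th_r, se) for m in grid]) >= 1 - gam
region = grid[inside]   # here: two disjoint intervals around the two estimates
```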
However, the $p$-value function \\eqref{eq:pvalueFunction}\nis in general bimodal, with peaks at $\\mu=\\hat \\theta_i$,\n$i \\in \\{o, r\\}$. Indeed, \\eqref{eq:zMu} will be zero for\n$\\mu=\\hat \\theta_i$, $i \\in \\{o, r\\}$, and therefore $z_H^2(\\mu)$ will\nbe zero. Then $z_S^2(\\mu)$ in \\eqref{eq:solutionH}\nresp.~\\eqref{eq:solution} will also be zero and therefore\n$\\tilde p(\\mu=\\hat \\theta_i)$ will be 1 for $i=o$ and $i=r$. The\n$p$-value function in \\eqref{eq:pvalueFunction} will be smaller than one\nelsewhere. The confidence region can therefore be a union of two\ndisjoint intervals rather than just one interval, if there is conflict\nbetween the original and replication study effect estimates. This\nphenomenon is more likely to occur for smaller confidence levels\n$\\gamma$. Taking the limit $\\gamma \\rightarrow 0$ shows that the\nproposed framework does not give one, but two point estimates equal to\nthe effect estimates from the two studies. Whether or not the\nconfidence region splits into two intervals depends on the minimum\n$\\min \\tilde p = \\min_{\\hat \\theta_o < \\mu < \\hat \\theta_r} \\tilde\np(\\mu)$ of the $p$-value function between the two effect estimates\n$\\hat \\theta_o$ and $\\hat \\theta_r$. A split will occur if\n$\\min \\tilde p < 1-\\gamma$.\n\n\n\nIf $c$ represents the variance ratio\n$c=\\sigma^2_o \/ \\sigma_r^2$ and we have $c=1$, \\ie\n$\\sigma^2_o = \\sigma_r^2 = \\sigma^2$, the minimum $\\min \\tilde p$ of the $p$-value function $\\tilde p(\\mu)$ over the\ninterval $(\\hat \\theta_o, \\hat \\theta_r)$ is equal to the $p$-value from \nthe $Q$-statistic. 
\nIndeed, the minimum then occurs at\n$\\mu = (\\hat \\theta_o + \\hat \\theta_r)\/2$ and the squared test statistic \n\\eqref{eq:zMu} can be written as\n\\[\n z_i^2 = \\frac{(\\hat \\theta_i - \\mu)^2}{\\sigma^2} = \\frac{(\\hat\n \\theta_o - \\hat \\theta_r)^2}{4 \\sigma^2} \\mbox{ for } i \\in \\{o, r\\}, \n\\]\nand so \n\\[\n z_S^2(\\mu) = \\frac{1}{1\/z_o^2+1\/z_r^2} = \n \\frac{1}{2} \\frac{(\\hat\n \\theta_o - \\hat \\theta_r)^2}{4 \\sigma^2} = \\frac{Q}{4}\n\\]\nwhere $Q$ is the $Q$-statistic. Now $Q$ has a $\\chi^2(1)$-distribution\nunder the null and $z_S^2(\\mu)$ has a $\\Ga(1\/2, 2)$ null\ndistribution, so $4 \\, z_S^2(\\mu)$ also has a\n$\\Ga(1\/2, 1\/2) \\stackrel{d}{=} \\chi^2(1)$ null distribution. \nFor $c \\neq 1$, $\\min \\tilde p$ will be close, but not \nequal to the $p$-value from the $Q$-test. \n\n\n\n\n\\section*{Application to replication studies}\\label{sec:applications}\n\n\nWe now consider the reverse-Bayes setting where\nwe will use the variance ratio $c = \\sigma_o^2\/\\sigma_r^2$ and assume that \nit can be re-written as $c=n_r\/n_o$.\nThis means that \n$c$ now comes directly from the data and $\\alpha_u$ is a function of \n$c$ and $\\alpha$ (as shown in Figure~\\ref{fig:fig3b}, inset plot),\nensuring exact T1E\ncontrol at level $\\alpha^2$.\n\n\n\\subsection*{Conditional and predictive power}\n\nThe power to achieve replication success, given the\nresults from the original study, is needed for calculation of the replication sample size $n_r = c \\, n_o$. \nConditional power assumes that the effect estimate from the original study is the\ntrue effect, whereas predictive power takes the uncertainty of the original effect estimate into account \\citep[Section 4.1]{held2020}.\nFigure \\ref{fig:fig5} (first row) shows\nconditional and predictive power of the controlled and golden\nskeptical $p$-value at $\\alpha=0.025$ and compares it to the two-trials \nrule for $z_o=2$\n(left) and $z_o=4$ (right). 
For $z_o= 2$, the power of the controlled\nskeptical $p$-value is smaller compared to the two-trials rule while it is the\nother way round for $z_o=4$. For each method, conditional power is larger \nthan predictive if it is above 50\\%, and smaller otherwise. It is noteworthy that for a borderline significant result ($z_o=2$) it is difficult to achieve large power values with the golden skeptical $p$-value, even for large relative sample size $c$. This is not the case for the controlled skeptical $p$-value, where the power curves are only slightly lower than for the two-trials rule. The differences between golden and controlled are less pronounced for $z_o= 4$.\n\\begin{figure}[!ht]\n\\begin{knitrout}\n\\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\\color{fgcolor}\n\\includegraphics[width=\\maxwidth]{fig2-1} \n\\end{knitrout}\n\\vspace{-0.2cm}\n\\caption{\\label{fig:fig5} First row: Conditional (solid) and predictive (dashed) power of the\n controlled and golden skeptical $p$-value and the two-trials rule (2TR) for \n $z_o = 2$ (left) and $z_o = 4$ (right).\n Second row: Project power as a function of the \n relative sample size $c$ based on all original studies (left) \n or the significant ones \n($p_o < \\alpha = 0.025$) only.\nResults are given for the golden and controlled skeptical $p$-value and\n compared with the two-trials rule (2TR).}\n\\end{figure}\n\\subsection*{Project power}\nComputation of the project power over both studies assumes that the distribution of $z_o$ is as in \\eqref{eq:distAlt}.\nHowever, the sample size $n_r$ of the replication study now depends on $c$ via \n$n_r = c \\, n_o$, so the distribution of $z_r$ is $z_r \\sim \\Nor(\\sqrt{c} (z_\\alpha + z_\\beta), 1)$.\nThe power of the original study to detect the assumed effect \n$\\theta$ is $1-\\beta$, and the power of the replication study also depends \non $c$. 
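A minimal Monte Carlo sketch of the project power of the two-trials rule under these distributional assumptions (our illustration; the actual figures are based on numerical integration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
alpha, beta, n = 0.025, 0.10, 1_000_000
za, zb = norm.ppf(1 - alpha), norm.ppf(1 - beta)

def project_power_2tr(c):
    """Pr(both one-sided p-values <= alpha) with z_o ~ N(za + zb, 1)
    and z_r ~ N(sqrt(c)(za + zb), 1)."""
    zo = rng.normal(za + zb, 1.0, n)
    zr = rng.normal(np.sqrt(c)*(za + zb), 1.0, n)
    return np.mean((zo >= za) & (zr >= za))

print(project_power_2tr(1.0))   # close to (1 - beta)^2 = 0.81
```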
\n\nFigure \\ref{fig:fig5} (second row) shows the project power as a function of the relative sample size $c$ for $\\alpha=0.025$ and $\\beta=0.1$.\nFor the controlled skeptical $p$-value, the project power is smaller than with the \ngolden level for $c<0.85$, but larger otherwise and increases to\nvalues close to 98.3\\% for $c=10$, \nwhere the golden level reaches only\n93.7\\% project power. \nThe project power based on the two-trials rule is always smaller \nand converges to 90\\% for large $c$. \n\nIf we assume that a replication study will only be conducted if the original study is significant\n($p_o \\leq 0.025$), \nthen the project power of both the golden and controlled skeptical $p$-value \nconverges to 90\\% for large $c$, \nbut remains larger than with the two-trials rule for smaller values of $c$. \n\n\\subsection*{Minimum relative effect size}\\label{sec:minres}\nThe one-sided assessment of replication success allows us to rearrange\nEquation~\\eqref{eq:general} to a condition based on the relative\neffect size $d = \\hat \\theta_r \/ \\hat \\theta_o$ \\citep{held_etal2020}.\nFigure~\\ref{fig:minRES} displays the minimum relative effect size for\nreplication success with the controlled skeptical $p$-value and the\ntwo-trials rule for different relative sample sizes $c$. As compared\nto the two-trials rule, the controlled skeptical $p$-value has a\nsmaller minimum relative effect size for more convincing original\nstudies, and a larger one for less convincing ones. \nThe success region of the controlled\nskeptical $p$-value does not have the strong cut-off of the two-trials\nrule at $\\alpha = 0.025$. 
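For a fixed threshold, the rearrangement can be done in closed form using $z_r = d\sqrt{c}\,z_o$, which follows from $\sigma_r = \sigma_o/\sqrt{c}$; the sketch below (our code) takes the adaptive threshold $\alpha_u$ as a given input:

```python
import numpy as np
from scipy.stats import norm

def d_min(po, c, alpha_u):
    """Minimum relative effect size d = theta_r/theta_o for replication success,
    solving (z_o^2/z_u^2 - 1)(c d^2 z_o^2/z_u^2 - 1) = c with z_r = d sqrt(c) z_o."""
    zo, zu = norm.ppf(1 - po), norm.ppf(1 - alpha_u)
    if zo <= zu:
        return np.inf   # success impossible once p_o >= alpha_u
    return zu / (np.sqrt(c)*zo) * np.sqrt(1.0 + c*zu**2/(zo**2 - zu**2))
```

The returned value decreases with decreasing $p_o$: more convincing original studies require a smaller replication effect estimate.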
The largest value of $p_o$ where\nreplication success is possible is the threshold $\\alpha_u$, which\nincreases with increasing relative sample size $c$ and is displayed in\ncolor in the top axis of Figure~\\ref{fig:minRES}.\n\n\n\n\n\\begin{figure}[!h]\n \\centering\n\\begin{knitrout}\n\\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\\color{fgcolor}\n\\includegraphics[width=\\maxwidth]{fig3-1} \n\\end{knitrout}\n\\vspace{-0.6cm}\n\\caption{Minimum relative effect size to achieve replication success \nwith the two-trials rule and the controlled skeptical $p$-value \nfor selected values of the relative sample size $c$ at $\\alpha = 0.025$.}\n\\label{fig:minRES}\n\\end{figure}\n\n\\subsection*{Application to Experimental Economics Replication Project}\\label{sec:appEE}\n\n\n\n\nWe now illustrate the proposed methodology using all 18\nstudies from the Experimental Economics replication project\n\\citep{Camerer2016}. The different effect estimates were all transformed to correlation\ncoefficients, where Fisher's $z$-transformation achieves\nasym\\-ptotically normal effect estimates $\\hat \\theta_i$ with standard\nerrors $\\sigma_i = 1\/\\sqrt{n_i - 3}$. Figure~\\ref{fig:Rprojects}\ncompares the golden ($x$-axis) and the controlled ($y$-axis) skeptical\n$p$-value. The color of the dots indicates the relative effect size\n$d$, the size represents the relative sample size $c$. There is good\nagreement between the two $p$-values (with Kendall's rank correlation coefficient\n$\\tau=0.93$) as both \nincrease with decreasing relative effect size. 
For comparison, the correlation of the \ncontrolled skeptical $p$-value with $p_S = \\max\\{p_o, p_r\\}$ from \nthe two-trials rule is $\\tau=0.91$.\n\n\n\n\n\\begin{figure}\n\\begin{knitrout}\n\\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\\color{fgcolor}\n\\includegraphics[width=\\maxwidth]{fig4-1} \n\\end{knitrout}\n\\vspace{-0.5cm}\n\\caption{\\label{fig:Rprojects} Comparison of the golden ($x$-axis) \nand controlled ($y$-axis) skeptical $p$-value. \nThe vertical and horizontal lines are at the standard 0.025 threshold.}\n\\end{figure}\n\n\n\n\n\nThe controlled skeptical $p$-value tends to be slightly smaller than\nthe golden one but there are 3 exceptions. One is the study by \nKessler \\& Roth \\citep{KesslerRoth2012}\nwhere controlled\n$p_S=0.003$ while\ngolden\n$p_S=0.001$. This study\nhas a highly significant original study ($z_o =\n9.0$) and by far the smallest relative sample size among all\n18 studies\n($c=0.16$). For such small\nvalues of $c$, the adaptive threshold $\\alpha_u$ will be smaller than the \n(non-adaptive) threshold $\\alpha_g = \n0.062$ \nfrom \\eqref{eq:alphaS}\n(compare the inset plot in Figure \\ref{fig:fig3b}), so \nthe controlled skeptical $p$-value will be larger than the golden one. \n\n\nThe study originally conducted by Ambrus \\& Greiner \\citep{AmbrusGreiner2012}, where the\ncontrolled approach leads to success at level $\\alpha=0.025$\n($p_S=0.024$) whereas the golden\napproach doesn't ($p_S=0.049$) is\nof particular interest. Figure\n\\ref{fig:Rprojects2} (left) displays the forest plot (with 95\\% confidence interval for the original and replication effect, as well as the skeptical and the meta-analytic 95\\% confidence interval for the combined effect) and the $p$-value function \nfor this study. The forest plot shows that the original study was not significant\nand there was some shrinkage of effect size so\nthe skeptical $p$-value at the golden level could not be successful by\ndefinition. 
In contrast, the controlled skeptical $p$-value flags\nreplication success at $\\alpha=0.025$. \nNote that one-sided controlled $p_S=0.024$ translates into two-sided $\\tilde p=(2 \\cdot 0.024)^2 = \n0.002$ at $\\mu=0$, as indicated in the title of the $p$-value function plot. \nThere is\nno evidence for conflict between original and replication ($p$-value\nfrom $Q$-test = 0.65, $\\min p = 0.66$). The 95\\% 'skeptical' confidence interval is\n[0.10, 0.49], its upper limit slightly larger \ncompared to the (fixed-effect) meta-analytic one\n([0.10, 0.41]).\nThe skeptical confidence interval at level $1- (2\\alpha)^2$ = 99.75\\% \nis [0.002, 0.580] and\ndoes - by definition (as controlled $p_S < \\alpha$) - not include zero.\n\n\n\\begin{figure*}[!ht]\n\n\\begin{knitrout}\n\\definecolor{shadecolor}{rgb}{0.969, 0.969, 0.969}\\color{fgcolor}\n\\includegraphics[width=\\maxwidth]{fig5-1} \n\\end{knitrout}\n\\vspace{-0.6cm}\n\\caption{\\label{fig:Rprojects2} Forest plot (with 95\\% confidence interval for the original and replication effect, as well as the skeptical and the meta-analytic 95\\% confidence interval\/region for the combined effect)\nand $p$-value function \n for the Ambrus \\& Greiner \\citep{AmbrusGreiner2012} (left) and the \n de Clippel \\etal \\citep{deClippel2014}\n(right) study.\n}\n\\end{figure*}\n\nAs a third example we consider the original study by de Clippel \\etal\n\\citep{deClippel2014}. Forest plot and $p$-value function are shown in\nFigure \\ref{fig:Rprojects2} (right). This is an example where there is clear \nreplication success ($p_S < 0.0001$), but also conflict between original and replication study (with the replication\neffect estimate considerably larger than the original one) and so the\nskeptical confidence region splits into two intervals. In contrast, the\nmeta-analytic pooled confidence interval is barely supported by the\neffect estimates from both studies. 
The $p$-value from the $Q$-test\nis 0.002 and the minimum of the $p$-value function\nalso has this value, since the variance ratio is $c \\approx 1$.\n\n\n\n\n\\section*{Discussion}\\label{sec:discussion}\n\nWe have described a novel statistical framework for the assessment of\nreplicability. It offers exact overall Type-I error control for the\nassessment of replication success, a confidence region for the\ncombined treatment effect, and an assessment of conflict between the\ntwo studies. All these aspects can be derived from the $p$-value\nfunction based on the criterion \\eqref{eq:general}, which we are able\nto compute based on the null distribution of the quantity $z_S^2$ in\n\\eqref{eq:equation}. \n\n\nThe framework stems from a recently proposed reverse-Bayes approach to assess\nreplication success \\citep{held2020}, where the parameter\n$c$ is the variance ratio $\\sigma_o^2\/\\sigma_r^2$, so the order of the\ntwo studies matters. The skeptical $p$-value $p_S$\ncan now be recalibrated to have exact squared Type-I error\ncontrol. This is achieved by an adaptive level $\\alpha_u$ which\ndepends on the variance ratio $c$. The transformed $p$-value\n$p=p_S^2$ then has exact linear T1E control and the corresponding\n$p$-value function produces a confidence region fully compatible\nwith the skeptical $p$-value. The framework thus addresses an\nimportant point raised by Diggle \\citep{Diggle2020} about the need to\naccompany the skeptical $p$-value with suitable estimation procedures to \nassess the relevance of the observed effects \\citep{Stahel2021}.\n\n\n\nThe two-trials rule is a limiting case of the formulation for\n$c \\rightarrow 0$, where $p_S$ is the maximum of the two\nstudy-specific $p$-values. We could also derive the limiting form of\n$p_S$ for $c \\rightarrow \\infty$. 
The success region of our\nformulation (both in terms of the two $p$-values and the relative\nsample size) can be viewed as a smoothed version of the two-trials\nrule, avoiding its ``double dichotomization'' and offering larger\nproject power. However, T1E control through the adaptive level comes\nat a certain price: the explicit penalization of small relative effect\nsizes in the previous (non-adaptive) nominal or golden\nversions of the skeptical $p$-value is lost and replication success\nmay occur even for large shrinkage of the effect estimate, as long as\nthe relative sample size $c$ is large enough. Our conclusion is that\nT1E control and penalization of small effect sizes are two competing\ngoals that cannot be achieved by a single criterion. It would be\ninteresting to extend the recently proposed dual-criterion for\nreplication studies \\citep{Rosenkrantz2021}, \nwhich simultaneously requires significance and relevance, \nto the skeptical $p$-value. \n\n\n\nNevertheless, there is good agreement between the \ncontrolled version and the golden version in our application\nto data from the Experimental Economics replication\nproject. Furthermore, the controlled version addresses concerns about\nthe ``stubbornness'' \\citep{LyWagenmakers2020} of the original (non-adaptive) formulation, if the\noriginal study is not particularly convincing and there is shrinkage\nin effect size, as exemplified by the reanalysis of the\nAmbrus \\& Greiner replication study.\n\nThis raises the question whether the controlled version is\nappropriate in the presence of heterogeneity between the two studies,\nwith different underlying true effects. In addition, the original effect\nestimate is likely to be biased if only significant studies have been\nselected for replication. 
The present formulation assumes that the\nunderlying effects are the same but could be extended to incorporate\nheterogeneity and bias if reasonable assumptions can be made about the\nsize of it \\citep{pawel2020}.\n\nThe paper has focused on the analysis,\nbut we plan to investigate the\ndesign of replication studies in future work. \nAvailable methodology \\citep[Section 4.2]{held2020} \nthen needs to \nbe extended to the proposed adaptive level. \nSince the adaptive level $\\alpha_u$ depends on the relative sample size $c$,\nsample size calculation \ndoes not have a closed-form expression.\nRoot-finding algorithms are therefore required to find \nthe value of $c$ which corresponds to the desired (conditional or predictive) power.\n\n\n\n\\paragraph*{Data and Software Availability} \nThe data used in Section \\ref{sec:appEE} is originally from \\url{https:\/\/osf.io\/pnwuz\/} \nand available in the R-package \\texttt{ReplicationSuccess}, available from \\texttt{CRAN}\n\\citep[supplement S1]{pawel2020}. \nThe code used in this work can be accessed\nat \\url{https:\/\/gitlab.uzh.ch\/charlotte.micheloud\/framework-for-replicability}.\n\n\\paragraph*{Acknowledgments}\n\nLH thanks the University of Zurich for granting a sabbatical leave\nthat made this research possible. LH and CM acknowledge\nsupport by the Swiss National Science Foundation (Project \\#\n189295). We appreciate helpful comments by Rachel Heyard and Samuel Pawel.\n\n\n\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sect:intro}\nThe lightest supersymmetric particle (LSP) with R-parity conservation\nis absolutely stable and can contribute to the present Universe energy\ndensity as a dominant component of cold dark matter (CDM). 
If the\nstrong CP problem is solved by introducing a light pseudoscalar, an axion, its fermionic SUSY partner,\nan {\em axino}, can be a natural candidate for CDM if it is the LSP.\nThe relic axino CDM can be produced either non-thermally, through\nout-of-equilibrium decays, or thermally, through scatterings and\ndecays in the hot plasma, as originally shown in\nRefs.~\cite{CKR00,CKKR01}. The scenario was subsequently extensively studied during\nthe last decade~\cite{CRS02,CRRS04,Steffen04,Strumia10,AxinoRevs,flaxino,\n Baer, Wyler09}, with either a neutralino or a stau as the\nnext-to-lightest supersymmetric particle (NLSP). Ways of testing the\naxino CDM scenario at the LHC were explored\nin Refs.~\cite{Brandenburg05,Hamaguchi:2006vu,ChoiKY07,Wyler09} and implications for Affleck-Dine\nBaryogenesis in Ref.~\cite{Roszkowski:2006kw}.\n\nThe strong CP problem is naturally solved by introducing a very light\naxion field $a$. The axion appears when the Peccei-Quinn (PQ) symmetry is\nbroken at some scale $\fa$. Below the PQ scale, after integrating out\nheavy quarks carrying PQ charges~\cite{KSVZ79}, an effective\naxion--gluon interaction is given by\n\dis{\n{\mathcal L}_a^{\rm eff}=\frac{\alpha_s}{8\pi \fa}a\, G_{\mu\nu} \widetilde{G}^{\mu\nu},\n\label{Leffa}\n}\nwhere $\alpha_s$ is the strong coupling constant, $G$ is the field\nstrength of the gluon field and $\widetilde{G}^{\mu\nu}\equiv\n\frac12 \epsilon^{\mu\nu\rho\sigma} G_{\rho\sigma}$ is its dual with $\epsilon_{0123}=-1$.\nDifferent types of axion models have been proposed, with distinctively\ndifferent couplings to Standard Model (SM) fields, depending\non their PQ charge assignment.\n\nVery light axion models contain a complex SM singlet scalar field\ncarrying a PQ charge. 
In the Kim-Shifman-Vainstein-Zakharov~(KSVZ)\nclass of models~\\cite{KSVZ79} the PQ charges are assigned to new heavy\nquarks, while in the Dine-Fischler-Srednicki-Zhitnitskii~(DFSZ)\napproach~\\cite{DFSZ81} the PQ charges are assigned to the SM\nquarks. This difference is the origin of different phenomenological\nproperties~\\cite{KimRMP10} of the KSVZ and DFSZ classes of models\nsince in the low energy effective theory at the electroweak (EW) scale\n(after integrating out heavy fields) the gluon anomaly term is the\nsource of all interactions in the KSVZ models while the Yukawa\ncouplings are the source of all interactions in the DFSZ models. For\nsolving the strong CP problem, one needs a coupling of the axion to\nthe gluon anomaly and this is generated by a heavy quark loop in the\nKVSZ models, appearing as a non-renormalizable effective coupling when\nthose heavy fields are integrated out. In the DFSZ models instead the\ncoupling is generated by SM quark loops.\n\nThe couplings arising in these two popular classes of models will be discussed in the\nnext section. 
For the KSVZ axion, constraints on the PQ scale $\\fa$\nhave been obtained from astrophysical and cosmological considerations\nand the scale is limited to a rather narrow window $10^{10}\\gev\n\\lsim \\fa \\lsim 10^{12}\\gev$~\\cite{KimRMP10}, while for the\nDFSZ case precise constraints on $\\fa$ have not been derived yet.\n\nThe axino as a candidate for CDM was originally studied mostly for the SUSY\nversion of the KSVZ axion model, in an important production mode\ncorresponding to the interaction term given by \\eq{Leffa}~\\cite{CKKR01}.\nThe supersymmetrization of axion models\nintroduces a full axion supermultiplet $A$ which contains the\npseudoscalar axion $a$, its scalar partner {\\em saxion} $s$, and their\nfermionic partner axino $\\axino$,\n\\dis{\n A=\\frac{1}{\\sqrt2}(s+ia)+\\sqrt2 \\axino \\vartheta + F_A\n \\vartheta\\vartheta,\n}\nwhere $F_A$ stands for an auxiliary field and $\\vartheta$ for a Grassmann coordinate.\n\nThe effective axion interaction of \\eq{Leffa}\ncan easily be supersymmetrized using the superpotential of the axion\nand the vector multiplet $W_\\alpha$ containing the gluon field\n\\dis{ {\\mathcal\n L}^{\\rm eff} =-\\frac{\\alpha_s}{2\\sqrt2\\pi \\fa}\\int A\\,{\\rm\n Tr}\\,[W_\\alpha W^\\alpha].\n\\label{Leff3}\n}\nEffective axion multiplet interactions with the other gauge bosons can\nbe obtained in a similar way.\nAxino production from QCD scatterings due to interaction~\\eq{Leff3} has been\nconsidered in the literature in different approximations, which have\nled to somewhat different numerical results. 
The main technical\ndifficulty and source of uncertainty is the question of how to\nregulate the infrared divergences due to the exchange of massless gluons.\nIn the original study~\\cite{CKKR01} a simple insertion of a thermal\ngluon mass to regulate infrared (IR) divergences was used and the\nleading logarithmic term was obtained, without much control over the subleading\nfinite piece, since the thermal mass was introduced by hand.\nSubsequently, a hard thermal loop (HTL) resummation method was applied\nin Ref.~\\cite{Steffen04}, allowing for a self-consistent determination of\nthe gluon thermal mass and more control over the constant term in the\nhigh energy region of axino production.\nRecently, in Ref.~\\cite{Strumia10} a new calculation was presented\nwhich, although not gauge invariant, includes more terms\nof the perturbative series, in particular the decay of the gluon\nwhose thermal mass can be larger than the gluino and axino mass taken together.\nThe two latter methods have their own advantages and limitations, and\nthey coincide in the high energy region where the convergence\nof the perturbation series is stable.\nIn Ref.~\\cite{Strumia10} a previously neglected dimension-5\nterm in the Lagrangian, containing purely interactions between supersymmetric\nparticles was also included. This however changes the axino production rate by less\nthan 1\\%. In our paper, we adopt the original way of effective mass approximation\nbut we improve it to make the result positive definite. 
Although this\nmethod is not gauge invariant either, it is the only known viable\nmethod at relatively low reheating temperatures, which is the regime\nimportant for the axino as a cold dark matter candidate.\nWe shall come back to this discussion in more detail below.\n\nIn the calculations published so far the couplings of the axino to the gauge\nmultiplets other than that of the gluon were also neglected.\nIn fact, in Ref.~\cite{CKKR01} a chiral transformation of the\nleft-handed lepton doublets was performed to remove the axion $SU(2)$\nanomaly interaction and to leave only the axion $U(1)$ contribution.\nThen the axion $SU(2)$ anomaly coupling re-appears in\nprinciple from the leptonic loops, which are independent\nof the fermion masses.\nThe corresponding axino loops, on the other hand, are suppressed by\nthe ratio $m_{\rm lepton}^2\/M^2$ where $M$ is the largest mass\nin the loop. Therefore the error made in neglecting the axion $SU(2)$\nanomaly is estimated to be at most of order $m_\tau^2\/m_{\tilde\tau}^2$\ncompared to the $SU(3)$ term.\nHowever, in supersymmetric extensions of axionic models, the chiral\nrotation of the lepton fields also involves their scalar counterparts,\nthe sleptons and the saxion, which generates additional\ncouplings for those fields. It is also not clear whether it is justified\nto perform such a redefinition in a fully supersymmetric way when\nsupersymmetry is in any case broken. Rather than going into the swamp\nof such interactions with unknown slepton and saxion masses as\nparameters, we will keep here the general axion $SU(2)$ anomaly\ninteraction and its SUSY counterpart, which is more tractable, and\nexamine its effect on the axino abundance.\n\nMoreover, in the present work we will also consider the role of\nYukawa-type interactions between the axino and the matter multiplets,\nwhich arise either at one loop or at tree level, depending on the\nmodel. 
We will investigate what effect these terms may have and in\nparticular how model dependent our results for the axino abundance\nare. In previous studies, only the KSVZ model was considered for\naxino production. However, in the DFSZ model axino production is\ndominated by the Yukawa coupling and the dependence on the reheating\ntemperature is quite different from that in the KSVZ model.\nOnly recently Refs.~\cite{Chun:2011zd} and~\cite{Bae:2011jb}\n considered axinos in the DFSZ model. In particular,\n Ref.~\cite{Chun:2011zd} gave a simple approximate formula for the\n relic density of a light axino as dark matter. In our paper, we study\n axino production in the DFSZ model and the suitability of the DFSZ\n axino as dark matter in a more complete way. \n\nIn view of the recent developments in estimating the QCD contribution,\nin this paper we also update and re-examine\nrelic CDM axino production, with an emphasis on an estimate of\nthe uncertainties as well as on model dependence.\nIn particular, our updated analysis of the axino CDM scenario\nimproves on the previous works in the\nfollowing aspects:\\\\\n(i) an inclusion of some previously neglected terms in the axino production\n and of subleading terms in the mass of the gluon\nbeyond the logarithmic and constant pieces -- these last parts\nensure that the cross-section remains positive even for the invariant\nmass $s$ smaller than the gluon thermal mass; \\\\\n(ii) an explicit calculation of axino production via $SU(2)$ and $U(1)$\ninteractions; \\\\\n(iii) a derivation of the axino abundance in specific implementations\nof the KSVZ and DFSZ models; \\\\\n(iv) an update on the constraints on the reheating temperature\n$\treh$~\footnote{ We assume the instant reheating approximation\n and define the reheating temperature as the maximum temperature at\n which standard Big Bang expansion with a thermalised bath of SM\n particles starts. 
We can easily translate the axino abundance\n given with this conventional definition of the reheating\n temperature to more specific reheating scenarios. E.g.,\n Ref.~\cite{Strumia10} considers a reheating process from the decay\n of the inflaton and obtains a slightly smaller abundance of\n axinos, reduced by a factor of 0.745. In comparing our results\n we account for this difference. } for both the\nneutralino and the stau as the NLSP using the current WMAP-7 result on\nthe DM relic density and relevant structure formation data.\n\nIn Sect.~\ref{sec:AxinoTh}, we define the axino by its relation to the\nshift symmetry of the KSVZ and DFSZ axions. In\nSect.~\ref{sec:AxinoProd} we calculate the axino production rate for the\nKSVZ and DFSZ axion models, and in Sect.~\ref{sec:CosBounds} we\ndiscuss several constraints on the scenario arising from cosmology.\nFinally, in Sect.~\ref{sec:Concl} we present our\nconclusions.\n\n\n\section{Axinos}\label{sec:AxinoTh}\n\n\subsection{The framework}\n\nIn a recent review~\cite{KimRMP10} low energy axion interactions were given in terms\nof the effective couplings with the SM fields $c_1$, $c_2,$ and\n$c_3$, which arise after integrating out all heavy PQ charged fields.\nThe effective axion Lagrangian terms are\n\dis{\n&\frac{(\partial_\mu a)}{\fa}\sum_i\left(c_1^u\bar u_i\n \gamma^\mu\gamma_5 u_i+c_1^d \bar d_i \gamma^\mu\gamma_5\n d_i\right)\\\n&+\frac{a}{\fa}\sum_i\left(c_2^um_u^i\bar u_i i\gamma_5 u_i+\n c_2^dm_d^i\bar d_i i\gamma_5 d_i\right)\\\n &\quad\quad\quad+\frac{c_3}{32\pi^2 \fa}\n a G\tilde G, \label{eq:efflagr}\n}\nwhere $c_3$ can be defined to be 1 (if it is non-zero) by rescaling\n$\fa$.\footnote{The strong coupling $g_3$ is usually omitted by\n absorbing it into the field strengths. 
Here, $\\fa$ denotes the so-called\n axion decay constant which is typically related to the vacuum\n expectation value $V$ of the PQ symmetry breaking scalar field by\n $\\fa=V\/N$ where $N$ is the domain wall number.}\nThe $c_1 $ term is the PQ symmetry preserving derivative\ninteraction of the axion field that can be reabsorbed into the $c_2 $\nterm by a partial integration over on-shell quarks. For the $c_2$\nterms, we have only kept the lowest order terms (proportional to\n$1\/\\fa$), while in principle an infinite series of terms in $ a\/\\fa $\narises.\n\nIn the following we will consider the two popular scenarios mentioned earlier: the\nKSVZ and the DFSZ classes of axion models. The KSVZ class of axion models corresponds to\nthe choice $c_1=c_2=0, c_3=1$, and the DFSZ one to $c_1=c_3=0$\nand $c_2\\ne 0$, after integrating out the heavy field sector responsible\nfor PQ symmetry breaking. In the\nlatter model, if the Higgs doublets $H_{u,d}$ carry respective PQ\ncharges $ Q_{u,d}$, the SM fields also carry PQ charges,\\footnote{In\n variant axion models, $ Q_{u,d}$ may have family dependence but\n here we suppress family indices. They can be inserted if\n needed.} see Table~I, and the anomaly interaction proportional to\n$c_3 \\neq 0 $ arises from SM quark loops. The axion mass is given by\nthe strong interaction and is proportional to $|c_2+c_3|$. Hence the\nsum $c_2+c_3$, if it is non-zero, defines the QCD axion. Only this\ncombination of the two couplings is physical, since a chiral\naxion-dependent PQ rotation of the quark fields can shift the values\nof $c_2 $ and $c_3$, while keeping $c_2+c_3$ constant. This is connected to\nthe well-known fact that, if one of the quark masses is zero then the\nanomaly becomes unphysical and can be reabsorbed in the rotation of\nthe massless field.\n\nIn the supersymmetric version of axionic models, the interactions of\nthe saxion and the axino with matter are related by supersymmetry to\nthose of the axion. 
Hence the definition of the axion at low energy must be\nconnected to the definition of the axion multiplet, and therefore of\nthe saxion and the axino, at energies above the EW scale.\\footnote{The\n supersymmetrization of axion models was first discussed in\n Refs.~\\cite{Tamv82,NillesRaby82,Frere83}. An explicit model was\n first constructed in~\\cite{Kim83}, and the first cosmological study\n was performed in Ref.~\\cite{Masiero84}.}\n\nBelow we will examine more closely the KSVZ and DFSZ models of the\naxion. In both models one imposes the PQ symmetry at a high energy\nscale and the axion emerges from its spontaneous symmetry breaking.\nAs a specific example, let us consider the PQ sector at high energy with the\nPQ breaking implemented as in Ref.~\\cite{Kim83} \\dis{ W_{\\rm PQ}=f_Z\n Z(S_1S_2-V_a^2),\\label{KimW} } where $Z,S_1$, and $S_2$ are gauge\nsinglet chiral superfields, $f_Z$ is a Yukawa coupling and $V_a$ is a\nparameter in the Lagrangian which determines non-zero VEVs\n$\\langle S_1 \\rangle= \\langle S_2\\rangle =V_a$ in\nthe minimization process. The superfields transform under the\n$\\uonepq$ symmetry as \\dis{ Z\\rightarrow Z,\\quad S_1 \\rightarrow\n e^{i\\alpha\\Qs}S_1,\\quad S_2 \\rightarrow e^{-i\\alpha\\Qs}S_2. }\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\nModel & $Z$ & $S_1$ & $S_2$ & $Q_L$ & $\\overline{Q}_R$\n&$H_d$ &$H_u$& $q_L$& $D^c_R$ & $U^c_R$ \\\\\n\\hline\n{\\rm KSVZ}& 0 & $\\Qs$&$-\\Qs$&$-\\frac12\\Qs$&$-\\frac12\\Qs$& 0& 0& 0&0&0\\\\\n{\\rm DFSZ}& 0 & $\\Qs$&$-\\Qs$& 0& 0&$-Q_d$ & $-Q_u$& 0& $+Q_d$ &$+Q_u$\\\\\n\\hline\n\\end{tabular}\n\\caption{The PQ charge assignment $Q$. $Q_L$ and $\\overline{Q}_R$ denote\nnew heavy quark multiplets. In the DFSZ model $2\\Qs=Q_u+Q_d$, and the\n PQ charge of the left-handed SM quark doublets $q_L$\n vanishes. See text for more details. 
}\n\\label{table:PQcharge}\n\\end{center}\n\\end{table}\nThe potential in the global supersymmetric limit is\n\\dis{ V=\\sum_a\n \\left|\\frac{\\partial W}{\\partial \\phi_a} \\right|^2 + \\frac12 D^a\n D^a,\n}\nwith\n\\dis{ D^a=g\\sum_\\phi \\phi^*T^a\\phi,\n}\nwhere $g$ is the\ngauge coupling of the gauge groups and $\\phi$ denotes collectively all\nthe scalar fields. With the superpotential given by\n\\eq{KimW} one has\n\\dis{\n &\\frac{\\partial W}{\\partial Z} = f_Z(S_1S_2-V_a^2),\\\\\n &\\frac{\\partial W}{\\partial S_1} =f_Z Z S_2,\\\\\n &\\frac{\\partial W}{\\partial S_2} =f_Z Z S_1.\\label{eq:Kim83} }\nAt the\ntree-level both $S_1$ and $S_2$ develop VEVs and break the PQ\nsymmetry. The fermionic partners of $Z$ and $S'=(S_1+S_2)\/\\sqrt2$\ncombine to become a Dirac fermion with mass $m_Z=\\sqrt2 f_Z V_a $, while\n$S=(S_1-S_2)\/\\sqrt2$ contains the axion field and can be\nidentified with the axion multiplet. From \\eq{eq:Kim83} we note that,\nif $\\langle Z\\rangle =0$ and SUSY is not broken then both the axino\nand the saxion are mass degenerate with the axion.\n\nHowever, with soft\nSUSY breaking terms included, $\\langle Z\\rangle$ can develop a non-zero\nvalue, in which case both the saxion and the axino become massive, independently of\nthe axion.\nTherefore, a full specification of the SUSY breaking mechanism is\nneeded in order to determine the mass of the axino exactly. 
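The spectrum claimed above can be cross-checked symbolically. The sketch below (using sympy) builds the matrix of second derivatives of $W_{\rm PQ}$ in the basis $(Z,S_1,S_2)$, evaluated at $\langle S_1\rangle=\langle S_2\rangle =V_a$ and $\langle Z\rangle=z_0$; the numerical values and variable names are only illustrative:

```python
# Cross-check of the fermion spectrum of the PQ sector W_PQ = f_Z Z (S1 S2 - V_a^2):
# mass matrix of second derivatives of W_PQ in the basis (Z, S1, S2),
# evaluated at <S1> = <S2> = V_a and <Z> = z0.
import sympy as sp

f_Z, V_a, z0 = sp.symbols('f_Z V_a z_0', positive=True)
M = f_Z * sp.Matrix([[0,   V_a, V_a],
                     [V_a, 0,   z0],
                     [V_a, z0,  0]])

# For <Z> = 0: a Dirac pair of mass sqrt(2) f_Z V_a (the Z-tilde with
# S' = (S1+S2)/sqrt(2)) plus one massless state, the axion multiplet.
evs = M.subs(z0, 0).eigenvals()
assert sum(evs.values()) == 3
assert any(sp.simplify(e - sp.sqrt(2)*f_Z*V_a) == 0 for e in evs)
assert any(sp.simplify(e + sp.sqrt(2)*f_Z*V_a) == 0 for e in evs)
assert any(sp.simplify(e) == 0 for e in evs)

# For <Z> = z0 != 0 the direction (S1 - S2)/sqrt(2) is still an eigenvector,
# now with eigenvalue -f_Z z0, i.e. an axino mass |f_Z z0|.
v = sp.Matrix([0, 1, -1]) / sp.sqrt(2)
assert sp.simplify(M*v + f_Z*z0*v) == sp.zeros(3, 1)
```

This confirms that the axino and saxion are degenerate with the axion only for $\langle Z\rangle=0$, and that a non-zero $\langle Z\rangle$ lifts the axino mass.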
This was first studied\nin Ref.~\cite{Kim83} for the superpotential above, while another\nsuperpotential and SUSY breaking with a very small axino mass was\ngiven in Ref.~\cite{ChunKN92}.\nRecently the case of a direct coupling between the axion and the\nSUSY breaking sector was discussed in Ref.~\cite{Higaki:2011bz}.\n\n\subsection{The KSVZ model}\nIn the KSVZ approach, in order to obtain the anomalous interaction of the axion and the\ngluon fields, one introduces the heavy quark fields $Q_L$ and $Q_R$ in the superpotential as in Ref.~\cite{KSVZ79},\n\dis{\nW_{\rm KSVZ}=W_{\rm PQ}+f_Q Q_L\overline{Q}_R S_1.\label{Wksvz}\n}\nAfter the $\uonepq$ symmetry breaking takes place, as discussed above, $Q_L$ and $Q_R$\ncombine to become a heavy Dirac fermion with mass $m_Q=f_Q V_a$.\nAssuming that $ V_a \gg m_{3\/2} $, the scalar partner has practically the\nsame mass and the whole supermultiplet can be integrated out in a\nsupersymmetric way.\nThe low-energy Lagrangian can then be obtained by integrating out the heavy\nquark multiplet or by using the anomaly matching condition.\nIn this way one finds that the axion anomaly term becomes\n\dis{\n{\mathcal L}^{\rm eff}\n= \frac{ \alpha_s\n N_Q}{8\pi \fa }a\, G^{\mu\nu}\widetilde{G}_{\mu\nu},\label{axion}\n}\nwhere the axion $a$ is the pseudoscalar component of the superfield $S\equiv\n(S_1-S_2)\/\sqrt2$, and the axion decay constant is $\fa=2V_a$. $N_Q$\nis the number of the heavy quarks and we consider $N_Q=1$ in our\ncase. Then we have $c_1=c_2=0$ and $c_3=1$~\cite{KimRMP10}.\n\nThe low-energy interactions of the saxion and the axino fields can be obtained in the same\nway by integrating out the heavy (s)quark fields. 
However, in the\nlimit of unbroken SUSY the low-energy effective Lagrangian should be\ngiven in a SUSY invariant form, including \\eq{axion}, as\n\\dis{\n{\\mathcal L}^{\\rm eff}=-\\frac{\\alpha_s}{2\\sqrt{2} \\pi \\fa}\\int\nd^2\\,\\vartheta A\\, W^{a \\alpha} W^a_{\\alpha} +\n\\textrm{h.c.} \\label{LeffKSVZ}\n}\n\nThe axion superfield $A$ and $W^{a\\,\\alpha}$ are given by~\\cite{WessBag92}\n\\dis{\n&A=\\frac{s+ia}{\\sqrt2}+\\sqrt2 \\vartheta{\\psi_a} + \\vartheta^2 F_A,\\\\\n&W^{a}_{\\alpha}=-i{\\lambda}^{a}_{\\alpha} + [\\delta_\\alpha^\\beta D^a\n -\\frac{i}{2}(\\sigma^{\\mu}\\bar\\sigma^\\nu)_\\alpha^\\beta G_{\\mu\\nu}^a]\\vartheta_\\beta\n+\\vartheta\\vartheta\\sigma^\\mu_{\\alpha\\dot{\\alpha}} D_\\mu\n\\bar{{\\lambda}}^{\\dot{\\alpha}}.\n\\label{component}\n}\nThe effective Lagrangian in terms of the Bjorken-Drell gamma matrices\nthen reads~\\footnote{Our saxion-gluino-gluino interaction\n differs by a factor of 2 from that in Ref.~\\cite{Strumia10}.}.\n\\dis{\n{\\mathcal L}^{\\rm eff}=& \\frac{\\alpha_s}{8\\pi \\fa}\\left[\n a(G^{a\\mu\\nu}\\widetilde{G}^a_{\\mu\\nu}\n +D_\\mu(\\overline{\\tilde{g}}\\gamma^\\mu\\gamma^5\\tilde{g}) ) \\right.\n \\\\\n&+s(G^{a\\mu\\nu}G^a_{\\mu\\nu}-2D^aD^a+2i \\overline{\\tilde{g}}\\gamma^\\mu D_\\mu\\tilde{g})\\\\\n&\\left. + i\\overline{\\axino}G^{a}_{\\mu\\nu}\\frac{[\\gamma^\\mu,\\gamma^\\nu]}{2}\n\\gamma^5\\gluino-2 \\overline{\\axino}\\gluino D^a\\right]\\\\\n& +\\sqrt2\\left(F_A\\lambda\\lambda + \\textrm{h.c.} \\right). 
\label{eq:Leff}\n}\nwhere the gluino and axino 4-spinors are given by\n\dis{\n\tilde{g}=\left( \begin{array}{c}\n-i\lambda\\\ni\bar{\lambda}\n\end{array}\right),\qquad\n\tilde{a}=\left( \begin{array}{c}\n\psi_a\\\n\overline{\psi}_a\n\end{array}\right).\n}\nThe effective Lagrangian~(\ref{eq:Leff}), including the axino interaction\nwith the D-term, has recently been used in Ref.~\cite{Strumia10} to calculate the thermal\nproduction rate of axino dark matter and will be the basis also for the\npresent analysis.\n\n\subsection{The DFSZ model}\nIn the DFSZ framework, the $SU(2)_L\times U(1)_Y$ Higgs doublets carry PQ charges\nand thus the light quarks are also charged under $\uonepq$.\nThe charge assignment is shown in Table~\ref{table:PQcharge}.\nThe anomaly coupling $a G\widetilde{G}$ can be obtained after electroweak\nsymmetry breaking (EWSB) through the coupling\nof the axion to the Higgs doublets which couple to the light quarks.\nTo this end, one adds a non-renormalizable term to the PQ breaking\nsuperpotential~\cite{Kim:1983dt}\n\dis{\n W_{\rm DFSZ}=W_{\rm PQ}+ \frac{f_s}{\mplanck} S_1^2 H_d H_u,\n\label{eq:DFZSsuperpotential}\n}\nwhere $W_{\rm PQ}$ is given in \eq{KimW} and\n$H_d H_u\equiv \epsilon_{\alpha\beta}H_d^\alpha H_u^\beta$.\nNote that here $ f_s \sim \mu M_P\/V_a^2 \sim 1 $\ngenerates a phenomenologically acceptable supersymmetric $ \mu$-term.\n\nThe superpotential including light quarks is given by\n\dis{\nW_{\rm MSSM}=y_t Q H_u U^c +y_b Q H_d D^c +y_\tau L H_d E^c.\n\label{WMSSM}\n}\n\nBefore the EW symmetry is broken but after the\n$\uonepq$ symmetry is broken, the massless axion Goldstone boson is identified as\n\dis{\na=\frac{a_1-a_2}{\sqrt2}.\label{axion_DFSZ_high}\n}\nHere we defined the component fields in the same way as in\n \eq{component},\n \dis{\n S_1=\frac{s_1+ia_1}{\sqrt2}+\sqrt2 \vartheta \tilde{a}_1 + \vartheta^2 F_1,\n }\n and similarly for $S_2$. 
\nInstead, the axino mass eigenstate can be obtained from the mass matrix\n\\dis{\nf_Z\\begin{bmatrix}\n0& z_0 & \\VEV{S_2} \\\\\nz_0&0&\\VEV{S_1}\\\\\n\\VEV{S_2}& \\VEV{S_1}& 0\n\\end{bmatrix}\n\\simeq\nf_Z\\begin{bmatrix}\n0& z_0 & V_a \\\\\nz_0&0&V_a\\\\\nV_a& V_a& 0\n\\end{bmatrix}. } in the basis of\n$(\\tilde{a}_1,\\tilde{a}_2,\\tilde{Z})$ and we have used $z_0=\\VEV{Z}$.\nThe lightest state, which we will identify with the axino, has mass\n$f_Zz_0$ and is given by \\dis{ \\tilde{a}=&\n \\frac{\\tilde{a}_1-\\tilde{a}_2}{\\sqrt2}, } which coincides\n with \\eq{axion_DFSZ_high}. Since EW symmetry is not broken, the\naxion (and thus the axino) do not mix with the Higgs (and higgsino)\nand therefore the axion cannot have the $SU(3)_c$ anomalous\ninteraction with gluons generated at one loop via the quark triangle\ndiagrams.\\footnote{An axino-gluino-gluon coupling can arise at two\n loops through the Higgs Yukawa couplings, which are non-vanishing\n even above EW symmetry breaking, but it is strongly suppressed and\n we will neglect it here.} However, $SU(2)_L$ and $U(1)_Y$ anomalous\ninteraction can be generated via a higgsino triangle loop through the\nYukawa coupling derived from \\eq{eq:DFZSsuperpotential}. 
They are\ngiven by\n\dis{ {\mathcal L}_{\rm anomaly}= -\frac{\alpha_2}{8\pi\n V_a}aW^a_{\mu\nu}\widetilde{W}^{a\mu\nu}-\frac{\alpha_1}{8\pi\n V_a}aB_{\mu\nu}\widetilde{B}^{\mu\nu}.\n\label{DFSZanomaly21}\n}\nThese anomaly interactions appear also for the axino, but\nmay obtain corrections of order one from SUSY breaking,\ncompared to the axion couplings, and such uncertainties will be\nlater encoded in the coefficients $C_{aWW}, C_{aYY} $.\n\nAfter the EW and the $\uonepq$ symmetries are both broken,\nthe axion is identified with the Goldstone boson of the broken $\uonepq$\nsymmetry, given by\n\dis{\na = \frac{2\Qs V_a a_s-Q_dv_d P_d -Q_uv_uP_u}{\sqrt{4 \Qs^2V_a^2 +Q_d^2v_d^2+Q_u^2v_u^2}},\label{eq:axiondef}\n}\nwhere $a_s=(a_1-a_2)\/\sqrt2$, and we expanded the Higgs fields as\n\dis{\nH_d^0= &\frac{v_d+h_d}{\sqrt2}e^{iP_d\/v_d},\qquad\nH_u^0= \frac{v_u+h_u}{\sqrt2}e^{iP_u\/v_u},\\\n&\qquad v=\sqrt{v_d^2+v_u^2}\,,\n}\nwith $v_d\/\sqrt{2}=\langle 0|H_d|0\rangle$ and\n$v_u\/\sqrt{2}=\langle 0|H_u|0\rangle$, while $ P_{d,u} $ are the pseudoscalar\nfields contained in the electrically neutral components of $ H_{d,u}\n$.\n\n\nThe neutral Higgs boson component eaten by the $Z$-boson and the orthogonal\npseudoscalar Higgs, \ie the phases of $H_d,H_u$ and $S_1-S_2 $, are\ngiven by\n\dis{\n&Z^L=\frac{-v_dP_d+v_uP_u}{v},\\\n&A=\frac{v_dv_ua_s+V_a v_uP_d +V_a v_dP_u}{\sqrt{V_a^2v^2 +v_d^2v_u^2}},\\\n&a = \frac{(\frac{v_d}{v_u}+\frac{v_u}{v_d})V_a a_s-v_uP_d -v_dP_u}{v \sqrt{v^2\/(v_d^2v_u^2)V_a^2 + 1}}\\\n&\qquad = \frac{v^2V_a a_s-v_dv_u^2P_d -v_uv_d^2P_u}{v\sqrt{v^2V_a^2 +v_d^2v_u^2}}\n\,.\label{eq:pseudorth}\n}\nEquating (\ref{eq:axiondef}) with the last term of \eq{eq:pseudorth},\nwe obtain\n\dis{\n&Q_d =\frac{v_u}{v_d} = \tan\beta ,\qquad\nQ_u=\frac{v_d}{v_u} = 1\/\tan\beta,\\\n& \Qs= \frac12\left(\frac{v_d}{v_u}+\frac{v_u}{v_d} \right)\n= \frac{1}{2 \sin\beta\cos\beta},\n}\nup 
to a common normalization constant.\n\nThe interactions of the axion with the matter fields are obtained through the\naxion part of the phase of the Higgs fields,\n\dis{\n&P_d \simeq \frac{v_dv_u^2}{v^2V_a}a + \cdots = \frac{v\,a}{V_a} \sin^2\beta \cos\beta + \cdots,\\\n&P_u \simeq \frac{v_d^2v_u}{v^2V_a}a + \cdots = \frac{v\,a}{V_a} \sin\beta \cos^2\beta + \cdots.\n}\nConsidering the Yukawa interaction from the superpotential, \eq{WMSSM},\n\dis{\n{\mathcal L}=-y_t \overline{u}_Ru_LH_u^0-y_t^*\overline{u}_Lu_RH_u^{0\,*}+\cdots,\n}\nthe Lagrangian terms for the up-type quark axion couplings are given by\n\dis{\n{\mathcal L}= &-y_t \frac{v_u}{\sqrt2}\overline{u}_Ru_L\exp\left[i\frac{v_d^2}{v^2}\n\frac{a}{V_a} \right]\\\n&-y_t^*\frac{v_u}{\sqrt2}\overline{u}_Lu_R\exp\n\left[-i\frac{v_d^2}{v^2}\frac{a}{V_a} \right]+\cdots, \label{eq:DFSZYuk}\n}\nand similarly for the down-type quarks.\nWe can now compare the above to the coefficients in the definition of\nthe axion~\cite{KimRMP10} and obtain the following values for\n$c_2^{u,d}$; compare \eq{eq:efflagr},\n\dis{\nc_2^{u}= \frac{v_d^2}{v^2} = \cos^2\beta ,\qquad\nc_2^{d}= \frac{v_u^2}{v^2} = \sin^2\beta.\n\label{eq:DFSZc2all}\n}\n\n\nAfter integrating out all the Higgs fields except the axion supermultiplet, which remains light,\nall the axion couplings arise from the $c_2$ terms given in \eq{eq:DFSZYuk} and\n\eq{eq:DFSZc2all} at low energies above the quark masses.\nAt one loop in the SM fermions one obtains the axion-anomaly\ninteraction term. 
It is then given by\n\\dis{\n{\\mathcal L}_{\\rm anomaly}=\\frac{\\alpha_s N}{8\\pi (2V_a)}aG^{\\mu\\nu}\\widetilde{G}_{\\mu\\nu}\\,,\\label{DFSZanomaly3}\n}\nwhere $N=6$ and again $\\fa=2V_a\/N$.\n\nIn the supersymmetric limit, below EW symmetry breaking, the axino\nmass eigenstate can be read off from \\eq{eq:axiondef} and is given by\n\\dis{\n\\axino= \\frac{2\\Qs V_a \\tilde{a}_s-Q_dv_d\\tilde{h}_d\n -Q_uv_u\\tilde{h}_u}{\\sqrt{4\\Qs^2 V_a^2\n +Q_d^2v_d^2+Q_u^2v_u^2}}\\,. \\label{eq:axinoDFSZ}\n}\nHere $ \\tilde h_{d,u} $ denote the fermionic components of the\nelectrically neutral parts of $ H_{d,u} $.\nHowever since supersymmetry is broken, in general in the DFSZ models\nthe axino mixes with the higgsinos differently than the axion with\nthe Higgs and the mass eigenstate is not exactly the state given in\n\\eq{eq:axinoDFSZ}. Such a field is a good approximation to the\nphysical axino only if the mixing generated from the\nsuperpotential~(\\ref{eq:DFZSsuperpotential}) and SUSY breaking,\nwhich is of order $v_{u,d}\/\\fa $ and $z_0\/\\fa$, is negligible. Otherwise\nthe whole axino-neutralino mixing matrix has to be considered in\ndetail. 
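The charge identification in the DFSZ pseudoscalar sector derived above can be verified symbolically. The sketch below works with unnormalized direction vectors in the $(a_s,P_d,P_u)$ field space; the variable names are illustrative:

```python
# Symbolic check of the DFSZ pseudoscalar sector: the directions of the
# would-be Goldstone eaten by the Z, the heavy pseudoscalar A, and the
# axion a, using the PQ charges Q_d = v_u/v_d, Q_u = v_d/v_u quoted in the text.
import sympy as sp

vd, vu, Va = sp.symbols('v_d v_u V_a', positive=True)
v2 = vd**2 + vu**2

Qd, Qu, Qs = vu/vd, vd/vu, (vd/vu + vu/vd)/2   # charges obtained in the text

axion = sp.Matrix([2*Qs*Va, -Qd*vd, -Qu*vu])   # axion direction, unnormalized
ZL    = sp.Matrix([0, -vd, vu])                # eaten by the Z boson
A     = sp.Matrix([vd*vu, Va*vu, Va*vd])       # heavy pseudoscalar Higgs

# the three pseudoscalar directions are mutually orthogonal
assert sp.simplify(axion.dot(ZL)) == 0
assert sp.simplify(axion.dot(A)) == 0
assert sp.simplify(ZL.dot(A)) == 0

# and the axion direction matches the closed form quoted in the text
ref = sp.Matrix([v2*Va, -vd*vu**2, -vu*vd**2])
assert sp.simplify(vd*vu*axion - ref) == sp.zeros(3, 1)
```

The orthogonality checks confirm that the axion direction decouples from both the longitudinal $Z$ and the heavy pseudoscalar once $2\Qs=Q_u+Q_d$.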
We will not discuss this case further, but point instead to\nthe related studies in the case of the\nNext-to-Minimal Supersymmetric Standard Model with a\nsinglino LSP~\\cite{NMSSM}.\n\nIn the DFSZ models, there are also axino tree-level Yukawa interaction terms\nto the quark and squark with a coupling of the order of $m_q\/\\fa$\nbelow the EWSB scale, with the Higgs and higgsino with\ncoupling $\\mu\/\\fa$ even above the EWSB scale.\nThese tree-level interactions are not suppressed by a gauge\ncoupling or a loop factor, as the QCD anomaly term has, and thus they\ngive the dominant contribution to axino production through the\ndecay and\/or scattering processes at low reheating temperature $\\treh$.\nAt high reheating temperatures above EWSB instead the $ SU(3)_c$\nanomaly coupling is absent and the $SU(2)_L$ anomalous interaction\ndominates the production.\n\n\\section{The production of axinos}\\label{sec:AxinoProd}\n\nThere are two efficient and robust ways of populating the early\nUniverse with axinos. Firstly, they can be produced through\nscatterings and decays of particles in thermal equilibrium. This\nmechanism, which we call {\\em thermal production} (TP) depends on the\nreheating temperature after inflation. The other mechanism, which is\nindependent of $\\treh$, involves {\\em non-thermal production} (NTP) of\naxinos from the decay of the NLSP after it has frozen out from the\nplasma. 
Note also that, even though squarks are normally not the NLSP\nand remain in thermal equilibrium, for $\\treh \\lsim \\msquark$ and\nlarge gluino mass, axino yield from decay processes\n$\\squark\\rightarrow q \\axino$ can dominate the abundance~\\cite{CRS02}.\nAdditionally the decay of inflaton or moduli can produce axinos but\nsuch contributions are very model dependent and won't be considered here.\n\nThermally produced axinos in the $\\kev$ mass range were considered as\nwarm dark matter (WDM) in Ref.~\\cite{RTW91} and much lighter ones as hot DM\nin Ref.~\\cite{Masiero84}. In Ref.~\\cite{CKR00} it was shown that axinos from\nneutralino NLSP decays can be a natural candidate for CDM and this was\nextended in Ref.~\\cite{CKKR01} to include TP. If the axino mass is between\naround an MeV to several GeV, the correct axino CDM density is obtained\nwhen $\\treh$ is less than about $5\\times 10^4\\gev$~\\cite{CKKR01}. At\nhigher $\\treh$ and lower ($\\sim\\kev$) mass, axinos could constitute\nWDM. As a digression, we note that, when axinos are very heavy, and\nit is the neutralinos that play the role of the LSP, their population\nfrom heavy axino decays could constitute CDM~\\cite{CKLS08}, leading in\nparticular to the possibility of TeV-scale cosmic ray positrons, as\npointed out in Ref.~\\cite{HuhDecay09}. 
The possibility of either WDM\nor very heavy axinos is not discussed in this paper.\n\nThe interactions leading to CDM axinos were extensively studied in\nterms of cosmological implications in Refs.~\cite{CKKR01,CRS02,CRRS04}\nand collider signatures in Refs.~\cite{Brandenburg05,Hamaguchi:2006vu,ChoiKY07,Wyler09}.\nIf the LHC does not confirm the decay of heavy squarks or gluinos to a lighter neutralino, the axino CDM idea with R-parity conservation cannot be saved unless some other mechanism, such as effective SUSY, is introduced~\cite{effSUSY}.\n\nIn general the couplings of the axino to gauge and matter fields are\nanalogous to those given in \eq{LeffKSVZ}\n\dis{\n{\mathcal L}^{\rm eff}=&-\frac{\sqrt2 \alpha_s}{8\pi \fa}\int\nd^2\,\vartheta S\, G^{a \alpha}G^a_{\alpha}\\&-\frac{\sqrt2\n \alpha_2 C_{aWW}}{8\pi \fa}\int d^2\,\vartheta S\, W^{a \alpha}\nW^a_{\alpha}\\&-\frac{\sqrt2 \alpha_Y C_{aYY}}{8\pi \fa}\int\nd^2\,\vartheta S\, Y^{a \alpha} Y^a_{\alpha} +\n\textrm{h.c.}. \label{Leff}\n}\nHere we normalize the PQ scale by $2V_a\/N\rightarrow \fa$, which\nsets $ c_3 =1 $ for the QCD anomaly coupling and therefore defines the\ncoefficients $C_{aWW}$ and $C_{aYY}$ in the axion models considered above.\n\n\subsection{Thermal production}\n\nAt sufficiently high temperatures ($\gsim10^9\gev$) axinos\ncan reach thermal equilibrium with SM particles and their\nsuperpartners. However, assuming that a subsequent period of cosmic\ninflation dilutes the population of such primordial axinos (and that\nthey are not produced directly in inflaton decay), a post-inflationary\naxino population comes firstly from the hot thermal bath. If the\nreheating temperature is very high, above the decoupling temperature\nof axinos ($\sim10^9\gev$), their relic number density again reaches\nthe thermal-equilibrium value and is the same as that of photons.
In that case\naxinos must be so light ($\lsim 1\kev$) that they become warm or\nhot DM~\cite{RTW91}. On the other hand, when the reheating temperature\nis below the decoupling temperature, the axino number density is much\nsmaller than that of photons and its time evolution is well described\nby the Boltzmann equation without backreaction~\cite{CKKR01}.\n\nIn the KSVZ model, at high temperatures, the most important contributions come from two-body\nscatterings of colored particles into an axino and other particles. At\nlower reheating temperatures, on the other hand,\nthe decay of squarks or gluinos can dominate the production of\naxinos~\cite{CRS02,CRRS04}.\nIn Ref.~\cite{CKKR01}, an effective gluon thermal mass was introduced\nto regulate the infrared divergence in the scattering cross-section.\nSubsequently, a more consistent calculation using the hard thermal loop (HTL)\napproximation was applied to the axino production in Ref.~\cite{Steffen04}.\nHowever, the HTL approximation is valid only for small gauge coupling,\n$g\ll 1$, which corresponds to the reheating temperature\n$\treh \gg 10^6\gev$. \nBelow $10^6\gev$ the HTL approximation becomes less \nreliable~\cite{Steffen04}. In fact, the production rate \nbecomes even negative at $g_3\gtrsim 1.2$, and therefore the result becomes\nunphysical. Strumia tried to improve this behaviour by using the full \nresummed finite-temperature propagators for gluons and gluinos in the\nloop~\cite{Strumia10}. This procedure includes axino production via \ngluon decays, which is kinematically allowed by the thermal mass, and \nresults in an enhancement compared to the HTL approximation.\nHowever, his method is gauge dependent at the next-to-leading\norder~\cite{RS06}, indicating that not all the contributions of that\norder are included, and therefore it also does not give a fully\nsatisfactory result in the large gauge coupling regime.
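The quoted correspondence between the size of the gauge coupling and the reheating temperature can be checked directly from the one-loop running of $g_3$. A minimal numerical sketch (Python, for illustration only), assuming the standard MSSM one-loop beta coefficient $b_3=-3$, the input $\alpha_s(M_Z)\simeq0.118$, and identifying the renormalization scale with the temperature:

```python
import math

M_Z = 91.2          # Z boson mass in GeV
ALPHA_S_MZ = 0.118  # strong coupling at M_Z (approximate input)
B3_MSSM = -3.0      # MSSM one-loop beta coefficient for SU(3)

def alpha_s(scale_gev):
    """One-loop MSSM running: 1/alpha(Q) = 1/alpha(M_Z) - b3/(2 pi) ln(Q/M_Z)."""
    inv = 1.0 / ALPHA_S_MZ - B3_MSSM / (2.0 * math.pi) * math.log(scale_gev / M_Z)
    return 1.0 / inv

def g3(scale_gev):
    """Gauge coupling g3 = sqrt(4 pi alpha_s)."""
    return math.sqrt(4.0 * math.pi * alpha_s(scale_gev))

for T in (1e3, 1e4, 1e6, 1e9):
    print(f"T = {T:.0e} GeV:  g3 ~ {g3(T):.2f}")
```

With these inputs $g_3$ drops below unity only around $T\sim10^6\gev$, consistent with the validity range of the HTL expansion quoted above.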
\nFor these reasons, we believe that it is worth pursuing an\nalternative method of computing the rate at large couplings, and\nwe apply the effective mass approximation as the only viable way to evaluate\nthe axino thermal production at large coupling $g$ and low \nreheating temperature $\treh\lesssim 10^4\gev$. \nEven though this method is also not gauge invariant, it captures\nrelevant physical effects like plasma screening and gives\npositive and physical cross-sections for each individual scattering\nprocess.\n\nAs stated earlier, in the previous studies of TP of relic axinos the\ncontributions from $SU(2)_L$ and $U(1)_Y$ gauge interactions were\nneglected, but for completeness\nwe will discuss here all\nSM gauge groups explicitly in order to examine their possible effects.\n\nTo start with, in evaluating the contributions from the strong\ninteractions we will follow the method used previously\nin Ref.~\cite{CKKR01} to obtain the (dominant) axino TP cross-section.\nWe will further update and correct the tables presented there by\nfollowing Ref.~\cite{Strumia10} to include the previously ignored\ndimension-5 axino-gluino-squark-squark interaction term. This term\nchanges only the contribution from the processes H and J in\nTable~\ref{table:channels}, while it does not affect the other terms.\nThe processes B, F, G and H, with gluon $t$- or $u$-channel \nexchange, are infrared-divergent and thus, \nfollowing Ref.~\cite{CKKR01}, are regularized with the inclusion \nof an effective gluon thermal mass in the gluon propagator. \nThis method always gives positive-definite values for the individual\ncross-sections. The full results and the method by which they are\nobtained are described in the Appendix. \nIn Table~\ref{table:channels}, we give for simplicity only the\nfirst two leading terms in the expansion for $s\/m_{\rm eff}^2 \gg 1$.\nHowever, the logarithm in the approximate formulae of Table~\ref{table:channels}\ngives an unphysical negative value for $s< m_{\rm eff}^2$.
Therefore, in the numerical calculation\nwe have used the full formulae listed in the Appendix, which are\npositive definite for all values of $s$.\n\n\nThe total cross-sections $\sigma_n$, where $n=A,\ldots,J$, can\nbe written as\n\dis{\n\sigma_n(s)=\frac{\alpha_s^3}{4\pi^2\n \fa^2}\bar{\sigma}_n (s),\n\label{sigman}\n}\nwhere $\bar{\sigma}_n (s)$ denotes the cross-section averaged over\nspins in the initial state and is given in Table~\ref{table:channels}.\nThe relevant multiplication factors are\nalso listed:\n$n_s$ (the number of initial spin states), $n_F$ (the number of chiral\nmultiplets) and $\eta_i$ (the number density factor, which is $1$ for\nbosons and $3\/4$ for fermions). We assume particles in thermal\nequilibrium to have a (nearly) Maxwell-Boltzmann distribution,\nproportional to $\eta_i $, and neglect Fermi blocking or Bose-Einstein\nenhancement factors, which are close to one at these temperatures. We\nrestrict ourselves to temperatures above the freeze-out temperature of\nthe SM superpartners involved, so that the approximation of thermal\nequilibrium is always satisfied.\footnote{At lower temperatures,\n superpartner number densities, apart from the NLSP one, drop down to\n zero and the NTP regime is reached.} Finally, in\nTable~\ref{table:channels} the group theory factors $f^{abc}$ and\n$T^a_{jk}$ of the gauge group $SU(N)$ satisfy the relations\n$\sum_{a,b,c}|f^{abc}|^2=N(N^2-1)$ and\n$\sum_a\sum_{jk}|T^a_{jk}|^2=(N^2-1)\/2$.\n\nNext we move on to include the contributions from the $SU(2)_L$ and $U(1)_Y$\ngauge interactions.\nThe relevant axino-gaugino-gauge boson and axino-gaugino-sfermion-sfermion\ninteraction terms, in view of~\eq{eq:Leff}, are given by\n\dis{\n{\mathcal L}^{\rm eff}&=i\frac{\alpha_s}{16\pi \fa}\overline{\axino}\gamma_5[\gamma^\mu,\gamma^\nu]\tilde{g}^b\nG^b_{\mu\nu} +\frac{\alpha_s}{4\pi \fa}\overline{\axino}\tilde{g}^a\n\sum_{\tilde{q}}g_s
\tilde{q}^*T^a\tilde{q}\\\n&+i\frac{\alpha_2C_{aWW}}{16\pi \fa}\overline{\axino}\gamma_5[\gamma^\mu,\gamma^\nu]\tilde{W}^b\nW^b_{\mu\nu}\\\n&+\frac{\alpha_2}{4\pi \fa}\overline{\axino}\tilde{W}^a\n\sum_{\tilde{f}_D}g_2 \tilde{f}_D^*T^a\tilde{f}_D\\\n &+ i\frac{\alpha_YC_{aYY}}{16\pi \fa}\overline{\axino} \gamma_5[\gamma^\mu,\gamma^\nu] \tilde{Y}Y_{\mu\nu}\\\n&+\frac{\alpha_Y}{4\pi \fa}\overline{\axino}\tilde{Y}\n\sum_{\tilde{f}}g_Y \tilde{f}^*Q_Y\tilde{f},\n\label{eq:Laxino}\n}\nwhere the terms proportional to $\alpha_2$ come from the $SU(2)_L$ and the\nones proportional to $\alpha_Y$ from the $U(1)_Y$ gauge groups.\n$C_{aWW}$ and $C_{aYY}$ are the model-dependent couplings for\nthe $SU(2)_L$ and $U(1)_Y$\naxino-gaugino-gauge boson anomaly interactions, which are defined after the standard\nnormalization of $\fa$, as in the first line for $SU(3)$, as stated below\n\eq{Leff}. Here $\alpha_2$, $\tilde{W}$, $W_{\mu\nu}$ and $\alpha_Y$,\n$\tilde{Y}$, $Y_{\mu\nu}$ are the gauge couplings, the gaugino fields\nand the field strengths of the $SU(2)_L$ and $U(1)_Y$ gauge groups,\nrespectively.
$\\tilde{f_D}$ represents the sfermions of the\n$SU(2)$-doublet and $\\tilde{f}$ are the sfermions carrying the\n$U(1)_Y$ charge.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c||c|l|c|c|c|} \\hline\nn & Process & \\makebox[30mm][c]{$\\sigmabar_n$} \\hfill & \\footnotesize{$n_s$}\n& $n_F$ & \\footnotesize{$\\eta_1\\eta_2$}\\\\\n\\hline &&&&&\\\\ [-1.3em]\\hline\nA & $ g^a + g^b \\ra \\tilde a + \\tilde g^c $ &\n$\\frac{1}{24}|f^{abc}|^2$ & 4 & 1 & 1\n\\\\ [0.2em]\\hline\nB & $ g^a + \\tilde g^b \\ra \\tilde a + g^c $ &\n\\footnotesize{$\\frac{1}{4}|f^{abc}|^2\\left[\\log\\left(s\/m_{\\rm eff}^2\\right)-\\frac{7}{4}\\right]$} & 4 & 1 &\n$\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nC & $ g^a + \\tilde q_k \\ra \\tilde a + q_j $ &\n$\\frac{1}{8}|T^{a}_{jk}|^2$ & 2 & \\footnotesize{$2N_F$} & 1\n\\\\ [0.2em]\\hline\nD & $ g^a + q_k \\ra \\tilde a + \\tilde q_j $ &\n$\\frac{1}{32}|T^{a}_{jk}|^2$ & 4 & $N_F$ &$\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nE & $ \\tilde q_j + q_k \\ra \\tilde a + g^a $ &\n$\\frac{1}{16}|T^{a}_{jk}|^2$ & 2 & $N_F$ &$\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nF & $ \\tilde g^a + \\tilde g^b \\ra \\tilde a + \\tilde g^c $ &\n\\footnotesize{$\\frac{1}{2}|f^{abc}|^2\\left[\\log\\left(s\/m_{\\rm eff}^2\\right)-\\frac{23}{12}\\right] $} & 4 & 1 &\n$\\frac{3}{4}\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nG & $ \\tilde g^a + q_k \\ra \\tilde a + q_j $ &\n$\\frac{1}{4}|T^{a}_{jk}|^2\\left[\\log\\left(s\/m_{\\rm eff}^2\\right)-2\\right]$ & 4 & $N_F$ &\n$\\frac{3}{4}\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nH & $ \\tilde g^a + \\tilde q_k \\ra \\tilde a + \\tilde q_j $ &\n\\footnotesize{$\\frac{1}{4}|T^{a}_{jk}|^2\\left[\\log\\left(s\/m_{\\rm eff}^2\\right)-\\frac{7}{4}\\right]\\ast$} & 2 &\n\\footnotesize{$2N_F$} &$\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nI & $ q_k + {\\bar q_j} \\ra \\tilde a + \\tilde g^a $ &\n$\\frac{1}{24}|T^{a}_{jk}|^2$ & 4 & \\footnotesize{$\\frac12N_F$} &\n$\\frac{3}{4}\\frac{3}{4}$\n\\\\ [0.2em]\\hline\nJ & $ \\tilde q_k + \\tilde q_j^\\ast \\ra \\tilde a + 
\tilde g^a $ & $\frac{1}{6}|T^{a}_{jk}|^2~\ast$ & 1 &\n$N_F$ & 1\n\\ [0.2em]\hline\n\end{tabular}\caption{\nThe cross-sections for the axino thermal-production\nchannels involving strong interactions. The particle masses are\nneglected except for the plasmon mass $m_{\rm eff}$. The H and J\n entries, marked with an asterisk in the third column, are changed by the inclusion of the\n previously missing term, while the cross-sections or $n_F$ of the other processes\n (A, B, D, E, F, and I) are corrected with respect to those of Ref.~\cite{CKKR01}.\n The logarithm in the approximate formulae in this Table\ngives an unphysical negative value for $s< m_{\rm eff}^2$. Therefore, in the numerical calculation\nwe used the full formulae, which are positive definite for all values of $s$.\n The full cross-sections for the processes \nB, F, G and H are given in the Appendix.}\n\label{table:channels}\n\end{center}\n\end{table}\n\nWe start from Table~\ref{table:channels} and replace\nquark triplets with $SU(2)_L$-doublets with the corresponding group\nfactors. For the abelian $U(1)_Y$ factor, the processes A, B, and F\nvanish and we can replace $|T^a_{jk}|^2$ with the square of the\ncorresponding hypercharges. Finally, we use $N_F=(12,14,11)$\nto count the matter multiplets charged under the\nMSSM gauge groups $SU(3)$, $SU(2)_L$, and $U(1)_Y$, respectively.\nAbove the SUSY-breaking scale we include the full one-loop \nMSSM running of the gauge couplings and gaugino masses.\n\nThe second term for each gauge group in~\eq{eq:Laxino} also generates\nthree-body gaugino decays into an axino and two sfermions, assuming\nthat the gauginos are heavy enough.
The three-body decay rate of the\ngluino is given by\n\dis{\n\Gamma(\gluino^a \rightarrow \axino \tilde q_j\n\tilde{q}^*_k)=\left(\frac{\alpha_s^2\mgluino^3}{128\pi^3\fa^2}\n\right)\left(\frac{\alpha_s}{16\pi}{|T^a_{jk}|^2}\right) G\left(\frac{\msquark^2}{\mgluino^2}\right),\n}\nwhere $\mgluino$ denotes the gluino mass,\n\dis{\nG(x)=&\frac23\sqrt{1-4x}(1+5x-6x^2)\\\n&-4x(1-2x+2x^2)\log\left[\frac{1+\sqrt{1-4x}}{2x}-2 \right],\n}\nand the mass of the axino has been neglected.\nHowever, the three-body decay is suppressed by an additional power of\nthe gauge coupling constant and is kinematically allowed only when the \ngluino mass is larger than the sum of the two final-state squark masses.\nTherefore, the gluino three-body decay through the second term in~\eq{eq:Laxino} is\nsubdominant to the two-body decay.\n\nAs stated in Ref.~\cite{CRS02}, an effective dimension-4 axino-quark-squark\ncoupling can be generated at the loop level also in the KSVZ model.\nHere we take into account this effective Yukawa interaction, which\nappears at the\ntwo-loop level in the KSVZ models and at tree level (with a tiny mixing)\nin the DFSZ models~\cite{CRS02},\n\dis{\n{\mathcal L}_{\axino\psi\tilde{\psi}}=g_{\rm\n eff}^{L\/R}\tilde{\psi}^{L\/R}_j \bar{\psi}_jP_{R\/L}\gamma^5\tilde{a},\n\label{Ldim4}\n}\nwhere $\psi_j$ and $\tilde{\psi}_j$ denote the SM fermions and their\nsuperpartners.\n\nIn the KSVZ class of models, the effective coupling comes predominantly from the\nlogarithmically divergent part of the gluon-gluino-quark loop term and is\nproportional to $\mgluino$~\cite{CRS02},\n\dis{\ng_{\rm eff}^{L\/R}\simeq \mp\frac{\alpha_s^2}{\sqrt2\n \pi^2}\frac{\mgluino}{\fa}\log\left(\frac{\fa}{\mgluino} \right).\n\label{ggq-loop}\n}\nSubleading terms have not yet been computed and may give a correction\nof the order of $20-30$\%, in analogy with what has recently been\nobtained in Ref.~\cite{Wyler09} for the effective
tau-stau-axino coupling.\n\nIn the DFSZ models there exists also a tree-level axino-quark-squark\ncoupling which is proportional to the mass of the quark~\\cite{Chun:2011zd},\ncoming from the $c_2$ interaction term in~\\eq{eq:DFSZc2all},\n\\begin{equation}\ng_{\\rm eff}^{L\/R}= \\mp i \\frac{m_q}{\\fa}\\frac{v_{d,u}^2}{v^2}=\n\\mp i \\frac{m_q}{\\fa} \\left\\{\n\\begin{array}{c}\n\\cos^2\\beta \\cr\n\\sin^2\\beta,\n\\end{array} \\right.\n\\label{ggq-tree}\n\\end{equation}\nwhere the upper row relates to the up-type quarks and the lower row to\nthe down-type quarks. These tree-level couplings are always smaller\nthan the one-loop ones for the light generations, but not for the\nthird one. In fact for the top quark, the tree-level coupling\ndominates if $\\tanb \\lsim 4$ for the gluino mass of $700\\gev$, while\nthe bottom quark tree-level coupling dominates if $\\tanb \\gsim 1$\nfor the same choice of the gluino mass.\nNote that, at low reheating temperatures, only the top-stop-axino\ncoupling is important for axino thermal production. This is because\nthe lighter stop is usually the lightest colored superpartner and\nremains in equilibrium to rather low temperatures, even below the EWSB scale. Similarly, there\nexists an effective tau-stau-axino vertex, which was first obtained\nin Ref.~\\cite{CRRS04} and more recently re-derived in Ref.~\\cite{Wyler09} in a\nfull two-loop computation. This coupling is smaller and not important\nfor thermal production, but it is important for the non-thermal\nproduction when the stau is the NLSP.\n\nIn the DFSZ models, there is also a tree-level axino-Higgs-higgsino\ncoupling~\\cite{Bae:2011jb}; compare the second term in \\eq{eq:DFZSsuperpotential}, \\dis{\n g_{\\rm eff}=\\frac{4\\mu}{\\sqrt2 \\fa}, } where $\\mu\\sim\nf_sV_a^2\/\\mplanck$ and $\\fa=2V_a$. 
It contributes to axino TP through\nhiggsino decays in thermal equilibrium, and can be comparable to that\nfrom squark decays through \\eq{ggq-loop} and \\eq{ggq-tree}, or even\nbecome much larger if $\\mu$ is larger than the top quark mass.\nThe axino production due to this coupling in DFSZ models\nhas been recently investigated also in Ref.~\\cite{Chun:2011zd}.\nNote that the Yukawa coupling and the axino-higgsino mixing\ncontained in the neutralino mass matrix also give rise to\nadditional scattering channels contributing to the axino production,\nbut we will neglect them here since such dimension-4 scatterings are usually less important\nthan the decays~\\cite{CRS02}.\n\\begin{figure}[!t]\n \\begin{center}\n \\begin{tabular}{c}\n \\includegraphics[width=0.6\\textwidth]{Y_TR.eps}\n \\end{tabular}\n \\end{center}\n \\caption{Thermal axino yield $Y_\\axino^{\\rm TP}$ as a function of\n the reheating temperature $\\treh$ from strong interactions using\n the effective mass approximation (black). We use the\n representative values of $\\fa=10^{11}\\gev$ and\n $m_{\\squark}=\\mgluino=1\\tev$. For comparison, we also show the HTL\n approximation (dotted blue\/dark grey) and that of Strumia (green\/light grey).\n We also denote the yield from squark (solid green\/light grey) and \n gluino decay (dotted red), as well as out-of-equilibrium bino-like \n neutralino decay (dashed black). Here we used the interactions in \n \\eq{eq:Laxino} and \\eq{ggq-loop} for the KSVZ model.\n We use the same definition of reheating temperature\n in the instantaneous reheating approximation for the three methods.}\n\\label{fig:YTR}\n\\end{figure}\n\\begin{figure}[t]\n \\begin{center}\n \\begin{tabular}{c}\n \\includegraphics[width=0.6\\textwidth]{Y_TR_all.eps}\n \\end{tabular}\n \\end{center}\n \\caption{Thermal axino yield $Y_\\axino^{\\rm TP}$ as a function of\n$\\treh$ from each of the SM gauge groups. 
Here, we have used $C_{aWW}=C_{aYY}=1$.\n The lines at high $T_R$ are not perfectly parallel due to the running\n of the gauge couplings, which affects the $SU(3)$ yield more\n strongly and in the opposite direction to the other gauge groups.}\n\label{fig:YTR_all}\n\end{figure}\n\begin{figure}[tbh!]\n \begin{center}\n \begin{tabular}{c}\n \includegraphics[width=0.6\textwidth]{Y_TR_KSVZ.eps}\n \end{tabular}\n \end{center}\n \caption{Thermal axino yield $Y_\axino^{\rm TP}$ as a function of\n $\treh$ for two specific KSVZ models: $Q_{\rm em}=0$ ($C_{aYY}=0$) and\n $Q_{\rm em}=2\/3$ ($C_{aYY}=8\/3$), and for a DFSZ model with the $(d^c,e)$\n unification~\cite{Kim:1998va}, for which we used $\mu=200\gev$ and the\n higgsino mass $m_{\tilde{h}}=200\gev$. The horizontal lines show the\n values of the axino mass for which the corresponding axino abundance gives\n the correct DM relic density.}\n\label{fig:YTR_KSVZ}\n\end{figure}\n\nWe have evaluated the thermal production of axinos numerically and present\nthe results in Figures~\ref{fig:YTR} and ~\ref{fig:YTR_all} for the representative\nvalues of $\fa=10^{11}\gev$ and $\msquark=\mgluino=1\tev$. \nWe do not consider here the dependence on the superpartner masses; see, however, Ref.~\cite{CRS02}.\nFor different values\nof $\fa$ the curves move down or up in proportion to $1\/\fa^2$.\nIn Fig.~\ref{fig:YTR}, we show the axino yield $Y$ (where $Y\equiv\nn\/s$ is the ratio of the number density to the entropy density) from\nstrong interactions in the KSVZ model.\nOur result obtained with the effective mass approximation is\nshown with the solid black line.\nCompared to the previous plot in Ref.~\cite{CKKR01}, the inclusion of the\nsquark decay changes the plot at low reheating temperature, while\nthe other new squark interactions do not have any noticeable\neffect.
There is a factor of 3 difference in the abundance at high reheating temperature compared to that \n in Figure 2 of Ref.~\cite{CKKR01}, which was due to a numerical error at\n that time and was corrected later. \nFor comparison, the axino yield from scatterings using the\nHTL approximation~\cite{Steffen04} is plotted with the blue (dashed) line\nand Strumia's result~\cite{Strumia10} is shown with the green line.\n\nFor $\treh\gtrsim 10^4\gev$ the axino abundance using the effective\nmass approximation increases consistently with that of Strumia. We found\nthat the difference between the two prescriptions is of the order of a factor of three.\n In principle we could reabsorb this difference into the definition of \nthe effective gluon mass at high temperature, which in our scheme \ncannot be determined self-consistently. With this tuning we could \nmatch the perturbative result at high temperature.\nOn the other hand, doing this would require a gluon thermal mass smaller \nthan the expression $ \sim g T$ used in our calculations. Hence we prefer \ninstead to consider this factor as an estimate of the theoretical \nerror of using the effective mass approximation at high\nreheating temperatures. We assume that this error \ndoes not increase for reheating temperatures less than $10^4\gev$\nand we apply the effective mass approximation to the DFSZ model.\n\n\nFor lower temperatures the contributions from the decays of squarks\nand gluinos in the thermal plasma, which were not included\nin Ref.~\cite{Strumia10}, start playing some role. We mark those in\nFig.~\ref{fig:YTR} with green solid and red dashed curves,\nrespectively.
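The qualitative behaviour of these decay contributions (Boltzmann-suppressed for $\treh$ well below the decaying particle's mass and saturating once $\treh$ exceeds it, i.e. "freeze-in") can be reproduced with a toy Boltzmann integration. The following Python sketch is purely illustrative (toy mass, width, and a constant $g_*$; it is not the calculation behind the figures):

```python
import math

M_PL = 2.4e18    # reduced Planck mass in GeV
G_STAR = 228.75  # MSSM relativistic degrees of freedom, taken constant here

# Precomputed grid for K1(x) = int_0^inf exp(-x cosh t) cosh t dt
_N, _TMAX = 2000, 10.0
_DT = _TMAX / _N
_COSH = [math.cosh(i * _DT) for i in range(_N + 1)]

def bessel_k1(x):
    """Modified Bessel function K1 via its integral representation (trapezoid rule)."""
    total = 0.0
    for i, c in enumerate(_COSH):
        w = 0.5 if i in (0, _N) else 1.0
        total += w * math.exp(-x * c) * c
    return total * _DT

def decay_yield(t_reh, m, width, g_dof=2.0):
    """Toy freeze-in yield from decays of a bath particle of mass m:
    Y(Treh) = int dT gamma_dec / (s H T), with the equilibrium decay density
    gamma_dec = (g m^2 T / 2 pi^2) * width * K1(m/T)."""
    n = 200
    t_min = min(m, t_reh) / 50.0  # production is negligible below this
    log_min, log_max = math.log(t_min), math.log(t_reh)
    dlog = (log_max - log_min) / n
    y = 0.0
    for i in range(n + 1):
        T = math.exp(log_min + i * dlog)
        w = 0.5 if i in (0, n) else 1.0
        gamma_dec = g_dof * m * m * T / (2.0 * math.pi ** 2) * width * bessel_k1(m / T)
        s = 2.0 * math.pi ** 2 / 45.0 * G_STAR * T ** 3   # entropy density
        hubble = 1.66 * math.sqrt(G_STAR) * T * T / M_PL  # Hubble rate
        y += w * gamma_dec / (s * hubble * T) * T * dlog  # dT = T dlog
    return y

m, width = 1000.0, 1e-12  # toy squark-like mass (GeV) and partial width (GeV)
for t_reh in (1e2, 1e3, 1e4, 1e5):
    print(f"Treh = {t_reh:.0e} GeV:  Y ~ {decay_yield(t_reh, m, width):.3e}")
```

The yield rises steeply around $\treh\sim m$ and is essentially flat above it, which is the shape of the decay curves in Fig.~\ref{fig:YTR} at low $\treh$.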
It is known that, at reheating temperatures above the\nsuperpartner masses, scattering diagrams involving dimension-4\noperators are usually subdominant relative to those coming from\ndimension-5 operators and to decay terms, and are negligible.\nThe decays also do not give a significant contribution to the TP of axinos,\napart from at very low $\treh$~\cite{CRS02}, and this is confirmed in\nFig.~\ref{fig:YTR}.\nWhile all the above contributions are generated\nby strong interactions only, for comparison we show also (as a \nblack dashed line) the relative contribution from an\nout-of-equilibrium bino-like neutralino decay to an\naxino and a photon, originally considered in Ref.~\cite{CKR00}.\nIt is clear that NTP is only important at very low $\treh$, well below\nthe squark or gluino masses.\n\nIn Fig.~\ref{fig:YTR_all}, we show the contribution to the axino yield\nin thermal production from each SM gauge group interaction. Here we\nset the coefficients $C_{aWW}=1$ and $C_{aYY}=1$ as a normalization.\nAs shown in the figure, the contributions from scatterings due to the\n$SU(2)_L$ and $U(1)_Y$ couplings (blue dotted and green solid lines,\nrespectively) are significantly suppressed compared to that of\n$SU(3)_c$ (red dashed), by a factor of 10 or more. This is because the\ninteraction between axinos and gauge bosons is proportional to the\ngauge coupling squared, so that the cross-section scales as $\sigma\propto\n\alpha^3$. Thus it would be only for very large (and perhaps\nunnatural) values of the effective couplings $C_{aWW}, C_{aYY} $ that\nthese channels could become comparable to the QCD contribution. \nTo give an order-of-magnitude estimate of these effects,\nwe included the $SU(2)_L$ and $U(1)_Y$ contributions with normalized\ncouplings in Fig.~\ref{fig:YTR_all}.
For different values of $ C_{aWW} $ and\n$C_{aYY} $ the curves move up and down accordingly.\nWe note that in general SUSY breaking effects in the leptonic\nsector may modify the couplings considered here.\nThe situation here is different from the case of gravitino production,\nsince the interactions of the gravitino with the three gauge groups are\nof the same order: the spin-3\/2 gravitino component couples in fact\nuniversally, while the goldstino component couples proportionally to the\ngaugino masses. \nWe therefore conclude that the QCD contribution is\nstrongly dominant in the KSVZ models, and so the axino production at\nhigh $\treh$ is practically model independent, as long as the number of\nheavy PQ charged states can be absorbed into the definition of $\fa$.\n\nHowever, at low $\treh$ the thermal production from scatterings\nbecomes strongly suppressed by the Boltzmann factor. In the region\nwhere $\treh \lsim 100-1000\gev$, axino production due to the decays\nof gauginos, squarks or neutralinos becomes important. Actually,\nthe lightest neutralino decay via the $U(1)_Y$ coupling becomes dominant in\nthe very low reheating temperature regime, since the number density of\nthe heavier colored particles becomes strongly suppressed by the\nBoltzmann factor there. On the other hand, the neutralinos, depending\non their composition and the supersymmetric spectrum, can freeze out\nwith a still substantial number and then also give rise to non-thermal\naxino production, as we have seen in Fig.~\ref{fig:YTR}.
This\ncontribution to the axino yield is usually more important than the one\ndue to neutralino decays in equilibrium, which is proportional to\n$C_{aYY} $ and typically below $10^{-12}$.\n\nIn the DFSZ case, instead, the role of the QCD interaction\nis played by the $SU(2)$ interaction, and the dominant\ndecay term above the EW symmetry-breaking scale is the higgsino one\ninstead of the squark one.\n\nOur results for the total thermal production yield for both the KSVZ-\nand DFSZ-type models can be seen in \nFigure~\ref{fig:YTR_KSVZ}.\footnote{A similar figure for the DFSZ model\n is given in Ref.~\cite{Bae:2011jb}.}\nThere we show the KSVZ model with different values of the\n$ C_{aYY}$ coupling. In the case of non-zero $C_{aYY}$ the\ncontribution from neutralino decay in equilibrium can be\nseen around $ T_{R} \sim 10 $ GeV, although it is strongly suppressed.\nMoreover, we also give the yield for the DFSZ model in solid green, for\n$\mu=200\gev$ and the higgsino mass $m_{\tilde{h}}=200\gev$.\nWe can see that even for relatively small $\mu$, axino production\nfrom higgsino decay dominates over the one from the anomaly terms\nfor reheating temperatures $\treh\lesssim 10^6\gev$. The importance of the axino-higgsino-Higgs\ncoupling in the DFSZ model was\nrecently discussed in Ref.~\cite{Chun:2011zd} and our result is consistent\nwith that analysis. The abundance is so large\nthat the CDM density can be reached with an axino mass as small as\n$100\kev$, independently of the reheating temperature.\footnote{\nSuch a general effect due to decaying particles in equilibrium has been\nrecently called ``freeze-in'' in Ref.~\cite{Hall:2009bx} and discussed for\nthe axino in Ref.~\cite{Cheung:2011mg}. The freeze-in mechanism was\nalready included in gluino and squark decays to axinos in the\nplasma in Ref.~\cite{CRS02}.}\nIn this range of reheating temperatures, the axino production\n from decays dominates that from scatterings.
Therefore, the use of \nthe effective mass approximation or another IR screening prescription \nin the scattering processes is irrelevant for the axino production in \nthe DFSZ model in the range of reheating temperatures where the decays dominate.\nFor higher $\treh$, the $SU(2)_L$ anomaly term starts dominating\nand the abundance is proportional to $ T_R$ as in the KSVZ case,\nbut with a smaller coefficient.\nIn the same figure, we also mark horizontal lines\ncorresponding to the axino mass giving the correct DM relic density\nfor the given relic abundance of $Y_{\axino}$.\n\n\n\begin{figure}[t]\n \begin{center}\n \begin{tabular}{c}\n \includegraphics[width=0.6\textwidth]{TR_ma_KSVZ_fa_11.eps}\n \end{tabular}\n \end{center}\n \caption{$\treh$ versus $\maxino$ for $\fa=10^{11}\gev$ in the KSVZ\n models. The bands\n between like curves correspond to the correct relic density of axino\n DM with both TP and NTP included. To parametrize the\n non-thermal production of axinos we used $Y_{\rm NLSP}=0$ (I),\n $10^{-10}$ (II), and $10^{-8}$ (III). The upper right-hand area of\n the plot is excluded because of the overabundance of axinos. The\n regions disallowed by structure formation are marked with vertical\n blue dashed lines and arrows for, respectively, TP\n ($\maxino\lsim5\kev$, see text below \eq{v0}) and NTP ($\maxino\lsim30\mev$, with a\n neutralino NLSP). }\n\label{fig:TR_ma}\n\end{figure}\n\n\begin{figure}[t]\n \begin{center}\n \begin{tabular}{c}\n \includegraphics[width=0.6\textwidth]{TR_ma_KSVZ_fa_5_9.eps}\n \end{tabular}\n \end{center}\n \caption{The same as Fig.~\ref{fig:TR_ma} but with the PQ scale $\fa=5\times10^9\gev$. }\n\label{fig:TR_ma2}\n\end{figure}\n\n\subsection{Non-thermal production}\n\nAs stated above, axinos can be produced non-thermally in decays of the NLSP after it has\nfrozen out of equilibrium.
This NTP mechanism is dominant for reheating temperatures\nbelow the masses of the gluino and squarks~\cite{CKR00,CKKR01}.\nIn this case, the axino abundance is independent of the reheating\ntemperature as long as the temperature is high enough for the NLSP\nto thermalize before freeze-out. The axino relic density from\nNTP is simply given by\n\dis{\n\omegaantp=\frac{\maxino}{\mnlsp}\Omega_{\rm NLSP}\n\simeq2.7\times10^{10}\bfrac{\maxino}{100\gev}Y_{\rm NLSP}.\n}\nClearly, in order to produce a substantial NTP population of axinos,\nthe NLSP must itself have an energy density larger than the present\nDM density.\n\n\begin{figure}[t]\n \begin{center}\n \begin{tabular}{c}\n \includegraphics[width=0.6\textwidth]{TR_ma_DFSZ_fa_11.eps}\n \end{tabular}\n \end{center}\n \caption{The same as Fig.~\ref{fig:TR_ma} but with the DFSZ model used in Fig.~\ref{fig:YTR_KSVZ}. }\n\label{fig:TR_ma_DFSZ}\n\end{figure}\n\n\begin{figure}[t]\n \begin{center}\n \begin{tabular}{c}\n \includegraphics[width=0.6\textwidth]{TR_ma_DFSZ_fa_5_9.eps}\n \end{tabular}\n \end{center}\n \caption{The same as Fig.~\ref{fig:TR_ma_DFSZ} but with the PQ scale $\fa=5\times10^9\gev$. }\n\label{fig:TR_ma_DFSZ2}\n\end{figure}\n\nTo see if such production is sufficient to give a dominant DM component, we need to know the\nyield of NLSPs after they have frozen out of the thermal plasma.\nFor the neutralino NLSP yield, the relevant processes include pair-annihilation\nand co-annihilation with the charginos, next-to-lightest neutralinos and\nsleptons.
For the stau NLSP, the yield is determined by the\nstau-stau annihilation and stau-neutralino co-annihilation processes.\nA typical relic abundance is\n\dis{\nY_\chi \simeq (1\sim10)\times10^{-12}\bfrac{m_\chi}{100\gev},\n\label{eq:Yneutralino}\n}\nfor a bino-dominated neutralino, and\n\dis{\nY_{\tilde\tau} \simeq 0.01\times10^{-12}\bfrac{m_{\tilde\tau}}{100\gev},\n\label{eq:Ystau}\n}\nfor the stau. Note that in the latter case the NTP can produce\na sufficient axino abundance to explain the whole DM density\nonly for stau masses above $ 1.9\tev $, which may thermalize only\nat correspondingly higher temperatures.\n\nThese two choices for the NLSP were considered in the Constrained\nMinimal Supersymmetric Standard Model (CMSSM) in Ref.~\cite{CRRS04},\nfor $\fa < 10^{12}\gev $, for which even the stau lifetime is of order\n$1 \mbox{s}$, or less. Recently, the case of the stau NLSP, including\nfour-body hadronic decays, was discussed in Ref.~\cite{Wyler09} also\nfor larger values of $ \fa$.\n\nIn conclusion, for the neutralino NLSP, which decays mainly into an axino\nand a photon or a $Z$-boson, the NTP production is usually more\nefficient.
For the stau NLSP, which can decay to an axino and a\ntau-lepton through a coupling of the type given in \eq{Ldim4}, the\ncontribution is smaller, but can still be substantial.\n\n\nRegarding other NLSPs, colored relics (or even a wino-like neutralino\nif it is lighter than $\sim1.8\tev$) usually remain in thermal\nequilibrium for so long that their number density after freeze-out becomes\nnegligible, and therefore they cannot produce any substantial axino\npopulation~\cite{Berger:2008ti, Covi:2009bk}.\n\n\n\section{Cosmological constraints}\label{sec:CosBounds}\n\n\subsection{The relic density of dark matter}\n\nFor the total axino DM relic density, we apply the $3\sigma$ range\nderived from WMAP-7 data~\cite{Komatsu:2010fb}\n\dis{ 0.109 < \abunda\n <0.113.\n\label{eq:wmap7_3sigma} }\nThis produces a stripe in the parameter space and also\nplays the role of an upper bound on the relic density when there are\nadditional DM components, e.g.\ the axion.\n\nThis can be seen in Figs.~\ref{fig:TR_ma} and~\ref{fig:TR_ma2}, where\nwe present the reheating temperature versus the axino mass plane for\n$\fa=10^{11}\gev$ and $5\times10^9\gev$, respectively. We have\nincluded both the thermal and the non-thermal production contributions of\naxinos, the latter assuming $Y_{\rm NLSP}=0$ (black solid), $10^{-10}$\n(green solid), and $10^{-8}$ (red dashed), denoted also as (I), (II), and\n(III), respectively. A typical stau or neutralino yield after freeze-out\nwill lie between (I) and (III). A correct relic\ndensity of axinos, in the range given by \eq{eq:wmap7_3sigma},\ncorresponds to the thin bands between like curves. The parameter space\nabove the curves is excluded as it gives too large a relic abundance.\nSimilar figures for the DFSZ model are presented in Figs.~\ref{fig:TR_ma_DFSZ}\nand~\ref{fig:TR_ma_DFSZ2}, which can be compared to the figures in Ref.~\cite{Bae:2011jb}.
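The interplay between these typical yields, the NTP relic-density formula of the previous section, and the WMAP window can be checked with a few lines. The Python sketch below uses only the approximate relations quoted in the text (the $2.7\times10^{10}$ prefactor and the yield estimates), so the resulting numbers are indicative only:

```python
WMAP_LO, WMAP_HI = 0.109, 0.113  # 3-sigma range used in the text

def omega_ntp(m_axino_gev, y_nlsp):
    """NTP relic density: Omega h^2 ~ 2.7e10 (m_axino / 100 GeV) Y_NLSP."""
    return 2.7e10 * (m_axino_gev / 100.0) * y_nlsp

def y_stau(m_gev):
    """Typical stau yield, Y ~ 0.01 x 10^-12 (m / 100 GeV)."""
    return 0.01e-12 * (m_gev / 100.0)

def m_axino_for_wmap(y_nlsp, target=0.111):
    """Axino mass (GeV) that puts the NTP abundance at the target Omega h^2."""
    return 100.0 * target / (2.7e10 * y_nlsp)

def m_stau_min():
    """Smallest stau mass (GeV) for which NTP alone can reach the WMAP window:
    the axino can be at most as heavy as the stau NLSP itself."""
    m = 100.0
    while omega_ntp(m, y_stau(m)) < WMAP_LO:
        m += 1.0
    return m

# For a neutralino yield of 1e-11, the WMAP window is hit by a ~40 GeV axino:
print(f"m_axino ~ {m_axino_for_wmap(1e-11):.0f} GeV for Y_NLSP = 1e-11")
print(f"stau NLSP can saturate the DM density only above ~{m_stau_min():.0f} GeV")
```

The stau threshold comes out at about $2\tev$, consistent at this level of approximation with the $1.9\tev$ figure quoted above.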
\n\n\n\\subsection{Nucleosynthesis}\n\nAn injection of high-energy electromagnetic and hadronic particles\nduring or after the Big Bang Nucleosynthesis (BBN) epoch may disrupt the\nabundances of light elements. For axino DM, the lifetime of the NLSP,\nsuch as the neutralino or the stau, is typically around $1\\second$, or less,\nand therefore constraints from the BBN are weak. However, for longer\nlifetimes such constraints become\nimportant~\\cite{Jedamzik:2004er,kkm04}. This leads to an upper bound\non the decay products of the NLSP for a given NLSP lifetime. For the\nstau NLSP, a bound state with ${}^4{\\rm He}$ severely constrains its\nlifetime to be less than roughly $5\\times\n10^3\\second$~\\cite{Pospelov:2006sc} (although in specific cases with\nthe gravitino as DM, this can be up to an order of magnitude\nlarger~\\cite{bcjr09}). However, for the parameters considered in\nour study, i.e. $ \\fa < 10^{12}\\gev $, the BBN constraint can be\navoided due to the short lifetime of the NLSP. For larger values of\n$\\fa$, on the other hand, non-trivial constraints arise, especially\nfor the stau NLSP, as recently discussed in Ref.~\\cite{Wyler09}.\n\n\n\\subsection{Structure formation}\n\nThe density perturbation due to the axino population is suppressed at scales below\ntheir free-streaming length. When the axino mass is larger than\n${\\mathcal O}(\\kev)$, thermally produced axinos become\ncold~\\cite{CKKR01}. However, the non-thermal population of axinos from\nNLSP decays can still have a large velocity dispersion and can be too\nwarm. Lyman-$\\alpha$~\\cite{Boyarsky:2008xj} and\nreionization data~\\cite{Jedamzik:2005sx} give a bound on the velocity of\nthe WDM component and its fraction in the DM density.
A recent\nanalysis using the SDSS Lyman-$\\alpha$ data~\\cite{Boyarsky:2008xj}\nleads to an upper limit on the average velocity, $\\langle v \\rangle_{\\rm\n WDM}< 0.013\\, \\rm{km\/s}$ for pure WDM, or otherwise in the case of\nmixed cold\/warm DM the WDM fraction is constrained to be $\\Omega_{\\rm\n WDM}\/\\Omega_{\\rm DM} < 0.35$ in the larger velocity region.\n\nThe present velocity of thermally produced axinos is given\nby~\\cite{Barkana:2001gr}\n\\dis{\nv_0\\simeq 0.065\\, {\\rm km\/s}\\, \\bfrac{1\\kev}{\\maxino}.\n\\label{v0}\n}\nTherefore the above Lyman-$\\alpha$ data implies $\\maxino \\gtrsim 5\\kev$\nfor TP axinos. This lower bound is marked in Figs.~\\ref{fig:TR_ma}\nand~\\ref{fig:TR_ma2} with a vertical blue dashed line and an arrow.\n\nThe free-streaming velocity of axinos produced\nnon-thermally can be obtained from the lifetime of the NLSP and the\nmass relations~\\cite{Jedamzik:2005sx,CKKR01,CRRS04}. For the\nbino-like neutralino NLSP with $C_{\\rm aYY}=8\/3$, we find\n\\dis{\nv_0\\simeq 0.4\\, {\\rm km\/s}\\,\n\\bfrac{\\maxino}{1\\mev}^{-1}\\bfrac{m_\\chi}{100\\gev}^{-1\/2}\\bfrac{\\fa}{10^{11}\\gev},\n}\nand for the stau NLSP,\n\\dis{\nv_0\\simeq 2\\, {\\rm km\/s}\\,&\n\\bfrac{\\maxino}{1\\mev}^{-1}\\bfrac{m_{\\tilde \\tau}}{100\\gev}^{1\/2}\\\\\n&\\times\\bfrac{m_\\chi}{100\\gev}^{-1}\\bfrac{\\fa}{10^{11}\\gev}.\n}\n\nFor NTP axinos and for the neutralino NLSP we find a lower limit on the axino\nmass $\\maxino \\gtrsim 30 \\mev$ and $1.5\\mev$ when\n$\\fa=10^{11}\\gev$ and $5\\times10^9\\gev$, respectively. These limits\nare marked in Figs.~\\ref{fig:TR_ma} and~\\ref{fig:TR_ma2} with a\nvertical blue dashed line and an arrow.
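The lower bounds just quoted follow from imposing the pure-WDM Lyman-$\\alpha$ limit on the velocity formulas above. A minimal numeric cross-check (plain Python; the rounding of the quoted bounds is ours):

```python
# Cross-check of the free-streaming lower bounds quoted in the text,
# obtained by imposing the Lyman-alpha limit <v>_WDM < 0.013 km/s
# on the present-day velocity formulas above.

V_LYA = 0.013  # km/s, pure-WDM limit from the SDSS Lyman-alpha data

# TP axinos: v0 = 0.065 km/s * (1 keV / m_axino)
m_tp_kev = 0.065 / V_LYA
print(round(m_tp_kev, 1))  # -> 5.0, i.e. m_axino >~ 5 keV

# NTP axinos, bino-like neutralino NLSP with m_chi = 100 GeV:
# v0 = 0.4 km/s * (1 MeV / m_axino) * (f_a / 1e11 GeV)
for fa_gev in (1e11, 5e9):
    m_ntp_mev = 0.4 * (fa_gev / 1e11) / V_LYA
    print(round(m_ntp_mev, 1))  # -> 30.8 MeV and 1.5 MeV
```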
For the stau NLSP the analogous lower\nbounds are $\\maxino \\gtrsim 150\\mev$ and $7.5\\mev $, respectively.\nWe stress that these bounds apply solely if the population of axinos\nproduced through NTP is substantial ($\\gsim20-30\\%$).\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\begin{tabular}{c}\n \\includegraphics[width=0.6\\textwidth]{ms_ma_KSVZ_A.eps}\n \\end{tabular}\n \\end{center}\n \\caption{Contours of constant reheating temperature in the\n NLSP--axino mass plane. Here we have assumed $Y_{\\rm\n NLSP}=10^{-12}\\left({\\mnlsp}\/{100\\gev}\\right)$, typical of\n neutralino NLSP, and taken $\\fa=10^{11}\\gev$. The cyan wedge in the\n upper right-hand corner is excluded by the overdensity of DM,\n while in the red wedge below it the axino is not the LSP. }\n\\label{fig:ms_ma_A}\n\\end{figure}\n\\begin{figure}[t]\n \\begin{center}\n \\begin{tabular}{c}\n \\includegraphics[width=0.6\\textwidth]{ms_ma_KSVZ_B.eps}\n \\end{tabular}\n \\end{center}\n \\caption{The same as Fig.~\\ref{fig:ms_ma_A} except for\n$Y_{\\rm NLSP}=10^{-14}\\left({\\mnlsp}\/{100\\gev}\\right)$, typical of stau NLSP.\n }\n\\label{fig:ms_ma_B}\n\\end{figure}\n\n\nIn Figs.~\\ref{fig:ms_ma_A} and ~\\ref{fig:ms_ma_B}, we show contours of\nthe reheating temperature in the plane spanned by the NLSP and the\naxino mass.~\\footnote{Similar figures are shown in Ref.~\\cite{Wyler09}.} In Fig.~\\ref{fig:ms_ma_A} we have fixed $\\fa=10^{11}\\gev$\nand assumed $Y_{\\rm NLSP}=10^{-12}\\left({\\mnlsp}\/{100\\gev}\\right)$,\nwhich is a typical value for bino-like neutralino NLSP. The cyan wedge in the\nupper right-hand corner is excluded by the overdensity of DM, while in\nthe red wedge below it the axino is not the LSP. 
In\nFig.~\\ref{fig:ms_ma_B} instead $Y_{\\rm\n NLSP}=10^{-14}\\left({\\mnlsp}\/{100\\gev}\\right)$ has been assumed,\ntypical of stau NLSP.\nSo long as the TP dominates, the curves of constant $\\treh $ remain\nvertical and practically independent of $\\maxino $, while as soon as\nthe NTP becomes important, $\\maxino $ dependence arises leading to\nnon-vertical curves. For example, with a bino-like neutralino NLSP of\n$100\\gev $, as in Fig.~\\ref{fig:ms_ma_A}, the NTP contributes only up\nto 10 \\% of the axino LSP DM density. For stau NLSP with small\nabundance $Y\\lsim 10^{-13}$, as given in~\\eq{eq:Ystau}, NTP is\nalways subdominant so that the contours remain vertical. In this case\nthe bounds coming from the free-streaming velocity of the NTP axinos\nare absent.\n\n\n\\section{Conclusion}\\label{sec:Concl}\n\nWe have performed an updated analysis of the relic axino production,\ntaking into account some new calculations that have appeared after the\ninitial study~\\cite{CKR00,CKKR01}, especially~\\cite{Steffen04,Strumia10}\nfor the thermal part, and compared them with our own results\nto explore the question of uncertainty and model-dependence.\n\nWe have found that the uncertainty has not really decreased after the latest\ncalculation~\\cite{Strumia10}. This is probably not surprising since\nthe QCD coupling is large and the convergence of the perturbative\nseries is quite slow. 
Comparing the different results, we estimate the\nuncertainty in the relic density of thermally produced axinos to\nstill be of order a factor of 10 at $T_{R}\\sim 10^4 $ GeV; to\nthis one must also add possible (unknown) contributions from\nnon-perturbative effects.\nOur result lies above both estimates given in\nRefs.~\\cite{Steffen04,Strumia10}, and this seems natural since we\nincluded subleading terms in $m_{\\rm eff}^2\/s$, which do indeed\nincrease the cross-sections for single channels, ensuring their\npositivity in the whole range of integration, even in the limit\nof very large gauge coupling.\n\n\nRegarding the model dependence, our conclusions are more optimistic:\nin KSVZ-type models the QCD anomaly term strongly dominates the axino\nthermal production mechanism, apart from the case of small reheating\ntemperatures where sparticle decay contributions start playing the dominant\nrole. The inclusion of the additional anomaly couplings is completely\nnegligible, except for unnaturally large values of the coefficients\n$C_{aWW}$ and $C_{aYY}$.\nFor DFSZ-type models instead, the Yukawa interaction dominates\naround the weak scale and can provide the main production mechanism of\naxinos, making it independent of the reheating temperature.\nTherefore the axino abundance is free from the uncertainty in the method\nused to regulate the IR divergence.\nAt large temperatures, it is the EW anomaly term that dominates, giving\na lower abundance than in the KSVZ case.\n\nFor both models, the non-thermal production via NLSP decay can also\nproduce the required axino DM density, if the NLSP decouples with a\nsufficiently large abundance. But for this mechanism to dominate, the\nreheating temperature has to be very low and the axino and NLSP\nmasses not too hierarchical.
We find that, interestingly enough, a\n light bino NLSP decaying out of equilibrium can still produce the\n whole DM density at the cost of a very low reheating temperature (but\n sufficiently high for NLSP thermalization). For the stau case\n instead, it is quite unlikely that the NTP can dominate, unless the\n stau NLSP yield after freeze-out is unusually large.\n\n\n\\medskip\n\nDuring the final stages of completion of this work, the analysis of\nRef.~\\cite{Bae:2011jb} appeared, which discusses axino couplings in detail\nand finds a non-trivial momentum dependence in the one-particle irreducible\none-loop axino-gluon-gluino couplings. The coefficient $ C_{1PI} $ of\nthese interactions is suppressed when the external particle momentum\nis much larger than the mass $M$ of the PQ-charged fermions in the loop.\nDue to this effect, the authors claim a suppression of order $ M^2\/T^2 $\nof the axino production from the dimension-5 operators for the DFSZ\ncase and for extremely small KSVZ quark masses (with Yukawa\ncouplings less than $10^{-5}$, \\ie for a heavy quark mass less than\n$10^6 \\gev$ for $f_a\\simeq 10^{11}\\gev$).\n\nWe investigated this suppression by inserting their $ C_{1PI} $\ncoupling in the relevant diagrams and obtain different suppressions\ndepending on the type of Feynman graph, with a strong dependence on\nthe IR regulator involved, in most cases the gluon thermal mass.\nIn particular, we find no suppression at all for the $t$-channel gluon\nexchange for vanishing gluon mass.\nSince graphs with the one-loop $ C_{1PI} $ coupling and a gluon thermal mass\ninsertion arise at lowest order at two loops, a full\ninvestigation of the two-loop diagrams in thermal field theory is\nprobably needed to resolve this issue.\n\nNote in any case that even without suppression, the DFSZ axino\nproduction is dominated by the decay term up to temperatures of the\norder of $ 10^{6-7} $ GeV.
For the KSVZ case, we show in Fig.~\\ref{TR-mQ}\nas violet lines how the yield changes according to\nRef.~\\cite{Bae:2011jb} for small heavy quark masses, $m_Q=10^6 \\gev$\nand $m_Q=10^5 \\gev$. The {\\em Cold} DM axino, on which our present study is\nbased, is practically not affected.\nFor large fermion masses in the loop, $ M > T $, our anomalous\ncouplings coincide fully with the $ C_{1PI} $ in\nRef.~\\cite{Bae:2011jb} and our results are in perfect agreement.\n\n\n\\begin{figure}[!t]\n \\begin{center}\n \\begin{tabular}{c}\n \\includegraphics[width=0.6\\textwidth]{TR_ma_KSVZ_fa_11_mQ.eps}\n \\end{tabular}\n \\end{center}\n \\caption{The same as Fig.~\\ref{fig:TR_ma} but showing as well\n in magenta the yield suppression found in\n Ref.~\\cite{Bae:2011jb} for different values of the heavy\n quark masses. }\n\\label{TR-mQ}\n\\end{figure}\n\n\n\n\\medskip\n\\acknowledgments{The authors would like to thank the Galileo Galilei\n Institute and its Program \"Dark Matter: Its nature, origins and\n prospects for detection\" for hospitality when this work was\n initiated. KYC is in part supported by the Korea Research Foundation\n Grant funded by the Korean Government (KRF-2008-341-C00008 and\n No. 2011-0011083). KYC acknowledges the Max Planck Society (MPG),\n the Korea Ministry of Education, Science and Technology (MEST),\n Gyeongsangbuk-Do and Pohang City for the support of the Independent\n Junior Research Group at the Asia Pacific Center for Theoretical\n Physics (APCTP). LC would like to thank the CERN Theory Division for\n hospitality in the context of the TH-Institute DMUH'11 (19-29 July 2011)\n during the completion of this work. JEK is supported in\n part by the National Research Foundation (NRF) grant funded by the\n Korean Government (MEST) (No. 2005-0093841).
\n LR is partially supported by the Welcome Programme of the Foundation for Polish\n Science, by the Lancaster, Manchester and Sheffield Consortium for Fundamental Physics under STFC grant ST\/J0000418\/1, and by the EC 6th Framework Programme MRTN-CT-2006-035505.}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLearning to use the piano pedals strongly relies on listening to nuances in the sound. Instructions with respect to when the pedal should be pressed and for what duration are required to develop critical listening. To facilitate the learning process, we pose a research question: ``Can a computer point out pedalling techniques when a piano recording from a virtuoso performance is given?'' Pedalling techniques change very specific acoustic features, which can be observed from their spectral and temporal characteristics on isolated notes. However, their effects are typically obscured by the variations in pitch, dynamics and other elements in polyphonic music. Therefore, automatic detection of pedalling techniques using hand-crafted features is a challenging problem. Given enough labelled data, deep learning models have shown the ability of learning hierarchical features. If these features are able to represent acoustic characteristics corresponding to pedalling techniques, the model can serve as a detector.\n\nIn this paper, we focus on detecting the technique of the sustain pedal, which is the most frequently used one among the three standard piano pedals. All dampers are lifted off the strings when the sustain pedal is pressed. This mechanism helps to sustain the current sounding notes and allows strings associated to other notes to vibrate due to coupling via the bridge. A phenomenon known as \\textit{sympathetic resonance} \\cite{morfey2001dictionary} is thereby enhanced and embraced by pianists to create a ``dreamy'' sound effect. 
We can observe how the phenomenon reflects on the melspectrogram in Figure \\ref{fig:intro}, where note \\textit{F4} is played without (first) and with (second) the sustain pedal in two bars respectively. Note that the symbol under the second bar of the music score in Figure \\ref{fig:intro} can be used to indicate the sustain-pedal techniques. Yet, even if pedal notations are provided, pedalling in the same piano passage can be executed in many different ways. Playing techniques are typically adjusted to the performer's sense of tempo, dynamics, as well as the location where the performance takes place \\cite{rosenblum1993pedaling}.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.85\\columnwidth]{intro.pdf}\n\\caption{Different representations of the same note played without (first note) or with (second note) the sustain pedal, including music score, melspectrogram and messages from MIDI or sensor data.}\n\\label{fig:intro}\n\\end{figure}\n\nGiven that detecting pedalling nuances from the audio signal alone is a rather challenging task \\cite{goebl2008sense}, several measurement systems have been developed to capture the pedal movement. For instance, the Yamaha Disklavier piano can encode this movement into MIDI messages (0-127) along with note events. A dedicated system proposed in \\cite{liang2018measurement} enables synchronously recording the pedalling gestures and the piano sound. This can be deployed on common acoustic pianos, and it is used to provide the ground truth dataset introduced in Section \\ref{sec:dataset}. \n\nDetection of pedalling techniques from audio recordings is necessary in the cases where installing sensors on the piano is not practical. We approach the sustain-pedal detection from the audio domain using transfer learning \\cite{goodfellow2016deep} as illustrated in Figure \\ref{fig:framework}. 
Transfer learning exploits the knowledge gained during training on a source task and applies this to a target task \\cite{pan2010survey}. This is crucial for our case, where the target-task data is obtained from recordings of a different piano, therefore it is difficult to learn a ``good'' representation due to mechanical and acoustical deviations. In our source task, a convolutional neural network (denoted by \\texttt{convnet} hereafter) is trained for distinguishing synthesised music excerpts with or without the sustain-pedal effect. The \\texttt{convnet} is then used as a feature extractor, aiming to transfer the sustain-pedal effect learned from the source task to the target task. Support vector machines (SVMs) \\cite{suykens1999least} are trained using the frame-wise \\texttt{convnet} features from the acoustic piano recordings to finalise the feature representation transfer as the target task. SVMs can be used as a classifier to localise which frames are played with the sustain pedal. The performance is expected to improve significantly with the new feature representation. To sum up, the main contributions of this paper are:\n\\begin{enumerate}\n \\item A novel strategy of model design, which incorporates knowledge of piano acoustics and physics, enabling the \\texttt{convnet} to become more effective in representing the sustain-pedal effect.\n \n \\item A transfer learning method that allows the \\texttt{convnet} trained from the source task to be adapted to the target task, where the recording instruments and room acoustics are different. 
This also allows effective learning with a smaller dataset.\n \\item Finally, we conduct visual analysis on the convolutional layers of the \\texttt{convnet} to promote model designs with fewer trainable parameters, while maintaining their discriminating power.\n\\end{enumerate}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.85\\columnwidth]{framework.pdf}\n\\caption{Framework of the proposed method.}\n\\label{fig:framework}\n\\end{figure}\n\nThe rest of this paper is organised as follows. We first introduce related work in Section \\ref{sec:relatedwork}. The process of database construction is described in Section \\ref{sec:dataset}. The methods of sustain-pedal detection including \\texttt{convnet} design and transfer learning are discussed in Section \\ref{sec:method}. Experiments and results are presented in Section \\ref{sec:experiment}. We finally conclude our work in Section \\ref{sec:conclusion}.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\nPast research in music information retrieval (MIR) abounds in recognition of musical instruments, but automatic detection of instrumental playing techniques (IPT) remains underdeveloped \\cite{lostanlen2018extended}. IPT creates a variety of spectral and temporal variations of the sounds in different instruments. Recent research has attempted to transcribe IPT on drums \\cite{wu2018review}, erhu \\cite{yang2017filter}, guitar \\cite{su2014sparse,chen2015electric} and violin \\cite{li2015analysis,perez2015indirect}. Hand-crafted features are commonly designed based on instrument acoustics to capture the salient variations induced by IPT. The sustain-pedal technique leads to rather subtle variations; therefore, most studies managed to detect the technique based on isolated notes only \\cite{lehtonen2007analysis,badeau2008piano,liang2017detection}. This challenge is further intensified in polyphonic music where clean features extracted from isolated notes cannot be easily obtained.
In our prior work \\cite{liang2018legato}, the first research aiming to extract pedalling technique in polyphonic piano music, we proposed a method for detecting pedal onset times using a measure of sympathetic resonance. Yet, this method assumes that the specific acoustic piano used in the evaluation can be modelled. Moreover, it is prone to errors due to its reliance on note transcription.\n\nConvolutional Neural Networks (CNNs) have been used to boost the performance in MIR tasks, with the ability to efficiently model temporal features \\cite{pons2017designing} and timbre representations \\cite{pons2017timbre}. We choose CNNs to facilitate learning time-frequency contexts related to the sustain pedal, using synthesised excerpts in pairs (\\textit{pedal} versus \\textit{no-pedal} versions). Using this method, contexts that are invariant to large pitch and dynamics changes can be learned.\n\nTo apply a \\texttt{convnet} trained on the synthesised data in the context of real recordings, a transfer learning approach can be used. It has been gaining more attention in MIR for alleviating the data sparsity problem and for its ability to be used across different tasks. For example, Choi et al. \\cite{choi2017transfer} obtained features from CNNs, which were trained for music tagging in the source task. These features outperformed MFCC features in the target tasks, such as genre and vocal\/non-vocal classification. We believe such a strategy is suited to the challenges in detecting the sustain pedal from polyphonic piano music recorded in different acoustic and recording conditions.\n\nIn our case, training a \\texttt{convnet} with the synthesised data is considered the source task.
Then in the target task, we can use the learnt representations from the trained \\texttt{convnet} as features, which are extracted from every frame of a real piano recording, to train a dedicated classifier adapted to the actual acoustics of the piano and the performance venue used in the recording. This transfer learning approach is expected to better identify frames played with the sustain pedal. For the dedicated classifier in the target task, we opt for SVM instead of multi-layer perceptron because SVM can greatly reduce the training time and yield better generalisation in classification tasks~\\cite{osowski2004mlp}. In Section~\\ref{sec:tt}, compared with fine-tuning the last layer of the pre-trained \\texttt{convnet}, transfer learning with SVM trained using the activations of multiple layers also achieves better performance.\n\n\\section{Dataset}\n\\label{sec:dataset}\nFor the source task, \\textit{pedal} and \\textit{no-pedal} versions of music excerpts are required to train a \\texttt{convnet}, which is able to highlight the spectral or temporal characteristics that change with the sustain pedal instead of note events. For this reason, 1392 MIDI files publicly available from the Minnesota International Piano-e-Competition website\\footnote{\\url{http:\/\/www.piano-e-competition.com}} were downloaded. They were recorded using a Yamaha Disklavier piano from the performance of skilled competitors. To render these MIDI files into high quality audio, the Pianoteq 6 PRO\\footnote{\\url{https:\/\/www.pianoteq.com\/pianoteq6}} software was used. This physically modelled virtual instrument approved by Steinway \\& Sons can export audio using models of different instruments and recording conditions. We employed the Steinway Model D grand piano instrument and the close-miking recording mode. Audio with or without sustain-pedal effect was then generated with a sampling rate of 44.1 kHz and a resolution of 24 bits. 
These were rendered while preserving or removing the sustain-pedal message in the MIDI data. For each \\textit{pedal}-version audio, we can obtain the temporal regions when the sustain pedal is \\textit{on} or \\textit{off} by thresholding the MIDI message at 64 given its range of [0,127]. A pedalled segment is determined to start at a pedal onset (where the pedal state changes from \\textit{off} to \\textit{on}) and finish when the state returns to \\textit{off}. We can clip all the pedalled segments to form the \\textit{pedal} excerpts. The start and end times of the pedalled segments were also used to obtain \\textit{no-pedal} excerpts from the corresponding \\textit{no-pedal}-version of the audio.\n\nThese music excerpts were derived from pieces by 84 different composers from Baroque to the Modern period. Their durations distribute between 0.3 and 2.3 seconds. To prepare fixed-length data for training, excerpts that are shorter or longer than 2 seconds were repeated or trimmed to create a 2-second excerpt. Considering the large size of our dataset, we randomly took a thousand samples from the excerpts of each composer. In total, 62424 excerpts form a smaller dataset\\footnote{There are less than a thousand excerpts for some of the composers. The excerpts were sampled in pairs such that the ratio of \\textit{pedal} and \\textit{no-pedal} excerpts is 1:1.}. This also helps to compare \\texttt{convnet} of different architectures in a more efficient way, since the training time can be significantly reduced.\n\nFor the target task, the dataset consists of ten well known passages of Chopin's piano music. A pianist was asked to perform the passages using a Yamaha baby grand piano situated in the MAT studios at Queen Mary University of London. The audio were recorded at 44.1 kHz and 24 bits using the spaced-pair stereo microphone technique with a pair of Earthworks QTC40 omnidirectional condenser microphones positioned about 50 cm above the strings. 
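The clipping of pedalled segments described earlier in this section can be sketched in plain Python. This is a minimal sketch under the assumption that a sustain-pedal MIDI value at or above 64 counts as pressed; the (time, value) pairs stand in for the pedal control-change messages of a \textit{pedal}-version MIDI file:

```python
def pedalled_segments(cc_events, threshold=64):
    """Return (start, end) times during which the sustain pedal is on.
    cc_events: time-ordered (time, value) pedal messages, value in [0, 127].
    A value at or above the threshold counts as 'pressed' (an assumption;
    the text only states that the MIDI message is thresholded at 64)."""
    segments, onset = [], None
    for t, v in cc_events:
        if v >= threshold and onset is None:
            onset = t                    # pedal state: off -> on (pedal onset)
        elif v < threshold and onset is not None:
            segments.append((onset, t))  # on -> off closes a pedalled segment
            onset = None
    return segments

print(pedalled_segments([(0.0, 0), (0.5, 100), (1.7, 10), (2.0, 90), (3.1, 0)]))
# -> [(0.5, 1.7), (2.0, 3.1)]
```

The resulting (start, end) pairs can then be used to clip the \textit{pedal} excerpts and the matching \textit{no-pedal} excerpts from the two audio renderings.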
The positions were kept constant during the recording. Meanwhile, movement of the sustain pedal was recorded along with the audio with the help of the measurement system proposed in \\cite{liang2018measurement}. The audio data were annotated with frame-wise \\textit{on} or \\textit{off} labels as the ground truth, representing whether the sustain pedal was pressed or released in each audio frame. The occurrence counts of the labels in each passage are presented in Table \\ref{tab:tt}. It can be observed that the sustain pedal was frequently used for the interpretation of Chopin's music.\n\n\\section{Method}\n\\label{sec:method}\n\n\\subsection{CNN for binary classification}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{method.pdf}\n\\caption{Details of the \\texttt{convnet} architecture and a schematic of feature extraction procedures during transfer learning.}\n\\label{fig:method}\n\\end{figure}\nGiven our large training data consisting of excerpts arranged in \\textit{pedal}\/\\textit{no-pedal} pairs, binary classification was chosen as a source task. This enabled the \\texttt{convnet} to focus on variations in the nuances on sound played with\/without the sustain pedal, while invariant to other musical elements such as pitch and loudness. Considering that the use of the sustain pedal can have effects on every piano string, this could lead to changes that affect the entire spectrum, i.e., take place at a global level. Therefore representations that reveal finer details, such as short-time Fourier transform (STFT), may become inefficient for training. The melspectrogram is a 2D representation that approximates human auditory perception through aggregating STFT bins along the frequency axis. This computationally efficient input has been shown to be successful in MIR tasks such as music tagging \\cite{choi2016automatic}. 
For the above reasons, we consider melspectrogram an adequate input representation.\n\nInspired by \\textit{Vggnet} \\cite{simonyan2014very} which has been found to be effective in music classification \\cite{choi2016convolutional}, our \\texttt{convnet} model uses a similar architecture with fewer trainable parameters to learn the differences in time-frequency patterns in \\textit{pedal} versus \\textit{no-pedal} cases. The model consists of a series of convolutional and max-pooling layers, which are followed by one fully-connected layer with two softmax outputs. The architecture we propose to start with, and the related hyperparameters are summarised in Figure \\ref{fig:method}, where ($c$, ($m$, $n$)) correspond to \\textit{(channel, (kernel lengths in frequency, time))} specifying the convolutional layers. Pooling layer is specified by \\textit{(pooling length in frequency, time)}. \n\nIt was noted in \\cite{pons2017timbre} that designing filter shapes within the first layer can be motivated by domain knowledge in order to efficiently learn musically relevant time-frequency contexts with spectrogram-based CNNs. To decide ($m$, $n$) of the first layer yielding the best representational power, we selected their values motivated by piano acoustics and physics, which can substantially change the sustain-pedal effect. Performance of \\texttt{convnet} with different filter shapes within the first layer were evaluated using the validation set as discussed in Section \\ref{sec:st}. Apart from the common small-square filter shape, the shapes we experimented with are either wider rectangles in the time domain to model short time-scale patterns, or in the frequency domain to fit spectral contexts.\n\nIn every convolutional layer, batch normalisation was used to accelerate convergence. 
The output was then passed through a Rectified Linear Unit (ReLU) \\cite{nair2010rectified}, followed by a max-pooling layer to prevent the network from over-fitting and to be invariant to small shifts in time and frequency. To further minimise over-fitting, global average pooling was used before the final fully-connected layer. The final layer used softmax activation in order to map the output to the range [0,1], which can be interpreted as a likelihood score of the presence of the sustain pedal in the input. We trained \\texttt{convnet} with the Adam optimiser \\cite{kingma2014adam} to minimise binary cross entropy. \n\nThere are possibilities that simpler model architecture, i.e., with fewer channels or convolutional layers, would be sufficient for our binary classification task using reduced parameters. We explored the effect of number of channels and layers in Section \\ref{sec:st}. The best performing \\texttt{convnet} model was selected to ensure the features extracted from it can accurately represent the acoustic effects when the sustain pedal is used.\n\n\n\\subsection{Transfer Learning}\n\\label{sec:tf}\nWhen the detection aims at real piano recordings, relying on the output from the trained \\texttt{convnet} may be inadequate. This is because our \\texttt{convnet} was trained solely on synthesised excerpts in pairs. Only the hierarchical features representing acoustic characteristics when the sustain pedal of a virtual piano is played in the specified recording environment can be learned. It has been well understood that piano sounds can be varied by brands, and also affected by room acoustics and recording conditions. Such differences could bring more variations to the sustain-pedal effect. These serve as motivations for the proposed transfer learning, which could extract the hierarchical knowledge (specialised features) from the \\texttt{convnet}. 
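For concreteness, the source-task \texttt{convnet} outlined above can be sketched in Keras roughly as follows; the input shape, the per-block pooling sizes and the kernels of the later layers are our assumptions, with the exact hyperparameters given in Figure \ref{fig:method}:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convnet(input_shape=(128, 173, 1), channels=21, n_layers=4):
    """Sketch of the pedal/no-pedal convnet: conv -> batch norm -> ReLU
    -> max-pooling blocks, then global average pooling and a 2-way softmax.
    Input shape (mel bands x frames x 1) and pooling sizes are assumptions."""
    inp = layers.Input(shape=input_shape)
    x = inp
    for i in range(n_layers):
        # first layer: a time-elongated kernel, e.g. (3, 10)
        kernel = (3, 10) if i == 0 else (3, 3)
        x = layers.Conv2D(channels, kernel, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(2, activation='softmax')(x)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```

The (3, 10) first-layer kernel corresponds to the best-performing time-elongated configuration of \texttt{convnet-time} in Table \ref{tab:convnet}.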
The knowledge is then used as features to train a dedicated classifier for detecting the sustain pedal of a specific piano in real scenarios.\n\nThe activations of each intermediate layers were sub-sampled using average pooling and then concatenated into the final \\texttt{convnet} features as demonstrated in Figure \\ref{fig:method}. Here average pooling can summarise the global statistics and reduce the size of feature maps to a vector of length associated to the value of $c$. In the end, a $c \\times 4$ dimensional feature vector was generated since there are 4 convolutional layers in the \\texttt{convnet}. For the brevity of this paper, the effects of using various strategies for layer-wise feature combination are not discussed.\n\nTo identify which audio frames were played with the sustain pedal, we can use SVMs to classify the frame-wise \\texttt{convnet} features into pedal \\textit{on} or \\textit{off} states. SVMs were chosen first because we assume the features extracted from the carefully-trained model in the source task should be representative and separable. Second, the SVM algorithm was originally devised for classification problems, involving finding the maximum margin hyperplane that separates two classes of data and has been shown ideal for such a task \\cite{duda2000pattern}. This allows us to focus on the quality of learnt features. SVMs were trained using a supervised learning method in the target task, where the detection was done on acoustic piano recordings. \n\nAs shown in Section \\ref{sec:tt}, the proposed transfer learning method overall outperformed the case of using the pre-trained \\texttt{convnet} output directly. 
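The layer-wise average pooling and concatenation described above reduces each convolutional layer's activation maps to one value per channel, so that $c=21$ channels and 4 layers give an 84-dimensional frame feature. A minimal numpy sketch (the array shapes are illustrative):

```python
import numpy as np

def pool_and_concat(layer_activations):
    """Average-pool each conv layer's activations over frequency and time,
    then concatenate: list of (freq, time, channels) arrays -> (c * n_layers,)."""
    return np.concatenate([a.mean(axis=(0, 1)) for a in layer_activations])

# Toy example: 4 layers with c = 21 channels each -> 84-dim feature vector
rng = np.random.default_rng(0)
acts = [rng.random((16, 20, 21)) for _ in range(4)]
print(pool_and_concat(acts).shape)  # -> (84,)

# Frame-wise features X and on/off labels y from the real recordings then
# train the dedicated classifier, e.g. sklearn.svm.SVC().fit(X, y)
```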
It also provided better performance than using the pre-trained \\texttt{convnet} with a fine-tuned last layer, which is a common approach to transfer learning.\n\n\\section{Experiment}\n\\label{sec:experiment}\n\n\\begin{table}[t]\n\\centering\n\\caption{Performance of different \\texttt{convnet} models.}\n\\label{tab:convnet}\n\\begin{tabular}{|p{0.33\\columnwidth}|M{0.13\\columnwidth}|M{0.13\\columnwidth}|M{0.13\\columnwidth}| }\n\\hline\n\\textbf{Model} & ($\\boldsymbol{m}$, $\\boldsymbol{n}$) & \\textbf{Accuracy} & \\textbf{AUC}\\\\\n\\hline\n\\texttt{convnet-baseline} & (3, 3) & 0.9755 & \\textbf{0.9963}\\\\\n\\hline\n\\multirow{3}{*}{\\texttt{convnet-frequency}} & (9, 3) & 0.9630 & 0.9905\\\\\\cdashline{2-4}\n& (20, 3) & 0.9751 & 0.9956\\\\\\cdashline{2-4}\n& (45, 3) & 0.9747 & \\textbf{0.9968}\\\\\\hline\n\\multirow{3}{*}{\\texttt{convnet-time}} & (3, 10) & 0.9815 & \\textbf{0.9973}\\\\\\cdashline{2-4}\n& (3, 20) & 0.9787 & 0.9972\\\\\\cdashline{2-4}\n& (3, 30) & 0.9816 & 0.9971\\\\\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[t]\n\\centering\n\\caption{Performance of different models based on \\texttt{convnet-multi}.}\n\\label{tab:convnetmulti}\n\\begin{threeparttable}\n\\begin{tabular}{|M{0.26\\columnwidth}|M{0.075\\columnwidth} : M{0.075\\columnwidth} |M{0.13\\columnwidth}|M{0.13\\columnwidth}| }\n\\hline\n\\textbf{\\texttt{convnet-multi}} & $\\boldsymbol{c}$ & $\\boldsymbol{l}$ & \\textbf{Accuracy} & \\textbf{AUC}\\\\\n\\hline\n\\multirow{8}{*}{\\parbox{0.25\\columnwidth}{Models with Reduced Parameters}} & 3 & 2 & 0.8781 & 0.9486\\\\\\cdashline{2-5}\n& 12 & 2 & 0.9389 & 0.9804\\\\\\cdashline{2-5}\n& 21 & 2 & 0.9552 & 0.9890\\\\\\cdashline{2-5}\n& 3 & 3 & 0.9436 & 0.9849\\\\\\cdashline{2-5}\n& 12 & 3 & 0.9708 & 0.9948\\\\\\cdashline{2-5}\n& 21 & 3 & 0.9741 & 0.9960\\\\\\cdashline{2-5}\n& 3 & 4 & 0.9513 & 0.9870\\\\\\cdashline{2-5}\n& 12 & 4 & 0.9762 & 0.9964\\\\\\hline\nOriginal Model & 21 & 4 & \\textbf{0.9837} & 
\\textbf{0.9983}\\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n \\small\n \\item \\textit{Note:} $l$ denotes the number of convolutional layers. \n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}\n\n\\begin{table*}[t]\n\\centering\n\\caption{Performance of the two methods in the target task.}\n\\label{tab:tt}\n\\begin{tabular}{|p{0.22\\columnwidth}|M{0.13\\columnwidth}:M{0.13\\columnwidth}|M{0.13\\columnwidth}:M{0.13\\columnwidth}:M{0.13\\columnwidth}|M{0.13\\columnwidth}:M{0.13\\columnwidth}:M{0.13\\columnwidth}|}\n\\hline\n\\multirow{2}{*}{\\textbf{Music Passages}} & \\multicolumn{2}{c|}{\\textbf{Occurrence Counts}} & \\multicolumn{3}{c|}{\\textbf{Retrain Last Layer Only}} & \\multicolumn{3}{c|}{\\textbf{Transfer Learning with SVM}}\\\\\n\\cdashline{2-9}\n & \\textit{on} & \\textit{off} & $P_1$ & $R_1$ & $F_1$ & $P_1$ & $R_1$ & $F_1$\\\\\\hline\n Op.10 No.3 & 849 & 268 & 0.7615 & 0.9965 & 0.8633 & 0.8457 & 0.9941 & \\textbf{0.9139} \\\\\\hdashline\n Op.23 No.1 & 722 & 355 & 0.6670 & 0.8573 & 0.7503 & 0.8643 & 0.9349 & \\textbf{0.8982} \\\\\\hdashline\n Op.28 No.4 & 995 & 322 & 0.7569 & 0.9698 & 0.8502 & 0.8148 & 0.9859 & \\textbf{0.8922} \\\\\\hdashline\n Op.28 No.6 & 788 & 289 & 0.7357 & 0.9607 & 0.8332 & 0.8178 & 0.9569 & \\textbf{0.8819} \\\\\\hdashline\n Op.28 No.7 & 291 & 66 & 0.8217 & 0.8866 & 0.8529 & 0.8971 & 0.8385 & \\textbf{0.8668} \\\\\\hdashline\n Op.28 No.15 & 611 & 306 & 0.6659 & 0.9329 & 0.7771 & 0.8412 & 0.9624 & \\textbf{0.8977} \\\\\\hdashline\n Op.28 No.20 & 783 & 274 & 0.7405 & 0.9949 & 0.8490 & 0.7849 & 0.9974 & \\textbf{0.8785} \\\\\\hdashline\n Op.66 & 660 & 197 & 0.7720 & 0.9439 & 0.8494 & 0.9425 & 0.9439 & \\textbf{0.9432} \\\\\\hdashline\n Op.69 No.2 & 591 & 186 & 0.7622 & 0.9272 & 0.8366 & 0.9649 & 0.7902 & \\textbf{0.8688} \\\\\\hdashline\n B.49 & 1111 & 441 & 0.7091 & 0.9172 & 0.7998 & 0.8175 & 0.9919 & \\textbf{0.8963} \\\\\n \\hline\n \\textbf{Average} & 740 & 270 & 0.7392 & 0.9387 & 0.8262 & 0.8591 & 0.9396 & 
\\textbf{0.8938} \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{chopin-Fmicro.png}\n\\caption{Overall performance of the three methods in the target task.}\n\\label{fig:microf1}\n\\end{figure*}\n\nIn our experiments, melspectrograms with 128 mel bands were extracted from the excerpts to serve as input to the network. The processing was done in real time on the GPU using \\textit{Kapre} \\cite{choi2017kapre}, which simplifies audio preprocessing and saves storage. Time-frequency transformation was performed using a 1024-point FFT with a hop size of 441 samples (10 ms). The \\textit{Keras} \\cite{chollet2015keras} and \\textit{Tensorflow} \\cite{tensorflow2015-whitepaper} frameworks were used for the implementation.\n\n\\subsection{Source Task}\n\\label{sec:st}\nThe 62424 excerpts were split 80\\%\/20\\% to form the training and validation sets. Models were trained until the validation accuracy no longer improved for 10 epochs. Batch size was set to 128 examples. To examine which \\texttt{convnet} model can best discriminate \\textit{pedal} versus \\textit{no-pedal} excerpts, we compared the best AUC-ROC scores (or simply AUC, the Area Under the Receiver Operating Characteristic curve) on the validation set. \n\nAs introduced in Section \\ref{sec:method}, we focused on the filter shape ($m$, $n$) of the first layer. Models with the following ($m$, $n$) were trained:\n\\begin{itemize} \n\\item As a baseline: (3, 3) (hereafter designated as \\texttt{convnet-baseline}).\n\\vspace{5pt}\n\\item For modelling larger frequency contexts: (9, 3), (20, 3), (45, 3) (collectively denoted by \\texttt{convnet-frequency}). These values of kernel length in frequency were motivated by piano acoustics and physical structure, which fundamentally determine how the sustain-pedal effect sounds at notes of different registers. 
Since the mel scale was used, (9, 3) covers at least 283 Hz, which approximately corresponds to the frequency of note \\textit{C4}, a split point between bass and treble. Accordingly, (20, 3) and (45, 3) can be mapped to notes \\textit{D5} and \\textit{G6} respectively. The \\textit{stress bar} near the strings of \\textit{D5} separates the piano frame into different regions. The strings associated with notes higher than \\textit{G6} are always free to vibrate because there are no dampers above these strings.\n\\vspace{5pt}\n\\item Finally, for modelling larger time contexts: (3, 10), (3, 20), (3, 30), covering 100, 200 and 300 ms respectively (collectively denoted by \\texttt{convnet-time}).\n\\end{itemize}\n\nThe number of channels ($c$) was set to 21 for all convolutional layers. Table \\ref{tab:convnet} presents the accuracy and AUC scores of the above models obtained on the validation set. According to the best AUC scores of \\texttt{convnet-frequency} and \\texttt{convnet-time} respectively, we selected (45, 3) and (3, 10) along with (3, 3) to create another model with multiple filter shapes (\\texttt{convnet-multi}). To be specific, the first convolutional layer of \\texttt{convnet-multi} consisted of (7, (45, 3)), (7, (3, 10)) and (7, (3, 3)). Its outputs were then concatenated along the channel dimension. The best accuracy and AUC scores were achieved by \\texttt{convnet-multi}, i.e., 0.9837 and 0.9983. \n\nNote that all the models above obtained AUC scores higher than 0.99, owing to the relative simplicity of the classification task. To examine whether the same level of performance can be obtained with fewer trainable parameters, we trained models similar to \\texttt{convnet-multi} but with fewer channels and convolutional layers. According to the results in Table \\ref{tab:convnetmulti}, the original \\texttt{convnet-multi} remains the model with the highest AUC score. 
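The channel concatenation in the first layer of \texttt{convnet-multi} can be illustrated with a naive NumPy sketch. Random kernels and a random melspectrogram patch stand in for the learned weights and real input, and CNN ``convolution'' is implemented as zero-padded cross-correlation, as is conventional in deep learning frameworks:

```python
import numpy as np

def conv2d_same(x, kernels):
    """Naive zero-padded 'same' cross-correlation: x (H, W), kernels (k, m, n) -> (k, H, W)."""
    k, m, n = kernels.shape
    H, W = x.shape
    xp = np.zeros((H + m - 1, W + n - 1))
    xp[(m - 1) // 2:(m - 1) // 2 + H, (n - 1) // 2:(n - 1) // 2 + W] = x
    out = np.empty((k, H, W))
    for i in range(H):
        for j in range(W):
            out[:, i, j] = (kernels * xp[i:i + m, j:j + n]).sum(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
mel = rng.standard_normal((128, 20))  # hypothetical melspectrogram patch (128 mel bands)
# the three branches of convnet-multi: 7 channels each with shapes (45,3), (3,10), (3,3)
branches = [rng.standard_normal((7, m, n)) for m, n in [(45, 3), (3, 10), (3, 3)]]
multi = np.concatenate([conv2d_same(mel, ker) for ker in branches], axis=0)
assert multi.shape == (21, 128, 20)   # 7 + 7 + 7 = 21 channels
```

Because `same' padding preserves the time-frequency grid, the three 7-channel branches stack into 21 channels, matching the value of $c$ used for the remaining layers.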
Therefore, it was selected as the final model of the source task to serve as a feature extractor in the following target task.\n\n\n\\subsection{Target Task}\n\\label{sec:tt}\nIn the target task, a sliding window was applied to the acoustic piano recordings in order to extract features from the trained \\texttt{convnet-multi} model at every frame, as introduced in Section \\ref{sec:tf}. The window covers a duration of 0.3 seconds with a hop size of 0.1 seconds. The 0.3-second samples were then tiled to 2 seconds and transformed into melspectrograms such that the input size was consistent with that of the source task. The extracted features were used to train the SVM constructed with \\textit{Scikit-learn} \\cite{pedregosa2011scikit}.\n\nThe experiment was conducted using {\\em leave-one-group-out} cross-validation, where samples were grouped by music passage. The performance of the proposed transfer learning method was validated on each music passage, where the frame-wise features needed to be classified by the SVM into pedal \\textit{on} or \\textit{off}, while the remaining passages constituted the training set. The SVM parameters were optimised using grid search based on the validation results. A radial basis function kernel was used; its bandwidth and the penalty parameter were selected from the ranges below:\n\n\\begin{itemize} \n\\item bandwidth: [$1\/2^3$, $1\/2^5$, $1\/2^7$, $1\/2^9$, $1\/2^{11}$, $1\/2^{13}$, $1\/\\textit{feature vector dimension}$]\n\\item penalty parameter: [0.1, 2.0, 8.0, 32.0]\n\\end{itemize} \n\nWe compared the proposed transfer learning method with detection using a fine-tuned \\texttt{convnet-multi} model, which serves as a baseline classifier. Here ``fine-tuning'' refers to retraining only the fully-connected layer of \\texttt{convnet-multi}, which is commonly considered a basic transfer learning technique. 
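The hyperparameter grid above is small enough to enumerate exhaustively. A sketch of its construction, where the feature dimension of $21 \times 4 = 84$ follows from the number of channels and convolutional layers described in Section \ref{sec:tf}:

```python
from itertools import product

feature_dim = 21 * 4  # c = 21 channels x 4 convolutional layers
bandwidths = [1.0 / 2 ** e for e in (3, 5, 7, 9, 11, 13)] + [1.0 / feature_dim]
penalties = [0.1, 2.0, 8.0, 32.0]

# every (bandwidth, penalty) pair is evaluated within each leave-one-group-out fold
param_grid = list(product(bandwidths, penalties))
assert len(param_grid) == len(bandwidths) * len(penalties)  # 7 x 4 = 28 settings
```

With Scikit-learn this search corresponds roughly to a grid search over `SVC(kernel='rbf')` with its `gamma` (bandwidth) and `C` (penalty) parameters.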
Within each cross-validation fold, the fully-connected layer was updated until the accuracy stopped increasing for 10 epochs. We then obtained the fine-tuned \\texttt{convnet-multi} outputs from short-time sliding windows over the melspectrogram of the validation passage.\n\nGiven the frame-wise \\textit{on}\/\\textit{off} results for every music passage, we calculated precision ($P_1$), recall ($R_1$) and F-measure ($F_1$) with respect to the label \\textit{on}. They are defined as:\n\\begin{equation*}\nP_1 = \\frac{N_{tp}}{N_{tp}+N_{fp}}, \nR_1 = \\frac{N_{tp}}{N_{tp}+N_{fn}}, \nF_1 = 2 \\times \\frac{P_1 \\times R_1}{P_1+R_1},\n\\end{equation*}\nwhere $N_{tp}$, $N_{fp}$ and $N_{fn}$ are the numbers of true positives, false positives and false negatives respectively. \n\nTable \\ref{tab:tt} presents the performance of the two methods for every validation passage in the cross-validation folds, where the occurrence counts of the labels \\textit{on} and \\textit{off} were obtained from the ground truth. In general, our proposed transfer learning method with SVM obtains better performance: the average values of $P_1$ and $F_1$ are 11.99\\% and 6.76\\% higher with the transfer learning method than with the fine-tuned \\texttt{convnet-multi}, while both methods achieved a similar average value of $R_1$.\n\nWe also compared the overall performance of the two methods along with directly using the pre-trained \\texttt{convnet-multi}. Their results are presented passage by passage in Figure \\ref{fig:microf1}. Considering the imbalanced occurrence counts of the two labels, the micro-averaged F-measure ($F_{micro}$) was selected to evaluate the overall performance, because it calculates metrics globally by counting the total $N_{tp}$, $N_{fp}$ and $N_{fn}$ with respect to both labels. 
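The measures above, together with the micro-averaged F-measure, reduce to a few lines of code; a self-contained sketch (the counts used here are illustrative and not taken from the experiments):

```python
def prf1(n_tp, n_fp, n_fn):
    """Precision, recall and F-measure from true/false positive/negative counts."""
    p = n_tp / (n_tp + n_fp)
    r = n_tp / (n_tp + n_fn)
    return p, r, 2 * p * r / (p + r)

def micro_f(per_label_counts):
    """F_micro: pool the counts over all labels, then compute a single F-measure."""
    tp = sum(t for t, _, _ in per_label_counts)
    fp = sum(f for _, f, _ in per_label_counts)
    fn = sum(n for _, _, n in per_label_counts)
    return prf1(tp, fp, fn)[2]

p, r, f = prf1(80, 20, 20)  # illustrative counts for the label "on"
assert abs(p - 0.8) < 1e-12 and abs(r - 0.8) < 1e-12 and abs(f - 0.8) < 1e-12
```

Pooling the counts over both labels before computing the measure is what makes $F_{micro}$ robust to the imbalanced \textit{on}/\textit{off} occurrence counts.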
The proposed transfer learning method with SVM presents the best overall performance, with an $F_{micro}$ more than 10\\% higher than that obtained by the other two methods.\n\n\\subsection{Discussion}\n\\label{sec:discussion}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{vis.png}\n\\caption{Visualisation of the ground truth (top row) and the detection result (bottom row) in \\textit{Op.66}. Audio frames that are annotated\/detected as pedal \\textit{on} are highlighted in orange\/green.}\n\\label{fig:vis}\n\\end{figure}\n\n\\begin{figure*}[htp]\n\\centering\n\\subfloat[Melspectrograms of two input signals and their respective deconvolved results from 4 layers of \\texttt{convnet-frequency}. \\label{fig:f45t3}]{%\n \\includegraphics[width=.95\\linewidth]{f45t3.png}%\n}\n\n\\subfloat[Melspectrograms of two input signals and their respective deconvolved results from 4 layers of \\texttt{convnet-time}. \\label{fig:f3t10}]{%\n \\includegraphics[width=.95\\linewidth]{f3t10.png}%\n}\n\\caption{Visual analysis of music excerpts in pairs. The deconvolved melspectrogram corresponding to the first feature in layer $l$ is designated by \\textit{layer$l$-1}.}\n\\label{fig:visft}\n\\end{figure*}\n\nTo gain deeper insight into the pros and cons of our method, we visualised the detection results of the last 15 seconds of the passage \\textit{Op.66}, which obtained the best performance in the target task. Figure \\ref{fig:vis} highlights the audio frames corresponding to pedal \\textit{on} according to the detection results and the ground truth separately. Most of the frames were correctly identified. Yet there were false positives, because some frames prior to the true sustain-pedal onset times were detected as positive, which decreased $P_1$. This implies that a model dedicated to detecting the sustain-pedal onset should be developed. 
Such a model, or its outputs, could be fused with \\texttt{convnet-multi} in order to localise the pedalled segments with better precision. There was also fragmentation corresponding to transient \\textit{on} states returned by the model, which increased $N_{fp}$ and $N_{fn}$; this could be reduced by post-processing techniques.\n\nIt is notable that in the source task, models with various filter shapes within the first layer all obtained AUC scores higher than 0.99, as shown in Table \\ref{tab:convnet}. We assume that pressing the sustain pedal results in acoustic characteristics that significantly change the patterns in both frequency and time, which is why \\texttt{convnet-multi} can obtain the highest AUC score. To understand the learning process of the \\texttt{convnet} models, we conducted a visual analysis of the deconvolved melspectrograms of paired music excerpts, which share the same note events but are labelled differently. Visualisation results using the \\texttt{convnet-frequency} and \\texttt{convnet-time} models with the best AUC scores, i.e., with ($m$, $n$) set to (45, 3) and (3, 10), are shown in Figure \\ref{fig:f45t3} and Figure \\ref{fig:f3t10} respectively. In Figure \\ref{fig:visft}, we select only the first feature map learned in each of the four convolutional layers and present their deconvolved melspectrograms. From layers 1 to 3, both models focus on the time-frequency contexts centred around the fundamental frequencies and their partials. More contexts in the higher frequency bands are learned by \\texttt{convnet-frequency}. In the fourth layer, only the first half of the melspectrograms is emphasised. We can infer that the sustain pedal has a stronger effect on the notes whose onsets coincide with the start of the pedal press. Meanwhile, the main differences between \\textit{pedal} and \\textit{no-pedal} excerpts lie in the lower frequency bands, as indicated by \\texttt{convnet-time}. 
Considering that a slightly lower accuracy score was obtained by \\texttt{convnet-frequency}, we can assume that dependencies within the higher frequency range may be redundant knowledge for our source task.\n\nAnother observation, based on the scores in Table \\ref{tab:convnetmulti}, is that performance on the binary classification task depends less on the number of layers. This is also reflected in the deconvolved melspectrograms from layers 1 to 3, where roughly the same time-frequency areas were emphasised. We could therefore train \\texttt{convnet} models more efficiently, using fewer convolutional layers while keeping or increasing the number of channels.\n\nThrough inspection of the detection results and the learned filters, we can extend our understanding of \\texttt{convnet} models in music. This inspires us to develop CNN models not only for detecting the pedalled frames, but also for learning the transients introduced by the sustain-pedal onsets or even the offsets. More audio data, including pieces by other composers and recordings made under various conditions, should be tested to verify the robustness of our approach. This also constitutes our future work. \n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we answered the question: ``Can a computer point out pedalling techniques when a piano recording from a virtuoso performance is given?''. A novel transfer learning approach based on \\texttt{convnet} models was proposed to detect the sustain pedal, and evaluated on ten passages of Chopin's music. A specific transfer learning paradigm was used where the source and target tasks differ in objectives and experimental conditions, including the use of synthesised versus real acoustic recordings. The model trained in the source task can then be employed as a feature extractor in the target task. 
\n\nIn the source task, the model architecture was informed by piano acoustics and physics in order to facilitate the training process. Given the synthesised excerpts played with or without the sustain pedal, we showed that \\texttt{convnet} models can learn the time-frequency contexts corresponding to the acoustic characteristics of the sustain pedal, rather than the larger variations introduced by other musical elements. Among all models, \\texttt{convnet-multi} was selected for use in the target task due to its highest accuracy and AUC scores in binary classification. Features with more representational power dedicated to the sustain-pedal effect can be extracted from the intermediate layers of \\texttt{convnet-multi}. This helps to adapt the detection to acoustic piano recordings. Thus better performance was obtained compared to fine-tuning or directly applying the pre-trained \\texttt{convnet-multi} network. Finally, visualisation of the learned filters using deconvolution showed us potential directions towards designing more efficient and effective models for detecting different phases of the use of the sustain pedal.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}