diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgzpm" "b/data_all_eng_slimpj/shuffled/split2/finalzzgzpm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgzpm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn this paper, we consider a stochastic minimization problem. Let $\\Xi$ be a statistical sample space with probability\n distribution $\\Pi$ (we omit the underlying $\\sigma$-algebra). Let $X\\subseteq\\mathbb{R}^n$ be a closed convex set, which represents the parameter space. $F(\\cdot;\\xi):X\\rightarrow \\mathbb{R}$ is a closed convex function associated with $\\xi\\in\\Xi$. We aim to solve the following problem:\n\\begin{align}\\label{minex}\n \\textrm{minimize}_{x\\in X\\subseteq \\mathbb{R}^n} ~~\\mathbb{E}_{\\xi}\\big(F(x;\\xi)\\big)=\\int_{\\Pi} F(x,\\xi) d\\Pi(\\xi).\n\\end{align}\n\\\\\nA common method to minimize \\eqref{minex} is Stochastic Gradient Descent (SGD) \\cite{robbins1951stochastic}:\n\\begin{align}\\label{sgd}\n x^{k+1}=\\textbf{Proj}_{X}\\big(x^k-\\gamma_k\\partial F(x^k;\\xi^k)\\big),\\quad \\xi^k\\stackrel{\\text{i.i.d}}{\\sim} \\Pi.\n\\end{align}\n\nHowever, for some problems and distributions, direct sampling from $\\Pi$ is expensive or impossible, and it is possible that $\\Xi$ is not known. In these cases, it can be much cheaper to sample by following a Markov chain that has the desired distribution $\\Pi$ as its equilibrium distribution.\n\nTo be concrete, imagine solving problem \\eqref{minex} with a discrete space $\\Xi:=\\{x\\in \\{0,1\\}^n\\mid\\langle a,x\\rangle\\leq b\\}$, where $a\\in\\mathbb{R}^n$ and $b\\in \\mathbb{R}$, and the uniform distribution $\\Pi$ over $\\Xi$.\nA straightforward way to obtain a uniform sample is iteratively randomly sampling $x\\in\\{0,1\\}^n$ until the constraint $\\langle a,x\\rangle\\leq b$ is satisfied. Even if the feasible set is small, it may take up to $O(2^n)$ iterations to get a feasible sample.\nInstead, one can sample a trajectory of a Markov chain described in \\cite{jerrum1996markov}; to obtain a sample $\\varepsilon$-close to the distribution $\\Pi$, one only needs $\\log(\\frac{\\sqrt{\\mid \\Xi\\mid}}{\\varepsilon})\\exp(O(\\sqrt{n}(\\log n)^{\\frac{5}{2}}))$ samples \\cite{dyer1993mildly}, where $|\\Xi|$ is the cardinality of $\\Xi$. This presents a signifant saving in sampling cost.\n\nMarkov chains also naturally arise in some applications. Common examples are systems that evolve according to Markov chains, for example, linear dynamic systems with random transitions or errors. Another example is a distributed system in which every node locally stores a subset of training samples; to train a model using these samples, we can let a token that holds all the model parameters traverse the nodes following a random walk, so the samples are accessed according to a Markov chain.\n\n\nSuppose that the Markov chain has a stationary distribution $\\Pi$ and a finite mixing time $T$, which is how long a random trajectory needs to be until its current state has a distribution that roughly matches $\\Pi$. A larger $T$ means a closer match. 
Then, in order to run one iteration of \\eqref{sgd}, we can generate a trajectory of samples $\\xi^1,\\xi^2,\\xi^3,\\ldots,\\xi^T$ and only take the last sample $\\xi:=\\xi^T$.\nTo run another iteration of \\eqref{sgd}, we repeat this process, i.e., sample a new trajectory $\\xi^1,\\xi^2,\\xi^3,\\ldots,\\xi^T$ and take $\\xi:=\\xi^T$.\n\nClearly, sampling a long trajectory just to use the last sample wastes a lot of samples, especially when $T$ is large. But, this may seem necessary because $\\xi^t$, for all small $t$, have large biases. After all, it can take a long time for the random trajectory to explore all of the space, and it will often double back and visit states that it previously visited. Furthermore, it is also difficult to choose an appropriate $T$. A small $T$ will cause large bias in $\\xi^T$, which slows the SGD convergence and reduces its final accuracy. A large $T$, on the other hand, is wasteful, especially when $x^k$ is still far from convergence and some bias does not prevent \\eqref{sgd} from making good progress. Therefore, $T$ should increase adaptively as $k$ increases --- this makes the choice of $T$ even more difficult.\n\n\nSo, why waste samples, why worry about $T$, and why not just apply every sample immediately in stochastic gradient descent?\nThis approach has appeared in \\cite{johansson2007simple,johansson2009randomized}; we call it the Markov Chain Gradient Descent (MCGD) algorithm for problem \\eqref{minex}:\n\\begin{align}\\label{online-alg1}\n x^{k+1}=\\textbf{Proj}_{X}\\big(x^k-\\gamma_k \\hat{\\nabla} F(x^k;\\xi^k)\\big),\n\\end{align}\nwhere $\\xi^0,\\xi^1,\\ldots$ are samples on a Markov chain trajectory and $\\hat{\\nabla} F(x^k;\\xi^k)\\in \\partial F(x^k;\\xi^k)$ is a subgradient.\n\nLet us examine some special cases.\nSuppose the distribution $\\Pi$ is supported on a set of $M$ points, $y^1,\\dots,y^M$. Then, by letting $f_i(x) := M\\cdot\\mathrm{Prob}(\\xi=y^i)\\cdot F(x,y^i)$, problem \\eqref{minex} reduces to the finite-sum problem:\n\\begin{equation}\\label{model}\n \\textrm{minimize}_{x\\in X\\subseteq \\mathbb{R}^n}f(x)\\equiv\\frac{1}{M} \\sum_{i=1}^M f_i(x).\n\\end{equation}\nBy the definition of $f_i$, each state $i$ has the uniform probability $1\/M$.\nAt each iteration $k$ of MCGD, we have\n\\begin{equation}\\label{scheme-sgd}\n x^{k+1}=\\textbf{Proj}_{X}\\big(x^k-\\gamma_k\\hat{\\nabla} f_{j_k}(x^k)\\big),\n\\end{equation}\nwhere $(j_k)_{k\\geq 0}$ is a trajectory of a Markov chain on $\\{1,2,\\dots, M\\}$ that has a uniform stationary distribution. Here, $(\\xi^k)_{k\\geq 0}\\subseteq\\Xi$ and $(j_k)_{k\\geq 0}\\subseteq [M]$ are two different, but related, Markov chains.\nStarting from a deterministic and arbitrary initialization $x^0$, the iteration is illustrated by the following diagram:\n\\begin{equation}\\label{tutu-sgd1}\n \\CD\n @. j_0 @> >>j_1 @> >> j_2 @> >> \\ldots\\\\\n @. @V VV @V VV @V VV @.
\\\n x^0 @> >> x^1 @> >> x^2 @> >> x^3 @>>>\\ldots\n \\endCD\n\\end{equation}\nIn the diagram, given each $j_k$, the next state $j_{k+1}$ is statistically independent of $j_{k-1},\\dots,j_0$; given $j_k$ and $x^k$, the next iterate $x^{k+1}$ is statistically independent of $j_{k-1},\\dots,j_0$ and $x^{k-1},\\dots,x^0$.\n\nAnother application of MCGD involves a network: consider a strongly connected graph $\\mathcal{G} = (\\mathcal{V}, \\mathcal{E})$ with the set of vertices $\\mathcal{V}=\\{1,2,\\ldots,M\\}$ and set of edges $\\mathcal{E}\\subseteq \\mathcal{V}\\times \\mathcal{V}$.\nEach node $j\\in \\{1,2,\\ldots,M\\}$ possesses some data and can compute $\\nabla f_j(\\cdot)$. To run MCGD,\nwe employ a token that carries the variable $x$, walking randomly over the network. When it reaches a node $j$, node $j$ reads $x$ from the token and computes $\\nabla f_j(\\cdot)$ to update $x$ according to \\eqref{scheme-sgd}. Then, the token walks away to a random neighbor of node $j$.\n\n\\subsection{Numerical tests}\nWe present two kinds of numerical results. The first one shows that MCGD uses fewer samples to train both a convex model and a nonconvex model. The second one demonstrates the advantage of the faster mixing of a non-reversible Markov chain. Our results on nonconvex objectives and non-reversible chains are new.\\\\\n\\\\\n\\textit{1. Comparison with SGD}\\\\\nLet us compare:\n\\begin{enumerate}\n \\item MCGD \\eqref{online-alg1}, where $j_k$ is taken from one trajectory of the Markov chain;\n \\item SGD$T$, for $T=1,2,4,8,16,32$, where each $j_k$ is the $T$th sample of a fresh, independent trajectory. All trajectories are generated by starting from the same state $0$.\n\\end{enumerate}\nTo compute $T$ gradients, SGD$T$ uses $T$ times as many samples as MCGD. We did not try to adapt $T$ as $k$ increases because there is no theoretical guidance for doing so.\n\nIn the first test, we recover a vector $u$ from an autoregressive process, which closely resembles the first experiment in \\cite{duchi2012ergodic}. We set the matrix $A$ to be a subdiagonal matrix with random entries $A_{i,i-1}\\overset{\\text{i.i.d}}{\\sim} \\mathcal{U}[0.8,0.99]$ and randomly sample a vector $u\\in\\mathbb{R}^d$, $d=50$, with unit 2-norm. Our data $(\\xi_t^1,\\xi_t^2)_{t=1}^{\\infty}$ are generated according to the following autoregressive process:\n\\begin{align*}\n &\\xi^1_t = A\\xi^1_{t-1}+e_1W_t,~ W_t\\stackrel{\\text{i.i.d}}{\\sim} N(0,1)\\\\\n &\\bar{\\xi}^2_t = \\left\\{\\begin{array}{ll}\n 1, & \\text{if } \\langle u, \\xi^1_t\\rangle>0,\\\\\n 0, & \\text{otherwise};\n \\end{array}\\right.\\\\\n &\\xi^2_t = \\left\\{\\begin{array}{ll}\n \\bar{\\xi}^2_t, &\\text{ with probability 0.8,}\\\\\n 1-\\bar{\\xi}^2_t, &\\text{ with probability 0.2.}\n \\end{array}\\right.\n\\end{align*}\n\nClearly, $(\\xi_t^1,\\xi_t^2)_{t=1}^{\\infty}$ forms a Markov chain. Let $\\Pi$ denote the stationary distribution of this Markov chain. We recover $u$ as the solution to the following problem:\n\\begin{align*}\n \\textrm{minimize}_{x} ~~\\mathbb{E}_{(\\xi^1,\\xi^2)\\sim\\Pi}\\ell(x;\\xi^1,\\xi^2).\n\\end{align*}\n\nWe consider both convex and nonconvex loss functions; the nonconvex case had not been treated before in the literature. The convex one is the logistic loss\n\\begin{align*}\n\\ell(x;\\xi^1,\\xi^2)=-\\xi^2\\log(\\sigma(\\langle x, \\xi^1 \\rangle))-(1-\\xi^2)\\log(1-\\sigma(\\langle x,\\xi^1\\rangle)),\n\\end{align*}\n where $\\sigma(t)=\\frac{1}{1+\\exp(-t)}$.
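\nThe autoregressive sampling process above can be made concrete with a short Python sketch. This is our own minimal illustration of the data-generating Markov chain, not the code used for the experiments; the random seed and the function name are assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)   # assumed seed\nd = 50\n# Subdiagonal matrix A with A[i, i-1] ~ U[0.8, 0.99], i.i.d.\nA = np.diag(rng.uniform(0.8, 0.99, size=d - 1), k=-1)\nu = rng.standard_normal(d)\nu \/= np.linalg.norm(u)           # unit 2-norm\ne1 = np.zeros(d)\ne1[0] = 1.0\n\n# Return T consecutive samples (xi1_t, xi2_t) of the chain.\ndef sample_chain(T):\n    xi1, out = np.zeros(d), []\n    for _ in range(T):\n        xi1 = A @ xi1 + e1 * rng.standard_normal()  # AR step\n        label = 1.0 if u @ xi1 > 0 else 0.0\n        if rng.random() < 0.2:                      # flip w.p. 0.2\n            label = 1.0 - label\n        out.append((xi1.copy(), label))\n    return out\n\\end{verbatim}\nMCGD consumes such a trajectory directly, one sample per iteration, whereas SGD$T$ keeps only the $T$th sample of a fresh trajectory for each iteration.\n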
The nonconvex one is taken as\n\\begin{align*}\n\\ell(x;\\xi^1,\\xi^2)=\\frac{1}{2}(\\sigma(\\langle x,\\xi^1 \\rangle)-\\xi^2)^2\n\\end{align*}\nfrom \\cite{mei2016landscape}. We choose $\\gamma_k = \\frac{1}{k^q}$ as our stepsize, where $q = 0.501$. This choice is consistent with our theory below.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.245\\textwidth]{convexcase.pdf}\\includegraphics[width=0.245\\textwidth]{convexcase1.pdf}\n \\includegraphics[width=0.245\\textwidth]{nonconvexcase.pdf}\\includegraphics[width=0.245\\textwidth]{nonconvexcase1.pdf}\n \\caption{Comparisons of MCGD and SGD$T$ for $T=1,2,4,8,16,32$. $\\overline{x^k}$ is the average of $x^1,\\ldots,x^k$.}\n \\label{fig:my_label}\n\\end{figure}\n\nOur results in Figure 1 are surprisingly positive for MCGD, even more so than we had expected. As anticipated, MCGD used significantly fewer total samples than SGD$T$ for every $T$. But it is surprising that MCGD did not need even more gradient evaluations. The randomly generated data must have helped homogenize the samples over the different states, making it less important for a trajectory to converge. Note that SGD1 and SGD2, as well as SGD4 in the nonconvex case, stagnate at noticeably lower accuracies because their $T$ values are too small for convergence. \\\\\n\\\\\n\\textit{2. Comparison of reversible and non-reversible Markov chains}\\\\\nWe also compare the convergence of MCGD when working with reversible and non-reversible Markov chains (the definition of reversibility is given in the next section). As mentioned in \\cite{turitsyn2011irreversible}, transforming a reversible Markov chain into a non-reversible one can significantly accelerate the mixing process. This technique also helps to accelerate the convergence of MCGD.\n\nIn our experiment, we first construct an undirected connected graph with $n=20$ nodes and randomly generated edges. Let $G$ denote the adjacency matrix of the graph, that is,\n\\begin{align*}\n G_{i,j}=\\left\\{\\begin{array}{ll}\n 1, &\\text{if $i, j$ are connected;}\\\\\n 0, &\\text{otherwise.}\n \\end{array}\\right.\n\\end{align*}\nLet $d_{\\max}$ be the maximum degree over the nodes.\nSelect $d=10$ and sample $\\beta^*\\sim \\mathcal{N}(0,I_d)$.\nThe transition matrix of the reversible Markov chain, known as the Metropolis-Hastings Markov chain, is then defined by\n\\begin{align*}\n P_{i,j} = \\left\\{\\begin{array}{ll}\n \\frac{1}{d_{\\max}}, & \\text{if $j\\neq i$, $G_{i,j}=1$;}\\\\\n 1-\\frac{\\sum_{j\\neq i}G_{i,j}}{d_{\\max}}, & \\text{if $j=i$;}\\\\\n 0, & \\text{otherwise.}\n \\end{array}\\right.\n\\end{align*}\nObviously, $P$ is symmetric and the stationary distribution is uniform.\nThe non-reversible Markov chain is constructed by adding cycles. The edges of these cycles are directed; let $V$ denote the adjacency matrix of these cycles. If $V_{i,j}=1$, then $V_{j,i}=0$. Let $w_0>0$ be the weight of the flows along these cycles. Then we construct the transition matrix of the non-reversible Markov chain as follows:\n\\begin{align*}\nQ_{i,j} = \\frac{W_{i,j}}{\\sum_l W_{i,l}},\n\\end{align*}\nwhere $W = d_{\\max}P+w_0 V$. See \\cite{turitsyn2011irreversible} for an explanation of why this change makes the chain mix faster.\n\n\n In our experiment, we add 5 cycles of length 4, with edges existing in $G$. $w_0$ is set to be $\\frac{d_{\\max}}{2}$. We test MCGD on a least squares problem.
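\nFor concreteness, the chain construction just described can be sketched in Python. This is a minimal illustration under our own assumptions (the random graph, the randomly placed cycles, and the final eigenvalue check are ours), not the experimental code.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 20\n# Random connected undirected graph: a ring plus random edges.\nG = np.zeros((n, n))\nfor i in range(n):\n    G[i, (i + 1) % n] = G[(i + 1) % n, i] = 1.0\nfor i, j in rng.integers(0, n, size=(30, 2)):\n    if i != j:\n        G[i, j] = G[j, i] = 1.0\n\nd_max = G.sum(axis=1).max()\n# Metropolis-Hastings chain: symmetric, uniform stationary law.\nP = G \/ d_max\nnp.fill_diagonal(P, 1.0 - G.sum(axis=1) \/ d_max)\n\n# Directed cycles (in the paper they lie on edges of G).\nV = np.zeros((n, n))\nfor _ in range(5):\n    cyc = rng.choice(n, size=4, replace=False)\n    for a, b in zip(cyc, np.roll(cyc, -1)):\n        if V[b, a] == 0:       # enforce V[i,j] = 1 => V[j,i] = 0\n            V[a, b] = 1.0\n\nw0 = d_max \/ 2.0\nW = d_max * P + w0 * V\nQ = W \/ W.sum(axis=1, keepdims=True)   # row-stochastic\n\n# Smaller second-largest eigenvalue modulus = faster mixing.\nslem = lambda M: np.sort(np.abs(np.linalg.eigvals(M)))[-2]\nprint(slem(P), slem(Q))\n\\end{verbatim}\nThe two printed numbers play the role of the second largest eigenvalues ($0.75$ and $0.66$) quoted in the caption of Figure \\ref{fig:irreversible}.\n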
For each node $i$, we generate $x_i\\sim\\mathcal{N}(0, I_d)$ and set $y_i = x_i^T\\beta^*$. The objective function is defined as\n\\begin{align*}\n f(\\beta)=\\frac{1}{2}\\sum\\limits_{i=1}^{n}(x_i^T\\beta-y_i)^2.\n\\end{align*}\nThe convergence results are depicted in Figure \\ref{fig:irreversible}.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{non-reversible.pdf}\\\\\n \\caption{Comparison of reversible and irreversible Markov chains. The second largest eigenvalues of the reversible and non-reversible Markov chains are 0.75 and 0.66 respectively. }\n \\label{fig:irreversible}\n\\end{figure}\n\n\n\n\n\n\n\\subsection{Known approaches and results}\nIt is more difficult to analyze MCGD due to its biased samples. To see this, let $p_{k,j}$ be the probability of selecting $\\nabla f_j$ in the $k$th iteration.\nSGD's uniform probability selection ($p_{k,j}\\equiv\\frac{1}{M}$) yields an unbiased gradient estimate\n\\begin{align}\\label{unba}\n \\mathbb{E}_{j_k}(\\nabla f_{j_k}(x^k))=C\\nabla f(x^k)\n\\end{align}\nfor some $C>0$.\nHowever, in MCGD, it is possible to have $p_{k,j}=0$ for some $k,j$. Consider a ``random walk''. The probability $p_{j_k,j}$ is determined by the current state $j_k$, and we have $p_{j_k,i}>0$ only for $i\\in \\mathcal{N}(j_k)$ and $p_{j_k,i}=0$ for $i\\notin \\mathcal{N}(j_k)$, where $\\mathcal{N}(j_k)$ denotes the neighborhood of $j_k$. Therefore, we no longer have \\eqref{unba}.\n\nAll analyses of MCGD must deal with the biased expectation. Papers \\cite{johansson2009randomized,johansson2007simple} investigate the conditional expectation $\\mathbb{E}_{j_{k+\\tau}\\mid j_k}(\\nabla f_{j_{k+\\tau}}(x^{k}))$. For a sufficiently large $\\tau\\in\\mathbb{Z}^+$, it is sufficiently close to $\\frac{1}{M}\\nabla f(x^{k})$ (but still different).\nIn \\cite{johansson2009randomized,johansson2007simple}, the authors proved that, to achieve an $\\epsilon$ error, MCGD with stepsize $O(\\epsilon)$ can return a solution in $O(\\frac{1}{\\epsilon^2})$ iterations. Their error bound is given in the ergodic sense, using $\\mathrm{liminf}$. The authors of \\cite{ram2009incremental}\nproved that $\\liminf f(x^k)$ and $\\mathbb{E}\\, \\mathrm{dist}^2(x^k,X^*)$ converge almost surely under diminishing stepsizes $\\gamma_k=\\frac{1}{k^q}$, $\\frac{2}{3}< q\\leq 1$. Although the authors did not compute any rates, we computed that their stepsizes will lead to\na solution with $\\epsilon$ error in $O(\\frac{1}{\\epsilon^{\\frac{1}{1-q}}})$ iterations, for $\\frac{2}{3}<q<1$.\n\n\\section{Preliminaries}\n\\subsection{Markov chain}\nWe consider a time-homogeneous Markov chain on the finite state space $[M]=\\{1,2,\\ldots,M\\}$ with transition matrix $P\\in\\mathbb{R}^{M\\times M}$, where $P_{i,j}$ is the probability of moving from state $i$ to state $j$. The chain is said to be irreducible if, for any $i,j\\in[M]$, we have $P^k_{i,j}>0$ for some $k>0$.\nState $i\\in[M]$ is said to have a period $d$ if $P^k_{i,i} = 0$ whenever $k$ is \\emph{not} a\nmultiple of $d$ and $d$ is the greatest integer with\nthis property. If $d=1$, then we say state $i$ is aperiodic. If every state is aperiodic, the Markov chain is said to be aperiodic.\n\n\nAny time-homogeneous, irreducible, and aperiodic\nMarkov chain has a stationary distribution $\\pi^*=\\lim_k \\pi^k\n=[\\pi^*_1,\\pi^*_2,\\ldots,\\pi^*_M]$ with $\\sum_{i=1}^M \\pi^*_i=1$ and $\\min_i\\{\\pi^*_i\\}>0$, and $\\pi^*= \\pi^* P$. It also holds that\n\\begin{equation}\\label{convermatrix}\n \\lim_{k}P^{k}=[(\\pi^*);(\\pi^*);\\ldots;(\\pi^*)]=:\\Pi^*\\in \\mathbb{R}^{M\\times M}.\n\\end{equation}\nThe largest eigenvalue of $P$ is 1, and the corresponding left eigenvector is $\\pi^*$.\n\n\\begin{assumption}\\label{ass:mc}{The Markov chain $(X_k)_{k \\ge 0}$ is time-homogeneous, irreducible, and aperiodic.
It has transition matrix $P$ and stationary distribution $\\pi^*$.}\n\\end{assumption}\n\n\n\\subsection{Mixing time}\\label{mixt}\nMixing time is how long a Markov chain evolves until its current state has a distribution very close to its stationary distribution. The literature contains thorough investigations of various kinds of mixing times, mostly for\n reversible Markov chains (that is, those satisfying $\\pi_i P_{i,j} = \\pi_j P_{j,i}$). Mixing times of non-reversible Markov chains are discussed in \\cite{fill1991eigenvalue}. In this part, we consider a new type of mixing time of a non-reversible Markov chain. The proofs are based on basic matrix analysis. Our mixing time gives a direct relationship between $k$ and the deviation of the distribution of the current state from the stationary distribution.\n\nBefore stating the lemma, we review some basic notions from linear algebra. Let $\\mathbb{C}$ denote the complex field and $\\mathbb{C}^n$ the $n$-dimensional complex vector space.\nThe modulus of a complex number $a\\in\\mathbb{C}$ is denoted by $|a|$.\nFor a vector $x\\in \\mathbb{C}^{n}$, the $\\ell_{\\infty}$ and $\\ell_2$ norms are defined as $\\|x\\|_{\\infty}:=\\max_{i}|x_i|$, $\\|x\\|_{2}:=\\sqrt{ \\sum_{i=1}^n|x_i|^2}$.\nFor a matrix $A=\\begin{bmatrix}a_{i,j}\\end{bmatrix}\\in \\mathbb{C}^{m\\times n}$, its entry-wise maximum norm and Frobenius norm are $\\|A\\|_{\\infty}:=\\max_{i,j}|a_{i,j}|$ and $\\|A\\|_{F}:=\\sqrt{\\sum_{i,j}|a_{i,j}|^2}$, respectively.\n\nWe know $P^k\\rightarrow\\Pi^*$ as $k\\to\\infty$. The following lemma presents a deviation bound for finite $k$.\n\\begin{lemma}\\label{le1}\nLet Assumption \\ref{ass:mc} hold and let $\\lambda_i(P)\\in \\mathbb{C}$ be the $i$th largest eigenvalue of $P$, and\n$$\\lambda(P):=\\frac{\\max\\{|\\lambda_2(P)|,|\\lambda_M(P)|\\}+1}{2}\\in [0,1).$$\nThen, we can bound the largest entry-wise absolute value of the deviation matrix $\\delta^k :=\\Pi^*-P^{k}\\in \\mathbb{R}^{M\\times M}$ as\n\\begin{equation}\\label{core1}\n \\|\\delta^k\\|_{\\infty}\\leq C_P\\cdot\\lambda^k(P)\n\\end{equation}\nfor $k\\geq K_P$, where $C_P$ is a constant that also depends on the Jordan canonical form of $P$ and $K_P$ is a constant that depends on $\\lambda(P)$ and $\\lambda_2(P)$. Their formulas are given in \\eqref{cpformulate} and \\eqref{kpformulate} in the Supplementary Material.\n\n\\end{lemma}\n\\begin{remark}\nIf $P$ is symmetric, then the $\\lambda_{i}(P)$ are all real and nonnegative, $K_P=0$, and\n $C_P\\leq M^{\\frac{3}{2}}$.\nFurthermore, \\eqref{core1} can be improved by directly using $\\lambda_2^k(P)$ on the right side as\n$$ \\|\\delta^k\\|_{\\infty}\\leq \\|\\delta^k\\|_{F}\\leq M^{\\frac{3}{2}}\\cdot\\lambda_2^k(P),~~k\\geq 0.$$\n\\end{remark}\n\n\n\n\n\n\n\n\\section{Convergence analysis for convex minimization}\nThis part considers the convergence of MCGD in the convex case, i.e., $f_1, f_2,\\ldots, f_M$ and $X$ are all convex. We investigate the convergence of scheme \\eqref{scheme-sgd}.\nWe prove non-ergodic convergence of the expected objective value sequence under diminishing non-summable stepsizes, where the stepsizes are required to be ``almost\" square summable. Therefore, the stepsize requirements are almost identical to those of SGD.\nThis section uses the following assumption.\n\\begin{assumption} The set $X$ is assumed to be convex and compact.\n\\end{assumption}\n\n\n\nNow, we present the convergence results for MCGD in the convex (but not necessarily differentiable) case.
Let $f^*$ be the minimum value of $f$ over $X$.\n\\begin{theorem}\\label{th-sgd}\nLet Assumptions 1 and 2 hold and $(x^k)_{k\\geq 0}$ be generated by scheme (\\ref{scheme-sgd}). Assume that $f_i$, $i\\in[M]$, are convex functions, and the stepsizes satisfy\n\\begin{align}\\label{stepsize-1}\n \\sum_k\\gamma_k=+\\infty,\\quad\\sum_k\\ln k\\cdot\\gamma_k^2<+\\infty.\n\\end{align}\nThen, we have\n\\begin{equation}\\label{th-sgd-r1}\n \\lim_{k} \\mathbb{E} f(x^k)=f^*.\n\\end{equation}\nDefine\n$$\\psi(P):=\\max\\{1,\\frac{1}{\\ln(1\/\\lambda(P))}\\}.$$\nWe have:\n\\begin{equation}\\label{th-sgd-r2}\n \\mathbb{E}(f(\\overline{x^k})- f^*)=O\\Big(\\frac{\\psi(P)}{\\sum_{i=1}^k\\gamma_i}\\Big),\n\\end{equation}\nwhere $\\overline{x^k}:=\\frac{\\sum_{i=1}^k\\gamma_i x^i}{\\sum_{i=1}^k\\gamma_i}$.\nTherefore, if we select the stepsize $\\gamma_k=O(\\frac{1}{k^{q}})$ with $\\frac{1}{2}<q<1$, then $\\mathbb{E}(f(\\overline{x^k})- f^*)=O(\\frac{\\psi(P)}{k^{1-q}})$; note that the conditions in \\eqref{stepsize-1} hold for any $q>\\frac{1}{2}$.\n\\end{theorem}\n\n\\section{Convergence analysis for nonconvex minimization}\nThis part considers the nonconvex case, where $X$ is the full space and the functions $f_1,f_2,\\ldots,f_M$ may be nonconvex. The scheme becomes\n\\begin{equation}\\label{scheme-sgd2}\n x^{k+1}=x^k-\\gamma_k\\nabla f_{j_k}(x^k).\n\\end{equation}\nThis section uses the following assumption.\n\\begin{assumption} The gradients of $f_i$ are assumed to be uniformly bounded, i.e., there exists $D>0$ such that\n\\begin{equation}\\label{boundx}\n \\|\\nabla f_i(x)\\|\\leq D,\\quad i\\in[M].\n\\end{equation}\n \\end{assumption}\nWe use this new assumption because $X$ is now the full space, and we have to directly bound the size of $\\|\\nabla f_i(x)\\|$. In the nonconvex case, we cannot obtain objective value convergence, and we only bound the gradients.\nNow, we are prepared to present our convergence results for nonconvex MCGD.\n\n\\begin{theorem}\\label{th-sgd2}\nLet Assumptions 1 and 3 hold and $(x^k)_{k\\geq 0}$ be generated by scheme (\\ref{scheme-sgd2}). Also, assume each $f_i$ is differentiable and $\\nabla f_i$ is $L$-Lipschitz, and the stepsizes satisfy\n\\begin{align}\\label{stepsize-2}\n \\sum_k\\gamma_k=+\\infty,\\quad\\sum_k\\ln^2 k\\cdot\\gamma_k^2<+\\infty.\n\\end{align}\nThen, we have\n\\begin{equation}\\label{th-sgd2-r1}\n \\lim_{k} \\mathbb{E}\\|\\nabla f(x^k)\\|=0,\n\\end{equation}\nand\n\\begin{equation}\\label{th-sgd2-r2}\n \\mathbb{E}\\big(\\min_{1\\leq i\\leq k}\\{\\|\\nabla f(x^i)\\|^2\\}\\big)=O\\Big(\\frac{\\psi(P)}{\\sum_{i=1}^k\\gamma_i}\\Big),\n\\end{equation}\nwhere $\\psi(P)$ is defined in Theorem \\ref{th-sgd}.\nIf we select the stepsize as $\\gamma_k=O(\\frac{1}{k^{q}})$, $\\frac{1}{2}<q<1$, then \\eqref{th-sgd2-r1} still holds and $\\mathbb{E}\\big(\\min_{1\\leq i\\leq k}\\{\\|\\nabla f(x^i)\\|^2\\}\\big)=O(\\frac{\\psi(P)}{k^{1-q}})$.\n\\end{theorem}\nThe proof of Theorem \\ref{th-sgd2} is different from the previous one. In particular, we cannot expect any sort of convergence to $f(x^*)$, where $x^*\\in\\mathop{\\mathrm{argmin}} f$, due to nonconvexity. Instead, we use the Lipschitz continuity of $\\nabla f_i$ ($i\\in[M]$) to derive the ``descent\" of the objective. Here, the ``$O$\" contains a polynomial composition of the constants $D$ and $L$.\n\nCompared with MCGD in the convex case, the stepsize requirements of nonconvex MCGD become slightly stronger; in the summable part, we need $\\sum_k\\ln^2 k\\cdot\\gamma_k^2<+\\infty$ rather than $\\sum_k\\ln k\\cdot\\gamma_k^2<+\\infty$. Nevertheless, we can still use $\\gamma_k=O(\\frac{1}{k^{q}})$ for $\\frac{1}{2}<q<1$.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe consider an open antiferromagnetic Heisenberg spin-$1\/2$ chain with impurity couplings at its two ends, described by the Hamiltonian\n\\begin{equation}\n\\label{eq1}H=\\alpha J(\\vec{S}_1\\cdot\\vec{S}_2+\\vec{S}_{N-1}\\cdot\\vec{S}_N)+J\\sum_{j=2}^{N-2}\\vec{S}_j\\cdot\\vec{S}_{j+1},\n\\end{equation}\nwhere $J>0$ corresponds to the\nantiferromagnetic case, $\\vec{S}_j$ are spin operators, $N$ is the\nlength of the spin chain. The exchange coupling $\\alpha$ is the impurity\ninteraction. For simplicity, $J=1$ is assumed in this paper.\n\nThe entropy is used as a measure of the bipartite entanglement.
If\n$|Gs\\rangle$ is the ground state of a chain of $N$ qubits, a reduced\ndensity matrix of $L$ contiguous qubits can be written as\n\\begin{equation}\n\\label{eq3}\\rho_L=Tr_{N-L}|Gs\\rangle\\langle Gs|.\n\\end{equation}\nThe bipartite entanglement between the right-hand $L$ contiguous\nqubits and the rest of the system can be measured by the entropy\n\\begin{equation}\n\\label{eq4}S_L=-Tr(\\rho_L\\log_2\\rho_L).\n\\end{equation}\nOne property of the entropy of a block of the system is\n\\begin{equation}\n\\label{eq5}S_L=S_{N-L},\n\\end{equation}\nsince the spectrum of the reduced density matrix $\\rho_L$ is the\nsame as that of $\\rho_{N-L}$.\n\n\\section{Entanglement of Entropy with Impurities}\n\nIn order to calculate the entropy accurately using the method of\nDMRG, the length of the spin chain needs to be relatively long. The\nlength of the spin chain is chosen to be $N=256$. The total number\nof density matrix eigenstates held in the system block is\n$m=128$ in the basis truncation procedure.\n\nTo check the accuracy of the results from the method of DMRG, the\nopen boundary condition without impurities, $\\alpha=1$, is\nconsidered. The corresponding result for a finite spin chain, predicted by conformal field theory (CFT), can be considered as a\nbenchmark. It can be written as\n\\begin{equation}\n\\label{eq6}S_L=\\frac{c}{6}\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]+A,\n\\end{equation}\nwhere $c$ is the central charge and $A$ is a non-universal constant\n\\cite{Chiara,Nicolas}. There are large oscillations of the entropy between even and\nodd values of $L$. To avoid these relatively large oscillations,\nthe even-value entropy is chosen. The entropy $S_L$ between\n$L$ contiguous qubits and the remaining $N-L$ qubits is plotted as a\nfunction of the subsystem length $L$ in Fig. 1 for $N=160, 200$ and $256$.\nFor $L<8$, the results of DMRG are slightly lower than those of CFT.\nFor $L>8$, almost perfect agreement between the two results is\nobtained. For large $L$, $S_L$ is smaller for smaller values of $N$,\nwhile for small $L$ there is almost no\ndifference between different values of $N$. It seems that $S_L$\napproaching a constant for very large $L$ is mainly due to the finite-size\neffect. The entropy $S_L$ is also plotted as a function of\n$\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$ in the inset of Fig. 1.\nIt is shown that the entropy appears as a straight line whose slope\nis very close to $c\/6$.\n\nThe entanglement entropy $S_{L}$ is plotted as a function of the\nsubsystem length $L$ for different values of the impurity\ninteraction $\\alpha$ in Fig. 2(a). It is seen that the entropy $S_L$\nincreases with the subsystem length $L$ and then approaches a\nconstant when $L$ is very large for $\\alpha=0.1, 2.0$. When\n$\\alpha=0.3, 0.5$, the entropy $S_L$ decreases slightly, then\nincreases and approaches a constant for very large $L$. The minimal\nvalue is $1.53$ at $L=6$ for $\\alpha=0.3$ and $1.16$ at $L=4$ for\n$\\alpha=0.5$. When $\\alpha=0.1$, $S_L$ approaches a value of about\n$2.4$ for very large $L$. While for $\\alpha=0.3, 0.5, 2.0$, $S_L$\napproaches a value of about $1.6$ for very large $L$. The influence\nof the impurity at the two ends of the Heisenberg spin$-1\/2$ chain\ndepends on the value of the impurity interaction $\\alpha$. For\n$\\alpha=\\alpha_0=0.235$, the alternation in strength between even and\nodd bonds at the center of the spin chain is minimized, almost to zero \\cite{white}.
For $\\alpha<\\alpha_0$, the even\nsublattice is favored. This induces a larger value of $S_L$. While for\n$\\alpha>\\alpha_0$, the odd sublattice is favored. This induces a\nsmaller value of $S_L$. If the value of $\\alpha$ is less than\n$\\alpha_0=0.235$, the effects of the impurity on the entanglement\nentropy $S_L$ are stronger. Therefore, the entanglement entropy $S_L$\nof $\\alpha=0.1$ is much larger than the $S_L$ of $\\alpha=0.3, 0.5$\nand $2.0$ \\cite{Zhao,Es1,white}. The entropy $S_L$ is also plotted\nas a function of $\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$ in the\ninset of Fig. 2(a). It is seen that $S_L$ is almost a straight line as a\nfunction of $\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$ for very\nlarge $L$.\n\nIf $S_{L\\alpha}$ is the entropy with impurities at the two ends and\n$S_{L0}$ is the entropy without impurities, the difference of the\nentropy $\\Delta S_L$ can be defined as\n\\begin{equation}\n\\label{eq7}\\Delta S_L=S_{L\\alpha}-S_{L0}.\n\\end{equation}\nThe entropy difference $\\Delta S_{L}$ may also be called the \"impurity\nentanglement entropy\" that is induced by adding impurities at the two\nends of the spin chain \\cite{Es1}. The entropy difference $\\Delta\nS_{L}$ is plotted as a function of the subsystem length $L$ for\ndifferent values of the impurity interaction $\\alpha$ in Fig. 2(b).\nIt is shown that $\\Delta S_L$ decreases and then approaches a\nconstant as the subsystem length $L$ increases for $\\alpha=0.1, 0.3,\n0.5$. The value of $\\Delta S_L$ increases and then approaches a\nconstant with increasing subsystem length $L$ when\n$\\alpha=2.0$. It seems that the effect of the impurity decreases with\nthe increase of the subsystem length $L$ when $\\alpha<1.0$,\nwhereas the effect of the impurity increases when\n$\\alpha>1.0$. The entropy difference $\\Delta S_{L}$ is also\nplotted as a function of $\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$\nin the inset of Fig. 2(b). It is seen that $\\Delta S_L$ is almost a\nstraight line as a function of\n$\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$ for very large $L$.\nSimilar to that shown in Fig. 2(a), the entropy difference $\\Delta\nS_{L}$ of $\\alpha=0.1$ is much larger than the $\\Delta S_L$ of\n$\\alpha=0.3, 0.5$ and $2.0$. This is mainly due to the fact that a\nsmall value $\\alpha<\\alpha_0=0.235$ can induce a stronger impurity effect\non $\\Delta S_{L}$, since the even sublattice is favored\n\\cite{Zhao,Es1,white}.\n\nFrom Fig. 2, it is seen that both $S_L$ and $\\Delta S_L$\ndecrease when $\\alpha$ increases, especially for small\n$L$. There are large differences in $S_L$ and $\\Delta S_L$ for\ndifferent impurity interactions $\\alpha$ when $L$ is small. If $L$\nis quite large, $S_L$ and $\\Delta S_L$ approach constants. If\n$\\alpha=0.3, 0.5$ and $2.0$, $S_L$ approaches about $1.6$ while\n$\\Delta S_L$ approaches zero for quite large $L$. It seems\nthat the effect of the impurity at the two ends is very small for large\n$L$. If $\\alpha=0.1$, both $S_L$ and $\\Delta S_L$ are quite large.\nIt seems that a small value of the impurity interaction,\n$\\alpha<\\alpha_0=0.235$, can induce a strong effect on $S_L$ and\n$\\Delta S_L$.\n\nThe central charge $c$ in Eq. (6) plays an important role in the\nmeasurement of the entanglement entropy.
The central charge $c$ can\nbe calculated numerically by \\cite{Calabrese,Es1}\n\\begin{equation}\n\\label{eq8}c(L)=6[\\frac{S_{L+2}-S_{L-2}}{T(L+2)-T(L-2)}],\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq9}T(L)=\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)].\n\\end{equation}\nThe central charge, labeled $c(L)$, is plotted in Fig. 3(a) as a\nfunction of the subsystem length $L$ for different values of the\nimpurity interaction $\\alpha$. When $\\alpha=0.1$, the central charge\n$c(L)$ increases to a peak and then decreases slowly with the\nincrease of the subsystem length $L$. The central charge $c(L)$\ndecreases and then approaches a constant with the increase of the\nsubsystem length $L$ when $\\alpha=2.0$. When $\\alpha=0.3, 0.5$, the central\ncharge $c(L)$ increases and almost approaches a constant with the\nincrease of the subsystem length $L$. The central charges $c(L=6)$ and $c(L=4)$ are negative for $\\alpha=0.3$ and $0.5$\nrespectively. This corresponds to the minimum values of $S_L$ shown\nin Fig. 2(a). For $\\alpha>0.235$, the central charge $c(L)$\nincreases with increasing value of $\\alpha$. For $\\alpha=0.1<0.235$,\n$c(L)$ is larger than that of $\\alpha=0.3$; it\ndecreases and finally approaches that of $\\alpha=0.3$ when $L$ is\nvery large. The value of $c(L)$ for $\\alpha=0.1$ is larger than that\nof $\\alpha=0.5$ if $L<20$. If $L>20$, $c(L)$ of $\\alpha=0.1$ is\nsmaller than that of $\\alpha=0.5$. This is mainly due to the\nstronger singlet bonds on the even numbered links of the chain for\n$\\alpha<0.235$ \\cite{white}. Since the central charge may clarify\nthe behavior of the entropy for large subsystems, the\ncentral charge $c(L=80)$ is plotted as a function of the impurity\ninteraction $\\alpha$ in Fig. 3(b) for $1\\ll L(=80)< N\/2(=128)$. It\nis seen that the central charge $c$ reaches a minimum value when the\nimpurity interaction $\\alpha=0.235$. When $\\alpha<0.235$, the\ncentral charge $c$ decreases as the impurity interaction $\\alpha$\nincreases, while it increases as $\\alpha$ increases when $\\alpha>0.235$. It\napproaches $1.0$ when the impurity interaction $\\alpha$ is close to\n$2.0$.\n\nIf the impurities are qutrits with spin-$1$ operators $\\vec{S'}$, the\neffects of qutrit impurities on the entropy can also be\ninvestigated. The entanglement entropy $S_L$ and the difference of\nthe entropy $\\Delta S_{L}$ are plotted in Figs. 4(a) and 4(b)\nrespectively as a function of the subsystem length $L$ for different\nvalues of the impurity interaction $\\alpha$ and different impurity\nspins. The entropy $S_L$ and\nthe difference of the entropy $\\Delta S_{L}$ are also plotted as a\nfunction of $\\log_2[\\frac{N}{\\pi}\\sin(\\frac{\\pi}{N}L)]$ in the\ninsets of Figs. 4(a) and 4(b). Similar to that shown in Fig. 2, both\n$S_L$ and $\\Delta S_L$ of spin-$1$ impurities decrease with the increase of\n$\\alpha$. The entropy $S_L$ increases and then almost approaches a\nconstant as the subsystem length $L$ increases. The entropy for\nimpurities with spin-$1$ is much larger than that with spin-$1\/2$. For\nthe impurity of spin-$1\/2$, the entropies of $\\alpha=0.5$ and\n$\\alpha=2.0$ are almost indistinguishable when the subsystem\nlength $L$ is very large. While for the impurity of spin-$1$, the\ndifference between the entropies of $\\alpha=0.5$ and $\\alpha=2.0$ is\nquite large and approaches a constant when $L$ is very large.
It is\nclear that the effects of qutrit impurities on the entanglement\nentropy are much stronger than those of qubit impurities. It seems that\nit is easier to control the entropy of the system using qutrit\nimpurities.\n\n\\section{Discussion}\n\nIt is clear that the impurity interaction and the impurity spin have\na strong influence on the entanglement of the two subsystems\n\\cite{Es1,Wang1,E02}. For pairwise entanglement between the impurity\nspin and the spin chain, the two boundary spins will have a strong\ntendency to form a singlet pair when the impurity interaction is\nlarge. This will reduce the entanglement between the boundary of the\ntwo spin subsystems and the rest of the system. The value of the\nentanglement entropy is mainly determined by the density-matrix\nspectra, in particular by the few largest eigenvalues of the reduced\ndensity matrix \\cite{U,Zhou,Zhao}. Qubit impurities\ncan affect the entropy between two subsystems by changing the\ndistribution of the reduced density-matrix spectra. If the\nimpurities are qutrits with the same impurity interaction, not only\nis the distribution of the reduced density-matrix spectra changed,\nbut the number of degrees of freedom of the subsystem's\ndensity-matrix spectrum is also enlarged in the Hilbert space. This is similar to the\nbehavior of the entropy with the increase of subsystem size\n\\cite{Latorre,Zhou,Ren}.\n\nThe effects of boundary impurities on the bipartite entanglement in\nan antiferromagnetic Heisenberg open spin chain have been discussed. Using\nthe density-matrix renormalization-group method, the entanglement\nentropy is calculated for even-length subsystems. The\nentanglement entropy decreases with the increase of the impurity\ninteraction, while it increases with the increase of the subsystem\nlength. When the subsystem length is very large, the entanglement\nentropy approaches a constant due to the finite-size effect. The\ninfluence of boundary impurities that are spin-$1$ qutrits is much\nstronger than that of spin-$1\/2$ qubits. With the same impurity\ninteraction, qutrit impurities can increase the entanglement. All\nthe results depend on the selection of even subsystems. This\nshows that the entropy of a system with qutrit impurities can be\nmore easily controlled.\n\n\\vskip 0.4 cm {\\bf Acknowledgements}\n\nIt is a pleasure to thank Yinsheng Ling and Jianxing Fang for their\nmany helpful discussions. The financial support from the National\nNatural Science Foundation of China (Grant No. 10774108) and the\nCreative Project for Doctors of Jiangsu Province of China is\ngratefully acknowledged.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nAxions are the most natural solution to the strong CP problem \\cite{Peccei:1977hh,Weinberg:1977ma,Wilczek:1977pj}.\nThere are significant phenomenological constraints on axions, such that only so-called ``invisible axions\" remain\nas viable candidates (for a review, see \\cite{Kim:2008hd}). These are known as the KSVZ axion \\cite{Kim:1979if,Shifman:1979if} or\nDFSZ axion \\cite{Dine:1981rt,Zhitnitsky:1980tq}. The supersymmetric version of invisible axions was given in \\cite{Nilles:1981py,Kim:1983dt}.\nAxions, with a decay constant $f_a \\sim 10^{12}$ GeV, can make up the dominant form of dark matter in the universe. They form a smooth coherent\nbackground state.
Laboratory experiments such as ADMX \\cite{Rosenberg:2010zz,Carosi:2012zz} and light shining through walls \\cite{Redondo:2010dp,Betz:2013dza} have been designed to look for them. These are based\non the fact that an axion in a strong magnetic field produces light. In a recent paper, Graham and Rajendran \\cite{Graham:2013gfa} have used the generic phenomenon that axions are responsible for a time dependent electric dipole moment of nuclei (see, for example, \\cite{Pospelov:1999ha}). They propose searching for this tiny effect.\n\nOn another line of reasoning, it has been known that a self-gravitating Bose gas can form stable clumps of matter \\cite{Kaup:1968zz,Ruffini:1969qy,Breit:1983nr,Tkachev:1991ka,Wang:2001wq}. In the case of axions these clumps are termed ``axion stars\" \\cite{Barranco:2010ib,Eby:2014fya,Chavanis:2011zm,Guth:2014hsa}. These objects are very dilute with $M_* \\sim 10^{-14} M_\\odot$ and radius $R_a \\sim 10^{-4} R_\\odot$. Recently the possibility of dense axion stars has been discussed \\cite{Braaten:2015eeu}. Note that if primordial density perturbations in the axion field lead to most of the axion background being contained in axion stars, then it will be very difficult to find axions in laboratory experiments.\n\nIn the case of axion stars, other methods for discovery are necessary. One possibility was suggested by Iwazaki \\cite{Iwazaki:2014wta}. He uses the electromagnetic coupling of axions, given by ${\\cal L} \\supset \\frac{\\alpha}{f_a \\pi} a(\\vec{x},t) \\vec{E} \\cdot \\vec{B}$ where $\\vec{E}, \\ \\vec{B}$ are the electric and magnetic fields. Neutron stars have strong magnetic fields and free electrons moving in the thin atmosphere on the surface. When an axion star enters the atmosphere of a neutron star, a time dependent electric field is produced by the axion star in the background magnetic field of the neutron star. Then an immense amount of energy is released due to the time dependent coherent dipole oscillations\nof the electrons. The energy released is at radio frequencies (note that $m_a c^2 = 10^{-5}$ eV corresponds to a frequency $\\frac{m_a c^2}{2 \\pi \\hbar} = 2.5 $ GHz). This suggests that the encounter of axion stars with neutron stars might be the source of Fast Radio Bursts [FRBs] \\cite{Lorimer:2007qn,Keane:2009cz,Thornton:2013iua,Spitler:2014fla}. For a review on FRBs, see \\cite{Katz:2016dti}. There are only 17 known FRBs in the list. Recently a repeating FRB has been discovered \\cite{Spitler:2016dmz}. This may make the chance encounter of an axion star with a neutron star an unlikely source for this phenomenon, but it does not exclude this possibility.\n\nIn this paper we consider the effect of the time dependent electric dipole moment for neutrons in the upper core of the neutron star.\nThese also radiate in the presence of an axion star and we argue that a significant portion of this radiation is coherent. As a consequence, much more energy is available which can be released as FRBs.\n\n \\section*{Axion stars}\nLet us first note some of the parameters for so-called dilute axion stars.
They have a minimum radius, $R_a\/R_\\odot \\sim 3 \\times 10^{-4}$, with a maximum mass, $M_a\/M_\\odot \\sim 6 \\times 10^{-14 \\mp 4}$ for an axion mass, $m_a c^2 \\sim 10^{-4 \\pm 2}$ eV \\cite{Chavanis:2011zm}.\\footnote{$M_\\odot \\approx 2 \\times 10^{30} \\ {\\rm kg} = 1.2 \\times 10^{57} \\ m_N$ and $R_\\odot \\approx 7 \\times 10^8 \\ {\\rm m}$.}\nFor later use we note that in a dilute axion star the central value of the axion field, $a_0$, for an axion star with maximum mass is given by $a_0\/f_a \\sim 10^{-7 \\mp 2}$. In general, we have $a_0\/f_a \\sim 10^{-7 \\mp 2} (M_a\/M_a^{max})^4$ where $R_a\/R_\\odot \\sim 3 \\times 10^{-4} (M_a^{max}\/M_a)$.\n\nFor a dense axion star, on the other hand, the minimum mass is approximately equal to $10^{-20 \\mp 6} \\ M_\\odot$ with a minimum radius of order $R_a\/R_\\odot \\sim 10^{-11 \\mp 2}$. It has a maximum mass of order $M_a\/M_\\odot \\sim 1$ and maximum radius, $R_a\/R_\\odot \\sim 10^{-5}$ \\cite{Braaten:2015eeu}. In this case we have $a_0\/f_a \\sim 1$.\n\nThe flux, $F_a$, of axion stars in the galaxy is roughly given in terms of the energy density of dark matter in the galaxy, $\\rho_{DM} \\sim 0.3 \\ {\\rm GeV\/cm^3}$, times the virial velocity, $v_{vir}\\sim 300 \\ {\\rm km\/s}$, of objects in the galaxy. Thus $F_a \\sim \\left( \\rho_{DM}\/M_a \\right) v_{vir}$. For the calculations in this letter we take $m_a c^2 = 10^{-5}$ eV, corresponding to $f_a \\sim 10^{12}$ GeV, which is in the regime required for a natural abundance of axion cold dark matter \\cite{Kim:2008hd}. This gives a dilute axion star with maximum mass, $M_a\/M_\\odot = 6 \\times 10^{-12}$ or $M_a = 6.6 \\times 10^{45}$ GeV\/c$^2$. Note that the more massive the axion star, the smaller the flux of axion stars. In addition, the denser the axion star, the smaller its radius, and thus the rate of axion star--neutron star collisions is suppressed. For both these reasons, we only consider dilute axion stars in the following. In order to evaluate the rate of axion star--neutron star collisions we need to know the cross-section for this process. This cross-section is given by the area, $A = \\pi b^2$, where $b$ is the impact parameter for the collision. Due to the classical counterpart of the Sommerfeld enhancement we have $b^2 = (R_{ns} + R_a)^2 \\left[1 + \\frac{2 G_N M_{ns}}{(R_{ns} + R_a) v_{vir}^2} \\right]$, where $R_{ns} = 10$ km is the typical radius of a neutron star, $M_{ns} = 1.4 M_\\odot$ is its mass and $G_N$ is Newton's constant. The enhancement factor is $\\frac{2 G_N M_{ns}}{(R_{ns} + R_a) v_{vir}^2} \\sim 20$ and we obtain $A = 3.2 \\times 10^6 \\left( \\frac{R_{ns} + R_a}{220 {\\rm km}} \\right) $ km$^2$. Assuming of order $10^9$ neutron stars in a galaxy we obtain an event rate per galaxy, \\begin{equation} R_{a-ns} \\sim 10^{-6} (6 \\times 10^{-12} M_\\odot\/M_a)\\left( \\frac{R_{ns} + R_a}{220 {\\rm km}} \\right) {\\rm y}^{-1} . \\end{equation} Note that the radius of a dilute axion star increases as its mass decreases. Thus an axion star with mass $M_a\/M_\\odot \\sim 2 \\times 10^{-13}$ and radius $R_a\/R_\\odot \\sim 9 \\times 10^{-3}$ would produce an event rate per galaxy of order $R_{a-ns} \\sim 10^{-3} \\ {\\rm y}^{-1}$. Now, following Iwazaki \\cite{Iwazaki:2014wta}, we note that this is the observed rate of fast radio bursts [FRBs] \\cite{Thornton:2013iua}.\n\n\\section*{Fast Radio Bursts and Axion stars}\n\n FRBs have been observed at a frequency $\\nu_{FRB} \\sim 1.4$ GHz.
This happens to be about the axion oscillation frequency, $\\nu_a = \\frac{m_a c^2}{2 \\pi \\hbar} = 2.5 \\left( \\frac{m_a c^2}{10^{-5} {\\rm eV}} \\right)$ GHz. Moreover, the observed frequency will be redshifted. Iwazaki discussed the FRBs produced when the axion star interacts with the strong magnetic field in the thin atmosphere of the neutron star. It causes coherent dipole oscillations of the electrons, which then radiate at the frequency set by the axion star over a time scale of order a millisecond or less. In the following we describe an additional, and perhaps dominant, effect. FRBs are most likely due to events at redshifts of 0.5 to 1. They are associated with a huge release of energy of order $10^{38} - 10^{40}$ ergs or $6.25 \\times (10^{40} - 10^{42})$ GeV corresponding to a rate of energy production, $P$, exceeding $6.25 \\times (10^{43} - 10^{45})$ GeV\/s.\n\n\\section*{Neutron stars}\n\nThe outer core of neutron stars contains mostly neutrons with 3 to 5\\% free protons and electrons \\cite{Potekhin:2011xe}. The neutron star cools to a temperature, $T \\sim 10^6 {^\\circ}K$, with a strong magnetic field, $B \\geq 10^{12}$ G. When the axion star enters the outer core of the neutron star it induces a time dependent electric dipole moment in the neutrons, $d(t) = d_0 \\sin(\\omega t)$ with $\\omega = m_a c^2\/\\hbar$. We have $d_0 = 2.4 \\times 10^{-16} \\left(\\frac{a_0}{f_a}\\right)$ e cm \\cite{Pospelov:1999ha,Graham:2013gfa} and we take $a_0\/f_a = 10^{-10}$. This time dependent electric dipole moment radiates radio waves with frequency $\\nu_a$. The total power radiated is given by \\begin{equation} P = \\frac{(m_a c^2)^4 \\alpha}{3 \\pi \\hbar} \\left(\\frac{d_0 N_n \\rho}{e \\hbar c}\\right)^2 \\end{equation} where $\\alpha = 1\/137$ is the fine structure constant, $N_n$ is the number of neutrons affected by the axion star and $\\rho$ is the net polarization of the neutron spins given by $\\rho = \\frac{N_{n \\uparrow} - N_{n \\downarrow}}{N_{n \\uparrow} + N_{n \\downarrow}}$.\\footnote{We expect the polarized neutrons to radiate coherently \\cite{Dicke:1954zz}.} The neutrons are polarized by the strong magnetic field. The magnetic field inside a neutron star has been modeled \\cite{Braithwaite:2005xi,Ciolfi:2014jha}. It includes both poloidal and toroidal components. Neutrons have a magnetic dipole moment, $|\\mu_n| \\approx 6 \\times 10^{-14}$ MeV\/T. Thus $2 |\\mu_n| B = 1.2 \\times 10^{-2} (B\/10^{15} {\\rm G})$ MeV and the Boltzmann factor determining the ratio of spin-up to spin-down neutrons, $e^x$, is given by $ x \\equiv \\frac{2 |\\mu_n| B}{k T} = 1.4 \\ (B\/10^{15} {\\rm G})\/(T\/ 10^8 {^\\circ} {\\rm K})$ with $k T = 8.6 \\ (T\/ 10^8 {^\\circ}{\\rm K})$ keV.\\footnote{The neutrons in the neutron star are believed to be in a $^3P_2$ superfluid state \\cite{Schwenk:2003bc,Page:2010aw}. Thus the neutron spins are aligned in the condensate. In addition we have assumed a large magnetic field, associated with the class of neutron stars known as magnetars.} The net polarization is then $\\rho = \\tanh(x\/2) \\approx 0.6$. We then find the power radiated given by\n\\begin{eqnarray} P = & 1.8 \\times 10^{55} \\ \\left( \\frac{m_a c^2}{10^{-5} \\ {\\rm eV}} \\right)^4 \\left( \\frac{N_n}{1.7 \\times 10^{57}} \\ \\frac{\\rho}{0.6} \\right)^2 \\ {\\rm GeV}\/s & \\\\\n= & 2.9 \\times 10^{52} \\ \\left( \\frac{m_a c^2}{10^{-5} \\ {\\rm eV}} \\right)^4 \\left( \\frac{N_n}{1.7 \\times 10^{57}} \\ \\frac{\\rho}{0.6} \\right)^2 \\ {\\rm erg}\/s.
& \\nonumber \\end{eqnarray} Since we have assumed that the total mass of the axion star is $M_a\/M_\\odot \\sim 2 \\times 10^{-13}$ or $M_a \\sim 2.2 \\times 10^{44}$ GeV\/c$^2$, the star loses all its mass to radiation in a small fraction of a second. This pulse of energy would typically last much less than a millisecond. Note, however, that the rate depends sensitively on the axion mass, the net polarization, $\\rho$, of the neutrons in the neutron star core (which depends on the magnitude of the magnetic field and the temperature in the neutron star), the fraction of the neutron star covered by the axion star, i.e. $f = \\frac{N_n}{1.7 \\times 10^{57}}$,\nand the timescale for the traversal of the axion star through the neutron star. The last two effects depend crucially on the amount of stretching of the axion star due to tidal forces and the velocity of the axion star. We will comment on the issue of tidal forces in the next section.\n\nOf course, the next question is whether this radiation escapes from the neutron star. This is a very difficult question which would require serious numerical simulations. But in the absence of this analysis, let me make some simplifying assumptions. There are two properties of the neutron star interior which can dramatically affect the result. First there is a huge magnetic field which is most likely a combination of poloidal and toroidal components. Secondly, about 1\\% of the outer core of the neutron star is a plasma of ionized protons and electrons. It is believed that the protons in the outer core of a $10^{12}$ G neutron star are in a superconducting state. As a result, the magnetic fields either form a lattice of Abrikosov flux tubes for a type II superconductor or an intermediate state of normal and superconducting matter for a type I superconductor \\cite{Huebener:1974ui,PhysRevB.57.3058}. In either case, electromagnetic waves would be severely constrained. However, if the interior magnetic field is large enough, $B > 2 \\times 10^{16}$ G, then it is believed that the core is not superconducting \\cite{Sinha:2015bva,Sinha2015}. Moreover, for smaller magnetic fields, $ 10^{15} \\ {\\rm G} \\leq B \\leq 2 \\times 10^{16} \\ {\\rm G}$, the core may be only partially superconducting \\cite{Sinha:2015bva,Sinha2015}. Hence radiation may then propagate in this non-superconducting core. Neutron stars with surface magnetic fields of order $10^{15}$ G are known as ``magnetars.\" It is believed that magnetars may be just as numerous as neutron stars with magnetic fields in the $10^{12}$ G range \\cite{Duncan:1992hi,Thompson:1993hn,Thompson:1995gw,Duncan:2003,Turolla:2015mwa}.\n\nThe magnetic field might make it easier for the radiation to escape. It can be shown that\nthe optical opacity of a medium is severely suppressed in the presence of a strong magnetic field \\cite{Canuto:1971cd}. In the\nfully ionized atmosphere of a neutron star the opacity has been evaluated in Ref. \\cite{Potekhin:2002nk}. The absorption cross-section for transverse electromagnetic waves with $\\alpha = \\pm 1$ and frequency $\\omega$ is given by (Eqn. 51, Ref. \\cite{Potekhin:2002nk})\n\\begin{equation} \\sigma_\\alpha^{ff} \\approx \\frac{\\omega^2}{(\\omega + \\alpha \\omega_{ce})^2 (\\omega - \\alpha \\omega_{cp})^2 + \\omega^2 \\tilde \\nu_\\alpha^2} \\frac{4 \\pi e^2 \\nu_\\alpha^{ff}}{m_e c} \\end{equation} where $\\omega_{ce}, \\ \\omega_{cp}$ are the cyclotron frequencies of electrons and protons, respectively.
For $\\omega \\ll \\omega_{ce}, \\ \\omega_{cp}$ the absorption cross-section vanishes $\\propto \\omega^2$. In Fig. 7, Ref. \\cite{Potekhin:2002nk}, the opacity was calculated for a plasma with\ndensity $\\rho = 500$ g\/cm$^3$, temperature $T = 5 \\times 10^6 {^\\circ}$ K, and magnetic field $B = 5 \\times 10^{14}$ G. These results will change with a stronger magnetic field. At magnetic fields of order $10^{16}$ G, the proton cyclotron frequency is two orders of magnitude larger than the 3.15 keV of Fig. 7, Ref. \\cite{Potekhin:2002nk}. Moreover, we are interested in radio frequencies of order $10^{-5}$ eV, which are much lower.\n\n\nAssuming we can scale the opacity of Fig. 7, Ref. \\cite{Potekhin:2002nk} (defined for $T \\sim 5\\times 10^6 {^\\circ}$K and $B \\sim 5 \\times 10^{14}$ G) from $\\kappa_0 \\sim 10^{-4.5} \\ cm^2\/g$ at $\\hbar \\omega_0 = 10^3$ eV to lower frequencies with $\\kappa(\\hbar \\omega_a) = (\\frac{\\hbar \\omega_a}{\\hbar \\omega_0})^2 \\ \\kappa_0$ (where $\\hbar \\omega_a = m_a c^2 = 10^{-5}$ eV), we find an extinction coefficient $\\rho \\kappa = \\tau^{-1}$ with $\\tau \\sim 32 \\ (\\frac{\\rho}{10^{14} \\ g\/cm^3})^{-1} \\ km$ for a core density, $\\rho \\sim 10^{14}$ g\/cm$^3$.\\footnote{For a stronger magnetic field the frequency $\\hbar \\omega_0$ would increase. Also note, the core temperature is actually closer to $T \\sim 10^8 {^\\circ}$K.} Clearly most of the energy emitted by that fraction of the axion star which covers the outer core, located approximately 1 km below the surface of the neutron star, makes it out of the neutron star. The rest of the energy goes into heating the plasma.\nFinally, there is necessarily a back reaction on the axion star, as its mass is converted into radiation.\n\n\\section*{Tidal distortion of the axion star}\n\nOnce the axion star approaches a distance $R_{tidal}$ from the neutron star given by \\begin{equation} R_{tidal} \\approx (\\frac{M_{ns}}{M_a})^{1\/3} \\ R_a, \\end{equation} i.e., when the gravitational force at the surface of the axion star is balanced by the tidal forces due to the neutron star, then the axion star stretches in the direction of the neutron star. If it approaches the neutron star with zero impact parameter, $b$, the radio burst may be prolonged \\cite{Pshirkov:2016bjr}, of order a second. However, if it grazes the neutron star at an impact parameter, $b \\gg R_{ns}$, then only a fraction of the axion star will actually cross the neutron star. Then the crossing timescale can be of order milliseconds.\n\n\n\\section*{Conclusion}\n\nIn this paper we discuss the possibility that FRBs are caused by the collisions of dilute axion stars with\nneutron stars of the type known as magnetars. Unlike the previous analysis of Iwazaki \\cite{Iwazaki:2014wta}, which focused on axion stars producing electric dipole radiation from the thin atmosphere of a neutron star, the proposed mechanism can be a much larger volume effect having to do with the\ninduced time dependent electric dipole moment of the neutrons interior to the star. Further calculations are needed to evaluate the viability of the proposed mechanism.\n\n\\section*{Acknowledgments}\n\nI acknowledge illuminating conversations with E. Braaten, C. Hirata, R. Furnstahl, T. Thompson and H.
Zhang.\nI also received partial support for this work from DOE grant DE-SC0011726.\n\n\n\n\n\\clearpage\n\\newpage\n\n\\bibliographystyle{utphys}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nSoftly broken supersymmetry (SUSY) is arguably our best candidate for an explanation of what cuts off the quadratically divergent quantum corrections to the squared Higgs mass in the standard model (SM). Because of these corrections, the SM alone, with a cutoff much higher than the weak scale, suffers from a fine-tuning problem. The mass scale of superpartners effectively represents the cutoff, and above that scale the tuning is no longer necessary. Thus, to completely remove tuning from the Higgs, the natural scale for superpartner masses is on the order of the weak scale.\n\n\n\nOf course, neither the Higgs boson nor any superpartners have been discovered. The bound on the SM Higgs mass is 114.4 GeV \\cite{LEPhiggs}, and this bound, to a good approximation, applies to the lightest Higgs in most of the parameter space of the so-called 'mSUGRA' version of the minimal supersymmetric standard model (MSSM) \\cite{lepSUSYhiggs}. These bounds are strong enough to require fine tuning within the mSUGRA model, the most restrictive being the Higgs mass bound - the physical Higgs mass in mSUGRA lies naturally below the $Z$ mass, and increases above the $Z$ with contributions only logarithmically sensitive to the superpartner mass scale. Superpartner masses (mainly the stop mass) need to be pushed beyond the electroweak scale in order to satisfy the bound.\n\n\nGoing beyond mSUGRA, Higgs and superpartner mass bounds can change - and sometimes radically. Here we study a version of the MSSM with R-parity violation. We also consider regions where the standard gaugino mass relations do not hold. The main phenomenological feature of violating R-parity is that the lightest superpartner is no longer stable and that 'missing energy' is either strongly reduced or eliminated in events with superpartners produced. This single change, allowing the lightest supersymmetric particle (LSP) to decay, reduces bounds on nearly every superpartner (save the chargino) to below 100 GeV, and sometimes well below. This keeps open the possibility of the Higgs boson decaying into superpartners.\n\n\n\nIn this paper, we explore Higgs decays into a pair of LSPs which result in multi-body final states. If the Higgs decays in a non-SM way, it affects the lower bound on the Higgs mass, as the standard LEP searches are less efficient or simply do not apply. Based on a wide range of searches, we estimate that many of these new decays reduce the lower bound on the Higgs mass to 105 GeV or less. While this seems like a small change, the exponential sensitivity of the superpartner ({\\it e.g.}, stop) masses to the needed Higgs mass correction makes this reduction relevant for re-opening SUSY parameter space. The key is beating the standard decay $h \\rightarrow b\\overline{b}$ by a large amount - and this happens in broad regions of parameter space, even when the Higgs sector is in the decoupling limit.\n\n\nThis paper is organized as follows: Section 2 reviews R parity violating operators and their bounds. Section 3 discusses constraints, signatures and parameter space for neutral gaugino LSPs. Section 4 discusses constraints, signatures and parameter space for third generation scalar LSPs, and Section 5 concludes.
In the appendix, we discuss the fit to $e^+ e^-$ hadronic cross section data when a new light charged scalar is in the spectrum.\n\n\n\n\\section{R parity violation}\n\nIn the MSSM one generally imposes a symmetry known as R parity, whereby SM fields and their superpartners are given opposite parity, and only operators with positive parity are allowed to appear. The LSP is then stable: if one starts with N superparticles, one must end with N mod 2, due to their negative parity. However, we are free to introduce parity violating operators into the superpotential:\n\\begin{eqnarray}\n W & \\supset &\n \\mu_i L_i {\\bar H} + \\lambda_{ijk} L_i L_j E^c_k + \\lambda'_{ijk} L_i Q_j D^c_k \\nonumber \\\\ & + & \\lambda''_{ijk} U^c_i D^c_j D^c_k ,\n \\label{eq:rpv}\n \\end{eqnarray}\nwhere $L$, $E^c$, $\\bar{H}$, $Q$, $U^c$, and $D^c$ are lepton doublet, lepton singlet, up-type Higgs, quark doublet, up-type quark singlet, and down-type quark singlet superfields respectively, and $ijk$ are flavor indices. The first three operators violate lepton number conservation, while the fourth violates baryon number conservation. An acceptable theory cannot have both LNV and BNV couplings of non-negligible size, as this would cause rapid proton decay.\n\nFor a general survey of RPV bounds see \\cite{Allanach:1999ic,Bhattacharyya:1997vv}. In general, bounds on LLE operators are of order a few times $10^{-2}$, while LQD and UDD operators are bounded at $10^{-1}$, though a few couplings have much harsher bounds and some are order 1. LNV bounds come from a great variety of sources. Semi-leptonic meson decays put bounds on many products of couplings \\cite{Dreiner:2006gu}, most of which fall in the range $10^{-2}-10^{-4}$. The most stringent bound for LLE operators is on the $\\lambda_{133}$ operator; it comes from contributions to the electron neutrino's Majorana mass and is less than $10^{-3}$ for 100 GeV superpartner masses. Other LLE bounds come from measurements of $R_{\\tau}$ and $R_{\\tau \\mu}$.\nThe LQD constraints come from a variety of sources. Neutrinoless double beta decay puts a strict bound on the $\\lambda'_{111}$ operator of $\\sim 10^{-4} \\times (m_{\\chi}\/100\\;{\\rm GeV})^{1\/2}(m_{\\tilde{e}}\/100\\;{\\rm GeV})^2$. Charged current universality, $D_s$ and $D$ meson decays, and the measurement of $R_{\\tau}$ also place bounds, and in general operators involving the third generation couplings $\\lambda'_{331}, \\lambda'_{332}, \\lambda'_{333}$ and $\\lambda'_{232}$ are relatively unconstrained.\nThe strictest BNV bounds come from double nucleon decay and apply to the first generation coupling, $\\lambda''_{112} \\sim 10^{-7}$. Neutron--antineutron oscillation limits $\\lambda''_{113}$ to $\\sim 10^{-4}$. Other couplings are less constrained, and bounds vary from $10^{-1}$ to 1. A generally safe statement is that, with the exception of a few pathological couplings cited in the literature, nearly all bounds allow for reasonably prompt decay of a superpartner into SM particles. Stringent constraints on some of the first generation RPV operators may be avoided easily by requiring a Yukawa-hierarchical coupling scheme where third generation couplings are dominant; this scheme will be of particular value when considering LNV decays of gaugino LSPs. Here we favor a Yukawa-hierarchical scheme and consider only one RPV coupling at a time to be turned on.
We now go on to list the possible LSPs which both may be lighter than half the Higgs mass and may decay via RPV.\n\n\n\\section{Gaugino LSPs}\n\\subsection{Topology and General Constraints}\nThe Higgs may decay into a pair of neutral gaugino LSPs. The RPV decay then proceeds as each gaugino decays to a fermion and a virtual sfermion, which itself decays via an RPV vertex to two SM particles. It is simplest here to think of the gauginos decaying to three fermions through an effective four-fermion interaction, with coefficient $\\sim g_i\\lambda_{jk}\/M_{\\tilde{f}}^2$. The topology of these decays is Higgs to 6 SM fermions: in the case of BNV we would have Higgs to six jets; for LQD, Higgs to 4 jets plus 2 leptons; and for LLE, Higgs to 6 leptons.\n\nThere are several general constraints on this scenario applicable to all gaugino LSPs.\nThe first is the mass requirement that the gauginos be less than half of the Higgs mass. What usually stands in the way of gauginos this light are assumptions about mass relations in standard SUSY mass schemes. Gauge mediation, anomaly mediation, and MSUGRA relate the gaugino masses through a single mass parameter. There is a tight experimental lower bound on the chargino mass of 102.7 GeV, which then constrains the mass of the lightest neutralino through the standard gaugino mass relations. In addition, most standard mass schemes also predict a gluino much heavier than the other gauginos. For example, MSUGRA predicts the M3:M2 ratio to be about 3:1; thus a chargino passing the lower mass bound implies a heavy gluino. By rejecting the standard gaugino mass relationships we open up large regions of SUSY parameter space.\n\nWe must also address lower mass bounds on sparticles from existing RPV searches. DELPHI and L3 have put lower bounds on the neutralino as the LSP decaying through RPV \\cite{Abdallah:2003xc} \\cite{Achard:2001ek} \\cite{Acciarri:2000nf}. L3's mass limit for a $\\chi_0$ decaying via LQD is 32.6 GeV, while its limits on decay via LLE and UDD are 40.2 GeV and 39.9 GeV respectively. DELPHI has put lower mass limits on decay via LLE and UDD of 39.5 GeV and 38 GeV. At DELPHI, a simple counting experiment was done assuming gaugino mass unification. We will examine the LLE decay count, since it was the lowest; if we avoid the LLE constraint we avoid them all. In the LLE channel DELPHI found only 1.5 events above the background at 95 percent confidence. Recalling that $N= \\epsilon L \\sigma$, where $\\epsilon$ is the efficiency, we see that for $\\epsilon$ between .11 and .38 and an integrated luminosity of 437.8 pb$^{-1}$, the production limit is $\\sim$ .03-.01 pb. If we assume no contribution from charginos, a selectron mass of between 380 and 500 GeV will sufficiently suppress this process for a 30 GeV neutralino. There were additional searches at ALEPH that placed lower bounds of 29 GeV and 23 GeV on LQD and LLE decays of neutralinos respectively \\cite{Barate:1997ra} \\cite{Barate:1998gy}. These analyses also relied on MSUGRA. For example, they put an upper bound on neutralino production from LLE decay which is at least .5 pb. Without MSUGRA, we may easily beat this production bound by picking the selectron mass.
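As a minimal numerical sketch of the production-limit arithmetic used above (the event count, efficiency range and luminosity are the DELPHI numbers just quoted; the function name is ours):\n\\begin{verbatim}\n# Convert an event-count limit into a cross-section limit:\n# sigma < N \/ (efficiency * integrated luminosity).\ndef sigma_limit_pb(n_events, efficiency, lumi_pb):\n    return n_events \/ (efficiency * lumi_pb)\n\n# DELPHI LLE channel: 1.5 events, L = 437.8 pb^-1, eff = 0.11 to 0.38\nfor eff in (0.11, 0.38):\n    print(eff, sigma_limit_pb(1.5, eff, 437.8))  # ~0.031 pb and ~0.009 pb\n\\end{verbatim}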
In the case of the gluino as the LSP, no direct decay bounds are quoted. An indirect search exists which looks for the direct decay of a neutralino produced in the decay of a squark, assuming gaugino mass unification \\cite{Abazov:2001nt}. This search concerns the coupling $\\lambda'_{2jk}$ only and looks for muons in the final state. For reasons which will be explained, this does not fall into our scenarios. Interestingly, a lower mass limit on the gluino is set at 6.3 GeV \\cite{Janot:2003cr}. This is obtained from the contribution to the hadronic decay width of the Z when $e^+ e^- \\rightarrow q\\overline{q}$ and a quark radiates a gluon which decays to a gluino pair.\n\nThe decay of Higgs to neutralinos must also beat the standard Higgs decays if current searches are not to have already ruled out the Higgs. The largest standard decay rate is Higgs $\\rightarrow b\\overline{b}$. The bottom Yukawa coupling is not particularly large, and we shall show that no special fine tuning of parameter space is required for Higgs decays to gauginos to beat the bottom decay by a factor of 5 or so.\n\n\n\nThe next constraints heavily involve the couplings to Z bosons. Decays of light neutral gauginos may not substantially change the Z width, nor may they have a large effect on the predictions for the total $e^+e^-$ to hadrons cross section. These issues may be resolved by decoupling the lightest neutralino sufficiently from the Z, and by making sure that loop-induced couplings of the Z to the gluino are not too great.\n\nThere is a constraint from supersymmetric contributions to $b \\rightarrow s\\gamma$, which come mainly from the charged Higgs and the chargino (diagrams involving the gluino also make contributions but are suppressed by insertions of the CKM matrix). Diagrams with the $\\chi^{+}$ and $H^+$ have a stop in the loop. Avoiding large contributions to this process is accomplished in two ways: if the stop and the charged Higgs are heavy, all diagrams are suppressed, or else the diagrams may cancel. Often there is some tension between a heavy stop, which could tune the Higgs sector, and small $b \\rightarrow s\\gamma$. The current experimental value is $(3.55 \\pm .24^{+.09}_{-.10} \\pm .03) \\times 10^{-4}$ \\cite{Barberio:2007cr}. The current Standard Model theoretical prediction, made at NNLO, is now significantly below the measurement at $(3.15 \\pm .23) \\times 10^{-4}$ \\cite{Misiak:2006zs}, so there is some small space for supersymmetric contributions.\n\nThere are generational constraints on LNV decays which have charged leptons in the final state. Tight constraints from the Tevatron exist on like sign dilepton signals \\cite{Abulencia:2007rd}. This search found a slight excess of like sign dilepton events, 20 events at 95 percent confidence, at an integrated luminosity of 1 fb$^{-1}$. The efficiencies for seeing new physics that produced a WZ were assumed to be around 8 percent. In SUSY optimized scenarios the observed excess was only 8 events and the expected efficiency was slightly lower, but this scenario required a large transverse momentum imbalance, which presents difficulties for a symmetric decay. If we consider that the standard Higgs production cross section at the Tevatron is over $10^3$ fb, we see that if the generational LLE couplings were the same we would have expected at least a 40 event excess in like sign dilepton events, as two neutralinos decaying through a virtual charged slepton will produce like sign dileptons half the time. Taus were not covered in the dilepton search, since it looked for well isolated lepton pairs that could be tracked back to a single vertex. This tells us that the LNV operator must favor third generation processes in order to avoid constraints.
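The 40-event estimate follows directly from the numbers above, taking the like sign fraction of one half quoted in the text:\n\\begin{equation}\nN \\approx \\sigma_h \\times f_{\\rm like\\ sign} \\times \\epsilon \\times L = 10^3\\ {\\rm fb} \\times \\frac{1}{2} \\times 0.08 \\times 1\\ {\\rm fb}^{-1} = 40 .\n\\end{equation}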
Finally, since these decays involve virtual sfermions and small couplings, they may not be prompt. The general decay length has the form\n\\begin{equation}\nL \\sim \\frac{Nm_{\\tilde{m}}^4}{c^2 \\lambda^2 m_{\\chi}^5} \\beta \\gamma\n\\end{equation}\nwhere $\\lambda$ is the RPV coupling, $m_{\\tilde{m}}$ is the sfermion mass, and $c$ and $N$ are coupling and numerical factors specific to the decay. If the decay length is long enough, the decay will have two displaced vertices that are separated from the primary decay vertex. However, if the decay length is too long, the decay will occur outside of the detector and the scenario will be ruled out by existing missing energy searches. In this way we may put bounds on the RPV couplings and sfermion masses. We go on to do specific analyses of gaugino decays.\n\n\\subsection{Neutralino LSP}\n\nThe possibility of a light neutralino LSP decaying via BNV was discussed in detail in Ref. \\cite{Carpenter:2006hs}. Here we discuss the possibility of a light neutralino decaying via LNV as well.\n\nThe LLE signal would look like Higgs to 4 leptons plus missing energy, where each neutralino decays through a virtual slepton.\nFor the LQD channel the neutralino may decay either through a virtual slepton or a virtual squark. The signal would be Higgs to missing energy plus 4 jets, 2 leptons plus 4 jets, or 4 jets plus one lepton plus missing energy. All of the signals for LNV decay of neutralinos may have multiple b quarks.\n\nBNV decays of the Higgs all have a 6 jet topology; however, the flavor antisymmetry of the BNV operator does not allow two down-type quarks of the same flavor to be produced by the same vertex.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics{binoparameterspace.eps}}\n\\caption{Plot of $\\mu$ vs. $\\tan\\beta$ for which the Higgs to neutralino decay rate beats Higgs to $b\\overline{b}$ by a factor of 5 and satisfies the Z width, $b\\rightarrow s\\gamma$ and chargino mass constraints}\n\\label{fig:binos}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics{binovtanb.eps}}\n\\caption{Plot of bino mass vs. $\\tan\\beta$}\n\\label{fig:binovtanb}\n\\end{figure}\n\n\nSince the decay happens through virtual heavy squarks or sleptons, we may calculate the decay length. Here we calculate the decay length for neutralinos decaying via the BNV operator:\n\\begin{eqnarray}\nL & \\simeq & \\frac{384\\pi^2 \\cos^2\\theta_w}{\\alpha \\left| U_{21}\\right|^2 \\lambda^2}\\frac{m_{\\tilde{m}}^4}{m_\\chi^5} ( \\beta\\gamma) \\nonumber\\\\\n& \\sim & \\frac{3\\; \\mu{\\rm m}}{\\left| U_{21}\\right|^2} \\left(\\frac{10^{-2}}{\\lambda}\\right)^2 \\left(\\frac{m_{\\tilde m}}{100\\; {\\rm GeV}}\\right)^4 \\left(\\frac{30\\; {\\rm GeV}}{m_\\chi}\\right)^5 \\frac{p_\\chi}{m_\\chi},\n\\end{eqnarray}\nwhere $|U_{21}|$ is an element of the neutralino mixing matrix and $p_\\chi$ is the neutralino's momentum. The neutralino may decay via the BNV or either LNV operator. For large RPV couplings and small off-shell masses this decay is prompt. However, if $\\lambda^2 m_{\\chi_0}^5\/m_{\\tilde{m}}^4$ is too small, the decay will not happen inside the detector and this process will be ruled out by searches for invisible decays of the Higgs. There is an intermediate range where the decay happens inside the detector but there is a secondary displaced vertex. In this case, the decay products of each $\\chi_0$ will track back to these vertices, which are separate from the initial vertex where the $\\chi_0$'s are made. To illustrate the point we have constructed a plot, assuming the BNV decay of the neutralinos, which shows the RPV coupling vs. off-shell squark mass for different displacements of the secondary vertex.
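A short sketch of how such contours may be generated from the scaling form above (we set $|U_{21}| = \\beta\\gamma = 1$ and fix $m_\\chi = 30$ GeV for illustration; the prefactor is the 3 $\\mu$m normalization of the preceding equation):\n\\begin{verbatim}\n# Decay length in microns from the scaling formula above, normalized\n# to 3 um at lambda = 1e-2, m_sq = 100 GeV, m_chi = 30 GeV.\ndef decay_length_um(lam, m_sq, m_chi=30.0, u21=1.0, betagamma=1.0):\n    return ((3.0 \/ u21**2) * (1e-2 \/ lam)**2\n            * (m_sq \/ 100.0)**4 * (30.0 \/ m_chi)**5 * betagamma)\n\n# Invert for the squark mass giving displacement L at fixed lambda:\n# L ~ lam**-2 * m_sq**4  =>  m_sq = 100 * (L\/3 * (lam\/1e-2)**2)**0.25\nfor L_um in (4.0, 4.0e4, 1.0e6):        # 4 microns, 4 cm, 1 m\n    for lam in (1e-3, 1e-2, 1e-1):\n        m_sq = 100.0 * (L_um \/ 3.0 * (lam \/ 1e-2)**2)**0.25\n        print(L_um, lam, round(m_sq, 1))\n\\end{verbatim}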
Though it is easy to be in a region of this parameter space with a prompt decay, there is also quite a bit of space that allows for a displaced vertex of a few microns.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics{squarks.eps}}\n\\caption{For neutralino masses of 25 and 35 GeV, squark mass vs. RPV coupling for displaced vertices of 4 microns, 4 cm, and 1 m }\n\\label{fig:squarks}\n\\end{figure}\n\nSome attention should be paid to the specific issue of like sign dilepton constraints in LNV scenarios. As mentioned earlier, like sign dilepton measurements force us to consider only decays with tau leptons in the final state. For decays proceeding through the LQD operator we may easily choose to turn on RPV couplings such that only a tau will appear in the final state. The LLE operator requires each neutralino to decay to two charged leptons and a neutrino. Since the LLE operator is antisymmetric in the left handed fields, and we require the charged leptons to be taus, the neutrino may not be a tau neutrino. For example, the decay may proceed via the coupling 323 to two taus and a muon neutrino. In this case the coupling 233 has an equal magnitude, and the decay products are just as likely to contain a tau, a muon and a tau neutrino. In this case $\\frac{1}{8}$ of the events will contain like sign (non-tau) dileptons, and if we assume a standard Higgs production cross section and an efficiency of 8 percent at 1 fb$^{-1}$, we expect to see 10 events over background at 95 percent confidence. This is within the inclusive excess for the Tevatron search.\n\nWe now mention constraints and discuss the parameter space of these decays.\n\n\nIn the case of the UDD and LQD operators there is a contribution to the Z width to hadrons, and the LLE operator contributes to the total Z width. Since our lightest neutralino is mostly bino, we may sufficiently decouple it from the Z to suppress large contributions to the width. All scans show points within $1\\sigma$ of the measured value. We require that the decay of Higgs to neutralinos beat Higgs $\\rightarrow b\\overline{b}$ by a factor of 5. In addition, we have used the decoupling of the M1 parameter from M2 to ensure that we may satisfy the chargino lower bound of 102.7 GeV. We have also plotted points such that the value of $b\\rightarrow s \\gamma$ does not exceed $2\\sigma$ of the measured value while scanning over stop mass parameters and charged Higgs masses. We have plotted points scanning for a Higgs mass of 91 GeV; however, small regions of parameter space exist for Higgs masses as low as 87 GeV.\n\\subsection{Gluino LSP}\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics{gluinoparameterspace.eps}}\n\\caption{Plot of $\\mu$ vs. $\\tan\\beta$ for which the Higgs to gluino decay rate beats Higgs to $b\\overline{b}$ by a factor of 5 and satisfies the Z width, $b\\rightarrow s\\gamma$ and chargino mass constraints}\n\\label{fig:gluinos}\n\\end{figure}\n\nThe second possibility for a gaugino LSP is the gluino. Light gluinos may easily dominate the Higgs decay width, as shown in \\cite{Djouadi:1994js}. Again, gluinos less than half of the Higgs mass are maximally incompatible with the standard gaugino mass ratio predictions. However, light gluinos do have a model building benefit: light gluinos do not contribute substantially to squark masses in loops. When gluinos have mass, there is a two-loop gauge-mediation-like contribution to squark masses which goes like $\\frac{\\alpha_3}{\\pi} M_3^2 \\log(\\frac{\\Lambda}{m})$. Here $\\Lambda$ is the cutoff of the theory, and $M_3$ is the gluino mass parameter.
If the cutoff of the theory is high, the squarks get a large mass contribution and the Higgs sector may become tuned, as noted in \\cite{Schuster:2005py}.\n\n\nThe Higgs decays to gluinos through a triangle diagram involving top squarks and the top quark. The diagram has a loop factor and is cut off by the largest scale in the loop. A similar diagram generates Higgs to gluons, except that only tops or bottoms appear in the loop; in that case the coupling of the Higgs to tops is only the Yukawa coupling, which is not enhanced by mixing. The stop's left-right eigenstates are not its mass eigenstates. If the left-right eigenstates have a large mixing, one stop will be much lighter than the other. In addition, the Higgs coupling to $\\tilde{t}_{1}\\tilde{t}_{1}$ is proportional to the off-diagonal element of the mass matrix, $\\sim m_{t1}^2-m_{t2}^2$. As this coupling becomes large, the decay may easily beat Higgs $\\rightarrow b\\overline{b}$. See figure \\ref{fig:gluinos} for the stop sector parameters needed to beat the $b\\overline{b}$ rate by a factor of 5, where we have plotted parameter space in the limit of light gluinos, $\\sim 20$ GeV or less, and a 100 GeV Higgs.\n\n\nThe gluinos are then free to decay via an LNV or BNV operator. The topology of this decay is similar to that of the neutralino: the decay will proceed through a virtual squark and result in six fermions in the final state. In the case of BNV, the gluino will decay to a quark and an off-shell squark, which will then decay to two quarks; thus the signal is Higgs to six jets. If the gluino were to decay via an LNV vertex, it would be via the LQD operator. In this case the gluino would decay to a quark and an off-shell squark; the squark then decays to a quark and a lepton. The signal would then be Higgs to 4 jets plus missing energy or Higgs to 4 jets plus 2 charged leptons. Because there is no flavor antisymmetry in the LNV operator, these signals may contain multiple quarks of heavy flavor. In particular, if the final state leptons are neutrinos, we may have the signals 4b plus missing energy, 4c plus missing energy, or 2b and 2c plus missing energy (since each gluino is free to decay to bottom or charm pairs). Perhaps the most striking signal would contain 2b, 2c and 2$\\tau$. There may also be signals in which one gluino decays with a neutrino in the final state, while the other decays with a tau. In this case the decay products may contain an odd number of a heavy quark flavor, such as 3b, c, $\\tau$ and missing energy, or 3c, b, $\\tau$ and missing energy.\n\n One might worry that a similar process also contributes to Z to gluinos, which would negatively affect the measured $e^+ e^- \\rightarrow$ hadrons cross section and the total hadronic width of the Z. However, the coupling of the stops to the Z is only a gauge coupling, which cannot compete with their coupling to the Higgs. Further, the coupling of the stops or sbottoms (but not both at the same time) to the Z may be tuned away while remaining large for the Higgs. The $b \\rightarrow s\\gamma$ measurement mostly constrains allowed values of $\\tan\\beta$, as part of the $b \\rightarrow s\\gamma$ amplitude goes like $1\/\\cos\\beta$. Our plot shows parameter space for which the $b\\rightarrow s \\gamma$ prediction is within 2$\\sigma$ of the measured value.\n Like the neutralino, since the decay of the LSP proceeds through an off-shell sparticle, there exists the possibility of a secondary displaced vertex. For sufficiently large RPV coupling and small mass of the off-shell sparticle, the decay will occur inside the detector.
In addition, we note that the decay length in this scenario is likely to be shorter than that of neutralino LSPs, as it goes like $1\/\\alpha_s$ rather than $1\/\\alpha$. Finally, as is the case with neutralinos, like sign dilepton constraints require that the LNV operator have a structure that favors third generation couplings.\n\n\n\n\\section{Scalars}\n\\subsection{Topology and General Constraints}\nThe Higgs may decay to a pair of scalars, which then each decay directly through an R parity violating operator. These decays of the Higgs are usually prompt, and have a 4 particle final state topology.\nThese scenarios are constrained by the chargino lower mass bound and $b\\rightarrow s\\gamma$, as was the gaugino LSP scenario. Since the scalar LSP pair must have opposite charges, like sign dileptons do not constrain this scenario. Again we must ensure that these decays can beat Higgs $\\rightarrow b\\overline{b}$ by a suitable factor. However, as we explain, mass bounds on the scalar LSPs themselves are of the biggest concern.\n\nSeveral experiments have put lower mass bounds or production cross section limits on scalars decaying through RPV. The least stringent bounds for $\\tilde{\\tau}$ decay are from L3 and OPAL respectively and are $m_{\\tilde{\\tau}}> 61$ GeV for LLE and $m_{\\tilde{\\tau}}> 74$ GeV for LQD couplings. For sbottoms the least stringent limit, quoted by L3 for UDD decays, is $m_{\\tilde{b}}> 55$ GeV. These limits would seem to exclude the sbottom and stau as LSPs in our scenario. However, none of these searches looked for RPV decays of these particles in mass windows much below half of the Z mass. Table 1 compiles a list of the lowest masses used in direct scalar searches \\cite{Abdallah:2003xc},\\cite{Achard:2001ek}(table 3),\\cite{Abbiendi:2003rn},\\cite{Heister:2002jc}. It was assumed that particles less than half of the Z mass could be ruled out with Z width measurements and with contributions to $e^{+}e^{-}\\rightarrow hadrons$.\n\nAs we shall elaborate, third generation scalars may have substantial mixing in their mass matrices. By varying the scalar mixing angle, we may change their coupling to the Z. By choosing a small coupling to the Z, we may suppress contributions to $e^{+}e^{-}\\rightarrow hadrons$ and the Z width for scalars with masses below half of the Higgs mass. In this way we may avoid the RPV lower mass bounds for third generation scalars.\n\nFor scalars, the mass eigenstates and left-right eigenstates are not the same. We may write\n\\begin{equation}\ns_1= \\sin\\theta \\,s_L + \\cos\\theta \\,s_R\n\\end{equation}\n\\begin{equation}\ns_2= \\cos\\theta \\,s_L- \\sin\\theta \\,s_R\n\\end{equation}\nwhere $\\theta$ is the $s_1$-$s_2$ mixing angle. The mass matrix for third generation scalars is given by\n\n\n\n\\centerline{$\\left(\n\\begin{array}{cc}\nM_{sL}^2 + m_s^2 + D_{\\tilde s_L} & m_s A \\\\\n m_s A & M_{sR}^2 + m_s^2 + D_{\\tilde s_R}\\\\\n\\end{array}\n\\right)$}\n\nwhere $A=A_{su}-\\mu \\cot\\beta$ for up-type and $A=A_{sb}-\\mu \\tan\\beta$ for down-type scalars. The mixing angle is given by\n\n\\begin{equation}\n\\sin 2\\theta =2 m_s A\/(m_{s1}^2-m_{s2}^2)\n\\end{equation}\n\nFor third generation sparticles it is possible to get large off-diagonal terms, and we may have one mass eigenstate which is much lighter than the other. In general, it is not too hard to achieve a light mass eigenstate less than half of the Higgs mass.
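A minimal numerical sketch of this diagonalization (the soft masses and A term below are invented purely for illustration, and the D-terms are dropped for simplicity):\n\\begin{verbatim}\nimport numpy as np\n\ndef light_eigenstate(MsL2, MsR2, ms, A):\n    # 2x2 third-generation scalar mass matrix, D-terms omitted;\n    # returns the light mass and sin(2 theta).\n    M = np.array([[MsL2 + ms**2, ms * A],\n                  [ms * A, MsR2 + ms**2]])\n    m1sq, m2sq = np.linalg.eigvalsh(M)      # ascending order\n    sin2theta = 2 * ms * A \/ (m1sq - m2sq)\n    return np.sqrt(max(m1sq, 0.0)), sin2theta\n\n# Illustration: sbottom-like inputs with A = A_b - mu*tan(beta);\n# ~200 GeV soft masses can still leave a ~48 GeV light eigenstate.\nprint(light_eigenstate(200.0**2, 205.0**2, 4.5, -8600.0))\n\\end{verbatim}\nThe point of the exercise is that a large off-diagonal entry $m_s A$, rather than small soft masses, is what produces the light state.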
In addition, the coupling of the light mass eigenstate $s_1$ to the Z is controlled by the mixing angle,\n\\begin{equation}\ng_{Zs_1 s_1}=g(I_3 \\sin^2\\theta-Q \\sin^2\\theta_w)\n\\end{equation}\nwhich may be adjusted to vanish. Due to their differing charges, the angle at which the scalar completely decouples will be different for stops, sbottoms and staus. It is worth noting that though third generation scalars can be decoupled from the Z, nothing will decouple them from photons. Thus while we can suppress the contribution of scalars to $e^+ e^- \\rightarrow hadrons$, we cannot eliminate it. Henceforward we will concentrate on the Higgs decaying through stau and sbottom LSPs.\n\n\n\\begin{table}\n\\label{tab:scalarsearches}\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\nEXP & LLE & LQD & UDD\\\\\n\\hline\n\nDELPHI & $\\tilde{\\tau} >$ 45 & - & $\\tilde{b} >$ 45\\\\\nOPAL & $\\tilde{\\tau} >$ 45 & $\\tilde{\\tau} >$ 45 & $\\tilde{b} >$ 45 \\\\\nL3 & $\\tilde{\\tau} >$ 70 & - & $\\tilde{b} >$ 30 \\\\\nALEPH & $\\tilde{\\tau} >$ 45 & $\\tilde{\\tau} >$ 40, $\\tilde{b} >$ 30 & $\\tilde{b} >$ 45 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{LEP2 experiment searches for scalars decaying directly through RPV; lower mass limits in GeV}\n\\end{center}\n\\end{table}\n\n\\subsection{Sbottom LSP}\n\n\n\nThe bottom squark is a good candidate for a light LSP. Its decay may proceed through either an LNV or a BNV operator.\nIn the case of BNV the decay proceeds as Higgs to light sbottoms, with each light sbottom decaying to an up and a down quark directly through the RPV coupling. Because of the down-type flavor antisymmetry no bottoms appear in the final state. The signal is thus Higgs to 4 jets, at most 2 of heavy flavor (charm). LNV decays proceed through the LQD operator, with a signal of two quarks and two leptons. In the case of the LQD operator there is no flavor antisymmetry, thus there may be a b and a $\\overline{b}$ in the final state. If the final state leptons are neutrinos, the most visible decay will be Higgs to 2b plus missing energy; if they are charged, the most visible signal would be $h \\rightarrow 2c + 2\\tau$.\n\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics{sbottombsg2.eps}}\n\\caption{Plot of $\\mu$ vs. $\\tan\\beta$ for which the Higgs decay rate to sbottoms, for sbottom masses between 7.5 and 30 GeV, beats Higgs to $b\\overline{b}$ by a factor of 5 and satisfies the Z width, $b\\rightarrow s\\gamma$ and chargino mass constraints}\n\\label{fig:sbs}\n\\end{figure}\n\n\n\nWe will now discuss the constraints and the parameter space. Long-lived sbottoms are ruled out below 92 GeV. However, as long as the RPV couplings are large enough and the sbottom is sufficiently more massive than its decay products, the decay should be prompt.\nWe see that by adjusting the mixing angle of the sbottoms, we decrease the coupling to the Z. Following the formula quoted at the beginning of this section, we see that this coupling is turned off when $\\sin\\theta \\sim .39$. A lower bound on sbottom masses has been set at 7.5 GeV by measuring contributions to the overall cross section of $e^+e^- \\rightarrow hadrons$ while turning off the sbottom coupling to Z's \\cite{Janot:2004cy}. As quoted in Table 1, the L3 experiment put an effective upper bound of 30 GeV on sbottom squarks decaying through RPV. Previous bounds had been set on the mass splitting of very light sbottoms and the lightest stop \\cite{Carena:2000ka}. These took into account a large stop loop contribution to the Higgs mass and limits on the $\\rho$ parameter, and required a stop lighter than 300 GeV.
In our scenario, however, the bound is even more relaxed, since we do not require a large stop loop contribution to the Higgs mass, and our stop-sbottom splitting is not quite as extreme.\n\nThe decay rate of Higgs to sbottoms must beat that to bottoms by a factor of a few. The ratio of decay rates is given by \\cite{Berger:2002vs} as\n\\begin{equation}\n \\Gamma_{\\tilde{b}}\/ \\Gamma_b= \\frac{\\mu^2 \\tan^2\\beta \\,\\sin^2 2\\theta}{2 m_h^2} (1-4m_{b}^2\/m_h^2)^{1\/2}\n\\end{equation}\nwhich gets large for large values of $\\mu$ and $\\tan\\beta$. In this scenario there are 5 free parameters: the A term, $\\mu$, $\\tan\\beta$, and the soft masses. We have plotted $\\mu$ vs. $\\tan\\beta$ in parameter space for this window of allowed sbottom mixing angles. We see in the plot that lower values of $\\mu$ and $\\tan\\beta$ are ruled out by the upper mass limit on the sbottom: if the product $\\mu\\tan\\beta$ is too small, the off-diagonal elements of the mass matrix will not be big enough to produce a light mass eigenstate. We therefore expect to find parameter space at larger values of $\\tan\\beta$ than we did for neutralino LSPs. The measurements of $e^{+}e^{-} \\rightarrow hadrons$ constrain the allowed values of $\\theta$. Janot tells us the largest allowed mass range for sbottoms occurs between mixing angles $.3 < \\sin\\theta < .45$ \\cite{Janot:2004cy} (see fig 8). For A terms and $\\mu$ terms of a few hundred GeV, we see that we may fall into this range of the mixing parameter if A and $\\mu$ cancel to within 20 or 30 percent of their value. The constraint from $b \\rightarrow s\\gamma$ follows as before, and we scan over stop sector parameters and obey the chargino constraint. However, neither of these constraints is as restrictive as in the case of light neutralinos, since we are free to make the gaugino sector heavier in general.\n\n\\subsection{Stau LSP}\n\n\n\nThe decay of the Higgs may proceed through a stau anti-stau LSP pair. The direct RPV decays of the stau happen through LNV operators only.\n\nFor LQD decays of the stau, the final state will have 4 quarks, 2 up-type and 2 down-type. Because there is no flavor antisymmetry, the final state may consist of all quarks of heavy flavor, $h \\rightarrow b\\overline{b}c\\overline{c}$.\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics{stausigmahadrons.eps}}\n\\caption{Plot of stau mixing vs. hadronic cross section at the Z pole. The lower region is ruled out by PEP, PETRA and TRISTAN measurements, the regions to the left and right by Z pole data from LEP 1.}\n\\label{fig:hadrons}\n\\end{figure}\n\n\n\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics{staumassangle.eps}}\n\\caption{Plot of allowed stau mass vs. stau mixing angle}\n\\label{fig:massangle}\n\\end{figure}\n\n\n\n\n For decays with quarks in the final state, $\\tilde{\\tau}$ masses below half the Z mass are constrained by the measurement of $e^{+} e^{-} \\rightarrow hadrons$. Following the method of Janot for light sbottoms, we may go back to the LEP2, PEP, PETRA, and TRISTAN measurements of the total $e^{+} e^{-} \\rightarrow hadrons$ cross sections to set lower mass bounds on the staus \\cite{Janot:2004cy}. Ref. \\cite{Janot:2004cy} contains a compilation of the total $\\sigma_{had}$ measurements for various experimental energies. We first calculate the absolute minimum allowed $\\tilde{\\tau}$ mass by assuming complete decoupling from the Z and calculating the resulting contribution to $e^{+}e^{-} \\rightarrow hadrons$ from photon exchange alone for different experimental energies.
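For this photon-exchange estimate we assume the standard point-like scalar QED cross section for a charge-1 scalar pair,\n\\begin{equation}\n\\sigma(e^{+}e^{-}\\rightarrow \\tilde{\\tau}^{+}\\tilde{\\tau}^{-}) = \\frac{\\pi\\alpha^2}{3s}\\,\\beta^3 , \\qquad \\beta = \\sqrt{1-4m_{\\tilde{\\tau}}^2\/s} ,\n\\end{equation}\ni.e., a factor $\\beta^3\/4$ times the point-like muon pair cross section, with each stau then decaying to two jets through the LQD vertex.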
We then perform a $\\chi^2$ fit for the hadronic cross section contribution. We find that at 95 percent confidence the lower limit on stau masses is 11 GeV, with a best fit value of 57 GeV. A plot of the LEP and low energy cross section measurements and the cross section predictions for light staus appears in the appendix. There seems to be significant parameter space for light staus. We must note that Janot performed a more sophisticated analysis of the low energy data which may further constrain the lighter stau masses. However, for stau masses heavier than 28 GeV the lowest energy data does not constrain our scenario. We will therefore conservatively consider the viable allowed stau mass window to be above 28 GeV.\n\n\nIn addition we may analyze the Z pole data from LEP 1. As quoted by Janot, the Z pole measurements limit new contributions to the hadronic cross section to 56 pb at 95 percent confidence. Taking these measurements into account, we may exclude stau masses for different couplings of the staus to the Z. When the $\\tilde{\\tau}$ has a small coupling to the Z, we find a mass window between 28 and 45 GeV. Following the formula in section 4, we see that the Z coupling to $\\tilde{\\tau}$ is turned off when $\\sin \\theta=.67$, and the allowed mixing angles range over $.6< \\sin\\theta < .7$ in the case of light staus. We have plotted the total contribution from light stau production at the Z pole vs. mixing angle, as well as the allowed stau mixing angles vs. stau mass, taking into account both Z pole and PEP, PETRA and TRISTAN data.\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics{stauparameterspace.eps}}\n\\caption{Plot of $\\mu$ vs. $\\tan\\beta$ for which the Higgs to staus decay rate, for a Higgs mass of 105 GeV, beats Higgs to $b\\overline{b}$ by a factor of 5 and satisfies the Z width, $b\\rightarrow s\\gamma$, chargino mass, and hadronic cross section constraints}\n\\label{fig:stauspace}\n\\end{figure}\n\n\n\n In principle the parameter space for this decay looks much like the parameter space for sbottoms. However, the constraint on the allowed mixing angle is tighter, since a large contribution to $e^+e^- \\rightarrow hadrons$ comes directly from the photon coupling to the staus, with no coupling to the Z at all. Again, getting a light stau requires large contributions to the off-diagonal mass matrix elements, in this case given by $m_{\\tau}(A-\\mu \\tan\\beta)$. Since $m_{\\tau}$ is small, a large product $\\mu \\tan\\beta$ is required for large mixing, even larger than what was required for light sbottoms. We keep in mind that we do not want excessively high values of $\\mu$, since this would again introduce tuning into the Higgs potential. In addition, $\\tan\\beta$ must not become too large or the bottom Yukawa coupling will become nonperturbative. We also need to choose $\\mu$ and $\\tan\\beta$ such that we may obey the lower mass bound on the lightest chargino. In this case, the only variables affecting the stau sector that are at play in the $b \\rightarrow s\\gamma$ process are $\\tan\\beta$ and $\\mu$; the stau and stop sectors are fairly uncoupled, and $b \\rightarrow s\\gamma$ is not a strong constraint in the light stau scenario. We have plotted parameter space for all constraints.
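As a check of the decoupling angles used here and in the sbottom case, setting the Z coupling of section 4 to zero with $\\sin^2\\theta_w \\simeq 0.23$ gives\n\\begin{equation}\n\\sin^2\\theta = \\frac{Q\\sin^2\\theta_w}{I_3}: \\qquad \\tilde{b}:\\ \\sin^2\\theta = \\frac{(-1\/3)(0.23)}{(-1\/2)} \\simeq 0.15 ,\\ \\sin\\theta \\simeq .39 ; \\qquad \\tilde{\\tau}:\\ \\sin^2\\theta = \\frac{(-1)(0.23)}{(-1\/2)} \\simeq 0.46 ,\\ \\sin\\theta \\simeq .68 ,\n\\end{equation}\nconsistent with the values quoted above.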
The LLE decay of the stau is an interesting subject. If allowed, each stau would decay into a charged lepton plus a neutrino, and the signal would be opposite sign dileptons plus missing energy. The most striking signal here would be $h \\rightarrow 2\\tau + \\not{E}$. A search with a similar signature, gauge mediated decays of the stau to tau plus gravitino \\cite{Abreu:2000nm}, highly constrains this scenario. This search rules out the tau plus missing energy signal for stau masses larger than 2 GeV. It should be noted that this search left a small open window for $m_{\\tau} < m_{\\tilde{\\tau}} < 2\\;{\\rm GeV}$. One might guess that in the case where both staus decay hadronically, contributions to $e^{+}e^{-} \\rightarrow hadrons$ rule out this possibility. However, we see that the PEP, PETRA, and TRISTAN analyses placed cuts specifically to exclude the background from $\\tau^{+}\\tau^{-}$, and it is likely these decays would have been missed (see for example \\cite{Von:1990}). Thus there may be a signal for Higgs to 2 staus plus missing energy in the improbable event that there exist super-light staus. In this case, since there is only a small window between the tau and the stau, if the RPV coupling is small the stau may live for some time before it decays. Then, in addition to a 2$\\tau$ plus missing energy decay, there may also be a secondary displaced vertex. For example, for an LLE coupling of size a few times $10^{-4}$, a 1.8 GeV stau may live for 100 microns. Stau pair decays with one or two light leptons may also occur.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\nIn the R parity violating MSSM without the standard gaugino mass relationships, we find a menagerie of new decays for a light Higgs. We present the list of possible LSPs and topologies in Table 2.\n\n\n\nOne may wonder how all of these signals pass current bounds. There are several current searches that one must consider, and we will first mention the most general of these. DELPHI did an analysis for Higgs decaying to anything, which puts a lower mass bound on the Higgs of 82 GeV \\cite{Abbiendi:2002qp}. There has been a Higgs to missing energy search which put a lower bound on the Higgs mass of 114 GeV \\cite{Schael:2006cr}. This is a concern for us only when the Higgs decay is not prompt, as can be the case for gaugino LSPs. It is easy to avoid falling into this search if the RPV couplings are large or if the squark or slepton masses are of reasonable size. In fact, in the case of squarks, we would not want masses to go much beyond 1 TeV, for then we encounter the same tuning problem in the Higgs potential that we wished to avoid.\n\n\nWe shall now consider searches which apply to decays with all-hadronic final states. First there is the 2 jet flavorless search, which puts a lower bound on the Higgs of 113 GeV \\cite{:2001yb}. In order for this search to be sensitive to our scenario, we would have to force our 4 and 6 jet final states into two jets. Such forcing is used in a set of searches where members of the Higgs multiplet undergo cascade decays \\cite{Abbiendi:2004ww} \\cite{Abdallah:2004wy}. Thus we worry about decays $e^+e^- \\rightarrow H_2Z \\rightarrow H_1H_1Z$, where $H_1$ decays to bottoms or taus. In the all-hadronic case, the final state is 4b. The event is forced into two 2b jets, and the efficiency of a 4b event being picked up by a 2b search is calculated. Using the DELPHI search table we can estimate our efficiency of being picked up by the 2b search. This should give us a good idea about matching our 4 jet signals to 2 jets; for 6 jets the efficiency would be even worse. As an example, notice that to rule out the Higgs at 80 GeV, the 4b search needs about 5.5 times the number of events as the 2b search. Even assuming both of our b's were tagged, our scenario would only be picked up 18 percent of the time.
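This efficiency is just the inverse of the factor quoted above, $\\epsilon_{\\rm eff} \\approx 1\/5.5 \\approx 0.18$, before folding in any additional b-tagging or jet-forcing losses.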
The flavorless search is the same as the 2b search without the b tags \\cite{:2001yb}. LEP performed a search in which HZ was produced and the Higgs decayed to $WW^{*}$ \\cite{Schael:2006ra}. An analysis was done for the final state $Z \\rightarrow \\nu\\nu$, $H \\rightarrow q\\overline{q}q\\overline{q}$, which is relevant to our light stau scenario. This search places a lower mass bound on the Higgs of 105 GeV. In addition, the $WW^{*}$ search considered a final state in which $e^+e^- \\rightarrow HZ\\rightarrow 6q$. In this case, however, cuts are applied which reconstruct the masses of the final state Z and Ws, so this search is not likely to be sensitive to our 4 or 6 jet signals.\n\nWe now turn to searches with quarks and charged leptons in the final states. The cascade decays mentioned above fall into this category and produce final states with 4$\\tau$ and 2b+2$\\tau$. We can compare these final states to all of ours and see that these searches are not directly sensitive to any of our final states. Of our final states without missing energy, the signals that come closest are the six body decay 2b+2c+2$\\tau$ and the four body decay 2c+2$\\tau$. Reconstruction of the cascade decay requires multiple b tags, and it is unlikely that the 2 charms will receive a large b-likeness parameter. In addition to the b tag problem, our 2b+2c+2$\\tau$ decay must force four jets into two, losing efficiency. We do not expect that these searches will constrain our scenario. Cascade decays with even more final state b's and $\\tau$'s occur through the process $e^+e^- \\rightarrow H_2H_2\\rightarrow H_1H_1H_1$. Again, none of these final states match directly onto our signals. Events with 4 or 6 b's require even more b tags, which are unlikely to match up to our 6 particle final states that have at most 2 b's, and none of our final states contain more than two taus.\n\nFinally, there are decays with missing energy in the final state. A compendium of searches that constrain Higgs signals with missing energy is found in \\cite{Chang:2007de}. In particular, this paper quotes upper bounds on the Higgs production cross section for Higgs decays with the final state $2q + \\not E$. These limits do not come from a Higgs search but from the LEP2 squark searches, where sparticles are pair produced and decay to quarks and neutralinos. This signal is identical to our scenario with quarks and missing energy in the final state. In this case a lower bound may be set on the Higgs mass of 103 GeV if the final state quarks are light, and 111 GeV if both of the final state quarks are b quarks. These bounds do not apply in cases where the final state quarks are not of the same flavor. One might also consider the cascade decays $e^+e^- \\rightarrow H_2Z \\rightarrow H_1H_1Z\\rightarrow 4b\/4\\tau$ $Z$. If in this case the Z decays invisibly, the overall signal would be identical to our $H \\rightarrow 4b\/4\\tau + 2\\nu$ final state. The 4b search involved cuts on parameters such as the missing mass and the $\\ln(\\chi_{mZ})$ parameter, where the missing mass is forced to the Z mass. These cuts are bound to exclude our events. Further, we can see from the exclusion plot in fig 12 that even if the efficiency for observing our scenario were 60 percent, all of parameter space would be open down to the 82 GeV Higgs mass bound.
The $4\\tau$ search imposes constraints on the decay topology, such as restrictions on the angle between charged particle pairs, in addition to cuts on the missing momentum, which make it insensitive to our decay. However, in the $WW^{*}$ search an analysis was done for the channel $e^{+}e^{-} \\rightarrow HZ \\rightarrow WW^{*}Z$ where the Ws decay hadronically and the Z invisibly. Here we cannot avoid the constraints, and a lower limit of 105 GeV may be placed on the Higgs mass. Finally, the $WW^{*}$ search analyzed a Higgs channel with charged light leptons and missing energy in the final state. This signal is identical to our final state where scalar LSPs decay through the LLE operator. Even in the case that we consider an LLE operator like 313, where the final state charged leptons may be only taus, the antisymmetry of the RPV operator forces the coupling 133 to be the same size. Thus we get light leptons a quarter of the time, and this search will also constrain the 2 tau plus missing energy channel. The search places an upper bound on the Higgs production cross section of .044 pb, and in our case this translates to a Higgs mass lower bound of around 104 GeV for final states with taus or light leptons plus missing energy. We may lower the bound by turning on more than one RPV coupling at once, decreasing the likelihood that the decay products are symmetric. For example, turning on two couplings at once which are equal in size decreases the Higgs mass bound to 95 GeV.\n\n\nOther asymmetric decays which involve 2 RPV couplings being turned on at once are also listed in Table 2. These scenarios are bizarre enough to be mostly unconstrained. The final state b+c+$\\tau$+$\\nu$ is mentioned in the $WW^{*}$ search; however, the combinatorics of turning on two RPV operators at once allow us to avoid the Higgs mass bound of 95 GeV set by this scenario. The stranger 6 body decays listed in our table are not constrained by any relevant search.\n\n\\begin{table}\n\\label{tab:signals}\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\n LSP & LLE & LQD & UDD\\\\\n\\hline\n\n$\\chi_0$ & 4$\\tau$+$2\\nu$ & 4b\/4c+2$\\nu$, 2b+2c+2$\\nu$, 2b+2c+2$\\tau$, 3b+c+$\\tau$+$\\nu$, b+3c+$\\tau$+$\\nu$ & 2b+2c+2q\\\\\n$\\tilde{g}$ & - & 4b\/4c+2$\\nu$, 2b+2c+2$\\nu$, 2b+2c+2$\\tau$, 3b+c+$\\tau$+$\\nu$, b+3c+$\\tau$+$\\nu$ & 2b+2c+2q \\\\\n$\\tilde{b}$ & - & 2b+2$\\nu$, 2c+2$\\tau$, b+c+$\\nu$+$\\tau$ & 2c+2q \\\\\n$\\tilde{\\tau}$& 2$\\tau$+2$\\nu$ & 2b+2c & - \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Higgs decay signals for all possible LSPs and RPV operators}\n\\end{center}\n\\end{table}\n\n\\begin{table}\n\\label{tab:bounds}\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\n LSP & Signature & Mass Bound & Search\\\\\n\\hline\n\n$\\chi_0$ & $4q+2\\nu$ & 105 GeV & $WW^{*}$ with invisible Z decay\\\\\n$\\tilde{g}$ & $4q+2\\nu$ & 105 GeV & $WW^{*}$ with invisible Z decay \\\\\n$\\tilde{b}$ & $2q+2\\nu$ & 103 GeV & SUSY squark search\\\\\n- & $2b+2\\nu$ & 111 GeV & SUSY squark search\\\\\n- & $4q$ & 105 GeV & $WW^{*}$ with invisible Z decay\\\\\n$\\tilde{\\tau}$ & $\\tau\\overline{\\tau}+2\\nu$ & 104 GeV & $WW^{*}$ \\\\\n- & $l\\overline{l}+2\\nu$ & 104 GeV & $WW^{*}$ \\\\\n- & 4q & 105 GeV & $WW^{*}$ with invisible Z decay \\\\\n\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Higgs Mass Lower Bounds for Various Channels. For decays not listed, current searches do not severely constrain the Higgs mass.}\n\\end{center}\n\\end{table}\n\n\nOverall, we have proposed over a dozen distinct channels for Higgs discovery with multiple heavy flavor particles in their final states.
Some of these decays would be very hard to detect, for example those in which the final state is 4 or 6 jets. However, many of these decays have interesting missing energy signatures, some of which are quite bizarre - for example the b+3c+$\\tau$+$\\nu$ decay of the gauginos. Those decays that proceed through a gaugino LSP have the added bonus of possible secondary displaced vertices. We may imagine modifying existing searches to look in some of these channels, for example modifying the existing 2 or 4 b and $\\tau$ searches to be sensitive to missing energy. In addition, we might hope to detect events which contain bottoms at LHCb, as has been recently proposed \\cite{Kaplan:2007ap}.\n\n\n\\section{Appendix}\n\nBelow we have plotted the hadronic cross section vs. center of mass energy. The points are the measured hadronic cross section minus the SM theory prediction, with two sigma error bars. The curves are the contributions to the hadronic cross section from the production of low energy staus of various masses.\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics{staucross.eps}}\n\\caption{Plot of energy vs. hadronic cross section for low energy data and stau contributions. From the top down, the curves are for staus of mass 10, 15, 22, 28, and 45 GeV }\n\\label{fig:stlength}\n\\end{figure}\n\n\n\n{\\bf Acknowledgments}\n\nThis work was supported in part by funding from the US Department of Energy. We would like to thank Tom Banks and Jason Neilsen for enlightening discussions about signals and constraints.\n\n\\bibliographystyle{apsrev}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Drell-Yan process in hadron-hadron collisions has for many years yielded important information on the quark distributions of hadrons \\cite{DRELL}. In proton-nucleon collisions, for example, the sea quark distribution at medium and large $x$ -- which is overwhelmed in deep inelastic scattering by the valence quarks -- can be directly measured.\nIt was pointed out several years ago \\cite{KUNSZT} that the Drell-Yan process at HERA could also be of interest. In particular, since the photoproduction of lepton pairs arises mainly from $q\\bar{q}$ annihilation, the quark distributions in the photon can be studied. For example, consider the cross section differential in both lepton-pair mass $M$ and rapidity $y$, with the proton direction defining $y>0$. In the centre-of-mass frame of the $\\gamma p$ collision in leading order, the momentum fractions of the annihilating quarks are given simply by $x_{\\gamma} = \\sqrt{\\tau} \\exp(-y)$, $x_{p} = \\sqrt{\\tau} \\exp(y)$, where $\\tau = M^2\/s_{\\gamma p}$.\nAt large negative rapidities, therefore, the quark distribution in the photon is probed at large $x$, while that in the proton is probed at small $x$. Now at HERA, the fact that the photon-proton system has a large positive boost in the lab frame means that $y_{\\rm lab} < 0$ for the lepton pairs already implies a very large negative rapidity in the photon-proton centre-of-mass frame. In fact, as we shall demonstrate below, lepton pairs with $y_{\\rm lab} \\sim -1$ probe $x_\\gamma \\sim 1$ and $x_p \\sim 10^{-4} - 10^{-3}$. In addition, such lepton pairs should be relatively easy to identify, being produced in the \\lq backward' part of the detector, which is relatively free from hadronic activity.\n\nThe importance of such a measurement is that it allows an independent probe of the small $x$ structure of the proton.
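To make the kinematics explicit, consider $M = 4$ GeV at $\\sqrt{s_{\\gamma p}} = 200$ GeV (a leading-order illustration; the boost estimate below assumes typical HERA photon energies):\n\\begin{equation}\nx_\\gamma x_p = \\tau = \\frac{M^2}{s_{\\gamma p}} = \\frac{(4\\ {\\rm GeV})^2}{(200\\ {\\rm GeV})^2} = 4\\times 10^{-4} ,\n\\end{equation}\nso probing $x_\\gamma \\sim 1$ automatically forces $x_p \\sim \\tau \\sim 4\\times 10^{-4}$, at $y = -\\ln(x_\\gamma\/\\sqrt{\\tau}) \\simeq -3.9$ in the $\\gamma p$ frame. Since the $\\gamma p$ system moves forward in the lab with a rapidity of roughly $2-3$, such rapidities are reached already at $y_{\\rm lab} \\sim -1$ to $-2$.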
Recent deep inelastic structure function measurements from the H1 \\cite{H1} and ZEUS \\cite{ZEUS} collaborations at HERA indicate that $F_2(x,Q^2)$ rises steeply at small $x$ and modest $Q^2$, in line with predictions based on the \\lq\\lq perturbative pomeron\" of QCD \\cite{SMALLX}. It is vitally important to obtain additional evidence of this effect, and the Drell-Yan cross section at negative $y_{\\rm lab}$ provides just such a measurement. We note that the ${\\gamma p}$ Drell-Yan cross section has already been advocated as a probe of the small $x$ structure of the {\\it photon} \\cite{KUNSZT,GLUCKA,GLUCKB}, but this corresponds to large, positive $y_{\\rm lab}$ values. In practice, the hadronic activity in this region presumably makes a clean measurement rather difficult.\n\nIn this paper we study the ${\\gamma p}$ Drell-Yan cross section for $y_{\\rm lab} < 0$. To test the discriminating power in the $x_p \\ll 1$ region, we use the recent MRS-D$_0'$ and MRS-D$_-'$ parton distributions \\cite{MRS92P}. The former have \\lq\\lq conventional\" Regge small $x$ behaviour, $xq, xg \\sim x^0$ as $x \\to 0$, while the latter have the more singular $xq, xg \\sim x^{-1\/2}$ behaviour expected from perturbative pomeron arguments \\cite{MRS92P}. Note that the recent $F_2$ data from H1 \\cite{H1} and ZEUS \\cite{ZEUS} at HERA strongly favour the D$_-'$ distributions. A crucial part of our argument is that for $y_{\\rm lab} < 0$ the parton distributions of the photon are being probed in an $x$ region where they are already constrained by $F_2^\\gamma$ data. The difficulty here stems from the fact that in practice the $F_2^\\gamma$ data in the relevant $x,Q^2$ regions are not very precise, and this introduces some uncertainty into the Drell-Yan predictions, which in turn weakens the sensitivity to the proton structure. To try to quantify this, we use a variety of recent photon parton-distribution parametrizations.\n\nThe paper is organised as follows. In the next section we describe the theoretical formalism for computing the Drell-Yan cross section at next-to-leading order in photon-proton collisions. Numerical calculations then quantify the overall event rates and the differences obtained from using different sets of parton distributions. In Section 3 we perform a Monte Carlo event simulation to study in detail the $x$ values which are being probed, and also to investigate the energy and angular distributions of the leptons in the lab. Section 4 contains our conclusions.\n\n\n\\section{The Drell-Yan cross section at next-to-leading order}\n\n\\subsection{Expression for the inclusive cross section}\nIn this study we use both next-to-leading order matrix elements and parton distributions, in the $\\overline{\\rm MS}$ scheme.
We calculate the inclusive cross sections for dilepton production at HERA:\n\\begin{eqnarray}\n\\gamma p & \\rightarrow & l^+l^-+X\\\\\ne p & \\rightarrow & l^+l^-+X \\; ,\n\\end{eqnarray}\nusing the expression from \\cite{GLUCKA}:\n\\begin{eqnarray}\n\\lefteqn{\n{d\\sigma\\over{dM^2}} = {4\\pi\\alpha^2\\over{3M^2}}\\int_\\tau^1{dx_p\\over{x_p}}\n\\int_{\\tau\/x_p}^1{dx_\\gamma\\over{x_\\gamma}}\\sum_{q=u,d,s,c}e_q^2\n }\n\\nonumber\\\\\n&&\n\\times \\Biggl\\{\n\\left[q_p(x_p,Q^2)\\bar q_\\gamma (x_\\gamma,Q^2)+\\bar q_p(x_p,Q^2)\nq_\\gamma(x_\\gamma,Q^2)\\right] \\left[\\delta(1-{\\tau\\over{x_px_\\gamma}})\n+{\\alpha_s(\\mu^2)\\over{2\\pi}}f_{q\\bar{q}} \\left({\\tau\\over{x_px_\\gamma}}\\right)\n\\right]\n \\nonumber\\\\\n&&\n +{\\alpha_s(\\mu^2)\\over{2\\pi}}\\; f_{gq}\\left({\\tau\\over{x_px_\\gamma}}\\right)\n\\nonumber\\\\\n&&\n \\times\\left[g_p(x_p,Q^2)\n\\left(q_\\gamma(x_\\gamma,Q^2)+\\bar q_\\gamma (x_\\gamma,Q^2)\\right)\n+\\left(q_p(x_p,Q^2)+\\bar q_p(x_p,Q^2)\\right)g_\\gamma(x_\\gamma,Q^2)\\right]\n\\nonumber \\\\\n&&\n +{6e_q^2\\alpha\\over{2\\pi}}\\; f_{gq}\\left({\\tau\\over{x_px_\\gamma}}\\right)\n\\left(q_p(x_p,Q^2)+\\bar q_p(x_p,Q^2)\\right)\\delta(1-x_\\gamma)\n\\Biggr\\}\n\\end{eqnarray}\nwhere $\\tau=M^2\/s$ and the higher-order terms $f_{q\\bar{q}}$ and $f_{gq}$ are given in Eq.~(5) of \\cite{GLUCKA}. In this expression, the first term contains the dominant LO $q\\bar{q}$ initial state and the associated real and virtual corrections. The second term contains the $g_p+q_\\gamma$ and $q_p+g_\\gamma$ initial states, and finally the third term is the Compton contribution, given by $q_p+\\gamma\\rightarrow \\gamma^*+q$. Throughout this calculation we set the factorization and renormalization scales at the invariant mass of the dilepton pair; that is, $\\mu^2=Q^2=M^2$.\n\nFor the parton distributions in the photon which appear in the above expression, we take the GRV(NLO) next-to-leading order set \\cite{GRV} as standard, and we probe the sensitivity of the cross sections to the MRS-D$_0'$ and MRS-D$_-'$ proton parton distributions. We also study the sensitivity of the cross sections to different photon parametrizations, to attempt to quantify the uncertainty in extracting information on the small-$x$ proton distributions from this source. To obtain the relevant cross sections for the $ep$ initial state we convolute the cross-section expression with the simplest form of the equivalent photon approximation \\cite{WWA}, in which we have set the energy scale to be $\\hat s=x_px_es$.
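To make the structure of the leading-order term concrete, here is a minimal numerical sketch; the parton densities below are ad hoc toy shapes for illustration only (the real calculation uses the full NLO expression with the GRV and MRS sets named above). At leading order the delta function sets $x_\\gamma = \\tau\/x_p$, leaving a single convolution:\n\\begin{verbatim}\nimport math\n\nALPHA = 1.0 \/ 137.0\nSUM_EQ2 = 4.0\/9 + 1.0\/9 + 1.0\/9 + 4.0\/9     # u, d, s, c\n\n# Toy densities, common shape for all flavors -- illustration only.\ndef q_p(x):   return 0.4 * x**-1.5 * (1 - x)**7   # xq ~ x^-1\/2, D-'-like\ndef q_gam(x): return 0.2 * (x**2 + (1 - x)**2)\n\ndef dsigma_dM2_LO(M, s):\n    # LO piece of the expression above, with x_gamma = tau\/x_p.\n    tau, n, total = M**2 \/ s, 2000, 0.0\n    for i in range(n):                      # midpoint rule in x_p\n        x = tau + (1 - tau) * (i + 0.5) \/ n\n        lum = 2 * q_p(x) * q_gam(tau \/ x)   # q qbar + qbar q, toy sea\n        total += lum \/ x * (1 - tau) \/ n\n    return 4 * math.pi * ALPHA**2 \/ (3 * M**2) * SUM_EQ2 * total\n\nprint(dsigma_dM2_LO(M=4.0, s=200.0**2))     # toy normalization\n\\end{verbatim}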
\\subsection{Event rates and sensitivity to photon and proton parton distributions}\nThe cross-section calculations are performed for the energy parameters of HERA. For the $\\gamma p$ initial state we take $\\sqrt s_{\\gamma p} =200$ GeV, and for the $ep$ initial state we assume $E_e=30$ GeV and $E_p=820$ GeV.\nIn Figure 1 we present the results of the calculation of the cross section $d\\sigma\/dM$ as a function of $M$, where $M$ is the invariant mass of the dilepton pair. The $\\gamma p$ cross section is presented in Figure 1(a) for both the MRS-D$_0'$ and MRS-D$_-'$ parton distributions. The MRS-D$_-'$ cross section is larger, as expected, but the difference between the two decreases substantially as $M$ increases, since at small $M$ the sensitivity to small $x$ in the proton is large and the differences between the two sets of proton distribution functions are accentuated. At $M=2$ GeV\/c$^2$ the cross section is approximately 76~pb\/GeV for the MRS-D$_0'$ case and 155~pb\/GeV for the MRS-D$_-'$ case, and at $M=4$ GeV\/c$^2$ the cross sections have fallen to 10~pb\/GeV and 14~pb\/GeV, respectively. The next-to-leading order contribution to this cross section is approximately 1.5 to 2 times that of the leading order one. In Figure 1(b) the corresponding cross section for the $ep$ initial state is shown. Evidently these cross sections are approximately one order of magnitude smaller than those for $\\gamma p$. The reason is that at these energies ${\\cal L}_{\\gamma p} \\simeq 0.1\\! {\\cal L}_{ep}$. The sensitivity of the $ep$ cross sections to the two sets of parametrizations is somewhat smaller than in the $\\gamma p$ case.\n\nTo emphasize the sensitivity of the Drell-Yan cross sections to the small-$x$ behaviour of the proton distribution functions, we next calculate the rapidity distribution of the dilepton pairs in the $\\gamma p$ case. The kinematics are such that $x_p=\\sqrt\\tau {\\rm e}^{y_{{\\gamma p}}}$ and $x_\\gamma=\\sqrt\\tau {\\rm e}^{-y_{{\\gamma p}}}$, which implies that $x_p\\rightarrow 0$ in the backward direction, which is the region where differences between the MRS-D$_0'$ and MRS-D$_-'$ distributions should be observed.\nIn Figure 2 we present plots of the cross section $d\\sigma\/dMdy$ as a function of $y$ for $M=4, 6$ GeV\/c$^2$. The $M=4$ GeV\/c$^2$ case is shown in Figure 2(a), and we see that at large negative $y$ ($\\equiv y_{\\rm lab}$) the MRS-D$_-'$ cross section is larger than the MRS-D$_0'$ one by a factor larger than 2. At this invariant mass the proton is probed down to $x_p\\simeq 4\\times10^{-4}$ (see below). At smaller $M$ the difference is considerably larger; for instance at $M=2$ GeV\/c$^2$ there is a factor of 4 between the two distributions at large negative rapidities. However, at such low masses data from hadron-hadron collisions suggest that the background to the dilepton cross section arising from $J\/\\psi$ production may be substantial \\cite{DRELL}. In what follows, therefore, we impose a lower limit of $M > 4$ GeV\/c$^2$ on the dilepton mass.\nIn Figure 2(b) the same distribution is plotted for $M=6$ GeV\/c$^2$ and, as expected, the effect washes out, due to the fact that a larger invariant mass forces larger $x_p$ values; at large negative rapidities the cross section ratio MRS-D$_-'$\/MRS-D$_0'$ is reduced to approximately 1.5. In the positive rapidity region the ratio is very nearly 1 in both Figures 2(a),(b).\n\nThe usefulness of this process as a probe of the small-$x$ behaviour of the proton structure function depends on the insensitivity of the cross sections to the large-$x$ behaviour of the photon structure function.
The usefulness of this process as a probe of the small-$x$ behaviour of the
proton structure function depends on the insensitivity of the cross sections
to the large-$x$ behaviour of the photon structure function. To investigate
this, we repeat the calculations using several other sets of photon
distributions, keeping the proton distributions (MRS-D$_-'$) fixed. Figure 3
shows the rapidity distribution $d\sigma/dM\,dy$ for $M=4$ GeV/c$^2$ as a
function of $y$ for various choices of the photon distributions: (i) the
reference GRV(NLO) set \cite{GRV} (solid line), (ii) the GS(NLO) set \cite{GS}
(dashed line) and (iii) the LAC1(LO) set \cite{LAC} (dotted line).
In the interesting region of phase space (large negative rapidities) the
different distributions give roughly the same shape in rapidity but a spread
in normalisation of order $10\%$. This can be understood in terms of the
spread in the predictions of these sets for the photon structure function
$F_2^\gamma$. Figure 4 shows data on $F_2^\gamma$ together with the
predictions of the same three sets as in Figure 3; we have chosen data in a
$Q^2$ range similar to the $M^2$ values relevant to the Drell-Yan cross
section at HERA. The spread in the predictions simply reflects the uncertainty
in the structure-function measurements. More precise measurements of
$F_2^\gamma$ would of course pin down the Drell-Yan cross section more
accurately. In the meantime we should regard the $O(10\%)$ spread in
normalisation as a systematic uncertainty in the extraction of the {\it
proton} distributions from the Drell-Yan data.

\section{Event simulation}
In the previous section we have seen how the $\gamma p$ Drell-Yan cross
section for $y_{\rm lab} < 0$ and $M > 4$ GeV/c$^2$ offers a good possibility
of obtaining information on the quark distribution in the proton at small $x$.
An important question is whether the bulk of these events actually falls
within the acceptance of the HERA detectors. To study this, we perform a Monte
Carlo simulation using the PYTHIA5.6 generator \cite{PYTHIA} for
$\gamma p \to l^+l^- + X$ at $\sqrt{s_{\gamma p}} = 200$ GeV. This simulation
also allows us to investigate which $x_p$ and $x_\gamma$ values give the
dominant contribution to the cross section.

Figure 5 shows the distributions in (a) $x_{\gamma}$ and (b) $x_p$ for
$M > 4$ GeV/c$^2$. The two histograms correspond to $y_{\rm lab} < 0$ (solid
lines) and $y_{\rm lab} > 0$ (dashed lines). The former is dominated by $x_p$
values in the $10^{-4}-10^{-3}$ region, as expected. Selecting only those
events with $M > 4$ GeV/c$^2$, and defining $\vartheta_l$ and $E_l$ to be the
lepton polar angle and energy in the lab frame respectively, Figure 6 shows
the scatter plots of (a) $x_\gamma$ {\it vs.} $\vartheta_l$, (b) $x_p$ {\it
vs.} $\vartheta_l$, (c) $E_l$ {\it vs.} $\vartheta_l$, and (d) $M$ {\it vs.}
$\vartheta_l$. Most of the events have leptons which are produced with
reasonable energy and at sizeable angles to the beam directions
$\vartheta = 0^{\rm o}$, $180^{\rm o}$. Given that we expect very little
additional hadronic activity in these regions of phase space, the acceptance
for both muon and electron pairs should be quite high.
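The qualitative features of Figure 5 can be reproduced with a few lines of
code. The sketch below (our own toy, not the PYTHIA simulation) generates LO
configurations at fixed $M=4$ GeV/c$^2$ by rejection sampling of $x_p$ on the
$\delta$-function surface $x_px_\gamma=\tau$, using schematic densities as
before, and then splits the sample by the sign of the rapidity
$y=\ln(x_p/\sqrt\tau)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, sqrt_s = 4.0, 200.0
tau = (M / sqrt_s) ** 2            # = 4e-4 at these parameters

def weight(xp):
    """Toy LO weight at fixed M, single effective flavour; the delta
    function ties x_gamma = tau/x_p (schematic densities, as before)."""
    xg = tau / xp
    return (0.6 * xp**-0.5) * (xg**2 + (1.0 - xg)**2) / xp

# Rejection sampling in t = ln(x_p); the extra factor x_p below is the
# Jacobian dx_p = x_p dt.  weight(x)*x is bounded by its value at x = tau.
t_lo, t_hi = np.log(tau), 0.0
w_max = weight(tau) * tau
xs = []
while len(xs) < 20000:
    xp = np.exp(rng.uniform(t_lo, t_hi))
    if rng.uniform(0.0, w_max) < weight(xp) * xp:
        xs.append(xp)
xp = np.array(xs)
y = np.log(xp / np.sqrt(tau))      # y_{gamma p} = ln(x_p/sqrt(tau))

back = xp[y < 0.0]                 # backward hemisphere, y < 0
print(f"fraction of events with y < 0 : {back.size / xp.size:.2f}")
print(f"median x_p in that hemisphere: {np.median(back):.1e}")
\end{verbatim}
With these toy densities the bulk of the sample falls at $y<0$ with $x_p$ in
the $10^{-4}$--$10^{-3}$ ballpark, qualitatively echoing the solid histograms
of Figure 5(b).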
\section{Conclusions}

The structure of the proton at small $x$ is an important new area of interest,
and the first $F_2$ data from HERA provide tentative evidence for the
``perturbative pomeron''. The Drell-Yan cross section in the backward rapidity
region should in the near future provide confirmation of this small-$x$
behaviour from a completely different process. The decrease in hadronic
activity as $y_{\rm lab}$ becomes more negative, together with the effect of
the boost on the lepton lab angle, should make this a particularly clean
measurement. Our study has concentrated on $\gamma p$ collisions at
$\sqrt{s_{\gamma p}} = 200$ GeV, tacitly assuming that the final-state
electron can be tagged with reasonable efficiency at HERA. However, as is
clear from Figure 1(b), the effects we describe should also be observable in
the $ep$ cross section.

Observation of the dramatic differences induced by ``standard'' and
``singular'' small-$x$ behaviour in the proton in the $y_{\rm lab} < 0$
Drell-Yan cross section relies on knowledge of the large-$x$ photon structure.
We have shown that the uncertainty in the photon at large $x$ is of the order
of $10\%$, which for $M=4-6$ GeV/c$^2$ is small in comparison with the
D$_0'$/D$_-'$ effect. We stress again that once $F_2^p$ is well measured at
HERA, the Drell-Yan cross section will in turn provide a complementary
measurement of the large-$x$ behaviour of the photon.

\bigskip

\vspace{1cm}
\noindent {\Large\bf Acknowledgments}

\bigskip
KCh would like to thank the Physics Department and Grey College at the
University of Durham for their warm hospitality. The work of KCh was supported
by an EC ``Go--West'' Scholarship. ACB would like to thank the Physics
Department at the University of Durham for its kind hospitality when this
project was conceived, and the Foundation for Research Development for partial
funding. We are grateful to Lionel Gordon for discussions concerning the GS
parametrizations.

\newpage