diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzawol" "b/data_all_eng_slimpj/shuffled/split2/finalzzawol" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzawol" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIt is not difficult to see that for the 2-element group $\\mathbb{Z}_2 =\n\\langle\\{0,1\\}, + \\rangle$, the term operation $m(x,y,z) = x + y + z$ satisfies\nthe equations\n\\begin{equation}\\label{min-eq}\n m(y,x,x) \\approx m(x,y,x) \\approx m(x,x,y) \\approx y.\n\\end{equation}\nA slightly more challenging exercise is to show that a~finite Abelian group\nwill have such a~term operation if and only if it is isomorphic to a~Cartesian\npower of~$\\mathbb{Z}_2$.\n\nA ternary operation $m(x,y,z)$ on a~set $A$ is called a~\\emph{minority\noperation on $A$} if it satisfies the identities~(\\ref{min-eq}). A ternary\nterm $t(x,y,z)$ of an algebra $\\m a$ is a~\\emph{minority term of $\\m a$} if its\ninterpretation as an operation on $A$, $t^{\\m a}(x,y,z)$, is a~minority\noperation on $A$. Given a~finite algebra $\\m a$, one can decide if it has\na~minority term by constructing all of its ternary term operations and checking\nto see if any of them satisfy the equations~(\\ref{min-eq}). Since the set of\nternary term operations of $\\m a$ can be as big as $|A|^{|A|^3}$, this\nprocedure will have a~runtime that in the worst case will be exponential in the\nsize of~$\\m a$.\n\nIn this paper we consider the computational complexity of testing for the\nexistence of a~minority term for finite algebras that are {idempotent}. An\n$n$-ary operation $f$ on a~set $A$ is \\emph{idempotent} if it satisfies the\nequation $f(x, x, \\dots, x) \\approx x$ and an algebra is \\emph{idempotent} if\nall of its basic operations are. We observe that every minority operation is\nidempotent. While idempotent algebras are rather special, one can always form\none by taking the \\emph{idempotent reduct} of a~given algebra $\\m a$. This is\nthe algebra with universe $A$ whose basic operations are all of the idempotent\nterm operations of $\\m a$. It turns out that many important properties of an\nalgebra and the variety that it generates are governed by its idempotent\nreduct~\\cite{kearnes-kiss-book}.\n\nThe condition of an algebra having a~minority term is an example of a~more\ngeneral existential condition on the set of term operations of an algebra\ncalled a~\\emph{strong Maltsev condition}. Such a~condition consists of\na~finite set of operation symbols along with a~finite set of equations\ninvolving them. An algebra is said to satisfy the condition if for each\n$k$-ary operation symbol from the condition, there is a~corresponding $k$-ary\nterm operation of the algebra so that under this correspondence, the equations\nof the condition hold. For a~more careful and complete presentation of this\nnotion and related ones, we refer the reader to~\\cite{Garcia-Taylor}.\n\nGiven a~strong Maltsev condition $\\Sigma$, the problem of determining if\na~finite algebra satisfies $\\Sigma$ is decidable and lies in the complexity\nclass \\compEXPTIME. As in the minority term case, one can construct all term\noperations of an algebra up to the largest arity of an operation symbol in\n$\\Sigma$ and then check to see if any of them can be used to witness the\nsatisfaction of the equations of $\\Sigma$. 
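The following is a~minimal brute-force sketch of this procedure for the minority term case, written in Python purely for illustration; it assumes the algebra is given as a~list of (arity, operation) pairs of callables over a~finite universe, and all names are ours.\n\\begin{verbatim}\nfrom itertools import product\n\ndef ternary_term_operations(A, ops):\n    # Close the three ternary projections under the basic operations.\n    # A is the universe (a list); ops is a list of (arity, function) pairs.\n    # Each ternary term operation is encoded as the tuple of its values on\n    # the triples of A^3, listed in a fixed order.\n    triples = list(product(A, repeat=3))\n    terms = {tuple(t[i] for t in triples) for i in range(3)}  # projections x, y, z\n    changed = True\n    while changed:\n        changed = False\n        for (k, f) in ops:\n            for args in product(terms, repeat=k):\n                g = tuple(f(*(a[j] for a in args)) for j in range(len(triples)))\n                if g not in terms:\n                    terms.add(g)\n                    changed = True\n    return triples, terms\n\ndef has_minority_term(A, ops):\n    # Exhaustive test of the identities m(y,x,x) = m(x,y,x) = m(x,x,y) = y.\n    triples, terms = ternary_term_operations(A, ops)\n    idx = {t: i for i, t in enumerate(triples)}\n    def is_minority(g):\n        return all(g[idx[(y, x, x)]] == y and\n                   g[idx[(x, y, x)]] == y and\n                   g[idx[(x, x, y)]] == y\n                   for x in A for y in A)\n    return any(is_minority(g) for g in terms)\n\\end{verbatim}\n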
In general, we cannot do any better\nthan this, since for some strong Maltsev conditions, it is known that the\ncorresponding decision problem is {\\compEXPTIME}-complete~\\cite{freese-valeriote}.\n\nThe situation for finite idempotent algebras appears to be better than in the\ngeneral case since there are a~number of strong Maltsev conditions for which\nthere are polynomial-time procedures to decide if a~finite idempotent algebra\nsatisfies them~\\cite{freese-valeriote, horowitz-ijac, kazda-valeriote}. At\npresent there is no known characterization of these strong Maltsev conditions\nand we hope that the results of this paper may help to lead to a~better\nunderstanding of them. We refer the reader to~\\cite{Bu-Sa} or\nto~\\cite{bergman-book} for background on the basic algebraic notions and\nresults used in this work.\n\n\n\\section{Formulation of the problem}\n\nIn this section, we formally introduce the considered problem. In all the\nproblems mentioned in the introduction, we assume that the input algebra is\ngiven as a list of tables of its basic operations. In particular, this implies\nthat the input algebra has finitely many operations. We also assume that the\ninput algebra has at least one operation (i.e., the input is non-empty) and we\nforbid nullary operations on the input.\nThe main concern of this paper is the following decision problem.\n\n\\begin{definition}\nDefine \\minority\\ to be the following decision problem:\n\\begin{itemize}\n \\item INPUT: A~list of tables of basic operations of an idempotent algebra~$\\m A$.\n \\item QUESTION: Does $\\m a$ have a~minority term?\n\\end{itemize}\n\\end{definition}\n\nThe size of an input is measured by the following formula. For a finite\nalgebra~$\\m a$, let\n\\[\n \\|\\m a\\| = \\sum_{i = 1}^\\infty k_i|A|^i,\n\\]\nwhere $k_i$ is the number of $i$-ary basic operations of $\\m a$. Since we\nassume that $\\m a$ has only finitely many operations, the sum is finite. Also\nnote that $\\lVert \\m a\\rVert \\geq \\lvert A \\rvert$ since we assumed that $\\m a$\nhas a non-nullary operation.\n\n\n\\section{Minority is a join of two weaker conditions}\n \\label{join}\n\nOne approach to understanding the minority term condition is to see if maybe\nthere exist two weaker Maltsev conditions $\\Sigma_1$ and $\\Sigma_2$ such that a\nfinite algebra $\\m A$ has a minority term if and only if $\\m a$ satisfies both\n$\\Sigma_1$ and $\\Sigma_2$. In this situation, we would say that the minority\nterm condition is the join of $\\Sigma_1$ and $\\Sigma_2$. Were this the case, we\ncould decide if $\\m a$ has a minority term by deciding $\\Sigma_1$ and\n$\\Sigma_2$.\n\nOn the surface, the minority term condition is already quite concise and\nnatural; it is not clear if having a minority term can be expressed as a join\nof weaker conditions. In this section, we show that it is a join of having a\nMaltsev term with a condition which we call having a minority-majority term\n(not to be confused with the `generalized minority-majority' terms\nfrom~\\cite{GMM-paper}). Maltsev terms are a classical object of study in\nuniversal algebra -- deciding if an\nalgebra has them is in \\compP{} for finite idempotent algebras. 
The\nminority-majority terms are much less understood.\n\n\\begin{definition}\n A ternary term $p(x,y,z)$ of an algebra $\\m a$ is a~\\emph{Maltsev term for\n $\\m a$} if it satisfies the equations\n \\[\n p(x,x,y)\\approx p(y,x,x)\\approx y\n \\]\n and a~6-ary term $t(x_1, \\dots, x_6)$ is a~\\emph{minority-majority term} of\n $\\m a$ if it satisfies the equations\n \\begin{align*}\n t(y,x,x,z,y,y)&\\approx y\\\\\n t(x,y,x,y,z,y)&\\approx y\\\\\n t(x,x,y,y,y,z)&\\approx y.\n \\end{align*}\n\\end{definition}\n\nWe point out that if an algebra has a~minority term then it also, trivially,\nhas a~Maltsev term, but that the converse does not hold (as witnessed by the\ncyclic group $\\mathbb{Z}_4$). Our definition of a~minority-majority term is\na~strengthening of the term condition found by Ol\\v{s}\\'{a}k\nin~\\cite{Olsak2017}. Ol\\v{s}\\'{a}k has shown that his terms are a~weakest\nnon-trivial strong Maltsev condition whose terms are all idempotent.\n\nWe observe that by padding variables, any algebra that has a~minority term or\na~majority term (just replace the final occurrence of the variable $y$ in the\nequations~(\\ref{min-eq}) by the variable $x$ to define such a~term) also has\na~minority-majority term. Since the 2-element lattice has a~majority term but\nno minority term, it follows that having a~minority-majority term is strictly\nweaker than having a~minority term.\n\n\\begin{theorem}\\label{thm:join}\n An algebra has a~minority term if and only if it has a~Maltsev term and\n a~minority-majority term.\n\\end{theorem}\n\n\\begin{proof}\nThe discussion preceding this theorem establishes one direction of this\ntheorem. For the other we need to show that if an algebra $\\m a$ has a~Maltsev\nterm $p(x,y,z)$, and a~minority-majority term $t(x_1, \\dots, x_6)$ then $\\m\na$ has a~minority term. Given such an algebra $\\m a$, define\n\\[\n m(x,y,z)=t(x,y,z,p(z,x,y),p(x,y,z),p(y,z,x)).\n\\]\nVerifying that $m(x,y,z)$ is a~minority term for $\\m a$ is\nstraightforward; we show one of the three required equalities here as an example:\n\\begin{align*}\n m(x,x,y)&\\approx t(x,x,y,p(y,x,x),p(x,x,y),p(x,y,x))\\\\\n &\\approx t(x,x,y,y,y,p(x,y,x))\\approx y.\n\\end{align*}\n\\end{proof}\n\n\\begin{corollary}\n The problem of deciding if a~finite algebra has a~minority term can be\n reduced to the problems of deciding if it has a~Maltsev term and if it has\n a~minority-majority term.\n\\end{corollary}\n\nAs was demonstrated in~\\cite{freese-valeriote, horowitz-ijac}, there is\na~polynomial-time algorithm to decide if a~finite idempotent algebra has\na~Maltsev term. Therefore, should testing for a~minority-majority term for\nfinite idempotent algebras prove to be tractable, then this would lead to\na~fast algorithm for testing for a~minority term, at least for finite\nidempotent algebras. From the hardness results found\nin~\\cite{freese-valeriote} it follows that in general, the problem of deciding\nif a~finite algebra has a~minority-majority term is \\compEXPTIME-complete; the\ncomplexity of this problem restricted to idempotent algebras is unknown.\n\n\n\\section{Local Maltsev terms}\n \\label{maltsev}\n\nIn~\\cite{freese-valeriote, horowitz-ijac, kazda-valeriote,\nValeriote_Willard_2014} polynomial-time algorithms are presented for deciding\nif certain Maltsev conditions hold in the variety generated by a~given finite\nidempotent algebra. One particular Maltsev condition that is addressed by all\nof these papers is that of having a~Maltsev term. 
In all but\n\\cite{freese-valeriote}, the polynomial-time algorithm produced is based on\ntesting for the presence of enough `local' Maltsev terms in the given\nalgebra.\n\n\\begin{definition}\n Let $\\m a$ be an algebra and $S \\subseteq A^2\\times\\{0,1\\}$. A term\n operation $t(x,y,z)$ of $\\m a$ is a~\\emph{local Maltsev term operation for\n $S$} if:\n \\begin{itemize}\n \\item whenever $((a,b), 0) \\in S$, $t(a,b,b) = a$, and\n \\item whenever $((a,b), 1) \\in S$, $t(a,a,b) = b$.\n \\end{itemize}\n\\end{definition}\n\nClearly, if $\\m a$ has a~Maltsev term then it has a~local Maltsev term\noperation for every subset $S$ of $A^2 \\times\\{0,1\\}$ and conversely, if $\\m a$\nhas a~local Maltsev term operation for $S = A^2 \\times\\{0,1\\}$ then it has\na~Maltsev term. In~\\cite{horowitz-ijac, kazda-valeriote,\nValeriote_Willard_2014} it is shown that if a~finite idempotent algebra $\\m a$\nhas local Maltsev term operations for all two element subsets of $A^2\n\\times\\{0,1\\}$ then $\\m a$ will have a~Maltsev term. This fact is then used as\nthe basis for a~polynomial-time test to decide if a~given finite idempotent\nalgebra has a~Maltsev term.\n\nIn this section we extract an additional piece of information from this\napproach to testing for a~Maltsev term, namely that if a~finite idempotent\nalgebra has a~Maltsev term, then we can produce an operation table or a~circuit\nfor a~Maltsev term operation in time polynomial in the size of the algebra.\n We will first prove that there is an\nalgorithm for producing circuits for a Maltsev function; the algorithm for\nproducing the operation table will then be given as a~corollary. However, for\nthe reduction presented in Section~\\ref{sec:np} we need only the algorithm\nfor producing a function table.\n\n\nLet us first briefly describe how to get a~global Maltsev operation from local\nones. Assume we know (circuits of) a~local Maltsev term operation\n$t_{a,b,c,d}(x,y,z)$ for each two element subset\n\\[\n \\{((a,b), 0), ((c,d), 1)\\}\n\\]\nof $A^2 \\times\\{0,1\\}$. These are required for $\\m A$ to have a~Maltsev term.\nA global Maltsev term can be constructed from them in two stages: First, we construct, for each $a,b\\in\nA$, an operation $t_{a,b}$ such that $t_{a,b}(a,b,b) = a$ and $t_{a,b}(x,x,y) = y$ for all\n$x,y \\in A$. This is done by fixing an enumeration $(a_1, b_1)$, $(a_2,\nb_2)$, \\dots, $(a_{n^2}, b_{n^2})$ of $A^2$, and then defining, for $1 \\le j\n\\le n^2$, the operation $t_{a,b}^j(x,y,z)$ on $A$ inductively as follows:\n\\begin{itemize}\n \\item $t_{a,b}^1(x,y,z) = t_{a,b,a_1, b_1}(x,y,z)$, and\n \\item for $1 \\le j < n^2$, $t_{a,b}^{j+1}(x,y,z) =\n t_{a,b,u,v}(t_{a,b}^j(x,y,z), t_{a,b}^j(y,y,z), z)$, where $u =\n t_{a,b}^j(a_{j+1}, a_{j+1}, b_{j+1})$ and $v = b_{j+1}$.\n\\end{itemize}\nAn easy inductive argument shows that $t_{a,b}^j(a,b,b) = a$ and\n$t_{a,b}^j(a_i, a_i, b_i) = b_i$ for all $i \\le j \\le n^2$, and so setting\n$t_{a,b}(x,y,z) = t_{a,b}^{n^2}(x,y,z)$ works.\n\nIn the second stage, we construct a~term $t_j(x,y,z)$ such that $t_j(a,a,b) =\nb$ for all $a$, $b \\in A$ and $t_j(a_i, b_i, b_i) = a_i$ for all $i \\le j$. 
We\ndefine this sequence of operations inductively again:\n\\begin{itemize}\n \\item $t_1(x,y,z) = t_{a_1, b_1}(x,y,z)$, and\n \\item for $1 \\le j < n^2$, $t_{j+1}(x,y,z) = t_{u,v}(x, t_j(x,y,y),\n t_j(x,y,z))$, where $u = a_{j+1}$ and $v = t_j(a_{j+1}, b_{j+1},\n b_{j+1})$.\n\\end{itemize}\nAgain, it can be shown that for $1 \\le j \\le n^2$, the operation $t_j(x,y,z)$\nsatisfies the claimed properties and so $t_{n^2}(x,y,z)$ will be a~Maltsev term\noperation for $\\m a$.\n\n\nFrom the above construction, one can obtain a~term that\n represents a Maltsev term operation of the algebra $\\m A$, starting with terms representing the operations $t_{a,b,c,d}$. But there is\nan efficiency problem with this approach:\nthe term is extended by one layer\nin each step, which results in a term of exponential size. Therefore, the\nbookkeeping of this term would increase the running time of the algorithm\nbeyond polynomial. Nevertheless, this can be circumvented by constructing\na~succint representation of the term operations, namely by considering circuits\ninstead of terms.\n\nInformally, a~circuit over an algebraic language (as a~generalization of\nlogical circuits) is a~collection of gates labeled by operation symbols, where\nthe number of inputs of each gate corresponds to the arity of the operation\nsymbol. The inputs are either connected to outputs of some other gate, or\ndesignated as inputs of the circuit; an output of one of the gates is\ndesignated as an~output of the circuit. Furthermore, these connections allow\nfor straightforward evaluation, i.e., there are no oriented cycles.\n\nFormally, we define an $n$-ary \\emph{circuit} in the language of an algebra $\\m A$ as a~directed acyclic graph with possibly multiple edges that has two kinds of vertices: \\emph{inputs} and \\emph{gates}. There are exactly $n$ inputs, labeled by variables $x_1,\\dots, x_n$, and each of them is a~source, and a~finite number of gates. Each gate is labeled by an~operation symbol of $\\m A$, the in-degree corresponds to the arity of the operation, and the in-edges are ordered. One of the vertices is designated as the \\emph{output} of the circuit. We define the size of the circuit to be the number of its vertices.\n\nThe value of a~circuit given an input tuple $a_1,\\dots,a_n$ is defined by the following\nrecursive computation: The value on an input vertex labeled by $x_i$ is $a_i$,\nthe value on a~gate labeled by $g$ is the value of the operation $g^{\\m A}$\napplied to the values of its in-neighbours in the specified order. Finally, the\noutput value of the circuit is the value of the output vertex. It is easy\nto see that the value of a~circuit on a~given tuple can be computed in linear\ntime (in the size of the circuit) in a~straightforward way. For a~fixed circuit\nthe function that maps the input tuple to the output is a~term function of $\\m\nA$. Indeed, to find such a~term it is enough to evaluate the circuit in the\nfree (term) algebra on the tuple $x_1,\\dots,x_n$. The converse is also true\nsince any term can be represented as a~`tree' circuit (it is an oriented tree\nif we omit all input vertices). Many terms can be expressed by considerably\nsmaller circuits. 
We give one such example in\nFigure~\\ref{fig:term-and-circuit}.\n\n\\begin{figure}\n \\[ \\begin{tikzpicture}[ split\/.style = { shape = circle, draw, fill, inner\n sep = 0, minimum size = 2.5 }, circuit logic US ]\n\n \\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (f) at (3,0) {$f$};\n \\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (g) at (0,0) {$g$};\n\n \\node (x) at (-3,.7) {$x$};\n \\node (y) at (-3,0) {$y$};\n \\node (z) at (-3,-.7) {$z$};\n\n \\draw (f.output) -- ++(right:.5) node [right] {$f(g(x,y,y),g(x,y,y),z)$} ;\n\n \\draw (g.output) -- (f.input 2);\n \\draw ($(g.output)!.5!(f.input 2)$) |- (f.input 1);\n \\node [split] at ($(g.output)!.5!(f.input 2)$) {};\n\n \\draw (x.east) -| ($(x.east)!.5!(g.input 1)$) |- (g.input 1);\n\n \\draw (y.east) -- (g.input 2);\n \\draw ($(y.east)!.5!(g.input 2)$) |- (g.input 3);\n \\node [split] at ($(y.east)!.5!(g.input 2)$) {};\n\n \\draw (z.east) -| ($(g.output)!.5!(f.input 2) + (-90:.5) $) |- (f.input 3);\n\n \\end{tikzpicture} \\]\n \\caption{A~succinct circuit representation of the term $f(g(x,y,y),g(x,y,y),z)$.}\n \\label{fig:term-and-circuit}\n\\end{figure}\n\nIn the proof of the theorem below, we will also use circuits with multiple outputs. The only difference in the definition is that several vertices are designated as outputs. Any such circuit then computes a~tuple of term functions.\n\n\\begin{theorem}\\label{maltsevcircuit}\nLet $\\m a$ be a~finite idempotent algebra. There is an algorithm whose\nruntime can be bounded by a~polynomial in the size of $\\m a$ that will either\n(correctly) output that $\\m a$ has no Maltsev term operation, or output\na~circuit for some Maltsev term operation of $\\m a$.\n\\end{theorem}\n\n\\begin{proof}\n Let $n =\n |A|$. Recall that $\\m A$ has at least one\n basic operation of positive arity and hence $\\|\\m A\\|\\geq n$. Let $m\\geq 1$ be the\n maximal arity of an operation of $\\m A$.\n\n We construct a~circuit representing a~Maltsev operation in three steps: The\n first step produces, for each $a$, $b$, $c$, $d$ from $A$, a circuit that computes a local Maltsev term operation $t_{a,b,c,d}$ as defined near the beginning of this section, the second step\n produces circuits that compute $t_{a,b}$, and the final step produces\n a~circuit for a~Maltsev operation $t$. 
We note that the algorithm can fail\n only in the first step.\n\n \\begin{figure}\n \\[\n \\begin{tikzpicture}[ split\/.style = { shape = circle, draw, fill, inner sep = 0,\n minimum size = 2.5 }, node distance = 1.7cm, circuit logic US ]\n \\node [and gate, inputs = {nnn}, point right] (out1) at (0,1) {$t_{a,b,u,v}$};\n \\node [and gate, inputs = {nnn}, point right] (out2) at (0,-1) {$t_{a,b,u,v}$};\n\n \\draw (out1.output) -- ++(right:.5) node [right] {$t^{j+1}_{a,b}(x,y,z)$} ;\n \\draw (out2.output) -- ++(right:.5) node [right] {$t^{j+1}_{a,b}(y,y,z)$} ;\n\n \\node at ($(out1.input 1) + (left:2.5)$) (j1) {$t^j_{a,b}(x,y,z)$};\n \\node at ($(out2.input 1) + (left:2.5)$) (j2) {$t^j_{a,b}(y,y,z)$};\n\n \\draw (j1.east) -- (out1.input 1);\n\n \\draw (j2.east) -- (out2.input 1);\n \\node (split1) [ split ] at ($(j2.east)!.5!(out2.input 1)$) {};\n \\draw (split1) |- (out1.input 2);\n \\draw (split1) |- (out2.input 2);\n\n \\node [split] (split2) at ($(split1) + (-.25,0) - (out1.input 1) + (out1.input 3)$) {};\n \\draw (split2) |- (out1.input 3);\n \\draw (split2) -- (out2.input 3);\n\n \\node [ left of = j1 ] (out1j){};\n \\draw (j1.west) -- (out1j.east);\n \\node [ left of = j2 ] (out2j){};\n \\draw (j2.west) -- (out2j.east);\n\n \\node (x) at (-8,1.5) {$x$};\n \\node (y) at (-8,0) {$y$};\n \\node (z) at (-8,-1.5) {$z$};\n\n \\node [ right of = x ] (in1j){};\n \\node [ right of = y ] (in2j){};\n \\node [ right of = z ] (in3j){};\n\n \\draw (x.east) -- (in1j);\n \\draw (y.east) -- (in2j);\n \\draw (z.east) -- (in3j);\n\n \\node (split3) [split] at ($(z.east) !.5! (in3j.west)$) {};\n \\node (mid) [ below of = j2 ] {$z$};\n \\draw (split3) |- (mid) -| (split2);\n\n \\node [ draw, dashed, fit = (in1j) (in2j) (in3j) (out1j) (out2j) ] {$C^j_{a,b}$};\n \\end{tikzpicture}\n \\]\n \\caption{Recursive definition of circuit $C^{j+1}_{a,b}$.}\n \\label{fig:C-j-ab}\n \\end{figure}\n\n\n Step 1: Circuits for $t_{a,b,c,d}$. For each $a,b,c,d$, we aim to produce a~circuit that computes a local Maltsev term\n operation $t_{a,b,c,d}$. To do this, we consider the subuniverse $R$ of $\\m\n A^2$ generated by $\\{(a,c), (b,c), (b,d)\\}$.\n According to Proposition~6.1 from~\\cite{freese-valeriote} $R$ can be generated in time $O(||\\m a||^2m)$.\n It is clear that $\\m A$ has\n a~local Maltsev term operation $t_{a,b,c,d}$ if and only if $(a,d) \\in R$. Our algorithm produces a circuit for $t_{a,b,c,d}$ by generating elements of $R$ one at a time and keeping track of circuits that witness\n the membership of these elements.\n\n More precisely, we employ a subuniverse generating algorithm to produce a sequence\n $r_1 = (a,c), r_2 = (b,c), r_3 = (b,d), r_4, \\dots$ of elements of $R$ (in time $O(||\\m a||^2m)$) such\n that each $r_{k+1}$, for $k \\ge 3$, is obtained from $r_1,\\dots,r_{k}$ by a~single\n application of an operation $f$ of $\\m A^2$. Our algorithm will also produce a~sequence of\n ternary circuits $C_{a,b,c,d}^3 \\subseteq C_{a,b,c,d}^4 \\subseteq \\dots$ such\n that each $C_{a,b,c,d}^k$ has $k$ outputs, and the values of $C_{a,b,c,d}^k$\n on $r_1,r_2,r_3$ give $r_1,\\dots,r_k$. We define $C_{a,b,c,d}^3$ to be\n the~circuit with no gates, and outputs $x_1$, $x_2$, $x_3$. 
The circuit\n $C_{a,b,c,d}^{k+1}$ is defined inductively from $C_{a,b,c,d}^k$: Consider an\n operation $f$ and $r_{i_1},\\dots,r_{i_p}$ with $i_j \\leq k$ such that\n $r_{k+1} = f(r_{i_1},\\dots,r_{i_p})$; add a~gate labeled $f$ to\n $C_{a,b,c,d}^k$ connecting its inputs with the outputs of $C_{a,b,c,d}^k$\n numbered by $i_j$ for $j = 1,\\dots, p$. We designate the output of this gate\n as the $(k+1)$-st output of $C_{a,b,c,d}^{k+1}$.\n \n\n\n It is straightforward to\n check that the circuits $C_{a,b,c,d}^k$ satisfy the requirements. We also\n note that the size of $C_{a,b,c,d}^k$ is exactly $k$. We stop this inductive\n construction at some step $k$ if $r_k = (a,d)$, in which case we produce the circuit\n $C_{a,b,c,d}$ from $C_{a,b,c,d}^k$ by indicating a~single output to be the\n $k$-th output of $C_{a,b,c,d}^k$. If, on the other hand, we have generated all of $R$ without producing $(a,d)$ at any step then the algorithm halts and outputs that $\\m a$ does not have a Maltsev term operation. The soundness of our algorithm follows from the fact that $\\m a$ has a~local Maltsev term $t_{a,b,c,d}$ if and only if $(a,d) \\in R$\n and that $\\m a$ has a Maltsev term if and only if it has local Maltsev terms $t_{a,b,c,d}$ for all $a$, $b$, $c$, $d \\in A$.\n The algorithm produces circuits of\n size $\\bigO(n^2)$ and spends most of its\n time generating new elements of $R$;\n \n \n generating each $C_{a,b,c,d}$ takes time\n $O(\\|\\m a\\|^2m)$, making the total time complexity of Step 1 to be $\\bigO(\\|\\m\n a\\|^2mn^4)$.\n\n\n Step 2: Circuits for $t_{a,b}$. At this point we assume that the functions $t_{a,b,c,d}$ are part of\n the signature. It is clear that the full circuit can be obtained by\n substituting the circuits $C_{a,b,c,d}$ for gates labeled by $t_{a,b,c,d}$,\n and this can be still done in polynomial time.\n\n Our task is to obtain a~circuit for $t_{a,b}$. We do this by inductively constructing circuits $C^j_{a,b}$ that compute two\n values of the terms $t_{a,b}^j$, namely $t_{a,b}^j(x,y,z)$ and\n $t_{a,b}^j(y,y,z)$. Starting with $j = 0$ and $t^0(x,y,z) = x$, we define\n $C_{a,b}^0$ to be the circuit with no gates and outputs $x,y$. Further, we\n define circuit $C_{a,b}^{j+1}$ inductively from $C_{a,b}^j$ by adding two\n gates labeled by $t_{a,b,u,v}$, where $u = t_{a,b}^j(a_{j+1}, a_{j+1},\n b_{j+1})$ and $v = b_{j+1}$: the first gate has as inputs the two outputs of\n $C_{a,b}^j$ and $z$, the second gate has as inputs two copies of the second\n output of $C_{a,b}^j$ and $z$. See Figure~\\ref{fig:C-j-ab} for a~graphical\n representation. Again, it is straightforward to check that these circuits\n have the required properties. Also note that the size of $C^j_{a,b}$ is\n bounded by $2j+3$ which is a~polynomial. The final circuit $C_{a,b}$\n computing $t_{a,b}$ is obtained from $C_{a,b}^{n^2}$ by designating the first\n output of $C_{a,b}^{n^2}$ to be the only output of $C_{a,b}$. Once we have\n $t_{a,b,c,d}$ in the signature, this process will run in time $\\bigO(n^2)$.\n\n Step 3: Circuit for a Maltsev term. Again, we assume that $t_{a,b}$ are basic operations, and construct\n circuits $C_j$ computing two values $t_j(x,y,y)$ and $t_j(x,y,z)$ of $t_j$\n inductively. The proof is analogous to Step 2, with the only difference that\n we use Figure~\\ref{fig:C-j} for the inductive definition. 
Again the time\n complexity is $\\bigO(n^2)$.\n\n \\begin{figure}\n \\begin{centering}\n \\begin{tikzpicture}[ split\/.style = { shape = circle, draw, fill, inner sep = 0,\n minimum size = 2.5 }, node distance = 1.7cm, circuit logic US ]\n \\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (out1) at (0,1) {$t_{u,v}$};\n \\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (out2) at (0,-1) {$t_{u,v}$};\n\n \\draw (out1.output) -- ++(right:.5) node [right] {$t_{j+1}(x,y,y)$} ;\n \\draw (out2.output) -- ++(right:.5) node [right] {$t_{j+1}(x,y,z)$} ;\n\n \\node at ($(out1.input 3) + (left:2.5)$) (j1) {$t_j(x,y,y)$};\n \\node at ($(out2.input 3) + (left:2.5)$) (j2) {$t_j(x,y,z)$};\n \\node (mid) [ above of = j1 ] {$x$};\n\n \\draw (j1.east) -- (out1.input 3);\n \\draw (j2.east) -- (out2.input 3);\n\n \\node (split1) [ split ] at ($(j1.east)!.5!(out1.input 3)$) {};\n \\draw (split1) |- (out1.input 2);\n \\draw (split1) |- (out2.input 2);\n\n \\node [split] (split2) at ($(split1) + (-.25,0) + (out1.input 1) - (out1.input 3)$) {};\n \\draw (split2) -- (out1.input 1);\n \\draw (split2) |- (out2.input 1);\n\n \\node [ left of = j1 ] (out1j){};\n \\draw (j1.west) -- (out1j.east);\n \\node [ left of = j2 ] (out2j){};\n \\draw (j2.west) -- (out2j.east);\n\n \\node (x) at (-8,1.5) {$x$};\n \\node (y) at (-8,0) {$y$};\n \\node (z) at (-8,-1.5) {$z$};\n\n \\node [ right of = x ] (in1j){};\n \\node [ right of = y ] (in2j){};\n \\node [ right of = z ] (in3j){};\n\n \\draw (x.east) -- (in1j);\n \\draw (y.east) -- (in2j);\n \\draw (z.east) -- (in3j);\n\n \\node (split3) [split] at ($(x.east) !.5! (in1j.west)$) {};\n \\draw (split3) |- (mid) -| (split2);\n\n \\node [ draw, dashed, fit = (in1j) (in2j) (in3j) (out1j) (out2j) ] {$C_j$};\n \\end{tikzpicture}\n \\end{centering}\n \\caption{Recursive definition of circuit $C_{j+1}$.}\n \\label{fig:C-j}\n \\end{figure}\n\n Each step runs in time polynomial in $\\|\\m a\\|$ (the time complexity is\n dominated by Step 1) and outputs a~polynomial\n size circuit. This also implies that expanding the gates according to their\n definitions in Steps 2 and 3 can be done in polynomial time; the final size of\n the output circuit will be bounded by $\\bigO(n^6)$.\n\\end{proof}\n\n\\begin{corollary}\\label{maltsevterm}\n Let $\\m a$ be a~finite idempotent algebra. There is an algorithm whose\n runtime can be bounded by a~polynomial in the size of $\\m a$ that will\n produce the table of some Maltsev term operation of $\\m a$, should one exist.\n\\end{corollary}\n\n\\begin{proof}\n The polynomial-time algorithm is as follows. First, generate a~polynomial size\n circuit for some Maltsev term operation of $\\m a$. This can be done in polynomial time by\n the above theorem. Second, evaluate this circuit at all $\\lvert A\\rvert^3$\n possible inputs. The second step runs in polynomial time since evaluation of\n a~circuit is linear in the size of the circuit.\n\\end{proof}\n\nWe note that there is also a more straightforward algorithm for producing the\noperation table of a Maltsev term which follows the circuit construction but\ninstead of circuits, it remembers the tables for each of the relevant term\noperations.\n\n\n\\section{Local minority terms}\n\nIn contrast to the situation for Maltsev terms highlighted in the previous\nsection, we will show that having plenty of `local' minority terms does not\nguarantee that a~finite idempotent algebra will have a~minority term. 
One\nconsequence of this is that an approach along the lines in~\\cite{horowitz-ijac,\nkazda-valeriote, Valeriote_Willard_2014} to finding an efficient algorithm to\ndecide if a~finite idempotent algebra has a~minority term will not work.\n\nIn this section, we will construct for each odd natural number $n > 2$ a~finite\nidempotent algebra $\\m a_n$ with the following properties: The universe of $\\m\na_n$ has size $4n$ and $\\m a_n$ does not have a~minority term, but for every\nsubset $E$ of $A_n$ of size $n-1$ there is a~term of $\\m a_n$ that acts as\na~minority term on the elements of $E$.\n\nWe start our construction by fixing some odd $n > 2$ and some minority\noperation $m$ on the set $[n] = \\{1, 2, \\dots, n\\}$. To make things concrete\nwe set\n\\[\n m(x,y,z)=\\begin{cases}\n x& y=z\\\\\n y& x=z\\\\\n z& \\text{else,}\n \\end{cases}\n\\]\nbut note that any minority operation on $[n]$ will do.\n\nSince there are two nonisomorphic groups of order 4, we have two different\nnatural group operations on $\\{0,1,2,3\\}$: addition modulo~4, which we will\ndenote by `$+$' (its inverse is `$-$'), and bitwise XOR, which we denote by\n`$\\oplus$' (this operation takes bitwise XOR of the binary representations of\ninput numbers, so for example $1\\oplus 3=2$). Throughout this section, we will\nuse arithmetic modulo 4, e.g., $6x = x + x$, for all expressions except those\ninvolving indices.\n\nThe construction relies on similarities and subtle differences of the two group\nstructures, and the derived Maltsev operations, $x-y+z$ and $x\\oplus y\\oplus z$.\nBoth these operations share a congruence $\\equiv_2$ that is given by taking\nthe remainder modulo 2. We note that $x\\equiv_2 y$ if and only if $2x = 2y$.\n\n\\begin{observation}\\label{obs:maltsev-diff}\n Let $x,y,z\\in \\{0,1,2,3\\}$. Then\n\\[\n (x\\oplus y\\oplus z) - (x - y + z) \\in \\{0,2\\}\n,\\]\nand moreover the result depends only on the classes of\n $x$, $y$, and $z$ in the congruence $\\equiv_2$ (i.e., the least significant\nbinary bits of $x$, $y$, and $z$).\n\\end{observation}\n\\begin{proof}\nBoth Maltsev operations agree modulo $\\equiv_2$, hence the difference lies in\nthe $\\equiv_2$-class of 0.\n\nTo see the second part, it is enough to observe that $x\\oplus 2=x+2=x-2$ for all\n$x$. Hence changing, say $x$ to $x'=x\\oplus 2$ simply flips the most\n significant binary bit\nof both $x\\oplus y\\oplus z$ and $x - y + z$, keeping the difference the same.\n\\end{proof}\n\n\n\\begin{definition}\nLet $A_n=[n]\\times [4]$. 
For $i \\in [n]$, we define $t_i(x,y,z)$ to be the\nfollowing operation on $A_n$:\n\\[\n t_i((a_1,b_1), (a_2,b_2), (a_3,b_3)) =\n \\begin{cases}\n (i,b_1 - b_2 + b_3)\n \\quad\\text{if $a_1 = a_2 = a_3 = i$, and} \\\\\n (m(a_1,a_2,a_3),b_1\\oplus b_2 \\oplus b_3),\n \\quad\\text{otherwise.}\n \\end{cases}\n\\]\nThe algebra $\\m a_n$ is defined to be the algebra with universe $A_n$ and\nbasic operations $t_1,\\dots,t_n$.\n\\end{definition}\n\nBy construction, the following is true.\n\n\\begin{claim}\\label{local}\n For every $(n-1)$-element subset $E$ of $A_n$, there is a~term operation of\n $\\m a_n$ that satisfies the minority term equations when restricted to\n elements from $E$.\n\\end{claim}\n\n\\begin{proof}\n Pick $i\\in [n]$ such that no element of $E$ has its first coordinate equal to\n $i$; the operation $t_i$ is a local minority for this $E$.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop:An}\n For $n > 1$ and odd, the algebra $\\m a_n$ does not have a~minority term.\n\\end{proposition}\n\n\\begin{proof}\n Given some $(i,a)\\in A_n$, we will refer to $a$ as the \\emph{arithmetic part}\n of $(i,a)$. This is to avoid talking about `second coordinates' in\n the confusing situation when $(i,a)$ itself is a~part of a~tuple of elements\n of $A_n$.\n \n \n\n To prove the proposition, we will define a~certain subuniverse $R$ of $(\\m\n a_n)^{3n}$ and then show that $R$ is not closed under any minority operation\n on $A_n$ (applied coordinate-wise). We will write $3n$-tuples of elements of\n $A_n$ as $3n\\times 2$ matrices where the arithmetic parts of the elements\n make up the second column.\n\n Let $R \\subseteq (A_n)^{3n}$ be the set of all $3n$-tuples of the form\n \\[\n \\begin{pmatrix}\n 1&x_1\\\\ 2&x_2\\\\ \\vdots\\\\ n&x_n\\\\\n 1&x_{n+1}\\\\ 2&x_{n+2}\\\\ \\vdots\\\\ n&x_{2n}\\\\\n 1&x_{2n+1}\\\\ 2&x_{2n+2}\\\\ \\vdots\\\\ n&x_{3n}\\\\\n \\end{pmatrix}\n \\]\n such that\n \\begin{align}\n &x_{kn+1} \\equiv_2x_{kn+2} \\equiv_2 \\dots \\equiv_2 x_{kn+n},\n &\\text{for $k=0,1,2$, and} \\label{eqn:bits} \\\\\n &\\sum_{i=1}^{3n} x_i = 2.\\label{eqn:strange-sum}\n \\end{align}\n The three equations from (\\ref{eqn:bits}) mean that the least significant bits of the\n arithmetic parts of the first $n$ entries agree and similarly for the second\n and the last $n$ entries; equation (\\ref{eqn:strange-sum}) can be viewed\n as a~combined parity check on all involved bits.\n\n \\begin{claim}\n The relation $R$ is a~subuniverse of $(\\m a_n)^{3n}$.\n \\end{claim}\n \\begin{proof}\n By the symmetry of the $t_i$'s and $R$, it is enough to show that $t_1$\n preserves $R$. 
Let us take three arbitrary members of $R$:\n \\[\n \\begin{pmatrix}\n 1&x_{1,1}\\\\ 2&x_{1,2}\\\\ \\vdots\\\\ n&x_{1,n}\\\\\n 1&x_{1,n+1}\\\\ 2&x_{1,n+2}\\\\ \\vdots\\\\ n&x_{1,2n}\\\\\n 1&x_{1,2n+1}\\\\ 2&x_{1,2n+2}\\\\ \\vdots\\\\ n&x_{1,3n}\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&x_{2,1}\\\\ 2&x_{2,2}\\\\ \\vdots\\\\ n&x_{2,n}\\\\\n 1&x_{2,n+1}\\\\ 2&x_{2,n+2}\\\\ \\vdots\\\\ n&x_{2,2n}\\\\\n 1&x_{2,2n+1}\\\\ 2&x_{2,2n+2}\\\\ \\vdots\\\\ n&x_{2,3n}\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&x_{3,1}\\\\ 2&x_{3,2}\\\\ \\vdots\\\\ n&x_{3,n}\\\\\n 1&x_{3,n+1}\\\\ 2&x_{3,n+2}\\\\ \\vdots\\\\ n&x_{3,2n}\\\\\n 1&x_{3,2n+1}\\\\ 2&x_{3,2n+2}\\\\ \\vdots\\\\ n&x_{3,3n}\\\\\n \\end{pmatrix}\n \\]\n and apply $t_1$ to them to get:\n \\begin{equation}\n \\vec r =\n \\begin{pmatrix}\n 1&x_{1,1}-x_{2,1}+x_{3,1}\\\\\n 2&x_{1,2}\\oplus x_{2,2}\\oplus x_{3,2}\\\\\n &\\vdots\\\\\n n&x_{1,n}\\oplus x_{2,n}\\oplus x_{3,n}\\\\\n 1&x_{1,n+1}-x_{2,n+1}+x_{3,n+1}\\\\\n 2&x_{1,n+2}\\oplus x_{2,n+2}\\oplus x_{3,n+2}\\\\\n & \\vdots\\\\\n n&x_{1,2n}\\oplus x_{2,2n}\\oplus x_{3,2n}\\\\\n 1&x_{1,2n+1}-x_{2,2n+1}+x_{3,2n+1} \\\\\n 2&x_{1,2n+2}\\oplus x_{2,2n+2}\\oplus x_{3,2n+2}\\\\\n & \\vdots\\\\\n n&x_{1,3n}\\oplus x_{2,3n}\\oplus x_{3,3n}\\\\\n \\end{pmatrix}\n \\label{eqn:r}\n \\end{equation}\n We want to verify that $\\vec r\\in R$. First note that (\\ref{eqn:bits}) is\n satisfied: This follows from the fact that $x-y+z$ and $x\\oplus y\\oplus z$\n give the same result modulo 2, and the assumption that the original three\n tuples satisfied (\\ref{eqn:bits}).\n\n What remains is to verify the property~(\\ref{eqn:strange-sum}). If in the\n equality~(\\ref{eqn:r}) above we replace the operations $\\oplus$ by $-$ and\n $+$, verifying~(\\ref{eqn:strange-sum}) is easy: The sum of the\n arithmetic parts of such a modified tuple is\n \\begin{equation}\n \\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})=\n \\sum_{j=1}^{3n}\n x_{1,j}-\\sum_{j=1}^{3n}x_{2,j}+\\sum_{j=1}^{3n}x_{3,j}=2-2+2=2.\n \\label{eqn:2}\n \\end{equation}\n This is why we need to examine the difference between the $\\oplus$-based\n and $+$-based Maltsev operations. For $k\\in \\{0,1,2\\}$ and $i\\in\n \\{1,\\dots,n\\}$ we let\n \\[\n c_{k,i} = (x_{1,kn+i} \\oplus x_{2,kn+i} \\oplus x_{3,kn+i}) -\n (x_{1,kn+i} - x_{2,kn+i} + x_{3,kn+i})\n \\]\n By the second part of Observation~\\ref{obs:maltsev-diff}, $c_{k,i}$ does not\n depend on $i$ (changing $i$ does not change\n the $x_{j,kn+i}$'s modulo $\\equiv_2$ by condition~(\\ref{eqn:bits}) in the\n definition of $R$). Hence we can write just $c_k$ instead of $c_{k,i}$.\n\n Using $c_0$, $c_1$, and\n $c_2$ to adjust for the differences between the two Maltsev operations, we can express the\n sum of the arithmetic parts of the tuple $\\vec{r}$ as\n \\[\n \\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})+\\sum_{i=2}^{n}\n c_0+\\sum_{i=2}^{n} c_1+\\sum_{i=2}^{n} c_2\n = 2+(n-1)(c_0+c_1+c_2)\n \\]\n where we used~(\\ref{eqn:2}) to get the right hand side. We chose $n$ odd,\n hence $n-1$ is even and each $c_k$ is even by\n Observation~\\ref{obs:maltsev-diff}, so $(n-1)c_k=0$ for any $k=0,1,2$. 
We see that the sum of the\n arithmetic parts of $\\vec{r}$ is equal to 2 which concludes the proof\n of~(\\ref{eqn:strange-sum}) for the tuple~$\\vec r$ and we are done.\n \\end{proof}\n\n It is easy to see that\n \\[\n \\begin{pmatrix}\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n \\end{pmatrix}\\in R,\n \\quad\\text{and}\\quad\n \\begin{pmatrix}\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n \\end{pmatrix}\\notin R.\n \\]\n However, the last tuple can be obtained from the first three by applying any\n minority operation on the set $A_n$ coordinate-wise. From this we conclude\n that $\\m a_n$ does not have a~minority term.\n\\end{proof}\n\nWe note that the above construction of $\\m a_n$ makes sense for $n$ even as well\nand claim that these algebras also have the same key features, namely, by\nconstruction, they have plenty of `local' minority term operations but they do\nnot have minority terms. The verification of this last fact for $n$ even is\nsimilar, but slightly more technical than for $n$ odd, and we omit the proof here.\n\nThe algebras $\\m a_n$ can also be used to witness that having a~lot of local\nminority-majority terms does not guarantee the presence of an actual\nminority-majority term. By padding with dummy variables, any local minority\nterm of an algebra $\\m a_n$ is also a~term that locally satisfies the\nminority-majority term equations. But since each $\\m a_n$ has a~Maltsev term\nbut not a~minority term, then by Theorem~\\ref{thm:join} it follows that $\\m\na_n$ cannot have a~minority-majority term.\n\n\\section{Deciding minority in idempotent algebras is in \\compNP}\\label{sec:np}\n\nThe results from the previous section imply that one cannot base an efficient\ntest for the presence of a~minority term in a~finite idempotent algebra on\nchecking if it has enough local minority terms. This does not rule out that\nthe problem is in the class \\compP, but to date no other approach to showing\nthis has worked. As an intermediate result, we show, at least, that this\ndecision problem is in \\compNP{} and so cannot be \\compEXPTIME-complete (unless\n$\\compNP=\\compEXPTIME$).\n\n\n\nWe first show that an instance $\\m a$ of the decision problem \\minority\\ can\nbe expressed as a~particular instance of the subpower membership problem for\n${\\m a}$.\n\n\\begin{definition}\\label{defSMP}\nGiven a~finite algebra $\\m a$, the \\emph{subpower membership problem} for $\\m\na$, denoted by $\\smp(\\m a)$, is the following decision problem:\n\\begin{itemize}\n \\item INPUT: $\\vec a_1, \\dots, \\vec a_k, \\vec b \\in A^n$\n \\item QUESTION: Is $\\vec b$ in the subalgebra of $\\m a^n$ generated by\n $\\{\\vec a_1, \\dots, \\vec a_k\\}$?\n\\end{itemize}\n\\end{definition}\n\nTo build an instance of $\\smp(\\m a)$ expressing that $\\m a$ has a~minority\nterm, let $I =\\{(a,b,c)\\mid \\mbox{$a, b, c \\in A$ and $|\\{a,b,c\\}| \\le 2$}\\}$.\nSo $|I| = 3|A|^2 - 2|A|$. For $(a,b,c) \\in I$, let $\\min(a,b,c)$ be the minority\nelement of this triple. 
So\n\\[\n \\min(a,b,b) = \\min(b,a,b) = \\min(b,b,a) = \\min(a,a,a) = a.\n\\]\nFor $1 \\le i \\le 3$, let $\\vec \\pi_i \\in A^I$ be defined by $\\vec \\pi_i(a_1,\na_2, a_3) = a_i$ and define $\\vec \\mu_A \\in A^I$ by $\\vec \\mu_A(a_1, a_2, a_3)\n= \\min(a_1, a_2, a_3)$, for all $(a_1, a_2, a_3) \\in I$. Denote the instance\n$\\vec \\pi_1$, $\\vec \\pi_2$, $\\vec \\pi_3$, and $\\vec \\mu_A$ of $\\smp(\\m a)$ by\n$\\min(\\m a)$.\n\n\\begin{proposition}\\label{min-instance}\n An algebra $\\m a$ has a~minority term if and only if $\\vec \\mu_A$ is\n a~member of the subpower of $\\m a^I$ generated by $\\{\\vec \\pi_1, \\vec \\pi_2,\n \\vec \\pi_3\\}$, i.e., if and only if $\\min(\\m a)$ is a~`yes' instance of\n $\\smp(\\m a)$ when $\\m a$ is finite.\n\\end{proposition}\n\n\\begin{proof}\n If $m(x,y,z)$ is a~minority term for $\\m a$, then applying $m$\n coordinatewise to the generators $\\vec \\pi_1$, $\\vec \\pi_2$, $\\vec \\pi_3$\n will produce the element $\\vec \\mu_A$. Conversely, any term that produces\n $\\vec \\mu_A$ from these generators will be a~minority term for $\\m a$.\n\\end{proof}\n\nExamining the definition of $\\min(\\m a)$, we see that the parameters from\nDefinition~\\ref{defSMP} are $k=3$ and $n=3|A|^2-2|A|$, which is (for algebras\nwith at least one at least unary basic operation) polynomial in $\\|\\m A\\|$.\nFor $\\m a$ idempotent, we can in fact improve $n$ to $3|A|^2-3|A|$,\nsince then we do not need to include in $I$ entries of the form $(a,a,a)$.\n\nIn general, it is known that for some finite algebras the subpower membership problem can be\n\\compEXPTIME-complete~\\cite{Kozik2008} and that for some others, e.g., for any\nalgebra that has only trivial or constant basic operations, it lies in the\nclass \\compP. In~\\cite{Mayr2012}, P.\\ Mayr shows that when $\\m a$ has a~Maltsev\nterm, then $\\smp(\\m a)$ is in \\compNP. We claim that a careful reading of Mayr's proof reveals that in fact the following uniform version of the subpower membership problem, where the algebra $\\m a$ is considered as part of the input, is also in \\compNP.\n\n\n\\begin{definition}\nDefine \\smpun\\ to be the following decision problem:\n\\begin{itemize}\n \\item INPUT: A~list of tables of basic operations of an algebra~$\\m A$ that includes a~Maltsev operation, and $\\vec a_1, \\dots, \\vec a_k, \\vec b \\in A^n$.\n \\item QUESTION: Is $\\vec b$ in the subalgebra of $\\m a^n$ generated by\n $\\{\\vec a_1, \\dots, \\vec a_k\\}$?\n\\end{itemize}\n\\end{definition}\nWe base the main result of this section\non the following.\n\\begin{theorem}[see \\cite{Mayr2012}]\\label{smpun}\n The decision problem \\smpun\\ is in the class \\compNP.\n\\end{theorem}\n\nWhile this theorem is not explicitly stated in \\cite{Mayr2012}, it can be seen\nthat the runtime of the verifier that Mayr constructs for the problem $\\smp(\\m\na)$, when $\\m a$ has a Maltsev term, has polynomial dependence on the size of\n$\\m a$ in addition to the size of the input to $\\smp(\\m a)$. We stress that\nMayr's verifier requires that the table for a Maltsev term of $\\m a$ is given as part of\nthe description of $\\m a$.\n\n\n\n\n\n\n\n\n\\begin{theorem}\\label{NP} The decision problem \\minority\\ is in the class \\compNP.\n\\end{theorem}\n\n\\begin{proof}\nTo prove this theorem, we provide a polynomial reduction $f$ of \\minority\\ to \\smpun. 
By Theorem~\\ref{smpun}, this will suffice.\nLet $\\m a$ be an instance of \\minority, i.e., a~finite\n idempotent algebra that has at least one operation.\n \n \n\n We first check, using the polynomial-time\n algorithm from Corollary~\\ref{maltsevterm}, to see if $\\m a$ has a~Maltsev\n term. If it does not, then $\\m a$ will not have a~minority term, and in this case we\n set $f(\\m a)$ to be some fixed `no' instance of \\smpun. Otherwise, we augment the list of basic operations of $\\m a$ by\n adding the Maltsev operation on $A$ that the algorithm produced. Denote\n the resulting (idempotent) algebra by $\\m a'$ and note that $\\m a'$ can be constructed from $\\m a$ by a polynomial-time algorithm.\n Also, note that $\\m a'$ is term equivalent to\n $\\m a$ and so the subpower membership problem is the same for both\n algebras.\n\n If we set $f(\\m a)$ to be the instance of \\smpun\\ that consists of the~list of tables of basic operations of~$\\m A'$ along with\n $\\min(\\m a)$ then we have, by Proposition~\\ref{min-instance}, that $f(\\m a)$ is a `yes' instance of \\smpun\\ if and only if $\\m a$ has a minority term. Since the construction of $f(\\m a)$ can be carried out by a procedure whose runtime can be bounded by a polynomial in $\\|\\m a\\|$, we have produced a polynomial reduction of \\minority\\ to \\smpun, as required.\n \\end{proof}\n\n\n\n\n\n\n\\section{Conclusion}\n\nWhile Theorem~\\ref{NP} establishes that testing for a~minority term for finite\nidempotent algebras is not as hard as it could be, the true complexity of this\ndecision problem is still open. Our proof of this theorem closely ties the\ncomplexity of {\\minority} to the complexity of the subpower membership problem\nfor finite Maltsev algebras and specifically to the problem \\smpun. Thus any progress on determining the complexity of\n$\\smp(\\m a)$ for finite Maltsev algebras may have a~bearing on the complexity\nof {\\minority}.\nThere has certainly been progress on the algorithmic side of $\\smp$;\na~major recent paper has shown in particular that $\\smp(\\m a)$ is tractable for\n$\\m a$ with cube term operations (of which a Maltsev term operation is\na~special case) as long as $\\m a$ generates a residually small\nvariety~\\cite{BMS18} (the statement from the paper is actually stronger than\nthis, allowing multiple algebras in place of $\\m a$).\n\nIn Section~\\ref{join} we introduced the notion of a~minority-majority term and\nshowed that if testing for such a~term for finite idempotent algebras could be\ndone by a~polynomial-time algorithm, then \\minority\\ would lie in the\ncomplexity class \\compP. This is why we conclude our paper with a~question\nabout deciding minority-majority terms.\n\n\\begin{open-problem*}\n What is the complexity of deciding if a~finite idempotent algebra has\n a~minority-majority term?\n\\end{open-problem*}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction}\nThe discovery of graphene, one layer of carbon atoms, arose high hopes for its application in future of\nelectronic devices~\\cite{Neto-RMP}. One reason for this attention is high mobility of charge carriers (electrons or holes)\nwhose density are well controlled by electrical and chemical doping. \nAlthough the application of pure graphene in electronic industry didn't develop because of the gap-less dispersion.\nAdding atoms and doping in graphene $e. 
g.$ fluorinated mono-vacancies~\\cite{Kaloni-EPL-100} and\nGe-interrelated~\\cite{Kaloni-EPL-99}, have been studied in order to investigate the electrical \ncharacteristics of graphene under impurity doping, which gives rise to new and interesting electronic and magnetic properties. \n\n The prediction of the existence of a neutral triplet collective\nmode in a honeycomb lattice was a new idea~\\cite{Baskaran-Jafari}.\nThis collective mode was predicted within the random phase approximation (RPA) for the Hubbard model on \nthe honeycomb lattice, $e. g.$ undoped graphene and graphite. By taking this collective mode into account,\ngood agreement between experimental and theoretical results for the TRPES experiment on graphite was achieved with the Momentum Average (MA) \nmethod~\\cite{Ebrahimkhas-PRB}. In this approximation, the neutral triplet spin-1 mode was treated as a coupling\nbetween two spinon bound states in undoped graphene, but in scalar form rather than as a $2\\times2$ \nmatrix~\\cite{Jafari-Baskaran},~\\cite{Baskaran-Jafari}. \nIf we recheck the calculations of refs.~\\cite{Jafari-Baskaran, Ebrahimkhas-IJMPB},\nwe find, first, that some parts of the calculations involving the two sub-lattices\nare left out and, second, that the effects of the overlap factors are neglected~\\cite{Peres-comment, Jafari-recomment}.\nThe innovation of ref.~\\cite{Jafari-Baskaran} was to use a scalar form for the spin susceptibility, instead of the $2\\times 2$ tensor\nappropriate for a two-sub-lattice material.\nThe essential purpose of this work is to analyze the validity of the spin-1 collective mode in the Hubbard model without the one-cone approximation,\nand its novelty is the calculation of the effects of an effective long range \ninteraction $(U^{eff}_{l-r})$ on the neutral triplet collective mode within the RPA~\\cite{JPHCM.Jafari}.\nWhy do we consider this form of long range interaction?\nBecause only for this type of interaction\ndo we obtain Eq.~\\ref{main.eq}, which is the essential equation for finding the neutral collective mode in the RPA formulas\n(without the one-cone approximation); in addition, we should consider those types of interaction\nthat act between electrons with opposite spins. \nThe Coulomb interaction, which has the form $e^{2}\/(\\epsilon r)$, is known to lead to the prediction\nof the plasmon dispersion in graphene and has been discussed in detail in ref.~\\cite{Wunsch}.\nThe Coulomb interaction in the honeycomb lattice does not result in a spin-1 collective mode.\n\n In the first section of this paper, the effects of the short range Hubbard interaction within the RPA on the honeycomb\nlattice are calculated exactly, without neglecting any interaction terms~\\cite{Peres-comment}, in two\nrepresentations. In the second section, we first introduce a new effective long range\ninteraction as a new model for graphene and then use this model to study the neutral triplet collective\nmode. 
Finally, the findings of these calculations are summarized and the validity of the neutral collective mode\nin undoped graphene is discussed.\n\n\n\\section{ On-site interaction: sub lattice representation}\n\n The Hubbard Hamiltonian consists of two terms: the tight binding (TB) term and the short range Hubbard interaction (HI),\n \n \\begin{eqnarray}\n H&=&H_{TB}+H_{HI} \n =-t\\sum_{\\langle i,j\\rangle,\\sigma}(a_{i}^{\\dagger \\sigma}b_{j}^{\\sigma}+b_{j}^{\\dagger \\sigma}a_{i}^{\\sigma})\\nonumber\\\\\n &+& U\\sum_{i}(a_{i}^{\\dagger \\uparrow}a_{i}^{\\uparrow}a_{i}^{\\dagger \\downarrow}a_{i}^{\\downarrow}+\n b_{i}^{\\dagger \\uparrow}b_{i}^{\\uparrow}b_{i}^{\\dagger \\downarrow}b_{i}^{\\downarrow}),\\\\\n \\label{TB.eq}\\nonumber\n \\end{eqnarray}\n \n where $t$ is the nearest neighbor hopping and $\\langle i,j\\rangle$ runs over nearest-neighbor pairs. $a^{\\sigma}_{i},b^{\\sigma}_{i},(a^{\\dagger \\sigma}_{i},b^{\\dagger \\sigma}_{i})$\n are annihilation (creation) operators for electrons on sub-lattices $A, B$ respectively, $\\sigma$ stands for the spin of the electrons,\n and $U$ is the Hubbard on-site interaction potential. The Hubbard interaction term can be written in the exchange channel,\n \n \\begin{equation}\n H_{HI}= -U\\sum_{i}(a_{i}^{\\dagger \\uparrow}a_{i}^{\\downarrow}a_{i}^{\\dagger \\downarrow}a_{i}^{\\uparrow}+\n b_{i}^{\\dagger \\uparrow}b_{i}^{\\downarrow}b_{i}^{\\dagger \\downarrow}b_{i}^{\\uparrow}).\n \\label{exHI.eq}\n \\end{equation}\n \n \n\n The interaction term is represented by Feynman diagrams with two kinds of vertices.\n Each vertex corresponds to the on-site repulsion on the $A$ or $B$ sub-lattice, Fig.~\\ref{vertices.fig}.\n In each vertex, electrons with opposite spins interact via the repulsive potential~\\cite{BF}.\n Fig.~(\\ref{vertices.fig}-a) shows the creation (annihilation) of electrons with up and down spin at each vertex,\n either on the $A$ or on the $B$ sub-lattice. In Fig.~(\\ref{vertices.fig}-b), by stringing together vertices of \n the two sub-lattices, an RPA chain is formed. In such a chain, vertices on the same or on different sub-lattices are put together,\n so we define polarization operators $\\chi_{AA}, \\chi_{AB}$ which include the effects of the \n overlap factors.\n \n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=8cm,height=3cm,angle=0]{vertices1.eps}\n\\vspace{-3mm}\n\\caption{(a) Each vertex includes the creation (destruction) of an electron with up (down) spin. These vertices describe the\ninteraction between two electrons with opposite spins on the same atom of a sub-lattice via the short range repulsive potential $U$. This type of\ninteraction is given in Eq.~\\ref{exHI.eq}: on each sub-lattice $A(B)$, electrons with up (down) spin interact with electrons with down (up) spin.\n(b) The RPA chain is built by combining $A$ and $B$ vertices; the figure shows one of the RPA chains.}\n \\label{vertices.fig} \n\\end{center}\n\\vspace{-3mm}\n \\end{figure}\n \n Each vertex in the chain can be either of $A$ or $B$ type. 
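In compact notation, the result of summing such chains is the $2\\times2$ matrix equation\n\\[\n \\hat{\\chi}=\\hat{\\chi}^{0}+U\\hat{\\chi}^{0}\\hat{\\chi},\n \\qquad\n \\hat{\\chi}=\\begin{pmatrix} \\chi_{AA} & \\chi_{AB}\\\\ \\chi_{BA} & \\chi_{BB}\\end{pmatrix},\n\\]\nwhere $\\hat{\\chi}^{0}$ is the corresponding matrix of bare susceptibilities.\n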
Written out in components, the summation of the RPA series gives the following set of coupled equations for the susceptibilities in the sub-lattice representation~\\cite{Peres-comment},\n \n \\begin{equation}\n\\begin{cases}\n \\chi_{AA}=\\chi^{0}_{AA}+U\\chi^{0}_{AA}\\chi_{AA}+U\\chi^{0}_{AB}\\chi_{BA}\\\\\n \\chi_{AB}=\\chi^{0}_{AB}+U\\chi^{0}_{AA}\\chi_{AB}+U\\chi^{0}_{AB}\\chi_{BB}\\\\\n \\chi_{BA}=\\chi^{0}_{BA}+U\\chi^{0}_{BA}\\chi_{AA}+U\\chi^{0}_{BB}\\chi_{BA}\\\\\n \\chi_{BB}=\\chi^{0}_{BB}+U\\chi^{0}_{BA}\\chi_{AB}+U\\chi^{0}_{BB}\\chi_{BB}\\\\\n\\end{cases}.\n\\label{coupling.eq}\n\\end{equation}\n\n The coupled equations in Eq.~\\ref{coupling.eq} can be decoupled into two subsystems with a common determinant, which must be zero at the plasmon dispersion~\\cite{Peres-comment}:\n \n \\begin{equation}\n \\begin{vmatrix} \n 1-U\\chi^{0}_{AA} & -U\\chi^{0}_{AB}\\\\ \n -U\\chi^{0}_{BA} & 1-U\\chi^{0}_{BB}\n \\end{vmatrix}=0,\n \\label{determin1.eq}\n\\end{equation}\n \n or \n \n \\begin{equation}\n 1-U(\\chi^{0}_{AA}+\\chi^{0}_{BB})+U^{2}\\chi^{0}_{AA}\\chi^{0}_{BB}-U^2 \\chi^{0}_{AB}\\chi^{0}_{BA}=0.\n \\label{determin2.eq}\n \\end{equation}\n \n \n The bare polarization operators (susceptibilities) of the vertices can be calculated using the non-interacting\n Green's functions of the graphene electrons in the sub-lattice representation. Working in the momentum representation close to the Dirac points, the Green's functions\n can be classified by the electron valleys $\\vec{K}$ and $-\\vec{K}$, i.e., by the vicinity of the corresponding Dirac point. In the valleys\n $\\vec{K}$ and $-\\vec{K}$ the non-interacting Hamiltonians in the representation of the sub-lattices $A, B$ are\n\n \\begin{equation}\n H_{\\pm \\vec{K}}=\\begin{pmatrix} \n 0 & \\pm p_{x}-ip_{y}\\\\ \n \\pm p_{x}+ip_{y} & 0\n \\end{pmatrix},\n \\label{nonintH.eq}\n\\end{equation}\n\n and, setting $v_{F}=1$, the corresponding Matsubara Green's functions for undoped graphene ($\\mu=0$) are\n\n\\begin{equation}\n G^{\\pm \\vec{K}}(\\vec{p},i\\epsilon)=\\frac{1}{(i\\epsilon)^2-p^2}\\begin{pmatrix} \n i\\epsilon & \\pm pe^{\\mp i\\phi}\\\\ \n \\pm pe^{\\pm i\\phi} & i\\epsilon\n \\end{pmatrix},\n \\label{MG.eq}\n\\end{equation}\n\n where $\\phi$ is the azimuthal angle of the momentum vector $\\vec{p}$.\nUsing Eq.~\\ref{MG.eq}, the polarization operators can be calculated as\n\n\\begin{equation}\n\\chi^{0}_{\\alpha,\\beta}(\\vec{q},i\\omega)=-T\\sum_{v=\\pm \\vec{K}}\\sum_{\\epsilon} \\int \\frac{d\\vec{p}}{(2\\pi)^2}\nG^{v}_{\\alpha, \\beta}(\\vec{p},i\\epsilon)G^{-v}_{\\beta, \\alpha}(\\vec{p'},i\\epsilon+i\\omega),\n\\label{chi-RPA.eq}\n\\end{equation}\n\n where $\\vec{p'}=\\vec{p}+\\vec{q}$. 
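The frequency sums needed here are of the standard particle-hole form,\n\\[\n T\\sum_{\\epsilon}\\frac{1}{(i\\epsilon-\\xi_{1})(i\\epsilon+i\\omega-\\xi_{2})}\n =\\frac{n_{F}(\\xi_{1})-n_{F}(\\xi_{2})}{i\\omega+\\xi_{1}-\\xi_{2}},\n\\]\nwith band energies $\\xi_{1}=\\pm p$, $\\xi_{2}=\\pm p'$; at $\\mu=0$ and $T\\to 0$ only the interband combinations survive, which is the origin of the $p+p'$ structure in the results below.\n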
Performing the summation over the fermionic Matsubara frequencies\n $\\epsilon=\\pi T(2n+1)$ and over the electron valleys $v=\\pm\\vec{K}$, we obtain\n \n \\begin{eqnarray}\n \\chi^{0}_{AA}(\\vec{q}, i\\omega)=\\chi^{0}_{BB}(\\vec{q}, i\\omega)=-\\int{\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{p+p'}{(i\\omega)^2-(p+p')^2}},\\nonumber\\\\\n \\chi^{0}_{AB}(\\vec{q}, i\\omega)=\\chi^{0}_{BA}(\\vec{q}, i\\omega)=\n \\int{\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{\\cos(\\phi-\\phi')(p+p')}{(i\\omega)^2-(p+p')^2}}.\\nonumber\\\\\n \\label{chi-sublattices.eq}\n \\end{eqnarray} \n \n Our results in Eq.~\\ref{determin1.eq} and Eq.~\\ref{chi-sublattices.eq} agree with the results of ref.~\\cite{Peres-comment};\n the only difference is that in Eq.~\\ref{chi-sublattices.eq} the overlap factor $\\cos(\\phi-\\phi')$ has been expanded near the Dirac point. Using Eq.~\\ref{determin1.eq} and Eq.~\\ref{chi-sublattices.eq} we should now look for zeros of \n Eq.~\\ref{determin2.eq}, but we did not find any zeros, and hence no triplet collective excitation, for any reasonable range of $U$. In other words,\n the short-range interaction does not lead to any neutral collective mode. \n \n\\section{On-site interaction: eigenvector representation}\n\n In this section the same problem of the RPA in the triplet channel of the on-site interaction will be considered~\\cite{note},\n but in the representation of the eigenvectors of the Hamiltonians of Eq.~\\ref{nonintH.eq}. For the\n electron valleys $\\vec{K}$ and $-\\vec{K}$, the eigenvectors are\n \n \\begin{equation}\n f^{\\vec{K}}_{s} = \\frac{1}{\\sqrt{2}} \\begin{pmatrix} \n e^{-i\\phi\/2}\n \\\\ se^{i\\phi\/2}\n \\end{pmatrix}, f^{-\\vec{K}}_{s} = \\frac{1}{\\sqrt{2}} \\begin{pmatrix} -se^{i\\phi\/2}\n \\\\ e^{i\\phi\/2} \\end{pmatrix} ,\n \\label{eignvector2.eq}\n\\end{equation}\n\n where $f^{\\pm \\vec{K}}_{s}$ corresponds to an electron state in the conduction band with energy $v_{F}p$ ($s=+1$) or in the valence band\n with energy $-v_{F}p$ ($s=-1$). The interaction Hamiltonian Eq.~\\ref{exHI.eq} in the momentum representation takes\n the form:
Retaining only the terms corresponding\n to small $\\vec{q}$ (since we look for small-momentum collective excitations), we obtain two intra-valley and two inter-valley terms\n\n\\begin{eqnarray}\n H_{int}=-\\frac{U}{N}\\sum_{\\vec{p}_{1} \\vec{p}_{2} \\vec{q}} \\sum_{\\vec{V}_{1},\\vec{V}_{2}=\\pm\\vec{K}}\n \\left\\{ \\Psi^{\\uparrow \\dagger}_{\\vec{v'}_{1}}\n \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\Psi^{\\downarrow}_{\\vec{v}_{1}} \\Psi^{\\downarrow \\dagger}_{\\vec{v'}_{2}}\n \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\Psi^{\\uparrow}_{\\vec{v}_{2}} \\right. \\nonumber\\\\\n \\left. + \\Psi^{\\uparrow \\dagger}_{\\vec{v'}_{1}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\downarrow}_{\\vec{v}_{1}} \\Psi^{\\downarrow \\dagger}_{\\vec{v'}_{2}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\uparrow}_{\\vec{v}_{2}} \\right \\},\n \\label{sum_psi2.eq}\n \\end{eqnarray}\n\n where $\\vec{v}_{\\alpha}=\\vec{V}_{\\alpha}+\\vec{p}_{\\alpha}$, $\\vec{v'}_{\\alpha}=\\vec{V}_{\\alpha}+\\vec{p'}_{\\alpha}$ and\n $\\vec{V}_{\\alpha}=\\pm \\vec{K}$. The change to the basis of the eigenvectors of Eq.~\\ref{eignvector2.eq}\n is performed according to the relation $\\Psi^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p}}=\n \\sum_{s=\\pm}f^{\\vec{V}}_{s}c^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p},s}$, where $c^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p},s}$ is the\n annihilation operator for an electron from the valley $\\vec{V}=\\pm \\vec{K}$, from the conduction $(s=+1)$ or\n valence $(s=-1)$ band, with momentum $\\vec{p}$ (measured from the corresponding Dirac point) and\n with spin up or down. The result of this transformation of Eq.~\\ref{sum_psi2.eq} is\n \n \\begin{eqnarray}\n H_{int}=\\sum_{\\vec{p}_{1}\\vec{p}_{2}\\vec{q}}\\sum_{\\vec{V}_{1}\\vec{V}_{2}=\\pm\\vec{K}}\\sum_{s'_1,s'_2,s_1,s_2=\\pm} \\left \\{\n \\Gamma^{(1)}_{\\vec{V}_{1}\\vec{V}_{2}}+\\Gamma^{(2)}_{\\vec{V}_{1}\\vec{V}_{2}} \\right \\} \\nonumber \\\\\n \\times c^{\\uparrow \\dagger}_{\\vec{V}_{1}+\\vec{p'}_{1},s'_1} c^{\\downarrow }_{\\vec{V}_{1}+\\vec{p}_{1},s_1}\n c^{\\downarrow \\dagger}_{\\vec{V}_{2}+\\vec{p'}_{2},s'_2} c^{\\uparrow }_{\\vec{V}_{2}+\\vec{p}_{2},s_2}\n \\label{H_int_new.eq}\n \\end{eqnarray}\n \n where the vertices are\n \n \\begin{eqnarray}\n \\Gamma^{(1)}_{++}&=&\\Gamma^{(2)}_{--}=\\frac{u}{4}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1}-\\phi_{2}+\\phi'_{2})},\\nonumber \\\\\n \\Gamma^{(2)}_{++}&=&\\Gamma^{(1)}_{--}=\\frac{u}{4}s_{1}s'_{1}s_{2}s'_{2}e^{\\frac{i}{2}(\\phi_{1}-\\phi'_{1}+\\phi_{2}-\\phi'_{2})}, \\nonumber\\\\\n \\Gamma^{(1)}_{+-}&=&\\Gamma^{(2)}_{-+}=\\frac{u}{4}s_{2}s'_{2}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1}+\\phi_{2}-\\phi'_{2})},\\nonumber \\\\\n \\Gamma^{(2)}_{+-}&=&\\Gamma^{(1)}_{-+}=\\frac{u}{4}s_{1}s'_{1}e^{\\frac{i}{2}(\\phi_{1}-\\phi'_{1}-\\phi_{2}+\\phi'_{2})},\n \\label{vertices4.eq}\n \\end{eqnarray}\n \n \n where $u=U\/N$. The interaction Hamiltonian in Eq.~\\ref{H_int_new.eq} gives rise to RPA series of the type\n depicted in Fig.~\\ref{vertices2.fig}. The vertex at each place in these series can be of type $\\Gamma^{(1)}$ or $\\Gamma^{(2)}$,\n while the electron valley in the Green functions between vertices can be $\\vec{K}$ or $-\\vec{K}$. \n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=6cm,height=2cm,angle=0]{vertices2.eps}\n\\vspace{-3mm}\n\\caption{Each vertex is of type $\\Gamma^{(1)}$ or $\\Gamma^{(2)}$ and describes the interaction between\nelectrons of the valence or conduction band in each valley. 
In this chain all allowed types of $\\Gamma$ vertices,\ndescribing the interaction of electrons in each valley, are included. This chain is one of the RPA chains.}\n \\label{vertices2.fig}\n\\end{center}\n\\vspace{-3mm}\n\\end{figure}\n \n \n The full vertex $\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}$ is renormalized by the RPA summation, where $i,j=1,2$ are the types of\n the left and right vertices in the series. The Bethe-Salpeter equation for this vertex is\n \n \\begin{eqnarray}\n &&\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2};i\\omega)= \\nonumber\\\\\n &&\\delta_{ij}\\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2}) \\nonumber \\\\\n &&-T\\sum_{k\\vec{V}}\\sum_{\\epsilon ss'}\\int \\frac{d\\vec{p}}{(2\\pi)^2}\\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'},s', \\vec{p},s) \\nonumber\\\\\n &&\\times G^{\\vec{V}}_{s}(\\vec{p},i\\epsilon)G^{\\vec{V}}_{s'}(\\vec{p'},i\\epsilon+i\\omega)\n \\Gamma^{(kj)}_{\\vec{V}\\vec{V}_{2}}(\\vec{p},s,\\vec{p'},s',\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2};i\\omega),\\nonumber\\\\\n \\label{new-vertex1.eq} \n \\end{eqnarray}\n \n where $\\vec{p'}=\\vec{p}+\\vec{q}$. The Green functions for the eigenstates of the Hamiltonians in Eq.~\\ref{nonintH.eq} at\n $\\mu=0$ have the simple form\n \n \\begin{equation}\n G^{\\pm \\vec{K}}_{+}(\\vec{p}, i\\epsilon)=\\frac{1}{i\\epsilon-p},~~~~~ G^{\\pm \\vec{K}}_{-}(\\vec{p}, i\\epsilon)=\\frac{1}{i\\epsilon+p}.\n \\label{greenf-ne.eq}\n \\end{equation}\n \n For further processing, we note that the vertices in Eq.~\\ref{vertices4.eq} can be decoupled into left and right parts,\n \n \\begin{equation}\n \\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2})=\n \\Gamma^{i.\\vec{V}_{1}}_{\\vec{p'}_{1}s'_{1}\\vec{p}_{1}s_{1}} \\cdot u \\cdot \\Gamma^{i.\\vec{V}_{2}}_{\\vec{p'}_{2}s'_{2}\\vec{p}_{2}s_{2}},\n \\label{l-r-gamma.eq}\n \\end{equation}\n \n where \n \n \\begin{equation}\n \\gamma^{a}_{\\vec{p'}s'\\vec{p}s}=\\frac{1}{2}e^{\\frac{i}{2}(-\\phi+\\phi')}, ~~~~~ \\gamma^{b}_{\\vec{p'}s'\\vec{p}s}=\n \\frac{1}{2}ss' e^{\\frac{i}{2}(\\phi-\\phi')},\n \\label{new-gamma.eq}\n \\end{equation}\n \n and the shorthand notations $1.\\vec{K}=2.(-\\vec{K})=a$ and $2.\\vec{K}=1.(-\\vec{K})=b$ are introduced, so that $\\Gamma^{i.\\vec{V}}$ stands for $\\gamma^{a}$ or $\\gamma^{b}$. Since all bare vertices are decoupled,\n the resummed vertices can also be decoupled into left and right parts. The RPA-renormalized interaction\n is written as $\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}=\\Gamma^{i.\\vec{V}_{1}}V_{ij}\\Gamma^{j.\\vec{V}_{2}}$. 
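\n As a consistency check of the decomposition in Eq.~\\ref{l-r-gamma.eq}, note that with the shorthand $1.\\vec{K}=a$ and $1.(-\\vec{K})=b$ the entry $\\Gamma^{(1)}_{+-}$ of Eq.~\\ref{vertices4.eq} is recovered from Eq.~\\ref{new-gamma.eq} as\n \n \\begin{equation}\n \\gamma^{a}_{\\vec{p'}_{1}s'_{1}\\vec{p}_{1}s_{1}}\\cdot u \\cdot\\gamma^{b}_{\\vec{p'}_{2}s'_{2}\\vec{p}_{2}s_{2}}=\n \\frac{1}{2}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1})}\\cdot u \\cdot\\frac{1}{2}s_{2}s'_{2}e^{\\frac{i}{2}(\\phi_{2}-\\phi'_{2})}=\n \\frac{u}{4}s_{2}s'_{2}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1}+\\phi_{2}-\\phi'_{2})}=\\Gamma^{(1)}_{+-},\n \\end{equation}\n \n and the remaining entries of Eq.~\\ref{vertices4.eq} are checked in the same way.\n 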
By substituting\n the decomposed form $\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}=\\Gamma^{i.\\vec{V}_{1}}V_{ij}\\Gamma^{j.\\vec{V}_{2}}$ into Eq.~\\ref{new-vertex1.eq}, the equations for the effective interaction $V_{ij}$ are obtained as\n \n \\begin{equation}\n V_{ij}=u+u\\sum_{k\\vec{V}}\\Pi^{\\vec{V}}_{ik}V_{kj},\n \\label{int-term.eq}\n \\end{equation}\n \n where the polarization operators are defined as\n \n \\begin{eqnarray}\n &&\\Pi^{\\vec{V}}_{ij}(\\vec{q},i\\omega)=\\nonumber\\\\\n &&-T\\sum_{\\epsilon ss'}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\Gamma^{i.\\vec{V}}_{\\vec{p'}s'\\vec{p}s}G^{\\vec{V}}_{s}(\\vec{p},i\\epsilon)\n \\Gamma^{j.\\vec{V}}_{\\vec{p}s\\vec{p'}s'}G^{\\vec{V}}_{s'}(\\vec{p'},i\\epsilon+i\\omega).\\nonumber\\\\\n \\label{pol.eq}\n \\end{eqnarray}\n \n \n A nontrivial solution of Eq.~\\ref{int-term.eq} occurs when its determinant is zero,\n \n \\begin{eqnarray}\n \\begin{vmatrix} \n 1-u\\Pi^{\\vec{K}}_{11}-u\\Pi^{-\\vec{K}}_{11} & -u\\Pi^{\\vec{K}}_{12}-u\\Pi^{-\\vec{K}}_{12}\\\\ \n -u\\Pi^{\\vec{K}}_{21}-u\\Pi^{-\\vec{K}}_{21} & 1-u\\Pi^{\\vec{K}}_{22}-u\\Pi^{-\\vec{K}}_{22}\n \\end{vmatrix}=\\nonumber\\\\\n 1-u(\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11}+\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22})\\nonumber\\\\\n +u^{2}(\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11})(\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22})\\nonumber\\\\\n -u^{2}(\\Pi^{\\vec{K}}_{12}+\\Pi^{-\\vec{K}}_{12})(\\Pi^{\\vec{K}}_{21}+\\Pi^{-\\vec{K}}_{21})=0.\n \\label{determin3.eq}\n \\end{eqnarray}\n \n Using Eqs.~\\ref{greenf-ne.eq},~\\ref{l-r-gamma.eq},~\\ref{new-gamma.eq} and Eq.~\\ref{pol.eq}, the polarization operators can be\n calculated easily; performing the summation over $\\epsilon=\\pi T (2n+1)$ and over $s,s'=\\pm1$, we find\n \n \\begin{eqnarray}\n \\Pi^{\\vec{K}}_{11}=\\Pi^{-\\vec{K}}_{11}=\\Pi^{\\vec{K}}_{22}=\\Pi^{-\\vec{K}}_{22}=\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{p+p'}{(i\\omega)^2-(p+p')^2},\\nonumber\\\\\n \\Pi^{\\vec{K}}_{12}= \\Pi^{-\\vec{K}}_{21}=-\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{e^{i(-\\phi+\\phi')}(p+p')}{(i\\omega)^2-(p+p')^2},\\nonumber\\\\\n \\Pi^{\\vec{K}}_{21}= \\Pi^{-\\vec{K}}_{12}=-\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{e^{i(\\phi-\\phi')}(p+p')}{(i\\omega)^2-(p+p')^2}.\\nonumber\\\\\n \\label{pol2.eq}\n \\end{eqnarray}\n \n Comparing Eq.~\\ref{pol2.eq} with~\\ref{chi-sublattices.eq} and~\\ref{determin1.eq} with~\\ref{determin3.eq}, we find that \n $\\chi^{0}_{AA}=\\chi^{0}_{BB}=\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11}=\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22}$ and\n $\\chi^{0}_{AB}=\\chi^{0}_{BA}=\\Pi^{\\vec{K}}_{12}+\\Pi^{-\\vec{K}}_{12}=\\Pi^{\\vec{K}}_{21}+\\Pi^{-\\vec{K}}_{21}$. Therefore,\n the sub-lattice representation gives the same result as the eigenstate representation for the short-range interaction.\n Consequently, in this representation we also could not find zeros of Eq.~\\ref{determin3.eq}. We conclude that\n the on-site Hubbard interaction cannot lead to the formation of a neutral collective mode. The essential reason is the overlap factors,\n which resolve the sub-lattices. 
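\n The absence of zeros in both determinant conditions can be illustrated by a direct numerical scan. The following Python sketch is only such an illustration, not part of the analytic treatment: it evaluates the bare susceptibilities of Eq.~\\ref{chi-sublattices.eq} by a crude quadrature, with an assumed ultraviolet cutoff, broadening $\\eta$ and grid size as purely numerical parameters, and prints the real part of the left-hand side of Eq.~\\ref{determin2.eq} below the particle-hole continuum $\\omega<v_{F}q$.\n \n \\begin{verbatim}\nimport numpy as np\n\ndef chi0(q, omega, eta=0.02, cutoff=3.0, n=400):\n    # crude quadrature of Eq. (chi-sublattices.eq) in the Dirac approximation;\n    # v_F = 1, while 'cutoff', 'eta' and 'n' are parameters of this sketch only\n    p = np.linspace(1e-3, cutoff, n)\n    phi = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)\n    P, PHI = np.meshgrid(p, phi, indexing='ij')\n    px, py = P*np.cos(PHI), P*np.sin(PHI)\n    Pp = np.sqrt((px + q)**2 + py**2)        # |p + q|, with q along the x axis\n    cosd = (px*(px + q) + py*py)\/(P*Pp)      # cos(phi - phi')\n    den = (omega + 1j*eta)**2 - (P + Pp)**2\n    w = P*(p[1] - p[0])*(phi[1] - phi[0])\/(2.0*np.pi)**2\n    chi_aa = -np.sum(w*(P + Pp)\/den)         # chi0_AA = chi0_BB\n    chi_ab = np.sum(w*cosd*(P + Pp)\/den)     # chi0_AB = chi0_BA\n    return chi_aa, chi_ab\n\ndef det_onsite(U, q, omega):\n    # left-hand side of Eq. (determin2.eq):\n    # 1 - 2*U*chiAA + U^2*(chiAA^2 - chiAB^2)\n    aa, ab = chi0(q, omega)\n    return 1.0 - 2.0*U*aa + U**2*(aa**2 - ab**2)\n\nq = 0.5\nfor U in (1.0, 2.0, 4.0):\n    vals = [det_onsite(U, q, om).real for om in np.linspace(0.05, 0.45, 9)]\n    print(U, [round(v, 3) for v in vals])\n\\end{verbatim}\n \n A zero of Eq.~\\ref{determin2.eq} would show up as a sign change of the printed values at fixed $U$; in accordance with the discussion above, no such sign change is expected for reasonable values of $U$.\n 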
In the first approach the overlap factors reside in the Green functions, and in the second\n approach these factors are attached to the interaction vertices.\n \n \\section{Long range interaction: effective Coulomb interaction}\n \n The Coulomb interaction between electrons in graphene is usually treated as long-range, so both sub-lattices\n participate in it equally, since the characteristic range of the interaction is much larger than the lattice constant.\n In this section we consider the long-range interaction in order to find its effect on the spin-1 collective mode, which was considered\n in~\\cite{JPHCM.Jafari}. If the long-range (Coulomb) interaction includes the interaction between electrons with the same and\n opposite spins on the two sub-lattices, it is known to produce the plasmon collective mode~\\cite{Wunsch}. The only long-range interaction\n which leads to Eq.~\\ref{main.eq} has the form of Eq.~\\ref{long-H-exch.eq}. Another reason for selecting this form of the long-range interaction is that the spin-1 collective\n mode is formed by the interaction between electrons with opposite spins.\n\n Assuming that the Coulomb interaction is represented by a constant $V$ between electrons with opposite spins (because the spin-1 mode\n is built from two electrons with opposite spins), we introduce the interaction Hamiltonian\n \n \\begin{eqnarray}\n H_{Coul}=V\\sum_{ij}\\left \\{a^{\\uparrow \\dagger}_{i}a^{\\uparrow}_{i}a^{\\downarrow \\dagger}_{j}a^{\\downarrow }_{j}+\n a^{\\uparrow \\dagger}_{i}a^{\\uparrow}_{i}b^{\\downarrow \\dagger}_{j}b^{\\downarrow }_{j}\\right. \\nonumber \\\\\n \\left. +b^{\\uparrow \\dagger}_{i}b^{\\uparrow}_{i}a^{\\downarrow \\dagger}_{j}a^{\\downarrow }_{j} + \n b^{\\uparrow \\dagger}_{i}b^{\\uparrow}_{i}b^{\\downarrow \\dagger}_{j}b^{\\downarrow }_{j} \\right \\}.\n \\label{long-H.eq}\n \\end{eqnarray}\n \n In the exchange channel, we have\n \n \\begin{eqnarray}\n H_{Coul}=-V\\sum_{ij}\\left \\{a^{\\uparrow \\dagger}_{i}a^{\\downarrow}_{j}a^{\\downarrow \\dagger}_{j}a^{\\uparrow }_{i}+\n a^{\\uparrow \\dagger}_{i}b^{\\downarrow}_{j}b^{\\downarrow \\dagger}_{j}a^{\\uparrow }_{i}\\right. \\nonumber \\\\\n \\left. +b^{\\uparrow \\dagger}_{i}a^{\\downarrow}_{j}a^{\\downarrow \\dagger}_{j}b^{\\uparrow }_{i} + \n b^{\\uparrow \\dagger}_{i}b^{\\downarrow}_{j}b^{\\downarrow \\dagger}_{j}b^{\\uparrow }_{i} \\right \\}.\n \\label{long-H-exch.eq}\n \\end{eqnarray}\n \n If we perform the same calculations starting from Eq.~\\ref{long-H-exch.eq}, as was done starting from Eq.~\\ref{H_int_new.eq}\n to derive Eq.~\\ref{determin3.eq}, we arrive at a different system,\n \n \n\n \\begin{equation}\n \\begin{cases}\n \\chi_{AA}=\\chi^{0}_{AA}+V\\chi^{0}_{AA}\\chi_{AA}+V\\chi^{0}_{AB}\\chi_{BA}+V\\chi^{0}_{AA}\\chi_{BA}+V\\chi^{0}_{AB}\\chi_{BA}\\\\\n \\chi_{AB}=\\chi^{0}_{AB}+V\\chi^{0}_{AA}\\chi_{AB}+V\\chi^{0}_{AB}\\chi_{BB}+V\\chi^{0}_{AA}\\chi_{BB}+V\\chi^{0}_{AB}\\chi_{BB}\\\\\n \\chi_{BA}=\\chi^{0}_{BA}+V\\chi^{0}_{BA}\\chi_{AA}+V\\chi^{0}_{BB}\\chi_{BA}+V\\chi^{0}_{BA}\\chi_{BA}+V\\chi^{0}_{BB}\\chi_{BA}\\\\\n \\chi_{BB}=\\chi^{0}_{BB}+V\\chi^{0}_{BA}\\chi_{AB}+V\\chi^{0}_{BB}\\chi_{BB}+V\\chi^{0}_{BA}\\chi_{BB}+V\\chi^{0}_{BB}\\chi_{BB}\\\\\n \\end{cases}.\n \\label{coupling2.eq}\n \\end{equation}\n\n \n with the solvability condition \n \n \\begin{equation}\n 1-V(\\chi^{0}_{AA}+\\chi^{0}_{AB}+\\chi^{0}_{BA}+\\chi^{0}_{BB})=0,\n \\label{main.eq}\n \\end{equation}\n \n where the susceptibilities are the same as in Eq.~\\ref{chi-sublattices.eq}. 
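\n This solvability condition can be scanned numerically in the same way as the short-range one. A minimal continuation of the illustrative sketch given in the previous section (reusing the chi0 quadrature and its assumed numerical parameters) is\n \n \\begin{verbatim}\ndef det_long_range(V, q, omega):\n    # left-hand side of Eq. (main.eq):\n    # 1 - V*(chiAA + chiAB + chiBA + chiBB) = 1 - 2*V*(chiAA + chiAB)\n    aa, ab = chi0(q, omega)\n    return 1.0 - 2.0*V*(aa + ab)\n\nfor V in (1.0, 2.0, 3.2):\n    vals = [det_long_range(V, 0.5, om).real for om in np.linspace(0.05, 0.45, 9)]\n    print(V, [round(v, 3) for v in vals])\n\\end{verbatim}\n \n where a zero of Eq.~\\ref{main.eq} would again appear as a sign change of the printed values.\n 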
Equation~\\ref{main.eq} is the same as the essential\n equation of~\\cite{Ebrahimkhas-IJMPB}, which predicted a neutral collective mode in the honeycomb lattice.\n Defining $ \\chi^{0}_{triplet}=\\chi^{0}_{AA}+\\chi^{0}_{AB}+\\chi^{0}_{BA}+\\chi^{0}_{BB}$, the dispersion of\n the neutral collective mode can be obtained from one of the following equations,\n \n \\begin{equation}\n \\Re{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=1\/V~~~~\\mathrm{or}~~~~~ \\Im{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=0.\n \\label{condition.eq}\n \\end{equation}\n \n \n The zeros of Eqs.~\\ref{condition.eq} are the collections of $(\\vec{q}, \\omega)$ which\n are plotted in Fig.~\\ref{e-h-dis.fig} for $V=3.2t$. This contour plot displays the dispersion of the neutral collective mode.\n The contour plot of $\\Re \\chi^{0}_{triplet}$ for $V=3.2t$ is plotted below the electron-hole $(e-h)$ continuum\n region in the $\\Gamma \\rightarrow K$ direction. However, this range of $V$ is too large for electrons in graphene.\n The solutions and zeros of Eq.~\\ref{condition.eq} were checked for $0.0t<V<3.2t$ and none were found; only for $V>3.2t$\n could dispersions of the neutral collective mode be found near the $\\Gamma$ point in the $\\Gamma-K$ direction, and these are physically meaningless.\n \n \n Thus the long-range interaction of the type in Eq.~\\ref{long-H.eq} cannot form the neutral collective mode\n in undoped graphene without the one-cone approximation.\n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=6cm,height=5cm,angle=0]{e_h_cont.eps}\n\\vspace{-3mm}\n\\caption{The contour plot of $\\Re{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=1\/V$ is displayed\nfor undoped graphene at $V=3.2t$. The dispersion of the spin collective mode appears below the e-h continuum (cyan region), but not in a\nreasonable range of $V$.\nThe vertical axis is the energy $\\epsilon$ (eV) and the horizontal axis is the momentum $q$. The direction in the e-h continuum is $\\Gamma \\rightarrow K$.}\n\\label{e-h-dis.fig}\n\\end{center}\n\\vspace{-3mm}\n \\end{figure}\n \n \n The prediction of a triplet collective mode in Ref.~\\cite{Ebrahimkhas-IJMPB} was due to\n the single-cone approximation for the short-range on-site interaction, which led to a scalar form of the polarization operator\n without the $U^2$ terms in Eq.~\\ref{determin2.eq}. The one-cone approximation,\n which eliminates some parts of Eq.~\\ref{determin2.eq}, is the only reason for the predicted spin-1 collective mode. \n \n \n \\section{Conclusion}\n\n In this work the effects of the short-range Hubbard interaction in two representations were studied, and a new\n effective long-range interaction Hamiltonian was introduced. The prediction of a\n neutral collective mode for the on-site and the effective long-range interaction was analyzed, and we found that in the RPA treatment\n of the Hubbard model without the single-cone approximation, the mixing of the overlap functions of the two sub-lattices suppresses\n this collective mode. These two types of interaction have\n different Hamiltonians, Eq.~\\ref{H_int_new.eq} and Eq.~\\ref{long-H-exch.eq}; in both we considered the interaction between\n electrons with opposite spins. We cannot find or predict a neutral collective mode as an effect of the\n effective long-range or short-range interaction, because the overlap factors in both cases resolve the sub-lattices and suppress\n the neutral collective mode.\n\n \\section{Acknowledgment}\n \n I wish to thank N. 
Sharifzadeh for helpful discussions.\n The author is also grateful to the referee of our paper~\\cite{Ebrahimkhas-IJMPB}, whose comments led to the main idea of this\n paper.\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nLet $p$ be a prime, and let $I_1, I_2\\subseteq (0,p)$ be subintervals. This paper is motivated by determining conditions on $I_1,I_2$ under which we can ensure the solubility of the congruence\n\\[\nxy\\=1\\bmod p,\\quad (x,y)\\in I_1\\times I_2.\n\\]\nFrom a heuristic point of view we would expect this congruence to have a solution whenever $|I_1|,|I_2|\\gg p^{1\/2}$. However, as highlighted by Heath-Brown \\cite{H-B2000}, the best result to date requires that\n$|I_1|\\cdot|I_2|\\gg p^{3\/2}\\log^2 p$. The proof requires one to estimate incomplete Kloosterman sums\n\\[\nS(n,H)=\\sum_{\\substack{m=n+1\\\\ m\\not\\= 0\\bmod p}}^{n+H}e\\left(\\frac{\\ell\\overline{m}}{p}\\right),\n\\]\nfor $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, for which the Weil bound yields\n\\begin{equation}\\label{eq:weil}\n|S(n,H)|\\le 2(1+\\log p) p^{1\/2}.\n\\end{equation}\nIt has been conjectured by Hooley \\cite{hooley} that\n$S(n,H)\\ll H^{1\/2}p^\\varepsilon,$ for any $\\varepsilon>0$,\nwhich would enable one to handle\nintervals with $|I_1|,|I_2|\\gg p^{2\/3+\\varepsilon}.$\nHowever such a bound appears to remain a distant prospect.\n\nA different approach to this problem involves considering a sequence of pairs of intervals $I_1^{(j)}, I_2^{(j)}$, for $1\\le j\\le J$, and asking whether there is a value of $j$ for which there is a solution to the congruence\n\\begin{equation}\\label{eqn:intro.1}\nxy\\=1\\bmod p,\\quad (x,y)\\in I_1^{(j)}\\times I_2^{(j)}.\n\\end{equation}\nThere are some obvious degenerate cases here. For example, if we suppose that $I_1^{(j)}=I_2^{(j)}$ for all $j$, and that these run over all intervals of a given length $H$, then we are merely asking whether there is a positive integer $h\\le H$ with the property that the congruence $x(x+h)\\=1\\bmod p$ has a solution $x\\in\\mbb{Z}$. This\nis equivalent to deciding whether the set\n$\\{h^2+4:1\\le h\\le H\\}$\ncontains a quadratic residue modulo $p$. When $H=2$, therefore, it is clear that this problem has a solution for all primes $p=\\pm 1\\bmod 8$. We avoid considerations of this sort by assuming that at least one of our sequences of intervals is pairwise disjoint. The following is our main result.\n\n\\begin{theorem}\\label{thm:inverses1}\nLet $H,K>0$ and let $I_1^{(j)}, I_2^{(j)}\\subseteq (0,p)$ be subintervals, for\n$1\\le j\\le J$, such that\n\\[|I_1^{(j)}|=H\\quad\\text{and}\\quad |I_2^{(j)}|=K\\]\nand\n\\[I_1^{(j)}\\cap I_1^{(k)}=\\emptyset\\quad\\text{for all}\\quad j\\not= k.\\]\nThen\nthere exists $j\\in \\{1,\\ldots,J\\}$ for which \\eqref{eqn:intro.1} has a solution\nif\n\\[\nJ\\gg \\frac{p^{3}\\log^4 p}{H^2K^2}.\n\\]\n\\end{theorem}\n\nIf we take $J=1$ in the theorem then we retrieve the above result that \\eqref{eqn:intro.1} is soluble when\n$HK\\gg p^{3\/2}\\log^2 p$. 
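For small moduli, both the congruence \\eqref{eqn:intro.1} and the size of the incomplete sums $S(n,H)$ can of course be examined directly. The following Python sketch is purely illustrative and plays no role in our arguments (the values of $p$, $\\ell$ and $H$ are arbitrary choices): it computes $S(n,H)$ via modular inverses and compares its maximal modulus over $n$ with the right-hand side of \\eqref{eq:weil}.\n\\begin{verbatim}\nimport cmath, math\n\ndef S(ell, n, H, p):\n    # incomplete Kloosterman sum over n < m <= n+H with p not dividing m\n    total = 0.0\n    for m in range(n + 1, n + H + 1):\n        if m % p == 0:\n            continue\n        inv = pow(m, -1, p)        # modular inverse of m mod p (Python >= 3.8)\n        total += cmath.exp(2j*math.pi*ell*inv\/p)\n    return total\n\np, ell, H = 1009, 5, 800\nworst = max(abs(S(ell, n, H, p)) for n in range(p))\nprint(worst, 2*(1 + math.log(p))*math.sqrt(p))\n\\end{verbatim}\nAveraged versions of bounds of this type are the subject of Theorem \\ref{thm:mvt1} below.\n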
Alternatively, if we allow a larger value of $J$, then we can get closer to what would follow on\nHooley's hypothesised bound for $S(n,H)$.\n\n\n\\begin{corollary}\nWith notation as in Theorem \\ref{thm:inverses1}, suppose that $J\\gg p^{1\/3}$.\nThen\nthere exists $j\\in \\{1,\\ldots,J\\}$ for which \\eqref{eqn:intro.1} has a solution provided that\n$H>p^{2\/3}$ and $K>p^{2\/3}(\\log p)^2$.\n\\end{corollary}\n\n\nOur proof of Theorem \\ref{thm:inverses1} relies upon a mean value estimate for incomplete Kloosterman sums. These types of estimates have been studied extensively for multiplicative characters, especially in connection with variants of Burgess's bounds (see Heath-Brown \\cite{H-B2012} and the discussion therein).\nThe situation for Kloosterman sums is relatively under-developed (see Friedlander and Iwaniec \\cite{FI}, for example). The result we present here appears to be new, although many of our techniques are borrowed directly from the treatment of the analogous multiplicative problem \\cite[Theorem 2]{H-B2012}. The deepest part of our argument is an appeal to Weil's bound for Kloosterman sums. We will prove the following result in the next section.\n\n\n\\begin{theorem}\\label{thm:mvt1}\nIf $I_1,\\ldots ,I_J\\subseteq (0,p)$ are disjoint subintervals, with $H\/2<|I_j|\\leq H$ for each $j$, then for any $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, we have\n\\[\n\\sum_{j=1}^J\\left|\\sum_{n\\in I_j}e\\left(\\frac{\\ell\\overline{n}}{p}\\right)\\right|^2\\leq 2^{12} p\\log^2H.\n\\]\n\\end{theorem}\n\nTaking $J=1$ shows that, up to a constant factor, this result includes as a special case the bound \\eqref{eq:weil} for incomplete Kloosterman sums.\n\n\\begin{TA}\nWhile working on this paper TDB was supported by\nEPSRC grant \\texttt{EP\/E053262\/1} and\nAH was supported by EPSRC grant \\texttt{EP\/J00149X\/1}.\nWe are grateful to Professor Heath-Brown for several useful discussions.\n\\end{TA}\n\n\\section{Proof of Theorem \\ref{thm:mvt1}}\n\nOur starting point is the following mean value theorem for $S(n,H)$.\n\n\\begin{lemma}\\label{lem:mvt1}\nFor $H\\in\\mbb{N}$ and $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, we have\n$$\n\\sum_{n=1}^p\\left|S(n,H)\\right|^2\\leq \\frac{H^2}{p}+8pH.\n$$\n\\end{lemma}\n\\begin{proof}\nAfter squaring out the inner sum and interchanging the order of summation, the left hand side becomes\n$$\n\\sum_{h_1,h_2=1}^H\\sum_{\\substack{n=1\\\\n\\not= -h_1,-h_2\\bmod p}}^pe\\left(\\frac{\\ell(\\overline{n+h_1}-\\overline{n+h_2})}{p}\\right).\n$$\nUsing orthogonality of characters it is easy to see that the inner sum over $n$ is\n\\begin{align*}\n&=\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*\\\\ \\overline{x}-\\overline{y}\\=h_1-h_2\\bmod p}}\ne\\left(\\frac{\\ell(x-y)}{p}\\right)\\\\\n&=\n\\frac{1}{p}\n\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*}}\ne\\left(\\frac{\\ell(x-y)}{p}\\right)\n\\sum_{a=1}^p\ne\\left(\\frac{a( \\overline{x}-\\overline{y} )+a(h_2-h_1)}{p}\\right).\n\\end{align*}\nHence\n\\begin{align*}\n\\sum_{n=1}^p\\left|S(n,H)\\right|^2\n&=\n\\frac{1}{p}\n\\sum_{a=1}^p\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\n\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*}}\ne\\left(\\frac{\\ell(x-y)+a( \\overline{x}-\\overline{y} )}{p}\\right)\\\\\n&=\n\\frac{1}{p}\n\\sum_{a=1}^p\n|K(\\ell,a;p)|^2\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right),\n\\end{align*}\nwhere $K(\\ell,a;p)$ is the usual complete Kloosterman sum. 
The contribution from $a=p$ is\n$$\n\\frac{|K(\\ell,0;p)|^2 H^2}{p}=\\frac{H^2}{p},\n$$\nsince $p\\nmid \\ell$. The remaining contribution has modulus\n\\begin{align*}\n\\left|\\frac{1}{p}\n\\sum_{a=1}^{p-1}\n|K(\\ell,a;p)|^2\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\\right|\n&\\leq\n4\n\\sum_{0<|a|\\leq p\/2}\n\\left|\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\\right|\\\\\n&\\leq\n8\n\\sum_{0