diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzameg" "b/data_all_eng_slimpj/shuffled/split2/finalzzameg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzameg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec1}\nIn this paper, we prove strong type, weak type inequalities of Hardy-Littlewood maximal operator and fractional Hardy Littlewood maximal operator on variable $\\ell^{{p(\\cdot)}} ({\\mathbb Z})$ sequence spaces. Inequalities for Hardy Littlewood maximal operator for variable sequence spaces $\\ell^{{p(\\cdot)}} ({\\mathbb Z})$, follows from corresponding inequalities for $\\ell^p({\\mathbb Z})$ spaces and Log Holder continuity. To prove, the inequalities for fractional Hardy Littlewood maximal operator, we use Calderon Zygmund decomposition for sequences and Log Holder continuity. \\\\\nSeveral authors have proved strong type and weak type inequalities for Hardy-Littlewood maximal operator and fractional Hardy-Littlewood maximal operator using various methods. In this paper, we provide a method that uses Calderon-Zygmund decomposition for sequences\n to prove strong and weak type inequalities for these maximal operators. Similar proof for the real line can be found in $\\text{\\cite{fior_new_proof_maximal}}$ where boundedness of Hardy-Littlewood maximal operator is proved using Calderon-Zygmund decomposition. \nEven though, we follow same method of proof given in $\\text{\\cite{fior_new_proof_maximal}}$, there are some remarkable differences \nbetween the proofs on integers and proofs on real line. \\\\\nFor detailed history of variable Lebesgue spaces, please refer $\\text{\\cite{fior_var_book}}, \\text{\\cite{lars1}}$. Similar conclusions in broader scope utilizing framework of homogeneous spaces may be found in $\\text{\\cite{shukla}}$. For detailed references on work in spaces of homogeneous type under variable exponent setting, please refer $\\text{\\cite{alme1}}, \\text{\\cite{mizuta1}}, \\text{\\cite{goro1}}, \\text{\\cite{hast1}},\\text{\\cite{kok1}}, \\text{\\cite{kok2}}$.\n\nIn this paper, we use the method of Calderon Zygmund decomposition for sequences and Log Holder continuity. We assume only Log Holder continuity at infinity and local Log Holder continuity follows in case of integers. \n\n\n\n\n\n\n\n\n\n\\section{Definitions and Notation}\n\n\n\n\n\nIn this paper, the following notation are used. Given a bounded sequence $ \\left \\{ p(n): n \\in {\\mathbb Z} \\right \\}$ which takes values in $[1, \\infty ) $, define \n$\\ell^{{p(\\cdot)}} ({\\mathbb Z})$ to be set of all sequences $\\left \\{ a(k) \\right \\}$ on ${\\mathbb Z}$ such that for some $\\lambda>0$, $ \\sum_{ k \\in {\\mathbb Z}} ( \\frac{\\abs{a(k)}}{\\lambda} )^{p(k)}< \\infty$. \\\\\nThroughout this paper, $ \\left \\{ p(n): n \\in {\\mathbb Z} \\right \\}$ denotes a bounded sequence which takes values in $[1,\\infty)$. Define $ {p_{-}} = \\inf \\left \\{ p(n): n \\in {\\mathbb Z} \\right \\}, {p_{+}} = \\sup \\left \\{ p(n): n \\in {\\mathbb Z} \\right \\} $. \\\\\nLet $\\mathcal{S}$ denote the set of all bounded sequences which take values in $[1, \\infty)$ such that ${p_{+}} < \\infty $. \\\\\nFor any set $ A \\subset {\\mathbb Z}$ , $\\abs{A}$ denotes cardinality of A. 
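For example, if $p(n)=1$ for $n\leq 0$ and $p(n)=2$ for $n>0$, then ${p_{-}}=1$, ${p_{+}}=2$ and the sequence given by $a(k)=\frac{1}{k}$ for $k>0$ and $a(k)=0$ for $k\leq 0$ satisfies\n\[ \sum_{ k \in {\mathbb Z}} \abs{a(k)}^{p(k)} = \sum_{k>0} \frac{1}{k^{2}} < \infty, \]\nso taking $\lambda=1$ in the definition shows that $a\in\ell^{{p(\cdot)}}({\mathbb Z})$, although $a\notin\ell^{1}({\mathbb Z})$. Thus $\ell^{{p(\cdot)}}({\mathbb Z})$ depends on the whole exponent sequence and not only on ${p_{-}}$ and ${p_{+}}$.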
\n\\\\\nLet \n\\[ \\Omega = \\left \\{ n \\in {\\mathbb Z}: 1 \\leq p(n) < \\infty \\right \\} \\] \n\\[ \\Omega_\\infty = \\left \\{ n \\in {\\mathbb Z}: p(n) = \\infty \\right \\} \\] Then define modular functional associated with ${p(\\cdot)}$ as\n\\[ \\rho_{{p(\\cdot)}}(a) = \\sum_{k \\in {\\mathbb Z} \\setminus \\Omega_\\infty} \\abs{ a(k)}^{p(k)} + \\normb{a}_{\\ell^\\infty(\\Omega_\\infty)} \\]\n\nNote, when ${p_{+}} < \\infty$, modular functional becomes\n\\[ \\rho_{{p(\\cdot)}}(a) = \\sum_{k \\in {\\mathbb Z} } \\abs{ a(k)}^{p(k)} \\]\nThroughout this paper, we assume ${p_{+}} < \\infty$. Further for a given sequence $\\left \\{ a(k): k \\in {\\mathbb Z} \\right \\}$, define norm in $\\ell^{{p(\\cdot)}}({\\mathbb Z})$ as\n \\[ \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = \\inf \\left \\{ \\lambda >0: \\rho_{{p(\\cdot)}}(\\frac{a}{\\lambda}) \\leq 1 \\right \\} \\]\n$\\normb{}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$ is a norm $\\text{\\cite{han}}$. A similar norm on $\\ell^{{p(\\cdot)}}({\\mathbb R}^n)$ is defined in $\\text{\\cite{fior_var_book}}$ and there it is proved it is a norm. \n\n\n\n\\begin{definition}\nGiven a non-negative sequence of real numbers, $ \\left \\{ a(n): n \\in {\\mathbb Z} \\right \\}, 0 \\leq \\alpha<1$, define fractional Hardy-Littlewood Maximal operator as follows:\n\\[ M_{\\alpha} a(n) = \\sup_{n \\in I} \\abs{I}^{\\alpha -1} \\sum_{k \\in I} \\abs{a(k)} \\] where supremum is taken over all intervals of integers which contain $n$. When $\\alpha=0$, fractional Hardy-Littlewood maximal operator becomes Hardy-Littlewood maximal operator.\n\\end{definition}\n\n\n\n\n\n\n\n\\begin{definition}\nGiven a sequence ${p(\\cdot)} \\in \\mathcal{S}$, we say that ${p(\\cdot)}$ is log-Holder continuous at infinity, if there exists positive real constants $C_\\infty, p_\\infty$ such that for all $n \\in {\\mathbb Z}$, \n\\[ \\abs{ p(n) - p_\\infty } \\leq \\frac{ C_\\infty} { \\log(e + \\abs{n})} , n \\in {\\mathbb Z} \\] and $e$ is exponential number. In this case, we write ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$\n\\end{definition}\n\n\nWe will also require Young's inequality in this paper. For any real $ a, b \\in {\\mathbb R}$, and any two real numbers $p, q$ such that $\\frac{1}{p} + \\frac{1}{q}= 1$, Young's inequality $\\text{\\cite{Krey1}}$ is as follows: \n\\[ ab \\leq \\frac{a^{p}}{p} + \\frac{b^{q}}{q } \\]\n\n\\begin{lemma}{\\label{sak_lem_LH_infty}}\nLet $ \\left \\{ q(n) \\right \\}$ be the sequence which satisfies $\\frac{1}{p(n)} + \\frac{1}{q(n)} =1$ $\\forall n \\in {\\mathbb Z}$. Let $1 \\leq {p_{-}} \\leq p(n) \\leq {p_{+}} < \\infty$, $\\forall n \\in {\\mathbb Z}$. The following are equivalent:\n\\begin{enumerate}\n\\item ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$\n\\item $\\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$ \n\\item $\\frac{1}{{q(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$ \n\\item ${q(\\cdot)} \\in LH_\\infty({\\mathbb Z})$ \n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n(a) We shall prove (1) $\\implies (2).\\\\\n $Let ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$. Note, that when ${p_{+}} < \\infty$, $\\forall n \\in {\\mathbb Z}$,\n\\begin{align*}\n\\abs{ \\frac{1}{p(n)} - \\frac{1}{p_\\infty}} = \\abs{ \\frac{p(n) - p_\\infty}{p(n)p_\\infty}} \\leq \\frac{\\abs{p(n) - p_\\infty}}{{p_{-}} p_\\infty} \\leq \\frac{\\frac{C_\\infty}{{p_{-}} p_\\infty}}{\\log(e+\\abs{n})} \n\\end{align*} \nfor some $LH_\\infty$ constant, which is $k_\\infty = \\frac{C_\\infty}{{p_{-}} p_\\infty}$. 
\\\\\n(b) We shall prove (2) $\\implies$ (1).\\\\\nLet $\\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$. Then, $\\forall n \\in {\\mathbb Z}$\n\\begin{align*}\n\\abs{p(n)-p_\\infty} &=\\abs{p(n)p_\\infty \\biggl( \\frac{1}{p(n)} - \\frac{1}{p_\\infty} \\biggr) } \\leq \\abs{p(n)p_\\infty} \\frac{C_\\infty}{\\log(e+\\abs{n})} \\\\\n& \\leq {p_{+}} p_\\infty \\frac{C_\\infty }{\\log(e+\\abs{n})} \\\\\n& \\leq \\frac{k_\\infty}{\\log(e+\\abs{n})}\n\\end{align*} for some $LH_\\infty$ constant which is $k_\\infty = {p_{+}} p_\\infty C_\\infty $ .\\\\\n(c) We shall prove that (1) $\\implies$ (3). \\\\\nLet ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$. Note, that when ${p_{+}} < \\infty$, for $n \\in {\\mathbb Z}$,\n\\begin{align*}\n\\abs{ \\frac{1}{q(n)} - \\frac{1}{q_\\infty}} &= \\abs{ \\frac{1}{p(n)} - \\frac{1}{p_\\infty}} = \\abs{ \\frac{p(n) - p_\\infty}{p(n)p_\\infty}} \\leq \\frac{\\abs{p(n) - p(\\infty)}}{{p_{-}} p_\\infty} \\\\\n& \\leq \\frac{\\frac{C_\\infty}{{p_{-}} p_\\infty}}{\\log(e+\\abs{n})} \n\\end{align*} \nfor some $LH_\\infty$ constant, which is $k_\\infty = \\frac{C_\\infty}{{p_{-}} p_\\infty}$. \\\\\n(d) We shall prove (3) $\\implies$ (4) and (1).\\\\\n Let $\\frac{1}{{q(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$. Then $\\frac{1}{\\frac{1}{{q(\\cdot)}}} \\in LH_\\infty({\\mathbb Z})$, which implies that ${q(\\cdot)} \\in LH_\\infty({\\mathbb Z})$. Since $\\frac{1}{{q(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$ and\n\\[ \\abs{ \\frac{1}{p(n)} - \\frac{1}{p_\\infty}} = \\abs{ \\frac{1}{q(n)} - \\frac{1}{q_\\infty}} \\] it follows that $\\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$ and hence, using (b), ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$.\\\\\n\\end{proof}\n\n\\begin{lemma}\nLet $ \\left \\{ q(n) \\right \\}$ be the sequence which satisfies $\\frac{1}{p(n)} + \\frac{1}{q(n)} =1$ $\\forall n \\in {\\mathbb Z}$. Let $1 \\leq {p_{-}} \\leq p(n) \\leq {p_{+}} < \\infty$, $\\forall n \\in {\\mathbb Z}$. Then, ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$ implies $p_\\infty \\geq {p_{-}}$.\n\\end{lemma}\n\\begin{proof}\n Given that ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$ implies $\\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$ using Lemma[$\\ref{sak_lem_LH_infty}$] ,it follows that\n\\begin{align*}\n\\abs{ \\frac{1}{p_\\infty}} & = \\abs{ \\frac{1}{p(n)} + \\frac{1}{p_\\infty} - \\frac{1}{p(n)} } \\\\\n& \\leq \\abs{ \\frac{1}{p(n)}} + \\abs{ \\frac{1}{p_\\infty} - \\frac{1}{p(n) } } \\\\\n& \\leq \\abs{\\frac{1}{{p_{-}}}} + \\frac{C_\\infty}{\\log(e+\\abs{n}) } \n\\end{align*}\nSince this is true for every $n$, $p_\\infty \\geq {p_{-}}$.\n\\end{proof}\n\nThe following lemma will be used as a variation of Holder's inequality. Proof of this lemma in continuous version can be found in $\\text{\\cite{fior_var_book}}$. 
Same line of proof works here.\n\\begin{lemma}{\\label{sak_holder_lem1}}\nFor $0 \\leq \\alpha < 1$, and $ p,q $ such that $ 1< p< \\frac{1}{\\alpha}, \\frac{1}{p} - \\frac{1}{q} = {\\alpha}$.\nFor every interval $I$ in ${\\mathbb Z}$ and non-negative sequence $\\left \\{ a(k) \\right \\}$ \\\\\n\\[ \\abs{I}^{{\\alpha} -1 } \\sum_{k \\in I} a(k) \\leq \\biggl( \\sum_{k \\in I} a(k)^p \\biggr)^{\\frac{1}{p} - \\frac{1}{q}} \\biggl( \\frac{1}{\\abs{I}} \\sum_{k \\in I} a(k) \\biggr)^{\\frac{p}{q}} \\]\n\\end{lemma}\n\n\n\n\n\\section{Some Results on Variable Lebesgue spaces}\nIn this section, we prove certain results that are used in proving boundedness of maximal operators.The proofs of these results in discrete case are same as corresponding results on real line $\\text{\\cite{fior_var_book}}$. For completeness sake, some of the proofs are included. \\\\\n\n\n\n\nThe modular functional has the following properties defined below $\\text{\\cite{fior_var_book}}$.\n\\begin{lemma}{\\label{sak_modular}}\nLet $ \\left \\{ u(n) \\right \\} $ be a non-negative sequence of real numbers. Let ${p(\\cdot)} \\in \\mathcal{S}$, then: \n\\begin{enumerate}\n\\item For all u, $\\rho_{{p(\\cdot)}}(u) \\geq 0$ and $\\rho_{{p(\\cdot)}}(\\abs{u})=\\rho_{{p(\\cdot)}}(u) $\\\\\n\\item $\\rho_{{p(\\cdot)}}(u) =0$ if and only if $u(k)=0$ for all k $\\in {\\mathbb Z}$ \\\\\n\\item If $\\rho_{{p(\\cdot)}}(u) < \\infty$, then $u(k) < \\infty $ for all k $\\in {\\mathbb Z}$ \\\\\n\\item $\\rho_{{p(\\cdot)}}$ is convex: Given $\\alpha, \\beta \\geq 0, \\alpha + \\beta =1, \\rho_{{p(\\cdot)}}(\\alpha u + \\beta v) \\leq \\alpha \\rho_{{p(\\cdot)}}(u) + \\beta \\rho_{{p(\\cdot)}}(v) $\\\\\n\\item If $\\abs{u(k)} \\geq \\abs{v(k)}$ , then $\\rho_{{p(\\cdot)}}(u) \\geq \\rho_{{p(\\cdot)}}(v) $. \\\\\n\\item If for some $\\delta > 0 , \\rho_{{p(\\cdot)}}(\\frac{u}{\\delta})< \\infty $, then the function $\\lambda \\to \\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda})$ is continuous \nand decreasing on $[\\delta, \\infty)$. Further $\\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda}) \\to 0$ as $\\lambda \\to \\infty $.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nContinuous version of Lemma[$\\ref{sak_modular}$] can be found in $\\text{\\cite{fior_var_book}}$. Same line of proof works here.\n\\end{proof}\n\nThe following lemma is the connection between modular functional and $\\ell^{{p(\\cdot)}}({\\mathbb Z})$ norm.\n\\begin{lemma} \nLet $ \\left \\{ u(n) , n \\in {\\mathbb Z} \\right \\} $ be a non-negative sequence of real numbers. Let ${p(\\cdot)} \\in \\mathcal{S}$. if ${p_{+}} < \\infty$, then $u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$ if and only if\n\\[ \\rho_{{p(\\cdot)}}(u) = \\sum_{k \\in {\\mathbb Z}} {\\abs{u(k)}}^{p(k)} < \\infty \\]\n\\begin{proof}\nNote if $\\rho_{{p(\\cdot)}}(u) < \\infty$, then by definition of norm $u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$. Conversely since $u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$ by definiton of norm, $\\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda}) < \\infty$ for some $\\lambda >0$. Further by (6) of modular functional properties, we have $\\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda}) < \\infty$ for some $\\lambda >1$. 
So, it follows that\n\\[ \\rho_{{p(\\cdot)}}(u)= \\sum_{k \\in {\\mathbb Z}} \\left( \\frac{\\abs{u(k)}\\lambda}{\\lambda} \\right)^{p(k)} \\leq \\lambda^{{p_{+}} } \\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda}) < \\infty \\]\n\\end{proof}\n\\end{lemma}\n\n\n\\begin{proposition}{\\label{sak_Prop_2.10}}\nLet $ \\left \\{ a(n) , n \\in {\\mathbb Z} \\right \\} $ be a non-negative sequence of real numbers such that $a \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$ and let $p(\\cdot) \\in \\mathcal{S}$.\n\\begin{enumerate}\n\\item For all $\\lambda \\geq 1$, \n\\[ \\lambda^{{p_{-}}} \\rho_{{p(\\cdot)}}(a) \\leq \\rho_{{p(\\cdot)}}(\\lambda a) \\leq \\lambda^{{p_{+}}} \\rho_{{p(\\cdot)}}(a) \\] \n\\item when $0 < \\lambda <1$ the reverse inequalities are true, i.e \n\\[ \\lambda^{{p_{+}}} \\rho_{{p(\\cdot)}}(a) \\leq \\rho_{{p(\\cdot)}}(\\lambda a) \\leq \\lambda^{{p_{-}}} \\rho_{{p(\\cdot)}}(a) \\]\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nFor $\\lambda \\geq 1$ , \n\\begin{align*}\n\\rho_{{p(\\cdot)}}(\\lambda a) = \\sum_{n=1}^\\infty \\biggl( \\lambda a(n) \\biggr)^{p(n)} = \\sum_{n=1}^\\infty {\\lambda}^{p(n)} {a(n)}^{p(n)} \\leq \\lambda^{{p_{+}}} \\rho_{{p(\\cdot)}}(a)\n\\end{align*}\nFurther,\n\\begin{align*}\n{\\lambda}^{{p_{-}}} \\rho_{{p(\\cdot)}}(a) = \\sum_{n=1}^{\\infty} {\\lambda}^{{p_{-}}} {a(n)}^{p(n)} \\leq \\sum_{n=1}^\\infty \\biggl( \\lambda a(n) \\biggr)^{p(n)} = \\rho_{{p(\\cdot)}}(\\lambda a)\n\\end{align*}\n\\end{proof}\n\n\n\\begin{lemma}{\\label{sak_lem7}}[Fatou Property of the Norm].\nGiven non-negative sequence $\\left \\{ u(n) \\right \\}$ of real numbers, let ${p(\\cdot)} \\in \\mathcal{S}$. Further, let $ \\left \\{ u_k \\right \\} \\subset \\ell^{{p(\\cdot)}}({\\mathbb Z})$ be non-negative sequences of real numbers such that $u_k$ increases to a sequence $u$ pointwise. Then either $u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z}) $ and $ \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\to \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$ or $ u \\notin \\ell^{{p(\\cdot)}}({\\mathbb Z})$ and $\\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\to \\infty$.\n\\end{lemma}\n\\begin{proof}\nFrom\n$u_k(n) \\leq u_{k+1}(n) \\implies \\sum_{k \\in {\\mathbb Z}} u_k(n) \\leq \\sum_{k \\in {\\mathbb Z}} u_{k+1}(n) $. So $\\rho_{{p(\\cdot)}}(u_k) \\leq \\rho_{{p(\\cdot)}}(u_{k+1})$. This implies that $\\rho_{{p(\\cdot)}}(\\frac{u_k}{\\lambda}) \\leq \\rho_{{p(\\cdot)}}(\\frac{u_{k+1}}{\\lambda}) $ and hence $\\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\normb{u_{k+1}}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$.\nTherefore, $\\left \\{ u_k \\right \\}$ is an increasing sequence, so is $ \\left \\{ \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\right \\}$, thus it either converges or diverges to $\\infty$. \\\\\n\nIf $ u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$, since $ u_k \\leq u, \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$. Otherwise, if $u_k \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$, then $\\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} < \\infty = \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$. In either case,it will suffice to show that for any $\\lambda < \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$, for all k, sufficiently large $\\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} > \\lambda$. \\\\\n\nFix such a $\\lambda$, by the definition of the norm $ \\rho_{{p(\\cdot)}}(u\/\\lambda) > 1$. 
Therefore by monotone convergence theorem of the classical Lebesgue spaces,\n\\begin{align*} \\rho_{{p(\\cdot)}}(\\frac{u}{\\lambda}) & = \\sum_{m \\in {\\mathbb Z}} \\biggl( \\frac{ \\abs{u(m) }} {\\lambda } \\biggr)^{p(m)} = \\lim_{k \\to \\infty} \\biggl( \\sum_{m \\in {\\mathbb Z}} \\biggl( \\frac{ \\abs{ u_k(m) } } { \\lambda } \\biggr)^{p(m)} \\biggr) = \\\\\n & = \\lim_{k \\to \\infty} \\rho_{{p(\\cdot)}}(\\frac{u_k} {\\lambda}) \n\\end{align*}\nHence, for all $k $ sufficiently large, $\\rho_{{p(\\cdot)}}(u_k\/ \\lambda) > 1$ and so $ \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} > \\lambda $.\n\\end{proof}\n\n\n\\begin{lemma}{\\label{sak_lem8}}[Fatou's lemma for sequences].\nLet $ \\left \\{ u_k \\right \\}$ be a non-negative sequence of real numbers such that $ \\left \\{ u_k \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$. \n. Let ${p(\\cdot)} \\in \\mathcal{S}$, suppose the sequence $ \\left \\{ u_k \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z}) $ is such that $u_k(n) \\to u(n) $ for every $n$. If\n\\[ \\liminf_{k \\to \\infty } \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} < \\infty \\]\nthen $ u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z}) $ and \n\\[ \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\liminf_{k \\to \\infty } \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\]\n\\end{lemma}\n\n\\begin{proof}\nDefine a new sequence \n\\[ v_k(i) = \\inf_{ m \\geq k} \\abs{u_m(i)} \\]\nThen for all $ m \\geq k, v_k(i) \\leq \\abs{u_m(i) }$ and so $ v_k \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$. Further by the definition $ \\left \\{ v_k \\right \\}$ is an increasing sequence and\n\\[ \\lim_{k \\to \\infty } v_k(i) = \\liminf_{ m \\to \\infty} \\abs{u_m(i)} = \\abs{u(i)}, \\quad i \\in {\\mathbb Z} \\]\nAlso,\n\\[ \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = \\normb{ \\lim_{k \\to \\infty} v_k }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = \\lim_{k \\to \\infty} \\normb{v_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = \\liminf_{k \\to \\infty} \\normb{v_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\]\nTherefore, by Fatou's norm property[$\\ref{sak_lem7}$] for sequences\n\\[ \\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = \\lim_{k \\to \\infty} \\normb{v_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\lim_{k \\to \\infty} ( \\inf_{ m \\geq k} \\normb{u_m}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} ) = \\liminf_{k \\to \\infty} \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\]\nSo, if $ \\liminf_{k \\to \\infty} \\normb{u_k}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} < \\infty$, then $\\normb{u}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} < \\infty$, which implies $ u \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$.\n\\end{proof}\n\n\n\n\\begin{lemma}\nLet $ \\left \\{ a(k) \\right \\} $ be a non-negative sequence of real numbers and ${p(\\cdot)} \\in \\mathcal{S}$\n\\begin{enumerate}\n\\item If $ a \\in \\ell^{p(\\cdot)}({{\\mathbb Z}})$ and $ \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} > 0$, then $\\rho_{{p(\\cdot)}}(\\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z}) }}) \\leq 1$ \\\\\n\\item If ${p_{+}} < \\infty$, then $ \\rho_{{p(\\cdot)}}(\\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z}) }}) =1$ for all nontrivial $a \\in \\ell^{p(\\cdot)}({{\\mathbb Z}})$\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nSince $\\normb{a}_{\\ell^{{p(\\cdot)}}}({\\mathbb Z}) = \\inf \\left \\{ \\lambda > 0: \\rho_{{p(\\cdot)}}(a\/\\lambda) \\leq 1 \\right \\}$, $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}+ \\frac{1}{n}$ is not an infimum. 
Therefore, there exists a $\\lambda_n$ such that\n$\\lambda_n \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} + \\frac{1}{n}$ and $ \\rho_{{p(\\cdot)}}(\\frac{a}{\\lambda_n}) \\leq 1$.\nFix such a decreasing sequence $\\left \\{ \\lambda_n \\right \\}$ such that $ \\left \\{ \\lambda_n \\right \\} \\to \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$. Then by Fatou's lemma and the definition of modular,\n\\begin{align*}\n\\rho_{{p(\\cdot)}}(\\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}}) & = \\sum_{k \\in {\\mathbb Z}} \\biggl( \\frac{\\abs{a(k)}}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z}) }} \\biggr)^{p(k)}\\\\\n& = \\sum_{k \\in {\\mathbb Z}} \\lim_{n \\to \\infty} \\biggl( \\frac{\\abs{a(k)}}{\\lambda_n} \\biggr)^{p(k)} \\\\\n& \\leq \\liminf_{n \\to \\infty} \\sum_{k \\in {\\mathbb Z}} \\biggl( \\frac{\\abs{a(k)}}{\\lambda_n} \\biggr)^{p(k)} \\\\\n& = \\liminf_{n \\to \\infty} \\rho_{{p(\\cdot)}}(\\frac{a}{\\lambda_n}) \\leq 1\n\\end{align*}\nNow suppose that ${p_{+}} < \\infty$ but $\\rho(\\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}}) < 1$. Then for all $\\lambda$ such that $0< \\lambda < \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$, by Lemma[\\ref{sak_Prop_2.10}]\n\\[ \\rho_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}(a\/\\lambda) = \\rho_{{p(\\cdot)}}( \\frac{ \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}}{\\lambda} \\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}}) \\leq \\biggl( \\frac{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}}{\\lambda} \\biggr)^{{p_{+}}} \\rho_{{p(\\cdot)}}(\\frac{a}{\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}}) \\]\nTherefore, we can find $\\lambda$ sufficiently close to $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$ such that $\\rho_{{p(\\cdot)}}(a\/\\lambda)<1$. \nBut by the definition of the norm , we must have $\\rho_{{p(\\cdot)}}(a\/\\lambda) \\geq 1$. This leads to contradiction, so the equality holds.\n\n\n\\end{proof}\n\nUsing Fatou's lemma for sequences [$\\ref{sak_lem8}$] and properties of modular functional, following lemma can be proved.\n\\begin{lemma}{\\label{sak_corollary_2}}\nLet ${p(\\cdot)} \\in \\mathcal{S}$ and $ \\left \\{ a(k) \\right \\} $ be a non-negative sequence of real numbers such that $ \\left \\{ a(k) \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$\n\\begin{align*}\n\\text{If} \\quad \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq 1, \\text{then} \\qquad \\rho_{{p(\\cdot)}}(a) \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\\\\n\\text{If} \\quad \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\geq 1, \\text{then} \\qquad \\rho_{{p(\\cdot)}}(a) \\geq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nContinuous version of Lemma[$\\ref{sak_corollary_2}$] can be found in $\\text{\\cite{fior_var_book}}$. Same line of proof works here.\n\\end{proof}\n\nUsing Lemma[$\\ref{sak_Prop_2.10}$], we can prove the following lemma below. Continuous version of Lemma[$\\ref{sak_corollary_3}$] can be found in $\\text{\\cite{fior_var_book}}$. Same line of proof works here.\n\\begin{lemma}{\\label{sak_corollary_3}}\nLet $ \\left \\{ a(k) \\right \\} $ be a non-negative sequence of real numbers such that $ \\left \\{ a(k) \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z})$ and ${p(\\cdot)} \\in \\mathcal{S}$. 
Then\n\\begin{enumerate}\n\\item If $ \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} >1$, then $\\rho_{{p(\\cdot)}}(a)^{1\/{p_{+}}} \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\rho_{{p(\\cdot)}}(a)^{1\/{p_{-}}}$ \\\\\n\\item If $ 0 < \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq 1 $, then $\\rho_{{p(\\cdot)}}(a)^{1\/{p_{-}}} \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\leq \\rho_{{p(\\cdot)}}(a)^{1\/{p_{+}}}$ \\\\\n\\end{enumerate}\n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Calderon-Zygmund decompostion for Sequences}\n\\begin{theorem}{\\label{CD_General}}\nLet $1\\leq p < \\infty$ and a $\\in l^p({\\mathbb Z})$. For every $t>0$, and $0\\leq \\alpha<1$, there exists a sequence of disjoint intervals $\\left \\{ I_j^t \\right \\}$ such that\n\\begin{align*}\n(i)& \\quad t < \\frac{1}{ \\abs{I_j^t}^{1-\\alpha} } \\sum_{k \\in I_j^t} \\abs{a(k)} \\leq 2t,\\forall j \\in {\\mathbb Z} \\\\\n(ii)& \\quad \\forall n \\not\\in \\cup_j I_j^t, \\quad \\abs{a(n} \\leq t \\\\\n(iii)& \\quad \\text{If} \\quad t_1 > t_2 , \\quad \\textbf{then each} \\quad I_j^{t_1} \\quad \\textbf{is subinterval of some} \\quad I_m^{t_2}, \\quad \\forall j,m \\in {\\mathbb Z}\n\\end{align*}\n\\end{theorem}\n\\begin{proof}\nFor each positive integer N, consider the collection of disjoint intervals of cardinality $2^N$, \\\\\n\\begin{eqnarray*}\n\\left \\{ I_{N,j} \\right \\} = \\left \\{ [ (j-1)2^N +1, \\dots, j2^N] \\right \\} , j \\in {\\mathbb Z}.\n\\end{eqnarray*} \\\\\nFor each $t>0$, let $N=N_t$ be the smallest positive integer such that \n\\[ \\frac{1}{\\abs{I_{N_t,j} }^{1-\\alpha} } \\sum_{k \\in I_{N_t,j}} \\abs{a(k)} \\leq t \\] \t\\\\\nSuch $N_t$ is possible as $ a\\in \\ell^p ({\\mathbb Z})$. Now consider collection $ \\left \\{ I_{N_t,j} \\right \\}$ and subdivide each of these intervals into two intervals of equal cardinality. If $I$ is one of these intervals either \n\\begin{align*}\n(A)& \\quad \\frac{1}{\\abs{I}^{1-\\alpha} } \\sum_{k \\in I} \\abs{a(k)} >t \\\\ \\quad \\text{or} \\quad\n(B)& \\quad \\frac{1}{\\abs{I}^{1-\\alpha} } \\sum_{k \\in I} \\abs{a(k)} \\leq t \\\\\n\\end{align*}\nIn case (A) we select this interval and include it in a collection $\\left \\{ I_{r,j} \\right \\}$. \\\\\nIn case (B) we subdivide I once again unless I is a singleton and select intervals as above. \nNow the elements which are not included in $\\left \\{ I_{r,j} \\right \\}$ form a set S such that for every $n \\in S$, $\\abs{a(n)} \\leq t$. This proves (i). \\\\\nAlso from the choice of $\\left \\{ I_{r,j} \\right \\}$, note that $\\left \\{ I_{r,j} \\right \\}$ are disjoint and satisfy \n\\begin{eqnarray*}\n\\frac{1}{\\abs{I_{r,j} }^{1-\\alpha} } \\sum_{k \\in I_{r,j}} \\abs{a(k)} >t\n\\end{eqnarray*}\nSince each $I_{r,j}$ is contained in an interval $J_0$ with card $J_0 = \\abs{2I_{r,j} }^{1-\\alpha}$, which is not selected in the previous step, we have\n\\[\n\\frac{1}{\\abs{I_{r,j}}^{1-\\alpha} } \\sum_{K \\in I_{r,j}} \\abs{a(k)} \\leq \\frac{2}{\\abs{J_0 }} \\sum_{k \\in J_0} \\abs{a(k)} \\leq 2t.\n\\]\nThis proves (ii). It remains to prove (iii). \\\\\nIf $t_1 > t_2$ then $N_{t_1} \\leq N_{t_2}$. So each $I_{N_{t_1}, j}$ is contained in some \n$I_{N_{t_2}, j}$. In the subdivision and the selecting process for $t_1$ we have \n\\[\n\\frac{1}{\\abs{I_{N_{t_1,j} } }^{1-\\alpha} } \\sum_{k \\in I_{N_{t_1,j} }} \\abs{a(k)} > t_1 > t_2.\n\\]\nSo, if $I_j^{t_1}$ is not one of the intervals $I_m^{t_2}$, then it must be subinterval of some $I_m^{t_2}$ selected in an earlier step. 
This completes proof.\n\\end{proof}\n\n\n\n\\begin{theorem}{\\label{CD_Weak}}\nLet $1\\leq p < \\infty$ and a $\\in l^p({\\mathbb Z})$. Let $\\left \\{ I_j^t \\right \\}$ be intervals obtained from Calderon Zygmund decomposition at height $t$ and $ 0\\leq \\alpha <1 $. Then\n\\[ \\left \\{ n: M_{\\alpha}a(n) > 9t \\right \\} \\subseteq \\cup_j 2I_j^t \\]\n\\end{theorem}\n\n\n\n\\begin{proof}\nWe apply Calderon-Zygmund decomposition to the sequence $\\left \\{ a(n), n \\in {\\mathbb Z} \\right \\} $, where $a \\in \\ell^p({\\mathbb Z})$. For every $t > 0$, we get a sequence of disjoint intervals $ \\left \\{ I^t_j \\right \\}$ satisfying the criteria of Calderon - Zygmund decompostion. \nFor each $j$, we have\n\\[ \\frac{1}{{\\abs{I^t_j}}^{1-\\alpha}} \\sum_{k \\in I^t_j} \\abs{a(k)} > t \\]\nTherefore, $\\cup_j I^t_j \\subseteq \\left \\{n : M_{\\alpha} a(n) >t \\right \\}$.\n Let $n \\notin \\cup_j 2 I^t_j$ and $I$ be any interval which contains $n$. Then \n\\[ \\sum_{k \\in I} \\abs{a(k)} = \\sum_{ k \\in I \\cap (\\cup_j I^t_j) } \\abs{a(k)} + \\sum_{ I \\cap (\\cup_j I^t_j)^{\\complement} } \\abs{a(k)} = S_1 + S_2 \\]\nTo estimate $S_1$, we observe a simple geometric fact. If $ I \\cap I^t_j$ is non-empty and I is not contained in $ 2 I^t_j$, then $I^t_j \\subset 4I$. Since $n \\in I$ and $n \\notin 2 I^t_j$, for each $j$, I is not contained in $ 2I^t_j$ for each $j$. Also, note that $S_2 \\leq \\abs{I}$.\nTherefore, \n\\begin{align*}\nS_1 &\\leq \\sum_{ \\left \\{ j: I^t_j \\subseteq 4I \\right \\} } \\sum_{ k \\in I^t_j} \\abs{a(k)} \\\\\n& \\leq \\sum_{ \\left \\{ j: I^t_j \\subseteq 4I \\right \\} } 2t \\abs{I^t_j }\\\\\n& \\leq 2t \\abs{4I} \\\\\n& \\leq 8t \\abs{I}. \n\\end{align*} \nHence, $ \\sum_{ k \\in I} \\abs{a(k)} \\leq S_1 + S_2 \\leq 9t \\abs{I} $. \n\nSince $I$ was an arbritrary interval containing $n$, we have $ M_{\\alpha}a(n) \\leq 9t $. Therefore\n\\[ ( \\cup_j 2 I^t_j )^{\\complement} \\subseteq \\left \\{ n: M_{\\alpha}a(n) \\leq 9t \\right \\} \\]\n\\end{proof}\n\n\\begin{lemma}{\\label{CD_Template}}\nLet $1\\leq p < \\infty, a \\in l^p({\\mathbb Z})$ and ${q(\\cdot)} \\in \\mathcal{S}$. Let $\\left \\{ I^k_j \\right \\}$ be intervals obtained from Calderon-Zygmumd decomposition at height $(9t)^{k-1}$ and $t >0$. For any maximal operator $M_{\\alpha}, 0 \\leq \\alpha < 1$, and for every $0 0$. Define $\\Omega_{k} = \\left \\{ i \\in {\\mathbb Z}: M_{\\alpha}a(i) > A^{k} =(9t)^{k} \\right \\}$. 
\nFor each integer $k$, apply Theorem[$\\ref{CD_General}$] Calderon-Zygmund decomposition for sequence $\\left \\{ a(i) \\right \\}$, at height $t =A^{k-1}$ to get\npairwise disjoint cubes $ \\left \\{ I_j^k \\right \\} $ such that \n\\begin{align}\n\\Omega_k \\subset \\cup_j 2 I_j^k \\label{eq:eq1}\\tag{CD1} \\\\\n\\frac{1}{ {\\abs{I^k_j} }^{1-\\alpha} } \\sum_{{i \\in I^k_j}} a(i) > A^{k-1} \\label{eq:eq2}\\tag{CD2}\n\\end{align}\n\nFrom \\eqref{eq:eq2} we get \\\\\n\\begin{align*}\n\\frac{1}{\\abs{2I_j^k}^{1-\\alpha}} \\sum_{i \\in 2I_j^k} a(i) > \\frac{1}{2^{1-\\alpha}} A^{k-1} \n\\end{align*}\n\nDefine sets inductively as follows: \n\\begin{align*}\nE_1^k = \\biggl( \\Omega_{k+1} \\setminus \\Omega_{k} \\biggr) \\cap 2I_1^k \\nonumber \\\\\nE_2^k = \\biggl( \\biggl( \\Omega_{k+1} \\setminus \\Omega_{k} \\biggr) \\cap 2I_2^k \\biggr) \\setminus E_1^k \\nonumber\\\\\nE_3^k = \\biggl( \\biggl( \\Omega_{k+1} \\setminus \\Omega_{k}\\biggr) \\cap 2I_3^k \\setminus (E_1^k \\cup E_2^k ) \\biggr) \\nonumber\\\\\n\\ldots\nE_m^k = \\biggl( \\biggl( \\Omega_{k+1} \\setminus \\Omega_{k}\\biggr) \\cap 2I_{m}^k \\setminus (E_1^k \\cup E_2^k \\ldots E_{m-1}^k) \\biggr) \\nonumber\\\\\n\\end{align*}\nThen, sets $E_j^k$ are pairwise disjoint for all $j$ and $k$ and satisfy for every $k$\n\\[\\Omega_{k+1} \\setminus \\Omega_{k} = \\bigcup_j E_j^k \\label{eq:eq4} \\tag{CD3} \\]\nSo, ${\\mathbb Z} = \\bigcup_k \\biggl( \\Omega_{k+1}\\setminus \\Omega_{k} \\biggr) = \\bigcup_k \\bigcup_j E_j^k $.\nFurther, note $\\Omega_{k+1} \\setminus \\Omega_{k} = \\left \\{ A^{k+1} < M_{\\alpha} a(i) < A^k \\right \\}$. \\\\\nWe now estimate $ M_{\\alpha}a$ as follows, noting $A < 1$,\n\\begin{align*}\n \\sum_{i \\in {\\mathbb Z}} {M_{\\alpha} a(i)}^{q(i)}\n & = \\sum_k \\sum_{\\Omega_{k+1} \\setminus \\Omega_{k}} {M a(i)}^{q(i)} \\\\\n & \\leq \\sum_k \\sum_{\\Omega_{k+1} \\setminus \\Omega_{k}} [A^{k}]^{q(i)} \\\\\n & \\leq A^{{q_{-}}}2^{{q_{+}}(\\alpha-1)} \\sum_{k,j} \\sum_{E^k_j} \\biggl( \\frac{1}{\\abs{2I^k_j}^{1-\\alpha}} \\sum_{i \\in 2I^k_j} a(i) \\biggr)^{q(i)}\n\\end{align*}\n\\end{proof}\n\n\\section{Fractional Hardy-Littlewood Maximal operator}\n\n\n\\subsection{Strong $({p(\\cdot)}, {p(\\cdot)})$ Inequality for Fractional Hardy-Littlewood Maximal Operator}\nIn this section, we derive strong $({p(\\cdot)}, {p(\\cdot)})$ inequality for fractional Hardy-Littlewood maximal Operator. In order to prove this theorem, we use Lemma[$\\ref{CD_Template}$].\n \n \\begin{theorem}[Strong $({p(\\cdot)}, {p(\\cdot)})$ Inequality for Fractional Hardy-Littlewood Maximal Operator]\nGiven a non-negative sequence $ \\left \\{ a(i) \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z}), 0\\leq \\alpha < 1$, let $ {p(\\cdot)} \\in \\mathcal{S},{p_{+}}< \\infty, {p_{-}}> 1 , {p(\\cdot)} \\in LH_\\infty({\\mathbb Z}), {q(\\cdot)} \\in LH_\\infty({\\mathbb Z}), \\text{and} \\quad 1< {p_{-}} \\leq {p_{+}} < \\frac{1}{\\alpha} $. Define the exponent function ${q(\\cdot)}$ by \n\\[ \\frac{1}{p(k)} - \\frac{1}{q(k)} = \\alpha , k \\in {\\mathbb Z}\\] Then\n\\[ \\normb{M_{\\alpha} a}_{\\ell^{q(.)}({\\mathbb Z})} \\leq C \\normb{a}_{\\ell^{p(.)}({\\mathbb Z})} \\]\n\\end{theorem}\n \n\n\\begin{proof}\n\n\n\n\n \nNow, we are going to prove $ \\normb{M_{\\alpha} a}_{\\ell^{{q(\\cdot)}}({\\mathbb Z})} \\leq c \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$. We may assume without loss of generality that $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} =1$. 
We will show that there exist a constant $\\lambda_2=\\lambda_2({p(\\cdot)}) > 0$ such that $\\rho_{{q(\\cdot)}}(M_{\\alpha}\\frac{a}{\\lambda_2}) \\leq 1$. \nFor this it suffices to prove $ \\rho_{{q(\\cdot)}}( \\alpha_2 \\beta_2 \\gamma_2 \\delta_2 M_{\\alpha} a) \\leq \\frac{1}{2}$ form some non-negative real numbers $\\alpha_2,\\beta_2, \\gamma_2,\\delta_2 $.\nLet $ {\\lambda_2}^{-1} = \\alpha_2 \\beta_2 \\gamma_2 \\delta_2 $. Then \n\\[ \\rho_{{q(\\cdot)}}( \\alpha_2 \\beta_2 \\gamma_2 \\delta_2 M_{\\alpha} a) = \\sum_{m \\in {\\mathbb Z}} [\\alpha_2 \\beta_2 \\gamma_2 \\delta_2 M_{\\alpha} a(m)]^{q(m)} \\]\nTo estimate this term we perform \nCalderon Zygmund decomposition for sequences $\\left \\{ a(k) \\right \\}$ and use Lemma[$\\ref{CD_Template}$] for the fractional Hardy-Littlewood maximal operator $M_{\\alpha}$. We will show that $\\rho_{{q(\\cdot)}}( \\alpha_2 \\beta_2 \\gamma_2 \\delta_2 M_{\\alpha} a) \\leq \\frac{1}{2} $ for suitable choices of $ \\alpha_2, \\beta_2, \\gamma_2, \\delta_2$.\nIf we set $\\alpha_2 = A^{q_{-}} 2^{{q_{+}}{(\\alpha-1)}}$ and using Lemma[$\\ref{CD_Template}$] for $M_{\\alpha}$, we get\n\\[\\sum_{m \\in {\\mathbb Z}} [ \\alpha_2 \\beta_2 \\gamma_2 \\delta_2 M_{\\alpha} a(m) ]^{q(m)} \\leq \\sum_{k,j} \\sum_{E^k_j} \\biggl( \\beta_2 \\gamma_2 \\delta_2 \\abs{2I^k_j}^{\\alpha-1} \\sum_{r \\in 2I^k_j} a(r) \\biggr)^{q(m)} \\label{eq:eq3}\\tag{CD3} \\]\n We have to estimate right hand side of $\\eqref{eq:eq3}$. At this point, we note that $q_\\infty < \\infty$. Let $ g_2(r) = {a(r)}^{p(r) } $, then $\\eqref{eq:eq3}$ becomes,\n\\[ \\sum_{k,j} \\sum_{E^k_j} \\biggl( \\beta_2 \\gamma_2 \\delta_2 \\abs{ 2I^k_j}^{\\alpha -1} \\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{q(m)} \\]\n\nNote, since $\\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$, it follows that the exponents $p_\\infty, q_\\infty$ satisfy $ \\frac{1}{p_\\infty} - \\frac{1}{q_\\infty} = \\alpha $. Hence, using Lemma $\\ref{sak_lem1}$ with exponents $p_\\infty, q_\\infty$. \n\\begin{align*}\n\\abs{2I^k_j}^{\\alpha -1} \\sum_{ r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} & \n& \\leq \\biggl( \\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{ p_\\infty } { p(r)}} \\biggr)^{ \\frac{1}{p_\\infty} - \\frac{1}{q_\\infty} } \\biggl( \\frac{1}{\\\n\\abs{2I^k_j} }\\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{\\frac{p_\\infty}{q_\\infty} } \\label{eq:eq4}\\tag{CD4} \n\\end{align*}\n\nFurther, note the following estimates.\n\\begin{enumerate}\n\\item \n${g_2(r)}^{p_\\infty} \\leq 1$. since $g_2(r)^{p_\\infty} = {a(r)}^{p(r) p_\\infty} \\leq 1$ since $ a(r) \\leq 1$ \\\\\n\n\\item \nUsing $R(k) = (e+ \\abs{k})^{-N}$, with $N >1$. By taking $N$ large enough, it follows that\n\\[ \\sum_{m \\in {\\mathbb Z}} {R(m)}^{\\frac{1}{p_\\infty}} \\leq \\sum_{m \\in {\\mathbb Z}} {R(m)}^{\\frac{1}{q_\\infty}} \\leq 1 \\]\nBy taking $N$ large, the sum i.e $ \\sum_{m \\in {\\mathbb Z}} R(m) \\leq 1 $. \\\\\n\n\\item Also, note that \n\\[ \\sum_{r \\in 2I^k_j} g_2(r) \\leq \\sum_{r \\in 2I^k_j} a_2(r)^{p(r)} \\leq \\sum_{r \\in {\\mathbb Z}} a(r)^{p(r)} \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} =1 \\] \\\\\n\n\\item Since ${g_2(k)}^{p_\\infty} \\leq 1$. Put $ F = {g_2(k)}^{p_\\infty} $, so that $F \\leq 1$. 
Hence, we use following form of Lemma[$\\ref{sak_lem1}$] , where $\\frac{1}{r(\\cdot)} $ is taken as $LH_\\infty$ constant.\n\\[ \\sum_{m \\in {\\mathbb Z}} F(m)^{\\frac{1}{p(m)}} \\leq C \\sum_{m \\in {\\mathbb Z}} F(m)^{\\frac{1}{p_\\infty}} + C \\sum_{m \\in {\\mathbb Z}} {R(m)}^{\\frac{1}{p_\\infty}} \\]\nSince $ \\frac{1}{{p(\\cdot)}} \\in LH_\\infty({\\mathbb Z}) $ and using Lemma[$\\ref{sak_lem1}$] with exponents $p_\\infty, q_\\infty$ with $ F= {g_2(r)}^{p_\\infty} \\leq 1$, based on estimates [1-4], it follows that\n\\begin{align*}\n\\sum_{{\\mathbb Z}} \\biggl( g_2(k)^{p_\\infty} \\biggr)^{\\frac{1}{p(m)} } & \\leq C \\sum_{m \\in {\\mathbb Z}} \\biggl( g_2(k)^{p_\\infty} \\biggr)^{\\frac{1}{p_\\infty}} + C \\sum_{m \\in {\\mathbb Z}} R(m)^{\\frac{1}{p_\\infty}} \\\\\n&= C \\sum_{m \\in {\\mathbb Z}} g_2(k) + C \\sum_{m \\in {\\mathbb Z}} R(m)^{\\frac{1}{p_\\infty}} \\leq C\n\\end{align*}\nTherefore $\\eqref{eq:eq4}$ is\n\\begin{align*}\n \\abs{2I^k_j}^{\\alpha -1} \\sum_{ r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \n& \\leq \\biggl( \\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{ p_\\infty } { p(r)}} \\biggr)^{ \\frac{1}{p_\\infty} - \\frac{1}{q_\\infty} } \\biggl( \\frac{1}{\\\n\\abs{2I^k_j} }\\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{\\frac{p_\\infty}{q_\\infty} } \\\\\n& \\leq C^{\\alpha} \\biggl( \\frac{1}{\\abs{2I^k_j} }\\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{\\frac{p_\\infty}{q_\\infty} }\n\\end{align*}\n\n\n\n\\end{enumerate}\n\n\n\nTherefore, using estimates [1-4], we can choose constant $\\beta_2 >0 $ such that\n\\begin{align*}\n\\sum_{k,j} \\sum_{E^k_j} & \\biggl( \\beta_2 \\gamma_2 \\delta_2 \\abs{ 2I_j}^{\\alpha-1} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{q(m)} \\leq \\sum_{k,j} \\sum_{E^k_j} \\gamma_2 \\delta_2 \\biggl( \\biggl( \\frac{1}{\\abs{2I^k_j}} \\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{1}{p(r)}} \\biggr)^{p_\\infty} \\biggr)^{\\frac{q(m)}{q_\\infty} }\n\\end{align*}\n\nNote, $ \\frac{1}{\\abs{2I^k_j}} \\sum_{r \\in 2I^k_j} {g_2(r)}^{\\frac{p_\\infty} { p(r)} } \\leq \\frac{1}{\\abs{2I^k_j}} \\sum_{r \\in {\\mathbb Z}} {g_2(r)}^{\\frac{p_\\infty} { p(r)} } \\leq 1 $. \\\\\nSo, let $ F = \\biggl( g_2(k)^{\\frac{1}{p(k)}} \\biggr)^{q(k) p_\\infty} \\leq \\biggl( \\sum_{r \\in {\\mathbb Z}} g_2(k)^{\\frac{1}{p(k)}} \\biggr)^{q(k) p_\\infty} \\leq 1$. 
\\\\\n Using Lemma[$\\ref{sak_lem1}$] and with $\\frac{1}{{q(\\cdot)}} \\in LH_\\infty({\\mathbb Z})$, we get\n\\[ \\sum_{k \\in {\\mathbb Z}} F^{\\frac{1}{q_\\infty}} \\leq C_1 \\sum_{k \\in {\\mathbb Z}} F^{\\frac{1}{q(k)}} + C_2 \\sum_{k \\in {\\mathbb Z}} R^{\\frac{1}{q_\\infty}} \\]\nHence,\n\\begin{align*}\nF^{\\frac{1}{q_\\infty}} &\\leq \\sum_{k \\in {\\mathbb Z}} F^{\\frac{1}{q_\\infty}} \\leq C_1 \\sum_{k \\in {\\mathbb Z}} F^{\\frac{1}{q(k)}} + C_2 \\sum_{k \\in {\\mathbb Z}} R^{\\frac{1}{q_\\infty}} \\\\\n& = C_1 \\biggl( \\sum_{k \\in {\\mathbb Z}} g_2(k)^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} + C_2 \\sum_{k \\in {\\mathbb Z}} R^{\\frac{1}{q_\\infty}} \n\\end{align*}\n\nTherefore,\n\\begin{align*}\n\\biggl( \\sum_{k \\in {\\mathbb Z}} {g_2(k)}^{\\frac{1}{p(k)}} \\biggr)^{q(k) \\frac{p_\\infty}{q_\\infty} } & \\leq C_1 \\biggl( \\sum_{k \\in {\\mathbb Z}} {g_2(k)}^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} + C_2 \\sum_{k \\in {\\mathbb Z}} {R(k)}^{\\frac{1}{q_\\infty}} \\\\\n&\\leq C_1 \\biggl( \\sum_{k \\in {\\mathbb Z}} {g_2(k)}^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} + C_2 \\sum_{k \\in {\\mathbb Z}} {R(k)}^{\\frac{1}{q_\\infty}} \n\\end{align*}\nand so,\n\\begin{align*}\n\\sum_{k,j} \\sum_{E^k_j} \\gamma_2 \\delta_2 & \\biggl( \\biggl( \\frac{1}{\\abs{2I^k_j}} \\sum_{k \\in 2I^k_j} g_2(k)^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} \\biggr)^{\\frac{q(m)}{q_\\infty}} \\\\\n&\\leq \\delta_2 \\biggl( C_1 \\biggl( \\frac{1}{\\abs{2I^k_j}} \\sum_{k \\in 2I^k_j} {g_2(k)}^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} + C_2 \\sum_{k \\in {\\mathbb Z}} {R(k)}^{\\frac{1}{q_\\infty}} \n\\end{align*}\n\nTake $\\gamma_2>0$ such that $C_1 =1, C_2 = \\frac{1}{6}$. Then\n\\begin{align*}\n\\sum_{k,j} \\sum_{E^k_j} & \\gamma_2 \\delta_2 \\biggl( \\biggl( \\frac{1}{\\abs{2I^k_j}} \\sum_{k \\in 2I^k_j} g_2(k)^{\\frac{1}{p(k)}} \\biggr)^{p_\\infty} \\biggr)^{\\frac{q(m)}{q_\\infty}} \\\\\n&\\leq \\sum_{k,j} \\sum_{E^k_j} \\delta_2 \\biggl( \\biggl (\\frac{1}{\\abs{2I^k_j}} \\sum_{k \\in 2I^k_j} g_2(k)^{\\frac{1}{p(k)}} \\biggl)^{p_\\infty} \\biggr) + \\frac{1}{6} \\sum_{k \\in {\\mathbb Z}} {R(k)}^{\\frac{1}{q_\\infty}} \\\\\n& \\leq \\sum_{{\\mathbb Z}} \\delta_2 {M g_2(\\cdot)}^{\\frac{1}{{p(\\cdot)}}} (k)^{p_\\infty} + \\frac{1}{6} \n\\end{align*}\n\nNote that the maximal operator is bounded on $\\ell^{p_\\infty} ({\\mathbb Z})$, since $p_\\infty \\geq {p_{-}} > 1$.\nAgain apply Lemma[$\\ref{sak_lem1}$] to get \\\\\n\\[ \\sum_{k \\in {\\mathbb Z}} M( g_2(\\cdot)^{\\frac{1}{{p(\\cdot)}}} ) (k)^{p_\\infty} \\leq C \\sum_{k \\in {\\mathbb Z}} {g_2(\\cdot)}^{\\frac{p_\\infty}{p(k)}} \\leq C \\sum_{k \\in {\\mathbb Z}} g_2(k) + C \\sum_{k \\in {\\mathbb Z}} R(k)^{\\frac{1}{p_\\infty}} \\leq C \\]\nFinally, note $a(\\cdot)$ is $\\ell^{p_\\infty} ({\\mathbb Z})$ integrable, ${g_2(\\cdot)}^{\\frac{1}{{p(\\cdot)}}}$ is also $\\ell^{p_\\infty} ({\\mathbb Z})$ integrable. 
So, we can choose $\delta_2 >0$ such that\n\[ \sum_{k \in {\mathbb Z}} \delta_2 M(g_2(\cdot)^{\frac{1}{{p(\cdot)}}} ) (k)^{p_\infty} + \frac{1}{6} \leq \frac{1}{3} + \frac{1}{6} = \frac{1}{2} \]\n\end{proof}\n\n\subsection{Weak $({p(\cdot)}, {p(\cdot)})$ Inequality for Fractional Hardy-Littlewood Maximal Operator}\n\begin{theorem}[Weak $({p(\cdot)}, {p(\cdot)})$ Inequality for Fractional Hardy-Littlewood Maximal Operator]\nGiven a non-negative sequence $ \left \{ a(i) \right \} \in \ell^{{p(\cdot)}}({\mathbb Z})$ and $0\leq \alpha <1$, let $ {p(\cdot)} \in \mathcal{S},{p_{+}}< \infty, {p_{-}}= 1 , 1 \leq {p_{-}} \leq {p_{+}} < \frac{1}{\alpha} \quad \text{and} \quad {p(\cdot)} \in LH_\infty({\mathbb Z})$. Define the exponent function ${q(\cdot)}$ by $\frac{1}{p(k)} - \frac{1}{q(k)} = \alpha$, $k \in {\mathbb Z}$. Then\n\[ \sup_{t > 0} t \normb{ \chi_{ \left \{ M_{\alpha} a(k) >9t \right \} }}_{\ell^{{q(\cdot)}} ({\mathbb Z})} \leq C \normb{a}_{\ell^{{p(\cdot)}} ({\mathbb Z})} \]\n\end{theorem}\n\begin{proof}\nWe may again assume that $\normb{a}_{\ell^{{p(\cdot)}}({\mathbb Z})}=1$ and we keep the notation $g_2(r)={a(r)}^{p(r)}$ of the previous subsection. Fix $t>0$ and denote $ \Omega = \left \{ k \in {\mathbb Z}: M_\alpha a(k) > 9t \right \}$.\n Applying the Calderon-Zygmund decomposition to the sequence $\left \{ a(k) \right \}$ at height $t$ and using Theorem[$\ref{CD_Weak}$], we get \n\[ \Omega \subseteq \cup_j 2I_j\]\nNow, consider disjoint sets $E_j$ such that $ E_j \subset 2I_j$ and $\Omega = \cup_j E_j$.\nTo prove the weak inequality, it will suffice to show that $ t \normb{\chi_{\Omega}}_{\ell^{{q(\cdot)}}({\mathbb Z})} \leq C$ and in turn it will suffice to show that for some $\alpha_2 >0$,\n\[ \rho_{{q(\cdot)}}(\alpha_2 t \chi_{\Omega} ) = \sum_{k \in \Omega} [\alpha_2 t]^{q(k)} \leq 1 \]\nWe will show that this sum is bounded by $\frac{1}{2}$ for a suitable choice of $\alpha_2$.\nTo estimate\n$ \sum_{k \in \Omega} [\alpha_2 t]^{q(k)}$, we note from the estimates of the previous subsection, \n\begin{align*}\n\sum_{k \in \Omega} [ \alpha_2 t]^{q(k)} & \leq \sum_{j} \sum_{E_j} \delta_2 \biggl( \frac{1}{\abs{2I_j}} \sum_{k \in 2I_j} {g_2(k)}^{\frac{1}{p(k)}} \biggr)^{p_\infty} + \frac{1}{6} \\\n& \leq \sum_{j} \sum_{E_j} \delta_2 \biggl( \frac{1}{\abs{2I_j}} \sum_{k \in 2I_j} {g_2(k)}^{\frac{p_\infty}{p(k)}} \biggr) + \frac{1}{6} \\\n&\leq \sum_{k \in \Omega} \delta_2 {g_2(k)}^{\frac{p_\infty}{p(k)}} + \frac{1}{6} \n\end{align*}\nNow choose $\delta_2 >0$ such that the right hand side is bounded by $\frac{1}{2}$.\n\end{proof}\n\n\n\section{Hardy-Littlewood Maximal Operator}\nIn this section, we prove the boundedness of the Hardy-Littlewood maximal operator on $\ell^{{p(\cdot)}}({\mathbb Z})$ spaces where ${p_{-}}>1$. The proof is based on the boundedness of the Hardy-Littlewood maximal operator on $\ell^p({\mathbb Z})$, where $p$ is a fixed number, $1< p< \infty$. \\\n\begin{remark}\nNote that when $\alpha =0$ the fractional Hardy-Littlewood maximal operator is nothing but the Hardy-Littlewood maximal operator.\nHowever, we can prove the strong type and weak type inequalities for the Hardy-Littlewood maximal operator on $\ell^{{p(\cdot)}}({\mathbb Z})$ directly from the corresponding results for the fixed exponent spaces $\ell^p({\mathbb Z})$, $ 1 < p < \infty$. \nThe key point of the proofs is Lemma[$\ref{sak_lem1}$].\n\end{remark}\n\n\begin{lemma}\label{sak_lem1}\nLet $p(\cdot) : {\mathbb Z} \to [0,\infty) $ be such that $ p(\cdot) \in LH_\infty({\mathbb Z})$ and $ 0< p_\infty < \infty$. Let $R(k) = (e + \abs{k})^{-N}, N > \frac{1}{p_\infty} $. 
Then there exists a real constant C depending on N and $LH_\\infty({\\mathbb Z})$ constant of $p(\\cdot)$ such that given any set E and any function F with $0\\leq F(y) \\leq 1$ for $ y \\in E$,\n\\end{lemma}\n\\begin{align}\n\\sum_E F(m)^{p(m)} \\leq \\sum_E F(m)^{p_\\infty} + C \\sum_E R(m)^{p_\\infty} \\label{eq:eq5}\\tag{CD5} \\\\ \n\\sum_E F(m)^{p_\\infty} \\leq C \\sum_E F(m)^{p(m)} + C \\sum_E R(m)^{p_\\infty} \\label{eq:eq6}\\tag{CD6} \n\\end{align}\n\\begin{proof}\nThis follows from continuous version $\\text{\\cite{fior_var_book}}$. Same line of proof works here\n\\end{proof}\n\n\n\n\\subsection{Strong $({p(\\cdot)}, {p(\\cdot)})$ Inequality}\n\\begin{theorem}[Strong $({p(\\cdot)}, {p(\\cdot)})$ Inequality]\n\n Given a non-negative sequence $ \\left \\{ a(i) \\right \\} \\in \\ell^{{p(\\cdot)}}({\\mathbb Z}), {p(\\cdot)} \\in \\mathcal{S},{p_{+}}< \\infty, {p_{-}}> 1$ , then\n\\[ \\normb{Ma}_{\\ell^{p(.)}({\\mathbb Z})} \\leq C \\normb{a}_{\\ell^{p(.)}({\\mathbb Z})} \\]\n\\end{theorem}\n\n\n\n\n\n\n\\begin{proof}\nBy homogenity, it is enough to prove the above result with the assumption $\\normb{a}_{{\\ell^{p(\\cdot)}}({\\mathbb Z})} =1 $. \nDue to modular property $\\ref{sak_corollary_2}$, this implies that \n$\\sum_{i \\in {\\mathbb Z}} \\abs{a(i)}^{p(i)} \\leq 1 $. So, it will suffice to prove that\n\\begin{align*}\n\\sum_{i \\in {\\mathbb Z}} \\abs{Ma(i)}^{p(i)} \\leq C \n\\end{align*}\n\nGiven that $ 0 \\leq a(k) \\leq 1 $ , it follows that $ 0 \\leq Ma(k) \\leq 1$. To prove boundedness of $ \\left \\{Ma \\right \\}$, \nwe start with Lemma[\\ref{sak_lem1}] as follows:\n\\[ \\sum_{k \\in {\\mathbb Z}} Ma(k)^{p(k)} \\leq C \\sum_{k \\in {\\mathbb Z}} Ma(k)^{p_\\infty} + C \\sum_{k \\in {\\mathbb Z}} R(k)^{p_\\infty} \\]\nSince $N > \\frac{1}{p_\\infty}$, $ \\sum_{k \\in {\\mathbb Z}} R(k)^{Np_\\infty} = \\sum_{k \\in {\\mathbb Z}} (\\frac{1}{e+\\abs{k}})^{Np_\\infty}$ converges and can be bounded as $\\leq 1$. So, the second integral is a constant depending only on $p_\\infty$ by taking sufficiently large $N > \\frac{1}{p_\\infty}$.\n\nTo bound the first integral, note that $ 1 < {p_{-}} \\leq p_\\infty$. Since $p_\\infty > 1$, M is bounded on $\\ell^{p_\\infty }({\\mathbb Z})$ and by using strong $(p,p)$ inequality valid for classical Lebesgue spaces with index $p_\\infty$, we get using Lemma[$\\ref{sak_lem1}$] and [$\\eqref{eq:eq5}$],\n\\begin{align*}\n\\sum_{k \\in {\\mathbb Z}} Ma(k)^{p_\\infty} \\leq C \\sum_{k \\in {\\mathbb Z}} a(k)^{p_\\infty} \\leq C \\sum_{k \\in{\\mathbb Z}} a(k)^{p(k)} + C \\sum_{k \\in {\\mathbb Z}} R(k)^{p_\\infty} \\leq C\n\\end{align*}\nLike previous case, the term involving summation of $R(k)$ is bounded by a constant depending only on $p_\\infty$ by taking sufficiently large $N > \\frac{1}{p_\\infty} $.\\\\\nTherefore, using above results,\n\\[ \\rho_{{p(\\cdot)}}(Ma) = \\sum_{k \\in {\\mathbb Z}} Ma(k)^{p(k)} \\leq C. 
\\]\n\\end{proof}\n\n\n\\subsection{Weak $({p(\\cdot)}, {p(\\cdot)})$ Inequality for Hardy-Littlewood Maximal Operator}\n\\begin{theorem}[Weak $({p(\\cdot)}, {p(\\cdot)})$ Inequality for Hardy-Littlewood Maximal Operator]\nGiven ${p(\\cdot)} \\in \\mathcal{S}, {p(\\cdot)} \\geq 1$, if ${p(\\cdot)} \\in LH_\\infty({\\mathbb Z})$, then\n\\[ \\sup_{t > 0} \\normb{ t \\chi_{ \\left \\{ Ma(n) >t \\right \\} }}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} \\leq c \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} \\]\nwhere constant depends on the Log-Holder constants of ${p(\\cdot)}, {p_{-}}$ and $p_\\infty$(if this value is finite) .\n\\end{theorem}\n\\begin{proof}\n\n\n\\begin{enumerate}\n\\item Case: ${p_{-}} > 1$ and $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} = 1$. \\\\\nLet $ A = \\left \\{ n \\in {\\mathbb Z} : Ma(n) > t \\right \\} $. Then, by the use of strong $ ({p(\\cdot)}, {p(\\cdot)})$ inequality for Hardy-Littlewood maximal operator from previous section, it follows that\n\\begin{align*}\n \\normb{ t \\chi_{ \\left \\{ n: Ma(n) >t \\right \\} }(k) }_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} & \\leq \\sum_{k \\in {\\mathbb Z}} \\abs{ t \\chi_{ \\left \\{ n: Ma(n) > t \\right \\} } (k) }^{p(k)}\\leq \\sum_{ k \\in A} \\abs{t}^{p(k)} \\\\\n& \\leq \\sum_{k \\in A} {Ma}^{p(k)} \\leq \\sum_{k \\in {\\mathbb Z}} {Ma(k)}^{p(k)} = \\rho_{{p(\\cdot)}}( Ma) \\leq C = C \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\\\\n\\end{align*}\n\\item Case: ${p_{-}} >1 $ and $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\neq 1 $ \\\\\nDenote, $b(n) = \\frac{a(n)}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}}$ , so that $\\normb{b}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}=1$. Let $ A = \\left \\{ n \\in {\\mathbb Z} : Ma(n) > t \\right \\} $\nBy homogenity of the norm, it follows that,\n\\begin{align*}\n\\normb{t \\chi_{ \\left \\{ n: Ma(n) > t \\right \\}}(k) }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} &\\\\\n& = \\normb{t \\chi_{ \\left \\{ n: Mb(n) > \\frac{t}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}} \\right \\}}(k) }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\\\\n& = \\normb{ \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} \\frac{t}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}} \\chi_{\\left \\{ n: Mb(n) > \\frac{t}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}} \\right \\} }(k) }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\\\\n& = \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} \\normb{ \\frac{t}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}} \\chi_{\\left \\{ n: Mb(n) > \\frac{t}{\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}} \\right \\} }(k) }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\\\\n& \\leq C \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})}\n\\end{align*}\n\\item Case: ${p_{-}}=1$ and $\\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} =1$ \\\\\nLet $ A = \\left \\{ n \\in {\\mathbb Z} : Ma(n) > t \\right \\} $. 
Then, by the use of strong $ ({p(\\cdot)}, {p(\\cdot)})$ inequality for Hardy-Littlewood maximal operator from previous section, it follows that\n\\begin{align*}\n \\normb{ t \\chi_{ \\left \\{ n: Ma(n) >t \\right \\} }(k) }_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} & \\leq \\sum_{k \\in {\\mathbb Z}} \\abs{ t \\chi_{ \\left \\{ n: Ma(n) < t \\right \\} } (k) }^{p(k)}= \\sum_{ k \\in A} \\abs{t}^{p(k)} \n\\\\\n=\\begin{cases}\nt^{{p_{+}}} \\abs{A} & t \\geq 1 \\\\\nt^{{p_{-}}} \\abs{A} & t \\leq 1\n\\end{cases}\n\\end{align*}\nNow, by use of weak$({p_{+}}, {p_{+}})$ inequality applicable to $\\ell^{{p_{+}}}({\\mathbb Z})$ spaces when $t \\geq 1$ and weak$({p_{+}}, {p_{+}})$ inequality applicable to $\\ell^{{p_{-}}}({\\mathbb Z})$ spaces when $t \\leq 1$ respectively, it follows that\n\\[\\abs{A} \\leq \n\\begin{cases}\n\\frac{C}{t^{{p_{+}}}} \\normb{a}_{\\ell^{p_{+}} ({\\mathbb Z})} & t \\geq 1 \\\\ \\\\\n\\frac{C}{t^{{p_{-}}}} \\normb{a}_{\\ell^{p_{-}} ({\\mathbb Z})} & t \\leq 1\n\\end{cases}\n\\]\nNote, by Lemma$\\ref{sak_corollary_2}$,\n\\[ (\\normb{a}_{{p_{+}}})^{{p_{+}}} \\leq \\sum_{k \\in {\\mathbb Z}} {\\abs{a(k)}}^{{p_{+}}} \\leq \\sum_{k \\in {\\mathbb Z}} {\\abs{a(k)}}^{p(k)} \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} \\]\nand $\\normb{a}_{{p_{-}}} = \\normb{a}_1 \\leq \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})}$ as $\\ell_p({\\mathbb Z}) \\subset \\ell_1({\\mathbb Z})$. Hence\n\\begin{align*}\n\\abs{A} \\leq \n\\begin{cases}\n\\frac{C}{t^{{p_{+}}}} \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} & t \\geq 1 \\\\\n\\frac{C}{t} \\normb{a}_{\\ell^{{p(\\cdot)}}({\\mathbb Z})} & t \\leq 1\n\\end{cases}\n\\end{align*}\nand therefore,\n\\begin{align*}\n \\normb{ t \\chi_{ \\left \\{ n: Ma(n) >t \\right \\} }(k) }_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} & = \n\\begin{cases}\n\\leq C \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} & t \\geq 1 \\\\\n\\leq C \\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} & t \\leq 1\n\\end{cases}\n\\end{align*}\n\\item Case: ${p_{-}}=1$ and $\\normb{a}_{\\ell^{{p(\\cdot)}} ({\\mathbb Z})} \\neq 1$. The conclusion follows similar to case(2) and case(3).\n\\end{enumerate}\n\\end{proof}\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nWe let $O(n)$ denote the orthogonal group of the Euclidean $n$-space $\\mathbb{R}^{n}$ and $\\theta_n$ its Haar probability measure. We metrize $O(n)$ with the usual operator norm. Let also $\\mathcal L^n$ stand for the Lebesgue measure on $\\mathbb{R}^{n}$ and let $\\dim$ stand for the Hausdorff dimension and $\\mathcal H^s$ for $s$-dimensional Hausdorff measure. We shall prove the following theorem:\n\n\n\n\\begin{thm}\\label{theo1}\nLet $s$ and $t$ be positive numbers with $s+t > n+1$. Let $A$ and $B$ be Borel subsets of $\\mathbb{R}^{n}$ with $\\mathcal H^s(A)>0$ and $\\mathcal H^t(B)>0$. Then \nthere is $E\\subset O(n)$ such that \n$$\\dim E\\leq 2n-s-t+(n-1)(n-2)\/2=n(n-1)\/2-(s+t-(n+1))$$ \nand for $g\\in O(n)\\setminus E$,\n\\begin{equation}\\label{eq4}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (g(B)+z)\\geq s+t-n\\})>0.\n\\end{equation}\n\\end{thm}\n\nThe version stated in the abstract concerning the case $\\dim A + \\dim B > n+1$ is slightly weaker than Theorem \\ref{theo1}. 
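For instance, when $n=2$ the hypothesis reads $s+t>3$ and $(n-1)(n-2)\/2=0$, so Theorem \ref{theo1} gives a set $E\subset O(2)$ with\n$$\dim E\leq 4-s-t<1=\dim O(2)$$\nsuch that for every $g\in O(2)\setminus E$ the set of $z\in\mathbb{R}^{2}$ with $\dim A\cap (g(B)+z)\geq s+t-2$ has positive Lebesgue measure.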
Notice that the above upper bound for the dimension of $E$ is at least $(n-1)(n-2)\/2$ which is the dimension of $O(n-1)$. In Section \\ref{Examples} we show that it is needed in the estimates. The assumption $s+t>n+1$ only comes from the fact that the statement is trivial if $s+t\\leq n+1$: then the above upper for $\\dim E$ is at least $n(n-1)\/2=\\dim O(n)$ and we could take $E=O(n)$. \n\nThis is an exceptional set estimate related to the following result of \n\\cite{M2}: (\\ref{eq4}) holds for $\\theta_n$ almost all $g\\in O(n)$ if one of the sets has dimension bigger than $(n+1)\/2$, see also Chapter 13 in \\cite{M4} and Chapter 7 in \\cite{M5}. This of course is satisfied when $s+t>n+1$, as in the theorem. It is expected that this generic result with respect to $\\theta_n$ should hold whenever $\\dim A + \\dim B > n$. Under this condition it was proved (without exceptional set estimates) in \\cite{K} and \\cite{M1} provided the orthogonal group is replaced by a larger transformation group, for example by similarity maps as in \\cite{M1}, or, more generally by Kahane in \\cite{K}, by any closed subgroup of the general linear group acting transitively outside the origin. For the orthogonal group no dimensional restrictions are needed provided one of the sets satisfies some extra condition, for example if it is rectifiable, see \\cite{M1}, or a Salem set, see \\cite{M3}.\n\nIt is easy to see, cf. the remark at the end of \\cite{M2}, that in Theorem \\ref{theo1} the positivity of the Hausdorff measures cannot be relaxed to $\\dim A = s$ and $\\dim B = t$.\n\nIf one of the sets supports a measure with sufficiently fast average decay over spheres for the Fourier transform, we can improve the estimate of Theorem \\ref{theo1}. Then the results even hold for the sum sets provided the dimensions are big enough. This is given in Theorem \\ref{theo3}. It yields immediately the following result in case one of the sets is a Salem set. By definition, $A$ is a Salem set if for every $02n-1$, then \n\\begin{equation}\\label{eq5}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (B+z)\\geq u\\})>0.\n\\end{equation}\n(b)If $\\dim A+\\dim B\\leq 2n-1$, then there is $E\\subset O(n)$ with \n$$\\dim E\\leq n(n-1)\/2-u$$ \nsuch that for $g\\in O(n)\\setminus E$,\n\\begin{equation}\\label{eq7}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (g(B)+z)\\geq u\\})>0.\n\\end{equation}\n\\end{thm}\n\nAnother consequence of Theorem \\ref{theo3} is the following improvement of Theorem \\ref{theo1} in the case where one of the sets has small dimension:\n\n\\begin{thm}\\label{theo6}\nLet $A$ and $B$ be Borel subsets of $\\mathbb{R}^{n}$ and suppose that $\\dim A\\leq (n-1)\/2$. If $00.\n\\end{equation}\n\\end{thm}\n\nThe method used to prove Theorem \\ref{theo1} can easily be modified to other subgroups of the general linear group $GL(n)$ in place of the orthogonal group. For example, let $S(n)$ be the group of similarities, the compositions of orthogonal maps and dilations. Then $\\dim S(n)=n(n-1)\/2+1$ and for any $x,z\\in\\mathbb{R}^{n}\\setminus\\{0\\}$, the dimension of $\\{g\\in S(n): g(z)=x\\}$ is the same as the dimension of $O(n-1)$, that is, $(n-1)(n-2)\/2$. With small changes in the proof of Theorem \\ref{theo1} this leads to \n\n\n\\begin{thm}\\label{theo2}\nLet $s$ and $t$ be numbers with $0 n$. Let $A$ and $B$ be Borel subsets of $\\mathbb{R}^{n}$ such that $\\mathcal H^s(A)>0$ and $\\mathcal H^t(B)>0$. 
Then \nthere is $E\\subset S(n)$ with \n$$\\dim E\\leq 2n-s-t+(n-1)(n-2)\/2$$ \nand for $g\\in S(n)\\setminus E$,\n\\begin{equation}\\label{eq6}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (g(B)+z)\\geq s+t-n\\})>0.\n\\end{equation}\n\\end{thm}\n\n\n\\section{Peliminaries}\n\nThe proof of Theorem \\ref{theo1} is based on the relationship of the Hausdorff dimension to the energies of measures and their relations to the Fourier transform. For $A\\subset\\mathbb{R}^{n}$ (or $A\\subset O(n)$) we denote by $\\mathcal M(A)$ the set of non-zero Radon measures $\\mu$ on $\\mathbb{R}^{n}$ with compact support $\\spt\\mu\\subset A$. The Fourier transform of $\\mu$ is defined by\n$$\\widehat{\\mu}(x)=\\int e^{-2\\pi ix\\cdot y}\\,d\\mu y,~ x\\in\\mathbb{R}^{n}.$$\nFor $0(n-1)(n-2)\/2$. If $\\theta(B(g,r))\\leq r^{\\a}$ for all $g\\in O(n)$ and $r>0$, then for $x,z\\in\\mathbb{R}^{n}\\setminus\\{0\\}, r>0$,\n\\begin{equation}\\label{eq11}\n\\theta(\\{g:|x-g(z)|< r\\})\\lesssim (r\/|z|)^{\\a-(n-1)(n-2)\/2}.\n\\end{equation}\n\\end{lm}\n\\begin{proof}\nFirst we may clearly assume that $|z|=1$, and then also that $|x|=1$, because $|x-g(z)|< r$ implies $|x\/|x|-g(z)|< 2r$. Then $O_{x,z}:=\\{g\\in O(n): g(z)=x\\}$ can be identified with $O(n-1)$. Hence it is a smooth compact $(n-1)(n-2)\/2$-dimensional submanifold of $O(n)$ which implies that it can be covered with roughly $r^{-(n-1)(n-2)\/2}$ balls of radius $r$. If $g\\in G$ satisfies $|x-g(z)|< r$, then $g$ belongs to the $r$-neighbourhood of $O_{x,z}$. The lemma follows from this.\n\\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{theo1}}\n\nThe key to the proof of Theorem \\ref{theo1} is the following energy estimate. \nFor $\\mu, \\nu\\in\\mathcal M(\\mathbb{R}^{n}), g\\in O(n)$ and $z\\in\\mathbb{R}^{n}$, let $\\nu_{\\varepsilon}=\\psi_{\\varepsilon}\\ast\\nu$ as above and set\n\n\\begin{equation}\\label{eq0}\n\\nu_{g,z,\\varepsilon}(x)=\\nu_{\\varepsilon}(g^{-1}(x)-z),\\ x\\in\\mathbb{R}^{n}.\n\\end{equation}\n\n\\begin{lm}\\label{lemma1} Let $\\beta>0$ and $\\theta\\in\\mathcal M(O(n))$ be such that $\\theta(O(n))\\leq 1$ and for $x,z\\in\\mathbb{R}^{n}\\setminus\\{0\\}, r>0,$ \n\\begin{equation}\\label{eq9}\n\\theta(\\{g\\in O(n):|x-g(z)|n-\\beta$. Let $\\mu, \\nu \\in \\mathcal M(\\mathbb{R}^{n})$. Then\n\\begin{equation}\\label{eq15}\n\\iint I_u(\\nu_{g,z,\\varepsilon}\\mu)\\,d\\mathcal L^nz\\,d\\theta g \\leq C(n,s,t)I_s(\\mu)I_t(\\nu).\n\\end{equation}\n\n\\end{lm}\n\n\\begin{proof}\nWe may assume that $I_s(\\mu)$ and $I_t(\\nu)$ are finite. 
\nDefine \n$$\\tilde{\\nu}_{g,x,\\varepsilon}(z)=\\nu_{\\varepsilon}(g^{-1}(x)-z),\\ z\\in\\mathbb{R}^{n}.$$\nThen \n$$\\widehat{\\tilde{\\nu}_{g,x,\\varepsilon}}(z)=e^{-2\\pi ig^{-1}(x)\\cdot z}\\widehat{\\nu_{\\varepsilon}}(-z).$$\nHence by Parseval's formula for $x,y\\in\\mathbb{R}^{n}, x\\neq y,$\n\\begin{align*}\n&\\int\\nu_{\\varepsilon}(g^{-1}(x)-z)\\nu_{\\varepsilon}(g^{-1}(y)-z)\\,d\\mathcal L^nz\\\\\n&=\\int\\widehat{\\tilde{\\nu}_{g,x,\\varepsilon}}(z)\\overline{\\widehat{\\tilde{\\nu}_{g,y,\\varepsilon}}(z)}\\,d\\mathcal L^nz\\\\\n&=\\int\\widehat{\\nu_{\\varepsilon}}(-z)\\overline{\\widehat{\\nu_{\\varepsilon}}(-z)}e^{-2\\pi ig^{-1}(x-y)\\cdot z}d\\mathcal L^nz\\\\\n\\end{align*}\nIt follows by Fubini's theorem\t that\n\\begin{align*}\n&I:=\\iint I_u(\\nu_{g,z,\\varepsilon}\\mu)\\,d\\mathcal L^nz\\,d\\theta g\\\\\n&=\\iiiint k_u(x-y)\\nu_{\\varepsilon}(g^{-1}(x)-z)\\nu_{\\varepsilon}(g^{-1}(y)-z)\\,d\\mu x\\,d\\mu y\\,d\\mathcal L^nz\\,d\\theta g\\\\\n&=\\iiint k_u(x-y)\\left(\\int\\nu_{\\varepsilon}(g^{-1}(x)-z)\\nu_{\\varepsilon}(g^{-1}(y)-z)\\,d\\mathcal L^nz\\right)\\,d\\mu x\\,d\\mu y\\,d\\theta g\\\\\n&=\\iiint k_u(x-y)\\left(\\int|\\widehat{\\nu_{\\varepsilon}}(z)|^2 e^{2\\pi ig^{-1}(x-y)\\cdot z}\\,d\\mathcal L^nz\\right)\\,d\\mu x\\,d\\mu y\\,d\\theta g\\\\\n&=\\iiint k_{u,g,z}\\ast\\mu(x)\\,d\\mu x|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\,d\\theta g,\n\\end{align*}\nwhere\n\\begin{equation*}\nk_{u,g,z}(x)=|x|^{-u}e^{2\\pi ig^{-1}(x)\\cdot z}=|x|^{-u}e^{2\\pi ix\\cdot g(z)}\n\\end{equation*}\nOne checks by direct computation that the Fourier transform of $k_{u,g,z}$, in the sense of distributions, is given by \n\\begin{equation*}\n\\widehat{k_{u,g,z}}(x)=\nc(n,u)|x-g(z)|^{u-n}.\n\\end{equation*}\nIt follows that\n$$\\iint k_{u,g,z}\\ast\\mu\\,d\\mu=\\int\\widehat{k_{u,g,z}}|\\widehat{\\mu}|^2\\,d\\mathcal L^n=c(n,u)\\int|x-g(z)|^{u-n}|\\widehat{\\mu}(x)|^2\\,d\\mathcal L^nx.$$\nAs $I_u(\\mu)<\\infty$, this is easily checked approximating $\\mu$ with $\\psi_{\\varepsilon}\\ast\\mu$ and using the Lebesgue dominated convergence theorem. \nThus \n\\begin{equation}\\label{eq8}\nI=c(n,u)\\iiint |x-g(z)|^{u-n}\\,d\\theta g|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz.\n\\end{equation}\nWe first observe that if $|x|\\geq 2|z|$, then\n$$\\int |x-g(z)|^{u-n}\\,d\\theta g\\leq \\theta(O(n))2^{n-u}|x|^{u-n}\\leq 2^{2n}|x|^{s-n}|z|^{t-n}.$$\nSimilarly if $|z|\\geq 2|x|$, then\n$$\\int |x-g(z)|^{u-n}\\,d\\theta g\\leq \\theta(O(n))2^{n-u}|z|^{u-n}\\leq 2^{2n}|x|^{s-n}|z|^{t-n}.$$\nSuppose then that $|z|\/2\\leq|x|\\leq2|z|.$ Then by the assumption\n\\begin{align*}\n&\\int|x-g(z)|^{u-n}\\,d\\theta g = \\int_0^{\\infty}\\theta(\\{g:|x-g(z)|^{u-n}>\\lambda\\})\\,d\\lambda\\\\\n&=(n-u)\\int_0^{\\infty}\\theta(\\{g:|x-g(z)|0$. \nIt follows that \n\\begin{equation}\\label{eq13}\nI\\lesssim \\iint |x|^{s-n}|z|^{t-n}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(y)|^2\\,dx\\,dy\\lesssim I_s(\\mu)I_t(\\nu),\n\\end{equation}\n as required.\n\n\n\n\\end{proof}\n\nNext we show that, with $\\theta$ as above, for $\\theta\\times\\mathcal L^n$ almost all $(g,z)$ the measures $\\nu_{g,z,\\varepsilon}\\mu$ converge weakly as $\\varepsilon\\to 0$. It is immediate that for almost all $(g,z)$ this takes place through some sequence $(\\varepsilon_j)$, depending on $(g,z)$, but we would at least need one sequence which is good for almost all $(g,z)$. 
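\n\nFor the reader's convenience, we sketch the computation behind the formula for $\\widehat{k_{u,g,z}}$ used in the proof of Lemma \\ref{lemma1}; only the shape of the formula, and not the exact value of the constant $c(n,u)$, is needed there. For $0<u<n$ the Riesz kernel $k_u(x)=|x|^{-u}$ has, in the sense of distributions, the Fourier transform\n$$\\widehat{k_u}(x)=c(n,u)|x|^{u-n},$$\nsee, e.g., \\cite{M4} or \\cite{M5}. Since $k_{u,g,z}(x)=e^{2\\pi ix\\cdot g(z)}k_u(x)$, the modulation rule (if $f_a(y)=e^{2\\pi ia\\cdot y}f(y)$, then $\\widehat{f_a}(x)=\\widehat{f}(x-a)$), applied with $a=g(z)$, gives $\\widehat{k_{u,g,z}}(x)=c(n,u)|x-g(z)|^{u-n}$.\n\n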
The proof of the following theorem was inspired by an argument of Kahane in \\cite{K}.\n\n\\begin{thm}\\label{theo5}\nLet $s,t$ and $u$ be positive numbers with $u=s+t-n>0$ and let $\\mu, \\nu\\in\\mathcal M(\\mathbb{R}^{n})$ with $I_s(\\mu)<\\infty$ and $I_t(\\nu)<\\infty$. Let $\\psi_{\\varepsilon}$ be an approximate identity and $\\nu_{\\varepsilon}=\\psi_{\\varepsilon}\\ast\\nu$. For $g\\in O(n)$ and $z\\in\\mathbb{R}^{n}$, let $\\nu_{g,z,\\varepsilon}$ be as in (\\ref{eq0}). Finally, let $\\theta\\in\\mathcal M(O(n))$ be as in Lemma \\ref{lemma1}. Then for $\\theta\\times\\mathcal L^n$ almost all $(g,z)$, as $\\varepsilon\\to 0$, the measures \n$\\nu_{g,g^{-1}(z),\\varepsilon}\\mu$ converge weakly to a measure $\\lambda_{g,z}$ with the properties\n\\begin{itemize}\n\\item[(a)] $$\\spt\\lambda_{g,z}\\subset\\spt\\mu\\cap(g(\\spt\\nu)+z),$$\n\\item[(b)]$$\\int\\lambda_{g,z}(\\mathbb{R}^{n})\\,d\\mathcal L^nz = \\mu(\\mathbb{R}^{n})\\nu(\\mathbb{R}^{n})\\ \\text{for}\\ \\theta\\ \\text{almost all}\\ g\\in O(n),$$\n\\item[(c)]\n$$\\iint I_u(\\lambda_{g,z})\\,d\\mathcal L^nz\\,d\\theta g\\leq C(n,s,t)I_s(\\mu)I_t(\\nu).$$ \n\\end{itemize} \n\\end{thm}\n\\begin{proof}\nIf the convergence takes place, the support property (a) is clear. Using the change of variable from $z$ to $g^{-1}(z)$ in the appropriate places, it is then sufficient to show that for $\\theta\\times\\mathcal L^n$ almost all $(g,z)$, as $\\varepsilon\\to 0$, the measures \n$\\nu_{g,z,\\varepsilon}\\mu$ converge weakly to a measure $\\tilde{\\lambda}_{g,z}$ such that (b) and (c) hold with $\\lambda_{g,z}$ replaced by $\\tilde{\\lambda}_{g,z}$. \n\nLet $\\phi\\in C^+_0(\\mathbb{R}^{n})$. Then by Lemma \\ref{lemma1},\n$$\\iint (\\int\\nu_{g,z,\\varepsilon}\\phi\\,d\\mu)^2\\,d\\mathcal L^nz\\,d\\theta g\\lesssim I_s(\\mu)I_t(\\nu)<\\infty.$$ \nHence by Fatou's lemma\n$$\\int \\left(\\liminf_{\\varepsilon\\to 0}\\int(\\int\\nu_{g,z,\\varepsilon}\\phi\\,d\\mu)^2\\,d\\mathcal L^nz\\right)\\,d\\theta g\\lesssim I_s(\\mu)I_t(\\nu)<\\infty.$$\nThus for $\\theta$ almost all $g$ there is a sequence $(\\varepsilon_j)$ tending to $0$ such that \n$$\\sup_j\\int(\\int\\nu_{g,z,\\varepsilon_j}\\phi\\,d\\mu)^2\\,d\\mathcal L^nz<\\infty.$$\nOn the other hand, defining the measure $\\mu_{\\phi,g}$ by $\\int h\\,d\\mu_{\\phi,g}=\\int h(-g^{-1}(x))\\phi(x)\\,d\\mu x$, we have\n$$\\int\\nu_{g,z,\\varepsilon}\\phi\\,d\\mu=\\int\\nu_{\\varepsilon}(g^{-1}(x)-z)\\phi(x)\\,d\\mu x=\\mu_{\\phi,g}\\ast\\nu\\ast\\psi_{\\varepsilon}(-z),$$\nand the measures $\\mu_{\\phi,g}\\ast\\nu\\ast\\psi_{\\varepsilon}$ converge weakly to $\\mu_{\\phi,g}\\ast\\nu$. Consequently, $\\mu_{\\phi,g}\\ast\\nu$ is an $L^2$ function on $\\mathbb{R}^{n}$ and the convergence takes place almost everywhere. It follows now that for $\\theta$ almost all $g\\in O(n)$ and for every $\\phi\\in C^+_0(\\mathbb{R}^{n})$ the finite limit\n\\begin{equation}\\label{eq2}\nL_{g,z}\\phi:=\\lim_{\\varepsilon\\to 0}\\int\\nu_{g,z,\\varepsilon}\\phi\\,d\\mu=\\lim_{\\varepsilon\\to 0}\\mu_{\\phi,g}\\ast\\nu\\ast\\psi_{\\varepsilon}(-z)=\\mu_{\\phi,g}\\ast\\nu(-z)\n\\end{equation}\nexists for almost all $z\\in\\mathbb{R}^{n}$. Let $\\mathcal D$ be a countable dense subset of $C^+_0(\\mathbb{R}^{n})$ containing a function $\\phi_0$ which is $1$ on the support of $\\mu$. Then there is a set $E$ of measure zero such that (\\ref{eq2}) holds for all $z\\in\\mathbb{R}^{n}\\setminus E$ for all $\\phi\\in \\mathcal D$, the exceptional set is indenpendent of $\\phi$. 
Applying (\\ref{eq2}) to $\\phi_0$ we see that\n$$\\sup_{\\varepsilon>0}\\int\\nu_{g,z,\\varepsilon}\\,d\\mu<\\infty.$$\nThen by the Cauchy criterion the denseness of $\\mathcal D$ yields that whenever $z\\in\\mathbb{R}^{n}\\setminus E$, there is the finite limit $L_{g,z}\\phi:=\\lim_{\\varepsilon\\to 0}\\int\\nu_{g,z,\\varepsilon}\\phi\\,d\\mu$ for all $\\phi\\in C^+_0(\\mathbb{R}^{n})$. Hence by the Riesz representation theorem the positive linear functional $L_{g,z}$ corresponds to a Radon measure $\\tilde{\\lambda}_{g,z}$ to which the measures $\\nu_{g,z,\\varepsilon}\\mu$ converge weakly.\n\nThe claim (b) follows from (\\ref{eq2}):\n\\begin{align*}\n\\int\\lambda_{g,z}(\\mathbb{R}^{n})\\,d\\mathcal L^nz = &\\int L_{g,z}\\phi_0\\,d\\mathcal L^nz \n= \\int\\mu_{\\phi_0,g}\\ast\\nu(-z)\\,d\\mathcal L^n\\\\\n&=\\mu_{\\phi_0,g}(\\mathbb{R}^{n})\\nu(\\mathbb{R}^{n})=\\mu(\\mathbb{R}^{n})\\nu(\\mathbb{R}^{n}).\t\n\\end{align*}\nThe claim (c) follows from Lemma \\ref{lemma1}, Fatou's lemma and the lower semicontinuity of the energy-integrals under the weak convergence.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{theo1}]\nTheorem \\ref{theo1} follows from Lemma \\ref{lemma1} and Theorem \\ref{theo5}: Let \n$$G=\\{g\\in O(n): \\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (g(B)+z))\\geq s+t-n\\})=0\\}.$$\nThen $G$ is a Borel set. We leave checking this to the reader. It is a bit easier when $A$ and $B$ are compact. We may assume the compactness since $A$ and $B$ as in the theorem contain compact subsets with positive measure, cf \\cite{Fe}, 2.10.48. Suppose, contrary to what is claimed, that $\\dim G > 2n-s-t+(n-1)(n-2)\/2.$ Let \n$\\dim G > \\a > 2n-s-t + (n-1)(n-2)\/2$. Then by Frostman's lemma, cf. \\cite{M4}, Theorem 8.8, and Lemma \\ref{lemma2} there is $\\theta\\in\\mathcal M(G)$ satisfying (\\ref{eq9}) with $\\beta=\\a-(n-1)(n-2)\/2>2n-s-t$. \nBy Frostman's lemma there are $\\mu\\in\\mathcal M(A)$ and $\\nu\\in\\mathcal M(B)$ such that $\\mu(B(x,r))\\leq r^s$ and $\\nu(B(x,r))\\leq r^t$ for all balls $B(x,r)$. Then by easy estimation, for example, as in the beginning of Chapter 8 in \\cite{M4}, $I_{s'}(\\mu)<\\infty$ and $I_{t'}(\\nu)<\\infty$ for $00\\}$ has positive Lebesgue measure for $\\theta$ almost all $g$. It then follows from Theorem \\ref{theo5}(a) and (c) and (\\ref{eq3}) that for $\\theta$ almost all $g$,~ $\\dim A\\cap(g(B)+z)\\geq s+t-n$ for almost all $z\\in E_g$. This contradicts the definition of $G$ and the fact that $\\theta$ has support in $G$. \n\\end{proof}\n\n\n\n\\section{Intersections and the decay of spherical averages}\\label{decay}\n\nFor $\\mu\\in\\mathcal M(\\mathbb{R}^{n})$ define the spherical averages\n$$\\sigma(\\mu)(r)=r^{1-n}\\int_{S(r)}|\\widehat{\\mu}(x)|^2\\,d\\sigma_r^{n-1}x, r>0,$$\nwhere $\\sigma_r^{n-1}$ is the surface measure on the sphere $S(r)=\\{x\\in\\mathbb{R}^{n}:|x|=r\\}$. \nNotice that if $\\sigma(\\mu)(r)\\lesssim r^{-\\gamma}$ for $r>0$ and for some $\\gamma>0$, then $I_s(\\mu)<\\infty$ for $00, I_{\\gamma}(\\mu)<\\infty$ and $I_t(\\nu)<\\infty$. 
\n\n(a) If $\\gamma+t>2n-1$, then \n\\begin{equation}\\label{eq16}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim \\spt\\mu\\cap (\\spt\\nu+z)\\geq \\gamma+t-n\\})>0.\n\\end{equation}\n(b) If $\\gamma+t\\leq 2n-1$, then there is $E\\subset O(n)$ with \n$$\\dim E\\leq 2n-1-\\gamma-t+(n-1)(n-2)\/2$$ \nsuch that for $g\\in O(n)\\setminus E$,\n\\begin{equation}\\label{eq17}\n\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim \\spt\\mu\\cap (g(\\spt\\nu)+z))\\geq \\gamma+t-n\\})>0.\n\\end{equation}\n\\end{thm}\n\n\\begin{proof} Let $u=\\gamma+t-n$. \nAs above, we only need to show that the the conclusion of Lemma \\ref{lemma1} holds under the present assumptions, but now the upper bound in (\\ref{eq15}) will be a constant involving $\\Gamma, I_t(\\nu),I_{\\gamma}(\\mu), I_u(\\mu),\\mu(\\mathbb{R}^{n})$ and $\\nu(\\mathbb{R}^{n})$. \nFor the statement (a) there is no $\\theta$ integration (or $\\theta$ is the Dirac measure at the identity map) and $n-12|z|$ or $|z|>2|x|$ we can argue as before. \n\nTo prove (a), suppose $n-12} |x-z|^{u-n}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\lesssim \\Gamma I_t(\\nu).$$\nSince $-11,$ can be estimated by\n\\begin{align*}\n&\\iint_{||x|-|z||\\leq 1, |z|>2} |x-z|^{u-n}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^n\\,z\\\\\n&\\leq \\int_{|z|>2}\\int_{|z|-1}^{|z|+1}|r-|z||^{u-n}\\int_{S(r)}|\\widehat{\\mu}(x)|^2\\,d\\sigma_r^{n-1}x\\,dr|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma\\int_{|z|>2} |z|^{n-1-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\leq \\Gamma\\int |z|^{t-n}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\lesssim \\Gamma I_t(\\nu).\n\\end{align*}\nFor the remaining part we have \n\\begin{align*}\n&\\iint_{||x|-|z|| > 1, 1<|z|\/2\\leq|x|\\leq 2|z|} |x-z|^{u-n}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\\\\n&\\leq\\int\\sum_{1\\leq2^j\\leq3|z|} \\int_{2^j\\leq||x|-|z||\\leq 2^{j+1},|z|\/2\\leq|x|\\leq 2|z|}||x|-|z||^{u-n}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\\\\n&\\lesssim \\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(u-n)}\\int_{2^j\\leq||x|-|z||\\leq 2^{j+1},|z|\/2\\leq|x|\\leq 2|z|}|\\widehat{\\mu}(x)|^2\\,d\\mathcal L^nx|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&= \\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(u-n)}\\int_{2^j\\leq|r-|z||\\leq 2^{j+1},|z|\/2\\leq r\\leq 2|z|}\\int_{S(r)}|\\widehat{\\mu}(x)|^2\\,d\\sigma_r^{n-1}x\\,dr|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma \\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(u-n)}2^j|z|^{n-1-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma\\int |z|^{u-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\lesssim \\Gamma I_t(\\nu).\n\\end{align*}\n\n\nWe establish the statement (b) by showing that \n$$\\iiint_{|z|\/2\\leq |x|\\leq 2|z|,|z|>2} |x-g(z)|^{u-n}\\,d\\theta g|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\lesssim \\Gamma I_t(\\nu),$$\nwhere $\\theta\\in\\mathcal M(O(n))$ is as in Lemma \\ref{lemma1} with $n-1-u<\\beta2} |x-g(z)|^{u-n}\\,d\\theta g|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(x)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^n\\,z\\\\\n&\\lesssim\\int_{|z|>2} 
\\int_{|z|-1}^{|z|+1}|r-|z||^{\\beta+u-n}|z|^{-\\beta}\\int_{S(r)}|\\widehat{\\mu}(x)|^2\\,d\\sigma_r^{n-1}x\\,dr|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma\\int_{|z|>2} |z|^{-\\beta+n-1-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\leq \\Gamma\\int |z|^{t-n}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\n\\lesssim \\Gamma I_t(\\nu).\n\\end{align*}\nFor the remaining part we have \n\n\\begin{align*}\n&\\iiint_{||x|-|z|| > 1, 1<|z|\/2\\leq|x|\\leq 2|z|} |x-g(z)|^{u-n}\\,d\\theta g|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\\\\n&\\leq\\int\\sum_{1\\leq2^j\\leq3|z|} \\int_{2^j\\leq||x|-|z||\\leq 2^{j+1},|z|\/2\\leq|x|\\leq 2|z|}||x|-|z||^{\\beta+u-n}|z|^{-\\beta}|\\widehat{\\mu}(x)|^2|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nx\\,d\\mathcal L^nz\\\\\n&\\lesssim \\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(\\beta+u-n)}|z|^{-\\beta}\\int_{2^j\\leq||x|-|z||\\leq 2^{j+1},|z|\/2\\leq|x|\\leq 2|z|}|\\widehat{\\mu}(x)|^2\\,d\\mathcal L^nx|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&= \\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(\\beta+u-n)}|z|^{-\\beta}\\int_{2^j\\leq|r-|z||\\leq 2^{j+1},|z|\/2\\leq r\\leq 2|z|}\\int_{S(r)}|\\widehat{\\mu}(x)|^2\\,d\\sigma_r^{n-1}x\\,dr|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma\\int\\sum_{1\\leq2^j\\leq3|z|} 2^{j(\\beta+u-n)}|z|^{-\\beta}2^j|z|^{n-1-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\\\\n&\\lesssim \\Gamma\\int |z|^{u-\\gamma}|\\widehat{\\nu_{\\varepsilon}}(z)|^2\\,d\\mathcal L^nz\\lesssim \\Gamma I_t(\\nu).\n\\end{align*}\n\\end{proof}\n\nFor $00\n\\end{equation}\nholds for all $\\mu\\in\\mathcal M(\\mathbb{R}^{n})$ with support in the unit ball. Estimates for $\\gamma(s)$ are discussed in \\cite{M5}, Chapter 15. For $s\\leq (n-1)\/2$ the optimal, rather easy, result $\\gamma(s)=s$ is valid, see \\cite{M5}, Lemma 3.15. This together with Theorem \\ref{theo3} yields immediately Theorem \\ref{theo6}. For $s>(n-1)\/2$ the optimal estimate fails and Theorem \\ref{theo3} only gives a lower bound for the dimension of intersections which stays below and bounded away from $\\dim A + \\dim B - n$. The deepest estimates are due to Wolff \\cite{W} in the plane and to Erdo\\u gan \\cite{E} in higher dimensions. They give $\\gamma_n(s)\\geq (n+2s-2)4$ for $n\/2\\leq s \\leq (n+2)\/2$. Theorem \\ref{theo3} combined with this leads to the result that if $\\dim A + \\dim B\/2 - (3n+2)\/4>u>0$, then \n$$\\mathcal L^n(\\{z\\in\\mathbb{R}^{n}: \\dim A\\cap (g(B)+z)) > u\\})>0$$\nfor $g\\in O(n)$ outside an exceptional set $E$ with $\\dim E \\leq n(n-1)\/2 - u$. Plugging into Theorem \\ref{theo3} other known estimates for $\\gamma_n(s)$ gives similar rather weak intersection results.\n\n\n\\section{Examples}\\label{Examples}\n\nThe first example here shows that in Theorem \\ref{theo1} the bound $(n-1)(n-2)\/2$ is sharp in the case where both sets have the maximal dimension $n$. This of course does not tell us anything in the plane but it explains the appearance of the dimension of $O(n-1)$. \nIn the following we identify $O(n-1)$ with a subset of $O(n)$ letting $g\\in O(n-1)$ mean the map $(x_1,\\dots,x_n)\\mapsto (g(x_1\\dots,x_{n-1}),x_n)$. \n\\begin{ex}\\label{ex1}\nLet $n\\geq 3$. 
There are compact sets $A, B\\subset\\mathbb{R}^n$ such that $\\dim A=\\dim B=n$ and for every $g\\in O(n-1)$,~ $\\dim A\\cap (g(B)+z)\\leq n-1$ for all $z\\in\\mathbb{R}^n$.\n\\end{ex}\n\\begin{proof}\nLet $C, D\\subset\\mathbb{R}$ be compact sets such that $\\dim C = \\dim D = 1$ and for every $z\\in\\mathbb{R}$ the intersection $C\\cap(D+z)$ contains at most one point. Such sets were constructed in \\cite{M1}, the construction is explained also in \\cite{M4}, Example 13.18. Let $F$ be the closed unit ball in $\\mathbb{R}^{n-1}$ and take $A=F\\times C$ and $B=F\\times D$. These sets have the required properties.\n\\end{proof}\n\nThe following example shows that we need some additional assumptions, for example as in Theorem \\ref{theo3}, to get any result using only translations:\n\n\\begin{ex}\\label{ex2}\nThere are compact sets $A, B\\subset\\mathbb{R}^n$ such that $\\dim A = \\dim B =n$ and for every $z\\in\\mathbb{R}^n$ the intersection $A\\cap(B+z)$ contains at most one point.\n\\end{ex}\n\\begin{proof}\nLet $C, D\\subset\\mathbb{R}$ be the compact sets of the previous example. Take $A=C^n$ and $B=D^n$. These sets have the required properties.\n\\end{proof}\n\nI do not know what are the sharp bounds for the dimension of exceptional sets of Theorem \\ref{theo1}. For simplicity, let us look at this question in the plane. Let $d(s,t)\\in[0,1], 02,$ be the infimum of the numbers $d>0$ with the property that for all Borel sets $A,B\\subset\\mathbb{R}^2$ with $\\dim A = s, \\dim B = t$ and for all $0u\\})=0\\}\\leq d.$$\nThe problem is to determine $d(s,t)$. We know from Theorem \\ref{theo1} that if $s+t>3$, then $d(s,t)\\leq 4-s-t$. In particular $d(2,2)=0$. This suggests that $d(s,t)$ might be $4-s-t$ when $s+t>3$. However we know from Theorem \\ref{theo6} that $d(s,t)\\leq 3-s-t$ whenever $s\\leq 1\/2$. I would be happy to see some examples sheding light into this question.\n\n\\section{Concluding remarks}\nAs mentioned in the Introduction, intersection problems of this type for general sets were first studied by Kahane in \\cite{K} and by the author in \\cite{M1}, involving transformations such as similarities. For the orthogonal group the result in \\cite{M2} concerned the case where one of the sets has dimension bigger than $(n+1)\/2$. In \\cite{M3} a general method was developed to get dimension estimates for the distance sets and intersections once suitable spherical average estimates\t(\\ref{eq19}) for measures with finite energy are available. Such deep estimates were proved by Wolff in \\cite{W} and Erdo\\u gan in \\cite{E}. They gave the best known results for the distance sets, see \\cite{M5}, Chapters 15 and 16, but only minor progress for the intersections, as mentioned in Section \\ref{decay}. The known estimates for $\\sigma(\\mu)$ are discussed in \\cite{M5}, Chapter 15, see also \\cite{LR} for a recent one. \n\nThe reverse inequality in Theorem \\ref{theo1} fails: for any $0\\leq s\\leq n$ there exists a Borel set $A\\subset \\mathbb{R}^{n}$ such that \n$\\dim A\\cap f(A)=s$ for all similarity maps $f$ of $\\mathbb{R}^{n}$. This follows from \\cite{F2}, see also Example 13.19 in \\cite{M4} and the further references given there. The reverse inequality holds if one of the sets is a reasonably nice integral dimensional set, for example rectifiable, or if $\\dim A\\times B = \\dim A + \\dim B$, see \\cite{M1}. This latter condition is valid if, for example, one of the sets is Ahlfors-David regular, see \\cite{M4}, pp. 115-116. 
For such reverse inequalities no rotations $g$ are needed (or, equivalently, they hold for every $g$).\n\nExceptional set estimates in the spirit of this paper were first proved for projections by Kaufman in \\cite{Ka}, then continued by Kaufman and the author \\cite{KM} and by Falconer \\cite{F1}. Peres and Schlag \\cite{PS} proved such estimates for large classes of generalized projections. Exceptional set estimates for intersections with planes were first proved by Orponen \\cite{O1} and continued by Orponen and the author \\cite{MO}. In \\cite{O2} Orponen derived estimates for radial projections. All these estimates expect those in \\cite{MO} and some in \\cite{PS} are known to be sharp. Some of these and other related results are also discussed in \\cite{M5}.\n\n\n\nRecently Donoven and Falconer \\cite{DF} investigated Hausdorff dimension of intersections for subsets of certain Cantor sets and Shmerkin and Suomala \\cite{SS} for large classes of random sets.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\\begin{abstract}\nHopfield attractor networks are robust distributed models of human memory. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random bipolar vectors, and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network. We show that the model is robust to imprecise and noisy weights, and so a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks.\n\\end{abstract}\n\n\\section{Introduction}\n\nHopfield attractor networks are robust models of human memory, as from a simple Hebbian learning rule they display emergent attractor dynamics which allow for reliable pattern recall, completion, and correction even in noisy situations \\cite{hopfield_neural_1982, amit_modeling_1989, eliasmith_unified_2005}. Attractor models have since found widespread use in neuroscience as a functional and tractable model of human memory \\cite{little_existence_1974, schneidman_weak_2006, chaudhuri_computational_2016, khona_attractor_2022}. The assumption of these models is that the network represents different states by different, usually uncorrelated, global patterns of persistent activity. When the network is presented with an input that closely resembles one of the stored states, the network state switches to the corresponding fixed-point attractor.\n\nThis process of switching between discrete attractor states is thought to be fundamental both to describe biological neural activity, as well as to model higher cognitive decision making processes \\cite{daelli_neural_2010, mante_context-dependent_2013, miller_itinerancy_2016, tajima_task-dependent_2017, brinkman_metastable_2022}. What attractor models currently lack, however, is the ability to perform state-dependent computation, a hallmark of human cognition \\cite{dayan_simple_2008, buonomano_state-dependent_2009, granger_toward_2020}. 
That is, when the network is presented with an input, the attractor state to which the network switches ought to be dependent both upon the input stimulus as well as the state the network currently inhabits, rather than simply the input.\n\nWe thus seek to endow a classical neural attractor model, the Hopfield network, with the ability to perform state-dependent switching between attractor states, without resorting to the use of biologically implausible mechanisms, such as higher-order weight tensors or training via backpropagation algorithms. The resulting attractor networks will then be able to robustly emulate any arbitrary Finite State Machine (FSM), vastly improving their usefulness as a neural computational primitive.\n\nWe achieve this by leaning heavily on the framework of Vector Symbolic Architectures (VSAs). VSAs treat computation in an entirely distributed manner, by letting symbols be represented by high-dimensional random vectors: hypervectors \\cite{kanerva_fully_2002, plate_holographic_2003, gayler_vector_2004}. When equipped with a few basic operators for binding and superimposing vectors together, corresponding often either to element-wise multiplication or addition respectively, these architectures are able to store primitives such a sets, sequences, graphs and arbitrary data bindings, as well as enabling more complex relations, such as analogical and figurative reasoning \\cite{kanerva_hyperdimensional_2009, kleyko_vector_2021}. Although different VSA implementations often have differing representations and binding operations \\cite{kleyko_survey_2022}, they all share the need for an auto-associative cleanup memory, which can recover a clean version of the most similar stored hypervector, given a noisy version of itself. We here use the recurrent dynamics of a Hopfield-like neural attractor network as a state-holding auto-associative memory \\cite{gritsenko_neural_2017}.\n\nSymbolic FSM states will thus be represented each by a hypervector and stored within the attractor network as a fixed-point attractor. Stimuli will also be represented by hypervectors, which, when input to the attractor network, will trigger the network dynamics to transition between the correct attractor states. We make use of common VSA techniques to construct a weights matrix to acheive these dynamics, where we use the Hadamard product between bipolar vectors $\\{-1,1\\}^N$ as our binding operation. We thus claim that attractor-based FSMs may be a plausible biological computational primitive insofar as Hopfield networks are.\n\nThis represents a computational paradigm that is a departure from conventional von Neumann architectures, wherein the separation of memory and computation is a major limiting factor in current advances in conventional computational performance (the von Neumann bottleneck \\cite{backus_can_1978, indiveri_memory_2015}). Similarly, the high redundancy and lack of reliance on individual components makes this architecture is fit for implementation with novel in-memory computing technologies such as resistive RAM (RRAM) or phase change memory (PCM) devices, which could perform the network's matrix-vector-multiplication (MVM) in a single step \\cite{xia_memristive_2019, ielmini_-memory_2018, zidan_chapter_2020}.\n\\section{Theory}\n\nThroughout this paper, symbols will be represented by high-dimensional randomly generated dense bipolar vectors\n\n\\begin{equation}\n \\vec{x} \\in \\{ -1 , 1\\}^{N}\n\\end{equation}\n\nwhere the number of dimensions $N>10,000$. 
Unless explicitly stated otherwise, any bold lowercase Latin letter may be assumed to be a new, independently generated hypervector, with the value $X_{i}$ at any index $i$ in $\\vec{x}$ generated according to\n\n\\begin{equation}\n \\text{I\\kern-0.15em P}(X_{i} = 1) = \\text{I\\kern-0.15em P}(X_{i}=-1) = \\frac{1}{2}\n\\label{eqn:probX_half}\n\\end{equation}\n\nFor any two arbitrary hypervectors $\\vec{a}$ and $\\vec{b}$, we define the similarity between the two vectors by the normalised inner product\n\n\\begin{equation}\n d(\\vec{a},\\vec{b}) := \\frac{1}{N} \\vec{a} ^{\\intercal} \\vec{b} = \\frac{1}{N} \\sum_{i = 1}^{N} a_{i}b_{i}\n\\label{eqn:d_simple}\n\\end{equation}\n\nwhere the similarity between a vector and itself $d(\\vec{a},\\vec{a}) = 1$, and $d(\\vec{a},-\\vec{a}) = -1$. Due to the high dimensionality of the vectors, the similarity between any two unrelated (and so independently generated) vectors is the mean of an unbiased random sequence of $-1$ and $1$s\n\n\\begin{equation}\n d(\\vec{a}, \\vec{b}) = 0 \\pm \\frac{1}{\\sqrt{N}} \\approx 0\n\\end{equation}\n\nwhich tends to 0 for $N\\rightarrow \\infty$. It is from this result that we get the requirement of high dimensionality, as it ensures that the inner product between two random vectors is approximately 0. We can say that independently generated vectors are \\textit{psuedo-orthogonal} \\cite{kleyko_vector_2021}. For a set of independently generated states $\\{\\vec{x}^\\mu\\}$, these results can be summarised by\n\n\\begin{equation}\n d(\\vec{x}^{\\mu},\\pm \\vec{x}^{\\nu} ) \\, \\stackrel{N \\rightarrow \\infty}{=} \\, \\pm \\delta^{\\mu \\nu}\n\\end{equation}\n\nwhere $\\delta^{\\mu \\nu}$ is the Kronecker delta. Hypervectors may be combined in a so called \\textit{binding} operation to produce a new vector that is dissimilar to both its constituents. We here choose the Hadamard product, or element-wise multiplication, as our binding operation, denoted \"$\\circ$\".\n\n\\begin{equation}\n (\\vec{a}\\circ \\vec{b})_{i} = a_{i} \\cdot b_{i}\n\\end{equation}\n\nThe statement that the product of two vectors is dissimilar to its constituents is written as\n\n\n\n\\begin{equation}\n\\begin{split}\n d(\\vec{a} \\circ \\vec{b}, \\vec{a}) \\approx 0 \\\\\n d(\\vec{a} \\circ \\vec{b}, \\vec{b}) \\approx 0\n\\end{split}\n\\end{equation}\n\nwhere we implicitly assume that $N$ is large enough that we can ignore the $\\mathcal{O}(\\frac{1}{\\sqrt{N}})$ noise terms.\n\nIf we wish to recover similarity between $\\vec{a} \\circ \\vec{b}$ and $\\vec{a}$, we can mask the system using $\\vec{b}$, such that only dimensions where $b_i = 1$ are remaining. Then, we have\n\n\n\n\\begin{equation}\n\\begin{split}\nd \\big( \\vec{a} \\circ \\vec{b} &, \\vec{a} \\circ H(\\vec{b}) \\big) = \\frac{1}{N} \\sum_{1 \\leq i \\leq N} a_i b_i a_i H(b_i) \\\\\n& = \\frac{1}{N} \\Bigg[ \\sum_{\\substack{1 \\leq i \\leq N \\\\ b_i =1}} a_i^2 H(1) - \\! \\! \\sum_{\\substack{1 \\leq i \\leq N \\\\ b_i =-1}} \\! \\! a_i^2 H(-1) \\Bigg] \\\\\n& = \\frac{1}{N} \\sum_{\\substack{1 \\leq i \\leq N \\\\ b_i =1}} 1 \\\\\n& \\approx \\frac{1}{2} \n\\end{split}\n\\end{equation}\n\n\nwhere we have used the Heaviside step function $H(\\cdot)$ defined by\n\n\\begin{equation}\n \\big( H(\\vec{b}) \\big)_{i} = H(b_{i}) = \\begin{cases} 1 \\quad \\text{if} \\quad b_{i} > 0 \\\\ 0 \\quad \\text{otherwise} \\end{cases}\n\\end{equation}\n\nto create a multiplicative mask $H(\\vec{b})$, setting to 0 all dimensions where $b_i = -1$. 
In the second line, we have split the summation over all dimensions into summations over dimensions where $b_i = 1$ and $-1$ respectively. The final similarity of $\\frac{1}{2}$ is a consequence of approximately half of all values in a any vector being +1 (\\mbox{Equation \\ref{eqn:probX_half}}).\n\n\nWe choose this as a mechanism for recovering similarity, rather than simply applying another Hadamard multiplication $\\vec{b}\\, \\circ $, as it is an operation that can easily and robustly be realised in a neural attractor network with asynchronous updates, as discussed later.\n\n\n\\subsection{Hopfield networks}\n\nA Hopfield network is a dynamical system defined by its internal state vector $\\vec{z}$ and fixed recurrent weights matrix $\\mat{W}$, with a state update rule given by\n\n\\begin{equation}\n \\vec{z}_{t+1} = \\mathrm{sgn} \\big( \\mat{W} \\vec{z}_t \\big)\n \\label{eqn:update_hopfield}\n\\end{equation}\n\nwhere $\\vec{z}_{t}$ is the network state at discrete time step $t$, and $\\mathrm{sgn}(\\cdot)$ is an element-wise sign function, with zeroes resolving\\footnote{Though this arbitrary choice may seem to incur a bias to a particular state, in practise the postsynaptic sum very rarely equals 0.} to +1.\n\nFrom standard Hopfield theory, we know that if we want to store $P$ uncorrelated patterns $\\{ \\vec{x}^{\\nu} \\}_{\\nu =1}^{P}$ within a Hopfield network, we can construct the weights matrix $\\mat{W}$ according to\n\n\\begin{equation}\n \\mat{W} = \\sum_{\\mathrm{patterns} \\, \\, \\nu}^{P} \\vec{x}^{\\nu} \\vec{x}^{\\nu \\intercal}\n\\end{equation}\n\nthen as long as not too many patterns are stored ($P < 0.14 N$ \\cite{hopfield_neural_1982}), the patterns will become fixed-point attractors of the network's dynamics, and the network can perform robust pattern completion and correction.\n\n\n\n\n\\subsection{Finite State Machines}\n\nA Finite State Machine (FSM) $M$ is a discrete system with a finite state set $Z_{\\text{FSM}} = \\{\\zeta_{1},\\zeta_{2}, \\ldots , \\zeta_{N_{Z}}$ \\}, a finite input stimulus set $S_{\\text{FSM}} = \\{\\sigma_{1}, \\sigma_{2}, \\ldots, \\sigma_{N_{S}} \\}$, and finite output response set $R_{\\text{FSM}} = \\{\\rho_{1}, \\rho_{2}, \\ldots, \\rho_{N_{R}} \\}$. $M$ is then fully defined with the addition of its two characterising functions $F(\\cdot)$ and $G(\\cdot)$\n\n\\begin{equation}\n\\begin{split}\n z_{t+1} & = F(z_{t}, s_{t}) \\\\\n r_{t+1} & = G(z_{t}, s_{t})\n\\end{split}\n\\end{equation}\n\nwhere $z_{t} \\in Z_{\\text{FSM}}$, $r_{t} \\in R_{\\text{FSM}}$ and $s_{t} \\in S_{\\text{FSM}}$ are the state, output and stimulus at time step $t$ respectively. $F(\\cdot)$ thus provides the state update rule, while $G(\\cdot)$ provides the output for any state-stimulus pair.\n\n$M$ can thus be represented by a directed graph, where each node represents a different state $\\zeta$, and every edge has a stimulus $\\sigma$ and optional output $\\rho$ associated with it.\n\n\n\n\n\\section{Attractor network construction}\n\\label{section:methods}\n\nWe now show how a Hopfield-like attractor network may be constructed to emulate an arbitrary FSM, where the states within the FSM are now attractors within the attractor network, and the stimuli for transitions between node states in the FSM trigger all corresponding transitions between attractors. More specifically, for every node $\\zeta \\in Z_{\\text{FSM}}$, an associated hypervector $\\vec{x}$ is randomly generated and stored as an attractor within the attractor network. 
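\n\nThe statistical properties from the previous section that this construction relies on are easy to check numerically. The following is a minimal Python sketch (our own illustration, with arbitrary variable names): it generates two random hypervectors and verifies the pseudo-orthogonality, binding and masking properties stated above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN = 10_000\na, b = rng.choice([-1, 1], size=(2, N))    # two random bipolar hypervectors\n\nd = lambda u, v: (u @ v) \/ N               # similarity (normalised inner product)\nH = lambda v: (v > 0).astype(int)          # Heaviside mask\n\nprint(d(a, a))             # 1.0  : a vector is maximally similar to itself\nprint(d(a, b))             # ~0   : pseudo-orthogonality, O(1\/sqrt(N))\nprint(d(a * b, a))         # ~0   : a binding is dissimilar to its factors\nprint(d(a * b, a * H(b)))  # ~0.5 : masking by H(b) recovers similarity\n\\end{verbatim}\n\n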
We use $Z_{\\text{AN}}$ to denote the set of nodal hypervectors stored as attractors within the attractor network. Every unique stimulus $\\sigma \\in S_{\\text{FSM}}$ in the FSM is also now associated with a randomly generated hypervector $\\vec{s} \\in S_{\\text{AN}}$, where $S_{\\text{AN}}$ is the set of hypervectors associated with a unique stimulus. For the FSM edge outputs, a corresponding set of output hypervectors is similarly generated. These correspondences are summarised in \\mbox{Table \\ref{table:notation}}.\n\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{ll|ll}\n\\multicolumn{2}{c|}{FSM (Symbols)} & \\multicolumn{2}{c}{Attractor Net. (Vectors)} \\\\ \\hline\nNodes & $\\zeta \\in Z_{\\text{FSM}}$ & Attractors & $\\vec{x} \\in Z_{\\text{AN}}$ \\\\\nInput stimuli & $\\sigma \\in S_{\\text{FSM}}$ & Input stimuli & $\\vec{s} \\in S_{\\text{AN}}$ \\\\\nOutputs & $\\rho \\in R_{\\text{FSM}}$ & Output vectors & $\\vec{r} \\in R_{\\text{AN}}$ \n\\end{tabular}\n\\caption{A comparison of the notation used to represent states, inputs and outputs in the FSM picture, and the corresponding hypervectors used to represent the FSM within the attractor network.}\n\\label{table:notation}\n\\end{table}\n\n\n\n\\subsection{Constructing transitions}\n\nWe consider the general situation that we want to initiate a transition from \\textit{source} attractor state $\\vec{x} \\in Z_{\\text{AN}}$ to \\textit{target} attractor state $\\vec{y} \\in Z_{\\text{AN}}$, by imposing some stimulus state $\\vec{s} \\in S_{\\text{AN}}$ as input onto the network.\n\n\\begin{equation}\n \\vec{x} \\stackrel{\\vec{s}}{\\longrightarrow} \\vec{y}\n\\end{equation}\n\n\nCrucial to the functioning of the network transitions is how we model input to the network. We choose to model input to the network as a masking of the network state, such that all dimensions where the stimulus $\\vec{s}$ is -1 are forced to be 0. This may be likened to saying we are considering input to the network that selectively silences half of all neurons according to the stimulus vector. While a stimulus vector $\\vec{s}$ is being imposed upon the network, the modified state update rule is thus\n\n\\begin{equation}\n \\vec{z}_{t+1} = \\mathrm{sgn} \\big( \\mat{W} (\\vec{z}_{t} \\circ H (\\vec{s})) \\big)\n\\label{eqn:update_hop_mask}\n\\end{equation}\n\nwhere the Hadamard multiplication of the network state with $H (\\vec{s})$ enacts the masking operation, and the weights matrix $\\mat{W}$ is constructed such that $\\vec{z}_{t+1}$ will resemble the desired target state.\n\n\n\nFor every edge in the FSM, we generate an \"edge state\" $\\vec{e}$, which is also stored as an attractor within the network. Each edge will use this $\\vec{e}$ state as a \"halfway-house\", en route to $\\vec{y}$. Additionally, each unique edge label will now have \\textit{two} stimulus hypervectors associated with it, $\\vec{s}_a$ and $\\vec{s}_b$ which trigger transitions from source state $\\vec{x}$ to edge state $\\vec{e}$ and edge state $\\vec{e}$ to target state $\\vec{y}$ respectively. 
A general transition now looks like\n\n\\begin{equation}\n \\vec{x} \\stackrel{\\vec{s}_a}{\\longrightarrow} \\vec{e} \\stackrel{\\vec{s}_b}{\\longrightarrow} \\vec{y}\n\\end{equation}\n\nwhere $\\vec{x}, \\vec{y} \\in Z_{\\text{AN}}$ correspond to nodal states in the FSM but $\\vec{e}$ exists purely to facilitate the transition.\nThe weights matrix is constructed as\n\n\\begin{equation}\n \\mat{W} = \\underbrace{\\frac{1}{N}\\sum_{\\vphantom{\\text{dg}}\\text{nodes $\\nu$}}^{N_{Z}} \\vec{x}^\\nu \\vec{x}^{\\nu \\intercal} }_{\\substack{\\text{ Hopfield}\\\\\\text{attractor terms}}} +\n \\underbrace{\\frac{1}{N}\\sum_{\\text{edges $\\eta$}}^{N_{E}} \\mat{E}^{\\eta}}_{\\substack{\\text{Asymmetric}\\\\\\text{transition terms}}}\n\\label{eqn:W_construction}\n\\end{equation}\n\n\n\nwhere $\\vec{x}^\\nu \\in Z_{\\text{AN}}$ is the state corresponding to the $\\nu$'th node in the graph to be implemented, $N_{Z}$ and $N_{E}$ are the number of nodes and edges respectively, and $\\mat{E}^\\eta$ is the addition to the weights matrix required to implement an individual edge, given by\n\n\n\n\\begin{equation}\n\\begin{split}\n \\mat{E}^{(\\eta)} & = \\vphantom{(}\\vec{e} \\vec{e}^{\\intercal} \\\\\n & +H(\\vec{s}_a) \\circ (\\vec{e}-\\vec{x}) (\\vec{x} \\circ \\vec{s}_a)^{\\intercal} \\\\\n & +H(\\vec{s}_b) \\circ (\\vec{y} -\\vec{e}) (\\vec{ e} \\circ \\vec{s}_b)^{\\intercal} \n \\label{eqn:tran_term}\n\\end{split}\n\\end{equation}\n\n\n\nwhere $\\vec{x}$, $\\vec{e}$ and $\\vec{y}$ are the source, edge, and target states of the edge $\\eta$ respectively, and $\\vec{s}_a$ and $\\vec{s}_b$ are the input stimulus vectors associated with this edge's label. The edge index $\\eta$ has been dropped for brevity. The $\\vec{e}\\vec{e}^{\\intercal}$ term is the attractor we have introduced as a halfway-house for the transition. The second set of terms enacts the $ \\vec{x} \\stackrel{\\vec{s}_a}{\\longrightarrow} \\vec{e}$ transition, by giving a nonzero inner product with the network state only when the network is in state $\\vec{x}$, \\textit{and} the network is being masked by the input $\\vec{s}_a$. This allows terms to be stored in $\\mat{W}$ which are effectively obfuscated, not affecting network dynamics considerably, until a specific stimulus is applied as a mask to the network. Likewise, the third set of terms enacts the $\\vec{e} \\stackrel{\\vec{s}_b}{\\longrightarrow} \\vec{y}$ transition.\n\nIn the absence of input, the network functions like a standard Hopfield attractor network,\n\n\n\\begin{equation}\n \\mat{W}\\vec{x} \\approx \\vec{x} \\pm \\sigma \\vec{n} \\quad \\forall \\quad \\vec{x} \\in Z_{\\text{AN}}\n\\label{eqn:normal_attractor}\n\\end{equation}\nwhere $\\vec{n} \\in \\mathbb{R}^{N}$ is a standard normally distributed random vector, and\n\\begin{equation}\n\\sigma = \\sqrt{\\frac{N_{Z} + 3N_{E}}{N}} \n\\label{eqn:sigma_SNR}\n\\end{equation} \nis the magnitude of noise due to the undesired finite inner product with other stored terms (see Appendix). Thus as long as the magnitude of the noise is not too large, $\\vec{x}$ will be a solution of $\\vec{z} = \\mathrm{sgn}(\\mat W\\vec{z})$ and so a fixed-point attractor of the dynamics.\n\nWhen a valid stimulus is presented as input to the network however, masking the network state, the previously obfuscated asymmetric transition terms become significant and dominate the dynamics. 
Assuming there is a stored transition term $\\mat{E}$ corresponding to a valid edge with vectors $\\vec{x},\\vec{e},\\vec{y},\\vec{s}_a, \\vec{s}_b$ having the same meaning as in \\mbox{Equation \\ref{eqn:tran_term}}, we have \n\n\n\\begin{equation}\n\\begin{split}\n \\mat{W} \\big( \\vec{x} \\circ H(\\vec{s}_a) \\big) \\appropto & \\, \\, H \\big( \\vec{s}_a \\big) \\circ \\vec{e}+ H(-\\vec{s}_a)\\circ \\vec{x} \\pm \\sqrt{2}\\sigma \\vec{n}\n\\end{split}\n\\end{equation}\n\nwhere $\\appropto$ implies approximate proportionality. The second set of terms can be ignored, as they project only to neurons which are currently being masked. Thus the only significant term is that containing the edge state $\\vec{e}$, which consequently drives the network to the $\\vec{e}$ state, enacting the $\\vec{x} \\stackrel{\\vec{s}_a}{\\longrightarrow} \\vec{e}$ transition. Since the state $\\vec{e}$ is also stored as an attractor within the network, we have\n\n\\begin{equation}\n \\mat{W} \\big( \\vec{e} \\circ H(\\vec{s}_a) \\big) \\, \\appropto \\, \\vec{e} \\pm \\sqrt{2} \\sigma \\vec{n} \n\\end{equation}\n\nand\n\n\\begin{equation}\n \\mat{W}\\vec{e} \\approx \\vec{e} \\pm \\sigma \\vec{n}\n\\end{equation}\n\nthus the edge states $\\vec{e}$ are also fixed-point attractors of the network dynamics. To complete the transition from state $\\vec{x}$ to $\\vec{y}$, the second stimulus $\\vec{s}_b$ is applied, giving\n\n\n\\begin{equation}\n\\begin{split}\n \\mat{W} \\big( \\vec{e} \\circ H(\\vec{s}_b) \\big) & \\appropto H \\big( \\vec{s}_b \\big) \\circ \\vec{y} + H(-\\vec{s}_b) \\circ \\vec{e} \\pm \\sqrt{2}\\sigma \\vec{n}\n\\end{split}\n\\end{equation}\n\n\n\nwhich drives the network state towards $\\vec{y} \\in Z_{\\text{AN}}$, the desired target attractor state. By consecutive application of the inputs $\\vec{s}_a$ and $\\vec{s}_b$, the transition terms $\\mat{E}^\\eta$ stored in $\\mat{W}$ have thus caused the network to controllably transition from the source to target attractor states. Transition terms $\\mat{E}^\\eta$ may be iteratively added to $\\mat{W}$ to achieve any arbitrary transition between attractor states, and so any arbitrary FSM may be implemented within a large enough attractor network.\n\nNote that we have here ignored that the diagonal of $\\mat{W}$ is set to 0 (no self connections), but this does not significantly affect these results.\n\n\n\n\\subsection{Edge outputs}\n\nUntil now we have not mentioned the other critical component of FSMs: the output associated with every edge. We have separated the construction of transitions and edge outputs for clarity, since the two may be effectively decoupled.\nMuch like for the nodes and edges in the FSM to be implemented, for every unique FSM output $\\rho \\in R_{\\text{FSM}}$, we generate a corresponding hypervector $\\vec{r} \\in R_{\\text{AN}}$, where $R_{\\text{AN}}$ is the set of all output vectors. Different however, is that we let these be sparse ternary vectors $\\vec{r} \\in \\{ -1, 0, 1 \\}^{N}$ with coding level $f_r := \\frac{1}{N}\\sum_{i}^N \\lvert r_i \\rvert $, the fraction of nonzero elements. 
These output states are then embedded in the edge state attractors, altering the $\\vec{e}\\vec{e}^{\\intercal}$ terms in each $\\mat{E}$ term according to\n\n\\begin{equation}\n \\vec{e}\\vec{e}^{\\intercal} \\rightarrow \\vec{e}_{\\vec{r}}\\vec{e}^T := \\Big[ \\vec{e} \\circ \\big( \\vec{1}-H(\\vec{r} \\circ \\vec{r}) \\big) + \\vec{r} \\Big] \\vec{e} ^{\\intercal} \n\\label{eqn:output_attr}\n\\end{equation}\n\nwhere $\\vec{e}_{\\vec{r}}$ is here defined and $\\vec{1}$ is a vector of all ones. As a result of this modification, the edge states $\\vec{e}$ themselves will no longer be exact attractors of the space. The composite state $\\vec{e}_{\\vec{r}}$ will however be stable, in which the presence of $\\vec{r}$ can be easily detected ($\\vec{e}_{\\vec{r}} \\cdot \\vec{r} = Nf_{r}$). This has been achieved without incurring any similarity and thus interference between attractors, which would otherwise alter the dynamics of the previously described transitions.\nA full transition term $\\mat{E}$, including its output, is thus given by\n\n\\begin{equation}\n\\begin{split}\n \\mat{E}^{(\\eta)} & = \\vphantom{(}\\Big[ \\vec{e} \\circ \\big( \\vec{1}-H(\\vec{r} \\circ \\vec{r}) \\big) + \\vec{r} \\Big] \\vec{e} ^{\\intercal} \\\\\n & +H(\\vec{s}_a) \\circ (\\vec{e}-\\vec{x}) (\\vec{x} \\circ \\vec{s}_a)^{\\intercal} \\\\\n & +H(\\vec{s}_b) \\circ (\\vec{y} -\\vec{e}) (\\vec{ e} \\circ \\vec{s}_b)^{\\intercal} \n \\label{eqn:tran_term_final}\n\\end{split}\n\\end{equation}\n\nwhich combined with the network state masking operation is solely responsible for storing the FSM connectivity and enabling the desired inter-attractor transition dynamics.\n\n\n\n\n\\section{Results}\n\n\\subsection{FSM Emulation}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=3in]{figs\/greek_graph.pdf}\n\\caption{An example FSM which we implement within the attractor network. Each node within the graph (e.g. \"Zeus\") is represented by a new hypervector $\\vec{x}^\\mu$ and stored as an attractor within the network. Every edge is labelled by its stimulus (e.g. \"father\\_is\"), for which corresponding hypervectors $\\vec{s}_a$ and $\\vec{s}_b$ are also generated. When a stimulus' hypervector is input to the network, it should allow all corresponding attractor transitions to take place. Each edge may also have an associated output symbol, where we here choose the edges labelled \"type\" to output the generation of the god \\{\"Primordial\", \"Titans\", \"Olympians\"\\}. This graph was chosen as it displays the generality of the embedding: it contains cycles, loops, bidirectional edges and state-dependent transitions.\n}\n\\label{fig:starwars_net}\n\\end{figure*}\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/walk_normal_greek2_print.pdf}\n\\caption{An attractor network transitioning through attractor states in a state-dependent manner, as a sequence of input stimuli is presented to the network. \\textbf{a)} The input stimuli to the network, where for each unique edge label (e.g. \"father\\_is\") in the FSM to be implemented (\\mbox{\\mbox{Figure \\ref{fig:starwars_net}}}) a pair of hypervectors $\\vec{s}_a$ and $\\vec{s}_b$ have been generated. No stimulus, a stimulus $\\vec{s}_a$, then a stimulus $\\vec{s}_b$ are input for 10 time steps each in sequence. 
\\textbf{b)} \\& \\textbf{c)} The similarity of the network state $\\vec{z}_t$ to stored node states $\\vec{x} \\in Z_{\\text{AN}}$ and stored edge states $\\vec{e}$ respectively, computed via the inner product (\\mbox{Equation \\ref{eqn:d_simple}}). \\textbf{d)} The similarity of the network state $\\vec{z}_t$ to the sparse output states $\\vec{r} \\in R_{\\text{AN}}$. All similarities have been labelled with the state they represent and the colours are purely illustrative. The state transitions shown here are explicitly state dependent, as can be seen from the repeated input of \"father\\_is\", which results in a transition to state \"Kronos\" when in \"Hades\", but to \"Uranus\" when in \"Kronos\". Additionally, the network is unaffected by nonsense input that does not correspond to a stored edge, as the network remains in the attractor \"Uranus\" when presented with the input \"father\\_is\".\n}\n\\label{fig:walk_standard}\n\\end{figure*}\n\n\n\nTo show the generality of FSM construction, we chose to implement a directed graph representing the relationships between gods in ancient Greek mythology, due to the graph's dense connectivity. The graph and thus FSM to be implemented is shown in \\mbox{\\mbox{Figure \\ref{fig:starwars_net}}}. From the graph it is clear that a state machine representing the graph must explicitly be capable of state-dependent transitions, e.g. the input \"overthrown\\_by\" must result in a transition to state \"Kronos\" when in state \"Uranus\", but to state \"Zeus\" when in state \"Kronos\". To construct $\\mat{W}$, the necessary random hypervectors are first generated. For every node state $\\zeta \\in Z_{\\text{FSM}}$ within the FSM (e.g. \"Zeus\", \"Kronos\") a random bipolar vector $\\vec{x}$ is generated. For every unique edge label $\\sigma \\in S_{\\text{FSM}}$ (e.g. \"overthrown\\_by\", \"father\\_is\") a pair of random stimulus hypervectors $\\vec{s}_a$ and $\\vec{s}_b$ is generated. Similarly, sparse ternary output vectors $\\vec{r}$ are also generated. The weights matrix $\\mat{W}$ is then iteratively constructed as per Equations \\ref{eqn:W_construction} and \\ref{eqn:tran_term_final}, with a new hypervector $\\vec{e}$ also being generated for every edge. The matrix generated from this procedure we denote $\\mat{W}^{\\text{ideal}}$. For all of the following results, the attractor network is first initialised to be in a certain attractor state, in this case, \"Hades\". The network is then allowed to freely evolve for 10 time steps (chosen arbitrarily) as per \\mbox{Equation \\ref{eqn:update_hopfield}}, with every neuron being updated simultaneously on every time step. During this period, it is desired that the network state $\\vec{z}_t$ remains in the attractor state in which it was initialised. An input stimulus $\\vec{s}_a$ is then presented to the network for 10 time steps, during which time the network state is masked by the stimulus vector, and the network evolves synchronously according to \\mbox{Equation \\ref{eqn:update_hop_mask}}. If the stimulus corresponds to a valid edge in the FSM, the network state $\\vec{z}_t$ should then be driven towards the correct edge state attractor $\\vec{e}$. After these 10 time steps, the second stimulus vector $\\vec{s}_b$ for a particular input is presented for 10 time steps. Again, the network evolves according to \\mbox{Equation \\ref{eqn:update_hop_mask}}, and the network should be driven towards the target state attractor $\\vec{y}$, completing the transition. 
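\n\nThe construction and walk protocol just described can be condensed into a short, self-contained sketch. The following is our own minimal Python illustration (a toy three-state FSM, edge outputs omitted, and a smaller $N$ than the $N = 10,000$ used for the figures), not the exact simulation code:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN = 2_000\nsgn = lambda v: np.where(v >= 0, 1, -1)    # zeroes resolve to +1\nhv  = lambda: rng.choice([-1, 1], size=N)  # random bipolar hypervector\n\n# toy FSM: nodes, and edges given as (source, label, target) triples\nnodes = ['Uranus', 'Kronos', 'Zeus']\nedges = [('Uranus', 'overthrown_by', 'Kronos'),\n         ('Kronos', 'overthrown_by', 'Zeus')]\n\nX = {z: hv() for z in nodes}                              # nodal attractor states\nS = {lab: (hv(), hv()) for lab in {e[1] for e in edges}}  # stimulus pairs (s_a, s_b)\n\n# weights: Hopfield attractor terms plus asymmetric transition terms per edge\nW = sum(np.outer(x, x) for x in X.values()).astype(float)\nfor (src, lab, tgt) in edges:\n    x, y, e = X[src], X[tgt], hv()         # e is the edge ('halfway-house') state\n    sa, sb = S[lab]\n    W += np.outer(e, e)\n    W += np.outer((sa > 0) * (e - x), x * sa)\n    W += np.outer((sb > 0) * (y - e), e * sb)\nW \/= N\nnp.fill_diagonal(W, 0)                     # no self-connections\n\ndef step(z, s=None):\n    # one synchronous update; a stimulus s masks the network state through H(s)\n    z_in = z * (s > 0) if s is not None else z\n    return sgn(W @ z_in)\n\nz = X['Uranus'].copy()                     # initialise in the 'Uranus' attractor\nfor lab in ['overthrown_by', 'overthrown_by']:\n    for s in (None, *S[lab]):              # free evolution, then s_a, then s_b\n        for _ in range(10):                # 10 time steps each\n            z = step(z, s)\nprint({name: (x @ z) \/ N for name, x in X.items()})   # expect ~1.0 for 'Zeus'\n\\end{verbatim}\nWith these toy parameters the per-neuron noise of \\mbox{Equation \\ref{eqn:sigma_SNR}} is $\\sigma \\approx 0.07$, so the stored patterns remain comfortably stable, and the two presentations of the same stimulus pair drive the state-dependent walk from \"Uranus\" to \"Kronos\" to \"Zeus\".\n\n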
This process is repeated every 30 time steps, causing the network state $\\vec{z}_t$ to travel between attractor states $\\vec{x} \\in Z_{\\text{AN}}$, corresponding to a valid walk between states $\\zeta \\in Z_{\\text{FSM}}$ in the represented FSM. To view the resulting network dynamics, the similarity between the network state $\\vec{z}_t$ and the edge and node states is calculated as per \\mbox{Equation \\ref{eqn:d_simple}}, such that a similarity of 1 between $\\vec{z}_t$ and some attractor state $\\vec{x}^\\nu$ implies $\\vec{z}_t = \\vec{x}^\\nu$ and thus that the network is inhabiting that attractor. The similarity between the network state $\\vec{z}_t$ and the outputs states $\\vec{r} \\in R_{\\text{AN}}$ is also calculated, but due to the output vectors being sparse, the maximum value that the similarity can take is $d(\\vec{z}, \\vec{r}) = f_r$, which would be interpreted as that output symbol being present.\n\nAn attractor network performing a walk is shown in \\mbox{\\mbox{Figure \\ref{fig:walk_standard}}}, with parameters $N = 10,000$, $N f_r = 200$, $N_{Z} = 8$, and $N_E = 16$. This corresponds to the network having a per-neuron noise (the finite size effect resulting from random hypervectors having a nonzero similarity to each-other) of $\\sigma \\approx 0.07$, calculated via \\mbox{Equation \\ref{eqn:sigma_SNR}}. The magnitude of the noise is thus small compared with the desired signal of magnitude 1 (\\mbox{Equation \\ref{eqn:normal_attractor}}), and so we are far away from reaching the memory capacity of the network. The network performs the walk as intended, transitioning between the correct nodal attractor states and corresponding edge states with their associated outputs. The specific sequence of inputs was chosen to show the generality of implementable state transitions. First, there is the explicit state dependence in the repeated input of \"father\\_is, father\\_is\". Second, it contains an input stimulus that does not correspond to a valid edge for the currently inhabited state (\"Zeus overthrown\\_by\"), which should not cause a transition. Third, it contains bidirectional edges (\"consort\\_is\"), whose repeated application causes the network to flip between two states (between \"Kronos\" and \"Rhea\"). And fourthly self-connections, whose target states and source states are identical. Since the network traverses all these edges as expected, we do not expect the precise structure of an FSM's graph to limit whether or not it can be emulated by the attractor network.\n\n\\subsection{Network robustness}\nOne of the advantages of attractor neural networks that make them suitable as plausible biological models is their robustness to imperfect weights \\cite{amit_modeling_1989}. That is, individual synapses may have very few bits of precision or become damaged, yet the relevant brain region must still be able to carry out its functional task. To this end, we subject the network presented here to similar non-idealities, to check that the network retains the feature of global stability and robustness despite being implemented with low-precision and noisy weights. 
In the first of these tests, the ideal weights matrix $\\mat{W}^{\\text{ideal}}$ was binarised and then additive noise was applied, via\n\n\n\\begin{equation}\n W^{\\text{noisy}}_{ij} = \\mathrm{sgn}\\big( W^{\\text{ideal}}_{ij}) + \\sigma_{\\text{noise}} \\cdot \\chi_{ij}\n \\label{eqn:W_noisy}\n\\end{equation}\n\n\n\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/walk_binnoise_greek2.pdf}\n\\caption{The attractor network performing a walk as in \\mbox{Figure \\ref{fig:walk_standard}}, but using the damaged weights matrix $\\mat{W}^{\\text{noisy}}$, whose entries have been binarised and then independent additive noise has been applied, as per \\mbox{Equation \\ref{eqn:W_construction}}. \\textbf{a)} The distribution of weights after they have been thusly damaged with noise of magnitude $\\sigma_{\\text{noise}} = 2$. Weights whose ideal values were positive or negative have been plotted separately. \\textbf{b)} The similarity of the network state $\\vec{z}_t$ to stored node states, with the network using the weights from a). Shown above is the sequence of inputs given to the network, identical to in \\mbox{Figure \\ref{fig:walk_standard}}. \\textbf{c)} The distribution of weights damaged with $\\sigma_{\\text{noise}} = 5$. \\textbf{d)} The similarity of the network state to stored node states, but with the network using the damaged weights from c). The network transitions are thus highly robust to unreliable weights, and show a gradual degradation in performance, even when the network's weights are majorly imprecise and noisy. For both b) and d) the edge state and output similarity plots have been omitted for visual clarity.}\n\\label{fig:starwars_smear}\n\\end{figure*}\n\n\n\nwhere $\\chi_{ij} \\in \\mathbb{R}$ are independently sampled standard Gaussian variables, sampled once during matrix construction, and $\\sigma_{\\text{noise}} \\in \\mathbb{R}$ is a scaling factor on the strength of noise being imposed. The $\\mathrm{sgn}(\\cdot)$ function forces the weights to be bipolar, emulating that the synapses may have only 1 bit of precision, while the $\\chi_{ij}$ random variables act as a smearing on the weight state, emulating that the two weight states have a finite width. A $\\sigma_{\\text{noise}}$ value of $2$ thus corresponds to the magnitude of the noise being equal to that of the signal (whether $W^{\\text{ideal}}_{ij} \\geq 0$), and so, for example, for a damaged weight value of $W^{\\text{noisy}}_{ij} = +1$ there is a 38\\% chance that the pre-damaged weight $W^{\\text{ideal}}_{ij} = -1$. This level of degradation is far worse than is expected even from novel binary memory devices \\cite{xia_memristive_2019}, and presumably also for biology. We used the same set of hypervectors and sequence of inputs as in \\mbox{Figure \\ref{fig:walk_standard}}, but this time using the degraded weights matrix $\\mat{W}^{\\text{noisy}}$, to test the network's robustness. The results are shown in \\mbox{Figure \\ref{fig:starwars_smear}} for weight degradation values of $\\sigma_{\\text{noise}} = 2$ and $\\sigma_{\\text{noise}} = 5$. We see that for $\\sigma_{\\text{noise}} = 2$ the attractor network performs the walk just as well as in Figure $\\ref{fig:walk_standard}$, which used the ideal weights matrix, despite the fact that here the binary weight distributions overlap each-other considerably. 
Furthermore, we have that $d(\\vec{z}_t, \\vec{x}^\\nu) \\approx 1$ where $\\vec{x}^\\nu$ is the attractor that the network should be inhabiting at any time, indicating that the attractor stability and recall accuracy is unaffected by the non-idealities. For $\\sigma_{\\text{noise}} = 5$, a scenario where the realised weight carries very little information about the ideal weight's value, we see that the network nonetheless continues to function, performing the correct walk between attractor states. However, there is a degradation in the recall of stored states, with the network state no longer converging to a similarity of 1 with the stored attractor states. For greater values of $\\sigma_{\\text{noise}}$, the network ceases to perform the correct walk, and indeed does not converge on any stored attractor state (not shown). \n\nA further test of robustness was to restrict the weights matrix to be sparse, as a dense all-to-all connectivity may not be feasible in biology, where synaptic connections are spatially constrained and have an associated chemical cost. Similar to the previous test, the sparse weights matrix was generated via\n\n\\begin{equation}\n W^{\\text{sparse}}_{ij} = \\mathrm{sgn}(W_{ij}^{\\text{ideal}} \\, ) \\cdot H (\\, \\lvert W_{ij} \\rvert - \\theta)\n \\label{eqn:W_sparse}\n\\end{equation}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/walk_sparse_greek2.pdf}\n\\caption{The attractor network performing a walk as in \\mbox{Figure \\ref{fig:walk_standard}}, but using a sparse ternary weights matrix $\\mat{W}^{\\text{sparse}} \\in \\{-1,0,1\\}^{N \\times N}$, generated via \\mbox{Equation \\ref{eqn:W_sparse}}. The weights matrices for \\textbf{a)} and \\textbf{b)} are 98\\% and 99\\% sparse respectively. Shown are the similarities of the network state $\\vec{z}_t$ with stored node states $\\vec{x}^\\nu \\in Z_{\\text{AN}}$, with the stimulus input hypervector to the network at any time shown above. We see that even when 98\\% of the entries in $\\mat{W}$ are zeroes, the network continues to function with negligible loss in stability, as the correct walk between attractor states is performed, and the network converges on stored attractors with similarity $d(\\vec{z}_t, \\vec{x}) \\approx 1$. At 99\\% sparsity there is a degradation in the accuracy of stored attractors, with the network converging on states with $d(\\vec{z}_t, \\vec{x}) < 1$, but with the correct walk still being performed. Beyond 99\\% sparsity the attractor dynamics break down (not shown). Thus although requiring a large number of neurons $N$ to enforce state pseudo-orthogonality, the network requires far fewer than $N^2$ nonzero weights to function robustly.}\n\\label{fig:starwars_sparse}\n\\end{figure*}\n\n\nwhere $\\theta$ is a threshold set such that $\\mat{W}^{\\text{sparse}} \\in \\{-1,0,1\\}^{N \\times N}$ has the desired sparsity. Through this procedure, only the most extreme weight values are allowed to be nonzero. Since the terms inside $\\mat{W}^{\\text{ideal}}$ are symmetrically distributed around 0, there are approximately as many +1 entries in $\\mat{W}^{\\text{sparse}}$ as -1s. Using the same hypervectors and sequence of inputs as before, an attractor network performing a walk using the sparse weights matrix $\\mat{W}^{\\text{sparse}}$ is shown in \\mbox{Figure \\ref{fig:starwars_sparse}}, with sparsities of 98\\% and 99\\%. 
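The thresholding in \\mbox{Equation \\ref{eqn:W_sparse}} can be sketched similarly: choosing $\\theta$ as a quantile of the absolute ideal weights directly fixes the desired sparsity. Again this is a minimal NumPy sketch with illustrative names rather than our exact simulation code:
\\begin{verbatim}
import numpy as np

def sparsify_weights(W_ideal, sparsity):
    # Keep only the most extreme entries, as ternary values in {-1, 0, +1}:
    # W_sparse = sgn(W_ideal) * H(|W_ideal| - theta), where theta is chosen
    # so that the requested fraction of entries is zeroed.
    theta = np.quantile(np.abs(W_ideal), sparsity)
    return np.sign(W_ideal) * (np.abs(W_ideal) > theta)

rng = np.random.default_rng(1)
W_ideal = rng.standard_normal((1000, 1000))
W_sparse = sparsify_weights(W_ideal, sparsity=0.98)
print(np.mean(W_sparse == 0))                         # ~0.98
print(np.sum(W_sparse == 1), np.sum(W_sparse == -1))  # roughly equal counts
\\end{verbatim}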
We see that for the 98\\% sparse case, there is again very little difference with the ideal case shown in \\mbox{Figure \\ref{fig:walk_standard}}, with the network still having a similarity of $d(\\vec{z}_t, \\vec{x}) \\approx 1$ with stored attractor states, and performing the correct walk. When the sparsity is pushed further to 99\\% however, we see that despite the network performing the correct walk, the attractor states are again slightly degraded, with the network converging on states with $d(\\vec{z}_t, \\vec{x}^\\nu) < 1$ with stored attractor states $\\vec{x}^\\nu$. For greater sparsities, the network ceases to perform the correct walk, and again does not converge on any stored attractor state (not shown).\n\nThese two tests thus highlight the extreme robustness of the model to imprecise and unreliable weights. The network may be implemented with 1 bit precision weights, whose weight distributions are entirely overlapping, or set 98\\% of the weights to 0, and still continue to function without any discernible loss in performance. The extent to which the weights matrix may be degraded and the network still remain stable is of course a function not only of the level of degradation, but also of the size of the network $N$, as well as the number of states $N_Z$ and edges $N_E$ stored within the network. For conventional Hopfield models with Hebbian learning, these two factors are normally theoretically treated alike, as contributing an effective noise to the postsynaptic sum as in \\mbox{Equation \\ref{eqn:sigma_SNR}}, and so the magnitude of withstandable synaptic noise increases with increasing $N$ \\cite{amit_modeling_1989, sompolinsky_theory_1987}. Although a thorough mathematical investigation into the scaling of weight degradation limits is justified, as a first result we have here given numerical data showing stability even in the most extreme cases of non-ideal weights, and expect that any implementation of the network with novel devices would be far away from such extremities.\n\n\n\\subsection{Asynchronous updates}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/walk_async_greek.pdf}\n\\caption{An attractor network performing a shorter walk than in \\mbox{Figure \\ref{fig:walk_standard}}, but where neurons are updated asynchronously, with each neuron having a 10\\% chance of updating on any time step. \\textbf{a)} The similarity of the network state $\\vec{z}_t$ to stored attractor states, with the input stimuli to the network shown above. \\textbf{b)} The evolution of a subset of neurons within the attractor network, where for visual clarity, the three nodal states shown have hypervectors taken from columns of the $N$-dimensional Hadamard matrix, rather than being randomly generated. The network functions largely the same as in the synchronous case, but with transitions between states now taking a finite number of time steps to complete. The model is thus not dependent on the precise timing of neuron updates, and should function robustly in asynchronous systems where timing is unreliable.}\n\\label{fig:async}\n\\end{figure*}\n\n\n\nAnother useful property of Hopfield networks is the ability to robustly function even with asynchronously updating neurons, wherein not every neuron experiences a simultaneous state update. This property is especially important for any architecture claiming to be biologically plausible, as biological neurons update asynchronously and largely independently of each-other, without the need for global clock signals.
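In the experiment below, asynchrony is modelled by letting each neuron update independently with a fixed probability on every time step. A minimal sketch of one such update step, assuming the usual sign-threshold recurrent update (the function and variable names are illustrative, not our exact simulation code):
\\begin{verbatim}
import numpy as np

def async_step(W, z, p_update, rng):
    # Each neuron recomputes its state from its postsynaptic sum with
    # probability p_update; otherwise it keeps its previous state.
    update_mask = rng.random(z.shape) < p_update
    z_new = np.sign(W @ z)
    z_new[z_new == 0] = 1.0  # break exact ties deterministically
    return np.where(update_mask, z_new, z)

rng = np.random.default_rng(2)
W = rng.standard_normal((100, 100)); W = (W + W.T) / 2
z = rng.choice([-1.0, 1.0], size=100)
z = async_step(W, z, p_update=0.1, rng=rng)
\\end{verbatim}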
To this end, we ran a similar experiment to that in \\mbox{Figure \\ref{fig:walk_standard}}, using the undamaged weights matrix $\\mat{W}^{\\text{ideal}}$, but with an asynchronous neuron update rule, wherein on each time step every neuron has only a 10\\% chance of updating its state. The remaining 90\\% of the time, the neuron retains its state from the previous time step, regardless of its postsynaptic sum. There is thus no fixed order of neuron updates, and indeed it is not even a certainty that a neuron will update in any finite time. To account for the slower dynamics of the network state, the time for which inputs were presented to the network, as well as the periods without any input, was increased from 10 to 40 time steps. To be able to easily view the gradual state transition, three of the attractor states were chosen to be columns of the $N$-dimensional Hadamard matrix, rather than being randomly generated. The results are shown in \\mbox{Figure \\ref{fig:async}}, for a shorter sequence of stimulus inputs. We see that the network functions as intended, but with the network now converging on the correct attractors in a finite number of updates rather than in just one. The model proposed here is thus not reliant on synchronous dynamics, which is important not only for biological plausibility, but also when considering possible implementations on asynchronous neuromorphic hardware.\n\n\n\n\n\n\\subsection{Storage capacity}\nIt is well known that the storage capacity of a Hopfield network, the number of patterns $P$ that can be stored and reliably retrieved, is proportional to the size of the network, via $P < 0.14N$ \\cite{hopfield_neural_1982, amit_modeling_1989}. When one tries to store more than $P$ attractors within the network, the so-called memory blackout occurs, after which no pattern can be retrieved. We thus perform numerical simulations for a large range of attractor network and FSM sizes, to see if an analogous relationship exists. Said otherwise, for an attractor network of finite size $N$, what sizes of FSM can the network successfully emulate?\n\nFor a given $N$, number of FSM nodes $N_Z$ and edges $N_E$, a random FSM was generated and an attractor network constructed to represent it as described in Section \\ref{section:methods}. To ensure a reasonable FSM was generated, the FSM's graph was first generated to have all nodes connected in a sequential ring structure, i.e. every state $\\zeta^{\\nu}\\in Z_{\\text{FSM}}$ connects to $\\zeta^{\\nu+1 \\mod N_Z}$. The remaining edges between nodes were selected at random, until the desired number of edges $N_E$ was reached. For each edge an associated stimulus is then required. Although one option would be to allocate as few unique stimuli as possible, so that the state transitions are maximally state dependent, this results in some advantageous cancellation effects between the $\\mat{E}^\\eta$ transition terms and the stored attractors $\\vec{x}^\\nu \\vec{x}^{\\nu \\intercal}$. To instead probe a worst-case scenario, each edge was assigned a unique stimulus.\n\nWith the FSM now generated, an attractor network with $N$ neurons was constructed as previously described. An initial attractor state was chosen at random, and then a random valid walk between states was chosen to be performed (chosen arbitrarily to be of length 6, corresponding to each run taking 180 time steps). 
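The FSM sampling procedure described above (a ring backbone for connectivity, additional random edges, and one unique stimulus per edge) can be summarised by the following sketch; the helper name and the handling of repeated or self-directed edges are illustrative choices rather than a specification of our exact experimental code:
\\begin{verbatim}
import numpy as np

def random_fsm(n_nodes, n_edges, rng):
    # Ring backbone guarantees every node is reachable; the remaining
    # edges are drawn at random until the requested count is reached.
    edges = {(v, (v + 1) % n_nodes) for v in range(n_nodes)}
    while len(edges) < n_edges:
        src, dst = rng.integers(0, n_nodes, size=2)
        edges.add((int(src), int(dst)))
    # Worst case for capacity: every edge gets its own unique stimulus.
    return [(src, dst, stim) for stim, (src, dst) in enumerate(sorted(edges))]

rng = np.random.default_rng(3)
fsm = random_fsm(n_nodes=8, n_edges=16, rng=rng)
print(len(fsm))  # 16 (source, target, stimulus) triples
\\end{verbatim}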
The corresponding sequence of stimuli was input to the attractor network via the same procedure as in \\mbox{Figure \\ref{fig:walk_standard}}, each masking the network state. Each run was then evaluated to have either passed or failed, with a pass meaning that the network state inhabited the correct attractor state with overlap $d(\\vec{z}_t, \\vec{x}^\\nu) > 0.75$ in the middle of every interval during which it should be in a given node attractor state. A pass thus corresponds to the network performing the correct walk between attractor states. The results are shown in \\mbox{Figure \\ref{fig:memcap}}. We see that for a given $N$, there is a linear relationship between the number of nodes $N_Z$ and number of edges $N_E$ in the FSM that can be implemented before failure. That this trade-off exists is not surprising, since both contribute additively to the SNR within the attractor network (\\mbox{Equation \\ref{eqn:sigma_SNR}}). For each $N$, a linear Support Vector Machine (SVM) was fitted to the data, to find the separating boundary at which failure and success of the walk are approximately equiprobable. The boundary is given by $N_Z + \\beta N_E = c(N)$, where $\\beta$ represents the relative cost of adding nodes and edges, and $c(N)$ is an offset. For all of the fitted boundaries, the value of $\\beta$ was found to be approximately constant, with $\\beta = 2.4 \\pm 0.1$, and so is assumed to be independent of $N$. For every value of $N$, we define the capacity $C$ to be the maximum size of FSM which can be implemented before failure, for which $N_E = N_Z$. The capacity $C$ is then given by $C(N) = \\frac{c(N)}{1+\\beta}$, and is also plotted in \\mbox{Figure \\ref{fig:memcap}}.\nA linear fit reveals an approximate proportionality relationship of $C(N) \\approx 0.025 N$. In all, the boundary which limits the size of FSM which can be emulated is then given by \n\n\\begin{equation}\n N_Z + 2.4 N_E < 0.085 N\n\\end{equation}\n\nIt is expected that additional edges consume more of the network's storage capacity than additional nodes, since for every edge, 5 additional terms are added to $\\mat{W}$ (\\mbox{Equation \\ref{eqn:tran_term_final}}), contributing 3$\\times$ as much cross-talk noise as adding a node would (\\mbox{Equation \\ref{eqn:sigma_SNR}}). We can compare this storage capacity relation with that of the standard Hopfield model, by considering the case $N_E = 0$, i.e. there are no transition terms in the network, and so the network is identical to a standard Hopfield network. In this case, our failure boundary would become $N_Z < 0.085N$, in comparison to Hopfield's $P < 0.14 N$. Although this may seem like a drastic reduction in memory capacity, we must remember that as a result of input stimuli applying a masking operation to our network, the effective size of the network during these periods is $N^\\prime :=\\frac{1}{2}N$. In this case, the failure boundary can be rewritten as $N_Z < 0.17 N^\\prime$, which is more in keeping with the Hopfield estimate\\footnote{Although it might seem we are claiming that the network implemented here has a greater storage capacity than standard Hopfield networks, our boundary at $0.17N$ is for equiprobable success and failure, while the $0.14N$ figure is given for overwhelmingly likely success.}.\n\n\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=4.5in]{figs\/memcap_final.pdf}\n\\caption{The capacity of the attractor network for varying size $N$, in terms of the size of FSM that can be emulated before failure.
For a given $N$, a random FSM was generated with number of nodes $N_Z$ and number of edges $N_E$. An attractor network was then constructed as described in Section \\ref{section:methods}, and a sequence of stimuli input to the network that should trigger a specific walk between attractor states. \\textbf{a)} Every coloured square is a successful walk, with no unique $(N_Z,N_E, N)$ triplet being sampled more than once, and lower-$N$ squares occlude higher-$N$ squares. Since only graphs with at least as many edges as nodes were sampled, $N_E - N_Z$ is given on the $y$-axis rather than $N_E$. The overlain black lines are the SVM-fitted decision boundaries, distinguishing between values that succeeded and values that failed. \\textbf{b)} The capacity $C$ for varying Hopfield network sizes $N$, where $C$ is defined to be the maximum size of FSM which can be implemented before failure, for which $N_E=N_Z$. A linear fit is overlain, and shows a linear relationship between the capacity $C$ and $N$ over the range explored. Assuming that the gradients of the linear fits in a) are equal, the boundary at which failure and success are equiprobable is given by $N_Z + 2.4 N_E = 0.085 N$.}\n\\label{fig:memcap}\n\\end{figure*}\n\n\n\n\\section{Discussion}\n\\label{section:discussion}\n\n\n\\subsection{Hardware implementability}\nSince the network described in this paper differs very little from the conventional Hopfield description, having changed only the prescription for the generation of the weights matrix and the specific way that input is modelled, we can lean heavily on previous works concerning VLSI implementations of Hopfield networks \\cite{howard_associative_1987, verleysen_analog_1989, weinfeld_fully_1989}.\n\nThe main difficulty associated with an implementation would be the size requirements, requiring a high dimensionality $N$ to ensure pseudo-orthogonality of randomly generated hypervectors. Although we may need $N>10^{4}$ neurons, we have shown this does not necessarily mean we need $10^{8}$ synapses, as the network still functions when the weights matrix is sparse (\\mbox{Figure \\ref{fig:starwars_sparse}}). Despite having shown the robust functioning of the network only in the two cases of sparse ternary weights and dense noisy weights, we expect there to be a trade-off between the amount of each non-ideality that the network can withstand before failure. That is, an attractor network with dense noisy weights may withstand a greater degree of noise before failure than a network with sparse noisy weights.\n\nAn efficient and high-density implementation of a dense weights matrix may however be enabled by novel memristive crossbar technologies, which execute the dense matrix-vector-multiplication (MVM) step required in the state update rule in one operation \\cite{ielmini_-memory_2018, xia_memristive_2019, zidan_chapter_2020}. Such devices are already of great interest due to their immediate application in image processing and deep learning acceleration, and hybrid CMOS-memristor hardware implementations have thus already been pursued for their usefulness as an associative memory \\cite{guo_modeling_2015, yang_novel_2017, molahasani_majdabadi_hybrid_2021}, as well as for more direct FSM implementations \\cite{graves_memory_2020, de_lima_stap_2022} using memristive ternary content addressable memory (TCAM) cells.
Since we have shown that individual weights may be bistable, incredibly unreliable and noisy without incurring a significant loss to performance, then a large enough crossbar would be a highly suitable platform on which to implement the network presented here.\n\nSince the additions to the weights matrix for each state and transition are composed purely of outer products, fully parallel one-shot in-memory online learning of the weights matrix may be achievable. As long as the updates in the memristors' conductances are sufficiently linear and symmetric, then attractors and transitions may be learned in one-shot by specifying the two vectors at the crossbar's inputs and outputs \\cite{alibart_pattern_2013, li_situ_2021}. \n\n\n\n\\subsection{Relation to other architectures}\nWhile there is a large body of work concerning the equivalence between RNNs and FSMs, their implementations broadly fall into a few categories. There are those that require iterative gradient descent methods to mimic an FSM \\cite{zeng_learning_1993, lee_giles_learning_1995, das_unified_1994}, which makes them difficult to train for large FSMs, and improbable for use in biology. There are those that require creating a new FSM with an explicitly expanded state set, $Z' := Z \\times S$, such that there is a new state for every old state-stimulus pair \\cite{minsky_computation_1967, sanfeliu_algebraic_1999}, which is unfavourable due to the the explosion of (usually one-hot) states needing to be represented, as well as the difficulty of adding new states or stimuli iteratively. There are those that require higher-order weight tensors in order to explicitly provide a weight entry for every unique state-stimulus pair \\cite{omlin_fuzzy_1998, forcada_finite-state_2001, mali_neural_2020}. As well as being non-distributed, the weight tensors require synapses to connect between more than two neurons, which is difficult to implement and non-biological \\cite{krotov_large_2021}. In \\cite{recanatesi_memory_2017} transitions are triggered by adiabatically modulating a global inhibition parameter, such that the network may transition between similar stored patterns. Lacking however is a method to construct a network to perform arbitrary, controllable transitions between states. In \\cite{chen_attractor-state_2020} an in-depth analysis of small populations of rate-based neurons is conducted, wherein synapses with short-term synaptic depression enable a rich behaviour of itinerancy between attractor states, but does not scale to large systems and arbitrary stored memories.\n\nMost closely resembling our approach, however, are earlier works concerned with the related task of creating a sequence of transitions between attractor states in Hopfield-like neural networks. The majority of these efforts rely upon the use of synaptic delays, such that the postsynaptic sum on a time step $t$ depends, for example, also on the network state at time $t-10$, rather than just $t-1$. These delay synapses thus allow attractor cross-terms of the form $\\vec{x}^{\\nu+1}\\vec{x}^{\\nu \\intercal}$ to become influential only after the network has inhabited an attractor state for a certain amount of time, triggering a walk between attractor states \\cite{sompolinsky_temporal_1986, kleinfeld_sequential_1986}. This then also allowed for the construction of networks with state-dependent input-triggered transitions \\cite{gutfreund_processing_1988, amit_neural_1988, drossaers_hopfield_1992}. 
Similar networks were shown to function without the need for synaptic delays, but require fine tuning of network parameters and suffer from extremely low storage capacity \\cite{buhmann_noise-driven_1987, amit_modeling_1989}. In any case, the need for synaptic delay elements represents a large requirement on any substrate which might implement such a network, and indeed are problematic to implement in neuromorphic systems \\cite{nielsen_compact_2017}.\n\nState-dependent computation in spiking neural networks was realised in \\cite{neftci_synthesizing_2013} and \\cite{liang_neuromorphic_2019}, where they used population attractor dynamics to achieve robust state representations via sustained spiking activity. However, these approaches differ from this work in that the state representations are still fundamentally population-based rather than distributed, and so pose difficulties such as the requirement of finding a new population of neurons to represent any new state.\n\nThis work also differs from conventional methods to implement graphs in VSAs \\cite{kleyko_vector_2021, poduval_graphd_2022}, in that the network state does not need to be read by an outsider in order to implement state-dependent switching. That is, where in previous works a graph is encoded by a hypervector such that it may be reliably decoded by external circuitry, we instead encode the graph's connectivity in the attractor network's weights matrix, such that its recurrent dynamics realise the desired state machine. Our implementation could however have been brought closer to previous works, and indeed made simpler, if there were a way to reliably achieve a flipping of neuron states when input is received. Then, the transition dynamics could be achieved just by storing edge terms like $\\mat{E} = \\vec{y} (\\vec{x} \\circ \\vec{s})^{\\intercal}$. Although achieving a state flip may be easy in a digital synchronous system, it would be very difficult to robustly achieve in an analogue asynchronous system. To avoid flickering, the flip would need to be reliably initiated by a single event. These events would also need to arrive at all neurons simultaneously, lest attractor dynamics take over and correct these apparent errors. Both of these factors prohibit such an operation from existing in biological systems.\n\nThe lack of a flipping mechanism is also discussed in \\cite{rigotti_internal_2010}, wherein they show the necessity of a population of neurons with mixed selectivity, connected to both the input and attractor neurons, in order to achieve the flipping-like behaviour necessary for complex state switching. This requirement arose by demanding that the network state switch to resembling the target state immediately upon receiving a stimulus. We instead show that similar results can be achieved without this extra population, if we relax to instead demanding only that the network soon evolve to the target state.\n\n\n\n\n\\subsection{Biological plausibility}\n\nTransitions between discrete neural attractor states are thought to be a crucial mechanism for performing context-dependent decision making in biological neural systems \\cite{daelli_neural_2010, mante_context-dependent_2013, miller_itinerancy_2016, tajima_task-dependent_2017}. Attractor dynamics enable a temporary retention of received information, and ensure that irrelevant inputs do not produce stable deviations in the neural state. 
As such, in this work we provide a description of an attractor-based network that can perform controllable context-dependent transitions in a simple manner, while abiding by the principles of distributed representation.\n\nThe procedure for generating the weights matrix $\\mat{W}$, as a result of this simplicity, makes the proposed network more plausible than other more complex approaches, e.g. those utilising gradient descent methods. It can be learned in one-shot in a fully online fashion, since adding a new node or edge involves only an additive contribution to the weights matrix, which does not require knowledge of irrelevant edges, nodes, their hypervectors, or the weight values themselves. Furthermore, as a result of the entirely distributed representation of states and transitions, new behaviours may be added to the weights matrix at a later date, both without having to allocate new hardware, and without having to recalculate $\\mat{W}$ with all previous data. Both of these factors are critical for continual online learning.\n\nEvaluating the local learnability of $\\mat{W}$ to implement transitions is also necessary to evaluate the biological plausibility of the model. In the original Hopfield paper the weights could be learned using the simple Hebbian rule\n\n\\begin{equation}\n \\delta w_{ij} = x^{\\nu}_{i} x^{\\nu}_{j}\n\\end{equation}\n\nwhere $x_i^\\nu$ and $x_j^\\nu$ are the activities of the post- and presynaptic neurons respectively, and $\\delta w_{ij}$ the online synaptic efficacy update \\cite{hebb_organization_1949, hopfield_neural_1982}. While the attractor terms within the network can be learned in this manner, the transition cross-terms that we have introduced require an altered version of the learning rule. If we simplify our network construction by removing the edge state attractors, then the local weight update required to learn a transition between states is given by\n\n\n\\begin{equation}\n \\delta w_{ij} = H(s_i) y_{i} x_{j} s_{j}\n\\end{equation}\n\nwhere $\\vec{y}$, $\\vec{x}$ and $\\vec{s}$ are as previously defined. In removing the edge states, we disallow FSMs with consecutive edges with the same stimulus (e.g. \"father\\_is, father\\_is\"), but this is not a problem if completely general FSM construction is not the goal per se. This state-transition learning rule is just as local as the original Hopfield learning rule, as the weight update from presynaptic neuron $j$ to postsynaptic neuron $i$ is dependent only upon information that may be made directly accessible in the pre- and postsynaptic neurons, and does not depend on information in other neurons to which the synapse is not connected \\cite{zenke_brain-inspired_2021, khacef_spike-based_2022}.\n\n\nThe robust functioning of the network despite noisy and unreliable weights is another prerequisite for the model to plausibly be able to exist in biological systems. The network weights may be considerably degraded without affecting the behaviour of the network, and indeed beyond this the network exhibits a so-called graceful degradation in performance. Furthermore, biological synapses are expected to have only a few bits of precision \\cite{oconnor_graded_2005, bartol_hippocampal_2015, baldassi_learning_2016}, and the network has been shown to function even in the worst case of binary weights. 
These properties stem from the massive redundancy arising from storing the attractor states across the entire synaptic matrix in a distributed manner, a technique that the brain is expected to utilise \\cite{rumelhart_parallel_1987, crawford_biologically_2016}.\nSince the network is still an attractor network, it retains all of the properties that make attractor networks suitable for modelling cognitive function, such as that the network can perform robust pattern completion and correction, i.e. the recovery of a stored prototypical memory given a damaged, incomplete or noisy version, and thereafter function as a stable working memory \\cite{hopfield_neural_1982, amit_modeling_1989}.\n\n\\section{Conclusion}\n\\label{section:conclusion}\n\nAttractor neural networks are robust abstract models of human memory, but previous attempts to endow them with complex and controllable attractor-switching capabilities have suffered mostly from being either non-distributed, not scalable, or not robust. We have here introduced a simple procedure by which any arbitrary FSM may be embedded into a large enough Hopfield-like attractor network, where states and stimuli are replaced by high-dimensional random vectors, and all information pertaining to FSM transitions is stored in the network's weights matrix in a fully distributed manner. Our method of modelling input to the network as a masking of the network state allows cross-terms between attractors to be stored in the weights matrix in a way that they are effectively obfuscated until the correct state-stimulus pair is present, much in a manner similar to the standard binding-unbinding operation in more conventional VSAs.\n\nWe showed that the network retains many of the features of attractor networks which make them suitable for biology, namely that the network is robust to unreliable and imprecise weights, thus also making them highly suitable for implementation with high-density but noisy devices. We presented numerical results showing that the network capacity in terms of implementable FSM size scales linearly with the size of the attractor network, and also that the network continues to function when the synchronous neuron update rule is replaced with a stochastic, asynchronous variant. \n\nIn summary, we introduced an attractor-based neural state machine which overcomes many of the shortcomings that made previous models unsuitable for use in biology, and propose that attractor-based FSMs may thus represent a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks.\n\\section*{Acknowledgements}\nWe thank Dr. Federico Corradi, Dr. Nicoletta Risi and Dr. Matthew Cook for their invaluable input and suggestions, as well as their help with proofreading this document. \n\nFunded by the Deutsche Forschungsgemeinschaft (DFG German Research Foundation) - Projects NMVAC (432009531) and MemTDE (441959088).\n\nThe authors would like to acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds (Univ. of Groningen).\n\n\\FloatBarrier\n\n\\printbibliography\n\n\\clearpage\n \n\\onecolumn\n \n\\section*{Appendix}\n\n\\subsection{Dynamics without masking}\n\nFor the following calculations we assume that the coding level of the output states $f_r$ is low enough that their effect can be ignored. 
With this in mind, if we ignore the semantic differences between attractors for node states and attractors for edge states, the two summations over states can be absorbed into one summation over both types of attractor, here both denoted $\\vec{x}^\\nu$. Similarly there is then no difference between the two transition cross-terms within each $\\mat{E}$ term, and they too can be absorbed into one summation. Our simplified expression for $\\mat{W}$ is now given by\n\n\n\\begin{equation}\n \\mat{W} = \\frac{1}{N}\\sum_{\\vphantom{\\text{dg}}\\text{attr's $\\nu$}}^{N_Z + N_E} \\vec{x}^\\nu \\vec{x}^{\\nu\\, \\intercal} +\n \\frac{1}{N}\\sum_{\\text{tran's $\\lambda$}}^{2N_{E}} H(\\vec{s}^{\\pi(\\lambda)}) \\circ (\\vec{x}^{\\upsilon(\\lambda)} - \\vec{x}^{\\chi(\\lambda)}) (\\vec{x}^{\\chi(\\lambda)} \\circ \\vec{s}^{\\pi(\\lambda)})^{\\intercal}\n\\label{eqn:apx_1}\n\\end{equation}\n\nwhere $\\chi(\\lambda)$ and $\\upsilon(\\lambda)$ are functions $\\{1,\\ldots, 2N_E\\} \\rightarrow \\{1,\\ldots,N_Z + N_E \\}$ determining the indices of the source and target states for transition $\\lambda$, and $\\pi(\\lambda) : \\{1, \\ldots, 2N_E\\} \\rightarrow \\{1, \\ldots N_{\\text{stimuli}}\\}$ determines the index of the associated stimulus. We then wish to calculate the statistics of the postsynaptic sum $\\mat{W}\\vec{z}$ while the attractor network is currently in an attractor state. When in an attractor state $\\vec{x}^\\mu$, the postsynaptic sum is given by\n\n\n\\begin{equation}\n\\begin{split}\n\\Big[ \\mat{W}\\vec{x}^\\mu \\Big]_i & = \\frac{1}{N}\\sum_{\\vphantom{\\text{dg}}\\text{attr's $\\nu$}}^{N_Z + N_E} x_i^{\\nu} \\underbrace{\\big[ \\vec{x}^{\\nu} \\cdot \\vec{x}^\\mu \\big]}_{ \\substack{N\\text{ if } \\mu = \\nu \\\\ \\text{ else } \\mathcal{N}(0,N)}} \n + \\frac{1}{N}\\sum_{\\text{tran's $\\lambda$}}^{2N_{E}} H(s_i^{\\pi(\\lambda)}) \\circ (x_i^{ \\upsilon (\\lambda)} - x_i^{ \\chi (\\lambda)}) \\underbrace{\\big[ (\\vec{x}^{ \\chi (\\lambda)} \\circ \\vec{s}^{ \\pi(\\lambda)}) \\cdot \\vec{x}^\\mu \\big]}_{ \\mathcal{N}(0,N)} \\\\\n & = x^\\mu_i + \\sum_{\\vphantom{\\text{dg}}\\substack{\\text{attr's}\\\\ \\nu \\neq \\mu}}^{N_Z + N_E} \\underbrace{x_i^{\\nu}}_{ \\mathrm{Var.} = 1 } \\Big[ \\, \\mathcal{N}^{\\nu} \\big( 0,\\frac{1}{N} \\big) \\Big] \n + \\sum_{\\text{tran's $\\lambda$}}^{2N_{E}} \\underbrace{H(s_i^{ \\pi(\\lambda)}) \\circ (x_i^{ \\upsilon (\\lambda)} - x_i^{ \\chi (\\lambda)})}_{ \\mathrm{Var.}=1 }\\Big[ \\mathcal{N}^{\\lambda} \\big(0,\\frac{1}{N} \\big) \\Big] \\\\\n & \\approx x^\\mu_i + \\mathcal{N}\\Bigg( 0, \\frac{N_Z + N_E -1}{N} \\Bigg) + \\mathcal{N}\\Bigg( 0, \\frac{2N_E}{N} \\Bigg) \\\\\n & \\approx x^\\mu_i + \\mathcal{N}\\Bigg( 0, \\frac{N_Z + 3N_E}{N} \\Bigg)\n\\end{split}\n\\label{eqn:apx_2}\n\\end{equation}\n\nwhere we have used the notation $\\mathcal{N}(\\mu, \\sigma^2)$ to denote a normally-distributed random variable (RV) with mean $\\mu$ and variance $\\sigma^2$. In the third line we have made the approximation in the transition summation that the linear sum of attractor hypervectors, each multiplied by a Gaussian RV, is itself a separate Gaussian RV in each dimension. This holds as long as there are \"many\" attractor terms appearing on the LHS of the transition summation. 
Said otherwise, if the summation over transition terms has only very few unique attractor terms on the LHS ($N_E \\gg N_Z$), then the noise will be a random linear sum of the same few (masked) hypervectors, each with approximate magnitude $\\frac{1}{\\sqrt{N}}$, and so will be highly correlated between dimensions. Nonetheless we assume we are far away from this regime, and let the effect of the sum of these unwanted terms be approximated by a normally-distributed random vector, and so we have\n\n\\begin{equation}\n \\mat{W}\\vec{x}^\\mu \\approx \\vec{x}^\\mu + \\sigma\\vec{n}\n\\end{equation}\n\nwhere $\\sigma = \\sqrt{\\frac{N_Z + 3N_E}{N}}$ is the strength of cross-talk noise, and $\\vec{n}$ a vector composed of IID standard normally-distributed terms. This procedure of quantifying the signal-to-noise ratio (SNR) is adapted from that in the original Hopfield paper \\cite{hopfield_neural_1982, amit_modeling_1989}.\n\n\\subsection{Dynamics with masking}\n\nWe can similarly calculate the postsynaptic sum when in an attractor state $\\vec{x}^\\mu$, while the network is being masked by a stimulus $\\vec{s}^\\kappa$, with this (state, stimulus) tuple corresponding to a certain valid transition $\\lambda^\\prime$, with source, target, and stimulus vectors $\\vec{x}^\\mu$, $\\vec{x}^\\phi$, and $\\vec{s}^\\kappa$ respectively:\n\n\\begin{equation}\n\\begin{split}\n\\Big[ \\mat{W} \\big( \\vec{x}^\\mu \\circ H(\\vec{s}^\\kappa) \\big) \\Big]_i & = \\frac{1}{N}\\sum_{\\vphantom{\\text{dg}}\\text{attr's $\\nu$}}^{N_Z + N_E} x_i^{\\nu} \\underbrace{\\big[ \\vec{x}^{\\nu} \\cdot \\big( \\vec{x}^\\mu \\circ H (\\vec{s}^\\kappa) \\big) \\big]}_{ \\substack{\\frac{1}{2}N\\text{ if } \\mu = \\nu \\\\ \\text{ else } \\mathcal{N}(0,\\frac{1}{2}N)}} \\\\\n & \\quad \\quad + \\frac{1}{N}\\sum_{\\text{tran's $\\lambda$}}^{2N_{E}} H(s_i^{ \\pi(\\lambda)}) (x_i^{ \\upsilon (\\lambda)} - x_i^{ \\chi (\\lambda)}) \\underbrace{\\big[ (\\vec{x}^{ \\chi(\\lambda)} \\circ \\vec{s}^{ \\pi(\\lambda)}) \\cdot \\big( \\vec{x}^\\mu \\circ H (\\vec{s}^\\kappa) \\big)\\big]}_{ \\substack{\\frac{1}{2}N\\text{ if } \\chi(\\lambda) = \\mu \\text{ and } \\pi(\\lambda) = \\kappa \\\\ \\text{ else } \\mathcal{N}(0,\\frac{1}{2}N)}} \\\\\n & = \\frac{1}{2}x^\\mu_i + \\frac{1}{2}H(s_i^\\kappa) (x_i^\\phi - x_i^\\mu) + \\sum_{\\vphantom{\\text{dg}}\\substack{\\text{attr's}\\\\ \\nu \\neq \\mu}}^{N_Z + N_E} \\underbrace{x_i^{\\nu}}_{ \\mathrm{Var.} = 1 } \\Big[ \\, \\mathcal{N}^{\\nu}(0,\\frac{1}{2N}) \\Big] \\\\\n & \\quad \\quad + \\sum_{\\substack{\\text{tran's} \\\\ \\lambda \\neq \\lambda^\\prime}}^{2N_{E}} \\underbrace{H(s_i^{ \\pi(\\lambda)}) (x_i^{ \\upsilon(\\lambda)} - x_i^{ \\chi (\\lambda)})}_{ \\mathrm{Var.} = 1 } \\Big( \\mathcal{N}^{\\lambda}(0,\\frac{1}{2N}) \\Big) \\\\\n & \\approx \\frac{1}{2} \\big[ H(s_i^\\kappa) + H(-s_i^\\kappa) \\big] x^\\mu_i + \\frac{1}{2}H(s_i^\\kappa)( x_i^\\phi - x_i^\\mu) \n + \\mathcal{N}\\Big( 0, \\frac{N_Z + N_E -1}{2N} \\Big) + \\mathcal{N}\\Big( 0, \\frac{2N_E-1}{2N} \\Big) \\\\\n \n & = \\frac{1}{2}H(s_i^\\kappa) x_i^\\phi + \\frac{1}{2}H(-s_i^\\kappa) x_i^\\mu + \\mathcal{N}\\Big( 0, \\frac{N_Z + 3N_E -2}{2N} \\Big) \\\\\n & \\approx \\frac{1}{2} \\Bigg[ H(s_i^\\kappa) x_i^\\phi + H(-s_i^\\kappa) x_i^\\mu + \\sqrt{2} \\cdot \\mathcal{N}\\Big( 0,\\frac{N_Z + 3N_E}{N} \\Big) \\Bigg] \\\\\n\\end{split}\n\\label{eqn:apx_3}\n\\end{equation}\n\nwhere in the third line we have made the same approximations as previously discussed. 
The postsynaptic sum is thus approximately $\\vec{x}^\\phi$ in all indices that are not currently being masked, which drives the network towards that (target) attractor. In vector form, the above is written as\n\n\\begin{equation}\n \\mat{W} \\big( \\vec{x}^\\mu \\circ H ( \\vec{s}^\\kappa ) \\big) \\,\\, \\appropto \\,\\, H ( \\vec{s}^ \\kappa) \\circ \\vec{x}^\\phi + H (- \\vec{s}^ \\kappa) \\circ \\vec{x}^\\mu + \\sqrt{2}\\sigma\\vec{n}\n\\end{equation}\n\nwhere it is assumed that there exists a stored transition from state $\\vec{x}^\\mu$ to $\\vec{x}^\\phi$ with stimulus $\\vec{s}^\\kappa$, and $\\appropto$ denotes approximate proportionality. A similar calculation can be performed in the case that a stimulus is imposed which does not correspond to a valid transition for the current state. In this case, no terms of significant magnitude emerge from the transition summation, and we are left with \n\n\\begin{equation}\n \\mat{W} \\big( \\vec{x}^\\mu \\circ H ( \\vec{s}^\\text{invalid} ) \\big) \\,\\, \\appropto \\,\\, \\vec{x}^\\mu + \\sqrt{2}\\sigma\\vec{n}\n\\end{equation}\n\ni.e. the attractor dynamics are largely unaffected. Since we have not distinguished between our above attractor terms being node attractors or edge attractors, or our stimuli from being $\\vec{s}_a$ or $\\vec{s}_b$ stimuli, the above results can be applied to all relevant situations \\textit{mutatis mutandis}.\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAbstract polytopes are combinatorial structures with properties that generalise those of\nclassical polytopes. In many ways they are more fascinating than convex polytopes\nand tessellations. Highly symmetric examples of abstract polytopes include not\nonly classical regular polytopes such as the Platonic solids, and more exotic structures such as\nthe $120$-cell and $600$-cell, but also regular maps on surfaces (such as Klein's quartic).\n\nRoughly speaking, an abstract polytope $\\mathcal {P}$ is a partially ordered set endowed with\na rank function, satisfying certain conditions that arise naturally from a geometric\nsetting. Such objects were proposed by Gr\\\"unbaum in the 1970s, and their definition\n(initially as `incidence polytopes') and theory were developed by Danzer and Schulte.\n\nAn \\emph{automorphism} of an abstract polytope $\\mathcal {P}$ is an order-preserving permutation\nof its elements, and every automorphism of is uniquely determined by its effect\non any maximal chain in $\\mathcal {P}$ (which is known as a `flag' in $\\mathcal {P}$). 
The\nmost symmetric examples are regular, with all flags lying in a single orbit.\nThe comprehensive book written by Peter McMullen and Egon Schulte~\\cite{ARP} is\nnowadays seen as the principal reference on this subject.\n\nAn interesting class of examples which are not quite regular is that of the chiral polytopes.\nFor these, the automorphism group has two orbits on flags, with any two flags that\ndiffer in a single element lying in different orbits.\nChirality is a fascinating phenomenon that does not have a counterpart in the classical\ntheory of traditional convex polytopes.\nThe study of chiral abstract polytopes\nwas pioneered by Schulte and Weiss (see~\\cite{ChiralPolytopes, ChiralPolytopesWithPSL} for example),\nbut it has been something of a challenge to find and construct finite examples.\n\nFor quite some time, the only known finite examples of chiral polytopes had ranks $3$ and $4$.\nIn rank $3$, these are given by the irreflexible (chiral) maps on closed\ncompact surfaces (see Coxeter and Moser~\\cite{BooksCoxeter}).\nSome infinite examples of chiral polytopes of rank $5$ were constructed by Schulte\nand Weiss in~\\cite{FEChiralPolytopes}, and then some finite examples\nof rank $5$ were constructed just over ten years ago by Conder, Hubard and Pisanski~\\cite{ConstructChiralPolytopes}.\n\nMany small examples of chiral polytopes are now known.\nThese include all chiral polytopes with at most $4000$ flags,\nand all that are constructible from an almost simple group $\\Ga$\nof order less than $900000$.\nThese have been assembled in collections, as in~\\cite{atles1, atles2}, for example.\nIn early $2009$ Conder and Devillers devised a construction for chiral polytopes whose facets are simplices,\nand used this to construct examples of finite chiral polytopes of ranks $6$, $7$ and $8$ [unpublished].\n\n At about the same time, Pellicer devised a quite different method for constructing finite chiral polytopes\nwith given regular facets, and used this construction to prove the existence of finite chiral polytopes of every rank $d \\geq 3$; see~\\cite{ConstructChiralPolytopesforanyrank}.\nA few years later, Cunningham and Pellicer proved that every finite chiral $d$-polytope with regular facets\nis itself the facet of a chiral $(d +1)$-polytope; see~\\cite{moreonConstructChiralPolytopesforanyrank}.\n\nThe work of Conder and Devillers was later taken up by Conder, Hubard, O'Reilly Regueiro and Pellicer~\\cite{ChiralPolytopesOfANSN},\nto prove that all but finitely many alternating groups $A_n$ and symmetric groups $S_n$ are the automorphism group\nof a chiral 4-polytope of type $\\{3,3,k\\}$ for some $k$ (dependent on $n$).\nThis has recently been extended to every rank greater than $4$ by the same authors as in~\\cite{ChiralPolytopesOfANSN}.\n\nAlso Conder and Zhang in~\\cite{AbelianCoverOfChiralPolytopes} introduced a new covering method\nthat allows the construction of some infinite families of chiral polytopes,\nwith each member of a family having the same rank as the original, but with the size of the members of the family\ngrowing linearly with one (or more) of the parameters making up its `type' (Schl\\\"afli symbol).\nThey have used this method to construct several new infinite families of chiral polytopes of ranks $3, 4, 5$ and $6$.\nFurthermore, Zhang constructed in her PhD thesis~\\cite{Zhang'sPhdThesis} a number of chiral polytopes\nof types $\\{4, 4, 4\\}$, $\\{4, 4, 4, 4\\}$ and $\\{4, 4, 4, 4, 4\\}$, with automorphism groups of orders $2^{10}, 2^{11}, 2^{12}$,\nand $2^{15},
2^{16}, \\cdots, 2^{22}$, and $2^{18}, 2^{19}$, respectively.\n\n\\smallskip\nNow let $\\mathcal {P}$ be a regular or chiral $4$-polytope. We say that $\\mathcal {P}$ is {\\em locally toroidal} if its facets and its vertex-figures are\nmaps on the $2$-sphere or on the torus, and either its facets or its vertex-figures (or both) are toroidal.\nUp to duality, rank 4 polytopes that are locally toroidal are of type $\\{4,4,3\\}$, $\\{4,4,4\\}$, $\\{6,3,3\\}$,\n$\\{6,3,4\\}$, $\\{6,3,5\\}$, $\\{6,3,6\\}$ or $\\{3,6,3\\}$.\n\nSchulte and Weiss~\\cite{ChiralPolytopesWithPSL} developed a construction that starts with a $3$-dimensional\nregular hyperbolic honeycomb and a faithful representation of its symmetry group as a group of complex M\\\"obius transformations\n(generated by the inversions in four circles that cut one another at the same angles as the corresponding\nreflection planes in hyperbolic space), and then derived chiral $4$-polytopes by applying\nmodular reduction techniques to the corresponding matrix group (see Monson and Schulte~\\cite{BE2009}).\nThey then used the simple group $\\hbox{\\rm PSL}(2, p)$ with $p$ an odd prime to construct infinite families of such polytopes.\n\nSome years later, Breda, Jones and Schulte~\\cite{ChiralPolytopesofPSLextension} developed a method of `mixing' a\nchiral $d$-polytope with a regular $d$-polytope to produce a larger example of a chiral polytope of the same rank $d$.\nThey used this to construct such polytopes with automorphism group $\\hbox{\\rm PSL}(2, p) \\times \\Ga^{+}(\\Omega)$, where $\\Ga^{+}(\\Omega)$\nis the rotation group of a finite regular locally-toroidal $4$-polytope.\nFor example, $\\Ga^{+}(\\Omega)$ could be $A_6 \\times C_2$ or $A_5$, when the corresponding\nchiral polytope has type $\\{4, 4, 3\\}$ or $\\{6, 3, 3\\}$, respectively.\n\nOne can see that almost all of the examples mentioned above involve non-abelian simple groups.\nOn the other hand, there appear to be few known examples of chiral polytopes with solvable automorphism groups, \napart from some of small order, and families of rank 3 polytopes arising from chiral maps on the torus (of type $\\{3,6\\}$, $\\{4,4\\}$ or $\\{6,3\\}$). 
\nThis was the main motivation for the research leading to this paper.\nIt was also motivated in part by a problem posed by Schulte and Weiss~\\cite{Problem}, namely the following:\n\\begin{prob}\nCharacterize the groups of orders $2^n$, with $n$ a positive integer, which are automorphism groups of regular or chiral polytopes.\n\\end{prob}\n\nHere we construct two infinite families of locally toroidal chiral 4-polytopes of type $\\{4,4,4\\}$, with solvable automorphism groups.\nEach family contains one example with $1024m^2$ or $2048m^2$ automorphisms, respectively, for every integer $m \\ge 1$.\nIn particular, if we let $m$ be an arbitrary power of $2$, say $2^k$ (with $k \\ge 0$), then the automorphism group\nhas order $2^{10+2k}$ or $2^{11+2k}$, which can be expressed as $2^n$ for an arbitrary integer $n \\ge 10$.\n\nThis extends some earlier work by the second and third authors (in~\\cite{HFL} and~\\cite{HFL1}), which showed that\nall conceivable ranks and types can be achieved for {\\em regular\\\/} polytopes with automorphism group of $2$-power order.\nIt also extends both the work by Zhang~\\cite{Zhang'sPhdThesis} mentioned above, and a construction by Cunningham~\\cite{TightChiralPolyhedra}\nof infinite families of tight chiral $3$-polytopes of type $\\{k_1, k_2\\}$ with automorphism group of order $2k_1k_2$\n(considered here for the special cases where $k_1$ and $k_2$ are powers of $2$).\n\n\n\\section{Additional background}\n\\label{background}\n\nIn this section we give some further background that may be helpful for the rest of the paper.\n\n\\subsection{Abstract polytopes: definition, structure and properties}\n\nAn abstract polytope of rank $n$ is a partially ordered set $\\mathcal {P}$ endowed with a strictly monotone rank function\nwith range $\\{-1, 0, \\cdots, n\\}$, which satisfies four conditions, to be given shortly.\n\nThe elements of $\\mathcal {P}$ are called \\emph{faces} of $\\mathcal {P}$. More specifically, the elements of $\\mathcal {P}$ of rank $j$ are called $j$-faces,\nand a typical $j$-face is denoted by $F_j$.\nTwo faces $F$ and $G$ of $\\mathcal {P}$ are said to be \\emph{incident} with each other if $F \\leq G$ or $F \\geq G$ in $\\mathcal {P}$.\nA \\emph{chain} of $\\mathcal {P}$ is a totally ordered subset of $\\mathcal {P}$, and is said to have \\emph{length} $i$ if it contains exactly $i+1$ faces.\nThe maximal chains in $\\mathcal {P}$ are called the \\emph{flags} of $\\mathcal {P}$.\nTwo flags are said to be $j$-\\emph{adjacent} if they differ in just one face of rank $j$,\nor simply \\emph{adjacent} (to each other) if they are $j$-adjacent for some~$j$.\nAlso if $F$ and $G$ are faces of $\\mathcal {P}$ with $F \\leq G$, then the set $\\{\\,H \\in \\mathcal {P} \\ |\\ F \\leq H \\leq G\\,\\}$ is\ncalled a \\emph{section} of $\\mathcal {P}$, and is denoted by $G\/F$. 
Such a section has rank $m-k-1$,\nwhere $m$ and $k$ are the ranks of $G$ and $F$ respectively.\nA section of rank $d$ is called a $d$-section.\n\n\\smallskip\nWe can now give the four conditions that are required of $\\mathcal {P}$ to make it an abstract polytope.\nThese are listed as (P1) to (P4) below:\n\\begin{itemize}\n\\item [(P1)] $\\mathcal {P}$ contains a least face and a greatest face, denoted by $F_{-1}$ and $F_n$, respectively.\n\\item [(P2)] Each flag of $\\mathcal {P}$ has length $n+1$ (so has exactly $n+2$ faces, including $F_{-1}$ and $F_n$).\n\\item [(P3)] $\\mathcal {P}$ is \\emph{strongly flag-connected}, which means that any two flags $\\Phi$ and $\\Psi$ of $\\mathcal {P}$ can\nbe joined by a sequence of successively adjacent flags $\\Phi=\\Phi_{0}, \\Phi_1, \\cdots, \\Phi_k=\\Psi$, each of which contains $\\Phi \\cap \\Psi$.\n\\item [(P4)] The rank $1$ sections of $\\mathcal {P}$ have a certain homogeneity property known as the \\emph{diamond condition},\nnamely as follows: if $F$ and $G$ are incident faces of $\\mathcal {P}$, of ranks $i-1$ and $i+1$, respectively, where $0 \\le i \\le n-1$,\nthen there exist precisely \\emph{two} $i$-faces $H$ in $\\mathcal {P}$ such that $F< H< G$.\n\\end{itemize}\n\n\\noindent\nAn easy case of the diamond condition occurs for polytopes of rank 3 (or polyhedra): if $v$ is a vertex of some face $f$,\nthen there are two edges that are incident with both $v$ and $f$.\n\n\\smallskip\nNext, every $2$-section $G\/F$ of $\\mathcal {P}$ is isomorphic to the face lattice of a polygon.\nNow if it happens that the number of sides of every such polygon depends only on the rank of $G$,\nand not on $F$ or $G$ itself, then we say that the polytope $\\mathcal {P}$ is {\\em equivelar}.\nIn this case, if $k_i$ is the number of edges of every $2$-section between an $(i-2)$-face and an $(i+1)$-face of $\\mathcal {P}$,\nfor $1 \\leq i \\leq n$, then the expression $\\{k_1, k_2, \\cdots, k_{n-1}\\}$ is called the Schl\\\"afli type of $\\mathcal {P}$.\n(For example, if $\\mathcal {P}$ has rank 3, then $k_1$ and $k_2$ are the valency of each vertex and the size of each 2-face,\nrespectively.)\n\n\\subsection{Automorphisms of polytopes}\n\nAn \\emph{automorphism} of an abstract polytope $\\mathcal {P}$ is an order-preserving permutation of its elements.\nIn particular, every automorphism preserves the set of faces of any given rank.\nUnder permutation composition, the set of all automorphisms of $\\mathcal {P}$ forms a group, called the automorphism\ngroup of $\\mathcal {P}$, and denoted by $\\hbox{\\rm Aut}(\\mathcal {P})$ or sometimes more simply as $\\Ga(\\mathcal {P})$.\nAlso it is not difficult to use the diamond condition and strong flag-connectedness to prove that if an automorphism\npreserves a flag of $\\mathcal {P}$, then it fixes every flag of $\\mathcal {P}$ and hence every element of $\\mathcal {P}$.\nIt follows that $\\Ga(\\mathcal {P})$ acts semi-regularly (or fixed-point-freely) on $\\mathcal {P}$.\n\n\\smallskip\nA polytope $\\mathcal {P}$ is said to be \\emph{regular} if its automorphism group $\\Ga(\\mathcal {P})$ acts transitively\n(and hence regularly) on the set of flags of $\\mathcal {P}$.\nIn this case, the number of automorphisms of $\\mathcal {P}$ is as large as possible, and equal to the number of flags of $\\mathcal {P}$.\nIn particular, $\\mathcal {P}$ is equivelar, and the stabiliser in $\\Ga(\\mathcal {P})$ of every 2-section of $\\mathcal {P}$ induces the full dihedral group\non the corresponding
polygon.\nMoreover, for a given flag $\\Phi$ and for every $i \\in \\{0,1,\\dots,n-1\\}$, the polytope $\\mathcal {P}$ has a unique\nautomorphism $\\rho_i$ that takes $\\Phi$ to the unique flag $(\\Phi)^{i}$ that differs from $\\Phi$ in precisely its $i$-face,\nand then the automorphisms $\\rho_0,\\rho_1,\\dots,\\rho_{n-1}$ generate $\\Ga(\\mathcal {P})$ and satisfy the defining\nrelations for the string Coxeter group $[k_1, k_2, \\cdots, k_{n-1}]$, where the $k_i$ are as given in the\nprevious subsection for the Schl\\\"afli type of $\\mathcal {P}$.\nThey also satisfy a certain `intersection condition', which follows from the diamond and strong flag-connectedness\nconditions.\nThese and many more properties of regular polytopes may be found in~\\cite{ARP}.\n\n\\smallskip\nWe now turn to chiral polytopes, for which a good reference is~\\cite{ChiralPolytopes}.\n\n\\smallskip\nA polytope $\\mathcal {P}$ is said to be {\\em chiral\\/} if its automorphism group $\\Ga(\\mathcal {P})$ has two orbits on flags,\nwith every two adjacent flags lying in different orbits.\n(Another way of viewing this definition is to consider $\\mathcal {P}$ as admitting no `reflecting' automorphism\nthat interchanges a flag with an adjacent flag.)\nHere the number of flags of $\\mathcal {P}$ is $2|\\Ga(\\mathcal {P})|$, and $\\Ga(\\mathcal {P})$ acts regularly on each of the two orbits.\nAgain $\\mathcal {P}$ is equivelar, with the stabiliser in $\\Ga(\\mathcal {P})$ of every 2-section of $\\mathcal {P}$ inducing the full cyclic group on the corresponding polygon.\n\nNext, for a given flag $\\Phi$ and for every $j \\in \\{1,2,\\dots,n-1\\}$, the chiral polytope $\\mathcal {P}$ admits an\nautomorphism $\\s_j$ that takes $\\Phi$ to the flag $(\\Phi)^{j, j-1}$ which differs from $\\Phi$ in precisely its $(j-1)$- and $j$-faces.\nThese automorphisms $\\s_1,\\s_2,\\dots,\\s_{n-1}$ generate $\\Ga(\\mathcal {P})$, and if $\\mathcal {P}$ has Schl\\\"afli type $\\{k_1, k_2, \\dots, k_{n-1}\\}$,\nthen they satisfy the defining relations for the orientation-preserving subgroup of (index $2$ in) the\nstring Coxeter group $[k_1, k_2, \\cdots, k_{n-1}]$.\nAlso they satisfy a `chiral' form of the intersection condition, which is a variant of the one mentioned earlier for regular polytopes.\n\nChiral polytopes occur in pairs (or {\\em enantiomorphic\\/} forms), such that each member of the pair is the `mirror image' of the other.\nSuppose one of them is $\\mathcal {P}$, and has Schl\\\"afli type $\\{k_1, k_2, \\cdots, k_{n-1}\\}$. Then $\\Ga(\\mathcal {P})$ is isomorphic\nto the quotient of the orientation-preserving subgroup $\\Lambda^{\\rm o}$ of the Coxeter group $\\Lambda = [k_1, k_2, \\cdots, k_{n-1}]$\nvia some normal subgroup $K$. By chirality, $K$ is not normal in the full Coxeter group $\\Lambda$, but is conjugated by\nany orientation-reversing element $c \\in \\Lambda$ to another normal subgroup $K^c$ which is the kernel of an epimorphism from $\\Lambda^{\\rm o}$\nto the automorphism group $\\Ga(\\mathcal {P}^c)$ of the mirror image $\\mathcal {P}^c$ of $\\mathcal {P}$.\n\nThe automorphism groups of $\\mathcal {P}$ and $\\mathcal {P}^c$ are isomorphic to each other, but their canonical generating sets\nsatisfy different defining relations.
In fact, replacing the elements $\\s_1$ and $\\s_2$ in the canonical generating\ntuple $(\\s_1,\\s_2,\\s_3,\\dots,\\s_{n-1})$ by $\\s_1^{-1}$ and $\\s_1^{\\,2}\\s_2$ gives a set of generators for $\\Ga(\\mathcal {P})$\nthat satisfy the same defining relations as a canonical generating tuple for $\\Ga(\\mathcal {P}^c)$,\nbut chirality ensures that there is no automorphism of $\\Ga(\\mathcal {P})$ that takes $(\\s_1,\\s_2)$ to $(\\s_1^{-1},\\s_1^{\\,2}\\s_2)$\nand fixes all the other $\\s_j$.\n\nConversely, any finite group $G$ that is generated by $n-1$ elements $\\s_1,\\s_2,\\dots,\\s_{n-1}$ which satisfy both\nthe defining relations for $\\Lambda^{\\rm o}$ and the chiral form of the intersection condition is the `rotation subgroup' of an\nabstract $n$-polytope $\\mathcal {P}$ that is either regular or chiral.\nIndeed, $\\mathcal {P}$ is regular if and only if $G$ admits a group automorphism $\\rho$ of order $2$\nthat takes $(\\s_1,\\s_2,\\s_3,\\dots,\\s_{n-1})$ to $(\\s_1^{-1},\\s_1^{\\,2}\\s_2,\\s_3,\\dots,\\s_{n-1})$.\n\n\\smallskip\nWe now focus our attention on the rank 4 case.\nHere the generators $\\s_1, \\s_2, \\s_3$ for $\\Ga(\\mathcal {P})$ satisfy the canonical relations\n$\\,\\s_1^{k_1} = \\s_2^{k_2} = \\s_3^{k_3} = (\\s_1\\s_2)^2 = (\\s_2\\s_3)^2 = (\\s_1\\s_2\\s_3)^2 = 1$,\nand the chiral form of the intersection condition can be abbreviated to\n$$\\lg \\s_1\\rg \\cap \\lg \\s_2, \\s_3\\rg = \\{1\\} = \\lg \\s_1, \\s_2\\rg \\cap \\lg \\s_3\\rg \\ \\hbox{ and } \\ \\lg \\s_1, \\s_2\\rg \\cap \\lg \\s_2, \\s_3\\rg =\\lg \\s_2\\rg.$$\n\n\nThe following proposition is useful for the groups we will deal with in the proof of our main theorem.\nIt is called the {\\em quotient criterion} for chiral $4$-polytopes.\n\n\\begin{prop}{\\rm \\cite[Lemma 3.2]{ChiralPolytopesofPSLextension}}\\label{quotient criterion}\n$\\,$\nLet $G$ be a group generated by elements $\\s_1, \\s_2, \\s_3$ such that $(\\s_1\\s_2)^2=(\\s_2\\s_3)^2=(\\s_1\\s_2\\s_3)^2=1$,\nand let $\\theta\\!: G \\to H$ be a group homomorphism taking $\\,\\s_j \\mapsto \\ld_j\\,$ for $1\\le j \\le 3$,\nsuch that the restriction of $\\theta$ to either $\\lg \\s_1, \\s_2\\rg$ or $\\lg \\s_2, \\s_3\\rg$ is injective.\nIf $(\\ld_{1}, \\ld_{2}, \\ld_{3})$ is a canonical generating triple for $H$ as the automorphism group of some chiral $4$-polytope,\nthen the triple $(\\s_1, \\s_2, \\s_3)$ satisfies the chiral form of the intersection condition for $G$.\n\\end{prop}\n\n\n\n\\subsection{Group theory}\n\n\nWe use standard notation for group theory, as in~\\cite{GroupBook} for example.\nIn this subsection we briefly describe some of the specific aspects of group theory that we need.\n\n\\smallskip\nLet $G$ be any group.
We define the {\\em commutator\\\/} $[x, y]$ of elements $x$ and $y$ of $G$\nby $[x, y]=x^{-1}y^{-1}xy$, and then define the {\\em derived subgroup} (or commutator subgroup) of $G$\nas the subgroup $G'$ of $G$ generated by all such commutators.\nThen for any non-negative integer $n$, we define the $n\\,$th derived group of $G$ by setting\n$$G^{(0)}=G, \\ G^{(1)}=G', \\ \\hbox{and } \\ G^{(n)}=(G^{(n-1)})' \\ \\hbox{when} \\ n \\geq 1.$$\n\nA group $G$ is called {\\em solvable\\\/} if $G^{(n)}=1$ for some $n$.\n(This terminology comes from Galois theory, because a polynomial over a field $\\mathbb{F}$ is solvable by radicals\nif and only if its Galois group over $\\mathbb{F}$ is a solvable group.)\nEvery abelian group and every finite $p$-group is solvable, but no non-abelian simple group is solvable.\nIn fact, the smallest non-abelian simple group $A_5$ is also the smallest non-solvable group.\n\nWe also need the following, which are elementary and so we give them without proof.\n\n\\begin{prop}\\label{solvable}\nIf $N$ is a normal subgroup of a group $G$, such that both $N$ and $G\/N$ are solvable, then so is $G$. \n\\end{prop}\n\n\\begin{prop}\\label{freeabelian}\nLet $G$ be the free abelian group $\\mathbb{Z}\\oplus \\mathbb{Z}$ of rank $2$, generated by two elements $x$ and $y$\nsubject to the single defining relation $[x,y] = 1$.\nThen for every positive integer $m$, the subgroup $G_m=\\lg x^m, y^m\\rg$ is characteristic in $G$, with index $|G:G_m|=m^2$.\n\\end{prop}\n\nFinally, we will use some Reidemeister-Schreier theory, which produces a defining presentation\nfor a subgroup $H$ of finite index in a finitely-presented group $G$.\nAn easily readable reference for this is~\\cite[Chapter IV]{Johnson}, but in practice we use its implementation\nas the {\\tt Rewrite} command in the {\\sc Magma} computation system~\\cite{BCP97}.\nWe also found the groups that we use in the next section with the help of {\\sc Magma}, by constructing\nand analysing some small examples.\n\n\t\n\\section{Main results}\\label{Main results}\n\n\\begin{theorem}\n\\label{maintheorem}\nFor every positive integer $m \\geq 1$, there exist chiral $4$-polytopes $\\mathcal {P}_m$ and $\\mathcal {Q}_m$ of type $\\{4,4,4\\}$\nwith solvable automorphism groups of order $1024m^2$ and $2048m^2$, respectively.\n\\end{theorem}\n\n\\f {\\bf Proof.}\\hskip10pt\nWe begin by defining ${\\cal U}$ as the finitely-presented group \\\\[-10pt]\n$$\\lg\\, a, b, c \\ | \\ a^4 = b^4 = c^4 = (ab)^2 = (bc)^2 = (abc)^2 = (a^2b^2)^4 = a^2c^2b^2(ac)^2 = [a,c^{-1}]b^2 = 1 \\,\\rg.$$\nThis group ${\\cal U}$ has two normal subgroups of index $1024$ and $2048$, namely the subgroups\ngenerated by $\\{(ac^{-1})^4, (c^{-1}a)^4\\}$ and $\\{(bc^{-1})^4, (c^{-1}b)^4\\}$, respectively.\nThe quotients of ${\\cal U}$ by each of these give the initial members of our two infinite families.\n\n\\medskip\n\\noindent Case (1): Take $N$ as the subgroup of ${\\cal U}$ generated by $x = (ac^{-1})^4$ and $y = (c^{-1}a)^4$.\n\n\\smallskip\nA short computation with {\\sc Magma} shows that $N$ is normal in ${\\cal U}$, with index $1024$. \nIn fact, the defining relations for ${\\cal U}$ can be used to show that \\\\[-15pt]\n\\begin{center}\n\\begin{tabular}{lll}\n$a^{-1}xa = y$, \\qquad & $b^{-1}xb = y$ \\quad & and \\quad $c^{-1}xc = y$, \\\\[+2pt]\n$a^{-1}ya = x$, \\qquad & $b^{-1}yb = x^{-1}$ \\quad & and \\quad $c^{-1}yc = x^{-1}$. 
\\\\[-3pt]\n\\end{tabular}\n\\end{center}\n\nThe first, third and fifth of these are easy to prove by hand, while the second and fourth can \nbe verified in a number of ways, and the sixth follows from the other five. \nOne way to prove the second and fourth is by hand, which we leave as a challenging exercise for the interested reader.\nAnother is by a partial enumeration of cosets of the identity subgroup in ${\\cal U}$. \nFor example, if this is done using the {\\tt ToddCoxeter} command in {\\sc Magma}, allowing the definition of just 8000 cosets, \nthen multiplication by each of the words $b^{-1}xby^{-1}$ and $a^{-1}yax^{-1}$ is found to fix the trivial coset, \nand therefore $b^{-1}xby^{-1} = 1 = a^{-1}yax^{-1}.$ \n\nIt follows that conjugation by $a$, $b$ and $c$ induce the three permutations $\\,(x,y)(x^{-1},y^{-1})$,\n$\\,(x,y,x^{-1},y^{-1})\\,$ and $\\,(x,y,x^{-1},y^{-1})$ on the set $\\{x,y,x^{-1},y^{-1}\\}$,\nand then $(ac^{-1})^2$ and $(c^{-1}a)^2$ centralise both $x$ and $y$, so $x$ and $y$ centralise each other.\n\nAlso the {\\tt Rewrite} command in {\\sc Magma} gives a defining presentation for $N$, with $[x,y] = 1$ as a single \ndefining relation. Hence the normal subgroup $N$ is free abelian of rank 2. \n\n\n\\smallskip\nThe quotient ${\\cal U}\/N$ is isomorphic to the automorphism group of the chiral 4-polytope\nof type $\\{4,4,4\\}$ with 1024 automorphisms listed at~\\cite{atles1}.\n\n\\smallskip\nNow for any positive integer $m$, let $N_m$ be the subgroup generated by $x^m = (ac^{-1})^{4m}$\nand $y^m = (c^{-1}a)^{4m}$. By Proposition~\\ref{freeabelian}, we know that $N_m$ is characteristic in $N$\nand hence normal in ${\\cal U}$, with index $|\\,{\\cal U}:N_m| = |\\,{\\cal U}:N||N:N_m| = 1024m^2$.\nMoreover, in the quotient $G_m = {\\cal U}\/N_m$, the subgroup $N\/N_m$ is abelian and normal,\nwith quotient $({\\cal U}\/N_m)\/(N\/N_m) \\cong {\\cal U}\/N$ being a $2$-group, and so $G_m$ is solvable,\nby Proposition~\\ref{solvable}.\n\nNext, we use Proposition~\\ref{quotient criterion} to prove that the triple $(\\bar a, \\bar b, \\bar c)$ of\nimages of $a,b,c$ in $G_m$ satisfies the chiral form of the intersection condition.\nTo do this, we observe that the group with presentation $\\lg \\, u,v \\ | \\ u^4 = v^4 = (uv)^2 =(u^{2}v^{2})^4=1 \\, \\rg$\nhas order $2^7 = 128$, as does its image in the group $G_1 = {\\cal U}\/N_1$ under the epimorphism\ntaking $(u,v) \\mapsto (aN_1,bN_1)$. These claims are easily verifiable using {\\sc Magma}.\nThen since $G_1 = {\\cal U}\/N_1$ is a quotient of $G_m = {\\cal U}\/N_m$, the subgroup generated\nby $\\bar a$ and $\\bar b$ in $G_m = {\\cal U}\/N_m$ must have order $128$ as well,\nand hence the restriction to $\\lg \\bar a, \\bar b \\rg$ of the natural homomorphism from $G_m$ to $G_1$ is injective,\nas required.\n\nAccordingly, $G_m$ is the rotation group of a chiral or regular 4-polytope $\\mathcal {P}_m$ of type $\\{4,4,4\\}$.\nAssume for the moment that $\\mathcal {P}_m$ is regular. 
Then there exists an automorphism $\\rho$ of $G_m$\ntaking $(\\bar a, \\bar b, \\bar c)$ to $(\\bar a^{-1}, \\bar a^{\\,2}\\bar b, \\bar c)$, and it follows from the\nrelation $1 = \\overline{a^2c^2b^2(ac)^2} = \\bar a^2 \\bar c^2 \\bar b^2 (\\bar a \\bar c)^2$ that also\n$1 = (\\bar a^2 \\bar c^2 \\bar b^2 (\\bar a \\bar c)^2)^\\rho = \\bar a^{-2} \\bar c^{2} (\\bar a^{2} \\bar b)^2 (\\bar a^{-1} \\bar c)^2$ in $G_m$.\nBut the image of this element in $G_1$ has order 2, and as this is also the image of $a^{-2}c^{2}(a^{2}b)^{2}(a^{-1}c)^2$ in $G_1$,\nit follows that $(\\bar a^2 \\bar c^2 \\bar b^2 (\\bar a \\bar c)^2)^\\rho = \\bar a^{-2} \\bar c^{2} (\\bar a^{2} \\bar b)^2 (\\bar a^{-1} \\bar c)^2$\nis non-trivial in $G_m$, a contradiction. \n\n\\smallskip\nThus $\\mathcal {P}_m$ is chiral, with automorphism group $G_m$ of order $1024m^2$.\n\n\\medskip\n\\noindent Case (2): Take $K$ as the subgroup of ${\\cal U}$ generated by $z = (bc^{-1})^4$ and $w = (c^{-1}b)^4$.\n\n\\smallskip\nAnother computation with {\\sc Magma} shows that $K$ is normal in ${\\cal U}$, with index $2048$,\nand moreover, the {\\tt Rewrite} command tells us that $K$ is free abelian of rank $2$. \nIn this case, the defining relations for ${\\cal U}$ give \\\\[-18pt]\n\\begin{center}\n\\begin{tabular}{lll}\n$a^{-1}za = z^{-1}$, \\qquad & $b^{-1}zb = w$ \\quad & and \\quad $c^{-1}zc = w$, \\\\[+2pt]\n$a^{-1}wa = w$, \\qquad & $b^{-1}wb = z^{-1}$ \\quad & and \\quad $c^{-1}wc = z^{-1}$. \\\\[-6pt]\n\\end{tabular}\n\\end{center}\nThe quotient ${\\cal U}\/K$ is the automorphism group of the chiral 4-polytope of type $\\{4,4,4\\}$\nwith 2048 automorphisms found by Zhang in~\\cite{Zhang'sPhdThesis}.\n\n\\smallskip\nNow for any positive integer $m$, let $K_m$ be the subgroup generated by $z^m = (bc^{-1})^{4m}$\nand $w^m = (c^{-1}b)^{4m}$. Using Proposition~\\ref{freeabelian}, we find that $K_m$ is characteristic in $K$\nand hence normal in ${\\cal U}$, with index $|\\,{\\cal U}:K_m| = |\\,{\\cal U}:K||K:K_m| = 2048m^2$.\nAlso the quotient $H_m = {\\cal U}\/K_m$ is solvable, again by Proposition~\\ref{solvable}.\n\nNext, the image of the subgroup generated by $a$ and $b$ in $H_1 = {\\cal U}\/K$ has order $128$,\nso just as in Case (1) above, we can apply Proposition~\\ref{quotient criterion} and find that the triple $(\\bar a, \\bar b, \\bar c)$\nof images of $a,b,c$ in $H_m = {\\cal U}\/K_m$ satisfies the chiral form of the intersection condition.\n\nThus $H_m$ is the rotation group of a chiral or regular 4-polytope $\\mathcal {Q}_m$ of type $\\{4,4,4\\}$.\n\n\\smallskip\nMoreover, the same argument as used in Case (1) shows that $\\mathcal {Q}_m$ is chiral, because\nthe image in $H_1$ of the element $a^{-2}c^{2}(a^{2}b)^{2}(a^{-1}c)^2$ has order $2$,\nand hence the image of that element in $H_m$ is non-trivial.\n\n\\smallskip\nThus $\\mathcal {Q}_m$ is chiral, with automorphism group $H_m$ of order $2048m^2$.\n \\hfill\\mbox{\\raisebox{0.7ex}{\\fbox{}}} \\vspace{4truemm}\n\n\nAs a special case we have the following Corollary, which is an immediate consequence of Theorem~\\ref{maintheorem}\nwhen $m$ is taken as a power of $2$.\n\n\\begin{cor}\nThere exists a chiral $4$-polytope of type $\\{4,4,4\\}$ with automorphism group of order $2^n,$\nfor every integer $n \\ge 10$.\n\\end{cor}\n\nOn the other hand, an inspection of the lists at \\cite{atles1} shows that there exists no \nsuch chiral polytope with automorphism group of order $2^n$, where $n \\le 9$. 
\n\n\\newpage\\noindent\n{\\bf\\Large Acknowledgements}\n\n\\medskip\\smallskip\n\\noindent\nThe first author acknowledges the hospitality of Beijing Jiaotong University,\nand partial support from the N.Z. Marsden Fund (project UOA1626). \nThe second and the third authors acknowledge the partial support from \nthe National Natural Science Foundation of China (11731002) and the 111 Project of China (B16002). The three authors also acknowledge the extensive use of the {\\sc Magma} system~\\cite{BCP97} in conducting experiments that showed the way towards the main theorem.\n\\\\[-12pt]\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConsider the problem of distribution-free predictive inference: given a dataset $D_n \\equiv \\{(X_i, Y_i)\\}_{i=1}^n$ drawn i.i.d. from $P_{XY} = P_X \\times P_{Y|X}$ on $\\smash{\\mathcal{X} \\times \\mathcal{Y}}$, and $\\smash{X_{n+1} \\sim P_X}$, we must construct a prediction set $C(D_n,\\alpha,X_{n+1}) \\equiv C(X_{n+1})$ for $Y_{n+1}$ such that\n\\begin{equation}\\label{eq:marginal-validity}\n\\text{ for any distribution } P_{XY}, ~ \\mathbb{P}(Y_{n+1} \\in C(X_{n+1})) \\geq 1-\\alpha, \n\\end{equation}\nwhere the probability is taken over all $n+1$ points and $\\alpha \\in (0,1)$ is a predefined confidence level. In other words, how do we construct $C(X)$ with nonasymptotic coverage that holds without assumptions on $P_{XY}$? \nMethods with property~\\eqref{eq:marginal-validity} are called \\emph{marginally} valid to differentiate them from \\emph{conditional} validity:\n\\[\n\\forall P_{XY},~ \\mathbb{P}(Y_{n+1} \\in C(X_{n+1}) \\mid X_{n+1}=x) \\geq 1-\\alpha, \\text{ for $P_X$-almost all $x$.} \n\\]\nConditional validity is known to be impossible without assumptions on $P_{XY}$; see~\\citet[Section 2.6.1]{balasubramanian2014conformal} and~\\citet{barber2019limits}. Hence, we will only be concerned with developing methods that are (provably) marginally valid, but have reasonable conditional coverage in practice (i.e., empirically).\n\n\n\nConformal prediction is a general and universal framework for constructing marginally valid prediction sets\\footnote{The spaces $\\mathcal{X}$ and $\\mathcal{Y}$ are without restriction. For example, take $\\mathcal{X} \\equiv \\mathbb{R}^d$ and let $\\mathcal{Y}$ be a subset of $\\mathbb{R}$, or a discrete space such as in multiclass classification. Though it is not formally required, it may be helpful to think of $\\mathcal{Y}$ as a totally ordered set or a metric space.\n}. It acts like a wrapper around any prediction algorithm; in other words, any black-box prediction algorithm can be ``conformalized'' to produce valid prediction sets instead of point predictions. \nWe refer the reader to the works of~\\citet{vovk2005algorithmic} and~\\citet{balasubramanian2014conformal} for details.\nIn this paper, we provide an alternate and equivalent viewpoint for accomplishing the same goals, that we call \\emph{nested conformal prediction}.\n\nConformal prediction starts from a nonconformity score. As we will see with explicit examples later, these nonconformity scores are often rooted in some underlying geometric intuition about how a good prediction set may be discovered from the data. Nested conformal acknowledges this geometric intuition and makes it explicit: instead of starting from a score, it starts from a sequence of all possible prediction sets $\\{\\mathcal{F}_t(x)\\}_{t\\in\\mathcal{T}}$ for some ordered set $\\mathcal{T}$. 
Just as we suppressed the dependence of set $C(\\cdot)$ on the labeled data $D_n$ in property \\eqref{eq:marginal-validity}, here too $\\mathcal{F}_t(\\cdot)$ will actually depend on $D_n$ but we suppress this for notational simplicity. \n\nThese prediction sets are `nested', that is, for every $t_1 \\leq t_2 \\in \\mathcal{T}$, we require that $\\mathcal{F}_{t_1}(x) \\subseteq \\mathcal{F}_{t_2}(x)$; also $\\mathcal{F}_{\\inf \\mathcal{T}}=\\emptyset$ and $\\mathcal{F}_{\\sup \\mathcal{T}}=\\mathcal{Y}$. Thus large values of $t$ correspond to larger prediction regions. Given a tolerance $\\alpha \\in [0, 1]$, we wish to identify the smallest $t \\in \\mathcal{T}$ such that \n\\[\\mathbb{P}(Y \\in \\mathcal{F}_{t}(X)) \\geq 1 - \\alpha. \n\\]\nIn a nutshell, nested conformal learns a data-dependent mapping $\\alpha \\to t(\\alpha)$ using the conformal principle. Note that the mapping must be decreasing; lower tolerance values $\\alpha$ naturally lead to larger prediction regions. \n\n\n\nWe illustrate the above principle in a simple setting involving split\/inductive conformal prediction \\cite{papadopoulos2002inductive}, \\cite{lei2018distribution},~\\cite[Chapter 2.3]{balasubramanian2014conformal}. First, split $D_n$ into a training set $D_1 \\equiv \\{(X_i, Y_i)\\}_{1\\le i\\le m}$ and a calibration set $D_2 \\equiv \\{(X_i, Y_i)\\}_{m < i\\le n}$. Using $D_1$, construct an estimate $\\widehat{\\mu}(\\cdot)$ of the conditional mean of $Y$ given $X$. Then construct the nonconformity score as the residuals of $\\widehat{\\mu}$ on $D_2$: \n$R_i := |Y_i - \\widehat{\\mu}(X_i)|, \\text{for } i \\in D_2.$\nFinally, define\n\\[\nC(X_{n+1}) = \\Big\\{y\\in\\mathbb{R}:\\,|y - \\widehat{\\mu}(X_{n+1})| < Q_{1-\\alpha}(\\{R_i\\}_{i\\in D_2})\\Big\\},\n\\] \nwhere $Q_{1-\\alpha}(A)$ for a finite set $A$ represents the $(1-\\alpha)$-th quantile of elements in $A$ (defined later). The above set is easily shown to be marginally valid due to exchangeability of order statistics.\n\nWe now give an alternate derivation of the above set using nested conformal:\n\\begin{enumerate}\n\t\\item After learning $\\widehat \\mu$ using $D_1$ (as done before), construct a sequence of prediction regions corresponding to symmetric intervals around $\\widehat{\\mu}(x)$: \n \\[\n \\{\\mathcal{F}_t(x)\\}_{t\\ge0} := \\{[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]:t\\ge0\\}.\n \\] \n Note that $\\mathcal{F}_t(x)$ is a random set since it is based on $\\widehat{\\mu}(x)$, and in our earlier notation, we take $\\mathcal{T}=[0,\\infty]$.\n It is clear that regardless of $\\widehat{\\mu}$, for any distribution of the random vector $(X, Y)$, and for any $\\alpha \\in [0,1]$, there exists a (minimal) $t = t(\\alpha)$ such that\n\n$\\mathbb{P}(Y\\in\\mathcal{F}_t(X)) \\ge 1-\\alpha.$\n\n\tHence we can rewrite our nested family $\\{\\mathcal{F}_t(x)\\}$ as\n\n\t$$\\Big\\{[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]:t\\ge0\\Big\\} = \\Big\\{[\\widehat{\\mu}(x) - t(\\alpha), \\widehat{\\mu}(x) + t(\\alpha)]:\\alpha\\in[0,1]\\Big\\}.$$\n\n\t\\item The only issue now is that we do not know the map $\\alpha\\mapsto t(\\alpha)$, that is, given $\\alpha$ we do not know which of these prediction intervals to use. Hence we use the calibration data to ``estimate'' the map $\\alpha\\to t(\\alpha)$. This is done by finding the smallest $t$ such that $\\mathcal{F}_t(x)$ contains $Y_i$ for at least $1-\\alpha$ fraction of the calibration points $(X_i, Y_i)$ (we provide formal details later). 
Because the sequence $\\{\\mathcal{F}_t(x)\\}$ is increasing in $t$, finding the smallest $t$ leads to the smallest prediction set within the nested family. \n\\end{enumerate}\n\nEmbedding these scores into our nested framework not only allows for easy comparison (like in Table~\\ref{tab:summary_literature}), but allows us to extend these methods beyond the split\/inductive conformal setting that they were originally derived in. Specifically, we seamlessly derive cross-conformal or jackknife+ variants of these and other methods, including our new method called QOOB (pronounced cube). \n\nA final reason that the assumption of nestedness is natural is the fact that the optimal prediction sets are nested: Suppose $Z_1, \\ldots, Z_{n}$ are exchangeable random variables with a common distribution that has density $p(\\cdot)$ with respect to some underlying measure. The ``oracle'' prediction set~\\citep{lei2013distribution} for a future observation $Z_{n+1}$ is given by the \\emph{level set} of the density with valid coverage, that is, $\\{z:p(z) \\ge t(\\alpha)\\}$ with $t(\\alpha)$ defined by largest $t$ such that $\\mathbb{P}(p(Z_{n+1}) \\ge t) \\ge 1-\\alpha$. Because $\\{z:p(z) \\ge t\\}$ is decreasing with $t$, $\\{z:p(z) \\ge t(\\alpha)\\}$ is decreasing with $\\alpha\\in[0,1]$, forming a nested sequence of sets.\n\n\n\\subsection{Organization and contributions}\nFor simplicity, our discussion will assume that $\\mathcal{Y} \\subseteq \\mathbb{R}$, but all ideas are easily extended to other prediction settings. The paper is organized as follows:\n\\begin{enumerate}\n\\item In Section~\\ref{sec:split-conformal}, we formalize the earlier discussion and present split\/inductive conformal~\\citep{papadopoulos2002inductive,lei2018distribution} in the language of nested conformal prediction, and translate various conformity scores developed in the literature for split conformal into nested prediction sets. \n\\item In Section~\\ref{sec:Jackknife-plus}, we rephrase the jackknife+~\\citep{barber2019predictive} (and cross-conformal prediction~\\citep{vovk2015cross}) in terms of the nested framework. \nThis allows the jackknife+ to use many recent score functions, such as those based on quantiles, which were originally developed and deployed in the split framework. \nParticularly, in subsection~\\ref{subsec:compuation-cross} we provide an efficient implementation of cross-conformal that matches the jackknife+ prediction time for a large class of nested sets that includes all standard nested sets. \n\n\\item In Section~\\ref{sec:OOB-conformal}, we extend the Out-of-Bag conformal~\\citep{johansson2014regression} and jackknife+ after bootstrap~\\citep{kim2020predictive} methods to our nested framework. These are based on ensemble methods such as random forests, and are relatively computationally efficient because only a single ensemble needs to be built. \n\\item In Section~\\ref{sec:QOOB}, we consolidate the ideas developed in this paper to construct a novel conformal method called QOOB (Quantile Out-of-Bag, pronounced cube), that is both computationally and statistically ``efficient.'' \nQOOB combines four ideas: quantile regression, cross-conformalization, ensemble methods and out-of-bag predictions. While being theoretically valid, Section~\\ref{sec:numerical-experiments} demonstrates QOOB's excellent empirical performance. 
\n\\end{enumerate}\n\n\nIn Appendix \\ref{appsec:k-fold} we derive K-fold variants of jackknife+\/cross-conformal in the nested framework and in Appendix \\ref{appsec:sampling-nested}, we develop the other aggregated conformal methods of subsampling and bootstrap in the nested framework. In Appendix \\ref{appsec:empty-case}, we discuss cross-conformal computation and the jackknife+ in the case when our nested sequence could contain empty sets. This is a subtle but important issue to address when extending these methods to quantile-based nested sets of \\citet{romano2019conformalized}, and thus relevant to QOOB as well. In Appendix~\\ref{appsec:optimal-prediction-set} we interpret optimal conditionally-valid prediction sets (which are the level sets of the conditional density function) in the nested framework. Finally, Appendix \\ref{appsec:proofs} contains all proofs.\n\n\\section{Split conformal based on nested prediction sets}\\label{sec:split-conformal}\nIn the introduction, we showed that in a simple regression setup with the nonconformity scores as held-out residuals, split conformal intervals can be naturally expressed in terms of nested sets. Below, we introduce the general nested framework and recover the usual split conformal method with general scores using this framework. We show how existing nonconformity scores in the literature exhibit natural re-interpretations in the nested framework. The following description of split conformal follows the descriptions by \\citet{papadopoulos2002inductive} and~\\citet{lei2018distribution} but rewrites them in terms of nested sets. \n\nSuppose $(X_i, Y_i)\\in\\mathcal{X}\\times\\mathcal{Y}, i\\in[n]$ denote the training dataset. Let $[n] = \\mathcal{I}_1\\cup\\mathcal{I}_2$ be a partition of $[n]$. For $\\mathcal{T}\\subseteq\\mathbb{R}$ and each $x\\in\\mathcal{X}$, let $\\{\\mathcal{F}_t(x)\\}_{t\\in\\mathcal{T}}$ (with $\\mathcal{F}_t(x)\\subseteq\\mathcal{Y}$) denote a nested sequence of sets constructed based on the first split of training data $\\{(X_i, Y_i):i\\in\\mathcal{I}_1\\}$, that is, $\\mathcal{F}_t(x) \\subseteq \\mathcal{F}_{t'}(x)$ for $t \\le t'$. The sets $\\mathcal{F}_t(x)$ in almost all examples are random through the training data, although they are not required to be random. Consider the score \n\\begin{equation}\\label{eq:Score-definition}\nr(x, y) ~:=~ \\inf\\{t\\in\\mathcal{T}:\\,y\\in\\mathcal{F}_t(x)\\},\n\\end{equation}\nwhere $r$ is a mnemonic for ``radius'' and $r(x,y)$ can be informally thought of as the smallest ``radius'' of the set that captures $y$ (and perhaps thinking of a multivariate response, that is $\\mathcal{Y}\\subseteq \\mathbb{R}^d$, and $\\{\\mathcal{F}_t(x)\\}$ as representing appropriate balls\/ellipsoids might help with that intuition).\nDefine the scores for the second split of the training data $\\{r_i = r(X_i, Y_i)\\}_{i\\in\\mathcal{I}_2}$ and set\n\\[\nQ_{1-\\alpha}(r, \\mathcal{I}_2) ~:=~ \\lceil (1-\\alpha)(|\\mathcal{I}_2| + 1)\\rceil\\mbox{-th smallest element of }\\{r_i\\}_{i\\in\\mathcal{I}_2}\n\\]\n(that is, $Q_{1-\\alpha}(r, \\mathcal{I}_2)$ is the $(1-\\alpha)(1 + 1\/|\\mathcal{I}_2|)$-th empirical quantile of the scores $\\{r_i\\}_{i\\in\\mathcal{I}_2}$). 
\nThe final prediction set is given by \n\\begin{equation}\nC(x) ~:=~ \\mathcal{F}_{Q_{1-\\alpha}(r,\\mathcal{I}_2)}(x) = \\{y \\in \\mathcal{Y}:\\,r(x, y) \\le Q_{1-\\alpha}(r, \\mathcal{I}_2)\\}.\n\\label{eq:Nested-prediction-set}\n\\end{equation}\nThe following well known sample coverage guarantee holds true \\citep{papadopoulos2002inductive,lei2018distribution}.\n\\begin{prop}\\label{thm:Conformal-main-result}\nIf $\\{(X_i, Y_i)\\}_{i\\in[n]\\cup\\{n+1\\}}$ are exchangeable, then the prediction set $C(\\cdot)$ in \\eqref{eq:Nested-prediction-set} satisfies \n\\[\n\\mathbb{P}\\left(Y_{n+1}\\in C(X_{n+1})\\mid \\{(X_i, Y_i):i\\in\\mathcal{I}_1\\}\\right) ~\\ge~ 1-\\alpha.\n\\]\nIf the scores $\\{r_i, i\\in\\mathcal{I}_2\\}$ are almost surely distinct, then $C(\\cdot)$ also satisfies\n\\begin{equation}\\label{eq:Upper-bound}\n\\mathbb{P}\\left(Y_{n+1}\\in C(X_{n+1})\\mid \\{(X_i, Y_i):i\\in\\mathcal{I}_1\\}\\right) ~\\le~ 1-\\alpha + \\frac{1}{|\\mathcal{I}_2|+1}.\n\\end{equation}\n\\end{prop}\nSee Appendix \\ref{appsec:split-conformal} for a proof. \nEquation~\\eqref{eq:Score-definition} is the key step that converts a sequence of nested sets $\\{\\mathcal{F}_t(x)\\}_{t\\in\\mathcal{T}}$ into a nonconformity score $r$. Through two examples, we demonstrate how natural sequences of nested sets in fact give rise to standard nonconformity scores considered in literature, via equation~\\eqref{eq:Score-definition}. \n\\begin{enumerate}\n \\item \\textbf{Split\/Inductive Conformal \\citep{papadopoulos2002inductive,lei2018distribution}.} Let $\\widehat{\\mu}_{\\mathcal{I}_1}(\\cdot)$ be an estimator of the regression function $\\mathbb{E}[Y|X]$ based on $(X_i, Y_i), i\\in\\mathcal{I}_1$, and consider nested sets corresponding to symmetric intervals around the mean estimate: \n \n\n\n\n \\[\n \\mathcal{F}_t(x) ~:=~ [\\widehat{\\mu}_{\\mathcal{I}_1}(x) - t, \\widehat{\\mu}_{\\mathcal{I}_1}(x) + t], \\ t \\in \\mathcal{T} = \\mathbb{R}^+.\n \\]\n Observe now that\n \\begin{align*}\n \\inf\\{t\\ge0:y\\in\\mathcal{F}_t(x)\\} &= \\inf\\{t\\ge0:\\widehat{\\mu}_{\\mathcal{I}_1}(x) - t \\le y\\le \\widehat{\\mu}_{\\mathcal{I}_1}(x) + t\\}\\\\\n &= \\inf\\{t\\ge0:\\, -t \\le y - \\widehat{\\mu}_{\\mathcal{I}_1}(x) \\le t\\} ~=~ |y - \\widehat{\\mu}_{\\mathcal{I}_1}(x)|,\n \\end{align*}\n which is exactly the nonconformity score of split conformal. \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\item \\textbf{Conformalized Quantiles \\citep{romano2019conformalized}.} \nFor any $\\beta \\in (0,1)$, let the function $q_\\beta(\\cdot)$ be the conditional quantile function. Specifically, for each $x$, define $q_{\\beta}(x):= \\sup\\{a \\in \\mathbb{R}: \\mathbb{P}(Y \\le a \\mid X = x) \\le \\beta$\\}.\n Let $\\widehat{q}_{\\alpha\/2}(\\cdot), \\widehat{q}_{1-\\alpha\/2}(\\cdot)$ be any conditional quantile estimators based on $(X_i, Y_i),i\\in\\mathcal{I}_1$.\n \n \n If the quantile estimates are good, we hope that $\\mathbb{P}(Y \\in [\\widehat{q}_{\\alpha\/2}(X), \\widehat{q}_{1-\\alpha\/2}(X)]) \\approx 1-\\alpha$, but this cannot be guaranteed in a distribution-free or assumption lean manner. However, it may be possible to achieve this with a symmetric expansion or shrinkage of the interval $[\\widehat{q}_{\\alpha\/2}(X), \\widehat{q}_{1-\\alpha\/2}(X)]$ (assuming $\\widehat{q}_{\\alpha\/2}(X) \\leq \\widehat{q}_{1 - \\alpha\/2}(X)$). 
\n \n Following the intuition, consider\n \\begin{equation}\\label{eq:CQR}\n \\mathcal{F}_t(x) ~:=~ [\\widehat{q}_{\\alpha\/2}(x) - t, \\widehat{q}_{1-\\alpha\/2}(x) + t], \\ t\\in\\mathbb{R}.\n \\end{equation}\n\n Note that the sets in~\\eqref{eq:CQR} are increasing in $t$ if $\\widehat{q}_{\\alpha\/2}(x) \\le \\widehat{q}_{1-\\alpha\/2}(x)$, and\n \\begin{align*}\n \\inf\\{t\\in\\mathbb{R}:\\,y\\in\\mathcal{F}_t(x)\\} &= \\inf\\{t\\in\\mathbb{R}:\\,\\widehat{q}_{\\alpha\/2}(x) - t \\le y \\le \\widehat{q}_{1-\\alpha\/2}(x) + t\\}\\\\\n \n &= \\max\\{\\widehat{q}_{\\alpha\/2}(x) - y, y - \\widehat{q}_{1-\\alpha\/2}(x)\\}.\n \\end{align*}\n Hence $r(X_i, Y_i) = \\max\\{\\widehat{q}_{\\alpha\/2}(X_i) - Y_i,\\,Y_i - \\widehat{q}_{1-\\alpha\/2}(X_i)\\}$ for $i\\in\\mathcal{I}_2$. This recovers exactly the nonconformity score proposed by \\citet{romano2019conformalized}. We believe that it is more intuitive to start with the shape of the predictive region, like we did here, than a nonconformity score. In this sense, nested conformal is a formalized technique to go from statistical\/geometric intuition about the shape of the prediction region to a nonconformity score. \n \n \n\\end{enumerate}\nSee Table~\\ref{tab:summary_literature} for more translations between scores and nested sets. \nAll the examples discussed in this section and listed in Table~\\ref{tab:summary_literature} (except the one from~\\cite{izbicki2019distribution}) result in interval prediction sets. However, optimal prediction sets need not be intervals and furthermore, intervals are not suitable when the response space is not convex (e.g., classification). Optimal predictions sets that are conditionally valid are obtained as level sets of the conditional densities~\\citep{izbicki2019distribution} and are discussed through the nested framework in Appendix~\\ref{appsec:optimal-prediction-set}. \n\n\\begin{table}[h!]\n\\caption{Examples from the literature covered by nested conformal framework. The methods listed are split conformal, locally weighted conformal, CQR, CQR-m, CQR-r, distributional conformal and conditional level-set conformal. 
The function $\\widehat{q}_{a}$ represents a conditional quantile estimate at level $a$, and $\\widehat{f}$ represents a conditional density estimate.}\n\\label{tab:summary_literature}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{llll}\\hline\nReference & $\\mathcal{F}_t(x)$ & $\\mathcal{T}$ & Estimates \\\\\\hline\\hline\n\\citet{lei2018distribution} & $[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]$ & $[0, \\infty)$ & $\\widehat{\\mu}$ \\\\\n\\citet{lei2018distribution} & $[\\widehat{\\mu}(x) - t\\widehat{\\sigma}(x),\\widehat{\\mu}(x) + t\\widehat{\\sigma}(x)]$ & $[0, \\infty)$ & $\\widehat{\\mu}, \\widehat{\\sigma}$ \\\\\n\\citet{romano2019conformalized} & $[\\widehat{q}_{\\alpha\/2}(x) - t, \\widehat{q}_{1-\\alpha\/2}(x) + t]$ & $(-\\infty, \\infty)$ & $\\widehat{q}_{\\alpha\/2}, \\widehat{q}_{1-\\alpha\/2}$ \\\\\n\\citet{kivaranovic2019adaptive} & $(1+t)[\\widehat{q}_{\\alpha\/2}(x), \\widehat{q}_{1-\\alpha\/2}(x)] - t\\widehat{q}_{1\/2}(x)$ & $(-\\infty, \\infty)$ & $\\widehat{q}_{\\alpha\/2}, \\widehat{q}_{1-\\alpha\/2}, \\widehat{q}_{1\/2}$ \\\\\n\\citet{sesia2019comparison} & $[\\widehat{q}_{\\alpha\/2}(x), \\widehat{q}_{1-\\alpha\/2}(x)] \\pm t(\\widehat{q}_{1-\\alpha\/2}(x) - \\widehat{q}_{\\alpha\/2}(x))$ & $(-1\/2,\\infty)$ & $\\widehat{q}_{\\alpha\/2}, \\widehat{q}_{1-\\alpha\/2}$ \\\\\n\\citet{chernozhukov2019distributional} & $[\\widehat{q}_{t}(x), \\widehat{q}_{1-t}(x)]$ & $(0,1\/2)$ & $\\{\\widehat{q}_{\\alpha}\\}_{\\alpha\\in[0,1]}$\\\\\n\\citet{izbicki2019distribution} & $\\{y:\\widehat{f}(y|x) \\geq \\widecheck{t}_{\\delta}(x)\\}$\\tablefootnote{$\\ \\widecheck{t}_{\\delta}(x)$ is an estimator of $t_{\\delta}(x)$, where $t_{\\delta}(x)$ is defined as the largest $t$ such that $\\mathbb{P}(f(Y|X) \\ge t_{\\delta}(X)\\big|X = x) \\ge 1 - \\delta$; see~\\citep[Definition 3.3]{izbicki2019distribution} for details.} & $[0,1]$ & $\\widehat{f}$\\\\\n\\hline\n\\end{tabular}%\n}\n\\end{table}\n\nSplit conformal prediction methods are often thought of as being statistically inefficient because they only make use of one split of the data for training the base algorithm, while the rest is held out for calibration. Recently many extensions have been proposed~\\citep{carlsson2014aggregated,vovk2015cross,johansson2014regression,bostrom2017accelerating,barber2019predictive,kim2020predictive} that make use of all the data for training. \nAll of these methods can be rephrased easily in terms of nested sets; we do so in Sections~\\ref{sec:Jackknife-plus} and \\ref{sec:OOB-conformal}. This understanding also allows us to develop our novel method QOOB in Section~\\ref{sec:QOOB}.\n\n\n\\section{Cross-conformal and Jackknife+ using nested sets}\\label{sec:Jackknife-plus}\n\nIn the previous section, we used a part of the training data to construct the nested sets and the remaining part to calibrate them for finite sample validity. This procedure, although computationally efficient, can be statistically inefficient due to the reduction of the sample size used for calibration. Instead of splitting into two parts, it is statistically more efficient to split the data into multiple parts. 
In this section, we describe such versions of nested conformal prediction sets and prove their validity.\nThese versions in the score-based conformal framework are called cross-conformal prediction and the jackknife+, and were developed by~\\citet{vovk2015cross} and~\\citet{barber2019predictive}, but the latter only for a specific score function.\n\n\\subsection{Rephrasing leave-one-out cross-conformal using nested sets}\nWe now derive leave-one-out cross-conformal in the language of nested prediction sets.\nSuppose $\\{\\mathcal{F}_{t}^{-i}(x)\\}_{t\\in\\mathcal{T}}$ for each $x\\in\\mathcal{X},\\,i\\in[n]$ denotes a collection of nested sets constructed based only on $\\{(X_j, Y_j)\\}_{\\,j\\in[n]\\setminus\\{i\\}}$. We assume that each set $\\mathcal{F}_{t}^{-i}(x)$ is constructed invariantly to permutations of the input points $\\{(X_j, Y_j)\\}_{\\,j\\in[n]\\setminus\\{i\\}}$. (Note that this is an additional assumption compared to the split conformal version.) The $i$-th nonconformity score $r_i$ induced by these nested sets is defined as $r_i(x, y) ~=~ \\inf\\{t\\in\\mathcal{T}:\\,y\\in\\mathcal{F}_t^{-i}(x)\\}.$\n\\ifx false \nThe residual for point $(X_i, Y_i)$ is defined as\n\\[\nr_i(X_i, Y_i) ~:=~ \\inf\\{t\\in\\mathcal{T}:\\,Y_i \\in \\mathcal{F}_t^{-i}(X_i)\\}.\n\\]\n\\fi \nThe leave-one-out cross-conformal prediction set is given by\n\\begin{equation}\\label{eq:LOO}\n{C}^{\\texttt{LOO}}(x) ~:=~ \\left\\{y\\in\\mathbb{R}:\\,\\sum_{i=1}^n \\mathbbm{1}\\{r_i(X_i, Y_i) < r_i(x, y)\\} < (1-\\alpha)(n+1)\\right\\}.\n\\end{equation}\nFor instance, given a conditional mean estimator $\\widehat{\\mu}^{-i}(\\cdot)$ trained on $\\{(X_j, Y_j)\\}_{\\,j\\in[n]\\setminus\\{i\\}}$, we can consider the nested sets $\\mathcal{F}_t^{-i}(x) = [\\widehat{\\mu}^{-i}(x) - t, \\widehat{\\mu}^{-i}(x) + t]$ to realize the absolute deviation residual function $r_i(x, y) = |y - \\widehat{\\mu}^{-i}(x)|$.\nWe now state the coverage guarantee that ${C}^{\\texttt{LOO}}(\\cdot)$ satisfies. \n\n\\begin{thm}\\label{thm:validity-nested-conformal-jackknife-plus}\nIf $\\{(X_i, Y_i)\\}_{i\\in[n+1]}$ are exchangeable and the sets $\\mathcal{F}_{t}^{-i}(x)$ constructed based on $\\{(X_j, Y_j)\\}_{ j\\in[n]\\setminus\\{i\\}}$ are invariant to their ordering, then\n\\[\n\\mathbb{P}(Y_{n+1}\\in{C}^{\\texttt{LOO}}(X_{n+1})) ~\\ge~ 1 - 2\\alpha.\n\\]\n\\end{thm}\nSee Appendix \\ref{appsec:Jackknife-plus} for a proof\n of Theorem~\\ref{thm:validity-nested-conformal-jackknife-plus}, which follows the proof of Theorem 1 in~\\citet{barber2019predictive} except with the new residual defined based on nested sets. In particular, Theorem~\\ref{thm:validity-nested-conformal-jackknife-plus} applies when the nested sets are constructed using conditional quantile estimators as in the conformalized quantile example discussed in Section~\\ref{sec:split-conformal}. The discussion in this section can be generalized to cross-conformal and the CV+ methods of~\\citet{vovk2015cross} and~\\citet{barber2019predictive}, which construct K-fold splits of the data and require training an algorithm only $n\/K$ times (instead of $n$ times in the leave-one-out case). These are discussed in the nested framework in Appendix \\ref{appsec:k-fold}.\n \nIn a regression setting, one may be interested in constructing prediction sets that are intervals (since they are easily interpretable), whereas $C^{\\texttt{LOO}}(x)$ need not be an interval in general. 
Also, it is not immediately evident how one would algorithmically compute the prediction set defined in~\\eqref{eq:LOO} without trying out all possible values $y \\in \\mathcal{Y}$. We discuss these concerns in subsections~\\ref{subsec:jackknife+} and \\ref{subsec:compuation-cross}. \n\\subsection{Prediction intervals containing $C^{\\texttt{LOO}}(x)$ and jackknife+}\\label{sec:intervals-LOO}\n\\label{subsec:jackknife+}\nIn a regression setting, prediction intervals may be more interpretable or `actionable' than prediction sets that are not intervals. To this end, intervals that contain $C^{\\texttt{LOO}}(x)$ are good candidates for prediction intervals since they inherit the coverage validity of Theorem~\\ref{thm:validity-nested-conformal-jackknife-plus}. For the residual function $r_i(x, y) = |y - \\widehat{\\mu}^{-i}(x)|$, \\citet{barber2019predictive} provided an interval that always contains $C^{\\texttt{LOO}}(x)$, called the jackknife+ prediction interval. In this section, we discuss when the jackknife+ interval can be defined for general nested sets. Whenever jackknife+ can be defined, we argue that another interval can be defined that contains $C^\\texttt{LOO}(x)$ and is guaranteed to be no longer in width than the jackknife+ interval\n\nIn the general formulation of nested sets, an analog of the jackknife+ interval may not always exist. However, in the special case when the nested sets $\\mathcal{F}_t^{-i}(x)$ are themselves either intervals or empty sets, an analog of the jackknife+ interval can be derived. Note that all the examples listed in Table~\\ref{tab:summary_literature} (except for the last one) result in $\\mathcal{F}_t(x)$ being either a nonempty interval or the empty set. For clarity of exposition, we discuss the empty case separately in Appendix \\ref{appsec:empty-case}. Below, suppose $\\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x) $ is a nonempty interval and define the notation\n\\[\n[\\ell_i(x), u_i(x)]~:=~\\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x) .\n\\] \nWith this notation, the cross-conformal prediction set can be re-written as \n\\begin{align}\nC^{\\texttt{LOO}}(x) &= \\left\\{y:\\,\\sum_{i=1}^n \\mathbbm{1}\\{y\\notin [\\ell_i(x), u_i(x)]\\} < (1-\\alpha)(n+1)\\right\\} \\nonumber \\\\\n&= \\left\\{y:\\, \\alpha(n+1) - 1 < \\sum_{i= 1}^n \\mathbbm{1}\\{y \\in [\\ell_i(x), u_i(x)]\\} \\right\\}. \\label{eq:loo-interval-definition}\n\\end{align}\nSuppose $y < q_{n,\\alpha}^{-}(\\{\\ell_i(x)\\})$, where \n$q_{n,\\alpha}^{-}(\\{\\ell_i(x)\\})$ denotes the \n$\\lfloor\\alpha(n+1)\\rfloor$-th smallest value of $\\{\\ell_i(x)\\}_{i=1}^n$. Clearly, \n\\[\n\\sum_{i= 1}^n \\mathbbm{1}\\{y \\in [\\ell_i(x), u_i(x)]\\} \\leq \\sum_{i= 1}^n \\mathbbm{1}\\{y \\geq l_i(x)\\} \\leq \\lfloor\\alpha(n+1)\\rfloor - 1,\n\\] \nand hence $y \\notin C^\\texttt{LOO}(x)$. Similarly it can be shown that if $y > q_{n,\\alpha}^{+}(\\{u_i(x)\\})$ (where $q_{n,\\alpha}^{+}(\\{u_i(x)\\})$ denotes the $\\lceil(1-\\alpha)(n+1)\\rceil$-th smallest value of $\\{u_i(x)\\}_{i=1}^n$), $y \\notin C^\\texttt{LOO}(x)$. 
\nHence, defining\nthe jackknife+ prediction interval as\n\\begin{equation}\nC^{\\texttt{JP}}(x) ~:=~ [q_{n,\\alpha}^{-}(\\{\\ell_i(x)\\}),\\, q_{n,\\alpha}^{+}(\\{u_i(x)\\})],\\label{eq:JP-definition}\n\\end{equation}\nwe conclude\n\\begin{equation}\\label{eq:loo-jp}\nC^{\\texttt{LOO}}(x)~\\subseteq~ C^{\\texttt{JP}}(x) \\quad\\mbox{for all}\\quad x\\in\\mathcal{X}.\n\\end{equation}\nHowever, there exists an even shorter interval containing~$C^{\\texttt{LOO}}(x)$: its convex hull; this does not require the nested sets to be intervals. The convex hull of $C^{\\texttt{LOO}}(x)$ is defined as the smallest interval containing itself. Hence,\n\\begin{equation}\\label{eq:intervals-containing-loo}\nC^{\\texttt{LOO}}(x) ~\\subseteq~ \\mathrm{Conv}(C^{\\texttt{LOO}}(x)) ~\\subseteq~ C^{\\texttt{JP}}(x).\n\\end{equation} \nBecause of~\\eqref{eq:intervals-containing-loo}, the coverage guarantee from Theorem~\\ref{thm:validity-nested-conformal-jackknife-plus} continues to hold for $\\mathrm{Conv}(C^{\\texttt{LOO}}(x))$ and $C^{\\texttt{JP}}(x)$. Interestingly, $C^{\\texttt{LOO}}(x)$ can be empty but $C^{\\texttt{JP}}(x)$ is non-empty if each $\\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x) $ is non-empty (in particular it contains the medians of $\\{\\ell_i(x)\\}$ and $\\{u_i(x)\\}$). Further, $\\mathrm{Conv}(C^{\\texttt{LOO}}(x))$ can be a strictly smaller interval than $C^{\\texttt{JP}}(x)$; see Subsection~\\ref{subsec:cc-jp} for details.\n\n\\subsection{Efficient computation of the cross-conformal prediction set}\n\\label{subsec:compuation-cross}\nEquation~\\eqref{eq:LOO} defines $C^{\\texttt{LOO}}(x)$ implicitly, and does not address the question of how to compute the mathematically defined prediction set efficiently. If the nested sets $\\mathcal{F}_t^{-i}(x)$ are themselves guaranteed to either be intervals or empty sets, jackknife+ seems like a computationally feasible alternative since it just relies on the quantiles $q_{n,\\alpha}^{-}(\\{\\ell_i(x)\\}),\\, q_{n,\\alpha}^{+}(\\{u_i(x)\\})$ which can be computed efficiently. However, it turns out that $C^{\\texttt{LOO}}(x), \\mathrm{Conv}(C^{\\texttt{LOO}}(x)),$ and $C^{\\texttt{JP}}(x)$ can all be computed in near linear in $n$ time. In this section, we provide an algorithm for near linear time computation of the aforementioned prediction sets. We will assume for simplicity that $\\mathcal{F}_t^{-i}(x)$ is always an interval; the empty case is discussed separately in Appendix \\ref{appsec:empty-case}.\n\nFirst, notice that the inclusion in~\\eqref{eq:LOO} need not be ascertained for every $y \\in \\mathcal{Y}$ but only for a finite set of values in $\\mathcal{Y}$. These values are exactly the ones corresponding to the end-points of the intervals produced by each training point $\\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x) = [\\ell_i(x), u_i(x)]$. This is because none of the indicators $\\mathbbm{1}\\{r_i(X_i, Y_i) < r_i(x, y)\\}$ change value between two consecutive interval end-points. Since $\\ell_i(x)$ and $u_i(x)$ can be repeated, we define the bag of all these values (see footnote\\footnote{A bag denoted by $\\Lbag \\cdot \\Rbag$ is an unordered set with potentially repeated elements. Bag unions respect the number of occurrences of the elements, eg $\\Lbag 1, 1, 2 \\Rbag \\cup \\Lbag 1, 3 \\Rbag = \\Lbag 1, 1, 1, 2, 3 \\Rbag$. 
} for $\\Lbag \\cdot \\Rbag$ notation):\n\\begin{equation}\n\\label{eq:y-k-main}\n\\mathcal{Y}^x := \\bigcup_{i=1}^n \\Lbag \\ell_i(x), u_i(x)\\Rbag .\n\\end{equation}\nThus we only need to verify the condition\n\\begin{equation}\n\\label{eq:y-condition}\n\\sum_{i= 1}^n \\mathbbm{1}\\{y \\in [\\ell_i(x), u_i(x)]\\} > \\alpha(n+1) - 1\n\\end{equation}\nfor the $2n$ values of $y \\in \\mathcal{Y}^x$ and construct the prediction sets suitably.\nDone naively, verifying \\eqref{eq:y-condition} itself is an $O(n)$ operation for an overall time of $O(n^2)$. However, \\eqref{eq:y-condition} can be verified for all $y \\in \\mathcal{Y}^x$ in one pass on the sorted $\\mathcal{Y}^x$ for a running time of $O(n\\log n)$; we describe how to do so now\n\\begin{algorithm}[!t]\n\t\\SetAlgoLined\n\t\n\t\\KwInput\n\t\tDesired coverage level $\\alpha$; $\\mathcal{Y}^x$ and $\\mathcal{S}^x$ computed as defined in equations~\\eqref{eq:y-k-main},~\\eqref{eq:s-k-main} using the training data $\\{(X_i, Y_i)\\}_{i=1}^n$, test point $x$ and any sequence of nested sets $\\mathcal{F}^{-i}_t(\\cdot)$ \\;}\n\t\\KwOutput{Prediction set $C^x \\subseteq \\mathcal{Y}$}\n\t\n\t$threshold \\gets \\alpha(n+1) - 1$;\n\tif $threshold < 0$, then return $\\mathbb{R}$ and stop;\\\\\n\n\t\n\n\t$C^x \\gets \\emptyset$; $count \\gets 0$; $left\\_endpoint \\gets 0$\\;\n\t\\For{$i \\gets 1$ \\textbf{to} $|\\mathcal{Y}^x|$}{\n\t\t\\eIf{$s^x_i = 1$}{ \\label{line:condition-1}\n\t\t\t$count \\gets count + 1$\\; \\label{line:increase-count}\n\t\t\t\\If{$count > threshold $ \\textbf{and} $count -1 \\leq threshold$}{ \\label{line:condition-2}\n\t\t\t\t$left\\_endpoint \\gets y^x_i$\\; \\label{line:update-left-endpoint}\n\t\t\t}\n\t\t}{\n\t\t\t\\If{$count > threshold$ \\textbf{and} $count - 1\\leq threshold$}{ \\label{line:condition-3}\n\t\t\t\t$C^x \\gets C^x \\cup \\{[left\\_endpoint, y^x_i]\\}$\\; \\label{line:update-prediction-set}\n\t\t\t}\n\t\t\t$count \\gets count - 1$\\; \\label{line:decrease-count}\n\t\t}\n\t}\n\t\\Return $C^x$\\;\n\t\\caption{Efficient cross-conformal style aggregation}\n\t\\label{alg:efficientCC}\n\\end{algorithm}\n\nLet the sorted order of the points $\\mathcal{Y}^x$ be $y^x_1 \\leq y^x_2 \\leq \\ldots \\leq y^x_{|{\\mathcal{Y}}^x|}$. If $\\mathcal{Y}^x$ contains repeated elements, we require that the left end-points $\\ell_i$ come earlier in the sorted order than the right end-points $u_i$ for the repeated elements. Also define the bag of indicators $\\mathcal{S}^x$ with elements $s^x_1 \\leq s^x_2 \\leq \\ldots \\leq s^x_{|{\\mathcal{Y}^x}|}$, where \n\\begin{equation}\n\\label{eq:s-k-main}\ns^x_i :=\n\\begin{cases}\n1 & \\mbox{if } y^x_i \\mbox{ corresponds to a left end-point}\\\\\n0 & \\mbox{if } y^x_i \\mbox{ corresponds to a right end-point}.\n\\end{cases}\n\\end{equation}\nGiven $\\mathcal{Y}^x$ and $\\mathcal{S}^x$, Algorithm~\\ref{alg:efficientCC} describes how to compute the cross-conformal prediction set in one pass (thus time $O(n)$) for every test point. Thus the runtime (including the sorting) is $O(n\\log n)$ time to compute the predictions $\\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x)$ for every $i$. If each prediction takes time $\\leq T$, the overall time is $O(n\\log n)+Tn$, which is the same as jackknife+.\\footnote{For jackknife+, using quick-select, we could obtain $O(n) + Tn$ randomized, but the testing time $Tn$ usually dominates the additional $n\\log n$ required to sort.} In Appendix \\ref{appsec:alg-proof} we describe the algorithm and argue its correctness. 
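To make the sweep concrete, we also give a minimal Python sketch of Algorithm~\\ref{alg:efficientCC}; it assumes that the leave-one-out endpoints $\\ell_i(x)$ and $u_i(x)$ have already been computed, and the function and variable names are illustrative choices rather than part of any released implementation.
\\begin{verbatim}
import math

def cross_conformal_set(ell, u, alpha):
    """Aggregate the leave-one-out intervals [ell[i], u[i]] into the
    cross-conformal prediction set, returned as a list of closed intervals."""
    n = len(ell)
    threshold = alpha * (n + 1) - 1
    if threshold < 0:                       # every y in R is retained
        return [(-math.inf, math.inf)]
    # sort all 2n endpoints; at ties, left endpoints (flag 1) precede right ones (flag 0)
    points = sorted([(e, 1) for e in ell] + [(e, 0) for e in u],
                    key=lambda p: (p[0], -p[1]))
    pred_set, count, left_end = [], 0, None
    for y, is_left in points:
        if is_left:
            count += 1
            if count > threshold >= count - 1:   # coverage just became sufficient
                left_end = y
        else:
            if count > threshold >= count - 1:   # coverage about to become insufficient
                pred_set.append((left_end, y))
            count -= 1
    return pred_set
\\end{verbatim}
A single pass over the $2n$ sorted endpoints suffices, which matches the $O(n\\log n)$ aggregation cost discussed above.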
\n\\begin{prop}\n\t\\label{prop:efficientCC-correctness}\n\tAlgorithm~\\ref{alg:efficientCC} correctly computes the cross-conformal prediction set~\\ref{eq:LOO} given $\\mathcal{Y}^x$ and $\\mathcal{S}^x$. \n\\end{prop}\n\\section{Extending ensemble based out-of-bag conformal methods using nested sets}\n\\label{sec:OOB-conformal}\nCross-conformal, jackknife+, and their K-fold versions perform multiple splits of the data and for every training point $(X_i, Y_i)$, a residual function $r_i$ is defined using a set of training points that does not include $(X_i, Y_i)$. \nIn the previous section, our description required training the base algorithm multiple times on different splits of the data. Often each of these individual algorithms is itself an ensemble algorithm (such as random forests). As described in this section, an ensemble algorithm naturally provide multiple (random) splits of the data from a single run and need not be re-trained on different splits to produce conformal prediction regions. This makes the conformal procedure computationally efficient.\nAt the same time, like cross-conformal, the conformal prediction regions produced here are often shorter than split conformal because they use all of the training data for prediction. \nIn a series of interesting papers \\citep{johansson2014regression, bostrom2017accelerating, linusson2019efficient, kim2020predictive}, many authors have exhibited promising empirical evidence that these ensemble algorithms improve the width of prediction sets without paying a computational cost. We call this method the \\emph{OOB-conformal} method (short for out-of-bag). \n\nWe now describe the procedure formally within the nested conformal framework, thus extending it instantly to residual functions that have hitherto not been considered. Our procedure can be seen as a generalization of the OOB-conformal method~\\citep{linusson2019efficient} or the jackknife+ after bootstrap method ~\\citep{kim2020predictive}:\n\\begin{enumerate}\n\t\\item Let $\\{M_j\\}_{j = 1}^K$ denote $K \\geq 1$ independent and identically distributed random sets drawn uniformly from $\\{M: M\\subset[n], |M| = m\\}$. This is the same as subsampling. Alternatively $\\{M_j\\}_{j=1}^K$ could be i.i.d. random bags, where each bag is obtained by drawing $m$ samples with replacement from $[n]$. This procedure corresponds to bootstrap. \n\t\\item For every $i\\in [n]$, define \n\t\\[M_{-i} := \\{M_j : i \\notin M_j\\},\\]\n\t which contains the training sets that are \\emph{out-of-bag} for the $i$-th data point. \n\t\\item The idea now is to use an ensemble learning method that, for every $i$, aggregates $|M_{-i}|$ many predictions to identify a single collection of nested sets $\\{\\mathcal{F}_t^{-i}(x)\\}_{t \\in \\mathcal{T}}$. For instance, one can obtain an estimate $\\widehat{\\mu}_{j}(\\cdot)$ of the conditional mean based on the training data corresponding to $M_j$, for every $j$, and then construct \n\t\\[\n\t\\mathcal{F}_t^{-i}(x) = [\\widehat{\\mu}_{-i}(x) - t, \\widehat{\\mu}_{-i}(x) + t],\n\t\\]\n\twhere $\\widehat{\\mu}_{-i}(\\cdot)$ is some combination (such as the mean) of $\\{\\widehat{\\mu}_j(\\cdot)\\}_{\\{j: i\\notin M_j\\}}$.\n\n\t\\item The remaining conformalization procedure is identical to $C^{\\texttt{LOO}}(x)$ described in Section~\\ref{sec:Jackknife-plus}. 
Define the residual score \n\n\t$r_i(x, y) := \\inf\\left\\{t\\in\\mathcal{T}:\\,y\\in\\mathcal{F}_t^{-i}(x)\\right\\}.$\n\n\\end{enumerate}\nUsing the cross-conformal scheme, the prediction set for any $x \\in \\mathcal{X}$ is given as\n\\begin{equation}\nC^{\\texttt{OOB}}(x) := \\left\\{y:\\,\\sum_{i=1}^n \\mathbbm{1}\\{r_i(X_i, Y_i) < r_i(x, y)\\} < (1-\\alpha)(n+1)\\right\\}.\n\\label{eq:def-oob-cross}\n\\end{equation}\nIf $\\mathcal{F}_t^{-i}(x)$ is an interval for all $1\\le i\\le n$ and $x\\in\\mathcal{X}$, then following the discussion in Section~\\ref{sec:intervals-LOO}, we could also derive a jackknife+ style prediction interval that is guaranteed to be non-empty:\n\\begin{equation}\nC^{\\texttt{OOB-JP}}(x) := [q^{-}_{n,\\alpha}(\\ell_i(x)), q^{+}_{n,\\alpha}(u_i(x))].\n\\label{eq:def-oob-jp}\n\\end{equation}\n\nIf $\\mathcal{F}_t^{-i}(x)$ could further contain empty sets, a jackknife+ interval can still be derived following the discussion in Appendix \\ref{appsec:empty-case}, but we skip these details here. Once again, we have that for every $x \\in \\mathcal{X}$, $C^{\\texttt{OOB}}(x) \\subseteq C^{\\texttt{OOB-JP}}(x)$; see Equation~\\eqref{eq:loo-jp} for details. The computational discussion of subsection~\\ref{subsec:compuation-cross} extends to $C^{\\texttt{OOB}}$. \n\n\n\nRecently, \\citet{kim2020predictive} provided a $1-2\\alpha$ coverage guarantee of the OOB-conformal method when $\\mathcal{F}_t^{-i}(x) = [\\widehat{\\mu}_{-i}(x) - t, \\widehat{\\mu}_{-i}(x) + t]$ where $\\widehat{\\mu}_{-i}(\\cdot)$ represents the aggregation of conditional mean estimate from $\\{M_j\\}_{i\\notin M_j}$. We generalize their result to any sequence of nested sets and extend it to the cross-conformal style aggregation scheme. In order to obtain a coverage guarantee, the conformal method must ensure a certain exchangeability requirement is satisfied. To do so, the argument of \\citet{kim2020predictive} required the number of random resamples $K$ to itself be drawn randomly from a binomial distribution. We assert the same requirement in the following theorem (proved in Appendix~\\ref{appsec:OOB-conformal}).\n\\begin{thm}\\label{thm:OOB-cross-validity}\n\tFix a permutation invariant ensemble technique that constructs sets $\\{\\mathcal{F}_t^{-i}\\}_{t\\in \\mathcal{T}}$ given a collection of subsets of $[n]$. 
Fix integers $\\widetilde{K}, m \\geq 1$ and let \n\t\\begin{align*}\n\tK ~&\\sim~ \\mathrm{Binomial}\\left(\\widetilde{K}, \\left(1 - \\frac{1}{n+1}\\right)^m\\right) \\quad (\\mbox{in the case of bagging}),\\;\\mathrm{or},\\\\\n\tK ~&\\sim~ \\mathrm{Binomial}\\left(\\widetilde{K}, 1 - \\frac{m}{n+1}\\right)\\quad (\\mbox{in the case of subsampling}).\n\t\\end{align*} \n\tThen \n\n\t$\\mathbb{P}(Y_{n+1}\\in C^{\\texttt{OOB}}(X_{n+1})) \\ge 1- 2\\alpha.$\n\n\\end{thm}\nBecause $C^{\\texttt{OOB}}(x) \\subseteq C^{\\texttt{OOB-JP}}(x)$ for every $x \\in \\mathcal{X}$, the validity guarantee continues to hold for $C^{\\texttt{OOB-JP}}(\\cdot)$.\nWhile we can only prove a $1-2\\alpha$ coverage guarantee, it has been observed empirically that the OOB-conformal method with regression forests as the ensemble scheme and nested sets $\\{[\\widehat{\\mu}(x) - t\\widehat{\\sigma}(x), \\widehat{\\mu}(x) + t\\widehat{\\sigma}(x)]\\}_{t \\in \\mathbb{R}^+}$ satisfies $1-\\alpha$ coverage while providing the shortest prediction sets on average~\\citep{bostrom2017accelerating}.\nOn the other hand, the best empirically performing nested sets are the ones introduced by \\citet{romano2019conformalized}: $\\{[\\widehat{q}_{\\beta}(x) - t, \\widehat{q}_{1-\\beta}(x) + t]\\}_{t \\in \\mathbb{R}}$ (for an appropriately chosen $\\beta$). Using nested conformal, we show how these two ideas can be seamlessly integrated: quantile-based nested sets with an OOB-style aggregation scheme. In Section~\\ref{sec:QOOB} we formally develop our novel method QOOB, and in Section~\\ref{sec:numerical-experiments} we empirically verify that it achieves competitive results in terms of the length of prediction sets. \n\n\n\\section{QOOB: A novel conformal method using nested sets}\n\\label{sec:QOOB}\nThe nested conformal interpretation naturally separates the design of conformal methods into two complementary aspects: \n\\begin{enumerate}[label=(\\alph*)]\n\t\\item identifying an information efficient nonconformity score based on a set of nested intervals, and\n\t\\item performing sample efficient aggregation of the nonconformity scores while maintaining validity guarantees.\n\\end{enumerate}\nIn this section, we leverage this dichotomy to merge two threads of ideas in the conformal literature and develop a novel conformal method that empirically achieves state-of-the-art results in terms of the width of prediction sets. \n\nFirst, we review what is known on aspect (b). While split-conformal based methods are computationally efficient, they lose sample efficiency due to sample splitting. Aggregated conformal methods such as cross-conformal, jackknife+, and OOB-conformal do not have this drawback and are the methods of choice for computationally feasible and sample efficient prediction sets. Among all aggregation techniques, the OOB-conformal method has been observed empirically to be the best aggregation scheme which uses all the training data efficiently~\\citep{bostrom2017accelerating}. \n\nNext, we consider aspect (a), the design of the nested sets. The nested sets considered by \\citet{bostrom2017accelerating} are $\\{[\\widehat{\\mu}(x) - t\\widehat{\\sigma}(x), \\widehat{\\mu}(x) + t\\widehat{\\sigma}(x)]\\}_{t \\in \\mathbb{R}^+}$ based on mean and variance estimates obtained using out-of-bag trees. 
On the other hand, it has been convincingly demonstrated that nested sets based on quantile estimates $\\widehat{q}_s(x)$ given by $\\{[\\widehat{q}_{\\beta}(x) - t, \\widehat{q}_{1-\\beta}(x) + t]\\}_{t \\in \\mathbb{R}}$ perform better than those based on mean-variance estimates in the split conformal setting~\\citep{romano2019conformalized, sesia2019comparison}. \n\nBuilding on these insights, we make a natural suggestion: Quantile Out-of-Bag (QOOB) conformal; pronounced ``cube'' conformal. This method works in the following way. First, a quantile regression based on random forest~\\citep{meinshausen2006quantile} with $T$ trees is learnt by subsampling or bagging the training data $T$ times. Next, using the out-of-bag trees for every training point $(X_i, Y_i)$, a quantile estimate $\\widehat{q}^{-i}_s(\\cdot)$ is learnt for fixed levels $s = \\beta$ and $ s = 1 - \\beta$. Here $\\beta = k\\alpha$ for some constant $k$. Now for every training point $i$, we define the nested sets as\n\\[\n\\mathcal{F}_t^{-i}(x) ~:=~ [\\widehat{q}_\\beta^{-i}(x) - t, \\widehat{q}_{1-\\beta}^{-i}(x) + t].\n\\]\n\nThe nonconformity scores based on these nested sets are aggregated to provide a prediction set as described by~$C^{\\texttt{OOB}}(x)$ in~\\eqref{eq:def-oob-cross} of Section~\\ref{sec:OOB-conformal}. Algorithm~\\ref{alg:QOOB} describes QOOB procedurally. Following subsection~\\ref{subsec:compuation-cross}, the aggregation step of QOOB (line \\ref{line:qoob-aggregation}, Algorithm~\\ref{alg:QOOB}) can be performed in time $O(n\\log n)$. \n\\begin{algorithm}[!t]\n\t\\SetAlgoLined\n\t\\KwInput{Training data $ \\{(X_i, Y_i)\\}_{i=1}^n$, test point $x$, desired coverage level $\\alpha$, number of trees $T$, nominal quantile level $\\beta$ (default $=2\\alpha$)}\n\t\\KwOutput{Prediction set $C^x \\subseteq \\mathcal{Y}$}\n\t$\\{M_j\\}_{j = 1}^T \\gets$ training bags drawn independently from $[n]$ using subsampling or bootstrap\\;\n\t$M_{-i} \\gets \\{M_j : i \\notin M_j\\}$\\;\n\t\\For{$i \\gets 1$ \\textbf{to} $T$}{\n\t\t$\\phi_i \\gets $ Quantile regression trees learnt using the data-points in $M_i$\\\\\\qquad \\qquad (this step could include subsampling of features)\\;\n\t}\n\t\\For{$i \\gets 1$ \\textbf{to} $n$}{\n\t\t$\\Phi_{-i} \\gets \\{\\phi_j: j \\in M_{-i}\\}$\\;\n\t\t$\\widehat{q}^{-i}(\\cdot) \\gets$ quantile regression forest using the trees $\\Phi_{-i}$\\;\n\t\n\t\t$\\mathcal{F}_t^{-i}(\\cdot) \\gets [\\widehat{q}_\\beta^{-i}(\\cdot) - t, \\widehat{q}_{1-\\beta}^{-i}(\\cdot) + t]$\\; \n\t\t$r_i(\\cdot, \\cdot) \\gets ((x, y) \\to \\inf\\left\\{t\\in\\mathcal{T}:\\,y\\in\\mathcal{F}_t^{-i}(x)\\right\\})$\\;\n\t\t\n\t}\n\t\n\t$C^x \\gets $ OOB prediction set defined in Equation~\\eqref{eq:def-oob-cross}; (call Algorithm~\\ref{alg:efficientCC} with $\\mathcal{Y}^x$, $\\mathcal{S}^x$ computed using $\\mathcal{F}_t^{-i}(\\cdot)$, the training data $\\{(X_i, Y_i)\\}_{i=1}^n$ and test point $x$ as described in equations~\\eqref{eq:y-k-main}, ~\\eqref{eq:s-k-main}) \\label{line:qoob-aggregation}\\\\% \\{(X_i, Y_i)\\}_{i=1}^n$, $x$, and $(r_i)_{i=1}^n$)\\; \n\t\\Return $C^x$\\;\n\t\\caption{Quantile Out-of-Bag conformal (QOOB)}\n\t\\label{alg:QOOB}\n\\end{algorithm}\n\nSince QOOB is a special case of OOB-conformal, it inherits an assumption-free $1-2\\alpha$ coverage guarantee from Theorem~\\ref{thm:OOB-cross-validity} if $K$ is drawn from an appropriate binomial distribution as described in the theorem. In practice, we typically obtain $1-\\alpha$ coverage with a fixed $K$. 
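To illustrate how the pieces of QOOB fit together, the following Python sketch computes, for a test point $x$, the leave-one-out intervals $[\\ell_i(x), u_i(x)] = \\mathcal{F}_{r_i(X_i, Y_i)}^{-i}(x)$ that are aggregated in line~\\ref{line:qoob-aggregation} of Algorithm~\\ref{alg:QOOB}. The helper \\texttt{oob\\_quantile(i, z, s)}, assumed to return the level-$s$ quantile estimate at $z$ from the trees whose bags exclude point $i$, is a hypothetical placeholder for the quantile regression forest, and the sketch ignores the empty-interval corner case treated in Appendix~\\ref{appsec:empty-case}.
\\begin{verbatim}
def qoob_loo_intervals(X, Y, x, alpha, oob_quantile, beta=None):
    """Leave-one-out intervals [ell_i(x), u_i(x)] for QOOB at a test point x.

    oob_quantile(i, z, s) is an assumed helper returning the level-s quantile
    estimate at z, aggregated over the trees whose bags exclude point i.
    """
    n = len(Y)
    beta = 2 * alpha if beta is None else beta
    ell, u = [], []
    for i in range(n):
        lo_i = oob_quantile(i, X[i], beta)
        hi_i = oob_quantile(i, X[i], 1 - beta)
        # nonconformity score r_i(X_i, Y_i) for the nested sets
        # [q_beta^{-i}(.) - t, q_{1-beta}^{-i}(.) + t]
        r_i = max(lo_i - Y[i], Y[i] - hi_i)
        ell.append(oob_quantile(i, x, beta) - r_i)
        u.append(oob_quantile(i, x, 1 - beta) + r_i)
    return ell, u

# The prediction set for x then reuses the endpoint sweep, for example:
#   ell, u = qoob_loo_intervals(X, Y, x, alpha, oob_quantile)
#   C_x = cross_conformal_set(ell, u, alpha)
\\end{verbatim}
The prediction set for $x$ is then obtained by passing these endpoints to the cross-conformal sweep sketched in Section~\\ref{sec:Jackknife-plus}.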
In Section~\\ref{sec:numerical-experiments}, we empirically demonstrate that QOOB achieves state-of-the-art performance on multiple real-world datasets. We also discuss three aspects of our method: \n\\begin{enumerate}[label=(\\alph*)]\n\t\\item how to select the nominal quantile level $\\beta = k\\alpha$, \n\t\\item the effect of the number of trees $T$ on the performance, and \n\t\\item the performance of the jackknife+ version of our method (QOOB-JP), which corresponds to the OOB-JP style aggregation (equation~\\eqref{eq:def-oob-jp}) of quantile-based nonconformity scores.\n\\end{enumerate}\n\\section{Numerical comparisons}\\label{sec:numerical-experiments}\nWe compare several methods discussed in this paper using synthetic and real datasets. MATLAB code to execute QOOB and reproduce the experiments in this section is provided at \\url{https:\/\/github.com\/AIgen\/QOOB}. Some experiments on synthetic data are discussed in subsection~\\ref{subsec:synthetic}; the rest of this section discusses results on real datasets. We use the following six datasets from the UCI repository: blog feedback, concrete strength, superconductivity, news popularity, kernel performance and protein structure. Metadata and links for these datasets are provided in subsection~\\ref{subsec:experiment-info}. In order to assess the coverage and width properties, we construct multiple versions of each of these six datasets. For each dataset, we obtain 100 versions by independently drawing 1000 data points randomly (without replacement) from the full dataset. Then we split each such version into two parts: training and testing sets of sizes $768$\\footnote{the number of training points is divisible by many factors, which is useful for creating a varying number of folds for K-fold methods} and $232$, respectively. Hence corresponding to each of the six datasets, we get 100 different datasets with 768 training and 232 testing points. \n\nFor each conformal method, we report the following two metrics:\n\\begin{itemize}\n\t\n\t\\item \\emph{Mean-width}: For a prediction set $C(x)\\subseteq \\mathcal{Y} = \\mathbb{R}$, its width is defined as its Lebesgue measure. For instance, if $C(x)$ is an interval, then the width is its length, and if $C(x)$ is a union of two or more disjoint intervals, then the width is the sum of the lengths of these disjoint intervals. We report the average over the mean-width given by\n\t\\begin{equation}\\label{eq:mean-width}\n\t\\mbox{Ave-Mean-Width} := \\frac{1}{100}\\sum_{b = 1}^{100}\\, \\left(\\frac{1}{232}\\sum_{i=1}^{232} \\mbox{width}({C}_b(X_i^b))\\right).\n\t\\end{equation}\n\tHere ${C}_b(\\cdot)$ is a prediction region learnt from the $b$-th version of a dataset. The outer mean is the average over 100 versions of a dataset. The inner mean is the average of the width over the testing points in a particular version of a dataset.\n\t\\item \\emph{Mean-coverage}: We have proved finite-sample coverage guarantees for all our methods and to verify (empirically) this property, we also report the average over the mean-coverage given by\n\t\\begin{equation}\\label{eq:mean-coverage}\n\t\\mbox{Ave-Mean-Coverage} := \\frac{1}{100}\\sum_{b=1}^{100}\\,\\left(\\frac{1}{232}\\sum_{i=1}^{232} \\mathbbm{1}\\{Y_i^b \\in {C}_b(X_i^b)\\}\\right).\n\t\\end{equation}\n\t\n\\end{itemize} \nIn addition to reporting the average over versions of a dataset, we also report the estimated standard deviation of the average (to gauge the fluctuations). 
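The following Python sketch shows one way these two reported quantities (and the standard deviation of each average) could be computed from stored prediction sets; the data layout and function names are our own illustrative choices rather than the released MATLAB code.
\\begin{verbatim}
import numpy as np

def set_width(intervals):
    # Lebesgue measure of a prediction set given as disjoint (lo, hi) intervals.
    return sum(hi - lo for (lo, hi) in intervals)

def ave_metrics(pred_sets, y_test):
    # pred_sets[b][i]: prediction set (list of disjoint intervals) for test point i
    # in version b of a dataset; y_test[b][i]: the corresponding true response.
    mean_widths, mean_covs = [], []
    for sets_b, y_b in zip(pred_sets, y_test):       # loop over the 100 versions
        mean_widths.append(np.mean([set_width(C) for C in sets_b]))
        mean_covs.append(np.mean([any(lo <= y <= hi for (lo, hi) in C)
                                  for C, y in zip(sets_b, y_b)]))
    B = len(mean_widths)
    # outer averages (eq:mean-width, eq:mean-coverage) and, as one natural choice,
    # the standard deviation of each average estimated as std / sqrt(B)
    return (np.mean(mean_widths), np.std(mean_widths) / np.sqrt(B),
            np.mean(mean_covs),   np.std(mean_covs) / np.sqrt(B))
\\end{verbatim}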
In the rest of the discussion, the qualification `average' may be skipped for succinctness, but all reports and conclusions are to be understood as comments on the average value for mean-width and mean-coverage. \n\nRandom forest (RF) based regressors perform well across different conformal methods and will be used as the base regressor in our experiments, with varying $T$, the number of trees. Each tree is trained on an independently drawn bootstrap sample from the training set (containing about $(1-1\/e)100\\% \\approx 63.2\\%$ of all training points).\nThe numerical comparisons will use the following methods:\n\\begin{enumerate}\n\t\\item SC-$T$: Split conformal~\\citep{papadopoulos2002inductive, lei2018distribution} with nested sets $\\{[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]\\}_{t \\in \\mathbb{R}^+}$ and $T$ trees. \n\t\\item Split-CQR-$T$ ($2\\alpha$): Split conformalized quantile regression~\\citep{romano2019conformalized} with $T$ trees and nominal quantile level $2\\alpha$. This corresponds to the nested sets $\n\t\\{ [\\widehat{q}_{2\\alpha}^{-i}(x) - t, \\widehat{q}_{1-{2\\alpha}}^{-i}(x) + t]\\}_{t \\in \\mathbb{R}}$. Quantile conformal methods require the nominal quantile level to be set carefully, as also noted by~\\citet{sesia2019comparison}. In our experiments, we observe that Split-CQR-$T$ performs well at the nominal quantile level $2\\alpha$. This is discussed more in subsection~\\ref{subsec:nominal-quantile}. \n\t\\item 8-fold-CC-$T$: 8-fold cross-conformal~\\citep{vovk2015cross, barber2019predictive} with $T$\n\ttrees learnt for every fold and the nested sets $\\{[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]\\}_{t \\in \\mathbb{R}^+}$. Leave-one-out cross-conformal is computationally expensive if $T$ trees are to be trained for each fold, and does not lead to significantly improved performance compared to OOB-CC in our experiments. Hence we did not report a detailed comparison across all datasets. \n\t\\item OOB-CC-$T$: OOB-cross-conformal~\\citep{johansson2014regression, kim2020predictive} with $T$\n\ttrees. This method considers the nested sets $\\{[\\widehat{\\mu}(x) - t, \\widehat{\\mu}(x) + t]\\}_{t \\in \\mathbb{R}^+}$ where $\\widehat{\\mu}$ is the average of the mean-predictions for $x$ on out-of-bag trees. \n\t\\item OOB-NCC-$T$: OOB-normalized-cross-conformal~\\citep{bostrom2017accelerating} with $T$ trees. This method considers nested sets $\\{[\\widehat{\\mu}(x) - t\\widehat{\\sigma}(x), \\widehat{\\mu}(x) + t\\widehat{\\sigma}(x)]\\}_{t \\in \\mathbb{R}^+}$ where $\\widehat{\\sigma}(x)$ is the standard deviation of mean-predictions for $x$ on out-of-bag trees.\n\n\t\\item QOOB-$T$ ($2\\alpha$): OOB-quantile-cross-conformal with $T$ trees and nominal quantile level $\\beta = 2\\alpha$. This is our proposed method. In our experiments, we observe that QOOB-T performs well at the nominal quantile level $2\\alpha$. We elaborate more on the nominal quantile selection in subsection~\\ref{subsec:nominal-quantile}.\n\\end{enumerate}\n\n\\begin{table}[!h]\n\t\\caption{Mean-width~\\eqref{eq:mean-width} of conformal methods with regression forests ($\\alpha = 0.1$).\\\\ Average values across 100 simulations are reported with the standard deviation in brackets. 
}\n\t\\centering\n\t\\resizebox{\\textwidth}{!}{\n\t\t\\begin{tabular}{c|c|c|c|c|c|c}\n\t\t\t\\hline\n\t\t\n\t\t\t\\textbf{Method} & {Blog} & {Protein} & {Concrete}& {News} & {Kernel} & {Superconductivity} \\\\ \\hline \\hline \n\t\t\tSC-100 & 25.54 & 16.88 & 22.29 & 12491.84 & 452.71 & 54.46 \\\\[-0.15in] \n\t\t\t& (0.71) & (0.08) & (0.14) & (348.07) & (5.10) & (0.37) \\\\\\hline\n\t\t\tSplit-CQR-100 (2$\\alpha$) & \\textbf{12.22} & 14.20 & 21.45 & \\textbf{7468.15} & \\textbf{295.49} & 39.59 \\\\[-0.15in] \n\t\t\t& (0.35) & (0.09) & (0.12) & (136.93) & (3.09) & (0.27)\\\\\\hline\n\t\t\t8-fold-CC-100 & 24.83 & 16.42 & 19.23 & 12461.40 & 411.81 & 50.30 \\\\[-0.15in] \n\t\t\t& (0.44) & (0.05) & (0.04) & (263.54) & (3.4299) & (0.24) \\\\\\hline\n\t\t\tOOB-CC-100 & 24.76 & 16.38 & 18.69 & 12357.58 & 402.97 & 49.31 \\\\[-0.15in] \n\t\t\t& (0.50) & (0.04) & (0.03) & (213.72) & (3.13) & (0.24) \\\\\\hline\n\t\t\tOOB-NCC-100 & 20.31 & 14.87 & 18.66 & 11500.22 & 353.35 & 39.55 \\\\ [-0.15in]\n\t\t\t& (0.42) & (0.05) & (0.06) & (320.91) & (2.95) & (0.22)\\\\\\hline\n\t\t\tQOOB-100 ($2\\alpha$) & 14.43 & \\textbf{13.74} & \\textbf{18.19} & 7941.19 & 300.04 & \\textbf{37.04} \\\\[-0.15in] \n\t\t\t& (0.38) & (0.05) & (0.05) & (89.21) & (2.70) & (0.18)\\\\\\hline\n\t\\end{tabular}}\n\t\\label{table:overall-comparison-width}\n\\end{table}\n\n\\begin{table}[!h]\n\t\\caption{Mean-coverage~\\eqref{eq:mean-coverage} of conformal methods with regression forests ($\\alpha = 0.1$). The standard deviation of these average mean-widths are zero upto two significant digits. }\n\t\\centering\n\t\\resizebox{\\textwidth}{!}{\n\t\t\\begin{tabular}{c|c|c|c|c|c|c}\n\t\t\t\\hline\n\t\t\n\t\t\t\\textbf{Method} & {Blog} & {Protein} & {Concrete}& {News} & {Kernel} & {Superconductivity} \\\\ \\hline \\hline \n\t\t\tSC-100 & 0.90 & 0.90 & 0.90 & 0.90 & 0.90 & 0.90 \\\\ \\hline\n\t\t\tSplit-CQR-100 ($2\\alpha$) & 0.91 & 0.90 & 0.90 & 0.90 & 0.90 & 0.90 \\\\ \\hline\n\t\t\t8-fold-CC-100 & 0.91 & 0.91 & 0.91 & 0.91 & 0.91 & 0.90 \\\\ \\hline\n\t\t\tOOB-CC-100 & 0.90 & 0.91 & 0.91 & 0.91 & 0.91 & 0.90 \\\\ \\hline\n\t\t\tOOB-NCC-100 & 0.92 & 0.91 & 0.91 & 0.92 & 0.93 & 0.91 \\\\ \\hline\n\t\t\tQOOB-100 ($2\\alpha$) & 0.92 & 0.91 & 0.92 & 0.91 & 0.93 & 0.91 \\\\ \\hline\n\t\\end{tabular}}\n\t\\label{table:overall-comparison-coverage}\n\\end{table}\n\\label{subsec:QOOB-main}\n\nTables~\\ref{table:overall-comparison-width} and~\\ref{table:overall-comparison-coverage} report the mean-width and mean-coverage that these conformal methods achieve on 6 datasets. Here, the number of trees $T$ is set to $100$ for all the methods.\nWe draw the following conclusions: \n\n\\begin{itemize}\n\t\\item Our novel method QOOB achieves the shortest or close to the shortest mean-width compared to other methods while satisfying the $1-\\alpha$ coverage guarantee. The closest competitor is Split-CQR. As we further investigate in subsection~\\ref{subsec:QOOB-trees}, even on datasets where Split-CQR performs better than QOOB, if the number of trees are increased beyond 100, QOOB shows a decrease in mean-width while Split-CQR does not improve. For example, on the kernel dataset, QOOB outperforms Split-CQR at 400 trees.\n\t\\item In Table~\\ref{table:overall-comparison-width}, QOOB typically has low values for the standard deviation of the average-mean-width across all methods. This entails more reliability to our method, which may be desirable in some applications. 
In subsections~\\ref{subsec:nominal-quantile} and~\\ref{subsec:QOOB-trees}, we observe that this property is true across different numbers of trees and nominal quantile levels as well. \n\t\\item On every dataset, QOOB achieves coverage higher than the prescribed value of $1-\\alpha$, with a margin of 1--3\\%. Surprisingly, this is true even though it has the shortest, or close to the shortest, mean-width among all methods. It may be possible to further improve the performance of QOOB in terms of mean-width by investigating the cause of this over-coverage. \n\t\\item OOB-CC does better than 8-fold-CC while also having faster running times. Thus, to develop QOOB, we chose to work with out-of-bag conformal.\n\\end{itemize}\n\n\n\nIn the rest of this section, we present more experimental results, organized around the following key insights:\n\\begin{itemize}\n\n\t\\item In subsection~\\ref{subsec:nominal-quantile}, we discuss the significant impact that nominal quantile selection has on the performance of QOOB and Split-CQR. We observe that $2\\alpha$ is an appropriate general-purpose nominal quantile recommendation for both methods. \n\t\\item In subsection~\\ref{subsec:QOOB-trees}, we show that increasing the number of trees $T$ leads to decreasing mean-widths for QOOB, while this is not true for its closest competitor, Split-CQR. QOOB also outperforms other competing OOB methods across different values of the number of trees $T$. \n\t\\item In subsection~\\ref{subsec:QOOB-sample-size}, we compare QOOB and Split-CQR in the small sample size (small $n$) regime, where we expect sample-splitting methods to lose statistical efficiency. We confirm that QOOB significantly outperforms Split-CQR on all six datasets we have considered for $n \\leq 100$. \n\t\\item In subsection~\\ref{subsec:cc-jp}, we compare the related methods of cross-conformal and jackknife+ and demonstrate that there exist settings where cross-conformal leads to shorter intervals compared to jackknife+, while having a similar computational cost (as discussed in subsection~\\ref{subsec:compuation-cross}). \n\t\\item In subsection~\\ref{subsec:synthetic}, we demonstrate on a simple synthetic dataset that QOOB achieves conditional coverage empirically. We experiment with the data distribution designed by \\citet{romano2019conformalized} for demonstrating the conditional coverage of Split-CQR.\n\\end{itemize}\n\n\n\n\n\n\\subsection{Nominal quantile selection has a significant effect on QOOB}\n\\label{subsec:nominal-quantile}\nQOOB and Split-CQR both use nominal quantiles $\\widehat{q}_\\beta$, $\\widehat{q}_{1-\\beta}$ from an underlying quantile prediction algorithm learned on the training data. In the case of Split-CQR, as observed by \\citet{romano2019conformalized} and \\citet{sesia2019comparison}, tuning $\\beta$ leads to improved performance.\nA comparison across nominal quantile levels is reported in Figure~\\ref{fig:width-nominal-level} for QOOB-100 ($k\\alpha$) and Split-CQR-100 ($k\\alpha$), with OOB-NCC-100 as a fixed baseline (that does not vary with $k$). We observe that the nominal quantile level significantly affects the performance of Split-CQR and QOOB. Both methods perform well at a nominal quantile level of about $2\\alpha$, although this identification procedure may violate the exchangeability requirement of Theorem~\\ref{thm:OOB-cross-validity}. We encourage a more detailed study on the theoretical and empirical aspects of nominal quantile selection in future work. 
We also note that for all values of $k$, QOOB typically has smaller standard deviation of the average-mean-width compared to Split-CQR as denoted by the shaded region, implying more reliability in the predictions. \n\n\n\\begin{figure}[t]\n\t\\includegraphics[width=0.9\\textwidth]{legendAlpha.png}\\vspace{-0.8cm}\\\\\n\t\\centering\n\t\\subfloat[Concrete structure.]{\\includegraphics[width=0.33\\textwidth]{concrete_OOBvaryAlpha.pdf}}\n\t\\subfloat[Blog feedback.]{ \\includegraphics[width=0.33\\textwidth]{blog_OOBvaryAlpha.pdf}}\n\t\\subfloat[Protein structure.]{\\includegraphics[width=0.33\\textwidth]{protein_OOBvaryAlpha.pdf}}\\\\\n\t\\subfloat[Superconductivity.]{\\includegraphics[width=0.33\\textwidth]{superconductor_OOBvaryAlpha.pdf}}\n\t\\subfloat[News popularity.]{ \\includegraphics[width=0.33\\textwidth]{news_OOBvaryAlpha.pdf}}\n\t\\subfloat[Kernel performance.]{\\includegraphics[width=0.33\\textwidth]{kernel_OOBvaryAlpha.pdf}}\n\t\\caption{QOOB and Split-CQR are sensitive to the nominal quantile level $\\beta = k\\alpha$. At $\\beta \\approx 2\\alpha$, they perform better than OOB-NCC for all datasets (OOB-NCC does not require nominal quantile tuning and is a constant baseline). For the plots above, $\\alpha = 0.1$ and all methods plotted have empirical mean-coverage at least $1-\\alpha$. The mean-width values are averaged over 100 iterations. The shaded area denotes $\\pm 1$ std-dev for the average of mean-width. }\n\t\\label{fig:width-nominal-level}\n\\end{figure}\n\n\n\\subsection{QOOB has shorter prediction intervals as we increase the number of trees}\n\\label{subsec:QOOB-trees}\nIn this experiment, we investigate the performance of the competitive conformal methods from Table~\\ref{table:overall-comparison-width} as the number of trees $T$ are varied. For QOOB and Split-CQR, we fix the quantile level to $\\beta = 2\\alpha$. We also compare with OOB-NCC and another quantile based OOB method described as follows. Like QOOB, suppose we are given a quantile estimator $\\widehat{q}_s(\\cdot)$. Consider the quantile-based construction of nested sets suggested by~\\citet{chernozhukov2019distributional}: \n\\[\\mathcal{F}_t(x) = [\\widehat{q}_t(x), \\widehat{q}_{1-t}(x)]_{t \\in (0, 1\/2)}.\n\\]\nUsing these nested sets and the OOB conformal scheme (Section~\\ref{sec:OOB-conformal}) leads to the QOOB-D method (for `distributional' conformal prediction as the original authors called it). Since QOOB-D does not require nominal quantile selection, we considered this method as a possible solution to the nominal quantile problem of QOOB and Split-CQR (subsection~\\ref{subsec:nominal-quantile}). The results are reported in Figure~\\ref{fig:width-trees} for $T$ ranging from 50 to 400. 
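As an aside on the role of $T$: with bootstrap resampling, roughly a $1/e \\approx 36.8\\%$ fraction of the $T$ trees is out-of-bag for any given training point (the complement of the ${\\approx}63.2\\%$ in-bag fraction noted earlier), so the number of trees available for each out-of-bag prediction grows linearly with $T$. The short Python sketch below, which is purely illustrative and not part of the released code, verifies this bookkeeping.
\\begin{verbatim}
import numpy as np

def oob_tree_counts(n, T, seed=0):
    # Draw T bootstrap bags of size n from [n] and count, for each training
    # point, how many trees are out-of-bag (the point is absent from the bag).
    rng = np.random.default_rng(seed)
    bags = rng.integers(0, n, size=(T, n))     # sample indices with replacement
    in_bag = np.zeros((T, n), dtype=bool)
    for j in range(T):
        in_bag[j, bags[j]] = True
    return (~in_bag).sum(axis=0)               # out-of-bag tree count per point

counts = oob_tree_counts(n=768, T=100)
print(counts.mean() / 100)                     # close to (1 - 1/768)^768, about 0.368
\\end{verbatim}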
\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{legendTrees.png}\\vspace{-0.6cm}\n\t\\subfloat[Concrete structure.]{\\includegraphics[width=0.33\\textwidth]{concrete_OOBvaryTrees.pdf}}\n\t\\subfloat[Blog feedback.]{ \\includegraphics[width=0.33\\textwidth]{blog_OOBvaryTrees.pdf}}\n\t\\subfloat[Protein structure.]{\\includegraphics[width=0.33\\textwidth]{protein_OOBvaryTrees.pdf}}\\\\\n\t\\subfloat[Superconductivity.]{\\includegraphics[width=0.33\\textwidth]{superconductor_OOBvaryTrees.pdf}}\n\t\\subfloat[News popularity.]{ \\includegraphics[width=0.33\\textwidth]{news_OOBvaryTrees.pdf}}\n\t\\subfloat[Kernel performance.]{\\includegraphics[width=0.33\\textwidth]{kernel_OOBvaryTrees.pdf}}\n\t\\caption{The performance of QOOB ($2\\alpha$) improves with increasing number of trees $T$, while the performance of Split-CQR ($2\\alpha$) does not. QOOB ($2\\alpha$) beats every other method except Split-CQR ($2\\alpha$) for all values of $T$.\n\t\tFor the plots above, $\\alpha = 0.1$ and all methods plotted have empirical mean-coverage at least $1-\\alpha$. The mean-width values are averaged over 100 iterations. The shaded area denotes $\\pm 1$ std-dev for the average of mean-width. }\n\t\\label{fig:width-trees}\n\\end{figure}\n\nWe observe that with increasing $T$, QOOB continues to show improving performance in terms of the width of prediction intervals. Notably, this is not true for Split-CQR, which does not show improving performance beyond 100 trees. In the results reported in Table~\\ref{table:overall-comparison-width}, we noted that Split-CQR-100 outperformed QOOB-100 on the blog feedback, news popularity and kernel performance datasets. \nHowever, from Figure~\\ref{fig:width-trees} we observe that for $T = 400$, QOOB performs almost the same as Split-CQR on blog feedback and news popularity, and in fact does significantly better than Split-CQR on kernel performance. Further, QOOB shows lower values for the standard deviation of the average-mean-width. (The QOOB-D method performs worse than QOOB for every dataset, and hence we did not report it in the other comparisons in this paper.)\n\n\n\\subsection{QOOB outperforms Split-CQR at small sample sizes}\\label{subsec:QOOB-sample-size}\nQOOB needs $n$ times more computation than Split-CQR to produce prediction intervals, since one needs to make $n$ individual predictions. If fast prediction time is desired, our experiments in subsections~\\ref{subsec:nominal-quantile} and \\ref{subsec:QOOB-trees} indicate that Split-CQR is a competitive quick alternative. However, here we demonstrate that at the small sample regime, QOOB significantly outperforms Split-CQR on all six datasets that we have considered.\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{legendnTrain.png}\\vspace{-0.6cm}\n\t\\centering\n\t\\subfloat[Concrete structure.]{\\includegraphics[width=0.33\\textwidth]{concrete_OOBvaryN.pdf}}\n\t\\subfloat[Blog feedback.]{ \\includegraphics[width=0.33\\textwidth]{blog_OOBvaryN.pdf}}\n\t\\subfloat[Protein structure.]{\\includegraphics[width=0.33\\textwidth]{protein_OOBvaryN.pdf}}\\\\\n\t\\subfloat[Superconductivity.]{\\includegraphics[width=0.33\\textwidth]{superconductor_OOBvaryN.pdf}}\n\t\\subfloat[News popularity.]{ \\includegraphics[width=0.33\\textwidth]{news_OOBvaryN.pdf}}\n\t\\subfloat[Kernel performance.]{\\includegraphics[width=0.33\\textwidth]{kernel_OOBvaryN.pdf}}\n\t\\caption{The performance of QOOB and Split-CQR with varying number of training points $n$. 
QOOB has shorter mean-width than Split-CQR across datasets for small $n$ and also smaller standard-deviation of the average mean-width. Here, $\\alpha = 0.1$ and all methods plotted have empirical mean-coverage at least $1-\\alpha$. The mean-width values are averaged over 100 iterations. The shaded area denotes $\\pm 1$ std-dev for the average of mean-width. }\n\t\\label{fig:width-n}\n\\end{figure}\nTo make this comparison, we subsample the datasets to a smaller sample size and consider the mean-width and mean-coverage properties of QOOB ($2\\alpha$) and Split-CQR ($2\\alpha$) with $T = 100$. Figure~\\ref{fig:width-n} contains the results with $n$ ranging from $30$ to $240$. We observe that at small $n$, QOOB does significantly better than Split-CQR. This behavior is expected since at smaller values of $n$, the statistical loss due to sample splitting is most pronounced. \n\nSince the overall computation time decreases as $n$ decreases, QOOB is a significantly better alternative in the small sample regime on all fronts. \n\n\n\n\\subsection{Cross-conformal outperforms jackknife+}\n\\label{subsec:cc-jp}\nCross-conformal prediction sets are always smaller than the corresponding jackknife+ prediction sets by construction; see Subsection~\\ref{sec:intervals-LOO} and~\\eqref{eq:loo-jp}. This implies that in terms of both metrics (mean-width and mean-coverage), cross-conformal is no worse than jackknife+. The fact however that cross-conformal may not be an interval might be of practical importance. The aim of this subsection is to show that the jackknife+ prediction interval can sometimes be strictly larger than even the smallest interval containing the cross-conformal prediction set. \nWe demonstrate this through the performance of QOOB against QOOB-JP and QOOB-Conv on the blog feedback dataset (Table~\\ref{table:cc-jp}). Here QOOB-JP refers to the OOB-JP version~\\eqref{eq:def-oob-jp} (whereas QOOB uses the OOB version~\\eqref{eq:def-oob-cross}), and QOOB-Conv corresponds to the shortest interval completion of QOOB, that is, the convex hull of QOOB prediction set. For each of these, we set the nominal quantile level to $0.5$ instead of $2\\alpha$ as suggested earlier because this led to the most pronounced difference in mean-widths. \n\\begin{table}[H]\n\t\\caption{Mean-width of $C^{\\texttt{OOB}}(x), \\mathrm{Conv}(C^{\\texttt{OOB}}(x)),$ and $C^{\\texttt{OOB-JP}}(x)$ for the blog feedback dataset with QOOB method. The base quantile estimator is quantile regression forests, and $\\alpha = 0.1$. Average values across 100 simulations are reported with the standard deviation in brackets . }\n\t\\centering\n\t\\begin{tabular}{c|c|c}\n\t\t\\hline\n\t\t\\textbf{Method} &\\textbf{Mean-width} & \\textbf{Mean-coverage} \\\\ \\hline\\hline\n\t\tQOOB-100 ($\\beta = $0.5) & \\textbf{14.67 (0.246)} & 0.908 (0.002) \\\\%[-0.1in]& (0.246) & (0.002)\\\\ \n\t\t\\hline\n\t\tQOOB-Conv-100 ($\\beta = $0.5) & 14.73 (0.249) & 0.908 (0.002) \\\\%[-0.1in]& (0.249) & (0.002) \\\\ \n\t\t\\hline\n\t\tQOOB-JP-100 ($\\beta = $0.5) & 15.36 (0.248) & 0.911 (0.002) \\\\%[-0.1in] & (0.248) & (0.002) \\\\ \n\t\t\\hline\n\t\\end{tabular}\n\t\\label{table:cc-jp}\n\\end{table}\nWhile this is a specific setting, our goal is to provide a proof of existence. There may be other settings where QOOB and QOOB-JP are identical or nearly equal. 
Because the cross-conformal prediction set as well as its convex hull can be computed in nearly the same time (as shown in subsection~\\ref{subsec:compuation-cross}), one should always prefer cross-conformal or its convex hull over jackknife+.\n\n\\subsection{QOOB demonstrates conditional coverage empirically}\n\\label{subsec:synthetic}\nTo demonstrate that Split-CQR exhibits conditional coverage, \\citet[Appendix B]{romano2019conformalized} designed the following data-generating distribution for $P_{XY}$:\n\\begin{align*}\n\t\\epsilon_1 &\\sim N(0, 1), \n\t\\epsilon_2 \\sim N(0, 1), \n\tu \\sim \\text{Unif}[0, 1], \\quad\\mbox{and}\\quad X \\sim \\text{Unif}[0, 1], \\\\\n\tY &\\sim \\text{Pois}(\\sin^2(X) + 0.1) + 0.03 X \\epsilon_1 + 25 \\mathbf{1}\\{u < 0.01\\} \\epsilon_2.\n\\end{align*}\nWe use the same distribution to demonstrate the conditional coverage of QOOB. Additionally, we performed the experiments at smaller sample sizes ($n \\leq 400$) to understand the effect of sample size on both methods (the original experiments used $n = 2000$). Figure~\\ref{fig:synthetic} summarizes the experiments. \n\\begin{figure}[!h]\n\t\\hspace{1.2cm}\\includegraphics[width=0.2\\textwidth]{legendSynthetic1.png}\\hspace{1.6cm}\\includegraphics[width=0.18\\textwidth]{legendSynthetic4.png}\\hspace{1.5cm}\\includegraphics[width=0.2\\textwidth]{legendSynthetic3.png}\n\t \\vspace{-1cm}\n\t\\centering\n\t\\subfloat[$n = 100$. (QOOB) MW = 2.16, MC = 0.91. (Split-CQR) MW = 2.23, MC = 0.92.]{\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_100_visual.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_100_coverage.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_100_length.pdf}}\\\\\\vspace{-0.5cm}\n\t\\subfloat[$n = 200$. (QOOB) MW = 1.99, MC = 0.92. (Split-CQR) MW = 2.18, MC = 0.91.]{\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_200_visual.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_200_coverage.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_200_length.pdf}}\\\\\\vspace{-0.5cm}\n\t\\subfloat[$n = 300$. (QOOB) MW = 1.86, MC = 0.89. (Split-CQR) MW = 1.94, MC = 0.89.]{\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_300_visual.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_300_coverage.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_300_length.pdf}}\\\\\\vspace{-0.5cm}\n\n\n\n\t\\subfloat[$n = 400$. (QOOB) MW = 2.10, MC = 0.91. (Split-CQR) MW = 2.09, MC = 0.90.]{\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_400_visual.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_400_coverage.pdf}\\includegraphics[width=0.33\\textwidth]{cqr_qrc_synthetic_400_length.pdf}}\n\t\\caption{The performance of QOOB-100 and Split-CQR-100 on synthetic data with varying number of training points $n$ ($\\alpha = 0.1$). MW refers to mean-width and MC refers to mean-coverage. QOOB shows conditional coverage at smaller values of $n$ than Split-CQR. Subsection~\\ref{subsec:synthetic} contains more experimental details.} \n\t\\label{fig:synthetic}\n\\end{figure}\n\nFor this experiment, the number of trees $T$ is set to $100$ for both methods. To choose the nominal quantile level, we first ran the Python notebook at \\url{https:\/\/github.com\/yromano\/cqr} to reproduce the original experiments performed by \\citet{romano2019conformalized}. Their code first learns a nominal quantile level for Split-CQR by cross-validating. 
On executing their code, we typically observed values near $0.1$ for $\\alpha = 0.1$ and hence we picked this nominal quantile level for our experiments as well (for both Split-CQR and QOOB). Another aspect that we needed to be careful about was the depth of the quantile regression forest. For our simple 1-dimensional distribution, deeper trees lead to wide prediction sets for both Split-CQR and QOOB. As was also done in the original experiments, the performance improves if the minimum number of training data-points in the tree leaves is set to $40$. We do so in our experiment as well. \n\n\n\\subsection{Additional information on experiments}\n\n\nOur experimental results can be reproduced using the QOOB implementation at \\url{https:\/\/github.com\/AIgen\/QOOB}. Details for the datasets used in our experiments are provided in Table~\\ref{table:metadata}. \\label{subsec:experiment-info}\nAll our experiments were conducted in MATLAB using the TreeBagger class\nfor training regression forests and quantile regression forests. Default parameters were used for all datasets apart from the synthetic dataset of subsection~\\ref{subsec:synthetic}. \n\n\\begin{table}\n\t\\caption{Meta-data for the datasets used in our experiments. $N$ refers to the total number of data-points from which we create 100 versions by independently drawing 1000 data points randomly (as described in the beginning of Section~\\ref{sec:numerical-experiments}). $d$ refers to the feature dimension. }\n\t\\centering\n\t\\resizebox{\\textwidth}{!}{\\begin{tabular}{|c|l|c|c|}\n\t\t\t\\hline\n\t\t\t\\textbf{Dataset} & \\textbf{$N$} & \\textbf{$d$} &\\textbf{URL (\\url{http:\/\/archive.ics.uci.edu\/ml\/datasets\/*})} \\\\\\hline \n\t\t\tBlog & 52397 & 280 & \\url{BlogFeedback} \\\\ \\hline\n\t\t\tProtein & 45730 & 8 & \\url{Physicochemical+Properties+of+Protein+Tertiary+Structure} \\\\ \\hline\n\t\t\tConcrete & 1030 & 8 & \\url{Concrete+Compressive+Strength} \\\\ \\hline\n\t\t\tNews & 39644 & 59 & \\url{Online+News+Popularity} \\\\ \\hline\n\t\t\tKernel\\tablefootnote{The GPU kernel dataset contains four output variables corresponding to four measurements of the same entity. The output variable is the average of these values.} & 241600 & 14 &\\url{SGEMM+GPU+kernel+performance} \\\\ \\hline\n\t\t\tSuperconductivity & 21263 & 81& \\url{Superconductivty+Data} \\\\ \\hline\n\t\\end{tabular}}\n\t\\label{table:metadata}\n\\end{table}\n\n\n\n\n\n\\section{Summary}\\label{sec:summary}\n\nThis paper introduced a simple alternative framework to the score-based conformal prediction, for developing distribution-free prediction sets, which is instead based on a sequence of nested prediction sets. We argued that our nested conformal prediction framework is arguably more natural and intuitive. \nWe demonstrated how to translate a variety of existing nonconformity scores into nested prediction sets.\nWe showed how cross-conformal prediction, the jackknife+, and out-of-bag conformal can be described in our nested framework. The interpretation provided by nested conformal opens up a variety of new procedures\nto practitioners. We propose one such method: QOOB, which uses quantile regression forests to perform out-of-bag conformal prediction. We demonstrated empirically that QOOB achieves state-of-the-art performance on multiple real-world datasets. \n\n\\subsection*{Acknowledgments}\nThe authors would like to thank Robin Dunn, Jaehyeok Shin and Aleksandr Podkopaev for comments on an initial version of this paper. 
AR is indebted to Rina Barber, Emmanuel Cand{\\`e}s and Ryan Tibshirani for several insightful conversations on conformal prediction. This work used the Extreme Science and Engineering Discovery Environment (XSEDE)~\\citep{towns2014xsede}, which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system~\\citep{Nystrom:2015:BUF:2792745.2792775}, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). CG thanks the PSC consultant Roberto Gomez for support with MATLAB issues on the cluster.\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}