diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdwlg" "b/data_all_eng_slimpj/shuffled/split2/finalzzdwlg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdwlg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThroughout this paper, $R$ is a commutative noetherian ring and $\\frak a$ is an ideal of $R$. Given a Serre subcategory $\\cS$ of $R$-modules, an $R$-module $M$ is said to be $\\cS$-$\\frak a$-{\\it cofinite} if $\\Supp M\\subseteq V(\\frak a)$ and $\\Ext_R^i(R\/\\frak a, M)\\in\\cS$ for all integers $i\\geq 0$. Let $\\cN$ be the subcategory of finitely generated $R$-modules.\n\n The extension subcategory induced by $\\cN$ and $\\cS$ is denoted by $\\cN\\cS$; it consists of those $R$-modules $M$ for which there exists an exact sequence $0\\To N\\To M\\To S\\To 0$ such that $N\\in\\cN$ and $S\\in\\cS$. It was proved in [Y] that $\\cN\\cS$ is Serre. A well-known example of such a subcategory is $\\cN\\cA$, the subcategory of {\\it minimax modules} studied in [Z], where $\\cA$ is the subcategory of artinian modules. Another example is $\\cN\\cF$, the subcategory of FSF modules introduced by [Q], where $\\cF$ consists of all modules of finite support. When $\\cS=0$, an $\\cN\\cS$-$\\frak a$-cofinite module is precisely an $\\frak a$-cofinite module, a notion first defined by Hartshorne [H], who thereby gave a negative answer to a question of Grothendieck [G, Expos\\'e XIII, Conjecture 1.1]. Many authors [BNS, M1, M2, NS] have studied $\\frak a$-cofiniteness in various settings. 
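For concreteness, the definition can be unpacked in the two special cases just mentioned (this is a routine instantiation of the definition, not a new result):

```latex
% S = 0: NS = N, so NS-a-cofiniteness is Hartshorne's a-cofiniteness:
%   Supp M \subseteq V(a) and every Ext^i_R(R/a, M) finitely generated.
% S = A (artinian modules): NS = the minimax modules, and
\[
  M \text{ is } \mathcal{NA}\text{-}\mathfrak{a}\text{-cofinite}
  \iff
  \operatorname{Supp} M \subseteq V(\mathfrak{a})
  \text{ and }
  \operatorname{Ext}^{i}_{R}(R/\mathfrak{a}, M) \text{ is minimax for all } i \geq 0.
\]
```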
\n\nThe main aim of this paper is to extend the fundamental results about $\\frak a$-cofinite modules in small dimensions to $\\cN\\cS$-$\\frak a$-cofinite modules.\n We recall that a Serre subcategory $\\cS$ satisfies the condition $C_{\\frak a}$ if for every $R$-module $M$, the following implication holds.\n\\begin{center}\n$C_{\\frak a}$: If $\\Gamma_{\\frak a}(M)=M$ and $(0:_M{\\frak a})$ is in $\\cS$, then\n$M$ is in $\\cS$.\n\\end{center}\n\nFor every $\\frak p\\in\\Spec R$, we denote by $\\cS({\\frak p})$ the smallest Serre subcategory of $R_{\\frak p}$-modules containing $\\cS_{\\frak p}=\\{M_{\\frak p}|\\hspace{0.1cm} M\\in\\cS\\}$. For every $R$-module $M$, $\\dim M$ denotes the dimension of $\\Supp_RM$, that is, the length of the longest chain of prime ideals in $\\Supp_RM$. In Section 2, we prove the following result. \n\\begin{Theorem}\nLet $(R,\\frak m)$ be a local ring and let $\\cS(\\frak p)$ satisfy the condition $C_{\\frak pR_{\\frak p}}$ for each $\\frak p\\in V(\\frak a)$. If $M$ is an $R$-module of dimension $d$ such that $\\Ext_R^i(R\/\\frak a,M)\\in\\cN\\cS$ for each $i\\leq d$, then $\\Ext_R^i(N,M)\\in\\cN\\cS$ for each $i\\geq 0$ and each finitely generated $R$-module $N$ with $\\dim N\\leq 2$ and $\\Supp_RN\\subseteq V(\\frak a)$.\n\\end{Theorem}\n Assume that $M$ is an $R$-module such that $\\Supp_R M\\subseteq V(\\frak a)$. Melkersson [M1, Theorem 2.3] showed that if $\\dim R\/\\frak a=1$, then $M$ is $\\frak a$-cofinite if and only if $\\Hom_R(R\/\\frak a,M)$ and $\\Ext_R^1(R\/\\frak a,M)$ are finitely generated. The above theorem generalizes this result when $R$ is a local ring and $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for each $\\frak p\\in V(\\frak a)$. Indeed, we deduce that if $\\dim R\/\\frak a=1$, then $M$ is $\\cN\\cS$-$\\frak a$-cofinite if and only if $\\Hom_R(R\/\\frak a,M)$ and $\\Ext_R^1(R\/\\frak a,M)$ are in $\\cN\\cS$. 
Moreover, if $R$ is a local ring of dimension $2$ such that $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for every prime ideal $\\frak p$ of $R$ with $\\dim R\/\\frak p\\leq 1$, then $M$ is $\\cN\\cS$-$\\frak a$-cofinite if and only if $\\Hom_R(R\/\\frak a,M)$ and $\\Ext_R^1(R\/\\frak a,M)$ are in $\\cN\\cS$. \n\nBahmanpour et al. [BNS, Theorem 3.5] showed that if $R$ is a local ring such that $\\dim R\/\\frak a=2$, then $M$ is $\\frak a$-cofinite if and only if $\\Ext_R^i(R\/\\frak a,M)$ is finitely generated for $i=0,1,2$. As another conclusion, we generalize this result when $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for each $\\frak p\\in V(\\frak a)$. To be more precise, we deduce that if $\\dim R\/\\frak a=2$, then $M$ is $\\cN\\cS$-$\\frak a$-cofinite if and only if $\\Ext_R^i(R\/\\frak a,M)\\in\\cN\\cS$ for $i=0,1,2$. Moreover, if $R$ is a local ring of dimension $3$ such that $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for every prime ideal $\\frak p$ with $\\dim R\/\\frak p\\leq 2$, then $M$ is $\\cN\\cS$-$\\frak a$-cofinite if and only if $\\Ext_R^i(R\/\\frak a,M)\\in\\cN\\cS$ for $i=0,1,2$. We prove the following result about local cohomology which generalizes [NS, Theorem 3.7]. For the basic properties and unexplained terminology of local cohomology, we refer the reader to the textbook by Brodmann and Sharp [BS]. \n\n\\begin{Theorem}\nLet $(R,\\frak m)$ be a local ring, let $\\frak a$ be an ideal of $R$ such that $\\dim R\/\\frak a=2$ and let\n$\\cS(\\frak p)$ satisfy the condition $C_{\\frak pR_{\\frak p}}$ for each $\\frak p\\in V(\\frak a)$. If $n$ is a non-negative integer such that $\\Ext_R^i(R\/\\frak a,M)\\in\\cN\\cS$ for all $i\\leq n+1$, then the following conditions are equivalent.\n \n{\\rm (i)} $H_{\\frak a}^i(M)$ is $\\cN\\cS$-$\\frak a$-cofinite for all $i<n$.\n\n{\\rm (ii)} $\\Hom_R(R\/\\frak a,H_{\\frak a}^n(M))\\in\\cN\\cS$.\n\\end{Theorem}\n Assume that $d>0$ and we proceed by induction on $i$. 
If $\\dim R\/\\frak b=0$, since $\\Ext_R^i(R\/\\frak b,M)\\in\\cN\\cS$, arguing as above, $\\Ext_R^i(R\/\\frak b,M)\\in\\cS$ for all $i\\leq d$; and hence it follows from [AM, Theorem 2.9] that $H_{\\frak b}^i(M)\\in\\cS$ for all $i\\geq 0$; and hence the assertion is clear in this case. If $\\dim R\/\\frak b=1$, it follows from [AS, Theorem 3.5] that $H_{\\frak b}^i(M)$ is $\\cN\\cS$-$\\frak b$-cofinite for all $i\\geq 0$. If $n>1$, since $\\Gamma_{\\frak a}(M)\\in\\cN\\cS$, it follows from $(\\dag_i)$ and the previous isomorphisms that $\\Ext_R^j(R\/\\frak a,Q)\\in\\cN\\cS$ for all $j\\leq n$ so that (i) and (ii) are equivalent for $Q$ and the non-negative integer $n-1$. Now, using again the previous isomorphisms, the conditions (i) and (ii) are equivalent for $M$ and the non-negative integer $n$.\n\\end{proof}\n\n\n\\medskip\n\\begin{Corollary}\nLet $R$ be a local ring with $\\dim R\/\\frak a\\leq 2$, let $n$ be a non-negative integer such that $\\Ext_R^i(R\/\\frak a,M)\\in\\cN\\cF$ for all $i\\leq n+1$. Then the following conditions are equivalent.\n \n{\\rm (i)} $H_{\\frak a}^i(M)$ is $\\cN\\cF$-$\\frak a$-cofinite for all $i<n$.\n\n{\\rm (ii)} $\\Hom_R(R\/\\frak a,H_{\\frak a}^n(M))\\in\\cN\\cF$.\n\\end{Corollary}\n Assume that $i>0$ and the result has been proved for all values smaller than $i$. Then $E_2^{p,0}=\\Ext_{\\overline{R}}^p(\\overline{R}\\otimes_RR\/\\frak a+\\Gamma_{\\frak a}(R),\\overline{M})\\in\\cN\\cS$ for all $0\\leq p\\leq i$. For $r>2$, the $\\overline{R}$-module $E_r^{p,q}$ is a subquotient of $E_{r-1}^{p,q}$ and so an easy induction yields that $E_r^{p,q}\\in\\cN\\cS$ for all $r\\geq 2$ so that $E_{\\infty}^{p,q}\\in\\cN\\cS$ for all $p,q\\geq 0$. For any $0\\leq t\\leq n$, there is a finite filtration \n$$0=\\Phi^{t+1}H^t\\subset \\Phi^tH^t\\subset\\dots\\subset\\Phi^1H^t\\subset \\Phi^0H^t\\subset H^t$$ \nsuch that $\\Phi^pH^t\/\\Phi^{p+1}H^t\\cong E_{\\infty}^{p,t-p}$ where $0\\leq p\\leq t$. 
Since $E_{\\infty}^{p,t-p}\\in\\cN\\cS$ for all $0\\leq p\\leq t$ and $t\\geq 0$, we deduce that $H^t\\in\\cN\\cS$ for all $t\\geq 0$; and hence $\\overline{M}$ is $\\cN\\cS$-$\\frak a$-cofinite. On the other hand, since $(0:_M\\frak a^n)\\in\\cN\\cS$, we conclude that $M$ is $\\cN\\cS$-$\\frak a$-cofinite.\n\\end{proof}\n\\medskip\n\n\\begin{Corollary}\nLet $R$ be a local ring of dimension $2$ such that $\\cS$ satisfies the condition $C_{\\frak a}$ for every ideal $\\frak a$ of $R$ with $\\dim R\/\\frak a\\leq 1$. Then $R$ admits the condition $P_1^{\\cN\\cS}(\\frak a)$ for every ideal $\\frak a$ of $R$. \n\\end{Corollary}\n\\begin{proof}\nIt follows from [AS, Theorem 3.2] and \\cref{t22} that $R$ admits the condition $P_1^{\\cN\\cS}(\\frak a)$ for all ideals $\\frak a$ with $\\dim R\/\\frak a\\leq 1$. Now, the result follows from \\cref{td}.\n\\end{proof}\n\n\n\\medskip\n\\begin{Corollary}\nLet $R$ be a local ring of dimension $2$ such that $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for every prime ideal $\\frak p$ with $\\dim R\/\\frak p\\leq 1$. Then $R$ admits the condition $P_2^{\\cN\\cS}(\\frak a)$ for every ideal $\\frak a$ of $R$. \n\\end{Corollary}\n\\begin{proof}\nIt follows from \\cref{t22} that $R$ admits the condition $P_1^{\\cN\\cS}(\\frak a)$ for all ideals $\\frak a$ with $\\dim R\/\\frak a\\leq 1$. Now, the result follows from \\cref{td}.\n\\end{proof}\n\\medskip\n\n\\begin{Corollary}\nLet $R$ be a local ring of dimension $3$ such that $\\cS(\\frak p)$ satisfies the condition $C_{\\frak pR_{\\frak p}}$ for every prime ideal $\\frak p$ with $\\dim R\/\\frak p\\leq 2$. Then $R$ admits the condition $P_2^{\\cN\\cS}(\\frak a)$ for every ideal $\\frak a$ of $R$. \n\\end{Corollary}\n\\begin{proof}\nIt follows from \\cref{t21} that $R$ admits the condition $P_2^{\\cN\\cS}(\\frak a)$ for all ideals $\\frak a$ with $\\dim R\/\\frak a\\leq 2$. 
Now, the result follows from \\cref{td}.\n\\end{proof}\n\n\\medskip\n\\begin{Corollary}\nLet $R$ be a local ring of dimension $3$. Then $R$ admits the condition $P_2^{\\cN\\cF}(\\frak a)$ for every ideal $\\frak a$ of $R$. \n\\end{Corollary}\n\\begin{proof}\nIt follows from \\cref{f} that $R$ admits the condition $P_2^{\\cN\\cF}(\\frak a)$ for all ideals $\\frak a$ with $\\dim R\/\\frak a=2$. Now, the result follows from \\cref{td}.\n\\end{proof}\n\n\\section{Cofiniteness with respect to a new dimension}\n\nFor every $R$-module $M$, it is clear that $\\Supp_RM\\subseteq V(\\Ann_RM)$, with equality when $M$ is finitely generated. We define the {\\it upper dimension} of $M$, denoted $\\overline{\\dim}M$, by $\\overline{\\dim}M=\\dim R\/\\Ann_RM$. Clearly $\\dim M\\leq \\overline{\\dim}M$. We first recall some results which are needed in this section. \n\\begin{Lemma}\\label{del}\nLet $S$ be a finitely generated $R$-algebra and let $M$ be an $S$-module. Then $M$ is $\\frak a$-cofinite if and only if $M$ is $\\frak a S$-cofinite (as an $S$-module).\n\\end{Lemma}\n\\begin{proof}\nSee [DM, Proposition 2].\n\\end{proof}\n\\medskip\n\\begin{Lemma}\nLet $S$ be a finitely generated $R$-algebra and let $M$ be an $S$-module. Then $M$ satisfies the condition $P_n^{\\cN}(\\frak a)$ if and only if $M$ satisfies the condition $P_n^{\\cN}(\\frak aS)$. \n\\end{Lemma}\n\\begin{proof}\nSee [KS, Proposition 2.15].\n\\end{proof}\n\n\n\n\\medskip\n\\begin{Lemma}\\label{mel}\nLet $M$ be an $R$-module such that $\\Supp_RM\\subseteq V(\\frak a)$. Then $M$ is Artinian and $\\frak a$-cofinite if and only if $(0:_M{\\frak a})$ has finite length.\n\\end{Lemma}\n\\begin{proof}\nSee [M1, Proposition 4.1].\n\\end{proof}\nFor every non-negative integer $n$, we denote by $\\cD_{\\leq n}$ the subcategory of all $R$-modules $M$ such that $\\overline{\\dim} M\\leq n$. We also denote by $\\cG$ the subcategory of all $R$-modules $F$ such that $V(\\Ann_RF)$ is a finite set. 
\n\n\\medskip\n\\begin{Lemma}\\label{ser}\nIf $0\\To N\\To M\\To M\/N\\To 0$ is an exact sequence of $R$-modules, then $V(\\Ann_RM)=V(\\Ann_RN)\\cup V(\\Ann_RM\/N)$.\n\\end{Lemma}\n\\begin{proof}\nSince $\\Ann_RM\\subseteq \\Ann_RN\\cap\\Ann_R M\/N$, we conclude that $V(\\Ann_RN)\\cup V(\\Ann_RM\/N)\\subseteq V(\\Ann_RM)$. Now assume that $\\frak p\\in V(\\Ann_RM)$ and $\\frak p\\notin V(\\Ann_RN)$. Then there exists $r\\in\\Ann_RN\\setminus \\frak p$ and so for every $x\\in\\Ann_R M\/N$, we have $rx\\in\\Ann_RM\\subseteq \\frak p$ which implies that $x\\in\\frak p$. Consequently, $\\frak p\\in V(\\Ann_RM\/N)$. \n\\end{proof}\n\n\\medskip\n\n \n\n\\begin{Corollary}\nThe following statements hold.\n\n{\\rm (i)} For every non-negative integer $n$, the subcategory $\\cD_{\\leq n}$ is Serre.\n\n{\\rm (ii)} The subcategory $\\cG$ is Serre.\n\\end{Corollary}\n\\begin{proof}\nThe proof is straightforward by \\cref{ser}.\n\\end{proof}\n\n\n\\medskip \n\n\\begin{Proposition}\nLet $M$ be an $R$-module with $\\overline{\\dim}M=2$. Then $M$ is $\\frak a$-cofinite if and only if $\\Hom_R(R\/\\frak a,M)$ and $\\Ext_R^1(R\/\\frak a,M)$ are finitely generated. \n\\end{Proposition}\n\\begin{proof}\nLet $\\overline {R}=R\/\\Ann_RM$. Using [KS, Proposition 2.15] we may assume that $\\dim R=2$. Now, the result follows from [NS, Corollary 2.4].\n\\end{proof}\n\n\\medskip \n\n\\begin{Proposition}\nLet $M$ be an $R$-module with $\\overline{\\dim}M=3$. Then $M$ is $\\frak a$-cofinite if and only if $\\Ext_R^i(R\/\\frak a,M)$ is finitely generated for $i=0,1,2$. \n\\end{Proposition}\n\\begin{proof}\nLet $\\overline {R}=R\/\\Ann_RM$. Using [KS, Proposition 2.15] we may assume that $\\dim R=3$. 
Now, the result follows from [NS, Corollary 2.5].\n\\end{proof}\n\\medskip\n\n\\begin{Proposition}\\label{dim1}\nThe subcategory of $\\frak a$-cofinite modules in $\\cD_{\\leq 1}$ is Serre.\n\\end{Proposition}\n\\begin{proof}\n If $\\overline{\\dim} M=0$, then $\\dim M=0$ and so the module $(0:_M\\frak a)$ has finite length. Thus it follows from \\cref{mel} that $M$ is $\\frak a$-cofinite. Now, assume that $\\overline{\\dim} M=1$ and so $\\dim \\overline{R}=1$ where $\\overline{R}=R\/\\Ann_RM$. It is clear that $(0:_M\\frak a\\overline{R})=(0:_M\\frak a)$ is finitely generated and $\\Supp_{\\overline{R}}M\\subseteq V(\\overline{\\frak a})$. It follows from [M1, Proposition 4.5] that $M$ is $\\overline{\\frak a}$-cofinite. Finally, using \\cref{del}, $M$ is $\\frak a$-cofinite. The second assertion is straightforward. \n\\end{proof}\n\n\n\\medskip\n\\begin{Corollary}\nLet $M\\in\\cN\\cG$ with $\\Supp_RM\\subseteq V(\\frak a)$. Then $M$ is $\\frak a$-cofinite if and only if $(0:_M\\frak a)$ is finitely generated. \n\\end{Corollary}\n\\begin{proof}\nSince $M\\in\\cN\\cG$, there exists an exact sequence of $R$-modules $0\\To N\\To M\\To F\\To 0$ such that $N$ is finitely generated and $F\\in\\cG$. We notice that $(0:_F\\frak a)$ is finitely generated, and it suffices to show that $F$ is $\\frak a$-cofinite and so we may assume that $V(\\Ann_RM)$ is a finite set so that $\\overline{\\dim}M\\leq 1$. It follows from [M1, Proposition 4.5] that $M$ is $\\overline{\\frak a}$-cofinite where $\\overline{R}=R\/\\Ann_RM$ and $\\overline{\\frak a}=\\frak a\\overline{R}$. Now \\cref{del} implies that $M$ is $\\frak a$-cofinite. \n\\end{proof}\n\\medskip\n\n\\begin{Proposition}\n The subcategory of $\\frak a$-cofinite modules in $\\cD_{\\leq 2}$ is abelian.\n\\end{Proposition}\n\\begin{proof}\nLet $f:M\\To N$ be a morphism of $\\frak a$-cofinite modules and set $K=\\Ker f$, $I=\\Im f$, and $C=\\Coker f$. 
\n The assumption implies that $(0:_I\\frak a)=(0:_I\\frak a\\overline{R})$ is a finitely generated $\\overline{R}$-module, where $\\overline{R}=R\/\\Ann_RM$. If $\\dim \\overline{R}=0$, the module $(0:_I\\frak a)$ has finite length and so $I$ is Artinian. Now, \\cref{mel} implies that $I$ is $\\frak a$-cofinite. If $\\dim\\overline{R}=1$, it follows from \\cref{dim1} that $I$ is $\\frak a\\overline{R}$-cofinite as $M$ is $\\frak a\\overline{R}$-cofinite; and hence $I$ is $\\frak a$-cofinite using \\cref{del}. If $\\dim\\overline{R}=2$, it follows from [NS, Corollary 2.6] that $I$ is $\\frak a\\overline{R}$-cofinite and so it is $\\frak a$-cofinite by \\cref{del}. Now, using the exact sequences of $R$-modules $$0\\To K\\To M\\To I\\To 0;$$$$0\\To I\\To N\\To C\\To 0;$$ it is straightforward to show that $K$ and $C$ are $\\frak a$-cofinite modules.\n\\end{proof}\n\n\n\\medskip\n\n\\begin{Proposition}\nThe kernel and cokernel of a homomorphism $f:M\\To N$ of $\\frak a$-cofinite modules in $\\cD_{\\leq 3}$ are $\\frak a$-cofinite if and only if $(0:_{\\Coker f}\\frak a)$ is finitely generated.\n\\end{Proposition}\n\\begin{proof}\nBy the assumption we have $\\dim R\/\\Ann_RM\\leq 3$ and also $\\dim R\/\\Ann_RN\\leq 3$. If we put $\\frak b=\\Ann_RM\\cap \\Ann_RN$ and $\\overline{R}=R\/\\frak b$, we have $\\dim \\overline{R}\\leq 3$ and further $M$ and $N$ are $R\/\\frak b$-modules. Clearly $(0:_{\\Coker f}\\frak a\\overline{R})$ is a finitely generated $\\overline{R}$-module. It follows from [NS, Theorem 2.8] that \n$\\Ker f$ and $\\Coker f$ are $\\frak a\\overline{R}$-cofinite and so using \\cref{del}, they are $\\frak a$-cofinite.\n\\end{proof}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}\n \n\\noindent An exciting area where biology meets technology is evolutionary robotics \\cite{nolfi2000evolutionary,Vargas-2014,doncieux2015evolutionary}. 
The key idea behind the field is to optimize the design of robots through evolutionary computing \\cite{eiben2003introduction-to}. Using artificial evolution for robot design has a strong rationale.\n\n\\begingroup\n\\addtolength\\leftmargini{0.1cm}\n\\begin{quote}\n{\\bf As natural evolution has produced successful life forms for practically all possible environmental niches on Earth, it is plausible that artificial evolution can produce specialised robots for various environments and tasks.}\n\\end{quote}\n\\endgroup\n\n\n\\noindent The long-term vision foresees a technology where robots for a certain application can be `bred' through iterated selection and reproduction cycles until they satisfy the users' criteria. This approach is not meant to replace classic engineering when designing robots for structured environments with known conditions. However, for complex, unstructured environments with (partially) unknown and possibly changing conditions, evolution offers great advantages \\cite{howard2019evolving}. A crucial difference between a system of evolving robots and a usual evolutionary algorithm is that the members of the population are not digital objects in a virtual space, but physical artefacts in the real world. Such a system goes beyond evolutionary computing and implements the Evolution of Things with several new challenges rooted in the physical incarnation \\cite{Eiben2012Embodied-Artifi,eiben2015evolutionary}.\n\nAnother property distinguishing robot evolution from mainstream evolutionary optimization is that robots have agency, i.e., they are active artefacts that exhibit autonomous behavior. In this regard, it is important to note that a robot is a combination of its body (morphology, hardware) and its brain (controller, software), and the behavior depends on both \\cite{Louise2011Beyond,weigmann2012does}. This implies that the evolution of robots should concern both the bodies and the brains \\cite{pfeifer2007body}. 
\n\nThus, in a full-fledged evolutionary robot system the morphologies as well as the controllers undergo evolution. This is in stark contrast with the current practice. Evolutionary robotics today is mainly concerned with evolving the controllers of simulated robots \\cite{radhakrishna2018survey}. Systems where morphologies and controllers of robots evolve simultaneously are rare and they work in simulation \\cite{Auerbach_Bongard2014Environmental}. Occasionally, an organism evolved in software is constructed in the real world, using hardware \\cite{lipson2000automatic} or `wetware' \\cite{Kriegman2020}, but the evolutionary process is simulated. This evolve-then-construct approach inevitably runs into the reality gap \\cite{jakobi1995noise}. On the other hand, there exist systems where physical objects are evolved in the real-world, but these are passive artefacts without agency \\cite{Rieffel-Sayles-2010,Kuehn-Rieffel-2012}. Complete systems where real robots undergo evolution are still ahead of us, but with the development of 3D-printing, rapid prototyping, and automated assembly such systems are becoming feasible, at least in an academic setting \\cite{brodbeck2015morphological,hale2019robot, jelisavcic2017real,Vujovic2017}. \n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=.33\\textwidth]{images\/Triangle-of-Life.pdf}\n \\caption{Generic system architecture for robot evolution conceptualized by the Triangle of Life{}.\n }\n\t\\label{fig:system}\n\\end{figure} \n\nA generic model to underpin ``evolving robots in real time and real space'' has been described recently \\cite{eiben2013triangle}. \nThis model, called the Triangle of Life{}, captures the universal life cycle of a robot, not from birth to death because that would not be a cycle, but from conception (being conceived) to conception (conceiving offspring). 
This cycle consists of three stages: morphogenesis (from conception to birth), infancy (from birth to becoming a fertile adult), and mature life (during which the robot can mate and conceive offspring multiple times), cf. Figure \\ref{fig:system}. \n\n\nA key insight behind this paper is that including a learning stage is not an arbitrary design choice; it is pivotal for mitigating a general problem. Namely, while it can be assumed that the parents had well-matching bodies and brains (otherwise they would not have been fit enough to be selected for mating), in general it cannot be assumed that crossover preserves the good match. The mismatch in the offspring may be severe (e.g., having more sensors than there are inputs in the controller) or moderate (e.g., only requiring some parameter tuning in the brain), but in any case it is important to adjust and optimize the inherited brain quickly after `birth'. \n\nThe problem we highlight here is inherent to morphological robot evolution where a large number and variety of robots is produced through consecutive generations. All these robots with different and unpredictable morphologies have to learn the tasks required by the given application. Thus, a morphologically evolving robot system needs a learning method that works for any of the possible robots and finds a good controller efficiently. \n\nCompared with current studies, the main contributions of the paper are the following. \nFirstly, a new type of controller architecture based on coupled oscillators with sensory feedback that can be customized to any modular robot driven by joints. \nIn this type of controller, a generic method is used to specify a frame of reference (coordinates for the robot modules) for any given body in our design space.\nA steering mechanism based on scaling the activation signals, depending on the coordinates of the joint and the angle between the direction to the target and the robot's heading, is used to drive the robot joints. 
\nThese controllers can be used in a closed-loop approach and steer a robot towards a target regardless of the robot's specific shape. \nThis makes it possible to generate a closed-loop controller for a modular robot with an arbitrary shape.\nSecondly, a generic learning method that allows modular robots with arbitrary shapes to learn to approach a target. \nWe validate our method with three different robots, rather than a single robot with a fixed shape, in three different scenarios. To this end, we use three modular robots, a `spider', a `gecko', and their `baby' created by an evolutionary robotics project in our lab \\cite{jelisavcic2017real}. \n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{images\/all_robots.pdf}\n \\caption{The three robots in our test suite after \\cite{jelisavcic2017real}. The spider (b) and gecko (c) are the parents, the baby (a) is the offspring. (d) exhibits the overall network topologies of the controllers and the coordinates of the joints in the baby (left), spider (middle), and gecko (right) robot. The core block (head) is red, other blocks are black. Joints are represented by circles.\n \n (e) shows the inner components of the spider robot.}\n \\label{fig:robots}\n\\end{figure}\n\n\n\n\n\\section{Good locomotion and how to learn it}\n\\label{sec:loco-learn}\n\nExisting studies of learning locomotion on modular robots can be divided into two categories in terms of the controllers: open-loop and closed-loop. The dominant approaches use Central Pattern Generators (CPGs) \\cite{Ijspeert2008learning}.\nMost papers studied gait learning with real robots using an open-loop controller with no sensory feedback from the environment \\cite{Kamimura2005Automatic,Bongard2006Science,Kyrre2015real-world,Thakker2014ReBis}. \nFor closed-loop controllers, joint angle and foot contact are typically used as sensory feedback. 
\nThe studies \\cite{owaki2013simple,owaki2017quadruped,nordmoen2019evolved} used a CPG-based controller and foot contact from force sensors on each robot leg to produce coordinated gaits and increase the robots' adaptability on various terrains. \nInertial measurement \\cite{wang2005motion,seo2010cpg,barasuol2013reactive,sartoretti2018central}, joint angles \\cite{kimura2007adaptive}, and touch sensing \\cite{righetti2008pattern,ajallooeian2013central} have been used as sensory feedback in CPG-based controllers to adjust robot behaviours for desired tasks. \nIn particular, the combination of a CPG-based controller and a camera has been used to achieve closed-loop control for directed locomotion in a hexapod robot \\cite{shaw2019workspace}.\nSimilarly, there are other studies \\cite{wu2013neurally,Ijspeert2007From} that use various sensory feedback and controllers to achieve closed-loop control for different tasks.\nAlthough these studies proposed closed-loop controllers with various sensory feedback for locomotion, they focus on learning locomotion on a fixed-shape robot such as a hexapod, a fish, or a salamander robot. However, generic methodologies for integrating sensory feedback to adapt the locomotion of a modular robot with an arbitrary shape are still lacking.\n\n\nIn this paper, we consider the problem of targeted locomotion on modular robots with arbitrary shapes. This is important, challenging, and novel. Targeted locomotion is important, because for many applications robots should be capable of going to a point of interest, be it a charging station, an object to fetch, or another robot to mate with. Targeted locomotion on evolvable modular robots is challenging because the number and the spatial arrangement of the joints, the length and branches of the limbs can vary and the overall shape can be irregular. 
This makes the simple adoption of the steering policy for wheeled robots --to turn left (right), apply more force to the wheels on the right (left)-- highly non-trivial. Finally, this problem is also novel; to the best of our knowledge, there are no publications addressing this. \n\n\n\\subsection{Robots with a sense of direction}\n\\label{sec:loco}\n\nTo enable targeted locomotion of modular robots with arbitrary morphologies, we need a generic way of defining a \\textit{frame of reference}. To this end we define a coordinate system based on the fact that our robots have a core module with a camera, see Figure \\ref{fig:robots}. By definition, the `head', that is, the core module is the origin $(0,0)$ and the direction of the camera determines `North'. Now we can assign the coordinates $\\{(1,0), (2,0), ...\\}$ and $\\{(-1,0), (-2,0), ...\\}$ to the modules (passive bricks or active joints) as we go East and West, respectively.\nSimilarly, if we go North and South we set the coordinates to $\\{(0,1), (0,2), ...\\}$ and $\\{(0,-1), (0,-2), ...\\}$, respectively. The coordinates of the three robots in this paper are shown in Figure \\ref{fig:robots}.\n\n\n\n\nTo make robots see the target we use a system proposed in \\cite{lan2018ICARCV}. When a target is detected, its direction $\\alpha \\in [-\\beta, \\beta]$ w.r.t. the robot can be determined, where $[-\\beta, \\beta]$ is the camera's field of view. This information can be combined with the frame of reference defined above: If $\\alpha < 0$, then the target is on the left; otherwise it is on the right.\n\nFollowing the literature, we use coupled oscillators to control the joints of a robot and we arrange these oscillators in a network to form Central Pattern Generators (CPGs) \\cite{ijspeert2008central, steuer2019central}. 
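The frame of reference and the left/right decision just described can be sketched in a few lines of Python (a minimal sketch; the function names and the concrete value of the half field of view beta are our assumptions, not taken from the paper's implementation):

```python
import math

# Half field of view of the camera; the paper writes [-beta, beta]
# without a numeric value, so 30 degrees is an assumed placeholder.
BETA = math.radians(30)

def target_side(alpha, beta=BETA):
    """Classify the detected target by the camera angle alpha:
    alpha < 0 means left of the robot's heading, alpha > 0 means right."""
    if abs(alpha) > beta:
        return None  # outside the field of view: target not visible
    if alpha < 0:
        return "left"
    if alpha > 0:
        return "right"
    return "ahead"

def joint_side(coord):
    """Side of a module in the frame of reference: the core module is
    (0, 0), the camera points North, negative x is West (left)."""
    x, _y = coord
    if x < 0:
        return "left"
    if x > 0:
        return "right"
    return "middle"  # mid-line joints are never scaled
```

With this convention, the coordinate assigned to each module at morphogenesis time directly tells the controller which joints to attenuate when turning.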
Such oscillators generate tightly-coupled patterns of neural activity that drive rhythmic and stereotyped locomotion behaviors like walking, swimming, and flying in vertebrate species, and they have been proven to perform well in modular robots, too \\cite{hultborn2007spinal,ijspeert2008central,Ijspeert2007From}.\n\nTo provide useful information from the environment, we introduce the notion of a sensory oscillator as shown in Figure \\ref{fig:framework} (a). \nSuch a sensory oscillator drives one joint in the robot by two coupled neurons $x$ and $y$, and an $out$-neuron that together generate a signal regulated by three weights as usual. \nSubsequently, the extra (square shaped) node generates the actual signal $sig$ for the joint by applying a function $f$ to the sensory information $sen$ and the signal of the $out$-neuron.\nThe novelty of this model lies in the extra node that considers the sensory input before generating the signal that actually drives the joint. The model is general: there is no restriction on the sensory input(s), and $f$ can be any function depending on the task at hand.\n\nThe subsystems described above can be combined into an adequate control system that enables the modular robots to move towards a target. To this end, we use the angle to the target $\\alpha$ as input, generate a scaling factor $d_p(\\alpha)$, and define a function $f$ to adjust the signals of the joints on the left and right side of the robots as necessary. The exact details are described in the Methods section; the overall effect is that if the target is on the left (right), then the joints on the left (right) apply less force and make the robot turn in the correct direction. The middle joints are never modified. 
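The sensory-oscillator idea can be illustrated as follows. The harmonic coupling of the $x$- and $y$-neurons, the linear attenuation used for $d_p(\alpha)$, and the concrete form of $f$ shown here are plausible assumptions for illustration only; the exact equations are specified in the paper's Methods section:

```python
import math

def oscillator_step(x, y, w_xy, w_yx, dt=0.05):
    """One Euler step of two coupled neurons; with positive weights
    w_xy, w_yx this produces the rhythmic signal that drives a joint."""
    x_new = x + w_xy * y * dt
    y_new = y - w_yx * x * dt
    return x_new, y_new

def steering_scale(alpha, beta=math.radians(30)):
    """An assumed linear form of d_p(alpha): attenuation in [0, 1] that
    grows stronger the further the target lies to one side."""
    return max(0.0, 1.0 - abs(alpha) / beta)

def joint_signal(out, side, alpha, w_out=1.0):
    """The extra node f(sen, out): scale the out-neuron's signal for
    joints on the side the robot must turn towards; middle joints are
    passed through unchanged."""
    sig = math.tanh(w_out * out)
    if (side == "left" and alpha < 0) or (side == "right" and alpha > 0):
        sig *= steering_scale(alpha)
    return sig
```

The design choice mirrors the wheeled-robot policy described earlier: attenuating one side of the body biases the gait so the robot curves towards the target.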
The overall architecture is exhibited in Figure \\ref{fig:framework} (b).\n\\begin{figure}[!htbp]\n\t \\centering\n\t \\includegraphics[width=\\columnwidth]{images\/framework.pdf}\n\t\t\\caption{(a) A sensory oscillator with three neurons, $x$-neuron, $y$-neuron, $out$-neuron and an extra node $f$. The function $f$ combines the raw control signal of the $x$-neuron with the sensory information $sen$. (b) Overall scheme of steering by sending control commands to actuators depending on their lateral position and the direction of the target.}\n\t\t\\label{fig:framework}\n\\end{figure}\nThe overall control system of a robot is a CPG network where sensory oscillators of neighboring joints are connected as shown in Figure \\ref{fig:robots} (d).\n\n\n\\subsection{Learning method}\n\\label{sec:learn}\n\nLearning a task in our system amounts to finding proper weights for the controller. This boils down to optimizing a black-box objective function. Bayesian optimization (BO) is the state-of-the-art method in terms of data efficiency, but its computational complexity grows cubically with respect to the number of observations \\cite{jasper2012practical}. \nAlternatively, Evolutionary Algorithms (EA) take constant time for generating candidate solutions \\cite{eiben2015evolutionary}. This makes their overhead much less computation-intensive than that of the BO, at the expense of data efficiency. \n\nTo get the best of both worlds, here we use a combined algorithm, the Bayesian Evolutionary Algorithm (BEA), that starts with BO and runs this until the time efficiency becomes too low. At this point, a certain subset $S$ of the solutions generated so far is selected by considering their quality and diversity. This set $S$ is then used as the initial population for the EA that is run until a good solution is found or the given computing budget is exhausted. 
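A sketch of the BO-to-EA hand-off described above. The greedy quality-plus-diversity subset selection and the Gaussian-mutation EA shown here are simplified stand-ins for illustration, not the exact operators of the BEA:

```python
import random

def select_seeds(archive, k, min_dist):
    """Quality + diversity hand-off from BO to the EA: walk the archive
    of (solution, fitness) pairs best-first and keep a solution only if
    it is at least min_dist away from everything already kept."""
    ranked = sorted(archive, key=lambda s: s[1], reverse=True)
    kept = []
    for x, fit in ranked:
        far_enough = all(
            sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5 >= min_dist
            for c, _ in kept)
        if far_enough:
            kept.append((x, fit))
        if len(kept) == k:
            break
    return kept

def evolve(seeds, objective, generations=30, sigma=0.05):
    """A plain elitist (mu + lambda) EA seeded with the BO subset;
    the fitness of the best individual can never decrease."""
    pop = [(x, objective(x)) for x, _ in seeds]
    mu = len(pop)
    for _ in range(generations):
        kids = []
        for x, _ in pop:
            child = [g + random.gauss(0.0, sigma) for g in x]
            kids.append((child, objective(child)))
        pop = sorted(pop + kids, key=lambda s: s[1], reverse=True)[:mu]
    return pop[0]
```

Seeding the EA this way keeps the data efficiency gained during the BO phase while avoiding BO's cubic per-evaluation overhead in the long tail of the run.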
We refer the reader to the Methods section for further details.\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\nWe employed the BEA to learn adequate controllers in simulation. Then we tested the learned controllers in three test scenarios: 1) approaching a fixed target, 2) following a moving target, and 3) following another robot that is following a moving target. Let us note that the robots do not change or tune their controllers between scenarios. \n\nWe used the customized framework \\emph{Revolve} \\cite{hupkes_2018_revolve} to learn appropriate controllers for directed locomotion at a zero angle (that is, straight ahead) on the virtual gecko, spider, and baby robots.\nThe fitness function for the BEA was a combination of the deviation from the required direction and the distance covered during the test period; see the Methods section for details. The duration of a test period was 60 seconds and the BEA was allowed to perform 1500 evaluations. \nDespite the heavy simulations, the computing times were acceptable: approximately one hour was enough to complete one run on a Linux PC with a 3.0GHz CPU, 64Gb RAM, and 32 cores with multi-threading. Conducting the whole learning process on the real robots would take several days, $1500 \\times 60$ seconds, plus overhead for re-positioning the robots between tests, charging the batteries, and handling breakdowns.\n\n\n\n\\subsection{Scenario 1: fixed target}\n\\label{sec:fixed}\nThe controllers learned in simulation are validated on the real robots, the spider, the gecko, and the baby. In the first series of experiments, we tested each robot with three different setups: one with a fixed target to the left, one with the target straight ahead of the robot, and one with the target to the right. In all cases, the target was in the field of view of the robot camera (Raspberry Pi Camera Module v2) at the start of the test. 
We repeated each test five times and displayed the observed trajectories in Figure \\ref{fig:trajectories}.\nThe experiments were recorded with an overhead camera above the test arena, as shown in the supplementary videos.\nThe plots show a consistent behavior for all robots and test cases. The path of the spider shows that it is `wobbling', see the top left plot in Figure \\ref{fig:trajectories}. This is understandable because its middle-line lies over two of its limbs, hence moving straight forward is very hard without zigzagging. Comparing the blue curves with the other colors, we can also see that the robots approach a target in the center faster. This result can be easily explained: for a target on one of the sides, the robot must turn, and that costs extra time.\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t \\begin{adjustbox}{max width=0.49\\textwidth}\n\t\\begin{tabular}{c c c}\t\t\\includegraphics[width=0.4\\textwidth]{images\/spider9_trajectory_all.pdf}&\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/gecko7_trajectory_all.pdf}&\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/babyA_trajectory_all.pdf} \\\\\n\t\\end{tabular}\n \\end{adjustbox}\n \\begin{adjustbox}{max width=0.49\\textwidth}\n\t\\begin{tabular}{c c c}\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/spider9_moving_tra_all.pdf}&\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/gecko7_moving_tra_all.pdf}&\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/babyA_moving_tra_all.pdf} \\\\\n\t\\end{tabular}\n \\end{adjustbox}\n\t\\caption{Trajectories of the spider (left), the gecko (middle), and the baby (right). The solid lines show the average trajectories (five runs). The hexagonal bins show how often a robot was located in each area across all five repetitions, with robot positions sampled every 0.5 seconds. Top row: moving towards a fixed target, the coloured circles. Bottom row: following a moving target. 
The red line shows the path of the target robot, the blue one belongs to the `chaser'. \n\n\t}\n\t\\label{fig:trajectories}\n\\end{figure}\n\n\\subsection{Scenario 2: moving target}\n\\label{sec:moving}\nIn the second series of experiments, we tested each robot with a moving target. To this end, we used a wheeled Robobo robot that was pre-programmed to drive a given trajectory. In the initial position, the Robobo was approximately 30 cm ahead of the modular robot and started to drive to the right, then it turned and drove to the left. The bottom row of Figure \\ref{fig:trajectories} shows the trajectories after five repetitions with each modular robot. Additionally, we recorded the experiments with an overhead camera above the test arena (see Supplementary Video). These data indicate that the modular robots were able to follow the Robobo in all cases. This demonstrates that the controllers learned for a simple task (moving straight ahead) were applicable in a different and more difficult task. In turn, this proves the usefulness of our new controllers based on an internal frame of reference and sensory oscillators. 
\n\n\n\\begin{figure*}[!ht]\n\t\\centering\n \\begin{adjustbox}{max width=0.95\\textwidth}\n\t\\begin{tabular}{c c c c c}\n\t\t\\includegraphics[width=0.385\\textwidth,angle=90]{images\/babyA2gecko1_1.png}& \\hspace{-4mm}\n \\includegraphics[width=0.385\\textwidth,angle=90]{images\/babyA2gecko1_2.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.385\\textwidth,angle=90]{images\/babyA2gecko1_3.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.385\\textwidth,angle=90]{images\/babyA2gecko1_4.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.385\\textwidth,angle=90]{images\/babyA2gecko1_5.png} \\\\\n\t\t\\includegraphics[width=0.363\\textwidth,angle=90]{images\/gecko2spider1_1.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.363\\textwidth,angle=90]{images\/gecko2spider1_2.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.363\\textwidth,angle=90]{images\/gecko2spider1_3.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.363\\textwidth,angle=90]{images\/gecko2spider1_4.png}& \\hspace{-4mm}\n\t\t\\includegraphics[width=0.363\\textwidth,angle=90]{images\/gecko2spider1_5.png} \\\\\n\t\\end{tabular}\n \\end{adjustbox}\n\t\\caption{Still images of the double moving target experiments. Top row: the gecko follows the hand-held target and the baby follows the gecko. Bottom row: the spider follows the hand-held target and the gecko follows the spider. The hand-held target is the red cylinder at the end of a white stick. The red and blue lines show the trajectories of the first and the second robot, respectively.}\n\t\\label{fig:double-overhead}\n\\end{figure*}\n\n\\subsection{Scenario 3: double moving target}\n\\label{sec:d-moving}\n\nIn the third series of real-world experiments, we challenged the modular robots even further. In this setup, we replaced the Robobo used in the previous experiments by one of the modular robots and hand-held a target in front of it. Another modular robot was placed on the regular starting position. 
Then the robots started at the same time, the first one following the target hand-held by the experimenter, the second one following the first one. In Figure \\ref{fig:double-overhead}, we show two of these tests, the case of the gecko following the hand-held target and the baby following the gecko and the case of the spider following the hand-held target and the gecko following the spider. \nThe figure captures the test by five still images taken by the overhead camera, more information is shown in Supplementary Video. These show that robots can follow a target even if it is an irregularly moving irregular shape (another modular robot). The trajectories also disclose the speed differences. In the first test, the distance between the gecko and the baby is gradually growing, which indicates that the baby is a little slower than the gecko. In particular, the baby encountered a little difficulty in turning right. This is not surprising, given that its morphology with the long right limb is not perfectly symmetrical. In the second test we see the opposite, the second robot is closing in on the first one, and before the end of the experiment the gecko hits the spider.\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\n\nThe wider context of this study is full-fledged robot evolution, where both morphologies and controllers evolve. We note that a morphologically evolving robot system needs a learning component that works in any of the possible robots and adjusts the inherited controller (or a randomly initialized controller) to the given morphology and task. \n\nOur experimental work is, in essence, a feasibility study addressing this general issue within our system of evolvable robots and a specific task. \nThe concrete problem we tackle is to enable targeted locomotion, that is, approaching and following an object, in modular robots without any assumption on their specific morphology. 
This forms a good test case because it is morphology-dependent, challenging for robots with random shapes, and practically relevant. The solution we deliver is based on an internal frame of reference and the use of oscillators with sensory feedback to drive the joints. These are combined into a control system that modulates the force in the servo motors depending on the robot's angle to the target and the location of the joint in the robot's body. The overall control system is a CPG-based network where neighboring joints of the robot are connected. The other ingredient of the solution is a learning method that can find good parameter values for any given CPG-based network. \n\nTo validate our approach we designed nine test cases by combining three robots with different morphologies and three scenarios, one with a fixed target, one with a moving target, and one with a double moving target. The robots learned a good controller for moving straight forward and these controllers were evaluated in the test scenarios.\n\nIn the first scenario, all robots approached the target accurately, even though the spider exhibited a little offset to the right, cf. Figure \\ref{fig:trajectories} and Supplementary Video. In the second scenario, the target moved to the right first and turned to the left halfway through the test. All three robots could adjust their trajectory and followed the target with a little delay, as can be seen in Figure \\ref{fig:trajectories}. This proves that the sensory-motor feedback loop they learned for one basic skill (straight forward movement) was generalizable to a more dynamic and complex task.\n\nIn the third scenario, a modular robot had to follow another modular robot that followed a hand-held target. This test further confirmed the adequacy of the learned controllers and illuminated the differences between the robots' locomotion abilities. 
For instance, the baby followed the gecko well in a left turn, but it fell behind when turning right because of the longer right-front limb as shown in the fifth frame of Figure \\ref{fig:double-overhead}.\nIn the other test, the gecko followed the spider easily because of the wiggly locomotion of the spider and the relatively high speed of the gecko.\n\nThe results of the experiments demonstrate that the new type of oscillators and CPGs with external feedback in combination with the internal frame of reference empower robots with different morphologies to perform object following. Our frame of reference is in essence a 2-dimensional coordinate system, where the origin and the `North' are defined by specific features in our robots, the head module and the direction of the camera, respectively. This concept can be easily extended to three dimensions and to different robots with other features. For instance, a designated head module is not required, the origin can be defined by any reasonable principle that applies to the given morphologies. Likewise, while using the direction of the camera to define `North' is a natural choice in our application, a coordinate system can be defined for robots without a camera as well. \n\nFurther extensions and generalizations are possible regarding the feedback from the environment. Although in this study we use a camera, our approach is applicable in a wide range of robot systems, because the sensory oscillators can handle inputs from different sensors, e.g., accelerometers, gyroscopes, IR sensors, sonars. Furthermore, in this paper all joints on the same side are treated the same way, cf. Equations \\ref{eq:l_joint_downscale} and \\ref{eq:r_joint_downscale}, but it is possible to define a variant where the exact coordinates, for instance the distance to the `spinal cord', are also taken into account. \n\nLast but not least, improvements are possible by employing another learning method. 
The BEA we use here is not application-specific; in principle, any derivative-free black-box optimization algorithm for learning adequate parameter values for the controller is applicable. In our view, it is very important to consider both sample efficiency and time efficiency, that is, the number of trials or evaluations as well as the time needed to achieve a decent result. In this paper we used a budget of 1500 trials and the BEA spent about one hour on learning. This is certainly acceptable and delivers the proof of concept we aimed for, but we are convinced that these figures can be improved by more advanced methods. \n\nA particular aspect here is the dichotomy between the real world and simulations. To this end, it is important to note that the use of simulations does not invalidate the concept of physical robot evolution. In fact, simulations can be used in both the evolutionary and the learning loop.\nAs outlined in \\cite{howard2019evolving} and \\cite{hale2019robot}, there are great potential advantages in evolving real and simulated robots simultaneously. If the simulated and real robots share the same genetic language then we can `cross-breed' them, using a virtual `mother' and a physical `father'. In such a hybrid evolutionary system, physical evolution is accelerated by the virtual component that can find good robot features with less time and fewer resources than physical evolution, while simulated evolution benefits from the influx of genes that have been tested favourably in the real world. Additionally, even if we have clean physical evolution and all robot `children' are produced in the real world, the learning process in the Infancy stage can use simulations to obtain a good controller for the given robot `child'. The learned controllers can suffer from the reality gap, which can be mitigated by a subsequent real-world learning process on the physical robot. 
The advantage of the combination is that virtual learning can be on a meso-scale, spending quite a few trials (e.g., hundreds, like in our case) and the real-world process on a micro-scale with much fewer trials (perhaps just a few dozens). For the sake of the argument, if 100 real-world trials were enough, then the learning process could be completed on the real robots in an afternoon. The current paper is on the simulated side. The controllers learned in simulation just worked for us here, thus we had no need to add a real-world learning process. \n\n\n\n\nReflecting from a broader perspective, this work provides an approach for learning tasks inside an evolutionary process that produces various morphologies. Our approach is generic, applicable to various types of evolvable robot systems and different tasks. For example, it can be applied in a system for evolving robots for exploring and decommissioning nuclear power plants \\cite{hale2019robot}, as well as in more fundamental studies regarding the evolution of embodied intelligence. \n\n\n\n\n\n\n\n\\section{Methods}\n\\label{sec:methods}\n\n\\subsection*{The general system}\n\\label{subsec:general_framework}\n\nThe robots in our system are based on RoboGen \\cite{auerbach2014robogen}, they consist of a core component that hosts the controller board, the battery and a camera, 3D-printed passive bricks, and joints driven by servo motors \\cite{jelisavcic2017real}. Figure \\ref{fig:robots} displays the three robots we use in the current experiments. The camera provides information about the environment that is processed by a new type of control system based on sensory oscillators that activate the servos. \nA schematic representation of the system is presented in Figure \\ref{fig:framework} (b). \nIn the next subsections, we discuss the details of each component of the system.\n\\subsection*{Robot Vision}\n\\label{subsec:robot_vision}\n\nThe closed-loop controllers are based on visual input delivered by a camera. 
The robot vision system must work accurately in real time on our Raspberry Pi Camera Module v2, and ideally be power efficient. The two pivotal steps for this system are the recognition of objects of interest, and the calculation of the angle of a target object w.r.t. the orientation of the robot.\n\n\\paragraph{Object recognition} \nHere, we follow the approach proposed in \\cite{lan2018ICARCV,lan2019evolving} that allows targets to be recognized quickly on low-performance hardware.\nThe method consists of two components:\n\\begin{itemize}[nolistsep]\n \\item Detection of regions of interest (ROIs) using \\textit{Fast ROIs Search} proposed in \\cite{lan2018ICARCV}.\n \\item Object recognition using the Histogram of Oriented Gradients (HOG) as a feature extractor and Support Vector Machines (SVM) with the linear kernel as a detection method.\n\\end{itemize}\nWe decided to use the combination of HOG and SVM instead of Deep Neural Networks since HOG and SVM perform much faster. In this project, we used the implementation provided by OpenCV.\n\n\\paragraph{Angle to target object}\nOnce a target is detected, its relative position from the robot's perspective can be expressed by the angle $\\alpha \\in [-\\beta, \\beta]$, where $\\beta$ is half the field of view of the given camera. The angle to the target $\\alpha$ is the angle between the orientation of the robot (i.e., the camera) and the target object. If the target is on the left-hand side of the robot's face, then $\\alpha$ is negative, whereas if the target is on the right-hand side of the robot's face, $\\alpha$ is positive. The value of $\\alpha$ is zero if the target is straight ahead of the robot. 
\nGiven an image registered by the Raspberry Pi Camera Module v2 with the parameters (the field of view ($2 \\times \\beta$) is $\\ang{62.2}$, the resolution is $3280 \\times 2464$, where 3280 is the number of pixel columns ($\\mathcal{N}_c$)), in which the robot vision recognizes the target at the pixel coordinate $(x,y)$, the angle $\\alpha$ (in degrees) is calculated by\n\\begin{equation}\n \\alpha = \\arctan\\Big(\\frac{x - \\mathcal{N}_c \/ 2} {\\mathcal{F}}\\Big) \\times \\frac{180}{\\pi}\n\\end{equation}\nwhere $\\mathcal{F}$ is the focal length of the camera expressed in pixels, an intrinsic camera parameter that can be calculated by \n\\begin{equation}\n \\mathcal{F} = \\frac{ \\mathcal{N}_c \/2}{\\tan(\\beta \\times \\frac{\\pi}{180})}\n\\end{equation}\n\nBecause the camera's field of view is limited in the real world, the robots have to handle situations in which the target is outside it. \nIn such situations, we expect the robots to search for the target until it is in the camera's field of view again.\nTo this end, we use two solutions: 1) If the target is out of the camera's field of view initially, the robots search for the target until it is in the camera's field of view. For this purpose, the initial value of $\\alpha$ can be set to $\\beta\/2$ (or $-\\beta\/2$) for turning right (or left).\n2) If the target escapes from the camera's field of view during locomotion, the robots keep their previous behaviour until the target returns to the field of view. \nThat is, $\\alpha$ keeps its previous value whenever no target is in the field of view, except in the initial stage.\n\n\\subsection*{Controller}\n\\label{subsec:controller}\n\n\\paragraph{Modeling joints as oscillators}\nThe key element of the controller is the model of a single joint. 
Here, we propose a new oscillator with sensory feedback to properly represent the oscillatory behavior often seen in nature \\cite{ijspeert2008central}.\nThe controller composed of these new oscillators works in a closed loop, in which the robot's action can be changed according to the sensory feedback about the target in the environment.\nIn general, the sensory input to the new oscillator can be generated by any sensors.\n\nA sensory oscillator has an $x$-neuron, a $y$-neuron, an $out$-neuron, and an extra node that implements a function $f$.\nFor each time step, neuron $x$ ($y$) feeds its activation value multiplied by the weight $w_{xy}$ ($w_{yx}$) to the neuron $y$ ($x$).\nAt a time step $t$, the changes of the activation values of an $x$-neuron and a $y$-neuron can be calculated as $\\Delta x_{(t)} = w_{yx}y_{(t-1)}$ and $\\Delta y_{(t)} = w_{xy}x_{(t-1)}$ respectively, \nwhere $t-1$ represents the previous time step. \nThe $x$-neuron and the $y$-neuron generate the activation values $x_{(t)}$ and $y_{(t)}$ of oscillatory patterns over time according to the following expression:\n\\begin{align}\n \\begin{split}\n x_{(t)} = x_{(t-1)} + \\Delta x_{(t)} \\\\\n \ty_{(t)} = y_{(t-1)} + \\Delta y_{(t)} \n \\end{split}\n \\label{eq:activation}\n\\end{align}\n\n\\par The $x$-neuron feeds its activation value multiplied by the weight $w_{xo}$ to the $out$-neuron.\nThe $out$-neuron applies an activation function to generate its activation value.\nFor the oscillator in the CPG-based controller of modular robots with joints driven by servo motors, the activation values of the $out$-neurons have to meet two conditions due to the limited rotating angle of the joints. \nFirst, the activation value of the $out$-neuron must be periodic, repeatedly returning to its initial value.\nAccording to the stability criterion for linear dynamical systems \\cite{bubnicki2005modern}, it is beneficial to take $w_{yx} = - w_{xy}$. 
Such parameter values lead to periodic signals that do not explode over time.\nIn this study, we use the predefined initial values $(x_{(0)}, y_{(0)}) = (-\\frac{1}{2}\\sqrt{2}, \\frac{1}{2}\\sqrt{2})$, and $(w_{xy}, w_{yx}) = (0.5, -0.5)$, but they can be initialized to any nonzero values. \nSecond, the activation value of the $out$-neuron should be bounded in an interval.\nTherefore, we use a variant of the sigmoid function, the hyperbolic tangent function ($tanh$), as the activation function of the $out$-neurons to bound the output value in $[-1,1]$.\nAt a time step $t$, the $tanh$ activation value of the $out$-neuron can be calculated as follows:\n\\begin{equation}\n out_{(t)} = \\frac{e^{x_{(t)}} - e^{- x_{(t)}}}{e^{x_{(t)}} + e^{- x_{(t)}}} .\n \\label{eq:output}\n\\end{equation}\n\n\nFinally, the new oscillator executes an extra operation $f$ that combines the activation value of the $out$-neuron and the external sensory signal $sen$ to produce a signal $sig$, i.e., $sig = f(sen, out)$.\nIn general, $f$ could be any function; here we use the multiplication of $out$ and $sen$.\nHence, at a time step $t$, the $sig$ value can be calculated by:\n\\begin{equation}\n sig_{(t)} = sen_{(t)} \\times out_{(t)}\n \\label{eq:multiplication}\n\\end{equation}\nThe rationale behind our new oscillator model is to allow the inclusion of a sensory signal for closed-loop control. We refer to this new model as the sensory oscillator (see Figure \\ref{fig:framework} (a)). 
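The update equations above fit in a few lines of code. The following is a minimal sketch of a single sensory oscillator using the initial values and weights given in the text; the class and variable names are ours, and this is an illustration rather than the controller implementation used on the robots:

```python
import math

class SensoryOscillator:
    """Minimal single sensory oscillator: x- and y-neurons coupled with
    w_yx = -w_xy, a tanh out-neuron, and sig = sen * out (the paper's
    choice of the function f)."""

    def __init__(self, w_xy=0.5):
        self.w_xy, self.w_yx = w_xy, -w_xy
        # Predefined initial values from the text: (-sqrt(2)/2, sqrt(2)/2).
        self.x, self.y = -math.sqrt(2) / 2, math.sqrt(2) / 2

    def step(self, sen):
        # Activation updates: x_t = x_{t-1} + w_yx * y_{t-1}, and analogously for y.
        dx, dy = self.w_yx * self.y, self.w_xy * self.x
        self.x, self.y = self.x + dx, self.y + dy
        out = math.tanh(self.x)   # out-neuron activation, bounded in [-1, 1]
        return sen * out          # sig = f(sen, out) = sen * out
```

Driving the oscillator with a constant $sen$ produces a bounded, oscillating $sig$; in the steering method below, the scaling factor plays the role of $sen$.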
\n\n\n\\paragraph{Network of oscillators}\n\nFor modular robots, e.g., the robots in Figure \\ref{fig:robots}, multiple joints affect each other when performing an action.\nIn other words, we deal with a network of joints (oscillators) and we must take into account the influence of the other oscillators.\nIn the CPG-based network controller, neighboring oscillators are connected to each other, as shown by the blue arrow lines in Figure \\ref{fig:robots} (d).\nAs a result, we extend Equation \\ref{eq:activation} for a single oscillator to include its neighborhood.\nFor the $i$-th sensory oscillator, the activation values $x_{i(t)}$ of its $x$-neuron and $y_{i(t)}$ of its $y$-neuron can be calculated as:\n\\begin{align}\n \\begin{split}\n x_{i(t)} &= x_{i(t-1)} + \\Delta x_{i(t)} + \\sum_{j \\in \\mathcal{N}_i} x_{j(t-1)} w_{ji}\\\\\n \ty_{i(t)} &= y_{i(t-1)} + \\Delta y_{i(t)} \n \\end{split}\n \\label{eq:network}\n\\end{align}\nwhere \n$\\mathcal{N}_i$ is the set of indices of the oscillators connected to the $i$-th oscillator, and \n$w_{ji}$ is the weight between the $i$-th and the $j$-th oscillator.\nThe connected oscillators in the CPG-based controller cooperate to achieve the desired task.\nThe number of weights that need to be optimized for the controllers of the spider, gecko, and baby is 18, 13, and 16, respectively.\n\n\n\\paragraph{Steering}\nThe usual steering policy for wheeled robots is relatively simple: for a left (right) turn, the force on the left (right) wheel needs to be reduced. Here we generalize this idea to modular robots with no assumptions about the morphology. Our method scales the activation signals of the joints depending on the coordinates of the joint and the angle $\\alpha$ between the direction to the target and the robot's heading. 
The key idea is to use a scaling factor \n\\begin{equation}\nd_p(\\alpha) = \\left(\\frac{\\beta - | \\alpha |}{\\beta}\\right)^p ,\n\\label{eq:cp}\n\\end{equation}\nwhere $p > 0$ is a user parameter that determines how strongly we penalize the deviation $\\alpha$. \nIn this study, we set $p = 7$ based on parameter-tuning experiments.\nRecall that the robot's field of view is the region between $-\\beta$ and $\\beta$, hence $\\alpha \\in [-\\beta, \\beta]$, where $\\alpha < 0$ means that the target is on the left and $\\alpha > 0$ means the target is on the right.\n\nThis scaling factor is used to modify the signals to the joints on the left as follows: \n\\begin{equation}\n sig = \\begin{cases}\n d_p(\\alpha) \\cdot out & \\textit{if } \\alpha < 0 \\\\\n out & \\textit{if }\\alpha \\ge 0\n \\end{cases}\n \\label{eq:l_joint_downscale}\n\\end{equation}\nand, analogously, the signal for the joints on the right is modified as follows:\n\\begin{equation}\n sig = \\begin{cases}\n out & \\textit{if } \\alpha < 0 \\\\\n d_p(\\alpha) \\cdot out & \\textit{if }\\alpha \\ge 0 .\n \\end{cases}\n \\label{eq:r_joint_downscale}\n\\end{equation}\nThe signals for the middle joints are never modified.\n\n\nObserve that by these formulas we define a specific implementation of the square-shaped extra node within a sensory oscillator in Figure \\ref{fig:framework}. We use $d_p(\\alpha)$ as the sensory information $sen$ and the function $f$ is simple multiplication. To our knowledge, this is a novel method. 
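The scaling rule can be written compactly. The sketch below is ours, with a `side` label standing in for the joint's lateral coordinate in the internal frame of reference:

```python
def d_p(alpha, beta, p=7):
    """Scaling factor ((beta - |alpha|) / beta) ** p from the equation above."""
    return ((beta - abs(alpha)) / beta) ** p

def steered_signal(out, alpha, beta, side, p=7):
    """Damp the joints on the side the robot must turn towards;
    middle joints are never modified. `side` is 'left', 'right' or 'middle'."""
    if side == 'left' and alpha < 0:       # target on the left: damp left joints
        return d_p(alpha, beta, p) * out
    if side == 'right' and alpha >= 0:     # target on the right: damp right joints
        return d_p(alpha, beta, p) * out
    return out
```

For instance, with $\beta = 31.1$ (half of the $62.2$-degree field of view) and $\alpha = -15$, the left joints are scaled by $d_p(-15) \approx 0.01$ while the right joints are untouched, so the robot turns left.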
The only other existing work that is remotely similar is that of \\cite{Ijspeert2007From}, where an internal signal is used to modify the working of oscillators for two predefined locomotion modes, walking and swimming.\n\n\\begin{figure*}[!tbp]\n \\centering\n \\begin{adjustbox}{max width=0.99\\textwidth}\n\t\\begin{tabular}{c c c}\n\t \\Large{spider} & \\Large{gecko} & \\Large{baby} \\\\\n\t \\includegraphics[width=0.4\\textwidth]{images\/spider9_bo_ea_bea.pdf} &\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/gecko7_bo_ea_bea.pdf} &\n\t\t\\includegraphics[width=0.4\\textwidth]{images\/babyA_bo_ea_bea.pdf} \n\t\\end{tabular}\n \\end{adjustbox}\n \\caption{The performance of Bayesian optimization (blue), an evolutionary algorithm (black), and the BEA (green) on the directed locomotion task in simulation. In each plot we provide the mean value (a solid line) with two standard deviations (a shadowed region). The purple dashed lines present the switch points. The plots on the left, in the middle, and on the right represent the results for the spider, gecko, and baby, respectively.}\n \\label{fig:bea_comparison}\n\\end{figure*}\n\n\\subsection*{Fitness function for directed locomotion}\n\\label{sec:function}\nThe fitness function used here evaluates the performance of controllers for the task of directed locomotion. This task is defined by a target direction $\\gamma$ that the robot has to follow. A good fitness function needs to combine two objectives: minimizing the deviation with respect to the target direction $\\gamma$ and maximizing the speed (i.e., displacement) over the evaluation period (60 seconds in our experiments). 
\nTo calculate the fitness value we need \n\\begin{itemize}[noitemsep,nolistsep]\n \\item the robot's starting position $p_0$ at the beginning of the evaluation period,\n \\item the robot's final position $p_1$ at the end of the evaluation period,\n \\item the deviation angle $\\delta$ between the target direction $\\gamma$ and the line drawn between $p_0$ and $p_1$,\n \\item the total length $L$ of the trajectory travelled during the evaluation period (which is in general not the distance between $p_0$ and $p_1$).\n\\end{itemize} \n\nThe fitness value is then composed of several components. Component one is to maximize the distance travelled in the right direction. This distance can be expressed by the value $E_1 = d(p_0,p_1) \\times \\cot(\\delta)$, where $d(x,y)$ is the Euclidean distance between two points $x$ and $y$ and $\\cot$ is the cotangent function. Another component is to minimize the deviation w.r.t. the target angle. This can be expressed not only by $\\delta$ but also by the distance of the final position $p_1$ from the ideal trajectory starting at $p_0$ and following the target direction. This distance can be expressed as $E_2 = d(p_0,p_1) \\times \\tan(\\delta)$, where $\\tan$ is the tangent function. The third component is to reward locomotion in a straight line. This can be simply captured by the value $E_3 = d(p_0,p_1) \/ (L + \\epsilon)$, where $\\epsilon$ is a small positive constant that avoids division by zero. This value is maximized when the length $L$ of the travelled trajectory equals the distance between the starting point $p_0$ and the end point $p_1$. Combining these components into one formula we obtain the following fitness function:\n\n\\begin{equation}\n\\label{eq:fitness}\nF = E_3 \\cdot \\Big( \\frac{E_1}{\\delta + 1} - w \\cdot E_2 \\Big),\n\\end{equation}\nwhere $w > 0$ is a penalty factor. 
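The components above combine into a short function. A direct transcription follows, with $\delta$ in radians and strictly positive (since $\cot(\delta)$ is undefined at zero); the concrete values of $w$ and $\epsilon$ are our choices, as the text only requires $w > 0$ and a small $\epsilon$:

```python
import math

def directed_locomotion_fitness(p0, p1, delta, L, w=0.01, eps=1e-10):
    """Transcription of E1, E2, E3 and the combined fitness F.
    p0, p1: start and final positions; delta: deviation angle in radians;
    L: length of the travelled trajectory."""
    dist = math.dist(p0, p1)        # d(p0, p1), Euclidean distance
    E1 = dist / math.tan(delta)     # distance travelled in the right direction
    E2 = dist * math.tan(delta)     # deviation from the ideal trajectory
    E3 = dist / (L + eps)           # straightness of the travelled path
    return E3 * (E1 / (delta + 1) - w * E2)
```

As expected, the value decreases when the deviation angle grows or when the travelled path is longer than the straight-line distance.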
This function is maximized when $d(p_0,p_1)$ is maximal, $\\delta$ (and hence $E_2$) is zero, and $L = d(p_0,p_1)$.\n\nThe fitness of a given controller in a robot is established by running the robot with that controller, measuring $p_0, p_1, \\delta$ and $L$, and calculating the value of $F$ as defined by Equation \\ref{eq:fitness}.\n\n\\subsection*{The Bayesian-Evolutionary Algorithm}\n\\label{subsec:BEA}\n\nThe Bayesian-Evolutionary Algorithm (BEA) consists of three stages: BO, switching, and EA. In the first stage, i.e., the early iterations, Bayesian optimization is employed because its computation time is not yet large. In this paper we use standard Bayesian optimization from the flexible high-performance library Limbo \\cite{cully2018limbo}, using a Gaussian process with a Mat\\'ern 5\/2 kernel, a length scale $\\theta = 0.2$, and a GP-UCB acquisition function. This setting outperformed other hyperparameter settings in our preliminary parameter-tuning experiments.\n\nThe switching stage starts when the time efficiency of BO drops below that of the EA. To determine the switch point we monitor the quality gain per time interval during the search process. This can be defined for any interval of $n$ consecutive iterations (objective function evaluations). If $t_1, \\dots, t_n$ denote the time instances of these iterations and $f_1, \\dots, f_n$ the resulting objective function values, then the gain over this time interval is $\\mathcal{G} = \\frac{f_n - f_1}{t_n - t_1}$. \nWe tested the gain of BO and EA on several well-known objective functions and found that a good moment to switch generally lies in the interval between 190 and 300 iterations.\nIn this study, the switch is triggered at 300 evaluations. \n\nTo seed the EA with a good initial population, we aim for quality (which can be exploited) as well as diversity (which assures appropriate exploration). 
Hence, if the intended population size of the EA is $K$, then we use K-means clustering on the top $50\\%$ of solutions found by BO and transfer the best solution in each cluster to the EA. \n\nIn the third stage, the search is continued by an evolutionary algorithm. In general, this can be any EA, but here we use an evolution strategy where the mutation step size is self-adaptive, but is also controlled by the quality gain per time interval. The idea is to use smaller mutation step sizes for exploitation when the gains are relatively large and larger mutation step sizes for exploration when the gains are small over a period of iterations. \nThe EA within the BEA uses a mutation rate of 0.8, a population size of 10, and a tournament size of 2.\n\nWe performed additional experiments to compare the performance of the BEA against its components separately, namely, the BO and the EA. The goal of these additional experiments is to indicate the benefit of our approach compared to using either the BO or the EA alone. We report the performance of the three methods in \\autoref{fig:bea_comparison}. Additionally, we present a comparison of the BEA with the BO and the EA in terms of the achieved fitness value and the computational time in \\autoref{tab:comparison}. We notice that the BEA obtains a roughly $20-30\\%$ better fitness value than the BO and a roughly $45-70\\%$ better fitness value than the EA. Obviously, the BEA is slower than the EA (by around $10-20\\%$), but it is significantly faster than the BO (by about $70\\%$). Overall, we see that the proposed optimization procedure not only significantly reduces the computation time, as originally intended, but also leads to a better exploration\/exploitation balance and, eventually, to better results than the standalone BO.\n\nIn conclusion, we want to stress the novelty of the proposed optimization strategy. First, the idea of combining BO and EAs is not widely used. 
Typically, BO is applied to parameter tuning of EAs \\cite{karroblack, roman2016bayesian}. Here, we propose to optimize the initial population of the EA using BO. Second, running both algorithms one after the other is not necessarily beneficial. The crucial step is to decide about the moment to switch from a computationally heavy, but accurate procedure, to a lightweight generate-and-test method. Here, we discussed how to accomplish that by monitoring the time efficiency. Third, we propose heuristics to \\textit{transfer} solutions found by BO to the initial population of the EA. Last, we further propose a new self-adaptive mutation operation that takes into account information about the progress of the procedure, i.e., the gain in the fitness function value.\n\n\\begin{table}[!tbp]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{8pt}\n \\renewcommand{\\arraystretch}{1.2}\n \\begin{tabular}{l|l|ccc}\n \\multicolumn{2}{c|}{} & \\multicolumn{1}{c}{spider9} & \\multicolumn{1}{c}{gecko7} & \\multicolumn{1}{c}{babyA} \\\\ \n \\hline\n best fitness ($\\uparrow$) & BO & 122.9\\% & 135.6\\% & 133.6\\% \\\\\n of BEA w.r.t. & EA & 145.9\\% & 169.7\\% & 158.7\\% \\\\ \n \n \\hline\n comp. time ($\\downarrow$) & BO & 31.0\\% & 33.8\\% & 32.4\\% \\\\\n of BEA w.r.t. & EA & 112.2\\% & 119.6\\% & 110.4\\% \\\\ \n \\end{tabular}\n \\caption{Simulation based comparison. Upper half: The performance of BEA in terms of fitness over a full run w.r.t. BO and EA defined by BEA\/BO and BEA\/EA respectively. Lower half: BEA vs. 
BO and the EA in terms of computation time defined as BEA\/BO and BEA\/EA respectively.}\n \\label{tab:comparison}\n\\end{table}\n\n\\section*{Data and code availability}\nThe data and code in this study can be provided by the corresponding author \nupon reasonable request.\nA Supplementary Video is available for this paper at \\url{https:\/\/youtu.be\/U9n86ngVe-4}.\n\n\\bibliographystyle{unsrt}\n\n\\section{Introduction}\n\\label{sec:intro}\nUnderstanding isotopic abundances on a large scale is a major field of interest which has received a great deal of attention for its application to terrestrial environments (oceans, meteorites), the solar system (planets, comets) and galactic interstellar space. The variation of isotopic ratios may give us some information about the link between solar system objects and galactic interstellar environments, as discussed by \\cite{aleon:10}.\n We principally focus our study on interstellar environments where low temperature conditions may significantly impact the isotopic ratios of the molecular content. Isotopic molecules are detected in a variety of environments and offer an additional tool to determine physical conditions as they usually do not suffer from opacity problems.\n \n Early modeling studies on {\\tC} and {\\dO} isotopic enrichment were performed by \\cite{langer:84}, who introduced different isotopic exchange reactions, relying on previous theoretical and experimental studies by \\cite{watson:76,smith:80}. Possible effects of selective photodissociation of CO have subsequently been emphasized \\citep{glassgold:85,lebourlot:93,visser:09}, which tend to increase the {\\nCO\/\\tCO} ratio. The use of the CN radical as a tracer of the {\\dC\/\\tC} isotopic ratio has been raised by \\cite{savage:02,milam:05}, who studied the corresponding gradient as a function of the galactic distance.
The actual value of the {\\dC\/\\tC} isotopic ratio in the local InterStellar Medium (ISM) is assumed to be 68 \\citep{milam:05}.\n \n The possibility of nitrogen isotopic fractionation in interstellar clouds has been investigated by \\cite{terzieva:00}, who suggested various {\\fN} isotopic exchange reactions. \\cite{rodgers:04,rodgers:08b,rodgers:08a} used these suggested reaction rate constants to predict nitrogen isotopic fractionation in chemical models of dense interstellar molecular cores. They specifically discussed the role of the atomic to molecular nitrogen ratio in the fractionation process and the possible link between nitrogen hydrides and CN containing molecules (nitriles).\nThe corresponding observations are, however, sparse and difficult as the elemental {\\nN\/\\fN} ratio is high (441 $\\pm$ 5), as determined by the recent Genesis solar wind sampling measurement \\citep{marty:11} and assumed to hold in the local ISM. In addition, the zero point energy (ZPE) differences involved in nitrogen fractionation reactions are small and the predicted corresponding chemical enrichment is moderate. \n\\cite{lucas:98} reported {\\fHCN} absorption in diffuse clouds located in front of distant quasars with a {\\nHCN\/\\fHCN} ratio of 270 $\\pm$ 27, close to the value reported for the Earth. However, various new observations of isotopic nitrogen containing molecules have been reported, including {\\fN} substituted ammonia and deuterated ammonia \\citep{gerin:09,lis:10}, the diazenylium ion (\\NNHp) \\citep{bizzocchi:10,bizzocchi:13,daniel:13}, CN and HCN \\citep{ikeda:02,pillai:07,adande:12,hilyblant:13,daniel:13}, and HCN and HNC \\citep{wampfler:14}. The strong depletion found in {\\fN} variants of the {\\NNHp} isotopologue strongly contradicts model predictions \\citep{gerin:09}, which motivates a reinvestigation of the chemical processes at work.
With this in mind, the link between deuterated chemistry and the possible role of ortho\/para molecular hydrogen has been \nstudied by \\cite{wirstrom:12}. \n \nWe analyse in Section~\\ref{sec:reac} the various possible isotopic exchange reactions that are involved for carbon and nitrogen containing molecules. Indeed, most nitrogen fractionation observational results for CN containing molecules involve only {\\tC} and\n{\\fN} species so that the measure of the nitrogen isotopic ratio assumes a fixed {\\dC\/\\tC} fraction. We examine and extend the pioneering study of \\cite{terzieva:00}\nand check for the possible presence of barriers in the entrance channels of isotopic exchange reactions through theoretical calculations. \nWe also update the zero point energy (ZPE) values involved and derive the corresponding exothermicity values.\nWe present our new chemical model in Section~\\ref{sec:model} and compare with available observations and other models. Our conclusions are presented in Section~\\ref{sec:conclusion}.\n\n\\section{Chemical reactions involving isotopic substitutes of {\\dC} and \\nN.}\n\\label{sec:reac}\n\\subsection{{\\tC} and {\\fN} exchange reactions}\nAt very low temperatures, isotopic exchange reactions may only occur if no barrier is present between the interacting atoms, ions and molecules or if tunnelling plays an important role. Experimental information is crucial and we constrain the evaluation of rate constants using that information. If no experimental data are available, we apply theoretical methods to determine the presence of a barrier: a first technique uses\n DFT (Density Functional Theory) calculations (with the hybrid M06-2X functional developed by \\cite{zhao:08}, which is well suited for thermochemical calculations, associated with the cc-pVTZ basis set using the GAUSSIAN09 software).
The alternative is provided by the MRCI+Q method (with the aug-cc-pVTZ basis set).\nFor barrierless cases, we derive the reaction rate constants by using a simple capture theory \\citep{georgievskii:05} for both ion-neutral and neutral-neutral reactions.\nWe consider four different families of isotopic exchange reactions:\n\\begin{itemize}\n\\item{{\\emph{A : direct reactions}}. The proton transfer in the $\\NNHp + \\NfiN \\rightarrow \\NfNHp + \\NN$ reaction can serve as an example. In this case and for reactions without a barrier, the reaction rate coefficient of the forward reaction is equal to the capture rate constant multiplied by a probability factor $f(B,M)$, which depends on the rotational constant, mass and symmetry values of the reactants and products. In reactions involving {\\fN} and {\\tC}, the mass ratios of reactants and products are very close and $f(B,M) \\cong \\sigma_{\\rm{entrance~channel}} \/ \\sigma _{\\rm{exit~channels} }$} (the symmetry number $\\sigma$ is equal to the number of pathways). The reverse reaction rate coefficient is calculated from the equilibrium constant $K$, as in \\cite{terzieva:00}: $K = k_f\/k_r=f(B,M) \\times exp( \\Delta E \/ kT)$.\n \n\\item{{\\emph{B : reactions involving adduct formation leading to direct products without isomerization}}. As an example, we refer to \n$\\fNp + \\NN \\rightarrow \\NfiN + \\Np$. We first assume that the high pressure rate constant is equal to the capture rate constant (for reactions without a barrier). We apply statistical theory for the system at thermal equilibrium so that $ k_f + k_r = k_{capture} $. From the equilibrium constant expression, we then derive \n$ k_f= k_{capture} \\times \\frac{f(B,M)}{[ f(B,M) + exp(- \\Delta E \/ kT)] }$ and $ k_r= k_{capture} \\times \\frac{exp(- \\Delta E \/ kT)}{[f(B,M) +exp(- \\Delta E \/ kT)]}$} \n\n\\item{{\\emph{C : reactions involving adduct formation with isomerization pathways}}. Such a case holds for \n$\\tC + \\nHCN \\rightarrow \\dC + \\tHCN$.
We again assume that the high pressure rate constant is given by capture theory (for reactions without a barrier). The isotopic isomerization reaction competes with the dissociation of the adduct. The rate constant depends on the location of the transition state and statistical calculations are generally required to estimate the isomerization reaction rate constant.}\n \\item{{\\emph{D : other reactive exothermic channels exist}}}. The exchange reaction is then generally discarded (the possibility of N atom exchange in the {\\fN} + CN and in {\\fN} + {\\CCN} reactions is discussed).\n\\end{itemize}\nThe knowledge of the exoergicity values $\\Delta E$ is also a major concern. They are obtained from the differences of the ZPEs between products and reactants. We recall in the Appendix the corresponding expressions and derive their values by using the most recent determinations of spectroscopic constants.\n \n We summarize in Table \\ref{tab:exch} the different isotopic exchange reactions considered and display the corresponding reaction rate constants. Detailed information on the theoretical methods used for the different systems is provided in the online material. The reactions involving {\\fN} are displayed in the upper part of the Table. We also consider {\\tC} isotopic exchange reactions in the lower part of Table \\ref{tab:exch}.
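The detailed-balance relations above translate directly into code. The following sketch is ours (not part of the paper's modeling code); it evaluates the forward and reverse rate coefficients of a family-B reaction, taking reaction (4) of the exchange-reaction table ($^{15}$N$^+$ + N$_2$) as the worked example, with $\Delta E$ expressed in kelvin so that $exp(-\Delta E/kT)$ becomes $exp(-\Delta E/T)$:

```python
import math

def family_b_rates(k_capture, f_bm, delta_e, temp):
    """Forward/reverse rate coefficients (cm^3 s^-1) of a family-B
    exchange reaction, obtained from k_f + k_r = k_capture together
    with K = k_f / k_r = f(B,M) * exp(delta_e / T), delta_e in kelvin."""
    boltzmann = math.exp(-delta_e / temp)
    k_f = k_capture * f_bm / (f_bm + boltzmann)
    k_r = k_capture * boltzmann / (f_bm + boltzmann)
    return k_f, k_r

# Reaction (4): 15N+ + N2 <=> N+ + 14N15N at 10 K
# (k_capture = 4.8e-10 cm^3 s^-1, f(B,M) = 2, delta_e = 28.3 K).
kf, kr = family_b_rates(4.8e-10, 2.0, 28.3, 10.0)
```

At 10 K the forward (fractionating) direction dominates by the factor $f(B,M)\,e^{\Delta E/T} \approx 34$, which is what drives the low-temperature $^{15}$N enrichment of molecular nitrogen discussed later.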
\n %\n \\begin{table*}[h]\n\\caption{Isotopic exchange reactions.} \n\\label{tab:exch} \n\\begin{center} \n\\begin{tabular}{l l l l l c l} \n\\hline\\hline \n \nlabel \/ & \\multicolumn{3}{c}{Reaction} & k$_f$ $^*$ & $f(B,M)$ $^*$ & $\\Delta$E $^*$ \\\\\ncomment & \\multicolumn{3}{c}{ } & (cm$^{3}$ s$^{-1}$) & & (K)\\\\\n\\hline \n(1) A & {\\bf{$\\NfiN + \\NNHp$}} & $\\rightleftharpoons$ & {\\bf{$ \\NfNHp + \\NN$}} & 2.3 $\\times$ 10$^{-10}$ & 0.5 & 10.3 \\\\\n (2) A &{\\bf{$\\NfiN + \\NNHp$}} & $\\rightleftharpoons$ & {\\bf{$ \\fNNHp + \\NN$}} & 2.3 $\\times$ 10$^{-10}$ & 0.5 & 2.1 \\\\\n (3) A &{\\bf{$\\NfiN + \\fNNHp$}} & $\\rightleftharpoons$ & {\\bf{$ \\NfNHp + \\NfiN$}} & 4.6 $\\times$ 10$^{-10}$ & 1 & 8.1 \\\\\n(4) B & {\\bf{$\\fNp + \\NN$}} & $\\rightleftharpoons$ & {\\bf{$ \\Np + \\NfiN$}} & 4.8 $\\times 10 ^{-10} \\times \\frac{2}{2+exp(-28.3\/T)}$ &2 & 28.3 \\\\\n(5) C & {\\bf{$\\fN + \\CNCp$}} & $\\rightleftharpoons$ & {\\bf{$ \\CfNCp + \\nN$}} & 3.8 $\\times 10 ^{-12} \\times (\\frac{T}{300})^{-1} $ & 1 & 38.1 \\\\ \n(6) D & {\\bf{$\\fNp + \\nNO$}} & $\\rightleftharpoons$ & {\\bf{$ \\Np + \\fNO$}} & no react & - & 24.3 \\\\\n(7) barrier & {\\bf{$\\fN + \\NNHp$}} & $\\rightleftharpoons$ & {\\bf{$\\nN + \\NfNHp $}} & no react & - & 38.5 \\\\ \n(8) barrier & {\\bf{$\\fN + \\NNHp$}} & $\\rightleftharpoons$ & {\\bf{$\\nN + \\fNNHp $}} & no react & - & 30.4 \\\\ \n(9) barrier & {\\bf{$ \\fNNHp + \\nH $}} & $\\rightleftharpoons$ & {\\bf{$\\nH + \\NfNHp $}} & no react & - & 8.1 \\\\ \n (10) barrier & {\\bf{$\\fN + \\HCNHp$}} & $\\rightleftharpoons$ & {\\bf{$\\nN + \\HCfNHp $}} & no react & - & 37.1 \\\\ \n (11) D & {\\bf{$\\fN + \\nCN$}} & $\\rightleftharpoons$ & {\\bf{$ \\nN + \\CfiN$}} & upper limit : 2.0 $\\times$ 10$^{-10}$ $\\times$ & 1 & 22.9 \\\\ \n & & & & (T\/300)$^{1\/6}$ $\\times$ $\\frac{1}{1+exp(-22.9\/T)}$ & & \\\\\n (12) B & {\\bf{$\\fN + \\CCN$}} & $\\rightleftharpoons$ & {\\bf{$ \\nN + \\CCfN $}} & 1.6 $\\times 10 ^{-10} \\times 
(T\/300)^{1\/6} \\times$ & 1 & 26.7 \\\\ \n & & & & $\\frac{1}{1+exp(-26.7\/T)}$ & & \\\\\n(13) D & {\\bf{$\\fN + \\nNO$}} & $\\rightleftharpoons$ & {\\bf{$\\nN + \\fNO$}} & - & - & 24.3 \\\\ \n\\hline \n\\hline \n(14) B & {\\bf{$\\tCp + \\nCO$}} & $\\rightleftharpoons$ & {\\bf{$ \\Cp + \\tCO$}} & 6.6 $\\times 10 ^{-10} \\times (T\/300)^{-0.45} $& 1 & 34.7 \\\\\n & & & & $\\times$ exp(-6.5\/T) $ \\times$ $ \\frac{1}{1+exp(-34.7\/T)}$ & & \\\\\n(15) A & {\\bf{$\\tCO + \\HCOp$}} & $\\rightleftharpoons$ & {\\bf{$ \\nCO + \\HtCOp$}} & 2.6 $\\times 10 ^{-10} \\times (T\/300)^{-0.4} $ & 1 & 17.4 \\\\ \n(16) B & {\\bf{$\\tCp + \\nCN$}} & $\\rightleftharpoons$ & {\\bf{$ \\Cp + \\tCN$}} & 3.82 $\\times 10 ^{-9} \\times (T\/300)^{-0.4}$ & 1 & 31.1 \\\\ \n & & & & $ \\times$ $ \\frac{1}{1+exp(-31.1\/T)}$ & & \\\\ \n (17) B & {\\bf{$\\tC + \\nCN$}} & $\\rightleftharpoons$ & {\\bf{$ \\dC + \\tCN$}} & 3.0 $\\times 10 ^{-10} \\times \\frac{1}{1+exp(-31.1\/T)}$ & 1 & 31.1 \\\\ \n (18) C & {\\bf{$\\tC + \\nHCN$}} & $\\rightleftharpoons$ & {\\bf{$ \\dC + \\tHCN$}} & no react & - & 48.4 \\\\ \n(19) B & {\\bf{$\\tC + \\nCC$}} & $\\rightleftharpoons$ & {\\bf{$ \\dC + \\tCC$}} & 3.0 $\\times 10 ^{-10} \\times \\frac{2}{2+exp(-26.4\/T)}$ & 2 & 26.4 \\\\ \n (20) barrier &{\\bf{$ \\tCH + \\nCO $}} & $\\rightleftharpoons$ & {\\bf{$ \\tCO + \\nCH$}} & no react & - & 28.6 \\\\ \n \\hline \n\\end{tabular}\n\\end{center}\n$^*$ k$_f$ is the forward reaction rate coefficient. The reverse reaction rate coefficient, k$_r$, is obtained from k$_r$ = $\\frac{k_f}{f(B,M)}$ exp(- $\\Delta$E\/T).\n\\end{table*}\nTable \\ref{tab:exch} shows two main discrepancies compared to the previous study by \\cite{terzieva:00}: the exchange reactions between atomic {\\fN} and {\\NNHp}, {\\HCNHp} are found to be unlikely to occur as significant barriers arise in the complex formation step.
A similar result is obtained for {\\fNp} exchange with NO, whereas these reactions had been included in \\cite{terzieva:00}. The exchange reaction between atomic {\\fN} and CN, which was suggested by \\cite{rodgers:08b}, is found to be plausible.\nAdditional possibilities of exchange have also been considered, such as the reaction between {\\fN} and {\\CCN}. As far as possible {\\tC} fractionation is concerned, we find that CN could be enriched in {\\tC} through the exchange reactions of CN with {\\tC} and {\\tCp}. However, such a mechanism does not hold for HNC as atomic carbon is found to react with HNC \\citep{loison:14}. $^{13}$C\nenrichment of HCN is also found to be unlikely as the calculated transition state lies above the entrance level in the hypothetical isomerization process (C mechanism). \n \n \\subsection{Ammonia synthesis}\n \\label{sec:ammonia}\n Ammonia synthesis proceeds mainly through a chain of reactions starting with {\\Np} and {\\HH}, as the reaction between N and {\\HHHp} has been shown to be inefficient \\citep{milligan:00}.\n \\subsubsection{The {\\Np} + {\\HH} reaction and isotopic substitutions}\n\\label{sec:nph2}\nThis almost thermoneutral reaction deserves a special mention and has received considerable attention. \\cite{lebourlot:91} first pointed out the possible role of ortho-{\\HH} in the interstellar chemistry of ammonia as the energy of its J=1 rotational level almost compensates for the small endothermicity of the reaction $ \\Np + \\HH \\rightarrow \\NHp + \\nH$. \\cite{dislaire:12} subsequently reanalysed the experimental data \\citep{marquette:88} and suggested new separate expressions for the reaction rate with p-{\\HH} and o-{\\HH}. Similar results were obtained by \\cite{zymak:13}, who also emphasized the possible role of the fine structure level of \\Np. We follow the prescription derived by \\cite{dislaire:12} and extend their analysis to deuterated forms and those including {\\fN}, as displayed in Table \\ref{tab:npHH}.
In the case of {\\fN} substituted compounds, we have taken into account the (small) additional term due to the change in ZPE. \n\\begin{table*}[h]\n\\caption{Reaction rate coefficients of {\\mbox{N$^{+}$}} + {\\HH} and isotopic variants.} \n\\label{tab:npHH} \n\\centering \n\\begin{tabular}{l c l l c } \n\\hline\\hline \n \n\\multicolumn{3}{c}{Reaction} & k (cm$^{3}$ s$^{-1}$)& Comment\\\\\n\\hline \n{\\bf{$\\Np + p-\\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHp + \\nH$}}& 8.35 $\\times$ 10$^{-10}$ $\\times$ exp(-168.5\/T) & \\citealt{dislaire:12}\\\\ \n{\\bf{$\\Np + o-\\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHp + \\nH$}}& 4.2 $\\times$ 10$^{-10}$ $\\times$ (T\/300)$^{-0.17}$ $\\times$ exp(-44.5\/T) & \\citealt{dislaire:12} \\\\ \n{\\bf{$\\fNp + p- \\HH$}} & $\\rightarrow$ & {\\bf{$ \\fNHp + \\nH$}}& 8.35 $\\times$ 10$^{-10}$$\\times$ exp(-164.3\/T)& see text \\\\ \n{\\bf{$\\fNp + o- \\HH$}} & $\\rightarrow$ & {\\bf{$ \\fNHp + \\nH$}}& 4.2 $\\times$ 10$^{-10}$ $\\times$ (T\/300)$^{-0.17}$ $\\times$ exp(-39.7\/T) & see text \\\\ \n{\\bf{$\\Np + \\HD $}} & $\\rightarrow$ & {\\bf{$ \\NDp + \\nH$}}& 3.17 $\\times$ 10$^{-10}$ $\\times$ exp(-16.3\/T) & \\citealt{marquette:88} \\\\ \n{\\bf{$\\Np + \\HD $}} & $\\rightarrow$ & {\\bf{$ \\NHp + \\nD$}}& 3.17 $\\times$ 10$^{-10}$ $\\times$ exp(-594.3\/T) & see text\\\\ \n{\\bf{$\\fNp + \\HD $}} & $\\rightarrow$ & {\\bf{$ \\fNDp + \\nH$}}& 3.17 $\\times$ 10$^{-10}$ $\\times$ exp(-9.3\/T) & see text \\\\ \n{\\bf{$\\fNp + \\HD $}} & $\\rightarrow$ & {\\bf{$ \\fNHp + \\nD$}}& 3.17 $\\times$ 10$^{-10}$ $\\times$ exp(-589.5\/T) & see text \\\\ \n{\\bf{$\\Np + \\DD $}} & $\\rightarrow$ & {\\bf{$ \\NDp + \\nD$}}& 2.37 $\\times$ 10$^{-10}$ $\\times$ exp(-197.9\/T) & \\citealt{marquette:88} \\\\ \n{\\bf{$\\fNp + \\DD $}} & $\\rightarrow$ & {\\bf{$ \\fNDp + \\nD$}}& 2.37 $\\times$ 10$^{-10}$ $\\times$ exp(-190.9\/T) & \\citealt{marquette:88} \\\\ \n \\hline \n\\end{tabular}\n\\end{table*}\n These expressions should be used with caution as in 
their kinetic expression, we consider that the exponential term represents the enthalpy difference between the products and reactants. The capture rate constant of the {\\fNp} + HD reaction is about (2\/3)$^{0.5}$\ntimes that of the {\\fNp} + {\\HH} reaction, due to the different mass dependences. The formation of {\\fNDp} is favored at low temperatures.\n\\subsubsection{ {\\NHHHp} + \\HH}\nThe final step of the ion-molecule reaction chain leading to ammonia formation is the reaction between {\\NHHHp} and \\HH, giving \\NHHHHp. Although exothermic, this reaction has a strong temperature dependence, displaying a minimum at T $\\sim$ 100K and a slow increase at lower temperatures \\citep{barlow:87}, which is attributed to the presence of a barrier to complex formation, as discussed in \\cite{herbst:91}. At temperatures close to 10K, the reaction is likely to proceed through tunneling or H atom abstraction. However, the {\\NHHHp} reaction with D$_2$ is found to be slower when the temperature decreases as tunneling is not efficient with deuterium. We thus reconsider the isotopic variants of this reaction, as shown in Table \\ref{tab:nhhhp}, where we give the present reaction rates compared to previous values which were derived from \\cite{anicich:86}.\nThese values are indeed different from those used in our previous studies \\citep{roueff:05}, where we assumed the same rate for the channels resulting from the reactions of {\\NHHHp} and isotopologues with HD, based on pure statistical considerations.
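To illustrate why the ortho\/para ratio of {\HH} is such a sensitive parameter here, the following sketch (our own helper function, built from the para and ortho fits quoted in Table \ref{tab:npHH}) combines the two channels into an effective $\Np$ + {\HH} rate coefficient:

```python
import math

def k_np_h2_effective(temp, opr):
    """Effective rate coefficient (cm^3 s^-1) of N+ + H2 -> NH+ + H for a
    given ortho/para H2 ratio, weighting the para and ortho fits of
    Dislaire et al. (2012) by the corresponding H2 fractions."""
    k_para = 8.35e-10 * math.exp(-168.5 / temp)
    k_ortho = 4.2e-10 * (temp / 300.0) ** -0.17 * math.exp(-44.5 / temp)
    x_ortho = opr / (1.0 + opr)  # fraction of H2 in the ortho form
    return (1.0 - x_ortho) * k_para + x_ortho * k_ortho
```

At 10 K and an o\/p ratio of 10$^{-3}$, the ortho channel still contributes over two orders of magnitude more than the para channel, so the adopted o\/p ratio directly controls the pace of the whole ammonia (and {\fN}-ammonia) formation chain.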
\n We discuss the resulting modifications in Section \\ref{sec:model}.\n\\begin{table*}[h]\n\\caption{Reaction rate coefficients of {\\NHHHp} + {\\HH} and isotopic variants at T = 10K.} \n\\label{tab:nhhhp} \n\\begin{center} \n\\begin{tabular}{l c l c c} \n\\hline\\hline \n \n\\multicolumn{3}{c}{Reaction} & \\multicolumn{2}{c}{k (cm$^{3}$ s$^{-1}$)} \\\\\n & & & present work & old value (*) \\\\\n\\hline \n{\\bf{$\\NHHHp + \\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHHHHp + \\nH$}}& 8.2 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$\\NHHHp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHHHHp + \\nD$}}& 8.2 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$\\NHHHp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHHHDp + \\nH$}}& 1.0 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n{\\bf{$\\NHHHp + \\DD$}} & $\\rightarrow$ & {\\bf{$ \\NHHHDp + \\nD$}}& 1.0 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHHDp + \\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHHHDp + \\nH$}}& 8.2 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHHDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHHHDp + \\nD$}}& 8.2 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHHDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHHDDp + \\nH$}}& 1.0 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n{\\bf{$ \\NHHDp + \\DD$}} & $\\rightarrow$ & {\\bf{$ \\NHDDDp + \\nH$}}& 1.0 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHDDp + \\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHHDDp + \\nH$}}& 8.2 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHDDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHHDDp + \\nD$}}& 8.2 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$ \\NHDDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHDDDp + \\nH$}}& 1.0 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n{\\bf{$ \\NHDDp + \\DD$}} & $\\rightarrow$ & {\\bf{$ \\NHDDDp + \\nD$}}& 1.0 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ 
\n{\\bf{$\\NDDDp + \\HH$}} & $\\rightarrow$ & {\\bf{$ \\NHDDDp + \\nH$}}& 8.2 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$\\NDDDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NHDDDp + \\nD$}}& 8.2 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n {\\bf{$\\NDDDp + \\HD$}} & $\\rightarrow$ & {\\bf{$ \\NDDDDp + \\nH$}}& 1.0 $\\times$ 10$^{-13}$ & 1.2 $\\times$ 10$^{-12}$ \\\\ \n{\\bf{$\\NDDDp + \\DD$}} & $\\rightarrow$ & {\\bf{$ \\NDDDDp + \\nD$}}& 1.0 $\\times$ 10$^{-13}$ & 2.4 $\\times$ 10$^{-12}$ \\\\ \n \\hline \n\\end{tabular}\n\\end{center}\n(*) \\cite{roueff:05}\n\\end{table*}\nIdentical reaction rate coefficients are used for the $^{15}$N isotopically substituted reactions. \n\n \\section{Models}\n\\label{sec:model}\n\\subsection{General features}\nChemical reactions involving nitrogen atoms and CH, CN and OH have been studied experimentally at low temperatures and shown to be less efficient than previously thought \\citep{daranlot:12,daranlot:13}, which was confirmed by theoretical studies.\nThe corresponding reaction rate constants have been implemented in the KIDA chemical data base \\citep{wakelam:13} and we have updated our chemical network accordingly. We also include the reactions discussed by \\cite{loison:14} in their study of HCN \/ HNC chemistry. We explicitly include deuterium, {\\tC} and {\\fN} molecular compounds in our chemical network. The reactions displayed in Table \\ref{tab:exch} have been included, which allows us to test the hypothesis of a constant {\\dC}\/{\\tC} isotopic ratio to derive the {\\nN\/\\fN} ratio in {\\CfiN} containing molecules. \nWe take into account the role of the ortho\/para ratio of molecular {\\HH} in the {\\HHDp} + {\\HH} and {\\Np} ({\\fNp}) + {\\HH} chemical reactions in an approximate way: we do not compute the full ortho\/para equilibrium as in the models of \\cite{flower:06,pagani:11,faure:13} and rather introduce it as a model parameter, which can be varied.
\\cite{faure:13} have found a value of 10$^{-3}$ for temperatures below 15K.\n\nApart from exchange reactions, reactions involving isotopic molecules are assumed to have the same total rate constant as those involving the main isotope, except for the reaction of {\\fNp} with H$_{2}$\/HD\/D$_{2}$. The various reaction channels are obtained from statistical considerations, in the absence of experimental information. We restrict carbon containing molecules to 3 carbon atoms, nitrogen containing molecules to 2 nitrogen atoms and consider full deuteration as in our previous studies \\citep{roueff:05}. Within these constraints, the number of species considered in the model is 307, linked through more than 5400 chemical reactions. \n\\begin{table}[h]\n\\caption{Model definitions. The {C \/ \\tC} and {N \/ \\fN} ratios are respectively taken as 68 \\citep{milam:05} and 440\n \\citep{marty:11}. } \n\\label{tab:model} \n\\centering \n\\begin{tabular}{lll } \n\\hline\\hline \n & Model (a) & Model (b) \\\\\n \\hline\ndensity {\\mbox{n$_H$}} (cm$^{-3}$) & 2 $\\times$ 10$^4$ & 2 $\\times$ 10$^5$ \\\\\nTemperature (K) & 10 & 10 \\\\\ncosmic ionization rate per H$_2$ (s$^{-1}$) & 1.3 $\\times$ 10$^{-17}$ & 1.3 $\\times$ 10$^{-17}$ \\\\\n\\hline\nHe {\/} H & 0.1 & 0.1 \\\\ \nC {\/} H & 4.15 $\\times$ 10$^{-5}$ & 1.4 $\\times$ 10$^{-5}$ \\\\ \nN {\/} H & 6.4 $\\times$ 10$^{-5}$ & 2.1 $\\times$ 10$^{-5}$ \\\\ \nO {\/} H & 6 $\\times$ 10$^{-5}$ & 2.0 $\\times$ 10$^{-5}$ \\\\ \nS {\/} H & 8.0 $\\times$ 10$^{-8}$ & 8.0 $\\times$ 10$^{-8}$ \\\\\nFe \/ H & 1.5 $\\times$ 10$^{-9}$ & 1.5 $\\times$ 10$^{-9}$ \\\\ \n \\hline \n \\hline \n\\end{tabular}\n\\end{table}\nWe consider two different models as displayed in Table \\ref{tab:model}. Model (a) may be considered as a template of TMC1, and assumes a density of hydrogen nuclei n$_H$ = 2 $\\times$ 10$^4$ cm$^{-3}$.
The elemental abundance of carbon relative to hydrogen nuclei is taken as 4.15 $\\times$ 10$^{-5}$ to reproduce the derived relative abundance of CO \\citep{ohishi:92}. We derive the oxygen elemental abundance by imposing a C\/O ratio of 0.7 appropriate for TMC1 and take the nitrogen elemental abundance used by \\cite{legal:14} in their work on nitrogen chemistry. The elemental abundance of sulfur is not well constrained and we have taken the low metal case value of 8.0 $\\times$ 10$^{-8}$.\nModel (b) is more representative of a pre-stellar core with a density of 2 $\\times$ 10$^5$ cm$^{-3}$, similar to L134N or Barnard 1 (B1), where the elemental abundances of carbon, oxygen and nitrogen are reduced by a factor of 3\nto account for depletion.\n The temperature is T~=~10K in both cases and the cosmic ionization rate $\\zeta$ per H$_2$ is 1.3 $\\times$ 10$^{-17}$ s$^{-1}$ as in \\cite{legal:14}.\n\\subsection{Results}\n\nWe summarize our results obtained with a 10$^{-3}$ value of the o\/p ratio of {\\HH} in Table \\ref{tab:res} and give some observational values for comparison. Time dependent effects may be visualized\n from the values reported at 10$^6$ years and at steady state for model (a). \n Steady state is reached after a few 10$^7$ and 10$^6$ years respectively for models (a) and (b).\n As there are fewer $^{15}$N enrichment reactions than previously assumed \\citep{terzieva:00},\n most nitrogen containing species are found to have isotopic abundance ratios close to the solar value ($^{14}$N\/$^{15}$N = 440) \ngiven by the Genesis mission \\citep{marty:11}. \n\\begin{table*}[h]\n\\caption{Model (a) and (b) results and observations. ss means stationary state. The value of the o\/p ratio of {\\HH} is taken as 10$^{-3}$.
} \n\\label{tab:res} \n\\centering \n\\begin{tabular}{l|cc|c|lll} \n\\hline\\hline \n & \\multicolumn{2}{c|}{Model (a)} & Model (b) & TMC1 & L1544 & B1\\\\\n & t= 10$^6$ yrs& ss & ss & & & \\\\ \n\n \\hline\nelectronic fraction & 1.4 $\\times$ 10$^{-8}$ & 2.8 $\\times$ 10$^{-8}$ & 1.7 $\\times$ 10$^{-8}$ & & & \\\\\nN \/ \\fN & 440 & 456 & 455 & & & \\\\ \n2 $\\times$ N$_2$ \/ \\fN N & 438 &431& 437& & & \\\\\nNH \/ \\fNH & 429 & 428 & 421 & & & \\\\ \nNH \/ ND & 16 & 31 & 9 & & & \\\\ \n\\NHHH \/ \\HH & 6.7 10$^{-9}$ & 1.3 10$^{-9}$ & 6.0 10$^{-9}$& 2 10$^{-8}$ $^{(8)}$ & & \\\\\n\\NHHH \/ \\fNHHH & 333 & 386 & 387 & & & 300$^{+ 55}_{-40}$ $^{(1)}$\\\\\n\\NHHD \/ \\HH & 3.8 10$^{-10}$ & 5.8 10$^{-11}$ & 3.3 10$^{-10}$ & 4 10$^{-10}$ $^{(9)}$& & \\\\\n\\NHHD \/ \\fNHHD & 215 & 276 & 336 & & & 230$^{+ 105}_{-55}$ $^{(1)}$ \\\\\n\\NHHH \/ \\NHHD & 18 & 22 & 18 & 50 $^{(9)}$& & \\\\\n\\NNHp \/ \\HH & 4.8 10$^{-10}$ & 1.3 10$^{-10}$ & 2.1 10$^{-10}$ & 5 10$^{-10}$ $^{(8)}$ & & \\\\\n\\NNHp \/ \\NfNHp & 431 & 430 & 423 & &1050$^{\\pm 220}$ $^{(2)}$ & 400$^{+ 100}_{-65}$ $^{(1)}$\\\\\n\\NNHp \/ \\fNNHp & 437 & 432 & 433 & & 1110$^{\\pm 240}$ $^{(2)}$ & $>$ 600 $^{(1)}$ \\\\\n\\NNHp \/ \\NNDp & 16 & 29 & 8.6 & 12.5 $^{(9)}$ & & 2.9 $^{(1)}$ \\\\\nCN \/ \\HH & 6.8 10$^{-9}$ & 5.5 10$^{-9}$ & 1.2 10$^{-9}$ & 3 10$^{-8}$ $^{(8)}$ & & \\\\\nCN \/ \\tCN & 67 & 84 & 63 & & &50$^{+19}_{-11}$ $^{(1)}$\\\\\nCN \/ \\CfiN & 430 & 449 & 445 & & & 240$^{+135}_{-65}$ $^{(1)}$\\\\\n\\tCN \/ \\CfiN & 6.4 & 5.3 & 7.0 & & 7.5$^{\\pm 1}$ $^{(3)}$ &\\\\\nHCN \/ \\HH & 7.4 10$^{-9}$& 5.9 10$^{-10}$ & 5.4 10$^{-10}$ & 2 10$^{-8}$ $^{(8)}$ & & \\\\\nHCN \/ \\tHCN & 93 & 168 & 114 & & & 30$^{+7}_{-4}$ $^{(1)}$\\\\\nHCN \/ \\fHCN & 398 & 445 & 453 & & & 165$^{+30}_{-20}$ $^{(1)}$\\\\\n \\tHCN \/ \\fHCN & 4.3 & 2.6 & 4.0 & 2 - 4.5 $^{(5)}$ & & 5.5 $\\pm$1 $^{(1)}$ \\\\\nHCN \/ DCN & 43 & 96 & 22 & & & 20$^{+6}_{-10}$ $^{(1)}$\\\\\nHNC \/ \\HH & 5.6 10$^{-9}$ & 7.4 10$^{-10}$ & 8.4 10$^{-10}$ & 2 
10$^{-8}$ $^{(8)}$ & & \\\\\nHNC\/ \\tHNC & 93 & 180 & 121 & 54 - 72 $^{(4)}$ & & 20$^{+5}_{-4}$ $^{(1)}$\\\\\nHNC\/ \\fHNC & 405 & 442 & 446 & 250 - 330 $^{(4)}$ & & 75$^{+25}_{-15}$ $^{(1)}$ \\\\\n{\\tHNC} \/ {\\fHNC} & 2.5 & 1.75 & 3.7 & 4.6 $\\pm$ 0.6 $^{(4)}$ & & 3.7 $\\pm$ 1 $^{(1)}$ \\\\\nHNC\/ DNC & 23 & 66 & 16 & & & 2.9$^{+1.1}_{-0.9}$ $^{(1)}$\\\\\nNO \/ \\HH & 1.0 10$^{-7}$ & 3.1 10$^{-8}$ & 4.1 10$^{-8}$ & 2.7 10$^{-8}$ $^{(10)}$ & & \\\\\nNO \/ {\\fN}O & 438 & 451 & 446 & & & \\\\ \nCO \/ \\HH & 8.1 10$^{-5}$ & 8.0 10$^{-5}$ & 2.8 10$^{-5}$ & 8 10$^{-5}$ $^{(8)}$ & & \\\\\n {CO} \/ {\\tC}O & 68 & 67.4 & 68 & & & \\\\\nCH \/ \\HH & 1.3 10$^{-9}$ & 1.7 10$^{-9}$ & 1.7 10$^{-10}$ & 2 10$^{-8}$ $^{(8)}$ & & \\\\\n{CH} \/ {\\tC}H & 74 & 154 & 71 & $>$ 71 $^{(6)}$ & & \\\\\n\\HCOp \/ \\HH & 2.5 10$^{-9}$ & 3.6 10$^{-10}$ & 5.7 10$^{-10}$ & 8 10$^{-9}$ $^{(8)}$ & & \\\\\n{\\HCOp} \/ {\\HtCOp} & 56 & 65 & 56 & & & 59 $^{(7)}$ \\\\\n{\\HCOp} \/ {\\DCOp} & 15 & 29 & 8.4 & 50 $^{(9)}$ & & \\\\\n\\hline \n \\hline \n\\end{tabular}\n\\tablebib{(1)~\\citet{daniel:13};\n(2) \\citet{bizzocchi:13}; (3) \\citet{hilyblant:13}; (4) \\citet{liszt:12};\n(5) \\citet{hilyblant:13b}; (6) \\citet{sakai:13b}; (7) \\citet{hirano:14}, assuming CO\/C$^{18}$O=500;\n(8) \\citet{ohishi:92}; (9) \\citet{tine:00}; (10) \\citet{gerin:93}.\n}\n\\end{table*}\n\\section{Discussion}\nWe display the time dependence of various isotopic ratios and fractional abundances relative to {\\HH} and discuss the chemical behavior involved \nin the fractionation processes for the two reported models. We first\n consider in Figure \\ref{fig:nn2} the reservoirs of nitrogen, atomic and molecular nitrogen, as well as {\\NNHp} ions which are chemically linked to \\NN.\n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_1a.pdf}\n \n \\caption{ Upper panel : time dependence of N \/ {\\fN} isotopic ratios in atomic and molecular nitrogen and {\\NNHp} ions.
(a) and (b) correspond to the models defined in Table \\ref{tab:model}; the black heavy dotted line represents the elemental N \/ \\fN. Lower panel : time dependence of the fractional abundances relative to {\\HH} of N, {\\NN} and {\\NNHp} for models (a) and (b). The value reported for {\\NNHp} towards TMC1 \\citep{ohishi:92} is displayed as a horizontal green dashed line.}\n \\label{fig:nn2}%\n \\end{figure*}\nAtomic nitrogen becomes depleted in {\\fN} after about 10$^5$ years whereas molecular nitrogen is slightly enriched. \n These evolution times are also required to build significant amounts of molecular compounds which compare satisfactorily with available observations. \n The overall dependence of {\\NNHp} follows closely that of {\\NN} as it is formed from the {\\NN} + {\\HHHp} reaction, with a slight decoupling between {\\fNNHp} and {\\NfNHp} at long evolution times. \nWe find that the isotopic ratio of the {\\NNHp} ions displays an almost constant value close to the solar value after some 10$^5$ years. These ratios are in good agreement with observations in B1 but disagree by a factor of 2 for L1544.\n The trend that {\\fNNHp} is less abundant than {\\NfNHp} is reproduced in our results, as a result of the differences of endothermicity in their reactions with \\NN. We checked that introducing \\fN$_2$ and species containing two {\\fN} atoms had no effect on these ratios.\nThe high isotopic ratio found in L1544 implies an equivalently large ratio for molecular nitrogen, which is in strong contradiction with our findings, and we could not find any gas-phase mechanism able to generate \n such a large ratio in pre-stellar core conditions.\nThese results are markedly different from those derived by \\cite{hilyblant:13} who found a moderate {\\fN} enrichment as these authors introduced the {\\fN} + {\\NNHp} fractionation reaction which we have found not to occur.
\\footnote{These authors also interchanged the endothermicities of the {\\fNNHp} and {\\NfNHp} reactions with N and {\\NN}.}\n\\subsection{Nitrogen hydrides}\n\\label{sec:NH}\nWe display in Figure \\ref{fig:nh3-fN} the time evolution of the isotopic ratios of nitrogen hydrides and of their fractional abundances relative to \\HH. {\\NHHH} and {\\NHH} have very similar behavior, as they both result from the reaction chain starting with the {\\Np} + {\\HH} reactions. \n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_2.pdf}\n \n \\caption{ Upper panel : Time dependence of N \/ {\\fN} isotopic ratios in nitrogen hydrides. (a) and (b) correspond to the models defined in Table \\ref{tab:model}; the black heavy dotted line represents the elemental N \/ \\fN.\n Lower panel : time dependence of the fractional abundances relative to {\\HH} of NH, {\\NHH}, {\\NHHH} and {\\NHHD} for models (a) and (b). The values reported for {\\NHHH} \\citep{ohishi:92} and {\\NHHD} \\citep{tine:00} towards TMC1 are displayed as horizontal blue and red dashed lines respectively in the left panel.} \\label{fig:nh3-fN}%\n \\end{figure*}\n The species NH$_2$ and NH$_3$ are found to be enriched in $^{15}$N due to the $^{15}$N$^{+}$ + o-H$_{2}$ reaction, which has a slightly smaller endothermicity, as reported in Table \\ref{tab:npHH}, than the \ncorresponding $^{14}$N$^{+}$ + o-H$_2$ reaction, a difference which slightly favors $^{15}$NH$^{+}$ formation. Nevertheless, despite the large uncertainties regarding \nthe rate of the N$^{+}$ + H$_2$ reaction, our results are in fair agreement with the results of \\cite{daniel:13} for prestellar core B1.\nAs the N$^{+}$ + HD $\\rightarrow$ ND$^{+}$ + H reaction has a smaller endothermicity than the N$^{+}$ + H$_{2}$ $\\rightarrow$ NH$^{+}$ + H reaction, \nthe modeled $^{14}$NH$_2$D \/ $^{15}$NH$_2$D ratio behaves somewhat differently than the $^{14}$NH$_3$ \/ $^{15}$NH$_3$ ratio.
The ratio exhibits large variations around 10$^5$ years and becomes smaller than that of {\\NHHH} at large times and at steady state.\nThe observed values are compatible with calculations at sufficiently large times. The values of this ratio reported in \\cite{gerin:09} have been found to be too large as a result of assuming a single rotational excitation temperature. The future availability of collisional excitation rates of {\\NHHD} by {\\HH} \\citep{daniel:14} may give rise to additional changes.\n NH does not follow the same trend as NH$_2$ and NH$_3$ and is only slightly enriched in \\fN.\n As discussed by \\cite{hilyblant:13}, NH is mainly formed \n from the dissociative recombination (DR) of \\NNHp, a reaction which has been recently revisited by \\cite{vigren:12},\n who derive a branching ratio towards NH of 7\\%.\n We have checked that this analysis still holds for model (b) even if the reaction of {\\NNHp} with CO becomes more efficient.\nThe formation route of NH through NH$_2^+$ recombination may take over for highly depleted CO. \n\n\\subsection{Nitriles and isonitriles}\nDeriving {\\fN} isotopic ratios of CN, HCN and HNC from observations is a difficult challenge as the transitions of the main isotopologues are optically thick.\nThus, most of the reported observational values of the {\\nN} \/ {\\fN} molecular ratios \n are obtained from the ratios of the minor isotopologues $^{13}$CN \/ C$^{15}$N, H$^{13}$CN \/ HC$^{15}$N \n and HN$^{13}$C \/ H$^{15}$NC, which are subsequently multiplied by an assumed C \/ {\\tC} value, usually taken as 68 \\citep{milam:05}. \n\n\\subsubsection{ {\\tC}\/ {\\fN} ratios}\n %\n We test these hypotheses in our models by explicitly introducing fractionation reactions of \\tC, as discussed in Section \\ref{sec:reac}.\n Table \\ref{tab:res} shows that\nthe {\\tC} isotopic ratios of CN, HCN and HNC vary both with time and density.
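For illustration, the bookkeeping behind this conversion can be sketched numerically. This is a minimal sketch, not part of the chemical model: the values 68 for C\/{\tC} and 440 for the elemental N\/{\fN} are the ratios adopted in this paper, and the function name is ours.

```python
# Reconstructing a 14N/15N ratio from a minor-isotopologue observation,
# e.g. R(HCN/HC15N) = R(H13CN/HC15N) * (C/13C), assuming the molecule
# carries the elemental carbon isotopic ratio.
C_13C = 68.0    # assumed elemental 12C/13C ratio
N_15N = 440.0   # assumed elemental 14N/15N ratio

def n_ratio_from_minor(r_13c_15n, c_ratio=C_13C):
    """Infer 14N/15N from an observed 13C-isotopologue / 15N-isotopologue ratio."""
    return r_13c_15n * c_ratio

# The elemental 13C/15N double ratio implied by these choices:
print(round(N_15N / C_13C, 2))  # -> 6.47
```

Note that the implied elemental $^{13}$C\/$^{15}$N double ratio ($\approx 6.5$) is the baseline against which the model deviations for CN, HCN and HNC are judged.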
\n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_3.pdf}\n \n\n \\caption{Time evolution of \\tC \/ {\\fN} ratios in CN, HCN and HNC for models (a) and (b). The \\tC\/{\\fN} elemental ratio is displayed as a heavy dotted line.}\n \\label{fig:1315}%\n \\end{figure*}\nFigure \\ref{fig:1315} displays the $^{13}$CN \/ C$^{15}$N, H$^{13}$CN \/ HC$^{15}$N \n and HN$^{13}$C \/ H$^{15}$NC ratios as a function of time for the two considered models.\n The deviation from the elemental ratio of 6.48 is\n significant for HCN, HNC and CN. \n\n %\n\\subsubsection{{\\tC} chemistry}\nWe now consider the time dependences of the $^{12}$C \/ $^{13}$C isotopic ratios in CN, HCN and HNC species as displayed in Figure \\ref{fig:tCN}.\nThe time-dependent ratios display large variations,\n due to the various reactions incorporating $^{13}$C \nin the molecules.\nThe elemental value of the ratio (68) is recovered only in a narrow range around 1 Myr for model (a) and 2 $\\times$ 10$^5$ yrs for model (b), but steady-state values are significantly different except for CN. \n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_4.pdf}\n \n \\caption{Time dependence of C \/ {\\tC} isotopic ratios in CN, HCN and HNC. (a) and (b) correspond to the models defined in Table \\ref{tab:model}.}\n \\label{fig:tCN}%\n \\end{figure*}\n %\nThis relatively complex behavior results from the many different reaction channels involved in {\\tC} chemistry. \nWe then also consider other {\\tC}-containing species and display the {\\dC} \/ {\\tC} ratio in C, CH, CO and HCO$^+$\nin Figure \\ref{fig:tC} as well as their fractional abundances relative to \\HH.\n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_5.pdf}\n \n \\caption{ Upper panel : Time dependence of C \/ {\\tC} isotopic ratios in C, CH, CO and HCO$^+$.
The black heavy dotted line represents the elemental {\\dC} \/ \\tC.\n Lower panel : Time dependence of the fractional abundances relative to {\\HH} of C, \\Cp, CO, CH and {\\HCOp}, for models (a) and (b). The observational values towards TMC1 \\citep{ohishi:92} are displayed as horizontal dashed lines with corresponding colors in the left panel. (a) and (b) correspond to the models defined in Table \\ref{tab:model}}\n \\label{fig:tC}%\n \\end{figure*}\nThe $^{12}$C\/$^{13}$C isotopic ratios of the various molecules are highly dependent\non the evolution time. \nThe transition from gas-phase atomic carbon toward CO controls the $^{13}$C enrichment. As long as there is still a relatively \nhigh carbon atom concentration in the gas phase, there is enough free $^{13}$C to allow strong enrichment of CN\nthrough the {\\tC} + CN reaction. When CO molecules become the reservoir of carbon, even if in that case the $^{13}$C concentration is low, the $^{13}$C$^{+}$ + $^{12}$CO $\\rightarrow$ \n$^{12}$C$^{+}$ + $^{13}$CO reaction still leads to a small $^{13}$CO enrichment \\citep{langer:90,milam:05}. Although this small excess is not measurable in CO, significant amounts of $^{13}$C are locked up in CO and most of the other carbon containing species become depleted in {\\tC}, as found for CH and other carbon chains. \n This effect is seen in Figure \\ref{fig:tC} for the isotopic ratio of CH in model (a) at steady state where CO is slightly enriched,\nleading to a significant depletion of {\\tC} in CH. \nOur results \nfor C and {\\HCOp} are similar to those of \\cite{furuya:11} who studied the {\\tC} fractionation of multiple carbon chains by \nexplicitly introducing the dependence of the {\\tC} position in the chain.\nWe see that\n{\\HCOp} is marginally enriched in {\\tC} at steady state whereas HCN and HNC are significantly depleted in \\tC. 
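The temperature sensitivity of this isotope-exchange channel can be illustrated with a short numerical sketch. This is a simplified equilibrium estimate, not our chemical network: the zero-point energy difference $\Delta E \approx 35$ K for the $^{13}$C$^+$ + CO exchange is an assumed literature value.

```python
import math

# Equilibrium enhancement of 13CO by the exchange reaction
# 13C+ + 12CO <-> 12C+ + 13CO + dE, with dE ~ 35 K (assumed value).
# The backward/forward rate ratio scales as exp(-dE/T), so the 13CO
# enhancement factor relative to the elemental ratio is roughly:
def exchange_enhancement(T, dE=35.0):
    return math.exp(dE / T)

for T in (10.0, 20.0, 50.0):
    print(f"T = {T:4.0f} K  enhancement ~ {exchange_enhancement(T):.1f}")
```

At 10 K the factor is of order 30, which is why the exchange remains effective even when the free $^{13}$C$^+$ abundance is low; at higher temperatures the effect quickly washes out.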
\n We also note that CN and {\\HCOp} may react via proton transfer to give CO + HNC$^+$ and have checked\n, at the DFT level, the absence of any barrier. The corresponding reaction rate is 2.2 $\\times$ 10$^{-9}$ ($\\frac{T}{300}$)$^{-0.4}$\n cm$^{3}$ s$^{-1}$ when using the capture rate theory. This reaction has not been included in any chemical database up to now. \n \nHNC has been shown recently to react with atomic carbon \\citep{loison:14}, which leads to the different steady state isotopic ratios obtained for HCN and HNC.\n The CN chemistry is then somewhat decoupled from that of HCN and HNC. HCN and HNC are formed at relatively long times via CN + H$_3^{+}$ $\\rightarrow$ HNC$^{+}$\/HCN$^{+}$ + H$_2$, followed immediately by HNC$^{+}$ + H$_2$ $\\rightarrow$ HCNH$^{+}$ + H giving back HCN and HNC via DR \\citep{mendes:12}. With the adopted elemental abundances, the main CN destruction reactions are, however, O + CN and N + CN, so that the HCNH$^{+}$\/HCN$^{+}$\/HNC$^{+}$\/HCN\/HNC\/CN \n network is not a closed system, in contrast to models including coupled gas-grain chemistry \\citep{loison:14}.\n\n %\n \\subsubsection{ {\\fN} fractionation} \nWe now also display the N \/ {\\fN} fractionation in nitriles as well as that of NO in Figure~\\ref{fig:15CN}.\n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_6.pdf}\n \n \\caption{Time dependence of N \/ {\\fN} isotopic ratios in CN, HCN, HNC and NO. (a) and (b) correspond to the models defined in Table \\ref{tab:model}.}\n \\label{fig:15CN}%\n \\end{figure*}\nThe time dependences of the isotopic ratios are markedly different in the two models, except for NO. Whereas CN, HCN and HNC are somewhat enriched in {\\fN} for model (a) both at intermediate times and steady state, the opposite result is obtained in model (b) after 10$^{5}$ years.
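As a quick numerical check of the capture-theory expression quoted above for CN + {\HCOp} (a simple evaluation of the fitted formula, not an independent calculation; 10 K is a representative pre-stellar core temperature):

```python
# Rate coefficient for CN + HCO+ -> CO + HNC+ from capture theory,
# k(T) = 2.2e-9 (T/300)^(-0.4) cm^3 s^-1 (expression quoted in the text).
def k_capture(T):
    return 2.2e-9 * (T / 300.0) ** (-0.4)

print(f"k(10 K)  = {k_capture(10.0):.2e} cm^3 s^-1")   # ~8.6e-9
print(f"k(300 K) = {k_capture(300.0):.2e} cm^3 s^-1")  # 2.2e-9
```

The negative temperature exponent makes the reaction roughly four times faster at 10 K than at room temperature, as expected for an ion-neutral capture rate.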
We also display the time dependences of the fractional abundances of these molecules in Figure \\ref{fig:td} in order to better understand the previously described differences observed in the various fractionation ratios. \n \\begin{figure*}\n \\centering\n \\includegraphics[width=14cm]{Fig_7.pdf}\n \n \\caption{Time dependence of CN, HCN, HNC and their isotopologues. (a) and (b) correspond to the models defined in Table \\ref{tab:model}.}\n \\label{fig:td}%\n \\end{figure*}\n\nOur model (a) is intended to be representative of TMC1, a moderately dense cloud in an early evolutionary stage. \nWe see that the various abundances and ratios are very sensitive to the chosen ``age'', assumed to be the relevant chemical evolution time. Considering a TMC1 age of 1 Myr leads to a reasonable agreement with the sparse observations available (see Table \\ref{tab:res}). \nModel (b) is more likely to be representative of a denser evolved molecular cloud such as L134N, L1544, or Barnard B1, where elemental C, O, N are partially depleted through sticking on grains. Steady state values obtained with significant depletion conditions \\citep{roueff:05} may be used as corresponding proxies. We see that ammonia isotopologues are satisfactorily accounted for in our model. The agreement between observed and calculated \\tC \/ {\\fN} ratios is somewhat misleading as the modeled value mainly results from a significant depletion in the {\\tC} species. The ratios involving N \/ {\\fN} are found to be close to the elemental value in our model (b) at steady state (although {\\NHH} and {\\NHHH} are somewhat enriched in {\\fN} through the {\\fNp} + {\\HH} reaction as explained in subsection \\ref{sec:NH}), which is in agreement with the fact that no significant \ngas-phase fractionation mechanisms have been found. The occurrence of small ratios in B1 observations \\citep{daniel:13},\nif real, implies other mechanisms at work.
An obvious suggestion lies in the processes involved in adsorption\/desorption on grains and possible surface reactions.\n\\subsection{Role of the {\\HH} ortho to para ratio}\nWe now test the role of the ortho\/para ratio of molecular hydrogen (OPR) in the context of ammonia chemistry and fractionation determination. We run two additional models for each of models (a) and (b) by changing the OPR by a factor of 10 \nboth upwards and downwards. The abundances of p-{\\HH} and o-{\\HH} are expressed respectively as\nn(p-{\\HH}) = $\\frac{1}{1+\\rm{OPR}}$ n(\\HH) and \nn(o-{\\HH}) = $\\frac{\\rm{OPR}}{1+\\rm{OPR}}$ n(\\HH).\nThis ratio is introduced, in addition to the reaction of {\\HH} with {\\Np} (\\fNp), in the reverse\nreaction of the fractionation of \\HHHp, namely the \\HHDp + {\\HH} reaction, which plays a significant role in the deuterium \nfractionation of various molecules \\citep{pagani:11} as shown below:\n\\\\\n\\\\\n\\begin{tabular}{llll}\n$\\HHDp + \\rm{p}-\\HH$ & $\\rightarrow$ & $\\HHHp + \\HD$ & k$_1$ (cm$^3$ s$^{-1}$)\\\\\n$\\HHDp + \\rm{o}-\\HH$ & $\\rightarrow$ & $\\HHHp + \\HD$ & k$_2$ (cm$^3$ s$^{-1}$)\\\\\n\\end{tabular}\n\\\\\nwith k$_1$ = 2.0 $\\times$ 10$^{-9}$ $\\times$ exp(-232\/T) and \nk$_2$ = 2.0 $\\times$ 10$^{-9}$ $\\times$ exp(-61.5\/T). \n\\begin{table*}[h]\n\\caption{Dependence of the fractionation ratios on the o\/p ratio of {\\HH} for model (a). ss means steady state.
} \n\\label{tab:opa} \n\\centering \n\\begin{tabular}{|l|cc|cc|cc|} \n\\hline\\hline \n & \\multicolumn{2}{c|}{OPR = 10$^{-4}$} & \\multicolumn{2}{c|}{OPR = 10$^{-3}$} &\\multicolumn{2}{c|}{OPR = 10$^{-2}$}\\\\\n & t= 10$^6$ yrs& ss & t= 10$^6$ yrs& ss & t= 10$^6$ yrs& ss \\\\ \n\n \\hline\nelectronic fraction & 5.1 $\\times$ 10$^{-8}$ & 2.2 $\\times$ 10$^{-7} $ &4.8 $\\times$ 10$^{-8}$ & 2.1 $\\times$ 10$^{-7}$ & 3.7 $\\times$ 10$^{-8}$ & 4.2 $\\times$ 10$^{-8}$ \\\\\nN \/ \\fN & 440 & 456 & 440 & 456 & 440 & 452 \\\\ \n2 $\\times$ N$_2$ \/ \\fN N & 438& 430 & 438 &431& 438 & 437 \\\\\nNH \/ \\fNH & 431 & 429 & 429 & 428 & 426 & 418 \\\\ \nNH \/ ND & 16 & 31 & 16 & 31 & 18 & 23 \\\\ \n\\NHHH \/ \\fNHHH & 345 & 409 & 333 & 386 & 374 & 395 \\\\\n\\NHHD \/ \\fNHHD & 206 & 267 & 215 & 276 & 265 & 303 \\\\\n\\NHHH \/ \\NHHD & 8 & 14 & 17 & 22 & 43 & 63 \\\\\n\\NNHp \/ \\NfNHp &431 & 429 & 431 & 430 & 429 & 421\\\\\n\\NNHp \/ \\fNNHp & 437 & 432 & 437 & 432 & 436 & 433 \\\\\n\\NNHp \/ \\NNDp & 16 & 30 & 16 & 29 & 17 & 21 \\\\\nCN \/ \\tCN & 67 & 85 & 67 & 84 & 67 & 70 \\\\\nCN \/ \\CfiN & 432 & 449 & 430 & 449 & 429 & 434 \\\\\nHCN \/ \\tHCN & 92 & 168 & 93 & 168 & 93 & 165 \\\\\nHCN \/ \\fHCN & 401 & 453 & 398 & 445 & 400 & 413 \\\\\nHCN \/ DCN & 44 & 95 & 43 & 96 & 40 & 55 \\\\\nHNC\/ \\tHNC & 91 & 178 & 93 & 180 & 97 & 195 \\\\\nHNC\/ \\fHNC & 410 & 451 & 405 & 442 & 405 & 410 \\\\\nHNC\/ DNC & 23 & 66 & 23 & 66 & 22 & 32 \\\\\n\n{\\HCOp} \/ {\\HtCOp} & 57& 66 & 56 & 65 & 54 & 55 \\\\\n{\\HCOp} \/ {\\DCOp} & 15 & 30 & 15 & 29 & 16 & 19 \\\\\n\\hline \n \\hline \n\\end{tabular}\n\\end{table*}\n %\n \\begin{table}[h]\n\\caption{Dependence of the fractionation ratios on the o\/p ratio of {\\HH} for model (b) at steady state. 
} \n\\label{tab:opb} \n\\centering \n\\begin{tabular}{l|c|c|c} \n\\hline\\hline \n & {OPR = 10$^{-4}$} & {OPR = 10$^{-3}$} & {OPR = 10$^{-2}$}\\\\\n\n \\hline\nelectronic fraction & 2.2 $\\times$ 10$^{-8}$ & 1.7 $\\times$ 10$^{-8}$ & 1.1 $\\times$ 10$^{-8}$ \\\\\nN \/ \\fN & 457 & 455 & 445 \\\\ \n2 $\\times$ N$_2$ \/ \\fN N & 436 & 437 & 438 \\\\\nNH \/ \\fNH & 425 & 421 & 418 \\\\ \nNH \/ ND & 9 & 9 & 11 \\\\ \n\\NHHH \/ \\fNHHH & 396 & 387 & 416 \\\\\n\\NHHD \/ \\fNHHD & 322 & 336 & 376 \\\\\n\\NHHH \/ \\NHHD & 11 & 18 & 33 \\\\\n\\NNHp \/ \\NfNHp & 425 & 423 & 417 \\\\\n\\NNHp \/ \\fNNHp & 434 & 433 & 433 \\\\\n\\NNHp \/ \\NNDp & 8.7 & 8.6 & 9.7 \\\\\nCN \/ \\tCN & 65 & 63 & 60 \\\\\nCN \/ \\CfiN & 449 & 445 & 438 \\\\\nHCN \/ \\tHCN & 115 & 114 & 107 \\\\\nHCN \/ \\fHCN & 467 & 453 & 458 \\\\\nHCN \/ DCN & 25 & 22 & 18 \\\\\nHNC\/ \\tHNC & 120 & 121 & 118 \\\\\nHNC\/ \\fHNC & 461 & 446 & 453 \\\\\nHNC\/ DNC & 19 & 16 & 12 \\\\\n\n{\\HCOp} \/ {\\HtCOp} & 58 & 56 & 51 \\\\\n{\\HCOp} \/ {\\DCOp} & 8.6 & 8.4 & 9.4 \\\\\n\\hline \n \\hline \n\\end{tabular}\n\\end{table}\n\nTables \\ref{tab:opa} and \\ref{tab:opb} display the fractionation values for the three different assumed values of the OPR for models (a) and (b)\nand Figures \\ref{fig:opa} and \\ref{fig:opb} display the corresponding time evolutions. We see that the curves are \nalmost superposable for times less than 1 Myr for model (a) and less than several 10$^5$ yrs for model (b).\n The variations are significant at steady state. We find that the competition between the destruction channels of {\\Np}\nthrough its reactions with o-{\\HH} and CO plays a major role. If o-{\\HH} is the most efficient destruction channel, which occurs typically for \n$\\rm{OPR} > 200 \\times x_{CO}$ at 10 K, where x$_{CO}$ is the fractional abundance of CO relative to \\HH,\nformation of {\\NHHHHp} proceeds efficiently.
In the opposite case, {\\Np} is mainly destroyed through reaction with CO to yield {\\COp}, leading rapidly to {\\HCOp}, and {\\NOp} (which does not react with \\HH). As the DR rate coefficients of polyatomic ions increase significantly for larger polyatomic ions (the {\\NHHHHp} + e$^-$ reaction is 3 times more rapid than {\\HCOp} + e$^-$ and 20 times more rapid than {\\HHHp} + e$^-$), these two different channels impact the electron abundances and then affect many other species. \n Moreover, the reaction between {\\NHHH} and {\\HHHp} leads to {\\NHHHHp} + \\HH. {\\NHHHHp} then reacts mainly with electrons, recycling back to {\\NHHH}, which subsequently converts to {\\NHHHHp}, amplifying the electron loss through DR. \n These effects occur when CO becomes the main carbon reservoir, for sufficiently large evolution times. \nAs a result, the electron fraction\nis found to decrease when the OPR increases at large evolution times and at steady state.\nThe decreasing electron abundance with increasing OPR values acts to diminish the effect of all DR reactions\nand as a result to increase the abundances of {\\HHHp} and {\\HHDp} since DR is the main destruction channel in our conditions. The abundance of {\\HHDp} is even more enhanced through the {\\HHHp} + HD reaction. The abundances of other deuterated molecular ions produced through deuteron transfer reactions of abundant neutrals with {\\HHDp} are then increased as well, which impacts the subsequent formation of neutral deuterated compounds produced in the DR reactions. As a result, the HCN\/DCN and HNC\/DNC ratios decrease for increasing OPR values.\nThis example demonstrates the complexity of the interplay between the different chemistries.\n \n \\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{Fig_8.pdf}\n \n \\caption{Time dependence of fractionation ratios of CN, HCN, HNC in model (a) for 3 different OPR values.
Black heavy dotted line: elemental ratio; green: OPR = 10$^{-4}$; purple: OPR = 10$^{-3}$; red: OPR = 10$^{-2}$.}\n \\label{fig:opa}%\n \\end{figure*}\n \\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{Fig_9.pdf}\n \n \\caption{Time dependence of fractionation ratios of CN, HCN, HNC in model (b) for 3 different OPR values. Black heavy dotted line: elemental ratio; green: OPR = 10$^{-4}$; purple: OPR = 10$^{-3}$; red: OPR = 10$^{-2}$.}\n \\label{fig:opb}%\n \\end{figure*}\n\\section{Conclusions}\n\\label{sec:conclusion}\n For the first time, we have built an isotopically substituted gas-phase chemical network including D, {\\tC} and {\\fN} species and incorporated it in a time-dependent chemical model. Our model is based on a careful analysis of the possible gas-phase mechanisms involved in carbon and nitrogen fractionation by scrutinizing the few available experimental studies and performing DFT and ab initio quantum calculations on hypothetical reactions to check the possible presence of barriers in the reaction channels. One important result obtained is that the fractionation reaction of {\\fN} with {\\NNHp} is unlikely, due to the presence of a barrier, in contrast to the previous hypothesis made by \\cite{terzieva:00}. As a result, the modeled isotopic ratios involved in the isotopologues of \\NNHp are found to be very close to the elemental values and are similar to each other, in contradiction with observations towards L1544 \\citep{bizzocchi:13}. The availability of new collisional rate coefficients\n for the {\\NNHp - \\HH} system \\citep{lique:15} may, however, modify these conclusions. \n We also discarded the {\\fN} + {\\HCNHp} and {\\fN} + NO exchange reactions through similar arguments.
Tentative reaction rate coefficients are also proposed for carbon fractionation reactions involving {\\tCp} and {\\tC} with CN and {\\nCC}.\nAdditionally, we have explicitly considered the various isotopologues involved in the N$^{+}$ + {\\HH} reaction, assuming that the energy defect involved in the reactions of {\\Np} with para-{\\HH} is a ``real'' endothermicity. This leads to a slight decline of the exponential term when {\\fNp} reacts with {\\HH} and with HD compared to {\\Np}. This satisfactorily explains why {\\fNHHD} is found to be more enriched in {\\fN} than {\\fNHHH} in the observations. \nComparison between observations of nitriles and isonitriles and simulated values is much more questionable, as carbon and nitrogen chemistries are interdependent. Observed isotopic ratios are usually large and suffer from large error bars due to opacity effects in the main isotopologue and difficulties linked to nuclear spin effects. Whereas the various isotopologues follow a similar evolution, the isotopic ratios display significant variations due to slight shifts in the position of the maximum fractional abundances. \nA reasonable agreement is obtained between the observed {\\tC} \/ {\\fN} ratios for most of the species in L134N and Barnard B1 and steady state model values.\nOur model results show a strong depletion in {\\tC} and a near elemental ratio for {\\nN} \/ {\\fN}, whereas observations are usually interpreted by assuming an elemental ratio for {\\tC}-containing species, which leads to the incorrect assumption of {\\fN} enrichment. These considerations\nare undeniably dependent on the chosen elemental abundances, and in particular on the assumed C\/O ratio.\n We additionally point out a somewhat unexpected effect of the ortho to para ratio of \\HH, which significantly affects the fractional ionization, and consequently the level of deuterium fractionation through the respective orders of magnitude of DR rate coefficients of polyatomic molecular ions.
The importance of coupling C, O and N chemistries is emphasized. %\n\\section*{Acknowledgments} \n We acknowledge support of PCMI (Programme National de Physique et Chimie du Milieu Interstellaire). This work has been partially funded by the Agence Nationale de la\nRecherche (ANR) research project IMOLABS (ANR-13-BS05-0008). \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA major outstanding puzzle in modern physics is the nature of dark matter (DM) \\cite{pdg2018}. Despite the ever-improving sensitivities of direct detection experiments, the simplest dark matter candidates have not been observed, motivating searches for a wider range of possible dark sectors. Moreover, challenges that simple cold DM candidates face on sub-galactic scales \\cite{Kuhlen:2012ft,Bullock:2017xww} might be relieved with more complex dark sectors. For instance, self-interacting dark matter has been investigated to address small-scale challenges such as the core-cusp problem \\cite{Spergel:1999mh,Loeb:2010gj}.\n\nNearly all current dark matter detection strategies, ranging from direct-detection efforts in the laboratory to indirect signals from DM annihilation (or decay), are based on the assumption that the dark matter is distributed around the universe as a gas of free particles with a large number density. This picture naturally emerges if self-interactions within the dark sector are weak, but is not strictly prescribed by existing observational constraints. The most stringent limits arise from observations of the Bullet Cluster, which restrict the self-interaction cross section per unit mass to be $\\sigma_{\\chi\\chi}\/m_{\\rm DM} \\lesssim 1 ~{\\rm cm}^2\/{\\rm g}$ \\cite{Spergel:1999mh}. 
It is apparent that the limit on $\\sigma_{\\chi\\chi}$ itself is significantly weakened if dark matter is clustered into composite states with large masses $m_{\\rm DM}$.\n\nIf the dark sector has strong self-interactions, it would undergo a nucleosynthesis process in the early universe much like the nuclei of the standard model (SM), whereby individual particles coalesce to form large composite states \\cite{Hardy:2014mqa,Wise:2014ola}. SM nucleosynthesis suffers from a number of theoretical accidents (such as the deuterium bottleneck) that render certain light elements unstable and thereby inhibit the production pathway of ultra-heavy elements. Still, the SM manages to produce large composite systems \\cite{ghiorso1955new}. It is thus not surprising that a completely unconstrained dark sector could also produce large composite objects. We will refer to these composite states as ultra heavy dark matter (UHDM) \\cite{Gresham:2017cvl,Grabowska:2018lnd}.\n\nDirect searches for canonical DM in the form of a gas of small particles leverage the large influx of these particles in a detector to compensate for their small cross section with SM particles. For example, about $10^{16}$ WIMP particles of mass 100 GeV would pass through a ${\\rm m}^3$ detector in a year, allowing for the direct detection of WIMP-nucleon scattering cross sections as low as $10^{-45}\\text{ cm}^2$ \\cite{Roszkowski_2018}. UHDMs, on the other hand, would arrive with significantly lower flux and require a different detection strategy. Thus, an experimental strategy for UHDM detection should leverage generic signatures of large composite objects instead of focusing on the specifics of any one composite dark matter model, as the rich dynamics of an interacting dark sector can produce a plethora of models. 
Accordingly, we focus here on the fact that the many constituent particles of a UHDM will enhance its cross section with SM matter such that each rare UHDM encounter could result in a spectacular event with a distinct signature imitated by no SM effect. The detector sensitivity is therefore less important in such scenarios and can be traded for greater spacetime dimensions that maximize the probability of a rare UHDM transit. In this paper, our primary focus is to establish the detectability of such a signal. While we provide an example of a dark matter model that yields such a signal, it is straightforward to construct other examples of ultra-heavy particles that can cause similar damage (e.g., Q-balls \\cite{Kusenko:1997si}). \n\nDamage tracks left by particles passing through the Earth over geological time could be recorded by ancient rock samples buried underground. For this reason, such geological samples have been proposed and used as ``natural particle detectors'' in the past, including for magnetic monopoles \\cite{Fleischer1969,Eberhard1971,Kovalik1986,Jeon:1995rf,fleischer1969jan,fleischer1969aug,price1984,price1986}, WIMPs \\cite{Snowden-Ifft1995,Collar:1994mj,engel1995,SnowdenIfft:1997hd,Baum:2018tfw,Drukier:2018pdy,Edwards:2018hcf}, and neutrinos \\cite{Baum:2019fqm,Jordan:2020gxx}. The geological exposure times of ancient rock detectors range from the present back to the time they were last heated naturally to the point of annealing, which can be up to $\\sim 10^9$ years and is thus much longer than the exposure times of typical laboratory DM direct detection experiments (by factors of up to $10^9$). This advantage makes geological DM detectors ideal probes of sparser, higher-mass composite DM. As we discuss in Sec. \\ref{sec:feasibility}, searches with $10 ~{\\rm m}^2$ of billion-year-old rock would probe DM masses up to $m_{\\rm DM} \\sim 10^{28}~{\\rm GeV}$.
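The numbers quoted in this paragraph and the previous one follow from a back-of-the-envelope exposure estimate. The sketch below is order-of-magnitude only; the local DM density of 0.3 GeV cm$^{-3}$ and the typical speed of 230 km s$^{-1}$ are standard assumed halo values, not results of this paper.

```python
# Order-of-magnitude DM exposure estimates (assumed halo parameters).
RHO = 0.3      # local DM energy density [GeV/cm^3] (assumed)
V = 2.3e7      # typical DM speed, ~230 km/s [cm/s] (assumed)
YEAR = 3.15e7  # one year [s]

def n_transits(m_dm_gev, area_cm2, t_s):
    """Expected number of DM particles of mass m_dm crossing an area in time t."""
    return (RHO / m_dm_gev) * V * area_cm2 * t_s

# ~1e16 WIMPs of 100 GeV through a 1 m^2 face in one year:
print(f"{n_transits(100.0, 1e4, YEAR):.1e}")
# Mass reach of 10 m^2 of billion-year-old rock, demanding >= 1 transit:
m_max = RHO * V * 1e5 * (1e9 * YEAR)  # [GeV]
print(f"{m_max:.1e}")  # ~2e28 GeV, consistent with the 10^28 GeV quoted
```

The mass reach scales linearly with the product of scanned area and geological exposure time, which is the quantitative sense in which detector sensitivity is traded for spacetime volume.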
A major challenge for such a detection strategy is the ability to efficiently identify DM signatures in a large volume of rock and distinguish them from geological, radioactive, and cosmic ray backgrounds. Such discrimination is significantly simpler in searches for UHDMs, since the extremely long and continuous cylindrical damage patterns they generically leave are qualitatively different from the sporadic defects due to expected backgrounds.\n\nHere, we assess the use of geologically old quartz samples as solid-state particle detectors to search for damage tracks left by UHDMs. Quartz, a crystalline polymorph of silica SiO$_2$, is one of the most abundant and well-studied minerals in the lithosphere \\cite{gotze2012application}. Defects and damage tracks can be resolved down to the micron scale with SEM-CL: a scanning electron microscope (SEM) combined with a cathodoluminescence (CL) detector. Imaging provided by the SEM is supplemented with spectral information from CL, which reveals the nature of trace elements and point defects in the quartz \\cite{stevens2009cathodoluminescence}. This modality has already proved successful at providing answers to key geological questions \\cite{vasyukova2013,macrae2013hyperspectral,leeman2012study,spear2009cathodoluminescence,ackerson2018low,hamers2017scanning,ackerson2015trace}. The technical advantages of SEM-CL mapping, as well as the considerable literature on its application in quartz, make it an appropriate choice for our readout method.\n\nNote that a similar search for long damage tracks was performed by Price and Salamon \\cite{price1986} in ancient mica crystals with null results. While they used this result to constrain the abundance of magnetic monopoles, the experiment is also sensitive to UHDMs with masses $m_{\\rm DM}\\lesssim 10^{26}\\,{\\rm GeV}$ \\cite{Bhoonah:2020dzs}. 
The readout method used in their experiment requires acid etching prior to microscopy in order to enlarge damage tracks and render them visible in an optical microscope. Such an etching process also enlarges background signals in the form of naturally occurring lattice defects. Hence, the success of the experiment hinges on the low background level of the samples, and its scalability is limited by the availability of sufficiently pristine mica crystals. By contrast, the lack of etching in our proposed SEM-CL readout and the more readily available quartz mineral that meets our requirements allow us to scan over larger sample areas and extend the search pioneered by Price and Salamon to higher DM masses.\n\nThe rest of the discussion is organized as follows. In Sec.~\\ref{sec:feasibility} we present an experimental realization of the UHDM detection method sketched above, identify optimal samples, and assess its model-independent sensitivity. We then demonstrate its ability to probe a simple composite UHDM model, taking into account various existing constraints, in Sec.~\\ref{sec:model}. Finally, we conclude in Sec.~\\ref{sec:conclusion}.\n\n\\section{Detection Feasibility}\n\\label{sec:feasibility}\nIn this section, we investigate experimental issues for UHDM detection with quartz. Based on the considerations below, we establish the criteria for UHDM signatures to be robustly detectable with microscopy of about $\\si{\\micro\\meter}$ resolution. We discuss the discovery reach of this approach in Sec.~\\ref{sec:sensitivity} (see equations \\eqref{eqn:dragforcebound}-\\eqref{eqn:meltingenergypernucleus}).\n\n\\begin{figure}[htbp!]\n \\centering\n \\includegraphics[width=1\\columnwidth]{fig1.pdf}\n \\caption{{Schematic of the proposed readout method.} {\\bf (a)} A quartz sample of size $\\sim{\\rm cm}^2 \\times {\\rm mm}$.
The black straight line illustrates a damage track as a result of an ultra-heavy composite dark matter (UHDM) particle passing through the sample. The sample is sectioned into multiple sections of thickness $\\sim 100~\\si{\\micro\\meter}$. We show several sections where the top and bottom surfaces are highlighted, which would be scanned using SEM-CL. {\\bf (b)} Correlated damage spots of micron-scale diameter between sections, extending over a macroscopic (mm-scale or longer) distance, are the unique signature of an ultra-heavy DM particle interaction with quartz. Note that the probability of background features coincidentally aligning reduces exponentially with the number of correlated layers. For a realistic feature density of $1000\/{\\rm cm}^2$, simulations show that requiring correlations across 4 layers efficiently rejects false positive signals.}\n \\label{fig:readout}\n\\end{figure}\n\n\\subsection{Damage Tracks}\nSolid-state systems \\cite{fleischer1964,fleischer1965,lannunziata2012} have been used as particle track detectors with applications ranging from nuclear science to geophysics, as such tracks yield information about the history of the sample and properties of the impinging particles \\cite{fleischer1965a}. For example, DM detection using crystal damage tracks has been proposed as a directional signal in semiconductors such as diamond, where a WIMP scattering event would give rise to damage tracks of $\\sim 30-100~{\\rm nm}$ in length; the directional information in these tracks could enable such a detector to probe below the ``neutrino floor'' \\cite{Rajendran:2017ynw,Marshall:2020azl}. Paleodetection, which looks for damage tracks of a similar size in ancient rock samples, has also been investigated for WIMP detection \\cite{Baum:2018tfw,Drukier:2018pdy,Edwards:2018hcf}.\n \n\nWe propose using ancient quartz as a detector for ultra-heavy composite dark matter (UHDMs). As we discuss in Sec.
\\ref{sec:sensitivity}, each UHDM could deposit enough energy to locally melt nearby quartz along its trajectory. Since quartz nucleation under ambient conditions is a very slow process \\cite{buckley2018nucleation}, the melted region would solidify into amorphous silica without recrystallizing. Detecting such amorphous micro-regions within quartz samples is feasible with SEM-CL, where defects in the tetrahedrally coordinated SiO$_2$ microstructure contribute to CL emission \\cite{stevens2013cathodoluminescence}. Even in the absence of melting, the same SEM-CL method would in principle be sensitive to linear tracks of lattice distortions left by UHDMs. However, quantitatively characterizing the sensitivity of this method to such tracks requires further study of the backgrounds in natural quartz, which we leave for future work. Thus, in what follows we focus on detecting UHDMs that can cause melting.\n\n\\subsection{Quartz Samples and Backgrounds}\n\\label{sec:background}\n\nThe signature of the proposed UHDM detection method is a long damage track, of micron-scale cross section, extending through the entire length of the quartz sample (see Figure \\ref{fig:readout}a). This signature has the distinct advantage that no known mechanism produces such a track, allowing for strong geometric rejection of background signals. A variety of effects may induce localized disruption of the crystal lattice on the micron scale, such as extended growth defects or radioactive decays, but these localized features cannot pass through the entire macroscopic crystal. And although particles with low interaction probability (e.g., neutrinos or relativistic cosmic rays) can pass through an entire sample, their low nuclear cross sections yield dispersed individual damage events rather than a continuous, micron-scale-diameter track. 
Fractures induced by historical geological stresses could similarly pass continuously through an entire sample, but would in general be two- or three- rather than one-dimensional, and would not leave behind amorphous quartz within the damage track.\n\nTo take advantage of the distinctive extended geometry of a UHDM signal, we propose a multi-phase scanning readout, where we search for correlated feature positions at multiple depths in the sample (see Figure \\ref{fig:readout}). This allows us to reject backgrounds, which have an exponentially suppressed likelihood of lying along a single line (as shown by simulations discussed in Figure \\ref{fig:readout}). An expected background signal, given the imaging resolution of our method, arises from the presence of radioactive isotopes such as uranium, which lead to fission tracks and alpha recoil damage within the crystal lattice. These processes leave behind halos of size $\\sim 10~\\si{\\micro\\meter}$ \\cite{Bower2016}, which are readily detectable using our proposed SEM-CL protocol (see Figure \\ref{fig:CLimage}d). In a single two-dimensional SEM-CL scan, these could mimic a UHDM damage track, but would be disqualified as UHDM signals by lack of correlated damage in subsequent slices. The presence of some radioactivity-induced features is potentially beneficial to our analysis, as their preservation indicates that recent annealing events have not removed older, UHDM-induced features. As such, if the fission track age can be determined from the host quartz, the absence of UHDM-induced features implies the lack of UHDM interaction events since the time of occurrence of the fission track.\n\nQuartz samples with low impurity levels are essential for reducing background levels. High-resolution cathodoluminescence (CL) studies reveal both the microstructure of the samples and trace element inclusions. Titanium (Ti) and aluminum (Al) are two of the most abundant impurities in quartz. 
Ti is the dominant CL activator while Al is not generally considered an activator \\cite{tailby2018,leeman2012study}. Trace element studies show that quartz samples of different geological origins have a wide range of Ti and Al concentrations. Low-temperature hydrothermal vein quartz (HVQ) has the lowest trace element concentrations: quartz typically has a Ti concentration of a few hundred ppm, but this number could be as low as 6 ppm for an HVQ sample, which simultaneously has a low Al concentration \\cite{rusk2008trace,ackerson2015trace}. Here, we present preliminary measurements demonstrating that low-Ti vein quartz samples are a suitable choice for our proposed experiment (see Figure \\ref{fig:CLimage}).\n\n\\begin{figure*}\n\\includegraphics[width=2\\columnwidth]{fig2.png}\n\\caption{\\label{fig:CLimage}Example quartz sample characterization. SEM-CL images of two samples, {\\bf (a)} magmatic quartz from Bishop Tuff with Ti concentration $51 \\pm 6$ ppm, and {\\bf (b)} vein quartz from Jack Hills with Ti concentration $5.2 \\pm 6.5$ ppm, measured on a mass spectrometer. The scan rate is $20~{\\rm s}\/{\\rm mm}^2$ with $1.5~\\si{\\micro\\meter}$ resolution for magmatic quartz and $5~{\\rm s}\/{\\rm mm}^2$ with $3~\\si{\\micro\\meter}$ resolution for vein quartz (we forecast the full-scale UHDM experiment time and resources using these values). In (b) we identify a few high-count pixels in the vein quartz image, which demonstrates the possibility of high-resolution detection of concentrated CL emission. The inset shows a zoomed-in image of the region of interest with high-count pixels. These pixels could be a melting track intersection, which needs to be investigated by correlating multiple sections as described in the text. {\\bf (c)} Normalized histogram of the pixel counts in arbitrary units for each of the two sample SEM-CL images. 
Vein quartz shows a lower CL noise level as well as smaller variation, making it a suitable target for our detection proposal. {\\bf (d)} SEM-CL signal from a uranium halo (measured in a different quartz sample from those shown in (a) and (b)). Microscopic uranium inclusions have decayed over time; the fission products from these inclusions create crystal lattice damage, which emits cathodoluminescence (CL) upon excitation by the SEM. The CL signal from an ultra-heavy composite dark matter (UHDM) particle track would also result from crystal lattice defects at and around the track of melted quartz. Any such uranium halos in a UHDM search would be disqualified as potential damage tracks by lack of correlated damage in other slices of the sample.}\n\\end{figure*}\n\nHVQ has yet other advantages. Hydrothermal fluid flow is commonly localized along fracture systems, fault systems, and shear zones that can produce vast arrays of quartz veins. When a fracture or fault remains open and under hydrothermal pressure for a sufficient period of time, hydrothermal vein quartz grows as large, euhedral, and high-purity crystals. These properties will enable us to analyze a large net exposure with the proposed protocol, using serial sectioning and SEM-CL scanning of large samples, with background rejection via correlation of damage spots across layers (see Figure \\ref{fig:readout}b).\n\nHVQ from the Jack Hills of Western Australia is an ideal source of quartz for the DM search. The siliciclastic units at Jack Hills contain numerous, large quartz veins that appear as prominent surface features (i.e., clusters of milky white outcrops that can form very localized topographic highs) observed throughout the range. The veins are generally either undeformed or very weakly deformed, and often show an abundance of high-purity, gem-quality quartz crystals. 
These HVQ systems can reach impressive sizes, ranging from cm-scale to 50 meters wide, at several locations within the Jack Hills \\cite{spaggiarietal2007jack}. Some of the vein systems can be followed for several kilometers and appear to be associated with major episodes of brittle faulting. The combined work of Rasmussen et al. \\cite{rasmussen2010situ} and Spaggiari \\cite{spaggiari2007jack} provides strong evidence that units (including the hydrothermal quartz veins) at Jack Hills, particularly in the vicinity of the \"classic\" W74 location, have likely remained at temperatures below 330-420 $^{\\circ}$C for $\\sim$ 1.7 Gyr. The fact that the tectonic environment can be evaluated in detail \\cite{trail2016li,cavosie2004internal,baxter1984jack,spaggiari2007jack} (including thermometry and age dating) and consistently demonstrates equilibria at such low temperature (i.e., at or below greenschist facies) is, to the best of our knowledge, unique to Jack Hills, making it an ideal source of HVQ for our proposed measurements.\n\n\\subsection{Experimental Protocol}\n\\label{sec:protocol}\n\nThe proposed experimental protocol is as follows:\n\\begin{enumerate}\n \\item Identify quartz samples that are (i) old, having last annealed no less than 1 Gyr ago; and (ii) clean, with low CL noise level and less than a few thousand micron-scale resolved CL features per cm$^2$.\n \\item Prepare about $10^4$ samples of size $\\sim {\\rm cm}^2 \\times {\\rm mm}$ (lateral area $\\times$ length) that satisfy the above conditions and cut each sample into sections of thickness $\\sim 100~\\si{\\micro\\meter}$ (see Figure \\ref{fig:readout}a). 
Polish the top and bottom surfaces of each section, then scan them with SEM-CL.\n \\item Search for correlated damage spots, across the first few sections, that are aligned, section-to-section, along a straight line (see Figure \\ref{fig:readout}b).\n \\item If such a damage track of interest is identified, perform a dedicated search in subsequent sections to reject false positives.\n \\item Repeat steps 3 and 4 for all the ${\\rm cm}^2 \\times {\\rm mm}$ samples.\n\\end{enumerate}\nThe scanning rate with SEM-CL depends on sample properties such as the concentration of CL activators. Given a typical data acquisition time of $\\sim$ 100 min per ${\\rm cm}^2$ with $\\si{\\micro\\meter}$ resolution (see, for example, Figure \\ref{fig:CLimage}), we plan the experiment in three stages:\n\\begin{itemize}\n \\item Quartz-$1 \\, {\\rm m}^2$: About two years of experiment time with four SEM-CL apparatuses will be required to scan samples with a total area of about 1 m$^2$. The total quartz exposure of $\\sim 1\\, {\\rm m}^2\\,{\\rm Gyr}$ for such a search would probe UHDMs of mass $m_{\\rm DM} \\lesssim 10^{27}~{\\rm GeV}$. 
This first stage search would probe a currently unconstrained mass range with a new technique; see Figure \\ref{fig:param_space}.\n \\item Quartz-$10 \\, {\\rm m}^2$: 20 SEM apparatuses running for about four years would provide a total quartz exposure to UHDMs of $\\sim 10\\, {\\rm m}^2\\,{\\rm Gyr}$, yielding sensitivity $m_{\\rm DM} \\lesssim 10^{28}~{\\rm GeV}$.\n \\item Quartz-$100 \\, {\\rm m}^2$: 100 SEM apparatuses running for about eight years would provide a total quartz exposure to UHDMs of $\\sim 100\\, {\\rm m}^2\\,{\\rm Gyr}$, yielding sensitivity $m_{\\rm DM} \\lesssim 10^{29}~{\\rm GeV}$.\n\\end{itemize}\n\n\\subsection{Model-Independent Sensitivity}\n\\label{sec:sensitivity}\nThe proposed\nexperiment would be sensitive to a wide range of ultra-heavy dark matter (UHDM) candidates, independent of the underlying dark sector microphysics, that (1) pass through the quartz sample with sufficiently high probability while (2) depositing enough energy in a sufficiently concentrated way to melt a micron-size lateral region. \n\nGiven a DM candidate of mass $m_{\\rm DM}$, we can estimate the expected number of DM transits in a sample of area $L\\times L$ over a duration $T$ to be\n\\begin{equation}\n N \\sim 1 \\left(\\frac{10^{29}\\text{ GeV}}{m_{\\rm DM}}\\right) \\left(\\frac{L}{10\\,{\\rm m}}\\right)^2 \\left(\\frac{T}{10^9\\text{ year}}\\right)\\label{eqn:eventrate}\n\\end{equation}\nbased on the local DM density, $\\rho_{\\rm DM}\\approx 0.3\\,{\\rm GeV}\/\\text{cm}^3$. As described in the previous sections, the quartz samples under consideration are roughly $T\\sim 10^{9}\\text{ year}$ old, and a 100 ${\\rm m}^2$ sample area can be scanned in stage three. The requirement that $N\\gtrsim 1$ imposes an upper bound on the UHDM mass, $m_{\\rm DM}\\lesssim 10^{29}\\text{ GeV}\\sim 100\\text{ kg}$. The advantage afforded by the large spacetime volume of such a long-lived sample is manifest. 
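The transit estimate above can be checked numerically. The short sketch below is an illustration we add here, not part of the original analysis; all numbers are the reference values quoted in the text.

```python
# Expected number of UHDM transits: N ~ (rho_DM / m_DM) * v_DM * L^2 * T
rho_dm = 0.3          # GeV/cm^3, local dark matter density
v_dm = 3.0e7          # cm/s, Milky Way virial velocity (~1e-3 c)
m_dm = 1.0e29         # GeV, upper end of the target mass range
L = 1.0e3             # cm, i.e. a 10 m effective detector side length
T = 1.0e9 * 3.15e7    # s, ~1 Gyr of exposure

N = (rho_dm / m_dm) * v_dm * L**2 * T
print(f"expected transits N ~ {N:.1f}")  # O(1) at m_DM ~ 1e29 GeV
```

Requiring N of order one then reproduces the quoted mass reach of roughly 10^29 GeV for the stage-three exposure.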
\n\nA UHDM moving through the Earth will collide with and deposit energy into SM particles along its path. The energy $E_1$ imparted to each SM nucleus can go as high as the kinematical limit of $10~{\\rm keV}$ (corresponding to nuclei acquiring twice the velocity of the UHDM in a collision) depending on how elastic these collisions are, while the stopping power $dE\/dx$ depends on $E_1$ as well as the UHDM radius. For simplicity, we assume in our estimates that the UHDM travels at least a few kilometers below the Earth's surface while maintaining its Milky Way virial velocity of $v_{\\rm DM}\\sim 10^{-3}c$. This amounts to an upper bound on the energy deposition rate\n\\begin{equation}\n \\frac{dE}{dx}\\lesssim 10^{13}\\frac{{\\rm MeV}}{\\si{\\angstrom}}\\left(\\frac{m_{\\rm DM}}{10^{29}\\text{ GeV}}\\right).\\label{eqn:dragforcebound}\n\\end{equation}\nMost of the deposited energy will likely go to SM nuclei. Only a tiny portion will go directly to electrons, whose low mass limits their kinetic energy gain (for kinematic reasons) and whose coupling to DM is severely limited by astrophysical and cosmological constraints \\cite{Green:2017ybv}. The nuclei and electrons will then thermalize, leading to a loosening of molecular bonds as the electrons acquire more energy, and eventually causing melting. Due to thermal diffusion, the melted region will enlarge and cool. What ultimately remains, in the case of quartz, is a long cylindrical trail of amorphous silica, precisely the kind of damage that is detectable with the method outlined above.\n\n\nIn order to leave a robustly detectable damage trail, the UHDM must deposit energy per unit length $dE\/dx$ exceeding the latent heat required to melt each unit-length segment of a micron-radius cylinder. 
This amounts to a $dE\/dx$ threshold for robust detection of\n\\begin{equation}\n \\frac{dE}{dx}\\gtrsim \\frac{{\\rm MeV}}{\\si{\\angstrom}}\\,.\\label{eqn:meltingenergydeposition}\n\\end{equation}\nSee Figure \\ref{fig:param_space}a for model-independent sensitivity projections. Further, since quartz has a melting point of $10^4~{\\rm K} \\sim 1~{\\rm eV}$ and energy tends to spread outward, the energy deposition must be sufficiently localized that the energy $E_1$ gained by each nucleus is greater than the melting temperature, i.e., it must lie in the range\n\\begin{equation}\n 1\\,{\\rm eV}\\lesssim E_1 \\lesssim 10\\, {\\rm keV} \\label{eqn:meltingenergypernucleus}\n\\end{equation}\nwhere the upper bound is the kinematical limit for energy transfer per nucleus (with mass number $A\\sim 10$).\n\n\\begin{figure}[htbp!]\n \\centering\n \\includegraphics[width=\\columnwidth]{fig3.png}\n \\caption{{Sensitivity projections for the proposed ultra-heavy dark matter (UHDM) search.} {\\bf (a)} Model-independent reach of the geological-quartz detector proposal expressed as stopping power $dE\/dx$ vs mass $m_{\\rm DM}$ of a passing UHDM particle, together with the existing constraints from MACRO for energy deposition per nucleus $E_1\\sim 1~{\\rm eV}$ \\cite{Ambrosio:2004ub, Scholz:2016} as well as from damage track searches in ancient mica \\cite{price1986}. The vertical and slanted boundaries of the quartz-detectable parameter space (for different effective detector areas) stem from the requirements of an $O(1)$ probability of transit, Eq.~\\eqref{eqn:eventrate}, and a negligible slowing of the UHDM up to a $1\\text{ km}$ depth, Eq.~\\eqref{eqn:dragforcebound}, respectively. The black horizontal line indicates the melting threshold for a micron-sized lateral region, Eq.~\\eqref{eqn:meltingenergydeposition}, above which robust detection is possible. {\\bf (b)} Parameter space of the UHDM model considered in Sec.~\\ref{sec:model}. 
\\textit{Left:} reach on coupling $g_{\\rm n}$ vs DM mass $m_{\\rm DM}$. \\textit{Right:} reach on coupling $g_{\\rm n}$ vs mediator mass $m_\\phi$. Also shown are existing constraints from ancient mica \\cite{price1986}, fifth force experiments \\cite{Green:2017ybv}, and stellar cooling of SN1987A \\cite{Green:2017ybv} and horizontal branch (HB) stars \\cite{Green:2017ybv} (note that the stellar cooling bounds are model-dependent \\cite{DeRocco:2020xdt}). In these $g_{\\rm n}$ plots, we set $g_\\chi$ to its upper bound $m_\\phi\/\\Lambda_\\chi$ from Eq.~\\eqref{stability}.}\n \\label{fig:param_space}\n\\end{figure}\n\n\n\\section{Example UHDM Model}\n\\label{sec:model}\n\nIn this section, we consider an example of a simple, ultra-heavy composite dark matter (UHDM) state \\cite{Grabowska:2018lnd} that can give rise to the desired damage tracks while being consistent with existing experimental and observational constraints. These composite objects consist of $N_\\chi$ dark fermions $\\chi$ whose mass, inverse size, and binding energy to form the UHDM are determined by a single scale $\\Lambda_\\chi$. It follows that they have mass $m_{\\rm DM} \\sim N_\\chi \\Lambda_\\chi$ and size $R \\sim N_\\chi^{1\/3} \\Lambda_\\chi^{-1}$. We assume that the fermions $\\chi$ interact with standard model nucleons $\\psi_{\\rm n}$ through a repulsive\\footnote{Attractive DM-nucleon interactions are just as compelling as the repulsive interaction considered here. 
We note that the attractive interactions might have more complicated dynamics as nuclei may get trapped and accumulate inside the UHDM.} Yukawa interaction mediated by a scalar $\\phi$ of mass $m_\\phi$:\n\\begin{equation}\n \\mathcal{L}\\supset \\frac{1}{2}m_\\phi^2\\phi^2+g_{\\rm n}\\phi\\bar{\\psi}_{\\rm n}\\psi_{\\rm n}-g_{\\chi}\\phi\\bar{\\chi}\\chi\\,.\n\\end{equation}\nWe show that UHDMs with the following properties satisfy the robust detectability criteria detailed in Sec.~\\ref{sec:sensitivity} without running afoul of any existing constraints:\\footnote{Due to various constraints, this parameter space has a complicated geometry. Here we simply identify the lower and upper limits for each parameter.}\n\\begin{align}\n 10^{26} \\,{\\rm GeV} \\lesssim& m_{\\rm DM} \\lesssim 10^{29} \\,{\\rm GeV}\\label{eqn:massrange}\\\\\n 10 \\text{ nm}\\lesssim& R\\lesssim 1 \\text{ cm}\\label{eqn:radiusrange}\\\\\n 0.1\\,{\\rm eV}\\lesssim& m_\\phi\\lesssim {\\rm MeV} \\,\\leftrightarrow\\, \\,\n 10^2\\text{ fm}\\lesssim m_\\phi^{-1}\\lesssim 1\\,\\si{\\micro\\meter}\n \\label{eqn:mediatorrange}\\\\ \n 100 \\,{\\rm keV} \\lesssim& \\Lambda_\\chi \\lesssim 10 \\,{\\rm GeV}\\,.\\label{eqn:Lambdarange}\n\\end{align}\nThis allows us to probe wide ranges of the couplings $g_{\\rm n}$ and $g_{\\chi}$. Two slices of this parameter space are shown in Figure~\\ref{fig:param_space}b. 
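The quoted windows can be cross-checked against the composite scaling relations m_DM ~ N_chi Lambda_chi and R ~ N_chi^(1/3) / Lambda_chi introduced above. The sketch below is our illustration; the benchmark point is an assumption chosen for definiteness, not a value from the paper.

```python
# Composite-UHDM scalings: m_DM ~ N_chi * Lambda_chi, R ~ N_chi^(1/3) / Lambda_chi
HBARC_GEV_CM = 1.97e-14          # GeV*cm (hbar*c), converts GeV^-1 to cm

Lambda_chi = 1.0                 # GeV, dark confinement scale (benchmark choice)
N_chi = 1.0e27                   # number of dark fermion constituents (benchmark)

m_dm = N_chi * Lambda_chi                                # GeV
R_cm = N_chi ** (1.0 / 3.0) / Lambda_chi * HBARC_GEV_CM  # cm

print(f"m_DM ~ {m_dm:.0e} GeV, R ~ {R_cm:.1e} cm")
# m_DM ~ 1e27 GeV lies in the 1e26-1e29 GeV window,
# and R ~ 2e-5 cm lies in the 10 nm - 1 cm window quoted above
```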
Eq.~\\eqref{eqn:massrange} follows from Eq.~\\eqref{eqn:eventrate} and ancient mica constraints; Eq.~\\eqref{eqn:radiusrange} follows from\n\\eqref{eqn:meltingenergydeposition}, \\eqref{eqn:meltingenergypernucleus}, \\eqref{eqn:E1dEdx}, and the quartz sample size of 1 cm; Eq.~\\eqref{eqn:mediatorrange} follows from fifth force constraints and the requirement that the UHDM-nucleus interaction be treated classically; Eq.~\\eqref{eqn:Lambdarange} follows from Eqs.~\\eqref{eqn:massrange} and \\eqref{eqn:radiusrange}.\n\n\\subsection{Detectability with Quartz}\nThe optimal UHDM detection signature is expected for mediators with a range $m_\\phi^{-1}$ satisfying $\\Lambda_\\chi^{-1}\\ll m_\\phi^{-1} \\lesssim 1\\,\\si{\\micro\\meter}$, since this is the intermediate regime where the UHDM-nucleon coupling is enhanced by the number of constituents $\\chi$ of the UHDM within the range of the mediator $(m_\\phi^{-1}\/\\Lambda_\\chi^{-1})^3$ while simultaneously evading existing fifth force constraints. For simplicity of analysis we only consider part of the parameter space where $m_\\phi^{-1}\\ll R$. In doing so, we limit the UHDM's cross section to be at most geometrical.\n\nAn SM nucleus located inside the UHDM only sees the composite dark matter particle's constituents $\\chi$ within the range of the mediator $m_\\phi^{-1}\\ll R$. Hence, to the SM nucleus each point in the bulk of the UHDM is just like any other, yielding a potential energy $V(r)$ as a function of the distance $r$ from the center of the UHDM with the following profile:\n\\begin{equation}\n V(r)=\\begin{cases}\n +V_0, & r<R\\\\\n 0, & r>R\n \\end{cases}\n\\end{equation}\nwhere at the boundary $r\\approx R$ the potential drops to zero exponentially over a length scale of order $m_\\phi^{-1}$, and \n\\begin{align}\n V_0\\sim \\left(\\frac{\\Lambda_\\chi}{m_\\phi}\\right)^3\\frac{g_{\\chi}(10g_{\\rm n})}{m_\\phi^{-1}} \\label{eqn:V0}\n\\end{align}\nfor SM nuclei with mass number $A\\sim 10$. 
As a result, from the perspective of a nucleus the UHDM is just a constant potential hill moving at a velocity $v_{\\rm DM}\\sim 10^{-3}c$.\n\nSince the de Broglie wavelengths $(10\\,{\\rm MeV})^{-1}$ of the SM nuclei are smaller than the mediator ranges $m_\\phi^{-1}$ of interest, we can treat the UHDM-nucleus interactions classically. When $V_0\\gtrsim 10 \\,{\\rm keV}$, the potential $V_0$ prevents any nucleus from entering the UHDM. The UHDM-nucleus collisions are thus elastic, and the energy $E_1$ transferred to a nucleus saturates the kinematical limit $E_1\\sim 10\\, {\\rm keV}$. If $V_0\\lesssim 10 \\,{\\rm keV}$, on the other hand, the nuclei can easily climb the potential hill, and the collisions between a nucleus and the UHDM's surface will be inelastic. When a nucleus encounters the surface of the UHDM, it experiences a force $F\\sim V_0\/m_\\phi^{-1}$ due to the gradient of the Yukawa potential. This force is exerted throughout the duration $\\tau\\sim m_\\phi^{-1}\/v_{\\rm DM}$ of the collision, resulting in a nearly instantaneous momentum kick $p_1\\sim F\\tau$ which translates to the kinetic energy $E_1\\sim 10\\,{\\rm keV}\\left(V_0\/10\\,{\\rm keV}\\right)^2$ per nucleus. To sum up, the energy imparted to a nucleus after the passage of a UHDM is\n\\begin{equation}\n E_1\\sim 10\\,{\\rm keV} \\times \\text{min}\\left[1,\\left(\\frac{V_0}{10 \\,{\\rm keV}}\\right)^2\\right].\\label{eqn:E1V0}\n\\end{equation}\nUsing a lattice spacing of about $5~\\si{\\angstrom}$ for quartz, the energy deposition rate then follows:\n\\begin{equation}\n \\frac{dE}{dx}\\sim \\frac{E_1}{5~\\si{\\angstrom}}\\left(\\frac{R}{5~\\si{\\angstrom}}\\right)^2. 
\\label{eqn:E1dEdx}\n\\end{equation}\nHaving linked the model parameters with the quantities characterizing quartz damage tracks, we can evaluate the detectable parameter space based on the considerations in Sec.~\\ref{sec:sensitivity} (see Figure \\ref{fig:param_space}b).\n\n\\subsection{Existing Constraints}\n\\label{sec:existing_constraints}\n\\subsubsection{The mediator}\nPast experiments and observations have placed limits on the coupling $g_{\\rm n}$ of the mediator $\\phi$ to standard model nucleons with varying severity for different masses $m_\\phi$ of the mediator. These include: collider constraints on the meson decay rate, laboratory {\\it fifth-force} searches, and stellar cooling bounds from observations of the SN1987A event and horizontal branch (HB) stars. Note, however, that the stellar cooling bounds are model-dependent \\cite{DeRocco:2020xdt}. The following parameter space is thus ruled out \\cite{Green:2017ybv}:\n\n\\begin{itemize}\n \\item Meson decay: $g_{\\rm n}\\gtrsim 10^{-6}$, $m_\\phi\\lesssim 100\\,{\\rm MeV}$.\n \\item Fifth force: $g_{\\rm n}\\gtrsim 10^{-12} (m_\\phi\/{\\rm eV})^3$, $m_\\phi\\lesssim 100\\,{\\rm eV}$.\n \\item SN1987A: $3\\times10^{-10}\\lesssim g_{\\rm n}\\lesssim 3\\times10^{-7}$, $m_\\phi\\lesssim 30\\,{\\rm MeV}$.\n \\item HB stars: $g_{\\rm n}\\gtrsim 10^{-13}$, $m_\\phi\\lesssim 100\\,{\\rm keV}$.\n\\end{itemize}\nFurthermore, the couplings of UHDM constituents $\\chi$ to the mediator $\\phi$ add extra self-interactions among $\\chi$ that may destabilize the UHDM. In order for the UHDM to be stable, the mediated self-interaction potential $g_\\chi^2\\Lambda_\\chi^3m_\\phi^{-2}$ of a single $\\chi$ must not exceed the binding energy $\\Lambda_\\chi$. 
This puts an upper bound on the coupling $g_\\chi$ of $\\chi$ to the mediator $\\phi$:\n\\begin{equation}\n g_{\\chi}\\lesssim \\frac{m_\\phi}{\\Lambda_\\chi}.\\label{stability}\n\\end{equation}\n\n\n\\subsubsection{Direct detection}\nAmong current and past direct-detection DM experiments, MACRO puts a strong constraint on our scenario due to its large volume. MACRO is a scintillator experiment with spacetime dimensions of about $10^3\\text{ m}^2\\times 10\\text{ years}$, corresponding to $m_{\\rm DM} \\lesssim 10^{22}\\,{\\rm GeV}$ for 1 event over its decade-long lifespan. It is sensitive to energy depositions $\\gtrsim 10\\,{\\rm MeV}\/{\\rm cm}$ \\textit{to electrons} \\cite{Ambrosio:2004ub}. When a nucleus receives energy $E_1$ from interaction with a UHDM, only some fraction $Q(E_1)$, called the quenching factor, of that energy effectively goes to the electrons tied to the nucleus. It is this relatively small fraction of energy that is responsible for the processes of scintillation and ionization that may occur subsequently. We can translate MACRO's $10\\,{\\rm MeV}\/{\\rm cm}$ detection threshold to a sensitivity \\textit{to nuclear energy depositions} by dividing it by the quenching factor $Q(E_1)$ \\cite{Scholz:2016}. \n\nAn even more stringent bound on our model arises from direct searches for long damage tracks in muscovite mica crystals \\cite{price1986}. The non-observation of tracks extending beyond naturally occurring defects and radioactivity damage was originally used to constrain the abundance of magnetic monopoles, but also limits the UHDM parameter space. This past mica search involved total sample area $\\sim 1200 ~{\\rm cm}^2$ with sample ages $\\simeq 5 \\times 10^8 \\,{\\rm yr}$, corresponding to a dark matter reach of $m_{\\rm DM} \\lesssim 10^{26}~{\\rm GeV}$. 
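The quoted reach of the past mica search can be reproduced with the same local-flux argument used earlier for the quartz exposure; the estimate below is our own rough cross-check, not part of the original analysis.

```python
# Mass reach of the mica search: require ~1 DM transit through the total
# sample area over the sample age at the local dark matter flux.
rho_dm = 0.3               # GeV/cm^3, local dark matter density
v_dm = 3.0e7               # cm/s, Milky Way virial velocity
area = 1200.0              # cm^2, total mica sample area
T = 5.0e8 * 3.15e7         # s, ~5e8 yr sample age

m_reach = rho_dm * v_dm * area * T   # GeV; mass at which N ~ 1
print(f"m_DM reach ~ {m_reach:.1e} GeV")  # ~ 1e26 GeV, matching the text
```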
The energy deposition threshold for detection in this experiment via etching and optical microscopy was identified as $dE\/dx \\gtrsim 6~{\\rm GeV}\/{\\rm cm}$. \n\n\\subsubsection{Astrophysical and Cosmological limits}\nIndirect limits can also be placed on the couplings $g_{\\rm n}$ and $g_{\\chi}$ from the limits on DM-baryon and DM-DM cross sections. DM-baryon interactions in the early universe can affect baryon acoustic oscillations and are therefore constrained by CMB and LSS observations. This puts an upper bound on the DM-baryon momentum-transfer cross section that would be observed today: $\\sigma_{\\chi\\rm b}\/m_{\\rm DM}\\lesssim 10^{-3}\\text{ cm}^2\/\\text{g}$ \\cite{Dvorkin:2013cea}. Astronomical observations of the Bullet Cluster also place a limit on the DM-DM momentum-transfer cross section: $\\sigma_{\\chi\\chi}\/m_{\\rm DM}\\lesssim 1\\text{ cm}^2\/\\text{g}$ \\cite{Spergel:1999mh}. Since we are mainly interested in UHDMs with geometrical cross sections of order $\\si{\\micro\\meter}^2$ and masses up to $10^{29}~{\\rm GeV}$ ($\\sim100~{\\rm kg}$), these astrophysical and cosmological observations only impose significant constraints on the low mass side of our parameter space. Moreover, these constraints are alleviated if UHDMs constitute less than $10\\%$ of the total dark matter mass, in which case the maximum detectable mass would also be lowered by an order of magnitude.\n\n\\section{Conclusion and Outlook}\n\\label{sec:conclusion}\n\nGiven the diverse range of theoretically well-motivated dark sectors, it is critical to perform searches with techniques that are sensitive to a broad class of dark-sector phenomena. In this paper, we propose a detection method for ultra-heavy composite dark matter particles (UHDMs). Our proposed experiment is based on mapping damage tracks in ancient quartz samples with SEM-CL scanning. 
This method has two significant advantages: (1) the billion-year exposure time of such samples enables us to probe dark matter candidates with masses as high as $10^{29}~{\\rm GeV}$ ($\\sim100~{\\rm kg}$), surpassing the reach of existing direct-detection experiments, and (2) the distinctively long cylindrical damage trails left by such UHDMs are easily distinguished from other features at the relevant scales.\n\nIn this work, we focus on detecting long tracks of amorphous silica in quartz samples expected from passing UHDMs that impart enough energy to cause melting. In future work, we will consider the feasibility of extending the experimental sensitivity to energy deposition rates below the melting threshold. For that purpose, we intend to carry out a number of studies including: (i) signal calibration by artificially creating damage tracks in synthetic quartz samples with a high-power pulsed laser of variable intensity and comparing the laser intensity with the resulting CL signal levels; and (ii) noise calibration by preparing a set of quartz samples, natural and synthetic, with different concentrations of CL activators and analysing their CL emission rates. These studies will provide us with a better understanding of the signal-to-noise ratio as seen in SEM-CL imaging, which will allow us to better estimate the detection threshold.\n\nOur proposed experiment is largely agnostic to the detailed microphysics of the dark sector, as long as it results in long damage tracks in geological quartz. To demonstrate the projected reach of the proposed approach, we considered a QCD-like dark sector that interacts with the standard model repulsively via a light mediator. The particle spectrum of this theory includes heavy bound states, composed of a large number of elementary dark fermions, which could serve as interesting targets for detection. 
We identified experimentally-detectable regions of the parameter space that satisfy various limits derived from phenomenological considerations as well as past observations. In future work, it would be interesting to delineate a broader range of dark matter models than can lead to similar damage patterns in ancient rock.\n\n\n\\begin{acknowledgments}\nThis work was supported by the DOE QuANTISED program under Award No. DE-SC0019396; the Army Research Laboratory MAQP program under Contract No. W911NF-19-2-0181; and the University of Maryland Quantum Technology Center. SR is supported by the NSF under grant PHY-1818899, the SQMS Quantum Center and DOE support for MAGIS. \n\\end{acknowledgments}\n\n\\bibliographystyle{apsrev4-1}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the study of nonnegative scalar curvature, one would like to formulate some notion of ``weak'' nonnegative scalar curvature for metrics that are not necessarily smooth. The gold standard for a notion of weak nonnegative scalar curvature would be something like Alexandrov spaces as a weak notion of spaces with lower bounds on sectional curvature. A good notion of weak nonnegative scalar curvature would be one that implies the same consequences as ``classical'' nonnegative scalar curvature---for example, the positive mass theorem in the asymptotically flat case, or topological restrictions in the compact case. \n\nAn important theorem in this direction was proved by P.~Miao \\cite{Miao:2002}, generalizing an earlier result of H.~Bray \\cite[Section 6]{Bray:2001}. See also \\cite[Section~3]{Shi-Tam:2002} for the spin case, as well as a more recent proof by D.~McFeron and G.~Sz\\'{e}kelyhidi~\\cite{McFeron-Szekelyhidi}. \n\n\\begin{thm}\\label{hypersurface}\nLet $M^n$ be a smooth manifold such that $n<8$ or $M$ is spin.\\footnote{The only point of this assumption is to make sure that the classical positive mass theorem is valid on $M$. 
If the positive mass theorem is true in all dimensions, then this hypothesis can safely be eliminated.} Let $S$ be a smooth closed hypersurface in $M$, and let $g$ be a complete asymptotically flat metric on $M$ such that $g$ is $C^2$ \\emph{up to} $S$ from each side of it (but not necessarily \\emph{across} it) and $C^{2,\\alpha}_\\mathrm{loc}$ away from $S$. \n\nNear each point of $S$, $S$ divides $M$ into two sides, which we will call $A$ and $B$. Let $H_A$ be the mean curvature vector of $S$ as computed by the metric on the $A$ side, and similarly define $H_B$.\n\nIf $g$ has nonnegative scalar curvature on the complement of $S$, and at each point of~$S$, $H_A-H_B$ either points toward side $A$ or is zero, then the mass of $g$ is nonnegative in each end.\n\\end{thm}\n\nNote that the hypotheses of the theorem above require $g$ to be Lipschitz everywhere. One way to interpret this theorem is that when the singular set of $g$ is a hypersurface $S$ whose induced metric is well-defined regardless of which ``side'' of $S$ one uses to compute it, then the correct notion of weak nonnegative scalar curvature on $S$ is the pointwise mean curvature comparison condition that appears as a hypothesis of the theorem above. \n\nIn this article we consider singular sets of lower dimension and ponder what conditions on $S$ correspond to weak nonnegative scalar curvature. We find that if $S$ has low enough dimension, then no further conditions are needed.\n\\begin{thm}\\label{maintheorem}\nLet $M^n$ be a smooth manifold such that $n<8$ or $M$ is spin. Let $g$ be a complete asymptotically flat Lipschitz metric on $M$, and let $S$ be a bounded subset whose $n\/2$-dimensional lower Minkowski content is zero.\nIf $g$ has bounded $C^2$-norm and nonnegative scalar curvature on the complement of $S$, then the mass of $g$ is nonnegative in each end. \n\\end{thm}\nSee Section \\ref{definitions} for the definition of Minkowski content. 
For now, recall that Minkowski content equals Hausdorff measure for well-behaved sets (\\textit{e.g.} submanifolds).\n\nThere is also a $W^{1,p}$ version of this theorem.\n\\begin{thm}\\label{w1p}\nLet $M^n$ be a smooth manifold such that $n<8$ or $M$ is spin. Let $p>n$, let $g$ be a complete asymptotically flat $W^{1,p}_{\\mathrm{loc}}$ (and hence continuous) metric on $M$, and let $S$ be a bounded subset whose $\\frac{n}{2}(1-\\frac{n}{p})$-dimensional lower Minkowski content is zero. If $g$ has bounded $C^2$-norm and nonnegative scalar curvature on the complement of $S$, then the mass of $g$ is nonnegative in each end. \n\\end{thm}\n\nIt might seem surprising that one does not have to place any other conditions on the behavior of $g$ at singular set, but as we will see in the proof, the Lipschitz (or $W^{1,p}$) condition is very restrictive. Essentially, $g$ is too regular for the scalar curvature to be truly singular on a small set. We use the same technique as in \\cite{Miao:2002}. Note that if $S$ is a closed submanifold, the proofs of Theorems \\ref{maintheorem} and \\ref{w1p} are much simpler.\n\nThe dimensional restriction of $n\/2$ seems to an unnecessary artifact of the conformal method used in the proof. Also note that Theorems \\ref{maintheorem} and \\ref{w1p} do not include rigidity results for Euclidean space.\n\\begin{conj}\nLet $M^n$ be a smooth manifold such that $n<8$ or $M$ is spin. Let $g$ be a complete asymptotically flat Lipschitz metric on $M$, and let $S$ be a bounded subset whose $(n-1)$-dimensional lower Minkowski content is zero.\nIf $g$ has bounded $C^2$-norm and nonnegative scalar curvature on the complement of $S$, then the mass of $g$ is nonnegative in each end. Moreover, if the mass of any end is zero, then $(M,g)$ must be isometric to Euclidean space.\n\\end{conj}\nOne might try to prove the spin case of this conjecture using a spinor argument, following \\cite{Shi-Tam:2002}. 
Or one might try to prove the conjecture using Ricci flow as in \cite{McFeron-Szekelyhidi}. One advantage of the Ricci flow method is that it is more likely to produce a rigidity result.

\section{Definitions}\label{definitions}
\begin{defin}
Let $g$ be a continuous Riemannian metric on a smooth manifold $M^n$ where $n\geq3$. Then $(M,g)$ is an \emph{asymptotically flat manifold} if and only if there is a compact set $K\subset M$ such that $M\smallsetminus K$ is a disjoint union of ends, $E_\ell$, such that each end is diffeomorphic to $\rr^n$ minus a ball, and in each of these coordinate charts, the metric $g_{ij}$ is $C^2$ and satisfies
\begin{align*}
g_{ij}&=\delta_{ij}+O(|x|^{-\sigma})\\
g_{ij,k}&=O(|x|^{-\sigma-1})\\
g_{ij,kl}&=O(|x|^{-\sigma-2})\\
R_g&=O(|x|^{-\tau}),
\end{align*}
for some $\sigma>(n-2)/2$ and $\tau>n$, where the commas denote partial derivatives in the coordinate chart, and $R_g$ denotes the scalar curvature of~$g$.

We define the \emph{mass} of each end $E_\ell$ by the formula
\[m(E_\ell,g)={1\over 2(n-1)\omega_{n-1}}\lim_{\rho\to\infty}\int_{S_\rho}
\sum_{i,j=1}^n(g_{ij,i}-g_{ii,j})\nu_j \,d\mu,\]
where $\omega_{n-1}$ is the area of the standard unit $(n-1)$-sphere, $S_{\rho}$ is the coordinate sphere in $E_\ell$ of radius $\rho$, $\nu$ is its outward unit normal, and $d\mu$ is the Euclidean area element on $S_{\rho}$. The mass is well-defined on each end of an asymptotically flat manifold.
\end{defin}

\begin{defin}
For a subset $S$ of a Riemannian manifold $(M^n, g)$, the \emph{$m$-dimensional lower Minkowski content} of $S$ is
\[ \liminf_{\epsilon\to0} \frac{\mathcal{L}_g^n(S_\epsilon)}{\alpha_{n-m}\epsilon^{n-m}},\]
where $\mathcal{L}_g^n$ is Lebesgue measure with respect to $g$, $S_\epsilon$ is the $\epsilon$-neighborhood of $S$, and $\alpha_{n-m}$ is the volume of the unit ball in $\rr^{n-m}$.
\end{defin}
The $m$-dimensional lower Minkowski content provides an upper bound (up to a constant) for $m$-dimensional Hausdorff measure, and they are the same for rectifiable sets (see \cite[Chapter 3.2]{Federer-book} for details). In particular, the condition of zero Minkowski content in Theorems \ref{maintheorem} and \ref{w1p} is only slightly stronger than the condition of zero Hausdorff measure.


\section{Proof of Theorem \ref{maintheorem}}\label{main}

First, we briefly sketch out the proof, which is straightforward. Choose $M^n$, $g$, and $S$ as in the statement of Theorem \ref{maintheorem}. We mollify the metric $g$ to get a smooth metric $g_\epsilon$ in such a way that $g_\epsilon=g$ outside of the $2\epsilon$-neighborhood $S_{2\epsilon}$. The precise smoothing of $g$ does not matter much. The only important property of the smoothing is that, using the hypotheses on~$g$, we have that $g_\epsilon$, $g_\epsilon^{-1}$, and $\partial g_\epsilon$ are bounded independently of $\epsilon$, while $\partial\partial g_\epsilon=O(\epsilon^{-1})$, with respect to a particular atlas. By the formula for the scalar curvature of $g_\epsilon$, it follows that $R_{g_\epsilon}=O(\epsilon^{-1})$. The hypothesis about Minkowski content tells us (roughly) that the volume of $S_\epsilon$ is $o(\epsilon^{n/2})$.
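For instance, if $S$ is a compact $k$-dimensional submanifold, then a tubular neighborhood computation gives $\mathcal{L}_g^n(S_\epsilon)\le C\epsilon^{n-k}$ for small $\epsilon$ (an illustrative aside; $C$ here is a tubular neighborhood constant), so the $m$-dimensional lower Minkowski content of $S$ vanishes for every $m>k$:

```latex
% Illustrative computation: a compact k-dimensional submanifold S has
% tubular neighborhoods of volume at most C eps^{n-k}, so for m > k
% the m-dimensional lower Minkowski content is zero.
\[
\liminf_{\epsilon\to0} \frac{\mathcal{L}_g^n(S_\epsilon)}{\alpha_{n-m}\epsilon^{n-m}}
\leq \liminf_{\epsilon\to0} \frac{C\epsilon^{n-k}}{\alpha_{n-m}\epsilon^{n-m}}
= \liminf_{\epsilon\to0} \frac{C}{\alpha_{n-m}}\,\epsilon^{m-k} = 0.
\]
```

In particular, any compact submanifold of dimension $k<n/2$ satisfies the hypothesis of Theorem \ref{maintheorem}, with $\mathcal{L}_g^n(S_\epsilon)=O(\epsilon^{n-k})=o(\epsilon^{n/2})$.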
Thus
\begin{equation}\label{goal}
\int_{S_{2\epsilon}} |R_{g_\epsilon}|^{n/2}\,dg=o(1).
\end{equation}
From there, a standard argument (as in \cite{Miao:2002}) tells us that we can conformally deform $g_\epsilon$ to have nonnegative scalar curvature, without changing the mass too much. Applying the classical positive mass theorem to the new, smooth manifold of nonnegative scalar curvature, we find that the original manifold $(M,g)$ has mass greater than a small negative number that is $o(1)$ in $\epsilon$. Taking the limit as $\epsilon$ approaches zero, the result follows. In what follows, we describe an explicit smoothing that yields \eqref{goal}.

We choose a finite atlas $U_1,\ldots,U_N$ for $M$. By asymptotic flatness and continuity of $g$, we can choose these $U_k$ so that $g$ is uniformly equivalent to the background Euclidean metric of each patch. That is, on each coordinate patch, we have
\[ C^{-1}\delta_{ij}\leq g_{ij}\leq C\delta_{ij} \]
as positive definite symmetric bilinear forms.
We choose a partition of unity $\psi_1,\ldots, \psi_N$ subordinate to this cover.
On each patch $U_k$, we will define a smoothing $g^k_\epsilon$ of $g$ that is defined on the support of $\psi_k$, which we denote $U'_k$. We then obtain a smoothing $g_\epsilon$ of $g$ by defining $g_\epsilon=\sum_{k=1}^N \psi_k g^k_\epsilon$.

\begin{notation}In what follows, we will use a generic constant $C$ to mean some large number that may depend on $(M,g)$ and the choices of $U_k$ and $\psi_k$. The only thing that will be important to us is that $C$ is independent of $\epsilon$.
\end{notation}
Given a coordinate patch $U_k$, we wish to define $g^k_\epsilon$. Let $\varphi$ be a nonnegative smooth function supported on the unit ball in $\rr^n$ whose integral is $1$.
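One concrete choice (any $\varphi$ with these properties works equally well) is the standard normalized bump function:

```latex
% A standard mollifier supported on the unit ball; c_n is the
% normalizing constant making the integral over R^n equal to 1.
\[
\varphi(x)=
\begin{cases}
c_n\exp\!\left(\dfrac{1}{|x|^2-1}\right) & \text{if } |x|<1,\\
0 & \text{if } |x|\geq 1,
\end{cases}
\]
```

where $c_n>0$ is chosen so that $\int_{\rr^n}\varphi\,dx=1$.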
The standard way to smooth $g$ is to convolve it with
\[\varphi_\epsilon(x) := \epsilon^{-n}\varphi(x/\epsilon).\]
However, we want to smooth $g$ in such a way that it does not change $g$ away from a neighborhood of $S$. In order to do that, we need the following simple lemma.

\begin{lem}\label{sigma}
For each $\epsilon>0$, on each coordinate patch $U_k$, there exists a nonnegative smooth function $\sigma$ such that $\sigma=\epsilon$ on the Euclidean neighborhood $S_\epsilon$ of $S$ in $U_k$, and $\sigma=0$ outside $S_{2\epsilon}$, while $|\partial\sigma|\le 3$ and $|\partial\partial\sigma|\le C\epsilon^{-1}$ everywhere.
\end{lem}
\begin{proof}
Define a continuous function
\[s(x)=\left\{\begin{array}{ll}
\epsilon & \text{for }x\in S_{4\epsilon/3}\\
5\epsilon - 3\dist(x,S) & \text{for }x\in S_{5\epsilon/3}\smallsetminus S_{4\epsilon/3}\\
0 & \text{for }x\notin S_{5\epsilon/3}
\end{array}\right.\]
Then we can define
\[\sigma(x)=\int_{\rr^n} s(x-y)\varphi_{\epsilon/6} (y)\,dy.\]
Clearly, $\sigma$ is a nonnegative smooth function such that $\sigma=\epsilon$ on $S_\epsilon$ and $\sigma=0$ outside $S_{2\epsilon}$. We just need to check the bounds on derivatives.
\begin{align*}
|\sigma(x_1)-\sigma(x_2)| & \leq\int_{\rr^n} |s(x_1-y)-s(x_2-y)|\varphi_{\epsilon/6} (y)\,dy \\
&\leq \int_{\rr^n} 3 |x_1-x_2| \varphi_{\epsilon/6} (y)\,dy\\
&=3|x_1-x_2|,
\end{align*}
where we used the Lipschitz property of $s$ in the second line. Thus $|\partial\sigma|\le 3$.

We know that
\begin{align*}
\partial\sigma(x)&= \int_{\rr^n} s(x-y) \partial\varphi_{\epsilon/6}(y)\,dy \\
&=\int_{\rr^n} s(x-y) \left(\frac{6}{\epsilon}\right)^{n+1} \partial\varphi\left(\frac{6y}{\epsilon}\right)\,dy.
\end{align*}
Arguing as above,
\begin{align*}
|\partial\sigma(x_1)-\partial\sigma(x_2)|
&\leq \int_{\rr^n} 3|x_1-x_2| \left(\frac{6}{\epsilon}\right)^{n+1} \left|\partial\varphi \left(\frac{6y}{\epsilon}\right)\right|\,dy \\
& = \frac{18}{\epsilon} |x_1-x_2| \int_{\rr^n} |\partial\varphi (y)|\,dy \\
&=\frac{C}{\epsilon}|x_1-x_2|.
\end{align*}
Thus $|\partial\partial\sigma|\le C\epsilon^{-1}$.
\end{proof}

For small enough $\epsilon$, we define $g^k_\epsilon$ on the patch $U'_k$ by the formula
\[ (g^k_\epsilon)_{ij}(x)=\int_{\rr^n}g_{ij}(x-\sigma(x)y)\varphi(y)\,dy
=\int_{\rr^n}g_{ij}(y)\varphi_{\sigma(x)}(x-y)\,dy.\]
Keep in mind that the function $\sigma$ described by the lemma above depends on the patch $U_k$, the singular set $S$, and $\epsilon$. Clearly, each component of $g^k_\epsilon$ is smooth. We now claim that $|\partial g^k_\epsilon|\le C$ and $|\partial\partial g^k_\epsilon|\le C\epsilon^{-1}$. For ease of notation, let us prove these inequalities for each component, individually. The lemma below proves the claim.

\begin{lem}\label{convolve}
Let $\sigma$ be the function described in Lemma \ref{sigma}, and let $f$ be a Lipschitz function on $U_k$ that has bounded $C^2$-norm on the complement of $S$.
For small enough $\epsilon$, if we define the function
\[ f_\epsilon(x)=\int_{\rr^n}f(x-\sigma(x)y)\varphi(y)\,dy
=\int_{\rr^n}f(y)\varphi_{\sigma(x)}(x-y)\,dy\]
on the set $U'_k$, then $|\partial f_\epsilon|\le C$ and $|\partial\partial f_\epsilon|\le C\epsilon^{-1}$, where $C$ may depend on the supremum and Lipschitz constant of $f$.
\end{lem}
\begin{proof}
On the complement of $S_\epsilon$, the result follows easily from differentiating the first formula for $f_\epsilon$ above and using the $C^2$ bound on $f$ and the bounds on $|\partial\sigma|$ and $|\partial\partial\sigma|$ from Lemma \ref{sigma}. So we need only consider the region $S_\epsilon$. But in this region we have $\sigma(x)=\epsilon$ by construction, and therefore
\[ f_\epsilon(x)
=\int_{\rr^n}f(y)\varphi_\epsilon(x-y)\,dy
\]
is just the usual mollification formula. Then since $f$ is Lipschitz, a standard computation shows that $|\partial f_\epsilon|$ is bounded. Moreover, for $x\in S_\epsilon$,
\begin{align*}
\partial f_\epsilon (x)
&=\int_{\rr^n} f(y) \partial\varphi_\epsilon(x-y)\,dy\\
&=\int_{\rr^n} f(y) \epsilon^{-n-1}\partial\varphi\left(\frac{x-y}{\epsilon}\right)\,dy\\
&=\int_{\rr^n} f(x-\epsilon y) \epsilon^{-1}\partial\varphi(y)\,dy.
\end{align*}
So for any $x_1,x_2\in S_\epsilon$,
\begin{align*}
|\partial f_\epsilon(x_1)-\partial f_\epsilon(x_2)|
&=\left| \int_{\rr^n} [f(x_1-\epsilon y)-f(x_2-\epsilon y)] \epsilon^{-1}\partial\varphi(y)\,dy\right|\\
&\le \int_{\rr^n} |f(x_1-\epsilon y)-f(x_2-\epsilon y)| \epsilon^{-1}|\partial\varphi(y)|\,dy\\
&\le \int_{\rr^n}C|x_1-x_2| \epsilon^{-1}|\partial\varphi(y)|\,dy\\
&\le \frac{C}{\epsilon}|x_1-x_2|.
\end{align*}
The result follows.
\end{proof}

Setting $g_\epsilon=\sum_{k=1}^N \psi_k g^k_\epsilon$, Lemma \ref{convolve} implies that in each coordinate chart~$U_k$, $|\partial (g_\epsilon)_{ij}|\le C$ and $|\partial\partial (g_\epsilon)_{ij}|\le C\epsilon^{-1}$ for some $C$ independent of~$\epsilon$. From looking at how scalar curvature depends on the metric, it is clear that $|R_{g_\epsilon}|\le C\epsilon^{-1}$ for some $C$. Meanwhile, $g=g_\epsilon$ outside $S_{2\epsilon}$, and by our assumption on the Minkowski content of $S$, we have
\begin{align*}
\int_{S_{2\epsilon}} |R_{g_\epsilon}|^{n/2}\,dg &\le \mathcal{L}_g^n(S_{2\epsilon})\sup |R_{g_\epsilon}|^{n/2} \\
&= o(\epsilon^{n/2}) O(\epsilon^{-n/2})\\
&=o(1),
\end{align*}
which is our desired estimate \eqref{goal}. The rest of the proof of Theorem \ref{maintheorem} proceeds exactly as in \cite{Miao:2002}. (Technically, since we are using \emph{lower} Minkowski content, it is inaccurate to say that $\mathcal{L}_g^n(S_{2\epsilon})= o(\epsilon^{n/2})$, but the argument still works since we only need to use a subsequence of $\epsilon$'s approaching zero. Also, we were careless about the distinction between defining $S_\epsilon$ using the metric $g$ versus the Euclidean metric on each chart, but by uniform equivalence of metrics, this sloppiness is inconsequential.)

\section{Proof of Theorem \ref{w1p}}

The proof of the $W^{1,p}$ version of Theorem \ref{maintheorem} requires only slight modification. Choose $M^n$, $g$, $S$, and $p$ as in the statement of Theorem \ref{w1p}.
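Before doing so, it may help to note how the content dimension threshold $\frac{n}{2}(1-\frac{n}{p})$ compares with the Lipschitz case:

```latex
% As p -> infinity (heuristically, the Lipschitz case W^{1,infty}),
% the threshold recovers n/2; as p decreases to n, it shrinks to 0.
\[
\lim_{p\to\infty} \frac{n}{2}\left(1-\frac{n}{p}\right) = \frac{n}{2},
\qquad
\lim_{p\to n^{+}} \frac{n}{2}\left(1-\frac{n}{p}\right) = 0.
\]
```

Thus more regularity (larger $p$) permits a larger singular set, and in the limit one recovers the $n/2$ threshold of Theorem \ref{maintheorem}.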
First, observe that because of the $C^2$ bounds on $g$, we can see that $|R_{g_\epsilon}|=O(\epsilon^{-1})$ on the complement of $S_\epsilon$, just as in the Lipschitz case, and we now have even better bounds on $\mathcal{L}^n_g(S_{2\epsilon})$, so we have
\[\int_{S_{2\epsilon}\smallsetminus S_\epsilon} |R_{g_\epsilon}|^{n/2}\,dg=o(1).\]
Therefore, in order to establish \eqref{goal}, it is sufficient to show that
\begin{equation}\label{newgoal}
\int_{S_\epsilon} |R_{g_\epsilon}|^{n/2}\,dg=o(1).
\end{equation}


Next we use a $W^{1,p}$ version of Lemma \ref{convolve}.
\begin{lem}\label{convolve-w1p}
Let $\sigma$ be the function described in Lemma \ref{sigma}, and let $f\in W^{1,p}_{\mathrm{loc}}(U_k)$ such that $f$ has bounded $C^2$-norm on the complement of $S$. For small enough $\epsilon$, if we define the function
\[ f_\epsilon(x)=\int_{\rr^n}f(x-\sigma(x) y)\varphi(y)\,dy
=\int_{\rr^n}f(y)\varphi_{\sigma(x)}(x-y)\,dy\]
on the set $U'_k$, then $\|\partial f_\epsilon\|_{L^p(S_\epsilon\cap U'_k)} \le C$ and
$|\partial\partial f_\epsilon|\le C\epsilon^{-1-\frac{n}{p}}$ on $S_\epsilon\cap U'_k$, where $C$ may depend on the $W^{1,p}$ norm of $f$.
\end{lem}
\begin{proof}
Recall that for $x\in S_\epsilon$, $\sigma(x)=\epsilon$, so that the formula for $f_\epsilon$ is the usual mollification formula. A standard argument using H\"{o}lder's inequality shows that
\[ \|\partial f_\epsilon\|_{L^p(S_\epsilon \cap U'_k)} \le \|\partial f\|_{L^p(S_{2\epsilon}\cap U_k)}\le C.\]
For any $x\in S_\epsilon$, if $q$ is chosen so that $\frac{1}{p}+\frac{1}{q}=1$, then
\begin{align*}
|\partial\partial f_\epsilon(x)|
&= \left|\partial\partial \int_{\rr^n} f(x-y)\varphi_\epsilon(y)\,dy\right|\\
&= \left|\partial\int_{\rr^n} \partial f(x-y)\varphi_\epsilon(y)\,dy\right|\\
&= \left|\partial\int_{\rr^n} \partial f(y)\varphi_\epsilon(x-y)\,dy\right|\\
&= \left|\int_{S_{2\epsilon}\cap U_k} \partial f(y) \partial\varphi_\epsilon(x-y)\,dy\right|\\
&\le \|\partial f\|_{L^p(S_{2\epsilon}\cap U_k)} \left(\int_{B_\epsilon(x)} |\partial\varphi_\epsilon(x-y)|^q\,dy\right)^{\frac{1}{q}}\\
&\le C\left(\epsilon^{n} (\epsilon^{-n-1})^q\right)^{\frac{1}{q}}\\
&= C\epsilon^{-1-\frac{n}{p}}.
\end{align*}
\end{proof}
Since the scalar curvature is a contraction of $\partial\partial g + g^{-1}*g^{-1}*\partial g *\partial g$, on any of the coordinate patches, we can use Lemma \ref{convolve-w1p} to estimate
\begin{align*}
\int_{S_\epsilon\cap U_k} |R_{g_\epsilon}|^{n/2}\,dg
&\le C\int_{S_\epsilon\cap U_k} |\partial\partial g_\epsilon|^{n/2}\,dx + C\int_{S_\epsilon\cap U_k} |\partial g_\epsilon|^{n}\,dx \\
&\le \mathcal{L}^n(S_\epsilon\cap U_k) \left(C\epsilon^{-1-\frac{n}{p}}\right)^{n/2} + o(1)\\
&= o\left(\epsilon^{\frac{n}{2}\left(1+\frac{n}{p}\right)}\right) \epsilon^{-\frac{n}{2}\left(1+\frac{n}{p}\right)}+o(1)\\
&=o(1).
\end{align*}
As in Section \ref{main}, there is some justifiable carelessness in the computation above. And once again, the rest of the proof follows exactly as in \cite{Miao:2002}.

\bibliographystyle{hamsplain}