diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzglhr" "b/data_all_eng_slimpj/shuffled/split2/finalzzglhr"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzglhr"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\nAdiabatic quantum computing~\\cite{farhiQuantum00} and quantum annealing~\\cite{kadowakiQuantum98}\ncan be used to prepare ground states and thermal states of quantum\nsystems, a central desideratum of quantum computation. The quality\nof ground state preparation and Gibbs sampling is central to the performance\nof quantum algorithms for optimization as well as machine learning\nproblems~\\cite{Amin:2016}. The performance of these computational\nparadigms is quantified by the adiabatic theorem in its different\nforms, whether for closed or open quantum systems. The theorem, or\nfamily of theorems, places a bound on the adiabatic error: the distance\nbetween the state that we set out to prepare (usually the solution\nto a computational problem) and the state the system actually ends up in through\nthe dynamical evolution. In general, if a spectral gap condition is\nsatisfied, the adiabatic error is bounded by $C\/t_{f}$ for some constant\n$C$, where $t_{f}$ is the total evolution time~\\cite{katoAdiabatic50,jansenBounds07,joye_general_2007}.\nThe boundary cancellation theorem (BCT) shows that an improvement\nis possible if the schedule from the initial to the final generator\n(Hamiltonian or Liouvillian) has certain properties. In the closed\nsystem case the adiabatic error is bounded by $C_{k}\/t_{f}^{k+1}$\nfor some constant $C_{k}$ independent of $t_f$, if the time-dependent Hamiltonian $H(t)$\nhas vanishing derivatives up to order $k$ at the \\emph{beginning\nand end} of the evolution~\\cite{garridoDegree62}, a bound that\ncan be improved to exponentially small in $t_{f}$ under additional\nassumptions~\\cite{nenciu_linear_1993,Hagedorn:2002kx,lidarAdiabatic09,wiebeImproved12,Ge:2015wo}.\nThis theorem has recently been extended to the open system setting\nfor a general class of time-dependent Liouville operators $\\mathcal{L}(t)$,\nwhere it can be used to prepare steady states of Liouvillians instead\nof ground states of Hamiltonians. The theorem can be succinctly stated\nas follows: For a time-dependent Liouvillian $\\mathcal{L}(t)$ with\na unique steady state $\\sigma(t)$ separated by a gap at each time\n$t$, the adiabatic error is likewise bounded by $C_{k}\/t_{f}^{k+1}$,\nbut it is sufficient for the derivatives to vanish up to order $k$\n\\emph{only at the end time} $t_{f}$~\\cite{camposvenutiError18}.\nBoundary cancellation (BC) plays a significant role in the theoretical\nanalysis of error suppression in Hamiltonian quantum computing~\\cite{albashDecoherence15,Lidar:2019ab}.\n\nIn this work we set out to demonstrate the BCT predictions in experiments\nusing the D-Wave 2000Q (DW2KQ) quantum annealer. The DW2KQ implements the transverse field Ising model $H(t) = A(t)H_X +B(t)H_Z$, where $H_X$ and $H_Z$ are transverse and longitudinal (Ising) Hamiltonians, respectively~\\cite{johnsonQuantum11,DW2KQ}.\nIt allows\nlimited programmability of the control schedules $A(t)$ and $B(t)$\nof the system Hamiltonian, which we exploit to implement boundary cancellation\nprotocols. 
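\nTo illustrate the kind of schedule the BCT calls for (a simple example for orientation, not the protocol implemented below): for $k=1$ the cubic\n\\begin{equation}\ns(\\tau)=3\\tau^{2}-2\\tau^{3}\\ ,\\qquad\\tau=t\/t_{f}\\ ,\n\\end{equation}\nsatisfies $\\dot{s}(0)=\\dot{s}(1)=0$, so a closed-system anneal along $H(s(\\tau))$ has an adiabatic error of $O(1\/t_{f}^{2})$ rather than $O(1\/t_{f})$; in the open-system setting the derivatives need vanish only at $\\tau=1$.\n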
\n\nTo guide our intuition we model the behavior of the D-Wave\nquantum annealer with the adiabatic master equation (AME) derived in \\cite{albashQuantum12}.\nThis is a time-dependent Davies-like master equation~\\cite{davies_markovian_1974} which has been successfully used to interpret several D-Wave experiments, e.g.,~\\cite{albashConsistency15,albashReexamination15} (though not always~\\cite{bando2021breakdown}).\nWhen combining the AME for dephasing with boundary cancellation, we encounter a problem. Namely, as we explain in detail below, the Liouvillian gap vanishes at $t=t_f$, which prevents us from directly applying the BCT in the form given in Ref.~\\cite{camposvenutiError18}. \n\nTo circumvent this problem, in this work we generalize the BCT and identify the conditions on the Liouvillian gap under which BC does or does not remain effective. We find that the generalized BCT\nplays an important role in the D-Wave implementation.\n\nThere is another significant consideration regarding\nthe implementation of BC on D-Wave:\nthe phenomenon of freezing. In essence, freezing refers to\na significant increase in all relaxation timescales well before the\nend of the anneal ($t < t_f$), i.e., while the transverse field is still on, $A(t) > 0$. It is then natural to assume that the spectrum of\n$H_{S}(t)$ is non-degenerate. The density\nmatrix in the energy basis is $\\rho(t)=\\sum_{mn}\\rho_{mn}|m\\rangle\\!\\langle n|$.\nThe diagonal elements of $\\rho$ in this basis evolve according to\nthe Pauli master equation~\\cite{Pauli-master-equation} (for a modern derivation see, e.g., Ref.~\\cite{Lidar:2019aa}). In particular, the ground state probability\n$\\rho_{00}=\\langle0|\\rho(t)|0\\rangle$ evolves according to \n\\bes\n\\label{eq:freezing}\n\\begin{align}\n\\label{eq:freezing-a}\n\\dot{\\rho}_{00} & =\\sum_{n}\\left(\\rho_{nn}W_{0n}-\\rho_{00}W_{n0}\\right) \\\\\n & =\\sum_{n>0}W_{0n}\\left(\\rho_{nn}-\\rho_{00}e^{-\\beta\\left(E_{n}-E_{0}\\right)}\\right)\\ ,\n \\label{eq:freezing-b}\n\\end{align}\n\\ees\nwhere the transition rate matrix $W$ has the following matrix elements:\\footnote{The matrix $W$ satisfies detailed balance, i.e., $W_{nm}e^{-\\beta E_{m}}=W_{mn}e^{-\\beta E_{n}}$,\nwhich follows from an analogous equation for $\\gamma(\\omega)$, which\nin turn follows from the Kubo-Martin-Schwinger (KMS) conditions on the bath correlation function.} \n\\begin{equation}\nW_{mn}=\\sum_{i,j}\\gamma_{ij}(E_{n}-E_{m})\\langle m|\\sigma_{j}^{z}|n\\rangle\\langle n|\\sigma_{i}^{z}|m\\rangle.\n\\label{eq:matrix_M}\n\\end{equation}\n\nOn the basis of Eqs.~\\eqref{eq:freezing} and~\\eqref{eq:matrix_M},\nfreezing is seen to be a consequence of the following argument. Since\nwe are in the adiabatic regime, only the lowest levels are populated,\ni.e., $\\rho_{nn}\\simeq0$ for $n$ greater than some $n_{A}$. Eq.~\\eqref{eq:freezing}\nis then replaced by \n\\bes\n\\label{eq:freezing-2}\n\\begin{align}\n\\label{eq:freezing-2a}\n\\dot{\\rho}_{00}&\\simeq\\sum_{n=1}^{n_{A}}W_{0n}\\left(\\rho_{nn}-\\rho_{00}e^{-\\beta\\left(E_{n}-E_{0}\\right)}\\right) \\\\\n&\\qquad -\\rho_{00}\\sum_{n=n_{A}+1}^{d}W_{0n}e^{-\\beta\\left(E_{n}-E_{0}\\right)}\\ ,\n\\label{eq:freezing-2b}\n\\end{align}\n\\ees\nwhere $d$ is the system's Hilbert space dimension. When $A(t)=0$, the Hamiltonian is diagonal in the computational basis. Let the Hamming distance between the ground state\n$|0\\rangle$ and the excited states $\\{|n\\rangle\\}_{n=1}^{n_{A}}$ when $A(t)=0$\nbe at least $q$ (note that $q\\ge 1$). 
Using perturbation theory\naround $A(t)=0$ it can be shown that \n$\\langle0|\\sigma_{j}^{z}|n\\rangle=O\\left(A^{q}\\right)$;\nsee App.~\\ref{app:Proof-of-freezing}. Hence, at the time $t_0$ at which $A$ is sufficiently small (but non-zero)\none has $W_{0n}= O(A^{2q}) \\simeq0$ for $n=1,2,\\ldots,n_{A}$, and we can neglect the sum in line~\\eqref{eq:freezing-2a}; we determine $t_0$ below.\nAs for the term in line~\\eqref{eq:freezing-2b}, the transition rates between higher excited states\nare typically smaller, and moreover, the terms are exponentially suppressed\nbecause of the larger gaps (i.e., for sufficiently small temperatures $E_{n}-E_{0}\\gg 1\/\\beta$, given that $n\\ge n_A+1 \\ge 2$). As a\nconsequence, Eq.~\\eqref{eq:freezing-2} becomes $\\dot{\\rho}_{00}\\approx0$,\ni.e., the ground state population is effectively frozen for $t\\ge t_0$. \n\nThe location of the freezing point for the $n$th excited state is determined by the\npoint where \nthe relaxation time for the $n$th excited state, given by $\\tau_{\\text{rel}}^{(n)} = W_{0n}^{-1}$, becomes longer than the anneal time. As long as just one of these relaxation times, with $n\\in\\{1,\\ldots,n_{A}\\}$, is too long, the system will not thermalize. For the system to freeze, i.e., $\\dot{\\rho}_{00}\\approx0$, we need all transitions to cease. Hence, we define the freezing point as the solution $t_0$ of $\\min_{n\\in\\{1,\\ldots,n_{A}\\}}\\tau_{\\text{rel}}^{(n)} =t_f$, or:\\footnote{A shortcut to deriving Eq.~\\eqref{eq:freezing_point} is to interpret $1\/\\tau_{\\text{rel}}^{(n)} = \\gamma\\left(E_{n}-E_{0}\\right)\\left|\\langle 0|V|n\\rangle\\right|^{2} = W_{0n}$ as a statement of Fermi's golden rule for the transition rate, with $V = \\sum_{j}\\sigma_{j}^{z}$ playing the role of the perturbation, and $\\gamma$ the density of states.}\n\\begin{equation}\nt_0 := \\min\\{t\\in[0,t_f] \\text{ s.t. }\\max_{n\\in\\{1,\\ldots,n_{A}\\}} W_{0n}(t)=1\/t_{f}\\}\\ .\n\\label{eq:freezing_point}\n\\end{equation}\nIn terms of the control schedule $s(\\tau)$, we have $s_0 := s(\\tau_0)$, where $\\tau_0 = t_0\/t_f$. In practice, to determine the freezing point $s_0$ we implement a closely related procedure described in App.~\\ref{app:AME}.\n\nNote that, as a consequence of the perturbative argument, the rates $W_{0n}$ are (for sufficiently\nsmall $A$) increasing functions of $A$.\nSo if $A(t)$ is decreasing in $t$, $W_{0n}$ can be considered\nzero for $t\\ge t_0$. A similar argument works when substituting $0\\leftrightarrow m$,\nand one finds that the population in the $m$th excited state, $\\rho_{mm}$, is frozen,\nalbeit with possibly different freezing points than those given by Eq.~\\eqref{eq:freezing_point}. \n\nAnother consequence of this argument is that the phenomenon of freezing\nshould be more pronounced (i.e., occur for smaller $t_0$ and result in smaller $\\dot{\\rho}_{00}$ for $t>t_0$) for those problems where the ground state\nis separated by a large Hamming distance from the excited states, i.e., a larger tunneling barrier such as the T-gadget compared to the FM-gadget, discussed in Sec.~\\ref{sec:8QG} below.\nThese problems are the ones which are harder to simulate with standard classical\nsimulated annealing with single spin flip moves~\\cite{kirkpatrick_optimization_1983} (cluster flip moves~\\cite{PhysRevLett.115.077201} would not necessarily be similarly affected). 
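\n\nTo make Eq.~\\eqref{eq:freezing_point} concrete, the following minimal Python sketch locates the freezing point for a single dominant transition; the schedule $A(t)$, the rate prefactor, and all numbers are purely illustrative stand-ins for the AME rates $W_{0n}(t)$ computed in App.~\\ref{app:AME}:\n\\begin{verbatim}\nimport numpy as np\n\nt_f = 100e-6  # total anneal time (s); illustrative\nq = 3         # ground\/excited-state Hamming distance\nA = lambda t: np.exp(-5 * t \/ t_f)   # toy schedule A(t)\nW = lambda t: 1e9 * A(t)**(2 * q)    # perturbative rate ~ A^{2q}\n\n# Earliest t in [0, t_f] at which the fastest relaxation\n# rate has dropped to 1\/t_f (returns t=0 if never frozen).\nts = np.linspace(0.0, t_f, 10000)\nt_0 = ts[np.argmax(W(ts) <= 1.0 \/ t_f)]\nprint('s_0 =', t_0 \/ t_f)  # ~0.38 for these numbers\n\\end{verbatim}\nConsistent with the argument above, increasing $q$ in this toy model pushes $s_0$ to smaller values.\n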
\n\nFinally, note that a \\emph{simulation} of a master equation to which the considerations above apply, such as the AME (see App.~\\ref{app:AME}), is also expected to exhibit freezing.\n\n\n\\subsection{Boundary cancellation with a ramp at the end}\n\\label{sec:ramp}\n\nSince the ground state population does not change past the freezing\npoint, no change in the schedule would be effective if performed\nafter freezing. In view of these considerations we perform BC before\nfreezing sets in, which --- for the standard control schedule $s(\\tau)=\\tau$ (see Fig.~\\ref{fig:dw2kq_schedule}) --- happens for $s\\approx 0.55$,\ndepending on the problem. Our strategy will be the following. First,\nevolve from $t=0$ to $t_{f}$ with a BC schedule.\nTo avoid freezing, we ensure that $A(t_{f})\\neq0$ at the end of the BC schedule. \nThus, $t_f$ does \\emph{not} correspond to the usual total anneal time for which $A(t_f)=0$. Right after the BC schedule ends, we perform\na linear ramp\\footnote{The term ``quench'' is used in the D-Wave documentation instead of ramp~\\cite{dwave-manual}.} of duration $t_r = 1\\,\\mu$s until the schedules reach their final values (in particular $A(t_{f}+t_r)=0$), after which the system is measured in the computational\nbasis. The state after the entire evolution can be written as \n$\\mathcal{E}_{\\mathrm{ramp}}\\mathcal{E}_{\\mathrm{BC}}\\rho(0)$,\nwhere $\\mathcal{E}_{\\mathrm{BC}}$ ($\\mathcal{E}_{\\mathrm{ramp}}$) is the\nevolution through the BC schedule (ramp). However, random local fields\nand coupler perturbations [integrated control errors (ICE)] result\nin a Hamiltonian that does not behave as intended in the ideal case~\\cite{Albash:2019ab},\nand these errors have been well documented in the D-Wave processors~\\cite{dwave-manual}.\nThe effect of such random perturbations can be controlled with error\nsuppression and correction \\cite{youngAdiabatic13,pudenzErrorcorrected14,pearsonAnalog19},\nwhich will be employed below for the ferromagnetic chain gadget. Due\nto this ICE effect, the measured state is better represented by \n\\begin{equation}\n\\rho_{\\mathrm{final}}:=\\mathsf{E}_{J}\\left[\\mathcal{E}_{\\mathrm{ramp}}\\mathcal{E}_{\\text{BC}}\\rho(0)\\right]\\ ,\n\\label{eq:rho_f}\n\\end{equation}\nwhere we denoted by $\\mathsf{E}_{J}\\left[\\bullet\\right]$ the\naverage over the noise on the random couplings $J_{ij}$ and fields\n$h_{i}$. \n\n\nLet us now define $\\rho(t_{f}):=\\mathcal{E}_{\\text{BC}}\\rho(0)$, while\n$\\sigma(t_{f})$ is the Gibbs state at the end of the BC schedule [recall that $\\rho(0)= \\sigma(0)$],\ncorresponding to the Hamiltonian in Eq.~\\eqref{eq:QA_Hamiltonian}. The\nadiabatic theorem in its various forms, including with boundary cancellation,\nprovides an upper bound on the pre-ramp distance $\\|\\delta\\|_1$, where $\\delta:=\\rho(t_{f})-\\sigma(t_{f})$. 
Defining \n\\begin{equation}\n\\sigma_{\\mathrm{final}}:=\\mathsf{E}_{J}\\left[\\mathcal{E}_{\\mathrm{ramp}}\\sigma(t_{f})\\right]\\ ,\n\\label{eq:sigma_f}\n\\end{equation}\n$\\delta$ can be related to $\\rho_{\\mathrm{final}}$ and $\\sigma_{\\mathrm{final}}$\nvia the following bound:\n\\bes\n\\label{eq:18}\n\\begin{align}\n\\left\\Vert \\rho_{\\mathrm{final}}-\\sigma_{\\mathrm{final}}\\right\\Vert _{1} & =\\left\\Vert \\mathsf{E}_{J}[\\mathcal{E}_{\\mathrm{ramp}}\\delta]\\right\\Vert _{1}\\label{eq:bound}\\\\\n & \\le\\mathsf{E}_{J}\\left[\\left\\Vert \\mathcal{E}_{\\mathrm{ramp}}\\delta\\right\\Vert _{1}\\right]\\label{eq:Jensen}\\\\\n & \\le\\mathsf{E}_{J}\\left[\\left\\Vert \\delta\\right\\Vert _{1}\\right]\\ .\n \\label{eq:CPTP}\n\\end{align}\n\\ees\nHere Eq.~\\eqref{eq:Jensen} follows from Jensen's inequality~\\cite{Jensen:1906up} and the fact that every norm is convex (implying $\\left\\Vert \\mathsf{E}_{J}\\left[x\\right]\\right\\Vert _{1}\\le\\mathsf{E}_{J}\\left[\\left\\Vert x\\right\\Vert _{1}\\right]$),\nwhile Eq.~\\eqref{eq:CPTP} follows because $\\mathcal{E_{\\mathrm{ramp}}}$\nis a CPTP map (implying $\\left\\Vert \\mathcal{E}_{\\mathrm{ramp}}\\delta\\right\\Vert _{1}\\le\\left\\Vert \\delta\\right\\Vert _{1}$).\nNote that $\\rho_{\\mathrm{final}}$ is the empirically measured state,\nwhile $\\sigma_{\\mathrm{final}}$ differs from the state that we\nsought to prepare, $\\sigma(t_{f})$, by the presence of the extra operations\n$\\mathsf{E}_{J}\\left[\\mathcal{E}_{\\mathrm{ramp}}\\bullet\\right]$.\n\nThe bound~\\eqref{eq:18} implies a similar bound for the ground state probabilities.\nLet us define \n\\bes\n\\label{eq:19}\n\\begin{align}\n\\label{eq:19a}\nP_{\\text{GS}} &:=\\mathrm{Tr}\\left[|\\text{GS}\\rangle\\!\\langle \\text{GS}|\\rho_{\\mathrm{final}}\\right]\\\\\n\\label{eq:19b}\nP_{\\text{GS}}^{*} &:=\\mathrm{Tr}\\left[|\\text{GS}\\rangle\\!\\langle \\text{GS}|\\sigma_{\\mathrm{final}}\\right]\\ ,\n\\end{align}\n\\ees\nwhere $|\\text{GS}\\rangle$ is the ground state of the Hamiltonian at the\nend of the anneal, i.e., the ground state of $H_{Z}$. 
\nSince\\footnote{Here \\unexpanded{$\\left\\Vert y\\right\\Vert _{\\infty}$} is the operator\nnorm of $y$, i.e., its maximum singular value, which is $1$ for an orthogonal\nprojection.} \n\\begin{equation}\n\\left|\\mathrm{Tr}\\left[|\\text{GS}\\rangle\\!\\langle \\text{GS}|x\\right]\\right|\\le\\left\\Vert |\\text{GS}\\rangle\\!\\langle \\text{GS}|\\right\\Vert _{\\infty}\\left\\Vert x\\right\\Vert _{1}=\\left\\Vert x\\right\\Vert _{1}\\ ,\\end{equation}\n it follows that the \\emph{adiabatic error} defined as\n\\begin{equation}\nD_{\\text{GS}}:=\\left|P_{\\text{GS}}-P_{\\text{GS}}^{*}\\right|\\ ,\n\\label{eq:D_GS}\n\\end{equation}\nsatisfies\n \\begin{equation}\nD_{\\text{GS}} \\le \\left\\Vert \\rho_{\\mathrm{final}}-\\sigma_{\\mathrm{final}}\\right\\Vert _{1}\\ .\n \\label{eq:16}\n \\end{equation}\nThe adiabatic error quantifies the difference between the ground state overlaps of the experimentally measured state ($\\rho_{\\mathrm{final}}$) and the Gibbs state ($\\sigma_{\\mathrm{final}}$).\n\n\\subsection{An adiabatic error bound that combines everything}\n\\label{sec:everything}\n\nCombining Eqs.~\\eqref{eq:18} and~\\eqref{eq:16} with Props.~\\ref{prop:BCT} and~\\ref{prop:BC_gapless_end} and our numerical evidence,\nwe obtain \n\\begin{align}\n\\label{eq:21}\nD_{\\text{GS}} \\le \\left\\Vert \\rho_{\\mathrm{final}}-\\sigma_{\\mathrm{final}}\\right\\Vert _{1} &\\le \\frac{C}{t_{f}^{\\eta}}\\ ,\n\\end{align}\nwhere $C$ is now the noise averaged constant ($\\mathsf{E}_{J}\\left[\\bullet\\right]$) and $\\eta$ depends\non the physical assumptions. Namely, $\\eta=k+1$ if the Liouvillian\neither has a non-zero spectral gap throughout the anneal or if BC is enforced after the Liouvillian gap has already closed somewhere along the anneal, while $\\eta=(k+1)\/(k\\alpha+\\alpha+1)$\nif the Liouvillian gap closes at the same point at which BC is enforced (Prop.~\\ref{prop:BC_gapless_end}). \n\nOf course we may reformulate Eq.~\\eqref{eq:21} as a bound on $P_{\\text{GS}}$:\n\\begin{equation}\nP_{\\text{GS}}^* - \\frac{C}{t_{f}^{\\eta}} \\leq P_{\\text{GS}} \\leq P_{\\text{GS}}^* + \\frac{C}{t_{f}^{\\eta}}\\ .\n\\label{eq:PGS-bound}\n\\end{equation}\n\n\n\\subsection{Anomalous heating}\n\\label{sec:anom-heat}\n\nOur discussion so far has assumed that the effective temperature of the system remains constant as a function of both the anneal parameter $\\tau = t\/t_f$ and the total anneal time $t_f$. However, there is evidence to suggest that the latter is in fact not the case in the D-Wave devices, i.e., the temperature is $t_f$-dependent. The reason is an unintentional but omnipresent high-energy photon flux that enters the D-Wave chip from higher temperature stages through cryogenic filtering, which accumulates over long anneal times and manifests as an effectively higher on-chip temperature~\\cite{anomalous-heating}. This anomalous heating phenomenon will hinder our ability to test the BCP, since it means that in fact $P_{\\text{GS}}^*$ is a function of $t_f$, which complicates testing the BCT prediction as summarized by Eq.~\\eqref{eq:PGS-bound}. 
Indeed, for a Gibbs state $\\sigma_{\\text{final}} = e^{-\\beta H_S}\/Z$ (with $\\beta$, $H_S$, and the partition function $Z$ all evaluated at $t=t_f$), expanding $H_S$ in its eigenbasis as $H_S =\\sum_{i=0} E_i \\ketb{i}{i}$ we readily find\n\\begin{equation}\nP_{\\text{GS}}^* = \\langle \\text{GS}|\\sigma_{\\mathrm{final}}|\\text{GS}\\rangle = \\frac{1}{1+\\sum_{i=1}e^{-\\beta(t_f)\\Delta_{i}}} \\ ,\n\\end{equation}\nwhere $\\ket{0}$ is the ground state and $\\Delta_{i}:= E_i-E_0$.\n\nAssuming for simplicity that $\\beta(t_f) = \\beta_0 + a\/t_f$, i.e., an inverse temperature that depends linearly on $1\/t_f$ with rate $a>0$, and that $1 \\ll \\beta(t_f)\\Delta_1 \\ll \\beta(t_f)\\Delta_i$ for $i\\ge 2$, we can write this as \n\\begin{equation}\nP_{\\text{GS}}^* \\approx 1 - e^{-\\beta_{0}\\Delta_1}e^{-a\\Delta_1\/t_f} \\ ,\n\\end{equation}\ni.e., the algebraic scaling with $t_f$ of Eq.~\\eqref{eq:PGS-bound} becomes obscured by an exponential scaling due to $P_{\\text{GS}}^*$.\n\nHowever, in reality we do not know the functional form of $\\beta(t_f)$, and the assumption $1 \\ll \\beta(t_f)\\Delta_1 \\ll \\beta(t_f)\\Delta_i$ may not hold. Therefore, in the analysis of our experimental results in Sec.~\\ref{sec:results} below, we instead use an ansatz of the form \n\\begin{equation}\nP_{\\text{GS}} = \\bar{P}_{\\text{GS}}^* + \\frac{C'}{t_{f}^{\\eta'}}\\ ,\n\\label{eq:PGS-fit}\n\\end{equation}\nwhere $\\bar{P}_{\\text{GS}}^*$ is a free fitting parameter representing an averaged value of the true (unknown) $P_{\\text{GS}}^*(t_f)$, $C'$ becomes another fitting parameter which already accounts for noise averaging as explained in Sec.~\\ref{sec:everything}, and \n$\\eta'$ plays the role of the effective scaling exponent, i.e., our proxy for $\\eta$ in Eq.~\\eqref{eq:PGS-bound}. \n\n\\section{Methods}\n\\label{sec:methods}\n\n\\subsection{Boundary Cancellation Protocol (BCP)}\n\\label{sec:bcp}\n\n\\begin{figure}[t]\n\\includegraphics[width=0.95\\columnwidth]{figs\/dw2kq6_param_b2}\n\\includegraphics[width=0.95\\columnwidth]{figs\/dw2kq6_sched_b2} \n\\caption{An example of the D-Wave boundary cancellation protocol with the $\\beta_{2}$\nschedule ramped at $s_{\\text{BC}}=0.50$, with the constructed parametric\nschedule (top) and the corresponding physical schedules (bottom).\nFor the schedule shown here we chose an anneal time of $t_{f}=100\\,\\mu$s. The ramp duration is $t_r=1\\,\\mu$s. Contrast with the native schedule of the DW-LN processor shown in Fig.~\\ref{fig:dw2kq_schedule}. The most significant impact is on the $B(s)$ schedule, which is not natively flat, in contrast to the $A(s)$ schedule, which natively approaches $0$ in a very flat manner.}\n\\label{fig:sched}\n\\end{figure}\n\n\\begin{figure}\n\\centering \\subfigure[\\ ]{\\includegraphics[height=1.84in]{figs\/fm_gadget}}\n\\hspace{0.1in} \\subfigure[\\ ]{\\includegraphics[height=1.8in]{figs\/t_gadget}}\n\\caption{Illustration of the FM-gadget (a) and T-gadget (b) embedded into the\nChimera architecture as represented by an Ising Hamiltonian $H_Z$ [Eq.~\\eqref{eq:HXHZ}]. Ferromagnetic\n(blue) and anti-ferromagnetic (dashed purple) couplings all have the\nsame magnitude, with $J_{ij}=-1$ and $J_{ij}=1$, respectively. Local fields $h_i$ are indicated by the\narrows, with their values beside them. 
The ordering of the qubits in\nthe Chimera architecture is also enumerated in (a).}\n\\label{fig:fm_chain}\n\\end{figure}\n\n\\begin{figure*}\n\\centering \\subfigure[\\ FM-gadget]{\\includegraphics[width=0.46\\textwidth]{figs\/dw2kq6_fer_a_ct0_gaps}\\label{fig:fm_gaps-a}}\n\\hfill{}\\subfigure[\\ T-gadget]{\\includegraphics[width=0.46\\textwidth]{figs\/dw2kq6_t_half_gaps}\\label{fig:fm_gaps-b}}\n\\caption{Spectral gaps of (a) the FM-gadget and (b) the T-gadget (with $\\gamma=0.5$),\nas the schedule is varied according to typical physical schedules\n$A(s)$ and $B(s)$ of the D-Wave processor (App.~\\ref{app:betasched}),\nshown in units of $\\mathrm{ns}^{-1}$ ($\\hbar=1$). The notation $\\Delta_{n,n+1}$\ndenotes the energy gap between the $n$th and $n+1$th Hamiltonian\neigenstates, with $n=0$ being the ground state. The $13.5\\,\\mathrm{mK}$\nenergy scale is also shown, which is the reported dilution fridge temperature of the\nDW-LN processor. The minimum gap location is $s^{*}=0.43$ for the\nFM-gadget and $s^{*}=0.44$ for the T-gadget. The minimum gap region\nis marked by the light blue shading, while the estimated start of\nfrozen dynamics at $s_0$ is marked by pink shading,\nfound by solving Eq.~\\eqref{eq:freezing_point} (see App.~\\ref{app:AME} for details).\nNote that for both gadgets, the first, second, and third excited states\nare all initially degenerate in the subspace of a single $\\ket{-}$\nexcitation of the transverse-field ground state. The nonzero excited\nstate gaps $\\Delta_{12}$ and $\\Delta_{23}$ of the FM gadget ($\\Delta_{12}$ only for the T gadget) at $s=1$ are the result\nof symmetry breaking due to a cross-talk term we included in the\nHamiltonian, of strength $\\chi = 0.02$ (see App.~\\ref{app:xtalk}). The avoided level\ncrossing of the T-gadget is slightly narrower than that of the FM-gadget.\n}\n\\label{fig:fm_gaps}\n\\end{figure*}\n\n\nWe performed most of our experiments with the D-Wave 2000Q low noise (DW-LN) processor\naccessed through D-Wave Leap. We also performed additional experiments\nwith the D-Wave 2000Q processor at the NASA Quantum Artificial Intelligence\nLaboratory (DW-NA) as well as the D-Wave Advantage (DWA) processor\nthrough D-Wave Leap. \nIn the processor specifications, the hardware-determined schedules\n$A(s)$ and $B(s)$ are parametrized by the user-defined control schedule\n$s(\\tau)$. In the standard case the control is linear, i.e., $s(\\tau)=\\tau$\nbut in general $s(\\tau)$ can be programmed in a piecewise linear\nmanner as a function of time. The processor permits a maximum of $12$\npoints to specify the piecewise linear function $s(\\tau)$. We take\nadvantage of this enhanced capacity to approximate a BC schedule.\n The allowed range of programmable anneal times is $t_{f}\\in[1,2000]\\,\\mu$s.\n\nEven though $A(s)$ and $B(s)$ themselves need not satisfy the vanishing\nderivative requirement of the BCP, it follows from the chain rule that\n$A(s(t))$ and $B(s(t))$ do, as long as the control schedule $s(t)$\nsatisfies this requirement and $A(s)$ and $B(s)$ are differentiable\nto the same order as $s(t)$. 
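\nExplicitly, for the first two orders,\n\\begin{equation}\n\\frac{d}{dt}A(s(t))=A'(s)\\,\\dot{s}\\ ,\\qquad\\frac{d^{2}}{dt^{2}}A(s(t))=A''(s)\\,\\dot{s}^{2}+A'(s)\\,\\ddot{s}\\ ,\n\\end{equation}\nand by induction every term of the $j$th $t$-derivative of $A(s(t))$ contains at least one factor from $\\{\\dot{s},\\ldots,s^{(j)}\\}$, so if the first $k$ derivatives of $s(t)$ vanish at $t=t_{f}$ then so do the first $k$ $t$-derivatives of $A(s(t))$ and $B(s(t))$.\n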
To be concrete, we take \n\\begin{align}\ns(\\tau)=\\beta_{k}(\\tau)\\ ,\\qquad\\tau=t\/t_{f}\\ ,\\label{eq:s=00003Dbeta}\n\\end{align}\nwhere \n\\bes\n\\begin{align}\n\\beta_{k}(\\tau) &:=\\frac{\\mathrm{B}_{\\tau}(1,k+1)}{\\mathrm{B}_{1}(1,k+1)} \\\\\n &= 1-(1-\\tau)^{k+1} \\label{eq:beta_kt}\n\\end{align}\n\\ees\nis the regularized incomplete beta function of order $k$~\\cite{RPL:10,camposvenutiError18},\nwith\n\\begin{equation}\n\\mathrm{B}_{x}(a,b):=\\int_{0}^{x}y^{a-1}(1-y)^{b-1}dy\n\\label{eq:ibf}\n\\end{equation}\nbeing the incomplete beta function. As is apparent from Eq.~\\eqref{eq:beta_kt}, \nthe function $\\beta_{k}(\\tau)$\nhas exactly $k$ vanishing $\\tau$-derivatives at $\\tau=1$,\nas required by the BCT. We refer to this class of control schedules\nsimply as \\emph{beta schedules}. \n\nAs we discussed above, in contrast to the theoretical setup of Ref.~\\cite{camposvenutiError18},\ndue to freezing there is no practical advantage to flattening the\nschedule as $s$ approaches $1$. The effectiveness of the BCP is\nmost apparent when the flattening of the schedule occurs after the\navoided level crossing of the anneal (at $s=s^*$), but before the dynamics freeze (at $s=s_0$).\nThus, rather than using the standard schedules $\\{A(s),B(s)\\}$ with\nlinear $s(\\tau)$ that freeze out around $s\\simeq0.5$, the beta schedule\nneeds to be adjusted so that it terminates at an appropriate point $s_{\\text{BC}}$ corresponding to $t=t_f$ and satisfying\n$0< s^* < s_{\\text{BC}} < s_0$.\n\n\\begin{figure}\n\\caption{Scaling of the adiabatic error of the QAC-encoded FM-gadget with the penalty strength $\\lambda$, for the standard linear control schedule with no ramp.}\n\\label{fig:fm_qac_np}\n\\end{figure}\n\n\\subsection{QAC-Encoded FM Gadget}\n\\label{sec:QAC-results}\n\nThe scaling of the adiabatic error of the QAC-encoded FM-gadget for\nvarious penalty strengths $\\lambda$ [recall Eq.~\\eqref{eq:QAC}]\nis shown in Fig.~\\ref{fig:fm_qac_np}, for the case of the standard\nlinear control schedule with no ramp. We observe that,\nfrom the set of values tried, the optimal penalty strength is $\\lambda=0.07$.\nUsing this optimal penalty value, the ground state probabilities of\nBCP with QAC are shown in Fig.~\\ref{fig:6d}-\\ref{fig:6e}. The\nQAC-protected $P_{\\text{GS}}$ results for both $s_{\\text{BC}}$ values shown are significantly higher than their unprotected ($\\lambda=0$, NP) counterparts in Fig.~\\ref{fig:6a}-\\ref{fig:6c}, consistent\nwith many prior results on the improved performance offered by QAC.\n\nHowever, we are primarily interested here in QAC's effect on the BCP.\nIn this regard, at $s_{\\text{BC}}=0.45$, there is a notable distinction\nbetween different $k$ values, so that QAC amplifies the effect of the BC schedule. \n\nAt $s_{\\text{BC}}=0.50$, QAC-protected BCP still improves over the linear anneal\n($k=0$) in terms of the $P_{\\text{GS}}$ value, but the improvement is less distinct\nfor different $k$ values. 
Moreover, $\\eta'$ is smaller (at most $0.61$) than that for the optimal-$\\lambda$\nlinear QAC anneal in Fig.~\\ref{fig:fm_qac_np} (for which $\\eta'=0.69$),\nsignaling that freezing has already mostly occurred before $s=0.50$,\nand that the optimal $\\lambda$ value is $s_{\\text{BC}}$-dependent.\n\nOur results demonstrate that the combination of BCP\nand QAC as error suppression methods is more powerful in terms of\nthe scaling of the adiabatic distance than either method alone: while\n$k=2$ BCP has $\\eta'=1.1$ [Fig.~\\ref{fig:6a}] and optimal-$\\lambda$ linear\n($k=0$) QAC has $\\eta'=0.69$ (Fig.~\\ref{fig:fm_qac_np}), their combination yields $\\eta'=1.3$ [Fig.~\\ref{fig:6d}] and the largest $P_{\\text{GS}}$ of any of the schedules considered for $t_f=12\\,\\mathrm{\\mu s}$.\n\n\\begin{figure*}\n\\includegraphics[width=1\\textwidth]{figs\/dw2kq6_t_nts_hi_sq} \n\\caption{Number of tries to solution for the T-gadget, comparing the pausing\nprotocols (empty symbols) and $\\beta_{0},\\beta_{1}$ BCP (all with\na ramp at $s_{\\text{BC}}$), along with the linear anneal without a ramp (denoted\n``Lin.''; the same data for all panels), for reference. Results shown are for DW-LN, as a function\nof the total anneal time of each protocol. Different panels have different\nvalues of $s_{\\text{BC}}$, from $0.45$ to $0.55$. The pausing protocol\nis shown for different initial anneal times $t_{0}$ of $1$, $5$,\nand $20\\,\\mathrm{\\mu s}$ (empty symbols), followed by a pause of\nvarying length $t_{p}$ and a $1\\,\\mathrm{\\mu s}$ long ramp (so that\nthe paused portion of the schedule is comparable to BCP). For BCP\n(filled symbols), $t_{a}=t_{f}+1\\,\\mathrm{\\mu s}$. For pausing, $t_{a}=t_{0}+t_{p}+1\\,\\mathrm{\\mu s}$.\nBoth schedules are plotted against $t_{a}$ rather than their natural\nparameters ($t_{f}$ and $t_{p}$ respectively) for direct comparison.\n\\label{fig:t_pau_nts_hi}}\n\\end{figure*}\n\n\\subsection{Comparison of the BCP to the Pause-Ramp protocol}\n\\label{sec:tries}\n\nA different protocol that attempts to exploit slowing down the anneal is the pausing protocol, which interrupts an ordinary linear anneal with a single pause \\cite{marshallPower19,chenWhy20,izquierdo2020ferromagnetically,albash2020comparing,izquierdoAdvantage22}. \nPausing directly uses thermal relaxation at a single point in the anneal \nto try to increase the ground state probability. As in the BCP, this point should also be after \nan avoided level crossing, but before the open system dynamics freeze~\\cite{chenWhy20}. \n\nWe compare the BCP to a variant of the usual pausing schedule~\\cite{marshallPower19} referred to here as the pause-ramp (PR) schedule. While the pausing protocol is typically constructed by interrupting a linear anneal with a pause (but no ramp), PR is constructed by making the first linear segment last for $t_0$, pausing for $t_p$, and ramping to the end. As with the BCP, we use a ramp that is $1\\,\\mathrm{\\mu s}$ long in all cases. 
PR is better suited for comparison with BCP than the usual pausing schedule since they differ only in their equilibrium state preparation and not in their behavior during the frozen phase.\n\nAs our metric for comparison we use not the ground state probability but rather the number of tries-to-solution (NTS) at a fixed anneal time (with 90\\% confidence):\n\\begin{equation}\n\\mathrm{NTS}_{90\\%}(P_{\\text{GS}}) =\n\\frac{\\log(1-0.90)}{\\log(1-P_{\\text{GS}} )}\\ .\n\\label{eq:NTS}\n\\end{equation}\nThis metric is simply another way to study the ground state probability, interpreted as the expected number of tries needed to find the ground state at least once with 90\\% confidence, at the given anneal time (a worked example is given below). We use this rather than the standard time-to-solution (TTS) metric, since the latter requires identifying the optimal anneal time~\\cite{speedup,AlbashDemonstration18,Hen:2015rt}. However, neither the FM-gadget nor the T-gadget exhibited an optimal anneal time in our experiments (not shown). \n\nFigure~\\ref{fig:t_pau_nts_hi} shows the NTS results for the T-gadget\nat various values of $s_{\\text{BC}}$. It compares the standard schedule (no BCP or pausing), the BCP protocol at $k=0$ and $k=1$, and the PR schedule with different initial anneal and pause times. Starting at $s_{\\text{BC}}=0.45$ (top left)\nthe gap is small and thermalization is fast. BCP exhibits two distinct\noptimal anneal times (minima) for $k=0$ and $k=1$, both of which\nresult in slightly smaller NTS than pausing. Also, the $k=1$ case\nis optimal at smaller $t_{a}$ than $k=0$, which can be interpreted\nas an advantage of the higher order protocol. This trend persists\nfor most of the $s_{\\text{BC}}$ values shown. Right at $s_{\\text{BC}}=0.46$, pausing\nachieves its optimal NTS curve, with a slight advantage over BCP,\nand the BCP of each order $k$ is very distinct. However, the advantage\nof pausing is only present at $s_{\\text{BC}}=0.46$, and disappears already\nat $s_{\\text{BC}}=0.47$, showing that pausing performance is highly sensitive\nto the pause point. Since the BCP slowdown is smooth, the influence\nof the optimal thermalization at $s_{\\text{BC}}=0.46$ remains present, although\nthe distinction between BCP orders diminishes as $s_{\\text{BC}}$ grows.\nWell before $s_{\\text{BC}}$ approaches the freeze-out point ($s_0\\approx0.51$), \npausing becomes detrimental for the T-gadget due to excitations (witnessed by the increase in the NTS metric), while BCP mitigates the excitations until the anneal time becomes too long to avoid them.\n\nThese results paint a picture of the two protocols as follows. \nThe relaxation induced by the BCP depends on\nthe properties of the Liouvillian over a \\emph{neighborhood} $(s_{\\text{BC}}-\\epsilon,s_{\\text{BC}}]$.\nIf this neighborhood overlaps with the neighborhood of an avoided\nlevel crossing, then the BCP has an opportunity for an advantage,\nas the speed at which the crossing is traversed is minimized and detrimental\nexcitations to the excited state are reduced. In contrast, PR schedules\nmust try to discontinuously stop the anneal as close as possible to\nthe crossing, and the time required by a paused schedule could be increased\ndue to the longer pause time needed to recover the ground state population,\nunless the precise optimal pause point is quickly found. 
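\n\nAs a brief worked example of Eq.~\\eqref{eq:NTS} (with illustrative numbers rather than measured ones): a protocol reaching $P_{\\text{GS}}=0.5$ at a given anneal time requires $\\mathrm{NTS}_{90\\%}=\\log(0.1)\/\\log(0.5)\\approx3.3$ tries, whereas $P_{\\text{GS}}=0.9$ requires exactly one; modest gains in the ground state probability therefore translate directly into fewer repetitions at a fixed anneal time.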
\n\nThis suggests an important practical point: optimization\nof the BCP can be simpler than that of PR schedules, since only one discrete parameter, $k$, and one continuous parameter, $s_{\\text{BC}}$, need to be optimized. Moreover, $s_{\\text{BC}}$ need not be exactly\ntuned, since even an approximate value can allow the BCP to\nclosely approach its optimal performance. On the other hand, pausing requires\n$s_{\\text{BC}}$ to be optimized very precisely, as well as the anneal and pause\ntimes $t_{0}$ and $t_{p}$.\n\n\\section{Discussion and Outlook}\n\\label{sec:discussion}\n\nWe have extended the boundary cancellation theorem for open systems\nto the case where the Liouvillian gap vanishes at the end\nof the anneal, and derived the asymptotic scaling of the adiabatic\nerror with the anneal time $t_{f}$. Armed with the corresponding theoretical\nexpectation of the scaling of the adiabatic error for the gapped and gapless\ncases, we set out\nto test the scaling predictions and to improve the success probability of quantum\nannealing hardware by implementing boundary cancelling schedules. The\nspecific functional form of these schedules induces a smooth slowdown\nin accordance with the boundary cancellation theorem. We experimentally tested boundary\ncancellation protocols for open systems and\nevaluated their performance and error-suppression characteristics on specifically designed\n$8$-qubit gadgets embedded on the DW-LN annealer.\n\nWhile a quantitative agreement with the theoretical predictions was not observed, we did demonstrate that as long as the protocol terminates before the onset of freezing, it can increase the ground state population in the examples studied here beyond what is achievable with simple linear anneals, and it does so with shorter anneal times. These results are in qualitative agreement with the theoretical scaling predictions of the BCT.\n\nIn conjunction with quantum annealing correction (QAC), the boundary\ncancellation protocol is also capable of improved adiabatic error\nscaling over what would be achieved with either method alone. While\nthis does not immediately translate to a ground state solution speedup\nwithin the annealing problems studied here, we have shown that BCP-QAC\nis a novel error suppression strategy successfully combining two complementary\nmethods: the suppression of environmentally induced logical errors\nand the promotion of relaxation via boundary cancellation.\n\nIn contrasting the BCP with the pause-ramp protocol, we found that the BCP is significantly less sensitive to the location of the ramp point, and achieves better performance except at the exact ramp point where the pause-ramp protocol is optimal.\n\nWith the small system size of $8$ qubits used in this work, it was\npossible to collect a large number of annealing samples as well as\nvalidate the protocol behavior against the energy spectrum and open\nsystem simulations. Future work will assess the protocol for larger\nsystem sizes and the impact it has on the scaling of time-to-solution\nas a function of problem size. 
The largest expected improvement in the protocol's performance, based on our simulations, will arise from an increase in the number and resolution of interpolation points along the annealing schedule, which will allow a more faithful experimental implementation of the ideally smooth annealing schedules demanded by the theoretical protocol.\n\n\\acknowledgments This research is based upon work (partially) supported\nby the Office of the Director of National Intelligence (ODNI), Intelligence\nAdvanced Research Projects Activity (IARPA) and the Defense Advanced\nResearch Projects Agency (DARPA), via the U.S. Army Research Office\ncontract W911NF-17-C-0050. The views and conclusions contained herein\nare those of the authors and should not be interpreted as necessarily\nrepresenting the official policies or endorsements, either expressed\nor implied, of the ODNI, IARPA, DARPA, ARO, or the U.S. Government.\nThe U.S. Government is authorized to reproduce and distribute reprints\nfor Governmental purposes notwithstanding any copyright annotation\nthereon. The authors acknowledge the Center for Advanced Research\nComputing (CARC) at the University of Southern California for providing\ncomputing resources that have contributed to the research results\nreported within this publication. URL: \\url{https:\/\/carc.usc.edu}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\\IEEEPARstart{B}{y} 2021, annual global data centre network (DCN) traffic will reach $20.6\\times 10^{21}$ bytes, 90\\% of which will be intra-DCN \\cite{Cisco2018}. Additionally, the proportion of requests being serviced by central processing units (CPUs) is expected to decrease from 75\\% today to 50\\% in 2025 as specialised bandwidth-hungry hardware is installed to enable new machine learning applications \\cite{McKinsey2019}. Furthermore, the increasingly common approach of clustering compute resources for large-scale data processing is requiring more network-intensive server-server communication \\cite{Andreades2019}. These trends are exerting a growing strain on internal DCNs, in which many of the interconnects are electronic switches. Electronic switches have limited scalability, limited bandwidth, high latency and high power consumption \\cite{Zervas2019}, \\cite{Wang2018}. As such, switching is presenting a problematic bottleneck for DCN performance, and current network architectures are unfit to meet next-generation DCN requirements. \n\nOptical switches offer the potential to alleviate many of these network performance issues. With an optical circuit switch (OCS) implementation, there is no packet inspection, buffering, or optical-electrical-optical (OEO) conversion overhead, therefore latency times are significantly lower \\cite{Liu2015}. They also have much higher bandwidth, allowing more servers to be connected to the same switch without increasing oversubscription-related buffering, thus improving scalability. 
Furthermore, the lack of OEO conversion, the transparency to signal modulation format, and the lower heat generation reduce the number of expensive transceiver components needed, the hardware changes required when new transmission protocols are adopted, and the overall network power consumption, respectively. The latter is particularly important since networking can account for $>$50\\% of the \\$20 bn annual DCN power costs, with CO$_2$ emissions equal in volume to those of the entire aviation industry \\cite{Abts2010}. In addition, optical switches have a more compact physical design than their electronic counterparts, allowing for a smaller footprint in DCs.\n\nThe difficulty of implementing all-optical DCN switching derives from the bursty nature of most DCN traffic and the lack of an all-optical memory alternative. Since no all-optical memory or processor architectures exist, current DCN packet-switched protocols cannot be implemented with an exclusively optical network architecture based on all-optical switches, because header information must be processed and payload information stored on a per-hop basis. An alternative to packet switching is circuit switching, which is possible with an all-optical architecture. However, current state-of-the-art commercial optical switches have slow (100s $\\mu$s) switching times. Such long switching times are not compatible with the small data packets that dominate DCN traffic $(90\\% < 576 \\text{ bytes})$ \\cite{Zervas2019}, since the switching time would be comparable to or greater than the forwarding time, making for an inefficient network.\n\nFor optical circuit switching (OCS) to be compatible with current DCN demands, it must be possible to switch circuits at the packet timescale \\cite{Zervas2019}, \\cite{Balanici2019}. This requires minimal switching overhead when switching for epochs of the order of 10s-100s of ns.\n\nA promising candidate for realising such a high-speed switch is the semiconductor optical amplifier (SOA). SOAs can be used for either space switching or wavelength switching due to their high and relatively flat optical gain bandwidth. Further benefits of SOAs over other potential optical switching technologies such as MEMS or holograms include fast inherent switching times (theoretically limited only by their $\\approx 100$ ps carrier recombination lifetimes \\cite{Huang2003}), high extinction\/optical contrast ratio, and relatively compact design, making them ideal for low latency-, scalability-, and footprint-constrained DCN applications \\cite{Assadihaghi2010}.\n\nThe sub-ns off-on time of SOAs allows for an SOA-based optical switch architecture that avoids the issues presented by the lack of all-optical memory\/processor alternatives discussed above. This SOA-based OCS solution is generally simpler and better-performing than others suggested by the literature such as optical loop memory \\cite{Srivastava2009}, optical burst switching (OBS) \\cite{Qiao2004}, \\cite{Praveen2005}, \\cite{Kiran2007} and hybrid optical packet switching (OPS) \\cite{Benjamin2017}, \\cite{Wang2018}. However, SOAs have an intrinsic optical overshoot and oscillatory response to electronic drive currents due to carrier density variations and spontaneous emission in the gain region \\cite{Paradisi2019}. 
As demonstrated in this paper, the overshoot and oscillatory optical output negate the key advantage of SOA switching (rapid switching times), preventing sub-ns switching.\n\nA previous attempt to optimise SOA output applied a `pre-impulse step injection current' (PISIC) driving signal to the SOA \\cite{Gallep2002}. This PISIC signal pre-excited carriers in the SOA's gain region, increasing the charge carrier density and the initial rate of stimulated emission to reduce the 10\\% to 90\\% rise time from 2 ns to 500 ps. However, this technique only considered the rise time when evaluating SOA off-on switching times. A more accurate off-on time is given by the settling time, which is the time taken for the signal to settle within $\\pm 5\\%$ of the `on' steady state. Before settling, bits experience a variable signal-to-noise ratio, which degrades the bit error rate (BER) and makes the signal unusable; the switch is therefore effectively `off' during this period.\n\nA later paper applied a `multi-impulse step injection current' (MISIC) driving signal to remedy the SOA's oscillatory and overshoot behaviour \\cite{Figueiredo2015}. As well as a pre-impulse, the MISIC signal included a series of subsequent impulses to balance the oscillations, reducing the rise time to 115 ps and the overshoot by 50\\%. However, the method for generating an appropriate pulse format was trial-and-error. Since each SOA has slightly different properties and parasitic elements, the same MISIC format cannot be applied to different SOAs; a different format must therefore be generated through this inefficient manual process for each SOA, of which there will be thousands in a real DC. As such, MISIC is not scalable. Critically, the MISIC technique did not consider the settling time, so the effective off-on switching time was still several ns.\n\nMore recent work expands on the driving signal modification shown in \\cite{Figueiredo2015}. \\cite{reviewer_reference_1} applies the MISIC signal detailed in \\cite{Figueiredo2015}, but in addition applies a Wiener filter, whose weight coefficients are optimised to minimise the mean squared error (MSE) between the SOA output and a target determined by the steady-state value of the SOA response. The work accomplishes a roughly 60\\% reduction in guard time, with the goal of reducing the guard time as much as possible such that the BER of the output does not exceed a particular level. While the objective of that work (reducing guard time subject to BER guarantees) is different to that of this work (minimising the settling time of the SOA output), and thus direct comparison is difficult, it is interesting to acknowledge this analogous approach of MSE \\& weight optimisation for optimising the output of an SOA.\n\nSimilarly, \\cite{reviewer_reference_2} explores the optimisation of an SOA by means of both modification of the driving signal and optimisation of the SOA's microwave mounting. A best case of a 33\\% reduction in guard time is accomplished with an improved microwave mounting architecture and a step driving signal, where various MISIC and PISIC driving signals were also tested. 
This work demonstrates that significant improvements in guard time can be derived exclusively from improvements to the microwave mounting of the SOA (something not dealt with in this paper), and that improving the SOA's output by optimising the driving signal does not preclude simultaneously improving it by optimising the microwave mounting. The results are therefore complementary to those presented in this work, which improves the SOA output purely by means of driving-signal optimisation. It is speculated here that the optimisation of the SOA's driving signal by the methods presented in this work, combined with the optimisation of its microwave mounting, could achieve greater improvements in its output than seen in either \\cite{reviewer_reference_2} or this work.\n\nThe solutions discussed so far share a common design flow: a heuristic is first devised manually for a simplified model of an SOA, and is then meticulously tested and tuned until good real-world performance is achieved. If some aspect of the problem is changed, such as the SOA type used or the desired shape of the output signal, this process must be repeated. \n\nThis paper presents a novel and scalable approach to optimising the SOA driving signal in an automated fashion with artificial intelligence (AI) techniques, namely `Particle Swarm Optimisation' (PSO), `Ant Colony Optimisation' (ACO) and `Genetic Algorithms' (GA) \\cite{Mata2018}. These algorithms were chosen on the basis that they had previously been applied to proportional-integral-derivative (PID) tuning in control theory \\cite{7873803}. Moreover, AI techniques offer the benefit of not requiring prior knowledge of the SOA and therefore provide a means of developing an optimisation method that is generalisable to any SOA-based switch. All algorithms were shown to reduce the settling and rise times to the $O$(100 ps) scale. The algorithms' hyperparameters were tuned in an SOA equivalent circuit (EC) simulation environment and their efficacy was demonstrated in an experimental setup. AI performance was compared to that of step, PISIC and MISIC driving signals as well as the popular raised cosine and PID control approaches to optimising oscillating and overshooting systems, all of which the AI algorithms outperformed. Of the AI algorithms, PSO was found to have both the best performance and generalisability due to the additional hyperparameters and search space restrictions that were required for GA and ACO. All code and plotted data are freely available at \\cite{parsonson_shabka_chlupka_2020} and \\cite{parsonson_shabka_chlupka_goh_zervas_2020}, respectively.\n\n\\section{Simulation}\n\nSOAs are typically modelled using simple rate equations. However, as shown in \\cite{Ghafouri-Shiraz2004}, the electrical parasitics of an SOA and its surrounding packaging degrade optical signals by broadening the output optical pulse width, reducing the peak optical power (thereby reducing optical contrast), and causing a slight time delay in the emitted optical pulse. Additionally, they alter the relaxation frequency of the SOA output oscillations. As such, modelling the electrical parasitics was crucial to building a simulation environment in which to optimise switching. 
As described in \\cite{Ghafouri-Shiraz2004}, \\cite{Figueiredo2011}, and \\cite{Tucker1984}, assuming a small circuit model, microwave ECs can be used to more accurately simulate semiconductor diodes by accounting for these electrical parasitics. Therefore, ECs were the chosen approach to SOA modelling for this paper. \n\nSince at low voltages ($<0.8 V$) the current ($I$) - voltage ($V$) relationship can be described by (\\ref{eq:IVRelationship}) ($q$~=~charge, $K_b$~=~Boltzmann constant, $T$~=~temperature), the ideality factor $\\eta$ and the saturation current $I_s$ could be calculated as 1.59 and $3.48 \\times 10^{-11}$ A respectively using the semi-logarithmic $I$-$V$ curve of the SOA in Fig.~\\ref{fig:soaIVCharacteristics}. Using $\\eta$, $I_s$ and the internal and external SOA constants taken from the literature for a typical silicon laser diode \\cite{Ghafouri-Shiraz2004}, \\cite{Figueiredo2011}, the SOA was modelled below the current threshold $I_{TR}$ (2-50 mA) and above $I_{TR}$ (70-110 mA).\n\n\n\\begingroup\n\\begin{equation} \\label{eq:IVRelationship}\nln\\big(I\\big) = ln\\big(I_s\\big) + \\bigg(\\frac{1}{\\eta}\\bigg) \\bigg(\\frac{q V}{K_b T}\\bigg)\n\\end{equation}\n\\endgroup\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.23]{Figures\/parso1.png}\n\\caption{Semi-logarithmic I-V plot for the SOA used to calculate $\\eta$ and $I_s$.}\n\\label{fig:soaIVCharacteristics}\n\\end{figure}\n\n\nThe SOA in the experimental setup had the optimum trade-off between gain and signal noise at a bias current of 70 mA, therefore the simulated SOA was biased at this current. Using Matlab's Simulink tool, a transfer function (TF) for the SOA EC was obtained and simplified as shown in (\\ref{eq:transferFunction}) with the constants defined in Table~\\ref{tab:transfer_func_table}. This allowed for custom drive signals to be generated, sent to the biased SOA EC and an optical output measured. \n\n\n\\begin{equation} \\label{eq:transferFunction}\n\\begin{split}\nTF &= \\frac{2.01\\times 10^{85}} {\\sum_{i=0}^9a_is^i }\n\\end{split}\n\\end{equation}\n\n\n\\begin{table}[]\n \\caption{Constants used in EC transfer function.}\n \\label{tab:transfer_func_table}\n \\centering\n \\tabcolsep 0.1in\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $a_9$ & 1.65 & $a_4$ & $1.37\\times 10^{52}$\\\\\n \\hline\n $a_8$ & $4.56\\times 10^{10}$ & $a_3$ & $2.82\\times 10^{62}$\\\\\n \\hline\n $a_7$ & $3.05\\times 10^{21}$ & $a_2$ & $9.20\\times 10^{71}$\\\\\n \\hline\n $a_6$ & $4.76\\times 10^{31}$ & $a_1$ & $1.69\\times 10^{81}$\\\\\n \\hline\n $a_5$ & $1.70\\times 10^{42}$ & $a_{0}$ & $2.40\\times 10^{90}$ \\\\\n \\hline\n\n \\end{tabular}\n\n\\end{table}\n\nIn the experimental setup (described later), an arbitrary waveform generator (AWG) with 12 GSPS sampling frequency was used allowing for signal bit windows of 83.3 ps, therefore for 20 ns time periods each signal had 240 points. As such, the optimisation algorithms were searching for a solution in a 240-dimensional search space. Additionally, the oscilloscope had 8-bit resolution, therefore each dimension in the solution could take one of 256 values. The EC simulation environment enabled different driving signals to be rapidly tested.\n\nThe constants used for the EC model were taken from the literature (\\cite{Ghafouri-Shiraz2004}, \\cite{Figueiredo2011}, and \\cite{Tucker1984}). 
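\n\nTo illustrate how the EC environment and an AI optimiser fit together, the following minimal Python sketch drives the transfer function of Eq.~\\eqref{eq:transferFunction} (Table~\\ref{tab:transfer_func_table}) with candidate 240-point driving signals and refines them with a basic particle swarm; the hyperparameters, the plain MSE fitness against an ideal step, and the time rescaling to ns units (for numerical conditioning) are illustrative choices rather than the tuned values used in this work:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import lti, lsim\n\n# Table I coefficients a_9 ... a_0 of Eq. (2)\na = [1.65, 4.56e10, 3.05e21, 4.76e31, 1.70e42,\n     1.37e52, 2.82e62, 9.20e71, 1.69e81, 2.40e90]\n# Rescale time to ns (s -> 1e9*s) so the state-space\n# conversion is better conditioned.\nden = [c * 1e9**(9 - i) for i, c in enumerate(a)]\nsoa = lti([2.01e85], den)\n\nT = np.linspace(0.0, 20.0, 240)  # 20 ns, 83.3 ps bins\ntarget = np.ones(240)            # ideal instantaneous step\n\ndef cost(drive):  # MSE of normalised output vs ideal step\n    _, y, _ = lsim(soa, drive, T)\n    return np.mean((y \/ y[-1] - target)**2)\n\nrng = np.random.default_rng(0)\nw, c1, c2 = 0.7, 1.5, 1.5        # illustrative PSO constants\nx = rng.uniform(0.0, 1.0, (30, 240))  # 30 candidate signals\nv = np.zeros_like(x)\npbest = x.copy()\npcost = np.array([cost(p) for p in x])\ngbest = pbest[pcost.argmin()].copy()\nfor _ in range(50):\n    r1, r2 = rng.random(x.shape), rng.random(x.shape)\n    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)\n    x = np.clip(x + v, 0.0, 1.0)  # normalised drive range\n    f = np.array([cost(p) for p in x])\n    m = f < pcost\n    pbest[m], pcost[m] = x[m], f[m]\n    gbest = pbest[pcost.argmin()].copy()\n\\end{verbatim}\nThe global best then plays the role of the optimised 240-point driving signal; in the experimental setting the cost would instead be evaluated on the measured SOA output.\n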
The difficulty with SOA modelling, and subsequently also SOA switching, is that there are many variables whose values are difficult to experimentally measure, and which vary significantly even for same-specification SOAs due to parasitics introduced during manufacturing and packaging. Re-measuring these constants for a new SOA would be cumbersome, difficult, and unfruitful since broad assumptions would still need to be made. Furthermore, scaling this bespoke modelling to 1,000s of SOAs in a single DC would be unrealistic. As such, analytical solutions to SOA switching are not beneficial. Additionally, different driving circuit setups with different amplifiers, bias tees, cabling etc. influence the shape of the driving signal that arrives at the SOA, thereby (if the methods described before this paper are used) requiring more manual tuning every time the equipment surrounding the SOA is changed. This highlights the need for the partially `model-free' AI approaches proposed in this paper, which neither make nor require any assumptions about the SOA or the surrounding driving circuit they are optimising, resulting in their optimised driving signals being superior both in terms of performance and scalability relative to traditional analytical and\/or manual methods. Here, we borrow the term `model-free' from the field of reinforcement learning, meaning an algorithm that does not initially know anything about the environment in which it must perform its optimisation \\cite{10.5555\/3312046}.\n\nFig.~\\ref{fig:freq_response} compares the frequency response of the theoretical TF with that of the experimental SOA. The TF had a -3 dB bandwidth of 0.5 GHz (around 700 ps rise time) compared to the experimental SOA's 0.6 GHz (around 550 ps rise time). These values were similar to one another and consistent with both the theoretical and experimental optical responses. The differences between the responses were due to the use of EC parameters from the literature, which do not exactly match those of our SOA.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.4]{Figures\/parso2.png}\n\\caption{Frequency responses of the theoretical transfer function (TF) and the experimental SOA (Exp).}\n\\label{fig:freq_response}\n\\end{figure}\n\n\\begin{table}[]\n  \\caption{Factor(s) used on the EC transfer function coefficients to simulate different SOAs (factor = 1 unless stated otherwise).}\n  \\label{tab:diff_transfer_func_table_of_factors}\n  \\centering\n  \\tabcolsep=0.11cm\n  \\begin{tabular}{|c|c|c|c|c|}\n    \\hline\n    \\textbf{TF Component:} & Numerator & $a_0$ & $a_1$ & $a_2$ \\\\\n    \\hline\n    \\textbf{Factor(s)}: & 1.0, 1.2, 1.4 & 0.8 & 0.7, 0.8, 1.2 & 1.05, 1.1, 1.2 \\\\\n    \\hline\n  \\end{tabular}\n\\end{table}\n\n\\section{Conclusion}\n\nSimulation and experimental results of SOA off-on switching were presented for various driving signal formats. The paper outlined a novel approach to SOA driving signal generation with AI algorithms which made no assumptions about the SOA and therefore were general, required no historic data collection and could be scaled to any SOA-based switch, opening up the possibility of rapid all-optical switching in real data centres. 
Experimental settling times (and therefore effective off-on times) of 547 ps were achieved using PSO, offering an order of magnitude performance improvement with respect to settling time over our implementation of the PISIC and MISIC techniques from the literature. Additionally, the standard PID control and raised cosine techniques from control theory were shown to be inadequate for the problem of ultra-fast SOA switching. Although ACO and GA demonstrated slightly faster rise times than PSO, PSO had a faster settling time and also a significantly lower 1.8\\% cost spread, giving greater confidence that any given PSO run had found the optimum solution. Furthermore, due to the fewer restrictions placed on the search space and the lower number of fine-tuned hyperparameters compared to ACO and GA, PSO was found to be easier to generalise to unseen SOAs. Future work expanding on the presented methods could examine the robustness of the method with respect to hardware limitations\/irregularities (e.g. temperature\/bias current variations, bit resolution of driving voltage values or the sampling frequency of the AWG). Additionally, future work could extend the method to a scenario consisting of multiple, possibly cascaded, SOAs.\n\n\n\\section{Experimental Setup}\n\nThe experimental setup is shown in Fig.~\\ref{fig:experimentalSetup}. An INPhenix-IPSAD1513C-5113 SOA with a 3dB bandwidth of 69 nm, a small signal gain of 20.8 dB, a 0-140 mA bias current range, a saturation output power of 10 dBm, a response frequency of 0.6 GHz, and a noise figure of 7.0 dB was used. An SHF 100 BP RF amplifier was selected by calculating the amplified MSE relative to the direct signal for different amplifiers, enabling a full dynamic range peak-to-peak voltage of 7V. A $50\\Omega$ resistor was placed before the SOA, allowing the full 140 mA dynamic current range to be applied across the SOA.\n\nThe 70 mA optimum SOA bias current was found by measuring how MSE, optical signal-to-noise ratio (OSNR), rise time, overshoot, and optical gain varied with current. A 70 mA bias using a -2.5 dBm SOA input laser power produced the lowest rise time and MSE. The SOA was therefore driven between 0 and 140 mA, centred at 70 mA. The other equipment used included a Lightwave 7900b lasing system, an Agilent 8156A optical attenuator, an LDX-3200 Series bias current source, a Tektronix 7122B AWG with 12 GSPS sampling frequency, an Anritsu M59740A optical spectrum analyser (OSA), and an Agilent 86100C oscilloscope (OSC) with an embedded photodiode. The RF signal going into the SOA had a rise time of 180 ps; this was therefore the best possible rise time (and settling time) that the SOA could have achieved. Throughout the experiments, a wavelength of 1,545 nm was used.\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.35]{Figures\/parso16.png}\n\\caption{Diagram of the SOA experimental setup used.}\n\\label{fig:experimentalSetup}\n\\end{figure}\n\n\n\\section{Experimental Results}\n\n\\begin{table*}[h]\n\\begin{center}\n \\tabcolsep 0.1in\n \\caption{\n Comparison of SOA Optimisation Techniques. 
(Best in bold).}\n \\begin{itemize}\n \\item \\scriptsize $^{a}$ Though exact value not reported in \\cite{reviewer_reference_2}, it is referred to as being `below 500 ps'.\n \\item \\scriptsize $^{b}$ Comparison of the ASM mounting against the commercial STF mounting.\n \\item \\scriptsize $^{c}$ Exact value not reported in \\cite{reviewer_reference_2} so percentage improvement is (approximately) inferred from a graph presented in \\cite{reviewer_reference_2}. Comparison made at bias current value corresponding to the best case performance of the best performing ASM mount + drive combination and is compared against the STF mount + drive at the same bias and for the same drive (step was best performing in the reported metrics).\n \\item \\scriptsize $^{d}$ Comparison is made between the best and worst cases presented in \\cite{reviewer_reference_1}.\n \\item \\scriptsize $^{e}$ Several variants of the `MISIC' format were tested in \\cite{Figueiredo2015} and the best is used here for comparison.\n \\item \\scriptsize $^{f}$ Comparison made with respect to the performance of the STEP driving signal presented in \\cite{Figueiredo2015}.\n \\end{itemize}\n \n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{Method \\\\ (Technique)}}\n & \\textbf{Reference}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{Rise Time, ps \\\\ (Reduction, \\%)}}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{Settling Time, ps \\\\ (Reduction, \\%)}}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{Overshoot, \\% \\\\ (Reduction, \\%)}}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{Guard Time, ps \\\\ (Reduction, \\%)}}\n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{PSO \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 454 ps \\\\ (35\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{547 ps \\\\ (85\\%)}} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 5\\% \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{ACO \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 413 ps \\\\ (41\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 560 ps \\\\ (85\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 4.8\\% \\\\ -}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{GA \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{340 ps \\\\ (51\\%)}} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 825 ps \\\\ (78\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 10.3\\% \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{PISIC \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 502 ps \\\\ (28\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 4350 ps \\\\ (-17\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 40.5\\% \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{MISIC1 \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 502 ps \\\\ (28\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 4020 ps \\\\ (-8\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering undershot \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n 
\\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{Raised Cosine \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 921 ps \\\\ (-32\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 4690 ps \\\\ (-26\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering undershot \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{PID Control \\\\ (Signal Optimisation)}}\n & This work \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 501 ps \\\\ (28\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 4020 ps \\\\ (-8\\%)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 2.3\\% \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{ASM Mounting + STEP Drive \\\\ (Microwave Mounting Optimisation)}} \n & \\cite{reviewer_reference_2} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{$\\approx$ 5\\% $^{[c]}$ \\\\ ($\\approx$ 75\\% $^{[b,c]}$)}}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering $\\approx$ 500 ps $^{[a]}$ \\\\ ($\\approx$ 33\\% $^{[b,c]}$)} \n \\\\\\hline\n \n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{STEP Drive + Wiener Filtering \\\\ (Signal Optimisation + Filtering)}} \n & \\cite{reviewer_reference_1} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering \\textbf{286 ps} \\\\ \\textbf{(60\\% $^{[d]}$)}} \n \\\\\\hline\n \n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{PISIC Drive \\\\ (Signal Optimisation)}} \n & \\cite{Figueiredo2015} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 115 ps \\\\ (34\\% $^{[f]}$)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 25\\% \\\\ (-56\\% $^{[f]}$)}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\multicolumn{1}{|p{3cm}|}{\\centering \\textbf{MISIC-6 Drive $^{[e]}$ \\\\ (Signal Optimisation)}} \n & \\cite{Figueiredo2015} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 115 ps \\\\ (34\\% $^{[f]}$)} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n & \\multicolumn{1}{|p{2.2cm}|}{\\centering 12.5\\% \\\\ (22\\% $^{[f]}$)}\n & \\multicolumn{1}{|p{2.2cm}|}{\\centering - \\\\ -} \n \\\\\\hline\n \\end{tabular}\n \\label{tab:review_comparison_table}\n \\end{center}\n\\end{table*}\n\n\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.26]{Figures\/parso17.png}\n\\caption{Experimental SOA responses to the step, PISIC, MISIC1, raised cosine and PID driving signals.}\n\\label{fig:expSOAResponses}\n\\end{figure}\n\nIn this section the experimental results for the SOA responses to step, PISIC, MISIC, raised cosine, PID and AI driving signals have been compared. The objective was to reduce the off-on switching time and power oscillations (measured by the settling time and overshoot metrics).\n\nA step driving signal was the simplest format used to drive the SOA. Fig.~\\ref{fig:expSOAResponses} (which has been normalised with respect to the steady state value as done in \\cite{Figueiredo2015} for easy comparison) shows the SOA optical response to a step driving signal, resulting in a rise time, settling time and overshoot of 697 ps, 3.72 ns and 0.0\\% (since it undershot the steady state) respectively. 
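\n\nThroughout this section, the figures of merit are extracted from the sampled oscilloscope traces. A minimal Python sketch of the definitions used (ours; the 10-90\\% rise time and the $\\pm5\\%$ settling band follow the conventions of this paper, while details such as the steady-state estimator and referencing the settling time to the start of the trace are our simplifications) is:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef step_metrics(t, y):\n    # t, y: numpy arrays; y is an off-on optical trace starting near zero.\n    # Returns 10-90% rise time, settling time into the +\/-5% band around\n    # the 'on' steady state, and percentage overshoot.\n    yss = np.mean(y[-len(y) \/\/ 10:])        # steady state from trace tail\n    rise = t[np.argmax(y >= 0.9 * yss)] - t[np.argmax(y >= 0.1 * yss)]\n    outside = np.abs(y - yss) > 0.05 * yss\n    settle = t[np.nonzero(outside)[0][-1]] - t[0] if outside.any() else 0.0\n    overshoot = 100.0 * max(0.0, y.max() - yss) \/ yss\n    return rise, settle, overshoot\n\\end{verbatim}\n\nThese are the conventions assumed whenever rise time, settling time and overshoot are quoted below.\n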
\n\nThe PISIC format proposed by \\cite{Gallep2002} was applied to the SOA with a 2.95V step + 4.05V impulse, and the response is shown in Fig.~\\ref{fig:expSOAResponses} with a rise time, settling time and overshoot of 502 ps, 4.35 ns and 40.5\\% respectively. The form of the PISIC pulse was optimised for the SOA in use: different step-impulse voltage combinations were tested (as done in \\cite{Gallep2002}), as well as varying widths of the pre-impulse section of the PISIC signal as a percentage of the total signal length, centred at the percentage used in \\cite{Gallep2002}. It was found that a 500 ps pulse width gave the best results.\n\nThe MISIC 1-6 bit-sequences proposed by \\cite{Figueiredo2015} were applied with a 2.95V step + 4.05V impulse, where the same step-impulse voltage combinations were tested as for PISIC. The format with the best performance was MISIC1, whose response is shown in Fig.~\\ref{fig:expSOAResponses} with a rise time, settling time and overshoot of 502 ps, 4.02 ns and 0.0\\% (undershot) respectively. \n\nA popular approach to optimising oscillating systems in control theory is the raised cosine approach, whereby the rising step for a signal of period $T$ is adapted to a rising cosine defined by the frequency-domain piecewise function in (\\ref{eq:raisedCosine}). As $\\beta$ increases ($0 \\leq \\beta \\leq 1$), the rate of signal rise decreases. The best performing raised cosine was $\\beta = 0.5$, whose response is shown in Fig.~\\ref{fig:expSOAResponses} and whose rise time, settling time and overshoot were 921 ps, 4.69 ns and 0.0\\% (undershot) respectively. \n\n\\begingroup\n\\small\n\\begin{equation} \\label{eq:raisedCosine}\nH(f) = \\begin{cases}\n 1, & \\text{if $f \\leq \\frac{1-\\beta}{2T}$} \\\\\n \\frac{1}{2} \\left[ 1 + \\cos \\left( \\frac{\\pi T}{\\beta} \\left[f - \\frac{1-\\beta}{2T} \\right] \\right) \\right], & \\text{if $ \\frac{1-\\beta}{2T} < f \\leq \\frac{1+\\beta}{2T} $} \\\\\n 0, & \\text{otherwise}\n \\end{cases}\n\\end{equation}\n\\endgroup\n\nAnother popular approach in control theory is the PID controller. The optical response of the PID control signal is shown in Fig.~\\ref{fig:expSOAResponses}, with a rise time, settling time and overshoot of 501 ps, 4.02 ns and 2.3\\% respectively. In order to quickly obtain values for the three PID parameters, $K_c$, $K_i$ and $K_d$, a First Order Plus Dead Time (FOPDT) model was applied to the SOA, where the key parameters for this model ($K_p$, $\\tau_p$ and $\\theta_p$) can be measured directly from the step response of the device. The PID tuning parameter, $\\tau_c$, which is inversely proportional to the magnitude of the response to offset, was tested with values between that of an `aggressive' tuning regime $(\\tau_c \\approx 0.1)$ and a `conservative' one $(\\tau_c \\approx 10.0)$. The results shown in Fig.~\\ref{fig:expSOAResponses} are with $\\tau_c = 5.0$, which was found to be the best performing value.\n
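\nOf the two control-theory baselines, the raised cosine is fully specified by (\\ref{eq:raisedCosine}), and a direct transcription into Python (ours; vectorised with NumPy, valid for $\\beta>0$) reads:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef raised_cosine(f, beta, T):\n    # Frequency response of the raised-cosine equation; needs 0 < beta <= 1\n    f = np.abs(np.asarray(f, dtype=float))\n    f1 = (1.0 - beta) \/ (2.0 * T)\n    f2 = (1.0 + beta) \/ (2.0 * T)\n    roll = 0.5 * (1.0 + np.cos(np.pi * T \/ beta * (f - f1)))\n    return np.where(f <= f1, 1.0, np.where(f <= f2, roll, 0.0))\n\\end{verbatim}\n\nFiltering an ideal step with this response (e.g. multiplying its spectrum by $H(f)$ and inverse transforming) produces the $\\beta$-dependent rising edge tested above.\n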
\n\nThe PSO algorithm used in the simulation environment was applied to the real SOA. The SP and the PSO response are shown in Fig.~\\ref{fig:experimentalResults}, with a rise time, settling time and overshoot of 454 ps, 547 ps and 5.0\\% respectively.\n\n\n\n\\begin{figure*}[!t]\n\\centering\n\n \\begin{tabular}{ccc}\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso18.png} \n \\put(15, 68){(a)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso19.png}\n \\put(15, 68){(b)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso20.png} \n \\put(15, 68){(c)}\n \\end{overpic}\n \\end{tabular}\n\n\\caption{Experimental results showing the optimised SOA optical outputs for (a) PSO, (b) ACO, and (c) GA.}\n\\label{fig:experimentalResults}\n\\end{figure*}\n\n\n\\begin{figure*}[!t]\n\\centering\n\n \\begin{tabular}{ccc}\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso21.png} \n \\put(15, 68){(a)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso22.png}\n \\put(15, 68){(b)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso23.png} \n \\put(15, 68){(c)}\n \\end{overpic}\n \\end{tabular}\n\n\\caption{Experimental results showing the optimised SOA electrical driving signal inputs for (a) PSO, (b) ACO, and (c) GA.}\n\\label{fig:ai_optimised_ops}\n\\end{figure*}\n\n\nAn ACO run with 200 ants achieved a rise time, settling time and overshoot of 413 ps, 560 ps and 4.8\\% respectively, performing similarly well to the PSO algorithm. The ACO result is shown in Fig.~\\ref{fig:experimentalResults}.\n\nSimilarly, the GA result shown in Fig.~\\ref{fig:experimentalResults} had a rise time, settling time, and overshoot of 340 ps, 825 ps, and 10.3\\% respectively. The rise times of the AI algorithms were a marked improvement on the step's, and the settling times (and therefore the effective off-on switching times) were several factors faster than the previous MISIC1 optimum from the literature, bringing SOA switching times truly down to the hundred-picosecond scale. A scatter plot comparing these data is shown in Fig.~\\ref{fig:Standard_3D_Scatter}. \n\nBy comparison, PSO had the lowest settling time and therefore the lowest overall switch time. We hypothesise that this was because PSO struck the best search space-hyperparameter trade-off: it was less memory-hungry than ACO, so its search space did not need to be as strongly discretised, and it had fewer hyperparameters to fine-tune than GA, with the PISIC shell keeping its search space tractable. This comparatively large yet tractable search space enabled PSO to explore a wide variety of drive signal solutions without a large number of tuned hyperparameters (which add complexity), allowing PSO to generalise to a more diverse set of SOAs than either ACO or GA. Therefore, although in theory all AI algorithms used were powerful and generalisable, due to the number of hyperparameters and search space restrictions that were required in practice, PSO had both the best performance and generalisability, although GA came close to matching PSO.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.26]{Figures\/parso24.png}\n\\caption{Scatter plot comparing the experimental rise times, settling times and overshoots of all the driving signals tested. 
The outlined target region highlights the performance required for truly sub-nanosecond optical switching.}\n\\label{fig:Standard_3D_Scatter}\n\\end{figure}\n\n\nTable \\ref{tab:review_comparison_table} shows results (both absolute, and relative improvement for cross-comparison) of the rise time, settling time, overshoot and guard time for all methods implemented in this work, as well as a variety from the literature. The rows citing \\cite{Figueiredo2015} report the optimised PISIC and MISIC-6 signals as defined and implemented in that work. These are distinct from the rows labelled `This work' for the `PISIC' and `MISIC' methods, which are our re-implementation of the methods described in \\cite{Figueiredo2015}, applied to and optimised for a different experimental setup.\n\n\n\nFinally, Fig.~\\ref{fig:ai_optimised_ops} shows the electrical drive signals found by each algorithm. Whilst we stress that the main focus of this paper is the \\textit{method} rather than the \\textit{specific drive signal}, the drive signal is important for real-world implementation and for a general understanding of the search space restrictions used. As Fig.~\\ref{fig:ai_optimised_ops} shows, the derived driving signals are noisy despite a smooth resultant optical output. This is likely because the AWG (arbitrary waveform generator, using an 8-bit digital-to-analogue converter) could generate drive signal frequencies up to 6 GHz (12 GSa\/s), whereas the SOA used had a -3dB frequency response of 0.6 GHz; the drive signal was therefore oversampled by roughly a factor of 10. In a real DCN scenario, to implement our algorithms' driving signals in practice, we would likely use an FPGA or ASIC with an embedded on-chip DAC for multilevel signal generation, and there are already existing FPGAs (a.k.a. RF System on Chip (RFSoC)) that support multiple DACs at 6 GSa\/s. Therefore, in practice, the search space would be smaller (fewer dimensions\/points to optimise) than assumed in this paper, and we would expect this to improve the AI convergence characteristics. Further experiments using fewer points in the drive signal\/a slower AWG are necessary to see what the true effects on the AI algorithms are. This is beyond the scope of this paper, and we intend to investigate it further in our future work.\n\nWithin the context of a DCN implementation of the presented methods, some considerations were made with respect to the effect that the algorithms have on the signal-to-noise ratio (SNR). Namely, it should be considered whether the oscillations caused by the algorithms (all of the order of 5\\%) have a negative effect on the SNR of the ON period of the output, particularly in comparison to the output of a step driving signal, where the ON period considered is defined as starting when the signal enters the $\\pm 5\\%$ (with respect to the steady state) region for a 20 ns pulse length. Following the model of amplifier noise given in \\cite{agrawal}, and accounting for shot noise, intrinsic amplifier noise (the noise figure of the SOA) and the additional noise due to the fluctuations in the output, we consider the penalty on the noise figure (as defined in \\cite{agrawal}) due to the deviations of the output from its steady state value throughout the duration of its ON period. Assuming (based on intrinsic and shot noise contributions) a base noise figure (i.e. 
if the driving method caused no deviations at all) of 7.1dB, the measured noise figure penalties for ACO, PSO, GA and step were 1.05 dB, 0.65 dB, 1.12 dB and 0.53 dB, with SNR values of 28.52 dB, 28.90 dB, 28.54 dB and 29.06 dB respectively. The additional noise figure penalty of the AI methods relative to a step therefore ranges from 0.08 dB for the best performing algorithm (PSO) to 0.59 dB (GA).\n\n\n\n\\section{Optimisation Algorithms}\n\nAll AI algorithms had the goal of minimising the MSE between the actual SOA output and an ideal SOA step output with 0 rise time, settling time and overshoot. The closer the driving signal's corresponding output `process variable' (PV) was to this ideal `set point' (SP), the lower its MSE (defined in (\\ref{eq:meanSquaredError})).\n\n\\begingroup\n\\begin{equation} \\label{eq:meanSquaredError}\nMSE = \\frac{1}{m} \\sum_{g=1}^{m} \\left( PV_g - SP_g \\right)^2\n\\end{equation}\n\\endgroup\nHere $g$ runs over the $m$ sample points of the PV and SP signals.\n\n\n\\subsection{Particle Swarm Optimisation (PSO)}\n\n\\subsubsection{Implementation}\n\nAn overview of PSO is given in \\cite{Kiranyaz2014}, \\cite{Iqbal2015a}. PSO is a population-based AI metaheuristic for optimising continuous nonlinear functions. First proposed in 1995 by \\cite{Kennedy1995}, it combines swarm theory, inspired by natural phenomena such as bird flocks and fish schools, with evolutionary programming. In this paper, PSO is adapted to be applicable to SOA drive signal optimisation. \n\nTo apply PSO to SOA optimisation, $n$ particles (driving signals) were initialised at random positions in a hyper-dimensional search space with $m=240$ dimensions (the number of points in the signal). Since experimental results showed spurious overshoots after the rising edge and therefore an increase in the settling time, the PSO search space was bounded by a PISIC-shaped `shell' beyond which the particle dimensions could not assume values. An added benefit of the shell was a reduction in the complexity of the problem and therefore also in the convergence time. The shell area was a PISIC signal with a leading edge whose width was defined as some fraction of the `on' period of the signal. At each generation, in order to evaluate a given particle position, the MSE in (\\ref{eq:meanSquaredError}) was used to calculate the fitness (which was to be minimised). As discussed in \\cite{Clerc1999}, the particle inertia weights ($w$) and personal and social cognitive acceleration constants ($c_1$ and $c_2$) can be dynamically adapted as the PSO population evolves. This was done using the update rules in (\\ref{eq:updateW}), (\\ref{eq:updateM}), and (\\ref{eq:updateC}) \\cite{Clerc1999} at the start of each generation, where $p_{best_{j}}$ was the historic personal best position of particle $j$, $x_j$ was the position (amplitude values taken) of particle $j$, $w(0)$ was the initial inertia weight constant ($0~\\leq~w(0)~<~1$), $w(n_t)$ was the final inertia weight constant ($w(0) > w(n_t)$), $m_j(t)$ was the relative fitness improvement of particle $j$ at time $t$, and $c_{max}$ and $c_{min}$ were the maximum and minimum values for the acceleration constants. 
So long as these values satisfied (\\ref{eq:conditionForConvergence}), PSO was guaranteed to converge on some driving signal \\cite{VanDenBergh2001}. Using dynamic PSO significantly improved the algorithm's performance (see the `Hyperparameter Tuning' section below).\n\n\n\\begingroup\n\\begin{equation} \\label{eq:updateW}\nw_j(t+1) = w(0) + \\left[ \\left( w(n_t) - w(0) \\right) \\cdot \\left( \\frac{e^{m_j(t)} -1} {e^{m_j(t)} + 1} \\right) \\right]\n\\end{equation}\n\\endgroup\n\n\\begingroup\n\\begin{equation} \\label{eq:updateM}\nm_j(t) = \\frac{p_{best_j}(t) - x_j(t)}{p_{best_j}(t) + x_j(t)}\n\\end{equation}\n\\endgroup\n\n\\begingroup\n\\begin{equation} \\label{eq:updateC}\nc_{1,2}(t) = \\frac{c_{min} + c_{max}}{2} + \\frac{c_{max} - c_{min}}{2} \\cdot \\frac{e^{-m_j(t)}-1}{e^{-m_j(t)} + 1}\n\\end{equation}\n\\endgroup\n\n\\begingroup\n\\begin{equation} \\label{eq:conditionForConvergence}\n0 \\leq \\frac{1}{2} \\left( c_1 + c_2 \\right) - 1 < w < 1\n\\end{equation}\n\\endgroup\n\n\nThis PSO process could be repeated until the particles converged on a position with the best fitness (i.e. the optimum SOA driving signal). To help with convergence time and performance, some additional constants were defined:\n\n\\begin{itemize}\n \\item $iter_{max}$ = Maximum number of iterations that PSO could evolve through before termination. Higher gives more time for convergence but longer total optimisation time.\n \n \\item $max\\_v\\_f$ = Factor controlling the maximum velocity a particle could move with at each iteration. Higher can improve convergence time but, if too high, particles may oscillate around the optimum and never converge.\n \n \\item $on\\_s\\_f$ and $off\\_s\\_f$ = `On' and `off' suppression factors used to set the minimum and maximum driving signal amplitudes the particle positions could take when the step signal was `on' and `off' respectively. Lower will restrict the particle search space to make the problem tractable for the algorithm, but too low will impact the generalisability of the algorithm to any SOA.\n \n \\item $shell\\_w\\_f$ = Factor by which to multiply the `on' time of the signal to get the width of the leading edge of the PISIC shell. Higher (wider) value will give the algorithm more freedom to rise over a longer period at the leading edge of the signal and improve generalisability, but will also increase the size of the search space and impact convergence.\n\\end{itemize}\n\n\n\\subsubsection{Hyperparameter Tuning}\n\nThe simulation environment enabled the PSO hyperparameters to be rapidly tuned by plotting the PSO learning curve (MSE vs. number of iterations). Since the same PSO algorithm run multiple times may converge on different minima, each PSO version with its unique hyperparameters was run 10 times and the 10 corresponding learning curves plotted on the same graph to get a `cost spread' (i.e. how much the converged solution's MSE varied between PSO runs). A lower cost spread gave greater confidence that PSO had converged on the best solution that it could find rather than getting stuck in a local minimum. \n\nTo begin with, it was found that using dynamic PSO, whereby $w$, $c_1$ and $c_2$ were adapted at the beginning of each generation, led to multiple advantages. Firstly, the solution found by 10 dynamic particles had the same MSE as that found by 2,560 static particles, reducing the computation time by a factor of 256. 
Secondly, the final driving signal found by adaptive PSO was significantly less noisy, since it was less prone to local minima. Thirdly, the final MSE found was 63\\% lower. Fourthly, although the \\emph{relative} cost spread of dynamic PSO was 72\\% compared to 50\\% (a consequence of the far lower final MSE), the absolute cost spread was just $8.7\\times 10^{-13}$ compared to $140\\times 10^{-13}$. Continuing with dynamic PSO, it was found that placing a `PISIC shell' on the search space (with $shell\\_w\\_f = 0.1$), beyond which the particles could not travel, led to an absolute cost spread of $6.9\\times 10^{-13}$ and a further 14\\% reduction in the final cost (despite initial costs being higher, due to the fact that PISIC signals lead to greater overshoot and subsequently also greater oscillations). Finally, it was also found that initialising one of the $n$ particle positions as a step driving signal improved the convergence time by a factor of two. Using dynamic PSO, a PISIC shell and an embedded step, the following hyperparameter values were found to give the best spread, final cost and convergence time: $iter_{max} = 150$, $n = 160$, $max\\_v\\_f = 0.05$, $w(0) = 0.9$, $w(n_t) = 0.5$, $c_{min} = 0.1$, $c_{max} = 2.5$, $on\\_s\\_f = 2.0$, and $off\\_s\\_f = 0.2$. This final tuning resulted in a cost spread of just 1.8\\%. The evolution of this PSO tuning process is summarised in Fig.~\\ref{fig:simulatedPsoOutputs}, where the learning curves for the above sets of hyperparameters have been plotted in red, orange, blue and green respectively. The final PSO SOA output, shown in Fig.~\\ref{fig:simulatedPsoOutputs}, had a rise time, settling time and overshoot of 669 ps, 669 ps and 3.7\\% respectively. Fig.~\\ref{fig:simulatedPsoOutputs} also shows the optical response to a step driving signal, with a rise time, settling time and overshoot of 669 ps, 4.85 ns and 31.1\\% respectively. Thus, the simulations indicated that the settling time (and therefore the effective off-on switching time) could be reduced by a factor of 7.2 and the overshoot by a factor of 8.4 compared to a step. Although the rise time remained unimproved, the experimental results section shows that, for a real SOA with optical drift, PSO improves all three parameters.
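\n\nFor reference, the core of the PSO update loop (with static coefficients) is sketched below in Python; this is our own minimal rendering, in which \\texttt{mse} denotes the fitness of (\\ref{eq:meanSquaredError}) evaluated through the EC model, \\texttt{lo} and \\texttt{hi} are per-dimension bounds implementing the PISIC shell and suppression factors, \\texttt{step\\_signal} is the embedded step, and the constants \\texttt{w}, \\texttt{c1}, \\texttt{c2} and \\texttt{max\\_v} are assumed defined:\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, m = 160, 240                       # particles, points per signal\nx = rng.uniform(lo, hi, size=(n, m))  # positions = candidate drive signals\nx[0] = step_signal                    # embedded step initialisation\nv = np.zeros((n, m))\npbest = x.copy()\npbest_cost = np.array([mse(p) for p in x])\n\nfor it in range(150):                 # iter_max\n    g = pbest[np.argmin(pbest_cost)]  # global best position\n    r1, r2 = rng.random((2, n, m))\n    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)\n    v = np.clip(v, -max_v, max_v)     # velocity cap (max_v_f)\n    x = np.clip(x + v, lo, hi)        # PISIC shell \/ suppression bounds\n    cost = np.array([mse(p) for p in x])\n    better = cost < pbest_cost\n    pbest[better] = x[better]\n    pbest_cost[better] = cost[better]\n\\end{verbatim}\n\nIn the dynamic variant, $w$, $c_1$ and $c_2$ are additionally updated per particle at each generation according to (\\ref{eq:updateW})--(\\ref{eq:updateC}).\n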
\n\n\\begin{figure*}[!t]\n\\centering\n\n \\begin{tabular}{ccc}\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso3.png}\n \\put(15, 68){(a)}\n \\end{overpic}\n & \n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso4.png}\n \\put(15, 68){(b)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso5.png}\n \\put(15, 68){(c)}\n \\end{overpic}\n \\\\\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso6.png} \n \\put(15, 68){(d)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso7.png}\n \\put(15, 68){(e)}\n \\end{overpic}\n &\n \\begin{overpic}[width=0.3\\textwidth]{Figures\/parso8.png} \n \\put(15, 68){(f)}\n \\end{overpic}\n \\end{tabular}\n\n\\caption{Simulated SOA optical response to (a) PSO, (b) ACO, and (c) GA driving signals relative to a standard step input. For reference, the target SPs used have also been plotted. Learning curves showing how both the cost spread and the optimum solution improved as the (d) PSO, (e) ACO, and (f) GA algorithms were tuned, showing 10 learning curves for each set of hyperparameters. The curves for the optimum hyperparameters have been plotted in green. For PSO in (d), some additional information has been plotted: i) no dynamic PSO, PISIC shell, or embedded step (red), ii) no PISIC shell or embedded step (blue), iii) no embedded step (orange), and iv) the final PSO algorithm (green, also plotted on a separate inset graph). For GA, the i) default DEAP (red) and ii) optimised (green) hyperparameter learning curves have been plotted. For ACO, the blue curve is for a run with a larger pheromone exponent value (0.5) than the optimum, and the red is for a larger dynamic range on the signal search space ($\\pm50 \\%$).}\n\n\\label{fig:simulatedPsoOutputs}\n\n\\end{figure*}\n\n\n\\subsection{Ant Colony Optimisation (ACO)}\n\n\n\\subsubsection{Implementation}\n\nACO is primarily a path-finding evolutionary algorithm modelled on observations of how ant colonies find food sources in nature. As such, it finds an optimal path along the nodes of a graph $G = \\{g_{i}\\}$ by means of probabilistic exploration and colony exploitation across generations of ants. A more comprehensive explanation of ACO can be found in \\cite{aco_book}. Of several ACO variants, the `Ant Colony System' algorithm was used in this work.\n\nSince ACO is typically applied to routing problems, consideration must be given to how to represent parameter selection as such a problem. A system with $N$ parameters, each having $M$ possible values, can be modelled as a graph with $N$ clusters of $M$ nodes, where each node maps to a possible value of a particular parameter. A path can then be found that visits one node in each cluster, defining a set of parameter values once each cluster has been visited once and only once.\n\nFor example, consider an $N=3$ parameter $(a, b, c)$ system where each parameter can take 1 of $M=2$ possible values, which are selected in the order $a\\rightarrow b\\rightarrow c$. An $(NM) \\times (NM)$ matrix representing the probability of choosing a value for one parameter, given a previous value choice for another, can be written as in (\\ref{eq:aco_matrix}), where $\\alpha^{xy}_{ij}$ is the probability of choosing value $j$ for parameter $y$, given that value $i$ for parameter $x$ was just chosen. Zeroing the matrix entries appropriately ensures that parameter values are selected in order.\n\n\\begin{equation} \\label{eq:aco_matrix}\n \\begin{pmatrix}\n 0 & 0 & \\alpha^{ab}_{00} & \\alpha^{ab}_{01} & 0 & 0 \\\\\n 0 & 0 & \\alpha^{ab}_{10} & \\alpha^{ab}_{11} & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & \\alpha^{bc}_{00} & \\alpha^{bc}_{01} \\\\\n 0 & 0 & 0 & 0 & \\alpha^{bc}_{10} & \\alpha^{bc}_{11} \\\\\n 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 \\\\\n \\end{pmatrix}\n\\end{equation}\n\n\n\\subsubsection{Hyperparameter Tuning}\n\nThe important hyperparameters with respect to ACO (specifically the Ant Colony System algorithm used here) are the pheromone exponent (where higher values encourage more exploitation of previously found paths), the evaporation exponent (where higher values discourage exploitation of previously found paths) and the probability of an ant travelling along a randomly selected path. Additionally, the number of ants and generations must be selected.\n\nParameters were tuned by running optimisation routines with one hyperparameter varied across a range of values and the rest kept constant. For each such value, the learning curves from 10 different runs were plotted against each other. 
Just as with PSO, parameter values were selected to prioritise the minimisation of cost spread, to ensure that the optimisation technique could give consistent results when used on different occasions. Firstly, it was found that beyond 200 ants the cost spread did not improve significantly. Similarly, regardless of the spread, the ACO routine was typically converging after between 60 and 75 generations, so a generation cap of 100 was imposed, since this was sufficient to guarantee convergence. The values for the other parameters were the pheromone constant $\\alpha = 0.25$, the evaporation constant $\\rho = 0.5$ and the exploration probability $p = 0.1$. It was also found that minimising the search space by reducing the dynamic range of the signal to $\\pm25\\%$, centred at $50\\%$ of the maximum, shortened convergence time without degradation of the final signal, which had the advantage of making matrix memory sizes manageable. No further restrictions, such as the PISIC shell applied with the PSO method, were utilised; this is desirable since fewer hyperparameters simplify the tuning process.\n\nAs seen in Fig.~\\ref{fig:simulatedPsoOutputs}, the spread of the ACO routine was reduced from 23\\% to 14.9\\% through tuning, but was still less consistent than the 1.8\\% spread of the PSO algorithm. Fig.~\\ref{fig:simulatedPsoOutputs} shows the convergence of the Ant Colony System algorithm for various hyperparameter combinations (described in the figure's caption). While the spread in the early iterations of the routine is explained by the embedding of a square signal in the PSO routine described above (it is very unlikely that a randomly initialised signal betters a square, and ACO does not use any sort of initial signal embedding), the spread in the later stages is thought to be due to some practical limitations of the ACO optimisation method. For $N$ parameters with $M$ values each, the ACO routine requires two $(NM) \\times (NM)$ matrices (point-wise multiplied to make a third). A 100-point signal with 100 possible values per point gives a matrix with $100,000,000$ elements. Implemented with the popular NumPy Python library, at a minimum of 8 bytes per floating point number, such a matrix is on the order of a gigabyte. Given the relatively low power PC used in the experiment, restrictions on the state space had to be imposed due to memory limitations. This meant that rather than optimising each point on the signal (240) with the maximum resolution allowed by the AWG (8 bit = 256 levels), only 180 points (those in the HIGH state of the initial driving step signal) were optimised, with a resolution of 50 levels. The state space viewed by the ACO routine was therefore more strongly discretised than that viewed by a method (such as PSO) with lower memory requirements, limiting how close to optimal the generated signal could be and how well ACO could generalise to other SOAs. Nevertheless, as will be seen, ACO still produced driving signals that improved upon previous methods. The final ACO tuning output, shown in Fig. \\ref{fig:simulatedPsoOutputs}, had a rise time, settling time and overshoot of 753 ps, 1.58 ns and 9.1\\% respectively.\n\n\n\\subsection{Genetic Algorithm (GA)}\n\n\\subsubsection{Implementation}\nGAs are a group of nature-inspired population-based metaheuristics. The term `Genetic Algorithm' relates to the model proposed by John Holland in 1975 \\cite{Holland1975}. 
A detailed explanation of GAs can be found in \\cite{Whiteley1994}.\n\nThe DEAP Python library \\cite{fortin2012deap} was used to implement the canonical GA. Each optimisation started with an initial population of 100 individuals at random positions, in order to span as much of the search space as possible. Each individual was represented by an array of 240 points with values within the 7V range, thereby representing a driving signal.\n\nDuring the evolutionary process, the mutation stage was performed by applying Gaussian noise to some points of each individual. Any individuals with points which went beyond the supported 7V range were discarded.\n
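\nA minimal DEAP setup along these lines is sketched below (our own illustration: \\texttt{simulate\\_soa}, standing for the EC model, and the target \\texttt{SP} are assumed to be defined elsewhere, and the discarding of out-of-range individuals is omitted for brevity):\n\n\\begin{verbatim}\nimport random\nimport numpy as np\nfrom deap import algorithms, base, creator, tools\n\ncreator.create('FitnessMin', base.Fitness, weights=(-1.0,))\ncreator.create('Individual', list, fitness=creator.FitnessMin)\n\ntoolbox = base.Toolbox()\ntoolbox.register('attr', random.uniform, 0.0, 7.0)  # one point in the 7V range\ntoolbox.register('individual', tools.initRepeat,\n                 creator.Individual, toolbox.attr, 240)\ntoolbox.register('population', tools.initRepeat, list, toolbox.individual)\n\ndef evaluate(ind):\n    pv = np.asarray(simulate_soa(ind))   # EC model response to the drive\n    return (float(np.mean((pv - SP) ** 2)),)\n\ntoolbox.register('evaluate', evaluate)\ntoolbox.register('mate', tools.cxTwoPoint)\ntoolbox.register('mutate', tools.mutGaussian, mu=0.0, sigma=0.15, indpb=0.06)\ntoolbox.register('select', tools.selTournament, tournsize=4)\n\npop = toolbox.population(n=100)\npop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.9, mutpb=0.3,\n                               ngen=500, verbose=False)\n\\end{verbatim}\n\nThe hyperparameter values shown are those arrived at in the tuning described next; the choice of two-point crossover is ours, as the paper's crossover operator is not specified.\n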
\n\n\\subsubsection{Hyperparameter Tuning}\n\nAs described in \\cite{Whiteley1994}, there are three parts to the evolutionary process: selection, crossover, and mutation. Each of these can be implemented in a few different ways (e.g. Proportionate, Ranking or Tournament for selection \\cite{sivaraj2011review}), and each of these implementations uses different hyperparameters (e.g. $tournsize$ for Tournament Selection; or $\\mu$, $\\sigma$, and $indpb$ for Gaussian Mutation). This results in an overall high number of hyperparameters, which might significantly impact the probability of the GA getting stuck in a local minimum as well as the speed of convergence. The high number of hyperparameters also meant that there were more values to fine-tune, which made tuning both more complex and time consuming, thereby reducing generalisability. Since the high number of hyperparameters already impacted generalisability, we refrained from restricting the search space (as done with ACO and with the PSO PISIC shell) so as to still allow for as much generalisability as possible, at the knock-on cost of poorer convergence and a less well settled signal. However, as demonstrated in Fig. \\ref{fig:diff_tf_pvs_pso}, GA was still able to generalise fairly well to 10 different SOAs.\n\nThe DEAP library documentation comes with a set of suggested default hyperparameter values. These were varied using grid search over 61 optimisations. A limit on the number of generations was set to 500, which was found to be sufficient for convergence.\n\nMutation was implemented using Gaussian Mutation, which has a probability $indpb$ (the mutation rate) of changing each of an individual's points by applying normally distributed noise of mean $\\mu$ and standard deviation $\\sigma$. Using a negative $\\mu$ led to a solution with lower values, while a positive $\\mu$ did the opposite, each leading to a lower overall performance, so $\\mu$ was set to 0. Decreasing $indpb$ or $\\sigma$ slowed down the process, as it reduced the overall mutation speed, but increasing either one too much led to the GA getting stuck in local minima. By performing grid search on the hyperparameters, the optimal values of $indpb$ and $\\sigma$ were found to be 0.06 and 0.15 respectively. A population size of 60 led to the fastest initial convergence speed (per number of fitness function evaluations); however, the higher number of 100 individuals in a population led to a better overall solution after many generations. Additionally, both $cxpb$ (the probability of mating two individuals) and $mutpb$ (the probability of mutating an individual) were increased significantly, from 0.6 to 0.9 and from 0.05 to 0.3 respectively. Increasing $tournsize$ (which controls the number of randomly selected individuals from which to choose the best one for the next generation \\cite{miller1995genetic}) above 4 did not have an impact on the convergence, whereas using the values of 2 and 3 significantly slowed down the process. Most hyperparameters did not change by much from the DEAP library's default values, since the initial values were almost optimal and changing them led to slower convergence.\n\nFig.~\\ref{fig:simulatedPsoOutputs} shows the 10 learning curves for the default hyperparameters (red) and the optimised parameters (green), where the cost spread was reduced from 58.6\\% to 10.8\\%. Fig.~\\ref{fig:simulatedPsoOutputs} also shows the simulated SOA output of the tuned GA algorithm, with a rise time, settling time and overshoot of 799 ps, 2.55 ns, and 9.0\\% respectively.\n\nThe hyperparameters of the AI algorithms can be used to address the general problem of `SOA optimisation'. This is because the hyperparameters only restrict the search space to reduce the size of the problem, and restrict how much the algorithm can change its solution between iterations; they are specific to the general SOA optimisation problem, but \\textit{not} to a specific SOA. The EC simulation environment provided a useful test bed in which to tune the algorithm hyperparameters and allow optimisation of any SOA (even though drive signal solutions derived from simulations are not directly transferable to experiment).\n\n\\begin{figure}[!t]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{overpic}[scale=0.17]{Figures\/parso9.png}\n \\put(87, 69){(a)}\n \\end{overpic}\n \\\\\n \\begin{overpic}[scale=0.17]{Figures\/parso10.png}\n \\put(87, 71){(b)}\n \\end{overpic}\n \\begin{overpic}[scale=0.17]{Figures\/parso11.png}\n \\put(87, 69){(c)}\n \\end{overpic}\n \\\\\n \\begin{overpic}[scale=0.17]{Figures\/parso12.png}\n \\put(87, 71){(d)}\n \\end{overpic}\n \\begin{overpic}[scale=0.17]{Figures\/parso13.png}\n \\put(87, 69){(e)}\n \\end{overpic}\n \\\\\n \\begin{overpic}[scale=0.17]{Figures\/parso14.png}\n \\put(87, 69){(f)}\n \\end{overpic}\n \\begin{overpic}[scale=0.17]{Figures\/parso15.png}\n \\put(87, 69){(g)}\n \\end{overpic}\n \\end{tabular}\n\n\\caption{Simulated SOA optical responses of 10 different SOAs (each with a different transfer function) to (a) step, (c) PSO, (e) ACO, and (g) GA, and the corresponding driving signals for (b) PSO, (d) ACO, and (f) GA. All AI optimisations were done with the same hyperparameters and a common target SP.}\n\n\\label{fig:diff_tf_pvs_pso}\n\n\\end{figure}\n\n\nTo test the above claim that these algorithms can in theory be generalised to any SOA, we generated 10 different transfer functions, each modelling a different SOA. These were generated by multiplying the coefficients in Table~\\ref{tab:transfer_func_table} by various factors (summarised in Table~\\ref{tab:diff_transfer_func_table_of_factors} so as to be reproducible), thereby simulating SOAs with different characteristics. The optical outputs of these different SOAs in response to the same step driving signal are shown in Fig.~\\ref{fig:diff_tf_pvs_pso}. Using the PSO and GA algorithms with the \\textit{same hyperparameters}, all 10 of these SOAs could be optimised with no changes to the algorithms, as shown in Fig.~\\ref{fig:diff_tf_pvs_pso} (where the AI electrical drive signals have been included for reference). Due to search space restrictions, ACO could not generalise. 
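\n\nThe perturbed models are straightforward to reproduce: each variant rescales the numerator and the $a_0$-$a_2$ coefficients of (\\ref{eq:transferFunction}) by factors from Table~\\ref{tab:diff_transfer_func_table_of_factors}. A Python sketch is given below (ours; the table does not prescribe how the factors pair up into exactly ten variants, so the example simply enumerates candidate combinations from which ten can be drawn):\n\n\\begin{verbatim}\nimport itertools\n\ndef tf_variants(num, den):\n    # den = [a_9, ..., a_0]; rescale the numerator, a_0, a_1 and a_2 by\n    # the factors of the table (factor = 1 for all other coefficients)\n    variants = []\n    for nf, f1, f2 in itertools.product([1.0, 1.2, 1.4],\n                                        [0.7, 0.8, 1.2],\n                                        [1.05, 1.1, 1.2]):\n        d = list(den)\n        d[-1] *= 0.8   # a_0\n        d[-2] *= f1    # a_1\n        d[-3] *= f2    # a_2\n        variants.append(([num[0] * nf], d))\n    return variants\n\\end{verbatim}\n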
\nFor all 10 SOAs, a common target set point was chosen. The set point was defined as a perfect 0 overshoot, rise time and settling time step response, based on the steady states of the initial step response of one of the simulated SOAs. However, the target can be arbitrarily defined by the user if a different optical response is required, demonstrating the flexibility of the AI algorithms to optimise optical outputs with respect to specific problem requirements. Relative to this target set point, the performances are summarised in Table~\\ref{tab:diff_transfer_func_comparison_table}. Signals that did not settle have been marked as `-' and excluded from performance summary metrics. PSO had the greatest generalisability to optimising the settling times of different SOAs. Researchers in our field should therefore be able to black-box our PSO AI approach and optimise their SOAs, even though they will have different equivalent circuit components from the specific device(s) optimised in this paper.\n\n\n\\begin{table}[]\n\\footnotesize\n \\caption{Performance summary for the techniques applied to the 10 different simulated SOAs, given in the format min | max | mean | standard deviation (best in bold).}\n \\begin{itemize}\n \\item \\scriptsize{Signals marked `-' never settled.}\n \\end{itemize}\n \\label{tab:diff_transfer_func_comparison_table}\n \\centering\n \\tabcolsep=0.11cm\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n {\\textbf{Technique}} & \\pbox{20cm}{\\textbf{Rise Time (ps)}} & {\\textbf{Settling Time (ns)}} & {\\textbf{Overshoot (\\%)}} \\\\\n \\hline\n Step & \\scriptsize{\\textbf{502}, \\textbf{753}, 653, 86.4} & \\scriptsize{3.1, -, 5.8, 3.0} & \\scriptsize{16.5, 70.4, 39.2, 14.1} \\\\\n \\hline\n PSO & \\scriptsize{669, 837, 703, \\textbf{58.5}} & \\scriptsize{\\textbf{0.67}, \\textbf{1.3}, \\textbf{0.87}, \\textbf{0.20}} & \\scriptsize{\\textbf{2.51}, \\textbf{6.01}, \\textbf{4.46}, \\textbf{1.22}} \\\\\n \\hline\n ACO & \\scriptsize{\\textbf{502}, \\textbf{753}, \\textbf{644}, 79.4} & \\scriptsize{1.6, -, 2.6, 0.82} & \\scriptsize{11.1, 70.4, 32.6, 17.0} \\\\\n \\hline\n GA & \\scriptsize{760, 930, 793, \\textbf{58.5}} & \\scriptsize{1.0, 1.5, 1.3, 1.5} & \\scriptsize{4.31, 9.36, 7.04, 1.54} \\\\\n \\hline\n \\end{tabular}\n\n\\end{table}\n\n\n\\section{Previous Work}\n\nA previous attempt to optimise SOA output applied a `pre-impulse step injection current' (PISIC) driving signal to the SOA \\cite{Gallep2002}. This PISIC signal pre-excited carriers in the SOA's gain region, increasing the charge carrier density and the initial rate of stimulated emission to reduce the 10\\% to 90\\% rise time from 2 ns to 500 ps. However, this technique only considered rise time when evaluating SOA off-on switching times. A more accurate off-on time is given by the settling time, which is the time taken for the signal to settle within $\\pm 5\\%$ of the `on' steady state. Before settling, bits experience a variable signal-to-noise ratio, which impacts the bit error rate (BER) and makes the signal unusable until settled; the switch is therefore effectively `off' during this period.\n\nA later paper looked at applying a `multi-impulse step injection current' (MISIC) driving signal to remedy the SOA oscillatory and overshoot behaviour \\cite{Figueiredo2015}. 
As well as a pre-impulse, the MISIC signal included a series of subsequent impulses to balance the oscillations, reducing the rise time to 115 ps and the overshoot by 50\\%. However, the method for generating an appropriate pulse format was trial-and-error. Since each SOA has slightly different properties and parasitic elements, the same MISIC format cannot be applied to different SOAs; a different format must therefore be generated through this inefficient manual process for each SOA, of which there will be thousands in a real DC. As such, MISIC is not scalable. Critically, the MISIC technique did not consider the settling time, so the effective off-on switching time was still several ns.\n\nThe previous solutions discussed so far have had a design flow of first manually coming up with a heuristic for a simplified model of an SOA, followed by meticulous testing and tuning of the heuristic until good real-world performance is achieved. If some aspect of the problem is changed, such as the SOA type used or the desired shape of the output signal, this process must be repeated. This paper presents a novel and scalable approach to optimising the SOA driving signal in an automated fashion with artificial intelligence (AI), requiring no prior knowledge of the SOA and generalising to any device without laborious manual configuration. Three algorithms were chosen on the basis that they had previously been applied to proportional-integral-derivative (PID) tuning in control theory \\cite{7873803}: particle swarm optimisation (PSO), ant colony optimisation (ACO), and a genetic algorithm (GA). All algorithms were shown to reduce the settling and rise times to the $O(100~\\mathrm{ps})$ scale. The algorithms' hyperparameters were tuned in an SOA equivalent circuit (EC) simulation environment and their efficacy was demonstrated in an experimental setup. AI performance was compared to that of step, PISIC and MISIC driving signals, as well as the popular raised cosine and PID control approaches to optimising oscillating and overshooting systems, all of which the AI algorithms outperformed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFor a finite subset $A\\subset \\mathbb N$, let $E_A$ denote the set of all $x\\in(0,1)$ such that the digits\n$a_1(x), a_2(x),\\ldots$ in the continued fraction expansion \n$$\nx = \n[a_1(x), a_2(x), a_3(x), \\ldots ]\n= \n\\frac{1}{a_1(x) + \\frac{1}{a_2(x) + \n\\frac{1}{a_3(x) + \\cdots}\n}}\n$$\nall belong to $A$.\nSets of the form $E_A$ are said to be of \\emph{bounded type} (see e.g.~\\cite{kontorovich, shallit});\nin particular they are Cantor sets, and study of their Hausdorff dimension has attracted significant attention.\n\nOf particular interest have been the sets $E_n=E_{\\{1,\\ldots,n\\}}$,\nwith $E_2=E_{\\{1,2\\}}$ the most studied of these, serving as a test case for various general methods\nof approximating Hausdorff dimension. 
\nJarnik \\cite{jarnik} showed that $\\text{dim}(E_2)>1\/4$,\nwhile Good \\cite{good} improved this to $0.5306 < \\text{dim}(E_2)<0.5320$,\nBumby \\cite{bumby2}\nshowed that $0.5312 < \\text{dim}(E_2) < 0.5314$,\nHensley \\cite{hensley1989} showed that\n$0.53128049$ $< \\text{dim}(E_2) < 0.53128051$,\nwhile\nFalk \\& Nussbaum \\cite{falknussbaum}\\footnote{This preprint has been split into the two articles\n\\cite{falknussbaum1} and \\cite{falknussbaum2}, with \\cite{falknussbaum1} containing the approximation to\n$\\text{dim}(E_2)$.} \nrigorously justified the first 8 decimal digits of $\\text{dim}(E_2)$,\nproving that\n$0.531280505981423 \\le \\text{dim}(E_2) \\le 0.531280506343388\\,.$\nA common element in the methods \\cite{bumby2, falknussbaum, hensley1989} is \nthe study of a transfer operator, while for the higher accuracy estimates \n\\cite{falknussbaum, hensley1989} there is some element of\ncomputer-assistance involved in the proof.\n\nIn \\cite{jp} we outlined a different approach to approximating the Hausdorff dimension of bounded type sets,\nagain using a transfer operator, but exploiting the real analyticity of the maps defining continued fractions to\nconsider the determinant $\\Delta$ of the operator, and its\n approximation in terms of periodic points\\footnote{The periodic points are precisely those numbers in $(0,1)$\n with periodic continued fraction expansion, drawn from digits in $A$.\n The reliance on periodic points renders the method \\emph{canonical}, inasmuch as it does not involve any arbitrary choice of coordinates or partition of the space.}\n of an underlying dynamical system.\nWhile some highly accurate \\emph{empirical} estimates of Hausdorff dimension were given, for example a 25 decimal digit approximation to $\\text{dim}(E_2)$, these were not rigorously justified. Moreover, although the \nalgorithm was proved to generate a sequence of approximations $s_n$ to the Hausdorff dimension\n(depending on points of period up to $n$),\nwith convergence rate faster than any exponential, \nthe derived error bounds were sufficiently conservative (see Remark \\ref{conservative} below) that it was unclear\nwhether they could be combined with \nthe computed approximations to yield any effective \\emph{rigorous} estimate.\n\nIn the current paper we investigate the possibility of sharpening the approach of \\cite{jp} \nso as to obtain rigorous computer-assisted estimates on $\\text{dim}(E_A)$, with particular focus on $E_2$.\nThere are several ingredients in this sharpening.\nThe first step is to locate a disc $D$ in the complex plane with the property that\nthe images of $D$ under the mappings $T_n(z)=1\/(z+n)$, $n\\in A$, are contained in $D$.\nIt then turns out to be preferable to consider the transfer operator as acting on a \\emph{Hilbert} space of analytic functions on $D$, rather than the Banach space of \\cite{jp};\nthis facilitates an estimate on the Taylor coefficients of $\\Delta$ in terms of the \\emph{approximation numbers}\n(or \\emph{singular values}) of the operator, which is significantly better than those bounds derived from Banach space methods. 
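\n\nTo orient the reader, such containments are elementary to verify, since a M\\\"obius map sends discs to discs. The short Python check below (ours; the candidate disc, of centre $0.6$ and radius $0.55$, is purely illustrative and is not necessarily the disc used in our computations) confirms $T_n(D)\\subset D$ for $A=\\{1,2\\}$:\n\n\\begin{verbatim}\ndef image_disc(c, r, n):\n    # T_n(z) = 1\/(z+n) maps the disc D(c, r), c real, first to D(c+n, r)\n    # and then, via inversion, to the disc with centre a\/(a^2 - r^2) and\n    # radius r\/(a^2 - r^2), where a = c + n (valid when a > r > 0).\n    a = c + n\n    assert a > r > 0              # pole of T_n lies outside the disc\n    s = a * a - r * r\n    return a \/ s, r \/ s\n\nc, r = 0.6, 0.55                  # candidate disc D (illustrative choice)\nfor n in (1, 2):\n    c_im, r_im = image_disc(c, r, n)\n    assert abs(c_im - c) + r_im <= r, 'containment fails'\nprint('T_1(D) and T_2(D) are contained in D')\n\\end{verbatim}\n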
\nThe specific Hilbert space used is Hardy space, consisting of those analytic functions on the disc which extend as $L^2$\nfunctions on the bounding circle.\nThe contraction of $D$ by the mappings $T_n(z)=1\/(n+z)$, $n\\in A$,\nprompts the introduction of the \\emph{contraction ratio}, which captures\nthe strength of this contraction, and leads to estimates on the convergence of\nthe approximations to the Hausdorff dimension.\nThe $n^{th}$ Taylor series coefficient of $\\Delta$ can be expressed in terms of periodic points of period up to $n$, and for sufficiently small $n$ these can be evaluated\nexactly, to arbitrary precision.\nFor larger $n$, we show it is advantageous to\nobtain two distinct types of upper bound on the Taylor coefficients: we refer to these\nas the \\emph{Euler bound} and the \\emph{computed Taylor bound}.\nThe Euler bound is used for all sufficiently large $n$, while the\ncomputed Taylor bound is used for a finite intermediate range of $n$\ncorresponding to those Taylor coefficients which are deemed to be computationally inaccessible,\nbut \nwhere the Euler bound is insufficiently sharp.\nIntrinsic to the definition of the computed Taylor bounds is the sequence of\n\\emph{computed approximation bounds}, which we introduce as \ncomputationally accessible upper bounds on\nthe approximation numbers \nof the transfer operator.\n\nAs an example of the effectiveness of the resulting method we rigorously justify the \nfirst 100 decimal digits\\footnote{The choice of 100 decimal digits in the present article is motivated by a number of factors.\nOn the one hand 100 is considered a particularly round number, and an order of magnitude larger than the number of decimal digits obtained (even non-rigorously) for the dimension of $E_2$ in previous works.\nOn the other hand, readily available computer resources (namely, a program written in Mathematica running on a modestly equipped laptop) performed the necessary calculations, in particular the high accuracy evaluation of points of period up to 25, in a reasonable timeframe (approximately one day), and it turns out that this choice of maximum period is sufficient to rigorously justify 100 decimal digits.}\nof the Hausdorff dimension of $E_2$, \nthereby improving on the rigorous estimates in \\cite{bumby2, falknussbaum, good, hensley1989, jarnik}.\nSpecifically, we prove (see Theorem \\ref{e2theorem}) that \n\\begin{align*}\n\\text{dim}(E_2) =\n0.&53128050627720514162446864736847178549305910901839 \\\\\n& 87798883978039275295356438313459181095701811852398\\ldots \\,,\n\\end{align*}\n using the \n periodic points of period up to 25.\n\n\n\n\\section{Preliminaries}\n\nIn this section we collect a number of results (see also \\cite{jp}) which underpin our algorithm for approximating Hausdorff dimension.\n\n\\subsection{Continued fractions}\n\nLet $E_A$ denote the set of all $x\\in(0,1)$ such that the digits\n$a_1(x), a_2(x),\\ldots$ in the continued fraction expansion \n$$\nx = \n[a_1(x), a_2(x), a_3(x), \\ldots ]\n= \n\\frac{1}{a_1(x) + \\frac{1}{a_2(x) + \n\\frac{1}{a_3(x) + \\cdots}\n}}\n$$\nall belong to $A$.\nFor any $i\\in\\mathbb N$ we define the map $T_i$ by\n$$\nT_i(x)=\\frac{1}{i+x}\\,,\n$$\nand for a given $A\\subset \\mathbb N$, the collection $\\{T_i:i\\in A\\}$ is referred to as the corresponding\n\\emph{iterated function system}.\nIts \\emph{limit set}, consisting of limit points of sequences\n$T_{i_1}\\circ \\cdots\\circ T_{i_n}(0)$, where each $i_j\\in A$, is precisely the set $E_A$.\n\nEvery set $E_A$ is invariant 
under the \\emph{Gauss map} $T$, defined by\n$$\nT(x) = \\frac{1}{x} \\pmod 1\\,.\n$$\n\n\\subsection{Hausdorff dimension}\n\nFor a set $E \\subset \\mathbb R$,\ndefine\n$$\nH_\\varepsilon^\\delta(E) = \\inf\n\\left\\{\\sum_i\\text{diam}(U_i)^\\delta : \\text{$\\mathcal{U} = \\{U_i\\}$ is an open cover of $E$ such that each $\\text{diam}(U_i) \\leq \\varepsilon$}\\right\\}\\,,\n$$\nand set\n$H^\\delta(E) =\\lim_{\\varepsilon \\to 0} H_\\varepsilon^\\delta(E)$.\nThe \\emph{Hausdorff dimension} $\\text{dim}(E)$\nis then defined as\n$$\n\\text{dim}(E) = \\inf\\{\\delta \\hbox{ : } H^\\delta(E) = 0\\} \\,.\n$$\n\n\\subsection{Pressure formula}\n\nFor a continuous function $f: E_A \\to \\mathbb R$,\nits \\emph{pressure} $P(f)$ \nis given by\n$$\nP(f) = \\lim_{n\\to +\\infty}\\frac{1}{n}\n\\log \\left(\\sum_{T^nx=x \\atop x\\in E_A}\ne^{f(x) + f(Tx) + \\ldots + f(T^{n-1}x)}\n\\right)\\,,\n$$\nand if\n$f = -s\\log|T'|$ then we have the following implicit characterisation of the Hausdorff\ndimension of $E_A$ (see \\cite{bedford, bowen, falconer, mauldinurbanski}):\n\n\\begin{lemma}\\label{pressurelemma}\nThe function $s\\mapsto P(-s\\log |T'|)$ is strictly decreasing,\nwith a unique zero at $s=\\text{dim}(E_A)$.\n\\end{lemma}\n\n\\subsection{Transfer operators}\n\nFor a given $A\\subset \\mathbb N$, and $s\\in\\mathbb R$, the \\emph{transfer operator} $\\mathcal L_{A,s}$, defined by\n$$\n\\mathcal L_{A,s} f (z) = \\sum_{i\\in A} \\frac{1}{(z+i)^{2s}}\\, f\\left(\\frac{1}{z+i}\\right) \\,,\n$$\npreserves various natural function spaces, for example the Banach space of Lipschitz functions on $[0,1]$.\nOn this space it has a simple positive eigenvalue $e^{P(-s\\log|T'|)}$, which is the unique eigenvalue whose modulus equals \nits spectral radius,\nthus by Lemma \\ref{pressurelemma} \nthe Hausdorff dimension of $E_A$ is the unique value $s\\in\\mathbb R$ such that $\\mathcal L_{A,s}$ has spectral radius equal to 1.\n\n\\subsection{Determinant}\nThe \\emph{determinant} for $\\mathcal L_{A,s}$ is the entire function defined for $z$ of sufficiently small modulus\\footnote{The power series $\\sum_{n=1}^\\infty \\frac{z^n}{n} \\text{tr}(\\mathcal L_{A,s}^n)$, and hence the expression (\\ref{determinantformula}), is convergent for $|z|< e^{-P(-s\\log|T'|)}$.}\nby\n\\begin{equation}\\label{determinantformula}\n\\Delta(z,s) = \\exp -\\sum_{n=1}^\\infty \\frac{z^n}{n} \\text{tr}(\\mathcal L_{A,s}^n) \\,,\n\\end{equation}\nand for other $z\\in\\mathbb{C}$ by analytic continuation;\nhere the trace $\\text{tr}(\\mathcal L_{A,s}^n)$ is given (see \\cite{jp, ruelle}) by \n\\begin{equation}\\label{traceformula}\n\\text{tr}(\\mathcal L_{A,s}^n) = \\sum_{\\underline i\\in A^n} \\frac{|T_{\\underline i}'(z_{\\underline i})|^s}{1-T_{\\underline i}'(z_{\\underline i})} \n = \\sum_{\\underline i\\in A^n} \\frac{\\prod_{j=0}^{n-1}T^j(z_{\\underline i})^{2s}}{1-(-1)^n\\prod_{j=0}^{n-1}T^j(z_{\\underline i})^2 } \n\\,,\n\\end{equation}\nwhere the point $z_{\\underline i}$, which has period $n$ under $T$, is the unique fixed point of the \n$n$-fold composition $T_{\\underline i} = T_{i_1}\\circ T_{i_2}\\circ\\cdots\\circ T_{i_n}$.\n\n\nWhen acting on a suitable space of holomorphic functions, the eigenvalues of $\\mathcal L_{A,s}$ are precisely the reciprocals\nof the zeros of its determinant.\nIn particular, the zero of minimum modulus for $\\Delta(s,\\cdot)$ is $e^{-P(-s\\log|T'|)}$, so the Hausdorff dimension\nof $E_A$ is characterised as the value of $s$ such that 1 is the zero of minimum modulus of 
$\\Delta(s,\\cdot)$.\n\nIn fact we shall later show that, when $\\mathcal L_{A,s}$ acts on such a space of holomorphic functions, its \napproximation numbers decay at an exponential rate (see Corollary \\ref{ks}), so that $\\mathcal L_{A,s}$ belongs to an exponential class (cf.~\\cite{bandtlow, bj}) and is in particular a trace class operator, from which the existence and above\nproperties of trace and determinant follow (see \\cite{simon}).\n\nAs outlined in \\cite{jp}, this suggests the possibility of expressing $\\Delta(z,s)$ as a power series\n$$\n\\Delta(z,s) = 1 + \\sum_{n=1}^\\infty \\delta_n(s) z^n\\,,\n$$\nthen \ndefining $\\frak D$ by\n$$\n\\frak D(s) := \\Delta(1,s) =1 + \\sum_{n=1}^\\infty \\delta_n(s) \\,.\n$$\nThe function $\\frak D$ is an entire function of $s$ (see \\cite{jp}),\nand solutions $s$ of the equation\n\\begin{equation}\\label{propereqn}\n0 = 1+\\sum_{n=1}^\\infty \\delta_n(s) = \\frak D(s)\n\\end{equation}\nhave the property that the value 1 is an eigenvalue for $\\mathcal L_{A,s}$;\nin particular,\nthe unique zero of $\\frak D$ in the interval $(0,1)$ is precisely $\\text{dim}(E_A)$,\nbeing the unique value of $s$ for which 1 is the eigenvalue of maximum modulus for $\\mathcal L_{A,s}$.\n\nAs a result of the trace formula (\\ref{traceformula}), the coefficients $\\delta_n(s)$ are computable\\footnote{By this we mean that for a given $s$, the $\\delta_n(s)$ are computable exactly, to arbitrary precision.}\nin terms of the periodic points of $T|_{E_A}$ of period no greater than $n$, so for some suitable\n$N\\in\\mathbb N$, chosen so that $\\delta_1(s),\\ldots, \\delta_N(s)$ can be computed to a given precision in reasonable time,\nwe can define $\\frak D_N$ by\n\\begin{equation}\\label{DNdefn}\n\\frak D_N(s) := 1+\\sum_{n=1}^N \\delta_n(s)\\,.\n\\end{equation}\nA solution to the equation\n\\begin{equation}\\label{truncatedequation}\n\\frak D_N(s) = 0 \n\\end{equation}\nwill be an approximate solution to (\\ref{propereqn}), where the quality of this approximation will be related to the \nsmallness of the discarded tail\n\\begin{equation}\\label{discardedtail}\n\\sum_{n=N+1}^\\infty \\delta_n(s)\\,.\n\\end{equation}\nIn particular, any rigorous estimate of the closeness of a given approximate solution $s_N$ of (\\ref{truncatedequation}) to the true Hausdorff dimension $\\text{dim}(E_A)$ will require a rigorous upper bound\non the modulus of the tail (\\ref{discardedtail}).\n\n\\begin{remark}\\label{conservative}\nIn \\cite{jp} we considered the set $E_2=E_{\\{1,2\\}}$ and, although the empirical estimates of its\nHausdorff dimension appeared convincing,\nthe estimate on the tail (\\ref{discardedtail}) was not sharp enough to permit any effective rigorous bound.\nEssentially\\footnote{In \\cite{jp} we actually worked with $\\det(I-z\\mathcal L_{A,s}^2)$ rather than\n$\\det(I-z\\mathcal L_{A,s})$, though the methods there lead to very similar bounds for both determinants.}, the bound in \\cite{jp} was\n$\n|\\delta_n(s)| \\le \\varepsilon_n := C K^n n^{n\/2} \\theta^{n(n+1)}\n$\nwhere\n$\nC= \\gamma \\prod_{r=1}^\\infty(1-\\gamma^r)^{-1} \\approx 122979405533\n$,\n$\nK= \\frac{45}{16\\pi} \\approx 0.895247\n$,\nand\n$\n\\theta = \\left( \\frac{8}{9}\\right)^{1\/4} \\approx 0.970984\n$.\nAlthough the bounding sequence $\\varepsilon_n$ tends to zero, and does so at super-exponential rate $O(\\theta^{n^2})$, \nthe considerable inertia in this convergence \n(e.g.~the sequence increases for $1\\le n\\le 39$ to the value $\\varepsilon_{39}\\approx 1.31235 \\times 
10^{22}$,
and remains larger than $1$ until
$n=85$)
renders the bound ineffective in practice,
in view of the exponentially increasing computation time required to calculate the $\\delta_n(s)$
(as seen in this article, we can feasibly compute several million periodic points, but performing calculations involving more
than $2^{85}$ points is out of the question).
\\end{remark}


\\begin{remark}
The specific rigorous approximation of dimension is performed in this article only for the set $E_2$
(see \\S \\ref{E2section}), corresponding to the iterated function system consisting of the maps $T_1(x)=1\/(x+1)$
and $T_2(x)=1\/(x+2)$.
In principle, however, it can be performed for arbitrary iterated function systems consisting of real analytic maps $T_1,\\ldots, T_l$ satisfying the open set condition (i.e.~there exists a non-empty open set $U$ such that
$T_i(U)\\cap T_j(U)=\\emptyset$ for $i\\neq j$, and $T_i(U)\\subset U$ for all $i$).
In this setting the accuracy of our
Hausdorff dimension estimate depends principally on the contractivity of the maps $T_i$ and the number $l$ of such maps, with stronger contraction and a smaller value of $l$ corresponding to increased accuracy.
Stronger contraction (as reflected by smallness of the \\emph{contraction ratio} defined in \\S \\ref{contractionratiossubsection}) is associated with more rapid decay of the Taylor coefficients of the determinant
$\\Delta(z,s)$, implying greater accuracy of the polynomial truncations, while for $l>2$ the time required to locate the points of period up to $n$ increases by a factor of roughly $(l\/2)^n$ relative to the case $l=2$
(note that for \\emph{infinite} iterated function systems, i.e.~$l=\\infty$, our method is rarely applicable, since it is usually impossible to locate all period-$n$ points for a given $n$, though here non-rigorous estimates may be obtained by suitable truncation of the system).
If the $T_i$ are not M\\\"obius maps then for practical purposes there is some minor decrease in the efficiency of our method: the compositions $T_{\\underline i}$ are more highly nonlinear than in the M\\\"obius case, so evaluation of their fixed points typically takes slightly longer.
\\end{remark}


\\begin{remark}
Work of Cusick \\cite{cusick1,cusick2}
on continuants with bounded digits characterised the Hausdorff dimension of $E_n=E_{\\{1,\\ldots,n\\}}$ in terms of the abscissa of convergence of a certain Dirichlet series,
and Bumby \\cite{bumby1, bumby2} showed that $0.5312 < \\text{dim}(E_2) < 0.5314$.
Hensley \\cite{hensley1989} obtained the bound
$0.53128049 < \\text{dim}(E_2) < 0.53128051$ using a recursive procedure,
and in \\cite[Thm.~3]{hensley1996} introduced a general approach for
approximating the Hausdorff dimension of $E_A$, obtaining in particular the empirical estimate
$\\text{dim}(E_2)=
0.5312805062772051416\\ldots$
\\end{remark}


\\section{Hilbert Hardy space, approximation numbers, approximation bounds}

In this section we introduce the Hilbert space upon which the transfer operator acts, then make the connection between
approximation numbers for the operator and Taylor
coefficients of its determinant, leading to so-called Euler bounds on these Taylor coefficients.

\\subsection{Hardy space}

Let $D\\subset \\mathbb{C}$ be an open disc of radius $\\varrho$, centred at $c$.
The \\emph{Hilbert Hardy space}
$H^2(D)$ consists of those functions $f$
which are holomorphic on $D$ and such that
$\\sup_{0<\\varrho''<\\varrho} \\int_0^1 |f(c+\\varrho'' e^{2\\pi i t})|^2\\, dt < \\infty$.
Such functions extend as $L^2$ functions on the bounding circle, and $H^2(D)$ is a Hilbert space with respect to the norm
$$
\\|f\\| = \\left( \\int_0^1 |f(c+\\varrho e^{2\\pi i t})|^2\\, dt \\right)^{1\/2}\\,,
$$
an orthonormal basis of which is given by suitably normalised monomials (cf.~(\\ref{monomial}) below).

\\subsection{Approximation numbers}

For a bounded operator $L$ on $H^2(D)$, the $n^{th}$ \\emph{approximation number} $s_n(L)$ is the distance, in the operator norm, from $L$ to the set of operators of rank strictly less than $n$:
$$
s_n(L) = \\inf \\{ \\|L-K\\| : \\text{rank}(K)\\le n-1 \\}\\,.
$$
The approximation numbers control the Taylor coefficients $\\delta_n(s)$ of the determinant $\\det(I-z\\mathcal L_{A,s})$ via the following classical inequality (see e.g.~\\cite{simon}):

\\begin{lemma}\\label{gohberglemma}
For all $n\\ge1$,
$$
|\\delta_n(s)| \\le \\sum_{i_1<\\ldots<i_n} \\prod_{j=1}^n s_{i_j}(\\mathcal L_{A,s})\\,.
$$
\\end{lemma}

The exponential decay of the approximation numbers established in Corollary \\ref{ks} below converts this inequality into the Euler bound of Proposition \\ref{eulerprop}, which will be used for all $n$ exceeding a certain cutoff $Q$.
In view of Lemma \\ref{gohberglemma}, the
computed Taylor bounds will be derived by first bounding the finitely\n many approximation numbers $s_1(\\mathcal L_{A,s}),\\ldots, s_N(\\mathcal L_{A,s})$, for some $N\\in\\mathbb N$, by explicitly computable quantities that we call \\emph{computed approximation bounds}.\n The computations required to derive the computed approximation bounds are not onerous, the main task being the \nevaluation of numerical integrals defining certain $H^2$ norms\n (of the transfer operator images of a chosen orthonormal basis).\n\n\nWe shall approximate $\\mathcal L_{A,s}$\nby first projecting $H^2(D)$ onto the space of polynomials up to a given degree.\nLet $\\mathcal L_{A,s}:H^2(D)\\to H^2(D)$ be a transfer operator,\nwhere $D\\subset \\mathbb C$ is an open disc of radius $\\varrho$ centred at $c$, and \n$\\{m_k\\}_{k=0}^\\infty$ is the corresponding orthonormal basis of monomials, given by\n\\begin{equation}\\label{monomial}\nm_k(z)=\\varrho^{-k}(z-c)^k\\,.\n\\end{equation}\n\n\\subsection{Approximation bounds}\n\n\\begin{defn}\\label{alphadefn}\nFor $n\\ge1$, define the $n^{th}$ \\emph{approximation bound} $\\alpha_n(s)$ to be\n\\begin{equation}\\label{alphaexpression}\n\\alpha_n(s) = \\left( \\sum_{k=n-1}^\\infty \\|\\mathcal L_{A,s} (m_k)\\|^2 \\right)^{1\/2} \\,.\n\\end{equation}\n\\end{defn}\n\n\n\\begin{prop}\\label{alphaapproximationbound}\nFor each $n\\ge1$,\n\\begin{equation}\\label{snalphanbound}\ns_n(\\mathcal L_{A,s}) \\le \\alpha_n(s)\\,.\n\\end{equation}\n\\end{prop} \n\\begin{proof}\nFor $f\\in H^2(D)$ we can write\n$$\nf = \\sum_{k=0}^\\infty \\hat f(k)\\, m_k\n$$\nwhere the sequence $(\\hat f(k))_{k=0}^\\infty$ is square summable.\nDefine the rank-$(n-1)$ projection $\\Pi_n:H^2(D)\\to H^2(D)$ by\n$$\n\\Pi_n(f) = \\sum_{k=0}^{n-2} \\hat f(k)\\, m_k \\,,\n$$\nwhere in particular $\\Pi_1\\equiv 0$.\n\nThe transfer operator $\\mathcal L_{A,s}$ is approximated by the rank-$(n-1)$ operators $$\\mathcal L_{A,s}^{(n)} := \\mathcal L_{A,s} \\Pi_n \\,,$$\nand $\\| \\mathcal L_{A,s}- \\mathcal L_{A,s}^{(n)} \\|$ can be estimated\nusing the Cauchy-Schwarz inequality as follows:\n\\begin{multline*}\n\\| (\\mathcal L_{A,s} - \\mathcal L_{A,s}^{(n)})f \\|\n= \\| \\sum_{k=n-1}^\\infty \\hat f(k)\\, \\mathcal L_{A,s}(m_k) \\| \n \\le \\sum_{k=n-1}^\\infty | \\hat f(k)| \\| \\mathcal L_{A,s}(m_k) \\| \\cr\n \\le \\left( \\sum_{k=n-1}^\\infty \\| \\mathcal L_{A,s}(m_k) \\|^2 \\right)^{1\/2} \\left( \\sum_{k=n-1}^\\infty |\\hat f(k)|^2\\right)^{1\/2} \\cr\n \\le \\left( \\sum_{k=n-1}^\\infty \\| \\mathcal L_{A,s}(m_k) \\|^2 \\right)^{1\/2} \\|f\\| \\,,\\cr\n\\end{multline*}\nand therefore \n$\\| \\mathcal L_{A,s}- \\mathcal L_{A,s}^{(n)} \\| \\le \\left( \\sum_{k=n-1}^\\infty \\| \\mathcal L_{A,s}(m_k) \\|^2 \\right)^{1\/2} = \\alpha_n(s)$.\nSince $\\mathcal L_{A,s}^{(n)}$ has rank $n-1$,\nit follows that $s_n(\\mathcal L_{A,s}) \\le \\alpha_n(s)$, as required.\n\\end{proof}\n\n\\subsection{Contraction ratios}\\label{contractionratiossubsection}\n\nLet $C_i:H^2(D)\\to H^2(D)$ be the \\emph{composition operator}\n$$\nC_i f = f\\circ T_i\\,.\n$$\n\nThe estimate arising in the following lemma motivates our definition below (see Definition \\ref{hardydefinition})\nof the \\emph{contraction ratio} associated to a disc $D$ and subset $A\\subset \\mathbb N$.\n\n\\begin{lemma}\\label{H2contractionratio}\nLet $D$ and $D'$ be concentric discs, with radii $\\varrho$ and $\\varrho'$ respectively.\nIf, for $i\\in A$, the image $T_i(D)$ is contained in $D'$, then for all $k\\ge0$,\n\\begin{equation}\n\\| C_i(m_k)\\| 
\\le \\left( \\frac{\\varrho'}{\\varrho}\\right)^k\\,.
\\end{equation}
\\end{lemma}
\\begin{proof}
Let $c$ denote the common centre of the discs $D, D'$.
If $z\\in D$ then
$$|C_i(m_k)(z)| = \\varrho^{-k} |T_i(z)-c|^k < \\varrho^{-k} (\\varrho')^k = (\\varrho' \/\\varrho)^k\\,,$$
and since the $H^2$ norm of a function is bounded above by its supremum over $D$, it follows that $\\|C_i(m_k)\\|\\le (\\varrho'\/\\varrho)^k$, as required.
\\end{proof}

For each $i\\in A$, $s\\in\\mathbb R$, if the open disc $D$ is such that $-i\\notin D$ then
define the \\emph{weight function} $w_{i,s}:D\\to\\mathbb{C}$ by
$$
w_{i,s}(z) = \\left(\\frac{1}{z+i}\\right)^{2s}\\,,
$$
and the \\emph{multiplication operator} $W_{i,s}:H^2(D)\\to H^2(D)$ by
$$
W_{i,s} f = w_{i,s} f\\,.
$$
We may write
$$
\\mathcal L_{A,s} = \\sum_{i\\in A} W_{i,s} C_i\\,,
$$
so that
$$
\\| \\mathcal L_{A,s}(m_k) \\| \\le
\\sum_{i\\in A} \\| W_{i,s} C_i(m_k) \\|
\\le
\\sum_{i\\in A} \\| w_{i,s}\\|_\\infty \\|C_i(m_k) \\|
\\,,
$$
and if $\\varrho_i'$ is such that $T_i(D)$ is contained in the concentric disc $D_i'$ of radius $\\varrho_i'$ then
Lemma \\ref{H2contractionratio} implies that
\\begin{equation}\\label{rhoprimei}
\\| \\mathcal L_{A,s}(m_k) \\| \\le \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty (\\varrho_i'\/\\varrho)^k\\,.
\\end{equation}

For our purposes it will be more convenient to work with a slightly simpler (and less sharp) version of
(\\ref{rhoprimei}). This prompts the following definition:

\\begin{defn}\\label{hardydefinition}
Let $A\\subset\\mathbb N$ be finite, and $D\\subset \\mathbb{C}$ an open disc of radius $\\varrho$ such that
$\\cup_{i\\in A} T_i(D)\\subset D$; such a disc $D$ will be called \\emph{admissible}.
Let $D'$ be the smallest disc, concentric with $D$, such that
$\\cup_{i\\in A} T_i(D)\\subset D'$, and let
$\\varrho'$ denote the radius of $D'$.
The corresponding \\emph{contraction ratio} $h=h_{A,D}$ is defined to be
\\begin{equation}\\label{hardyratiodefn}
h = h_{A,D} = \\frac{\\varrho'}{\\varrho}\\,.
\n\\end{equation}\n\\end{defn}\n\n\n\\begin{lemma}\\label{hardyfirst}\nLet $A\\subset \\mathbb N$ be finite, and $D$ an admissible disc, with contraction ratio $h=h_{A,D}$.\nFor all $k\\ge0$,\n\\begin{equation}\\label{normdecay}\n\\| \\mathcal L_{A,s}(m_k) \\| \\le h^k \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty \\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nIf $D'$ is as in Definition \\ref{hardydefinition} then $\\varrho'=\\max_{i\\in A} \\varrho_i'$ \nin the notation of (\\ref{rhoprimei}), and the result follows from (\\ref{rhoprimei}).\n\\end{proof}\n\n\\begin{cor}\\label{ks}\nLet $A\\subset \\mathbb N$ be finite, and $D$ an admissible disc, with contraction ratio $h=h_{A,D}$.\nFor all $n\\ge1$,\n\\begin{equation}\\label{salphabound}\ns_n(\\mathcal L_{A,s}) \\le \\alpha_n(s) \\le K_s h^n\n\\end{equation}\nwhere\n\\begin{equation}\\label{ksdefn}\nK_s =\n\\frac{ \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty }{ h \\sqrt{1 - h^2}} \\,.\n\\end{equation}\n\\end{cor}\n\\begin{proof}\nNow\n$$\n\\alpha_n(s) = \\left( \\sum_{k=n-1}^\\infty \\|\\mathcal L_{s} (m_k)\\|^2 \\right)^{1\/2} \n$$\nfrom Definition \\ref{alphadefn} and Proposition \\ref{alphaapproximationbound},\nso Lemma \\ref{hardyfirst} gives\n$$\n\\alpha_n(s) \\le \\left( \\sum_{k=n-1}^\\infty h^{2k} \\right)^{1\/2} \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty\n= \\frac{ h^{n-1}}{\\sqrt{1 - h^2}} \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty \\,,\n$$\nand the result follows.\n\\end{proof}\n\n\\subsection{Euler bounds}\n\nWe can now derive the \\emph{Euler bound} on the $n^{th}$ Taylor coefficient of the determinant:\n\n\\begin{prop}\\label{eulerprop}\nLet $A\\subset \\mathbb N$ be finite, and $D$ an admissible disc, with contraction ratio $h=h_{A,D}$.\nIf the transfer operator $\\mathcal L_{A,s}$ has determinant $\\det(I-z\\mathcal L_{A,s})=1+\\sum_{n=1}^\\infty \\delta_n(s)z^n$,\n then for all $n\\ge1$,\n \\begin{equation}\\label{eulerequation}\n|\\delta_n(s)| \\le \\frac{K_s^n h^{n(n+1)\/2}}{ \\prod_{i=1}^n (1 - h^i)}\\,.\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nBy Lemma \\ref{gohberglemma},\n\\begin{equation*}\n|\\delta_n(s)|\n\\leq\n \\sum_{i_1 < \\ldots < i_n}\n\\prod_{j=1}^n\ns_{i_j}(\\mathcal L_{A,s})\n \\,,\n\\end{equation*}\nso Corollary \\ref{ks} gives\n\\begin{equation*}\n|\\delta_n(s)|\n\\leq\nK_s^n\n \\sum_{i_1 < \\ldots < i_n} h^{i_1+\\ldots+i_n} \\,,\n\\end{equation*}\nand the result follows by repeated geometric summation\n(as first noted by Euler \\cite[Ch.~16]{euler}).\n\\end{proof}\n\nHenceforth we use the notation\n\\begin{equation}\\label{eulerformula}\nE_n(r) := \\frac{r^{n(n+1)\/2}}{\\prod_{i=1}^n (1-r^i)} = \\sum_{i_1< \\ldots < i_n} r^{i_1+\\ldots + i_n}\\,,\n\\end{equation}\nso that (\\ref{eulerequation}) can be written as\n\\begin{equation}\\label{eulerequation2}\n|\\delta_n(s)| \\le K_s^n E_n(h)\\,,\n\\end{equation}\nand we define the righthand side of (\\ref{eulerequation2}) \n(or equivalently of (\\ref{eulerequation}))\nto be the \\emph{Euler bound} on the $n^{th}$ Taylor coefficient of the determinant.\n\n\\section{Computed approximation bounds}\\label{cabsection}\n\nFor all $n\\ge1$, the $n^{th}$ approximation bound \n$$\\alpha_n(s) = \\left( \\sum_{k=n-1}^\\infty \\|\\mathcal L_{A,s} (m_k)\\|^2 \\right)^{1\/2}$$\nis, as noted in Proposition \\ref{alphaapproximationbound}, an upper bound on the $n^{th}$ approximation number \n$s_n(\\mathcal L_{A,s})$. 
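In practice these quantities are evaluated numerically, as the remainder of this section explains in detail. As a concrete illustration, the following minimal sketch (in Python with the mpmath library for variable-precision quadrature; the paper's own computations were performed in Mathematica, and the function names, precision level and truncated parameter values below are choices of this sketch, not of the paper) evaluates the norms $\\|\\mathcal L_{A,s}(m_k)\\|$ via the boundary integral (\\ref{hardynormexplicit}) below, and assembles the lower and upper computed approximation bounds of (\\ref{alphanN}) and (\\ref{alphanNplus}).

\\begin{verbatim}
# Illustrative sketch (not the program used for the paper's results):
# evaluate the Hardy norms ||L_{A,s}(m_k)|| by quadrature over the
# boundary circle, then form the computed approximation bounds.
from mpmath import mp, quad, sqrt, pi, exp

mp.dps = 50                  # working precision (150 digits in the paper)
A = (1, 2)                   # alphabet defining E_2
s = mp.mpf("0.5312805063")   # illustrative value of the parameter s
c = mp.mpf("0.758687144")    # centre and radius of the disc D
rho = mp.mpf("0.957589819")  # (decimal truncations, cf. Section 6)

def L_mk(z, k):
    # (L_{A,s} m_k)(z) = sum_i (T_i(z) - c)^k / (rho^k (z + i)^(2s))
    return sum((1/(z + i) - c)**k / (rho**k * (z + i)**(2*s)) for i in A)

def norm_sq(k):
    # ||L_{A,s}(m_k)||^2 as an integral over the bounding circle of D
    return quad(lambda t: abs(L_mk(c + rho*exp(2j*pi*t), k))**2, [0, 1])

def alpha_bounds(n, N, h, w_inf_sum):
    # lower/upper computed approximation bounds alpha_{n,N,-}, alpha_{n,N,+};
    # w_inf_sum is the sum of the sup norms of the weights w_{i,s}
    # (a real computation would cache the norms rather than recompute them)
    partial = sum(norm_sq(k) for k in range(n - 1, N + 1))
    tail = w_inf_sum**2 * h**(2*(N + 1)) / (1 - h**2)
    return sqrt(partial), sqrt(partial + tail)
\\end{verbatim}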
\n\nEach $m_k$ is just a normalised monomial (\\ref{monomial}), and the operator $\\mathcal L_{A,s}$ is available in closed form,\nso that \n$$\\mathcal L_{A,s}(m_k)(z) = \\sum_{i\\in A} \\frac{(T_i(z)-c)^k}{\\varrho^{k} (z+i)^{2s}}\\,,$$\nand we may use numerical integration \nto compute\\footnote{Numerical integration capability\nis available in computer packages such as Mathematica,\nand these norms can be computed to arbitrary precision; although higher precision requires greater computing time,\nthese computations are relatively quick \n(e.g.~for the computations in \\S \\ref{E2section} these integrals were computed with 150 digit accuracy).}\n each Hardy norm $\\|\\mathcal L_{A,s} (m_k)\\|$ as\n\\begin{equation}\\label{hardynormexplicit}\n\\|\\mathcal L_{A,s} (m_k)\\|^2 = \\int_0^1 \\left| \\sum_{i\\in A} \\frac{(T_i(\\gamma(t))-c)^k}{\\varrho^{k} (\\gamma(t)+i)^{2s}}\\right|^2 \\, dt\\,,\n\\end{equation}\nwhere $\\gamma(t)=c+\\varrho e^{2\\pi it}$.\n\n\nEvaluation of $\\alpha_n(s)$ involves the tail sum\n$\\sum_{k=n-1}^\\infty \\|\\mathcal L_{A,s} (m_k)\\|^2$,\nand in practice we can bound this by the sum of an exactly computed long finite sum\n$\\sum_{k=n-1}^N \\|\\mathcal L_{A,s} (m_k)\\|^2$, for some $N \\gg n$,\nand a rigorous upper bound on \n$\\sum_{k=N+1}^\\infty \\|\\mathcal L_{A,s} (m_k)\\|^2$\nusing (\\ref{normdecay}).\nMore precisely, we have the following definition: \n\n\\begin{defn}\nGiven $n,N\\in\\mathbb N$, with $n\\le N$, define the\n\\emph{lower and upper computed approximation bounds}, $\\alpha_{n,N,-}(s)$ and $\\alpha_{n,N,+}(s)$, respectively, by\n\\begin{equation}\\label{alphanN}\n\\alpha_{n,N,-}(s) = \\left( \\sum_{k=n-1}^N \\| \\mathcal L_{A,s}(m_k)\\|^2 \\right)^{1\/2}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{alphanNplus}\n\\alpha_{n,N,+}(s) \n=\n\\left( \\alpha_{n,N,-}(s)^2\n+ \\left(\\sum_{i\\in A} \\|w_{i,s}\\|_\\infty\\right)^2 \\frac{h^{2(N+1)}}{1-h^2}\\right)^{1\/2}\n\\,.\n\\end{equation}\n\\end{defn}\n\nEvidently the lower computed approximation bound $\\alpha_{n,N,-}(s)$ is a lower bound for $\\alpha_n(s)$, \nin view of the positivity of the summands in (\\ref{alphaexpression}) and (\\ref{alphanN}), while\nLemma \\ref{upperlowerbounds} below establishes that the upper computed approximation bound\n$\\alpha_{n,N,+}(s)$ is an upper bound for $\\alpha_n(s)$.\nMoreover, both $\\alpha_{n,N,+}(s)$ and $\\alpha_{n,N,-}(s)$ are readily computable: they are given by finite sums and,\nas already noted, the summands $\\| \\mathcal L_{A,s}(m_k)\\|^2$ are computable to arbitrary precision.\n\n\\begin{lemma}\\label{upperlowerbounds}\nLet $s\\in\\mathbb R$. 
For all $n,N\\in \\mathbb N$, with $n\\le N$,\n\\begin{equation}\\label{alphaupperlower}\n\\alpha_{n,N,-}(s) \\le \\alpha_n(s) \\le \\alpha_{n,N,+}(s)\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe inequality $\\alpha_{n,N,-}(s) \\le \\alpha_n(s)$ is \nimmediate from the definitions.\nTo prove that $\\alpha_n(s) \\le \\alpha_{n,N,+}(s)$ note that\n$$\\alpha_n(s)^2 = \\sum_{k=n-1}^N \\|\\mathcal L_{A,s}(m_k)\\|^2 + \\sum_{k=N+1}^\\infty \\|\\mathcal L_{A,s}(m_k)\\|^2\\,,$$\nwhich together with (\\ref{normdecay}) gives\n$$\\alpha_n(s)^2 \\le \\sum_{k=n-1}^N \\|\\mathcal L_{A,s}(m_k)\\|^2 + \\left( \\sum_{i\\in A} \\| w_{i,s}\\|_\\infty \\right)^2 \\frac{h^{2(N+1)}}{1-h^2}\\,,\n$$\nand the result follows.\n\\end{proof}\n\n\\begin{remark}\nThe upper bound $\\alpha_{n,N,+}(s)$ will be used in the sequel, as a tool\nin providing rigorous estimates on Hausdorff dimension.\nIn practice $N$ will be chosen so that the values\n$\\alpha_{n,N,-}(s)$ and $\\alpha_{n,N,+}(s)$ are close enough together that \nthe inequality \n(\\ref{alphaupperlower})\ndetermines $\\alpha_n(s)$\nwith precision far higher than that of the desired Hausdorff dimension estimate;\nin particular, $N$ will be such that the difference \n$\\alpha_{n,N,+}(s)-\\alpha_{n,N,-}(s) = O(h^N)$ is\nextremely small relative to the size of $\\alpha_{n}(s)$.\n\\end{remark}\n\nCombining (\\ref{salphabound}) with (\\ref{alphaupperlower}) immediately gives the exponential bound\n\\begin{equation}\n\\label{salphaboundN}\n\\alpha_{n,N,-}(s) \\le K_s h^n\\quad\\text{for all }n \\le N\\,,\n\\end{equation}\nthough the analogous bound for $\\alpha_{n,N,+}(s)$ (which will be more useful to us in the sequel)\nrequires some extra care:\n\n\\begin{lemma}\\label{nNboundlemma}\nLet $s\\in\\mathbb R$. For all $n,N\\in \\mathbb N$, with $n\\le N$,\n\\begin{equation}\\label{nNbound}\n\\alpha_{n,N,+}(s) \\le K_s(1 + h^{2(N+2-n)})^{1\/2} h^n\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nCombining (\\ref{salphaboundN}) with (\\ref{alphanNplus})\ngives \n$$\n\\alpha_{n,N,+}(s) \n\\le\n\\left( (K_sh^n)^2\n+ \\left(\\sum_{i\\in A} \\|w_{i,s}\\|_\\infty\\right)^2 \\frac{h^{2(N+1)}}{1-h^2}\\right)^{1\/2}\\,,\n$$\nbut (\\ref{ksdefn}) gives\n$$ \\frac{ \\left(\\sum_{i\\in A} \\|w_{i,s}\\|_\\infty\\right)^2}{1-h^2} = K_s^2h^2\\,,$$\nso\n$$\n\\alpha_{n,N,+}(s) \n\\le\n\\left( (K_sh^n)^2\n+ K_s^2 h^{2(N+2)} \\right)^{1\/2} \\,,\n$$\nand the result follows.\n\\end{proof}\n\nThe utility of (\\ref{nNbound}) stems from the fact that in practice $N-n$ will be large, and that \n for sufficiently small values of $n$ the following\nmore direct analogue of (\\ref{salphaboundN}) can be used:\n\n\\begin{cor}\\label{jqnscor}\nLet $s\\in\\mathbb R$. 
Suppose $N,Q\\in\\mathbb N$, with $Q \\le N$.
If
\\begin{equation}\\label{jqns}
J = J_{Q,N,s} := K_s\\left( 1+h^{2(N+2-Q)}\\right)^{1\/2}
\\end{equation}
then
\\begin{equation}\\label{jqnshbound}
\\alpha_{n,N,+}(s) \\le J h^n\\quad\\text{for all }1\\le n\\le Q\\,.
\\end{equation}
\\end{cor}
\\begin{proof}
Immediate from Lemma \\ref{nNboundlemma}.
\\end{proof}

\\begin{remark}
In practice $Q$ will be of some modest size, dictated by the computational resources at our disposal;
specifically, it will be chosen slightly larger than the largest $P\\in \\mathbb N$ for which it is feasible to compute all periodic points of period
$\\le P$ (e.g.~in \\S \\ref{E2section}, when estimating the dimension of the set $E_2=E_{\\{1,2\\}}$,
we explicitly compute all periodic points up to period $P=25$, and in the proof of Theorem \\ref{e2theorem} we choose $Q=28$).
The value $N$ will be chosen to be significantly larger than $Q$ (e.g.~in the proof of Theorem \\ref{e2theorem} we choose $N=600$).
Since $N+2-Q$ is large, $h^{N+2-Q}$ will be extremely small, and
$J=J_{Q,N,s}$ will be extremely close to $K_s$; ideally this closeness ensures that
the two constants $J_{Q,N,s}$ and $K_s$ are indistinguishable to the chosen level of working precision
(e.g.~in the proof of Theorem \\ref{e2theorem}, $N+2-Q=574$ and $h\\approx 0.511284$,
so $h^{N+2-Q}\\approx 5.9 \\times 10^{-168}$, whereas computations are performed to 150 decimal digit precision).
\\end{remark}


\\section{Computed Taylor bounds}\\label{taylorsection}

In order to use the computed approximation bounds to provide a rigorous upper bound on the
Taylor coefficients of the determinant $\\det(I-z\\mathcal L_{A,s})$,
we now fix a further natural number $M$, satisfying $M\\le N$.
For any such $M$, it is convenient to define
the sequence $(\\alpha_{n,N,+}^{M}(s) )_{n=1}^\\infty$
to be the one whose $n^{th}$ term equals $\\alpha_{n,N,+}(s)$ until $n=M$,
and whose subsequent terms are given
by the exponential upper bound on $s_n(\\mathcal L_{A,s})$ and $\\alpha_n(s)$ (cf.~(\\ref{salphabound})):
\\begin{equation}\\label{alph}
\\alpha_{n,N,+}^{M}(s) :=
\\begin{cases}
\\alpha_{n,N,+}(s) & \\text{for }1\\le n\\le M\\,, \\cr
K_s h^n & \\text{for }n>M\\,.
\\end{cases}
\\end{equation}

This allows us to make the following definition:

\\begin{defn}
Let $s\\in\\mathbb R$.
For $n,M,N\\in\\mathbb N$ with $n\\le M\\le N$,\nthe \\emph{Taylor bound} $\\beta_{n,N,+}^{M}(s)$ is defined by\n\\begin{equation}\\label{firstbeta}\n\\beta_{n,N,+}^{M}(s) := \\sum_{i_1<\\ldots< i_n} \\prod_{j=1}^n \\alpha_{i_j,N,+}^{M}(s) \\,,\n\\end{equation}\nwhere the sum is over those $\\underline i=(i_1,\\ldots,i_n)\\in\\mathbb N^n$\nwhich satisfy $i_1< i_2 < \\ldots< i_n$.\n\\end{defn}\n\nAs the name suggests, the Taylor bound $\\beta_{n,N,+}^{M}(s)$ bounds the $n^{th}$\nTaylor coefficient of the determinant $\\det(I-z\\mathcal L_{A,s})=1+\\sum_{n=1}^\\infty \\delta_n(s)z^n$:\n\n\\begin{lemma}\\label{betacbound}\nLet $s\\in\\mathbb R$.\nFor $n, M, N\\in \\mathbb N$ with $n\\le M\\le N$,\n\\begin{equation}\\label{cnalphatilde}\n|\\delta_n(s)|\n\\leq\n \\beta_{n,N,+}^{M}(s)\n \\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nCombining\n(\\ref{salphabound}), (\\ref{alphaupperlower}) and (\\ref{alph})\ngives\n\\begin{equation*}\\label{snalphanNplus}\ns_n(\\mathcal L_{A,s}) \\le \\alpha_{n,N,+}^{M}(s)\\quad\\text{for all }1\\le n\\le M \\le N\\,,\n\\end{equation*}\nand combining this with Lemma \\ref{gohberglemma} gives (\\ref{cnalphatilde}).\n\\end{proof}\n\n\nNote that $ \\beta_{n,N,+}^{M}(s)$ is precisely the $n^{th}$ power series coefficient\nfor the infinite product $\\prod_{i=1}^\\infty (1+ \\alpha_{i,N,+}^{M}(s) z)$, and\nthat the sum in (\\ref{firstbeta}) is an infinite one; thus we will seek a computationally accessible approximation to \n$ \\beta_{n,N,+}^{M}(s)$.\nWe expect that $ \\beta_{n,N,+}^{M}(s)$ is well approximated by the $n^{th}$ power series coefficient\nfor the \\emph{finite} product $\\prod_{i=1}^M (1+ \\alpha_{i,N,+}^{M}(s) z) = \\prod_{i=1}^M (1+ \\alpha_{i,N,+}(s) z)$,\nnamely the value $\\beta_{n,N,+}^{M,-}(s)$ \ndefined as follows:\n\n\\begin{defn}\nLet $s\\in\\mathbb R$. 
For $n,M,N\\in\\mathbb N$ with $n\\le M\\le N$,\nthe \\emph{lower computed Taylor bound} $\\beta_{n,N,+}^{M,-}(s)$ is defined as\n\\begin{equation}\\label{betanNplusMminusalternative}\n\\beta_{n,N,+}^{M,-}(s)\n:=\n \\sum_{i_1<\\ldots< i_n\\le M} \\prod_{j=1}^n \\alpha_{i_j,N,+}(s)\\,.\n \\end{equation}\n\\end{defn}\n\n\\begin{remark}\n\\item[\\, (i)]\nThe fact that $\\beta_{n,N,+}^{M,-}(s)$ is defined in terms of upper computed approximation bounds \n$\\alpha_{i_j,N,+}(s)$,\ntogether with the finiteness of the sum (and product) in (\\ref{betanNplusMminusalternative}), \nensures that $\\beta_{n,N,+}^{M,-}(s)$ can be computed (to arbitrary precision).\n\\item[\\, (ii)]\nClearly, an equivalent definition of \n$\\beta_{n,N,+}^{M,-}(s)$ is\n\\begin{equation}\\label{betanNplusMminus}\n\\beta_{n,N,+}^{M,-}(s)\n=\n \\sum_{i_1<\\ldots< i_n \\le M} \\prod_{j=1}^n \\alpha_{i_j,N,+}^{M}(s)\\,.\n\\end{equation}\n\\end{remark}\n\n\n\n\n\n\nThe lower computed Taylor bound\n$\\beta_{n,N,+}^{M,-}(s)$ is obviously smaller than\nthe Taylor bound\n $\\beta_{n,N,+}^{M}(s)$, though in view of (\\ref{cnalphatilde})\n we require an \\emph{upper computed Taylor bound} (introduced in Definition \\ref{uppercomputedtaylorbound} below)\n that is larger than $\\beta_{n,N,+}^{M}(s)$.\nThe following result estimates\nthe difference \n$\\beta_{n,N,+}^{M}(s) - \\beta_{n,N,+}^{M,-}(s)$, and subsequently\n(see Definition \\ref{uppercomputedtaylorbound})\n provides the inspiration for the definition of the\nupper computed Taylor bound:\n\n\\begin{lemma}\\label{betadifference}\nLet $s\\in\\mathbb R$.\nGiven $Q,M,N\\in\\mathbb N$ with $Q\\le M\\le N$,\n and $J=J_{Q,N,s}$ defined by (\\ref{jqns}),\n\\begin{equation}\\label{betadifferencerequiredbound}\n\\beta_{n,N,+}^{M}(s) - \\beta_{n,N,+}^{M,-}(s)\n\\le \\sum_{l=0}^{n-1} J^{n-l} \\beta_{l,N,+}^{M,-}(s)\\, h^{M(n-l)} E_{n-l}(h)\\quad\\text{for all }1\\le n\\le Q \\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $n$ be such that $1\\le n\\le Q$.\nThe set\n$\n\\mathcal I_n := \\{ \\underline i = (i_1,\\ldots,i_n) \\in \\mathbb N^n: i_1 <\\ldots < i_n\\} \n$\ncan be partitioned as \n$\\mathcal I_n = \\bigcup_{l=0}^n \\mathcal I_n^{(l)}$,\nwhere the $\\mathcal I_n^{(l)}$ are defined by\n\\begin{equation*}\n\\mathcal I_n^{(l)} =\n\\begin{cases}\n\\{ \\underline i = (i_1,\\ldots,i_n)\\in\\mathcal I_n: M < i_{1}\\} & \\text{if }l=0 \\,, \\cr\n\\{ \\underline i = (i_1,\\ldots,i_n)\\in\\mathcal I_n: i_l \\le M < i_{l+1}\\} & \\text{if } 1\\le l \\le n-1\\,, \\cr\n\\{ \\underline i = (i_1,\\ldots,i_n)\\in\\mathcal I_n: i_n \\le M \\} & \\text{if }l=n\\,.\n\\end{cases}\n\\end{equation*}\nDefine\n$$\n\\beta_{n,N,+}^{M, (l)}(s) \n:= \\sum_{\\underline i \\in \\mathcal I_n^{(l)}} \n\\prod_{j=1}^n \\alpha_{i_j,N,+}^{M}(s) \n\\quad\\text{for each }0\\le l\\le n\\,,\n$$\nso that in particular\n\\begin{equation}\\label{inparticularnminus}\n\\beta_{n,N,+}^{M, (n)}(s) = \\beta_{n,N,+}^{M, -}(s) \\,.\n\\end{equation}\n\nWith this notation, and since $\\mathcal I_n = \\bigcup_{l=0}^n \\mathcal I_n^{(l)}$,\n we can express $\\beta_{n,N,+}^{M}(s)$ as\n\\begin{equation}\\label{canexpress}\n\\beta_{n,N,+}^{M}(s) \n= \\sum_{\\underline i \\in \\mathcal I_n} \\prod_{j=1}^n \\alpha_{i_j,N,+}^{M}(s) \n= \\sum_{l=0}^{n} \\beta_{n,N,+}^{M, (l)}(s) \\,.\n\\end{equation}\nCombining (\\ref{inparticularnminus}) and (\\ref{canexpress}) gives\n\\begin{equation}\\label{epsilonsum}\n\\beta_{n,N,+}^{M}(s) - \\beta_{n,N,+}^{M,-}(s)\n= \\sum_{l=0}^{n-1} \\beta_{n,N,+}^{M, (l)}(s) \\,.\n\\end{equation}\n\nIn order 
to bound each $\\beta_{n,N,+}^{M, (l)}(s)$ in (\\ref{epsilonsum}) we\nuse the fact that $\\alpha_{i,N,+}^{M}(s) \\le J h^i$ for all $1\\le i\\le Q$ (see Corollary \\ref{jqnscor}) to obtain\n\\begin{equation}\\label{manipulation}\n\\beta_{n,N,+}^{M, (l)}(s) \n= \\sum_{\\underline i \\in \\mathcal I_n^{(l)}} \\prod_{j=1}^n \\alpha_{i_j,N,+}^{M}(s)\n\\le J^{n-l} \\sum_{\\underline i \\in \\mathcal I_n^{(l)}} h^{i_{l+1} +\\ldots + i_n} \\prod_{j=1}^l \\alpha_{i_j,N,+}^{M}(s) \\,,\n\\end{equation}\nand introducing $\\underline \\iota = (\\iota_1,\\ldots,\\iota_{n-l})\\in \\mathcal I_{n-l}$ with $i_{l+k} = \\iota_k +M$ for $1\\le k\\le n-l$, we can re-express the righthand side of (\\ref{manipulation}) to obtain\n\\begin{equation*}\n\\beta_{n,N,+}^{M, (l)}(s)\n\\le J^{n-l} \\left( \\sum_{\\underline i \\in \\mathcal I_l^{(l)}} \\prod_{j=1}^l \\alpha_{i_j,N,+}^{M}(s) \\right)\n\\left( \\sum_{\\underline \\iota\\in \\mathcal I_{n-l}} h^{(n-l)M} h^{\\iota_1+\\ldots+\\iota_{n-l} } \\right) \\,,\n\\end{equation*}\nand therefore\n\\begin{equation}\\label{betanlM}\n\\beta_{n,N,+}^{M, (l)}(s) \n\\le J^{n-l} \\beta_{l,N,+}^{M,-}(s)\\, h^{M(n-l)} \\ E_{n-l}(h)\\,.\n\\end{equation}\n\nNow combining (\\ref{epsilonsum}) and (\\ref{betanlM}) gives the required bound (\\ref{betadifferencerequiredbound}).\n\\end{proof}\n\n\\begin{remark}\nIn practice the $l=n-1$ term on the righthand side of\n(\\ref{betadifferencerequiredbound}) tends to be the dominant one, as $M$ is chosen large enough so that\n$h^M$ is extremely small.\n\\end{remark}\n \n\\begin{defn}\\label{uppercomputedtaylorbound} \nLet $s\\in\\mathbb R$. For $n, Q,M,N\\in\\mathbb N$ with $n\\le Q\\le M\\le N$,\ndefine the \\emph{upper computed Taylor bound} $ \\beta_{n,N,+}^{M,+}(s)$ by\n$$\n \\beta_{n,N,+}^{M,+}(s)\n :=\n \\beta_{n,N,+}^{M,-}(s) + \\sum_{l=0}^{n-1} J_{Q,N,s}^{n-l}\\, \\beta_{l,N,+}^{M,-}(s)\\, h^{M(n-l)} E_{n-l}(h) \\,.\n $$\n \\end{defn}\n \n From\n Lemma \\ref{betadifference} it then follows that the upper computed Taylor bound\n $\\beta_{n,N,+}^{M,+}(s)$\n is indeed larger than the Taylor bound\n $\\beta_{n,N,+}^{M}(s)$:\n \n \n \\begin{cor}\\label{betadifferencecor}\nLet $s\\in\\mathbb R$.\nIf $Q,M,N\\in\\mathbb N$ with $Q\\le M\\le N$,\nthen \n$$\n\\beta_{n,N,+}^{M}(s) \\le \\beta_{n,N,+}^{M,+}(s)\\quad\n\\text{for all }1\\le n\\le Q \\,.\n$$\n\\end{cor}\n\\begin{proof}\nImmediate from Lemma \\ref{betadifference} and Definition \\ref{uppercomputedtaylorbound}.\n\\end{proof}\n\n Finally, we deduce that the $n^{th}$ Taylor coefficient $\\delta_n(s)$ of the determinant $\\det(I-z\\mathcal L_{A,s})$\n can be bounded in modulus by the upper computed Taylor bound\n $\\beta_{n,N,+}^{M,+}(s)$ (a quantity we can compute to arbitrary precision):\n\n\\begin{prop}\\label{taylorboundprop}\nLet $s\\in\\mathbb R$.\nIf $Q,M,N\\in\\mathbb N$ with $Q\\le M\\le N$,\nthen \n$$\n|\\delta_n(s)| \\le \\beta_{n,N,+}^{M,+}(s)\\quad\\text{for all } 1\\le n\\le Q\\,.\n$$\n\\end{prop}\n\\begin{proof}\n\nLemma \\ref{betacbound} gives\n$ |\\delta_n(s)| \\le \\beta_{n,N,+}^{M}(s)$, and\nCorollary \\ref{betadifferencecor} gives\n$\\beta_{n,N,+}^{M}(s) \\le \\beta_{n,N,+}^{M,+}(s)$,\nso the result follows.\n\\end{proof}\n\n\\begin{remark}\nIn \\S \\ref{E2section}, \nfor the computations in the proof of Theorem \\ref{e2theorem},\nwe choose $N=600$, $M=400$, and $Q=28$, using\nProposition \\ref{taylorboundprop} to obtain the upper bound on $|\\delta_n(s)|$ for $P+1=26\\le n\\le 28$,\nhaving explicitly evaluated $\\delta_n(s)$ for $1\\le n\\le 25$ using periodic 
points of period up to $P= 25$.\n\\end{remark}\n\n\n\n\n\n\n\n\\section{The Hausdorff dimension of $E_2$}\\label{E2section}\n\nHere we consider the set $E_2$, corresponding to the choice $A=\\{1,2\\}$.\nWe shall suppress the set $A$ from our notation, writing $\\mathcal L_s$ instead of $\\mathcal L_{A,s}$.\n\nThe approximation $s_N$ to $\\text{dim}(E_2)$, based on periodic points of period up to $N$,\nis the zero (in the interval $(0,1)$) of the function $\\frak D_N$ defined by (\\ref{DNdefn});\nthese approximations are tabulated in Table 1 for $18\\le n\\le 25$. \nWe note that the 24th and 25th approximations to $\\text{dim}(E_2)$ share\nthe first 129 decimal digits\n\\begin{align*}\n0.&5312805062772051416244686473684717854930591090183987798883978039 \\\\\n& 27529535643831345918109570181185239880428057243075187633422389339 \\,\n\\end{align*}\nthough the rate of convergence gives confidence that the first 139 digits\n\\begin{align*}\n0.&531280506277205141624468647368471785493059109018398779888397803927529 \\\\\n& 5356438313459181095701811852398804280572430751876334223893394808223090 \\,\n\\end{align*}\nof $s_{25}$ are in fact correct digits of $\\text{dim}(E_2)$.\n\nIt turns out that we can \\emph{rigorously} \njustify around three quarters of these decimal digits,\nproving that the first 100 digits are correct.\nIn fact we prove slightly more than that, \nby setting $s^-$ to be the value\n\\begin{align*}\ns^- =\n0.&531280506277205141624468647368471785493059109018398 \\\\\n& 77988839780392752953564383134591810957018118523987\\,,\n\\end{align*}\nand setting $s^+ = s^- + 2\/10^{101}$ to be the value\n\\begin{align*}\ns^+ =\n0.&531280506277205141624468647368471785493059109018398 \\\\\n& 77988839780392752953564383134591810957018118523989\\,.\n\\end{align*}\n\n\\begin{table}\n\\begin{equation*}\n\\begin{tabular}{|r|r|}\n\\hline\n$n$ & $s_n$ \\qquad\\qquad\\qquad\\qquad\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\\n\\hline \n$18$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535645 \\cr\n$$ & 596972005085668529391352118806494054592120629038239974478243258576620540205 \\cr\n$19$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345931151408384198942403518425963034455124305471103063941900681921725781 \\cr\n$20$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345918109570144457186603287266737112934351614056377793361034907544181115 \\cr\n$21$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345918109570181185239840988322512589524907498366765561230541095944497891 \\cr\n$22$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643\\cr\n$$ & 831345918109570181185239880428057259226147992212780800516214656456345194120 \\cr\n$23$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345918109570181185239880428057243075187635944921448427780108909724612227 \\cr\n$24$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345918109570181185239880428057243075187633422389339330546198723829886067 \\cr\n$25$ & 0.531280506277205141624468647368471785493059109018398779888397803927529535643 \\cr\n$$ & 831345918109570181185239880428057243075187633422389339480822309014454563836 \\cr\n\\hline\n\\end{tabular}\n\\end{equation*}\n\\caption{Approximations $s_n\\approx \\text{dim}(E_2)$; each $s_n$ is a zero of a truncation $\\frak D_n$ (formed using only 
periodic points of period $\\le n$) of the function $\\frak D$}\n\\end{table}\n\n\nWe then claim:\n\n\\begin{theorem}\\label{e2theorem}\nThe Hausdorff dimension of $E_2$ lies in the interval $(s^-,s^+)$.\n\\end{theorem}\n\\begin{proof}\nWe will show that $\\frak D(s^-)$ and $\\frak D(s^+)$ take opposite signs, \nand deduce that $\\dim(E_A)$, as the zero of $\\frak D$, lies between $s^-$ and $s^+$.\n\n\nLet $D\\subset\\mathbb{C}$ be the open disc centred at $c$, of radius $\\varrho$,\nwhere $c$ is the largest real root of the polynomial\n$$\n128c^7 + 768c^6 + 1296c^5 - 192c^4 - 1764c^3 - 108c^2 + 819c -216 \\,,\n$$\nso that\n$$c\\approx\n0.758687144013554292899790137015621955739402945444266741967051997691009 \\,,\n$$\nand\n\\begin{equation}\\label{crho}\n\\varrho=\n\\frac{-c+\\sqrt{-6c+5c^2+12c^3+4c^4}}{2c}\\,,\n\\end{equation}\nso that\n$$\n\\varrho\\approx\n0.957589818521375342814351002388265920293251603461349541441037951859499 \\,.\n$$\nThe relation (\\ref{crho}) ensures that $T_1(c-\\varrho)$ and $T_2(c+\\varrho)$ are equidistant from $c$, and this common distance is denoted by $\\varrho' = T_1(c-\\varrho) - c = c - T_2(c+\\varrho)$, so that\n\\begin{equation*}\\label{rhoprime}\n\\varrho' \\approx \n0.48960063348666271539624547964205669003751747416510762619582637319401 \\,.\n\\end{equation*}\nThe specific choice of $c$ is to ensure that the \ncontraction ratio $h = \\varrho'\/\\varrho$ is minimised, taking the value\n\\begin{equation*}\nh = \\frac{\\varrho'}{\\varrho} \\approx\n0.51128429314616176482942956363790038479511374855036304746799036536341 \\,.\n\\end{equation*}\n\n\n\\begin{figure}[!h]\n\\begin{center}\n \\includegraphics[]{e2_optimal_contraction_ratio_disc.pdf}\n\\caption{Inner disc $D'$ (dashed) contains images $T_1(D)$, $T_2(D)$ of the outer disc $D$, in the rigorous bound on the dimension of $E_2$}\n\\end{center}\n\\end{figure}\n\n\n\n\nHaving computed the points of period up to $P= 25$ we can \nform the functions\n$s\\mapsto \\delta_n(s)$ for $1\\le n\\le 25$,\nand evaluate these at $s=s^-$\n(cf.~Table 2) to give\n\\begin{equation}\\label{sminus22}\n\\frak D_{25}(s^-) = 1+\\sum_{n=1}^{25} \\delta_n(s^-) =\n(-1.584605810787991617286291643870\\ldots)\n\\times 10^{-101}\n< 0\\,,\n\\end{equation}\nand at $s=s^+$ to give\n\\begin{equation}\\label{splus22}\n\\frak D_{25}(s^+) =\n1+\\sum_{n=1}^{25} \\delta_n(s^+) =\n(1.454514082498475271478438451769\\ldots)\n \\times 10^{-101} >0\\,.\n\\end{equation}\n\nWe now aim to show that the approximation $\\frak D_{25}$ is close enough to \n$\\frak D$ for (\\ref{sminus22}) and (\\ref{splus22}) to imply, respectively, the negativity of $\\frak D(s^-)$\nand the positivity of $\\frak D(s^+)$.\nIn other words, we seek to bound the tail $\\sum_{n=26}^\\infty \\delta_n(s)$, and this will\nbe achieved by bounding the individual\nTaylor coefficients $\\delta_n(s)$, for $n\\ge 26=P+1$.\nIt will turn out that for $n\\ge29$ the cruder Euler bound on $\\delta_n(s)$ is sufficient,\nwhile for $26\\le n\\le 28$ we will use the Taylor bounds described in \\S \\ref{taylorsection}.\nMore precisely, for $P+1=26\\le n\\le 28=Q$ we will use the upper computed Taylor bound\\footnote{As will be noted shortly,\nthe upper computed Taylor bound we use agrees with the corresponding Taylor bound to over 200 decimal digits,\nso in particular the two quantities are indistinguishable at the 150 digit precision level of these computations.}\n$\\beta_{n,N,+}^{M,+}(s)$ for suitable $M,N\\in\\mathbb N$.\n\nHenceforth let $Q=28$, $M=400$, $N=600$ (so that in particular 
$Q\\le M\\le N$, as was assumed throughout \\S \\ref{taylorsection})\nand consider the case $s=s^-$.\n\nWe first evaluate\\footnote{As described in \\S \\ref{cabsection}, \n(\\ref{hardynormexplicit}) can be readily evaluated to\narbitrary precision using numerical integration; for this particular computation the precision level used was 150 decimal places.} \nthe $H^2(D)$ norms of the monomial images $\\mathcal L_{s} (m_k)$ for $0\\le k\\le N=600$.\nThese norms are decreasing in $k$;\nTable 3 contains the first few evaluations, for $0\\le k\\le 10$,\nwhile for $k= 600$ we have\n$$\n\\| \\mathcal L_{s}(m_{600}) \\|= \n(2.297607298251023508986187604945746\\ldots) \\times 10^{-176} \\,.\n$$\n\nUsing these norms $\\|\\mathcal L_s(m_k)\\|$ we then evaluate, for $1\\le n\\le M=400$, the\nupper computed approximation bounds\n$\\alpha_{n,N,+}(s) = \\alpha_{n,600,+}(s)$ defined (cf.~(\\ref{alphanNplus})) by\\footnote{Note that \n$h\\approx 0.511284$ and $N=600$,\nso $\\frac{h^{2(N+1)}}{1-h^2} \\le 8.8 \\times 10^{-351}$.\nMoreover (\\ref{weightboundsrange3}) gives\n$\\sum_{i=1}^2 \\|w_{i,s}\\|_\\infty \\le 1.81$, thus\n$ (\\sum_{i=1}^2 \\|w_{i,s}\\|_\\infty)^2 \\frac{h^{2(N+1)}}{1-h^2} \\le 2.9 \\times 10^{-350}$.\nCombining these bounds with the values taken by $\\alpha_{n,N,+}(s) $, \nit follows that for $1\\le n\\le 400$, \nthe approximation bound $\\alpha_n(s) = ( \\sum_{k=n-1}^\\infty \\|\\mathcal L_{s} (m_k)\\|^2 )^{1\/2}$ agrees with both computed approximation bounds\n$\\alpha_{n,N,-}(s)$ and $\\alpha_{n,N,+}(s)$ to at least 200 decimal places, a level well beyond the\ndesired precision used in these calculations.} \n$$\n\\alpha_{n,N,+}(s) \n= \\left( \\sum_{k=n-1}^N \\| \\mathcal L_{s}(m_k)\\|^2 \n+ \\left(\\sum_{i=1}^2 \\|w_{i,s}\\|_\\infty\\right)^2 \\frac{h^{2(N+1)}}{1-h^2}\\right)^{1\/2} \\,.\n$$\nThese bounds are decreasing in $n$; Table 4 contains the first few evaluations, for $1\\le n\\le 10$,\nwhile for $n=400$ we have\n$$\n\\alpha_{400,600,+}(s) = \n(3.806826780744825698066314723072781\\ldots) \\times 10^{-147} \\,.\n$$\n\nThe upper computed approximation bounds $\\alpha_{n,600,+}(s)$ are then used to form the \nupper computed Taylor bounds\\footnote{The difference\n$ \\beta_{n,N,+}^{M,+}(s) - \\beta_{n,N,+}^{M,-}(s)\n=\n\\sum_{l=0}^{n-1} J_{Q,N,s}^{n-l}\\, \\beta_{l,N,+}^{M,-}(s)\\, h^{M(n-l)} E_{n-l}(h)$ is smaller than\n$1.86 \\times 10^{-210}$ for $26\\le n\\le 28=Q$, so in fact the upper and lower computed Taylor bounds,\nand the Taylor bound $\\beta_{n,N,+}^M(s)$, agree to well beyond the 150 decimal place precision used in these computations.}\n $\\beta_{n,N,+}^{M,+}(s)\n= \\beta_{n,N,+}^{M,-}(s) + \\sum_{l=0}^{n-1} J_{Q,N,s}^{n-l}\\, \\beta_{l,N,+}^{M,-}(s)\\, h^{M(n-l)} E_{n-l}(h)$,\nwhere\n$$\n\\beta_{n,N,+}^{M,-}(s) = \\beta_{n,600,+}^{400,-}(s) = \\sum_{i_1<\\ldots< i_n\\le 400} \\prod_{j=1}^n \\alpha_{i_j,600,+}(s)\\,,\n$$\nwhich for $26\\le n\\le 28=Q$ are\\footnote{See also Table 6\nfor computations of $\\beta_{n,N,+}^{M,+}(s)$ for $1\\le n\\le 28=Q$.}\n$$\n\\beta_{26,N,+}^{M,+}(s)\n=\n(7.0935010683530957339350457686786431427508\\ldots) \\times 10^{-103},\n$$\n$$\n\\beta_{27,N,+}^{M,+}(s)\n=\n(7.0379118021870691622913562125699156503586\\ldots) \\times 10^{-111},\n$$\n$$\n\\beta_{28,N,+}^{M,+}(s)\n= \n(3.5360715444914082167026977943200738452867\\ldots) \\times 10^{-119},\n$$\nso in particular Proposition \\ref{taylorboundprop} gives\n\\begin{equation}\\label{intermediateE2}\n\\sum_{n=26}^{28} |\\delta_n(s)|\n\\le\n\\sum_{n=26}^{28} \\beta_{n,N,+}^{M,+}(s)\n< 
7.1\n\\times 10^{-103} .\n\\end{equation}\n\nIt remains to derive the Euler bounds on the Taylor coefficients $\\delta_n(s)$ for $n\\ge 29$.\nFor $s>0$, the functions $w_{1,s}(z) = 1\/(z+1)^{2s}$ and $w_{2,s}(z)=1\/(z+2)^{2s}$ have maximum modulus on $D$ when $z=c-\\varrho$, so \n\\begin{equation}\\label{weightbound}\n\\|w_{1,s}\\|_\\infty = 1\/(1+c-\\varrho)^{2s}\\quad \\text{and} \\quad \\|w_{2,s}\\|_\\infty = 1\/(2+c-\\varrho)^{2s}\\,.\n\\end{equation}\n\n\n\n\n\n\n\n\nA computation using (\\ref{weightbound}) gives\n\\begin{equation}\\label{weightboundsrange1}\n\\|w_{1,s}\\|_\\infty \n\\le \n1.2657276413750668025007241047661655434034644495987711959332997\n\\end{equation}\nand\n\\begin{equation}\\label{weightboundsrange2}\n\\|w_{2,s}\\|_\\infty \n\\le \n0.5351507690357290789991731014616306223833750046974228167583536\n\\,,\n\\end{equation}\nthus\n\\begin{equation}\\label{weightboundsrange3}\n\\|w_{1,s}\\|_\\infty + \\|w_{2,s}\\|_\\infty \n\\le \n1.8008784104107958814998972062277961657868394542961940127\n\\,,\n\\end{equation}\nand therefore $K_s= (\\|w_{1,s}\\|_\\infty + \\|w_{2,s}\\|_{\\infty})\/( h \\sqrt{1-h^2})$ \nis bounded by\n\\begin{equation}\nK_s \n\\le \n4.098460062897625162727128104751085223751087056801141844 \\,.\n\\end{equation}\n\nNow $|\\delta_n(s)| \\le K_s^n E_n(h)$, and we readily compute (see also Table 5) that\n$$\nK_s^{29} E_{29}(h) \n< 3.991837779947559\n\\times 10^{-109}\\,,\n$$\n$$\nK_s^{30} E_{30}(h) \n< 2.976234382308237\n\\times 10^{-117}\\,,\n$$\nand we easily bound\n\\begin{equation}\\label{easybound2}\n\\left| \\sum_{n=29}^\\infty \\delta_n(s) \\right| \\le\n\\sum_{n=29}^\\infty K_s^n E_n(h) < 4 \\times 10^{-109}\\,.\n\\end{equation}\n\nCombining (\\ref{easybound2})\nwith (\\ref{intermediateE2}) gives, for $s = s^-$,\n\\begin{equation}\\label{easyboundcombined}\n\\left| \\sum_{n=26}^\\infty \\delta_n(s) \\right| \n< 7.2 \\times 10^{-103}\\,.\n\\end{equation}\n\nCombining (\\ref{easyboundcombined}) with\n(\\ref{sminus22}) \nthen gives\n\\begin{equation}\n\\frak D(s^-) = 1+\\sum_{n=1}^\\infty \\delta_n(s^-) < 0 \\,.\n\\end{equation}\n\nIt remains to show that $\\frak D(s^+)$ is positive.\nIn view of (\\ref{splus22}), \nfor this it is sufficient to show that $\\left| \\sum_{n=26}^\\infty \\delta_n(s) \\right| < 10^{-101}$\nfor $s=s^+$.\nIn fact the stronger inequality (\\ref{easyboundcombined}) (which we have proved for $s=s^-$) can also be established\nfor $s=s^+$, using the same general method as for $s=s^-$,\nsince the intermediate computed values for the\nnorms $\\|\\mathcal L_s(m_k)\\|$, computed approximation bounds $\\alpha_{n,N,+}(s)$, computed Taylor bounds\n$\\beta_{n,N,+}^{M,+}(s)$,\nand Euler bounds $K_s^nE_n(h)$,\nare sufficiently close to those for $s=s^- = s^+ - 2\/10^{101}$. 
Combining (\\ref{splus22}) with inequality (\\ref{easyboundcombined}) for $s=s^+$ gives
the required positivity
\\begin{equation}
\\frak D(s^+) = 1+\\sum_{n=1}^\\infty \\delta_n(s^+) > 0\\,.
\\end{equation}

The map $s\\mapsto \\frak D(s)$ is continuous and increasing, so the fact that $\\frak D(s^-) < 0 < \\frak D(s^+)$
implies that its unique zero (which is equal to the dimension) is contained in $(s^-,s^+)$.
\\end{proof}


\\begin{remark}
If, as in Theorem \\ref{e2theorem}, our aim is to rigorously justify 100 decimal places of the computed approximation $s_P$
to the Hausdorff dimension, then roughly speaking $P$ should be chosen so that the modulus of the
tail $\\sum_{n=P+1}^\\infty \\delta_n(s)$ can be shown to be somewhat smaller than $10^{-100}$ for $s\\approx s_P$. Since $|\\delta_n(s)|$ is bounded above by the upper computed Taylor bound $\\beta_{n,N,+}^{M,+}(s)$, the fact that
$\\beta_{26,N,+}^{M,+}(s) < 7.1 \\times 10^{-103}$ (see Table 6) for suitably large $M, N$, together with the rapid decay
(as a function of $n$) of these bounds, suggests that we may choose $P=25$, i.e.~that it suffices to explicitly locate the periodic points of period $\\le 25$.

The choice of the value $Q$ is relatively unimportant, as the upper computed Taylor bounds are only slightly more time-consuming to compute than the (instantaneously computable) Euler bounds; in the proof of Theorem \\ref{e2theorem} we chose $Q$ such that the Euler bounds $K_s^nE_n(h)$ were substantially smaller than $10^{-100}$ for $n>Q$ (our choice $Q=28$ has this property, as does any larger $Q$, and indeed the choice $Q=27$ may also be feasible, cf.~Table 5).

The values $M$ and $N$
are chosen large enough to ensure that the bound (\\ref{Taylorapproximationbound}) on $|\\delta_n(s)|$ is rendered essentially as sharp as possible using our method (see Proposition \\ref{alphaapproximationbound}) of bounding approximation numbers by approximation bounds; equally, the values $M$ and $N$ are of course chosen small enough to allow the $\\beta_{n,N,+}^{M,+}(s)$ to be evaluated in reasonable time.
\\end{remark}


\\section{Introduction}

\\Acp{afm} are promising materials for spintronic devices. Among the advantages over \\acp{fm} are the lack of stray fields, the very low susceptibility to magnetic fields, the abundance of materials and much faster spin dynamics \\cite{jungwirth_antiferromagnetic_2016,zelezny_spin_2018,baltz_antiferromagnetic_2018}. However, the antiferromagnetic order parameter in \\acp{afm} is difficult to read and to control because of a lack of macroscopic magnetization, a fact which is strongly related to some of their advantages. A major step in the field of antiferromagnetic spintronics \\cite{jungwirth_antiferromagnetic_2016,zelezny_spin_2018,baltz_antiferromagnetic_2018}
was the discovery of electrically induced \\ac{nsot} \\cite{zelezny_relativistic_2014,wadley_electrical_2016,olejnik_antiferromagnetic_2017,bodnar_writing_2018,olejnik_terahertz_2018} in specific antiferromagnetic materials.
These torques result from the combination of a special magnetic structure, in which global inversion symmetry is broken for the magnetic state while one sublattice forms the inversion partner of the other, with the inverse spin-galvanic or (Rashba--)Edelstein effect \\cite{zelezny_relativistic_2014}, i.e., the generation of a nonequilibrium spin polarization by electrical currents.
Currently, \\ce{CuMnAs} and \\ce{Mn2Au} are the two known materials that provide antiferromagnetic order at room temperature and possess the specific crystal structure required for \\ac{nsot}. The latter is the more promising material as its critical temperature is extremely high---higher than the peritectic temperature of about \\SI{950}{\\kelvin}, where the material decomposes \\cite{barthem_revealing_2013}---and it is easier to handle due to the lack of toxic components.

Although several studies clearly demonstrate that it is possible to switch the order parameter of \\ce{Mn2Au} by $90^{\\circ}$ via the application of an electrical current \\cite{zelezny_relativistic_2014,roy_robust_2016,meinert_electrical_2018,bodnar_writing_2018,salemi_orbitally_2019}, the switching mechanism---whether deterministic or thermally activated, coherent or via domain wall motion---remains unresolved. The models and simulations employed so far rest on phenomenological descriptions \\cite{roy_robust_2016} and macrospin approximations \\cite{meinert_electrical_2018}. A microscopic and quantitative model of the switching process is missing.

Here, we combine \\emph{ab initio} calculations with atomistic spin dynamics simulations to develop and employ a multi-scale model of the current-induced switching in \\ce{Mn2Au}.
The three ingredients of this multi-scale model are \\emph{ab initio} calculations of the exchange interactions and anisotropies (section II), first-principles calculations of the current-induced magnetic moments (section III), and atomistic spin model simulations (section IV), which include the results from the first-principles calculations and investigate the switching mechanism and its dynamics. We show that the switching is fast, on a time scale of some tens of picoseconds, but not purely deterministic, requiring some degree of thermal activation to overcome the anisotropy energy barrier during the switching process.


\\section{Derivation of the spin model from ab initio calculations}
We employ the fully relativistic \\ac{skkr} method \\cite{zabloudil_electron_2005} to determine the electronic structure and magnetic interactions of \\ce{Mn2Au}. \\ce{Mn2Au} crystallizes in the \\ce{MoSi2} structure with the lattice constants $a_{\\mathrm{2d}}=\\SI{3.328}{\\angstrom}$ and $c=\\SI{8.539}{\\angstrom}$ \\cite{wells_structure_1970,shick_spin-orbit_2010,barthem_revealing_2013}.
The \\ce{MoSi2}-type lattice geometry is depicted in Fig.~\\ref{fig:exchange}. The potentials were treated within the \\ac{asa} with an angular momentum cutoff of $\\ell_\\text{max}=2$ to describe the electron scattering. For energy integrations we used 15 energy points on a semicircular contour in the upper complex semiplane, and up to 7260
$k$-points in the irreducible wedge of the Brillouin zone near the Fermi energy for the calculation of spin model parameters.
\n\n\nWe perform self-consistent calculations for the layered \\ac{afm} state shown in Fig.~\\ref{fig:exchange}, which has been identified as the magnetic ground state by neutron diffraction experiments \\cite{barthem_revealing_2013},\nbut also for the \\ac{fm} state.\nWe find the layered \\ac{afm} state lower in energy than the \\ac{fm} state by 25.8 mRy\/atom, which compares fairly well to the value reported in Ref.~\\cite{khmelevskyi_layered_2008} (21.5 mRy\/atom).\nAlso in agreement with Ref.\\ \\cite{khmelevskyi_layered_2008} we obtain a larger magnetic moment for the \\ce{Mn} atoms in the layered \\ac{afm} state ($\\SI{3.74}{\\mu_{\\mathrm{B}}}$) than in the \\ac{fm} state ($\\SI{3.70}{\\mu_{\\mathrm{B}}}$).\nFor the description of the switching process we consider the following spin model:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\mathcal{H} =&- \\frac{1}{2}\\sum_{i\\neq j}J_{ij}\\vec{S}_{i}\\cdot\\vec{S}_{j}\n\t\t-\\sum_{i} d_{z}S_{i,z}^2\\\\\n\t\t&-\\sum_{i} d_{zz}S_{i,z}^4\n\t\t-\\sum_{i} d_{xy}S_{i,x}^2S_{i,y}^2 \\ ,\n\t\\end{aligned}\n \\label{eq:spin_model}\n\\end{equation}\nwhere the isotropic exchange interactions $J_{ij}$ are obtained from \nthe \\ac{rtm} \\cite{udvardi_first-principles_2003},\nwhile the anisotropy parameters $d_{z}$, $d_{zz}$ and $d_{xy}$ are derived from band energy calculations in the spirit of the magnetic force theorem \\cite{weinberger_magnetic_2009}. \n\nThe isotropic exchange interactions calculated from the layered \\ac{afm} state as reference are plotted in Fig.~\\ref{fig:exchange} as a function of the interatomic distance. We can identify three dominant Heisenberg couplings: antiferromagnetic ones for the two nearest neighbors, $J_{1}=\\SI{-43.84}{\\milli\\electronvolt}$ and $J_{2}=\\SI{-81.79}{\\milli\\electronvolt}$, but a ferromagnetic one for the third nearest neighbor, $J_{3}=\\SI{39.28}{\\milli\\electronvolt}$. \nThese values show good qualitative agreement with those calculated in Ref.~\\cite{khmelevskyi_layered_2008} also in terms of the KKR-ASA method, but using a cutoff of $\\ell_{\\rm max}=3$ for the partial waves,\n$J_{1}=\\SI{-68.30}{\\milli\\electronvolt}$,\n$J_{2}=\\SI{-91.70}{\\milli\\electronvolt}$ and\n$J_{3}=\\SI{19.86}{\\milli\\electronvolt}$.\nSince the interactions $J_1$ and $J_2$ act between sublattices (layers), while $J_3$ is the leading interaction within a sublattice (cf.\\ Fig.~\\ref{fig:exchange}), these couplings clearly favor the layered \\ac{afm} state as the ground state of the system.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{{exchange_and_structure}.pdf}\n\t\\caption{Left: Isotropic exchange interactions as a function of distance calculated by using the \\acs*{rtm} and the \\acs*{sce} methods. Right: Crystal structure of \\ce{Mn2Au}. The two \\ce{Mn} sublattices are illustrated by red and blue spheres. The ground state orientations of the magnetic moments are indicated by arrows. The first three nearest neighbor magnetic exchange interactions $J_i$ are also visualized in the figure. \\label{fig:exchange}}\n\\end{figure}\n\nIt turns out that taking into account only the first three nearest neighbor interactions is not sufficient for a precise determination of the inter- and intra-sublattice interactions. 
In our simulations we therefore consider interactions up to a distance of $2.7\ a_{\mathrm{2d}}$, resulting in an inter-sublattice exchange interaction of $J_{\mathrm{inter}}=\SI{-371.13}{\milli\electronvolt}$ and an intra-sublattice exchange interaction of $J_{\mathrm{intra}}=\SI{182.36}{\milli\electronvolt}$.
Considering exchange interactions only in the first three shells yields $J_{\mathrm{inter}}=4J_1+J_2=\SI{-257.15}{\milli\electronvolt}$ and $J_{\mathrm{intra}}=4J_3=\SI{157.12}{\milli\electronvolt}$, which are thus 30\% and 14\% smaller in magnitude than the values calculated with a spatial cutoff of $2.7\ a_{\mathrm{2d}}$.

Experimental values for the effective inter-sublattice exchange coupling, $J_{\mathrm{eff}}=-J_{\mathrm{inter}}\/4$ \cite{sapozhnik_experimental_2018}, were previously provided based on susceptibility measurements for \ce{Mn2Au} powder \cite{barthem_revealing_2013} and thin films \cite{sapozhnik_experimental_2018}, $J_{\mathrm{eff}}=\SI{75}{\milli\electronvolt}$ and $J_{\mathrm{eff}}=\SI{22 \pm 5}{\milli\electronvolt}$, respectively. The corresponding value from our calculations, $J_{\mathrm{eff}}=\SI{92.8}{\milli\electronvolt}$, and the one derived from the exchange interactions in Ref.~\onlinecite{khmelevskyi_layered_2008}, $J_{\mathrm{eff}}=\SI{90}{\milli\electronvolt}$, compare remarkably well with each other and are also in good agreement with the experimental result for the powder sample \cite{barthem_revealing_2013}.

From our spin dynamics simulations we obtain a N\'{e}el temperature of $\SI{1680\pm3}{\kelvin}$, which is in good agreement with the value of \SI{1610\pm10}{\kelvin} calculated in Ref.~\cite{khmelevskyi_layered_2008} via Monte-Carlo simulations using nine nearest neighbor shells (the numerical values of which, however, were not provided beyond the first three shells).
Note that due to the peritectic temperature of \SI{950}{\kelvin}, the N\'eel temperature can only be extrapolated from experiments, yielding values in the range of \SIrange{1300}{1600}{\kelvin} \cite{barthem_revealing_2013}.

In order to support the validity of our spin model description, which relies on the assumption of rigid magnetic moments that are stable against magnetic disorder, we also perform calculations using the \ac{rdlm} theory \cite{gyorffy_dlm_1985,staunton_rdlm_2006}. This approach assumes a fully spin disordered reference state, and also enables the extraction of spin model parameters by means of the so-called \ac{sce} \cite{drautz_spin-cluster_2004,szunyogh_atomistic_2011}, which maps the adiabatic magnetic energy surface onto a spin model.

The resulting isotropic Heisenberg couplings are also displayed in Fig.~\ref{fig:exchange}. There is a remarkable similarity between the two spin model parameter sets, despite quantitative differences, especially for the first and third neighbor shells.
The interactions obtained from the \ac{sce}-\ac{rdlm} calculation are evidently also consistent with the layered \ac{afm} structure as the ground state, and we obtain a N\'{e}el temperature of $\SI{1786\pm3}{\kelvin}$, in good agreement with the \ac{rtm} result.

Conceptually, the \ac{rtm} gives a good approximation near the ground state, whereas the \ac{sce} corresponds to a high-temperature phase.
The fact that the two sets of parameters agree well despite this fundamental difference between the two methods can be explained by the rigidity of the \ce{Mn} local spin moments.
In order to support this point we compare the \ac{dos} for the two magnetic states in Fig.~\ref{fig:DOS}. As also noted in Ref.~\cite{khmelevskyi_layered_2008}, the narrow bandwidth of the \ce{Mn} $\mathrm{d}$-bands and the formation of a pseudogap around the Fermi level are visible in the \ac{afm} state. The expected smearing of the \ac{dos} in the \ac{dlm} state due to spin disorder is clearly seen in the bottom panel of Fig.~\ref{fig:DOS}, but the large exchange splitting between the two spin channels prevails.
This is also reflected in the \ce{Mn} spin moment calculated in the \ac{dlm} state, $\SI{3.71}{\mu_{\mathrm{B}}}$, which is practically the same as in the layered \ac{afm} state.

\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{{DOS_total_comparison_eV}.pdf}
	\caption{Density of states per atom from the electronic structure calculations in the \ac{afm} (top panel) and \ac{dlm} (bottom panel) states. The DOS for only one \ce{Mn} sublattice is shown. Positive values correspond to spin-up states, negative ones to spin-down states. \label{fig:DOS}}
\end{figure}

As for the anisotropies in Eq.~\eqref{eq:spin_model}, we calculate a second order anisotropy of $d_{z}=\SI{-0.62}{\milli\electronvolt}$, and fourth order anisotropies $d_{zz}=\SI{-0.024}{\milli\electronvolt}$ and $d_{xy}=\SI{0.058}{\milli\electronvolt}$. These values compare fairly well to those that can be derived from the anisotropy constants reported in Ref.~\cite{shick_spin-orbit_2010}, $d_{z}=\SI{-1.19}{\milli\electronvolt}$, $d_{zz}=\SI{-0.015}{\milli\electronvolt}$, and $d_{xy}=\SI{0.04}{\milli\electronvolt}$, especially considering that the latter were calculated with a full-potential density functional method, in contrast to the \ac{asa} used in our calculations. This result is also in agreement with experimental reports of an upper bound for the in-plane anisotropy $d_{xy}$ of $\SI{0.068}{\milli\electronvolt}$ \cite{sapozhnik_direct_2018}.
Thus, in agreement with Refs.~\cite{shick_spin-orbit_2010,barthem_easy_2016,sapozhnik_direct_2018}, we find the magnetic easy axis along the $\langle110\rangle$ direction, as illustrated in Fig.~\ref{fig:exchange}. However, in our results the anisotropy responsible for the confinement in the basal plane is only about half as large as in Ref.~\cite{shick_spin-orbit_2010}. Note, though, that the out-of-plane anisotropy plays only a minor role in the switching process discussed in our work.

For our atomistic spin dynamics simulations we combine these anisotropies with the \ac{rtm} exchange parameters, since both are calculated from the same converged potential, in contrast to the \ac{sce} exchange parameters.

\section{First-principles calculations of the induced moments}
The inverse spin-galvanic or Rashba--Edelstein effect leads to electrically induced magnetic moments.
These induced spin and orbital polarizations can be computed using the Kubo linear-response formalism.
Specifically, the locally induced polarizations can be expressed as
\begin{equation}
 \delta \vec{S} = \vec{\chi}^S \vec{E}, ~{\textrm{and}} ~~ \delta \vec{L} = \vec{\chi}^L \vec{E},
\end{equation}
with $\vec{\chi}^{S}$ and $\vec{\chi}^{L}$ the spin and orbital Rashba--Edelstein susceptibility tensors, respectively, and $\vec{E}$ the applied electric field.
The magneto-electric susceptibility tensors can be obtained by evaluating the response to a perturbing electric field, $\hat{V} = -e \hat{\vec{r}} \cdot \vec{E}$, where $e$ is the electron charge.

Employing DFT-based single-electron states, the susceptibility tensors are given by \cite{salemi2021}
\begin{align}
\label{eq:LinearResponse}
\chi^{S,L}_{ij} &= -\frac{ie}{m_e} \int_{\Omega} \frac{d\vec{k}}{\Omega}
\sum_{n\neq m} \frac{f_{n\vec{k}} - f_{m\vec{k}} }{\hbar \omega_{nm\vec{k}}}~
\frac{A^{(S,L)i}_{mn\vec{k}} ~ p^j_{nm\vec{k}} }{-\omega_{nm\vec{k}} + i\tau_{\text{inter}}^{-1}} \nonumber \\
& ~~~ -\frac{ie}{m_e} \int_{\Omega} \frac{d\vec{k}}{\Omega}
\sum_{n} \frac{\partial f_{n\vec{k}} }{\partial \epsilon}~
\frac{A^{(S,L)i}_{nn\vec{k}} ~ p^j_{nn\vec{k}} }{i\tau_{\text{intra}}^{-1}} \, .
\end{align}
Here, $ \hbar \omega_{nm\vec{k}} = \epsilon_{n\vec{k}} - \epsilon_{m\vec{k}}$, with $\epsilon_{n\vec{k}}$ the unperturbed relativistic Kohn--Sham single electron energies, $\Omega$ is the Brillouin zone volume, $p^j_{nm\mathbf{k}}$ is the matrix element of the $j^\text{th}$ component of the momentum operator, and $f_{n\vec{k}}$ is the occupation of Kohn--Sham state $|n\vec{k}\rangle$. $\vec{A}^{(S,L)}_{mn\vec{k}}$ stands for a matrix element of the spin or orbital angular momentum operator, i.e., $\vec{A}^{S}_{mn\vec{k}} = \hat{\vec{S}}_{mn\vec{k}}$ for $\vec{\chi}^S$ and $\vec{A}^{L}_{mn\vec{k}} = \hat{\vec{L}}_{mn\vec{k}}$ for $\vec{\chi}^L$.
The parameters $\tau_{\text{inter}}$ and $\tau_{\text{intra}}$ are the electronic lifetimes for inter- and intraband scattering processes, respectively. These parameters capture the decay of an electron state $|n\vec{k}\rangle$ due to electron-electron scattering and interactions with external baths, e.g., phonon and defect scattering. In this work, we use an effective decay time $\tau = \tau_{\text{inter}} = \tau_{\text{intra}} = \SI{50}{\femto\second}$.

To compute the current induced spin and orbital polarizations on the individual atoms in \ce{Mn2Au} we employ the relativistic DFT package WIEN2k \cite{Blaha2018}, which gives the Kohn--Sham energies $\epsilon_{n\vec{k}}$ and wave functions $|n\vec{k}\rangle$ that are then used in Eq.\ (\ref{eq:LinearResponse}). We calculate the induced magnetic moments for different orientations of the electric field with respect to the magnetic easy axes, as reversible switching was reported for both the \mbox{[110]} and \mbox{[100]} directions \cite{bodnar_writing_2018}. Furthermore, we evaluate both the induced spin and orbital polarizations.
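For orientation, the bookkeeping of the interband term in Eq.~(\ref{eq:LinearResponse}) can be sketched in a few lines of Python. The sketch below sums over toy (randomly generated) matrix elements on a small $k$-grid; it is only meant to expose the structure of the sum, with all physical prefactors and units suppressed, and is in no way a substitute for the actual WIEN2k-based evaluation:

\begin{verbatim}
import numpy as np

# Schematic evaluation of the interband susceptibility sum: random
# Hermitian "matrix elements" stand in for A^(S,L) and p at each k.
rng = np.random.default_rng(0)
nb, nk = 4, 64
eps = np.sort(rng.normal(size=(nk, nb)), axis=1)   # toy band energies
f = (eps < 0.0).astype(float)                      # T = 0 occupations
tau_inter = 50.0                                   # lifetime (toy units)

def rand_herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return 0.5 * (m + m.conj().T)

chi = 0.0 + 0.0j
for k in range(nk):
    A = rand_herm(nb)    # stand-in for <m|S_i (or L_i)|n> at this k
    p = rand_herm(nb)    # stand-in for momentum matrix elements p^j
    for n in range(nb):
        for m in range(nb):
            if n == m:
                continue
            w_nm = eps[k, n] - eps[k, m]           # hbar*omega_nm (toy)
            chi += ((f[k, n] - f[k, m]) / w_nm
                    * A[m, n] * p[n, m] / (-w_nm + 1j / tau_inter))
chi *= -1j / nk          # k-average; prefactor -ie/m_e dropped
print("toy interband chi =", chi)
\end{verbatim}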
The local magnetic moments induced by the electric field are then given as $\vec{\mu}= \vec{\mu}_S + \vec{\mu}_L = (2\vec{\chi}^S + \vec{\chi}^L)\vec{E}$.

\begin{figure*}[ht]
	\begin{minipage}[t]{.48\linewidth}
		\includegraphics[width=0.9\linewidth]{{orbital_angledep}.pdf}
	\end{minipage}%
	\begin{minipage}[t]{.03\linewidth}
		\hspace{\linewidth}
	\end{minipage}%
	\begin{minipage}[t]{.48\linewidth}
		\includegraphics[width=0.9\linewidth]{{spin_angledep}.pdf}
	\end{minipage}
	\caption{Calculated induced orbital ($\mu_L$) and spin ($\mu_S$) moments on the two \ce{Mn} sublattices as a function of the electric field direction, for a field of $E=\SI{1e7}{\volt\per\meter}$ and local magnetic moments oriented along the \mbox{[110]} direction.
	a) In-plane direction of the induced orbital moments on the two \ce{Mn} sublattices (black vs.\ red). The arrows in the center depict the local magnetic moments. b) Cartesian components of the induced orbital moments on the two \ce{Mn} sublattices (solid vs.\ dashed) as a function of the in-plane angle of the electric field with respect to the \mbox{[100]} axis. c) and d): same as a) and b), but for the induced spin moments. \label{fig:induced_moments}}
\end{figure*}

The calculated induced orbital and spin magnetic moments on the two \ce{Mn} sublattices are presented in Fig.\ \ref{fig:induced_moments} as a function of the electric field direction. The orbital moments $\mu_L$ are always induced perpendicular to the electric field direction and are antisymmetric (staggered) on the \ce{Mn} atoms of the two sublattices. The spin moments $\mu_S$, on the other hand, are not necessarily perpendicular to the electric field direction, but their in-plane components are staggered as well. Additionally, the spin moments display a homogeneous out-of-plane component, i.e., a non-N\'eel-type contribution.
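The staggered versus homogeneous character of the induced moments can be quantified by the antisymmetric and symmetric combinations of the two sublattice moments; a minimal sketch, with made-up moment vectors purely for illustration:

\begin{verbatim}
import numpy as np

# Staggered vs. homogeneous decomposition of induced sublattice moments.
# The example vectors are made up for illustration only.
m_A = np.array([ 1.0e-3,  0.2e-3, 4.0e-5])   # sublattice A (mu_B)
m_B = np.array([-1.0e-3, -0.2e-3, 4.0e-5])   # sublattice B (mu_B)

m_stag = 0.5 * (m_A - m_B)   # antisymmetric (Neel-like): drives switching
m_hom = 0.5 * (m_A + m_B)    # symmetric: e.g., common out-of-plane part
print("staggered  :", m_stag)
print("homogeneous:", m_hom)
\end{verbatim}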
Interestingly, in all configurations the induced orbital moments are more than one order of magnitude larger than the induced spin moments, yet the former were not included in previous studies \cite{zelezny_relativistic_2014,zelezny_spin-orbit_2017}.
To summarize, there are always quite large staggered orbital moments induced on the \ce{Mn} sublattices, and small induced spin moments with nonstaggered as well as staggered components that can be parallel or antiparallel to the orbital moments depending on the direction of the electric field; see also \cite{salemi_orbitally_2019}.

\section{Atomistic spin dynamics simulations}

To include our first-principles calculations in a spin dynamics simulation we extend the semi-classical Heisenberg Hamiltonian [Eq.\ (\ref{eq:spin_model})] by contributions from the induced spin and orbital moments,
\begin{equation}
\begin{aligned}
	\mathcal{H} =&
	- \frac{1}{2}\sum_{i\neq j}J_{ij}(\vec{S}_{i}+\vec{s}_{i})\cdot(\vec{S}_{j}+\vec{s}_{j})\\
	& - \sum_{i}J_{}^{\mathrm{sd}}\vec{S}_{i}\cdot\vec{s}_{i}
	+ \sum_{i} \xi_{}\vec{S}_{i}\cdot\vec{l}_{i}\\
	&-\sum_{i} d_{z}S_{i,z}^2 -\sum_{i} d_{xy}S_{i,x}^2S_{i,y}^2 \ ,
\end{aligned}
\end{equation}
where $\vec{S}_{i} =\vec{\mu}^{\mathrm{d}}_{S,i}\/{\mu}^{\mathrm{d}}_{S}$ is the local magnetic moment of the $\mathrm{d}$-electrons, $\vec{s}_{i} =\vec{\mu}^{\mathrm{s}}_{S,i}\/{\mu}^{\mathrm{d}}_{S}$ the induced magnetic moment from the conduction $\mathrm{s}$-electrons, and $\vec{l}_{i} =\vec{\mu}_{L,i}\/{\mu}^{\mathrm{d}}_{S}$ the induced orbital magnetic moment.
All magnetic moments are normalized with respect to the local magnetic moment.
Thus, the Hamiltonian consists of five different contributions: the inter-atomic exchange with exchange constants $J_{ij}$, an additional intra-atomic $\mathrm{sd}$-exchange with exchange constant $J^{\mathrm{sd}}_{}$, a \ac{soc} term with strength $\xi_{}$, as well as second and fourth order anisotropy terms constituting the tetragonal anisotropy.

As our classical spin model employs quantum mechanical and statistical averages of the spin and orbital moments, we also use a classical description of the \ac{soc}, replacing the spin and orbital momentum operators by their averages. Note that this effective model for the \ac{soc} was used by Bruno \cite{bruno_physical_1993} to provide a simple physical interpretation of magnetic anisotropy.
In this model only the spin moments couple via the inter-atomic exchange interaction, in agreement with the conclusions of \cite{sapozhnik_experimental_2018}.

All the contributions from the induced moments can also be represented by a simple Zeeman-like term with a sublattice-specific effective field; this is the staggered field used in previous phenomenological descriptions,
\begin{align}
	{\mu}^{\mathrm{d}}_{S}\vec{B}_{i}^{\mathrm{ind}} = \sum_{j}J_{ij}\vec{s}_{j} + J_{}^{\mathrm{sd}}\vec{s}_{i} - \xi_{}\vec{l}_{i} \ .
\end{align}
From the shift between the spin-up and spin-down s-states we estimate an intra-atomic exchange of $J^{\mathrm{sd}}=\SI{50}{\milli\electronvolt}$. The \ac{soc} strength is calculated from the energy difference between the $\mathrm{d}_{3\/2}$ and $\mathrm{d}_{5\/2}$ resonances, yielding $\xi=\SI{46}{\milli\electronvolt}$. A rough order-of-magnitude evaluation of the resulting staggered field is sketched below.
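The induced-moment magnitudes used in the following sketch are hypothetical placeholders, of the order suggested by Fig.~\ref{fig:induced_moments}; in our simulations the actual values from the calculations of this section enter directly:

\begin{verbatim}
# Order-of-magnitude estimate of the effective staggered field,
# B_ind = (J_inter*s + J_sd*s - xi*l) / mu_d  (magnitudes only; signs
# and orientations depend on the electric field direction, cf. Fig. 2).
mu_B = 5.788e-5                    # Bohr magneton (eV/T)
mu_d = 3.74 * mu_B                 # local Mn d-moment (eV/T)
J_inter, J_sd, xi = 0.37113, 0.050, 0.046   # couplings (eV)

# Hypothetical induced moments, roughly the order of Fig. 2 (mu_B):
mu_L, mu_s = 1.0e-3, 2.5e-5

l, s = mu_L / 3.74, mu_s / 3.74    # normalized induced moments
B_orb = xi * l / mu_d              # orbital (SOC) contribution
B_ex = J_inter * s / mu_d          # inter-atomic exchange contribution
B_sd = J_sd * s / mu_d             # intra-atomic (sd) contribution
print(f"B_orb ~ {B_orb*1e3:.0f} mT, B_ex ~ {B_ex*1e3:.0f} mT, "
      f"B_sd ~ {B_sd*1e3:.1f} mT")
\end{verbatim}

With these placeholder values the orbital contribution dominates the inter-atomic exchange contribution by roughly a factor of five, and the total staggered field is of the order of a few tens of millitesla.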
Together with the exchange interactions derived in Sec.~II and the induced moments calculated for an electric field of $\SI{1e7}{\volt\per\meter}$, these couplings yield staggered fields of about $\SI{76}{\milli\tesla}$.
Here, the contribution from the induced orbital moments dominates \cite{salemi_orbitally_2019}. It is about a factor of five larger than the contribution from the inter-atomic exchange and more than one order of magnitude larger than that of the intra-atomic exchange.
This also explains why the staggered fields calculated here are much larger than those estimated and predicted before \cite{roy_robust_2016,zelezny_spin-orbit_2017,meinert_electrical_2018}: the orbital contribution was previously not taken into account.

The time evolution of the localized \ce{Mn} moments stemming from the $\mathrm{d}$-electrons is described by the stochastic \ac{llg} equation
\begin{align}
\dot{\vec{S}}_{i}=-\frac{\gamma}{\left(1+\alpha^2\right){\mu}^{\mathrm{d}}_{S}} \vec{S}_{i}\times \Big[\vec{H}_{i}+\alpha \vec{S}_{i}\times\vec{H}_{i}\Big] \ ,
\end{align}
where $\gamma=\SI{1.76e11}{\per\second\per\tesla}$ is the gyromagnetic ratio and $\alpha$ a dimensionless damping constant. Temperature is included via Langevin dynamics by adding a random thermal noise $\vec{\zeta}_{i}$ to the effective field $\vec{H}_{i}=-\frac{\partial\mathcal{H}}{\partial\vec{S}_i}+\vec{\zeta}_{i}$ \cite{kronmuller_classical_2007}. The field from the induced moments $\vec{B}_{i}^{\mathrm{ind}}$ is part of this effective field.

The damping constant is a free parameter, as there are no experimental values for it in the literature.
For comparison with \cite{roy_robust_2016} we use a plausible value of $\alpha=0.01$. Similarly, for the electric field a rectangular pulse with a pulse length of \SI{20}{\pico\second} was simulated, to compare the results with those from a phenomenological model \cite{roy_robust_2016}.
Since the samples in experiments are mostly of granular type \cite{meinert_electrical_2018}, we simulate a system of $\SI{20.3}{\nano\meter}\times\SI{20.3}{\nano\meter}\times\SI{20.5}{\nano\meter}$ size with open boundary conditions, resembling one grain of a typical sample.

In our simulations we consider electric fields along \mbox{[110]}, i.e.\ parallel to the local magnetic moments, and along \mbox{[100]}, since reversible switching was reported for both directions \cite{bodnar_writing_2018}.
For both field configurations our model does not switch at $T=0$ for $E=\SI{1e7}{\volt\per\meter}$, corresponding to the current densities of about \SIrange{1e10}{1e11}{\ampere\per\meter\squared} used in experiments \cite{bodnar_writing_2018,bodnar_imaging_2019}. Instead, we need a field strength of at least $E=\SI{1.9e7}{\volt\per\meter}$ for the field along the \mbox{[110]} direction, where the torques on the local magnetic moments are maximal. For the \mbox{[100]} direction an even larger field of $\SI{3.1e7}{\volt\per\meter}$ is required for switching at zero temperature. However, once the system switches, it switches within a few picoseconds, see Fig.~\ref{fig:T0_comparison}; a toy illustration of these dynamics is sketched below.
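To convey the mechanism behind this fast switching in a compact form, the following Python sketch integrates the deterministic ($T=0$) \ac{llg} equation for just two exchange-coupled macrospins in a staggered field. The couplings and the damping are taken from the text, the field strength is an illustrative value above the zero-temperature threshold, and the small $d_{zz}$ term is omitted; this cartoon is not a substitute for our atomistic simulations:

\begin{verbatim}
import numpy as np

# Toy two-macrospin LLG integration (T = 0) of NSOT-driven switching.
gamma, alpha = 1.76e11, 0.01                # 1/(s T), damping
mu_d = 3.74 * 9.274e-24                     # Mn moment (J/T)
eV = 1.602e-19
J_ex = -0.37113 * eV                        # inter-sublattice exchange (J)
d_z, d_xy = -0.62e-3 * eV, 0.058e-3 * eV    # anisotropies (J)
B_st = 0.3                                  # staggered field (T), illustrative
n_hat = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)   # field along [-110]

def h_eff(S, S_other, sign):
    """Effective field (tesla) on one sublattice: -(1/mu_d) dE/dS."""
    h = (J_ex / mu_d) * S_other                      # exchange
    h = h + (2 * d_z / mu_d) * np.array([0.0, 0.0, S[2]])
    h = h + (2 * d_xy / mu_d) * np.array([S[0]*S[1]**2, S[1]*S[0]**2, 0.0])
    return h + sign * B_st * n_hat                   # staggered Zeeman term

def rhs(S, h):
    return -gamma / (1 + alpha**2) * (
        np.cross(S, h) + alpha * np.cross(S, np.cross(S, h)))

S1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # sublattice 1 along [110]
S2 = -S1.copy()                               # sublattice 2 antiparallel
dt, steps = 1e-16, 150000                     # 15 ps total
for _ in range(steps):                        # Heun predictor-corrector
    k1, k2 = rhs(S1, h_eff(S1, S2, +1)), rhs(S2, h_eff(S2, S1, -1))
    P1, P2 = S1 + dt * k1, S2 + dt * k2
    q1, q2 = rhs(P1, h_eff(P1, P2, +1)), rhs(P2, h_eff(P2, P1, -1))
    S1 = S1 + 0.5 * dt * (k1 + q1); S1 /= np.linalg.norm(S1)
    S2 = S2 + 0.5 * dt * (k2 + q2); S2 /= np.linalg.norm(S2)

L = 0.5 * (S1 - S2)   # Neel vector; expected near +/-[-110] above threshold
print("final Neel vector:", np.round(L, 3))
\end{verbatim}

In this reduced picture the staggered field first cants the two sublattices slightly out of the basal plane, and the resulting inter-sublattice exchange torque then drives the fast in-plane rotation of the N\'eel vector, as discussed in the following.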
This switching is even faster than predicted by the phenomenological model of Ref.~\cite{roy_robust_2016}, probably because of the inclusion of the induced orbital moments and of the exchange interactions beyond the first three nearest neighbors.

\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{{T0_comparison}.pdf}
	\caption{Time evolution of the magnetic order parameter during $90^{\circ}$ switching at $T=\SI{0}{\kelvin}$. The electric field (applied in the shaded area) is \SI{1.9e7}{\volt\per\meter} in the \mbox{[110]} direction (top) and \SI{3.1e7}{\volt\per\meter} in the \mbox{[100]} direction (bottom).\label{fig:T0_comparison}}
\end{figure}

As was already pointed out in Ref.~\cite{roy_robust_2016}, the reason for this rapid switching is the so-called exchange enhancement, which is characteristic of antiferromagnetic dynamics \cite{dannegger2021}. The staggered fields not only rotate the magnetic moments via the damping term in the \ac{llg} equation but also induce a canting between the sublattices via the much stronger precession term.
This canting produces a small net magnetization and thereby very large torques from the inter-sublattice exchange field, which tries to realign the sublattices; here, the damping term is responsible for the realignment. The precession term, on the other hand, rotates the magnetic moments towards the direction of the staggered field. The out-of-plane component of the order parameter remains essentially zero during the process (see Fig.\ \ref{fig:switching_path}). Hence, the inter-sublattice exchange field governs the switching process and, in contrast to the switching in \acp{fm}, lower damping allows for faster switching \cite{rozsa_reduced_2019}.

\begin{figure}
	\centering
	\includegraphics[width=0.9\linewidth]{switching_path.pdf}
	\caption{Switching path of the two sublattice magnetization vectors. The antiparallel \ce{Mn} moments switch by 90$^{\circ}$ from the initial \mbox{[110]} configuration (semi-transparent) to the final \mbox{[-110]} configuration (opaque). During the switching process the sublattices are slightly canted, resulting in large torques from the inter-sublattice exchange field that significantly enhance the switching. This exchange enhancement is characteristic of antiferromagnetic dynamics \cite{dannegger2021}. The out-of-plane component is scaled here by a factor of 100.
	\label{fig:switching_path}}
\end{figure}

The electric fields considered so far are much larger than those applied in experiments, but temperature plays an additional major role. A finite temperature not only lowers the energy barrier (set here by the fourth-order in-plane magnetic anisotropy); thermal fluctuations can also enable probabilistic switching.
Fig.~\ref{fig:T_comparison} shows the time evolution of the order parameter at elevated temperatures, as well as the switching probability as a function of temperature, for an electric field of $E=\SI{1e7}{\volt\per\meter}$. For the \mbox{[110]} direction the system does not switch at temperatures below \SI{250}{\kelvin}; between \SI{250}{\kelvin} and \SI{350}{\kelvin} the process is probabilistic, and above \SI{350}{\kelvin} it is deterministic.
In the deterministic regime the energy barrier is so low that the system switches in a few picoseconds, similar to the simulations with larger electric fields. In the probabilistic regime, however, it can take several attempts to cross the energy barrier by thermal agitation.
Of course, the switching probability here also depends on the pulse length of the external electric field, as longer pulses allow for more stochastic attempts to cross the barrier.
For the electric field along the \mbox{[100]} direction the probabilistic regime lies between \SI{400}{\kelvin} and \SI{550}{\kelvin}, above which the switching is deterministic.

Reversible switching for pulse currents along the \mbox{[100]} direction was also observed in experiments \cite{bodnar_writing_2018}. The same paper also reported significant heating, with temperatures of up to \SI{300}{\celsius}, and thermal activation was considered to play an important role in the process. A key role of thermal activation was also reported by \textcite{meinert_electrical_2018}.
Of course, for thermal switching of nanoparticles the system size is crucial as well, especially for antiferromagnets, as their thermal stability is much lower than that of ferromagnets \cite{rozsa_reduced_2019}. Here, the system size was chosen so as to avoid purely superparamagnetic switching, which would lead to back-and-forth switching.

\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{{T_comparison}.pdf}
	\caption{100 trajectories of the order parameter during the electric field pulse of $E=\SI{1e7}{\volt\per\meter}$ (shaded area). Top: at $T=\SI{300}{\kelvin}$ for an electric field in the \mbox{[110]} direction. Bottom: at $T=\SI{500}{\kelvin}$ for an electric field in the \mbox{[100]} direction. The inset shows the switching probability as a function of temperature.\label{fig:T_comparison}}
\end{figure}



\section{Conclusions}

Modeling the current induced switching process in \ce{Mn2Au} with all its different contributing terms in a quantitative manner is a challenging task. Here, we have presented the first multi-scale model combining first-principles calculations of exchange and anisotropy constants, as well as of electrically induced spin and orbital moments, in an extended atomistic spin model.
We predict much higher effective staggered fields due to the formerly neglected contributions from the induced orbital moments.
Within the framework of atomistic spin dynamics simulations, we have shown that these fields, combined with the inter-sublattice exchange interactions, result in switching processes on the time scale of a few picoseconds. However, this switching requires significantly higher electric fields than in experiments or, alternatively, elevated temperatures. This applies to both considered electric field directions, \mbox{[110]} and \mbox{[100]}, in agreement with experimental findings \cite{bodnar_writing_2018}.
Hence, in agreement with previous experimental studies \cite{bodnar_writing_2018,meinert_electrical_2018}, we find that thermal activation plays a key role in the current induced switching process, and we have consequently distinguished temperature regimes for probabilistic and deterministic switching.


\begin{acknowledgments}
The authors gratefully acknowledge valuable discussions with Karel Carva.
L.S.\ and P.M.O.\ acknowledge funding from the Swedish Research Council (VR) and the European Union's Horizon 2020 Research and Innovation Programme under FET-OPEN Grant agreement No.\ 863155 (s-Nebula), and acknowledge computer resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Center for High Performance Computing and the Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX). The work of A.D., E.S.\ and L.Sz.\ was supported by the National Research, Development, and Innovation Office under projects No.\ PD134579 and No.\ K131938. The work in Konstanz was supported by the {Deutsche Forschungsgemeinschaft} via the {Sonderforschungsbereich 1432}.
\end{acknowledgments}

","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}

Stars form mainly through the gravitational contraction of a molecular cloud. A molecular cloud is like a vast laboratory for space chemistry. From the verge of collapse to star formation, a molecular cloud goes through different stages. Chemical changes during this evolution act as hints to understand the physical and chemical processes associated with the source. For example, SiO is a good shock tracer, and high-velocity CO line wings are a good tracer of outflows; CH$_3$OH can trace other kinds of environments, including outflows \citep{taquet2015} and even cold gas in pre-stellar cores \citep{vastel2014}; C$^{17}$O helps to constrain disk properties in some cases, etc. After the collapse of a molecular cloud, a young stellar object forms in the central region. This central region is embedded in a thick rotating envelope which further forms a disk-like region. Some of the matter is often ejected in the form of a bipolar outflow carrying a significant amount of angular momentum.
\citet{lada87} classified an evolutionary sequence of young stellar objects depending on the spectral energy distribution, e.g., class I, class II and class III. Younger protostars are often called class 0 objects \citep{andre93}. HH212 is thought to be a Class 0 low-mass protostellar system, located in Orion at 400 pc \citep{koun2017}.

The inner regions of low-mass star-forming cores may be enriched with complex molecules, e.g., methyl formate (HCOOCH$_3$), ethyl cyanide ($\mathrm{C_2H_5CN}$), dimethyl ether ($\mathrm{CH_3OCH_3}$), methyl alcohol ($\mathrm{CH_3OH}$) and formaldehyde (H$_2$CO) \citep{biss2008, bott2004, cazaux2003}.
To differentiate from the `hot core' regions present in high-mass star-forming regions, the term `hot corino' \citep{ceccar2007} is often used in the context of low-mass star-forming regions. Molecules can form both in the gas phase and on grain surfaces: during the evolution of a molecular cloud, gas-phase molecules can deplete onto grain surfaces and start a new chemistry there.
Grain surfaces aid the formation of complex molecules by hydrogenation; there are also routes by which complex molecules can form in the gas phase. Complex molecules like HCOOCH$_3$ and CH$_3$OCH$_3$ \citep{balu2015}, or NH$_2$CHO \citep{barone2015}, can effectively form in the gas phase following the sublimation of key simpler precursors, e.g., CH$_3$OH, NH$_2$, HDCO, from grain mantles. Complex molecules which form on grain mantles can populate the gas phase through desorption from the grain mantles.
Due to thermal desorption in high-temperature regions, e.g., hot cores and hot corinos, complex molecules or their precursors are released from the grain surfaces into the gas phase. For this reason, a high deuteration of complex species on grain surfaces may be reflected in the gas phase (e.g., \citealp{cc2014, das2015a, das2015b}). Deuterated water has been reported by \citet{code2016} in the hot corino of HH212. Very recently, \citet{lee2017a} reported D$_2$CO and singly deuterated methanol (CH$_2$DOH), the first deuterated complex molecules to be observed in the central region of HH212.

Due to the limited spatial resolution of observational facilities in the past, only a few hot corinos have been reported so far on $<$100 AU scales, e.g., IRAS 16293-2422, NGC 1333 IRAS2A, IRAS4A, etc. (e.g., \citealp{imai16,jorgen2012} and references therein).
Recently, \citet{code2016} suggested a `hot-corino' region in the HH212 system. The HH212 region is an ideal system to investigate different processes (e.g., infall, rotation, hot corino, bipolar outflow) related to the formation of a low-mass protostar.
The HH212 source has been observed in the past using the Submillimeter Array (SMA) \citep{lee2006}, the IRAM Plateau de Bure (PdB) interferometer \citep{code2007} and the Atacama Large Millimeter\/submillimeter Array (ALMA) \citep{lee2014, code2014}. ALMA, due to its high spatial resolution and sensitivity, has revealed much more detail than any previous observation. The HH212 system has a bipolar jet, as observed in SiO, SO, and SO$_2$ emission lines \citep{code2014,podio2015}. It has a central hot corino surrounded by a flattened, infalling and rotating envelope, as observed in C$^{17}$O and HCO$^+$ \citep{lee2006,lee2014}. Also, from HCO$^+$ and C$^{17}$O emission line observations, \citet{lee2014} and \citet{code2014} suggested a compact disk ($\sim 90$ AU) rotating around a source of $\simeq 0.2-0.3$ M$\odot$. A later observation \citep{lee2017b} resolved the disk and suggested a disk size of 40 AU.

In this paper, we use the ALMA archival dataset 2011.0.00647.S and report emission of a deuterated formaldehyde line from the central hot-corino region. We compare the HDCO emission with methanol (CH$_3$OH), C$^{34}$S and C$^{17}$O and try to explain the chemistry around the central region of the source.
The line search was performed using Splatalogue (www.splatalogue.net), and the molecular data have been taken from the CDMS \citep{muller01,muller05} and JPL \citep{pic1998} molecular databases.

\section{Observations}
The HH212 protostar system was observed with ALMA (Band 7) using twenty-four 12 m antennas on 2012 December 1 (Early Science Cycle 0 phase, \citealt{code2014}). In this observation, the shortest and longest baselines were 20 m and 360 m, respectively. We report HDCO (Table~\ref{tab:lines}) and include the CH$_3$OH, C$^{34}$S and C$^{17}$O lines for comparison with the deuterated formaldehyde emission. The datacubes have a spectral resolution of 488 kHz ($\sim 0.43$ km s$^{-1}$) and a typical beam FWHM of $0.''65 \times 0.''47$ at position angle (PA) $\sim 49^{\circ}$.
The observed spectral windows were 333.7-337.4 GHz and 345.6-349.3 GHz, and the typical rms level was 3-4 mJy beam$^{-1}$ in 0.43 km s$^{-1}$ channels.
The data were calibrated with the CASA package, with quasars J0538-440 and J0607-085 as the passband calibrators, quasar J0607-085 as the gain calibrator, and Callisto and Ganymede as the flux calibrators. A quick consistency check of the quoted channel width is sketched below.
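The channel width in velocity units follows from the radio convention $\Delta v = c\,\Delta\nu\/\nu_0$; a minimal sketch:

\begin{verbatim}
# Consistency check: a 488 kHz channel at ~337 GHz corresponds to
# ~0.43 km/s in the radio convention, dv = c * dnu / nu0.
c = 2.998e5                  # speed of light (km/s)
dnu = 488e3                  # channel width (Hz)
nu0 = 337.061e9              # C17O (3-2) rest frequency (Hz)
dv = c * dnu / nu0
print(f"channel width = {dv:.2f} km/s")   # -> 0.43 km/s
\end{verbatim}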
We generated spectral cubes by subtracting the continuum emission in the visibility data. We used Briggs weighting with a robustness parameter of 0.5 for CLEANing the image.
Positions are given with respect to the continuum peak of the MM1 protostar, located at $\alpha(J2000)= 05^h43^m51^s.41, \delta(J2000)=-01^{\circ}02'53''.17$ \citep{lee2014}.

\section{Results and Discussions}

ALMA Band 7 is found to be rich in spectral transitions in the region towards the MM1 protostar. Here, we discuss four lines: C$^{17}$O (3--2), the deuterated formaldehyde line HDCO 5(1,4)--4(1,3), the methanol line CH$_3$OH v0 7(1,7)--6(1,6), and C$^{34}$S (7--6).
The details of the lines are given in Table~\ref{tab:lines}. The C$^{17}$O (3-2) transition was used by \citet{code2014} to describe the rotating envelope around the central region. Here, we consider the C$^{17}$O (3-2) emission as optically thin and have calculated its column density in different regions of the source. We compare the C$^{17}$O column density with the HDCO and CH$_3$OH column densities to study the chemistry around the MM1 protostar.
From Figure~\ref{fig:hdco_overlap}, HDCO emission is seen in the central region of the protostar system and is only partially resolved (synthesized beam size $0.66'' \times 0.47'', {\rm PA}\quad 49.6^{\circ}$ and image size $0.70'' \times 0.49'', {\rm PA}\quad 57.2^{\circ}$).
We report deuterated formaldehyde emission around the MM1 protostar position for the first time.
Recent work by \citet{leurini2016} has described the kinematics of methanol in HH212. Moreover, from a more recent observation (ALMA, 2015), \citet{lee2017a} found that the methanol emission arises from a rotating disk environment. Here, we compare the formaldehyde emission with the most intense methanol line (E$_u$=79 K). Both the formaldehyde and methanol line profiles peak at $\sim$ 1.9 km s$^{-1}$, i.e., close to the systemic velocity of $\sim$1.7 km s$^{-1}$ \citep{lee2014}.
Though \citet{code2014} suggested a systemic velocity of $\sim$ 1.3 km s$^{-1}$, here we consider the systemic velocity to be $\sim$ 1.7 km s$^{-1}$.

\begin{table}
	\centering
	\caption{List of unblended transitions detected towards HH212-MM1 and their line properties.}
	\label{tab:lines}
	\begin{tabular}{lcccc}
		\hline
		Species & Transition & Frequency & $E_u$ & $S\mu^2$\\
		 & & (GHz) & (K) & (D$^2$) \\
		\hline
		CH$_3$OH & v0 7(1,7)-6(1,6) & 335.58202 & 78.97 & 5.55\\
		HDCO & 5(1,4)-4(1,3) & 335.09678 & 56.25 & 26.05422\\
		C$^{17}$O & J=3-2 & 337.06112 & 32.35 & 0.01411\\
		C$^{34}$S & J=7-6 & 337.396459 & 50.23 & 25.57\\
		\hline
	\end{tabular}
\end{table}

\begin{figure}

\includegraphics[width=\columnwidth]{figure1.eps}

\caption{The HH212 system observed with ALMA Band 7 \citep{code2014}. Integrated emission (moment 0) maps of three lines, SiO, HDCO and C$^{17}$O, are overlaid on the source continuum (gray scale). Magenta contours show the SiO (8-7) bipolar jet; the first contour and the steps are 10$\sigma$ ($\sigma\sim$ 5 mJy\/beam km\/s). Blue contours show HDCO; contours start at 7$\sigma$ with steps of 8$\sigma$ (15 mJy\/beam km\/s). Red contours show C$^{17}$O; contours start at 5$\sigma$ with steps of 5$\sigma$ (1.5 mJy\/beam km\/s).}

\label{fig:hdco_overlap}
\end{figure}
\subsection{HDCO emission}
The HDCO emission is almost symmetric around the systemic velocity ($\sim 1.6-2.0$ km s$^{-1}$), so it can be assumed that it originates near the central hot region.
In this region the dust temperature is high enough that molecules on the grain surfaces (i.e., on the dust) easily desorb into the gas phase. The temperatures of hot corinos may vary from a few tens to a few hundreds of kelvin \citep{parise2009, code2016}. Here, we have observed only one HDCO transition, so the temperature cannot be derived directly. \citet{code2016} used the same dataset and listed five acetaldehyde (CH$_3$CHO) transitions; from an optically thin LTE analysis of CH$_3$CHO they suggested a temperature of $87\pm47$ K.
\citet{leurini2016} reported a rotational temperature of 295 K for the methanol emission, while \citet{lee2017a} found an excitation temperature of 165$\pm$85 K for deuterated methanol transitions.
Considering all these results, and roughly the temperature variation along the radius \citep{lee2014}, we have considered five excitation temperatures (20, 40, 90, 160, 300 K) for our calculations.
We consider the low excitation temperatures, e.g., 20 \& 40 K, for regions away from the hot corino and the high temperatures for the hot region.
Here we assume the HDCO emission to be optically thin and local thermodynamic equilibrium (LTE) to be satisfied.
The details of the column density calculation and its variation are described in Section 3.3.

In Figure~\ref{fig:hdco_overlap}, we see that the HDCO emission is concentrated in a circular region around the central protostar position, i.e., the peak of the continuum emission. The emission is elongated along the jet direction.
As the emission is only partially resolved, we cannot infer conclusively whether the elongation is real or an effect of the synthesized beam. Figure~\ref{fig:hdco_sysve} shows that near the systemic velocity the HDCO emission is most extended; though the extended emission feature is weak ($\sim 3\sigma$), it is absent in channels at higher velocities from the systemic velocity.
This extended emission feature is similar to the `X'-shaped outflow cavity as traced by the C$^{34}$S emission.
To draw a further conclusion, in Figure~\ref{fig:channel_map} we plot three velocity channels near the systemic velocity for the HDCO emission and compare them with the C$^{34}$S channel maps. The C$^{34}$S emission is tracing a dense gas component. The `X'-shaped outflow is closely related to the bipolar jet or outflow near the systemic velocity \citep{code2014}. From Figure~\ref{fig:channel_map}, it can be seen that though the HDCO emission away from the central source is weak, it is significantly similar to the C$^{34}$S emission near the systemic velocity.
To find whether the HDCO emission shows any rotation, we plot the centroid emission positions of various velocity channels of the C$^{34}$S and HDCO lines in Figure~\ref{fig:uv1}. \citet{code2014} found an evident rotation around the jet for the C$^{34}$S emission in the southern lobe. From Figure~\ref{fig:uv1} we can see that there is a definite signature of rotation for C$^{34}$S, as the blue-shifted and red-shifted emission are situated away from the central peak position and the jet axis.
At low velocities ($<1.0$ km s$^{-1}$ from systemic), it clearly shows rotation (in the southern lobe), as the centroids of the blue-shifted and red-shifted emission are situated roughly symmetrically away from the jet axis and below the disk plane. At high velocities, the rotation feature is not clear but the emission becomes more collimated towards the jet axis. The HDCO emission centroids also show a signature of rotation, as is clear from Figure~\ref{fig:uv1}.
Looking at the southern portion of the emission, there seems to be some similarity with the C$^{34}$S features. As the HDCO emission away from the source is very faint and the shift between different velocity channels is smaller than the beam size, this inference may not be conclusive. The HDCO emission centroid (red-shifted southern portion in Fig.~\ref{fig:uv1}) lies away from the disk plane and shifts along the jet axis, so it is not associated with disk rotation; at smaller scales it may be associated with a disk wind, a small-scale outflow, or cavity rotation, but with the current spatial resolution of the ALMA data we cannot confirm this.
\citet{leurini2016} found that methanol (CH$_3$OH) could trace the base of the low-velocity, small-scale outflow, and another, higher resolution observation ($0.04''$, \citealp{lee2017a}) suggests that it arises from a warm environment near the disk surface. Hence, to discuss the origin of the HDCO emission, we compare the HDCO emission with the CH$_3$OH and C$^{17}$O emission in the next section.

\begin{figure}
	\includegraphics[width=\columnwidth]{figure2.eps}
 \caption{HDCO colour map and contour map, overplotted, for the 1.63 km s$^{-1}$ channel, which is close to the systemic velocity of 1.7 km s$^{-1}$. The contours start at 3$\sigma$ ($\sigma \sim$ 7 mJy) with steps of 2$\sigma$. The HDCO emission map at the systemic velocity is quite similar to the `X'-shaped outflow.}

 \label{fig:hdco_sysve}
\end{figure}

\begin{figure*}
 \includegraphics[width=8cm, angle=270]{HDCO_C34S.eps}
 \caption{Comparison of the channel maps of C$^{34}$S and HDCO. The velocities are near the systemic velocity of $\sim$ 1.7 km s$^{-1}$. The first contour is at 3$\sigma$ and the steps are 2$\sigma$ for both HDCO and C$^{34}$S; $\sigma$ is $\sim$ 6 mJy\/beam for the HDCO contours and 3.5 mJy\/beam for the C$^{34}$S contours.}
 \label{fig:channel_map}
\end{figure*}

\begin{figure*}
	\includegraphics[width=18cm]{Figure_4.eps}
	\vskip -0.1cm
 \caption{Distribution of the centroid positions of the various velocity channels of the C$^{34}$S and HDCO lines. Velocities are colour-coded according to the colour bar shown in the figure, and the velocity values are given relative to the systemic velocity (1.7 km s$^{-1}$). The directions of the jet (PA 22$^{\circ}$) and of the disk (PA 112$^{\circ}$) are shown by lines.}
 \label{fig:uv1}
\end{figure*}

\begin{figure}
	\includegraphics[width=\columnwidth]{Figure_5.eps}
 \caption{Distribution of the centroid positions of the various velocity channels of the C$^{17}$O, CH$_3$OH and HDCO lines. Velocities are colour-coded according to the colour bar shown in the figure, and the velocity values are given relative to the systemic velocity (1.7 km s$^{-1}$). The directions of the jet (PA 22$^{\circ}$) and of the disk (PA 112$^{\circ}$) are shown by lines.}
 \label{fig:uv2}
\end{figure}

\subsection{CH$_3$OH and C$^{17}$O emission}
Methyl alcohol emission in the v0 7(1,7)-6(1,6) transition is also observed around the MM1 protostar position. From the spectral signature (Figure~\ref{fig:spectra}) it is seen that the CH$_3$OH emission arises from a smaller region around the protostar position than HDCO.
The CH$_3$OH line was identified using the CDMS\/JPL molecular databases; it has E$_u$=79 K. The kinematics of the methanol emission has already been described by \citet{leurini2016}.

The methanol emission is from a smaller region ($0.64'' \times 0.45''$, PA $49.6^{\circ}$) than the HDCO emission.
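The centroid positions analysed in Figures~\ref{fig:uv1} and \ref{fig:uv2} are intensity-weighted mean positions computed channel by channel. A minimal sketch of this step, assuming a synthetic 2D channel image and matching coordinate grids (this is not the actual reduction script):

\begin{verbatim}
import numpy as np

# Intensity-weighted emission centroid of a single velocity channel.
# 'image' is a 2D channel map; 'ra' and 'dec' are matching offset
# grids (arcsec). Inputs here are synthetic for illustration.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
ra = (x - nx / 2) * 0.1
dec = (y - ny / 2) * 0.1
image = np.exp(-((ra - 0.3)**2 + (dec + 0.2)**2) / 0.5)  # fake blob

mask = image > 3 * 0.01      # keep pixels above an assumed 3-sigma cut
w = image[mask]
ra_c = np.sum(w * ra[mask]) / np.sum(w)
dec_c = np.sum(w * dec[mask]) / np.sum(w)
print(f"centroid offset = ({ra_c:+.2f}, {dec_c:+.2f}) arcsec")
\end{verbatim}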
In Figure~\ref{fig:uv2} we compare the HDCO emission centroids for different velocity channels with those of C$^{17}$O and CH$_3$OH. The C$^{17}$O emission traces the disk rotation at high velocities and a rotating outflow cavity at low velocities \citep{code2014}. The methanol emission shows rotation in the same sense as C$^{17}$O and traces the small-scale outflow.
Though the spatial resolution ($0.6''$) is not enough to ascertain this, we have compared the HDCO emission with these emissions. At small velocities, the HDCO emission shows some rotation near the disk plane, which may trace the small-scale outflow or a rotating environment \citep{lee2017a} near the disk surface. At high velocities, the red-shifted centroid shifts towards the jet axis and away from the disk plane in the southern portion of the outflow region. This is very different from the methanol velocity centroids at high velocities.
At systemic velocities the HDCO emission traces the `X'-shaped outflow cavity, similar to the C$^{34}$S emission; it may trace the small-scale outflow too, but due to the limited spatial resolution we are not sure about this.

\subsection{Column density calculation and abundances}
We calculate the column densities of C$^{17}$O and HDCO assuming the lines to be optically thin.
As we do not know the rotational temperature of the HDCO transition, we consider a range of temperatures (20 K--300 K) as the excitation temperature of the molecular transition.
Now,
\begin{equation}
N_u\/g_u=\frac{N_{tot}}{Q_{T_{rot}}} e^{\frac{-E_u}{T_{rot}}}
 =\frac{8\pi\nu^2 k\int T_b dv}{hc^3Ag_u}
 =\frac{3k\int T_b dv}{8\pi^3\mu^2\nu S}
\end{equation}
where $g_u$ is the statistical weight of the upper level $u$, $N_{tot}$ is the total column density of the molecule, $Q_{T_{rot}}$ is the rotational partition function, $T_{rot}$ is the rotational temperature, $E_u$ is the upper level energy, $k$ is the Boltzmann constant, $\nu$ is the frequency of the line transition, $A$ is the Einstein coefficient of the transition, and $\int T_b dv$ is the integrated line intensity. The calculated column density is a beam-averaged column density, so we consider the beam dilution factor, given by
\begin{equation}
\eta_{BD}=\frac{\theta_s^2}{\theta_s^2+\theta_B^2}
\end{equation}
where $\theta_s$ is the source size and $\theta_B$ is the beam size.
As the source is marginally resolved in HDCO emission, we are unsure about the source size.
\citet{code2016} considered a source size of $\sim 0.3''$, which is about the size of the dusty disk \citep{lee2017b}.
\citet{leurini2016} considered a source size of $\sim 0.2''$. Here, we consider a source size of $\sim 0.2''$.
In that case, the calculated column density would be multiplied by $\simeq 10$ for the central region only (region `M' in Fig.~\ref{fig:region}), if we consider beam dilution. The practical evaluation of Eq.~(1) is sketched below.
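As an illustration of how Eq.~(1) is evaluated in practice, the following sketch computes $N_{tot}$ for the HDCO transition in CGS units. The integrated intensity and the partition function value are placeholders (the latter would be taken from the CDMS\/JPL catalogs for the adopted $T_{rot}$); the line parameters are those of Table~\ref{tab:lines}:

\begin{verbatim}
import numpy as np

# Optically thin LTE column density from a single transition (CGS):
# N_u/g_u = 3 k W / (8 pi^3 nu S mu^2);  N_tot = (N_u/g_u) Q exp(E_u/T).
k_B = 1.380649e-16                  # erg/K

def n_tot(W, nu, Smu2, Eu, Trot, Q):
    W_cgs = W * 1e5                 # K km/s -> K cm/s
    Smu2_cgs = Smu2 * 1e-36         # Debye^2 -> esu^2 cm^2
    Nu_gu = 3 * k_B * W_cgs / (8 * np.pi**3 * nu * Smu2_cgs)
    return Nu_gu * Q * np.exp(Eu / Trot)

# HDCO 5(1,4)-4(1,3) parameters from Table 1; W (integrated intensity,
# K km/s) and Q (partition function at T_rot) are placeholder values
# used purely for illustration.
N = n_tot(W=25.0, nu=335.09678e9, Smu2=26.05, Eu=56.25, Trot=160.0, Q=750.0)
print(f"N(HDCO) ~ {N:.1e} cm^-2")   # -> ~5e14 cm^-2
\end{verbatim}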
The spectral profiles for the different regions are shown in Figure~\ref{fig:spectra}.
The calculated average column densities, without beam dilution correction, for the different regions (Figure~\ref{fig:region}) are listed in Table~\ref{tab:colden}.
The consequences of the beam dilution effect are discussed in Section 3.5.
\begin{table*}
	\centering
	\caption{Column density of HDCO around the MM1 protostar for the different regions depicted in Fig.~\ref{fig:region}}
 \label{tab:colden}
	\begin{tabular}{|l| c| c| c| c |c|c|}
		\hline
		Regions & \multicolumn{5}{c|}{Column densities for various T$_{rot}$ in units of $10^{14}$ cm$^{-2}$}&Error \\
		 & 300 K & 160 K & 90 K & 40 K & 20 K & \\
		\hline
		M & 12.3& 5.6 & 3.1 & 2.0 & .. &22.8\%\\
		L2 & .. & 1.0 & 0.5 & 0.36 & 0.52 & 25.6\%\\
		L3 & .. & 1.3 & 0.74 & 0.48 & 0.69 &23.6\% \\
		D2 & .. & 0.86 & 0.48 & 0.31 & 0.44 &25.2\% \\
		D3 & .. & 1.3 & 0.74 & 0.48 & 0.69 & 25.4\% \\
	\hline
	\end{tabular}
\end{table*}
In Table~\ref{tab:colden} we show the column densities for different regions along and perpendicular to the jet axis.
The central hot corino is expected to be hot, so we have excluded the lowest temperature (20 K) from our temperature range when calculating the column density in the central region. Similarly, in regions away from the central source we have not considered the 300 K temperature for the calculation. Considering an H$_2$ column density of $\sim 10^{24}$ cm$^{-2}$ (see the next section), X$_{HDCO} \sim 10^{-10}$.
The errors on the column density are calculated from the noise and the statistical (Gaussian) fitting. If we consider a calibration error of 20\%, the same uncertainty is added to the column density calculation in addition to the statistical error; the resultant errors are shown in Table~\ref{tab:colden}.
Due to the low signal-to-noise ratio we have not estimated the column densities in the outermost regions (e.g., L1 and D1 in Figure~\ref{fig:region}).

\subsection{C$^{17}$O emission and disk mass}

As mentioned in the earlier section, C$^{17}$O traces a Keplerian disk at high velocities. Assuming the C$^{17}$O emission to be optically thin, we can calculate the disk mass from the beam-averaged C$^{17}$O column density by converting it to an H$_2$ volume density. For the conversion of the C$^{17}$O column density to H$_2$, we use $X_{CO}\/X_{C^{17}O}$=1792 \citep{wilson1994} and $X_{CO}=N_{CO}\/N_{H_2} \sim 10^{-4}$.
In region `M' (Figure~\ref{fig:region}) the C$^{17}$O column density for T$_{rot}$ = 90-300 K is 1.3--3.3$\times 10^{16}$ cm$^{-2}$, so N$_{H_2}\sim 2.3-5.9\times 10^{23}$ cm$^{-2}$ as a beam-averaged column density. As the disk is not resolved and is seen edge-on, considering beam dilution the H$_2$ volume density becomes ${\rm 8\times N_{H_2}\/D_{disk}}$, where the disk size is $\sim 90$ AU \citep{code2014}. Using the above information, the H$_2$ volume density becomes 1.3--3.4$\times 10^{9}$ cm$^{-3}$.
Considering an H$_2$ volume density of $\sim 10^{9}$ cm$^{-3}$, we derive the disk mass to be 0.016 M$_\odot$, close to the value of 0.014 M$_\odot$ estimated by \citet{lee2014}.
Here, we have considered the formula $M_D \sim 1.4 \times m_{H_2} \times \pi r^2 \times 2H$, where `r' is the disk radius and `H' is the disk height from the mid-plane; the factor 1.4 accounts for the mass in the form of helium. A worked numerical version of this chain of estimates is sketched below.
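The sketch below uses only the numbers quoted in this section ($r=90$ AU and $H\simeq40$ AU from \citet{lee2014}):

\begin{verbatim}
import numpy as np

# From the beam-averaged C17O column density to the disk mass.
AU, Msun, m_H2 = 1.496e13, 1.989e33, 3.32e-24   # cm, g, g

N_c17o = 1.3e16                  # C17O column density in region `M' (cm^-2)
N_H2 = N_c17o * 1792 * 1e4       # X_CO/X_C17O = 1792, X_CO = 1e-4

D = 90 * AU                      # disk size (edge-on)
n_H2 = 8 * N_H2 / D              # beam-dilution estimate -> ~1.4e9 cm^-3

r, H = 90 * AU, 40 * AU          # disk radius and height
n_use = 1.0e9                    # rounded value adopted in the text
M = 1.4 * m_H2 * n_use * np.pi * r**2 * 2 * H   # 1.4: helium correction
print(f"n_H2 ~ {n_H2:.1e} cm^-3, M_disk ~ {M / Msun:.3f} Msun")
\end{verbatim}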
We consider H $\sim 40$ AU and the disk radius to be 90 AU \citep{lee2014}. The disk mass can also be calculated from the continuum emission. We have assumed that the continuum from the HH212 disk is optically thin and isothermal. Though the disk temperature is not constant, as suggested by \citet{lee2014}, we have calculated the mass by assuming a dust temperature of $\sim$ 90 K.
Following earlier work \citep{tobin2012}, we assume a spectral index $\beta$=1 \citep{kwon2009} and a dust opacity $\kappa_0$=0.035 cm$^2$ g$^{-1}$ at 850 $\mu$m \citep{andrew2005}.
We use the following formula \citep{tobin2012} to calculate the disk mass:
$$ M_{dust}=\frac{D^2F_\lambda}{\kappa_0 (\frac{\lambda}{850\mu m})^{-\beta}B_\lambda(T_{dust})}.$$
With T$_{dust}$=90 K, the disk mass is 0.0159 M$_\odot$, which is close to 0.016 M$_\odot$, the disk mass calculated above from the hydrogen density.

\subsection{Discussion}
\subsubsection{Chemistry}
Formaldehyde can form both in the gas phase and on grain surfaces. On grain surfaces, formaldehyde forms through sequential reactions of H or D atoms with CO:\\
CO $\rightarrow$ HCO $\rightarrow$ H$_2$CO and CO $\rightarrow$ DCO $\rightarrow$ HDCO\/D$_2$CO \citep{cazaux2011, taquet2012}. The grain-phase reactions occur mainly under cold conditions ($<$50 K). Gas-phase formation of formaldehyde and its deuterated forms is also possible through reactions involving CH$_2$D$^+$. More specifically, at relatively high temperatures, T$\sim$ 100 K or higher, this route is relevant to the central hot-corino region of HH212 \citep{wootten87, oberg2012}. The reaction involving CH$_2$D$^+$ is not active in cold regions due to its high exothermicity ($\Delta$E of 654 K, see \citealt{roueff13}), but in high-temperature regions this reaction can take part and favours deuterium fractionation in the gas phase:

CH$_3^+$ + HD$\rightarrow$ CH$_2$D$^+$ + H$_2$ + $\Delta$E.

\citet{fontani14} for the first time disentangled the emission of deuterated formaldehyde formed on grain surfaces from its gas-phase production. In our observation we have not seen any H$_2$CO line, but we can still draw inferences about the production region and the possible production route of deuterated formaldehyde (HDCO). The HDCO emission arises from two regions: the central hot-corino region and the outflow cavities. The impact of the bipolar jet on the cavities may release HDCO from the grain surfaces due to sputtering or sublimation of the grain mantles (e.g., see \citealt{code2012}). As the outflow cavities are at low temperature, we expect the HDCO there to be produced mainly on grain surfaces. If we consider the HDCO column density in the central region assuming a temperature of 160 K, and 40 K in one of the outflow regions (south-west lobe, centered at $\alpha(J2000)= 05^h43^m51^s.37.4, \delta(J2000)=-01^{\circ}02'54''.69$; see Fig~\ref{fig:hdco_sysve}), then the column densities are 5.6$\times 10^{14}$ and 0.18$\times 10^{14}$ cm$^{-2}$, respectively. The outflow-region value is thus a factor of $\sim 31$ lower than the central-region HDCO column density.
From the C$^{17}$O emission we can get a rough idea of the density difference between these two regions. In the same region (south-west lobe) we have calculated the C$^{17}$O column density, and it is a factor of 10 lower than in the central region (`M'). Hence, if HDCO came only from grain surfaces, its column density should decrease by a similar density factor; this simple scaling argument is sketched below.
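The scaling argument reads as follows (numbers from this section):

\begin{verbatim}
# If HDCO were produced purely on grains and released by the jet impact,
# its column density should roughly track the overall density contrast.
N_center, N_outflow = 5.6e14, 0.18e14   # HDCO (cm^-2), 160 K / 40 K
density_factor = 10.0                   # center/outflow contrast (C17O)

hdco_contrast = N_center / N_outflow    # ~31
excess = hdco_contrast / density_factor # ~3: extra gas-phase production
print(f"HDCO contrast ~ {hdco_contrast:.0f}; excess ~ {excess:.1f}x")
\end{verbatim}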
Considering the density differences, the column density in the central region is thus a factor of three ($\sim 31\/10$) higher than expected. This indirectly implies that there is active gas-phase HDCO production in the central hot-corino region alongside the grain-surface-produced, desorbed HDCO. Recently, doubly deuterated formaldehyde (D$_2$CO) was reported by \citet{lee2017a} for the same source, with a beam size of $\sim 0.04''$. In this work the HDCO line emission is marginally resolved; if we consider an emission region of $0.2''$, then, accounting for beam dilution, the central-region column density is multiplied by a factor of 10. In that case the HDCO and D$_2$CO column densities would be comparable ($\sim 10^{15}$ cm$^{-2}$).
If we consider a smaller emission region, such as the D$_2$CO beam, the HDCO column density would be higher still. Hence, for source sizes between $0.2''$ and $0.04''$, the D$_2$CO\/HDCO ratio becomes $1-0.04$.
We expect that the HDCO emission region is not as small as that of D$_2$CO; in that case the deuteration of formaldehyde is considerably higher than that of methanol for the same source. The methanol deuteration (D\/H) for HH212 is 2.4$\pm 0.4 \times 10^{-2}$, as reported by \citet{bian2017}. On the other hand, \citet{lee2017a} suggested D\/H $\sim 0.27$ for methanol in the disk environment. If we take the D\/H ratio of 0.27 to be true, then methanol is still produced effectively on the grain surfaces in the disk environment. \citet{bian2017} used $\rm{^{13}CH_3OH}$ and the LTE approximation to calculate the methanol column density. If we instead consider the D\/H ratio of 2.4$\pm 0.4 \times 10^{-2}$ to be more likely, then deuterated methanol production is probably not efficient in the hot-corino region due to the high temperature. We speculate that deuterated formaldehyde can still be produced in this region through the gas-phase reaction network, unlike methanol.
\subsubsection{Kinematics and morphology}
In Figure~\ref{fig:region} we have defined different circular regions ($0.5''$ diameter), comparable to the beam size, and the observed spectra towards these regions are shown in Figure~\ref{fig:spectra}. From Figures~\ref{fig:uv1} and~\ref{fig:uv2} we can see that there may be emission from a rotating environment or a small-scale outflow near the disk, but this is very uncertain. At high velocities the emission is clearly shifted above the disk along the jet, which signifies that the emission is affected by an outflow or disk wind at the base of the jet.
The column densities (Table~\ref{tab:colden}) on both sides (L2, L3) of the central circular region of $0.5''$ diameter are lower than that of the central region: we see a jump in the HDCO column density. This is due to the fact that at the interface between the disk ($\sim 90$ AU) and the envelope there is a sharp rise of temperature and density due to the accretion shock. Also, from Figure~\ref{fig:spectra} we see that the spectral signature of the methanol emission is absent in the outermost regions (L1, L4, D1, D4), though a weak signature of HDCO emission is present. From this too we can say that the HDCO emission is more extended than methanol. \citet{lee2017a}, at a scale of $0.04''$, found that the difference in the emission regions of the complex molecules is partly due to their different A-coefficients. In that observation of HH212, it was found that molecules with comparatively low A-coefficients are more extended than those with high A-coefficients.
Here, the A-coefficient of the methanol transition is lower than that of HDCO, but its emission region seems to be more compact than that of HDCO. A higher A-value may correspond to a higher critical density. \citet{guzman2011} reported H$_2$CO critical densities of $\sim 10^6$ cm$^{-3}$; due to the high density ($\sim 10^8-10^9$ cm$^{-3}$) in the central region, it is expected that LTE conditions are maintained for the molecular transitions reported here.
Hence, we speculate that the local physico-chemical conditions, together with the higher line strength and the comparatively lower E$_u$ of HDCO, may be responsible for the difference in emission extent. Another difference between the methanol and HDCO emission in the central region (`M') is that HDCO has a red-shifted peak which is absent in the CH$_3$OH emission. This peak might arise from line contamination by another molecular transition, but we have not found any such line from other molecules. Alternatively, it may be related to the high-velocity outflow from the base of the jet.

\begin{figure}
	\includegraphics[width=\columnwidth]{region_figure.eps}
 \caption{Different regions along and perpendicular to the jet axis. Each circular region has a diameter of 0.5 arcsec, similar to the beam width. The gray scale image is the continuum image of the central source of HH212. The contours show the SiO jet emission along a PA of 22$^{\circ}$.}
\label{fig:region}
 \end{figure}

\begin{figure*}
	\includegraphics[width=12cm]{plot2.eps}
 \caption{Spectral profiles in the different regions shown in Figure~\ref{fig:region}.}
 \label{fig:spectra}
\end{figure*}

\section{Conclusions}
In this letter we have described the emission of deuterated formaldehyde (HDCO) from the hot inner region of HH212. This emission is limited mainly to the inner $\sim 200$ AU. The kinematics of HDCO is quite similar to that of the C$^{34}$S emission \citep{leurini2016} near the systemic velocity.
HDCO traces the large-scale outflow cavity near the systemic velocity; it may also trace a small-scale outflow or disk wind, but due to the limited spatial resolution of this observation we are uncertain about this.
On the other hand, both the methanol and HDCO line profiles peak at $\sim 1.9$ km s$^{-1}$, and the profiles are symmetric in the low-to-medium velocity range ($<2.4 {\rm\ km\ s^{-1}}$). The asymmetry at high velocity for HDCO may be associated with the outflow near the disk plane.
The HDCO rotation may be associated with a disk wind or rotating environment \citep{lee2017a}, or with the rotating cavity wall, similar to C$^{34}$S. Due to the limited resolution of the observation we cannot draw conclusions about the rotation with certainty. The emission is assumed to be optically thin. We observe only one transition of HDCO and thus cannot determine the excitation temperature directly; we have therefore considered a range of possible temperatures based on earlier studies and on typical hot-corino temperatures assumed in the literature.
The column density of HDCO is $\sim 10^{14} {\rm\ cm^{-2}}$. Though we have not observed any H$_2$CO transition, comparing with the D$_2$CO results of \citet{lee2017a} we speculate that the deuterium fractionation of formaldehyde is relatively higher than that of methanol in the central region.
We infer that gas-phase formation of deuterated formaldehyde is active in the central hot region of the low-mass protostar HH212.
\section*{Acknowledgements}
We acknowledge the anonymous referee for the constructive comments.
We also thank Dr.
Nirupam Roy (IISc, India) for his helpful suggestions.
DS is thankful to the Department of Space, Govt. of India (PRL) for supporting his continuing research, and also thanks the Young Visitor Programme (KASI) for financial support for a short-term stay at the Korea Astronomy and Space Science Institute. AD acknowledges an ISRO RESPOND grant (Grant no. ISRO\/RES\/2\/402\/16-17).
C.-F.L. acknowledges grants from the Ministry of Science and Technology of Taiwan (MoST 104-2119-M-001-015-MY3) and Academia Sinica (Career Development Award).
This paper makes use of the following ALMA data: ADS\/JAO.ALMA\#2011.0.00647.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI\/NRAO and NAOJ.

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}