diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpisr" "b/data_all_eng_slimpj/shuffled/split2/finalzzpisr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpisr" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction :}\n\n\\indent \\indent \nSince its invention by Withfield Diffie and Martin Hellman [1] , Public key cryptography has imposed\nitself as the necessary and indispensable building block of every IT Security architecture.\\\\*\nBut in the last decades it has been proven that public key cryptosystems based on number theory problems are not immune againt quantum computing attacks [3]. The advent of low computing ressources mobile devices such wirless rfid sensors, smart cellphones, ect has also put demands on very fast and lightweight public key algorithms .\\\\*\nPublic key cryptosystem presented in this paper is not based on number theory problems and is very fast compared to Diffie-Hellman [1] and RSA algorithms [2]. It is based on the difficulty to invert the following function : $F(x) =(a\\times x)Mod(2^p)Div(2^q)$ .\\\\* Mod is modulo operation , Div is Integer division operation , a , p and q are known integers where $( p > q )$ . \nIn this paper we construct three public key algorithms based on this problem namely a key exchange algorithm, a public key encryption algorithm and a digital signature algorithm.\\\\*\nWe prove its efficiency compared to Diffie-Hellman and RSA,\nand that the underlying problem can be a hard SAT instance [4] or a equations set of multivariate polynomials over F(2) . \n\n\\newpage\n\n\n\\section{Secret key exchange algorithm : } ~ \\\\*\n\n\n\\noindent\nBefore exchanging a secret key, Alice and Bob shared a knowledge of :\\\\*\n\n\\noindent Integers [ $l$, $m$, $p$, $q$, $r$, $Z$ ] satisfying following conditions :\\\\*\\\\*\n $q = l + m - p$ , $p > m + q + r$, $Z$ is $l$ \n bits long.\\\\*\\\\*\n\n\\noindent To exchange a secret key :\\\\*\\\\*\n\n\\noindent - 1 \\hspace{1 mm} Bob chooses randomly a integer $X$. [ $X$ ] is $m$ bits and a private knowledge of Bob.\\\\*\\\\*\n\\noindent - 2 \\hspace{1 mm} Computes number $U = (X\\times Z)Mod(2^p)Div(2^q) $ , and sends it to Alice.\\\\*\\\\*\n\\noindent - 3 \\hspace{1 mm} Alice chooses randomly a integer $Y$. [ $Y$ ] is $m$ bits and a private knowledge of Alice. \\\\*\\\\*\n\\noindent - 4 \\hspace{1 mm} Computes number $V = (Y\\times Z)Mod(2^p)Div(2^q) $ , and sends it to Bob.\\\\*\\\\*\n\\noindent - 5 \\hspace{1 mm} Bob computes number $W_a = (X \\times V)Mod(2^{p-q})Div(2^{m+r})$. \\\\*\\\\*\n\\noindent - 6 \\hspace{1 mm} Alice computes number $W_b = (Y \\times U)Mod(2^{p-q})Div(2^{m+r})$. \\\\*\n\n\\noindent Our experiments shows us that $Pr[W_a = W_b ] = 1 - 0.3*2^{-r}$. 
\n\\noindent If Bob and Alice choose $r$ large enough, say greater than 128, $Pr[W_a \\neq W_b ]$ will be negligible, \\\\* and they can then use this protocol as a key exchange algorithm. \\\\*\n\n\\noindent The exchanged secret key is the number: \\\\*\\\\*\n\\noindent $W = (X \\times V)Mod(2^{(p-q)})Div(2^{m+r}) = (Y \\times U)Mod(2^{(p-q)})Div(2^{m+r}) $\\\\*\\\\*\n\n\\noindent A Python implementation of this algorithm is provided in Appendix A.\n\n\n\n\n\n\n\\newpage\n\n\n\\section{Public key encryption algorithm :}~ \\\\*\n\n\\subsection{ Encryption :}~ \\\\*\n\n\\noindent In order to send an encrypted message to Bob, Alice performs the following steps: \\\\* \\\\*\n\n\\noindent -1 \\hspace{1 mm} She obtains his public key, composed of the integers [ $l$, $m$, $p$, $q$, $r$, $Z$, $U$ ], satisfying: \\\\*\\\\* \n\\indent\\hspace{1 mm}$q = l + m - p$, $p > m + q + r$ , $U = (X\\times Z)Mod(2^p)Div(2^q)$, $Z$ is $l$ bits long.\\\\*\n\n\\indent\\hspace{1 mm}[ $X$ ] is the $m$-bit-long private key of Bob.\\\\*\\\\*\n\\noindent - 2 \\hspace{1 mm} She randomly chooses an integer $Y$ which is $m$ bits long.\\\\* \n \n \n\\noindent - 3 \\hspace{1 mm} She computes the number $V = (Y\\times Z)Mod(2^p)Div(2^q)$, then the secret key \\\\*\\\\* \n\\indent\\hspace{1 mm} $W = (Y \\times U)Mod(2^{p-q})Div(2^{m+r})$. \\\\*\n \n\\noindent - 4 \\hspace{1 mm} She encrypts her plaintext with the secret key $W$ and sends the corresponding ciphertext \\\\*\\\\* \\indent\\hspace{1 mm} and the number $V$ to Bob. \\\\*\n\n\\subsection{ Decryption :}~ \\\\*\n\n\\noindent In order to decrypt the ciphertext received from Alice, Bob performs the following steps:\\\\* \\\\*\n\n\\noindent - 1 \\hspace{1 mm} With his private key $X$ and the number $V$ received from Alice,\\\\*\\\\*\n\\indent\\hspace{1 mm} he computes the secret key $W = (X \\times V)Mod(2^{p-q})Div(2^{m+r})$.\\\\* \n\n\\noindent - 2 \\hspace{1 mm} With the secret key $W$, he decrypts the ciphertext received from Alice.\n\n\n\n\\newpage\n\n \n\\section{Digital signature algorithm :} ~ \\\\*\n\n\\noindent Bob's public key is composed of the integers [ $l$, $m$, $p$, $q$, $r$, $Z$, $U$ ], satisfying: \\\\*\\\\* \n\\noindent $q = l + m - p$ , $p > l + q + r$, $U = (X\\times Z)Mod(2^p)Div(2^{q})$, $X$ is $m$ bits long whereas $Z$ is $l$ bits long.\\\\*\\\\*\n\n\\subsection{ Signature :}~ \\\\* \n\n\\noindent In order to sign a message Msg, Bob performs the following steps: \\\\* \n\n\\noindent - 1 \\hspace{1 mm} He randomly chooses an $m$-bit-long integer $Y$. \\\\*\n\n\\noindent - 2 \\hspace{1 mm} He hashes Msg with a hash function HF and gets a digest $H$ whose length in bits is \n the same \\\\* \\\\* \n\\indent\\hspace{1 mm} as that of the element $Z$ of his public key.
 With his private key [ $X$ ], he computes: \\\\*\n\n\\indent\\hspace{1 mm} $S1 = (Y\\times Z)Mod(2^p)Div(2^{q})$ and $S2 = (H\\times(X+Y))Mod(2^p)Div(2^{q})$ \\\\*\n\n\\noindent - 3 \\hspace{1 mm} He sends the message Msg and the signature $(S1,S2)$ to Alice.\\\\* \n\n\n\\subsection{ Verification :}~ \\\\* \n\n\\noindent In order to verify that the message Msg was sent by Bob, Alice performs the following steps: \\\\* \\\\*\n\n\\noindent - 1 \\hspace{1 mm} She obtains his public key.\\\\* \n\n\\noindent - 2 \\hspace{1 mm} She hashes Msg with HF and gets a digest $H$ whose length in bits is $l$, the same as that of $Z$.\\\\*\n\n\\noindent - 3 \\hspace{1 mm} From the digest $H$, the signature $(S1,S2)$ and the elements [ $p$, $q$, $l$, $U$, $Z$ ] of Bob's public key, \\\\*\\\\*\n\\indent she computes $Wa = (H \\times (S1 + U) )Mod(2^{p-q})Div(2^{l+r})$ and $Wb = (Z \\times S2)Mod(2^{p-q})Div(2^{l+r})$. \\\\*\n\n\\noindent - 4 \\hspace{1 mm} She compares $Wa$ to $Wb$; Msg was sent by Bob if $Wa = Wb$.\\\\* \\\\*\n\n\\noindent A Python implementation of this algorithm is provided in Appendix B.\n\n\\newpage\n\n\\section{Efficiency :} ~ \\\\*\n\n\\noindent The key exchange algorithm presented in this paper can be realised by a multiplication circuit where some leftmost and rightmost output bits are discarded, meaning that it has a time complexity of O($n^2$).\n\\noindent In comparison to standardised key exchange algorithms such as Diffie-Hellman in the multiplicative group or RSA, whose time complexities are O($n^3$), under the same security parameters the presented key exchange algorithm is O($n$) times faster.\\\\*\nThe same can be said about the presented public key encryption and digital signature algorithms, since they are basically applications of the key exchange algorithm.\n\n \n\n\n\n\\section{Security :} ~ \\\\*\n\n\\noindent The security of the presented public key cryptosystem is based on the difficulty of finding $X$ and $Y$ while knowing $Z$, $l$, $m$, $p$, $q$, $r$, $U = (X\\times Z)Mod(2^p)Div(2^{q})$, $V = (Y\\times Z)Mod(2^p)Div(2^{q})$,\\\\* where $l+m = p+q$, $p > m+q+r$, $Z$ is $l$ bits long, and $X$ and $Y$ are $m$ bits long.\\\\*\n\n\\noindent To get $X$ from $U$ and $Y$ from $V$, an attacker should:\\\\*\n\n\\noindent 1 - \\hspace{1 mm} invert $F(X) = U = (Z \\times X) Mod(2^p)Div(2^q) $, \\\\*\n\\noindent 2 - \\hspace{1 mm} invert $F(Y) = V = (Z \\times Y) Mod(2^p)Div(2^q) $.\\\\*\n \n \n\\noindent Put otherwise, the presented public key cryptosystem is based on the difficulty of inverting the following function:\\\\*\\\\* $F(x) = y = (a \\times x)Mod(2^p)Div(2^q)$. \\\\*\\\\* Here $a$, $y$, $p$ and $q$ are known integers, while $a$ and $x$ are respectively $n$ and $m$ bits long, with $(n > m)$ and $(p > q)$.\\\\* \n\n\\noindent At first glance we notice that it is easy to verify a solution but difficult to find one, \nimplying that this problem is in NP.\\\\* \n\n\\noindent To our knowledge it has never been mentioned in the literature. In the next subsection we reduce it to SAT in order to evaluate its hardness.\n\n\n\n\\newpage\n\n\\subsection{Hardness evaluation :} ~ \\\\* \n\n\\noindent Let $A$, $X$, $Y$, $n$, $m$ be integers where $A$, $X$ are respectively $n$ and $m$ bits long $(m \\leq n )$.\\\\*\n \n\n\\noindent The binary representation of A is $a_{(n-1)} ... a_{(i+1)} a_{(i)} ... a_{(0)}$.\\\\*\\\\* \nThe binary representation of X is $x_{(m-1)} ... x_{(i+1)} x_{(i)} ... x_{(0)}$.\\\\*\\\\* \nThe binary representation of Y is $y_{(n+m-1)} ... y_{(i+1)} y_{(i)} ... 
y_{(0)}$.\\\\*\n\n\\noindent $Y$ is the arithmetic product of $A$ and $X$, and $Y$'s bits, as a function of $A$'s bits and $X$'s bits, can be expressed by the following set of algebraic equations (1):\\\\*\n\n\\noindent $c_0 = 0$\\\\*\n\n\\noindent For ( $j = 0 \\hspace{1.5 mm} to \\hspace{1.5 mm} m-1$ ) : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{j} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=0}^{j-1} a_{(j-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\\noindent For ( $j = m-1 \\hspace{1.5 mm} to \\hspace{1.5 mm} n-1$ ) : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{m-1} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=0}^{m-1} a_{(j-1-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\\noindent For ( $j = n-1 \\hspace{1.5 mm} to \\hspace{1.5 mm} m+n-1$ ) : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=j-n+1}^{m-1} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=j-n}^{m-1} a_{(j-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\\noindent $c_j$ is the carry bit of the multiplication product $( Y = A \\times X )$ at column $j$. \\\\*\n\n\\noindent In our problem the bits $Y_{0\\rightarrow q}$ and $Y_{p\\rightarrow (m+n-1)}$ are unknown; (1) then becomes the following set of algebraic equations (2): \\\\*\n\n\n\\noindent For ( $j = q \\hspace{1.5 mm} to \\hspace{1.5 mm} m-1$ ) : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{j} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=0}^{j-1} a_{(j-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\\noindent For ( $j = m-1 \\hspace{1.5 mm} to \\hspace{1.5 mm} n-1$ ) : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{m-1} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=0}^{m-1} a_{(j-1-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\\newpage\n\n\\noindent For ( $j = n-1 \\hspace{1.5 mm} to \\hspace{1.5 mm} p$ ) : \\\\* \n\n\\indent $ y_j = (( \\sum\\limits_{i=j-n+1}^{m-1} a_{(j-i)} \\times x_i) + c_j ) Mod(2) $ \\hspace{3 mm} and \\hspace{3 mm} $ c_{j+1} = (( \\sum\\limits_{i=j-n}^{m-1} a_{(j-i)} \\times x_i) + c_{j} ) Div(2) $ \\\\*\n\n\n\\noindent The set of algebraic equations (2) can be translated\nto the following set of logical equations (3).
\\\\*\n\n\n\\noindent If $( j \\leq m )$ $ c_j = F_j(x_{(j-1)},...,x_{(k+1)},x_{(k)},...,x_{0},c_{(j-1)})$ \\\\*\n\n\\noindent If $( m \\leq j )$ $ c_j = F_j(x_m,...,x_{(k+1)},x_{(k)},...,x_{0},c_{(j-1)})$ \\\\*\n\n\\noindent $ \\land_{j=q}^{m}((\\oplus_{i=0}^{j}(a_{(j-i)}\\land x_i) \\oplus c_{j})= y_j ) = true $ \\\\*\n\n\\noindent $ \\land_{j=m}^{n}((\\oplus_{i=j-m}^{j}(a_{(j-i)}\\land x_i) \\oplus c_{j})= y_j ) = true $ \\\\*\n\n\\noindent $ \\land_{j=n}^{p}((\\oplus_{i=j-n}^{n}(a_{(j-i)}\\land x_i) \\oplus c_{j})= y_j ) = true $ \\\\*\\\\*\n\n\n\n\\noindent Notice that this set of logical equations is practically the same as the set resulting from reducing FACT to SAT. In [9], the authors suggested using FACT as a source of hard SAT instances; moreover, SAT solvers to this day are still inefficient at solving this sort of SAT instance.\\\\*\n\n\\noindent Every logical function can be realised by NAND gates; if we replace $\\neg x_i$ by $(1 - x_i )$ and ( $x_i \\land x_j$ )\\\\* by ( $x_i \\times x_j$ ), (3) can also be translated to the following set of multivariate polynomial equations: \\\\*\n\n\\noindent If $( q \\leq j \\leq m-1 )$ : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{j} a_{(j-i)} \\times x_i) + F_j(x_0,....x_m) ) Mod(2) $ \\\\*\n\n\\noindent If $( m-1 \\leq j \\leq n-1)$ : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=0}^{m-1} a_{(j-i)} \\times x_i) + F_j(x_0,....x_m) ) Mod(2) $ \\\\*\n\n\\noindent If $( n-1 \\leq j \\leq p )$ : \\\\*\n\n\\indent $ y_j = (( \\sum\\limits_{i=j-n+1}^{m-1} a_{(j-i)} \\times x_i) + F_j(x_0,....x_m) ) Mod(2) $ \\\\*\n\n\\noindent where the $F_j$'s are multivariate polynomials corresponding to the carries $c_j$. \\\\*\n\n\\noindent Summing up, to break the presented public key cryptosystem one has to solve SAT instances resulting from\nlogical equations containing a lot of XORs, which is not easy if the number of unknown bits is high enough [6]. \\\\*\n\n\\noindent Alternatively, one has to solve sets of multivariate polynomial equations over F(2) with degree greater than or equal to the parameter $q$.\n\n \n\n \n\n\\newpage \n\n\\section{Conclusion, open questions and future work :} ~ \\\\*\n\n\\noindent In this paper we have presented a new fast public key cryptosystem based on the difficulty of inverting the following function: \n$F(x) =(a\\times x)Mod(2^p)Div(2^q)$.\\\\* Mod is the modulo operation, Div is the integer division operation, and $a$, $p$ and $q$ are known integers with $( p > q )$.\\\\*\n\n\\noindent We have demonstrated its efficiency compared to the Diffie-Hellman and RSA cryptosystems. We have also shown that its security is based on a new problem that can be viewed as a hard SAT instance or as a set of multivariate polynomial equations over F(2).\\\\*\n\n\\noindent The fact that its security is not based on number theory problems also supports its resistance against current quantum computing attacks [3].\\\\*\n\n\\noindent The last decade has seen enormous progress in SAT solvers, but they are still inefficient at solving\nlogical statements containing a lot of XORs, which is the case for our problem [5][6][7].\\\\*\n\n\\noindent SAT is NP-complete, meaning that solving it can take exponential time.
It has been found that the hardest instances of a SAT problem depend on its constrainedness, which is defined as the ratio of clauses to variables [8].\\\\*\\\\* \n\\noindent This leads us to ask what form the integers composing the public parameters of our PKCS should have\nin order to produce hard SAT instances even for an eventual SAT solver that has no problems with XOR clauses.\\\\*\\\\* \n\n\\noindent Recently we have found a way to build public key cryptosystems based on the difficulty of inverting the following function: \n$F(x) =(a\\times x)Mod(b^p)Div(b^q)$.\\\\* Mod is the modulo operation, Div is the integer division operation, and $a$, $b$, $p$ and $q$ are known integers with $( p > q )$.\\\\*\n\n\\noindent This work will be the subject of a future paper.\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nRecently, schemes that depend on operator splitting have found wide applicability within the domain of simulation of complex chemical reaction systems, biological systems or those that can be modelled by appropriate Markov processes, for example interacting particle systems. The recipe of splitting the system into components that can be simulated separately in an appropriate manner has led to more efficient algorithms, sometimes because some of the components can be solved explicitly, as in chemical reaction systems~\\cite{Janhke}, and other times because the splitting allows for parallel computations~\\cite{Arampatzis:2012, spparks}.\n\nIn parallel with the development of those algorithms, there has also been a growing amount of work towards the numerical analysis of splitting methods for stochastic dynamics in different contexts~\\cite{Janhke,Arampatzis:2012, Arampatzis:2014, Petzold,Engblom, Bayati}. In particular, for the case of Parallel Lattice Kinetic Monte Carlo (PL-KMC), the authors in~\\cite{Arampatzis:2012} developed a general framework, based on semigroup theory, that connects lattice decompositions to operator splitting. Then, in~\\cite{Arampatzis:2014}, error estimates were provided for bounded time intervals along with comparisons between different splitting schemes. One of the important contributions of that work was to highlight the connection of the error with the commutator associated with the splitting and how it affects the efficiency of the scheme.\n\nAlthough classical techniques in numerical analysis, such as the study of the local error of the splitting scheme and expansions of the global error~\\cite{Talay-Tubaro}, work well in providing error estimates for bounded intervals, the information they provide is not of great use when the focus is on long-time results. Given that a common goal is sampling from a stationary distribution and convergence occurs for large simulation times, it thus makes sense to develop methodologies for the study of long-time errors. Approaches to tackling this problem are varied. For instance, in the case of SDEs, study of the long-time behavior has been done by employing Poisson equations~\\cite{MST10}. For Lie-Trotter splittings, backward error analysis~\\cite{Abdulle} has been used to study the performance of the schemes in capturing the stationary distribution when simulating Langevin dynamics (but see also~\\cite{Leimkuhler}). 
\n\nThe main idea in this work is information-theoretical in nature, following similar successful approaches studying the irreversibility of numerical schemes~\\cite{irreversibility_KPR}, sensitivity analysis~\\cite{sensitivity_PK}, and quantifying the loss of information in coarse-graining of particle systems~\\cite{Eva-coarse-grain}. In those, the authors use the relative entropy, along with other quantities derived from it, to both generate insights and provide computable quantities that are useful during a simulation. Besides that, approaching the problem from information theory still allows one to infer results about more classical metrics of error. For instance, one can derive upper bounds for the weak error of specific observables through the use of variational inequalities~\\cite{Dupuis}. \n\nOur goal is to use another derived quantity, the relative entropy on path space per unit time, or relative entropy rate (RER), to quantify the long-time loss of information when using a splitting scheme. For our comparison, we fix a time step $\\Delta t$ and then compare the $\\Delta t$-skeleton chain arising from the exact process with the discrete chain we get from the approximate process. Through rigorous asymptotics, \\ouredit{we provide an {\\em a posteriori} error expansion of RER in terms of $\\Delta t$} and connect RER with quantities central to the classical analysis of splitting schemes, like the commutator and the order of the local error of the splitting method. After deriving \\ouredit{computable estimators from our {\\em a posteriori} expansions} for the highest-order term coefficients, we estimate them with the use of SPPARKS~\\cite{spparks}, a parallel Kinetic Monte Carlo simulator, and use them to compare two well-known splitting schemes, the Lie and Strang splittings. Also, we illustrate how a practitioner can use the RER as an information criterion for selecting schemes that takes into account both long-time accuracy and communication cost. We then proceed to link the connectivity of the exact process with the RER asymptotics, which in turn allows for greater generality in the study of different operator splittings. \n\nThe plan for the following sections is as follows. \\ouredit{In Section \\ref{sec:Background}, we provide the necessary background for KMC, PL-KMC, construction and analysis of operator splitting schemes. Section~\\ref{ssec:info_theory_concepts} introduces the pathwise relative entropy and relative entropy per unit time, which are the principal tools used in this work. In Section~\\ref{sec:Long_time} we discuss the use of the relative entropy rate as a metric for studying the long-time loss of information that operator splitting schemes can have and motivate the use of asymptotic expansions for its study. Section~\\ref{sec:IPS-PKMC} is particularly important, as we study schemes through the RER in the context of stochastic particle systems and continue to Section~\\ref{sec:ising_example} with some discussion about time-step selection and the balance between error and communication in parallel KMC. Then, in Section~\\ref{sec:info-crit}, we highlight some connections between the proposed framework and model selection with information criteria. Section~\\ref{sect:conn} studies the RER for operator splitting schemes in a more general setting with the use of ideas from graph theory. 
Finally, in Section~\\ref{sec:transient}, we demonstrate that the RER can also be applied in transient regimes, before the simulation has converged to stationarity.} \n\n\n\n\\section{Background}\n\\label{sec:Background}\n\\ouredit{Consider that the stochastic process of interest is} an ergodic Continuous Time Markov Chain (CTMC) $X_t$ on a finite, but possibly still significantly large, state space $S$. This stochastic process can be completely defined by its \\textit{transition rates}, $q(\\sigma,\\sigma')$, which describe the probability of an update from state $\\sigma$ to state $\\sigma'$ in an infinitesimal period of time. \\ouredit{That is,}\n\\begin{align}\nP(X_{t+\\Delta t}=\\sigma'|X_t=\\sigma)=P_{\\Delta t}(\\sigma,\\sigma')=q(\\sigma,\\sigma')\\Delta t+o(\\Delta t), \\sigma\\neq \\sigma'.\n\\label{eq:infinitesimal_description}\n\\end{align}\nKinetic Monte Carlo (KMC) works by simulating the embedded Markov Chain $Y_n=X_{t_n}$, with jump times $t_n, t_n\\sim \\exp(\\lambda)$. The parameter $\\lambda(\\sigma)$ is the total rate when the system is at state $\\sigma$, \n\\begin{align}\n \\lambda(\\sigma)=\\sum_{\\underset{\\sigma'\\in S}{\\sigma'\\neq \\sigma}}q(\\sigma,\\sigma').\n \\label{eq:total_rate}\n\\end{align}\nThis allows us to write the transition probabilities of the embedded Markov Chain $p(\\sigma,\\sigma')=q(\\sigma,\\sigma')\/\\lambda(\\sigma)$. \\ouredit{We can also define the infinitesimal generator $L$ that corresponds to the Markov chain as follows. First, consider $f$: bounded and continuous function} on the state space $S$. Then, $L$ acts on $f$ at the state $\\sigma$ as\n\\begin{align}\n L[f](\\sigma)=\\sum_{\\sigma'\\in S}q(\\sigma,\\sigma')\\left(f(\\sigma')-f(\\sigma)\\right).\n \\label{eq:infinit_gen}\n\\end{align}\nNote that $L[\\delta_{\\sigma'}](\\sigma)=q(\\sigma,\\sigma')$ for all states $\\sigma,\\sigma'$,\\firstRef{where $\\delta_{\\sigma'}(\\sigma)=\\delta(\\sigma,\\sigma')$ is a Dirac probability measure. We shall also use the notation $L^k$ for the resulting operator after $k$ successive compositions of $L$. Because $L^k[\\delta_{\\sigma'}](\\sigma)=L^{k-1}[L[\\delta_{\\sigma'}]](\\sigma)$, we see that, for any $k$, $L^k[\\delta_{\\sigma'}](\\sigma)$ is a computable object that depends on the transition rates.}\n\n\\ouredit{Under fairly general conditions~\\cite{Kipnis-Landim}, the transition probability of the Markov process can be written as in semigroup form, i.e.\\ $P_t(\\sigma,\\sigma')=e^{Lt}\\delta_{\\sigma'}(\\sigma)$}. \\ouredit{In the case of interest to us, $L$ is going to be a bounded operator and such operators allow for a representation of the semigroup with a series expansion.}\n\n\\begin{lemma}\n\\label{lem:semigroup_expansion}\nLet $L$ be a linear \\& bounded operator, $L:C_{b}(S)\\to C_{b}(S)$, with $C_{b}(S)$ being the set of continuous and bounded functions on the space $S$. Then $L$ generates a uniformly continuous semigroup $e^{tL}$ which we can express in power series form.\n\\begin{align}\ne^{tL}&=\\sum_{k=0}^{\\infty}\\frac{t^k}{k!}L^{k}.\n \\label{eq:semigroup_expansion_formal}\n\\end{align} \n\\end{lemma} \n\\begin{proof}\nThis is a classical result for which many references exist, see for example chapter 1, page 2 of Pazy A. 
\\cite{Pazy:1983}.\n\\end{proof}\n\nThus, making use of Lemma \\ref{lem:semigroup_expansion}, we can write the transition probability as\n\\begin{align}\nP_{t}(\\sigma,\\sigma')&=e^{tL}\\delta_{\\sigma'}(\\sigma)=\\sum_{k=0}^{\\infty}\\frac{t^k}{k!}L^{k}[\\delta_\\sigma'](\\sigma),\\ \\sigma,\\sigma'\\in S.\n \\label{eq:semigroup_expansion}\n\\end{align} \n\n\\subsection{Constructing approximations by semigroup splitting}\n\\label{sect:FSKMC}\n\nWe now give the foundations of approximation by splitting methods, as applied to the simulation of CTMCs, and then describe how those ideas are applied in the case of parallel lattice KMC.\n\nAs mentioned earlier, the transition probability of the CTMC of interest can be written as $e^{tL}\\delta_{\\sigma'}(\\sigma)$. Our goal is to design a splitting scheme that can approximate the action of $e^{tL}$. In our context, this leads to a new CTMC. One way to build such a scheme is to start with a splitting of the infinitesimal generator $L$ \\eqref{eq:infinit_gen} into \\ouredit{components $L_1, L_2$ with $L=L_1+L_2$. Then, if we consider a positive $T$} and by using the Trotter product formula~\\cite{Trotter:1959}, we have\n\\begin{align}\ne^{TL}=\\lim_{n\\to \\infty}(e^{T\/nL_1}e^{T\/nL_2})^n.\n\\label{eq:trotter}\n\\end{align}\nCorrespondingly, if we now fix $n\\in\\mathbb{N}$ and set $\\Delta t=T\/n$, we can write approximations of $e^{TL}$ by using \\eqref{eq:trotter}. \\ouredit{For example, two such approximations are:}\n\\begin{equation}\n \\begin{aligned}\n e^{TL}&\\simeq \\left (e^{\\Delta t L_1} e^{\\Delta t L_2}\\right )^{n},\\text{ (Lie),}\\\\\n e^{TL}&\\simeq \\left (e^{\\Delta t\/2 L_1} e^{\\Delta t L_2}e^{\\Delta t\/2 L_1}\\right )^{n}, \\text{ (Strang)}.\n \\end{aligned}\n \\label{eq:two_main_split}\n\\end{equation}\nTherefore, for a one-step transition from $t=0$ to $\\Delta t$, \\eqref{eq:two_main_split} can be written as\n\\begin{equation}\n\\begin{aligned}\ne^{L\\Delta t}&\\simeq e^{\\Delta t L_1}e^{\\Delta t L_2},\\\\\ne^{L\\Delta t}&\\simeq e^{\\Delta t\/2L_1}e^{\\Delta t L_2}e^{\\Delta t\/2L_1}.\n\\end{aligned}\n\\label{eq:one_step_split}\n\\end{equation}\n\n\\firstRef{Operator splittings can also be carried out with multiple components, such as $L=L_1+L_2+L_3+L_4$. Such a splitting is used for 2D lattice decompositions in SPPARKS~\\cite{spparks}. All arguments can be simply extended to those cases, but we stick to two components, $L_1,L_2$, for notational convenience.}\n\nThroughout this work, we use $\\Po(\\sigma,\\sigma')$ to denote the probability $e^{L\\Delta t}\\delta_{\\sigma'}(\\sigma)$ and $\\Pb(\\sigma,\\sigma')$ for the approximations arising from splittings of the semigroup. Since $L$ is a bounded operator, we can \\ouredit{express $\\Po$ as expansion \\eqref{eq:semigroup_expansion}}. If we pick $L_1,L_2$ so that they are also bounded, then we can express $\\Pb$ as an expansion too. 
For example, for the Lie splitting \n\\begin{align}\n\\exp(\\Delta t L_1)\\exp(\\Delta t L_2)\\delta_\\sigma'(\\sigma)\n&=\\sum_{k=0}^{\\infty}\\frac{\\Delta t^k}{k!}\\left(k!\\cdot \\sum_{m=0}^{k}\\frac{L_1^m}{m!}\\cdot \\frac{L_2^{k-m}}{(k-m)!}\\right)\\delta_{\\sigma'}(\\sigma),\n\\label{eq:lie_exp}\n\\end{align}\n\\ouredit{which can be showed by multiplying the semigroup expansions of $\\exp(\\Delta t L_1)$ and $\\exp(\\Delta t L_2)$.} Thus, if we use the notation:\n\\begin{align}\n\\label{eq:L_QofLie}\nL_Q^k:&=k!\\cdot \\sum_{m=0}^{k}\\frac{L_1^m}{m!}\\cdot \\frac{L_2^{k-m}}{(k-m)!}\n\\end{align}\nwe can write~\\eqref{eq:lie_exp} in the form\n\\begin{align}\n\\Pb(\\sigma,\\sigma')=\\sum_{k=0}^{\\infty}\\frac{\\Delta t^k}{k!}L^k_{Q}[\\delta_{\\sigma'}](\\sigma).\n\\label{eq:gen_split_power_exp}\n\\end{align}\n\\ouredit{By the definition of $L_Q^k$ in Equation~\\eqref{eq:L_QofLie},} $L_Q^0=I$, $L_Q^1=L$, $L_Q^2=(L_1^2+L_2^2+2L_1L_2)$, and so on, \\ouredit{for the case of the Lie splitting}. By a similar argument, we can write an expansion like~\\eqref{eq:gen_split_power_exp} for other \\ouredit{operator splitting approximations}. \\firstRef{In general, $L_Q$ is not a generator of a Markov process and, in that case, $L^k_Q$ is not equal $L_Q$ after $k$ compositions but is defined in the context of the expansion in~\\eqref{eq:gen_split_power_exp}. The slight abuse of notation allows us to compare the expansion of the exact process~\\eqref{eq:semigroup_expansion} with expansions of the approximating schemes of the form~\\eqref{eq:gen_split_power_exp}.} \n\nOne way to compare the accuracy of using $\\Pb$ as opposed to $\\Po$ is to calculate the local error between expansion~\\eqref{eq:semigroup_expansion} and~\\eqref{eq:gen_split_power_exp}. As an example, here are the corresponding relations for the Lie and Strang splittings. We use $\\Pbl, \\Pbs$ \\ouredit{for Lie and Strang respectively}. We will also use the notation $[L_1,L_2]:=L_1L_2-L_2L_1$ \\ouredit{to denote the operator that} captures the failure of $L_1$ and $L_2$ to commute. \\ouredit{By using the expansions}~\\eqref{eq:semigroup_expansion},~\\eqref{eq:gen_split_power_exp}, we can show that \n\\begin{align}\n \\Po(\\sigma,\\sigma') &=\\Pbl(\\sigma, \\sigma') +\\frac{1}{2}[L_1,L_2]\\delta_{\\sigma'}(\\sigma)\\Delta t^2+O(\\Delta t^3),\\label{eq:comm-lemma-Lie}\\\\\n \\Po(\\sigma,\\sigma') &=\\Pbs(\\sigma,\\sigma')+\\frac{1}{24}\\left([L_1,[L_1,L_2]]-2[L_2,[L_2,L_1]]\\right)\\delta_{\\sigma'}(\\sigma)\\Delta t^3 \\label{eq:comm-lemma-Strang}\\\\\n &+O(\\Delta t^4).\\nonumber\n\\end{align}\nFrom \\ouredit{Relations~\\eqref{eq:comm-lemma-Lie} and~\\eqref{eq:comm-lemma-Strang}}, we observe that the Strang splitting has a better local error compared to Lie \\ouredit{($\\Delta t^3$ versus $\\Delta t^2$)}. \\firstRef{Therefore, if we prescribe an error tolerance, the Strang scheme will be able to accommodate a larger $\\Delta t$ than the Lie scheme. 
With a larger $\\Delta t$, we will be able to take larger steps with the same tolerance during the simulation, and this is especially important for Parallel KMC, as we strive for balance between error accumulation and efficiency.}\n\nTo be able to discuss more general \\ouredit{operator splitting} approximations to $\\Po$, we introduce the following helpful lemma.\n\\begin{lemma}[Local order of error \\& commutator]\n\\label{lem:local_order_of_error}\nLet $\\Po(\\sigma,\\sigma')=e^{L\\Delta t}\\delta_{\\sigma'}(\\sigma)$ and let $\\Pb(\\sigma,\\sigma')$ be an approximation of $\\Po$ via a splitting scheme. Then, there is a function $C:S\\times S\\to \\mathbb{R}$ and an integer $p$, $p>1$, such that\n\n\\begin{align}\n\\Po(\\sigma,\\sigma') = \\Pb(\\sigma,\\sigma')+C(\\sigma,\\sigma')\\Delta t^p + o(\\Delta t^p).\n\\label{eq:prop:local_order_of_error}\n\\end{align}\n\\end{lemma}\nWe will refer to $C(\\sigma,\\sigma')=(L^p-L_Q^p)\\delta_{\\sigma'}(\\sigma)$ as the \\textit{commutator} and to $p$ as the \\textit{order of the local error}.\n\\begin{proof}\nThe result is immediate by using representations \\eqref{eq:semigroup_expansion}, \\eqref{eq:gen_split_power_exp}, since for $\\sigma,\\sigma'\\in S$,\n\\begin{align*}\n\\Po(\\sigma,\\sigma') - \\Pb(\\sigma,\\sigma')=\\sum_{k=0}^{\\infty}\\frac{\\Delta t^k}{k!}\\left(L^k-L_{Q}^k\\right)[\\delta_\\sigma'](\\sigma).\n\\end{align*}\nThen, $p$ is the smallest non-negative integer such that $L^p\\neq L_Q^p$. \\ouredit{This of course implies that $L^k=L_Q^k$ for $k<p$.}\n\\end{proof}\n\n\\subsection{Parallel lattice kinetic Monte Carlo}\n\\label{sect:ParallelLatticeKMC}\n\nIn lattice KMC, the state of the system is a configuration $\\sigma=\\{\\sigma(x):x\\in \\Lambda\\}$ on a lattice $\\Lambda$, and transitions are local updates $\\sigma\\to\\sigma^{x,\\omega}$ at a site $x\\in\\Lambda$, where $\\omega$ ranges over a set of admissible local events $S_x$ and the update occurs with rate $q(x,\\omega;\\sigma)$. The corresponding generator is\n\\begin{align}\n L[f](\\sigma)=\\sum_{x\\in\\Lambda}\\sum_{\\omega\\in S_x}q(x,\\omega;\\sigma)\\left(f(\\sigma^{x,\\omega})-f(\\sigma)\\right).\n \\label{eq:back:gen}\n\\end{align}\nThe rates are local, in the sense that $q(x,\\omega;\\sigma)$ depends only on the values of $\\sigma$ in a neighborhood of $x$. The goal of parallel lattice KMC is to use a decomposition of the lattice into sub-lattices and a time window $\\Delta t>
0$ to design an approximation to the exact process $e^{\\Delta t L}$ via a splitting method in such a way that allows for asynchronous computations.\n\n To begin, we note that any decomposition of the lattice into non-overlapping sub-lattices $\\Lambda_i$ also induces a decomposition of the generator \\eqref{eq:back:gen}, that is \n\\begin{align}\n L[f](\\sigma)=\\sum_{i=1}^{n}\\sum_{x\\in\\Lambda_i}\\sum_{\\omega\\in\n S_x}q(x,\\omega;\\sigma)\\left(f(\\sigma^{x,\\omega})-f(\\sigma)\\right).\n \\label{eq:back:split_gen}\n\\end{align}\nDue to the \\ouredit{localization} of the system, we can decompose the lattice $\\Lambda$ into \\ouredit{$n$ sub-lattices, $\\Lambda_{i}$}, so that transitions in some sub-lattices are independent from transitions in others, see Figure~\\ref{fig:lattice_decomp}. With two groups, $G_1=\\{\\Lambda_i:i \\text{ even}\\}, G_2=\\{\\Lambda_i:i \\text{ odd}\\}$, we can split $L$ into \n\\begin{equation}\n \\begin{aligned}\n L_j[f](\\sigma)&:=\\sum_{x\\in G_j}\\sum_{\\omega\\in\n S_x}q(x,\\omega;\\sigma)\\left(f(\\sigma^{x,\\omega})-f(\\sigma)\\right),\\ j=1,2,\\\\\n L[f](\\sigma)&=L_1[f](\\sigma)+L_2[f](\\sigma).\n \\end{aligned}\n \\label{eq:splitting_to_groups}\n\\end{equation}\n\n\n\\begin{figure}[h]\n\\centering\n\n\t\t\t\t\t\\begin{tikzpicture}[scale=0.14]\n\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\\fill [fill=green!50] (0,40) rectangle (10,30);\n\t\t\t\t\t\t\t\t\t\t\\fill [fill=white] (1,39) rectangle (9,31);\n\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\\foreach \\x in {0,1}{\n\t\t\t\t\t\t\t\t\t\t\t\\foreach \\y in {2}{\n\t\t\t\t\t\t\t\t\t\t\t\t\\fill [fill=red!50] (\\x*10+10*\\x,\\y*10) rectangle (10*\\x+10+10*\\x,10*\\y+10);\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\\foreach \\x in {0,1}{\n\t\t\t\t\t\t\t\t\t\t\t\\foreach \\y in {3}{\n\t\t\t\t\t\t\t\t\t\t\t\t\\fill [red!50] (\\x*10+10*\\x+10,\\y*10) rectangle (10*\\x+10+10*\\x+10,10*\\y+10);\n\t\t\t\t\t\t\t\t\t\t }\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t\t\\defblue!{blue!}\n\t\t\t\t\t\t\t\t\t\t\\fill [blue!!40,ultra thick,fill=blue!] (4,35) rectangle (6,34);\n\t\t\t\t\t\t\t\t\t\t\\fill [blue!!40,ultra thick,fill=blue!] (5,34) rectangle (6,33);\n\t\t\t\t\t\t\t\t\t\t\\fill [blue!!40,ultra thick,fill=blue!] (6,33) rectangle (5,36);\n\t\t\t\t\t\t\t\t\t\t\\fill [blue!!40,ultra thick,fill=blue!] (7,35) rectangle (6,34);\n\t\t\t\t\t\t\t\t\t\t\\fill [black ,ultra thick, step=1] (0,20) grid (40,40);\n\t\t\t\t\t\t\t\t\t\t\\fill [black, ultra thick,step=10] (0,30) grid (40,40);\n\t\t\t\t\t\t\t\t\t\t\\end{tikzpicture}\n\\caption{A checkerboard decomposition of a 2D lattice. Red sub-lattices correspond to group $G_1$ and white ones to $G_2$. For comparison, a nearest neighborhood region (n.n. region) is also shown (solid black cross). Transitions involving the center of that region only depend on the state of its nearest neighbors. So, if we pick the sub-lattices much larger than the size of an n.n. region, transitions in different sub-lattices belonging to the same group are independent. A site $x$ is said to belong to the boundary of its sub-lattice if part of its n.n. region is outside that sub-lattice (the green region is the collection of all such points for the first sub-lattice). If a transition occurs at such a site $x$, then an update needs to be made to the boundary information of all other sub-lattices for which $x$ belongs to a n.n. region. 
}\n\\label{fig:lattice_decomp}\n\\end{figure}\n\nThus, by the formulas in \\eqref{eq:splitting_to_groups}, we can use the ideas of the previous section to construct splitting approximations to $e^{L\\Delta t}$. Those \\ouredit{can also be interpreted} as computation schedules for the parallel algorithm. Such schedules set two attributes of the simulation: (a) in what order to simulate the two groups asynchronously and (b) for how much time to simulate each group per \\ouredit{time-step} (which the user controls with the $\\Delta t$ parameter). A demonstration of how PL-KMC works is shown in Figure~\\ref{fig:pkmc}.\n \\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{images\/pkmc.eps}\n\\caption{One step of PL-KMC in the $1D$ case, where all of the spin values are set to zero initially while using the Lie splitting. After the lattice is decomposed into non-overlapping sub-lattices, here blue (indexed as $1$) and red (indexed as $2$), the algorithm proceeds by first simulating all blue sub-lattices independently by standard KMC until a time $t=\\Delta t$ is reached for all of them. Once that is done, the lattices in the second group are simulated in the same way. This results in the process $\\sigma_t$ on the whole lattice being propagated forward in time by $\\Delta t$. In-between the simulation of each group, communication between the processes is required in order to correct for the mismatch on the boundaries of the sub-lattices. The resulting error due to the mismatch is controlled by the commutator $C$~\\cite{Arampatzis:2014}.}\n\\label{fig:pkmc}\n\\end{figure}\n\nIn general, the larger the $\\Delta t$, the less often the different processes need to communicate to resolve inconsistencies during a run. This is a fact for any simulation algorithm that can be expressed in the above operator splitting framework, e.g. SPPARKS and others~\\cite{Arampatzis:2012}. Since communication is the usual bottleneck of PL-KMC algorithms, a practitioner would like to pick $\\Delta t$ as large as possible, given a fixed tolerance. One of the important insights of the analysis in \\cite{Arampatzis:2014} is that the commutator controls this relationship. Simply put, a small $C(\\cdot, \\cdot)$ (as defined in Lemma~\\ref{lem:local_order_of_error}) allows for a larger step size $\\Delta t$. \n\n\\section{Information metrics for comparing dynamics at long times} \n\\label{ssec:info_theory_concepts}\nWe will now introduce the main tools from information theory. In later sections, our focus will be to compare the exact process, $X_t$, and an approximation of it, $Y_t$, via their $\\Delta t$-skeleton sub-processes. That is, given a fixed $\\Delta t>0$ and $M\\in \\mathbb{N}$, we look at the discrete-time Markov processes $X_{n\\Delta t}$ and $Y_{n\\Delta t}$ for $n\\in \\{0,\\ldots,M\\}$, $T=M\\Delta t$. For this reason, we now introduce those concepts for discrete-time processes.\n\nLet us assume two discrete-time Markov processes $X_{n}$ and $Y_{n}$ on a countable state space $S$ with transition probabilities $P$ and $Q$ respectively. We also assume that for each process there exists a corresponding unique stationary distribution $\\mo$ and $\\mb$. Assuming $X_{0}$ ($Y_{0}$) is distributed according to $\\mo$ ($\\mb$), we can then calculate the probability of a specific path for each process. 
For example, if we fix \\ouredit{a positive integer $M, T=M\\Delta t$}, and pick an $\\vec{x}\\in S^M$, then we have\n\\begin{align*}\nP_{0:T}(\\vec{x})&=P(X_{T}=x_M,\\ldots, X_{0}=x_0)=\\mo(x_0)P(x_0,x_1)\\cdots P(x_{M-1},x_{M}).\n\\end{align*}\nSimilarly, by changing $P$ to $Q$, we can calculate the path probability for $Y_n$. \n\nAssuming one would prefer a path of length $T$ of the process $Y_{n}$ to infer results about a same length path of $X_{n}$, how much information about $X_{n}$ would be lost by such a method? This is a central question in coding theory and one way to quantify the information loss is through the idea of relative entropy (RE),\n\\begin{align}\n\\label{eq:RE}\nR(Q_{0:T}|P_{0:T}):=\\sum_{\\vec{x}\\in S^M}Q_{0:T}(\\vec{x})\\log\\frac{Q_{0:T}(\\vec{x})}{P_{0:T}(\\vec{x})}\n\\end{align}\nOur definition here is with respect to the path measures $P_{0:T}, Q_{0:T}$, but we can apply the relative entropy to more general probability measures too. For this object to be properly defined, we need to have that $Q_{0:T}$ is absolutely continuous with respect to $P_{0:T}$, that is $P_{0:T}(\\vec{x})=0$ implies $Q_{0:T}(\\vec{x})=0$. Other important properties of the relative entropy rate are the following : \\ouredit{\n\\begin{enumerate*}\n\\item $R(Q_{0:T}|P_{0:T})\\geq 0$ for any $Q_{0:T}, P_{0:T}$ (Gibbs' inequality),\n\\item $R(Q_{0:T}|P_{0:T})=0\\Leftrightarrow P_{0:T}=Q_{0:T}$.\n\\end{enumerate*}}\nNote though that the relative entropy does not qualify as a metric in the classical sense, as it is not symmetric and does not satisfy the triangle inequality. It can however still be thought of as a distance between distributions and is useful as a building block for other information measures. For a more complete exposition on relative entropy and its properties, see Cover and Thomas~\\cite{Cover-Thomas}.\n\n\\ouredit{Although the pathwise RE is a suitable quantity to measure the similarity of the two path-measures, it is computationally demanding to calculate, especially in the case of parallel KMC, where we do not have $Q_{0:T}$ and $P_{0:T}$ explicitly. For this reason, we look at a related object, the relative entropy per unit time, or relative entropy rate (RER).} Given a probability measure $\\nu_0$, $\\nu_0(\\vec{x})=\\nu_0(x_0), \\vec{x}\\in S^{T}$, \\ouredit{the RER with} respect to $\\nu_0$ \\ouredit{is defined as:} \n\\begin{align}\nH_{\\nu_0}(Q|P):=\\sum_{\\vec{x}\\in S^M} {\\nu_0}(\\vec{x})Q(x_0,x_1)\\log\\frac{Q(x_0,x_1)}{P(x_0,x_1)}.\n\\end{align}\nGiven another measure $\\mu_0$, we can use the chain rule for the relative entropy~\\cite{Cover-Thomas} to relate RE and RER as\n\\begin{align}\nR(Q_{0:T}|P_{0:T})&=R(\\mu_0|\\nu_0)+\\sum_{i=1}^{M}H_{\\nu_i}(Q|P),\\label{eq:gen_re_rer_rel}\\\\\n\\nu_k(x_0,\\ldots,x_{k-1})&=\\nu_{0}(x_0)\\prod_{m=1}^{k-1}Q(x_{m-1},x_{m}).\\nonumber\n\\end{align}\nIn particular, when sampling from the stationary distribution corresponding to $Q$, that is $\\nu_0=\\mb$, then $H_{\\nu_i}=H_{\\mb}=H$ for all $i$. Then, \n\\begin{align}\nH(Q|P)&=\\sum_{x_0,x_1\\in S}\\mb(x_0)Q(x_0,x_1)\\log\\frac{Q(x_0,x_1)}{P(x_0,x_1)}.\\label{eq:rer}\n\\end{align}\nThis also simplifies Equation~\\eqref{eq:gen_re_rer_rel} to\n\n\\begin{align}\nR(Q_{0:T}|P_{0:T})&=M\\cdot H(Q|P) + R(\\mb|\\mo).\\label{eq:rel_entropy_split_PK}\n\\end{align}\nIn~\\eqref{eq:rel_entropy_split_PK}, $R(\\mb|\\mo)$ is the relative entropy of $\\mb$ with respect to $\\mo$, capturing the loss of information between the exact and approximate stationary distribution. 
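\n\n\\noindent When the transition matrices $P$, $Q$ and the stationary distribution $\\mb$ are available explicitly (which is the case only for small test systems, not for parallel KMC), the RER in \\eqref{eq:rer} can be evaluated directly. The following minimal Python sketch, written for dense matrices and assuming the absolute continuity condition stated above, is purely illustrative and is not part of the estimators developed later.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef stationary(Q):\n    # stationary distribution of Q: left eigenvector for eigenvalue 1, normalized\n    w, v = np.linalg.eig(Q.T)\n    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])\n    return pi / pi.sum()\n\ndef rer(Q, P, mu_Q):\n    # H(Q|P) = sum_x mu_Q(x) sum_y Q(x,y) log( Q(x,y)/P(x,y) ),\n    # assuming P(x,y) = 0 implies Q(x,y) = 0 and using the convention 0*log(0) = 0\n    H = 0.0\n    n = Q.shape[0]\n    for x in range(n):\n        for y in range(n):\n            if Q[x, y] > 0.0:\n                H += mu_Q[x] * Q[x, y] * np.log(Q[x, y] / P[x, y])\n    return H\n\\end{verbatim}\n\n\\noindent 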
Note that $R(\\mb|\\mo)$ does not depend on the length of the path. Instead, the term that quantifies the dependence on $T$ is $H(Q|P)$. Therefore, any difference between the two stationary measures becomes negligible for large times, \\ouredit{which is a first advantage to studying the pathwise RE through the simpler RER.}\n\n\\subsection{Information metrics and observables}\nFurther justification for the fact that the RER is the right quantity to track can be given by considering time-averaged observables. For instance, if $f$ is a function of the state space, then such an observable would be\n$$M\\cdot F_{M}(\\{X_n:n=0,\\ldots,M-1\\})=\\sum_{k=0}^{M-1}f(X_k).$$\nAn important performance metric for the approximation is the weak error:\n\\begin{align}\n|\\mathbb{E}_{P[0,T]}[F_M]-\\mathbb{E}_{Q[0,T]}[F_M]|,\\ T=M\\Delta t.\n\\label{eq:weak_error_ta}\n\\end{align}\nIn recent work~\\cite{Dupuis}, uncertainty quantification (UQ) bounds have been developed for the weak error that are of the form:\n\\begin{equation}\n\\begin{aligned}\n\\Xi_{-}(Q_{[0,T]}\\| P_{[0,T]};M\\cdot F_M)\/M&\\leq \\mathbb{E}_{P[0,T]}[F_M]-\\mathbb{E}_{Q[0,T]}[F_M]\\\\\n&\\leq \\Xi_{+}(Q_{[0,T]}\\| P_{[0,T]};M\\cdot F_M)\/M. \n\\end{aligned}\n\\label{eq:uq_bounds_fin}\n\\end{equation}\nThe quantities $\\Xi_{\\pm}(Q_{[0,T]}\\| P_{[0,T]};M\\cdot F_M)$ are defined as goal-oriented divergences~\\cite{Dupuis}, taking into account the observable $F$, and such that $\\Xi_{\\pm}(Q_{[0,T]}\\| P_{[0,T]};M\\cdot F_M)=0$, if $Q_{[0,T]},=P_{[0,T]}$ or $f$ is deterministic. \\firstRef{Note that the bound in~\\eqref{eq:uq_bounds_fin} is robust, see Theorem 3.4 in \\cite{CD:13}, as well as \\cite{jie-scalable-info-ineq}: if we consider a positive $\\eta$ and all $\\Pb$ such that $R(\\Pb|\\Po)<\\eta$, then the upper bound in~\\eqref{eq:uq_bounds_fin} is attained.}\n\n\\firstRef{ Dividing~\\eqref{eq:uq_bounds_fin} by $M$ and letting $M$ go to infinity gives an inequality with respect to the stationary measures $\\mb,\\mo$ of the scheme, $\\Pb$, and the exact process, $\\Po$, respectively:\n\\begin{align}\n\\label{eq:weak-error-limit}\n\\xi_{-}(\\Pb\\|\\Po;f)\\leq \\mathbb{E}_{\\mb}[f]-\\mathbb{E}_{\\mo}[f]\\leq \\xi_{+}(\\Pb\\|\\Po;f),\n\\end{align}\nwhere $\\xi_{\\pm}(\\Pb\\|\\Po;f)=\\lim_{M\\to\\infty}\\Xi_{\\pm}(Q_{0:T}\\|P_{0:T};F)\/M$. But $\\xi_{\\pm}$ also admit a variational representation as\n\\begin{equation}\n\\label{eq:xi-variational}\n\\begin{aligned}\n\\xi_{+}(\\Pb\\|\\Po;f)&=\\inf_{c\\geq 0}\\left\\{\\frac{1}{c}[ \\lambda_{\\Pb,\\Po}(c)+H(\\Pb\\|\\Po) ]\\right\\},\\\\\n\\xi_{-}(\\Pb\\|\\Po;f)&=\\sup_{c\\geq 0}\\left\\{-\\frac{1}{c}[ \\lambda_{\\Pb,\\Po}(-c)+H(\\Pb\\|\\Po) ]\\right\\},\n\\end{aligned}\n\\end{equation}\nwith $\\lambda_{\\Pb,\\Po}(c)$ in \\eqref{eq:xi-variational} to be the logarithm of the maximum eigenvalue of the matrix with entries $\\Po(x,y)\\exp(c\\cdot (f(y)-\\mathbb{E}_{\\mo}[f]))$ (see~\\cite{jie-scalable-info-ineq} for details). 
Especially when $H(\\Pb|\\Po)$ is small and through the asymptotic expansion of $\\xi_{\\pm}$, an upper bound for the weak error at stationarity can be given (following the ideas in~\\cite{Dupuis, jie-scalable-info-ineq}):\n\\begin{align}\n|\\mathbb{E}_{\\mb}[f]-\\mathbb{E}_{\\mo}[f]|&\\leq \\sqrt{\\upsilon_{\\mo}(f)}\\sqrt{2H(\\Pb|\\Po)}+O(H(\\Pb|\\Po))\\label{eq:linearized-ineq},\\\\\n\\upsilon_{\\mo}(f)&=\\sum_{k=-\\infty}^{\\infty}\\mathbb{E}_{\\mo}[f(X_k)f(X_0)].\\label{eq:auto-correlation}\n\\end{align}\n}\n\n\\firstRef{\nInequality~\\eqref{eq:linearized-ineq} connects the long-time loss of accuracy that the weak error captures with the relative entropy rate and $\\upsilon_{\\mo}(f)$, which is the integrated auto-correlation function for the observable $f$ and a quantity we can estimate during the simulation. As a consequence of~\\eqref{eq:linearized-ineq}, any further results on the asymptotic behavior of $H(\\Pb|\\Po)$ with respect to $\\Delta t$ can be simply translated to the weak error point-of-view. \n}\n\n\\section{Long-time error behavior of splitting schemes}\n\\label{sec:Long_time}\nIn this section, we compare the RER between two different processes. One of them will always be the $\\Delta t$-skeleton process derived from the CTMC we wish to simulate, with transition probability\n\\begin{align}\n\\Po(\\sigma,\\sigma')=e^{L\\Delta t}\\delta_{\\sigma'}(\\sigma).\n\\label{eq:exact_proc}\n\\end{align}\n\\ouredit{This exact $\\Delta t$-process will be compared with the} $\\Delta t$-skeleton process derived from an \\ouredit{operator} splitting of \\eqref{eq:exact_proc}. \\ouredit{Such approximations will be denoted with $\\Pb$.}\n\\firstRef{We note here that the discretization \\eqref{eq:exact_proc}\nof the original Markov process with semigroup $e^{tL}$ with respect to $\\Delta t$ is only\ncarried out as a means to compare the original process with the\napproximations $\\Pb$. The transition kernel $\\Po$ is just a particular instance\nof the transition matrix of the continuous Markov process with semigroup $P_t=e^{tL}$, so there is no approximation error\n in \\eqref{eq:exact_proc}.\nIn fact, using the $\\Delta t$-skeleton corresponds to sub-sampling from the CTMC at every $\\Delta t$.\n}\n\n\nOur goal is to show the dependence of the RER to various quantities of interest that are usually computed for short-time error analysis. We will see that the commutator, the order of the local error, and other quantities, make an appearance in the asymptotic results we develop. We limit our discussion to the case that $\\Delta t$ is in $(0,1]$, as this is the interval where splitting schemes are most accurate. We also assume throughout this section that $L$ is a bounded operator. We will often refer to the splittings previously discussed, Lie and Strang, which define discrete processes with transition probabilities\n\\begin{equation}\n\\begin{aligned}\n\t\\Pbl(\\sigma,\\sigma')&=e^{L_1\\Delta t}e^{L_2\\Delta t}\\delta_{\\sigma'}(\\sigma),\\\\\n\t\\Pbs(\\sigma,\\sigma')&=e^{L_1\\Delta t\/2}e^{L_2\\Delta t}e^{L_1\\Delta t\/2}\\delta_{\\sigma'}(\\sigma).\n\\end{aligned}\n\\label{eq:splittings}\n\\end{equation}\nHere $L$ is the original generator and $L=L_1+L_2$ with $L_1, L_2$ \\ouredit{assumed bounded as operators}. \\ouredit{For instance, in the case of parallel KMC, $L_1,L_2$ will be imposed by the domain decomposition of the lattice, see Figure~\\ref{fig:lattice_decomp}.}\n\n\nBefore we move on to the analysis, we need to address a last issue. 
As mentioned before, our main tool will be asymptotic expansions of the RER with respect to $\\Delta t$. We will then use those to do comparisons for different $\\Delta t$, so it is important to first account for the scaling of RER with \\ouredit{respect to} that parameter. The situation can be best illustrated by the worst case scenario, when the order of the local error \\firstRef{between two Markov semigroups, $\\Pb^A,\\ \\Pb^B$, is equal to one. }\n\\begin{lemma}\n\\label{prop:rer_scaling}\nLet $L_A,L_B$ be bounded generators of Markov Processes, $L_A\\neq L_B$, \\firstRef{with corresponding transition probabilities $\\Pb^A,\\Pb^B$ }. Then, \n$$\nH(\\Pb^B|\\Pb^A)=O(\\Delta t).\n$$\n\\end{lemma}\n\\begin{proof}\n\\ouredit{Proof follows the ideas in Theorem~\\ref{th:spparks_th}. The argument is provided in the supplementary material.} \n\\end{proof}\n\n\n\\begin{remark}\nUsing Lemma~\\ref{prop:rer_scaling}, we can readily see that given \\ouredit{an operator} splitting scheme \\firstRef{$\\Pb$ that approximates the exact $\\Po$, we expect a scaling at least of the type\n$H(\\Pb|\\Po)=O(\\Delta t)$. To correct for the $\\Delta t$ scaling}, we will instead work with a $\\Delta t$-normalized RER. \\firstRef{That is, we re-define the RER as:}\n\\begin{align}\n\\label{eq:RER-normalized}\nH(\\Pb|\\Po)\\firstRef{:=}\\frac{1}{\\Delta t}\\sum_{\\sigma,\\sigma'}\\mb(\\sigma)\\Pb(\\sigma,\\sigma')\\log\\left(\\frac{\\Pb(\\sigma,\\sigma')}{\\Po(\\sigma,\\sigma')}\\right).\n\\end{align}\n\\end{remark}\n\n\\ouredit{We wish to use the RER (Equation~\\eqref{eq:RER-normalized}) to study the long-time loss of information between $\\Pb$ and $\\Po$. However, in the case of Parallel KMC, those are difficult to calculate explicitly, hence we turn to asymptotic expansions instead. We will see that the terms in those expansions depend on the transition rates and, under suitable ergodic assumptions, can be estimated during the simulation. }\n\n\n\\section{RER analysis for Parallel KMC}\n\\label{sec:IPS-PKMC}\nWe will now study an example from a class of interacting particle systems, limiting our discussion to the Lie and Strang splittings. Given two states $\\sigma,\\sigma' \\in S$ and $x$ lattice site, $\\sigma(x)\\in \\{0,1\\}$, we have that the transition rates $q$ are\n\\begin{align}\nq(\\sigma,\\sigma')=\\begin{cases}\nq(\\sigma,\\sigma^x)>0,& \\sigma'=\\sigma^x,\\\\\n0,& \\text{else.}\n\\end{cases}\n\\label{eq:rate_assumption}\n\\end{align}\nThe rates in Equation~\\eqref{eq:rate_assumption} provide a particular example of an adsorption\/desorption system. Other mechanisms can be incorporated into~\\eqref{eq:rate_assumption} such as diffusion, reactions with multiple components or with particles that have many degrees of freedom~\\cite{Arampatzis:2012}.\n\nGiven a lattice $\\Lambda$ with $N$ sites, we are interested in simulating the process $\\sigma_t=\\{\\sigma_t(x):x\\in \\Lambda\\}$ in parallel with an \\ouredit{operator} splitting method, so we apply the ideas in Section~\\ref{sect:ParallelLatticeKMC} to that end. We first decompose the lattice into non-overlapping sub-lattices \\ouredit{(see Figure~\\ref{fig:lattice_decomp})} and this induces a decomposition of the generator into new generators $L_1, L_2$ as in~\\eqref{eq:splitting_to_groups}. Then, for any $T>0$, the \\ouredit{adsorption\/desorption} system can be simulated in $[0,T]$ using the \\ouredit{parallel KMC algorithm}. 
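\n\n\\noindent As a concrete illustration of the schedule that the Lie splitting defines (cf.\\ Figure~\\ref{fig:pkmc}), the following Python sketch advances a one-dimensional spin-flip system by a single step $\\Delta t$: the sites of the first group are simulated by serial KMC for a time $\\Delta t$ with the remaining spins frozen, and then the same is done for the second group. The rate function, lattice size and block decomposition below are illustrative placeholders and not the model or the SPPARKS setup used in the experiments discussed later; in an actual parallel run, each sub-lattice of a group is advanced by its own process and boundary information is exchanged between the two sub-steps.\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, dt = 64, 0.1\nsigma = np.zeros(N, dtype=int)             # spin configuration on the lattice\n\ndef rate(x, s):\n    # illustrative adsorption/desorption spin-flip rate at site x;\n    # it depends only on the nearest neighbors of x (locality of the rates)\n    left, right = s[(x - 1) % N], s[(x + 1) % N]\n    return 1.0 + 0.5 * (left + right) if s[x] == 0 else 1.0\n\ndef kmc(sites, s, t_end):\n    # serial KMC restricted to the given sites, run until time t_end\n    t = 0.0\n    while True:\n        rates = np.array([rate(x, s) for x in sites])\n        total = rates.sum()\n        t += rng.exponential(1.0 / total)\n        if t >= t_end:\n            return s\n        x = sites[rng.choice(len(sites), p=rates / total)]\n        s[x] = 1 - s[x]                    # spin flip: sigma -> sigma^x\n\n# one Lie step: group G1 (even blocks) for dt, then group G2 (odd blocks) for dt\nblocks = np.array_split(np.arange(N), 8)\nG1 = [b for i, b in enumerate(blocks) if i % 2 == 0]\nG2 = [b for i, b in enumerate(blocks) if i % 2 == 1]\nfor block in G1:\n    sigma = kmc(block, sigma, dt)\nfor block in G2:\n    sigma = kmc(block, sigma, dt)\n\\end{verbatim}\n\n\\noindent 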
From the short-time error analysis, we can control the error by computing the commutator, $C(\\cdot,\\cdot)$, and the order of the local error that corresponds to the \\ouredit{operator splitting scheme} we use. For example, we know that for the Lie splitting that order is $p=2$ and $C(\\sigma,\\sigma')=[L_1,L_2]\\delta_{\\sigma'}(\\sigma)\/2$ (see Lemma~\\ref{lem:local_order_of_error} and \\ouredit{Equation~\\eqref{eq:comm-lemma-Lie}}). By using the properties of the generators $L_1, L_2$ along with our assumption in \\eqref{eq:rate_assumption}, we can show that \n\\begin{equation}\n\\begin{aligned}\nC(\\sigma, \\sigma')=\\ouredit{[L_1,L_2]\\delta_{\\sigma'}(\\sigma)\/2}= &\\frac{1}{2}\\sum_{x,y\\in \\Lambda} f_1(x,y;\\sigma)\\delta_{\\sigma'}(\\sigma^{x,y})- f_2(x,y;\\sigma)\\delta_{\\sigma'}(\\sigma^x)\\\\\n&-\\frac{1}{2}\\sum_{x,y\\in \\Lambda} f_3(x,y;\\sigma)\\delta_{\\sigma'}(\\sigma^y),\n\\end{aligned}\n\\label{eq:comm_Lie}\n\\end{equation}\nwhere $f_1,f_2$ and $f_3$ only depend on the transition rates $q$. We remind here that $\\sigma^{x,y}$ stands for the resulting state $\\sigma'$ after a spin-flip of an initial state $\\sigma$ at lattice sites $x,y, x\\neq y$. A full description of the above formula along with proof can be found in the supplementary material. \n\n\\begin{remark}\n\\label{rem:comm_prop}\n Formula \\eqref{eq:comm_Lie} for the Lie commutator has two important properties. First, it is computable for any pair $(\\sigma,\\sigma')\\in S\\times S$ \\firstRef{as it only depends on the transition rates $q$}. Second, it is surely equal to zero if $\\sigma'\\neq \\sigma^{x,y}$ and $\\sigma'\\neq \\sigma^x$ for all $x,y\\in \\Lambda, x\\neq y$, due to the $\\delta_{\\sigma'}$ appearing in the different sums. \\firstRef{We will also see that the sum in \\eqref{eq:comm_Lie} needs only to be evaluated for the neighboring lattice sites $x,y$ that are not both in the same group. For instance, in Figure~\\ref{fig:lattice_decomp}, we would only need to evaluate the sum over the green boundary regions of every sub-lattice, which makes the computation of the commutator much simpler (see Remark~\\ref{rem:scal_and_computation} for a complexity analysis). Those properties hold for commutators of other operator splitting schemes too, see~\\cite{Arampatzis:2014} and Section~\\ref{sect:conn}. }\n\\end{remark} \n\n\n\\ouredit{To study the asymptotic behavior of the RER, we will need to quantify the dependence of various combinations of $\\Po$ and $\\Pb$ to $\\Delta t$. To this end, we use the following facts, both of which stem from Lemma~\\ref{lem:local_order_of_error}.\n\\begin{align}\n\t\\Po(\\sigma,\\sigma')-\\Pb(\\sigma,\\sigma')&=C(\\sigma,\\sigma')\\Delta t^p+o(\\Delta t^p)\\label{eq:PminusQ},\\\\\n\t\\Po(\\sigma,\\sigma')+\\Pb(\\sigma,\\sigma')&=2\\delta_{\\sigma'}(\\sigma)+2q(\\sigma,\\sigma')\\Delta t+o(\\Delta t)\\label{eq:PplusQ}\\\\\n\t&=2\\Pb(\\sigma,\\sigma')+C(\\sigma,\\sigma')\\Delta t^p+o(\\Delta t^p)\\label{eq:PplusQ2}.\n\\end{align}}\nWe are now able to write an asymptotic result for RER for \\ouredit{the Lie and Strang operator splittings in parallel KMC under the assumption in Relation~\\eqref{eq:rate_assumption}.}\n\\begin{theorem}\n\\label{th:spparks_th}\nLet $\\Delta t\\in (0,1)$ and $\\sigma_{n\\Delta t}$ on the lattice $\\Lambda$ with transition probability $\\Po(\\sigma,\\sigma')=e^{L\\Delta t}\\delta_{\\sigma'}(\\sigma)$ for $\\sigma,\\sigma'\\in S$. Then, let $L_1+L_2$ be a splitting of $L$ based on a decomposition of the lattice $\\Lambda$. 
Assuming that property \\eqref{eq:rate_assumption} holds for the rates, then if there exists a state $\\sigma\\in S$ and lattice sites \\ouredit{distinct} $x,y$ such that the Lie commutator $C(\\sigma, \\sigma^{x,y})\\neq 0$, we have that\n\\begin{align}\nH(\\Pbl|\\Po)=O(\\Delta t^1) \\text{ (Lie)}.\n\\end{align}\nSimilarly, if there exists a state $\\sigma\\in S$ and \\ouredit{distinct} lattice sites $x,y,z$ such that $C(\\sigma, \\sigma^{x,y,z})\\neq 0$, \n\\begin{align}\nH(\\Pbs|\\Po)=O(\\Delta t^2) \\text{ (Strang)}. \n\\end{align}\n\\end{theorem} \n\n\\begin{proof}\n\\ouredit{We will first show the result for the Lie case and then note the differences in the proof for the Strang case. Thus, we denote $\\Pbl$ by $\\Pb$, $\\mu_{\\Lie}$ by $\\mb$, and consider a $\\Delta t\\in (0,1)$. As we wish to construct an asymptotic expansion for the RER (Equation~\\eqref{eq:RER-normalized}), we first need to expand the logarithm. Given a positive $x$} and by the definition of $tanh^{-1}$,\n\\begin{align}\n\\log(x)=2 \\mathrm{atanh}\\left(\\frac{x-1}{x+1}\\right)=2\\sum_{k=0}^{\\infty}\\frac{1}{2k+1}\\left(\\frac{x-1}{x+1}\\right)^{2k+1}.\n\\label{eq:i_arctanh_def}\n\\end{align}\nThis expansion of the logarithm converges for every $x>0$, as can be seen by applying the root convergence test. \\ouredit{Thus, expanding the logarithm part of the RER, we get:} \n\\begin{align}\n\\Delta t\\cdot H(\\Pb|\\Po)=& -2\\sum_{\\sigma,\\sigma'}\\mu_{Q}(\\sigma)\\Pb(\\sigma,\\sigma')\\frac{\\Po(\\sigma,\\sigma')-\\Pb(\\sigma,\\sigma')}{\\Pb(\\sigma,\\sigma')+\\Po(\\sigma,\\sigma')}\\label{eq:RER-expand}\\\\\n&+2\\sum_{\\sigma,\\sigma'}\\mu_{Q}(\\sigma)J(\\Delta t;\\sigma,\\sigma'),\\nonumber\\\\\nJ(\\Delta t;\\sigma,\\sigma')&:=\\Pb(\\sigma,\\sigma')\\sum_{k=1}^{\\infty}\\frac{1}{2k+1}\\left(\\frac{\\Pb(\\sigma,\\sigma')-\\Po(\\sigma,\\sigma')}{\\Pb(\\sigma,\\sigma')+\\Po(\\sigma,\\sigma')}\\right)^{2k+1} .\\label{eq:naive:F}\n\\end{align}\n\n \\ouredit{We will study the asymptotic behavior of both parts of the RER in Equation \\eqref{eq:RER-expand}. First, applying Equation~\\eqref{eq:PplusQ} to the denominator of the fraction in \\eqref{eq:RER-expand} and carrying out the simplifications, we have}\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\Delta t\\cdot H(\\Pb|\\Po)=&-2\\sum_{\\sigma,\\sigma'}\\mb(\\sigma)\\left(\\Po(\\sigma,\\sigma')-\\Pb(\\sigma,\\sigma')+G(\\Delta t;\\sigma,\\sigma')\\right)\\\\\n\t\t&+2\\sum_{\\sigma,\\sigma'}\\mb(\\sigma)J(\\Delta t;\\sigma,\\sigma')\\label{eq:proof4}.\n\t\\end{aligned}\n\\end{equation} \nNow, \\ouredit{since $\\Pb,\\Po$ are transition probabilities, $\\sum_{\\sigma'\\in S}\\Po(\\sigma,\\sigma')-\\Pb(\\sigma,\\sigma')=0$ for all $\\sigma\\in S$, and thus the corresponding part of Equation~\\eqref{eq:proof4} is zero.} To progress, we need to study the dependence on $\\Delta t$ of $J,G$. 
\\ouredit{First, for $G$ in Equation~\\eqref{eq:proof4},}\n\\begin{align}\nG(\\Delta t;\\sigma, \\sigma')=\\frac{(\\Po(\\sigma,\\sigma')-\\Pb(\\sigma,\\sigma'))C(\\sigma,\\sigma')\\Delta t^2}{\\left(2\\Pb(\\sigma,\\sigma')+\\Delta t^2C(\\sigma,\\sigma')+o(\\Delta t^2)\\right)}+o(\\Delta t^{2}).\n\\label{eq:proof6}\n\\end{align}\n\\ouredit{To expose the dependence of the numerator of \\eqref{eq:proof6} to $\\Delta t$, we use \\eqref{eq:PminusQ} to get}\n\\begin{align}\nG(\\Delta t;\\sigma, \\sigma')=\\frac{(C(\\sigma,\\sigma'))^2}{2\\Pb(\\sigma,\\sigma')+\\Delta t^2C(\\sigma,\\sigma')+o(\\Delta t^2)}\\Delta t^{4}+o(\\Delta t^{2}).\n\\label{eq:proof7}\n\\end{align} \nWe wish to show that $G(\\Delta t;\\sigma,\\sigma')=O(\\Delta t^2)$. From the explicit form of the commutator in \\eqref{eq:comm_Lie} and Remark \\ref{rem:comm_prop}, we can see that we only need to study $G$ in the cases that $\\sigma'=\\sigma^x$ or $\\sigma'=\\sigma^{x,y}$, given a state $\\sigma$ and lattice sites $x,y$, since otherwise $C(\\sigma,\\sigma')=0$. \\ouredit{Let us consider $\\sigma'=\\sigma^{x,y}$. Since the order of the local error is equal to two,} from expansion \\eqref{eq:gen_split_power_exp} and the fact that $L_Q[\\delta_{\\sigma^{x,y}}](\\sigma)=L[\\delta_{\\sigma^{x,y}}](\\sigma)$ and $L[\\delta_{\\sigma^{x,y}}]=q(\\sigma,\\sigma^{x,y})=0$ \\ouredit{(see the property in \\eqref{eq:rate_assumption})}, we have\n\\begin{align}\n\\Pb(\\sigma,\\sigma^{x,y})=\\frac{\\Delta t^2}{2}L^2_{Q}[\\delta_{\\sigma'}](\\sigma)+o(\\Delta t^2).\n\\label{eq:proof8}\n\\end{align}\nThus, applying~\\eqref{eq:proof8} \\ouredit{to the denominator} of~\\eqref{eq:proof7}, \n\\begin{align}\nG(\\Delta t;\\sigma, \\sigma^{x,y})&=\\frac{(C(\\sigma,\\sigma^{x,y}))^2}{\\Delta t^2\\cdot (L^2_{Q}[\\delta_{\\sigma^{x,y}}](\\sigma)+C(\\sigma,\\sigma^{x,y}))+o(\\Delta t^2)}\\Delta t^{4}+o(\\Delta t^{2})\\nonumber\\\\\n&=\\frac{(C(\\sigma,\\sigma^{x,y}))^2}{L^2_{Q}[\\delta_{\\sigma^{x,y}}](\\sigma)+C(\\sigma,\\sigma^{x,y})}\\Delta t^{2}+o(\\Delta t^{2})\n\\label{eq:proof9}\n\\end{align} \nBy similar calculations, we can show that $G(\\sigma,\\sigma^x)=O(\\Delta t^3)$, if $C(\\sigma,\\sigma^x)\\neq 0$ for that $x\\in \\Lambda$. Regardless, this would be a lower order, since $\\Delta t< 1$. Thus, $G(\\Delta t;\\sigma,\\sigma')$ \\ouredit{is indeed of order} $\\Delta t^2$. Next, we will account for $J(\\Delta t;\\sigma,\\sigma')$. If $\\sigma'=\\sigma^{x,y}$, then \n\\begin{align}\nJ(\\Delta t;\\sigma,\\sigma^{x,y})=\\Pb(\\sigma,\\sigma^{x,y})\\sum_{k=1}^{\\infty}\\frac{1}{2k+1}\\left(\\frac{\\Pb(\\sigma,\\sigma^{x,y})-\\Po(\\sigma,\\sigma^{x,y})}{\\Pb(\\sigma,\\sigma^{x,y})+\\Po(\\sigma,\\sigma^{x,y})}\\right)^{2k+1}.\n\\label{eq:proof:F_expansion}\n\\end{align}\n\\ouredit{Because $\\Pb(\\sigma,\\sigma^{x,y})=O(\\Delta t^2)$ and $\\Pb(\\sigma,\\sigma^{x,y})\\pm\\Po(\\sigma,\\sigma^{x,y})=O(\\Delta t^2)$, we get}\n$$\nJ(\\Delta t;\\sigma,\\sigma^{x,y})=O(\\Delta t^2),\n$$\nsince, for $\\sigma'=\\sigma^x$, $J(\\Delta t;\\sigma,\\sigma^x)=O(\\Delta t^4)$ and this is a lower order when $\\Delta t<1$. Therefore, $H(\\Pb|\\Po)=O(\\Delta t^1)$. Note that all of the terms of the series in \\eqref{eq:proof:F_expansion} contribute a term of order $\\Delta t^2$, so the coefficient of $\\Delta t^2$ in the asymptotic expansion of the RER will be a result of the summation of all those terms.\n\nFinally, we discuss the differences in our argument for the proof of the Strang case. 
First, the order of the local error for Strang is $p=3$, so every time we use formula \\eqref{eq:PminusQ} in the proof, we would introduce a term of order $\\Delta t^3$ instead of $\\Delta t^2$. Then, using an expression for $C(\\cdot, \\cdot)$ similar to \\eqref{eq:comm_Lie} but for the Strang case, we would show that \n$$J(\\Delta t;\\sigma,\\sigma^{x,y,z})=O(\\Delta t^3)=G(\\Delta t;\\sigma,\\sigma^{x,y,z})$$\n for $x,y,z\\in \\Lambda$ and $x\\neq y\\neq z$. This would then give the result for Strang.\n\\end{proof}\n\n\\subsection{Building biased a-posteriori estimators for the RER}\n\\label{sec:a-posteriori-RER}\n\\ouredit{Theorem~\\ref{th:spparks_th} shows that the long-time accuracy with respect to the RER of the two operator splitting schemes, Lie and Strang, scales with $\\Delta t$ in the same way the global error does. However, it also exposes the first terms in the asymptotic expansion of the RER for Lie and Strang. Essentially, }\n\\begin{align}\nH(\\Pbl|\\Po)&=A\\Delta t+o(\\Delta t)\\label{eq:RER-A}\\\\\nH(\\Pbs|\\Po)&=B\\Delta t^2+o(\\Delta t^2)\n\\end{align}\n\\ouredit{where $A,B$ are the corresponding highest order RER coefficients. These have an explicit form that depends on the system one wishes to simulate and the commutator $C(\\sigma,\\sigma')$ corresponding to the scheme. We focus on the case of the Lie operator splitting, though similar comments can also be made for Strang. For systems with transition rates satisfying the property in~\\eqref{eq:rate_assumption}, the highest-order coefficient $A$ appearing in~\\eqref{eq:RER-A} has the form:\n\\begin{align}\n\\label{eq:RER-Lie-top-order}\nA=\\sum_{\\sigma}\\mu_{\\Lie}(\\sigma)\\sum_{x,y\\in \\Lambda} C_{\\mathrm{Lie}}(\\sigma,\\sigma^{x,y})F_{\\mathrm{Lie}}(\\sigma,\\sigma^{x,y}),\n\\end{align}\nwhere $C_{\\mathrm{Lie}}$ is the Lie commutator (see Equation~\\eqref{eq:comm-lemma-Lie}) and $F_{\\mathrm{Lie}}$ is a quantity that depends on the splitting (see Equations~\\eqref{eq:exact_a} and~\\eqref{eq:exact_b} in the appendix for examples of how this $F$ can look for different splittings).} \\firstRef{Both $C$ and $F$ can be expressed in terms of the transition rates of the process $q$, i.e. they are computable for any state $\\sigma$ and $x,y\\in \\Lambda$. }\\ouredit{Therefore, $A$ in~\\eqref{eq:RER-Lie-top-order} can be estimated via an ergodic average when simulating with the Lie scheme and hence, for small $\\Delta t$, $H(\\Pbl|\\Po)\\simeq A\\Delta t$.}\n\n\\ouredit{At first glance, computing coefficient~\\eqref{eq:RER-Lie-top-order} involves work that scales with the size of the lattice}. \\ouredit{However, it was shown in Lemma~5.15 of~\\cite{Arampatzis:2014} that the commutator only depends on the boundary regions between sub-lattices (see Figure~\\ref{fig:lattice_decomp}). We will continue this discussion in Section~\\ref{sec:ising_example}, where we consider an adsorption-desorption system.} \n\\ouredit{We will also see that, apart from a comparison of the schemes in terms of the long-time loss of information, the estimators of RER can also be of use in tuning parameters of the scheme ($\\Delta t$, domain decomposition, etc.). 
We will then consider the behavior of the RER when simulating other systems in Section~\\ref{sect:conn}.}\n\n\\section{Error vs.\\ communication and time-step selection}\n\\label{sec:ising_example}\nIn this section, we explore the balance between numerical error and processor communication in Parallel KMC, \nin the context of a specific example.\nLet us assume a bounded two-dimensional lattice, $\\Lambda\\subset \\mathbb{Z}^2$ with $100\\times 100$ sites. At each site $x$, we have a spin variable, $\\sigma(x)\\in \\Sigma=\\{0,1\\}$, with $\\sigma(x)=0$ denoting an empty site and $\\sigma(x)=1$ an occupied one. Our model in this case is going to be an \\textit{adsorption-desorption} one, although the analysis would similarly apply to other mechanisms (diffusions, reactions, etc., see~\\cite{Arampatzis:2012} for more details). The transition rates we will use correspond to spin-flip Arrhenius dynamics. Given a lattice site $x$, we may also define the nearest-neighbor set $\\Omega_{x}=\\{z\\in \\Lambda:|z-x|=1\\}$. The transition rates are then\n\\begin{align}\nq(\\sigma,\\sigma^x)&=q(x,\\sigma)=c_1(1-\\sigma(x))+c_2\\sigma(x)e^{-\\beta U(x)},\\label{eq:ising_q}\\\\\nU(x)&=J_0\\sum_{y\\in \\Omega_{x}}\\sigma(y)+h,\n\\end{align}\nwhere $c_1, c_2, \\beta, J_0$ and $h$ are constants that can be tuned to generate different dynamics. We recall that $\\sigma^x$ denotes the result of a spin-flip at lattice position $x$ if we start from state $\\sigma$. Note that the transition rates \\eqref{eq:ising_q} have the property \\eqref{eq:rate_assumption}. \\ouredit{When considering a jump from $\\sigma$ to $\\sigma^x$, $q$ only depends on the spin values of the sites close to $x$ (through $U(x)$).} Since transitions are localized, we can thus employ a geometrical decomposition of the lattice, as described in Section \\ref{sect:FSKMC}, and simulate the system in parallel. To accomplish this, we used Sandia Labs' SPPARKS code, a Kinetic Monte Carlo simulator~\\cite{spparks}.\n\n\n\\ouredit{From Table~\\ref{tab:comm_cost} and Remark~\\ref{rem:scal_and_computation}, we can see that the cost of computing quantities that depend on the commutator scales as $O(N)$ for an $N\\times N$ lattice. As the highest order coefficients of the RER also depend on the commutator (see Section~\\ref{sec:a-posteriori-RER}), those also scale as $O(N)$. We can take advantage of the knowledge of the scaling by defining a per-particle RER (pp-RER)}. That is, \n\\begin{align}\n\\label{eq:pp-RER}\nH_{\\mathrm{pp}}(\\Pb|\\Po)\\ouredit{:=}\\frac{1}{N}H(\\Pb|\\Po).\n\\end{align}\nThis way, setting a tolerance for the pp-RER will have the same meaning across different system sizes. \\ouredit{We confirmed via simulation that $O(N)$ is the right scaling of the RER with respect to system size, as we saw that, for increasing $N$, $H_{\\mathrm{pp}}(\\Pb|\\Po)$ approaches a constant, i.e. it remains $O(1)$.}\n\nTo estimate the top-order coefficients of the pp-RER expansion, we simulated the system until convergence to the stationary distribution was established. After that, every sample simulated by SPPARKS~\\cite{spparks} was used to calculate the estimates. Note that, in this case, we show an over-estimate of $B$, so results for the Strang splitting will be even better than the ones presented in Figure~\\ref{fig:comparison_2d_ising}. It is possible to get an estimator that converges to the exact value of $B$ by adding all of the positive terms in $L_S^3[\\delta_\\sigma'](\\sigma)$ to the denominator of~\\eqref{eq:MB}. 
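\n\nTo make the estimation procedure concrete, the following minimal sketch (written in Python, with illustrative parameter values; the callable \\texttt{summand} is a placeholder for the commutator-dependent boundary term of Section~\\ref{sec:a-posteriori-RER}, not the exact expression used in our estimators) evaluates the Arrhenius rates \\eqref{eq:ising_q} on a periodic lattice and accumulates an ergodic average, normalized by system size as in \\eqref{eq:pp-RER}:\n\\begin{verbatim}\nimport numpy as np\n\ndef arrhenius_rate(sigma, x, y, c1=1.0, c2=1.0, beta=1.0, J0=1.0, h=0.0):\n    # spin-flip rate q(sigma, sigma^x) at site (x, y) for a periodic\n    # N x N occupation array sigma with entries in {0, 1}\n    N = sigma.shape[0]\n    nn = (sigma[(x + 1) % N, y] + sigma[(x - 1) % N, y]\n          + sigma[x, (y + 1) % N] + sigma[x, (y - 1) % N])\n    U = J0 * nn + h   # U(x): sum over the nearest-neighbor set plus h\n    s = sigma[x, y]\n    return c1 * (1.0 - s) + c2 * s * np.exp(-beta * U)\n\ndef estimate_pp_coefficient(samples, summand, N):\n    # ergodic average of summand(sigma) over the sampled states,\n    # divided by N as in the pp-RER; summand stands in for the\n    # commutator-dependent term evaluated on sub-lattice boundaries\n    total = 0.0\n    for sigma in samples:\n        total += summand(sigma)\n    return total \/ (len(samples) * N)\n\\end{verbatim}\n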
\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{images\/comparison_2d_ising_per_particle.eps}\n\\caption{Logarithmic scale - Comparison between $\\Delta t$ and the estimate of the per-particle RER for Lie \\& Strang. Estimates for the constants $A,B$ come from the simulation of a 2D Ising model on a $100\\times 100$ lattice with final time $T=1000$. Simulation was done in parallel with SPPARKS. }\n\\label{fig:comparison_2d_ising}\n\\end{figure}\nFigure~\\ref{fig:comparison_2d_ising} illustrates the difference in long-time accuracy between the two splittings. Since this is a logarithmic plot, most of the difference is made by Strang having a different order than Lie. \n\n\n\\begin{remark}[On the efficiency of computing \\ouredit{the highest order coefficients of the expansion of the RER for the Lie and Strang operator splittings.}]\n\\label{rem:scal_and_computation}\n\nIn the case of a checkerboard decomposition of the lattice (see Figure \\ref{fig:lattice_decomp}), we can calculate in exactly how many sites we need to evaluate the rates in order to calculate the commutator. However, for our purposes, upper bounds will be more appropriate. Table~\\ref{tab:comm_cost} offers a comparison of those bounds when we decompose a $N\\times N$ lattice into $m^2$ sub-lattices, assuming nearest neighbor interactions. Notice that the cost is larger for Strang due to the complexity of the corresponding commutator.\n\\begin{table}[H]\n\\centering\n\t\\begin{tabular}{|c|c|c|}\n\t\t\\hline\n\t\t& Lie& Strang\\\\\n\t\t\\hline\n\t\tUpper bound of the commutator cost& & \\\\(normalized by number of sites, $N^2$)& $2(m+1)\/N$& $6(m+1)\/N$\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Upper bounds \\ouredit{(normalized by lattice size)} on the number of lattice sites we need to evaluate the transition rates at in order to calculate the commutator for each \\ouredit{operator} splitting. Assuming that a checkerboard decomposition \\ouredit{into $m^2$ sub-lattices} of an $N\\times N$ lattice is used, as in Figure \\ref{fig:lattice_decomp}. The commutator also encodes the cost of communication between the processes. As $N$ grows, the cost of communication is smaller, as the processes spend more time simulating on the sub-lattices than updating each others boundaries.}\n\\label{tab:comm_cost}\n\\end{table}\n\\end{remark}\n\n\n\nOn a more practical note, a user of a splitting scheme may instead like to see the flipped relationship. That is, given a fixed tolerance, what is the maximum time window during which the simulation can run asynchronously? If we interpret tolerance as a fixed value of $H_{\\mathrm{pp}}(\\Pb|\\Po)$ during the simulation, then the relationship with $\\Delta t$ is the one in Figure~\\ref{fig:tol_vs_dt}.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{images\/comparison_tolerance_dt_2d_ising_per_particle.eps}\n\\caption{Comparison between tolerance and $\\Delta t$. The difference in order of the pp-RER between the two splittings allows for a larger splitting time-step $\\Delta t$ given a fixed tolerance. This is similar to the behavior of the error in~\\cite{Arampatzis:2014}, although the RER allows us to make this statement for $T\\gg 1$.}\n\\label{fig:tol_vs_dt}\n\\end{figure}\nThere we can see that if our error tolerance with respect to the pp-RER is $10^{-3}$, then any $\\Delta t$ smaller than 0.7 works for the Strang splitting. 
To get within the same tolerance with Lie, $\\Delta t$ has to be less than $0.02$, a substantially small step-size for parallel computations. As is expected, a smaller step-size comes with larger communication cost and thus a longer computation for the same tolerance. This can be seen in Figure~\\ref{fig:timing_comp}. \n\n\\begin{remark}\nFigures~\\ref{fig:tol_vs_dt} and~\\ref{fig:timing_comp} illustrate the very practical consequences of the theory. Interest in highly accurate splitting schemes in PL-KMC stems from a tolerance-versus-communication point of view. A user of such a scheme would like for it to be as accurate as possible, therefore the step size, $\\Delta t$, should be relatively small. However, for the scheme to be efficient, $\\Delta t$ should be large enough for every processor to have a substantial amount of work to do before communications are in order. A good balance can be reached in-between and a scheme that is more accurate allows for a larger $\\Delta t$ while holding the same error tolerance. Given that the RER captures long-time behavior, this is an important comparison between the schemes.\n\\end{remark}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{images\/timings.eps}\n\\caption{Percentage of time each scheme devotes to communication in a fixed time interval, $[0,T]$, for a square $N\\times N$ lattice when simulating an Ising-type system, using four processes and for $T=3000$. Note that for the $\\Delta t$ considered, the pp-RER tolerance is $10^{-3}$ for both schemes. Due to the considerably smaller step size of the Lie scheme, a larger chunk of time is devoted to communication. This is more apparent in the case of a moderately small lattice, $N=100$, where the time spent updating the other processes is over $60\\%$ of total time. Communication cost is more severe when $N$ is smaller. By Remark \\ref{rem:scal_and_computation}, as $N$ grows, communication should take less of the total time, as the processes spent more time simulating than updating their boundaries. }\n\\label{fig:timing_comp} \n\\end{figure}\n\n\n\n\\subsection{The per-particle RER as an efficient diagnostic quantity for parallel KMC}\n\\secondRef{The discussion above about the per-particle RER, \\eqref{eq:pp-RER} suggests the use of these estimates as efficient diagnostic quantities for comparing schemes. As discussed in the previous section, we can infer the scaling of the top-order coefficient of the RER by the properties of the commutator. Consequently, we can ``normalize'' the RER (as in~\\eqref{eq:pp-RER}) by that scaling to derive a similarity measure that does not depend on system size. This is significant as it allows practitioners to compare schemes and tune parameters ($\\Delta t$, domain decomposition, etc.) on a system of smaller size and thus avoid further slowing down of the target simulation, which is crucial for complicated systems.\nOverall, our approach can be viewed as a diagnostic tool that allows to compare different parallelization\nschemes based on operator splitting.\n}\n\n\n\\section{Some connections with Model Selection and Information Criteria}\n\\label{sec:info-crit}\n\\ouredit{The interacting particle system application considered in Section~\\ref{sec:ising_example}} allows us to look at the RER via a statistical lens. The goal is to compare two models, $\\Pb^1,\\Pb^2$, of the actual distribution $\\Po$ by utilizing simulated data. From this standpoint, our methodology is nothing more than model selection. 
There is an abundance of literature towards tackling the comparison of different models, given a sufficiently large amount of data. A prominent example is the use of information criteria in the Model Selection literature, like Akaike~\\cite{Akaike1} and Bayesian~\\cite{Bayes1}. Those provide estimates for the information lost \\ouredit{compared to a given data set} by using one approximate model instead of another, without requiring knowledge of the true model. \n\nThe approach in this work is very similar in nature. As stated before, motivated by Theorem \\ref{th:spparks_th}, we can express the RER in each case as \n\\begin{align*}\nH(\\Pb^i|\\Po)=A_i\\Delta t^{p_i}+o(\\Delta t^{p_i}), p_i\\geq 1, i\\in \\{1,2\\}.\n\\end{align*}\nFor instance, in the case of the Lie splitting, $A_1=A$ as defined in \\eqref{eq:exact_a}, $p_1=2$ and for Strang $A_2=B, p_2=3$, as defined in \\eqref{eq:exact_b}. Given simulated data and for a small fixed $\\Delta t$, we can estimate the \\ouredit{ coefficients $A_i$}. Comparison of the schemes can now be done through\n\\begin{align}\nH(\\Pb^1|\\Po)-H(\\Pb^2|\\Po)=A_1\\Delta t^{p_1}-A_2\\Delta t^{p_2}+o\\left(\\Delta t^{\\min(p_1,p_2)}\\right).\n\\label{eq:rer_info_crit}\n\\end{align}\nThe difference $A_1\\Delta t^{p_1}-A_2\\Delta t^{p_2}$ shares the properties of the information criteria previously mentioned while also introducing some new ones, namely: \n\n\\begin{enumerate}\n\\item It is a computationally tractable quantity. \n\\item Compares the schemes in terms of long-time information loss (through $p_1,p_2$). \n\\item Takes into account communication cost of each scheme (through $A_1, A_2$ and associated commutators).\\label{item:comm_cost}\n\\end{enumerate}\nThus, as an information criterion, RER differences like in Equation~\\eqref{eq:rer_info_crit} offer a different perspective through which to pick a splitting scheme over another. \\ouredit{A new element in our approach, compared to the earlier vast literature in Information Criteria, is the use of RER instead of the standard relative entropy. Using RER allows us to compare stochastic dynamics models and in a data context, correlated time series.}\n\n\n\\section{Generalizations, Connectivity, and Relative Entropy Rate}\n\\label{sect:conn}\n\\ouredit{\nUp to this point, we have analyzed the RER with respect to the leading order in $\\Delta t$ for the case of a stochastic particle system (see Theorem~\\ref{th:spparks_th}). In this section, we study the RER in a more general setting and illustrate that it captures more details about the system and the scheme used than one would expect. We will also see how the order of the RER can change depending on those details, resulting in some cases to schemes of higher accuracy. \n}\n\\begin{definition}[Restriction of a generator]\n\\label{def:restriction}\nLet us have set $A$ with $A \\subset S\\times S$ and $L$ be an infinitesimal generator of a Markov process with associated transition rates $q$. Then, the restriction $L|_{A}$ of $ L $ is defined as\n\\begin{equation}\n\\label{eq:restriction_definition}\nL|_{A}[f](\\sigma)=\\sum_{\\sigma'\\in S}q_A(\\sigma,\\sigma') \\left(f(\\sigma')-f(\\sigma)\\right),\\ \\sigma\\in S, \n\\end{equation}\nwhere $q_A(\\sigma,\\sigma')=q(\\sigma,\\sigma')\\cdot \\chi_A(\\sigma,\\sigma') $, $\\chi_A$ is the characteristic function of set $A$ and $f$ is a continuous and bounded function on the state space $ S $. 
\n\\end{definition}\n\n\\ouredit{\nWe assume that the operator $L$ is split into $L_1, L_2$, and that both are \\textit{restrictions} of $L$. Note that Definition~\\ref{def:restriction} is general enough to include the splittings used in PL-KMC. For example, the generators $L_1,L_2$ in~\\eqref{eq:splitting_to_groups} are precisely of that form, with the groups $G_i$ playing the role of the sets ``$A$''. From another point of view, restrictions respect the original process in that the transition rates that correspond to $L|_A$ are either the same as the old ones or zero. \n}\n\nBefore we can construct an asymptotic estimate for the relative entropy rate, we need to first introduce some of the tools we will use. Let $\\sigma, \\sigma'$ be states of a CTMC on a countable state space and let $q$ be the associated transition rates. Then, a path $\\vec{z}=(z_0,\\ldots, z_n)$ from $\\sigma$ to $\\sigma'$ is a finite sequence of \\ouredit{distinct states $z_i$ such that $z_0=\\sigma, z_n=\\sigma'$, and $\\prod_{i=0}^{n-1}q(z_i,z_{i+1})>0.$} The length of a path will be denoted by $|\\vec{z}|=|(z_0,\\ldots,z_n)|=n$ and we will use $\\mathrm{Path}(\\sigma\\to \\sigma')$ for the set of all paths from $\\sigma$ to $\\sigma'$. Thus, we are now able to define a distance between states by looking at the length of the shortest path that connects them.\n\n\\begin{definition}[Distance between states]\n\\label{def:distance}\nLet $q$ be the transition rates of a Continuous Time Markov Process over a countable state space $S$. Then, let $\\sigma,\\sigma'\\in S$, $\\sigma\\neq \\sigma'$. The distance $d_{q}$ between the two states is defined as \n\\begin{align}\nd_{q}(\\sigma,\\sigma'):=\\min\\left\\{|\\vec{z}|:\\vec{z}\\in \\mathrm{Path}(\\sigma\\to \\sigma')\\right\\}.\n\\label{eq:state_distance}\n\\end{align}\nIf the two states are disconnected, i.e. $\\mathrm{Path}(\\sigma\\to \\sigma')=\\emptyset$, then $d(\\sigma,\\sigma')=+\\infty$. Given those distances, one can also define the diameter of the space as\n$$\n\\mathrm{diam}(S)=\\max_{(\\sigma,\\sigma')\\in S\\times S}\\{d(\\sigma,\\sigma')\\}.\n$$\n\\end{definition}\nThis notion of distance comes from graph theory and is known as the geodesic distance. When there is no ambiguity concerning the transition rates used, we will drop the $q$ from the notation, using $d$ instead of $d_{q}$. Note that $d$ is not a metric in the classical sense, since it does not have to be symmetric, that is, $d(\\sigma,\\sigma')\\neq d(\\sigma',\\sigma)$ in general. However, it satisfies the triangle inequality. In addition, the distances depend only on the transition rates, i.e. they are time independent. We will refer to those distances as the \\textit{connectivity} of the state space for the Markov Chain with transition rates $q$. The importance of \\ouredit{using such a distance} can be seen in the following result concerning compositions of the infinitesimal generator $L$.\n\\begin{lemma}\n\\label{le:supp_oper_pow_control}\nLet $L$ be an infinitesimal generator \\ouredit{of a Markov process, with corresponding transition rates $q$ and let $\\sigma'$ be some state of the process. Then,}\n\n\\begin{align*}\n\\{\\sigma:L^{n}[\\delta_{\\sigma'}](\\sigma)\\neq 0\\}\\subseteq \\{\\sigma:d(\\sigma,\\sigma')\\leq n\\}=B_{n}(\\sigma').\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction. 
Argument can be found in supplementary material.\n\\end{proof}\n\nIn other words, for a fixed state $\\sigma'$, if $d(\\sigma,\\sigma')>n$ then $L^n[\\delta_\\sigma'](\\sigma)=0$. The set $B_n(\\sigma')$ contains all states that are connected with $\\sigma'$ with $n-2$ or less in-between states. We will also use the notation $S_n(\\sigma'):=\\{\\sigma:d(\\sigma,\\sigma')=n\\}$.\n\nSince our primary interest is in studying approximations based on splitting our generator $L$ to $L_1, L_2$, it makes sense to have an extension of the previous result to compositions of $L_1, L_2 $. The following lemma is the generalization of Lemma \\ref{le:supp_oper_pow_control} to compositions of restrictions. We will use the notation $L^k|_A$ to denote the $k$th composition of generator $L$ where, instead of the original transition rates, we use $q_A$. \n\n\\begin{lemma}\n\\label{lem:compo_support}\nLet us have the state space $S$ and $S\\times S=A\\cup B, A\\cap B=\\emptyset$, along with generators $L_1=L|_{A}$, $L_2=L|_{B}$. We fix $\\sigma'\\in S$ and $k, m\\in \\mathbb{N}$. Then, \n\\begin{align*}\n\\left\\{\\sigma:L_1^{k}\\left[L_2^{m}[\\delta_{\\sigma'}]\\right](\\sigma)\\neq 0\\right\\}\\subseteq\\{\\sigma:d(\\sigma,\\sigma')\\leq k+m\\}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nInduction argument similar to that of Lemma \\ref{le:supp_oper_pow_control}, see supplementary materials.\n\\end{proof}\n\nLemma \\ref{lem:compo_support} can be simply extended to more complicated compositions by the use of similar arguments. Thus, if every composition of $L_1, L_2$ is controlled in the sense of Lemma \\ref{lem:compo_support}, then it is not difficult to see that the same control holds for collections of them of the same order, i.e. if we fix $\\sigma'\\in S$ and $k\\in \\mathbb{N}$,\n\\begin{align}\n\\{\\sigma:L^k_Q[\\delta_\\sigma'](\\sigma)\\}\\subseteq \\{\\sigma:d(\\sigma,\\sigma') p+1$ and then $C(\\sigma,\\sigma')=0$ (from Lemma \\ref{lem:comm_support}).\n\\end{proof}\n\n\\ouredit{The assumption on the commutator in Theorem~\\ref{th:main_result} is simple to check for parallel KMC, as we can write down the commutator $C(\\sigma,\\sigma')$ explicitly. For example, for Lie, $C(\\sigma,\\sigma')$ is given by Equation~\\eqref{eq:comm_Lie}, so checking the assumption is just a matter of calculation. Additionally, to find the bounded diameter $\\hat{k}=\\min\\{\\mathrm{diam}(S),p\\}$, it is sufficient to have lower bounds for the diameter, $\\mathrm{diam}(S)$, as the order of the local error of the scheme, $p$, will typically be much smaller. Example~\\ref{sec:Markov-Chain-Example} shows a case where $p$ is close to $\\mathrm{diam}(S)$ and the implications this has for the RER.\n}\n\n\\subsection{Markov chain example}\n\\label{sec:Markov-Chain-Example}\nIn order to illustrate the connectivity-RER relation, we are studying a simple example where we can compute the RER and all related quantities explicitly, either by hand or any symbolic algebra system. All calculations of the RER in this example are not from sampling but by using definition \\eqref{eq:rer}.\n\nWe study the case of a Markov process with transition rate matrix, $Q$ and $\\mathrm{diam}(S)=2$. 
\\ouredit{We consider a positive $\\Delta t$, $\\Delta t<1$, and}\n$$\nQ=\\left(\n\\begin{array}{ccc}\n -3 & 1 & 2 \\\\\n 3 & -4 & 1 \\\\\n 1 & 0 & -1 \\\\\n\\end{array}\n\\right).\n$$\nGiven this, we can calculate the transition probability matrix of the Markov chain as the matrix exponential of $Q$, $\\Po(\\sigma,\\sigma')=\\exp(\\Delta t Q)\\delta_{\\sigma'}(\\sigma)$. Our system has diameter equal to two since $Q_{3,2}=0$ but $Q_{3,1}\\cdot Q_{1,2}\\neq 0$. We can construct approximations of $\\Po$ by splitting $Q$ into \\ouredit{components $A,B$ with $Q=A+B$, similarly to how we expressed the generator $L$ as $L_1+L_2$}. One way to do this is\n$$\nA=\\left(\n\\begin{array}{ccc}\n -3 & 1 & 2 \\\\\n 3 & -4 & 1 \\\\\n 0 & 0 & 0 \\\\\n\\end{array}\n\\right), B=\\left(\n\\begin{array}{ccc}\n 0 & 0 & 0 \\\\\n 0 & 0 & 0 \\\\\n 1 & 0 & -1 \\\\\n\\end{array}\n\\right).\n$$\nThus, one approximation of $\\exp(Q\\Delta t)$ could be $\\exp(A\\Delta t)\\exp(B\\Delta t)$, which corresponds to the Lie splitting. From Theorem \\ref{th:main_result}, since $\\mathrm{diam}(S)=p=2$, we expect $H(\\Pbl|\\Po)=O(\\Delta t^1)$. This is indeed the case, as\n$$\nH(\\Pbl|\\Po)\\simeq 0.124 \\Delta t-0.0566 \\Delta t^2+O\\left(\\Delta t^3\\right).\n$$\nThe use of $\\simeq$ comes from a truncation of the coefficients to three significant digits. We can work similarly with the Strang splitting, now using $\\exp(A\\Delta t\/2)\\exp(B\\Delta t)\\cdot \\exp(A\\Delta t\/2)$ as the approximation to $\\Po$. The local order of the Strang splitting is $p=3$, so we expect that $H(\\Pbs|\\Po)=O(\\Delta t^{2\\cdot 3-3})=O(\\Delta t^3)$ (see Theorem~\\ref{th:main_result}). This can be readily demonstrated by a calculation of the RER, followed by the derivation of its asymptotic expansion:\n$$\nH(\\Pbs|\\Po)\\simeq 0.0279 \\Delta t^3+0.000672 \\Delta t^4+O\\left(\\Delta t^5\\right).\n$$\n\n\n\n\\section{Quantifying information loss in transient regimes}\n\\label{sec:transient}\n\\ouredit{In this last section, we consider the case where we wish to study the performance of the operator splitting scheme in a transient regime, before convergence to the stationary distribution takes place. Note that in the proofs of Theorems~\\ref{th:spparks_th} and~\\ref{th:main_result}, we derived the asymptotic expressions of the various quantities without referring to the stationary measure $\\mb$. Therefore those results do not depend on the choice of the sampling measure. That is, with the assumptions of Theorem~\\ref{th:main_result} and $\\nu$ a probability distribution on the state space $S^M$ such that $\\nu(\\sigma)>0$ for all states $\\sigma$, then \n\\begin{align}\nH_{\\nu}(\\Pb|\\Po)=\\sum_{\\sigma\\in S^M}\\nu(\\sigma)\\Pb(\\sigma_0,\\sigma_1)\\log\\frac{\\Pb(\\sigma_0,\\sigma_1)}{\\Po(\\sigma_0,\\sigma_1)}=O(\\Delta t^{2p-\\hat{k}}).\n\\label{eq:general_rer}\n\\end{align}\n}\n\\ouredit{Therefore, the order of the RER is independent of the sampling measure. 
As a result, we obtain Theorem~\\ref{th:general_initial_dist}, an extension of Theorem~\\ref{th:main_result} to transient time regimes.}\n\\begin{theorem}\nWith the assumptions of Theorem~\\ref{th:main_result} for the RER, we have that for any $T>0$:\n\\begin{align}\n\\frac{R(Q_{0:T}|P_{0:T})}{T} = \\frac{R(\\mu_0|\\nu_0)}{T}+O(\\Delta t^{2p-\\hat{k}}).\n\\end{align}\n\\label{th:general_initial_dist}\n\\end{theorem}\nTheorem~\\ref{th:general_initial_dist} is implied by the decomposition of the relative entropy in terms of rates that depend on $\\nu_i$ (first discussed in Section~\\ref{ssec:info_theory_concepts}). \\ouredit{If $M$ is a positive integer, $\\Delta t$ is the scheme's time-step, and $T=M\\Delta t$}, then\n\\begin{align}\nR(Q_{0:T}|P_{0:T})&=R(\\mu_0|\\nu_0)+\\sum_{i=1}^{M}H_{\\nu_i}(\\Pb|\\Po).\n\\label{eq:gen_re_rer_rel_2}\n\\end{align}\n\\begin{proof}[Proof of Theorem~\\ref{th:general_initial_dist}]\nFrom Equation~\\eqref{eq:general_rer} we have that the order of the RER does not depend on the sampling measure $\\nu$, as long as $\\nu(\\sigma)>0$ for all $\\sigma$. Therefore, $H_{\\nu_i}(\\Pb|\\Po)=O(\\Delta t^{2p-\\hat{k}})$ for $i=1,\\ldots,M$. This, combined with Equation~\\eqref{eq:gen_re_rer_rel_2}, implies the result. \n\\end{proof}\n\n\\ouredit{Therefore, our results about the RER are applicable to parallel KMC even for practitioners who are interested in simulating the dynamics in the transient regime.}\n\n\\firstRef{\n\\begin{remark}[Relative Entropy Rate vs.\\ path-wise Relative Entropy]\nIn Section~\\ref{ssec:info_theory_concepts}, we saw that, in the stationary regime, we can relate the path-wise relative entropy with the RER via\n$$\nR(Q_{0:T}|P_{0:T})=TH(\\Pb|\\Po)+R(\\Pb|\\Po).\n$$\nIn this section, we connected the RER with the RE for transient regimes by using Relation~\\eqref{eq:gen_re_rer_rel_2}. Ultimately, those relations motivate the use of the RER as an information criterion in place of the path-wise RE, but there are other advantages too: \n\\begin{enumerate}\n\\item The RER does not depend on the length of the simulated path. Additionally, it can be estimated from a single path, while the path-wise RE requires several. \n\\item For large $T$, the RE and RER encapsulate the same amount of information about the similarity of $\\Pb$ and $\\Po$. \n\\end{enumerate}\n\\end{remark}\n}\n\n\\section{Conclusions}\n\\label{sec:Conc}\nWe introduced the relative entropy rate (RER), i.e.\\ path space relative entropy per unit time, as a means to quantify the long-time accuracy of splitting schemes for stochastic dynamics and in particular Parallel KMC algorithms. \nWe demonstrated, using {\\em a posteriori} error expansions, the dependence of RER on the following elements: the local error analysis of the splitting schemes captured by \nthe operator commutators; the local error order $p$ and \nthe splitting time step $\\Delta t$, which in the case of Parallel KMC controls the asynchrony between processors; the diameter of the graph associated with the approximated Markov jump process.\n\nBased on this analysis, we showed that RER defines a computable path-space information criterion that allows one to compare, select and design different splitting schemes, taking into account both error tolerance (e.g. accuracy of the scheme) and practical concerns such as asynchrony and processor communication cost. 
\\ouredit{It is also appropriate to think of the RER as a diagnostic quantity that can be estimated on systems of smaller size and consequently be used to compare schemes and tune parameters without slowing down the target simulation.}\n\nFinally we note that numerical analysis of stochastic systems is typically concerned with controlling the weak error for observable functions $\\phi$,\n\\begin{align}\n\\sup_{0\\leq n\\leq N}\\left | \\mathbb{E}_{P_{0:T}}[\\phi(X(n\\Delta t))] -\\mathbb{E}_{Q_{0:T}}[\\phi(X_n)]\\right|,\n\\label{eq:weak_error}\n\\end{align}\nwhere $X_{n}$ represents the approximate chain and $X(n\\Delta t)$ the $\\Delta t$-skeleton chain of the exact process, $T=M\\cdot \\Delta t$. However, our results measure the information loss on path space between \nthe approximate chain and the $\\Delta t$-skeleton chain of the exact process, using RER. Controlling RER also implies upper bounds for observables at long times, using uncertainty quantification information inequalities developed in~\\cite{Dupuis, jie-scalable-info-ineq}. We also showed how those results can be extended to finite-time regimes.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nGraphene, the two dimensional allotrope of carbon, has drawn an\nenormous amount of attention in the literature after its first\nisolation on an oxide substrate \\cite{ksn04,ksn05}. Apart from the\ntheoretical physics point of view \\cite{akg07,ksn06}, graphene has\nemerged as a possible candidate for different electronic devices\nincluding field effect transistors \\cite{mcl07}-\\cite{yml10}.\nHowever, the small bandgap of graphene reduces the controllability\nof such devices and thus limits its widespread applications.\nGraphene Nanoribbon (GNR), on the other hand, a quasi-one\ndimensional strip of graphene, has been shown to provide a\nsignificant bandgap \\cite{kn96}-\\cite{lb106} and hence is being\nconsidered as a channel material in field effect transistors\n\\cite{bo06}-\\cite{fmr08}.\n\nMore recently, it has been theoretically shown that the bandgap of a\ngraphene nanoribbon can be tuned significantly by the application of\nan external field along the width of the ribbon\n\\cite{son06,dsn07,hr08} and at sufficiently large field, it is also\npossible to collapse the gap of the ribbon. In this work, we extend\nthis result in a more generic way for both semiconducting and\nmetallic Armchair Graphene NanoRibbon (A-GNR) and demonstrate that\nnot only the magnitude of the bandgap, but the whole electronic\nstructure of the nanoribbon can be altered significantly depending\non the magnitude and polarity of the external bias. In particular,\nwe will show that it is possible to obtain a bias dependent direct\nto indirect bandgap transition in such a nanoribbon. This kind of\ncontrol over the electronic structure by the application of an\nexternal bias opens up the possibility of new device applications.\n\nA schematic of the setup that we consider in this work is shown in\nFig. \\ref{fig:schematic}, where an Armchair Graphene NanoRibbon\nA-GNR of width $W$ is sandwiched between a left gate (G$_L$) and a\nright gate (G$_R$). There is a third contact $V_c$ which keeps the\nchemical potential of the A-GNR at zero. In this work, we will\nprimarily focus on semiconducting nanoribbons, i.e., the number of\ndimers ($N$) along the width, is of the form $3M$ and $3M+1$. Note\nthat, $N=3M-1$ gives rise to metallic nanoribbons \\cite{lb06,lb106}\nand is briefly analyzed in this work. The gate terminal in each gate\nstack is separated from the GNR by a dielectric. We assume the\nEquivalent Oxide Thickness (EOT) of the gate dielectric to be 1nm.\nThe interfaces between the GNR and the gate dielectric are assumed\nto be perfect, hence the electronic structure of the GNR is not\naltered significantly. The gate dielectric confines the electrons\nand holes in the GNR along the $x$ direction, however the $y$\ncomponent of the states can be obtained using longitudinal wave\nvector $k_y\\equiv k$. The work function of the gate metal is assumed\nto be such that a zero flatband voltage is obtained. The external\nbiases $V_l$ and $V_r$ at the left and the right gates respectively\ncan be varied independently. The application of an external bias\nchanges the potential energy $U(x)=-q\\phi(x)$ inside the GNR,\naltering the electronic structure.\n\nWe now provide the details of the calculation procedure used to\ninvestigate the effect of external bias on an A-GNR. The\nself-consistent electronic structure of the A-GNR is determined by\nusing tight binding method (\\cite{rs98, jcs54}) coupled with the\nPoisson equation. 
Taking the left edge of the GNR at $x=0$ and the\nplane of the GNR as $z=0$, the charge density is given by\n$\\rho(x,z)=qn(x,z)$, where $q$ is the electronic charge and $n(x,z)$\nis obtained as the difference between the hole [$n_h(x,z)$] and\nelectron [$n_e(x,z)$] density as \\begin{equation}\\label{eq:n}\nn(x,z)=2\\left[\\sum_{i,\\bar{k}}(1-f(E_i(\\bar{k})))|\\psi_i^{\\bar{k}}(x,z)|^2\n-\\sum_{j,\\bar{k}}f(E_j(\\bar{k}))|\\psi_j^{\\bar{k}}(x,z)|^2\\right]\n\\end{equation} where $f(E)=\\frac{1}{1+e^{(E-\\mu)\/k_BT}}$ is the Fermi-Dirac\nprobability at temperature $T$. Here $\\bar{k}$ goes over the whole\nfirst Brillouin Zone, $i$ and $j$ are the valence and conduction\nband indices respectively. The chemical potential $\\mu$, set by the\ncontact $V_c$, is taken to zero. $E_{i}(\\bar{k})$ is the energy\neigenvalue of the state $(i,\\bar{k})$ obtained from the tight\nbinding bandstructure taking only $p_z$ orbital into account, with\nan intra-layer overlap integral, $S$ = 0.129 between two nearest\ncarbon atoms and the intra-layer hopping $t$ as $-$3.033eV\n\\cite{rs98}. Note that, the results obtained from the nearest\nneighbor calculation are in close agreement with simulation that\ntake into account coupling terms up to the third nearest neighbor\n(see supporting information). To obtain the wavefunction\n$\\psi_i^{\\bar{k}}(x,z)$, we assume normalized Gaussian orbital as\nthe basis function, where the parameter of the basis function is\nfitted using the parameter $S$. The wavefunctions are set to zero at\nthe dielectric interfaces indicating an infinite potential barrier.\nOnce self-consistency is achieved between the bandstructure\ncalculation and the Poisson equation for a given gate bias, the\nenergy eigenvalues at different $k$ points correspond to the\nelectronic structure of the A-GNR.\n\nWe take an A-GNR with $N=36$ ($W=4.55$nm) and consider three\nrepresentative bias conditions, namely, (i) $V_l=-V_r$, (ii) $V_l >\n0$, $V_r=0$ and (iii) $V_l < 0$, $V_r=0$. We now present the results\nin these three cases as shown in Fig.\n\\ref{fig:gap_eh}-\\ref{fig:gap_vlr}.\\\\\n\n\\textbf{Case (i):} In this case, the two gate voltages are\nanti-symmetric in nature, i.e., $V_l=-V_r=V_g$. We observe a\nsignificant reduction of bandgap in Fig. \\ref{fig:gap_eh}(a) and (b)\nwith an increase in $V_g$, and this has also been predicted in\n\\cite{dsn07,hr08}. It is observed that at sufficiently large $V_g$,\nboth the conduction and valence band edges shift from $k=0$, giving\na {\\it `Mexican Hat'} shape around $k=0$. Fig. \\ref{fig:gap_eh}(b)\nclearly shows a threshold like behavior of the bandgap change\n\\cite{dsn07}, and as the band edges shift from $k=0$ (non-zero\n$\\Delta k$), the bandgap starts decreasing significantly with bias.\nHowever, the particle-hole symmetry is almost conserved (the small\nasymmetry in Fig. \\ref{fig:gap_eh}(a) is due to the non-zero overlap\n$S$ assumed between two nearest neighbor carbon atoms in the\nhoneycomb lattice) and hence the bandgap continues to remain direct\nin nature, at any bias condition. Note that, the anti-symmetric bias\ncondition forces the GNR to retain its charge neutral condition with\nsimilar electron and hole density, keeping the total effective\ncharge density very low. $\\phi(x)$, dictated by the Poisson's\nequation, thus remains almost linear (uniform field) along $x$, as\nshown in Fig. \\ref{fig:gap_eh}(c). 
Note that bias dependent bandgaps\nmatch very well with one of the previously published reports based\non non-selfconsistent calculations \\cite{hr08} and this linearity of\n$\\phi(x)$ along the width of the nanoribbon is the reason of this\nunexpected close match.\n\n\\textbf{Case (ii):} In this case, the left gate is kept at positive\nbias, keeping the right gate grounded and the results are shown in\nFig. \\ref{fig:gap_e}(a)-(c). The bandstructure shows a dramatic\nchange as compared with case (i). With an increase in $V_l=V_g$, the\nconduction band minimum shifts from $k=0$, giving rise to a {\\it\n`Mexican Hat'} shape around $k=0$. However, the valence band maximum\ncontinues to remain at $k=0$, irrespective of $V_g$. Thus, at any\n$V_g$ for which $\\Delta k > 0$, the GNR has an indirect bandgap. The\ndirect and indirect bandgap regions are indicated in Fig.\n\\ref{fig:gap_e}(b). The magnitude of the bandgap continues to show a\nthreshold-like behavior as before, with an increased sensitivity of\nbandgap when the system becomes indirect. Note that, the spatial\ndistribution of $\\phi(x)$ along the width of the nanoribbon is\nseverely non-linear (non-uniform field) to support the increased\nelectron density inside the GNR that arises due to the conduction\nband edge moving closer to the chemical potential. We will later\npoint out that it is this strong non-linearity of $\\phi(x)$ that\ncauses such a direct to indirect bandgap transition.\n\n\\textbf{Case (iii):} A similar case like (ii) can be constructed\nwhere the left gate is at negative bias, with the right gate\ngrounded. The bandstructure in such a scenario is shown in Fig.\n\\ref{fig:gap_vlr}(a) where the conduction band minimum continues to\nremain at $k=0$, and the valence band maximum shifts away from\n$k=0$, depending on the external bias. In this case, the valence\nband edge moves closer to the chemical potential resulting in a\nrelative increase in the hole density.\n\nNote that the corrections due to the second and the third nearest\nneighbor interactions contribute at relatively large values of $k$,\naway from the zone center \\cite{sr02}. However, the band edge shift\n($\\Delta k$) from the zone center ($k=0$), in all the above\nscenarios, are small ($\\sim 5\\%$) compared to the size of the\nBrillouin zone. Hence, the calculations with nearest neighbor\ninteractions are accurate enough to predict such direct to indirect\nbandgap transition.\n\nIn Fig. \\ref{fig:gap_vlr}(b), we generalize this result and show the\ntransition from direct to indirect bandgap in the ($V_l$,$V_r$)\nspace. We compute the absolute difference of the $k$ values of the\nconduction band minimum and the valence band maximum for any\narbitrary combination of $V_l$ and $V_r$. This is plotted as a\nfunction of ($V_l$,$V_r$) in Fig. \\ref{fig:gap_vlr}(b). A zero value\n(dark color) indicates direct bandgap, whereas a non-zero value\n(lighter color) represents indirect bandgap region. We clearly\nobserve that in the ($V_l$,$V_r$) space, there are symmetric pockets\nof indirect bandgap regions, with the chosen cases (ii and iii) are\nthe most favorable conditions to obtain such a bandgap transition.\\\\\n\nIn the case of a metallic A-GNR, it is interesting to note that an\nasymmetric external electric field along the width opens a small\nbandgap at the zone center. Fig. 
\\ref{fig:metallic}(a) shows a\ndirect bandgap of $\\sim 18$meV for a metallic A-GNR with $N=35$\nunder a bias of 2.8V at the left gate, while grounding the other.\nHowever, as shown in Fig. \\ref{fig:metallic}(b), at larger bias,\nthis bandgap tends to become indirect accompanied with a reduction\nin its magnitude. We do not observe such an effect in the case of\nanti-symmetric bias condition, in agreement with \\cite{dsn07}.\n\nThe external bias dependent direct to indirect bandgap transition,\ncoupled with the change in magnitude of the bandgap can have\nsignificant effects in phenomena including band-to-band tunneling,\nelectron-phonon interaction and optical properties. Such an external\nbias dependent tailoring of the electronic structure can provide us\nwith the possibility of a wide variety of fascinating electronic and\noptoelectronic device applications.\\\\\n\nNow, to get more insights, we present a theoretical analysis of the\nphenomenon by starting from the Dirac equation\n\\cite{lb06,lb106,dsn07}. We write the low energy states\n$\\Psi(\\bar{r}) = e^{ik_0x}\\psi_{+}(\\bar{r}) +\ne^{-ik_0x}\\psi_{-}(\\bar{r})$ in terms of smoothly varying envelop\n$\\psi=\\{\\psi_{+},\\psi_{-}\\}$. $\\psi_{+}$ and $\\psi_{-}$ have\ncomponents on the $A$ and $B$ sublattices in the honeycomb lattice\nwith $k_0=-4\\pi\/3a_0$ and $a_0=2.44$nm \\cite{dsn07}. By making the\nreplacement $k_x\\rightarrow -i\\partial_x$ in the Dirac Hamiltonian\n\\cite{lb106}, we can write $H\\psi=E\\psi$ where the Hamiltonian ($H$) for\nthe nanoribbon is given as\n\\begin{equation}\\label{eq:H}\nH = \\left( \\begin{array}{cc}\nH_{+} & 0 \\\\\n0 & H_{-}\n\\end{array} \\right)\n\\end{equation}\nwith $H_{\\pm}=\\pm i\\hbar v\\sigma_x\\partial_x - \\hbar vk\\sigma_y\n -q\\phi(x)\\mathbf{I}$. Here, $\\sigma$ are the Pauli matrices and\n$v\\approx10^6$m\/s. To keep the analysis simple, we assume the\nintra-layer coupling parameter $S$ to be zero. The armchair boundary\ncondition with ideal edges forces\n\\begin{equation} \\label{eq:bc1} \\psi_{+}(0) +\n\\psi_{-}(0)=0\n\\end{equation}\nand\n\\begin{equation}\\label{eq:bc2} \\psi_{+}(W) +\ne^{ik_0W}\\psi_{-}(W)=0\n\\end{equation}\nNow, we give a simple argument to show\nwhy we observe a direct to indirect bandgap transition in setup (ii)\nand (iii), whereas setup (i) provides direct bandgap independent of\nexternal bias. If we write the full Hamiltonian $H$ by\ndiscretization of space along $x$, we find,\n\\begin{equation}\nTr(H)=-4q\\sum_j\\phi(x_j)\n\\end{equation}\nwhich is equal to zero in case (i) and\nthis holds good for any $k$. This is due to the anti-symmetric\nnature of the external bias and hence of $\\phi(x)$ about the mid\npoint of the nanoribbon. Now, using the fact that the sum of the\neigenvalues equals the trace of $H$, this condition forces the sum\nof the energy eigenvalues at any $k$ to be zero. Thus the conduction\nband and valence band remain symmetric about $\\mu$, forcing the bandgap to be\ndirect at any external gate bias. However, in cases (ii) and (iii),\nthe asymmetric gate biases introduce consequent asymmetry in the\nspatial distribution of $\\phi(x)$ and hence force $Tr(H)$ to become\nnonzero, allowing asymmetry in the conduction and the valence bands.\nThis manifests as a bandgap transition in the nanoribbon.\n\nWe now provide an independent numerical method derived from Eq. 
\\ref{eq:H}\nto re-calculate the bias dependent electronic structure and verify the trend\nof direct to indirect bandgap transition obtained from tight binding calculations.\nTo do this, We rewrite $H\\psi=E\\psi$ as \\cite{dsn07}\n\\begin{equation}\n\\partial_x \\psi_{\\pm}= \\pm \\zeta \\psi_{\\pm}\n\\end{equation}\nwhere\n\\begin{equation} \\zeta(x) = k\\sigma_z - i\\sigma_x(q\\phi(x)+E)\/\\hbar v\n\\end{equation}\nSince, in general, $\\zeta(x)$ does not commute for two\ndifferent $x$, we can write the solutions in terms of Magnus series\n\\cite{mm54,sb09}:\n\\begin{equation}\\label{eq:psi_m}\n\\psi_{\\pm}(W)=e^{\\theta_{\\pm}}\\psi_{\\pm}(0)\n\\end{equation}\nwhere $\\theta_{\\pm}=\\sum_{j=1}^\\infty(\\pm1)^j\\theta_j$. $\\theta_{j}$ is\nthe $j^{th}$ term in the Magnus series with the first three terms\nare given as \\setlength{\\arraycolsep}{-2.2em}\n\\begin{eqnarray}\n\\theta_1 = \\int_0^W\\zeta(x_1)dx_1, \\nonumber\\\\\n&&\\theta_2 = \\int_0^W dx_1\\int_0^{x_1} dx_2[\\zeta(x_1),\\zeta(x_2)],\\nonumber\\\\\n&&\\theta_3 = \\int_0^W dx_1\\int_0^{x_1} dx_2\\int_0^{x_2} dx_3\n([\\zeta(x_1),[\\zeta(x_2),\\zeta(x_3)]]\n+[\\zeta(x_3),[\\zeta(x_2),\\zeta(x_1]])\n\\end{eqnarray}\n\\setlength{\\arraycolsep}{5pt} Using Eqs. \\ref{eq:bc1}, \\ref{eq:bc2}\nand \\ref{eq:psi_m}, we obtain \\begin{equation} (e^{\\theta_{+}} -\ne^{ik_0W}e^{\\theta_{-}})\\psi_{+}(0)=0\\end{equation} To get non-trivial\nsolutions for $\\psi_{+}(0)$, we obtain \\begin{equation}\\label{eq:det}\ndet\\left[e^{\\theta_{+}} - e^{ik_0W}e^{\\theta_{-}}\\right]=0 \\end{equation} For\na given $k$, the set of values of $E$ satisfying Eq. \\ref{eq:det}\ngives the required energy eigenvalues, which can be found\nnumerically. We have verified that the results obtained using this\nmethod show a direct to indirect bandgap transition in setup (ii)\nand (iii) whereas the GNR continues to remain a direct bandgap\nsemiconductor in case (i).\n\nAs a special case, $\\zeta(x)$ commutes for two different $x$ for\n$k=0$ and hence $\\theta_j(k$=$0)$ becomes zero for $j>1$. Hence, we\ncan readily observe from Eq. \\ref{eq:det} that as long as $\\int_0^W\n\\phi(x)dx = 0$, we do not have any change in the energy eigenvalues\nat $k=0$ for any arbitrary $\\phi(x)$. This is why we should not\nexpect any change in $E(k$=$0)$ for any external bias as long as\n$V_l=-V_r$. Note that, in reality, as shown in Fig.\n\\ref{fig:gap_eh}(a), we do see a small change in $E(k$=$0)$ under\ngate bias, which arises from non-zero overlap parameter $S$.\nHowever, in case (ii) and (iii), nonzero $\\int_0^W \\phi(x)dx$\nintroduces a bias dependent upward or downward shift in the $E(k=0)$\nvalue depending on the polarity of the terminal bias.\n\nAs a final comment, we extract the effective mass values at the\nconduction band minimum and the valence band maximum for case (i)\nand (ii) using the $E-k$ relationship obtained from self-consistent\ntight binding calculations. The results are shown in Fig.\n\\ref{fig:mstar}(a)-(b). In both the cases, we observe strong\nnon-monotonic behavior of the effective mass values, both for the\nelectrons and the holes. In case (i) [Fig. \\ref{fig:mstar}(a)], the\neffective mass of the electrons follows that of the holes for both\nsmall and large biases. However, at some intermediate gate bias,\nwhere the band edges start shifting from $k=0$, we notice a\nsignificant difference in the electron and the hole effective mass\nvalues, though both of them strongly peak about that point. 
A\nsimilar behavior is observed in the electron effective mass in case\n(ii) [Fig. \\ref{fig:mstar}(b)], which sharply peaks around the\ndirect to indirect band transition point, indicating a ``flattening\"\nof the conduction band edge when it moves away from $k=0$. However,\nwe do not observe any such sharp peak in the hole effective mass\naround the bandgap transition point. This sharp notch-like behavior\nof the effective masses (with almost an order of magnitude change)\nis a unique feature of the influence of the external field on the\nelectronic structure of the A-GNR, where one can selectively ``slow down\"\nthe carriers by choosing the appropriate external bias condition.\n\nTo conclude, using a self-consistent tight binding calculation, we\nhave demonstrated that it is possible to change the bandgap of a\nsemiconducting A-GNR from direct to indirect by adjusting the\nexternal biases at the left and the right gate, opening up the\npossibility of new device applications. Such a direct to indirect\nbandgap transition has been explained, both qualitatively and\nquantitatively, by starting from the Dirac equation, to support the\nfindings obtained from tight-binding calculations. Finally, the\nexternal bias dependent carrier effective masses have been shown to\nhave sharp, non-monotonic behavior around the direct to indirect\nbandgap transition point.\n\n\\textbf{Acknowledgement:} K. Majumdar and N. Bhat would like to\nthank the Ministry of Communication and Information Technology,\nGovernment of India, and the Department of Science and Technology,\nGovernment of India, for their support.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nThe presence of dark matter (DM) in our universe is now well established by a variety of astrophysical measurements, over a wide range of scales from sub-kpc to Gpc. Its clustering and low interaction cross sections are supported by the success of the CDM framework and the ability of N-body simulations to reproduce observed structures. In spite of these great successes, we remain ignorant to its detailed nature. A direct detection of DM via its recoils off of nuclei would confirm its particle nature and yield insight into its origin. An examination of the recoil spectrum would provide important information about its properties, and possibly the formation history of the galaxy. \n\nIf the dark matter is a Majorana fermion $\\chi$ - a supersymmetric neutralino being the most prominent example - the types of interactions available are significantly limited. The dominant scatterings are mediated via the operators:\n\\begin{eqnarray}\n{\\mathcal O}_{SI} &= &(\\bar{\\chi} \\chi) (\\bar{q} q), \\label{eqn:SI} \\\\\n{\\mathcal O}_{SD} &=& (\\bar{\\chi} \\gamma_{\\mu} \\gamma_{5} \\chi) (\\bar{q} \\gamma^{\\mu} \\gamma_{5} q) , \\label {eqn:SD}\n\\end{eqnarray}\nwhich give respectively spin-independent (SI) and spin-dependent (SD) scattering.\nAs these operators typically dominate the interaction rate of Weakly Interacting Massive Particles (WIMPs) with nuclear targets, direct detection experiments quote results as bounds on the spin-independent and spin-dependent cross section per nucleon.\n\nNonetheless, there are more dimension-6 operators that can contribute to the direct detection cross section. Namely, \n\\begin{eqnarray}\n{\\mathcal O}_{1}&=&(\\bar{\\chi} \\gamma _{5} \\chi) (\\bar q q), \\label{eqn:operators1} \\\\\n{\\mathcal O}_{2}&=&(\\bar{\\chi} \\chi) (\\bar q \\gamma_5 q), \\\\\n{\\mathcal O}_{3}&=&(\\bar{\\chi} \\gamma_{5} \\chi) ( \\bar q \\gamma_{5} q), \\\\\n{\\mathcal O}_{4}&=&(\\bar{\\chi} \\gamma_{\\mu} \\gamma_{5} \\chi) (\\bar{q} \\gamma^{\\mu} q).\n\\label{eqn:operators}\n\\end{eqnarray}\nThe operators ${\\mathcal O}_{1}$, ${\\mathcal O}_{2}$, and ${\\mathcal O}_{4}$ are not present if parity is a good symmetry of the theory, but since parity is badly broken in the Standard Model and it could be badly broken in the dark matter sector, it is reasonable to include them. If $\\chi$ is a Dirac fermion, instead of Majorana, additional operators are possible. In particular, there is the possibility of a dipole or charge radius coupling to dark matter and a vector coupling to quarks \\cite{cidm, DipolarDM}. Such an operator is quantitatively similar to \\op{1}, with the principle difference that it typically couples to atomic number $Z$ rather than mass number $A$. \n\nThese operators in Eqns.~(\\ref{eqn:operators1})--(\\ref{eqn:operators}) are present even in the context of the minimal supersymmetric Standard Model (MSSM), but there the contributions to scattering are typically far subdominant,\nas they are suppressed relative to $\\op{SI\/SD}$ by additional powers of momentum $O(q^2\/M_W^2) \\sim 10^{-6}$, or in the case of ${\\mathcal O}_4$ also by velocity suppression $v^2$. Consequently, they are usually ignored \\cite{FalkFerstl}, but see \\cite{Chatto}. Moreover, even if the dominant operators are zero, because of this suppression, these new operators are typically negligible in the context of direct detection experiments. 
Thus, even neglecting ${\\mathcal O}_{SI\/SD}$ it might seem unlikely that such interactions would be relevant for upcoming direct detection experiments. \n\nHowever, this reasoning ignores that these two facts often go hand in hand. $\\op{SI\/SD}$ are typically small when there is a symmetry reason for them to be small, in particular, when the DM-nucleon force is mediated by a pseudo-Goldstone boson (PGB). Because of their shift symmetry, PGBs have $q^2$ suppressed interactions. At the same time, PGBs are also naturally much lighter than the weak scale. Thus, $\\op{1-4}$ are no longer insignificant if the mediator has mass $\\lesssim O({\\rm GeV})$. When we combine this with the recent interest in new GeV-scale particles, e.g., \\cite{ArkaniHamed:2008qn}, and in particular PGBs \\cite{Nomura:2008ru} arising from models to explain PAMELA, Fermi, ATIC and HESS, we are strongly motivated to consider these scenarios.\n\nIn this paper, we explore a class of dark matter models where the scattering is momentum dependent (MDDM), i.e., where the operators \\op{1-4} dominate and are large enough to be observable in upcoming direct detection experiments. As we shall see, these operators can have a significant impact on the spectral shape and the sensitivity of various experiments. As an example, we shall see that these effects can improve the ability to explain the DAMA annual modulation signal while being consistent with other direct detection exclusions. \n\n\\section{Signals of MDDM}\n\nThe recoil rates at a direct detection experiment can be written as\n\\begin{eqnarray}\n\\frac{dR}{dE_R}= \\frac{N_T m_N \\rho_\\chi}{2m_\\chi \\mu^2} \\sigma(q^2) \\int^\\infty_{v_{min}} \\frac{f(v)}{v} dv,\n\\end{eqnarray}\nwhere $m_N$ is the nucleus mass, $N_T$ is the number of target nuclei in the detector, $\\rho_\\chi = 0.3$ GeV\/cm$^3$ is the WIMP density, $\\mu$ is the reduced mass of the WIMP-nuclei system, and $f(v)$ is the halo velocity distribution function in the lab frame. The minimum velocity to scatter with energy $E_R$ is $v_{min}= \\sqrt{m_N E_R\/2\\mu^2}$. The rest of the expression depends on the scattering's $q^2 = 2m_N E_R$. For SI interactions, we have \n\\begin{equation}\n\\sigma(q^2)_{SI} = \\frac{4 G_F^2 \\mu^2}{\\pi} \\left[Z f_p+(A-Z)f_n\\right]^2 F^2(q^2),\n\\end{equation}\nwhere $f_p,f_n$ are respectively the couplings to the proton and neutron, and $F^2$ factor is the form factor. We take the limit $f_p = f_n$. This expression is then proportional to the nucleon scattering cross section $\\sigma_p = \\frac{4}{\\pi} G_F^2 \\mu_p^2 f_p^2$, where $\\mu_p$ is the reduced mass of the WIMP-proton system. \nFor SD interactions, we have\n\\begin{eqnarray}\n\\sigma(q^2)_{SD}& =& \\frac{32 G_F^2 \\mu^2}{2J+1} [a_p^2 S_{pp}(q^2)+a_p a_n S_{pn}(q^2)\\nonumber \\\\\n& & + a_n^2 S_{nn}(q^2) ],\n\\end{eqnarray}\nwhere $a_p,a_n$ are respectively the couplings to the proton and neutron, and the $S$ factors are the form factors for SD scattering. The corresponding nucleon cross sections are $\\sigma_{(p,n)} = \\frac{24}{\\pi}G_F^2 \\mu_{(p,n)}^2 a_{(p,n)}^2$.\n\nThe effect of the new operators can be parameterized simply:\n\\begin{equation}\n\\frac{dR_i^{MDDM}}{dE_R} = \\left(\\frac{q^{2}}{q^{2}_{ref}}\\right)^n \\left(\\frac{q^2_{ref}+m^2_{\\phi}}{q^2+m^2_{\\phi}} \\right)^2 \\frac{dR_i}{dE_R},\\label{eqn:newdR}\n\\end{equation}\nwhere $i$ indexes the interaction, i.e., SD-proton, SD-neutron or SI, and we have included the propagator due to a light mediator $\\phi$ with mass $m_\\phi$. 
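\n\nAs a concrete illustration of Eqn.~(\\ref{eqn:newdR}), the short Python sketch below reweights a standard SI germanium recoil spectrum by the momentum-dependent factor. It is meant only to make the shapes tangible: the exponential zeroth-order spectrum is the usual Maxwellian-halo approximation, the WIMP mass, halo velocity and mediator mass are illustrative assumptions, and the $(100\\;{\\rm MeV})^2$ reference value anticipates the normalization adopted below; this is not the analysis code used for our figures.\n\\begin{verbatim}\n# Illustrative only: reweight a standard SI germanium spectrum by the\n# momentum-dependent factor of the MDDM parameterization.  All parameters\n# here are assumptions.\nimport numpy as np\n\nm_chi, m_N = 100.0, 67.6        # WIMP and Ge nucleus masses [GeV]\nv0 = 220e3 / 3e8                # halo velocity dispersion in units of c\nE0 = 0.5 * m_chi * v0**2 * 1e6  # mean WIMP kinetic energy [keV]\nr = 4 * m_chi * m_N / (m_chi + m_N)**2\nq_ref2 = 0.1**2                 # reference (100 MeV)^2 in GeV^2\n\ndef dRdE_standard(E_R):         # unnormalized SI spectrum, E_R in keV\n    return np.exp(-E_R / (E0 * r))\n\ndef mddm_weight(E_R, n, m_phi): # the momentum-dependent factor; m_phi in GeV\n    q2 = 2.0 * m_N * E_R * 1e-6 # q^2 = 2 m_N E_R in GeV^2\n    return (q2 / q_ref2)**n * ((q_ref2 + m_phi**2) / (q2 + m_phi**2))**2\n\nE_R = np.linspace(1.0, 100.0, 400)   # recoil energies [keV]\nfor n in (0, 1, 2):                  # q^0, q^2 and q^4 scattering\n    spectrum = mddm_weight(E_R, n, 1.0) * dRdE_standard(E_R)\n    print(n, round(E_R[spectrum.argmax()], 1))  # peak: threshold, ~26, ~52 keV\n\\end{verbatim}\n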
For the benchmark cases, we will take $m_\\phi^2 \\gg q^2$, to arrive at the simple form\n\\begin{equation}\n\\frac{dR_i^{MDDM}}{dE_R} = \\left(\\frac{q^{2}}{q^{2}_{ref}}\\right)^n \\frac{dR_{i}}{dE_R}.\\label{eqn:newdRsimp}\n\\end{equation}\n We have chosen to normalize the new factors out front at a reference value $q^2_{ref} \\equiv (100\\; \\text{MeV})^2$, a characteristic value for many direct detection experiments. For operators ${\\mathcal O}_1, {\\mathcal O}_2$ the exponent $n=1$, while for ${\\mathcal O}_3$, $n=2$. For ${\\mathcal O}_1$, the interaction is spin-independent on the nucleus side, while for the others it is spin-dependent. This form of the recoil rate defines the nucleon cross sections $\\sigma_{p,n}$ for momentum dependent scattering. \\op{4}'s scattering cannot be written in this form, since it has terms proportional to the DM velocity. However, we find that its spectra is almost identical to standard SI scattering, so we neglect it for the rest of the paper.\n\n\\begin{figure*}[ht]\n\\begin{center}\na) \\includegraphics[width=6cm]{figs\/Geqsq100.pdf} \\hspace{1cm }b) \\includegraphics[width=6cm]{figs\/Gemphi100.pdf}\n\\end{center}\n\\caption{Germanium spectra plots with arbitrary normalization versus energy recoil for SI momentum dependent scattering of a 100 GeV dark matter mass. Plot a) displays the effect of additional powers of $q^2$ with $q^0,q^2,{\\rm and\\;} q^4$ in solid, long dash and short dash. Plot b) illustrates the effect of $m_\\phi$ on the $q^4$ suppressed scenario with $m_\\phi=(1,.1,.01)\\; {\\rm GeV}$ in solid, long dash and short dash. \\label{fig:Gespectra} }\n\\end{figure*}\n\nMDDM is characterized by a modification of its nuclear recoil spectrum. Typically, direct detection experiments optimize their searches by going to lower energy thresholds, where standard WIMP signatures are expected to peak. In contrast, the spectrum of MDDM vanishes at zero recoil energy, and then can be either peaked or fairly flat over the range in question. \n\n\nWe show in Fig.~\\ref{fig:Gespectra} the spectra of MDDM scenarios for the case of SI germanium scattering. As we can see, the spectra differ dramatically from those expected for conventional dark matter. The powers of $q^2$ suppress the low energy events resulting in a peaked spectrum reminiscent of inelastic dark matter (iDM) \\cite{TuckerSmith:2001hy,TuckerSmith:2004jv,WeinerKribs}. In contrast to inelastic dark matter, here the peaking arises without needing a coincidence of parameters (specifically, $\\delta$ in iDM models tuned to the WIMP kinetic energy). The spectrum need not be sharply peaked, however, and can be broadly spread over a large range of recoil energies. The non-trivial propagator of Eqn.~(\\ref{eqn:newdR}) allows the possibility that events can be suppressed for $q^2\\gg m_\\phi^2$, as can be seen in Fig.~\\ref{fig:Gespectra}. Finally, increasing the dark matter mass shifts the spectra to higher energies. Given the possibilities, the lesson is that search strategies developed for the simplest dark matter candidates are by no means optimal for every dark matter candidate. The true dark matter candidate may not be one of these simplest possibilities, and it is important to cast a wide net.\n\n\\section{Existing Searches and DAMA}\nTo understand the effects of MDDM, we study how these $q^2$ effects can modify the limits arising from existing experiments. 
We show in Fig.~\\ref{fig:SIq2} the limits on interactions mediated by \\op{1} compared with limits on standard SI interactions, and in Fig.~\\ref{fig:SDp} the limits of \\op{3} when compared with standard SD-proton interactions. We follow the procedure laid out in Ref.~\\cite{Chang:2008xa} for CDMS \\cite{Akerib:2005kh} and XENON10 \\cite{Angle:2007uj} limits and Ref.~\\cite{WeinerKribs} for KIMS \\cite{KIMS} limits. Although PICASSO \\cite{PICASSO} and COUPP \\cite{COUPP} limits are comparable, we only discuss PICASSO limits in what follows. Our methods better reproduce their (momentum independent) result in the SD-proton case, making us more confident that the MDDM limit is realistic. \n\nInspecting the exclusion limits of these plots, we see important changes with respect to the traditional cases. First, consider the SI case with $q^{2}$ dependence (\\op{1}). While CDMS-Ge and XENON10 remain the strongest, KIMS becomes stronger than CDMS-Si over much of the parameter space. In the SD-proton case, limits from PICASSO are significantly weaker, and XENON10 becomes stronger than KIMS in the 15-25 GeV range.\n\\begin{figure*}\na) \\includegraphics[width=6cm]{figs\/SInormal.pdf}\\hspace{1cm}\nb) \\includegraphics[width=6cm]{figs\/SIq2.pdf}\n\\caption{Plots of the SI nucleon cross section $\\sigma_p$ vs DM mass $m_\\chi$ without (a) and with $q^2$ suppression (b). The colored regions show the 68, 90, and 99\\% CL regions for the best DAMA fit. The 90\\% exclusions limits are KIMS (orange dashed), CDMS Si (red solid), CDMS Ge (red dotted) and XENON10 (brown dot-dashed). We have taken $f_p=f_{n}$.\\label{fig:SIq2} }\n\\end{figure*}\n\nThese results are easy to understand. Due to suppression of low energy events in the MDDM scenario, experiments that rely upon low energy thresholds (in particular, XENON10) are weakened when compared with others with higher thresholds (such as CDMS), which is why CDMS improves relative to XENON10. \nOn the other hand, since $q^2 = 2M_N E_R$, at a given recoil energy, heavier nuclei are preferred by momentum dependent scattering, which is why KIMS improves over CDMS Si and why PICASSO weakens relative to the other experiments. Another effect occurs for COUPP and PICASSO. In these bubble chamber experiments, operation at varying temperature or pressure essentially integrates the recoil spectrum above some threshold. The background from alpha decays is known to be a flat spectrum above some specific temperature or pressure and is fit to in the data. For the broadest MDDM spectra (see Fig.~\\ref{fig:Gespectra}), the dark matter signal looks similar to this alpha background. Unfortunately, this complicates background subtraction and reduces the present sensitivity to these models.\n\nIntriguingly, momentum dependent scattering can also modify the interpretation of dark matter explanations of DAMA's annual modulation signal and exclusions from other direct detection experiments. The annual modulation signal \\cite{Drukier:1986tm,Freese:1987wu}, originally seen at the DAMA experiment \\cite{DAMA}, has recently been confirmed by DAMA\/LIBRA \\cite{Bernabei:2008yi}. On the other hand, limits from XENON and CDMS strongly constrain the simplest dark matter interpretation of the DAMA experiment: a signal resulting from the SI scattering of a WIMP. 
Explanation of the DAMA signal with spin-dependent scatterings \\cite{FreeseOld,Savage:2008er} is now also strongly constrained by COUPP and PICASSO.\n \nFig.~\\ref{fig:SIq2} shows that adding momentum dependence to the SI interactions can only weaken, but not eliminate the limits other experiments put on DAMA explanations at the 90\\% confidence level, at least within a Maxwellian halo model. Employing the caveats discussed in \\cite{Chang:2008xa}, alternative statistical techniques \\cite{Savage:2008er} or a non-Maxwellian halo \\cite{Fairbairn:2008gz} might allow a window at low mass when combined with these new effects.\n\nIn light of this, we now focus discussion on the scenario with the weakest direct detection limits, SD-proton scattering, shown in Fig.~\\ref{fig:SDp}. These plots show that the relative importance of different experiments can invert as one adds momentum dependence, for precisely the reasons described above. In fact, the normal SD-proton case \\cite{FreeseOld,Savage:2008er} which is ruled out by PICASSO is allowed for the $q^4$ scenario. Interestingly, these factors are also able to improve the fit with DAMA's spectral shape, so that there are new masses that can now fit the DAMA spectrum. In particular, the mass region at $\\sim 40-60\\; {\\rm GeV}$ would have normally had a shape that was inconsistent with DAMA. Since these momentum factors suppress the low energy scattering, the constraint from DAMA's unmodulated event rate \\cite{Chang:2008xa} is also weakened, leading to better consistency with DAMA's full data set. For these plots, we assumed a mediator mass of 1 GeV and 100 MeV. As we will discuss later in the next section, a lighter mediator mass of $O(100)$ MeV is more suited to generate cross sections of this size. As seen in Fig.~\\ref{fig:SDp}(c), for this lighter choice of mass, the 10 GeV DM mass region survives, but the KIMS limit cuts into about half of the higher mass region. We should note that our approach to the KIMS limits is not aggressive, and does not yield as strong a limit as that in \\cite{KIMS}. Consequently, a more aggressive limit might also be able to exclude this region as well. For mediator masses much less than an MeV, the momentum independent case is recovered since the $q^2$ factors cancel in Eqn.~(\\ref{eqn:newdR}), at least in the range that the first Born approximation is valid. \n\n\n\\begin{figure*}\na) \\includegraphics[width=5cm]{figs\/SDp.pdf}\nb) \\includegraphics[width=5cm]{figs\/SDpq4.pdf}\nc) \\includegraphics[width=5cm]{figs\/SDpq4_100.pdf}\n\\caption{Plots of the SD-proton cross section $\\sigma_p$ vs DM mass $m_\\chi$ without (a) and with $q^4$ suppression (b) and (c), where the mediator mass is 1 GeV (b) and 100 MeV (c). The colored regions show the 68, 90, and 99\\% CL regions for the best DAMA fit. The 90\\% exclusions limits are PICASSO (gray solid), KIMS (orange dashed), XENON10 (brown dot-dashed), and CDMS (red dotted). \\label{fig:SDp} }\n\\end{figure*}\n\nOne important point about these scenarios is that the expected relationship between direct and indirect detection signals breaks down. Typically, one assumes that the annihilation proceeds into some Standard Model final state. Here, since we rely upon the light mediator for our interaction, it provides an annihilation channel. Then, if the mediator is $\\lesssim {\\rm GeV}$ in mass, it is natural for it to decay dominantly to e.g., electrons, muons and pions. The limits from Super--Kamiokande WIMP capture are then trivially evaded \\cite{Itay,Menon:2009qj}. 
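\n\nTo make the target-mass dependence underlying these shifts explicit: at a fixed recoil energy the momentum weight is $(2 m_N E_R\/q^2_{ref})^n$, so heavy nuclei such as iodine, xenon and cesium are favored over light ones such as fluorine. The back-of-the-envelope comparison below (rounded nuclear masses, a common and merely representative recoil energy of 20 keV, no form factors, thresholds or efficiencies) illustrates the hierarchy behind the relative strengthening of KIMS and weakening of PICASSO noted above.\n\\begin{verbatim}\n# Rough illustration: momentum weight (q^2/q_ref^2)^n at a common recoil\n# energy for several targets.  Masses are rounded; 20 keV is representative,\n# not an experimental threshold.\ntargets = {'F': 17.7, 'Na': 21.4, 'Si': 26.2, 'Ge': 67.6,\n           'I': 118.3, 'Xe': 122.3, 'Cs': 123.8}   # nuclear masses [GeV]\nE_R, q_ref2 = 20.0, 0.1**2                          # keV and GeV^2\nfor name, m_N in targets.items():\n    w = 2.0 * m_N * E_R * 1e-6 / q_ref2             # q^2 / q_ref^2\n    print(name, round(w, 3), round(w**2, 3))        # weights for q^2 and q^4\n\\end{verbatim}\n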
\n\n\\section{Model Building Constraints}\nFor the dark matter scattering to display the novel phenomenology discussed here, the scattering rate must be dominated by the new operators, and not the typical SI or SD coupling. This is not a trivial requirement. For comparable coefficients, the scattering mediated by the operators of Eqns.~(\\ref{eqn:operators1})--(\\ref{eqn:operators}) is suppressed by powers of the velocity or momentum relative to Eqns.~(\\ref{eqn:SI}) and (\\ref{eqn:SD}). It is possible, however, that the coefficients of these new operators are much larger than the coefficients of the other operators. We discuss this further below.\n\nIf the signal is to be observable in near-future experiments, the $q^{2n}$ suppression must be compensated by a large coefficient for the operator. This could be due in part to particularly large couplings of a mediator to the Dark Sector or a local over-density of the dark matter, but the simplest way to get an enhancement is just for the mediator mass to be small: $dR\/dE_{R} \\propto m_{\\phi}^{-4}$. The necessary $m_{\\phi}$ depends on the amount of $q^2$ suppression. If there is a single power of $q^2$, as in \\op{1} and \\op{2}, then a mediator mass $m_\\phi$ of a few GeV, $O(1)$ couplings to the DM, and Yukawa-suppressed couplings on the quark side give a $\\sim 10^{-36} {\\rm \\; cm^2}$ cross section. This is near the interesting region for $\\op{2}$. In the case of coherent scattering off of nuclei (as for \\op{1}), this would already be strongly ruled out by CDMS and XENON over a large mass range; a viable $10^{-44} {\\rm \\; cm^2}$ cross section would require something like $m_\\phi \\sim 100 \\;{\\rm GeV}$. For $q^4$ suppression, the mass has to be $O(100) \\;{\\rm MeV}$ to get a $\\sim 10^{-36} {\\rm \\; cm^2}$ cross section.\n\nIn specific cases, large contributions to $\\op{1}$ - $\\op{4}$ can be expected. If there is a light pseudoscalar present that couples both to dark matter and to quarks, then $\\op{1}$ - $\\op{3}$ can be generated through its exchange without generating either $\\op{SI}$ or $\\op{SD}$. If there is no parity violation, then the expectation is that $\\op{3}$ dominates. On the other hand, if parity violation is present, then it is plausible that the coefficients of $\\op{1}$ - $\\op{3}$ could all be comparable. In this case, it is likely that it would be easiest to probe $\\op{1}$ because of its coherent scattering off of nuclei. Alternatively, parity violation might be confined to couplings in the dark matter sector. In this case, pseudoscalar exchange could dominantly induce $\\op{2}$. We note that if the light pseudoscalar is naively realized as a pseudo-Goldstone, it is difficult to sufficiently suppress the contributions to \\op{SI}. Contributions are induced by exchanging the scalar whose vacuum expectation value $f_\\phi$ breaks the global symmetry and makes the $\\phi$ light. \n\nInteractions with only $q^2$ suppression can conceivably still dominate over standard interactions without significant model-building efforts. The simplest example comes from charge-radius or dipole couplings to a composite WIMP, whose constituents are charged under a new, dark gauge group \\cite{cidm}. This generates the phenomenology of \\op{1} straightforwardly, although typically with a coupling to $Z^2$ instead of $A^2$. 
If the mediation arises through a PGB, \\op{1,2} can dominate over the scalar exchange as only one vertex will be suppressed by $f_\\phi$, while the scalar exchange is suppressed by $f_\\phi^2$. \n\nThe most challenging model-building comes in realizing the $q^4$ suppressed interaction, without inducing SI scattering from the accompanying scalar mediator. While this seems difficult from the perspective of a standard PGB, it can arise fairly simply in SUSY theories. While PGBs are a natural way to realize a shift symmetry, such a shift symmetry could simply be present in the theory from other origins. For instance, in theories with $N=2$ SUSY in the gauge sector (e.g., \\cite{Fox:2002bu}), there is a chiral superfield partner for every gauge boson. The pseudoscalar contained in it possesses a shift symmetry which can be thought of as a higher-dimensional gauge symmetry, compactified on an $S_1\/Z_2$ orbifold. SUSY breaking will make the associated scalar massive. This can arise either from $F-$term breaking, through the operator $X^\\dagger X (\\phi + \\phi^\\dagger)^2$ (with $X$ a spurion that gets the non-zero $F$-term, and $\\phi$ a superfield containing the PGB), or from $D$-terms, through $W^\\alpha W'_\\alpha \\phi$ (with $W^\\alpha$ the $U(1)_{Y}$ supersymmetric field strength, and $W'_\\alpha$ the supersymmetric field strength that gets a non-zero $D$-term). Even in the presence of this supersymmetry breaking, the pseudoscalar remains massless, and will only pick up a mass radiatively through diagrams violating {\\em both} the shift symmetry and SUSY. Thus, the scalar contribution can be effectively decoupled from the strength of the pseudoscalar-mediated $q^4$ interactions.\n \nFinally, a sizable coefficient for $\\op{4}$ can be generated in theories with a light gauge boson that couples to the dark matter and mixes with the $B^{\\mu}$ gauge field of the Standard Model. \n\nThere are model-dependent constraints on light pseudoscalar mediators. In particular, searches for axions can apply. For particles in the GeV range, the process $\\Upsilon \\rightarrow \\gamma \\phi$ is relevant. These branching ratios are constrained to be in the range $10^{-5} - 10^{-6}$; the precise bound depends on the final state of $\\phi$ decay, $\\phi \\to \\mu\\bar{\\mu},\\tau\\bar{\\tau}$ or invisible, \\cite{BaBarUpsilonMu,BaBarUpsilonTau,CLEOUpsilonMiss}. If the $\\phi$ has couplings comparable to Standard Model Yukawas, the branching ratio is a $few \\times 10^{-5}$ for masses well below the $\\Upsilon$ mass. Thus, these bounds constrain the $\\phi$ coupling to $b$ quarks to be somewhat smaller than the Standard Model Yukawa coupling. For lighter mediators, depending on the flavor structure of the $\\phi$ couplings, $K \\rightarrow \\pi \\pi \\phi$ may be relevant. The rate for the potentially more stringent process $K^{+} \\rightarrow \\pi^{+} \\phi$ is suppressed -- a pure pseudoscalar coupling does not mediate this process, see for e.g., \\cite{Deshpande:2005mb}. The dominant contribution to this decay comes from $\\pi - \\phi$ mixing \\cite{Weinberg:1977ma}, which is model dependent. In cases where the pseudoscalar couples only to 3rd generation quarks, the Kaon decays are absent; however, \nthe Upsilon constraints still apply. Following the procedure in \\cite{Cheng:1988im}, we find that 3rd generation couplings alone can generate a detectable rate, as pseudoscalar couplings to heavy quarks generate a coupling to $G\\widetilde{G}$ \\cite{Anselm:1985cf}. 
Incidentally, in general, experimental uncertainties (in particular, the light quark contribution to the proton spin $\\Delta \\Sigma$) and parameters like $\\tan \\beta$ allow a proton dominated coupling to be generated. In the minimal case of 3rd generation couplings, the $\\phi$ decays to two photons with a decay length $c\\tau \\sim 1 {\\rm \\;m}$. While couplings to leptons are not necessary to implement the scenario at hand, if the mediator couples with Yukawa strength to the muon, requiring that the magnitude of the contribution to the muon $g-2$ is no larger than the current discrepancy between theory and experiment $ |\\delta a_{\\mu}| < 290 \\times 10^{-11}$ enforces $m_{\\phi} > 300$ MeV \\cite{Deshpande:2005mb}. Finally with couplings to electrons, it is possible to search for $e^+ e^- \\to \\phi \\gamma$ for either invisible or electron decays of $\\phi$ \\cite{Borodatchenkova:2005ct}. However, for pseudoscalar $\\phi$, suppression by the electron yukawa coupling makes the production rate below the projected sensitivities \\cite{Borodatchenkova:2005ct}.\n\n \\section{Conclusions}\nAs the sensitivity of new dark matter direct detection experiments continues to increase at a rapid pace, the ability to test for new scenarios for dark matter will grow simultaneously. Present experiments are optimized to search for WIMPs with signals that peak at low nuclear recoil energies. In contrast, models with momentum-suppressed interactions (MDDM), have spectra that peak at intermediate energies, thus changing the expected signals and relative strengths of various direct detection experiments. While interesting scenarios, specifically inelastic dark matter, have been proposed with spectra that peak at high recoil energy, we find that the scenarios with this feature are more ubiquitous than previously thought. In models with new light vectors or pseudoscalars, momentum dependent interactions can be large, and this phenomenology can be present.\n\nA simple parameterization captures much of the relevant phenomenology. Specifically, one can replace $dR_i\/dE_R$ with $(q_{100}^2)^n dR_i\/dE_R$, where $i$ indexes the interaction type and $q_{100}$ is the momentum transfer in units of 100 MeV. While additional features can arise at low mediator masses, this parameterization is sufficient to reproduce the peaking in the spectrum, and provides a convenient way to compare different experiments. In analyzing the presently allowed parameter space in this way, we find that momentum dependent couplings can open allowed ranges of parameters for DAMA with dominantly spin-dependent proton couplings, if accompanied by an additional $q^4$ suppression.\n\nWhatever model of dark matter nature has chosen to realize, it is important to be cognizant of the wide range of possible phenomenology, so that possible signals are not missed or attributed to backgrounds. The framework of MDDM provides motivation, and a prescription to study and constrain these models in the future.\n\n\\vskip 0.15in\n{\\noindent Note added:} As this paper was being finished, we became aware of \\cite{AmiFF}, which appeared in the arXiv and discusses momentum dependent interactions arising from the couplings to one or more new gauge bosons, and their ability to explain DAMA from spin-independent interactions.\n\n\\begin{acknowledgments}\n\\section{Acknowledgements} \n\\noindent We would like to thank Chris Savage for useful discussions, and for providing us with code implementing the spin-dependent form factors. 
We thank Peter Cooper from COUPP and Viktor Zacek and Sujeewa Kumaratunga from PICASSO for information on their analyses. The work of SC is supported under DOE Grant \\#DE-FG02-91ER40674. The work of AP is supported under DOE Grant \\#DE-FG02-95ER40899 and by NSF CAREER Grant NSF-PHY-0743315. The work of NW is supported by NSF CAREER grant PHY-0449818 and DOE OJI grant \\#DE-FG02-06ER41417.\n \\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe fundamental limit of communication over wireless fading channels\ndepends on the availability of channel state information~(CSI) at the\ntransmitter\/receiver. While the channel statistics are normally stable\nand can be assumed to be available, the assumption on {\\em\ninstantaneous} CSI varies with the context. When the instantaneous CSI\nis assumed to be {\\em a priori} known, e.g., in fixed environments where\nit changes slowly and can be estimated accurately at negligible cost, at\nleast at the receiver side, the communication is said to be {\\em coherent}. On the other hand, if the instantaneous CSI is {\\em a priori} unknown, e.g, when the estimation cost is not negligible, the communication is said to be {\\em non-coherent}. \n\nIn a point-to-point multiple-input multiple-output~(MIMO) channel with\n$M$ transmit and $N$ receive antennas, it is well known that the {\\em\ncoherent} capacity scales linearly with the number of antennas as $C\\sim\n\\min\\left\\{ M, N \\right\\} \\log \\mathsf{snr}$ at high signal-to-noise\nratio~(SNR)~\\cite{Foschini,Telatar1999capacityMIMO}. \nIn the {\\em non-coherent} case with stationary fading, the capacity scales as\n$\\log\\log \\mathsf{snr} + \\chi(\\rvMat{H}) + o(1)$\\footnote{$\\chi(\\rvMat{H})$ is\ncalled the fading number of the channel.}~\\cite{Moser}, implying a DoF\nof $0$. Nevertheless, if the channel remains constant during a certain\namount of slots, say $T$ slots, then the DoF becomes strictly positive\nas $M^*(1-\\frac{M^*}{T})$ where $M^* \\triangleq \\min\\left\\{ M, N, \\lfloor\n\\frac{T}{2} \\rfloor \\right\\}$. This fading setup is commonly referred to \nas the block fading channel, and has been extensively investigated in\nthe literature~\\cite{Hochwald2000unitaryspacetime, ZhengTse2002Grassman,\nYang2013CapacityLargeMIMO}. \nRemarkably, in the block fading case, the optimal DoF can be achieved\neither by well-designed space-time\nmodulations~\\cite{Hochwald2000unitaryspacetime,ZhengTse2002Grassman,\nYang2013CapacityLargeMIMO}, or by simple training-based\nstrategies~\\cite{Hassibi2003howmuchtraining}. The converse in the\naforementioned works was based on the Rayleigh fading assumption, using\neither a direct approximation at high SNR~\\cite{ZhengTse2002Grassman} or\na duality upper bound with a carefully chosen auxiliary output\ndistribution~\\cite{Yang2013CapacityLargeMIMO}. \n\nIn multi-user MIMO channels, such as the broadcast channels~(BC) and the multiple access channels~(MAC), non-coherent communications have been studied in the block fading case.\nFor the BC, the exact DoF region is known with \\emph{isotropic} Rayleigh fading (a special case of stochastically degraded BC) and can be achieved with time division multiple access~(TDMA)~\\cite{Fadel2016coherencedisparity}. Some achievable schemes have been proposed for the BC with spatially correlated fading~\\cite{Hoang2017BC,Zhang2017spatiallyCorrelatedBC}. For the MAC, it has been shown that the optimal sum DoF can be achieved with a training-based scheme~\\cite{Fadel2016coherencedisparity}, but the optimal DoF region is still {\\em unknown}.\n\nIn this work, we make some progress for the non-coherent single-input multiple-output~(SIMO) MAC. Specifically, we derive the optimal DoF region in the case of two single-antenna transmitters (users) and a $N$-antenna receiver in block fading channel with coherence time $T$. When $N=1$, the region is achieved with a simple time division multiplexing between two users. 
In this case, letting two users cooperate does not help exploit more degrees of freedom and it is optimal to activate only one user at a time to achieve $1-\\frac{1}{T}$ DoF for that user. When $N>1$, a training-based scheme can achieve another DoF pair. We let two users send orthogonal pilots for channel estimation in the first $2$ time slot, then send data simultaneously in the remaining $T-2$ time slots. In this way, each user can achieve $1-\\frac{2}{T}$ DoF. \n\nThe main technical contribution of this paper lies in the converse\nproof. Leveraging the duality upper bound~\\cite{Moser}, we carefully\nchoose an output distribution with which we derive a tight outer bound\non the DoF region. Unlike previous results such as \\cite{ZhengTse2002Grassman,\nYang2013CapacityLargeMIMO}, we do not assume the Gaussianity of the\nchannel coefficients, which makes our proof more general and our results\nstronger even in the single-user case. \n\n\n\n\n\n\n\n\nThe remainder of this paper is organized as follows. The system model and preliminaries are presented in Section~\\ref{sec:model}. In Section~\\ref{sec:result}, we provide the main\nresult on the optimal DoF region of the two-user MAC, as well as the proof\nfor the case $N=1$ and the achievability for the case $N>1$. We\nintroduce the converse proof technique through a new proof for the single-user SIMO channel in Section~\\ref{sec:singleUser}, and use it to show the tight outer bound for the case $N>1$ of the MAC in Section~\\ref{sec:converse}. Finally, we conclude the paper in Section~\\ref{sec:conclusion}.\n\n{\\it Notations:} For random quantities, we use\nupper case non-italic letters: normal fonts, e.g., $\\vect{r}{X}$, for scalars; bold fonts,\ne.g., $\\rvVec{V}$, for vectors; and bold and sans serif fonts, e.g.,\n$\\rvMat{M}$, for matrices. Deterministic quantities are denoted\nwith italic letters, e.g., a scalar $x$, a vector $\\pmb{v}$, and a\nmatrix $\\pmb{M}$. Throughout the paper, we adopt the column convention\nfor vectors. The Euclidean norm of a vector and a matrix is denoted by $\\|\\vect{v}\\|$ and $\\|\\mat{M}\\|$, respectively. \nThe\ntranspose and conjugated transpose of $\\mat{M}$ is\n$\\mat{M}^{\\scriptscriptstyle\\mathsf{T}}$ and $\\mat{M}^\\H$, respectively. $\\rvMat{M}_{[i:j]}$ denotes the sub-matrix containing columns from $i$ to $j$ of a matrix $\\rvMat{M}$ (thus $\\rvMat{M}_{[i]}$ denotes column $i$). $\\diag[x_1,\\dots,x_N]$ denotes the diagonal matrix with diagonal entries $x_1,\\dots,x_N$.\n$H(.)$, $h(.)$, and $D(.\\|.)$ denote the entropy, differential entropy, and Kullback-Leibler divergence, respectively. Logarithms are in base $2$.\n$(x)^+ = \\max\\{x,0\\}$. ``$\\triangleq$'' means ``is defined as''. $\\Gamma(x) = \\int_{0}^{\\infty}z^{x-1}e^{-z}dz$ is the Gamma function.\nGiven two functions $f$ and $g$, we write $f(x) = O(g(x))$ if there exists a constant $c>0$ and some $x_0$ such that $f(x) \\le cg(x), \\forall x\\ge x_0$\n\n\\section{System Model and Preliminaries} \\label{sec:model}\nWe consider a single-input multiple-output~(SIMO) multiple-access\nchannel in which two single-antenna users send their signals to a\nreceiver with $N$ antennas. The channel between the users and the\nreceiver is flat and block fading with equal and synchronous coherence\ninterval of $T$ symbol periods. That is, the channel vector $\\rvVec{H}_k\n\\in \\mathbb{C}^{N\\times1}$, $k=1,2$, remains unchanged during each block of length $T$ symbols and changes independently between blocks. 
The realizations of $\\rvVec{H}_1$ and $\\rvVec{H}_2$ are {\\em unknown} to both the users and the receiver. \nThe received signal during the coherence block~$b$, $b=1,2,\\ldots$\\footnote{Throughout,\nwe omit the block index whenever confusion is unlikely.}, is \n\\begin{align}\n\\rvMat{Y}[b] = \\rvVec{H}_1[b]\\, \\rvVec{X}_1^{\\scriptscriptstyle\\mathsf{T}}[b] + \\rvVec{H}_2^{\\scriptscriptstyle\\mathsf{T}}[b] \\, \\rvVec{X}_2^{\\scriptscriptstyle\\mathsf{T}}[b]+ \\rvMat{Z}[b],\n\\end{align}\nwhere $\\rvVec{X}_1 \\in \\mathbb{C}^{T}$ and $\\rvVec{X}_2 \\in \\mathbb{C}^{T}$ are the transmitted signals from user 1 and user 2, respectively, with the power constraint \n\\begin{align} \\label{eq:powerConstraints}\n \\frac{1}{B} \\sum_{b=1}^B \\|\\rvVec{X}_{i}[b]\\|^2 \\le P T, \\quad i = 1,2\n\\end{align}\nwhere $B$ is the number of the blocks spanned by a codeword. \nWe assume that $\\rvMat{Z} \\in \\mathbb{C}^{N\\times T}$ is the additive white\nGaussian noise with independent and identically distributed~(i.i.d.)~${\\mathcal C}{\\mathcal N}(0,1)$ entries. \nThe parameter $P$ is the average power ratio between the transmitted signal and the noise, thus we refer to $P$ as the SNR of the channel. \n\nSince the channel is block memoryless\\footnote{The results can be\ngeneralized to stationary fading as done in \\cite{Moser}.}, it is well known that a rate pair $(R_1(P),R_2(P))$ in bits per channel\nuse is achievable at SNR $P$, i.e., lies within the capacity region $\\mathcal{C}_{\\text{Avg}}(P)$, for the MAC if and only if\n\\begin{align}\nR_1+R_2 &\\le \\frac{1}{T}I(\\rvVec{X}_1, \\rvVec{X}_2; \\rvMat{Y}), \\\\\nR_1 &\\le \\frac{1}{T}I(\\rvVec{X}_1; \\rvMat{Y} | \\rvVec{X}_2), \\label{eq:boundR1}\\\\\nR_2 &\\le \\frac{1}{T}I(\\rvVec{X}_2; \\rvMat{Y} | \\rvVec{X}_1), \\label{eq:boundR2}\n\\end{align}\nfor some input distribution subject to the average power constraint $P$ (as the codeword length $B$ goes to infinity)~\\cite{ElGamal}.\nThen, we say that $(d_1,d_2)$ is an achievable DoF pair with\n\\begin{align}\nd_k \\triangleq \\liminf\\limits_{P\\to\\infty} \\frac{R_k(P)}{\\log(P)}, \\quad k = 1,2.\n\\end{align} \nThe optimal DoF region ${\\mathcal D}_{\\text{Avg}}(P)$ is defined as the set of all achievable DoF pairs. \n\nWe assume that the channel vectors $\\rvVec{H}_1$ and $\\rvVec{H}_2$ are\nindependent\\footnote{Independence is not necessary but makes the\nanalysis slightly simpler.} and drawn from a generic distribution satisfying the following conditions:\n\\begin{align}\n h(\\rvVec{H}_k) &> -\\infty, \\quad \\E[\\| \\rvVec{H}_k \\|^2] < \\infty,\n \\quad k=1,2. \n\\end{align\nThe following results, whose proofs are provided in Appendix~\\ref{app:proofs}, are useful for our main analysis. \n\\begin{lemma}\\label{lemma:1}\n Let $\\mat{a}\\in\\mathbb{C}^{m\\times t}$ have full column rank,\n $\\rvMat{W}\\in\\mathbb{C}^{n\\times m}$ be such that\n $h(\\rvMat{W})>-\\infty$ and $\\E[\\|\\rvMat{W}\\|_{\\rm F}^2] < \\infty$, then we have\n \\begin{align}\n h(\\rvMat{W}\\mat{a}) &= n \\log\\det(\\mat{a}^\\H\\mat{a}) + \\const\n \\end{align\n where $\\const$ is bounded by some constant that only depends on the\n statistics of $\\rvMat{W}$. \n\\end{lemma}\n\n\\begin{lemma}\\label{lemma:2}\n Let $\\vect{r}{X}\\ge0$ be some random variable such that $\\E[\\vect{r}{X}]<\\infty$\n and $h(\\vect{r}{X}\/\\E[\\vect{r}{X}]) > -\\infty$. 
Then, for any $\\alpha<1$, \n \\begin{align}\n \\E[\\log(1+\\vect{r}{X})] &\\ge \\alpha \\log(1+\\E[\\vect{r}{X}]) + \\const \\label{eq:lemma2}\n \\end{align\n where $\\const>-\\infty$ is some constant that only depends on $\\alpha$. \n\\end{lemma}\nFrom the above result, we observe that \nwhen $\\E[\\vect{r}{X}]\\to\\infty$,\n$\\frac{\\E[\\log(1+\\vect{r}{X})]}{\\log(1+\\E[\\vect{r}{X}])} \\approx 1$ since we can\nlet $\\alpha$ be arbitrarily close to $1$. The upper bound is simply from\nJensen's inequality. \n\nIf the support of the input distribution is further bounded such that $\\|\\rvVec{X}_i\\|^2 \\le P$, $i=1,2$, then we say that the input satisfies the peak power constraint $P$. In this case, the capacity region and DoF region are denoted ${\\mathcal C}_{\\text{Peak}}(P)$ and ${\\mathcal D}_{\\text{Peak}}(P)$, respectively. Since the peak power constraint implies the average power constraint, we have that\n\\begin{align}\n{\\mathcal C}_{\\text{Peak}}(P) \\subseteq {\\mathcal C}_{\\text{Avg}}(P), \\quad {\\mathcal D}_{\\text{Peak}}(P) \\subseteq {\\mathcal D}_{\\text{Avg}}(P).\n\\end{align}\n\n\\begin{lemma}\\label{lemma:3}\n For any rate pair $(R_1,R_2)$ achievable under the\n average power constraint $P$, for any $\\beta\\!>\\!1$, there exists $(R_1',\\!R_2')$ achievable\n under the peak power constraint $P^\\beta$, such that \n \\begin{align}\n R_k - R_k' = O(P^{1-\\beta} \\log P^{\\beta}), \\quad k=1,2, \\label{eq:boundRpeak}\n \\end{align\n In short, \n \\begin{align}\n \\mathcal{C}_{\\text{Avg}}(P) &\\subseteq \\mathcal{C}_{\\text{Peak}}(P^\\beta) + O(P^{1-\\beta} \\log P^{\\beta}), \\quad \\forall\\,\\beta>1. \\label{eq:boundCpeak}\n \\end{align\n\\end{lemma}\nSince the pre-log of the gap $P^{1-\\beta} \\log P^{\\beta}$ is vanishing at high SNR for any $\\beta>1$, we have the DoF region\n\\begin{align}\n \\mathcal{D}_{\\text{Avg}}(P) &\\subseteq \\mathcal{D}_{\\text{Peak}}(P^\\beta) \n \\subseteq \\mathcal{D}_{\\text{Avg}}(P^\\beta)\n , \\quad \\forall\\,\\beta>1. \n\\end{align\nLetting $\\beta$ arbitrarily close to $1$, we conclude that using the peak power\nconstraint instead of the average power constraint does not change the\noptimal DoF region. We therefore consider throughout the peak power\nconstraint, which can simplify considerably the analysis. \n\n\\begin{lemma} \\label{lemma:distributionR}\n Let $\\rvVec{Y} \\in \\mathbb{C}^{N}$ be a vector-valued random variable with distribution ${\\mathcal P}$. Consider another family of distributions ${\\mathcal R}$ whose densities are given by\n\t\\begin{align}\n\tr_\\rvVec{Y}(\\vect{y}) = \\frac{\\Gamma(N) |\\det \\pmb{A}|^2}{\\pi^{N} \\beta^\\alpha \\Gamma(\\alpha)} \\|\\pmb{A} \\vect{y}\\|^{2(\\alpha-N)} \\exp\\left(-\\frac{\\|\\pmb{A} \\vect{y}\\|^2}{\\beta}\\right), \\label{eq:distR}\n\t\\end{align}\n\tfor $\\vect{y} \\in \\mathbb{C}^{N}$, where $\\alpha, \\beta > 0$, $\\pmb{A}$ is any nonsingular deterministic $N\\times N$ complex matrix. When $\\beta = \\E_{\\mathcal P}[\\|\\pmb{A} \\rvVec{Y}\\|^2]$ and $\\alpha = 1\/\\log(\\beta) = 1\/\\log(\\E_{\\mathcal P}[\\|\\pmb{A} \\rvVec{Y}\\|^2])$, denote this distribution as ${\\mathcal R}(N,\\pmb{A})$. In this case,\n\t\\begin{multline}\n\t\\E_{\\mathcal P}[-\\log(r_\\rvVec{Y}(\\rvVec{Y}))] = -\\log |\\det\\pmb{A}|^2 + N \\E_{\\mathcal P}[\\log \\|\\pmb{A} \\rvVec{Y}\\|^2] \\\\+ O(\\log\\log(\\E[\\|\\mat{a}\\rvVec{Y}\\|^2])). 
\\label{eq:boundDistR\n\t\\end{multline}\n\\end{lemma}\n\nIf we take $\\rvVec{Y}$ as the channel output, as long as $\\E[\\|\\mat{a}\\rvVec{Y}\\|^2] \\le P^{c_0}$ for any constant $c_0$ whose value only depends on the channel statistics, the term $O(\\log\\log(\\E[\\|\\mat{a}\\rvVec{Y}\\|^2]))$ scales double-logarithmically with $P$. \nTherefore, in the DoF sense, it is enough to consider only the first two terms in~\\eqref{eq:boundDistR}.\n\n\n\n\\section{Main Result} \\label{sec:result}\nThe main finding of this paper is the optimal DoF region of the MAC described above, as stated in Theorem~\\ref{theo:DoFregion}\n\\begin{theorem} \\label{theo:DoFregion}\n\tFor the non-coherent multiple-access channel with two single-antenna transmitters and a $N$-antenna receiver in flat and block fading with coherence time $T$, the optimal DoF region is characterized by\n\t\\begin{align}\n\td_1 + d_2 \\le 1-\\frac{1}{T},\n\t\\end{align}\n\tif $T \\le 2$ or $N=1$, and\n\t\\begin{align}\n\t\\frac{d_1}{T-2}+d_2 &\\le 1-\\frac{1}{T}, \\label{eq:DoFbound1}\\\\\n\td_1+\\frac{d_2}{T-2} &\\le 1-\\frac{1}{T}, \\label{eq:DoFbound2}\n\t\\end{align}\n\totherwise. \n\\end{theorem} \n\\begin{remark}\n\tWhen $T\\to \\infty$, the optimal DoF region approaches the region in the coherent case: $d_1+d_2 \\le 1$ if $N=1$, and $\\max\\{d_1,d_2\\}\\le 1$ if $N>1$ (as shown in~Figure~\\ref{fig:DoF_SIMO_MAC_2users}). \n\\end{remark}\t\n\\begin{figure}[!h] \n\t\\centering\n\t\\includegraphics[width=.4\\textwidth]{DoF_SIMO_MAC_2users}\n\t\\caption{The optimal DoF region of two-user SIMO MAC with $N$ receive antennas in block fading with coherence time $T$.}\n\t\\label{fig:DoF_SIMO_MAC_2users}\n\\end{figure}\n\nThe case $T = 1$~(stationary fading) is trivial: zero DoF is achievable, even if two users cooperate~\\cite{Moser}. If $T = 2$ or $N=1$, the optimal DoF region is achieved with time division multiplexing between the users, noting that the active user can achieve $1-\\frac{1}{T}$ DoF by either a training-based scheme~\\cite{Hassibi2003howmuchtraining} or unitary space-time modulations~\\cite{Hochwald2000unitaryspacetime,ZhengTse2002Grassman}. The tight outer bound follows by letting two users cooperate, then according to~\\cite{ZhengTse2002Grassman,Yang2013CapacityLargeMIMO}, it is optimal to use $\\min\\{2,N,\\lfloor \\frac{T}{2}\\rfloor\\} = 1$ transmit antenna and achieve $1-\\frac{1}{T}$ DoF in total. \n\n\nWhen $T\\ge 3, N>1$, the region is the convex hull of the origin and three points: $\\left(1-\\frac{1}{T}, 0\\right)$, $\\left(0,1-\\frac{1}{T}\\right)$, and $\\left(1-\\frac{2}{T},1-\\frac{2}{T}\\right)$. The first two points are achieved by activating only one user. The third point is achieved with a training-based scheme: let two users send orthogonal pilots in the first two time slot for the receiver to learn their channel, then send data in the remaining $T-2$ time slots. The region is then achieved with time sharing between these points. It remains to show the tight outer bound for this case $T\\ge 3, N>1$, but before that, let us introduce the proof technique by using it for a new proof of the tight DoF for the single-user SIMO channel in the next section.\n\n\\section{Single-User SIMO Channel Revisited} \\label{sec:singleUser}\nConsider the single-user (point-to-point) SIMO channel with block fading\nwith coherence time $T$\n\\begin{align}\n\\rvMat{Y} = \\rvVec{H}\\,\\rvVec{X}^{\\scriptscriptstyle\\mathsf{T}} + \\rvMat{Z},\n\\end{align}\nwhere we have the same assumptions as in the MAC channel. 
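\n\nAs a quick numerical illustration of the $1-\\frac{1}{T}$ pre-log recalled next, the following Monte-Carlo sketch evaluates the standard training-based lower bound: a pilot in one slot, a per-antenna MMSE channel estimate, and Gaussian signalling in the remaining $T-1$ slots with the residual estimation error treated as worst-case Gaussian noise. The i.i.d. Rayleigh fading and the specific values of $T$, $N$ and the SNR grid are illustrative assumptions only and are not required by our model.\n\\begin{verbatim}\n# Monte-Carlo sketch (illustration only): training-based lower bound for the\n# single-user SIMO block-fading channel, assuming i.i.d. Rayleigh fading.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nT, N, trials = 8, 2, 50000\nsnr_dB = (20, 30, 40, 50)\nrates = []\nfor P_dB in snr_dB:\n    P = 10.0 ** (P_dB / 10.0)\n    h = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2.0)\n    z = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2.0)\n    h_hat = np.sqrt(P) / (1.0 + P) * (np.sqrt(P) * h + z)  # MMSE estimate from one pilot slot\n    sigma_e2 = 1.0 / (1.0 + P)                             # estimation error variance\n    snr_eff = P * np.sum(np.abs(h_hat) ** 2, axis=1) / (1.0 + P * sigma_e2)\n    rates.append((T - 1) / T * np.mean(np.log2(1.0 + snr_eff)))  # bits per channel use\nslopes = np.diff(rates) / np.diff([np.log2(10.0 ** (s / 10.0)) for s in snr_dB])\nprint([round(x, 2) for x in rates])\nprint([round(x, 3) for x in slopes])   # slopes approach (T-1)/T = 0.875\n\\end{verbatim}\n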
\nIt was shown that the DoF of this channel is $1-\\frac{1}{T}$ and can be\nachieved with either a training-based\nscheme~\\cite{Hassibi2003howmuchtraining} or well-designed space time\nmodulations~\\cite{Hochwald2000unitaryspacetime,ZhengTse2002Grassman,Yang2013CapacityLargeMIMO}.\nFor the converse of the high SNR capacity (which implies the converse of the\nDoF), while $h(\\rvMat{Y} | \\rvVec{X})$ can be calculated easily, the\nupper bound for $h(\\rvMat{Y})$ is much more involve\n~\\cite{ZhengTse2002Grassman,Yang2013CapacityLargeMIMO}. In this section,\nwe provide a simpler proof for the converse of the DoF using the duality approach as in~\\cite{Yang2013CapacityLargeMIMO} but with a simple choice of auxiliary output distribution.\n\nFirst, let us define the random variable $\\vect{r}{V}$ as the index of the\nstrongest input component, i.e.,\\footnote{When there are more than one such components, we pick an arbitrary one.}\n\\begin{align}\n\\vect{r}{V} \\triangleq \\arg\\max_{i=1,2,\\dots,T}\n|\\vect{r}{X}_i|^2.\n\\end{align} \nThus, $\\vect{r}{X}_\\vect{r}{V}$ denotes the entry in $\\rvVec{X}$ with the largest magnitude. Let the genie give $\\vect{r}{V}$ to the\nreceiver,\\footnote{This technique of giving the index of the strongest input\ncomponent to the receiver was initially proposed in~\\cite{ShengIT2017phasenoise} for phase noise channel.} we have\n\\begin{align}\nI(\\rvVec{X};\\rvMat{Y}) &\\le I(\\rvVec{X};\\rvMat{Y}, \\vect{r}{V})\\\\\n&= I(\\rvVec{X};\\rvMat{Y} | \\vect{r}{V}) + I(\\rvVec{X}; \\vect{r}{V} ) \\\\\n&\\le h(\\rvMat{Y} | \\vect{r}{V}) - h(\\rvMat{Y} | \\rvVec{X}, \\vect{r}{V}) + H(\\vect{r}{V}) \\\\\n&\\le h(\\rvMat{Y} | \\vect{r}{V}) - h(\\rvMat{Y} | \\rvVec{X}) + \\log(T), \\label{eq:boundRp2p}\n\\end{align}\nwhere the last inequality is because we have the Markov chain $\\vect{r}{V}\n\\leftrightarrow \\rvVec{X} \\leftrightarrow \\rvMat{Y}$ and $H(\\vect{r}{V}) \\le\n\\log(T)$. For a given $\\rvVec{X}$, we can apply Lemma~\\ref{lemma:1} with\n$\\rvMat{W}=[\\rvMat{H}\\ \\rvMat{Z}]$ and $\\mat{a}=\\left[ \n \\rvVec{X} \\ \\mat{\\mathrm{I}}_T \\right]^{\\scriptscriptstyle\\mathsf{T}}$ to \nobtain\n\\begin{align}\nh(\\rvMat{Y} | \\rvVec{X}) &= N\\E[\\log\\det(\\mat{\\mathrm{I}}_T + \\rvVec{X}^*\\rvVec{X}^{\\scriptscriptstyle\\mathsf{T}})]\n+ O(1) \\\\\n&= N\\E[\\log(1 + \\|\\rvVec{X}\\|^2)] + O(1).\n\\end{align}\nTo bound $h(\\rvMat{Y} | \\vect{r}{V})$, we use the duality approach~\\cite{Moser} as follows\n\\begin{align}\nh(\\rvMat{Y} | \\vect{r}{V}) &= \\E[-\\log p(\\rvMat{Y} | \\vect{r}{V})] \\notag\\\\\n&= \\E[-\\log q(\\rvMat{Y} | \\vect{r}{V})] - \\E_{\\vect{r}{V}}[D({\\mathcal P}_{\\rvMat{Y} | \\vect{r}{V} = v} \\| {\\mathcal Q})] \\notag \\\\\n&\\le \\E[-\\log q(\\rvMat{Y} | \\vect{r}{V})],\n\\end{align}\ndue to the non-negativity of the Kullback-Leibler divergence\n$D({\\mathcal P}_{\\rvMat{Y} | \\vect{r}{V} = v} \\| {\\mathcal Q})$. Here, conditioned on $\\vect{r}{V}$,\nthe distribution ${\\mathcal P}_{\\rvMat{Y} | \\vect{r}{V}}$ with probability density function (pdf) $p(.)$ is imposed by the input,\nchannel, and noise distributions, while ${\\mathcal Q}$ is\nany distribution in $\\mathbb{C}^{N\\times T}$ with the pdf $q(.)$.\nNote that a proper choice of ${\\mathcal Q}$ \nis the key to a tight upper bound. Our choice is inspired by a\ntraining-based scheme. 
Specifically, if we send a pilot symbol at time\nslot~$v\\in\\{1,\\ldots,T\\}$, then the output vector being the sum of\n$\\rvVec{H}$ and $\\rvMat{Z}_{[v]}$ should have comparable power in each\ndirection since $\\rvVec{H}$ is generic by assumption. Therefore, it is\nreasonable~(in the DoF sense) to let $\\rvMat{Y}_{[v]} \\sim\n{\\mathcal R}(N,\\mat{\\mathrm{I}}_N)$, where the family of distributions ${\\mathcal R}(N,\\pmb{A})$ is\ndefined in Lemma~\\ref{lemma:distributionR}. Now, \n$\\rvMat{Y}_{[v]}$ should provide a rough estimate of the direction of\nthe channel vector $\\rvVec{H}$. Based on such an observation, it is also\nreasonable to assume that, given $\\rvMat{Y}_{[v]}$, all other\n$\\rvMat{Y}_{[i]}$, $i\\ne v$, are mutually independent and follow\n\\begin{align}\n\\rvMat{Y}_{[i]} &\\sim {\\mathcal R}\\left(N,\\left(\\mat{\\mathrm{I}}_N+ \\rvMat{Y}_{[v]}\\rvMat{Y}_{[v]}^\\H\\right)^{-\\frac{1}{2}}\\right), \\quad \\forall i \\ne v.\n\\end{align}\nWe thus obtain a ``guess'' of the auxiliary joint distribution\n${\\mathcal Q}_{\\rvMat{Y}| \\vect{r}{V} = v}$. \n\\begin{proposition} \\label{prop:meanLogQ_p2p}\n\tWith the above choice of auxiliary output distribution, it follows that\n\t\\begin{multline}\n\t\\E[-\\log q(\\rvMat{Y} | \\vect{r}{V})] \\le (N+T-1)\\E[\\log(1+|\\vect{r}{X}_\\vect{r}{V}|^2)] \\\\+ N\\E[\\sum_{i=1,i\\ne \\vect{r}{V}}^{T}\\log\\left(1+\\frac{|\\vect{r}{X}_i|^2}{1+|\\vect{r}{X}_\\vect{r}{V}|^2}\\right)] + O(\\log\\log P). \\label{eq:propp2p}\n\t\\end{multline}\n\\end{proposition}\n\\begin{proof}\n\tSee Appendix~\\ref{proof:propMeanLogQ_p2p}.\n\\end{proof}\nPlugging the bounds into~\\eqref{eq:boundRp2p}, we obtain\n\\begin{align}\nI(\\rvVec{X};\\rvMat{Y}) &\\le (T\\!-\\!1)\\E[\\log(1\\!+\\!|\\vect{r}{X}_\\vect{r}{V}|^2)] + N\\E[\\log\\frac{1\\!+\\!|\\vect{r}{X}_\\vect{r}{V}|^2}{1\\!+\\!\\|\\rvVec{X}\\|^2}] \\notag\\\\&\\hspace{.5cm}+ N\\E[\\sum_{i=1,i\\ne \\vect{r}{V}}^{T}\\log\\left(1+\\frac{|\\vect{r}{X}_i|^2}{1+|\\vect{r}{X}_\\vect{r}{V}|^2}\\right)] \\notag\\\\&\\hspace{.5cm}+ O(\\log\\log P) \\notag\\\\\n&\\le (T-1)\\log(1+\\E[|\\vect{r}{X}_\\vect{r}{V}|^2]) + O(\\log\\log P) \\\\\n&\\le (T-1)\\log^+(P) + O(\\log\\log P),\n\\end{align}\nwhere we used the fact that $|\\vect{r}{X}_i|^2 \\le |\\vect{r}{X}_{\\vect{r}{V}}|^2 \\le \\|\\rvVec{X}\\|^2$, $\\forall i \\ne \\vect{r}{V}$. Thus, the DoF is upper bounded by $\\frac{T-1}{T}$, which is tight\n\n\\section{Two-User SIMO MAC} \\label{sec:converse}\nLet us get back to the MAC in this section and show that, when $T\\ge 3, N>1$, any achievable DoF pair $(d_1,d_2)$ must satisfy~\\eqref{eq:DoFbound1} and~\\eqref{eq:DoFbound2}.\n\n\\subsection{The $T\\ge N+1 > 2$ case}\nLet us consider the more straightforward case with $T\\ge N+1 > 2$. We first bound $R_1$ and $R_2$ using similar techniques as for the single-user case, and then give the tight outer bound for the DoF region in the following steps. \n\n\\subsubsection*{Step 1: Output Rotation and Genie-Aided Bound}\nGiven $\\rvVec{X}_2$, the channel with respect to (w.r.t.) input\n$\\rvVec{X}_1$ has equivalent noise $\\rvVec{H}_2 \\rvVec{X}_2^{\\scriptscriptstyle\\mathsf{T}} +\n\\rvMat{Z}$. Consider the following eigen-value decomposition\n\\begin{align}\n\\rvVec{X}_2^*\\rvVec{X}_2^{\\scriptscriptstyle\\mathsf{T}} = \\rvMat{U} \\ \\diag(0,\\dots,0,\\|\\rvVec{X}_2\\|^2) \\ \\rvMat{U}^\\H, \n\\end{align}\nfor some $T\\times T$ unitary matrix $\\rvMat{U}$. 
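\n\nSuch a rotation is easy to construct explicitly: any unitary $\\rvMat{U}$ whose last column is proportional to $\\rvVec{X}_2^*\/\\|\\rvVec{X}_2\\|$ realizes this decomposition. The small numerical check below (an illustration, not part of the proof) builds one such matrix by orthonormalization and verifies that $\\rvVec{X}_2^{\\scriptscriptstyle\\mathsf{T}}\\rvMat{U}$ concentrates all of its energy in the last entry.\n\\begin{verbatim}\n# Numerical check (illustration only): build a unitary U whose last column is\n# proportional to conj(x2)/||x2|| and verify the decomposition used in Step 1.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nT = 6\nx2 = rng.standard_normal(T) + 1j * rng.standard_normal(T)\nu_last = x2.conj() / np.linalg.norm(x2)\n# Orthonormalize [u_last, random vectors]; QR keeps the first column's\n# direction (up to a phase), which is all the decomposition needs.\nA = np.column_stack([u_last,\n                     rng.standard_normal((T, T - 1)) + 1j * rng.standard_normal((T, T - 1))])\nQ, _ = np.linalg.qr(A)\nU = np.column_stack([Q[:, 1:], Q[:, 0]])               # x2-direction goes last\nD = np.diag([0.0] * (T - 1) + [np.linalg.norm(x2) ** 2])\nprint(np.allclose(U.conj().T @ U, np.eye(T)))          # U is unitary\nprint(np.allclose(np.outer(x2.conj(), x2), U @ D @ U.conj().T))\nprint(np.round(np.abs(x2 @ U), 6))                     # energy only in the last entry\n\\end{verbatim}\n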
We consider the rotated\noutput $\\Yt = \\rvMat{Y} \\rvMat{U} = \\rvVec{H}_1 \\Xvt_1^{\\scriptscriptstyle\\mathsf{T}} + \\Zt$, where\n$\\Xvt_1^{\\scriptscriptstyle\\mathsf{T}} = \\rvVec{X}_1^{\\scriptscriptstyle\\mathsf{T}} \\rvMat{U} = [\\Xt_{11} \\ \\Xt_{12} \\ \\dots \\\n\\Xt_{1T}]$ and $\\Zt = (\\rvVec{H}_2 \\rvVec{X}_2^{\\scriptscriptstyle\\mathsf{T}} +\n\\rvMat{Z})\\rvMat{U}$. Note that given $\\rvVec{X}_2$, the first\n$T-1$ columns of the noise $\\Zt$ are i.i.d.~Gaussian whereas the last\ncolumn is stronger as the sum of $\\rvVec{H}_2 \\|\\rvVec{X}_2\\|$ and a Gaussian noise\nvector. Thus, we have\n\\begin{align}\nT R_1 \\le I(\\rvVec{X}_1; \\rvMat{Y} | \\rvVec{X}_2) &= I(\\Xvt_1;\\Yt | \\rvVec{X}_2). \\label{eq:boundR1_1\n\\end{align}\nLet us define the random variable $\\vect{r}{V}$ as the index of the strongest\namong the first $T-1$ elements of $\\Xvt_1$, namely,\n\\begin{align}\n\\vect{r}{V} = \\arg\\max_{i=1,2,\\dots,T-1} |\\Xt_{1i}|^2.\n\\end{align}\n\nSimilarly as in \\eqref{eq:boundRp2p} with the genie-aided bound, \n\\begin{align}\n\\hspace{-.2cm}{I(\\Xvt_1;\\Yt | \\rvVec{X}_2)\n\\le h(\\Yt | \\rvVec{X}_2,\\! \\vect{r}{V})\\! -\\! h(\\Yt | \\Xvt_1,\\rvVec{X}_2)\\!\n+\\! \\log(T-1).} \\label{eq:boundR1_2}\n\\end{align}\n\n\\subsubsection*{Step 2: Bounding $h(\\Yt | \\Xvt_1,\\rvVec{X}_2)$ and $h(\\Yt | \\rvVec{X}_2, \\vect{r}{V})$}\nGiven $\\Xvt_1$ and $\\rvVec{X}_2$, we can apply Lemma~\\ref{lemma:1} with\n$\\rvMat{W}=[\\rvVec{H}_1\\ \\rvVec{H}_2\\ \\rvMat{Z}]$ and $\\mat{a}=\\left[ \n\\rvVec{X}_1 \\ \\rvVec{X}_2 \\ \\mat{\\mathrm{I}}_T \\right]^{\\scriptscriptstyle\\mathsf{T}}\\rvMat{U}$ to\nobtain\n\\begin{multline}\nh(\\Yt|\\Xvt_1,\\rvVec{X}_2) = N \\E[\\log\\det(\n\\mat{a}^\\H \\mat{a})] + O(1) \\\\\n= N\\E\\bigg[\\log\\bigg((1+\\|\\rvVec{X}_2\\|^2)\\Big(1+\\sum_{i=1}^{T-1}|\\Xt_{1i}|^2\\Big) + |\\Xt_{1T}|^2\\bigg)\\bigg] \\\\+ O(1),\n\\label{eq:bound_ent}\n\\end{multline}\nwhere the last equality is obtained by applying $\\tilde{\\rvVec{X}}_1^{\\scriptscriptstyle\\mathsf{T}}\n= \\rvVec{X}_1^{\\scriptscriptstyle\\mathsf{T}} \\rvMat{U}$. \n\nFor $h(\\Yt | \\rvVec{X}_2, \\vect{r}{V})$, we use the duality upper bound\nas before\n\\begin{align}\nh(\\Yt | \\rvVec{X}_2, \\vect{r}{V}) = \\E[-\\log p(\\Yt | \\rvVec{X}_2, \\vect{r}{V})]\n&\\le \\E[-\\log q(\\Yt | \\rvVec{X}_2, \\vect{r}{V})], \\notag\n\\end{align}\nwhere the only difference from the single-user case is the presence of\n$\\rvVec{X}_2$. We choose the auxiliary pdf $q(.)$ as follows. 
Given $\\vect{r}{V} = v$, $v\\le T-1$, we let $\n\\Yt_{[v]} \\sim {\\mathcal R}(N,\\mat{\\mathrm{I}}_N)$,\nand given\n$\\Yt_{[v]}$, the other $\\Yt_{[i]}$'s are independent and follow\n\\begin{align}\n\\Yt_{[i]} &\\sim {\\mathcal R}\\left(N,\\left(\\mat{\\mathrm{I}}_N+\n\\Yt_{[v]}\\Yt_{[v]}^\\H\\right)^{-\\frac{1}{2}}\\right), \\quad\ni \\not\\in\\{v, T\\}, \\label{eq:auxDist_i}\\\\\n\\Yt_{[T]} &\\sim {\\mathcal R}\\left(N,\\left((1+\\|\\rvVec{X}_2\\|^2)\\mat{\\mathrm{I}}_N+ \\Yt_{[v]}\\Yt_{[v]}^\\H \\right)^{-\\frac{1}{2}}\\right).\\label{eq:auxDist_T}\n\\end{align}\n\n\\begin{proposition} \\label{prop:meanLogQ_MAC}\n\tWith the above choice of auxiliary output distribution, we obtain \n\tthe upper bound~\\eqref{eq:bound_meanlog} for $\\E[-\\log q(\\Yt | \\rvVec{X}_2, \\vect{r}{V})]$, and hence for $h(\\Yt | \\rvVec{X}_2, \\vect{r}{V})$.\n\t\\begin{figure*}[!t]\n\t\t\\begin{align}\n\t\t\\E&[-\\log q(\\Yt | \\rvVec{X}_2, \\vect{r}{V})] \\le (N+T-2)\\E[\\log(1+|\\Xt_{1\\vect{r}{V}}|^2)] + N\\E[\\sum_{i=1,i\\ne V}^{T-1} \\log\\left(1+\\frac{|\\Xt_{1i}|^2}{1+|\\Xt_{1\\vect{r}{V}}|^2}\\right)] \\notag\\\\ &\\hspace{.2cm}+ N\\E[\\log(1+\\|\\rvVec{X}_2\\|^2)] + \\E[\\log\\left(1+\\frac{|\\Xt_{1\\vect{r}{V}}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right)] + N\\E[\\log\\left(1+\\frac{|\\Xt_{1T}|^2}{1+\\|\\rvVec{X}_2\\|^2+|\\Xt_{1\\vect{r}{V}}|^2}\\right)] + O(\\log\\log P). \\label{eq:bound_meanlog}\n\t\t\\end{align}\n\t\t\\vspace*{-.8cm}\n\t\\end{figure*}\n\\end{proposition}\n\\begin{proof}\n\tSee Appendix~\\ref{proof:propMeanLogQ_MAC}.\n\\end{proof}\n\n\\subsubsection*{Step 3: Upper Bounds on $R_1$ and $R_2$}\nFrom~\\eqref{eq:boundR1_1}, \\eqref{eq:boundR1_2}, \\eqref{eq:bound_ent} and \\eqref{eq:bound_meanlog}, we have the bound for $R_1$\t\n\\begin{align}\nTR_1 \n&\\le \\E[f(\\Xvt_1,\\rvVec{X}_2)] + O(\\log\\log P),\n\\end{align}\nwhere $f(\\Xvt_1,\\rvVec{X}_2)$ is defined in~\\eqref{eq:f}\n\\begin{figure*}[!t]\n\t\\begin{multline}\n\tf(\\Xvt_1,\\rvVec{X}_2) \\triangleq (N+T-2)\\log\\left(1+\\max_{i=1,\\dots,T-1}|\\Xt_{1i}|^2\\right)\n\t+ \\log\\left(1+\\frac{\\displaystyle\\max_{i=1,\\dots,T-1}|\\Xt_{1i}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right) \\\\\n\t+ N\\log\\left(1+\\frac{|\\Xt_{1T}|^2}{1+\\|\\rvVec{X}_2\\|^2+\\displaystyle\\max_{i=1,\\dots,T-1}|\\Xt_{1i}|^2}\\right)\n\t- N\\log\\left(1+\\sum_{i=1}^{T-1}|\\Xt_{1i}|^2 + \\frac{|\\Xt_{1T}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right).\n\t\\label{eq:f}\n\t\\end{multline}\n\t\\setlength{\\arraycolsep}{1pt}\n\t\\hrulefill \\setlength{\\arraycolsep}{0.0em}\n\t\\vspace*{-.1cm}\n\\end{figure*}\nFollowing the exact same steps by swapping the users' role,\n\\begin{align}\nT R_2 \n&\\le \\E[f(\\Xvt_2,\\rvVec{X}_1)] + O(\\log\\log P),\n\\end{align}\nwhere $\\Xvt_2 \\triangleq \\rvVec{X}_2\\rvMat{U}_1$ with $\\rvMat{U}_1$ from the decomposition \n\\begin{align}\n\\rvVec{X}_1^*\\rvVec{X}_1^{\\scriptscriptstyle\\mathsf{T}} = \\rvMat{U}_1 \\ \\diag(0,\\dots,0,\\|\\rvVec{X}_1\\|^2) \\ \\rvMat{U}_1^\\H.\n\\end{align\n\nIt follows that, for any $\\lambda_1,\\lambda_2\\ge0$, we have the following upper bound on\nthe weighted sum rate\n\\begin{align}\n &\\hspace{-.3cm}{\\lambda_1 R_1\\! + \\! \\lambda_2 R_2}\\notag\\\\\n &\\le \\frac{1}{T} \\E[ \\lambda_1\n f(\\Xvt_1,\\rvVec{X}_2)\\! +\\! \\lambda_2 f(\\Xvt_2,\\rvVec{X}_1) ]\\! +\\! O(\\log\\log P) \\\\\n &\\le \\frac{1}{T} \\sup_{\\vect{x}_1,\\vect{x}_2}[ \\lambda_1 f(\\tilde{\\vect{x}}_1,\\vect{x}_2)\n \\! + \\! \\lambda_2 f(\\tilde{\\vect{x}}_2,\\vect{x}_1) ]\\! +\\! 
O(\\log\\log P),\n \\label{eq:weighted-sum-rate}\n\\end{align\nwhere the supremum is over all $\\vect{x}_1,\\vect{x}_2$ subject to the peak power\nconstraints $\\|\\vect{x}_1\\|^2\\le P$ and $\\|\\vect{x}_2\\|^2\\le P$. \n\n\\subsubsection*{Step 4: DoF upper bounds}\nSince we are only interested in the pre-log at high SNR, it is without\nloss of optimality to let \n$\\|\\vect{x}_1\\|^2 = P^{\\eta_1}, \\|\\vect{x}_2\\|^2 = P^{\\eta_2}$ for some\n$\\eta_1, \\eta_2 \\le 1$. In addition, we assume that \n\\begin{align}\n \\max_{i=1,\\dots,T-1}|\\tilde{x}_{1i}|^2 &= P^{\\bar{\\eta}_1}, \\quad |\\tilde{x}_{1T}|^2 = P^{\\eta_{1T}}, \\\\\n\\max_{i=1,\\dots,T-1}|\\tilde{x}_{2i}|^2 &= P^{\\bar{\\eta}_2}, \\quad\n|\\tilde{x}_{2T}|^2 = P^{\\eta_{2T}}.\n\\end{align}\nHence, at high SNR, \n$\\eta_1 = \\max\\{\\bar{\\eta}_1,\\eta_{1T}\\}$, $\\eta_2 =\n\\max\\{\\bar{\\eta}_2,\\eta_{2T}\\}$. From \\eqref{eq:f} and\n\\eqref{eq:weighted-sum-rate}, we have the weighted sum DoF bound\n\\begin{multline}\n{\\lambda_1 d_1 + \\lambda_2 d_2} \\\\ \n\\le \\lambda_1\\! \\frac{N\\!+\\!T\\!-\\!2}{T}\\bar{\\eta}_1 +\\! \\lambda_1\\!\n\\frac{1}{T}(\\bar{\\eta}_1\\!-\\!\\eta_2)^+ +\\! \\lambda_1\\!\n\\frac{N}{T}(\\eta_{1T}\\!-\\!\\max\\{\\bar{\\eta}_1,\\eta_2\\})^+ \\\\-\\lambda_1\n\\frac{N}{T}\\max\\{\\bar{\\eta}_1,\\eta_{1T}-\\eta_2\\}\\\\ \n+ \\lambda_2 \\frac{N\\!+\\!T\\!-\\!2}{T}\\bar{\\eta}_2 + \\lambda_2 \\frac{1}{T}(\\bar{\\eta}_2\\!-\\!\\eta_1)^+ + \\lambda_2 \\frac{N}{T}(\\eta_{2T}-\\!\\max\\{\\bar{\\eta}_2,\\eta_1\\})^+ \\\\-\\lambda_2\n\\frac{N}{T}\\max\\{\\bar{\\eta}_2,\\eta_{2T}-\\eta_1\\}, \\label{eq:weighted-sum-dof}\n\\end{multline}\nsubject to the constraints $\\bar{\\eta}_1, \\eta_{1T} \\le 1$ and\n$\\bar{\\eta}_2, \\eta_{2T} \\le 1$. Taking $(\\lambda_1,\\lambda_2)$ as $\\left(1,\\frac{1}{T-2}\\right)$ or $\\left(\\frac{1}{T-2},1\\right)$, we can verify that, when \n$3\\le N+1\\le T$, \\eqref{eq:DoFbound1} and~\\eqref{eq:DoFbound2} hold for all $(d_1,d_2)$ satisfying~\\eqref{eq:weighted-sum-dof}. Thus the optimal DoF region is characterized.\n\n\\subsection{The $3\\le T \\le N$ case}\nWhen $T \\le N$, the above choice of auxiliary output distribution is not sufficient for a tight DoF outer bound. To see this, let us take $(\\lambda_1,\\lambda_2) = \\left(1,\\frac{1}{T-2}\\right)$, then if $\\bar{\\eta}_1+\\eta_2 \\ge \\eta_{1T} = 1$ and $\\eta_2 = \\bar{\\eta}_1$, \\eqref{eq:weighted-sum-dof} becomes\n\\begin{align}\nd_1+\\frac{d_2}{T-2} \\le \\frac{T-1}{T} \\bar{\\eta}_1 + \\frac{N}{T} (\\eta_{1T} -\\bar{\\eta}_1)\n\\end{align}\nwhich is loose since the right-hand side is larger than $1-\\frac{1}{T}$ if $N \\ge T$.\nGenerally, the bound~\\eqref{eq:weighted-sum-dof} can be loose when ${\\eta}_{1T} > \\max\\{\\bar{\\eta}_1,\\eta_2\\}$ or ${\\eta}_{2T} > \\max\\{\\bar{\\eta}_2,\\eta_1\\}$. To account for such scenarios, we ought to refine our choice of auxiliary output distribution for the duality upper bound. First, given $\\rvVec{X}_2$, we define a pair of random variables $(\\vect{r}{V},\\vect{r}{U})$ as \n\\begin{align}\n\\vect{r}{V} = \\arg\\max_{i=1,2,\\dots,T} \\frac{|\\Xt_{1i}|^2}{\\sigma_i^2},\n\\end{align}\nwhere $\\sigma_i^2 = 1, \\forall i 1$. For convenience, let us denote the inputs following $p_{\\underline{\\rvVec{X}}_1}(x)$ and $p_{\\underline{\\rvVec{X}}_2}(x)$ as $\\underline{\\rvVec{X}}_1$ and $\\underline{\\rvVec{X}}_2$, respectively. Then $\\underline{\\rvVec{X}}_1$ and $\\underline{\\rvVec{X}}_2$ satisfy the peak power constraint $P^\\beta$. 
Similarly, we define $\\bar{\\rvVec{X}}_1$ and $\\bar{\\rvVec{X}}_2$ with pdf\n\\begin{align}\np_{\\bar{\\rvVec{X}}_i}(x) = \\begin{cases}\n\\frac{p_{\\|\\rvVec{X}_i\\|^2}(x)}{\\Pr(\\|\\rvVec{X}_i\\|^2 \\ge P^\\beta)}, &~\\text{if $x\\ge P^\\beta$}, \\\\\n0, &~\\text{if $x < P^\\beta$}.\n\\end{cases}\n\\end{align}\nClearly, $\\rvVec{X}_i$ equals $\\underline{\\rvVec{X}}_i$ if $\\|\\rvVec{X}_i\\|^2 < P^\\beta$ and $\\bar{\\rvVec{X}}_i$ otherwise. We define the random variable $\\vect{r}{V}$ as\n\\begin{align}\n\\vect{r}{V} = \\begin{cases}\n0, &~\\text{if $\\|\\rvVec{X}_1\\|^2 = \\|\\underline{\\rvVec{X}}_1\\|^2$ and $\\|\\rvVec{X}_2\\|^2 = \\|\\underline{\\rvVec{X}}_2\\|^2$}, \\\\\n1, &~\\text{otherwise}.\n\\end{cases}\n\\end{align}\nBy Markov's inequality, \n\\begin{multline}\n\\Pr(\\|\\rvVec{X}_i\\|^2 = \\|\\bar{\\rvVec{X}}_i\\|^2) = \\Pr(\\|\\rvVec{X}_i\\|^2 \\!\\ge\\! P^\\beta) \\\\ \\le \\frac{\\E[\\|\\rvVec{X}_i\\|^2]}{P^\\beta} \\le T P^{1-\\beta}, \\quad i=1,2, \\label{eq:markov1}\n\\end{multline}\nand thus \n\\begin{multline}\n\\Pr(\\vect{r}{V}\\!=\\!1) = 1-\\Pr(\\|\\rvVec{X}_1\\|^2 \\!=\\! \\|\\underline{\\rvVec{X}}_1\\|^2) \\Pr(\\|\\rvVec{X}_2\\|^2 \\!=\\! \\|\\underline{\\rvVec{X}}_2\\|^2) \\\\\n\\le 1-\\left(1-TP^{1-\\beta}\\right)^2 \\le 2TP^{1-\\beta}.\\label{eq:markov2}\n\\end{multline}\nLetting the genie give $\\vect{r}{V}$ to the receiver, we have\n\\begin{align}\nTR_1 &\\le I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2) \\\\\n&\\le I(\\rvVec{X}_1;\\rvMat{Y},\\vect{r}{V}|\\rvVec{X}_2) \\\\\n&= I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2,\\vect{r}{V}) + I(\\rvVec{X}_1;\\vect{r}{V}|\\rvVec{X}_2) \\\\\n&\\le \\Pr(\\vect{r}{V}=0)I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2,\\vect{r}{V}=0) \\notag \\\\&\\hspace{1cm}+ \\Pr(\\vect{r}{V}=1)I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2,\\vect{r}{V}=1) + 1, \\label{eq:tmp642} \\\\\n&\\le I(\\underline{\\rvVec{X}}_1;\\rvMat{Y}|\\underline{\\rvVec{X}}_2) + \\Pr(\\vect{r}{V}=1)I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2,\\vect{r}{V}=1) + 1, \\label{eq:tmp647}\n\\end{align}\nwhere~\\eqref{eq:tmp642} is due to $I(\\rvVec{X}_1;\\vect{r}{V}|\\rvVec{X}_2) \\le H(\\vect{r}{V}) \\le 1$ bit.\nNext, since removing noise and giving CSI can only increase the mutual information,\n\\begin{align}\n&\\hspace{-.5cm}\\Pr(\\vect{r}{V}=1)I(\\rvVec{X}_1;\\rvMat{Y}|\\rvVec{X}_2,\\vect{r}{V}=1) \\notag\\\\&\\le \\Pr(\\vect{r}{V}=1)I(\\rvVec{X}_1;\\rvVec{H}_1\\rvVec{X}_1^{\\scriptscriptstyle\\mathsf{T}} + \\rvMat{Z} | \\rvVec{H}_1,\\vect{r}{V}=1) \\\\\n&\\le N \\Pr(\\vect{r}{V}=1) \\log(1+\\E[\\|\\rvVec{X}_1\\|^2 | \\vect{r}{V}=1]) \\\\\n&\\le N \\Pr(\\vect{r}{V}=1) \\log\\left(1+\\frac{P}{\\Pr(\\|\\rvVec{X}_1\\|^2 \\ge P^\\beta)} \\right) \\label{eq:tmp807}\\\\\n&\\le N \\Pr(\\vect{r}{V}=1) \\log\\left(1+P\\right) \\notag \\\\&\\hspace{1cm}- N \\Pr(\\vect{r}{V}=1) \\log\\Pr(\\|\\rvVec{X}_1\\|^2 \\ge P^\\beta)\\\\\n&=O(P^{1-\\beta}\\log P^\\beta),\n\\end{align}\nwhere~\\eqref{eq:tmp807} is because \n\\begin{multline}\n\\E[\\|\\rvVec{X}_1\\|^2 | \\vect{r}{V}=1] \\le \\E[\\|\\bar{\\rvVec{X}}_1\\|^2] \n= \\frac{\\int_{P^\\beta}^{\\infty} x p_{\\|\\rvVec{X}_1\\|^2}(x) dx}{\\Pr(\\|\\rvVec{X}_1\\|^2 \\ge P^\\beta)} \n\\\\\\le \\frac{\\int_{0}^{\\infty} x p_{\\|\\rvVec{X}_1\\|^2}(x) dx}{\\Pr(\\|\\rvVec{X}_1\\|^2 \\ge P^\\beta)} \n\\le \\frac{P}{\\Pr(\\|\\rvVec{X}_1\\|^2 \\ge P^\\beta)},\n\\end{multline}\nand the last equality follows from~\\eqref{eq:markov1} and \\eqref{eq:markov2}.\nPlugging this into~\\eqref{eq:tmp647} yields \n\\begin{align}\nTR_1 \\le 
I(\\underline{\\rvVec{X}}_1;\\rvMat{Y}|\\underline{\\rvVec{X}}_2)+ O(P^{1-\\beta}\\log P^\\beta).\n\\end{align}\nFollowing the same steps by swapping the users' role, we get the bound for $R_2$\n\\begin{align}\nTR_2 \\le I(\\underline{\\rvVec{X}}_2;\\rvMat{Y}|\\underline{\\rvVec{X}}_1)+ O(P^{1-\\beta}\\log P^\\beta).\n\\end{align} \nUsing similar techniques, we can also show that \n\\begin{align}\nT(R_1+R_2) \\le I(\\underline{\\rvVec{X}}_1,\\underline{\\rvVec{X}}_2;\\rvMat{Y})+ O(P^{1-\\beta}\\log P^\\beta).\n\\end{align}\nTherefore, there exists $(R'_1,R'_2)$ satisfying \n\\begin{align}\nR'_1+R'_2 &\\le \\frac{1}{T}I(\\underline{\\rvVec{X}}_1, \\underline{\\rvVec{X}}_2; \\rvMat{Y}), \\\\\nR'_1 &\\le \\frac{1}{T}I(\\underline{\\rvVec{X}}_1; \\rvMat{Y} | \\underline{\\rvVec{X}}_2),\\\\\nR'_2 &\\le \\frac{1}{T}I(\\underline{\\rvVec{X}}_2; \\rvMat{Y} | \\underline{\\rvVec{X}}_1),\n\\end{align}\ni.e., achievable with the constructed inputs $\\underline{\\rvVec{X}}_1$ and $\\underline{\\rvVec{X}}_2$ satisfying the peak power constraint $P^\\beta$, such that~\\eqref{eq:boundRpeak} holds. This concludes the proof.\n\n\\subsubsection{Proof of Lemma~\\ref{lemma:distributionR}} \\label{proof:lemmaDistR}\nIn this proof, all expectations are implicitly w.r.t. ${\\mathcal P}$. A direct calculation from~\\eqref{eq:distR} yields\n\\begin{multline}\n\\E[-\\log(r_\\rvVec{Y}(\\rvVec{Y}))] = -\\log|\\det\\mat{a}|^2 + (N-\\alpha) \\E[\\log\\|\\mat{a}\\rvVec{Y}\\|^2] \\\\+ \\frac{\\E[\\|\\mat{a}\\rvVec{Y}\\|^2]}{\\beta} + \\log\\Gamma(\\alpha) + \\log \\beta^\\alpha + \\log \\frac{\\pi^N}{\\Gamma(N)}.\n\\end{multline}\nWhen $\\beta = \\E[\\|\\pmb{A} \\rvVec{Y}\\|^2]$ and $\\alpha = \\frac{1}{\\log(\\beta)} = \\frac{1}{\\log(\\E[\\|\\pmb{A} \\rvVec{Y}\\|^2])}$, this becomes\n\\begin{align}\n&\\E[-\\log(r_\\rvVec{Y}(\\rvVec{Y}))] \\notag\\\\&= -\\log|\\det\\mat{a}|^2 + N\\E[\\log\\|\\mat{a}\\rvVec{Y}\\|^2] - \\frac{\\E[\\log\\|\\mat{a}\\rvVec{Y}\\|^2]}{\\log(\\E[\\|\\pmb{A} \\rvVec{Y}\\|^2])} \\notag \\\\ &\\hspace{.5cm}+ \\log\\Gamma\\left(\\frac{1}{\\log(\\E[\\|\\pmb{A} \\rvVec{Y}\\|^2])}\\right) + \\log \\frac{e\\pi^N}{\\Gamma(N)}, \\\\\n&= -\\log|\\det\\mat{a}|^2 + N\\E[\\log\\|\\mat{a}\\rvVec{Y}\\|^2] \\notag \\\\ &\\hspace{3.5cm} + O(\\log\\log(\\E[\\|\\mat{a}\\rvVec{Y}\\|^2])),\n\\end{align}\nwhere the last equality is because $0<\\frac{\\E[\\log\\|\\mat{a}\\rvVec{Y}\\|^2]}{\\log(\\E[\\|\\pmb{A} \\rvVec{Y}\\|^2])}<1$ and \n\\begin{align}\n\\log\\Gamma\\left(\\frac{1}{\\log(\\E[\\|\\pmb{A} \\rvVec{Y}\\|^2])}\\right) - \\log\\log(\\E[\\|\\mat{a}\\rvVec{Y}\\|^2]) \\rightarrow 0,\n\\end{align}\nas $\\E[\\|\\mat{a}\\rvVec{Y}\\|^2] \\to \\infty$ due to\n\\begin{align}\n\\lim\\limits_{x\\to\\infty} \\log\\Gamma\\left(\\frac{1}{x}\\right)-\\log x &= \\lim\\limits_{x\\to\\infty} \\log\\left(\\frac{1}{x}\\Gamma\\left(\\frac{1}{x}\\right)\\right) \\notag\\\\\n&= \\lim\\limits_{x\\to\\infty} \\log\\left(\\Gamma\\left(1+\\frac{1}{x}\\right)\\right) \\notag\\\\\n&= \\log(\\Gamma(1)) \\notag\\\\\n&= 0.\n\\end{align}\n\n\\subsection{Proof of Proposition~\\ref{prop:meanLogQ_p2p}} \\label{proof:propMeanLogQ_p2p}\nUsing Lemma~\\ref{lemma:distributionR}, it follows that\n\\begin{align}\n&\\E[-\\log q(\\rvMat{Y} | \\vect{r}{V}=v)] \\notag\\\\&= N\\E[\\log\\|\\rvMat{Y}_{[v]}\\|^2] + \\sum_{i=1,i\\ne v}^{T} \\E\\Bigg[\\log\\det\\left(\\mat{\\mathrm{I}}_N+ \\rvMat{Y}_{[v]}\\rvMat{Y}_{[v]}^\\H\\right)\\Bigg. \\notag\\\\&\\hspace{.3cm}\\!+\\! \\Bigg. N\\log\\left\\|\\left(\\mat{\\mathrm{I}}_N\\!+\\! 
\\rvMat{Y}_{[v]}\\rvMat{Y}_{[v]}^\\H\\right)^{\\!-\\!\\frac{1}{2}} \\rvMat{Y}_{[i]}\\right\\|^2\\Bigg] \\!+\\! O(\\log\\log P)\\\\\n&= N\\E[\\log\\|\\rvMat{Y}_{[v]}\\|^2] + \\sum_{i=1,i\\ne v}^{T} \\E\\Bigg[\\log\\left(1 + \\|\\rvMat{Y}_{[v]}\\|^2\\right) \\Bigg. \\notag\\\\&\\hspace{.3cm}\\!+\\! \\left. N\\log\\left(\\|\\rvMat{Y}_{[i]}\\|^2\\!-\\! \\frac{|\\rvMat{Y}_{[i]}^\\H\\rvMat{Y}_{[v]}|^2}{1\\!+\\!\\|\\rvMat{Y}_{[v]}\\|^2} \\right)\\right] + O(\\log\\log P) \\\\\n&= (N+T-1)\\E[\\log(1+\\|\\rvMat{Y}_{[v]}\\|^2)] \\notag\\\\&\\hspace{.3cm}+ N\\sum_{i=1,i\\ne v}^{T} \\E\\Big[\\log(\\|\\rvMat{Y}_{[i]}\\|^2\\!+\\!\\|\\rvMat{Y}_{[i]}\\|^2\\|\\rvMat{Y}_{[v]}\\|^2\\!-\\!|\\rvMat{Y}_{[i]}^\\H\\rvMat{Y}_{[v]}|^2) \\Big. \\notag\\\\&\\hspace{2cm}-\\Big.\\log(1+\\|\\rvMat{Y}_{[v]}\\|^2)\\Big] + O(\\log\\log P),\n\\end{align}\nwhere in the second equality, we used the identities $\\det(\\mat{\\mathrm{I}}+\\vect{u}\\vect{v}^\\H) = 1+\\vect{v}^\\H\\vect{u}$, $\\det(c\\pmb{A}) = c^n \\det(\\pmb{A})$ for $\\pmb{A}\\in \\mathbb{C}^{n\\times n}$, and $\\|(\\pmb{A}+\\vect{u}\\vect{v}^\\H)^{-1/2} \\vect{x} \\|^2 = \\vect{x}^\\H (\\pmb{A}+\\vect{u}\\vect{v}^\\H)^{-1} \\vect{x} = \\vect{x}^\\H \\left( \\pmb{A}^{-1}-\\frac{\\pmb{A}^{-1} \\vect{u} \\vect{v}^\\H \\pmb{A}^{-1}}{1+\\vect{v}^\\H \\pmb{A}^{-1} \\vect{u}}\\right) \\vect{x}$. \n\nBy expanding $\\rvMat{Y}_{[1]}, \\dots, \\rvMat{Y}_{[T]}$, we get that, given $\\rvVec{X}$,\n\\begin{align}\n\\E_{\\rvVec{H},\\rvMat{Z}}[\\|\\rvMat{Y}_{[i]}\\|^2] = N(1+|\\vect{r}{X}_i|^2), \\quad \\forall i, \n\\end{align}\nand\n\\begin{multline}\n\\E_{\\rvVec{H},\\rvMat{Z}}[\\|\\rvMat{Y}_{[i]}\\|^2\\|\\rvMat{Y}_{[v]}\\|^2 - |\\rvMat{Y}_{[i]}^\\H\\rvMat{Y}_{[v]}|^2] \\\\= (N^2-N)(1+|\\vect{r}{X}_v|^2+|\\vect{r}{X}_i|^2),\\quad i\\ne v.\n\\end{multline}\nThen, using Jensen's inequality and Lemma~\\ref{lemma:2} (by letting $\\alpha$\narbitrarily close to $1$), we get that\n\\begin{align}\n&\\E[-\\log q(\\rvMat{Y} | \\vect{r}{V}=v)] \\notag\\\\&\\le (N+T-1)\\E[\\log(1\\!+\\!N\\!+\\!N|\\vect{r}{X}_v|^2)] \\notag\\\\&\\ +\\! N\\!\\sum_{i=1,i\\ne v}^{T}\\E[\\log\\frac{N\\!+\\!N|\\vect{r}{X}_i|^2\\!+\\!(N^2\\!-\\!N)(1\\! +\\! |\\vect{r}{X}_v|^2\\!+\\!|\\vect{r}{X}_i|^2)}{1+N+N|\\vect{r}{X}_v|^2}] \\notag\\\\&\\ + O(\\log\\log P) \\\\\n&= (N+T-1)\\E[\\log(1+|\\vect{r}{X}_v|^2)] \\notag\\\\&\\ +\\! N\\sum_{i=1,i\\ne v}^{T}\\E[\\log\\left(1\\!+\\!\\frac{|\\vect{r}{X}_i|^2}{1\\!+\\!|\\vect{r}{X}_v|^2}\\right)] \\!+\\! O(\\log\\log P).\n\\end{align}\nTaking expectation over $\\vect{r}{V}$, we obtain~\\eqref{eq:propp2p}, which concludes the proof.\n\n\\subsection{Proof of Proposition~\\ref{prop:meanLogQ_MAC}} \\label{proof:propMeanLogQ_MAC}\nUsing Lemma~\\ref{lemma:distributionR}, we obtain\n\\begin{align}\n&\\E[-\\log q(\\Yt |\\rvVec{X}_2, \\vect{r}{V}=v)] \\notag\\\\&= N\\E[\\log\\|\\Yt_{[v]}\\|^2] + \\sum_{i=1,i\\ne v}^{T-1}\\E[\\log\\det\\left(\\mat{\\mathrm{I}}_N+ \\Yt_{[v]}\\Yt_{[v]}^\\H\\right)] \\notag\\\\\n&\\hspace{.3cm}+ N\\sum_{i=1,i\\ne v}^{T-1}\\E[\\log\\left\\|\\left(\\mat{\\mathrm{I}}_N+ \\Yt_{[v]}\\Yt_{[v]}^\\H\\right)^{-\\frac{1}{2}} \\Yt_{[i]}\\right\\|^2] \\notag\\\\\n&\\hspace{.3cm}+ \\E[\\log\\det\\left((1+\\|\\rvVec{X}_2\\|^2)\\mat{\\mathrm{I}}_N+ \\Yt_{[v]}\\Yt_{[v]}^\\H\\right)] \\notag\\\\\n&\\hspace{.3cm}+ N\\E\\bigg[\\log\\Big\\|\\left((1\\!+\\!\\|\\rvVec{X}_2\\|^2)\\mat{\\mathrm{I}}_N\\!+\\! \\Yt_{[v]}\\Yt_{[v]}^\\H\\right)^{-\\frac{1}{2}} \\Yt_{[T]}\\Big\\|^2\\bigg]\\!
\\notag\\\\\n&\\hspace{.3cm}+ O(\\log\\log P) \\\\\n&\\le N\\E[\\log(1\\!+\\!|\\Xt_{1v}|^2)] \\!+\\! \\sum_{i=1,i\\ne v}^{T}\\!B_i + O(\\log\\log P), \\label{eq:bound_meanlog1}\n\\end{align}\nwhere\n\\begin{align}\nB_i \\triangleq\\! \\E\\left[\\log\\left(1\\!+\\! \\|\\Yt_{[v]}\\|^2\\right)\\!+\\! N\\log\\bigg(\\|\\Yt_{[i]}\\|^2 \\!-\\! \\frac{|\\Yt_{[i]}^\\H\\Yt_{[v]}|^2}{1\\!+\\!\\|\\Yt_{[v]}\\|^2} \\bigg)\\right], \\notag\n\\end{align}\nfor $i \\notin \\{v,T\\}$, and\n\\begin{multline}\nB_T \\triangleq \\E\\Bigg[\\log\\bigg((1+\\|\\rvVec{X}_2\\|^2)^{N}\\Big(1 + \\frac{\\|\\Yt_{[v]}\\|^2}{1+\\|\\rvVec{X}_2\\|^2}\\Big)\\bigg) \\Bigg.\\\\\\Bigg. +N\\log\\Bigg(\\frac{1}{1+\\|\\rvVec{X}_2\\|^2}\\bigg(\\|\\Yt_{[T]}\\|^2- \\frac{|\\Yt_{[T]}^\\H\\Yt_{[v]}|^2}{1+\\|\\rvVec{X}_2\\|^2+\\|\\Yt_{[v]}\\|^2} \\bigg) \\Bigg)\\Bigg]. \\notag\n\\end{multline}\nBy expanding $\\Yt_{[1]}, \\dots, \\Yt_{[T]}$, we get that, given $\\rvVec{X}_1$ and $\\rvVec{X}_2$,\n\\begin{multline}\n\\E_{\\rvVec{H}_1,\\rvMat{Z}}[\\|\\Yt_{[v]}\\|^2\\|\\Yt_{[i]}\\|^2 - |\\Yt_{[i]}^\\H\\Yt_{[v]}|^2] \\\\= (N^2-N)\\left(1+|\\Xt_{1v}|^2 + |\\Xt_{1i}|^2\\right), \\quad i \\notin \\{v,T\\},\n\\end{multline}\nand\n\\begin{multline}\n\\E_{\\rvVec{H}_1,\\rvMat{Z}}[\\|\\Yt_{[v]}\\|^2\\|\\Yt_{[T]}\\|^2 - |\\Yt_{[T]}^\\H\\Yt_{[v]}|^2] \\\\= (N^2-N)\\left((1+\\|\\rvVec{X}_2\\|^2)(1+|\\Xt_{1v}|^2) + |\\Xt_{1T}|^2\\right) \\\\\n\\le (N^2-N)(1+\\|\\rvVec{X}_2\\|^2)(1+|\\Xt_{1v}|^2 + |\\Xt_{1T}|^2).\n\\end{multline}\nThen, repeatedly applying Lemma~\\ref{lemma:2} (by letting $\\alpha$\narbitrarily close to $1$) and Jensen's inequality, we obtain\n\\begin{align}\nB_i &= \\E[-(N-1)\\log\\left(1 + \\|\\Yt_{[v]}\\|^2\\right)] \\notag\\\\\n&\\hspace{.5cm}+ N\\E[\\log\\left(\\|\\Yt_{[i]}\\|^2+\\|\\Yt_{[i]}\\|^2\\|\\Yt_{[v]}\\|^2 - |\\Yt_{[i]}^\\H\\Yt_{[v]}|^2\\right)] \\notag\\\\\n&\\le \\E[-(N-1)\\log(1+N+N|\\Xt_{1v}|^2)] \\notag\\\\\n&\\hspace{.5cm}+ N\\E[\\log(N^2(1\\!+\\!|\\Xt_{1i}|^2)+(N^2\\!-\\!N)|\\Xt_{1v}|^2)]\\!+\\!O(1) \\notag\\\\\n&=\\E[\\log(1\\!+\\!|\\Xt_{1v}|^2)] \\!+\\! N\\E[\\log\\Big(1\\!+\\!\\frac{|\\Xt_{1i}|^2}{1\\!+\\!|\\Xt_{1v}|^2}\\Big)] \\!+\\! O(1)\\label{eq:tmp458}\n\\end{align}\nfor $i \\notin \\{v,T\\}$, and\n\\begin{align}\n&B_T = N\\E[\\log(1+\\|\\rvVec{X}_2\\|^2)] + \\E[\\log\\left(1+\\frac{\\|\\Yt_{[v]}\\|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right)]\\notag\\\\\n&\\hspace{.2cm}+ N\\E[\\log\\Big(\\|\\Yt_{[T]}\\|^2\\! +\\! \\frac{\\|\\Yt_{[v]}\\|^2\\|\\Yt_{[T]}\\|^2\\! -\\! |\\Yt_{[T]}^\\H\\Yt_{[v]}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\Big)] \\notag\\\\\n&\\hspace{.2cm}-N\\E[\\log\\left(1+\\|\\rvVec{X}_2\\|^2+\\|\\Yt_{[v]}\\|^2\\right)] \\\\ \n&\\le N\\E[\\log(1+\\|\\rvVec{X}_2\\|^2)] + \\E[\\log\\left(1+\\frac{N+N|\\Xt_{1v}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right)]\\notag\\\\\n&\\hspace{.2cm}+ N\\E\\Big[\\log\\Big(N(1 + \\|\\rvVec{X}_2\\|^2) \\Big.\\Big. \\notag\\\\&\\hspace{2cm}\\Big.\\Big.+ (N^2-N)(1+|\\Xt_{1v}|^2)+N^2|\\Xt_{1T}|^2\\Big)\\Big] \\notag\\\\\n&\\hspace{.2cm}- N\\E[\\log(1+\\|\\rvVec{X}_2\\|^2+N+N|\\Xt_{1v}|^2)] + O(1) \\\\\n&= N\\E[\\log(1+\\|\\rvVec{X}_2\\|^2)] + \\E[\\log\\left(1+\\frac{|\\Xt_{1v}|^2}{1+\\|\\rvVec{X}_2\\|^2}\\right)]\\notag\\\\\n&\\hspace{.2cm}+N\\E[\\log\\left(1+\\frac{|\\Xt_{1T}|^2}{1+\\|\\rvVec{X}_2\\|^2+|\\Xt_{1v}|^2}\\right)] + O(1). \\label{eq:tmp466}\n\\end{align}\nPlugging \\eqref{eq:tmp458} and \\eqref{eq:tmp466} into~\\eqref{eq:bound_meanlog1} and then taking expectation over $\\vect{r}{V}$, we obtain \\eqref{eq:bound_meanlog}, which concludes the proof.
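For reference, the Jensen steps leading to \\eqref{eq:tmp458} and \\eqref{eq:tmp466} also use the first moments of the rotated outputs, which follow from the same expansion of $\\Yt_{[1]}, \\dots, \\Yt_{[T]}$ as the cross moments displayed above and are consistent with the constants appearing in those bounds (they parallel the single-user identity in Appendix~\\ref{proof:propMeanLogQ_p2p}):\n\\begin{align}\n\\E_{\\rvVec{H}_1,\\rvMat{Z}}[\\|\\Yt_{[i]}\\|^2] &= N(1+|\\Xt_{1i}|^2), \\quad i \\ne T, \\\\\n\\E_{\\rvVec{H}_1,\\rvMat{Z}}[\\|\\Yt_{[T]}\\|^2] &= N(1+\\|\\rvVec{X}_2\\|^2+|\\Xt_{1T}|^2).\n\\end{align}\nIndeed, with these values, Jensen's inequality gives, e.g., $\\E[\\log(\\|\\Yt_{[i]}\\|^2+\\|\\Yt_{[i]}\\|^2\\|\\Yt_{[v]}\\|^2-|\\Yt_{[i]}^\\H\\Yt_{[v]}|^2)] \\le \\log(N^2(1+|\\Xt_{1i}|^2)+(N^2-N)|\\Xt_{1v}|^2)$, which is the bound used in \\eqref{eq:tmp458}.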
\n\n\\end{appendix}\n\n\\bibliographystyle{IEEEtran}\n