diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlmft" "b/data_all_eng_slimpj/shuffled/split2/finalzzlmft" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlmft" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nFlash memory is a non-volatile technology that is both electrically\nprogrammable and electrically erasable. It incorporates a set of cells\nmaintained at a set of levels of charge to encode information.\nWhile raising the charge level of a cell is an easy operation,\nreducing the charge level requires the erasure of the whole block to which the cell belongs.\nFor this reason charge is injected into the cell over several iterations.\nSuch programming is slow and can cause errors since cells may be injected with extra unwanted charge.\nOther common errors in flash memory cells are due to charge leakage and reading disturbance that\nmay cause charge to move from one cell to its adjacent cells.\nIn order to overcome these problems, the novel framework\nof \\emph{rank modulation} was introduced in~\\cite{JMSB}.\nIn this setup the information is carried by the relative ranking of the\ncells' charge levels and not by the absolute values of the charge levels.\nThis allows for more efficient programming of cells, and coding by the ranking of the cells' levels\nis more robust to charge leakage than coding by their actual values.\nIn this model codes are subsets of~$S_n$,\nthe set of all permutations on $n$ elements,\nand the codewords are members of $S_n$,\nwhere each permutation corresponds to a ranking of $n$ cells' levels from the highest one to the lowest.\nFor example, the charge levels $(c_1,c_2,c_3,c_4)=(5,1,3,4)$ are represented by the codeword $[1,4,3,2]$\nsince the first cell has the highest level, the forth cell has the next highest level and so on.\n\nTo detect and\/or correct errors caused by injection of extra charge or due\nto charge leakage we will use an appropriate\ndistance measure.\nSeveral metrics on permutations are used for this purpose. In this\npaper we will consider only the Kendall's $\\tau$-metric~\\cite{JSB10,KeGi90}.\nThe Kendall's $\\tau$-distance between two permutation $\\pi_1$ and $\\pi_2$ in $S_n$\nis the minimum adjacent transpositions required to obtained $\\pi_2$ from $\\pi_1$, where adjacent transposition is an exchange of two distinct adjacent elements.\nFor example, the Kendall's $\\tau$-distance between $\\pi_1=[2,1,4,3]$ and $\\pi_2=[2,4,3,1]$ is $2$ as\n$[2,1,4,3]\\to [2,4,1,3] \\to [2,4,3,1]$.\nTwo permutations in this metric are at distance\none if they differ in exactly one pair of adjacent elements.\nDistance one between these two permutations represent an exchange of two\ncells, which are adjacent in the permutation, due to a small change in\ntheir charge level which change their order.\n\nGray codes are very important in the context of rank modulation\nas was explained in~\\cite{JMSB}. They are used in many other applications,\ne.g.~\\cite{EtPa96,SaWi95}. An excellent survey on Gray codes is given in~\\cite{Sav97}.\nThe usage of Gray codes for\nrank modulation was also discussed in~\\cite{GLSB11,GLSB13,JMSB,YeSc12}.\nThe permutations of $S_n$ in the rank modulation scheme\nrepresent \"new\" logical levels of the flash memory. 
Gray codes are very important in the context of rank modulation,
as was explained in~\cite{JMSB}. They are used in many other applications,
e.g.~\cite{EtPa96,SaWi95}. An excellent survey on Gray codes is given in~\cite{Sav97}.
The usage of Gray codes for
rank modulation was also discussed in~\cite{GLSB11,GLSB13,JMSB,YeSc12}.
The permutations of $S_n$ in the rank modulation scheme
represent ``new'' logical levels of the flash memory.
The codewords in
the Gray code provide the order of these levels which should be implemented
in various algorithms with the rank modulation scheme.
Usually, a Gray code is just a simple cycle in a graph, in which the edges are defined between
vertices with distance one in a given metric. Two adjacent
vertices in the graph represent, on one hand, two elements
whose distance is one in the given metric; and, on
the other hand, a move from one vertex to the other implied
by an operation defined by the metric. A snake-in-the-box code
is a Gray code in which two elements of the code are not
adjacent in the graph unless they are consecutive in the
code. Such a Gray code can detect a single error in a codeword.
Snake-in-the-box codes were mainly discussed in the context of
the Hamming scheme, e.g.~\cite{AbKa88}.

In the rank modulation scheme the Gray code is defined slightly
differently, since the operation is not defined by a metric.
The permutation is defined by the
order of the charge levels, from the highest one to the lowest one.
From a given ranking of the charge levels, which defines a permutation,
the next ranking is obtained by raising
the charge level of one of the cells to be the highest level.
This operation, called ``push-to-the-top'', is used in the rank modulation scheme.
For example, the charge levels $(c_1,c_2,c_3,c_4)=(5,1,3,4)$ are represented by the codeword $[1,4,3,2]$,
and by applying a push-to-the-top operation on the second cell, which has the lowest charge level,
we have, for example, the charge levels $(c_1,c_2,c_3,c_4)=(5,6,3,4)$, which are represented by the codeword $[2,1,4,3]$.
Hence, the permutation $\pi_2$ can follow the permutation $\pi_1$
if $\pi_2$ is obtained from $\pi_1$ by applying a push-to-the-top operation on $\pi_1$.
Therefore, the related graph is directed, with an outgoing edge from the vertex
which represents $\pi_1$ into the vertex which represents $\pi_2$.
On the other hand, one possible metric for the scheme is
the Kendall's $\tau$-metric.
A Gray code
(and a snake-in-the-box code as a special case)
related to the rank modulation scheme is a directed simple cycle in the graph.
In a snake-in-the-box code related to this scheme there is the additional requirement that
the Kendall's $\tau$-distance between any two codewords,
including consecutive codewords,
is at least two.
For example,
$C=([1,2,3,4],\ [4,1,2,3],\ [2,4,1,3],\ [3,2,4,1],\ [4,3,2,1], \newline [1,4,3,2],\ [3,1,4,2],\ [2,3,1,4])$
is a snake-in-the-box code in $S_4$,
obtained by alternately applying a push-to-the-top operation on the lowest cell and on the second lowest cell.
The Kendall's $\tau$-distance between any two permutations in $C$ is at least~$2$.
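Both claimed properties of $C$ can be checked mechanically. The following Python sketch (ours; \texttt{push\_to\_top} implements a push-to-the-top operation and \texttt{kendall\_tau} is the inversion count from the previous sketch) verifies that consecutive codewords of $C$ are obtained by push-to-the-top operations and that all pairwise distances are at least $2$.
\begin{verbatim}
def push_to_top(perm, i):
    # t_i: move the element in position i (1-based) to the front
    return [perm[i - 1]] + perm[:i - 1] + perm[i:]

def kendall_tau(p1, p2):
    pos = {v: k for k, v in enumerate(p2)}
    s = [pos[v] for v in p1]
    return sum(1 for a in range(len(s))
                 for b in range(a + 1, len(s)) if s[a] > s[b])

C = [[1,2,3,4], [4,1,2,3], [2,4,1,3], [3,2,4,1],
     [4,3,2,1], [1,4,3,2], [3,1,4,2], [2,3,1,4]]

# consecutive codewords: pushing the lowest cell (position 4) and the
# second lowest cell (position 3) alternately, also closing the cycle
for k in range(len(C)):
    i = 4 if k % 2 == 0 else 3
    assert push_to_top(C[k], i) == C[(k + 1) % len(C)]

# any two codewords are at Kendall's tau-distance at least 2
assert all(kendall_tau(C[a], C[b]) >= 2
           for a in range(len(C)) for b in range(a + 1, len(C)))
\end{verbatim}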
One of the most important problems in the research on
snake-in-the-box codes is to construct the
largest possible code for the given graph. In a snake-in-the-box
code for the rank modulation scheme
we would like to find such a code with the largest number of permutations.
In a recent paper by Yehezkeally and Schwartz~\cite{YeSc12}, the authors
constructed a snake-in-the-box code of length
${M_{2n+1}=(2n+1)(2n-1) M_{2n-1}}$ for permutations of $S_{2n+1}$,
from a snake of length $M_{2n-1}$ for permutations of~$S_{2n-1}$.
We will improve on this result by constructing a snake of length
$M_{2n+1}=((2n+1)2n-1) M_{2n-1}$ for permutations of $S_{2n+1}$,
from a snake of length $M_{2n-1}$ for permutations of~$S_{2n-1}$.
Thus, we have
$\lim\limits_{n\to \infty} \frac{M_{2n+1}}{|S_{2n+1}|}\approx 0.4338$,
improving on the previously known ratio of
$\lim\limits_{n\to \infty} \frac{1}{\sqrt{\pi n}}$~\cite{YeSc12}.
For these constructions of snake-in-the-box codes we
need an initial snake-in-the-box code, and the largest one known
to start both constructions is a snake of length 57 for
permutations of $S_5$. We also propose a direct construction
to form a snake of length
$\frac{(2n+1)!}{2} -2n+1$ for permutations
of $S_{2n+1}$. The direct construction was applied successfully for
$S_7$ and $S_9$. This implies a better initial condition for the
recursive constructions, and the ratio $\lim\limits_{n\to \infty} \frac{M_{2n+1}}{|S_{2n+1}|}\approx 0.4743$.

The rest of this paper is organized as follows.
In Section~\ref{sec:pre} we will define the basic concepts
of Gray codes in the rank modulation
scheme, the push-to-the-top operation,
and the Kendall's $\tau$-metric required in this paper.
In Section~\ref{sec:main} we present the main ideas
and a framework for constructions of snake-in-the-box codes.
In Section~\ref{sec:recursive} we present a recursive construction
based on the given framework. This construction is used to obtain
snake-in-the-box codes longer than the ones known before.
In~Section~\ref{sec:direct}, based on the framework,
we present an idea for a
direct construction based on necklaces.
The construction is used to obtain snake-in-the-box codes
of length $\frac{(2n+1)!}{2} -2n+1$ in $S_{2n+1}$,
which we believe are optimal. The construction was
applied successfully on $S_7$ and on $S_9$,
and we conjecture that it can be applied
on $S_n$ for any odd $n > 6$.
Conclusions and problems for future research are
presented in Section~\ref{sec:conclude}.

\section{Preliminaries}
\label{sec:pre}

In this section we will repeat some notation defined and mentioned
in~\cite{YeSc12}, and we also present some other definitions.

Let $[n]\triangleq\{1, 2, \ldots , n\}$
and let $\pi= [a_1, a_2, \ldots , a_n]$ be a permutation
over $[n]$, i.e., a permutation in $S_n$, such that for each $i\in [n]$ we have $\pi(i)=a_i$.

Given a set $\cS$ and a subset of transformations
$T\subseteq \{f~|~f:\cS\to \cS\}$, a \emph{Gray code}
over $\cS$ of size $M$, using transitions from
$T$, is a sequence $C=(c_0, c_1, \ldots , c_{M-1})$ of
$M$ distinct elements from $\cS$, called \emph{codewords}, such that for each
$j\in[M-1]$ there exists a $t\in T$ for which $c_j=t(c_{j-1})$.
The Gray code is called \emph{complete} if $M=|\cS|$,
and \emph{cyclic} if there exists $t\in T$ such that $c_0=t(c_{M-1})$.
Throughout this paper we will consider only cyclic Gray codes.


In the context of rank modulation for flash memories,
$\cS=S_n$ and the set of transformations $T$ consists of push-to-the-top
operations.
We denote by $t_i$ the \emph{push-to-the-top}
operation on index $i$, $2 \leq i \leq n$, defined by
\begin{align*}
t_i(&[a_1,\ldots,a_{i-1},a_i,a_{i+1},\ldots,a_n])=\\
&[a_i,a_1,\ldots,a_{i-1},a_{i+1},\ldots,a_n],
\end{align*}
and a \emph{p-transition} will be an abbreviated notation for a push-to-the-top operation.

A sequence of p-transitions will be called a \emph{transitions sequence}.
A permutation $\pi_0$ and a transitions sequence
$t_1 , t_2, \ldots , t_\ell$ define a sequence of permutations
$\pi_0 , \pi_1 , \pi_2 ,\ldots , \pi_{\ell-1} ,\pi_\ell$,
where $\pi_i = t_i (\pi_{i-1})$, for each $i$, $1 \leq i \leq \ell$. This
sequence is a cyclic Gray code if $\pi_\ell = \pi_0$
and for each $0 \leq i < j < \ell$, $\pi_i \neq \pi_j$.
In the sequel, the word cyclic will be omitted.

Given a permutation $\pi=[a_1,a_2,\ldots,a_n] \in S_n$,
an \emph{adjacent transposition} is an exchange of two distinct adjacent elements
$a_i,a_{i+1}$, in $\pi$, for some $1\leq i\leq n-1$.
The result of such an adjacent transposition is the permutation
$[a_1,\ldots,a_{i-1},a_{i+1},a_i,a_{i+2},\ldots,a_n]$.
The \emph{Kendall's} $\tau$-\emph{distance}~\cite{KeGi90} between two permutations
$\pi_1,\pi_2 \in S_n$, denoted by $d_K (\pi_1,\pi_2)$,
is the minimum number of adjacent transpositions
required to obtain the permutation $\pi_2$ from the permutation $\pi_1$.
A \emph{snake-in-the-box code} is a Gray code in which for any two
permutations $\pi_1$ and $\pi_2$ in the code we have
$d_K(\pi_1,\pi_2) \geq 2$.
Hence, a snake-in-the-box code is a Gray code capable of detecting
one Kendall's $\tau$-error. We will call such a snake-in-the-box
code a $\mK$\emph{-snake}.
We further denote by $(n,M,\mK)$\emph{-snake} a $\mK$-snake of size $M$
with permutations from $S_n$. A $\mK$-snake can be
represented in two different equivalent ways:
\begin{itemize}
{\setlength\itemindent{-9pt}
\item the sequence of codewords (permutations),
\item the transitions sequence along with the first permutation.}
\end{itemize}

Let $\cT$ be a transitions sequence and let $\pi$ be a permutation in $S_n$.
If a $\mK$-snake is obtained by applying $\cT$ on $\pi$,
then a $\mK$-snake will be obtained by using any other
permutation from $S_n$ instead of $\pi$. This is a simple observation
from the fact that $t( \pi_2 ( \pi_1 )) = \pi_2 (t(\pi_1))$,
where $t$ is a p-transition and $\pi_2 (\pi_1)$ refers to applying the
permutation $\pi_2 \in S_n$ on the permutation $\pi_1 \in S_n$.
In other words, applying $\cT$ on a different permutation
just permutes the symbols, by a fixed given permutation,
in all the resulting permutations obtained when $\cT$ is applied on $\pi$.
Therefore,
such a transitions sequence~$\cT$ will be called an \emph{S-skeleton}.
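The following Python sketch (ours) demonstrates this observation on the transitions sequence $t_4,t_3,t_4,t_3,\ldots$ of the snake $C$ from the Introduction: applying the same sequence from a different first permutation yields codewords that differ only by one fixed relabeling of the symbols.
\begin{verbatim}
def push_to_top(perm, i):
    # t_i acts on positions, so it commutes with relabeling symbols
    return [perm[i - 1]] + perm[:i - 1] + perm[i:]

def apply_sequence(indices, perm):
    # apply t_{k_1},...,t_{k_l} in order, collecting all codewords
    out = [perm]
    for i in indices:
        out.append(push_to_top(out[-1], i))
    return out

T = [4, 3, 4, 3, 4, 3, 4, 3]           # the S_4 example above
orbit1 = apply_sequence(T, [1, 2, 3, 4])
orbit2 = apply_sequence(T, [3, 1, 2, 4])
relabel = {1: 3, 2: 1, 3: 2, 4: 4}     # the fixed permutation pi_2
assert all([relabel[v] for v in p] == q
           for p, q in zip(orbit1, orbit2))
\end{verbatim}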
For a transitions sequence ${\sigma=t_{k_1}, t_{k_2}, \ldots , t_{k_{\ell}}}$
and a permutation $\pi \in S_n$,
we denote by $\sigma\left(\pi\right)$ the permutation obtained by applying
the sequence of p-transitions in $\sigma$ on $\pi$,
i.e., $t_{k_1}$ is applied on $\pi$, $t_{k_2}$ is applied on $t_{k_1} (\pi)$, and so on.
In other words, $\sigma\left(\pi\right)= (t_{k_1}\circ t_{k_2}\circ \ldots \circ t_{k_{\ell}})(\pi)=
t_{k_{\ell}}\left(t_{k_{\ell-1}}\left(\ldots t_{k_2}\left(t_{k_1}\left(\pi\right)\right)\right)\right)$.
Let $\sigma_1,\sigma_2$ be two transitions sequences.
We say that $\sigma_1$ and $\sigma_2$ are \emph{matching sequences},
and denote it by $\sigma_1 \leftrightsquigarrow \sigma_2$,
if for each ${\pi\in S_n}$ we have $\sigma_1(\pi)=\sigma_2(\pi)$.


In~\cite{YeSc12} it was proved that a Gray code with permutations from $S_n$ using only
p-transitions on odd indices is a $\mK$-snake. By starting with an even permutation and using only
p-transitions on odd indices we get a sequence
of even permutations, i.e., a subset of $A_n$, the alternating group on $n$ elements.
This observation saves us the need to check whether a Gray code is
in fact a $\mK$-snake, at the cost of restricting the
permutations in the $\mK$-snake to the set of even permutations.
However, the following assertions were also proved in~\cite{YeSc12}.
\begin{itemize}
\item If $C$ is an $(n,M,\mK)$-snake then $M\leq \frac{{|S_n|}}{2}$.
\item If $C$ is an $(n,M,\mK)$-snake which contains a p-transition
on an even index then ${M\leq{\frac{{|S_n|}}{2}-\frac{1}{n-1}\binom{\lfloor n/2 \rfloor -1}{2}}}$.
\end{itemize}
This motivates us not to use p-transitions on even indices.
Since we will use only p-transitions on odd indices,
we will describe our constructions only for even permutations over an odd number of elements.
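The parity observation above is easily checked by brute force; the following Python sketch (ours) verifies, for $n=5$, that a p-transition on an odd index preserves the parity of a permutation, and hence codes built only from such transitions stay inside $A_n$.
\begin{verbatim}
from itertools import permutations

def push_to_top(perm, i):
    return [perm[i - 1]] + perm[:i - 1] + perm[i:]

def is_even(perm):
    # parity via the number of inversions
    inv = sum(1 for a in range(len(perm))
                for b in range(a + 1, len(perm)) if perm[a] > perm[b])
    return inv % 2 == 0

n = 5
for p in permutations(range(1, n + 1)):
    p = list(p)
    for i in range(3, n + 1, 2):    # odd indices 3, 5, ...
        # t_i rotates the first i elements cyclically: a cycle of
        # odd length i is an even permutation
        assert is_even(push_to_top(p, i)) == is_even(p)
\end{verbatim}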
\section{Framework for Constructions of $\mK$-Snakes}
\label{sec:main}

In this section we present a framework for constructing $\mK$-snakes in $S_{2n+1}$.
Our snakes will contain only even permutations. We start by partitioning
the set of even permutations of $S_{2n+1}$ into classes. Next, we describe
how to merge $\mK$-snakes of different classes into one $\mK$-snake.
We conclude this section by describing how to combine most of these classes
by using a hypergraph whose vertices represent the classes and whose
edges represent the classes that can be merged together in one step.


We present two constructions for a $(2n+1,M_{2n+1},\mK)$-snake,
$C_{2n+1}$, one recursive and one direct. In this section
we present the framework for these constructions.
First, the permutations of $A_{2n+1}$,
the set of even permutations of~$S_{2n+1}$, are partitioned into classes,
where each class induces one $\mK$-snake which contains permutations only from the class.
All these snakes have the same S-skeleton.
Let $L_{2n+1}$ be the set of all the classes.

The construction of $C_{2n+1}$ from the $\mK$-snakes of $L_{2n+1}$
proceeds by a sequence of joins, where at each step
we have a main $\mK$-snake, and
two $\mK$-snakes from the remaining $\mK$-snakes
of $L_{2n+1}$ are joined to the current main $\mK$-snake.
A join is performed by replacing one transition
in the main $\mK$-snake with a matching sequence.

In order to join the $\mK$-snakes we need the following lemmas,
the first of which can be easily verified. In the sequel, let
${\sigma^{k}\triangleq\underbrace{\sigma\circ \sigma \circ \ldots \circ \sigma}_{k \, times}}$,
i.e., performing the transitions sequence $\sigma$, $k$ times.
\begin{lemma}
\label{lem:helpLem1}
If $\alpha,\beta \in S_n$ then $\beta{=}t_{i}(\alpha)$ if and only if $\alpha{=}t_i^{i-1}(\beta)$.
\end{lemma}
\begin{lemma}
\label{lem:mainLem}
If ${i \in [n-2]}$ then $t_i \leftrightsquigarrow t_{i+2}\circ (t_i^{i-1} \circ t_{i+2})^{2}$.
\end{lemma}
\begin{proof}
Let ${\alpha=[a_1, a_2, \ldots ,a_i, a_{i+1}, a_{i+2}, \ldots , a_n]}$ be a permutation over $[n]$.

\begin{align*}
\begin{tabular}{lllllllll}
$t_{i+2}(\alpha)$ & \\
\hspace*{\parindent} $=[a_{i+2}, a_1, \ldots , a_i, a_{i+1}, a_{i+3},$ & $\ldots,a_n]$,\tabularnewline
$t_i^{i-1}(t_{i+2}(\alpha))$ & \\
\hspace*{\parindent} $=[a_1, a_2, \ldots , a_{i-1}, a_{i+2}, a_i, a_{i+1}, a_{i+3},$ & $\ldots,a_n]$,\tabularnewline
$t_{i+2}(t_i^{i-1}(t_{i+2}(\alpha)))$ & \\
\hspace*{\parindent} $=[a_{i+1}, a_1, a_2, \ldots , a_{i-1}, a_{i+2}, a_i, a_{i+3},$ & $\ldots,a_n]$,\tabularnewline
$t_i^{i-1}(t_{i+2}(t_i^{i-1}(t_{i+2}(\alpha))))$ & \\
\hspace*{\parindent} $=[a_1, a_2, \ldots ,a_{i-1}, a_{i+1}, a_{i+2}, a_i, a_{i+3},$ & $\ldots,a_n]$,\tabularnewline
\intertext{and hence we have,}
$t_{i+2}(t_i^{i-1}(t_{i+2}(t_i^{i-1}(t_{i+2}(\alpha)))))$ & \\
\hspace*{\parindent} $=[a_i, a_1, \ldots , a_{i-1}, a_{i+1}, a_{i+2},$ & $\ldots,a_n]$ \tabularnewline
\hspace*{\parindent} $=t_i(\alpha)$.
\end{tabular}
\end{align*}
\end{proof}
\vspace{-4pt}
\begin{cor}
\label{cor:mainLem}
If $\pi \in S_{2n+1}$ then $t_{2n-1}(\pi)=t_{2n+1}\left(t_{2n-1}^{2n-2}\left(t_{2n+1}\left(t_{2n-1}^{2n-2}\left(t_{2n+1}(\pi)\right)\right)\right)\right)$.
\end{cor}

Lemma~\ref{lem:mainLem} can be generalized as follows
(the following lemma is given
for completeness, but it will not be used in the sequel, and hence
its proof is omitted).

\begin{lemma}
\label{lem:mainLemExp}
If ${i,j\in [n]}$ and ${|i-j|{=}k}$, then ${t_i \leftrightsquigarrow t_j\circ (t_i^{i-1} \circ t_j)^{k}}$.
\end{lemma}
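Lemma~\ref{lem:mainLem} can also be verified exhaustively for small $n$; the following Python sketch (ours; \texttt{push\_to\_top} is as in the earlier sketches) checks it for all permutations of $[6]$ and all valid indices $i$ (recall that $t_i$ is defined for $i\geq 2$).
\begin{verbatim}
from itertools import permutations

def push_to_top(perm, i):
    return [perm[i - 1]] + perm[:i - 1] + perm[i:]

def matching(perm, i):
    # apply t_{i+2}, then (t_i^{i-1} followed by t_{i+2}) twice,
    # i.e., the sequence t_{i+2} o (t_i^{i-1} o t_{i+2})^2
    p = push_to_top(perm, i + 2)
    for _ in range(2):
        for _ in range(i - 1):
            p = push_to_top(p, i)
        p = push_to_top(p, i + 2)
    return p

n = 6
for perm in permutations(range(1, n + 1)):
    for i in range(2, n - 1):       # i in [n-2] with i >= 2
        assert matching(list(perm), i) == push_to_top(list(perm), i)
\end{verbatim}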
The partition of $A_{2n+1}$ into the set of classes $L_{2n+1}$
should satisfy the following properties:
\begin{itemize}
\item[(P1)] The last two ordered elements of any two permutations in the same class are equal.
\item[(P2)] Any two permutations which differ only by a cyclic shift of the first $2n-1$ elements belong to the same class.
\end{itemize}

\begin{cor}
Let $\pi$ be a permutation in $A_{2n+1}$.
\begin{itemize}
\item $\pi$ and $t_{2n+1}(\pi)$ belong to different classes in $L_{2n+1}$.
\item $\pi$ and $t_{2n-1}(\pi)$ belong to the same class in $L_{2n+1}$.
\end{itemize}
\end{cor}

We continue now with the description of the method to join the
$\mK$-snakes of $L_{2n+1}$ into $C_{2n+1}$.
In the rest of the paper, $A_{2n+1}$ is partitioned into classes according to the last two ordered elements in the permutations.
Let $[x,y]$ denote the class of $A_{2n+1}$ in which the last ordered pair in the permutations is $(x,y)$.
Let $\cT$ be the S-skeleton of the $\mK$-snakes in $L_{2n+1}$.
Let $C_{\cT}^{\pi}$ be a $\mK$-snake for which
$\cT$ is its transitions sequence, and $\pi$ is its first permutation.
If $\pi$ belongs to the class $[x,y]$,
we say that $C_{\cT}^{\pi}$ represents the class $[x,y]$.
Note that all the permutations in $C_{\cT}^{\pi}$ belong to the same class.

The transitions sequence $\cT$
should satisfy the following properties (these properties are needed in order to make the required joins of cycles):
\begin{itemize}
\item[(P3)] $t_{2n-1}$ is the last transition in $\cT$.
\item[(P4)] Given a permutation $\pi=[a_1,\ldots, a_{2n},a_{2n+1}]$,
for each $x\in [2n+1]\setminus \{a_{2n},a_{2n+1}\}$
there exists a permutation $\pi' \in C_{\cT}^{\pi}$
whose last three ordered elements are $(x,a_{2n},a_{2n+1})$.
\end{itemize}

\begin{cor}
\label{cor:mergeCyclesCondition}
For each class $[x,y]$, a permutation $\pi \in [x,y]$, and $z\in [2n+1]\setminus \{x,y\}$,
there exists a permutation $\pi' \in C_{\cT}^{\pi}$ whose last three ordered elements are $(z,x,y)$,
followed by the permutation $t_{2n-1}(\pi')$.
\end{cor}

\begin{lemma}
\label{prop:mergeClasses1}
Let $C$ be a $\mK$-snake which doesn't contain any permutation from the classes $[y,z]$ or $[z,x]$, let
$\pi=[a_1, a_2, \ldots, a_{2n-2},z, \mathbf{x,y}]$
be a permutation in $C$ followed by $t_{2n-1}$, and let
$\sigma$ be a transitions sequence such that $\cT = \sigma \circ t_{2n-1}$.
Then replacing this $t_{2n-1}$ transition in $C$ with
\begin{equation*}
t_{2n+1}\circ \sigma \circ t_{2n+1}\circ \sigma \circ t_{2n+1}
\end{equation*}
joins two $\mK$-snakes representing the classes ${[y, z]}$ and ${[z, x]}$ into $C$ (after $\pi$).
\end{lemma}
\begin{proof}
Observe that by Lemma \ref{lem:helpLem1} we have~${\sigma \leftrightsquigarrow t_{2n-1}^{2n-2}}$.
Thus, we have
\begin{align*}
\pi=&
 \left.\begin{array}{l}
 		\left[ a_1, a_2, \ldots, a_{2n-2}, z, \mathbf{{x},y} \right]
 \\ \end{array}\right. \\
 &
 \left.\begin{array}{l}
 		\downarrow {t_{2n+1}}
 \\ \end{array}\right. \\
 &
 \left.\begin{array}{l}
 		\left[ y, a_1, a_2, \ldots, a_{2n-2}, \mathbf{z, x} \right] \\
 		\downarrow {\sigma \leftrightsquigarrow t_{2n-1}^{2n-2}} \\
 		\left[ a_1, a_2, \ldots, a_{2n-2}, y, \mathbf{z, x} \right]
 \\ \end{array}\right\}
 \left.\begin{array}{l}
 \mK-snake\\
 for\ \left[z,x\right] \\
 \end{array}\right. \\
 &
 \left.\begin{array}{l}
 		\downarrow {t_{2n+1}}
 \\ \end{array}\right. \\
 &
 \left.\begin{array}{l}
 		\left[ x, a_1, a_2, \ldots, a_{2n-2}, \mathbf{y, z}\right] \\
 		\downarrow {\sigma \leftrightsquigarrow t_{2n-1}^{2n-2}} \\
 		\left[ a_1, a_2, \ldots, a_{2n-2}, x, \mathbf{y, z}\right]
 \\ \end{array} \right\}
 \left.\begin{array}{l}
 \mK-snake\\
 for\ \left[y, z\right] \\
 \end{array}\right.
\\
 &
 \left.\begin{array}{l}
 		\downarrow {t_{2n+1}}
 \\ \end{array}\right. \hphantom{\left[ a_1, a_2, \ldots, x,\mathbf{y, z} \right]}
 \left.\begin{array}{l}
 return\ to\ the\ \\
 \mK-snake\ C \\
 \end{array}\right. \\
t_{2n-1}(\pi)=&
 \left.\begin{array}{l}
 \left[ z, a_1, a_2, \ldots, a_{2n-2}, \mathbf{x, y}\right]
 \end{array}\right.
\end{align*}
\end{proof}
The next step is to present an order for merging all the $\mK$-snakes of $L_{2n+1}$,
except one, into $C_{2n+1}$.
This step will be performed by
translating the merging problem into a 3-graph problem.
We start with a sequence of definitions taken from~\cite{hypergraph}.
\begin{definition}
\label{def:3-graph}
A 3-graph (also called a 3-uniform hypergraph) $H=(V,E)$ is a hypergraph where $V$ is a set of vertices
and $E\subseteq \binom{V}{3}$. A~hyperedge of $H$ will be called a triple.\\
A path in $H$ is an alternating sequence of $\ell+1$ distinct
vertices and $\ell$ distinct triples:
$v_0, e_1, v_1, \ldots ,v_{\ell-1}, e_{\ell}, v_{\ell}$,
with the property that $\forall i\in [\ell]: v_{i-1},v_{i} \in e_{i}$.\\
A cycle is a closed path, i.e., $v_0 = v_{\ell}$.\\
A sub-3-graph consists of a subset $E'\subseteq E$ and the subset
$V'\subseteq V$ which contains all the vertices in $E'$.\\
A tree $T$ in $H$ is a connected sub-3-graph of $H$ with no cycles.
\end{definition}

\pagebreak
Let $H_{2n+1}=(V_{2n+1},E_{2n+1})$ be a 3-graph defined as follows:
\begin{align*}
V_{2n+1}&=\{[x,y]~:~x,y \in [2n+1], x\ne y\},\\
E_{2n+1}&=\{\{[x,y],[y,z],[z,x]\}~:\\
&\hphantom{=\{\}} x,y,z\in [2n+1], x\ne y, x\ne z, y\ne z\}.
\end{align*}
We denote a hyperedge $\{[x,y],[y,z],[z,x]\}$, where $x<y$ and $x<z$, by $\langle x,y,z \rangle$.
A tree in $H_{2n+1}$ which spans all the vertices of $V_{2n+1}$, except for exactly one vertex, will be called a \emph{nearly spanning tree}.
\begin{example}
\label{Exm:T5}
Let $T_5$ be the nearly spanning tree in $H_5$ which doesn't include the vertex $[2,1]$,
given by the non-dashed part of the tree depicted in Figure~\ref{fig:T7}.
By merging the $\mK$-snakes of the classes according to the hyperedges of $T_5$
we obtain the $(5,57,\mK)$-snake depicted in Figure~\ref{fig:snake5}.
It was verified by a computer search that there is no $(5,M,\mK)$-snake with $M>57$ \cite{YeSc12}.
The S-skeleton of this code is $\sigma ^ 3$, where
\begin{align*}
\sigma=	& t_5,t_5,t_3,t_3,t_5,t_3,t_3,t_5,t_3,\\
		& t_5,t_5,t_3,t_3,t_5,t_3,t_3,t_5,t_3,t_5.
\end{align*}
\begin{figure*}[htbp]
\begin{center}
\small
\setlength{\tabcolsep}{1.5pt}
\begin{tabular}{*{56}{c|}c}
 3 & 2 & 1 & 3 & 2 & 5 & 3 & 2 & 4 & 3 & 1 & 5 & 3 & 1 & 2 & 3 & 1 & 4 & 3 & 5 &
 2 & 1 & 5 & 2 & 4 & 5 & 2 & 3 & 5 & 1 & 4 & 5 & 1 & 2 & 5 & 1 & 3 & 5 & 4 & 2 &
 1 & 4 & 2 & 3 & 4 & 2 & 5 & 4 & 1 & 3 & 4 & 1 & 2 & 4 & 1 & 5 & 4 \\
 4 & 3 & 2 & 1 & 3 & 2 & 5 & 3 & 2 & 4 & 3 & 1 & 5 & 3 & 1 & 2 & 3 & 1 & 4 & 3 &
 5 & 2 & 1 & 5 & 2 & 4 & 5 & 2 & 3 & 5 & 1 & 4 & 5 & 1 & 2 & 5 & 1 & 3 & 5 & 4 &
 2 & 1 & 4 & 2 & 3 & 4 & 2 & 5 & 4 & 1 & 3 & 4 & 1 & 2 & 4 & 1 & 5 \\
 5 & 4 & 3 & 2 & 1 & 3 & 2 & 5 & 3 & 2 & 4 & 3 & 1 & 5 & 3 & 1 & 2 & 3 & 1 & 4 &
 3 & 5 & 2 & 1 & 5 & 2 & 4 & 5 & 2 & 3 & 5 & 1 & 4 & 5 & 1 & 2 & 5 & 1 & 3 & 5 &
 4 & 2 & 1 & 4 & 2 & 3 & 4 & 2 & 5 & 4 & 1 & 3 & 4 & 1 & 2 & 4 & 1 \\
 1 & 5 & 4 & 4 & 4 & 1 & 1 & 1 & 5 & 5 & 2 & 4 & 4 & 4 & 5 & 5 & 5 & 2 & 2 & 1 &
 4 & 3 & 3 & 3 & 1 & 1 & 1 & 4 & 4 & 2 & 3 & 3 & 3 & 4 & 4 & 4 & 2 & 2 & 1 & 3 &
 5 & 5 & 5 & 1 & 1 & 1 & 3 & 3 & 2 & 5 & 5 & 5 & 3 & 3 & 3 & 2 & 2 \\
 2 & 1 & 5 & 5 & 5 & 4 & 4 & 4 & 1 & 1 & 5 & 2 & 2 & 2 & 4 & 4 & 4 & 5 & 5 & 2 &
 1 & 4 & 4 & 4 & 3 & 3 & 3 & 1 & 1 & 4 & 2 & 2 & 2 & 3 & 3 & 3 & 4 & 4 & 2 & 1 &
 3 & 3 & 3 & 5 & 5 & 5 & 1 & 1 & 3 & 2 & 2 & 2 & 5 & 5 & 5 & 3 & 3 \\
\end{tabular}
\end{center}
\caption{A $(5,57,\mK)$-snake obtained by $T_5$} \label{fig:snake5}
\end{figure*}
\end{example}

\begin{theorem}
\label{thm:treeCompose}
If $n\geq 2$, then there exists a nearly spanning tree $T_{2n+1}$ in $H_{2n+1}$ which doesn't include the vertex $[2,1]$.
\end{theorem}
\begin{proof}
We present a recursive construction for such a nearly spanning tree.
We start with the nearly spanning tree given in Example~\ref{Exm:T5}.
Note that $T_5$ doesn't include the vertex $[2,1]$.
Assume that there exists a nearly spanning tree,~$T_{2n-1}$,
in $H_{2n-1}$, which doesn't include the vertex $[2,1]$.
Note that $H_{2n-1}$ is a sub-graph of~$H_{2n+1}$,
and therefore~$T_{2n-1}$ is a tree in $H_{2n+1}$.
The vertices of $H_{2n+1}$ which are not spanned by $T_{2n-1}$ are
\begin{itemize}
\item $[x,2n], [2n,x], [x,2n+1], [2n+1,x]$ for each $x\in [2n-1]$,
\item $[2n,2n+1], [2n+1,2n]$,
\item $[2,1]$.
\end{itemize}
The nearly spanning tree $T_{2n+1}$ is constructed from $T_{2n-1}$ as follows.
For each $x$, $2\leq x \leq 2n-2$, the edges
${\langle x,x+1,2n \rangle}$ and $\langle x,x+1,2n+1 \rangle$
are joined to $T_{2n-1}$;
also the edges
$\langle 1,2,2n \rangle$,
$\langle 1,2n,2n-1 \rangle$,
$\langle 1,2n+1,2n-1 \rangle$,
$\langle 1,2n,2n+1 \rangle$, and
$\langle 2,2n+1,2n \rangle$ are joined to $T_{2n-1}$.
It is easy to verify that all the vertices of $H_{2n+1}$
which are not spanned by $T_{2n-1}$ (except for $[2,1]$)
are contained in the list of the edges which are joined to $T_{2n-1}$.
When an edge is joined to the tree, it has one vertex which is already in the
tree and two vertices which are not in the tree. Hence, connectivity is
preserved and no cycle is formed.
Thus, by joining these edges to $T_{2n-1}$
we form a nearly spanning tree in $H_{2n+1}$.
\end{proof}

\begin{example}
\label{Exm:T7}
By using Theorem~\ref{thm:treeCompose} and the
nearly spanning tree $T_5$ of Example~\ref{Exm:T5},
we obtain the nearly spanning tree $T_7$ depicted in Figure~\ref{fig:T7}.
The dashed-box edges and the double-line nodes are the ones added to $T_5$ in order to form $T_7$.
\pagestyle{empty}
\tikzstyle{level 1}=[sibling angle=90]
\tikzstyle{level 2}=[sibling angle=80]
\tikzstyle{level 3}=[sibling angle=50]
\tikzstyle{level 4}=[sibling angle=50]
\tikzstyle{every node}=[draw]
\tikzstyle{edge from parent}=[draw]
\begin{figure*}[htbp]
\centering
\begin{tikzpicture}[grow cyclic,very thick,level distance=10mm,
 cap=round, scale=0.9]
 \tikzset{VertexStyle/.style = {shape = circle, draw, text = black,
 inner sep = 1pt, outer sep = 0pt, minimum size = 8 pt, scale=0.9}}
 \tikzset{Vertex7Style/.style = {shape = circle, draw, thick, double,
 text = black, inner sep = 1pt, outer sep = 1pt, minimum size = 8 pt, scale=0.9}}
 \tikzset{EdgeStyle/.style = {draw, thin, scale=0.9}}
 \tikzset{LabelStyle/.style = {text = black, draw=none, scale=0.9}}
 \tikzset{Label7Style/.style = {text = black,outer sep = 1pt,dashed,thin, scale=0.9}}
\node[VertexStyle] {12}
  child {
    node[Label7Style] {126}
    child { node[Vertex7Style] {26} }
    child { node[Vertex7Style] {61} }
  }
  child {
    node[LabelStyle] {124}
    child {
      node[VertexStyle] {24}
      child {
        node[LabelStyle] {243}
        child { node[VertexStyle] {43} }
        child { node[VertexStyle] {32} }
      }
    }
    child {
      node[VertexStyle] {41}
      child {
        node[LabelStyle] {134}
        child { node[VertexStyle]
{13} }
        child {
          node[VertexStyle] {34}
          child {
            node[Label7Style] {346}
            child { node[Vertex7Style] {46} }
            child { node[Vertex7Style] {63} }
          }
          child {
            node[Label7Style] {347}
            child { node[Vertex7Style] {47} }
            child { node[Vertex7Style] {73} }
          }
        }
      }
      child [missing]
      child [missing]
    }
  }
  child {
    node[LabelStyle] {123}
    child {
      node[VertexStyle] {23}
      child {
        node[LabelStyle] {235}
        child { node[VertexStyle] {35} }
        child { node[VertexStyle] {52} }
      }
      child {
        node[Label7Style] {236}
        child { node[Vertex7Style] {36} }
        child {
          node[Vertex7Style] {62}
          child {
            node[Label7Style] {276}
            child { node[Vertex7Style] {27} }
            child { node[Vertex7Style] {76} }
          }
        }
      }
      child {
        node[Label7Style] {237}
        child { node[Vertex7Style] {37} }
        child { node[Vertex7Style] {72} }
      }
    }
    child {
      node[VertexStyle] {31}
      child {
        node[LabelStyle] {153}
        child { node[VertexStyle] {15} }
        child { node[VertexStyle] {53} }
      }
    }
  }
  child {
    node[LabelStyle] {125}
    child {
      node[VertexStyle] {25}
      child {
        node[LabelStyle] {254}
        child { node[VertexStyle] {54} }
        child { node[VertexStyle] {42} }
      }
    }
    child {
      node[VertexStyle] {51}
      child {
        node[LabelStyle] {145}
        child { node[VertexStyle] {14} }
        child {
          node[VertexStyle] {45}
          child {
            node[Label7Style] {456}
            child { node[Vertex7Style] {56} }
            child { node[Vertex7Style] {64} }
          }
          child {
            node[Label7Style] {457}
            child { node[Vertex7Style] {57} }
            child { node[Vertex7Style] {74} }
          }
        }
        child [missing]
      }
      child {
        node[Label7Style]
{165}
        child {
          node[Vertex7Style] {16}
          child {
            node[Label7Style] {167}
            child [missing]
            child { node[Vertex7Style] {67} }
            child { node[Vertex7Style] {71} }
          }
        }
        child { node[Vertex7Style] {65} }
      }
      child {
        node[Label7Style] {175}
        child { node[Vertex7Style] {17} }
        child { node[Vertex7Style] {75} }
      }
    }
  }
  ;
\end{tikzpicture}
\vspace{-15pt}
\caption{The nearly spanning tree $T_7$ constructed from $T_5$} \label{fig:T7}
\end{figure*}
\end{example}

\section{A Recursive Construction}
\label{sec:recursive}

In this section we present the recursive construction for a
$(2n+1,M_{2n+1},\mK)$-snake from a
$(2n-1,M_{2n-1},\mK)$-snake. The construction is based on the
nearly spanning tree $T_{2n+1}$ presented in the previous section.
Each of its vertices represents a class in which a $\mK$-snake based
on the $(2n-1,M_{2n-1},\mK)$-snake is generated. Those $\mK$-snakes
are merged together into one $(2n+1,M_{2n+1},\mK)$-snake using the
framework presented in the previous section. We conclude this
section by analyzing the length of the generated
$\mK$-snake compared to the total number of permutations in $S_{2n+1}$.

We generate a $(2n+1,M_{2n+1},\mK)$-snake, $C_{2n+1}$,
whose transitions sequence is
${t_{k_1} , t_{k_2} ,\ldots , t_{k_{M_{2n+1}}}}$.
$C_{2n+1}$ has the following properties:
\begin{enumerate}
\item[(Q1)] $k_j$ is odd for all ${j\in[M_{2n+1}]}$.
\item[(Q2)] $k_{M_{2n+1}}=2n+1$.
\item[(Q3)] For each $z\in [2n+1]$ there exists a permutation $\pi \in C_{2n+1}$ such that $\pi(2n+1)=z$.
\end{enumerate}
The starting point of the recursive construction is $2n+1=3$.
The transitions sequence for $2n+1=3$ is $t_3,t_3,t_3$,
and the complete $(3,3,\mK)$-snake is $C_3\eqdef \{[1, 2, 3], [3, 1, 2], [2, 3, 1]\}$.
Clearly (Q1), (Q2), and (Q3) hold for this transitions sequence and $C_3$.

Now, assume that there exists a $(2n-1,M_{2n-1},\mK)$-snake, $C_{2n-1}$,
which satisfies properties (Q1), (Q2), and (Q3), and let
$\cT_{2n-1}=t_{k_1} , t_{k_2} , \ldots , t_{k_{M_{2n-1}}}$
be its S-skeleton, i.e., $\cT_{2n-1}$ is the transitions sequence of $C_{2n-1}$.
Note that (Q1), (Q2), and (Q3) depend on the transitions sequence $\cT_{2n-1}$
and are independent of the first permutation of $C_{2n-1}$.
We construct a ${(2n+1,M_{2n+1},\mK)}$-snake, $C_{2n+1}$,
where $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$, which also satisfies (Q1), (Q2), and (Q3).

First, all the permutations of $A_{2n+1}$ are
partitioned into ${(2n+1)(2n)}$ classes according
to the last two ordered elements in the permutations.
This implies that (P1) and (P2) are satisfied.
In addition, (P3) and (P4) for $\cT_{2n-1}$ are immediately
implied by (Q2) and (Q3) for $C_{2n-1}$, respectively.
Hence $\cT_{2n-1}$ can be used
as the S-skeleton for the $\mK$-snakes in $L_{2n+1}$.
Now, we merge the $\mK$-snakes of the classes in $L_{2n+1}$ (except $[2,1]$)
by using Lemma~\ref{prop:mergeClasses1} and the nearly spanning tree $T_{2n+1}$ of
Theorem~\ref{thm:treeCompose}.
We have to show that (Q1), (Q2), and (Q3) are satisfied for $C_{2n+1}$. (Q1) is readily verified.
Clearly, $t_{2n+1}$ was used to obtain $C_{2n+1}$ (see Lemma~\ref{prop:mergeClasses1}),
and therefore we can always define $\cT_{2n+1}$
in such a way that its last transition is $t_{2n+1}$, and hence (Q2) is satisfied.
For each $z\in [2n+1]$ there exists a class $[x,z]$
whose $\mK$-snake is joined into $C_{2n+1}$,
and therefore (Q3) is satisfied.
Thus, we have
\begin{theorem}
Given a $(2n-1,M_{2n-1},\mK)$-snake which satisfies (Q1), (Q2), and (Q3),
we can obtain a $(2n+1,M_{2n+1},\mK)$-snake,
where $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$,
which also satisfies (Q1), (Q2), and (Q3).
\end{theorem}

Following~\cite{YeSc12},
we define $D_{2n+1}=\frac{M_{2n+1}}{(2n+1)!}$ as the ratio between
the number of permutations in the given ${(2n+1, M_{2n+1},\mK)}$-snake
and the size of $S_{2n+1}$.
Recall that if $C$ is a $(2n+1,M,\mK)$-snake then $M\leq \frac{{|S_{2n+1}|}}{2}$,
and we conjecture that the optimal size is ${M=\frac{(2n+1)!}{2}-2n+1}$.
Thus, it is desirable to obtain a value of $D_{2n+1}$ as close to one half as possible.
In our recursive construction $M_{2n+1}=((2n+1)(2n)-1)M_{2n-1}$,
and hence $\frac{D_{2n+1}}{D_{2n-1}}=\frac{(2n+1)(2n)-1}{(2n+1)(2n)}=1-\frac{1}{2n(2n+1)}$.
Thus, we have
\begin{align*}
D_3&=\frac{1}{2},\\
\prod_{n=2}^{\infty}\frac{D_{2n+1}}{D_{2n-1}}&=
\frac{12\sqrt{\pi}}{5(1+\sqrt{5})\Gamma(\frac{1}{4}(5-\sqrt{5}))\Gamma(\frac{1}{4}(1+\sqrt{5}))},
\end{align*}
which implies that
\begin{align*}
\lim\limits_{n\to \infty} D_{2n+1}&= \frac{1}{2}\cdot \frac{12\sqrt{\pi}}{5(1+\sqrt{5})\Gamma(\frac{1}{4}(5-\sqrt{5}))\Gamma(\frac{1}{4}(1+\sqrt{5}))}\\
&\approx 0.4338.
\end{align*}
This computation can be done by any mathematical tool, e.g., WolframAlpha.
This improves on the construction described in \cite{YeSc12}, which yields $M_{2n+1}=(2n+1)(2n-1)M_{2n-1}$ and $\lim\limits_{n\to \infty} D_{2n+1}=\lim\limits_{n\to \infty} \frac{1}{\sqrt{\pi n}}$.
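The following Python sketch (ours) evaluates this limit numerically from the recursion $\frac{D_{2n+1}}{D_{2n-1}}=1-\frac{1}{2n(2n+1)}$; the second call anticipates the $(9,181433,\mK)$-snake obtained by the direct construction of the next section, which yields the improved ratio mentioned in the Introduction.
\begin{verbatim}
from math import factorial

def limit_ratio(n0, M0, steps=10**5):
    # D_{2k+1} = M_{2k+1}/(2k+1)! under the recursion
    # M_{2k+1} = ((2k+1)(2k)-1) M_{2k-1}
    D, n = M0 / factorial(n0), n0
    for _ in range(steps):
        n += 2
        D *= (n * (n - 1) - 1) / (n * (n - 1))
    return D

print(limit_ratio(3, 3))         # ~0.4338, starting from C_3
print(limit_ratio(9, 181433))    # ~0.4743, starting from the direct
                                 # construction's snake in S_9
\end{verbatim}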
\section{A Direct Construction based on Necklaces}
\label{sec:direct}

In this section we describe a direct construction to
form a $(2n+1,M_{2n+1},\mK)$-snake. First, we describe
a method to partition the classes which were used before
into subclasses that are similar to necklaces. Next, we show how
subclasses from different classes are merged into
disjoint chains. Finally, we present a hypergraph and a graph
in which we have to search for certain trees to form our desired $\mK$-snake, which
we believe is of maximum length. Such $\mK$-snakes were found in $S_7$ and $S_9$.


We present a direct construction for a $(2n+1,M_{2n+1},\mK)$-snake, $C_{2n+1}$.
The goal is to obtain $M_{2n+1}=\frac{(2n+1)!}{2}-(2n-1)$,
and hence ${\frac{D_{2n+1}}{D_{2n-1}}\geq 1-\frac{1}{(2n)!}}$.
We believe that there is always a $(2n+1,M_{2n+1},\mK)$-snake with
$M_{2n+1}=\frac{(2n+1)!}{2}-(2n-1)$ and that there is no such $\mK$-snake
with more codewords.
We make a slight change in the framework discussed in Section~\ref{sec:main}.
First, all the permutations of $A_{2n+1}$ are
partitioned into ${(2n+1)(2n)}$ classes according to the last two ordered elements.
We denote by $[x,y]$ the class of all even permutations
in which the last ordered pair in the permutation is $(x,y)$.
Each class is further partitioned into subclasses
according to the cyclic order of the first $2n-1$ elements in the permutations,
i.e., in each class $[x,y]$, the $\frac{(2n-1)!}{2}$ permutations
are partitioned into $\frac{(2n-2)!}{2}$ disjoint subclasses.
This implies that (P1) and (P2) are satisfied for both classes and subclasses.
We denote each one of the subclasses by $[\alpha]-[x,y]$, where $\alpha$
is the cyclic order of the first $2n-1$ elements in the permutations of the subclass.
Let $\alpha_1, \alpha_2$ be two permutations over $[2n+1]\setminus \{x,y\}$.
If $\alpha_1$ and $\alpha_2$ have the same cyclic order, we denote it by $\alpha_1 \simeq \alpha_2$;
otherwise $\alpha_1 \not\simeq \alpha_2$.
Note that if $\alpha_1 \simeq \alpha_2$ then $[\alpha_1]-[x,y]=[\alpha_2]-[x,y]$.
For example, $[1,2,3]-[4,5]$ represents the subclass
with the permutations $[1,2,3,4,5]$, $[3,1,2,4,5]$, and $[2,3,1,4,5]$.

Let $L_{2n+1}$ be the set of all classes,
and let $\cT=t_{2n-1}^{2n-1}$ be the S-skeleton of the $\mK$-snakes in $L_{2n+1}$.
Note that a $\mK$-snake generated by $\cT$
spans exactly all the permutations in one subclass.
Hence (P3) and (P4) are immediately implied for both classes and subclasses.
Such a $\mK$-snake will be called a \emph{necklace}.
The slight change in the framework is that instead of one $\mK$-snake,
each class contains $\frac{(2n-2)!}{2}$ $\mK$-snakes,
all of which have the same S-skeleton.
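The following Python sketch (ours) illustrates a necklace for $2n+1=5$: the S-skeleton $\cT=t_{2n-1}^{2n-1}$ traverses exactly the $2n-1$ permutations of the subclass $[1,2,3]-[4,5]$ of the example above and closes a cycle.
\begin{verbatim}
def push_to_top(perm, i):
    return [perm[i - 1]] + perm[:i - 1] + perm[i:]

n = 2                                # S_5, subclass [1,2,3]-[4,5]
start = [1, 2, 3, 4, 5]
necklace, p = [], start
for _ in range(2 * n - 1):
    necklace.append(p)
    p = push_to_top(p, 2 * n - 1)    # t_{2n-1} cyclically shifts the
                                     # first 2n-1 elements
assert p == start                    # the S-skeleton closes a cycle
assert necklace == [[1,2,3,4,5], [3,1,2,4,5], [2,3,1,4,5]]
\end{verbatim}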
The necklaces (subclasses) $[\alpha]-[x,y]$ are similar to
necklaces on $2n-1$ elements. Joining the necklaces into
one large $\mK$-snake might be similar to the join of the cycles of
the pure cycling register of order $2n-1$, PCR$_{2n-1}$,
into one cycle, which is also known as a de Bruijn sequence~\cite{EtLe84,Fred82}.
There are two main differences between the two types of necklaces.
The first one is that in de Bruijn sequences
the necklaces do not represent permutations, but words
of a given length over some finite alphabet. The second is
that there is a rather simple mechanism to join all the
necklaces into a de Bruijn sequence.
We would like to have such a mechanism to join
as many necklaces as possible from all the classes into one $\mK$-snake.

Let $T_{2n+1}$ be the nearly spanning tree constructed by Theorem~\ref{thm:treeCompose}.
By repeated application of Lemma~\ref{prop:mergeClasses1},
according to the hyperedges of $T_{2n+1}$, starting from a necklace in the class $[1,2]$,
we obtain a $\mK$-snake which contains exactly one necklace
from each class $[x,y]\ne [2,1]$.
Such a $\mK$-snake will be called a \emph{chain}.
If the chain contains the necklace $[\alpha]-[1,2]$, we will denote it by $c[\alpha]$.
For two permutations $\alpha_1$ and $\alpha_2$ over $[2n+1]\setminus \{1,2\}$
such that $\alpha_1 \simeq \alpha_2$ we have $c[\alpha_1]=c[\alpha_2]$.
Note that there is a unique way to merge the three necklaces which correspond to a hyperedge of $T_{2n+1}$,
and hence there is no ambiguity in $c[\alpha]$ (even though the order of the joins is not unique).
Note also that the transitions sequences of two distinct chains are usually different.
The number of permutations in a chain is $((2n+1)(2n)-1)(2n-1)$.
The following lemma is an immediate consequence of Lemma~\ref{prop:mergeClasses1}.

\begin{lemma}
\label{lem:key1}
Let $[x,y]$, $[y,z]$, and $[z,x]$ be three classes,
and let $\alpha$ be a permutation of $[2n+1]\setminus \{x,y,z\}$.
The necklaces $[\alpha,z]-[x,y]$, $[\alpha,y]-[z,x]$,
and $[\alpha,x]-[y,z]$ can be merged together, where
$\alpha,z$ is the sequence formed by the concatenation of $\alpha$
and $z$.
\end{lemma}

\begin{lemma}
\label{lem:key2}
Let $[x,y]$, $[y,z]$, and $[z,x]$ be three classes.
All the subclasses in these classes can be partitioned into disjoint sets,
where each set contains exactly one necklace
from each of the above three classes.
The necklaces of each set can be merged together into one $\mK$-snake.
\end{lemma}
\begin{proof}
For each permutation $\alpha$ over $[2n+1]\setminus \{x,y,z\}$,
the necklaces $[\alpha,z]-[x,y]$, $[\alpha,y]-[z,x]$,
and $[\alpha,x]-[y,z]$ can be merged by Lemma \ref{lem:key1}.
Thus, all the subclasses in these classes can be partitioned into disjoint sets.
\end{proof}

\begin{cor}
\label{cor:disjointChains}
The permutations of all the classes, except for $[2,1]$,
can be partitioned into disjoint chains.
\end{cor}

By Corollary~\ref{cor:disjointChains} we construct $\frac{(2n-2)!}{2}$
disjoint chains which span $A_{2n+1}$, except for all the even permutations of the class $[2,1]$.
Recall that we have the same number, $\frac{(2n-2)!}{2}$, of $[2,1]$-necklaces, which
span all the permutations of the class~$[2,1]$.
Now, we need a method to merge all these chains and necklaces, except
for one necklace from the class~$[2,1]$, into one $\mK$-snake $C_{2n+1}$.
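A short counting check (ours) confirms this partition: the $\frac{(2n-2)!}{2}$ disjoint chains cover exactly $|A_{2n+1}|-\frac{(2n-1)!}{2}$ permutations, i.e., everything except the class $[2,1]$; for $2n+1=5$ the single chain already consists of the $57$ permutations of the $(5,57,\mK)$-snake.
\begin{verbatim}
from math import factorial

for n in range(2, 12):
    chains = factorial(2 * n - 2) // 2        # number of chains
    chain_len = ((2 * n + 1) * (2 * n) - 1) * (2 * n - 1)
    A = factorial(2 * n + 1) // 2             # |A_{2n+1}|
    class21 = factorial(2 * n - 1) // 2       # size of class [2,1]
    assert chains * chain_len == A - class21

print(((5 * 4) - 1) * 3)                      # 57, the chain in S_5
\end{verbatim}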
Note that for $2n+1=5$ we have only one chain.
Thus, this chain is the final $\mK$-snake $C_5$.
This $\mK$-snake is exactly the same $\mK$-snake as the one generated
by the recursive construction in Section~\ref{sec:recursive}.

\begin{lemma}
\label{lem:chainConnection}
Let $x$ be an integer such that $3\leq x\leq 2n+1$,
let $\alpha$ be a permutation of $[2n+1]\setminus \{x,2,1\}$,
and assume that the permutations $[\alpha,1,x,2]$ and $[\alpha,2,1,x]$ are contained in two distinct chains.
We can merge these two chains via the necklace $[\alpha,x]-[2,1]$.
\end{lemma}
\begin{proof}
Let $c_1$ be the chain which contains the permutation $\pi_1=[\alpha,1,x,2]$,
$c_2$ be the chain which contains the permutation $\pi_2=[\alpha,2,1,x]$, and
$\eta$ be the necklace which contains the permutation $\pi_3=[\alpha,x,2,1]$.
Note that all the chains contain only the p-transitions $t_{2n+1}$ and $t_{2n-1}$.
The permutation $t_{2n+1}(\pi_1)$ appears in $c_2$,
the permutation $t_{2n+1}(\pi_2)$ appears in $\eta$,
and the permutation $t_{2n+1}(\pi_3)$ appears in $c_1$.
Therefore, $\pi_1$, $\pi_2$, and $\pi_3$ are followed
by $t_{2n-1}$ in $c_1$, $c_2$, and $\eta$, respectively.
Let $\sigma_i$, $i\in \{1,2\}$, be a transitions sequence such that $\sigma_i, t_{2n-1}$ is the transitions sequence of $c_i$,
and therefore $t_{2n-1}(\sigma_i(\pi_i))=\pi_i$.
By Lemma \ref{lem:helpLem1} we have $\sigma_1 \leftrightsquigarrow t_{2n-1}^{2n-2} \leftrightsquigarrow \sigma_2$.
Similarly to Lemma~\ref{prop:mergeClasses1},
by replacing the transition $t_{2n-1}$ which follows $\pi_3$ in $\eta$
with $t_{2n+1} \circ \sigma_{1} \circ t_{2n+1} \circ \sigma_2 \circ t_{2n+1}$,
we merge $c_1$, $c_2$, and $\eta$ into one $\mK$-snake.
Thus, we have
\begin{align*}
\pi_3=&[a_1, a_2, \ldots, a_{2n-2},x,2,1] \\
&\downarrow {t_{2n+1}} \\
&[ 1, a_1, a_2, \ldots, a_{2n-2}, x, 2] \\
&\downarrow {\sigma_1 \leftrightsquigarrow t_{2n-1}^{2n-2}} \hphantom{aaaaaaaaaaaa} the\ chain\ c_1 \\
\pi_1=&[ a_1, a_2, \ldots, a_{2n-2}, 1,x,2] \\
&\downarrow {t_{2n+1}} \\
&[ 2, a_1, a_2, \ldots, a_{2n-2},1,x] \\
&\downarrow {\sigma_2 \leftrightsquigarrow t_{2n-1}^{2n-2}} \hphantom{aaaaaaaaaaaa} the\ chain\ c_2 \\
\pi_2=&[ a_1, a_2, \ldots, a_{2n-2}, 2, 1, x] \\
&\downarrow {t_{2n+1}} \hphantom{aaaaaaa} return\ to\ the\ necklace\ \eta \\
t_{2n-1}(\pi_3) =&[x, a_1, a_2, \ldots, a_{2n-2}, 2,1]
\end{align*}
\end{proof}

For each $x$, $3\leq x\leq 2n+1$, and for each permutation $\alpha$ of $[2n+1]\setminus\{x,1,2\}$,
the merging of two distinct chains which contain the permutations
$[\alpha,1,x,2]$ and $[\alpha,2,1,x]$ via the necklace $[\alpha,x]-[2,1]$,
as described in Lemma~\ref{lem:chainConnection}, will be called an $M[x]$-connection.
Note that if $x\in\{3,4,5\}$ then the permutations $[\alpha,1,x,2]$ and $[\alpha,2,1,x]$
are contained in the same chain.
Thus, there are no $M[3]$-connections, $M[4]$-connections, or $M[5]$-connections.

Lemma~\ref{lem:chainConnection} suggests a method
to join all the chains and all the $[2,1]$-necklaces, except one, into
a $\mK$-snake of length $\frac{(2n+1)!}{2}-(2n-1)$.
This should be implemented by $\frac{(2n-2)!}{2}-1$
iterations of the merging suggested by Lemma~\ref{lem:chainConnection}.
The current merging problem is also translated into a 3-graph problem (see Definition~\ref{def:3-graph}).
Let $\hat{H}_{2n+1}=(\hat{V}_{2n+1},\hat{E}_{2n+1})$ be a 3-graph defined as follows.
\begin{align*}
\hat{V}_{2n+1}&=\{c[\alpha]~:~\alpha \text{ is a permutation of } [2n+1]\setminus \{1,2\}\}\\
& \cup \{[\beta]-[2,1]~:\\
& \hphantom{\cup\{\}} \beta \text{ is a permutation of } [2n+1]\setminus \{1,2\}\} \\
\hat{E}_{2n+1}&=\{\{c[\alpha_1], c[\alpha_2], [\beta]-[2,1]\}~:\\
& \hphantom{=\{\}} c[\alpha_1] \text{ and } c[\alpha_2] \text{ can be merged together} \\
& \hphantom{=\{\}} \text{via } [\beta]-[2,1] \text{ by Lemma \ref{lem:chainConnection}}\}.
\end{align*}
The vertices in $\hat{V}_{2n+1}$ are of two types, chains and $[2,1]$-necklaces.
Each $e\in \hat{E}_{2n+1}$ contains three vertices, two chains and one necklace,
which can be merged together by Lemma \ref{lem:chainConnection}.
Therefore, the edge will be signed by $M[x]$ as described before.
Note that $\hat{E}_{2n+1}$ might contain parallel edges with different signs.

Let $\hat{T}_{2n+1}=(V_{\hat{T}_{2n+1}},E_{\hat{T}_{2n+1}})$ be a nearly spanning tree in $\hat{H}_{2n+1}$.
Note that such a nearly spanning tree must contain all the vertices in $\hat{V}_{2n+1}$
except for one $[2,1]$-necklace.
If such a nearly spanning tree exists, then
by Lemma~\ref{lem:chainConnection}
we can merge all the chains via $[2,1]$-necklaces
to form the $\mK$-snake $C_{2n+1}$.
This $\mK$-snake contains all the permutations
of $A_{2n+1}$ except for $2n-1$ permutations
which form one $[2,1]$-necklace.

The joins which are performed are determined by the edges of $\hat{T}_{2n+1}$.
Note that there is a unique way to merge the three vertices
which correspond to a hyperedge of $\hat{T}_{2n+1}$ signed by $M[x]$.
Hence, by using the given spanning trees $T_{2n+1}$ and $\hat{T}_{2n+1}$,
there is no ambiguity in $C_{2n+1}$ (even though the orders of the joins are not unique).
However, different nearly spanning trees
can yield different final $\mK$-snakes.
Note that the $\mK$-snake $C_{2n+1}$ generated by this construction
has only $t_{2n+1}$ and $t_{2n-1}$ p-transitions, where usually $t_{2n-1}$ is used.
The p-transition $t_{2n-1}$ is the only transition in the $\mK$-snakes of the subclasses.
On average, $3$ out of every $4n-2$ sequential p-transitions of $C_{2n+1}$ are the p-transition $t_{2n+1}$.
A similar property exists when a de Bruijn sequence is generated
from the necklaces of the pure cycling register of order~$n$~\cite{EtLe84,Fred82}.

Finding a nearly spanning tree $\hat{T}_{2n+1}$ in general is an open question.
However, we found such trees for $n=3$ and $n=4$.
We believe that a construction similar to the one presented in the sequel for $n=3$ and $n=4$ exists for all $n>4$.

\begin{conjecture}
\label{conj:consB}
For each $n\geq 2$, there exists
a $(2n+1,M_{2n+1},\mK)$-snake,
where $M_{2n+1}=\frac{(2n+1)!}{2}-(2n-1)$,
in which there are only $t_{2n-1}$ and $t_{2n+1}$ p-transitions.
\end{conjecture}


\begin{example}
\label{ex:consB_7}
For $n=3$, a $(7,2515,\mK)$-snake is constructed by using the tree $T_7$ of Example \ref{Exm:T7},
and the tree $\hat{T}_7$ defined below.
$\hat{T}_7$ contains $12$ chains, where each chain contains $41$ necklaces.
It also contains $11$ $[2,1]$-necklaces and $11$ hyperedges.
Denote an edge in $\hat{H}_{7}$ by $(\{ c_i, c_j, \eta_k \}, x)$,
where $M[x]$ is the sign of the edge.
$\hat{T}_{7}$ is defined as follows.
\begin{tabular}{ll}
\underline{The chains in $\hat{T}_{7}$:} & \tabularnewline
$c_1\ =[3, 4, 5, 6, 7]-[1, 2]$, &
$c_2\ =[3, 4, 6, 7, 5]-[1, 2]$, \tabularnewline
$c_3\ =[3, 4, 7, 5, 6]-[1, 2]$, &
$c_4\ =[3, 5, 4, 7, 6]-[1, 2]$, \tabularnewline
$c_5\ =[3, 5, 6, 4, 7]-[1, 2]$, &
$c_6\ =[3, 5, 7, 6, 4]-[1, 2]$, \tabularnewline
$c_7\ =[3, 6, 4, 5, 7]-[1, 2]$, &
$c_8\ =[3, 6, 5, 7, 4]-[1, 2]$, \tabularnewline
$c_9\ =[3, 6, 7, 4, 5]-[1, 2]$, &
$c_{10}=[3, 7, 4, 6, 5]-[1, 2]$,\tabularnewline
$c_{11}=[3, 7, 5, 4, 6]-[1, 2]$,&
$c_{12}=[3, 7, 6, 5, 4]-[1, 2]$.
\end{tabular}
\begin{tabular}{ll}
\underline{The necklaces in $\hat{T}_{7}$:} & \tabularnewline
$\eta_1\ =[3, 4, 5, 7, 6]-[2, 1]$, &
$\eta_2\ =[3, 4, 6, 5, 7]-[2, 1]$, \tabularnewline
$\eta_3\ =[3, 4, 7, 6, 5]-[2, 1]$, &
$\eta_4\ =[3, 5, 4, 6, 7]-[2, 1]$, \tabularnewline
$\eta_5\ =[3, 5, 6, 7, 4]-[2, 1]$, &
$\eta_6\ =[3, 5, 7, 4, 6]-[2, 1]$, \tabularnewline
$\eta_7\ =[3, 6, 4, 7, 5]-[2, 1]$, &
$\eta_8\ =[3, 6, 5, 4, 7]-[2, 1]$, \tabularnewline
$\eta_9\ =[3, 6, 7, 5, 4]-[2, 1]$, &
$\eta_{10}=[3, 7, 4, 5, 6]-[2, 1]$, \tabularnewline
$\eta_{11}=[3, 7, 5, 6, 4]-[2, 1]$.
\end{tabular}
\begin{tabular}{ll}
\underline{The edges in $\hat{T}_{7}$:} & \tabularnewline
$e_1\ =(\{ c_{11}, c_6\hphantom{\scriptsize{i}}, \eta_9\hphantom{\scriptsize{i}} \}, 6)$, &
$e_2\ =(\{ c_6\hphantom{\scriptsize{i}}, c_1\hphantom{\scriptsize{i}}, \eta_2\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_3\ =(\{ c_2\hphantom{\scriptsize{i}}, c_{12}, \eta_{11} \}, 6)$, &
$e_4\ =(\{ c_{12}, c_7\hphantom{\scriptsize{i}}, \eta_4\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_5\ =(\{ c_5\hphantom{\scriptsize{i}}, c_3\hphantom{\scriptsize{i}}, \eta_3\hphantom{\scriptsize{i}} \}, 6)$, &
$e_6\ =(\{ c_3\hphantom{\scriptsize{i}}, c_4\hphantom{\scriptsize{i}}, \eta_7\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_7\ =(\{ c_9\hphantom{\scriptsize{i}}, c_{10}, \eta_{10} \}, 6)$, &
$e_8\ =(\{ c_{10}, c_8\hphantom{\scriptsize{i}}, \eta_5\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_9\ =(\{ c_{12}, c_9\hphantom{\scriptsize{i}}, \eta_8\hphantom{\scriptsize{i}} \}, 7)$, &
$e_{10}=(\{ c_9\hphantom{\scriptsize{i}}, c_3\hphantom{\scriptsize{i}}, \eta_1\hphantom{\scriptsize{i}} \}, 7)$, \tabularnewline
$e_{11}=(\{ c_2\hphantom{\scriptsize{i}}, c_{11}, \eta_6\hphantom{\scriptsize{i}} \}, 7)$.
\end{tabular}


$\hat{H}_{7}$ contains another $[2,1]$-necklace, $\eta_{12}=[3, 7, 6, 4, 5]-[2, 1]$,
and the following additional edges:
\begin{align*}
\begin{tabular}{ll}
$e_{12}=(\{ c_1\hphantom{\scriptsize{i}}, c_{11}, \eta_{12} \}, 6)$, &
$e_{13}=(\{ c_7\hphantom{\scriptsize{i}}, c_2\hphantom{\scriptsize{i}}, \eta_1\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_{14}=(\{ c_4\hphantom{\scriptsize{i}}, c_5\hphantom{\scriptsize{i}}, \eta_8\hphantom{\scriptsize{i}} \}, 6)$, &
$e_{15}=(\{ c_8\hphantom{\scriptsize{i}}, c_9\hphantom{\scriptsize{i}}, \eta_6\hphantom{\scriptsize{i}} \}, 6)$, \tabularnewline
$e_{16}=(\{ c_{10}, c_2\hphantom{\scriptsize{i}}, \eta_2\hphantom{\scriptsize{i}} \}, 7)$, &
$e_{17}=(\{ c_8\hphantom{\scriptsize{i}}, c_1\hphantom{\scriptsize{i}}, \eta_3\hphantom{\scriptsize{i}} \}, 7)$, \tabularnewline
$e_{18}=(\{ c_{11}, c_{10}, \eta_4\hphantom{\scriptsize{i}} \}, 7)$, &
$e_{19}=(\{ c_3\hphantom{\scriptsize{i}}, c_{12}, \eta_5\hphantom{\scriptsize{i}} \}, 7)$, \tabularnewline
$e_{20}=(\{ c_6\hphantom{\scriptsize{i}}, c_7\hphantom{\scriptsize{i}}, \eta_7\hphantom{\scriptsize{i}} \}, 7)$, &
$e_{21}=(\{ c_4\hphantom{\scriptsize{i}}, c_8\hphantom{\scriptsize{i}}, \eta_9\hphantom{\scriptsize{i}} \}, 7)$, \tabularnewline
$e_{22}=(\{ c_1\hphantom{\scriptsize{i}}, c_4\hphantom{\scriptsize{i}}, \eta_{10} \}, 7)$, &
$e_{23}=(\{ c_5\hphantom{\scriptsize{i}}, c_6\hphantom{\scriptsize{i}}, \eta_{11} \}, 7)$, \tabularnewline
$e_{24}=(\{ c_7\hphantom{\scriptsize{i}}, c_5\hphantom{\scriptsize{i}}, \eta_{12} \}, 7)$.
\end{tabular}
\end{align*}
An additional different illustration of $\hat{H}_{7}$ is presented in the sequel (see Example~\ref{ex:g_7}).
\end{example}

For each $n\geq 3$,
let $\cG_{2n+1}=(\cV_{2n+1},\cE_{2n+1})$ be a multi-graph (with parallel edges) with labels and signs on the edges.
The vertices of $\cV_{2n+1}$ represent the $\frac{(2n-2)!}{2}$ chains,
and hence $|\cV_{2n+1}|=\frac{(2n-2)!}{2}$.
There is an edge signed with $M[x]$, where $6\leq x\leq 2n+1$,
between the vertex (chain) $c_1$ and the vertex (chain) $c_2$,
if $c_1$ contains a permutation $[\alpha,2,1,x]$
and $c_2$ contains the permutation $[\alpha,1,x,2]$, where $c_1\ne c_2$.
The label on this edge is the necklace $[\alpha,x]-[2,1]$.
Note that the label on the edge is a necklace
which can merge together the chains of its corresponding endpoints by an $M[x]$-connection.
Note also that the pair $\alpha, x$ might not be unique,
and hence the graph might have parallel edges.
A tree in $\cG_{2n+1}$ which doesn't have two edges with the same label
will be called a \emph{chain tree}.
The following lemma can be easily verified.
\begin{lemma}
There exists a nearly spanning tree in $\hat{H}_{2n+1}$
if and only if there exists a chain tree in $\cG_{2n+1}$.
\end{lemma}

Henceforth, $T_{2n+1}$ will be the nearly spanning tree constructed in Theorem \ref{thm:treeCompose},
and the chains are constructed via~$T_{2n+1}$.
\begin{definition}
Let $\cG_1=(\cV_1, \cE_1)$ and $\cG_2=(\cV_2,\cE_2)$ be two multi-graphs with labels and signs on the edges,
where the set of the labels of $\cG_i$ is denoted by $\cL_i$, $i\in \{1,2\}$.
We say that
$\cG_1$ is isomorphic to $\cG_2$ if there exist two bijective functions $f: \cV_1\to \cV_2$ and $g: \cL_1 \to \cL_2$
with the following property:
$(u,v)\in \cE_1$ with the label $\eta$ and sign $M[x]$, if and only if
$(f(u),f(v)) \in \cE_2$ with the label $g(\eta)$ and sign $M[x]$.
\end{definition}
\begin{definition}
For each $n\geq 4$, a sub-graph of $\cG_{2n+1}$ which is isomorphic to $\cG_{2n-1}$
is called a component of $\cG_{2n+1}$,
and is denoted by $A=(\cV_A,\cL_A)$,
where $\cV_A$ consists of the vertices (chains) of the component and
$\cL_A$ consists of the labels ($[2,1]$-necklaces) on the edges of the component.
Note that $|\cV_A|=|\cL_A|$, i.e., the number of distinct labels is equal to the number of vertices.
\end{definition}
\begin{definition}
Two components, $A=(\cV_A,\cL_A)$ and $B=(\cV_B,\cL_B)$, in $\cG_{2n+1}$ are called disjoint
if $\cV_A\cap \cV_B =\varnothing$ and $\cL_A\cap \cL_B =\varnothing$, i.e., there is no common vertex (chain) or common label ($[2,1]$-necklace) in $A$ and $B$.
\end{definition}
\begin{lemma}
\label{lem:2n+1structure}
For each $n\geq 4$, $\cG_{2n+1}$ consists of $(2n-3)(2n-2)$ disjoint copies
of graphs isomorphic to $\cG_{2n-1}$, called components.
The edges between the vertices of two distinct components are signed
only with $M[2n]$ and $M[2n+1]$.
\end{lemma}
signed\nonly with $M[2n]$ and $M[2n+1]$.\n\\end{lemma}\n\\begin{proof}\nThe $M[x]$-connections are\ndeduced from the tree $T_{2n+1}$, which was used for the construction of the chains.\nIn particular, the path between the vertices $[1,x]$ and $[x,2]$\nin $T_{2n+1}$ determines the $M[x]$-connections in~$\\cG_{2n+1}$.\nBy Theorem~\\ref{thm:treeCompose}, $T_{2n-1}$ is a sub-graph of $T_{2n+1}$.\nTherefore, for each $x$, $x\\geq 3$,\nthe path between the vertices $[1,x]$ and $[x,2]$ in $T_{2n+1}$ is equal to\nthe path between the vertices $[1,x]$ and $[x,2]$ in $T_{2k+1}$ for each ${x\\leq 2k+1\\leq 2n+1}$.\nThe number of the vertices (chains) in $\\cG_{2n+1}$ is equal to $\\frac{(2n-2)!}{2}$,\nand each component contains $\\frac{(2n-4)!}{2}$ vertices.\nThus, $\\cG_{2n+1}$ consists of $(2n-3)(2n-2)$ disjoint copies\nof graphs isomorphic to $\\cG_{2n-1}$ connected by edges signed only with $M[2n]$ and $M[2n+1]$.\n\\end{proof}\n\nFor each $n\\geq 4$, let $\\hat{\\cG}_{2n+1}=(\\hat{\\cV}_{2n+1},\\hat{\\cE}_{2n+1})$ be the \\emph{component graph} of\n$\\cG_{2n+1}$.\nThe vertices of $\\hat{\\cV}_{2n+1}$ represent the components of $\\cG_{2n+1}$.\nThere is an edge signed with $M[x]$, $x\\in \\{2n,2n+1\\}$,\nbetween the vertices (components) $A$ and $B$,\nif the chain that contains the permutation $[\\alpha,2,1,x]$ is contained in $A$,\nand the chain that contains the permutation $[\\alpha,1,x,2]$ is contained in $B$.\nThe label on this edge is the necklace $[\\alpha,x]-[2,1]$.\nWe define $\\hat{\\cG}_{7}$ to be $\\cG_{7}$, i.e., each component of $\\hat{\\cG}_{7}$ consists of exactly one chain\n(and also one distinct $[2,1]$-necklace in order to follow\nthe properties of $\\hat{\\cG}_{2n+1}$).\n\\begin{definition}\nA \\emph{components spanning tree} $\\hat{T}_{2n+1}$\nis a spanning tree in $\\hat{\\cG}_{2n+1}$\nsuch that no two labels on the tree's edges come from the same component, i.e.,\neach label on the tree's edges belongs to a different component.\n\\end{definition}\n\\begin{example}\n\\label{ex:g_7}\n$\\hat{\\cG}_{7}$ is depicted in Figure~\\ref{fig:hatG7}, where the vertex numbers and the edge labels\ncorrespond to the chains and the necklaces in Example \\ref{ex:consB_7}, respectively.\nThe vertical edges are signed with $M[6]$,\nwhile the horizontal edges are signed with $M[7]$.\nThe double-line edges correspond to the edges of $\\hat{T}_7$.\n\\begin{figure}[H]\n\\centering\n\\vspace{-10pt}\n\\begin{tikzpicture} [-,scale=0.6,node distance=3cm,\n thick,main node\/.style={font={\\normalsize}, circle,draw,minimum size=25 pt,scale=0.6}\n ,edge_node\/.style={font={\\normalsize},scale=0.6}\n ,edgeTree_node\/.style={font={\\large},scale=0.6}]\n\n \\node[main node] (1) {1};\n \\node[main node] (11) [below of=1] {11};\n \\node[main node] (6) [below of=11] {6};\n\n \\node[main node] (2) [right of=1] {2};\n \\node[main node] (12) [below of=2] {12};\n \\node[main node] (7) [below of=12] {7};\n\n \\node[main node] (3) [right of=2] {3};\n \\node[main node] (4) [below of=3] {4};\n \\node[main node] (5) [below of=4] {5};\n\n \\node[main node] (8) [right of=3] {8};\n \\node[main node] (9) [below of=8] {9};\n \\node[main node] (10) [below of=9] {10};\n\n \\path[]\n (1) edge node[edge_node] [left, xshift=.1cm] {12} (11)\n\n (7) edge [bend left] node[edge_node] [left,xshift=.1cm] {1} (2)\n\n (4) edge node[edge_node] [left,xshift=.1cm] {8} (5)\n\n (8) edge node[edge_node] [right,xshift=-.1cm] {6} (9);\n\n \\path[]\n \n \n (11) edge[double, thin] 
node[edgeTree_node] [left,xshift=.1cm] {9} (6)\n (6) edge [double, thin,bend left] node[edgeTree_node] [left,xshift=.1cm] {2} (1)\n\n (2) edge[double,thin] node[edgeTree_node] [left,xshift=.1cm, yshift=-.6cm] {11} (12)\n (12) edge[double,thin] node[edgeTree_node] [left,xshift=.1cm] {4} (7)\n\n (5) edge [double,thin,bend left] node[edgeTree_node] [left,xshift=.1cm] {3} (3)\n (3) edge[double,thin] node[edgeTree_node] [left,xshift=.1cm, yshift=-.1cm] {7} (4)\n\n (9) edge[double,thin] node[edgeTree_node] [right,xshift=-.1cm] {10} (10)\n (10) edge [double,thin,bend left] node[edgeTree_node] [right,xshift=-.1cm] {5} (8)\n\n \n (2) edge[double,thin] node[edge_node] [above] {6} (11)\n\n (12) edge[double,thin,bend right] node[edge_node] [above, xshift=.7cm] {8} (9)\n (9) edge[double,thin] node[edge_node] [above, xshift=-0.8cm, yshift=0.8cm] {1} (3);\n\n \\path[]\n (6) edge node[edge_node] [above,yshift=-.1cm] {7} (7)\n (7) edge node[edge_node] [above,yshift=-.1cm] {12} (5)\n (5) edge [bend left] node[edge_node] [above,yshift=-.1cm] {11} (6)\n\n (1) edge node[edge_node] [above, xshift=-2cm, yshift=1cm] {10} (4)\n (4) edge node[edge_node] [right, xshift=0.5cm, yshift=0.9cm] {9} (8)\n (8) edge[bend right=20] node[edge_node] [above] {3} (1)\n\n (11) edge node[edge_node] [above] {4} (10)\n (10) edge [bend right] node[edge_node] [above,xshift=-.4cm,yshift=.4cm] {2} (2)\n\n (3) edge node[edge_node] [above] {5} (12);\n\n\\end{tikzpicture}\n\\vspace{-5pt}\n\\caption{The graph $\\hat{\\cG}_7$ and its components spanning tree $\\hat{T}_7$} \\label{fig:hatG7}\n\\end{figure}\n\\end{example}\n\n\\begin{conjecture}\n\\label{conj:main}\nFor each component $A$ in $\\hat{\\cG}_{2n+1}$, $n\\geq 3$,\nand for each label $\\eta$ of $A$,\nthere exists a components spanning tree\nwith no edge in the tree carrying the label $\\eta$.\n\\end{conjecture}\nConjecture \\ref{conj:main} implies Conjecture \\ref{conj:consB}, i.e.,\n\\begin{theorem}\nIf Conjecture \\ref{conj:main} is true then for each $n\\geq 2$, there exists\na $(2n+1,M_{2n+1},\\mK)$-snake,\nwhere $M_{2n+1}=\\frac{(2n+1)!}{2}-(2n-1)$,\nin which there are only $t_{2n-1}$ and $t_{2n+1}$ p-transitions.\n\\end{theorem}\nConjecture \\ref{conj:main} was verified by computer search for $n=3$ and $n=4$.\nBy using Conjecture \\ref{conj:main} recursively,\nfor each $n\\geq 3$, and for each necklace $\\eta$ in class $[2,1]$,\nwe can construct a chain tree $T$ in $\\cG_{2n+1}$\nwhich does not include $\\eta$ as a label on an edge in $T$.\n\n\\begin{cor}\nThere exist a $(7,2515,\\mK)$-snake and a~$(9,181433,\\mK)$-snake,\nand hence $\\lim\\limits_{n\\to \\infty} \\frac{M_{2n+1}}{S_{2n+1}}\\approx 0.4743$.\n\\end{cor}\n\nNote that the ratio $\\lim\\limits_{n\\to \\infty} \\frac{M_{2n+1}}{S_{2n+1}}$ would be improved\nif there exists a $(2m+1,\\frac{(2m+1)!}{2}-(2m-1),\\mK)$-snake for some $m>4$.\n\n\\begin{conjecture}\nThe $(2n-3)(2n-2)$ components in $\\hat{\\cG}_{2n+1}$\ncan be arranged in a $(2n-3)\\times (2n-2)$ grid.\nThe edges which are signed with $M[2n]$ define $2n-2$ cycles of length $2n-3$.\nEach cycle contains the vertices of exactly one column, and is called an $M[2n]$-cycle.\nThe edges which are signed with $M[2n+1]$ are between two components in different columns,\nand they also define $2n-2$ cycles of length $2n-3$.\nSuch a cycle will be called an $M[2n+1]$-cycle.\nEach multi-edge between two components has $\\frac{(2n-4)!}{2}$ parallel edges (the number of chains in the component).\nParallel edges have the same sign $x$, $x\\in \\{2n,2n+1\\}$,\nbut different 
labels (i.e., $M[x]$-connection, but with different $[2,1]$-necklaces).\n\\end{conjecture}\n\\begin{example}\nAn illustration for the structure of $\\hat{\\cG}_{2n+1}$ for $n=3$ is presented in Example~\\ref{ex:g_7},\nand for $n=4$ is depicted in Figure~\\ref{fig:hatG9}.\nIn $\\hat{\\cG}_9$ there are $30$ components,\nwhere each component is isomorphic to $\\hat{\\cG}_7$\n(thus, it contains $12$ chains and $12$ $[2,1]$-necklaces).\n\\begin{figure}[H]\n\\vspace{-5pt}\n\\begin{tikzpicture} [-, scale=0.5, node distance=1.5 cm,\n thick,main node\/.style={font={\\normalsize},circle,draw,scale=0.5,minimum size=20 pt}\n ,edge node\/.style={font={\\normalsize},scale=0.5}]\n\n \\node[main node] (6) {1};\t\n \\node[main node] (7) [below of=6]{2};\t\n \\node[main node] (8) [below of=7]{3};\t\n \\node[main node] (9) [below of=8]{4};\t\n \\node[main node] (10) [below of=9]{5};\t\n\n \\node[main node] (1)[right= 1.15cm of 6] {6};\t\t\t\t\n \\node[main node] (2) [below of=1]{7};\t \n \\node[main node] (3) [below of=2]{8};\t\t\n \\node[main node] (4) [below of=3]{9};\t\t\n \\node[main node] (5) [below of=4]{10};\t\t\n\n \\node[main node] (26) [right= 1.15cm of 1]{11};\t\n \\node[main node] (27) [below of=26]{12};\t\n \\node[main node] (28) [below of=27]{13};\t\n \\node[main node] (29) [below of=28]{14};\t\n \\node[main node] (30) [below of=29]{15};\t\n\n \\node[main node] (11) [right= 1.15cm of 26]{16};\t\n \\node[main node] (12) [below of=11]{17};\t\n \\node[main node] (13) [below of=12]{18};\t\n \\node[main node] (14) [below of=13]{19};\t\n \\node[main node] (15) [below of=14]{20};\t\n\n \\node[main node] (16) [right= 1.15cm of 11]{21};\t\n \\node[main node] (17) [below of=16]{22};\t\n \\node[main node] (18) [below of=17]{23};\t\n \\node[main node] (19) [below of=18]{24};\t\n \\node[main node] (20) [below of=19]{25};\t\n\n \\node[main node] (21) [right= 1.15cm of 16]{26};\t\n \\node[main node] (22) [below of=21]{27};\t\n \\node[main node] (23) [below of=22]{28};\t\n \\node[main node] (24) [below of=23]{29};\t\n \\node[main node] (25) [below of=24]{30};\t\n\n\n \\path[thin]\n (1) edge (2)\n (2) edge (3)\n (3) edge (4)\n (4) edge (5)\n (5) edge [bend left=20] (1)\n\n\n (6) edge (7)\n (7) edge (8)\n (8) edge (9)\n (9) edge (10)\n (10) edge [bend left=20] (6)\n\n\n (11) edge (12)\n (12) edge (13)\n (13) edge (14)\n (14) edge (15)\n (15) edge [bend left=20] (11)\n\n (16) edge (17)\n (17) edge (18)\n (18) edge (19)\n (19) edge (20)\n (20) edge [bend left=20] (16)\n\n (21) edge (22)\n (22) edge (23)\n (23) edge (24)\n (24) edge (25)\n (25) edge [bend left=20] (21)\n\n\n (26) edge (27)\n (27) edge (28)\n (28) edge (29)\n (29) edge (30)\n (30) edge [bend left=20] (26);\n\n\n\\path[thin]\n (25) edge (28)\n (28) edge[bend left=16] (10)\n (10) edge (3)\n (3) edge (12)\n (12) edge (25)\n\n (15) edge[bend right=16] (23)\n (23) edge[bend left=12, out=60, in=200] (5)\n (5) edge (9)\n (9) edge[bend left=16] (19)\n (19) edge (15)\n\n (1) edge (26)\n (26) edge (11)\n (11) edge (16)\n (16) edge (21)\n (21) edge[bend right=16] (1)\n\n (7) edge (2)\n (2) edge (29)\n (29) edge[bend left=16] (17)\n (17) edge (13)\n (13) edge (7)\n\n (22) edge[bend right=3, out=10, in=200] (8)\n (8) edge (14)\n (14) edge (30)\n (30) edge[bend right=16] (18)\n (18) edge (22)\n\n (24) edge[out=125, in=305] (6)\n (6) edge (27)\n (27) edge (4)\n (4) edge (20)\n (20) edge (24);\n\\end{tikzpicture}\n\\vspace{-30pt}\n\\caption{The graph $\\hat{\\cG}_9$} \\label{fig:hatG9}\n\\end{figure}\n\\end{example}\n\n\n\\section{Conclusions and Future 
Research}\n\\label{sec:conclude}\n\nGray codes for permutations using the operation push-to-the-top\nand the Kendall's $\\tau$-metric were discussed.\nWe have presented a framework for constructing snake-in-the-box codes\nfor $S_n$. The framework yields a recursive construction\nof large snakes. A direct construction to obtain snakes which might\nbe optimal in length was also presented. Several questions arise from our discussion,\nand they are considered for current and future research.\n\\begin{enumerate}\n\\item Complete the direct construction for snakes of length\n$\\frac{(2n+1)!}{2} - 2n+1$ in $S_{2n+1}$.\n\n\\item Can a snake in $S_{2n+1}$ have size larger than\n$\\frac{(2n+1)!}{2} - 2n+1$?\n\n\\item Prove or disprove that the length of the longest snake in $S_{2n}$\nis not longer than the length of the longest snake in~$S_{2n-1}$.\n\n\\item Examine the questions in this paper for the $\\ell_\\infty$ metric.\n\\end{enumerate}\n\n\\begin{center}\n{\\bf Acknowledgment}\n\\end{center}\nThe authors would like to thank the anonymous reviewers for their careful reading of the paper.\nIn particular, one of the reviewers pointed out the good ratio $\\lim\\limits_{n\\to \\infty} \\frac{M_{2n+1}}{S_{2n+1}}\\approx 0.4338$ compared to the one in \\cite{YeSc12}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe quantum Rabi model (QRM) is widely recognized as the simplest and most fundamental model describing quantum light-matter interactions, that is, the interaction between a two-level system and a bosonic field mode. Indeed, it is considered a milestone in the long history of quantum physics \\cite{HR2008, KG2004}. In \\cite{bcbs2016} we can find a recent collection of introductory, survey and original articles from both experimental and theoretical viewpoints, not limited to light-matter interaction but extending to diverse fields of research. For a notable achievement in recent experimental studies, we refer the reader to \\cite{YS2018}.\n\nThe Hamiltonian \\(H_{\\text{\\upshape Rabi}}\\) of the QRM is precisely given by\n\\[\n H_{\\text{\\upshape Rabi}} := \\omega a^{\\dagger}a + \\Delta \\sigma_z + g (a + a^{\\dagger}) \\sigma_x .\n\\]\nHere, \\(a^{\\dagger}\\) and \\(a\\) are the creation and annihilation operators of the single bosonic mode (\\([a,a^{\\dagger}]=1 \\)), $\\sigma_x, \\sigma_z$ are the Pauli matrices (sometimes written as \\(\\sigma_1\\) and \\(\\sigma_3\\), but since there is no risk of confusion with the variable \\(x\\) to appear below in the heat kernel, we use the usual notations), $2\\Delta$ is the energy difference between the two levels, $g$ denotes the coupling strength between the two-level system and the bosonic mode with frequency $\\omega$ (subsequently, we set $\\omega=1$ without loss of generality). The Hamiltonian \\(H_{\\text{\\upshape Rabi}}\\) of the QRM has a $\\Z_2$-symmetry that gives the parity decomposition\n\\[\n H_{\\text{Rabi}} = H_{+} \\oplus H_{-}.\n\\]\nWe note that the integrability of the QRM was established in \\cite{B2011PRL} using the $\\Z_2$-symmetry. \n\nThe aim of the present paper is to provide an explicit formula for the propagator of the Schr\\\"odinger equation for the\nQRM, and the same for the two systems defined by the parity decomposition, using the explicit analytical formula of\nthe corresponding heat kernels \\(K_{\\text{Rabi}}(x,y,t)\\) and \\(K_\\pm(x,y,t, \\Delta)\\) obtained\nin \\cite{RW2020hk}. 
The explicit formula may be used for precise computation of the time evolution of quantum states without the\nnumerical drawbacks appearing in prior methods (see e.g. \\cite{WKB2012}).\n\nWe recall that the propagator is the integral kernel associated to the Schr\\\"odinger equation\ncorresponding to $H_{\\text{\\upshape Rabi}}$. Precisely, it is a (two-by-two matrix valued) function $U_{\\text{Rabi}}(x,y,t)$ satisfying\n\\begin{align*}\n& i \\frac{\\partial}{\\partial t} U_{\\text{Rabi}}(x,y,t) = H_{\\text{\\upshape Rabi}} U_{\\text{Rabi}}(x,y,t) \\quad \\text{for all}\\quad t>0, \\\\ \n& \\lim_{t \\to 0}U_{\\text{Rabi}}(x,y,t)=\\delta_x(y) {\\bf I}_2 \\quad \\text{for}\\quad x,y \\in \\R.\n\\end{align*}\nIn other words, we have the expression as a time evolution operator \n\\begin{align*}\nU_{\\text{Rabi}}(x,y,t)=\\langle x\\,|\\,e^{-itH_{\\text{Rabi}}} \\,|\\,y\\rangle (= K_{\\text{Rabi}}(x,y,it)) \\quad \\text{for} \\quad t>0,\n\\end{align*}\nwhenever the Wick rotation, the analytic continuation of the heat kernel $K_{\\text{Rabi}}(x,y,t)$ from real to imaginary time,\nexists (possibly with singularities). \n\nLet us mention works related to the computation of the propagator or heat kernel of the QRM.\nIn \\cite{ZZ1988}, an approximate formula for the propagator was given using path-integral techniques. For the Spin-Boson model, and the QRM as a special case, the Feynman-Kac formula for the heat kernel was obtained in \\cite{HH2012,HHL2012} via a Poisson point process and a Euclidean field. However, for the study of the long-time behavior of the system, the use of numerical computations or approximations is inevitable (see e.g. \\cite{LF2011, CPMP}). Moreover, recently in \\cite{DSGKSN2017}, it was shown that even approximate forms obtained by a perturbative approach can provide significant insight. In fact, it was shown in \\cite{DSGKSN2017} that a perturbative diagrammatic approach provides a direct visualization of virtual and physical photons in the physical process using the Jaynes-Cummings (the RWA of the QRM) propagator.\n\nOur second theme is the study of the spectral (functional) determinant, that is, the zeta regularized product of the spectrum for $H_{\\pm}$. The spectral determinant of an operator is a function that has zeros at the eigenvalues; that is, it is the generalization of the characteristic polynomial of a finite matrix. It describes various important topological and\/or number theoretical invariants (see e.g. \\cite{RS1974, QHS1993TAMS}). In \\cite{KRW2017}, a significant relation was found between Braak's $G$-function, a transcendental function given by a power series that was used to prove the integrability of the QRM \\cite{B2011PRL}, and the spectral zeta function of the QRM \\cite{Sugi2016} (the Mellin transform of the partition function). Actually, the $G$-function is (up to a non-vanishing function) equal to the spectral determinant\nof \\(H_{\\text{\\upshape Rabi}}\\), that is, the zeta-regularized product associated to the spectral zeta function of the QRM.\nThis result is significant because, on the one hand, the $G$-function is defined through the solutions of a system of ordinary differential equations (equivalent to the confluent Heun ODE picture of the QRM) that assures the existence of the entire solutions (see Appendix \\ref{sec:bargm-space-confl}) and, on the other hand, the spectral determinant arises from the linear term of the Taylor expansion of the spectral zeta function at the origin. 
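To spell out the last point: formally, term-by-term differentiation of the Dirichlet series defining the spectral zeta function gives\n\\[\n -\\frac{d}{ds}\\Big|_{s=0} \\sum_{j} (\\lambda_j+\\tau)^{-s} = \\sum_{j} \\log(\\lambda_j+\\tau),\n\\]\nso that $\\exp(-\\zeta'(0;\\tau))$ may be regarded as a regularization of the (divergent) product $\\prod_{j}(\\lambda_j+\\tau)$ over the eigenvalues $\\lambda_j$; the precise definitions are recalled in \\S\\ref{sec:spectral-determinant}.\n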
The identification of the $G$-function with the spectral determinant suggests a deeper relation between the exact solvability of a quantum interaction system and the meromorphic continuation of its spectral zeta function.\n\nWe extend this result to the case of the Hamiltonians $H_{\\pm}$ of the parity decomposition of the QRM. In order to accomplish this, we show the meromorphic continuation of the spectral zeta functions of the Hamiltonians $H_{\\pm}$. \nIn addition, by using the explicit formula of the partition function $Z_{\\text{Rabi}}(t)$ of the QRM\n we obtain a contour integral representation of the spectral zeta function $\\zeta_{\\text{QRM}}(s; \\tau)$\n (Theorem \\ref{IntRep_SZF}). We leave the proof of the integral representation to Appendix \\ref{sec:proofmero}. There, we also discuss certain interesting properties of the spectral zeta function that are consequences of the contour integral expression.\n\nThe common tool behind the two results in this paper, that is, the propagator formula and the identification of the spectral determinant with the $G$-function, is the meromorphic continuation to the complex plane of the heat kernel and the partition function, which is made possible by precise estimates derived from the analytical formulas obtained in \\cite{RW2020hk}.\n\n\\section{Revisited: the discrete path integral for the heat kernel of QRM} \\label{sec:limit}\n\nIn this section we recall the formulas for the heat kernel and partition function of\nthe QRM obtained in \\cite{RW2020hk}. \nAs an introduction for the reader and to complement the discussion of the aforementioned paper, we give a brief overview of the method of computation for the heat kernel, which we refer to here as the method of discrete paths. We also provide a new interpretation of the resulting expression of the heat kernel using representation theory.\n\n\\subsubsection{Trotter-Kato's product formula}\n\nThe Hamiltonian of the QRM is given by\n\\[\n H_{\\text{\\upshape Rabi}} := a^{\\dagger}a + \\Delta \\sigma_z + g (a + a^{\\dagger}) \\sigma_x,\n\\]\nand it is easy to see that it can be written as\n\\begin{align*}\n H_{\\text{\\upshape Rabi}} = b^{\\dagger} b - g^2 + \\Delta \\sigma_z,\n\\end{align*}\nwhere $b = a + g \\sigma_x$ and $b^{\\dagger} = a^\\dag + g \\sigma_x $ are the annihilation and creation operators of\na non-commutative version of the quantum harmonic oscillator (i.e. $[b,b^{\\dagger}] = \\bm{I}_2$).\n\nBy the Trotter-Kato product formula (see e.g. \\cite{Calin2011}), the heat semigroup is given by\n\\[\n e^{- t H_{\\text{\\upshape Rabi}}} = e^{- t (b^{\\dagger}b -g^2 + \\Delta \\sigma_z)} = \\lim_{N\\to \\infty} (e^{-t (b^{\\dagger}b -g^2)\/N} e^{-t(\\Delta \\sigma_z)\/N})^N,\n\\]\nwith convergence in the strong operator topology, and the heat kernel $K_{\\text{Rabi}}(x,y,t)$ is obtained\nfrom this formula. In \\S\\ref{sec:Propagator}, the propagator $U_{\\text{Rabi}}(x,y,t)$ is obtained from the heat kernel $K_{\\text{Rabi}}(x,y,t)$ by\nthe Wick rotation $t \\to i t$. \n\nThe first step for the computation of the heat kernel is to obtain the explicit form of the integral kernel $K^{(N)}(x,y,t)$ of\n$(e^{-t (b^{\\dagger}b -g^2)\/N} e^{-t(\\Delta \\sigma_z)\/N})^N$. \nSince the kernel $K^{(N)}(x,y,t)$ is a two-by-two matrix-valued function, we can write it\nin terms of a scalar part and a non-commutative (matrix) part in a reasonable way. 
More precisely, we write \n\\begin{align}\n \\label{eq:sumGI}\n K^{(N)}(x,y,t) = \\sum_{\\mathbf{s} \\in \\btZ{N}} G_N(u,\\Delta,\\mathbf{s}) I_N(x,y,u, \\, \\mathbf{s}),\n\\end{align}\nwhere $G_N(u,\\Delta,\\mathbf{s})$ is a matrix-valued function and the scalars $I_N(x,y,u, \\, \\mathbf{s})$ correspond to the evaluation of multivariate Gaussian integrals. The matrices appearing in the Hamiltonian $H_{\\text{\\upshape Rabi}}$ give rise to the finite group \\(\\Z_2^N\\) structure of equation \\eqref{eq:sumGI}.\nAn important observation is that the $\\Z_2^{N}$-group structure can be interpreted in terms of finite paths as in Figure \\ref{fig:paths}; we refer to Section 4.2 of \\cite{RW2020hk} for more details. \n\nBy taking the limit, we see that the components of the heat kernel are, up to a Mehler-type factor inherited from the\nquantum harmonic oscillator, given by\n\\begin{align}\n \\label{eq:limit0}\n & \\lim_{N \\to \\infty}\\left( \\frac{1-u^{\\frac{2\\Delta}N}}{2 u^{\\frac{\\Delta}N}} \\right) \\sum_{k \\geq 3}^N \\left( \\frac{1+u^{\\frac{2\\Delta}N}}{2 u^{\\frac{\\Delta}N}} \\right)^{N-k} J_\\eta^{(k,N)}(x,y,u^{\\frac{1}N},g) \\sum_{ \\mathbf{s} \\in \\Z_2^{k-3}} g^{(\\alpha,\\eta)}_{k-1}(u^{\\frac1N},\\mathbf{s}) R^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\mathbf{s}),\n\\end{align}\nwith $\\alpha,\\eta \\in \\{0,1\\}$ and where we omitted some trivial terms for clarity (see equation (33) in \\cite{RW2020hk}). This form\nof the heat kernel suggests a possible evaluation as a type of Riemann integral. However, the presence of multiple changes of\nsign depending on $\\mathbf{s} \\in \\Z_2^{k-3}$ and $k \\geq 3$ in the term $g^{(\\alpha,\\eta)}_{k-1}(u^{\\frac1N},\\mathbf{s}) R^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\mathbf{s})$ makes such an approach infeasible.\n\n\\subsubsection{Fourier transform on $\\Z_2^n \\, (n\\geq0)$}\n\nAt this point the method described in \\cite{RW2020hk} greatly differs from the usual evaluation by path integrals.\nLet us denote by $|\\rho|$ the norm (or length) in $\\Z_2^{N}$, that is,\n$ |\\rho| = \\sum \\rho_i $ for $\\rho=(\\rho_1,\\rho_2,\\cdots,\\rho_{N})\\in \\Z_2^{N}$.\nBy using Fourier analysis, and more concretely Parseval's identity, on the group algebra \\(\\C[\\Z_2^{k-3}]\\), we see that\n\\[\n \\sum_{ \\mathbf{s} \\in \\Z_2^{k-3}} g^{(\\alpha,\\eta)}_{k-1}(u^{\\frac1N},\\mathbf{s}) R^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\mathbf{s}) = \\sum_{ \\rho \\in \\Z_2^{k-3}} \\hat{g}^{(\\alpha,\\eta)}_{k-1}(u^{\\frac1N},\\rho) \\hat{R}^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\rho).\n\\]\nThen, by fixing $|\\rho|=\\lambda \\in \\Z_{\\geq0}$, we observe that on the right-hand side of the above equation the function $\\hat{R}^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\rho)$ is given as the exponential of certain $q$-polynomials and that these $q$-polynomials do not depend on $k$. This process resembles the separation of a function defined on $\\R^{n}$ into radial and non-radial parts. We remark, however, that the function $\\hat{R}^{(\\alpha,\\eta)}_\\eta(u^{\\frac{1}N},\\rho)$ is not radial; it is only determined (as a $q$-polynomial) by $|\\rho|$. 
In fact, it turns out that it is actually determined by the $\\mathfrak{S}_\\infty$-orbit of $\\rho$ for a certain action on $\\Z_2^{\\infty}$.\n\n\\subsubsection{$\\mathfrak{S}_\\infty$-action on $\\Z_2^{\\infty}$ (Geometric)}\n\\label{sec:mathfr-acti-z_2infty}\n\nThe infinite sum in the limit appearing in the heat kernel is identified with the inductive limit\n\\[\n \\Z_2^{\\infty} = \\varinjlim_{n} \\Z_2^{n},\n\\]\nwhere, for $ i \\leq j$, the injective homomorphisms $f_{i j} : \\Z_2^i \\to \\Z_2^j$ are given by\n\\[\n f_{i j}(\\rho) = (\\rho_1,\\rho_2,\\cdots,\\rho_i,0,\\cdots,0) \\in \\Z_2^{j}\n\\]\nfor $\\rho = (\\rho_1,\\rho_2,\\cdots,\\rho_i) \\in \\Z_{2}^{i}$, that is, the natural group embeddings. As usual, we consider the inductive limit $\\Z_2^{\\infty}$ to\nbe equipped with the discrete topology.\n\nLet us also consider the infinite symmetric group $\\mathfrak{S}_\\infty$ obtained by the inductive limit of the finite symmetric groups $\\mathfrak{S}_n$, where the inductive homomorphisms are also given by the natural group embeddings. Then $\\mathfrak{S}_\\infty$ acts naturally on $\\Z_2^{\\infty}$ and the orbits are given by\n\\[\n \\mathcal{O}_\\lambda := \\left\\{ \\sigma \\in \\Z_2^{\\infty} \\, : \\, |\\sigma|= \\lambda \\right\\}\n\\]\nfor $\\lambda\\geq0$, where the function $|\\cdot|$ is induced by the norms on each group $\\Z_2^{n}\\, (n\\geq0)$. Equivalently, we have $ \\mathcal{O}_{\\lambda} = \\mathfrak{S}_\\infty . \\sigma$ for any $\\sigma \\in \\Z_2^{\\infty} $ with $|\\sigma|=\\lambda$, $\\lambda \\in \\Z_{\\geq 0}$.\n\nIn other words, $|\\cdot|$ is an orbit invariant for the action and we get an orbit decomposition of $\\Z_2^{\\infty}$ by\n\\begin{equation*}\n \\Z_2^{\\infty} = \\bigsqcup_{n=0}^{\\infty} \\mathfrak{S}_{\\infty} . [\\underbrace{1,1,\\cdots,1}_{n}] \\xleftrightarrow{ \\text{labeled by $|\\cdot|$}} \\Z_{\\geq 0},\n\\end{equation*}\nwhere \n\\[\n [\\underbrace{1,1,\\cdots,1}_{n}] := (\\underbrace{1,1,\\cdots,1}_{n},0,0,\\cdots) \\in \\Z_2^{\\infty}\n\\]\nis the image of $(1,1,\\cdots,1) \\in \\Z_2^{n}$ in $\\Z_2^{\\infty}$. In Figure \\ref{fig:paths} we give an example of an element in $\\Z_2^{\\infty}$ and its corresponding canonical coset representative. 
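The orbit decomposition above may be illustrated by a short computation (a sketch of ours, not taken from \\cite{RW2020hk}; the choice $k=9$ is arbitrary): grouping the elements of $\\Z_2^{k}$, regarded as a subgroup of $\\Z_2^{\\infty}$, by the invariant $|\\cdot|$ recovers the sets $\\mathcal{O}_\\lambda \\cap \\Z_2^{k}$, each of which has $\\binom{k}{\\lambda}$ elements and contains the canonical representative described above.\n\\begin{verbatim}\nfrom itertools import product\nfrom math import comb\n\nk = 9\norbits = {}\nfor rho in product((0, 1), repeat=k):      # elements of Z_2^k\n    orbits.setdefault(sum(rho), []).append(rho)\n\nfor lam, elems in sorted(orbits.items()):\n    assert len(elems) == comb(k, lam)      # size of O_lambda cap Z_2^k\n    canonical = (1,) * lam + (0,) * (k - lam)\n    assert canonical in elems              # canonical orbit representative\nprint(sorted((lam, len(e)) for lam, e in orbits.items()))\n\\end{verbatim}\n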
We note that for the orbit $\\mathcal{O}_\\lambda$, the value $\\lambda$ of the invariant coincides with the length of the canonical element $(1,1,\\cdots,1)\\in \\Z_2^\\lambda$ described above.\n\n\\begin{figure}[!ht]\n \\centering\n \\begin{tikzpicture}[domain=0:4]\n\n \n\n \n \\draw[step=1,very thin,color=gray] (0.6,2) grid (9,3);\n\n \n \\draw[thick] (0.6,2) -- (1,2) ;\n \\draw[thick] (1,2) -- (9.1,2) \n node[pos=1,below] {$9$}\n node[pos=0,below]{1} node[pos=0.125,below]{2} node[pos=0.25,below]{3}\n node[pos=0.375,below]{4} node[pos=0.5,below]{5} node[pos=0.625,below]{6}\n node[pos=0.750,below]{7} node[pos=0.875,below]{8};\n\n \n \n \\draw[thick,dashed] (9.1,3) -- (11,3) ;\n \\draw[thick,dashed] (9.1,2) -- (11,2) ;\n\n \n \\draw[thick] (0.6,1.8) node[below] {$0$} -- (0.6,3.2) node[above] {$1$};\n\n \\draw[color=blue,very thick]\n (0.6,3) -- (1,3) to[out=0,in=180] (2,2) to[out=0,in=180] (3,3) to[out=0,in=180]\n (4,3) to[out=0,in=180] (5,3) to[out=0,in=180] (6,2) to[out=0,in=180] (7,3)\n to[out=0,in=180] (8,3) to[out=0,in=180] (9,2) -- (11,2);\n\n \n\n \\node at (1.5,0.5) {$\\equiv$};\n\n \n \n \n \\draw[step=1,very thin,color=gray] (2.6,0) grid (11,1);\n\n \n \\draw[thick] (2.6,0) -- (3,0) ;\n \\draw[thick] (3,0) -- (11.1,0) \n node[pos=1,below] {$9$}\n node[pos=0,below]{1} node[pos=0.125,below]{2} node[pos=0.25,below]{3}\n node[pos=0.375,below]{4} node[pos=0.5,below]{5} node[pos=0.625,below]{6}\n node[pos=0.750,below]{7} node[pos=0.875,below]{8};\n\n \n \n \\draw[thick,dashed] (11.1,1) -- (13,1) ;\n \\draw[thick,dashed] (11.1,0) -- (13,0) ;\n\n \n \\draw[thick] (2.6,-0.2) node[below] {$0$} -- (2.6,1.2) node[above] {$1$};\n\n \\draw[color=orange,very thick]\n (2.6,1) -- (3,1) to[out=0,in=180] (4,1) to[out=0,in=180] (5,1) to[out=0,in=180]\n (6,1) to[out=0,in=180] (7,1) to[out=0,in=180] (8,1) to[out=0,in=180] (9,0)\n to[out=0,in=180] (10,0) to[out=0,in=180] (11,0) -- (13,0);\n\n \n\n \\node at (14.2,0.5) {$\\pmod{\\mathfrak{S}_{\\infty}}$};\n \n \\end{tikzpicture}\n \\caption{A path in $\\Z_2^{9} \\subset \\Z_2^{\\infty} $ (above) and the corresponding canonical $\\mathfrak{S}_{\\infty}$-orbit representative in $\\Z_2^{\\infty}$ (below) of the orbit $\\mathcal{O}_6$.}\n \\label{fig:paths}\n\\end{figure}\n\nThus, using this idea, we rearrange the limit for the main body of heat kernel in the following way\n\\begin{equation}\n \\label{eq:limit1}\n \\sum_{\\lambda=0}^{\\infty} \\lim_{N \\to \\infty} \\left( \\frac{1-u^{\\frac{2\\Delta}N}}{2 u^{\\frac{\\Delta}N}} \\right) \\sum_{k \\geq \\lambda}^N \\left( \\frac{1+u^{\\frac{2\\Delta}N}}{2 u^{\\frac{\\Delta}N}} \\right)^{N-k} h_\\eta^{(k,N)}(x,y,u^{\\frac{1}N},g) \\int_{\\mathcal{O}_\\lambda^{k}} f^{(\\alpha,\\eta)}_{k-1}(u^{\\frac{1}N},\\mu) d\\mu_{\\lambda},\n\\end{equation}\nfor certain functions $h_\\eta^{(k,N)}(x,y,u^{\\frac{1}N},g)$ and $f^{(\\alpha,\\eta)}_{k-1}(u^{\\frac{1}N},\\mu) d\\mu_{\\lambda}$ and where the innermost integral is the orbital integral of the action and $\\mathcal{O}_\\lambda^{k} := \\mathcal{O}_\\lambda \\cap \\Z_2^k$ (here, $\\Z_2^k$ is regarded as a subgroup of $\\Z_2^{\\infty}$).\nLet us describe the orbital integral. 
First, notice that when we fix $|\\rho| = \\lambda$, the elements of $\\mathcal{O}_\\lambda^{k}$ are determined by the positions of the ones; in other words, there is a bijection\n\\[\n \\left\\{ \\rho \\in \\Z_2^k \\, : \\, |\\rho|= \\lambda \\right\\} \\longleftrightarrow \\left\\{ (j_1,j_2,\\cdots,j_\\lambda) \\in \\Z_{\\geq1}^{\\lambda} \\,;\\, j_1 < j_2 < \\cdots < j_\\lambda \\leq k \\right\\}\n\\]\nobtained by recording the positions of the ones of $\\rho$. Under this identification, the orbital integral appearing in \\eqref{eq:limit1} becomes a sum over the positions $(j_1,j_2,\\cdots,j_\\lambda)$ weighted by a probability measure on the orbit; for the remaining steps of the computation we refer to Section 4 of \\cite{RW2020hk}. The overall strategy is summarized in Figure \\ref{fig:paths2}.\n\n\\begin{figure}[!ht]\n \\centering\n \\begin{tikzpicture}[-,thick]\n % only the surviving labels of the original schematic are kept here\n \\node at (6.7,2.5) { {\\small$ \\{\\text{paths}\\}\/\\sim = \\Z_2^{\\infty} \\; \\bm{?}$}};\n \\node[fill=green!20,draw] at (1.4,0.5) { {\\tiny \\begin{tabular}{c} Gaussian integrations \\\\ + \\\\ Fourier analysis on $\\Z_2^{n} (n\\geq 0)$ \\\\ (dual space picture) \\end{tabular}}};\n \\node[fill=green!20,draw] at (12.1,0.5) { {\\tiny \\begin{tabular}{c} $\\mathfrak{S}_{\\infty}$-orbital integrals \\\\ with probability measure \\\\ (asymptotic combinatorics) \\end{tabular}}};\n \\draw[->,dashed,thick] (3.6,0.5) -- (10.2,0.5);\n \\end{tikzpicture}\n \\caption{Schematic picture of computation of the heat kernel for the QRM}\n \\label{fig:paths2}\n\\end{figure}\n\n\\begin{rem}\nWe remark that since the $\\Z_2^{\\infty}$-group structure is inherent to the Hamiltonian $H_{\\text{\\upshape Rabi}}$, the conjectural equivalence between the space of paths and $\\Z_2^{\\infty}$ indicated in Figure \\ref{fig:paths2} may be determined only up to the given quantum system. It is difficult to expect a unique relation of this type for path integrals in general.\n\\end{rem}\n\n\\begin{rem}\n The $\\Z_2^{\\infty}$-group structure appearing in the computation of the heat kernel is unrelated to \n the $\\Z_2$-parity of the QRM. In fact, it is not difficult to note that a similar $\\Z_2^{\\infty}$ structure also\n appears in the case of the asymmetric quantum Rabi model\n \\[\n H_{\\text{\\upshape Rabi}}^{\\varepsilon} := a^{\\dagger}a + \\Delta \\sigma_z + g (a + a^{\\dagger}) \\sigma_x + \\varepsilon \\sigma_x,\n \\]\n even though a $\\Z_2$-parity is known only for the case $\\varepsilon = 0$ (the QRM case).\n\\end{rem}\n\n\n\n\\subsection{Explicit formulas for heat kernel and partition function}\n\\label{sec:expl-form-heat}\n\n\n\\subsubsection{Heat kernel}\n\\label{sec:heat-kernel}\n\nIn the expressions for the heat kernel and the partition function, the\nintegral over the $\\lambda$-th simplex for \\(\\lambda = 0 \\) is used with the meaning\n\\[\n \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} f(x) d \\bm{\\mu_0} = f(x),\n\\]\nfor any function \\(f(x)\\).\n\n\\begin{thm}[Thm 4.2 of \\cite{RW2020hk}]\n The heat kernel $K_{\\text{Rabi}}(x,y,t)$ of the QRM is given by the uniformly convergent series\n \\begin{align*} \n &K_{\\text{Rabi}} (x,y,t) = K_0(x,y,g,t) \\Bigg[ \\sum_{\\lambda=0}^{\\infty} (t\\Delta)^{\\lambda} e^{-2g^2 (\\coth(\\tfrac{t}2))^{(-1)^\\lambda}}\n \\\\\n &\\qquad\\times \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} e^{4g^2 \\frac{\\cosh(t(1-\\mu_\\lambda))}{\\sinh(t)}(\\frac{1+(-1)^\\lambda}{2}) + \\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)} \n \\begin{bmatrix}\n (-1)^{\\lambda} \\cosh & (-1)^{\\lambda+1} \\sinh \\\\\n -\\sinh & \\cosh\n \\end{bmatrix}\n \\left( \\theta_{\\lambda}(x,y,\\bm{\\mu_{\\lambda}},t) \\right) d \\bm{\\mu_{\\lambda}} \\Bigg],\n \\end{align*}\n with \\(\\bm{\\mu_0} := 0\\) and \\(\\bm{\\mu_{\\lambda}}= (\\mu_1,\\mu_2,\\cdots,\\mu_\\lambda)\\) and \\(d \\bm{\\mu_{\\lambda}} = d \\mu_1 d \\mu_2 \\cdots d 
\\mu_{\\lambda} \\)\n for \\(\\lambda \\geq 1\\).\n\n Here, $K_0(x,y,g,t)$ is given by\n \\begin{align*}\n K_0(x,y,g,t)\n & := \\frac{e^{t(g^2+\\tfrac12)}}{\\sqrt{2\\pi \\sinh(t)}} \\exp\\left( -\\frac{(x^2 + y^2) \\cosh(t) - 2x y}{2\\sinh(t)} \\right)\n \\end{align*}\n and the functions \\(\\theta_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t)\\) and $\\xi_\\lambda(\\bm{\\mu_{\\lambda}},t)$ are given by\n \\begin{align*}\n \\theta_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t) &:= \\frac{2\\sqrt{2} g}{\\sinh(t)}\\left( x \\cosh(t) - y \\right) \\left( \\frac{1-(-1)^{\\lambda}}{2} \\right) - \\sqrt{2} g (x-y) \\coth(\\tfrac{t}2) \\\\\n & \\quad + \\frac{2\\sqrt{2} g (-1)^{\\lambda} }{\\sinh(t)} \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\Big[ x \\cosh(t(1 - \\mu_{\\gamma})) - y \\cosh(t \\mu_{\\gamma}) \\Big] \\nonumber \\\\\n \\xi_\\lambda(\\bm{\\mu_{\\lambda}},t) &:= -\\frac{8g^2 }{\\sinh(t)} \\left(\\sinh(\\tfrac12t(1-\\mu_\\lambda))\\right)^2 (-1)^{\\lambda} \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\cosh( t \\mu_{\\gamma}) \\nonumber \\\\\n &\\qquad - \\frac{4 g^2 }{\\sinh(t)} \\sum_{\\substack{0\\leq\\alpha<\\beta\\leq \\lambda-1\\\\ \\beta - \\alpha \\equiv 1 \\pmod{2} }} \\left( \\cosh(t(\\mu_{\\beta+1}-1))-\\cosh(t(\\mu_{\\beta}-1)) \\right) \\nonumber \\\\\n &\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\times ( \\cosh(t \\mu_{\\alpha}) - \\cosh(t \\mu_{\\alpha+1})), \\nonumber \n \\end{align*}\nwhere we use the convention \\( \\mu_0 = 0 \\) whenever it appears in the formulas above.\n\\end{thm}\n\nAs mentioned in the Introduction, the Hamiltonian of the QRM has a parity decomposition \\(H_{\\text{\\upshape Rabi}} = H_{+}\\oplus H_{-} \\).\nThe formula for the heat kernel $K_{\\pm}$ of $H_{\\pm}$ can be obtained directly from the analytical formula of the heat kernel\nof the QRM; this is the method used in \\cite{RW2020hk}. A direct computation of the heat kernel for the Hamiltonians $H_{\\pm}$ is also possible by using the method described in \\S\\ref{sec:limit} but appears to be more complicated. 
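Although we do not use it in the sequel, the analytical formula can be checked numerically. The following sketch (ours, not part of \\cite{RW2020hk}; the truncation level $N$ and the values of $g$, $\\Delta$ and $t$ are arbitrary choices) approximates the matrix-valued kernel $\\langle x | e^{-t H_{\\text{\\upshape Rabi}}} | y \\rangle$ by truncating the boson Fock space at $N$ levels and expanding in Hermite functions; its output can be compared with a truncation of the series above (and of the parity formulas below) at small $\\lambda$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\nfrom scipy.special import eval_hermite\nfrom math import factorial, pi, sqrt\n\nN, g, Delta, t = 60, 0.7, 0.4, 1.0\na = np.diag(np.sqrt(np.arange(1, N)), 1)   # truncated annihilation operator\nsx = np.array([[0., 1.], [1., 0.]])\nsz = np.array([[1., 0.], [0., -1.]])\nH = (np.kron(a.T @ a, np.eye(2)) + Delta * np.kron(np.eye(N), sz)\n     + g * np.kron(a + a.T, sx))           # H_Rabi in the basis boson x spin\n\ndef psi(n, x):\n    # Hermite function: (2^n n! sqrt(pi))^(-1\/2) H_n(x) exp(-x^2\/2)\n    return (eval_hermite(n, x) * np.exp(-x * x \/ 2)\n            \/ sqrt(2.0 ** n * factorial(n) * sqrt(pi)))\n\ndef kernel(x, y, t):\n    # 2x2 matrix: sum over m, n of psi_m(x) psi_n(y) <m| exp(-tH) |n>\n    E = expm(-t * H).reshape(N, 2, N, 2)\n    px = np.array([psi(m, x) for m in range(N)])\n    py = np.array([psi(n, y) for n in range(N)])\n    return np.einsum('m,msnr,n->sr', px, E, py)\n\nprint(kernel(0.3, -0.2, t))\n\\end{verbatim}\n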
A brief overview of the decomposition in the Bargmann space and the $G$-functions for the QRM can be found in Appendix \\ref{sec:bargm-space-confl}.\n\n\\begin{thm}[Thm 4.4 of \\cite{RW2020hk}]\n The heat kernel $K_{\\pm}(x,y,t, \\Delta)$ of $H_\\pm= H_{\\text{\\upshape Rabi}} |_{\\mathcal{H}_\\pm}$ is given by \n \\begin{align*}\n K_{\\pm}(x,y,t, \\Delta)\n = & K_0(x,y,g,t)\\sum_{\\lambda=0}^{\\infty} (t\\Delta)^{2\\lambda} \\Phi^-_{2\\lambda}(x,y,t) \\mp K_0(x,-y,g,t) \\sum_{\\lambda=0}^{\\infty}\n (t\\Delta)^{2\\lambda+1} \\Phi^+_{2\\lambda+1}(x,-y,t),\n \\end{align*}\n where for $\\lambda\\geq1$, the function $\\Phi^\\pm_{\\lambda}(x,y,t)$ is given by\n \\begin{align*}\n \\Phi^\\pm_{\\lambda}(x,y,t) := e^{-2g^2 (\\coth(\\tfrac{t}2))^{(-1)^\\lambda}} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_{\\lambda} \\leq 1} e^{4g^2 \\frac{\\cosh(t(1-\\mu_\\lambda))}{\\sinh(t)}(\\frac{1+(-1)^\\lambda}{2}) + \\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)\\pm \\theta_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t)} d \\bm{\\mu_{\\lambda}}\n \\end{align*}\n and \n \\begin{align*}\n \\Phi^\\pm_0(x,y,t) := e^{-2g^2\\tanh\\big(\\frac{t}2\\big) \\pm\\sqrt2 g(x+y)\\tanh\\big(\\frac{t}2\\big)}.\n \\end{align*}\n\\end{thm}\n\nIt is not difficult to verify that, in fact, we have\n\\[\n K_{\\text{Rabi}}(x,y,t, \\Delta)= K_{+}(x,y,t, \\Delta)\\oplus K_{-}(x,y,t, \\Delta).\n\\]\n\n\\subsubsection{Partition function}\n\\label{sec:partition-function}\n\nThe partition function \\( Z_{\\text{Rabi}}(\\beta)\\) for the QRM is obtained by direct computation from the formula of the heat kernel $K_{\\text{Rabi}}(x,y,t)$ by the identity\n\\begin{equation*}\n Z_{\\text{Rabi}}(\\beta):= \\int_{-\\infty}^\\infty \\tr K_{\\text{Rabi}}(x,x,\\beta) dx.\n\\end{equation*} \nThe partition functions \\(Z_{\\rm{Rabi}}^{\\pm}(\\beta) \\) of the Hamiltonians $H_{\\pm}$ are obtained in an analogous way.\n\n\\begin{cor}[Cor. 4.3 of \\cite{RW2020hk}] \\label{cor:Partition_function}\n The partition function \\( Z_{\\text{Rabi}}(\\beta)\\) of the QRM is given by\n \\begin{align*}\n Z_{\\text{Rabi}}(\\beta) &= \\frac{e^{\\beta(g^2+1)}}{\\sinh(\\beta)} \\Bigg[ 1 + e^{-2g^2 \\coth(\\frac{\\beta}2)} \\sum_{\\lambda=1}^{\\infty} (\\beta \\Delta)^{2\\lambda} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_{2 \\lambda} \\leq 1} e^{ 4g^2\\frac{\\cosh(\\beta(1-\\mu_{2\\lambda}))}{\\sinh(\\beta)} + \\xi_{2 \\lambda}(\\bm{\\mu_{2\\lambda}},\\beta) +\\psi^-_{2 \\lambda}(\\bm{\\mu_{2 \\lambda}},\\beta)} d \\bm{\\mu_{2 \\lambda}} \\Bigg],\n \\end{align*}\n where the function $\\psi_\\lambda^{-}(\\bm{\\mu_{\\lambda}},t)$ is given by\n \\begin{equation*}\n \\psi_\\lambda^{-}(\\bm{\\mu_{\\lambda}},t) := \\frac{4 g^2 }{\\sinh(t)}\\left[ \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\sinh\\left(t\\left(\\tfrac12 - \\mu_{\\gamma}\\right)\\right) \\right]^2\n\\end{equation*}\nfor \\(\\lambda \\geq 1\\) and \\(\\bm{\\mu_{\\lambda}} = (\\mu_1,\\mu_2,\\cdots,\\mu_\\lambda) \\) and where \\( \\mu_0 = 0 \\). \n\\end{cor}\n\n\\begin{cor}[Cor. 
4.4 of \\cite{RW2020hk}]\n The partition function \\(Z_{\\rm{Rabi}}^{\\pm}(\\beta) \\) for the Hamiltonian \\(H_{\\pm}\\) is given by\n \\begin{align*}\n &Z_{\\text{Rabi}}^{\\pm}(\\beta) = \\frac{ e^{\\beta(g^2+1)}}{2\\sinh(\\beta)} \\Bigg[ 1 + e^{-2g^2 \\coth(\\frac{\\beta}2)} \\sum_{\\lambda =1}^{\\infty} (\\beta\\Delta)^{2 \\lambda} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_{2 \\lambda} \\leq 1} e^{4g^2\\frac{\\cosh(\\beta(1-\\mu_{2\\lambda}))}{\\sinh(\\beta)} + \\xi_{2\\lambda}(\\bm{\\mu_{2 \\lambda}},\\beta) +\\psi^-_{2 \\lambda} (\\bm{\\mu_{2\\lambda}},\\beta)} d \\bm{\\mu_{2\\lambda}} \\Bigg] \\\\\n &\\qquad \\qquad \\mp \\frac{ e^{\\beta(g^2 +1)}}{2\\cosh(\\beta)} e^{ - 2g^2 \\tanh(\\frac{\\beta}2)} \\sum_{\\lambda = 0}^{\\infty} (\\beta \\Delta)^{2\\lambda+1} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_{2 \\lambda+1} \\leq 1} e^{\\xi_{2\\lambda+1}(\\bm{\\mu_{2 \\lambda+1}},\\beta) +\\psi^+_{2 \\lambda+1} (\\bm{\\mu_{2\\lambda+1}},\\beta)} d \\bm{\\mu_{2\\lambda+1}},\n \\end{align*}\n where the function $\\psi_\\lambda^{-}(\\bm{\\mu_{\\lambda}},t)$ is as in Corollary \\ref{cor:Partition_function} and\n \\begin{equation*}\n \\psi_\\lambda^{+}(\\bm{\\mu_{\\lambda}},t) := \\frac{4 g^2 }{\\sinh(t)}\\left[ \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\cosh\\left(t\\left(\\tfrac12 - \\mu_{\\gamma}\\right)\\right) \\right]^2.\n\\end{equation*}\n\\end{cor}\n\n\\section{Propagator of the QRM}\n\\label{sec:Propagator}\n\nRecall from the Introduction that the propagator is the integral kernel associated to the solution of the Schr\\\"odinger equation corresponding to $H_{\\text{\\upshape Rabi}}$.\nPrecisely, it is a (two-by-two matrix valued) function $U_{\\text{Rabi}}(x,y,t)$ satisfying\n$i \\frac{\\partial}{\\partial t} U_{\\text{Rabi}}(x,y,t)= H_{\\text{\\upshape Rabi}} U_{\\text{Rabi}}(x,y,t)$ for all $t>0$ and $\\lim_{t \\to 0}U_{\\text{Rabi}}(x,y,t)=\\delta_x(y) \\bf{I}_2$ for $x,y \\in \\R$.\n\nClearly, we may obtain an explicit expression for the propagator $U_{\\text{Rabi}}(x,y,t)$ from the heat kernel\n$K_{\\text{Rabi}}$ by the change of variable $t \\to i t$ (known as the Wick rotation).\nIn this section we formalize this idea by extending the domain of $K_{\\text{Rabi}}$ to the complex plane as\na function of the variable $t$.\n\nFirst, a simple technical lemma is needed to establish the holomorphicity of \\(K_{\\text{Rabi}}(x,y,t)\\).\nThe lemma is also used later to give the meromorphic continuation of the spectral zeta functions\nof the QRM and of the Hamiltonians of each parity.\n\n\\begin{lem}\n \\label{lem:bound}\n Suppose \\(\\lambda \\in \\Z_{\\geq 1}\\) and let\n \\[\n \\mathcal{R}^* = \\{ z \\in \\C \\, | \\, z \\neq n \\pi i \\, , n \\in \\Z\\}.\n \\]\n Then, for \\( t \\in \\mathcal{R}^* \\) there are real valued functions \\( C_1(x,y,t), C_2(t), C_3(t) \\geq 0 \\),\n bounded on compact subsets of \\(\\mathcal{R}^*\\), such that\n \\begin{align*}\n \\left| \\theta_{\\lambda}(x,y,\\bm{\\mu_\\lambda},t) \\right| &\\leq \\left|\\frac{\\sqrt{2} g }{1-e^{-2 t}} \\right| C_1(x,y,t) \\\\\n \\left| \\psi_{\\lambda}^{\\pm}(\\bm{\\mu_\\lambda},t) \\right| &\\leq \\left|\\frac{2 g^2 }{1-e^{-2 t}} \\right| C_2(t) \\\\\n \\left\\vert \\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t) \\right\\vert &\\le \\left|\\frac{2 g^2 }{1-e^{-2 t}} \\right| C_3(t) \\lambda \n \\end{align*}\n uniformly for \\(0 \\leq \\mu_1 \\leq \\mu_2 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1\\).\n\\end{lem}\n\n\\begin{proof}\n Let \\( t = a+ b i \\in \\mathcal{R}^*\\). 
For the proof it\n is convenient to use the exponential form of the hyperbolic functions appearing in the functions $\\theta_{\\lambda}(x,y,\\bm{\\mu_{\\lambda}},t)$, $\\psi_{\\lambda}^{\\pm}(\\bm{\\mu_{\\lambda}},t)$ and $\\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)$.\n Let us first consider the case of $\\theta_{\\lambda}(x,y,\\bm{\\mu_\\lambda},t)$. Clearly, we have\n \\begin{align*}\n &\\left|\\frac{2\\sqrt{2} g e^{-t}}{1-e^{-2t}}\\left( x (e^{t}+e^{- t}) - 2 y \\right) \\left( \\frac{1-(-1)^{\\lambda}}{2} \\right) - \\sqrt{2} g (x-y) \\frac{1+e^{-t}}{1-e^{-t}}\\right| \\\\\n & \\qquad \\qquad \\leq \\frac{\\sqrt{2}|g| e^{-a}}{|1-e^{-2t}|} \\left(2 (|x|(e^a + e^{-a}) + 2|y|) + (|x|+|y|)(1+e^{-a})^2 \\right) = \\frac{\\sqrt{2} |g| c_1(x,y,t)}{|1-e^{-2t}|}.\n \\end{align*}\n Next, we notice that, for \\( \\lambda \\equiv 1 \\pmod{2} \\), we have\n \\begin{align*}\n &s(x,y,t) := \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\Big[ x (e^{t(1 - \\mu_{\\gamma}) } + e^{ t( \\mu_{\\gamma} - 1)}) - y (e^{- t \\mu_{\\gamma} }+ e^{ t \\mu_{\\gamma}}) \\Big] \\\\\n &\\qquad = - t \\sum_{\\gamma = 0}^{\\frac{\\lambda-1}{2}} \\left( x \\int_{\\mu_{2\\gamma}}^{\\mu_{2\\gamma+1}} (e^{t(1-v)} + e^{t(v-1)})d v - y \\int_{\\mu_{2\\gamma}}^{\\mu_{2\\gamma+1}} (e^{t v} + e^{-t v}) d v \\right).\n \\end{align*}\n Next, we have\n \\[\n |s(x,y,t)| \\leq |t| \\left(|x| \\int_{0}^{1} (e^{a(1-v)} + e^{a(v-1)})d v + |y| \\int_{0}^{1} (e^{a v} + e^{-a v})d v \\right),\n \\]\n giving\n \\[\n |s(x,y,t)| \\leq 2\\frac{|t|}{|a|}(|x|+|y|)\\left(e^a + e^{-a}\\right).\n \\]\n If \\( \\lambda \\equiv 0 \\pmod{2} \\), we apply the estimate above to the first \\(\\lambda\\) terms of the sum, resulting in\n \\[\n |s(x,y,t)| \\leq 2\\frac{|t|}{|a|}(|x|+|y|)\\left(2e^{|a|} + e^{-|a|} + 1 \\right) \n \\]\n and we set \\(c_2(x,y,t)\\) as the right hand side of the inequality. Setting $C_1(x,y,t) = c_1(x,y,t) + \\sqrt{2}|g| e^{-a} c_2(x,y,t)$ gives the desired result. 
The case of \\(\\psi_{\\lambda}^{\\pm}(\\bm{\\mu_\\lambda},t) \\) and the first sum in\n $\\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)$ are dealt with in the same way.\n \n For the second sum in $\\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)$, we fix \\(0 \\leq n < \\lambda-1\\) and consider the sum\n \\begin{align*}\n S_{n}(t) &= \\sum_{\\substack{n <\\beta\\leq \\lambda-1 \\\\ \\beta - n \\equiv 1 \\pmod{2}} } \\left( (e^{- t \\mu_{\\beta+1}} + e^{ t \\mu_{\\beta+1}-2t} )-(e^{- t \\mu_{\\beta}} + e^{ t \\mu_{\\beta}-2t}) \\right) ( (e^{ t \\mu_{n}} + e^{- t \\mu_{n}}) - (e^{ t \\mu_{n+1}} + e^{- t \\mu_{n+1}}))\\\\\n &= ( (e^{ t \\mu_{n}} + e^{- t \\mu_{n}}) - (e^{ t \\mu_{n+1}} + e^{- t \\mu_{n+1}})) \\sum_{\\substack{n<\\beta\\leq \\lambda-1 \\\\ \\beta - n \\equiv 1 \\pmod{2}} } \\left( (e^{- t \\mu_{\\beta+1}} + e^{ t \\mu_{\\beta+1}-2t} )-(e^{- t \\mu_{\\beta}} + e^{ t \\mu_{\\beta}-2t}) \\right).\n \\end{align*}\n Transforming the sums into definite integrals as in the case above, we see that \\(S_{n}(t) \\) is equal to\n \\begin{gather*}\n - t^2 \\left( \\int_{ \\mu_{n}}^{ \\mu_{n+1}} (e^{-t v} + e^{t v} ) d v \\right) \\sum_{\\substack{n<\\beta\\leq \\lambda-1 \\\\ \\beta - n \\equiv 1 \\pmod{2}} }\n \\left( \\int_{ \\mu_{\\beta}}^{ \\mu_{\\beta+1} } e^{- t v} d v + e^{-2 t}\\int_{ \\mu_{\\beta}}^{ \\mu_{\\beta+1} } e^{t v} d v \\right).\n \\end{gather*}\n It follows that\n \\begin{align*}\n |S_{n}(t)| &\\leq |t|^2 \\left(\\int_{0}^{1} (e^{- a v} + e^{a v} ) d v \\right) \\left( \\int_{0}^{1 } e^{- a v} d v + e^{-2 a}\\int_{0}^{1 } e^{a v } d v \\right) \\\\\n &\\leq \\left|\\frac{t}{a} \\right|^2 \\left( e^a - e^{-a} \\right) \\left(1 - e^{-2 a} \\right),\n \\end{align*}\n with a limit interpretation for \\(a = 0 \\).\n Therefore,\n \\[\n \\left\\vert \\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t) \\right\\vert \\leq \\left|\\frac{2 g^2 }{1-e^{-2 t}} \\right| \\left|\\sum_{n=0}^{\\lambda-2} S_n(t) \\right|\n \\leq \\left|\\frac{2 g^2 }{1-e^{-2 t}} \\right| c_3(t) \\lambda,\n \\]\n completing the proof.\n\\end{proof}\n\nFor the purpose of giving an explicit formula for the propagator it is only necessary to extend the function $K_{\\text{Rabi}} (x,y,t)$ to the line $i \\R$. However, for completeness we give the holomorphic extension to a larger region in the complex plane. We remark that the region is chosen according to the principal branch of the logarithm (equivalently, the branch of the square root).\n\n\\begin{prop} \\label{prop:MeromExtK}\n For fixed $x,y \\in \\R$, the series defining any of the entries of the heat kernel $K_{\\text{Rabi}} (x,y,t)$ is\n uniformly convergent on compact sets in the complement in the complex plane of the region\n $\\bigcup_{n \\in \\Z} \\{ t = a + i \\pi n \\in \\C : \\, a\\leq0 \\}$. In particular, $K_{\\text{Rabi}} (x,y,t)$ is (entrywise)\n holomorphic in said region.\n\\end{prop}\n\n\\begin{proof}\n Denote by \\(\\mathcal{D}\\) the region given by the complement of $\\bigcup_{n \\in \\Z} \\{ t = a + i \\pi n \\in \\C : \\, a\\leq0 \\}$\n and consider a compact \\(K \\subset \\mathcal{D}\\). \n Since \\(\\mathcal{D}\\) does not contain any zero of \\(1-e^{-2t}\\), we have\n \\[\n |K_0(x,y,g,t)| \\leq c_0\n \\]\n for some constant $c_0 \\ge 0$. 
Similarly, we set $c_1 = \\max_{t\\in K}(|t|)$ and\n \\[\n c_2 = \\max_{t \\in K}\\left(\\left|-2g^2\\coth(\\tfrac{t}2) + 4g^2 (1+e^{|t|})\/(\\sinh(t))\\right|,\\left|2g^2 \\tanh(\\tfrac{t}2)\\right| \\right),\n \\]\n then we have\n \\begin{align*}\n & \\left|\\sum_{\\lambda=0}^{\\infty} (t\\Delta)^{\\lambda} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} e^{-2g^2 (\\coth(\\tfrac{t}2))^{(-1)^\\lambda}+ 4g^2 \\frac{\\cosh(t(1-\\mu_\\lambda))}{\\sinh(t)}(\\frac{1+(-1)^\\lambda}{2}) + \\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t) \\pm \\theta_{\\lambda}(x,y,\\bm{\\mu_{\\lambda}},t)} d \\bm{\\mu_{\\lambda}}\\right| \\\\\n & \\quad \\leq \\sum_{\\lambda=0}^{\\infty} ( c_1 \\Delta)^{\\lambda} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} e^{c_2 + |\\xi_{\\lambda}(\\bm{\\mu_{\\lambda}},t)| +|\\theta_{\\lambda}(x,y,\\bm{\\mu_{\\lambda}},t)| } d \\bm{\\mu_{\\lambda}} \\leq e^{c_2+ c_3(x,y)} \\sum_{\\lambda=0}^{\\infty} ( c_1 \\Delta)^{\\lambda} e^{ \\lambda c_4 } \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} d \\bm{\\mu_{\\lambda}} \\\\\n & \\quad \\leq e^{c_2+ c_3(x,y)} \\sum_{\\lambda=0}^{\\infty} \\frac{( c_1 \\Delta e^{c_4})^{\\lambda}}{\\lambda!} = e^{c_2+ c_3(x,y)+ c_1\\Delta e^{c_4}},\n \\end{align*}\n uniformly in $K$, where the constants $c_3$ and $c_4$ are given by Lemma \\ref{lem:bound}. Therefore, we see\n that the series defining any entry of $K_{\\text{Rabi}}(x,y,t)$ is bounded uniformly in $K$ by\n \\[\n c_0 e^{c_2+ c_3(x,y)+ c_1\\Delta e^{c_4}},\n \\]\n and the result follows from the Weierstrass convergence theorem since $K$ is an arbitrary compact set in $\\mathcal{D}$.\n\\end{proof}\n\nWith these preparations, we are ready to give the formula for the propagator of the QRM.\n\n\\begin{thm}\n The integral kernel $U_{\\text{Rabi}}(x,y,t)$ of $e^{-i t H_{\\text{\\upshape Rabi}}}$ (the propagator of the QRM) is given by\n \\(K_{\\text{Rabi}}(x,y,i t) \\). 
Concretely, $U_{\\text{Rabi}}(x,y,t)$ is given by \n \\begin{align*} \n &U_{\\text{Rabi}} (x,y,t) = U_0(x,y,g,t) \\Bigg[ \\sum_{\\lambda=0}^{\\infty} (i t\\Delta)^{\\lambda} e^{- 2 g^2 (-i\\cot(\\tfrac{t}2))^{(-1)^\\lambda}}\n \\\\\n &\\qquad \\times \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_\\lambda \\leq 1} e^{-4 i g^2 \\frac{\\cos(t(1-\\mu_\\lambda))}{\\sin(t)}(\\frac{1+(-1)^\\lambda}{2}) + \\bar{\\xi}_{\\lambda}(\\bm{\\mu_{\\lambda}},t)} \n \\begin{bmatrix}\n (-1)^{\\lambda} \\cosh & (-1)^{\\lambda+1} \\sinh \\\\\n -\\sinh & \\cosh\n \\end{bmatrix}\n \\left( \\bar{\\theta}_{\\lambda}(x,y,\\bm{\\mu_{\\lambda}},t) \\right) d \\bm{\\mu_{\\lambda}} \\Bigg]\n \\end{align*}\n with \\(\\bm{\\mu_0} := 0\\) and \\(\\bm{\\mu_{\\lambda}}= (\\mu_1,\\mu_2,\\cdots,\\mu_\\lambda)\\) and \\(d \\bm{\\mu_{\\lambda}} = d \\mu_1 d \\mu_2 \\cdots d \\mu_{\\lambda} \\)\n for \\(\\lambda \\geq 1\\).\n Here, \n \\begin{align*}\n U_0(x,y,g,t)\n & := \\frac{e^{i t(g^2+\\tfrac12)}}{\\sqrt{2 i \\pi \\sin(t)}} \\exp\\left( - \\frac{(x^2 + y^2) \\cos(t) - 2 x y}{2i \\sin(t)} \\right)\n \\end{align*}\n and the functions \\(\\bar{\\theta}_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t)\\) and $\\bar{\\xi}_\\lambda(\\bm{\\mu_{\\lambda}},t)$ are given by\n \\begin{align*}\n \\bar{\\theta}_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t) &:= \\frac{2\\sqrt{2} g}{i \\sin(t)}\\left( x \\cos(t) - y \\right) \\left( \\frac{1-(-1)^{\\lambda}}{2} \\right) + i \\sqrt{2}g (x-y) \\cot(\\tfrac{t}2) \\\\\n & \\quad + \\frac{2\\sqrt{2} g (-1)^{\\lambda} }{i \\sin(t)} \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\Big[ x \\cos(t(1 - \\mu_{\\gamma})) - y \\cos(t \\mu_{\\gamma}) \\Big] \\nonumber \\\\\n \\bar{\\xi}_\\lambda(\\bm{\\mu_{\\lambda}},t) &:= \\frac{8g^2 }{i\\sin(t)} \\left(\\sin(\\tfrac12t(1-\\mu_\\lambda))\\right)^2 (-1)^{\\lambda} \\sum_{\\gamma=0}^{\\lambda} (-1)^{\\gamma} \\cos( t \\mu_{\\gamma}) \\\\\n & - \\frac{4 g^2 }{i\\sin(t)} \\sum_{\\substack{0\\leq\\alpha<\\beta\\leq \\lambda-1\\\\ \\beta - \\alpha \\equiv 1 \\pmod{2} }} \\left( \\cos(t(\\mu_{\\beta+1}-1))-\\cos(t(\\mu_{\\beta}-1)) \\right) ( \\cos(t \\mu_{\\alpha}) - \\cos(t \\mu_{\\alpha+1})). \\nonumber \n \\end{align*}\n In the formulas above, the square root is taken according to the principal branch.\n\\end{thm}\n\nWe note that \\(U_{\\text{Rabi}}(x,y,t) \\) can be written completely in terms of circular functions, in contrast with the case of \\(K_{\\text{Rabi}}(x,y, t) \\) which is given in terms of hyperbolic functions.\n\n\\begin{thm}\n The propagator $U_{\\pm}(x,y,t, \\Delta)$ of $H_\\pm= H_{\\text{\\upshape Rabi}} |_{\\mathcal{H}_\\pm}$ is given by \n \\begin{align*}\n U_{\\pm}(x,y,t, \\Delta)\n = & U_0(x,y,g,t)\\sum_{\\lambda=0}^{\\infty} (i t\\Delta)^{2\\lambda} \\bar{\\Phi}^-_{2\\lambda}(x,y,t) \\mp U_0(x,-y,g,t) \\sum_{\\lambda=0}^{\\infty}\n (i t\\Delta)^{2\\lambda+1} \\bar{\\Phi}^+_{2\\lambda+1}(x,-y,t),\n \\end{align*}\n where for $\\lambda\\geq1$, the function $\\bar{\\Phi}^\\pm_{\\lambda}(x,y,t)$ is given by\n \\begin{align*}\n \\bar{\\Phi}^\\pm_{\\lambda}(x,y,t) := e^{-2 g^2 (-i\\cot(\\tfrac{t}2))^{(-1)^\\lambda}} \\idotsint\\limits_{0\\leq \\mu_1 \\leq \\cdots \\leq \\mu_{\\lambda} \\leq 1} e^{- 4 i g^2 \\frac{\\cos(t(1-\\mu_\\lambda))}{\\sin(t)}(\\frac{1+(-1)^\\lambda}{2}) + \\bar{\\xi}_{\\lambda}(\\bm{\\mu_{\\lambda}},t)\\pm \\bar{\\theta}_{\\lambda}(x,y, \\bm{\\mu_{\\lambda}},t)} d \\bm{\\mu_{\\lambda}}\n \\end{align*}\n and \n \\begin{align*}\n \\bar{\\Phi}^\\pm_0(x,y,t) := e^{-2 ig^2\\tan\\big(\\frac{t}2\\big) \\pm\\sqrt2 i 
g(x+y)\\tan\\big(\\frac{t}2\\big)}.\n \\end{align*}\n\\end{thm}\n\n\n\\section{Spectral determinant for parity Hamiltonians and Braak's $G$-functions}\n\\label{sec:spectral-determinant}\n\nThe $G$-function for the QRM (and the respective ones for the parity Hamiltonians) was\noriginally defined by Braak \\cite{B2011PRL} to establish the exact solvability of the QRM. In \\cite{KRW2017}\n(see also \\cite{Sugi2016}) a significant relation was found between the Braak $G$-function and the spectral zeta\nfunction of the QRM (the Mellin transform of the partition function). Actually, the $G$-function is\n(up to a non-vanishing function) equal to the spectral determinant of \\(H_{\\text{\\upshape Rabi}}\\), that is, the zeta-regularized\nproduct associated to the spectral zeta function of the QRM. In this section, we extend this result to the case\nof the Hamiltonians \\(H_{\\pm}\\) of each parity.\n\nLet us start by recalling the definitions of the spectral zeta functions and the spectral determinant\nspecialized to the case of the QRM and the Hamiltonians of each parity.\nLet\n\\[\n \\lambda^{\\pm}_1 < \\lambda^{\\pm}_2 \\leq \\lambda^{\\pm}_3 \\leq \\ldots \\leq \\lambda^{\\pm}_n \\leq \\ldots (\\nearrow \\infty)\n\\]\nbe the eigenvalues of \\(H_{\\pm}\\); then the (Hurwitz-type) spectral zeta function \\(\\zeta^{\\pm}_{\\text{QRM}}(s; \\tau)\\) is given by the Dirichlet series\n\\[\n \\zeta^{\\pm}_{\\text{QRM}}(s; \\tau):= \\sum_{j=1}^\\infty (\\lambda^{\\pm}_j +\\tau)^{-s}.\n\\]\nSimilarly, the spectral zeta function \\(\\zeta_{\\text{QRM}}(s; \\tau)\\) of the QRM is given by\n\\[\n \\zeta_{\\text{QRM}}(s; \\tau) = \\zeta^{+}_{\\text{QRM}}(s; \\tau)+ \\zeta^{-}_{\\text{QRM}}(s; \\tau).\n\\]\nIn both cases, it is easily verified (cf. \\cite{Sugi2016}) that the zeta functions above are absolutely convergent\nfor \\(\\Re(s)>1 \\) and \\( \\tau \\in \\C - \\Spec({H_{\\text{\\upshape Rabi}}})\\). \n\nFix the log-branch by $-\\pi\\leq \\arg(\\tau- \\lambda_i)<\\pi$. For a sequence $\\mathcal{A} =\\{a_i\\}_{i\\geq 1}, \\, a_i \\in \\C$, by\ndefining the associated zeta function\n\\[\n \\zeta_{\\mathcal{A}}(s) = \\sum_{n=1}^{\\infty} a_n^{-s},\n\\]\nassumed to be holomorphic at $s=0$, the zeta regularized product (cf. \\cite{QHS1993TAMS}) associated to $\\mathcal{A}$ is given by\n\\begin{equation*}\n \\regprod_{i=0}^\\infty a_i:= \\exp\\left(-\\frac{d}{ds}\\zeta_{\\mathcal{A}}(s)\\big|_{s=0}\\right).\n\\end{equation*}\nBy introducing an auxiliary parameter, the zeta regularized product is also one of the ways to define a\nfunction with prescribed zeros.\n\nIf the sequence $\\mathcal{A}$ corresponds to the eigenvalues of an operator (e.g. a Hamiltonian), the zeta\nregularized product is a generalization of the characteristic polynomial of finite matrices. \nFor the case of the QRM, the zeta regularized product associated to \\(\\zeta_{\\text{QRM}}^{\\pm}(s; \\tau)\\) is defined by\n\\begin{equation*}\n \\regprod_{i=0}^\\infty (\\tau-\\lambda^{\\pm}_i):= \\exp\\left(-\\frac{d}{ds}\\zeta_{\\text{QRM}}^{\\pm}(s; \\tau)\\big|_{s=0}\\right),\n\\end{equation*}\nwhere the product is over the eigenvalues \\(\\lambda_i^{\\pm}\\) in the spectrum of \\(H_{\\pm}\\). 
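For $\\Re(s)>1$ the defining Dirichlet series converge, so the parity zeta functions can be approximated numerically. The following sketch (ours; the truncation level $M$ and the parameters are arbitrary choices, and only the lower part of the truncated spectrum is reliable) diagonalizes the truncated Hamiltonian and separates the parities using the parity operator $\\sigma_z (-1)^{a^{\\dagger} a}$, which commutes with $H_{\\text{\\upshape Rabi}}$.\n\\begin{verbatim}\nimport numpy as np\n\nM, g, Delta, tau, s = 120, 0.7, 0.4, 1.5, 2.0\na = np.diag(np.sqrt(np.arange(1, M)), 1)\nsx = np.array([[0., 1.], [1., 0.]])\nsz = np.array([[1., 0.], [0., -1.]])\nH = (np.kron(a.T @ a, np.eye(2)) + Delta * np.kron(np.eye(M), sz)\n     + g * np.kron(a + a.T, sx))\nPi = np.kron(np.diag((-1.0) ** np.arange(M)), sz)  # parity operator\n\nevals, evecs = np.linalg.eigh(H)\npar = np.sum(evecs * (Pi @ evecs), axis=0)         # <v|Pi|v>, close to +-1\nlam, p = evals[:M], par[:M]                        # keep the lower half only\nzeta_plus = np.sum((lam[p > 0] + tau) ** (-s))\nzeta_minus = np.sum((lam[p < 0] + tau) ** (-s))\nprint(zeta_plus, zeta_minus, zeta_plus + zeta_minus)\n\\end{verbatim}\nOf course, this only covers the convergent region $\\Re(s)>1$; the analytic continuation discussed below is needed near $s=0$.\n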
Now we define the spectral determinant of the Hamiltonians \\(H_{\\pm} \\) as\n\\begin{equation*}\n \\det (\\tau-H_{\\pm}):= \\regprod_{i=0}^\\infty (\\tau-\\lambda^{\\pm}_i).\n\\end{equation*}\nThe spectral determinant, as a function of $\\tau$, vanishes exactly at the eigenvalues of $H_{\\pm}$.\n\nIn \\cite{RW2017} the authors proved that the zeta regularized product of \\(\\zeta_{\\text{QRM}}(s;\\tau)\\), equivalently the spectral determinant of \\( H_{\\text{\\upshape Rabi}}\\), is given (up to a non-vanishing entire function) by the complete $G$-function (called the generalized $G$-function in \\cite{RW2017}) given by\n\\[\n \\mathcal{G}(x;g,\\Delta) = G_{+}(x;g,\\Delta) G_{-}(x;g,\\Delta) \\Gamma(-x)^{-2},\n\\]\nwhere \\(G_{\\pm}(x;g,\\Delta)\\) are the parity $G$-functions defined in \\cite{B2011PRL,B2011PRL-OnlineSupplement} (see also Appendix \\ref{sec:bargm-space-confl} and cf. \\cite{LB2015JPA}).\n\n\nTo extend the result to the parity Hamiltonians we need some preparations. First, we need to show that the spectral\nzeta function $\\zeta^{\\pm}_{\\text{QRM}}(s; \\tau)$ is holomorphic around $s = 0$. In \\cite{Sugi2016}, it was shown, without using an explicit formula for the heat kernel, that $\\zeta_{\\text{QRM}}(s; \\tau)$ extends meromorphically to the complex plane with a simple pole at \\(s= 1\\) (cf. \\cite{IW2005a,IW2005b} for the method in the case of the NCHO).\n\nUsing the Mellin transform expression of $\\zeta_{\\text{QRM}}(s)$ in terms of the partition function $Z_{\\text{Rabi}}(t)$, we can give another proof of the meromorphic continuation, similar to one of Riemann's original proofs for the zeta function. In addition, by the same method we obtain the analytic continuation of the parity zeta functions \\(\\zeta_{\\text{QRM}}^{\\pm}(s;\\tau)\\). The details are given in Appendix \\ref{sec:proofmero}.\n\n\\begin{thm} \\label{IntRep_SZF}\nWe have\n\\begin{align}\\label{ContourSZF}\n \\zeta_{\\text{QRM}}(s;\\tau)= -\\frac{\\Gamma(1-s)}{2\\pi i}\\int_\\infty^{(0+)} \\frac{(-w)^{s-1} \\Omega(w)e^{-\\tau w}}{1-e^{-w}}dw.\n\\end{align}\nHere the contour integral is taken over the path which starts at $\\infty$ on the real axis, encircles the origin (with a radius smaller than $2\\pi$) in the positive direction and returns to the starting point; it is assumed that $|\\arg(-w)|\\leq \\pi$. This gives a meromorphic continuation of $\\zeta_{\\text{QRM}}(s;\\tau)$ to the whole plane, where the only singularity is a simple pole with residue $2$ at $s=1$.\n\\end{thm}\n\n\\begin{cor}\n With the notation of Theorem \\ref{IntRep_SZF}, we have\n \\begin{align*}\n \\zeta_{\\text{QRM}}^{\\pm}(s;\\tau)= -\\frac{\\Gamma(1-s)}{4\\pi i}\\int_\\infty^{(0+)}\\left( \\frac{(-w)^{s-1} \\Omega(w)e^{-\\tau w}}{1-e^{-w}} \\mp \\frac{(-w)^{s-1} \\Omega_{\\text{odd}}(w)e^{-\\tau w}}{1+e^{-w}} \\right)dw.\n \\end{align*}\n This gives a meromorphic continuation of $\\zeta^{\\pm}_{\\text{QRM}}(s;\\tau)$ to the whole plane, where the only singularity is a simple pole with residue $1$ at $s=1$. \\qed\n\\end{cor}\n\nAs an important consequence of the meromorphic continuation of $\\zeta_{\\text{QRM}}^{\\pm}(s;\\tau)$ we obtain the Weyl law for the distribution of the eigenvalues of the parity Hamiltonians \\(H_{\\pm}\\) in the usual way (cf. 
\\cite{IW2005a,Sugi2016}).\n\nLet us define the spectral counting functions\n\\begin{align*}\n N_{\\text{Rabi}}(T) &= \\# \\{\\lambda \\in \\Spec(H_{\\text{\\upshape Rabi}}) \\, | \\, \\lambda \\le T \\}, \\\\\n N_{\\pm}(T) &= \\# \\{\\lambda \\in \\Spec(H_{\\pm}) \\, | \\, \\lambda \\le T \\}\n\\end{align*}\nfor \\(T > 0 \\).\n\n\\begin{cor}\n We have\n \\[\n N_{\\pm}(T) \\sim \\frac12 N_{\\text{Rabi}}(T) \\sim T,\n \\]\n as \\(T \\to \\infty \\). \\qed\n\\end{cor}\n\nThe Weyl law shows that the positive and negative parity eigenstates are equally distributed and also supports the original Braak conjecture concerning the number of eigenvalues (in each parity) in consecutive intervals (see e.g. \\cite{B2011PRL,KRW2017}). We note that\nthe equality of the distribution between the parities also follows from the relation $G_-(x, g, \\Delta)=G_+(x,g,-\\Delta)$ between the $G_\\pm$-functions \\cite{B2011PRL} and the properties of the constraint functions\/polynomials \\cite{KRW2017}.\n\nNext, we compute the residue at the poles of the $G$-functions \\(G_{\\pm}(x;g,\\Delta)\\).\n\n\\begin{lem}\\label{lem:respole}\n The residue of the $G$-function \\(G_{\\pm}(x;g,\\Delta)\\) at the (simple) pole at \\(x = N \\in \\Z_{\\geq 0}\\) is given by\n \\[\n \\Res_{x = N} G_{\\pm}(x;g,\\Delta) = \\frac{\\Delta^2 g^N}{2(N+1)} K_N(N;g,\\Delta) G_{\\pm}^{(N)}(g,\\Delta). \n \\]\n\\end{lem}\n\n\\begin{proof}\n The result follows directly by computation and comparison with the definition of \\(K_N(N;g,\\Delta)\\) and \\(G_{\\pm}^{(N)}(g,\\Delta)\\) (see Appendix \\ref{sec:bargm-space-confl} and the proof of Proposition 6.8 of \\cite{KRW2017}).\n\\end{proof}\n\nNext, we show that the zeros of the complete $G$-function $\\mathcal{G}_{\\pm}$ for each parity, defined by\n \\[\n \\mathcal{G}_{\\pm}(x;g,\\Delta) := G_{\\pm}(x;g,\\Delta) \\Gamma(-x)^{-1}, \n \\]\ncapture the complete spectrum of $H_{\\pm}$.\n\n\\begin{thm}\\label{thm:eigenparity}\n There is a one-to-one correspondence between eigenvalues \\(\\lambda\\) in \\(\\Spec{H_{\\pm}}\\) and zeros \\(x = \\lambda + g^2 \\) of the generalized\n \\(G\\)-function \\(\\mathcal{G}_{\\pm}(x;g,\\Delta)\\).\n\\end{thm}\n\n\\begin{proof}\n Let \\(\\lambda \\in \\R\\) be a regular eigenvalue of \\(H_{\\pm}\\); then by the definition \\( x = \\lambda + g^2 \\) is a zero of \\(G_{\\pm}(x;g,\\Delta)\\).\n Now, suppose \\(\\lambda= N - g^2\\) is an exceptional eigenvalue of \\(H_{\\pm}\\). In this case the residue in Lemma \\ref{lem:respole} vanishes, so the\n function \\(G_{\\pm}(x;g,\\Delta)\\) has a finite value at \\(x = \\lambda + g^2 = N \\), and then \\(\\mathcal{G}_{\\pm}(x;g,\\Delta)\\) vanishes by the zero of \\(\\Gamma(-x)^{-1}\\).\n Conversely, let \\(x \\in \\R\\) be a zero of \\(\\mathcal{G}_{\\pm}(x;g,\\Delta) \\). If \\(x \\notin \\Z_{\\geq0} \\) then \\(x \\) is a zero of \\(G_{\\pm}(x;g,\\Delta)\\) and \\(\\lambda = x - g^2 \\) is a regular eigenvalue of \\(H_{\\pm}\\). 
If \\(x = N \\in \\Z_{\\geq 0}\\), then, since the zero of \\(\\Gamma(-x)^{-1}\\)\n at \\(x=N \\) is canceled by the pole of \\(G_{\\pm}(x;g,\\Delta)\\), the residue of \\(G_{\\pm}(x;g,\\Delta) \\) at \\(x=N \\) must vanish;\n in other words, the tuple \\((g,\\Delta) \\) must be a zero of \\(K_N(N;g,\\Delta)\\) or \\(G_{\\pm}^{(N)}(g,\\Delta)\\), and thus \\(\\lambda = N - g^2 \\) is an exceptional eigenvalue\n (Juddian or non-Juddian exceptional, respectively).\n\\end{proof}\n\n \\begin{rem}\n The spectrum of the QRM can be captured by irreducible representations of $\\mathfrak{sl}_2$ (cf. \\cite{KRW2017,W2017JPA}).\n For instance, the Juddian (resp. non-Juddian \\cite{MPS2014}) exceptional solutions are obtained from the irreducible finite dimensional (resp. lowest weight) representations. The existence of these exceptional eigenvalues, inherited from the quantum harmonic oscillator (or remaining as its remnants) and described by the oscillator representation of $\\mathfrak{sl}_2$, is the reason for the presence of the gamma factor in \\(\\mathcal{G}_{\\pm}(x;g,\\Delta)\\).\n\\end{rem}\n\nThe meaning of Theorem \\ref{thm:eigenparity} is that the complete $G$-function \\(\\mathcal{G}_{\\pm}(x;g,\\Delta)\\) vanishes\nexactly at the eigenvalues of \\(H_{\\pm}\\). It follows immediately that it is equal, up to a non-vanishing entire function,\nto the spectral determinant of the parity Hamiltonian.\n\n\\begin{cor} \\label{cor:specdet}\n There exists a non-vanishing entire function \\(c_{\\pm}(\\tau; g, \\Delta)\\) such that\n \\[\n \\det (\\tau-H_{\\pm}) = c_{\\pm}(\\tau; g, \\Delta) \\, \\mathcal{G}_{\\pm}(\\tau;g,\\Delta). \\qed\n \\]\n\\end{cor}\n\nWe conclude by making a remark on Corollary \\ref{cor:specdet}. As mentioned before, in \\cite{B2011PRL}\nBraak proved the integrability of the QRM by defining the $G$-function of the parity Hamiltonians \\(H_{\\pm}\\).\nHere, in Corollary \\ref{cor:specdet} above, we see that the $G$-function is, up to a non-vanishing entire function,\nequal to the spectral determinant of \\(H_{\\pm}\\), in other words, the zeta regularized product of the spectral zeta function \\(\\zeta_{\\text{QRM}}^{\\pm}(s;\\tau)\\).\nThe zeta regularized product of a zeta function \\(\\zeta(s)\\) is defined when \\(\\zeta(s)\\) is holomorphic in a neighborhood of \\(s = 0 \\)\n(in case \\(\\zeta(s)\\) has a pole at \\(s = 0 \\), a modified zeta regularized product may be used, cf. \\cite{KW2004}).\nIt would be interesting to investigate the relationship between the integrability (or exact solvability) of the Hamiltonian \\(H\\) of a\nquantum interaction model, that is, the existence of entire solutions of the corresponding Fuchsian ODE (Bargmann model), and the existence of a zeta regularized\nproduct for its corresponding spectral zeta function \\(\\zeta_{H}(s;t) \\), or equivalently, the meromorphic continuation of the spectral zeta function to a region containing \\(s=0\\). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConvolutional neural networks (CNNs) have shown promising results on supervised learning tasks. However, the performance of a learned model always degrades severely when dealing with data from other domains. Considering that constantly annotating massive samples from new domains is expensive and impractical, unsupervised domain adaptation (UDA), therefore, has emerged as a new learning framework to address this problem \\cite{csurka2017domain}. 
UDA aims to utilize fully-labeled samples in the source domain to annotate the completely-unlabeled target domain samples. Thanks to deep CNNs, recent advances in UDA show satisfactory performance in several computer vision tasks \\cite{hoffman2018cycada}. Among them, most methods bridge the source and target domains by learning domain-invariant features. These dominant methods can be further divided into two categories: (1) Learning domain-invariant features by minimizing the discrepancy between feature distributions \\cite{long2015learning,sun2016deep,long2017deep,zellinger2017central,chen2019towards}. (2) Encouraging domain confusion by a domain adversarial objective whereby a discriminator (domain classifier) is trained to distinguish between the source and target representations \\cite{ganin2016domain,tzeng2017adversarial,shu2018dirt,hoffman2018cycada}.\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{fig0.pdf}\n\\caption{The merits of using higher-order moment tensors for domain alignment. 300 points in $\\mathbb{R}^2$ and the level sets of the moment tensor with different orders. As observed, using a higher-order moment tensor captures the shape of the cloud of samples more accurately.}\n\\label{fig0}\n\\end{figure}\n\nFrom the perspective of moment matching, most of the existing discrepancy-based methods in UDA are based on Maximum Mean Discrepancy (MMD) \\cite{long2017deep} or Correlation Alignment (CORAL) \\cite{sun2016deep}, which are designed to match the first-order (Mean) and second-order (Covariance) statistics of different distributions. However, in real-world applications (such as image recognition), the deep features always follow a complex, non-Gaussian distribution \\cite{jia2011heavy,xu2016blind}, which cannot be completely characterized by its first-order or second-order statistics. Therefore, aligning the second-order or lower statistics only guarantees coarse-grained alignment of two distributions. To address this limitation, we propose to perform domain alignment by matching the higher-order moment tensors (mainly referring to the third-order and fourth-order moment tensors), which contain more discriminative information and can better represent the feature distribution \\cite{pauwels2016sorting,gou2017mom}. Inspired by \\cite{pauwels2016sorting}, Fig.\\ref{fig0} illustrates the merits of higher-order moment tensors, where we plot a cloud of points (consisting of three different Gaussians) and the level sets of the moment tensor with different orders. As observed, the higher-order moment tensor characterizes the distribution more accurately.\n\nOur contributions can be summarized as follows: (1) We propose a Higher-order Moment Matching (HoMM) method to minimize the domain discrepancy, which is expected to perform fine-grained domain alignment. The HoMM integrates the MMD and CORAL into a unified framework and generalizes the first-order and second-order moment matching to higher-order moment tensor matching. Without bells and whistles, the third- and fourth-order moment matching outperform all existing discrepancy-based methods by a large margin. The source code of the HoMM is released. 
(2) Due to the lack of labels in the target domain, we propose to learn discriminative clusters in the target domain by assigning pseudo-labels to the reliable target samples, which also improves the transfer performance.\n\n\\section{Related Work}\n\\textbf{Learning Domain-Invariant Features} To minimize the domain discrepancy and learn domain-invariant features, various distribution discrepancy metrics have been introduced. The representative ones include Maximum Mean Discrepancy (MMD) \\cite{gretton2012kernel,tzeng2014deep,long2015learning,long2017deep}, Correlation Alignment \\cite{sun2016deep,morerio2018,chen2019joint,chen2019selective} and Wasserstein distance \\cite{lee2019sliced,chendeep}. MMD was first introduced for the two-sample test problem \\cite{gretton2012kernel}, and is currently the most widely used metric to measure the distance between two feature distributions. Specifically, Long \\etal proposed DAN \\cite{long2015learning} and JAN \\cite{long2017deep}, which perform domain matching via multi-kernel MMD or a joint MMD criterion in multiple domain-specific layers across domains. Sun \\etal proposed the correlation alignment (CORAL) \\cite{sun2016return,sun2016deep} to align the second order statistics of the source and target distributions. Some recent work also extended the CORAL into reproducing kernel Hilbert spaces (RKHS) \\cite{zhang2018aligning} or deployed alignment along geodesics by considering the log-Euclidean distance \\cite{morerio2018}. Interestingly, \\cite{li2017demystifying} theoretically demonstrated that matching the second order statistics is equivalent to minimizing MMD with the second order polynomial kernel. Besides, the approach most relevant to our proposal is the Central Moment Discrepancy (CMD) \\cite{zellinger2017central}, which matches the higher order central moments of probability distributions by means of order-wise moment differences. Both CMD and our HoMM propose to match the higher-order statistics for domain alignment. The CMD matches the higher-order central moments while our HoMM matches the higher-order cumulant tensor. Another fruitful line of work tries to learn the domain-invariant features through adversarial training \\cite{ganin2016domain,tzeng2017adversarial}. These efforts encourage domain confusion by a domain adversarial objective whereby a discriminator (domain classifier) is trained to distinguish between the source and target representations. Also, recent work performing pixel-level adaptation by image-to-image transformation \\cite{murez2018image,hoffman2018cycada} has achieved satisfactory performance and obtained much attention. In this work, we propose a higher-order moment tensor matching approach to minimize the domain discrepancy, which shows great superiority over existing discrepancy-based methods.\n\n\n\\noindent\\textbf{Higher-order Statistics} Statistics higher than first-order have been successfully used in many classical and deep learning methods \\cite{de2007fourth,koniusz2016higher,pauwels2016sorting,li2017second,gou2017mom}. Especially in the field of fine-grained image\/video recognition \\cite{lin2015bilinear,cui2017kernel}, second-order statistics such as Covariance and Gaussian descriptors have demonstrated better performance than descriptors exploiting zeroth- or first-order statistics \\cite{li2017second,lin2015bilinear,wang2017g2denet}. However, using second-order or lower statistical information might not be enough when the feature distribution is non-Gaussian \\cite{gou2017mom}. 
Therefore, the higher-order (greater than two) statistics have been explored in many signal processing problems \\cite{mansour1995fourth,jakubowski2002higher,de2007fourth,gou2017mom}. In the field of Blind Source Separation (BSS) \\cite{de2007fourth,mansour1995fourth}, for example, the fourth-order statistics are widely used to identify different signals from mixtures. Gou \\etal utilize the third-order statistics for person ReID \\cite{gou2017mom}, and Xu \\etal exploit the third-order cumulant for blind image quality assessment \\cite{xu2016blind}. In \\cite{jakubowski2002higher,koniusz2016higher}, the authors exploit higher-order statistics for image recognition and detection. Matching the second order statistics cannot always make two distributions indistinguishable, just as the second order statistics cannot identify different signals from underdetermined mixtures \\cite{de2007fourth}. That is why we explore higher-order moment tensors for domain alignment.\n\n\\noindent\\textbf{Discriminative Clustering} Discriminative clustering is a critical task in the unsupervised and semi-supervised learning paradigms \\cite{grandvalet2005semi,lee2013pseudo,xie2016unsupervised,caron2018deep}. Due to the paucity of labels in the target domain, how to obtain discriminative representations in the target domain is of great importance for UDA tasks. Therefore, a large body of work focuses on learning discriminative clusters in the target domain via entropy minimization \\cite{grandvalet2005semi,morerio2018,shu2018dirt}, pseudo-labels \\cite{lee2013pseudo,saito2017asymmetric,xie2018learning} or distance-based metrics \\cite{chen2019joint,kang2019contrastive}. Specifically, Saito \\etal \\cite{saito2017asymmetric} assign pseudo-labels to the reliable unlabeled samples to learn discriminative representations for the target domain. Shu \\etal \\cite{shu2018dirt} consider the cluster assumption and minimize the conditional entropy to ensure that the decision boundaries do not cross high-density data regions. MCD \\cite{saito2018maximum} also considers aligning the distributions of source and target by utilizing the task-specific decision boundaries. Besides, JDDA \\cite{chen2019joint} and CAN \\cite{kang2019contrastive} propose to model the intra-class domain discrepancy and the inter-class domain discrepancy to learn more discriminative features.\n\n\n\\section{Method}\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{fig1.pdf}\n\\caption{Two-stream CNNs with shared parameters are adopted for unsupervised deep domain adaptation. The first stream operates on the source data and the second stream operates on the target data. The last FC layer (the input of the output layer) is used as the adapted layer.}\n\\label{fig1}\n\\end{figure}\nIn this work, we consider the unsupervised domain adaptation problem. Let $\\mathcal{D}_s=\\{\\bm{x}_s^i,y_s^i\\}_{i=1}^{n_s}$ denote the source domain with $n_s$ labeled samples and $\\mathcal{D}_t=\\{\\bm{x}_t^i\\}_{i=1}^{n_t}$ denote the target domain with $n_t$ unlabeled samples. Given $\\mathcal{D}_s\\cup\\mathcal{D}_t$, we aim to train a cross-domain CNN classifier $f_{\\bm\\theta}(\\bm{x})$ which can minimize the target risk $\\epsilon_t=\\mathbb{E}_{\\bm{x}\\in\\mathcal{D}_t}[f_{\\bm\\theta}(\\bm{x})\\neq \\bm{y}_t]$. Here $f_{\\bm\\theta}(\\bm{x})$ denotes the output of the deep neural network, and $\\bm\\theta$ denotes the model parameters to be learned. 
Following \\cite{long2017deep,chen2019joint}, we adopt the two-stream CNN architecture for unsupervised deep domain adaptation. As shown in Fig. \\ref{fig1}, the two streams share the same parameters (tied weights), operating on the source and target domain samples, respectively. We perform the domain alignment in the last fully-connected (FC) layer \\cite{sun2016deep,chen2019joint}. According to the theory proposed by Ben-David \\etal \\cite{ben2010theory}, a basic domain adaptation model should, at least, involve the source domain loss and the domain discrepancy loss, i.e.,\n\\begin{equation}\\label{eq1}\n\\mathcal{L}(\\bm\\theta|\\mathbf{X}_s,\\mathbf{Y}_s,\\mathbf{X}_t)=\\mathcal{L}_s+\\lambda_{d}\\mathcal{L}_{d}\n\\end{equation}\n\\begin{equation}\\label{eq2}\n\\mathcal{L}_s=\\dfrac{1}{n_s}\\sum\\limits_{i=1}^{n_s}J(f_{\\bm\\theta}(\\bm{x}_i^s), \\bm{y}_i^s)\n\\end{equation}\nwhere $\\mathcal{L}_s$ represents the classification loss in the source domain and $J(\\cdot,\\cdot)$ represents the cross-entropy loss function. $\\mathcal{L}_{d}$ represents the domain discrepancy loss and $\\lambda_{d}$ is the trade-off parameter. As aforementioned, most existing discrepancy-based methods are designed to minimize the distance between the second-order or lower statistics of different domains. In this work, we propose a higher-order moment matching method, which matches the higher-order statistics of different domains.\n\\subsection{Higher-order Moment Matching}\nTo perform fine-grained domain alignment, we propose a higher-order moment matching loss defined as\n\\begin{equation}\\label{eq3}\n\\mathcal{L}_d=\\frac{1}{L^p}\\Vert\\frac{1}{n_s}\\sum\\limits_{i=1}^{n_s}\\phi_{\\bm\\theta}(\\bm{x}_s^i)^{\\otimes p}-\\frac{1}{n_t}\\sum\\limits_{i=1}^{n_t}\\phi_{\\bm\\theta}(\\bm{x}_t^i)^{\\otimes p}\\Vert_F^2\n\\end{equation}\nwhere $n_s=n_t=b$ ($b$ is the batch size) during the training process. $\\phi_{\\bm\\theta}(\\bm{x})$ denotes the activation outputs of the adapted layer. As illustrated in Fig. \\ref{fig1}, $\\bm{h}^i=\\phi_{\\bm\\theta}(\\bm{x}^i)=[\\bm{h}^i(1),\\bm{h}^i(2),\\cdots,\\bm{h}^i(L)]\\in\\mathbb{R}^L$ denotes the activation outputs of the $i$-th sample, and $L$ is the number of hidden neurons in the adapted layer. Here, $\\bm{u}^{\\otimes p}$ denotes the $p$-level tensor power of the vector $\\bm{u}\\in\\mathbb{R}^c$. That is\n\\begin{equation}\\label{eq4}\n\\bm{u}^{\\otimes p}=\\underbrace{\\bm{u}\\otimes\\bm{u}\\cdots\\otimes\\bm{u}}_{\\text{p times}}\\in\\mathbb{R}^{c^p}\n\\end{equation}\nwhere $\\otimes$ denotes the outer product (or tensor product). We have $\\bm{u}^{\\otimes 0}=1$, $\\bm{u}^{\\otimes 1}=\\bm{u}$ and $\\bm{u}^{\\otimes 2}=\\bm{u}\\otimes\\bm{u}$. The 2-level tensor product $\\bm{u}^{\\otimes 2}\\in\\mathbb{R}^{c^2}$ is defined as\n\\begin{equation}\\label{eq5}\n\\bm{u}^{\\otimes 2}=\\bm{u}^T\\bm{u}=\\left[\\begin{array}{cccc}\nu_1u_1 & u_1u_2 & \\cdots & u_1u_c\\\\\nu_2u_1 & u_2u_2 & \\cdots & u_2u_c\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\nu_cu_1 & u_cu_2 & \\cdots & u_cu_c\n\\end{array}\\right]\n\\end{equation}\nWhen $p\\geq 3$, $\\mathbf{T}=\\bm{u}^{\\otimes p}$ is a $p$-level tensor with $\\mathbf{T}[i,j,\\cdots,k]=u_iu_j\\cdots u_k$.\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{fig2.pdf}\n\\caption{An illustration of first-order, second-order and third-order moments in the source domain. HoMM matches the higher-order ($p\\geq3$) moments across different domains.}\n\\label{fig2}\n\\end{figure}\n\n\n\\textbf{Instantiations}\nAccording to Eq. 
\\eqref{eq3}, when $p=1$, the first-order moment matching is equivalent to the linear MMD \\cite{tzeng2014deep}, which is expressed as\n\\begin{equation}\\label{eq6}\n\\mathcal{L}_d=\\frac{1}{L}\\Vert\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\bm{h}_s^i-\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\bm{h}_t^i\\Vert_F^2\n\\end{equation}\nWhen $p=2$, the second-order HoMM is formulated as,\n\\begin{equation}\\label{eq7}\n\\begin{split}\n\\mathcal{L}_d=&\\frac{1}{L^2}\\Vert\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\bm{h}_s^i{^T}\\bm{h}_s^i-\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\bm{h}_t^i{^T}\\bm{h}_t^i\\Vert_F^2\\\\\n=&\\frac{1}{b^2L^2}\\Vert\\bm{G}(\\bm{h}_s)-\\bm{G}(\\bm{h}_t)\\Vert_F^2\n\\end{split}\n\\end{equation}\nwhere $\\bm{G}(\\bm{h})=\\bm{H}^T\\bm{H}\\in\\mathbb{R}^{L\\times L}$ is the Gram matrix, $\\bm{H}=[\\bm{h}^1;\\bm{h}^2;\\cdots;\\bm{h}^b]\\in\\mathbb{R}^{b\\times L}$, and $b$ is the batch size. Therefore, the second-order HoMM is equivalent to the Gram matrix matching, which is also widely used for cross-domain matching in neural style transfer \\cite{gatys2016image,li2017demystifying} and knowledge distillation \\cite{yim2017gift}. Li \\etal \\cite{li2017demystifying} theoretically demonstrate that matching the Gram matrix of feature maps is equivalent to minimizing the MMD with the second order polynomial kernel. Besides, when the activation outputs are normalized by subtracting the mean value, the centralized Gram matrix turns into the Covariance matrix. In this respect, the second-order HoMM is also equivalent to CORAL, which matches the Covariance matrix for domain matching \\cite{sun2016deep}.\n\nAs illustrated in Fig. \\ref{fig2}, in addition to the first-order moment matching (e.g. MMD) and the second-order moment matching (e.g. CORAL and Gram matrix matching), our proposed HoMM can also perform higher-order moment tensor matching when $p\\geq 3$. Since higher-order statistics can characterize non-Gaussian distributions better, applying higher-order moment matching is expected to perform fine-grained domain alignment. However, the space complexity of calculating the higher-order tensor $\\bm{u}^{\\otimes p}$ ($p\\geq3$) reaches $\\mathcal{O}(L^p)$, which makes the higher-order moment matching infeasible in many real-world applications. Adding bottleneck layers to shrink the width of the adapted layer does not solve the problem either. When $L=128$, for example, the dimension of a third-order tensor still reaches $\\mathcal{O}(10^6)$, and the dimension of a fourth-order tensor reaches $\\mathcal{O}(10^8)$, which is computationally prohibitive. To address this problem, we propose two practical techniques to perform the compact tensor matching.\n\n\\textbf{Group Moment Matching.} As the space complexity grows exponentially with the number of neurons $L$, one practical approach is to divide the hidden neurons in the adapted layer into $n_g$ groups, with $\\lfloor L\/n_g\\rfloor$ neurons in each group. Then we can calculate and match the $p$-level tensor within each group separately. That is,\n\\begin{equation}\\label{eq8}\n\\mathcal{L}_d=\\frac{1}{b^2\\lfloor L\/n_g\\rfloor^p}\\sum\\limits_{k=1}^{n_g}\\Vert\\sum\\limits_{i=1}^{b}\\bm{h}_{s,k}^i{^{\\otimes p}}-\\sum\\limits_{i=1}^{b}\\bm{h}_{t,k}^i{^{\\otimes p}}\\Vert_F^2\n\\end{equation}\nwhere $\\bm{h}_{:,k}^i\\in\\mathbb{R}^{\\lfloor L\/n_g\\rfloor}$ denotes the activation outputs of the $k$-th group. In this way, the space complexity can be reduced from $\\mathcal{O}(L^p)$ to $\\mathcal{O}(n_g\\cdot\\lfloor L\/n_g\\rfloor^p)$. 
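To make the computation concrete, the following NumPy sketch (an illustrative re-implementation under our notation rather than the released code; the function names are ours) evaluates the group moment matching loss in Eq. \\eqref{eq8} for a batch of source and target activations:\n\\begin{verbatim}\nimport numpy as np\n\ndef outer_power(u, p):\n    # p-fold outer (tensor) product of the vector u with itself\n    T = u\n    for _ in range(p - 1):\n        T = np.multiply.outer(T, u)\n    return T\n\ndef group_homm(hs, ht, p=3, n_g=4):\n    # hs, ht: source\/target activations of shape (b, L)\n    b, L = hs.shape\n    w = L \/\/ n_g                    # neurons per group\n    loss = 0.0\n    for k in range(n_g):\n        gs = hs[:, k*w:(k+1)*w]     # k-th group, source\n        gt = ht[:, k*w:(k+1)*w]     # k-th group, target\n        Ts = sum(outer_power(gs[i], p) for i in range(b))\n        Tt = sum(outer_power(gt[i], p) for i in range(b))\n        loss += np.sum((Ts - Tt) ** 2)\n    return loss \/ (b**2 * w**p)\n\\end{verbatim}\nIn a real training pipeline the loops would be vectorized and implemented inside the deep learning framework, but the sketch keeps the correspondence with Eq. \\eqref{eq8} explicit.\n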
In practice, $\\lfloor L\/n_g\\rfloor\\geq25$ needs to be satisfied to ensure satisfactory performance.\n\n\\textbf{Random Sampling Matching.} The group moment matching can work well when $p=3$ and $p=4$, but it tends to fail when $p\\geq5$. Therefore, we also propose a random sampling matching strategy which is able to perform arbitrary-order moment matching. Instead of calculating and matching two high-dimensional tensors, we randomly select $N$ values in the $p$-level tensor, and only calculate and align these $N$ values in the source and target domains. In this respect, the $p$-order moment matching with the random sampling strategy can be formulated as\n\\begin{equation}\\label{eq9}\n\\small\n\\mathcal{L}_d=\\frac{1}{b^2N}\\sum\\limits_{k=1}^{N}[\\sum\\limits_{i=1}^{b}\\prod_{j=rnd[k,1]}^{rnd[k,p]}\\bm{h}_s^i(j)-\\sum\\limits_{i=1}^{b}\\prod_{j=rnd[k,1]}^{rnd[k,p]}\\bm{h}_t^i(j)]^2\n\\end{equation}\nwhere $rnd\\in\\mathbb{R}^{N\\times p}$ denotes the randomly generated position index matrix, with $rnd[k,j]\\in\\{1,2,3,\\cdots,L\\}$. Therefore, $\\prod_{j=rnd[k,1]}^{rnd[k,p]}\\bm{h}_s^i(j)$ denotes a randomly sampled value in the $p$-level tensor $\\bm{h}_{s}^i{^{\\otimes p}}$. With the random sampling strategy, we can perform arbitrary-order moment matching, and the space complexity can be reduced from $\\mathcal{O}(L^p)$ to $\\mathcal{O}(N)$. In practice, the model can achieve very competitive results even with $N=1000$.\n\\subsection{Higher-order Moment Matching in RKHS}\nSimilar to the KMMD \\cite{long2017deep}, we generalize the higher-order moment matching into reproducing kernel Hilbert spaces (RKHS). That is,\n\\begin{equation}\\label{eq10}\n\\begin{split}\n\\mathcal{L}_d=\\frac{1}{L^p}\\Vert\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\psi(\\bm{h}_s^i{^{\\otimes p}})-\\frac{1}{b}\\sum\\limits_{i=1}^{b}\\psi(\\bm{h}_t^i{^{\\otimes p}})\\Vert_F^2\n\\end{split}\n\\end{equation}\nwhere $\\psi(\\bm{h}_s^i{^{\\otimes p}})$ denotes the feature representation of the $i$-th source sample in the RKHS. According to the proposed random sampling strategy, $\\bm{h}_s^i{^{\\otimes p}}$ and $\\bm{h}_t^i{^{\\otimes p}}$ can be approximated by two $N$-dimensional vectors $\\bm{h}_{sp}^i\\in\\mathbb{R}^N$ and $\\bm{h}_{tp}^i\\in\\mathbb{R}^N$, where $\\bm{h}_{sp}^i(k)=\\prod_{j=rnd[k,1]}^{rnd[k,p]}\\bm{h}_s^i(j)$, $k=1,\\cdots,N$. In this respect, the domain matching loss can be formulated as\n\\begin{equation}\\label{eq11}\n\\begin{split}\n\\mathcal{L}_d&=\\frac{1}{b^2}\\sum\\limits_{i=1}^{b}\\sum\\limits_{j=1}^{b}k(\\bm{h}_{sp}^i,\\bm{h}_{sp}^j)-\\frac{2}{b^2}\\sum\\limits_{i=1}^{b}\\sum\\limits_{j=1}^{b}k(\\bm{h}_{sp}^i,\\bm{h}_{tp}^j) \\\\\n&+\\frac{1}{b^2}\\sum\\limits_{i=1}^{b}\\sum\\limits_{j=1}^{b}k(\\bm{h}_{tp}^i,\\bm{h}_{tp}^j)\n\\end{split}\n\\end{equation}\nwhere $k(\\bm{x},\\bm{y})=\\exp({-\\gamma\\Vert \\bm{x}-\\bm{y}\\Vert_2})$ is the RBF kernel function. Particularly, when $p=1$, the kernelized HoMM (KHoMM) is equivalent to the KMMD.\n\n\\subsection{Discriminative Clustering}\nWhen the target domain features are well aligned with the source domain features, the unsupervised domain adaptation turns into a semi-supervised classification problem, where discriminative clustering of the unlabeled data is always encouraged \\cite{grandvalet2005semi,xie2016unsupervised}. 
There has been a lot of work trying to learn discriminative clusters in the target domain \\cite{shu2018dirt,morerio2018}, most of which minimizes the conditional entropy to ensure that the decision boundaries do not cross high-density data regions,\n\\begin{equation}\\label{eq12}\n\\mathcal{L}_{ent}=\\frac{1}{n_t}\\sum_{i=1}^{n_t}\\sum_{j=1}^{c}-p_j\\log p_j\n\\end{equation}\nwhere $c$ is the number of classes and $p_j$ is the softmax output of the $j$-th node in the output layer. We find that the entropy regularization works well when the target domain has high test accuracy, but it helps little or even downgrades the accuracy when the test accuracy is unsatisfactory. A likely reason is that the classifier may be misled by entropy regularization enforcing over-confident probabilities on misclassified samples. Instead of clustering in the output layer by minimizing the conditional entropy, we propose to cluster in the shared feature space. First, we pick the highly-confident target samples whose predicted probabilities are greater than a given threshold $\\eta$, and assign pseudo-labels to these reliable samples. Then, we penalize the distance of each pseudo-labeled sample to its class center. The discriminative clustering loss can be given as\n\\begin{equation}\\label{eq13}\n\\mathcal{L}_{dc}=\\frac{1}{n_t}\\sum_{i=1}^{n_t}\\Vert\\bm{h}_t^i-\\bm{c}_{\\hat{y}_t^i}\\Vert_2^2\n\\end{equation}\nwhere $\\hat{y}_t^i$ is the assigned pseudo-label of $\\bm{x}_t^i$ and $\\bm{c}_{\\hat{y}_t^i}\\in \\mathbb{R}^L$ denotes its estimated class center. As we perform updates based on mini-batches, the centers cannot be accurately estimated from such a small number of samples. Therefore, we update the class centers in each iteration via a moving average. That is,\n\\begin{equation}\\label{Eq14}\n\\mathbf{c}_j^{t+1}=\\alpha\\mathbf{c}_j^t+(1-\\alpha)\\Delta\\mathbf{c}_j^t\n\\end{equation}\n\\begin{equation}\\label{Eq15}\n\\Delta\\mathbf{c}_j=\\dfrac{\\sum_{i=1}^b\\delta(\\hat{y}_t^i=j)\\bm{h}_t^i}{1+\\sum_{i=1}^b\\delta(\\hat{y}_t^i=j)}\n\\end{equation}\nwhere $\\alpha$ is the learning rate of the centers, $\\bm{c}_j^t$ is the class center of the $j$-th class in the $t$-th iteration, and $\\delta(\\hat{y}_t^i=j)=1$ if $\\bm{x}_t^i$ is pseudo-labeled as the $j$-th class and $0$ otherwise.\n\n\n\\subsection{Full Objective Function}\nBased on the aforementioned analysis, to enable effective unsupervised domain adaptation, we propose a holistic approach with an integration of (1) source domain loss minimization, (2) domain alignment with the higher-order moment matching and (3) discriminative clustering in the target domain. The full objective function is as follows,\n\\begin{equation}\\label{eq16}\n\\mathcal{L}=\\mathcal{L}_{s}+\\lambda_d\\mathcal{L}_{d}+\\lambda_{dc}\\mathcal{L}_{dc}\n\\end{equation}\nwhere $\\mathcal{L}_s$ is the classification loss in the source domain, $\\mathcal{L}_d$ is the domain discrepancy loss measured by the higher-order moment matching, and $\\mathcal{L}_{dc}$ denotes the discriminative clustering loss. Note that in order to obtain reliable pseudo-labels for discriminative clustering, we set $\\lambda_{dc}=0$ during the initial iterations, and enable the clustering loss $\\mathcal{L}_{dc}$ after the total loss becomes stable.\n\n\\section{Experiments}\n\\subsection{Setup}\n\\noindent\\textbf{Dataset.}\nWe conduct experiments on three public visual adaptation datasets: the digits recognition, Office-31, and Office-Home datasets. 
The digits recognition dataset includes four widely used benchmarks: MNIST, USPS, Street View House Numbers (SVHN), and SYN (synthetic digits dataset). We evaluate our proposal across three typical transfer tasks, including: \\textbf{SVHN}$\\rightarrow$\\textbf{MNIST}, \\textbf{USPS}$\\rightarrow$\\textbf{MNIST} and \\textbf{SYN}$\\rightarrow$\\textbf{MNIST}. The details of this dataset can be seen in \\cite{chen2019joint}. Office-31 is another commonly used dataset for real-world domain adaptation scenario, which contains 31 categories acquired from the office environment in three distinct image domains: \\textbf{A}mazon (product images download from amazon.com), \\textbf{W}ebcam (low-resolution images taken by a webcam) and \\textbf{D}slr (high-resolution images taken by a digital SLR camera). The office-31 dataset contains 4110 images in total, with 2817 images in \\textbf{A} domain, 795 images in \\textbf{W} domain and 498 images in \\textbf{D} domain. We evaluate our method on all the six transfer tasks as \\cite{long2017deep}. The Office-Home dataset \\cite{venkateswara2017deep} is a more challenging dataset for domain adaptation, which consists of images from four different domains: Artistic images (\\textbf{A}), Clip Art images (\\textbf{C}), Product images (\\textbf{P}) and Real-world images (\\textbf{R}). The dataset contains around 15500 images in total from 65 object categories in office and home scenes.\n\n\n\\noindent\\textbf{Baseline Methods.}\nWe compare our proposal with the following methods, which are most related to our work:\nDeep Domain Confusion (\\textbf{DDC}) \\cite{tzeng2014deep}, Deep Adaptation Network (\\textbf{DAN}) \\cite{long2015learning}, Deep Correlation Alignment (\\textbf{CORAL}) \\cite{sun2016deep}, Domain-adversarial Neural Network (\\textbf{DANN}) \\cite{ganin2016domain}, Adversarial Discriminative Domain Adaptation (\\textbf{ADDA}) \\cite{tzeng2017adversarial}, Joint Adaptation Network (\\textbf{JAN}) \\cite{long2017deep}, Central Moment Discrepancy (\\textbf{CMD}) \\cite{zellinger2017central} Cycle-consistent Adversarial Domain Adaptation (\\textbf{CyCADA}) \\cite{hoffman2018cycada}, Joint Discriminative feature Learning and Domain Adaptation (\\textbf{JDDA}) \\cite{chen2019joint}. 
Specifically, DDC, DAN, JAN, CORAL and CMD are representative moment matching based methods, while DANN, ADDA and CyCADA are representative adversarial training based methods.\n\n\n\n\\begin{table}[ht]\n\\centering\n\\caption{ Test accuracy (\\%) on digits recognition dataset for unsupervised domain adaptation based on modified LeNet}\\label{tab:aStrangeTable\n\\label{tab1}\n\\small\n\\begin{tabular}{ccccc}\n\\toprule\nMethod& SN$\\rightarrow$MT&US$\\rightarrow$MT&SYN$\\rightarrow$MT&Avg\\\\\n\\midrule\nSource Only &67.3$\\pm$0.3&$66.4\\pm$0.4&89.7$\\pm$0.2&74.5\\\\\nDDC &71.9$\\pm$0.4&75.8$\\pm$0.3&89.9$\\pm$0.2&79.2\\\\\nDAN &79.5$\\pm$0.3&$89.8\\pm$0.2&75.2$\\pm$0.1&81.5\\\\\nDANN &70.6$\\pm$0.2&76.6$\\pm$0.3&90.2$\\pm$0.2&79.1\\\\\nCMD &86.5$\\pm$0.3&86.3$\\pm$0.4&96.1$\\pm$0.2&89.6\\\\\nADDA &72.3$\\pm$0.2&92.1$\\pm$0.2&96.3$\\pm$0.4&86.9\\\\\nCORAL &89.5$\\pm$0.2&96.5$\\pm$0.3&96.5$\\pm$0.2&94.2\\\\\nCyCADA &92.8$\\pm$0.1&97.4$\\pm$0.3&97.5$\\pm$0.1&95.9\\\\\nJDDA &94.2 $\\pm$0.1&96.7$\\pm$0.1&97.7$\\pm$0.0&96.2\\\\\n\\midrule\n\\textbf{HoMM}(p=3) &96.5$\\pm$0.2&97.8$\\pm$0.0&97.6$\\pm$0.1&97.3\\\\\n\\textbf{HoMM}(p=4) &95.7$\\pm$0.2&97.6$\\pm$0.0&97.6$\\pm$0.0&96.9\\\\\n\\textbf{KHoMM}(p=3) &97.2$\\pm$0.1&97.9$\\pm$0.1&98.2$\\pm$0.1&97.8\\\\\n\\textbf{Full} &\\textbf{98.8}$\\pm$0.1&\\textbf{99.0}$\\pm$0.1&\\textbf{99.0}$\\pm$0.0&\\textbf{98.9}\\\\\n\\midrule\n\\textbf{KHoMM+$\\mathcal{L}_{ent}$} &\\textbf{99.0}$\\pm$0.0&\\textbf{99.1}$\\pm$0.1&\\textbf{99.2}$\\pm$0.0&\\textbf{99.1}\\\\\n\\bottomrule\n\\end{tabular}\n\\footnotesize \\small We denote SVHN, MNIST, USPS as SN, MT and US.\n\\end{table}\n\n\n\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\caption{Test accuracy (\\%) on Office-31 dataset for unsupervised domain adaptation based on ResNet-50}\\label{tab:aStrangeTable\n\\label{tab2}\n\\begin{tabular}{cccccccc}\n\\toprule\nMethod& A$\\rightarrow$W&D$\\rightarrow$W&W$\\rightarrow$D&A$\\rightarrow$D&D$\\rightarrow$A&W$\\rightarrow$A&Avg\\\\\n\\midrule\nSource Only &73.1$\\pm$0.2&93.2$\\pm$0.2&$98.8\\pm$0.1&72.6$\\pm$0.2&55.8$\\pm$0.1&56.4$\\pm$0.3&75.0\\\\\nDDC \\cite{tzeng2014deep}&74.4$\\pm$0.3&94.0$\\pm$0.1&98.2$\\pm$0.1&74.6$\\pm$0.4&56.4$\\pm$0.1&56.9$\\pm$0.1&75.8\\\\\nDAN \\cite{long2015learning}&78.3$\\pm$0.3&$95.2\\pm$0.2&$99.0\\pm$0.1&75.2$\\pm$0.2&58.9$\\pm$0.2&64.2$\\pm$0.3&78.5\\\\\nDANN \\cite{ganin2016domain}&73.6$\\pm$0.3&94.5$\\pm$0.1&99.5$\\pm$0.1&74.4$\\pm$0.5&57.2$\\pm$0.1&60.8$\\pm$0.2&76.7\\\\\nCORAL \\cite{sun2016deep}&79.3$\\pm$0.3&94.3$\\pm$0.2&99.4$\\pm$0.2&74.8$\\pm$0.1&56.4$\\pm$0.2&63.4$\\pm$0.2&78.0\\\\\nJAN \\cite{long2017deep}&85.4$\\pm$0.3&97.4$\\pm$0.2&99.8$\\pm$0.2&84.7$\\pm$0.4&68.6$\\pm$0.3&70.0$\\pm$0.4&84.3\\\\\nCMD \\cite{zellinger2017central}&76.9$\\pm$0.4&94.6$\\pm$0.3&99.2$\\pm$0.2&75.4$\\pm$0.4&56.8$\\pm$0.1&61.9$\\pm$0.2&77.5\\\\\nCyCADA \\cite{hoffman2018cycada}&82.2$\\pm$0.3&94.6$\\pm$0.2&99.7$\\pm$0.1&78.7$\\pm$0.1&60.5$\\pm$0.2&67.8$\\pm$0.2&80.6\\\\\nJDDA\\cite{chen2019joint}&82.6$\\pm$0.4&95.2$\\pm$0.2&99.7$\\pm$0.0&79.8$\\pm$0.1&57.4$\\pm$0.0&66.7$\\pm$0.2&80.2\\\\\n\\midrule\n\\textbf{HoMM}(p=3) &87.6$\\pm$0.2&96.3$\\pm$0.1&99.8$\\pm$0.0&83.9$\\pm$0.2&66.5$\\pm$0.1&68.5$\\pm$0.3&83.7\\\\\n\\textbf{HoMM}(p=4) &89.8$\\pm$0.3&97.1$\\pm$0.1&100.0$\\pm$0.0&86.6$\\pm$0.1&69.6$\\pm$0.3&69.7$\\pm$0.3&85.5\\\\\n\\textbf{KHoMM}(p=4) &90.5$\\pm$0.2&98.3$\\pm$0.1&100.0$\\pm$0.0&87.7$\\pm$0.2&70.4$\\pm$0.2&70.3$\\pm$0.2&86.2\\\\\n\\textbf{Full} 
&\\textbf{91.7}$\\pm$0.3&\\textbf{98.8}$\\pm$0.0&\\textbf{100.0}$\\pm$0.0&\\textbf{89.1}$\\pm$0.3&\\textbf{71.2}$\\pm$0.2&\\textbf{70.6}$\\pm$0.3&\\textbf{86.9}\\\\\n\\midrule\n\\textbf{KHoMM+$\\mathcal{L}_{ent}$} &90.8$\\pm$0.1&\\textbf{99.3}$\\pm$0.1&\\textbf{100.0}$\\pm$0.0&87.9$\\pm$0.2&69.3$\\pm$0.3&69.5$\\pm$0.4& 86.1\\\\\n\\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\\begin{table}[ht]\n\\centering\n\\caption{Test accuracy (\\%) on Office-Home dataset for unsupervised domain adaptation based on ResNet-50}\n\\label{tab3}\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{cccccc}\n\\toprule\nMethod& A$\\rightarrow$P&A$\\rightarrow$R&C$\\rightarrow$R&P$\\rightarrow$R&R$\\rightarrow$P\\\\\n\\midrule\nSource Only &50.0 &58.0 &46.2 &60.4 &59.5 \\\\\nDDC &54.9 &61.3 &50.5 &64.1 &65.9 \\\\\nDAN &57.0 &67.9 &60.4 &67.7 &74.3 \\\\\nDANN &59.3 &70.1 &60.9 &68.5 &76.8 \\\\\nCORAL &58.6 &65.4 &59.8 &68.3 &74.7 \\\\\nJAN &61.2 &68.9 &61.0 &70.3 &76.8 \\\\\n\\midrule\n\\textbf{HoMM}(p=3) &60.7 &68.3 &61.4 &69.2 &76.7 \\\\\n\\textbf{HoMM}(p=4) &63.5 &70.2 &64.6 &72.6 &79.3 \\\\\n\\textbf{KHoMM}(p=4) &63.9 &70.5 &65.3 &73.3 &79.8 \\\\\n\\textbf{Full} &\\textbf{64.7} &\\textbf{71.8} &\\textbf{66.1} &\\textbf{74.5} &\\textbf{81.2} \\\\\n\\midrule\n\\textbf{KHoMM+$\\mathcal{L}_{ent}$} &64.2 &70.1 &65.5 &73.2 &80.1 \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{table}\n\n\n\\noindent\\textbf{Implementation Details.}\nIn our experiments on the digits recognition dataset, we utilize the modified LeNet whereby a bottleneck layer with $90$ hidden neurons is added before the output layer. Since the image size differs across domains, we resize all the images to $32\\times32$ and convert the RGB images to grayscale. For the experiments on Office-31, we use ResNet-50 pretrained on ImageNet as our backbone network, and we add a bottleneck layer with 180 hidden nodes before the output layer for domain matching. It is worth noting that the \\textbf{relu} activation function cannot be applied to the adapted layer, as relu would set most of the values in the $p$-level tensor $\\bm{h}_{s}^i{^{\\otimes p}}$ to zero, which would make our HoMM fail. Therefore, we adopt the \\textbf{tanh} activation function in the adapted layer. Due to the small sample size of the Office-31 and Office-Home datasets, we only update the weights of the fully-connected (fc) layers as well as the final block (scale5\/block3), and fix the other parameters pretrained on ImageNet. Following the standard protocol of \\cite{long2017deep}, we use all the labeled source domain samples and all the unlabeled target domain samples for training. All the comparison methods are based on the same CNN architecture for a fair comparison. For DDC, DAN, CORAL and CMD, we embed the official implementation code into our model and carefully select the trade-off parameters to get the best results. When training with ADDA, our adversarial discriminator consists of 3 fully connected layers: two layers with 500 hidden units followed by the final discriminator output. For the other compared methods, we directly report the results from the original papers.\n\n\\noindent\\textbf{Parameters.} Our model is trained with the Adam optimizer in TensorFlow. The optimal hyper-parameters are determined by multiple experiments using a grid search strategy, and they may be distinct across different transfer tasks. 
Specifically, the trade-off parameters are selected from $\\lambda_{d}\\in\\{1,10,10^2,\\cdots,10^8\\}$ and $\\lambda_{dc}\\in\\{0.01,0.03,0.1,0.3,1.0\\}$. For the digits recognition tasks, the hyper-parameter $\\lambda_d$ is set to $10^4$ for the third-order HoMM and to $10^7$ for the fourth-order HoMM \\footnote{Note that the trade-off for the fourth-order HoMM is much larger than for the third-order HoMM. This is because most deep features of the digits are very small, so the higher-order moments, which multiply several feature values together, become very close to zero. Therefore, on the digits dataset, the higher the order, the larger the trade-off should be.}. For the experiments on Office-31 and Office-Home, $\\lambda_d$ is set to $300$ for the third-order HoMM and to $3000$ for the fourth-order HoMM. Besides, the hyper-parameter $\\gamma$ in the RBF kernel is set to 1e-4 across the experiments, and the learning rate of the centers is set to $\\alpha=0.5$ for the digits dataset and to $\\alpha=0.3$ for the Office-31 and Office-Home datasets. The threshold $\\eta$ of the predicted probability is chosen from $\\{0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95\\}$, and the best results are reported. The parameter sensitivity can be seen in Fig. \\ref{fig4}.\n\n\n\\subsection{Experimental results}\n\\textbf{Digits Dataset} For the experiments on the digits recognition dataset, we set the batch size to 128 for each domain and the learning rate to 1e-4 throughout the experiments. Table \\ref{tab1} shows the adaptation performance on three typical transfer tasks based on the modified LeNet. As can be seen, our proposed HoMM yields notable improvement over the comparison methods on all of the transfer tasks. In particular, our method improves the adaptation performance significantly on the hard transfer task SVHN$\\rightarrow$MNIST. Without bells and whistles, the proposed third-order KHoMM achieves 97.2\\% accuracy, improving over the second-order moment matching (CORAL) by +8\\%. Besides, the results also indicate that the third-order HoMM outperforms the fourth-order HoMM and slightly underperforms the KHoMM.\n\n\\noindent\\textbf{Office-31} Table \\ref{tab2} lists the test accuracies on the Office-31 dataset. We set the batch size to 70 for each domain. The learning rate of the fc layer parameters is set to 3e-4 and the learning rate of the conv layer (scale5\/block3) parameters is set to 3e-5. As we can see, the fourth-order HoMM outperforms the third-order HoMM and achieves the best results among all the moment-matching based methods. Besides, it is worth noting that the fourth-order HoMM outperforms the second-order statistics matching (CORAL) by more than 10\\% on several representative transfer tasks A$\\rightarrow$W, A$\\rightarrow$D and D$\\rightarrow$A, which demonstrates the merits of our proposed higher-order moment matching.\n\n\\noindent\\textbf{Office-Home} Table \\ref{tab3} gives the results on the challenging Office-Home dataset. The parameter settings are the same as in Office-31. We only evaluate our method on 5 out of 12 representative transfer tasks due to the space limitation. As we can see, on all the five transfer tasks, the HoMM outperforms DAN, CORAL and DANN by a large margin and also outperforms JAN by 3\\%-5\\%. 
Note that the experimental results of the compared methods are taken directly from \\cite{wang2019transferable}.\n\n\nThe results in Table \\ref{tab1}, Table \\ref{tab2} and Table \\ref{tab3} reveal several interesting observations: (1) All the domain adaptation methods outperform the source-only model by a large margin, which demonstrates that minimizing the domain discrepancy contributes to learning more transferable representations. (2) Our proposed HoMM significantly outperforms the discrepancy-based methods (DDC, CORAL, CMD) and the adversarial training based methods (DANN, ADDA and CyCADA), which reveals the advantages of matching the higher-order statistics for domain adaptation. (3) The JAN performs slightly better than the third-order HoMM on several transfer tasks, but it is never as good as the fourth-order HoMM, despite aligning the joint distributions of multiple domain-specific layers across domains. The performance of our HoMM would likely improve as well if we adopted such a strategy. (4) The kernelized HoMM (KHoMM) consistently outperforms the plain HoMM, but the improvement seems limited. We believe the reason is that the higher-order statistics are already high-dimensional features, which conceals the advantage of embedding the features into an RKHS. (5) In all transfer tasks, the performance increases consistently by employing the discriminative clustering in the target domain. In contrast, entropy regularization improves the transfer performance when the test accuracy is high, but it helps little or even downgrades the performance when the test accuracy is not that high.\n\n\\begin{figure*}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{source-eps-converted-to.pdf}\n \\caption{Source Only}\n \\label{2D1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{KMMD_1-eps-converted-to.pdf}\n \\caption{KMMD}\n \\label{2D2}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{CORAL_1-eps-converted-to.pdf}\n \\caption{CORAL}\n \\label{2D3}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{HoMM3_1-eps-converted-to.pdf}\n \\caption{HoMM(p=3)}\n \\label{2D4}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{Full_1-eps-converted-to.pdf}\n \\caption{Full Loss}\n \\label{2D5}\n \\end{subfigure}\n\n\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{source_1-eps-converted-to.pdf}\n \\caption{Source Only}\n \\label{2D21}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{KMMD-eps-converted-to.pdf}\n \\caption{KMMD}\n \\label{2D22}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{CORAL-eps-converted-to.pdf}\n \\caption{CORAL}\n \\label{2D23}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{HoMM3-eps-converted-to.pdf}\n \\caption{HoMM(p=3)}\n \\label{2D24}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{Full-eps-converted-to.pdf}\n \\caption{Full Loss}\n \\label{2D25}\n \\end{subfigure}\n\\caption{ 2D visualization of the deep features generated by different models on SVHN$\\rightarrow$MNIST. 
The first row illustrates the t-SNE embedding of deep features\nwhich are marked by category information, each color represents a category. The second row illustrates the t-SNE embedding of deep features which are marked by domain information, red and blue points represent the samples drawn from the source and target domains.}\n\\label{fig3}\n\\end{figure*}\n\n\n\\begin{figure*}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{sen_d-eps-converted-to.pdf}\n \\caption{$\\lambda_d$}\n \\label{2D11}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{sen_dc-eps-converted-to.pdf}\n \\caption{$\\lambda_{dc}$}\n \\label{2D22}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{sen_N-eps-converted-to.pdf}\n \\caption{$N$}\n \\label{2D33}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{sen_delta-eps-converted-to.pdf}\n \\caption{$\\eta$}\n \\label{2D55}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.19\\textwidth}\n \\includegraphics[width=\\textwidth]{Convergence_1-eps-converted-to.pdf}\n \\caption{Convergence}\n \\label{2D55}\n \\end{subfigure}\n\\caption{ Analysis of parameter sensitivity (a)-(d) and convergence analysis (e). The dash line in (b) and (d) indicate the performance of HoMM without the clustering loss $\\mathcal{L}_{dc}$}\n\\label{fig4}\n\\end{figure*}\n\n\n\\subsection{Analysis}\n\\textbf{Feature Visualization}\nWe utilize t-SNE to visualize the deep features on the tasks SVHN$\\rightarrow$MNIST by ResNet-50, KMMD, CORAL, HoMM(p=3) and the Full Loss model. As shown in Fig. \\ref{fig3}, the feature distributions of the source only model in (a) suggests that the domain shift between SVNH and MNIST is significant, which demonstrates the necessity of performing domain adaptation. Besides, the global distributions of the source and target samples are well aligned with the KMMD (b) and CORAL (c), but there are still many samples being misclassified. With our proposed HoMM, the source and target samples are aligned better and categories are discriminated better as well.\n\n\\noindent\\textbf{First\/Second-order versus Higher-order}\nSince our proposed HoMM can perform arbitrary-order moment matching, we compare the performance of different order moment matching on three typical transfer tasks. As shown in table \\ref{tab4}, the order is chosen from $p\\in\\{1,2,3,4,5,6,10\\}$. The results show that the third-order and fourth-order moment matching significantly outperform the other order moment matching. When $p\\leq3$, the higher the order, the higher the accuracy. When $p\\geq4$, the accuracy will decrease as the order increases. Besides, the fifth-order moment matching also achieves very competitive results. 
Regarding why the fifth-order and higher moment matching performs worse than the fourth-order, we believe one reason is that moments of order five and above cannot be accurately estimated due to the small sample size problem \\cite{raudys1991small}.\n\n\\begin{table}[ht]\n\\centering\n\\caption{Test accuracy (\\%) comparison of different-order moment matching on three transfer tasks}\n\\label{tab4}\n\\small\n\\begin{tabular}{cccccccc}\n\\toprule\norder &1 &2 &3 &4 &5 &6 &10\\\\\n\\midrule\nSN$\\rightarrow$MT &71.9 &89.5 &\\textbf{96.5} &95.7 &94.8 &91.5 &58.6 \\\\\nA$\\rightarrow$W &74.4 &79.3 &87.6 &\\textbf{89.8} &86.6 &85.3 &80.2 \\\\\nA$\\rightarrow$P &54.9 &58.6 &60.7 &\\textbf{63.5} &60.9 &58.2 &57.3\\\\\n\\bottomrule\n\\end{tabular}\n\\footnotesize \\small We denote SVHN and MNIST as SN and MT respectively.\n\\end{table}\n\n\n\n\n\\noindent\\textbf{Parameter Sensitivity and Convergence}\nWe conduct an empirical parameter sensitivity analysis on SVHN$\\rightarrow$MNIST and A$\\rightarrow$W in Fig. \\ref{fig4}(a)-(d). The evaluated parameters include the two trade-off parameters $\\lambda_d$ and $\\lambda_{dc}$, the number of selected values $N$ in Random Sampling Matching, and the threshold $\\eta$ of the predicted probability. As we can see, our model is quite sensitive to the change of $\\lambda_{dc}$, and the bell-shaped curves illustrate the regularization effect of $\\lambda_d$ and $\\lambda_{dc}$. The convergence performance is provided in Fig. \\ref{fig4}(e), which shows that our proposal converges the fastest among the compared methods. It is worth noting that the test error of the Full Loss model exhibits an abrupt change at iteration $2.0\\times10^4$, where we enable the clustering loss $\\mathcal{L}_{dc}$, which also demonstrates the effectiveness of the proposed discriminative clustering loss.\n\n\n\n\\section{Conclusion}\nMinimizing the statistical distance between source and target distributions is an important line of work for domain adaptation. Unlike previous methods that utilize the second-order or lower statistics, this paper exploits the higher-order statistics for domain alignment. Specifically, a higher-order moment matching method has been presented, which integrates the MMD and CORAL into a unified framework and generalizes the existing first-order and second-order moment matching to arbitrary-order moment matching. We experimentally demonstrate that the third-order and fourth-order moment matching significantly outperform the existing moment matching methods. Besides, we also extend the HoMM into RKHS and learn discriminative clusters in the target domain, which further improves the adaptation performance. The proposed HoMM can be easily integrated into other domain adaptation models, and it is also expected to benefit knowledge distillation and image style transfer.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Intro}\nIndustrial drying accounts for approximately 12\\% of the total end-use energy used in manufacturing, corresponding to 1.2 quads annually \\cite{DOEBarrier}. The US Department of Energy estimates a reduction of around 40\\% (0.5 quads\/year) of this amount by using more efficient process controls and new drying technologies, which would lower operating costs by \\$8 B\/year \\cite{DOEReview}. In addition, the drying process significantly affects the \\textit{quality} of food products. 
For instance, prolonged exposure to excessive heat negatively impacts products' physical and nutritional properties \\cite{NutritionalP}.\n\nIn recent times, more efficient drying technologies have been proposed in the literature. \nDielectrophoresis (DEP) technology \\cite{DEP}\\cite{DEP2}, ultrasound drying (US) \\cite{US}\\cite{US2}, slot jet reattachment nozzle (SJR) \\cite{SJR}\\cite{SJR2}, and infrared (IR) drying \\cite{IR} are examples of these new technologies which have helped improve product quality and energy efficiency. Industrial drying units typically employ one of the possible technologies out of all the available options to achieve the end goal of the drying process. However, in most cases, these technologies perform with different efficiencies in different settings. Hence, depending on the operating conditions, some of them are more favorable to use than others. For instance, contact-based ultrasound technology is more effective in the initial phase of the process, where the moisture content of the food sample is relatively high. In contrast, pure hot air drying consumes less energy and is more effective when the moisture content is low. Thus, if these two processes are conjoined, we can take advantage of both technologies to compensate for the other's inefficiencies. Therefore, knowing (a) the sequence in which these different drying techniques should be put together, and (b) the operating parameters of each of them, helps us make use of their capabilities. To put it in another way, dividing the drying process into sub-processes that differ in drying method and operating conditions paves the way to alleviate their individual limitation.\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.5\\textwidth]{Smart_Schematics}\n \\caption{Schematics of the continuous \\textit{smart} dryer prototype with seven buckets which accommodates multiple drying technologies to achieve better performance. In the example shown above, two DEP, two ultrasound, one IR, and two SJR modules are used in a specific order.}\n \\label{fig:SmartD}\n\\end{figure}\n \nTo give an example, let us consider the \\textit{continuous} drying testbed shown in Fig.\\ref{fig:SmartD} which contains different drying modules such as ultrasound, DEP, SJR nozzle, and IR technologies. In this case, every drying module is controlled by a set of parameters influencing the amount of moisture removal. For example, ultrasound power and its duty cycle are the control parameters of the ultrasound technology, whereas electric field intensity is the control variable of the DEP module. Moreover, the control parameters of each dryer affect the amount of energy consumed during the process resulting in a tradeoff between energy consumption and moisture removal. Therefore, we can pose a combinatorial optimization problem to determine the optimal order by which the drying modules should be placed in the testbed and the optimal control parameters associated with them in order to minimize the total energy consumed by the testbed while reaching desired moisture removal by the end of the process. \n\nThis approach can also be extended to \\textit{batch-process} drying in a similar fashion, but with the slight difference that in this case, each technology can be used more than once. Fig. 
\\ref{fig:Batch_p} is an example of a testbed used for the batch drying process, which consists of an ultrasonic module \\cite{US}, a drying chamber with a rectangular cross-section, a blower, and a heater. The food sample is located on a vibrating sheet attached to the ultrasonic transducer. In addition, it is exposed to the hot air coming from the heater, which allows combined hot-air and ultrasound drying. In other words, the setup is designed so as to enable alternating between (a) pure hot-air (HA), and (b) combined hot-air and ultrasound (HA\/US) drying processes. In this case, the problem of interest is to reduce energy consumption (if possible) by dividing the process into consecutive pure HA and HA\/US sub-processes, where each sub-process possibly has different operating conditions.\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.5\\textwidth]{Batch_p}\n \\caption{Schematics of the convective\/ultrasound testbed for batch-process drying which can be used for both pure hot air (HA) and combined hot air and ultrasound (HA\/US) processes. The HA mechanism consists of a blower and a heater, whereas the US mechanism is a vibrating sheet attached to the ultrasound transducer. One can switch from the HA\/US process to the pure HA process by turning off the US transducer.}\n \\label{fig:Batch_p}\n\\end{figure}\n\nPrior work in the drying literature mainly focuses on either improving the efficiency of existing drying methods or developing new technologies \\cite{DEP,US,SJR}. The work in \\cite{RSM3} and \\cite{RSM2} further utilizes optimization routines such as the response surface method (RSM), a statistical procedure, to optimize the process control variables using experimental data \\cite{RSM}. To the best of our knowledge, there is scant literature that addresses optimization problems (sequencing and parameter values) related to unifying these different existing technologies. The main contribution of our work lies in the use of multiple existing technologies in a \\textit{modular} fashion towards achieving cost efficiency with desired performance levels.\nIn addition, this approach enables us to have optimal operating conditions that vary over time, which can further improve performance. Simulation results in Section \\ref{sec:SR} show up to a 12\\% reduction in energy consumption compared to the most efficient single-stage HA\/US process. The results also indicate an improvement of up to 63\\% in energy efficiency compared to optimally tuned conventional hot air drying.\n\nSuch optimization problems appear in many industrial processes where a sequence of distinct devices with similar functions is used to form a unified process. The wood pulp industry with drying drums varying in radius and temperature, route optimization in multi-channel wireless networks with heterogeneous routers, and sensor network placement are examples of application areas where such optimization problems arise.\n\nThis paper presents a Maximum Entropy Principle (MEP)-based framework to model and subsequently optimize the different sub-processes in an industrial drying unit. Such optimization problems come with a lot of inherent complexities. For example, in the problem defined, the number of valid sequences of sub-processes is combinatorially large. 
In our proposed algorithm, this is addressed by assigning a probability distribution to the space of all possible configurations. In addition, the problem of determining the optimal operating conditions of sub-processes alone is broadly analogous to the resource allocation problem that is shown to be NP-hard \\cite{NP-hard} with a non-convex cost surface riddled with multiple poor local minima. Algorithms such as k-means, in this case, generally get trapped in the local minima and are very sensitive to initialization. A key concept of our algorithm is to develop a homotopy from an auxiliary function to the original non-convex cost function. This auxiliary function is a weighted linear combination of the original non-convex cost function and an appropriately chosen convex function. We choose this convex function to be the negative Shannon entropy of the probability distribution defined above. This concept is very similar to the Deterministic Annealing algorithm \\cite{rose1998deterministic} used for the clustering problem. At the beginning of the procedure, the weights are chosen such that the auxiliary objective function is dominated by the negative Shannon entropy term. Thus, the function is convex, and finding its global minimum is straightforward. After every iteration, the weight of the original non-convex cost gradually increases, and the obtained local minima are used to initialize the subsequent iteration. At the end of the procedure, the auxiliary function converges to the original non-convex cost function. This procedure is independent of the initialization and its main heuristic is that the global minimum is tracked by the algorithm as the convex cost function gradually changes to the desired non-convex cost function.\n\n\\begin{comment}\n \\section{Maximum Entropy Principle}\\label{sec:MEP}\n\nSince the framework we are presenting is built upon the Maximum Entropy Principle (MEP), we shall briefly review it. MEP states that given prior information about the random variable $X$, the most unbiased probability distribution for $X$ is the one maximizing its Shannon entropy. Had we chosen any other distribution, we would have imposed extra information on $X$. Let us assume the information we have about $X$ is in the form of constraints for expected values of the functions $f_k: X \\rightarrow \\mathbb{R}$. Then, according to MEP, the most unbiased probability distribution $p_X$ solves the optimization problem\n\n\\begin{equation} \\label{eq:MEP}\n\\begin{aligned}\n\\max_{p_X} \\hspace{0.25cm} &H(X) = -\\sum_{i=1}^n p_X(x_i) \\ln p_X(x_i) \\\\\n\\text{subject to} \\hspace{0.25cm} &\\sum_{i=1}^n p_X(x_i)f_k(x_i) = F_k \\hspace{0.25cm} \\forall 1\\leq k \\leq m,\n\\end{aligned}\n\\end{equation}\nin which $F_k$, $1\\leq k \\leq m,$ are known expected values of the functions $f_k$. Solution of (\\ref{eq:MEP}) is given by the following Gibbs' distribution\n\n\\begin{equation} \\label{eq:Gibbs}\n\\begin{aligned}\np_X(x_i) = \\frac{e^{-\\sum_k \\lambda_k f_k(x_i)}}{\\sum_{j=1}^n e^{-\\sum_k \\lambda_k f_k(x_j)}}\n\\end{aligned}\n\\end{equation}\nwhere $\\lambda_k$, $1 \\leq k \\leq m$, are the corresponding Lagrange multipliers of the equality constraints in (\\ref{eq:MEP}).\n\\end{comment}\n\n\n\\section{Problem Formulation}\\label{sec:PF}\n\nWe formulate the problem stated above as a \\textit{parameterized path-based optimization problem} \\cite{Kale}. 
Such problems are described by a tuple\n\\begin{align}\n\\mathcal{M}=\\langle M,\\gamma_1,\\hdots,\\gamma_M,\\eta_1,\\hdots,\\eta_M,D\\rangle,\n\\end{align}\nwhere $M$ is the number of stages allowed, and $\\gamma_k$ denotes the sub-process chosen to be used in the $k-$th stage. In particular,\n\\begin{align}\n\\gamma_k \\in \\Gamma_k := \\{f_{k1},\\hdots,f_{kL_k}\\} \\quad \\forall \\quad 1 \\leq k \\leq M,\n\\end{align}\nwhere $\\Gamma_k$ is the set of all sub-processes permissible in the $k-$th stage. Moreover,\n\\begin{align}\n&\\eta_k \\in H(\\gamma_k) \\subseteq \\mathbb{R}^{d_{\\gamma_k}} \\quad \\forall \\quad 1 \\leq k \\leq M,\n\\end{align}\nwhere $\\eta_k$ and $H(\\gamma_k)$ denote the control parameters associated with the $k-$th sub-process and its feasible set, respectively.\n$D(\\omega,\\eta_1,\\hdots,\\eta_M)$ denotes the cost incurred along a {\\em path} $\\omega$, where $\\omega\\in\\Omega := \\{(f_{1i_1},f_{2i_2},\\hdots,f_{Mi_M}):f_{ki_k}\\in\\Gamma_k\\}$ represents a sequence of sub-processes starting from the first stage to the terminal stage $M$. The objective of the underlying parameterized path-based optimization problem is to determine (a) the optimal path $\\omega^*\\in\\Omega$, and (b) the parameters $\\eta_k^*$ for all $1\\leq k \\leq M$ that solves the following optimization problem \n\\begin{equation}\n\\begin{aligned}\n\\min_{\\{\\eta_k\\},\\nu(\\omega)} \\label{eq: path-based}\n&\\sum_{\\omega\\in\\Omega}\\nu(\\omega)D(\\omega,\\eta_1,\\hdots,\\eta_M),\\\\\n\\text{subject to }& \\sum_{\\omega\\in\\Omega} \\nu(\\omega) = 1,~~ \\nu(\\omega)\\in\\{0,1\\}\\\\\n& \\eta_k \\in H(\\gamma_k) \\quad \\forall \\quad 1 \\leq k \\leq M\n\\end{aligned}\n\\end{equation}\nwhere $\\nu(\\omega)$ determines whether or not the path $\\omega$ has been taken. In other words,\n\\begin{align}\n\\begin{split} \\label{eq:nu}\n\\nu(\\omega) = \\begin{cases}\n1 \\quad \\text{if $\\omega$ is chosen}\\\\\n0 \\quad \\text{otherwise.}\n\\end{cases}\n\\centering\n\\end{split}\n\\end{align}\\\\\nFig. \\ref{fig:Seq_Dec} further illustrates all the notations defined, for the exemplary process shown in Fig. \\ref{fig:SmartD}.\n\nOne approach to address the optimization problem stated in (\\ref{eq: path-based}) is to solve each objective separately, namely, first identifying the optimal process configuration $\\omega^*$ and then trying to determine the optimal control variables $\\{\\eta^*_k\\}_{1 \\leq k \\leq M}$ for the selected configuration. However, in this approach, the coupledness of the two objectives is not taken into account which may result in a sub-optimal solution. On the other hand, our MEP-based approach aims for solving the two simultaneously.\n\nLet us reconsider the batch-process drying example described earlier, where the testbed allows up to $M$ different sub-processes, each could be either HA or HA\/US. To pose the problem of interest as a parameterized path-based optimization problem, we define\n\\begin{align}\n\\Gamma_k := \\{0,1\\} \\quad \\forall \\quad 1 \\leq k \\leq M,\n\\end{align}\nin which 0 and 1 indicate HA and HA\/US sub-processes, respectively. Thus, the process configuration $\\omega \\in \\Omega$ would become\n\\begin{equation} \\label{eq:gamma}\n\\begin{aligned}\n\\omega = (\\gamma_1, \\gamma_2, ... , \\gamma_M), \\quad \\gamma_k \\in \\{0,1\\} \\quad \\forall \\quad 1 \\leq k \\leq M,\n\\end{aligned}\n\\end{equation}\n\nThe control parameters of both sub-processes, in this case, are residence time $t$ and air temperature $T$. 
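Before turning to the feasible sets of these control parameters, note that the configuration space itself is straightforward to enumerate. The following minimal sketch (ours, not part of the formulation above) lists all paths $\\omega$ of (\\ref{eq:gamma}) for an assumed number of stages $M$:
\\begin{verbatim}
from itertools import product

M = 5  # illustrative number of stages; the formulation leaves M as a design choice

# gamma_k = 0 denotes a pure hot-air (HA) stage,
# gamma_k = 1 a combined hot-air/ultrasound (HA/US) stage.
Omega = list(product([0, 1], repeat=M))

print(len(Omega))  # |Omega| = 2**M = 32 candidate process configurations
\\end{verbatim}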
The heater of the setup in Fig. \\ref{fig:Batch_p} is designed to keep the air temperature between $30^{\\circ}C$ and $70^{\\circ}C$. Also, considering the settling time of the air temperature, it is required for all sub-processes to take at least $t_0 = 2$ minutes. Hence,\n\\begin{align}\n&\\eta_k = (t_k,T_k) \\quad &&\\forall \\quad 1 \\leq k \\leq M,\\\\\n& 2 \\leq t_k \\quad &&\\forall \\quad 1 \\leq k \\leq M,\\\\\n& 30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad &&\\forall \\quad 1 \\leq k \\leq M,\n\\end{align}\nA key assumption here is that all the samples within a batch are similar in properties \n such as porosity and initial moisture content.\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.5\\textwidth]{Seq_Dec}\n \\caption{Diagram corresponding to the process shown in Fig. \\ref{fig:SmartD} in which $\\Gamma = \\{\\text{DEP (1), SJR (2), US (3), IR (4)}\\}$. In the sequence shown by the arrows, $\\gamma_1 =1, \\gamma_2 = 3, ... , \\gamma_7 = 2$ which defines the process configuration $\\omega = (\\gamma_1,\\gamma_2,...,\\gamma_7)$. Moreover, $\\eta_i (1 \\leq i \\leq7)$ determines the control variables of the technology used in $i-$th stage.}\n \\label{fig:Seq_Dec}\n\\end{figure}\n\nIn order to define the cost of a process, we need to first identify the desired properties of dried food products such as wet basis moisture content (ratio of the weight of water to the total weight of the material) and color.\nFor simplicity, we will consider the wet basis moisture content as the only property that needs to lie within a prespecified range ($\\leq x_d$) at the end of the process. Throughout this paper, we denote the wet basis moisture content of the food sample at the end of the $k-$th stage under process configuration $\\omega$ by $x^{(\\omega)}_k$. To include the cost associated with the final moisture content, it is required to model the corresponding dynamics as\n\\begin{equation} \\label{eq:Dynamics}\n\\begin{aligned}\nx_{k+1}^{(\\omega)} = f_{\\gamma_k}(x_k^{(\\omega)},\\eta_k) , \\quad \\forall \\quad \\quad 0 \\leq k \\leq M-1,\n\\end{aligned}\n\\end{equation}\nwhere $x_0^{(\\omega)}$ denotes the initial wet basis moisture content of the food sample. The semi-empirical drying curves (moisture content versus time) of distillers dried grains (DDG) were derived and evaluated in \\cite{US} for $T=25^{\\circ}C$, $T=50^{\\circ}C$, and $T=70^{\\circ}C$. Interpolation between drying curves obtained for various temperatures is commonly used in drying literature to predict the kinetics of drying. For the sake of our simulations, we use interpolation and rewrite the kinetics of moisture removal in the following way\n\\begin{equation*} \\label{eq:Stat_M}\n\\begin{aligned}\nx_{k+1}^{(\\omega)} = \\frac{e^{-K_{\\gamma_k}(T_k)t_k^*(T_k,x_k^{(\\omega)},t_k)} (3.16 - M_{\\gamma_k}(T_k)) + M_{\\gamma_k}(T_k)}{1 + e^{-K_{\\gamma_k}(T_k)t_k^*(T_k,x_k^{(\\omega)},t_k)} (3.16 - M_{\\gamma_k}(T_k)) + M_{\\gamma_k}(T_k)}\n\\end{aligned}\n\\end{equation*}\nin which\\\\\n\\begin{equation} \\label{eq:K}\n\\begin{aligned}\nK_0(T_k) = &(0.074493T_k^2 - 45.5058T_k + 6839.9)\/1000\\\\\nK_1(T_k) = &(-0.05811T_k^2 + 39.962T_k - 6680.1)\/1000\\\\\n&(T_k \\hspace{0.1cm} \\text{in Kelvin})\n\\end{aligned}\n\\end{equation}\ndenote the Lewis model constants \\cite{Lewis} for HA and HA\/US sub-processes, respectively. 
Also,\\\\\n\\begin{equation} \\label{eq:Meq}\n\\begin{aligned}\nM_0(T_k) = &(0.2479T_k^2 - 172.09T_k + 30133)\/10000\\\\\nM_1(T_k) = &(0.1468T_k^2 - 107.27T_k + 19720)\/10000\\\\\n&(T_k \\hspace{0.1cm} \\text{in Kelvin})\n\\end{aligned}\n\\end{equation}\nrepresent the equilibrium dry basis moisture content (ratio of the weight of water to the weight of the solid material) of the DDG products. Moreover, $t_k^*(T_k,x_k^{(\\omega)},t_k)$ is defined to be\n\\begin{equation} \\label{eq:tstar}\n\\begin{aligned}\nt_k^*(T_k,x_k^{(\\omega)},t_k) = \\frac{1}{K_{\\gamma_k}(T_k)}\\log\\left(\\frac{3.16-M_{\\gamma_k}(T_k)}{\\frac{x_k^{(\\omega)}}{1 - x_k^{(\\omega)}} - M_{\\gamma_k}(T_k)}\\right) + t_k.\n\\end{aligned}\n\\end{equation}\nTherefore, we define the process cost $D(\\omega,\\eta_1,\\hdots,\\eta_M)$ as\n\\begin{equation} \\label{eq:Cost}\n\\begin{aligned}\nD = \\sum_{k=1}^M g_{\\gamma_k}(\\eta_k) + G(x_M^{(\\omega)},x_d) \\hspace{0.1cm},\n\\end{aligned}\n\\end{equation}\nin which $g_{\\gamma_k}:\\mathbb{R}^{d_{\\gamma_k}} \\rightarrow \\mathbb{R}$ is the cost (e.g. energy consumption) of the $k-$th sub-process, and $G: \\mathbb{R}^2 \\rightarrow \\mathbb{R}$ is a function penalizing the violation of the constraint. Fig. \\ref{fig:Penalty} shows an example of a penalty function used in our simulations. \nIn the batch-process drying example, $g_0(\\cdot)$ is the energy consumed by HA drying, which can be approximated as follows:\n\\begin{equation} \\label{eq:Cost_HA_prop}\n\\begin{aligned}\ng_0(\\eta_k) = g_0(t_k,T_k) \\propto & \\hspace{0.1cm} \\dot{m}_{air} c_p (T_k - T_0) t_k \\\\\n= & \\hspace{0.1cm} \\rho_{air} A V_{air} c_p (T_k - T_0) t_k \\hspace{0.1cm},\n\\end{aligned}\n\\end{equation}\nwhere $\\dot{m}_{air}$ is the mass flow rate of the inlet air, $T_0$ is the ambient air temperature, $A$ is the cross-section area of the chamber, $V_{air}$ is the air velocity, and $c_p$ and $\\rho_{air}$ are the average specific heat capacity and density of air in the temperature operating range of the testbed. Therefore, we can use a weighting coefficient $\\alpha$ to adjust the cost of the HA sub-process:\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.35\\textwidth]{MC_penalty.png}\n \\caption{The penalty function $G(x_M^{(\\omega)},x_d)$ used in our simulations in which $x_d = 0.075$. The function outputs large values for $x_M^{(\\omega)} > x_d$ and zero for $x_M^{(\\omega)} \\leq x_d$.}\n \\label{fig:Penalty}\n\\end{figure}\n\\begin{equation} \\label{eq:Cost_HA}\n\\begin{aligned}\ng_0(t_k,T_k) = \\hspace{0.1cm} \\alpha \\rho_{air} A V_{air} c_p (T_k - T_0) t_k \\hspace{0.1cm}.\n\\end{aligned}\n\\end{equation}\nIn the setup we used (shown in Fig. \\ref{fig:Batch_p}), choosing $\\alpha = 0.5$ matches the experimental data well. On the other hand, $g_1(\\cdot)$ is the energy consumed by an HA\/US sub-process and can be computed using\n\\begin{equation} \\label{eq:Cost_HAUS}\n\\begin{aligned}\ng_1(\\eta_k) = g_1(t_k,T_k) = g_0(t_k,T_k) + P_{US}t_k \\hspace{0.1cm},\n\\end{aligned}\n\\end{equation}\nin which $P_{US}$ is the power consumption of the ultrasound transducer.
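Since the kinetics (\\ref{eq:K})--(\\ref{eq:tstar}) and the stage costs above fully specify the model, they can be coded directly. The sketch below is ours; the device constants ($\\rho_{air}$, $A$, $V_{air}$, $c_p$, $T_0$, $P_{US}$) are placeholder values for illustration, not measurements from the testbed:
\\begin{verbatim}
from math import exp, log

ALPHA   = 0.5     # weighting coefficient of the HA cost (value used in this paper)
RHO_AIR = 1.1     # air density [kg/m^3]             -- placeholder
AREA    = 0.01    # chamber cross-section [m^2]      -- placeholder
V_AIR   = 1.0     # air velocity [m/s]               -- placeholder
C_P     = 1005.0  # specific heat of air [J/(kg K)]  -- placeholder
T0      = 293.15  # ambient temperature [K]          -- placeholder
P_US    = 100.0   # ultrasound transducer power [W]  -- placeholder

def K(gamma, T):
    """Lewis rate constants K_0 (HA) and K_1 (HA/US); T in Kelvin."""
    if gamma == 0:
        return (0.074493*T**2 - 45.5058*T + 6839.9) / 1000.0
    return (-0.05811*T**2 + 39.962*T - 6680.1) / 1000.0

def M_eq(gamma, T):
    """Equilibrium dry-basis moisture contents M_0 and M_1; T in Kelvin."""
    if gamma == 0:
        return (0.2479*T**2 - 172.09*T + 30133.0) / 10000.0
    return (0.1468*T**2 - 107.27*T + 19720.0) / 10000.0

def moisture_step(x, gamma, t, T):
    """One stage of the dynamics: wet-basis x_k -> x_{k+1}."""
    k, m_eq = K(gamma, T), M_eq(gamma, T)
    m = x / (1.0 - x)                              # wet basis -> dry basis
    t_star = log((3.16 - m_eq) / (m - m_eq)) / k + t
    m_next = exp(-k * t_star) * (3.16 - m_eq) + m_eq
    return m_next / (1.0 + m_next)                 # dry basis -> wet basis

def stage_cost(gamma, t, T):
    """Stage energy g_0 (HA) or g_1 = g_0 + P_US * t (HA/US)."""
    g = ALPHA * RHO_AIR * AREA * V_AIR * C_P * (T - T0) * t
    return g + P_US * t if gamma == 1 else g
\\end{verbatim}
Iterating the moisture update over the stages of a configuration $\\omega$ and summing the stage energies yields the two ingredients of the total process cost written next.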
Therefore, the total cost of the process can be written in the following way:\n\\begin{equation} \\label{eq:TotalCost}\n\\begin{aligned}\nD = \\sum_{k=1}^M (\\alpha \\rho_{air} A V_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d)\n\\end{aligned}\n\\end{equation}\nAs a result, we write the corresponding combinatorial optimization problem below:\n\\begin{align}\n\\begin{split} \\label{eq:constraints1}\n\\min_{\\{\\gamma_k\\},\\{t_k\\},\\{T_k\\}}&~ \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d)\\\\\n\\text{subject to }~ &2 \\leq t_k\\quad \\forall \\quad 1 \\leq k \\leq M \\\\\n&30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad \\forall\\quad 1 \\leq k \\leq M\n\\end{split}\n\\end{align}\nTo adapt the above problem to the form of the parameterized path-based optimization problem described in (\\ref{eq: path-based}), we rewrite it as below:\n\\begin{dmath} \\label{eq:Optimization2}\n\\min_{\\{t_k\\},\\{T_k\\},\\nu(\\omega)} \\sum_{\\omega \\in \\Omega} \\nu(\\omega) \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d)\\right)\n\\end{dmath}\nsubject to\n\\begin{align*}\n\\begin{split}\n~ &2 \\leq t_k\\quad \\forall \\quad 1 \\leq k \\leq M \\\\\n&30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad \\forall\\quad 1 \\leq k \\leq M\\\\\n&\\nu(\\omega) \\in \\{0,1\\} \\quad \\forall\\quad \\omega \\in \\Omega\\\\\n&\\sum_{\\omega \\in \\Omega} \\nu(\\omega) = 1 \n\\end{split}\n\\end{align*}\n\n\\section{Problem Solution}\\label{sec:PS}\n\nCombinatorial optimization techniques can be used to solve the optimization problem stated in (\\ref{eq:Optimization2}). Intuitively, we can view it as a clustering problem in which the goal is to assign a particular sequence of sub-processes to every food sample with known initial moisture content. In this case, the location of the cluster centers can be thought of as the control parameters associated with that sequence. This work, similar to \\cite{Kale} and \\cite{9096570}, utilizes the idea of the Maximum Entropy Principle (MEP) \\cite{jaynes1957information}\\cite{jaynes2003probability}. To be able to invoke MEP, we relax the constraint $\\nu(\\omega) \\in \\{0,1\\}$ in (\\ref{eq:Optimization2}) and let it take any value in $[0,1]$. We denote this new weighting parameter by\n\\begin{align}\np(\\omega) \\in [0,1] \\quad \\forall \\quad \\omega \\in \\Omega.\n\\end{align}\nIn other words, we allow partial assignment of process configurations to the food sample. Note that this relaxation is only used in the intermediate stages of our proposed approach. The final solution still satisfies $p(\\omega) \\in \\{0,1\\}$. Without loss of generality, we assume that $\\sum_{\\omega \\in \\Omega}p(\\omega) = 1$.
Hence, we can rewrite (\\ref{eq:Optimization2}) as:\n\\begin{dmath} \\label{eq:Optimization_relaxed}\n\\min_{\\{t_k\\},\\{T_k\\},p(\\omega)} \\sum_{\\omega \\in \\Omega} p(\\omega) \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d)\\right)\n\\end{dmath}\nsubject to\n\\begin{align*}\n\\begin{split}\n~ &2 \\leq t_k\\quad \\forall \\quad 1 \\leq k \\leq M \\\\\n&30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad \\forall\\quad 1 \\leq k \\leq M\\\\\n&p(\\omega) \\in [0,1] \\quad \\forall\\quad \\omega \\in \\Omega\\\\\n&\\sum_{\\omega \\in \\Omega} p(\\omega) = 1 \n\\end{split}\n\\end{align*}\n\nSince the framework we are presenting is built upon MEP, let us briefly review it in the context of this problem. MEP states that given prior information about the process, the most unbiased set of weights is the one that has the maximum Shannon entropy. Assume the information we have about the process is the expected value of the process cost ($\\mathbb{E}(D) = D_0$). Then, according to MEP, the most unbiased weighting parameters solve the optimization problem\n\\begin{align} \\label{eq:MEP}\n\\begin{split}\n\\max_{p} &~ H(p) = -\\sum_{\\omega \\in \\Omega} p(\\omega) \\ln p(\\omega) \\\\\n\\text{subject to} &~ \\Bar{D} = D_0\n\\end{split}\n\\end{align}\nwhere $\\Bar{D}$ is the expected value of the cost $D$, namely,\n\\begin{dmath*}\n \\Bar{D} = \\sum_{\\omega \\in \\Omega} p(\\omega) \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d) \\right) \n\\end{dmath*}\nThe Lagrangian corresponding to (\\ref{eq:MEP}) is given by the maximization of $H - \\beta \\Bar{D}$, or equivalently, the minimization of $F = \\Bar{D} - \\frac{1}{\\beta}H$, where $\\beta$ is the Lagrange multiplier. Therefore, the problem reduces to minimizing $F$ with respect to $p(\\omega)$, $t_k$s, and $T_k$s such that $\\sum_{\\omega \\in \\Omega} p(\\omega) = 1$. We add the last constraint with the corresponding Lagrange multiplier $\\mu$ to the objective function $F$ and rewrite the problem as below:\n\\begin{align}\n\\begin{split} \\label{eq:Unc_Lagrangian}\n\\min_{\\{t_k\\},\\{T_k\\},p(\\omega)} &~ \\Bar{D} - \\frac{1}{\\beta}H + \\mu \\left(\\sum_{\\omega \\in \\Omega}p(\\omega) - 1\\right)\\\\\n\\text{subject to }~ &t_0 \\leq t_k\\quad \\forall \\quad 1 \\leq k \\leq M \\\\\n&30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad \\forall\\quad 1 \\leq k \\leq M\n\\end{split}\n\\end{align}\nWe denote the new objective function in (\\ref{eq:Unc_Lagrangian}) by $\\Bar{F}$. Note that $\\Bar{F}$ is convex in $p$, and the constraints in (\\ref{eq:Unc_Lagrangian}) do not depend on $p$. Hence, we determine the optimal weights by solving $\\frac{\\partial \\Bar{F}}{\\partial p}=0$ for fixed $t_k$s and $T_k$s.
This gives the Gibbs distribution\n\\begin{align}\n\\begin{split} \\label{eq:p_opt}\np^*(\\omega) = \\frac{e^{-\\beta \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d) \\right)}}{\\sum_{\\omega^\\prime \\in \\Omega}e^{-\\beta \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k^\\prime P_{US})t_k + G(x_M^{(\\omega^\\prime)},x_d) \\right)}}\n\\end{split}\n\\end{align}\nTherefore, by plugging (\\ref{eq:p_opt}) into $\\Bar{F}$, we obtain its corresponding minimum $\\Bar{F}^*$:\n\\begin{align}\n\\begin{split} \\label{eq:F_star}\n \\Bar{F}^* &~= \\min_{p(\\omega)} \\Bar{F}\\\\\n &~= -\\frac{1}{\\beta} \\log \\sum_{\\omega \\in \\Omega} e^{-\\beta \\left( \\sum_{k=1}^M (\\alpha \\dot{m}_{air} c_p (T_k - T_0) + \\gamma_k P_{US})t_k + G(x_M^{(\\omega)},x_d) \\right)}\n\\end{split}\n\\end{align}\nSubsequently, to determine the optimal process parameters, we minimize $\\Bar{F}^*$ with respect to the $\\eta_k$s. In other words, solving the constrained optimization problem\n\\begin{align}\n\\begin{split} \\label{eq:eta_opt}\n\\min_{\\{t_k\\},\\{T_k\\}} &~ \\Bar{F}^*\\\\\n\\text{subject to }~ &t_0 \\leq t_k\\quad \\forall \\quad 1 \\leq k \\leq M \\\\\n&30^{\\circ}C \\leq T_k \\leq 70^{\\circ}C \\quad \\forall\\quad 1 \\leq k \\leq M\n\\end{split}\n\\end{align}\nresults in finding the optimal residence times $t_k^*$ and temperatures $T_k^*$ for all the sub-processes. Any constrained optimization algorithm can be used to solve (\\ref{eq:eta_opt}). As an example, we used the interior point algorithm in our simulations.\\\\\nThe proposed algorithm, thus, consists of iterations with the following two steps:\n\\begin{enumerate}\n \\item Use parameters $t_k$ and $T_k$ ($\\forall 1\\leq k \\leq M$) obtained in the previous iteration to find the optimal weights according to (\\ref{eq:p_opt}).\n \\item Numerically solve the constrained optimization in (\\ref{eq:eta_opt}) to find the optimal parameters $t_k^*$ and $T_k^*$ for all $1 \\leq k \\leq M$ using $t_k$s and $T_k$s as the initial guess for the algorithm.\n\\end{enumerate}\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure*}[t]\n\\centering\n \\includegraphics[width=1\\textwidth]{gamma0.5}\n \\caption{Figures (a1), (b1), (c1), and (d1) represent the solution obtained using our proposed algorithm for $M=2$, $M=3$, $M=4$, and $M=5$ respectively, where $M$ indicates the maximum number of stages allowed. In all simulations, $\\alpha$ is chosen to be 0.5. Figures (a2), (b2), (c2), and (d2) are the wet basis moisture content profiles corresponding to temperature profiles (a1), (b1), (c1), and (d1) respectively. Figure (e1) compares the temperature profile of the solution for $M=6$ and optimal single-stage HA and single-stage HA\/US processes. The results show a 12.09\\% energy consumption reduction compared to the single-stage HA\/US process and a 63.19\\% improvement compared to the single-stage pure HA process. Figure (e2) shows the wet basis moisture content profiles corresponding to the temperature profiles shown in figure (e1). }\n \\label{fig:gamma0.5}\n\\end{figure*}\n\n\nIn both steps, the value of $\\Bar{F}$ is reduced. Therefore, the algorithm converges. Furthermore, we can adjust the relative weight of the entropy term $-H$ with respect to the average cost $\\Bar{D}$ using the Lagrange multiplier $\\beta$. For $\\beta \\rightarrow 0$, maximizing the entropy term dominates minimizing the expected cost. In this case, the optimal weights derived in (\\ref{eq:p_opt}) are equal for all the valid process configurations.
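As a sanity check of this limiting behaviour, the following sketch (ours, reusing the drying-model helpers sketched earlier and a large constant as a crude stand-in for the penalty $G$) evaluates the weights (\\ref{eq:p_opt}) for fixed $\\{t_k\\}$ and $\\{T_k\\}$; for $\\beta$ close to zero it indeed returns an almost uniform distribution over $\\Omega$:
\\begin{verbatim}
import numpy as np

def path_cost(omega, ts, Ts, x0=0.755, x_d=0.075, penalty=1e6):
    """D(omega, eta_1, ..., eta_M): stage energies plus terminal penalty."""
    x, D = x0, 0.0
    for gamma, t, T in zip(omega, ts, Ts):
        D += stage_cost(gamma, t, T)
        x = moisture_step(x, gamma, t, T)
    return D + (penalty if x > x_d else 0.0)  # crude stand-in for G(x_M, x_d)

def gibbs_weights(Omega, ts, Ts, beta):
    """Optimal weights p*(omega) for fixed {t_k}, {T_k}."""
    D = np.array([path_cost(w, ts, Ts) for w in Omega])
    w = np.exp(-beta * (D - D.min()))  # shift by min(D) for numerical stability
    return w / w.sum()
\\end{verbatim}
Gradually increasing $\\beta$ between calls to this routine, while re-optimizing $\\{t_k\\}$ and $\\{T_k\\}$ in between, is exactly the two-step iteration described above.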
On the other hand, when $\\beta \\rightarrow \\infty$, more importance is given to $\\Bar{D}$. In other words, for very large values of $\\beta$, we have:\n\\begin{align}\n\\begin{split} \\label{eq:p_lim}\n\\lim_{\\beta \\rightarrow \\infty}p(\\omega) = \\begin{cases}\n1 \\quad \\text{if} ~\\omega = \\underset{\\omega^\\prime \\in \\Omega}{\\mathrm{\\text{argmin}}}\n &\\sum_{k=1}^M \\{ (\\alpha \\dot{m}_{air} c_p (T_k - T_0) \\\\&+\\gamma_k^\\prime P_{US})t_k + G(x_M^{(\\omega^\\prime)},x_d) \\}\\\\\n 0 \\quad \\text{otherwise}\n\\end{cases}\n\\end{split}\n\\end{align}\nThe idea behind the algorithm is to start with $\\beta$ values close to zero, where the objective function $\\Bar{F}$ is convex and the global minimum can be found. Then, we keep track of this global minimum by gradually increasing $\\beta$ until $\\max_{\\omega \\in \\Omega} p(\\omega) \\rightarrow 1$. This procedure helps us avoid poor local minima. The proposed algorithm is as follows:\n\\begin{algorithm}[H]\n\\caption{Calculate optimal $p(\\omega)$, $t_k$, $T_k$ $(\\forall 1 \\leq k \\leq M)$}\n\\begin{algorithmic} \\label{alg:Main}\n\\STATE \\textbf{Initialize:} $T_k \\in [30,70]$, $t_0 = 2 \\leq t_k$, $\\beta = \\beta_{min}$, $p_{max}$, $\\zeta \\geq 1$\n\\STATE Compute $p(\\omega)$ according to (\\ref{eq:p_opt}), $\\forall \\omega \\in \\Omega$.\n\\WHILE{$\\max_{\\omega}p(\\omega) \\leq p_{max}$}\n\\STATE Solve (\\ref{eq:eta_opt}) using any constrained optimization algorithm to find\n$(t_k^*,T_k^*)$ with $(t_k,T_k)$ as initial guess.\n\\STATE $(t_k,T_k) \\leftarrow (t_k^*,T_k^*) \\quad \\forall \\quad 1 \\leq k \\leq M$\n\\STATE Update $p(\\omega)$ according to (\\ref{eq:p_opt}), $\\forall \\omega \\in \\Omega$.\n\\STATE $\\beta \\leftarrow \\zeta \\beta$\n\\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\nNote that it is better to initialize $(t_k, T_k)$ in such a way that the constraint $x_M^{(\\omega)} \\leq x_d$ is satisfied for all $\\omega \\in \\Omega$ since it results in faster convergence and fewer iterations.\n\n\\section{Simulations and Results}\\label{sec:SR}\n\nIn this section, we simulate our proposed algorithm for drying DDG products using multiple sub-processes and compare the results with the commonly-used single-stage drying process. We also change the number of sub-processes allowed ($M$) to see how additional sub-processes affect efficiency. Moreover, we can assign weights to the energy consumed by different sub-processes in order to include their additional costs (e.g. maintenance). In our problem formulation, the coefficient $\\alpha$ defined in (\\ref{eq:Cost_HA}) can accommodate this relative weight.\n\n\\textbf{Effect of the permissible number of stages ($M$):} In the simulations shown in Fig. \\ref{fig:gamma0.5}, we have considered drying fresh DDG products from roughly $75.5\\%$ initial wet basis moisture content down to around $7.5\\%$ at the end of the process. We have used our proposed algorithm for $\\alpha = 0.5$ with different numbers of allowable sub-processes from two to six (Fig. \\ref{fig:Er_gamma0.5}) and compared the results with the optimal single-stage HA\/US and HA processes. The results show 11.96\\% ($M=2$), 12.07\\% ($M=3$), and 12.09\\% ($M=4$, $M=5$, and $M=6$) improvement in energy consumption compared to the most efficient single-stage HA\/US drying process.
In addition, it reduced the energy consumption by 63.13\\% ($M=2$), 63.18\\% ($M=3$), and 63.19\\% ($M=4$, $M=5$, and $M=6$), in comparison with the optimal pure HA process.\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure}[H]\n\\centering\n \\includegraphics[width=0.27\\textwidth]{Er_gamma0.5}\n \\caption{Energy consumption of the algorithm solution for different numbers of allowable stages.}\n \\label{fig:Er_gamma0.5}\n\\end{figure}\n\n\\captionsetup[figure]{font=small,labelfont=small}\n\\begin{figure*}[h]\n\\centering\n \\includegraphics[width=1\\textwidth]{gamma_vary}\n \\caption{Figures (a1), (b1), (c1), (d1), and (e1) represent the solution of Algorithm \\ref{alg:Main} for $M=5$ and $\\alpha = 0.2$, $\\alpha = 0.1$, $\\alpha = \\frac{1}{15}$, $\\alpha = 0.05$ and $\\alpha = 0.04$, respectively. Here, $\\alpha$ denotes the relative cost weight of the pure HA process. Figures (a2), (b2), (c2), (d2), and (e2) are the wet basis moisture content profiles corresponding to temperature profiles (a1), (b1), (c1), (d1), and (e1) respectively.}\n \\label{fig:gamma_vary}\n\\end{figure*}\n\n\nAs shown in Fig. \\ref{fig:Er_gamma0.5}, the cost of the solution given by Algorithm \\ref{alg:Main} decreases as the number of allowable sub-processes increases from two to four. However, for $M \\geq 4$, increasing $M$ does not impact energy consumption reduction. In general, by increasing $M$, the cost of the process either decreases or remains constant. The reason is that the space of all valid process configurations with $M$ stages is a subspace of process configurations with $N > M$ stages. Consequently, we can choose $M$ in such a way that further increasing it does not significantly affect the cost value. In this case, as an example, $M=4$ can be chosen as the optimal number of stages.\n\n\\textbf{Effect of the relative weight of HA process ($\\alpha$):} For $\\alpha = 0.5$, the HA\/US process is significantly more efficient than the pure HA process. Therefore, the pure HA process does not appear in the optimal solutions shown in Fig. \\ref{fig:gamma0.5}. In other words, in this case, the optimal process only consists of HA\/US sub-processes with different operating conditions. However, according to Fig.\\ref{fig:gamma_vary}, by reducing $\\alpha$, less weight is given to the cost of HA, and therefore, the pure HA process starts showing up in the optimal solution. Fig.\\ref{fig:gamma_vary} plots the solutions resulting from Algorithm \\ref{alg:Main} for $M=5$, and varying $\\alpha$ from 0.2 to 0.04. For $\\alpha = 0.2$ , $\\alpha = 0.1$ , and $\\alpha = \\frac{1}{15}$ , the optimal configuration is only comprised of HA\/US sub-processes. On the other hand, for $\\alpha = 0.05$, the pure HA sub-process is used in one stage of the optimal process, and finally, for $\\alpha = 0.04$, the optimal solution is entirely a pure HA process with a constant temperature profile. Processes (a), (b), (c), and (d) in Fig.\\ref{fig:gamma_vary} resulted in 9.95\\%, 5.75\\%, 3.62\\%, and 2.08\\% reduction in energy consumption compared to the most efficient single-stage processes.\n\n\n\n\n\n\\section{Conclusion and Future Work}\n\nIn this paper, we formulated a class of combinatorial optimization problems that arise in many industrial processes comprising sub-processes with the same end goals. We examined industrial drying in continuous and batch processes in more detail and applied our proposed algorithm to a batch process drying prototype which allows both HA and HA\/US drying. 
In addition, we discussed the advantage of solving for both the optimal process configuration and the optimal values of the control parameters simultaneously as compared to treating them as separate problems. \nThough in the example used to illustrate our proposed algorithm only two technologies were permitted, the framework can be extended to $|\\Gamma_k| \\in \\mathbb{N}$ for all $1 \\leq k \\leq M$. Moreover, our algorithm is capable of accommodating more control parameters and quality constraints. In future work, we aim to add air velocity, ultrasound power, and its duty cycle to the control variables, and quantitative color to the constraints representing the desired features. \n\nIn the methodology presented, the space of all valid configurations of up to $M$ sub-processes out of $N$ available technologies is combinatorially large, $O(\\sum_{k=1}^{M} {N \\choose k})$. In our future work, we are planning to use the Principle of Optimality to reduce the space of decision variables to polynomial size. According to the Principle of Optimality, along an optimal sequence of sub-processes, the next technology to be used and its operating conditions are decided only by the current state (e.g. moisture content), independent of the prior sub-processes. Therefore, successful utilization of this fact would make the proposed algorithm more scalable. Furthermore, the algorithm is flexible in accommodating new constraints that are specific to the setup and technologies being used.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\nOver the last decades, diverse quantum computing components from application to physical device have been actively researched and developed.\nSome of them already have theoretically optimal performance~\\cite{Jones:2013ii,Muralidharan:2016ky,Ross:2016uz,Bombin:2015hi,Napp:2012vn,Bombin:2007ed}.\nGigantic IT corporations have announced that they have succeeded in developing systems with dozens of qubits~\\cite{IBM:Iidq8v80,Kelly:2018uc,Intel:1vYziePb}.\nThey even expect to see thousand-qubit systems within ten years.\nBesides, several startups have also started to develop quantum computing hardware and software~\\cite{QuantumComputingReport:FR0ihxsu}.\nIt seems that the era of quantum computing is gradually approaching.\n\n\nSo far, to assess the feasibility of practical large-scale quantum computing, some efforts have been dedicated to investigating the quantity of required quantum resources~\\cite{Suchara:2013ho,Smith:2014ux,Reiher:2017cv,Goto:2016jo,Suchara:2013tg,JavadiAbhari:2015jf,Steiger:2018to} and the expected performance~\\cite{Fowler:2012fi,Jones:2012kc,VanMeter:2009ut,Suchara:2013tg}.\nBased on such analysis, the security of conventional cryptography against a theoretically super-fast quantum computer has also been analyzed~\\cite{Grassl:2015wa}.\nHowever, we think that the analyses reported so far are not fully satisfactory because they usually consider only a few components of a quantum computing and\/or focus on statistical examination.
\nIn particular, the analysis results with the statistical examination are nothing more than simple calculation based on individual data of several quantum computing components~\\cite{Suchara:2013ho,Suchara:2013tg}.\n\n\nTo achieve more exact analysis, we believe the practical operation of a quantum computing needs to be considered.\nFor that, one first has to consider quantum computing components in terms of the practical operation.\nFor example, we have to prepare a quantum algorithm decomposed into individual gates instead of only the dominant part of a quantum algorithm.\nSecond, the above components have to be properly integrated with a target quantum computer system architecture.\nFor example, we have to find a path for qubit movements (or braiding) on the target system architecture without raising any conflicts with other paths.\nBy doing so, the quantity of qubit movements which cannot be considered in the statistical analysis can be exactly measured, and therefore we can say that the analysis results are realistic.\n\n\nIn this work, to perform the integrated analysis more efficiently, we propose and develop a quantum computing framework composed of three functional layers (compile, system and building block), where each layer has a well-defined role for a quantum computing as follows.\n\\begin{itemize}\n\\item Compile: Decomposition of a quantum algorithm into a sequence of quantum gates called a \\textit{quantum assembly code}\n\\item System: Integration of a quantum algorithm and building blocks under a quantum computer system architecture\n\\item Building Block: Implementation of logical qubits and gates according to a quantum error-correcting code and a fault-tolerant scheme\n\\end{itemize}\nWith the proposed framework, we can conduct the analysis in the most fine-grained manner.\nFor example, we even count a qubit movement one by one, and the quantity of physical qubits based on the provided quantum computer architecture and the ancilla qubits required for error correction and magic state.\nFurthermore, the framework examines the performance and the resource with the criterion, 100\\% fidelity quantum computing~\\cite{Whitney:2009wh}.\nBy doing so, we can provide full quantum resource and compare various quantum computing configurations fairly.\n\n\nThe objective of this work is to estimate the most accurate quantum computing performance and quantum resource by the help of the framework we developed.\nFor that, we first need to configure a quantum computing by choosing specific protocols and device properties. 
\nBy doing so, as mentioned above, we can see the quantum resource and the performance of a quantum computing based on most of quantum computing components from algorithm to hardware.\nBesides, it is also possible to see which component affects a quantum computing most seriously by applying realistic technology from the top layer to the bottom layer gradually.\nIn this regards, we can see that a theoretically optimal protocol really leads to optimal quantum computing or instead works suboptimally in composition with other components.\n\n\nBy exploiting the accurate analysis function, we can show numerical data for some theoretical conjectures in the fault-tolerant quantum computing.\nIn Steane code based quantum computing, increasing a concatenation level makes it possible to achieve more reliable quantum computing, but we guess that the more does not lead to the better all the time because a higher concatenation level requires more quantum resource and longer execution time.\nTherefore, it is reasonable to guess that there may exist a trade-off between the reliability and the performance (resource).\nIn this work, we show there exists such trade-off, and in surface code based quantum computing too.\n\n\nThe remainder of this paper is organized as follows.\nSection~\\ref{sec:related_works} reviews some related works.\nSection~\\ref{sec:configuration} overviews the proposed quantum computing framework and describes how to configure and analyze a quantum computing with the proposed framework.\nSection~\\ref{sec:proposed_framework} describes each layer of the proposed framework in detail.\nSection~\\ref{sec:performance_metric} describes the performance metric we evaluated in this work, and the analysis results will be shown in section~\\ref{sec:performance_evaluation}.\nSection~\\ref{sec:performance_improvement} discusses, by exploiting the proposed framework, what happens in a quantum computing if an individual component is improved.\nThis paper concludes with discussions in section~\\ref{sec:discussion}.\n\n\n\\section{Related Works}\\label{sec:related_works}\n\n\\begin{table*}[t]\n\\caption{\nSummary of the related works.\nNote that ``$\\bigtriangleup$\" indicates the corresponding component is partially applied.\nNamely, as mentioned before, Refs.~\\cite{Fowler:2012fi} and~\\cite{Jones:2012kc} perform the performance analysis based on only the dominant part of a quantum algorithm, not covering all quantum gates.\nBesides, surface code basically requires that physical qubits are arranged on the 2-dimensional nearest neighbor layout.\nTherefore, the related works applying surface code error correction implicitly consider a 2D layout of physical qubits in spite of explicitly no mentioning about the qubit layout.\n\\newline\n}\n\\small\n\\centering\n\\begin{tabular}{c|c|c|c|c|c} \\hline \n& \\multirow{2}{*}{Compile} & \\multirow{2}{*}{FTQC} & Micro-Architecture & System & Analysis \\\\ \n& & & (Qubit Layout) & Synthesis & Criterion\\\\ \\hline\nQuipper~\\cite{Smith:2014ux,Green:2013ha,Green:2013wb} & \\multirow{2}{*}{$\\bigcirc$} & \\multirow{2}{*}{X} & \\multirow{2}{*}{X} & \\multirow{2}{*}{X} & {One Time}\\\\ \nScaffCC~\\cite{JavadiAbhari:2015jf,JavadiAbhari:2014eu,ScaffCCScaffCC:vm} & & & & & Computing \\\\ \\hline\nFowler \\textit{et al.}~\\cite{Fowler:2012fi} & \\multirow{2}{*}{$\\bigtriangleup$} & \\multirow{2}{*}{Surface} & \\multirow{2}{*}{$\\bigtriangleup$} & \\multirow{2}{*}{X} & {One Time}\\\\ \nJones \\textit{et al.}~\\cite{Jones:2012kc} & & & & & Computing\\\\ 
\\hline\n\\multirow{2}{*}{Reiher \\textit{et al.}~\\cite{Reiher:2017cv}} & \\multirow{2}{*}{$\\bigcirc$} & \\multirow{2}{*}{Surface} & \\multirow{2}{*}{$\\bigtriangleup$} & \\multirow{2}{*}{X} & {One Time}\\\\ \n & & & & & Computing \\\\ \\hline\n\\multirow{3}{*}{QuRE~\\cite{Suchara:2013ho,Suchara:2013tg}} & \\multirow{3}{*}{$\\bigcirc$} & Steane, & \\multirow{3}{*}{Layout of Physical Qubits} &\\multirow{3}{*}{X} &\\multirow{3}{*}{\\minitab[c]{One Time\\\\Computing}}\\\\ \n & & Bacon-Shor, & & \\\\ \n & & Surface & & \\\\ \\hline \n\\multirow{2}{*}{Present Work} & \\multirow{2}{*}{$\\bigcirc$} & Steane, & Layouts of Physical and Logical Qubits, & \\multirow{2}{*}{$\\bigcirc$} & \\multirow{2}{*}{Fidelity 100\\%}\\\\ \n & & Surface & Communication Bus, Computing Regions & \\\\ \\hline\n\\end{tabular}\n\\label{tab:summary_relatedworks}\n\\end{table*} \n\nWe review several quantum computing frameworks discussing the performance and the quantum resource of a quantum computing.\nTable~\\ref{tab:summary_relatedworks} briefly summarizes the related works.\n\n\nQuipper~\\cite{Smith:2014ux,Green:2013ha,Green:2013wb,Green:2013gi} and ScaffCC~\\cite{JavadiAbhari:2015jf,JavadiAbhari:2014eu,ScaffCCScaffCC:vm} are frameworks for quantum compile and resource estimation of a quantum algorithm.\nThey basically compile a programmed quantum algorithm into a sequence of quantum instructions of a quantum gate and target qubit(s).\nFrom the compile results, they statistically analyze the quantum resource such as the quantities of gates and qubits.\nAll the data come from a quantum algorithm only.\nWe believe, without considering physical implementation of the algorithm, the analysis results cannot be used for the reference to the feasibility of a quantum computing.\n\n\n\nFowler \\textit{et al.}~\\cite{Fowler:2012fi} approximated the quantum computer size and the execution time of Shor algorithm ($N=2000$) with a surface code quantum computing.\nHowever, their analysis is based on the dominant part of the algorithm such as the modular exponentiation circuit only.\nTherefore, the execution time completely depends on the quantity of \\textit{Toffoli} gates in the circuit, the decomposition of Toffoli gates and measurement gate execution time.\nThe size of a quantum computer was also calculated based on the quantity of magic states to implement logical \\textit{T} gates and the quantity of logical (algorithm) qubits.\nJones~\\textit{et al.}~\\cite{Jones:2012kc} also estimated the performance and resource of a quantum algorithm within their layered architecture for a quantum computing.\nBut, their analysis methods are almost the same with Fowler \\textit{et al.}~\\cite{Fowler:2012fi}.\n\n\nOn the other hand, Reiher \\textit{et al.}~\\cite{Reiher:2017cv} applied a quantum compile and surface code error correction to estimate the performance and the resource for quantum simulation of complex chemical system.\nSince the authors did not consider any quantum computer system architecture and the system integration on the architecture, their analysis corresponds to the statistical calculation based on the data from compiled algorithm and fault-tolerant scheme.\nNote that in the literature, the authors claimed that despite overhead of quantum error correction and compile, quantum computations can be performed within reasonable time.\n\n\nThe toolbox for estimating quantum resource \\textit{QuRE}~\\cite{Suchara:2013ho,Suchara:2013tg} considers most of quantum computing components such as quantum compile, quantum error 
correction, physical qubit layout and quantum computing device.\nBy taking quantum compile, they prepared all quantum gates of a quantum algorithm.\nFurthermore, they covered both of block-type quantum codes (Steane code and Bacon-Shor code) and surface code and employed 2-dimensional qubit layout for their logical qubits.\nIn this regards, this toolbox performs the analysis based on more quantum computing components than before.\nHowever, their analysis method is still statistical investigation.\nFor example, their estimation on the execution time of a quantum computing simply depends on the quantity of quantum gates and its running time, $\\sum 1\/g_{parallel} \\times \\#g \\times g_T$.\nNote that $g_{parallel}$ denotes the quantity of gates $g$ executed in maximally parallel and $g_T$ is the execution time of the gate.\nAs mentioned before, it is difficult to say their analysis results coincide with the practical situation.\n\n\n\n\\section{Configuration of Quantum Computing}\\label{sec:configuration}\nWe describe how to configure a quantum computing with the proposed quantum computing framework.\nWe first overview the structure and functionality of our quantum computing framework, and second describe how to configure a quantum computing.\nWe also describe which schemes and protocols are currently supported by the framework.\n\n\n\\subsection{Overview of Framework}\\label{subsec:overview_framework}\nWe describe the structure and functionality of the proposed quantum computing framework.\nIt deals with quantum computing programming, quantum compile, quantum computer architecture, fault-tolerant quantum computing scheme and quantum computing device.\nTo this end the proposed framework is composed of three functional layers: \\textit{compile}, \\textit{system} and \\textit{building block}.\nEach layer has a well-defined role and provides several options to conduct their functions.\nAll layers are closely related to each other.\nFIG.~\\ref{fig:data_flow_components} shows the data flow on the framework for performance analysis and quantum computing.\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]{\n\t\\epsfig{file=data_flow_performance.eps, scale=0.3}\n}\n\\subfigure[]{\n\t\\epsfig{file=data_flow_execution.eps, scale=0.3}\n}\n\\caption\n{\nData flow over the layers.\n(a) For the performance analysis, the compile layer and the building block layer provide a quantum assembly code and the performance of logical operations to the system layer.\nThe system layer then performs the performance analysis during the system mapping.\n(b) The platform can be utilized to perform a practical quantum computing. \nFor that, all data is flowing sequentially from the most top layer to the bottom layer.\nIn the bottom layer control signal for physical device has to be generated for the practical computing.\n}\n\\label{fig:data_flow_components}\n\\end{figure}\n\n\nThe compile layer covers programming and compiling a quantum algorithm. 
\nA quantum algorithm is written in a high-level programming language, and then compiled into a sequence of quantum gates, called a \\textit{quantum assembly code}.\nTo compile a programmed quantum algorithm, target quantum gates and compile algorithm have to be determined beforehand.\nFIG.~\\ref{fig:io_compile_layer} shows the input and output of the compile layer.\nIn the present work, we hire an open-source quantum computing compiler \\textit{ScaffCC}~\\cite{JavadiAbhari:2015jf,JavadiAbhari:2014eu,ScaffCCScaffCC:vm} that supports programming language \\textit{Scaffold}~\\cite{JavadiAbhari:2012wx}.\nThe details about quantum compile, quantum gates and a quantum assembly code are described in section~\\ref{subsec:compile}.\nWhy we exploit ScaffCC (Scaffold) in this work is also discussed there. \n\n\n\\begin{figure}[t]\n\\centering\n\\epsfig{file=compile_layer.eps, scale=0.3}\n\\caption{\nThe input and output in the compile layer.\nBy a compile, a programmed quantum algorithm is decomposed into a quantum assembly code.\nFor the compile, a compile algorithm and target gates have to be determined beforehand.\nTarget gates can be varied according to a quantum computing type such as a fault-tolerant quantum computing or a non-fault-tolerant (physical) computing.\nThe selection of target gates can also be influenced by a qubit technology.\n}\n\\label{fig:io_compile_layer}\n\\end{figure}\n\n\nThe system layer synthesizes a quantum computing by integrating quantum algorithm, quantum computer architecture and building block (qubit\/gate and quantum computing device).\nIt deals with everything required to run a quantum algorithm on a quantum computer system architecture.\nFor example, it recasts a quantum algorithm for a target quantum computer architecture via \\textit{system synthesis} (also called \\textit{system mapping}).\nFor the system synthesis, this layer first takes a quantum computing system architecture.\nA qubit connectivity must be definitely defined there, and a communication bus may be included for an efficient interaction over distant qubits.\nIssues related to a quantum computer architecture depend on a chosen quantum error-correcting code and a fault-tolerant quantum computing scheme.\nFIG.~\\ref{fig:io_system_layer} describes the input and output in the system layer.\nIn terms of the performance analysis, the output of the system layer is the performance analysis result, but to run a quantum algorithm this layer generates an architecture-specific description of a quantum algorithm as shown in FIG.~\\ref{fig:data_flow_components}.\n\n\n\n\\begin{figure}[t]\n\\centering\n\\epsfig{file=system_layer.eps, scale=0.3}\n\\caption{\nThe input and output in the system layer.\nQuantum assembly code and logical gates are passed from the compile layer and the building block layer respectively.\nQuantum computer architecture describes the layout of logical\/physical qubits and a communication bus over qubits.\nSystem mapping algorithm describes how to integrate a quantum algorithm (quantum assembly code) and a quantum computer architecture.\nIt includes the qubit placement, the gate scheduling and so on.\n}\n\\label{fig:io_system_layer}\n\\end{figure}\n\n\nThe building block layer is associated with qubits and gates of quantum algorithm.\nThis work basically assumes a fault-tolerant quantum computing, and therefore such qubits and gates are related to logical qubits and gates encoded in a quantum error-correcting code.\nIn this regards, the main functionality of this layer is to assemble 
physical qubits and gates to implement a logical qubit and gate.\nFor that, a quantum error-correcting code should be determined first, and then logical qubit and gate will be implemented according to a fault-tolerant quantum computing scheme of the chosen quantum code.\nDuring the implementation, it is able to generate the performance of logical qubit and gate based on the properties of physical operations, time and fidelity (or \\textit{error rate}).\nFIG.~\\ref{fig:io_building_block_layer} shows the input and output at the building block layer. \nIn terms of the performance analysis, the output of the building block layer is the performance of logical quantum operations, \\textit{time} and \\textit{fidelity}.\nNote that the proposed platform supports $[[7,1,3]]$ Steane code and a surface code. \n\n\n\\begin{figure}[t]\n\\centering\n\\epsfig{file=building_block_layer.eps, scale=0.3}\n\\caption{\nThe input and output in the building block layer.\nTo make logical qubits\/gates, a quantum error-correcting code should be determined first.\nBesides, to derive the performance of logical operations, the properties of physical device, time and fidelity, have to be considered.\nThen, based on the physical device property and the fault-tolerant quantum computing scheme, logical operations with a specific performance will be generated.\n}\n\\label{fig:io_building_block_layer}\n\\end{figure}\n\n\n\\subsection{Configuration of Quantum Computing}\\label{subsec:configuration}\n\nWe describe how to configure a quantum computing with the proposed framework.\nAs mentioned above, it is possible to configure a quantum computing by selectively choosing specific protocols or the properties of physical device. \nFor example, you can configure a Steane code based fault-tolerant quantum computing to run Shor algorithm.\nFor that, through compile you generate a quantum assembly code. 
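Purely as an illustration of such a configuration (the framework's actual input format is not specified in this paper, so every field name below is hypothetical), the selected options could be summarized as:
\\begin{verbatim}
# Hypothetical configuration record; field names are illustrative only.
config = {
    "computing_type": "fault-tolerant",  # or "physical" (non-fault-tolerant)
    "ftqc_scheme":    "Steane",          # Steane code or surface code
    "target_gates":   ["X", "Z", "H", "S", "T", "CNOT"],  # no R_Z(theta) in FTQC
    "compile_type":   "structured",      # structured / non-structured assembly code
    "qubit_layout":   ("2D", "2D"),      # (logical, physical) qubit layouts
    "device":         {"gate_time": 1.0, "error_rate": 1e-5},  # time and fidelity
}
\\end{verbatim}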
\nLogical building blocks are built up based on the chosen quantum error-correcting code, quantum device and the size of the quantum algorithm.\nThe next task is to determine a quantum computer architecture of a certain physical and\/or logical qubit layout.\nThe architecture depends on the chosen fault-tolerant quantum computing scheme.\nNote that it is also possible to choose a physical non-fault-tolerant quantum computing by assuming high-fidelity quantum computing device (see Section~\\ref{sec:accurate_gates}).\n\n\n\\begin{table*}[t]\n\\caption{\nList of protocols and layouts currently supported by the framework.\nIn the compile layer, we choose a compile type and a target gate set.\nThe compile type is the format of a quantum assembly code.\nThe compile type is closely related to the mapping type and qubit layout.\nA specific qubit layout can be chosen within a selection.\nIn the building block layer, the scheme for a fault-tolerant quantum computing is determined.\nAccording to the scheme, the protocol of logical operations are fixed.\n\\newline\n}\n\\small\n\\centering\n\\begin{tabular}{c|c|c} \\hline \nLayer & Options & Values \\\\ \\hline\n\\multirow{3}{*}{Compile} & Compile Type & Structured code, Non-Structured code \\\\ \\cline{2-3}\n & \\multirow{2}{*}{Target Gate Set} & $\\{X, Z, H, S (S^{\\dag}), T (T^{\\dag}), R_Z(\\theta), CNOT\\}$, \\\\ \n & & $\\{X, Z, H, S (S^{\\dag}), T (T^{\\dag}), CNOT\\}$ \\\\ \\hline\n\\multirow{3}{*}{System} & Mapping Type & Structured, Non-structured \\\\ \\cline{2-3}\n & \\multirow{2}{*}{Qubit Layout} & (Non-structured) All-to-All, 1D, 2D, Arbitrary \\\\ \n & & (Structured) All-to-All, (1D, 1D), (1D, 2D), (2D, 2D) \\\\ \\cline{2-3} \\hline\nBuilding & FTQC Scheme & Steane code, Surface code\\\\ \\cline{2-3}\nBlock & Device & Time, Fidelity \\\\ \\hline\n\\end{tabular}\n\\label{tab:options}\n\\end{table*} \n\n\nIn Table~\\ref{tab:options}, we show some options our framework currently supports.\nIn the table, the compile type and the mapping type (with qubit layout) completely depend on the type of a quantum assembly code, a \\textit{structured} code and a \\textit{non-structured} code.\nA structured quantum assembly code is processed by a structured system mapping for a structured quantum computer architecture.\nSimilarly, a non-structured quantum assembly code is taken by a non-structured system mapping based on a simple qubit layout such as a regular 1- or 2-dimensional lattice.\nThe details of the structured and non-structured quantum assembly codes will be described in Section~\\ref{subsec:compile}.\nIn Sections~\\ref{sec:performance_evaluation} and~\\ref{sec:performance_improvement}, we configure quantum computings with such options and analyze the performance and the quantum resource.\n\n\nWith the proposed framework, a quantum computing can be configured as follows.\nThe first step is to determine the type of a quantum computing, a fault-tolerant quantum computing or a non-fault-tolerant physical quantum computing.\nIn general, such determination depends on the size of a quantum algorithm and the assumption on the reliability of physical device.\nThe larger quantum algorithm requires the more reliable qubits and gates.\nIf you decided a fault-tolerant quantum computing, you need to choose a quantum error-correcting code.\nOur platform supports $[[7,1,3]]$ Steane code and a surface code now.\nAccording to a quantum algorithm and the reliability (error rate) of physical device, the concatenation level for Steane code or the code distance 
for a surface code will be determined.\n\n\nThe choice of the quantum computing type affects the succeeding quantum compile step. \nMore precisely, the target quantum gates for a compile completely depend on the chosen quantum computing type.\nFor example, the $R_Z(\\theta)$ gate for an arbitrary rotation angle $\\theta$ is frequently used in quantum algorithms, but its logical version is not generally implemented in a fault-tolerant manner. \nTherefore, to realize a fault-tolerant quantum computing, you have to decompose such an $R_Z(\\theta)$ gate into a sequence of $H$, $S$ and $T$ gates that are fault-tolerantly implementable.\nOn the other hand, since the physical $R_Z(\\theta)$ gate can be easily performed on physical qubits, the rotational gate poses no problem for a non-fault-tolerant physical quantum computing.\n\n\nThe second step is to make a quantum computing program and compile it into a quantum assembly code.\nAs mentioned above, for the compile, you have to use the target quantum gates determined in the previous step.\nThe proposed framework exploits the open-source programming language \\textit{Scaffold} and the compiler \\textit{ScaffCC}.\nYou can see how to write a Scaffold program in Ref.~\\cite{JavadiAbhari:2012wx} and how to use the ScaffCC compiler in Refs.~\\cite{JavadiAbhari:2015jf,ScaffCCScaffCC:vm}.\nIn section~\\ref{subsec:compile}, we show a simple example of a Scaffold program and the associated quantum assembly code. \nNote that ScaffCC decomposes an arbitrary one-qubit gate into a sequence of $H$, $S$ and $T$ by exploiting \\textit{gridsynth}~\\cite{Ross:2016uz} or \\textit{sqct}~\\cite{Kliuchnikov:2013tr}.\n\n\nThe third step is to choose a quantum computing architecture, in particular a qubit array. \nThe proposed platform takes not only a simple regular 1- or 2-dimensional lattice, but also a hierarchically structured qubit array.\nA communication bus should also be considered to enable efficient interaction over distant qubits.\nFIG.~\\ref{fig:layout_example} shows an example of a hierarchically structured quantum computing architecture.\nIn the case of Steane code quantum computing, the structure of a qubit array seriously affects a quantum computing due to the limited local qubit interaction.\nWe will show that this limitation raises highly nontrivial temporal overhead in section~\\ref{sec:performance_local_qec}.\nOn the other hand, the surface code quantum computing scheme is fundamentally based on local qubit interactions on the 2-dimensional lattice.\nFIG.~\\ref{fig:surface_code_architecture} shows a quantum computing architecture for a surface code quantum computing.\n\n\nOnce the configuration is completed, we can perform the system synthesis.\nDuring the system synthesis, the quantum algorithm is reformulated for the target quantum computer architecture. \nAs will be discussed later, from the system synthesis, the expected performance (circuit depth, execution time, fidelity, KQ, and so on) and the required quantum resource (qubits and gates) of a quantum computing are evaluated.\n\n\n\\subsection{Analysis of Quantum Computing}\\label{subsection:analysis}\n\nWe briefly mention which performance metrics are evaluated by the framework; the detailed analysis method will be described in section~\\ref{sec:performance_metric}.
The framework first inspects the numbers of qubits and gates.
These figures are usually obtained from a quantum compile alone, without considering a quantum computer architecture~\cite{Smith:2014ux,JavadiAbhari:2015jf}.
Our framework, however, examines these quantities while taking the quantum computer system architecture into account.
In the system synthesis, the fault-tolerant quantum computing scheme, in particular the magic state factory, is taken into consideration.
It is therefore possible to estimate the temporal and spatial overhead caused by factors that are hidden in the quantum algorithm itself, and we believe our estimation comes close to the resources needed to perform a real quantum computation.

The framework then examines the expected execution time (as well as circuit depth, fidelity, KQ and so on) of a quantum algorithm based on the quantum computer architecture, the fault-tolerant protocol and the physical device.
By applying the properties of the physical device and the fault-tolerant protocol, we deduce the performance of the logical gates and of the quantum error correction.
By then conducting a system integration, we obtain the single-round execution time of the quantum algorithm from the performance of the logical gates and the error correction.
Our framework goes further for a more detailed analysis.
During the system integration, as mentioned above, the fidelity of the quantum computation can be calculated.
By taking this fidelity into account, it is possible to estimate the execution time for a quantum computation that effectively achieves a fidelity of 100$\%$.
In doing so, we can fairly compare fault-tolerant quantum computing and non-fault-tolerant quantum computing.
The number of qubits is also reported under this performance criterion.

Beyond the above, the framework can generate various other performance data, from which we can estimate the temporal and spatial overhead of a quantum computation.
For example, as mentioned above, the restriction to local interaction between nearest-neighbor qubits sometimes requires additional qubit movements to perform a 2-qubit CNOT gate.
The quantity of such qubit movements corresponds to a temporal overhead.
The platform evaluates this temporal overhead as the ratio of SWAP gates to total quantum gates.
As will be shown in Section~\ref{sec:performance_local_qec}, a quantum computation can require a highly nontrivial temporal overhead.

\section{Proposed Quantum Computing Framework}\label{sec:proposed_framework}
\subsection{Compile Layer}\label{subsec:compile}

A quantum compile is the process of decomposing a quantum algorithm into a sequence of quantum gates.
Here, the quantum algorithm is given entirely in programmed form, written in a high-level abstract programming language.
Recently, several research groups have developed programming environments for quantum computing by extending conventional classical programming languages such as Python and C/C++~\cite{Steiger:2018to,JavadiAbhari:2012wx,Microsoft:wz,Ying:2017exa,Green:2013wb,Green:2013gi}.

It is well known that an arbitrary quantum algorithm can be decomposed into a combination of 1-qubit rotation gates and the 2-qubit $CNOT$ gate~\cite{Nielsen:2000ga}.
The target quantum gates for a compile can vary according to the situation.
For example, the set of $H$, $T$ and $CNOT$ is the de facto standard for universal fault-tolerant quantum computing.
However, to reduce the complexity of the compile or to provide flexibility to the programmer, one usually adds more quantum gates to the target set.
Furthermore, the set of physically implementable quantum gates differs slightly from one quantum mechanical system to another~\cite{Nielsen:2000ga,Lin:2014il}.
In this work, we use two sets of quantum gates: $\{X, Z, H, S (S^{\dag}), T (T^{\dag}), R_Z(\theta), CNOT\}$ and $\{X, Z, H, S (S^{\dag}), T (T^{\dag}), CNOT\}$.
The difference between the two is the $R_Z(\theta)$ gate.
As mentioned before, since the rotation gate for an arbitrary angle $\theta$ cannot be implemented in a fault-tolerant manner, the first set is used for physical quantum computing and the second for fault-tolerant quantum computing.

The output of the quantum compile, the sequence of quantum gates, is called a \textit{quantum assembly code}.
A quantum assembly code is a list of quantum instructions, each a combination of a quantum gate and its target qubit(s).
It is an intermediate representation of a quantum algorithm, between a mathematical description and a physical machine instruction description~\cite{Cross:2017ud,JavadiAbhari:2015jf,Svore:2006iw}.
There is no standard for quantum assembly codes, so the specific representation and structure differ slightly across the literature.
For example, a quantum instruction applying a Hadamard gate to a qubit $q$ may be written as ``$hadamard$ $q$" or ``$H$($q$)".
Moreover, certain quantum assembly codes have their own special structure.
For example, Open QASM by IBM~\cite{Cross:2017ud} provides a conditional statement ``\textit{if... then... else}" of the kind usually supported by conventional programming languages.

In the present work, we employ the open-source quantum compiler \textit{ScaffCC}~\cite{JavadiAbhari:2015jf,ScaffCCScaffCC:vm}.
It compiles a quantum computing program written in the quantum programming language \textit{Scaffold}~\cite{JavadiAbhari:2012wx}.
A Scaffold program consists of one \textit{main} module and multiple sub-modules (see FIG.~\ref{fig:scaffold_code}).
A module resembles a function or procedure in conventional programming languages such as $C/C++$ or Python; it is composed of instructions calling quantum gates and/or other modules.
The execution of a Scaffold program begins with the first instruction of the main module and terminates with the last instruction of that module.
During the execution of the main module, other sub-modules may be called.

Previously, we mentioned that some quantum assembly codes have unique structural features, and the output of ScaffCC is one of them.
The compiler generates a hierarchically structured quantum assembly code, in which the quantum algorithm is composed of multiple modules, each consisting of quantum gates and/or calls to other modules.
In the previous paragraph, we also mentioned that a Scaffold program consists of modules.
To avoid ambiguity, we need to distinguish the two: a module in a Scaffold program is defined and written entirely by the programmer, and the compile converts it into a module in the quantum assembly code.
The two are therefore technically identical.
FIG.~\ref{fig:module_model} shows an example of a module in a quantum assembly code.

\begin{figure}[t]
\centering
\epsfig{file=module_model.eps, scale=0.45}
\caption{
An example of a module.
Parameter qubits passed from external modules are specified at the beginning.
A module is defined by the preparation of local qubits and classical bits, calls to quantum gates and other sub-modules, and the measurement of local qubits.
}
\label{fig:module_model}
\end{figure}
We should explain why we use ScaffCC in this work.
As mentioned above, a quantum assembly code is a list of quantum instructions.
As the size of a quantum algorithm increases, the size of its quantum assembly code grows nontrivially, following the complexity of the algorithm.
Clearly, the size of practically meaningful quantum algorithms is beyond the capability of conventional supercomputing.
In other words, the quantum assembly code of an algorithm of interest can become very large, and this scale causes a practical problem in the classical control of a quantum computation.
For example, the quantum assembly code of the Shor algorithm for factoring a 512-bit integer is around 39 TB (see FIG.~\ref{fig:qasm_size_comparison}).
We therefore had trouble generating and managing such a huge code with a classical computing system, and we could not even attempt to generate larger quantum algorithms due to the lack of classical storage and memory.

On the other hand, the hierarchy provided by ScaffCC can suppress this blow-up.
For example, to perform a composite quantum operation of $N$ gates $M$ times, a flat (non-structured) quantum assembly code requires $MN$ quantum instructions ($\# gates \times \# iterations$).
The hierarchical assembly code, by defining the operation as a module, requires only about $M+N$ instructions ($\# gates + \# iterations$).
As a result, the hierarchical quantum assembly code is much smaller than the non-structured code, as shown in FIG.~\ref{fig:qasm_size_comparison}.
To the best of our knowledge, only ScaffCC supports such a hierarchically structured quantum assembly code.
This is the main reason why we use ScaffCC in the proposed platform.
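The instruction-count argument above can be checked with a toy computation. The Python sketch below, with made-up gate and repetition counts, illustrates why a module-based code scales like $M+N$ while a flat code scales like $MN$.

\begin{verbatim}
# Toy illustration of the code-size argument: repeating a composite
# operation of N gates M times costs about M*N flat instructions, but
# only about N (one module body) + M (call sites) hierarchical
# instructions. The numbers are illustrative.
N = 1000    # gates in the composite operation
M = 10000   # number of repetitions

flat_instructions = M * N             # every gate written out each time
hierarchical_instructions = N + M     # module defined once, called M times

print(flat_instructions)              # 10000000
print(hierarchical_instructions)      # 11000
\end{verbatim}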
\begin{figure}[t]
\centering
\epsfig{file=Shor_file_size_base.eps, scale=0.23}
\caption{
The size of the quantum assembly codes for the Shor algorithm with $n=128$, $256$, and $512$.
Both codes are generated by ScaffCC.
As the input size increases, the size of the quantum assembly code in the non-structured format grows beyond dozens of TB.
}
\label{fig:qasm_size_comparison}
\end{figure}

To conclude this section, we show a simple example of a Scaffold program that prepares the $5$-qubit CAT state $\frac{1}{\sqrt{2}}(|0\rangle^{\otimes 5} + |1\rangle^{\otimes 5})$, together with the corresponding quantum assembly code.
Readers can see how to write a Scaffold program in Ref.~\cite{JavadiAbhari:2012wx} and how to use the ScaffCC compiler in Ref.~\cite{ScaffCCScaffCC:vm}.
FIG.~\ref{fig:scaffold_code} shows a quantum circuit implementing the $5$-qubit CAT state and the corresponding Scaffold program.
The structured and non-structured quantum assembly codes are shown in FIG.~\ref{fig:cat_qasm}.

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=cat_state_circuit.eps, scale=0.8}
}
\subfigure[]{
	\epsfig{file=scaffold_code.eps, scale=0.4}	
}
\caption{
	An example of a Scaffold program implementing a $5$-qubit CAT state.
	(a) A quantum circuit and (b) its Scaffold program.
	The module \textit{MakeCAT} prepares the CAT state.
}
\label{fig:scaffold_code}
\end{figure}

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=cat_state_qasmh.eps, scale=0.4}
}
\subfigure[]{
	\epsfig{file=cat_state_qasmf.eps, scale=0.45}
}
\caption
{
	Quantum assembly codes generating a $5$-qubit CAT state.
	(a) A structured code and (b) a non-structured code.
}
\label{fig:cat_qasm}
\end{figure}

\subsection{System Layer}\label{subsec:system}

A quantum algorithm (quantum assembly code) describes the logic for solving a given problem.
It assumes an ideal physical situation in which noiseless physical gates and arbitrarily long-range qubit interaction are available.
In other words, a quantum algorithm is developed without regard to any physical implementation.

On the other hand, the quantum computer on which a quantum algorithm is executed has a particular logical and physical architecture, such as a qubit layout.
Therefore, to run a quantum algorithm on such a quantum computer, we need to reformulate the algorithm to be compatible with the given architecture.
For example, the real quantum computing devices in the IBM Quantum Experience~\cite{IBM:Iidq8v80} have a very limited qubit layout and allow only one-directional CNOT gates.
The quantum assembly codes shown in FIG.~\ref{fig:cat_qasm} therefore cannot be executed directly on the IBM QX4 device (see FIG.~\ref{fig:mapping_simple_example} (a)), because they include CNOT gates that the device does not allow.
For the execution, we have to recast the quantum assembly code for IBM QX4; FIG.~\ref{fig:mapping_simple_example} (b) shows the recast (non-structured) quantum assembly code for the device.
This is the motivation for the quantum computing system mapping.

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=ibmqx4_layout.eps, scale=0.4}
}
\subfigure[]{
	\epsfig{file=ibmqx4_mapping.eps, scale=0.35}
}
\caption
{
(a) The qubit layout of the IBM QX4 device~\cite{IBM:Iidq8v80}.
A node indicates a qubit, and a directed edge indicates that a CNOT gate can be applied, with the control qubit at the tail and the target qubit at the head of the arrow.
As shown, no bidirectional CNOT is available on the IBM QX4.
(b) The recast assembly code from FIG.~\ref{fig:cat_qasm} (b).
Since the instruction ``\textit{CNOT data0,data1}" is not directly allowed on the IBM QX4, the Hadamard gates ``\textit{H data1}" and ``\textit{H data2}" have to be added before and after the instruction.
Note that the node index $k$ indicates the qubit data$k$.
We have not cancelled out repeated Hadamard gates; by cancelling them, the circuit depth could be reduced from 12 to 9.
}
\label{fig:mapping_simple_example}
\end{figure}
The principle of the quantum computing system mapping is very simple: 1) \textit{set up} a quantum computer architecture and 2) \textit{recast} the quantum assembly code for that architecture.
In what follows, we first describe the quantum computer architecture, and then show how to realize a quantum algorithm on the target architecture.

\subsubsection{Quantum Computer Architecture}
We discuss a hierarchically structured quantum computer architecture for the proposed framework.
In general, there is no restriction on the architecture; in other literature, a regular 1- or 2-dimensional lattice is usually employed.
In this work, however, we assume a hierarchical structure: a quantum computer is composed of several computing regions, called modules, and a communication bus connecting the modules.
By assuming such a structured architecture, the system mapping of a hierarchically structured quantum assembly code can be done efficiently.

A computing region is associated one-to-one with a module in the quantum assembly code.
It consists of multiple cells for the logical (or physical) qubits described in the module.
Some cells are allocated for parameter qubits passed from other modules, and the others for local qubits used temporarily within the module.
Additional space is sometimes required to give a module a rectangular shape.
FIG.~\ref{fig:example_module_layout} shows an example of a module in a quantum assembly code and its associated computing region.

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=module_parsenodeeven.eps, scale=0.35}
}
\subfigure[]{
	\epsfig{file=module_parsenodeeven_qubit_layout.eps, scale=0.35}
}
\caption
{
(a) An example of a module in a quantum assembly code and (b) the associated computing region on a quantum computer architecture.
In (a), the qubits \textit{a} and \textit{even} are parameter qubits passed from other modules, and the qubit \textit{scratch} is a local qubit used within the module.
In (b), the dark grey cells are for the parameter qubits, the white cells are for the local qubits, and the light grey cells are empty space (null qubits with no role).
While the sizes of the parameter qubits \textit{a} and \textit{even} are not specified in the module definition, they can be determined by tracing all modules that call this module.
}
\label{fig:example_module_layout}
\end{figure}

FIG.~\ref{fig:layout_example} shows examples of the above-mentioned quantum computer architecture.
The box labelled $M_{i,j}$ ($M_i$) indicates a module (computing region).
We call the arrangement of the modules the \textit{global} layout and the arrangement of the qubits within a module the \textit{local} layout.
All modules communicate with each other via the communication bus; in the figure, the bus is depicted as the white space outside the modules.
We will discuss the bandwidth of the communication bus in Section~\ref{sec:performance_metric}.
A qubit residing inside a module supports universal quantum operation.
Such a logical qubit is composed of data qubits for holding data and ancilla qubits for error correction and logical operations.
Qubits in the communication bus, on the other hand, perform only error correction and logical Clifford operations.
Therefore, the composition of a logical qubit in the communication bus may differ from that in a module, depending on the quantum error-correcting code and the fault-tolerant quantum computing scheme.

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=2d_global_layout.eps, scale=0.25}
}
\subfigure[]{
	\epsfig{file=1d_global_layout.eps, scale=0.25}
}
\caption{
Examples of the proposed quantum computer architecture.
(a) 2D global layout and 2D local layout.
(b) 1D global layout and 2D local layout.
Logical qubits of different colors play different roles in a module: parameter qubits (dark grey), local qubits (white) and dummy qubits (light grey).
}
\label{fig:layout_example}
\end{figure}

\subsubsection{System Mapping}
FIG.~\ref{fig:data_flow_components} shows that all data are collected in the system layer, where the performance of the quantum computation is evaluated.
In this section, we describe the system mapping in terms of the gate reformulation for the target quantum computer structure and the performance evaluation.
The specific mapping process depends on the type of quantum instruction.
Quantum instructions in a hierarchically structured quantum assembly code are classified into three types: 1-qubit gates, 2-qubit gates and modules.

The set of 1-qubit gates includes $X$, $Z$, $H$, $S$ ($S^{\dag}$), $T$ ($T^{\dag}$), $R_Z(\theta)$, and preparation and measurement in the $Z$ basis.
The mapping process for such gates is straightforward and can be done independently.
Suppose that a Hadamard gate is applied to a qubit $q$.
If another quantum operation on this qubit was scheduled previously, the Hadamard gate is performed after that operation completes.
If the previous operation finishes at time $t(q)$, the Hadamard operation starts at $t(q)$ and finishes at $t(q)+H_t$, where $H_t$ is the execution time of the gate.
This is all there is to the mapping of a 1-qubit gate.
Note that the execution time and fidelity of a quantum gate are provided by the building block layer.
The present work deals with the CNOT gate as the only 2-qubit gate.
Although a SWAP gate is sometimes required, it can be implemented with three CNOT gates.
We treat the CNOT gate as a local gate acting on two nearest-neighbor qubits.
Suppose that a CNOT gate is applied to qubits $q_a$ and $q_b$.
To execute the gate, both qubits have to be temporally and spatially ready.
If they are apart, we have to move them next to each other via SWAP operations.
If one qubit is being manipulated by another operation, we have to delay the CNOT operation until both qubits are idle.
The CNOT operation then begins at $\max \{t(q_a), t(q_b) \}$ and finishes at $\max \{t(q_a), t(q_b) \} + CNOT_t$, where $\max \{t(q_a), t(q_b) \}$ is the time at which both qubits become idle and $CNOT_t$ is the execution time of a CNOT gate.
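The timing rules for 1- and 2-qubit gates can be summarized by a small per-qubit availability-time table, as in the Python sketch below. The gate durations are illustrative placeholders for the values supplied by the building block layer, and the SWAP-based movement of distant CNOT operands is omitted.

\begin{verbatim}
# Sketch of the per-qubit availability-time bookkeeping described above.
# t[q] is the time at which qubit q becomes idle. Durations H_t and
# CNOT_t are illustrative; SWAP-based qubit movement is omitted.
H_t, CNOT_t = 1.0, 10.0

def apply_1q(t, q, gate_time=H_t):
    # a 1-qubit gate starts when q is idle and occupies it for gate_time
    t[q] = t.get(q, 0.0) + gate_time

def apply_cnot(t, qa, qb, gate_time=CNOT_t):
    # a CNOT starts only when both operands are idle
    start = max(t.get(qa, 0.0), t.get(qb, 0.0))
    t[qa] = t[qb] = start + gate_time

t = {}
apply_1q(t, "q0")                    # H on q0
for target in ("q1", "q2", "q3", "q4"):
    apply_cnot(t, "q0", target)      # CAT-state style CNOT chain
print(max(t.values()))               # H_t + 4*CNOT_t = 41.0
\end{verbatim}

The final value reproduces the $MakeCAT_t = H_t + 4CNOT_t$ module duration used in the example below.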
The third type of quantum instruction, a module, resembles a multi-qubit composite quantum operation.
On the surface, the mapping of a module therefore looks very similar to the mapping of a 2-qubit CNOT gate: the argument qubits of the module have to be temporally and spatially ready.
The critical difference from the CNOT case is that a distinct physical space\footnote{The physical space for a module is the computing region described before.} is allocated to each module.
Therefore, to map a module, we have to consider qubit movements between the present module and the target module.

Suppose that a module $A$ is currently being mapped and we encounter a quantum instruction ``$M$ ($q_a$, $q_b$, $q_c$)" that calls a module $M$ with argument qubits $q_a$, $q_b$ and $q_c$.
We then have to move the argument qubits to the designated area of the module $M$.
The qubit movements are achieved by SWAP operations through the communication bus; we call this movement a \textit{forward} qubit passing.
After the forward qubit passing, the qubits are placed in the parameter qubit section of the module $M$ (see FIG.~\ref{fig:example_module_layout} (b)).
The quantum instructions of the module $M$ are then executed.
If an instruction calling yet another module is encountered, some qubits of the module $M$ are passed to the designated space of the newly called module and manipulated there according to its quantum instructions.
After all quantum instructions of the module $M$ have been executed, the passed qubits have to return to the original module $A$; we call this returning movement a \textit{backward} qubit passing.
FIG.~\ref{fig:parameter_passing} shows the module operation, including the forward and backward qubit passings.

\begin{figure}[t]
\centering
\epsfig{file=parameter_passing.eps, scale=0.3}
\caption{
An example of the module operation, which consists of seven steps: 1. (forward) move the qubits to the bus, 2. (forward) move to the target module, 3. (forward) move to the parameter qubit cells (dark grey cells), 4. module operations, 5. (backward) move the qubits to the bus, 6. (backward) move to the original module, and 7. (backward) move to the original qubit positions.
}
\label{fig:parameter_passing}
\end{figure}

We perform the mapping for all modules sequentially, in the order they appear in the quantum assembly code.
For this, we keep two lookup tables: a \textit{global} lookup table and a \textit{local} lookup table.
For each module, we first initialize a local lookup table for all of its qubits, and update the manipulation time of each qubit as we process each quantum instruction.
After processing all instructions of a module, we determine the execution time of the module as the maximum time over all of its qubits.
The performance of the module is recorded in the global lookup table.
If, during the mapping of a module, a module that has already been mapped is called, we can simply take its performance from the global lookup table.

After mapping all modules, the execution time of the quantum algorithm is determined as the maximum time over the qubits of the \textit{main} module.
For example, in FIG.~\ref{fig:cat_qasm} (a), the execution time of the algorithm is $PrepZ_t$ + $FP_{main\rightarrow MakeCAT}$ + $BP_{main\leftarrow MakeCAT}$ + $MakeCAT_t$, where $MakeCAT_t=H_t + 4CNOT_t$.
Here $FP_{main\rightarrow MakeCAT}$ ($BP_{main\leftarrow MakeCAT}$) is the time for the forward (backward) qubit passing.
The execution time of a qubit passing depends on the distance between the modules.

So far, we have described a system mapping algorithm for a hierarchically structured quantum assembly code.
The presented algorithm can, however, also be applied to a non-structured quantum assembly code, which contains only two types of quantum instructions: 1- and 2-qubit gates.
Regardless of the type of the quantum assembly code, the heart of the system mapping is, as mentioned before, 1) \textit{setting up} a quantum computer architecture and 2) \textit{recasting} the quantum algorithm for that architecture.
To be compatible with a non-structured quantum assembly code, a simple qubit array such as a regular 2-dimensional lattice may be enough.
The proposed framework supports system mapping on an arbitrary qubit layout such as the one shown in FIG.~\ref{fig:mapping_simple_example} (a).

\subsection{Building Block Layer}\label{subsec:building_block}

We apply fault-tolerant quantum computing protocols based on the $[[7,1,3]]$ Steane code~\cite{Steane:2006wn} and the surface code~\cite{Fowler:2009ep,Fowler:2012fi}.
Both codes have well-studied logical gate protocols.
The concatenation level for the Steane code and the code distance for the surface code are completely determined by the given quantum algorithm~\cite{Jones:2012kc,Suchara:2013tg}.
In this work, we set both figures by using the KQ formalism~\cite{Steane:2003gp}.

\subsubsection{Steane code}\label{sec:steane_code}

The $[[7, 1, 3]]$ Steane code encodes the logical quantum information of one qubit into seven physical qubits and protects it from arbitrary 1-qubit quantum noise.
Since transversal implementations of the logical Hadamard and logical CNOT gates are available, many studies of fault-tolerant quantum computing based on the Steane code have been conducted.
In Ref.~\cite{Svore:2007tb}, an optimal design of a logical qubit for the Steane code under 2-dimensional nearest-neighbor qubit interaction was proposed; it achieves a threshold of $O(10^{-5})$ with 48 physical qubits and a modified quantum error correction.

In this work, we have redesigned a logical qubit with 30 physical qubits.
Seven of them are used for holding data, and the others are used temporarily for logical operations and error correction.
In particular, we apply Shor quantum error correction~\cite{Shor:1996wi}, which exploits a $4$-qubit Shor state for the syndrome measurement; for this, we prepare and verify the Shor state~\cite{Weinstein:2012jr}.
We implement the preparation of a logical state following Ref.~\cite{Goto:2016jo}.
Most logical gates are implemented as transversal gates, and the non-Clifford $T$ gate is implemented by consuming a magic state.
We generate magic states by employing a $7$-qubit Shor state, without magic state distillation~\cite{Weinstein:2015ce}.

The accuracy threshold theorem~\cite{Knill:1996tm,Aharonov:2008jn} says that if we have a quantum device with a physical error rate below the code threshold, it is possible to achieve arbitrarily reliable quantum computation.
For a very large quantum algorithm, however, encoding only once may not be enough.
Fortunately, by encoding a qubit recursively~\cite{Knill:1996ty}, we can lower the effective error rate to the point where a reliable quantum computation is possible.

Given a quantum algorithm, we calculate $KQ$ and set the maximum tolerable error rate $P_{max}$ to $1/KQ$.
We then determine the concatenation level $l$ as the smallest level satisfying
\begin{equation}
P_{max} \geq \frac{{(c_{op}p^2)}^{2^{l}}}{c_{op}},
\end{equation}
where $p$ is the physical error rate, $op$ ranges over quantum error correction and the logical operations, and $c_{op}$ is the constant factor of the specific logical operation $op$.
We obtained the constant of each logical operation from the $KQ$ of a quantum circuit implementing that operation; for example, $c_{QEC}$ corresponds to the $KQ$ of the QEC quantum circuit.
In this work, we have not optimized the arrangement of qubits (see Table~\ref{tab:logical_qubit_steane}); the quantum error correction and the logical operations therefore work sub-optimally, and the threshold is lower than the optimal value\footnote{Note that the objective of our work is not to increase the code threshold, but to configure a quantum computing system and analyze its performance accurately.}.
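A minimal sketch of this level selection is given below: it returns the smallest $l$ satisfying the inequality above. The values of $KQ$, $p$ and $c_{op}$ are illustrative only.

\begin{verbatim}
# Sketch of choosing the concatenation level l: find the smallest l
# with (c_op * p^2)^(2^l) / c_op <= P_max = 1/KQ.
# KQ, p and c_op below are illustrative values only.
def concatenation_level(KQ, p, c_op, l_max=10):
    p_max = 1.0 / KQ
    for l in range(1, l_max + 1):
        if (c_op * p * p) ** (2 ** l) / c_op <= p_max:
            return l
    raise ValueError("no concatenation level up to l_max suffices")

print(concatenation_level(KQ=1e10, p=1e-3, c_op=1e4))  # -> 2
\end{verbatim}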
Suppose that the concatenation level for a quantum computation is determined as $l$.
The implementation of the logical $T$ gate at level $l$ then consists only of Clifford operations at lower levels $k<l$.

\subsubsection{Surface code}\label{sec:surface_code}

In the surface code scheme, the non-Clifford $T$ gate likewise consumes a magic state, which has to be prepared by magic state distillation.
By performing multiple rounds of the distillation, the magic spread over many noisy states is concentrated into only a few states, so that high-fidelity magic states are obtained.
In this work, we use the magic state distillation protocols described in Refs.~\cite{Fowler:2009ep,Fowler:2012fi}.
The required number of iterations of the protocol is completely determined by the target fidelity and the physical error rate~\cite{Suchara:2013tg}.
We set the target error rate of the magic states to $10^{-12}$ to achieve a high fidelity for the configured quantum computation\footnote{Roughly $1/ \# \textrm{T gates}$.}; empirically, a 2-round distillation achieves this target at physical error rates of $10^{-3}\sim 10^{-5}$.

We determine the capacity of the magic state factory that prepares and supplies $|A_L\rangle$ states.
The capacity depends on the quantum algorithm and on the durations of a state distillation and of a logical $T$ gate.
Suppose that logical $T$ gates are applied consecutively to a qubit.
The magic state factory then has to supply high-fidelity magic states continuously.
If the factory generates only one magic state at a time, a supply latency occurs whenever the magic state distillation takes longer than the logical $T$ gate.
The magic state factory therefore needs the capacity to prepare at least $\max \{parallel\ T\} \times time(MSD)/ time(T)$ states at a time, where $time(MSD)$ and $time(T)$ are the durations of the magic state distillation and of the logical $T$ gate protocol.
Empirically, $time(MSD)/time(T)$ is approximately 20 in our estimation.

To conclude, the numbers of physical qubits required for $|A_L\rangle$ and $|Y_L\rangle$ are, respectively,
\begin{equation}
\max \{parallel\ T\} \cdot \frac{time(MSD)}{time(T)} \cdot \bigl(15 \cdot Q_L\bigr)^{r-1} \cdot (16\cdot Q_L),
\end{equation}
and 
\begin{equation}
\max \{parallel\ T, parallel\ S \} \cdot \bigl(7\cdot Q_L\bigr)^{r-1}\cdot (8\cdot Q_L),
\end{equation}
where $Q_L$ is the number of physical qubits needed to implement a logical qubit and $r$ is the required number of iterations.
The last distillation round requires one more logical qubit for the Bell state~\cite{Fowler:2012fi}.
On top of this, the ancilla qubits needed to perform CNOT gates during the distillation protocol should also be included.

\section{Performance Metric}\label{sec:performance_metric}
We describe how to evaluate the quantum computing metrics: execution time, fidelity and the number of qubits.

\subsection{Execution Time}\label{subsec:execution_time}

We examine the quantum computing time in two steps.
In the system mapping, we obtain the single-round execution time $T_{one}$ of a quantum algorithm.
At the same time, the fidelity $F_{alg}$ of the quantum computation can be determined; how to calculate this fidelity is described in the following subsection.
Since $T_{one}$ is the time for running the quantum algorithm once, there is no guarantee that this single run is reliable: noisy components may corrupt the computation.
To overcome this problem, we calculate the average execution time $T_{avg}$ by including the number of iterations required to achieve fidelity 1:
\begin{equation}
T_{avg} = T_{one}/F_{alg}.
\end{equation}
We believe this averaged time reflects the time required to obtain a reliable answer from the quantum computation\footnote{This does not mean that the output of the computation is an exact solution; we do not consider the probabilistic nature of a quantum algorithm.}.

\subsection{Fidelity}\label{subsec:fidelity}
The fidelity of a quantum computation can be calculated from the fidelities of the logical quantum gates as follows~\cite{Suchara:2013tg}:
\begin{equation}
F_{alg} = \prod_{g} {F_g}^{N_g},
\end{equation}
where $g$ ranges over the quantum gates used in the algorithm, $F_g$ is the fidelity of the gate $g$, and $N_g$ is the total count of that gate in the algorithm.
The value $N_g$ is obtained from the system mapping, and $F_g$ is determined in the building block layer.
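The two estimates above combine as in the short Python sketch below; the gate fidelities, gate counts and single-round time are illustrative placeholders.

\begin{verbatim}
# Sketch of the performance estimates above:
#   F_alg = prod_g F_g^{N_g}   and   T_avg = T_one / F_alg.
# All numbers are illustrative placeholders.
gate_fidelity = {"H": 0.999999, "T": 0.999995, "CNOT": 0.99999}
gate_count    = {"H": 10000,    "T": 50000,    "CNOT": 20000}

F_alg = 1.0
for g, F_g in gate_fidelity.items():
    F_alg *= F_g ** gate_count[g]

T_one = 3600.0            # single-round execution time (seconds), assumed
T_avg = T_one / F_alg     # expected time to obtain a reliable run
print(F_alg, T_avg)       # roughly 0.63 and 5.7e3 s for these numbers
\end{verbatim}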
This fidelity calculation is, however, only applicable to Steane code based quantum computing.
As shown in Section~\ref{sec:surface_code}, the final fidelity of a surface code based quantum computation is given by $F_{alg} = 1- KQ\cdot \epsilon_L$.

\subsection{The Number of Physical Qubits}\label{subsec:qubits}

We examine the number of physical qubits required to run a quantum algorithm.
Since this number differs according to the fault-tolerant quantum computing scheme, we first identify the common factor, the qubits of the quantum algorithm itself, and then go into the specific cases.

The proposed hierarchical quantum computer structure consists of multiple modules and a communication bus connecting them.
From the quantum assembly code, we can find the number of logical (or physical) qubits of the modules:
\begin{equation}
Q_{comp} = \sum_{M} \bigl(Q^{M}_{local} + Q^{M}_{param} \bigr), 
\end{equation}
where $Q^{M}_{local}$ ($Q^{M}_{param}$) is the number of local (parameter) qubits of a (computing) module $M$.

\subsubsection{Steane Code Quantum Computing}
We first consider Steane code quantum computing.
The structure of the communication bus depends on the chosen global layout of the modules.
For the 1D global layout, the number of bus qubits is simply $Q_{comm} = bandwidth \times length$, where
\begin{equation}
length = \sum_{M} M_{width},
\end{equation}
with $M_{width}$ the width of a module, which is 1 for the 1D local layout and $\lfloor \sqrt{Q^{M} } \rfloor$ for the 2D local layout.
Note that $Q^{M} = Q^{M}_{local} + Q^{M}_{param}$.

For the 2D global layout, on the other hand, the number of qubits is calculated as follows.
Suppose that the number of modules is $|M|$; then a 2D layout of size $\lfloor \sqrt{|M|} \rfloor \times \lfloor \sqrt{|M|} \rfloor$ is needed.
To keep the shape of the modules on the 2D layout, all modules have the same size of $n\times n$ cells, where $n = \lfloor \sqrt{\max_M \{Q^{M}\}} \rfloor$.
The number of logical qubits required for the communication bus is then
\begin{equation}
Q_{comm} = 2\cdot bandwidth\cdot n \cdot A\cdot B + \bigl(n \cdot A\bigr)^2,
\end{equation}
where $A = \lfloor \sqrt{|M|} \rfloor - 1$ and $B = \lfloor \sqrt{|M|} \rfloor$.
In this work, we set the bandwidth of the bus to the maximum number of parameter qubits, $bandwidth = \max_{M} \{Q^{M}_{param}\}$.

So far, we have identified the number of logical qubits required for a quantum computation.
As mentioned in Section~\ref{sec:steane_code}, a logical qubit at the concatenation levels $k=1\sim l-1$ is composed of 25 qubits, and a qubit at level $l$ consists of $30$ qubits.
Depending on the physical error rate, we need to apply this encoding recursively.
The total number of physical qubits for Steane code quantum computing is therefore
\begin{equation}
Q_{steane} = 25^{r-1} \cdot 30 \cdot Q_{comp} + 25^r \cdot Q_{comm},
\end{equation}
where $r$ is the determined concatenation level.
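The Steane code qubit accounting above can be summarized in a few lines of Python. The sketch below follows the equations of this subsection for the 1D global layout with 2D local layouts; the module sizes and the concatenation level $r$ are illustrative.

\begin{verbatim}
# Sketch of the Steane code qubit accounting (1D global, 2D local).
# Module sizes and the concatenation level r are illustrative.
import math

modules = {                 # per-module (local, parameter) qubit counts
    "main":    (8, 0),
    "MakeCAT": (2, 5),
}
r = 2                       # concatenation level

Q_comp = sum(loc + par for loc, par in modules.values())       # 15
bandwidth = max(par for _, par in modules.values())            # 5
length = sum(math.floor(math.sqrt(loc + par))                  # 2 + 2
             for loc, par in modules.values())
Q_comm = bandwidth * length                                    # 20

# 30 physical qubits per top-level logical qubit, 25 per qubit at
# each of the r-1 lower levels
Q_steane = 25 ** (r - 1) * 30 * Q_comp + 25 ** r * Q_comm
print(Q_steane)                                                # 23750
\end{verbatim}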
\subsubsection{Surface Code Quantum Computing}
We implement double-defect logical qubits.
For a double-defect logical qubit of code distance $d$, each defect has to be separated from the boundary by $d$ data qubits, and the two defects also have to be separated by $d$ data qubits.
To perform a braiding operation in a fault-tolerant manner, however, the space between the two defects has to be at least $2d+\lfloor d/4 \rfloor$ rather than only $d$.
Therefore, implementing a double-defect logical qubit of code distance $d$ requires $(2A+1)(2B+1)$ physical qubits, where $A=2d-2 + \lfloor d/4 \rfloor$ and $B=4d-4 + 3\lfloor d/4 \rfloor$.
FIG.~\ref{fig:double_defect_distance_3} shows a double-defect logical qubit of code distance 3, which requires a total of 251 physical qubits: 126 data qubits and 125 syndrome qubits.

Two neighboring logical qubits also have to be separated by $\lfloor d/4 \rfloor$ data qubits to maintain the code distance between them during braid transformations.
Accordingly, if $N$ logical qubits are arranged on a 2-dimensional layout of $n_h\times n_w$,
\begin{equation}
\Bigl(2\bigl(n_w A + (n_w-1)\lfloor d/4 \rfloor\bigr)+1\Bigr) \Bigl(2\bigl(n_h B + (n_h -1)\lfloor d/4\rfloor \bigr)+1\Bigr)
\end{equation}
qubits are necessary, where $A$ and $B$ are as defined above.

We also have to consider the ancilla qubits required for a CNOT gate.
As mentioned before, a CNOT gate between logical qubits of the same type consists of three CNOT gates between logical qubits of different types.
For this, two ancilla qubits, an $X$-cut qubit $|g_L\rangle$ and a $Z$-cut qubit $|+_L\rangle$, are required.
We allocate a pair of these ancilla qubits to every module where a CNOT gate is performed.
In that case, the number of logical qubits of a module is the sum of the parameter qubits, the local qubits and the two ancilla qubits.

\begin{figure}[t]
\centering
\epsfig{file=double_defect_distance_3.eps, scale=0.28, angle=90}
\caption{
	A double $Z$-cut qubit of code distance 3.
	The blue dots indicate data qubits.
	One of the green chains indicates a logical $Z$ operator, and the red chain indicates a logical $X$ operator.
	Through the yellow line, a fault-tolerant braiding operation can be performed from another $X$-cut qubit to this $Z$-cut qubit.
	Each defect has to be separated from the boundary by 3 data qubits, and the two defects have to be separated by 6 data qubits.
	In total, 126 data qubits and 125 syndrome qubits are required to implement a distance-3 logical qubit.
}
\label{fig:double_defect_distance_3}
\end{figure}
\begin{figure}[t]
\centering
\epsfig{file=surface_code_architecture.eps, scale=0.3}
\caption{
	A quantum computer structure based on surface code quantum computing.
	Dark green dots indicate defects, and yellow cells are used as paths for braiding transformations.
	Blue cells can be used for the forward/backward qubit passings between distant modules.
	The section enclosed by the dotted red line is a computing region, i.e., a module.
	A defect has to be separated from the boundary of a logical qubit or a braiding path by $d$ data qubits, and two logical qubits are mutually separated by $\lfloor d/4 \rfloor$ data qubits.
}
\label{fig:surface_code_architecture}
\end{figure}

In the case of surface code quantum computing, a communication bus may also be necessary to perform the forward/backward qubit passings efficiently\footnote{It is possible to perform a fault-tolerant braiding between distant logical qubits in different modules, in which case the qubit passings would not be required. However, a braiding between distant logical qubits requires very many physical measurements, which makes the computation unreliable. We therefore believe that moving logical qubits to a nearby location (the target module) and performing the braiding between close qubits is more reliable, and for this reason we perform qubit passing in the surface code quantum computing as well.}.
However, unlike Steane code quantum computing, which performs a sequence of SWAP operations proportional to the passing distance, qubit movement in surface code quantum computing is much more efficient: it can be achieved by multi-cell qubit movements alone~\cite{Suchara:2013tg,Fowler:2012fi}.
We therefore assume that the surface code quantum computing performs the qubit passings sequentially on a bus of narrow bandwidth.
We set the bandwidth of the bus to $\lfloor d/4 \rfloor$, and in addition the movement path has to be separated from any boundary by $d$ data qubits.
FIG.~\ref{fig:surface_code_architecture} shows the quantum computer architecture based on the surface code and a structured quantum assembly code, where all modules are arranged on a 1-dimensional layout with a spacing of $\lfloor d/4 \rfloor$ data qubits between neighboring modules.

We conclude this section by restating the physical qubits for the magic state factory.
The required physical qubits are $\max \{parallel\ T, parallel\ S \}\times (7Q_L)^{r-1}\times (8Q_L)$ for $|Y_L\rangle$ and $\max \{parallel\ T\} \times (15Q_L)^{r-1}\times (16Q_L)\times time(MSD)/time(T)$ for $|A_L\rangle$.
Note that $Q_L$ is the number of physical qubits needed to implement a logical double-defect qubit, and $r$ is the number of magic state distillation iterations.
\n\n\n\\section{Analysis of Performance and Resource}\\label{sec:performance_evaluation}\n\nWe show the performance analysis of quantum computings we configured.\nFor that, we set the objective fidelity of a quantum computing as 0.7.\nThat is, a single round quantum computing time $T_{one}$ is the time of a quantum computing that achieves a fidelity at least 0.7.\nWe set the error rate of physical operation for Steane code quantum computing as $10^{-9}$, but for surface code quantum computing we apply the physical error rate $10^{-3}$.\nWe also assume that the duration of a physical operation is 1 $\\mu s$ conservatively.\nThis assumption may be pessimistic than other literature assuming tens $\\sim$ hundreds nano seconds for a physical operation.\nThe Shor algorithm we test comes from the benchmark~\\cite{ScaffCCScaffCC:vm,Suchara:2013tg}.\n\n\n\n\n\n\\subsection{Case of applying Compile}\\label{sec:performance_compile}\n\nWe show the performance changes by applying a quantum compile, i.e., decomposition of a $R_Z(\\theta)$ gate for an arbitrary angle $\\theta$.\nEven though such decomposition is required to implement a fault-tolerant quantum computing, in this section we perform physical quantum computing without error correction to see the effect of the compile only.\nFor that, the components of the other layers are completely fixed.\n\n\nFor the decomposition, we set the precision of the decomposition as $10^{-2}$, which means that a decomposition of $R_Z(\\theta)$ gate achieves the $R_Z(\\theta)$ operation with an error probability $10^{-2}$.\nConsequentially, both $R_Z(\\theta_1)$ and $R_Z(\\theta_2)$ can be decomposed into the same sequence of $H$, $S$ and $T$ if $|\\theta_1-\\theta_2| \\leq 0.01$.\nUnder such precision, a $R_Z(\\theta)$ gate is usually decomposed into a sequence of $40\\sim 50$ $H$, $S$ and $T$ gates.\\footnote{If we set the precision degree with a smaller number, we will get a longer sequence of $H$, $S$ and $T$ gates for a $R_Z(\\theta)$ gate. On the one hand, such sequence can achieve the target $R_Z(\\theta)$ gate more exactly. On the other hand, the quantum computing time will be larger than the time shown in this work. Besides, practically the duration to conduct the performance analysis also increase nontrivially. In this regards, we have set $10^{-2}$ for the precision.}\nPlease note that the decomposition algorithm in the compile works probabilistically.\n\n\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[]{\n\t\\epsfig{file=noiseless_nonlocal_gate_time.eps, scale=0.22}\n}\n\\subfigure[]{\n\t\\epsfig{file=noiseless_nonlocal_gate_qubits.eps, scale=0.22}\n}\n\\caption\n{\nWe show the quantum computing performance change by the compile effect. \n(a) Quantum computing time and (b) Qubits.\n}\n\\label{fig:noiseless_nonlocal_gate}\n\\end{figure*}\n\n\nFIG.~\\ref{fig:noiseless_nonlocal_gate} compares the performance. 
\nBy decomposing $R_Z(\\theta)$ gate, the quantum computing time increases as much as $4\\sim 5$ times, but the number of physical qubits stays equivalently.\nBy the way, in general $R_Z(\\theta)$ gate takes more than half of all quantum gates in Shor's factoring algorithm (see Table~\\ref{tab:RZ_ratio}).\nOn considering that $R_Z(\\theta)$ gate is decomposed into a sequence of dozens of $H$, $S$ and $T$ gates as we mentioned above, readers may guess that the performance difference between both cases should be more larger than the shown in the figure.\n\n\nAs mentioned above, we have set the precision of the decomposition as $10^{-2}$.\nMost $\\theta$ in the Shor algorithm are very small ($< 0.01$),\\footnote{$\\theta = \\pi\/2^{n-1}$ with $n=1\\sim N$ for $N$-bit integer factoring.} and therefore the decomposition of such rotation operation works as the identity operation. \nWe show the top dominant $\\theta$ used in Shor N=128 algorithm in Table~\\ref{tab:dominant_theta}.\nAll the angles are less than $0.01$.\nWhile we have not described all $\\theta$ in the algorithm in the table, empirically $75\\%$ of the angles applied in the algorithm are less than $0.01$.\nIn this regards, the performance degradation by decomposing $R_Z(\\theta)$ gates is not so remarkable regardless of the quantity of $R_Z(\\theta)$ gates in the algorithm.\n\n\n\\begin{table}[t]\n\\caption{\nThe proportion of $R_Z(\\theta)$ gate in Shor $N=128$.\n\\newline\n}\n\\small\n\\centering\n\\begin{tabular}{c|c|c|c} \\hline \nInput Size & $R_Z(\\theta)$ & Total Gates & Proportion\\\\ \\hline\n128 & $2.036 \\times 10^{9}$ & $3.399\\times 10^{9}$ & 59.90\\%\\\\ \\hline\n256 & $1.630\\times 10^{10}$ & $2.719\\times 10^{10}$ & 59.94\\%\\\\ \\hline\n512 & $1.304\\times 10^{11}$ & $2.175\\times 10^{11}$ & 59.95\\%\\\\ \\hline\n\\end{tabular}\n\\label{tab:RZ_ratio}\n\\end{table} \n\n\n\\begin{table}[t]\n\\caption{\nList of top 10 dominant angles in Shor's factoring algorithm, N=128.\nThe $\\theta$ listed in this table is less than $0.01$ and therefore $R_Z(\\theta)$ works as an identity operator.\nThe rotational angle $\\theta$ of the gate is from $\\pi\/2^{n-1}$ in Quantum Fourier Transform, and the exact representation of the angle is limited by a classical computer precision.\n\\newline\n}\n\\small\n\\centering\n\\begin{tabular}{c|c|c} \\hline \n$\\theta$ & Count & Proportion\\\\ \\hline\n$0.000000\\times 10^{0}$ & $6.88\\times 10^{8}$ & $0.3381$ \\\\ \\hline\n$-0.000000\\times 10^{0}$ & $3.44\\times 10^{8}$ & $0.1691$ \\\\ \\hline\n$-5.000000\\times 10^{-5}$ & $3.13\\times 10^{7}$ & $0.0154$ \\\\ \\hline\n$-1.000000\\times 10^{-4}$ & $3.11\\times 10^{7}$ & $0.0153$ \\\\ \\hline\n$5.000000\\times 10^{-5}$ & $3.10\\times 10^{7}$ & $0.0152$ \\\\ \\hline\n$-2.000000\\times 10^{-4}$ & $3.09\\times 10^{7}$ & $0.0152$ \\\\ \\hline\n$1.000000\\times 10^{-4}$ & $3.08\\times 10^{7}$ & $0.0151$ \\\\ \\hline\n$-4.000000\\times 10^{-4}$ & $3.08\\times 10^{7}$ & $0.0151$ \\\\ \\hline\n$2.000000\\times 10^{-4}$ & $3.07\\times 10^{7}$ & $0.0151$ \\\\ \\hline\n$-7.500000\\times 10^{-4}$ & $3.06\\times 10^{6}$ & $0.0150$ \\\\ \\hline\n\\end{tabular}\n\\label{tab:dominant_theta}\n\\end{table} \n\n\n\\subsection{Case of applying Compile and Error Correction}\\label{sec:performance_qec}\n\n\nWe show the performance of a fault-tolerant quantum computing but without considering local qubit interaction on a quantum computer architecture.\nWe assume that all qubits are directly interacted with an arbitrary qubit regardless of its position, and 
\subsection{Case of applying Compile and Error Correction}\label{sec:performance_qec}

We now show the performance of fault-tolerant quantum computing, but without considering local qubit interaction on a quantum computer architecture.
We assume that every qubit can interact directly with any other qubit regardless of its position, so that no communication bus is required in the architecture.
This setting isolates the effect of quantum error correction.
For this evaluation, we configured only Steane code based quantum computing, because surface code computing inherently relies on a 2-dimensional nearest-neighbor qubit layout.
As mentioned above, for fault-tolerant quantum computing we compile the quantum algorithm by decomposing the $R_Z(\theta)$ gates into $H$, $S$ and $T$ gates.

\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=noisy_nonlocal_gate_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=noisy_nonlocal_gate_qubits.eps, scale=0.22}
}
\caption
{
Quantum computing performance with quantum error correction.
For fault-tolerant operation, the quantum algorithm is compiled by decomposing the $R_Z(\theta)$ gates into sequences of $H$, $S$ and $T$ gates.
In this evaluation, the quantum computer architecture and local qubit interaction are not considered.
(a) Quantum computing time and (b) qubits.
The concatenation level is 1 for the input size 128 and 2 for the other cases.
}
\label{fig:noisy_nonlocal_gate}
\end{figure*}

FIG.~\ref{fig:noisy_nonlocal_gate} shows the quantum computing performance.
The increases in execution time and number of qubits are most remarkable when the input size grows from 128 to 256; this is because the required concatenation level increases there from 1 to 2 to satisfy the objective fidelity of 0.7.
As the concatenation level stays the same when the input grows from 256 to 512, the increases in quantum computing time and number of qubits are more modest there.

Since in this section we assume fault-tolerant quantum computing with non-local qubit interaction, the performance change shown in the figure is caused solely by the fault-tolerant quantum computing protocol.
For example, in FIG.~\ref{fig:noisy_nonlocal_gate} (b), the numbers of qubits in the Steane code quantum computing exceed those of the physical computing by factors of 30, 900 and 900, respectively.
Recall that we designed a logical qubit by assembling 30 physical qubits.

\subsection{Case of applying Compile, Error Correction and System Architecture}\label{sec:performance_local_qec}

In this section, we show the quantum computing performance when all the realistic factors described above are considered.
We apply fault-tolerant quantum computing on quantum computer architectures where only local qubit interaction is permitted.

\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=noisy_local_gate_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=noisy_local_gate_qubits.eps, scale=0.22}
}
\caption
{
Quantum computing performance of the Steane code based local fault-tolerant quantum computing.
To see the influence of the architectural limitation, we also add the performance when arbitrarily long-range qubit interaction is allowed.
(a) Quantum computing time and (b) qubits.
The concatenation level for the local qubit interaction cases (black, red and blue lines) is 3 in all cases, whereas for the non-local qubit interaction case (green line) it is 1 for the input size 128 and 2 for the others.
See FIG.~\ref{fig:layout_example} for the quantum computer architectures.
}
\label{fig:noisy_local_gate_steane}
\end{figure*}
FIG.~\ref{fig:noisy_local_gate_steane} shows the performance analysis of the Steane code quantum computing.
We used quantum computer architectures with the layouts (1D global, 1D local), (1D global, 2D local) and (2D global, 2D local); see FIG.~\ref{fig:layout_example} for the quantum computer architectures.
To see the influence of local qubit interaction, we also compare with the performance of the quantum computing based on non-local qubit interaction shown in the previous section.

As shown in the figure, the performance degradation caused by local qubit interaction on a quantum computer architecture is highly nontrivial.
This is because many modules are spread over the quantum computer and the communication (qubit passing) is performed frequently.
Table~\ref{tab:SWAP_ratio} shows the proportion of SWAP gates in the implementation of the Shor algorithm.
Strikingly, on the proposed quantum computer architecture with nearest-neighbor qubit interaction, most quantum operations in the Steane code quantum computing are qubit movements.
We regard the quantity of these qubit movements as the temporal overhead of implementing a quantum algorithm on a quantum computer.
Such a large overhead caused by qubit movements could be reduced by improving the quantum computer structure, the fault-tolerant quantum computing scheme or the system mapping algorithm.

\begin{table}[t]
\caption{
The proportion of \textit{SWAP} gates in Shor's factoring algorithm.
The layout indicates a combination of global layout and local layout.
\newline
}
\small
\centering
\begin{tabular}{c|c|c|c|c} \hline 
Input Size & Layout & SWAP & Total Gates & Proportion\\ \hline
\multirow{3}{*}{128} & (1D, 1D) & $ 7.371\times 10^{11}$ & $ 7.405\times 10^{11}$ & 99.54\%\\ \cline{2-5}
 & (1D, 2D) & $1.262\times 10^{12}$ & $1.266\times 10^{12}$ & 99.73\%\\ \cline{2-5}
 & (2D, 2D) & $1.527\times 10^{13}$ & $1.527\times 10^{13}$ & 99.97\%\\ \hline
\multirow{3}{*}{256} & (1D, 1D) & $1.116\times 10^{13}$ & $1.118\times 10^{13}$ & 99.76\%\\ \cline{2-5}
 & (1D, 2D) & $1.856\times 10^{13}$ & $1.859\times 10^{13}$ & 99.85\%\\ \cline{2-5}
 & (2D, 2D) & $2.719\times 10^{14}$ & $2.720\times 10^{14}$ & 99.99\%\\ \hline
\multirow{3}{*}{512} & (1D, 1D) & $1.068\times 10^{14}$ & $1.070\times 10^{14}$ & 99.80\%\\ \cline{2-5}
 & (1D, 2D) & $1.811\times 10^{14}$ & $1.813\times 10^{14}$ & 99.88\%\\ \cline{2-5}
 & (2D, 2D) & $5.789\times 10^{15}$ & $5.789\times 10^{15}$ & 99.99\%\\ \hline
\end{tabular}
\label{tab:SWAP_ratio}
\end{table} 

The figure shows that a quantum computer architecture with the 1D global layout provides better performance than one with the 2D global layout.
However, this may not always be the case: it depends entirely on the number of modules in the quantum computing program (see FIG.~\ref{fig:scaffold_code}) and on the arrangement of the modules on the architecture.
In general, the 2D global layout is the better architecture in terms of qubit communication when the number of modules is very large.
On average, arranging the modules on the 2D global layout reduces the distance between modules compared with the 1D global layout, and therefore the communication cost of the qubit passing becomes lower.
As an example, FIG.~\ref{fig:noiseless_local_gate_gse} shows that a ground state estimation algorithm~\cite{Suchara:2013tg,JavadiAbhari:2015jf,ScaffCCScaffCC:vm} works better on a quantum computer architecture with the 2D global layout\footnote{In the benchmark program, the ground state estimation algorithm is composed of at least tens of thousands of modules, whereas the program of the Shor algorithm consists of thousands of modules.}.

\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=noiseless_local_gate_gse_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=noiseless_local_gate_gse_qubits.eps, scale=0.22}
}
\caption
{
The quantum computing performance of a ground state estimation algorithm over the input sizes $M=20, 40, 60, 80$.
Quantum gates are noiseless, and only nearest-neighbor qubits interact on the quantum computer architectures.
(a) Quantum computing time and (b) qubits.
}
\label{fig:noiseless_local_gate_gse}
\end{figure*}

\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=surface_code_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=surface_code_qubits.eps, scale=0.22}
}
\caption{
The quantum computing performance of the surface code quantum computing.
(a) Time and (b) qubits.
The code distances are 25, 27 and 30, respectively.
In (b), we also show the physical qubits required for the magic state factory.
}
\label{fig:surface_code}
\end{figure*}

FIG.~\ref{fig:surface_code} shows the quantum computing performance of the surface code quantum computing; the corresponding quantum computer architecture is shown in FIG.~\ref{fig:surface_code_architecture}.
At the error rate $10^{-3}$, as the input size increases, the required code distance rises through 25, 27 and 30 to satisfy the objective fidelity.
In the figure, we also show the number of physical qubits required to run the magic state factory that supplies $|A_L\rangle$ states during the computation.
As shown in the figure, the capacity of the magic state factory stays almost the same regardless of the input size of the Shor algorithm; it increases only with the code distance.

In Ref.~\cite{Fowler:2012fi}, the authors estimated the surface code execution time of the Shor algorithm for factoring a 2000-bit integer.
They made this estimation by focusing only on the sequential Toffoli gates in the modular exponentiation circuit.
Following their estimation method, the quantum computing time to factorize a 512-bit integer would be only 0.45 hours ($40\times 512^3 \times 3\times100~ns$), regardless of the physical error rate.
However, as shown in the figure, our estimation indicates that $8.78\times 10^{5}$ hours are required for the same task when the physical error rate is $10^{-3}$.
As shown in Section~\ref{sec:accurate_gates}, the algorithm execution time is reduced as the physical error rate decreases.

We believe the reasons for this enormous gap between the two estimations are as follows.
First, our analysis is based on all quantum gates included in the Shor algorithm, whereas their analysis focuses only on the Toffoli gates.
In our estimation, transversal gates account for about 60 \% of the total (\# transversal gates/\# total gates); in other words, their estimation ignores the execution time of this huge number of transversal gates and the error correction involved.
We believe the reasons for the enormous gap between the two estimations are as follows.
First, our analysis is based on all quantum gates included in Shor's algorithm, but their analysis focuses on the Toffoli gates only.
In our estimation, the proportion of transversal gates is about 60\% (\# transversal gates / \# total gates); in other words, their estimation ignores the execution time of this huge number of transversal gates and the involved error correction.
Second, we have conservatively assumed that a physical gate operates in 1 $\mu s$, while their estimation is based on 100 $ns$ measurement gates\footnote{Their implementation of a Toffoli gate completes the operation within three measurement operations.}.
Third, while they used an efficient decomposition of a Toffoli gate, we have used the de facto standard decomposition~\cite{Nielsen:2000ga}.
Fourth, they might assume that braiding operations for parallel CNOT gates can be performed in parallel without any path conflicts, but we applied one braiding operation at a time to avoid conflicts with other braiding paths.
Please note that for Shor's algorithm with $N=m$, $m$ CNOT gates can be performed in parallel in the ideal case.
Fifth, we have considered architecture-related issues such as qubit passing between distant modules, but they did not.
Lastly, we have applied $d$ rounds of quantum error correction, where the code distance $d$ is determined based on the physical error rate, but they did not.


\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=steane_surface_10-9_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=steane_surface_10-9_qubits.eps, scale=0.22}
}
\caption
{
A comparison of the quantum resources of the Steane code quantum computing and the surface code quantum computing.
The physical error rate is $10^{-9}$.
The quantum computer architecture for the Steane code quantum computing is the 1D global layout with the 2D local layout because, as shown in FIG.~\ref{fig:noisy_local_gate_steane}, this architecture shows the best performance in this work.
We also include the Steane code quantum computing with non-local qubit interaction because the performance of the Steane code quantum computing significantly depends on the quantum computer architecture.
}
\label{fig:steane_surface_compare}
\end{figure*}


One of the reasons why the surface code has attracted much attention is that it requires relatively low overhead.
In what follows, we simply compare the Steane code and the surface code in terms of quantum resources, without considering their theoretical foundations.
For a fair comparison, we assume the physical device error rate is $10^{-9}$ in both cases.
FIG.~\ref{fig:steane_surface_compare} shows the quantum computing time and qubits needed to run Shor's algorithm.
As mentioned before, the performance of the Steane code quantum computing completely depends on the quantum computer architecture.
Therefore, to focus on the difference in quantum resources caused only by the fault-tolerant quantum computing scheme, we also compare the situation where non-local qubit interaction is allowed.
As shown in the figure, for small input sizes the Steane code quantum computing requires less time and fewer qubits than the surface code quantum computing.
This is because of the non-local multi-qubit operations.
But, as the input size increases, the surface code quantum computing shows better performance than the Steane code quantum computing even when non-local qubit interaction is allowed.


\section{Usability of the proposed framework}\label{sec:performance_improvement}

The objective of the proposed framework is to help design and analyze a quantum computing.
In this regard, in this section we show how to use it for analyzing previously proposed high-performance quantum computing methods on a realistic quantum computer.
The first is an efficient compilation (Section~\ref{sec:efficient_CRN}), the second is an improved physical gate (Section~\ref{sec:accurate_gates}) and the last is the strategy for fault tolerance (Section~\ref{sec:degree_fault_tolerance}).


\subsection{Efficient Decomposition of Controlled-$R_n$}\label{sec:efficient_CRN}
The authors of Ref.~\cite{Kim:2018hs} proposed an efficient decomposition algorithm for a controlled-$R_n$ gate.
By employing an ancilla qubit, they reduced the total number of quantum gates $\{H, S, T\}$ from 35 (Ref.~\cite{Kliuchnikov:2013tr}) to 21.
We show how the proposed decomposition affects the execution time of Shor's algorithm.
Even though the proposed compilation algorithm itself requires more qubits, by simultaneously reducing the algorithm execution time and increasing the fidelity of the quantum computing, fewer qubits are required in total.


FIG.~\ref{fig:compile_improvement_steane} shows the performance improvement from the efficient compilation in the Steane code quantum computing.
At the input size $N=128$, the improved decomposition lowers the quantum computing time by more than 400-fold and the number of qubits by about 30-fold.
The degree of the performance improvement depends on the input size.
As shown in the figure, at input sizes where applying the proposed decomposition lowers the required concatenation level, the performance improvement is remarkable.


\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=base_option_time_10-9.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=base_option_qubits_10-9.eps, scale=0.22}
}
\caption
{
The performance comparison between a naive compilation and the proposed compilation under Steane code based quantum computing.
(a) The algorithm execution time and
(b) The required qubits.
}
\label{fig:compile_improvement_steane}
\end{figure*}

FIG.~\ref{fig:compile_improvement_surface} shows the performance improvement from the efficient compilation in the surface code quantum computing.
Unlike the Steane code quantum computing, the improvement in the quantum computing time increases gradually as the input size increases.
This is because there is always a difference in the code distance.
With the improved decomposition, the required code distance is lowered from 25, 27 and 30 to 22, 24 and 27, respectively.


\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=base_option_surface_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=base_option_surface_qubits.eps, scale=0.22}
}
\caption
{
The performance comparison between a naive compilation and the proposed compilation under surface code based quantum computing.
(a) The algorithm execution time and (b) The required qubits.
}
\label{fig:compile_improvement_surface}
\end{figure*}


\subsection{Accurate Quantum Gates}\label{sec:accurate_gates}
So far, we have assumed that the physical error rate is $10^{-9}$ for the Steane code quantum computing and $10^{-3}$ for the surface code quantum computing.
In this section, we show what happens in a quantum computing if we have a more accurate quantum device.
For that, we show performance evaluations over the physical error rates $10^{-9}\sim 10^{-15}$ for the Steane code quantum computing and $10^{-3}\sim 10^{-9}$ for the surface code quantum computing.


FIG.~\ref{fig:robust_gate_steane} shows the Steane code quantum computing performance over the physical error rates $10^{-9}\sim 10^{-15}$.
We also compare a physical (unencoded) quantum computing to the fault-tolerant quantum computings at the physical error rate $10^{-15}$.
The performance improvement from lowering the error rate from $10^{-9}$ to $10^{-12}$ is highly nontrivial because the required concatenation level is reduced from 2 and 3 (depending on the input size) to 1.
But lowering the error rate further does not lead to better fault-tolerant quantum computing performance.
In other words, the fault-tolerant quantum computing at the physical error rate $10^{-15}$ does not show any advantage over the quantum computing at the physical error rate $10^{-12}$.
This is because, as the physical error rate is lowered, fault-tolerant quantum computings with the same concatenation level achieve very high fidelity ($>0.9$).
If both quantum computings are performed with the same concatenation level, both have the same single-round quantum computing time.
In that case, if there is no big difference between the fidelities, the average quantum computing time $T_{avg}$ will be very similar.


For the same reason, at the physical error rate $10^{-15}$, the physical quantum computing shows better performance than a fault-tolerant quantum computing because the physical quantum computing already achieves high fidelity.
Empirically, $T_{one}$ of the physical quantum computing at the error rate $10^{-15}$ is $6.89 \times 10^{8}$ with fidelity $0.6433$.
On the other hand, $T_{one}$ of the fault-tolerant quantum computing is $7.78 \times 10^{10}$ with fidelity $0.9999$.
Obviously, the physical quantum computing shows better performance in terms of $T_{avg}$, since $\frac{6.89\times 10^{8}}{0.6433} < \frac{7.78\times 10^{10}}{0.9999}$.
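The comparison is immediate once $T_{avg}$ is written as $T_{one}$ divided by the fidelity, as in the fractions above; a minimal sketch with the numbers quoted in the text (units of $T_{one}$ as given there):
\begin{verbatim}
# T_avg = T_one / fidelity, with the values quoted above for p = 1e-15.
cases = {
    "physical":       (6.89e8,  0.6433),
    "fault-tolerant": (7.78e10, 0.9999),
}
for name, (t_one, fidelity) in cases.items():
    print(f"{name}: T_avg = {t_one / fidelity:.3e}")
# physical:       T_avg ~ 1.071e+09
# fault-tolerant: T_avg ~ 7.781e+10  (physical wins at this error rate)
\end{verbatim}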
\begin{figure*}[t]
\centering
\subfigure[]{
	\epsfig{file=robust_gates_steane_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=robust_gates_steane_qubits.eps, scale=0.22}
}
\caption
{
The performance comparison over the physical error rates $10^{-9}\sim 10^{-15}$ under Steane code based quantum computing.
(a) The algorithm execution time and (b) The required qubits.
At the error rate $10^{-15}$, as shown in this figure, a fault-tolerant quantum computing is not required.
}
\label{fig:robust_gate_steane}
\end{figure*}

FIG.~\ref{fig:robust_gate_surface} shows the performance improvement in the surface code quantum computing over the physical error rates $10^{-3}\sim 10^{-9}$.
In the figure, we also compare the required code distance.
As shown in the figure, as the physical error rate is lowered, the required code distance decreases and therefore the performance improves.
But, once the code distance is already as low as 4 or 5, there is little room for further performance improvement as the gate is improved more.

\begin{figure}[t]
\centering
\subfigure[]{
	\epsfig{file=robust_gates_surface_distance.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=robust_gates_surface_time.eps, scale=0.22}
}
\subfigure[]{
	\epsfig{file=robust_gates_surface_qubits.eps, scale=0.22}
}
\caption
{
The performance comparison over the physical error rates $10^{-3}\sim 10^{-9}$ under surface code based quantum computing.
(a) The code distance, (b) The algorithm execution time and (c) The required qubits.
}
\label{fig:robust_gate_surface}
\end{figure}


\subsection{Degree of Fault Tolerance}\label{sec:degree_fault_tolerance}
The accuracy threshold theorem~\cite{Aharonov:2008jn,Knill:1996tm} says that if we have a quantum device with physical error rates below a threshold, an arbitrarily long quantum computation is possible.
By applying recursive concatenated coding~\cite{Knill:1996ty}, we can lower the effective error rate to the point where a reliable quantum computing is possible.
As we increase the concatenation level, the fidelity of a quantum computing certainly improves.
But the duration of a quantum computing is also increased by raising the concatenation level.
Therefore, a higher concatenation level does not always make a better quantum computing possible.
FIG.~\ref{fig:degree_ft_steane_code} shows that there exists a recommended concatenation level in the Steane code quantum computing, in particular for the quantum computing time.
The number of qubits unconditionally becomes larger as the concatenation level increases.


\begin{figure}[t]
\centering
\epsfig{file=degree_ft_steane_code.eps, scale=0.23}
\caption{
	The quantum computing time of Shor's algorithm with input size $N=512$.
	We have evaluated the quantum computing performance $T_{one}$ and $T_{avg}$ according to the concatenation levels $1\sim 5$.
	Beyond concatenation level 3, the fidelity of the quantum computing is almost 1, and therefore the average computing time closely approaches the single-round computing time.
	When the concatenation level is 1, the fidelity of the quantum computing is almost vanishing, and therefore the average computing time diverges.
}
\label{fig:degree_ft_steane_code}
\end{figure}
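The trade-off behind FIG.~\ref{fig:degree_ft_steane_code} can be reproduced qualitatively with a toy model. We stress that the threshold $p_{th}$, the per-level slowdown factor $K$, the gate count $G$ and the gate time below are hypothetical round numbers chosen for illustration, not the parameters of our framework; the model only assumes the textbook double-exponential error suppression of concatenated codes.
\begin{verbatim}
# Toy model of the concatenation-level trade-off: assumes the textbook
# scaling p_L = p_th * (p/p_th)**(2**L) and a constant factor-K time
# blow-up per level. All constants are hypothetical round numbers.
import math

P, P_TH, K, G = 1e-6, 1e-4, 70, 1e13  # error rate, threshold, slowdown, gates
T_GATE = 1e-6                         # physical gate time [s]

def t_avg(level):
    p_l = P_TH * (P / P_TH) ** (2 ** level)  # logical error rate per gate
    if G * p_l > 700:                        # fidelity underflows to zero
        return float("inf")
    fidelity = math.exp(-G * p_l)            # ~ (1 - p_l)**G for small p_l
    t_one = G * T_GATE * K ** level          # single-round computing time
    return t_one / fidelity

for level in range(1, 6):
    print(f"L={level}: T_avg = {t_avg(level):.3e} s")
# T_avg diverges at L=1, is minimized at L=3, then grows again as K**level.
\end{verbatim}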
In the case of a surface code quantum computing, the performance completely depends on the code distance.
The code distance is determined to satisfy the objective fidelity of the quantum computing, but in most cases the accuracy achieved with the chosen code distance exceeds the target fidelity.
In this regard, when considering the average quantum computing time $T_{avg}$, the chosen code distance may not yield the best quantum computing performance, just as in the Steane code case.


FIG.~\ref{fig:degree_ft_surface_code} shows that the surface code quantum computing has the best performance with a code distance of $31$, while the code distance determined by the equation is $30$.
Even though the code distance determined from the target fidelity 0.7 is 30, the goal of a quantum computing is to find an exact answer, not a probable answer.
By applying the code distance 31, we can reduce the quantum computing time by as much as 1400 days compared with the code distance 30, at the cost of more qubits.


\begin{figure}[t]
\centering
\epsfig{file=degree_ft_surface_code.eps, scale=0.23}
\caption{
	The quantum computing time of Shor's algorithm with input size $N=512$.
	We have evaluated the quantum computing performance $T_{one}$ and $T_{avg}$ by varying the code distance from 29 to 35.
	The calculated code distance for the objective fidelity is 30.
	As shown in the figure, the code distance 31 yields the best quantum computing performance.
	By taking the code distance 31, we can reduce the quantum computing time by as much as 1400 days compared with the distance 30, at the cost of more qubits.
}
\label{fig:degree_ft_surface_code}
\end{figure}
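This effect, that the distance minimizing $T_{avg}$ can sit one step above the distance that merely meets the target fidelity, is easy to reproduce with a toy sweep. The sketch below assumes the commonly quoted surface code heuristic $p_L(d) \approx 0.03\,(p/p_{th})^{(d+1)/2}$ for the logical error rate per gate and takes $T_{one}$ proportional to $d$; this is not necessarily the equation used in this work, and the constants are hypothetical placeholders.
\begin{verbatim}
# Toy code-distance sweep: find the d minimizing T_avg = T_one/fidelity.
# Assumes p_L(d) ~ 0.03*(p/p_th)**((d+1)/2) and T_one proportional to d.
import math

P, P_TH, G = 1e-3, 1e-2, 1e16   # physical error rate, threshold, gate count

def t_avg_rel(d):
    p_l = 0.03 * (P / P_TH) ** ((d + 1) / 2)  # logical error rate per gate
    fidelity = math.exp(-G * p_l)             # ~ (1 - p_l)**G run success
    return d / fidelity                       # T_avg up to a constant factor

best = min(range(29, 36), key=t_avg_rel)
for d in range(29, 36):
    mark = "  <- minimum T_avg" if d == best else ""
    print(f"d={d}: T_avg (rel.) = {t_avg_rel(d):.2f}{mark}")
\end{verbatim}
With these placeholder constants the minimum falls at $d=31$ even though $d=29$ or $30$ already gives a sizable fidelity, mirroring the behavior in the figure.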
FIG.~\\ref{fig:data_flow_components}).\nFor example, components to control a real quantum device have to be added in the building block layer.\nThe system layer also requires functions that execute a quantum algorithm and a quantum error correction efficiently.\nBesides, the existing components have to be improved as much as possible.\n\n\\begin{acknowledgements}\nThis work was supported by Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [18ZH1400, Research and Development of Quantum Computing Platform and its Cost-Effectiveness Improvement].\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}