diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzawxt" "b/data_all_eng_slimpj/shuffled/split2/finalzzawxt" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzawxt" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction\n\nThis paper focuses on a particular nearest-neighbor interacting particle system over $\\Zbb$ with one conserved quantity. The model consists of particles, antiparticles and holes with the following types of moves:\n\\begin{enumerate}[label={(\\arabic*)}]\n\\item \\textbf{Exclusion.} (Anti)particles execute totally asymmetric exclusions to the (negative) positive direction, respectively.\n\\item \\textbf{Annihilation.} An adjacent particle-antiparticle pair can annihilate each other producing two holes.\n\\item \\textbf{Pair creation.} An antiparticle-particle pair can be created from two adjacent holes.\n\\end{enumerate}\nThe evolution is a continuous time Markov jump process the jump rates of which are chosen such that the model is \\emp{attractive} and possesses \\emp{product-form stationary distributions}. These choices greatly facilitate the analysis of the macroscopic behavior of the system. By rescaling time and space with the same factor (Eulerian scaling), we arrive to a hydrodynamic limit described by a nonlinear partial differential equation. As usual in this area nonlinearity comes from the \\emp{hydrodynamic flux} which is roughly speaking the net flux of (anti)particles across a bond in equilibrium.\n\n\\bigskip\n\\noindent\\emp{Earlier results and result of this paper.} In many well-known examples of particle systems, such as simple exclusion or constant rate zero range processes, the hydrodynamic flux function is proved to be strictly concave, sometimes strictly convex. Entropy solutions of the hydrodynamic equation in these cases are well known: either shock wave or rarefaction fan emerges from step initial data (also called Riemann problem). The first realistic model that exhibits non-convex nor-concave hydrodynamic flux appeared in the seminal paper by Katz, Lebowitz and Spohn \\cite{kls} which in its one-dimensional form can be considered as a generalization of simple exclusion. The KLS model, though it is non-attractive, has become popular and it has been under extensive studies during the past decades, see \\cite{popkov,hager,chowd,popkovetal,junction}, showing spectacular properties in, e.g.,\\ its phase diagram.\n\nWe highlight another generalization of simple exclusion, \\emp{PushASEP} \\cite{pushasep}, which produces non-convex nor-concave hydrodynamic flux. The nature of this model is slightly different from ours in that it requires non-nearest neighbor interactions and jump rates that depend on the configuration over the whole interval of jumps.\n\nTo the best of our knowledge the hydrodynamic behavior beyond the shock is not rigorously established for either of the known examples with non-convex nor-concave flux. In contrary, the model we introduce here is fully covered under the hydrodynamic results of the literature. It is an \\emp{attractive} hyperbolic nearest neighbor particle system of just three possible states on a site which belongs to a more general family of misanthrope processes \\cite{coco}. One of the main conclusions is that, while hydrodynamics beyond the shock is rigorously established for this very simple attractive model, it also exhibits a non-trivial flux of both concave and convex pieces in some parameter range. We have combined recent results of PDE solution theory (see \\cite[Ch. 
2]{holdenrisebro} and \\cite{hayes,fossati}) with that of hydrodynamic limits established for general interacting particle systems \\cite{bagurasa,bagurasastrong} to obtain rigorous results for the phase diagram of our model. We characterize all the possible cases of the Riemann problem, which can consist of coexisting shock wave and rarefaction fan regions in various orders depending on the densities we start from. We also show how the whole phase diagram evolves as one varies the underlying parameter values of the system. For some cumbersome calculations we relied on computer assistance. Finally, we mention that most of the results of the present paper originate from \\cite{riemann}. As a nice add-on, we have developed some stochastic simulation programs (see \\cite{sim}) for the evolution of our particle system.\n\n\\bigskip\n\\noindent\\emp{Organization of the paper.} We define our microscopic model in Section \\ref{sec:micromodel}. We calculate some of its key properties in Section \\ref{sec:props}. The detailed investigation of the Riemann problem is contained in Section \\ref{sec:entropysols}.\n\n\\section{The microscopic model}\\label{sec:micromodel}\n\nThe Markov process we consider consists of particles, antiparticles and holes interacting with each other on the one-dimensional integer lattice $\\Zbb$. At lattice point $i\\in\\Zbb$, $\\om_i$ will denote the presence of a particle ($\\om_i=1$), that of an antiparticle ($\\om_i=-1$) or the absence of both ($\\om_i=0$). Thus our configuration space is $\\Om:=\\{-1,0,+1\\}^{\\Zbb}$, an element of which is denoted by $\\omb=(\\om_i)_{i\\in \\Zbb}$. The continuous time Markovian jump dynamics we attach on top of $\\Om$ is then made up of the moves\n\\[\n\\omb\\;\\longrightarrow\\;\\omb-\\delta_j+\\delta_{j+1}\\in\\Om,\n\\]\neach of which happens at rate $p(\\om_j,\\om_{j+1})$ for every $j\\in\\Zbb$, where $\\delta$ denotes the Kronecker symbol, i.e.\\ $\\delta_j(i)$ is $1$ if $i=j$ and zero otherwise. These changes take place independently conditioned on a given configuration $\\omb\\in\\Om$. This process can easily be constructed in the usual way (see \\cite{liggettbible}) and has infinitesimal generator $\\Gc$ acting on a cylinder function (i.e., one that only depends on a finite number of coordinates) $\\vphi:\\Om\\to\\Rbb$ as\n\\begin{equation}\\label{eq:infgen}\n\\big(\\Gc\\,\\vphi\\big)(\\omb)=\n\\sum_{j\\in\\Zbb}\\,p(\\om_j,\\om_{j+1})\\cdot\\big(\\vphi(\\omb-\\delta_j+\\delta_{j+1})-\\vphi(\\omb)\\big)\\qquad (\\omb\\in\\Om).\n\\end{equation}\nWith a slight abuse of notation the state of the process at time $t\\geq 0$ is also denoted by $\\omb(t)=\\big(\\om_i(t)\\big)_{i\\in\\Zbb}$. We now specify $p$ to be of the following special form. For $x,y\\in\\{-1,0,+1\\}$ let\n\\begin{equation}\\label{eq:modelrates}\np(x,y)=\\left\\{\n\\begin{array}{ll}\nc, & \\quad\\hbox{$x=0$, $y=0$;} \\\\\na, & \\quad\\hbox{$x=+1$, $y=-1$;} \\\\\na\\cdot \\frac{1 - d}{2}, & \\quad\\hbox{$x=0$, $y=-1$;} \\\\\na\\cdot \\frac{1 + d}{2}, & \\quad\\hbox{$x=+1$, $y=0$;} \\\\\n0, & \\quad\\hbox{otherwise,}\n\\end{array}\n\\right.\n\\end{equation}\nwhere $a,c>0$ are (distinct) model parameters denoting the rates of \\emp{annihilation} and \\emp{pair creation}, respectively. With parameter $d\\in(-1,1)$ we can adjust the asymmetry of the (anti)particles' jump rates. 
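To make the rate table \\eqref{eq:modelrates} concrete, the following Python snippet encodes it together with one naive sweep of a discrete-time approximation of the dynamics on a finite ring. It is an illustrative sketch on our part (with an arbitrarily chosen value of $c$), not the actual code of the simulation programs \\cite{sim}, which resolve the continuous time dynamics properly.\n\\begin{verbatim}\nimport random\n\ndef rate(x, y, a=1.0, c=0.25, d=0.0):\n    # jump rate p(x, y) of the move (x, y) -> (x - 1, y + 1)\n    if (x, y) == (0, 0):    # pair creation: (0, 0) -> (-1, +1)\n        return c\n    if (x, y) == (1, -1):   # annihilation: (+1, -1) -> (0, 0)\n        return a\n    if (x, y) == (0, -1):   # antiparticle hops left: (0, -1) -> (-1, 0)\n        return a * (1 - d) \/ 2\n    if (x, y) == (1, 0):    # particle hops right: (+1, 0) -> (0, +1)\n        return a * (1 + d) \/ 2\n    return 0.0              # all other moves are suppressed\n\ndef sweep(omega, dt=0.01):\n    # one crude Euler sweep over the bonds of a ring of length len(omega)\n    L = len(omega)\n    for j in range(L):\n        x, y = omega[j], omega[(j + 1) % L]\n        if random.random() < rate(x, y) * dt:\n            omega[j], omega[(j + 1) % L] = x - 1, y + 1\n\\end{verbatim}\nNote that the choice $c=0.25$ with $d=0$ lies in the attractive regime $c\\leq\\frac{1-|d|}{2}$ discussed below.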
In plain words, the above dynamics boils down to the following simple rules: two adjacent holes can produce an antiparticle-particle pair (in that order) with rate $c$, antiparticles and particles can only hop in the negative and positive directions with rates $a\\cdot \\frac{1-d}{2}$ and $a\\cdot\\frac{1+d}{2}$, respectively, and when a particle meets an antiparticle they annihilate each other with rate $a$. All other moves are suppressed.\n\nBy rescaling time by $a$, without loss of generality, we can assume that the annihilation rate is set to be $1$. Next, we invoke the definition of \\emp{attractivity} from \\cite[pp.\\ 71--72]{liggettbible}. Formally speaking, we require the relation $\\Exp f(\\omb(t))\\leq \\Exp f(\\etab(t))$ to hold for all monotone functions $f:\\Om\\to\\Rbb$ and for all $t\\geq0$ whenever the initial configuration $\\etab(0)$ of $\\etab$ dominates that of $\\omb$ in the coordinate-wise order. Along the lines of \\cite[III., Thm.\\ 2.2]{liggettbible}, attractivity is equivalent to $p$ being monotone non-decreasing (non-increasing) in its first (second) variable, respectively. This means that our dynamics is attractive if and only if $c\\leq \\frac{1-|d|}{2}$, which we assume throughout the paper. (We do not expect different behavior in the non-attractive regime either, but hydrodynamics beyond the shock is not rigorously established there, and the interesting phenomena of this paper occur in the attractive parameter domain anyway.)\n\nThe model outlined above is the simplest member, after simple exclusion, of the family of \\emp{misanthrope processes} \\cite{coco}. In the above particular form, it first appeared in \\cite[pp.\\ 179--180]{tothvalkoperturb}.\n\n\\section{Properties of the model}\\label{sec:props}\n\nFirst, we introduce the one-parameter family of product-form extremal stationary distributions of the above model. Then in Subsections \\ref{sec:hydro} and \\ref{sec:flux} we establish some hydrodynamic properties of the model. These are then used in Section \\ref{sec:entropysols} to determine the entropy solution of the Riemann problem.\n\nDue to attractivity, a very convenient feature of our model with rates \\eqref{eq:modelrates} is that it possesses \\emp{product-form stationary distributions}. We define\n\\begin{equation}\\label{eq:paramb}\nb :\\,= \\frac{1}{2 + \\frac{1}{\\sqrt{c}}},\n\\end{equation}\nwhich is a bijection between the parameters $c\\in(0,\\frac{1}{2}]$ and $b\\in(0,\\frac{1}{2+\\sqrt{2}}]$. Now, let $\\theta\\in\\Rbb$ be a generic real that is also referred to as the \\emp{chemical potential}. Then define the one-site marginal measure by\n\\begin{align}\\label{eq:statmarginal}\n\\Gamma^{\\theta} :\\,= \\big(\\Gamma^{\\theta}(-1),\\, \\Gamma^{\\theta}(0),\\, \\Gamma^{\\theta}(1)\\big) = \\bigg( \\frac{b\\, \\exp(-\\theta)}{Z(\\theta)},\\,\\frac{1 - 2b}{Z(\\theta)},\\,\\frac{b\\, \\exp(\\theta)}{Z(\\theta)} \\bigg),\n\\end{align}\nwith \\emp{partition sum} $Z(\\theta) :\\,= 1 + 2b\\cdot(\\cosh(\\theta) - 1)$. The product measure $\\Gmb^{\\theta}$ on $\\Om$ is built from these marginals: $\\Gmb^{\\theta} :\\,= \\bigotimes_{j\\in \\Zbb}\\Gamma^{\\theta}$. We denote the expectation with respect to this measure by $\\Exp^{\\theta}$.\nThe proof of stationarity of the above measure was carried out in \\cite{coco}. For the ergodicity part we refer to \\cite[pp. 1350--1352, Sec. 
3 and further references therein]{bagurasa}.\n\n\\subsection{Hydrodynamics}\\label{sec:hydro}\n\nEulerian hydrodynamic limits are well established for a large class of \\emp{attractive} asymmetric particle systems \\cite{rezakhanlou,landim,kipnislandim} or, more generally, for the class considered in \\cite{bagurasa, bagurasastrong}, which also includes our model. Below we recall some elements of this theory following the latter articles. Let $N$ be the rescaling factor and define\n\\[\n\\alpha^N(\\mathrm{d}x, t) :\\,= \\frac{1}{N}\\sum_{j\\in\\Zbb}\\om_{j}(t\\cdot N)\\cdot\\delta_{j\/N}(\\mathrm{d}x)\n\\]\nas the \\emp{rescaled empirical measure} of $\\omb(t)$ at macroscopic space and time $x\\in\\Rbb$ and $t\\geq0$, respectively.\n\nNow, the sequence of initial probability distributions $(\\pib_N)_{N\\in\\Zbb^+}$ can be chosen arbitrarily such that the empirical measure $\\alpha^N$ of $\\omb^N(0)$, distributed as $\\pib_N$, converges in probability to $u_0(\\,\\cdot\\,)\\,\\mathrm{d}x$, where $u_0$ is some deterministic bounded measurable profile on $\\Rbb$ (see \\cite[pp. 1346--1348, Sec. 2.3]{bagurasa}). Then for each $t>0$, the $\\alpha^N$ of $\\omb^N(t)$ converges to $u(\\,\\cdot\\,,t)\\,\\mathrm{d}x$ in probability, where the density profile $u(\\,\\cdot\\,,t):\\Rbb\\to[-1,1]$ is one of the weak solutions of the \\emp{conservation law}\n\\begin{equation}\\label{eq:hydrogeneral}\n\\begin{aligned}\n\\partial_t u + \\partial_x G(u) &= 0;\\\\\nu(\\,\\cdot\\,,0) &= u_0(\\cdot).\n\\end{aligned}\n\\end{equation}\nIn this PDE, the \\emp{hydrodynamic flux} $G:[-1,1]\\to\\Rbb_0^+$ is\n\\[\nG(\\varrho) = \\Exp_{\\nu_{\\varrho}} p(\\om_0,\\om_1)\\qquad (\\varrho\\in[-1,1]),\n\\]\nwhere the measure $\\nu_{\\varrho}$ is a \\emp{translation-invariant extremal stationary distribution} of the process $(\\omb(t))_{t\\geq0}$ corresponding to density $\\varrho$. In our case, as we discussed in the beginning of Section \\ref{sec:props}, these measures can be expressed in product form. In plain words, $G$ describes the net flux across a bond in equilibrium (recall the dynamics of $\\omb$ being totally asymmetric) and it will be determined in Section \\ref{sec:flux}.\n\nSubsequently, we will only consider the \\emp{Riemann problem} (step initial datum) for the initial value problem \\eqref{eq:hydrogeneral}, that is\n\\begin{equation}\\label{eq:stepinitcond}\nu_0(x) = \\left\\{\n\\begin{array}{ll}\n\\ul \\quad &\\mbox{if} \\quad x \\leq 0; \\\\\n\\ur \\quad &\\mbox{if} \\quad x > 0, \\\\\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\ul\\neq\\ur\\in[-1,1]$ are the (initial) densities on the left and the right-hand side of the origin. In the case of the microscopic process, for the sake of simplicity, we set the density values ($\\ul,\\ur$) by picking the appropriate site-dependent parameters $\\theta_j$ of the product measure $\\Gmb^{\\theta}$:\n\\begin{equation*}\n\\Exp^{\\theta_j}(\\om_j) =\n\\left\\{\n\\begin{array}{ll}\n\\ul \\quad &\\mbox{if} \\quad j \\leq 0; \\\\\n\\ur \\quad &\\mbox{if} \\quad j > 0. \\\\\n\\end{array}\n\\right.\n\\end{equation*}\nAs we discussed above, the initial measure can be chosen from a more general set of measures. Indeed, this choice of initial measures will not play any role in the present article.\n\nIt is more convenient to reparametrize the marginal $\\Gamma^{\\theta}$ by the (\\emp{signed}) \\emp{density} of particles instead of the chemical potential $\\theta$. 
Let $v(\\theta)$ be the expected (signed) number of particles occupying an arbitrary lattice point under the measure $\\Gmb^{\\theta}$:\n\\[\nv(\\theta) :\\,= \\Exp^{\\theta}\\om_0 = \\frac{2b\\,\\sinh(\\theta)}{1 + 2b\\cdot(\\cosh(\\theta) - 1)} \\qquad (\\theta\\in\\Rbb).\n\\]\nIt is not hard to see that the assignment $\\theta\\mapsto v(\\theta)\\in[-1,1]$ is strictly monotone, hence bijective. Its inverse $v\\mapsto\\theta(v)$ can explicitly be calculated:\n\\begin{equation}\\label{eq:densityparam}\n\\theta(v)=\n\\log\\bigg(\\frac{(1 - 2b)\\cdot v + \\sqrt{4b^2 + (1-4b)\\cdot v^2}}{2b\\cdot(1-v)}\\bigg) \\qquad (v\\in[-1,1]).\n\\end{equation}\nWith a slight abuse of notation we will freely switch between these two parametrizations of the stationary measure.\n\nTo close this section, we mention a well-known strategy for solving \\eqref{eq:hydrogeneral}. When $G$ has a continuous derivative and $u$ is smooth, one can rewrite \\eqref{eq:hydrogeneral} as $\\partial_t u + G'(u)\\cdot \\partial_x u = 0$. The \\emp{characteristic curves} are then defined as lines of the $x$-$t$ space along which the solution is constant. To find these curves one solves $\\frac{\\mathrm{d}}{\\mathrm{d}t}u(x(t),t)=0$, which translates to $\\frac{\\mathrm{d}x(t)}{\\mathrm{d}t}=G'(u)$ after comparison with the original equation. The curve can then be traced back to the initial condition and takes the form $x = x_0 + t\\cdot G'(u_0(x_0))$. The quantity $G'(u_0(x_0))$ is known as the \\emp{characteristic velocity}. Finally, to look up the value of the solution $u$ at an arbitrary point $(x,t)\\in\\Rbb\\times\\Rbb^+_0$ one just follows the characteristic curve back to the initial time and reads the value of $u_0$ there.\n\nThe previous program relies on $u$ being smooth, which sometimes fails to happen. That is, there can exist points of $\\Rbb\\times\\Rbb^+$ that are hit by more than one characteristic line carrying different initial values. In these cases classical (differentiable) solutions of \\eqref{eq:hydrogeneral} cease to exist and more general, so-called \\emp{weak solutions}, need to be defined. By a careful selection of the weak solutions one can uniquely identify the physically relevant one, which is also referred to as the \\emp{entropy solution}. For more details and properties of the entropy solutions we refer to the monographs \\cite{smoller,holdenrisebro}.\n\nIn our case the initial condition $u_0$ is the step function \\eqref{eq:stepinitcond}, hence the characteristic velocities can only attain two values: $G'(\\ul)$ and $G'(\\ur)$. Depending on the relation between the densities $\\ul,\\ur$ and on the monotonicity of $G'$ in $[\\min(\\ul,\\ur),\\max(\\ul,\\ur)]$ several possibilities can emerge. In the simplest case when $G$ is strictly concave one can essentially distinguish two cases:\n(1) if $G'(\\ul) > G'(\\ur)$ then the characteristic lines meet in some finite time, hence a \\emp{shock wave}, that is, a moving discontinuity, appears in the entropy solution;\n(2) if $G'(\\ul) < G'(\\ur)$ then the characteristic lines move away from each other, giving rise to a \\emp{rarefaction fan} in which the initial sharp discontinuity smears out in time.\n\nIn the general case, based on \\cite[pp. 30--34 of Sec. 
2.2]{holdenrisebro}, the unique entropy solution of \\eqref{eq:hydrogeneral} with initial condition \\eqref{eq:stepinitcond} can explicitly be given as\n\\begin{equation}\\label{eq:generalentropysol}\nu(x,t) =\n\\left\\{\n\\begin{aligned}\n&\\ul &&\n\\text{if $x\\leq (G_{\\frown})'(\\ul)\\cdot t$};\\\\\n&\\big[(G_{\\frown})'\\big]^{-1}(x\/t) &&\n\\text{if $(G_{\\frown})'(\\ul)\\cdot t < x\\leq (G_{\\frown})'(\\ur)\\cdot t$};\\\\[0.3em]\n&\\ur &&\n\\text{if $x > (G_{\\frown})'(\\ur)\\cdot t$},\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $\\ul>\\ur$ and $G_{\\frown}$ is the \\emp{upper concave envelope} of $G$ defined to be the smallest concave function which is greater than or equal to $G$ above the interval $[\\ur,\\ul]$. The derivative of $G_{\\frown}$ is denoted by $(G_{\\frown})'$, while its inverse by $[(G_{\\frown})']^{-1}$. Analogously, in the opposite case when $\\ur>\\ul$ the solution can be obtained by the help of the \\emp{lower convex envelope} of $G$.\n\n\\subsection{Flux}\\label{sec:flux}\n\nWe explicitly calculate the \\emp{hydrodynamic flux} $G$ of \\eqref{eq:hydrogeneral} corresponding to our microscopic model. As we noted in the previous Subsection \\ref{sec:hydro}, the hydrodynamic flux is the expected current across a bond under the translation-invariant extremal stationary distribution $\\Gmb^{\\theta}$ (see the beginning of Section \\ref{sec:props}). That is\n\\begin{align*}\nG(v) = \\Exp^{\\theta(v)} p(\\om_0,\\om_1)\n& = \\frac{1 - d}{2}\\cdot \\Gamma^{\\theta(v)}(0)\\cdot\\Gamma^{\\theta(v)}(-1)\n+ \\frac{1 + d}{2}\\cdot \\Gamma^{\\theta(v)}(1)\\cdot\\Gamma^{\\theta(v)}(0)\\\\\n& + c \\cdot \\Gamma^{\\theta(v)}(0)\\cdot\\Gamma^{\\theta(v)}(0)\n+ 1 \\cdot \\Gamma^{\\theta(v)}(1)\\cdot\\Gamma^{\\theta(v)}(-1),\n\\end{align*}\nwhere $v\\in[-1,1]$. We remark that the last term, corresponding to the annihilation step, also counts with weight one towards the flux, since only a unit of signed charge is transferred over the bond in question.\n\nNow, substituting the measure $\\Gamma^{\\theta(v)}$ of \\eqref{eq:statmarginal} into the previous display we get\n\\begin{equation*}\nG(v) = \\frac{b\\cdot(1 - 2b)\\cdot(\\cosh(\\theta(v)) + d \\cdot \\sinh(\\theta(v))) + 2 b^2}{(1 + 2b\\cdot(\\cosh(\\theta(v)) - 1))^2}.\n\\end{equation*}\nNote that $G$ is \\emp{even} (i.e.\\ symmetric with respect to the vertical axis) if and only if $d=0$. This case is shown in Figure \\ref{fig:flux}. In the upcoming sections we will only investigate the case when $d=0$.\n\nNow, taking \\eqref{eq:densityparam} into consideration, we obtain the density parametrization of the flux. Elementary calculus in combination with the hydrodynamic results described in Section \\ref{sec:hydro} then yields\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{flux}\n\\caption[Hydrodynamic flux]{The hydrodynamic flux $G$ in the case $d=0$. 
A vertical slice of the surface gives a particular density-flux assignment for a given $b$ value.}\\label{fig:flux}\n\\end{figure}\n\\begin{theorem}\\label{thm:main}\n The hydrodynamic limit as described in Section \\ref{sec:hydro} applies to our model \\eqref{eq:infgen} with rates \\eqref{eq:modelrates} in the attractive range $c\\leq\\frac{1-|d|}2$ with hydrodynamic flux $H:[-1, 1]\\to[0,+\\infty)$ given by\n\\begin{equation*}\nH(v) = \\frac{v\\cdot(d - v)}{2} +\n\\frac{(1 - d\\cdot v)\\cdot(4 b^2 + (1-2b)^2\\cdot v^2)}{2 \\,\\big(4 b^2 + (1 - 2 b) \\cdot\\sqrt{4 b^2 + (1 - 4 b)\\cdot v^2}\\big)}\n\\qquad (v\\in[-1,1]),\n\\end{equation*}\nwhere $b = \\frac{1}{2 + c^{-1\/2}}$, $c > 0$ and $d\\in(-1,1)$ (see \\eqref{eq:paramb}). In the case $d=0$, we simply denote the flux by $G$:\n\\begin{equation}\\label{eq:fluxsym}\nG(v) = -\\frac{v^2}{2} +\n\\frac{4 b^2 + (1-2b)^2\\cdot v^2}{2\\,\\big(4 b^2 + (1 - 2 b)\\cdot\\sqrt{4 b^2 + (1 - 4 b)\\cdot v^2}\\big)} \\qquad (v\\in[-1,1]).\n\\end{equation}\nFor $G$ the following hold: if\n$b \\in \\big(\\frac{1}{6},\\,\\frac{1}{2+\\sqrt{2}}\\big]\\Longleftrightarrow c \\in \\big(\\frac{1}{16},\\,\\frac{1}{2}\\big]$\n($b = \\frac{1}{6}\\;\\Longleftrightarrow\\; c=\\frac{1}{16}$), then it is strictly (non-strictly) concave in $[-1,1]$. Otherwise, when\n$b \\in \\big(0,\\, \\frac{1}{6}\\big)\\Longleftrightarrow c \\in \\big(0,\\, \\frac{1}{16}\\big)$,\nit is neither concave nor convex, having two inflection points\n\\begin{equation}\\label{eq:inflpoints}\n\\pm v_{\\infl} = \\pm\\sqrt{\\frac{(2 b^2 (1-2b))^{2\/3} - 4 b^2}{1-4b}},\n\\end{equation}\nwhich separate a concave-convex ($-v_{\\infl}$) and a convex-concave ($v_{\\infl}$) region, respectively.\n\\end{theorem}\n\n\\section{Entropy solutions}\\label{sec:entropysols}\n\nWe find the entropy solutions of the Riemann problem \\eqref{eq:hydrogeneral} with step initial datum \\eqref{eq:stepinitcond} for all possible pairs of initial densities $\\ul,\\ur\\in[-1,1]$. Recalling Section \\ref{sec:hydro}, for a purely concave flux only two scenarios can occur: the discontinuity at zero is preserved over time (\\emp{shock wave}) or it immediately smears out (\\emp{rarefaction fan}). As we have seen, the flux $G$ can be non-concave in some parameter region, and this results in plenty of qualitatively different solutions of \\eqref{eq:hydrogeneral}, to be discussed in more detail below.\n\n\\paragraph{Auxiliary calculations.} Based on Theorem \\ref{thm:main}, we introduce some further notation, valid \\emp{only} in the range $b\\in(0,\\frac{1}{6})$. The positive global maximum point of \\eqref{eq:fluxsym} is given by\n\\[\nv_{\\max} = \\frac{1}{2}\\sqrt{\\frac{(1 + 2b)(1 - 6b)}{1 - 4b}},\n\\]\nwhile the negative one is at $-v_{\\max}$. The maximum value is then $G(v_{\\max}) = G(-v_{\\max}) = (1 - 2b)^2 \/ (8 - 32b)$.\nAt $0$ the tangent line is horizontal and $G$ has a local minimum with $G(0) = b$. This tangent intersects the curve at two points located symmetrically with respect to the vertical axis. The positive one is at\n\\begin{equation*}\nv_0 = \\sqrt{\\frac{(1 - 2b)(1 - 6b)}{1 - 4b}},\n\\end{equation*}\nwhile the negative one is at $-v_0$. See Figure \\ref{fig:tangent}.\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.65\\paperwidth]{tangent}\n\\caption[Flux (non-concave case)]{Hydrodynamic flux $G$ in the non-concave case and its tangents, where $b=0.08$ and $d=0$. From a general point $v\\in[-1,1]$ one can possibly draw at most two lines that are tangential to $G$ (see $v_e^{-}$ and $v_e^{+}$). 
On the other hand, a tangent line at $v_e$ can have two intersections with the graph of $G$ (see $v_m^{-}$ and $v_m^{+}$). $\\pm v_{\\max}$ are the maximum points while $\\pm v_{0}$ are the intersections of $G$ with the constant (dashed) line at $b$.}\\label{fig:tangent}\n\\end{figure}\n\nTo determine the lower convex and upper concave envelopes of $G$ for different densities (recalling \\eqref{eq:generalentropysol}) and to finally obtain the phase diagram, we need to address some further special points of the flux $G$. Below we essentially define the tangential points, which will be denoted by $v_e$'s, whereas $v_m$'s will indicate the intersection points of $G$ with a tangent line. In more detail, the tangent line drawn at the point $(v_e, G(v_e))$ intersects the graph of $G$ at (another) point $(v,G(v))$ if and only if it satisfies\n\\begin{equation}\\label{eq:tangentpoint}\nG(v) = G(v_{e}) + G'(v_{e})(v - v_{e}).\n\\end{equation}\nWe can look at \\eqref{eq:tangentpoint} in two different ways.\n\\begin{itemize}[leftmargin=*]\n\\item One can look for an intersection $(v_m,G(v_m))$ of the graph and the tangent line touching at a \\emp{given} $(v_{e}, G(v_e))$. The intersection that lies on the left (right) of $v_{e}$ is denoted by $v_m^{-}(v_{e})$ [$v_m^{+}(v_{e})$]. If $v_m^{-}(v_{e})$ [$v_m^{+}(v_{e})$] does not exist then $v_m^{-}(v_{e}):\\,=-\\infty$ [$v_m^{+}(v_{e}):\\,=+\\infty$]. See Figure \\ref{fig:tangent}.\n\\item From a \\emp{fixed} $(v,G(v))$ one can solve \\eqref{eq:tangentpoint} for a tangent point $(v_{e},G(v_e))$ of $G$. If more than one solution exists, then let $v_{e}^{-}(v)$ be the closer one and $v_{e}^{+}(v)$ the farther one from $v$. By default $v_{e}(v):\\,=v_{e}^{-}(v)$. If only one solution exists then $v_{e}(v)=v_{e}^{-}(v)=v_{e}^{+}(v)$. See Figure \\ref{fig:tangent}.\n\\end{itemize}\nIn the following, the previously defined points, illustrated by Figure \\ref{fig:tangent}, are heavily used to determine those regions where the dynamics follows a shock wave or a rarefaction fan.\n\\paragraph{Phase diagram.} Notice that if $G''$ does not change sign in $[\\min(\\ul,\\ur),\\max(\\ul,\\ur)]$ and, furthermore, the secant of $G$ connecting $(\\ul,G(\\ul))$ with $(\\ur,G(\\ur))$ is located \\emp{below} (\\emp{above}) its graph, then:\n\\begin{itemize}[leftmargin=*]\n\\item if $\\ul < \\ur$ ($\\ul > \\ur$), then the characteristic lines `converge' and collide, resulting in the formation of a \\emp{shock wave};\n\\item if $\\ul > \\ur$ ($\\ul < \\ur$), then the characteristic lines `diverge' and the initial discontinuity smooths out according to the physically relevant entropy solution, i.e.\\ a \\emp{rarefaction fan} forms.\n\\end{itemize}\nOur first task is to explore these simple regions. Then in the general case we find the lower convex as well as upper concave envelopes of $G$, which will give the shock and rarefaction parts of the solution via formula \\eqref{eq:generalentropysol}. These functions can be expressed in a rather simple way by using the previously defined $v_e$'s and $v_m$'s.\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.7\\paperwidth]{rsr-srs}\n\\caption[RSR and SRS]{Rarefaction fan -- Shock wave -- Rarefaction fan (left) and Shock wave -- Rarefaction fan -- Shock wave (right) profiles evolving in space-time when $\\ul=1=-\\ur$ and $\\ul=-v_{\\max}=-\\ur$, respectively, where $b=0.08$ and $d=0$.}\n\\label{fig:rsr-srs}\n\\end{figure}\n\n
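Since the tangency condition \\eqref{eq:tangentpoint} has no closed-form solution in general, the points $v_e$ and $v_m$ were located numerically (this is part of the computer assistance mentioned in the introduction). The following Python snippet is an illustrative sketch on our part, not the actual code behind the figures: it scans a grid over $[-1,1]$ and bisects each sign change of the defect $G(v) - G(w) - G'(w)(v - w)$ in the unknown $w$.\n\\begin{verbatim}\n# locate the tangent points v_e(v) of a flux G, seen from a fixed point v,\n# i.e. the roots w (other than the trivial double root w = v) of\n#     f(w) = G(v) - G(w) - dG(w) * (v - w),\n# where dG is the derivative of G\ndef tangent_points(G, dG, v, n=20000):\n    f = lambda w: G(v) - G(w) - dG(w) * (v - w)\n    roots, h = [], 2.0 \/ n\n    for i in range(n):\n        a, b = -1.0 + i * h, -1.0 + (i + 1) * h\n        if f(a) * f(b) < 0.0:           # sign change: refine by bisection\n            for _ in range(60):\n                m = 0.5 * (a + b)\n                if f(a) * f(m) <= 0.0:\n                    b = m\n                else:\n                    a = m\n            w = 0.5 * (a + b)\n            if abs(w - v) > 1e-6:       # safety guard against w = v\n                roots.append(w)\n    return sorted(roots, key=lambda w: abs(w - v))  # v_e^-(v) first\n\\end{verbatim}\nThe intersections $v_m^{\\pm}(v_e)$ can be found by the same bisection applied to \\eqref{eq:tangentpoint} with $v_e$ fixed and $v$ as the unknown.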
The general case is as follows. Observe that if $b\\geq\\frac{1}{6}$ then a shock wave (rarefaction fan) forms when $\\ul < \\ur$ ($\\ul > \\ur$). Henceforth we can assume that $b$ belongs to $(0,\\frac{1}{6})$. For the simpler parts of the phase diagram we can apply the above rule. Recall the inflection points of \\eqref{eq:inflpoints}.\n\\begin{itemize}[leftmargin=*]\n\\item[] \\textbf{Shock wave (\\textrm{S}) regions}. If $\\ul < \\ur$, then $-1\\leq \\ul \\leq v_{m}^{-}(v_{e}(1))$ and $\\ur \\geq v_{m}^{+}(v_{e}(\\ul))$; $v_{m}^{+}(v_{e}(-1)) \\leq \\ur \\leq 1$ and $\\ul \\leq v_{m}^{-}(v_{e}(\\ur))$; $-1\\leq \\ul\\leq -v_{\\infl}$ and $\\ur\\leq v_e(\\ul)$; $v_{\\infl}\\leq \\ur \\leq 1$ and $\\ul \\geq v_e(\\ur)$. If $\\ul > \\ur$, then $\\ul,\\ur\\in[-v_{\\infl},v_{\\infl}]$; furthermore $v_{\\infl} \\leq \\ul \\leq v_{\\max}$ ($-v_{\\max}\\leq \\ur\\leq -v_{\\infl}$) and $v^{+}_e (\\ul) \\leq \\ur \\leq v^{-}_m(\\ul)$ ($v^{+}_m(\\ur)\\leq \\ul \\leq v^{+}_e(\\ur)$).\n\\item[] \\textbf{Rarefaction fan (\\textrm{R}) regions}. If $\\ul < \\ur$, then $\\ul,\\ur\\in[-v_{\\infl},v_{\\infl}]$. On the other hand, if $\\ul > \\ur$, then $\\ul,\\ur\\in[-1,-v_{\\infl}]$ or $\\ul,\\ur\\in[v_{\\infl},1]$.\n\\end{itemize}\n\nThe situation becomes much more complex if the secant crosses the graph of $G$ above the interval $(\\min(\\ul,\\ur),\\max(\\ul,\\ur))$. We discuss these cases in the following.\n\\begin{itemize}[leftmargin=*]\n\\item[] \\textbf{Rarefaction fan -- Shock wave (\\textrm{RS}) regions}. If $\\ul < \\ur$, then $v_{\\infl} < \\ur \\leq 1$ and $-v_{\\infl} \\leq \\ul < v_{e}(\\ur)$. On the other hand if $\\ul > \\ur$, then $-v_{\\max} \\leq \\ur < v_{\\infl}$ and $v_{e}^{+}(\\ur) < \\ul \\leq 1$.\n\\item[] \\textbf{Shock wave -- Rarefaction fan (\\textrm{SR}) regions}. If $\\ul < \\ur$, then $-1 \\leq \\ul < -v_{\\infl}$ and $v_{e}(\\ul) < \\ur \\leq v_{\\infl}$. On the other hand if $\\ul > \\ur$, then $-v_{\\infl} < \\ul \\leq v_{\\max}$ and $-1 \\leq \\ur < v_{e}^{+}(\\ul)$.\n\\item[] \\textbf{Rarefaction fan -- Shock wave -- Rarefaction fan (\\textrm{RSR}) region}: $\\ul > v_{\\max}$ and $\\ur < -v_{\\max}$. The shock wave in the middle has zero speed and $\\lim_{x\\to0^+}u(x,t)-\\lim_{x\\to0^-}u(x,t) = -2v_{\\max}$ for all $t>0$. See Figure \\ref{fig:rsr-srs}.\n\\item[] \\textbf{Shock wave -- Rarefaction fan -- Shock wave (\\textrm{SRS}) region}: $-1 \\leq \\ul < -v_{\\infl}$ and $v_{\\infl} < \\ur < v_{m}^{+}(v_{e}(\\ul))$.\n The shocks are connected by a rarefaction fan and they are moving away from each other. See Figure \\ref{fig:rsr-srs}.\n\\end{itemize}\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.6\\paperwidth]{phasediagram_4}\n\\caption[Phase diagram]{Phase diagram of the model with varying values of $b$, where $d=0$. The $x$ and $y$ axes represent the initial density on the left ($\\ul$) and the right ($\\ur$), respectively. $\\mathbf{S}$ ($\\mathbf{R}$) denotes a shock wave (rarefaction fan).}\\label{fig:phasediagram}\n\\end{figure}\n\nThe whole phase diagram can be seen in Figure \\ref{fig:phasediagram} for four different $b$ values. The horizontal and vertical axes correspond to the initial density on the left ($\\ul$) and the right ($\\ur$), respectively. $\\mathbf{S}$ denotes the formation of a shock wave, while $\\mathbf{R}$ corresponds to a rarefaction fan. 
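All the regions above can be recovered directly from the envelope construction \\eqref{eq:generalentropysol}: densities at which the upper concave (lower convex) envelope lies strictly above (below) the flux belong to shock parts, while densities at which the two coincide belong to rarefaction parts. As an illustrative sketch (again, not the code used for the actual figures), the upper concave envelope of a sampled flux can be computed with a standard monotone-chain scan:\n\\begin{verbatim}\n# upper concave envelope of the samples (vs[i], Gs[i]) over [u_r, u_l];\n# vs must be strictly increasing\ndef upper_concave_envelope(vs, Gs):\n    hull = []                 # indices of the envelope vertices\n    for i in range(len(vs)):\n        while len(hull) >= 2:\n            i0, i1 = hull[-2], hull[-1]\n            s1 = (Gs[i1] - Gs[i0]) \/ (vs[i1] - vs[i0])\n            s2 = (Gs[i] - Gs[i1]) \/ (vs[i] - vs[i1])\n            if s2 >= s1:      # middle point lies below the chord: drop it\n                hull.pop()\n            else:\n                break\n        hull.append(i)\n    env = list(Gs)            # interpolate the envelope back onto the grid\n    for k in range(len(hull) - 1):\n        i0, i1 = hull[k], hull[k + 1]\n        for i in range(i0, i1 + 1):\n            t = (vs[i] - vs[i0]) \/ (vs[i1] - vs[i0])\n            env[i] = Gs[i0] + t * (Gs[i1] - Gs[i0])\n    return env\n\\end{verbatim}\nComparing the returned envelope with the sampled flux then classifies each density between $\\ur$ and $\\ul$ as belonging to a shock (strict inequality) or a rarefaction (equality) part, which is how diagrams like Figure \\ref{fig:phasediagram} can be assembled numerically.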
The model in the assumed $d=0$ case has a symmetry of swapping particles and antiparticles and flipping the direction of the jumps at the same time. This results in the transformation $(\\ul,\\ur)\\to (-\\ur, -\\ul)$, which appears as a symmetry on the diagrams (including reversing the order of $\\mathbf{R}$'s and $\\mathbf{S}$'s due to the reflection of space). The other diagonal, $(\\ul,\\ur)\\to (\\ur,\\ul)$, is \\emp{not} a symmetry of the particle system, since particles jump to the right and antiparticles to the left, which fundamentally influences the hydrodynamic behavior.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nImage-to-image translation attempts to convert the image appearance from one domain to another while preserving the intrinsic image content.\nMany computer vision tasks can be formalized as a certain image-to-image translation problem, such as super-resolution~\\cite{ledig2016photo,shi2016real}, image colorization~\\cite{zhang2016colorful,zhang2017real,iizuka2016let}, image segmentation \\cite{long2015fully,eigen2015predicting}, and image synthesis \n\\cite{chen2017photographic,simo2016learning,xie2015holistically,laffont2014transient,bozhao2017arxiv}. \nHowever, conventional image-to-image translation methods are all task specific.\nA common framework for universal image-to-image translation remains an emerging research subject in the literature, which has gained considerable attention in recent studies\n\\cite{isola2016image,zhu2017unpaired,kim2017learning,liu2017unsupervised,yi2017dualgan}.\n\n\\begin{figure}[t]\n\\begin{center}\n \\centering\n \\resizebox{1\\linewidth}{!}{\n \\begin{minipage}[t]{1.0\\linewidth}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{fig1.pdf}\n \\end{minipage}\n }\n \\caption{\n Given unpaired images from two domains, our proposed \\proposedd~ learns the image-to-image translation by a stacked structure in a coarse-to-fine manner.\n For the Cityscapes \\textit{Labels $\\rightarrow$ Photo} task in $512 \\times 512$ resolution, the result of \\proposedd~ (left) appears more realistic and includes finer details compared with the result of CycleGAN~\\cite{zhu2017unpaired} (right).}\n \\label{fig:1}\n\\end{center}\n\\end{figure}\n\nIsola \\textit{et al}.~\\cite{isola2016image} leveraged the power of \n
CycleGAN~\\cite{zhu2017unpaired} in translating a Cityscapes semantic layout to a realistic picture lacks details and remains visually unsatisfactory.\nThe reason for this phenomenon lies in the significant visual gap between the two distinct image domains, which makes the cross-domain transformation too complicated to be learned by a single-stage unsupervised approach.\n\nJumping out of the scope of unsupervised image-to-image translation, many methods have leveraged the power of multi-stage refinements to tackle image generation from latent vectors~\\cite{denton2015deep,karras2017pregressive}, caption-to-image~\\cite{zhang2016stackgan}, and supervised image-to-image translation~\\cite{chen2017photographic,eigen2015predicting,wang2017high}.\nBy generating an image in a coarse-to-fine manner, a complicated transformation is broken down into easy-to-solve pieces.\nWang \\textit{et al}.~\\cite{wang2017high} successfully tackled the high-resolution image-to-image translation problem in such a coarse-to-fine manner with multi-scale discriminators.\nHowever, their method relies on pairwise training images and thus cannot be directly applied to the unsupervised image-to-image translation task studied here.\nTo the best of our knowledge, there exists no attempt to exploit stacked networks to overcome the difficulties encountered in learning unsupervised image-to-image translation.\n\nIn this paper, we propose the stacked cycle-consistent adversarial networks (SCANs) aiming for unsupervised learning of image-to-image translation. \nWe decompose a complex image translation into multi-stage transformations, including a coarse translation followed by multiple refinement processes.\nThe coarse translation learns to sketch a primary result in low resolution.\nThe refinement processes improve the translation by adding details to the previous results to produce higher resolution outputs.\nWe adopt a combination of an adversarial loss and a cycle-consistent loss in all stages to learn translations from unpaired image data.\nTo benefit more from multi-stage learning, we also introduce an adaptive fusion block in the refinement processes to learn the dynamic integration of the current stage's output and the previous stage's output. \nExtensive experiments demonstrate that our proposed model can not only generate results with realistic details, but also enable us to learn unsupervised image-to-image translation in higher resolution.\n\nIn summary, our contributions are mainly two-fold. Firstly, we propose SCANs to model the unsupervised image-to-image translation problem in a coarse-to-fine manner for generating results with finer details in higher resolution.\nSecondly, we introduce a novel adaptive fusion block to dynamically integrate the current stage's output and the previous stage's output, which outperforms directly stacking multiple stages.\n\n\n\\section{Related Work}\n\n\\smallskip\n\\noindent\\textbf{Image-to-image translation.} GANs~\\cite{goodfellow2014generative} have shown impressive results in a wide range of tasks including super-resolution \\cite{ledig2016photo,shi2016real}, video generation \\cite{xiong2018gan}, image colorization \\cite{isola2016image}, image style transfer \\cite{zhu2017unpaired}, etc. 
\nThe essential part of GANs is the idea of using an adversarial loss that encourages the translated results to be indistinguishable from real target images.\nAmong the existing image-to-image translation works using GANs, perhaps the most well-known one would be Pix2Pix \\cite{isola2016image}, in which Isola \\textit{et al}.~applied GANs with a regression loss to learn pairwise image-to-image translation.\nDue to the fact that pairwise image data is difficult to obtain, image-to-image translation using unpaired data has drawn increasing attention in recent studies.\nRecent works by Zhu \\textit{et al}.~\\cite{zhu2017unpaired}, Yi \\textit{et al}.~\\cite{yi2017dualgan}, and Kim \\textit{et al}.~\\cite{kim2017learning} have tackled the image translation problem using a combination of adversarial and cycle-consistent losses. \nTaigman \\textit{et al}.~\\cite{taigman2016unsupervised} applied cycle-consistency at the feature level together with the adversarial loss to learn a one-sided translation from unpaired images.\nLiu \\textit{et al}.~\\cite{liu2017unsupervised} used a GAN combined with a Variational Auto-Encoder (VAE) to learn a shared latent space of two given image domains. \nLiang \\textit{et al}.~\\cite{liang2017generative} combined the ideas of adversarial and contrastive losses, using a contrastive GAN with cycle-consistency to learn the semantic transform of two given image domains with labels.\nInstead of trying to translate an image to another domain directly, our proposed approach focuses on exploring refinement processes in multiple steps to generate a more realistic output with finer details by harnessing unpaired image data.\n\n\\smallskip\n\\noindent\\textbf{Multi-stage learning.}\nNumerous works have proposed using multiple stages to tackle complex generation or transformation problems.\nEigen \\textit{et al}.~\\cite{eigen2015predicting} proposed a multi-scale network to predict depth, surface normals, and semantic labels, which learns to refine the prediction result from coarse to fine. 
\nS2GAN introduced by Wang \\textit{et al}.~\\cite{wang2016generative} utilizes two networks arranged sequentially to first generate a structure image and then transform it into a natural scene.\nZhang \\textit{et al}.~\\cite{zhang2016stackgan} proposed StackGAN to generate high-resolution images from texts, which consists of two stages: the Stage-I network generates a coarse, low-resolution result, while the Stage-II network refines the result into a high-resolution, realistic image.\nChen \\textit{et al}.~\\cite{chen2017photographic} applied a stacked refinement network to generate scenes from segmentation layouts.\nTo accomplish generating high-resolution images from latent vectors, Karras \\textit{et al}.~\\cite{karras2017pregressive} started by generating a $4\\times 4$ resolution output, and then progressively stacked up both a generator and a discriminator to generate a $1024 \\times 1024$ realistic image.\nWang \\textit{et al}.~\\cite{wang2017high} applied a coarse-to-fine generator with a multi-scale discriminator to tackle the supervised image-to-image translation problem.\nDifferent from the existing works, this work exploits stacked image-to-image translation networks coupled with a novel adaptive fusion block to tackle the unsupervised image-to-image translation problem.\n\n\n\\section{Proposed Approach}\n\n\\subsection{Formulation}\nGiven two image domains $X$ and $Y$, the mutual translations between them can be denoted as two mappings $G:X \\to Y$ and $F:Y \\to X$, each of which takes an image from one domain and translates it to the corresponding representation in the other domain.\nExisting unsupervised image-to-image translation approaches~\\cite{zhu2017unpaired,yi2017dualgan,kim2017learning,liu2017unsupervised,taigman2016unsupervised} finish the learning of $G$ and $F$ in a single stage, which generate results lacking details and are unable to handle complex translations.\n\nIn this paper, we decompose translations $G$ and $F$ into multi-stage mappings. For simplicity, we now describe our method in a two-stage setting. Specifically, we decompose $G = G_{2} \\circ G_{1}$ and $F = F_{2} \\circ F_{1}$. $G_{1}$ and $F_{1}$ (\\textbf{Stage-1}) perform the cross-domain translation at a coarse scale, while $G_{2}$ and $F_{2}$ (\\textbf{Stage-2}) serve as refinements on top of the outputs from the previous stage.\nWe first finish the training of Stage-$1$ in low resolution and then train Stage-$2$ to learn refinement in higher resolution based on the fixed Stage-$1$. \n\n\nTraining two stages in the same resolution would make it difficult for Stage-$2$ to bring further improvement, as Stage-$1$ has already been optimized with the same objective function (see Section \\ref{sec:comp}). \nOn the other hand, we find that learning in a lower resolution allows the model to generate visually more natural results, since the manifold underlying the low-resolution images is easier to model.\nTherefore, first, we constrain Stage-$1$ to train on 2x down-sampled image samples, denoted by $X_{\\downarrow} $ and $Y_{\\downarrow}$, to learn a base transformation. 
Second, based on the outputs of Stage-$1$, we train Stage-$2$ with image samples $X$ and $Y$ in the original resolution.\nSuch a formulation exploits the preliminary low-resolution results of Stage-$1$ and guides Stage-$2$ to focus on up-sampling and adding finer details, thus helping improve the overall translation quality.\n\nIn summary, to learn cross-domain translations $G: X \\to Y$ and $F: Y \\to X$ on given domains $X$ and $Y$, we first learn preliminary translations $G_1: X_{\\downarrow} \\to Y_{\\downarrow}$ and $F_1: Y_{\\downarrow} \\to X_{\\downarrow}$ at the 2x down-sampled scale. Then we use $G_{2}: X_{\\downarrow} \\to X$ and $F_2: Y_{\\downarrow} \\to Y$ to obtain the final output with finer details in the original resolution. Notice that we can iteratively decompose $G_2$ and $F_2$ into more stages.%\n\n\\subsection{Stage-1: Basic Translation}\n\\label{sec:stage1}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.4\\linewidth]{stage1.pdf}\n\\end{center}\n \\caption{Illustration of an overview of Stage-$1$ for learning coarse translations in low resolution under an unsupervised setting. A solid arrow denotes an input-output relation, and a dashed arrow denotes a loss.}\n\\label{fig:stage1}\n\\end{figure}\n\nIn general, our Stage-$1$ module adopts an architecture similar to that of CycleGAN~\\cite{zhu2017unpaired}, which consists of two image translation networks $G_{1}$ and $F_{1}$ and two discriminators $D_{X_1}, D_{Y_1}$. Note that Stage-$1$ is trained on the low-resolution image domains $X_{\\downarrow}$ and $Y_{\\downarrow}$. \nFigure \\ref{fig:stage1} shows an overview of the Stage-$1$ architecture.\n\nGiven a sample $\\mathbf{x}_1 \\in X_{\\downarrow}$, $G_1$ translates it to a sample $\\mathbf{\\hat{y}}_1 = G_1(\\mathbf{x}_1)$ in the other domain $Y_{\\downarrow}$. On one hand, the discriminator $D_{Y_1}$ learns to classify the generated sample $\\mathbf{\\hat{y}}_1$ as class $0$ and the real image $\\mathbf{y}$ as class $1$. On the other hand, $G_1$ learns to deceive $D_{Y_1}$ by generating more and more realistic samples. This can be formulated as an adversarial loss:\n\\begin{align}\\begin{split}\n\\mathcal{L}_{adv}(G_1, D_{Y_1}, & X_\\downarrow, Y_\\downarrow) = \n\\mathbb{E}_{\\mathbf{y} \\sim Y_\\downarrow}\\left[\\log(D_{Y_1}(\\mathbf{y}))\\right] \\\\\n&+ \\mathbb{E}_{\\mathbf{x} \\sim X_\\downarrow}\\left[\\log(1-D_{Y_1}(G_1(\\mathbf{x})))\\right].\n\\end{split}\\end{align}\nWhile $D_{Y_1}$ tries to maximize $\\mathcal{L}_{adv}$, $G_1$ tries to minimize it. \nAfterward, we use $F_1$ to translate $\\mathbf{\\hat{y}}_1$ back to the domain $X_{\\downarrow}$, and constrain $F_1(\\mathbf{\\hat{y}}_1)=F_1(G_1(\\mathbf{x}))$ to be close to the input $\\mathbf{x}$. This can be formulated as a cycle-consistent loss:\n\\begin{equation}\n\\mathcal{L}_{cycle}(G_1, F_1, X_\\downarrow) = \\mathbb{E}_{\\mathbf{x}\\sim X_\\downarrow}\\left\\|\\mathbf{x} - F_1(G_1(\\mathbf{x}))\\right\\|_1.\n\\end{equation}\n\nSimilarly, for a sample $\\mathbf{y}_1 \\in Y_{\\downarrow}$, we use $F_1$ to perform translation, use $D_{X_1}$ to calculate the adversarial loss, and then use $G_1$ to translate backward to calculate the cycle-consistent loss. 
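For concreteness, the two loss terms can be written in a few lines of code. The following PyTorch-style snippet is an illustrative sketch of ours, not the actual training code; the names (G1, F1, d_y1) and the binary cross-entropy form of the adversarial loss are assumptions.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\n# sketch of the Stage-1 loss terms for one direction of the translation;\n# x, y are batches of 2x down-sampled images from the two domains\ndef stage1_losses(G1, F1, d_y1, x, y, lam=10.0):\n    y_fake = G1(x)                  # translate X -> Y\n    x_rec = F1(y_fake)              # translate back, Y -> X\n    # discriminator: push real y towards class 1 and y_fake towards 0\n    d_real, d_fake = d_y1(y), d_y1(y_fake.detach())\n    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))\n              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))\n    # generator: deceive the discriminator on y_fake\n    d_gen = d_y1(y_fake)\n    loss_adv = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))\n    # cycle consistency: F1(G1(x)) should reproduce x (L1 norm)\n    loss_cycle = torch.mean(torch.abs(x - x_rec))\n    return loss_d, loss_adv + lam * loss_cycle\n\\end{verbatim}\nThe symmetric terms for the other direction ($F_1$ and $D_{X_1}$) are obtained by swapping the roles of the two domains.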
Our full objective function for Stage-$1$ is a combination of the adversarial loss and the cycle-consistent loss:\n\\begin{dmath}\n\\mathcal{L}_{Stage1} =\n\\mathcal{L}_{adv}(G_1, D_{Y_1}, X_{\\downarrow}, Y_{\\downarrow}) + \n\\mathcal{L}_{adv}(F_1, D_{X_1}, Y_{\\downarrow}, X_{\\downarrow}) + \n\\lambda[\\mathcal{L}_{cycle}(G_1, F_1, X_{\\downarrow}) +\n\\mathcal{L}_{cycle}(F_1, G_1, Y_{\\downarrow})],\n\\label{equ:loss}\n\\end{dmath}\nwhere $\\lambda$ denotes the weight of the cycle-consistent loss.\nWe obtain the translations $G_1$ and $F_1$ by optimizing the following objective function: \n\\begin{equation}\nG_1, F_1 = \\argmin_{G_1,F_1} \\max_{D_{X_1}, D_{Y_1}} \\mathcal{L}_{Stage1},\n\\label{equ:minmax}\n\\end{equation}\nwhich encourages these translations to transform images to the other domain while preserving the intrinsic image content. \nAs a result, the optimized translations $G_1$ and $F_1$ can perform a basic cross-domain translation in low resolution. \n\n\\subsection{Stage-2: Refinement}\n\\label{sec:stage2}\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[trim={0 0 0 0},clip,width=0.87\\linewidth]{overview.pdf}\n\\end{center}\n \\caption{Illustration of an overview of our Stage-$2$ for learning refinement processes on top of Stage-$1$'s outputs. $G_{1}$ and $F_{1}$ are the translation networks learned in Stage-$1$. \n In the training process, we keep the weights of $G_1$ and $F_1$ fixed. A solid arrow denotes an input-output relation, and a dashed arrow denotes a loss.}\n\\label{fig:overview}\n\\end{figure*}\n\n\nSince it is difficult to learn a complicated translation with the limited ability of a single stage, the translated output of Stage-$1$ may seem plausible but still leaves much room for improvement.\nTo refine the output of Stage-$1$, we deploy Stage-$2$ with a stacked structure built on top of the trained Stage-$1$ to complete the full translation, generating higher resolution results with finer details.\n\n\nStage-$2$ consists of two translation networks $G_2$, $F_2$ and two discriminator networks $D_{X_2}$, $D_{Y_2}$, as shown in Figure \\ref{fig:overview}.\nWe only describe the architecture of $G_{2}$, since $F_{2}$ shares the same design (see Figure \\ref{fig:overview}).\n\n$G_2$ consists of two parts: a newly initialized image translation network $G_2^T$ and an adaptive fusion block $G_2^F$.\nGiven the output of Stage-$1$ ($\\mathbf{\\hat{y}}_1 = G_1(\\mathbf{x}_1)$), we use nearest-neighbor up-sampling to resize it to match the original resolution.\nDifferent from the image translation network in Stage-$1$, which only takes $\\mathbf{x} \\in X$ as input, in Stage-$2$ we use both the current stage's input $\\mathbf{x}$ and the previous stage's output $\\mathbf{\\hat{y}}_1$. Specifically, we concatenate $\\mathbf{\\hat{y}}_1$ and $\\mathbf{x}$ along the channel dimension, and utilize $G_2^T$ to obtain the refined result\n$\\mathbf{\\hat{y}}_2 = G_{2}^T(\\mathbf{\\hat{y}}_1, \\mathbf{x})$.\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.65\\linewidth]{alpha.pdf}\n\\end{center}\n \\caption{Illustration of the linear combination in an adaptive fusion block. 
The fusion block applies the fusion weight map $\\bm\\alpha$ to find defects in the previous result $\\mathbf{\\hat y}_1$ and correct them precisely using $\\mathbf{\\hat y}_2$, producing a refined output $\\mathbf{y}_2$.}\n\\label{fig:alpha}\n\\end{figure}\n\n\nBesides simply using $\\mathbf{\\hat{y}}_2$ as the final output, we introduce an adaptive fusion block $G_2^F$ to learn a dynamic combination of $\\mathbf{\\hat{y}}_2$ and $\\mathbf{\\hat{y}}_1$ to fully utilize the entire two-stage structure. Specifically, the adaptive fusion block learns a pixel-wise linear combination of the previous results: \n\\begin{equation}\nG_2^F(\\mathbf{\\hat{y}}_1, \\mathbf{\\hat{y}}_2) = \\mathbf{\\hat{y}_1} \\odot (1-\\bm{\\alpha}_{x}) + \\mathbf{\\hat{y}_2} \\odot \\bm{\\alpha}_{x},\n\\label{equ:alpha}\n\\end{equation}\nwhere $\\odot$ denotes the element-wise product and $\\bm{\\alpha} \\in (0,1)^{H \\times W}$ represents the fusion weight map, which is predicted by a convolutional network $h_{x}$:\n\\begin{equation}\n\\bm{\\alpha}_{x} = h_x(\\mathbf{x}, \\mathbf{\\hat{y}}_1, \\mathbf{\\hat{y}}_2).\n\\label{equ:alphah}\n\\end{equation}\nFigure \\ref{fig:alpha} shows an example of adaptively combining the outputs from two stages. \n\nSimilar to Stage-$1$, we use a combination of adversarial and cycle-consistent losses to formulate our objective function of Stage-$2$:\n\\begin{dmath}\n\\mathcal{L}_{Stage2} = \\mathcal{L}_{adv}(G_2 \\circ G_1, D_{Y_2}, X, Y) + \\mathcal{L}_{adv}(F_2 \\circ F_1, D_{X_2}, Y, X) + \\lambda \\left[\\mathcal{L}_{cycle}(G_2 \\circ G_1, F_2 \\circ F_1, X) + \\mathcal{L}_{cycle}(F_2 \\circ F_1, G_2 \\circ G_1, Y)\\right].\n\\label{equ:loss2}\n\\end{dmath}\nOptimizing this objective is similar to solving Equation \\ref{equ:minmax}.\nThe translation networks $G_2$ and $F_2$ are learned to refine the previous results by correcting defects and adding details to them.\n\nFinally, we complete our desired translations $G$ and $F$ by integrating the transformations in Stage-$1$ and Stage-$2$, which are capable of tackling a complex image-to-image translation problem under the unsupervised setting.\n\n\\section{Experiments}\nIn the following experiments, the proposed approach is named \\textit{SCAN}, or \\textit{SCAN Stage-$N$} if it has $N$ stages.\nWe explore several variants of our model to evaluate the effectiveness of our design in Section \\ref{sec:ablation}. 
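Before turning to the experimental details, we give a minimal PyTorch-style sketch of the adaptive fusion block of Section \\ref{sec:stage2}, consistent with Equations \\ref{equ:alpha} and \\ref{equ:alphah} and with the 3-layer design described in the next subsection; the layer width and kernel size are our assumptions, not the exact values of the implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# adaptive fusion block G_2^F: a small network h_x predicts a per-pixel\n# weight map alpha in (0, 1) that blends the up-sampled Stage-1 output y1\n# with the Stage-2 output y2\nclass AdaptiveFusion(nn.Module):\n    def __init__(self, ch=64):                # assumed width\n        super().__init__()\n        self.h = nn.Sequential(\n            nn.Conv2d(9, ch, 3, padding=1),   # input: concat(x, y1, y2)\n            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),\n            nn.Conv2d(ch, ch, 3, padding=1),\n            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),\n            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())\n\n    def forward(self, x, y1, y2):\n        alpha = self.h(torch.cat([x, y1, y2], dim=1))\n        return y1 * (1 - alpha) + y2 * alpha  # per-pixel linear blend\n\\end{verbatim}\nThe single-channel weight map is broadcast over the RGB channels, matching $\\bm{\\alpha}\\in(0,1)^{H\\times W}$.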
In all experiments, we decompose the target translation into two stages, except for exploring the ability of the three-stage architecture in high-resolution tasks in Section~\\ref{sec:highres}.\n\nWe used the officially released models of CycleGAN~\\cite{zhu2017unpaired}\nand Pix2Pix~\\cite{isola2016image}\nfor $256\\times256$ image translation comparisons.\nFor $512\\times512$ tasks, we train the CycleGAN with the official code\nsince there is no available pre-trained model.\n\n\\subsection{Network Architecture}\n\nFor the image translation network, we follow the settings of \\cite{zhu2017unpaired,liang2017generative}, adopting the encoder-decoder architecture from Johnson \\textit{et al}.~\\cite{johnson2016perceptual}.\nThe network consists of two down-sample layers implemented by stride-2 convolution, six residual blocks and two up-sample layers implemented by sub-pixel convolution \\cite{shi2016real}.\nNote that different from \\cite{zhu2017unpaired}, which used the fractionally strided convolution as the up-sample block, we use the sub-pixel convolution \\cite{shi2016real} to avoid checkerboard artifacts \\cite{odena2016deconvolution}.\nThe adaptive fusion block is a simple 3-layer convolutional network, which calculates the fusion weight map $\\bm\\alpha$ using two Convolution-InstanceNorm-ReLU blocks followed by a Convolution-Sigmoid block.\nFor the discriminator, we use the PatchGAN structure introduced in \\cite{isola2016image}.\n\n\\subsection{Datasets}\n\nTo demonstrate the capability of our proposed method for tackling the complex image-to-image translation problem under unsupervised settings, we first conduct experiments on the Cityscapes dataset \\cite{cordts2016cityscapes}. \nWe compare with the state-of-the-art approaches in the \\textit{Labels $\\leftrightarrow$ Photo} task in $256 \\times 256$ resolution.\nTo further show the effectiveness of our method to learn complex translations, we also extended the input size to a challenging $512 \\times 512$ resolution, namely the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task.\n\nBesides the \\textit{Labels $\\leftrightarrow$ Photo} task, we also select six image-to-image translation tasks from \\cite{zhu2017unpaired}, including \\textit{Map$\\leftrightarrow$Aerial}, \\textit{Facades$\\leftrightarrow$Labels}\nand \\textit{Horse$\\leftrightarrow$Zebra}. We compare our method with the CycleGAN \\cite{zhu2017unpaired} in these tasks in $256 \\times 256$ resolution.\n\n\\subsection{Training Details}\n\nNetworks in Stage-$1$ are trained from scratch, while networks in Stage-$N$ are trained with the \\{Stage-$1$, $\\cdots$, Stage-$(N-1)$\\} networks fixed. For the GAN loss, different from the previous works \\cite{zhu2017unpaired,isola2016image}, we adopt a gradient penalty term $\\lambda_{gp}(||\\nabla D(x)||_2 - 1)^2$ in the discriminator loss to achieve a more stable training process~\\cite{kodali2017train}. \nFor all datasets, the Stage-$1$ networks are trained in $128 \\times 128$ resolution, and the Stage-$2$ networks are trained in $256 \\times 256$ resolution. For the three-stage architecture in Section~\\ref{sec:highres}, the Stage-$3$ networks are trained in $512 \\times 512$ resolution.\nWe set the batch size to $1$, $\\lambda = 10$ and $\\lambda_{\\text{gp}} =10$ in all experiments.\nAll stages are trained for $100$ epochs for all datasets. 
We use Adam \\cite{kingma2014adam} to optimize our networks with an initial learning rate of $0.0002$, and decrease it linearly to zero in the last $50$ epochs.\n\n\\subsection{Evaluation Metrics} \n\n\\noindent\\textbf{FCN Score and Segmentation Score.}\nFor the Cityscapes dataset, we adopt the FCN Score and the Segmentation Score as evaluation metrics from~\\cite{isola2016image} for the \\textit{Labels $\\rightarrow$ Photo} task and the \\textit{Photo $\\rightarrow$ Labels} task, respectively.\nThe FCN Score employs an off-the-shelf FCN segmentation network \\cite{long2015fully} to estimate the realism of the translated images.\nThe Segmentation Score includes three standard segmentation metrics, which are the per-pixel accuracy, the per-class accuracy, and the mean class accuracy, as defined in~\\cite{long2015fully}.\n\n\\noindent\\textbf{PSNR and SSIM.}\nBesides using the FCN Score and the Segmentation Score, we also calculate the PSNR and the SSIM~\\cite{wang2004image} for a quantitative evaluation.\nWe apply the above metrics to the \\textit{Map $\\leftrightarrow$ Aerial} task and the \\textit{Facades $\\leftrightarrow$ Labels} task to measure both the color similarity and the structural similarity between the translated outputs and the ground truth images.\n\n\\noindent\\textbf{User Preference.} \\label{sec:userstudy}\nWe run user preference tests in the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task and the \\textit{Horse$\\rightarrow$Zebra} task\nto evaluate the realism of our generated photos. \nIn the user preference test, each time a user is presented with a pair of results from our proposed \\proposedd~ and the CycleGAN~\\cite{zhu2017unpaired}, and asked which one is more realistic.\nEach pair of results is translated from the same image.\nImages are all shown \nin randomized order.\nIn total, $30$ images from the Cityscapes test set and $10$ images from the Horse2Zebra test set are used in the user preference tests.\nAs a result, $20$ participants make a total of $600$ and $200$ preference choices, respectively.\n\n\\subsection{Comparisons} \\label{sec:comp}\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{city-realseg.pdf}\n\\end{center}\n \\caption{Comparisons on the Cityscapes dataset of $256\\times256$ resolution. The left subfigure shows \\textit{Labels $\\rightarrow$ Photo} results and the right one shows \\textit{Photo $\\rightarrow$ Labels} results. In the \\textit{Labels $\\rightarrow$ Photo} task, our proposed \\proposedd~ generates more natural photographs than CycleGAN; in the \\textit{Photo $\\rightarrow$ Labels} task, \\proposedd~ produces an accurate segmentation map while CycleGAN's results are blurry and suffer from deformation. SCAN also generates results that are visually closer to those of the supervised approach Pix2Pix than results of CycleGAN. Zoom in for better view.}\n %\n\\label{fig:comp-city}\n\\end{figure*}\n\n\\begin{table}[t]\n\\small\n\\tabcolsep=0.11cm\n\\caption{FCN Scores in the Labels $\\rightarrow$ Photo task and Segmentation Scores in the Photo $\\rightarrow$ Labels task on the Cityscapes dataset. The proposed methods are named \\textit{SCAN (Stage-1 resolution)-(Stage-2 resolution)}. \\textit{FT} means that we also \\textit{fine-tune} the Stage-1 model instead of fixing its weights. 
\n\\noindent\\textbf{User Preference.} \\label{sec:userstudy}\nWe run user preference tests in the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task and the \\textit{Horse$\\rightarrow$Zebra} task\nfor evaluating the realism of our generated photos. \nIn the user preference test, each time a user is presented with a pair of results from our proposed \\proposedd~ and the CycleGAN~\\cite{zhu2017unpaired}, and asked which one is more realistic.\nEach pair of results is translated from the same image.\nAll images are shown in randomized order.\nIn total, $30$ images from the Cityscapes test set and $10$ images from the Horse2Zebra test set are used in the user preference tests.\nAs a result, $20$ participants make a total of $600$ and $200$ preference choices, respectively.\n\n\\subsection{Comparisons} \\label{sec:comp}\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{city-realseg.pdf}\n\\end{center}\n \\caption{Comparisons on the Cityscapes dataset of $256\\times256$ resolution. The left subfigure shows \\textit{Labels $\\rightarrow$ Photo} results and the right one shows \\textit{Photo $\\rightarrow$ Labels} results. In the \\textit{Labels $\\rightarrow$ Photo} task, our proposed \\proposedd~ generates more natural photographs than CycleGAN; in the \\textit{Photo $\\rightarrow$ Labels} task, \\proposedd~ produces an accurate segmentation map while CycleGAN's results are blurry and suffer from deformation. SCAN also generates results that are visually closer to those of the supervised approach Pix2Pix than the results of CycleGAN. Zoom in for better view.}\n %\n\\label{fig:comp-city}\n\\end{figure*}\n\n\\begin{table}[t]\n\\small\n\\tabcolsep=0.11cm\n\\caption{FCN Scores in the Labels $\\rightarrow$ Photo task and Segmentation Scores in the Photo $\\rightarrow$ Labels task on the Cityscapes dataset. The proposed methods are named after \\textit{SCAN (Stage-1 resolution)-(Stage-2 resolution)}. \\textit{FT} means that we also \\textit{fine-tune} the Stage-1 model instead of fixing its weights. \\textit{FS} means directly training Stage-2 \\textit{from scratch} without training the Stage-1 model.}\n\\begin{center}\n\\resizebox{0.85\\columnwidth}{!}{\n\\begin{tabular}{lcccccc}\n\\hline\n & \\multicolumn{3}{c}{\\textbf{Labels $\\rightarrow$ Photo}}\n & \\multicolumn{3}{c}{\\textbf{Photo $\\rightarrow$ Labels}} \\\\\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \nCycleGAN~\\cite{zhu2017unpaired} & 0.52 & 0.17 & 0.11 & 0.58 & 0.22 & 0.16 \\\\\nContrast-GAN~\\cite{liang2017generative} & 0.58 & \\bf0.21 & \\bf0.16 & 0.61 & 0.23 & 0.18 \\\\\n\\proposedd~ Stage-1 128 & 0.46 & 0.19 & 0.12 & 0.71 & 0.24 & 0.20 \\\\\n\\proposedd~ Stage-1 256 & 0.57 & 0.15 & 0.11 & 0.63 & 0.18 & 0.14 \\\\\n\\proposedd~ Stage-2 256-256 & 0.52 & 0.15 & 0.11 & 0.64 & 0.18 & 0.14 \\\\\n\\proposedd~ Stage-2 128-256 \\textit{FS} & 0.59 & 0.15 & 0.10 & 0.36 & 0.10 & 0.05 \\\\\n\\proposedd~ Stage-2 128-256 \\textit{FT} & 0.61 & 0.18 & 0.13 & 0.62 & 0.19 & 0.13 \\\\\n\\proposedd~ Stage-2 128-256 & \\bf0.64 & 0.20 & \\bf0.16 & \\bf0.72 & \\bf0.25 & \\bf0.20 \\\\\n\\hline\nPix2Pix~\\cite{isola2016image} & 0.71 & 0.25 & 0.18 & 0.85 & 0.40 & 0.32 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:city}\n\\end{table}\n\n\\noindent\\textbf{Cityscapes \\textit{Labels $\\leftrightarrow$ Photo.}}\nTable \\ref{tab:city} shows the comparison of our proposed method \\proposedd~ and its variants with state-of-the-art methods in the Cityscapes \\textit{Labels $\\leftrightarrow$ Photo} tasks. \nThe same unsupervised settings are adopted by all methods except Pix2Pix, which is trained under a supervised setting.\n\nOn the FCN Scores, our proposed \\proposedd~ Stage-2 128-256 outperforms the state-of-the-art approaches in pixel accuracy, while being competitive in class accuracy and class IoU. \nOn the Segmentation Scores, \\proposedd~ Stage-2 128-256 outperforms the state-of-the-art approaches in all metrics.\nComparing SCAN Stage-1 256 with CycleGAN, our modified network yields improved results, which, however, still fall short of SCAN Stage-2 128-256.\nAlso, we find that \\proposedd~ Stage-2 128-256 achieves a much closer performance to the supervised approach Pix2Pix~\\cite{isola2016image} than the others.\n\nWe also compare our \\proposedd~ Stage-$2$ 128-256 with different variants of SCAN. \nComparing \\proposedd~ Stage-$2$ 128-256 with the \\proposedd~ Stage-$1$ approaches, we find a substantial improvement on the FCN Scores, which indicates that adding the Stage-$2$ refinement helps to improve the realism of the output images.\nOn the Segmentation Score, the comparison of \\proposedd~ Stage-$1$ 128 and \\proposedd~ Stage-$1$ 256 shows that learning at low resolution yields better performance, and the comparison between \\proposedd~ Stage-$2$ 128-256 and \\proposedd~ Stage-$1$ 128 shows that adding Stage-2 further improves on the Stage-1 results.\nTo experimentally prove that the performance gain does not come from merely adding model capacity, we conducted a \\proposedd~ Stage-2 256-256 experiment, which performs inferiorly to SCAN Stage-2 128-256. \n\nTo further analyze various experimental settings, we also conducted our \\proposedd~ Stage-2 128-256 in two additional settings, namely \\textit{learning two stages from scratch} and \\textit{fine-tuning Stage-1}.
We add supervision signals to both stages for these two settings.\nLearning two stages from scratch shows poor performance in both tasks, which indicates that jointly training the two stages does not guarantee a performance gain. The reason may be that directly training a high-capacity generator is difficult.\nAlso, fine-tuning Stage-1 does not resolve this problem and yields a smaller improvement than fixing the weights of Stage-1.\n\nTo examine the effectiveness of the proposed fusion block, we compare it with several variants: \n1) \\textit{Learned Pixel Weight} (LPW), which is our proposed fusion block with a learned pixel-wise weight map $\\bm\\alpha$; \n2) \\textit{Uniform Weight} (UW), in which the two stages are fused with the same weight at all pixel locations, $\\mathbf{\\hat{y}_1} (1-w) + \\mathbf{\\hat{y}_2} w$, where $w$ gradually increases from 0 to 1 during training; \n3) \\textit{Learned Uniform Weight} (LUW), which is similar to \\textit{UW}, but $w$ is a learnable parameter instead; \n4) \\textit{Residual Fusion} (RF), which uses a simple residual fusion $\\mathbf{\\hat{y}_1} + \\mathbf{\\hat{y}_2}$.\nThe results are illustrated in Table~\\ref{tab:add}. It can be observed that our proposed LPW fusion yields the best performance among all alternatives, which indicates that the LPW approach learns a better fusion of the outputs of the two stages than the approaches with uniform weights.\n\n\\begin{table}[t]\n\\small\n\\tabcolsep=0.11cm\n\\caption{FCN Scores and Segmentation Scores of several variants of the fusion block on the Cityscapes dataset.} \n\\begin{center}\n\\resizebox{0.8\\columnwidth}{!}{\n\\begin{tabular}{lcccccc}\n\\hline\n & \\multicolumn{3}{c}{\\textbf{Labels $\\rightarrow$ Photo}}\n & \\multicolumn{3}{c}{\\textbf{Photo $\\rightarrow$ Labels}} \\\\\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \nCycleGAN & 0.52 & 0.17 & 0.11 & 0.58 & 0.22 & 0.16 \\\\ \n\\proposedd~ 128-256 LPW & \\bf0.64 & \\bf0.20 & \\bf0.16 & \\bf0.72 & \\bf0.25 & \\bf0.20 \\\\\n\\proposedd~ 128-256 UW & 0.59 & 0.19 & 0.14 & 0.66 & 0.22 & 0.17 \\\\ \n\\proposedd~ 128-256 LUW & 0.59 & 0.18 & 0.12 & 0.70 & 0.24 & 0.19 \\\\\n\\proposedd~ 128-256 RF & 0.60 & 0.19 & 0.13 & 0.68 & 0.23 & 0.18 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:add}\n\\end{table}\n\nIn Figure \\ref{fig:comp-city}, we visually compare our results with those of the CycleGAN and the Pix2Pix. \nIn the \\textit{Labels $\\rightarrow$ Photo} task, \\proposedd~ generates more realistic and vivid photos compared to the CycleGAN. Also, the details in our results appear closer to those of the supervised approach Pix2Pix. \nIn the \\textit{Photo $\\rightarrow$ Labels} task, while \\proposedd~ generates more accurate semantic layouts that are closer to the ground truth, the results of the CycleGAN suffer from distortion and blur.\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[trim={0 1.5cm 0 0},clip,width=0.72\\linewidth]{512multi.pdf}\n\\end{center}\n \\caption{Translation results in the \\textit{Labels $\\rightarrow$ Photo} task on the Cityscapes dataset of $512 \\times 512$ resolution. Our proposed \\proposedd~ produces realistic images that, at a glance, even resemble the ground truth. 
Zoom in for best view.}\n\\label{fig:512}\n\\end{figure*}\n\n\\smallskip\n\\noindent\\textbf{High-Resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo}.} \\label{sec:highres}\nThe CycleGAN only considers images in 256$\\times$256 resolution, and the results of training CycleGAN directly in 512$\\times$512 resolution are not satisfactory, as shown in Figure \\ref{fig:1} and Figure \\ref{fig:512}.\n\nBy iteratively decomposing the Stage-$2$ into a Stage-$2$ and a Stage-$3$, we obtain a three-stage SCAN.\nDuring the translation process, the resolution of the output grows from $128 \\times 128$ to $256 \\times 256$ and finally to $512 \\times 512$, as shown in Figure \\ref{fig:1}. Figure \\ref{fig:512} shows the comparison between our \\proposedd~ and the CycleGAN in the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task. We can clearly see that our proposed \\proposedd~ generates more realistic photos compared with the results of CycleGAN, and SCAN's outputs are visually closer to the ground truth images. \nThe first row shows that our results contain realistic trees with plenty of details, while the CycleGAN only generates repeated patterns. \nIn the second row, we can observe that the CycleGAN tends to simply ignore the cars by filling them with a plain grey color, while the cars in our results show more details.\n\nAlso, we run a user preference study comparing \\proposedd~ with the CycleGAN under the setting described in Section~\\ref{sec:userstudy}. \nAs a result, 74.9\\% of the queries prefer our SCAN's results, 10.9\\% prefer the CycleGAN's results, and 14.9\\% rate the two methods as equal. \nThis result shows that our \\proposedd~ generates overall more realistic translation results than the CycleGAN in the high-resolution translation task.\n\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{PSNR and SSIM values in the \\textit{Map$\\leftrightarrow$Aerial} and \\textit{Facades$\\leftrightarrow$Labels} tasks.}\n\\label{tab:mapfacades}\n\\resizebox{0.75\\columnwidth}{!}{\n\\begin{tabular}{lcccccccc}\n\\hline\n & \\multicolumn{2}{c}{\\textbf{Aerial $\\rightarrow$ Map}} \n & \\multicolumn{2}{c}{\\textbf{Map $\\rightarrow$ Aerial}} \n & \\multicolumn{2}{c}{\\textbf{Facades $\\rightarrow$ Labels}}\n & \\multicolumn{2}{c}{\\textbf{Labels $\\rightarrow$ Facades}} \\\\\n\\textbf{Method} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} \\\\\n\\hline\nCycleGAN~\\cite{zhu2017unpaired} & 21.59 & 0.50 & 12.67 & 0.06 & 6.68 & 0.08 & 7.61 & 0.11 \\\\\n\\proposedd~ & \\bf25.15 & \\bf0.67 & \\bf14.93 & \\bf0.23 & \\bf8.28 & \\bf0.29 & \\bf10.67 & \\bf0.17\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.77\\linewidth]{mapfacades.pdf}\n\\end{center}\n \\caption{Translation results in the Labels$\\rightarrow$Facades task and the Aerial$\\rightarrow$Map task. Results of our proposed SCAN show finer details in both tasks compared with CycleGAN's results.}\n\\label{fig:mapfacade}\n\\end{figure}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{horse2zebra.pdf}\n\\end{center}\n \\caption{\n Translation results in the Horse$\\leftrightarrow$Zebra tasks. CycleGAN changes both the desired objects and the backgrounds. 
Adding an identity loss can fix this issue, but the results still tend to be blurry compared with those from SCAN, which does not use the identity loss.\n }\n\\label{fig:appleorange}\n\\end{figure}\n\n\\smallskip\n\\noindent\\textbf{\\textit{Map$\\leftrightarrow$Aerial} and \\textit{Facades$\\leftrightarrow$Labels}.}\nTable \\ref{tab:mapfacades} reports the performance in terms of the PSNR and SSIM metrics. \nWe can see that our method outperforms the CycleGAN in both metrics, which indicates that our translation results are more similar to the ground truth in terms of colors and structures.\n\nFigure \\ref{fig:mapfacade} shows sample results in the Aerial$\\rightarrow$Map task and the Labels$\\rightarrow$Facades task. \nWe can observe that our results contain finer details, while the CycleGAN results tend to be blurry. \n\n\\smallskip\n\\noindent\\textbf{\\textit{Horse$\\leftrightarrow$Zebra}.}\nFigure \\ref{fig:appleorange} compares the results of SCAN against those of the CycleGAN in the Horse$\\leftrightarrow$Zebra task.\nWe can observe that both \\proposedd~ and the CycleGAN successfully translate the input images to the other domain. \nAs Figure \\ref{fig:appleorange} shows, the CycleGAN changes not only the desired objects in the input images but also their backgrounds.\nAdding the identity loss~\\cite{zhu2017unpaired} can fix this problem, but the results still tend to be blurry compared with those from our proposed SCAN.\nA user preference study on the Horse$\\rightarrow$Zebra translation is performed under the setting described in Section~\\ref{sec:userstudy}. \nAs a result, 76.3\\% of the subjects prefer our SCAN's results over CycleGAN's, while 68.9\\% prefer SCAN's results over CycleGAN+idt's.\n\n\\subsection{Visualization of Fusion Weight Distributions}\n\nTo illustrate the role of the adaptive fusion block, \nwe visualize the average distributions of the fusion weights ($\\bm{\\alpha}_{x}$ in Equation \\ref{equ:alpha}) over 1000 samples from the Cityscapes dataset at epochs 1, 10, and 100, as shown in Figure \n\\ref{fig:alphaevol}. \nWe observe that the distribution of the fusion weights gradually shifts from left to right,\nindicating a consistent increase of the weight values in the fusion maps, which implies that more and more details of the second stage are brought to the final output.\n\n\\subsection{Ablation Study} \\label{sec:ablation}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[trim={0 2.5cm 0 -1cm},clip,width=0.7\\linewidth]{alpha_evol_map.pdf}\n\\end{center}\n \\caption{\n \\color{black}\n Distributions of fusion weights over all pixels in different epochs. Each distribution is an average result over 1000 sample images from the Cityscapes dataset. 
\n Dashed arrows indicate the average weights of the fusion maps.\n }\n\\label{fig:alphaevol}\n\\end{figure}\n\n\nIn Section \\ref{sec:comp}, we reported the evaluation results of \\proposedd~ and its variants; here we further explore SCAN by removing modules from it: \n\\begin{itemize}\n\\item \\proposedd~ w\/o \\textit{Skip} Connection: remove the skip connection from the input to the translation network in the Stage-$2$ model, denoted by \\textit{\\proposedd~ w\/o Skip}.\n\\item \\proposedd~ w\/o Adaptive \\textit{Fusion} Block: remove the final adaptive fusion block in the Stage-$2$ model, denoted by \\textit{\\proposedd~ w\/o Fusion}.\n\\item \\proposedd~ w\/o \\textit{Skip} Connection and Adaptive \\textit{Fusion} Block: remove both the skip connection from the input to the translation network and the adaptive fusion block in the Stage-$2$ model, denoted by \\textit{SCAN w\/o Skip, Fusion}.\n\\end{itemize}\n\n\n\\begin{table}[t]\n\\caption{FCN Scores in the Cityscapes dataset for the ablation study, evaluated on the \\textit{Labels $\\rightarrow$ Photo} task with different variants of the proposed SCAN.}\n\\begin{center}\n\\resizebox{0.72\\columnwidth}{!}{\n\\begin{tabular}{lccc}\n\\hline\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \n\\proposedd~ Stage-1 128 & 0.457 & 0.188 & 0.124 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Skip, Fusion & 0.513 & 0.186 & 0.125 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Skip & 0.593 & 0.184 & 0.136 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Fusion & 0.613 & 0.194 & 0.137 \\\\\n\\proposedd~ Stage-2 128-256 & \\bf0.637 & \\bf0.201 & \\bf0.157 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:fcn-alphaskip}\n\\end{table}\n\nTable \\ref{tab:fcn-alphaskip} shows the results of the ablation study, in which we can observe that removing either the adaptive fusion block or the skip connection degrades the performance.\nWith both components removed, the stacked networks obtain only a marginal performance gain compared with Stage-$1$.\nNote that the fusion block only consists of three convolution layers, which have a relatively small size compared to the whole network.\nReferring to Table \\ref{tab:city}, in the SCAN Stage-2 256-256 experiment we doubled the network parameters compared to SCAN Stage-1 256, resulting in no improvement in the \\textit{Labels $\\rightarrow$ Photo} task.\nThus, the improvement of the fusion block does not simply come from the added capacity.\n\nTherefore, we can conclude that our proposed SCAN structure, which consists of the skip connection and the adaptive fusion block, is critical for improving the overall translation performance.\n\n\\section{Conclusions}\n\nIn this paper, we proposed a novel approach to tackle the unsupervised image-to-image translation problem by exploiting a stacked network structure with cycle-consistency, namely SCAN.\nThe proposed \\proposedd~ decomposes a complex image translation process into a coarse translation step and multiple refining steps, and then applies the cycle-consistency to learn the target translation from unpaired image data.\nExtensive experiments on multiple datasets demonstrate that our proposed \\proposedd~ outperforms the existing methods on quantitative metrics and generates more visually pleasing translation results with finer details.\n\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\\section{Microcode Primitives}\n\\label{cucode:section:primitives}\n\nMicrocode programs supported 
by modern processors, combined with the ability to\nupdate this microcode, can provide a range of useful security primitives that can be used to build system defenses. \nIn the following, we explore several key primitives and discuss in\nSection~\\ref{cucode:section:case_study} how system defenses can be implemented based on\nour analysis results described in the previous section. \n\n\\par{\\bf Enabling or disabling CPU features at runtime}\n\nDespite recently uncovered security issues such as \\textsc{Spectre} and \\textsc{Meltdown}~\\cite{Lipp2018meltdown,projectzeromeltdownspectre,Kocher2018spectre},\nspeculative execution is an important feature that enables the performance of current \\ac{CPU} families.\nWhile the na\\\"ive coun\\-ter\\-mea\\-sure---disabling speculative execution completely---provides a high level of security,\nit significantly reduces the performance of a given system.\nHowever, if speculative execution could be disabled only temporarily or only for certain program states,\na trade-off between security and performance could be achieved.\n\nAnother example of a feature that can be used by both benign and malicious applications is the availability of high-resolution timers.\nSuch timers allow an attacker to abuse microarchitectural timing side channels to gather information from otherwise inaccessible contexts~\\cite{crypto:1996:kocher,Hund,pakt:2012,206170}.\nIn both cases, microcode can improve security by applying a fine-grained permission model on top of existing protection mechanisms,\nrestricting features to certain applications or contexts only.\n\n\\par{\\bf Intercepting low-level \\ac{CPU} processes} \nA core functionality of microcode is the decoding of instructions.\nBy intercepting this step during the execution of x86 code, it is possible to exert fine-grained control over the behavior of\ninstructions, programs, and the system as a whole.\nFrom a security perspective, additional functionality can be added to existing instructions,\nspecial handling for corner cases can be inserted, and security checks can be included.\n\n\nBesides changing and extending the instruction decoding,\nit is also possible to influence other aspects of the \\ac{CPU}'s operation.\nFor example, the exception handling mechanism is implemented with the help of microcode.\nBefore an exception reaches the kernel-level x86 code, microcode can change the metadata passed to the kernel or handle the\nexception without involving the kernel at all.\nBy directly modifying the exception handling in microcode, expensive context switches can be avoided.\nThis allows, for example,\nspecial handling of page faults to implement page-based memory separation in a way that is completely transparent to the kernel.\n\n\\par{\\bf Isolated execution environment} \nThe microcode engine provides a fully-featured execution environment that cannot be\nintercepted by the running kernel in any way.\nAny exception delivered while microcode is running will be stalled until the current decoding is complete.\nMoreover, any state that is not explicitly written out will be contained in the microcode engine and cannot be accessed.\nMore specifically, both the running kernel and hypervisors are unable to inspect the execution state of the microcode engine.\nThis provides an enclave-like environment in which computations on sensitive data can be performed in an opaque way.\nOnly the results will be passed to the operating system,\nprotecting secret keys or other data inside the microcode.\n\n\\par{\\bf 
Extending and modifying the x86 instruction set} \nBy either reusing no longer used x86 instructions or adding entirely new\ninstructions to the decoding process,\nmicrocode can enable functionality not found in the standard x86 instruction set architecture.\nThese instructions can, for example, implement more complex semantics that are tailored to a specific use case.\nBy condensing calculations into fewer instructions, caches are utilized more effectively,\nincreasing performance.\nBesides performance improvements, new primitives can be added with new instructions.\nAs microcode can change the access level between operations,\nit is able to read and write kernel-only data structures.\nCombining this with fine-grained checks enables fast access to otherwise privileged functions, without the support of the running\nkernel.\n\n\\section{Case Studies of Microcode Defenses}\n\\label{cucode:section:case_study}\n\nBased on the security primitives discussed above, we now present designs and \\textit{proof-of-concept} implementations of our microcode-assisted system defenses and constructive microcode applications. For each case study, we first briefly motivate the primitive, present the design and implementation, and conclude with an evaluation and discussion of the advantages and drawbacks of our approach. Based on these case studies, we demonstrate that microcode indeed improves properties of those applications with regard to performance, security, and complexity.\nThe microcode programs and supporting infrastructure are publicly available~\\cite{microcode:amd_microprograms}.\n\nThe current state of the programs does not feature a mechanism for runtime configuration;\nhowever, this can be achieved in different ways.\nAs it is possible to load microcode updates during runtime,\nthe operating system can apply an update to enable or disable certain features.\nIt is also possible to add dedicated flags in the thread or process control structures created by\nthe operating system to signal which features should be enabled for a certain thread.\nHowever, both approaches require support from the OS to either synchronize the microcode update procedure\nacross all \\ac{CPU} cores or allocate and initialize the configuration fields for every new thread.\nAnother option is to use processor-internal storage to store configuration variables.\nTests showed that a writable region of memory exists that can be used to store these variables.\nUnfortunately, further experiments are needed to ascertain the nature\nof this memory region and the side effects of changing it at runtime.\n\nWe evaluate the performance of our case studies with microbenchmarks of the affected instructions. To this end, we determine the execution time in cycles as measured via the \\texttt{rdtsc} instruction.\nIt provides the value of the \\ac{TSC},\na counter which is incremented each clock cycle.\nThe code snippet used for performance benchmarks is illustrated in Figure~\\ref{ucode:figure:measurement}. All tests were performed on an AMD Sempron 3100+ running the minimal operating system developed by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse}. In the following, the cycle counts are given without the overhead of the measurement setup itself, which adds 65 cycles to every execution.\nFurther improvements to the performance properties of the defenses are possible with a greater understanding of the underlying hardware. 
This requires either further reverse engineering work, especially with regard to scheduling, or, to fully utilize the existing hardware, the assistance of the \\ac{CPU} vendors.\n\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{figure}\n\\begin{lstlisting}\nxor eax, eax\nxor edi, edi\ncpuid\nrdtsc\nxchg edi, eax\n\n; benchmarked instruction\nshrd ebp, ecx, 4\n\ncpuid\nrdtsc\nsub eax, edi\n\\end{lstlisting}\n\\caption{Microbenchmark setup to determine the execution time in cycles of \\texttt{shrd} (double precision right shift). The modern \\texttt{rdtscp} instruction variant is not available on the tested K8 CPU, thus the \\texttt{cpuid} instruction is used to serialize the instruction execution.}\n\\label{ucode:figure:measurement}\n\\end{figure}\n}\n\n\\subsection{Customizable RDTSC Precision}\n\\label{cucode:section:case_study:rdtsc}\n\n\\par{\\bf Motivation.}\nPrevious works demonstrated the possibility of reconstructing the memory\nlayout~\\cite{Hund,gras2017aslr,oren2015spy} using timing side channels.\nMore recently, the \\textsc{Spectre} and \\textsc{Meltdown} attacks have shown in a spectacular\nway~\\cite{Kocher2018spectre,projectzeromeltdownspectre,Lipp2018meltdown} that it is possible\nto break the fundamental guarantees of memory isolation on modern systems.\nA common aspect of these attacks is the usage of high-resolution timers to observe the timing side channels.\nDue to these dangers,\nmodern browsers limit the accuracy of high-resolution timers to a recommended value~\\cite{w3c-rec-candidate}.\nWhile this does not eliminate all timing sources~\\cite{schwarz2017fantastic,kohlbrenner2016trusted},\nit raises the implementation complexity of attacks and provides a mitigation against common exploits.\n\nOn the native level, timing information is commonly queried using the \\texttt{rdtsc} instruction.\nThe x86 architecture allows limiting \\texttt{rdtsc} to kernel space only.\nAny attempt to execute this instruction from user space will then lead to a fault.\nBuilding upon this fact,\nthe operating system can limit the resolution of the timer available to user programs.\nUpon receiving the corresponding fault, the operating system queries the \\ac{TSC} itself,\nreduces the resolution accordingly, and passes the timestamp on to the program.\nNote that this incurs a significant performance overhead due to the necessary context switches.\n\n\\par{\\bf Design and Implementation.}\nSince we are able to change x86 microcode behavior,\nour goal is to implement functionality similar to the\nbrowser mitigation for the native \\texttt{rdtsc} instruction.\nIn addition,\nour solution should be able to reduce the accuracy to a pre-defined value\nwithout incurring unnecessary overhead in the form of context switches.\nTo this end,\nwe intercept the execution of \\texttt{rdtsc} and, before the \\ac{TSC} value is made available to the application,\nset a pre-defined number of lower bits to zero.\nNote that the number of zeroed bits is configurable (in the microcode\nupdate) to provide a trade-off between accuracy and security.\n
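\nThe effect of the modified instruction corresponds to the following x86-level sketch (assuming, purely for illustration, a configuration that clears the twelve lowest bits):\n{\\lstset{language=[x86masm]Assembler}\n\\begin{lstlisting}\nrdtsc               ; EDX:EAX = current TSC value\nand eax, 0FFFFF000h ; lower 12 bits are zeroed in microcode\n\\end{lstlisting}\n}\n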
\n\\par{\\bf Evaluation and Discussion.}\nWhile the default implementation of \\texttt{rdtsc} takes 7 cycles to execute,\nour custom implementation takes a total of 15 cycles to complete.\nThis overhead is due to the switch to microcode \\ac{RAM} and the additional\nlogical \\textsf{AND} operation to clear the lower bits of the \\ac{TSC} value.\nThe \\ac{RTL} representation of our \\texttt{rdtsc}\nimplementation is shown in the appendix in\nListing~\\ref{cucode:listing:rdtsc}.\n\nEven though our solution doubles the execution time, %\nit is far faster than the approach where the kernel needs to trap the raised interrupt.\nAt the same time,\nour security guarantees are comparable to the discussed browser mitigations.\nWhile this raises the bar,\ntiming attacks are still possible using the methods described by\nSchwarz~et\\,al.\\xspace~\\cite{schwarz2017fantastic} and\nKohlbrenner~et\\,al.\\xspace~\\cite{kohlbrenner2016trusted}.\n\n\n\n\n\n\\subsection{Microcode-Assisted Address Sanitizer}\n\\label{cucode:section:case_study:hwasan}\n\\par{\\bf Motivation.}\n\\ac{ASAN}~\\cite{ASAN} is a compile-time instrumentation framework that introduces checks for every memory access in order to uncover both spatial and temporal software vulnerabilities. In particular, temporal faults such as \\textit{use-after-free} bugs present an important class of memory corruption vulnerabilities that have been used to exploit browsers and other software systems~\\cite{van2017dynamics}.\n\\ac{ASAN} tracks program memory state in a so-called \\textit{shadow\nmap} that indicates whether or not a memory address is valid.\nTherefore,\n\\ac{ASAN} inserts new instructions during compilation to perform the\nchecks and also instruments allocators and deallocators.\nIn addition,\n\\ac{ASAN} enforces a quarantine period for memory regions and thus prevents them from being re-used directly.\nHowever,\nthis instrumentation incurs a performance overhead of roughly 100\\%.\n\nTo overcome the performance penalty and reduce the code size,\nthe authors of \\ac{ASAN} also discussed how a hardware-assisted version, dubbed \\ac{HWASAN}, could theoretically be implemented~\\cite{HWASAN}. The basic idea is to introduce a new processor instruction that performs access checks.\nThe general principle of the new instruction is illustrated in Figure~\\ref{ucode:figure:algorithms:hwasan}.\nIt receives two parameters: the pointer to be accessed and the memory access size.\nThe instruction then validates the memory access and its size with the help of the shadow map.\n\n{\\lstset{language=C}\n\\begin{figure}\n\\begin{lstlisting}\nCheckAddressAndCrashIfBad(Addr, kSize) {\n\tShadowAddr = (Addr >> 3) + kOffset;\n\tif (kSize < 8) {\n\t\tShadow = LoadByte(ShadowAddr);\n\t\tif (Shadow && Shadow <= (Addr & 7) + kSize - 1)\n\t\t\tReportBug(Addr);\n\t} else {\n\t\tShadow = LoadNBytes(ShadowAddr, kSize \/ 8);\n\t\tif (Shadow)\n\t\t\tReportBug(Addr);\n\t}\n}\n\\end{lstlisting}\n \\caption{Pseudocode of the \\ac{HWASAN} instruction~\\cite{HWASAN}; \\texttt{kSize} is the size of the memory access and \\texttt{kOffset} is a compile time constant that specifies the location of the shadow map.}\n\\label{ucode:figure:algorithms:hwasan}\n\\end{figure}\n}\n\n\\par{\\bf Design and Implementation.}\nInstead of requiring a hardware change to add the new \\ac{HWASAN} instruction,\nwe design a scheme to implement \\ac{HWASAN} in microcode.\nSimilarly to Figure~\\ref{ucode:figure:algorithms:hwasan},\nwe perform the checks accordingly and raise a fault in case an invalid memory access is detected.\nTo provide a clear separation between application code and instrumentation,\nwe implemented the checking in a single instruction.\nFor practical reasons,\nthe interface should be easy to add to existing toolchains.\n\nIn our implementation,\nwe chose to reuse an existing but unused x86 instruction, in this case the instruction \\texttt{bound}.\n
Since the check requires the address and the size of the memory access,\nwe changed the interface of this instruction in the following way:\nthe first operand indicates the address to be accessed,\nwhile the second operand indicates the access size.\nWe want to emphasize that our microcode instrumentation can be emitted\nwithout changes to an existing x86 assembler using the following syntax:\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{lstlisting}\nbound reg, [size]\n\\end{lstlisting}\n}\n
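\nFor illustration, a guarded four-byte load could then be emitted as follows (a hypothetical instruction sequence; \\texttt{esi} holds the pointer to be checked):\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{lstlisting}\nbound esi, [4]  ; validate the 4-byte access at esi\nmov eax, [esi]  ; the actual, now checked, load\n\\end{lstlisting}\n}\n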
\nSimilarly to \\ac{ASAN},\nour instruction is inserted in front of every memory access during compilation.\nWe also use the same shadow map mechanism and base address,\nhence the instrumentation requires no additional changes.\nHowever,\nthe key difference is the compactness and that no externally visible state is changed.\nIn case the memory access is valid,\nthe instruction behaves as a \\texttt{nop},\nbut if an invalid access is passed, a defined action is taken.\nTo this end, our prototype implementation currently supports three methods of error reporting:\n\\begin{enumerate}\n \\item raising a standard access violation,\n \\item raising the \\texttt{bound} interrupt, and\n \\item calling a predetermined x86 routine.\n\\end{enumerate}\nNote that the first two options rely on the availability of an exception handling mechanism,\nwhile the latter option is self-contained and works even without kernel support.\n\n\\par{\\bf Evaluation and Discussion.}\nWhile the checking algorithm is semantically the same,\nwe observed a performance advantage of our solution.\nThe default \\ac{ASAN} implementation for a (valid) 4-byte load requires 129 cycles to complete,\nwhile our version requires only 106 cycles.\nAnother advantage of our implementation is that no x86 register is changed during its execution:\ninstead of using x86 general-purpose registers, our implementation\nstores temporary values in ephemeral microcode-internal registers.\nThis means the insertion of the instrumentation does not increase the register\npressure and does not cause additional register spills to the stack.\nThis is in contrast to the original \\ac{ASAN} implementation,\nwhich uses two additional x86 registers to hold temporary values.\nThe overhead of additional register spills is not included in\nour benchmark as it is highly dependent on the surrounding code.\nThe \\ac{RTL} representation of our \\ac{HWASAN} implementation\ncan be found in our Github repository~\\cite{microcode:amd_microprograms}.\n\n\n\n\\subsection{Microcoded Instruction Set Randomization}\n\n\\par{\\bf Motivation.}\nIn order to counter so-called \\textit{code-injection} attacks,\na series of works investigated \\ac{ISR}\nschemes~\\cite{sovarel2005s,papadogiannakis2013asist,portokalidis2010fast,hu2006secure,barrantes2003randomized,kc2003countering}\nwith the goal of preventing the correct execution of maliciously injected code.\nTo this end,\nthe instruction encoding is randomized (e.g.,\nusing an \\textsf{XOR} with a pre-defined key) for all or a subset of instructions,\nso that the adversary does not know the semantics of a randomized instruction.\nNote that recently published advanced schemes also aim to mitigate\n\\textit{code-reuse} attacks using strong cryptographic encryption\nalgorithms~\\cite{sinha2017reviving}.\nHowever, most schemes require hardware support,\nwhich prevents their deployment on \\ac{COTS} \\acp{CPU}.\n\n\\par{\\bf Design and Implementation.}\nOur \\ac{ISR} scheme removes the link between the actual x86 operation and its semantics,\nand thus an adversary is unable to infer the meaning of an instruction\nstream even if it is disassembled during a \\ac{JIT}-\\ac{ROP} attack.\nIn order to be robust even when facing code-reuse or \\ac{JIT}-\\ac{ROP} attacks,\nwe assume fine-grained code randomization or software diversification.\n\nOur proof-of-concept implementation supports six different operations:\nmemory load, register move, add,\nleft and right shift,\nand exclusive or.\nEach operation can be freely assigned to any microcoded x86 instruction\nthat allows for one register operand and one memory operand.\nThis assignment effectively binds the executed x86 code to a specific instance of the \\ac{ISR}.\nExecution is only possible\nif the semantics implemented in microcode for each instruction match the ones used when generating the x86 code.\nNote that due to this varying assignment and the variable instruction length of the affected opcodes,\nit is not possible to assemble a \\ac{ROP} chain or shellcode matching all possibilities.\nAdditionally, we support masking of input and output values before they\nare written to or read from potentially attacker-accessible storage,\nincluding system memory and registers.\n\nTo facilitate the translation of existing x86 code to opcodes using the newly introduced semantics of the \\ac{ISR},\nwe implemented a \\textit{transpiler}.\nThis transpiler processes a stream of disassembled x86 instructions and replaces all\noccurrences of supported opcodes with the appropriate opcodes with changed semantics.\nThe selection of the replacement opcode is performed based on the assignment in the corresponding microcode update.\nThe input to the transpiler is thus the source instruction stream and the\nmapping of x86 instructions to semantics as implemented by the \\ac{ISR};\nthe output is a modified instruction stream.\nThis output stream can then be assembled by a standard x86 assembler,\nas no new instructions are introduced.\n
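\nAs an example, assume an \\ac{ISR} instance that assigns the memory-load semantics to the otherwise unused \\texttt{bound} opcode (a purely hypothetical mapping for illustration; the actual assignment is chosen per microcode update). The transpiler would then rewrite the instruction stream as follows:\n{\\lstset{language=[x86masm]Assembler}\n\\begin{lstlisting}\n; original:\nmov eax, [ebx]    ; load\n; transpiled for this ISR instance:\nbound eax, [ebx]  ; decodes to the same load\n\\end{lstlisting}\n}\n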
\n\\par{\\bf Evaluation and Discussion.}\nWe evaluate the performance of our implementation by comparing the runtime\n(measured in cycles according to the test setup described previously)\nof a toy example consisting only of supported opcodes with the corresponding transpiled version.\nOur measurements indicate that our microcoded \\ac{ISR} scheme introduces\nan overhead of 2.5 times on average over a set of 5 different examples,\ncompared to the same code running natively.\nThis overhead is mainly due to replacing non-microcoded instructions (that normally\ntake 1-3 cycles) with microcoded instructions that require at least 7 cycles,\nincluding the additional overhead of switching to microcode \\ac{RAM} execution.\nWe provide one of the test cases in Listing~\\ref{cucode:listing:isr} in the appendix.\nNote that the cumulative performance of instruction streams may vary due to pipelining and parallel execution.\nThis is especially visible if instructions covered by the \\ac{ISR} are mixed with standard x86 instructions.\nAs our toy examples exclusively use transpiled instructions,\nwe arrive at the worst-case overhead.\nSince the \\ac{ISR} can implement more complex semantics such as a multiply-accumulate,\nthe cycle overhead can be reduced with a more advanced transpiler.\nWe want to emphasize that our \\ac{ISR} does not require hardware changes compared\nto previous schemes and thus can be deployed on \\ac{COTS} \\acp{CPU} with a microcode update.\n\n\n\n\\subsection{Microcode-Assisted Instrumentation}\n\n\\par{\\bf Motivation.}\nTraditional binary defenses often suffer from either significant performance overhead or incompleteness.\nThis is typically due to the reliance on dynamic instrumentation or static binary rewriting.\nHowever,\nwith the ability to change the behavior of x86 instructions via a microcode update,\nit is possible to intercept only specific instructions without impacting the performance of unrelated code.\nHence,\nmicrocode-assisted instrumentation combines the minimal performance overhead\nof static binary rewriting with the completeness of dynamic instrumentation solutions.\n\n{\\captionsetup[figure]{skip=5pt}\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{0.65\\linewidth}{!}{\n\t\t\\includegraphics{figures\/instrumentation.png}\n\t}\n\t\\caption{Control flow of an instrumentation.}\n\t\\label{ucode:figure:primitives:instrumentation}\n\\end{figure}\n}\n\n\\par{\\bf Design and Implementation.}\nWe designed a microcode-as\\-sist\\-ed instrumentation scheme that allows the generation of microcode updates that\nintercept a specific instruction; upon execution of this instruction, control is transferred to a specific address.\nThis address contains standard x86 code to perform the instrumentation and finally resume execution.\nThe microcode update can additionally contain custom-tailored filtering,\nso that the x86 handler is only invoked under specific conditions.\nAs the filtering is implemented directly in microcode,\nthe overhead of changing the x86 execution path, which can invalidate\nbranch prediction and caches, is only incurred when needed.\n
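\nThe x86 part of such an instrumentation can be kept minimal. A possible handler skeleton is sketched below (assuming that, as described in the following, the microcode pushes a return address onto the stack before transferring control, so that the handler can resume execution with a plain \\texttt{ret}):\n{\\lstset{language=[x86masm]Assembler}\n\\begin{lstlisting}\nhandler:\n    pushad   ; preserve the x86 register state\n    ; ... arbitrary analysis code ...\n    popad    ; restore the register state\n    ret      ; resume the interrupted x86 code\n\\end{lstlisting}\n}\n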
\n\\par{\\bf Evaluation and Discussion.}\nTo test the viability of the instrumentation,\nwe implemented a proof-of-concept microprogram that instruments \\texttt{shrd} to call\nan x86 handler if a certain constant is detected in the argument register.\nThe control flow is illustrated in Figure~\\ref{ucode:figure:primitives:instrumentation}.\nUpon execution of the instruction,\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {1}}}~control is transferred to the microcode \\ac{RAM}.\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~As a filter,\nwe check if the argument register is equal to a constant.\nIn case the filter does not match,\nthe instruction is executed normally and x86 execution continues after \\texttt{shrd}.\nIn case the filter matches,\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}}~the current instruction pointer\nis pushed onto the stack and the x86 instrumentation part gains control,\ncomparable to a \\texttt{call} instruction in x86.\nOnce our instrumentation gains control, it can perform any number of calculations\nand is not constrained by the size limitations of the microcode \\ac{RAM}.\n~\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {4}}}~Finally,\nthe instrumentation continues the normal execution by returning to the interrupted code.\n\nWe also conducted a performance benchmark to determine the overhead introduced by our instrumentation\nfor the case where the microcoded condition does not hold --- illustrated with~\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~in Figure~\\ref{ucode:figure:primitives:instrumentation}.\nIn this case, the x86 execution should continue as fast as possible\nin order to reduce the overhead for any code that is not to be inspected.\nWe use the \\texttt{shrd} instrumentation for this test and measure the performance according to the described test setup.\nThe original implementation of \\texttt{shrd} executed in 2 cycles,\nwhile our test case took 8 cycles.\nThis overhead is mainly due to the switch to microcode \\ac{RAM}\nand the two triads inserted for the instrumentation check.\nThe microcode \\ac{RTL} of the \\texttt{shrd} instrumentation is available in our Github repository~\\cite{microcode:amd_microprograms}.\n\nWhile the execution time of the single instruction is increased substantially,\nthis overhead is fixed for any semantics the instruction originally implements.\nThis implies that our instrumentation only adds 6 cycles to perform its own check,\nregardless of the original run time of the instruction.\nAdditionally, we do not introduce a conditional x86 branch,\nwhich would further increase the overhead due to potential branch mis-predictions.\nMoreover, our implementation does not use scratch x86 registers and thus does\nnot increase register pressure or cause additional memory accesses.\nFinally, the overhead is only introduced for instructions that are to be inspected;\nthe rest of the execution is not impacted.\nThis is in contrast to existing dynamic instrumentation frameworks,\nsuch as Valgrind~\\cite{nethercote2007valgrind}, PIN~\\cite{luk2005pin}, or DynamoRIO~\\cite{dynamorio},\nwhich increase the execution time for all instructions. For a lightweight instrumentation, the overheads induced by these tools are about 8.3, 2.5, or 5.1 times, respectively~\\cite{luk2005pin}.\n\nOn top of our framework,\nany binary instrumentation relying on intercepting a \\textit{small} number of x86 instructions can be realized.\nNote that a current limitation is that only microcoded instructions can be intercepted; however,\nthis is a limitation of the current reverse engineering progress.\nPrevious work indicated the possibility of intercepting all instructions,\nincluding non-microcoded ones.\n\n\n\n\\subsection{Authenticated Microcode Updates}\n\\label{cucode:section:microcode_update}\n\n\\par{\\bf Motivation.}\nWhile the insufficiently protected microcode update mechanism of\nAMD K8 and K10 processors enabled the research in the first place,\nit simultaneously poses a major security issue:\nan attacker can apply any update of her choosing,\nwhich was demonstrated by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} by developing stealthy microcoded Trojans.\nHowever, as the microcode update mechanism itself is implemented in microcode,\nit is possible to develop a protection mechanism in the form of a microcode update that can provide limited security guarantees.\nWe implement a proof-of-concept that demonstrates the feasibility of such a scheme on the affected \\acp{CPU}.\n\n\\par{\\bf Design and Implementation.}\nIn order to mitigate the risk associated with the current scheme,\na microcode update mechanism is required that only accepts authenticated updates.\nHowever,\ngiven the ephemeral nature of microcode updates,\nthis countermeasure requires either a hardware re-design or a trusted application (e.g.,\na part of \\ac{UEFI} with secure boot) that applies a suitable microcode update early during boot.\nIn particular,\nthis update must then verify each further update attempt using proper cryptographic primitives.\nAt the same time,\ndue to the limited space in the microcode update,\nthe verification has to be small in terms of code size.\nNote that performance is of lesser priority in this case since\nmicrocode updates are typically only performed once per system start.\n\nOur implementation extends the \\texttt{wrmsr} instruction, which is used to start the microcode update (the unchanged x86-level invocation is sketched after the following list), to enforce the following properties for the microcode update:\n\\begin{enumerate}\n \\item The update includes 32 triads, the maximum possible number on the K8 
architecture. The vendor-supplied updates are always padded to this length.\n \\item A \\ac{HMAC} is appended to the update directly after the last triad.\n \\item The \\ac{HMAC} is correct for the full update, including the header. The inclusion of the header in the authenticated part protects the match registers and thus the affected instructions. The key of the \\ac{HMAC} is included in the initial microcode update.\n\\end{enumerate}\n
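\nThe x86-level invocation of an update remains unchanged by our scheme; applying an update still follows the usual pattern (a sketch, assuming the documented Patch Loader \\ac{MSR} of the K8 architecture, \\texttt{0xC0010020}, and a 64-bit virtual address split across \\texttt{EDX:EAX}):\n{\\lstset{language=[x86masm]Assembler}\n\\begin{lstlisting}\nmov ecx, 0C0010020h ; Patch Loader MSR\nmov eax, update_lo  ; lower half of the update's address\nmov edx, update_hi  ; upper half\nwrmsr               ; extended wrmsr now verifies the update\n\\end{lstlisting}\n}\n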
\n\nFor our implementation,\nwe choose the block cipher \\ac{TEA}~\\cite{wheeler1994tea} due to the simplicity of\nits round function, which results in a small code size in the microcode \\ac{RAM}.\nThis is especially important as our current understanding of microcode semantics\nonly allows loading of 16-bit immediate values per microcode operation.\nHence,\nloading a single 64-bit constant requires a total of 8 operations or nearly\nthree triads (note that the whole microcode update is limited to 32 triads only).\nWhile it would be preferable to implement a strong cryptographic algorithm such as \\ac{AES},\nsuch algorithms commonly require S-boxes,\nwhich we cannot support due to code size constraints.\n\n\\par{\\bf Evaluation and Discussion.}\nAs we extend the standard update mechanism with an additional verification of the entire microcode update,\nwe incur a significant performance hit.\nIn our tests, applying a maximum-length update takes 5,377 cycles without the authenticated update mechanism.\nWith our deployed authentication scheme, loading the same update requires 68,525 cycles.\nThis increase is expected due to the added verification.\nAs the update is only applied once during system boot, the performance hit is still negligible.\nFor comparison,\nthe AMD 15h architecture (Bulldozer etc.) requires 753,913 cycles on average for an update~\\cite{tr:2014:chen}.\nThis generation likely uses a public key scheme to verify the update.\n\nDue to code size limitations, we were restricted to the simple and small \\ac{TEA} algorithm and could not implement a public key verification scheme.\nHowever, if the update authentication mechanism were contained in the microcode \\ac{ROM} directly,\nthe code size would not be as restricted.\nWhile our \\ac{ROM} readout indicates a very high usage of the available triads,\nthere are still more padding triads present than would fit into a microcode update.\nIn our prototype implementation, the user can decide which updates to trust,\nor, given the possibility to disassemble the updates,\neven which parts of an update should be applied.\nThis allows for finer control over the hardware than would be possible using only a vendor-accessible signature\nmethod.\nThe \\ac{RTL} of our microcode authentication scheme is available in our Github repository~\\cite{microcode:amd_microprograms}.\n\n\n\n\n\\subsection{$\\mu$Enclave}\n\n\\par{\\bf Motivation.}\nIntel \\ac{SGX}~\\cite{iacr:2016:86} is an instruction set extension that introduces\nthe creation of isolated, trusted execution environments with private memory regions.\nThese so-called \\textit{enclaves} are protected from processes even at\nhigh privilege levels and enable secure remote computation.\nInspired by \\ac{SGX}, we designed and implemented a \\textit{proof-of-concept} enclave functionality,\ndubbed \\textit{$\\mu$Enclave}.\n$\\mu$Enclave can remotely attest that code indeed runs inside the enclave and ensures confidentiality of data.\nWe can thus retrofit basic enclave functionality to older \\acp{CPU} not offering a comparable solution.\nAdditionally, we use this case study to illustrate the isolation property of microcode.\n\n\n\\par{\\bf Design and Implementation.}\nWe leverage the separate microcode \\ac{IDU} to establish an isolated execution environment.\nBy design of the microarchitecture, the other decode units are halted while the microcode \\ac{IDU} is active.\nDue to these isolation properties, we can safely assume that x86 code,\neven when running with kernel-level privileges,\ncannot interfere with the enclave program implemented in microcode at run time.\n\n\n$\\mu$Enclave is based on the authenticated microcode update mechanism,\npresented in Section~\\ref{cucode:section:microcode_update},\nand the following strategy:\n\\begin{enumerate}\n \\item The trust is built upon the symmetric key contained in the first microcode update applied early during boot by \\ac{UEFI}. The entity controlling that key may be a chip manufacturer, a software vendor, or the end user. The entity has to ensure that payload microcode updates contain only benign behavior before signing them. \n\\item The program that is supposed to run in the $\\mu$Enclave is implemented in microcode and embedded in a signed payload microcode update.\n\\item The enclave program may perform arbitrary computations and access virtual memory. The enclave program may write sensitive data into \\ac{RAM}, but it must ensure security properties like authenticity, integrity, and secrecy itself using signing and encryption.\n\\item The enclave program can remotely attest that it indeed runs within the enclave by signing a message with the symmetric enclave key.\n\\end{enumerate}\n
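\nItem 4 admits, for instance, a simple challenge-response pattern (a sketch; the concrete protocol is left to the enclave programmer and must be built from primitives that fit into microcode):\n\\begin{equation*}\n\\textit{verifier} \\rightarrow \\textit{enclave}: \\textit{nonce}, \\qquad \\textit{enclave} \\rightarrow \\textit{verifier}: \\mathrm{MAC}_{k}(\\textit{nonce} \\, || \\, \\textit{result}),\n\\end{equation*}\nwhere $k$ denotes the symmetric enclave key.\n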
\n\\par{\\bf Discussion.}\nIn combination with a challenge-response protocol,\n$\\mu$Enclave enables remote attestation, and additional services of the enclave\ncan be exposed either by augmenting x86 instructions or by adding new \\acp{MSR}.\nA major drawback of $\\mu$Enclave is the restricted code size due to the microcode \\ac{RAM} size.\nThis limitation can be lifted by either implementing a small virtual machine and interpreting signed bytecode from\nmain memory or iteratively streaming signed microcode from main memory to microcode \\ac{RAM} as it executes.\nFor the latter, we are missing the micro-ops that can write to microcode \\ac{RAM}.\nWhile our current implementation does not support either approach,\nthis is not a fundamental limitation of $\\mu$Enclave.\n\nWhen compared to sophisticated trusted execution environments such as Intel \\ac{SGX} or ARM TrustZone,\n$\\mu$Enclave is more cumbersome to use.\nAs the enclave code needs to be written as microcode,\nthe development requires experience with this environment.\nAdditionally, the restricted code size limits the selection of\ncryptographic primitives to those with very small implementations.\nThis results in the use of less secure cryptographic algorithms and thus lower security guarantees.\nFinally, the \\ac{CPU} lacks hardware support and acceleration for cryptographic operations.\nThis means, for example,\nthat the attestation needs to be implemented by the programmers of the enclave code themselves.\nHowever, $\\mu$Enclave can be used on older \\acp{CPU}\nthat do not provide the mentioned vendor-supplied solutions.\nAs such, it is possible to add similar primitives to legacy \\acp{CPU} without requiring a hardware change.\n\n\n\n\\section{Background and Related Work}\n\nIn the following, we first present the technical background information needed to understand the microcode details presented in this paper.\nNote that the background for the individual defenses is covered in their respective subsections in Section~\\ref{cucode:section:case_study}.\nIn addition, we review prior work that demonstrated the capabilities of microcode and discuss how our contributions presented in this paper relate to existing work.\n\n\\subsection{Microcode Background}\n\nThe \\ac{ISA} of a processor defines the available instructions and serves as an interface\nbetween software and hardware~\\cite{book:computer_architecture:stallings}.\nWe refer to the actual hardware implementation of an \\ac{ISA} as \\textit{microarchitecture}.\nThe \\ac{IDU} generates control words based on the currently decoded instruction and is a crucial\ncomponent of the microarchitecture, especially for \\ac{CISC} processors with complex instructions.\nThe \\ac{IDU} of modern x86 processors is implemented as a hybrid of a hardwired decode unit,\nwhich consists of sequential logic,\nand a microcoded decode unit,\nwhich replays precomputed control words named \\textit{microinstructions}.\nThey are stored in a dedicated,\non-chip microcode \\ac{ROM}.\nThe microcode is organized in so-called \\textit{triads} containing three microinstructions and a \\textit{sequence word},\nwhich denotes the next triad to execute.\nIn the microcode address space, triads can only be addressed as a whole, i.e.,\nindividual bytes are not accessible.\nThere are multiple categories of microinstructions, such as arithmetic, logic,\nmemory load\/store, and special microinstructions.\n\nThe microcode of modern x86 processors can be updated at runtime in order to fix errata and add new features\nwithout the need to resort to product recalls~\\cite{patent:2002:patch_device,koppe2017reverse}.\nThese updates are usually applied early during boot by the BIOS\/EFI or the operating system.\nThe process is initiated by loading the microcode update file to\nmain memory and writing the virtual address to a \\ac{MSR}.\nThe CPU then copies the microinstructions of the update to the dedicated on-chip microcode \\ac{RAM}.\nThe update engine also sets the \\textit{match registers} according to the values given in the update file.\nThe match registers contain microcode \\ac{ROM} addresses and act as breakpoints.\nThey redirect control to the triads of the update stored inside the\non-chip \\ac{RAM} once a breakpoint in microcode \\ac{ROM} is hit.\nComplex or rarely used x86 instructions are implemented with\nmicrocode and have a predefined entry point in microcode \\ac{ROM}.\nHence,\nmicrocoded x86 instructions can be intercepted by placing a breakpoint at the corresponding entry point.\nThe triads in the microcode update define the new logic of the x86 instruction.\n\n\n\n\\subsection{Related Work}\n\n\\par{\\bf Microcode and Microcode Updates.}\nPrevious work~\\cite{tr:2014:chen,hawkesmicrocode,link:2004:opteron_exposed} already provided indicators that the microcode\nupdate functionality of several \\acp{CPU} families is not sufficiently protected and might allow for custom updates to be applied.\nKoppe~et\\,al.\\xspace~\\cite{koppe2017reverse} then reverse engineered both the update mechanism of AMD K8 and K10 \\acp{CPU} as\nwell as the encoding of microcode to a point that allowed the creation of custom microcode updates.\nThese updates implemented simple microcode applications such as basic instrumentation and backdoors,\nwhich were applicable to unmodified \\acp{CPU}.\nOther work highlighting the capabilities of microcode was presented by Triulzi~\\cite{Arrigo:2016:Troopers,Arrigo:2015:Troopers},\nbut details of the implementation are not 
publicly available.\n\nIn this paper, we substantially extend these insights and perform further in-depth reverse engineering and analysis of the microcode \\ac{ROM}. By understanding the \\ac{ROM} mapping, we are able to disassemble the microcode of arbitrary x86 instructions, which enables the implementation of sophisticated microprograms, as demonstrated in later sections of this work.\n\n\\par{\\bf Microcoded Shadow Stacks.}\nDavi~et\\,al.\\xspace~\\cite{davi2015hafix} introduced an approach called \\ac{HAFIX} and showed that it is possible to implement a so-called \\textit{shadow stack}~\\cite{dang2015performance} using microcode (in cooperation with researchers from Intel).\nHowever, \\ac{HAFIX} relied on a compile-time component that adds additional instructions to the binary,\nand it was only available on development \\acp{CPU},\nnot on standard consumer hardware.\nIntel also announced the introduction of shadow stacks into end-user \\acp{CPU} with the addition of\n\\ac{CET}~\\cite{intel2016cet}.\nThis technology tracks all calls and returns, which allows checking whether the normal stack and the shadow stack point to\nthe same return address.\nIf a difference is encountered,\nan exception is raised.\nAdditionally, the memory pages containing the shadow stacks are protected using special page table attributes.\nOnce \\acp{CPU} with this technology reach the market,\nshadow stacks will be available in production code with (almost) no additional performance overhead.\n\nIn this paper, we present several designs and proof-of-concept implementations of microcode-assisted system defenses beyond shadow stacks. In addition, our paper and the supplementary material~\\cite{microcode:amd_microprograms} will enable other researchers to build similar microcode-based system defenses and explore this area further.\n\n\n\\section{Discussion and Future Work}\n\\label{cucode:section:discussion}\n\nIn this section, we discuss benefits and challenges of microcode-assisted system defenses and review limitations of microcode in general and of our reverse engineering approach in particular. Furthermore, we present and discuss potential topics for future work such as microcode-assisted shadow stacks, lightweight syscalls, and information isolation. We also shed light on how microcode Trojans can be detected. \n\n\\subsection{Microcode for System Defenses}\n\\label{cucode:section:discussion:defenses}\n\nModern processor microcode and the ability to update microcode can provide useful primitives such as enabling or disabling \\ac{CPU} features at runtime, intercepting instruction decoding or other microarchitectural processes to modify existing behavior, providing a small execution environment isolated from the operating system kernel, and bypassing some boundaries of the x86 \\ac{ISA} to implement new features. We have shown in Section~\\ref{cucode:section:case_study} that these primitives enable the implementation of some defensive schemes, like the customizable accuracy of the built-in x86 timer and $\\mu$Enclave, in the first place. Other defenses such as microcoded \\ac{HWASAN} and \\ac{ISR} benefit from these primitives with regard to performance overhead and complexity. With more knowledge about microcode, additional defenses like opaque shadow stacks and information isolation can be built, as we discuss in Sections~\\ref{cucode:section:discussion:shadowstacks} and~\\ref{cucode:section:syscalls}. 
However, the generality of microcoded primitives suffers due to the limited number of processor models that currently accept custom microcode updates. We argue that the introduction of an open and documented microcode API could benefit system security research and future defensive systems. Such an API has to address several challenges, such as abstracting the underlying changes through processor generations, conflict handling for concurrent updates, and ensuring system stability. In order to avoid microcode malware, processor vendors could introduce an opt-in development mode that allows self-signed updates. Software vendors that want to use such an update in the field, e.g., with processors not in development mode, have to go through a signing process with the \\ac{CPU} vendor.\n\n\\subsection{Limitations}\n\\label{cucode:section:discussion:limitations}\n\nFirst, we review the limitations of microcode in general. The execution speed of certain computations can be sped up by several orders of magnitude by implementing the algorithm in hardware, e.g., in an ASIC or FPGA. Such performance gains do not apply to computations moved from an x86 implementation to microcode, because it essentially remains software. Only the decoding is changed, but the resulting operations performed by the functional units of the processor are similar. Furthermore, the intervention of microcode in microarchitectural processes directly implemented in hardware is limited. Custom microcode updates are thus limited to changing the semantics of x86 instructions within the constraints of the existing internal \\ac{RISC} instruction set. To the best of our knowledge, no mechanism exists to periodically trigger an action in microcode to implement asynchronous monitoring. All actions of custom microcode programs need to be triggered by an external event. However, as it is possible to intercept arbitrary instructions and microcode-internal processes, there are multiple options to implement a basic form of such monitoring.\n\nOur microcode research is further limited due to our incomplete knowledge of microcode and the underlying microarchitecture. The information gained through reverse engineering may lack important details or even contain mistakes. This can only be resolved with access to the official documentation of the features in question. Our microprograms only run on processors of the AMD K8 to K10 families. More modern \\acp{CPU} include effective authentication schemes, such as RSA-based public key cryptography, which would need to be bypassed in order to apply a custom update. The microcode update size of the affected \\acp{CPU} is limited to 32 triads, which prohibits the implementation of large microprograms. We partly bypassed this restriction by introducing x86 callbacks. However, this bypass is not feasible in scenarios with untrusted operating system kernels such as $\\mu$Enclave. More recent \\acp{CPU} use larger microcode updates, which is an indication that their patch \\ac{RAM} is larger and can potentially accommodate more complex updates. Despite the limited code size on the tested \\acp{CPU}, we encountered no upper bound on the execution time of microcode and were able to lock up the \\acp{CPU} by forcing them into an endless loop in microcode. Furthermore, we currently can only hook microcoded x86 instructions. Detailed lists of these microcoded instructions for the K8 architecture can be found in~\\cite{amd:k8vectorpathlist} on pages 273ff. 
The instructions listed as VectorPath are microcoded, while Direct\/DoublePath instructions are decoded in hardware. While there are indications that it is possible to intercept all instructions, our current reverse engineering results do not allow for this. Lastly, the microcode \\ac{ROM} readout contains non-correctable read errors induced by dust particles or irregularities. We are currently working on improving the readout and obtaining an error-free version.\n\n\\subsection{Correctness of Reverse Engineering Results}\n\nAs our results are based on reverse engineering,\nwe cannot guarantee their correctness. Additionally, we are limited to observing the output of the \\ac{CPU}; any further details of the microarchitecture such as scheduling or internal state updates are hidden from us.\nThe observations might constitute unintended behavior of the \\ac{CPU} when used outside of its specifications.\nHowever, we verified our conclusions using available resources where possible.\nA strong indication that our results are indeed correct\nis the fact that we can construct complex microcode programs that behave as expected when executed on the \\ac{CPU}.\nAdditionally, the behavior is consistent between \\acp{CPU} of the AMD K8 and K10 families,\neven though they differ in cache sizes, core counts,\nand feature size, and even in certain implementation details such as the selection of microcoded instructions.\nThere are also parallels between our results and the descriptions found in\nthe patent describing the RISC86 instruction set~\\cite{patent:2002:risc86},\nwhich appears to be used internally by the \\ac{CPU}.\nFor example,\nthe encoding of the condition codes of microcode jumps is the same as stated in the patent.\nWe also found similarities in the encoding of individual opcodes,\nalbeit with differences in the length and number of opcode fields.\nLastly, certain operations,\nmost prominently multiple division variants or steps,\nand internal register functions,\ne.g.,\nthe address of the next x86 instruction to be executed,\nare closely related.\nAfter reconstructing the mapping between logical and physical microcode\naddresses, we could also locate the implementation of specific x86 instructions.\nBy comparing the disassembled microcode with the expected function of the x86 instruction,\nwe determined that we indeed correctly interpret the bit sequences.\nExamples of this are the instruction \\texttt{shrd},\nwhose implementation shows shifts of the argument registers\naccording to the specification, and the \\texttt{wrmsr} opcode,\nwhich starts with a large number of instructions comparing ECX (the register\nnumber to write to) against specific values consistent with the documented interface.\nWe also verified individual microcode instructions on their own by copying the bit sequences into a microcode update,\nexecuting them, and comparing the output.\nWe extended this during the development of the microcode emulator, for which we tested different\ninput states on both the emulator and the \\ac{CPU} to ensure the correctness of our emulation.\n\nA final confirmation of the correctness could be achieved with the cooperation of the \\ac{CPU} vendors.\nThe availability of official specifications and documentation would allow for faster development\nof custom microcode programs and could potentially allow better usage of available \\ac{CPU} features.\nUnfortunately, we did not receive a response from AMD after we contacted them.\n\n\\subsection{Shadow 
Stacks}\n\\label{cucode:section:discussion:shadowstacks}\n\nDuring our research, we considered an opaque shadow stack implementation as a potent use case for a constructive microprogram. However, due to the fact that \\texttt{ret} (near, without immediate) is not implemented in microcode, we cannot instrument this instruction. As this instruction is a key requirement for implementing an opaque shadow stack, we were unable to create a proof-of-concept. As \\ac{CPU} vendors are able to determine the logic of non-microcoded instructions during the design process, they could implement such a shadow stack. Below we discuss the advantages of an opaque shadow stack retrofitted by microcode. \n\nShadow stack defenses implement a second stack that is kept in sync with the system's default stack. Shadow stacks often possess special properties in order to achieve certain security goals. For example, the shadow stack can be placed in memory that cannot be accessed by normal program instructions~\\cite{kuznetsov2014code}, the direction of growth can be inverted to detect illegal stack accesses that yield diverging results~\\cite{salamat2008reverse}, or the shadow stack stores only fixed-size elements to preserve control-flow metadata in the event of a stack-based buffer overflow~\\cite{clang-safestack}. Shadow stacks ensure the integrity of sensitive data on the stack. Therefore, they are often integrated in code-reuse defenses such as CFI~\\cite{dang2015performance,clang-safestack,stackarmor:15,per-input-cfi:2015} in order to protect the backward edge of the control flow. Due to their nature, shadow stack implementations need to extend the logic of instructions operating on the stack such as \\texttt{call} and \\texttt{ret}. Software-based implementations achieve this by adding instructions at all occurrences during compilation~\\cite{kuznetsov2014code,dang2015performance,clang-safestack,gcc-safestack} or with static binary rewriting. In 2015, Davi~et\\,al.\\xspace~\\cite{davi2015hafix} proposed a hardware-assisted shadow stack implementation with low performance overhead. However, the defense still requires the insertion of instructions into the protected application.\n\nShadow stacks can also be implemented in an opaque way. The semantics of existing stack operations are extended rather than relying on the addition of instructions. Benefits of this approach are compatibility with legacy applications, protection of the whole software stack instead of only transformed applications and software libraries, and potential performance gains due to smaller code size as well as improved utilization of the underlying microarchitecture. Depending on the implementation details, stronger security properties can be enforced, e.g., by placing the shadow stack in a memory area not accessible by conventional user-mode instructions. Intel released the specification of \\ac{CET} containing a shadow stack in 2016 and added GCC support in 2017~\\cite{intel2016cet,gcc2017cet}. However, to date no processor with \\ac{CET} support has been released. The \\ac{CET} shadow stack is opaque except for some new management instructions, such as switching the shadow stack. We argue that these management instructions will be microcoded, because they implement complex logic and are not performance critical due to their rare occurrence.\n\n\\subsection{Lightweight Syscalls}\n\\label{cucode:section:syscalls}\n\nThe syscall interface is provided by the processor and the operating system to offer services to user space. 
During its setup, the pointer to the syscall handler in kernel space and the kernel stack pointer are stored in \\acp{MSR}. Once the syscall instruction is invoked, the processor reads the corresponding \\acp{MSR}, switches the stack, and redirects control flow. The syscall handler then invokes the handler for the requested service according to the given syscall number in register \\texttt{eax}. The service handler sanitizes the inputs, checks access privileges (where applicable), and performs the desired action. Ultimately, control is transferred back to user space via the \\texttt{sysret} instruction by restoring the segment registers, again switching the stack, and redirecting control to the stored instruction pointer.\n\nThe performance overhead imposed by syscalls discourages defenses from invoking them frequently. Thus, vital and critical runtime metadata of defenses are kept in user space, where they are exposed to attackers. To thwart potential tampering with the metadata, many different kinds of information hiding schemes were introduced in recent years~\\cite{kuznetsov2014code,ASLR-Guard,dang2015performance,clang-safestack}. However, information hiding has been shown to be ineffective in several attack scenarios~\\cite{gawlik2016enabling,goktacs2016undermining,kollenda2017towards,evans2015missing}. We propose lightweight syscalls implemented in microcode, which are assigned to a dedicated opcode. They leave the segment registers, the x86 instruction pointer, and the stack in place. Once the opcode is executed, the microcode implementation switches to kernel mode, performs a desired action, and switches back to user mode. The action is specific to the needs of the particular defense and could, for example, be a restricted read or write of the defense's metadata in kernel memory. Note that special care must be taken during the implementation of the microcode update not to introduce a privilege escalation vulnerability. With lightweight syscalls, defenses such as \\ac{CFI} and \\ac{CPI} can migrate from information hiding to information isolation enforced by the privilege level of the processor. This can potentially further harden existing defenses against advanced adversaries. Due to the nature of lightweight syscalls, we estimate a low performance overhead. Given our limited knowledge about microcode, we were unfortunately unable to implement and evaluate such an approach. Future work should explore such a microcode-based defense primitive.\n\n\\subsection{Microcode Trojan Detection}\n\\label{cucode:section:discussion:trojandetection}\n\nKoppe~et\\,al.\\xspace have shown that microcode updates can contain malicious behavior~\\cite{koppe2017reverse}. All presented microcode Trojans rely on the same mechanism to gain initial control, namely the interception of x86 instruction decoding. We found that the interception and the additionally executed micro-ops cause a measurable timing difference. In this paper, we showed that a related technique, namely microcode-assisted instrumentation, already exhibits a measurable performance overhead. Our further tests indicate that even if only a single triad---the smallest possible insertion---is inserted into the logic of an instruction, the overhead can already be measured. Given the unavoidable overhead of switching to the microcode \\ac{RAM}, a backdoor inserted via a microcode update is in general detectable.\n\nA detection engine can create a baseline by measuring the timing of all instructions with no microcode update applied. 
Then the engine takes a second measurement with the update under test, compares the results, and reports any timing differences. Note that this method only detects x86 instruction hooks and not necessarily malicious behavior. A malicious update does not always need to insert additional logic into existing instructions; it could, for example, modify the handling of certain, potentially undocumented, \\acp{MSR}.\n\nIn order to also detect such modifications, the microcode update needs to be decoded and, for example, statically analyzed. Program analysis methods would also consider logic that is not inserted at instruction decoding but at other internal processes, such as exception handling on the microarchitectural level. It is also possible to reason about the Trojan's semantics, thus yielding more accurate results. Trojans (or CPU vulnerabilities that can be exploited as backdoors) can also occur in the microcode \\ac{ROM}. The detection of these is more challenging, because their behavior is also contained in the baseline measurement and the \\ac{ROM} contents need to be read out to apply static analyses.\n\nHowever, the same problems that plague traditional malware identification also apply to the detection of microcode Trojans. Even if the whole microcode, both \\ac{ROM} and \\ac{RAM}, is available for analysis, it can be hard to determine whether a certain code fragment is benign or malicious in nature. This problem is amplified due to the limited understanding of microcode internals. But even access to the full documentation on the subject would not be sufficient, as it is possible to use obfuscation to hide the true nature of a code fragment. Lastly, it would be possible to insert a backdoor outside of the microcode engine and directly change the other functional units of the \\ac{CPU}. All in all, detecting microcode Trojans---or hardware backdoors in general---is a difficult problem in the face of powerful adversaries. 
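To make the baseline measurement concrete, the following minimal sketch shows one way such timing data could be gathered from user space. It is our illustration only, assuming a Linux x86-64 system and CPython; \\texttt{cpuid} serves merely as a stand-in for a (potentially hooked) microcoded instruction, and all names are ours. A production-grade engine would additionally pin the measuring thread to one core, serialize the pipeline, and control for frequency scaling.\n\n\\begin{lstlisting}[language=Python,\n\tcaption={Sketch of gathering a timing baseline for hook detection (our illustration, not part of the presented framework).},\n\tlabel=cucode:listing:timing_sketch,captionpos=t]\nimport ctypes, mmap, statistics\n\n# x86-64 machine code: measure the cycle count of one instruction\n# via two rdtsc readings; the delta is returned in rax.\nCODE = bytes.fromhex(\n    '53'        # push rbx (cpuid clobbers rbx)\n    '0f31'      # rdtsc\n    '48c1e220'  # shl rdx, 32\n    '4809d0'    # or rax, rdx\n    '4989c0'    # mov r8, rax\n    '0fa2'      # cpuid -- the instruction under test\n    '0f31'      # rdtsc\n    '48c1e220'  # shl rdx, 32\n    '4809d0'    # or rax, rdx\n    '4c29c0'    # sub rax, r8\n    '5b'        # pop rbx\n    'c3')       # ret\n\nbuf = mmap.mmap(-1, len(CODE),\n                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)\nbuf.write(CODE)\nmeasure = ctypes.CFUNCTYPE(ctypes.c_uint64)(\n    ctypes.addressof(ctypes.c_char.from_buffer(buf)))\n\ndef median_cycles(samples=10001):\n    # the median suppresses interrupts and other outliers\n    return statistics.median(measure() for _ in range(samples))\n\nbaseline = median_cycles()  # taken with no microcode update applied\n# ... apply the update under test, re-measure, and compare ...\n\\end{lstlisting}\n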
%\n\n\\subsection{Supporting Newer and Different Architectures}\n\nWhile we were able to apply our understanding of the K8 architecture\nto programming for the K10 architecture, other\narchitectures are far more difficult to support.\nAs the K10 is a close evolution of the K8,\nthe microcode engine remained largely the same.\nWe mainly noticed differences in the selection of microcoded instructions.\nFor example, the K10 architecture moved the decoding of all \\texttt{ret} instructions to hardware,\nwhile the K8 still decoded some variants of it in microcode.\nMoving more instructions to the hardware decoder usually results\nin better performance, as microcoded decoding takes more time.\nDuring our investigation we also determined that the entry points for microcoded instructions were constant between K8 and K10,\nbut the implementation then branched to different triads during execution.\n\nThe major problem when adapting our findings to new architectures is the strong cryptographic authentication of microcode updates for newer \\acp{CPU}.\nOnly with the ability to execute arbitrary code on the hardware\nwas it possible to gain an understanding of the fundamental encoding of microcode~\\cite{koppe2017reverse}.\nWithout such a possibility, any analysis is restricted to interpreting existing code,\nusually in the form of microcode updates.\nHowever, even the K8 and K10 architectures use a form of scrambling to obfuscate the plain text of the updates.\nAnalysis of more modern updates shows that those are most likely protected by strong\ncryptographic primitives~\\cite{hawkesmicrocode} and thus cannot be analyzed as-is.\nHowever, even if the plain text of such an update is acquired,\nwithout a specification or a system to execute the code it is still challenging to recover the microcode semantics.\nLarge amounts of data and at least some basic information on the intended functionality of the update would be needed to infer any meaning.\nGiven the comparatively small size of microcode updates (usually in the range of hundreds of kilobytes for a single \\ac{CPU}),\nthis would probably not be feasible in practice.\n\nAnother possibility is the analysis of the microcode \\ac{ROM} or engine directly.\nAnalyzing the engine itself would yield a detailed understanding\nof the encoding and available functionality of microcode,\nbut modern small feature sizes and the high complexity of current \\acp{CPU} render this approach difficult.\nWhile reading the \\ac{ROM} directly is not as difficult as analyzing a highly optimized microcode engine,\nit does not immediately yield the plain text microcode.\nAs our reverse engineering process showed,\nwe had to invert multiple permutations of the readout bits in order to obtain the plain text encoding.\nThis process was heavily dependent on both a previous understanding of the\nencoding and the ability to execute chosen microcode on the \\ac{CPU},\nboth of which would not be available.\nMoreover, there would be no way of verifying the findings,\nas the \\ac{CPU} would not accept custom updates without the correct signature.\nWhile the public key of the signature could possibly be extracted from the \\ac{CPU},\nthe required private key would only be available to the vendor.\nModifying a single \\ac{CPU} via chip editing might resolve this issue,\nbut such an approach again requires massive hardware reverse engineering efforts\nand access to specialized and expensive lab equipment able to operate at the small feature size.\nMoreover, such an edit would only 
allow a single \\ac{CPU} to load the custom update;\nany unedited \\ac{CPU} would refuse it.\n\nIn summary, supporting newer \\acp{CPU} is mostly prevented by the strong authentication of microcode updates.\nOnce the authentication is circumvented,\ne.g.,\nby the use of chip editing or side-channel attacks,\nour reverse engineering methods can be applied to infer microcode features.\nHowever, vendor support for custom microcode updates is still the\nmost viable approach to modifying the behavior of \\acp{CPU}.\n\n\\section{Conclusion}\n\nVulnerabilities affecting security and safety have accompanied computer systems since their early days. To cope with attacks, numerous defense strategies have been integrated both in software and in hardware. In particular, hardware-based defenses implemented with microcode provide increased security and performance, as recently shown by the microcode updates released to address \\textsc{Spectre} and \\textsc{Meltdown}. However, little is publicly known about how security mechanisms are implemented in hitherto closed-source microcode.\n\nIn this paper, we demonstrated how modern system security defenses and tools can be implemented in microcode on a modern \\ac{COTS} x86 \\ac{CPU}. Among others, we provided details on how to implement timing-attack mitigations, instruction set randomization, and enclave functionality. To this end, we first uncovered new x86 microcode details through more in-depth hardware reverse engineering and novel strategies for validating the semantics. Finally, we discussed perspectives of customizable microcode and highlighted useful primitives offered by microcode to strengthen the system security defense landscape. \n\nIn order to foster future research in the area of processor microcode and its applications, we publish the source code of the applications described in this paper as well as the framework used for manipulating and generating microcode~\\cite{microcode:amd_microprograms}. We hope this will enable other researchers to extend and build upon our work to design and implement microprograms.\n\n\\section*{Acknowledgement}\nWe thank our shepherd Mathias Payer and the anonymous reviewers for their valuable feedback.\nPart of this work was supported by the European Research\nCouncil (ERC) under the European Union's Horizon 2020\nresearch and innovation programme (ERC Starting Grant No.\n640110 (BASTION) and ERC Advanced Grant No. 695022 (EPoCH)).\nIn addition, this work was partly supported by the German Federal Ministry of Education and Research (BMBF Grant 16KIS0592K HWSec and BMBF Grant 16KIS0820 emproof).\n\n\\balance\n\n\n\n\\section{Introduction}\n\nNew vulnerabilities, design flaws, and attack techniques with devastating consequences for the security and safety of computer systems are announced on a regular basis~\\cite{cvestats}.\nThe underlying faults range from critical memory safety violations~\\cite{cve:stats:memory} or flawed input validation~\\cite{cve:stats:validation} in software to race conditions or side-channel attacks in the underlying hardware~\\cite{intel:errata,amd:errata,Kocher2018spectre,Lipp2018meltdown,projectzeromeltdownspectre,Hund,Doychev:2013:CTS}. 
To cope with erroneous behavior and to reduce the attack surface, various defenses have been developed and integrated in software and hardware over the last decades~\\cite{van2017dynamics,Szekeres:2013:EWM}.\n\nGenerally speaking, defenses implemented in software can be categorized as either compiler-assisted defenses~\\cite{ASAN,aslr,dep,onarlioglu2010g,ASLR-Guard,readactor:sp15,backes2014oxymoron} or binary defenses~\\cite{wartell2012binary,pappas2012smashing,Gawlik,abadi2005control,Isomeron:ndss15}. Note that operating system changes~\\cite{XnR:2014,readactor:sp15,aslr,dep} represent an orthogonal approach serving both compiler-assisted and binary defenses. While compiler-assisted defenses require access to the source code and re-com\\-pi\\-la\\-tion of the software, binary defenses based on \\textit{static binary rewriting}~\\cite{wang2015reassembleable,laurenzano2010pebil,romer1997instrumentation} or \\textit{dynamic instrumentation}~\\cite{dyninst:2011,luk2005pin,nethercote2007valgrind,dynamorio} can also be leveraged for legacy and \\ac{COTS} programs. However, these binary defense strategies have two fundamental drawbacks: on the one hand, binary rewriting relies on the ability to accurately discover and disassemble all executable code in a given binary executable~\\cite{andriesse2016depth}. Any misclassified code or data undermines soundness; the defense then cannot provide specific security guarantees and may cause program termination or incorrect computations. On the other hand, dynamic instrumentation executes unmodified binaries and inserts instrumentation logic with methods such as \\textit{emulation} or \\textit{hooking} during runtime. While this approach does not require the availability of a perfect disassembly, it typically causes significant performance overheads and thus can be prohibitively expensive in practice.\n\nOver the past decades, various defense mechanisms have been implemented in hardware to increase both security and performance. For example, dedicated security features to mitigate exploitation of memory-corruption vulnerabilities include Data Execution Prevention~\\cite{dep}, \\ac{XoM}~\\cite{XnR:2014,readactor:sp15,intelsdm}, \\ac{CFI}~\\cite{abadi2005control,intel2016cet} and Shadow Stacks~\\cite{intel2016cet,dang2015performance}. Moreover, sophisticated trusted computing security features were integrated in \\acp{CPU}~\\cite{anati2013innovative,iacr:2016:86}. \n\nBut not only novel defense mechanisms have been integrated in hardware: similarly to any complex software system, erratic behavior exists in virtually any commercially available \\ac{CPU}~\\cite{intel:errata,amd:errata}. To this end, x86 \\ac{CPU} vendors integrated in-field update features (e.g., to turn off defective parts or patch erroneous behavior). More precisely, the microcode unit, which translates between the user-vis\\-i\\-ble \\ac{CISC} \\ac{ISA} and the hardware-internal \\ac{RISC} \\ac{ISA}, can be updated by means of so-called \\textit{microcode updates}~\\cite{patent:2002:patch_device,koppe2017reverse}.\nSince microcode is proprietary and closed source, and more and more complex security features are integrated into hardware with the help of microcode (e.g., Intel SGX~\\cite{iacr:2016:86}), there is only a limited understanding of its inner workings, and thus we need to trust the \\ac{CPU} vendors that the security mechanisms are implemented correctly. 
In particular, the \\ac{CPU}'s trustworthiness is challenged since even recently published microcode updates have been shown to cause incorrect behavior~\\cite{intelspectreretract} and several attacks on hardware security features have been demonstrated recently~\\cite{Kocher2018spectre,Lipp2018meltdown,projectzeromeltdownspectre,lee2017inferring,206170}. Moreover, since older \\ac{CPU} generations are not updated to defend against sophisticated attacks such as \\textsc{Spectre} or \\textsc{Meltdown}~\\cite{intelspectrepatch}, these \\acp{CPU} remain unprotected against the aforementioned attacks, which are increasingly adopted in real-world exploitation~\\cite{fortinetmdspecmw}.\n\n\\par{\\bf Goals and Contributions.}\nIn this work, we focus on constructive applications of x86 processor microcode for the modern system security landscape. \nOur goal is to shed light on how currently employed defenses may be realized using microcode and thus tack\\-le shortcomings of the opaque nature of x86 \\acp{CPU}.\nBuilding upon our recent work on microcode~\\cite{koppe2017reverse}, we first present novel reverse engineering strategies which ultimately provide a fine-grained understanding of x86 microcode for a \\ac{COTS} AMD K8 \\ac{CPU}. \nOn this basis, we demonstrate multiple constructive applications implemented in microcode which considerably reduce the attack surface and, at the same time, the performance overhead of software-only solutions. \nFinally, we discuss benefits and challenges of customizable microcode for future systems and applications. \n\n\\smallskip \\noindent\nIn summary, our main contributions are: \n\\begin{itemize}\n\n\\item {\\bf Uncovering New x86 Microcode Details.}\nWe present new reverse engineering results that extend and complement the publicly available knowledge of AMD K8 \\ac{CPU} microcode technology, specifically its microcode \\ac{ROM}. \nTo this end, we develop a novel reverse engineering strategy that combines chip-level reverse engineering and image processing with a custom microcode emulator in order to recover and validate microcode semantics in a semi-automatic fashion. \nIn particular, this reverse engineering step enables us to better understand the hitherto opaque microcode by analysis of its \\ac{ROM} and microcode updates. %\n\n\\item {\\bf Perspectives of Customizable Microcode.}\n We analyze the capabilities of microcode and its updates to identify building blocks that can be used to strengthen, extend, or supplement system security defenses. This includes microcode-based methods to enable or disable CPU features at runtime, a method to intercept low-level CPU processes, an isolated execution environment within the microcode engine, and the possibility to extend and modify the x86 \\ac{ISA}.\nWith regard to the trustworthiness of systems, we discuss a method to detect the presence of microcode backdoors and the challenges associated with such detection.\n\n\n\\item {\\bf Implementation of Microcode-Assisted Defenses.}\nWe show how modern system defenses and tools can be implemented with microcode on a \\ac{COTS} AMD x86 \\ac{CPU} using the identified primitives. \nTo this end, we implemented several case studies to demonstrate that timing-attack mitigation, hardware-assisted address sanitization, and instruction set randomization can be realized in microcode. 
\nIn addition, we realize a microcode-assisted hooking framework that allows fast filtering directly in microcode.\nFinally, we show how a secure microcode update mechanism and enclave functionality can be implemented in microcode.\nThe framework used for the deconstruction\nand manipulation of microcode, including the assembler and disassembler, as well as our microcode programs and the microcode emulator are publicly available at \\url{https:\/\/github.com\/RUB-SysSec\/Microcode}~\\cite{microcode:amd_microprograms}.\n\\end{itemize}\n\n\n\n\n\\section{Microcode Reverse Engineering}\n\nA key contribution of our work presented in this paper is to further analyze the \\ac{ROM}\nreadouts provided by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} to gather more details on the\nimplementation of microcode itself and---more importantly---of the microcoded instructions.\nWhile the authors were able to identify operations and triads in the readout,\nthey were unable to reconstruct how they map to logical addresses.\nTherefore,\nthey could not locate and analyze the microcode that implements a \\emph{specific} x86 instruction.\nHowever,\nthese steps are crucial for hooking more advanced x86 instructions, which\nrequires knowledge of the underlying implementation in the microcode \\ac{ROM}.\nThe analysis of existing microcode implementations was essential for\nthe case studies presented in Section~\\ref{cucode:section:case_study}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\linewidth}{!}{\n\t\t\\includegraphics[scale=0.01]{figures\/re_process.png}\n\t}\n\t\\caption{High-level overview of the individual steps of the \\ac{ROM} reverse engineering process.}\n\t\\label{cucode:figure:rom:re_process}\n\\end{figure}\n\nThe key requirement for such an analysis is the ability to locate\nthe corresponding implementation in the microcode \\ac{ROM}.\nWe therefore require a mapping of observable addresses to the physical location in the \\ac{ROM} readout.\nGoing forward,\nwe define two different classes of addresses:\n\\begin{itemize}\n \\item \\emph{logical addresses} are used when the microcode software refers to a specific triad (e.\\,g.\\xspace, in the match registers or jumps);\n\t\\item \\emph{physical addresses} are the addresses assigned to triads in the \\ac{ROM} readouts during analysis.\n\\end{itemize}\nThese addresses are not related to the virtual and physical addresses used when addressing the main memory---what is\ncommonly known as the \\emph{virtual memory layout} of processes.\nAlso note that the address granularity for microcode is one triad;\nthe individual operations forming a triad are not addressable.\n\nThus, it is our goal to reverse engineer the algorithm used to map a given logical address to its corresponding physical\naddress.\nA high-level overview of this process is illustrated in Figure~\\ref{cucode:figure:rom:re_process}.\nWe used the following steps to recover the ordered microcode \\ac{ROM}:\n\\begin{itemize}\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {1}}}~Convert \\ac{SEM} images\n\tof each region to bitstrings with the aid of image recognition software.\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~Reorder\n\tand combine the resulting bitstrings into a list of unordered triads.\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}}~Reconstruct the mapping between logical\n\tand physical microcode addresses as well as reorder the triads according to this mapping.\n\t\\item 
\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {4}}}~Disassemble the resulting triad list into a continuous,\n\tordered stream of instructions.\n\\end{itemize}\nThe first step,\nthe conversion of images to bitstrings,\nwas already performed by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse}, and we used this data as the starting point for our further analysis.\nThe authors had also already combined parts of the readouts into triads.\nWe built upon this and recovered the remaining triads,\nwhich is depicted as step \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~in the figure.\nThe details of this step are described in\nSections~\\ref{cucode:section:re:layout}\nand~\\ref{cucode:section:re:ordering}.\nStep \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}},~the recovery of the mapping algorithm,\nconstituted the majority of our efforts.\nWe outline the approach we used in Section~\\ref{cucode:section:re:approach}\nand provide details of the solutions we developed in the following sections.\nThe mapping was reverse engineered for an AMD K8 processor.\nHowever, our approach is also applicable to the K10 architecture, given the\nsimilarities between the two architectures.\nFor the last step, we extended the disassembler used by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} to include details learned during our own analysis.\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\linewidth}{!}{\n\t\t\\includegraphics{figures\/ucode_Overview.png}\n\t}\n\t\\caption{\\ac{SEM} image of region R1 showing arrays A1 to A4 and the \\ac{SRAM} holding the microcode update. The higher-resolution raw image is available in Appendix~\\ref{cucode:section:appendix:rom}.}\n\t\\label{cucode:figure:rom:region}\n\\end{figure}\n\n\\subsection{Physical Layout}\n\\label{cucode:section:re:layout}\nThe physical storage is composed of three larger regions of \\ac{ROM} (R1 to R3),\nwhich were identified as the area containing the operations,\nand a smaller region (R4) containing the sequence words.\nPrevious work~\\cite{koppe2017reverse} already performed permutations such as inversion and interleaving of bit rows to obtain whole operations in the correct bit order.\nIn addition, the algorithm for constructing triads out of three operations was known.\nThe triads are built by loading a single operation from each of the three regions R1 to R3 and the corresponding\nsequence word from region R4. 
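The following short sketch summarizes this recombination step; the data layout and all names are ours, and the bit-level permutations mentioned above are omitted for brevity.\n\n\\begin{lstlisting}[language=Python,\n\tcaption={Sketch of the triad recombination (our illustration).},\n\tlabel=cucode:listing:triad_sketch,captionpos=t]\nfrom typing import List, NamedTuple\n\nclass Triad(NamedTuple):\n    ops: tuple     # one operation bitstring from each of R1, R2, R3\n    seq_word: str  # sequence word bitstring from R4\n\ndef combine_regions(r1: List[str], r2: List[str],\n                    r3: List[str], r4: List[str]) -> List[Triad]:\n    # operations sharing the same offset in their region form one triad\n    assert len(r1) == len(r2) == len(r3) == len(r4)\n    return [Triad((a, b, c), sw)\n            for a, b, c, sw in zip(r1, r2, r3, r4)]\n\\end{lstlisting}\n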
As the sketch above reflects, the operations belonging to one triad share the same offset relative to the start of their corresponding region.\nThe different subregions of a single \\ac{ROM} region are illustrated in Figure~\\ref{cucode:figure:rom:region}; more technical details are provided in Appendix~\\ref{cucode:section:appendix:rom}.\nWe will use the same naming convention in the following.\n\nThe hardware layout suggested that the triads are organized in four arrays (A1 to A4), with A1,\nA3 and A4 containing data for 1024 triads each and A2,\nwhich is physically smaller than the other arrays,\nfor 768 triads.\nThis organization means that the first triad\nwill use bits extracted from R1:A1,\nR2:A1 and R3:A1 as its operations, while the sequence word is obtained from the bits located in R4:A1.\nAs the regions are no longer relevant after combining the triads,\nthey will be omitted in the following notation.\nEach of the arrays is subdivided into blocks B1 to B4, each containing 256 triads.\nThe exception to this is the array A2:\nwhile the hardware layout suggests the presence of four blocks with a smaller number of triads each,\nwe mapped the contents to three blocks with 256 triads each.\nThis means array A2 contains only 768 triads in contrast to the 1024 triads contained in the other arrays.\n\nWe were also able to locate the microcode\npatch \\ac{RAM},\nwhich is loaded with the microcode updates during runtime.\nThe \\ac{RAM} needs to be placed physically close to the rest of the microcode engine to keep signal paths short;\nhowever, it was previously unknown where exactly it is located.\nUsing new images taken with a \\ac{SEM}, we could classify the area between arrays A2 and A3 as \\ac{SRAM}.\nThe area is marked in Figure~\\ref{cucode:figure:rom:region}.\nWe determined the storage type based on detailed images of the region\nand additional cross-section images.\nBoth showed visual structures specific to \\ac{SRAM}.\nThis location also contains visually different control logic,\nwhich further indicates a type of storage different from the rest of the region.\nA higher-resolution image and additional details are available in Appendix~\\ref{cucode:section:appendix:rom}.\nIt should be noted that the usage of two different classes of storage in this close proximity implies a highly optimized hardware layout.\nThe \\ac{SRAM} marked in the figure contains $32\\times64$ bits,\nwhich is the amount of data needed per region for 32 triads.\nThis corresponds to the maximum update size of 32 triads determined in our experiments.\nDue to the additional complexity of implementing a fast readable and writable memory in hardware,\nthe \\ac{SRAM} occupies roughly the same space as a \\ac{ROM} block with 256 triads.\n\n\\subsection{Physical Ordering}\n\\label{cucode:section:re:ordering}\nAnother insight gained from the available readout was that not only did the three operations forming a triad exhibit data\ndependencies between each other (suggesting that the triads are indeed correctly combined),\nbut in some cases data flow was visible between triads immediately following each other.\nThis means the readout already partially placed related triads close to each other.\nBased on this observation, we retained the triad order and by convention placed all triads after one another with increasing\naddresses.\nThis yielded what we considered a continuous physical memory space with addresses starting at 0 and increasing with each\ntriad to 0xEFF.\nThis is consistent with the observation that the microcode patch \\ac{RAM} starts at the address 
0xF00 for the K8 series of\nprocessors.\n\nOur physical memory space assumed an arbitrary ordering of A1 -- A3 -- A4 -- A2,\nso A1 would contain addresses from 0x0 to 0x3FF,\nA3 from 0x400 to 0x7FF,\nA4 from 0x800 to 0xBFF and A2 from 0xC00 to 0xEFF.\nWe placed A2 last\nbecause it contained fewer triads, which we assumed to be missing at the end of the addressable space.\nIn each array, we ordered the blocks starting from the bottom of the image in Figure~\\ref{cucode:figure:rom:region},\nomitting the missing block B4 in array A2.\nPhysical address 0x0 is thus located in A1:B1 and 0xEFF in A2:B3.\n\n\\subsection{Mapping Recovery Approach}\n\\label{cucode:section:re:approach}\nOur recovery approach infers the mapping from address pairs.\nWe chose this approach because it was infeasible to recover the mapping via hardware analysis:\nthe addressing logic is complex and the connections span multiple layers, each of which would require delayering and subsequent imaging.\nEach address pair maps a logical (microcode) address to a physical address.\nOnce the recovered function correctly produces the physical address for any given logical address in our test set,\nwe can assume that it will be correct for any further translations.\nWe thus needed a sufficiently large collection of address pairs.\nUnfortunately, the microcode updates only provided two usable data points.\n\nTherefore, we developed an approach that (i) executes all \\ac{ROM} triads on the \\ac{CPU} individually and extracts the observable semantics of a given logical address, (ii) emulates each triad we acquired from the physical \\ac{ROM} readout in a custom microcode emulator to extract the semantics for a given physical address, and (iii) correlates the extracted semantics to find matching pairs of physical and logical addresses. Details of this process are described in Section~\\ref{cucode:section:re:emulation}. This resulted in a total of $54$ address pairs.\nThe results were then reviewed in a manual analysis step to find the correct permutation of triads for a given block.\nOnce a permutation candidate for a block is found, it can be verified by checking the correctness of additional triads. 
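To convey the flavor of this search, the following sketch tests candidate block-level permutations against recovered address pairs. It is our simplified illustration: the candidate set shown here anticipates only the permutations R and S introduced in Section~\\ref{cucode:section:permutation}, the table-based permutations are omitted, and all names are ours.\n\n\\begin{lstlisting}[language=Python,\n\tcaption={Sketch of the permutation search over address pairs (our illustration).},\n\tlabel=cucode:listing:perm_sketch,captionpos=t]\n# candidate permutations on a 256-triad block, mapping a logical\n# offset (0..255) to a physical offset\ndef identity(i):   return i\ndef reverse(i):    return 255 - i  # R: reverse counting direction\ndef swap_pairs(i): return i ^ 1    # S: swap two consecutive triads\n\nCANDIDATES = [identity, reverse, swap_pairs,\n              lambda i: swap_pairs(reverse(i))]\n\ndef find_permutation(pairs):\n    # pairs: list of (logical_offset, physical_offset) in one block\n    for perm in CANDIDATES:\n        if all(perm(log) == phys for log, phys in pairs):\n            return perm\n    return None  # block uses a permutation outside this candidate set\n\\end{lstlisting}\n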
Both the process and its results are described in Section~\\ref{cucode:section:permutation}.\n\nIn combination with executing known triads directly from the \\ac{ROM} and extracting their side effects, this emulation allows us to correlate the emulated instructions with their counterparts at known addresses.\n\n\\subsection{Microcode Emulation}\n\\label{cucode:section:re:emulation}\nIn order to gather a sufficiently large number of data points to\nreverse engineer the fine-grained mapping of the \\ac{ROM} addresses,\nwe implemented a microcode emulation engine.\nThis emulation engine is designed to replicate the behavior of the \\ac{CPU} during the execution of a given triad.\nThis means that for any given input,\nthe output of both the physical \\ac{CPU} and our emulation engine should be identical.\nAs our analysis framework is implemented in Python,\nwe also chose this language to implement the emulator.\nThe emulator is designed to interpret the bitstrings extracted from\nthe \\ac{CPU} and first disassembles them using our framework.\nFor each individual micro-op, this yields the operation as well as the source and target operands.\nThe operations themselves are implemented as Python lambdas modifying the indicated registers.\nThis allows for simple extension of the supported instruction list.\nFor each triad the emulator returns a changeset indicating the changed registers and their new values.\nCurrently this is done on a triad-by-triad basis to support our reverse engineering method.\nHowever, by supplying the changed register set as the input state for the next\ntriad, the emulation can be performed for any number of triads in sequence.\nThe emulation engine currently supports all of the identified arithmetic microcode operations.\nAdditionally, we supply a whitelist of instructions that produce no visible effect on the specified registers.\nWhile these instructions have side effects when executed on the \\ac{CPU},\nthey are treated as no-ops,\nbecause only the visible state of the registers is considered in our further analysis.\nThe instructions and their behavior are based on previous reverse engineering results.\nWe ensured that we correctly identified a certain instruction by executing the bitstring of the instruction\nin a microcode update applied to a real \\ac{CPU} and observing the effects on the specified registers with varying inputs.\n\nHowever,\nas the \\ac{ROM} contains operations that implement unknown behavior,\nmost importantly reading and writing internal status registers\nor collecting information on the currently executed instruction,\nwe were unable to accurately emulate all of the triads.\nThe readout itself also introduced both potential bit errors and sections that\ncould not be read due to dust particles or other disturbances in the raw image.\nWe thus opted to only consider triads for further analysis that (i) contain\nonly known instructions and (ii) were not part of an unreadable section.\nThis emulation yielded the behavior of triads with known physical addresses for a given input state.\nThe input state assigned a different value to every x86 and usable microcode register.\nDuring testing we observed that not all microcode registers can be freely assigned to;\nsome trigger erratic \\ac{CPU} behavior leading to crashes or loss of control.\nThus,\nwe had to exclude certain registers from our tests.\nOur input and output state contains the six x86 general-purpose registers (we excluded the stack\nmanipulation registers EBP and ESP) as well as a total of 22 internal 
microcode registers.\n\nTo gather the behavior for known logical addresses,\nwe forced execution of each \\ac{ROM} triad directly on the \\ac{CPU}.\nFor this execution,\nwe chose the same input state that was previously used for the emulation.\nThe input state was set by a sequence of x86 instructions setting the x86 registers to the chosen values.\nThe microcode registers were then set after entering microcode by a sequence\nof micro-ops preceding the jump to the triad address to be tested.\nThe output was gathered by writing out the changed registers as specified\nby our emulator to x86 registers using microcode executed after the tested triad.\nDue to the different values for each register,\nwe could determine which register was used as an input in the tested triad as well as the operation performed on it.\nHowever,\nwe also had to exclude a large number of logical addresses, as those triads led to a\nloss of control or showed behavior that was independent of the given input state.\nIn combination,\nthese two tests yielded a collection of address pairs, each consisting of the\nphysical address of a candidate triad and its logical address.\n\n{\\captionsetup[figure]{skip=5pt}\n\\begin{figure*}[!t]\n\t\\begin{minipage}{2\\columnwidth}\n\t\\centering\n\t\\resizebox{0.80\\linewidth}{!}{\n\t\t\\includegraphics{figures\/rom_mapping.png}\n\t}\n\t\\caption{Translation of logical to physical microcode \\ac{ROM} addresses.}\n\t\\label{cucode:figure:rom:mapping}\n\t\\end{minipage}\n\\end{figure*}\n}\n\n\\subsection{Permutation Algorithms}\n\\label{cucode:section:permutation}\n\nAfter gathering the microcode address pairs, we had to reconstruct the function used to map these onto each other.\nBased on the hardware layout and common hardware design options, we determined a number of different candidate permutation functions.\nAdditionally, we used the data points gathered in the previous step to develop new algorithmic options.\nWe then applied these possible functions in combination to test whether they were used for a specific triad.\n\nVia this empirical testing, we found that the \\ac{ROM} uses the following permutations:\n\\begin{itemize}\n\t\\item T: table-based, 16-triad-wise permutation, illustrated in Table~\\ref{cucode:figure:algorithms:table}\n\t\\item R: reverse counting direction, mapping higher physical address triads to lower logical addresses\n\t\\item S: pairwise swap of two consecutive triads\n\t\\item L: custom table-based, 16-triad-wise permutation for the last block, illustrated in Table~\\ref{cucode:figure:algorithms:table}\n\\end{itemize}\n\nTo determine the combination of permutations used for a specific address pair,\nwe verified the possibilities by calculating the physical address for the given logical address.\nIf the result matches the expected value,\nthe combination is correct.\nThe found combination is then used to calculate the physical addresses for the rest of the data points.\nOnce a mismatch is found, the first approach is repeated to determine the next combination of permutations.\n\nWe determined that the mapping function is constant for 256 triads at a time;\nafter that, the combination of algorithms changes.\nWe also had to account for potentially swapped 256-triad blocks,\nso in case of a mismatch the remaining triad blocks in a region were then considered.\nThis yielded the mapping algorithm for all but the last 256 triads.\nThe last block uses a different mapping algorithm that was reconstructed manually.\nThe detailed mapping of all triad blocks is given in 
Figure~\\ref{cucode:figure:rom:mapping}; Table~\\ref{cucode:figure:algorithms:table} illustrates the permutation algorithms T and L.\n\n\\begin{table}[!htb]\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\nPhysical & Logical (T) & Logical (L)\\\\\n\\hline\n\\texttt{0x00} & \\texttt{0x00} & \\texttt{0x00}\\\\\n\\texttt{0x10} & \\texttt{0x20} & \\texttt{0x10}\\\\\n\\texttt{0x20} & \\texttt{0x40} & \\texttt{0x20}\\\\\n\\texttt{0x30} & \\texttt{0x60} & \\texttt{0x30}\\\\\n\\texttt{0x40} & \\texttt{0x80} & \\texttt{0x40}\\\\\n\\texttt{0x50} & \\texttt{0xA0} & \\texttt{0x50}\\\\\n\\texttt{0x60} & \\texttt{0xC0} & \\texttt{0x60}\\\\\n\\texttt{0x70} & \\texttt{0xE0} & \\texttt{0x70}\\\\\n\\texttt{0x80} & \\texttt{0x10} & \\texttt{0xF0} (RS)\\\\\n\\texttt{0x90} & \\texttt{0x30} & \\texttt{0xE0} (RS)\\\\\n\\texttt{0xA0} & \\texttt{0x50} & \\texttt{0xD0} (RS)\\\\\n\\texttt{0xB0} & \\texttt{0x70} & \\texttt{0xC0} (RS)\\\\\n\\texttt{0xC0} & \\texttt{0x90} & \\texttt{0xB0} (RS)\\\\\n\\texttt{0xD0} & \\texttt{0xB0} & \\texttt{0xA0} (RS)\\\\\n\\texttt{0xE0} & \\texttt{0xD0} & \\texttt{0x90} (RS)\\\\\n\\texttt{0xF0} & \\texttt{0xF0} & \\texttt{0x80} (RS)\\\\\n\\hline\n\\end{tabular}\n \\caption{Translation of addresses for the T and L algorithms. The L algorithm applies the R and S permutations to the higher addresses after the table-based permutation.}\n \\label{cucode:figure:algorithms:table}\n\\end{table}\n\n\\section{Appendix}\n\\subsection{Hardware Details of the Microcode ROM}\n\\label{cucode:section:appendix:rom}\n\nFigure~\\ref{cucode:figure:appendix:rom} shows a \\ac{SEM} image of one of the four \\acp{ROI}.\nAs an extension to previous work by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse},\nwe further delayered the chip to analyze the region\nabove the array A2 --- the second array from the bottom.\nIts repetitive structure looked visually different compared to the other analyzed NOR-\\ac{ROM} arrays.\nA cross section and an additional delayering process revealed a prominent structure in the layer underneath, based on which we identified the area as \\ac{SRAM}.\nCompared to modern \\ac{DRAM},\n\\ac{SRAM} uses more space but can be manufactured in the same process as the adjacent NOR-\\ac{ROM}.\nAdditionally, \\ac{SRAM} does not require periodic refreshes to retain the\nstored data and is often used in microcontrollers and smaller \\acp{SOC}.\nThe usage of two different storage types in this close proximity\nis an indication of a highly optimized in-house design process.\nThe common practice is to use (third-party) \\ac{IP} cores providing a single memory type.\n\nIn the \\ac{ROM}, the microcode triads are ordered with an eight-line interleaving,\nmeaning that in a linear readout the successor of a triad is found seven triads ahead.\nThis ordering was verified by searching for all-zero triads at the end of the array A2.\nAfter encountering the first all-zero triad,\nmore were found at the expected seven-triad distance.\nMoreover, the hardware layout already hints at the usage of this technique.\nNote that these and other techniques used are not implemented for the sake of obfuscating the \\ac{ROM} contents,\nbut instead optimize the storage with regard to die area.\n\n\\begin{figure*}[!htb]\n\t\\begin{minipage}{\\columnwidth}\n\t\\centering\n\t\\resizebox{0.8475\\linewidth}{!}{\n\t\t\\includegraphics{figures\/ROI2_Overview_v2.png}\n\t}\n\t\\caption{\\ac{SEM} image of region R1. The middle part contains the wiring and addressing for the \\ac{ROM} and \\ac{RAM}. 
To reduce the average signal path length, the wiring is placed between the two memory areas.}\n\t\\label{cucode:figure:appendix:rom}\n\t\\end{minipage}\n\\end{figure*}\n\n\\subsection{\\acs{RTL} Representations of Microcode Programs}\n\nIn the following, we list the \\ac{RTL} form of our custom microcode programs described in the paper.\nThe \\ac{RTL} is the same as used by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} and follows the x86 assembly syntax closely.\nWhere appropriate, the differences from the x86 syntax are highlighted in a comment.\nA major difference is the availability of a three-operand mode.\nIn that case, the left-most operand is the destination,\nand the remaining two operands are the sources.\nMore examples can be found in our GitHub repository~\\cite{microcode:amd_microprograms}.\n\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{lstlisting}[\n\tcaption={Implementation of our custom \\texttt{rdtsc} variant with reduced accuracy.\nIt completely replaces the default by intercepting triad 0x318,\nthe entry point for this instruction on the K8 architecture.\nThe \\texttt{dbg} opcode that is used for the read of an internal register sets certain\nflags that are not currently supported with standard annotations in the \\ac{RTL}.\nWe omitted the check of the \\texttt{CR4.TSD} control bit,\nwhich optionally prevents access to this instruction from usermode.\nWhile we were able to partially reconstruct the check from the \\ac{ROM} readout,\nwe encountered a read error in the process and cannot fully and reliably reconstruct the corresponding semantics.\nHowever, this is a limitation of the current state of reverse\nengineering and we are working on improving the readout method.},\n\tlabel=cucode:listing:rdtsc,captionpos=t\n]\n; implement default rdtsc semantics, loading the TSC to edx:eax\n; emit a fixed bitstring; this instruction reads an internal register\ndbg 0001010000101111111000000011111111111111110001101010000000001011 \n; the .q annotation switches to 64-bit operand size\n; srl performs a logical shift right\nsrl.q rdx, t9q, 32\nsrl.q rax, t9q, 0\n\n; load the AND mask\nmov t1d, 0xffff\nsll t1d, 16\nor t1d, 0xff00\n\n; sequence word annotation, continue at the next x86 instruction\n; the following triad is still executed after this annotation\n.sw_complete\n\n; reduce the accuracy of the lower 32 bits of the TSC\n; includes two operations as padding\nand eax, t1d\nadd t2d, 0\nadd t2d, 0\n\\end{lstlisting}\n\n\\captionof{lstlisting}{Assembly code of a test case for the \\ac{ISR}.\nThe original x86 assembly code is shown on the left. 
The right side is the translation performed by our transpiler.\nEach source instruction maps to a single replacement instruction.\nIn this case we used a single instruction, \\texttt{bound}, to implement all semantics, but it is also possible to repurpose multiple different x86 instructions.\nThe correct handler is selected by the lower 16 bits of the displacement given in brackets.\nThe higher 16 bits are used as an optional argument for the selected handler.\nIn the case of memory loads, the argument is the 16-bit offset of the memory location to be loaded, relative to a fixed base address.\nThe argument to the shift handler is the number of bits to shift.\nThe mapping of handler numbers to semantics is trivial in this example: the handler indices are used directly.\nHowever, the full 16 bits are available to identify handlers.\nThis also allows for using multiple different indices for the same handler, further strengthening the \\ac{ISR}.}\n\\label{cucode:listing:isr}\n\\begin{minipage}[t]{.45\\textwidth}\n \\begin{lstlisting}\nmov esi, [msg0]\nmov edi, [msg1]\nmov ecx, [rc]\n\nadd edi, ecx\nadd esi, edi\nmov edi, esi\nadd esi, esi\nshr esi, 8\nadd esi, edi\n \\end{lstlisting}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{.45\\textwidth}\n \\begin{lstlisting}\nbound esi, [eax + 0x1]\nbound edi, [eax + 0x40001]\nbound ecx, [eax + 0x180001]\n\nbound edi, [ecx + 0x4]\nbound esi, [edi + 0x4]\nbound edi, [esi + 0x0]\nbound esi, [esi + 0x4]\nbound esi, [eax + 0x80003]\nbound esi, [edi + 0x4]\n \\end{lstlisting}\n\\end{minipage}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\\label{s:1}\nLet $d$ be a positive integer and let $X_i=\\{X_i(t),\\,t\\in \\R_+\\},\\,i=1,2,\\ldots,d$, be independent real-valued stochastic processes on the same probability space $\\ofp$. Define the \\pE{$d$-parameter} real-valued additive field (additive process)\n\\BQNY\nX(\\mathbf{t})=X(t_1,t_2,\\ldots,t_d)=\\sum_{i=1}^{d}X_i(t_i),\\quad \\mathbf{t}=(t_1,t_2,\\ldots,t_d)\\in \\R_+^d.\n\\EQNY\nThe additive process, which plays a key role in the study of general multiparameter processes, multiparameter potential theory, fractal geometry, and spectral asymptotic theory, has been actively investigated recently. For a glimpse of these results, we refer the reader to \\cite{khoshnevisan2003measuring, chen2003small,karol2008small,khoshnevisan1999brownian,khoshnevisan2002level,khoshnevisan2004additive,khoshnevisan2009harmonic} and the references therein.\n\\newline\nOn the other hand, the calculation of boundary non-crossing probabilities of Gaussian processes is a key topic in both theoretical and applied probability; see, e.g.,\n\\cite{Nov99,MR2009980,BNovikov2005,1103.60040,1079.62047,MR2576883,janssen}. Numerous applications concerned with the evaluation of boundary non-crossing probabilities relate to mathematical finance, risk theory, queuing theory, statistics, and physics, among many other fields. In the literature, most contributions concentrate only on the boundary non-crossing probabilities of one-parameter Gaussian processes (e.g., Brownian motion, Brownian bridge, and fractional Brownian motion); some important results in this field can be found in \\cite{MR2028621,MR2175400,MR2016767,1137.60023,BiHa1,HMS12,MR3531495,MR3500417}. 
For multiparameter Gaussian processes, little is known about boundary non-crossing probabilities (see, e.g., \\cite{Pillow,hashorva2014boundary,Bi2}).\n\\newline\nIn this paper, we concentrate on the calculation of boundary non-crossing probabilities of the additive Wiener field $W$, which is defined by \\begin{equation}\\label{eqhm-1}\nW(\\mathbf{t})= W_1(t_1)+W_2(t_2)+\\ldots+W_d(t_d), \\quad \\mathbf{t} \\in\\R^d_+,\n\\end{equation}\nwhere $W_i=\\{W_i(t), t\\in \\R_+\\}, i=1,2,\\ldots,d,$ are independent Wiener processes defined on the same probability space $\\ofp$. It can easily be checked that $W$ is a Gaussian field with the covariance function given by\n\\begin{equation}\\label{eqhm-3}\n\\EE{W(\\mathbf{s})W(\\mathbf{t})}=\\sum_{i=1}^{d}s_i\\wedge t_i, \\quad\n\\mathbf{s}=(s_1,s_2,\\ldots,s_d), \\;\\mathbf{t}=(t_1,t_2,\\ldots,t_d).\n\\end{equation}\nFor two measurable functions $f,u:\\R_+^d\\rightarrow \\R$ we shall investigate upper and lower bounds for\n$$P_f=\\pk{W(\\mathbf{t})+f(\\mathbf{t})\\leq u(\\mathbf{t}),\\;\\mathbf{t}\\in\\R_+^d}.$$\nIn the following, we take $u$ to be a general measurable function and $f\\neq 0$ to belong to the reproducing kernel Hilbert space (RKHS) of $W$, which is denoted by $\\kHC$. A precise description of $\\kHC$ is given in Section \\ref{sec:pre}, where the inner product $\\langle f,g\\rangle$ and the corresponding norm $\\|f\\|$ for $f,\\,g \\in \\kHC$ are also defined.\n\nAs in \\cite{HMS12}, a direct application of Theorem 1' in \\cite{LiKuelbs}\nshows that for any $f\\in \\kHC $ we have\n\\newcommand{\\Abs}[1]{\\Bigl\\lvert #1 \\Bigr\\rvert}\n \\BQN\\label{eq:00:2b}\n\\Abs{P_f - P_0} &\\le \\frac {1 }{\\sqrt{2 \\pi}} \\norm{ f}.\n\\EQN\nFurther, for any $g\\in\\kHC$ such that $g\\geq f$, we obtain\n\\BQN\\label{eq:WL}\n\\Phi(\\alpha - \\norm{g}) \\le P_{g}\\le P_f \\le \\Phi(\\alpha+ \\norm{f}),\n\\EQN\nwhere $\\Phi$ is the distribution function of an $N(0,1)$ random variable and $\\alpha=\\Phi^{-1}(P_0) $ is a finite constant. When $f\\le 0$, we can always take $g=0$ above, which makes the lower bound of \\eqref{eq:WL} useful if $\\norm{f}$ is large. When $\\norm{f}$ is small, equation \\eqref{eq:00:2b} provides a good bound for the rate of approximation of $P_f$ by $P_0$. Since explicit formulas for computing $P_f$ seem out of reach, the asymptotic performance of the bounds for trend functions $\\gamma f$ with $\\gamma\\to \\infty$ and $\\gamma\\to 0$ is thus worthy of consideration. 
In this paper we shall consider the former case, and we obtain the following:\n\\newline\nIf $f(\\mathbf{t}_0)>0$ for some $\\mathbf{t}_0$ with non-negative components, then\n\\pE{for any $g\\ge f, g\\in \\kHC$ we have\n\\BQN\\label{LD1}\n\\ln P_{\\gamma f} \\ge \\ln\n\\Phi(\\alpha - \\gamma \\norm{g} ) \\ge -(1+o(1))\\frac{\\gamma^2}{2}\\norm{g}^2, \\quad \\gamma \\to \\IF,\n\\EQN\nhence\n\\BQN\\label{LD}\n{\\ln P_{\\gamma f} \\ge -(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\wF}^2, \\quad \\gamma \\to \\IF},\n\\EQN\nwhere $\\wF$ (which exists and is unique) solves the following minimization problem\n\\BQN \\label{OP}\n\\min_{ g \\in { \\kHC},\\, g \\ge f} \\norm{g}= \\norm{\\wF}>0.\n\\EQN\n\\pE{In Section} 2 we shall show that $\\underline{f}$ is the projection of $f$ on a closed convex set of $\\kHC$, and moreover\nwe show that\n\\BQN\\label{LD2}\n{\\ln P_{\\gamma f} \\sim \\ln P_{\\gamma \\underline{f}} \\sim - \\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF}.\n\\EQN\nThe rest of this paper is organized as follows: in Section \\ref{sec:pre} we briefly review the RKHS of the additive Wiener field and construct the solution of the minimization problem \\eqref{OP}. We present our main results in Section \\ref{sec:main}. The proofs of the results in this paper are given in Section \\ref{sec:proofs}, and we conclude with an Appendix.\n\n\\section{Preliminaries}\\label{sec:pre}\nThis section reviews basic results on reproducing kernel Hilbert spaces (RKHS), and we give a representation of the RKHS of the additive Wiener field $W$. We shall also construct $V$ as a closed convex set of $\\kHC$, which finally enables us to prove that $\\underline{f}$ in \\eqref{OP} is the projection of $f$ on $V$. The idea of constructing $V$ comes from a similar result in the one-parameter case (see e.g., \\cite{BiHa1, janssen, Pillow, 1137.60023}).\n\\newline\nIn the remainder of \\pE{this paper} bold letters are reserved for vectors, so we shall write, for instance,\n$\\mathbf{t}=(t_1, t_2, \\ldots, t_d)\\in\\R^d_+$; moreover, $\\lambda_1$ denotes the Lebesgue measure on $\\R_+$,\nwhereas $ds$ means integration with respect to this measure.\n\\subsection{The RKHS of the additive Wiener field}\nRecall that $W_1$ is a one-parameter Wiener process. It is well known (see e.g., \\cite{berlinet})\nthat the RKHS of the Wiener process $W_1$, denoted by $\\kHA$, is characterized as follows\n$$\\kHA=\\Bigl\\{h:\\R_+\\rightarrow \\R\\big|h(t)=\\int_{[0,t]}h'(s)ds,\\quad h'\\in L_2(\\R_+, \\lambda_1) \\Bigr\\}, $$\nwith the inner product $\\langle h,g\\rangle_1=\\int_{\\R_+}h'(s)g'(s)ds$ and the corresponding norm $\\|h\\|_1^2=\\langle h,h\\rangle_1$. 
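\nAs a quick numerical illustration of this characterization (an aside we add for the reader; it is not part of the original derivations and assumes only Python with the \\texttt{numpy} library), one can represent $h$ through its derivative on a truncated grid, evaluate the squared norm $\\|h\\|_1^2=\\int_{\\R_+}h'(s)^2ds$, and check the reproducing property $\\langle h,R(t,\\cdot)\\rangle_1=h(t)$ for the kernel $R(s,t)=s\\wedge t$, whose derivative in $s$ is the indicator of $[0,t]$:\n\\begin{verbatim}\nimport numpy as np\n\n# grid on a truncation [0, T] of R_+; take h(t) = 1 - exp(-t),\n# so that h'(s) = exp(-s) belongs to L_2(R_+, lambda_1)\nT, n = 40.0, 400000\ns = np.linspace(0.0, T, n + 1)\nds = s[1] - s[0]\nhp = np.exp(-s)\n\n# squared RKHS norm: int h'(s)^2 ds = 1\/2 for this h\nprint(np.sum(hp**2) * ds)                    # ~0.5\n\n# reproducing property: <h, R(t,.)>_1 = int_0^t h'(s) ds = h(t)\nt = 3.0\nkp = (s <= t).astype(float)                  # derivative of R(., t)\nprint(np.sum(hp * kp) * ds, 1 - np.exp(-t))  # both ~ h(3)\n\\end{verbatim}\n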
The description of the RKHS for $W_i,\\,i=2,3,\\ldots,d,$ is evidently the same.\nWe now construct the RKHS of the additive Wiener field $W$. For any\n\\BQNY\nh_1(\\mathbf{t})&=&f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d),\\\\\nh_2(\\mathbf{t})&=&g_1(t_1)+g_2(t_2)+\\ldots+g_d(t_d),\n\\EQNY\nwhere $f_i(t_i),\\,g_i(t_i)\\in\\kHA,\\;i=1,2,\\ldots,d,$ we define the inner product\n\\begin{equation}\\label{eqhm-10}\n\\langle h_1,h_2\\rangle=\\sum_{i=1}^{d}\\int_{\\R_+}f_i'(s)g_i'(s)ds.\n\\end{equation}\n\\begin{rem}\nBy Lemma \\ref{lemA1} in the Appendix, the representation $h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d)$ is unique; hence the above inner product is well defined.\n\\end{rem}\nNext, in view of Lemma \\ref{lemA2} in the Appendix we have the following\n\\begin{lem}\nThe RKHS of the additive Wiener field $W$ is given by\n\\begin{equation}\n\\kHC=\\Bigl\\{h:\\R_+^d\\rightarrow \\R\\big|h(\\mathbf{t})=\\sum_{i=1}^{d}h_i(t_i),\\; \\text{where}\\; h_i\\in \\kHA, i=1,2,\\ldots,d \\Bigr\\}\n\\end{equation}\nequipped with the norm $\\norm{h}^2=\\langle h,h\\rangle.$\n\\end{lem}\nFor notational simplicity, in the following we shall use the same notation $\\langle \\cdot,\\cdot\\rangle$ and $\\norm{\\cdot}$ to denote the inner product and norm, respectively, on the spaces $\\kHA$ and $\\kHC$.\n\n\\subsection{The solution of the minimization problem}\nIn this subsection we solve the minimization problem \\eqref{OP}. For any $h\\in\\kHA$ it has been shown (see \\cite{1137.60023}) that\nthe smallest concave majorant of $h$ solves\n\\BQNY\n\\min_{ g \\in { \\kHA},\\, g \\ge h} \\norm{g}= \\norm{\\underline{h}}>0.\n\\EQNY\nMoreover, as shown in \\cite{janssen},\n the smallest concave majorant of $h$, which we denote by $\\underline{h}$,\n can be written analytically as the unique projection of $h$ on the closed convex set\n$$V_1=\\{h\\in \\kHA \\big|\\,h'(s) \\; \\text{is a non-increasing function} \\},$$\ni.e., $\\underline{h}= Pr_{V_1}h$. Here we write $Pr_{A}h$ for the projection of $h$ on some closed set $A$, also for\nother Hilbert spaces considered below. Further, define\n$$\\widetilde{V}_1=\\{h\\in \\kHA \\big|\\,\\langle h,f\\rE \\leq 0 \\;\\text{for any}\\; f\\in V_1\\} $$\nto be the polar cone of $V_1$. Then the following holds.\n\\begin{lem}\\label{lemma 2.2}\n\\cite{hashorva2014boundary} With the above notation and definitions we have\n\\begin{itemize}\n\\item[(i)] If $h\\in V_1$, then $h\\geq 0$.\n\\item[(ii)] If $h\\in \\widetilde{V}_1$, then $h\\leq 0$.\n\\item[(iii)] We have $\\langle Pr_{V_1}h, Pr_{\\widetilde{V}_1}h\\rangle=0$ and further\n\\begin{equation}\nh=Pr_{V_1}h+Pr_{\\widetilde{V}_1}h.\n\\end{equation}\n\\item[(iv)] If $h=h_1+h_2$, $h_1\\in V_1$, $h_2\\in \\widetilde{V}_1$ and $\\langle h_1, h_2\\rangle=0$, then $h_1=Pr_{V_1}h$ and $h_2=Pr_{\\widetilde{V}_1}h$.\n\\item[(v)] The unique solution of the minimization problem $\\min_{g\\geq h, g\\in \\kHA }\\norm{g}$ is $\\underline{h}=Pr_{V_1}h$.\n\\end{itemize}\n\\end{lem}\n
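\nStatement (v) has a simple computational counterpart: after discretizing $h$, the projection $Pr_{V_1}h$, i.e.\\ the smallest concave majorant, is the upper convex hull of the graph of $h$. The following sketch (an illustrative addition, assuming Python with \\texttt{numpy}; it is not part of the original material) computes it by a standard hull sweep that keeps only vertices with non-increasing slopes:\n\\begin{verbatim}\nimport numpy as np\n\ndef least_concave_majorant(t, h):\n    # upper-convex-hull sweep over the points (t[i], h[i]);\n    # a middle vertex below the chord of its neighbours is dropped\n    hull = [0]\n    for i in range(1, len(t)):\n        hull.append(i)\n        while len(hull) >= 3:\n            i0, i1, i2 = hull[-3], hull[-2], hull[-1]\n            s01 = (h[i1] - h[i0]) \/ (t[i1] - t[i0])\n            s12 = (h[i2] - h[i1]) \/ (t[i2] - t[i1])\n            if s01 < s12:\n                hull.pop(-2)\n            else:\n                break\n    return np.interp(t, t[hull], h[hull])\n\nt = np.linspace(0.0, 1.0, 401)\nh = t * np.sin(3 * np.pi * t)            # a sample trend with h(0) = 0\nh_maj = least_concave_majorant(t, h)\nassert np.all(h_maj >= h - 1e-12)        # it majorizes h and is concave\n\\end{verbatim}\n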
Since we are going to work with functions $f$ in $\\kHC$, we need to consider the projection of such $f$ on a particular closed convex set.\nIn the following we shall write $f=f_1+f_2+\\ldots+f_d$ meaning that $f(\\vk{t})=f_1(t_1)+ f_2(t_2)+\\ldots+f_d(t_d) $ where $f_1,f_2,\\ldots,f_d \\in \\kHA$. Note in passing that this decomposition is unique for any $f\\in \\kHC$.\nDefine the closed convex set\n$$V_{2}=\\{h=h_1+h_2+\\ldots+h_d\\in \\kHC \\big| h_1,h_2,\\ldots,h_d \\in V_1\\}$$\nand let $\\widetilde{V_{2}}$ be the polar cone of $V_{2}$ given by\n$$\\widetilde{V_{2}}=\\{h \\in \\kHC \\big|\\langle h, v\\rEE \\leq 0 \\;\\text{for any}\\; v\\in V_{2}\\},$$\nwith inner product from \\eqref{eqhm-10}.\nAnalogous to Lemma \\ref{lemma 2.2} we have\n\\begin{lem}\\label{lemma 3.2}\nFor any $h=h_1+h_2+\\ldots+h_d\\in \\kHC$, we have\n\\begin{itemize}\n\\item[(i)] If $h\\in V_2$, then $h_i\\geq 0,i=1,2,\\ldots,d$.\n\\item[(ii)] If $h\\in \\widetilde{V}_2$, then $h_i\\leq 0,i=1,2,\\ldots,d$.\n\\item[(iii)] We have $\\langle Pr_{V_2}h, Pr_{\\widetilde{V}_2}h\\rangle=0$ and further\n\\begin{equation}\\label{eqhm-6}\nh=Pr_{V_2}h+Pr_{\\widetilde{V}_2}h.\n\\end{equation}\n\\item[(iv)] If $h=h_1+h_2$, $h_1\\in V_2$, $h_2\\in \\widetilde{V}_2$ and $\\langle h_1, h_2\\rangle=0$, then $h_1=Pr_{V_2}h$ and $h_2=Pr_{\\widetilde{V}_2}h$.\n\\item[(v)] The unique solution of the minimization problem $\\min_{g\\geq h, g\\in \\kHC }\\norm{g}$ is\n\\begin{equation}\\label{eqhm-7}\n\\underline{h}=Pr_{V_2}h=Pr_{V_1}h_1+ Pr_{V_1}h_2+\\ldots+Pr_{V_1}h_d.\n\\end{equation}\n\\end{itemize}\n\\end{lem}\n\n\\section{Main Result}\\label{sec:main}\nConsider two measurable $d$-parameter functions $f,u:\\R_+^d\\rightarrow \\R$. Suppose that $f(\\mathbf{0})=0$ and $f\\in \\kHC$.\nHence we can write\n$$f(\\mathbf{t})=\\sum_{i=1}^{d}f_i(t_i),\\quad f_i(t_i)\\in \\kHA,\\,i=1,2,\\ldots,d,$$\nwhere we also suppose $f_i(0)=0,i=1,2,\\ldots,d,$ in the above decomposition. Recall their representations $f_i(t_i)=\\int_{[0,t_i]}f'_i(s)ds,\\quad f'_i\\in L_2(\\R_+, \\lambda_1), i=1,2,\\ldots,d.$ We shall estimate the boundary non-crossing probability\n\\BQNY\n P_f=\\pk{W(\\mathbf{t})+f(\\mathbf{t})\\leq u(\\mathbf{t}),\\;\\mathbf{t}\\in\\R_+^d}.\n \\EQNY\n In the following we set $\\underline{f_i}= Pr_{V_1}f_i,i=1,2,\\ldots,d,$ and $\\underline{f}= Pr_{V_2}f$.\nWe next state our main result:\n\n\\begin{thm} \\label{Thn1} Let the following conditions hold:\n\\BQN\\label{conA1}\\lim_{t_i \\rightarrow\\infty}u(0,\\ldots,t_i,0,\\ldots,0)\\underline{f_{i}'}(t_i)=0,\\;i=1,2,\\ldots,d.\n\\EQN\nThen we have\n\\begin{equation*}\\begin{gathered}\nP_f\\leq P_{ f-\\underline{f} }\n\\exp\\biggl (- \\sum_{i=1}^{d}\\int_{\\R_+}u(0,\\ldots,t_i,0,\\ldots,0)d \\underline{f_{i}'}(t_i)\n-\\frac12\\|\\underline{f}\\nOO ^2\\biggr).\n\\end{gathered}\n\\end{equation*}\n\\end{thm}\n\n\\begin{rem} Note that $f$ starts from zero; therefore $f$ cannot be constant unless $f\\equiv 0$, but this case is trivial.\n\\end{rem}\n\\begin{rem} Condition \\eqref{conA1} of the theorem means that asymptotically the components of the shift and their derivatives are negligible in comparison with the function $u$.\n\\end{rem}\nUsing Theorem \\ref{Thn1} we can obtain an asymptotic property of $P_{\\gamma f}$. In fact, if $u(\\mathbf{t})$ is bounded above, then we have the following result.\n\\BK\\label{korr}\nIf $f\\in\\kHC$ is such that $f(\\mathbf{t}_0)>0$ for some $\\mathbf{t}_0$, then\n\\BQNY\n{\\ln P_{\\gamma f} \\sim \\ln P_{\\gamma \\underline{f}} \\sim - \\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF}.\n\\EQNY\n\\EK\n\n\\section{Proofs}\\label{sec:proofs}\n\\prooflem{lemma 2.2} For $h\\in V_1$, the derivative $h'$ is non-increasing; being also square integrable, it cannot stay below a negative level, hence \\pE{$h'$ } is non-negative. Since $h(0)=0,$ it follows that $h(u)\\geq 0$ for all $u$. 
The proof of statements (ii) to (v) can be found in \\cite{hashorva2014boundary}; we do not repeat it here.\n\\QED\n\n\\prooflem{lemma 3.2}\n(i) If $h\\in V_2$, from the definition of $V_2$ we obtain $h_1,h_2,\\ldots,h_d \\in V_1$. Thus $h_i\\geq 0,i=1,2,\\ldots,d,$ follows directly from (i) in Lemma \\ref{lemma 2.2}.\n\\newline\n(ii) If $h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d)\\in\\widetilde{V}_2,$ then $h_i(t_i)\\in\\kHA.$ For any $f_i(t_i)\\in V_1,$ let\n\\BQNY\nv(\\mathbf{t})=f_i(t_i)\\in V_2.\n\\EQNY From the definition of $\\widetilde{V}_2$, we obtain\n$$\\langle h,v\\rangle=\\langle h_i,f_i\\rangle\\leq 0.$$\nTherefore $h_i\\in\\widetilde{V}_1,$ and the result follows from (ii) in Lemma \\ref{lemma 2.2}.\n\\newline\nThe proofs of statements $(iii)$ and $(iv)$ are similar to those of $(iii)$ and $(iv)$ in Lemma \\ref{lemma 2.2}, and can be obtained immediately from \\cite{janssen}.\n\\newline\n(v) For any $h(\\mathbf{t})\\in\\kHC,$ let $g(\\mathbf{t})\\in\\kHC$ be such that $g\\geq h$; putting $t_j=0$ for $j\\neq i$ we then have $g_i\\geq h_i,i=1,2,\\ldots,d,$ where\n\\BQNY\nh&=&h_1+h_2+\\ldots+h_d,\\\\\ng&=&g_1+g_2+\\ldots+g_d.\n\\EQNY\nSince the squared norm decomposes componentwise and the constraints decouple, the minimization problem satisfies\n\\BQNY\n\\min_{g\\geq h, g\\in \\kHC }\\norm{g}^2&=&\\min_{g\\geq h, g\\in \\kHC}(\\norm{g_1}^2+\\norm{g_2}^2+\\ldots+\\norm{g_d}^2)\\\\\n&=&\\sum_{i=1}^{d}\\min_{g_i\\geq h_i,g_i\\in\\kHA}\\norm{g_i}^2\\\\\n&=&\\norm{\\underline{h_1}}^2+\\norm{\\underline{h_2}}^2+\\ldots+\\norm{\\underline{h_d}}^2.\n\\EQNY\nThe minimum above is attained if and only if\n\\begin{equation}\n\\underline{h}=Pr_{V_2}h=Pr_{V_1}h_1+ Pr_{V_1}h_2+\\ldots+Pr_{V_1}h_d,\n\\end{equation}\ncompleting the proof.\n\\QED\n\n\\prooftheo{Thn1} Denote by $\\widetilde{P}$ a probability measure that is defined via its Radon-Nikodym derivative\n\\begin{equation*}\\begin{gathered} \\frac{dP}{d\\widetilde{P}}=\\prod_{i=1}^{d}\\exp\\Big(-\\frac12\\|f_i\\nO ^2+\\int_{\\R_+}f_i'(t_i)dW_i^0(t_i)\\Big).\n\\end{gathered}\n\\end{equation*}\nAccording to the Cameron-Martin-Girsanov theorem, $W_i^0(t)=W_i(t) +\\int_{[0,t]}f_i'(s)ds,\\;i=1,2,\\ldots,d,$ are \\pE{independent} Wiener processes. 
Denote\n$1_u\{X\}=1\{X(\mathbf{t})\leq u(\mathbf{t}),\;\mathbf{t}\in\R_+^d\}$ and $$W^0(\mathbf{t})=W_1^0(t_1)+ W_2^0(t_2)+\ldots+W_d^0(t_d).$$\nNote that $ \norm{f}^2= \norm{f_1}^2+\norm{f_2}^2+\ldots+\norm{f_d}^2$, \pE{hence\nusing further} \eqref{eqhm-6} and \eqref{eqhm-7} we obtain\n\BQNY\n\lefteqn{P_f}\\\n &=&\EE{ 1_u\Big(\sum_{i=1}^{d}(W_i(t_i)+f_i(t_i))\Big)}\\\n&=&\mathbb{E}_{\widetilde{P}}\Biggl( \frac{dP}{d\widetilde{P}}1_u\Big(W^0(\mathbf{t})\Big)\Biggr)\\\n&=&\n\exp\Big(-\frac12 \norm{f}^2 \Big) \EE{ \exp\Big(\sum_{i=1}^{d}\int_{\R_+}f_i'(t_i)dW_i^0(t_i)\n\Big)1_u\Big(W^0(\mathbf{t})\Big)}\\\n&=&\exp\Big(-\frac12 \norm{\underline{f}}^2 \Big)\\\n&&\times \mathbb{E} \Biggl\{\prod_{i=1}^{d}\exp\Big(-\frac12\|Pr_{\widetilde{V}_1}{f_i} \nO ^2\n+\int_{\R_+} Pr_{\widetilde{V}_1}f_i'(t_i)dW_i^0(t_i)\Big)\n\times \exp\Big(\sum_{i=1}^{d}\int_{\R_+ } \underline{f_i}'(t_i)dW_i^0( {t_i})\Big)1_u\Big(W^0(\mathbf{t})\Big)\Biggr\}.\n\EQNY\nIn order to re-write $\int_{\R_+ } \underline{f_1}'(t_1)dW_1^0( {t_1})$, we mention that in this integral $dW_1^0(t_1)=d_1(W^0(t_1,0,\ldots,0))$; therefore, on the indicator $1_u\{\sum_{i=1}^{d} W_i^0(t_i)\}=1_u\{W^0(\mathbf{t})\}$,\nunder the conditions of the theorem and using Lemma \ref{lemA3} in the Appendix we have the relations\n\begin{equation}\begin{gathered}\label{eq3.2n}\n\int_{\R_+}\underline{f_1}' (t_1)dW_1^0( {t_1})=\lim_{n\rightarrow \infty}\int_{[0,n]}\underline{f_1}' (t_1)dW_1^0( {t_1})\\\n=\lim_{n\rightarrow \infty}\Big(\underline{f_1}' (n)W^0(n,0,\ldots,0)+\int_{[0,n]}W^0(t_1,0,\ldots,0)d(-\underline{f_1}' )(t_1)\Big).\n\end{gathered}\n\end{equation}\nSimilarly, for any $i=2,3,\ldots,d$ we have\n\begin{equation}\label{eq3.3n}\int_{\R_+ }\underline{f_i}'(t_i)dW_i^0(t_i)\n=\lim_{n\rightarrow \infty}\Big(\underline{f_i}' (n)W^0(0,\ldots,n,0,\ldots,0)+\int_{[0,n]}W^0(0,\ldots,t_i,0,\ldots,0)d(-\underline{f_i}' )(t_i)\Big).\n\end{equation}\n\nCombining \eqref{eq3.2n}--\eqref{eq3.3n} and using condition \eqref{conA1}, we get that on the same indicator\n\n\begin{equation}\n\begin{gathered}\label{3.6n}\sum_{i=1}^{d}\int_{\R_+ } \underline{f_i}'(t_i)dW_i^0( {t_i})\leq \lim_{n\rightarrow \infty}\Big(\sum_{i=1}^{d}\underline{f_i}' (n)W^0(0,\ldots,n,0,\ldots,0)+\sum_{i=1}^{d}\int_{[0,n]}W^0(0,\ldots,t_i,0,\ldots,0)d(-\underline{f_i}' )(t_i)\Big)\\\n\leq - \sum_{i=1}^{d}\int_{\R_+}u(0,\ldots,t_i,0,\ldots,0)d \underline{f_{i}'}(t_i).\n\end{gathered}\n\end{equation}\nOn the other hand, we have\n\BQN\label{3.7n}\nP_{f-\underline{f}}&=&\mathbb{E} \Biggl\{\prod_{i=1}^{d}\exp\Big(-\frac12\|f_i-\underline{f_i} \nO ^2\n+\int_{\R_+} (f_i-\underline{f_i})'(t)dW_i^0(t)\Big)\n1_u\Big(W^0(\mathbf{t})\Big)\Biggr\}\\\n\nonumber&=&\mathbb{E} \Biggl\{\prod_{i=1}^{d}\exp\Big(-\frac12\|Pr_{\widetilde{V}_1}{f_i} \nO ^2\n+\int_{\R_+} Pr_{\widetilde{V}_1}f_i'(t)dW_i^0(t)\Big)\n1_u\Big(W^0(\mathbf{t})\Big)\Biggr\}.\n\EQN\nFrom \eqref{3.6n} and \eqref{3.7n}, we conclude that\n\begin{equation*}\begin{gathered}\nP_f\leq P_{ f-\underline{f} }\n\exp\biggl (- \sum_{i=1}^{d}\int_{\R_+}u(0,\ldots,t_i,0,\ldots,0)d \underline{f_{i}'}(t_i)\n-\frac12\|\underline{f}\nOO ^2\biggr).\n\end{gathered}\n\end{equation*}\n\QED\n\n\proofkorr{korr} From \eqref{LD1} we obtain\n\BQNY\n\ln P_{\gamma f} \geq -(1+o(1))\inf_{g\geq 
f}\\frac{\\gamma^2}{2}\\norm{g}^2=-(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF.\n\\EQNY\nOn the other hand, from theorem \\ref{Thn1} we obtain\n\\BQNY\nP_{\\gamma f} \\leq P_{\\gamma(f-\\underline{f})}\\exp(-(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2).\n\\EQNY\nSince $f(\\mathbf{t}_0)>0$, then $\\lim_{\\gamma\\to\\infty}P_{\\gamma(f-\\underline{f})}=constant>0$. Hence as $\\gamma\\to\\infty,$\n\\BQNY\n\\ln P_{\\gamma f} \\leq -(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2,\n\\EQNY\nand the claim follows.\n\\QED\n\n\\section {Appendix}\\label{sec:appendix}\n\\begin{lem}\\label{lemA1} If the function $h:\\R_+^d\\rightarrow \\R$ admits the representation\n\\begin{equation}\\label{unique}\n h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d),\n\\end{equation}\n where $h_i\\in \\kHA, i=1,\\ldots,d, $ then \\pE{the} representation \\eqref{unique} is unique.\n\\end{lem}\n\\begin{proof}\nIf the function $h:\\R_+^d\\rightarrow \\R$ admits the representation\n\\begin{equation}\\label{eqhm-2}\n h(\\mathbf{t})=\\sum_{i=1}^{d}f_i(t_i)=\\sum_{i=1}^{d}g_i(t_i),\n\\end{equation}\n where $f_i,g_i\\in \\kHA, i=1,2,\\ldots,d.$\nFor any $i=1,2,\\ldots,d,$ we put $t_j=0$ for $j\\neq i$, and note that $f_j(0)=g_j(0)=0,$ then we obtain $f_i=g_i,i=1,2,\\ldots,d.$ Hence the representation \\eqref{unique} is unique.\n\\end{proof}\nNoting that the convariance function of $W_i$ is $s_i\\wedge t_i$, and the convariance function of processes $W(\\mathbf{t})=W_1(t_1)+W_2(t_2)+\\ldots+W_d(t_d)$ is given by\n\\begin{equation*}\nR(\\mathbf{s},\\mathbf{t}):=\\EE{W(\\mathbf{s})W(\\mathbf{t})}=\\sum_{i=1}^{d}s_i\\wedge t_i, \\quad\n\\mathbf{s}=(s_1,s_2,\\ldots,s_d), \\;\\mathbf{t}=(t_1,t_2,\\ldots,t_d).\n\\end{equation*}\nNext, we will identify the RKHS corresponding to a sum of $d$ covariances.\nSuppose now $R_i,i=1,2,\\ldots,d$ are $d$ covariances of Gaussian processes, the corresponding RKHS are\n$\\mathbb{K}_i,i=1,2,\\ldots,d.$ We suppose also $\\norm{\\cdot}_i$ the inner product of RKHS $\\mathbb{K}_i,i=1,2,\\ldots,d.$\nThe following is a well-known lemma and we refer the reader to \\cite{aronszajn1950theory}\nfor its proof.\n\\begin{lem}\\label{lemA2}\nThe RHKS of Gaussian processes which with covariances $R=R_1+R_2+\\ldots+R_d$ is then given by the Hilbert space $\\mathbb{K}$ consists of all functions $f(\\mathbf(t))=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d),$ with $f_i(t_i)\\in\\mathbb{K}_i,i=1,2,\\ldots,d,$ and the norm is given by\n\\BQNY\n\\norm{f}=\\inf(\\norm{f_1}_1+\\norm{f_2}_2+\\ldots+\\norm{f_d}_d),\n\\EQNY\nwhere the infimum taken for all the decomposition $f(\\mathbf{t})=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d),\\;g(\\mathbf{t})=g_1(t_1)+g_2(t_2)+\\ldots+g_d(t_d)$ with $f_i(t_i),\\;g_i(t_i)\\in\\mathbb{K}_i,i=1,2,\\ldots,d.$ Furthermore, if for any $f\\in \\mathbb{K}$, the decomposition $f(\\mathbf{t})=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d)$ is unique, then the inner product of $\\mathbb{K}$ is\n\\BQNY\n\\langle f,g \\rangle=\\langle f_1,g_1 \\rangle+\\langle f_2,g_2 \\rangle+\\ldots+\\langle f_d,g_d \\rangle.\n\\EQNY\n\\end{lem}\nAlso if we define the plus $\\oplus$ among $\\mathbb{K}_i,i=1,2,\\ldots,d$ by $\\mathbb{K}_i\\oplus\\mathbb{K}_j:=\\{f=f_i+f_j\\mid f_i\\in\\mathbb{K}_i,f_j\\in\\mathbb{K}_j\\}$, then we can rewritten $\\mathbb{K}$ as\n\\BQNY\n\\mathbb{K}=\\mathbb{K}_1\\oplus\\mathbb{K}_2\\oplus\\ldots\\oplus\\mathbb{K}_d.\n\\EQNY\nLet $W_1$ be a Wiener process, $h:\\R_+\\rightarrow \\R$ be an integrable function, we can extend the integration of $h$ w.r.t $W_1$ on 
$\R_+$ in the following sense:\n\BQN\label{eq-integral}\n\int_{\R_+}h(s)dW_1(s)=L_2-\lim_{n\rightarrow \infty}\int_{[0,n]}h (s)dW_1(s),\n\EQN\nwhenever this limit exists. Furthermore, for any $h\in V_1,$ the derivative $h'\in L_2(\R_+, \lambda_1)$ is non-increasing and non-negative, therefore $\int_{[0,n]}h'^2 (s)ds\leq h'^2 (0)\,n$, which implies that the integrals $\int_{[0,n]}h (s)dW_1(s)$ and $\int_{[0,n]}h' (s)dW_1(s)$ are correctly defined as It\^o integrals. We can then establish the integration-by-parts formula.\n\begin{lem}\label{lemA3}\nLet $h\in V_1,$ and let $W_1$ be a Wiener process. Then for any $T<\infty$ we have the following:\n\BQN\label{eq-bypart}\n\int_{[0,T]}h (s)dW_1(s)=\int_{[0,T]}W_1(s)d(-h (s))+h (T)W_1(T),\n\EQN\nwhere the integral on the right-hand side of \eqref{eq-bypart} is a Riemann-Stieltjes integral.\n\end{lem}\n\begin{proof}\nFrom \cite{karatzas2012brownian}, for any partition $\pi$ of the interval $[0,T],$ the integral $\int_{[0,T]}h (s)dW_1(s)$ coincides with the limit of the integral sums\n\BQNY\n\int_{[0,T]}h (s)dW_1(s)&=&L_2-\lim_{\abs{\pi}\rightarrow 0}\sum_{i=1}^{N}h (s_{i-1})(W_1(s_i)-W_1(s_{i-1}))\\\n&=&L_2-\lim_{\abs{\pi}\rightarrow 0}\sum_{i=1}^{N}W_1(s_i)(h (s_{i-1})-h (s_{i}))+W_1(T)h(T)\\\n&=&\int_{[0,T]}W_1(s)d(-h (s))+W_1(T)h(T).\n\EQNY\n\end{proof}\n\n\n{\bf Acknowledgment}: This work was partly financed by the project NSFC No.71573143 and SNSF Grant 200021-166274.\n\n\bibliographystyle{ieeetr}\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nEntanglement is one of the most interesting features of quantum mechanics that cannot be explained by any local classical theory. This notion plays a key role in quantum information and quantum computation sciences (see e.g. \\cite{Nielsen2010}). As soon as entanglement was recognized as a resource for quantum information processing, it was considered very fragile and easily degradable by environmental noise. Thus the idea of avoiding interactions with the environment as much as possible was dominant. However, the idea that the environment can, after all, play a positive role has recently been put forward \\cite{VWC09}. For instance, when the environment acts globally on a composite system it can supply a kind of interaction that helps in establishing entanglement.\nAmong environmental effects, dissipation plays a prominent role because it allows for the stabilization of targeted resources, like entanglement, a fact that may turn out to be a key advantage over unitary (noiseless) manipulation \\cite{Kraus2008}.\nIt has then been shown, both theoretically \\cite{Plenio2002} and experimentally \\cite{Krauter2011}, that a global dissipative environment can establish stationary entanglement.\nSurprisingly, this happens even without any direct interaction among subsystems \\cite{Ghosh2006}. The simplest model where such an effect occurs is that of two qubits dissipating into a common environment, a possibility that has been proved true also for systems composed of more than two subsystems \\cite{Memarzadeh2013}.\n\nHaving ascertained the benefits of the global environment's action on entanglement,\none is naturally led to ask what would happen if, besides it, there were also local environment actions.\nWould entanglement be generated as well and persist indefinitely?\nIf so, to what amount and for which initial states?\nHere, we shall address these issues by considering\na model of two qubits dissipating into a ``glocal\" environment (at non-zero temperatures). 
\nBy ``glocal\" we mean a mixture of a global environment (with which the two qubits jointly interact) and local environments (with which each qubit separately interacts).\n\nWe shall then determine conditions under which stationary entanglement can be induced. It turns out, on the one hand (and rather counterintuitively), that entanglement in the presence of local environments is achievable when these are at nonzero temperature.\nOn the other hand, while the global environment is vital for the indirect qubit interaction, it should ideally be at zero temperature.\n\nThe results are obtained by first studying the dynamical map (focusing on the stationary regime) in terms of its Kraus operators \cite{Kraus83} and then by characterizing it through an entangling power measure. The latter relies on the statistical average over the initial states, establishing an input-independent dynamics of entanglement \cite{Zanardi2000} (a concept that has already been applied in many quantum systems \cite{Lakshminarayan2001}).\n\nThe paper is organized as follows. In Sec. \ref{sec:model} we introduce our model with two qubits and thermal environments. In Sec. \ref{sec:Kraus} we find the Kraus operators corresponding to the dynamical map, focusing on the stationary regime. Sec. \ref{sec:entpower} deals with the entangling power of the map. Finally, we draw our conclusions in Sec. \ref{sec:conclusion}.\n\n\n\n\n\section{The Model}\label{sec:model}\n\nDissipation of energy into the environment is an important phenomenon in a variety of open quantum systems. Quite generally, the environment can be treated as an uncorrelated thermal equilibrium mixture of states. For two qubits dissipating into their own thermal environments, the description of the dynamics rests on a master equation of Lindblad form \cite{Breuer2002}, with Lindblad operators proportional to $\sigma_1$, $\sigma_2$, $\sigma_1^{\dagger}$ and $\sigma_2^{\dagger}$ respectively, where $\sigma_i:=| g_i\rangle\langle e_i|$, with $|g_i\rangle$, $|e_i\rangle$ the ground and the excited state, respectively, of the $i$th qubit $(i=1,2)$.\nThis dynamics constitutes the local dissipation.\nHere, driven by the fact that the continuous miniaturization of physical devices makes qubits closely spaced, we are also going to consider global dissipation, namely the two qubits dissipating into a common thermal environment (see Fig. \ref{fig:sys}). \nThis amounts to considering additional Lindblad operators proportional \nto $\sigma_1 + \sigma_2$ and $\sigma_1^{\dagger}+\sigma_2^{\dagger}$. 
\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig1.pdf}\n\\fcaption{\\label{fig:sys} Pictorial representation of the system under study.}\n\\end{figure}\nThus the dynamics of the density operator $\\rho$ of the system under study will be governed by the following master equation\n\\begin{equation}\n\\dot{\\rho}(t)=\\gamma \\sum_{k=1}^2 \\left[2L_k\\rho L_k^\\dag\n-L_k^\\dag L_k \\rho -\\rho L_k^\\dag L_k\\right]+(1-\\gamma) \\sum_{k=3}^6 \\left[2L_k\\rho L_k^\\dag\n-L_k^\\dag L_k \\rho -\\rho L_k^\\dag L_k\\right],\n\\label{eq:newsys}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nL_1&=\\sqrt{\\bar{n}_g+1} \\;(\\sigma_1+\\sigma_2),\\\\\nL_2&=\\sqrt{\\bar{n}_g} \\;(\\sigma_1^\\dag+\\sigma_2^\\dag),\\\\\nL_3&=\\sqrt{\\bar{n}_l+1}\\; \\sigma_1,\\\\\nL_4&=\\sqrt{\\bar{n}_l+1}\\;\\sigma_2,\\\\\nL_5&=\\sqrt{\\bar{n}_l}\\;\\sigma_1^\\dag,\\\\\nL_6&=\\sqrt{\\bar{n}_l}\\;\\sigma_2^\\dag,\n\\end{aligned}\n\\label{eq:Lin}\n\\end{equation}\nwith $\\bar{n}_g$ and $\\bar{n}_l$ the number of thermal excitations in the global and local environments. \nIn Eq.\\eqref{eq:newsys} the parameter $\\gamma\\in[0,1]$ describes the interplay between purely local dissipation ($\\gamma=0$) and purely global dissipation ($\\gamma=1$). We have assumed a unit decay rate. \n\nIn case $\\bar{n}_l=\\bar{n}_g=0$, since in\nEq.\\eqref{eq:newsys} there is no Hamiltonian term and the Lindblad operators all commute, we can just model the process as a ``weakly measure and prepare\" channel. The local dissipation just asks each qubit whether they are excited and give them some chance of decaying in the ground state. It can be represented by a Markov link \n$|e_k\\rangle\\to|g_k\\rangle$ $(k=1,2)$. The global dissipation just asks the two qubits \nfor the presence of one or two excitations in the symmetric subspace, which then decay with a fixed rate.\n It leaves $\\frac{1}{\\sqrt{2}}\\left(\\ket{e_1}\\ket{g_2}-\\ket{g_1}\\ket{e_2}\\right )$ fixed and it can be represented by a Markov link \n$\\ket{e_1}\\ket{e_2}\\to\\frac{1}{\\sqrt{2}}\\left(\\ket{e_1}\\ket{g_2}+\\ket{g_1}\\ket{e_2}\\right )\\to\\ket{g_1}\\ket{g_2}$. In case of nonzero $\\bar{n}_l$ and or $\\bar{n}_g$, the dynamics results much more involved. 
\n\nTo study it we formally expand the density operator in the basis $\{\ket{1}:=\ket{e_1}\ket{e_2},\ket{2}:=\ket{e_1}\ket{g_2},\ket{3}:=\ket{g_1}\ket{e_2},\ket{4}:=\ket{g_1}\ket{g_2}\}$ so as to have\n\begin{equation}\n\rho(t)= \sum_{j,k=1}^{4}\rho_{jk}(t)\ket{j}\bra{k},\n\label{eq:timeden}\n\end{equation}\nwhere $\rho_{jk}(t)$ are unknown time-dependent coefficients.\nFor the sake of simplicity we will define\n$\rho_{jk}\equiv \rho_{jk}(0)$.\n\nUpon insertion of \eqref{eq:timeden} into \eqref{eq:newsys} \nthe dynamics will be described by a set of linear differential equations for the unknown coefficients\n$\rho_{jk}(t)$ that can be compactly expressed as\n\begin{equation}\n\dot{\text{\bf{v}}}(t)=M\text{\bf{v}}(t),\n\label{eq:differ}\n\end{equation}\nwhere $\text{\bf{v}}(t)=\left( \rho_{11}(t),\rho_{12}(t),\cdots,\rho_{43}(t),\rho_{44}(t)\right) ^{\text{T}}$ and $M$ is a $16\times 16$ matrix of constant coefficients given by\n\begin{equation}\nM= \begin{pmatrix}\n M_{11} & M_{12} \\\n M_{21} & M_{22}\n \end{pmatrix},\n\end{equation}\nwith\n\begin{equation}\nM_{11}:= \left(\n \begin{matrix}\n -4 & 0 & 0 & 0 & 0 & 2\xi & \eta & 0\\ \n 0 & -3 & \zeta & 0 & 0 & 0 & 0 & \eta \\ \n 0 & \zeta & -3 & 0 & 0 & 0 & 0 & 2\xi \\ \n 0 & 0 & 0 & -2 & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & -3 & 0 & 0 & 0 \\\n 2(1+\xi) & 0 & 0 & 0 & 0 & -2 & \zeta & 0 \\\n \chi& 0 & 0 & 0 & 0 & \zeta & -2 & 0 \\\n 0 & \chi& 2(1+\xi) & 0 & 0 & 0 & 0 & -1 \n \end{matrix}\right)-4\xi\, I_{8\times 8},\n\end{equation}\n\n\begin{equation}\nM_{12}:= \left(\n \begin{matrix}\n 0 & \eta & 2\xi & 0 & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 2\xi & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & \eta & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\n \zeta & 0 & 0 & 0 & 0 & \eta & 2\xi & 0 \\\n 0 & \zeta & 0 & 0 & 0 & 0 & 0 & 2\xi \\\n 0 & 0 & \zeta & 0 & 0 & 0 & 0 & \eta \\\n 0 & 0 & 0 & \zeta & 0 & 0 & 0 & 0 \n \end{matrix}\right),\n\end{equation}\n\n\begin{equation}\nM_{21}:= \left(\n \begin{matrix}\n 0 & 0 & 0 & 0 & \zeta & 0 & 0 & 0 \\\n \chi& 0 & 0 & 0 & 0 & \zeta & 0 & 0 \\\n 2(1+\xi) & 0 & 0 & 0 & 0 & 0 & \zeta & 0 \\\n 0 & 2(1+\xi) & \chi& 0 & 0 & 0 & 0 & \zeta \\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & \chi& 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & 2(1+\xi) & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & 0 & 2(1+\xi) & \chi& 0 \n \end{matrix}\right),\n\end{equation}\n\n\n\begin{equation}\nM_{22}:= \left(\n \begin{matrix}\n -3 & 0 & 0 & 0 & 0 & 2\xi & \eta & 0 \\\n 0 & -2 & \zeta & 0 & 0 & 0 & 0 & \eta \\\n 0 & \zeta & -2 & 0 & 0 & 0 & 0 & 2\xi \\\n 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 & -2 & 0 & 0 & 0 \\\n 2(1+\xi) & 0 & 0 & 0 & 0 & -1 & \zeta & 0 \\\n \chi& 0 & 0 & 0 & 0 & \zeta & -1 & 0 \\\n 0 & \chi& 2(1+\xi) & 0 & 0 & 0 & 0 & 0\n \end{matrix}\right)-4\xi\, I_{8\times 8},\n\end{equation}\nand \n\begin{equation}\n\begin{aligned}\n\xi&:=\gamma\bar{n}_g+(1-\gamma)\bar{n}_l,\\\ \n\eta&:=2\gamma\bar{n}_g,\\\n\chi&:=2\gamma(1+\bar{n}_g),\\\n\zeta&:=-\gamma(1+2\bar{n}_g).\n\end{aligned}\n\end{equation}\n\n\n\n\section{Steady states and dynamical map} \label{sec:Kraus}\n\nWe are interested in the stationary solutions of Eq.\eqref{eq:differ}, i.e. 
in $\\text{\\bf{v}}(t=\\infty)$.\nWe may notice that for $\\gamma<1$ or $\\bar{n}_g>0$ the steady state can be simply found by solving\n$M\\text{\\bf{v}}(t=\\infty)=0$ as $\\ker M$ results one dimensional.\nIn contrast for $\\gamma=1$ and $\\bar{n}_g=0$, $\\ker M$ results of dimension greater than one, meaning that the steady state is not unique and will depend on the initial state.\nHence it must be derived by first solving Eq.\\eqref{eq:differ} and then taking $\\lim_{t\\to\\infty}\\text{\\bf{v}}(t)$\n(see Appendix \\ref{App:differ}).\nThis different behavior should be ascribed to the fact that in Eq.\\eqref{eq:newsys} when $\\gamma=1$ and $\\bar{n}_g=0$ there exist non-trivial operators (i.e. not multiple of the identity) commuting with the Lindblad operators \\cite{Spohn}.\n\nTaking into account both cases, the stationary density operator can be expressed (in the basis $\\{|1\\rangle,\n|2\\rangle, |3\\rangle, |4\\rangle\\}$) as:\n\\begin{equation}\n\\rho(\\infty)=\\begin{pmatrix}\n B_1 & 0 & 0 & 0 \\\\\n 0 & B_2+R_1 & D-R_1 & R_2 \\\\\n 0 & D-R_1 & B_3+R_1 & -R_2 \\\\\n 0 & R_2^{*} & -R_2^{*} & B_4+R_3\n \\end{pmatrix},\n \\label{eq:stationarystate}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nB_1&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big[ 2 \\gamma ^2 \\bar{n}_g^2-(\\gamma -1) \\bar{n}_l^2 (6 \\gamma \n \\bar{n}_g+1)+2 \\gamma \\bar{n}_g \\bar{n}_l \\Big(\\gamma (2 \\bar{n}_g-1)+1\\Big)+2\n (\\gamma -1)^2 \\bar{n}_l^3\\Big], \\\\\nB_2&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big(2 \\gamma \\bar{n}_g-2 (\\gamma -1) \\bar{n}_l+1\\Big) \\Big(\\gamma \n \\bar{n}_g+\\bar{n}_l (2 \\gamma \\bar{n}_g-\\gamma \\bar{n}_l+\\bar{n}_l+1)\\Big), \\\\ \nB_3&:=B_2, \\\\ \nB_4&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big[ 2 \\gamma ^2 (\\bar{n}_g-\\bar{n}_l) \\big(2 \\bar{n}_g\n \\bar{n}_l+\\bar{n}_g-\\bar{n}_l^2\\big)+(2 \\bar{n}_l+1)\n (\\bar{n}_l+1)^2 \\\\\n &\\hspace{1cm}+\\gamma (\\bar{n}_l+1) \\Big(\\bar{n}_g (6\n \\bar{n}_l+4)-\\bar{n}_l (4 \\bar{n}_l+1)+1\\Big)\\Big], \\\\ \nD&:=\\frac{\\gamma}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n(\\bar{n}_g-\\bar{n}_l), \\\\\nH&:=8\\gamma ^2 (\\bar{n}_g-\\bar{n}_l) \\big(2 \\bar{n}_g\n \\bar{n}_l+\\bar{n}_g-\\bar{n}_l^2\\big)+\\gamma (2 \\bar{n}_l+1)^2 (6\n \\bar{n}_g-4 \\bar{n}_l+1)+(2 \\bar{n}_l+1)^3, \\\\\nR_1&:=\\dfrac{1}{4}\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\left(\\rho_{22}-\\rho_{23}-\\rho_{32}+\\rho_{33}\\right), \\\\\nR_2&:=\\dfrac{1}{2}\\delta_{\\gamma,1} \\delta_{\\bar{n}_g,0} \\left(\\rho_{24}-\\rho_{34}\\right),\\\\\nR_3&:=\\dfrac{1}{2}\\delta_{\\gamma,1} \\delta_{\\bar{n}_g,0} \\left(\\rho_{11}+\\rho_{44}\n+\\rho_{23}+\\rho_{32}+1\\right).\n\\end{aligned}\n\\label{eq:Coes}\n\\end{equation}\nHere it is\n\\begin{equation*}\n\\delta_{\\gamma,1}:=\\left\\{\n\\begin{array}{ccc}\n0 & & 0\\le\\gamma<1 \\\\\n1 & & \\gamma=1\n\\end{array}\\right., \n\\end{equation*}\nand \n\\begin{equation*}\n\\delta_{\\bar{n}_g,0}:=\\left\\{\n\\begin{array}{ccc}\n0 & & \\bar{n}_g >0 \\\\\n1 & & \\bar{n}_g=0\n\\end{array}\\right..\n\\end{equation*}\nNotice that the dependance from the initial state is shown by terms $R_1$, $R_2$, and $R_3$.\n\nWe can consider the evolution $\\rho(0)\\to \\rho(\\infty)$ \nas resulting from a (dissipative) map, namely\n\\begin{equation}\n\\rho(\\infty)={\\cal D}(\\rho(0)).\n\\label{eq:dynmap}\n\\end{equation}\nIn order to find its Kraus decomposition \\cite{Kraus83} we need to treat 
the case\n$\\gamma<1$ or $\\bar{n}_g>0$ separately from $\\gamma=1$ and $\\bar{n}_g=0$.\n\nIn the former the map ${\\cal D}$ has a fixed point\n\\begin{equation}\n\\rho_{_{fixed}}(\\infty)=\\begin{pmatrix}\n B_1 & 0 & 0 & 0 \\\\\n 0 & B_2 & D & 0 \\\\\n 0 & D & B_3 & 0 \\\\\n 0 & 0 & 0 & B_4\n \\end{pmatrix}, \\label{eq:staf} \n\\end{equation}\nhence the corresponding Kraus operators can be constructed as \n\\begin{equation}\nK^\\prime_{jl}=\\sqrt{\\upsilon_l} |\\psi_j\\rangle\\langle l|, \\qquad j,l=1 \\cdots 4,\n\\label{eq:krausstaind}\n\\end{equation}\nby means of the spectral decomposition $\\rho_{_{fixed}}(\\infty)=\\sum_{j=1}^4\\upsilon_j |\\psi_j\\rangle\\langle\\psi_j|$,\nwhere\n\\begin{equation}\n\\begin{aligned}\n\\ket{\\psi_1}&=\\begin{pmatrix}\n 1 \\\\\n 0 \\\\\n 0 \\\\\n 0 \n \\end{pmatrix}, \\quad \n \\ket{\\psi_2}=\\begin{pmatrix}\n 0 \\\\\n 0 \\\\\n 0 \\\\\n 1 \n \\end{pmatrix}, \\quad\n \\ket{\\psi_3}=\\begin{pmatrix}\n 0 \\\\\n -1 \\\\\n 1 \\\\\n 0 \n \\end{pmatrix}, \\quad\n \\ket{\\psi_4}=\\begin{pmatrix}\n 0 \\\\\n 1 \\\\\n 1 \\\\\n 0 \n \\end{pmatrix}, \n\\end{aligned}\n\\label{eq:eigendecomp}\n\\end{equation}\nand\n\\begin{equation}\n\\begin{aligned}\n\\upsilon_1&=B_1, \\\\ \n\\upsilon_2&=B_4, \\\\ \n\\upsilon_3&=B_2-D, \\\\ \n\\upsilon_4&=B_2+D. \n\\end{aligned}\n\\label{eq:eigenvalues}\n\\end{equation}\n\nIn contrast, for the case $\\gamma=1$ and $\\bar{n}_g=0$, the stationary state \n\\begin{equation}\n\\label{eq:dressed}\n\\rho_{_{ini}}(\\infty):=\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & R_1 & -R_1 & R_2 \\\\\n 0 & -R_1 & R_1 & -R_2 \\\\\n 0 & R_2^{*} & -R_2^{*} & R_3\n \\end{pmatrix}\n\\end{equation}\ndepends on the initial state, hence in order to find the Kraus operators of the corresponding map\nwe need to first obtain them for the map\n\\begin{equation}\n\\rho(t)={\\cal D}_t(\\rho(0)),\n\\label{eq:dynmapt}\n\\end{equation}\nwhere the subscript $t$ emphasizes the parametrically dependence on time. \nThen take the limit $t\\to\\infty$. 
Implementing this procedure in Appendix \ref{App:Appendixcoeff}, \nwe obtain\n\begin{equation}\n\begin{aligned}\nK_1^{\prime\prime}&=\dfrac{1}{2}\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\n 0 & 1 & -1 & 0 \\\n 0 & -1 & 1 & 0 \\\n 0 & 0 & 0 & 2\n \end{pmatrix}, \ \ \\nK_2^{\prime\prime}=\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 1 & 0 & 0 & 0\n \end{pmatrix}, \\\nK_3^{\prime\prime}&=\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0\n \end{pmatrix}, \ \ \\nK_4^{\prime\prime}=\dfrac{1}{\sqrt{2}}\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 0 & 0 & 0 & 0 \\\n 0 & 1 & 1 & 0\n \end{pmatrix}.\n\end{aligned}\n\label{eq:krausinf}\n\end{equation}\n\nTaking into account both cases \eqref{eq:krausstaind} and \eqref{eq:krausinf}, the stationary state \eqref{eq:stationarystate} can be written as\n\begin{equation}\n\rho(\infty)=\sum_{j,l=1}^{4}K_{jl}\rho(0) K_{jl}^{\dagger},\n\end{equation} \nwhere\n\begin{equation}\nK_{jl}=\left( 1-\delta_{\gamma,1}\delta_{\bar{n}_g,0}\right)K^\prime_{jl}\n+\delta_{\gamma,1}\delta_{\bar{n}_g,0} \delta_{j,l}K^{\prime\prime}_j.\n\end{equation} \n\n\n\section{Entangling power}\label{sec:entpower}\n\n\nIn what follows, to quantify the amount of entanglement we use the concurrence \cite{Wootters1998}, which is defined as \n\begin{equation}\nE(\rho):=\max\left\lbrace 0, \sqrt{\ell_1}- \sqrt{\ell_2}- \sqrt{\ell_3}- \sqrt{\ell_4}\right\rbrace,\n\label{eq:con}\n\end{equation}\nwhere $\ell_j$, $j=1,2,3,4$, are the eigenvalues (in decreasing order) of \n$\rho\left(\sigma_1^y\otimes\sigma_2^y\rho^{*}\sigma_1^y\otimes\sigma_2^y\right)$ with $\rho^*$ the complex conjugate of $\rho$ and $\sigma_k^y:=i(\sigma_k-\sigma_k^\dag)$. \n\nAssume the initial state of the system to be pure and factorable. Its general parametrization is\n\begin{equation}\n\begin{aligned}\n|\psi(0)\rangle&=\left( \cos(\theta_1\/2) \ket{e_1}+\sin(\theta_1\/2)e^{i\varphi_1}\ket{g_1}\right) \\\n&\otimes\left( \cos(\theta_2\/2) \ket{e_2}+\sin(\theta_2\/2)e^{i\varphi_2}\ket{g_2}\right),\n\end{aligned}\n\label{eq:inifact}\n\end{equation}\nwith $\theta_k\in\left[ 0,\pi\right] $ and $\varphi_k\in\left[ 0,2\pi\right] $ for $k=1,2$.\nThen, the concurrence \eqref{eq:con} for the stationary state \eqref{eq:stationarystate} becomes\n\begin{equation}\label{steadyconc}\n\begin{aligned}\nE&=2\left(\left|D\right|-\sqrt{B_1 B_4}+|R_1|\right)\\\n&=2\left|D\right|-2\sqrt{B_1 B_4}\n+\delta_{\gamma,1}\delta_{\bar{n}_g,0}\dfrac{1}{4}\n\left| 1-\cos\theta_1\cos\theta_2-\cos(\varphi_1-\varphi_2)\sin\theta_1\sin\theta_2\right|.\n\end{aligned}\n\end{equation}\nIt is worth noticing that when $\gamma=1$ and $\bar{n}_g=0$ the third term contributes, while the second does not because $B_1=0$.\n\nQuite generally the stationary entanglement \eqref{steadyconc} depends on the initial state. However, we can say that a map is a good entangler when the average of the final entanglement over all possible initial states is positive. 
Following \cite{Zanardi2000}, we define the entangling power of ${\cal D}$ as\n\begin{equation}\n{\mathfrak E}({\cal D}):=\int E\left( {\cal D} ( |\psi(0)\rangle\langle \psi(0)|)\right) \, d\mu( |\psi(0)\rangle),\n\label{eq:enpower}\n\end{equation}\nwhere $ d\mu( |\psi(0)\rangle)$ is the probability measure over the submanifold of factorable states in $\mathbb{C}^2\otimes \mathbb{C}^2$. The latter is induced by the Haar measure of ${\rm SU}(2) \otimes {\rm SU}(2)$. Specifically, referring to the parametrization of \eqref{eq:inifact}, it reads\n\begin{equation}\nd\mu( |\psi(0)\rangle)=\frac{1}{16\pi^2}\prod\limits_{k=1}^2 \sin\theta_k\text{d}\theta_k\text{d}\varphi_k.\n\end{equation}\nThis measure is normalized to 1. It is trivial to see that in this case the entangling power $\mathfrak E$ lies within $[0,1]$.\nThus from \eqref{steadyconc} and \eqref{eq:enpower} we get \n\begin{equation}\label{epower}\n\begin{aligned}\n\mathfrak{E}=2\left|D\right|-2\sqrt{B_1 B_4}\n+\delta_{\gamma,1}\delta_{\bar{n}_g,0}\dfrac{1}{4}.\n\end{aligned}\n\end{equation}\nFig.\ref{fig:entpow3D0} shows the entangling power \eqref{epower} as a function of $\gamma$ and \n$\bar{n}_l$ for $\bar{n}_g=0$. There, we can see that for $\gamma<1$ and $\bar{n}_l=0$ we have $\mathfrak{E}=0$. In contrast, for any value of $\gamma>0.7$, there exists a nonzero optimal value of local thermal noise $\bar{n}_l$ maximizing the entangling power; a phenomenon reminiscent of the \emph{stochastic resonance} effect \cite{Gammaitoni1998}. Such an optimal value of noise tends to increase as $\gamma$ approaches $1$ (as one can also argue from Fig.\ref{fig:entpowregions}).\n\nWhen $\gamma$ attains the value 1, the curve of $\mathfrak{E}$ vs $\bar{n}_l$ is shifted upward by an amount $1\/4$, as shown in Fig.\ref{fig:entpowg1}, and the maximum value of $\mathfrak{E}$, namely 7\/12, is asymptotically achieved as $\bar{n}_l\to\infty$.\n\nBy increasing the value of $\bar{n}_g$ from zero, the region $\{\gamma,\bar{n}_l\}$ of positive values of \n$\mathfrak{E}$ shrinks and the maxima also decrease, as can be readily seen in Figs.\ref{fig:entpow3D001} and \ref{fig:entpow3D01}.\nHence we can conclude that global thermal noise is detrimental to stationary entanglement, while a suitable amount of local thermal noise is vital.\n\nThis can be explained by considering the local thermal baths as injecting excitations into the qubits incoherently. Each excitation, thanks to the interaction mimicked by the global environment, is then shared by the two qubits, ending up in an entangled state resembling $\frac{1}{\sqrt{2}}(|e_1g_2\rangle+|g_1e_2\rangle)$. If, however, the local noise is too strong, it blurs this effect. In contrast, when $\bar{n}_g>0$ the global bath tends to inject two excitations coherently into the system, which is then driven into a separable state resembling $|e_1 e_2\rangle$. 
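\nThe stochastic-resonance-like behavior described above can be reproduced numerically. The sketch below (an illustrative addition assuming Python with \texttt{numpy}; the generator is assembled as in the sketch of Section \ref{sec:model}, and none of this is part of the original analysis) computes the unique steady state for $\gamma<1$ from the kernel of $M$ and evaluates its concurrence \eqref{eq:con} while scanning the local thermal noise:\n\begin{verbatim}\nimport numpy as np\n\nsm = np.array([[0, 0], [1, 0]], dtype=complex)\nI2 = np.eye(2, dtype=complex)\ns1, s2 = np.kron(sm, I2), np.kron(I2, sm)\n\ndef generator(gamma, ng, nl):\n    S = s1 + s2\n    ops = [np.sqrt(gamma * (ng + 1)) * S,\n           np.sqrt(gamma * ng) * S.conj().T,\n           np.sqrt((1 - gamma) * (nl + 1)) * s1,\n           np.sqrt((1 - gamma) * (nl + 1)) * s2,\n           np.sqrt((1 - gamma) * nl) * s1.conj().T,\n           np.sqrt((1 - gamma) * nl) * s2.conj().T]\n    M = np.zeros((16, 16), dtype=complex)\n    for L in ops:\n        LdL = L.conj().T @ L\n        M += 2 * np.kron(L.conj(), L)\n        M -= np.kron(np.eye(4), LdL) + np.kron(LdL.T, np.eye(4))\n    return M\n\ndef steady_state(M):\n    w, V = np.linalg.eig(M)\n    v = V[:, np.argmin(np.abs(w))]       # null vector, unique for gamma < 1\n    rho = v.reshape(4, 4, order='F')     # undo column stacking\n    rho = (rho + rho.conj().T) \/ 2       # remove numerical skew\n    return rho \/ np.trace(rho).real\n\nY = np.array([[0, -1j], [1j, 0]])\nYY = np.kron(Y, Y)\n\ndef concurrence(rho):\n    lam = np.sort(np.linalg.eigvals(rho @ YY @ rho.conj() @ YY).real)[::-1]\n    lam = np.sqrt(np.clip(lam, 0.0, None))\n    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])\n\ngamma, ng = 0.9, 0.0\nfor nl in [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]:\n    E = concurrence(steady_state(generator(gamma, ng, nl)))\n    print(nl, round(E, 4))   # E = 0 at nl = 0, then rises and falls again\n\end{verbatim}\nFor $\gamma<1$ the steady state does not depend on the input, so this single concurrence value coincides with the entangling power \eqref{epower} whenever the latter is non-negative.\n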
\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig2.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ for $\\bar{n}_g=0$.\n The value of $\\mathfrak{E}$ for $\\gamma$ exactly equal to 1 is not reported here.}\n \\label{fig:entpow3D0}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig3.pdf}\n \\fcaption{Regions of the parameter space $\\{\\gamma,\\bar{n}_l\\}$ where the entangling power $\\mathfrak{E}$ is greater than zero (white) and zero (grey) for $\\bar{n}_g=0$. Along the dashed line $\\mathfrak E$ takes its maximum value. }\n \\label{fig:entpowregions}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig4.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\bar{n}_l$ \n for $\\bar{n}_g=0$ and $\\gamma=1$. The bottom curve resulting from the contribution of the first two terms in Eq.\\eqref{epower} is shifted upward due to the contribution of the third term in Eq.\\eqref{epower}.}\n \\label{fig:entpowg1}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig5.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ \n for $\\bar{n}_g=0.01$.}\n \\label{fig:entpow3D001}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig6.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ \n for $\\bar{n}_g=0.1$.}\n \\label{fig:entpow3D01}\n\\end{figure}\n\n\n\\section{Conclusions} \\label{sec:conclusion}\n\nIn this paper, we have considered a model of two qubits dissipating by a ``glocal\" map, i.e. into local and global environments (generally at finite temperatures). By the parameter $\\gamma$, this can interpolate between perfect local and perfect global regimes. \n\nWe have then determined conditions for the presence of long living entanglement.\nThis has been done by considering the entangling capabilities of the ``glocal\" dissipative map ${\\cal D}$ through the entangling power introduced in \\eqref{eq:enpower}. \n\nIt has been shown that the number of thermal excitations in the local environments has a crucial role on the stationary entanglement of the two qubits. \n It results on the one hand (and rather counterintuitively) that entanglement in presence of local environments is achievable when these are at nonzero temperature.\nThis represents a remarkable extension of the \\emph{stochastic resonance} effect \narising in spin chains from the interplay of local dissipative and dephasing noise sources in the presence of Hamiltonian couplings \\cite{Rivas2009}.\nHere it appears in a context lacking of Hamiltonian couplings and driven by the interplay of the same kind (dissipative) of noise sources.\nOn the other hand, while global environment is vital for indirect qubits interaction, it should be ideally at zero temperature. In fact thermal noise from global environment spoils entanglement.\n \nConcerning $\\mathfrak{E}({\\cal D})$ it is also worth noticing its sudden enhancement for zero temperature global dissipation (Fig. \\ref{fig:entpowg1}). 
This can be regarded as a signature of a kind of phase transition occurring at $\gamma=1$.\n\nThe map ${\cal D}$, thanks to the properties discussed in Section \ref{sec:Kraus}, can also be considered\nas a quantum channel and characterized in terms of its information transmission capabilities \cite{RMP14}. \nFor instance, when $\gamma\neq 1$ the output space is of dimension 1, hence its capacity\nvanishes. In contrast, when $\gamma = 1$ and $\bar{n}_g = 0$ the output space is of dimension 2 \n(spanned by $\frac{1}{\sqrt{2}}\left(\ket{e_1}\ket{g_2}-\ket{g_1}\ket{e_2}\right)$ and $\ket{g_1}\ket{g_2}$) \nand the capacity could be up to 1 bit or 1 qubit\n(depending on whether classical or quantum information is considered to be transmitted). \nGiven this rather different behavior across the parameter region, it could also be considered a paradigmatic model for channel discrimination \cite{Pirs11}.\nThese investigations are left for future work.\n\nThe present study can be of interest for experimental situations where\nthe interplay of local and global environments is relevant.\nAs an example we may mention cavity QED experiments in which atomic qubits are confined inside a high-finesse optical cavity \cite{Guthahrlein2001} and experience local spontaneous emission as well as\na global effect from the vacuum bath lying outside the cavity.\nAnother example is provided by charge qubits based on double quantum dots (or, analogously, Cooper pair boxes) \cite{CPA08} with local electron environments and a global environment arising from voltage fluctuations.\n\n\nonumsection{Acknowledgements}\n\noindent\nA.N. would like to thank the University of Camerino\nfor the kind hospitality and the Ministry of Science, Research\nand Technology of Iran for financial support.\n\n\n\nonumsection{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}